title | upvoted_answer
---|---|
Evolution of Cholesky factor given evolution of symmetric positive definite matrix | $$\frac{dA}{dt} = \frac{dLL^\top}{dt} = L\frac{dL^\top}{dt} + \frac{dL}{dt}L^\top.$$ |
Convolution of i.i.d. with uniform distribution | $\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
\begin{align}
&\bbox[10px,#ffd]{\ds{\int_{0}^{1}\int_{0}^{1}\cdots\int_{0}^{1}
\bracks{x_{1} + x_{2} + \cdots + x_{n} < x}\dd x_{1}
\,\dd x_{2}\ldots\dd x_{n}}}
\\[5mm] = &\
\int_{0}^{1}\int_{0}^{1}\cdots\int_{0}^{1}\
\underbrace{\int_{c - \infty\ic}^{c + \infty\ic}
{\expo{\pars{x - x_{1} - x_{2} - \cdots - x_{n}}s} \over s}\,{\dd s \over 2\pi\ic}}_{\ds{\bracks{x - x_{1} - x_{2} - \cdots - x_{n} > 0}}}\
\dd x_{1}\,\dd x_{2}\ldots\dd x_{n}
\end{align}
where $\ds{c >0}$.
Then,
\begin{align}
&\bbox[10px,#ffd]{\ds{\int_{0}^{1}\int_{0}^{1}\cdots\int_{0}^{1}
\bracks{x_{1} + x_{2} + \cdots + x_{n} < x}\dd x_{1}
\,\dd x_{2}\ldots\dd x_{n}}}
\\[5mm] = &\
\int_{c - \infty\ic}^{c + \infty\ic}{\expo{xs} \over s}
\pars{\int_{0}^{1}\expo{-s\xi}\dd\xi}^{n}{\dd s \over 2\pi\ic} =
\int_{c - \infty\ic}^{c + \infty\ic}{\expo{xs} \over s}
\pars{\expo{-s} - 1 \over -s}^{n}{\dd s \over 2\pi\ic}
\\[5mm] = &\
\int_{c - \infty\ic}^{c + \infty\ic}
{\expo{xs} \over s^{n + 1}}\pars{1 - \expo{-s}}^{n}{\dd s \over 2\pi\ic} =
\int_{c - \infty\ic}^{c + \infty\ic}
{\expo{xs} \over s^{n + 1}}\sum_{k = 0}^{n}{n \choose k}
\pars{-\expo{-s}}^{k}{\dd s \over 2\pi\ic}\label{1}\tag{1}
\\[5mm] = &\
\sum_{k = 0}^{n}{n \choose k}\pars{-1}^{k}\
\underbrace{\int_{c - \infty\ic}^{c + \infty\ic}
{\expo{\pars{x - k}s} \over s^{n + 1}}{\dd s \over 2\pi\ic}}
_{\ds{\bracks{x - k > 0}\,{\pars{x - k}^{n} \over n!}}}\ =\
\left.{1 \over n!}\sum_{k = 0}^{n}\pars{-1}^{k}{n \choose k}
\pars{x - k}^{n}\,\right\vert_{\ k\ <\ x}
\\[5mm] = &\
\bbx{{1 \over n!}\sum_{k = 0}^{N}\pars{-1}^{k}{n \choose k}
\pars{x - k}^{n}\quad\mbox{where}\quad
N \equiv \min\braces{n,\left\lfloor x\right\rfloor}}
\end{align} |
Counting Substrings: In a given text, find the number of substrings that start with an $A$ and end with a $B$. | Scan the string from left to right. Every time you encounter an $A$, increment a counter. Every time you encounter a $B$, the value of the counter tells you how many substrings starting with an $A$ end at that particular $B$, so add the value to a running total. By the end of the string, the running total gives you the answer. |
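For illustration, a minimal Python sketch of this single-pass count (the function name and sample string are mine):

    def count_ab_substrings(text: str) -> int:
        a_seen = 0   # number of A's encountered so far
        total = 0    # running total of substrings found
        for ch in text:
            if ch == 'A':
                a_seen += 1
            elif ch == 'B':
                total += a_seen  # each earlier A starts a substring ending at this B
        return total

    print(count_ab_substrings("AABAB"))  # 5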
Determine $[F(\alpha) : F(\alpha^3)]$ | Hint: $\alpha$ is a root of $X^3-\alpha^3 \in F(\alpha^3)[X]$, therefore its minimal polynomial over $F(\alpha^3)$ is a divisor of this polynomial. |
Can function be derived from this relation? | Let us call $f(x)$ for the moment $y$. Then rewriting what you have, we get
$$x \cdot \frac{dy}{dx} - y = 0.$$
This is a first-order ordinary differential equation (ordinary because no partial derivatives are involved, and first order because the highest order of derivative involved is one).
It is linear because $y$ and $\frac{dy}{dx}$ appear only to the first power, with coefficients depending only on $x$.
If you know the method of separation of variables, you can solve your differential equation like this. Write it as
$$x \cdot \frac{dy}{dx} = y.$$
Then by separation of variables, we have that
$$\frac{1}{y} dy = \frac{1}{x} dx,$$
hence integrating both sides we get that
$$\ln|y| = \ln|x| + C,$$ where $C$ is some constant. Taking the exponential of both sides gives
$$|y| = e^C|x|.$$
Hence the general class of functions that satisfy your relation in the question is the family of straight lines through the origin, $y = kx$ with $k = \pm e^C \neq 0$ (and the constant solution $y = 0$ works as well).
This post may be of interest to you; the answers given there explain why the method of separation of variables is justified. |
Every homomorphic image of abelian group is abelian but converse need not be true. | I would have written $f(x)=xA_3$, as that's what the elements of $S_3/A_3$ look like. But otherwise, yeah, that's a good example. This $f$ (and all other homomorphisms constructed from a group to a quotient of that group in this way) is called the canonical quotient homomorphism. Because it's the most obvious way to map a group into a quotient.
For an even simpler example, take any non-abelian group and map it to the trivial group (or to the identity element of your favourite group). |
How do I solve the following PDE? | First write $Y(x,t) = X(x)T(t)$, then substitute into the PDE to get
$T''(t)X(x) = 4X''(x)T(t)$
Divide both sides by $X(x)T(t)$ to get
$\frac{T''(t)}{T(t)} = 4\frac{X''(x)}{X(x)}$.
The two sides are functions of different independent variables, so each must equal the same constant; call it $-\lambda^2$ (the negative sign and the square are solely for convenience). Then you have to solve
$\frac{T''(t)}{T(t)} = -\lambda^2 = 4\frac{X''(x)}{X(x)}$.
The obvious solutions are sines and cosines (or complex exponentials if you prefer). To determine the amplitudes of the sines and cosines, use the initial conditions. I'll sketch part of the solution.
Separating, we have $X''(x) + \frac{\lambda^2}{4} X(x) = 0$. The solutions to this are given by $X(x) = A\cos(\lambda x/2) + B\sin(\lambda x/2)$. To determine the values of $A$ and $\lambda$, we turn to the boundary conditions. The boundary conditions tell us that $X(0) = 0 = A$ and $X(1) = 0 = B\sin(\lambda/2)$.
The right boundary condition gives $\lambda = 2m\pi$ for integer $m$, so each mode is $\sin(m\pi x)$, and the overall expression for $X$ will be given as a sum of sines (which modes actually appear, here the odd ones $m = 2k+1$, is determined by the initial conditions). So we have
$X(x) = \sum_{k=0}^{\infty} b_k\sin((2k+1)\pi x)$.
Continue this procedure for T and then combine the results to get the overall solution. |
functional equation $f(f(x))=af(x)+bx$ | Assume $f(x_1)=f(x_2)$. Then
$$ bx_1=f(f(x_1))-af(x_1)=f(f(x_2))-af(x_2)=bx_2$$
and hence (as $b\ne 0$) $x_1=x_2$. We conclude that $f$ is injective. As $f$ is also continuous, $f$ is strictly monotone. Therefore $f\circ f$ is strictly increasing.
Assume $L:=\lim_{x\to +\infty}f(x)$ exists. Then $f(L)=\lim_{x\to+\infty}(af(x)+bx)=\infty$, contradiction. Similarly, $\lim_{x\to -\infty}f(x)$ cannot exist.
We conclude that $f$ is a continuous bijection $\Bbb R\to\Bbb R$.
Pick $x_0\in \Bbb R$ and define recursively $x_{n+1}=f(x_n)$. Then $x_{n+2}=ax_{n+1}+bx_n$ for all $n$. It is well-known that the solutions of this recursion are of the form
$$x_n=\alpha \lambda_1^n+\beta\lambda_2^n, $$
where $\lambda_{1,2}=\frac{a\pm\sqrt{a^2+4b}}{2}$ are the solutions of $X^2-aX-b=0$ and $\alpha,\beta$ are found by solving $x_0=\alpha+\beta$, $x_1=\alpha\lambda_1+\beta\lambda_2$. From the given restrictions for $a,b$, we find $-\lambda_1<\lambda_2<0<\lambda_1<1$. This implies that $x_n\to 0$, independent of $x_0\in\Bbb R$.
By continuity, $f(0)=0$.
Assume $x_n>0$ and $x_{n+2}\ge x_n$. As $f\circ f$ is strictly increasing, this implies that the subsequence $x_{n+2k}$ is non-decreasing, contradicting convergence to $0$. Therefore $x_n>0$ implies $x_{n+2}<x_n$.
Similarly, $x_n<0$ implies $x_{n+2}>x_n$.
As $f$ is bijective, we can extend the sequence $x_n$ to $n\in\Bbb Z$.
If $\alpha\ne 0$, the $\lambda_1^n$ term is dominant for $n\gg 0$. In particular, all $x_n$ with $n\gg 0$ have the same sign. It follows that the sequences $x_{2n}$ and $x_{2n+1}$ are both decreasing or both increasing.
If $\beta\ne 0$, the $\lambda_2^n$ term is dominant for $n\ll 0$. In particular, $x_n$ alternates in sign for $n\ll 0$. It follows that the sequences $x_{2n}$ and $x_{2n+1}$ have opposite monotonicity.
Both these observations together imply that $\alpha$ and $\beta$ cannot both be non-zero.
Therefore, for each $x$, either $f(x)=\lambda_1x$ or $f(x)=\lambda_2 x$. By continuity of $f$, a switch between these two alternatives could occur at most at $x=0$, but then $f$ would not be monotone. We conclude that no switch occurs, so that the only possible solutions are
$$ f(x)=\lambda_1x\text{ for all }x\in\Bbb R$$
and
$$ f(x)=\lambda_2x\text{ for all }x\in\Bbb R.$$ |
How to estimate the Holder exponent of approximation functions by the holder continuity of its limit | I think no, we can't prove that. I believe I have a counterexample for $\alpha=1$ (the Lipschitz case), with $f_n$ defined on a compact set. Let $f_n:[0,1] \to \mathbb{R}$ be such that $f_n(x)=\sqrt{x}$ if $x \in [0,\frac{1}{2n}]$, $f_n(x)$ is linear on $(\frac{1}{2n},\frac{1}{n}]$, connecting the points $(1/2n,\sqrt{1/2n})$ and $(1/n,0)$, and $f_n(x)=0$ if $x \in (\frac{1}{n},1]$. Then $f_n$ is continuous for every $n$, and
$$\max_{x \in [0,1]}|f_n(x)-0|=\max_{x \in [0,1]}|f_n(x)|=\sqrt{\frac{1}{2n}} $$,
whence $f_n \rightarrow 0$ uniformly on $[0,1]$, and the zero function is of course Lipschitz. On the other hand, for every $n$, $f_n$ is not Lipschitz, since the derivative of $\sqrt{x}$ is unbounded as $x\to 0^{+}$.
Added later note: $\sqrt{x}$ is not Hölder continuous for any $\alpha \in (1/2,1]$, while it is for $\alpha \in (0,1/2]$, so the same idea provides counterexamples for all $\alpha \in (1/2,1]$. |
Why does $S + o(G-S)$ have the same parity as $n(G)$? | If $n$ is the number of vertices, $n\bmod 2$ is the number of vertices reduced modulo $2$; it’s $1$ if $n$ is odd and $0$ if $n$ is even. Counting modulo $2$ is keeping track only of this reduced number, which amounts simply to keeping track of whether your total is odd or even.
Suppose that $G-S$ has $k$ components with an odd number of vertices (‘odd components’) and $m$ with an even number of vertices (‘even components’). Let $K$ be the total number of vertices in the odd components and $M$ the total number of vertices in the even components. $M$ is a sum of even numbers, so it’s even; modulo $2$ it’s $0$. $K$ is the sum of $k$ odd numbers, so it’s odd iff $k$ is odd. That is, $K$ and $k$ have the same parity: both are odd, or both are even. Mod $2$ both are $1$, or both are $0$, so mod $2$ we have $K=k$; in more careful notation that’s $K\equiv k\pmod 2$.
Clearly $$n(G)=|S|+K+M\;.$$ Expand that mod $2$:
$$\begin{align*}
n(G)&\equiv|S|+K+M\\
&\equiv|S|+k+0\\
&\equiv|S|+k\pmod2\;,
\end{align*}$$
which is exactly what you wanted to prove, since $k=o(G-S)$. |
How to compute the determinant of this block matrix? | Performing Gaussian elimination,
$$\begin{bmatrix} \mathrm I & \mathrm O\\ \mathrm A^\top \mathrm C^{-1} & \mathrm I \end{bmatrix} \begin{bmatrix} - \mathrm C & -\mathrm A \\ \mathrm A^\top & \mathrm O \end{bmatrix} = \begin{bmatrix} - \mathrm C & -\mathrm A \\ \mathrm O & -\mathrm A^\top \mathrm C^{-1} \mathrm A \end{bmatrix}$$
Note that the determinant of a block triangular matrix is the product of the determinants of the diagonal blocks. |
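Since the left factor is unit triangular, this gives $\det M=\det(-\mathrm C)\,\det(-\mathrm A^\top\mathrm C^{-1}\mathrm A)$. A quick NumPy check of that identity on a random instance (the dimensions and seed are arbitrary choices of mine):

    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 4, 3
    A = rng.standard_normal((n, m))
    C = rng.standard_normal((n, n))
    C = C @ C.T + n * np.eye(n)   # symmetric positive definite, hence invertible

    M = np.block([[-C, -A], [A.T, np.zeros((m, m))]])
    lhs = np.linalg.det(M)
    rhs = np.linalg.det(-C) * np.linalg.det(-A.T @ np.linalg.inv(C) @ A)
    print(np.isclose(lhs, rhs))  # True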
Metric space limit points | It's because $x$ is a limit point. If $d(x,x_1)=r_1$, then you can pick $x_2$ with $d(x,x_2)<r_1$. Since $x$ is a limit point, such an $x_2$ exists. Can you see how to obtain infinitely many points close to $x$ with this process? |
Is $L^1_{loc}(\mathbb{R})$ complete with the norm $|f|=\sup_{x\in \mathbb{R}}\int_x^{x+1}|f(y)|dy$ | Yes, it's complete. First, observe that your norm is comparable to the simpler norm
$$
\|f\| = \sup_{n\in\mathbb{Z}} \int_n^{n+1}|f(y)|\,dy \tag{1}
$$
Indeed, $\|f\|\le |f|\le 2\|f\|$ because every interval of length $1$ is contained in some interval of the form $[n,n+2]$.
The space with $(1)$ is just the direct sum $\bigoplus_\infty X_n$ of Banach spaces $X_n=L^1([n,n+1])$. The direct sum preserves completeness, which is standard and proved here. |
Population differential equation | I'm assuming that "harvested" in this context means killed. So I would be tempted to use $\frac{dN}{dt}=2N-0.25N^2-H$. As for how to solve this, the ODE is separable. You should be able to solve it directly by integration. |
Given an infinite time, can a number be chosen randomly from a set of infinite real numbers? | if there is a probability something will happen, given an infinite time it will happen
That's too informal. "Given an infinite time" does not make sense.
What you can say is: given an experiment with probability of success $p>0$, if you repeat $n$ (independent) trials of it and call $P_n$ the probability of having at least one success, then $P_n \to 1$ as $n \to \infty$. You must be careful, still. First, this doesn't say that we have $P_n=1$ after some $n>N$; and you must also remember that $p_E=0$ is not equivalent to "event $E$ is impossible", just as $p_E=1$ is not equivalent to "event $E$ must happen".
Anyway, this property is useless for your situation, and your robot won't work (even if it's trying to guess a real number drawn uniformly in the interval $[0,1]$), because the individual probability of success is zero. |
Is there some relationship for all points on a rectangle? | If the rectangle $R$ (or convex quadrilateral) has corners $A,B,C,D$,
then any point $p$ inside $R$ satisfies
$$
p = \alpha A + \beta B + \gamma C + \delta D
$$
where $\alpha , \beta , \gamma , \delta \in [0,1]$ and
$$
\alpha + \beta + \gamma + \delta = 1
$$
See Wikipedia's article on convex combination. |
How to prove that homology is a functor? | By definition a homology operator is a functor. This only refers to the very basics of definitions in category theory. All it says is that if $id$ is an identity morphism in $cKom$ (whatever $cKom$ may be in your case), then $H_k(id)$ is an identity morphism in $Ab$, and that if $f:X\to Y $ and $g:Y\to Z$ are morphisms in $cKom$, then $H_k(g\circ f)=H_k(g)\circ H_k(f)$. You prove these identities based on the particular homology construction you have at hand. |
Minimal subset of $x_1, x_2, ...., x_{100}$ that XORs to $y$ | If you introduce binary variables $c_i$ that are $1$ when $x_i\in Z$ and $0$ otherwise, you can write
$$y=\bigoplus_ic_ix_i\;.$$
This is a $64\times100$ system of linear equations over the field $\mathbb F_2$ for the $c_i$ that you can easily solve using Gaussian elimination; the number of steps is of the order of a million. If the $x_i$ have full rank $64$, which is almost certain to be the case if they're randomly chosen with uniform distribution, the solution space will have $36$ dimensions, so you just need to enumerate $2^{36}$ different solutions to find the optimal one; this is doable in reasonable time on a present-day computer. |
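A minimal Python sketch of the elimination step (finding *one* subset whose XOR is $y$; the names are mine, and the enumeration of the nullspace for the minimum-size solution described above is not included):

    def xor_subset(xs, y, bits=64):
        basis = {}  # pivot bit -> (reduced word, set of original indices that built it)
        for i, x in enumerate(xs):
            cur, used = x, {i}
            for b in reversed(range(bits)):
                if not (cur >> b) & 1:
                    continue
                if b in basis:
                    v, u = basis[b]
                    cur, used = cur ^ v, used ^ u
                else:
                    basis[b] = (cur, used)
                    break
        sol = set()
        for b in reversed(range(bits)):
            if (y >> b) & 1:
                if b not in basis:
                    return None  # y is not in the span of the x_i
                v, u = basis[b]
                y, sol = y ^ v, sol ^ u
        return sol  # indices i with c_i = 1

    print(xor_subset([0b110, 0b011, 0b101], 0b101, bits=3))  # {0, 1}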
Changing Limits on an Integral | Let $S$ be the square $\{(t',t''):0\leq t'\leq t,\ 0\leq t''\leq t\}$. The second integral is $$\int_{S,t''>t'}H(t')H(t'')dt''dt'$$
If you are allowed to change the order of integration, that becomes
$$\int_{S,t''>t'}H(t')H(t'')dt'dt''$$
Then rename $t'$ and $t''$
$$\int_{S,t'>t''}H(t')H(t'')dt''dt'$$
and turn it back to the double integral. |
Minimize the sum of solution of linear equation | Here is my suggestion for the game as specified in your link. I use a Mixed Integer Program (MIP) to model the problem. This can be solved using any MIP solver.
Basically we count the number of switches in the same row and column and require that the final state is even (to make it dark). (Note: the sum is over the whole column and over the whole row, so we double count the element itself -- this is repaired by the subtraction.) Even means $2$ times an arbitrary integer. I said earlier to use a dummy objective. We can, however, even minimize the number of switches.
I use the initial state $s0$:
The model can be compactly written as:
The even constraint is basically what you described in your question. My $s_{i,j}$ is your $x_{i,j}$. In the model we use a binary variable $s_{i,j}$ which can only assume values 0 or 1. The integer variable $y_{i,j}$ is used only to make sure the final state is $0,2,4,...$.
When I solve this I see one would need to apply the following switches:
---- 37 VARIABLE s.L switch state
c1 c2 c3 c4
r1 1 1
r2 1 1 1
r3 1 1
r4 1 1
Note: you can solve MIP models like this remotely on NEOS. The complete GAMS model is here. (GAMS is the modeling language I used to express the model). |
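As a complement to the MIP, here is a brute-force Python sketch of the same game: pressing switch $(i,j)$ toggles every lamp in row $i$ and column $j$, and we search all $2^{16}$ press patterns of a $4\times4$ board for one of minimal size. The initial state below is a made-up example of mine (the one produced by pressing only the top-left switch), not the $s0$ from the answer:

    from itertools import product

    s0 = [[1, 1, 1, 1],
          [1, 0, 0, 0],
          [1, 0, 0, 0],
          [1, 0, 0, 0]]

    best = None
    for presses in product([0, 1], repeat=16):
        s = [[presses[4 * i + j] for j in range(4)] for i in range(4)]
        # toggles of cell (i,j): presses in row i plus column j, minus the
        # double-counted press at (i,j) itself; final state must be even
        if all((s0[i][j] + sum(s[i]) + sum(s[k][j] for k in range(4)) - s[i][j]) % 2 == 0
               for i in range(4) for j in range(4)):
            if best is None or sum(presses) < sum(best):
                best = presses

    print(best, sum(best))  # presses only the top-left switch: sum is 1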
Can we compute confidence intervals for the variance of an unknown distributions from sample variances? | In practical terms, if the distribution is unknown and one has a lot of data, one can assume that the distribution of the sample variance converges to a Gaussian one (e.g. see here). Then the confidence interval can be computed from this.
One can also do bootstrapping to approximate the statistic's distribution, and use it to estimate the confidence interval.
This is (asymptotically) quite accurate.
Another method might be to use a Bayesian approach (see: A Bayesian perspective on estimating mean, variance, and standard-deviation from data by Oliphant). (This method is built into scipy already :].) Essentially, it finds that, with an "ignorant" Bayesian prior, the sample variance follows an inverted Gamma distribution, from which confidence intervals can be constructed.
See also this question about the distribution of the sample variance and this one, which is also related. |
Help with proof of two spanning sets of the same space | The notation you are using is a bit off. For example $\in$ is the symbol indicating that an element belongs to a set, so saying things like
$$\{v_1,v_2,...,v_r\}\in \{cw_1 + cw_2 + ...cw_q\}$$
and
$$\{w_1,w_2,...,w_q\}\in \{kv_1 + kv_2 + ...kv_r\}$$
is not correct.
For the proof itself, the "only if" direction (that is, if $\mathrm{span}\{v_1,\ldots,v_r\}=\mathrm{span}\{w_1,\ldots,w_q\}$, then you have the statement about linear combinations) follows from the definition of span. Can you see why?
As for the other direction, what you need to show is that if each $v_i$ is a linear combination of the $w_j$, then $\mathrm{span}\{v_1,\ldots,v_r\}\subset \mathrm{span}\{w_1,\ldots,w_q\}$. Likewise, if each $w_j$ is a linear combination of the $v_i$, then $\mathrm{span}\{v_1,\ldots,v_r\}\supset \mathrm{span}\{w_1,\ldots,w_q\}$. I'll prove one and leave the rest to you.
Since $v_i$ is a linear combination of the $w_j$, by definition we have $v_i\in \mathrm{span}\{w_1,\ldots,w_q\}$, for each $i=1,\ldots,r$. Since $\mathrm{span}\{w_1,\ldots,w_q\}$ is a vector space, it is closed under taking sums of vectors and scalar multiplication, so we are free to take linear combinations of elements inside $\mathrm{span}\{w_1,\ldots,w_q\}$.
In particular, we can take ANY linear combination of $S=\{v_1,\ldots, v_r\}$, and it will still lie in $\mathrm{span}\{w_1,\ldots,w_q\}$. Thus, by definition
$$\mathrm{span}\{v_1,\ldots,v_r\}\subset \mathrm{span}\{w_1,\ldots,w_q\}.$$
This proves one direction. I leave showing $\mathrm{span}\{v_1,\ldots,v_r\}\supset \mathrm{span}\{w_1,\ldots,w_q\}$, and the "only if" direction to you. I hope this helps. |
Limits of $(e^{xy}-1)/y$ | Hint: Multiply by $1\color{grey}{=\dfrac x x}$. |
Using induction on a sequence | The simplest thing to do is simply run the problem 'the other way' - can you show that if $x_n < 4$, then $x_{n+1} < 4$? Once you have that, induction will let you go 'from one to infinity'; since the statement is true for $x_1$, and since you've shown that if it's true for $x_n$ then it's true for $x_{n+1}$, you're then allowed to conclude by induction that it's true for all natural numbers $n$.
As for the so-called 'induction step' - showing that if the inequality is true for $x_n$, it's true for $x_{n+1}$ - you should be able to do that yourself, using just what you know about inequalities (a more specific hint: if $a < b$, then $(a+c) < (b+c)$; in addition, if $c > 0$, then $a\times c < b\times c$. These should be all that you need.)
Edit: Yes, the approach that you offer in your addendum is exactly what you're after! Nicely done. |
On the Laurent series of $\frac{\sin z}{z-1}$ at $z=1$ | No it's not correct. The Taylor series of $\sin(z)$ around $z=1$ is given by $$\sin(z)=\sum_{k=0}^\infty \frac{f^{(k)}(1)}{k!}(z-1)^k,$$
where $$f^{(k)}(1)=\begin{cases}\sin(1)&k\in 4\mathbb N\\ \cos(1)&k\in 4\mathbb N+1\\ -\sin(1)&k\in 4\mathbb N+2\\ -\cos(1)&k\in 4\mathbb N+3.\end{cases}$$ Dividing this series by $z-1$ then gives the Laurent series of $\frac{\sin z}{z-1}$ at $z=1$. |
Hyperbolic triangle with angles $\pi/2$, $\pi/6$ and $\pi/9$. | I would suggest to use the Poincare disk model in this case because it depicts angles like they were Euclidean angles.
The following figure depicts a triangle with angles $90^{\circ}$, $60^{\circ}$, and $0^{\circ}$. (not $30^{\circ}$ because the figure would have become a little overcomplicated. However you could halve the angle at $B$.)
If you decrease the length of $AB$ then the third angle will increase from $0$ to $30^{\circ}$. (Note that as the triangle gets smaller and smaller, the laws of Euclidean geometry become better and better approximations.) Somewhere in between ($0^{\circ}$ and $30^{\circ}$) the third angle will be exactly $20^{\circ}$.
I downloaded the NonEuclid software from here and did my construction with the NonEuclid software. You can do the same construction (plus halving the angle at $B$) and watch how the third angle changes while changing the length $AB$.
Note that in hyperbolic geometry there is no similarity. That is all the triangles with the same triplets of angles are congruent. |
Area of the region enclosed by the three circles that have sides of a triangle as diameters. | Update. (There was a mistake in the first version.)
Assume that the given triangle $\triangle$ is acute. The three circles then intersect at the vertices $A$, $B$, $C$ of $\triangle$ and at the pedal points $H_a$, $H_b$, $H_c$ of the heights of $\triangle$. It follows that the vertices of the shape $S$ in question form the orthic triangle $\triangle_{\rm orth}$ of $\triangle$. According to this link the area of $\triangle_{\rm orth}$ is given by
$${\rm area}(\triangle_{\rm orth})=2{\rm area}(\triangle)\cos\alpha\cos\beta\cos\gamma\ .$$
The area of $S$ is equal to ${\rm area}(\triangle_{\rm orth})$ plus the areas of three circular segments.
The circular segment $S_A$ with center on $BC$ has radius ${a\over2}$ and central angle $\pi-2\alpha$. (Note that the corresponding peripheral angle is ${\pi\over2}-\alpha$, since the triangle $AH_cC$ has a right angle at $H_c$.) Its area then computes to
$${\rm area}(S_A)={a^2\over8}\bigl(\pi-2\alpha-\sin(\pi-2\alpha)\bigr)={a^2\over8}\bigl(\pi-2\alpha-\sin(2\alpha)\bigr)\ .$$
Put it all together, and you obtain a formula for ${\rm area}(S)$ in terms of the data $a$, $b$, $c$, $\alpha$, $\beta$, $\gamma$. |
Sum of chosen primes is composite | Choose the primes so that they are all congruent to $1$ modulo $k$. That we can find infinitely many is a consequence of Dirichlet's Theorem on primes in arithmetic progressions. |
Prove a map injective | It is not injective for $a=2$. Take $S=\{n\geq 2\}$ and $T=\{1\}$.
Then
$$f(S)=\sum_{n\geq 2} \frac{1}{2^{n}}=\frac{1}{2}=f(T).$$
On the other hand if $a>2$ the map $f$ is injective.
In fact, if $S\not= T$ then let $m=\min((S\setminus T)\cup (T\setminus S))$ (the set $(S\setminus T)\cup (T\setminus S)$ is non-empty!).
If $m\in S$ (the other case is similar) then $a>2$ implies that
$$f(S)-\sum_{n \in S\cap T, n<m} \frac{1}{a^{n}}\geq\frac{1}{a^m}>\frac{1/a^{m}}{a-1}=\frac{1/a^{m+1}}{1-1/a}=\sum_{n\geq m+1}\frac{1}{a^n}\geq f(T)-\sum_{n \in S\cap T,n<m} \frac{1}{a^{n}}.$$ |
Choosing parameter such that the roots of a cubic polynomial have negative real parts | You can use the Hurwitz criterion. After multiplying by $-1$ you get
$$G(\lambda)=\lambda^3+3\lambda^2+(2-a)\lambda+1$$
The Hurwitz criterion for a cubic polynomial
$$a_3\lambda^3+a_2\lambda^2+a_1\lambda+a_0$$
says: the real parts of the roots are all negative if $a_3>0$, $a_2>0$, $a_1>0$, $a_0>0$ and $a_1a_2>a_0a_3$.
From this, we obtain $2-a>0$ (third inequality).
Additionally, we need to have $3(2-a)>1$ (last condition), which implies $2-a>1/3$. Can you do the rest? |
A little problem about convergence of series | The answer is yes. Use Dirichlet's Test. Note that if $\sum a_n$ converges then its partial sums are bounded, and moreover the sequence $1/n$ is positive and decreasing. |
Proofs involving orthonormal basis | For (a), write $v = \sum_{i=1}^n \langle v,e_i\rangle e_i$ and find $\langle v,v\rangle$ using the conjugate-linearity of the dot-product.
For (b), extend the set to an orthonormal basis and note that if
$$
\sum_{i=1}^n |\langle v,e_i\rangle|^2 = \sum_{i=1}^k |\langle v,e_i\rangle|^2
$$
with $n > k$, then $\langle v,e_i\rangle = 0$ for all $i > k$. |
Show that limit of sequence does not exist | Take the two subsequences formed by the even- and odd-indexed terms respectively: the odd-indexed terms converge to $0$ while the even-indexed terms converge to $1$. Since a convergent sequence cannot have two subsequences with different limits, the limit does not exist. |
How do I evaluate $\sum_{r=1}^{n} [r(r+1)(r+2)(r+3)] $? | $r(r+1)(r+2)(r+3) = 24\binom{r+3}{4}$, and since
$$ \sum_{k=j}^{N}\binom{k}{j}=\binom{N+1}{j+1}$$
holds by induction, we have:
$$ \sum_{r=1}^{n} r(r+1)(r+2)(r+3) = 24\sum_{r=1}^{n}\binom{r+3}{4} = 24\binom{n+4}{5} = \frac{n(n+1)(n+2)(n+3)(n+4)}{5}.$$ |
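A one-line numeric check of this closed form, for illustration:

    n = 10
    lhs = sum(r * (r + 1) * (r + 2) * (r + 3) for r in range(1, n + 1))
    print(lhs == n * (n + 1) * (n + 2) * (n + 3) * (n + 4) // 5)  # True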
The sum of two integers is a square. The sum of their squares is also a square. | WLOG for $d^2=a^2+b^2, a=2mn, b=m^2-n^2$
$a+b=m^2-n^2+2mn$
Let $m^2-n^2+2mn=(m+pn)^2\iff n(p^2+1)=2m(1-p)$
$\dfrac n{2(1-p)}=\dfrac m{1+p^2}=r$ (say)
Now $(1+p^2,1-p)|(1+p^2,1-p^2)$ which must divide $1+p^2+1-p^2=2$
Case $\#1:$ $(1+p^2,2(1-p))=1$ if $p$ is even
$n=2r(1-p),m=r(1+p^2)$
Case $\#2:$
For odd $p, p^2\equiv1\pmod8, 1+p^2\equiv2$
In that case, $(1+p^2,2(1-p))=2$
$n=r(1-p)$ and $m=\dfrac{r(1+p^2)}2$ |
Solving the ODE $y+xy'=x^4 (y')^2$ | $$y+x\frac{dy}{dx}=x^4\left(\frac{dy}{dx}\right)^2$$
Change of variable : $x=\frac{1}{X}$
$\frac{dy}{dx}=\frac{dy}{dX}\frac{dX}{dx}=-X^2\frac{dy}{dX}$
$$y-xX^2\frac{dy}{dX}=x^4\left(X^2\frac{dy}{dX}\right)^2$$
$$y-X\frac{dy}{dX}=\left(\frac{dy}{dX}\right)^2$$
One can see immediately that $y=aX+b$ is convenient:
$(aX+b)-aX=a^2$ implies $b=a^2$, hence $y=aX+a^2$
$$y=\frac{a}{x}+a^2$$
This is the expected result.
If we don't see the short-cut above,
$$y+\frac{X^2}{4}=\left(\frac{dy}{dX}+\frac{X}{2}\right)^2$$
$$y+\frac{X^2}{4}=\left(\frac{d}{dX}\left(y+\frac{X^2}{4}\right)\right)^2$$
Let $Y=y+\frac{X^2}{4}$
$$Y=\left(\frac{dY}{dX}\right)^2$$
leading to $Y=\left(\frac{X}{2}+c\right)^2=y+\frac{X^2}{4}$
$$y=cX+c^2=\frac{c}{x}+c^2$$ |
taylor series approximation of e function | If this is an equation in real numbers, you don't need Taylor series to show that $y(0)=0$. If you substitute $x=0$ into your equation and let $z=y(0)$ we get
$$e^z=1-z$$
This has the unique real solution $z=0$. This is unique since the function on the left hand side is increasing while the one on the right is decreasing. |
rotate graph of function by 180 | To rotate $f(x)$ by 180 degrees about the origin, you need to mirror it horizontally ($f(-x)$) and also vertically ($-f(x)$). In your case,
$$\begin{eqnarray*}
-f(-x) &=& -(1+(-x)\cos(-x)) \\
&=& -(1 - x\cos(-x)) \\
&=& -1 + x\cos(-x) \\
&=& -1 + x\cos(x). \end{eqnarray*}
$$ |
Last digit of $2^{24!}$ | $$24!=1\times2\times\ldots\times10\times\ldots\times20\times \ldots\times 24=100\times(\cdots)\equiv0 \pmod{4}$$
Clearly the last digits of $24!$ are $00$, so $24!\equiv 0\pmod 4$. Since the last digits of $2^n$ cycle with period $4$ (namely $2,4,8,6$), the last digit of $2^{24!}$ is the same as that of $2^4$, which is $6$. |
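Python's three-argument pow confirms this directly, since it reduces the enormous exponent by fast modular exponentiation:

    from math import factorial
    print(pow(2, factorial(24), 10))  # 6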
Find the value of the integral $\int_0 ^2\int_0^2f(x+y)~dx~ dy$ where $f(t)$ denotes the greatest integer $\le t$ | Hints: Use indicator functions. For set $A$,
$$
{\bf 1}_{A}(t):=\left\{\begin{array}{cl}1;&\mbox{if }\,\,t\in A\\0;&\mbox{otherwise}\,.\end{array}\right.
$$
This means we can write,
$$
\int_0 ^2\int_0^2f(x+y)~dxdy\\{}={}\int_0 ^2\int_0^2\Big(1\cdot{\bf 1}_{[1, 2)}(x+y){}+{}2\cdot{\bf 1}_{[2, 3)}(x+y){}+{}3\cdot{\bf 1}_{[3, 4)}(x+y)\Big) ~dxdy\,.
$$
Also, note that by the symmetry of the problem, for integer $j$,
$$
\int_0 ^2\int_0^2{\bf 1}_{[j, j+1)}(x+y)~dxdy{}={}\int_0 ^2\int_0^2{\bf 1}_{[0, j+1)}(x+y)~dxdy-\int_0 ^2\int_0^2{\bf 1}_{[0, j)}(x+y)~dxdy\,.
$$
Finally, there are triangles in all of this. |
Find the maximum and minimum values of of f(x,y,z) with the following constraints | The function is unbounded below and above. If you set $x=4$ and $y=0$, any $z$ satisfies the constraints while the objective function is $64+4z$. |
Non-example of mathematical assertion. | If we take statements to be the sort of things that can be true or false, then it's easy to see that:
"Suppose $n$ is divisible by $3$"
is not a statement, since it has no truth-value. We could maybe call it a 'command'. Its component:
"$n$ is divisible by $3$"
is also not a statement, because, as Santiago Canez pointed out in a comment above, unless $n$ is quantified over, unless the formula is 'closed', the expression will continue lacking a truth-value.
It's been pointed out to me that it's not very obvious why the main sentence is not a statement, so let me give a concrete non-mathematical example. Consider the sentence:
"Suppose you are Australian"
Is this sentence true or false? I'd say neither. It is certainly either true or false whether you or any other particular person is Australian. But the sentence supposing one or the other itself has no truth-value. I chose this particular sentence with 'you' in it, because 'you' is an indexical and, like '$n$' in the original sentence, needs a context to obtain an actual value, so:
"you are Australian" ($\sim$ "$n$ is divisible by $3$")
is also lacking a truth-value until the 'you' is replaced by the name of an actual person. I hope the analogy is helpful. If it's not, leave a comment and I'll try to come up with a better one. |
Conditional probability with and without replacement | Without replacement: We can choose $r$ items in $\binom{n}{r}$ equally likely ways.
There are $\binom{n-N}{r}$ ways to do it using only allowed numbers. Divide.
With replacement: The probability that on any pick we miss all the $N$ "bad" ones is $\frac{n-N}{n}$. For the probability of avoiding bad ones $r$ times in a row, take the $r$-th power. |
Filling a grid so as not to completely fill any row, column or *any* diagonal | $$\pmatrix{0&0&1&0\\
1&1&1&0\\
0&1&1&1\\
0&1&0&0}$$
has no all-$1$ rows, columns, or "maximal" diagonals, and does it with $8$ $1$'s, which is greater than $(4-1)(4-2)=6$.
Note: This answer has been superseded by Ross Millikan's much more complete and general constructions.
Added later: On reading Ross Millikan's and zyx's answers, here's a construction that works for $2k\times2k$ grids, best thought of as black-and-white chessboards:
Place $0$'s on all the white squares along the top and bottom rows,
and on all the black squares along the left and rightmost columns.
This uses $4k=2n$ $0$'s and guarantees there's a $0$ in each row, column, and diagonal. Assuming the top left square is white, then, as Ross Millikan noted, there are $n$ white "up" diagonals and $n$ black "down" diagonals, so you can't get by with fewer than $2n$ $0$'s. So for even $n$, $g(n)=n^2-2n$. Note, though, that the construction fails rather badly for odd $n$. |
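A quick NumPy check, for illustration, that the $4\times4$ matrix above indeed leaves a $0$ in every row, column, and diagonal of either direction (of every length), while using $8$ ones:

    import numpy as np

    M = np.array([[0, 0, 1, 0],
                  [1, 1, 1, 0],
                  [0, 1, 1, 1],
                  [0, 1, 0, 0]])
    ok = (not np.all(M, axis=1).any()
          and not np.all(M, axis=0).any()
          and all(not np.all(np.diag(M, k)) for k in range(-3, 4))
          and all(not np.all(np.diag(np.fliplr(M), k)) for k in range(-3, 4)))
    print(ok, int(M.sum()))  # True 8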
Finite groups, where the number of subgroups is equal to the number of elements | Here is a list of all examples of order $\leq 100$, found with GAP (EDIT: I added some more, but I didn't check the groups of order $128$). The notation $:$ means semidirect product.
$$\begin{array}{|c|c|}
\text{StructureDescription} & \text{Order} \\ \hline
\text{Trivial group} & 1 \\ \hline
C_2 & 2 \\ \hline
S_3 & 6 \\ \hline
C_4 \times C_2 & 8 \\ \hline
D_{28} & 28 \\ \hline
C_6 \times S_3 & 36 \\ \hline
(C_{10} \times C_2) : C_2 & 40 \\ \hline
C_2 \times (C_5 : C_4) & 40 \\ \hline
(C_3 \times Q_8) : C_2 & 48 \\ \hline
((C_3 \times C_3) : C_3) : C_2 & 54 \\ \hline
C_6 \times A_4 & 72 \\ \hline
C_2 \times ((C_4 \times C_4) : C_3) & 96 \\ \hline
(C_5 \times C_5) : C_4 & 100 \\ \hline
D_{104} & 104 \\ \hline
S_3 \times D_{22} & 132 \\ \hline
C_3 \times D_{48} & 144 \\ \hline
(C_{40} \times C_2) : C_2 & 160 \\ \hline
(C_5 \times (C_8 : C_2)) : C_2 & 160 \\ \hline
((C_2 \times (C_5 : C_4)) : C_2) : C_2 & 160 \\ \hline
(C_4 \times (C_5 : C_4)) : C_2 & 160 \\ \hline
(C_{40} \times C_2) : C_2 & 160 \\ \hline
(C_8 \times D_{10}) : C_2 & 160 \\ \hline
(C_2 \times (C_5 : Q_8)) : C_2 & 160 \\ \hline
(C_2 \times (C_{11} : C_4)) : C_2 & 176 \\ \hline
(C_{15} \times C_3) : C_4 & 180 \\ \hline
\end{array}$$
Random related fact: the number of subgroups in the dihedral group $D_n$ of order $n$ is $\sigma(n/2) + d(n/2)$, where $\sigma$ is the sum of divisors function and $d$ is the divisor count function. Thus the dihedral group $D_n$ of order $n$ is an example for the problem when $$n = 2,\ 6,\ 28,\ 104,\ 260,\ 368,\ 1312,\ 17296,\ 24016,\ 69376,\ \ldots$$
I don't know if this sequence is infinite. For more terms, it is $2 \cdot$ $A083874$ from OEIS. Seems that really large examples exist, for example $9223653647124987904$ is in the sequence. |
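For what it's worth, the dihedral orders above can be reproduced with a short sympy sketch of the condition $\sigma(n/2)+d(n/2)=n$:

    from sympy import divisor_sigma, divisor_count

    hits = [n for n in range(2, 2000, 2)
            if divisor_sigma(n // 2) + divisor_count(n // 2) == n]
    print(hits)  # [2, 6, 28, 104, 260, 368, 1312]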
Set-builder notation: is there another symbol for "such that"? | We usually use ":" and "|", as in $\{ x \in \mathbb{R} : x > 0\}$. Hope that answers your question. |
Can there be involutions in the unit group of $\mathbb{Z}_n$? | Firstly, and most importantly, your title is incorrect. Idempotents are elements $x\in G$ such that $x^2=x$, and it is an exercise in elementary group theory that the only idempotent in a group is the identity. As has been mentioned in the comments, it looks like you mean to say involutions instead of idempotents, so I answer on this assumption.
In general this is very possible, consider, for example, $4$ modulo $15$ (or indeed $(-1)^2=1$ always as is noted above). The important thing is that involutions are elements of order $2$. The general structure of $\mathbb{Z}/n\mathbb{Z}^\times$ is well known, (see e.g. the wikipedia page or most good introduction to groups/number theory books). I give it below:
Firstly, for an odd prime $p$ and positive integer $k$ we have
$$\mathbb{Z}/p^k\mathbb{Z}^\times\cong \mathbb{Z}/(p^k-p^{k-1})\mathbb{Z}.$$
Secondly, at powers of $2$ we have
$$\mathbb{Z}/2^k\mathbb{Z}^\times\cong
\begin{cases}
\{1\}&k=1\\
\mathbb{Z}/2\mathbb{Z}&k=2\\
\mathbb{Z}/2\mathbb{Z}\times \mathbb{Z}/2^{k-2}\mathbb{Z} &k\geq 3
\end{cases}$$
The general case then follows from the Chinese remainder theorem, which says that if $n=\prod_{i=1}^rp_i^{k_i}$ is the unique prime decomposition of $n$ then
$$\mathbb{Z}/n\mathbb{Z}^\times=\prod_{i=1}^r\mathbb{Z}/p_i^{k_i}\mathbb{Z}^\times$$
From the structure of the unit group as a product of cyclic groups in this way, it is easy to see how often (and even how many) involutions appear. |
Speed of light of a falling ladder | This website here gives a good explanation on why the falling ladder does not fall at an infinite speed. Hint: the triangle is only assumed to be a right triangle — it might not be in all cases. |
How can I show that a curve $\gamma(t) = t+it \sin(1/t)$ is continous but not rectifiable | On each interval $I_k:=\left(\frac{1}{2\pi(k+1)},\frac{1}{2\pi k}\right];\ k\ge 1,\ \sin(1/t)$ executes a cycle and so $L_{\gamma|_{I_k}}>2\left(\frac{1}{2\pi(k+1)}\right)$ and since the $(I_k)$ are disjoint, it follows that $L_{\gamma}>\frac{1}{\pi}\sum_{k\ge 1}\frac{1}{k+1}, $ which diverges. |
What's wrong with my solution for the following differential equation? | The solution you obtain for $v$ is correct. Moreover, it's given by:
$$v(x) = 1/2 + c \, e^{3x^2}, \quad c \in \mathbb{R}.$$
The change of variables $v = 1/y^3$ leads to the final solution:
$$ 1/y^3 = 1/2 + c \, e^{3x^2},$$
which is an implicit form of expressing your result. It appears to be a typo in the book you are consulting/studying from, so good work!
Of course, note that $y = 0$ is also a solution, so the solution presented above is far from unique!
Cheers! |
Projective (or inverse) limit of C*-algebras | The category of $C^*$-algebras is complete. The limit of a diagram $(A_i,\lVert - \rVert)_{i \in I}$ of $C^*$-algebras has as underlying $*$-algebra the $*$-subalgebra of the $*$-algebra $\prod_{i \in I} A_i$ whose elements $x=(x_i)_{i \in I}$ are subject to two conditions: First, the usual matching condition: For edges $i \to j$ the map $A_i \to A_j$ should map $x_i$ to $x_j$. Second, a boundedness condition: The set $\{\lVert x_i \rVert : i \in I\}$ is bounded. We then define $\lVert x \rVert := \sup_{i \in I} \lVert x_i \rVert$. It is straightforward to check that this is a $C^*$-norm. The only nontrivial fact is that the norm is complete. But you can simply copy the usual proof that $\ell^{\infty}$ is complete. It is also easy to check the universal property, using the fact that $*$-homomorphisms between $C^*$-algebras are of norm $\leq 1$. |
Prove that $f\in L^2$ and $\lim_{n\rightarrow\infty} \int_A f_n = \int_Af$ | Let $$g_n = f_n - f.$$ Then $g_n$ converges uniformly to zero. Since $$\|g_n\|_2 \le m(A)^{1/2}\|g_n\|_\infty,$$ we have what we need. Note that $A$ must be of finite measure, or this breaks down. |
Ordinary power series generating function of $\{(-1)^nn^2\}^\infty_{n=0}$ | The ordinary generating function $\displaystyle{\sum_{n=0}^\infty (-1)^nx^n}$ is simply $\dfrac{1}{1+x}$ because $$\sum_{n=0}^\infty (-1)^nx^n=\sum_{n=0}^\infty(-x)^n=\frac{1}{1-(-x)}=\frac{1}{1+x}.$$ You can use the same sort of idea to obtain $\sum_{n=0}^\infty(-1)^nn^2 x^n$.
First, write $F(x)=\dfrac{1}{1-x}$. Then $$F'(x)=\frac{d}{dx}\sum_{n=0}^\infty x^n=\sum_{n=0}^\infty nx^{n-1}=\sum_{n=0}^\infty (n+1)x^n,$$ and $$F''(x)=\frac{d^2}{dx^2}\sum_{n=0}^\infty x^n=\sum_{n=0}^\infty n(n-1)x^{n-2}=\sum_{n=0}^\infty (n+2)(n+1)x^n$$ $$=\sum_{n=0}^\infty n^2x^n+3\sum_{n=0}^\infty (n+1)x^n-\sum_{n=0}^\infty x^n=\sum_{n=0}^\infty n^2x^n+3F'(x)-F(x).$$ This shows that $$\sum_{n=0}^\infty n^2x^n=F''(x)-3F'(x)+F(x),$$ and you can calculate that this is $\dfrac{x^2+x}{(1-x)^3}$. Finally, $$\sum_{n=0}^\infty (-1)^nn^2x^n=\sum_{n=0}^\infty n^2(-x)^n=\frac{(-x)^2+(-x)}{(1-(-x))^3}=\frac{x^2-x}{(1+x)^3}.$$
Remark: For any positive integer $k$, the generating function $\displaystyle{\sum_{n=0}^\infty n^kx^n}$ is given by $\dfrac{A_k(x)}{(1-x)^{k+1}}$, where $A_k(x)$ is called an Eulerian polynomial. The Eulerian polynomial $A_2(x)$ is just $x^2+x$, which is indeed the numerator in the generating function for $\sum_{n=0}^\infty n^2x^n$. |
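A sympy check of the final closed form against the first few coefficients $(-1)^n n^2$, for illustration:

    from sympy import symbols, series

    x = symbols('x')
    print(series((x**2 - x) / (1 + x)**3, x, 0, 6))
    # -x + 4*x**2 - 9*x**3 + 16*x**4 - 25*x**5 + O(x**6)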
Integral Calculus Drained Tank | The answer to your first question appears to be correct. However, I understand the second question differently. The fourth minute does only last from 3:00 minutes to 4:00 minutes. Compare to how the first minute lasts from 0:00 to 1:00. You calculated the volume of the drained oil from 0:00 to 4:00. The question to that answer would be "How much oil is drained in the first four minutes".
Now that you know how much oil has been drained in the first four minutes, simply subtract the oil that has been drained in the first three minutes; what remains is $$\int \limits_3^4 \frac{dv}{dt}dt$$ |
Continued fraction to irrational number | The idea is not much different. For something like this, it becomes $$ \left[ 1; \overline {2, 1} \right] = 1 + \cfrac {1}{2 + \cfrac {1}{1 + \cfrac {1}{2 + \cfrac {1}{1 + \cdots}}}} $$Then, you can use the regular technique. If this is $x$, then $$ x - 1 = \cfrac {1}{2 + \cfrac {1}{x}} \implies x - 1 = \cfrac {1}{\cfrac {2x + 1}{x}} = \frac {x}{2x+1}, $$then you can solve.
In general, for $\overline{b_1,b_2,\cdots,b_n}$, instead of the partial denominators all being the same $b$, as suggested by $\overline{b}$, they keep cycling through $b_1,b_2,b_3,\cdots,b_n$. |
An infinite dimensional limit | If $X_1,X_2,...$ is an i.i.d. sequence of random variables with uniform distribution on $[0,1]$ then the given limit is $\lim \frac 1 {\sqrt n} E(X_1^{2}+X_2^{2}+\cdots +X_n^{2})^{1/2}$. By SLLN we have $(X_1^{2}+X_2^{2}+\cdots +X_n^{2}) /n \to \int_0^{1}x^{2}dx=\frac 1 3$ a.s. and in $L^{1}$ and the result follows from this. |
Find a real 3x3 matrix A that satisfies a quadratic equation | It doesn't exist. All $3 \times 3$ real matrices have a real eigenvalue, since the characteristic polynomial is a real polynomial of odd degree, and must have a root. Thus, the minimal polynomial must also have a real root, and would have to divide $x^2 + x + 1$. This is not possible, as $x^2 + x + 1$ has no real root. |
What is more important for 1) encryption 2) decryption for the discrete log problem - g is of prime power order or g mod p is a primitive root? | In general, the DLP does not have these constraints (see Wikipedia). It simply asks us to find a $x$ such that $g^x \equiv h \pmod{p}$. However, there are certain things you can do to make solving this DLP sufficiently hard for an attacker.
We pick $g$ to be a primitive root otherwise we don't visit all the elements modulo $p$ and thus reducing the number of things an attacker must try to attack your message.
$p$ must be sufficiently large because the currently best known method of efficiently solving the DLP takes approximately $\sqrt{p}$ steps to find the solution.
We usually pick $p = 2q + 1$ where $p, q$ are both prime. The Pohlig-Hellman algorithm can solve the DLP by solving smaller DLPs of size $p_i^{e_i}$ where $p_i^{e_i} \mid p - 1$ and $p_i^{e_i + 1} \nmid p - 1$.
I'm not sure where your constraint for $g$ needing to have prime power order comes from. Consider solving $4^x \equiv -1 \pmod{29}$. This is a well-formed DLP. Additionally, "$g$ is a primitive root and of prime power order" only works for Fermat primes, as the multiplicative order of a primitive root mod $p$ is $p - 1$, which is even and hence a prime power only when it is a power of $2$, i.e. when $p = 2^{2^n} + 1$. |
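The $\sqrt{p}$-step approach mentioned above is baby-step giant-step; a minimal Python sketch (solving the example $4^x \equiv -1 \equiv 28 \pmod{29}$ from the answer):

    from math import isqrt

    def bsgs(g, h, p):
        """Find x with g^x = h (mod p), if one exists, in about sqrt(p) steps."""
        m = isqrt(p - 1) + 1
        baby = {pow(g, j, p): j for j in range(m)}  # baby steps g^j
        inv_gm = pow(g, -m, p)                      # giant step g^(-m); Python >= 3.8
        gamma = h
        for i in range(m):
            if gamma in baby:
                return i * m + baby[gamma]          # x = i*m + j
            gamma = gamma * inv_gm % p
        return None

    print(bsgs(4, 28, 29))  # 7, and indeed 4^7 = 16384 = 28 (mod 29)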
Equation of line of locus. | This is related to Eccentricity.
Let the point be $P \equiv (x, y)$.
Distance of P from $(2, 2)$ is $\sqrt{(x-2)^2 + (y-2)^2} $ (1)
Distance of a point $(x_1, y_1)$ from a line $ax+by+c=0$ is $\left|\dfrac{ax_1+by_1+c}{\sqrt{a^2+b^2}}\right|$
So distance of P from $x-y = 0$ is $\left|\dfrac{x-y}{\sqrt2}\right|$. (2)
Now, (1) and (2) are equal.
Therefore, $$\begin{align}\sqrt{(x-2)^2 + (y-2)^2} =\left|\dfrac{x-y}{\sqrt2}\right| \\(x-2)^2 + (y-2)^2 = \dfrac{(x-y)^2}{2}\end{align}$$
Can you simplify after this? |
Different solutions of homogenous system | You have an arithmetic error in the second matrix: the $(3, 5)$ entry ought to be $3$ (not $-1$ as you have written).
Your first solution is correct, anyway.
By the way, you can discover the pivots (and number of free variables, as well) by putting the matrix into row echelon form. But, in order to actually parametrize the solution space, you need to go further to reduced row echelon form (with zeroes above the pivot, as well). That's when you can stop the elimination procedure. |
Serre's Trick for flatness of a morphism of schemes | A proof of the 'claim':
Since $X(S)\neq \emptyset$, you have a map $S \to X$ and then a factorization of the identity map of $Y$: $Y \to Y\times_SS \to Y\times_SX \to Y$. Since $Y\times_SX$ is flat, so is $Y$. |
what argument can we prove this fact?? | In general, this type of problem can be quite difficult, though methods from Galois theory are often effective. However, in your particular case, there is a neat observation we can make. Indeed, the minimal polynomial of $2^{1/9999}$ over $\mathbb{Q}$ is $X^{9999}-2$, which is irreducible by Eisenstein at $2$. Hence, the degree of the extension $\mathbb{Q}(2^{1/9999})$ over $\mathbb{Q}$ is $9999$. If we had $\sqrt{3} \in \mathbb{Q}(2^{1/9999})$, then we would have $\mathbb{Q}(\sqrt{3}) \subset \mathbb{Q}(2^{1/9999})$, whence multiplicativity of degree gives us $[\mathbb{Q}(2^{1/9999}):\mathbb{Q}] = [\mathbb{Q}(2^{1/9999}):\mathbb{Q}(\sqrt{3})][\mathbb{Q}(\sqrt{3}):\mathbb{Q})]$. But $[\mathbb{Q}(\sqrt{3}):\mathbb{Q}] = 2$ does not divide $9999$, a contradiction. This type of reasoning is often quite useful, although again, these problems can be quite tricky in general: see this question, for example. |
Compute for Cov(X,Y) and Correlation(X,Y) | Since $(-X,Y)$ and $(X,Y)$ are identically distributed, $\mathrm{Cov}(X,Y)=\mathrm{Cov}(-X,Y)=-\mathrm{Cov}(X,Y)$ hence $\mathrm{Cov}(X,Y)=0$. Likewise, $\mathrm{Corr}(X,Y)=0$. |
Need help calculating integral $\int\frac{dx}{(x^2+4)^2}$ | Try letting $x=2\tan t$. Then $x^2+4=4(1+\tan^2 t)=4\sec^2 t$, $dx=2\sec^2 tdt$. |
Bijective function $f: \mathbb{Z} \to \mathbb{N}$ | Consider the following function from $\mathbb{N}$ to $\mathbb{Z}$:
$$ 0 \mapsto 0 $$
$$ 1 \mapsto 1 \hspace{2cm} 2\mapsto -1$$
$$ 3 \mapsto 2 \hspace{2cm} 4\mapsto -2$$
$$ 5 \mapsto 3 \hspace{2cm} 6\mapsto -3$$
$$ \vdots \hspace{2cm} \vdots$$
And, more generally, if $n$ is even, $n\mapsto -\frac{n}{2}$; if $n$ is odd, $n \mapsto \frac{n+1}{2}$. This function would be bijective.
The trouble is, when we begin to deal with infinite sets, size is no longer a tangible thing. As you can see, $\mathbb{N} \subset \mathbb{Z}$, but according to this function, both have the same size. We call this size cardinality, and we write $\#(\mathbb{N}) = \#(\mathbb{Z}) = \aleph_0$. That is, the cardinal of both sets is aleph sub zero. However, this is the smallest infinity out there (and we know it as the countable infinity). The cardinal of, say, the real numbers $\mathbb{R}$ is bigger than the cardinal $\aleph_0$ because you can't construct a bijective function from any countable set to the reals; this is known as Cantor's diagonal argument. |
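The bijection from the table above, written out in both directions (the function names are mine):

    def nat_to_int(n: int) -> int:
        return (n + 1) // 2 if n % 2 else -(n // 2)

    def int_to_nat(z: int) -> int:
        return 2 * z - 1 if z > 0 else -2 * z

    print([nat_to_int(n) for n in range(7)])                        # [0, 1, -1, 2, -2, 3, -3]
    print(all(int_to_nat(nat_to_int(n)) == n for n in range(100)))  # True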
minmod slope total variation diminishing | We consider the advection equation $q_t + cq_x = 0$ with $c>0$. For the REA slope-limiter method with minmod slope $\sigma_i^n$ specified in OP, the reconstructed piecewise linear data of the book's Eq. (6.11) reads \begin{aligned}
q^n(x, t_n) &= \sum_{i=-\infty}^\infty \left(Q_i^n + \sigma_i^n (x-x_i)\right) \Bbb I_i(x) \\
&= Q_{-\infty}^n + \sum_{i=-\infty}^\infty \left(Q_{i+1}^n - \sigma_{i+1}^n x_{i+1} - Q_i^n + \sigma_i^n x_i\right) H(x-x_{i+1/2}) \\ &\; + x \sum_{i=-\infty}^\infty (\sigma_{i+1}^n - \sigma_i^n)\, H(x-x_{i+1/2}) \, ,
\end{aligned}
where $\Bbb I_i(x)$ is the indicator function of the $i$th finite volume $[x_{i-1/2}, x_{i+1/2}[$, and $H$ is the Heaviside step function. In the case of zero slope $\sigma_i^n \equiv 0$, a straightforward computation of the total variation yields $$
TV(q^n(\cdot, t_n)) = \sum_{i=-\infty}^\infty |Q_{i+1}^n - Q_{i}^n| = TV(Q^n) \, ,
$$ cf. Eq. (6.21) of the book. The present exercise consists in proving that the above equality becomes an inequality of the form $TV(q^n(\cdot, t_n)) \leq TV(Q^n)$ in the case of minmod slope $\sigma_i^n \not\equiv 0$ defined in OP --- in fact, the evaluation of $TV(q^n(\cdot, t_n))$ in OP is incorrect. To understand how things work, one could start with the computation of $TV(q^n(\cdot, t_n))$ for very simple piecewise linear functions $q^n$, e.g. the case of one single nonzero state $Q_0^n \neq 0$, then the case of two successive nonzero states $Q_0^n, Q_1^n \neq 0$, etc.
This way, you'll be able to tackle the case of arbitrary data $Q^n$.
From the above inequality (Eq. (6.23) of the book) and Eqs. (6.24)-(6.25), one shows that the minmod slope-limiter method is TVD. |
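For concreteness, a small NumPy sketch of the minmod slopes themselves, assuming the usual definition $\sigma_i^n=\operatorname{minmod}\bigl((Q_i^n-Q_{i-1}^n)/\Delta x,\ (Q_{i+1}^n-Q_i^n)/\Delta x\bigr)$ (which may differ from the OP's exact convention):

    import numpy as np

    def minmod(a, b):
        # zero where the arguments disagree in sign, else the one smaller in magnitude
        return np.where(a * b <= 0, 0.0, np.where(np.abs(a) < np.abs(b), a, b))

    Q = np.array([0.0, 1.0, 3.0, 2.0, 2.0, 0.0])
    dx = 1.0
    sigma = minmod((Q[1:-1] - Q[:-2]) / dx, (Q[2:] - Q[1:-1]) / dx)
    print(sigma)  # [1. 0. 0. 0.] -- the slope is flattened at local extrema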
Proving that ${4a \choose 2a} - {2a \choose a}$ is divisible by 4 | Beware: I am going for the overkill. By Legendre's theorem, the exponent of the largest power of $2$ dividing $n!$ is $\sum_{k\geq 1}\left\lfloor\frac{n}{2^k}\right\rfloor$, hence
$$ \nu_2\binom{2n}{n} = \sum_{k\geq 1}\left(\left\lfloor\frac{2n}{2^k}\right\rfloor-2\left\lfloor\frac{n}{2^k}\right\rfloor\right)\tag{1}$$
and the LHS is given by the number of "$1$"s in the binary representation of $n$. If we assume that $n$ has $k$ "1"s in its binary representation then both $\binom{2n}{n}$ and $\binom{4n}{2n}$ are numbers of the form $2^k\cdot\text{odd}$, hence their difference is a multiple of $2^{k+1}$. Since $k\geq 1$, we are done. |
Determining the type of conic and drawing a graph of it | The equation factors as
$$(x+y)^2-1=(x+y+1)(x+y-1)=0$$ and describes a pair of parallel straight lines. |
The reason for different terminologies | First, the definitions you list give only affine varieties of dimension at most $1$ (i.e. finite sets and curves), along with the affine plane. To get a general definition replace "affine plane" with "affine space."
Second, there is no consensus on whether varieties are irreducible by definition: one simply has to be aware of the convention used by a particular author.
Third, one must be a bit careful about thinking of an affine variety as a closed subset of affine space with the Zariski topology: this is only correct if one remembers either the embedding or the polynomial functions on the variety. For example, all curves are homeomorphic as topological spaces since they are simply infinite sets with the cofinite topology, but one should distinguish between e.g. singular and nonsingular curves, so this is clearly not satisfactory.
Fourth, thinking of affine varieties as embedded in affine space is aesthetically displeasing (at least to people like me) because the coordinates are not "intrinsic" to the variety structure. My preferred definition would be a topological space equipped with spaces of functions for every open set (i.e. a sheaf of functions) which is isomorphic to the (maximal) spectrum of a "nice" algebra, or, a little less abstractly, isomorphic to a closed subset of affine space with the usual polynomial functions. Which perspective you take depends on your taste and what you wish to do with algebraic geometry.
Edit: (in response to Georges's comment) all of this discussion applies over algebraically closed fields. When one works with general fields things get more complicated, so it is best to understand the situation over algebraically closed fields first. |
Finding Speed at a Single Point | You can try extrapolating the part between $Q_4$ and $P$ to get an estimate. Between $Q_4$ and $P$ the increase in time is $20 - 19.3 = 0.7$ seconds, while the increase in distance is $650 - 614 = 36$ meters. So in that interval, the object travels at $36$ meters per $0.7$ seconds, or $\frac{36}{0.7} \approx 51.43$ meters per second. The speed at $P$ may be a bit more looking at the shape of the function, but $51$ or $52$ meters per second should be a pretty good estimate.
Edit: We could try to fit a quadratic with the given data, but there is no exact match. Plugging in the data in Mathematica,
Fit[{{8, 165}, {13, 333}, {18.2, 559}, {19.3, 614}, {20, 650}}, {1, x, x^2}, x]
it spits out
0.976595 x^2 + 13.0624 x - 1.96826
Evaluating it at those five points, we get small errors:
{165.033, 332.887, 559.255, 613.908, 649.918}
We can then simply take the derivative w.r.t. $x$ to get the slope (speed) at any point $x$. The derivative is $1.95319 x + 13.0624$, so then plugging in $x = 20$ also gives us that the speed at the point $P$ is roughly $52.1262$. So again, $52$ meters per second seems like the best answer. |
What is the smallest solution of the congruence $a^{x} \equiv 1 (\textrm{mod}\ n)$? | This is called the Carmichael function and is usually denoted by $\lambda(n)$.
A "closed form" is that it is
$$\lambda(n)=\operatorname{lcm}_{p \text{ prime}} \{\varphi(p^{v_p})\}=\operatorname{lcm}_{p \text{ prime}} \{(p-1)p^{v_p-1})\}$$ where $v_p$ is the exponent of $p$ in $n$ that is $p^{v_p}\mid n$ yet $p^{v_p+1}\nmid n$.
Note the LCM is only formally over infinitely many numbers. You could also write it as
$$\operatorname{lcm} \{(p_1-1)p^{v_1-1}), \dots, (p_k-1)p^{v_k-1}) \}$$ where $n= p_1^{k_1} \dots p_k^{k_p}$ with distinct primes $p_i$. |
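A direct implementation of $\lambda(n)$ from this formula (using sympy for the factorization; math.lcm needs Python 3.9+):

    from math import lcm
    from sympy import factorint

    def carmichael(n: int) -> int:
        parts = []
        for p, k in factorint(n).items():
            if p == 2 and k >= 3:
                parts.append(2 ** (k - 2))           # the exceptional case at powers of 2
            else:
                parts.append((p - 1) * p ** (k - 1))
        return lcm(*parts) if parts else 1

    print([carmichael(n) for n in (1, 8, 15, 16, 100)])  # [1, 2, 4, 4, 20]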
Evaluating a complex contour integral | You don't need to worry about the removable singularity. So, by the residue theorem, your integral is equal to$$2\pi i\left(1\times\operatorname{res}_{2\pi i}\left(\frac z{e^z-1}\right)+2\times\operatorname{res}_{-2\pi i}\left(\frac z{e^z-1}\right)\right),$$where the numbers $1$ and $2$ are the winding numbers of your loop with respect to $2\pi i$ and to $-2\pi i$ respectively. Now, you can compute the residues using the formula$$\operatorname{res}_a\left(\frac{f(z)}{g(z)}\right)=\frac{f(a)}{g'(a)}.$$ |
Regarding valuation on function field | Write $$g(X) = p(X)^n \frac{h(X)}{r(X)}$$ where $p(X)$ does not divide $h(X)$ or $r(X)$. This implies that $X-\alpha$ does not divide $h(X)$ nor $r(X)$, since $p(X)$ was the minimal polynomial of $\alpha$.
So the $X-\alpha$ valuation of $g$ is $n$ times the $X-\alpha$ valuation of $p$. Why is the latter odd?
Essentially, this is because $\mathrm{char} \, k \ne 2$. If we were over a field of characteristic zero, $p$ would be separable (because irreducible) and the valuation would be one. We aren't necessarily, but in the positive characteristic case, the $(X-\alpha)$-valuation of $p$ is the inseparability degree of $k(\alpha)/k$, which is a power of $\mathrm{char} \, k \ne 2$ and therefore odd.
Non hermitic positive definite matrix | Yes, indeed there are non-Hermitian positive-definite matrices, as this wikipedia entry clearly points out.
And even in literature, you'll often find positive-definiteness used in contexts where Hermitian matrices couldn't possibly appear. |
Find closed formula for $a_{n+1}=(n+1)a_{n}+n!$ | Dividing by $(n+1)!$ gives us that $$\frac{a_{n+1}}{(n+1)!}=\frac{a_{n}}{n!}+\frac{1}{n+1}$$
Now substituting $\frac{a_{n}}{n!}=b_{n}$ $$b_{n+1}=b_{n}+\frac{1}{n+1}$$
Thus $b_{n}=b_{0}+H_{n}$, where $H_{n}=\sum_{j=1}^{n}\frac{1}{j}$ is the $n$-th harmonic number, and so $a_{n}=n!\,(a_{0}+H_{n})$. |
How to prove that $\mathcal{D}(\Omega)$ is dense in $\mathcal{D}'(\Omega)$? | The first result you need is the following:
(i) If $u \in \mathcal{D}'(\mathbb{R}^n)$ and $\varphi \in \mathcal{D}(\mathbb{R}^n)$, then $u \ast \varphi \in \mathcal{E}(\mathbb{R}^n)$ and $(u \ast \varphi) \ast \psi = u \ast (\varphi \ast \psi)$ $\forall \psi \in C^{\infty}_c(\mathbb{R}^n)$
We show that $\mathcal{D}(\mathbb{R}^n)$ is dense in $\mathcal{D}'(\mathbb{R}^n)$. This means that if $u \in \mathcal{D}'(\mathbb{R}^n)$ is fixed, and $\lbrace \psi_\epsilon \rbrace_{\epsilon > 0}$ is a standard mollifier, then $u \ast \psi_\epsilon \rightarrow u$ in $\mathcal{D}'(\mathbb{R}^n)$, i.e.
\begin{align*}
\displaystyle \lim_{\epsilon \rightarrow 0^+} \int_{\mathbb{R}^n} (u \ast \psi_\epsilon)(x) \varphi(x) dx = \langle \varphi , u \rangle , \forall \varphi \in \mathcal{D}(\mathbb{R}^n)
\end{align*}
Proof. It is known that for each fixed $\varphi \in \mathcal{D}(\mathbb{R}^n)$ we have $\psi_\epsilon \ast \varphi \rightarrow \varphi$ and $D^\alpha (\psi_\epsilon \ast \varphi) = \psi_\epsilon \ast D^\alpha \varphi \rightarrow D^\alpha \varphi$ uniformly on every compact $K$, for all $\alpha \in \mathbb{N}^n$.
Then, or the characterization of the convergence in $\mathcal{D}(\mathbb{R}^n)$, follows that $\psi_\epsilon \ast \varphi \rightarrow \varphi$ with respect vectorial topology $\mathcal{T}$ of $\mathcal{D}(\mathbb{R}^n)$.
Consider $\widetilde{\varphi}(x):=\varphi(-x)$. For $u \in \mathcal{D}'(\mathbb{R}^n)$ and $\varphi \in \mathcal{D}(\mathbb{R}^n)$ the convolution is well-defined by
\begin{align*}
\displaystyle (u \ast \varphi)(x)=\langle \varphi(x-y),u(y) \rangle = \langle \varphi(x -\cdot), u(\cdot) \rangle\,,
\end{align*}
so that in particular $(u \ast \widetilde{\varphi})(0)=\langle \widetilde{\varphi}(-\cdot) , u \rangle = \langle \varphi , u \rangle$.
Therefore, by (i) and since $u \in \mathcal{D}'(\mathbb{R}^n)$ is continuous and $\psi_\epsilon \ast \widetilde{\varphi} \rightarrow \widetilde{\varphi}$ in $\mathcal{D}(\mathbb{R}^n)$, it follows that
\begin{align*}
\langle \varphi , u \rangle &= (u \ast \widetilde{\varphi})(0) \\
& = \lim_{\epsilon \rightarrow 0^+} (u \ast (\psi_\epsilon \ast \widetilde{\varphi}))(0) \\
& = \lim_{\epsilon \rightarrow 0^+} ((u \ast \psi_\epsilon) \ast \widetilde{\varphi})(0) \\
&=\lim_{\epsilon \rightarrow 0^+} \int_{\mathbb{R}^n} (u \ast \psi_\epsilon)(y)\, \widetilde{\varphi}(-y)\, dy \\
&= \lim_{\epsilon \rightarrow 0^+} \int_{\mathbb{R}^n} (u \ast \psi_\epsilon)(y)\, \varphi(y)\, dy.
\end{align*}
We show that $\mathcal{D}(\Omega)$ is dense in $\mathcal{D}'(\Omega)$.
Proof.
Consider $\Omega_k = \lbrace x \in \Omega : \mathrm{dist}(x, \partial \Omega) > 1/k \rbrace \cap B(0,k)$, $\forall k \in \mathbb{N}$. Then $\Omega_k \subset \Omega_{k+1}$ and $\Omega=\bigcup_{k \in \mathbb{N}}\Omega_k$, and by the smooth version of Urysohn's lemma we have
\begin{align*}
\exists \psi_k \in \mathcal{D}(\mathbb{R}^n) : \mathrm{supp}(\psi_k) \subset \Omega_{k+1}, \psi_k(x)=1 , \forall x \in \overline{\Omega}_k
\end{align*}
In particular, $\mathrm{supp}(\psi_k u) \subset \mathrm{supp}(\psi_k) \cap \mathrm{supp}(u) \subset \mathrm{supp}(\psi_k)$, and $\psi_k u \rightarrow u$ in $\mathcal{D}'(\Omega)$, and $\psi_k u \in \mathcal{E}'(\mathbb{R}^n)$ is a distribution with compact support.
Consider a standard mollifier $\lbrace \eta_\epsilon \rbrace_{\epsilon >0}$ such that $\mathrm{supp}(\eta_\epsilon) \subset B(0,\epsilon)$, and choose $0 < \epsilon_k < \mathrm{dist}(\mathrm{supp}(\psi_k), \partial \Omega_{k+1})$ with $\epsilon_k \downarrow 0$.
By properties of the support convolution's, we have
\begin{align*}
\mathrm{supp}(\eta_{\epsilon_k} \ast (\psi_k u)) \subset B(0,\epsilon_k) + \mathrm{supp}(\psi_k) \subset \Omega_{k+1}.
\end{align*}
This set is compact, since it is closed and bounded; hence $\eta_{\epsilon_k} \ast (\psi_k u) \in \mathcal{D}(\Omega_{k+1}) \subset \mathcal{D}(\Omega)$, and by the preceding result $\eta_{\epsilon_k} \ast (\psi_k u) \rightarrow u$ in $\mathcal{D}'(\mathbb{R}^n)$, hence also in $\mathcal{D}'(\Omega)$. |
Godel's incompletness theorem - proving a statement is false | According to G's Th, it is correct to say that :
"there must exist false sentences that cannot be proven false".
If a "suitable" theory $T$ is consistent, then - by G's Th - there exists a true sentence $G_T$ such that:
$T \nvdash G_T$ and $T \nvdash \lnot G_T$;
this is the incompleteness of $T$.
But if $G_T$ is true, then $\lnot G_T$ is false.
The statement that a sentence $A$ is "provably false (in the theory $T$)" must be translated as :
if $A$ is false, then $T \vdash \lnot A$.
Thus, $\lnot G_T$ is a false sentence of $T$ and $T \nvdash G_T$, i.e. :
$T \nvdash \lnot (\lnot G_T)$
and so we have that $T$ has a false sentence (i.e. $\lnot G_T$) that cannot be proved false. |
Find the complete integral of $(x+y)(p+q)^2 + (x-y)(p-q)^2=1$ | Hint:
Let $\begin{cases}u=x+y\\v=x-y\end{cases}$ ,
Then $\dfrac{\partial z}{\partial x}=\dfrac{\partial z}{\partial u}\dfrac{\partial u}{\partial x}+\dfrac{\partial z}{\partial v}\dfrac{\partial v}{\partial x}=\dfrac{\partial z}{\partial u}+\dfrac{\partial z}{\partial v}$
$\dfrac{\partial z}{\partial y}=\dfrac{\partial z}{\partial u}\dfrac{\partial u}{\partial y}+\dfrac{\partial z}{\partial v}\dfrac{\partial v}{\partial y}=\dfrac{\partial z}{\partial u}-\dfrac{\partial z}{\partial v}$
$\therefore4u\left(\dfrac{\partial z}{\partial u}\right)^2+4v\left(\dfrac{\partial z}{\partial v}\right)^2=1$
$u\left(\dfrac{\partial z}{\partial u}\right)^2+v\left(\dfrac{\partial z}{\partial v}\right)^2=\dfrac{1}{4}$
This is a PDE of the form treated at http://eqworld.ipmnet.ru/en/solutions/fpde/fpde3212.pdf.
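For completeness, a hedged sketch of the separation step the linked form suggests, assuming an additive ansatz $z = F(u) + G(v)$ with separation constant $a$: the equation splits as $u F'(u)^2 = a$ and $v G'(v)^2 = \tfrac14 - a$, so $F(u) = 2\sqrt{au}$ and $G(v) = 2\sqrt{(\tfrac14 - a)v}$ up to constants, giving the complete integral
$$z = 2\sqrt{a(x+y)} + 2\sqrt{\left(\tfrac{1}{4}-a\right)(x-y)} + b.$$ |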
Finding global error in Modified Euler method | I would write the assumptions about the error a little more explicitly, then you have less problems explaining what you do. That the error order is $a$ can be written explicitly as
$$
y_h(t)=y_E(t)+C(t)h^a+\text{higher degree terms}
$$
In the order estimate at $t=1$ the higher degree terms are disregarded, and two values of $h$ are used to eliminate $C(1)$ and compute $a$: two equations for two unknowns. Then you get exactly the computation you have done,
$$
\frac{y_h(1)-y_E(1)}{y_{h/2}(1)-y_E(1)}\approx 2^a
$$
from where you get the estimated value of the order $a$.
I get for a larger collection of step sizes the values
N= 1, h=1.0000, x=1.00, y= 10.54276846159383, a= 1.91118
N= 2, h=0.5000, x=1.00, y= 5.83159536458917, a= 2.10154
N= 4, h=0.2500, x=1.00, y= 4.52294038463724, a= 2.08881
N= 5, h=0.2000, x=1.00, y= 4.37414754577927, a= 2.07703
N=10, h=0.1000, x=1.00, y= 4.18433504965781, a= 2.04429
If you do the same for the explicit midpoint method (=improved Euler), then conversely the numerical values are better for the larger step sizes, but the order estimate converges slower towards $2$.
N= 1, h=1.0000, x=1.00, y= 4.48168907033806, a= 1.22604
N= 2, h=0.5000, x=1.00, y= 4.27769565474792, a= 1.55472
N= 4, h=0.2500, x=1.00, y= 4.17722465081606, a= 1.81270
N= 5, h=0.2000, x=1.00, y= 4.16044848001507, a= 1.86043
N=10, h=0.1000, x=1.00, y= 4.13503447502597, a= 1.94137 |
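
Here is a minimal sketch of the step-halving order estimate, using the explicit midpoint method from the second table; the IVP $y'=y$, $y(0)=1$ on $[0,1]$ is an assumed stand-in, since the original problem is not reproduced here:

```python
import numpy as np

def midpoint_step(f, t, y, h):
    """One explicit midpoint (improved Euler) step."""
    k1 = f(t, y)
    return y + h * f(t + h / 2, y + h / 2 * k1)

def solve(f, y0, t_end, N):
    h = t_end / N
    t, y = 0.0, y0
    for _ in range(N):
        y = midpoint_step(f, t, y, h)
        t += h
    return y

# assumed test problem: y' = y, y(0) = 1, so the exact value at t=1 is e
f = lambda t, y: y
exact = np.e

for N in (1, 2, 4, 5, 10):
    eh  = solve(f, 1.0, 1.0, N)     - exact   # error with step h
    eh2 = solve(f, 1.0, 1.0, 2 * N) - exact   # error with step h/2
    a = np.log2(eh / eh2)                     # estimated order from the ratio
    print(f"N={N:3d}  h={1.0/N:.4f}  a_est={a:.5f}")
```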
Calculate the expression with logarithm | Use the following formulas:
$$\log_a b = \frac{1}{\log_b a}$$ and $$a^{\log_a b} = b.$$
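A quick numerical sanity check of both identities (the values $a=2$, $b=7$ are arbitrary):

```python
import math

a, b = 2.0, 7.0
print(math.log(b, a), 1 / math.log(a, b))  # log_a(b) equals 1 / log_b(a)
print(a ** math.log(b, a))                 # a^(log_a b) recovers b = 7.0
``` |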
Orbits under action of a subgroup on the set of conjugates of a second subgroup | Here is a small example of a group with a subgroup that does not control its own fusion:
Let $G = S_4$, $B=D_8$ be a Sylow 2-subgroup, and $A=Z(B)$. Then $A^G$ has three elements, all contained in $B$, but obviously $A^B$ has only one element $A$. The other two elements of $A^G$ lie in the other orbit.
A (different) general method: Let $G_1$ be a non-regular permutation group and $B_1$ be a point stabilizer (or any non-transitive subgroup with orbits of different sizes). Then $G = A \wr G_1$ and $B = A \wr B_1$ have the property you want. |
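For the $S_4$ example, a brute-force check in pure Python (permutations encoded as image tuples; all names here are ad hoc):

```python
from itertools import permutations

def mul(p, q):                      # composition: (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(q)))

def inv(p):
    r = [0] * len(p)
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

def closure(gens):                  # subgroup generated by gens
    e = tuple(range(4))
    H, frontier = {e}, [e]
    while frontier:
        x = frontier.pop()
        for g in gens:
            y = mul(x, g)
            if y not in H:
                H.add(y)
                frontier.append(y)
    return frozenset(H)

G = [tuple(p) for p in permutations(range(4))]   # S_4
B = closure([(1, 2, 3, 0), (2, 1, 0, 3)])        # D_8 = <(0 1 2 3), (0 2)>
A = closure([(2, 3, 0, 1)])                      # Z(B) = <(0 2)(1 3)>

conj = lambda H, g: frozenset(mul(mul(g, h), inv(g)) for h in H)
AG = {conj(A, g) for g in G}                     # conjugates of A in G
print(len(AG))                                   # 3, and all lie inside B
print(len({conj(A, b) for b in B}))              # B-orbit of A itself: 1
others = [H for H in AG if H != A]
print({conj(others[0], b) for b in B} == set(others))  # the other two form one B-orbit
``` |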
Convert the given set into roster notation: | Yes, your answer and notation are correct. |
Proving $\alpha(x) = f(x,y_0)$ is a continuous function. | Note that
$|\alpha(x) - \alpha(x_{0})| = |f(x,y_{0}) - f(x_{0},y_{0})|$ for all suitable $x$. The function $f$ is continuous by assumption; so $|f(x,y) - f(x_{0},y_{0})|$ can be controlled on some open ball of center $(x_{0},y_{0})$. So $|f(x,y_{0}) - f(x_{0},y_{0})|$ can also be controlled on some open ball of center $x_{0}$; so $\alpha$ is continuous at $x_{0}$. |
How do I tell the rank of the electric susceptibility tensor (and others)? | Sorry, long answer, but I think this should explain some questions in the comments as well. Feel free to ask more questions.
Your conclusion seems correct to me, admittedly, this is my first time learning of this definition of rank. But, since $\chi$ is finite dimensional, and since we are dealing with a physical problem, we can assume that our underlying vector space is $\mathbb{R}^n$.
The dual of the vectors in $\mathbb{R}^n$ would simply be the transpose. That is, column vectors map to row vectors and vice versa. Now, due to some properties of $\mathbb{R}^n$ (namely completeness), we can construct the operator $\chi$ by taking the outer product of two vectors in $V=\mathbb{R}^n$. The outer product is sort of the opposite of the inner product in that it maps $V \times V \to V \otimes V$ (the space of matrices) rather than $V \times V \to \mathbb{R}$. That is, in terms of vectors and matrices, the outer product maps two vectors to a matrix, whereas the inner product maps two vectors to a scalar.
Now, to bring this all together, the inner product can be represented as the juxtaposition of a $1\times n$ vector and an $n \times 1$ vector (assuming a finite dimensional vector space). Outer products on the other hand can be represented as the products of an $n \times 1$ vector and a $1 \times m$ vector, resulting in an $n\times m$ matrix.
Thus, for your case we have a matrix $\chi$ that can be constructed as the outer product of two vectors in $\mathbb{R}^n$. Due to the definition of the outer product, we require that one of the vectors be in the dual (transpose) space and the other be in the normal (non-transpose) space. That leads me to conclude, using your provided definition of rank, that $\chi$ must have rank $(1,1)$. |
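A small numpy illustration of the inner/outer product distinction described above (the vectors are arbitrary):

```python
import numpy as np

u = np.array([1.0, 2.0])   # a vector in R^2
w = np.array([3.0, 5.0])

print(np.outer(u, w))      # 2x2 matrix: (n x 1) times (1 x m)
print(np.inner(u, w))      # scalar: (1 x n) times (n x 1)
# a rank-(1,1) tensor built this way acts on a vector through the inner product:
print(np.outer(u, w) @ np.array([1.0, 1.0]))   # equals u * (w . [1, 1]) = [8, 16]
``` |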
What does it mean to be equal? | We say that $y$ is a square root of $x$ if $y^2 = x$.
We define a function $\sqrt{\cdot} : \mathbb{R}^+ \to \mathbb{R}$ ("the square root function") by $$\sqrt{x} := \text{the nonnegative number $y$ such that $y^2 = x$}$$
So you can see that $\sqrt{x}$ is a square root of $x$.
Not every square root of $4$ is equal to $\sqrt{4} = 2$. It turns out to be the case that $-\sqrt{4} = -2$ is also a square root of $4$.
When we refer to the square root of $x$, we mean $\sqrt{x}$; that is, the unique nonnegative number which squares to give $x$. When we refer to a square root of $x$, we mean any of the numbers which square to give $x$. It is a fact that there are usually two of these, and that one is the negative of the other; so in practice, we may refer to $\pm \sqrt{x}$ if we wish to identify all the square roots of a number. Only the positive one - that is, $\sqrt{x}$ - is the "principal" square root (or "the square root", or if it's really confusing from context, "the positive square root"); but both are square roots. |
Conditional probabilities: Finding the p.m.f. of $\mathbb{E}[X\mid Y]$ | Combining the facts that $\mathbb E\left[X\mid Y=0\right]=\frac14$ and $\mathbb E\left[X\mid Y=1\right]=\frac23$, you can say that
$$\mathbb E\left[X\mid Y=y\right]=\left(\frac23-\frac14\right)y+\frac14=\frac{5y}{12}+\frac14 \quad,\,\,y\in \{0,1\}$$
This means $\mathbb E\left[X\mid Y\right]$ is a random variable given by
$$\mathbb E\left[X\mid Y\right]=\frac{5Y}{12}+\frac14 \quad,\,\text{with probability }1$$
So $\mathbb E\left[X\mid Y\right]$ takes the value $\frac14$ with probability $\mathbb P(Y=0)$ and the value $\frac23$ with probability $\mathbb P(Y=1)$. |
Having difficulty with statistics problem (Stem-leaf) diagram | If your diagram looks like the following:
\begin{align}
2 &\mid 0\ 3\ 5\ 6 \\
3 &\mid 1\ 4\ 6 \\
4 &\mid 7\ 9
\end{align}
Then the stems are the tens digits and the leaves that appear to the right of each stem are the units digit for that stem.
So, from the top stem, we get $20, 23, 25, 26$. From the middle stem, we get $31,34,36$. From the bottom stem, we get $47, 49$. |
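A small script that expands such a diagram back into raw data (assuming tens-digit stems, as above):

```python
# stems are tens digits; leaves are the units digits listed to the right
diagram = {2: [0, 3, 5, 6], 3: [1, 4, 6], 4: [7, 9]}
data = [10 * stem + leaf for stem, leaves in diagram.items() for leaf in leaves]
print(sorted(data))   # [20, 23, 25, 26, 31, 34, 36, 47, 49]
``` |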
Please Help explain a solution in a Geometric distribution problem | I don't understand why there is a 6 in the sum if we only have 5 trials
simply because you are calculating
$$\mathbb{P}[X>5]=\mathbb{P}[X=6]+\mathbb{P}[X=7]+\mathbb{P}[X=8]+\dots$$
Obviously it is simpler to do
$$(1-p)^5=0.05^5\approx 3\cdot 10^{-7}$$ |
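A quick numerical check that the truncated sum agrees with the closed form (assuming $p=0.95$, consistent with $1-p=0.05$ above):

```python
p = 0.95                    # success probability per trial
# P[X = 6] + P[X = 7] + ... truncated far into the (negligible) tail
tail = sum(p * (1 - p) ** (k - 1) for k in range(6, 200))
print(tail, (1 - p) ** 5)   # both are about 3.1e-07
``` |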
Proving that if $\sum_{i =1}^ t {a_i} = \sum_{i=1}^ t {b_i}$, then $\sum_{i =1}^ t {a_i^2} \neq \sum_{i=1}^ t {b_i^2}$. | An oldie: compare $1,4,6,7$ with $2,3,5,8$. |
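A one-line verification that this pair is a counterexample to the claim, with equal sums and equal sums of squares:

```python
a, b = [1, 4, 6, 7], [2, 3, 5, 8]
print(sum(a), sum(b))                              # 18 and 18
print(sum(x**2 for x in a), sum(x**2 for x in b))  # 102 and 102
``` |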
Find the matrix of the restriction $ S = T|_U$, relative to the basis B? | Just express with respect to $B$ the image under $T$ of the two vectors in $B$. |
Why is the inverse tangent function not equivalent to the reciprocal of the tangent function? | The reason that it isn't true is that, regrettably, the notation is not consistent. For this reason, many people avoid using $\tan^{-1}$ and use $\arctan$ instead, and so on for the other trigonometric functions. That said, $\tan^{-1}$ is logical notation, and such notation as $\tan^2$ is illogical. However, the weight of tradition and the simple convenience of the latter notation ensures its survival, and we will probably always be using it. |
Action of $A_n$ on cosets by translation | Hint: Derek Holt's comment plus the fact that intersection of two normal subgroups is normal. |
Prove that $\sum_{k=1}^\infty \sqrt{k+1}-\sqrt{k}$ diverges. | HINT: For each positive integer $N$:
$$\sum_{k=1}^N (\sqrt{k+1}-\sqrt{k}) = \sqrt{N+1}-\sqrt{1}.$$
Why is this so? Can you finish from here? |
Why does the differential $f_{*,p}$ equals $g_{*,p}$ at every point $p\in N$ | Note that $f_{\ast,p}$ is a map $T_pN\rightarrow T_{f(p)}\mathbb{R}$, whereas $g_{\ast,p}$ is a map $T_pN\rightarrow T_{g(p)}\mathbb{R}$. So, in a strict sense, these are not equal, because they do not even have the same codomain. In particular, trying to calculate this by definition as you did is doomed to fail. However, there are natural identifications $T_{f(p)}\mathbb{R}\cong\mathbb{R}\cong T_{g(p)}\mathbb{R}$ (which, I'm sure, Tu has introduced before this point) under which the maps in fact become equal.
I believe a conceptual approach clarifies best why this is true. Let $a\colon\mathbb{R}\rightarrow\mathbb{R},\,x\mapsto x-c$ be the auxiliary "subtraction by $c$" map. Then, $f=a\circ g$ by definition, whence $f_{\ast,p}=(a\circ g)_{\ast,p}=a_{\ast,g(p)}\circ g_{\ast,p}$ by the chain rule. However, $a_{\ast,g(p)}$ is an isomorphism, since $a$ is a diffeomorphism. This already implies that $f_{\ast,p}$ is surjective iff $g_{\ast,p}$ is surjective, so that the critical points of $f$ and $g$ are the same. Furthermore, this also lets us see that these maps are the same under the given identifications, since they identify the abstract derivative with the classical derivative from analysis and $a^{\prime}(g(p))=1$. |
What does $(dy)(x, \Delta x)$ mean? | $$dy(x,\Delta x):= dy(x)(\Delta x)=y'(x)\Delta x.$$
$dy(x)$ is a linear map: it is the linear approximation of $y$ in a neighborhood of $x$, i.e. $dy(x):\mathbb R\to \mathbb R$ is defined by $$dy(x)(h):= y'(x)h.$$
But $h=x+h-x=:\Delta x$, so you should see $\Delta x$ as a "variable" (namely the displacement from $x$), and we denote $$dy(x,\Delta x):=dy(x)(\Delta x)=y'(x)\Delta x.$$
Normally, we write $dy$ instead of $dy(x,\Delta x)$. |
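A concrete instance: for $y = x^2$ we have $dy(x,\Delta x) = 2x\,\Delta x$, so $dy(3,\,0.1) = 0.6$, close to the exact increment $\Delta y = 3.1^2 - 3^2 = 0.61$. |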
Differential Equation With a Twist | $y=e^x$ is a solution
suppose $y = u e^x$ is a solution:
$y' = u' e^x + u e^x$
$y'' = u'' e^x + 2 u' e^x + u e^x$
$(x^2 + x)y'' - (x^2-2)y' - (x+2) y = 0$
$e^x ((x^2 + x)(u'' + 2u') - (x^2-2) u') = 0$
$(x^2 + x)u'' + (x^2+2x+2) u' = 0$
$u'' = -(x^2+2x+2)/(x^2+x) u'$
$u''/u' = -1 -2/x + 1/(x+1)$
$\ln u' = -x-2\ln x +\ln(x+1) + c$
$u' = C \frac{x+1}{x^2} e^{-x}$
$u = -C\frac{e^{-x}}{x} + C_2$, since $\left(-\frac{e^{-x}}{x}\right)' = \frac{x+1}{x^2}e^{-x}$
$y = \frac{C_1}{x} + C_2 e^x$ |
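A quick symbolic sanity check that both basis solutions satisfy the equation (using sympy; this only verifies the result above):

```python
import sympy as sp

x = sp.symbols('x')
ode = lambda y: (x**2 + x) * sp.diff(y, x, 2) - (x**2 - 2) * sp.diff(y, x) - (x + 2) * y

print(sp.simplify(ode(sp.exp(x))))  # 0
print(sp.simplify(ode(1 / x)))      # 0, so the general solution is C1/x + C2*exp(x)
``` |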
Formulating Solution for Branch and Bound | I would take the following variable:
$y_{ij}=\begin{cases} 1, \ \text{if plant i in mode j is used} \\ 0, \ \text{else} \end{cases}$
At most one mode for each plant
$\sum_{j=1}^3 y_{ij} \leq 1$
At least 225,000 cars have to be produced
$80,000\cdot y_{11}+130,000\cdot y_{12}+190,000\cdot y_{13}+95,000\cdot y_{21}+160,000\cdot y_{22}$ $+230,000\cdot y_{23}+110,000\cdot y_{31}+ 175,000\cdot y_{32}+210,000\cdot y_{33}\geq 225,000$ |
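Since the cost data is not reproduced here, a hedged sketch can at least enumerate the mode assignments that satisfy the two constraints (production figures taken from the formulation above; mode $-1$ encodes an unused plant, which automatically enforces "at most one mode per plant"):

```python
from itertools import product

# production (in cars) of plant i in mode j, from the constraint above
prod = [[80_000, 130_000, 190_000],
        [95_000, 160_000, 230_000],
        [110_000, 175_000, 210_000]]

feasible = []
for modes in product((-1, 0, 1, 2), repeat=3):   # one (or no) mode per plant
    total = sum(prod[i][j] for i, j in enumerate(modes) if j >= 0)
    if total >= 225_000:
        feasible.append((modes, total))
print(len(feasible), "feasible mode assignments, e.g.", feasible[0])
``` |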
Ways to arrange seats where a certain group of people has to be together. | In Q1 and the non-dachshund part of Q2, you're counting the dogs as all being distinguishable. That suggests you should also factor in the order of the four dachshunds in the DDDD block. Otherwise your work looks good. |
Deep Understanding of Independence of Probabilities | There is a difference between mutually exclusive events and independent events (indeed, a very strict difference in the sense that a couple of non-trivial events cannot be both mutually exclusive and independent).
Events $A$ and $B$ are mutually exclusive if it is impossible for both of them to occur simultaneously. To borrow from your example, it is impossible for the same train to be involved in a crash in London (event $A$) and New York (event $B$) at the same time. So $\text{P}(A \cap B) = 0$. Another example is, if a die is rolled once, and $A$ and $B$ are the events of getting a $1$ and $6$ (respectively). Then either $A$ occurs, or $B$ occurs, or neither occurs. But both $A$ and $B$ cannot occur. So $\text{P}(A \cap B) = 0$.
Events $A$ and $B$ are independent if the occurrence (or non-occurrence) of one does not affect the (probability of) occurrence of the other. In other words, $B$ has the same probability of occurrence independent of whether $A$ has occurred or not, and vice versa. To borrow from your example again, the occurrence of a train crash in London (event $A$) does not normally change the likelihood of a train crash (involving some other trains) in New York (event $B$). I say "normally", because of course, in the real world there are several factors at play which we may not know about. For example, if there is a lot of publicity about the train crash in London this week, it might well lower the likelihood of a train crash in New York in the coming few weeks (as people might take more precautions than usual).
So let's take a simpler example. If two dice are rolled at the same time, and $A$ and $B$ are, respectively, the events of getting a $1$ on the first die and a $6$ on the second, then $A$ and $B$ are independent (again, assuming there is no hidden connection between the dice via the atmospheric disturbances they create, gravitational waves, and whatnot!). Can both occur simultaneously? Certainly, why not? It is perfectly conceivable that the first die shows $1$ and the second shows $6$ (as I'm sure you've seen many times, roughly one-thirty-sixth of the time, in fact, if you've played Monopoly). So $\text{P}(A \cap B)$ is not $0$. For fair dice, $\text{P}(A) = 1/6$, and $\text{P}(B) = 1/6$. Then what is the probability that both $A$ and $B$ occur? It is simply $\text{P}(A)\text{P}(B) = 1/36$, because the events are independent.
Now let's look at events that are not independent. You are in Pall Mall right now, and you need to get to Marylebone Station in exactly two turns. What is the probability that this (let's call it event $A$) will happen? You need a $4$ to get to Marylebone Station, and that can happen in exactly two turns only if you get $2$ in the first as well as the second turn. A turn totals $2$ only when both dice show $1$, which has probability $1/36$; since the turns are independent, $\text{P}(A) = (1/36)^2 = 1/1296$. This is what you calculate before throwing the dice. Now you throw the dice and you get a $2$ (let's call this event $B$). Is the probability of $A$ still $1/1296$? Of course not. Now that you know that you've got a $2$ and reached Whitehall, you know that you only need a $2$ in the next turn to reach Marylebone Station, and this has probability $1/36$. Thus, given that $B$ occurred, the probability of occurrence of $A$ changes. We write it in this way $\text{P}(A|B) = 1/36$ (read "Probability of $A$ given $B$"). This is called conditional probability. Conditional probability of $A$ given $B$ is defined as $\text{P}(A|B) = \dfrac{\text{P}(A \cap B)}{\text{P}(B)}$. Thus, $\text{P}(A \cap B) = \text{P}(B)\text{P}(A|B)$ (this is usually called the Multiplication Theorem).
For independent events $A$ and $B$, $\text{P}(A|B) = \dfrac{\text{P}(A \cap B)}{\text{P}(B)} = \dfrac{\text{P}(A)\text{P}(B)}{\text{P}(B)} = \text{P}(A)$. This makes sense, as it essentially says that the probability of $A$ is the same whether $B$ occurs or not. |
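A small simulation backing up the two probabilities above (each turn is the sum of two dice; the seed and trial count are arbitrary choices):

```python
import random

random.seed(1)
roll = lambda: random.randint(1, 6) + random.randint(1, 6)  # one turn: sum of two dice

trials = 1_000_000
hits_A = sum(roll() == 2 and roll() == 2 for _ in range(trials))
print(hits_A / trials, 1 / 1296)    # P(A): both turns total 2

# conditional: among trials whose first turn totals 2, how often does the second?
b_count = a_and_b = 0
for _ in range(trials):
    first, second = roll(), roll()
    if first == 2:
        b_count += 1
        a_and_b += second == 2
print(a_and_b / b_count, 1 / 36)    # P(A | B)
``` |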
Combinatorial reasoning question | Assuming that the inclusion is strict, the answer is $n!$.
Since you need to choose a strictly increasing chain of $n+1$ subsets of a set of cardinality $n$, $A_0$ must be $\emptyset$ and $A_i$ must contain exactly $i$ elements.
So, the number of ways of choosing the new element at each step is:
$A_1$: $n$ ways,
$A_2$: $n-1$ ways,
$\vdots$
$A_n$: $1$ way ($A_n$ must be $\{1,2, \dots , n\}$)
Hence, the answer is $n!$ |
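A brute-force check over all $(n+1)$-tuples of subsets, confirming the count $n!$ for small $n$:

```python
from itertools import combinations, product
from math import factorial

def subsets(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def count_chains(n):
    S = subsets(range(1, n + 1))
    # count tuples (A_0, ..., A_n) with A_0 < A_1 < ... < A_n (strict inclusion)
    return sum(all(t[i] < t[i + 1] for i in range(n)) for t in product(S, repeat=n + 1))

for n in (1, 2, 3):
    print(n, count_chains(n), factorial(n))   # brute force matches n!
``` |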