title | upvoted_answer |
---|---|
What is the expected value in a non-normal distribution? | The mean $\mu$ and the median $\tilde\mu$ are equal if they exist and the pdf $f$ is symmetric with respect to $x=\mu$, that is, if $f(\mu-x) =f(\mu+x)$ for every $x>0$. |
Test if a vector is pointing towards the center of an ellipse | General parametric form
An ellipse in general position can be expressed parametrically as the path of
a point $(X(t),Y(t))$, where
$X(t)=X_c + a\,\cos t\,\cos \varphi - b\,\sin t\,\sin\varphi$
$Y(t)=Y_c + a\,\cos t\,\sin \varphi + b\,\sin t\,\cos\varphi$
as the parameter $t$ varies from $0$ to $2\pi$. Here $(X_c,Y_c)$ is the center of the
ellipse, and $\varphi$ is the angle between the X-axis and the major axis of
the ellipse.
The center is $C(h,k)$ and the point is $P(x_0,y_0)$, so the slope of the line through these two points is $$m=\frac{y_0-k}{x_0-h}$$
The slope of the tangent to the ellipse at $P$ (to which your vector must be perpendicular) is
$$(\frac{dy}{dx})_{x_0,y_0}$$ which is equal to
$$\frac{dy}{dt} \div \frac{dx}{dt}$$
These two lines (the first line and the tangent) should be perpendicular for your question, so the product of their slopes must be $-1$:
$$(\frac{dy}{dx})_{x_0,y_0} = -\frac{x_0-h}{y_0-k}$$
Take the derivative and put your point into the equation. |
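A quick numerical sanity check of this criterion (a Python sketch of my own; the helper names and sample numbers are not from the question): build the parametric point and tangent, and test whether the vector from $P$ toward the center $C$ is perpendicular to the tangent. Note this holds only at the endpoints of the axes of the ellipse.

```python
import math

def ellipse_point_and_tangent(t, a, b, xc, yc, phi):
    """Point on the ellipse and the tangent direction (dX/dt, dY/dt) at parameter t."""
    ct, st = math.cos(t), math.sin(t)
    cp, sp = math.cos(phi), math.sin(phi)
    x = xc + a * ct * cp - b * st * sp
    y = yc + a * ct * sp + b * st * cp
    dx = -a * st * cp - b * ct * sp
    dy = -a * st * sp + b * ct * cp
    return (x, y), (dx, dy)

def points_toward_center(t, a, b, xc, yc, phi, tol=1e-9):
    """True iff the vector from the point toward the center is perpendicular to the tangent."""
    (x, y), (dx, dy) = ellipse_point_and_tangent(t, a, b, xc, yc, phi)
    vx, vy = xc - x, yc - y  # vector from P toward the center
    return abs(vx * dx + vy * dy) < tol

# At t = 0 the point is a vertex (end of the major axis): the normal passes through C.
print(points_toward_center(0.0, a=3, b=2, xc=1, yc=-1, phi=0.5))          # True
# At a generic parameter the normal misses the center.
print(points_toward_center(math.pi / 4, a=3, b=2, xc=1, yc=-1, phi=0.5))  # False
```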
The range of dot product of two vectors under certain constrains. | The maximum is always $1$, which happens whenever $\vec{x}=\vec{y}$,
e.g. for $x_i=y_i=\frac1{\sqrt n}$ for all $i$ or for $\vec{x}=\vec{y}=(1,0,\dots,0)$.
Let $a_n$ be the desired minimum in $n$ dimensions. It measures how far apart the vectors can get, since the dot product of unit vectors is the cosine of the angle between them, and the cosine function decreases for $\theta\in[0,\frac{\pi}2]$. The constraint space is a sector of the $(n-1)$-sphere, the positive quadrant/octant/portion, reduced by the symmetry of permuting the ordinates. For example, for $n=2$, our space is half of the unit circle in the first quadrant, which spans an angle $\frac\pi4$, so that $a_2=\frac1{\sqrt 2}$. In $n$ dimensions, the minimum is $\frac1{\sqrt n}$, which is obtained, for example, when $x_i=\frac1{\sqrt n},y_i=\delta_i$ (i.e. $y_1=1$ and $y_i=0$ for $i>1$). You should be able to prove this with induction, for which you can without loss of generality assume ascending rather than descending order if it is more convenient. |
Variance Reduction Using Antithetic Variates | For example 2, we are aiming to find an approximation for $I= \int_0^{1} \frac{1}{1+x}dx$. To this aim we will do the following:
Let $U$ be a random variable uniformly distributed on $[0,1]$. Then the pdf of $U$, denoted $g$, is given by $$ g(x)=\begin{cases} 1 & x \in [0,1] \\
0 & \text{elsewhere}
\end{cases} $$
Thus given $f(x)= \frac{1}{1+x}$, then $$E[f(U)]=\int_0^{1} f(x) g(x)dx=\int_0^{1} \frac{1}{1+x} dx $$
Thus, using the above equality, we can estimate $I$ by estimating the expectation. Taking a sample $\{ x_i\}_{i=1,\dots,n}$ of $n$ uniform points in $[0,1]$, we can estimate $I$ by $$ I \approx \frac{1}{n} \sum_{i=1}^n \frac{1}{1+x_i} $$ |
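As a sketch of the antithetic-variates idea from the title (my own code, not from the question): pair each sample $u$ with $1-u$. Since $f$ is monotone, $f(U)$ and $f(1-U)$ are negatively correlated, so the averaged estimator has smaller variance than plain Monte Carlo while remaining unbiased.

```python
import math
import random

random.seed(0)

def f(x):
    return 1.0 / (1.0 + x)

n = 100_000
us = [random.random() for _ in range(n)]

# Plain Monte Carlo: average f over n uniform samples.
plain = sum(f(u) for u in us) / n

# Antithetic variates: pair each U with 1 - U and average f(U) and f(1 - U).
anti = sum(0.5 * (f(u) + f(1.0 - u)) for u in us[: n // 2]) / (n // 2)

exact = math.log(2.0)  # I = ln 2
print(plain, anti, exact)
```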
How to check the real analyticity of a function? | This is a difficult question in general. Ideally, to show that $f$ is analytic at the origin, you show that in a suitable neighborhood of $0$, the error of the $n$-th Taylor polynomial approaches $0$ as $n\to\infty$.
For example, for $f(x)=\sin(x)$, any derivative of $f(x)$ is one of $\sin(x)$, $\cos(x)$, $-\sin(x)$, or $-\cos(x)$, and the error given by the $n$-th Taylor polynomial takes the form $\displaystyle \frac{f^{(n+1)}(\alpha)}{(n+1)!}x^{n+1}$ for some $\alpha$ between $0$ and $x$ (that depends on $n$). In absolute value, this is bounded by $\displaystyle \frac{|x|^{n+1}}{(n+1)!}$, that (on any bounded set) approaches $0$ uniformly as $n\to\infty$. This shows that the Taylor series for $f(x)=\sin(x)$ converges to $\sin(x)$, in any neighborhood of $0$ (and therefore everywhere). The same applies to $f(x)=\cos(x)$. A similar argument holds for a variety of functions, including $f(x)=e^x$.
And there are general theorems; for instance, any solution of a linear homogeneous ordinary differential equation with analytic coefficients is analytic (in a small neighborhood), as the differential equation can be used to establish bounds on the error term. The case of sine is an example, as $\sin(x)$ is a solution of $y''=-y$.
But the question is difficult in general. For example, a uniformly convergent series of analytic functions need not be analytic. For instance, consider the Weierstrass function, which in fact is nowhere differentiable.
Given a smooth function $f$ and a point $a$ in its domain, it may be that the formal Taylor series associated to $f$ at $a$ does not converge anywhere. Clearly in that case $f$ is not analytic at $a$. But it may also be that the formal Taylor series converges in an interval, but does not converge to $f$ (identically) in any such interval. Then, again, $f$ is not analytic, but this may be harder to establish. For a short survey of $C^\infty$ nowhere analytic functions, by Dave L Renfro, see here.
In practice, for many analytic functions $f$, analyticity is established not by studying the rate of decay of the error terms, but by "inheritance". For example, $f$ could be the series of term by term derivatives of an analytic function, or its term by term antiderivative, or the result of composing two analytic functions, etc. |
$\tau = \left(\sum_{n = 1}^\infty f_n\right) d\nu + \sum_{n = 1}^\infty \mu_n$ the Lebesgue decomposition of $\tau$? | Yes.
The decomposition:
$$\tau (A) = \sum \tau_n (A) = \sum \left(\int_A f_n \, d \nu + \mu_n(A)\right).$$
By monotone convergence $\sum \int_A f_n d \nu = \int_A \sum f_n d\nu$. Let $f=\sum f_n$. Then $f$ is measurable and nonnegative. Also since $\tau$ is finite, $f\in L^1(\nu)$. Define the set function $\mu= \sum \mu_n$. We show this is a measure. It is clearly nonnegative. Next:
a. $\mu(\emptyset)=0$.
b. If $A_1,A_2,\dots$ are disjoint, then
\begin{align*} \mu (\cup A_j)&= \lim_{N\to\infty}\sum_{n\le N} \mu_n (\cup A_j)\\
&= \lim_{N\to\infty} \sum_{n\le N} \sum_j \mu_n (A_j)\\
&= \lim_{N\to\infty} \sum_j \sum_{n\le N} \mu_n (A_j) \\
&= \sum_j \lim_{N\to\infty} \sum_{n\le N} \mu_n (A_j)\\
&= \sum_j \mu(A_j).
\end{align*}
We have used monotone convergence for the fourth equality, and the third equality is a statement about a finite sum of series of positive terms.
Therefore $\mu$ is a measure. Since $\tau$ is finite, $\mu$ is also finite.
Bottom line: $d\tau = f \, d\nu + d\mu$, where $f\in L^1(\nu)$ and $\mu$ is a finite measure.
By uniqueness of the Lebesgue decomposition, all that remains is to show that $\mu\perp \nu$. Since $\mu_n \perp \nu$ for each $n$, there exists $A_n$ such that $\nu (A_n) =\mu_n (A_n^c) =0$. Let $A=\cup A_n$. Then
$\nu(A)\le \sum_n \nu(A_n) =0$, and
$$\mu(A^c) = \mu( \cap A_n^c) =\sum_n \mu_n (\cap A_n^c) \le \sum \mu_n (A_n^c)=0.$$
Therefore $\mu\perp \nu$. |
SVM : Why can we set 1 in the hyperplane equation? | Very simply, if you choose any number other than 1, you can simply scale it away again. Consider
$$
\mathbf{w}\cdot\mathbf{x}-b=\pm\delta
$$
Now, divide both sides of the equation by $\delta$, and we get
$$
\left(\frac1\delta\mathbf{w}\right)\cdot\mathbf{x}-\frac{b}{\delta}=\pm1
$$
This means we can define $\mathbf{\hat w}=\frac1\delta\mathbf{w}$ and $\hat b=\frac{b}{\delta}$, and we have
$$
\mathbf{\hat w}\cdot\mathbf{x}-\hat b=\pm1
$$
And because the goal is to minimise $\|\mathbf{w}\|$ (in order to maximise the size of the margin, $\frac2{\|\mathbf{w}\|}$), it doesn't matter if we scale $\mathbf{w}$ by some constant such as $\delta$ first - and so, we can use $\mathbf{\hat w}$ in place of $\mathbf{w}$, and the choice of $\delta$ is irrelevant (aside from needing to be a positive real number).
Since it's irrelevant, we might as well make it the simplest possible positive real number: $1$.
As for why it "sets the scale" of the problem, think of it this way: changing $\delta$ would change the scaling of $\mathbf{w}$ (that is, choosing $\delta=2$ would make $\mathbf{w}$ twice as big, for example). And so, just as changing $\delta$ changes the scale, so too does setting $\delta$ set the scale - it keeps it fixed, rather than having it vary from instance to instance. |
Is $2^{|\mathbb{N}|} = |\mathbb{R}|$? | In short: A binary number $0.a_1a_2a_3\ldots$ can be identified with the set $\{n\in \mathbb N\mid a_n\ne 0\}$.
A few details have to be checked, though. |
Prove that the product of the distances from an external point to the two line-circle intersection points equals the square of the tangent length | Let the center be $O$, and drop a perpendicular from $O$ to the secant $AB$, meeting it at $D$. Since the perpendicular from the center bisects the chord, $AD = BD$. Then
$PA \cdot PB = (PD - AD)(PD + BD)$
$= (PD - AD)(PD + AD)$
$= PD^2 - AD^2$
In the right triangle $OPD$:
$OP^2 = OD^2 + PD^2$
$\Rightarrow PD^2 = OP^2 - OD^2$
$\therefore PA \cdot PB = (OP^2 - OD^2) - AD^2 = OP^2 - (OD^2 + AD^2)$
In the right triangle $OAD$:
$OA^2 = OD^2 + AD^2$
$\therefore PA \cdot PB = OP^2 - OA^2 = OP^2 - OT^2 \quad (OA = OT)$
In the right triangle $OPT$:
$OP^2 = PT^2 + OT^2$
$\Rightarrow OP^2 - OT^2 = PT^2$
$\therefore PA \cdot PB = PT^2$ |
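A numeric check of the power-of-a-point identity (a sketch of my own; the specific circle and external point are arbitrary): for any secant through $P$, the product $PA \cdot PB$ equals $OP^2 - r^2 = PT^2$.

```python
import math

O = (0.0, 0.0)
r = 3.0
P = (7.0, 1.0)  # an arbitrary point outside the circle

def secant_product(theta):
    """PA * PB for the secant through P in direction theta (must meet the circle)."""
    d = (math.cos(theta), math.sin(theta))
    # |P + t d|^2 = r^2  =>  t^2 + b t + c = 0 with unit direction d.
    b = 2.0 * (P[0] * d[0] + P[1] * d[1])
    c = P[0] ** 2 + P[1] ** 2 - r ** 2
    disc = b * b - 4.0 * c
    if disc < 0:
        raise ValueError("this direction misses the circle")
    t1 = (-b - math.sqrt(disc)) / 2.0
    t2 = (-b + math.sqrt(disc)) / 2.0
    return abs(t1) * abs(t2)

PT2 = P[0] ** 2 + P[1] ** 2 - r ** 2  # tangent length squared: OP^2 - OT^2

theta = math.atan2(-P[1], -P[0])       # aim roughly at the center => a secant
print(secant_product(theta), PT2)      # both come out ≈ 41
print(secant_product(theta + 0.1))     # same value for a different secant
```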
Proof of Probability Inequality for Conditional Expectation | I just took an exam on this.
$$\begin{align} \mathbb E [Y-Z]^2 =&\mathbb E[(Y-\mathbb E[Y\vert X])+(\mathbb E[Y\vert X]-Z)]^2\\=&\mathbb E[Y-\mathbb E[Y\vert X]]^2+2\mathbb E [(Y-\mathbb E[Y\vert X])(\mathbb E[Y\vert X]-Z)]+ \mathbb E[\mathbb E[Y\vert X]-Z]^2\\
\end{align}
$$
Consider the second term (here $Z = I_{\{X\in B\}}$, a function of $X$):
$$
\begin{align}
\mathbb E [(Y-\mathbb E[Y\vert X])(\mathbb E[Y\vert X]-Z)]&=\mathbb E \{\mathbb E [(Y-\mathbb E[Y\vert X])(\mathbb E[Y\vert X]-Z)\vert X]\}\\
&=\mathbb E \{\mathbb E [(Y-\mathbb E[Y\vert X])(\mathbb E[Y\vert X]-I_{\{X\in B\}})\vert X]\}\\
&=\mathbb E \{(\mathbb E[Y\vert X]-I_{\{X\in B\}})\mathbb E [(Y-\mathbb E[Y\vert X])\vert X]\}\\
&=\mathbb E \{(\mathbb E[Y\vert X]-I_{\{X\in B\}})(\mathbb E [Y\vert X]-\mathbb E[Y\vert X])\}\\
&=0
\end{align}
$$
so
$$\begin{align}\mathbb E [Y-Z]^2 &=\mathbb E[Y-\mathbb E[Y\vert X]]^2+ \mathbb E[\mathbb E[Y\vert X]-Z]^2\\
&\geq \mathbb E[Y-\mathbb E[Y\vert X]]^2
\end{align}
$$
since the last term cannot be negative. |
How to solve tautology without using truth table? | The statement you're trying to prove is false, so cannot be proven.
Take for example
a = It's Saturday
b = It's the weekend
Then your LHS suggests
1) it's not Saturday
and 2) if it's Saturday then it's the weekend
This does not imply that it's not the weekend! It could be Sunday. |
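The counterexample can be confirmed by brute force over all four truth assignments (a small Python sketch):

```python
from itertools import product

def implies(p, q):
    """Material implication p -> q."""
    return (not p) or q

# Brute-force check of whether (not a) and (a -> b) forces (not b).
counterexamples = [(a, b)
                   for a, b in product([False, True], repeat=2)
                   if not implies((not a) and implies(a, b), not b)]
print(counterexamples)  # [(False, True)]: not Saturday, yet still the weekend
```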
Function of (x and Another Function of x) in R^3 | This is a curve on a surface.
Specifically, it is the curve on the surface $z=g(x,y)$ over the graph of the curve $y=f(x)$ in the domain of $g$. |
Define $S(v+null T) = Tv $ where $S:V/(nullT) \rightarrow W$ and $T: V \rightarrow W$. Show $S$ is injective. | He started with an arbitrary vector $v + \operatorname{null} T\in \operatorname{null} S$ and showed that the vector must be the zero vector of $V/\operatorname{null} T$, i.e., $0 + \operatorname{null} T$. Thus, the only possible element of $\operatorname{null} S$ is $0 + \operatorname{null} T$. Since $0 + \operatorname{null} T$ is already in $\operatorname{null} S$, you deduce that $\operatorname{null} S = \{0 + \operatorname{null} T\}$ (here he simply writes $\operatorname{null} S = 0$). |
prime numbers property, Mertens' theorem related sequence | Since $-\log(1-x) = x+\mathcal{O}(x^2)$, for $\alpha > 1$ the product $\prod_{n=2}^\infty (1-n^{-\alpha})$ converges, as does $\prod_{k=1}^\infty (1-p_k^{-\alpha})= \frac{1}{\zeta(\alpha)}$.
For $\alpha \in (1/2,1]$: $\sum_{k=1}^\infty p_k^{-2\alpha}$ converges, so that $$-\log \prod_{k=1}^K (1-p_k^{-\alpha}) =\mathcal{O}(1)+ \sum_{k=1}^K p_k^{-\alpha} \sim \mathcal{O}(1)+\sum_{n=2}^{p_K} \frac{n^{-\alpha}}{\log n} \sim \mathcal{O}(1)+C\frac{p_K^{1-\alpha}}{\log p_K},$$ where the middle asymptotic comes from the prime number theorem,
and by the PNT again $p_K \sim K \log K$.
For $\alpha > 0$ you can make the same reasoning $-\log \prod_{k=1}^K (1-p_k^{-\alpha}) \sim \sum_{k=1}^K p_k^{-\alpha} \sim \sum_{n=1}^{p_K} \frac{n^{-\alpha}}{\log n} \sim C\frac{p_K^{1-\alpha}}{\log K}$
Thus the answer is that $\lim_{K \to \infty} p_K^\alpha \prod_{k=1}^K (1-p_k^{-\alpha}) = 0$ for real $\alpha \in (0,1)$ (or, for complex $\alpha$, when $\Re(\alpha) \in (0,1)$); otherwise it diverges.
In number theory $\sum_p$ means summing over the primes, and $\sum_n$ means summing over the integers. Let $\Re(\alpha) \in (0,1)$, so that everything $\to \infty$,
$L(N) = \sum_{ n=2}^N \frac{1}{\ln n}$ and $\pi(N) = \sum_{p \le N} 1$.
By the PNT $L(N) \sim \pi(N)$ so that $ L(N) = \pi(N)(1+o(1))$. Summing by parts
$$\sum_{n =1}^N \pi(n) (n^{-\alpha}-(n+1)^{-\alpha}) =\pi(N)(N^{-\alpha}-(N+1)^{-\alpha}) +\sum_{p < N} p^{-\alpha} $$
$$\sum_{n =1}^N L(n) (n^{-\alpha}-(n+1)^{-\alpha}) =L(N)(N^{-\alpha}-(N+1)^{-\alpha}) +\sum_{n < N} \frac{n^{-\alpha}}{\ln n} $$
where $N^{-\alpha}-(N+1)^{-\alpha} = \int_N^{N+1} \alpha x^{-\alpha-1}dx \sim \alpha N^{-\alpha-1}$ so that $L(N)(N^{-\alpha}-(N+1)^{-\alpha})$ is much smaller than the sum
and
$$\sum_{n =1}^N \pi(n) (n^{-\alpha}-(n+1)^{-\alpha})=\sum_{n =1}^N L(n)(1+o(1)) (n^{-\alpha}-(n+1)^{-\alpha}) \\ \sim \sum_{n =1}^N L(n)(n^{-\alpha}-(n+1)^{-\alpha})$$
Hence
$$\sum_{p < N} p^{-\alpha} \sim \sum_{2 \le n < N} \frac{n^{-\alpha}}{\ln n}$$
Finally $\frac{d}{d\alpha}\sum_{2 \le n < N}\frac{n^{-\alpha}}{\ln n}=-\sum_{2 \le n < N}n^{-\alpha} \sim \frac{N^{1-\alpha}-1}{1-\alpha}$ and
$$\sum_{p < N} p^{-\alpha} \sim \sum_{2 \le n < N}\frac{n^{-\alpha}}{\ln n} \sim \int_1^\alpha \frac{N^{1-a}-1}{a-1}da\sim \int_1^\alpha \frac{N^{1-a}-1}{\alpha-1}da = \frac{N^{1-\alpha}-2}{(1-\alpha)\ln N}\sim \frac{N^{1-\alpha}}{(1-\alpha)\ln N}$$
as claimed |
About the conditional probability of sum of random variable $X_i$ | By definition
$$
\mathbb{P}(X_1+...+X_k=j|X_k=1)=\frac{\mathbb{P}((X_1+...+X_{k-1}=j-1)\cap(X_k=1) )}{\mathbb{P}(X_k=1)}
$$
Applying independence,
$$
\mathbb{P}((X_1+...+X_{k-1}=j-1)\cap(X_k=1))=\mathbb{P}(X_1+...+X_{k-1}=j-1)\mathbb{P}(X_k=1),
$$
so the $\mathbb{P}(X_k=1)$ terms in numerator and denominator cancel. |
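A quick exact check of this identity for i.i.d. Bernoulli variables (the values $p=3/10$, $k=4$, $j=2$ below are arbitrary choices for illustration), done by enumerating all outcomes with rational arithmetic:

```python
from fractions import Fraction
from itertools import product

# X_1,...,X_k i.i.d. Bernoulli(p); verify by exact enumeration that
#   P(X_1+...+X_k = j | X_k = 1) = P(X_1+...+X_{k-1} = j-1).
p, k, j = Fraction(3, 10), 4, 2

def prob(event, n):
    """Exact probability of an event over n i.i.d. Bernoulli(p) coordinates."""
    total = Fraction(0)
    for xs in product((0, 1), repeat=n):
        if event(xs):
            w = Fraction(1)
            for x in xs:
                w *= p if x == 1 else 1 - p
            total += w
    return total

lhs = prob(lambda xs: sum(xs) == j and xs[-1] == 1, k) / prob(lambda xs: xs[-1] == 1, k)
rhs = prob(lambda xs: sum(xs) == j - 1, k - 1)
print(lhs == rhs, lhs)  # True 441/1000
```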
How do I evaluate this integral $I = \int_{0}^{2 \pi} \ln (\sin x +\sqrt{1+\sin^2 x})\, dx$? | First: $\sinh x = \frac{e^x - e^{-x} }{2} \Longrightarrow \sinh^{-1}x = \ln(x+\sqrt{x^2+1})$.
To get this, just solve the equation $y=\sinh x$ for $x$ (notice that $e^x>0$). The integral becomes:
\begin{align*} \int_0^{2\pi} \sinh^{-1} \sin x\ \mathrm{d}x &= \int_0^{\pi} \sinh ^{-1} \sin x \ \mathrm{d}x + \int_{\pi}^{2\pi} \sinh^{-1} \sin x \ \mathrm{d}x \\ &= \int_0^{\pi} \sinh ^{-1} \sin x \ \mathrm{d}x+ \int_0^{\pi} \sinh ^{-1} \sin (x+\pi) \ \mathrm{d}x \\ &= \int_0^{\pi} \sinh ^{-1} \sin x \ \mathrm{d}x - \int_0^{\pi} \sinh ^{-1} \sin x \ \mathrm{d}x \\ &=0.\end{align*}
(since $\sin(x+\pi) = -\sin x$ and the $\sinh^{-1}$ function is odd) |
Hypothesis Testing Marble Frequencies | Take as a null hypothesis that the probability of any of the $27$ non-white balls being any of the $11$ colours is equal ($p_{hue}=1/11$ for $hue = \text{black}, \text{blue}, \dots, \text{indigo}$).
Now look at the probability of getting $11$ or more of one colour in a sample of $27$. That will allow you to test the hypothesis that there is no favoured hue against the alternative that there is one. If you want the alternative to be that the favoured hue is black, you should use the probability of getting $11$ or more black balls in the sample of $27$. |
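As a sketch of the resulting computation (assuming a colour fixed in advance, say black), the binomial tail probability under the null is:

```python
from math import comb

# P(X >= 11) for X ~ Binomial(27, 1/11): the chance of seeing 11 or more
# balls of one fixed colour under the equal-probability null hypothesis.
n, p = 27, 1 / 11
tail = sum(comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(11, n + 1))
print(tail)  # a tiny p-value: 11 of one pre-specified colour rejects the null
```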
Is this a valid proof that if 5 investors split a payout of \$2,000,000, then at least 1 investor receives at least $400,000? | If each investor gets less than $\$400,000$, then the sum of their payments is less than $\$2,000,000$, a contradiction; thus at least one investor gets at least $\$400,000$. |
Finding the mean of all 9-digit numbers formed from four $4$s and five $5$s | Hint: How many numbers are there? You need to choose four locations for the $4$s. Think of adding up all the numbers in a column. Each column will have some number of $4$s and some number of $5$s. How many of each? What is the mean of all the numbers in a column? You have the same mean of each column, so multiply it by $111\ 111\ 111$ |
Integral of product of spherical Bessel function of first kind with the second | We can use this gadget to compute the integral, although we have to set it up carefully: $j_l(sx)$ and $y_l(sx)$ satisfy the Sturm–Liouville equation
$$ -\frac{d}{dx} \left( x^2 \frac{dy}{dx} \right) +l(l+1)y = s^2x^2 y, $$
so the gadget gives
$$ \int x^2 j_l(sx)y_l(x) \, dx = -x^2\frac{j_l'(sx)y_l(x)-j_l(sx)y_l'(x)}{s^2-1} + C.$$
To find the integral when $s = 1$, we have to take the limit. The series expansion about $s=1$ can be computed using the Taylor expansions to be
$$ \frac{x^2(j_l'(x)y_l(x)-j_l(x)y_l'(x))}{2(s-1)} + \frac{x^2}{4} (j_l'(x)y_l(x)-j_l(x)y_l'(x)) + \frac{x^3}{4}\big(j_l''(x)y_l(x)-j_l'(x)y_l(x)\big) + O(s-1) $$
However, the first two terms are independent of $x$, because the Wronskian of $j_l(x)$ and $y_l(x)$, $W = j_l'(x)y_l(x)-j_l(x)y_l'(x)$, is proportional to $1/x^2$; so we can subtract off a function of $s$ so that the indefinite integral is continuous in $s$ at $s=1$. We hence obtain
$$ \int x^2 j_l(sx)y_l(x) \, dx = \frac{x^3}{4}\big(j_l''(x)y_l(x)-j_l'(x)y_l(x)\big) + C', $$
which we can rewrite by using the recurrence relations and the differential equation if desired. |
Let $A \in \mathbb{R}^{5 \times 5}$ with $\det(A) = -4$. Find $\det(B)$ | Guessing from your "standard notation", use the facts that:
$\det(S_{ij}(c)) = 1$
$\det(P_{ij}) = -1$
$\det(D_i(c)) = c$
$\det(MN) = \det M \cdot \det N$
This yields:
$$
(1)(-1)(\tfrac{1}{8})(-1)(4)(-1)(\det B) = -4 \iff \det B = 8
$$ |
How to simplify an expression | Hint: $(\sqrt{6}-2)^2=10-4\sqrt{6}$. |
Proof of SSS congruence theorem | Two triangles are congruent if their corresponding sides are equal in length and their corresponding angles are equal in size.
If you are given that corresponding sides are equal in length, you can apply the Cosine Rule to obtain that each pair of corresponding angles is also equal. Hence the two triangles are congruent.
For a more basic proof, refer here. |
Finding Multivariable limits using polar coordinates | Use
\begin{align}
x &= r \cos \theta \\
y &= r \sin \theta
\end{align}
So $x^2 + y^2 = r^2$ hence
\begin{equation}
\frac{\sin (x^2 + y^2)}{(x^2 + y^2)^2}
=
\frac{\sin r^2}{r^4}
\end{equation}
Using L'Hopital twice, we get
\begin{equation}
\frac{\sin r^2}{r^4}
\sim
\frac{2\cos\left(r^2\right)-4r^2\sin\left(r^2\right)}{12r^2}
\rightarrow
+\infty
\end{equation} |
Can somebody explain with one example the concepts: Lemma-Hypothesis-Theorem-Assumption-Proof-Axiom-Thesis-Determination-Definition-Proof | A hypothesis is an educated guess, often a statement you want to show or prove (or disprove). A good example is the Riemann Hypothesis, which Riemann formulated in the mid-1800s. It says that the nontrivial zeroes of the function
$$\zeta(s) = \displaystyle \sum_{n \geq 1} \frac{1}{n^s}$$
all have real part $1/2$. We do not currently know whether it is true, which is usually indicative of being a hypothesis.
A proof is an argument that shows some statement is true (or untrue). This is a fundamental concept of mathematics. We do not accept just any hypothesis as true - we usually want proof. Proofs can be simple or very complex. A simple proof might be
Claim: The square of an $\color{#AA0000}{\text{even}}$ $\color{#AA00AA}{\text{integer}}$ is even.
Proof: An integer $n$ is even if it is $\color{#00AAAA}{\text{divisible}}$ by $2$, which means that it is of the form $n = 2k$ for some other integer $k$. In that case, $n^2 = (2k)^2 = 4k^2 = 2(2k^2)$ which is divisible by $2$, and therefore it is also even. $\diamondsuit$
You'll notice that there are three words that are different colors. This is because proofs build off of definitions. It's important to know the meaning of both the proof and the statement in an unambiguous way. Each of the three words have a mathematical definition. For example:
An integer $n$ is $\color{#AA0000}{\text{even}}$ if it is divisible by $2$.
An integer $n$ is $\color{#00AAAA}{\text{divisible}}$ by an integer $k$ if $n = kl$ for some other integer $l$.
Defining the $\color{#AA00AA}{\text{integers}}$ can be a bit more complicated. On the one hand, we all know what the integers are. They're the numbers $\ldots, -2, -1, 0, 1, 2, \ldots$ along with their standard multiplication, addition, and subtraction that we've been learning in schools since we were young.
But it is possible to really dive in. What is a number? What does it mean to add, subtract, multiply? What does it mean for two things to be "equal?" At every step, we can ask how more and more things are defined. Where does it end?
This is a very good and deep question. How can we prove anything? A Greek mathematician named Euclid had a good idea around the 3rd century BCE. He thought there were a few statements about geometry so obvious, and so clear, that everyone could accept their truth. Then every other geometrical statement would be true if they could be reduced down to those universally accepted truths.
Although he split his statements into "common notions" and "postulates", we would now call them all axioms. So one axiom might be that a line can be drawn through two given points. This is Euclid's first postulate. It's very intuitive... sort of.
In other words, an axiom is a starting point for reasoning. But it's slightly more convoluted than this. The "axioms" that are basic "truths" are one type of axiom. Another type are non-logical axioms (which I would link to, but which I can't because this is my first answer and I'm only allowed 2 links), which don't describe fundamental truths but instead are usually defining properties of some system or structure.
For instance, the $\color{#AA00AA}{\text{integers}}$ are often defined axiomatically. This might sounds something like:
The integers are a set $\mathbb{Z}$ with two binary operations $+, \cdot$ satisfying
$a + b = b + a$ and $a \cdot b = b \cdot a$ for all integers $a,b$
There is an integer denoted by $0$ such that $a + 0 = a$ for all integers $a$
... and so on
One could say that an assumption is very similar (interchangeable, even) to the non-logical axiom. It is a statement about structure, not about truth.
As you can see, there's quite a bit here. Sometimes, one breaks up larger proofs into smaller, more manageable pieces. To do this, one breaks up a proof of a big statement into smaller statements that come together to prove the big one. To that end, there is a (very and extremely loose) hierarchy of statements. Roughly speaking,
A Theorem is a big, standalone, desirable statement that has been proved.
A Proposition is a slightly smaller one. Perhaps it could stand on its own, but isn't "good" enough to be a theorem.
A Lemma is smaller still. Often lemmas are used to prove larger propositions or theorems.
A Claim is smaller still, but is used in such a range of places that we shouldn't try to pin it down.
A Corollary is a result that follows directly from a theorem, proposition, lemma, (or claim, I suppose).
The names are all organizational concepts that indicate logical structure and importance. It is not too dissimilar to refactoring code, if you happen to know some programming. Since they have no real meaning, they are used in different ways by different people. Sometimes the same statement has different names. For instance, Burnside's Lemma is also called Burnside's Counting Theorem. So the hierarchy is very loose. But they are all statements that have been proved.
Altogether, you might think that someone makes a hypothesis about something. Then he or she tries to prove it by reducing it to known statements (and ultimately, axioms). They might call the result a theorem and then organize the proof into lemmas and propositions for easier mathematical communications. |
If $\lim \int_0^1 |f_n(x)|=0$ and $|f_n|^2\leq g$ where g is integrable on [0,1], then $\lim \int_0^1 |f_n|^2=0$ | Suppose $\int |f_n|^{2}$ does not tend to $0$. There exists $\epsilon >0$ and a subsequence $(f_{n_k})$ such that $\int |f_{n_k}|^{2} \geq \epsilon$ for all $k$. Since $f_{n_k} \to 0$ in $L^{1}$ it has a subsequence which converges to $0$ a.e. Now apply DCT to $(f_{n_k}^{2})$ to get a contradiction. |
Doubt on displacement of a parabola(Again) | Your equation for the original parabola should read $y = 2\left(x-\dfrac{3}{4}\right)^2+\dfrac{23}{8}$. The displaced parabola will have the equation $y = 2(x-x_v)^2 + y_v$, where $(x_v, y_v)$ is the vertex. Since the axis is $x=1$, $x_v$ is $1$. The point $(2,-1)$ is on the parabola, so we have $-1 = 2(2-1)^2 + y_v \Rightarrow y_v = -1 -2 = -3$. Then, the equation of the displaced parabola is $y = 2(x-1)^2 -3$. |
Lattice Path Enumeration | You are correct, except I don't see why you need to use stars and bars to get the expression $\binom {n+k}{2k}$: if you take $k$ up steps, $k$ down steps, and $n-k$ horizontal steps, then just the usual notion of binomial coefficients is enough to show that there are $\binom{n+k}{2k}$ ways to choose which of the steps are up or down steps (and then it is uniquely determined which ones are up and which ones are down).
The expression
$$
\sum_{k \ge 0} \binom{n+k}{2k}
$$
does have a closed form; if you compute its value for the first few values of $n$, you will see that
$$
\sum_{k \ge 0} \binom{n+k}{2k} = F_{2n+1}
$$
(that is, the $(2n+1)^{\text{th}}$ Fibonacci number).
You can prove this identity algebraically, but it might be a better strategy to find a different combinatorial argument. As a hint, a well-known problem counted by the Fibonacci numbers is the number of ways to tile a $1 \times n$ rectangle by $1 \times 1$ and $1 \times 2$ tiles. |
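The identity is easy to check numerically before hunting for the combinatorial bijection (a small Python sketch of my own):

```python
from math import comb

def fib(m):
    """Fibonacci numbers with fib(1) = fib(2) = 1."""
    a, b = 0, 1
    for _ in range(m):
        a, b = b, a + b
    return a

# Check sum_{k >= 0} C(n+k, 2k) = F_{2n+1} for small n
# (terms with k > n vanish, so the sum is finite).
for n in range(10):
    assert sum(comb(n + k, 2 * k) for k in range(n + 1)) == fib(2 * n + 1)
print("identity verified for n = 0..9")
```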
Distance to the circumference of a circle from an inscribed point | By the cosine formula, we have
$$L^2+r^2-2(L)(r)\cos(\pi-\phi)=(r+h)^2$$
Solving, we have
$$L=-r\cos\phi+\sqrt{2rh+h^2+r^2\cos^2\phi}$$ |
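A quick numerical check that this closed form satisfies the original cosine-rule relation (the sample values of $r$, $h$, $\phi$ are arbitrary):

```python
import math

# Check that the closed form for L satisfies the cosine-rule relation
#   L^2 + r^2 - 2 L r cos(pi - phi) = (r + h)^2
# for a few arbitrary sample values of r, h, phi.
for r, h, phi in [(1.0, 0.5, 0.3), (2.0, 1.0, 1.2), (3.0, 0.1, 2.5)]:
    L = -r * math.cos(phi) + math.sqrt(2 * r * h + h * h + (r * math.cos(phi)) ** 2)
    lhs = L * L + r * r - 2 * L * r * math.cos(math.pi - phi)
    assert abs(lhs - (r + h) ** 2) < 1e-9
print("closed form consistent with the cosine rule")
```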
Formal power series (multiplication, division) | So we have to determine coefficients $a_i$ $(0\leq i\leq 3)$ such that
$$(1-2x+4x^2-8x^3+?x^4)(a_0+a_1x+a_2x^2+a_3x^3+?x^4)=2+3x+5x^2+7x^3+?x^4\ .$$
It is easily seen that the resulting system of equations is triangular. This means that we at once obtain $a_0=2$, then immediately $a_1=\ldots$, and so forth. |
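The triangular back-substitution can be sketched in code (a helper of my own, using exact rational arithmetic); for the series above it recovers $a_0=2$, $a_1=7$, $a_2=11$, $a_3=17$:

```python
from fractions import Fraction

def series_div(num, den, n):
    """First n coefficients of num/den as formal power series (requires den[0] != 0)."""
    a = []
    for i in range(n):
        # Triangular system: den[0]*a_i = num_i - sum_{j<i} den[i-j]*a_j.
        s = Fraction(num[i] if i < len(num) else 0)
        for j in range(i):
            if i - j < len(den):
                s -= den[i - j] * a[j]
        a.append(s / den[0])
    return a

# (2 + 3x + 5x^2 + 7x^3 + ...) / (1 - 2x + 4x^2 - 8x^3 + ...)
print(series_div([2, 3, 5, 7], [1, -2, 4, -8], 4))  # a_0..a_3 = 2, 7, 11, 17
```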
Martingale from the symmetric random walk. | Just realized that this question already answers my question perfectly. |
Euclidean space - Orthonormal basis | Let $f_1, \ldots, f_n$ be the standard orthonormal basis for $\mathbb{R}^n$, and consider the map $ T: \mathbb{R}^n \to E$ defined by $f_k \mapsto e_k$ for all $k$. We have, for all $x \in E$,
$$\|x\|^2 = \sum_{i=1}^n\langle x, e_i \rangle^2 = \sum_{i=1}^n\langle x, Tf_i\rangle^2 = \sum_{i=1}^n \left((T^*x) \cdot f_i\right)^2 = \|T^*x\|^2.$$
Therefore, $T^* : E \to \mathbb{R}^n$ is an isometry. Also, the set $\{e_j\}$ is total: if $\langle e_j,x\rangle=0$ for all $j$, the identity above gives $x=0$. Thus $T$ is surjective.
The inverse of a surjective isometry is its adjoint, which is also an isometry, hence $T$ is an isometry too. Invertible isometries preserve orthonormal bases, by the polarisation identity, so as $T$ maps the standard basis to $e_1, \ldots, e_k$, they necessarily form an orthonormal basis. |
Primitive elements of $\mathbb{F}_5[X]/(X^2-2)$ | Basically you are staring at $\Bbb{F}_5[\sqrt2]$. You hopefully know that $2$ is of (multiplicative) order $4$, so $\sqrt2$ is of order eight. How to find an element of order three? Well, let's recall from complex roots of unity that
$$
\omega=\frac{-1+\sqrt{-3}}2
$$
is such a beast. But, we have the counterpart lying around here! Recall that in $\Bbb{F}_5$ we have $-3=2$, so $\sqrt2$ can serve in the role of $\sqrt{-3}$ well.
So in your field
$$
\omega=\frac{-1+\sqrt2}2=2+3\sqrt2
$$
is of order three. Because $\gcd(3,8)=1$ the product
$$
g=\omega\cdot\sqrt2=2\sqrt2+1
$$
will be of order $3\cdot8=24$. |
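These orders can be verified by brute force, representing $a+b\sqrt2$ as the pair $(a,b)$ with arithmetic mod $5$ (a quick sketch of my own):

```python
# Elements of F_5(sqrt(2)) as pairs (a, b) meaning a + b*sqrt(2), mod 5.
def mul(u, v):
    a, b = u
    c, d = v
    return ((a * c + 2 * b * d) % 5, (a * d + b * c) % 5)

def order(u):
    """Multiplicative order of u in the field with 25 elements."""
    k, w = 1, u
    while w != (1, 0):
        w = mul(w, u)
        k += 1
    return k

print(order((2, 0)),   # 4:  order of 2 in F_5
      order((0, 1)),   # 8:  order of sqrt(2)
      order((2, 3)),   # 3:  order of omega = 2 + 3*sqrt(2)
      order((1, 2)))   # 24: g = 1 + 2*sqrt(2) generates the whole group
```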
Continuous probability distribution with no first moment but the characteristic function is differentiable | Below, there is a discrete example because I misread the word continuous.
However, if $X$ has the density $f$ given by
$$f(x)=\begin{cases}0&\mbox{ if }|x|\leqslant 2,\\
\frac C{x^2\log |x|}&\mbox{ if }|x|>2,\end{cases}$$
then $X$ is not integrable and the characteristic function is given by
$$\varphi(t)=2C\int_2^{+\infty}\frac{\cos(tx)}{x^2\log x}\mathrm{d}x.$$
Let us show that $\varphi$ is differentiable at $0$ and $\varphi'(0)=0$. We start from
$$\frac{\varphi(t)-\varphi(0)}{2tC}=\int_2^{1/t}\frac{\cos(tx)-1}{x^2\log x}\frac{dx}t+\int_{1/t}^{+\infty}\frac{\cos(tx)-1}{x^2\log x}\frac{dx}t=:A(t)+B(t).$$
Then using $|\cos u-1|\leqslant u^2$
$$|A(t)|\leqslant \int_2^{1/t}\frac{t^2x^2}{x^2\log x}\frac{dx}t=t\int_2^{1/t}\frac{dx}{\log x}.$$
Let $y:=\log x$, then $x=e^y$ and $dx=e^ydy$:
$$|A(t)|\leqslant t\int_{\log 2}^{-\log t}\frac{e^y}y\mathrm dy,$$
and splitting the interval at $R$, we get
$$|A(t)|\leqslant te^R/\log 2+R^{-1},$$
hence, letting $t\to 0$ and then $R\to\infty$, $\lim_{t\to 0}A(t)=0$.
Since
$$|B(t)|\leqslant \frac 1t\int_{1/t}^\infty\frac{2}{x^2\log(x)}dx,$$
we obtain after the substitution $s=1/x$ that
$$|B(t)|\leqslant -\frac 2t\int_0^t\frac{ds}{\log s}.$$
Another substitution $tu=s$ yields
$$|B(t)|\leqslant -\int_0^1\frac{du}{\log u+\log t},$$
which converges to $0$ as $t$ goes to $0$.
A similar argument can be used to show the differentiability at each point.
This example is taken from Stoyanov's book Counterexamples in Probability.
Take $X$ such that $$\mathbb P(X=(-1)^kk)=\frac C{k^2\log k},\quad k\geqslant 2.$$
Then $\mathbb E|X|$ is infinite, while the characteristic function
$$\varphi_X(t)=\sum_{k\geqslant 2}\frac C{k^2\log k}e^{it(-1)^kk}$$
is differentiable everywhere: compute $\varphi_X(t+h)-\varphi_X(t)$ approximating $e^{ih(-1)^kk}-1$ by $h(-1)^kk+o(h)$. |
Question on degree sequence of eulerian, hamiltonian bipartite graph | Since it's Hamiltonian it contains a $C_{10}$ subgraph, and it has minimum degree at least $2$. Consequently, its bipartition (which exists, since it's bipartite) must have two parts of size $5$, so the graph has maximum degree at most $5$. Moreover, it's connected and Eulerian, so every vertex has even degree: necessarily $2$ or $4$ in this case.
It has 18 edges, so the degrees sum to 36, by the Handshaking Lemma. Thus the degree sequence must be $(4, 4, 4, 4, 4, 4, 4, 4, 2, 2)$. Two examples are below (and hopefully it's clear how I constructed them):
They're Hamiltonian, as identified by the green Hamilton cycles. They're Eulerian because they have all-even degrees. They're bipartite, since the vertices are $2$-colored. And they're non-isomorphic, since in the first example the two degree-$2$ vertices are adjacent, and in the second example they're non-adjacent. |
$C^2=A^2+B^2-2AB \cdot\cos(c)$ getting a different answer than creating a third triangle with the distance formula? | The second point has coordinates $(98, -54.3)$. |
What are the maps of these closed sets in $\mathbb R^3 \mapsto \mathbb R$ | Either kind of ellipsoid is closed and path-wise connected in $\Bbb R^3$. $H$ is a continuous function so the image of the ellipsoid under the map is also closed and path-wise connected in $\Bbb R$. The values of $|x|$, $|y|$, and $|z|$ for points in either ellipsoid are bounded by $a^2r^2$, $b^2r^2$, and $c^2r^2$ respectively, so by the triangle inequality the values of $H$ are also bounded:
$$|x+y+z|\le |x|+|y|+|z|\le a^2r^2+b^2r^2+c^2r^2$$
So the image under $H$ is non-empty, closed, bounded, and path-wise connected in $\Bbb R$. That means the image is a closed interval.
You could even use Lagrange multipliers or a parametrization of the ellipsoid to find out exactly which closed interval, if needed.
All that also applies to the closed ball and the function $f$. The main difference is that the closed interval in that case is not necessarily symmetric with the point zero, due to the lack of symmetry in $f$. |
Basic arboricity example | Here is an inductive construction:
Assume you have a decomposition of $K_{2n-2}$ into $n-1$ trees. We can build a decomposition of $K_{2n}$ into $n$ trees in the following way: for $i = 1, \ldots, n-1$, in the $i$-th tree:
The edges between $v_1, \ldots v_{2n-2}$ in the $i$-th tree are the same as in the $K_{2n-2}$ construction.
$v_{2n-1}$ is connected to $v_{i}$
$v_{2n}$ is connected to $v_{n-1+i}$
Then let the $n$-th tree be the one formed by connecting $v_{2n-1}$ with $v_n, \ldots v_{2n-2}$, $v_{2n}$ with $v_1, \ldots v_{n-1}$ and connecting $v_{2n-1}$ and $v_{2n}$ together. |
Optimize $\max _{x_1,x_2,...,x_N} N , \text{ s.t.} \sum_{i=1}^N f(x_i) \le a$ | The obvious guess is going to be that the optimal solution will take the form $x_1=x_2=\dots=x_N$ for some $N$. Thus, this suggests solving the following optimization problem:
$$\begin{align*}
\text{maximize } \; &N\\
\text{subject to } \; &N f(x) \le a, N g(x) \le b, Nx \le c
\end{align*}$$
where $x$ ranges over $\mathbb{R}$ and $N$ ranges over $\mathbb{N}$. Now you have only a two-dimensional optimization problem.
For instance, you could iterate over $N=1,2,3,\dots$ and for each fixed $N$, search for $x \in (-\infty,c/N]$ such that $f(x) \le a/N$ and $g(x) \le b/N$. Depending on the form of $f,g$, this might be feasible to solve.
If this is possible, you could speed it up by using bisection search to find the largest such $N$ (i.e., start by iterating over $N=1,2,4,8,16,\dots$; then when you find $N_0,N_1$ such that $N_0$ is feasible and $N_1$ isn't feasible, use binary search to find the largest $N$ such that $N_0 \le N < N_1$ and $N$ is feasible).
Better yet, define
$$h(x) = \max(f(x)/a, g(x)/b, x/c).$$
You can verify that $N,x$ is a feasible solution if and only if $h(x) \le 1/N$. Thus, to maximize $N$, it suffices to find $x^*$ that minimizes $h(x)$, and take $N=\lfloor 1/h(x^*) \rfloor$. Since $h:\mathbb{R} \to \mathbb{R}$ is a unidimensional function, minimizing $h(x)$ might be feasible via a number of methods, depending on the specific properties of $f,g$.
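As a rough numerical sketch of this recipe (the functions $f$, $g$ and budgets $a$, $b$, $c$ below are hypothetical, chosen only for illustration):

```python
from math import floor

# Hypothetical constraint functions and budgets -- NOT from the question.
f = lambda x: (x - 1) ** 2   # budget a
g = lambda x: (x + 1) ** 2   # budget b
a, b, c = 4.0, 4.0, 10.0

def h(x):
    # (N, x) is feasible iff h(x) <= 1/N
    return max(f(x) / a, g(x) / b, x / c)

# Crude grid search for the minimizer of the one-dimensional h.
xs = [i / 1000 for i in range(-3000, 3001)]
x_star = min(xs, key=h)
N = floor(1 / h(x_star))
print(x_star, N)  # here the minimizer is x = 0, giving N = 4
```

In practice one would replace the grid search with whatever one-dimensional minimizer suits the smoothness of $f$ and $g$.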
Decimal Representation vs Decimal Expansion | Since $1234.56$ is shorthand for $1 \times 10^3 + 2 \times 10^2 + 3 \times 10^1 + 4 \times 10^0 + 5 \times 10^{-1} + 6 \times 10^{-2}$, there is really no difference between these possibilities. And there is also no difference between decimal expansion and decimal representation. |
Solve the integral $ \int{\frac{x-\sqrt{x^2-5x+6}}{x+\sqrt{x^2+5x+6}}}dx $ | Hint:
$\int\dfrac{x-\sqrt{x^2-5x+6}}{x+\sqrt{x^2+5x+6}}~dx$
$=\int\dfrac{x-\sqrt{(x-3)(x-2)}}{\sqrt{(x+3)(x+2)}+x}~dx$
$=\int\dfrac{\left(x-\sqrt{(x-3)(x-2)}\right)\left(\sqrt{(x+3)(x+2)}-x\right)}{\left(\sqrt{(x+3)(x+2)}+x\right)\left(\sqrt{(x+3)(x+2)}-x\right)}~dx$
$=\int\dfrac{x\sqrt{(x+3)(x+2)}}{5x+6}~dx+\int\dfrac{x\sqrt{(x-3)(x-2)}}{5x+6}~dx-\int\dfrac{x^2}{5x+6}~dx-\int\dfrac{\sqrt{(x+3)(x+2)}\sqrt{(x-3)(x-2)}}{5x+6}~dx$ |
If $f(x , t)$ is continuous when $ x \in [a , b]$ , then $F'(t) = f( t , t)$ for $t \in [a , b]$. | Take $f(x,t)=t$ for a counter-example.
Your proof only shows that the derivative of $\int_a^{t} f(x,c)\,dx$ is $f(c,c)$ (which is of course true). When you differentiate $F$ you cannot keep $t$ in $f(x,t)$ fixed at $t=c$. You will have to vary it. $F'(c)=\lim_{h \to 0} \frac {\int_a^{c+h} f(x,c+h) dx-\int_a^{c} f(x,c) dx } h$.
Graphing the complex function | In the case of Möbius transformations, you don't need any software. Consider the map $f : \overline{\mathbb{C}}_z \to \overline{\mathbb{C}}_w$ given by $w = (1-z)^{-1}$. It follows that $z = (w-1)w^{-1}$. If $|z| = 1$ then $|(w-1)w^{-1}| = 1$ and so $|w| = |w-1|$. The image of the unit circle is the perpendicular bisector of $w=0$ and $w=1$, i.e. the line parallel to the imaginary axis that passes through $w = \frac{1}{2}$.
In general, if $f : \overline{\mathbb{C}}_z \to \overline{\mathbb{C}}_w$ is given by
$$w = \frac{az + b}{cz + d} \, . $$
where $(a:b:c:d) \in \mathbb{CP}^3$ then
$$z = \frac{dw-b}{cw-a} \, . $$
The image of the unit circle is given by setting $|z| = 1$ and so $|cw-a| = |dw-b|$ is the equation of the image in the $w$-sphere. |
Let S be the set of all triangles in the xy plane, each having one vertex at the origin and the other two vertices lie on the coordinate.. | $100 = 2^2 \cdot 5^2$.
So possible integer values of $a$ comes from three choices for factor $2$ - either factor $2$ is not present, multiplied one time or multiplied two times $(2^0, 2^1$ or $ 2^2)$. We have same three choices for $5$. We also observe as we choose $a$, $b$ is automatically decided and so we obtain ordered pair $(a, b)$.
That gives us total of $3\cdot 3 = 9$ combinations in one quadrant. We multiply by $4$ to obtain $36$ possible combinations in total across all $4$ quadrants. |
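The count can be checked by brute force (a quick sketch; the factor $4$ accounts for the four quadrant/sign choices):

```python
# Ordered pairs (a, b) of positive integers with a*b = 100.
pairs = [(a, 100 // a) for a in range(1, 101) if 100 % a == 0]
print(len(pairs), 4 * len(pairs))  # → 9 36
```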
n-degree neighborhood of a node v | If this is where the definition is from:
In link prediction, graph distance plays a primary role in determining the imbalance ratio. We define the $n$-degree neighborhood of a node $v_i$ as the set of nodes exactly $n$ hops away from $v_i$. --- Lichtenwalter
et al., New Perspectives and Methods in Link Prediction (pdf).
Then from the context, "the $n$-degree neighborhood" would most likely mean the set of vertices at distance $n$. The use of "degree" here will be in the same sense as in six degrees of separation.
If this is correct, $v_1$ has just $v_4$ as the $2$-degree neighbor. |
Increase rectangle by percentage | Say the picture you have has width $w$ and length $l$; then the total area would be $w\times l = A$.
Say you want to scale the image so that its area becomes some percentage $P$ of the original, written in decimal form. I assume you can convert percentages to and from decimals, so $50\%$ is $0.5$ and $33.33333333(\dots)\%$ is $0.3333(\dots)$
What does that mean? Well, if the new width and length are $w'$ and $l'$, it means we want
$$\frac{w'l'}{wl} = P$$
Also we know we want to scale the images down, so we could say that $w' = aw$ and $l' = bl$ for some values of $a,b$ you want to find out. If you want the aspect ratio to be the same, we need that $a = b$. Now, if we go to the last equation I wrote and substitute $w'$ by $aw$ and $l'$ by $al$ we get
$$\frac{a^2wl}{wl} = P \iff a^2 = P \iff a = \sqrt{P}$$
Therefore, if you want the new image to get $P\times 100\%$ of the size of the old one, just multiply each side by $\sqrt{P}$.
If you want to reduce the image to $50\%$ of the size, $P = 0.5$ and $\sqrt{P} \approx 0.707$. Or if you want to make the image have $400\%$ of the size, $P = 4 \implies \sqrt{P} = 2$, and you just multiply each side by $2$. |
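A minimal sketch of the rule (the helper name and example dimensions are mine):

```python
from math import isclose, sqrt

def scale_to_area_fraction(w, l, P):
    """Scale both sides by sqrt(P) so the area changes by the factor P,
    keeping the aspect ratio unchanged."""
    s = sqrt(P)
    return w * s, l * s

w, l = 1920, 1080
w2, l2 = scale_to_area_fraction(w, l, 0.5)   # halve the area
print(isclose(w2 * l2, 0.5 * w * l))         # area ratio is P → True
print(isclose(w2 / l2, w / l))               # aspect ratio preserved → True
```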
Geometric progression, how to find $x$ | Hint: What does it mean to be geometric? It means you get the next term by multiplying the previous by a common ratio, $r$. Then $x=8r$ and $50=xr$... can you solve these two equations for $x$? |
calculate number of ways to achive opposite Vertex | Actually I have figured it out. If anyone is interested, here is a sketch of solution.
Let's say our starting point is called $A$, the next one (going counterclockwise) is called $B$, and so on: we label all the vertices with letters of the alphabet. Our end point will then be called $E$.
Let $a_n$ be the number of ways to get from $A$ to $A$ making $n$ moves, $b_n$ the number of ways to get from $A$ to $B$ making $n$ moves, and so on. We are interested in $e_n$, as it is the number of ways to get to $E$ from $A$. We form a system of equations:
$$a_n=b_{n-1}+h_{n-1}=2b_{n-1}$$
$$b_n=a_{n-1}+c_{n-1}$$
$$c_n=b_{n-1}+d_{n-1}$$
$$d_n=c_{n-1}$$
$$e_n=d_{n-1}+f_{n-1}=2d_{n-1}$$
Now we calculate $e_{n+1}=2d_n=\cdots$ and repeatedly substitute the system of equations to get an expression in the $e_{n-i}$ only. We get: $e_{n+1}=4\cdot e_{n-1}-2\cdot e_{n-3}$ (for example, $e_4=2$, $e_6=8$, $e_8=28$, and indeed $4\cdot 8-2\cdot 2=28$).
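The system above is easy to iterate directly, which gives a sanity check on the answer (walks are counted up to their first arrival at $E$, which is why $e$ does not feed back into $d$):

```python
# Iterate the system of recurrences for the octagon walk counts.
N = 20
a, b, c, d, e = ([0] * (N + 1) for _ in range(5))
a[0] = 1  # start at A with zero moves made
for n in range(1, N + 1):
    a[n] = 2 * b[n - 1]
    b[n] = a[n - 1] + c[n - 1]
    c[n] = b[n - 1] + d[n - 1]
    d[n] = c[n - 1]          # walks stop at E, so e does not feed back
    e[n] = 2 * d[n - 1]
print(e[4], e[6], e[8], e[10])  # → 2 8 28 96
```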
Sobolev Space norm and Beppo-Levi Space norm | The Sobolev norm is also defined for regions in $\mathbb{R}^n$. You would typically
have
$$\|f\|^p=\sum_{(i_1,\ldots,i_n)}\int\left|
\frac{\partial^I f}{\partial x_1^{i_1}\cdots\partial x_n^{i_n}}
\right|^pd\mathbf{x}$$
where $I=i_1+\cdots+i_n$ and the sum is over all tuples with $0\le I\le m$. |
Waves in spaces of even dimension | The phenomenon behind this is usually referred to as Huygens' principle, and can be described a little bit more precisely as the following:
Say you have the solution $u(x,t)$ to an initial-value problem for the wave equation in $x\in\mathbb{R}^n$:
$$\Delta u - \frac{1}{c^2} u_{tt} = 0,\quad u(x,0) = \phi,\quad u_t (x,0) = \psi$$
It turns out $u(x,t)$ depends only on the values of $\phi$ and $\psi$ on the surface of a ball $B$, centered at $x$ and of radius $ct$, for odd dimensions $n \geq 3$. Heuristically, that means the initial "disturbance" always propagates out at $c$, so the "size" of the nonzero part of the solution (support) is invariant with time. For even dimensions, the value of $u(x,t)$ is also dependent on everything inside that ball $B$ (i.e. things don't just propagate at $c$), so the "size" of the non-zero part of the solution (support) isn't invariant with time.
A short explanation of why it happens is as a result of the fact that there is no transformation in even dimensions that can turn a wave equation into a 1-D problem, which is the case for odd dimensions $n \geq 3$. It is more thoroughly described and fully derived in Evans' book or here.
A slightly more beautiful but more esoteric explanation comes from Balasz, who noted that one can obtain solutions to the wave equation by contour integration after performing a Wick rotation. The integrands of these possess a differing type of singularity based on the dimensional parity; a branch point in even dimensions and a pole in odd dimensions. And when calculating the value of the contour integral in the odd $n$ case, one needs to refer only to one time value (that of the pole), while this is impossible in the even $n$ case.
I should also point out that a system of waves in even dimensions actually does asymptotically decay to nothing everywhere inside of the ball $B$ (i.e. everything in the interior of $B$ eventually destructively interferes with itself), as shown in this paper by Bob Strichartz; he notes this can be used to justify why the classical prediction of certain observables emerges out of free quantum wave-functions. So take Jim Holt's statement with a grain of salt! |
Semisimple objects in abelian categories | Even with the extra condition that $\mathcal{A}$ is a Grothendieck category, it may still have no simple objects. I think the following is the easiest example I know.
Let $R$ be a (necessarily non-noetherian) commutative local ring with non-zero maximal ideal $\mathfrak{m}$ satisfying $\mathfrak{m}^2=\mathfrak{m}$. Let $\mathcal{C}$ be the category of $R$-modules and let $\mathcal{D}$ be the full subcategory of modules annihilated by $\mathfrak{m}$; i.e., of semisimple modules.
Then $\mathcal{D}$ is a full abelian subcategory of $\mathcal{C}$ closed under coproducts. An extension in $\mathcal{C}$ of two objects of $\mathcal{D}$ is a module for $R/\mathfrak{m}^2=R/\mathfrak{m}$, and so $\mathcal{D}$ is closed under extensions. So $\mathcal{D}$ is a localizing subcategory of $\mathcal{C}$, which implies that the quotient category $\mathcal{A}=\mathcal{C}/\mathcal{D}$ is a Grothendieck category.
Suppose $M$ is an $R$-module. $M$ has a maximal semisimple quotient $M'=M/M\mathfrak{m}$, and in turn $M\mathfrak{m}$ has a maximal semisimple submodule $M''=\operatorname{soc}(M\mathfrak{m})$.
Let $N=M\mathfrak{m}/M''$. Then $M\cong M\mathfrak{m}\cong N$ in $\mathcal{A}$. Since we know that semisimple modules are closed under extensions, $N$ can have no non-zero semisimple quotients or submodules without contradicting the maximality of the quotient $M/M\mathfrak{m}$ or the submodule $\operatorname{soc}(M\mathfrak{m})$.
Suppose $N'$ is a proper non-zero $R$-submodule of $N$. Since neither $N'$ nor $N/N'$ can be in $\mathcal{D}$, $N$ is not simple in $\mathcal{A}$.
So $\mathcal{A}$ is a Grothendieck category with no simple objects. |
what is multiplicative group of all integers coprime with $N$ called? | It's called the group of units mod n, often written $(\mathbb{Z}/n\mathbb{Z})^\times$.
By the way, the superscript-cross notation commonly refers to the group of units in the ring, so this is an example of that notation. |
Given $\det(A)$ and $\det(B)$, is my calculation of $\det(-2B^T B A)$ correct? | No. Notice that for a matrix $A\in\mathcal M_n(\Bbb F)$ and $\lambda\in\Bbb F$
we have
$$\det(\lambda A)=\lambda^n\det(A)$$ |
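A quick pure-Python check on a $2\times 2$ example (arbitrary matrices of my choosing, so here $n=2$ and $\det(-2B^{T}BA)=(-2)^{2}\det(B)^{2}\det(A)$):

```python
def matmul(X, Y):
    # product of two 2x2 matrices
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[1, 2], [3, 5]]   # det(A) = -1
B = [[2, 1], [0, 3]]   # det(B) = 6
Bt = [[B[j][i] for j in range(2)] for i in range(2)]
M = [[-2 * x for x in row] for row in matmul(matmul(Bt, B), A)]
print(det2(M), (-2) ** 2 * det2(B) ** 2 * det2(A))  # → -144 -144
```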
Fourier series $\sum_{m=0}^\infty \frac{\cos (2m+1)x}{2m+1}$ | Rewrite the series as
$$\Re{\left [ \sum_{n=0}^{\infty} \frac{\left ( e^{i x}\right )^{2 n+1}}{2 n+1} \right]} = \Re{[\text{arctanh}{(e^{i x})}]} = \frac12 \Re{\left[\log{\left(\frac{1+e^{i x}}{1-e^{i x}}\right)}\right]}$$
With some manipulation, noting that $\log{z} = \log{|z|} + i \arg{z}$, we find that
$$\sum_{n=0}^{\infty} \frac{\cos{(2 n+1) x}}{2 n+1} = \frac12 \log{\left |\cot{\frac{x}{2}}\right |}$$ |
Finding a probability function for the max and expectations | (1) The complicated answer would be to tell you to look up "order statistics". However, this is a bit simpler. The maximum is $\le c$ if and only if each of the random variables is $\le c$. Since they are independent, that's just the probability that $X\le c$ times the probability that $Y\le c$ times the probability that $Z\le c$. But each of those is $0$ if $c\le 1$, $(c-1)/2$ if $1\le c \le 3$, and $1$ if $c\ge 3$. Thus, the probability distribution function $F(c)$ for $max(X,Y,Z)$ is $0$ if $c\le 1$, $(c-1)^3/8$ if $1\le c\le 3$ and $1$ if $c\ge 3$.
If in fact what you needed was the density $f$ of $max(X,Y,Z)$ that's easy now since it's just the derivative of $F$, i.e., the density is $\frac{3(c-1)^2}{8}$ if $1\le c \le 3$ and $0$ otherwise.
(2) You don't actually need to compute the distribution of $Z$ here. Just use that expectation is linear. $E(Z)=E(X-Y)=E(X)-E(Y)=0.5-0.5=0$. |
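A seeded Monte Carlo check of part (1), assuming $X,Y,Z$ are i.i.d. uniform on $[1,3]$ (which is what the piecewise CDF above suggests):

```python
import random

random.seed(0)
trials = 200_000
hits = sum(
    max(random.uniform(1, 3), random.uniform(1, 3), random.uniform(1, 3)) <= 2
    for _ in range(trials)
)
phat = hits / trials
print(phat)  # should be close to F(2) = (2-1)^3 / 8 = 0.125
```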
What is $\int_X +\infty\ d\mu$ if $\mu(X) = 0$? | The generally accepted definition for a simple function is: a measurable function $g$ is simple if and only if its image is finite and is contained in $\mathbb{R}$, thus the function defined by $s:=\infty \mathbf{1}_{X}$ is not simple because $\infty\notin \mathbb{R}$.
The integral of $s$ is zero, and this follows directly from the definition of the integral of Lebesgue, that is, if $f$ is a non-negative $\mu$-measurable function then
$$
\int_{X} f \mathop{}\!d \mu :=\sup\left\{\int_{X}g \mathop{}\!d \mu : g \text{ is simple and }g\leqslant f\right\}
$$
Now: let $g:=\sum_{k=1}^n c_k \mathbf{1}_{A_k}$ where each $c_k\in \mathbb{R}$ and each $A_k$ is $\mu$-measurable; then $g$ is simple and its integral (with respect to $\mu$) is defined by $\sum_{k=1}^n c_k \mu(A_k)$.
Therefore it follows that the integral of any simple function in a set of measure zero is zero, because
$$
\int_{X}g \mathop{}\!d \mu := \int \mathbf{1}_{X}g \mathop{}\!d \mu = \int\left(\sum_{k=1}^n c_k \mathbf{1}_{A_k}\mathbf{1}_{X}\right) \mathop{}\!d \mu\\
=\int \left(\sum_{k=1}^n c_k \mathbf{1}_{A_k \cap X}\right) \mathop{}\!d \mu = \sum_{k=1}^n c_k \mu(A_k \cap X)=0
$$
because $\mu(A_k \cap X)=0$ for each $k$. Therefore $\int_X s \mathop{}\!d \mu =0$.
However, if you have a different definition of simple function that includes the value $\infty$ for the $c_k$, then the result cannot be proved, it must be assumed as an axiom or convention in the context of Lebesgue integration theory. |
Markov chain Monte Carlo where we modify the Markov chain | Once you have reversibility of transition function $P$ w.r.t. to distribution $\pi$, that is
$$ \forall_{A,B} \int_A P(x, B)d\pi(x) = \int_B P(y, A)d\pi(y), $$
you get that $\pi$ is a stationary distribution
$$
\pi P(A) =\int_X P(x, A)d\pi(x) \overset{reversibility}= \int_A P(y, X)d\pi(y) = \int_A 1\ d\pi(y) = \pi(A).
$$ |
If $\int_E f\,d\mu=\int_E g\,d\mu$ then $f=g$ a.e? | Let $A\neq B$ be two measurable subsets of $E$ such that $\mu(A)=\mu(B)$, and let $f=I_A$ and $g=I_B$ be their indicator functions. Then:
$$\int_E I_A\,d\mu=\int_E I_B\,d\mu \iff \mu(A)=\mu(B)$$
but clearly $f\neq g$. |
What if the angle between these two unit vectors? | If $a - \sqrt{2} b$ is a unit vector, then the inner product
\begin{equation*}
\langle a - \sqrt{2}b, a - \sqrt{2}b \rangle = ||a - \sqrt{2} b||^{2} = 1.
\end{equation*}
My suggestion would be to work out the inner product, first using bilinearity, and then using the relationship between the inner product of two vectors, their magnitudes, and their angles. |
Does a closed surface in the 3-sphere bound a handlebody? | Not necessarily. Start with a sphere and add a knotted tube on the inside and a knotted tube on the outside. This is a genus two surface that does not bound a handlebody. |
Unknotting number formally? | A (say PL) knot diagram $D$ can be thought of as a (PL) immersed curve in the plane with only transverse double points such that each of the double points carries "crossing information", i.e. it is decorated in a way to depict which strand passes over the other. A crossing change just alters the decoration at one of the double points so that the over and under strands are interchange.
The unknotting number $u(D)$ of a diagram $D$ is the minimum number of crossing changes necessary to change $D$ into a diagram of the unknot. The unknotting number of a diagram is always finite, and here's one way to see that. Let $p$ be some basepoint on the knot away from the crossings. Start traveling along the knot from $p$ (in an arbitrary direction). Perform crossing changes (if necessary) so that the first time you encounter any crossing, you encounter it along an "over" strand. Such a diagram always represents the unknot. Suppose that the diagram is in the $(x,y)$-plane, and that the direction of projection is the $z$-axis. Then the embedding of the knot in $\mathbb{R}^3$ can be taken to be decreasing in the $z$-direction except for one line segment parallel to the $z$-axis that projects to the point $p$.
The unknotting number $u(K)$ of the knot $K$ is then defined as
$$u(K) = \min\{u(D)~|~D~\text{is a diagram of }K\}.$$
Since each $u(D)$ is finite, of course $u(K)$ is also finite. |
Problem on Combinations allowing repetition with no missing element. | This problem can be mapped to the stars and bars problem. You have the $k$ stars and $n-1$ bars. The total combinations is ${ k + n - 1} \choose {n-1}$ if you allow empty urns. In your case, you don't want to allow empty urns, and this is equivalent to saying you want a star in each urn. Thus, you can just place a star in each urn and then use the regular stars and bars method to partition the remaining stars, allowing empty urns.
Therefore, you now only have $k - n$ stars to partition, and ${k - 1} \choose {n-1}$ total combinations. |
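A brute-force check of the formula for small values (the helper name is mine):

```python
from itertools import product
from math import comb

def count_positive_compositions(k, n):
    # ordered n-tuples of positive integers summing to k
    return sum(1 for t in product(range(1, k + 1), repeat=n) if sum(t) == k)

for k, n in [(5, 2), (7, 3), (8, 4)]:
    print(k, n, count_positive_compositions(k, n), comb(k - 1, n - 1))
```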
Conditional expectation $E(XY\mid Z)$ | Hint: $XY=\frac{1}{4}((X+Y)^2-(X-Y)^2)$, and expectation is linear. Recall that if $X$ and $Y$ are independent normal with the same distribution, then $X+Y$ and $X-Y$ are independent. |
Commutativity of Boolean ring | $$x+y = (x+y)^2 = x^2 + xy + yx + y^2 = x + xy + yx + y$$
This shows $xy + yx = 0$. That means $xy + \underbrace{xy + yx}_{=0} = xy$. From your statement ($X + X = 0$) we can rearrange to see:
$$\underbrace{xy + xy}_{=0} + yx = xy \Rightarrow yx = xy$$ |
Whats the maximum number of points inside a rectangle such that no two points have a distance less than one | If we place a disk with radius $\frac 12$ on each point, the constraint is now that that no two disks can intersect (and their centers must lie inside the rectangle). This is now a circle packing problem. It's well-known that the optimal circle packing is hexagonal, so I would suppose that the answer to the question is along those lines.
I think that in the particular case of the 3 by 4 rectangle, there is no better packing. However, for larger shapes, the optimal packing will approximate a hexagonal one. |
Does the series $\sum_{n=1}^\infty \left(\frac{1}{\sqrt[3]{n}}-\sqrt[3]{\ln(\frac{n+1}{n})}\right)$ diverge or converge? | I assume that there is a typo and that the series is with a $-$ instead of a $+$. For $x$ near $0$ we have
$$
\sqrt[3]{\ln(1+x)}=\sqrt[3]{x-\frac{x^2}{2}+O(x^3)}=\sqrt[3]{x}\Bigl(1-\frac{x}{6}+O(x^2)\Bigr).
$$
Then, with $x=1/n$ we get
$$
\frac{1}{\sqrt[3]{n}}-\sqrt[3]{\ln(\frac{n+1}{n})}=\frac{1}{\sqrt[3]{n}}-\frac{1}{\sqrt[3]{n}}\Bigl(1-\frac{1}{6\,n}+O(\frac{1}{n^2})\Bigr)=\frac{1}{6\,n^{4/3}}+O(\frac{1}{n^{7/3}}).
$$
Since $4/3>1$, the series converges. |
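The asymptotics are easy to confirm numerically: the $n$-th term times $6n^{4/3}$ should tend to $1$ (using `log1p` to avoid cancellation error):

```python
from math import log1p

def term(n):
    # the n-th term of the series
    return n ** (-1 / 3) - log1p(1 / n) ** (1 / 3)

for n in (10**2, 10**4, 10**6):
    print(n, term(n) * 6 * n ** (4 / 3))  # values approaching 1
```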
Does this game approach a zero chance of winning as the game gets longer? | If at any point you have encountered more blank tiles than solid tiles, then you have lost, regardless of the order of the tiles. You can model the difference between the number of blank and solid tiles as a 1-dimensional random walk. An infinite 1D random walk will cross every value an infinite number of times - one of those values will be 0, where there are an equal number of blank and solid tiles. You will cross that point almost surely (with probability 1) as the walk length increases to infinity, meaning that at some point, you will have encountered more blank tiles than solid ones.
As the length of the run increases to infinity, the probability of success approaches 0. |
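Modeling solid/blank as a fair coin (an assumption; the question may intend a fixed tile pool), the probability that blanks never outnumber solids in any prefix can be computed exactly, and it visibly decays toward $0$:

```python
def p_never_negative(m):
    """Probability that a fair ±1 walk of length m never goes below 0,
    i.e. blanks never outnumber solids in any prefix."""
    counts = {0: 1}                   # prefix sum -> number of paths
    for _ in range(m):
        nxt = {}
        for s, c in counts.items():
            for step in (1, -1):
                if s + step >= 0:     # discard losing prefixes
                    nxt[s + step] = nxt.get(s + step, 0) + c
        counts = nxt
    return sum(counts.values()) / 2 ** m

for m in (10, 100, 1000):
    print(m, p_never_negative(m))  # decreasing toward 0, roughly like 1/sqrt(m)
```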
Simplify by rationalizing the denominator. | Hint
$$\dfrac{2(a^3+b^3+c^3)}{(a+b+c)^2}=?$$
Here $a=\sqrt2,b=\sqrt3,c=\sqrt5$
Observe that $a^2+b^2=c^2$ |
Max-Min values of $f(x,y) = x^3+y^3-6x^2-y-1$ | You've found the set of 4 critical points that make the partial derivatives of the function 0. Now you need to determine whether these are local maxes or mins or nothing. You can use 2nd derivatives to determine this.
To find local maxima and minimum, you can look at the Hessian, the second derivative matrix, which in this case is:
$$\begin{pmatrix} 6x-12 & 0\\ 0 & 6y \end{pmatrix}$$
If this is positive definite, then you've found a local min. If it's negative definite, you've found a local max. You've found 4 critical points, and the matrix is negative definite at $(0,-\sqrt{3}/{3})$, and positive definite at $(4,\sqrt{3}/{3})$, so these are a local max and min respectively.
At the other two points $(0,\sqrt{3}/3)$ and $(4, -\sqrt{3}/3)$, the second derivative matrix is neither positive nor negative semi-definite (the determinant is negative), so we know those points are neither local maxes nor mins.
The wikipedia entry https://en.wikipedia.org/wiki/Second_partial_derivative_test gives more details on how to use the 2nd derivative test.
Clearly, no global maxima or minima exist. If you consider points of the form $(0,y)$, as $y \to \infty$, $f(0,y) \to \infty$ and as $ y \to -\infty$, $f(0,y) \to -\infty$. |
A continuous real function $f$ satisfies $f(2x)=3f(x),\forall x\in \mathbb R$.If $\int_0^1f(x)=1$,then | Let $x = 2u \Rightarrow dx = 2du \Rightarrow \displaystyle \int_{0}^2 f(x)dx = 2\displaystyle \int_{0}^1 f(2u)du = 2\displaystyle \int_{0}^1 3f(u)du = 6 \Rightarrow \displaystyle \int_{1}^2 f(x)dx = 6 - 1 = 5$ |
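One can sanity-check this with a concrete $f$: $f(x)=c\,x^{\alpha}$ with $\alpha=\log_2 3$ satisfies $f(2x)=3f(x)$, and $c=\alpha+1$ normalizes $\int_0^1 f = 1$ (an example of my choosing):

```python
from math import log2

alpha = log2(3)          # f(2x) = 2^alpha f(x) = 3 f(x)
c = alpha + 1            # makes the integral over [0, 1] equal 1
f = lambda x: c * x ** alpha

def integral(lo, hi, n=200_000):
    # midpoint rule
    h = (hi - lo) / n
    return h * sum(f(lo + (i + 0.5) * h) for i in range(n))

print(round(integral(0, 1), 6), round(integral(1, 2), 6))  # → 1.0 5.0
```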
Does this trig system have a solution? | Solve the equation
$$\tan(\omega L/2)=\tan(-\omega L/2)$$
by hand
What makes $0!$ equal to 1? | Check out Factorial
One part of the answer given under 'Definition' is simply that it's convenient. It allows things such as the Taylor series of $e^x$ to work.
Another reason is that it makes sense when viewing it from a permutation viewpoint, since $n!$ is the number of possible permutations of $n$ objects. If we have 0 objects, there is only one possible permutation - that leaving all 0 objects where they are. |
Function field and meromorphic functions on a Riemann surface | a) For every Riemann surface $X$ (compact or not) the sheaf $\mathcal {Rat}$ of rational functions is different from the sheaf $\mathcal {M}$ of meromorphic functions.
More precisely for every strict open subset $U\subsetneq X$, we have $\mathcal {Rat}(U)\subsetneq \mathcal {M}(U)$.
This is already crystal clear on the simplest example in the world: for $U=\mathbb C\subsetneq \mathbb P^1(\mathbb C)=X$ obviously $e^z\in \mathcal {M}(U)\setminus \mathcal {Rat}(U)$.
b) The sheaf $\mathcal {Rat}$ is constant for every Riemann surface $X$ (compact or not) while the sheaf $\mathcal {M}$ is non-constant for every such $X$.
What GAGA says (but was known in the special case of Riemann surfaces before Serre's article) is that the sets of global sections of these sheaves are equal in the case of a compact Riemann surface $X$: although $\mathcal {Rat}\subsetneq \mathcal {M}$, we nevertheless have $\mathcal {Rat}(X)= \mathcal {M}(X)$.
Understanding Slope Better | So, if we were to study the line $y = \frac{2}{3}x$, does that mean the rate of change of the line can be interpreted as "two units of $y$ for every three units of $x$"? With more relatable units, if $y$ is measured in meters and $x$ is measured in seconds, would the rate of change of a particle traveling along this line be read as "two meters every three seconds"?
Yes, you are exactly right. And if it helps you build a more physical intuition, the units of the slope are the units of $y$ divided by the units of $x$. So the number of hot-dogs Takeru Kobayashi eats in a hot-dog eating competition can be modeled with the line $y = m x$, where $y$ has units of hot dogs, $x$ has units of minutes, and $m \simeq 6.6$ hot dogs / minute. Very impressive.
As for more general differentiable curves on any domain: a slope is a slope. The fact that a derivative exists guarantees that $g(x)$ can be approximated as a line in the neighborhood centered at $x = 3$. It is possible to overthink this. Why not eat a hot dog?
Let $a_n$ be defined inductively by $a_1 = 1, a_2 = 2, a_3 = 3$, and $a_n = a_{n−1} + a_{n−2} + a_{n−3}$ for all $n \ge 4$. Show that $a_n < 2^n$. | For $a_1, a_2, a_3$ it is trivial.
Now assume that $a_{k} < 2^k$, $a_{k+1} < 2^{k+1}$, $a_{k+2} < 2^{k+2}$.
Therefore $$a_{k+3} = a_k+a_{k+1}+a_{k+2} < 2^k + 2^{k+1} + 2^{k+2} = 7\cdot2^k < 2^{k+3}$$
That completes the induction and thereby the proof, QED.
You don't seem to get the idea of induction. The normal induction is:
Show that it holds for $n=1$
Show that if it holds for $n=k$ (called the induction hypothesis), then it holds for $n=k+1$.
In this case we use induction with three hypotheses:
Show that it holds for $n=1$, $n=2$ and $n=3$.
Show that if it holds for $n=k$, $n=k+1$ and $n=k+2$ (called the induction hypothesis), then it holds for $n=k+3$.
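The bound is also easy to confirm numerically for many terms:

```python
# a_1 = 1, a_2 = 2, a_3 = 3, a_n = a_{n-1} + a_{n-2} + a_{n-3}
a = [None, 1, 2, 3]
for n in range(4, 61):
    a.append(a[n - 1] + a[n - 2] + a[n - 3])
print(all(a[n] < 2 ** n for n in range(1, 61)))  # → True
```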
Category theory problem? Linear Algebra problem? Pull-back transformations | The strategy of proof is very similar for all of the three points: I'll write down (a), and if you want I can write more also regarding (b) and (c).
So, for (a): say that $\varphi$ is surjective. Then we want to show that $\varphi^*$ is injective, i.e. $\varphi^*(f)$ equals the zero map only if $f$ already was the zero map.
Indeed, suppose $\varphi^*(f)(x)=(f\circ\varphi)(x)=0\ \forall x$. Then it must be $f(x)=0\ \forall x$ in the image of $\varphi$. But this image is the whole space! Then $f$ must have been the zero map already - you don't lose any domain by precomposing.
On the other hand, suppose your $\varphi^*$ is injective: then $\varphi^*(f)(x)=(f\circ\varphi)(x)=0\ \forall x$ implies $f=0$. But now you could for example argue ad absurdum - pick a point which is not in the image of $\varphi$ and a function which is nonzero only at that point - then $\varphi^*(f)=0$, but $f$ was not. Then surjectivity must hold, and the two conditions are equivalent.
You should approach (b) and (c) with the same spirit. I hope this helps! |
If a tensor's multilinear rank is $(R,R,R)$, then is its canonical/CP rank also $R$? | A randomly generated $2\times 2\times 2$ tensor over the reals (each entry is generated randomly) may have rank 3 or rank 2, each with positive probability (the so-called typical ranks). Over the complex field it has rank 2 with probability 1. For higher dimensions the rank of a generic $n\times n\times n$ tensor is always higher than $n$ (there are several typical ranks over $\mathbb R$ and one typical rank over $\mathbb C$). Also, the computation of the tensor rank is NP-hard.
Not artinian right module | $\def\QQ{\mathbb{Q}}$We compute that $$\begin{pmatrix}0&0\\a&b\end{pmatrix}\cdot\begin{pmatrix}\lambda&0\\f&g\end{pmatrix}=\begin{pmatrix}0&0\\\lambda a+fb&bg\end{pmatrix}$$ for all $a$, $b$, $f$, $g\in\QQ(x)$ and all $\lambda\in\QQ$. This describes the action of the ring on the module.
As you noticed, the subset of all elements of $M$ of the form $\begin{pmatrix}0&0\\a&0\end{pmatrix}$ with $a\in\QQ(x)$ is a submodule $N$ of $M$. The formula above implies —and you should have no difficulty showing this— that if $V$ is any $\QQ$-subspace of $\QQ(x)$ then the set $$S_V=\left\{\begin{pmatrix}0&0\\a&0\end{pmatrix}:a\in V\right\}$$ is a submodule of $N$. This gives you many, many submodules.
Can you now find a descending chain of submodules?
A good exercise you can do is showing that in fact all proper submodules of $M$ are of the form $S_V$ for some $\QQ$-subspace $V$ of $\QQ(x)$.
Conditional expectation in Poisson point process | Yes as the comment pointed out:
Note that $N(t_1, t_3) = N(t_1, t_2) + N(t_2, t_3)$. $N(t_2, t_3)$ is independent of $N(0, t_2)$ and has a Poisson distribution (independent increments):
$$ N(t_2, t_3) \sim \text{Poisson} (\lambda(t_3 - t_2))$$
Conditional on $N(0, t_2)$, $N(t_1, t_2)$ has a Binomial distribution (a well known result which can be easily proved):
$$ N(t_1, t_2)|N(0, t_2) = x \sim \text{Binomial}\left(x, \frac {t_2 - t_1} {t_2}\right) $$
Therefore,
$$ \begin{align*} \mathbb{E}[N(t_1, t_3)|N(0, t_2) = x]
& = \mathbb{E}[N(t_1, t_2) + N(t_2, t_3)|N(0, t_2) = x] \\
& = \mathbb{E}[N(t_1, t_2)|N(0, t_2) = x] + \mathbb{E}[N(t_2, t_3)] \\
& = x \frac {t_2 - t_1} {t_2} + \lambda(t_3 - t_2)
\end{align*} $$
Since this holds for every $x$ in the support of $X$, you can replace the $x$ by the random variable $X$ for calculating $\mathbb{E}[Y|X]$. |
How do you write sigmoid function for matrices and vectors? | Oftentimes, people simply write $\sigma(\mathbf{x})$ to denote elementwise application of the sigmoid function to a vector or matrix. (For example, the author does it here, search the page for "vectorizing".) If in doubt, maybe you could just quickly explain the notation you are using. |
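A minimal sketch of what "elementwise" means here (pure Python; the helper names are mine):

```python
from math import exp

def sigmoid(t):
    return 1.0 / (1.0 + exp(-t))

def sigmoid_vec(v):                 # vector: apply to each entry
    return [sigmoid(t) for t in v]

def sigmoid_mat(M):                 # matrix: apply to each entry
    return [[sigmoid(t) for t in row] for row in M]

print(sigmoid_vec([0.0, 2.0]))
print(sigmoid_mat([[0.0], [-2.0]]))
```

With numpy, `1.0 / (1.0 + np.exp(-x))` already does this for arrays of any shape, since `np.exp` applies elementwise.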
Is a function continuous at $x=a$ if $lim_{x \rightarrow a^-}f(x)=+\infty$ and $lim_{x \rightarrow a^+}f(x)=+\infty$? | The left hand side and right hand side limit are equal to each other in the sense that they are both +infinity. But for continuity, the limits should be finite. So in your case, $y=\frac{1}{x^2}$ would serve as an example, but this function is clearly not continuous at $x=0$ |
Rate of convergence of random variables for weak convergence | If $\alpha_n \xrightarrow{D} \alpha$ in such a way that $\alpha_n \stackrel{D}{\approx} \alpha + \epsilon(n)$, then the rate of convergence is the convergence rate of $\epsilon(n)$. E.g., in CLT-type convergence, the error would be of order $n^{-\frac{1}{2}}$ and the measures would converge at such a rate.
Finding the area between $x \sqrt{4x-x^2}$ and $\sqrt{4x-x^2}$ | It’s easier if you shift the horizontal axis.
If you make the substitution $x=u+2$, you’re looking at the functions $f(u)=\sqrt{4-u^2}$ and $g(u)=(u+2)\sqrt{4-u^2}$ for $-2\le u\le 2$, which are equal at $u=-2,-1$, and $2$. The differences are
$$g(u)-f(u)=(u+1)\sqrt{4-u^2}=u(4-u^2)^{1/2}+\sqrt{4-u^2}$$
and its negative. Integrating the first term on the right is straightforward, and the integral of the second from $-2$ to $2$ is on geometric considerations the area of a semicircle of radius $2$. |
$\{\frac 1{\sqrt \pi} \sin(nx)\}_{n\ge 1}\cup \{\frac 1{\sqrt \pi} \cos((n+\frac 12)x)\}_{n\ge 0}$ is an orthonormal basis | The functions $\{ \sin(nx)\}_{n=1}^{\infty}$ form an orthogonal basis of $L^2[0,\pi]$ (orthonormal after scaling by $\sqrt{2/\pi}$). These can be divided into
$$
\sin(2nx), n=1,2,3,\cdots \\
\sin((2n+1)x), n=0,1,2,3,\cdots
$$
By shifting $x$ by $\pi/2$, a new set is obtained on $[-\pi/2,\pi/2]$.
\begin{align}
\sin(2n(x+\pi/2))&=(-1)^n\sin(2nx) \\
\sin((2n+1)(x+\pi/2))&=\sin((2n+1)x+n\pi+\pi/2)\\
&=(-1)^n\cos((2n+1)x)
\end{align}
Finally, the full set on $[-\pi,\pi]$ consists of elements
$$
\sin(nx),\cos((n+1/2)x).
$$ |
Modelling the probability of dice | On each throw, there is a 1/6 chance of throwing a 6 and 5/6 chance of not throwing a 6. So this is a binomial distribution with "p" equal to 1/6 and "q" equal to 5/6 and "n" equal to 30. The probability of x sixes is
$\begin{pmatrix}30 \\ x\end{pmatrix}\left(\frac{1}{6}\right)^x\left(\frac{5}{6}\right)^{30- x}$ |
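A quick numerical check of this formula in Python (standard library only; the helper name `prob_sixes` is mine):

```python
from math import comb

def prob_sixes(x, n=30, p=1/6):
    """P(exactly x sixes in n throws): the binomial pmf above."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

total = sum(prob_sixes(x) for x in range(31))
print(abs(total - 1) < 1e-12)          # True: the pmf sums to 1
print(max(range(31), key=prob_sixes))  # 5, the most likely number of sixes
```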
Given $A$ and $B$, how many positive integers $N$ such that $N\times B$ has at least one divisior $D$ that lies in $N \lt D \le A$? | Are you expecting some simple math formula for the following value?
$$\left|\left\{n\in\mathbb{N}\ :\ \exists n_1,n_2,d\in\mathbb{N}\ :\ n=n_1 n_2,\ \ d\mid B,\ \ n_1<d,\ \ n_2\le\frac Ad\right\}\right|$$
Hard to believe such a formula exists.
But it can be computed :-) |
Finding all the embeddings from $K=\mathbb{Q}(\sqrt{2},\sqrt{3})$ into $\mathbb{C}$ . | Well the extension is generated by $\sqrt 2$ and $\sqrt 3$, so the embedding is defined (generated) by the images of these two numbers.
The image of $\sqrt 2$ has to satisfy $x^2-2=0$ and the image of $\sqrt 3$ has to satisfy $x^2-3=0$ (these being the minimal polynomials of the two roots over $\mathbb Q$).
These basic facts define the options for embeddings, which you should be able to identify. |
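A small numerical sketch (the enumeration below assumes, as the answer indicates, that an embedding is determined by independent sign choices on the generators):

```python
from itertools import product
from math import sqrt, isclose

# each embedding sends sqrt(2) -> ±sqrt(2) and sqrt(3) -> ±sqrt(3)
embeddings = [(s * sqrt(2), t * sqrt(3)) for s, t in product((1, -1), repeat=2)]
for a, b in embeddings:
    # the images still satisfy the minimal polynomials x^2 - 2 and x^2 - 3
    assert isclose(a**2, 2) and isclose(b**2, 3)
print(len(embeddings))  # 4 embeddings of Q(sqrt 2, sqrt 3) into C
```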
Standard normal distribution hazard rate | $\phi $ is the standard normal density, $\Phi$ is the cdf, $\Phi'=\phi$, $\bar{\Phi}=1-\Phi $ is the survivor
function, and $\lambda =\phi /\bar{\Phi}$ is the hazard rate. We know that $%
\lambda(s) >s$.
Lemma
For $s>0$, $\frac{3s+\sqrt{s^{2}+8}}{4}<\lambda(s) <\frac{s}{2}+\frac{\sqrt{s^{2}+4}}{2}$ (Baricz, J. Math. Anal. Appl. 340 (2008) 1362--1370).
Proposition The hazard rate $\lambda $ is convex, $\lambda ^{\prime \prime }>0$.
Proof
We prove this by differentiation. Note that $\lambda ^{\prime }(s)=\frac{-s\phi(s)}{\bar{\Phi}(s)}+\frac{\phi ^{2}(s)}{\bar{\Phi}^{2}(s)}=\lambda(s) \left( \lambda(s) -s\right) >0$ and then
\begin{eqnarray*}
\lambda ^{\prime \prime }(s) &=&\lambda(s) \left( \lambda(s) -s\right) \left( \lambda(s)
-s\right) +\lambda(s) \left( \lambda(s) \left( \lambda(s) -s\right) -1\right) \\
&=&\lambda(s) \left( s^{2}-3s\lambda(s) +2\lambda ^{2}(s)-1\right) \\
&=&\lambda(s) \left( \left( 2\lambda(s) -s\right) \left( \lambda(s) -s\right)
-1\right) .
\end{eqnarray*}
This is positive iff $\left( 2\lambda(s) -s\right) \left( \lambda(s) -s\right) -1>0$. If $s\leq 0$, then
$$
\left( 2\lambda(s) -s\right) \left( \lambda(s) -s\right) -1\geq 2\lambda
^{2}\left( 0\right) -1=\frac{4}{\pi }-1>0.
$$
For $s>0$ we use the complementary lower bound
$$
\lambda(s) >\frac{3s+\sqrt{s^{2}+8}}{4}
$$
(also a classical Mills-ratio estimate, see Baricz, op. cit.). The function $\lambda \mapsto \left( 2\lambda -s\right) \left( \lambda -s\right)$ is increasing for $\lambda >\frac{3s}{4}$, so the lower bound gives
\begin{eqnarray*}
\left( 2\lambda(s) -s\right) \left( \lambda(s) -s\right) -1 &>&\frac{s+\sqrt{s^{2}+8}}{2}\cdot \frac{\sqrt{s^{2}+8}-s}{4}-1 \\
&=&\frac{\left( s^{2}+8\right) -s^{2}}{8}-1=0.
\end{eqnarray*}
We conclude that $\lambda ^{\prime \prime }>0$ and hence that
$\lambda $ is convex. |
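A numerical sanity check of the proposition (a sketch, Python standard library; `erfc` supplies $\bar\Phi$):

```python
from math import exp, sqrt, pi, erfc

def hazard(x):
    """lambda(x) = phi(x) / (1 - Phi(x)) for the standard normal."""
    phi = exp(-x * x / 2) / sqrt(2 * pi)
    sf = 0.5 * erfc(x / sqrt(2))
    return phi / sf

h = 1e-4  # central second difference approximates lambda''
def lam2(x):
    return (hazard(x + h) - 2 * hazard(x) + hazard(x - h)) / h**2

print(all(lam2(s / 10) > 0 for s in range(-50, 51)))  # True on [-5, 5]
```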
Probability of i.i.d uniform random variables | Hint
Compute $P(U>s)$ for a $U(0,t)$ using the definition of the uniform distribution.
Since they are independent, $P(U_1>s,U_2>s) = P(U_1>s)P(U_2>s) = P(U>s)^2.$ |
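A Monte Carlo check of the resulting formula $P(U_1>s,U_2>s)=\left(\frac{t-s}{t}\right)^2$ (the parameter values below are my own choice):

```python
import random

random.seed(0)
t, s, trials = 2.0, 0.5, 200_000
hits = sum(1 for _ in range(trials)
           if random.uniform(0, t) > s and random.uniform(0, t) > s)
exact = ((t - s) / t) ** 2  # = P(U > s)^2 by independence
print(abs(hits / trials - exact) < 0.01)  # should print True
```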
Coefficient of Taylor Series of $\sqrt{1+x}$ | Let $f$ be an $n$ times differentiable function at $a$. Then, for any $0 \leq k \leq n$, the $k^{\text{th}}$ Taylor polynomial of $f$ at $a$ is $$P_k(x) = \sum_{i=0}^k\frac{f^{(i)}(a)}{i!}(x-a)^i.$$ In your situation, you have $a = 0$; we sometimes refer to this as a Maclaurin polynomial: $$P_k(x) = \sum_{i=0}^k\frac{f^{(i)}(0)}{i!}x^i.$$ So for $k \geq 3$ there is an $x^3$ term, in fact, only one, and it has coefficient $\displaystyle\frac{f^{(3)}(0)}{3!}$ which is $\dfrac{1}{16}$ (details of which can be found below by putting your mouse over the grey box).
$f(x) = (1+x)^{\frac{1}{2}}$, $f'(x) = \frac{1}{2}(1+x)^{-\frac{1}{2}}$, $f''(x) = -\frac{1}{4}(1+x)^{-\frac{3}{2}}$, $f'''(x) = \frac{3}{8}(1+x)^{-\frac{5}{2}}$ so $f^{(3)}(0) = f'''(0) = \frac{3}{8}$. Therefore $$\displaystyle\frac{f^{(3)}(0)}{3!} = \frac{\frac{3}{8}}{6} = \frac{1}{16}.$$ |
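The coefficient can also be verified exactly with rational arithmetic, using that the coefficient of $x^k$ in $(1+x)^{\alpha}$ is the generalized binomial coefficient $\binom{\alpha}{k}$ (a sketch):

```python
from fractions import Fraction
from math import factorial

def binom_coeff(alpha, k):
    """Generalized binomial coefficient: coefficient of x^k in (1+x)^alpha."""
    num = Fraction(1)
    for i in range(k):
        num *= alpha - i
    return num / factorial(k)

print(binom_coeff(Fraction(1, 2), 3))  # 1/16, matching f'''(0)/3!
```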
Limiting distribution to Weibull | As the expression of the limit suggests, you have to distinguish the cases $x<0$ and $x\geqslant 0$. For $x\geqslant 0$, $F\left(\frac{x}{(bn)^{1/\alpha}}+x_0\right)=1$ for all $n$. For $x\lt 0$, let $x_n=\frac{x}{(bn)^{1/\alpha}}+x_0$. Then by assumption, we know that
$$
(x_0-x_n)^{-\alpha}(1-F(x_n))\to b
$$
or in other words,
$$
(x_0-x_n)^{-\alpha}(1-F(x_n))= b+\epsilon_n, \epsilon_n\to 0.
$$
Therefore, $1-F(x_n)=(x_0-x_n)^{\alpha}(b+\epsilon_n)=\frac{(-x)^{\alpha}(b+\epsilon_n)}{bn}$, so that $n\,(1-F(x_n))\to(-x)^{\alpha}$.
Rearrange the expression and use the fact that for all $t$,
$$
\left(1+\frac tn +\frac{\epsilon_n}n\right)^n\to e^t.
$$ |
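The Weibull limit above can be illustrated with a concrete case (my own example, not from the question): the uniform distribution on $(-1,0)$ has $x_0=0$, $\alpha=1$, $b=1$, and the rescaled maximum of $n$ samples converges in distribution to $e^{x}$ for $x<0$.

```python
import random
from math import exp

random.seed(2)
n, trials, x = 200, 5_000, -1.0
# P(max of n uniforms on (-1, 0) <= x/n) should approach e^x
count = sum(1 for _ in range(trials)
            if max(random.uniform(-1, 0) for _ in range(n)) <= x / n)
print(abs(count / trials - exp(x)) < 0.03)  # should print True
```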
Intermediate Value Theorem in point-set topology. | Better: suppose we have $(Y,<)$ in the order topology and $f: X \to Y$ continuous with $X$ connected and for some points $a,b \in X$ we have $f(a) < r < f(b)$ with $r \in Y$.
Suppose there is no $x \in X$ with $f(x)=r$.
Then define $O_1=\{y \in Y: y < r \}$ and $O_2=\{y \in Y: y > r\}$ which are open in $Y$ in the order topology.
Firstly $$f^{-1}[O_1] \cup f^{-1}[O_2] = X$$
because we assumed $r$ is not attained as a value, and $Y$ has a linear order, so always $f(x) > r$ or $f(x) < r$ must hold, but not both, so the sets are disjoint. We also know that $a \in f^{-1}[O_1]$ and $b \in f^{-1}[O_2]$, so both these sets are open (by continuity of $f$), disjoint, non-empty, and cover $X$. This contradicts the connectedness of $X$. So we must have some $x$ with $f(x)=r$. |
I know the sum of last six months, is it possible to know each one separately? Values are random and independent | Considering a twelve-month period, for example, you have seven six-month sums for twelve unknowns. This system is underdetermined, i.e., you could pick arbitrary numbers for the first five months and then compute all subsequent month numbers from that. A similar reasoning applies to any length of the total time range considered.
There is one exception: If sales figures are by definition non-negative, everything may be determined provided one of the six-month figures is zero. |
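A tiny demonstration of the underdetermination (my own construction): two different monthly series can have identical rolling six-month sums.

```python
import random

random.seed(1)
a = [random.randint(0, 100) for _ in range(12)]
# a perturbation with period 6 whose six-month window sums all cancel
d = [3, -3, 0, 0, 0, 0] * 2
b = [x + y for x, y in zip(a, d)]

def six_month_sums(xs):
    return [sum(xs[i:i + 6]) for i in range(len(xs) - 5)]

print(six_month_sums(a) == six_month_sums(b) and a != b)  # True
```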
Proving a subgroup of $S_4$ is not isomorphic to $(\mathbb{Z}_4, +)$ by contradiction | If the diagonal in the Cayley Table is the identity element, then each element multiplied by itself in your subgroup gives the identity element. Thus, each of $f_{2}, f_{3}, f_{4}$ satisfy
$$
f_{2}^{2} = f_{1}, f_{3}^{2} = f_{1}, f_{4}^{2} = f_{1}.
$$
By definition, these have order $2$ since they give you the identity when you square them. However, you've correctly identified that $1 \in \mathbb{Z}_{4}$ does not have order $2$, since $1 + 1 = 2 \neq 0$. Since the order of any element must be preserved under any group isomorphism, no such isomorphism can exist because there is no element in your subgroup that you can send $1$ to.
Your proof is good, since you've done pretty much the same thing: you've identified that $1$ generates $\mathbb{Z}_{4}$ and that by giving it an image you get a contradiction. However, you don't need to go so far --- once you've identified that there are mismatched orders between the groups, that is enough.
If you're not allowed to work with the knowledge that isomorphic groups preserve element orders, then your proof is definitely appropriate. |
Finding the exact value of $ \sum \frac{4n-3}{n(n^2-4)} $ | $$\begin{align*} \frac{4n-3}{n(n^2-4)}=\frac{3}{4n}-\frac{11}{8(n+2)}+\frac{5}{8(n-2)}
\end{align*}$$
$$\begin{align*} \frac{4n-3}{n(n^2-4)}=\frac{1}{8}(6(\frac{1}{n}-\frac{1}{n+2})+5(\frac{1}{n-2}-\frac{1}{n+2}))
\end{align*}$$
$$\begin{align*} \sum_{n=3}^{m}\frac{1}{n}-\frac{1}{n+2}=\frac{7}{12}-\frac{1}{m+1}-\frac{1}{m+2}\rightarrow7/12\end{align*}$$
$$\begin{align*} \sum_{n=3}^{m}\frac{1}{n-2}-\frac{1}{n+2}=\frac{25}{12}-\frac{1}{m-1}-\frac{1}{m}-\frac{1}{m+1}-\frac{1}{m+2}\rightarrow25/12 \end{align*}$$
$$\begin{align*} \sum_{n=3}^{\infty} \frac{4n-3}{n(n^2-4)}=\frac{1}{8}(6\times7/12+5\times25/12)=\frac{167}{96}
\end{align*}$$ |
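A numerical check of the value $\frac{167}{96}\approx 1.7396$ (a sketch):

```python
def partial_sum(m):
    """Sum of (4n-3)/(n(n^2-4)) for n = 3..m."""
    return sum((4*n - 3) / (n * (n*n - 4)) for n in range(3, m + 1))

print(abs(partial_sum(100_000) - 167/96) < 1e-4)  # True: tends to 167/96
```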
Convert Recursive to Closed Formula | There is a general way to solve linear recurrences such as yours; see the Wikipedia article on linear recurrence relations for a start. The basic idea is to look for solutions of the form $T_n = \lambda^n$. Inserting this ansatz into the equation we obtain
$$
\lambda^n = 2\lambda^{n-1} - \lambda^{n-10}
$$
and after dividing by $\lambda^{n-10}$
$$
\lambda^{10} - 2\lambda^9 + 1 = 0
$$
Now you have to find all $\lambda_i$, $i = 1,\ldots, 10$, which solve this equation. Wolfram|Alpha helps. One can show that $\{(\lambda_i^n)_{n\ge 0}\mid 1 \le i \le 10\}$ is a basis of the space of all sequences which fulfill your recursion, provided the roots are distinct. Now we put your initial values into the ansatz $T_n = \sum_{i=1}^{10} a_i \lambda_i^n$. We obtain (with $\lambda_1 = 1$)
\begin{align*}
a_1 + a_2\lambda_2 + a_3\lambda_3 + \cdots + a_{10}\lambda_{10} &= 1\\\
a_1 + a_2\lambda_2^2 + a_3\lambda_3^2 + \cdots + a_{10}\lambda_{10}^2 &= 1\\\
\vdots &= \vdots\\\
a_1 + a_2 \lambda_2^{9} + a_3 \lambda_3^{9} + \cdots + a_{10}\lambda_{10}^{9} &= 128\\\
a_1 + a_2 \lambda_2^{10} + a_3 \lambda_3^{10} + \cdots + a_{10}\lambda_{10}^{10} &= 256
\end{align*}
a linear system for $a_1, \ldots, a_{10}$. You have to solve this for $a_1, \ldots, a_{10}$.
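The roots and the resulting solutions can be explored numerically; the sketch below assumes the recursion is $T_n = 2T_{n-1} - T_{n-10}$, as the characteristic equation above indicates (standard library only):

```python
def charpoly(lam):
    """lambda^10 - 2*lambda^9 + 1, obtained by dividing by lambda^(n-10)."""
    return lam**10 - 2 * lam**9 + 1

print(charpoly(1.0) == 0)  # True: lambda = 1 is an exact root

# the dominant root is found by iterating lam <- 2 - lam^(-9)
lam = 2.0
for _ in range(60):
    lam = 2 - lam**-9
print(abs(charpoly(lam)) < 1e-9)  # True: a root near 1.998

# any root yields a solution T_n = lam^n of T_n = 2 T_(n-1) - T_(n-10)
T = [lam**k for k in range(20)]
print(abs(T[15] - (2 * T[14] - T[5])) < 1e-9)  # True
```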
The problem is, as suggested by Wolfram|Alpha, that the roots of your equation besides $\lambda_1 = 1$ are difficult, if not impossible, to write in closed form.
Hope this helps, |
Is the set of rational number discrete or continuous? | This does depend on the topology that we equip $\mathbf{Q}$ with. If it has its usual topology, i.e. the topology inherited from the standard topology on $\mathbf{R}$, then it is not discrete. A subspace $X\subseteq\mathbf{R}$ is said to be discrete if for any $x\in X$ there exists an open set $U\subseteq\mathbf{R}$ containing $x$ such that $U\cap X=\{x\}$. Given any $\frac{p}{q}\in \mathbf{Q}$ and an open neighborhood of radius $\epsilon$, we can find another rational $\frac{m}{n}$ satisfying $0<\lvert \frac{p}{q}-\frac{m}{n}\rvert<\epsilon$, so that $\mathbf{Q}$ is not discrete. |
How to prove that $\lim\limits_{n \to \infty} \frac{k^n}{n!} = 0$ | The series for $e^k$
$$
\sum_{n=0}^\infty\frac{k^n}{n!}
$$
converges by the ratio test. The terms of a convergent series must tend to $0$.
For $n\ge2k$, the ratio of terms is $\frac{k^{n+1}/(n+1)!}{k^n/n!}=\frac{k}{n+1}<\frac{1}{2}$. We can remove the reference to series (which seems to have bothered someone) with the following sandwich, valid for $n\ge2k$:
$$
0\le\frac{k^n}{n!}\le\frac{k^{2k}}{(2k)!}\left(\frac{1}{2}\right)^{n-2k}
$$
which shows that $\displaystyle\lim_{n\to\infty}\frac{k^n}{n!}=0$. |
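For intuition, here is a quick check of how fast the terms die off once $n$ passes $2k$ (e.g. $k=10$; a sketch):

```python
from math import factorial

def term(k, n):
    return k**n / factorial(n)

print(term(10, 20) > term(10, 40) > term(10, 80))  # True: decay past n = 2k
print(term(10, 200) < 1e-100)                      # True: essentially zero
```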
Need help with proof with absolute value and complex numbers. | It is equivalent to $\lvert z-w\rvert^2 <\lvert 1-\bar zw\rvert^2$, i.e.:
\begin{align*}
&(z-w)(\bar z-\bar w) < (1-\bar zw)(1- z\bar w)\iff z\bar z+w\bar w <1+z\bar zw\bar w\\ \iff &1 -z\bar z-w\bar w+z\bar zw\bar w=(1-z\bar z)(1-w\bar w)=(1-\lvert z\rvert^2)(1-\lvert w\rvert^2)>0.
\end{align*} |
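In the proposition above the standing assumption is $\lvert z\rvert<1$ and $\lvert w\rvert<1$; a randomized numerical check (a sketch):

```python
import random

random.seed(0)
checked = 0
while checked < 1000:
    z = complex(random.uniform(-1, 1), random.uniform(-1, 1))
    w = complex(random.uniform(-1, 1), random.uniform(-1, 1))
    if abs(z) < 1 and abs(w) < 1:  # keep only points in the open unit disc
        assert abs(z - w) < abs(1 - z.conjugate() * w)
        checked += 1
print(checked)  # 1000 sample pairs verified
```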
Integration by parts but not in terms of dx | Hint
Let $s>0$. Since $x\mapsto e^{-sx}$ is $\mathcal C^1(\mathbb R)$,
$$\int_0^\infty x\,\mathrm d (e^{-sx})=\int_0^\infty x\cdot \frac{\mathrm d }{\mathrm d x}(e^{-sx})\,\mathrm d x$$ |
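The resulting integral can be sanity-checked numerically; for instance with $s=2$ (my own choice) one has $\int_0^\infty x\,(-s)e^{-sx}\,\mathrm dx=-\frac1s$ (a sketch using a midpoint Riemann sum):

```python
from math import exp

def midpoint(fn, a, b, n=200_000):
    """Midpoint-rule approximation of the integral of fn over [a, b]."""
    h = (b - a) / n
    return sum(fn(a + (i + 0.5) * h) for i in range(n)) * h

s = 2.0
# truncate the integral at 30, where the exponential tail is negligible
val = midpoint(lambda x: x * (-s) * exp(-s * x), 0.0, 30.0)
print(abs(val - (-1 / s)) < 1e-6)  # True: the integral equals -1/s
```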