Is there a simple proof for the irrationality or transcendence of $e^{q}$ where $q \in \mathbb{Q}$? | Write $q=a/b$.
If $e^q$ is rational, then $e^a=(e^q)^b$ is rational.
Similarly, if $e^q$ is algebraic, then $e^a=(e^q)^b$ is algebraic.
So if you know that $e$ is transcendental, you get it for all $e^q$ with $q\neq 0$: if $e^a$ were algebraic for some nonzero integer $a$, then $e$ would be a root of $x^{|a|}-e^{|a|}$, a polynomial with algebraic coefficients, making $e$ algebraic. |
Finding the complex roots of a polynomial f knowing that it shares one root with a polynomial g | You made a mistake. Their GCD is $x^2 + x - 1$. |
Parameterize and find the arc length | the intersection of the sphere $x^2+y^2+z^2=1$ and the plane $x+y+z=1$
Eliminate $z$
$$ x^2+y^2+(1-x-y)^2= 1\rightarrow x^2+y^2 +xy -x-y =0 \tag1 $$
Note that a surface has two parameters, while the intersection curve has one. So we find one parameter as a function of the other.
Solve for $y(x)$
$$ 2 y(x)= (1-x) \pm \sqrt {(1-x)(1+3x)} \tag2$$
suggested parametrization
$$ [x,2y,2z] =\big[t, (1-t)-\sqrt{(1-t)(1+3t)}, (1-t)+\sqrt{(1-t)(1+3t)}\big] \tag3$$
The parametrization is real only for $ -\dfrac13 \le t \le 1 $.
Arc length can be found by integrating between these limits; each sign choice in $(3)$ traces half the circle, so the full circumference is
$$ 2\int_{-1/3}^{1}\sqrt{x^{'2}+y^{'2}+z^{'2} }\,dt $$
The small circle circumference can be analytically evaluated.
Sketch the plane and sphere to visually verify the parametrization.
The same (easier) procedure works for a cylinder/plane intersection.
Using direction cosines, the slant angle of the plane works out to $ \cos^{-1}\frac{1}{\sqrt3}$. On flat development the elliptic intersection arc becomes a sine curve $ z=\sqrt{3} \sin \, (s)$, whose arc length is $ a E(\theta, e)$ in terms of elliptic integrals of the second kind. |
Finding solution for irrational inequality | Let $a=\sqrt x$, as you've done, and consider $g(a) := (a^2 - 2a - 3)(a^2 + a - 2) = (a-3)(a+1)(a+2)(a-1)$ defined for non-negative $a$. One can verify (with, say, a sign chart) that $g(a) < 0 \Longleftrightarrow a \in (1,3)$.
Thus, noting that $x \mapsto \sqrt x$ is continuous and increasing, the solution is $x \in (1,9)$ |
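The sign chart can be double-checked numerically; the sketch below samples the factored form of $g$ on a grid of non-negative $a$:

```python
# g in factored form, defined for non-negative a
def g(a):
    return (a - 3) * (a + 1) * (a + 2) * (a - 1)

samples = [i / 100 for i in range(0, 501)]        # a in [0, 5]
negative = [a for a in samples if g(a) < 0]

# g(a) < 0 exactly on (1, 3); with a = sqrt(x) this is x in (1, 9)
print(min(negative), max(negative))
```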
If an equality is true for any finite N then we can take the limit | Suppose you have an equality $a_n = b_n$ that holds for all $n\in\mathbb N$. If (without loss of generality) there exists $a = \lim_{n\to\infty} a_n$ then we can simply see that
$$ a = \lim_{n\to\infty} a_n = \lim_{n\to\infty} b_n, $$
because every $a_n$ is equal to $b_n$.
Therefore we know that, if we have an equality that depends on a variable, we can simply apply the limit to both sides and still have an equality.
Concerning your problem in particular: the series $\sum_{i=0}^\infty G_i$ is called convergent if the sequence of partial sums converges: $\sum_{i=0}^\infty G_i := \lim_{N\to\infty}\sum_{i=0}^N G_i$. |
Is $ \{ m+rn\mid m,n \in \mathbb{Z}\}$ dense in $\mathbb{R}$ if $r\in\mathbb{Q}$? | For a fixed $\alpha\in\mathbb{Q}$, write $\alpha = \frac{a}{b}$ with $\gcd(a,b)=1$, $b\gt 0$. Then
$$\{m+\alpha n\mid m,n\in\mathbb{Z}\} = \left\{r\in\mathbb{Q}\;\left|\; r=\frac{t}{v},\ t,v\in\mathbb{Z}, \gcd(t,v)=1, v\gt 0, v|b\right\}\right..$$
To prove the right hand side is contained in the left hand side,
given any $\frac{t}{v}$ as given, write $\frac{t}{v} = m + \frac{q}{v}$, with $m\in\mathbb{Z}$, $0\leq q\lt v$; then $\gcd(q,v)=1$ (since $t = vm + q$ is relatively prime to $v$). Since $v|b$, we can write $b=vs$. So $\frac{t}{v} = m + \frac{qs}{b}$. Since $\gcd(a,b)=1$, there exist $k,\ell$ such that $ka+\ell b = 1$. Hence $kqsa + \ell qsb = qs$. Hence
$$\frac{t}{v} = m+ \frac{qs}{b} = m + \frac{kqsa + \ell qsb}{b} = m + \ell qs + kqs\alpha.$$
For the converse inclusion, any element on the left hand side can be written as
$$ m + n\alpha = \frac{bm+an}{b},$$
which when written in lowest terms will be of the form $\frac{t}{v}$ with $v|b$. This gives equality.
However, the right hand side describes a discrete subset of $\mathbb{R}$ (since any two elements of the set are at least $\frac{1}{b}$ apart), hence it is not dense. It is none other than the group $\frac{1}{b}\mathbb{Z}$. |
Can every smooth sine function be given a smooth argument? | I interpret smooth as $C^\infty$.
Take a $C^\infty$ function $f:\mathbb R\to[0,\infty)$ such that $\sqrt{f}$ is not $C^\infty$, for example
$$f(t) = \exp(-1/t^2)(\sin^2 (\pi/t)+\exp(-1/t^2))$$
(after Georges Glaeser, see also here). The function $g(t)=\sqrt{f(t)}$ is not even $C^2$ because
$$g''(1/n)=n^4(\pi^2+4n^2e^{-n^2}-6e^{-n^2})\to\infty\quad \text{ as }n\to\infty$$
Let $\alpha(t) =g(t)+\pi/2$. Then $\sin\alpha(t)=\cos g(t) = h(f(t))$ is the composition of two $C^\infty$ functions, namely $f$ and $h(x)=\cos\sqrt{x}$.
On the other hand, $\alpha$ has unbounded second derivative near $0$, and there's nothing here that tweaking it to $\tilde \alpha $ can fix. |
Fixed Point of $x_{n+1}=i^{x_n}$ | We had this question quite recently, as I recall, but I cannot now find it. Anyway, here is a plot of $x_n$ for $n$ from $1$ to $50$ (figure omitted). They are converging, right?
The limit is:
$$
\frac{2i}{\pi}W\left(\frac{-i\pi}{2}\right) \approx .4382829366+.3605924718 i
$$
Of course $W$ is the Lambert W function. |
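A quick sketch (using `scipy.special.lambertw`, whose default is the principal branch) confirms both the fixed-point equation $i^x = x$ and the convergence of the iteration:

```python
import cmath
from scipy.special import lambertw

# Closed form above: limit = (2i/pi) * W(-i*pi/2), principal branch
limit = 2j / cmath.pi * lambertw(-1j * cmath.pi / 2)

# Iterate x_{n+1} = i^{x_n}; near the limit the map is a contraction
# (|x* . pi/2| is about 0.89 < 1), so the orbit spirals in.
x = 1.0 + 0j
for _ in range(500):
    x = 1j ** x

print(x, limit)
```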
Do operators which admit non-orthonormal spectral expansions have any non-basis eigenvectors? | With your definition of basis from the comment (i.e., a Schauder basis), the answer is trivial for the case where each $e_j$ is an eigenvector for a different eigenvalue $\lambda_j$. Indeed, for a fixed $k$ the equality $Ax=\lambda_k x$ reads
$$
\sum_j x_j \lambda_j e_j=\sum_j x_j\lambda_k e_j.
$$
By the uniqueness of representation the above equality forces $x_j\lambda_j=x_j\lambda_k$ for each $j$; so, if $j\ne k$ and $\lambda_j\ne\lambda_k$, we get $x_j=0$. Thus $x=x_ke_k$ (you cannot avoid multiples, as a multiple of an eigenvector is an eigenvector). |
Taking partial time derivative of a functional | Questions like this exhibits exactly why people shouldn't use thermodynamic notation.
Anyway. I am going to introduce an auxiliary variable to make the change clear. You have two coordinate systems $(\vec{r},\tau)$ and $(\vec{x},t)$, related by
$$ \tau = t, \qquad \vec{r} = a(t) \vec{x} $$
What you write as $\dfrac{\partial}{\partial t}\Big|_{\vec{r}}$ is the partial derivative $\partial_\tau$ in the $(\vec{r},\tau)$ coordinates, and the derivative $ \dfrac{\partial}{\partial t} \Big|_{\vec{x}}$ is the partial derivative $\partial_t$ in the $(\vec{x},t)$ coordinates.
Standard coordinate transformation tells you
$$ \partial_\tau = \frac{\partial t}{\partial\tau} \partial_t + \frac{\partial \vec{x}}{\partial \tau}\cdot \nabla_{\vec{x}} $$
The change of variables can be rewritten in the form
$$ t = \tau, \qquad \vec{x} = \vec{r} / a(\tau) $$
and hence
$$ \frac{\partial t}{\partial \tau} = 1 \qquad \frac{\partial \vec{x}}{\partial \tau} = - \vec{r} \frac{\dot{a}(\tau)}{a^2(\tau)} $$ |
A linear fractional transformation and mapping of concentric circles | I think you can consider the problem from another direction: consider the inverse function, which maps the non-concentric circles to concentric circles. This can be done by finding the common symmetric points of the two circles and mapping them to $0$ and $\infty$. |
How to find the integral of $\dfrac{1-x}{(1+x)^2}$? | Write: $1 - x = 1 + x - 2x$, and split it into 2 parts:
Part 1: $\dfrac{(1 + x )}{( 1 + x)^2} = \dfrac1{(1 + x)}$, whose antiderivative is $\ln(1 + x)$
Part 2:
$-2x = -2(1 + x) + 2$, and $\dfrac{-2(1 + x)}{(1 + x)^2} = \dfrac{-2}{1 + x}$ with antiderivative $-2\ln(1 + x)$; the other part, $\dfrac{2}{(1 + x)^2}$, has antiderivative $-2(1 + x)^{-1}$ |
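A symbolic sanity check of the assembled antiderivative (a sympy sketch; the constant of integration is dropped):

```python
import sympy as sp

x = sp.symbols('x')
integrand = (1 - x) / (1 + x)**2

# Part 1 gives ln(1+x); part 2 gives -2*ln(1+x) - 2/(1+x)
F = sp.log(1 + x) - 2 * sp.log(1 + x) - 2 / (1 + x)

# Differentiating the candidate antiderivative recovers the integrand
assert sp.simplify(sp.diff(F, x) - integrand) == 0
print(sp.simplify(F))
```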
Using sketch to find exact value of trigonometric expression | Print it and give it to your teacher. Or send him this link. Answer is $\pm\frac{12}{5}=\pm 2.4$ |
What is the derivative of the sum of a vector with respect to the vector? | Just work it out with multivariable calculus by noting that the derivative of a linear term is constant.
$\frac{d}{dX}1^TX = 1^T\frac{d}{dX}X = 1^T I = 1^T$
where $1$ is a vector of $1$s, so the derivative is the all-ones vector. |
Paths in $\mathbb{R}$ are homotopic. | This community wiki solution is intended to clear the question from the unanswered queue.
Yes, your approach works. You can easily verify that the same construction works for any two maps $f, g: X \to C$ where $X$ is an arbitrary topological space and $C$ is a convex subset of an arbitrary topological vector space $E$. |
Uniform radius in an open set | Hint: $\gamma([0,1])\subset\Omega$ is compact. |
Cayley-Hamilton with A=SDS^-1 | $\chi_A(t)=\det(tA-I)=\det(tSDS^{-1}-I)=\det(S(tD-I)S^{-1})=\det(tD-I)=\chi_D(t)$.
Therefore, $\chi_A(A)=\chi_D(SDS^{-1})=S\chi_D(D)S^{-1}=0.$
That is, you just need to prove the Cayley–Hamilton theorem for diagonal matrices, which is easy.
Indeed, $\chi_D(t)=(t-d_{11})\cdots(t-d_{nn})$. Since $De_i=d_{ii}e_i$, we have $\chi_D(D)e_i=0$ for all $i$ and so $\chi_D(D)=0$. |
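A numeric sketch of the argument for a random diagonalizable matrix (using `np.poly` for the characteristic-polynomial coefficients and Horner's rule to evaluate it at $A$):

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a diagonalizable A = S D S^{-1}
S = rng.standard_normal((4, 4))
D = np.diag([1.0, 2.0, 3.0, 4.0])
A = S @ D @ np.linalg.inv(S)

# Coefficients of the characteristic polynomial, leading term first
coeffs = np.poly(A)

# Evaluate chi_A(A) via Horner's rule with matrix arithmetic
P = np.zeros_like(A)
for c in coeffs:
    P = P @ A + c * np.eye(4)

print(np.max(np.abs(P)))  # numerically zero, up to roundoff
```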
Need visualization advice for learning partial derivatives and calculus with more than one variable. | The Math Insight website at http://mathinsight.org/ is replete with diagrams and visualizations of concepts from multivariable and vector calculus.
For the directional derivative, see: http://mathinsight.org/directional_derivative_gradient_introduction. |
Compute $\int_0^n \left[\frac x {x+1} + \frac x {2x+4} + \frac x {3x+9} + \cdots\right] \, dx$ for $x>0$ | We have
\begin{align}
f(x) & = \sum_{k=1}^{\infty} \dfrac{x}{kx+k^2} = \sum_{k=1}^{\infty} \left(\dfrac1k - \dfrac1{k+x} \right) = \sum_{k=1}^{\infty} \int_0^1 \left(t^{k-1} - t^{k+x-1}\right)dt\\
& = \int_0^1 (1-t^x) \left(\sum_{k=1}^{\infty} t^{k-1} \right) dt = \int_0^1 \dfrac{1-t^x}{1-t} dt
\end{align}
We now have
$$I_n = \int_0^n f(x) dx = \int_0^n \int_0^1 \dfrac{1-t^x}{1-t} dt dx = \int_0^1 \dfrac{n + \dfrac{1-t^n}{\log(t)}}{1-t}dt$$
From WolframAlpha for $k=1,2,3,4,5$, we get the value of
$$J_k = \int_0^1 \left(\dfrac1{1-t} + \dfrac{t^{k-1}}{\log(t)} \right) dt = \gamma + \log(k)$$
Assuming this is true in general and noting that $I_n = \displaystyle \sum_{k=1}^n J_k$, we get that,
$$I_n = \sum_{k=1}^n J_k = n \gamma + \log(n!)$$ |
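The closed form can be checked numerically; the sketch below uses the standard identity $f(x)=\psi(x+1)+\gamma$ (with $\psi$ the digamma function), which is just the series above summed in closed form:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import digamma, gammaln

gamma = np.euler_gamma

# f(x) = sum_{k>=1} (1/k - 1/(k+x)) = digamma(x+1) + gamma
f = lambda t: digamma(t + 1) + gamma

for n in (1, 2, 5, 10):
    I_n, _ = quad(f, 0, n)
    closed = n * gamma + gammaln(n + 1)   # n*gamma + log(n!)
    print(n, I_n, closed)
```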
Fitting a logarithmic trendline on already logged values | Nothing of what you are doing makes sense. You are trying to fit "noise" and any model you will try will be as bad/useless as any other.
The best model: $$y=17.4$$ |
Why does this solution work? | For (c), you can see that the answer is
$$\small\underbrace{(\#\text{ of ways of choosing the 1 chemist out of 11 chemists})}_{\Large\binom{11}{1}}\underbrace{(\#\text{ of ways of choosing the 4 techs out of 15 techs})}_{\Large\binom{15}{4}}$$
(the other 4 people chosen must be technicians, because the problem specifies exactly one chemist)
For (a), since we are not distinguishing between the two subgroups of people, the answer is just
$$(\#\text{ of ways of choosing 5 people out of 26 people})=\binom{26}{5}$$
However, for (e), I'm not sure I'm interpreting the question correctly; why does part (e) use "may", when the other parts use "must"? Does that mean (e) is asking "how many ways are there to make the committee all technicians, or all chemists, or a mix", in which case it's the same thing as part (a)? |
What is $\Pr(Y\in[\pi,X+\pi]\mid X)$ if $X \sim U(0,\pi)$ and $Y \sim U(0,2\pi)$? | It depends on the joint distribution of $X$ and $Y$. If almost surely, $Y=2X$, then $\Pr(Y\in [\pi, \pi+X]|X) = \mathbb I(X>\pi/2)$.
If $X$ and $Y$ are independent, $\Pr(Y\in [\pi, \pi+X]|X) = X/(2\pi)$. Indeed, the unconditional probability in this last case is 1/4. I can't find, off the top of my head, a situation when the conditional probability is 1/4. |
limit $\lim_{n\to \infty} n\left[1-\cos\left(\frac{\theta}{n}\right) -i\sin\left(\frac{\theta}{n}\right)\right]$ | Note that
$$n\left[1-\cos\left(\frac{\theta}{n}\right) -i\sin\left(\frac{\theta}{n}\right)\right]=n\left[ 1-e^{ i\left( \frac{\theta}{n} \right) } \right]=-i\theta\frac{ \left[ e^{ i\left( \frac{\theta}{n} \right) } -1\right]}{i\frac{\theta}n}\to -i\theta$$ |
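A quick numeric check; the error shrinks like $\theta^2/(2n)$, as the next term of the expansion suggests:

```python
import cmath

theta = 1.3
for n in (10**3, 10**5, 10**7):
    val = n * (1 - cmath.cos(theta / n) - 1j * cmath.sin(theta / n))
    print(n, val)   # approaches -1.3i; error ~ theta**2 / (2*n)
```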
second order ODE with variable coefficients | Let me rewrite the equation using simpler notations:
$$
C_1 x y'' + (C_2 - C_3 x) y' + (C_4 + C_5 x) y = C_6,
$$
where $C_i$ are the corresponding constants.
However, even in a very particular homogeneous case with $C_6 = 0$ and $C_1,\dots,C_5 = 1$, i.e.
$$
x y'' + (1 - x) y' + (1 + x) y = 0,
$$
WolframAlpha returns a very "hard" solution.
So, it looks unrealistic to obtain a general solution in a closed form for arbitrary coefficients $C_i$. |
If $\sigma (T)'\subseteq \{ 0\} \ \ \forall T $ then are $T$s compact? | No. For instance, if $A$ consists just of scalar multiples of the identity, then the spectrum of every element of $A$ is a single point but $A\not\subseteq K(H)$. |
Why is $f'(x)= 0$ but $f$ is not a constant function? | It is not true that $f'(x)=0$ implies $f$ is constant for functions $f$ with an arbitrary domain. This implication is only valid when the domain is an interval. Since the domain of your function is $(-\infty,0)\cup(0,\infty)$, you can conclude that $f$ is constant on each of those intervals, but not necessarily that it is constant on their union (i.e., that it takes the same constant value on each of them).
Indeed, more generally, given any constants $c$ and $d$, you could define a function $g(x)$ on $(-\infty,0)\cup(0,\infty)$ by $g(x)=c$ if $x<0$ and $g(x)=d$ if $x>0$. Then $g'(x)=0$ for all $x$ in the domain of $g$, but $g$ is not constant (unless $c=d$). |
Multivariable calculus find maximum and minimum. | This is a Lagrange multiplier problem.
To do this problem, first take $\nabla f$
Then, take $\nabla g$, your constraint function, $(x^{2} + y^{2})$
The relationship between them is,
\begin{equation}
\nabla f = \lambda \nabla g
\end{equation}
For your case, your system of equations should look like:
\begin{align}
4y^{2} - 2x &= \lambda 2x \\
8xy &= \lambda 2y
\end{align}
Solve for lambda and equate the two lambdas together, to obtain
\begin{equation}
\frac{8xy}{2y} = \frac{4y^{2}-2x}{2x}
\end{equation}
Simplify to get an explicit relationship between $x$ and $y$, then use it to eliminate one variable in the constraint equation $g$. That gives the value of $x$ (or $y$), which you can plug back into the explicit relationship to obtain the other variable.
You now have some points, $(x,y)$. Plug them into your $f$, to obtain maximum/minimum values. Those maximum/minimum values occur at the corresponding points.
Another note: you may also want to test the $g = 0$ cases as well, since the constraint is an inequality; it would then follow that zero is their least possible value. |
How to use the separation axiom? | Set builder notation isn't actually part of the language of set theory; it's just a useful system of abbreviations. So reasoning about it is inherently informal, because it's actually not there fundamentally. I think the confusion you're having mostly results from trying to treat set builder notation as more fundamental than it actually is.
Generally we read "$\{x:\varphi(x)\}$" as referring to the set $A$ satisfying $$\forall x(x\in A\iff \varphi(x)).$$ Note that this involves an implicit claim - namely, that such an $A$ exists in the first place. (There's also a uniqueness claim, but by extensionality if one such $A$ exists then there is exactly one such $A$; so it's really only the existence claim which matters.)
The notation "$\{x\in A:\varphi(x)\}$" refers to the same thing as "$\{x: x\in A\wedge\varphi(x)\}$." However, it comes with a useful detail: the "$x\in A$" clause at the beginning guarantees existence via the separation axiom. So in some sense - once we accept the separation axiom (and extensionality) - the notation "$\{x\in A:\varphi(x)\}$" doesn't make any implicit assumptions. In particular:
For every formula $\varphi(x,y_1,...,y_n)$, ZF proves that for every set $A$ and all sets $a_1,...,a_n$ (the "parameters") there is a unique set $X$ such that $$\forall x(x\in X\iff x\in A\wedge \varphi(x; a_1,...,a_n)).$$
This set $X$ is denoted "$\{x\in A: \varphi(x, a_1,...,a_n)\}$." |
Measurability question for martingale (Is $S(X_i, Z_i)- E(S(X_i,Z_i) \mid \mathcal{F}_i)$ $\mathcal{F}_i=\{X_1, \dots, X_i \}$-measurable?) | $S_k$ is $\mathcal{F}_k=\sigma(X_1,...,X_k)$-measurable if and only if there exists a Borel-measurable function $f$ such that $S_k=f(X_1,...,X_k)$.
This is known as Doob's lemma (or sometime Doob-Dynkin's Lemma).
Regards |
Geodesic distance between equidistant points on a sphere | you want the article by Saff and Kuijlaars. This problem is traditionally called the Tammes problem. https://perswww.kuleuven.be/~u0017946/publications/Papers97/art97a-Saff-Kuijlaars-MI/Saff-Kuijlaars-MathIntel97.pdf
Asymptotically, for $n$ vertices on the sphere, one expects geodesic distance to the closest neighbors at about $4/\sqrt n.$
By hand calculation, we cannot expect a packing to have a larger proportion of the area of the sphere than circles in a hexagonal packing in the plane. This observation suggests a geodesic radius of $1.9046 / \sqrt n,$ or pairwise distance of nearest neighbors $3.809 / \sqrt n.$ |
equivalence of definition of the first cohomology group | Define $G^{op}$ to be the opposite group of $G$. This is the group with the same underlying set of elements, but with group operation $\ast$ defined by $g\ast h =h\cdot g$, where $\cdot$ denotes the group operation in $G$. Note that left modules over $G$ are right modules over $G^{op}$, and vice-versa.
Then a right derivation of $G$ is exactly a left derivation of $G^{op}$.
The source of the apparently different definitions is that Silverman's coefficients are right $G$-modules, whereas Weibel's are left $G$-modules. |
Is there a method or shortcut (done by hand) to determine the area bounded by the curves of mixed equations? | If a parametric curve $(x(t),y(t))$ circles the origin in a counter-clockwise direction, then the area it encloses is given by
$$
A = \frac{1}{2} \int \left( x \frac{dy}{dt} - y \frac{dx}{dt} \right) dt
$$
(This is found by noting that the area of the long narrow wedge subtended by $\vec{r} = (x,y)$ and $\vec{r} + \Delta \vec{r} = (x + \Delta x, y + \Delta y)$ is approximately $\frac{1}{2}|\vec{r} \times \Delta \vec{r}|$.)
In principle, you could use this to solve your problem. Your heart-shaped curve and the parabola intersect at one point with $x> 0$; call this $(x_0, y_0)$. A line from $(0,0)$ to $(x_0, y_0)$ divides the area into two sub-areas. The area between the parabola and this diagonal can be found by standard Calculus 101 techniques. The area bounded by the diagonal, the heart-shaped curve, and the $y$-axis can be found by integrating the above parametric area equation from $t = t_0$ to 0, where $t_0$ is defined such that $(x(t_0), y(t_0)) = (x_0, y_0)$. This latter integrand is expressible in terms of cosines and sines, so although the answer might be ugly, the indefinite integral does exist in closed form.
The problem with this method is that the point of intersection $(x_0, y_0) \approx (3.222, 10.379)$ may be impossible to write down in closed form. (For the record, $t_0 \approx 0.626257...$.) Playing around with Mathematica, it appears to be expressible in terms of sines and inverse tangents of roots of a certain 6th-order polynomial. If a numerical answer is all that's required, though, then you could certainly get an answer to some number of decimal places using this technique. |
Nowhere monotonic continuous function | The Weierstrass function, mentioned in other answers, is indeed an example of a nowhere monotone function, meaning that $f$, even though continuous and bounded, is increasing at no point, decreasing at no point (and differentiable at no point as well). Details of this can be found in Example 7.16 in van Rooij, and Schikhof, A second course on real functions, Cambridge University Press, 1982.
That $f$ is increasing at $a$ means that there is a neighborhood $I$ of $a$ such that if $t<a$ is in $I$, then $f(t)\le f(a)$, and if $t>a$ is in $I$, then $f(t)\ge f(a)$. Thus, $f$ is not increasing at $a$ iff every neighborhood of $a$ has points $t$ such that $(f(t)-f(a))(t-a)<0$. Being decreasing at $a$ can be stated similarly. See here.
We know that if $f$ is differentiable at $a$ and $f'(a)>0$ then $f$ is increasing at $a$, and if $f'(a)<0$, then $f$ is decreasing at $a$, so if a nowhere monotone function has a point $a$ in its domain where $f'(a)$ exists, then we must have $f'(a)=0$. It is indeed possible for a non-constant continuous increasing function $f$ to satisfy $f'(a)=0$ almost everywhere (we say that $f$ is singular). (Of course, if $f'(a)=0$ everywhere, then $f$ is constant.) The best known example of this phenomenon is Cantor's function, also known as the Devil's staircase (The link goes to O. Dovgoshey, O. Martio, V. Ryazanov, M. Vuorinen. The Cantor function, Expositiones Mathematicae, 24 (1), (2006), 1-37).
The above being said, note anyway that being increasing at a point is far from being increasing in a neighborhood of the point. If we require that $f$ is differentiable (and not constant), then there will be points $a$ where $f'(a)>0$ (so $f$ is increasing at $a$) or $f'(a)<0$ (so $f$ is decreasing at $a$). Nevertheless, as shown for example in Katznelson and Stromberg (Everywhere differentiable, nowhere monotone, functions, The American Mathematical Monthly, 81, (1974), 349-353) we can still find differentiable functions $f$ that are monotone on no interval. (I briefly state some properties of their example here; there used to be an accessible link to the paper, but apparently that is no longer the case.) |
Is $n_p(G)$ unique for different groups of size $p$? | Hint: take $S_3$ and $C_6$ Both groups of order $6$. And look at the prime $2$. |
Why does the log-log scale on my Slide Rule work? | If x = 3n, then log x = n log 3.
The C scale is logarithmic, which means if the reading is p, then the distance is proportional to log p.
Similarly, in the LLx scale the distance is proportional to log log p.
Thus, when you align 1 to "3" in LL3, you introduce an offset of (log log 3). Suppose you get a reading of n in the C scale, then the corresponding value in LL3 would be:
log log p = log log 3 + log n
(LL3) (offset) (C)
eliminating one level of log gives
log p = log 3 * n
eliminating one more level of log gives
p = 3^n
LL2 is the same as LL3 except it covers a different range. |
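The scale arithmetic above can be sketched directly: add the offset log log 3 to log n, then undo the two logs.

```python
import math

# Aligning C-scale "1" with 3 on LL3 adds an offset of log(log 3);
# a C-scale reading of n then lands on log log p = log log 3 + log n.
offset = math.log(math.log(3))

for n in (1.0, 1.5, 2.0, 2.7):
    p = math.exp(math.exp(offset + math.log(n)))  # undo the two logs
    print(n, p, 3 ** n)   # p matches 3^n
```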
Understanding proof : Runge-Kutta and B-series | (2a) Yes. (2b) You are right that in the cited form the proof of the theorem seems to assume that an expansion of the step as B-series exists, and only computes the concrete form of its coefficients. As I do not know what preceded this theorem, this may be a defect or just some unfortunate formulation.
The steps to cover this gap are not that large
For a generic smooth or analytic $f$ the stage equations as implicit system of equations have a Taylor expansion in powers of $h$.
Modulo $O(h)$ it is trivial to see that $Y_{0,i}=y_0+O(h)$.
Modulo $O(h^2)$ one then sees $Y_{0,i}=y_0+hc_if(y_0)+O(h^2)$, $c=A1\!\!1_s$. This can be interpreted as the beginning of a B-series.
Now conclude by induction that if $Y_{0,i}$ is a B-series $\sum_{|\tau|<k}\alpha(\tau)\Phi_i(\tau)(h)F(\tau)$ modulo $O(h^k)$, then by inserting into the stage equations it follows that also the degree $k$ terms, $|\tau|=k$ are terms of the B-series, as the expansion of $f(\sum_{|\tau|<k}\alpha(\tau)\Phi_i(\tau)(h)F(\tau))$ consists of only B-series terms (as that's the design principle of the B-series). (2c) Disregarding the specific composition, this gives some other set of B-series coefficients, provisionally call them $\Phi'_i$.
Looking closer at the terms and comparing coefficients, one then also finds the coefficient relations of the theorem.
(2d) $\Phi'$ is to be understood as the column vector of $\Phi'_1,...,\Phi'_s$. The final subscript $i$ then indicates that the row-$i$ term is extracted. This is just matrix-vector-multiplication notation, $\sum_j A_{ij}x_j=[Ax]_i$.
The formula is somewhat over-complicated, it would have been simpler to write
$$
\Phi_i(\tau)(h)=\begin{cases}1,&\tau=\emptyset\\h\left(\mathcal{A} \Phi^{\prime}(\tau)\right)_{i},&\text{else}\end{cases}
$$
There are no terms without $f$ in the expansion of $f(Y_{0,j})$, so $\Phi'(\emptyset)=0$, I'm not sure why this was explicitly added in (2e). |
show that linear space is direct sum of two subspaces and find projections | To show it's a direct sum you have to check $L_1$ has dimension $3$ and $L_2$ has dimension $2$, and that $L_1\cap L_2=\{0\}$.
For the projection of $g(x)$ onto $L_1$ parallel to $L_2$, you need to find coefficients $\lambda, \mu$ such that $\;g(x)-\lambda(x^2+x)-\mu(x^3+1)$ satisfies the conditions defining $L_1$.
The projection of $g(x)$ onto $L_2$ parallel to $L_1$ is of course $\lambda(x^2+x)+\mu(x^3+1)$. |
The dimension of a Polyhedron using its vertices | You have three points, not collinear. Their convex hull is a triangle,
in some plane. So it has dimension $2$.
In general with points $P_1,\ldots,P_n$, the dimension of their
convex hull is the dimension of the vector space spanned by
$\vec{P_1P_2},\vec{P_1P_3},\ldots,\vec{P_1P_n}$. |
non negative super martingale | $X_n$ is uniformly bounded from below, so it converges a.s. to an integrable random variable for Doob's Supermartingale Convergence Theorem.
This fact together with the event $\{T < \infty\}$ allows us to use the Optional Stopping Theorem, so
$\mathsf{E}[X_{T+n}|\mathscr{F}_{T}]\le X_{T}=0$.
But since $X$ is non-negative, we have for all $n\ge 1$
$0 \le \mathsf{E}[X_{T+n}|\mathscr{F}_{T}]\le X_{T}=0$. |
calculating expected number of packets. | First, create a series representing the chance of getting a mini-figure, assuming there are 4:
$$(4/4) + (3/4) + (2/4) + (1/4)$$
The first one has a 100% chance of not being already owned. After that there are 3 new ones remaining out of 4, then 2, before finally hunting for that last figure. The expected number of packets bought to get a new figure is the reciprocal of its probability: a chance of 1/4 means an average of 4 attempts, and a chance of 3/4 means an average of 4/3 attempts. The series for the total number of attempts is:
$$(4/4) + (4/3) + (4/2) + (4/1)$$
In series notation I get:
$$\sum_{n=1}^4 \frac 4n$$
or, more generally, for $x$ number of figures:
$$\sum_{n=1}^x \frac xn = x\sum_{n=1}^x \frac 1n$$
Which equals $xH_x$, where $H_x$ is the $x$th harmonic number ($H_x = \sum_{n=1}^x \frac 1n$). Since $H_x \approx \ln(x)$, the total expected number of attempts is approximately $x\ln(x)$.
Note: using $\ln(x)$ is only approximate (which is why the question says "approximately"); the harmonic-number form is exact. Even as $x$ approaches infinity, $\ln(x)$ remains about $0.5772156649\ldots$ (the Euler–Mascheroni constant) below the harmonic number. |
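The harmonic-number formula can be checked against a small Monte Carlo simulation (a sketch; `packets_until_complete` is an illustrative helper, not from the original):

```python
import random

random.seed(42)

def packets_until_complete(x):
    """Buy packets until all x distinct figures have been seen."""
    seen, count = set(), 0
    while len(seen) < x:
        seen.add(random.randrange(x))   # each packet holds a uniformly random figure
        count += 1
    return count

x = 4
trials = 20000
mean = sum(packets_until_complete(x) for _ in range(trials)) / trials

exact = x * sum(1 / n for n in range(1, x + 1))   # x * H_x = 25/3
print(mean, exact)
```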
Question about proving: If $x \in \omega$ and $y \in x$, then $ y \in \omega$ | You are correct. The statement $y\in\emptyset$ is false. Hence the statement "$\emptyset\in\omega$ and $y\in\emptyset$" is false. Since this is the antecedent of the statement you wish to prove, the statement is vacuously true. Recall $p\to q$ is true whenever $p$ is false. |
Polyhedron and Euler's formula | In many situations, “Euler's formula says so” would be sufficient justification. It demonstrates that you didn't guess, didn't do some brute force computation, but instead knew suitable tools and how to apply them. In other situations, taking that formula for granted would be insufficient, and you might have to justify why the formula holds. And in between these two there is a range where you would need to closely examine that all the precondidtions of Euler's formula hold. In particular that the polyhedron is a topological sphere (genus 0), which sounds like it should be identical to the no tunnels property you quoted. |
Tail inequalities for multivariate normal distribution | These inequalities have been studied by [Savage1962]:
Let $M=\Sigma^{-1}$ with $M = (m_{ij})$.
Assuming that for all $1\leq i\leq d$, we have $\Delta_i := \sum_{j=1}^d C_j m_{ji} > 0$, then,
$$F(C,\Sigma) = \frac{ |M|^{\frac 1 2}}{(2\pi)^{\frac d 2}} \int_0^\infty\cdots\int_0^\infty \exp\Big[-\frac 1 2 (X+C)^\top M(X+C) \Big] dX_1\dots dX_d$$
$$ < \Big(\prod_{i=1}^{d} \Delta_i\Big)^{-1} \frac{ |M|^{\frac 1 2}}{(2\pi)^{\frac d 2}} \exp \Big[ - \frac 1 2 C^\top M C \Big]~.$$
And more recently by [Hashorva2003] in a more general case.
[Savage1962] Savage, I.R. Mill's ratio for multivariate normal distributions. J. Res. Natl. Bur. Stand., Sec. B: Math.& Math. Phys., Vol. 66B, No. 3, p. 93 (1962)
[Hashorva2003] Hashorva, E, and Hüsler, J. On multivariate Gaussian tails. Annals of the Institute of Statistical Mathematics 55.3 p. 507-522. (2003) |
How this trade-off has been calculated for Regularized least-squares in convex-optimization boyd book | My apologies, please ignore my question. I have solved my problem. I am just posting the answer in case someone needed. In case the question is too irrelevant here, it can be deleted. Regards |
Understanding AGM Induction | The root in the expression for the previous $G$ will be $n-1$ so raising to $n-1$ will "undo" it.
$$\left(\sqrt[n-1]{\prod_i{a_i}}\right)^{n-1} = \prod_i a_i$$ |
Basis for a polynomial vector space | To expand on my comment above:
By polynomial long division and the fact that polynomial space is a euclidean space (i.e. euclidean division algorithm works) we know that every polynomial can be written uniquely as
$$f(x)=(x(x-1)(x-2))q(x)+r(x)$$
with $r(x)=0$ or the degree of $r(x)$ strictly less than three.
Ordinarily with no extra conditions, this implies that $\{x^2(x(x-1)(x-2)),~~ x(x(x-1)(x-2)),~~ (x(x-1)(x-2)),~~ x^2,~~ x,~~ 1\}$ forms a basis for $\Bbb P_5$
That is to say, any degree five or less polynomial can be written uniquely in the form $$f(x)=x(x-1)(x-2)(ax^2+bx+c)+dx^2+ex+f$$
The condition that $f(0)=f(1)=f(2)$ implies that with $f(x)=(x(x-1)(x-2))q(x)+r(x)$ that $r(x)$ must be a constant, implying $d=e=0$
One such choice of a basis is then $\{x^2(x(x-1)(x-2)),~~ x(x(x-1)(x-2)),~~ (x(x-1)(x-2)),~~ 1\}$
Rewritten if you prefer:
$$\{x^5-3x^4+2x^3,~~x^4-3x^3+2x^2,~~ x^3-3x^2+2x,~~1\}$$ |
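A sympy sketch checking both defining properties of this candidate basis (each element takes equal values at $0$, $1$, $2$, and the four polynomials are linearly independent):

```python
import sympy as sp

x = sp.symbols('x')
basis = [x**5 - 3*x**4 + 2*x**3,
         x**4 - 3*x**3 + 2*x**2,
         x**3 - 3*x**2 + 2*x,
         sp.Integer(1)]

# Each element takes equal values at x = 0, 1, 2 ...
for p in basis:
    assert p.subs(x, 0) == p.subs(x, 1) == p.subs(x, 2)

# ... and the four polynomials are linearly independent in P_5
M = sp.Matrix([[sp.expand(p).coeff(x, k) for k in range(6)] for p in basis])
print(M.rank())
```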
How to prove a "generalization" of the "Clarkson inequality"? | The left-hand side is increasing in $r$, see for example How do you show monotonicity of the $\ell^p$ norms?. Therefore it suffices to prove the inequality for the smallest possible value of $r$, that is for $r=s$:
$$
\left( \lvert x+y \rvert^s + \lvert x-y \rvert^s \right)^{1/s} \leq 2^{1-1/s}\left( \lvert x \rvert^s + \lvert y \rvert^s \right)^{1/s}.
$$
This Clarkson type inequality holds for $s \ge 2$, see for example:
Showing $\left|\frac{a+b}{2}\right|^p+\left|\frac{a-b}{2}\right|^p\leq\frac{1}{2}|a|^p+\frac{1}{2}|b|^p$
Three related inequalities (the first being $2(|a|^p + |b|^p) \leq |a + b|^p + |a - b|^p \leq 2^{p-1}(|a|^p + |b|^p)$)
On the other hand, the inequality does not hold in general for $0 < s < 2$, as can be seen by setting $x=1$ and $y=0$. |
Integral Conversion To Spherical Coordinates | Parametrize with spherical coordinates
Note $\rho \sin (\phi)=r$ and $\rho \cos (\phi)=z$. This gives that,
$$\vec r(\phi,\theta)=\langle 1\sin (\phi) \cos (\theta), 1\sin (\phi) \sin (\theta), 1 \cos (\phi) \rangle$$
Now we need to compute $|r_\phi \times r_\theta|$. Luckily, we already know that if there were a $\rho$ instead of $1$ above we would get $\rho^2 \sin (\phi)$, because that is the Jacobian associated with a change to spherical coordinates. So we get $1^2 \sin (\phi)$. Hence $dS=1^2 \sin (\phi)\, dA$.
So then
$$\iint_{S} x^2 dS=\iint (\sin (\phi) \cos (\theta))^2 1^2 \sin (\phi) dA$$
Of course $\theta \in [0,2\pi]$ and $\phi \in [0,\pi]$. |
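Carrying out the integral numerically (a scipy sketch; by symmetry $\iint_S x^2\,dS = \tfrac13\iint_S (x^2+y^2+z^2)\,dS = \tfrac{4\pi}{3}$):

```python
import math
from scipy.integrate import dblquad

# Integrand: x^2 dS = (sin(phi)*cos(theta))^2 * sin(phi) dphi dtheta
val, _ = dblquad(
    lambda phi, theta: (math.sin(phi) * math.cos(theta)) ** 2 * math.sin(phi),
    0, 2 * math.pi,                          # theta (outer variable)
    lambda theta: 0, lambda theta: math.pi,  # phi (inner variable)
)

print(val, 4 * math.pi / 3)
```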
When is there in a probability space no null sets? | Edit: I am assuming that all singletons are measurable. See the comments below. (This is a reasonable assumption. In many cases, the probabilty measure lives on a topological space in which singletons are closed and all Borel sets are measurable.)
There is only the trivial null set iff every singleton has positive measure.
So in the countable case, if the underlying space is $\mathbb N$, you could assign to each $\{n\}$, $n\in\mathbb N$, the measure $2^{-n-1}$.
This generates a probability space with no non-trivial set of measure $0$.
If your space is uncountable, you will always have a singleton of measure 0, since there cannot be uncountably many pairwise disjoint measurable sets of positive measure.
As for filtering away null sets, yes, you can always consider the $\sigma$-algebra of measurable sets and factor out the ideal of sets of measure 0. This gives you the measure algebra of the space, and the only measure 0 element of this algebra is the equivalence class of the empty set, but this process doesn't give you a probability space as such, just a complete Boolean algebra.
Decrypting a Vigenere cipher with affine key | Yes, it could be broken using a ciphertext only attack given enough ciphertext.
Breaking vigenere ciphertext typically involves brute forcing the length of the key, then running statistical analysis. In this case, we already know the length of the key (26 characters), so we can go right to statistical analysis. With enough ciphertext this would be easy.
Obviously this is assuming knowledge of the plaintext language. In practice, that isn't too strong of an assumption, however. |
If a ≡ b (mod n) and m|n, then a ≡ b (mod m) | This is a perfect proof. This is sufficiently concise. |
integrate $\int\frac{\sin x}{1+\sin^{2}x}dx$ | Hint:
Write $$\frac{-1}{2-u^2}=\frac{-1}{(\sqrt{2}+u)(\sqrt{2}-u)}=\frac{-1}{2\sqrt{2}}\left(\frac{1}{\sqrt{2}+u}+\frac{1}{\sqrt{2}-u}\right)$$ |
Laplacian inequality in Sobolev space | No, this is not true for all $\alpha>0$. The necessary and sufficient condition is $\alpha\ge \lambda_1$ where $\lambda_1>0$ is the first eigenvalue of the Laplacian with Dirichlet boundary condition. Indeed, if $|\Delta \theta| \le \alpha |\theta|$ in $\Omega$, then integration by parts yields
$$\int_\Omega |\nabla \theta|^2 = - \int_\Omega \theta\,\Delta \theta \le \alpha \int_\Omega \theta^2 \le \frac{\alpha}{\lambda_1} \int_\Omega |\nabla \theta|^2 $$
hence $\alpha\ge \lambda_1$.
Conversely, if $\alpha\ge \lambda_1$ then the first eigenfunction of the Dirichlet Laplacian satisfies $|\Delta \theta|=\lambda_1|\theta|\le \alpha|\theta|$. |
Integrating the separable, first-order ordinary differential equation $m \frac{dv}{dt} = mg - av$ | Hint The denominator of l.h.s. of the differential equation
$$\frac{m \,dv}{m g - a v} = dt$$
is linear in $v$, so this can be readily integrated. To see things a little more clearly, make the (linear) change of variables $u = m g - a v$, $du = -a \,dv$.
This gives $$\frac{m \,dv}{m g - a v} = -\frac{m}{a} \frac{du}{u} .$$ As you probably recall, $\int \frac{du}{u} = \log |u| + C$. |
sum of binomial coefficients expansion to prove equation | $$\binom{2}{2}+\ldots+\binom{n-1}{2}+\binom{n}{2}=\frac{1\times 2}{2}+\frac{2\times 3}{2}+\frac{3\times 4}{2}+...+\frac{(n-1)\times n}{2}=\frac{1}{2}(1\times 2+2\times 3+3\times 4+...+(n-1)\times n)=\frac {1}{2}\times \frac{(n-1)n(n+1)}{3}=\frac{(n-1)n(n+1)}{6}$$ |
Find $f\in L^1(\mathbb{R})$ such that $\|\int_{-N}^N \hat{f}(\xi)e^{2\pi i\xi \cdot}\operatorname{d}\xi-f\|_1\nrightarrow 0, N\to\infty?$ | Let $f(t)=\begin{cases}1&-\frac12\le t\le \frac12\\0&\text{otherwise}\end{cases}$.
Then
$$\hat{f}(s) = \int_{-\frac12}^{\frac12} e^{-2\pi ist}\,dt = \frac1{-2\pi is}\left(e^{-\pi is}-e^{\pi is}\right)=\frac{\sin(\pi s)}{\pi s}$$
Integrate that against $e^{2\pi isx}$ on $[-N,N]$ and we get
\begin{align*}g_N(x) &= \int_{-N}^N \frac{\sin(\pi s)}{\pi s}e^{2\pi isx}\,ds = 2\int_{0}^N \frac{\sin(\pi s)}{\pi s}\cos(2\pi sx)\,ds\\
g_N'(x) &= -2\int_{0}^N 2\sin(\pi s)\sin(2\pi sx)\,ds = 2\int_0^N \cos(\pi s(1+2x))-\cos(\pi s(1-2x))\,ds\\
g_N'(x) &= \frac{2\sin(\pi N(2x+1))}{\pi(2x+1)} - \frac{2\sin(\pi N(2x-1))}{\pi(2x-1)}\end{align*}
For some $N$, that's pretty small. The two $\sin$ terms are equal for integer $N$, leaving us with something that's $O(x^{-2})$ as $x\to\infty$. For other $N$, it grows larger. If $N-\frac12$ is an integer, then $\sin(2\pi Nx+\pi N)=-\sin(2\pi Nx-\pi N)$ and
$$g_N'(x) = \frac{8x\sin(2\pi Nx+\pi N)}{(4x^2-1)\pi} = \frac{\pm 8x\cos(2\pi Nx)}{(4x^2-1)\pi}$$
For large $x$ and half-integer $N$, then, $g_N$ changes by
$$\int_{(k-\frac12)/N}^{(k+\frac12)/N}g_N'(x)\,dx\approx \int_{(k-\frac12)/N}^{(k+\frac12)/N}\frac{\pm 8k/N\cos(2\pi Nx)}{4k^2\pi/N^2}\,dx = \frac{\pm 2}{k\pi^2}\approx\frac{\pm 2}{\pi^2 Nx}$$
between extremes, as $x$ changes by $\frac1N$. That's too much variation for an $L^1$ function; $|g_N(x)|$ will be at least $\frac cx$ for some fixed $c>0$ most of the time, and thus won't be in $L^1$.
So then, even in this simple example, we don't get $L^1$ convergence - a sequence of functions that aren't in $L^1$ at all can't possibly converge to anything in $L^1$. This was the first example I tried - just pick $f$ far enough from smoothness that $\hat{f}$ isn't $L^1$, and the failure is practically inevitable. |
Recurrence Relation for QuickSort | If
$T(N)= T(N-1)+T(0)+\Theta(\sqrt{N})
$,
then there are positive constants
$a$ and $b$ such that
$a\sqrt{n}
<T(n)- T(n-1)-T(0)
< b \sqrt{n}
$.
Summing this from
$n=2$ to $m$,
$\sum_{n=2}^{m} a\sqrt{n}
<\sum_{n=2}^{m} (T(n)- T(n-1)-T(0))
< \sum_{n=2}^{m} b \sqrt{n}
$,
and,
since
$\sum_{n=2}^{m} \sqrt{n}
\approx \frac12 m^{3/2}
$,
we get
$ T(m)-T(1)-(m-1)T(0)
=\Theta(m^{3/2})
$
or
$ T(m)
=\Theta(m^{3/2})
$. |
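The asymptotics can be seen concretely with a small numerical sketch (my own assumptions: the $\Theta(\sqrt N)$ term is taken to be exactly $\sqrt N$ and $T(0)=T(1)=0$); the ratio $T(m)/m^{3/2}$ should approach $\frac23$.

```python
import math

# Iterate T(n) = T(n-1) + sqrt(n), starting from T(1) = 0.
T = 0.0
m = 10 ** 5
for n in range(2, m + 1):
    T += math.sqrt(n)

ratio = T / m ** 1.5   # expected to be close to 2/3
```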
Closure of $B=\{ f\in C'[0,1] : |f(x)|\leq 1, |f'(t)|\leq 1 \forall t\in[0,1]\}$ (NBHM $2005$) | Arzelà–Ascoli states that a set of equicontinuous, uniformly bounded functions on $[0,1]$ has a uniformly convergent subsequence. Hence $B$ is relatively compact (since $C[0,1]$ is a metric space).
A convex set is connected, and so is the closure ($C[0,1]$ is a normed space). To show that $B$ is convex, consider $f,g \in B$, and $\lambda \in [0,1]$. Then $|\lambda f(x)+(1-\lambda)g(x)| \le \lambda |f(x)| + (1-\lambda)|g(x)| \le 1$, and similarly for
$\lambda f'(x)+(1-\lambda)g'(x)$. Hence $\lambda f+(1-\lambda)g \in B$.
If I take the function $f(x) = 3$, it is clear that $B(f,1) \cap B = \emptyset$. Hence $B$ cannot be dense. |
Use method of characteristics, find a parametric representation of a shock equation | This is only a quick answer (not the complete solving of the PDE) only to see where the answer i) comes from :
You should have given the notations you use for the variables in the characteristics equations. The symbols used below are probably different, which might be confusing.
If you need more explanation, ask someone else. I will not be here for a long time. |
How to calculate the wave front set for the characteristic function of a 2-dimensional ball? | This is shown in detail in Section 4 of this paper. The idea is to transform to coordinates in terms of which the characteristic function of the ball is a tensor product between a smooth function and a one-dimensional Heaviside distribution. Of course, in this case polar coordinates do the trick. |
Why does $\lim_{x\to\infty}\frac{(-1)^{x+1}}{x}$ converge to 0? | We have
$$0\le\left|\dfrac{(-1)^{x+1}}{x}\right|=\dfrac{1}{x}\xrightarrow {x\to\infty} 0$$ |
Trigonometry to radially spread points around a center | The spread angle is correct if you're trying to get an even spread around your central point.
Now we need to use some trigonometry. Remember that all trigonometric functions work relative to the positive $x$-axis, in a counterclockwise direction.
So you need to calculate the following values for each of your $n$ points.
$$
\Delta x = h\cdot\cos(\theta)\\
\Delta y = h\cdot\sin(\theta)
$$
Where $\theta$ is the angle between the current point and the $x$-axis (most languages work in radians rather than degrees); the result is then offset by the center.
In pseudocode this looks like
h = radius
c = (x0,y0) //center point
n = number_of_steps
stepangle = 2*pi/n
for i in 0..n-1
theta = i * stepangle
x = h * cos(theta) + c[0]
y = h * sin(theta) + c[1] //these are your new x and y values
end for
I can't see any major problems with your code, but I don't know too much about JQuery to tell if the loop is doing the right thing. |
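For reference, here is the pseudocode above as a runnable Python sketch (function and variable names are my own):

```python
import math

def spread_points(center, radius, n):
    """Evenly spread n points on a circle of the given radius around center."""
    cx, cy = center
    step_angle = 2 * math.pi / n
    points = []
    for i in range(n):
        theta = i * step_angle
        points.append((radius * math.cos(theta) + cx,
                       radius * math.sin(theta) + cy))
    return points

pts = spread_points((3.0, -1.0), 5.0, 12)
# Every point should be exactly `radius` away from the center.
max_err = max(abs(math.hypot(x - 3.0, y + 1.0) - 5.0) for x, y in pts)
```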
Finding relation of two sets | Short answer: Yes and you also explained correctly why.
Maybe in other words to help your understanding:
The set $R$ contains all pairs of numbers $(a,b)$ for which $a$ is an element of $A$ and $b$ is an element of $B$ and furthermore the pair must also satisfy the condition $b = a^2$. The pairs you mentioned are the only ones that satisfy these three conditions. |
Gram Determinant equals volume? | Let $d$ vectors ${\bf a}_i\in V$ $\>(1\leq i\leq d)$ be given, where $V$ is an $n$-dimensional euclidean space, e.g., $V={\mathbb R}^n$ with the usual scalar product. Then there is a $d$-dimensional subspace $U\subset V$ containing the ${\bf a}_i\,$, and $U$ inherits the euclidean structure from $V$. Let $({\bf e}_j)_{1\leq j\leq d}$ be an orthonormal basis of $U$. Then the $d$-dimensional volume of the parallelotope $P$ spanned by the ${\bf a}_i$ is given by
$${\rm vol}_d(P)=|\det(A)|\ ,$$
where $A$ is the $d\times d$-matrix containing the coordinates $(a_{i.1},a_{i.2},\ldots,a_{i.d})$ of the ${\bf a}_i$ in terms of the ${\bf e}_j$ in its columns. (This is the $d$-dimensional analogue of the formula
$${\rm vol}_3(P)=\left|\>\det\left[\matrix{a_{1.1}&a_{2.1}&a_{3.1}\cr
a_{1.2}&a_{2.2}&a_{3.2}\cr a_{1.3}&a_{2.3}&a_{3.3}\cr}\right]\>\right|$$
valid in $3$-space.) It follows that
$$\left({\rm vol}_d(P)\right)^2=\bigl(\det(A)\bigr)^2=\det(A'\>A)=\det(G)\ .\tag{1}$$
The elements $g_{ik}$ of the $d\times d$-matrix $G:=A'\,A$ are given by
$$g_{ik}:={\rm row}_i(A')\cdot{\rm col}_k(A)={\bf a}_i\cdot{\bf a}_k\qquad(1\leq i\leq d,\ 1\leq k\leq d)\ .$$
It turns out that $G$ is the matrix of the scalar products ${\bf a}_i\cdot{\bf a}_k\,$ and is independent of the basis chosen for $U$. This Gram matrix $G$ depends only on the given ${\bf a}_i$ and the euclidean structure in $V$.
Taking the square root in $(1)$ gives the final formula
$${\rm vol}_d(P)=\sqrt{\det(G)}\ .$$ |
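To make the formula concrete, here is a small pure-Python sketch (the example vectors are my own choice) comparing $\sqrt{\det G}$ with the cross-product area for $d=2$ vectors in $\mathbb{R}^3$:

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

a1 = (1.0, 2.0, 2.0)
a2 = (0.0, 1.0, -1.0)

# Gram matrix entries g_ik = a_i . a_k, then vol = sqrt(det G)
g11, g12, g22 = dot(a1, a1), dot(a1, a2), dot(a2, a2)
vol_gram = (g11 * g22 - g12 * g12) ** 0.5

# For d = 2 vectors in R^3 the same area is |a1 x a2|
cross = (a1[1] * a2[2] - a1[2] * a2[1],
         a1[2] * a2[0] - a1[0] * a2[2],
         a1[0] * a2[1] - a1[1] * a2[0])
vol_cross = dot(cross, cross) ** 0.5
```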
Question in Probability Theory | Hint: Use the known values of $|A\cap B \cap C|$ and $|B \cap C|$. |
Height of saltus of Poisson process at points of discontinuity | First note that if $Y$ has the Poisson distribution with parameter $\lambda$ then
$P[Y\ge 2]=(1/2)\lambda^2+o(\lambda^2)$ as $\lambda\to 0$. In particular, there exists $\lambda_0>0$ such that $P[Y\ge 2]\le \lambda^2$ for $0<\lambda<\lambda_0$.
Write $\Lambda(t):=\int_0^t\vartheta(s)\,ds$.
Let's show that on the time interval $[0,1]$, $N$ can only have jumps of size $1$, almost surely. In fact
$$
\{N|_{[0,1]} \hbox{ has a jump of size 2 or more}\} =\cap_{m=1}^\infty\cup_{k=1}^m\{N(k/m)-N((k-1)/m)\ge 2\}.
$$
The function $\Lambda$ defined above is continuous, hence uniformly continuous on $[0,1]$. Thus, given $\epsilon\in(0,\lambda_0)$, there exists an integer $m_0$ so large that if $s,t\in[0,1]$ with $|t-s|<1/m_0$ then $|\Lambda(t)-\Lambda(s)|<\epsilon$.
Thus, if $m\ge m_0$ then
$$
P(N(k/m)-N((k-1)/m)\ge 2)\le[\Lambda(k/m)-\Lambda((k-1)/m)]^2\le\epsilon\cdot [\Lambda(k/m)-\Lambda((k-1)/m)],
$$
and so
$$
\eqalign{
P(\cup_{k=1}^m\{N(k/m)-N((k-1)/m)\ge 2\})
&\le\sum_{k=1}^mP(N(k/m)-N((k-1)/m)\ge 2)\cr
&\le\sum_{k=1}^m[\Lambda(k/m)-\Lambda((k-1)/m)]^2\cr
&\le\epsilon\cdot\sum_{k=1}^m[\Lambda(k/m)-\Lambda((k-1)/m)]\cr
&\le \epsilon\cdot\Lambda(1).\cr
}
$$
It follows that
$$
P(\{N|_{[0,1]} \hbox{ has a jump of size 2 or more}\})=0.
$$ |
$\lnot p$ whenever $q:\;$Do I understand this? | You've done just fine:
Not $p$ whenever $q$ can indeed be translated as "Not p, if q", i.e., "If q, then not p":
This translates, literally, as $$q\rightarrow \lnot p$$
and note that this is equivalent to its contrapositive: $$p \rightarrow \lnot q$$ |
System of equations $\lfloor x\rfloor+\{y\}=1.2,\ \{x\}+\lfloor y\rfloor = 3.3$ | That's right. You can make it cleaner by taking the integer and fractional parts, and using the fact these functions are idempotent (in fact, they're linear projection operators summing to the identity). |
Condition for sequence to be differentiable at 0 | Most books define differentiable functions on an interval $ (a, b) $. But we can define the derivative of a function $ f: A \to \mathbb {R} $ at a point $ c \in A $ if is $ c $ is an accumulation point ( or cluster point ) of $ A $.
Then $f^\prime(c)$ is unic number that have the propert:
$$
\forall \epsilon >0, \exists \delta=\delta(c,\epsilon)>0\quad \mbox{ such that } x\in A \mbox{ e }0<|c-x|<\delta \implies \left|\frac{f(x)-f(c)}{x-c}- f^\prime(c) \right|<\epsilon. $$ |
Prove that it is possible to divide integral area into two equal parts | Hint: How are $h(a)$ and $h(b)$ related? |
Finding eigenvalues and eigenvectors of $B=AP$, where $P$ is a permutation matrix | What you are asking is not true. Consider the eigenvalues of
$$ \left[ \begin{matrix}
1 & 0 & 0 \\
0 & 2 & 0 \\
0 & 0 & 3 \\
\end{matrix} \right] $$
and that of
$$ \left[ \begin{matrix}
0 & 0 & 1 \\
0 & 2 & 0 \\
3 & 0 & 0 \\
\end{matrix} \right] $$
I just swapped the first and last columns; the eigenvalues are different.
EDIT: In response to comment.
If $A$ is also a permutation matrix and your field is $\mathbb{C}$ the eigenvalues are also not necessarily the same. Consider the $5 \times 5$ identity matrix and
$$ \left[ \begin{matrix}
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 1 & 0 \\
\end{matrix} \right] $$
In one case the characteristic polynomial is $-(s-1)^5$ and in the other $- s^5+s^3+s^2-1$. |
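These characteristic polynomials can be confirmed with a short pure-Python sketch (my own Laplace-expansion determinant, using the $\det(sI-A)$ convention, which flips the overall sign relative to the polynomials above since the dimension is odd; six sample points pin down a degree-5 polynomial):

```python
def det(M):
    # Laplace expansion along the first row (fine for tiny matrices)
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def charpoly_at(M, s):
    # Evaluate det(sI - M) at the point s
    n = len(M)
    return det([[(s if i == j else 0) - M[i][j] for j in range(n)] for i in range(n)])

P = [[0, 1, 0, 0, 0],
     [0, 0, 1, 0, 0],
     [1, 0, 0, 0, 0],
     [0, 0, 0, 0, 1],
     [0, 0, 0, 1, 0]]
I5 = [[1 if i == j else 0 for j in range(5)] for i in range(5)]

samples = [-2, -1, 0, 1, 2, 3]   # 6 points determine a degree-5 polynomial
perm_ok = all(charpoly_at(P, s) == s ** 5 - s ** 3 - s ** 2 + 1 for s in samples)
id_ok = all(charpoly_at(I5, s) == (s - 1) ** 5 for s in samples)
```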
Applications of orthogonal trajectories | Perhaps what you mean is how differential equations can be applied to find othogonal trajectories?
http://www.sosmath.com/diffeq/first/orthogonal/orthogonal.html |
Gradient descent (derivative) from an algorithm to minimize the error, but doesn't work with negative real value | real cannot possibly be negative, it's the sum of $2$ positive quantities as $10^{|x|} - 1$ is never less than $0$ for any $x \in \mathbb{R}$ (this is because $10^x > 1$ for $x > 0$).
So when you do the gradient descent, it tries to achieve the best possible error, which occurs when $x$ is zero. That's why you get $A_{in}$ and $A_{out}$ to be nearly zero after algorithm terminates, and in that case, the error is $(\textbf{rate} - 0 - 0 )^2 = \textbf{rate}^2$ which you already noted. |
Intuition behind the theorem about ideals with finite number of zeroes | First, by the generalized chinese remainder theorem, we have that $K[x_1,\cdots,x_n]/I$ is isomorphic to $\prod K[x_1,\cdots,x_n]/\mathfrak{m}_i^{e_i}$, where the $\mathfrak{m}_i$ are the maximal ideals corresponding to the finite collection of points $P_i\in V(I)$ and $e_i$ are the multiplicity of $I$ at these points. This corresponds to the statement that a regular function on a finite set is determined by what happens at each point.
The reason why we want to show that this is the same as the version coming from the local rings is that our original setup with $K[x_1,\cdots,x_n]$ contains some amount of "unnecessary choice": we've picked a coordinate system on $\Bbb A^n_k$, while dealing with local rings removes such choices.
To translate to the local ring side, we will show that $R_\mathfrak{m}/\mathfrak{m}^n_\mathfrak{m}\cong R/\mathfrak{m}^n$ for any ring $R$ with a maximal ideal $\mathfrak{m}$. Since localization commutes with quotients, $R_\mathfrak{m}/\mathfrak{m}^n_\mathfrak{m}\cong (R/\mathfrak{m}^n)_\mathfrak{m}$, but every element of $R\setminus\mathfrak{m}$ is already invertible in $R/\mathfrak{m}^n$, so this localization changes nothing. This shows that $K[x_1,\cdots,x_n]/I\cong \prod \mathcal{O}_i/I\mathcal{O}_i$.
Express in the form of $a+ib$. | Hints:
$\require{cancel}(1-i)^2=\bcancel{1}-2i+\bcancel{i^2}=-2i$
$1-1/i^3=1 - i / i^4=1 - i$ |
Constructing non-zero obstruction by cutting along a non-separating torus and regluing | You guess at the gluing map seems close to correct, although perhaps it might be $(u,v) \mapsto (u,u^bv)$ (I'm not quite sure what your coordinates $u,v$ represent, they might be $T^2$ coordinates in my displayed equation below).
To compute the fundamental group, use the fact that gluing the two boundary components of the space
$$A \times S^1 \approx (S^1 \times [0,1]) \times S^1 \approx (S^1 \times S^1) \times [0,1] \approx T^2 \times [0,1]
$$
gives an example of a mapping torus. By doing exercise 11 in Section 1.2 of Hatcher's book "Algebraic Topology", you will learn by example a general technique for using Van Kampen's theorem to compute the fundamental group of a mapping torus. |
Boolean Algebra simplifcation | Final steps hidden in spoiler quotes. Disclaimer: there may be a faster way to see this, but these are the steps that jumped out at me to follow in the order I noticed them
$w'x'y'z+w'xy'z+wxy'z+wx'y'z+w'x'yz$
$=w'y'z(x'+x) + wy'z(x+x')+w'x'yz~~~~$ by applying rules of distributivity
$=w'y'z+wy'z+w'x'yz~~~~$ by the fact that $x'+x=1$
$=(w'+w)y'z+w'x'yz~~~~$ by distributivity
$=y'z+w'x'yz~~~~$ by $w'+w=1$
$=z(y'+w'x'y)~~~~$ by distributivity
$=z(y'+w'x')~~~~$ by $a+a'b=a+b$
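Since there are only $2^4$ assignments, the whole chain can be verified with a brute-force truth table (a quick sketch of mine; with both complements written out, the fully simplified form is $z(y'+w'x')$):

```python
from itertools import product

def original(w, x, y, z):
    # w'x'y'z + w'xy'z + wxy'z + wx'y'z + w'x'yz
    nw, nx, ny = not w, not x, not y
    return ((nw and nx and ny and z) or (nw and x and ny and z) or
            (w and x and ny and z) or (w and nx and ny and z) or
            (nw and nx and y and z))

def simplified(w, x, y, z):
    # z * (y' + w'x')
    return z and ((not y) or ((not w) and (not x)))

equivalent = all(original(*v) == simplified(*v)
                 for v in product([False, True], repeat=4))
```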
How can I mathematiically enforce a set of constraints on a range from 0 to 1? | When $g$ and $h$ form a narrow hose going up-down-up there will be no quadratic polynomial $f$ in between. |
"With high probability" statement from CLT | The following holds: For every $c$ and $n$, consider the event $$A_{n,c}=\left[\left|\bar{X}-\mu\right|\leqslant\frac{c}{\sqrt{n}} \right].$$ Then, for every positive $\varepsilon$, there exists some finite $c$ such that, for every $n$, $$P(A_{n,c})\geqslant1-\varepsilon.$$ Is this what you need as a reformulated $(\ast\ast\ast)$? |
Intuition/How to determine if onto or 1-1, given composition of g and f is identity. [GChart 3e P239 9.72] | In terms of intuitions, I think that the best way to think about an injection is that it is a function that preserves difference; if two elements are different before having $f$ applied to them, then they are still different after having $f$ applied to them. Symbolically:
$$x_1\neq x_2 \Rightarrow f(x_1) \neq f(x_2).$$
Another way to think about this is as information: Knowing that $x_1\neq x_2$ is a sort of information. If $f(x_1)=f(x_2)$, then observing a value $y\in \textrm{Range}(f)$ does not tell you which $x$ was input into $f$ to give the observed $y$. Thus an injection can be said to preserve information. A non-injective function collapses values in the domain.
A typical example of a function that collapses values is a projection such as $f:\mathbb{R}^{2}\to\mathbb{R}$ given by $f(x,y)=x$. The observation of $f(x,y)$ gives you some information about $\langle x,y\rangle$ - namely it tells you the value of the first coordinate. But all information about the second coordinate has been lost.
The way to think of a left inverse is that it undoes what the original function did. So, $g$ is the left inverse of $f$ if $g\circ f$ is the same thing as doing nothing. In other words: $$g\circ f = \textrm{id}_{\textrm{dom}(f)}.$$
It should be apparent that a function $g$ can only undo the action of $f$ if the action of $f$ has not lost any information. If information has been lost when calculating $f(x)$, then there is no way that $g$ can "know" where to return a value $y=f(x)$. In other words, a function can only have a left inverse if it is injective.
More formally, the application of a function cannot create information. As long as $ f $ is known, the value of $ f (x) $ cannot contain more information than the value of $ x $, since $ f (x) $ can always be calculated from $ x $. Once information has been lost, repeated application of other functions cannot recover it. Thus, since the identity function obviously preserves information, the composition $ g\circ f $ must also preserve information, meaning that neither $ f $ nor $ g $ can lose information. There is an important caveat here though; it is not necessary that $ g $ never collapse values, only that the restriction of $ g $ to the range of $ f $ preserve information. What $ g $ does outside the range of $ f $ is irrelevant to $ g\circ f $.
The converse of the above statement, that every injective function has at least one left inverse, is similar. You can think about this in terms of fibers. If $ f :A\to B $ is a function, then for any $ b\in B $, the fiber of $ f $ at $ b $ is the set $ \{a\in A \vert f (a)=b\} $. A function is injective if all the fibers have at most one element. So, to construct a left inverse, just let $ g (b) $ be the unique element of the fiber of $ f $ at $ b$, if that fiber is nonempty (and any element of $ A $ you like if the fiber is empty).
(If you want to understand why these sets are called fibers, consider the fibers of the projection in my example above.)
The dual notions of surjections and right inverses are a little less easy to intuit in this way, but can be made sense of using fibers again.
If $g:B\to A$ is known to be a surjection, then every fiber $\{b\in B\vert g(b)=a\}$ is non empty. To construct a right inverse of $g$, all you need to do is find an $f$ that takes each $a\in A$ to some element of the corresponding fiber. As long as you accept the Axiom of Choice, this can always be done, so every surjection has a right inverse.
Conversely, it's pretty easy to see that any function with a right inverse must be a surjection. The range of $g\circ f$ is necessarily a subset of the range of $g$, but if $g\circ f=\textrm{id}_{\textrm{dom}(f)}$, then the range of $g\circ f$ must be the range of $\textrm{id}_{\textrm{dom}(f)}$, which is the entire domain of $f$. Thus the range of $g$ must be the whole domain of $f$, and $g$ must be a surjection.
So, where does this leave your questions?
(1) and (2) are a little odd. We know from the statement of that question that $g\circ f=\textrm{id}_{A}$, and so $f$ has a left inverse ($g$), and $g$ has a right inverse ($f$). Thus in both cases, the left hand side of the biconditional is assumed true, and so the biconditionals are true just in the cases that the right hand sides are true.
However, since what I have written above proves the biconditionals directly, there's nothing much to add.
(3) and (4) are more interesting. We know that $f$ preserves difference/information. But need it be surjective? Well, there's no reason to think it must. The fact that there may be points in the codomain of $f$ (i.e. $B$) which do not lie in the range of $f$ seems irrelevant to the question of information.
Consider the function $f:\mathbb{R}\to\mathbb{R}^{2}$ given by $f(x)=\langle x,x\rangle$. If I told you that $f(x)=\langle 3,3\rangle$, you would have no trouble telling me the value of $x$. And if I told you that $f(x)=\langle 3,4\rangle$, while you could not tell me the value of $x$, you could tell me that I was wrong - that $\langle 3,4\rangle$ could not possibly be the value of $f(x)$. So, you are able to construct a left inverse to $f$, even though $f$ is not surjective.
I'll leave the rest for you, unless you have more questions. |
Proving that a function is absolutely monotonic on a given interval | The track is the rightone, however, I suggest you explicitly show by induction that $f^{(k)}(x)=k!\cdot (-x)^{-k}$ for all $k\ge 1$. |
Find the point of discontinuity of the two functions 1. $f(x)=[\sin{x}]$, [] std for greatest integer function | The first one is simple once you see that $\sin(x)$ only outputs values between 1 and -1 and also repeats every 2$\pi$, so if there's a discontinuity at $x_0$, then there's also one at $x_0+2\pi$.
The second one is easier once you simplify like this: $$\lim_{n\to\infty} \frac{(1+\sin(\frac{\pi}{x}))^n+1}{(1+\sin(\frac{\pi}{x}))^n+1}-\frac{2}{(1+\sin(\frac{\pi}{x}))^n+1}$$$$ = \lim_{n\to\infty} 1-\frac{2}{(1+\sin(\frac{\pi}{x}))^n+1}$$
Now just look at $\lim_{n\to\infty}(1+\sin(\frac{\pi}{x}))^n$.
If $1+\sin(\frac{\pi}{x}) > 1$, then it blows up to infinity.
If $1+\sin(\frac{\pi}{x}) < 1$, then it goes to zero instead.
At last, if $1+\sin(\frac{\pi}{x}) = 1$, then it stays at 1.
Consider all three cases for $g(x)$ and look for the values when it switches from one case to another.
I hope you can take it from there.
Convolution of an integrable function of compact support with a bump function. | With a change of variable you obtain
$$
\int_{\mathbb{R}}f(x-y)\psi(y)dy = \int_{\mathbb{R}}f(y)\psi(x-y)dy.
$$
Now, using the fact that $\psi\in\mathscr{C}^{\infty}$ ($\psi$ and all its derivatives are bounded because $\mathrm{supp}(\psi)$ is compact) and $f\in L^1$ with compact support, the Dominated Convergence Theorem gives the differentiability you need. In particular
$$
D_x\left[ \int_{\mathbb{R}}f(y)\psi(x-y)dy\right] = \int_{\mathbb{R}}f(y)D_x\psi(x-y)dy.
$$ |
Rationalize the denominator: $(\frac{3}{2x^2}) ^{1/4}$ | Your error is when you go from the first line to the second. We have
$$\sqrt[4]{\frac{3}{2x^2}}= \frac{\sqrt[4]{3}}{\sqrt[4]{2x^2}}\cdot \frac{\sqrt[4]{2x^2}}{\sqrt[4]{2x^2}} = \frac{\sqrt[4]{6x^2}}{\sqrt[4]{4x^4}}$$
You can re-express the denominator as $\sqrt[4]{4}\sqrt[4]{x^4} = |x|\sqrt[4]{4}$, which is almost free of radicals (see my note below for where the absolute value came from). Can you take it from here?
Looking at your work, you seem to be using the (erroneous) rule $\sqrt[N]{a} \cdot \sqrt[N]{a} = a$. This is true for $N=2$, but not otherwise (this problem has $N=4$).
Edit: Technically, $\sqrt[4]{x^4} = x$ only holds if $x \ge 0$; the more general solution is $\sqrt[4]{x^4} = |x|$, since $\sqrt[4]{\cdot} \ge 0$. You may have been given that $x>0$ for the problem, at which point $|x| = x$ and there is no difference, but it's a good idea to keep this fact in mind. |
How can you prove that $b=2, m=3$ are the only positive integer solutions to $4b+3m=17$? | $$ 4b+3m=17 \implies 4b=17-3m $$
$$\implies 4b=16+(m+1)-4m = 4(4-m)+ (m+1) $$
Thus $m+1$ must be a multiple of $4$, while positivity of $b$ forces $3m<17$, i.e. $m\le 5$.
The only such choice is $m=3$, which implies $b=2$.
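Since $4b+3m=17$ with $b,m\ge 1$ bounds both variables by $17$, an exhaustive search also settles it immediately (a throwaway Python sketch):

```python
# Exhaustive search over the (small) range of possible positive values.
solutions = [(b, m)
             for b in range(1, 18)
             for m in range(1, 18)
             if 4 * b + 3 * m == 17]
```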
Clear proof of L'Hopital rule for $\infty/\infty$ form | You first proved, using Cauchy's Mean Value Theorem, that
$$\color{blue}{\frac{f(x)}{g(x)}=\frac{1-\frac{g(c)}{g(x)}}{1-\frac{f(c)}{f(x)}}\cdot\frac{f'(\xi_x)}{g'(\xi_x)}}$$
and also
$$\color{red}{\left|\frac{f'(\xi)}{g'(\xi)}-m\right|<\epsilon}\,,\,\,\text{for}\;\;0<|\xi-a|<\delta$$
and also, that
$$\lim_{x\to a}\frac{1-\frac{g(c)}{g(x)}}{1-\frac{f(c)}{f(x)}}=1\implies\color{green}{\left|\frac{1-\frac{g(c)}{g(x)}}{1-\frac{f(c)}{f(x)}}-1\right|<\frac\epsilon{|m|+\epsilon}}\;,\;\;\text{for}\;\;0<|x-a|<\delta''<\delta$$
and from here
$$\left|\frac{f(x)}{g(x)}-m\right|\stackrel{\color{blue}{(*)}}=\left|\color{red}{\left(\frac{f'(\xi_x)}{g'(\xi_x)}-m\right)}+\frac{f'(\xi_x)}{g'(\xi_x)}\color{green}{\left(\frac{1-\frac{g(c)}{g(x)}}{1-\frac{f(c)}{f(x)}}-1\right)}\right|$$$${}$$
where $\color{blue}{(*)}\;$ is the first, blue, equality above (just open up parentheses and check!). You now have only to substitute the red and green inequalities |
How to reperesent $\sin^{4}(x)$ byFourier series? | $$\sin^4{x}=\left(\dfrac{1-\cos{2x}}{2}\right)^2=\dfrac{1}{4}\left(1-2\cos{2x}+\cos^2{2x} \right)=\\
=\dfrac{1}{4}\left(1-2\cos{2x}+\dfrac{1+\cos{4x}}{2} \right)=\dfrac{3}{8}-\dfrac{1}{2}\cos{2x}+\dfrac{1}{8}\cos{4x}$$ |
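The identity is exact, so a quick numerical check on a grid of sample points (a sketch of mine) should give errors at machine-precision level:

```python
import math

# Check sin(x)^4 == 3/8 - cos(2x)/2 + cos(4x)/8 on a grid of sample points.
max_err = max(
    abs(math.sin(x) ** 4 - (3 / 8 - math.cos(2 * x) / 2 + math.cos(4 * x) / 8))
    for x in (k * 0.1 for k in range(-100, 101))
)
```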
Finding generating series | Hints:
The number of all compositions of $n$ into a positive number of positive parts is $2^{n-1}$ except when $n=0$ when it is $0$ (stars and bars can show this). So find the generating function for this
The number of compositions of $n$ into $1$ positive part is $1$ except when $n=0$ when it is $0$ (stars but no bar). So find the generating function for this
The number of compositions of $n$ into $2$ positive parts is $n-1$ except when $n=0$ when it is $0$ (stars and one bar). So find the generating function for this
So the generating function for the number of compositions of $n$ into $3$ or more positive parts is the first generating function above minus the second and the third
To make sure "every part is even" involves replacing $x$ by $x^2$, so you can find the generating function for the number of compositions of $n$ into $3$ or more positive even parts
As a check, we have $8=2+2+2+2=4+2+2=2+4+2=2+2+4$, i.e. $4$ ways of making $8$, so see if your answer reproduces this for the coefficient of $x^8$ in its expansion |
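The hints lead to the generating function $\frac{x^2}{1-2x^2}-\frac{x^2}{1-x^2}-\frac{x^4}{(1-x^2)^2}$, whose coefficient of $x^{2k}$ works out to $2^{k-1}-1-(k-1)$; a brute-force count (a verification sketch of mine) agrees, including the value $4$ at $x^8$:

```python
def count_compositions(n, min_parts=3):
    """Brute-force: compositions of n into at least min_parts positive even parts."""
    def rec(remaining, parts):
        if remaining == 0:
            return 1 if parts >= min_parts else 0
        return sum(rec(remaining - p, parts + 1)
                   for p in range(2, remaining + 1, 2))
    return rec(n, 0)

def gf_coeff(n):
    # Coefficient of x^n in x^2/(1-2x^2) - x^2/(1-x^2) - x^4/(1-x^2)^2
    if n % 2 or n == 0:
        return 0
    k = n // 2
    return 2 ** (k - 1) - 1 - (k - 1)

agree = all(count_compositions(n) == gf_coeff(n) for n in range(1, 17))
coeff_x8 = gf_coeff(8)
```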
Second order differential equation in function of a real parameter. | *Just a hint *
$$X''+2X'=\lambda X$$
$$X''+2X'-\lambda X=0 $$
First you need the characteristic polynomial. Here it's
$$R^2+2R-\lambda=0$$
The discriminant is
$$\Delta=b^2-4ac=4+4\lambda= 4(1+\lambda)$$
That's what you have to discuss: whether $\Delta$ is zero, positive, or negative.
You have three important cases: $\lambda=-1$, $\lambda >-1$, and finally $\lambda<-1$....
Find the right change of variables for the following double integral | Hint The occurrence of two equations of the form $x^2 + y^2 = C$ (for a constant $C$) suggest that we choose one coordinate to be $$v = x^2 + y^2 .$$ Choosing a second coordinate $u$ is only a little trickier---to do this, notice that the first two equations can be written in the form
$\frac{y}{\sqrt{x}} = B$ (for a constant $B$), which suggests setting $$u = \frac{y}{\sqrt{x}} .$$ By construction, this change of coordinates maps the region $D$ to a rectangle in the $uv$-plane. |
Need a reference for techniques for the evaluation of limits of functions | You can try:
http://planetmath.org/ListOfCommonLimits.html
The bottom of the list also includes a textbook source that is helpful for limits:
Catherine Roberts & Ray McLenaghan, "Continuous Mathematics", in Standard Mathematical Tables and Formulae, ed. Daniel Zwillinger. Boca Raton: CRC Press (1996): 333, 5.1 Differential Calculus
A conditional expectation problem | To prove such a statement, the common technique is to use so-called standard machinery:
$(1)$ Prove the statement is true for indicator functions;
$(2)$ Use linearity to prove it for all simple functions;
$(3)$ Use simple functions to approximate all non-negative functions;
$(4)$ For general function $f$, use $f=f^{+}-f^{-}$.
In almost every case, if you can prove $(1)$, then all the others follow immediately.
Let us now use this technique to prove what you want.
Firstly, for any $B\in\mathcal{F}$, consider the indicator function $\mathbb{1}_{B}$. Then, we compute as follows:
\begin{align*}
\mathbb{E}(\mathbb{1}_{B}|A_{i})\mu(A_{i})=\mathbb{P}(B|A_{i})\mu(A_{i})&=\dfrac{\mu(B\cap A_{i})}{\mu(A_{i})}\mu(A_{i})\\
&=\mu(B\cap A_{i})\\
&=\int_{B\cap A_{i}}d\mu\\
&=\int_{A_{i}}\mathbb{1}_{B}d\mu.
\end{align*}
Thus, the desired identity holds for indicator function.
Then, note that simple functions are the linear combination of indicator functions, and conditional expectation is linear, so the desired identity holds for all simple functions (you could write them out and combine them back and you will see).
Thirdly, for any non-negative function $f$, there exists an increasing sequence of non-negative simple functions that converges to $f$, namely $\{\phi_{n}\}_{n=1}^{\infty}$, $\phi_{n}\geq 0$, $\phi_{n}\leq \phi_{n+1}$ and $\phi_{n}\longrightarrow f$ as $n\longrightarrow\infty$. Then it follows from the monotone convergence theorem for conditional expectation and for the ordinary integral that the desired result holds for all non-negative functions, since $$\mathbb{E}(f|A_{i})\mu(A_{i})=\lim_{n\rightarrow\infty}\mathbb{E}(\phi_{n}|A_{i})\mu(A_{i})=\lim_{n\rightarrow\infty}\int_{A_{i}}\phi_{n}d\mu=\int_{A_{i}}fd\mu.$$
Finally, for a general integrable function $f$, we can write $f=f^{+}-f^{-}$, where $f^{+}$ and $f^{-}$ are non-negative. Then by the linearity of conditional expectation, it follows immediately that the desired equality holds for all integrable functions $f$.
Find the particular solution for the forced, second order differential equation | Hint: Look for a particular solution of the form $\pmatrix{a\cr b\cr} \cos(10 t) + \pmatrix{c\cr d\cr} \sin(10 t)$. |
How to inscribe a square in an arbitrary quadrilateral using compass and straight edge | Here’s a simple construction for the interesting case where the square must touch all four sides. (The other cases are easily constructed by starting with a square with three vertices on two sides of $ABCD$, and scaling it so its fourth vertex touches the third side of $ABCD$ using a homothety about the intersection of the initial two sides.)
Construct $E$ on $\overrightarrow{DA}$ with $\angle DCE = 45^\circ$, and $F$ on $\overrightarrow{CB}$ with $\angle CDF = 45^\circ$. Draw $CG$ and $DH$ perpendicular to $CD$ with $CD \parallel EG \parallel FH$. Let $GH$ intersect $AB$ at $I$. Then $I$ is a vertex of the square.
Drop perpendicular $IJ$ to $CD$. Construct $K$ on $\overrightarrow{DA}$ with $\angle DJK = 45^\circ$, and $L$ on $\overrightarrow{CB}$ with $\angle CJL = 45^\circ$. Then $K, L$ are two more vertices of the square and the last one can be constructed. |
Derivation of geodesic on $m$-sphere $\gamma : \mathbb{R} \rightarrow S^m$ is given by $\gamma(t) = \cos(t |v|)p \ + \frac{\sin(t |v|)}{|v|} v$ | Consider $\mathbb{S}^m \subset \mathbb{R}^{m+1}$ with the induced metric. What is to be known is that for a riemannian submanifold $(N,g,\nabla) \subset (M,\bar g,\bar \nabla)$, the Levi-Civita connection $\nabla$ is the orthogonal projection of the ambiant Levi-Civita connexion $\bar \nabla$ on the tangent space. Thus, if $c$ is a parametrized curve in $\mathbb{S}^m$, $\nabla_{c'} c' = \left(\bar \nabla_{c'} c'\right)^{\perp}$. Using this, one can show that the curve
\begin{align}
\gamma (t) = \cos(t\|v\|)p + \sin(t\|v\|)\frac{v}{\|v\|}
\end{align}
is a geodesic of $\mathbb{S}^m$. Indeed, in the ambient space $\mathbb{R}^{m+1}$ the connection is trivial, so a direct computation shows that $\bar\nabla_{\gamma'(t)}\gamma'(t) = \gamma''(t)=-\|v\|^2\cos(t\|v\|)p - \|v\|\sin(t\|v\|)v=-\|v\|^2\gamma(t)$, which is normal to the sphere at $\gamma(t)$, hence orthogonal to $T_{\gamma(t)}\mathbb{S}^m$, and $\nabla_{\gamma'}\gamma' = 0$. Thus, $\gamma$ is a geodesic.
To conclude, by uniqueness of geodesics (the geodesic equation $\nabla_{\gamma'}\gamma'=0$ is a second-order differential equation in charts, so a solution is determined by its initial position and velocity), if $c$ is a geodesic with $c(0)=p$ and $c'(0) = v$, then $c = \gamma$.
Edit Here is another proof, more constructive. The idea is to show that a geodesic has to lie inside a linear plane of $\mathbb{R}^{m+1}$. For that, let $p \in \mathbb{S}^m$ and let $v \in T_p\mathbb{S}^m$ be a non-zero tangent vector. Let $P = \mathrm{span}(p,v)$. Consider the linear isometry $u$ of $\mathbb{R}^{m+1}$ that is the reflection with respect to $P$. One can show it is an isometry of $\mathbb{S}^m$ because it stabilizes the sphere and preserves the scalar product on tangent spaces.
Now that we know $u$ is an isometry of $\mathbb{S}^m$, let $\gamma$ be the geodesic of $\mathbb{S}^m$ with $\gamma(0) = p$ and $\gamma'(0)=v$. Then $u\circ \gamma$ is also a geodesic (an easy computation) with the same initial data. Thus, $u\circ \gamma = \gamma$, and $\gamma$ has to lie in the set of points of $\mathbb{S}^m$ invariant under $u$, that is, $P \cap \mathbb{S}^m$. So $\gamma$ is a curve lying inside the great circle $\mathbb{S}^m\cap P$. As this is a circle and a geodesic has to be parametrized with constant velocity, it has to be of the form given above.
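The two key computational facts — that $\gamma$ stays on the sphere and that $\gamma''=-\|v\|^2\gamma$ (so the ambient acceleration is purely normal) — can be checked numerically. A minimal sketch of my own, for a concrete $p\in\mathbb{S}^2$ and tangent $v$:

```python
import math

# Numeric sanity check (a sketch, not part of the proof): for p on S^2 and a
# tangent vector v at p, gamma(t) = cos(t|v|) p + sin(t|v|) v/|v| stays on the
# sphere and satisfies gamma''(t) = -|v|^2 gamma(t), so its acceleration in
# R^3 is normal to the sphere and the tangential part of nabla vanishes.
p = (1.0, 0.0, 0.0)
v = (0.0, 0.3, 0.4)                      # p . v = 0, so v is tangent at p
nv = math.sqrt(sum(c * c for c in v))    # |v| = 0.5

def gamma(t):
    return tuple(math.cos(t * nv) * pc + math.sin(t * nv) * vc / nv
                 for pc, vc in zip(p, v))

def second_diff(t, h=1e-3):
    # central finite-difference approximation of gamma''(t)
    a, b, c = gamma(t - h), gamma(t), gamma(t + h)
    return tuple((ai - 2 * bi + ci) / h**2 for ai, bi, ci in zip(a, b, c))

t = 0.7
g = gamma(t)
assert abs(sum(c * c for c in g) - 1.0) < 1e-12       # gamma(t) on the sphere
for acc, pos in zip(second_diff(t), g):
    assert abs(acc + nv**2 * pos) < 1e-4              # gamma'' = -|v|^2 gamma
```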
Find the limit of $2+\left(-\frac{2}{e}\right)^n$, as $n\to\infty$, if it exists | $e>2$ so $\frac{2}{e}<1$ so
$$\lim_{n \to \infty}{\left(-\frac{2}{e}\right)^n}=0$$ so
$$\lim_{n \to \infty}{2+\left(-\frac{2}{e}\right)^n}=2$$ |
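A quick numeric illustration of the argument (my own check, not part of the answer): since $|-2/e|=2/e<1$, the power term dies off geometrically.

```python
import math

# Minimal numeric check: |(-2/e)^n| = (2/e)^n -> 0 because 2/e < 1,
# so 2 + (-2/e)^n -> 2.
r = -2 / math.e
assert abs(r) < 1
vals = [2 + r**n for n in (1, 10, 50, 100)]
assert abs(vals[-1] - 2) < 1e-12
```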
Number of solutions for an equation for specific values | Hints on 1) to put you on track.
under condition $x_1\geq3$, any solution of $x_1+x_2+x_3+\cdots+x_{10}=30$ in nonnegative integers $x_1,x_2,\dots,x_{10}$ corresponds with a solution of $y_1+x_2+x_3+\cdots+x_{10}=27$ in nonnegative integers $y_1,x_2,\dots,x_{10}$. This is with the understanding that $x_1=y_1+3$.
extra condition $x_2\leq2$ then tells us that it is enough to find the number of solutions of $y_1+x_3+\cdots+x_{10}=n$ where $n\in\{25,26,27\}$. These $3$ cases can be handled separately.
these problems can be solved by applying stars and bars. |
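The hints can be carried out concretely. A sketch following them, with an independent brute-force cross-check of my own (the recursion is not part of the answer):

```python
from math import comb
from functools import lru_cache

# Stars-and-bars count following the hints: substitute y1 = x1 - 3, then sum
# over x2 in {0, 1, 2}, leaving 9 free variables summing to n in {25, 26, 27}.
by_formula = sum(comb(n + 8, 8) for n in (25, 26, 27))

# Independent cross-check (my own): memoized recursion over x1, ..., x10
# with the constraints x1 >= 3 and x2 <= 2.
@lru_cache(maxsize=None)
def count(k, total):
    if k == 0:
        return 1 if total == 0 else 0
    lo = 3 if k == 10 else 0                  # k == 10: choosing x1, so x1 >= 3
    hi = min(total, 2) if k == 9 else total   # k == 9: choosing x2, so x2 <= 2
    return sum(count(k - 1, total - x) for x in range(lo, hi + 1))

assert by_formula == count(10, 30)
```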
Find the value of $\int_0^{\infty} \frac{x^{a-1}\,\mathrm{d}x}{1+x^{2}},\,$ where $\,0<a<2$ | I imagine you have integrated $\,\,f(z)=\dfrac{z^{a-1}}{z^2+1},\,$ along the contour
$\,\gamma=\gamma_1\cup\gamma_2\cup\gamma_3\cup\gamma_4,\,$ where
$$
\gamma_1=[-R,-\varepsilon],\quad\gamma_2=\{\varepsilon\mathrm{e}^{i(\pi-t)}: t\in[0,\pi]\},\quad \gamma_3=[\varepsilon,R],\quad\gamma_4=\{R\mathrm{e}^{it}: t\in[0,\pi]\}.
$$
In such case
$$
z^{a-1}=\mathrm{e}^{(a-1)\log z},
$$
where $\log z$ is a branch of logarithm defined in
$$
\Omega=\mathbb{C}\setminus \{it: t\in (-\infty,0]\},
$$
as follows. If $z=r\mathrm{e}^{i\vartheta}$, then $\log z=\log r+i\vartheta$, with $$
\vartheta\in (-\pi/2,3\pi/2).
$$
So, in such case, Residue Theorem provides
$$
\int_\gamma f(z)\,dz=2\pi i\,\mathrm{Res}\left(\frac{z^{a-1}}{1+z^2},i\right)=2\pi i\cdot\frac{i^{a-1}}{2i}=\pi\mathrm{e}^{i(a-1)\pi/2}=-i\pi \mathrm{e}^{ia\pi/2}.
$$
Clearly
$$
\lim_{R\to\infty,\,\varepsilon\to 0}\int_{\gamma_1}f(z)\,dz=
\int_{-\infty}^0\frac{z^{a-1}}{z^2+1}\,dz=(-1)^{a-1}\int_0^\infty\frac{x^{a-1}\,dx}{1+x^2}=-\mathrm{e}^{ia\pi}\int_0^\infty\frac{x^{a-1}\,dx}{1+x^2},
$$
since $(-1)^{a-1}=\mathrm{e}^{i(a-1)\pi}=-\mathrm{e}^{ia\pi},\,$ and
$$
\lim_{R\to\infty,\,\varepsilon\to 0}\int_{\gamma_3}f(z)\,dz=
\int_0^{\infty}\frac{z^{a-1}}{z^2+1}\,dz=\int_0^\infty\frac{x^{a-1}\,dx}{1+x^2}.
$$
Meanwhile,
$\lim_{\varepsilon\to 0}\int_{\gamma_2}f(z)\,dz=0$ and
$\lim_{R\to\infty}\int_{\gamma_4}f(z)\,dz=0$. Hence, altogether
$$
(1-\mathrm{e}^{ia\pi})\int_{0}^\infty\frac{x^{a-1}\,dx}{1+x^2}=
\lim_{\varepsilon\to 0,\,\,R\to\infty}\int_\gamma f(z)\,dz=
2\pi i\,\mathrm{Res}\left(\frac{z^{a-1}}{1+z^2},i\right)
=-i\pi \mathrm{e}^{ia\pi/2}.
$$
Hence
$$
\int_{0}^\infty\frac{x^{a-1}\,dx}{1+x^2}=\frac{i\pi \mathrm{e}^{ia\pi/2}}{\mathrm{e}^{ia\pi}-1}=\frac{i\pi }{\mathrm{e}^{ia\pi/2}-\mathrm{e}^{-ia\pi/2}}=\frac{\pi}{2\sin (a\pi/2)}.
$$ |
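The closed form can be confirmed numerically. The folding trick below is my own (not from the answer): map $[1,\infty)$ onto $(0,1]$ via $x\mapsto 1/x$, giving $\int_0^1 (x^{a-1}+x^{1-a})/(1+x^2)\,dx$, then substitute $x=u^2$ so the integrand is smooth for $1/2<a<3/2$.

```python
import math

# Numeric check of  int_0^inf x^(a-1)/(1+x^2) dx = pi / (2 sin(a*pi/2)),
# for 1/2 < a < 3/2 after folding and the substitution x = u^2.

def simpson(f, lo, hi, n=2000):
    # composite Simpson's rule; n must be even
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(lo + i * h)
    return s * h / 3

def integral(a):
    # smooth integrand on [0, 1] after folding [1, inf) back and setting x = u^2
    g = lambda u: 2 * (u**(2*a - 1) + u**(3 - 2*a)) / (1 + u**4)
    return simpson(g, 0.0, 1.0)

a = 1.5
exact = math.pi / (2 * math.sin(a * math.pi / 2))
assert abs(integral(a) - exact) < 1e-8
```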
variance of number of divisors | Consider the Dirichlet generating series of $\tau^2(n)$
$$L(s) = \sum_{n\ge 1}\frac{\tau^2(n)}{n^s}.$$
Since $\tau^2(n)$ is multiplicative it has Euler product
$$L(s) = \prod_p
\left(1+ \frac{2^2}{p^s} + \frac{3^2}{p^{2s}} + \frac{4^2}{p^{3s}}+\cdots\right).$$
Now observe that
$$\sum_{k\ge 0} (k+1)^2 z^k = \frac{1+z}{(1-z)^3} =
\frac{1-z^2}{(1-z)^4}.$$
This gives for the Euler product that
$$L(s) = \prod_p \frac{1-1/p^{2s}}{(1-1/p^s)^4} = \frac{\zeta^4(s)}{\zeta(2s)}.$$
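The identity $L(s)=\zeta^4(s)/\zeta(2s)$ can be verified at the level of Dirichlet coefficients (a cross-check of my own, not part of the answer): the coefficients of $\zeta^2(s)$ are $\tau(n)$, so those of $\zeta^4(s)$ are the Dirichlet convolution $d_4=\tau*\tau$, and those of $1/\zeta(2s)$ are $\mu(m)$ supported on squares $m^2$. Hence $\sum_{m^2\mid n}\mu(m)\,d_4(n/m^2)$ should equal $\tau(n)^2$.

```python
# Check sum_{m^2 | n} mu(m) * d4(n/m^2) == tau(n)^2 for n up to N.
N = 300

tau = [0] * (N + 1)                 # divisor counts via a sieve
for d in range(1, N + 1):
    for m in range(d, N + 1, d):
        tau[m] += 1

d4 = [0] * (N + 1)                  # Dirichlet convolution tau * tau
for d in range(1, N + 1):
    for m in range(d, N + 1, d):
        d4[m] += tau[d] * tau[m // d]

def mobius(n):
    # Moebius function by trial division (fine for small n)
    res, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            res = -res
        p += 1
    return -res if n > 1 else res

for n in range(1, N + 1):
    lhs = sum(mobius(m) * d4[n // (m * m)]
              for m in range(1, n + 1) if m * m <= n and n % (m * m) == 0)
    assert lhs == tau[n] ** 2
```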
We can now predict the first terms of the asymptotic expansion of
$$q_n = \sum_{k=1}^n \tau^2(k)$$ using the Mellin-Perron summation formula, which gives
$$q_n = \frac{1}{2} \tau^2(n)
+ \frac{1}{2\pi i} \int_{3/2-i\infty}^{3/2+i\infty} L(s) \frac{n^s}{s} ds.$$
The contribution from the pole at $s=1$ is
$$\mathrm{Res}\left(L(s) \frac{n^s}{s}; s=1\right)
= \frac{1}{\pi^2} n \log^3 n
+ \left(\frac{12\gamma-3}{\pi^2} -\frac{36\zeta'(2)}{\pi^4}\right) n\log^2 n +\cdots
\\ \approx
0.101321183642338 \times n \log^3 n + 0.744341276391456\times n \log^2 n
\\+ 0.823265208269489\times n \log n + 0.460323372258732 \times n.$$
The contribution from the pole at $s=0$ is
$$\mathrm{Res}\left(L(s) \frac{n^s}{s}; s=0\right) = -\frac{1}{8}$$
but we will not include it here because it lies to the left of the zeros of the $\zeta(2s)$ term on the line $\Re(s) = 1/4.$
This gives the following asymptotic expansion:
$$\frac{1}{n} \sum_{k=1}^n \tau^2(k) \sim \frac{1}{2n} \tau^2(n)
+ \frac{1}{\pi^2} \log^3 n
+ \left(\frac{12\gamma-3}{\pi^2} -\frac{36\zeta'(2)}{\pi^4}\right) \log^2 n +\cdots.$$
This approximation is excellent, as the following table shows.
$$\begin{array}{l|ll}
n & q_n/n & \text{approx.} \\
\hline
100 & 30.46 & 30.3377762704858 \\
400 & 54.33 & 54.1863460568776 \\
1000 & 75.083 & 75.1903114374140 \\
5000 & 124.1196 & 124.110890836637 \\
\end{array}$$
Addendum. The reader is invited to supply a rigorous proof of the asymptotic expansion from above. |
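The table can be reproduced by direct summation. A sketch of my own, reusing the numerical constants of the expansion above (including the $\frac{1}{2}\tau^2(n)$ term):

```python
import math

# Reproduce q_n / n by direct summation of tau(k)^2 and compare with the
# four-term asymptotic approximation quoted above (constants copied from it).
N = 1000
tau = [0] * (N + 1)
for d in range(1, N + 1):
    for m in range(d, N + 1, d):
        tau[m] += 1

def q_over_n(n):
    return sum(tau[k] ** 2 for k in range(1, n + 1)) / n

def approx(n):
    L = math.log(n)
    main = (0.101321183642338 * L**3 + 0.744341276391456 * L**2
            + 0.823265208269489 * L + 0.460323372258732)
    return main + tau[n] ** 2 / (2 * n)    # include the (1/2) tau^2(n) term

# exact small value: tau = 1, 2, 2, 3 gives 1 + 4 + 4 + 9 = 18
assert sum(tau[k] ** 2 for k in range(1, 5)) == 18
# the approximation tracks the true average closely by n = 1000
assert abs(q_over_n(1000) - approx(1000)) / q_over_n(1000) < 0.05
```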
Examples of systems conforming the Lorenz Attractor | The chaotic waterwheel! It was literally built to physically realise the Lorenz equations (see DIY version and Harvard version). A nice discussion about it can be found in the book by Strogatz.
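The Lorenz equations that the waterwheel realises can be integrated directly. A minimal sketch of my own, using the classic parameters $\sigma=10$, $\rho=28$, $\beta=8/3$ (standard values, not stated in the answer) and a basic fixed-step RK4 scheme; the trajectory settles onto the bounded strange attractor.

```python
# Integrate the Lorenz system x' = sigma(y - x), y' = x(rho - z) - y,
# z' = xy - beta z with a fixed-step 4th-order Runge-Kutta scheme.
SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0

def deriv(s):
    x, y, z = s
    return (SIGMA * (y - x), x * (RHO - z) - y, x * y - BETA * z)

def rk4_step(s, h):
    k1 = deriv(s)
    k2 = deriv(tuple(si + h / 2 * ki for si, ki in zip(s, k1)))
    k3 = deriv(tuple(si + h / 2 * ki for si, ki in zip(s, k2)))
    k4 = deriv(tuple(si + h * ki for si, ki in zip(s, k3)))
    return tuple(si + h / 6 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

state = (1.0, 1.0, 1.0)
h, steps = 0.01, 5000
radius = 0.0
for _ in range(steps):
    state = rk4_step(state, h)
    radius = max(radius, sum(c * c for c in state) ** 0.5)

# the trajectory is chaotic but stays on a bounded attractor
assert radius < 100
```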