title | upvoted_answer |
---|---|
Infinite Sum of Sines With Increasing Period | Here's a proof that $f$ only vanishes at $x = 0$ (you can use a similar method to get some asymptotic results as well).
Write $f(x)/x$ as
\begin{align*}
{f(x)\over x} &= \sum_{n\geq 1} {\operatorname{sinc}{(x/n^2)}\over n^2}
\end{align*}
Since $f(x)/x$ is even, we need only treat the case $x\geq 0$. Split the sum into the regions where $x/n^2$ is at most or greater than $\pi$. Since $\operatorname{sinc}{\lambda}\geq 0$ for $|\lambda|\leq \pi$, and since $\operatorname{sinc}{\lambda}\geq 2/\pi$ for $|\lambda|\leq \pi/2$, we have
\begin{align*}
\sum_{x/n^2\leq \pi} {\operatorname{sinc}{(x/n^2)}\over n^2} &\geq {2\over \pi}\sum_{x/n^2\leq \pi/2} {1\over n^2} = {2\over \pi} \sum_{n\geq (2x/ \pi)^{1/2}}{1\over n^2} > {2\over \pi} \int_{\lceil(2x/\pi)^{1/2}\rceil}^\infty {dt\over t^2} = {2\over \pi}{1\over \lceil(2x/\pi)^{1/2}\rceil}.
\end{align*}
On the other hand, since $\operatorname{sinc}{\lambda}\leq 1/\lambda$ for all $\lambda>0$, we have
\begin{align*}
\left|\sum_{x/n^2> \pi} {\operatorname{sinc}{(x/n^2)}\over n^2}\right| & \leq \sum_{x/n^2>\pi} {1\over x} \leq {1\over x}\left\lfloor\left({x\over \pi}\right)^{1/2}\right\rfloor\leq \left({1\over \pi x}\right)^{1/2}.
\end{align*}
So
\begin{align*}
f(x)/x> {2\over \pi}{1\over \lceil(2x/\pi)^{1/2}\rceil} - {1\over (\pi x)^{1/2}},
\end{align*}
which is positive when $x\geq \pi$. Since all terms in the sum are positive if $0\leq x < \pi$, it follows that $f(x)/x$ is always positive.
By the way, here's another heuristic (which can be made precise without too much trouble I think). We have
\begin{align*}
{f(x)\over x} & = {1\over 2}\hat g(x) = {1\over 2}\int_{-1}^1 g(t)e^{-ixt}\,dt = {1\over 2x}\int_{-x}^x g(t/x)e^{-it}\,dt,
\end{align*}
where $g = \sum_{n\geq 1} \chi_n$ is the sum of the characteristic functions of the intervals $[-n^{-2},n^{-2}]$. (Since $g$ is an $L^1$ function, this tells us at once that $f(x)/x\to 0$ as $x\to\infty$.) Note that $\{y:g(y)>n\} = [-n^{-2},n^{-2}] = \{y: y^{-1/2}>n\}$ (or something similar), so we should roughly expect $g$ to look like $y^{-1/2}$, and so we should expect $\hat g(x)$ to be approximately $x^{-1/2}$ (as can be seen from the last integral). You can use the same idea to get a sense of what the function would look like if you replace $\sin{(x/n^2)}$ with $\sin{(x/n^\alpha)}$ for $\alpha > 1$ (it should I think look like $x^{1/\alpha}$). |
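As a quick numeric sanity check of the positivity and of the $x^{-1/2}$ heuristic (a minimal Python sketch; the truncation at $10^6$ terms and the sample points are arbitrary choices):
```python
import numpy as np

# f(x) = sum_{n>=1} sin(x/n^2), truncated; the tail beyond N contributes O(x/N)
n = np.arange(1.0, 10**6 + 1)
for x in [0.5, 10.0, 100.0, 1000.0]:
    ratio = np.sin(x/n**2).sum()/x
    print(x, ratio, ratio > 0, x**-0.5)  # positive, and of the same order as x^{-1/2}
```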
Curve of intersection, value for parameter | You look at
$$\left(\dfrac{\sqrt{2}y}{a}\right)^2+\left(\dfrac{z}{2a}\right)^2=1$$
and you think of $\cos^2 t + \sin^2 t = 1$
From $\cos t = \dfrac{\sqrt{2}y}{a}$ you get $y = \dfrac{a}{\sqrt 2} \cos t$
From $\sin t = \dfrac{z}{2a}$ you get $z = 2a \sin t$
From $x = -2y$ you get $x = -\sqrt 2 a \cos t$
So your parameterized curve looks like
$r(t) = \left(
-\sqrt 2 a \cos t,\; \dfrac{a}{\sqrt 2} \cos t,\; 2a \sin t
\right)$
We still need to work out the domain of the parameter $t$.
We want $r(t)$ to go from $(0,0,-2a)$ to $(0,0,2a)$ and we require $x \ge 0$ and $y \le 0$. (You wanted $y < 0$ but that won't work.) We know that $\cos t = 0$ when $t = \dfrac{\pi}{2}$ and when $t = \dfrac{3\pi}{2}$ and, by inspection, we see that this will work. That is,
$t \in \left[ \dfrac{\pi}{2}, \dfrac{3\pi}{2} \right]$ |
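A quick numeric check of the parametrization (a Python sketch; taking $a=1$ is an assumption made only for the test):
```python
import numpy as np

a = 1.0  # assumed value, only for this check
t = np.linspace(np.pi/2, 3*np.pi/2, 101)
x, y, z = -np.sqrt(2)*a*np.cos(t), a/np.sqrt(2)*np.cos(t), 2*a*np.sin(t)
print(np.allclose((np.sqrt(2)*y/a)**2 + (z/(2*a))**2, 1))  # lies on the ellipse
print(np.allclose(x, -2*y))                                # lies on the plane
print((x >= -1e-12).all(), (y <= 1e-12).all())             # x >= 0 and y <= 0 throughout
print((x[0], y[0], z[0]), (x[-1], y[-1], z[-1]))           # endpoints (0, 0, +-2a) up to rounding
```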
What is the definition of the field a vector space is defined over and how does this field translate into a sub-vector space of this space?? | This is a thoughtful question. Ordinarily, we deal with a vector space $V$ as a v.s. over a particular field $K$, and the fact that $K$ may have subfields $k$, over which $V$ is also a vector space, is acknowledged, but not usually made use of.
When we speak of a sub-vector space $W$ of such a $V$ as above, we ought most correctly mention over which subfield $k$ it is that $W$ is a vector space. But almost always, what we have in mind is for $W$ to be a vector space over the $K$ that $V$ was a v.s. over.
Here’s an example: The Cartesian plane $V=\Bbb R^2$ is a two-dimensional vector space over the real field $\Bbb R$. Since I haven’t said anything about subfields of $\Bbb R$ such as the rational field or any of the infinitely many others, when I say, “Let $W$ be a proper subspace of $V$”, it would be willfully overprecise to ask which subfield of $\Bbb R$ I was taking as the scalar field of $W$, since it goes almost without saying that I meant for $W$ to be an $\Bbb R$-subspace of $V$.
If you want to take subspaces over other subfields of the original scalar field, you ought to specify this with wording such as: “Let $W$ be a $\Bbb Q$-subspace of $V$, now considering $V$ as a $\Bbb Q$-space.” |
Subgroups of $\Bbb Z_5 \times \Bbb Z_5$ | We list the subgroups of order $5$. There is the group generated by $(0,1)$. Then there are the groups generated by $(1,b)$, where $b$ is an element of $\mathbb{Z}_5$. That's all. We can if we wish give the addition table for each. |
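A brute-force confirmation of the count (a Python sketch, assuming $\Bbb Z_5 \times \Bbb Z_5$ with componentwise addition mod $5$):
```python
from itertools import product

def generated(g):
    """Cyclic subgroup of Z_5 x Z_5 generated by g."""
    return frozenset(((k*g[0]) % 5, (k*g[1]) % 5) for k in range(5))

subgroups = {generated(g) for g in product(range(5), repeat=2) if g != (0, 0)}
print(len(subgroups))  # 6: <(0,1)> together with <(1,b)> for b = 0, ..., 4
```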
Evaluating $\int _0 ^\infty \frac{(u^2+1)^{2n-4}}{u^{2n-2} - u^{2n-3} + u^{2n-4} -+ \cdots - u + 1} du$ for $n\geq 2$ | The denominator is
$$\sum_{k=0}^{2n-2} (-1)^ k u^k=\frac{1+u^{2n-1}}{1+u }$$ So you want to compute
$$I_n=\int_0 ^\infty \frac{ (u+1) \left(u^2+1\right)^{2 n-4}}{u^{2n-1}+1}\,du$$ The degree of the numerator is $(4n-7)$ and the degree of the denominator is $(2n-1)$, so you have a problem at the upper bound as soon as $n\geq3$. For example, for $n=3$, the expansion of the integrand is
$$\color{red}{1}+\frac{1}{u}+\frac{2}{u^2}+O\left(\frac{1}{u^3}\right)$$ |
Range of a Rational Function | $$\implies x^2(y-1)+x(3-3y)+4y+4=0$$
The discriminant
$$=(3-3y)^2-4(y-1)(4y+4)=-(y-1)(7y+25)$$ which needs to be $\ge0$
Now $(x-a)(x-b)\le0, a\le b\implies a\le x\le b$
Alternatively,
$$\dfrac{x^2-3x-4}{x^2-3x+4}=1+\dfrac{x^2-3x-4}{x^2-3x+4}-1=1-\dfrac8{x^2-3x+4}$$
Now $x^2-3x+4=\dfrac{4x^2-12x+16}4=\dfrac{(2x-3)^2+7}4$
Now $0\le(2x-3)^2\le\infty\iff\dfrac74\le x^2-3x+4\le\infty\iff\dfrac47\ge \dfrac1{x^2-3x+4}\ge0$
Can you take it from here? |
Find Dimensions to minimize the cost | Asuming the formula to be correct, you properly found that $$C'_x=\frac{y}{5}-\frac{1600}{x^2}=0\qquad , \qquad C'_y=\frac{x}{5}-\frac{1600}{y^2}=0$$ Because of the symmetry $x=y$. So $$\frac{x}{5}-\frac{1600}{x^2}=0 \implies x^3=8000\implies x=y=20$$ If you do not see the symmetry, from $C'_x=0$, extract $y$ $$C'_x=0 \implies y=\frac{8000}{x^2}\implies C'_y=\frac{x}{5}-\frac{x^4}{40000}=\frac{x(8000-x^3)}{40000}=0$$
Edit
Since you solved the problem, let us make it more general : the given volume is $V$, for top and bottom the cost is $a$, for the sides the cost is $b$. Doing the same as you did $$C=2 a x y+2 b (x z+y z)=2 a x y+2 b \left(\frac{V}{x}+\frac{V}{y}\right)$$ Differentiating $$C'_x=2 a y-\frac{2 b V}{x^2}=0\qquad , \qquad C'_y=2 a x-\frac{2 b V}{y^2}=0$$ where we see again the symmetry. Doing the same as above, the solution is then given by $$x=y=\sqrt[3]{\frac{b V}{a}}\qquad z=\sqrt[3]{\frac{a^2 V}{b^2}}$$ So, as expected, the base is a square and the dimensions depend on the ratio of the costs of used materials. |
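A symbolic check of the general solution (a sympy sketch; the symbols mirror the formula above):
```python
import sympy as sp

x, y, V, a, b = sp.symbols('x y V a b', positive=True)
C = 2*a*x*y + 2*b*(V/x + V/y)  # cost with z eliminated via z = V/(x*y)
crit = (b*V/a)**sp.Rational(1, 3)
print(sp.simplify(sp.diff(C, x).subs({x: crit, y: crit})))  # 0
print(sp.simplify(sp.diff(C, y).subs({x: crit, y: crit})))  # 0
print(sp.simplify(V/crit**2 - (a**2*V/b**2)**sp.Rational(1, 3)))  # 0, so z = (a^2 V / b^2)^(1/3)
```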
Minimum possible order of a group with elements of order 1 to 5 | Consider $Z_{60}$, the cyclic group of order 60. If it is generated by $x$, then $x^0$ has order 1, $x^{30}$ has order 2, $x^{20}$ has order 3, $x^{15}$ has order 4, and $x^{12}$ has order 5. Note that by your argument, we know that the lcm of 1, 2, 3, 4 and 5 must divide the order of such a group, and since the lcm is 60, the minimum possible order is 60, and indeed, as we just showed, there is a group with this property of this order. |
Solution of a system of linear equations | I became aware of this question by way of an answer on Meta and feel I must push back against the comment of DonAntonio and the answer of amWhy.
Uniqueness of the solution of the system
$$
\begin{aligned}
x_1&=3\\
x_2&=2\\
x_3&=3
\end{aligned}
$$
is obvious and needs no proof. What is there to prove? Is it conceivable that if you plug in numbers other than $3,$ $2,$ and $3$ for $x_1,$ $x_2,$ and $x_3$ you might obtain three true statements?
The determinant is a complicated object, and by bringing it in in this situation, you are making something simple appear much more difficult than it actually is.
Here's what I think you were probably getting at: Let's start with a simpler analogue. Is the solution of the equation $x=2$ unique? Of course is is: $2$ is the only solution. Now $x=2$ may be the end result of simplifying a more complicated equation, such as $13x=26.$ The latter is a special case of the general equation $ax=b.$ It is certainly the case that the latter has a unique solution if and only if $a\ne0.$ If $a=0,$ then there is no solution unless $b=0,$ in which case there are infinitely many solutions.
Likewise, the matrix equation $Ax=b,$ where $A$ is a square matrix and $x$ and $b$ are column vectors, has a unique solution if and only if $\det A\ne0.$ If $\det A=0,$ then it has either no solution or infinitely many solutions.
So it is helpful to introduce the determinant to make statements about the nature of the solution set of the general equation $Ax=b.$ But for concrete $A$ and $b,$ it is usually more efficient to row reduce the system than to compute $\det A.$ (More precisely, computing $\det A$ is best done by actually performing row reduction, but there is no need to mention determinants if you are row reducing to solve a concrete problem.) The end result of the row-reduction process will tell you whether there is a unique solution or not.
The only thing that might need proof is that the three row operations (swapping rows, multiplying a row by a non-zero number, adding a multiple of one row to another row) preserve the solution set. That is generally proved in a linear algebra course, and you can probably assume it from that point on. If not, let $S$ be a system and let $S'$ be the system that results from applying a row operation. You just need to prove that any solution to $S$ is a solution to $S',$ and that any solution to $S',$ is a solution to $S.$ This is straightforward, but it seems like overkill to do it in every row reduction problem you perform. |
Example of a local ring which is not an integral domain | For example, $\mathbb Z/4\mathbb Z$.
In general, $R/M^n$ with $n\geq 2$, where $M$ is a maximal ideal and $R$ is commutative, will be an example, as long as $M^{n-1}\neq M^n$ (in particular $M \neq \{0\}$).
Proof of $W=M_{n}(R)$ | Every matrix $M$ satisfies
$M=B^{-1}(BMB^{-1})B.$
That is, the matrix $A$ you are looking for is $BMB^{-1}=BMB^T.$ |
Why is my proof incorrect for showing set of all increasing functions $f:\Bbb N\to\Bbb N$ is uncountable? | The critical idea here is that Cantor's diagonalization argument hinges on the fact that it works no matter which proposed enumeration of the reals you start with. It's not enough to simply provide a listing that misses some things - if it were, we could show that the naturals were uncountable! In order to show that a set $S$ is uncountable, you must show that there is no way of assigning a different natural number to each member of $S$, not that the one you happened to choose doesn't work.
Importantly, Cantor's diagonalization does not show a bijection between $\mathbb{N}$ and $\mathbb{R}$; it says "suppose that a bijection exists" and then derives a contradiction. Or, if you phrase the argument slightly differently, it says "pick any function from $\mathbb{N}$ to $\mathbb{R}$" and shows that no matter which function you picked, that function is not surjective. |
Solving for Eigenvalues of Bessel like differential equations | Hint:
$\dfrac{\partial^2R}{\partial r^2}+\dfrac{1}{r}\dfrac{\partial R}{\partial r}+\left(\dfrac{\beta^2}{r^2}-\alpha^2r^2\right)R=-\lambda R$
$\dfrac{\partial^2R}{\partial r^2}+\dfrac{1}{r}\dfrac{\partial R}{\partial r}+\left(\dfrac{\beta^2}{r^2}-\alpha^2r^2+\lambda\right)R=0$
Let $x=r^2$ ,
Then $\dfrac{\partial R}{\partial r}=\dfrac{\partial R}{\partial x}\dfrac{\partial x}{\partial r}=2r\dfrac{\partial R}{\partial x}$
$\dfrac{\partial^2R}{\partial r^2}=\dfrac{\partial}{\partial r}\left(2r\dfrac{\partial R}{\partial x}\right)=2r\dfrac{\partial}{\partial r}\left(\dfrac{\partial R}{\partial x}\right)+2\dfrac{\partial R}{\partial x}=2r\dfrac{\partial}{\partial x}\left(\dfrac{\partial R}{\partial x}\right)\dfrac{\partial x}{\partial r}+2\dfrac{\partial R}{\partial x}=2r\dfrac{\partial^2R}{\partial x^2}2r+2\dfrac{\partial R}{\partial x}=4r^2\dfrac{\partial^2R}{\partial x^2}+2\dfrac{\partial R}{\partial x}=4x\dfrac{\partial^2R}{\partial x^2}+2\dfrac{\partial R}{\partial x}$
$\therefore4x\dfrac{\partial^2R}{\partial x^2}+2\dfrac{\partial R}{\partial x}+2\dfrac{\partial R}{\partial x}+\left(\dfrac{\beta^2}{x}-\alpha^2x+\lambda\right)R=0$
$4x\dfrac{\partial^2R}{\partial x^2}+4\dfrac{\partial R}{\partial x}-\left(\alpha^2x-\lambda-\dfrac{\beta^2}{x}\right)R=0$
$x^2\dfrac{\partial^2R}{\partial x^2}+x\dfrac{\partial R}{\partial x}-\left(\dfrac{\alpha^2x^2}{4}-\dfrac{\lambda x}{4}-\dfrac{\beta^2}{4}\right)R=0$
Let $R=e^{-\frac{\lambda x}{4}}U$ ,
Then $\dfrac{\partial R}{\partial x}=e^{-\frac{\lambda x}{4}}\dfrac{\partial U}{\partial x}-\dfrac{\lambda e^{-\frac{\lambda x}{4}}U}{4}$
$\dfrac{\partial^2R}{\partial x^2}=e^{-\frac{\lambda x}{4}}\dfrac{\partial^2U}{\partial x^2}-\dfrac{\lambda e^{-\frac{\lambda x}{4}}}{4}\dfrac{\partial U}{\partial x}-\dfrac{\lambda e^{-\frac{\lambda x}{4}}}{4}\dfrac{\partial U}{\partial x}+\dfrac{\lambda^2e^{-\frac{\lambda x}{4}}U}{16}=e^{-\frac{\lambda x}{4}}\dfrac{\partial^2U}{\partial x^2}-\dfrac{\lambda e^{-\frac{\lambda x}{4}}}{2}\dfrac{\partial U}{\partial x}+\dfrac{\lambda^2e^{-\frac{\lambda x}{4}}U}{16}$
$\therefore x^2e^{-\frac{\lambda x}{4}}\dfrac{\partial^2U}{\partial x^2}-\dfrac{\lambda x^2e^{-\frac{\lambda x}{4}}}{2}\dfrac{\partial U}{\partial x}+\dfrac{\lambda^2x^2e^{-\frac{\lambda x}{4}}U}{16}+xe^{-\frac{\lambda x}{4}}\dfrac{\partial U}{\partial x}-\dfrac{\lambda xe^{-\frac{\lambda x}{4}}U}{4}-\left(\dfrac{\alpha^2x^2}{4}-\dfrac{\lambda x}{4}-\dfrac{\beta^2}{4}\right)e^{-\frac{\lambda x}{4}}U=0$
$x^2\dfrac{\partial^2U}{\partial x^2}-\dfrac{\lambda x^2-2x}{2}\dfrac{\partial U}{\partial x}-\left(\dfrac{(4\alpha^2-\lambda^2)x^2}{16}-\dfrac{\beta^2}{4}\right)U=0$ |
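The substitution chain can be verified symbolically; here is a minimal sympy sketch checking that $R=e^{-\frac{\lambda x}{4}}U$ turns the previous equation into the last one:
```python
import sympy as sp

x, lam, alpha, beta = sp.symbols('x lambda alpha beta', positive=True)
U = sp.Function('U')(x)
R = sp.exp(-lam*x/4)*U

before = x**2*sp.diff(R, x, 2) + x*sp.diff(R, x) \
         - (alpha**2*x**2/4 - lam*x/4 - beta**2/4)*R
after = x**2*sp.diff(U, x, 2) - (lam*x**2 - 2*x)/2*sp.diff(U, x) \
        - ((4*alpha**2 - lam**2)*x**2/16 - beta**2/4)*U
print(sp.simplify(before*sp.exp(lam*x/4) - after))  # 0
```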
Solving $z^2 - 8(1-i)z + 63 - 16i = 0$ with reduced discriminant | Generally if you have and equation of the form
$$az^2+2bz+c=0$$
Then $\Delta= (2b)^2-4ac = 4(b^2-ac)= 4 \Delta'$
Definition: $\color{blue}{\Delta'=b^2-ac}$ is the so-called reduced discriminant of the equation $az^2+2bz+c=0$.
Therefore the solutions are
$$z=\frac{-2b\pm\sqrt{\Delta}}{2a} =\color{red}{\frac{-b\pm\sqrt{\Delta'}}{a}} $$
if you have
$$az^2+bz+c=0$$ then, $$\color{brown}{\Delta' =\left(\frac{b}{2}\right)^2-ac}$$ |
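Applied to the title equation $z^2-8(1-i)z+63-16i=0$ (here $b=-4(1-i)$), a quick numeric sketch:
```python
import cmath

a, b, c = 1, -4*(1 - 1j), 63 - 16j   # a z^2 + 2 b z + c = 0
delta_p = b*b - a*c                  # reduced discriminant: -63 - 16i
root = cmath.sqrt(delta_p)           # +-(1 - 8i)
z1, z2 = (-b + root)/a, (-b - root)/a
print(z1, z2)                        # (5-12j) and (3+4j)
for z in (z1, z2):
    print(abs(z*z - 8*(1 - 1j)*z + (63 - 16j)))  # ~0
```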
Combinatoric meaning of $\binom{n}{k}$ | Permutations. A simple example in which it is possible to show all possibilities
may be helpful. Suppose you have three objects A, B, and C.
You wish to select two of the three and arrange them. The possibilities are:
AB AC
BA BC
CA CB
In frequently used combinatorial notation, the number of arrangements is
$${}_3P_2 = \frac{3!}{(3-2)!} = \frac{3!}{1!} = 3(2) = 6.$$
Combinations. If you wish to choose 2 of these three objects without
regard to order, then the actual order is unimportant,
so we choose alphabetical order to make our list, ignoring the three
(now redundant) outcomes that are not in alphabetical order:
AB AC
BC
In frequently used combinatorial notation, the number of ways
to choose the objects without regard to order is
$${}_3C_2 = {3 \choose 2} =\frac{3!}{2!\,(3-2)!} = \frac{3!}{2!\,1!} =3.$$
Note: In stars-and-bars problems, you have a number of positions
among the stars available to receive the bars. The order in which
the bars are inserted is not relevant, so this becomes a
problem involving binomial coefficients $\binom{\text{positions}}{\text{bars}}$.
I am not sure exactly what problem you intended to solve by
${8 \choose 2},$ so I hesitate to comment on details of that. |
Calculating limit of ln(arctan(x)) using chain rule | Try this with the function $2x$.
$$\lim_{x\to 0} 2x =0.$$
Now take the derivative of $2x$, which is $2$:
$$\lim_{x\to 0} 2 = 2.$$
Hmmmm....different answers. This may be a L'Hospital's Rule confusion.
Semi decision procedures for Peano arithmetic? | You might be interested in "generic-case," as opposed to worst-case complexity results.
The earliest-that-I'm-aware-of variant is average case complexity; see for example "Matrix transformation is complete for the average case" (http://research.microsoft.com/en-us/um/people/gurevich/opera/97.pdf), in which Blass and Gurevich show that the Bounded Product Problem for $PSL_2(\mathbb{Z})$ has average-case polynomial complexity, even though it is NP-complete in the worst case.
This proved tricky to study, and so versions based on asymptotic density (generic and strong-generic computability) were introduced in "Generic-case complexity, decision problems in group theory and random walks" (http://www.sciencedirect.com/science/article/pii/S0021869303001674). A number of "infeasible," or even outright incomputable, problems turn out to be simple in the strongly generic sense: e.g., the word problem for Boone's original example of a group with undecidable word problem is strongly generically linear-time decidable!
The point is, if you have an algorithm $A$ solving a problem $P$ "most of the time" in time $f(n)$, we can truncate $A$ to $A_f$ by stopping $A$ after $f(n)$ steps; this gives an efficient (well, $f$-time) algorithm for $P$ which works "most" of the time.
Off the top of my head, I can't recall results specifically about problems from number theory, but I'm sure there are some; I'll add them once I find them. Note, however, that these problems which are phrased in terms of group theory etc. are really problems about relations on natural numbers, so do (even if not explicitly) correspond to subtheories of PA.
This has also been studied in pure computability theory, that is, without regard to feasibility: see e.g. http://www.math.uiuc.edu/~jockusch/lastGC.pdf. Also see http://arxiv.org/pdf/1406.2982.pdf for some more esoteric aspects. |
Convergence of double power series | Hint: For $\alpha < 0$,
$$
n^{n^\alpha } - 1 = \exp \left( {\frac{{\log n}}{{n^{\left| \alpha \right|} }}} \right) - 1 > 1 + \frac{{\log n}}{{n^{\left| \alpha \right|} }} - 1 = \frac{{\log n}}{{n^{\left| \alpha \right|} }},
$$
for all $n\geq 2$. Also, for sufficiently large $n$,
$$
n^{n^\alpha } - 1 = \exp \left( {\frac{{\log n}}{{n^{\left| \alpha \right|} }}} \right) - 1 < 1 + 2\frac{{\log n}}{{n^{\left| \alpha \right|} }} - 1 = 2\frac{{\log n}}{{n^{\left| \alpha \right|} }} .
$$ |
What happens if we try to define the Lebesgue integral by an infimum? | Consider the function $f(x)= x^{-\frac12}$ on $(0,1)$.
Note that the function values can be arbitrarily large.
Since a simple function takes only finitely many values, every simple function that is bounded below by $f$ has to be infinite on a set of non-zero measure.
Thus, the integral using your suggested infimum definition would be $\infty$,
whereas the usual Lebesgue integral would have a finite value.
This is just an example, but it demonstrates the difficulty that will arise with non-bounded functions.
For bounded functions $f$ your infimum definition is equivalent to the usual definition; see the comment of Crostul.
How many layers of consistency can PA recognize? | Well, first of all it doesn't really make sense to talk about $T_\alpha$ for $\alpha$ an ordinal; instead we need to talk about ordinal notations (these are basically just "particularly nice" computable well-orderings of $\mathbb{N}$ - specific copies of ordinals rather than the more abstract ordinals themselves). Two notations for the same ordinal may wind up behaving quite differently in terms of the "iterated consistency extension" they yield - see e.g. here.
That said, it turns out that there is a rather sharp answer to your question. Ordinal notations go up through the computable ordinals, that is, up to $\omega_1^{CK}$. Meanwhile, every arithmetically definable well ordering is in fact isomorphic to a computable one (indeed much more is true: "hyperarithmetic = computable" for ordinals, this is a theorem of Spector).
So briefly:
We can iterate consistency principles along computable well-orderings (not computable ordinals per se) without trouble, but we can't even arithmetically refer to $\omega_1^{CK}$ or above. |
prove that for $n \ge 4, {{2n}\choose{n}} \ge n\cdot2^n$ | Inductive step is
$${{2(k+1)}\choose{k+1}} = {{2k}\choose{k}}\times \frac{(2k+1)(2k+2)}{(k+1)^2}\ge k\times 2^k\times\frac{(2k+1)(2k+2)}{(k+1)^2}= k\times 2^{k+1}\times\frac{(2k+1)}{(k+1)}\ge (k+1)\times 2^{k+1}$$ |
Circle containing a point of square and touching two sides | Here's the situation:
The radius from$(r,r)$ to $(1,1)$ is the hypotenuse of a right triangle of sides $1-r$ and $1-r$. Express this as an equation, which you can solve for $r$. (The equation will have two solutions; just pick the one with $r<1$.)
As a short cut, you can just look at that right-angled triangle and note that the ratio of the hypotenuse to the shorter sides is $\sqrt 2$. |
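A minimal sympy sketch of that equation (assuming the unit square, as suggested by the point $(1,1)$):
```python
import sympy as sp

r = sp.symbols('r', positive=True)
# the radius r equals the distance from the center (r, r) to (1, 1)
sols = sp.solve(sp.Eq(r**2, 2*(1 - r)**2), r)
print(sols)  # [2 - sqrt(2), 2 + sqrt(2)]; keep r = 2 - sqrt(2) since r < 1
```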
How to get solutions in elementary functions to the following non-linear ODE | HINT :
You have to do asymptotic analysis without solving the ODE.
If you first obtained the analytic solution of the ODE and then studied its asymptotic behavior, you would distort the problem.
I suppose that the ODE was especially chosen so that one cannot express the exact solution in closed form.
If the initial conditions are given (for example at $x=0\quad\to\quad y(0)=y_0$ and $y'(0)=y'_0$ ) the function $y(x)$ is determined. Since $y(x)=C$=constant is a solution $y(x\to\infty)=C$. The function tends asymptotically to a constant (not always the same, depending on the initial conditions).
This is confirmed by numerical computation; for example:
$$y\frac{\text{d}^2y}{\text{d}x^2}-2\left(\frac{\text{d}y}{\text{d}x}\right)^2+xy^3\frac{\text{d}y}{\text{d}x}=0$$
As $x\to\infty \qquad y\to C.\quad$ So, the ODE is then approximately :
$$C\frac{\text{d}^2y}{\text{d}x^2}-2\left(\frac{\text{d}y}{\text{d}x}\right)^2+C^3 x\frac{\text{d}y}{\text{d}x}\simeq 0$$
$$C\frac{\text{d}^2y}{\text{d}x^2}
+\left(-2\frac{\text{d}y}{\text{d}x}+C^3 x \right)\frac{\text{d}y}{\text{d}x}
\simeq 0$$
Moreover, $\frac{\text{d}y}{\text{d}x}$ is small while $x$ is large. Thus :
$$\frac{\text{d}^2y}{\text{d}x^2}+C^2 x\frac{\text{d}y}{\text{d}x}
\simeq 0$$
$$y(x)\simeq C-c_1\left(1-\text{erf}\left(\frac{C}{\sqrt{2}}x\right) \right)\qquad x \text{ large}$$ |
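A quick numerical illustration (a Python sketch with arbitrarily chosen initial conditions; it only shows the leveling-off behavior, not the erf rate):
```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(x, u):
    y, yp = u
    # y y'' - 2 (y')^2 + x y^3 y' = 0, solved for y''
    return [yp, (2*yp**2 - x*y**3*yp)/y]

sol = solve_ivp(rhs, [0, 10], [1.0, 0.1], rtol=1e-10, atol=1e-12)
print(sol.y[0, -5:])  # y(x) flattens out to a constant C for large x
```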
Taylor series expansion of $\frac{1}{1+x^2}$ about a point $a \in \Bbb{R}$ | Rewrite
$$
\frac{1}{(t+a)^2+1}=\frac{A}{t+a+i}+\frac{B}{t+a-i}
$$
whence
$$
A=\frac{i}{2},\qquad B=-\frac{i}{2}.
$$
Now expand in power series around $t=0$
$$
\frac{1}{t+a+i}=\frac{1}{a+i}\frac{1}{1+\dfrac{t}{a+i}}
$$
and do the same for the other fraction.
Sum up and set $t=x-a$. |
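A one-line sympy check of the partial-fraction step (a minimal sketch):
```python
import sympy as sp

t, a = sp.symbols('t a', real=True)
expr = 1/((t + a)**2 + 1)
pf = (sp.I/2)/(t + a + sp.I) + (-sp.I/2)/(t + a - sp.I)
print(sp.simplify(expr - pf))  # 0
```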
What's wrong with this proof of symmetry of equality? | Looks as if the tool is broken, in the sense of not implementing Leibniz's Law properly, as the move from (3) and (4) to (5) is an unexceptional example of the application of LL. |
Hardy- Littlewood Circle Method | $F$ is a complex-valued function, so you can think of "peaks" as local maxima of $|F(\rho e(\alpha))|$, where $0 < \rho < 1$ is fixed. You can imagine that these peaks happen around certain rational values $\alpha = a/q$ ($q$ not too large) because the oscillations of $e(\alpha)$ are "in phase" with each other at rational points of the $[0,1)$ interval. As $\rho$ gets closer to 1, the local peaks become more frequent and pronounced.
Vaughan mentions the asymptotic expansion:
$$F\left(\rho e\left(\frac{a}{q}+\beta\right)\right) \sim \frac{C}{q} S(q,a)(1 - \rho e(\beta))^{-1/2}$$
where $n$ is large, $\rho = 1 - 1/n$ and
$$S(q,a) = \sum_{m=1}^q e(am^2/q).$$
Vaughan says the asymptotic expansion works for denominator $q \leq \sqrt{n}$ and $\beta$ small, roughly $\beta \leq 1/(q\sqrt{n})$. You can interpret this to mean that $F(\rho e(a/q+\beta))$ is approximately equal to $\frac{C}{q} S(q,a)(1 - \rho e(\beta))^{-1/2}$ for $\rho$ close to 1 and $q, \beta$ in the ranges given, where $C$ here is actually $\sqrt{\pi}/2$.
In fact the asymptotic estimate does not seem to quite work on the full range of $q$ and $\beta$ that Vaughan gives, and I think we need to be somewhat more restrictive. I give more explicit estimates below. Perhaps somewhat else can give better estimates that allow the asymptotic to work for a larger range.
First, we'll need an application of partial summation. Suppose that $f:\mathbb{R}_{\geq 0} \rightarrow \mathbb{C}$ is continuous and $f(x) \rightarrow 0$ as $x \rightarrow \infty$. Then
$$\sum_{n=1}^\infty f(n) = \int_0^\infty f(x) \, dx + \int_0^\infty \{x\} f'(x) \,dx$$
where $\{x\}$ is the fractional part of $x$. More generally, if we have a congruence condition for our summation, we have the estimate
$$\sum_{\substack{n=1\\ n \equiv m (\bmod q)}}^\infty f(n) = \frac{1}{q} \int_0^\infty f(x) \, dx - \int_0^\infty f(x) \,d\left\{\frac{x-m}{q}\right\}$$
$$= \frac{1}{q} \int_0^\infty f(x) \, dx - f(0)\left(1 - \frac{m}{q}\right) + \int_0^\infty \left\{\frac{x-m}{q}\right\} f'(x) \,dx.$$
Therefore
$$\left| \sum_{\substack{n=1\\ n \equiv m (\bmod q)}}^\infty f(n) - \frac{1}{q} \int_0^\infty f(x) \, dx \right| \leq |f(0)| + \int_0^\infty |f'(x)| \, dx.$$
Let's apply this to the function
$$f(x) = \left(\rho e(\beta) \right)^{x^2}.$$
We have
$$\left| \sum_{\substack{n=1\\ n \equiv m (\bmod q)}}^\infty \left(\rho e(\beta) \right)^{n^2} - \frac{1}{q} \int_0^\infty \left(\rho e(\beta) \right)^{x^2} \, dx\right| \leq 1 + \int_0^\infty \left|\left(\left(\rho e(\beta) \right)^{x^2}\right)'\right| \, dx.$$
We evaluate the derivatives and integrals to get
$$\left| \sum_{\substack{n=1\\ n \equiv m (\bmod q)}}^\infty \left(\rho e(\beta) \right)^{n^2} - \frac{\sqrt{\pi}}{2q\sqrt{-\log{\rho e(\beta)}}}\right| \leq 1 + \int_0^\infty \left| 2x\left(\left(\rho e(\beta) \right)^{x^2}\right)(\log \rho + 2 \pi i \beta)\right| \, dx$$
$$\leq 1 + (-\log \rho + 2 \pi \beta)\int_0^\infty 2x\rho^{x^2} \, dx = 2 + \frac{2 \pi \beta}{(-\log{\rho})}.$$
Now we can estimate the full sum. We split into residue classes to get
$$\sum_{n=1}^\infty \left(\rho e(a/q + \beta) \right)^{n^2} = \sum_{m (\bmod q)} e(am^2/q) \sum_{\substack{n=1\\ n \equiv m (\bmod q)}}^\infty \left(\rho e(\beta) \right)^{n^2}.$$
Therefore we have an estimate for the full sum
$$\left|F\left(\rho e\left(\frac{a}{q}+\beta\right)\right) - \frac{\sqrt{\pi}/2}{q} S(q,a) \frac{1}{\sqrt{-\log \rho e(\beta)}}\right| \leq \left(2 + \frac{2 \pi \beta }{(-\log \rho)}\right) q.$$
The asymptotic estimate will be with fixed modulus $q$ and sending $\rho \rightarrow 1^-$. Specifically, we fix $q$ and a small $\epsilon > 0$, and put $\rho = 1 - 1/n$. We send $n \rightarrow \infty$, so that $\rho \rightarrow 1^-$. At the same time, for each $n$ we need to choose $\beta$ such that $\beta = O(n^{-2/3-\epsilon})$, so $\beta$ goes to zero faster than Vaughan indicated.
Note that for small $\rho$ and $\beta$, we have
$$-\log \rho e(\beta) \sim 1- \rho e(\beta) = O\left(\frac{1}{n^{2/3+\epsilon}}\right).$$
We thereby obtain an estimate
$$F\left(\rho e\left(\frac{a}{q}+\beta\right)\right) = \frac{\sqrt{\pi}/2}{q} S(q,a) \frac{1}{\sqrt{1- \rho e(\beta)}} + O(n^{1/3 - \epsilon}).$$
Since $(1- \rho e(\beta))^{-1/2} \gg n^{1/3 + \epsilon/2} \gg n^{1/3 - \epsilon}$, we get the asymptotic
$$F\left(\rho e\left(\frac{a}{q}+\beta\right)\right) \sim \frac{\sqrt{\pi}/2}{q} S(q,a) \frac{1}{\sqrt{1- \rho e(\beta)}}$$
as $n \rightarrow \infty$. |
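A numeric illustration of the main-term approximation at $\beta=0$ (a Python sketch; $n_0$, $q$, $a$ and the truncation point are arbitrary choices for the test):
```python
import numpy as np

def e(x):
    return np.exp(2j*np.pi*x)

n0, a, q = 10**6, 1, 3
rho = 1 - 1/n0
n = np.arange(1, 20001)
F = np.sum((rho*e(a/q))**(n**2))           # truncation is fine: rho^(n^2) is tiny beyond n ~ 5000
S = np.sum(e(a*np.arange(1, q + 1)**2/q))  # quadratic Gauss sum S(q, a)
approx = (np.sqrt(np.pi)/(2*q))*S/np.sqrt(1 - rho)
print(abs(F), abs(F - approx))             # |F| ~ 512 while the error is O(q)
```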
Confusion regarding the splitting lemma | Take the sequence $0\rightarrow A \xrightarrow{\alpha} B \xrightarrow{\beta} C \rightarrow 0 $ an exact sequence.
And take $u:B\rightarrow A$ a retraction of $\alpha$ (that is, $u \circ \alpha = \mathrm{id}_A$); then the sequence splits. To prove this, you can take the morphism \begin{equation} \psi: B \rightarrow A \oplus C\ \end{equation}
defined by $\psi(b)=(u(b),\beta (b))$.
Now it's easy to check that this is an isomorphism, and it is the desired splitting isomorphism.
Number of solutions using graph | Since your drawing is of low quality, see the figures below.
Note that in the transformation to the log form, you lose a solution, because it is not $\ln(x)$ but $\ln|x|$ (the absolute value of $x$). Do not forget the branch $\ln(-x)$ for the range $x<0$.
Constrained optimisation to unconstrained using trigonometry | we have $$f(x)=x\sqrt{1-x^2}$$ then we have
$$f'(x)=-{\frac {2\,{x}^{2}-1}{\sqrt {- \left( x-1 \right) \left( x+1
\right) }}}
$$ solving $$f'(x)=0$$ we get $$x=\frac{\sqrt{2}}{2}$$ (note that $$x>0$$ is given!) |
probability of two successive random numbers has the same starting number | Answer is 1/9.for example you take a 5 digit number then there are 9*10*10*10*10 ways then for successive number to have same digit there are 1*10*10*10*10 ways.probability is 1/9 by dividing |
Proper Method for such questions (Algebra) | You can convert the given equation to a quadratic equation with respect to a single variable, and you just need to satisfy the condition $\Delta_x≥0$ for the Polynomial Discriminant for the equation to have real roots. If the polynomial is homogeneous, you can divide all polynomial terms into a particular variable to reduce the number of variables and use the substitution, for example $ z = \dfrac{x}{y} $ (for a polynomial with variables $ x $ and $ y $). Sometimes various inequalities can be helpful. Think of it this way. It is not a good idea to use a general formula for a quartic equation where all roots are rational. I will go over a simple example to show the "general way":
$$a^2+b^2+c^2-ab-bc-ac=0$$
$$\implies a^2-a(b+c)+(b^2+c^2-bc)=0$$
$$\implies \Delta_a =(b+c)^2-4(b^2+c^2-bc)=-3(b-c)^2≥0$$
$$\implies -3(b-c)^2=0\Longrightarrow b=c$$
$$\implies a=\dfrac {b+c}{2}=\dfrac {2b}{2}=b=c$$
So, we find real solutions as $$a=b=c=\text {any real number}.$$ |
Prove there is a full measure set s.t Birkhoff averages of any continuous function converge on that set | Your first intuition is right. As $C(X)$ is separable you can find a dense sequence $f_k$. For every $f_k$ there is a full measure set $X_k$ for which the Birkhoff sums converge to the integrable.
Then a countable intersection of full measure set is a full measure set. So on the set $Y = \cap X_k$ every $f_k$ has its Birkhoff sums converging to its average.
No take a $g \in C(X)$. There is a subsequence $\{g_k\} \subset \{f_k\}$ such that for every $\epsilon$, there is a $K$ such that for every $k>K$, $|f-g_k|_\infty \leq \epsilon$.
So for every $x \in Y$, $|\frac{1}{N}\sum_{i=1}^{N}g(T^i(x))-\int g|=|\frac{1}{N}\sum_{i=1}^{N}g(T^i(x))-g_k(T^i(x))+\frac{1}{N}\sum_{i=1}^{N}g_k(T^i(x))-\int g_k+\int (g_k-g)|$
But we have that $$
|\int (g_k-g)| \leq \epsilon Vol(X)
$$
$$
|\frac{1}{N}\sum_{i=1}^{N}g(T^i(x))-g_k(T^i(x))|\leq \epsilon
$$
And $$
\lim \frac{1}{N}\sum_{i=1}^{N}g_k(T^i(x))-\int g_k = 0
$$
So applying the triangle inequality to separate these three pieces, we get that $$
\limsup |\frac{1}{N}\sum_{i=1}^{N}g(T^i(x))-\int g| \leq \epsilon (1+ Vol(X))
$$
as this is true for every $\epsilon$ we have that $$
\limsup |\frac{1}{N}\sum_{i=1}^{N}g(T^i(x))-\int g| =0
$$
And then $$
\lim |\frac{1}{N}\sum_{i=1}^{N}g(T^i(x))-\int g| =0
$$ |
How to solve an equations? | Put it all over a common denominator, and the numerator is a polynomial in $x$.
There probably won't be a "closed-form" solution if $n \ge 2$. Numerically, use the standard numerical methods (e.g. Newton's method). |
Converting to polar coordinates in integral over $\mathbb{R}^{n}$ | Let $g(r) := \frac{C}{(1 + r^2)^{\frac{n+1}{2}}}$ for $r > 0$, so $F(x) = g(|x|)$.
Since $F$ is radial, the change to polar coordinates becomes
$$
\int_{\mathbb{R}^n}F(x) \,dx = \int_{0}^{\infty} \int_{\partial B(0,r)}g(r)\,d S\, d r
$$
where $\partial B(0,r)$ denotes the boundary of the ball of radius $r$ in $\mathbb{R}^n$. Now $g$ only depends on $r$, so this will allow you to compute the surface integral (it is just the surface area of $B(0,r)$), and then subsequently the integral over $r$. |
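For concreteness, using the standard surface-area formula $|\partial B(0,r)|=\frac{2\pi^{n/2}}{\Gamma(n/2)}r^{n-1}$ (a routine fact, stated here for completeness), this reduces to the one-dimensional integral
$$
\int_{\mathbb{R}^n}F(x)\,dx=\frac{2\pi^{n/2}}{\Gamma(n/2)}\int_0^\infty \frac{C\,r^{n-1}}{(1+r^2)^{\frac{n+1}{2}}}\,dr.
$$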
Adjoint Functor Theorem | It is trivial that $G : C \to D$ admits a left adjoint iff for every $X \in D$ the category $(X \downarrow G)$ has an initial object (namely, $X \to G(F(X))$ is an initial object iff $\hom(X,G(-))$ is represented by $F(X)$). However, if $C$ is complete and $G$ is continuous, then $(X \downarrow G)$ is also complete and we may use Freyd's criterion for the existence of an initial object:
If $C$ is a complete category such that there is a set of objects $S$ which is "weakly initial" i.e. such that every object of $C$ admits a morphism from some object in $S$, then $C$ has an initial object.
The proof is direct and constructive (but not very useful in applications). First, we consider the product $p$ of all objects in $S$. This admits a morphism to any object of $C$. In order to enforce uniqueness, we have to make $p$ smaller: One considers the equalizer $e$ of all endomorphisms of $p$. Then one easily checks that $e$ is an initial object of $C$.
From this we derive Freyd's adjoint functor theorem: If $C$ is complete and $G : C \to D$ is a continuous functor such that for every $X \in D$ the category $(X \downarrow G)$ has a weakly initial set (often called solution set), then $G$ admits a left adjoint.
The existence of a solution set is often easy to verify. Freyd's adjoint functor theorem has lots of applications (existence of tensor products, Stone-Cech compactifications, existence of free algebras of any type such as free groups, free rings, tensor algebras, symmetric algebras etc., but also of colimits of algebras of any type). I think that in any of these applications we can also give a more direct proof, but usually this proof requires more calculations. Freyd's adjoint theorem allows us to unify all these examples. I think this is one of the main purposes of category theory: unification. And this leads to simplification. But I don't know if there are any results which really depend on Freyd's adjoint functor theorem (i.e. there are no proofs without it).
Notice: In Freyd's criterion for the existence of an initial object, and hence for the existence of a left adjoint, we may obviously replace "set" by "essentially small class" (which means that there is a set such that every object of the class is isomorphic to an object in this set).
Now let us look at some specific example. The forgetful functor $U : \mathsf{Grp} \to \mathsf{Set}$ has a left adjoint. First of all, $U$ creates limits. If $X$ is a set, a solution set for $X \downarrow U$ consists of all maps $i : X \to U(G)$ where $G$ is generated by the image of $i$. The class of these groups is essentially small, since $U(G)$ admits a surjection from $\coprod_n (X \times \mathbb{N})^n$, namely $((x_1,e_1),\dotsc,(x_n,e_n)) \mapsto x_1^{e_1} \dotsc x_n^{e_n}$. If $G$ is any group and $X \to U(G)$ is a map, then we may consider the subgroup $G'$ which is generated by the image, so that $X \to U(G)$ factors through $U(G')$. Thus we have a solution set, and $U$ has a left adjoint $F : \mathsf{Set} \to \mathsf{Grp}$ (free groups).
We can use the proof of Freyd's adjoint functor theorem to write it down "explicitly". Let $$P = \prod_{\substack{i : X \to U(G) \\ i \text{ generates } G}} G$$ with the obvious map $X \to U(P)$ and let $X \to F(X)$ be the equalizer of all endomorphisms of $(X \to U(P),P)$. Then $F(X)$ is the free group on $X$. Alternatively, we may define $F(X)$ (this is what Lang does in his Algebra book!) as the subgroup of $P$ which is generated by the image of $X \to U(P)$ - this gives uniqueness in the universal property of $F(X)$, remember that this was the only purpose of taking the big equalizer. This abstract construction is usually not considered to be explicit (although it is explicit), because it doesn't tell us what the elements of $F(X)$ are (because it is still a widespread belief that elements describe a mathematical object). The element structure of $F(X)$ can be derived from the universal property, using an action of $F(X)$ on the set of reduced words (see Serre's book Trees). |
Correspondance checking software | You may use the one-sample Kolmogorov-Smirnov test to compare a sample with a reference distribution.
This is implemented in R as ks.test. |
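In Python the analogous call is scipy.stats.kstest (a minimal sketch, assuming a standard normal reference distribution):
```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(size=200)
statistic, pvalue = stats.kstest(sample, "norm")
print(statistic, pvalue)  # a large p-value: no evidence against the reference distribution
```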
What type of problem is this? Combinatorics? | I preassume that the cups are distinguishable.
Discern the cases:
$8=4\times2+0\times1+6\times0$
$8=3\times2+2\times1+5\times0$
$8=2\times2+4\times1+4\times0$
$8=1\times2+6\times1+3\times0$
$8=0\times2+8\times1+2\times0$
Final answer is: $$\frac{10!}{4!0!6!}+\frac{10!}{3!2!5!}+\frac{10!}{2!4!4!}+\frac{10!}{1!6!3!}+\frac{10!}{0!8!2!}$$ |
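A brute-force confirmation (a Python sketch, assuming 10 distinguishable cups each holding 0, 1 or 2, with total 8):
```python
from math import factorial
from itertools import product

brute = sum(1 for w in product((0, 1, 2), repeat=10) if sum(w) == 8)
closed = sum(factorial(10)//(factorial(a)*factorial(b)*factorial(c))
             for a, b, c in [(4, 0, 6), (3, 2, 5), (2, 4, 4), (1, 6, 3), (0, 8, 2)])
print(brute, closed)  # both 6765
```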
No. of possible solutions of given equation | The problem doesn't make sense unless the variables are integers, so I assume that.
I would iteratively (as a function of $m$) build a table containing numbers $S(j,k)$. Here
$S(j,k)$ is the number of solutions of the system
$$
Z_1+Z_2+\cdots+Z_j=k.
$$
The initialization in the case $j=1$ is easy: $S(1,k)=1$, if $k$ is a possible value of $Z_1$, and $S(1,k)=0$ otherwise.
Assume that we have computed all the numbers $S(\ell,k)$ for some number of variables $\ell$. Then we have the recurrence relations for all $k$
$$
S(\ell+1,k)=\sum_{\text{$t$ is an allowed value for $Z_{\ell+1}$}}S(\ell,k-t).
$$
That should do it. The number $S(m,n)$ is your answer.
Of course, you have to do a little bit of computation to decide on the lower and upper bounds of the indices in your table.
It may or may not be easier, if you subtract $X_i$ from $Z_i$ so that the lower bound for the (partial) sums becomes zero. If you do that, you also have to subtract $\sum_i X_i$ from $n$. |
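A minimal Python sketch of this tabulation (using a dictionary for $S$, so the index bounds take care of themselves; the dice example at the end is just an illustration):
```python
def count_solutions(allowed, n):
    """Number of tuples (Z_1, ..., Z_m) with Z_i in allowed[i] summing to n."""
    S = {0: 1}                       # zero variables: only the empty sum
    for values in allowed:           # the recurrence, one variable at a time
        T = {}
        for k, cnt in S.items():
            for t in values:
                T[k + t] = T.get(k + t, 0) + cnt
        S = T
    return S.get(n, 0)

print(count_solutions([range(1, 7)]*3, 10))  # 27 ways to roll a total of 10 with three dice
```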
First-order Taylor expansion for a function of two variables | You applied the mean value theorem incorrectly. It says
$$
{u(x+h,y+k)-u(x+h,y)} = u_y(x+h,y+\theta_1 k)k $$
and
$$
{u(x+h,y)-u(x,y)} = u_x(x+\theta_2 h,y)h
$$
where $\theta_1,\theta_2\in (0,1)$. The continuity of partial derivatives yields
$$u_y(x+h,y+\theta_1 k) = u_y(x ,y ) + \eta_1(h,k) $$
and
$$u_x(x+\theta_2 h ,y) = u_x(x ,y ) + \eta_2(h) $$
where $\eta_1(h,k)$ and $\eta_2(h)$ tend to zero as $(h,k)\to (0,0)$. Hence, the quantity
$$\epsilon_1 = \eta_1(h,k) k+ \eta_2(h) h$$ has the desired property. |
the collision in Kuramoto model is not occur for the identical case. | In the case of uniform angular base velocities, you get for angle differences
\begin{align}
\dot \theta_i &=\omega+K/N\sum_{m=1}^N \sin(\theta_m-\theta_i)\\
\dot \theta_i -\dot \theta_j
&=K/N\sum_{m=1}^N [\sin(\theta_m-\theta_i)-\sin(\theta_m-\theta_j)]\\
&=2K/N\sum_{m=1}^N \cos(\theta_m-\tfrac12(\theta_i+\theta_j))\sin(\tfrac12(\theta_j-\theta_i))
\end{align}
So if a collision occurs at some point, the derivative of the difference will be zero. Thus there is a solution with constant difference. By uniqueness this is also the only variant where a collision occurs. |
Example of a monoid required for union operation | There are several issues in your question. In particular, it is ambiguous to call $G$ a groupoid, and later a group. To start with, $G$ is only a set, and union is not an operation on $G$, but on the set $P(G)$ of subsets of $G$. Moreover, it is not correct to say that $G$ is closed, for two reasons: first, you are actually not considering $G$ but $P(G)$. Moreover, it is not correct to say that $P(G)$ is closed. The right way would be to say that $P(G)$ is closed under the union operation (or simply under union).
Thus, I hope you will not mind if I first rephrase your question as follows:
Let $G$ be a set. Consider the set of all subsets of $G$, equipped
with the union as operation. I already observed that this operation is
associative. What type of algebraic structure does it define: a
semigroup, a monoid, a group?
Answer. It defines a commutative monoid. You already observe that union
is associative and hence defines a structure of semigroup. It is a commutative semigroup since for all subsets $E$ and $F$ of $G$, $E \cup F = F \cup E$.
Furthermore, the empty set is the identity for this operation, since, for every subset $E$ of $G$, $E \cup \emptyset = \emptyset \cup E = E$.
The only case when you obtain a group is when $G$ is the empty set. Do you see why it is a group in this case? If $G$ is nonempty, the full subset $G$ has no inverse. Indeed, an inverse would be a subset $E$ of $G$ such that $G \cup E = \emptyset$, but this would imply $G = \emptyset$, a contradiction. |
If $x \in \mathbb{Z}[\alpha]$, for $\alpha$ an algebraic integer, is $x^{-1} N(x) \in \mathbb{Z}[\alpha]$ too? | Yes. This is always true.
Because $\alpha$ is an algebraic integer and $z\in\Bbb{Z}[\alpha]$, we can conclude that $z$ is an algebraic integer as well.
Consider the minimal polynomial $m(x)$ of $z$,
$$
m(x)=x^n+a_{n-1}x^{n-1}+\cdots+a_1x+a_0\in\Bbb{Z}[x].
$$
We have $m(z)=0$, so $(x-z)\mid m(x)$ in the ring $R[x]$, $R=\Bbb{Z}[\alpha]$. Because $x-z$ is monic, the (long) polynomial division works in $R[x]$, and we get that
$$
m(x)=(x-z)q(x)
$$
for some polynomial $q(x)\in R[x]$.
Your claim follows because the constant term $\pm N(z)z^{-1}$ of $q(x)$ is an element of $R$. |
Differentiating under integral with bounded derivatives | You are probably thinking of this:
$$\frac{d}{dx}\int_\Omega f(x,y)\,d\mu(y) = \int_\Omega\frac{\partial f}{\partial x}(x,y)\,d\mu(y)$$
provided there is an integrable function $g$ with $$\left|\frac{\partial f}{\partial x}(x,y)\right|\le g(y).$$
The proof is by just writing up the derivatives as limits and applying DCT.
More precisely, you need to have $f\colon\mathbb{R}\times\Omega\to\mathbb{R}$ where $\mu$ is a measure on $\Omega$, $f$ needs to be differentiable at $x$ for almost all $y\in\Omega$ and measurable wrt $y$, and the above inequality needs to hold for almost every $y$ and all $x$.
Here is the gist of the proof:
$$\begin{aligned}
\frac{d}{dx}\int_X f(x,y)\,d\mu(y)
&=\lim_{h\to0}\int_\Omega\frac{f(x+h,y)-f(x,y)}{h}\,d\mu(y) \\
&=\int_\Omega\lim_{h\to0}\frac{f(x+h,y)-f(x,y)}{h}\,d\mu(y) \\
&=\int_\Omega\frac{\partial f}{\partial x}(x,y)\,d\mu(y).
\end{aligned}$$
The first line is the definition of the derivative, except that I skipped a step: you should have a difference of two integrals divided by $h$, but I just joined the two integrals into one. In the second line I put the limit inside the integral, which must be justified; and finally, the definition of the partial derivative.
The justification for taking the limit inside the integral is this: The mean value theorem from calculus gives
$$\frac{f(x+h,y)-f(x,y)}{h}=\frac{\partial f}{\partial x}(x+\theta h,y)$$
for some $\theta\in(0,1)$ (depending on $x$ and $y$), so the absolute value of the fraction is at most $g(y)$ by the assumption; so the DCT applies. |
proving tautologically equivalent | You want to prove that if $A$ and $B$ are two formulas, and $C_A$ is a formula containing $A$ and $C_B$ comes from $C_A$ by replacing that part by $B$, we have :
Replacement theorem. If $\vDash_{TAUT} A \equiv B$, then $\vDash_{TAUT} C_A \equiv C_B$.
We assume that $A$ and $B$ are tautologically equivalent when $\vDash_{TAUT} A \equiv B$.
We can prove it in two ways :
(i) by truth-tables [see Stephen Cole Kleene, Mathematical Logic (1967), page 19].
If $\vDash_{TAUT} A \equiv B$, then the truth-table for $A$ and $B$ are identical, i.e. in each row, if $A$ evaluates to true (false), also $B$ evaluates to true (false).
Hence if, in the computation of a given line of the table for $C_A$, we replace the computation of the specified part $A$ by a computation of $B$ instead, the outcome will be unchanged. Thus $C_B$ has the same table of $C_A$; so $\vDash_{TAUT} C_A \equiv C_B$.
(ii) by induction on the depth of the occurrence of $A$ in $C_A$ [see Stephen Cole Kleene, Introduction to Metamathematics (1952), page 116].
The formula $C_A$ can be built up form $A$ with repeated applications of the rules for the use of connectives (like: from $P$ and $Q$, construct $P \lor Q$).
The number of steps in this construction of $C_A$ from $A$, we call the depth of an occurrence of $A$ in $C_A$. In other words, the depth of the part $A$ in $C_A$ is the number of connectives within the scopes of which it lies.
The proof is by induction on the depth of $A$ in $C_A$, taking the $A$ and $B$ fixed for the induction.
Basis: $A$ is at depth $0$ in $C_A$. Then $C_A$ is simply $A$ and $C_B$ must be $B$. So the theorem is simply : if $\vDash_{TAUT} A \equiv B$, then $\vDash_{TAUT} A \equiv B$.
Induction step: $A$ is at depth $d+1$ in $C_A$ and we assume as induction hypothesis that the result holds for depth $d$.
According to the rules for formation of formulas (for simplicity we assume that we are using only the $\lnot$ and $\lor$ connectives), we have that $C_A$ must have one of the following forms : $N \lor M_A$, $M_A \lor N$, $\lnot M_A$, where $M_A$ is the part of $C_A$ (and $C_B$ will be : $N \lor M_B$, $M_B \lor N$, $\lnot M_B$, respectively) where the specified occurrence of $A$ lies at depth $d$.
The induction hypothesis amount to assuming that if $\vDash_{TAUT} A \equiv B$, then $\vDash_{TAUT} M_A \equiv M_B$, because $M_A$ (and so $M_B$) have depth $d$.
In order to complete the proof, we need some simple lemmas :
if $\vDash_{TAUT} A \equiv B$, then $\vDash_{TAUT} \lnot A \equiv \lnot B$
if $\vDash_{TAUT} A \equiv B$, then $\vDash_{TAUT} A \lor C \equiv B \lor C$
if $\vDash_{TAUT} A \equiv B$, then $\vDash_{TAUT} C \lor A \equiv C \lor B$.
Then, applying the above lemmas (with $M_A$ as the $A$, $M_B$ as the $B$ and $N$ as the $C$), we have :
if $\vDash_{TAUT} M_A \equiv M_B$, then $\vDash_{TAUT} C_A \equiv C_B$.
Therefore, "connecting" the last result with the induction hypothesis above :
if $\vDash_{TAUT} A \equiv B$, then $\vDash_{TAUT} C_A \equiv C_B$. |
How to take derivative of integral of function? | An important result of the calculus of variations is that if you have a functional $L[f(x)]$ such that
$$
L[f] =\int_a^b J(x,f,f') dx,
$$
$L$ is minimized if
$$
\frac{\partial J}{\partial f} - \frac{d}{dx} \frac{\partial J}{\partial f'} = 0.
$$
This is the Euler-Lagrange equation. In your case,
$$
L[f] = \int_0^1 \left[f(x)^{1-1/\alpha}-\lambda g(x) f(x) \right] dx,
$$
therefore the Euler-Lagrange equation leads to
$$
\frac{\partial}{\partial f} \left[f(x)^{1-1/\alpha}-\lambda g(x) f(x) \right] - \frac{d}{dx} \frac{\partial}{\partial f'} \left[f(x)^{1-1/\alpha}-\lambda g(x) f(x) \right] =0.
$$
The derivative in relation to $f'$ vanishes because $L[f]$ does not depend on $f'$. Evaluating the derivative in relation to $f$:
$$
\frac{\alpha-1}{\alpha}f(x) ^{-1/\alpha} - \lambda g(x) = 0
$$
and now solving for $f$:
$$
f(x) = \left( \frac{\lambda \alpha}{\alpha-1} g(x)\right)^{-\alpha},
$$
which is the $f(x)$ that minimize your $L$. |
Using integration to solve a formula for the area of a ellipse | Hint...You just need to change the limits to $0$ and $\frac {\pi}{2}$ and use the identity $\cos^2\theta=\frac 12(1+\cos2\theta)$ and you will be finished |
Does this combination problem count the repeats | It seems like you are confusing "repeated PICs" with "repeated characters".
The number of unique PICs is $26\times26\times9 + 26\times9\times9$. The strings come in two disjoint sets: those with two letters and those with one. For those with two, we have $26$ choices for the first letter, $26$ for the second, and then $9$ choices for the digit. For those with one, we have $26$ choices for the letter, $9$ for the first digit, and then $9$ choices for the other.
What you are counting is the size of the subset of those PICs that never repeat a digit or character, which is $26\times25\times9 + 26\times9\times8$. |
How many different sets of 6 and 7 different numbers can we list out from 11,13,18,19,19,20,23,25? | Be aware that set means no two elements are the same.
That repeated 19 should make no difference.
Here are the 6-number sets (7 of them in total):
{ 13, 18, 19, 20, 23, 25 }
{ 11, 18, 19, 20, 23, 25 }
{ 11, 13, 19, 20, 23, 25 }
{ 11, 13, 18, 20, 23, 25 }
{ 11, 13, 18, 19, 23, 25 }
{ 11, 13, 18, 19, 20, 25 }
{ 11, 13, 18, 19, 20, 23 }
There is only one 7-number set:
{ 11, 13, 18, 19, 20, 23, 25 }
Is there a mistake in the question?
Maybe repeated 19 should be some other number. |
How to simplify this log limit | Here's a non-Taylor series approach. Consider the denominator $\log(1+\frac{d}{a})=\log(a+d)-\log(a)$, which goes to zero as $d\to0$. This should remind you of the definition of the derivative: indeed,
$$\lim_{d\to 0}\dfrac{\log(a+d)-\log(a)}{d}=\dfrac{d}{dx}\log(x)|_{x=a}=\frac{1}{a}$$
Rearranging, we conclude that for small $d$ we indeed have $1/\log(1+\frac{d}{a})\approx a/d$. |
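A quick numeric check (Python, with arbitrary sample values):
```python
import math

a, d = 3.0, 1e-6
print(1/math.log(1 + d/a), a/d)  # both are approximately 3.0e6; agreement improves as d -> 0
```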
$H^1$ of $\Bbb Z$ as a trivial $G$-module is the abelianization of $G$ | As an abelian group, $I_G$ is free on $\{(g-1): g \in G\}$, so we may define a homomorphism $I_G \to G/G'$ by $g-1 \mapsto gG'$. The kernel contains $I_G^2$ because of the identity $(g-1)(h-1) = (gh-1)-(g-1)-(h-1)$, so this descends to a map $I_G/I_G^2\to G/G'$ which is inverse to your map $G/G' \to I_G/I_G^2$. Both maps are therefore isomorphisms. See for example Gruenberg's Cohomological Topics in Group Theory, Springer LNM #143. |
Is every sigma algebra the sigma algebra generated by some function? | Let $\mathcal A_0\subseteq\mathcal A$ be a $\sigma$-algebra.
Let $\langle B,\mathcal B\rangle=\langle A,\mathcal A_0\rangle$.
Then the identity function $\mathsf{id}:A\to B$ prescribed by $a\mapsto a$ is well defined and measurable.
This function generates $\mathcal A_0$ because $\mathsf{id}^{-1}(\mathcal A_0)=\mathcal A_0$. |
Prove $\sqrt{a} + \sqrt{b} + \sqrt{c} \ge ab + bc + ca$ | I will use the following lemma (the proof below):
$$2x \geq x^2(3-x^2)\ \ \ \ \text{ for any }\ x \geq 0. \tag{$\clubsuit$}$$
Start by multiplying our inequality by two
$$2\sqrt{a} +2\sqrt{b} + 2\sqrt{c} \geq 2ab +2bc +2ca, \tag{$\spadesuit$}$$
and observe that
$$2ab + 2bc + 2ca = a(b+c) + b(c+a) + c(a+b) = a(3-a) + b(3-b) + c(3-c)$$ (the last step uses the constraint $a+b+c=3$),
and thus $(\spadesuit)$ is equivalent to
$$2\sqrt{a} +2\sqrt{b} + 2\sqrt{c} \geq a(3-a) + b(3-b) + c(3-c)$$
which can be obtained by summing up three applications of $(\clubsuit)$ for $x$ equal to $\sqrt{a}$, $\sqrt{b}$ and $\sqrt{c}$ respectively:
\begin{align}
2\sqrt{a} &\geq a(3-a), \\
2\sqrt{b} &\geq b(3-b), \\
2\sqrt{c} &\geq c(3-c). \\
\end{align}
$$\tag*{$\square$}$$
The lemma
$$2x \geq x^2(3-x^2) \tag{$\clubsuit$}$$
is true for any $x \geq 0$ (and also any $x \leq -2$) because
$$2x - x^2(3-x^2) = (x-1)^2x(x+2)$$
is a polynomial with roots at $0$ and $-2$, a double root at $1$ and a positive coefficient at the largest degree, $x^4$.
I hope this helps ;-) |
Computing a formula for $\partial^2 f/\partial{v}\partial{w}$ | Since $\dfrac{\partial f}{\partial w}$ is in terms of $x$, $y$, and $z$, you'll need to be careful with how you rewrite $\dfrac{\partial}{\partial v}$. Since we know that
$$\begin{aligned}\frac{\partial f}{\partial w} &= \frac{\partial x}{\partial w}\frac{\partial f}{\partial x} + \frac{\partial y}{\partial w}\frac{\partial f}{\partial y} + \frac{\partial z}{\partial w}\frac{\partial f}{\partial z} \\ &= \left(\frac{\partial x}{\partial w}\frac{\partial}{\partial x}+\frac{\partial y}{\partial w}\frac{\partial}{\partial y} + \frac{\partial z}{\partial w}\frac{\partial}{\partial z}\right)f\end{aligned}$$
it follows that
$$\frac{\partial}{\partial w} = \frac{\partial x}{\partial w}\frac{\partial}{\partial x} + \frac{\partial y}{\partial w}\frac{\partial}{\partial y} + \frac{\partial z}{\partial w}\frac{\partial}{\partial z}$$
Hence, we similarly have that
$$\frac{\partial}{\partial v} = \frac{\partial x}{\partial v}\frac{\partial}{\partial x} + \frac{\partial y}{\partial v}\frac{\partial}{\partial y} + \frac{\partial z}{\partial v}\frac{\partial}{\partial z}$$
and thus
$$\frac{\partial^2f}{\partial v\partial w} = \frac{\partial x}{\partial v}\frac{\partial}{\partial x}\left(\frac{\partial f}{\partial w}\right) + \frac{\partial y}{\partial v}\frac{\partial}{\partial y}\left(\frac{\partial f}{\partial w}\right) + \frac{\partial z}{\partial v}\frac{\partial}{\partial z}\left(\frac{\partial f}{\partial w}\right)$$ |
A question related to the smallest topology on a set. | This is known as the initial topology.
If $(X,\tau)$ is a topological space so that all $f_\lambda : X \to Y_\lambda$ are continuous, then $f_\lambda^{-1}(U)$ is open for all open $U \subseteq Y_\lambda$.
One can prove the statement by showing that the topology generated by a sub-basis is the smallest one containing all of the open sets in $S$ (since one can describe the generated topology as the intersection of all topologies containing $S$), which is a more general fact.
There are more details written here or here. |
Problems about symmetric groups | Your question is very general and you should probably specify what you are looking for exactly. One thing I always like to do when students learn about the symmetric group for the first time is talk about the Futurama episode The prisoner of Benda where they ask a question about permutations and even provide an explicit, mathematically correct proof of the solution. |
Solve equation $\sqrt{s+13} - \sqrt{7-s} = 2$ | $$2s+2=4\sqrt{7-s}$$
Divide by $2$ both sides
$$s+1=2\sqrt{7-s}$$
Square both sides
$$s^2+1+2s=28-4s$$
Put everything on the left
$$s^2+6s-27=0$$
Now solve by radicals
$$s_{1,2}=\frac{-6\pm\sqrt{6^2-4(-27)}}{2}$$
giving you the two solutions
$$s_1=3\qquad;\qquad s_2=-9$$
Finally discard $s_2$ since it is not a solution of your title equation |
Some sufficient condition for a Noetherian local ring to be a DVR | It is a consequence of Krull's intersection theorem: $\;\displaystyle\bigcap_n \mathfrak m^n$ is the set of $x\in R$ for which there exists $m\in\mathfrak m$ such that $(1-m)x=0$. As $R$ is local and $\mathfrak m$ is its maximal ideal, $1-x$ is a unit in $R$, so $x=0$. |
Functional Square Root of Piecewise Functions | Note that $t^{\circ 4}=s^{\circ 2}=\operatorname{id}$. While $s$ maps $0+\epsilon\mapsto 1+\epsilon\mapsto 0+\epsilon$ and $2+\epsilon\mapsto 3+\epsilon\mapsto 2+\epsilon$, we can simply let $t$ map $0+\epsilon\mapsto2+\epsilon\mapsto1+\epsilon\mapsto3+\epsilon\mapsto0+\epsilon$. Simply extend this pattern, i.e.
$$t(x)=\begin{cases}x+2&\text{if }\lfloor x\rfloor\equiv 0\pmod 4\\
x+2&\text{if }\lfloor x\rfloor\equiv 1\pmod 4\\
x-1&\text{if }\lfloor x\rfloor\equiv 2\pmod 4\\
x-3&\text{if }\lfloor x\rfloor\equiv 3\pmod 4
\end{cases} $$ |
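A small Python check that $t\circ t=s$ (a sketch; $s$ is written out from the description above):
```python
import math

def t(x):
    return x + (2, 2, -1, -3)[math.floor(x) % 4]

def s(x):  # the given square-root target: 0+e <-> 1+e, 2+e <-> 3+e, ...
    return x + 1 if math.floor(x) % 2 == 0 else x - 1

xs = [0.25, 1.7, 2.5, 3.9, 4.1, 7.3, -1.5]
print(all(abs(t(t(x)) - s(x)) < 1e-12 for x in xs))  # True
```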
How to show that $\mathrm{Sym}_{n\times n}(\Bbb{R})$ and $\mathrm{Skew}_{n\times n}(\Bbb{R})$ are subspaces of $\mathrm{M}_{n\times n}(\Bbb{R})$ | This CW post intends to remove the question from the unanswered queue.
As already noted in the comments, your proof works fine. (I would rather write something like $(aa_{ij}+b_{ij})_{ij}$, so that there is not the confusion it could be a scalar, although here it is clear from the context).
As others remarked, another way to prove that this is a subspace is to note that it is the kernel of the linear map $M\mapsto M\pm M^t$. Since sums of linear maps are linear, this amounts to showing that $M\mapsto M^t$ is a linear map. This would be a very similar computation to the computations you have. |
How do I find the derivative of the $l1$-norm of a vector of complex numbers with respect to the vector? | Unfortunately, the one norm of a complex vector (the sum of the absolute values of the entries of the vector) is not a differentiable function. In fact, the absolute value of a scalar complex number z=x+i*y is not
a differentiable function.
To see this, use the Cauchy-Riemann conditions. Write the absolute value as
abs(z)=abs(x+i*y)=sqrt(x^2+y^2)+i*0
let u(x,y)=sqrt(x^2+y^2) (the real part) and v(x,y)=0 (the imaginary part.)
If the absolute value was differentiable then it would satisfy the Cauchy-Riemann conditions. In particular, the partial derivative of u with respect to x would have to equal the partial derivative of v with respect to y. Since this clearly doesn't hold, there's no need to check the other half of the CR conditions, and you can conclude that abs(z) is not differentiable. |
Missing sign in deriving sigmoid function | $$\sigma(x) = \frac{1}{1 + e^{-x}}$$
Use the quotient rule, don't forget about the MINUS sign from the rule, and the MINUS sign due to the derivative of $e^{-x}$.
$$\sigma'(x) = \frac{-(-e^{-x})}{(1 + e^{-x})^2} = \frac{e^{-x}}{(1+e^{-x})^2}$$ |
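As a side note, this is the same as the well-known form $\sigma'(x)=\sigma(x)(1-\sigma(x))$; a quick numeric check in Python:
```python
import numpy as np

sigma = lambda x: 1/(1 + np.exp(-x))
x = np.linspace(-5, 5, 11)
print(np.allclose(np.exp(-x)/(1 + np.exp(-x))**2, sigma(x)*(1 - sigma(x))))  # True
```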
The relationship between each harmonic numbers | If you persue the approximation of $H_n$ a bit further you may find
$$ H_n=\ln n+\gamma+\frac1{2n}-\frac1{12n^2}+\frac1{120n^4}-\frac1{252n^6}+\frac1{240n^8}-\frac1{132n^{10}}+\mathcal O(\frac1{n^{12}}).$$
Hence for some constant $c$ (not explicitly specified in that Wikipedia page), we have
$$ \epsilon_n=1-\frac1{252n^2}+\frac1{240n^4}-\frac1{132n^6}+\frac{\delta_n}{cn^8}$$
with $|\delta_n|<1$. Depending on the exact value of $c$, this gives us that the sequence of the $\epsilon_n$ is strictly increasing either for all sufficiently large $n$ or possibly even for all $n$. Indeed, $\epsilon_{n+1}-\epsilon_n\approx\frac1{126n^3}$ already from the first term, and the contribution of the last term is at most $\approx \frac2{cn^8}$.
Ways to represent $ n=\pm1^2 \pm2^2 \pm \dots \pm k^2, $ (Erdos-Suranyi?) | The classic solution to show existence of one solution is to use the identity
$$ (n+3)^2 - (n+2)^2 - (n+1)^2 + n^2 = 4$$
and the fact that $1,2,3,4$ have a representation:
$$1 = + 1^2$$
$$2 = - 1^2 - 2^2 - 3^2 + 4^2$$
$$3 = -1^2 + 2^2$$
$$4 = -1^2 - 2^2 + 3^2$$
To get one representation of $4m+r$, we inductively get one representation for $4(m-1) + r$, and use the above identity.
Now, as Andre pointed out, given one representation we can extend that to infinitely many representations by writing $0+0+0 \dots$ as $(4 - 4) + (4-4) + (4-4) \dots$ and using the above identity multiple times. |
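For small values one can confirm the base representations by brute force; here is a minimal Python sketch (the helper function and its name are mine):

```python
from itertools import product

def representations(n, k):
    """All sign vectors (e_1, ..., e_k) with e_1*1^2 + ... + e_k*k^2 == n."""
    return [signs for signs in product((1, -1), repeat=k)
            if sum(e * i * i for e, i in zip(signs, range(1, k + 1))) == n]

for n in (1, 2, 3, 4):
    k = 1
    while not representations(n, k):
        k += 1
    print(n, k, representations(n, k)[0])
# n = 2 first succeeds at k = 4 with signs (-1, -1, -1, 1),
# i.e. -1^2 - 2^2 - 3^2 + 4^2 = 2, matching the table above
```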
How do you solve heavy equations in an organised and efficient way? | You have $$y(2x+y^2+y)=0$$ and $$x(x+3y^2+2y)=0,$$ so each equation splits into cases:
$x=0$ or $y=0$, or else
$$2x+y^2+y=0\qquad\text{and}\qquad x+3y^2+2y=0.$$
In the last case, the second equation gives $$x=-3y^2-2y,$$ and substituting this into the first yields $$-6y^2-4y+y^2+y=0\iff -5y^2-3y=0\iff y(5y+3)=0,$$ so $y=0$ or $y=-\frac{3}{5}$ (the latter giving $x=-3y^2-2y=\frac{3}{25}$).
Expression for Sum of Multivariate Gaussian Random Vectors | Is it just me, or can one forget all the specifics of the question and simply rely on the well-known formula (the law of total probability) $f_Y(y)=\displaystyle\int f_{Y\mid X}(y\mid x)f_X(x)\mathrm dx$?
Evaluate $\int \frac{\cos\pi z}{z^2-1}\, dz$ inside rectangle with vertices $2+i,2-i,-2+i,-2-i$ | The residue of $\frac{\cos(\pi z)}{z^2-1} =\frac{h(z)}{z-1}, \ h(z) = \frac{\cos(\pi z)}{z+1}$ at $z = 1$ is $h(1)=\frac{\cos(\pi)}{2} = -1/2$.
The residue of $\frac{\cos(\pi z)}{z^2-1} =\frac{H(z)}{z+1}, H(z) = \frac{\cos(\pi z)}{z-1}$ at $z = -1$ is $H(-1) = 1/2$.
The proof of the residue theorem in the case of finite contours and poles of order $1$ is not complicated:
$g(z) = f(z) - \frac{-1/2}{z-1}- \frac{1/2}{z+1}$ is holomorphic (on a simply connected open set containing the contour), so the Cauchy integral theorem applies:
$\int_C g(z)dz =0$
and $$\int_C f(z)dz = \int_C (\frac{-1/2}{z-1}+ \frac{1/2}{z+1})dz = 2i \pi (-1/2+1/2) = 0$$
(for evaluating $\int_C \frac{1}{z-a}dz$, use that when choosing the correct branch for the logarithm : $\frac{d}{dz}\log(z-a) = \frac{1}{z-a}$) |
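If you want a numerical sanity check of this cancellation, a crude discretization of the contour already suffices; a minimal Python sketch:

```python
import numpy as np

f = lambda z: np.cos(np.pi * z) / (z**2 - 1)

# the rectangle, traversed counterclockwise (the poles at ±1 lie strictly inside)
corners = [2 + 1j, -2 + 1j, -2 - 1j, 2 - 1j, 2 + 1j]
total = 0
for a, b in zip(corners, corners[1:]):
    z = a + (b - a) * np.linspace(0, 1, 20001)
    total += np.trapz(f(z), z)
print(abs(total))   # ≈ 0, matching the residue computation
```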
Let $ X $ Be the Number of Faces that Never Showed Up in $ n $ Dice Rolls - What's $ \mathbb{E} \left[ X \right] $? | By way of enrichment here is how to solve it using EGFs. Supposing
that the die has $q$ faces and is rolled $n$ times we have from first
principles for the expectation
$$\mathrm{E}[X] = \frac{1}{q^n} \sum_{p=0}^q p {q\choose q-p}
n! [z^n] (\exp(z)-1)^{q-p}
\\ = n! [z^n] (\exp(z)-1)^q
\frac{1}{q^n} \sum_{p=0}^q p {q\choose p}
(\exp(z)-1)^{-p}
\\ = n! [z^n] (\exp(z)-1)^q
\frac{q}{q^n} \sum_{p=1}^q {q-1\choose p-1}
(\exp(z)-1)^{-p}
\\ = n! [z^n] (\exp(z)-1)^{q-1}
\frac{q}{q^n} \left(1+\frac{1}{\exp(z)-1}\right)^{q-1}
\\ = n! [z^n] \frac{q}{q^n} \exp((q-1)z)
= \frac{q}{q^{n}} (q-1)^n = q\left(1-\frac{1}{q}\right)^n.$$
What we see here confirms the result from linearity of expectation
with $q$ indicator variables for each possible value and $(1-1/q)^n$
the probability of that value not appearing.
Observe that this technique will produce higher factorial moments
and hence the variance, e.g. we get
$$\mathrm{E}[X(X-1)] = n! [z^n] (\exp(z)-1)^q
\frac{q(q-1)}{q^n} \sum_{p=2}^q {q-2\choose p-2}
(\exp(z)-1)^{-p}
\\ = n! [z^n] (\exp(z)-1)^{q-2}
\frac{q(q-1)}{q^n} \left(1+\frac{1}{\exp(z)-1}\right)^{q-2}
\\ = n! [z^n] \frac{q(q-1)}{q^n} \exp((q-2)z)
= \frac{q(q-1)}{q^{n}} (q-2)^n =
q(q-1)\left(1-\frac{2}{q}\right)^n.$$
Recall that
$$\mathrm{Var}[X] = \mathrm{E}[X(X-1)] + \mathrm{E}[X] - \mathrm{E}[X]^2$$
so that we obtain
$$\mathrm{Var}[X] =
q(q-1)\left(1-\frac{2}{q}\right)^n
+ q\left(1-\frac{1}{q}\right)^n
- q^2\left(1-\frac{1}{q}\right)^{2n}.$$ |
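Both formulas are easy to spot-check by simulation; a minimal Python sketch:

```python
import random

def missing_faces(q, n):
    """Number of faces of a q-sided die that never show up in n rolls."""
    return q - len({random.randrange(q) for _ in range(n)})

q, n, trials = 6, 10, 200_000
xs = [missing_faces(q, n) for _ in range(trials)]
mean = sum(xs) / trials
var = sum((x - mean) ** 2 for x in xs) / trials
print(mean, q * (1 - 1/q) ** n)   # both ≈ 0.9690
print(var, q*(q-1)*(1 - 2/q)**n + q*(1 - 1/q)**n - q**2*(1 - 1/q)**(2*n))   # both ≈ 0.5502
```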
The probability of seven $7$’s in a $17$-digit random number? | Let us decide the MSD (leftmost digit) first, then place the seven $7$'s, and finally fill the remaining digits.
1. MSD $\neq 7$: $8$ choices for this digit (any nonzero digit other than $7$); $C_7^{16}$ choices for the seven $7$'s to occupy $7$ of the remaining $16$ positions; $9^9$ choices for the other $9$ digits. Total: $8C_7^{16}9^9$ choices for this case.
2. MSD $=7$: only one choice for this digit; $C_6^{16}$ choices for the other six $7$'s to occupy $6$ of the remaining $16$ positions; $9^{10}$ choices for the other $10$ digits. Total: $C_6^{16}\,9^{10}$ choices for this case.
Adding the results from 1 and 2, there are $8C_7^{16}9^9+C_6^{16}\,9^{10}$ such numbers with exactly seven $7$'s. With $9\cdot 10^{16}$ possible $17$-digit numbers, the asked probability is $\frac{8C_7^{16}9^9+C_6^{16}\,9^{10}}{9\cdot 10^{16}}$.
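A quick Python evaluation of this expression:

```python
from math import comb

favourable = 8 * comb(16, 7) * 9**9 + comb(16, 6) * 9**10
print(favourable / (9 * 10**16))   # ≈ 0.000704
```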
Getting the value of a Fourier Transform, problem with the complex part | Your sum has both real and imaginary parts, because
$$e^{ix}= \cos(x) + i \sin(x) \; .$$
Is $f(a,b)$ in your formula a real number? Then the transform has real and imaginary components in general. (For certain choices of $f(a,b)$, you can have purely real numbers.)
If you mean $e^{-i}$, then use the formula I provided you and you should be able to compute it. |
Product of ideals in a $C^*$-algebra coincides with the intersection | As far as I can tell, $I_1I_2$ is defined as the closed linear span of the set of products $\{ab : a\in I_1,\ b\in I_2\}$. That's certainly how it's done in Murphy's book.
For any $\phi$, is $\phi$ self-adjoint on $E_{\lambda_1} \oplus E_{\lambda_2} \oplus...\oplus E_{\lambda_k}$ if $\phi$ stable on $T$ | Consider $\phi:\mathbb{R}^n\rightarrow \mathbb{R}^n$ with eigenvalues $\lambda_1=1$ and $\lambda_2=2$ and corresponding eigenspaces $E_1=\langle\begin{pmatrix}0\\1\\0\\ \vdots \\0\end{pmatrix}\rangle$ and $E_2=\langle \begin{pmatrix}1\\1\\0\\ \vdots \\0\end{pmatrix}\rangle $; then $\phi|_{T}$ is represented by the matrix $$\begin{pmatrix}2&0\\1&1\end{pmatrix}$$ from which you can see $\phi|_{T}$ is not self-adjoint. If a linear mapping is self-adjoint, eigenvectors for different eigenvalues have to be orthogonal, as can be seen from $$\lambda_1(x_1,x_2)=(\phi x_1,x_2)=(x_1,\phi x_2)=\lambda_2(x_1,x_2)$$
where $\lambda_1\neq\lambda_2$ are eigenvalues with corresponding eigenvectors $x_1,x_2$. |
Relation between roots of function and roots of its derivative | If $p(x)=x^2+1$, then $p(x)$ has zero roots. However, $p'(x)=2x$, so $p'(x)$ has one root $x=0$.
That argument from your Calculus textbook proves that between any two roots of the original polynomial $p(x)$ there is at least one root of $p'(x)$ between them, but there may be other roots. |
Elementary number theory (HCF) | We show (i) If $d$ divides $x$ and $y$, then $d$ divides $X$ and $Y$ and (ii) If $d$ divides $X$ and $Y$, then $d$ divides $x$ and $y$.
Assertion (i) is obvious.
To prove (ii), note that from the first equation, by multiplying through by $b_2$, we have
$$b_2X=a_1b_2x+b_1b_2 y.$$
From the second equation we have
$$b_1Y=a_2b_1x+b_1b_2y.$$
Subtract. We get
$$b_2X-b_1Y=(a_1b_2-a_2b_1)x=x.$$
Now from $d\mid X$ and $d\mid Y$ we conclude that $d\mid x$.
A similar argument shows that $d\mid y$.
We have shown that $x,y$ and $X,Y$ have the same set of common divisors, and in particular the same greatest common divisor. |
Integrating Square Root of Rational Trigonometric Equation | Well, if you substitute:
$$\text{u}:=\frac{\sqrt{2}\cdot\cos\left(\frac{x}{2}\right)}{\sqrt{1+\cos\left(\text{k}\right)}}\tag1$$
You end up with:
$$\mathscr{I}_{\space\text{k}}:=\int_\text{k}^\pi\sqrt{\frac{1-\cos\left(x\right)}{\cos\left(\text{k}\right)-\cos\left(x\right)}}\space\text{d}x=-2\int_{\frac{\sqrt{2}\cdot\cos\left(\frac{\text{k}}{2}\right)}{\sqrt{1+\cos\left(\text{k}\right)}}}^0\frac{1}{\sqrt{1-\text{u}^2}}\space\text{d}\text{u}=$$
$$-2\cdot\left\{\arcsin\left(0\right)-\arcsin\left(\frac{\sqrt{2}\cdot\cos\left(\frac{\text{k}}{2}\right)}{\sqrt{1+\cos\left(\text{k}\right)}}\right)\right\}=2\cdot\arcsin\left(\frac{\sqrt{2}\cdot\cos\left(\frac{\text{k}}{2}\right)}{\sqrt{1+\cos\left(\text{k}\right)}}\right)=$$
$$\pi\cdot\sqrt{\cos^2\left(\frac{\text{k}}{2}\right)}\cdot\sec\left(\frac{\text{k}}{2}\right)\tag2$$
And, use:
$$\cos\left(\frac{\text{k}}{2}\right)\cdot\sec\left(\frac{\text{k}}{2}\right)=1\tag3$$ |
Sieves over covering families? Topology or pretopology? | In my understanding, the difference between Grothendieck topologies and pretopologies is the same as the difference between (usual) topologies and bases of a topology. You can define a usual topology, say on $\mathbb{R}^n$, by requiring that the open balls of radius $\frac{1}{n}$ whose centers have rational coordinates are open. From this, there is a way to recover the full topology.
Say two bases are equivalent if they define the same topology. There is a quick way to see if two bases $\mathfrak{B}_1,\mathfrak{B}_2$ are equivalent: just check whether for any $U_1\in\mathfrak{B}_1$ and any $x\in U_1$, there exists $U_2\in\mathfrak{B}_2$ such that $x\in U_2$ and $U_2\subset U_1$, and the other way around.
Now the problem with Grothendieck topologies is that this quick way does not exist. More precisely, given two Grothendieck pretopologies $Cov_1, Cov_2$, say they are equivalent if they define the same Grothendieck topology. There is in general no way to quickly compare them along the following lines: if $\{U_i\to X\}$ is a cover in $Cov_1$, then there exists $\{V_j\to X\}$ in $Cov_2$ such that...
In fact, the condition for comparing them leaves the domain of coverings: we need sieves.
In a similar vein, if we have two pretopologies $Cov_1,Cov_2$, how do we define the pretopology generated by their union?
(In fact, given a pretopology $Cov$, there is a way to saturate it which makes the comparison possible, but this saturation process uses sieves; and even if it avoids sieves, it is not easier than them...)
I never heard that the indices may be annoying, though I understand why: if we need to compare objects, like Čech nerves, associated to two coverings which differ only by their ordering, then even though this is trivial, a rigorous proof becomes more technical. The thing is, in most Grothendieck topologies of interest (the exact name is superextensive topology), $\{U_i\to X\}$ is a covering iff $\{\coprod_i U_i\to X\}$ is a covering, so we can replace any covering by a single object $\{U\to X\}$. Hence, the indices are not a problem anymore.
Now, for many purposes, pretopologies are sufficient! In fact, I have never used sieves, though I have used several Grothendieck topologies. In algebraic geometry most topologies are given by coverings, and they are already comparable: a Zariski covering is in particular an étale covering, which is in turn an $fppf$-covering...
How many lucky usernames are there? | In a four-character user name, the condition is that three consecutive characters must be identical. Thus, the user name can have at most two distinct characters.
Case 1: The user name has exactly two unique characters.
The two characters can be selected from the $26$ characters in $\binom{26}{2}$ ways.
Out of these two, one has to be selected to appear three consecutive times. This can be done in $\binom{2}{1}=2$ ways.
Finally, which of these two characters comes first in the user name needs to be decided, which can be done in $\binom{2}{1}=2$ ways.
Thus, the total number of ways is $\binom{26}{2}\cdot 2\cdot 2=\binom{26}{2}\cdot 4$
Case 2: The user name can have a single character appearing $4$ times
This is just like the previous case, except that we need to add the cases in which the user name has only one distinct character. That character can be selected in $\binom{26}{1}=26$ ways.
Then, the total number of ways is $\binom{26}{2}\cdot 4 + 26$
Looking at the options, it appears that the question is asking for case 1, which means the third option is correct. |
How prove an equality | We have, for all $a\in \mathbb{R}$ (for you $a=10$)
$$\int_{-a}^a f(x) \text{d} x = \int_0^a f(x) \text{d} x + \int_{-a}^0 f(x) \text{d} x .$$
But thanks to a change of variable $u =-x$
$$ \int_{-a}^0 f(x) \text{d} x = \int_a^0 -f(-u)\text{d} u = \int_0^a f(-u) \text{d} u.$$
We finally obtain the result since $f$ is even which means that for all $x\in \mathbb{R}$, $f(x)=f(-x)$. |
Iterated Integral with variable substitution | It's convenient to solve this integral with a substitution. Since we want to separate the variables in the integral
\begin{align*}
\int_0^{1/2} \int_x^{1-x} (x+y)^9(x-y)^9 dy\,dx\tag{1}
\end{align*}
it's reasonable to use the substitution
\begin{align*}
\left.
\begin{matrix}
u=x+y\\
v=x-y\\
\end{matrix}
\right\}
\quad\Longleftrightarrow\quad
\left\{
\begin{matrix}
x=\frac{1}{2}(u+v)\\
y=\frac{1}{2}(u-v)
\end{matrix}
\right.\tag{2}
\end{align*}
According to the Change of variable theorem we want to calculate
\begin{align*}
\int_{x_0}^{x_1}\int_{y_0}^{y_{1}}f(x,y)\,dy\,dx
=\int_{u_0}^{u_1}\int_{v_0}^{v_{1}}f(x(u,v),y(u,v))
\left|\operatorname{det}
\begin{pmatrix}
x_u&x_v\\
y_u&y_v
\end{pmatrix}
\right|
\,dv\,du\tag{3}
\end{align*}
Jacobian
At first we calculate the absolute value of the determinant of the Jacobian matrix
\begin{align*}
\left|\operatorname{det}
\begin{pmatrix}
x_u&x_v\\
y_u&y_v
\end{pmatrix}
\right|=
\left|\operatorname{det}
\begin{pmatrix}
\frac{1}{2}&\frac{1}{2}\\
\frac{1}{2}&-\frac{1}{2}
\end{pmatrix}
\right|
=\frac{1}{2}
\end{align*}
Area transformation
We observe from (1) the region of integration is
\begin{align*}
&0\leq x\leq \frac{1}{2}\\
&x\leq y\leq 1-x
\end{align*}
This is the area of a triangle with three lines as boundary lines, the $y$-axis $x=0$, the major diagonal $x=y$ and a parallel to the minor diagonal through $(1,0)$, $y=1-x$.
Since the transformation (2) is linear, lines are transformed to lines. So, the transformed $(u,v)$-area is again enclosed by three lines.
\begin{array}{lclcl}
x=0\qquad&\rightarrow&\qquad \frac{1}{2}(u+v)=0
&\qquad\rightarrow&u=-v\\
x=y\qquad&\rightarrow&\qquad \frac{1}{2}(u+v)=\frac{1}{2}(u-v)
&\qquad\rightarrow&v=0\\
y=1-x\qquad&\rightarrow&\qquad \frac{1}{2}(u-v)=1-\frac{1}{2}(u+v)
&\qquad\rightarrow&u=1\\
\end{array}
We see the $(u,v)$-triangle area is given by the three lines $u=-v, v=0$ and $u=1$ which can be written as
\begin{align*}
0\leq u \leq 1\\
-u\leq v\leq 0
\end{align*}
Integration
Now we have all ingredients to perform the integration according to (3). We obtain
\begin{align*}
\int_0^\frac{1}{2}\int_x^{1-x} (x+y)^9(x-y)^9 dy\,dx
&=\int_0^1\int_{-u}^0(uv)^9\cdot\frac{1}{2}\,dv\,du\\
&=\frac{1}{2}\int_0^1u^9\left(\int_{-u}^0v^9\,dv\right)\,du\\
&=\frac{1}{2}\int_0^1u^9\left(\left.\frac{1}{10}v^{10}\right|_{-u}^0\right)\,du\\
&=-\frac{1}{20}\int_0^1u^{19}\,du\\
&=-\frac{1}{20}\left.\left(\frac{1}{20}u^{20}\right)\right|_{0}^1\\
&=-\frac{1}{400}
\end{align*} |
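As a numerical cross-check, one can hand the original iterated integral to a quadrature routine; a minimal SciPy sketch:

```python
from scipy.integrate import dblquad

# dblquad integrates func(y, x) over x in [a, b], y in [gfun(x), hfun(x)]
val, err = dblquad(lambda y, x: (x + y)**9 * (x - y)**9,
                   0, 0.5, lambda x: x, lambda x: 1 - x)
print(val)   # ≈ -0.0025 = -1/400
```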
Time derivative of white noise | There isn't really an issue with taking derivatives of stochastic processes like $W$, so long as you interpret the resulting process appropriately. Even the usual white noise process "$\xi = \frac{dW}{dt}$" should really be interpreted as a generalized stochastic process, that is the realizations of $\xi$ are generalized functions. This is because - as you state - realizations of $W$ are almost surely nowhere differentiable. However, they do have derivatives "in the sense of distributions" that is, generalized derivatives, and this is one way to attack the problem (the Ito calculus/stochastic differential form $dW_t$ approach is another way). If you have never seen the theory of generalized functions (called distributions elsewhere but in probability that word has another meaning), the following will probably not make too much sense to you, but this is how I work with these things. Gel'fand and Vilenkin ("Generalized Functions Volume IV") is the classic reference for this approach but there are probably better modern refs.
To define a generalized stochastic process $\eta$, you fix a space of test functions - usually smooth, compactly supported functions $\mathcal{D} = C_0^\infty$. Then, a generalized stochastic process $\eta(\omega)$ is a random element of $\mathcal{D}^\prime$ (a map $\eta:\Omega\rightarrow\mathcal{D}^\prime$ where $(\Omega,\mathcal{F},\mathbb{P})$ is a probability space). A much more convenient way to say this is that given any test function $\varphi\in\mathcal{D}$, we have that
$$
X_\varphi = \langle \eta,\varphi\rangle
$$ is an ordinary real random variable. The bracket notation is intended to "look like" an inner product, i.e. you can think of $\langle \eta,\varphi\rangle = \int \eta(x)\varphi(x)dx$, though this isn't really correct because $\eta$ is "not a function".
The mean and covariance are then defined as
$$
\langle\mathbb{E}[\eta],\varphi\rangle = \mathbb{E}[\langle\eta,\varphi\rangle] = \mathbb{E}[X_\varphi]
$$ and
$$
Cov(\varphi,\psi) = \mathbb{E}[X_\varphi X_\psi]
$$ From this, you can extract the covariance operator via
$$
\mathbb{E}[X_\varphi X_\psi] = \langle \mathcal{C}\varphi,\psi\rangle
$$ This formula is difficult to parse until you work some examples - we'll see in a second how this works.
Returning to your original question: suppose we want to define $\dot{W}$ using this approach. Well, in the theory of generalized functions, we have the definition
$$
X_\varphi = \langle \dot{W},\varphi\rangle = - \langle W,\dot{\varphi}\rangle
$$The negative sign comes from "integration by parts". Now, because $W$ is (almost surely) continuous and $\dot{\varphi}$ is smooth, we can use integrals instead of "abstract brackets":
$$
X_\varphi(\omega) = -\int_{-\infty}^\infty W(t,\omega)\dot{\varphi}(t) dt
$$ Thus (interchanging limits requires a moment of justification):
$$
\mathbb{E}[X_\varphi(\omega)] = -\int_{-\infty}^\infty \mathbb{E}[W(t,\omega)] \dot{\varphi}(t) dt = 0
$$ and
$$
\mathbb{E}[X_\varphi(\omega)X_\psi(\omega)] = \int_{-\infty}^\infty\int_{-\infty}^\infty \mathbb{E}[W(s,\omega)W(t,\omega)] \dot{\varphi}(s)\dot{\psi}(t) dsdt = \int_{-\infty}^\infty\int_{-\infty}^\infty \min(s,t) \dot{\varphi}(s)\dot{\psi}(t) dsdt
$$ To see how this results in "$k(s,t) = \delta(s-t)$" covariance, you do a bit of calculus, remembering that $\varphi(s)$ and $\psi(t)$ are smooth and compactly supported so all the integration by parts boundary terms vanish, and you see that
$$
\int_{-\infty}^\infty\int_{-\infty}^\infty \min(s,t) \dot{\varphi}(s)\dot{\psi}(t) dsdt = \int_{-\infty}^\infty \varphi(t) \psi(t) dt
$$ Thus we have written
$$
\mathbb{E}[X_\varphi X_\psi] = \langle \mathcal{C}\varphi,\psi\rangle
$$where $\mathcal{C}$ is the "identity operator", that is the convolution operator with kernel $\delta(s-t)$.
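To make this concrete, here is a minimal Monte-Carlo check of $\mathbb{E}[X_\varphi X_\psi]=\int\varphi\psi\,dt$ with numpy; the particular test functions are my own choice (they merely vanish at the endpoints of $[0,1]$, which is all the integration by parts needs here):

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps = 5000, 1000
t = np.linspace(0.0, 1.0, n_steps + 1)
dt = t[1] - t[0]

# Brownian paths, one per row, with W(0) = 0
W = np.cumsum(rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps)), axis=1)
W = np.hstack([np.zeros((n_paths, 1)), W])

phi = np.sin(np.pi * t) ** 2        # vanishes at t = 0, 1
psi = np.sin(2 * np.pi * t) ** 2    # vanishes at t = 0, 1
X_phi = -np.trapz(W * np.gradient(phi, t), t, axis=1)
X_psi = -np.trapz(W * np.gradient(psi, t), t, axis=1)

print(np.mean(X_phi * X_psi))   # Monte-Carlo estimate, ≈ 0.25
print(np.trapz(phi * psi, t))   # ∫ φψ dt = 0.25
```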
If you want to do the same thing but with $\ddot{W}$, you would start with the definition of the generalized ("distributional") second derivative:
$$
\langle\ddot{W},\varphi\rangle = \langle W,\ddot{\varphi}\rangle
$$ You can then work through the same process to see that
$$
\langle\mathbb{E}[\ddot{W}],\varphi\rangle = \langle\mathbb{E}[W],\ddot{\varphi} \rangle = 0
$$ and
$$
\mathbb{E}[X_\varphi X_\psi] = \int_{-\infty}^\infty\int_{-\infty}^\infty\min(s,t) \ddot{\varphi}(s)\ddot{\psi}(t) dsdt = -\int_{-\infty}^\infty \ddot{\varphi}(t)\psi(t) dt = \langle \mathcal{C}\varphi,\psi\rangle
$$ Thus the covariance operator is the negative second derivative, i.e. the covariance kernel function is $-\ddot{\delta}(s-t)$.
Additional note In response to a good comment, how do we know that the processes $\dot{W}$ and $\ddot{W}$ are Gaussian? First, a generalized Gaussian random process $\eta$ is one for which any random vector formed by testing against $N$ functions is (multivariate) Gaussian, i.e. if
$$
X_{\varphi_1:\varphi_N} = [\langle \eta,\varphi_1\rangle,\ldots,\langle \eta,\varphi_N\rangle ]^t \in \Bbb{R}^N
$$ then $\eta$ is Gaussian if and only if $X_{\varphi_1:\varphi_N}$ is Gaussian for every choice of $(\varphi_1,\ldots,\varphi_N)\in \mathcal{D}^N$. With this definition, it is easy to show that if $W$ is a classical Gaussian random process - say one with almost surely continuous paths such as the Wiener process - then $W$ is also a generalized Gaussian random process.
Then, Gaussianity of the (generalized) derivatives of $W$ follows from the definitions
$$
\langle\dot{W},\varphi\rangle := - \langle W,\dot{\varphi}\rangle\\
\langle\ddot{W},\varphi\rangle := \langle W,\ddot{\varphi}\rangle
$$ Since $W$ is a generalized Gaussian R.P., $\dot{W}$ and $\ddot{W}$ are as well. |
On Finding Means of Distributions | Don't forget that the conditional mean should be written $$\mu(y)=\int x f_{X|Y}(x|y)d x$$ and is a function of $y$.
For the posterior mean,
\begin{eqnarray*}
E \left[ \theta |x \right] & = & \int \theta f_{\Theta |X} \left( \theta |x
\right) \mathrm{d} \theta\\
& = & \frac{1}{f_X \left( x \right)} \int \theta f_{\Theta} \left( \theta
\right) f_{X| \Theta} \left( x \left| \theta \right. \right) \mathrm{d}
\theta
\end{eqnarray*}
If you have two components, then
\begin{eqnarray*}
E \left[ \theta |x \right] & = & \left(\begin{array}{c}
E \left[ \theta_1 |x \right]\\
E \left[ \theta_2 |x \right]
\end{array}\right)\\
& = & \left(\begin{array}{c}
\int \theta_1 f_{\Theta |X} \left( \theta |x \right) \mathrm{d} \theta\\
\int \theta_2 f_{\Theta |X} \left( \theta |x \right) \mathrm{d} \theta
\end{array}\right)
\end{eqnarray*}
There is nothing special about vectors here. It was implicit, in $\begin{array}{lll}
E \left[ \theta |x \right] & = & \int \theta f_{\Theta |X} \left( \theta |x
\right) \mathrm{d} \theta
\end{array}$ that, if $\theta$ is a vector, then both sides of the equation are vectors.
For the mean from the joint distribution (and non-rigorously)
\begin{eqnarray*}
E \left[ X \right] & = & \int xf_{X, Y} \left( x, y \right) \mathrm{d} x
\mathrm{d} y\\
& = & \int xf_X \left( x \right) \underbrace{\left( \int f_{Y \left| X
\right.} \left( y|x \right) \mathrm{d} y \right)}_{= 1} \mathrm{d} x\\
& = & \int xf_X \left( x \right) \mathrm{d} x
\end{eqnarray*}
So if the object in which you are interested only depends on some marginal variables, you only need to compute the mean with respect to the distribution of those marginal variables.
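As a standard concrete instance of the posterior-mean formula (a textbook conjugate pair, chosen purely for illustration): if $\theta \sim \mathrm{Beta}(\alpha,\beta)$ and $X\mid\theta \sim \mathrm{Binomial}(n,\theta)$, then the posterior is $\mathrm{Beta}(\alpha+x,\,\beta+n-x)$ and
$$E \left[ \theta |x \right] = \frac{\alpha+x}{\alpha+\beta+n}.$$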
I hope that things are clearer now! |
Tangent Points to Ellipse | Samjoe gave a nice succinct answer. But I’d like to elaborate a little:
To get the slope of the tangent line, which is a derivative, we can use implicit differentiation:
The ellipse is $$\frac{x^2}{4}+y^2=1$$
Now find the derivative:
$$\frac{2x}{4}+2y\,y^\prime=0$$
$$2y\,y^\prime=-\frac{x}{2}$$
$$y^\prime=-\frac{x}{4y}$$
Now let's denote our tangent point on the ellipse as $(x_o,\,y_o)$ and, denoting the slope as $k$, we get:
$$k=-\frac{x_o}{4y_o}$$
So our general equation of the tangent line to the ellipse is
$$y-y_o=-\frac{x_o}{4y_o}(x-x_o)$$
Now we multiply both sides of our equation by $y_o$:
$$y\,y_o-y_o^2=-\frac{x_o\,x}{4}+\frac{x_o^2}{4}$$
Now we rearrange the terms:
$$\frac{x_o\,x}{4}+y\,y_o=\frac{x_o^2}{4}+y_o^2$$
The right hand side of the equation $\frac{x_o^2}{4}+y_o^2=1\;$ because the point $(x_o,y_o)$ lies on the ellipse.
So we get $$\frac{x_o\,x}{4}+y\,y_o=1$$
This is the equation of our tangent line/lines. All we need to do now is to plug $x=0$ and $y=4$. We should not confuse the point/points $(x_o,y_o)$ on the ellipse and the point $(0,4)$ which is outside. So we get
$$0\cdot x_o+4\,y_o=1$$
and $\quad y_o=1/4.\quad$ Now it's trivial to find $\;x_o.\;$ We just plug $\; y_o=1/4\;$ into the equation of the
ellipse to get the values of $\; x_o$: $\quad x_o=\pm\sqrt{15}/2$
So, the two tangent points are $\;(-\sqrt{15}/2,\;\,1/4)\;$ and $\quad(\sqrt{15}/2,\;\,1/4)$
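As a quick consistency check, the slope of the line through $(0,\,4)$ and $(\sqrt{15}/2,\;1/4)$ is
$$\frac{1/4-4}{\sqrt{15}/2-0}=-\frac{15}{2\sqrt{15}}=-\frac{\sqrt{15}}{2},$$
which agrees with $k=-\dfrac{x_o}{4y_o}$ at that point.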
Hope it was helpful |
Reducing a fraction? | $$\frac{KM+KN}{N^2+NM}=\frac{K(M+N)}{N(N+M)}=\frac{K}{N}$$
As Alex Jordan comments, we can cancel out $M+N$ if and only if $M+N\neq 0$. In this case, given the fact that the denominator is of the form $N(M+N)$ we already know this is a non-zero number, and we can cancel.
On the other hand, if we were given something like $x=y$ then either $x=y=0$ or $x\neq 0$ and then we can divide by $x$ and have $\frac yx=1$. |
2 dimensional (graphical) topological representation of a sphere | It's not! They are describing the projective plane as a sphere whose antipodal (opposite) points have been identified. This is no longer a sphere.
Ah, there is indeed a picture of a sphere as an identification space. I am not able to draw pictures here, but imagine a long pointed piece of rubber, narrowing to one point at the top and to one point at the bottom. When we glue the edges together, points at the corresponding heights being attached to one another, we get a (not perfectly round) sphere with a seam on it. |
If I were to add the axiom schema of (restricted) comprehension to my "reduced" set theory, would I be able to prove any new propositions? | Let's write $T$ for your "reduced set theory": extensionality, nullset, pairs, unions, and powerset. And let's write $\text{Inf}$ for the axiom of infinity and $\text{Comp}$ for the schema of comprehension.
Now for any sentence $\varphi$, just by basic logic, we have $$T+\text{Inf}\vdash \varphi \quad \text{if and only if} \quad T\vdash \text{Inf}\rightarrow \varphi.$$ And $$T+\text{Inf}+\text{Comp}\vdash \varphi \quad \text{if and only if} \quad T+\text{Comp}\vdash \text{Inf}\rightarrow \varphi.$$
I assume you agree that the schema of comprehension allows $T+\text{Inf}$ to prove new propositions. So if we let $\varphi$ be some sentence such that $T+\text{Inf}+\text{Comp}\vdash \varphi$ and $T+\text{Inf}\not\vdash \varphi$, then we have $T+\text{Comp}\vdash \text{Inf}\rightarrow \varphi$ and $T\not\vdash \text{Inf}\rightarrow \varphi$.
So yes, adding the schema of comprehension to $T$ does allow us to prove new propositions.
A more interesting question is whether the theory $T+\lnot\text{Inf}$ (with the negation of the axiom of infinity) proves all instances of the comprehension schema. I agree that intuitively it should... but this might be sensitive to exactly how one phrases the infinity axiom... |
Query regarding an alegbraic inequality | For $q > 1$, take $a = x + 1$ and $b = x$ where $x > 0$. Then, $1 = 1^q \ge C_q((x+1)^q-x^q) > C_q q x^{q-1}$.
Regardless of the value of $C_q$, this inequality is violated for sufficiently large $x$.
For $0 < q < 1$, you can show that $C_q = \dfrac{1}{q}$ works if you restrict $a > b > 0$. Without this restriction, one of $a$, $b$, or $a-b$ could be negative, which makes defining $a^q$, $b^q$, or $(a-b)^q$ tricky. |
General solution of a differential equation $x''+{a^2}x+b^2x^2=0$ | First make the substitution:
$x=-\,{\frac {6y}{{b}^{2}}}-\,{\frac {{a}^{2}}{{2b}^{2}}}$
This will give you the differential equation:
$y^{''} =6y^{2}-\frac{a^{4}}{24}$ which is to be compared with the second order differential equation for the Weierstrass elliptic function ${\wp}(t-\tau_{0},g_2,g_3)$:
${\wp}^{''} =6{\wp}^{2}-\frac{g_{2}}{2}$
Where $g_{2}$, $g_3$ are known as elliptic invariants. It then follows that the solution is given by:
$y={\wp}(t-\tau_{0},\frac{a^{4}}{12},g_3)$
$x=\,-{\frac {6}{{b}^{2}}}{\wp}(t-\tau_{0},\frac{a^{4}}{12},g_3)-\,{\frac {{a}^{2}}{{2b}^{2}}}$
Where $\tau_0$ and $g_3$ are constants determined by the initial conditions and $t$ is the function variable. |
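The substitution is easy to verify symbolically; a minimal SymPy sketch:

```python
import sympy as sp

t = sp.symbols('t')
a, b = sp.symbols('a b', positive=True)
y = sp.Function('y')

x = -6*y(t)/b**2 - a**2/(2*b**2)
ode = sp.diff(x, t, 2) + a**2*x + b**2*x**2

# impose y'' = 6 y^2 - a^4/24 and check the original ODE collapses to 0
print(sp.simplify(ode.subs(sp.diff(y(t), t, 2), 6*y(t)**2 - a**4/24)))   # 0
```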
Wien's Displacement Law and the Planck Function | Wien's displacement law is in terms of frequency instead of wavelength. The Planck function has a shape that is dependent on the parametrization you use. In other words, the maxima will not be the same according to which parametrization you use.
EDIT: The quantity you study is itself a derivative with respect to either $\lambda$ or $\nu$; $B$ is a spectral density. But that means the spectral densities are connected by
$$B_{\lambda} = \frac{d\nu}{d\lambda} B_{\nu}$$
Since $\lambda\nu=c$, the scaling factor is exactly $\frac{d\nu}{d\lambda}=-\frac{\nu^2}{c}$. Note that on the Wikipedia page they add an extra $-$ sign; I suppose that's to keep the densities positive.
Clarification about the definition of surjectivity | Ze is using the same definition of surjectivity.
However, ze is also using the fact that $\Delta: \mathcal P^k \to \mathcal P^{k-2}$ is a linear map between vector spaces. Therefore, it's enough to pick a basis for $\mathcal P^{k-2}$ and check that each of the basis elements is in the image of $\Delta$. This is enough to show that $\Delta$ is surjective, because you started by picking a basis! |
Non-bijective inverse? | In general, $f^{-1}([a, b])\neq [f^{-1}(a), f^{-1}(b)]$, even for continuous functions (and even for bijective functions).
In this case, we're first looking at $f([0, 1]) = [2, 5]$. Then we apply $f^{-1}$ to that interval. By definition, $f^{-1}([2, 5])$ is the collection of all $x$ so that $f(x)\in [2, 5]$. And it so happens that this collection of $x$-values is the interval $[-1, 1]$. |
non-decreasing by Young's inequality | Young's inequality says $ab \leq \frac {a^{p}} p + \frac {b^{q}} q$ if $a,b \geq 0$, $1<p<\infty $ and $\frac 1 p + \frac 1 q =1$. Let $0<t<s$ and denote by $c$ the number $(\int_0^{\infty } x^{s}f(x)\, dx)^{t/s}$. Let $p=\frac s t$, $q=\frac s {s-t}$. Apply Young's inequality with $a=\frac {x^{t}} c$ and $b=1$. You will get $\frac {x^{t}} c \leq \frac {x^{s}} {pc^{p}}+\frac 1 q$. Multiply by $f(x)$ and integrate to get $ \frac {\int_0^{\infty}x^{t}f(x)\, dx } c \leq \frac 1 p +\frac 1 q =1$ (where we used the fact that $c^{p}$ in the denominator cancels with $\int_0^{\infty } x^{s} f(x)\, dx$). Hence $ {\int_0^{\infty}x^{t}f(x)\, dx } \leq c=(\int_0^{\infty } x^{s}f(x)\, dx)^{t/s}$. This gives $ ({\int_0^{\infty}x^{t}f(x)\, dx })^{1/t} \leq c^{1/t}=(\int_0^{\infty } x^{s}f(x)\, dx)^{1/s}$, which is what we want to prove.
Contour intergals of rational fuction | 1)$$F=\frac{1}{x^3+y^3}\frac{1}{2}d(x^2+y^2)$$
In polar coordinates it becomes
$$\frac{1}{\cos^3(\theta)+\sin^3(\theta)}\frac{dr^2}{2r^3}=\frac{1}{\cos^3(\theta)+\sin^3(\theta)}\frac{dr}{r^2}$$
You get $0$ just by "integrating" the radial part.
2) If you integrate on the segment between $(0,1)$ and $(1,0)$ you are integrating on the line $y=1-x$:
$$F=\frac{1}{2}\frac{d(2x^2-2x+1)}{3x^2-3x+1}=\frac{1}{2}\frac{2}{3}\frac{d(3x^2-3x+1)}{3x^2-3x+1}=\frac{1}{3}d\ln|3x^2-3x+1|\ .$$ |
Changing index in summation | If in doubt, write the terms out explicitly ... let's leave the $1/6$ out
\begin{eqnarray*}
\sum_{k=8}^{\infty}\left(\frac{5}{6}\right)^{k-1} &=& \left(\frac{5}{6}\right)^{7} + \left(\frac{5}{6}\right)^{8} + \left(\frac{5}{6}\right)^{9} + \cdots \\
&=& \left(\frac{5}{6}\right)^{7} \left(1 + \frac{5}{6} + \left(\frac{5}{6}\right)^{2} + \cdots \right) \\
&=& \left(\frac{5}{6}\right)^{7} \sum_{j=0}^{\infty} \left(\frac{5}{6}\right)^{j}. \\
\end{eqnarray*} |
Prove $0 \leq \frac{b-a}{1-ab} \leq 1$ if $ 0 \leq a \leq b \leq 1$ | Since $a\leq b$ and $ab\leq 1$, the fraction is nonnegative, so we are only left to prove $\dfrac{b-a}{1-ab} \leq 1$,
or equivalently
$$ 1-\dfrac{b-a}{1-ab} = \dfrac{1-ab+a-b}{1-ab} = \dfrac{(1+a)(1-b)}{1-ab} \geq 0,$$
which holds because $1+a\geq 0$, $1-b\geq 0$ and $1-ab>0$ (assuming $ab\neq 1$, for otherwise the expression is undefined). And we are done.
Solving a 2 variable integral with a delta function | Note that the integral is zero by symmetry if $n$ is odd. Assume from now on that $n\geq 0$ is even.
One idea is to use polar coordinates $$(p,q)~=~(r\cos\theta,r\sin\theta).\tag{1}$$
Then
$$I~:=~ \iint_{\mathbb{R}^2}\! \mathrm{d}p~\mathrm{d}q~p^n~\delta(p^2+q^2-E)
~=~ \int_{\mathbb{R}_+}\! \mathrm{d}r~r^{n+1}\delta(r^2-E)\int_{[0,2\pi]}\! \mathrm{d}\theta~\cos^n\theta $$
$$~=~\ldots~=~\frac{1}{2}H(E) |E|^{n/2} 2\pi \frac{(n-1)!!}{n!!}.\tag{2}$$ |
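The double-factorial constant in the angular integral is easy to confirm numerically; a minimal Python sketch (only even $n$ matters, since odd $n$ gives zero):

```python
import math
from scipy.integrate import quad

def double_fact(m):
    return 1 if m <= 0 else m * double_fact(m - 2)

for n in (0, 2, 4, 6):
    val, _ = quad(lambda th: math.cos(th)**n, 0, 2*math.pi)
    print(n, val, 2*math.pi * double_fact(n - 1) / double_fact(n))
```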
Find orthogonal vector that is in plane | Your intuition is sort of right: you can't have a direction vector that is simultaneously orthogonal to a plane, but at the same time be a direction between vectors in the plane. It's not quite what they're asking though.
Note that the plane doesn't pass through the origin. When they say they want a vector in the plane, they're looking for a point in the plane, or if you like, a vector from the origin whose tip lies within the plane. Basically, we're looking for the unique point in the plane that is closest to the origin, so that the line between the origin and this point is perpendicular to the plane.
Normal vectors need to be parallel to $(-1, 2, -1)$, as you pointed out. So, the vector (or point) we're looking for takes the form $(-k, 2k, -k)$ for some $k \in \Bbb{R}$, but at the same time satisfies the equation $-x + 2y - z = 2$. Let's use this information to solve for $k$:
$$-(-k) + 2(2k) - (-k) = 2 \iff 6k = 2 \iff k = \frac{1}{3}.$$
So, our vector comes to be
$$\left(-\frac{1}{3}, \frac{2}{3}, -\frac{1}{3}\right).$$
This is the unique vector that is both orthogonal to the plane (i.e. the vector from $(0, 0, 0)$ to that point is orthogonal to the plane), and lies in the plane (i.e. the endpoint lies in the plane). |
Bounding a Mobius Fractional Sum | First, I want to point out that $$\sum_{n\leq x}\left\{\frac xn\right\}\neq\{x\}+\sum_{2\leq n\leq x-1}\left\{\frac xn\right\}.$$ The problem is that $x$ need not be an integer (in particular, you may not have a case $n=x$, so $x-1$ might miss an integer). Now, the fractional part of a number is always bounded above by $1$, so you in fact have $$\sum_{2\leq n\leq x}\left\{\frac xn\right\}\leq\sum_{2\leq n\leq x}1=\lfloor x\rfloor-1.$$ Combining the steps as you have, you end up with $$\left|1+\sum_{n\leq x}\mu(n)\left\{\frac xn\right\}\right|\leq 1+\{x\}+\lfloor x\rfloor-1=x.$$
Showing homomorphism for $\theta: GL_2 (\Bbb Q) \rightarrow \Bbb Q\setminus \{0\}$ given by $\theta(A) = \det A$. | Start working from the other end. It’s usually better either to work from the more complicated end or to work on both ends of the calculation simultaneously.
You know that $$AB=\pmatrix{a_1&a_2\\a_3&a_4}\pmatrix{b_1&b_2\\b_3&b_4}=\pmatrix{a_1b_1+a_2b_3&a_1b_2+a_2b_4\\a_3b_1+a_4b_3&a_3b_2+a_4b_4}\;,$$
so
$$\begin{align*}
\det AB&=(a_1b_1+a_2b_3)(a_3b_2+a_4b_4)-(a_1b_2+a_2b_4)(a_3b_1+a_4b_3)\\
&=\color{red}{a_1b_1a_3b_2}+a_1b_1a_4b_4+a_2b_3a_3b_2+\color{blue}{a_2b_3a_4b_4}\\
&\qquad-\color{red}{a_1b_2a_3b_1}-a_1b_2a_4b_3-a_2b_4a_3b_1-\color{blue}{a_2b_4a_4b_3}\\
&=a_1b_1a_4b_4+a_2b_3a_3b_2-a_1b_2a_4b_3-a_2b_4a_3b_1\\
&=a_1a_4b_1b_4-a_1a_4b_2b_3+a_2a_3b_2b_3-a_2a_3b_1b_4\\
&=a_1a_4(b_1b_4-b_2b_3)-a_2a_3(b_1b_4-b_2b_3)\\
&=(a_1a_4-a_2a_3)(b_1b_4-b_2b_3)\\
&=\det A\det B\;.
\end{align*}$$ |
Evaluate $\int \cos(3x) \sin(2x) \, dx$. | HINT:
Using Werner Formulas
$$2\cos3x\sin2x=\sin(3x+2x)-\sin(3x-2x)$$
Now use $\displaystyle\int\sin mx\ dx=-\frac{\cos mx}m+C$
Finally and optionally, we can use the Multiple Angle Formula to expand $\cos5x$ in powers of $\cos x$
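For the record, carrying the hint through gives
$$\int\cos 3x\,\sin 2x\;dx=\frac12\int\left(\sin 5x-\sin x\right)dx=-\frac{\cos 5x}{10}+\frac{\cos x}{2}+C.$$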
Quotient by equivalence relation | So, $A$ and $B$ are equivalent iff they are both "left-equivalent"
and "right-equivalent". Now $A$ and $B$ are left equivalent ($A=VB$
with $V\in\text{SL}_2(\Bbb Z)$) iff they have the same "row-space"
(lattice spanned by their rows). This lattice has index $2$ in $\Bbb Z^2$;
there are three such lattices.
Likewise $A$ and $B$ are right equivalent iff they have the same column space. So $A\sim B$ iff they have the same row and column spaces
and there are at most $3\times 3=9$ equivalence classes. I see no
reason why any given combination of row and column spaces cannot occur...
R integral domain, Q its field of fraction, M R-module with nontrivial annihilator. Is Ext(Q,M) always 0? | Let $a \in R $ non-zero with $aM=0$. Since Ext is linear in both variables, we have that multiplication by $a$ on $\operatorname{Ext}^i(Q,M) $ is both an isomorphism and the zero map (since it is an isomorphism on $Q$ and zero on $M$). Of course the Ext group must be zero itself then. |