How to estimate the parameters of a logistic differential equation from the values of its solution at times 0, 1 and 2? | $$n'(t)=kn(t)\left(m-n(t)\right)\Longleftrightarrow$$
$$\frac{n'(t)}{n(t)\left(m-n(t)\right)}=k\Longleftrightarrow$$
$$\int\frac{n'(t)}{n(t)\left(m-n(t)\right)}\space\text{d}t=\int k\space\text{d}t\Longleftrightarrow$$
$$\frac{\ln\left|\frac{n(t)}{m\left(n(t)-m\right)}\right|}{m}=kt+\text{C}\Longleftrightarrow$$
$$\left|\frac{n(t)}{m\left(n(t)-m\right)}\right|=\text{C}e^{mkt}\Longleftrightarrow$$
$$\left|\frac{1}{m-\frac{m^2}{n(t)}}\right|=\text{C}e^{mkt}\Longleftrightarrow$$
$$\left|m-\frac{m^2}{n(t)}\right|=\frac{\text{C}}{e^{mkt}}\Longleftrightarrow$$
$$\left|1-\frac{m}{n(t)}\right|=\frac{\text{C}}{|m|e^{mkt}}$$
Now solve for $\text{C}$ using $n(0)=65$:
$$\left|1-\frac{m}{65}\right|=\frac{\text{C}}{|m|e^{mk\cdot0}}\Longleftrightarrow\left|1-\frac{m}{65}\right|=\frac{\text{C}}{|m|}\Longleftrightarrow\text{C}=|m|\left|1-\frac{m}{65}\right|$$
So, we get:
$$\left|1-\frac{m}{n(t)}\right|=\frac{\left|1-\frac{m}{65}\right|}{e^{mkt}}$$
Now, since we know that $n(1)=98$ and $n(2)=142$, we get the system:
$$
\begin{cases}
\left|1-\frac{m}{98}\right|=\frac{\left|1-\frac{m}{65}\right|}{e^{mk}}\\
\left|1-\frac{m}{142}\right|=\frac{\left|1-\frac{m}{65}\right|}{e^{2mk}}
\end{cases}\Longleftrightarrow
\begin{cases}
k=\frac{\ln\left(\frac{\left|1-\frac{m}{65}\right|}{\left|1-\frac{m}{98}\right|}\right)}{m}\\
\left|1-\frac{m}{142}\right|=\frac{\left|1-\frac{m}{65}\right|}{e^{2mk}}
\end{cases}
$$
Substituting one equation into the other gives:
$$\left|1-\frac{m}{142}\right|=\frac{\left|1-\frac{m}{65}\right|}{\exp\left[2\cdot m\cdot\frac{\ln\left(\frac{\left|1-\frac{m}{65}\right|}{\left|1-\frac{m}{98}\right|}\right)}{m}\right]}\Longleftrightarrow\left|1-\frac{m}{142}\right|=\frac{\left|1-\frac{m}{98}\right|^2}{\left|1-\frac{m}{65}\right|}$$
And the solutions we get are:
$$m=\frac{8134}{17},\qquad k=\frac{17\ln\left(\frac{213}{130}\right)}{8134}$$
Now, since $mk=\ln\left(\frac{213}{130}\right)$, we get that:
$$\left|1-\frac{8134}{17n(t)}\right|=\frac{7029}{1105\left(\frac{213}{130}\right)^{t}}$$ |
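As a numerical sanity check, the fitted parameters can be plugged back into the implicit solution. Since the system gives $k=\ln(213/130)/m$, the factor $e^{mkt}$ is simply $(213/130)^t$, and the three data points should be reproduced exactly. A short script (pure Python, variable names are just for illustration):

```python
import math

m = 8134 / 17        # fitted carrying capacity
r = 213 / 130        # e^{m k}, since k = ln(213/130) / m
c = 7029 / 1105      # |1 - m/65|, the constant fixed by n(0) = 65

def n(t):
    # Since n(t) < m here, |1 - m/n(t)| = m/n(t) - 1 = c / r**t,
    # which solves to n(t) = m / (1 + c / r**t).
    return m / (1 + c / r**t)

print(n(0), n(1), n(2))  # reproduces 65, 98, 142
```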
Fourier series over non-symmetric interval | For a Fourier series you do not need symmetric intervals. You can just scale and translate the trigonometric polynomials in such a way that they become periodic with respect to any given (bounded) interval. If you do that you have to ensure that you also get an orthonormal system by introducing appropriate factors in front of either your Fourier integral or the series or both (like $1/\pi$ in the classical case).
In your particular example just start out with $\sin$ and translate and scale it so that it vanishes at $-3, -1/2$ and $2$.
Alternatively, translate and scale the function which you need to develop into a series to $(-\pi, \pi)$, calculate its Fourier series and translate back. |
Beginner real analysis help | Use the comparison test. I think we can agree that $\frac{4^n}{n! + 3^n} < \frac{4^n}{n!}$ for all $n$, correct? Then we have that
$$
\sum_{n=1}^\infty \frac{4^n}{n! + 3^n} \;\; < \;\; \sum_{n=1}^\infty \frac{4^n}{n!}.
$$
The series on the right converges to something quite familiar. |
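In fact the right-hand series is the Maclaurin series of $e^x$ at $x=4$ with the $n=0$ term removed, so it converges to $e^4-1$. A quick numerical check of both partial sums (the terms decay factorially, so $50$ terms is more than enough):

```python
import math

bounded = sum(4**n / (math.factorial(n) + 3**n) for n in range(1, 51))
expo    = sum(4**n / math.factorial(n)          for n in range(1, 51))

print(expo)            # very close to e^4 - 1
print(bounded < expo)  # True: the term-by-term comparison holds
```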
How do I show that a continuous partial derivative makes the function differentiable? | Suppose that $\frac{\partial f}{\partial x}$ is continuous. We have:
\begin{equation}
\begin{split}
&f(a_1+h_1, a_2+h_2)-f(a_1, a_2) = f(a_1+h_1, a_2+h_2)-f(a_1, a_2+h_2)+f(a_1, a_2+h_2) -f(a_1, a_2) \\
&= \int_0^{h_1} \frac{\partial f}{\partial x}(a_1+t, a_2+h_2)\mathrm{d}t + h_2\frac{\partial f}{\partial y}(a_1, a_2)+o(h_2) \\
& = \int_0^{h_1} \left( \frac{\partial f}{\partial x}(a_1+t, a_2+h_2) -
\frac{\partial f}{\partial x}(a_1, a_2) \right) \mathrm{d}t + h_1\frac{\partial f}{\partial x}(a_1,a_2) + h_2\frac{\partial f}{\partial y}(a_1, a_2)+o(h_2)
\end{split}
\end{equation}
The continuity of $\frac{\partial f}{\partial x}$ implies its uniform continuity on the compact rectangle $[a_1, a_1+h_1]\times[a_2,a_2+h_2]$. So, for small enough $(h_1, h_2)$, we have:
\begin{equation}
\bigg\vert \frac{\partial f}{\partial x}(a_1+t, a_2+h_2) -
\frac{\partial f}{\partial x}(a_1, a_2) \bigg\vert < \epsilon, \mbox{ for } t \in [0, h_1]
\end{equation}
Finally:
\begin{equation}
f(a_1+h_1, a_2+h_2)-f(a_1, a_2) = h_1\frac{\partial f}{\partial x}(a_1,a_2) + h_2\frac{\partial f}{\partial y}(a_1, a_2)+ o(h_1) +o(h_2)
\end{equation}
And we conclude that $f$ is differentiable at $(a_1,a_2)$. |
Open sets in the infinite product of $\mathbb{N}^\infty$ | The standard base for the product topology on $\Bbb N^\infty$ is all sets of the
form $\prod_{n \in \omega} U_n$ where all $U_n \subseteq \Bbb N$ and all but finitely many $U_n$ are equal to $\Bbb N$ (i.e. $\{n \in \omega: U_n \neq \Bbb N\}$ is a finite set).
But we can WLOG assume that the finitely many proper subsets are all singletons and occur as the initial part of $\omega$ and then we still have a base, which is smaller (call such basic open sets "simple basic sets" for now):
Let $U=\prod_n U_n$ be a basic subset as described in the beginning and let $(x_n) \in U$. Let $\{n \in \omega: U_n \neq \Bbb N\}$ have $N$ as its maximum element.
Then define $V:= \prod_n V_n$ to be the basic subset with $V_n = \{x_n\}$ for all $n \le N$ and $V_n = \Bbb N$ otherwise. This by construction is a simple basic set and $(x_n) \in V$ is obvious and so is $V \subseteq U$.
As we can do this for all $(x_n) \in U$, every standard basis set is a union of "simple basic sets" and so all open sets are unions of "simple basic sets", and these are precisely the sets of the form
$B((a_1,\ldots,a_n)) := \{(x_n) \in \Bbb N^\omega: (x_n)_n \text{ extends } a_1, a_2, \ldots a_n\}$
So you can say that $O$ in the product is open iff for each $x \in O$ there is some initial part $x|N$ of $x$, such that all sequences that extend $x|N$ are also in $O$, which is the more precise version of your "extension characterisation". |
How to calculate the "upward" angle of a Regular Icosahedron vertex away from a tangential plane | Suppose $r$ is the outer radius of a regular pentagon with edge length 1. (It is easiest to keep the outer edge lengths constant to match the edges of the icosahedron.) Then it is not too hard to see that $\sin 36^\circ = \frac{1/2}{r}$, so $r = \frac1{2\sin 36^\circ}$.
Now looking at the pentagonal pyramid, you have a vertical right triangle with $r$ as the base, and hypotenuse $1$ as this is another edge of the icosahedron. The angle you are looking for has $\cos\alpha= \frac r1$. So $\alpha = \cos^{-1}\left(\frac1{2\sin 36^\circ}\right) \approx 31.7175^\circ$. |
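A quick numeric check, with degrees handled explicitly:

```python
import math

r = 1 / (2 * math.sin(math.radians(36)))   # circumradius of a unit-edge pentagon
alpha = math.degrees(math.acos(r))         # angle between slant edge and base plane
print(alpha)  # about 31.72 degrees
```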
Determine the greatest common divisor of polynomials $x^2+1$ and $x^3+1$ in $\Bbb Q[X]$. | The Euclidean algorithm steps are
\begin{align*}
x^3+1&=x (x^2+1) + (-x+1)\\
x^2+1&=-x (-x+1)+ (x+1)\\
-x+1&=-(x+1)+2
\end{align*}
So the GCD is $1$.
We can go back up
\begin{align*}
2 &= (-x+1) +(x+1)\\
&= (-x+1) + (x^2+1)+x (-x+1)\\
&= (-x+1) (x+1)+(x^2+1)\\
&= (x^3+1 - x(x^2+1))(x+1)+(x^2+1)\\
&= (x+1) (x^3+1) + (-x^2-x+1) (x^2+1)
\end{align*}
and so $1 = \frac{1}{2} [(x+1) (x^3+1) + (-x^2-x+1) (x^2+1)]$ |
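The final Bézout identity can be verified by expanding the products; a small polynomial-arithmetic check (coefficients stored lowest degree first):

```python
def pmul(p, q):
    # multiply polynomials given as coefficient lists (lowest degree first)
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def padd(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0) for i in range(n)]

# (x+1)(x^3+1) + (-x^2 - x + 1)(x^2+1) should equal the constant 2
lhs = padd(pmul([1, 1], [1, 0, 0, 1]), pmul([1, -1, -1], [1, 0, 1]))
print(lhs)  # [2, 0, 0, 0, 0]
```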
Why is this not a choice function from subsets of $\Bbb R$? | No, it is not a choice function, because it is not defined for all non-empty subsets of $\mathbb R$. For instance, $f\bigl((1,2]\bigr)$ is not defined. |
Difference between "Lower sum" and "Riemann lower sum" | There's no difference between a "lower sum" and a "lower Riemann sum." There is a difference between a "left (Riemann) sum" and a "lower (Riemann) sum." In particular, the left Riemann sum uses the leftmost point of each interval, while the lower Riemann sum uses the point on each interval at which the function is at a minimum.
(Of course for an increasing function the two are the same.) |
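A small numeric illustration on $f(x)=(x-1)^2$ over $[0,2]$, which decreases then increases, so the two sums differ. (The partition is chosen so each subinterval's minimum sits at an endpoint, letting us take `min` of the endpoint values.)

```python
f = lambda x: (x - 1) ** 2
a, b, n = 0.0, 2.0, 4
dx = (b - a) / n
xs = [a + i * dx for i in range(n + 1)]

left  = sum(f(xs[i]) * dx for i in range(n))                     # leftmost points
lower = sum(min(f(xs[i]), f(xs[i + 1])) * dx for i in range(n))  # interval minima

print(left, lower)  # 0.75 0.25
```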
Non-Standard Deviation | Try sorting the numbers and see if it helps. |
Is the Jacobian a divergence of some vector field? (manifolds) | If $M$ is a compact $d$-dimensional manifold with nonempty boundary, then $H^d(M) = 0$, and so any $d$-form is exact. Thus, $\det df\,dV = d\eta$ for some ($d-1$)-form $\eta$. Consider the $1$-form $\omega = \star\eta$. Using the metric, $\omega$ corresponds to a vector field $X$ and $\text{div}\,X\,dV = d{\star}\omega = \pm d\eta = \pm \det df\,dV$, as required (up to a sign). |
The rank of a linear transformation $A: E \longrightarrow F$ is the largest integer such that the pullback of A is not zero | There's a nicely written proof of this question in: Relation between rank of a linear transformation and pullback ($r$-linear forms.)
As for your answer, I think the general idea is correct and I would only like to add:
1 - After proving that $r < m + 1$ you say:
"We know too that every set $\{v_i \in E \}$ with at most $m$ elements is linearly independent"
This doesn't always hold (just take a set $\{v_i \in E \}$ such that $v_1 = v_2$). But I do believe that we can choose a convenient set $\{v_i \in E \}$ such that this holds and this is enough for our purposes here.
2 - You also say:
"... then given $\omega \in$ ..."
Again, here it is not necessary that what you say holds for every $\omega$. It is enough to find at least one form $\omega$ that does not vanish to conclude what you want (check the link to see how to find a possible $\omega$ that fulfils the conditions we want). |
Show that $\| u - v \|^2 = \| u - P_U(v) \|^2 + \| v - P_U(v) \|^2 $ and minimize $d(u, v)$ | In general, if $x$ and $y$ are orthogonal vectors in an inner product space, then
$$
\|x \pm y\|^{2} = \|x\|^{2} + \|y\|^{2}.
$$
(This is often called the Pythagorean Theorem, for obvious reasons.)
The vector $u - P_{U}(v)$ lies in $U$, while $v - P_{U}(v)$ is orthogonal to $U$ (that's essentially the definition of the orthogonal projection of $v$ to $U$), and
$$
u - v = \bigl(u - P_{U}(v)\bigr) - \bigl(v - P_{U}(v)\bigr).
$$
Your first conclusion follows immediately from the Pythagorean Theorem by taking $x = u - P_{U}(v)$ and $y = v - P_{U}(v)$.
As a consequence,
$$
\|v - P_{U}(v)\|^{2} \leq \|v - P_{U}(v)\|^{2} + \|u - P_{U}(v)\|^{2} = \|u - v\|^{2}\quad\text{for all $u$ in $U$,}
$$
with equality if and only if $u = P_{U}(v)$. In words, no element of $U$ is closer to $v$ than $P_{U}(v)$. (If that isn't apparent at first glance (which is often the case the first time you see this), inspect each term and think about what it means.)
For (ii), you want to compute $P_{U}(v)$ for the given subspace $U$ and vector $v$. Luckily, the basis of $U$ that's been provided is orthonormal: Each of $u_{1}$ and $u_{2}$ is a unit vector, and $\langle u_{1}, u_{2}\rangle = 0$. You probably have a formula for orthogonal projection to a subspace $U$ when an orthonormal basis of $U$ is given...?
If that formula doesn't sound familiar, you can derive it on the spot: There exist scalars $c_{1}$ and $c_{2}$ such that
$$
P_{U}(v) = c_{1} u_{1} + c_{2} u_{2}.
$$
Since $v - P_{U}(v)$ is orthogonal to $U$, you have
$$
\langle v - (c_{1} u_{1} + c_{2} u_{2}), u_{1}\rangle = 0,\qquad
\langle v - (c_{1} u_{1} + c_{2} u_{2}), u_{2}\rangle = 0.
$$
Since $\{u_{1}, u_{2}\}$ is orthonormal, the preceding equations can be solved easily (almost by inspection) for $c_{1}$ and $c_{2}$ by distributing the inner product. |
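For an orthonormal basis the formula collapses to $c_i = \langle v, u_i\rangle$. A small check in $\mathbb{R}^3$ (the basis and vector below are made up for illustration):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

u1, u2 = (1.0, 0.0, 0.0), (0.0, 0.6, 0.8)   # orthonormal: unit length, dot = 0
v = (2.0, 3.0, 1.0)

c1, c2 = dot(v, u1), dot(v, u2)             # coefficients from the inner products
P = tuple(c1 * a + c2 * b for a, b in zip(u1, u2))

residual = tuple(x - p for x, p in zip(v, P))
print(dot(residual, u1), dot(residual, u2))  # both ~0: v - P is orthogonal to U
```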
Where can I find rigorous statements about the spectral decomposition of reductive groups? | This is the main aim of the book "Spectral decomposition and Eisenstein series" by Moeglin and Waldspurger, which is based on the original work of Langlands but updated to use more modern notation and techniques. This has a very detailed account of the spectral decomposition theorems in chapter VI. |
Integer coefficient polynomial $p(x)$ has no integer roots if $\,p(0)$ and $p(1)$ are odd [Parity Root Test] | We know
$$f(0) \equiv 1 \pmod{2}$$
This means that the constant term of $f$ is an odd integer.
We also know
$$f(1) \equiv 1 \pmod{2}$$
This means that the sum of the coefficients of $f$ is odd.
Consider $f(x)$ for a general $x \in \mathbb{Z}$.
Case 1: $x$ is even.
If $x$ is even, all of the positive powers of $x$ are also even, and since any integer multiple of an even number is even, every term of $f$ except the (odd) constant term is even. Thus, $f(x)$ must be odd.
Case 2: $x$ is odd.
If $x$ is odd, all of the powers of $x$ must also be odd. We have a bunch of odd numbers $x^i$ (including $x^0$), and we are adding multiples of them together. We know that the sum of the coefficients is odd; this means we have, in total, an odd number of $x^i$s. The sum of an odd number of odd numbers must be odd, and so $f(x)$ is odd.
We've proven that $f(x)$ is odd for every integer $x$. In particular, $f(x)$ is nonzero. |
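The argument is easy to spot-check by brute force; for instance $f(x)=x^2+x+1$ has $f(0)=1$ and $f(1)=3$, both odd:

```python
coeffs = [1, 1, 1]  # x^2 + x + 1, lowest degree first

def f(x):
    return sum(c * x**k for k, c in enumerate(coeffs))

assert f(0) % 2 == 1 and f(1) % 2 == 1              # the parity hypothesis
# f(x) is odd (hence nonzero) at every integer in a test range
assert all(f(x) % 2 == 1 for x in range(-1000, 1001))
```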
$L^2$ regularity of a convolution with Newtonian potential | In "because $w\in L^2$ is smooth" the emphasis is on smooth. The convolution of $w$ with an $L^1$ function is as smooth as $w$ itself (assuming nothing bad comes from the tail at infinity). This is because we can put the derivative on $w$:
$$
\nabla_x \int w(y)\phi(x-y)\,dy = \nabla_x \int w(x-y)\phi(y)\,dy = \int \nabla_x w(x-y)\phi(y)\,dy
$$
and if the last integral converges absolutely, the formal differentiation is justified.
In your case $\phi(x)=1/|x|$ is not globally integrable, but it is locally integrable. So, let's introduce a smooth cutoff function $\chi$ such that
$\chi(x)=1$ when $|x|\le 1$
$\chi(x)=0$ when $|x|\ge 2$
Then split the kernel as $\phi_1(x) = \chi(x)/|x|$ and $\phi_2(x) = (1-\chi(x))/|x|$.
Observe that $\phi_2$ has bounded derivatives of all orders, with appropriate decay at infinity: $D^k \phi_2 = O(1/|x|^{k+1})$. Thus, you can write
$$
D^k_x \int w(y)\phi_2(x-y)\,dy =\int w(y) D^k_x \phi_2(x-y)\,dy \tag{1}
$$
and estimate the integral accordingly.
With $\phi_1$, differentiate $w$ itself,
$$
D^k_x \int w(y)\phi_1(x-y)\,dy = \int D^k_x w(x-y)\phi_1(y)\,dy \tag{2}
$$
The integral converges because $\phi_1$ is compactly supported and $w$ is smooth. However, the decay of $(2)$ as $x\to\infty$ is unclear without an assumption on the derivatives of $w$. I think the authors may have meant that $w$ decays at infinity together with its first and second derivatives. |
How to determine the Mori cone of curves for $\mathbb{P}^2$ blowing up a point? | In the specific case you ask about, the answer is that $\overline{NE}(X)$ is spanned by $E$ and $H-E$. One way to see this is that every irreducible curve $C$ other than $E$ itself must satisfy $E \cdot C \geq 0$, and every irreducible curve $C$ must satisfy $(H-E) \cdot C \geq 0$ since the class $H-E$ is basepoint-free.
As you suggest, whenever we blow up at most $n+1$ general points in $\mathbf P^n$, the result is a toric variety $X$, and in this situation there is a straightforward way to compute $\overline{NE}(X)$.
Once we start to blow up more points, things get much trickier. Starting from $\mathbf P^2$ we can blow up at most 8 points and still end up with a del Pezzo surface; the Cone Theorem then guarantees that $\overline{NE}(X)$ is a finite rational polyhedral cone, and we can actually compute it in all these cases. (This is a good exercise so I won't write the answer; it is written in many places.) For 9 points we no longer have a del Pezzo surface, and in fact the cone no longer has finitely many extremal rays, but we can still give a good description.
(Similar results are available for blowups of $\mathbf P^3$ in at most 8 points, and in higher dimensions.)
That's about as good as it gets, though. Remarkably, as far as I am aware even for the blowup of $\mathbf P^2$ in 10 points the complete description of $\overline{NE}(X)$ is still not known, despite much work.
For more information about the limit of what is known, you can search for recent work on the so-called SHGH conjecture and Nagata's conjecture. |
Connection between covariant and contravariant components of a tensor | I guess you are interested in the connection between covariant and contravariant tensors on a Riemannian manifold. Let $(M,g)$ be a Riemannian manifold and let $\alpha$ be a $(0,1)$-covariant tensor (a section of $T^*M$). Then there exists a unique $(1,0)$-contravariant tensor $v$ (a section of $TM$) such that $\alpha=g(v,\cdot)$.
Example: $g(grad\, f,\cdot)=df$
So for any $(r,s)$-tensor $T$ (a section of $\otimes^r TM\otimes^s T^*M$) you can find a unique $(s,r)$-tensor $T^*$ associated with $T$ via the metric $g$. |
How is this integral being evaluated? (Please don’t solve; an explanation would be appreciated) | This is a keen observation, and you're correct that the last inequality requires justification beyond the linearity of Riemann integration, exactly because the integrands of the two separate integrals both have singularities at $x = \frac{\pi}{2}$. The previous integral,
$$\int_0^{\pi} (\sec^2 x - \sec x \tan x) \,dx ,$$
does not have this problem. Now, it's true that the integrand isn't defined at $x = \frac{\pi}{2}$, but the integrand remains bounded near that point, so this doesn't affect the value of the Riemann integral. In fact, this singularity is removable: We can assign the integrand a value at that point, namely $0$, to make it a function continuous on the entire interval of integration, which is what we need to apply the Fundamental Theorem of Calculus. We'll do exactly this to evaluate (and more to the point justify the evaluation of) the original integral.
Even though we can't split up the integral by linearity---at least not without some other justification, like introducing improper integrals, which OP hinted in the comments he hadn't encountered yet---we can still look for antiderivatives of each summand separately. These are both elementary: $(\tan x)' = \sec^2 x$ and $(\sec x)' = \sec x \tan x$, so by linearity of the derivative, $\tan x - \sec x$ is an antiderivative of the original integrand, at least everywhere (in the interval of integration) other than $x = \frac{\pi}{2}$.
We again have to wrangle with the fact that $\tan x - \sec x$ isn't defined at $x = \frac{\pi}{2}$, and again this singularity is removable. We can check this quickly by rewriting $\tan x - \sec x = (\tan x - \sec x) \cdot \frac{\tan x + \sec x}{\tan x + \sec x} = -\frac{1}{\tan x + \sec x}$, which tends to $0$ as $x \to \frac{\pi}{2}$. In order to apply the usual version of the F.T.C., we also need to check that the function $\tan x - \sec x$ (with the singularity removed) is differentiable at $x = \frac{\pi}{2}$, which I'll leave as an exercise. With this in hand, we have that
$$(\tan x - \sec x)' = \sec^2 x - \sec x \tan x ,$$ where now both $\tan x - \sec x$ and $\sec^2 x - \sec x \tan x$ represent the functions with the respective singularities removed, and so the F.T.C. gives
$$\int_0^{\pi} (\sec^2 x - \sec x \tan x) \,dx = (\tan x - \sec x) \vert_0^{\pi} .$$ |
Help Needed: Statistics | $2.58$ is a critical value for the standard normal distribution corresponding to the stated significance level of $\alpha = 0.01 = 1\%$ for the test. Specifically, it is the value for which $$\Pr[|Z| > 2.57583] \approx 0.01,$$ where $Z \sim \operatorname{Normal}(\mu = 0, \sigma^2 = 1)$.
In terms of the figure shown in the solution, what this means is that the area under the bell curve in the two tails (that is, the area under the curve for values larger than $2.58$ and smaller than $-2.58$) is approximately $0.01$ or $1\%$ of the total area under the curve. Because the value of the test statistic you calculated is less than $-2.58$, the probability of obtaining data at least as extreme as yours, under the supposition that the true proportion is $p = 0.45$, is smaller than $1\%$. This is considerable evidence that the supposition is incorrect; hence the true proportion is unlikely to be $0.45$.
How do we calculate such critical values for a given $\alpha$? Note that if the test is two-sided, then what we want is $$z_{\alpha/2}^* = \Phi^{-1}(1 - \alpha/2),$$ where $$\Phi(z) = F_Z(z) = \Pr[Z \le z] = \int_{t = -\infty}^z \frac{1}{\sqrt{2\pi}} e^{-t^2/2} \, dt$$ is the cumulative distribution function of the standard normal distribution, and gives us the probability that a standard normal random variable is less than or equal to $z$. So the inverse CDF is the quantile function, and we call the critical value $z_{\alpha/2}^*$ the upper $100 \alpha/2$ percentile of the standard normal distribution. Tables of common critical values are given for various $\alpha$ levels and are also found in normal distribution tables. |
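The critical value itself can be computed with the standard library's `statistics.NormalDist` (no external packages needed):

```python
from statistics import NormalDist

alpha = 0.01
z_crit = NormalDist().inv_cdf(1 - alpha / 2)   # upper alpha/2 quantile
print(round(z_crit, 5))  # 2.57583, usually quoted as 2.58

# two-tailed area beyond +/- z_crit recovers alpha
two_tail = 2 * (1 - NormalDist().cdf(z_crit))
```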
How to construct a partial ordering from Peano's 5 Axioms? | (Answering my own question)
Thanks all for your help, but I didn't find my construction here to be very workable. I was trying to avoid using an addition function to "simplify things." That was probably a mistake.
Using Peano's Axioms, some basic set theory and a form of natural deduction, I formally constructed (in order) the function $+$ and the relations $\leq$ and $\lt$ as sets of ordered n-tuples such that, for all $x, y \in N$ we have:
$x+0 = x$
$x+S(y) = S(x+y)$
$x\leq y \iff \exists z\in N: x+z=y$
$x \lt y \iff x\leq y ~\land ~ x\neq y$
Everything then fell into place. I was just now able to derive the usual fundamental properties of $+$, $\leq$ and $\lt$ for a larger project that I am working on. |
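The recursive definitions above translate directly into code; here is a small sketch over Python's built-in naturals, using `y - 1` as the predecessor of a successor:

```python
def add(x, y):
    # x + 0 = x;  x + S(y) = S(x + y)
    return x if y == 0 else add(x, y - 1) + 1

def le(x, y):
    # x <= y  iff  there exists z with x + z = y
    return any(add(x, z) == y for z in range(y + 1))

def lt(x, y):
    return le(x, y) and x != y

print(add(3, 4), le(2, 5), lt(3, 3))  # 7 True False
```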
circle sliding along the inside of another circle is a straight line | You have made two mistakes:
If the centre of the smaller circle is rotating at a rate $\omega$ radians per second relative to the centre of the larger circle, then the rate of rotation of the smaller circle relative to its own centre must be $-\omega$ in order for the point of contact between the two circles not to slip. This means that the angle in your steps 3 and 4 should be $-\Theta$ instead of $2\Theta$.
You have missed out a factor of $\frac R2$ in moving from step 3 to step 4.
With these two corrections you should find that the rate of rotation of a point on the smaller circle relative to the centre of the larger circle is $\omega - \omega = 0$. This means that each point on the circumference of the smaller circle moves along a straight line which is a diameter of the larger circle. If you show the time dependence explicitly by replacing $\Theta$ by $\omega t$ then you will see this motion is simple harmonic motion. |
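With the corrected rotation rate the claim is easy to check numerically: parametrise the centre of the small circle by $\Theta=\omega t$ and rotate the marked point by $-\Theta$ about that centre.

```python
import math

R = 2.0  # radius of the large circle; the small circle has radius R/2
pts = []
for i in range(200):
    theta = 0.05 * i
    cx, cy = (R / 2) * math.cos(theta), (R / 2) * math.sin(theta)  # small-circle centre
    # the small circle turns by -theta about its own centre (no slipping)
    pts.append((cx + (R / 2) * math.cos(-theta), cy + (R / 2) * math.sin(-theta)))

max_dev = max(abs(y) for _, y in pts)
print(max_dev)  # ~0: the marked point stays on a diameter, at x = R*cos(theta)
```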
Find the volume of the solid formed by rotating the | $$V=\pi\int_0^\sqrt{11}(11y)^2-(y^3)^2dy$$ |
$a_{ij}=w_j^Tw_i$, Matrix $A=(a_{ij})$ is positive semi-definite | Hint:
Let $W =\begin{bmatrix} w_1 & \ldots & w_n \end{bmatrix}$; then we can write $A=W^TW$. |
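Since $x^{\mathsf T}Ax = x^{\mathsf T}W^{\mathsf T}Wx = \lVert Wx\rVert^2 \ge 0$, positive semi-definiteness can be spot-checked numerically (the vectors below are arbitrary):

```python
import random

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

ws = [[1.0, 2.0], [3.0, -1.0], [0.5, 0.5]]       # the vectors w_1, w_2, w_3
A = [[dot(wj, wi) for wj in ws] for wi in ws]    # Gram matrix a_ij = w_j^T w_i

random.seed(0)
quads = []
for _ in range(1000):
    x = [random.uniform(-1, 1) for _ in ws]
    quads.append(sum(x[i] * A[i][j] * x[j] for i in range(3) for j in range(3)))

print(min(quads) >= -1e-12)  # True: x^T A x = ||W x||^2 is never negative
```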
Special properties of binary Galois field | This depends on Newton's identities relating certain symmetric polynomials to each other (alternatively you can just crank this out with pencil-and-paper work).
If we denote by $e_1,e_2,e_3,e_4$ the elementary symmetric functions, i.e. the coefficients of
$$
P(x)=(x-\alpha_1)(x-\alpha_2)(x-\alpha_3)(x-\alpha_4)=x^4+e_1x^3+e_2x^2+e_3x+e_4,
$$
and by $p_i, i\in\Bbb{N},$ the power sum
$$
p_i=\alpha_1^i+\alpha_2^i+\alpha_3^i+\alpha_4^i,
$$
then by the so called Freshman's dream (in characteristic two) we get
$$
\begin{aligned}
p_1&=e_1,\\
p_2&=\alpha_1^2+\alpha_2^2+\alpha_3^2+\alpha_4^2=(\alpha_1+\alpha_2+\alpha_3+\alpha_4)^2=e_1^2,\\
p_4&=e_1^4,
\end{aligned}
$$
and then a couple of applications of
Newton's identities eventually give that
$$
p_5=e_1^5+e_2e_1^3+e_3e_1^2+e_2^2e_1+e_2e_3+e_1e_4.\qquad(*)
$$
Your system $(2)$ states that $p_1=e_1=0$ and that $p_5=0$. Plugging in
$e_1=0$ into $(*)$ then gives the simple consequence
$$
0=p_5=e_2e_3.
$$
So we can conclude that if $(\alpha_1,\alpha_2,\alpha_3,\alpha_4)$ is a solution of $(2)$ then either $e_3=0$ or $e_2=0$.
Let us first look at the case $e_3=e_1=0$. Then we have
$$
\begin{aligned}
P(x)&=x^4+e_1x^3+e_2x^2+e_3x+e_4\\
&=x^4+e_2x^2+e_4\\
&=(x^2+\sqrt{e_2}x+\sqrt{e_4})^2,
\end{aligned}
$$
by the Freshman's dream and the fact that in any finite field of characteristic two every element has a (unique) square root. This shows that all the roots of $P(x)$ are double roots contradicting the assumption that the $\alpha_i$s were to be distinct. Observe that this argument works equally well for the field $A$ as well as the field $B$.
The other case $e_2=e_1=0$ is different. This time
$$
P(x)=x^4+e_3x+e_4.
$$
Let us fix the field $K=GF(2^m)$.
The following trick from the theory of linearized polynomials allows us to make progress. Let $L(x)=x^4+e_3x$. Again by Freshman's dream we have
$$
L(a+b)=L(a)+L(b)\qquad(**)
$$
for all $a,b\in K$. If $\alpha_i,i=1,2,3,4,$ are four distinct zeros of $P(x)$, then $L(\alpha_i)=P(\alpha_i)+e_4=e_4$. By $(**)$ this implies that
for all $i=2,3,4$ we have
$$
L(\alpha_i-\alpha_1)=L(\alpha_i)+L(\alpha_1)=e_4+e_4=0.
$$
As $L(0)=0$ the (linearized) polynomial $L(x)$ also has four zeros.
But
$$
L(x)=x(x^3+e_3),
$$
so the non-zero roots of $L(x)$ are exactly the cubic roots of $e_3$. This explains why we get different behavior according to parity of $m=[K:GF(2)]$.
Namely:
If $m$ is odd, then $3\nmid 2^m-1$. As the group $K^*$ is cyclic of order $2^m-1$ this implies that every element of $K$ has a unique cube root in $K$. Applying this to $e_3\in K$ implies that $L(x)$ has exactly two roots in $K$, and therefore $P(x)$ cannot have four roots in $K$ either.
On the other hand, if $m$ is even, then $3\mid 2^m-1$, and the cubes of $K^*$ form a subgroup of index three - each with three cube roots. In this case the polynomial $L(x)$ has four distinct roots in $K$ whenever $e_3$ is a non-zero cube in $K$. Therefore for some choice of $(e_3,e_4)$ the polynomial $P(x)$ has the required four distinct roots. Consequently system $(2)$ also has solutions of the required type. An easy example is when $\alpha_i,i=1,2,3,4,$ range over the elements of the subfield $GF(4)$. In that case $\alpha_i^5=\alpha_i^2$ and it is easy to check that you get a solution. |
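The closing example is easy to verify directly: in $GF(4)=GF(2)[a]/(a^2+a+1)$ the four elements sum to zero, and $x^5=x^2$ for every $x$ (since $x^3=1$ for $x\neq 0$), so the fifth powers sum to $p_1^2=0$ as well. A small check with elements encoded as pairs $(c_0,c_1)$ for $c_0+c_1a$:

```python
def mul(p, q):
    # multiply in GF(4); reduce with a^2 = a + 1 over GF(2)
    c0 = (p[0] * q[0] + p[1] * q[1]) % 2
    c1 = (p[0] * q[1] + p[1] * q[0] + p[1] * q[1]) % 2
    return (c0, c1)

def power(p, n):
    r = (1, 0)
    for _ in range(n):
        r = mul(r, p)
    return r

def gf_sum(xs):
    xs = list(xs)
    return (sum(x[0] for x in xs) % 2, sum(x[1] for x in xs) % 2)

elems = [(0, 0), (1, 0), (0, 1), (1, 1)]
print(gf_sum(elems), gf_sum(power(e, 5) for e in elems))  # (0, 0) (0, 0)
```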
basic question - Volume of revolution of weird shape | The area looks kind of like a thin leaf in the first quadrant.
The graph of $y = \sqrt{x}$ is greater than $y = x^2$ in the first quadrant on the interval $[0,1]$.
So take your differential volume to be the washer of thickness $dx$ with outer radius $\sqrt{x}$ and inner radius $x^2$. Then your differential volume is $\pi\left((\sqrt{x})^2 - (x^2)^2\right)\,\mathrm{d}x$.
Integrate from $x=0$ to $x=1$ and you're done. |
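The exact value is $\pi\int_0^1 (x - x^4)\,dx = 3\pi/10$; a midpoint-rule check:

```python
import math

N = 100_000
dx = 1.0 / N
V = sum(
    math.pi * (x - x**4) * dx            # pi * (outer^2 - inner^2) dx
    for x in ((i + 0.5) * dx for i in range(N))
)
print(V)  # close to 3*pi/10
```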
Convergence of the arguments of a convergent sequence | You have the assumption that $z_n$ converges to $A$ and $A$ is not the origin or on the negative real axis. You need to show that given $\epsilon>0$ there is $N$ such that $n>N$ implies $\arg(a_n)$ is within epsilon of $\arg(A)$. It's equivalent to show there is a $\delta$ such that if $z$ is within $\delta$ of $A$ then $\arg(z)$ is within $\epsilon$ of $\arg(A)$.
To do this the actual $\delta$ you use might need to be the minimum of several quantities. So first we can draw a circle around $A$ of radius $\epsilon_1 \le \epsilon$, small enough that the entire disk inside the circle also stays away from the negative real axis. Next consider all the angles formed by points in this disk; a diagram reveals they will be within $\arcsin(\epsilon_1/|A|)$ of $\arg(A)$.
So if $z$ is within $\epsilon_1$ of $A$, then $|\arg(z)-\arg(A)|\le\arcsin(\epsilon_1/|A|)$, and shrinking $\epsilon_1$ if necessary so that $\arcsin(\epsilon_1/|A|)<\epsilon$ gives the desired closeness
$|\arg(z)-\arg(A)|<\epsilon$.
We do need to use monotonicity of $\arcsin(x)$ here at the end, and continuity, and $\arcsin(0)=0$. |
How to integrate $\int{ dx \over \sqrt{1 + x^2}}$ | Let $\displaystyle I = \int \dfrac{1}{\sqrt{1+x^2}}~\mathrm dx.~ $Let $\tan u = x.$ Then $\sec^2 u ~\mathrm du = \mathrm dx,~u = \arctan x$ and $$ \begin{align*}I & = \int \sec u~ \mathrm du = \log (\tan u + \sec u)+C\\
& = \log\left(x + \sqrt{1 + x^2}\right) + C~.
\end{align*}$$
Feel free to ask questions if anything is unclear. |
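One can sanity-check the antiderivative numerically with a central difference:

```python
import math

F = lambda x: math.log(x + math.sqrt(1 + x * x))   # the claimed antiderivative
f = lambda x: 1 / math.sqrt(1 + x * x)             # the integrand

h = 1e-6
devs = [abs((F(x + h) - F(x - h)) / (2 * h) - f(x)) for x in (-2.0, 0.0, 1.5)]
print(max(devs))  # ~0: F' matches the integrand at the sample points
```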
Vector vs. Matrix notation | The latter two notations are the same if one takes
$\langle a,b \rangle :=\begin{pmatrix}
a\\
b
\end{pmatrix}$. This notation is sometimes used to have vectors written in-line. In this case, what you have is a linear combination of the two standard vectors:
$$x\langle 1,0 \rangle+y\langle 0,1 \rangle
=x\begin{pmatrix}
1\\
0
\end{pmatrix} + y\begin{pmatrix}
0\\
1
\end{pmatrix} =
\begin{pmatrix}
x\\
y
\end{pmatrix} = \langle x,y \rangle$$
Notice that these are both notations for vectors. The first notation is for a 2-by-2 matrix, which is not a vector unlike the latter two expressions. You can write the latter two expressions using the matrix-vector product, which might be the source of your confusion. In the 2-by-2 case the matrix-vector product is defined
$$\begin{bmatrix}
a &b \\
c & d
\end{bmatrix}\begin{pmatrix}
x\\
y
\end{pmatrix} =x\begin{pmatrix}
a\\
c
\end{pmatrix}+y\begin{pmatrix}
b\\
d
\end{pmatrix}$$
From this it should be trivial to see what 2-by-2 matrix is used to give you your latter two expressions. |
Limit of something that does not depend on x at all? | Note that $f(z)$ can factored out as a constant because $f(z)$ does not depend on $x$. Therefore,
$$ \lim_{x\to a} f(z)=f(z)\left[\lim_{x\to a} 1\right]=f(z) $$ |
Assume $A,B,C,D$ are sets such that $|A|=|B|$ and $|C|=|D|$. Show that $|C^A|=|D^B|$ | Let $f:A\rightarrow B$ and $g:C\rightarrow D$ be bijections. Define $F:C^A\rightarrow D^B$ as
$$
F(h):=g\circ h\circ f^{-1}
$$
Edit: Since $A$ and $B$ have the same cardinality, there exists a bijection $f$ between them. In the same way you conclude that $g$ exists.
Now, we need to prove that $F$ is a bijection:
$F$ is one to one: Let $h,h'\in C^A$ such that $F(h)=F(h')$ so,
$$
\begin{array}{rcl}
g\circ h\circ f^{-1} & = & g\circ h'\circ f^{-1}\\
g^{-1}\circ g\circ h\circ f^{-1}\circ f & = &g^{-1}\circ g\circ h'\circ f^{-1}\circ f\\
h & = & h'
\end{array}
$$
$F$ is onto: Let $h\in D^B$. It is easy to see that $g^{-1}\circ h\circ f:A\rightarrow C$ and $F(g^{-1}\circ h\circ f)=h$.
With this, we have that $|C^A|=|D^B|$. |
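A finite sanity check: with $|A|=|B|=2$ and $|C|=|D|=3$, the map $F(h)=g\circ h\circ f^{-1}$ sends the $3^2$ functions $A\to C$ bijectively onto the $3^2$ functions $B\to D$ (the particular sets and bijections below are made up):

```python
from itertools import product

A, B = [0, 1], ['x', 'y']
C, D = [0, 1, 2], ['p', 'q', 'r']
f = {0: 'x', 1: 'y'}
f_inv = {v: k for k, v in f.items()}
g = {0: 'p', 1: 'q', 2: 'r'}

def F(h):  # h is a dict A -> C; returns the dict B -> D given by g . h . f^{-1}
    return {b: g[h[f_inv[b]]] for b in B}

hs = [dict(zip(A, vals)) for vals in product(C, repeat=len(A))]
images = {tuple(sorted(F(h).items())) for h in hs}
print(len(hs), len(images))  # 9 9: F hits 9 distinct elements of D^B
```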
uniform acceleration of two bodies | Take a frame in which X is stationary with origin at Y's initial position. Then Y has initial speed 10 and acceleration 6, so distance after time $t$ is $10t+3t^2$. We want that to be 400, so we take $t=10$. |
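Solving $10t+3t^2=400$ for the positive root confirms $t=10$:

```python
import math

# 3 t^2 + 10 t - 400 = 0, positive root via the quadratic formula
t = (-10 + math.sqrt(10**2 + 4 * 3 * 400)) / (2 * 3)
print(t)  # 10.0
```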
What is the weak closure of $C_c(X)$ in $C_b(X)$? | Due to Hahn-Banach, the closure of a subspace in the weak topology of a normed space is equal to the norm closure. Then it is easy to check (without using the Stone-Cech compactification as in Eric Wofsey's answer) that $\overline{C_c(X)}=C_0(X)$ (the space of all continuous functions which, for all $\varepsilon>0$, are less than $\varepsilon$ in absolute value outside some compact set). What you technically need is normality of locally compact Hausdorff spaces to find cut-off functions. |
Compute limit with the help of MacLaurin series expansion | Edit: The problem has been corrected. We keep the original solution, and add a solution to the corrected problem below.
Hint: We have
$$e^{(2x)^2}=1+(2x)^2+\frac{(2x)^4}{2!}+\frac{(2x)^6}{3!}+\cdots.$$
(Just write down the Maclaurin expansion of $e^t$, and everywhere in the expansion replace $t$ by $(2x)^2$.) Thus
$$\frac{e^{(2x)^2}-1}{x^2}=4+O(x^2).$$
Answer to corrected problem: This asks for
$$\lim_{x\to 0} \frac{e^{2x^2}-1}{x^2}.$$
We have
$$e^{2x^2}=1+2x^2+\frac{(2x^2)^2}{2!}+\frac{(2x^2)^3}{3!}+\cdots.$$
Subtract $1$, divide by $x^2$. We get $2$ plus a bunch of terms that have $x$'s in them. As $x\to 0$, these terms approach $0$, so the limit is $2$.
Another way: We used the Maclaurin expansion mechanically in the solution, because it is a nice tool that is very important to know about. But there are simpler ways. Let $t=x^2$. Then we want to find
$$\lim_{t\to 0^+}\frac{e^{2t}-1}{t}.$$
Let $f(t)=e^{2t}$. Note that $f(0)=1$. Then by the definition of the derivative,
$$\lim_{t\to 0} \frac{f(t)-f(0)}{t}=f'(0).$$
In our case, $f'(t)=2e^{2t}$, so $f'(0)=2$, and our limit is $2$. |
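Numerically, the ratio approaches $2$ as $x \to 0$:

```python
import math

vals = [(math.exp(2 * x**2) - 1) / x**2 for x in (0.1, 0.01, 0.001)]
print(vals)  # decreasing toward 2
```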
Solving sin 3x * cos 3x = sin 2x | $$\sin 3x .\cos 3x=\sin2x\\
\Rightarrow 2\sin 3x .\cos 3x=2\sin2x\\
\Rightarrow\sin6x=2\sin2x\\
\Rightarrow\sin3(2x)=2\sin2x\\
\Rightarrow 3\sin2x-4\sin^32x=2\sin2x\\
\Rightarrow 3\sin2x-4\sin^32x-2\sin2x=0\\
\Rightarrow \sin2x-4\sin^32x=0\\
\Rightarrow \sin2x(1-4\sin^22x)=0\\
$$
Then $$\sin 2x=0$$ or $$\sin2x=\pm \frac{1}{2}$$ |
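The solution families can be verified numerically against the original equation (each candidate below has $\sin 2x \in \{0, \pm\tfrac12\}$):

```python
import math

def lhs(x):
    return math.sin(3 * x) * math.cos(3 * x)

def rhs(x):
    return math.sin(2 * x)

candidates = []
for k in range(-3, 4):
    candidates.append(k * math.pi / 2)                     # sin 2x = 0
    for base in (math.pi / 12, 5 * math.pi / 12):          # sin 2x = 1/2
        candidates.append(base + k * math.pi)
    for base in (-math.pi / 12, -5 * math.pi / 12):        # sin 2x = -1/2
        candidates.append(base + k * math.pi)

max_err = max(abs(lhs(x) - rhs(x)) for x in candidates)
```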
What is the laymen meaning of "sampling from a binomial distribution"? | Surveys give the best example. Suppose, from a previous survey, it is known that 40% of students at a certain university are in favor of concealed campus carry laws. If you were to do the survey again now, and set out to ask 500 students, you would expect about 200 students to respond in favor. Recall that for a random variable to be distributed as binomial, you must have a fixed number of trials, the same outcomes each time, and independence of trials. We have that here. |
Models of set theory | I already answered a similar question at length on MathOverflow so let me try a short answer, suitable for this forum.
You mistakenly assume that it is the business of model theory or logic to somehow "produce" set theory out of nothing. Logic and model theory are just two branches of ordinary mathematics -- they do not precede the other branches, although they are a bit peculiar because their object of study is mathematics itself (its methods, its possibilities, its limitations). Therefore, model theorists and logicians are "allowed" to use all the usual tools of mathematics (numbers, sets, topological spaces, and so on).
When logicians speak of "foundations" of mathematics, they may give the impression that they are "building the cathedral" starting from its foundation. But it is much better to view what they are doing as a study of how the cathedral is built and how we can improve it. For instance, logicians have observed the fact that almost all of modern mathematics can be expressed in the language of set theory, but this does not mean that we need to "secure" set theory before the rest of mathematics can be done. History is my witness: geometry, algebra, and analysis existed before set theory and logic came along.
P.S. I will not deny that historically a primary objective of logic was in fact to secure foundations, especially the kind of logic that Bertrand Russell did in his Principia mathematica. However, at least since Gödel we have known that such an endeavor must fail. In any case, I am expressing here my personal view that attempts to secure absolute foundations of mathematics are a bit like attempts to prove there is god. Ultimately it comes down to an act of faith. |
What is a basis for the space of anti-symmetric $3\times 3$ matrices? | Hint 1: What value must the diagonal entries take? And if the value of the $(i,j)^{\text{th}}$ entry is $a$, what is the value of the $(j,i)^{\text{th}}$ entry?
Hint 2: Hover over the grey box below when you've thought a bit about Hint 1.
The bottom-left entries are determined by the upper-right entries, over which you have free choice. |
What is Amalgamation | It is better to illustrate first through examples: let $G_1=\langle x\rangle \cong \mathbb{Z}_4$ and $G_2=\langle y\rangle\cong \mathbb{Z}_6$. We see:
$\langle x^2\rangle$ is a subgroup of order $2$ in $G_1$;
$\langle y^3\rangle$ is a subgroup of order $2$ in $G_2$.
Both are isomorphic subgroups; but not "same" (coincide). We try to put the groups $G_1$ and $G_2$ in a bigger group where $\langle x^2\rangle$ and $\langle y^3\rangle$ will become "same"; and we do "that only".
Consider words in $x$ and $y$ with "only" condition that $x^4=1$ and $y^6=1$. This is nothing but "free" product of $\langle x\rangle$ and $\langle y\rangle$. Next, we pose "one more (and only one) condition" to identify groups $\langle x^2\rangle$ and $\langle y^3\rangle$. A simple condition is $x^2=y^3$; i.e. whenever a word in $x,y$ contains a term $x^2$ we can write there $y^3$. Thus, we have a new group:
$$\langle x,y\colon x^4, y^6, x^2=y^3\rangle.$$
In this group, the word $x^3y^3$ can be written as
$$x^3y^3=x\cdot x^2\cdot y^3=x\cdot y^3\cdot y^3=xy^6=x,$$
a simpler expression, because of the gluing of $x^2$ and $y^3$.
In other words, we have "glued" (amal-gum-ated) groups $\langle x\rangle$ and $\langle y\rangle$ by cyclic group of order $2$ in both.
Now instead of cyclic groups of order $4$ and $6$, take cyclic groups of order, say, $3$ and $8$. These orders are relatively prime, so the only subgroup they have in common is the identity. When we put these groups in a larger group, the identity element of the larger group is unique, so no "gluing" of the identity elements of $\mathbb{Z}_3$ and $\mathbb{Z}_8$ is needed. In this case, we simply get the "free" product of the two cyclic groups of orders $3$ and $8$; it has presentation:
$$\langle a,b\colon a^3, b^8\rangle.$$
With such simple examples, the "abstract" theory of amalgamation will become clear easily. |
Proof of lemma about product of irreducible polynomials over finite fields | Hints:
Denoting by $\;k:=\overline{\Bbb F_q}\;$ an algebraic closure of $\;\Bbb F_q\;$ , prove that for any $\;n\in\Bbb N\;$ we have that
$$\Bbb F_{q^n}=\left\{\;\alpha\in k\;:\;\;\alpha^{q^n}-\alpha=0\;\right\}$$
and now use/prove that $\;\Bbb F_{q^m}\le\Bbb F_{q^n}\;$ (the left field is a subfield of the right field) iff $\;m\mid n\;,\;\;m,n\in\Bbb N$ |
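The subfield relation $\Bbb F_{q^m}\le\Bbb F_{q^n}$ holds precisely when $m\mid n$; on the multiplicative side this corresponds to the elementary fact that $(q^m-1)\mid(q^n-1)$ iff $m\mid n$, which is easy to confirm by brute force:

```python
# Check: (q^m - 1) | (q^n - 1)  <=>  m | n, for small q, m, n.
ok = all(
    ((q**n - 1) % (q**m - 1) == 0) == (n % m == 0)
    for q in (2, 3, 5)
    for m in range(1, 9)
    for n in range(1, 9)
)
```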
Given $\Phi(t)\in M_{n}(\mathcal{C}^1)$, non-singular for each $t\in\Bbb{R}$, exists only one $A(t)$ s.t. $\Phi$ is fundamental to $x'=Ax$. | For existence and uniqueness at once: $A\Phi=\Phi'\Leftrightarrow A=\Phi'\Phi^{-1}$, so $A(t)=\Phi'(t)\Phi(t)^{-1}$ is the only possible choice, and it is well defined because $\Phi(t)$ is non-singular. |
Prove if {$a_n$} $\rightarrow$ $\infty$, then {$a_n$} is not bounded above. Give an indirect proof. | Suppose there is some $M \geq 0$ such that $|a_{n}| \leq M$ for all $n \geq 1$. But by assumption there is some $N \geq 1$ such that $a_{n} > M$ for all $n \geq N$, a contradiction.
Is this proof something you are after? |
Is "decomposition" of every Friedlander-Iwaniec prime unique? | $$97 = 4^2 + 3^4 = 9^2 + 2^4$$
But that sort of thing is the only problem, since the representation of a prime as the sum of two squares (if it exists, i.e. $p = 2$ or $p \equiv 1 \pmod{4}$) is unique. |
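Both statements are easy to confirm by brute force: $97$ has the two $a^2+b^4$ representations shown, while every prime $p\equiv 1\pmod 4$ below $500$ has exactly one representation $p=a^2+b^2$ with $0<a\le b$:

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

def two_square_reps(p):
    """Unordered representations p = a^2 + b^2 with 0 < a <= b."""
    s = int(p**0.5) + 1
    return [(a, b) for a in range(1, s) for b in range(a, s) if a * a + b * b == p]

# representations 97 = a^2 + b^4
reps_97 = [(a, b) for a in range(1, 10) for b in range(1, 4) if a * a + b**4 == 97]

unique = all(len(two_square_reps(p)) == 1
             for p in range(5, 500) if is_prime(p) and p % 4 == 1)
```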
About group ring and projective module | Consider the case $R=\Bbb F_2$, $G=\Bbb Z/2\Bbb Z$. Since $R$ is a field, every $R$-module is projective, so the task reduces to finding a right $R[G]$-module that is not projective. Consider the module $M$ given by $\Bbb F_2$ with a trivial $G$-action. Then we have an exact sequence
$0 \to M \to R[G] \to M \to 0$ of right $R[G]$-modules. (Let $g$ be a generator of $G$ and consider the subspace in $R[G]$ spanned by $1+g$. $1+g$ is invariant, thus the span is isomorphic to $M$. Now check that $G$ acts trivially on the quotient.)
This exact sequence doesn't split, as $G$ doesn't act trivially on $R[G]$, hence $M$ is not projective.
Every projective $A$-module is projective as an $R$-module. To see this, note that $R[G]$ is free as an $R$-module, so every free $R[G]$-module is a free $R$-module. It follows that every direct summand of a free $R[G]$-module is a direct summand of a free $R$-module, so projective $R[G]$-modules are projective as $R$-modules. |
Finding the range of a function, which contains $a$ and its multiplicative inverse | Your answer is correct. There is a hole in the logic when you transform $\sin^2 x + \csc^2 x$ to $x^2 + 1/x^2$ because you only cite the ranges of each function. It is critical that $\sin^2 x = 1/\csc^2 x$ for this to work but you don't say that. If we were considering $f(x)+g(x)$ where the range of $f(x)$ is $[0,1]$ and the range of $g(x)$ is $[1,\infty)$ it could be that $f(x)=0$ when $g(x)=1$ and the sum could be as low as $1$. It would also be better not to reuse $x$ here. There are lots more letters. |
How can I find gold in the jungle? | I think that this is easiest to solve using complex analysis.
If we say A, B, S are at arbitrary points in the complex plane.
The point $A$ can be written as $S + (A-S)$. A vector perpendicular to $A-S$, of equal length and obtained by a right turn, is $-i(A-S)$.
We then do something similar for the vectors from $S$ to $B$ to $D$; however, a left turn is $i(B-S)$.
$x =S + \frac 12 ((A-S) - i(A-S)) + \frac 12((B-S) + i(B-S)) = \frac 12 A + \frac 12 B + \frac 12 i(B-A)$
And $x$ does not depend on $S$, so it can be found from $A$ and $B$ alone.
To find the treasure, start at $A$. Walk half way to $B$, turn left $90^\circ$, and walk an equal distance as you covered to the halfway point. Dig.
Here is a figure, not quite to scale. |
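The independence from $S$ is easy to confirm with complex arithmetic (the sample points below are arbitrary):

```python
def dig_spot(A, B, S):
    # x = S + (1/2)((A-S) - i(A-S)) + (1/2)((B-S) + i(B-S))
    return S + 0.5 * ((A - S) - 1j * (A - S)) + 0.5 * ((B - S) + 1j * (B - S))

A, B = 1 + 2j, 4 - 1j
spots = [dig_spot(A, B, S) for S in (0j, 3 + 3j, -5 - 2j)]
closed_form = 0.5 * A + 0.5 * B + 0.5j * (B - A)
```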
Prove that a simple graph with $n$ edges and $n$ vertices and $\delta\ge2$ is a disjoint union of cycles | The total of the valencies of the vertices in a graph with
$e$ edges is $2e$, and if the graph has $n$ vertices
the mean valency is $2e/n$. Here $e=n$ and so the mean
valency is $2$, but also all vertices have valency $\ge2$
so every vertex has valency exactly two.
So at each vertex one can start a walk, leaving on one edge,
and at each vertex leaving by the edge different from the one
on which one arrived, until one eventually reaches the start vertex,
completing a cycle. Repeating this on the vertices not yet visited decomposes the graph into disjoint cycles. |
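The walk described above can be carried out mechanically; here is a sketch on an example $2$-regular graph (the labels and the helper `cycle_decomposition` are illustrative, not from the original):

```python
def cycle_decomposition(adj):
    """Walk a 2-regular graph (vertex -> pair of neighbours), always leaving
    by the edge one did not arrive on, collecting the cycles."""
    unvisited = set(adj)
    cycles = []
    while unvisited:
        start = min(unvisited)
        cycle = [start]
        prev, cur = None, start
        while True:
            nxt = next(v for v in adj[cur] if v != prev)
            if nxt == start:
                break
            cycle.append(nxt)
            prev, cur = cur, nxt
        unvisited -= set(cycle)
        cycles.append(cycle)
    return cycles

# two disjoint cycles: 1-2-3-1 and 4-5-6-7-4
adj = {1: (2, 3), 2: (1, 3), 3: (2, 1),
       4: (5, 7), 5: (4, 6), 6: (5, 7), 7: (6, 4)}
cycles = cycle_decomposition(adj)
```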
Build Tree by Prüfer Code $(6,2,2,6,2,5,10,9,9)$ | I would suggest it goes like this:
Let us first count how many times each node appears in the Prüfer code (a node of degree $d$ in the tree appears exactly $d-1$ times in the code):
$$
\begin{array}{c|c}
\text{node}&6&2&5&10&9\\
\hline
\text{appearances}&2&3&1&1&2
\end{array}
$$
Thus $6$ has to have appeared twice as a neighbor of a removed vertex before it could have been removed itself. Similarly $2$ must have three of its neighbors removed as leaves before it itself becomes a removable leaf.
The vertices that can be removed, the free nodes, must be those NOT in the Prüfer Code.
Thus:
T={}, code=(6,2,2,6,2,5,10,9,9), free=(1,3,4,7,8,11)
T={(1,6)}, code=(2,2,6,2,5,10,9,9), free=(3,4,7,8,11)
T={(1,6),(3,2)}, code=(2,6,2,5,10,9,9), free=(4,7,8,11)
T={(1,6),(3,2),(4,2)}, code=(6,2,5,10,9,9), free=(7,8,11)
T={(1,6),(3,2),(4,2),(7,6)}, code=(2,5,10,9,9), free=(6,8,11)
T={(1,6),(3,2),(4,2),(7,6),(6,2)}, code=(5,10,9,9), free=(2,8,11)
T={(1,6),(3,2),(4,2),(7,6),(6,2),(2,5)}, code=(10,9,9), free=(5,8,11)
T={(1,6),(3,2),(4,2),(7,6),(6,2),(2,5),(5,10)}, code=(9,9), free=(8,10,11)
T={(1,6),(3,2),(4,2),(7,6),(6,2),(2,5),(5,10),(8,9)}, code=(9), free=(10,11)
T={(1,6),(3,2),(4,2),(7,6),(6,2),(2,5),(5,10),(8,9),(10,9)}, code=(), free=(9,11)
And finally we must connect the remaining two, namely (9,11). |
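The procedure above is the standard Prüfer decoding; a direct implementation reproduces the same tree:

```python
import heapq

def prufer_to_tree(code):
    """Standard Prüfer decoding: repeatedly join the smallest current leaf
    to the next code entry, then connect the last two remaining leaves."""
    n = len(code) + 2
    degree = [1] * (n + 1)            # index 0 unused; degree = appearances + 1
    for v in code:
        degree[v] += 1
    leaves = [v for v in range(1, n + 1) if degree[v] == 1]
    heapq.heapify(leaves)
    edges = []
    for v in code:
        leaf = heapq.heappop(leaves)
        edges.append((leaf, v))
        degree[v] -= 1
        if degree[v] == 1:            # v just became a free (removable) leaf
            heapq.heappush(leaves, v)
    edges.append((heapq.heappop(leaves), heapq.heappop(leaves)))
    return edges

edges = prufer_to_tree([6, 2, 2, 6, 2, 5, 10, 9, 9])
```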
The density of diagonalizable matrices of $M_n(\mathbb{C})$ problem. | For continuity, show that the map $P\mapsto AP$ is continuous, then that the map $P\mapsto PB$ is continuous for any $B\in\mathcal M_n(\mathbb C)$.
For question (c), notice that the off-diagonal elements of $\widehat{A_0}-P^{-1}A_0P$ are all $0$, and by construction, the modulus of the diagonal elements is smaller than $\eta$.
Notice that $\widehat{A_0}$ has distinct eigenvalues. If $M$ is a diagonalizable matrix in $\mathbb C$, so is $P^{-1}MP$ for any invertible $P$. |
"Binary-Like" Function?; In Consecutive Products as Multi-Factorials.... | My comments from the other question apply here as well. But we actually can define $Z(a,n)$ even with such a mild class of functions as one including the ceiling function, addition, subtraction, and division.
$$Z(a,n) = \left\lceil \frac {n-1}a - \left\lfloor \frac{n-1}{a} \right\rfloor \right\rceil.$$
Actually, the example you gave in the comments extends as well, if you allow yourself interval-type $\sum$-schema, smooth functions, and the complex numbers.
$$Z(a,n) = 1-\frac1a\sum_{k=0}^{a-1}\exp\left(2\pi i\frac{k(n-1)}{a}\right)$$
It's not as clear why this one works, but try it out :) |
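For what it's worth, the two closed forms can be made to match numerically under one reading of the intended convention (assumed here: a floor inside the ceiling, and the root-of-unity sum running from $k=0$); with these choices both expressions equal $0$ when $a\mid n-1$ and $1$ otherwise:

```python
import cmath
import math

def z_ceil(a, n):
    t = (n - 1) / a
    return math.ceil(t - math.floor(t))   # 0 if a | (n-1), else 1

def z_exp(a, n):
    s = sum(cmath.exp(2j * math.pi * k * (n - 1) / a) for k in range(a))
    return round(1 - s.real / a)          # same indicator via roots of unity
```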
Topic for attribute exploration | In the early of Stumme they used graphs. Have you managed to find something?
http://www.kde.cs.uni-kassel.de/stumme/papers/1995/P1781-GfKl95.pdf
It is one of the works on the topic. The domain for exploration is types of graphs.
A more recent one is by Robert Jaeschke and S. Rudolph.
It contains some connections to user queries; the paper is referenced below:
http://www.qucosa.de/fileadmin/data/qucosa/documents/11313/019.pdf |
Are all groups of order 6 cyclic? | $S_3$ is of order 6, and nonabelian. |
Finding the orthonormal basis of $\Bbb R^3$ using the Gram-Schmidt algorithm | You are supposed to use the inner product that you were given. With respect to that inner product, $\|(1,1,1)\|=\sqrt6$ and therefore the first vector that you get when you apply Gram-Schmidt is $e_1=\frac1{\sqrt6}(1,1,1)$. Then, let$$a_2=(1,0,1)-\langle(1,0,1),e_1\rangle e_1=\left(\frac13,-\frac23,\frac13\right).$$Its norm is $\sqrt{\frac43}$ and so you take $e_2=\frac1{2\sqrt3}(1,-2,1)$. Finally, you take$$a_3=(0,1,2)-\langle(0,1,2),e_1\rangle e_1-\langle(0,1,2),e_2\rangle e_2=\left(-\frac32,0,\frac12\right).$$Its norm is $\sqrt3$ and so you take $e_3=\frac1{\sqrt3}\left(-\frac32,0,\frac12\right)$. |
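The computation can be replayed numerically. The answer does not restate the inner product, so the weights $(1,2,3)$ below are an assumption, chosen to match $\|(1,1,1)\|=\sqrt6$:

```python
import math

W = (1, 2, 3)  # assumed weights: <x,y> = x1*y1 + 2*x2*y2 + 3*x3*y3

def ip(x, y):
    return sum(w * a * b for w, a, b in zip(W, x, y))

def gram_schmidt(vectors):
    basis = []
    for v in vectors:
        u = list(v)
        for e in basis:
            c = ip(v, e)                              # projection coefficient
            u = [ui - c * ei for ui, ei in zip(u, e)]
        norm = math.sqrt(ip(u, u))
        basis.append([ui / norm for ui in u])
    return basis

e1, e2, e3 = gram_schmidt([(1, 1, 1), (1, 0, 1), (0, 1, 2)])
```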
Prove the following function is continuous at all irrational points | Suppose we take a sequence $a:\mathbb{N}\to\mathbb{R}$ and consider the Heine definition of continuity, i.e.
Let's say a function $f:\mathbb{R}\to\mathbb{R}$ is continuous at the point $x_0$ iff for every sequence $a_n$ $\lim\limits_{n\to\infty}a_n=x_0\Rightarrow \lim\limits_{n\to\infty}f(a_n)=f(x_0)$
More formally:
a function $f: X \to Y$ is sequentially continuous if whenever a sequence $(x_n)$ in $X$ converges to a limit $x$, the sequence $(f(x_n))$ converges to $f(x)$
So let's suppose $\lim\limits_{n\to\infty}a_n=a\in\mathbb{R}\setminus\mathbb{Q}$, but $\lim\limits_{n\to\infty}f(a_n)=\frac{1}{b}\ne 0$.
Let's consider $\lim\limits_{n\to\infty}\frac{1}{f(a_n)}=b\in\mathbb{Z}$, so, $\forall\varepsilon>0\,\exists N(\varepsilon):\,\forall n>N(\varepsilon)\, |\frac{1}{f(a_n)}-b|<\varepsilon$ and take $\varepsilon=\frac{1}{2}$, as $\frac{1}{f(a_n)}\in\mathbb{Z}$ so $|\frac{1}{f(a_n)}-b|<\frac{1}{2}\Rightarrow \frac{1}{f(a_n)}=b$,
so $\forall n>N(\frac{1}{2})\, \frac{1}{f(a_n)}=b$, so $\forall n>N(\frac{1}{2})\ a_n$ has the form $\frac{c_n}{b}$ where $c_n\in\mathbb{Z},\,\gcd(c_n,b)=1$ so $a$ can't be irrational because it's rational with denominator $b$, hence a contradiction.
More, if $\lim\limits_{n\to\infty}\frac{1}{f(a_n)}=b'\notin\mathbb{Z}$ we take $b$ as the nearest integer to $b'$ and the argument above works. |
Let $f:X\to Y$ and $g:Y\to Z$ be separated morphisms of schemes. Must $g\circ f:X\to Z$ be a separated morphism? | http://stacks.math.columbia.edu/tag/01KU gives (more than) the answer to your question! |
Prove $A$ is compact if $A\cup B$ is compact and the closure of $A$ does not intersect $B$. | First of all, let us see why the claim is true. Suppose $\bar{A}\cap B=\emptyset$ and $A\cup B$ is compact. We want to prove that $A$ is compact. To this end, let $\{U_i\}_{i\in I}$ be an arbitrary open cover of $A.$ Note that $B\subseteq \bar{A}^c,$ and therefore $\{U_i:i\in I\}\cup \{\bar{A}^c\}$ is an open cover of $A\cup B.$ From the compactness of $A\cup B,$ it must have a finite sub-cover. (Possibly) Discarding $\bar{A}^c,$ you obtain a finite subcover of the original cover for $A.$ This proves that $A$ is compact.
As @zhw already pointed out in the comments, the conclusion is false if you only assume $A\cap B=\emptyset.$ This answers the question which you asked (namely, why the stronger condition is required). It is also instructive to look at where the above proof fails if you only assume $A\cap B=\emptyset.$ It still holds that $B\subseteq A^c,$ but the problem is that $A^c$ may not be open. |
How to show the function $f\colon z\mapsto\lvert z\rvert^2$ is continuous on the entire complex plane? | I think you can try to rewrite the original function $f(z) = |z|^{2}$ as $f(z) = z z^{*}$, where $z^{*}$ is the complex conjugate of $z$; then $f$ is continuous as a product of two continuous functions. |
Formal approach to (countable) prisoners and hats problem. | I'm not sure why Terry Tao compared this to the Banach-Tarski paradox. This is, essentially, just a Vitali set.
To see this, note that in the space $2^\omega$ the finite sequences (or rather, eventually zero sequences) are a dense countable set; call it $Q$. If you think about $2^\omega$ as a Boolean ring, then symmetric difference is addition, and saying that two sequences are equal up to a finite difference is exactly to say that $x-y=x+y\in Q$.
So the set of representatives is really just a Vitali set. And from there the paradoxical result should be easier to swallow as a familiar "issue". And we see why additivity causes problems. |
Incomparable Elements In A Poset | $\{0\}$ and $\{1\}$ are indeed incomparable elements of $\wp(\{0,1,2\})$ with respect to the partial order $\subseteq$, and for the reason that you gave: $\{0\}\nsubseteq\{1\}$, and $\{1\}\nsubseteq\{0\}$. There’s no reason to look at the ordered pairs, though it’s true that neither $\langle\{0\},\{1\}\rangle$ nor $\langle\{1\},\{0\}\rangle$ belongs to the order $\subseteq$. |
Summing a recurrence to infinity | Let $f(x) = \displaystyle \sum_{n=0}^\infty a_n x^n$ (with $a_0 = 0$, and the convention $a_{-1}=0$, so that the shifted sums below are valid). Then:
$$\begin{array}{rcl}
f(x) &=& \displaystyle \sum_{n=0}^\infty a_n x^n \\
x f(x) &=& \displaystyle \sum_{n=0}^\infty a_{n-1} x^n \\
x^{-1} f(x) &=& \displaystyle \sum_{n=0}^\infty a_{n+1} x^n \\
(x + 3x^{-1}) f(x) &=& \displaystyle \sum_{n=0}^\infty (a_{n-1} + 3a_{n+1}) x^n \\
(x + 3x^{-1}) f'(x) + (1-3x^{-2}) f(x) &=& \displaystyle \sum_{n=0}^\infty n (a_{n-1} + 3a_{n+1}) x^{n-1} \\
(x^2 + 3) f'(x) + (x-3x^{-1}) f(x) &=& \displaystyle \sum_{n=0}^\infty n (a_{n-1} + 3a_{n+1}) x^n \\
\end{array}$$
So we obtain the differential equation:
$$f(x) = (x^2 + 3) f'(x) + (x-3x^{-1}) f(x)$$
Which we now solve:
$$\begin{array}{rcl}
(x^2 + 3) f'(x) + (x-3x^{-1}) f(x) &=& f(x) \\
(x^2 + 3) f'(x) &=& (1-x+3x^{-1}) f(x) \\
\dfrac{f'(x)}{f(x)} &=& \dfrac{1-x+3x^{-1}}{x^2+3} \\
(\ln f)' &=& \dfrac{x-x^2+3}{x(x^2+3)} \\
\ln f &=& -\log(x^2+3) + \log(x) + \dfrac1{\sqrt3} \arctan\left(\dfrac{x}{\sqrt3}\right) + C \\
f(x) &=& e^C \dfrac{x}{x^2+3} \exp\left( \dfrac1{\sqrt3} \arctan\left(\dfrac{x}{\sqrt3}\right) \right)
\end{array}$$
Given the constraint $a_1 = 1$ we know $e^C = 3$, so:
$$f(x) = \dfrac{3x}{x^2+3} \exp\left( \dfrac1{\sqrt3} \arctan\left(\dfrac{x}{\sqrt3}\right) \right)$$
So $a_n$ can be found from the Taylor series expansion of $f$. |
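A numerical cross-check (the recurrence $a_n = n(a_{n-1}+3a_{n+1})$ with $a_0=0$, $a_1=1$ is the one implied by the coefficient matching above): generate the $a_n$ from the recurrence and compare the partial sums of the power series against the closed form.

```python
import math

def coeffs(N):
    # a_{n+1} = (a_n / n - a_{n-1}) / 3, with a_0 = 0 and a_1 = 1
    a = [0.0, 1.0]
    for n in range(1, N):
        a.append((a[n] / n - a[n - 1]) / 3.0)
    return a

def f(x):
    # closed form found above
    return 3 * x / (x * x + 3) * math.exp(math.atan(x / math.sqrt(3)) / math.sqrt(3))

x = 0.1
partial = sum(c * x**n for n, c in enumerate(coeffs(20)))
```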
Question on evaluating the surface integral over a cube | The problem, is that you are not using outward pointing normal vectors. Every plane has two unit normal vectors, but only one of them is pointing outward. For a flux integral, we use the convention that the flux is the flow out of an object.
In your case, the outward pointing normal vector for $S_1$ is $\langle -1,0,0\rangle$, which changes the sign of your answer. The outward pointing normal vector for $S_2$ remains $\langle 1,0,0\rangle$, so that answer doesn't change. There are two more vectors which need to swap signs, and after that, you'll get
$$
\left(-\frac{1}{2}\right)+\frac{3}{2}+\left(-\frac{1}{2}\right)+\frac{1}{2}+(-0)+1=2.
$$ |
Minimum size of the generating set of a direct product of symmetric groups | Let $S_m$ act on $\{1,2,\ldots,m\}$ and $S_n$ on $\{1',2',\ldots,n'\}.$
[adding more detail to the following paragraph as per request]
A fact given in most elementary texts on permutation groups is that the 2-cycles
$(12),(13),(14),\ldots,(1m)$ generate all of $S_m$.
With that result known we next see that two generators $\sigma=(12)$ and $\tau=(123\cdots m)$ generate all of $S_m$. This is because we get sufficiently many transpositions by conjugating the former by powers of the latter. For example $\tau\sigma\tau^{-1}=(23)$, $\tau^2\sigma\tau^{-2}=(34)$
et cetera. Further conjugating gives $(13)=(23)(12)(23)$ and so forth.
Similarly we can use $\tau'=(23\cdots m)$ in place of $\tau$: $\tau'^k\sigma\tau'^{-k}$ gives the 2-cycles $(1j), 1<j\le m$, and having these suffices.
I think that two generators will always suffice to generate the direct product.
If $m$ and $n$ are both odd, then it is clear that the permutations $\alpha=(123\ldots m)(1'2')$ and
$\beta=(12)(1'2'3'\ldots n')$ generate the whole thing, because $\alpha^2$ and $\beta^n$ generate $S_m$ and $\alpha^m$ and $\beta^2$ generate $S_n$. The general observation here is that if a permutation $\sigma$ is the product of two
disjoint cycles of coprime lengths, then the individual cycles belong to the subgroup generated by $\sigma$.
If either $m$ (resp. $n$) is even, then we use $(23\ldots m)(1'2')$ (resp.
$(12)(2'3'\cdots n')$) instead as the other generator. The key is that the longer
cycle is of an odd length, so the above observation applies.
Addendum: A single generator obviously won't do :-) |
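The two-generator claim for odd $m,n$ can be brute-forced on a small case, $m=3$, $n=5$, where $\langle\alpha,\beta\rangle$ should be all of $S_3\times S_5$, of order $3!\cdot 5!=720$ (the closure routine below is illustrative):

```python
def compose(p, q):
    # (p * q)(i) = p(q(i)); permutations as tuples on {0,...,7}
    return tuple(p[q[i]] for i in range(len(p)))

def closure(gens):
    identity = tuple(range(len(gens[0])))
    seen = {identity}
    frontier = [identity]
    while frontier:
        g = frontier.pop()
        for h in gens:
            x = compose(h, g)
            if x not in seen:
                seen.add(x)
                frontier.append(x)
    return seen

# alpha = (0 1 2)(3 4), beta = (0 1)(3 4 5 6 7), acting on {0,1,2} and {3,...,7}
alpha = (1, 2, 0, 4, 3, 5, 6, 7)
beta = (1, 0, 2, 4, 5, 6, 7, 3)
G = closure([alpha, beta])
```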
$f(x) = x^{p}(1-x)^{q}$ for all $x\in \left[0,1\right]\;,$ Where $p,q\in \mathbb{Z^{+}}$, Then Max. of $f(x)$ at $x=$ | Consider the sum
$$\frac{x}{p}+\cdots +\frac{x}{p}+\frac{1-x}{q}+\cdots+\frac{1-x}{q},$$
where there are $p$ copies of $\frac{x}{p}$ and $q$ copies of $\frac{1-x}{q}$. The sum is $1$, so the arithmetic mean is $\frac{1}{p+q}$.
By AM/GM we have
$$\frac{1}{p+q} \ge \sqrt[p+q]{\frac{x^p(1-x)^q}{p^pq^q}}$$
with equality when $x/p=(1-x)/q$, that is, when $(p+q)x=p$. Now a little manipulation yields the maximum value of $x^p(1-x)^q$. |
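A quick grid search confirms the maximiser $x = p/(p+q)$:

```python
def f(x, p, q):
    return x**p * (1 - x)**q

def argmax_on_grid(p, q, steps=10000):
    # brute-force maximiser of x^p (1-x)^q on [0, 1]
    return max((i / steps for i in range(steps + 1)), key=lambda x: f(x, p, q))
```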
Is this definition correct for the inverse of a function? | The formulation of the definition in the OP assumes existence and uniqueness of the inverse, but existence does not always hold, and uniqueness has to be proved. I would reformulate it as:
Let $\,f:X\to Y\,$ be a function. The function $\,g:Y\to X\,$ is said to be an inverse of $\,f$ if $\,g\circ f=i_X\,$ and $\,f\circ g=i_Y$. We denote the inverse of $\,f\,$ by $\,f^{-1}$. |
Prove by induction that sum of an odd number of odd numbers is odd | Base case: $a_1$ is odd, therefore it is okay.
Induction hypothesis: For some $k$, $\sum_{i=1}^{2k+1} a_i$ is odd.
Induction step: The sum of two odd numbers is even. Therefore $a_{2k+2}+a_{2k+3}$ is even. And even+odd=odd, and
$$\sum_{i=1}^{2k+3} a_i = \sum_{i=1}^{2k+1} a_i+a_{2k+2}+a_{2k+3}$$
Thus we can conclude it is true. |
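A randomized sanity check of the statement (any $2k+1$ odd numbers sum to an odd number):

```python
import random

def odd_sums_are_odd(trials=200, seed=0):
    rng = random.Random(seed)
    for _ in range(trials):
        k = rng.randrange(10)
        odds = [2 * rng.randrange(100) + 1 for _ in range(2 * k + 1)]
        if sum(odds) % 2 != 1:
            return False
    return True
```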
Integral estimation - I am being an..... | The inequality is obviously false if $x \leq 0$. For $x>0$ just note that $\int_x^{\infty} e^{-y^{2}/2} dy=\int_x^{\infty} \frac 1 y[ye^{-y^{2}/2}] dy$. Since $\frac 1 y <\frac 1 x$ we get $\int_x^{\infty} e^{-y^{2}/2} dy \leq \int_x^{\infty} \frac 1 x[ye^{-y^{2}/2}] dy$. Pull out $\frac 1 x$. Can you evaluate the remaining integral? |
Permutation graph and matching diagram | I might be misunderstanding the conventions used here, but it looks to me like both the permutation graph and the matching diagram are mislabeled, or at least drawn very confusingly. If we read the top numbers of the diagram as the domain and the bottom numbers as the codomain (which seems like the natural reading to me), then the matching diagram shows $2$ being sent to the $5$th position, not the $3$rd position as it should be in $\sigma$. This would seem to be most naturally interpreted as the matching diagram and permutation graph for $\sigma^{-1}$ rather than $\sigma$ itself.
I welcome any corrections to this interpretation from someone more familiar with these conventions. |
Show that there is odd number of elements of a finite group satisfying $x^3=e$ | There is an involution on the set of all elements $g$ such that $g^3=e$ sending $g$ to $g^{-1}$. This involution has exactly one fixed point, namely $g=e$. Hence altogether there is an odd number of them. |
Characterize the family of all $n\times n$ nonsingular matrices $A$ for which one step of the Gauss-Seidel algorithm solves $Ax=b$, starting at | If $A = L + U$, and our initial vector is $0$, what is the next approximation of a solution that Gauss-Seidel computes?
That next approximation must be $b$ (that's what "one step ... solves" means). So, what does that tell you about $L$, $U$ and $A$?
You need to at least try these before someone here is going to give you anything like an answer. |
Why are chordal graphs always perfect graphs? | This is due to the Strong perfect graph theorem.
A hole is an induced cycle of length at least four. Its complement is called antihole. We call an hole odd if it has an odd number of vertices, and even otherwise.
Now, the Strong perfect graph theorem states, that a graph is perfect if and only if it contains no odd hole and no odd antihole.
Therefore, it is sufficient to show that a chordal graph $G$ contains no odd hole or odd antihole as an induced subgraph. $G$ clearly contains no odd hole, since the largest induced cycle in $G$ is a triangle. Suppose that $G$ contains an odd anti-hole. First observe that the complement of $C_5$ (the odd hole of length $5$) is isomorphic to $C_5$. If the anti-hole in $G$ is of order $5$, then $C_5$ is an induced subgraph in the complement of $G$, but since $C_5=C_5^C$, $C_5$ is an induced subgraph in $G$. A contradiction.
Suppose the odd anti-hole has order $2k+5,\ k \in \mathbb{N}$. Then $C_{2k+5}$ is an induced subgraph of $G^C$, so the path $P_5$ of order $5$ is an induced subgraph in $G^C$, and hence $P_5^C$ is an induced subgraph in $G$. But $P_5^C$ is not chordal (it contains an induced $C_4$). A contradiction. |
An algebraic manipulation of the Zeta function | Unfortunately your expression is not complete, there are some terms missing. To analyse the difference, we give your series a name:
$$Z(s) = 1+2^{-s}\zeta \left ( s \right )+3^{-s}\zeta \left ( s \right )+5^{-s}\zeta \left ( s \right )+7^{-s}\zeta \left ( s \right )+\ldots = 1 + \zeta(s)\sum_p p^{-s},\quad\quad(*)$$
We will denote the difference with $\zeta(s)$ by $E(s) = \zeta(s) - Z(s)$. The question now is whether $E(s)$ has indeed the form
$E(s) \overset{?}= D(s)$, where $D(s)$ is as in your post $$D(s) = \sum_{p}\sum_{q > p} (pq)^{-s} = 6^{-s} + 10^{-s} + 14^{-s} + 15^{-s} + 21^{-s} + \ldots .$$
(notice that for distinct prime numbers $p, q$ we have $LCM(p,q) = pq$.)
When we look at the number of times the term $30^{-s}$ shows up in the expanded version of $Z(s)$, we see that it appears three times: once in the term $2^{-s}\zeta(s)$ in the form $2^{-s}\cdot 15^{-s}$, once in the term $3^{-s}\zeta(s)$ in the form $3^{-s}\cdot 10^{-s}$, and once in the term $5^{-s}\zeta(s)$ in the form $5^{-s}\cdot 6^{-s}$. That is two more terms $30^{-s}$ than the $\zeta$-function has, so the correction $E(s)$ contains a term $2\cdot 30^{-s}$. Likewise, counting terms $12^{-s}$ shows that it appears two times in equation $(*)$, one time coming from $2^{-s}\zeta(s)$ and one time coming from $3^{-s}\zeta(s)$. Therefore, $E(s)$ has a term $12^{-s}$. Both of these terms are missing in $D(s)$, so we must conclude that $E(s) \neq D(s)$, or in other words,
$$\zeta (s)\neq1+\zeta (s)\sum_{p}^{ }p^{-s}-\sum_{p}^{ }\sum_{q> p}^{ }(pq)^{-s}.$$
Is there a way to fix this? Yes, there is, we just have to repair the expression for $D(s)$. To do that, we must first analyse your series $Z(s)$ a bit more. After all, if we want $D(s)$ to correct for all `extra' terms in $Z(s)$, we need to know which terms occur in $Z(s)$, and with which multiplicity. To do that, we will analyse the terms occurring in $Z(s)$ in exactly the way we did before with $30^{-s}$ and $12^{-s}$, but this time we are going to do it for general $n$.
Let $n$ be any natural number. We are going to find out how often $n^{-s}$ appears in $Z(s)$. For this, we first factorize $n$ as $n = p_1^{e_1}p_2^{e_2}\cdots p_r^{e_r}$, where the $p_i$ are distinct primes. Then we get a term $n^{-s}$ in $Z(s)$ from the term $p_1^{-s}\zeta(s)$, in the form $p_1^{-s}\cdot (p_1^{e_1-1}p_2^{e_2}\cdots p_r^{e_r})^{-s}$, another term $n^{-s}$ from the term $p_2^{-s}\zeta(s)$, in the form $p_2^{-s} \cdot (p_1^{e_1}p_2^{e_2-1}\cdots p_r^{e_r})^{-s}$, and so on until the term $n^{-s}$ coming from $p_r^{-s}\zeta(s)$ in the form $p_r^{-s} \cdot (p_1^{e_1}p_2^{e_2}\cdots p_r^{e_r-1})^{-s}$. Thus, the total number of times $n^{-s}$ appears in $Z(s)$ is exactly $r$, the number of distinct prime divisors of $n$. (Make sure you understand this part. This is the difficult part of the story, but I believe it will become clear once you try the argument for a few numbers like $n = 600 = 2^3\cdot 3\cdot 5^2$). Note that this matches what we found in the specific cases $n = 12$ and $n = 30$ we looked at before: the term $12^{-s}$ occurs 2 times and $30^{-s}$ occurs 3 times, and indeed, $12$ has 2 different prime divisors and $30$ has 3 different prime divisors. There is only one exception, and that is the case $n = 1$. This number has zero prime divisors, but the term $1^s = 1$ occurs one time. This is just because of how we defined $Z(s)$, so that is nothing to worry about, but it is good to keep in mind anyways.
So now we know which terms we need to subtract from $Z(s)$ to get to $\zeta(s)$. Namely, we need to subtract every term of the form $(p_1^{e_1}p_2^{e_2})^{-s}$ once (these are the doubly counted terms that you wrote about, but note that we now include products of distinct prime powers, not just distinct primes), every term of the form $(p_1^{e_1}p_2^{e_2}p_3^{e_3})^{-s}$ two times, every term of the form $(p_1^{e_1}p_2^{e_2}p_3^{e_3}p_4^{e_4})^{-s}$ three times, and so on. We can put that into formulas in the same way you did in your question, and then we will end up with the following formula:
$$\zeta(s) = Z(s) - \sum_{p_1}\sum_{p_2>p_1}\sum_{e_1=1}^\infty\sum_{e_2=1}^\infty (p_1^{e_1}p_2^{e_2})^{-s} - \sum_{p_1}\sum_{p_2>p_1}\sum_{p_3>p_2}\sum_{e_1=1}^\infty\sum_{e_2=1}^\infty\sum_{e_3=1}^\infty 2(p_1^{e_1}p_2^{e_2}p_3^{e_3})^{-s} - \ldots$$
Well, that doesn't look very nice. Perhaps we should think this over a bit. How are we going to clean up this formula? Well, the first iterated summation is over all products of two distinct prime powers. Which numbers are products of two distinct prime powers? Well, those are exactly all numbers with two different prime divisors. Likewise, the second iterated summation is over all numbers with exactly three different prime divisors. Thus, we might define the sets
$$ A_k = \{n : \textrm{ $n$ has exactly $k$ distinct prime divisors } \}.$$
for each $k$. So $A_2$ consists of those numbers with exactly 2 distinct prime divisors, etc. Then we can rewrite the horrible formula above to this form:
$$\zeta(s) = Z(s) - \sum_{n \in A_2} n^{-s} - \sum_{n \in A_3} 2n^{-s} - \sum_{n \in A_4} 3n^{-s} - \ldots = Z(s) - \sum_{k = 2}^\infty \sum_{n \in A_k}(k-1)n^{-s},$$
which at least looks better. But we still have a double summation, perhaps we can get rid of that as well? Indeed we can. For this, we first note that the outer sum, which now starts at 2, might just as well start at 1, since all those terms will be 0 anyway. Also, let's define $\omega(n)$ to be the number of distinct prime divisors $n$ has, so that $\omega(n) = k$ if and only if $n \in A_k$. Then we can rewrite the previous formula to
$$\zeta(s) = Z(s) - \sum_{k = 1}^\infty \sum_{n \in A_k} (\omega(n)-1) n^{-s}.$$
We see that the individual terms in the double summation no longer directly depend on $k$. Thus, the sum only depends on which values of $n$ occur. Well, we first sum over all numbers with exactly one prime divisor, then over all numbers with exactly two prime divisors, then all numbers with exactly three prime divisors, and so on. Thus, in the end, the summation is just over all numbers except $1$. So we might as well write
$$\zeta(s) = Z(s) - \sum_{n = 2}^\infty (\omega(n) - 1)n^{-s}.$$
Ah, but that looks good. We now have a single summation left, and moreover, when we split off the $-1$-part in each term, we almost get a $\zeta$-function (but be careful, the summation runs from $n = 2$, not $n=1$). Rewriting with this idea in mind, we end up with
\begin{align}\zeta(s) &= Z(s) - \sum_{n = 2}^\infty \omega(n)n^{-s} + \sum_{n=2}^\infty n^{-s} \\&= Z(s) - \sum_{n = 2}^\infty \omega(n)n^{-s} + \sum_{n=1}^\infty n^{-s} \,- 1 \\&= Z(s) + \zeta(s) - 1 - \sum_{n = 2}^\infty \omega(n)n^{-s}.\end{align}
But now we have $\zeta(s)$ on both sides of the equation, so it cancels, and after that we are left with the equation (bringing the summation to the other side)
$$Z(s) - 1 = \sum_{n = 2}^\infty \omega(n)n^{-s}.$$
Note that this last equation expresses precisely that for any $n > 1$ the term $n^{-s}$ will show up in $Z(s)$ with multiplicity exactly $\omega(n)$, the number of distinct prime divisors of $n$. This is exactly what we already had discovered a few paragraphs back. Apparently we have come back to where we started. Well, at least it suggests that our computations and reasoning were correct.
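The final identity $Z(s)-1=\sum_{n\ge 2}\omega(n)n^{-s}$ is also easy to test numerically, e.g. at $s=2$ (with $\zeta(2)=\pi^2/6$; truncating both sides at $N$ leaves a small error):

```python
import math

N = 200_000
omega = [0] * (N + 1)   # omega[n] = number of distinct prime divisors of n
primes = []
for p in range(2, N + 1):
    if omega[p] == 0:   # untouched so far => p is prime
        primes.append(p)
        for m in range(p, N + 1, p):
            omega[m] += 1

s = 2
lhs = sum(omega[n] / n**s for n in range(2, N + 1))
rhs = (math.pi**2 / 6) * sum(p**-s for p in primes)   # Z(s) - 1
```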
On a related note, the idea of using the unique factorisation property of the integers to rewrite $\zeta(s)$ is a good one, and it also occurred to Euler. However, he did it in a slightly different way, leading to a way to express $\zeta(s)$ as an infinite product instead of an infinite sum. For details of the derivation, I urge you to read this wiki page.
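The product Euler arrived at is $\zeta(s)=\prod_p\left(1-p^{-s}\right)^{-1}$, the product running over the primes. As a quick numerical sanity check (a sketch, not part of the derivation above), one can compare truncations of the sum and the product:

```python
import math

def primes_up_to(n):
    # simple sieve of Eratosthenes
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def zeta_sum(s, N):
    # truncation of sum_{n >= 1} n^{-s}
    return sum(n ** -s for n in range(1, N + 1))

def zeta_prod(s, N):
    # truncation of prod_p 1 / (1 - p^{-s}) over primes p <= N
    prod = 1.0
    for p in primes_up_to(N):
        prod *= 1 / (1 - p ** -s)
    return prod

# both truncations approach zeta(2) = pi^2 / 6
assert abs(zeta_sum(2, 200000) - math.pi ** 2 / 6) < 1e-4
assert abs(zeta_prod(2, 200000) - math.pi ** 2 / 6) < 1e-4
```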
Uniqueness of "Punctured" Tubular Neighborhoods (?) | I don't really see the connection to your handle attachment question -- it looks to me like you're approaching it as a more general problem than it actually is.
But this question has a reasonable answer. Think of your half-space as sitting inside the Euclidean space $\mathbb R^n$. Then you can intersect the half-space with spheres of radius $r$ centred around the origin. This converts your punctured half-space (via a diffeomorphism) to:
$$ D^{n-1} \times \mathbb R $$
So you are studying the group of diffeomorphisms of this manifold where the diffeomorphism restricts to the identity on the boundary. I think in full generality this space is not connected. Aside from dimension $4$ I think this space is known to have the homotopy-type of $\Omega Diff(D^n)$, where $Diff(D^n)$ is the group of diffeomorphisms of the $n$-disc $D^n$ which restrict to the identity on the boundary.
The basic idea for why you'd expect something like this is that a diffeomorphism of $D^{n-1} \times \mathbb R$ restricts to an embedding of $D^{n-1} \times \{0\}$ into $D^{n-1} \times \mathbb R$. That map is a fibre bundle. There's a standard sequence of relations between this embedding space and $Diff(D^n)$, it's outlined at the end of Hatcher's paper on the Smale conjecture. Needless to say, the spaces $Diff(D^n)$ for $n$ large tend not to be contractible, and the fundamental groups are also known to be non-trivial in a large number of cases. Partial computations of the fundamental groups were done by Milnor and Kervaire. |
Fourier transform of exponent | The original post has $f$ defined such that
$$f(x)=(e^{-ab}-1)H(x)$$
where $H$ is the Heaviside function defined as
$$H(x) =
\begin{cases}
1, & \text{if $x\ge 0$} \\
0, & \text{if $x<0$}
\end{cases}
$$
The Fourier-Transform $\hat H(k)$ of the Heaviside function is
$$\hat H(k)\equiv \int_{-\infty}^{\infty}H(x)e^{ikx}dx=\pi \delta(k)+\text{PV}\left(\frac{i}{k}\right)$$
where PV denotes the Cauchy Principal Value and where $\delta$ is the Dirac delta and is a Generalized function or Distribution defined as
$$\int_{-\infty}^{\infty}\delta(x)f(x)dx=f(0)$$
for all test functions $f$.
Thus, we have for $a=1$ and $b=-1$ (as in the original post)
$$\hat f(k)=(e-1)\left(\pi \delta(k)+\text{PV}\left(\frac{i}{k}\right)\right)$$ |
Smallest number of $n$-simplices in a triangulation of the sphere | As you said, there must be at least one $n$-simplex in $X$, call it $\sigma$. This simplex has $n+1$ faces $f_0,\dots, f_n,$ with the face $f_i$ being opposite to the vertex $v_i$ of $\sigma$. Due to the local topology of an $n$-manifold, each $f_i$ is the intersection of $\sigma$ and another $n$-simplex $\sigma_i$ which does not contain $v_i$. Since every other $\sigma_j$ intersects $\sigma$ in $f_j$, it contains $v_i$ and is thus distinct from $\sigma_i$. This shows that there are $n+2$ $n$-simplices $\sigma, \sigma_0,\sigma_1,\dots,\sigma_n$. |
Integrating Around "Poles"? | For simplicity suppose that $f$ has a pole located at the origin and let us evaluate the integral $$\oint_{S^1(\epsilon)} f(z) dz$$ over a circle with radius $\epsilon$. This can be parametrized by $z = \epsilon e^{i\phi}$ and one has ${\rm d}z = i z {\rm d} \phi$. This is where the $i$ comes from. From the point of view of the vector analysis ${\rm d} z$ is a tangent (co)vector field (check for yourself that at the point $z = (x,y)$ the tangent field points into the direction given by $iz = (-y,x)$). In other words, $i$ is not that important: we can always pretend that we don't know what complex numbers are and work instead with pairs of real numbers and operations defined on them.
Factors such as $2 \pi$ that these integrals have in common come from the fact that we often integrate over some circles (more generally, spheres). This is natural because we want to enclose the singularity so that it doesn't spoil the calculation. By doing so, we obtain a surface integral over the sphere. When we are finished computing the integral, we can send the radius of the sphere to zero and the resulting integral will be proportional to the area of the sphere times the value of the function (or its gradient, or flow, etc.) over the sphere which will be constant (provided the function is well-behaved). |
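To make the $2\pi$ (and the $i$) concrete, here is a small numerical sketch. The integrand $f(z)=1/z$ is my choice of example (any simple pole behaves the same way near its singularity): the integral over the circle $|z|=\epsilon$ equals $2\pi i$ no matter how small $\epsilon$ is.

```python
import cmath

def contour_integral(f, radius, n=20000):
    # approximate the integral of f over |z| = radius,
    # parametrized as z = r e^{i phi}, so dz = i z dphi
    dphi = 2 * cmath.pi / n
    total = 0j
    for k in range(n):
        z = radius * cmath.exp(1j * k * dphi)
        total += f(z) * 1j * z * dphi
    return total

# a simple pole gives 2*pi*i for every radius
for r in (1.0, 0.01):
    assert abs(contour_integral(lambda z: 1 / z, r) - 2j * cmath.pi) < 1e-6
```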
Alternate proof of prime avoidance lemma. | Note first that for a ring with three distinct maximal ideals $\mathfrak{a},\mathfrak{b},\mathfrak{c}$, then $\mathfrak{a} \subset \mathfrak{b}+\mathfrak{c}$.
So we use indeed induction: assume that $\mathfrak{a} \subset \bigcup_{i=1}^n{\mathfrak{p}_i}$.
If any smaller reunion of the $\mathfrak{p}_i$ contains $\mathfrak{a}$, we are done by induction hypothesis. So in particular, we have some $x_i \in \mathfrak{p}_i \backslash \mathfrak{p}_n$. We also have some $a \in \mathfrak{a}$ not in any $\mathfrak{p}_k$ (induction hypothesis) with $k <n$, and some $b \in \mathfrak{a} \backslash \mathfrak{p}_n$.
If $a \notin \mathfrak{p}_n$, we are done. Otherwise, consider $a+bx_1 \ldots x_{n-1}$. |
Proving the integral of the cantor function | Let $f$ denote the Cantor function. Then $\int_0^1 fdx=\int_0^1\frac{f(x)+f(1-x)}{2}dx=\int_0^1\frac{1}{2}dx.$ |
Find the nullity of the transformation $T(f)=f-f'$ from $C^\infty$ to $C^\infty$ | We have $\ker T=\operatorname{span}(x\mapsto e^x)$ so $(x\mapsto e^x)$ is a basis for the kernel of $T$ and then the nullity of $T$ is $1$. |
Manipulation of sums (in order to understand proof regarding Gauss sums) | Here is a derivation with an intermediate step which might be helpful.
We obtain
\begin{align*}
\color{blue}{\sum_{a=1}^q}&\color{blue}{\sum_{b=1}^q \chi (a) \overline{\chi(b)}\mathbb{1}_{a=b}\cdot q}\\
&=q\sum_{a=1}^q\left(\chi (a) \overline{\chi(a)}\,\mathbb{1}_{a=a}+\sum_{{b=1}\atop{b\ne a}}^q \chi (a) \overline{\chi(b)}\,\mathbb{1}_{a=b}\right)\\
&=q\sum_{a=1}^q\chi (a) \overline{\chi(a)}\cdot 1
+q\sum_{a=1}^q\sum_{{b=1}\atop{b\ne a}}^q \chi (a) \overline{\chi(b)}\cdot 0\\
&=q\sum_{a=1}^q\left|\chi (a)\right|^2=q\sum_{a=1}^q \mathbb{1}_{\gcd\left( a,q \right) =1}\\
&\,\,\color{blue}{=q\phi(q)}
\end{align*} |
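As a concrete sanity check of the blue identity, take $q=5$ and the character determined by $\chi(2^k)=i^k$, using that $2$ is a primitive root mod $5$ (my choice of example, not one from the original question):

```python
import math

q = 5
# a Dirichlet character mod 5: chi(2^k mod 5) = i^k, and chi(a) = 0 if gcd(a, q) > 1
chi = {pow(2, k, q): 1j ** k for k in range(4)}
chi[0] = 0

# the left-hand side of the blue identity
lhs = sum(chi[a % q] * chi[b % q].conjugate() * q * (a == b)
          for a in range(1, q + 1) for b in range(1, q + 1))
phi_q = sum(1 for a in range(1, q + 1) if math.gcd(a, q) == 1)
assert lhs == q * phi_q  # = 5 * 4 = 20
```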
Using a Riemann sum to (effectively) prove a $p$-series | Hint. For $n\geq 1$, for $p>1$ consider the lower Riemann sum of the decreasing function $f(x)=1/x^p$ over the interval $[1,n]$ with respect to the uniform partition $1<2<\dots<n$. |
Conjecture based on limited trail followed by inductive proof | It means try the first few cases; in this case the 1st,2nd,3rd,... derivative, and see if you spot any pattern. After that, try to prove the pattern correct by using induction. |
Is $f(x)=0$ a polynomial function? | $f(x)=0$ is a polynomial function, with degree $-\infty$ (by convention).
In this way, $\deg(fg) = \deg(f) + \deg(g)$ and $\deg(f+g) \leq \max(\deg (f), \deg (g))$ are true for any polynomials $f$ and $g$. |
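A toy illustration of why the $-\infty$ convention is convenient (a sketch, with polynomials as coefficient lists, constant term first):

```python
NEG_INF = float("-inf")   # the degree of the zero polynomial, by convention

def deg(p):
    # p is a list of coefficients, constant term first
    nonzero = [i for i, c in enumerate(p) if c != 0]
    return nonzero[-1] if nonzero else NEG_INF

def mul(p, q):
    # polynomial multiplication by convolution of coefficients
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

f, zero = [1, 2], [0]                              # f = 1 + 2x
assert deg(mul(f, f)) == deg(f) + deg(f)           # 2 = 1 + 1
assert deg(mul(f, zero)) == deg(f) + deg(zero)     # -inf = 1 + (-inf)
assert deg([a + b for a, b in zip(f, [-1, -2])]) == NEG_INF  # f + (-f) = 0
```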
$\frac{1}{\sqrt1}+\frac{1}{\sqrt2}+\dots+\frac{1}{\sqrt{n}}<2\sqrt{n}$? | Base step: 1<2.
Inductive step: by the induction hypothesis, $$\sum_{j=1}^{n+1}\frac1{\sqrt{j}} < 2\sqrt{n}+\frac1{\sqrt{n+1}}$$
So if we prove
$$2\sqrt{n}+\frac1{\sqrt{n+1}}<2\sqrt{n+1}$$
we are done. Indeed, that holds true: it is equivalent to
$$2\sqrt{n}<2\sqrt{n+1}-\frac1{\sqrt{n+1}},$$
and since both sides are positive, squaring gives the equivalent inequality
$$4n<4(n+1)-4+\frac1{n+1}=4n+\frac1{n+1},$$
which is trivially true.
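A brute-force check of both the claim and the induction step (a sketch):

```python
import math

s = 0.0
for n in range(1, 10001):
    s += 1 / math.sqrt(n)
    assert s < 2 * math.sqrt(n)   # the claim itself
    # the key induction step: 2*sqrt(n) + 1/sqrt(n+1) < 2*sqrt(n+1)
    assert 2 * math.sqrt(n) + 1 / math.sqrt(n + 1) < 2 * math.sqrt(n + 1)
```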
Weakening or strengthening theorems in A Logical Approach to Discrete Math by Gries and Schneider | You can apply a theorem like (a) in two ways, either to an antecedent or to a consequent.
(1) Assume you know that the sequent $p\lor q\Rightarrow X$ is valid. From (a) $p\Rightarrow p\lor q$ and cut/transitivity it follows that also $p\Rightarrow X$ is valid. You have strengthened the antecedent from $p\lor q$ (the consequent of (a)) to $p$ (the antecedent of (a)). I guess this is what the authors mean by "transforming the consequent into the antecedent".
(On the other hand, the whole sequent $p\Rightarrow X$ is now weaker because it relies on a stronger premise!)
(2) Assume you know that the sequent $X\Rightarrow p$ is valid. From (a) $p\Rightarrow p\lor q$ and cut (transitivity) it follows that also $X\Rightarrow p\lor q$ is valid. You have weakened the antecedent from $p$ to $p\lor q$. In other words, you have transformed p (the antecedent of (a)) into $p\lor q$ (the consequent of (a)). |
Set that is not algebraic | Let $A=\{ (\cos(t),\sin(t),t) \in \mathbb{A}^3 : t \in \mathbb{R} \}$. Suppose that $f(x,y,z)\in I(A)$, i.e. $f$ vanishes on $A$. Because
$$f(\cos(\theta+2\pi k),\sin(\theta+2\pi k),\theta+2\pi k)=f(\cos(\theta),\sin(\theta),\theta+2\pi k)=0$$
for all $k\in\mathbb{Z}$, we have that for each $\theta\in[0,2\pi)$, the polynomial $f(\cos(\theta),\sin(\theta),z)\in\mathbb{R}[z]$ has infinitely many zeros. What does that imply? |
Drawing by replacement and without | I think it might just be a bit unclearly formulated. The unconditional mean, $E(X)$, is the same, but the conditional mean changes -- and that's what the lecturer must be referring to, since s/he's saying that you remove objects with no replacement.
As an example, the hypergeometric distribution has expectation $ \frac{nk}{N}$. If you start with 10 white balls in an urn containing 100 balls, the mean for one draw ($n=1$) is $\mu=\frac{10}{100}=\frac{1}{10}$. Let's say you draw one white ball and do not replace it. Then you have the expectation $\frac{9}{99}\neq \frac{1}{10}$. But this is not the unconditional expectation $E(X)$, as you are conditioning on removing one ball. |
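The exact bookkeeping for the urn in this example can be confirmed with fractions (a sketch):

```python
from fractions import Fraction

N, K = 100, 10          # 100 balls, 10 of them white
p_first = Fraction(K, N)
# unconditional P(second draw is white), by the law of total probability
p_second = p_first * Fraction(K - 1, N - 1) + (1 - p_first) * Fraction(K, N - 1)
assert p_second == Fraction(1, 10)                # unconditional mean unchanged
assert Fraction(K - 1, N - 1) == Fraction(9, 99)  # conditional on first draw white
```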
Simplify $\frac{7^{ \log_{5} 15 }+3^{2+\log_{5}7}}{7^{\log_{5}3}}$ | Hint:
$$\log_5{15}=1+\log_53$$
If $\log_53=x$, then $3=5^x$, so $5^{\log_53}=5^x=3$.
$$3^{\log_57}=(5^{\log_53})^{\log_57}=(5^{\log_57})^{\log_53}=7^{\log_53}$$ |
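Numerically the hint checks out, and carrying it through (the final evaluation is not spelled out in the hint) the whole expression collapses to $7+9=16$:

```python
import math

def log5(x):
    return math.log(x) / math.log(5)

assert abs(log5(15) - (1 + log5(3))) < 1e-12      # log_5 15 = 1 + log_5 3
assert abs(3 ** log5(7) - 7 ** log5(3)) < 1e-12   # the swap identity

expr = (7 ** log5(15) + 3 ** (2 + log5(7))) / 7 ** log5(3)
assert abs(expr - 16) < 1e-9                      # 7 + 9 = 16
```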
The equation $x^2=a$ in a finite group $G$ of odd order. | $$(a^n)^2=a{}{}{}{}{}{}{}{}{}{}{}{}$$ |
Evaluating the series $\sum\limits_{n=1}^\infty \frac1{4n^2+2n}$ | Rewrite the series as follows:
$$\sum_{n=1}^\infty\frac{1}{4n^2+2n}=\sum_{n=1}^\infty\frac{1}{2n(2n+1)}=\sum_{n=1}^\infty\left(\frac{1}{2n}-\frac{1}{2n+1}\right)=\sum_{m=2}^\infty\frac{(-1)^m}{m}.$$
You may now evaluate it using the Taylor series of logarithm.
Added: The third equality is justified as follows:
Write $a_N=\sum_{n=1}^N\left(\dfrac{1}{2n}-\dfrac{1}{2n+1}\right)$ and $b_M=\sum_{m=2}^M\dfrac{(-1)^m}{m}$. The sequence of partial sums $b_M$ is convergent by the Leibniz criterion. Furthermore, $a_N = b_{2N+1}$ holds for all $N\in\mathbb N$, i.e. $(a_N)_{N=1}^\infty$ is a subsequence of $(b_M)_{M=1}^\infty$. Therefore these sequences converge to the same limit, which justifies the equality. |
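The Taylor series $\ln 2=\sum_{m\ge1}(-1)^{m+1}/m$ gives the value $1-\ln 2$ for the series, which a direct numerical summation confirms (a sketch):

```python
import math

# partial sum of 1 / (4n^2 + 2n); the tail beyond N is O(1/N)
partial = sum(1 / (4 * n * n + 2 * n) for n in range(1, 200001))
assert abs(partial - (1 - math.log(2))) < 1e-5   # the series sums to 1 - ln 2
```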
prove that the following interesting problem | Perhaps I am missing something, but surely I can choose the subspace
$V = \{f+ \lambda 1_{[{1\over 2},1]} | f \in C[0,1], \lambda \in \mathbb{R} \}$
and define $\Lambda:V \to \mathbb{R}$ by $\Lambda(g) = \lim_{x \downarrow {1 \over 2}} g(x) - \lim_{x \uparrow {1 \over 2}} g(x)$. Since
$\Lambda(f+\lambda 1_{[{1\over 2},1]}) = \lambda$, we have $\ker \Lambda = C[0,1]$ and $\|\Lambda\|_V = 1$. Now extend $\Lambda$ to $L^\infty(m)$ using Hahn Banach.
It should be clear that the above $\Lambda$ cannot be written as $\Lambda(f) = \int fg\, dm$. If $\int f g\, dm = 0$ for all $f \in C[0,1]$, then we must have $g(x) = 0$ a.e. $[m]$, which would imply that $\Lambda = 0$.
Find: $\lim_{x\to0}\left(\lim_{n\to\infty}2^{2n}\left(1-\left(f ^{\circ n}(x)\right)\right)\right)$ | Make the substitution $x = \cos \theta$. Then $f(x) = \sqrt{\dfrac{1+\cos \theta}{2}} = \cos \dfrac{\theta}{2}$. Hence, $f^{(n)}(x) = \cos \dfrac{\theta}{2^n}$.
So, $\displaystyle\lim_{n \to \infty}2^{2n}(1-f^{(n)}(x)) = \lim_{n \to \infty}2^{2n}\left(1-\cos \dfrac{\theta}{2^n}\right) = \lim_{n \to \infty}2^{2n}\left(\dfrac{\theta^2}{2 \cdot 2^{2n}}+O\left(\dfrac{1}{2^{4n}}\right)\right) = \dfrac{\theta^2}{2}$ $= \dfrac{1}{2}(\arccos x)^2$.
Taking the limit as $x \to 0$ should be easy. |
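A numerical sketch of both limits (the iterate count $n=12$ and the sample point $x=0.3$ are arbitrary choices of mine):

```python
import math

def iterate_f(x, n):
    # f(x) = sqrt((1 + x) / 2) applied n times; equals cos(theta / 2^n) for x = cos(theta)
    for _ in range(n):
        x = math.sqrt((1 + x) / 2)
    return x

x, n = 0.3, 12
inner = 4 ** n * (1 - iterate_f(x, n))
assert abs(inner - math.acos(x) ** 2 / 2) < 1e-4   # inner limit: (arccos x)^2 / 2

outer = 4 ** n * (1 - iterate_f(1e-6, n))
assert abs(outer - math.pi ** 2 / 8) < 1e-3        # x -> 0 gives (pi/2)^2 / 2 = pi^2 / 8
```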
Sequential Murray-von Neumann equivalence of projections | It fails badly on $B(H)$: let $p=I$, $q=0$, and $x_n=\sum_{k=1}^\infty E_{n+k,k}$, where $\{E_{k,j}\}$ is a set of matrix units. Then
$$
x_n^*x_n=\sum_{k,j=1}^\infty E_{k,n+k}E_{n+j,j}=\sum_{k=1}^\infty E_{kk}=I,
$$
$$
x_nx_n^*=\sum_{k,j=1}^\infty E_{n+k,k}E_{j,n+j}=\sum_{k=1}^\infty E_{n+k,n+k}=\sum_{k=n+1}^\infty E_{kk}\xrightarrow[n\to\infty]{sot}0.
$$
This makes it fail in any infinite von Neumann algebra.
As you mention, it works on finite factors. I'm not sure about finite von Neumann algebras, because there equal trace is not enough for equivalence. |
Proving that the following inequality is true for all $n$. | The inequality is equivalent to
$$2\sqrt{n}-\frac n{\sqrt{n-1}}>\frac4A.$$
The left-hand side is a strictly increasing function of $n$ that tends to $+\infty$ as $n$ approaches infinity. Hence, the inequality holds for all $n\geq2$ as long as it holds for $n=2$. This means
$$2\sqrt{2}-2>\frac 4A$$
or, equivalently,
$$A>\frac2{\sqrt{2}-1}.$$ |
A term that is not bounded by 1. | If you differentiate $f_n$ for $n \geq 1$ you have that the function attains its maximum at $x=\frac{\sqrt{3}}{n^2}$ on $[0,1]$ |
Let $Q$ be a polynomial of degree 23 such that $Q(x)=-Q(-x)$ | $Q$ is an odd function, so it's integral over a interval symmetric about zero is zero. You can take it from there. |
Show that $a^3+b^5=7^{7^{7^7}}$ has no solutions with $a,b\in \mathbb Z.$ | Using Euler's theorem, you can compute $7^{7^{7^7}} \equiv 19 \mod 31$. However, $19$ is not expressible as the sum of a cube and a fifth power modulo $31$.
In other words, $a^3 + b^5 = 7^{7^{7^7}}$ has no solutions over $\mathbb{Z}/31\mathbb{Z}$, so it also has no solutions over $\mathbb{Z}$.
In general, with this type of question, it is useful to work modulo a prime $p$ such that the exponents involved divide $p-1$. The reason for this is that when $n \mid p-1$, $a^n$ cannot take many different values modulo $p$ (this is a consequence of Euler's theorem). Here $3$ and $5$ both divide $p-1=30$.
Edit: Euler's theorem can help you to compute $7^{7^{7^7}} \pmod {31}$ as follows. Since $(7,31)=1$ and $\phi(31)=30$, we have $7^{30} \equiv 1 \mod 31$. Therefore, $7^{7^{7^7}} \equiv 7^k \mod 31$, where $k$ is the remainder of $7^{7^7}$ upon division by $30$. In order to compute this, it suffices to compute $7^{7^7}$ modulo $2$, $3$ and $5$. For $7^{7^7} \pmod 5$ you can again use Euler's theorem to reduce this to $7^{\ell} \mod 5$, where $\ell$ is the remainder of $7^7$ upon division by $\phi(5)=4$. |
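The whole reduction fits in a few lines (a sketch; since $7^7$ is a small integer, Python's three-argument `pow` can carry out the modular exponentiations exactly):

```python
# phi(31) = 30, so reduce the next level of the tower mod 30
e = pow(7, 7 ** 7, 30)      # 7^(7^7) mod 30
r = pow(7, e, 31)           # 7^(7^(7^7)) mod 31
assert (e, r) == (13, 19)

# 19 is not a cube plus a fifth power mod 31
residues = {(pow(a, 3, 31) + pow(b, 5, 31)) % 31
            for a in range(31) for b in range(31)}
assert r not in residues
```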
How to turn a decimal into a number to divide something by into it. | I think what you are looking for is the reciprocal or multiplicative inverse of the number, typically denoted $x^{-1}$ or $\frac{1}{x}$. Using your example of $0.5$, you have $0.5=\frac{1}{2}$. How do you get $2$ from this? Well,
$(0.5)^{-1}=\left(\frac{1}{2}\right)^{-1}=\dfrac{1}{\frac{1}{2}}=2$. This will only produce an integer if the decimal you are using is the inverse of an integer. |
Finding z transform of this function. | I guess we are dealing with one-side transform here. Then (assuming $a>0$)
$$F(z)=\sum_{t=0}^{\infty} t^a e^{-bt} z^{-t}=\sum_{t=1}^{\infty} \frac{w^{t}}{t^{-a}}={\rm Li}_{-a}(w)$$ where $w=z^{-1}\,e^{-b}$ and
${\rm Li}_s(\cdot)$ is the polylogarithm function. This has no nice simple closed-form expression for arbitrary $a$. Is this homework? Are you sure you got it right?
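For integer $a$ the polylogarithm does reduce to elementary functions, e.g. ${\rm Li}_{-1}(w)=w/(1-w)^2$, which gives a quick consistency check of the formula (the values of $a$, $b$, $z$ below are arbitrary choices):

```python
import math

a, b, z = 1, 0.5, 2.0
w = math.exp(-b) / z
# truncated one-sided transform sum_{t >= 1} t^a e^{-bt} z^{-t}
F = sum(t ** a * math.exp(-b * t) * z ** -t for t in range(1, 200))
assert abs(F - w / (1 - w) ** 2) < 1e-12   # Li_{-1}(w) = w / (1 - w)^2
```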
Supplement to Rudin's Real and Complex Analysis | If you want a different viewpoint on measure theory and integration, and since you plan to study probability theory later anyway, one idea would be to study probability theory now in a book that develops the Lebesgue integral from scratch. For example, you could use Billingsley's Probability and Measure. It has good exercises, and probability theory provides lots of concrete examples.
There's a French book I haven't seen in a long while, but as I recall it was similar to Rudin though slower moving, less comprehensive, and with easier exercises. It's called Intégration, by André Gramain. |
Find the derivative at (1,2) | Because that's the part where the external $x^2$ is differentiated
$$(fg)'=f'g+fg'$$
Here $f=x^2$ and $g=\sqrt{5-x^2}$, thus $f'=2x$ and $g'=-x(5-x^2)^{-1/2}$; the part you are asking about is the $f'g$ term.
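A finite-difference check of the product-rule computation at the point in question, $x=1$ (a sketch):

```python
import math

def f(x):
    return x ** 2 * math.sqrt(5 - x ** 2)

def fprime(x):
    # product rule f'g + fg' with f = x^2, g = sqrt(5 - x^2)
    return 2 * x * math.sqrt(5 - x ** 2) - x ** 3 / math.sqrt(5 - x ** 2)

h = 1e-6
numeric = (f(1 + h) - f(1 - h)) / (2 * h)   # central difference at x = 1
assert abs(fprime(1) - numeric) < 1e-6
assert abs(fprime(1) - 3.5) < 1e-12          # 2*2 - 1/2 at the point (1, 2)
```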