title | upvoted_answer
---|---|
Find $P(Y_1≤3/4,Y_2≥1/2)$ of a joint probability density function. | No, you are right! You should instead choose
$$ P(1/2 \leq Y_2 \leq 1 \, , \, {\bf 0 \leq Y_1 \leq 1/2}). $$
This doesn't affect the answer since
$$ \int_{1/2}^1 \int^{1/2}_{y_1=0} 6(1-y_2) \, dy_1 dy_2 = \int_{1/2}^1 \int^{1}_{y_1=1/2} 6(1-y_2) \, dy_1 dy_2. $$
Do you see why this makes sense? Hint: Draw the area you want to integrate over. |
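A quick numeric sanity check of the equality (a Python sketch, assuming SciPy is available; the integrand $6(1-y_2)$ is the one from the answer above):

```python
from scipy.integrate import dblquad

# dblquad integrates func(inner, outer); here y2 in [1/2, 1] is the outer
# variable and y1 is the inner one.
left, _ = dblquad(lambda y1, y2: 6 * (1 - y2), 0.5, 1.0, 0.0, 0.5)   # y1 in [0, 1/2]
right, _ = dblquad(lambda y1, y2: 6 * (1 - y2), 0.5, 1.0, 0.5, 1.0)  # y1 in [1/2, 1]
print(left, right)  # both print 0.375, i.e. 3/8
```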
Link between Gram Matrix and volume of parallelepiped question - Determinant | Well, I think the cleanest way to proceed here is by induction. For $n=1$ we have:
$$det(G(v_1))=Vol_1(P(v_1))^2=<v_1,v_1>$$
Now let us suppose the formula valid for $n-1$ and deduce the formula for $n$. First of all we notice that both $det(G(v_1,...,v_n))$ and $Vol_n(P(v_1,...,v_n))$ do not depend on a specific choice of basis (that's because the first one is a determinant and the second one comes from the norm).
Now instead of calculating directly $det(G(v_1,...,v_n))$ we calculate the determinant after a change of basis
$$det(G(v_1,...,v_n))=det(S^{-1}G(v_1',...,v_n')S)$$
where $v_n'$ is orthogonal to all $v_i'$ with $i<n$.
In these coordinates the calculation is a little more convenient:
$$det(G(v_1,...,v_n))=<v_n',v_n'>det(S^{-1}G(v_1',...,v_{n-1}')S)$$
now using the inductive hypothesis
$$<v_n',v_n'>det(G(v_1',...,v_{n-1}'))=<v_n',v_n'>Vol_{n-1}(P(v_1',..,v_{n-1}'))^2$$
Finally, using the recursive definition of the volume and the fact that the volume is invariant under a change of basis, we obtain
$$Vol_{n}(P(v_1',..,v_{n}'))^2=Vol_{n}(P(v_1,..,v_{n}))^2$$ |
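A minimal numeric illustration of the identity $\det(G(v_1,\ldots,v_n)) = Vol_n(P(v_1,\ldots,v_n))^2$ (a sketch assuming NumPy, with $n$ random vectors in $\mathbb R^n$, where the volume is $|\det V|$):

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.normal(size=(4, 4))      # columns are v_1, ..., v_4
G = V.T @ V                      # Gram matrix of inner products <v_i, v_j>
vol = abs(np.linalg.det(V))      # volume of the parallelepiped they span
print(np.allclose(np.linalg.det(G), vol**2))  # True
```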
Maximizing minimum of intersection size | Let $B=\{1,2\}, A_1$ contains $1$ but not $2, A_2$ contains $2$ but not $1$, and all the other $A$s have both $1$ and $2$. None of the $A$s have $3$ or $4$. Changing $1$ to $3$ or $2$ to $4$ will not decrease the minimum, but changing both will. |
Decomposition of an idempotent matrix | You've proved the hardest part. Next, let's take any $x\in\mathop{\mathrm{Im}}A_1$. It also belongs to $\mathop{\mathrm{Im}}A$, so $x=Ax=A_1x+A_2x$. Therefore, $A_2x=x-A_1x\in\mathop{\mathrm{Im}}A_1$, but it also belongs to $\mathop{\mathrm{Im}}A_2$, and since $\mathop{\mathrm{Im}}A_1\cap\mathop{\mathrm{Im}}A_2=\{0\}$, we conclude that $A_2x=0$. This implies $A_2A_1=0$ and $A_1x=x$, so $A_1$ is idempotent. The same reasoning applies to $A_1A_2=0$ and idempotency of $A_2$. |
How do we know (from Weierstrass' theorem) that the second-order conditions are sufficient for a maximum? | If $x_*$ is a critical point of $f$ one has
$$f(x_*+X)=f(x_*)+{1\over2}H(X,X)+o(|X|^2)\qquad(X\to0)\ ,$$
by Taylor's theorem. If $H$ is positive definite then $H$ assumes a positive minimum $\mu>0$ on the unit sphere $S^{n-1}$. It follows that $H(X,X)\geq\mu|X|^2$ for all $X\in{\mathbb R}^n$. We therefore can write
$$f(x_*+X)\geq f(x_*)+\left({\mu\over2}+o(1)\right)|X|^2\qquad(X\to0)\ .$$
This shows that $$f(x_*+X)-f(x_*)>0\qquad(0<|X|<\delta)$$
for some $\delta>0$, hence $f$ has a strict local minimum at $x_*$. |
Is the radical of a free module superfluous? | Let $k$ be a field, and $R=k[[t]]$ the ring of power series over $k$ in one variable. Then the Jacobson radical $J$ consists of the power series with zero constant term.
Take $I=\mathbb{N}$, and $K$ the submodule of $R^{(\mathbb{N})}$ generated by the elements
$$(1,t,0,0,\dots), (0,1,t,0,\dots),(0,0,1,t,\dots),\dots.$$
Clearly $K+J^{(\mathbb{N})}=R^{(\mathbb{N})}$, but $K\neq R^{(\mathbb{N})}$ since the last non-zero coordinate of any element of $K$ has zero constant term. So $J^{(\mathbb{N})}$ is not superfluous. |
Does the Euler characteristic of a manifold depend upon the field of coefficients? | Every closed manifold is homotopy equivalent to a finite CW-complex. For a proof see Milnor's book Morse Theory in the section titled "Homotopy Type" (pg. 12 in the ancient edition I have).
(The statement appears as a remark at the end of Part 1, Section 3.) |
General Cauchy theorem application | Expanding the comment:
$$\frac{1}{(z-a)(z-b)} = \frac{1}{a-b}\left(\frac{1}{z-a}-\frac{1}{z-b}\right),$$
hence
$$\int_\Gamma \frac{1}{(z-a)(z-b)}\,dz = \frac{1}{a-b}\left(\int_\Gamma\frac{dz}{z-a} - \int_\Gamma \frac{dz}{z-b}\right) = \frac{2\pi i}{a-b} \left(n(\Gamma,a) - n(\Gamma,b)\right).$$
Since $a$ and $b$ belong to the same component of the complement of $U$, we have $n(\Gamma,a) = n(\Gamma,b)$ for every cycle $\Gamma$ in $U$. |
Show pointwise convergence for Fourier series | $|e^{-|x|}-e^{-|y|}| \leq e^{\pi}\,\big||x|-|y|\big|$ by the MVT applied to the exponential function. This gives $|e^{-|x|}-e^{-|y|}| \leq e^{\pi}\,|x-y|$, so $f$ is Lipschitz, in particular of bounded variation. For any continuous function of bounded variation the Fourier series converges uniformly. |
Acceleration $\mathbf a$ in function of the velocity $\mathbf v$ | (A particular case of) the multivariable chain rule says:
$$\frac{d}{dt}f\big(x_1(t),\ldots,x_n(t)\big)=\sum_{i=1}^n\frac{\partial f}{\partial x_i}\frac{dx_i}{dt}.$$
In your specific case:
$$\frac{d}{dt}\mathbf{v}\big(x(t),y(t),z(t)\big)=\frac{\partial\mathbf{v}}{\partial x}\frac{dx}{dt}+\frac{\partial\mathbf{v}}{\partial y}\frac{dy}{dt}+\frac{\partial\mathbf{v}}{\partial z}\frac{dz}{dt}$$
Now the result follows from noting that $(\mathbf{v}\cdot\nabla)\mathbf{v}$ is equal to:
$$(\mathbf{v}\cdot\nabla)\mathbf{v}=\left(\frac{dx}{dt}\frac{\partial}{\partial x}+\frac{dy}{dt}\frac{\partial}{\partial y}+\frac{dz}{dt}\frac{\partial}{\partial z}\right)\mathbf{v}.$$ |
Prove for all $n\in \mathbb{N}$ that $\sum_{i=0}^{n} i\cdot F_{2i} = (n+1)F_{2n + 1} - F_{2n + 2}$. | \begin{align}
\sum_{i=0}^{k+1}iF_{2i}&=(k+1)F_{2k+2}+(k+1)F_{2k+1}-F_{2k+2}\\
&=kF_{2k+2}+(k+1)(F_{2k+3}-F_{2k+2})\\
&=(k+1)F_{2k+3}-F_{2k+2}\\
&=(k+1)F_{2k+3}-(F_{2k+4}-F_{2k+3})\\
&=(k+2)F_{2k+3}-F_{2k+4}.
\end{align} |
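The identity is easy to spot-check numerically before running the induction (a Python sketch, assuming the convention $F_0=0$, $F_1=1$):

```python
def fib(n, memo={0: 0, 1: 1}):
    if n not in memo:
        memo[n] = fib(n - 1) + fib(n - 2)
    return memo[n]

# sum_{i=0}^n i*F_{2i} = (n+1)*F_{2n+1} - F_{2n+2}
for n in range(10):
    lhs = sum(i * fib(2 * i) for i in range(n + 1))
    rhs = (n + 1) * fib(2 * n + 1) - fib(2 * n + 2)
    assert lhs == rhs
print("identity holds for n = 0..9")
```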
Is the semigroup such that $\forall x\ (x\in S\to\exists y\ (y\in S\wedge\forall z\ (z\in S\to zxy=z)))$ a group? | Take the semigroup $S = \{a,b\}$, where $aa=ab=a$ and $ba=bb=b$.
Then, for each $x \in S$, take $y = x$. Then, for all $z \in S$, $zxy=zxx=z$. However, $S$ is not a group: an identity $e$ would need $ea=a$, forcing $e=a$, but then $eb=ab=a\neq b$. |
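The counterexample can be verified by brute force (a Python sketch; the table $aa=ab=a$, $ba=bb=b$ is just $x\cdot y = x$):

```python
from itertools import product

S = {"a", "b"}

def mul(x, y):
    return x  # encodes the table aa=ab=a, ba=bb=b

# associativity, so S really is a semigroup
assert all(mul(mul(x, y), z) == mul(x, mul(y, z)) for x, y, z in product(S, S, S))
# the property: for every x there is y with zxy = z for all z
assert all(any(all(mul(mul(z, x), y) == z for z in S) for y in S) for x in S)
# no two-sided identity element, so S is not a group
assert not any(all(mul(e, x) == x == mul(x, e) for x in S) for e in S)
print("property holds, yet S is not a group")
```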
Is mean pairwise distance a metric over subsets of a metric space. | So, if I get you right, for $X$ a metric space with metric $d(x,y)$, define $D(U,V)$ for non-empty finite subsets $U,V\subset X$ by
$$
D(U,V)=\frac{1}{|U|\cdot|V|}\sum_{u\in U,\,v\in V}d(u,v).
$$
A metric would require $D(U,U)=0$, but if $|U|>1$ we get $D(U,U)>0$, so this is not a metric.
However, you specifically ask if the triangle inequality $D(U,V)+D(V,W)\ge D(U,W)$ is true. It is, for the simple reason that
$$
D(U,V)+D(V,W)=\tfrac{1}{|U|\cdot|V|\cdot|W|}\sum_{u,v,w}\big(d(u,v)+d(v,w)\big)
\ge\tfrac{1}{|U|\cdot|V|\cdot|W|}\sum_{u,v,w}d(u,w)=D(U,W)
$$
where $u\in U$, $v\in V$, and $w\in W$.
I don't see any quick fix of this to get a metric with $D(U,U)=0$. If I try defining a new distance function $d_D(U,V)=2D(U,V)-D(U,U)-D(V,V)$, you may get $d_D(U,V)<0$ which is even worse. |
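A small numeric illustration of both points (a Python sketch, with points on the real line and $d(x,y)=|x-y|$):

```python
from itertools import product

def D(U, V):
    """Mean pairwise distance between finite subsets of R."""
    return sum(abs(u - v) for u, v in product(U, V)) / (len(U) * len(V))

U, V, W = [0.0, 1.0], [2.0, 5.0], [3.0]
print(D(U, U))                          # 0.5 > 0: the axiom D(U,U) = 0 fails
print(D(U, V) + D(V, W) >= D(U, W))     # True: the triangle inequality holds
```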
Why does the mandelbrot set seem to end at a copy of itself? | Pretty sure it's an aesthetic choice, as Mark says in his comment. It arguably makes for a more satisfying ending to finish inside a mini copy (or just before). Alternatives are to end outside the set (which would fill the screen with a flat colour eventually), or on the boundary (perhaps spiralling forever into a Misiurewicz point).
With perturbation and series approximation techniques being used for deep zooms, it can be computationally cheaper to reuse the primary reference computations between keyframes, and a central mini copy makes for a good primary reference. But that has to be weighed against the iteration counts increasing asymptotically more quickly when approaching a mini copy than when approaching a Misiurewicz point, for example. The computation time for the last few keyframes of a video ending at a mini copy can dominate the time taken for the rest of the video. |
Summing Matrix Series | Hint: Use the fact that $A^2 = -\epsilon^2 I$. (Then $A^3 = -\epsilon^2 A$, $A^4 = \epsilon^4 I$, etc.) |
Cross product commutativity | You're missing one part in your cross product formula: the cross product is actually $$\mathbf a\times \mathbf b = |\mathbf a||\mathbf b|\sin(\theta)\hat {\mathbf n}$$
And that's where your confusion is coming from because it's exactly that $\hat {\mathbf n}$ that is changing signs whenever you take the cross product in the alternate order.
$\hat {\mathbf n}$ is the right-handed unit normal to the plane containing the vectors $\mathbf a$ and $\mathbf b$. The term "right-handed" here means that if you were to place the side of your right hand (the part you'd karate chop someone with) parallel to the first vector in the cross product, $\mathbf a$, and curl your fingers toward the second vector in the cross product, $\mathbf b$, then your thumb would point in the direction of the right-handed normal.
Using your right hand, confirm for yourself that the directions of $\mathbf a\times \mathbf b$ and $\mathbf b \times \mathbf a$ are opposite. Thus the negative sign in the identity $$\mathbf a \times \mathbf b = -\mathbf b \times \mathbf a$$ |
Choosing teams with minimum number of boys and girls. | Just calculate the $3$ cases instead:
$(b,g)=(4,2),\ (3,3),\ (2,4).$
In any case, your approach is also correct. |
Find the value of theta so that: $\sin(\theta + 30^\circ ) = \cos 50^\circ$ | Hint:
$$ \sin \theta = \cos (90^\circ-\theta)$$
$$\cos50^\circ = \sin40^\circ$$
Can you solve for $\theta$ using the above? |
some statements based on continuity | Regarding part 2, see Krish's comment.
Regarding part 3:
Let $a:=f(x_0)$, $b:=g(x_0)$ and consider the continuous function $x\mapsto h(x)=f(x)-g(x)$.
We have $$f(a)=f(f(x_0))=g(g(x_0))=g(b)$$
and $$g(a)=g(f(x_0))=f(g(x_0))=f(b), $$
hence $$ h(a)=-h(b).$$
If $h(a)=0$, we can let $x_1=a$ (or $b$). And if $h(a)\ne 0$, the IVT gives us $x_1$ between $a$ and $b$ with $h(x_1)=0$. |
Greatest distance one point can have from a vertex of a square given following conditions | We may suppose that
$$A(1,0),B(1,1),C(0,1),D(0,0),P(x,y).$$
Then, we have
$$u^2=(x-1)^2+y^2$$
$$v^2=(x-1)^2+(y-1)^2$$
$$w^2=x^2+(y-1)^2$$
So, we have
$$u^2+v^2=w^2\iff 2(x-1)^2+y^2+(y-1)^2=x^2+(y-1)^2$$$$\iff x^2-4x+2+y^2=0\iff (x-2)^2+y^2=2$$
Hence, we want to find the greatest distance from the origin to a point on a circle $(x-2)^2+y^2=2$.
Thus, the answer is $\color{red}{2+\sqrt 2}$ when $P(2+\sqrt 2,0)$. |
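A quick numeric confirmation (a Python sketch, with $u,v,w$ the distances from $P$ to $A,B,C$ as above):

```python
import math

x, y = 2 + math.sqrt(2), 0.0            # the claimed maximizer P
u2 = (x - 1)**2 + y**2                  # squared distance to A(1,0)
v2 = (x - 1)**2 + (y - 1)**2            # squared distance to B(1,1)
w2 = x**2 + (y - 1)**2                  # squared distance to C(0,1)
print(math.isclose(u2 + v2, w2))        # True: P satisfies u^2 + v^2 = w^2
print(math.hypot(x, y), 2 + math.sqrt(2))  # distance to D(0,0) equals 2 + sqrt(2)
```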
If $X\sim\mathrm{exp}(1)$ and $Y\sim\mathrm{exp}(1),$ is $X=Y$? | That two random variables have the same distribution doesn't guarantee they're equal. They could even be independent. All the matching distributions mean is that the events $X\le x$ and $Y\le x$ have equal probabilities for all $x$, not that they have equal truth values. |
Which of the following is NOT true? please see the options listed below. | For example, thinking of (d), check the following map
$$\Bbb N\to 2\Bbb N\;\;,\;\;n\mapsto 2n$$ |
Questions about product measure | Let $(\beta_i)_{i \in I} \in [0, +\infty]^{I}$ be an arbitrary family of non-negative extended real numbers. We set $\ln(0)=-\infty$.
Definition 1. The standard product of the family of numbers $(\beta_i)_{i \in I}$, denoted by ${\bf (S)}\prod_{i \in I}\beta_i$, is defined as follows: ${\bf (S)}\prod_{i \in I}\beta_i=0$ if $\sum_{i \in I^{-}}\ln(\beta_i)=-\infty$, where $I^{-}=\{i:\ln(\beta_i)<0\}$, and ${\bf (S)}\prod_{i \in I}\beta_i=e^{\sum_{i \in I}\ln(\beta_i)}$ if $\sum_{i \in I^{-}}\ln(\beta_i) \neq -\infty$.
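For a finite index set, Definition 1 can be written out directly; here is a Python sketch of the $({\bf S})$-product for finitely many factors (an infinite family would require taking the corresponding sums as limits):

```python
import math

def standard_product(betas):
    """(S)-product of finitely many beta_i in [0, +inf], per Definition 1."""
    neg = sum(math.log(b) if b > 0 else -math.inf for b in betas if b < 1)
    if neg == -math.inf:
        return 0.0
    return math.exp(sum(math.log(b) for b in betas))

print(standard_product([2.0, 0.5, 3.0]))  # 3.0
print(standard_product([2.0, 0.0, 3.0]))  # 0.0, since ln(0) = -inf
```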
Now we will try to give answers to Stefan's questions when the set of indices $I$ is countable.
Question 1 (Stefan). Is the product measure only defined for probability measures?
Let $(E_i,\mathbb{B}_i,u_i)_{i \in I}$ be a family of totally finite, continuous measures.
Theoretically, the following three cases are possible:
Case 1. ${\bf (S)}\prod_{i \in I}u_i(E_i)=0.$
In that case we define $\prod_{i \in I}u_i$ as the zero measure, i.e.
$$
(\forall X)(X \in \prod_{i \in I}\mathbb{B}(E_i) \rightarrow
(\prod_{i \in I}u_i)(X)=0).
$$
Case 2. $0< {\bf (S)}\prod_{i \in I}u_i(E_i)< +\infty.$
In that case we define $\prod_{i \in I}u_i$ as follows:
$$
(\forall X)(X \in \prod_{i \in I}\mathbb{B}(E_i) \rightarrow
(\prod_{i \in I}u_i)(X)= ({\bf (S)}\prod_{i \in I}u_i(E_i))\times
(\prod_{i \in I}\frac{u_i}{u_i(E_i)})(X)).
$$
Case 3. ${\bf (S)}\prod_{i \in I}u_i(E_i)= +\infty.$
In that case we define $\prod_{i \in I}u_i$ as the standard product of the measures $(u_i)_{i \in I}$, the construction of which is given below:
Without loss of generality, we can assume that $u_i(E_i)\ge 1$ when $i \in I$.
Let $L$ be the set of rectangles $R:=\prod_{i \in I}R_i$, where $R_i \in \mathbb{B}(E_i)$ $(i \in I)$ and $0\le{\bf (S)}\prod_{i \in I}u_i(R_i)<+\infty$.
Note that a rectangle $R$ with $0<{\bf (S)}\prod_{i \in I}u_i(R_i)<+\infty$ exists because $u_i$ is continuous and $u_i(E_i)\ge 1$.
Let $\mu_R$ be a measure defined on $\prod_{i \in I}\mathbb{B}(R_i)$ as follows:
$$
(\forall X)(X \in \prod_{i \in I}\mathbb{B}(R_i) \rightarrow
\mu_R(X)= ({\bf (S)}\prod_{i \in I}u_i(R_i))\times (\prod_{i \in
I}\frac{u_i}{u_i(R_i)})(X)).
$$
For each $R \in L$ we have a measure space $(R,S_R(:=\prod_{i \in I}\mathbb{B}(R_i)),\mu_R)$. That family is consistent in the following sense: if $R=R_1 \cap R_2$ then
$$
(\forall X)(X \in S_R \rightarrow
\mu_R(X)=\mu_{R_1}(X)=\mu_{R_2}(X)).
$$
If a measurable subset $X$ of $\prod_{i \in I}E_i$ is covered by a family $\{R_k : R_k \in L ~\&~k=1,2, \cdots\}$, then we set
$$
\Lambda(X)=\mu_{R_1}(R_1 \cap X)+\mu_{R_2}((R_2\setminus R_1)\cap
X)+\cdots+\mu_{R_n}([R_n\setminus \cup_{1 \le i \le n-1}R_i]\cap
X)+ \cdots.
$$
If a measurable subset $X$ of $\prod_{i \in I}E_i$ is not covered by a countable family of elements of $L$, then we set $\Lambda(X)=+\infty$.
Note that $\Lambda$ is a measure on $\prod_{i \in I}\mathbb{B}(E_i)$ and $\Lambda(R)={\bf (S)}\prod_{i \in I}u_i(R_i)$ for each $R \in L$.
This measure is called the standard product of the measures $(u_i)_{i \in I}$ and is denoted by ${\bf (S)}\prod_{i \in I}u_i$.
As we see, the product can be defined for totally finite continuous measures. In fact we do not need total finiteness (the measures may be infinite, i.e. $u_i(E_i)=+\infty$), nor do we require sigma-finiteness.
I think that this gives a partial solution of the problem when $card(I)=\aleph_0$ and the measure ${\bf (S)}\prod_{i \in I}u_i$ is well defined on $\prod_{i \in I}E_i.$
P.S. I agree with Stefan Walter's remark that product measures may not be defined uniquely.
Indeed, let $(n_k)_{k \in N}$ be a strictly increasing sequence of natural numbers such that $n_0=0$ and $n_{k+1}-n_k \ge 2$. We set $\mu_k=\prod_{i \in [n_k,n_{k+1}]}u_i$.
Let us consider ${\bf (S)}\prod_{k \in N}\mu_k$. Then that measure is defined on $\prod_{i \in I}\mathbb{B}(E_i)$ and $({\bf (S)}\prod_{k \in N}\mu_k)(R)={\bf (S)}\prod_{i \in I}u_i(R_i)$ for all $R \in L^{+}$, where
$$
L^{+}=\{ R:R \in L~\&~0<{\bf (S)}\prod_{i \in I}u_i(R_i)<+\infty\}
$$
Note that the measure ${\bf (S)}\prod_{k \in N}\mu_k$ is called the $(n_{k+1}-n_k)_{k \in N}$-standard product of the measures $(u_i)_{i \in I}$.
It is natural that both measures $({\bf (S)}\prod_{i \in I}u_i)$ and $({\bf (S)}\prod_{k \in N}\mu_k)$ can be considered as products of the measures $(u_i)_{i \in I}$, but they are (in general) different.
Indeed, let $u_i=l_1$ for $i \in I$, where $l_1$ denotes the linear Lebesgue measure on the real axis. Let $n_{k+1}-n_k=2$ for $k \in N$. Consider the set $D$ defined by
$$
D=[0,2]\times [0,\frac{1}{2}]\times [0,3]\times [0,\frac{1}{3}]\times \cdots.
$$
Then
$({\bf (S)}\prod_{i \in I}u_i)(D)=0$ and $({\bf (S)}\prod_{k \in N}\mu_k)(D)=1.$ |
For which $z \in \Bbb{C}$ does this series converge? | For $|z|<1$, the summand is $<2|z|^n$ in absolute value for $n$ large enough (because the denominator $\to 1$), so there the series converges.
The summands (hence also the convergence and limiting function) are invariant under the substitution $z\leftarrow \frac1z$:
$$\frac{(1/z)^n}{1+(1/z)^{2n}} =\frac{z^{2n}\cdot(1/z)^n}{z^{2n}\cdot(1+(1/z)^{2n})}=\frac{z^n}{z^{2n}+1}$$
Therefore the series converges also for $|z|>1$.
For $|z|=1$, the summands are all $\ge\frac12$ in absolute value (or even undefined), so the series does not converge. |
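Numerically, the $z \leftrightarrow 1/z$ invariance is easy to see (a Python sketch with complex partial sums; the starting index $n=1$ is an assumption, and the termwise identity above does not depend on it):

```python
def partial_sum(z, N=200):
    return sum(z**n / (1 + z**(2 * n)) for n in range(1, N))

z = 0.5 + 0.3j                 # |z| < 1
print(partial_sum(z))          # converges ...
print(partial_sum(1 / z))      # ... to the same value as for 1/z
```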
The Taylor coefficients of a function of the form $\exp\circ f$, where $f$ is a power series | The functions $f$ and $g$, viewed as functions over the complex numbers, are analytic on a disc with center $0$ and radius $b$. Therefore $F$ and $G$ are also analytic on the same disc. This implies that their Taylor expansions converge on that disc. In particular they converge on the interval $(-b,b)$.
Yes, use the triangle inequality and the chain rule (Faà di Bruno's formula). |
Dimension of a basis of a subset | Your reasoning is correct. To fill in the technical details: You could see this as the kernel of the mapping $\mathbb{R}^3\rightarrow\mathbb{R}$ represented by the matrix $(a,b,c)$. If one of $a,b,c$ is nonzero, then this matrix has rank one and by the dimension formula the kernel has dimension $3-1=2$.
(Otherwise it has of course dimension three by the same formula or because your equation reduces to $0=0$) |
Is it possible to calculate $e^x$ given $2^x$? | $e^x = e^{\frac{x \ln(2)}{\ln(2)}} = e^{\frac{\ln(2^x)}{\ln(2)}} = \left(2^x\right)^{1/\ln(2)}$ |
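In code (a Python sketch): given only the value $t = 2^x$, one recovers $e^x = t^{1/\ln 2}$.

```python
import math

x = 3.7
t = 2 ** x                        # the only given quantity
print(t ** (1 / math.log(2)))     # 40.447...
print(math.exp(x))                # same value, for comparison
```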
$\int_{[0,1)} \left|x^n - g(x)\right|^2 \to 0$ implies $g = 0$ if $g$ continuous | Hint:
You have by Cauchy-Schwarz:
$$\int_0^1|x^n-g(x)|dx\leq \sqrt{\int_0^1|x^n-g(x)|^2 dx}$$
Now
$$|g(x)|\leq x^n+|x^n-g(x)|$$ |
Are there any smooth/analytic solutions to the functional equation $f(x+1)-f(x)=f\left(\frac 1x\right)$? | My approach is by construction.
Firstly, I will solve the functional equation on $\mathbb R^+$, without considering continuity/differentiability, which I will care about later.
Denote by $\phi$ the golden ratio conjugate $\frac{\sqrt 5-1}{2}\approx 0.618$, which satisfies $1+\phi=\frac1\phi$.
Partition $\mathbb R^+$ into
$[0,\phi]$
$[\phi,1]$
$[1,1+\phi]$
$[1+\phi,\infty)$
(Call the $n$th partition P$n$)
(Why $\phi$? : My initial idea is to define arbitrary functions on $[0,a]$ and $[1,1+a]$, and use the functional equation to extend the function. It turns out that this allows extension to $[\frac1a,\infty)$. To prevent overlapping of 'arbitrary region' and 'extension region', the critical $a$ satisfies $1+a=\frac 1a\implies a=\phi$.)
In my studies, I found that we can define two arbitrary functions $f_1$ and $f_3$ on P1 and P3 respectively. Then, on P4, we have
$$f(x)=f_4(x):=f_3(1+1/x)-f_1(1/x)\qquad x\in [1+\phi,\infty)$$
Furthermore, for $x$ in P2,
$$f(x)=f_2(x):=f_4(x+1)-f_3(1/x)=f_3\left(1+\frac1{x+1}\right)-f_1\left(\frac1{x+1}\right)-f_3\left(\frac1x\right)$$
Secondly, I will solve the functional equation on $\mathbb R^-$.
This case is not analogous to the one above. Partition $\mathbb R^-$ into $[-n-1,-n]$ for $n=0,1,2,\cdots$, and set $f(x)=f_{-n}(x)$ on $[-n-1,-n]$.
Since in the functional equation, the arguments are $x$, $x+1$, and $\frac1x$, it is impossible that only one argument is negative. Therefore, $f$ on $\mathbb R^-$ is not completely determined by $f$ on $\mathbb R^+$, and we do have some degrees of freedom on the $\mathbb R^-$.
It turns out that we can define an arbitrary $f_0$.
Then,
$$f_{-1}(x)=f_0(x+1)-f_0(1/x)$$
$$f_{-2}(x)=f_{-1}(x+1)-f_0(1/x)=f_0(x+2)-f_0(1/(x+1))-f_0(1/x)$$
$$\cdots$$
$$f_{-n}(x)=f_0(x+n)-\sum^{n-1}_{k=0}f_0\left(\frac1{x+k}\right)$$
Now let us find the conditions for continuity.
In general, we require
$f_{-(n-1)}(-n)=f_{-n}(-n)$ for $n=1,2,3,\cdots$.
On $\mathbb R^+$ neighbouring functions have to agree on the boundary.
After a lot of tedious algebra, it turns out that it is required that
$f_1(\phi)=0$
$f_3(3/2)=2f_3(1)+f_1(1/2)$
$2f_0(-1)=f_0(0)=f_1(0)$
Similarly, for differentiability,
$\phi \cdot f_1'(\phi)=\sqrt5 \cdot f_3'(\phi+1)$
$f_3'(3/2)=f_1'(1/2)$
$f_0'(0)=f_1'(0)=0$
To sum up:
If two differentiable functions $\mu:[-1,\phi]\to\mathbb R$ and $\nu:[1,1+\phi]\to\mathbb R$ satisfy the following conditions:
$\mu(\phi)=0$
$\nu(3/2)=2\nu(1)+\mu(1/2)$
$2\mu(-1)=\mu(0)$
$\phi \cdot \mu'(\phi)=\sqrt5 \cdot \nu'(\phi+1)$
$\nu'(3/2)=\mu'(1/2)$
$\mu'(0)=0$
then,
$$f(x) = \begin{cases}
\nu(1+1/x)-\mu(1/x) & x>1+\phi
\\ \nu(x) & 1+\phi > x > 1
\\ \nu\left(1+\frac1{x+1}\right)-\mu\left(\frac1{x+1}\right)-\nu\left(\frac1x\right) & 1 > x > \phi
\\ \mu(x) & \phi > x > -1
\\ \mu(x+n)-\sum^{n-1}_{k=0}\mu\left(\frac1{x+k}\right) & -n>x>-n-1 \quad (n=1,2,3,\cdots)
\end{cases}$$ |
Estimating probability based on past stats | So you have a time series $x_t$, for $t=1,\dots,20$, with each $x_t \in \mathbb N$.
You want to predict $\hat x_{20}(1)$. This is quite a big topic; you start with a time plot to see if there is any immediate trend or seasonality. Then you can try to fit an ARIMA process to your observations (based on the autocorrelation function, partial autocorrelation function and so on). You could also try different models and choose between them with some kind of criterion (AIC, BIC come to mind).
Then you can predict the next value by using standard methods for predicting an ARIMA process.
I'm afraid I can't really give more help, this is a big topic: you can start with Chatfield's book.
Or, if you don't really care what's going on, you can always use some R package to do the estimation; I think ets is relevant, though I don't remember exactly. |
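For what it's worth, a minimal Python sketch of the same workflow with statsmodels (the data and the order $(1,0,0)$ are placeholders; in practice you would choose the order as described above):

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# hypothetical 20 observations
x = np.array([3, 5, 4, 6, 7, 5, 6, 8, 7, 9, 8, 10, 9, 11, 10, 12, 11, 13, 12, 14])
res = ARIMA(x, order=(1, 0, 0)).fit()  # placeholder order; select via ACF/PACF or AIC/BIC
print(res.forecast(steps=1))           # point prediction for x_21
```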
Why is the delta function the continuous generalization of the kronecker delta and not the identity function? | The identity operator is the same in both cases.
For the discrete case
$$
\sum_{j=1}^n \delta_{ij} x_j = x_i
$$
and in terms of operators, $I(\mathbf{x}) = \mathbf x$.
For the continuous case
$$
\int_{-\infty}^\infty \delta(x - y) f(x)\, dx = f(y)
$$
and in terms of operators $I(f) = f$. |
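Both identities can be seen numerically (a Python sketch; the Dirac delta is approximated by a narrow Gaussian):

```python
import numpy as np

# Discrete: the Kronecker delta (identity matrix) reproduces x.
x = np.array([2.0, -1.0, 3.0])
print(np.eye(3) @ x)                      # [ 2. -1.  3.]

# Continuous: a narrow Gaussian as an approximate delta reproduces f(y).
f = np.cos
y, eps = 0.4, 1e-3
t = np.linspace(y - 0.1, y + 0.1, 200001)
delta = np.exp(-(t - y)**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))
print((delta * f(t)).sum() * (t[1] - t[0]), f(y))  # both approx 0.921
```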
Is $\frac00=\infty$? And what is $\frac10$? Are they the same? Does it hold true for any constant $a$ in $\frac{a}0$ | The limit $\displaystyle \lim_{x \rightarrow 0 } \frac{x}{x}=1$, as you said. It is not infinity. Why? Because the fraction simplifies to $1$. So no matter where $x$ tends, the limit will always be $1$.
As for undefined it means that something is not defined. For example the function $f(x)=\frac{1}{x}$ is not defined at $x_0=0$, that is it is undefined at $x_0=0$.
Indeterminate, on the other hand, means something else. Consider the limit:
$$\lim_{n \rightarrow +\infty} \left(1-\frac{1}{n} \right)^n$$
You can see immediately that it is of the form $1^{\infty}$, which is a very well known indeterminate form. Some could speculate here that this limit would be $1$. No, it is not. It equals $e^{-1}$.
Now, let's take a look at another limit. For example:
$$\lim_{x\rightarrow 1^{+}} \frac{\sqrt{1-x^2}}{1-x^2}$$
Again we see that as $x \rightarrow 1^+$ both the numerator and the denominator are zero. This is an indeterminate form $\frac{0}{0}$. However you can substitute $1-x^2 =u$ and change variables. Then $u \rightarrow 0$ and the limit is expressed as:
$$\lim_{u \rightarrow 0^+} \frac{\sqrt{u}}{u}$$
which clearly is $+\infty$.
So, an indeterminate form means that you cannot evaluate the limit right away, that is, plug in the value and calculate. Undefined means that something is not defined. So, these terms are not the same. |
Rigorous proof of a linear algebra theorem | By the way, once you establish the equation
\begin{equation}
\dim V = \dim S + \dim S^{\perp}
\end{equation}
or equivalently,
\begin{align}
\dim S^{\perp} &= \dim V - \dim S \\
&= n - r
\end{align}
there isn't much more to explain; the proof is complete at this point.
To establish this equality, what I'd do is show that we have a direct sum decomposition $V = S \oplus S^{\perp}$, which means $S \cap S^{\perp} = \{ 0\}$ and for every $v \in V$, there exist $x \in S$ and $y \in S^{\perp}$ such that $v = x+y$. This is useful, because whenever there is such a direct sum decomposition, if $\beta_S$ is a basis for $S$ and $\beta_{S^{\perp}}$ is a basis for $S^{\perp}$, then their union $\beta_S \cup \beta_{S^{\perp}}$ will be a basis for $V$ (if this isn't obvious to you, you should attempt a proof). In particular, $\beta_S$ and $\beta_{S^{\perp}}$ are disjoint, thereby giving the equality
\begin{equation}
\dim V = |\beta_S \cup \beta_{S^{\perp}}| = |\beta_S| + |\beta_{S^{\perp}}| = \dim S + \dim S^{\perp}
\end{equation} |
Solution of a recurrence equation | Use the Master Method:
$T(n) = 2T(\frac{n}3) + n + 1$ falls into Case 3, because $c > \log_{b}(a)$, with $c=1$, $a=2$, $b=3$, $f(n) = n+1$. Additionally, the regularity condition $af(\frac{n}{b}) \leq kf(n)$ is satisfied for all sufficiently large $n$ by any constant $k$ with $\frac23 < k < 1$, e.g. $k = \frac34$.
Thus, $T(n) = \Theta(f(n)) = \Theta(n)$. |
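An empirical check that $T(n)/n$ tends to a constant (a Python sketch; $T(1)=1$ is an arbitrary base case, and $n$ is taken as powers of $3$):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    if n <= 1:
        return 1
    return 2 * T(n // 3) + n + 1

for n in (3**6, 3**9, 3**12):
    print(n, T(n) / n)   # the ratio approaches 3, consistent with Theta(n)
```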
Distribution of determinants of $n\times n$ matrices with entries in $\{0,1,\ldots,q-1\}$ | In the continuous limit ($n$ fixed, $q \to \infty$, entries rescaled by $1/q$), $\log\left(|\det(M)|/q^n\right)$ is asymptotically normal as $n \to \infty$. See Terry Tao's comments on this MO thread. The linked paper of Nguyen--Vu has a nicely readable intro, see particularly around equations (1.6)-(1.7). The intuition is roughly that the determinant is going to be the (signed) hypervolume, which can be computed as an iterated multidimensional "base times height". Taking the logarithm and fuzzing your eyes, this looks like a sum of i.i.d. random variables. The details are involved, of course, and I have not attempted to digest them. Someone with expertise in this approach may be able to "discretize" it quickly. It's likely more appropriate as an MO question than an MSE question.
Edit: Now that I look at it, Nguyen--Vu's actual main Theorem 1.1 covers this discrete case too (even though the linked MO thread was just after the continuous case), and more generally any distribution with exponentially decaying tails. So, we get that for fixed $q$, $\log |\det(M)|$ is asymptotically normal as $n \to \infty$, with explicit convergence rates. |
Is there a name for this set $\{x\in \mathbb{N}_{\geq 2}\mid x\neq ca^2b^3,\{a,b,c\in \mathbb{N}\mid a+b>2\}\}$? | These are the squarefree numbers, A005117 in the OEIS. Wikipedia has a page Squarefree integer, and MathWorld has Squarefree.
A possible operation on this set is the 'exclusive product': $m\times n=mn/s$ where $s$ is the largest square dividing $mn.$ This is an abelian group. |
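A small sketch of that operation in Python (using sympy's `factorint`; on squarefree numbers it keeps exactly the primes appearing an odd number of times, $1$ is the identity, and every element is its own inverse):

```python
from math import prod
from sympy import factorint

def xprod(m, n):
    """mn / s, where s is the largest square dividing mn."""
    return prod(p for p, e in factorint(m * n).items() if e % 2 == 1)

print(xprod(6, 10))   # 15: the shared prime 2 cancels
print(xprod(6, 6))    # 1: each element is its own inverse
```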
Find $\sum_{i=1}^n \frac{\sqrt{i+1}-\sqrt{i}}{\sqrt{i^2+i}}$ | You are almost there. This is a telescoping finite sum. The comment about infinite series is irrelevant because the upper index is $n$, not $\infty$.
We have $$\sum_{i=1}^n \frac{1}{\sqrt{i}} - \sum_{i=1}^n \frac{1}{\sqrt{i+1}} = \sum_{i=1}^n \frac{1}{\sqrt{i}} - \sum_{i=2}^{n+1} \frac{1}{\sqrt{i}} = \frac{1}{\sqrt{1}} - \frac{1}{\sqrt{n+1}},$$ where we have shifted the index of the second summation by $1$. |
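Numeric confirmation of the closed form (a Python sketch):

```python
import math

n = 50
s = sum((math.sqrt(i + 1) - math.sqrt(i)) / math.sqrt(i * i + i)
        for i in range(1, n + 1))
print(s, 1 - 1 / math.sqrt(n + 1))   # both 0.8599...
```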
If $x-y = 5y^2 - 4x^2$, prove that $x-y$ is perfect square | Let $z=x-y$. Substituting $y=x-z$ into $z=5y^2-4x^2$ gives $x^2-10xz+5z^2=z$, hence $(x-5z)^2=x^2-10xz+25z^2=z+20z^2=z(20z+1)$, and so $z(20z+1)$ is a perfect square. Since $\gcd(z,20z+1)=1$ both $z$ and $20z+1$ must be perfect squares. |
Proving $f(x) = f(0) + f'(0)x + \int_0^x (x-t) f''(t) dt$ for all x | As Jeb noted, the first calculation of $h'(x)$ is incorrect; in fact it contradicts the later calculation of the same quantity. Otherwise, the approach is sound, but inefficient. Differentiating integrals with respect to a parameter is an error-prone procedure. I would do the following:
In the fundamental theorem of calculus
$$f(x) = f(0) + \int_0^x f'(t)\,dt $$
integrate by parts:
$$\begin{split}
\dots &= f(0) + (t-x)f'(t)\bigg|_{t=0 }^{t=x} - \int_0^x (t-x)f''(t)\,dt
\\&= f(0) + x f'(0) + \int_0^x (x-t)f''(t)\,dt \end{split}$$ |
Is the unit square a submanifold/manifold? | Your thinking is correct. The unit square is not a manifold for the reason you gave in your question, but it is a topological manifold with boundary. Keeping smoothness out of the picture for a moment, a topological manifold with boundary is a second countable Hausdorff space where every point has a neighborhood homeomorphic to either $\mathbb{R}^n$, or the half-space
$$
\mathbb{H}^n \;\; =\;\; \{(x_1, x_2, \ldots, x_n) \in \mathbb{R}^n \; | \; x_n \geq 0\}.
$$
You can see that the edges of the square lie in a neighborhood that is homeomorphic to one of these half-spaces and the edge point lies on the edge of $\mathbb{H}^n$. Similarly, a corner point lies on the boundary of one of these half-spaces since it has a neighborhood that can be continuously deformed into a half space (think of unfolding the two edges so that they flatten out). Generally if you have an $n$-manifold with boundary $M$ it can be decomposed as $M = Int(M) \cup \partial M$ where the interior is an $n$-manifold (in the traditional sense) and $\partial M$ is an $(n-1)$-dimensional manifold with $Int(M)\cap \partial M = \emptyset$.
If we now consider smoothness again, then the closed unit square is not a differentiable manifold strictly because of the corners. While there may be a homeomorphism taking a corner neighborhood to the half-space, there isn't a diffeomorphism that accomplishes that. |
How to solve the boolean expression | Hint:
$LHS=AB+ABC+ABC=AB+ABC=AB(1+C)=AB\cdot 1=AB=RHS$
$LHS=Z(Y+Z)(X+Y+Z)=Z(Y+Z+0)(Y+Z+X)=Z(Y+Z+0\cdot X)=Z(Y+Z)=(Z+0)(Z+Y)=Z+0\cdot Y=Z=RHS$
This assumes that you are familiar with the various properties and identities of Boolean Algebra. |
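Both identities can also be brute-force checked over all truth assignments (a Python sketch, with `&`, `|` as Boolean AND/OR):

```python
from itertools import product

# AB + ABC + ABC = AB
assert all((a & b) | (a & b & c) | (a & b & c) == (a & b)
           for a, b, c in product([0, 1], repeat=3))
# Z(Y+Z)(X+Y+Z) = Z
assert all(z & (y | z) & (x | y | z) == z
           for x, y, z in product([0, 1], repeat=3))
print("both identities verified")
```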
Integrating $\sqrt{\tan x}$ | Your computation is hard to follow, and it is hard to find the place where some coefficient was possibly not copied correctly, so maybe it is easier to give a clean quick calculation that allows a comparison of the final results.
Same start, $u=\sqrt{\tan x}$, so formally $x=\arctan(u^2)$, $dx=\frac{2u\; du}{1+u^4}$,
\begin{align}
\int\sqrt{\tan x}\; dx
&\equiv
\int\frac{2u^2}{u^4+1}\; du
\\\\
&=
\int\frac{u^2+1}{u^4+1}\; du
+
\int\frac{u^2-1}{u^4+1}\; du
\\\\
&=
\int\frac{(u^2+1)/u^2}{(u^4+1)/u^2}\; du
+
\int\frac{(u^2-1)/u^2}{(u^4+1)/u^2}\; du
\\\\
&=
\int\frac{d\left(u-\frac 1u\right)}{\left(u-\frac 1u\right)^2+2}
+
\int\frac{d\left(u+\frac 1u\right)}{\left(u+\frac 1u\right)^2-2}
\\\\
&=
\frac 1{\sqrt 2}\arctan\left(\frac 1{\sqrt 2}\left(u-\frac 1u\right)\right)
+
\frac 1{2\sqrt 2}\ln\left|
\frac
{\left(u+\frac 1u\right)-\sqrt 2}
{\left(u+\frac 1u\right)+\sqrt 2}
\right|
+\text{local constant}
\ .
\end{align}
(The coefficient in the logarithmic term is fine. For the term involving $\arctan$, we can pass to its "inverse argument". I hope this is an answer that pays off in the future, not only the chasing of $\pm1/2$ in this particular case.) |
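One way to gain confidence in the antiderivative is to compare its difference quotient with $\sqrt{\tan x}$ (a NumPy sketch on a subinterval of $(0,\pi/2)$, where $u>0$ and the local constant is irrelevant):

```python
import numpy as np

def F(x):
    u = np.sqrt(np.tan(x))
    return (np.arctan((u - 1/u) / np.sqrt(2)) / np.sqrt(2)
            + np.log(np.abs((u + 1/u - np.sqrt(2)) / (u + 1/u + np.sqrt(2)))) / (2 * np.sqrt(2)))

x, h = np.linspace(0.2, 1.2, 5), 1e-6
print((F(x + h) - F(x - h)) / (2 * h))   # numerical derivative of F ...
print(np.sqrt(np.tan(x)))                # ... matches sqrt(tan x)
```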
How to find the limit $\lim\limits_{n\to\infty}1/\sqrt[n]{n}$ which is indeterminate on evaluation but is convergent? | Indeterminate forms can have values.
Note from L'Hospital's Rule that $\lim_{n\to \infty}\frac{\log(n)}{n}=\lim_{n\to \infty}\frac{1/n}{1}=0$. Hence, we have
$$\begin{align}
\lim_{n\to \infty}\frac{1}{n^{1/n}}&=\lim_{n\to \infty}e^{-\frac1n \log(n)}\\\\
&=e^{-\lim_{n\to \infty}\left(\frac1n \log(n)\right)}\\\\
&=e^0\\\\
&=1
\end{align}$$
as expected! |
If $\lim \int f_n\, d\mu = 0$ then $\lim \int f_n^a \,d\mu = 0$ | Your proof works for $\alpha \ge 1,$ but fails for $\alpha < 1.$ For example, if $f_n(x) = x^n,$ then $f_n(x) \le 1$ on $[0,1]$ for all $n.$ But $(f_n(x))^{1/2} \le 1^{1/2-1}f_n(x)$ fails.
However, Hölder's inequality gives a simple proof for such $\alpha$:
$$\int_0^1 f_n^\alpha = \int_0^1 f_n^\alpha \cdot 1 \le \left (\int_0^1 (f_n^\alpha)^{1/\alpha}\right )^\alpha \left(\int_0^1 1^{1/(1-\alpha)}\right )^{1-\alpha} = \left ( \int_0^1 f_n \right )^\alpha\cdot 1 \to 0.$$ |
Probability question using combinations | There are $6$ forbidden pairs of places that Alice and Bob could sit.
Once they are seated, the remaining people can be seated in $3!^2$ distinct ways.
The total number of seating arrangements (forbidden or not) is $4!^2$.
The probability of a permissible seating arrangement is therefore $$1-\frac{6 \times 3!^2}{4!^2}=1-\frac{216}{576}=0.625.$$
(Note: I've assumed that there is a designated male side and a female side here; if the restriction is only that "the males are on one side", then both the numerator and denominator above should be multiplied by $2$.) |
Proof of property of Markov chain | Just pad the gap between $X_{n_j}$ and $X_{n_{j + 1}}$, $j = 0, \ldots, k$. Formally, this means we denote the omitted random vectors $(X_{n_j + 1}, \ldots, X_{n_{j + 1} - 1})$ by $Y_{n_j}$, $j = 0, \ldots, k$ (with the convention $n_0 = 0$, $n_{k + 1} = n$).
By the law of total probability and the Markov property:
\begin{align}
& P[X_{n + 1} = s | X_{n_k} = x_{n_k}, \ldots, X_{n_1} = x_{n_1}] \\
= & \frac{P[X_{n + 1} = s, X_{n_k} = x_{n_k}, \ldots, X_{n_1} = x_{n_1}]}{P[X_{n_k} = x_{n_k}, \ldots, X_{n_1} = x_{n_1}]} \\
= & \frac{\sum_{y_{n_0}, \ldots, y_{n_k}}P[X_{n + 1} = s, Y_{n_k} = y_{n_k}, X_{n_k} = x_{n_k}, \ldots, Y_{n_1} = y_{n_1}, X_{n_1} = x_{n_1}, Y_{n_0} = y_{n_0}]}{\sum_{y_{n_0}, \ldots, y_{n_k}}P[Y_{n_k} = y_{n_k}, X_{n_k} = x_{n_k}, \ldots, Y_{n_1} = y_{n_1}, X_{n_1} = x_{n_1}, Y_{n_0} = y_{n_0}]} \\
= & \frac{\sum_{y_{n_0}, \ldots, y_{n_k}}P[X_{n + 1} = s, Y_{n_k} = y_{n_k}| X_{n_k} = x_{n_k}, \ldots, Y_{n_1} = y_{n_1}, X_{n_1} = x_{n_1}, Y_{n_0} = y_{n_0}]P[X_{n_k} = x_{n_k}, \ldots, Y_{n_1} = y_{n_1}, X_{n_1} = x_{n_1}, Y_{n_0} = y_{n_0}]}{\sum_{y_{n_0}, \ldots, y_{n_k}}P[Y_{n_k} = y_{n_k} | X_{n_k} = x_{n_k}, \ldots, Y_{n_1} = y_{n_1}, X_{n_1} = x_{n_1}, Y_{n_0} = y_{n_0}]P[X_{n_k} = x_{n_k}, \ldots, Y_{n_1} = y_{n_1}, X_{n_1} = x_{n_1}, Y_{n_0} = y_{n_0}]} \\
= & \frac{\sum_{y_{n_0}, \ldots, y_{n_k}}P[X_{n + 1} = s, Y_{n_k} = y_{n_k}| X_{n_k} = x_{n_k}]P[X_{n_k} = x_{n_k}, \ldots, Y_{n_1} = y_{n_1}, X_{n_1} = x_{n_1}, Y_{n_0} = y_{n_0}]}{\sum_{y_{n_0}, \ldots, y_{n_k}}P[Y_{n_k} = y_{n_k} | X_{n_k} = x_{n_k}]P[X_{n_k} = x_{n_k}, \ldots, Y_{n_1} = y_{n_1}, X_{n_1} = x_{n_1}, Y_{n_0} = y_{n_0}]} \quad \text{(Markov Property)} \\
= & \frac{\sum_{y_{n_0}, \ldots, y_{n_{k - 1}}}P[X_{n + 1} = s| X_{n_k} = x_{n_k}]P[X_{n_k} = x_{n_k}, \ldots, Y_{n_1} = y_{n_1}, X_{n_1} = x_{n_1}, Y_{n_0} = y_{n_0}]}{\sum_{y_{n_0}, \ldots, y_{n_{k - 1}}}P[X_{n_k} = x_{n_k}, \ldots, Y_{n_1} = y_{n_1}, X_{n_1} = x_{n_1}, Y_{n_0} = y_{n_0}]} \quad \text{(Do the sum with respect to $y_{n_k}$ first)} \\
= & P[X_{n + 1} = s | X_{n_k} = x_{n_k}].
\end{align}
To answer your concern, we can prove in a more general sense, i.e., for any positive integers $m, k$, it follows that
$$P[X_{m + k} = x_{m + k}, \ldots, X_{m + 1} = x_{m + 1} | X_m = x_m, \ldots, X_0 = x_0] = P[X_{m + k} = x_{m + k}, \ldots, X_{m + 1} = x_{m + 1} | X_m = x_m]. \tag{1}$$
By the multiplicative rule and the Markov property, the left hand side of $(1)$ equals
$$P[X_{m + k} = x_{m + k} | X_{m + k - 1} = x_{m + k - 1}] \cdots P[X_{m + 1} = x_{m + 1} | X_m = x_m],$$
whence to prove $(1)$, it suffices to prove
$$ P[X_{m + j} = x_{m + j} | X_{m + j - 1} = x_{m + j - 1}] =
P[X_{m + j} = x_{m + j} | X_{m + j - 1} = x_{m + j - 1}, \ldots, X_m = x_m].
\tag{2}$$
for $j = 1, \ldots, k$.
To prove $(2)$, let $Z$ denote the vector $(X_0, X_1, \ldots, X_{m - 1})$; again by the law of total probability, the right hand side of $(2)$ equals:
\begin{align}
& P[X_{m + j} = x_{m + j} | X_{m + j - 1} = x_{m + j - 1}, \ldots, X_m = x_m] \\
= & \sum_{z} P[X_{m + j} = x_{m + j}, Z = z | X_{m + j - 1} = x_{m + j - 1}, \ldots, X_m = x_m] \\
= & \sum_{z} P[Z = z | X_{m + j - 1} = x_{m + j - 1}, \ldots, X_m = x_m]P[X_{m + j} = x_{m + j} | X_{m + j - 1} = x_{m + j - 1}, \ldots, Z = z] \\
= & \sum_{z} P[Z = z | X_{m + j - 1} = x_{m + j - 1}, \ldots, X_m = x_m]P[X_{m + j} = x_{m + j} | X_{m + j - 1} = x_{m + j - 1}] \\
= & P[X_{m + j} = x_{m + j} | X_{m + j - 1} = x_{m + j - 1}]
\sum_z P[Z = z | X_{m + j - 1} = x_{m + j - 1}, \ldots, X_m = x_m] \\
= & P[X_{m + j} = x_{m + j} | X_{m + j - 1} = x_{m + j - 1}].
\end{align}
This completes the proof. |
Convex polyhedron and its Gauß-curvature | Given any convex polyhedron $P$, consider the Minkowski sum of $P$ with $\bar{B}(\epsilon)$, a closed ball centered at $0$ with small radius $\epsilon$.
$$P_{\epsilon} \stackrel{def}{=} P + \bar{B}(\epsilon) = \{\; \vec{p} + \vec{q} : \vec{p} \in P, |\vec{q}| \le \epsilon\; \}$$
Let $K$ be the Gaussian curvature on the boundary $\partial P_{\epsilon}$.
Let $V, E, F$ be the number of vertices, edges and faces of $P$ respectively.
The boundary $\partial P_{\epsilon}$ is composed of $V + E + F$ fragments.
$F$ planar polygons, one for each face.
On these planar polygons, the Gaussian curvature $K$ vanishes.
$E$ cylindrical fragments with radius $\epsilon$, one for each edge.
For any edge $e$ of $P$, let $\ell_e$ be its length. Let $\psi_e$ be the angle between the two outward pointing normals of the two faces of $P$ attached to $e$.
The cylindrical fragment is the "Cartesian product" of a line segment of length $\ell_e$ and a circular arc of length $\psi_e\epsilon$. Since one of the principal curvatures vanishes on a cylinder, the Gaussian curvature $K$ again vanishes on these cylindrical fragments.
$V$ spherical fragments with radius $\epsilon$, one for each vertex.
Let $p$ be a vertex of $P$ and $e_1, e_2, \ldots, e_{d}$ be the edges in contact with $p$. For simplicity of description, we will extend the definition of $e_i$ for other integer $i$ by periodicity. We will assume the edges are ordered so that $e_{i}$ and $e_{i+1}$ are adjacent to each other. Let
$\alpha_i$ be the angle between the edge $e_i$ and $e_{i+1}$.
$\hat{n}_i$ be the outward pointing normal vector for the face between $e_{i}$ and $e_{i+1}$.
The spherical fragment associated with $p$ will be a geodesic polygon having vertices at $\vec{v}_i = \vec{p} + \epsilon \hat{n}_i$. The geodesic between $\vec{v}_{i-1}$ and $\vec{v}_i$ has length $\epsilon \psi_{e_i}$.
If we apply Gauss Bonnet Theorem to this geodesic polygon, we find its area is given by the formula
$$\epsilon^2 ( 2\pi - \sum_{i=1}^d \beta_i)$$
where $\beta_i$ is the change of angle of the tangent vectors of the arcs corresponds to $e_{i}$ and $e_{i+1}$ at $\vec{p} + \epsilon\hat{n}_i$.
The key is $\beta_i = \alpha_i$. To see this, switch to a new coordinate system
where
$p$ is the origin.
the edge $e_1$ is along the $x$-axis, i.e. the direction $(1,0,0)$.
the edge $e_2$ is along the direction $(\cos\alpha_1, \sin\alpha_1, 0)$.
$\hat{n}_1$ is along the $z$-axis, i.e the direction $(0,0,1)$.
It is easy to see
the plane determined by $\hat{n}_0$ and $\hat{n}_1$ is the $yz$-plane.
the plane determined by $\hat{n}_1$ and $\hat{n}_2$ is the one $-x\sin\alpha_1 + y\cos\alpha_1 = 0$.
This means the angle between these two planes is equal to $\alpha_1$.
This in turn implies, $\beta_1$, the change of angle of tangent vectors at $\hat{n}_1$ is equal to $\alpha_1$.
As a result, the integral of $K$ over such a spherical fragment is equal to the angular deficit of the corresponding vertex $p$.
$$\frac{1}{\epsilon^2} \times \epsilon^2 ( 2 \pi - \sum_{i=1}^{d} \beta_i ) = 2\pi - \sum_{i=1}^d \alpha_i$$
This settles two questions:
Why the angular deficit can be viewed as a concentration of Gaussian curvature?
This is because it is equal to the integral of Gaussian curvature of a smoothed version of $P$ around a neighborhood of corresponding vertex.
Why the angle deficit of a vertex is positive?
The answer is close to trivial.
The angular deficit is equal to the solid angle of the corresponding spherical fragment! |
Estimate of complex integral | I don't see a direct application of the standard estimates, so let's use a detour. The circle $\lvert z-\pi\rvert = \pi\sqrt{2}$ intersects the imaginary axis in $\pm \pi i$. Let $\gamma$ be a (piecewise smooth) path in the left half-plane connecting $-\pi i$ and $\pi i$; we shall see later what kind of path might be convenient. Then $C-\gamma$ is a (piecewise smooth) closed path winding around $\log 2$ once, so
$$\int_{C-\gamma} 2 - \frac{e^z}{z-\log 2}\,dz = -2\pi i e^{\log 2} = -4\pi i.$$
Also, since $2$ has the primitive $2z$, we have
$$\int_\gamma 2\,dz = [2z]_{-\pi i}^{\pi i} = 4\pi i,$$
and therefore
$$\int_C 2 - \frac{e^z}{z-\log 2}\,dz = -4\pi i + \int_\gamma 2 - \frac{e^z}{z-\log 2}\,dz = -\int_\gamma \frac{e^z}{z-\log 2}\,dz.$$
It remains to choose $\gamma$ in such a way that
$$\left\lvert\int_\gamma \frac{e^z}{z-\log 2}\,dz\right\rvert < \frac{2}{3}$$
can be shown. For example, the polygonal path with vertices $-\pi i,\, -R-\pi i,\, -R +\pi i, \pi i$ for large enough $R$ does that. |
Distances between joined points from angles | Your set of angles should be taken differently: you should use "signed" successive polar angles $\theta_k$ (a minus sign if one turns left, a plus sign if one turns right).
In this way the squared distance between the endpoints is:
$$\underbrace{x^2 \left|\sum e^{i\theta_k}\right|^2}_{\text{complex version}}=x^2 \left[\left(\sum \cos \theta_k\right)^2+\left(\sum \sin \theta_k\right)^2 \right] \tag{1}$$
I prefer to manage squared distances, because it provides a smoother behavior.
Edit: in a 3D setting, the corresponding formula for the squared distance would use spherical coordinates
$$\begin{cases}x&=&r \sin \theta \cos \varphi\\
y&=&r \sin \theta \sin \varphi\\
z&=&r \cos \theta
\end{cases}$$
and the formula would be:
$$r^2 \left(\left(\sum(\sin \theta_k \cos \varphi_k)\right)^2+
\left(\sum(\sin \theta_k \sin \varphi_k)\right)^2+\left(\sum(\cos \theta_k)\right)^2\right)$$ |
Converting English to Quantifiers: 'There is no greatest prime' | There's one mistake that just looks like a typo: it seems that you meant $a > b$ rather than $b > a$.
More fundamentally, what goes wrong is that you (in particular) claim that any $b$ is prime. Even if we forget all the conditions on $a$, your sentence still claims that $\forall b(\mathrm{prime}(b))$. What you probably mean is that if $b$ is prime, then there is a larger prime $a$. For example,
$$
\forall b(\mathrm{prime}(b) \to \exists a(\mathrm{prime}(a) \land a > b)).
$$ |
Difference Between Product and Function Spaces | I prefer to write ${^JX}$ for the set of functions from $J$ to $X$. Each $f\in{^JX}$ is, as you say, a relation, but not from $X$ to $J$: it’s a relation from $J$ to $X$, so ${^JX}\subseteq\wp(J\times X)$. This means that if $f\in{^JX}$, then $f\in\wp(J\times X)$, and therefore $f\subseteq J\times X$.
You write:
So it is all of the sudden unclear to me how to satisfactorily differentiate them on a conceptual level, which is alarming to me, since their set-theoretic definitions are clearly extremely different (one is a finite Cartesian product and the other is possibly an infinite Cartesian product).
No, one is a possibly infinite Cartesian product, and the other is a subset of a finite Cartesian product; there is no conflict here at all. ${^JX}$ contains precisely those subsets of $J\times X$ that (a) are functions, and (b) have domain $J$. ($J\times X$ also has subsets that are partial functions, i.e., functions whose domains are proper subsets of $J$.) |
Is it known whether all prime powers $p^k$ with $k\ge 8$ are group-abundant? | According to this math.SE question the number of groups of order $p^k$ satisfies
$$ gnu(p^k) \geq p^{\frac{2}{27}k^2(k-6)}.$$
Filling in $k \geq 8$ (so that $k^2 \ge 8k$ and $k-6 \ge 2$) gives
$$ gnu(p^k) \geq p^{\frac{2}{27}\cdot 8k\cdot 2} = p^{\frac{32}{27}k} > p^{k},$$
so all prime powers $p^k$ with $k \geq 8$ are group-abundant. |
Why we ignore the other elements of $F_{p^k}$ when we check if any element of it is a root of multiplicity greater than $1$ in $p(x)$ or not? | Let us look at the question you have given again.
Given the polynomial $p(x) = x^4 +x + 6$, for which primes $q \in \mathbb N$ is it true that $p(x)$ has a multiple root in "the field of characteristic $q$"?
The question answered by egreg which was attached in the comments by you, has pretty much the same question, except that certain values of $q$ are given for checking.
This tells us that "the field of characteristic $q$" probably stands for $\mathbb F_q$, which is the unique (up to isomorphism) field of $q$ elements, for example $\mathbb Z/q\mathbb Z$.
Let us recall what a multiple root means.
For this, we first state the "long division" theorem for fields :
If $f(x),g(x) \in K[x]$ are two polynomials, with $g(x)$ non-zero, then there exist two polynomials $q(x)$ and $r(x)$, such that $\deg r < \deg g$, and $f(x) = q(x)g(x) + r(x)$.
This is similar to polynomial long division, but applies in any field.
We all know what it means to be a root : in some generality, given a ring $R$ , an element $a \in S$ where $S$ is an extension ring of $R$ (i.e. a ring containing $R$ as a subring) is said to be a root of $f(x) \in R[x]$ if $f(a) = 0$, where $f(a)$ is an element of $S$, given by substituting $a$ in place of $x$ in $f$.
Now, we have the famous "remainder theorem" for figuring out when an element is a root of a polynomial : this is a corollary of the long division procedure.
Remainder Theorem : Let $f \in K[x]$, where $K$ is a field. Let $L$ be an extension field of $K$(so, $f(x) \in L[x]$ also, since the coefficients of $f$ lie in $K$ which lies inside $L$), and let $a \in L$. Then, there exists a polynomial $q(x) \in L[x]$ such that $f(x) = (x-a)q(x) + f(a)$.
Its corollary is :
As a consequence, $a$ is a root of $f$ if and only if $f(x)$ is a multiple of $(x-a)$ in the extension which $a$ belongs to.
Now, what does a multiple root mean? For this, we define multiplicity :
Given $f(x) \in K[x]$ and $a \in L$ a field extension of $K$ such that $a$ is a root of $f(x)$, the multiplicity of $a$ is the largest value of $k \geq 1$ such that $(x-a)^k$ is a multiple of $f(x)$ in $L[x]$.
In particular,
$a$ is said to be a multiple root of $f(x)$ if the multiplicity of $a$ is greater than $1$ i.e. $(x-a)^2$ is a multiple of $f(x)$.
In particular, by the remainder theorem, $a$ is a multiple root of $f(x)$ implies that $a$ is a root, implies that $f(a) = 0$.
We add to this the following theorem : every polynomial in $K[x]$ of degree $n$ (in one variable) has at most $n$ roots , multiplicity included, in any extension field of $K$.
I briefly touch upon algebraic closure here. Given a field $F$, we say that a field $K$ is an algebraic closure of $F$ if $K$ is an algebraic field extension of $F$ and, given any polynomial $f \in F[x]$, all the roots of $f$ lie in $K$.
The fact is, that the algebraic closure of any finite field of characteristic $p$, is the union of all finite fields of characteristic $p$.
That is, given $f(x) \in F_{p^k}[x]$, the roots of $f$ all lie in the infinite union $\cup_{i=1}^\infty F_{p^i}$.
In particular, since $f$ has only as many roots as its degree, the roots of $f$ all lie in $F_{p^N}$ for some large enough $N$ depending on $f$ (the union above is increasing, so being in a finite union of these sets is akin to being in the largest set).
How do we find multiple roots? Well, that can be done in two ways. egreg suggested a more elementary way, I suggested the more conventional way.
What egreg has done, is the following : To check if $a$ is a multiple root of $f$, we use the long division procedure to divide $x^4 + x + 6$ by $(x -a)^2$, where $a$ is treated like a constant.
This leads to the following :
$$
x^4 + x + 6 = (x-a)^2(x^2+2ax+3a^2) + ((4a^3+1)x + (6-3a^4)) \tag{*}
$$
I won't explain the long division : do it just like how you do it for ordinary polynomials, and it will work out.
The above is an equality of polynomials in any field.
So, this means that $a$ is a multiple root if and only if the remainder term, which has been calculated as $(4a^3+1)x + (6-3a^4)$ , is zero.
A polynomial is equal to zero if and only if its coefficients are zero, that is if and only if $4a^3+1$ and $6-3a^4$ are both zero.
Here, equality to zero means different things in different fields. For example, we have $2 = 0 \in \mathbb F_2$, but $2 \neq 0 \in \mathbb F_3$.
Let us start solving the problem now. First of all, we need to note that we are interpreting $x^4+x+6$ as a polynomial over the finite field $\mathbb F_q[x]$ and finding when a multiple root occurs, for each $q$.
Now, the roots of $x^4+x+6$ over $\mathbb F_q[x]$, for each $q$, will all occur in $F_{q^k}$ for some $k$, because of the nature of the algebraic closure of $F_q[x]$. What that means, is that the equation $(*)$ needs to be considered only in finite fields $F_{q^k}$. That is, the remainder term , and the resulting equalities, are all equalities in $F_{q^k}$.
Crisply put : $a$ is a multiple root of $x^4+x+6$ over $F_{q}$, if and only if $4a^3+1 = 0$ and $6-3a^4 = 0$, where equality holds in some $F_{q^k}$ (basically, the one in which $a$ lies, since being a root of $x^4+x+6$ it must lie in some $F_{q^k}$). egreg checks for each $q$ , whether or not the second condition holds.
He starts with fields of characteristic $2$. So, the remainder term is going to lie in a field $F_{2^k}$. But he notes that $4a^3 + 1 = 1 \in \mathbb F_{2^k}$, because $4a^3$ is always a multiple of $2$, and $F_{2^k}$ is of characteristic $2$. This means that the remainder term has a non-zero $x$ coefficient, so cannot be zero in $F_{2^k}$. This is why, in fields of characteristic $2$, there can be no multiple root of $x^4+x+6$. (Note: nowhere above was the value of $k$ used: this is why egreg does not need to mention it in his answer, but I am doing so.)
What about characteristic three? Here, again the remainder terms are being considered in $F_{3^k}$ for some $k$. But then, $6 - 3a^4 = 0 \in \mathbb F_{3^k}$ for every $k$, by the fact that the characteristic is $3$. For the second term, he performs reduction : $4a^3+1 = a^3+1$ in $F_{3^k}$ by the characteristic $3$ property. So the question is : can $a^3+1 = 0$ happen? It can , with $a= -1$. Hence, here, the remainder term can be made zero by taking $a = -1$. Hence, $-1$ is a multiple root of $x^4+x+6$ modulo $3$. As it turns out, $-1 \in \mathbb F_3$ itself, so we did not need to go into an extension for equality of the remainder terms.
Then comes the rest, which requires a somewhat separate section.
Note that $a = 0$ can never be a multiple root of $x^4+x+6$, because the remainder term, in any field, will have non-zero $x$ coeffcient ($4a^3+1$ becomes $1$). So $a \neq 0$ is assumed. Usually, this means that $a$ is going to be inverted soon in the argument.
Now, egreg performs a brilliant manipulation: given that $4a^3+1 =0$ and $6-3a^4=0$, he wants to remove the large powers of $a$, so he computes $3a(4a^3+1) + 4(6-3a^4) = 0$, which simplifies to $3a+24 = 0$.
Now, if this equation held in a field of characteristic $2$ or characteristic $3$, then it would simplify further, since $24$ is a multiple of $2$ and $3$.
However, if the characteristic is not one of these, then we get $3(a+8) =0$, and since we are in an integral domain, and $3 \neq 0$, we get $a = -8$. Conclusion: $a = -8$ is the only possibility for a multiple root of $x^4+x+6$ over $F_q$, where $q \neq 2,3$ is prime. In other words, if $a$ is a multiple root of $x^4+x+6$ over $F_q$ for some $q \neq 2,3$, then necessarily $a = -8$.
What are we going to do now? Check when $-8$ is a multiple root, of course, because it is sometimes, and it is not sometimes. How do we check? By plugging it into the remainder term. Plugging it in : $4a^3+1$ becomes $-2047$, and $6-3a^4$ gives $-12282$.
We want a field, where both these quantities $-2047$ and $-12282$ are zero. Of course, they are zero in a field of characteristic $q$, if and only if $q$ divides each of them. It is easy to check that $2047$ actually divides $12282$, and $2047 = 23 \times 89$, so the only fields that work out are $\mathbb F_{23}$ and $\mathbb F_{89}$.
That is , the remainder term will be zero in $F_{23}$ and $F_{89}$ (we need not take powers, since $a = -8$ belongs to the original field itself).
Consequently, the description of the answer is the following :
$x^4+x+6$ over $F_{q}$ has no multiple roots in $F_{q^k}$, if $q \neq 3,23,89$.
$x^4+x+6$ over $F_3$ has the multiple root $-1$, and over $F_{23}$ and $F_{89}$ it has the multiple root $-8$.
Using Wolfram alpha, you can check that $x^4+x+6$ factorizes as $x(x+1)^3$ in $F_3$, then as $(x+8)^2(x^2+7x+8)$ in $F_{23}$, and $(x+8)^2(x+28)(x+45)$ in $F_{89}$.
This gives a complete answer to the problem, using egreg's approach.
To get on with my approach, we define the "formal derivative" of $f(x) \in K[x]$, where $K$ is either a finite field or a number field, as the usual derivative $f'$ (The reason we need to be "formal" is because we are inventing the definition here, not deriving it by introducing limits / differential quotients etc). So for example, the formal derivative of $2x^2 + 3x+4$ is $4x+3$. However, when considered modulo $3$ for example, the derivative is just $x$ now.
The formal derivative allows us to capture multiple roots very well, for the following reason : suppose that $f$ is a polynomial in $K[x]$, and suppose $f$ has a multiple root $a$ in some extension $L$. Then, $f = (x-a)^2 \times q(x)$ by the long division procedure, and taking the derivative using the product rule (it applies, one can check that) $f' = (x-a) \times ((x-a)q'(x) + q(x))$. From the remainder theorem, we see that $f'(a) = 0$, or that $a$ is a root of $f$ and of $f'$. Or, $f$ and $f'$ have a common root, in some extension of $K$.
Conversely, if $f$ has no multiple roots, then one can check by going to the algebraic closure, that $f$ and $f'$ do not have common roots in any extension of $K$.
So, the question of $f$ having a multiple root boils down to $f$ and $f'$ sharing a multiple root in some extension of $K$.
Now, at this point, we have the Euclidean algorithm: something that finds a greatest common divisor of two polynomials, in a given field. I am sure you must know this algorithm: if not, look it up. It is similar to the one on the natural numbers.
In our case, we have $x^4+x+6$, whose formal derivative is $4x^3+1$. We have to find the gcd of these polynomials. I will explain. The $\gcd$ will not change if we take $4(x^4+x+6)$ instead of $x^4+x+6$ unless we are in characteristic two, since this will just be a constant non-zero multiple of the original. In char two, this polynomial will be zero, so we will deal with that separately.
We see that $4(x^4 + x+6) - x(4x^3+1) = 3(x+8)$. This is just the general $\gcd$ procedure, first step.
Next, $4x^3+1 = (x+8)(4x^2-32x+256) - 2047$ is the $\gcd$ procedure, second step. We conclude, from the Euclidean algorithm on the polynomial ring $\mathbb Q[x]$, that the $\gcd$ is $1$, since here the remainder term is constant.
But that is the gcd for rational polynomials. Some steps above will collapse for finite fields, especially those in which some of the coefficients above vanish.
Let us look at the equation in step $1$ above. We have, first in char two, that the derivative is $1$, so there is no multiple root. In char $3$, the right hand side is zero, so in fact the algorithm stops there with the gcd equal to $4x^3+1$. So there is a multiple root here. For other characteristics, the right hand side is not zero, so we can proceed to look at other characteristics in equation $2$.
Here, we see that the remainder, which is $-2047$, is zero only in characteristic $23$
and characteristic $89$. So, there are multiple roots in these fields. For every other characteristic, the gcd is a non-zero constant, so the algorithm gives gcd 1 for every other char, hence giving the complete list of primes in which there is a multiple root.
So this is my way of doing things. Alternately, here is what the hint was suggesting to you.
The point is that I mentioned earlier that $f$ has a multiple root if and only if $f$ and $f'$ share a common root in some extension of $K$. This is exactly what your hint is: go forth to extensions of $F_q$, namely $F_{q^k}$, and find common roots of $f$ and $f'$.
However, in our case, $f' = 4x^3+1$ is a cubic polynomial. And these are special. Why?
Remember that degrees add when we multiply polynomials. That is, the degree of $fg$ is the sum of the degrees of $f$ and $g$. Now, if $f'$ is a product of two polynomials, then one of them must be of degree less than or equal to $1$ (use the above fact). By the remainder theorem, either $f'$ is irreducible or it has a root. Has a root where? In $F_q$, of course, since it is a polynomial over $F_q$. Consequently, we only need to look for common roots of $f$ and $f'$ in $F_q$ and not any larger extension, because $f'$ will certainly have a root in $F_q$ if it is reducible.
But, in general this will not be true. Two polynomials may have a non-trivial gcd, but not share a root over $F_q$: such a root may be shared over a larger extension. In our case, it so happens that we need not go further.
So your procedure, will be to substitute different values in $F_q$ only, into $f$ and $f'$ and check what works. Of course, this is not working well, because having to deal with substituting $1,...,89$ for example with $F_{89}$ is not feasible. It is suggested that you follow either egreg or me for this sort of a problem next time round. |
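The whole computation can be cross-checked mechanically: over $\mathbb F_q$, $p(x)$ has a multiple root iff $\gcd(p, p')$ is non-constant. A sympy sketch (scanning small primes):

```python
from sympy import symbols, Poly, primerange

x = symbols("x")
for q in primerange(2, 100):
    f = Poly(x**4 + x + 6, x, modulus=q)
    g = f.gcd(f.diff(x))
    if g.degree() > 0:
        print(q, g)   # prints only q = 3, 23, 89, with the repeated factor
```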
$f:(0,\infty) \rightarrow \mathbb{R}$ satisfies $f(x)-f(y)=f\left(\dfrac{x}{y}\right)$ for all $x,y \in (0,\infty)$ and $f(1)=0.$ | The OP is clearly asking whether the provided solution is OK, not asking for a solution, so I address the former.
Answer to OP: no!!
Because you neither specify what $\delta_1$ is, nor do you prove, by other means, that such a $\delta_1$ exists. It was fixed by @Paramanand Sigh in the comments. Another possible addition to OP will do the job. Add: For a given $a>0$ there is a $\delta_1>0$ such that
$$
|x-a|<\delta_1 \implies |\frac{x}{a}-1|<\delta \, .
$$
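For concreteness, one explicit choice that works here is $\delta_1=a\delta$ (recall $a>0$):
$$
|x-a|<a\delta \implies \left|\frac{x}{a}-1\right| = \frac{|x-a|}{a} < \delta \, .
$$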
To emphasize: the fact that a $\delta$ exists does not automatically prove a $\delta_1$ exists. Notice that $\delta_1$ must depend on the point where you consider continuity.
Advice: When writing a proof, assume that a computer will read it. So, the first thing you do is to make sure every parameter you introduce is clearly defined either beforehand, or immediately after a comma. |
How to solve this problem using Permutations and Combinations Theory where I have to count number of possible arrangements? | I'll just consider the case $n=6, m=3$ as mentioned in the OP's comment.
There are $3^6=729$ possible sequences. We want to subtract the number that have $3$ consecutive equal numbers.
There are $4$ positions in which such a sequence can start, and $3$ possible values. The remaining $3$ positions can be filled in $3^3$ ways, giving $12\cdot27=324$.
A sequence with four consecutive equal numbers will have been counted twice, so we must subtract $3\cdot3\cdot3^2=81$.
A sequence with $5$ consecutive equal numbers will have been added in $3$ times and subtracted twice, so it has been counted once, and no adjustment is needed.
A sequence with $6$ consecutive equal numbers has been added in $4$ times and subtracted $3$ times, so again, no adjustment is needed.
Now we have to consider sequences of the form $aaabbb$ with $a\neq b$. These have been added in twice, so we have to subtract them. There are $3$ ways to choose $a$ and then $2$ ways to choose $b$ so there are $6$ such sequences.
All in all, we have $$
729-(324-81-6)=492$$
EDIT
Actually, it is possible to solve the general case, though it gets messy. Instead of using inclusion and exclusion, we set up a difference equation.
Call a sequence "admissible" if it doesn't contain $3$ consecutive equal characters. Let $a_n$ be the number of $n$-character admissible sequences. Let $b_n$ be the number of $n$-character admissible sequences whose last two characters are different, and let $c_n$ be the number of $n$-character admissible sequences whose last two characters are the same.
We have $$\begin{align}
a_n&=b_n+c_n\tag1\\
b_{n+1}&=(m-1)b_n+(m-1)c_n\tag2\\
c_{n+1}&=b_n\tag3
\end{align}$$
$(2)$ is true because we get to an $(n+1)$-length sequence whose last two characters are different by appending any character different from the last to any admissible sequence of length $n$.
$(3)$ is true because to get an admissible sequence whose last two characters are the same, we must duplicate the last character of any admissible sequence whose last two characters are different.
Combining $(2)$ and $(3)$ gives $$b_{n+1}=(m-1)b_n+(m-1)b_{n-1},\tag4$$ a linear, homogeneous, second-order difference equation with constant coefficients. We have the initial values, $b_0=0,\ b_1=m$, and $(4)$ can be solved by standard methods. (It may seem that we should have $b_0=1$, but we need $b_0=0$ in order that $(4)$ give $b_2=m(m-1)$.)
Once we have solved for $b_n$, we can rewrite $(1)$ and $(2)$ as $$a_n=\frac{b_{n+1}}{m-1}$$
I leave it to you to solve $(4)$. |
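If you want to sanity-check the recurrence against the inclusion-exclusion count above, here is a minimal sketch, assuming $m\ge2$:

```python
# Count admissible sequences via b_{n+1} = (m-1)(b_n + b_{n-1}), a_n = b_{n+1}/(m-1).
def admissible(n, m):
    b_prev, b = 0, m          # b_0 = 0, b_1 = m
    for _ in range(n):        # after the loop, b holds b_{n+1}
        b_prev, b = b, (m - 1) * (b + b_prev)
    return b // (m - 1)       # a_n = b_{n+1}/(m-1), valid for n >= 1

print(admissible(6, 3))       # 492, matching the count above
```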
Terminology between essentially bounded function and bounded function. | Suppose that $E = \{x : f(x) > K\}$ is not measure zero. Then for all $x \in E$, $\sigma_n(x)$ does not converge to $f(x)$ since $f(x) > K$ but $\sigma_n(x) \leq K$. Hence $\sigma_n$ does not converge to $f$ almost everywhere since it does not converge on $E$ which is not measure zero. Contradiction! |
Finding rank of augmented matrix | you can follow the normal approach of finding rank of a matrix by reducing it to either row reduceed echelon form or counting the nonzero rows in row reduceed form.I prefer the last.
As you give your matrix has rank 3 so it has 3 nonzero rows in row reduceed form and since you adding a identity matrix of order $4 ×4$ it is in the row reduceed form always.
Now when you come at the last row you have all element is zero of your original matrix and 1 in the identity matrix making the whole system having a nonzero fourth row . So your answer is correct and it is 4 |
Proving supremum for non-empty, bounded subsets of Q iff supremum in R is rational | HINTS:
Suppose that $\sup_{\Bbb R}E=\alpha$ is irrational, but that $E$ has a (necessarily rational and therefore necessarily different) supremum $s$ in $\Bbb Q$. Either $s<\alpha$ or $s>\alpha$, and you can get a contradiction either way.
Suppose now that $\sup_{\Bbb R}E=\alpha\in\Bbb Q$. Clearly $e\le\alpha$ for each $e\in E$, so $\alpha$ is an upper bound for $E$ in $\Bbb Q$. Show that if $q\in\Bbb Q$, and $q<\alpha$, then $q$ is not an upper bound for $E$: there is some $e\in E$ such that $e>q$. This will show that $\alpha$ is the least upper bound of $E$ in $\Bbb Q$, i.e., that $\alpha=\sup_{\Bbb Q}E$. |
Inequalities $\pi(x^a+y^b)^\alpha\leq \pi(x^c)^\beta+\pi(y^d)^\gamma$ involving the prime-counting function, where the constants are very close to $1$ | I guess these constants lead the wrong way toward the second Hardy–Littlewood conjecture. Indeed, according to Wikipedia, for $x\ge L_0=355991$ we have
$$\frac x{\log x} \left( 1+\frac 1{\log x}\right)<\pi(x)< \frac x{\log x} \left( 1+\frac 1{\log x}+\frac {2.51}{(\log x)^2}\right).$$
Then for $x,y\ge L_0$ the difference between the left-hand side and the right-hand side of the conjectured expression is not so big. Namely,
$$\pi(x+y)-\pi(x)-\pi(y)\le$$ $$\frac {x+y}{\log (x+y)} \left( 1+\frac 1{\log (x+y)}+\frac {2.51}{(\log (x+y))^2}\right)- \frac x{\log x} \left( 1+\frac 1{\log x}\right)-\frac y{\log y} \left( 1+\frac 1{\log y}\right)\le$$
$$\frac {x+y}{\log (x+y)} \left( 1+\frac 1{\log (x+y)}+\frac {2.51}{(\log (x+y))^2}\right)- \frac x{\log (x+y)} \left( 1+\frac 1{\log (x+y)}\right)-\frac y{\log (x+y)} \left( 1+\frac 1{\log (x+y)}\right)=$$
$$\frac {2.51(x+y)}{(\log (x+y))^3}.$$
Thus I guess that if $\max\{a,b\}\alpha<\min\{c\beta, d\gamma\}$ then the right-hand side of the expression from your question grows asymptotically faster than the left-hand side and so the inequality holds for sufficiently big $L$.
So a way to the conjecture is to obtain better upper bounds for $\pi(x+y)-\pi(x)-\pi(y)$ than $\frac {2.51(x+y)}{(\log (x+y))^3}$ and I guess advances in this direction can be found in papers related to the conjecture.
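As a crude numerical illustration of the bound above (a sketch assuming SymPy; the values of $x,y$ are arbitrary choices above $L_0$):

```python
# Compare pi(x+y) - pi(x) - pi(y) with 2.51*(x+y)/log(x+y)^3.
from sympy import primepi
from math import log

x = y = 360000  # both above L_0 = 355991
lhs = primepi(x + y) - primepi(x) - primepi(y)
rhs = 2.51 * (x + y) / log(x + y) ** 3
print(lhs, rhs)  # lhs is comfortably below rhs
```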
Remark that Trudgian’s upper bound for the difference between $\pi(x)$ and $\operatorname{li}(x)=\int_0^x \frac {dt}{\ln t}$,
$$|\pi(x)- \operatorname{li}(x)|\le f(x)= 0.2795\frac {x}{(\log x)^{3/4}}\exp\left(-\sqrt{\frac{\log x}{6.455}} \right)$$
for $x\ge 229$ implies that for $x,y\ge 229$ we have
$$\pi(x+y)-\pi(x)-\pi(y)\le$$ $$\operatorname{li}(x+y)- \operatorname{li}(x)- \operatorname{li}(y)+f(x)+f(y)+f(x+y)\le$$ $$
\frac 1{\log (x+y-1)}-2\operatorname{li}(1)+ f(x)+f(y)+f(x+y).$$ |
Integrate $\int_{-\infty}^{\infty} u(uu')'\,dx$ | Your integration by parts uses $v=uu^\prime$ to get$$[u^2u^\prime]_{-\infty}^\infty-\int_{-\infty}^\infty uu^{\prime2}dx,$$but without more information you can't evaluate the last integral in general. |
Strong closure of a C*-algebra of operators. | The strong operator topology on $\mathscr{B}(H)$ is not metrisable if the underlying Hilbert space $H$ is infinite-dimensional. Therefore it need not be sequential a priori. What the Kaplansky density theorem really tells you is that elements in the SOT-closure of a C*-algebra are limits of (norm) bounded nets. Moreover, if the Hilbert space is separable, then bounded nets can be replaced with bounded sequences. |
Multiply the i-esim column of a matrix for the i-esim element of a vector | Yes, $C=AB$ if $B=\begin{bmatrix}v_1&0&\cdots & 0 \\
0& v_2 & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots \\
0&0&\cdots & v_n
\end{bmatrix}$. |
Applying Green's theorem for a line integral of a vector field | Here is a clearer version. If $F(x,y) = (P(x,y),Q(x,y))$ then
$$ \int_C P\,dx + Q\,dy = \iint_D \left(\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}\right) dy dx, $$
where $D$ is the region enclosed by $C$. |
Is this why any column in matrix A is in set ColA? | Suppose we have a matrix $A$. Because the set Col $A$ is defined as the Span of the columns of $A$, then
$$
\mathbf{a_n} = 0\cdot\mathbf{a_1} + 0\cdot\mathbf{a_2} + \cdots + 1\cdot \mathbf{a_n} + \cdots + 0\cdot\mathbf{a_p}
$$
shows that $\mathbf{a_n}$ is a linear combination of the columns of $A$ and thus in Col $A$.
Subspaces of $\mathbb{R}^{\mathbb{N}}$ | You should also prove that if $f,g\in R_{bounded}^N$ and $r,s\in R$, then $rf+sg\in R_{bounded}^N$. Namely, that $R_{bounded}^N$ is closed by linear combinations.
This will follow from the triangle inequality $$|rf+sg|\leq |s||f|+|r||g|$$
and the boundedness of $f$ and $g$. |
Why do many calculators evaluate $(-0.5)!$ to $\sqrt\pi$? | Hint: $\Gamma(n)=(n-1)!= \int_0^\infty x^{n-1}e^{-x}\, \mathrm{d}x$ so $(-0.5)!=(0.5-1)!=\Gamma(0.5)=\int_0^\infty x^{-0.5}e^{-x}\, \mathrm{d}x=\int_0^\infty \frac{1}{\sqrt x}e^{-x}\, \mathrm{d}x$ now you should just prove the above value equals to $\sqrt \pi$. By the way, it's better to use the term "evaluate" or "calculate" instead of "solve", because we can't actually "solve" a number! |
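For completeness, the standard route is the substitution $x=t^2$, $\mathrm dx = 2t\,\mathrm dt$, which turns the integral into the Gaussian integral:
$$\int_0^\infty \frac{1}{\sqrt x}e^{-x}\, \mathrm{d}x = 2\int_0^\infty e^{-t^2}\, \mathrm{d}t = \sqrt\pi \, .$$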
The construction of a continuous function f(x) | Hint: Show that $\displaystyle\int_0^1 (x-a)^2f(x)\mathrm dx=0$. Then, $(x-a)^2f(x)\geqslant0$ for every $x$ in $(0,1)$, hence... |
Estimation, Upper limits, Lower limits | Hint: If the rods are correct to the nearest $0.2$ cm, then the first rod is in actuality somewhere between $2.4$ cm and $2.8$ cm. The second rod is actually somewhere between $3.3$ cm and $3.7$ cm. From this information, what do we know about the shortest the combined rod can be and the longest the combined rod can be? |
Does this equation hold in general? | Not an answer:
First, we can simplify the sums as follows:
$$
\sum_{a=0}^{n} \sum_{b=\max[0,a-(n-k)]}^{\min[k,a]} \sum_{c=\max[0,a-(n-k)]}^{\min[k,a]} a (n-a)!a!{k\choose{b}}{k\choose{c}}{n-k\choose{a-b}}{n-k\choose{a-c}}(-1)^{b+c}
\\=
\\
\sum_{a=0}^{n} \sum_{b=0}^{\infty} \sum_{c=0}^{\infty} a (n-a)!a!{k\choose{b}}{k\choose{c}}{n-k\choose{a-b}}{n-k\choose{a-c}}(-1)^{b+c}
\\=\\
\sum_{a=0}^{n}a (n-a)!a! \sum_{b=0}^{\infty} \sum_{c=0}^{\infty} {k\choose{b}}{k\choose{c}}{n-k\choose{a-b}}{n-k\choose{a-c}}(-1)^{b+c}
\\=\\
n!\sum_{a=0}^{n}a {\binom n a}^{-1} \sum_{b=0}^{\infty} \sum_{c=0}^{\infty} {k\choose{b}}{k\choose{c}}{n-k\choose{a-b}}{n-k\choose{a-c}}(-1)^{b+c}
$$
Now let's look at the inner sums (I'm representing them as generating functions; that's why I add the variable $x$. Later on, we can evaluate the generating function at $x=1$):
$$
\sum_{b=0}^{\infty} \sum_{c=0}^{\infty} {k\choose{b}}{k\choose{c}}{n-k\choose{a-b}}{n-k\choose{a-c}}(-1)^{b+c} x^b x^c
\\=\\
\sum_{b=0}^{\infty} \sum_{c=0}^{\infty}
\left((-1)^b {k\choose{b}} {n-k\choose{a-b}} x^b\right)
\left((-1)^{c}{k\choose{c}}{n-k\choose{a-c}} x^c \right)
\\=\\
\left(\sum_{b=0}^{\infty} (-1)^b {k\choose{b}} {n-k\choose{a-b}} x^b\right)
\left(\sum_{c=0}^{\infty} (-1)^{c}{k\choose{c}}{n-k\choose{a-c}} x^c \right)
\\=\\
\left(\sum_{b=0}^{\infty} (-1)^b {k\choose{b}} {n-k\choose{a-b}} x^b\right)^2
$$
We replace the inner sums by the easier formula:
$$
n!\sum_{a=0}^{n}a {\binom n a}^{-1} \left(\sum_{b=0}^{\infty} (-1)^b {k\choose{b}} {n-k\choose{a-b}} x^b\right)^2
$$
This sum however is, up to the factor $a$, equal to the sum in this question.
As the formulas and the result are similar the chances are that one can use the idea of the answer in the linked question to arrive at a result, or alternatively transform the sum. |
Prove that $A_1,B_1,C_1$ is collinear | Here's a vector approach.
With a view to invoking Menelaus' Theorem, we write
$$A_1 = a B + (1-a) C \qquad B_1 = b C + (1-b) A \qquad C_1 = c A + (1-c) B$$
and we hope to show
$$\frac{a}{1-a} \cdot \frac{b}{1-b} \cdot \frac{c}{1-c} = -1 \qquad (*)$$
Let $A^{\prime\prime} = M + \frac{1}{2}\frac{|A^\prime M|}{|AM|}(M-A)$ be the midpoint of $MA^\prime$. Since $\overrightarrow{A^{\prime\prime}A_1} \perp \overrightarrow{AM}$, we have
$$( A_1 - A^{\prime\prime} )\cdot( M - A ) = 0$$
so that
$$( \; a B + ( 1 - a ) C \; )\cdot( M - A ) = A^{\prime\prime}\cdot(M-A) = M\cdot(M-A) + \frac{1}{2}|AM||A^\prime M|$$
Thus,
$$a \; ( \; B - C \; )\cdot( M - A ) = ( M - C )\cdot(M-A) + \frac{1}{2}|AM||A^\prime M|$$
and
$$-(1-a) \; (\; B - C \; )\cdot( M - A ) = ( M - B )\cdot( M - A ) + \frac{1}{2}|AM||A^\prime M|$$
whence
$$\frac{a}{1-a} = - \frac{2\;\overrightarrow{CM}\cdot\overrightarrow{AM}+ |AM||A^\prime M|}{2\;\overrightarrow{AM}\cdot\overrightarrow{BM}+|AM||A^\prime M|}$$
Now, recall the Power of a Point theorem, which says that $M$ divides chords through it into sub-segments whose lengths have a constant product, denoted here as $p$:
$$|AM||A^\prime M| = |BM||B^\prime M| = |CM||C^\prime M| = p$$
This guarantees that the left-hand side of $(*)$ expands to a circular product:
$$
\left(- \frac{2\;\overrightarrow{CM}\cdot\overrightarrow{AM}+ p}{2\;\overrightarrow{AM}\cdot\overrightarrow{BM}+p}\right)
\left(- \frac{2\;\overrightarrow{AM}\cdot\overrightarrow{BM}+ p}{2\;\overrightarrow{BM}\cdot\overrightarrow{CM}+p}\right)
\left( - \frac{2\;\overrightarrow{BM}\cdot\overrightarrow{CM}+ p}{2\;\overrightarrow{CM}\cdot\overrightarrow{AM}+p}\right)
$$
which reduces to $-1$; according to Menelaus, $A_1$, $B_1$, $C_1$ are indeed collinear.
For now, I'll leave proof that $M^\prime$ lies on the circle as homework. |
Limit of a function in terms of nearness. | A point $x\in X$ is said to be near a set $A\subset X$ if every neighborhood of $x$ intersects $A$ at some point $a$.
Now isn’t this just $x\in\overline{A}$?
Yes.
$\lim_{x\to a}f(x)=L$ if and only if for every $A\subset\mathbb R$, $a\in\overline{A}$ implies that $L\in\overline{f(A)}$.
Does this statement really hold?
Yes. (I assume that $f$ is a function from $\Bbb R$ to a topological space).
Both answers are easy to check. |
Finite State Automata whose language is all palindromic strings of length 6 | For (1) you're right, assuming that your alphabet is finite. A finite language is always regular, so in particular a language where the length of a word is bounded will always be regular. You can reduce the size of your automaton a bit by combining all of the states that can't possibly lead to acceptance (which will happen once you read a 4th, 5th or 6th symbol that doesn't match the 3rd, 2nd or 1st) into a common discard state.
For (2) you're right too -- what you're being asked to prove is not true, so you cannot get through with a proof. (An easy counterexample is that the language $\{\mathtt 0,\mathtt 1\}^*$ is generated by a finite automaton, but it has $2^n$ elements of length $n$, which is supralinear.)
For (3): Perhaps you have learned the Myhill-Nerode theorem, which resolves this easily. Alternatively, the pumping lemma for regular languages can do it too, but is generally more cumbersome to work with in my opinion. |
Calculating Hom(A,B) | I guess there are probably intelligent things to say in general, but I don't really know what to say beyond "a homomorphism is specified by what it does to generators, and specifying the images of the generators gives a homomorphism if and only if all relations are sent to zero." So let's work through examples instead. I make heavy use of universal properties.
$\mathbb{Z}$ is the free abelian group on one generator. This means that $\text{Hom}(\mathbb{Z}, A)$ can be canonically identified with $A$ (where the identification sends a homomorphism to the image of $1$).
$\mathbb{Z}/p\mathbb{Z}$ is the free abelian group on a generator of order $p$. This means that $\text{Hom}(\mathbb{Z}/p\mathbb{Z}, A)$ can be canonically identified with the subgroup of all elements of $A$ of order (dividing) $p$.
$\mathbb{Q}$ is the colimit of its subgroups $\frac{1}{n} \mathbb{Z}$ with the obvious inclusions, so $\text{Hom}(\mathbb{Q}, A)$ can be canonically identified with the appropriate limit of copies of $\text{Hom}(\mathbb{Z}, A) \cong A$. The image of $1 \in \mathbb{Q}$ must be divisible, so if $A$ has no divisible elements, then $\text{Hom}(\mathbb{Q}, A) = 0$. If $A$ is torsion-free, then $\text{Hom}(\mathbb{Q}, A)$ can be canonically identified with the subgroup of divisible elements of $A$, since in this case the image of $1 \in \mathbb{Q}$ uniquely specifies a homomorphism.
That takes care of all combinations of $\mathbb{Z}, \mathbb{Z}/p\mathbb{Z}, \mathbb{Q}$, but it's probably worth saying some things about homomorphisms into these groups as well.
A homomorphism into $\mathbb{Z}$ has image $n \mathbb{Z}$ for some $n \ge 0$. If $n > 0$, then the image is isomorphic to $\mathbb{Z}$. Any surjection onto $\mathbb{Z}$ can be split, since $1 \in \mathbb{Z}$ can be sent to any element in its preimage, so any element of an abelian group $A$ which has nonzero image under a homomorphism $A \to \mathbb{Z}$ must have infinite order, and the subgroup it generates must be a direct summand of $A$, and this necessary condition is also sufficient.
A homomorphism $A \to \mathbb{Z}/p\mathbb{Z}$ necessarily factors through $A/pA$, which is a vector space over $\mathbb{F}_p$. Hence $\text{Hom}(A, \mathbb{Z}/p\mathbb{Z})$ can be canonically identified with the dual vector space $(A/pA)^{\ast}$ over $\mathbb{F}_p$.
Homomorphisms into $\mathbb{Q}$ seem somewhat complicated in general.
For more complicated examples, I guess the smart thing to do is write down short exact sequences and compute Ext groups. I don't know much about this, though, and I'm not sure the examples you're having trouble with merit techniques of this level. |
According to Newton’s law of cooling, the temperature u(t ) of an object satisfies the differential equation | The solution to the differential equation is given by
$$\Delta T(t)=\Delta T(0)e^{-kt}$$
where $\Delta T(t)=u(t)-T$, $\Delta T(0)=u_0-T.$
You have that
$$\Delta T(\tau)=\frac{\Delta T(0)}{2}=\Delta T(0)e^{-k\tau}\iff \frac{1}{2}=e^{-k\tau}$$
Take the natural logarithm of both sides to obtain:
$$\ln(2^{-1})=-k\tau\iff k\tau =\ln 2$$ |
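Consequently $k=\dfrac{\ln 2}{\tau}$, and the solution can be written in the suggestive form
$$u(t)=T+(u_0-T)e^{-t\ln 2/\tau}=T+(u_0-T)\,2^{-t/\tau},$$
which shows directly that the temperature difference halves every $\tau$ units of time.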
A $G_δ$ subset of $2^ω$ that is homeomorphic to $ω^ω$ | As noted in a comment to the question, a previous answer by Brian M. Scott provides one method to demonstrate that the particular subset $G$ is homeomorphic to the Baire space. In particular, he uses the following characterisation of the Baire space:
The [Baire space $\omega^\omega$] is (up to homeomorphism) the unique zero-dimensional, separable, Čech-complete metrizable space that is nowhere locally compact.
He then shows that $G$ has all of these properties to conclude that it is homeomorphic to the Baire space without explicitly constructing a homeomorphism.
If you don't have this "pile-driver" handy, you'll probably have to construct the homeomorphism by hand.
First, to show that $G$ is $G_\delta$, note that $G = \bigcap_n U_n$ where $$U_n := \{ \mathbf{x} \in 2^\omega : \mathbf{x}\text{ switches between }0\text{ and }1\text{ at least }n\text{ times}\}.$$ (So $\mathbf{x} \in U_n$ iff $\mathbf{x}$ has an initial segment of the form $0^{k_0} 1^{k_1} \cdots b^{k_n}$ or $1^{k_0} 0^{k_1} \cdots b^{k_n}$ where each $k_i > 0$, and $b$ is the appropriate bit.)
To construct the homeomorphism, note that given any $\mathbf{x} \in G$, we may write it as an infinite concatenation as $$\mathbf{x} = 0^{k_0} 1^{k_1} 0^{k_2} 1^{k_3} \cdots$$ where $k_0 \geq 0$, and $k_i > 0$ for all $i > 0$. Using this we define a function $\varphi : G \to \omega^\omega$ as follows: $$\varphi ( \mathbf{x} ) = ( k_0 , k_1 - 1 , k_2 - 1 , k_3 - 1 , \ldots ).$$
It is not too difficult to show that this function is a homeomorphism from $G$ onto the Baire space.
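To get a feel for $\varphi$, here is a small sketch computing its value on a finite (nonempty) prefix; the helper name is my own, and of course only finitely many coordinates can be read off a prefix:

```python
# Run-length encode a nonempty 0/1 prefix as in the definition of phi.
from itertools import groupby

def phi_prefix(bits):
    runs = [len(list(g)) for _, g in groupby(bits)]
    if bits[0] == 1:
        runs = [0] + runs  # k_0 = 0 when the sequence starts with 1
    return [runs[0]] + [k - 1 for k in runs[1:]]

print(phi_prefix([0, 0, 1, 0, 1, 1, 1]))  # [2, 0, 0, 2]
```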
As a final note, it is somewhat superfluous to show that $G$ is $G_\delta$ since it is a theorem that any completely metrizable subspace of a completely metrizable space must be a $G_\delta$ subset of that space. That is, the existence of the homeomorphism between $G$ and the Baire space shows that $G$ is a $G_\delta$ subset of the Cantor space. (Also any subspace of the Cantor space which is homeomorphic to the Baire space is a $G_\delta$ subset of the Cantor space; e.g., the subspace provided in Arthur Fischer's answer to the same question.)
What is this operator called? | This operation ${\rm Ops}(4)$ is called tetration, from the Greek root tetra meaning four; it's also sometimes called a "power tower". There are also many further generalizations of this type of sequence; Knuth's up-arrow notation gives $a^{a^{a^a}}=a\uparrow\uparrow4$, so that $a\uparrow\uparrow n$ is the tetration operation. By adding more arrows you get pentation and so on, and the Conway chained arrow notation generalizes this still further.
FYI, for "to the power of-ation" the word you're looking for is exponentiation. |
What are the lim sups and lim infs of a sequence? | This is probably what you're looking for:$$
\liminf_{n \to \infty} \{a_{n}\} \leq
\liminf_{k \to \infty} \{a_{n_k}\} \leq
\limsup_{k \to \infty} \{a_{n_k}\} \leq
\limsup_{n \to \infty} \{a_{n}\}
$$ |
Composition of two rotations of the same angle $\alpha$ fixing points $a,b \in{S^2}$ | I woke this morning with the answer in my head. Look at this picture in the plane:
Consider the point $P$. Under a rotation of angle $\alpha$ about $A$ (clockwise), $P$ will move to $Q$. When we then rotate about $B$ clockwise by $\alpha$, $Q$ will move back to $P$. The net result? The point $P$ ends up fixed by the sequence of two rotations.
Now imagine that this is really an overhead shot of a sphere. We're looking down on a portion of the great circle between $A$ and $B$; that portion is drawn in blue. The orthogonal great circle, through the midpoint of the blue segment, is drawn in orange (or at least a portion of it is!). Rotation of the sphere about $A$ by angle $\alpha$ will again take $P$ to $Q$, and the same argument applies to $B$, and we're done.
You might say "But what if $A$ and $B$ are far apart, like $3\pi/2$ apart?" Then we can look from the other side, where they'll only be $\pi/2$ apart. For any two points that are not antipodal, there's some hemisphere whose central great circle contains both, with the center point of the hemisphere being the midpoint of $A$ and $B$ along the great circle. Once you've got that, this picture is all you need.
(More formal argument: under stereographic projection from the sphere to the plane tangent to the (spherical) midpoint of $A$ and $B$, the figure described in the problem becomes the figure drawn in this plane. Since stereographic projection is conformal, angles are preserved, and great circles map to either great circles or lines, etc.) |
Convergence of Sequence of Solutions to Elliptic Equation | I suppose that $\Omega$ is a bounded open set and that the constant $\Lambda$ does not depend on $n$, that is
$$
a^{ij}_n\le \Lambda \qquad \forall n
$$
Then the thesis follows from
\begin{equation}
\begin{split}
\int_\Omega (a^{ij}_n\partial_ju_n-a^{ij}\partial_ju)&= \int_\Omega (a^{ij}_n\partial_ju_n-a^{ij}_n\partial_ju) +\int_\Omega (a^{ij}_n\partial_ju-a^{ij}\partial_ju)\\
& =\int_\Omega a^{ij}_n(\partial_ju_n-\partial_ju) + \int_\Omega (a^{ij}_n-a^{ij})\partial_ju \\
&\le \Lambda||\partial_ju_n-\partial_ju||_{L^1}+ ||\partial_ju||_{L^2}||a^{ij}_n-a^{ij}||_{L^2}\to_{n\to\infty}0
\end{split}
\end{equation}
by hypothesis and by the dominated convergence theorem.
Find conditions on a,b,c and d in matrix B that commute with both matrices C and D | Let's look at the first matrix first. You get
$$\left[\begin{array}{cc} a & b\\c & d\end{array}\right]\left[\begin{array}{cc}1 &0\\0& 0\end{array}\right] = \left[\begin{array}{cc}1 &0\\0& 0\end{array}\right]\left[\begin{array}{cc} a & b\\c & d\end{array}\right]$$
or
$$\left[\begin{array}{cc}a &0\\c& 0\end{array}\right]=\left[\begin{array}{cc}a &b\\0& 0\end{array}\right].$$
Since all entries have to match up, we must have $a=a$, $b=0$, $c=0$, and $0=0$. The only informative equations here are $b=0$ and $c=0$.
Now, if you repeat the above for the second matrix, what do you get? |
Does $L_1$ convergence of continuous functions imply pointwise convergence? | Consider $f_n(x) = (-1)^n x^n.$ Then $f_n\in C[0,1],$ and $f_n \to 0$ in $L^1,$ but $f_n(1)$ does not converge. |
locally compact, Hausdorff, second-countable $\Rightarrow$ paracompact | I wonder why Warner claims that $\overline{G_i}$ is compact.
By the definition each $G_i$ is a union of some finite subcollection of $\{U_k\}$. And each $U_k$ is relatively compact, again by the definition.
And so the conclusion follows from two facts:
- the union of closures is the closure of the union, for finite collections;
- a finite union of compact subsets is compact.
Filtered Colimit of associative $k$-algebras that are domains | I don't know of any proof of this statement that would be easier for commutative algebras than for noncommutative algebras. If you have a diagram $F:C\to k\text{-Alg}$ where $C$ is filtered, $F(i)$ is a domain for each object $i$ of $C$, and $X$ is its colimit, suppose $x,y\in X$ are such that $xy=0$. Then there is some $i$ such that $x$ and $y$ both lift to elements $\bar{x}$ and $\bar{y}$ of $F(i)$, and moreover such that $\bar{x}\bar{y}=0$. The fact that $F(i)$ is a domain now implies $\bar{x}=0$ or $\bar{y}=0$, so $x=0$ or $y=0$.
Centroid of a region | The region you are interested in is the blue shaded region shown in the figure below.
The coordinates of the centroid, denoted $(x_c,y_c)$, are given by $$x_c = \dfrac{\displaystyle \int_R x dy dx}{\displaystyle \int_R dy dx}$$ $$y_c = \dfrac{\displaystyle \int_R y dy dx}{\displaystyle \int_R dy dx}$$
where $R$ is the blue colored region in the figure above.
Let us compute the denominator in both cases i.e. $\int_R dy dx$. Note that this is nothing but the area of the blue region. Hence, we get that
\begin{align}
\int_R dy dx & = \int_{x=0}^{x=1} \int_{y=0}^{y=x^3} dy dx + \int_{x=1}^{x=2} \int_{y=0}^{y=2-x} dy dx = \int_{x=0}^{x=1} x^3 dx + \int_{x=1}^{x=2} (2-x) dx\\
& = \left. \dfrac{x^4}{4} \right \vert_{0}^{1} + \left. \left(2x - \dfrac{x^2}2 \right)\right \vert_{1}^{2} = \dfrac14 + \left( 2 \times 2 - \dfrac{2^2}{2} \right) - \left(2 - \dfrac12 \right) = \dfrac14 + 2 - \dfrac32 = \dfrac34
\end{align}
Now let's compute the numerator for both cases.
To find $x_c$, we need to evaluate $\int_R x dy dx$. We get that
\begin{align}
\int_R x dy dx & = \int_{x=0}^{x=1} \int_{y=0}^{y=x^3} x dy dx + \int_{x=1}^{x=2} \int_{y=0}^{y=2-x} x dy dx = \int_{x=0}^{x=1} x^4 dx + \int_{x=1}^{x=2} x(2-x) dx\\
& = \left. \dfrac{x^5}{5} \right \vert_{0}^{1} + \left. \left( x^2 - \dfrac{x^3}{3}\right) \right \vert_1^2 = \dfrac15 + \left( 2^2 - \dfrac{2^3}3\right) - \left( 1^2 - \dfrac{1^3}3\right) = \dfrac15 + \dfrac43 - \dfrac23 = \dfrac{13}{15}
\end{align}
To find $y_c$, we need to evaluate $\int_R y dy dx$. We get that
\begin{align}
\int_R y dy dx & = \int_{x=0}^{x=1} \int_{y=0}^{y=x^3} y dy dx + \int_{x=1}^{x=2} \int_{y=0}^{y=2-x} y dy dx\\
& = \int_{x=0}^{x=1} \left. \dfrac{y^2}{2} \right \vert_0^{x^3} dx + \int_{x=1}^{x=2} \left. \dfrac{y^2}{2} \right \vert_{0}^{2-x} dx\\
& = \int_{x=0}^{x=1} \dfrac{x^6}{2} dx + \int_{x=1}^{x=2} \dfrac{(2-x)^2}{2} dx = \left. \dfrac{x^7}{14} \right \vert_{0}^{1} + \left. \dfrac{(x-2)^3}{6} \right \vert_{1}^{2}\\
& = \dfrac1{14} + \left( \dfrac{(2-2)^3}{6} - \dfrac{(1-2)^3}{6} \right) = \dfrac1{14} + \dfrac16 = \dfrac5{21}
\end{align}
Hence, $$x_c = \dfrac{\displaystyle \int_R x dy dx}{\displaystyle \int_R dy dx} = \dfrac{13/15}{3/4} = \dfrac{52}{45}$$ $$y_c = \dfrac{\displaystyle \int_R y dy dx}{\displaystyle \int_R dy dx} = \dfrac{5/21}{3/4} = \dfrac{20}{63}$$ |
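A quick symbolic cross-check of these numbers (a sketch assuming SymPy):

```python
# Verify area = 3/4, x_c = 52/45, y_c = 20/63 for the region R.
from sympy import symbols, integrate

x, y = symbols('x y')
pieces = [((y, 0, x**3), (x, 0, 1)), ((y, 0, 2 - x), (x, 1, 2))]
area = sum(integrate(1, *lims) for lims in pieces)
Mx = sum(integrate(x, *lims) for lims in pieces)
My = sum(integrate(y, *lims) for lims in pieces)
print(area, Mx / area, My / area)  # 3/4 52/45 20/63
```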
Find $\lim _{x \to 0} \cot(3x)\sin(4x)$ | Hint: $\cot(3x)\sin(4x)=\cos(3x)\,\dfrac{x}{\sin(3x)}\,\dfrac{\sin(4x)}{x}$
Use the fact that $\lim_{x\rightarrow 0}\dfrac{\sin(x)}{x}=1$.
Finite variation continuous function, enough to look at rational points? | Let $r>0$. Then corresponding to $r>0$ there exists a partition $P=\{0=a_0<a_1<a_2<\dots<a_n=T\}$ such that
$\sum_{i=0}^{n-1}|f(a_{i+1})-f(a_i)|>K-r$.
Since $\Bbb Q$ is dense in $\Bbb R$, for each $a_i$ with $0\le i\le n$ and each $m\in \Bbb N$ there exists $q_i\in \Bbb Q$ such that $|a_i-q_i|<\frac{1}{m}$.
Also $f$ is continuous, so by taking $m$ large enough we can make each $|f(a_i)-f(q_i)|$ as small as we please.
Then the new partition becomes $P'=\{0=q_0<q_1<q_2<\dots<q_n=T\}$.
Hence the sum $\sum_{i=0}^{n-1}|f(q_{i+1})-f(q_i)|$ can be made $\epsilon$-close to $\sum_{i=0}^{n-1}|f(a_{i+1})-f(a_i)|$ for any $\epsilon>0$, depending on our choice of the $q_i$'s. Taking $\epsilon$ smaller than the slack in the first inequality, we conclude
$\sum_{i=0}^{n-1}|f(q_{i+1})-f(q_i)|>K-r.$
Particular Integral of $\frac{\partial^2 u}{\partial x^2}+2 \frac{\partial^2 u}{\partial x\partial y}+\frac{\partial^2 u}{\partial y^2}=x$. | Hint
If we set
$$u(x,y)=\frac{1}{6}x^3+Ax+By+C$$
then it will be a solution for any real values of $A$, $B$, and $C$. |
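One can confirm this by direct differentiation, or with a quick symbolic check (a sketch assuming SymPy):

```python
# Check u_xx + 2 u_xy + u_yy = x for u = x^3/6 + A*x + B*y + C.
from sympy import symbols, diff, Rational

x, y, A, B, C = symbols('x y A B C')
u = Rational(1, 6) * x**3 + A * x + B * y + C
print(diff(u, x, 2) + 2 * diff(u, x, y) + diff(u, y, 2))  # prints x
```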
Showing a polynomial does not belong to an ideal in $k[x,y,z]$ | Hint: Look at the degrees of the polynomials on both sides of the equation you get after assuming that $x$ is in the ideal. |
Definition of a structure | You can see Peano arithmetic and the corresponding structure of natural numbers:
$(\mathbb N, 0, S, +, \times)$.
Here $\mathbb N = \{ 0,1,2,\ldots \}$ is the domain $D$ and $0$ is the only individual constant $c$ denoting the number zero.
$S(x)$ is a function symbol that is interpreted with the successor function, i.e. $\text {Succ} : \mathbb N \to \mathbb N$.
Finally, $+$ and $\times$ are binary function symbols, interpreted with sum and product respectively.
If we want to consider also an example of relation, we have to consider the binary predicate symbol $<$, that will be interpreted with the "less than" relation, i.e. $\text{less} \subseteq \mathbb N \times \mathbb N$.
Inequality that came up on the power series | $$\delta < \frac{r^2}{c} \Rightarrow \frac{c\delta}{r^2} < 1 \Rightarrow \sum_{n=1}^N\Big(\frac{c\delta}{r^2}\Big)^n < \sum_{n=0}^{\infty}\Big(\frac{c\delta}{r^2}\Big)^n = \frac{1}{1-\frac{c\delta}{r^2}} = \frac{r^2}{r^2-c\delta}$$
How to solve triple integral? | You have
$$\int \dfrac{1}{ax+b}\,dx=\dfrac{\ln(ax+b)}{a}+C.$$
Then
$$\int_{-4}^{5} \dfrac{dx}{5x+8z+80} = \left. \dfrac{\ln(5x+8z+80)}{5} \right |^{x=5}_{x=-4} = \dfrac{\ln(8z+105)}{5} -\dfrac{\ln(8z+60)}{5}. $$
Then you have
$$
\int_{-4}^5 \left(\dfrac{\ln(8z+105)}{5} -\dfrac{\ln(8z+60)}{5} \right)dy = \dfrac{9}{5}\left( \ln(8z+105) -\ln(8z+60) \right),
$$
since the integrand does not depend on $y$.
The last integral is
$$ \int_{-2}^5 \dfrac{9}{5}\left( \ln(8z+105) -\ln(8z+60) \right)dz,$$
which is very ugly. Again, you have a 'simple form' of this integral
$$ \int \ln(ax+b) \, dx= \dfrac{(ax+b) \ln(ax+b)-ax}{a} + C. $$
For the first logarithmic term
\begin{multline}
\int_{-2}^5 \ln(8z+105) \,dz = \dfrac{(40+105)\ln(40+105)-40}{8} \\ - \dfrac{(-16+105)\ln(-16+105)+16}{8}.
\end{multline}
And as you can see, the algebra is similarly tedious for the remaining term.
Solving recurrences whose characteristic equations have complex roots | I suppose that the recurrence has real coefficients. Then you know more in that with $r_1=α+βi$ you get $r_2=α-βi=\bar r_1$.
Update: Then from the reality of the first two sequence elements one gets
$α_1+α_2=\bar α_1+\bar α_2$ and $α_1·r+α_2·\bar r=\bar α_1·\bar r+\bar α_2·r$ which can be summarized as
\begin{alignat}{4}
&(α_1-\bar α_2)&&+(α_2-\bar α_1)&&=0\\
&(α_1-\bar α_2)·r&&+(α_2-\bar α_1)·\bar r&&=0\\
\end{alignat}
and since $r\ne \bar r$ this homogeneous system only has the zero solution $α_1-\bar α_2=0$, $α_2-\bar α_1=0$ so that $α_1=α$ and $α_2=\bar α$.
Thus any real solution has the form
$$
a_k=αr^k+\bar α\bar r^k=2Re(αr^k)=2Re(α)Re(r^k)-2Im(α)Im(r^k)
\\=Re(α)(r^k+\bar r^k)+iIm(α)(r^k-\bar r^k)
$$ |
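As a concrete illustration (my own example: the recurrence $a_{k+2}=2a_{k+1}-2a_k$, whose characteristic roots are $1\pm i$), one can check numerically that real initial data force $α_2=\bar α_1$:

```python
# Solve for alpha_1, alpha_2 from real initial values a_0, a_1.
import numpy as np

r = 1 + 1j                      # one characteristic root; the other is conj(r)
a0, a1 = 1.0, 3.0               # real initial values
M = np.array([[1, 1], [r, np.conj(r)]])
alpha1, alpha2 = np.linalg.solve(M, np.array([a0, a1]))
print(alpha1, alpha2)           # complex conjugates: (0.5-1j) and (0.5+1j)
print((alpha1 * r**5 + alpha2 * np.conj(r)**5).real)  # -12.0, matching iteration
```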
Two definitions of order embedding, and the reason why one is correct and the other is not | There is a duality to all these definitions. Sometime reflexivity makes things simpler, sometimes it makes them more complicated.
For example, $A$ is an antichain if for every $a,b\in A$, if $a\leq b$ then $a=b$. But in the strict order, $A$ is an antichain if for every $a,b\in A$, $a\nless b$.
Similarly, $A$ is a chain if for every $a,b\in A$ either $a\leq b$ or $b\leq a$. But in the strict order, $A$ is a chain if for every $a,b\in A$ either $a<b$ or $b<a$ or $a=b$.
In the definition of an order embedding when talking about a reflexive case, injectivity is a theorem. But then when you move to strict orders, you need to require it explicitly. As you noted.
Of course, some conditions let you omit injectivity (e.g. a linear order, or more generally extensionality). But in general, you need to require it explicitly. |
Are there polynomials in $\mathbb{R}[x]$, other than $P(x)=x^2$, such that $P(\sin x)+P(\cos x)=1$ for all real $x$? | This has been solved on Art of Problem Solving: Polynomial - TUYMAADA-2000.
Assume that $P$ is a polynomial satisfying
$$ \tag{*}
\forall x \in \Bbb R: P(\sin x) + P(\cos x) = 1 \, .
$$
First show that $P(-y) = P(y)$ for all $y= \sin(x) \in [-1, 1]$ and conclude that $P$ is even, i.e. $P(x) = Q(x^2)$ for some polynomial $Q$.
Then show that $Q(\frac 12 +y) + Q(\frac 12 - y) = 1$ for all $y = \sin^2(x) - \frac 12 \in [-1/2, 1/2]$, and conclude that $Q(\frac 12 +y) - \frac 12$ is odd, i.e. $Q(\frac 12 +y) - \frac 12 = y R(y^2)$ for some polynomial $R$.
It follows that
$$
P(x) = (x^2- \frac 12) R((x^2 - \frac 12)^2) + \frac 12
$$
for a polynomial $R$. Conversely, every such polynomial satisfies $(*)$. |
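For a concrete non-trivial example, take $R(t)=t$: then
$$
P(x)=\left(x^2-\tfrac 12\right)^3+\tfrac 12 \, ,
$$
and since $\cos^2 x-\frac 12=-\left(\sin^2 x-\frac 12\right)$, the two cubes cancel and $P(\sin x)+P(\cos x)=1$ indeed.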
Arithmetic and geometric progression question (too long for title) | Here are some thoughts:
Your geometric progression could be $a, ar, ar^2$, with sum $65$. Also $a-1, ar, ar^2-19$ is an arithmetic progression, with sum $65-1-19=45$. Hence the middle number, $ar$, is $45/3=15$.
Linear space with basic vectors of outer space | It is a little unclear what you are asking, but I will give it a shot.
Space-time from relativity is an example of a 4 dimensional space where 3 of the dimensions come from outer space. You can pick any basis for outer space. Usually you pick ordinary mutually perpendicular unit vectors. You pick a unit vector in the time-like direction for the 4th basis vector. Any vector in space-time has 4 coordinates (x,y,z,t).
You can arrange the coordinates differently, (t,x,y,z). But the number is always the same. Every n-dimensional vector space has n basis vectors and n coordinates.
Infinite dimensional spaces are a little tricky. A typical space contains only vectors with a finite length. You need an infinite number of basis vectors and an infinite number of coordinates to describe an infinite dimensional space. Given the usual metric, (1,1,1,1,...) is not a vector in that space. It would have an infinite length. (1, 1/2, 1/4, ...) is a vector.
If you embed a 3 dimensional space in an infinite dimensional space, 3 of the coordinates are for the embedded space. I don't know of any physics reason to do this. Infinite dimensional spaces that physicists use have other uses, such as Fourier Analysis. These do not have outer space embedded in them. |
A basic confusion in the proof of Picard's existence theorem | There is a typo: the author meant $|x(t)-x_0|\le b$.
And this inequality is used in the next sentence: "Now the Lipschitz condition on $f$ implies..." The Lipschitz condition for $f$ was assumed to hold on the rectangle $R$, so we must check that the things we put into $f$ are within this rectangle. |
Expectation of the number of times a coin is thrown until the appearance of a second "tail" | Since André has given the complete answer, I will finish mine. This relates to the comment by A.S.
The expectation would be
$$
\begin{align}
\sum_{k=2}^\infty k\binom{k-1}{1}p^2(1-p)^{k-2}
&=2p^2\sum_{k=2}^\infty\binom{k}{2}(1-p)^{k-2}\\
&=2p^2\sum_{k=2}^\infty\binom{k}{k-2}(1-p)^{k-2}\\
&=2p^2\sum_{k=2}^\infty(-1)^{k-2}\binom{-3}{k-2}(1-p)^{k-2}\\
&=2p^2\sum_{k=0}^\infty(-1)^k\binom{-3}{k}(1-p)^k\\
&=2p^2\frac1{(1-(1-p))^3}\\
&=\frac2p
\end{align}
$$
where $\binom{-3}{k-2}$ is a negative binomial coefficient. |
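If you want to convince yourself numerically, here is a quick Monte Carlo sketch (with $p=0.3$ as an arbitrary choice):

```python
# Estimate the expected number of tosses until the second tail.
import random

p, trials = 0.3, 200_000
total = 0
for _ in range(trials):
    tails = tosses = 0
    while tails < 2:
        tosses += 1
        tails += random.random() < p
    total += tosses
print(total / trials, 2 / p)  # both close to 6.67
```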
For $g(x) = 1/x$ extended to complex-values, what is antiderivative of $g$? | DISCLAIMER: Since OP edited his original question, changing it completely, the following post doesn’t seem to answer the question anymore.
A complex function $g:\Omega \to \mathbb{C}$ ($\Omega \subseteq \mathbb{C}$ open) has an antiderivative iff $\oint_\gamma g(z)\ \text{d} z = 0$ for any closed path $\gamma$ contained in $\Omega$. Your $g$ does not satisfy this assumption in $\Omega = \mathbb{C} \setminus \{ 0\}$, therefore it fails to have an antiderivative.
On the other hand, your $g$ possesses a multi-valued antiderivative, namely $G(z) = \log z = \ln |z| + \imath\ \operatorname{arg} z$ (here $\ln$ stands for the real natural logarithm): in fact, each single-valued branch $\phi$ of $G$ is holomorphic and satisfies $\phi^\prime (z) = \frac{1}{z} = g(z)$ for each $z$ in its domain. |
Definite integral of $\frac{\sin(x)}{x}$ | For large $a$, you can do asymptotics and find
$$
\int_{-a}^a \frac{\sin x}{x}dx \sim \pi - \left( \frac2{a} - \frac4{a^3}\right)\cos a - \left( \frac2{a^2} - \frac{12}{a^4}\right)\sin a
$$
For $a$ as large as 400, this will be accurate to better than a part in a million. |
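A quick check of that claim (a sketch assuming mpmath, whose `si` is the sine integral $\operatorname{Si}$):

```python
# Compare the exact integral 2*Si(a) with the asymptotic formula at a = 400.
from mpmath import mp, si, cos, sin, pi, mpf

mp.dps = 25
a = mpf(400)
exact = 2 * si(a)  # the integrand is even
approx = pi - (2/a - 4/a**3) * cos(a) - (2/a**2 - 12/a**4) * sin(a)
print(exact, approx, abs(exact - approx))  # difference of order 1e-12
```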