Intersection of cone and cylinder layout formula for sheet metal application | Place the cone with the vertex at the origin, with axis along $Oz$, and with opening angle $\beta$ (which can be computed from the diameters and height of the truncated cone). The equation is then
$x^2 + y^2 = z^2 \tan^2\beta$.
Now take a cylinder of radius $r$, also with axis $Oz$. It can be parametrized as $x = r\cos{u}, y=r\sin{u}, z=v$. A curve on the cylinder corresponds to a curve in the parameter space $(u,v)$; something like $v = f(u)$ would determine the curve as $(r\cos{u}, r\sin{u}, f(u))$.
Now rotate the cylinder an angle $\alpha$ about the $x$-axis and shift it along the $z$-axis by $h$, depending on the position of the axis of the cylinder and angle with the axis of the cone. The new parametrization looks something like
$
x = r\cos{u},
y = r\cos{\alpha}\sin{u} - v \sin{\alpha},
z = r\sin{\alpha}\sin{u} + v\cos{\alpha} + h.
$
The intersection between the cylinder and the cone gives a quadratic equation in $v$. Not pretty, but solvable, with an explicit formula for a solution in terms of $u$ and parameters $\alpha$, $\beta$, $r$, $h$.
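For a practical layout you can just solve that quadratic numerically. Below is a minimal sketch (NumPy; the values of $\alpha$, $\beta$, $r$, $h$ are placeholders, not from the original problem). The resulting $(u,v)$ pairs are the flat-pattern curve on the unrolled cylinder: arc length $ru$ along the circumference, $v$ along the axis.

```python
import numpy as np

# Placeholder parameters: cone half-angle beta, cylinder radius r,
# tilt alpha about the x-axis, shift h along the z-axis.
beta, r, alpha, h = np.radians(30), 1.0, np.radians(20), 5.0
T = np.tan(beta) ** 2

u = np.linspace(0, 2 * np.pi, 361)
s, c = np.sin(u), np.cos(u)

# Substituting the rotated cylinder into x^2 + y^2 = z^2 tan^2(beta)
# and collecting powers of v gives A v^2 + B v + C = 0:
A = np.sin(alpha) ** 2 - T * np.cos(alpha) ** 2   # assumes A != 0
B = (-2 * r * np.sin(alpha) * np.cos(alpha) * s
     - 2 * T * np.cos(alpha) * (r * np.sin(alpha) * s + h))
C = (r * c) ** 2 + (r * np.cos(alpha) * s) ** 2 \
    - T * (r * np.sin(alpha) * s + h) ** 2

disc = B ** 2 - 4 * A * C            # negative where the line misses the cone
v = (-B - np.sqrt(disc)) / (2 * A)   # one branch of the intersection (NaN = miss)
```

Plotting $ru$ against $v$ then gives the wrap-around template directly. |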
Proof that $\sup (A^2) = (\sup A)^2$. | Hint
Let $ B $ be a non-empty subset of $\Bbb R$.
To prove that $ M=\sup B $, you just need to prove that $ M $ is an upper bound of $ B $ and that there exists a sequence $ (b_n)$ of elements of $ B $ which converges to $ M $.
Use the implication you might know
$$\lim_{n\to+\infty}b_n=M\implies \lim_{n\to+\infty}b_n^2=M^2$$ |
Find the derivative of $F^{-1}$ if $ F(x) = \int_{1}^x \frac{1}{t}dt$ | The first formula you wrote: $(F^{-1})'(F(x)) = \frac{1}{F'(x)}$ is expressing $(F^{-1})'$ evaluated at $F(x)$; not $(F^{-1})'$ times $F(x)$.
Starting from $$(F^{-1})'(F(x)) = \frac{1}{F'(x)}$$ if we put $y = F(x)$, then $x = F^{-1}(y)$, so plugging this in gives $$(F^{-1})'(y) = \frac{1}{F'(F^{-1}(y))}$$ which is exactly the second equation. |
$A\subset B$ with non empty resolvent sets $\implies A=B$? | Claim:
Let $X$ be a Banach space. Let $A: D(A) \subseteq X \to X$ and $B: D(B) \subseteq X \to X$ be two closed linear operators.
Suppose that $\rho(A) \cap \rho(B) \neq \emptyset$, where $\rho(A)$ denotes the resolvent set of $A$.
Suppose that $D(A) \subseteq D(B)$ and $Au = Bu \; \forall u \in D(A)$. Then $A=B$.
Proof:
It is sufficient to show that $D(B) \subseteq D(A)$.
Let $u \in D(B)$ and take $\lambda \in \rho(A) \cap \rho(B)$.
Define $v:= \lambda u - B u$ and $w := (\lambda I - A)^{-1}v$, so that $w \in D(A)$.
Now observe that
$$
\lambda u - B u = v = (\lambda I -A)w = \lambda w - Aw = \lambda w- Bw,
$$
so that
$$
(\lambda I - B)(u-w) = 0
$$
Since $\lambda I- B$ is injective, this implies that $u=w$, so that $u \in D(A)$.
In particular, regarding your question, the reason that $A=B$ is not that $A$ and $B$ have non-empty resolvent sets, but that their resolvent sets intersect. |
argument of the complex number $1-\cos x-i\sin x$ | You can use the exponential form of a complex number. In those terms the argument is $\phi$:
$$
re^{i\phi}=1-e^{ix}.
$$
Then you can just say $re^{i\phi}=r\cos\phi+ir\sin\phi$ and use trigonometry. You have to solve $r\sin\phi=-\sin x$ and $r\cos\phi=1-\cos x$, so first solve for $r$ by eliminating $\phi$, then solve for $\phi$.
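A slightly quicker route to the same answer is to factor out a half-angle (a standard manipulation, valid for $0<x<2\pi$):
$$1-e^{ix}=-e^{ix/2}\left(e^{ix/2}-e^{-ix/2}\right)=-2i\sin\tfrac x2\,e^{ix/2}=2\sin\tfrac x2\,e^{i\left(\frac x2-\frac\pi2\right)},$$
so $r=2\sin\frac x2>0$ and $\phi=\frac{x-\pi}{2}$. |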
Can a lower-semi continuous function be convex? | For example, the following is a convex l.s.c. function from $\mathbb R$ to $]-\infty, \infty]$:
$$ f(x) = \cases{0 & if $|x| \le 1$ \cr
+\infty & otherwise}
$$ |
For which values of $a$ is $e^{-a\sqrt x}$ convex in $\mathbb R^+$? | Well, why not differentiate twice and check?
$$f(x):=e^{-a\sqrt x}\;,\;\;f'(x)=-\frac a{2\sqrt x}e^{-a\sqrt x}\;,$$
$$f''(x)=\frac{e^{-a\sqrt x}}{4x}a\left(\frac 1{\sqrt x}+a\right)$$
Now
$$f''(x) \geq 0\iff a\left(\frac1{\sqrt x}+a\right) \geq 0\iff\begin{cases} a \geq 0\;,\;\;\;\text{or}
\\{}\\a<0\;\;\text{and}\;\;\frac1{\sqrt x}\le-a\end{cases}$$
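If you want to double-check the differentiation, here is a small symbolic verification (a SymPy sketch; nothing in it is specific to the original question):

```python
import sympy as sp

a = sp.symbols('a', real=True)
x = sp.symbols('x', positive=True)

f = sp.exp(-a * sp.sqrt(x))
second = sp.diff(f, x, 2)

# The closed form for f''(x) claimed above:
claimed = sp.exp(-a * sp.sqrt(x)) / (4 * x) * a * (1 / sp.sqrt(x) + a)

print(sp.simplify(second - claimed))  # 0
```

In particular, for $a\ge 0$ the factor $a\left(\frac1{\sqrt x}+a\right)$ is nonnegative for every $x>0$, so $f$ is convex on all of $\mathbb R^+$; for $a<0$ the condition fails near $x=0$. |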
how do I assign -12 in base 10 to biased 32 in binary? | Biased $32$ is one of the ways to encode negative numbers in binary. See here for some of the other ways. Biased $32$ means that you are going to treat $-32_{10}$ as $000000$, $-31_{10}$ as $000001$, $-30_{10}$ as $000010$, etc. Notice that $-31-(-32) = 1$, which is the unsigned binary number we assign to it. So for $-12$, we need the unsigned binary number for $-12-(-32) = 20$. So $-12_{10}$ is $010100$.
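The bias trick is easy to script as a sanity check (a small sketch; the function names are mine, not standard):

```python
def encode_biased(n, bias=32, bits=6):
    """Store n as the unsigned value n + bias."""
    u = n + bias
    if not 0 <= u < 2 ** bits:
        raise ValueError("value out of range for this bias/width")
    return format(u, '0{}b'.format(bits))

def decode_biased(s, bias=32):
    return int(s, 2) - bias

print(encode_biased(-12))        # 010100
print(decode_biased('010100'))   # -12
```

The encoding is just unsigned binary shifted by the bias. |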
The derivative and the relation to the limit | The limit is the number that your difference quotient approaches but never actually reaches. What this basically means is that we can divide the real number line in two regions and the number that divides it in two regions is the numerical value of the limit. Thus, as we're approaching the number that represents our limit, we can potentially assume all possible values that are bounded by the limit. All other numbers that lie to the right (or the left) of the limit are out of our reach and thus cannot be the limit. |
Summing the surface area 25,000,000 disks? | You need to define what you are trying to calculate. From covering the horizon it sounds like you want to add up the angular diameters of all the asteroids. This would be assuming they are all lined up on the horizon without overlap and asking how much of the circle they cover. The data given is good on the population at each size, so we just need to use the distance to the asteroids. The figure of semi-major axes from Wikipedia shows the main belt asteroids average about $2.7$ AU $\approx 4\cdot 10^8$ km. We can use this as the distance from earth, though it will vary depending on whether the asteroid is on the same side of the sun as us. The figure you link shows $600k$ asteroids between $1$ and $3$ km, so they occupy about $6\cdot 10^5\frac 2{4\cdot 10^8}\approx 3\cdot 10^{-3}$ radian. You can go through the rest of the data points on the figure and add them up. By comparison, the moon occupies about $\frac {3400}{380000}\approx 9\cdot 10^{-3}$ radian as seen from earth, so we are $1/3$ of the way there with this bin alone.
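The bin-by-bin bookkeeping is easy to script. Here is the computation for the single $1$–$3$ km bin above (all numbers are the rough figures quoted in the answer, not precise data):

```python
import math

d = 4e8         # km, ~2.7 AU taken as the typical distance to the belt
count = 6e5     # asteroids between 1 and 3 km
diameter = 2.0  # km, a representative size for this bin

angle = count * diameter / d     # total angular width in radians, ~3e-3
print(angle)
print(angle / (2 * math.pi))     # fraction of the full horizon circle

moon = 3400 / 380000             # moon's angular diameter, ~9e-3 rad
print(angle / moon)              # ~1/3 of a moon from this bin alone
```

Repeating this for each size bin in the linked figure and summing gives the total coverage. |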
Show that given an irreducible module of heighest weight $0$ over a lie algebra has dimension $1$ | There is a unique irreducible highest weight representation with any given highest weight. The trivial representation is an irreducible representation of highest weight 0. |
Identifying the orbit space of the unitary group $U(n)$ in the compact symplectic group $Sp(n)$ | The homogeneous space $Sp(1) / U(1)$ is an example of compact classical symmetric space.
Here, $Sp(n)$ is the group of transformations that preserve the quaternionic structure of $\mathbb{H}^n$, and $U(n)$ is the group of transformations that preserve a given complex structure on $\mathbb{H}^n$ along with the usual inner product $(a, b) \mapsto \Re(a \bar{b})$ on $\mathbb{H}^n$. So, we can view $Sp(n) / U(n)$ as the space (the symmetric space of type CI) of complex structures on $\mathbb{H}^n$ that preserve the inner product.
In the case $n = 1$, like you say we can regard $Sp(1)$ as the group of unit quaternions in $\mathbb{H}$, which topologically is $\mathbb{S}^3$, and the $U(1)$-orbits are precisely the circles parameterized by $t \mapsto u \exp(i t)$. The projection $$Sp(1) \cong \mathbb{S}^3 \to \mathbb{S}^3 / \mathbb{S}^1 \cong Sp(1) / U(1) \cong \mathbb{S}^2$$ is this case is precisely the Hopf Fibration. In fact, we can explicitly identify the complex structures that preserve the inner product on $\mathbb{H}$: They are precisely the maps
$$x \mapsto (ai + bj + ck) x, \qquad a^2 + b^2 + c^2 = 1,$$ and by identifying respectively with the points $ai + bj + ck \in \mathbb{H}$ we can identify $Sp(1) / U(1) \cong \mathbb{S}^2$ as the set of imaginary quaternions of unit length. |
How to solve this equality? [3] | I have got $$8x^8+96x^6+62x^4-1=0$$ which can be solved by radicals: writing $t=-x^2$ turns it into the quartic
$$8t^4-96t^3+62t^2-1=0$$ |
Since every subset is open, every subset is also closed | By definition, a subset is closed if its complement is open. If $U$ is a subset, then $X\setminus U$ is also a subset, thus is open if all subsets are open. As $X\setminus U$ is open, then $U$ is closed. This is true for all subsets $U$. |
Pirates And Coins No.1 | If we have only pirates $D,E$, then $D$ may suggest that he gets all the coins, a suggestion supported by the required 50% (himself).
Consequently, with three pirates $C,D,E$, pirate $E$ will support any suggestion that gives him more than $0$ coins, $D$ will object to any suggestion (including the suggestion to give him all as this results in some killing fun). Therefore $C$ can suggest to keep $99$ and give $1$ to $E$, which will be supported by himself and $E$.
Consequently with four pirates $B,C,D,E$, pirate $D$ will support any suggestion giving him more than nothing. This allows $B$ to suggest $99$ for himself and $1$ for $D$. Note that if he suggested to keep all, then $C$ and $E$ would object while $D$ might not bother: His choice does not influence his survival or his income - but objecting gives him the satisfaction of killing $B$!
Consequently with five pirates $A,B,C,D,E$, pirate $C$ will support any suggestion giving him anything at all; same for $E$. Therefore $A$ suggests keeping $98$ coins and giving $1$ coin each to $C$ and $E$.
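The induction above is exactly a backward recursion, so it can be mechanized. A minimal sketch, assuming (as above) that a proposal passes with at least 50% of the votes, the proposer votes for himself, and an indifferent pirate prefers the killing:

```python
def split(n, coins=100):
    """Payoffs [proposer, ..., most junior pirate] for n pirates."""
    if n == 1:
        return [coins]
    prev = split(n - 1, coins)    # payoffs if the proposer is thrown overboard
    votes_needed = (n + 1) // 2   # at least 50%, own vote included
    # Cheapest votes to buy: the pirates with the smallest payoffs in the
    # (n-1)-pirate game; each must be offered strictly more than that payoff.
    bribed = sorted(range(n - 1), key=lambda i: prev[i])[:votes_needed - 1]
    offer = [0] * (n - 1)
    for i in bribed:
        offer[i] = prev[i] + 1
    return [coins - sum(offer)] + offer

print(split(5))  # [98, 0, 1, 0, 1]: A keeps 98, C and E get 1 each
```

The recursion reproduces each step of the argument above. |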
What makes something a joint distribution, for example is $P(X+Y \le a)$ considered a joint distribution? | For example, $f_{XY}(x,y)$ is a 2-dim joint density that can give you a 1-dim (marginal) density $f_X(x)$ for $X$ (as if $Y$ doesn't exist), via $f_X(x) = \int f_{XY}(x,y) dy$. Similarly, it can give you a 1-dim density $f_Y(y)$ for $Y$ (as if $X$ doesn't exist).
Now consider another $W \equiv X+Y$. It is a single random variable which should have its own 1-dim density $f_W(w)$. You can obtain this via performing variable transformation (as in calculus) on $f_{XY}(x,y) \mapsto f_{WZ}(w,z)$, where $Z$ is another random variable that is auxiliary. From this new joint density you will get the desired $f_W(w) = \int f_{WZ}(w,z) dz$ as if $Z$ never existed in the first place.
Here your $W = X+Y$ happens to be like change of coordinates, and the accompanying $Z$ is often conveniently chosen to be $Z \equiv X- Y$. Of course, the choice of $Z$ is arbitrary, and $Z = X$ or $Z = Y$ are both good.
There are other ways to obtain $f_W(w)$ besides integrating out the auxiliary variable from a 2-dim joint density. The point is, you can visualize in this case that $W$ is a new coordinate axis (just like $X$) that points along the diagonal direction. Just like how $X$ has its own 1-dim density $f_X$, this $W$ has its own density $f_W$.
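For concreteness, here is the change of variables written out for the choice $Z=X-Y$ (a standard Jacobian computation):
$$W=X+Y,\ Z=X-Y \iff X=\tfrac{W+Z}{2},\ Y=\tfrac{W-Z}{2},\qquad \left|\det\tfrac{\partial(x,y)}{\partial(w,z)}\right|=\tfrac12,$$
$$f_{WZ}(w,z)=\tfrac12\,f_{XY}\!\left(\tfrac{w+z}{2},\tfrac{w-z}{2}\right),\qquad f_W(w)=\tfrac12\int f_{XY}\!\left(\tfrac{w+z}{2},\tfrac{w-z}{2}\right)dz=\int f_{XY}(x,\,w-x)\,dx,$$
and the last expression reduces to the familiar convolution $\int f_X(x)f_Y(w-x)\,dx$ when $X$ and $Y$ are independent. |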
How Many Uniquely Enumerated $4\times4$ Sudoku Grids Exist? | Not all of your $384$ possibilities actually work. For example one of your $384$ choices seems to be
$\begin{matrix}
1&2&|&3&? \\
3&?&|&2&? \\
-&-&+&-&- \\
2&3&|& ? & ? \\
?&?&|& ? & ?
\end{matrix}$
forcing
$\begin{matrix}
1&2&|&3&4 \\
3&4&|&2&1 \\
-&-&+&-&- \\
2&3&|& 1/4 & ? \\
4&1&|& ? & 2/3
\end{matrix}$
but it is impossible to complete the grid.
Your argument about $8$ symmetries might fail if there is a pattern which is rotationally symmetric with a $180^\circ$ turn, which there is:
$\begin{matrix}
1&2&|&3&4 \\
3&4&|&1&2 \\
-&-&+&-&- \\
2&1&|& 4 & 3 \\
4&3&|& 2 & 1
\end{matrix}$ |
Proving $\sin(-\theta) = -\sin(\theta)$ and $\cos(-\theta) = \cos(\theta)$ without sin/cosine addition formulas | Are you familiar with Euler's Formula?
$$e^{i\theta} = \cos(\theta) + i\sin(\theta)$$
By plugging in $-\theta$ into this formula, applying the reciprocal rule of exponents to the left side, simplifying, and then equating the real and imaginary parts, you will obtain what you are trying to prove:
$$e^{-i\theta} = \cos(-\theta) + i\sin(-\theta)$$
$$\dfrac{1}{e^{i\theta}} = \cos(-\theta) + i\sin(-\theta)$$
$$\dfrac{1}{\cos(\theta)+i\sin(\theta)} = \cos(-\theta) + i\sin(-\theta)$$
$$\cos(\theta)-i\sin(\theta) = \cos(-\theta) + i\sin(-\theta)$$
Sorry for the bad quality and limited work, I am doing this on my phone. |
Sum of the distance between consecutive terms implies the sequence is Cauchy. | Hint
Use that $\|x_{n+k}-x_n\|\le \|x_{n+k}-x_{n+k-1}\|+\cdots +\|x_{n+1}-x_n\|.$
Now, use that $$\sum_{n\geq 1} ||x_n-x_{n+1}||<\infty$$ to control the right-hand side of the inequality, which in turn controls the left-hand side. |
$C([0,1])$ is not weak* closed in $L^{\infty}(0,1)$ | I assume that $L^1$ is meant to be defined with respect to Lebesgue measure.
Take $$f_n : [0,1] \to [0,1], x\mapsto \begin{cases} (2x)^n &: x\in [0,\frac 1 2) \\ 1 &: x\in [\frac 1 2 , 1].\end{cases}$$ It is clear that $f_n$ converges pointwise to the function
$$f (x) = \begin{cases} 0 &: x\in [0,\frac{1}{2}) \\1 &: x \in[\frac 1 2 , 1].\end{cases}$$
Therefore by Lebesgue dominated convergence theorem we have that for every $g\in L^1$ (with $\vert f_n g\vert \leq \vert g\vert$) it holds
$$\int f_n g \to \int fg$$
Therefore we have $f_n \to f$ in your weak sense, but $f$ is not continuous. |
Simple zero $a$ is equal to $\frac{1}{2\pi i}\int_C \frac{zf'(a)}{f(z)} dz$ for $f$ holomorphic within $C$? | By the residue theorem,$$\frac1{2\pi i}\int_C\frac{zf'(z)}{f(z)}\,\mathrm dz=\operatorname{res}_{z=a}\left(\frac{zf'(z)}{f(z)}\right).$$Since $a$ is a simple zero of $f$, it is either a removable singularity or a simple pole of $\frac{zf'(z)}{f(z)}$, so\begin{align}\operatorname{res}_{z=a}\left(\frac{zf'(z)}{f(z)}\right)&=\lim_{z\to a}(z-a)\frac{zf'(z)}{f(z)}\\&=\lim_{z\to a}\frac{zf'(z)}{\frac{f(z)-f(a)}{z-a}}\\&=\frac{af'(a)}{f'(a)}\\&=a.\end{align} |
For which $\lambda$ do we have solutions | Because of the way your question is phrased, I will assume that $A$ is the augmented matrix of a system of 3 equations in 3 variables.
If we perform row reduction:
\begin{align}
\begin{bmatrix} 1 & 1 & \lambda & 1 \\ 4 & \lambda ^2 & -8 & 4 \\ \lambda & 2 & 4 & \lambda + 1\end{bmatrix}
&\xrightarrow[R3-\lambda R1]{R2-4R1}
\begin{bmatrix} 1 & 1 & \lambda & 1 \\ 0 & \lambda ^2 -4& -8-4\lambda & 0 \\ 0 & 2-\lambda & 4-\lambda^2 & 1\end{bmatrix}\\ \ \\
&=
\begin{bmatrix} 1 & 1 & \lambda & 1 \\ 0 & (\lambda -2)(\lambda+2)& -4(\lambda+2) & 0 \\ 0 & 2-\lambda & -(\lambda-2)(\lambda+2) & 1\end{bmatrix}
.
\end{align}
If $\lambda=2$, we get
$$
\begin{bmatrix} 1 & 1 & 2 & 1 \\ 0 & 0& -16 & 0 \\ 0 & 0 & 0 & 1\end{bmatrix}
$$
The third row now signifies the equation $0=1$, so the system has no solution.
If $\lambda=-2$, we get
$$
\begin{bmatrix} 1 & 1 & -2 & 1 \\ 0 & 0& 0 & 0 \\ 0 & 4 & 0 & 1\end{bmatrix}
\xrightarrow{}
\begin{bmatrix} 1 & 0 & -2 & 3/4 \\ 0 & 1 & 0 & 1/4\\ 0 & 0& 0 & 0\end{bmatrix}
$$
This says that $x_2=1/4$, and $x_1=2x_3+3/4$. As we are free to choose $x_3$, the system has infinitely many solutions.
If $\lambda$ is neither $2$ nor $-2$, we can divide the second row by $\lambda+2$ and the third one by $2-\lambda$ to get
\begin{align}
\begin{bmatrix} 1 & 1 & \lambda & 1 \\ 0 & \lambda -2& -4 & 0 \\ 0 & 1& \lambda+2 & \frac{1}{2-\lambda}\end{bmatrix}
&\xrightarrow[]{R2-(\lambda-2)R3}
\begin{bmatrix} 1 & 1 & \lambda & 1 \\ 0 & 0& -\lambda^2 & 1 \\ 0 & 1& \lambda+2 & \frac{1}{2-\lambda}\end{bmatrix},
\end{align}
since $-4-(\lambda-2)(\lambda+2)=-\lambda^2$ and $0-\frac{\lambda-2}{2-\lambda}=1$.
If $\lambda\ne0$, the second row gives $x_3=-1/\lambda^2$, and back-substitution then determines $x_2$ and $x_1$, so the system has a unique solution.
If $\lambda=0$, the second row reads $0=1$, so the system has no solution. (Directly: at $\lambda=0$ the earlier reduction gives the equations $-4x_2-8x_3=0$ and $2x_2+4x_3=1$, which are incompatible.)
In summary:
If $\lambda=0$, no solution.
If $\lambda=-2$, infinitely many solutions.
If $\lambda=2$, no solution.
If $\lambda$ is neither $0$, $2$ nor $-2$, the system has a unique solution.
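If you want to double-check the case analysis, here is a quick symbolic verification (a SymPy sketch):

```python
import sympy as sp

lam = sp.symbols('lam')
A = sp.Matrix([[1, 1, lam, 1],
               [4, lam**2, -8, 4],
               [lam, 2, 4, lam + 1]])

# Determinant of the coefficient matrix factors as lam**2 * (4 - lam**2),
# so the solution is unique exactly when lam is none of 0, 2, -2.
print(sp.factor(A[:, :3].det()))

# Row-reduce the augmented matrix at the degenerate values; a pivot in
# the last column (index 3) means the system is inconsistent.
for val in (0, 2, -2):
    rref, pivots = A.subs(lam, val).rref()
    print(val, pivots)
```

For $\lambda=0$ and $\lambda=2$ the pivots include the last column (no solution); for $\lambda=-2$ they do not, and one variable is free (infinitely many solutions). |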
Let $f$ be a polynomial with at least $k$ different roots, prove that $f'$ has at least $k-1$ different roots | Between any two consecutive roots $x_i < x_j$ of $f$, we know that $f$ is continuous on $[x_i, x_j]$ and differentiable on $(x_i, x_j)$ with $f(x_i) = f(x_j) = 0$, so by Rolle's theorem there is a $c \in (x_i, x_j)$ such that $f'(c) = 0$. The $k$ roots give $k-1$ such disjoint open intervals, hence $f'$ has at least $k-1$ different roots. |
Connected sum while keeping curvature bounded. | It is possible to take the connected sum of manifolds of dimension at least 3 of positive scalar curvature, and get a metric of positive scalar curvature, by the work of Gromov and Lawson,
Gromov, Mikhael; Lawson, H. Blaine, Jr. Positive scalar curvature and the Dirac operator on complete Riemannian manifolds. Inst. Hautes Études Sci. Publ. Math. No. 58 (1983), 83–196 (1984).
For sectional and Ricci curvature this is not possible. Of course, you get some lower bound even in these cases by compactness. If you have other conditions in mind you need to make your question more specific (and in particular scale-invariant). |
If the length of tangent from $(1,1)$ to the circle $2x^2+2y^2-4x-6y+k=0$ is 5 units, find k | The length of the tangent from the point $(h,k)$ to the circle $F(x,y)=0$ is $L=\sqrt{F(h,k)}$, provided the coefficients of $x^2, y^2$ are 1. Then,
$$L(h,k)=\sqrt{h^2+k^2-2h-3k+K/2}\implies \sqrt{K/2-3}=5 \implies K=56$$
But finally the equation $x^2+y^2-2x-3y+28$ does not represent a circle as its radius $r=\sqrt{1+9/4-28}$ becomes imaginary.
We can state that the Eq. $x^2+y^2-2x-3y+K/2=0$ can represent a circle only if its radius is real, i.e. $1+\frac94-\frac K2>0$, that is $K<13/2$. Next, for tangents from $(1,1)$ to be possible (for the point $(1,1)$ to lie outside the circle), we need $L^2=1+1-2-3+K/2>0$, which puts the further restriction $K>6$. Together these force $6<K<13/2$, so $L=\sqrt{K/2-3}<\frac12$, and a tangent of length $5$ is impossible.
Finally, the correct answer to the OP's question is that no real value of $K$ is possible; any value found carelessly (like my earlier answer $56$) will lead to inconsistencies. |
Interior, closure and boundaries of $\mathbb{Q} \times \mathbb{Q}$ in $\mathbb{R}^2$ | The interior is the empty set, since every open ball contains points with an irrational coordinate. Now $\mathbb{Q}$ is dense in $\mathbb{R}$, therefore the closure of $\mathbb{Q}\times \mathbb{Q}$ in $\mathbb{R^2}$ is the whole of $\mathbb{R^2}$. Since the interior is empty, the boundary (closure minus interior) is also the whole of $\mathbb{R^2}$. |
Prove that $\int_Bg(z)dz=\int_{B_1}g(z)dz+\int_{B_2}g(z)dz$ | This proposition as it is stated now does not hold. $B_1$ or $B_2$ is allowed to be inside of the other, resulting in
$$\int_Bg(z)dz=\int_{B_1}g(z)dz=\int_{B_2}g(z)dz$$
Now we add the condition that neither of $B_1$ and $B_2$ resides inside the other, as well as that $g$ is analytic in an (open and connected) domain $D$ that contains the union of the following two sets. The first set is the intersection of the exterior of $B_1$ and $B_2$ with the interior of $B$. The second is $B\cup B_1\cup B_2$. This will produce the desired equation.
Proof: Transform the domain homeomorphically so that $B\cup B_1\cup B_2$ becomes the circles depicted in the figure below. Then take the contour $C$ as shown in the figure, running counter-clockwise on the large circle and clockwise on the smaller circles, letting the width of the gaps tend to zero.
Take the contour $\gamma$ that is the image of $C$ on the original domain, apply Cauchy-Goursat theorem to $g$ on $D$ and contour $\gamma$
$$0=\int_\gamma g(z)dz=\int_B g(z)dz-\int_{B_1}g(z)dz-\int_{B_2}g(z)dz+0+0$$
where the last two $0$'s signify the integration on the parallel sides of the straight "streets" in the figure in opposite directions, which results in cancellation of the integration on those parts of $\gamma$.
Now we have obtained the desired equation. |
What is polynomial time? | It means that the number of steps you need to solve a problem (and thus the amount of time you need) is some polynomial function of its size:
$ S = f(N) $
Polynomial functions are functions that involve $N$, or $N^2$, or $N$ raised to other fixed powers, like:
$5N^8+2N^5+10N^2+\ldots$ Importantly, this excludes exponential functions like $2^N$,
because those make the number of steps you need to solve a problem very big very quickly.
Typical problems are multiplication and mazes.
In contrast, consider the travelling sales man problem: https://en.wikipedia.org/wiki/Travelling_salesman_problem
The number of steps needed to solve this problem exactly (finding the optimal route) is not known to be bounded by any polynomial function of the number of cities (the size).
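To see the difference concretely (a throwaway comparison, nothing problem-specific):

```python
# Number of "steps" for a cubic versus an exponential algorithm.
for n in (10, 20, 40, 80):
    print(n, n**3, 2**n)
# n**3 grows from 1000 to 512000, while 2**n at n=80
# already exceeds 10**24 -- hopeless at any realistic speed.
```

Doubling the input multiplies a cubic cost by $8$, but an exponential cost $2^N$ by another factor of $2^N$. |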
Riemann integral and an unbounded function | You are correct about the fact that Riemann-integrability is usually defined for bounded functions, and $\log\sin\theta$ is not bounded over $(0,\pi)$. On the other hand the boundedness assumption can be dropped in some peculiar circumstances. Indeed, the sequence
$$ a_n = \frac{\pi}{n}\sum_{k=1}^{n-1}\log\sin\frac{\pi k}{n}=-\pi\log 2+\frac{\pi\log(2n)}{n} $$
is convergent, and for any $n>2$
$$ a_n = \int_{0}^{\frac{n-2}{n}\pi}\log\sin\left(\frac{\pi}{n}\left\lceil\frac{n\theta}{\pi}\right\rceil\right)\,d\theta $$
holds. By the dominated convergence theorem, $\lim_{n\to+\infty} a_n$ equals the value of the Lebesgue (or improper-Riemann) integral $\int_{0}^{\pi}\log\sin\theta\,d\theta$.
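A quick numerical check of the convergence claimed above (not needed for the proof):

```python
import numpy as np

# a_n should approach -pi*log(2) = -2.17758..., with the
# correction term pi*log(2n)/n visible in the last column.
for n in (10, 100, 1000, 10000):
    k = np.arange(1, n)
    a_n = (np.pi / n) * np.log(np.sin(np.pi * k / n)).sum()
    print(n, a_n, a_n + np.pi * np.log(2))
```

The second column tends to $-\pi\log 2$ and the third to $0$, as predicted. |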
Can this be done without integration by substitution? | Your proof is O.K. You have used that $G(x-c)$ is an anti-derivative of $g(x-c)$.
I would call such an argument "the simplest form of substitution." |
The degree of the extension $F(a,b)$, if the degrees of $F(a)$ and $F(b)$ are relatively primes. | You should be able to prove the degree of $F(a,b)$ over $F(a)$ is at most the degree of $F(b)$ over $F$, and what you want follows from that. |
Quick help on showing a set is bounded | If $(x,y)\in S$ then you have the following bounds:
\begin{equation}
x^2+xy+y^2=3\Rightarrow xy\leq 3,
\end{equation}
\begin{equation}
x^2+xy+y^2 =(x+y)^{2}-xy =3\Rightarrow xy\geq -3.
\end{equation}
Now if $(x,y)\in S$ and $x^2+y^2> 6$, then since $x^2+y^2=3-xy$ you would get $xy <-3$, a contradiction. So $S\subset \bar B(0,\sqrt 6)$, where $\bar B(0,\sqrt 6)=\{(x,y)\in\mathbb{R}^{2} : x^2+y^2\leq6\}$, and hence the set $S$ is bounded. |
How to show that $ ({\bf I_n}-A)^{-1} = \Sigma_{l=0}^m A^l $ | A nilpotent matrix $A$ only has the eigenvalue $0$.
The matrix $I-A$ then only has the eigenvalue $1$, and $(I-A)^{-1}$ has the inverted eigenvalues, with $1^{-1}=1$.
What does the Cayley-Hamilton theorem tell us? The characteristic polynomial of $A$ is $\lambda^n$, so $A^n=0$.
But we may already know this from nilpotency.
Another fruitful aspect of this is the geometric series. Maybe this question helps you there.
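The geometric-series identity itself can be verified directly: if $A^{m+1}=0$, then
$$(I-A)\sum_{l=0}^{m}A^{l}=\sum_{l=0}^{m}A^{l}-\sum_{l=1}^{m+1}A^{l}=I-A^{m+1}=I,$$
so $({\bf I_n}-A)^{-1}=\sum_{l=0}^m A^l$ as claimed. |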
Generalised Eigenvectors Issue | In example 2, since the geometric multiplicity of the eigenspace is 2, also $B$ is a 3x3 matrix, so there are 2 Jordan blocks $J_1$ and $J_2$ with different size. Your method adopted in example 1 can only deal with Jordan blocks with same block size.
Without loss of generality, just take the block size of $J_1$ and $J_2$ to be 2 and 1 respectively.
Then $$J=\begin{pmatrix}
2&1&0 \\
0&2&0\\
0&0&2
\end{pmatrix}$$
To solve this problem, our first step is to find $\operatorname{Nul}[(B-2I)^2]$ and its dimension.
By computation, $(B-2I)^2$ is the zero matrix, so $\dim(\operatorname{Nul}[(B-2I)^2])=3$. We can stop here, since the maximum possible nullity is 3 ($B$ is a 3x3 matrix), so it cannot grow for higher powers.
Our second step is to let $\underline{b}_i$ be the $i$-th column of the matrix $P$, for $i=1,2,3$.
Now we are going to find the vectors in the first block $J_1$, namely $\underline{b}_1,\underline{b}_2$.
We must find $\underline{b}_2$ first, because "2" is the largest index in this block. We can take $\underline{b}_2=\underline{e}_1$, since $(B-2I)\underline{e}_1 \neq \underline{0}$; if it were the zero vector, we would instead take $\underline{b}_2=\underline{e}_2$ or another suitable $\underline{e}_k$.
After that, we apply $(B-2I)$ to $\underline{b}_2$, which moves the index "2" down to "1", i.e. $$\underline{b}_1=(B-2I)\underline{b}_2.$$
We have now finished the work on $J_1$; let's move on to $J_2$.
Luckily, there is only one vector in $J_2$.
Note that the vector $\underline{b}_j$ with the smallest index $j$ in each block is an eigenvector. So in this case, "3" is the smallest index on $J_2$.
Therefore we can just take $\underline{b}_3$ to be the eigenvector that you found.
Take
$$\underline{b}_3= \begin{pmatrix} 1 \\ 0 \\ 1\end{pmatrix} \notin Span\{\underline{b}_1,\underline{b}_2\}$$
to make the matrix $P$ non-singular.
Finally, $$B=\begin{pmatrix}
-1&1&1 \\
1&0&0\\
2&0&1
\end{pmatrix}\begin{pmatrix}
2&1&0 \\
0&2&0\\
0&0&2
\end{pmatrix}
\begin{pmatrix}
-1&1&1 \\
1&0&0\\
2&0&1
\end{pmatrix}^{-1}$$
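You can verify the decomposition (and compare with a library's Jordan form) in a few lines; a SymPy sketch using the matrices displayed above:

```python
import sympy as sp

P = sp.Matrix([[-1, 1, 1], [1, 0, 0], [2, 0, 1]])
J = sp.Matrix([[2, 1, 0], [0, 2, 0], [0, 0, 2]])

B = P * J * P.inv()        # reconstruct B from the decomposition
print(B)
print(B.jordan_form()[1])  # SymPy's Jordan form of B
```

The recovered Jordan matrix agrees with $J$ up to the ordering of the blocks. |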
How to solve the following equation? | $y=\dfrac{\ln\left(\dfrac{x}{m}-sa\right)}{r^2}$
$y=\dfrac{\ln\left(\dfrac{x}{m}-as\right)}{r^2}$ , then we have
$r^2y=\ln\left(\dfrac{x}{m}-as\right)$
$e^{r^2y}=\dfrac{x}{m}-as$
$me^{r^2y}=x-mas$
$me^{rry}=x-mas$ |
Can Bayesian Optimization be used in lieu of LP, QP, or MIP? | I believe that the OP is referring to the optimization algorithm that builds a Gaussian Process (GP) surrogate model to a black box function and then optimizes the surrogate model. This approach has been extended from unconstrained optimization to include constrained problems as well.
Bayesian Optimization has the advantage that it can be used on an arbitrary function $f(x)$ that can only be computed in black-box fashion. It has the disadvantage that it fails to make any use of known structure in the objective function or constraints.
You can certainly apply this kind of Bayesian Optimization to a more highly structured problem, but chances are that specialized algorithms will perform better than Bayesian Optimization for your problems because these specialized algorithms can take advantage of the structure of the problem. |
Prove that if $f$ is Lebesgue measurable on $\mathbb{R^n}$, then $f(x-y)$ is Lebesgue measurable on $\mathbb{R^n}\times \mathbb{R^n}$. | Let $h:\mathbb{R}^{2d}\to\mathbb{R}^d$ be such that $h(x,y)=x-y$ and $f:\mathbb{R}^d\to\mathbb{R}$ a Lebesgue measurable function. We want to show that $K:\mathbb{R}^{2d}\to\mathbb{R}$ with
$$
K(x,y):=f\circ h(x,y)=f(x-y)
$$
is Lebesgue measurable.
Now let $E$ be a Borel set of $\mathbb{R}$. It suffices to
show that $K^{-1}(E)=h^{-1}\circ f^{-1}(E)$ is Lebesgue measurable in $\mathbb{R}^{2d}$. Observe that $f^{-1}(E)$ is Lebesgue measurable. It thus suffices to show that $h^{-1}(F)$ is Lebesgue measurable whenever $F\subset\mathbb{R}^d$ is Lebesgue measurable.
This is true when $F$ is Borel measurable since $h$ is continuous. On the other hand, one can check that the preimage of a set of zero outer measure is again of zero outer measure, so null sets pull back to null sets. Since Lebesgue measurable sets differ by a null set from a Borel measurable set, the claim for Lebesgue measurable sets then follows from the corresponding claim for Borel sets.
[Added for the "one can check" part] Note that $h(x,y)=\psi\circ T(x,y)$ with
$$
\psi(x,y)=x,\quad T(x,y)=(x-y,x).
$$
It is also known that $m(T(E))=|\det T|m(E)$. To deal with $\psi$, a warming-up question would be to show that if $E\subset\mathbb{R}$ has zero outer measure, then the set $E\times[0,1]\subset \mathbb{R}^2$ has zero outer measure; using the countable subadditivity of outer measure, this implies that $E\times\mathbb{R}$ also has zero outer measure. |
Prove $\sigma=\alpha\phi$ | The statement trivially holds if $\phi = 0$. For the purposes of the proofs below, we assume this is not the case.
Hint: (if you know about quotient spaces): consider the induced maps $\tilde \phi : V/\ker \phi \to \Bbb F$ and $\tilde \sigma : V / \ker \phi \to \Bbb F$. Note that $\tilde \sigma = \alpha \tilde \phi$, and conclude that $\sigma = \alpha \phi$.
Hint: (alternate approach): consider the linear map $T:V \to \Bbb F^2$ given by $T(v) = (\phi(v),\sigma(v))$. If $\phi(v) = 0 \implies \sigma(v) = 0$, then $T$ fails to be onto. Thus, there exists a non-zero map $f: \Bbb F^2 \to \Bbb F$ (given by $f(x_1,x_2) = a_1x_1 + a_2x_2$) such that $f$ is zero over the image of $T$. Use $a_1,a_2$ to reach the desired conclusion. |
Adjoint of Sum = Sum of Adjoints? | $\langle (A + B)^\dagger x, y \rangle = \langle x, (A + B)y \rangle = \langle x, Ay + By \rangle$
$= \langle x, Ay \rangle + \langle x, By \rangle = \langle A^\dagger x, y \rangle + \langle B^\dagger x, y \rangle; \tag 1$
from this we have
$\langle (A + B)^\dagger x - A^\dagger x - B^\dagger x, y \rangle = 0; \tag 2$
since this holds for all $y$, we have
$(A + B)^\dagger x - A^\dagger x - B^\dagger x = 0,
\tag 3$
or
$(A + B)^\dagger x = A^\dagger x + B^\dagger x,
\tag 4$
holding for all $x$; we thus conclude that
$(A + B)^\dagger = A^\dagger + B^\dagger .
\tag 5$ |
fermat's last theorem by elementary methods | Let $n$ be a positive integer, $n > 1$, and suppose $b,c,d$ are positive integers with $\gcd(b,c,d)=1$ such that $d^n = b^n + c^n$.
Next, suppose $p$ is a prime such that $d=ap+b$.
\begin{align*}
\text{Then}\;\;&(ap + b)^n = b^n + c^n\\[4pt]
\implies\;&p\mid c^n\\[4pt]
\implies\;&p\mid c\\[4pt]
\end{align*}
So you can't choose $p$ arbitrarily, since $p$ must be a factor of $c$.
What you proved is that if $p$ is a prime factor of $c$, then $p\mid a$ or $p\mid n$.
Stated in full, what you proved is equivalent to this:
\begin{align*}
&\text{If:}\\[4pt]
&\;\;{\small\bullet}\;\;\text{$n$ is a positive integer with $n > 1$}\\[4pt]
&\;\;{\small\bullet}\;\;\text{$b,c,d$ are positive integers with $\gcd(b,c,d)=1$}\\[-1pt]
&{\phantom{\;\;{\small{\bullet}}\;\;}}\text{such that $d^n = b^n + c^n$}\\[4pt]
&\;\;{\small\bullet}\;\;\text{$p$ is a prime factor of $c$}\\[8pt]
&\text{Then:}\\[4pt]
&\;\;{\small\bullet}\;\;\text{$p\mid n\;$ or $\;p^2 \mid (d-b)$}\\[4pt]
\end{align*} |
Question on Ricci flow and Einstein Tensor | A vector field $X$ is said to be a conformal Killing vector field if
${\cal L}_Xg=\varphi g$ for some function $\varphi$. In the special case $\varphi=0$ it is called a Killing vector field.
Consider an Einstein Riemannian manifold $(M,g)$ of zero scalar curvature $R=0$; it is then Ricci flat, i.e. $Ric=\frac Rn g=0$. If $(M,g)$ admits a Killing vector field $X$ (i.e. $\mathcal{L}_Xg=0$) then $(M, g, X)$ satisfies your conditions: $\mathcal{L}_{X}g_{\mu\nu} =0=Rg_{\mu\nu}$ and $c=-\Lambda$ for zero cosmological constant $\Lambda=0$.
For example $\Bbb R^n$ with the Euclidean metric satisfies your requirements. Another example is the torus $\Bbb S^1\times \Bbb S^1$ with the flat metric, because it admits Killing vector fields.
All of this is correct when the cosmological constant is zero, though I don't know what a nonzero one means intuitively. |
Calculating fundamental group and homology group to prove not homeomorphic to each other | Hint: The fundamental group and homology respect products, i.e. $\pi_1(A\times B)=\pi_1(A)\times \pi_1(B)$ and, by the Künneth theorem (in the absence of torsion), $H_n(A\times B)=\bigoplus_{i+j=n}H_i(A)\otimes H_j(B)$. Furthermore, if two spaces are homeomorphic they have the same fundamental and homology groups. |
limiting distribution of Xbar considering an autoregressive process | Correction: sorry, my previous explanation was wrong.
Something is not right in your computation, as the final term converges to zero and not to a random variable. I guess you want to consider $\sqrt{n}(\bar{X}-\mu)$ to get a CLT. Again, using Slutsky's theorem should work, but please clarify your intention first. |
Can one avoid AC in the proof that in Noetherian rings there is a maximal element for each set? | It actually depends on how you define Noetherian rings. The two most common definitions are:
1. A ring in which every non-empty set of ideals has a maximal element.
2. A ring which satisfies the ascending chain condition on ideals: every nondecreasing sequence of ideals is eventually constant.
The fact that (1) implies (2) is clear, but the implication from (2) to (1) is not provable without some weak form of the Axiom of Choice (specifically, the Axiom of Dependent Choice).
Now, using the first definition, it is trivial that every proper ideal $I$ in a Noetherian ring $R$ is contained in a maximal one. Take the collection $\mathcal{A}$ of proper ideals of $R$ that contain $I$. Note that $\mathcal{A}$ is nonempty since $I \in \mathcal{A}$. Therefore, $\mathcal{A}$ has a maximal element $M$, which must be a maximal ideal of $R$ by definition of $\mathcal{A}$.
Using the second definition, it is not provable without any form of choice that every proper ideal $I$ in a Noetherian ring $R$ is contained in a maximal one. It is tempting to proceed by contradiction. Assume there is no maximal ideal that contains $I$. Since $I$ is not maximal, we can find a proper ideal $I' \supsetneq I$. Since $I'$ is not maximal, we can find an ideal $I'' \supsetneq I'$. And so on, thereby obtaining a strictly ascending chain $$I \subsetneq I' \subsetneq I'' \subsetneq \cdots$$ However, each step of this construction requires choosing an ideal extending the previous one, infinitely many choices in total. In the general case, one needs the Axiom of Dependent Choice to justify this.
Note that the Axiom of Dependent Choice is only needed for the general case. For example, if the ring $R$ is countable then the process outlined above can be made effective. Let $x_0,x_1,\dots$ be a fixed enumeration of $R$. To pick $I'$, scan through the enumeration of $R$ until you find an element $x_i$ such that $x_i \notin I$ and $I + Rx_i \neq R$, then let $I' = I + Rx_i$ and continue in the same manner to find $I'', I''', \dots$ If $I$ is not contained in a maximal ideal, then such an $x_i$ can always be found and thus we contradict the ascending chain condition. |
Show functions are solution to a system using matrices | This is really easy and requires no DiffEQ knowledge... just show those two functions satisfy those two equations. Take their derivatives and plug them in. |
Kolmogorov Extension Theorem vs. Caratheodory Extension Theorem | This is the same answer I gave on MO:
The KET fails for general measurable spaces; the classical example can be found in a paper by Andersen and Jessen. Topological assumptions are necessary so that the resulting measure is not only finitely additive but countably additive. There exists a quasi-topological condition on measure spaces, perfectness, that is sufficient. A probability space $(\Omega,\sigma,\mu)$ is perfect if for every random variable $f:\Omega\to\mathbb{R}$, there exists a Borel set $B\subseteq f(\Omega)$ with measure one under the distribution $\mu\circ f^{-1}$. A proof of KET under the assumption that the marginal measures are perfect, due to Lamb, is given here. The strategy of the proof is to employ an existence result for regular conditional probability spaces and then construct the process from them using the Ionescu-Tulcea theorem. |
How many ways to get a 5 card hand with atleast 3 black cards | By symmetry, since you will either have 3 or more black cards or 3 or more red cards, the answer has to be $(1/2) \times \binom{52}{5}.$
In answer to your specific question, consider the hand
$A,2,3,4,5$, all spades. As far as overcounting goes, if you select the $A,2,3$ as your first 3 black cards, then you might also select the $4,5$ as the two leftover cards.
So, this particular combination of cards will be counted $\binom{5}{2} = 10$ times, rather than just once. |
Why harmonicity is a local property? | Here is one way to solve this puzzle. Let ${\mathcal U}$ be the cover of $M$ by open subsets $U_p$ as in your question. Since $M$ is paracompact, without loss of generality, we may assume that this cover is locally finite. In particular, it admits a partition of unity $\{\eta_U\}_{U\in {\mathcal U}}$. The manifold $N$ admits an isometric embedding in some Euclidean space ${\mathbb R}^k$. Henceforth, I will consider $N$ as a submanifold of $E^k$ with the induced Riemannian metric. Then the energy of a map
$$
f: M\to N
$$
is the same as the energy of $f$ as a map $f: M\to {\mathbb R}^k$. One has to be careful here because $M$ is noncompact: We will have to take the energy of the restriction of $f$ to a relatively compact subset of $M$. To simplify the matters, let me simply assume that $M$ is compact, otherwise I have to repeatedly restrict to arbitrary relatively compact subsets.
For the purpose of variation of the energy functional we, of course, still have to restrict to maps $M\to N$. On the infinitesimal level, we will be computing
$$
\delta E(f)(V)
$$
where $V$ is a tangent vector field along $N$. It is convenient to
regard $V$'s as defined on $M$ rather than on $N$, so technically speaking, they are sections of the pull-back of the tangent bundle: $f^*(TN)$. I will denote the space of such sections as ${\mathfrak X}_f(M,N)$.
Notice that I have no need to actually know the explicit form of the derivative of $E$. In particular, I do not even need to know what energy is, as long as $E$ is an integral $\int_M e(f)$, where $e(f)$ (which normally is the energy-density) is some (nonlinear) 1st order differential operator on maps $M\to {\mathbb R}^k$ (but I need to know that $E$ is differentiable in Gateaux's sense).
The condition that $f: M\to N$ is a critical point of $E$ is that for every tangent vector field $V$ along $N$ we have
$$
\delta E(f)(V)=0.
$$
Notice that $\delta E(f)(V)$ is linear in the variable $V$.
Now, suppose that I know only the vanishing property
$$
\delta E(f|_{U})(V)=0, \forall V\in {\mathfrak X}_f(U, N)
$$
for each $U\in {\mathcal U}$. Given a vector field $V\in {\mathfrak X}_f(M, N)$, I define local vector fields
$$
V_U:= \eta_U V
$$
supported on $U$. Since $\{\eta_U\}$ is a partition of unity, we have
$$
V= \sum_{U\in {\mathcal U}} V_U.
$$
Then, by linearity,
$$
\delta E(f)(V)= \sum_{U\in {\mathcal U}} \delta E(f|_{U})(V_U).
$$
Each term in the right hand side of this equation vanishes since $f$ was a "local" critical point of $E$. Hence, $\delta E(f)(V)=0$ for every $V$ as above. |
Show that $[\sum\limits_{k = 0}^n\binom{n}{k}]^2=\sum\limits_{r = 0}^{2n}\binom{2n}{r}$ | Use $$\sum_{f=0}^g{g\choose f}=2^g$$ substituting appropriate values for $f,g$ on both sides of what you want to prove.
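Explicitly, taking $g=n$ on the left-hand side and $g=2n$ on the right:
$$\left[\sum_{k=0}^{n}\binom{n}{k}\right]^2=\left(2^n\right)^2=2^{2n}=\sum_{r=0}^{2n}\binom{2n}{r}. $$ |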
A basic question in the definition of limit point | A limit point $p$ of a set $S$ in a metric space is such that for any $\epsilon>0$, the ball satisfies $(B(p,\epsilon)\setminus \{p\})\cap S\neq\emptyset$. This means that every such ball $B(p,\epsilon)$ contains points from $S$ other than $p$ itself (even if $p$ is a part of $S$ too).
Now think about $(0,1)$ and its limit point $\frac{1}{2}$. You will get your answer.
In topology in general, a limit point need not just approximate one side of an interval. Spending some time with this new definition of a limit point helped me learn the subject better. |
Complex integration around a keyhole contour | The whole reason we need to consider a keyhole contour in this case is that the behavior of $\log(z)$ is complicated near $0$ and the negative axis (i.e., it's a branch cut, rather than a pole).
Therefore, trying a contour that looks like this (image obtained here) might be a good idea.
Remember that this contour is to deal with the $\log(z)$ part. To deal with the poles of $h(z)$, you may need to cut out more poles from this contour. The key idea is to obtain an estimate of the integral on (boundary of this contour)-(the boundary that you need for the integral), and use Residue Theorem. |
Is there a name for this lattice theoretic property? | In On the duality of cardinal invariants, Top. Proc. 2, no. 2 (1977) pp. 563-582 I used the ad hoc terminology $A F\text{-}covers\;s$, but that was chosen to suit a very specific context. Outside that context my inclination would be to say that $A$ is dense in (or below) $s$. I don’t know any standard term, but I’ve not worked much in this area, and there could certainly be one of which I’m not aware. |
What is the line going through points $(5, 5, 5), (2, 2, 2) \in \mathbb{R^3}$ when it is mapped to a point in the real projective plane? | The real projective plane can be defined as the set of lines going through the origin in $\mathbb{R}^3$, and there is a map $f\colon\mathbb{R}^3\setminus\{0\}\to\mathbb{R}P^2$ which maps a point to the corresponding point in $\mathbb{R}P^2$, given by the unique line through the origin on which that point lies.
The line going through the points $(5,5,5)$ and $(2,2,2)$ happens to go through the origin (it is equal to $\{(\lambda,\lambda,\lambda)\mid \lambda\in\mathbb{R}\}$ which contains $0$) and so there is a point in $\mathbb{R}P^2$ which corresponds to this line. Because they lie on the same line, you will also note that $f(5,5,5)=f(2,2,2)$. The map $f$ is continuous and is in fact also a quotient map. Because of this, it means we could define $\mathbb{R}P^2$ instead as a quotient of $\mathbb{R}^3\setminus \{0\}$ by an equivalence relation $\sim$ given by $(a,b,c)\sim(a',b',c')$ if and only if there exists a $\lambda\in\mathbb{R}\setminus\{0\}$ such that $(a',b',c')=(\lambda a,\lambda b,\lambda c)$. Show that $x\sim y \iff f(x)=f(y)$.
If we then consider the map $\tilde{f}\colon \mathbb{R}^3\setminus\{0\}/{\sim}\to\mathbb{R}P^2$ given by $\tilde{f}([(a,b,c)]_{\sim})=f(a,b,c)$ then this map is a homeomorphism. |
An expression of scalar curvature | One place to find a proof is in my Introduction to Riemannian Manifolds (2nd ed.), Theorem 7.30. (The notation is a little different from yours, but it should be easy to translate between the two notations.) |
Can $\sum_n \frac{1}{x_{n_{ik}}}$ still diverge if $\sum_n \frac{1}{x_{n_{i}}}$ diverge and $x_n$ is nondecreasing? | Yes, it diverges as well: here's a proof. Let's write $y_i = x_{n_i}$ because there are too many subscripts and the first subsequence $n_i$ is not important at all.
Since $y_i$ is non-decreasing, for the $k$ indices $j=ki+1,\dots,k(i+1)$, we have
$$ y_{ki} \le y_j \iff \frac1{y_j} \le \frac1{y_{ki}}$$
which means that, since $k$ is fixed,
$$\sum_{j=ki+1}^{k(i+1)} \frac1{y_j} \le \frac{k}{y_{ki}} $$
Taking a sum from $i=1$ to $N$,
$$ \sum_{i=1}^{N}\frac k{y_{ki}} \ge \sum_{i=1}^N \sum_{j=ki+1}^{k(i+1)} \frac1{y_j} = \sum_{\iota=k+1}^{k(N+1)}\frac1{y_\iota} \xrightarrow[N\to\infty]{} \infty .$$
Therefore $ \sum_{i=1}^{\infty}\frac 1{y_{ki}}=\infty$ as well.
If instead of $i$, you put a function of $i$ (e.g. $i^2$) then something else could happen. |
An MCQ on Greens function | I hope you meant $$G(x,t) =\begin{cases}
a+ b\log t & \text{if $0<x\le t$ } \\[2ex]
c+ d\log x & \text{if $t\le x\le1$ }
\end{cases}$$
Now, using:
$G(x,t)$ satisfies the boundary condition.
$$\implies G(1,t)=G'(1,t)\implies c+d\log 1=\frac d 1\implies c=d$$
That narrows our options to a and d.
The derivative $G'(x,t)$ jumps at $x=t$ by $\frac{1}{p(t)}=\frac 1t$
$$\implies d=1$$
Hence option 1. |
what makes a function to be continuous at a point? | A mathematical property, such as "continuity", does not have a cause but a definition. A definition can have a historical or practical origin that we can call the cause of this definition. In this case the mathematical definition of continuity captures our "physical" intuition of a line that we can draw with a pencil without interruptions. |
Subspaces of Representations of Lie Groups | Suppose $W$ is $G$-invariant. Then $d\rho_e(X)w$, for $w \in W$ and $X \in \mathfrak g$, is, for example, the derivative of the path in $V$ that you get when you push $w$ around by $\rho$ applied to a path in $G$. Convince yourself that this path is actually in $W$ and use the fact that $W$ is a vector space.
On the other hand, suppose $W$ is $\mathfrak g$-invariant, so that $d\rho$ maps $\mathfrak g$ to $\mathfrak{gl}(W).$ Then statement (ii) gives you a map $\widetilde G \to GL(W)$, which you want to see as inducing the action of $\rho(G)$ on $W$... |
How to find the sum $\sum_{i = 0}^{\infty}\frac{F_i}{7^i}$? | You know $\frac{z}{-z^2 - z + 1} = \mathcal{F}(z) = \sum_{i = 0}^{\infty} F_i z^i$ for any $z$ inside the radius of convergence. In particular for $z=1/7$:
$$
\sum_{i = 0}^{\infty}\frac{F_i}{7^i} = \mathcal{F}(1/7) = \frac{1/7}{-(1/7)^2 - 1/7 + 1} = \frac{1/7}{41/49} = \frac{7}{41}.
$$ |
Show $a^2+b^2 \equiv 0 \mod p$ has no solutions except $a \equiv b \equiv 0 \mod p \Longleftrightarrow -1$ is a non-square modulo $p$. | Prove the contrapositive: if $a^2+b^2\equiv0\pmod p$ does have solutions other than $a\equiv b\equiv 0$, then $-1$ is a square modulo $p$.
So, suppose $a^2+b^2\equiv0\pmod p$ where $a,b$ are not both $0$ modulo $p$. Without loss of generality suppose $b\not\equiv0$; then $b$ has a multiplicative inverse $c$ and we have
$$(ac)^2+(bc)^2\equiv0\quad\Rightarrow\quad (ac)^2+1\equiv0$$
so $-1$ is congruent to $(ac)^2$ which is obviously a square. |
Why the process $M_t = \sup_{0\leq s\leq t} W_s$ is not a markov process? | Here is a try: Fix $s \leq t$, then
\begin{align*} M_t &= \max \left\{ \sup_{r \leq s} B_r, \sup_{s < r \leq t} B_r \right\} \\ &= \max \left\{M_s, \sup_{r \leq t-s} (B_{r+s}-B_s)+B_s \right\}. \end{align*}
The restarted process $W_r := B_{s+r}-B_s$, $r \geq 0$, is again a Brownian motion. If we denote by $M_t^W := \sup_{r \leq t} W_r$ its running maximum, then we see that
$$M_t = \max\{M_s,M_{t-s}^W+B_s\}.$$
Since $(W_t)_{t \geq 0}$ is independent from $\mathcal{F}_s$, we find that
$$\mathbb{E}(M_t \mid \mathcal{F}_s) = g(M_s,B_s),\tag{1}$$
where
$$g(x,y) := \mathbb{E}( \max\{x,y+M_{t-s}^W\}).$$
The aim is to show that the function $g(M_s,B_s)$ cannot be measurable with respect to $\sigma(M_s)$ (intuitively this is clear, but making it rigorous is not so easy). If we manage to show this, then it follows from $(1)$ that $(M_t)_{t \geq 0}$ is not Markovian (...because if it were Markovian then the left-hand side of $(1)$ would be $\sigma(M_s)$-measurable).
First we need to get our hands on $g$. To this end, we use the reflection principle. By definition,
$$g(x,y) = x \mathbb{P}(x>y+M_{t-s}^W) + \mathbb{E}((y+M_{t-s}^W) 1_{y+M_{t-s}^W \geq x}).$$
Using the fact that $M_{t-s}^W$ equals in distribution $|W_{t-s}|$, we see that
$$\mathbb{P}(x>y+M_{t-s}^W) = \mathbb{P}(|W_{t-s}| < x-y)$$
and
\begin{align*} \mathbb{E}(M_{t-s}^W 1_{y+M_{t-s}^W \geq x}) &= \mathbb{E}(|W_{t-s}| 1_{|W_{t-s}| \geq x-y}) \\ &= \sqrt{\frac{2}{\pi(t-s)}} \int_{x-y}^{\infty}z \exp \left(- \frac{z^2}{2(t-s)} \right) \, dz \\ &= \sqrt{\frac{2(t-s)}{\pi}} \exp \left(- \frac{(x-y)^2}{2(t-s)} \right). \end{align*}
Consequently,
\begin{align*} g(x,y) &= x \mathbb{P}(|W_{t-s}|<x-y) + y \mathbb{P}(|W_{t-s}| \geq x-y) + \sqrt{\frac{2(t-s)}{\pi}} \exp \left(- \frac{(x-y)^2}{2(t-s)} \right). \end{align*}
Writing $$ \mathbb{P}(|W_{t-s}|<x-y) = 1- \mathbb{P}(|W_{t-s}|\geq x-y)$$ we see that $$g(x,y) = x+h(x-y) \tag{2}$$ for some continuous function $h$. More precisely, $$h(r) := - r \mathbb{P}(|W_{t-s}| \geq r) + \sqrt{\frac{2(t-s)}{\pi}} \exp \left(- \frac{r^2}{2(t-s)} \right), \qquad r \geq 0.$$
Pick disjoint intervals $[a,b]$ and $[c,d]$ such that $h^{-1}([a,b])$ and $h^{-1}([c,d])$ have positive Lebesgue measure.
Finally, we are ready to check that $g(M_s,B_s)$ cannot be $\sigma(M_s)$-measurable. Suppose, to the contrary, that it were $\sigma(M_s)$-measurable. Then it is immediate from $(2) $that $h(M_s-B_s)$ is also $\sigma(M_s)$-measurable. Consequently, there would be a Borel set, say $A$, such that
$$\{h(M_s-B_s) \in [a,b]\} = \{M_s \in A\}. \tag{3}$$
Since $M_s-B_s$ has a strictly positive density on $(0,\infty)$, we have, by our choice of $[a,b]$,
$$\mathbb{P}(M_s \in A)>0,$$
and so $A$ has strictly positive Lebesgue measure. Moreover, the fact that $(M_s,B_s)$ has a strictly positive density (on its support) implies that $(M_s,M_s-B_s)$ has a strictly positive density (on its support). Since $A$ and $h^{-1}([c,d])$ have positive Lebesgue measure, we obtain that
$$0 < \mathbb{P}(M_s \in A, M_s-B_s \in h^{-1}([c,d])) = \mathbb{P}(M_s \in A,h(M_s-B_s) \in [c,d]). \tag{4}$$
On the other hand, $(3)$ and the disjointness of the intervals $[a,b]$ and $[c,d]$ shows that
$$\mathbb{P}(M_s \in A,h(M_s-B_s) \in [c,d]) = \mathbb{P}(h(M_s-B_s) \in [a,b], h(M_s-B_s) \in [c,d])=0,$$
which contradicts $(4)$.
Remark: Using a reasoning very similar to that at the beginning of this answer, it is possible to show that the two-dimensional process $(M_t,B_t)_{t \geq 0}$ is Markovian. By the way, also $M_t-B_t$ is Markovian. |
Find $P(n+1)$ for a polynomial P | Your solution is fine.$$Q(-1) = \lambda (-1)(-2)\ldots (-(n+1))=\lambda (-1)^{n+1}(n+1)!=1$$
$$\lambda = \frac{(-1)^{n+1}}{(n+1)!}$$
$$Q(x)=\frac{(-1)^{n+1}x(x-1)\ldots (x-n)}{(n+1)!}=(x+1)P(x)-x$$
Let $x=n+1$
$$\frac{(-1)^{n+1}(n+1)n\ldots 1}{(n+1)!}=(n+2)P(n+1)-(n+1)$$
Hence $$P(n+1) = \frac{n+1 + (-1)^{n+1}}{n+2}$$
Let's check with small cases to see if $(-1)^{n+1}$ should be there.
Let $n=1$, then we have $P(0)=0$ and $P(1)=\frac12$. Then $P(x) = \frac{x}2$.
We have $P(2)= 1$.
Notice that $\frac{n+1}{n+2}<1$, and hence $P(n+1) = \frac{n+1}{n+2}$ (that is, dropping the $(-1)^{n+1}$ term) would certainly be wrong. |
Is there a straight line solution for this separable differential equation? | Your calculations look OK, but seem to miss the point. You do not need to compute the full solution, you only need to test solutions of the required format.
The equation is homogeneous of degree 2 in the polynomial sense. Thus set $y=xu$ then
$$
x^2y'=x^3u'+x^2u=x^2(u^2-3u-5)\implies xu'=u^2-4u-5=(u-5)(u+1)
$$
This has the constant solutions $u=5$ and $u=-1$ with the corresponding linear homogeneous solutions for $y$.
In general you might want to test $y=a_0+a_1x$ to find all linear solutions.
$$
x^2a_1=a_1^2x^2+2a_0a_1x+a_0^2-3a_1x^2-3a_0x-5x^2
$$
and then compare coefficients to find $a_0=0$, which then leads back to the already found solutions.
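A two-line symbolic check of the two lines found above (a SymPy sketch, using the equation $x^2y'=y^2-3xy-5x^2$ implicit in the computation):

```python
import sympy as sp

x = sp.symbols('x')
for y in (5 * x, -x):   # the two straight-line candidates
    residual = x**2 * sp.diff(y, x) - (y**2 - 3 * x * y - 5 * x**2)
    print(sp.simplify(residual))   # 0 for both
```

Both residuals vanish identically, confirming $y=5x$ and $y=-x$. |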
Properties of $y$ if $\frac{d^2y}{dx^2}+xy=0$, $x \in [a,b],$ and $y(a)=y(b)=0.$ | This equation with a minus instead of the plus is known as the Airy equation, and its solution is a linear combination of the two Airy functions. You can easily show that the solution for:
$$y''(x) -kxy = 0$$
Is therefore:
$$y(x) = c_1\mathrm{Ai}(\sqrt[3]{k}x) + c_2\mathrm{Bi}(\sqrt[3]{k}x)$$
Where in your case, $k=-1$, and we can choose the real root to give:
$$y(x) = c_1\mathrm{Ai}(-x) + c_2\mathrm{Bi}(-x)$$
We also have that:
$$y(a) = c_1\mathrm{Ai}(-a) + c_2\mathrm{Bi}(-a)=0$$
$$y(b) = c_1\mathrm{Ai}(-b) + c_2\mathrm{Bi}(-b)=0$$
If the determinant of the above system is non-zero, we get only the trivial solution, which we are given is not the case. Thus, we must have:
$$\frac{\mathrm{Bi}(-b)}{\mathrm{Ai}(-b)} = \frac{\mathrm{Bi}(-a)}{\mathrm{Ai}(-a)}$$
Which puts a constraint on our $a,b$. And that our solution is of the form:
$$y(x) = c_1\left(\mathrm{Ai}(-x) - \frac{\mathrm{Ai}(-a)}{\mathrm{Bi}(-a)}\mathrm{Bi}(-x)\right)$$
Now let's look at $\mathrm{Ai}, \mathrm{Bi}$:
Let's start with option number two. If $a<0<b$, that means that $-b<0<-a$. Moreover, $0<\mathrm{Ai}(-a)/\mathrm{Bi}(-a) < 1$ Thus, you can easily see that in the range $(a, 0)$ our function is a positive sum of two monotonically decreasing functions, and is therefore monotonically decreasing itself. Thus, option #2 is true.
As to option #1 (Thanks @zhw!) this is also true. to see this, let's look at the function $\mathrm{Ai}(-x)/ \mathrm{Bi}(-x)$. This function is monotonically increasing for all $x<0$, as can be proved by looking at the derivative:
$$\frac{d}{dx}\left(\frac{\mathrm{Ai}(-x)}{\mathrm{Bi}(-x)}\right)=\frac{\mathrm{Ai}(-x) \mathrm{Bi}'(-x) - \mathrm{Ai}'(-x) \mathrm{Bi}(-x)}{\mathrm{Bi}(-x)^2}=\frac{c}{\mathrm{Bi}(-x)^2}>0$$
where $c=\mathrm{Ai}\,\mathrm{Bi}'-\mathrm{Ai}'\,\mathrm{Bi}=\frac1\pi$ is the constant Wronskian of the Airy functions.
So if $b$ were smaller than zero, we cannot satisfy the above condition on the ratios of functions. Thus, option #1 is also true.
As to option #4: if e.g. $a=b$ we cannot have an infinite number of zeros. So, option #4 is false. |
Proving that an equation has no natural solutions | Use the AM–GM inequality for the three numbers $x/y$, $y/z$, $z/x$.
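Spelled out (the product under the root telescopes to $1$):
$$\frac13\left(\frac xy+\frac yz+\frac zx\right)\ge\sqrt[3]{\frac xy\cdot\frac yz\cdot\frac zx}=1,\qquad\text{so}\qquad \frac xy+\frac yz+\frac zx\ge 3, $$
with equality only when $x=y=z$. |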
Center of gravity of a self intersecting irregular polygon | I think your best bet will be to convert the self-intersecting polygon into a set of non-self-intersecting polygons and apply the algorithm that you linked to to each of them. I don't think it's possible to solve your problem without finding the intersections, and if you have to find the intersections anyway, the additional effort of using them as new vertices in a rearranged vertex list is small compared to the effort of finding them.
To answer your subquestion: Yes, the order of the nodes does matter, especially if the polygon is allowed to be self-intersecting since in that case the order is an essential part of the specification of the polygon and different orders specify different polygons -- for instance, the "square" with the ordering you describe would be the polygon on the right-hand side of the two examples of self-intersecting polygons that the page you linked to gives (rotated by $\pi/2$).
P.S.: I just realized that different orders can also specify different non-self-intersecting (but not convex) polygons, so the only case where you could specify a polygon by its vertices alone is if you know it's convex. But even then you have to use the vertices in the right order in the algorithm you linked to.
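For reference, here is a minimal version of the standard shoelace-based centroid (presumably what the linked algorithm computes), to be applied to each non-self-intersecting piece:

```python
def centroid(vertices):
    """Area centroid of a simple polygon given its vertices in order."""
    a = cx = cy = 0.0
    n = len(vertices)
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        cross = x0 * y1 - x1 * y0   # shoelace term
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5                        # signed area
    return cx / (6 * a), cy / (6 * a)

print(centroid([(0, 0), (1, 0), (1, 1), (0, 1)]))  # (0.5, 0.5)
```

The overall centre of gravity is then the area-weighted average of the pieces' centroids (keeping the signed areas consistent). |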
Can every smooth scalar function $f$ on $M$ be written as $f=\operatorname{div}(X)$? | Note that you can identify vector fields on $M$ with $(n-1)$-forms. To see this, note that the metric provides a map from $T_pM \to (T_pM)^\ast$ since any non-degenerate symmetric bilinear form provides a map into the dual. Since vector fields are sections of the tangent bundle, we can pointwise take the dual to get a section of the cotangent bundle. However this will be a $1$-form which is not what we want (since taking the exterior derivative of this doesn’t yield what we want). We then apply Hodge duality to convert the $1$-form into an $(n-1)$-form. Taking the divergence more or less amounts to taking the exterior derivative of an $(n-1)$-form to yield an $n$-form. We can play the same trick with Hodge duality to convert this back to a $0$-form i.e. a function.
Barring all the technical duality, your question can be restated as: is every $n$-form on $M$ exact? Note that $n$-forms on $M$ are trivially closed. So $H_{DR}^n(M)$ can measure whether every function can be written as the divergence of a vector field. In particular, the answer is yes if and only if $H_{DR}^n(M) = 0$. So you can think of this cohomology group as the group of obstructions to solving this PDE.
Now for an easy criterion: if $M$ is a smooth, connected, oriented manifold, then $H_{DR}^n(M) = \mathbb{R}$ if $M$ is compact and is $0$ if $M$ is not compact. |
Showing that $a+b\sqrt{2}$ is associative over multiplication | Are you trying to show that ordinary multiplication is associative when applied to numbers of the form $a+b\sqrt2$ (with $a$ and $b$ rational and/or integral)?
That is trivially true because $(pq)r=p(qr)$ holds for all real numbers $p$, $q$, and $r$ -- the case where they all lie in $\mathbb Z+\mathbb Z\sqrt2$ or $\mathbb Q+\mathbb Q\sqrt2$ is just a special case of this general truth. |
Confused when changing from Lebesgue Integral to Riemann Integral | It is not really a matter of changing from Lebesgue integral to Riemann integral, it is a matter of changing measures (sort of a change of variables).
By definition, given $X : \Omega \to \mathbb{R}$ a random variable, $E[X]=\int_{\Omega}X$.
$X$ defines a measure $\widetilde{m}$ on $\mathbb{R}$, called the push-forward, by $\widetilde{m}(A)=P(X^{-1}(A))$. By construction, $\widetilde m$ is the image of $P$ under $X$, and hence
\begin{equation} \tag{1}
\int_{\mathbb{R}} f d\widetilde{m}= \int_{\Omega}f \circ XdP.
\end{equation}
The equality follows from the usual arguments (prove it for characteristic functions, then simple functions, then use convergence. Recall that $\mathbf{1}_A \circ X=\mathbf{1}_{X^{-1}(A)}$).
Let $h$ be the density of $X$. We then have, by definition of density, that $\widetilde{m}(A)=P(X^{-1}(A))=\int_A h dm$ for any $A \in \mathcal{B}(\mathbb{R})$, where $m$ is the Lebesgue measure. By "change of variables" (which is Theorem $1.29$ in Rudin's RCA for example, but is just another instance of repeating the argument of proving for characteristic functions, then simple functions, and using convergence), we have:
\begin{equation} \tag{2}
\int_{\mathbb{R}} f d\widetilde{m}=\int_{\mathbb{R}}f \cdot h dm.
\end{equation}
Combining $(1)$ and $(2)$,
$$\int_{\mathbb{R}} f \cdot h dm= \int_{\Omega}f \circ X dP. $$
Taking $f=\mathrm{Id}$ yields
$$\int_{\mathbb{R}} x h(x)dx= \int_{\Omega} X dP=E[X]. $$
Taking $f=\mathrm{Id}\cdot\mathbf{1}_I$, where $I$ is some interval (for example, $(0,+\infty)$ as in your case), we have
$$\int_{I} x h(x)dx= \int_{X^{-1}(I)} X dP, $$
recalling again that $\mathbf{1}_A \circ X=\mathbf{1}_{X^{-1}(A)}$. Since $P(X<0)$ in your case is $0$, this last integral is actually equal to the integral over the whole space, and hence to $E[X]$, which gives your equality. |
Why does $\epsilon$ come first in the $\epsilon-\delta$ definition of limit? | Good question!
The answer is that if we put $\delta$ first, then our definition would no longer correspond to what we mean by the limit of a function.
Proposition Let $f$ be the constant function $0$. Let $c,L$ be any real numbers. Then:
For all $\delta>0$ there exists $\epsilon>0$ such that if $0<|x-c|<\delta$, then $|f(x)-L|<\epsilon$.
In other words, if we define $\lim'$ by putting the $\delta$ first, then $\lim_{x\to c}'f(x)=L$.
Proof. Let $\delta>0$ and let $\epsilon=|L|+1$. If $0<|x-c|<\delta$ then $$|f(x)-L| = |0-L|=|L|<|L|+1=\epsilon\quad\Box$$
So our constant function $0$, which should clearly converge to $0$ at every point, under the new definition converges to every real number at every point.
In particular, limits aren't unique and the properties we expect to hold for a 'limit' no longer hold. It just isn't a very interesting definition and it no longer captures our intuition of what limits should be.
By contrast, the definition with $\epsilon$ first exactly captures the notion of a 'limit'. |
Is equality of processes stable to multiplication with an independent process. | In the case of equality in law then the answer is clear from the independence hypothesis you have formally :
$P(X^1_t=S_t.Y_t^1<a)=P(Y_t^1<a/s).P(S_t<s)=P(Y_t^2<a/s).P(S_t<s)=P(S_t.Y_t^2<a)$ and as $P(X^1_t<a)=P(X_t^2<a)$ this leads to the conclusion, $P(X_t^2<a)=P(S_t.Y_t^2<a)$.
For the case of almost sure equality (or pointwise equality): there is a set $A$ of probability $1$ on which both $X^1_t=X^2_t$ and $Y^1_t=Y^2_t$, and for any $\omega \in A$:
$X_t^1(\omega)=Y_t^1(\omega).S_t(\omega)=Y_t^2(\omega).S_t(\omega)$ but you also have :
$X_t^1(\omega)=X_t^2(\omega)$ so $X_t^2(\omega)=Y_t^2(\omega).S_t(\omega)$
best regards |
R and $\rho$ when calculating the interval of convergence of a power series | Why don't you look up the theorem in your book. I think you're misapplying it. Perhaps $\rho$ is the limit when you leave out $x$ from your calculation, that is the limit of the coefficient. In your case, the limit of the ratio of your coefficients is 4, which is $\rho$, so that $R = 1/4$. |
How to get the foci / focus Hyperbola | Hint: Write the equation as $$\dfrac {(x+3)^2}{\dfrac{25}{9}}-\dfrac{(y-5)^2}{25}=-1.$$ |
Giving randomly 5 red balls and 5 blue balls to 5 kids, each kid get 2 balls. What is the expected value of number of kids got 2 different balls? | Your approach with indicator variables using linearity of expectation went exactly in the right direction. You just used the wrong probabilities. Give the first child a ball; no matter which colour it is, $5$ of the remaining $9$ balls are of the other colour, so the probability that the second ball you give them is of the other colour is $\frac59$. Thus the expected number of kids with different colours is $5\cdot\frac59=\frac{25}9$. |
Length of perpendicular vector | Let $C$ be the point on line $l$ which is the shortest distance from $A$. This is the perpendicular. $C$ has co-ordinates given by $2\vec i-2\vec j-\vec k+\mu_0(-\vec i+2\vec j+\vec k)=(2-\mu_0)\vec i+(2\mu_0-2)\vec j+(\mu_0-1)\vec k$ for some $\mu_0$ which we must determine.
To minimise the distance we minimise the distance between $C$ and $A$:$$\begin{align}|C-A|^2 = & (2-\mu_0-1)^2+(2\mu_0-2-1)^2+(\mu_0-1-1)^2 \\
& = {\mu_0}^2-2\mu_0+1+4{\mu_0}^2-12\mu_0+9+{\mu_0}^2-4\mu_0+4\\
& = 6{\mu_0}^2-18\mu_0+14\\
\end{align}$$
This is a quadratic. To minimise, we differentiate and set the derivative equal to zero, which gives: $$12\mu_0-18=0\implies\mu_0=3/2\\\implies |C-A|^2=6\cdot{\frac32}^2-18\cdot\frac32+14=\frac12\\\implies|C-A|=\frac{1}{\sqrt2}$$ |
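For a numerical cross-check, one can compute the foot of the perpendicular directly by projection (NumPy used here purely for illustration):

```python
import numpy as np

A = np.array([1.0, 1.0, 1.0])        # the point (read off from the expansion above)
P0 = np.array([2.0, -2.0, -1.0])     # a point on the line l
d = np.array([-1.0, 2.0, 1.0])       # direction vector of l

# Foot of the perpendicular: project A - P0 onto d.
mu = np.dot(A - P0, d) / np.dot(d, d)
C = P0 + mu * d

print(mu, np.linalg.norm(C - A), 1 / np.sqrt(2))  # mu = 1.5, distance ≈ 0.7071
```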
Find the area between functions $x^3=2y^2;x=0;y=-2$ | The area should be
$$\int^2_0[f(x)-g(x)]dx$$ as $f(x)\ge g(x)$ in the given interval.
So,$$\sqrt2\cdot2^{5/2} = 2^{1/2}2^{5/2} = 2^{3} = 8 \implies 4- \frac{8}{5} = \frac{12}{5}$$ |
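A numerical check of the area (using the branch $y=-\sqrt{x^3/2}$ assumed above):

```python
from scipy.integrate import quad

# Upper boundary: the branch y = -sqrt(x^3/2); lower boundary: y = -2.
# The curve meets y = -2 at x = 2.
area, _ = quad(lambda x: -((x**3) / 2) ** 0.5 - (-2), 0, 2)
print(area, 12 / 5)  # ≈ 2.4
```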
How do I compute the solid angle of a square in space in spherical coordinates? | Let $A_i=(x_i,y_i,1)$ $(i\ {\rm mod}\ 4)$ be the four vertices of the square $Q$. The edges $A_{i-1}A_i$ together with the origin $O$ determine four planes $\sigma_i$ that bound a quadrilateral cone. Using the vector product you can compute the unit normals ${\bf n}_i$ of these planes, and this in turn will allow you to compute the inner angle $\alpha_i$ between the two planes intersecting along the line $OA_i$. This $\alpha_i$ then is the angle at $A_i'\in S^2$ of the spherical quadrangle $Q'$ obtained through the projection. The area of $Q'$ is then simply given by the "spherical excess" of the $\alpha_i$, i.e.,
$${\rm area}(Q')=\sum_{i=0}^3\alpha_i\ -2\pi\ .$$ |
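Here is a sketch of the computation in code. It obtains the vertex angles from tangent vectors on the sphere, which is equivalent to the dihedral angles between the planes $\sigma_i$ described above; the test square $(\pm1,\pm1,1)$ is a cube face seen from the cube's centre, so the answer must be one sixth of the full sphere, $4\pi/6$:

```python
import numpy as np

def solid_angle(vertices):
    """Spherical excess of the polygon cut out on S^2 by the rays O->vertex."""
    v = [np.asarray(p, float) / np.linalg.norm(p) for p in vertices]
    n = len(v)
    total = 0.0
    for i in range(n):
        a, b, c = v[i - 1], v[i], v[(i + 1) % n]
        # Tangent directions at b toward a and toward c.
        t1 = a - np.dot(a, b) * b
        t2 = c - np.dot(c, b) * b
        cosang = np.dot(t1, t2) / (np.linalg.norm(t1) * np.linalg.norm(t2))
        total += np.arccos(np.clip(cosang, -1, 1))
    return total - (n - 2) * np.pi   # spherical excess

sq = [(1, 1, 1), (-1, 1, 1), (-1, -1, 1), (1, -1, 1)]
print(solid_angle(sq), 4 * np.pi / 6)  # both ≈ 2.0944
```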
Calculate Fourier series and explicitly sum it for x=0 | Let us put this straight: the Fourier series of $f(x)$ is not the written one. Assuming
$$ \theta(x)=\left\{\begin{array}{rcl}0&\text{if}& x<0\\\tfrac{1}{2}&\text{if}& x=0\\1&\text{if}& x>0\end{array}\right.$$
we have that the Fourier series of $\theta(-x)\sin(x)+\theta(x)\cos(x)$ over $[-\pi,\pi]$ is given by
$$ -\frac{1}{\pi}+\frac{1}{2}\cos(x)+\frac{2}{\pi}\sum_{n\geq 1}\frac{\cos(2nx)}{4n^2-1}+\frac{1}{2}\sin(x)+\frac{4}{\pi}\sum_{n\geq 1}\frac{n\sin(2nx)}{4n^2-1} $$
and the evaluation of such series at $x=0$ leads to the value
$$ -\frac{1}{\pi}+\frac{1}{2}+\frac{1}{\pi}\sum_{n\geq 1}\left(\frac{1}{2n-1}-\frac{1}{2n+1}\right)=\frac{1}{2} $$
as expected from $\frac{1}{2}\left[\lim_{x\to 0^-}f(x)+\lim_{x\to 0^+}f(x)\right]$. |
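A numerical check at $x=0$ (only the constant and cosine terms survive there):

```python
import numpy as np

def series_at_zero(N):
    # At x = 0 every sine term vanishes and cos(2n*0) = 1.
    n = np.arange(1, N + 1)
    return -1/np.pi + 1/2 + (2/np.pi) * np.sum(1.0 / (4*n**2 - 1))

print(series_at_zero(10**6))  # ≈ 0.5
```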
Construct the angle whose sine is $\frac{3}{2+\sqrt{5}}$ | Outline. Let $A=(0,2)$ and $B=(1,0)$. Consider the circle around $A$ with radius $AB=\sqrt{5}$, which intersects the $y$-Axis at $C=(0, 2+\sqrt{5})$. Let now $\Gamma$ be the circle centered at $O=(0,0)$ through $C$ which intersects the line $l$ through $D=(0,3)$ and parallel to the $x$-Axis at $E$. Then
$$\sin\angle DEO=\frac{OD}{OE}=\frac3{2+\sqrt{5}}$$ |
Properties of Miquel point | We have the following circles:
Brown circle $PAB$ with center $O_1$
Red circle $PCD$ with center $O_2$
Blue circle $QBC$ with center $O_3$
Green circle $QAD$ with center $O_4$
All these circles have one common point $M$ (Miquel).
Let us first show that points $O_1,O_2,O_3,O_4$ are concyclic (part 2 of your problem):
$$O_1O_2\bot MP, \ \ O_1O_4\bot MA\implies\angle O_2O_1O_4+\angle AMP=180^\circ\tag{1}$$
On the other side, quadrilateral $AMPB$ is cyclic and therefore:
$$\angle AMP+\angle B=180^\circ\tag{2}$$
By comparing (1) and (2) you get:
$$\angle O_2O_1O_4=\angle B\tag{3}$$
In a similar way:
$$O_3O_2\bot MC, \ \ O_3O_4\bot MQ\implies\angle O_2O_3O_4+\angle QMC=180^\circ\tag{4}$$
On the other side, quadrilateral $QMCB$ is cyclic and therefore:
$$\angle QMC+\angle B=180^\circ\tag{5}$$
By comparing (4) and (5), we get:
$$\angle O_2O_3O_4=\angle B\tag{6}$$
From (3) and (6) we get that angles above the same segment $O_2O_4$ are equal:
$$\angle O_2O_1O_4=\angle O_2O_3O_4$$
...and therefore quadrilateral $O_1O_2O_4O_3$ is concyclic.
Back to part (1) of your problem. Introduce angle $\angle MAD=\beta$:
$$\angle MAD=\angle MAP=\frac12\angle MO_1P=\angle MO_1O_2=\beta\tag{7}$$
On the other side:
$$\angle MAD=\frac12 \angle DO_4M=\angle MO_4O_2=\beta\tag{8}$$
So angles $\angle MO_1O_2$ and $\angle MO_4O_2$ above the same segment $MO_2$ are equal and therefore quadrilateral $O_1O_2MO_4$ is cyclic. We already proved that $O_1O_2O_4O_3$ is cyclic, so points $O_1,O_2,O_3,O_4,M$ must all lie on the same circle (not shown in the picture).
Now introduce angle $\angle AMD=\alpha$. We have that $DM\bot O_2O_4$ and $AM\bot O_1O_4$. Consequently:
$$\angle O_1O_4O_2=\angle AMD=\alpha$$
But quadrilateral $O_1O_2O_3O_4M$ is cyclic and therefore:
$$\angle O_1MO_2=\angle O_1O_4O_2=\alpha$$
So we have proved that:
$$\angle AMD=\angle O_1MO_2=\alpha$$
$$\angle MAD=\angle MO_1O_2=\beta$$
So triangles $\triangle MAD$ and $\triangle MO_1O_2$ have the same angles and consequently they are similar.
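Since the proof is purely synthetic, a numerical check may be reassuring. The sketch below picks an arbitrary quadrilateral $ABCD$ (the coordinates are my own choice), builds $P=AD\cap BC$, $Q=AB\cap CD$ and the four circumcenters, recovers $M$ as the second intersection of the first two circles, and verifies both claims:

```python
import numpy as np

def cross2(u, v):
    return u[0] * v[1] - u[1] * v[0]

def intersect(p1, p2, p3, p4):
    """Intersection of line p1p2 with line p3p4."""
    d1, d2 = p2 - p1, p4 - p3
    t = cross2(p3 - p1, d2) / cross2(d1, d2)
    return p1 + t * d1

def circumcenter(a, b, c):
    """Centre of the circle through a, b, c (solve |x-a| = |x-b| = |x-c|)."""
    A = 2 * np.array([b - a, c - a])
    rhs = np.array([b @ b - a @ a, c @ c - a @ a])
    return np.linalg.solve(A, rhs)

A, B, C, D = (np.array(p, float) for p in [(0, 0), (5, 0), (6, 4), (1, 3)])
P = intersect(A, D, B, C)           # AD meets BC
Q = intersect(A, B, C, D)           # AB meets CD

O1, O2 = circumcenter(P, A, B), circumcenter(P, C, D)
O3, O4 = circumcenter(Q, B, C), circumcenter(Q, A, D)

# Both circles (O1), (O2) pass through P, so their second intersection M
# is the reflection of P across the line of centres O1O2.
u = (O2 - O1) / np.linalg.norm(O2 - O1)
M = 2 * (O1 + ((P - O1) @ u) * u) - P

# Part (1): M lies on the other two circles as well ...
print(np.linalg.norm(M - O3) - np.linalg.norm(B - O3),
      np.linalg.norm(M - O4) - np.linalg.norm(A - O4))
# ... and O1, O2, O3, O4, M are concyclic (part 2 plus the final step).
O = circumcenter(O1, O2, O3)
r = np.linalg.norm(O1 - O)
print(np.linalg.norm(O4 - O) - r, np.linalg.norm(M - O) - r)  # all ~0
```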
Show that $\pi(p_x^2) \geq 2p_x -2$ | By Prime number theorem,
$$\pi(x)>\frac x{logx}$$
$$\pi(p^2)>\frac{p^2}{2logp}$$
Inequality $(1)$:
$$\frac{p^2}{2\log p}>2p-2$$
is true if
$$\frac{p^2}{2\log p}>2p,$$
i.e. if
$$p>4\log p.$$
Using a graph ( http://m.wolframalpha.com/input/?i=y%3Dx%2Cy%3D4logx&x=0&y=0 ), the inequality $p>4\log p$ holds for $p>8.61\ldots$
So this argument establishes your postulate at least for primes starting from 11. The smaller primes fall outside the approximation, but a direct count shows the inequality holds for them too: for $p=7$ there are $15$ primes below $49$ while $2\cdot 7-2=12$, and for $p=5$ there are $9$ primes below $25$ while $2\cdot 5-2=8$.
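These small cases are easy to confirm directly, e.g. with SymPy (a sketch; `primepi` counts the primes up to its argument):

```python
from sympy import primepi, primerange

# Direct check of pi(p^2) >= 2p - 2 for the first few primes.
for p in primerange(2, 40):
    print(p, primepi(p**2), 2*p - 2, primepi(p**2) >= 2*p - 2)
```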
For your second question, there are a lot of known bounds for the $n$-th prime number, but the sharper results are hard to prove.
Bounds for $n$-th prime |
Second cohomology group $H^2(\mathfrak{g},\mathbb{R})$ of a semisimple Lie algebra | Yes, the claim follows from the nondegeneracy of the Killing form:
Fix any basis of $\mathfrak{g}$; then, we can rephrase the condition as the existence of a matrix $[\phi]$ such that $$[\omega] = [\kappa] [\phi] ,$$ where $[\omega]$ and $[\kappa]$ are respectively the matrix representations of $\omega, \kappa$ with respect to the basis. Since $\kappa$ is nondegenerate, so is $[\kappa]$, and we may rearrange to find the explicit expression $[\phi] = [\kappa]^{-1}[\omega]$.
Is this proof about sequences correct? | The idea is correct. Few steps are not.
For example $u_n + v_n \neq L + M$ but $u_n + v_n \rightarrow L + M$.
Similarly $x_n = \frac{u_n + v_n + |u_n - v_n|}{2} \rightarrow \frac{L + M + |L - M|}{2} = \max(L,M)$ |
Finding $\sum \frac{1}{n^2+7n+9}$ | Start with the infinite product expansion of $\cos x$
$$\cos x = \prod_{k=0}^\infty \left(1 - \frac{x^2}{(k+\frac12)^2\pi^2}\right)$$
Taking the logarithm of $\cos(\pi x)$ and differentiating, we have
$$-\pi\tan(\pi x) = \sum_{k=0}^\infty \frac{2x}{x^2-(k+\frac12)^2}
\quad\implies\quad
\sum_{k=0}^\infty\frac{1}{(k+\frac12)^2-x^2} = \frac{\pi}{2x}\tan(\pi x)
$$
This leads to
$$
\sum_{k=0}^\infty \frac{1}{k^2+7k+9}
= \sum_{k=0}^\infty \frac{1}{(k+\frac72)^2 - \frac{13}{4}}
= \sum_{k=3}^\infty \frac{1}{(k+\frac12)^2 - \frac{13}{4}}\\
= \frac{\pi}{2\cdot\frac{\sqrt{13}}{2}}\tan\left(\frac{\pi\sqrt{13}}{2}\right) - \sum_{k=0}^2 \frac{1}{(k+\frac12)^2 - \frac{13}{4}}
= 1 + \frac{\pi}{\sqrt{13}}\tan\left(\frac{\pi\sqrt{13}}{2}\right)
$$ |
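A quick numerical check of the closed form (partial sum of $10^7$ terms, so the truncation error is about $10^{-7}$):

```python
import numpy as np

k = np.arange(0, 10**7)
partial = np.sum(1.0 / (k**2 + 7*k + 9))
closed = 1 + np.pi / np.sqrt(13) * np.tan(np.pi * np.sqrt(13) / 2)
print(partial, closed)  # agree to ~1e-7
```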
how to prove the existence of solution of system of equations using the degree of Brouwer? | The solution I mention is correct and complete. |
Find set of vectors orthogonal to $\begin{bmatrix} 1 \\ 1 \\ 1 \\ \end{bmatrix}$ | The space of vectors orthogonal to $\begin{bmatrix} 1 \\ 1 \\ 1 \\ \end{bmatrix}$ can be defined by $span(\begin{bmatrix}
-1 \\
1 \\
0 \\
\end{bmatrix} , \begin{bmatrix}
-1 \\
0 \\
1 \\
\end{bmatrix} )$ where the dimension of the space is 2. |
Consider the covering map $p: \mathbb{R} \times \mathbb{R}_+ \to \mathbb{R}^2-0 $ given by $p(x,t) = (t\cos(2\pi x), t\sin(2\pi x))$ | Let the covering map $p:\mathbb R \times \mathbb R_+\rightarrow \mathbb R^2-0$ be given by $$p(x,y)=(y\cos{2\pi x},y\sin{2\pi x})$$
Let the following $f,g,h:[0,1]\rightarrow \mathbb R^2-0$ be given:
$$\begin{array}{cc}
f(t) = & (2-t,0)\\
g(t) = & ((1+t)\cos{2\pi t},(1+t)\sin{2\pi t})\\
h(t) = & f*g(t)
\end{array}$$
For respective lifts $f',g',h':[0,1]\rightarrow \mathbb R\times \mathbb R_+$ we must have $p \circ f' = f $, that is $p(f'_1,f'_2)=f$ (respectively $g,h$). So we solve:
$$\begin{array}{rl}
p(x,y)=& (2-t,0)\\
y\cos{2\pi x} = & 2-t\\
y\sin{2\pi x} = &0
\end{array}$$
We see this has a nice solution setting $$x= 0$$
$$y=2-t$$
So we let
$$f'(t)= (0,2-t)$$
This is easily verified. Again, for $g'$ we solve:
$$\begin{array}{cc}
p(x,y) = & ((1+t)\cos{2\pi t},(1+t)\sin{2\pi t})\\
y\cos{2\pi x} = & (1+t)\cos{2\pi t}\\
y\sin{2\pi x} = & (1+t)\sin{2\pi t}
\end{array}$$
And this has solution $$x=t$$ $$y= 1+t$$
So we let $$ g'(t)= (t,1+t)$$
The case of $h'$ should be simpler, set $h'=f' *g'$. This is well defined as $f'(1)= (0,1)$ and $g'(0)=(0,1)$. We verify: $$\begin{array}{rl}
p\circ h' = & p \circ (f'*g')\\
= & p\circ \left\{ \begin{array}{cc}
f'(2t) & t \in [0,\frac{1}{2}]\\
g'(2t-1) & t\in [\frac{1}{2}, 1]\\
\end{array}\right. \\
= &\left\{ \begin{array}{cc}
p \circ f'(2t) & t \in [0,\frac{1}{2}]\\
p\circ g'(2t-1) & t\in [\frac{1}{2}, 1]\\
\end{array}\right. \\
= & \left\{ \begin{array}{cc}
f(2t) & t \in [0,\frac{1}{2}]\\
g(2t-1) & t\in [\frac{1}{2}, 1]\\
\end{array}\right. \\
= & f*g\\
= & h
\end{array}$$
And we have all the desired lifts. |
How many projectives and injectives exist in a path algebra? | If $k$ is a field, and your quiver has no oriented cycles, then there are exactly as many indecomposable projective modules (up to isomorphism) as there are vertices in the quiver. Moreover, any finite-dimensional representation admits a projective cover. (This is all true because of more general results on Artinian algebras).
In your specific example, to find the projective cover of the representation, try to find its top, and then compute its projective cover. Here, this should be $P_3\oplus P_6$, where $P_i$ is the projective module "at vertex $i$" (meaning that if $A$ is the path algebra, and $e_i$ is the path of length zero at vertex $i$, then $P_i = A e_i$). |
Is the difference of the natural logarithms of two integers always irrational or 0? | If $\log(a)-\log(b)$ is rational, then $\log(a)-\log(b)=p/q$ for some integers $p$ and $q$, hence $\mathrm e^p=r$ where $r=(a/b)^q$ is rational. If $p\ne0$, then $\mathrm e=r^{1/p}$ is algebraic since $\mathrm e$ solves $x^p-r=0$. This is absurd hence $p=0$, and $a=b$. |
Probability With replacement | Firstly I agree with John Chessant in the sense of the ratio of the various coloured discs, i.e. R:B:W = 3:2:1.
As to whether Anne's suggestion is "reasonable" or not, there is a somewhat subjective element to this question.
I'm inclined to think that Anne must have had some prompting factor to suggest 10; she may have given such a number because she could clutch all or most of the discs in one hand, thereby giving her a fair guesstimate of the number of discs.
A statistician would deem it blatantly unreasonable; Anne's mother would deem it highly reasonable. I waver somewhere in between: after all, it is not outside the realm of possibility that the results can occur with ratios of 5:3:2, however unlikely. But personally, believing the ratio is most likely 3:2:1 with the actual numbers being 6, 4 and 2 for red, blue and white respectively, Anne's guesstimate is out by only 2. If this is the case I still hold the same view regarding her suggestion, i.e. not unreasonable (call me a softie).
In light of what I have said in the above I give my answer to the second question as----
Red=6
Blue=4
White=2 |
There does not exist any integer $m$ such that $3n^2+3n+7=m^3$ | I am making this community wiki because I'm really just unpacking the comments. With the right method, this problem may seem trivially easy, but it still requires a few tedious calculations and the answer is not instantaneously obvious, despite what some people may want you to believe.
We know that $m^3 \equiv 0, 1, 8 \pmod 9$ (if you doubt this, take the cube of any positive integer, add up its digits and repeat adding up digits until you only have a single digit left; a 9 is equivalent to 0).
Then $3n^2 \equiv 0, 3 \pmod 9$. And $3n \equiv 0, 3, 6 \pmod 9$, so $3n^2 + 3n \equiv 0, 3, 6 \pmod 9$ as well. Adding 7 to these gives $3n^2 + 3n + 7 \equiv 7, 1, 4 \pmod 9$.
Therefore we need $3n^2 + 3n \equiv 3 \pmod 9$ so that $3n^2 + 3n + 7 \equiv 1 \pmod 9$.
Enumerating the cases one by one:
If $n \equiv 0 \pmod 9$, then $3n^2 + 3n + 7 \equiv 0 + 0 + 7 = 7 \pmod 9$.
If $n \equiv 1 \pmod 9$, then $3n^2 + 3n + 7 \equiv 3 + 3 + 7 = 4 \pmod 9$.
If $n \equiv 2 \pmod 9$, then $3n^2 + 3n + 7 \equiv 3 + 6 + 7 = 7 \pmod 9$.
If $n \equiv 3 \pmod 9$, then $3n^2 + 3n + 7 \equiv 0 + 0 + 7 = 7 \pmod 9$.
If $n \equiv 4 \pmod 9$, then $3n^2 + 3n + 7 \equiv 3 + 3 + 7 = 4 \pmod 9$.
If $n \equiv 5 \pmod 9$, then $3n^2 + 3n + 7 \equiv 3 + 6 + 7 = 7 \pmod 9$.
If $n \equiv 6 \pmod 9$, then $3n^2 + 3n + 7 \equiv 0 + 0 + 7 = 7 \pmod 9$.
If $n \equiv 7 \pmod 9$, then $3n^2 + 3n + 7 \equiv 3 + 3 + 7 = 4 \pmod 9$.
If $n \equiv 8 \pmod 9$, then $3n^2 + 3n + 7 \equiv 3 + 6 + 7 = 7 \pmod 9$.
As it turns out, $3n^2 + 3n \equiv 3 \pmod 9$ is impossible. Thus we have shown that $3n^2 + 3n + 7$ lacks one of the characteristics of cubes and therefore can't be a cube.
If we had found an instance of the desired congruence modulo 9, that would have been insufficient to prove the equation does have solutions, even though the absence of the desired congruence does prove there are no solutions.
Also note that it's unnecessary to restrict $n$ to positive integers, since if $n$ is negative, then $n^2$ is positive, while $n = 0$ gives us 7, which is quite clearly not a cube. |
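The whole case analysis can be compressed into a short brute-force check (my own illustration):

```python
# Cubes mod 9 vs. values of 3n^2 + 3n + 7 mod 9: the sets never meet.
cubes = {m**3 % 9 for m in range(9)}                 # {0, 1, 8}
values = {(3*n*n + 3*n + 7) % 9 for n in range(9)}   # {4, 7}
print(cubes, values, cubes & values)                 # empty intersection
```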
What are the roots of $ x^{2} = 2^{x} $? | Taking the logarithm and assuming $x>0$,
$$2\ln(x)=x\ln(2),$$ or $$\frac{\ln(x)}x=\frac{\ln(2)}2.$$
The derivative of the LHS is $$\frac{1-\ln(x)}{x^2},$$ which has a single root at $x=e$.
As there is a single extremum and the function is continuous, the equation
$$\frac{\ln(x)}x=\frac{\ln(2)}2$$ has at most two positive roots, which you found ($x=2$ and $x=4$).
For negative $x$, the equation turns to
$$\frac{\ln(-x)}x=\frac{\ln(2)}2.$$
As the LHS function is positive only on one side of the extremum, there is at most one negative root. This root exists because the LHS tends to $+\infty$ as $x\to 0^-$, but it has no closed form.
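Numerically (a sketch; `brentq` just brackets the root on $[-1,0]$, where the sign changes):

```python
from scipy.optimize import brentq

f = lambda x: x * x - 2**x
# Positive roots are x = 2 and x = 4; the negative root has no closed form.
print(brentq(f, -1, 0), f(2.0), f(4.0))  # ≈ -0.766665, 0.0, 0.0
```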
How many solutions does the equation $x+y+z=17$ have where $x,y,z$ are non negative integers? | This is a classical Stars and Bars problem. Using the explanation in the link it's not too dificult to conlcude that the wanted answer is:
$$\binom{17+3-1}{17} = \binom{19}{17}$$ |
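A brute-force cross-check (illustration only):

```python
from math import comb
from itertools import product

brute = sum(1 for x, y, z in product(range(18), repeat=3) if x + y + z == 17)
print(brute, comb(19, 17))  # both 171
```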
Polynomials with specified ranges in intervals | Hint:
A degree five version might work. Symmetrize things around zero, and rescale so the two midpoints are at $1,-1.$ Insist on $f(x)=ax+bx^3+cx^5,$ which since it is odd means one only needs to look at the interval containing the midpoint $1$. So if $s<1<t$ are the two ends of the interval $[s,t]$ with midpoint $1$, we will send $1$ to itself, and then choose some interval $[m,n]$ around $1$ for the image, and we may as well have the midpoint of $[m,n]$ be $1$ also. We also choose $m,n$ close enough to $1$ that the interval $[s,t]$ gets shrunken by the appropriate amount to satisfy the conditions.
Now one can check that the three equations $f(1)=1,\ f(s)=m,\ f(t)=n$ lead always to uniquely solvable equations for the coefficients $a,b,c.$ [This is a matter of looking at the determinant of the system.] If one is lucky the resulting curve will end up increasing on the interval $[s,t],$ and that interval will map into the interval $[m,n].$
As simple-minded as it seems, this scheme did indeed work when I took $s=0.8,\ t=1.2$ and went for a shrink by a factor of $1/2$ via $m=0.9,n=1.1.$ When I did this, the coefficient of $x^3$ came out negative, however the function when graphed was increasing. Likely the derivative, a quadratic in $x^2,$ had negative discriminant, though I didn't check that.
This may not always work, and to even look at whether it does would involve perhaps prohibitive symbolic algebra, in order to get sufficient information on the coefficients $a,b,c$ in terms of the chosen parameters $s,t,m,n.$ |
additive subgroup of real numbers with non empty interior | Let A be the subgroup of the real line R with non-empty interior.Let x be a point in the interior of A. Then there exist a neighborhood$(x-\epsilon,x+\epsilon)$ containing x which is contained in A. As A is a subgroup n$(x-\epsilon,x+\epsilon)$ for any integer n will belongs to A. In this way you will get A=R. |
Using Fubini's Theorem in Contour Integrals proof | One of the nice things about Functional Analysis is that you can generally reduce to the scalar case by applying a linear functional to everything, rearranging scalar integrals, and then pulling the functional back outside. Then, knowing that you have enough functionals to separate points allows you to remove the functional from both sides of the resulting equation. You do, however, need the existence of the integrals without the functionals, or to define the integrals by how they interact with functionals. But the theorems to interchange orders of integration can be reduced to corresponding scalar cases.
In full detail: The Banach algebra $\mathcal{A}$ is a Banach space and its dual space $\mathcal{A}^{\star}$ consists of all continuous linear functionals on $\mathcal{A}$. Suppose you have a continuous rectifiable curve $\gamma(t) : [0,1]\rightarrow\mathbb{C}$ and a continuous function $f : \Omega\subseteq\mathbb{C}\rightarrow \mathcal{A}$ on an open region $\Omega$ containing the image of the curve $\gamma$. Then you can define a Riemann–Stieltjes integral
$$
\int_{\gamma} f(z)dz = \lim_{\|\mathcal{P}\|\rightarrow 0} \sum_{j}f(\gamma(t_j^{\star}))\{\gamma(t_j)-\gamma(t_{j-1})\},
$$
where the partition $\mathcal{P}$ consists of subdivision points $0 = t_0 < t_1 < \cdots < t_n = 1$ and intermediate points $t_j^{\star} \in [t_{j-1},t_j]$. The right side converges in the norm of $\mathcal{A}$ as $\|\mathcal{P}\|\rightarrow 0$. If $\Phi \in \mathcal{A}^{\star}$, then $\Phi$ is continuous and linear, which gives the existence of the following limits, independent of the choice of $\Phi$:
\begin{align}
\Phi\left(\int_{\gamma}f(z)dz\right)& =\Phi\left(\lim_{\|\mathcal{P}\|\rightarrow 0}\sum_{j}f(\gamma(t_j^{\star}))\{\gamma(t_j)-\gamma(t_{j-1})\}\right) \\
& = \lim_{\|\mathcal{P}\|\rightarrow 0}\sum_{j}\Phi(f(t_j^{\star}))\{\gamma(t_j)-\gamma(t_{j-1})\} \\
& = \int_{\gamma}\Phi(f(z))dz
\end{align}
The final integral on the right exists because the vector integral on the far left exists, and the two are related as shown above. You can easily throw in a scalar function $\alpha(z)$, too:
$$
\Phi\left(\int_{\gamma}\alpha(z)f(z)dz\right)=\int_{\gamma}\alpha(z)\Phi(f(z))dz.
$$
This becomes a fundamental building block for dealing with integrals.
Next, in your case, you have integrals
$$
I=\int_{\Gamma_1}\left(\int_{\Gamma_2}f(w)(w-g(z))^{-1}dw\right)(z-a)^{-1}dz.
$$
These are defined as iterated integrals. The inner integrand is a continuous function of $w$ for fixed $z \in \Gamma_1$, and the inner integral gives a continuous function of $z$. So you can then multiply the scalar value of this inner integral by $(z-a)^{-1}$ and integrate with respect to $z$. In fact, by continuity of Banach algebra multiplication, you can also write $I$ as
$$
I = \int_{\Gamma_1}\left(\int_{\Gamma_2}f(w)(w-g(z))^{-1}(z-a)^{-1}dw\right)dz.
$$
Now apply a linear functional $\Phi\in\mathcal{A}^{\star}$ and apply our identity twice:
\begin{align}
\Phi(I) & =\int_{\Gamma_1}\Phi\left(\int_{\Gamma_2}f(w)(w-g(z))^{-1}(z-a)^{-1}dw\right)dz \\
& = \int_{\Gamma_1}\left(\int_{\Gamma_2}f(w)\Phi((w-g(z))^{-1}(z-a)^{-1})dw\right)dz
\end{align}
This reduces everything to scalar integrals. The function $\Phi(\cdots)$ has all of the properties of the vector function in terms of joint continuity, etc., which then allows you to interchange orders of integration and to apply our identity in reverse:
\begin{align}
\Phi(I) & = \int_{\Gamma_2}\int_{\Gamma_1}f(w)\Phi((w-g(z))^{-1}(z-a)^{-1})dzdw \\
& = \int_{\Gamma_2}f(w)\int_{\Gamma_1}\Phi((w-g(z))^{-1}(z-a)^{-1})dzdw \\
& = \int_{\Gamma_2}f(w)\Phi\left(\int_{\Gamma_1}(w-g(z))^{-1}(z-a)^{-1}dz\right)dw \\
& = \Phi\left(\int_{\Gamma_2}f(w)\int_{\Gamma_1}(w-g(z))^{-1}(z-a)^{-1}dzdw\right)
\end{align}
Because this is true for all linear functional $\Phi \in \mathcal{A}^{\star}$, then
$$
I = \int_{\Gamma_2}f(w)\int_{\Gamma_1}(w-g(z))^{-1}(z-a)^{-1}dzdw
$$
And that is exactly what you want, where the end result is that you can interchange orders of integration for the problem at hand. |
Galois solvable and Galois abelian elements | If $G:=\operatorname{Gal}(F(\alpha)/F)$ is solvable, there is even a priori no reason that the intermediate extension $F(\beta)/F$ is Galois. Take for example the splitting field of $X^3-2$ over $\mathbf{Q}$. It has solvable Galois group $S_3$, but $\mathbf{Q}(\sqrt[3]{2})/\mathbf{Q}$ is not Galois.
If $G$ is abelian, then all intermediate extensions are Galois. By
the Galois correspondence, $\operatorname{Gal}(F(\beta)/F)$ is a
quotient of $G$, thus abelian too. |
Logic: Number x is a square of a prime number. Difference between conjunction and implication | Putting an implication directly under an $\exists$ is almost always wrong.
Here your first attempt says
There is some $p$ such that if $p$ is a prime, then $x$ is its square.
And that is always true, no matter what $x$ is -- namely, you can just set $p$ to be $4$. Since $4$ is not a prime, the implication is automatically true, and that single example is enough to make the $\exists$ true.
The second attempt is much better (though bof may or may not be right when he thinks your bracketing is off; depends on your conventions for how tightly quantifiers bind syntactically):
There is a $p$ such that $p$ is a prime and $x$ is its square.
On the other hand for $\forall$ you would typically use an implication under the quantifier to limit which possible values you're interested in. As before, the implication will be automatically true when the condition is not satisfied, and for $\forall$ it is the true case that does not influence the truth value of the entire $\forall$ formula. |
Gaussian Elimination Clarification | Assuming the first column is non-zero (otherwise move to the next column).
If the first entry is non-zero, you have completed that instruction.
Suppose not; if the $p$-th entry is the first non-zero entry in the first column, swap the $p$-th row with the first row.
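A minimal sketch of this step in code (the function name and the NumPy representation are my own choices):

```python
import numpy as np

def pivot_first_column(A):
    """Make the (0, 0) entry non-zero by a row swap, as described above."""
    col = A[:, 0]
    if not col.any():                 # zero column: move on to the next column
        return A
    if A[0, 0] == 0:
        p = np.nonzero(col)[0][0]     # first row with a non-zero entry
        A[[0, p]] = A[[p, 0]]         # swap row p with row 0
    return A

M = np.array([[0., 2., 1.],
              [3., 1., 4.],
              [0., 5., 6.]])
print(pivot_first_column(M))          # rows 0 and 1 swapped
```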
Is it possible to calculate this conditional probability? | With Bayes theorem would be enough. First, suppose net events:
$T$ Person is a terrorist
$A$ Alarm beeps
From your statement, we have:
$$ P(T) = 0.2 $$
$$ P(A \vert T) = 1$$
$$ P(A \vert \bar{T}) = \alpha $$
Applying Bayes:
$$ P(T \vert A) = \frac{P(A \vert T) P(T)}{P(A)} = \frac{0.2}{0.2+0.8 \alpha}$$
Where $P(A)$ can be computed from total probability, as follow:
$$P(A) = P(A \vert T) P(T) + P(A \vert \bar{T}) P(\bar{T}) = 0.2 + 0.8 \alpha$$
So, to have $P(T \vert A) = 0.999$, we need:
$$\frac{0.2}{0.2+0.8 \alpha} = 0.999 $$
from which you can solve for $\alpha$: $\alpha=\frac{0.2\,(1-0.999)}{0.8\cdot 0.999}\approx 2.5\times 10^{-4}$.