title | upvoted_answer |
---|---|
Using inequalities and limits | Suppose that $\lim_{x \to p}f(x) = A, \lim_{x \to p}g(x) = B$ and we know that $f(x) \leq g(x)$ in a neighborhood of $p$. First we need to understand the meaning of the limit statements informally: the values of $f(x)$ are near $A$ and the values of $g(x)$ are near $B$ when $x$ is near $p$. Now, on the contrary, suppose that $A > B$. Let us first understand this by setting $A = 2, B = 1$. The values of $f(x)$ are near $A = 2$, say in the range $1.9$ to $2.1$, when $x$ is near $p$, and the values of $g(x)$ are near $B = 1$, say in the range $0.9$ to $1.1$. Clearly we then have $$0.9 < g(x) < 1.1 < 1.9 < f(x) < 2.1$$ and this contradicts $f(x) \leq g(x)$ when $x$ is in a certain neighborhood of $p$. Hence we must have $A \leq B$.
A formal proof consists of showing the same argument without assuming any specific values of $A, B$ (like $A = 2, B = 1$ of the previous paragraph). Again let's assume that $A > B$ and let $C = (A + B)/2$ be the number lying exactly midway between $A$ and $B$ ($B < C < A$). Now the idea is to keep the values of $g(x)$ near enough to $B$ that they don't exceed $C$, and the values of $f(x)$ near enough to $A$ that they always exceed $C$. This will make $g(x) < f(x)$, contrary to the given condition.
Now we put in the epsilon-delta details. Setting $\epsilon = (A - B)/2$ so that $C = B + \epsilon = A - \epsilon$, we get a $\delta > 0$ (take the smaller of the two deltas supplied by the two limit statements) such that $|f(x) - A| < \epsilon$ for $0 < |x - p| < \delta$ and $|g(x) - B| < \epsilon$ for $0 < |x - p| < \delta$. Thus we have for $0 < |x - p| < \delta$, $$B - \epsilon < g(x) < B + \epsilon = C = A - \epsilon < f(x) < A + \epsilon$$ and thus $g(x) < f(x)$, and we arrive at a contradiction. Hence $A \leq B$.
The above explanation shows that an epsilon-delta argument is not difficult to give provided we really understand the meaning of the informal argument clearly (it's more like a translation into an unambiguous language).
As to the case of Sandwich Theorem (Squeeze theorem) with $f(x) \leq g(x) \leq h(x)$ and $\lim_{x \to p}f(x) = \lim_{x \to p}h(x) = L$ it is easy to show that $\lim_{x \to p}g(x)$ exists. Corresponding to any $\epsilon > 0$ we have a $\delta > 0$ such that $$L - \epsilon < f(x) < L + \epsilon, L - \epsilon < h(x) < L + \epsilon$$ whenever $0 < |x - p| < \delta$. Hence we get $$L - \epsilon < f(x) \leq g(x) \leq h(x) < L + \epsilon$$ i.e. $L - \epsilon < g(x) < L + \epsilon$ so that $\lim_{x \to p}g(x)$ exists and is equal to $L$. You see, you don't need to use the previous result (dealing with inequalities of two functions and their limits) to get the Sandwich theorem. |
Modeling Rain on a Windshield for various Speeds using Calculus | Since all quantities involved are constant in time and space there is no calculus needed to tackle this problem.
We may assume that there are $N\gg1$ equal-sized rain drops per unit of volume, and that all of these drops fall with the same velocity
$${\bf r}=(r_1,r_2,-r_3),\quad r_3>0\ .$$
The order of magnitude is about $12\>$mph for $r_3$.
Consider a test surface $S$ with unit normal ${\bf n}$. When ${\bf n}$ is parallel to ${\bf r}$ the rain drops hitting $S$ in the next second fill a cylindrical volume with base $S$ and height $|{\bf r}|$. The number of these drops is therefore given by $$N\,{\rm area}(S)\,|{\bf r}|\ .$$ When ${\bf n}$ is tilted by an angle $\alpha$ with respect to ${\bf r}$ this number drops to
$$N\,{\rm area}(S)\,|{\bf r}|\cos\alpha=N\,{\rm area}(S)\>{\bf r}\cdot{\bf n}\ .\tag{1}$$
The car moves in $x$-direction at velocity
$${\bf v}=(v,0,0),\qquad v\geq0\ ,$$
and its windshield is inclined by an angle $\theta$, $\>0\leq\theta<{\pi\over2}$, with respect to the vertical. It follows that the inward unit normal of the windshield is given by
$${\bf n}=(-\cos\theta,0,-\sin\theta)\ .$$
Now the relative velocity of the rain drops with respect to the moving windshield $S$ is given by
$${\bf p}:={\bf r}-{\bf v}=(r_1-v,\>r_2,\>-r_3)\ .$$
In order to compute the number $\Phi$ of rain drops hitting $S$ per second we have to replace ${\bf r}$ in $(1)$ by ${\bf p}$ and so obtain
$$\Phi=N\,{\rm area}(S)\>({\bf r}-{\bf v})\cdot{\bf n}=N\,{\rm area}(S)\>\bigl((v-r_1)\cos\theta+r_3\sin\theta\bigr)\ .$$
Here the right hand side is an increasing function of $v$.
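For instance, a minimal numeric sketch (the parameter values below are made up for illustration and are not part of the answer):
```python
import numpy as np

N, area = 1000.0, 1.0        # drops per unit volume, windshield area (made up)
r1, r3 = 0.0, 5.4            # rain velocity components, r3 > 0 (made up)
theta = np.radians(30.0)     # windshield inclination from the vertical

def phi(v):
    # drops hitting S per second, per the formula above
    return N * area * ((v - r1) * np.cos(theta) + r3 * np.sin(theta))

print([phi(v) for v in (0.0, 10.0, 20.0)])  # strictly increasing in v
```
The printed hit rates increase strictly with $v$, as claimed. |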
Prove that set is arcwise-connected. Function $\mathbb{R}^3 \to \mathbb{R}^2$ with differential of rank $2$. | Consider a straight line $\gamma\colon\mathbb R\to\mathbb R^3$ such that $\gamma(0)=a$, $\gamma(1)=b$ where $a,b\in\mathbb R^3\setminus f^{-1}(0)$.
Let $T\subseteq\mathbb R$ be the set of $t$ such that there exists $r(t)>0$ such that a path exists from $a$ to all points $x\in\mathbb R^3\setminus f^{-1}(0)$ with $|x-\gamma(t)|<r(t)$. Show that $T$ is open and closed and contains $0$. (Closedness is the tricky part.) |
Prokhorov's Theorem-Prove if tight subsubsequence, then tight sequence. | It seems that there is a mistake in the statement: this should be
Let $P_n$ be a sequence of Borel probability measures on $\mathbb{R}$ such that each subsequence $\{P_{n_k}\}_k$ has a further subsequence that is tight.
Otherwise, if $P_{2n}=P$ and $\left(P_{2n+1}\right)_{n\geqslant 1}$ is not tight, we have a counter-example.
There is no need to use Prokhorov's theorem. We can argue as follows: if the sequence $(P_{n})_{n\geqslant 1}$ is not tight, then there is $\delta_0\gt 0$ such that for each $j$, we can find infinitely many integers $i$ such that $P_{i}\left(\mathbb R\setminus \left[-j,j\right]\right)\gt \delta_0$.
Otherwise, for all $\delta$, there exists $j$ such that the set of integers $i$ such that $P_{i}\left(\mathbb R\setminus \left[-j,j\right]\right)\gt \delta$ is finite. Using tightness of a finite family of probability measures, we would obtain tightness of $(P_{n})_{n\geqslant 1}$.
Therefore, we can construct inductively an increasing sequence of integers $(n_j)_{j\geqslant 1}$ such that $$\forall j\geqslant 1, \quad P_{n_j}\left(\mathbb R\setminus \left[-j,j\right]\right)\gt \delta_0.$$
The sequence $\left(P_{n_j}\right)_{j\geqslant 1}$ does not admit any tight subsequence. |
What is a Number Theorist | I think non-mathematicians are often more interested in the "why" and "how" than the "what", so I'm usually very light on mathematical details (unless they ask for them). I try to talk more about the motivations, history and fun facts of it, developing more or less detail depending on the listener's (apparent) interest. My usual answer is along these lines:
In a nutshell, number theory is the study of the integers. Although their definition is quite straightforward, there are still many unanswered questions about them. If they ask for an example, I usually talk a bit about prime numbers: I explain that we know stuff (that there are infinitely many of them, and we can even roughly count them) but that it's difficult to actually find them. Sometimes, I also mention that some seemingly simple problems, like Fermat's last theorem, took centuries to be solved.
One reason for this difficulty might be that there's a bigger picture that we still don't see. So being a number theorist is trying to get some perspective. Very often, this requires taking a detour, like creating and mastering sophisticated new objects which retrospectively shed some new light on old ones. If they ask for an example, I talk about how some new numbers (negative, complex...) were created to solve equations, and mention that number theorists have invented a whole lot of other "exotic" numbers for other purposes (finite fields, algebraic integers, quaternions, $p$-adic numbers...).
Now number theory is interesting because integers are at the heart of mathematics, so understanding them might lead to advances in mathematics as a whole (which might lead to advances in science?). Another reason is that we use it in cryptography because it provides problems that are difficult to solve. And on a more personal note, my motivation is also that I find it beautiful and quite fascinating. |
Proving an entire, complex function with below bounded modulus is constant. | The map $z\mapsto 1/f(z)$ is
well defined;
entire;
bounded on the complex plane.
By Liouville's theorem it is therefore constant, and hence $f$ is constant as well.
Here is a generalization. |
convergence of infinite sum with indicator function of another variable | First, $S_n\xrightarrow{p}S \Leftrightarrow S_n\xrightarrow{a.s.}S$. Since $\sum_{i\ge 1}X_i(\omega)$ converges (for almost all $\omega$), it suffices to show that $\sum_{i\ge 1}|Z_i-Z_{i+1}|<\infty$ a.s., where $Z_i:=1\{|Y_i|\le i\}$. But
$$
\mathsf{E}\sum_{i\ge 1}|Z_i-Z_{i+1}|\le 2\sum_{i\ge 1}\mathsf{P}(|Y_1|>i)< \infty
$$
because $\mathsf{E}|Y_1|<\infty$ so that $\sum_{i}|Z_i-Z_{i+1}|<\infty$ a.s. |
Which of these statements about biholomorphic functions $f \colon D(0, 1) → D(0, 1)$ is true? | It helps to keep in mind the classification of biholomorphic self-maps of $\mathbb D$ (also known as orientation-preserving isometries of the hyperbolic plane in the disk model) into
hyperbolic, which look like translation, and have no fixed points in $\mathbb D$
parabolic, which look like rotation about a boundary point (which is not a point in $\mathbb D$). They have no fixed points either.
elliptic, which look like rotation about a point in $\mathbb D$ (which is a fixed point).
(The author of the materials linked above is Colleen Robles, as far as I can tell.)
a) clearly false: a bi-(anything) function cannot be constant.
b), c), d) are disproved by an example of a hyperbolic or parabolic isometry |
Convergence of sequence of curves from convergence of tangent vectors | You can’t expect a lot...
Take the example of a family of circles in the real plane all passing through the origin and tangent to the $x$-axis, with $v_n=v \neq 0$ for all $n \in \mathbb N$. Also suppose that the radius of $c_n$ is equal to $n$.
You don’t have uniform convergence (nor even pointwise convergence, by the way). |
Bilinear form and congruence matrix | Given a class $[(V,b)]$ of isometric symmetric bilinear spaces, a class of congruent symmetric matrices can be assigned in this way.
Take a representative of $[(V,b)]$. Let it be $(V, b)$. Given a vector basis $\mathfrak{E}(V)$ of $V$ and an isometry of $(V, b)$ onto $K^n$, $b$ is represented by a symmetric matrix $B$. Choosing any other vector basis and/or any other isometry of $(V,b)$ onto $K^n$, the symmetric matrix representing $b$ is congruent to $B$. Moreover the converse is also true, that is, any matrix congruent to $B$ represents $b$ under some isometry and vector basis. Let's call $[B]$ the class of matrices congruent to $B$.
Then from $(V,b)$ we have arrived at $[B]$. If you choose any other element of $[(V,b)]$ we would arrive at the same $[B]$.
The map $[(V,b)]\mapsto [B]$ is bijective. Indeed, it is injective: if $(W, c)\notin[(V,b)]$, then whatever basis is chosen in $W$ and whatever isometry of $(W,c)$ onto $K^n$ is chosen, the symmetric matrix representing $c$ is never a matrix $C\in[B]$. And of course it is surjective.
Alternatively, if you want, you could also say that there exist symmetric matrices $B$ such that $(V,b)\cong\langle B\rangle$; then the wanted bijection is $[\langle B\rangle] \mapsto[B]$ (where $\langle B\rangle=(K^n,b_B)$ and $b_B(x,y)=x^tBy$). |
How many ways of combining 4 fruits, repeating at most 1 twice? | We can first choose 4 distinct fruits from the 8. This gives us $8\choose4$ possibilities.
To allow for double fruit types, we can choose 3 fruits, then just double any one of them. There are $3{8\choose3}$ possibilities here.
Add them up
$${8\choose4}+3{8\choose3}$$
Edit: I'm sorry, but the above is wrong, or rather incomplete. We still need to include the case when we have two pairs of fruit. That is, choosing 2 fruits and doubling both up.
$${8\choose4}+3{8\choose3}+{8\choose2}=266$$ |
Regular sum of hypercube regions' volumes | The sum in $(1)$ is equal to the coefficient of $x^N$ in
$$\Big(\sum_{n=1}^{\infty}\frac{x^n}{n+1}\Big)^k=\Big(\frac{-x-\ln(1-x)}{x}\Big)^k.$$
This alone can already be used for computations. A closer look at
$$\Big(\frac{-x-\ln(1-x)}{x^2}\Big)^k=\sum_{n=0}^{\infty}a_{n,k}x^n$$
(the sum in $(1)$ is thus $a_{N-k,k}$) reveals a better-to-use recurrence
$$a_{n,k}=\frac{k}{n+2k}\sum_{m=0}^{n}a_{m,k-1}.\qquad(k>0)$$
This can also be used for estimates and asymptotic analysis (if needed).
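For instance, a minimal SymPy sketch (my own; the values of $N$ and $k$ are small and made up) that checks the recurrence against a direct series expansion:
```python
import sympy as sp
from functools import lru_cache

x = sp.symbols('x')
N, k = 8, 3  # small made-up sizes

# direct route: coefficient of x^N in ((-x - ln(1-x))/x)^k
f = ((-x - sp.log(1 - x)) / x)**k
direct = sp.series(f, x, 0, N + 1).removeO().coeff(x, N)

@lru_cache(maxsize=None)
def a(n, j):
    # a_{n,j} from the recurrence; a_{n,0} is 1 for n = 0 and 0 otherwise
    if j == 0:
        return sp.Integer(1 if n == 0 else 0)
    return sp.Rational(j, n + 2 * j) * sum(a(m, j - 1) for m in range(n + 1))

print(direct, a(N - k, k))  # both print the same rational number
```
Both routes give $a_{N-k,k}$, the sum in $(1)$. |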
Recursive fibonacci algorithm correctnes? [proof by induction] | There’s not actually very much to do, since the algorithm so closely follows the recursive definition of the Fibonacci numbers.
Clearly $\operatorname{Fibonacci}(0)=0=F_0$ and $\operatorname{Fibonacci}(1)=1=F_1$; that gets the induction off the ground. Now suppose that $m>1$, and your algorithm returns the correct value for all non-negative integers $n<m$. Then on the input $m$ it returns $$\operatorname{Fibonacci}(m-1)+\operatorname{Fibonacci}(m-2)\;,$$ which by the induction hypothesis is $F_{m-1}+F_{m-2}=F_m$, so it returns the correct value for all non-negative integers $n\le m$.
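In code, the algorithm being analyzed is presumably the direct recursion; here is a minimal Python transcription of it (mine, since the question's pseudocode isn't reproduced here):
```python
def fibonacci(n: int) -> int:
    # base cases: anchor the induction
    if n == 0:
        return 0
    if n == 1:
        return 1
    # inductive step: correctness for n-1 and n-2 gives correctness for n
    return fibonacci(n - 1) + fibonacci(n - 2)

print([fibonacci(n) for n in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```
The two comments mirror the two parts of the induction above. |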
How to show equinumerosity of the powerset of $A$ and the set of functions from $A$ to $\{0,1\}$ without cardinal arithmetic? | For each subset $S$, define the characteristic function $\chi_S\colon A\to\{0,1\}$ by
$$\chi_S(a) = \left\{\begin{array}{ll}
1&\text{if }a\in S,\\
0&\text{if }a\notin S.
\end{array}\right.$$
The map $S\mapsto \chi_S$ is one-to-one: if $S\neq T$, then there exists $a\in S\triangle T$; hence $\chi_S(a)\neq \chi_T(a)$.
The map is onto: given $f\colon A\to\{0,1\}$, let $S=\{a\in A\mid f(a)=1\}$. Then $\chi_S = f$.
This gives the desired bijection.
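For a finite $A$ the bijection can be checked concretely (a small Python sketch of my own, on a made-up three-element set):
```python
from itertools import combinations, product

A = ['a', 'b', 'c']  # hypothetical finite example

def chi(S):
    # characteristic function of S, encoded as a 0/1 tuple indexed by A
    return tuple(1 if a in S else 0 for a in A)

subsets = [set(c) for r in range(len(A) + 1) for c in combinations(A, r)]
images = {chi(S) for S in subsets}

assert len(images) == len(subsets)                    # one-to-one
assert images == set(product([0, 1], repeat=len(A)))  # onto
```
Both assertions pass, illustrating the bijection on this small example. |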
Dual Space Annihilator in C[0,1] | A function is affine if and only if its second derivative vanishes.
We only have continuous functions here, but this idea can still be made to work if we resort to a discrete second derivative.
For $a\in[0,1]$, let $\delta_a:V\to\mathbb R$ be the evaluation functional at $a$, $\delta_a(f)=f(a)$.
(This is continuous in the usual topology of $V$ if you are interested in the topological dual. Here it doesn't really matter if it is the algebraic or the topological one.)
We will only use sums of functionals like this.
For $(a,b)\in[0,1]^2$, denote $f_{a,b}=\delta_a-2\delta_{(a+b)/2}+\delta_b$.
Now let
$$
F=\{f_{a,b};(a,b)\in[0,1]^2\}.
$$
Claim:
For $y\in V$ the following are equivalent:
$y\in U$
$f(y)=0$ for all $f\in F$.
Proof:
If $y\in U$, it is a simple calculation to observe that $f(y)=0$ for all $f\in F$.
Proving the other direction is the harder part.
Suppose $y\in V$ is annihilated by all $f\in F$.
There is an element $z\in U$ so that $y(0)=z(0)$ and $y(1)=z(1)$.
(This $z$ is actually unique.)
Let $w=y-z$.
We know that $w\in V$ and we will show that in fact $w=0$; from this it will follow that $y=z\in U$.
By construction $w(0)=0$ and $w(1)=0$.
We also know that $f_{0,1}(w)=w(0)-2w(1/2)+w(1)=0$, so $w(1/2)=0$.
Now we can use $w(0)=w(1/2)=0$ and $f_{0,1/2}(w)=0$ to get $w(1/4)=0$ and similarly $w(3/4)=0$.
If $w$ vanishes at two points, it has to vanish in their midpoint as well.
Continuing inductively, we find that for any $n>1$ and $0<m<2^n$ our function satisfies $w(2^{-n}m)=0$.
But points like this are dense in $[0,1]$ and $w$ is continuous, so in fact $w(x)=0$ for all $x\in[0,1]$.
$\square$ |
"Reverse" Chebyshev Inequality that gives lower bound of being far from mean | To answer the first question: no there isn't. Consider a random variable $X$ taking values $\pm n$ each with probability $\frac{1}{2n^2}$ and $0$ otherwise. This has $\mathbb E(X)=0$ and $\mathbb E(X^2)=1$, but for fixed $k$ the probability $\mathbb P(|X|>k)=\frac 1{n^2}$ can be made arbitrarily small by choosing $n>k$ appropriately.
However, in this example the third moment $\mathbb E(X^3)=n$ goes to infinity. It turns out this is a necessary feature of such an example: you can get a lower bound for the tail probability in terms of the mean and second and third moments. See this paper by Rohatgi and Székely. This will give something nontrivial for e.g. binomial distributions. |
Parabola and Differentiation | Let $P$ have coordinates $(t,t^2)$. The gradient of $OP$ is $t$, so the equation of the perpendicular bisector of $OP$ is $$y-\frac{t^2}{2}=-\frac 1t(x-\frac t2)$$
Therefore $Q$ has $y$ coordinate $$\frac{t^2}{2}+\frac 12\rightarrow\frac 12$$ as $t\rightarrow0$ |
Finding probability $P({X}^{2}<0)$. | I solve the first part but the second part, I'm little bit confused
You are perfectly right.
As $X^2\geq 0$ for all $X$, the requested probability is zero.
Perhaps there is a typo... you can calculate $P(X^2<\theta)=\theta^{-\frac{1}{2}}$
(this makes sense and the question looks similar to the one you posted) |
interval of convergence for series | Because of the estimate using AM-GM$$\left|\frac{x}{n(1+x^2n)}\right|=\frac1{n\left(\frac1{|x|}+|x|n\right)}\le\frac1{n\cdot2\sqrt{n}}=\frac1{2n^{3/2}},$$ which is valid even in the case $x=0,$ your series converges uniformly on the whole real line. |
given that $ker[T]=ker[T^2]$ prove that $ker[T]\cap im[T]=\{{0}\}$ | Suppose $\;x\in\ker T\cap\text{Im}\,T\;$ , then $\;Tx=0\;$ and also there exists a vector $\;v\;$ such that $\;x=Tv\;$ , but then
$$0=Tx=T(Tv)=T^2v\implies v\in\ker T^2\stackrel{\text{given!}}=\ker T\implies0=Tv=x$$
and we're done |
Calculating Fourier Transform of Sech(x) | $$\mathcal{F}[sech(x)](\xi)=2\int_{-\infty}^{\infty}\frac{e^{2\pi ix\xi}}{e^{2x}+1}e^xdx\overbrace{=}^{e^{2x}=z}\int_{0}^{\infty}\frac{z^{\pi i\xi+\frac{1}{2}-1}}{z+1}dz$$
$$\mathcal{F}[sech(x)](\xi)=\mathfrak{B}\left(\pi i\xi+\frac{1}{2},1-\pi i\xi-\frac{1}{2}\right)=\pi csc\left(\pi\left(\pi i\xi+\frac{1}{2}\right)\right)=\pi sech(\pi^2 \xi)$$ |
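One can sanity-check the result numerically (a sketch of my own, assuming the convention $\mathcal F[f](\xi)=\int f(x)\,e^{2\pi i x\xi}\,dx$ used above):
```python
import numpy as np
from scipy.integrate import quad

sech = lambda x: 1.0 / np.cosh(x)
for xi in (0.0, 0.1, 0.25):
    # the integrand is even in x, so only the cosine part survives
    val, _ = quad(lambda x: sech(x) * np.cos(2 * np.pi * x * xi), -50, 50)
    print(val, np.pi * sech(np.pi**2 * xi))
```
The two printed columns agree to quadrature accuracy. |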
Adaptation to Banach–Mazur theorem | To begin with, the theorem should be read as follows:
Theorem. For every normed space $X$ there is an isometric embedding into $C(K)$ for some compact space $K$.
Proof. Consider $K:=\operatorname{Ball}_{X^*}(0,1)$, the closed unit ball of the dual space, with the weak-$^*$ topology. By the Banach-Alaoglu theorem it is compact. By definition of the weak-$^*$ topology the map
$$
J(x): K\to\mathbb{C}:f\mapsto f(x)
$$
is continuous for each $x\in X$. So we have a well-defined map
$$
J:X\to C(K):x\mapsto J(x)
$$
Note that
$$
\Vert J(x)\Vert=\sup\{|J(x)(f)|:f\in K\}=\sup\{|f(x)|:f\in \operatorname{Ball}_{X^*}(0,1)\}=\Vert x\Vert
$$
for each $x\in X$. In the last step we use a corollary of the Hahn-Banach theorem. Thus $J$ is an isometric embedding.
Note that the corollary of the Hahn-Banach theorem does not require completeness, so its usage was valid in the proof above. |
Sum $ \sum\limits_{k=1}^{\infty} \frac{k^2}{2^k}$ | Observe that by just differentiating
$$
1+x+x^2+...+x^n=\frac{1-x^{n+1}}{1-x}, \quad |x|<1, \tag2
$$ with respect to $x$ and by multiplying by $x$, we get the identity
$$
\sum_{k=1}^{n}kx^k=\frac{x\left(1-x^{n+1}\right)}{(1-x)^2}-\frac{(n+1)x^{n+1}}{1-x} , \quad |x|<1,\tag3$$ differentiating once more and multiplying by $x$ gives, as $n \to \infty$, using $|x|<1$:
$$
\sum_{k=1}^{\infty}k^2x^k=\frac{x (1+x)}{(1-x)^3} , \quad |x|<1,\tag4$$
then put $x:=\dfrac12$ to obtain
$$
\sum_{k=1}^{n}k^2/2^k \longrightarrow 6, \quad \text{as} \, n \to \infty.
$$
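As a quick numeric sanity check of $(4)$ at $x=\tfrac12$ (a sketch, not part of the derivation):
```python
x = 0.5
partial = sum(k**2 * x**k for k in range(1, 200))
closed = x * (1 + x) / (1 - x)**3
print(partial, closed)  # both print 6.0 (up to floating point)
```
Both values come out to $6$, as expected. |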
Verify question about complements | Well, if I understand you, the (universal) complement of any set is really just, assuming that $U$ is the universal set:
$$A^C=U\setminus A.$$
Your question, however, asks for the set definition of $A^C$. Then, simply put, the definition of $A^C$ would be:
$$A^C = \left\{x\in\mathbb{R}\mid x^3\geq x^2\right\}.$$ |
The Dihedral Constant Center of a Tetrahedron | (My labeling doesn't match OP's.)
Consider tetrahedron $OABC$ with faces (and face-areas) $W$, $X$, $Y$, $Z$ opposite vertices $O$, $A$, $B$, $C$. Define
$$a := |OA| \quad b := |OB| \quad c := |OC| \quad d := |BC| \quad e := |CA| \quad f := |AB|$$
and let $V$ be the volume. Also, define dihedral angles $A$, $B$, $C$, $D$, $E$, $F$ along the edges with corresponding lower-case labels. (There should be no confusion with using "$A$" for both a vertex and an angle.) First, the "dihedral constant" is given by the formula I posted in a comment:
$$\delta(OABC) = \frac{1}{9V^2}\left(\begin{array}{c}
-W^4-X^4-Y^4-Z^4 +2W^2X^2+2W^2Y^2+2W^2Z^2 \\
+2Y^2Z^2+2Z^2X^2+2X^2Y^2
\end{array}\right) $$
Consider a point defined by the coordinate-vector equation
$$P := p\,A + q\,B + r\,C + s\,O \qquad\text{where}\quad p + q + r + s = 1$$
We'll see that $p$, $q$, $r$, $s$ become closely associated with respective opposite faces $X$, $Y$, $Z$, $W$.
It's possible to write the dihedral constants of the tetrahedrons determined by $P$ in terms of the elements of the original tetrahedron. Pardon a bit more notation, but ... To reduce some visual clutter in the formulas, we define $m^2 = \delta(OABC)$, as well as $W_s :=W/s$, $X_p := X/p$, $Y_q := Y/q$, $Z_r:= Z/r$ and
$$t_A := Y_q Z_r \cos A \qquad t_B := Z_r X_p \cos B \qquad t_C := X_p Y_q \cos C$$
$$t_D := W_s X_p \cos D \qquad t_E := W_s Y_q \cos E \qquad t_F := W_s Z_r \cos F$$
With these, we have ...
$$\begin{align}
\delta(PABC) = &- \left(\;p s\,a^2 + q s\,b^2 + r s\,c^2 + q r\,d^2 + p r\,e^2 + p q\,f^2 \;\right) \\[4pt]
&+ p\,d^2 + q\,e^2 + r\,f^2 + s\,m^2\\[4pt]
&+ \frac{8 W^2\,p q r}{9V^2} \left(\;-W_s^2 + (\;t_A + t_B + t_C\;) - (\;t_D + t_E + t_F\;)\;\right)
\end{align}$$
To make some sense of the alphabet soup, first observe that tetrahedrons $OABC$ and $PABC$ have face $W$ in common. Now,
In the first grouping of terms, $a$ is the edge between faces $Y$ and $Z$, hence it's opposite the edge ($d$) between $W$ and $X$. The constants associated with the opposite-edge faces are $s$ and $p$, which we see in the term $ps\,a^2$. Likewise, $b$ is opposite the edge between faces $W$ and $Y$, which are associated with $s$ and $q$, and we have the term $qs\,b^2$. Etc. This grouping is symmetric in the elements of the tetrahedron; we'll see it again.
In the second grouping, edges $d$, $e$, $f$ surround face $W$. Constants $p$, $q$, $r$ are associated with the respective faces ($X$, $Y$, $Z$) adjacent to $W$ across those edges. The left-over constant, $s$, goes with the dihedral constant of the original tetrahedron.
In the third grouping, the multiplied constant features the common face $W$ and the product ($pqr$) of constants except the one associated with that face. For the $t$-terms, observe that $D$, $E$, $F$ are the dihedral angles along the edges surrounding $W$, while $A$, $B$, $C$ are the angles surrounding opposite vertex $O$.
The reader can double-check those rules (and my typing) by comparing the expressions for the other constants:
$$\begin{align}
\delta(OPBC) = &-\left(\;p s\,a^2 + q s\,b^2 + r s\,c^2 + q r\,d^2 + p r\,e^2 + p q\,f^2 \;\right) \\[4pt]
&+ p\,m^2 + q\,c^2 + r\,b^2 + s\,d^2 \\[4pt]
&+ \frac{8 X^2\,q r s}{9V^2} \left(\;-X_p^2 + (\;t_A + t_E + t_F\;) - (\;t_D + t_B + t_C \;)\;\right) \\[8pt]
\delta(OAPC) = &-\left(\;p s\,a^2 + q s\,b^2 + r s\,c^2 + q r\,d^2 + p r\,e^2 + p q\,f^2 \;\right) \\[4pt]
&+ p\,c^2 + q\,m^2 + r\,a^2 + s\,e^2 \\[4pt]
&+ \frac{8 Y^2\,p r s}{9V^2} \left(\;-Y_q^2 + (\; t_D + t_B + t_F \;) - (\; t_A + t_E + t_C \;) \;\right) \\[8pt]
\delta(OABP) = &-\left(\;p s\,a^2 + q s\,b^2 + r s\,c^2 + q r\,d^2 + p r\,e^2 + p q\,f^2 \;\right) \\[4pt]
&+ p\,b^2 + q\,a^2 + r\,m^2 + s\,f^2 \\[4pt]
&+ \frac{8 Z^2\,p q s}{9V^2} \left(\;-Z_r^2 + (\;t_D + t_E + t_C \;) - (\; t_A + t_B + t_F \;)\;\right)
\end{align}$$
(As promised, the symmetric first grouping appears in all the formulas.)
In any case ... The search for a Dihedral Constant Point reduces to solving for $p$, $q$, $r$ (and $s=1-p-q-r$) such that
$$\delta(PABC) = \delta(OPBC) = \delta(OAPC) = \delta(OABP) \tag{$\star$}$$
This turns out to be no easy feat. Even in the case of OP's right-corner tetrahedron $O=(0,0,0)$, $A=(\sqrt{2},0,0)$, $B=(0,\sqrt{3},0)$, $C=(0,0,\sqrt{6})$, eliminating, say, $q$ and $r$ (and $s$), from system $(\star)$ leaves an irreducible degree-$27$(!) polynomial in $p$. (I'm ignoring some extraneous factors that Mathematica is showing me.) Surprisingly, the polynomial has a single real root corresponding to OP's solution. It seems unlikely that there's a closed form for this value.
I won't carry out the full analysis here, but I'll show how the formulas for the Dihedral Constant simplify in the case of a right corner tetrahedron $OABC$ with hypotenuse-face $W$. From the right triangular faces, we have
$$d^2 = b^2 + c^2 \qquad e^2 = c^2 + a^2 \qquad f^2 = a^2 + b^2 \qquad
X = \frac12 b c \qquad Y = \frac12 ca \qquad Z = \frac12 ab$$
Also,
$$\cos A = \cos B = \cos C = 0 \quad\to\quad t_A = t_B = t_C = 0$$
$$\cos D = \frac{X}{W} \quad \cos E = \frac{Y}{W} \quad \cos F = \frac{Z}{W} \quad\to\quad t_D = \frac{1}{ps}X^2
\quad t_E = \frac{1}{qs}Y^2
\quad t_F = \frac{1}{rs}Z^2$$
$$W^2 = X^2 + Y^2 + Z^2 \qquad V = \frac16 a b c \qquad m^2 = \delta(OABC) = a^2 + b^2 + c^2$$
Making appropriate substitutions and manipulations, we achieve
$$\begin{align}
\delta(PABC) = &\phantom{+}k^2 - 2 \left( p\,a^2 + q\,b^2 + r\,c^2 \right) \\[4pt]
&- \frac{2 \left( a^2 b^2 + b^2 c^2 + c^2 a^2 \right)}{s^2\,a^2b^2c^2} \left(\;
q r\,b^2 c^2 ( p + s )
+ p r\,c^2 a^2 ( q + s )
+ p q\,a^2 b^2 ( r + s )
\;\right)\\[8pt]
\delta(OPBC) = &\phantom{+}k^2 - a^2 - \frac{2 q r \,b^2 c^2(p+s)}{p^2\,a^2}\\[8pt]
\delta(OAPC) = &\phantom{+}k^2 - b^2 - \frac{2 p r \,a^2 c^2(q+s)}{q^2\,b^2} \\[8pt]
\delta(OABP) = &\phantom{+}k^2 - c^2 - \frac{2 p q \,a^2 b^2(r+s)}{r^2\,c^2}
\end{align}$$
where $k^2 := a^2 ( 1 + p^2 ) + b^2 ( 1 + q^2 ) + c^2 ( 1 + r^2 )$ is a common summand that cancels (so, can be ignored) in $(\star)$. The reader (in particular, OP, who is fluent in Mathematica) is invited to verify that, when $a=\sqrt{2}$, $b=\sqrt{3}$, $c = \sqrt{6}$, the system has solution
$$(p,q,r) = (0.20686\ldots, 0.16353\ldots, 0.13263\ldots)$$
which corresponds to Dihedral Constant Point $(pa, qb, rc)$ as given by OP. I'm also leaving consideration of the non-right tetrahedral examples to the reader.
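For readers without Mathematica, here is a numerical sketch of my own (SciPy) that encodes the four simplified expressions above, drops the common summand $k^2$, and solves system $(\star)$; if the formulas are transcribed correctly it should recover OP's point:
```python
import numpy as np
from scipy.optimize import fsolve

a2, b2, c2 = 2.0, 3.0, 6.0  # a^2, b^2, c^2 for the right-corner example

def deltas(v):
    p, q, r = v
    s = 1.0 - p - q - r
    d_PABC = (-2 * (p * a2 + q * b2 + r * c2)
              - 2 * (a2 * b2 + b2 * c2 + c2 * a2) / (s**2 * a2 * b2 * c2)
              * (q * r * b2 * c2 * (p + s)
                 + p * r * c2 * a2 * (q + s)
                 + p * q * a2 * b2 * (r + s)))
    d_OPBC = -a2 - 2 * q * r * b2 * c2 * (p + s) / (p**2 * a2)
    d_OAPC = -b2 - 2 * p * r * a2 * c2 * (q + s) / (q**2 * b2)
    d_OABP = -c2 - 2 * p * q * a2 * b2 * (r + s) / (r**2 * c2)
    return d_PABC, d_OPBC, d_OAPC, d_OABP

def equations(v):
    d0, d1, d2, d3 = deltas(v)
    return [d0 - d1, d1 - d2, d2 - d3]

print(fsolve(equations, [0.2, 0.16, 0.13]))
# expected: (p, q, r) = (0.20686..., 0.16353..., 0.13263...)
```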
Addendum. It may be worth noting that I derived the face-based formula for the Dihedral Constant by invoking some basic hedronometric relations. Specifically,
$$Y^2 + Z^2 - 2 Y Z \cos A \;=\; H^2 \;=\; W^2 + X^2 - 2 W X \cos D$$
$$[H,Y,Z] = 4 Y^2 Z^2 \sin^2 A = 9 V^2 a^2 \qquad [H,W,X] = 4 W^2 X^2 \sin^2 D = 9 V^2 d^2$$
Here, $H$ is what I call a pseudoface. It's essentially the quadrilateral shadow of the tetrahedron cast on a plane parallel to a pair of opposite edges ($a$ and $d$ in the case of pseudoface $H$), but one can define it formally via the cosine relation. The sine relation involves the "Heronic product":
$$[x,y,z]:=(x+y+z)(-x+y+z)(x-y+z)(x+y-z)$$
(so named because of its use in Heron's formula for the area of a triangle). The relations imply, for instance, that
$$a \cot A = a\;\frac{\cos A}{\sin A} = \frac{\sqrt{[H,Y,Z]}}{3V}\;\frac{\left(Y^2+Z^2-H^2\right)/(2YZ)}{\sqrt{[H,Y,Z]}/(2YZ)} = \frac{Y^2+Z^2-H^2}{3V}$$
Thus,
$$\begin{align}
a \otimes d &:= \frac{1}{9V^2}\left(\;[H,Y,Z] + [H,W,X] + 2\,\left(Y^2+Z^2-H^2\right)\left(W^2+X^2-H^2\right)\;\right)
\end{align}$$
which expands into the formula given above. $\square$ |
Chebyshev expansion of $\log(1 + x)$ | Note that for $t\in (0,\pi)$,
$$
\begin{align}
\log(1+\cos(t))
&=\log\left(\frac{e^{it}+2+e^{-it}}{2}\right)\\
&=\log\left(1+e^{it}\right)+\log\left(1+e^{-it}\right)-\log(2)\\
&=\sum_{n=1}^\infty(-1)^{n+1}\frac{e^{int}}{n}+\sum_{n=1}^\infty(-1)^{n+1}\frac{e^{-int}}{n}-\log(2)\\
&=2\sum_{n=1}^\infty(-1)^{n+1}\frac{\cos(nt)}{n}-\log(2).
\end{align}
$$
Hence, by letting $x=\cos(t)\in (-1,1)$, we obtain
$$\log(1+x)=\sum_{n=1}^\infty\frac{2(-1)^{n+1}}{n}T_n(x)-\log(2)T_0(x)$$
which implies that
$$a_0=-\log(2)\quad\mbox{and}\quad
a_n=\frac{2(-1)^{n+1}}{n}\mbox{ for $n>0$}$$
where $\ln(1+x)=\sum_{n\geq 0}a_n T_n(x)$,
which is a bit different from what we read on the wikipage.
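As a numerical cross-check (my own sketch, using Gauss-Chebyshev quadrature for the integrals that appear in the P.S. below):
```python
import numpy as np

def a_coef(n, nodes=4000):
    t = np.pi * (np.arange(nodes) + 0.5) / nodes   # Chebyshev angles
    vals = np.log(1 + np.cos(t)) * np.cos(n * t)
    integral = np.pi / nodes * vals.sum()  # ~ int log(1+x) T_n(x)/sqrt(1-x^2) dx
    return integral / np.pi if n == 0 else 2 * integral / np.pi

print(a_coef(0), -np.log(2))        # a_0 = -log 2
print(a_coef(3), 2 * (-1)**4 / 3)   # a_3 = 2(-1)^{3+1}/3
```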
P.S. Going back to the wikipage:
\begin{align*}
\int_{-1}^1\frac{\log(1+x)T_m(x)}{\sqrt{1-x^2}}\,dx&=\int_{\pi}^0\frac{\log(1+\cos(t))\cos(mt)}{|\sin(t)|}\,(-\sin(t)dt)\\
&=\int_0^{\pi}\left(2\sum_{n=1}^\infty(-1)^{n+1}\frac{\cos(nt)}{n}-\log(2)\right)\cos(mt) dt\\
&=\begin{cases}
-\pi\log(2) &\mbox{if $m=0$,}\\
\dfrac{(-1)^{m+1}\pi}{m}&\mbox{if $m>0$.}
\end{cases}
\end{align*}
On the other hand if $\ln(1+x)=\sum_{n\geq 0}a_n T_n(x)$,
\begin{align*}
\int_{-1}^1\frac{\log(1+x)T_m(x)}{\sqrt{1-x^2}}\,dx&=a_m\int_{-1}^1\frac{T_m(x)^2}{\sqrt{1-x^2}}\,dx
=\begin{cases}
\pi a_0 &\mbox{if $m=0$,}\\
\dfrac{\pi}{2}a_m&\mbox{if $m>0$.}
\end{cases}
\end{align*} |
For a positive definite symmetric matrix, the orthogonal diagonalization is a SVD of A. | It should be $\Sigma^\top \Sigma = D^2$, since $A^\top A = A^2 = P D^2 P^\top$.
I think there is an even more direct approach: just say $A = U \Sigma V^\top$ with $U=V=P$ and $\Sigma=D$. Since $A$ is positive definite (note that you have not used this yet in your argument) the diagonal entries of $D$ are positive, so this does not contradict the constraint that $\Sigma$ has nonnegative diagonal entries. (This argument would fail if $D$ had a negative entry.) |
Newton cooling law; mathematical modelling. | Well, using the exact value of $k=\frac{1}{10}\ln2$, we get $$t=\ln 16 \div (\frac{1}{10}\ln 2) = 40 .$$ |
What is differential operator and what is this notation in my book? Linear algebra | The author denotes by $C'[a,b]$ and $C[a,b]$ the vector spaces for the linear transformation $D_x$, that is
$$D_x: C'[a,b]\to C[a,b]$$
notably $D_x$ is the derivative operator; for example, for $x^2 \in C'[a,b]$
$$D_x(x^2)=(x^2)'=2x\in C[a,b]$$
and
$C'[a,b]$ is the vector (sub)space of the differentiable functions on $[a,b]$
$C[a,b]$ is the vector (sub)space of the continuous functions on $[a,b]$ |
Number of Triangles in a circle | $\binom{n}{3}$ is a perfectly reasonable answer, since any three points on the circle define a triangle.
I think the circle construction was used only to ensure that no three points lie on a line (if there were such points, you would include degenerate triangles in the $\binom{n}{3}$ computation). |
How to determine if a matrix equation are independent? | Use the command Reduced row echelon form in order to identify the pivot rows which correspond to the independent equations. |
Tips for writing proofs | That is a mouthful of questions. There are no definitive answers, but here are some ideas:
Being clear and succinct:
First, when you write your proof, make sure you define your terms, and your symbols. Don't make your readers scramble to look up terms. And if someone doesn't know what $\phi (a)$ is, how is he supposed to find out?
Of course I'm not suggesting you define things like "a group of order 15". This really is standard. If your reader doesn't know what a group is, he should read elsewhere. But I've had recent encounters with words like "unicity" (does he mean uniqueness?). What does the symbol dL/dT.L mean? Maybe it's standard somewhere, but I've never seen it. If there is any doubt, define what you mean.
Next, settle on a clear scheme for your notation. If you have two vector spaces V and W, maybe your V vectors should be {$v_1, v_2, ... v_n$} and use w's for the W space. Why use a's and b's? Yet some people do. Worse, they change the notation in midstream -- a guaranteed way to confuse people. Then there are the writers who reuse symbols and assign them different meanings later in the proof.
I would add that personally I am not a fan of Greek letters, if only because they are a pain to type; but they can be hard to read also. However, if you have an "A" and "a" that are related and want to introduce an "$\alpha$" that is also connected, that makes sense.
Once you've figured out your proof, think of writing it as if you were trying to explain it to a bright high school algebra student. Write down each step and at least on the first draft don't leave out any steps. Make sure you explain properly how you get from each step to the next.
As with defining your terms, this requires some judgement. If you write "x + 3 = 6 so that x = 3" you don't have to explain. If you write "a group of order 15 must have subgroups of order 5 and 3" whether you explain depends on who your audience will be. If it is a conference of group theorists, you can skip the explanation. If it is a 1st course in group theory, you probably should include it.
Begin by erring on the side of including more rather than less explanation. Note the word "begin". Clear, succinct proofs do not spontaneously spring into life. They are usually the result of several careful, attentive drafts. If I'm writing seriously I usually plan on 4 drafts. This is true even for material that has nothing to do with math, which people manage to misunderstand anyway.
Do not skimp on drafts, but put each one away for a while before starting on the next. Then reread what you did with a newly critical eye. What would your supposed high school student think?
Finally, if you have a lot of explanation that is impeding the flow of the proof, move some of it to footnotes. That way people can follow your argument without getting tangled up in the details. If they really want the details, they'll read the notes.
How to approach: Many books have been written on this endless subject. I can't write a book but here are some ideas:
A. Make sure you understand what is being asked. Do you understand the definitions? How are you going to prove something about independent vectors if you don't know what "independent" means?
B. Start with some simple examples. You have a problem in $R^n$? Can you solve it for $R^2$? Many such solutions do not really depend on the 2 and will generalize immediately. You have a problem about groups? Can you solve it for a cyclic group? For an Abelian group? For $S_3$?
Someone once accused me of thinking all matrices are diagonal. I don't really think that, but if I can solve it for a diagonal matrix, maybe I can solve it for a diagonalizable matrix.
Once you see how things are working in a simple case, you may get some insight into what is going on. Or maybe you can piggyback on your special case -- show that the difference between that and the general case doesn't affect things much.
And starting with a special case is time-honored. Many important papers have proved only a special case of what is really desired.
C. Develop a big bag of techniques. There are things that are used over and over, homomorphisms, isomorphisms, linear operators, basis, adjoint etc. etc. Start with the common techniques. Work a lot of problems involving these techniques. Someone said that if you have a hammer all problems look like a nail --well not everything can be solved with an isomorphism (I wish it could). How is that for a mixed metaphor? Avoid them in your papers.
The more problems you work, the more techniques you will know. You can never know too many.
There is no short cut to this. |
If the average of 2 successive years’ production 1/2($a_n + a_{n-1}$) is 2n + 5 and $a_0=3$, find $a_n$. | This is a non-homogeneous linear recurrence equation where the non-homogeneous part is a polynomial. There are general methods to solve such recurrences but your specific example can be simplified.
To solve the recurrence relation
$$ \begin{cases}
a_0 = 3 \\
a_n = 4n - 10 - a_{n-1}
\end{cases} $$
First notice that $a_1 = -9$. One can try to expand the recurrence one term further:
$$ a_{n+2} = 4(n+2) - 10 - a_{n+1} = 4(n+2) - 10 - (4(n+1) - 10 - a_{n}) $$
$$ a_{n+2} = 4n + 8 - 10 - 4n - 4 + 10 + a_{n} = a_{n} + 4 $$
Therefore we have
$$ a_{2k} = a_0 + 4k = 3 + 2(2k) $$
$$ a_{2k+1} = a_1 + 4k = -11 + 2(2k+1)$$
Combining them, we have $a_n = 2n - 4 + 7(-1)^n$.
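A quick sanity check of the closed form against the recurrence as stated (a sketch):
```python
a = [3]
for n in range(1, 10):
    a.append(4 * n - 10 - a[-1])

closed = [2 * n - 4 + 7 * (-1)**n for n in range(10)]
print(a == closed)  # True
```
The two lists coincide. |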
Minimum Distance between a Triangle and a Distance Field 3D | Unless I am misunderstanding, you are seeking the closest point in $T$ to $S$, where $S$ could be nonconvex (but is perhaps connected?).
If that interpretation is correct, then a key search phrase is collision detection.
In computer graphics, one often must compute the closest point between two polyhedral objects to anticipate collision, and there is a large literature on the topic. If you can
approximate your set $S$ by polyhedra, then you can apply collision detection techniques.
For example:
Ponamgi, Manocha, Lin.
"Incremental algorithms for collision detection between solid models."
(ACM link)
A software package named SWIFT++ is maintained at the Univ. North Carolina: SWIFT++ link :
SWIFT++ is a collision detection package capable of detecting intersection, performing tolerance verification, computing approximate and exact distance, or determining the contacts between pairs of objects in a scene composed of general rigid polyhedral models.
To illustrate the sophistication of collision-detection
state of the art, here is a snapshot from a video showing a computation of two colliding knots,
each composed of 37,000 triangles:
Video: Interactive Continuous Collision Detection for Non-Convex Polyhedra
The paper describing this work is here:
Zhang, Xinyu, Minkyoung Lee, and Young J. Kim. "Interactive continuous collision detection for non-convex polyhedra." The Visual Computer 22.9-11 (2006): 749-760.
(Springer link). |
Prove that $\lim f(x) =0$ and $\lim (f(2x)-f(x))/x =0$ imply $\lim f(x)/x =0$ | Using OP's second technique, we have $$f(x) = f(x/2^{n}) + x\sum_{k = 0}^{n - 1}\frac{\gamma(x/2^{k + 1})}{2^{k + 1}}$$ where $\gamma(x) \to 0, f(x) \to 0$ as $x \to 0$. Letting $n \to \infty$ we get $$\frac{f(x)}{x} = \sum_{k = 0}^{\infty}\frac{\gamma(x/2^{k + 1})}{2^{k + 1}}$$ Also given any $\epsilon > 0$ we have a $\delta > 0$ such that $|\gamma(x)|< \epsilon$ whenever $0 < |x| < \delta$. Hence $|\gamma(x/2^{k + 1})| < \epsilon$ and then we can see that the infinite series on the right of above equation is absolutely convergent as absolute value of each term is bounded by $\epsilon/2^{k + 1}$ and $\sum_{k = 0}^{\infty}\epsilon/2^{k + 1} = \epsilon$. It follows that $$\left|\frac{f(x)}{x}\right| \leq \epsilon$$ whenever $0 < |x| < \delta$ and hence $\lim\limits_{x \to 0}\dfrac{f(x)}{x} = 0$ No extra conditions on $f$ are needed to establish the result.
I must say that the OP had already gotten through the tricky part of the solution, and only slight manipulations involving infinite series and $\epsilon, \delta$ type arguments were needed. |
Proving the derivative of a certain point using the sequence definition | What you have done is fine, and you don't have to get into any business with $\epsilon$ because you already know that $\lim_{n \to \infty}a_n = -3$
Since $\lim_{n \to \infty}a_n$ exists and is finite, and clearly $\lim_{n \to \infty} 3 = 3$ so it also exists and is finite, you can conclude that
$$
\lim_{n \to \infty}(a_n - 3) = \left( \lim_{n \to \infty}a_n \right) - \left ( \lim_{n \to \infty} 3 \right) = -3-3 = -6
$$ |
Model space from Blaschke factor | I was dealing with the same stuff recently (a little more general). Let me just copy my lemma here. $\mathfrak I$ is the set of inner functions.
Lemma:
Let $(z_j)_{j\in\mathbb N}\subset\mathbb D$ be a Blaschke sequence such that $z_i\neq z_j$ for $i\neq j$ and let $h$ be the Blaschke product with respect to $(z_j)_{j\in\mathbb N}$. Then
$$
\mathcal H_h := (hH^2)^\perp = \overline{\operatorname{span}}\{k_{z_j} : j\in\mathbb N\}.
$$
Proof.
Let $f\in H^2$. Then $\langle hf,k_{z_j}\rangle = h(z_j)f(z_j) = 0$ for all $j\in\mathbb N$. Hence, $k_{z_j}\in\mathcal H_h$. Now, let $f\in\mathcal H_h$ such that $\langle f,k_{z_j}\rangle = 0$, i.e., $f(z_j) = 0$, for all $j\in\mathbb N$. Factorize $f$ as $f = uv$ with $u\in\mathfrak I$ and an outer function $v$. As $v$ has no zeros in $\mathbb D$, we have $u(z_j) = 0$ for all $j\in\mathbb N$. Hence, the Blaschke product $h$ must be a divisor of $u$, i.e., $u = hu_1$ with some $u_1\in\mathfrak I$. But this implies $f = h(u_1v)\in hH^2$ and thus $f=0$. |
Is $\mathbb{Q}$ isomorphic to $\mathbb{Z^2}$? | You have to be more specific about what you mean by "isomorphic". It depends on what category you are working in. For example, as sets $\mathbb{Z}^n \cong \mathbb{Q}$ for any natural number $n \geq 1$, since both sets are countable. On the other hand, it is certainly false that $\mathbb{Z}^2 \cong \mathbb{Q}$ as groups.
Perhaps more shockingly, if you've taken linear algebra, all finite dimensional vector spaces are isomorphic as sets! On the other hand, they are obviously not isomorphic as vector spaces. So obviously bijection is the wrong notion of isomorphism if you're studying linear algebra.
Here's the general idea. Depending on what you are interested in studying, you define a category of objects and morphisms in that category. Usually (but not always) the objects will be presented by sets with some structure and the morphisms will be functions that preserve the structure. So if you are studying groups, your objects will be groups and the morphisms will be group homomorphisms. If you are studying topological spaces the objects will be spaces and the morphisms will be continuous maps. Anyway, now that you've decided on a category to study, the isomorphisms will be the invertible morphisms in that category.
If you want to learn more about this structural perspective on mathematics, check out Lawvere and Schanuel's book Conceptual Mathematics: A First Introduction to Categories. It's very accessible for beginners and teaches you some very important tools and ideas for doing mathematics. |
Limit of a Sequence in $\mathbb{R}^2$ | You can look at $(x_n)$ and $(y_n)$ independently.
Then show that $(x_n)$ and $(y_n)$ are both convergent by showing that they are decreasing and bounded ($x_0 \geq x_n \geq 0$, $y_0 \geq y_n \geq 0$). |
Make two vectors mutually orthogonal while minimizing change to both vectors | We know that we can represent, for example, the vector $v$ as $v = v^{\perp u} + v^{\parallel u}$. Now, to make $v$ orthogonal to $u$, we write $v^{\perp u} = v - v^{\parallel u}$.
We can calculate $v^{\parallel u}$ as $v^{\parallel u} = \frac{\langle v, u \rangle}{\langle u, u \rangle}u$.
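In code (a NumPy sketch of my own; the vectors are made up):
```python
import numpy as np

u = np.array([1.0, 2.0, 0.0])
v = np.array([2.0, 1.0, 1.0])

v_par = (v @ u) / (u @ u) * u   # projection of v onto u
v_perp = v - v_par              # v made orthogonal to u
print(v_perp @ u)               # ~0, confirming orthogonality
```
Note this changes only $v$; splitting the correction symmetrically between $u$ and $v$ to minimize the total change is a further step beyond this sketch. |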
Proving $\|AB\| ≤ \|A\|\|B\|$ | Assuming that $\|\cdot\|$ is an induced norm,
$$ \|A B\| =\max_{\|v\|=1}\|A B v\| \leq \max_{\|v\|=1}\|A\|\,\|B v\| = \|A\|\cdot\max_{\|v\|=1}\|B v\| = \|A\|\cdot \|B\|,$$ where the inequality uses $\|Aw\| \leq \|A\|\,\|w\|$ with $w = Bv$. |
How to create line with 6th order spline? | A Bézier curve with six control points
is defined as
\begin{align}
\mathbf{B_6}(t)
&=
\sum _{i=0}^{6}
{6 \choose i}(1-t)^{6-i}t^{i}\,P_i
\tag{1}\label{1}
,
\end{align}
where $P_i$, $i=0,\dots,6$ are the control points of the spline.
Because of the properties of the convex hull of
the Bezier control points,
to get a visual appearance of the straight line
between the points $A,B$,
one can just set
$P_0=A$, $P_6=B$, and place the other five control
points somewhere on the segment $AB$,
so your choice of
$P_0,P_1,P_2=A$,
$P_4,P_5,P_6=B$,
$P_3=\tfrac12\,(A+B)$ will do for that purpose.
However, to obtain a genuinely linear expression from \eqref{1},
we need to
expand \eqref{1}, in order to get
a representation as
a polynomial of degree $6$
in the standard form
\begin{align}
a_6t^6+a_5t^5+a_4t^4+a_3t^3+a_2t^2+a_1t+a_0
\tag{2}\label{2}
,
\end{align}
where
\begin{align}
a_0&=P_0
,\\
a_1&= 6\,(P_1-P_0)
,\\
a_2&=15\,(P_0-2\,P_1+P_2)
,\\
a_3&=20\,(-P_0+3\,P_1-3\,P_2+P_3)
,\\
a_4&=15\,(P_0- 4 P_1 + 6 P_2 - 4 P_3+ P_4)
,\\
a_5&= 6\,(-P_0+5\,P_1-10\,P_2+10\,P_3-5\,P_4+P_5)
,\\
a_6&=P_0-6\,P_1+15\,P_2-20\,P_3+15\,P_4-6\,P_5+P_6
.
\end{align}
To get a set of control points $P_i$
such that expression \eqref{2} becomes linear in parameter $t$,
we need to make all coefficients $a_2,\dots,a_6$ zero.
The solution then is just
\begin{align}
P_i&=\tfrac16\,(A\cdot(6-i)+B\cdot i)
,\quad i=0,\dots,6
,
\end{align}
that is, all control points are
evenly distributed along the segment $AB$.
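A short NumPy check of this claim (my own sketch; the endpoints $A,B$ are made up):
```python
import numpy as np
from math import comb

A, B = np.array([0.0, 0.0]), np.array([3.0, 1.5])
P = [((6 - i) * A + i * B) / 6 for i in range(7)]  # evenly spaced control points

def bezier6(t):
    # the degree-6 Bernstein sum (1)
    return sum(comb(6, i) * (1 - t)**(6 - i) * t**i * P[i] for i in range(7))

for t in (0.0, 0.3, 0.7, 1.0):
    print(np.allclose(bezier6(t), A + t * (B - A)))  # True each time
```
Each test prints True, so the sextic indeed collapses to the straight line $A+t(B-A)$. |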
$M$ contains all ordinals | Hint: So $\alpha \not \in On^M.$ If $\beta>\alpha,$ can $\beta \in On^M?$ Remember the definition of transitivity. |
Bounds for $\sum_{i=1}^n i^{\ln(i)}$ | As you wrote, it seems to be hard to give an exact formula for this.
$$S_n = \sum_{i=1}^n i^{\log(i)}$$
Computing a few values of $\log(S_n)$, reproduced below,
$$\left(
\begin{array}{cc}
n & \log(S_n) \\
100 & 23.5591 \\
200 & 30.9636 \\
300 & 35.7524 \\
400 & 39.3537 \\
500 & 42.2632 \\
600 & 44.7156 \\
700 & 46.8417 \\
800 & 48.7224 \\
900 & 50.4113 \\
1000 & 51.9458
\end{array}
\right)$$ and plotting them, it seems that they look like $$\log(S_n)=a\, n^b+c$$ Performing a nonlinear regression, the following is obtained
$$\begin{array}{ccc}
\text{} & \text{Estimate} & \text{Standard Error} \\
a & 28.7896 & 0.77787 \\
b & 0.164932 & 0.00226 \\
c & -37.9895 & 1.03575 \\
\end{array}$$ corresponding to a quite good fit of the data ($R^2$ almost equal to $1$).
Extrapolating is always dangerous: using this model, $\log(S_{2000})=62.8620$ while the correct value should be $62.6013$, which is not too bad (I hope).
Edit
Inspired by OmG's answer, we could use $$\int_1^n x^{\log(x)}\,dx=n^{\log (n)+1} F\left(\log (n)+\frac{1}{2}\right)-F\left(\frac{1}{2}\right)$$ where the Dawson integral $F$ appears.
Update
We have the double inequality $$n^{\log(n)} < S_n <n^{1+\log(n)}$$ Considering $T_n=\frac{S_n}{n^{\log(n)}}$ and curve fitting $T_n$ for the range $10^2 < n <10^4$, an approximation could be $$T_n=0.133133\,n^{0.89683}+3.14905$$ For example, using $n=10^4$, this approximation gives $S_n=3.59432\times 10^{39}$ while the exact value should be $S_n=3.59626\times 10^{39}$
Similarly, for large values of $x$ (see here), $$F(x) = \sum_{k=0}^{\infty} \frac{(2k-1)!!}{2^{k+1} x^{2k+1}}
= \frac{1}{2 x} + \frac{1}{4 x^3} + \frac{3}{8 x^5}+O\left(\frac 1{x^7} \right) $$ Keeping only the first term, this would make $$\int_1^n x^{\log(x)}\,dx\approx \frac{n^{1+\log (n)}}{2 \log (n)+1}$$ For example, using $n=10^4$, this last approximation gives $3.57353\times 10^{39}$ for the integral, while its exact value should be $3.59279\times 10^{39}$. For sure, adding more terms will make the approximation better.
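A one-off numeric check of this last approximation (a sketch; plain floating point):
```python
from math import log

n = 10_000
S_n = sum(i**log(i) for i in range(1, n + 1))
approx = n**(1 + log(n)) / (2 * log(n) + 1)
print(S_n, approx)  # ~3.596e39 vs ~3.574e39
```
The printed values match the figures quoted above. |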
How to solve equation with the floor function? 100 sided die Problem | The reason your substitution works is that the equation is a nice smooth quadratic. If the maximum is between two integer inputs, the highest value with an integer input will be on one side of the maximum or the other. If the degree is higher it usually still works, though you can find cases where it doesn't.
This is a fine approach for solving it. |
Determine the equation of the ellipse $\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$ such that it has the least area but contains the circle $(x-1)^2+y^2=1$ | Firstly, define $f(x):=b\sqrt{1-\frac{x^2}{a^2}}$ and $g(x):=\sqrt{1-(x-1)^2}$ such that $f$ and $g$ represent the ellipse and the circle respectively in the upper half of the coordinate system.
In order to guarantee that the ellipse contains the circle, we need to have:
$$
f(x)≥g(x)\iff b\sqrt{1-\frac{x^2}{a^2}}≥\sqrt{1-(x-1)^2} \iff b^2-\frac{b^2}{a^2}x^2≥2x-x^2\iff \\
\left(1-\frac{b^2}{a^2}\right)x^2-2x+b^2≥0
$$
In the last inequality, we have a quadratic function, which needs to be nonnegative everywhere (note that its leading coefficient $1-\frac{b^2}{a^2}$ must then be positive, i.e. $a>b$, for otherwise the quadratic tends to $-\infty$); thus the discriminant $D$ has to be nonpositive:
$$
D=4-4\left(1-\frac{b^2}{a^2}\right)b^2≤0\iff 1≤b^2-\frac{b^4}{a^2} \iff \frac{b^4}{a^2}≤b^2-1
$$
Since $\frac{b^4}{a^2}>0$, we must have $0<b^2-1$, i.e. $1<b$; therefore we can divide by $b^2-1$ without reversing the inequality:
$$
a^2≥\frac{b^4}{b^2-1}\iff a≥\frac{b^2}{\sqrt{b^2-1}}
$$
This last inequality is sufficient and necessary for the given condition.
Therefore, we have:
$$
A=\pi ab≥\frac{\pi b^3}{\sqrt{b^2-1}}
$$
Thus, the minimum of $A$ has to be greater than or equal to the minimum of $\frac{\pi b^3}{\sqrt{b^2-1}}$. This minimum can be found as follows:
Define $h(b):=\frac{\pi b^3}{\sqrt{b^2-1}}$.
$$
h'(b)=\frac{\pi b^2\left(2b^2-3\right)}{\left(b^2-1\right)^{\frac{3}{2}}} \implies \left(h'(b)=0\iff b=\sqrt{\frac{3}{2}}\right)
$$
$b=0$ is impossible because $b>1$. Therefore, the minimum has to be at $b=\sqrt{\frac{3}{2}}$, so we have $\min A≥h\left(\sqrt{\frac{3}{2}}\right)=\frac{3\sqrt{3}\,\pi}{2}$. This minimum can indeed be achieved for $a=\frac{3}{\sqrt 2}$ and $b=\sqrt{\frac{3}{2}}$, for which we also have:
$$
a=\frac{3}{\sqrt 2}≥\frac{\sqrt{\frac{3}{2}}^2}{\sqrt{\sqrt{\frac{3}{2}}^2-1}}=\frac{b^2}{\sqrt{b^2-1}}
$$
Thus, the equation of the ellipse has to be:
$$
\frac{2x^2}{9}+\frac{2y^2}{3}=1
$$
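The one-variable minimization can be double-checked with SymPy (a small sketch of my own):
```python
import sympy as sp

b = sp.symbols('b', positive=True)
h = sp.pi * b**3 / sp.sqrt(b**2 - 1)

crit = sp.solve(sp.diff(h, b), b)             # [sqrt(6)/2], i.e. sqrt(3/2)
print(crit, sp.simplify(h.subs(b, crit[0])))  # minimum value 3*sqrt(3)*pi/2
```
This reproduces $b=\sqrt{3/2}$ and $\min A=\frac{3\sqrt 3\,\pi}{2}$. |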
What method can be used for solving this Fokker-Planck equation and how? | Your PDE is a second-order, linear, parabolic, homogeneous PDE, which can be written as:
$$ \partial_t p + a \nabla p = D \, \nabla^2 p, \quad p = p(x,t), \quad 0<x<L, \quad t >0,$$
where $a$ and $D = b/2$ are constants. Since you have a bounded domain in $x$ and your equation is linear, separation of variables can do the trick here. Assume you are given an initial condition such that $p(x,0) = p_0(x)$ and you define the non-dimensional variable $\xi = x/L$ (you can do the same with $t$ and $p$) so you have:
$$\partial_t p + k \partial_\xi p = \alpha \partial_{\xi}^2p, \quad p = p(\xi,t), \quad 0 < \xi < 1, \quad t > 0, \tag{1} $$ with $k = a/L$, $\alpha = D/L^2$, $p(0,t) = f(t), \ p(1,t) = g(t)$ and $p(\xi,0) = p_0(L \xi) \equiv \tilde{p}_0(\xi)$. We cannot proceed further with separation of variables because of the inhomogeneous boundary conditions, so we have to make use of the superposition principle, defining:
$$p = u+v,$$
where $u$ satisfies homogenous boundary conditions and $v$ is the simplest function that satisfies $v(0,t) = f(t)$ and $v(1,t) = g(t)$. Set $v(x,t) = A(t) x+ B(t)$ and solve for $A$ and $B$. Write the problem for $u$:
$$\partial_t u + k \partial_\xi u = \alpha \partial_\xi^2 u \underbrace{ -\partial_tv-k\partial_\xi v + \alpha\partial^2_\xi v}_{W(\xi,t)}, \tag{2}$$ the known term $W$ makes your equation for $u$ non-homogeneous, so you must solve the following new problem (cf. Fredholm's alternative):
$$ \theta_t + k \theta_x = \alpha \theta_{xx}, $$
together with $\theta(0,t) = \theta(1,t) = 0$; I don't care about the initial conditions here (which for $u$ turn out to be $u(x,0) = p(x,0)-v(x,0)$).
Set now:
$$\theta(\xi,t) = P(\xi)Q(t), \quad P \neq 0, \ Q \neq 0$$ and substitute back in the PDE for $\theta$ to have:
$$PQ' + k P' Q = \alpha P'' Q \implies \frac{Q'}{Q} = \frac{\alpha P'' - k P'}{P} = \lambda \in \mathbb{R}. \tag{3} $$
Eq. $(3)$ yields two different problems, one for $P$ and one for $Q$. Solve for $P$ and find the eigenvalues and eigenfunctions associated to the homogeneous Dirichlet boundary conditions $P(0) = P(1) = 0$. Expand then the solution $u$ in terms of the eigenfunctions (Sturm-Liouville theory):
$$u = \sum^\infty_{n} Q_n(t) P_n(x),$$ and find the Fourier coefficients $Q_n(t)$ by introducing this in eq. $(2)$ and making use of the property of orthogonality of $P_n(x)$, i.e.,
$$ \int^1_0 P_n(x) P_m(x) r(x) \, \mathrm{d} x = \delta_{nm}, $$ where $\delta$ is the Kronecker delta and $r(x)$ is the weight function of the self-adjoint form of the problem for $P$ (here $r(x) = e^{-kx/\alpha}$, which is what puts $\alpha P''-kP'$ into Sturm-Liouville form).
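For concreteness, the eigenvalue problem can be solved in closed form (a standard computation added here as a worked example):
$$
\alpha P'' - k P' = \lambda P, \qquad P(0) = P(1) = 0.
$$
Substituting $P(\xi) = e^{k\xi/(2\alpha)}\varphi(\xi)$ removes the first-order term, leaving $\alpha\varphi'' = \left(\lambda + \tfrac{k^2}{4\alpha}\right)\varphi$ with $\varphi(0)=\varphi(1)=0$, so that
$$
P_n(\xi) = e^{k\xi/(2\alpha)}\sin(n\pi\xi), \qquad \lambda_n = -\alpha n^2\pi^2 - \frac{k^2}{4\alpha}, \qquad n = 1,2,\dots,
$$
and these $P_n$ are orthogonal precisely with respect to the weight $r(\xi)=e^{-k\xi/\alpha}$ mentioned above.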
Hope you find this helpful. Cheers! |
Simplifying integral $\int_4^3 \sqrt{(x - 3)(4 - x)} dx$ by an easy approach | Let $x=3.5+t$. We are finding $\int_{1/2}^{-1/2}\sqrt{(1/2)^2-t^2}\,dt$, which we recognize as (the negative of) a familiar area. |
Does cone associated with PSD matrix always convex? | Yes, the set of positive semi-definite matrices is a convex set.
It is rather easy to understand intuitively, as the set is all $X$ such that $v^TXv\geq 0 ~\forall v\in R^{n}$, which is effectively an intersection of infinitely many linear inequalities (one for each $v$).
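A quick numeric illustration (my own sketch):
```python
import numpy as np

rng = np.random.default_rng(1)
A, B = (M @ M.T for M in (rng.normal(size=(4, 4)), rng.normal(size=(4, 4))))

for lam in (0.0, 0.3, 0.7, 1.0):
    C = lam * A + (1 - lam) * B   # convex combination of two PSD matrices
    print(np.linalg.eigvalsh(C).min() >= -1e-12)  # True: C stays PSD
```
Every convex combination tested remains positive semi-definite. |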
Birthday Paradox with Leap Year | I conclude that the result in the question is accurate, but the method by which it is obtained is questionable.
The questionable aspect is the use of
$\lvert\mathcal{A}_1\rvert$ and $\lvert\mathcal{A}_2\rvert$,
the cardinalities of the sets $\mathcal{A}_1$ and $\mathcal{A}_2$.
The usual way to use cardinalities of sets of outcomes in order to compute the probability of a desirable event ("success") is to partition the probability space into some number $D$ of equally likely events such that each event is either completely within the favorable outcomes or completely within the unfavorable outcome.
The favorable outcomes are then represented by a set of events
$\mathcal{A}$,
and the probability of success is then simply $\lvert\mathcal{A}\rvert/D.$
When working on the birthday paradox under the usual assumptions ($365$ possible birthdays, any person equally likely to be born on any of those dates),
for a group of $n$ persons the probability space can be partitioned into
$365^n$ equally-likely events, one for each possible combination of birthdays of the $n$ distinct persons in the group.
You can count up the number of events in which no two persons share a birthday,
which is $^{365}P_n,$ and the probability of no match is therefore
$^{365}P_n / 365^n.$
In the question, however, the probability space has not been partitioned into a set of equally likely events.
Instead, the probability is computed with a denominator $365.25^n$ which is not even an integer.
There are, however, at least three ways to fix this. One way (which is detailed in another answer) is to consider each person to be equally likely to have a birthday on every day in a four-year period.
I present two more methods below.
The basis of each of these methods is the assumption that each day other than February $29$ is equally likely, whereas February $29$ is $\frac14$ as likely as any other particular day, and these $366$ possibilities cover the entire probability space. The total probability of all events is therefore $365.25$ times the probability of a birthday on January $1$ (which is $1/365.25$),
while the probability of a birthday on February $29$ is one quarter of that, or $0.25/365.25.$ The probability space is therefore partitioned into $366$ events whose probabilities add up to $1$ as required.
First alternative method
One typical way to approach the usual ($365$-day) problem via conditional probabilities. The probability of no match is the product of conditional probabilities $P_k$ that the $k$th person will have a birthday distinct from the previous $k-1$ persons,
given that the first $k-1$ persons all have birthdays distinct from each other.
If we define $B_0$ equal to the entire probability space,
and for $k > 0$ we define $B_k$ as the event that the first $k$ persons all have distinct birthdays,
then $P_k = \mathbb P(B_k \mid B_{k-1})$ and the probability of no match in the entire group is
$$ \prod_{k=1}^n \mathbb P(B_k \mid B_{k-1}). $$
Adapting this approach to the leap-year version of the problem,
let $A_0$ be the event that all $n$ persons have distinct birthdays not including February $29.$
If $B_0$ is the entire probability space,
and for $k > 0$ we define $B_k$ as the event that the first $k$ persons all have distinct birthdays not including February $29,$
then
$$ \mathbb P(A_0) = \prod_{k=1}^n \mathbb P(B_k \mid B_{k-1}). $$
Now let $A_m$ be the event that all persons in the group were born on distinct days and the $m$th person was born on February $29.$
Let $C_0$ be the entire probability space, let $C_1$ be the event that the first person is born on February $29,$ and for $k > 1$ define $C_k$ as the event that the first person was born on February $29$ and the next $k-1$ persons all have distinct birthdays not including February $29.$
Then
$$ \mathbb P(A_1) = \prod_{k=1}^n \mathbb P(C_k \mid C_{k-1}). $$
By symmetry, $\mathbb P(A_1) = \mathbb P(A_2) = \cdots = \mathbb P(A_n).$
So the probability all $n$ persons in the group are born on distinct days is
$$ \mathbb P(A_0) + \mathbb P(A_1) + \cdots + \mathbb P(A_n)
= \mathbb P(A_0) + n \mathbb P(A_1). $$
Now let's compute $\mathbb P(A_0).$
We suppose that a person has a $1/365.25$ probability to be born on any particular day other than February $29,$ and a $0.25/365.25$ probability to be born on February $29.$
The probability that the first person's birthday is not February $29$ is therefore
$\mathbb P(B_1 \mid B_0) = 365/365.25.$
More generally, for $k > 0,$ given that the first $k-1$ persons have all distinct birthdays,
the probability that the $k$th person has a birthday distinct from any of the previous $k-1$ and not on February $29$ (that is, the probability that the first $k$ persons have distinct birthdays not including February $29$) is
$\mathbb P(B_k \mid B_{k-1}) = (365 - (k - 1))/365.25.$
Therefore
\begin{align}
\mathbb P(A_0)
&= \prod_{k=1}^n \frac{365 - (k - 1)}{365.25} \\
&= \frac{365 \cdot 364 \cdot 363 \cdot \cdots \cdot (365 - (n - 1))}{365.25^n} \\
&= \frac{^{365}P_n}{365.25^n}.
\end{align}
Next let's compute $\mathbb P(A_1).$
The probability that the first person's birthday is February $29$ is
$\mathbb P(C_1 \mid C_0) = 0.25/365.25.$
For $k > 1,$ given that the first person is born on February $29$ and the first $k-1$ persons have all distinct birthdays,
the probability that the $k$th person has a birthday distinct from any of the previous $k-1$ and not on February $29$ (that is, the probability that the first $k$ persons have distinct birthdays and the first one was born on February $29$) is
$\mathbb P(C_k \mid C_{k-1}) = (365 - (k - 2))/365.25.$
Therefore
\begin{align}
\mathbb P(A_1)
&= \frac{0.25}{365.25} \prod_{k=2}^n \frac{365 - (k - 2)}{365.25} \\
&= \frac{0.25}{365.25} \cdot
\frac{365 \cdot 364 \cdot \cdots \cdot (365 - (n - 2))}{365.25^{n-1}} \\
&= \frac{0.25 \cdot {}^{365}P_{n-1}}{365.25^n}.
\end{align}
In conclusion,
$$ \mathbb P(A_0) + n \mathbb P(A_1)
= \frac{^{365}P_n}{365.25^n} + \frac{n \cdot 0.25 \cdot {}^{365}P_{n-1}}{365.25^n},
$$
the same result that was given in the question.
But the fact that we can use the permutation symbol here seems to be coincidental,
since permutations of $n$ persons or $n-1$ persons were not involved in any part of the derivation of this result.
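Before moving on, here is a quick numerical sanity check of this result (a Python sketch I am adding for illustration; the function names are my own, not part of the original answer). It evaluates the closed form and compares it against a Monte Carlo simulation of the $365.25$-day model.

from math import prod
from random import random

def p_all_distinct(n):
    """P(A_0) + n*P(A_1) from the derivation above."""
    perm = lambda m: prod(range(365 - m + 1, 366))  # 365*364*...*(365-m+1)
    return (perm(n) + n * 0.25 * perm(n - 1)) / 365.25**n

def simulate(n, trials=200_000):
    """Day 366 stands for February 29 and is 1/4 as likely as each other day."""
    hits = 0
    for _ in range(trials):
        days = []
        for _ in range(n):
            u = random() * 365.25
            days.append(366 if u >= 365 else int(u) + 1)
        hits += len(set(days)) == n
    return hits / trials

print(p_all_distinct(23), simulate(23))  # both come out near 0.49

The two values should agree to about two decimal places, which is as much as the simulation noise allows.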
Second alternative method
A second alternative method also uses conditional probabilities, but not so many of them, and also uses a counting argument involving permutations.
Again we consider two possible kinds of event:
$A_0,$ which occurs when all $n$ persons have distinct birthdays from each other and none was born on February $29$;
and $A_k$ for $k\geq 1,$ which occurs when the $k$th person is born on February $29$ and the other $n-1$ have distinct birthdays from each other, none of which is February $29.$
The probability of $A_0$ is the probability that none of the $n$ persons was born on February $29,$ which is $(365/365.25)^n,$ times the conditional probability that no two of them share a birthday, given that none was born on February $29.$
The conditional probability is simply the probability of no matching birthdays among $n$ persons in the ordinary $365$-day birthday problem,
which we already know is $^{365}P_n / 365^n$
by counting the number of favorable combinations of birthdays among the $365^n$ possible combinations.
So
$$ \mathbb P(A_0)
= \left(\frac{365}{365.25}\right)^n \times \frac{^{365}P_n}{365^n}
= \frac{^{365}P_n}{365.25^n}.
$$
For $A_1$, the probability is the probability that the first person was born on February $29,$ which is $0.25/365.25,$ times the probability that none of the other $n-1$ persons was born on February $29,$ times the conditional probability that the remaining $n-1$ persons were all born on different days,
given that none of them was born on February $29.$
Similarly to the previous case, the conditional probability is simply the probability of $n-1$ distinct birthdays in the ordinary $365$-day birthday problem, which is $^{365}P_{n-1} / 365^{n-1}.$
So
$$ \mathbb P(A_1)
= \frac{0.25}{365.25}
\left(\frac{365}{365.25}\right)^{n-1} \times \frac{^{365}P_{n-1}}{365^{n-1}}
= \frac{0.25 \cdot {}^{365}P_{n-1}}{365.25^n}.
$$
Therefore the final answer is
$$ \mathbb P(A_0) + n \mathbb P(A_1)
= \frac{^{365}P_n}{365.25^n} + \frac{n \cdot 0.25 \cdot {}^{365}P_{n-1}}{365.25^n}.
$$
In this method, the fact that we can write parts of the formula using permutations is no coincidence; we actually derived those parts of the formula by counting permutations. |
Inequality $|z_1+z_2|^2 \le (1+|z_1|^2)(1+|z_2|^2)$ | You can use the triangle inequality followed by Cauchy-Schwarz, applied to the vectors $(1,|z_2|)$ and $(|z_1|,1)$:
$$
|z_1+z_2|^2 \leq (|z_1|+|z_2|)^2 =
(1 \times |z_1|+|z_2| \times 1)^2 \leq (1+|z_1|^2)(1+|z_2|^2).
$$ |
Easy way to find maximum/minimum value of polynomial | Hint: An affine change of coordinates $t=x-x_i+3d$, with $d=\frac12h$, transforms $P(x)$ to $Q(t)=(t-3d)(t-d)(t+d)(t+3d)$. |
Compactness of $\{x \rightarrow \sin(\pi nx) : n \in \mathbb{Z}\}$ | Arzela-Ascoli is definitely one way to go. As you say, we should show that the sequence of functions $f_n(x)=\sin(\pi n x)$ is not equicontinuous. You're also on the right track -- noting that the periods can be made arbitrarily small, but the amplitude is still 1.
Here is a formal argument. Fix any $\delta>0$. By the Archimedean property of the reals, we can choose $N$ so large that $2/N<\delta$. Then for this $N$,
$$\vert 0-1/(2N)\vert =1/(2N)<2/N<\delta$$
but then for the function $f_N(x)=\sin(\pi N x)$, we have
$$\vert \sin(\pi N \cdot 0)-\sin(\pi N (1/(2N)))\vert=\vert 0-\sin(\pi/2)\vert=1\;.$$
So we have shown that there exists $\epsilon>0$ (namely $\epsilon=1$) such that for all $\delta>0$, there exist two points $x,y$ with $\vert x-y\vert <\delta$ (namely, $0$ and $1/(2N)$) so that for the function $f_N(x)=\sin(\pi N x)$, we have
$$\vert \sin(\pi N x)-\sin(\pi N y)\vert\geq \epsilon\;.$$
By definition, the sequence is not equicontinuous. |
How to integrate $\int\limits_0^\infty e^{-a x^2}\cos(b x) dx$ where $a>0$ | I will consider the integral without the limit process. $$I(a,b)=\int\limits_0^\infty e^{-a x^2}\cos(b x) dx=\sum_{n=0}^{\infty}\frac{(-1)^n b^{2n}}{(2n)!}\int_0^{\infty}x^{2n}e^{-ax^2}dx$$ by expanding the cosine. The integrals in the above sum are the familiar Gaussian integrals defined by $$I_m=\int_0^{\infty}x^me^{-ax^2}dx$$ for non-negative integral $m$. One may note that $I_0=\frac{1}{2}\sqrt{\frac{\pi}{a}}$ and that $I_{2n}=(-1)^n\frac{d^n}{da^n}I_0$. It follows that $$I(a,b)=\frac{\sqrt\pi}{2}\sum_{n=0}^{\infty}\frac{b^{2n}}{(2n)!}\frac{d^n}{da^n}a^{-\frac{1}{2}}.$$ It is not difficult to see that $$\frac{d^n}{da^n}a^{-\frac{1}{2}}=\frac{(-1)^n}{\sqrt a}\,\frac{1\cdot 3\cdots(2n-1)}{2^n}\,\frac{1}{a^n}.$$ Substituting the above in the expression for $I(a,b)$ one has $$I(a,b)=\sqrt{\frac{\pi}{4a}}\sum_{n=0}^{\infty}\frac{1}{n!}\left(-\frac{b^2}{4a}\right)^n$$ $$\implies I(a,b)=\sqrt{\frac{\pi}{4a}}\, e^{-\frac{b^2}{4a}}.$$
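A quick numerical check of this closed form (a Python/SciPy sketch I am adding; the test values of $a$ and $b$ are arbitrary):

import numpy as np
from scipy.integrate import quad

a, b = 1.7, 2.3  # arbitrary test values with a > 0
numeric, _ = quad(lambda x: np.exp(-a*x**2) * np.cos(b*x), 0, np.inf)
closed = np.sqrt(np.pi / (4*a)) * np.exp(-b**2 / (4*a))
print(numeric, closed)

For any moderate $a>0$ the quadrature and the closed form agree to roughly $10^{-10}$ or better. |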
Proving $T=X_1-X_2$ sufficient and complete | As @StubbornAtom suggested, find $\mathbb P(X_1=k_1,\ldots,X_n=k_n\mid X_1-X_2=0)$. Does this probability depend on $\theta$ or not? This is more than sufficient to answer whether $X_1-X_2$ is sufficient.
In addition, a few words on the concept of sufficiency. Sufficient statistics are those whose knowledge carries the same information about an unknown parameter as knowledge of the entire sample. Can the knowledge of the difference of the first two elements of the sample replace the knowledge of the entire sample? Why then do people toss a coin many times when trying to estimate the success probability? They could just toss it twice and, based on the difference in the results of the first two tosses, draw a conclusion about the probability of success. |
Global extrema of $f(x,y)= e^{-4x^2-9y^2}(2x+3y)$ on the ellipse $4x^2+9y^2 \leq 72$ | From the Lagrangian
$$
L(x,y,\mu,\epsilon) = f(x,y) + \mu\left(4x^2+9y^2-72+\epsilon^2\right)
$$
where $\epsilon$ is a slack variable that transforms the inequality constraint into an equality.
The stationary conditions are
$$
\left\{
\begin{array}{rcl}
8 \lambda x-8 (2 x+3 y) x+2& = & 0 \\
18 \lambda y-18 (2 x+3 y) y+3& =& 0 \\
\epsilon ^2+4 x^2+9 y^2-72 & = & 0 \\
2 \epsilon \lambda & = & 0 \\
\end{array}
\right.
$$
where $\lambda = \mu e^{4x^2+9y^2}$. After solving we obtain
$$
\begin{array}{ccccc}
x & y & \lambda & \epsilon & f(x,y) \\
-3 & -2 & -\frac{143}{12} & 0 & -\frac{12}{e^{72}} \\
-\frac{1}{4} & -\frac{1}{6} & 0 & \sqrt{\frac{143}{2}} & -\frac{1}{\sqrt{e}} \\
\frac{1}{4} & \frac{1}{6} & 0 & \sqrt{\frac{143}{2}} & \frac{1}{\sqrt{e}} \\
3 & 2 & \frac{143}{12} & 0 & \frac{12}{e^{72}} \\
\end{array}
$$
NOTE
Solutions with $\epsilon = 0$ lie on the boundary, and solutions with $\epsilon \ne 0$ lie in the interior of the feasible region.
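The stationary system can also be handed to a computer algebra system. Here is a sketch in Python with SymPy (my own verification, not part of the original solution):

import sympy as sp

x, y, lam, eps = sp.symbols('x y lambda epsilon', real=True)

eqs = [8*lam*x - 8*(2*x + 3*y)*x + 2,
       18*lam*y - 18*(2*x + 3*y)*y + 3,
       eps**2 + 4*x**2 + 9*y**2 - 72,
       2*eps*lam]

f = sp.exp(-4*x**2 - 9*y**2) * (2*x + 3*y)
for sol in sp.solve(eqs, [x, y, lam, eps], dict=True):
    print(sol, sp.simplify(f.subs(sol)))

The interior solutions appear once for each sign of $\epsilon$; the four distinct $(x,y)$ pairs and objective values should match the table above.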
[Two plots were attached: the first shows the locations of the solution points, the second a detail of the two interior solutions.] |
Rooted trees on simple graphs | I'm not sure what the root has to do with it, but here's how I'd do it for plain (unrooted) trees. The statement I'd prove by induction on $k$ is "a simple graph with minimum degree at least $k$ contains a copy of every tree with $k+1$ vertices." To do the induction step (from $k$ to $k+1$): let $G$ be a simple graph with minimum degree at least $k+1$, and let $T$ be a tree with $k+2$ vertices. Let $v$ be a leaf of $T$, and let $u$ be its neighbor. By the induction hypotheses, $G$ contains a copy $T_0'$ of the tree $T_0=T-v$. Let $u'$ be the vertex of $T_0'$ corresponding to the vertex $u$ of $T_0$. Since $u'$ has at least $k+1$ neighbors in $G$, it has a neighbor $v'$ which is not a vertex of $T_0'$. Adding the vertex $v'$ and the edge $u'v'$ to $T_0'$, we get a copy of $T$ in $G$. |
A convergence problem | This is wrong. It is not true that$$\sum_{n=1}^\infty \varepsilon_n < \infty \Longleftrightarrow \lim_{n\to \infty} \sqrt[n]{\varepsilon_n} < 1.$$For instance, take $\varepsilon_n=\frac1{n^2}$. In this case, you have$$\sum_{n=1}^\infty \varepsilon_n < \infty \text{ and } \lim_{n\to \infty} \sqrt[n]{\varepsilon_n} = 1.$$The root test only asserts that$$\sum_{n=1}^\infty \varepsilon_n < \infty \Longleftarrow \lim_{n\to \infty} \sqrt[n]{\varepsilon_n} < 1.$$Assuming that $(\forall n\in\mathbb{N}):\varepsilon_n\geqslant0$, you can solve your problem as follows: since $\sum_{n=1}^\infty\varepsilon_n<+\infty$, $\varepsilon_n<1$ if $n$ is large enough. But then ${\varepsilon_n}^p\leqslant\varepsilon_n$, since $0\leqslant\varepsilon_n<1$ and $p\geqslant1$. Therefore, you can apply the comparison test. |
Count possible traversals in an undirected graph | You can do it with an inclusion-exclusion argument. Let $S$ be a set of $k$ of the vertices. Then there are $$\binom{2n-k}k\cdot k!\cdot\frac{(2n-2k)!}{2^{n-k}}=\frac{(2n-k)!}{2^{n-k}}$$ traversals that visit each of the $k$ vertices in $S$ (and possibly some other vertices as well) twice in succession: there are $\binom{2n-k}k$ ways to choose where in the traversal the $k$ vertices in $S$ will come, $k!$ ways to permute the vertices in $S$ amongst those positions, and $\frac{(2n-2k)!}{2^{n-k}}$ ways to arrange the rest of the traversal. Since there are $\binom{n}k$ sets of $k$ vertices, the number of traversals that do not visit any vertex twice in succession is
$$f(n)=\sum_{k=0}^n(-1)^k\binom{n}k\frac{(2n-k)!}{2^{n-k}}\;.\tag{1}$$
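Formula $(1)$ is straightforward to evaluate; here is a short Python sketch I am adding as a check (each term is an exact integer, since $2^{n-k}$ divides $(2n-k)!$ for $0\le k\le n$):

from math import comb, factorial

def f(n):
    """Number of traversals with no vertex visited twice in succession, via (1)."""
    return sum((-1)**k * comb(n, k) * factorial(2*n - k) // 2**(n - k)
               for k in range(n + 1))

print([f(n) for n in range(1, 5)])  # [0, 2, 30, 864]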
The first few values are $f(1)=0,f(2)=2,f(3)=30$, and $f(4)=864$; these are enough to identify the sequence as OEIS A114938, described as the number of permutations of the multiset $\{1,1,2,2,\dots,n,n\}$ with no two consecutive terms equal. The formula given there is $$f(n)=\sum_{k=0}^n(-1)^{n-k}\binom{n}k\frac{(n+k)!}{2^k}\;,$$ which is simply $(1)$ with $k$ replaced by $n-k$. There does not appear to be a nice closed form. |
Convergent sequence or not? | The standard comparison with an integral does the job with little effort.
To wit, for every $p\gt0$, the function $u:x\mapsto x^{-1/p}$ is decreasing on $x\gt0$ hence, for every $i\geqslant1$, for every $x$ in the interval $((i+1)/n,(i+2)/n)$ and every $y$ in the interval $(i/n,(i+1)/n)$,
$$
u(x)\leqslant u((i+1)/n)\leqslant u(y).
$$
Integrating this double inequality yields
$$
n\int_{(i+1)/n}^{(i+2)/n}u(x)\mathrm dx\leqslant u((i+1)/n)\leqslant n\int_{i/n}^{(i+1)/n}u(y)\mathrm dy.
$$
On the other hand,
$$
a_n=n^{-(p-1)/p}n^{-1/p}\sum_{i=1}^nu((i+1)/n)=n^{-1}\sum_{i=1}^nu((i+1)/n),
$$
hence, summing the estimations of $u((i+1)/n)$ above yields
$$
\int_{2/n}^{(n+2)/n}u(x)\mathrm dx\leqslant a_n\leqslant \int_{1/n}^{(n+1)/n}u(x)\mathrm dx.
$$
If $p\ne1$, a primitive of $u$ is $v:x\mapsto (p/(p-1))x^{1-1/p}$. One sees that $v(1/n)$ and $v(2/n)$ both converge to $v(0)=0$ (this is the first and only time when one uses the hypothesis that $p\gt1$...) and that $v((n+1)/n)$ and $v((n+2)/n)$ both converge to $v(1)=p/(p-1)$. Finally,
$$
\lim_{n\to\infty}a_n=p/(p-1).
$$
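A quick numerical illustration of this limit (a Python sketch added for the reader; the proof above does not depend on it):

def a(n, p):
    u = lambda x: x ** (-1.0 / p)
    return sum(u((i + 1) / n) for i in range(1, n + 1)) / n

for p in (1.5, 2.0, 4.0):
    print(p, a(10**5, p), p / (p - 1))

For each $p>1$ the computed $a_n$ approaches $p/(p-1)$; the convergence is slower the closer $p$ is to $1$, as the behaviour of $v$ near $0$ suggests. |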
For $f$ analytic on $D_1(0)$, $|f(z)| \ge 1$ on $\partial D_1(0)$, and $f(D_1(0)) \cap D_1(0) \neq \emptyset$, show $f(D_1(0)) \supseteq D_1(0)$. | Your argument works fine for $w=0$ (it's just an application of the minimum principle, as Daniel Fischer noted). Let's spell it out from the basic maximum principle:
If $f$ omits $0$, then $1/f$ is holomorphic, hence bounded by $1$ in $\mathbb D$ by the maximum principle. This contradicts the assumption that $|f|<1$ somewhere in $\mathbb D$.
In the above form, the argument generalizes to other points of the unit disk by replacing $1/f$ with a more general Möbius map that turns the unit disk inside out. Namely:
If $f$ omits $w_0\in \mathbb D$, then $g(z)=\dfrac{1-\overline{w_0} f(z)}{f(z)-w_0}$ is holomorphic, hence bounded by $1$ in $\mathbb D$ by the maximum principle. This contradicts the assumption that $|f|<1$ somewhere in $\mathbb D$ (because $|f|<1$ iff $|g|>1$). |
Vectorial derivatives | Assuming that $a_\tau$, $\mathbf{b}_\tau$ and $C_\tau$ do not depend on $\mathbf{x}_\tau$, the result is
$$
\frac{\mathrm{d}y_{\tau}}{\mathrm{d}\mathbf{x}_t} = \frac{\mathrm{d}a_\tau}{\mathrm{d}\mathbf{x}_t} + \frac{\mathrm{d}(\mathbf{b}'_\tau\mathbf{x}_t)}{\mathrm{d}\mathbf{x}_t} + \frac{\mathrm{d}(\mathbf{x}'_tC_\tau\mathbf{x}_t)}{\mathrm{d}\mathbf{x}_t} = 0 + \mathbf{b}'_\tau + \mathbf{x}'_tC_\tau + \frac{\mathrm{d}\mathbf{x}'_t}{\mathrm{d}\mathbf{x}_t}C_\tau\mathbf{x}_t.
$$ |
A die is thrown 10 times. What is the probability that $6$ isn't registered and that at least one "1" is registered. | Let $A$ be the event at least one $1$, and $B$ the event no $6$. We want $\Pr(A\cap B)$. This is $\Pr(A|B)\Pr(B)$.
It is not hard to see that $\Pr(B)=\frac{5^{10}}{6^{10}}$.
For $\Pr(A|B)$, we can proceed as follows. Given $B$, there are $5^{10}$ equally likely outcomes. Of these, $4^{10}$ have no $1$, so $5^{10}-4^{10}$ have at least one $1$. It follows that
$$\Pr(A|B)=\frac{5^{10}-4^{10}}{5^{10}}.$$
Now we can find $\Pr(A\cap B)$.
Another way: The probability of no $6$ is $\left(\frac{5}{6}\right)^{10}$. The probability of no $6$ and no $1$ is $\left(\frac{4}{6}\right)^{10}$. It follows that the probability of no $6$ and at least one $1$ is
$$\left(\frac{5}{6}\right)^{10}-\left(\frac{4}{6}\right)^{10}.$$
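Either route gives the same number, which a short simulation confirms (a Python sketch I am adding; hits counts outcomes with no $6$ and at least one $1$):

from random import randint

exact = (5/6)**10 - (4/6)**10

trials = 1_000_000
hits = 0
for _ in range(trials):
    rolls = [randint(1, 6) for _ in range(10)]
    hits += (6 not in rolls) and (1 in rolls)

print(exact, hits / trials)

Both numbers come out near $0.144$. |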
Differentiation of integrals | I assume that "the first-order condition of $\max\{f\}$" refers to the equation $\frac{\partial f}{\partial x} = 0$. With that said, we have (using the Leibniz integral rule)
$$
\frac{\partial}{\partial x}\left[\frac{1}{1-\beta} \left[ \int_{0}^{N}x_{\nu}^{1 - \beta} d\nu \right]L^{\beta} - wL - \int_{0}^{N} p_{\nu} x_{\nu}d\nu\right] = \\
\frac{1}{1-\beta} \left[\int_{0}^{N} \frac{\partial}{\partial x} x_{\nu}^{1 - \beta} d\nu \right]L^{\beta} - \frac{\partial}{\partial x} wL - \int_{0}^{N} \frac{\partial}{\partial x}p_{\nu} x_{\nu}d\nu =\\
L^{\beta}\int_{0}^{N} x_{\nu}^{- \beta} d\nu - \int_{0}^{N} p_{\nu} d\nu = \\
\int_0^N [L^\beta x_\nu^{- \beta} - p_\nu]d\nu.
$$
So, we are apparently interested in quantities of $x_\nu$ for which $\int_0^N [L^\beta x_\nu^{- \beta} - p_\nu]d\nu = 0$. Now, we would need more context to know exactly what is meant by saying that your equation is "the first order condition". However, it is certainly true that if $[L^\beta x_\nu^{- \beta} - p_\nu] = 0$ for all $0 \leq \nu \leq N$, then the integral must be zero. So, setting this quantity to zero guarantees that the first order condition holds, but is not equivalent to the first order condition.
Rearranging this equation yields
$$
[L^\beta x_\nu^{- \beta} - p_\nu] = 0\\
L^\beta x_\nu^{- \beta} = p_\nu\\
x_\nu^{- \beta} = L^{-\beta}p_\nu\\
x_{\nu} = L p_\nu^{-1/\beta}.
$$ |
60 is 20% less than what number | The correct equation is $X-0.2X=60$ or equivalently $0.8X=60$, which evaluates to $X=\frac{60}{0.8}=60\cdot\frac{5}{4}=75$. |
finding angular acceleration given the angular velocity and the radius of the circle | This is a question pertaining to centripetal acceleration. Keep in mind that:
\begin{align}a_c = \dfrac{v^2}{r}\end{align}
Does this make sense now? You are not dealing with rotational kinematics in this problem. |
Why is the sum of two functions expressed as simple functions is the sum of the weighted indicator variable of the intersection? | Suppose $g = 3 \mathbb I_{A_1}$ and $h = 4 \mathbb I_{B_1}$.
The definition multiplies the coefficients: the product is $12\,\mathbb I_{A_1\cap B_1}$, which has the value $12$ only on the intersection, not on the union. |
Showing a semidirect product group is isomorphic to A4 | I would approach this as follows.
Observe that $f$ has the same effect on $V$ as does conjugation by $(243)$ (check this, it's late here and I may have made a mistake).
So you should be able to construct an isomorphism from $V\rtimes_\phi C_3$ to $A_4$ that is the identity on $V$ and sends a generator of $C_3$ to $(243)$.
Another way of looking at it could be to construct $A_4$ as the (internal) semidirect product of $V$ and $\langle(243)\rangle$. Then you could use any other 3-cycle in place of $(243)$, but you need to be prepared to replace it with its inverse, if the conjugation 3-cycle runs in the opposite direction.
Adding a bit extra. Recall that the group operation in a semi-direct product $H\rtimes_\phi K$ is that of $H$ and $K$ on the individual subgroups together with the rule that conjugation of $H$ by an element $k\in K$ gives exactly the automorphism $\phi(k)$ of $H$. This implies the following. To get a homomorphism $f:H\rtimes_\phi K\to G$ from $H\rtimes_\phi K$ to a third group $G$ you need to:
Specify a homomorphism $i:H\to G$,
specify a homomorphism $j:K\to G$, and
verify that for each $k\in K$ and $h\in H$ you have
$$j(k)i(h)j(k)^{-1}=i(\phi(k)(h)).$$
Then
$$f(h,k)=i(h)j(k)$$
is automatically a homomorphism.
In the present case (always the case when you have an internal semi-direct product) the homomorphism $i$ is meant to be the inclusion mapping $V\to A_4$ and the homomorphism $j:C_3\to A_4$ was described by telling you where the generator of $C_3$ goes. |
Why is $f(x)=1/x$ uniformly continuous on $[a, \infty)$? | $\left|\dfrac{1}{x}-\dfrac{1}{y}\right|=\dfrac{|x-y|}{|x||y|}\leq\dfrac{1}{a^{2}}|x-y|$, the choice of $\delta>0$ should be clear. |
Question about degenerate inner product subspaces | You can't prove that because it is false.
Let $\mathbb R^3$ have the degenerate metric $B(x,y)=x_1y_1-x_2y_2$.
Then $v=(1,1,0)$ and $w=(-1,1,0)$ both satisfy $B(v,v)=0$ but their sum does not. So, the subset of isotropic vectors is not closed under addition. |
Integration limits of central moments | Assuming that your "chromatographic peaks" are represented by the $c(t)$ in your screenshot, then both formulas are equivalent. If you integrated with boundaries $\pm \infty$, the whole domain from $-\infty$ to $0$ would just give you zero, as your peaks do not show up until $t > 0$.
In general, if you had probability density functions $c$ which can also take values for $t \leq 0$, you need the formula with boundaries $\pm \infty$, because to calculate the moments, you really need all the information that your density function contains. But in your case, the formula from your screenshot does the job. |
Automorphism iff G is abelian | Assume that $\alpha$ is an automorphism. Then for each $a,b\in G$ we have:
$ab=\alpha(a^{-1})\alpha(b^{-1})=\alpha(a^{-1}b^{-1})=\alpha((ba)^{-1})=ba$,
hence $G$ is abelian. |
Can deep learning be a good way to learn "high-quality" simple functions for images? For example, identical transformation, rotation, translation. | The question is interesting but vague, so my answer inherits the latter property.
In deep learning the models are highly nonlinear functions with lots of parameters. One consequence is that these models are very flexible - in the limit they can approximate anything, including any kind of linear mapping. However, even if the data has a linear dependency, what you probably get after training such a model is a complicated non-linear function (but the quality of the predictions can be good).
If you know that the data comes from a linear model, it is probably best to try to learn such a model using a more specific algorithm (a random example: On Learning Rotations - Raman Arora). |
What properties of groups are needed for orbits to be well-defined under group actions? | An equivalence relation is a relation that is reflexive, symmetric and transitive.
Can you find, for any $x\in X$, an element $g$ such that $g\cdot x = x$?
Can you prove that, for any $x, y\in X$ such that $g\cdot x = y$ for some $g\in G$, there exists a $g'\in G$ such that $g'\cdot y = x$?
If you have $g\cdot x = y$ and $h\cdot y = z$, what is the element $w$ of $G$ such that $w\cdot x = z$ ? What property do you use? |
Why is the DFT of $1 = \sum_{k=-\infty}^{\infty}\delta(\theta-k)$? | Let $x = e^{- 2 \pi i k /N}$, so that $x^n = e^{-2\pi i kn/N}$. Note that $x = 1$ if and only if $k$ is a multiple of $N$. So, in the case that $k$ is such a multiple, we have
$$
\hat z[k] = \frac 1N\sum_{n=0}^{N-1}1 \cdot e^{-2\pi i k n/N} =
\frac 1N\sum_{n=0}^{N-1} x^n = \frac 1N\sum_{n=0}^{N-1} 1 = 1.
$$
Otherwise, we have
$$
\hat z[k] = \frac 1N\sum_{n=0}^{N-1}1 \cdot e^{-2\pi i k n/N} =
\frac 1N\sum_{n=0}^{N-1} x^n = \frac 1N\frac{1 - x^{N}}{1 - x} = \frac{0}{1 - x} = 0.
$$
This corresponds to the sum
$$
\hat z[k] = \sum_{j=-\infty}^\infty \delta[k - Nj].
$$
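This is easy to confirm numerically (a NumPy sketch I am adding; note that numpy.fft.fft carries no $1/N$ factor, so we divide by $N$ to match the convention used above):

import numpy as np

N = 8
zhat = np.fft.fft(np.ones(N)) / N  # divide by N to match the 1/N convention above
print(np.round(zhat.real, 12))     # [1. 0. 0. 0. 0. 0. 0. 0.]

Only $k=0$ survives; over $0 \le k \le N-1$ that is exactly the delta train restricted to one period. |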
Smoothness of integrals of dirac delta function | The first integral is the Heaviside function. It's discontinuous at $0$ because the limits from the left and from the right are different ($0$ and $1$). One might explain the discontinuity by saying that one can't trace the graph without lifting the pencil.
The second integral is continuous: you can trace this graph without lifting your pencil. The limit as $x\to 0$ is $0$, and it agrees with the value of the function there.
The fact that there is a corner at $0$ means something different: this function is not differentiable at $0$. (The slopes from the left and from the right don't match.)
And the third integral is differentiable: the slopes from the left and from the right now agree at $0$, so its graph has no corner. |
Euclid's Elements Book 1 Proposition 24 | There are indeed multiple cases here, and Euclid only considers one, as is his habit. See Heath's commentary, pp. 297 - 299. |
Rewrite a Lagrange function to Euler-Lagrange equation in polar coordinate | Not so clear: $q$ and $p$ are your generalized coordinates and momenta. You first have to define your Lagrangian as a function of $x$, or of $x$ and $y$, or whatever, and then you perform the coordinate change.
For example, the simplest Lagrangian is given by
$$\mathcal{L} = \frac{m}{2} v^2 = \frac{m}{2}(\dot{x}^2 + \dot{y}^2) $$
which would be your kinetic energy term. Assuming the potential is zero. Performing a coordinate change, you get
$$\mathcal{L} = \frac{m}{2}(\dot{R}^2 + R^2\dot{\theta}^2)$$
Final Remark
Having $p$ and $q$ means nothing by itself. You have to work out what $p$ and $q$ mean for the particular problem you're looking at. |
Show that if $n$ is not divisible by $2$ or by $3$, then $n^2-1$ is divisible by $24$. | $n^2-1=(n-1)(n+1)$
$n$ is not even so $n-1$ and $n+1$ are even.
Also $n=4t+1$ or $n=4t+3$; this means at least one of $n-1$ or $n+1$ is divisible by $4$.
$n$ is not $3k$ so at least one of $n-1$ or $n+1$ must be divisible by 3.
So $n^2-1$ has factors of $4$, $2$ (distinct from the $4$) and $3$; hence $24\mid n^2-1$.
Edit: I updated my post after arbautjc's correction in his comment. |
Getting a block diagonal matrix into a specific form with permutation matrices | What you seem to require here is the $(2k)\times(2k)$ perfect shuffle permutation,
$$\mathbf S=[\mathbf e_1,\mathbf e_{k+1},\mathbf e_2,\mathbf e_{k+2},\dots,\mathbf e_k,\mathbf e_{2k}]$$
where $\mathbf e_j$ is the $j$-th column of the identity matrix. (I previously talked about these permutation matrices here.) Your matrix is in fact very nearly a Golub-Kahan tridiagonal, except that it is skew-symmetric instead of being symmetric.
Try it out in Mathematica:
perfectShuffle[n_Integer?EvenQ] :=
IdentityMatrix[n][[All, Flatten[Transpose[Partition[Range[n], n/2]]]]]
(* block diagonal *) With[{n = 10},
SparseArray[{Band[{2, 1}] -> Riffle[ConstantArray[C, n/2], 0],
Band[{1, 2}] -> -Riffle[ConstantArray[C, n/2], 0]},
{n, n}]] // MatrixForm
(* shuffled matrix *) With[{n = 10},
perfectShuffle[n] .
SparseArray[{Band[{2, 1}] -> Riffle[ConstantArray[C, n/2], 0],
Band[{1, 2}] -> -Riffle[ConstantArray[C, n/2], 0]},
{n, n}].Transpose[perfectShuffle[n]]] // MatrixForm |
Fundamental group generators of null homology manifolds | If $\gamma$ is a loop whose representative as a cycle in $Z_1$ is a boundary, $\gamma$ is not necessarily nullhomotopic.
I think it's instructive to work through a simple example. Let's consider the figure eight $S^1 \vee S^1$, where the two simple loops are labelled $a$ and $b$, and study the composite loop $\gamma = aba^{-1}b^{-1}$. (This doesn't satisfy all your criteria: I don't know of an easily visualized space with perfect fundamental group.) This loop is not nullhomotopic, but it is mapped to the zero cycle $a + b - a - b = 0$, which is clearly a boundary. But there's no way to use the fact that $a + b - a - b$ is a boundary to construct a disk in $X$ that bounds $\gamma$. |
The best way to calculate a risk score based on driving conditions | Get a dataset with different observations of those weather conditions and a risk score for bad driving conditions, perhaps from a human assessor or from the weather service.
Then estimate a statistical model that predicts those risk scores based on weather conditions. It could be a Logit/Probit model if your risk assessment is just binary (bad driving conditions vs. good driving conditions), or an ordered Logit if you have more than two categories. OLS might also be used; OLS would also give a precise meaning to the "mathematically best way" - the statistical model would fit the data best in a least-squares sense.
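To make the OLS route concrete, here is a minimal sketch in Python/NumPy; the weather features and risk scores below are invented purely for illustration:

import numpy as np

# Hypothetical training data: columns are rain (mm/h), wind (m/s), temperature (C);
# y is a risk score from a human assessor, on a 0-1 scale.
X = np.array([[0.0,  2.0, 20.0],
              [5.0,  8.0,  3.0],
              [1.0,  4.0, 15.0],
              [8.0, 12.0, -2.0],
              [0.5,  3.0, 25.0]])
y = np.array([0.05, 0.60, 0.20, 0.90, 0.05])

A = np.hstack([np.ones((len(X), 1)), X])      # prepend an intercept column
beta, *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares coefficient estimates

new_day = np.array([1.0, 3.0, 6.0, 10.0])     # intercept, rain, wind, temperature
print(new_day @ beta)                         # predicted risk score for the new day

A real application would use many more observations and, for binary or ordered labels, a Logit/Probit fit instead of least squares. |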
Given $3$ non co-linear points in a plane there are at least $3$ lines. | It depends a lot on what axioms and definitions you use.
So there are 3 points. For every pair of points there exists at least a line connecting them. Actually it's exactly one line if the points are distinct, but usually infinitely many lines if the points are the same. The fact that two distinct points always define a line is often an axiom in a formal definition of planar geometry. So I'd only take the algebraic path you took if I had some specific algebraic definitions of what a point or a line is. Such definitions would need to address the fact that the same line can be described by different equations, namely multiples of one another, which easily leads to concepts like equivalence classes and homogeneous coordinates.
Now you have that there has to be a line for every pair of points. If two of these lines were the same, then the three points involved would be collinear. By problem statement they are not, so there have to be at least three lines. Note that this also covers the situation where two of the points are the same: in that case any line incident with one of them will be incident with the other as well, so by definition your points would be collinear as well. You can only ever have non-collinear points if they are distinct. |
Show that $J$ is diagonalizable and find an eigenbasis | I don't know what you've learned so far, but I'm guessing that you know how to deal with ordinary matrices. Given that $1$, $\cos x$, and $\sin x$ form a basis of $V$, you can represent any element $a+b\cos x+c\sin x$ of $V$ as the vector $\begin{pmatrix}a & b & c\end{pmatrix}^T$. To identify the matrix representing $J$, we can see what it does to the basis vectors $1$, $\cos x$, and $\sin x$:
$$J[1](x) = \int_0^\pi1\;dt=\pi$$
$$J[\cos x](x)=\int_0^\pi\cos(x-t)\;dt=2\sin x$$
$$J[\sin x](x) = \int_0^\pi \sin(x-t)\;dt = -2\cos x$$
Evidently, $J$ can be represented by the matrix:
$$J=\begin{pmatrix}\pi & 0 & 0\\0 & 0 & -2\\ 0 & 2 & 0\end{pmatrix}$$
Solving the equation $\det(J-\lambda I)=0$ yields the eigenvalues $\lambda = \pm 2i, \pi$. Using standard methods for finding eigenvectors of matrices, you should find that the following are eigenvalue/eigenfunction pairs:
$$\lambda_1 = \pi, \qquad f_1=\begin{pmatrix}1 \\0 \\0\end{pmatrix}=1$$
$$\lambda_2 = 2i, \qquad f_2 = \begin{pmatrix}0 \\ 1 \\ -i\end{pmatrix} = \cos x - i \sin x$$
$$\lambda_3=-2i,\qquad f_3=\begin{pmatrix}0 \\1\\i\end{pmatrix} = \cos x + i \sin x$$
and I'm sure you can take it from here...
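For the skeptical reader, here is a quick numerical confirmation in Python/NumPy (my addition, not part of the original answer):

import numpy as np

J = np.array([[np.pi, 0.0,  0.0],
              [0.0,   0.0, -2.0],
              [0.0,   2.0,  0.0]])

vals, vecs = np.linalg.eig(J)
print(vals)  # pi, 2j, -2j in some order

The eigenvectors returned are scalar multiples of $f_1$, $f_2$, $f_3$ above. |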
Eigenvectors calculation: a defective matrix, a zero matrix | Although it does not really matter, it is traditional (well, in the U.S.) to put the $1$ in the Jordan form above the diagonal. I have been noticing students lately getting to the Jordan form but failing to write things in the reverse( and actually useful) order.
$$
\left(
\begin{array}{rr}
0 & \frac{1}{\beta} \\
1 & 0
\end{array}
\right)
\left(
\begin{array}{rr}
3 & 0 \\
\beta & 3
\end{array}
\right)
\left(
\begin{array}{rr}
0 & 1 \\
\beta & 0
\end{array}
\right) =
\left(
\begin{array}{rr}
3 & 1 \\
0 & 3
\end{array}
\right)
$$
$$
\left(
\begin{array}{rr}
0 & 1 \\
\beta & 0
\end{array}
\right)
\left(
\begin{array}{rr}
3 & 1 \\
0 & 3
\end{array}
\right)
\left(
\begin{array}{rr}
0 & \frac{1}{\beta} \\
1 & 0
\end{array}
\right) =
\left(
\begin{array}{rr}
3 & 0 \\
\beta & 3
\end{array}
\right)
$$
Then the exponential of $Jt$ with
$$
J =
\left(
\begin{array}{rr}
3 & 1 \\
0 & 3
\end{array}
\right)
$$
is
$$
e^{3t} \; \;
\left(
\begin{array}{rr}
1 & t \\
0 & 1
\end{array}
\right)
$$
and you use the second matrix identity above to finish
$$
e^{At} =
e^{3t}
\left(
\begin{array}{rr}
0 & 1 \\
\beta & 0
\end{array}
\right)
\left(
\begin{array}{rr}
1 & t \\
0 & 1
\end{array}
\right)
\left(
\begin{array}{rr}
0 & \frac{1}{\beta} \\
1 & 0
\end{array}
\right) =
e^{3t}
\left(
\begin{array}{rr}
1 & 0 \\
\beta t & 1
\end{array}
\right) =
\left(
\begin{array}{rr}
e^{3t} & 0 \\
\beta t e^{3t}& e^{3t}
\end{array}
\right)
$$
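One can verify this matrix exponential numerically with SciPy (a sketch of mine; the values of $\beta$ and $t$ are arbitrary):

import numpy as np
from scipy.linalg import expm

beta, t = 2.5, 0.7
A = np.array([[3.0,  0.0],
              [beta, 3.0]])

direct = expm(A * t)
closed = np.exp(3*t) * np.array([[1.0,      0.0],
                                 [beta * t, 1.0]])
print(np.allclose(direct, closed))  # True

The two matrices agree to machine precision. |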
Difference between Celsius and Fahrenheit leads to unexpected observations. | If two scales have non-coincident zeros, the quantity differences will have transformation law that is different from quantity itself:
$$
y =ax+b,\qquad \text{but}\qquad \Delta y = y_2-y_1 = (ax_2+b)-(ax_1+b)=a(x_2-x_1)=a\Delta x
$$
For Fahrenheit and Celsius:
$$
F = 1.8C+32,\qquad \text{but}\qquad\Delta F = 1.8\Delta C
$$
so a difference of 10°F is a difference of about 5.56°C. |
Uniqueness and existence proof - differential equation | Hint: $h(y'(0))=y(0)=0$. Can you contradict this by choosing a specific $h$? |
probability based on past successful experience | If we assume that methods always have the same probability of success $p_1$ and $p_2$ respectively, then we can make some estimate of these (unknown to us) parameters based on empirical evidence. Maximum likelihood estimate will give us $p_1 = 0.7, \, p_2 = 0.6$ and that will be that.
Now comes the tricky part (spoiler: the answer to your question as it is won't change). Suppose that the second method had an $8/10$ success rate. The probability of this happening with $p_2 = 2/3$ would be $45\cdot2^8/3^{10} \approx 0.2$. In other words, while on average the second method would be better, there would be a significant chance of it being worse.
If all you care about is having your success rate as high as possible, simply choosing method with best success rate will give you what you want. However, if you wish to, say, get average success rate $0.75$ with least "chance of surprise", actual answer in this altered example would be a mixed strategy (using first strategy with probability $r_1$ and second with probability $r_2$). The higher you set your intended average rate, the greater becomes the dispersion (probability of getting significantly higher or lower than average).
Alternatively, you can try to get the best possible average result for the given deviation (in general case, you will get a mixed strategy as well). Exact means of computing the best mix in these cases are studied in modern portfolio theory. |
Well-ordered set countability | Suppose you are given an uncountable, well-ordered set $S$.
Isn't it possible to provide a bijection $f:\mathbb{N} \rightarrow S$
No. $\mathbb N$ is countable. You will end up with a set of elements $\{s_1,s_2,s_3,\dots\}$, but there will exist elements you have not covered. There is nothing in your definition that demands that you cover all of the elements.
An example (not a well ordered set, I know, but may still illustrate my point) is if you look at the set $$S=\left\{\frac12, \frac23, \frac34, \dots, \frac{n}{n+1},\dots\right\}\cup[1,2]$$
The procedure you described works on $S$, although $S$ is not well ordered. It takes $\frac12 = s_1$ as it is the least element. Then it takes $s_2=\frac23$ and so on. It produces $s_i = \frac{i}{i+1}$ which is an injection from $\mathbb N$ to $S$, but it does not cover the whole $S$.
Edit: In fact, you can even take the set $$T=\left\{\frac12, \frac23, \frac34, \dots, \frac{n}{n+1},\dots\right\}\cup\{1\},$$
which is well ordered and is even countable, but your procedure still does not produce a bijection from $\mathbb N$ to $T$. |
equilibrium and direction field | $a$ and $b$ are arbitrary positive real numbers. For your problem, we set $y' = 0$ and solve for $y$. As you said $y = 0$ and $y = \frac{a}{b}$. Based on the given graph, you see that at $y = 1$, the equilibrium point is unstable, whereas the equilibrium point at $y = 0$ is stable.
If $b$ is negative, then the direction of the field changes. Otherwise, if $b$ is positive, then we obtain the graph of the field as you have already shown (since you are suppose to sketch the field such that $a = b = 1$).
One determines the direction of the field either by checking the sign of $y'$ at test values of $y$ other than $y = 0$ and $y = \frac{a}{b}$, or by graphing $y'$ and reading off the direction of the field from its sign.
Algebra
It's clear that if you check $y = 2$, then $y' > 0$. If $y = -2$, then $y' > 0$. If $y = \frac{1}{2}$, then $y' < 0$. Thus, the field points up at $y < 0$ and $y > 1$ and points down at $0 < y < 1$.
Graphing
From the graph of $y'$ (e.g. plotted with Wolfram Alpha), we have the following observations:
Since at $y < 0$ and $y > 1$ we have positive values, the field points up.
Since at $0 < y < 1$ we have negative values, the field points down.
This is how one studies the directions of the field in "graph"-sense. |
About graph symmetric with respect to the origin. | Think of it this way, which is intuitive. The description is equivalent to say that you fold the plane with respect to the vertical axis and then fold the "half plane" with respect to the horizontal axis; then the north-east graph coincides with the south-west graph. |
Choose X s.t. $\Pr[X \geq k] = 1/k$ | Generally speaking, $$\Pr[X \ge k] \ne \Pr[X = k],$$ unless $\Pr[X > k] = 0$. So your use of a uniform distribution does not satisfy the stated criterion.
Let's consider what the condition $$\Pr[X \ge k] = 1/k$$ would imply about an integer-valued random variable $X$. Clearly, $X \ge 1$, so $X$ is a positive integer. Then $$\Pr[X = k] = \Pr[X \ge k] - \Pr[X \ge k+1] = \frac{1}{k} - \frac{1}{k+1} = \frac{1}{k(k+1)},$$ and we specifically have $$\begin{align*}
\Pr[X = 1] &= \frac{1}{2}, \\ \Pr[X = 2] &= \frac{1}{6}, \\ \Pr[X = 3] &= \frac{1}{12}, \\ \Pr[X = 4] &= \frac{1}{20}, \end{align*}$$ and so forth.
If the goal is to simulate this random variable--that is to say, create an algorithm that will produce realizations of $X$ that follow this distribution--then one simple method is to generate a continuous uniform random variable $U$ between $0$ and $1$ (e.g., some function like rand()). Then take the reciprocal of this number to get $1/U$, and round downward to the nearest integer. So for example, floor(1/rand()) is what you might use on a computer. This will give you the desired realization of $X$. I leave it to you as an exercise to show that this actually works.
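Here is that exercise carried out empirically (a Python sketch I am adding; I use $1-U$ instead of $U$ to avoid a division by zero, since random() can return $0$ but never $1$):

import random
from math import floor

def draw():
    """floor(1/U) for U uniform on (0,1]; realizes P(X >= k) = 1/k."""
    return floor(1 / (1 - random.random()))

samples = [draw() for _ in range(1_000_000)]
for k in (1, 2, 3, 4, 10):
    empirical = sum(s >= k for s in samples) / len(samples)
    print(k, empirical, 1 / k)

The empirical tail probabilities line up with $1/k$ up to simulation noise. |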
How to find the sparsest vector in a given subspace of $\mathbb{F}_2^n$ | For the purposes of this problem, we restrict ourselves to linear codes presented in terms of its generator (or equivalently, the parity check) matrix. Then Alexander Vardy (1997) showed that computing the minimum distance of a linear code exactly is NP-hard. [The decision version of this problem is: given a linear code $C$ and a bound $k$, does there exist a nonzero codeword $c$ in $C$ of weight $k$ or less? This version is NP-complete.]
There have been several improvements to this basic result -- it's now known that it's also quite hard to (a.) approximate the distance, (b.) decode a received word under the promise that there is a unique codeword close to it, etc. Unfortunately, I don't quite have the time right now to detail all the known developments, but this seems to be a good place to start: http://cseweb.ucsd.edu/~daniele/Research/CodeComp.html . I will try to expand this answer later.
Reference:
A. Vardy, The Intractability of Computing the Minimum Distance of a Code - IEEE Transactions on Information Theory, 1997. |
Probability of draws at random with replacement of five tickets | The idea is right, but note that $0$ is not positive. So the probability we get a positive on any draw is $\frac{2}{5}$.
So if $X_i=1$ if we get a positive on the $i$-th draw, with $X_i=0$ otherwise, then $E(X_i)=\frac{2}{5}$. Now use the linearity of expectation to conclude that the expected number of positives in $400$ draws is $(400)\left(\frac{2}{5} \right)$. |
Integration problem, existence of a constant in a inclusion between integrable functions and essentially bounded functions | The rough idea is that if there are sets of arbitrarily small measure, then you can define unbounded integrable functions by making them get larger and larger but on subsets that are more quickly getting smaller and smaller. This is why $L^1$ is not contained in $L^\infty$ in the case of $(0,1)$ with Lebesgue measure, for example.
I'll try to give you a little more of the idea without giving everything away. You can prove this with contraposition, so your assumption will be that for all $c\gt0$ there exists a Borel set $A$ with $0\lt\mu(A)\lt c$. Use this to construct an infinite sequence of disjoint Borel sets with positive measure converging to zero, say at a rate faster than $2^{-n}$. Then define a function whose value on the $n^\text{th}$ set is $n$.
All the work is in constructing the sequence of sets. I'd rather not spoil your fun by saying more, unless you get stuck along the way and have a specific question.
December 15 update: To enjoy the new spoiler feature, I decided to post a sketch of one way to construct the sequence in the box below.
Suppose that for all $c\gt0$ there exists a Borel set $A$ with $0\lt\mu(A)\lt c$. Start with any Borel set $A_0$ such that $0\lt\mu(A_0)\lt \infty$. (Technically the finiteness is redundant here, because $\mu$ is assumed to be finite.) For each positive integer $n$, the hypothesis guarantees that given $A_{n-1}$ with positive measure, there is a Borel set $A_n$ such that $0\lt\mu(A_n)\lt \frac{1}{3}\mu(A_{n-1})$. Do this for every positive integer (countable AC). Then define a sequence of Borel sets $B_0,B_1,\ldots$ by $B_n=A_n\setminus\cup_{k\gt n}A_k$. The inequality $\mu(A_k)\lt 3^{n-k}\mu(A_n)$ for all $k\gt n$ implies that each $B_n$ has positive measure (at least half of $\mu(A_n)$) and that $\sum_{n\geq 0}n\mu(B_n)$ converges. If $m\gt n$, then $B_m$ is contained in $A_m$, while $B_n$ is contained in $A_n\setminus A_m$, so $B_m$ and $B_n$ are disjoint. The function $\sum_{n\geq1}n\chi_{B_n}$ is thus in $L^1\setminus L^\infty$. |
Composing rotations and translations given in local coordinate system to determine a time series of global orientations and positions | Third take. Stop changing the definitions! :)
We have a sequence of cumulative rotation unit quaternions $R_i$ and cumulative translations $T_i$ in each local coordinate system, with $Q_i$ describing the rotation from the global coordinate system to local coordinate system at step $i$, and $P_i$ being the sum total translations up to and including step $i$ in the global coordinate system:
$$Q_i = Q_{i-1} R_{i-1} \tag{1a}\label{None1a}$$
or equivalently
$$Q_i = Q_{i+1} R_i^{-1} \tag{1b}\label{None1b}$$
and
$$P_i = P_{i-1} + Q_{i-1} T_{i-1} Q_{i-1}^{-1} \tag{2a}\label{None2a}$$
or equivalently
$$P_i = P_{i+1} - Q_i T_i Q_i^{-1} \tag{2b}\label{None2b}$$
We can use $\eqref{None1a}$ or $\eqref{None1b}$ to construct $Q_0^\prime, Q_1^\prime, \dots, Q_{N-1}^\prime, Q_{N}^\prime$. With $\eqref{None1a}$, set $Q_0^\prime = 1$ and iterate $i = 1, 2, \dots, N-1, N$. With $\eqref{None1b}$, set $Q_N^\prime = 1$ and iterate $i = N-1, N-2, \dots, 1, 0$.
Then, fix $Q_k = 1$ (where $0 \le k \le N$), by iterating
$$Q_i = Q_k^{\prime -1} Q_i^\prime, \quad \forall i \tag{3}\label{None3}$$
After you have the orientations, you can calculate the total translations $P_i$ using
$$P_0^\prime = \vec{0}, \quad P_i^\prime = P_{i-1}^\prime + Q_{i-1} T_{i-1} Q_{i-1}^{-1} \tag{4a}\label{None4a}$$
or
$$P_N^\prime = \vec{0}, \quad P_i^\prime = P_{i+1}^\prime - Q_i T_i Q_i^{-1} \tag{4b}\label{None4b}$$
which will yield the same translations except for a constant difference. Again, to fix $P_k = \vec{0}$ for some $0 \le k \le N$, iterate
$$P_i = P_i^\prime - P_k^\prime, \quad \forall i \tag{5}\label{None5}$$
As you can clearly see, this has $O(N)$ time complexity.
If $k = 0$, use the $(a)$ methods; and if $k = N$, the $(b)$ methods. This way the fixup is an identity operation, and can be skipped.
Note that with the fixup pass, both $(a)$ and $(b)$ work for all $k$, including $k = 0$ and $k = N$. Choosing one over the other is just an optimization.
Here is an example Python3 program, that implements two helper classes, Vector and Versor, and implements the above logic. (If you save this as example.py, you can use pydoc3 example to see the API it provides. It is in Public Domain, and you can use the classes in your own scripts if you put it in the same directory, and add from example import Vector, Versor.)
"""3D Euclidean vector class 'Vector', and unit quaternion class 'Versor'
-- a playground for experimenting with rotations in 3D"""
# SPDX-License-Identifier: CC0-1.0
from math import sqrt, pi, sin, cos, atan2
class Vector(tuple):
"""3D Cartesian vector type"""
FORMAT = "(%8.5f,%8.5f,%8.5f)"
def __new__(cls, x, y, z):
"""Create a new vector"""
return tuple.__new__(cls, (float(x), float(y), float(z)))
@property
def x(self):
"""x coordinate"""
return self[0]
@property
def y(self):
"""y coordinate"""
return self[1]
@property
def z(self):
"""z coordinate"""
return self[2]
@property
def norm(self):
"""Euclidean length"""
return sqrt(self[0]*self[0] + self[1]*self[1] + self[2]*self[2])
@property
def normsqr(self):
"""Euclidean length squared"""
return self[0]*self[0] + self[1]*self[1] + self[2]*self[2]
@property
def unit_vector(self):
"""Scaled to unit length"""
n = sqrt(self[0]*self[0] + self[1]*self[1] + self[2]*self[2])
if n > 0:
return tuple.__new__(Vector, (self[0]/n, self[1]/n, self[2]/n))
else:
return tuple.__new__(Vector, (0, 0, 0))
def perpendicular_to(self, other):
"""Part perpendicular to another vector"""
nn = float(other[0])**2 + float(other[1])**2 + float(other[2])**2
if nn > 0:
d = (self[0]*other[0] + self[1]*other[1] + self[2]*other[2]) / nn
return tuple.__new__(Vector, (self[0] - d*other[0],
self[1] - d*other[1],
self[2] - d*other[2]))
else:
return tuple.__new__(Vector, (0, 0, 0))
def transform(self, matrix, translate=(0,0,0), pretranslate=(0,0,0)):
"""Transform this point by rotation matrix and translation"""
curr = Vector(self[0]+pretranslate[0], self[1]+pretranslate[1], self[2]+pretranslate[2])
        post = Vector(translate[0], translate[1], translate[2])
return Vector(matrix[0][0]*curr[0] + matrix[0][1]*curr[1] + matrix[0][2]*curr[2] + post[0],
matrix[1][0]*curr[0] + matrix[1][1]*curr[1] + matrix[1][2]*curr[2] + post[1],
matrix[2][0]*curr[0] + matrix[2][1]*curr[1] + matrix[2][2]*curr[2] + post[2])
def __str__(self):
"""Conversion to string"""
return Vector.FORMAT % self
def __abs__(self):
"""Absolute value is the Euclidean length"""
return sqrt(self[0]*self[0] + self[1]*self[1] + self[2]*self[2])
def __neg__(self):
"""Opposite vector, negation"""
return tuple.__new__(Vector, (-self[0], -self[1], -self[2]))
def __bool__(self):
"""Nonzero vectors are considered True, zero vectors False"""
return (self[0]*self[0] + self[1]*self[1] + self[2]*self[2] > 0)
def __add__(self, other):
"""Vector addition"""
return tuple.__new__(Vector, (self[0]+other[0], self[1]+other[1], self[2]+other[2]))
def __radd__(self, other):
"""Vector addition"""
return tuple.__new__(Vector, (other[0]+self[0], other[1]+self[1], other[2]+self[2]))
def __sub__(self, other):
"""Vector subtraction"""
return tuple.__new__(Vector, (self[0]-other[0], self[1]-other[1], self[2]-other[2]))
    def __rsub__(self, other):
        """Vector subtraction (reflected operand order)"""
        return tuple.__new__(Vector, (other[0]-self[0], other[1]-self[1], other[2]-self[2]))
def __mul__(self, scalar):
if isinstance(scalar, (int, float)):
return tuple.__new__(Vector, (self[0]*scalar, self[1]*scalar, self[2]*scalar))
else:
return NotImplemented
def __rmul__(self, scalar):
if isinstance(scalar, (int, float)):
return tuple.__new__(Vector, (scalar*self[0], scalar*self[1], scalar*self[2]))
else:
return NotImplemented
def __truediv__(self, scalar):
if isinstance(scalar, (int, float)):
return tuple.__new__(Vector, (self[0]/scalar, self[1]/scalar, self[2]/scalar))
else:
return NotImplemented
def __rtruediv__(self, ignored):
return NotImplemented
def __or__(self, other):
"""Dot product, a | b"""
if isinstance(other, (list, tuple)) and len(other) == 3:
return self[0]*other[0] + self[1]*other[1] + self[2]*other[2]
else:
return NotImplemented
def dot(self, other):
"""Dot product"""
return self[0]*other[0] + self[1]*other[1] + self[2]*other[2]
def __xor__(self, other):
"""Cross product, a ^ b"""
if isinstance(other, (list, tuple)) and len(other) == 3:
return tuple.__new__(Vector, ( self[1]*other[2] - self[2]*other[1],
self[2]*other[0] - self[0]*other[2],
self[0]*other[1] - self[1]*other[0] ))
else:
return NotImplemented
def cross(self, other):
"""Cross product"""
return tuple.__new__(Vector, ( self[1]*other[2] - self[2]*other[1],
self[2]*other[0] - self[0]*other[2],
self[0]*other[1] - self[1]*other[0] ))
class Versor(tuple):
"""Unit quaternion type describing an orientation or a rotation"""
FORMAT = "(%8.5f;%8.5f,%8.5f,%8.5f)"
def __new__(cls, w, x, y, z):
"""Create a new versor"""
w = float(w)
x = float(x)
y = float(y)
z = float(z)
n = sqrt(w*w + x*x + y*y + z*z)
if n == 0:
w = 1
x = 0
y = 0
z = 0
elif n != 1:
w /= n
x /= n
y /= n
z /= n
return tuple.__new__(cls, (w, x, y, z))
@classmethod
def from_axis_angle(cls, axis, angle):
"""Create a versor from an axis and angle in degrees"""
x = float(axis[0])
y = float(axis[1])
z = float(axis[2])
h = angle * pi / 360.0
n = sqrt(x*x + y*y + z*z)
if n == 0:
return tuple.__new__(cls, (1, 0, 0, 0))
s, c = sin(h), cos(h)
return tuple.__new__(cls, (c, s*x/n, s*y/n, s*z/n))
def __str__(self):
"""Conversion to string"""
return Versor.FORMAT % self
def __abs__(self):
return sqrt(self[0]*self[0] + self[1]*self[1] + self[2]*self[2])
def __neg__(self):
return Versor(-self[0], -self[1], -self[2], -self[3])
def __mul__(self, other):
if isinstance(other, (int, float)):
return (self[0]*other, self[1]*other, self[2]*other, self[3]*other)
elif isinstance(other, (list, tuple)) and len(other) == 4:
return (self[0]*other[0] - self[1]*other[1] - self[2]*other[2] - self[3]*other[3],
self[0]*other[1] + self[1]*other[0] + self[2]*other[3] - self[3]*other[2],
self[0]*other[2] - self[1]*other[3] + self[2]*other[0] + self[3]*other[1],
self[0]*other[3] + self[1]*other[2] - self[2]*other[1] + self[3]*other[0])
else:
return NotImplemented
def __rmul__(self, other):
if isinstance(other, (int, float)):
return (other*self[0], other*self[1], other*self[2], other*self[3])
elif isinstance(other, (list, tuple)) and len(other) == 4:
return (other[0]*self[0] - other[1]*self[1] - other[2]*self[2] - other[3]*self[3],
other[0]*self[1] + other[1]*self[0] + other[2]*self[3] - other[3]*self[2],
other[0]*self[2] - other[1]*self[3] + other[2]*self[0] + other[3]*self[1],
other[0]*self[3] + other[1]*self[2] - other[2]*self[1] + other[3]*self[0])
else:
return NotImplemented
@property
def w(self):
"""Versor real component"""
return self[0]
@property
def x(self):
"""Versor vector x component"""
return self[1]
@property
def y(self):
"""Versor vector y component"""
return self[2]
@property
def z(self):
"""Versor vector z component"""
return self[3]
@property
def axis(self):
"""Rotation unit axis vector"""
n = sqrt(self[1]*self[1] + self[2]*self[2] + self[3]*self[3])
if n > 0:
return Vector(self[1]/n, self[2]/n, self[3]/n)
else:
return Vector(0, 0, 0)
@property
def angle(self):
"""Rotation angle, in degrees"""
n = sqrt(self[1]*self[1] + self[2]*self[2] + self[3]*self[3])
        if n > 0:
            return atan2(n, self[0]) * 360.0 / pi
        else:
            return 0.0
@property
def normalized(self):
"""Versor normalized to unit length"""
n = sqrt(self[0]*self[0] + self[1]*self[1] + self[2]*self[2] + self[3]*self[3])
if n == 0:
return tuple.__new__(Versor, (1, 0, 0, 0))
elif n != 1:
return tuple.__new__(Versor, (self[0]/n, self[1]/n, self[2]/n, self[3]/n))
else:
return self
@property
def inverse(self):
"""Inverse of this versor"""
return tuple.__new__(Versor, (self[0], -self[1], -self[2], -self[3]))
@property
def matrix(self):
"""Rotation matrix corresponding to this versor"""
c01 = 2*self[0]*self[1]
c02 = 2*self[0]*self[2]
c03 = 2*self[0]*self[3]
c11 = 2*self[1]*self[1]
c12 = 2*self[1]*self[2]
c13 = 2*self[1]*self[3]
c22 = 2*self[2]*self[2]
c23 = 2*self[2]*self[3]
c33 = 2*self[3]*self[3]
return ( Vector(1-c22-c33, c12-c03, c13+c02),
Vector(c12+c03, 1-c11-c33, c23-c01),
Vector(c13-c02, c23+c01, 1-c11-c22) )
def rotate(self, other):
"""For vector v, q.rotate(v) calculates q v q^-1 .
For quaternion p, q.rotate(p) calculates q p."""
if isinstance(other, (list, tuple)) and len(other) == 3:
x = float(other[0])
y = float(other[1])
z = float(other[2])
c00 = self[0]*self[0]
c11 = self[1]*self[1]
c22 = self[2]*self[2]
c33 = self[3]*self[3]
c01 = 2*self[0]*self[1]
c02 = 2*self[0]*self[2]
c03 = 2*self[0]*self[3]
c12 = 2*self[1]*self[2]
c13 = 2*self[1]*self[3]
c23 = 2*self[2]*self[3]
return Vector(x*(c00+c11-c22-c33) + y*(c12-c03) + z*(c02+c13),
x*(c03+c12) + y*(c00-c11+c22-c33) + z*(c23-c01),
x*(c13-c02) + y*(c01+c23) + z*(c00-c11-c22+c33))
elif isinstance(other, (list, tuple)) and len(other) == 4:
return Versor(self[0]*other[0] - self[1]*other[1] - self[2]*other[2] - self[3]*other[3],
self[0]*other[1] + self[1]*other[0] + self[2]*other[3] - self[3]*other[2],
self[0]*other[2] - self[1]*other[3] + self[2]*other[0] + self[3]*other[1],
self[0]*other[3] + self[1]*other[2] - self[2]*other[1] + self[3]*other[0])
else:
raise ValueError("Cannot rotate a %s" % str(type(other)))
def rotated_by(self, other):
"""For quaternion p, q.rotated_by(p) calculates p q."""
w = other[0]*self[0] - other[1]*self[1] - other[2]*self[2] - other[3]*self[3]
x = other[0]*self[1] + other[1]*self[0] + other[2]*self[3] - other[3]*self[2]
y = other[0]*self[2] - other[1]*self[3] + other[2]*self[0] + other[3]*self[1]
z = other[0]*self[3] + other[1]*self[2] - other[2]*self[1] + other[3]*self[0]
n = sqrt(w*w + x*x + y*y + z*z)
if n == 0:
return tuple.__new__(Versor, (1, 0, 0, 0))
elif n != 1:
w /= n
x /= n
y /= n
z /= n
return tuple.__new__(Versor, (w, x, y, z))
def transform(self, points, translate=(0, 0, 0), pretranslate=(0,0,0)):
result = []
pre = Vector(pretranslate[0], pretranslate[1], pretranslate[2])
post = Vector(translate[0], translate[1], translate[2])
for p in points:
            result.append(self.rotate(Vector(p[0], p[1], p[2]) + pre) + post)
return result
if __name__ == '__main__':
from random import Random
rng = Random()
N = 10
# Cumulative inverse rotations and local translations
R = []
T = []
for i in range(0, N):
while True:
x = rng.uniform(-1,+1)
y = rng.uniform(-1,+1)
z = rng.uniform(-1,+1)
n = sqrt(x*x + y*y + z*z)
if n > 0.1 and n < 1.0:
break
R.append(Versor.from_axis_angle(Vector(x, y, z), rng.uniform(-180, 180)))
T.append(Vector(rng.uniform(-2,+2), rng.uniform(-2,+2), rng.uniform(-2,+2)))
n = N - 1
# Valid range for k is 0..N, inclusive.
k = round(rng.uniform(-0.49, N+0.49))
Qa = [ None ]*(N+1)
Qb = [ None ]*(N+1)
Pa = [ None ]*(N+1)
Pb = [ None ]*(N+1)
#
# Orientation
#
# a) Forwards
Qa[0] = Versor(1,0,0,0)
for i in range(1, N+1): # i = 1, 2, ..., N-1, N.
Qa[i] = Qa[i-1].rotate(R[i-1])
# b) Backwards
Qb[N] = Versor(1,0,0,0)
for i in range(N-1,-1,-1): # i = N-1, N-2, ..., 1, 0.
Qb[i] = Qb[i+1].rotate(R[i].inverse)
print("Orientation quaternions before fixup pass:")
for i in range(0, N+1): # i = 0, 1, ..., N-1, N.
print("%s : %s" % (str(Qa[i]), str(Qb[i])))
# Orientation fixup.
QaFix = Qa[k].inverse
QbFix = Qb[k].inverse
for i in range(0, N+1): # i = 0, 1, ..., N-1, N.
Qa[i] = QaFix.rotate(Qa[i])
Qb[i] = QbFix.rotate(Qb[i])
#
# Position
#
# a) Forwards
Pa[0] = Vector(0,0,0)
for i in range(1, N+1): # i = 1, 2, ..., N-1, N.
Pa[i] = Pa[i-1] + Qa[i-1].rotate(T[i-1])
# b) Backwards
Pb[N] = Vector(0,0,0)
for i in range(N-1,-1,-1): # i = N-1, N-2, ..., 1, 0.
Pb[i] = Pb[i+1] - Qb[i].rotate(T[i])
# Position fixup.
PaFix = Pa[k]
PbFix = Pb[k]
for i in range(0, N+1): # i = 0, 1, ..., N-1, N.
Pa[i] = Pa[i] - PaFix
Pb[i] = Pb[i] - PbFix
print("Orientation quaternions after fixup pass for k=%d:" % k)
qerr = [ 0, 0, 0, 0 ]
    perr = [ 0, 0, 0 ]
for i in range(0, N+1): # i = 0, 1, ..., N-1, N.
print("%s = %s %s = %s" % (str(Qa[i]), str(Qb[i]), str(Pa[i]), str(Pb[i])))
qerr = [ max(qerr[0], abs(Qa[i][0] - Qb[i][0])),
max(qerr[1], abs(Qa[i][1] - Qb[i][1])),
max(qerr[2], abs(Qa[i][2] - Qb[i][2])),
max(qerr[3], abs(Qa[i][3] - Qb[i][3])) ]
        perr = [ max(perr[0], abs(Pa[i][0] - Pb[i][0])),
                 max(perr[1], abs(Pa[i][1] - Pb[i][1])),
                 max(perr[2], abs(Pa[i][2] - Pb[i][2])) ]
print("Max errors: %.6f %.6f %.6f %.6f %.6f %.6f %.6f" % (*qerr, *perr))
When run, the program generates ten rotations and translations, and uses the two iteration directions to solve $Q_i$ and $P_i$ with a random $k$.
The output contains first $Q_a^\prime$ and $Q_b^\prime$ before the fixup pass, then $Q_a$, $Q_b$, $P_a$, and $P_b$. If this approach works, then $Q_a = Q_b$ and $P_a = P_b$, with a selected row ($k = 0$ being the first) having $Q_a = Q_b = 1$ and $P_a = P_b = \vec{0}$.
The last line reports the maximum component-wise differences between each pair of $Q_a$, $Q_b$ and $P_a$, $P_b$.
Here is the output from an example run:
Orientation quaternions before fixup pass:
( 1.00000; 0.00000, 0.00000, 0.00000) : ( 0.77039;-0.19139,-0.58682,-0.15973)
( 0.41966;-0.19167,-0.83338, 0.30435) : (-0.15381;-0.53969,-0.79943, 0.21446)
( 0.69413; 0.48687,-0.52190, 0.09363) : ( 0.33662; 0.10393,-0.86924, 0.34685)
( 0.77771; 0.49771,-0.37579,-0.07897) : ( 0.46126; 0.22090,-0.84049, 0.17893)
( 0.92556; 0.28991,-0.23517,-0.06307) : ( 0.62045; 0.04565,-0.78269, 0.01871)
(-0.24442; 0.33496,-0.88851,-0.19650) : (-0.67698; 0.27822,-0.63217, 0.25427)
( 0.40926; 0.48004,-0.01805,-0.77572) : ( 0.27267; 0.74382,-0.47920,-0.37783)
(-0.34109; 0.24307,-0.89218,-0.16908) : (-0.76681; 0.20925,-0.55835, 0.23762)
(-0.48096; 0.51707, 0.30692,-0.63805) : (-0.19338; 0.91384, 0.31398,-0.17004)
( 0.45215; 0.51974, 0.68178, 0.24620) : ( 0.88721; 0.27829, 0.22401, 0.29195)
( 0.77039; 0.19139, 0.58682, 0.15973) : ( 1.00000; 0.00000, 0.00000, 0.00000)
Orientation quaternions after fixup pass for k=5:
(-0.24442;-0.33496, 0.88851, 0.19650) = (-0.24442;-0.33496, 0.88851, 0.19650) (-0.36075, 0.15348,-3.62981) = (-0.36075, 0.15348,-3.62981)
( 0.51388; 0.34046, 0.64085, 0.45752) = ( 0.51388; 0.34046, 0.64085, 0.45752) ( 0.47612,-0.59985,-2.00515) = ( 0.47612,-0.59985,-2.00515)
( 0.43873;-0.16576, 0.87133,-0.14426) = ( 0.43873;-0.16576, 0.87133,-0.14426) ( 0.18914,-0.35767,-1.86871) = ( 0.18914,-0.35767,-1.86871)
( 0.32603;-0.37848, 0.85420,-0.14422) = ( 0.32603;-0.37848, 0.85420,-0.14422) ( 1.60830,-2.51529,-2.32274) = ( 1.60830,-2.51529,-2.32274)
( 0.09222;-0.39072, 0.91569, 0.01848) = ( 0.09222;-0.39072, 0.91569, 0.01848) ( 0.61687,-1.03912,-1.88966) = ( 0.61687,-1.03912,-1.88966)
( 1.00000; 0.00000, 0.00000,-0.00000) = ( 1.00000; 0.00000,-0.00000, 0.00000) ( 0.00000, 0.00000, 0.00000) = ( 0.00000, 0.00000, 0.00000)
( 0.22923;-0.94011, 0.20253,-0.15045) = ( 0.22923;-0.94011, 0.20253,-0.15045) ( 1.83789,-0.53006,-0.60693) = ( 1.83789,-0.53006,-0.60693)
( 0.99072; 0.07993,-0.09386, 0.05718) = ( 0.99072; 0.07993,-0.09386, 0.05718) ( 2.73874,-2.16200,-1.44103) = ( 2.73874,-2.16200,-1.44103)
( 0.14344;-0.59250,-0.61448,-0.50078) = ( 0.14344;-0.59250,-0.61448,-0.50078) ( 2.76191,-1.35384,-0.52793) = ( 2.76191,-1.35384,-0.52793)
(-0.59057;-0.19371, 0.41969,-0.66149) = (-0.59057;-0.19371, 0.41969,-0.66149) ( 4.10824,-0.82590,-2.47648) = ( 4.10824,-0.82590,-2.47648)
(-0.67698;-0.27822, 0.63217,-0.25427) = (-0.67698;-0.27822, 0.63217,-0.25427) ( 5.38527,-3.28029,-2.42548) = ( 5.38527,-3.28029,-2.42548)
Max errors: 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000
Hopefully this suffices as a numerical proof that this approach works. |
Extension of a bounded linear map | We start out with the assumption that $K$ is Hilbert, which includes completeness in its definition. Therefore $K$ is already a closed convex subset of $H$ and step 1 is not needed. Just compose projection of $H$ onto $K$ with $T$ to get the desired map.
$$H \stackrel{P_K}{\to} K\stackrel{T}{\to} X$$ |
Construct a regular expression | If you have access to a complement operator $-$, we can implement Daniel Schepler's suggestion:
$$-(-(b^* ab^* ab^*)^* + -(a^* ba^* ba^*)^*).$$
Each inner parenthesized expression says to look for two $a$'s separated by (any number of) $b$'s, or vice versa. Putting the stars around these allows as many such pairs as you want. The minus signs can be eliminated using De Morgan's laws if you have access to an AND operator.
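The complement operator is not available in most practical regex engines, but the De Morgan form can be checked directly: a string is in the language exactly when it matches both star expressions. A Python sketch of mine:

import re

even_a = re.compile(r'(b*ab*ab*)*')  # even number of a's
even_b = re.compile(r'(a*ba*ba*)*')  # even number of b's

def in_language(s):
    # AND of the two patterns, i.e. the double-complement expression above
    return bool(even_a.fullmatch(s)) and bool(even_b.fullmatch(s))

for s in ('', 'ab', 'aabb', 'abab', 'aab'):
    print(repr(s), in_language(s))

Only the strings with an even number of $a$'s and an even number of $b$'s are accepted. |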
Showing that $X=\{0,1\}\times \mathbb N$ and $Y=\mathbb N\times \{0,1\}$ have different order type | You are onto a good idea. Here is one way to make it happen.
Let $f:X\to Y$ be an order-preserving bijection, and consider this special element $(1,1)\in X$, which you already singled out. Let $f(1,1)=(a,b)$. Then $(a,b)$ has an immediate predecessor $(c,d)$.
Clearly, since $f$ is order-preserving, we have $f^{-1}(c,d)<(1,1)$. This means that there is an element $(0,e)\in X$ with $f^{-1}(c,d)<(0,e)<(1,1)$. Now applying $f$ to this inequality gives us a contradiction. |
G is a cyclic group of cardinality of a power of a prime number | If $\lvert G\rvert$ has two prime divisors $p$ and $q$, there exist two elements with orders $p$ and $q$ by Cauchy's theorem, and none of them is contained in the other. Thus $\lvert G\rvert$ must be a prime power $p^n$.
Let $a\in G$ be an element with maximal order. For any other element $b\in G$, either $b\in\langle a\rangle$ or $a\in\langle b\rangle$ by hypothesis. The latter case implies $\langle a\rangle=\langle b\rangle$ by the maximality of the order of $a$. In both cases we can assert that $b\in\langle a\rangle$, thereby proving $a$ is a generator of $G$. |