title | upvoted_answer
---|---
How to prove Schwarz reflection principle? | Since $f$ is continuous and non-vanishing on the compact set $\bar D$, there exists $M>0$ such that $|f(z)|>M$ for all $z \in \bar D$.
Define $g(z)=f(z)$ for $z \in \bar D$ and $g(z)=\frac{1}{\overline{f(1/\bar z)}}$ for $z \in \mathbb{C}-\bar D$. Note that if $z \in \mathbb{C}-\bar D$, then $\frac{1}{\bar z} \in D$.
We claim that $g$ is bounded and entire; by Liouville's theorem it is then a constant function.
You can easily check that $|g|$ is bounded, and that $g$ is holomorphic in $D$ and in $\mathbb{C}-\bar D$.
It suffices to show that $g$ is holomorphic across the boundary ($|z|=1$).
Apply Morera's theorem! |
Integer coefficients in a polynomial function | There is no $r(x)$ meeting the required conditions.
Suppose there were. Clearly its coefficients would have gcd 1 (because $r(1)=-1$), so it would be "primitive". By Gauss's polynomial lemma, $(3x-2)^8$ must be a factor of $r(x)$. Similarly $(x+1)^2$ and $(x+2)^4$. Hence we must be able to write it as $q(x)(x+1)^2(3x-2)^8(x+2)^4$ where $q(x)$ has integer coefficients. But then $r(1)=-1$ implies $q(1)=-\frac{1}{324}$, which is impossible.
To spell out the first step in more detail: the lemma states that if $r(x)$ is a (non-constant) primitive polynomial with integer coefficients which is reducible over $\mathbb{Q}$, then it is also reducible over $\mathbb{Z}$. We know that $r(x)$ has the factor $(x-\frac{2}{3})^8$ in $\mathbb{Q}[x]$, so we keep taking out factors until we are forced to take out that factor in $\mathbb{Z}[x]$ as $(3x-2)^8$. |
Closed form for the integral $\int_0^\infty \frac{dx}{\sqrt{(x+a)(x+b)(x+c)(x+d)}}$ | Hint:
Assume that $0<a<b<c<d$. Using the magical linear-fractional transformation,
$$\frac{\left(d-b\right)\left(x+a\right)}{\left(d-a\right)\left(x+b\right)}=t,$$
we obtain:
$$\begin{align}
R{\left(a,b,c,d\right)}
&=\int_{0}^{\infty}\frac{\mathrm{d}x}{\sqrt{\left(x+a\right)\left(x+b\right)\left(x+c\right)\left(x+d\right)}}\\
&=\small{\int_{\frac{a\left(d-b\right)}{b\left(d-a\right)}}^{\frac{d-b}{d-a}}\frac{\mathrm{d}t}{\sqrt{\frac{\left(d-a\right)\left(b-a\right)t}{\left(d-b\right)-\left(d-a\right)t}\cdot\frac{\left(d-b\right)\left(b-a\right)}{\left(d-b\right)-\left(d-a\right)t}\cdot\frac{\left(d-a\right)\left(d-b\right)\left(1-t\right)\left[\left(d-b\right)\left(c-a\right)-\left(d-a\right)\left(c-b\right)t\right]}{\left[\left(d-b\right)-\left(d-a\right)t\right]^{2}}}}\cdot\frac{\left(d-a\right)\left(d-b\right)\left(b-a\right)}{\left[\left(d-b\right)-\left(d-a\right)t\right]^{2}}}\\
&=\int_{\frac{a\left(d-b\right)}{b\left(d-a\right)}}^{\frac{d-b}{d-a}}\frac{\mathrm{d}t}{\sqrt{t\left(1-t\right)\left[\left(d-b\right)\left(c-a\right)-\left(d-a\right)\left(c-b\right)t\right]}}\\
&=\frac{1}{\sqrt{\left(d-b\right)\left(c-a\right)}}\int_{\frac{a\left(d-b\right)}{b\left(d-a\right)}}^{\frac{d-b}{d-a}}\frac{\mathrm{d}t}{\sqrt{t\left(1-t\right)\left[1-\frac{\left(d-a\right)\left(c-b\right)}{\left(d-b\right)\left(c-a\right)}t\right]}}.\\
\end{align}$$
The rest is pretty straightforward: after the substitution $t=\sin^2\varphi$, the last integral becomes an incomplete elliptic integral of the first kind. Can you take it from there? |
Free groups and derivative | Because, $F$ is free on the set $X$, there are no relations among words (beyond those that make it a group: associativity, identity, inverses). You can prove your formula by induction on the length of the word $w$.
First, can you show that $D(1) = 0$, where $1 \in F$ is the identity? Hint: $1 = 1 \cdot 1$.
While you're at it, why not observe that a word of length $1$ is just $w = x$ or $w = x^{-1}$ for $x \in X$. Do you see why $D(w)$ is of the correct form?
Now, for the inductive step, any word can be written
$$
w = w_1 \cdots w_{n - 1} \cdot w_n = (w_1 \cdots w_{n - 1}) \cdot w_n.
$$
Use the Leibniz rule and the inductive hypothesis, and you've got your result. Can you see how to finish it? |
Is there a formula for finding the lowest factor of a number | There isn't! The best you can do without advanced methods is “trial division”: check whether the number is divisible by 2, then by 3, then by 5, 7, 11, and so on.
You only need to check divisibility by prime numbers. For example, you don't have to check if it is divisible by 6, since if it were, it would also be divisible by 2 and 3, which you would have checked already.
You only need to check up to the square root of the original number. For example, when checking for divisors of 293, you only have to check up to $\sqrt{293} \approx 17.1$. When you find that none of 2, 3, 5, 7, 11, 13, or 17 divides 293, you can stop and conclude that 293 is prime.
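Here is a minimal MATLAB sketch of trial division (the function name and interface are mine, not from the answer); it returns the smallest prime factor, or the number itself when it is prime:

    function p = smallest_factor(n)
        % The first divisor found is automatically prime, so it is
        % harmless that we also test composite candidates.
        for d = 2:floor(sqrt(n))
            if mod(n, d) == 0
                p = d;
                return
            end
        end
        p = n;   % no divisor up to sqrt(n): n is prime
    end

For example, smallest_factor(293) tests 2 through 17 and returns 293, confirming that 293 is prime.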
For advanced methods, see Wikipedia's article on integer factorization. For a method of intermediate difficulty, try the Pollard rho algorithm. Note that finding the smallest factor of a number is in general as hard as finding a complete factorization, because the number you are factoring might be a product of two prime numbers, and finding the smaller automatically gives you the larger. |
Is the inverse of an M-matrix again an M-matrix? | It looks like the answer is no. See this thread on MO: Matrices whose inverse is positive. There they use the fact that M-matrices have non-negative inverses. That is also stated on the Wikipedia page on M-matrices. |
If $X < a$, $EX < a$? | If $X<a$ almost surely, there exists $b<a$ such that $p:=P(X\leq b)>0$. Hence $EX\leq b\,p+a\,(1-p)<a$. |
Bound on the $L^2$-norm | There cannot be. Consider the following functions defined on $[0,1]$.
$$f_n(x)=\begin{cases}
n&0\leq x\leq 1/n\\
0& 1/n<x\leq 1
\end{cases}$$
$$g(x)=0$$
Then $\|f_n-g\|_1=1$ for all $n$ but $\|f_n-g\|_2=\sqrt{n}$ explodes. |
How to show $\operatorname{Hom}_A(M,N)$ is a finitely generated $A$-module? | Consider the surjection $A^n \to M$. This induces an injection $\operatorname{Hom}_A(M,N) \to \operatorname{Hom}_A(A^n, N) = N^n$. So this is a submodule of a finitely generated module and, by the Noetherian condition, we are done. |
the distribution of $\frac{(X_1 +X_2)^2}{(X_1 -X_2)^2}$ | Observe that since $X_1$ and $X_2$ are independent
$$
X_1+X_2\sim N(0,2);\quad X_1-X_2\sim N(0,2)
$$
where the second parameter is the variance. We may write $X_1+X_2\stackrel{d}{=}\sqrt{2}X_1$ and $X_1-X_2\stackrel{d}{=}\sqrt{2}X_2$. Hence
$$
\frac{(X_1 +X_2)^2}{(X_1 -X_2)^2}\stackrel{d}{=}\frac{X_1^2}{X_2^2}\sim F(1,1)
$$
because $X_1^2\perp X_2^2$ and $X_1^2\sim \chi^2_{(1)}$.
Note $\stackrel{d}{=}$ means equal in distribution. |
Petersen Graph deleting an edge | This follows from the edge transitivity of the Petersen Graph. In short, for any two edges $e_1$ and $e_2$ of the graph, there is an automorphism sending $e_1$ to $e_2$.
To put it in less technical language, a graph is edge transitive if all the edges "look the same". For example, the graph of a cube is edge transitive, as is the complete graph on $n$ vertices. |
Parameterise doubly stochastic matrices | $D$ is a compact convex set in $\mathbb R^{n \times n}$, and thus a manifold with boundary. Let's say its dimension is $d$.
Suppose $f$ existed. The restriction of $f$ to any closed ball is a homeomorphism onto its image. Now using the Baire category theorem, this image must have nonempty interior for some ball, and therefore we must have $i = d$. Now use invariance of domain. |
Check the convexity of a function | A set $S \subseteq \mathbb R^2$ is convex iff $(x_1, y_1), (x_2, y_2) \in S$, $\lambda \in [0,1]$ imply that $\bigl((1-\lambda)x_1 + \lambda x_2, (1-\lambda)y_1 + \lambda y_2\bigr) \in S$. Check this for your set $S$, that is, check whether
$$ x_i - y_i^2 \le y_i \le x_i + y_i^2, \qquad i \in \{1,2\} $$
imply
$$ (1-\lambda)x_1 + \lambda x_2 - \bigl((1-\lambda)y_1 + \lambda y_2\bigr)^2 \le (1-\lambda)y_1 + \lambda y_2 \le (1-\lambda)x_1 + \lambda x_2 + \bigl((1-\lambda)y_1 + \lambda y_2\bigr)^2 $$
for each $\lambda \in [0,1]$. |
Derivative with respect to a parameter - system of ODEs | Whenever you have a parameter-dependent ODE
$$
\dot x=f(t,x,q),~~~x(0,q)=x_0(q)
$$
then the derivative of the solution with respect to the parameters satisfies
$$
\frac∂{∂t}\frac∂{∂q_k}x(t,q)=\frac∂{∂x}f(t,x,q)\frac∂{∂q_k}x(t,q)+\frac∂{∂q_k}f(t,x,q),
\\~\\
\frac∂{∂q_k}x(0,q)=\frac∂{∂q_k}x_0(q),
$$
or perhaps more readable, set $u_k=\frac∂{∂q_k}x$ and use "operator" notation, then
$$
∂_tu_k(t,q)=∂_xf(t,x,q)u_k(t,q)+∂_{q_k}f(t,x,q),
\\~\\
u_k(0,q)=∂_{q_k}x_0(q).
$$
Note that in general this equation contains the original solution $x$, so it enlarges the system. In a numerical solver, the new equations can be solved alongside the original equations with a correspondingly enlarged state-space vector.
In the current case, set $u=\frac{∂x}{∂A}$, then
\begin{align}
\dot u_1&=u_2,& u_1(0)&=0,\\
\dot u_2&=-x_1-Au_1-Bu_2,& u_2(0)&=0.
\end{align}
This in no way has the stated solutions. In MATLAB you would formulate the extended system like
    function xdot = ext_sys(t,x,a,A,B,C)
        u  = x(3:4);                                   % sensitivity u = dx/dA
        dx = [ x(2); -A*x(1) - B*x(2) + a*sin(C*t) ];  % original system
        du = [ u(2); -x(1) - A*u(1) - B*u(2) ];        % sensitivity equations
        xdot = [ dx; du ];
    end
(Update 2/27/2021) Debugging Kutta's 3/8 method from the comments, with (slightly) better formatting:
    function [t, x] = RK38th(f, x0, t0, T, h)
        % Fixed-step integration of x' = f(t,x) with Kutta's 3/8 rule.
        t = t0:h:T;
        nt = numel(t);
        nx = numel(x0);
        x = nan(nx, nt);
        x(:,1) = x0;
        for k = 1:nt-1
            % the four stages of the 3/8 rule
            k1 = h*f(t(k), x(:,k));
            k2 = h*f(t(k) + h/3 , x(:,k) + k1/3 );
            k3 = h*f(t(k) + (2*h)/3, x(:,k) - k1/3 + k2);
            k4 = h*f(t(k) + h, x(:,k) + k1 - k2 + k3);
            % combine with weights 1:3:3:1
            dx = (k1 + 3*k2 + 3*k3 + k4)/8;
            x(:,k+1) = x(:,k) + dx;
        end
    end
Note that in k2, the x update is now, as in the other stages, composed of the k vectors.
This then results in the following plot, where the left part shows the solution and the right part compares the computed derivative with a difference quotient, where the difference in $A$ was chosen as large as $0.1$ to get visible deviations. For smaller values of eps, the curves are visually the same.
The parameters and calling code are
A = 2; B = 0.02; C = 1; a = 0.8;
t0 = 0; T = 10; h = 0.5;
x0 = [2; 0];
x0_ext = [ x0(1); x0(2); 0; 0 ];
eps = 1e-1;
[T0,X0] = RK38th(@(t,x)ext_sys(t,x,a,A,B,C), x0_ext, t0, T, h);
[T1,X1] = RK38th(@(t,x)ext_sys(t,x,a,A+eps,B,C), x0_ext, t0, T, h);
subplot(121);
plot(T0,X0);
legend(['x1'; 'x2'; 'u1'; 'u2']);
subplot(122);
plot(T0,(X1(1:2,:)-X0(1:2,:))/eps, '-+', T0, X0(3:4,:));
legend({'dx1'; 'dx2'; 'u1'; 'u2'}); |
Hamiltonian $C_n$ Proof | It depends on your audience. If this was a problem in a second or third year course, then you can probably make do without the proof of the claim. At the same time, if this was for a first year course emphasizing proper proof techniques and mathematical rigor, then I would probably include the proof of the claim.
As a rule of thumb, here is my take regarding situations like this: if you have to ask whether a statement needs proof, then the statement needs proof. The proof is only one line in this case anyway, so it's probably worth the effort:
$$n - 3 \ge \frac{n}{2} \iff \frac{n}{2} \ge 3 \iff n \ge 6$$ |
Alternating Sum of Cubes | Outline: It is useful to distinguish between $n$ odd and $n$ even.
For $n$ odd our sum is
$$\left(n^3+(n-1)^3+\cdots +1^3\right)-2\left((n-1)^3+(n-3)^3+\cdots +2^3\right).$$
You know a closed form for the sum of the first $n$ cubes. And $(n-1)^3+(n-3)^3+\cdots +2^3$ is $8$ times the sum of the first $\frac{n-1}{2}$ cubes.
A similar trick works for even $n$.
Alternately, note that
$$k^3-(k-1)^3=3k^2-3k+1.$$
Group in pairs. We know how to sum $k^2$, $k$, and $1$.
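For instance, for even $n$ (a sketch, reading the sum as $n^3-(n-1)^3+(n-2)^3-\cdots$) the pairs give, with $m=\frac n2$:
$$\sum_{j=1}^{m}\left[(2j)^3-(2j-1)^3\right]=\sum_{j=1}^{m}\left(12j^2-6j+1\right)=2m(m+1)(2m+1)-3m(m+1)+m.$$ |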
The existence of an algebra homomorphism between $\mathcal{M}_n({\mathbb{K}})$ and $\mathcal{M}_s(\mathbb{K})$ implies $n | s$ | Any $M_s(K)$-module becomes an $M_n(K)$-module via pullback under the
map $\Phi$. So, consider $K^s$ as the tautological $M_s(K)$-module. This is
also an $M_n(K)$-module. But $M_n(K)$ is semisimple,
and up to isomorphism its only simple module is $K^n$, which has dimension
$n$ as a $K$-vector space (here we use that $\Phi$ is a $K$-algebra map).
Hence $K^s$, viewed as an $M_n(K)$-module, must be a direct sum of $r$ simple $M_n(K)$-modules.
Via a dimension count $rn=s$, so $n\mid s$.
The existence of $P$ is essentially the Noether-Skolem theorem. We can split up $K^s=N_1\oplus\cdots\oplus N_r$ where the $N_j$ are simple $M_n(K)$-modules.
So we can choose a basis of $K^s$ so that
$$\Phi=\pmatrix{\Phi_1&0&\cdots&0\\0&\Phi_2&\cdots&0\\\vdots&\vdots
&\ddots&\vdots\\0&0&\cdots&\Phi_r}$$
with respect to this basis. Each $\Phi_i$ is a $K$-automorphism of $M_n(K)$
which is an inner automorphism due to Noether-Skolem. |
Hilbert polynomial of twisted cubic 'by hand'? | First, the coordinate ring of the twisted cubic is $k[s^3,s^2t,st^2,t^3]$, since $[s:t] \mapsto [s^3:s^2t:st^2:t^3]$ is a parametrization of the twisted cubic. Note that in particular, requiring that the ring homomorphism $k[w,x,y,z] \to k[s^3,s^2t,st^2,t^3]$ inducing the isomorphism is in fact graded means that the generators $s^3,s^2t,st^2,t^3$ each live in the grade one piece of the algebra. See Georges Elencwajg's answer to another question for a proof.
It therefore suffices to find the Hilbert polynomial for $k[s^3,s^2t,st^2,t^3]$. But the monomials in this algebra are in one-to-one correspondence with the monomials in $k[s,t]$ living in each graded piece that is a multiple of three. Since the Hilbert polynomial for $k[s,t]$ is $d+1$, the Hilbert polynomial for $k[s^3,s^2t,st^2,t^3]$ is therefore $3d+1$.
I want to point out that the above argument is a nice way to compute other Hilbert polynomials: for example, if $X$ is the image of the Segre embedding $\mathbf{P}^1 \times \mathbf{P}^2 \to \mathbf{P}^5$, the Hilbert polynomial of $X$ is $h_X(d) = (d+1)\binom{d+2}{2} = \frac{(d+1)^2(d+2)}{2}$. |
Volume of a solid $y=\cos(x)$ and $y=0$ for the interval $0\le x \le \frac{\pi}2$ | You are correct.
Using the distributive law $(\star)$, we have
$$
2 \pi \left( \frac{\pi}{2} - 1\right)
\overset{(\star)}{=} (2 \pi) \left( \frac{\pi}{2} \right) - (2 \pi)(1)
= \pi^2 - 2 \pi.
$$ |
Proof of this property: $x=0$ if and only if $|x|<y$ for every $y>0$ | Your proof is flawed because you cannot prove a general fact by listing examples. Also, the negation of "$|x| < y$ for all positive $y$" is "there exists some positive $y$ such that $|x| \geq y$", whereas you have its negation written as "$|x| < z$ for all $z \implies x \neq 0$, i.e. $x < 0$ or $x > 0$". Here is how you could go about doing this:
Suppose first that $x = 0$. Then $|x| = 0$, which is smaller than any positive real number.
You attempted to prove the next part by contradiction, which could actually work, but the cleanest way is by contrapositive. Here is how that might work:
Suppose that $x \neq 0$. Then $|x| \neq 0$ and $|x| \geq 0$, so $|x| > 0$. In particular there is a positive number which is less than or equal to $|x|$: namely, $|x|$ itself.
By contrapositive, if $x \in \mathbb{R}$ is such that $|x| < y$ for every positive $y \in \mathbb{R}$, then $x = 0$. |
If two set systems' elements all have even intersection, then the product of their cardinality is $\le 2^n$ | This can be proved using linear algebra over the field of two elements, $GF(2).$ Sets form a vector space over $GF(2)$ where symmetric difference is the vector addition. Suppose there are $k$ linearly independent sets in $\mathcal A.$ Let $\mathcal A'$ be a complementary vector space, so it has dimension $n-k.$ Each set $B\in\mathcal B$ defines a linear map $\mathcal A'\to GF(2)$ by $S\mapsto |S\cap B|$ mod $2.$ If $B,B'\in\mathcal B$ satisfy $|S\cap B|=|S\cap B'|$ mod $2$ for all $S\in\mathcal A',$ then by linearity and the fact that $|S\cap B|=|S\cap B'|$ mod $2$ for all $S\in\mathcal A$ we know that $|S\cap B|=|S\cap B'|$ mod $2$ for all $S.$ In particular $|\{i\}\cap B|=|\{i\}\cap B'|$ mod $2$ for all elements $i,$ which implies $B=B'.$ So the sets in $\mathcal B$ map injectively to the vector space of linear maps $\mathcal A'\to GF(2),$ which has dimension $n-k.$ So $|\mathcal A|\cdot|\mathcal B|\leq 2^k2^{n-k}=2^n.$ |
Find the sum of the angles $\angle CD_1P + \angle CD_2P + ... + \angle CD_8P$ | Notice we can rewrite the sum of angles as
$$\begin{align}\sum_{k=1}^8 \angle CD_kP
&= \sum_{k=1}^8 (\angle CD_kB - \angle PD_kB)
= \sum_{k=1}^8 \angle CD_kB - \sum_{k=1}^8 \angle PD_kB\\
&= \underbrace{\sum_{k=1}^8 \angle CD_kB}_{\mathcal{A}} - \angle PD_1B - \underbrace{\sum_{k=2}^8 \angle PD_kB}_{\mathcal{B}}
\end{align}
$$
Since $\angle CD_kB + \angle CD_{9-k}B = 180^\circ$ for $k = 1, \dots, 8$, we have
$$\mathcal{A} = \sum_{k=1}^8\angle CD_k B = \frac12\sum_{k=1}^8(\angle CD_k B + \angle CD_{9-k}B) = 8 \times 90^\circ$$
Since $\triangle PD_1B$ is equilateral, we have $\angle PD_1 B = 60^\circ$.
Since $\angle PD_kB + \angle PD_{10-k}B = 180^\circ$ for $k = 2, \dots, 8$, we have
$$\mathcal{B} = \sum_{k=2}^8 \angle PD_kB = \frac12\sum_{k=2}^8 (\angle PD_kB + \angle PD_{10-k}B) = 7 \times 90^\circ$$
Combining these, we obtain:
$$\sum_{k=1}^8 \angle CD_kP = 8 \times 90^\circ - 60^\circ - 7 \times 90^\circ
= 30^\circ
$$ |
Which is faster? $n^{\log n}$ or $(\log n)^n$? | Convert to a standard exponential:
$$n^{\log n}=\mathrm e^{\log^2n}, \quad (\log n)^n=\mathrm e^{n\log(\log n)}$$
then check whether $\;\log^2n=o\bigl(n\log(\log n)\bigr)\:$ or $\;n\log(\log n)=o\bigl(\log^2n\bigr)$.
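(For the record: since $\log^2 n=o(n)$ and $\log(\log n)\to\infty$, it is the first alternative that holds, so $n^{\log n}=o\bigl((\log n)^n\bigr)$, i.e. $(\log n)^n$ grows faster.) |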
Average Distance Between a Point And a Line Segment | If you let the origin of coordinates be the left endpoint of the segment, and let the segment lie on the $x$-axis, and you let the $x$-coordinate of the other endpoint be $k,$ and the fixed point be $(a,b),$ then the distance from a point on the segment to the fixed point is given by $$D(x)=\sqrt{b^2+(x-a)^2},$$ where we choose $x>0$ (so that $k>0$ too). Thus, if we require the average distance $D_a$ to satisfy the condition $$\int_0^kD(x)\mathrm dx=D_a\int_0^k\mathrm dx,$$ then the average distance is given by $$\frac1k\int_0^k\sqrt{b^2+(x-a)^2}\mathrm dx,$$ which may be explicitly calculated by making the substitution $x-a=b\tan y.$ |
Sum of diminishing series with constant addition | You have $s(n)=s(0)\cdot 0.9^n+90\cdot 0.9^{n-1}+90\cdot 0.9^{n-2}+\dots+90$, where each term after the first comes from the decay of each added $90$. This is a geometric series which we can sum, getting $$s(n)=s(0)\cdot 0.9^n+90\,\frac {1-0.9^n}{1-0.9}$$ |
What does it mean for a random variable to be bigger than another? | If $X$ and $Y$ are random variables with the same underlying probability space $\langle\Omega,\mathcal A,\mathsf{P}\rangle$ then note that $X$ and $Y$ are measurable functions $\Omega\to\mathbb R$.
In this context $X\leq Y$ then stands for:$$\forall\omega\in\Omega\left[X(\omega)\leq Y(\omega)\right]$$
A consequence of that is:$$\mathsf{P}(X\leq Y)=1$$or in words: $X\leq Y$ almost surely.
Another consequence - requiring that for both the expectation exists - is:$$\mathsf EX\leq\mathsf EY$$as is mentioned in your question. |
Continuity of the mapping $\left| \left( \frac{x}{y} \right)^{\frac{1}{1-x/y}} - \left( \frac{y}{x} \right)^{\frac{1}{1-y/x}} \right|$ | Hint. Note that $g(t)=t^{1/(1-t)}$ is a continuous function in $(0,1)\cup (1,+\infty)$ and
$$\lim_{t\to 0^+}g(t)=0\quad\mbox{and}\quad \lim_{t\to 1}g(t)=\lim_{s\to \infty}\left(1-\frac{1}{s}\right)^s=e^{-1}.$$
Hence $g$ can be extended to a function $\overline{g}$ which is continuous in $[0,+\infty)$.
Moreover for all $(x,y) \in \Omega$,
$$f(x,y)= \left| \overline{g}\left( \frac{x}{y} \right) - \overline{g}\left( \frac{y}{x} \right) \right|.$$ |
Let $K$ a separable, compact metric space. We can say that $C(K,\Bbb R)$ is also separable? | This essentially follows from a form of Stone-Weierstraß (if $A$ is a unital subalgebra of $C(K)$, with $K$ compact Hausdorff, that separates points, then $\overline{A} = C(K)$, which is a well-known standard form):
If $(K,d)$ is compact metric, it's separable, so pick a countable dense $D = \{d_n: n \in \mathbb{N}\}$ and let $A$ be the unital subalgebra generated by the countably many functions $f_{n}(x) = d(x,d_n)$. These separate the points of $K$. Hence $A$ is dense (and $A$ itself has a countable dense subset: the rational-linear combinations of finite products of the generating functions, etc.) |
an upper bound for number of primes in the interval $[n^2+n,n^2+2n]$ | $n$ consecutive numbers can hold at most A023193$(n)$ primes as long as the first is greater than $n$. A023193$(n) < 2\pi(n)$ for large $n,$ so in any case there are at most $O(n/\log n).$
Your particular example is (up to offset) A094189, but I couldn't find more information about that case. |
Show that there exists a metric $d$ on $\mathbb{R}$ such that $(\mathbb{R},d)$ is compact | There exists a bijection $f: \mathbb R \to [0,1]$. Define $d(x,y)=|f(x)-f(y)|$. This makes $\mathbb R$ compact. |
How many strings of letters from $\{A, b, c\}$ where two small letters aren't next to each other. | Hint: Let $P(n)$ denote the number of such strings of length $n$.
For $n\geq3$, the number of such strings of length $n$ ending with A is $P(n-1)$, since adding A at the end of any such string of length $n-1$ won't introduce bb, bc, cb or cc. Using the same logic, the number of such strings of length $n$ ending with b is the number of such strings of length $n-1$ ending with A, that is, $P(n-2)$. Similarly, the number of such strings of length $n$ ending with c will also be $P(n-2)$. Hence
$$
P(n)=P(n-1)+2P(n-2)\quad(n\geq3)
$$
You can brute force to obtain $P(1)$ and $P(2)$ and now all you have to do is solve the recurrence relation problem.
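(For reference: all $3$ strings of length $1$ are allowed, so $P(1)=3$; of the $9$ strings of length $2$ only bb, bc, cb, cc are forbidden, so $P(2)=5$. The characteristic roots of the recurrence are $2$ and $-1$, giving the closed form $P(n)=\frac{2^{n+2}+(-1)^{n+1}}{3}$.) |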
what is the probability that there are at least two people with this type of blood in that group of 3000 people? | You've forgotten to subtract off the case where 0 people have that blood type:
$$P(X \geq 2) = 1-P(X = 1)-P(X = 0)$$
Otherwise, your approach seems totally correct. The probability should be very close to 1, since 5% of 3000 is 150, which is the expected number of people with the blood type. Is it possible that the exercise reads 0.05%? This would provide a probability that is not essentially 1. |
Determine Gaussian integral without using erf(z) | The way I did it is to let $z = (\tau - \mu)/\sigma$; then, after the appropriate substitution, you get the well-known Gaussian integral.
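Explicitly (a sketch, assuming the integrand is the usual normal density kernel):
$$\int_{-\infty}^{\infty}e^{-(\tau-\mu)^2/(2\sigma^2)}\,\mathrm d\tau \overset{z=(\tau-\mu)/\sigma}{=} \sigma\int_{-\infty}^{\infty}e^{-z^2/2}\,\mathrm dz=\sigma\sqrt{2\pi}.$$ |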
What is the LCM of $a$ and $b$ if $(a+2 \sqrt{2} )/b$ is the ratio of the areas of the larger circle and the smaller circles? | Draw in the centers and the segments between them:
Now, can you find the distance $OO_1$ between the center of the big circle and the center of one of the small circles, in terms of $R$? Can you get the radius of the big circle from that? |
Density of $C^\infty(\mathbb{R}^n)$ in $C^0(\mathbb{R}^n)$ | Let $f:\mathbb{R}^n \to \mathbb{R}$ be continuous.
Choose a partition of unity $\{\phi_k\}_{k=1}^{\infty}$ on $\mathbb{R}^n$ which is subordinate to some locally finite family $\{U_k\}_{k=1}^{\infty}$ of precompact open sets which covers $\mathbb{R}^n$.
Let $\epsilon>0$. Then for all $k$, there exists a $C^{\infty}$ function $g_k$ such that $\sup_{x \in \text{supp } \phi_k} |f(x)-g_k(x)|<\epsilon\cdot 2^{-k}$ (we can do this because the support of $\phi_k$ is compact for all $k$, and we know that it's true in the compact case).
Let $g = \sum_k g_k\phi_k$. Then $g$ is $C^{\infty}$, because locally it is the finite sum of $C^{\infty}$ functions.
Since $\sum_k \phi_k=1$, we have that for all $x \in \mathbb{R}^n$,
$$|f(x)-g(x)| = \bigg|\sum_k f(x)\phi_k(x) - g_k(x)\phi_k(x)\bigg| \leq \sum_k |f(x)-g_k(x)|\cdot |\phi_k(x)| \leq \sum_k \epsilon \cdot 2^{-k} = \epsilon$$ Thus $\sup_{x \in \mathbb{R}^n} |f(x)-g(x)|\leq \epsilon$ and $g$ is $C^{\infty}$, as desired.
Actually, this would work on any manifold (not just $\mathbb{R}^n$), I think. |
Miscellaneous series | Look carefully: the sum given is $S_{2m}$, not $S_m$, so we substitute $n =2m$ at the end. Basically we are summing $2m$ terms, not $m$ terms. The expression given is for the sum of $2m$ terms. |
Equation of BI, CI given and angle A to be found | $BI$ and $CI$ are the bisectors of angles $B$ and $C$. Look at the triangle $BIC$: $\frac B2 + \angle BIC + \frac C2 = 180^\circ$.
The angle between $BI$ and $CI$ is $\angle BIC = 135^\circ$, so
$$\frac B2 + 135^\circ + \frac C2 = 180^\circ \implies B + C = 90^\circ.$$
In the triangle $ABC$: $A + B + C = 180^\circ$, hence $A = 90^\circ$. |
How to prove rigorously that this implies continuity? | First of all, you have to assume that $f$ is continuous and satisfies a growth condition at $\infty$. Then $u(x,t)$ converges uniformly to $f(x)$ as $t\to0$ on compact subsets of $\mathbb{R}$.
To simplify things I will assume that $f$ is bounded, say $|f(x)|\le M$ for all $x\in\mathbb{R}$. Now let $x_0\in \mathbb{R}$. Given $\epsilon>0$ there is a $\delta>0$ such that if $|z-x_0|\le\delta$ then $|f(z)-f(x_0)|\le\epsilon$. Take $x$ such that $|x-x_0|\le\delta/2$. If $|y|\le\delta/2$, then $|(x-y)-x_0|\le\delta$.
Then
$$\begin{align}
|u(x,t)-f(x_0)|&\le\frac{1}{\sqrt{4\,\pi\,t}}\int_{-\infty}^{\infty}e^{-y^2/4t}|f(x-y)-f(x_0)|\,dy\\
&\le\frac{\epsilon}{\sqrt{4\,\pi\,t}}\int_{|y|\le\delta/2}e^{-y^2/4t}\,dy+\frac{2\,M}{\sqrt{4\,\pi\,t}}\int_{|y|>\delta/2}e^{-y^2/4t}\,dy\\
&\le\epsilon+\frac{2\,M}{\sqrt{\pi}}\int_{|z|>\delta/(4\sqrt{t})}e^{-z^2}\,dz\qquad(y=2\sqrt{t}\,z),
\end{align}$$
and the last integral converges to $0$ as $t\to0$, independently of $x$; this gives the uniform convergence for $|x-x_0|\le\delta/2$. |
Showing this map is injective | Injectivety:
If $\phi (g(x))= \phi (f(x))$ then $g(x-2)=f(x-2)$ for all $x$ so if we put $x=t+2$ we get $g(t) = f(t)$ so $g(x) = f(x)$.
Surjectivity:
Take any $h(x)$ in $\mathbb{Q}[x]$. So if we take $g(x) := h(x+2)\in\mathbb{Q}[x]$ then $$\phi(g(x)) = g(x-2)= h(x)$$ |
Show that $\lim_{r\to 0} \frac{1}{r^2}\int_{C_{r}}f(z)dz = 2 \pi i\frac{\partial f}{\partial \bar{z}}(z_0)$ | It is convenient to use little-o notation. Apply the Taylor estimate for complex functions. If $f$ is $C^1$ in a neighborhood of $z_0$ then $$f(z) = f(z_0) + \frac{\partial f}{\partial z}(z_0) (z - z_0) + \frac{\partial f}{\partial \bar z} (z_0) (\bar z - \bar z_0) + o(|z - z_0|).$$
According to Cauchy's theorem you have $$\int_{C_r} f(z) \, dz = \frac{\partial f}{\partial \bar z} (z_0) \int_{C_r} (\bar z - \bar z_0) \, dz + \int_{C_r} o(|z - z_0|) dz.$$
The latter integrals can be parameterized using $z = z_0 + re^{it}$. Then $\bar z - \bar z_0 = r e^{-it}$ and $dz = ire^{it}$ so that
$$\int_{C_r} (\bar z - \bar z_0) \, dz = \int_0^{2\pi} re^{-it} i re^{it} \, dt = 2\pi i r^2$$
and
$$\left| \int_{C_r} o(|z - z_0|) dz \right|\le \int_{C_r} o(|z - z_0|) |dz| = \int_0^{2\pi} o(r) r \,dt = 2\pi r^2 \frac{o(r)}{r}.$$
Thus
$$\frac{1}{r^2} \int_{C_r} f(z) \, dz = 2\pi i \frac{\partial f}{\partial \bar z} (z_0) + 2\pi \frac{o(r)}{r}.$$
Comment on notation: we can express points $z \in \mathbb C$ as $z = x + iy$. Then $\bar z = x - iy$, $x = \frac 12 (z + \bar z)$ and $y = -\frac i2 (z - \bar z)$. The $z$ and $\bar z$ derivatives of $f$ are defined formally via the chain rule: $$\frac{\partial f}{\partial z} = \frac{\partial f}{\partial x} \frac{\partial x}{\partial z} + \frac{\partial f}{\partial y} \frac{\partial y}{\partial z} = \frac 12[ f_x - i f_y]$$
$$\frac{\partial f}{\partial \bar z} = \frac{\partial f}{\partial x} \frac{\partial x}{\partial \bar z} + \frac{\partial f}{\partial y} \frac{\partial y}{\partial \bar z} = \frac 12[ f_x + i f_y].$$
If $f$ is analytic then $\dfrac{\partial f}{\partial z} = f'(z)$ and $\dfrac{\partial f} {\partial \bar z} = 0$. |
Proof by induction that the sum of terms is integer | Let $f(n)=\frac{n^5}5+\frac{n^3}3+\frac{7n}{15}$
$$f(n+1)-f(n)=\frac{(n+1)^5}5+\frac{(n+1)^3}3+\frac{7(n+1)}{15}-\left(\frac{n^5}5+\frac{n^3}3+\frac{7n}{15}\right)$$
$$=\frac{(n+1)^5-n^5}5+\frac{(n+1)^3-n^3}3+\frac{7(n+1)-7n}{15}$$
$$=\frac{5n^4+10n^3+10n^2+5n+1}5+\frac{3n^2+3n+1}3+\frac{7}{15}$$
$$=(n^4+2n^3+2n^2+n)+(n^2+n)+\left(\frac15+\frac13+\frac7{15}\right)$$ (using Binomial Theorem)
Now, as you have shown $\frac15+\frac13+\frac7{15}=1$
$$\text{So, if }f(n)=\frac{n^5}5+\frac{n^3}3+\frac{7n}{15} \text{ is an integer for integer } n$$
$$\text{ so will be }f(n+1)=\frac{(n+1)^5}5+\frac{(n+1)^3}3+\frac{7(n+1)}{15}$$
This problem can be extended to more primes and proved using the fact that prime $p$ divides $\binom pr$ for $1\le r\le p-1$
I cannot resist including a non-inductive proof:
$$\frac{n^5}5+\frac{n^3}3+\frac{7n}{15}=\frac{n^5-n}5+\frac{n^3-n}3+\frac{7n}{15}+\frac n3+\frac n5=\frac{n^5-n}5+\frac{n^3-n}3+n$$
Now, using Fermat's Little Theorem, $n^p-n\equiv0\pmod p$ for any integer $n$ and prime $p$
Similarly, $$\frac{n^7-n}7+\frac{n^5-n}5+\frac{n^3-n}3+n=\frac{n^7}7+\frac{n^5}5+\frac{n^3}3+\frac{34n}{105}$$ will be an integer (here $1-\frac17-\frac15-\frac13=\frac{34}{105}$) |
Representing projection operator in terms of orthonormal basis | If $P$ is also assumed to be symmetric ($P^T=P$), and $b_1,\dots,b_M$ is an orthonormal basis of the image of $P$, then indeed one has $P=\sum_ib_ib_i^T=:B$:
Note that since $P$ is symmetric, its eigenspaces (i.e. the kernel and the range) are orthogonal.
Furthermore, it's easy to verify that if $v\in{\rm im}(P)={\rm span}(b_1,\dots,b_M)$, then $Bv=v$, and that if $v\perp{\rm im}(P)$, then $Bv=0$, and these properties characterize $P$.
Nevertheless, the statement is true for general projections as well:
If $P^2=P$, we still have $V={\rm im}(P)\oplus\ker(P)$, though these subspaces might not be orthogonal to each other in the general case.
Choosing bases $v_1,\dots,v_M$ in ${\rm im}(P)$ and $u_1,\dots,u_K$ in $\ker(P)$, we find that the matrix of $P$ in basis $v_1,\dots,v_M,u_1,\dots,u_K$ is diagonal with $M$ entries of $1$ and $K$ entries of $0$, because $Pv_i=v_i$ and $Pu_i=0$.
Consequently, ${\rm tr}(P)=M$. |
separation of variables 3D cylindrical | I suppose that $u$ and $U$ are the same function.
$$u_{tt} = C^{2}\Delta u = C^{2}\left( \frac{1}{r}\frac{\partial}{\partial r}(r\frac{\partial u}{\partial r}) + \frac{1}{r^{2}} \frac{\partial ^{2}u}{\partial \theta ^{2}} + \frac{\partial ^{2}u}{\partial t^{2}}\right)$$
$$\frac{1}{T(t)}\frac{d^{2}T}{dt^{2}}=C^2\left(\frac{1}{r\,R(r)}\frac{d}{dr}\left(r\frac{dR}{dr}\right) + \frac{1}{r^{2}\phi (\theta)}\frac{d^{2}\phi}{d\theta^{2}} + \frac{1}{T(t)}\frac{d^{2}T}{dt^{2}}\right)$$
$$\frac{C^2}{r\,R(r)}\frac{d}{dr}\left(r\frac{dR}{dr}\right) + \frac{C^2}{r^{2}\phi (\theta)}\frac{d^{2}\phi}{d\theta^{2}} + \frac{C^2-1}{T(t)}\frac{d^{2}T}{dt^{2}}=0$$
\begin{cases}
\frac{1}{\phi (\theta)}\frac{d^{2}\phi}{d\theta^{2}}=c_1\\
\frac{1}{T(t)}\frac{d^{2}T}{dt^{2}}=c_2\\
\frac{C^2}{r\,R(r)}\frac{d}{dr}\left(r\frac{dR}{dr}\right) + \frac{C^2c_1}{r^2} + (C^2-1)c_2=0\\
\end{cases}
$c_1$ and $c_2$ are arbitrary constants.
Then, you can solve individually the three ODEs. |
On irreducible polynomials over a field $K$ | Supose it is not so, then we can write $\;f=pq\;$ , with $\;\deg p\,,\,\deg q>\frac n2\;$ , but then (remember we're in a polynomial ring over a field (*))
$$n=\deg f\stackrel{(*)}=\deg p+\deg q>\frac n2+\frac n2=n\;\;\text{contradiction}$$ |
Identify a transfer function from frequency data? | In order to identify a second order system of the form
$$G(s) = \dfrac{K\omega_0^2}{s^2+2\zeta\omega_0s+\omega_0^2}$$
you need to determine three values. The first is the terminal value $f(\infty)=K$ of the step response. Then you determine the peak overshoot $\text{PO}=\dfrac{f_\max(t)-K}{K}$.
By using the formula
$$\zeta = \sqrt{\dfrac{\ln^2 \text{PO}}{\pi^2+\ln^2 \text{PO}}}$$
you can determine the parameter $\zeta$.
And the last parameter you need to determine is the time to the first peak
$t_\text{P}$. You can use this to determine the natural frequency $\omega_0$ by
$$\omega_0 =\dfrac{\pi}{t_\text{P}\sqrt{1-\zeta^2}}.$$
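As a rough illustration, a MATLAB sketch of the procedure (assuming you have recorded the step response as vectors t and y; these names are mine, not the question's):

    Kest = y(end);                     % terminal value, f(inf) = K
    [ymax, imax] = max(y);             % peak of the response
    PO   = (ymax - Kest)/Kest;         % peak overshoot
    zeta = sqrt(log(PO)^2/(pi^2 + log(PO)^2));
    tp   = t(imax);                    % time to the first peak
    w0   = pi/(tp*sqrt(1 - zeta^2));   % natural frequency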
This procedure should give you quite reasonable results. After having this approximate transfer function you can tweak these parameters. |
Fundamental group of a knot | The knot group isn't the fundamental group of the knot, it's the fundamental group of the complement of the knot. |
Integral of the Laplace-Beltrami Operator multiplied by a function | Yes, you are correct. Wikipedia specifies compact support to ensure convergence and that the functions are zero on the boundary. Since you have no boundary, the latter does not apply (and notice that in fact the identity you want is used near the beginning of this section). The divergence operator on a manifold is defined so that
$$ \operatorname{div}(fU) = \langle \operatorname{grad} f,U \rangle_g + f \operatorname{div}{U}, $$
and integrating this over the whole manifold, the divergence theorem forces the left-hand side to vanish, and you get the identity.
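Concretely (a sketch, writing $\Delta g=\operatorname{div}(\operatorname{grad} g)$): take $U=\operatorname{grad} g$ and integrate over the closed manifold $M$:
$$0=\int_M \operatorname{div}(f\,\operatorname{grad} g)\,dV=\int_M \langle \operatorname{grad} f,\operatorname{grad} g \rangle_g\,dV+\int_M f\,\Delta g\,dV.$$ |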
L'Hospital's rule with a denominator that goes away | L'Hospital's Rule works nicely here, but we can get by with less machinery. For note that
$$\frac{\sqrt{1+2x}-\sqrt{1-4x}}{x}=\frac{(\sqrt{1+2x}-\sqrt{1-4x})(\sqrt{1+2x}+\sqrt{1-4x})}{x(\sqrt{1+2x}+\sqrt{1-4x})}.$$
When we multiply out the top, we get $(1+2x)-(1-4x)$, which is $6x$. Cancel the $x$ with the $x$ in the denominator. So we want
$$\lim_{x\to 0}\frac{6}{\sqrt{1+2x}+\sqrt{1-4x}},$$
which is easy: at $x=0$ the denominator is $1+1=2$, so the limit is $3$. |
Need help to understand a line of a proof of diagonalizability of real symmetric matrices | A way of restating this: if a matrix $A$ is diagonalizable, then there is no vector $v$, scalar (eigenvalue) $\lambda$, and integer $n\geq2$ such that
$$
(A- \lambda I)v \neq 0 \text{ but } (A-\lambda I)^nv =0
$$ |
How to find exponent coefficients in a sum of exponents? | If you know that
$$y(x) = \exp(c_1 x)+\exp(c_2 x)+\cdots+\exp(c_n x),$$
then
calculate $y(1), y(2), \ldots, y(n)\;$: $\;p_1 = y(1), p_2 = y(2), \ldots, p_n=y(n)$.
You'll get a system of equations:
$$\left\{
\begin{array}{r}
\exp(c_1)+\exp(c_2)+\cdots+\exp(c_n)=p_1; \\
\exp(2c_1)+\exp(2c_2)+\cdots+\exp(2c_n)=p_2; \\
\cdots \cdots \cdots \qquad \qquad \qquad \qquad \\
\exp(nc_1)+\exp(nc_2)+\cdots+\exp(nc_n)=p_n.
\end{array}
\right.
$$
If denote $s_1=\exp(c_1), s_2=\exp(c_2), \ldots, s_n=\exp(c_n)$, then
$$\left\{
\begin{array}{r}
s_1+s_2+\cdots+s_n=p_1; \\
s_1^2+s_2^2+\cdots+s_n^2=p_2; \\
\cdots \cdots \cdots \qquad \qquad \\
s_1^n+s_2^n+\cdots+s_n^n=p_n.
\end{array}
\right.
$$
With the help of power sum symmetric polynomials (Newton's identities),
you'll find the values $e_1,e_2, \ldots, e_n$ $-$ the elementary symmetric polynomials:
$$\left\{
\begin{array}{l}
e_1 = s_1+s_2+\cdots+s_n = p_1; \\
e_2 = s_1s_2+s_1s_3+\cdots+s_{n-1}s_n = \dfrac{1}{2}(p_1^2-p_2); \\
\cdots \cdots \cdots\qquad \qquad \\
e_n = s_1s_2\cdots s_n=\dfrac{1}{n}\sum\limits_{j=1}^n (-1)^{j-1}e_{n-j}p_j;
\end{array}
\right.
$$
so $s_1,s_2,\ldots,s_n$ are roots of equation
$$s^n-e_1\cdot s^{n-1}+\cdots+(-1)^{n-1}e_{n-1}\cdot s+(-1)^n e_n=0.$$
Finally, once you find the roots $s_1,s_2,\ldots,s_n$,
$$c_j=\ln(s_j), \; j=1,\ldots,n.$$
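For instance, for $n=2$ (a sketch): from $p_1=s_1+s_2$ and $p_2=s_1^2+s_2^2$ one gets $e_1=p_1$ and $e_2=\frac12(p_1^2-p_2)$, so $s_1,s_2$ are the roots of $s^2-p_1 s+\frac12(p_1^2-p_2)=0$, and then $c_1=\ln s_1$, $c_2=\ln s_2$. |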
Limit of $\frac{1}{\ln n}\sum_{j=1}^n\frac{x_j}{j}$ in two different cases. | Replacing $x_n$ by $x_n-X$ if necessary, we can assume that $X=0$. Let $s_n:=\sum_{i=1}^nx_i$. Then there exists a sequence $\left(\delta_n\right)_{n\geqslant 1}$ converging to $0$ such that $x_n=s_n-s_{n-1}=n\delta_n-\left(n-1\right)\delta_{n-1}$ hence
$$
\frac 1{\ln n}\sum_{j=1}^n\frac{x_j}j=\frac 1{\ln n}\sum_{j=1}^n\frac{j\delta_j-\left(j-1\right)\delta_{j-1}}j=\frac 1{\ln n}\sum_{j=1}^n\left(\delta_j-\delta_{j-1}\right)+\frac 1{\ln n}\sum_{j=1}^n\frac{\delta_{j-1}}j.
$$
The first sum in the last expression is telescopic and goes to $0$ as $n$ goes to infinity; for the second one, fix an integer $N$ and observe that for all $n\geqslant N$,
$$
\left\lvert \frac 1{\ln n}\sum_{j=1}^n\frac{\delta_{j-1}}j\right\rvert \leqslant \frac {N\sup_{i}\left\lvert \delta_i\right\rvert}{\ln n}+\frac 1{\ln n}\sum_{j=1}^n\frac{1}j\sup_{i\geqslant N}\left\lvert \delta_i\right\rvert.
$$
Since the quantity $\frac 1{\ln n}\sum_{j=1}^n\frac{1}j$ can be bounded independently of $n$ (say by $c$), we get
$$
\limsup_{n\to +\infty}\left\lvert \frac 1{\ln n}\sum_{j=1}^n\frac{\delta_{j-1}}j\right\rvert \leqslant c\sup_{i\geqslant N}\left\lvert \delta_i\right\rvert
$$
and since $N$ is arbitrary, we can conclude. |
Show the $\mathbb{C} S_3$-module of dimension 2 has $S(V \otimes V)$ is not irreducible | If $S(V\otimes V)$ is meant to represent the symmetric tensor product of two copies of $V$, then $\dim S^2 V= 3$, which is not the dimension of any of the three finite dimensional irreps of $S_3$. If $S(V\otimes V)$ instead represents the direct sum $\oplus S^k(V\otimes V)$, then each graded piece is an invariant subspace, so it is reducible. |
Name for probability density function supported on a closed interval and increasing | Fun question. Here are a few distributions that come to mind ... This is illustrative, but it should be relatively straightforward to derive first derivatives etc., if needed.
$\text{Beta}(a,b)$ distribution, with parameter $a > 1$ and $b\leq1$
In the above plot, $b=0.97$. The $b = 1$ case is plotted below separately as the Power Function.
$\text{Bradford}(\beta)$ distribution with parameter $-1<\beta<0$
$\text{PowerFunction}(a)$ with parameter $a>1$ (special case of Beta)
Two-component mix of Triangular and Uniform
Variation on a $\text{Leipnik}(\theta)$ distribution with parameter $0<\theta<1$ |
Covering a circle with rectangular labels | Turning the above comment into an answer.
Consider a sphere of diameter $1$ in $\mathbb{R}^3$ and identify the original circle of diameter $1$ with the intersection of the sphere with a plane that divides it precisely in half.
We prove by contradiction that it is impossible for $n$ rectangles of widths $d_1, \ldots, d_n$ with $\sum_{j=1}^n d_j < 1$ to cover the circle. Indeed, suppose that the rectangles cover the original circle. For each rectangle, consider its projection on the sphere. Since the sphere has radius $\frac12$, the projection is a ring of area
$$
A_j = 2\pi d_j \cdot \frac12 = \pi d_j,
$$
where $d_j$ is the width of the rectangle. Since the rectangles cover the circle, we conclude that the rings must cover the sphere implying that
$$
\text{Area of sphere} \le \sum_{j=1}^n A_j = \pi \sum_{j=1}^n d_j < \pi.
$$
However, the surface area of the sphere is $4 \pi (\frac12)^2 = \pi$ and we conclude $\pi < \pi$, reaching a contradiction. |
For fixed $t_0$ and $s<t$, compute $E(B_{t\wedge t_0}\mid \mathcal{F}_s)$ | Just let $t \geq s$ and calculate
\begin{align*}
\mathbb{E} \left [B_{t \wedge t_{0}}\, | \, \mathcal{F}_{s} \right ]
&= \mathbb{E} \left [B_{t} \mathbf{1}_{t \leq t_{0}} \, | \, \mathcal{F}_{s} \right ] + \mathbb{E} \left [B_{t_{0}} \mathbf{1}_{t > t_{0}} \, | \, \mathcal{F}_{s} \right ] \\
&\overset{(i)}{=} \mathbf{1}_{t \leq t_{0}} \mathbb{E}[B_{t} \, | \, \mathcal{F}_{s}] + \mathbf{1}_{t > t_{0}} \mathbb{E}[B_{t_{0}} \, | \, \mathcal{F}_{s} ] \\
&= \mathbf{1}_{t \leq t_{0}} B_{s} + \mathbf{1}_{t > t_{0}} B_{s \wedge t_{0}} \\
&\overset{(ii)}{=} \mathbf{1}_{t \leq t_{0}} B_{s \wedge t_{0}} + \mathbf{1}_{t > t_{0}} B_{s \wedge t_{0}} \\
&= B_{s \wedge t_{0}}.
\end{align*}
Note that step $(ii)$ follows since for $t \leq t_{0}$ we have $s \leq t \leq t_{0}$, hence $B_{s}=B_{s \wedge t_{0}}$; and if $t_{0}<s$ then $t \geq s > t_{0}$ by assumption, so $\mathbf{1}_{t \leq t_{0}} = 0$.
Step $(i)$ follows since $\mathbf{1}_{t \leq t_{0}}(\omega) \equiv \mathbf{1}_{t \leq t_{0}}$ is deterministic and is therefore $\mathcal{F}_{s} = \sigma(B_{u} : u \leq s ) \,$-measurable. In fact, $\mathbf{1}_{t \leq t_{0}}$ is $\mathcal{F}_{0} = \sigma(B_{0}) = \{\emptyset,\Omega \}$-measurable. For example, if say $t$ (which is fixed) is $\leq t_{0}$, then either $\mathbf{1}_{t \leq t_{0}}^{-1}(B) = \Omega$ (if $1 \in B$) or $\mathbf{1}_{t \leq t_{0}}^{-1}(B) = \emptyset$ (if $1 \notin B$).
Feel free to ask if there are any questions. |
Critical Points and Gradients/Derivatives | Looks like 12 critical points to me.
$$\nabla g(x)=\left[\begin{array}{c} f'(x_1)f(x_2)f(x_3)\cdots f(x_{20})\\
f(x_1)f'(x_2)f(x_3)\cdots f(x_{20})\\
f(x_1)f(x_2)f'(x_3)\cdots f(x_{20})\\
\vdots \\
f(x_1)f(x_2)f(x_3)\cdots f'(x_{20})\end{array}\right].$$
Note carefully the placement of the $'$ symbols.
What do you need to prove that each of those rows equals zero?
EDIT:
It appears that $f'(x_i)=0$ for all $i$ is only one of the conditions giving $\nabla g(x)=0$. We could also have $f(x_i)=0$ for all $i$, or $f(x_1)=f'(x_1)=0$ for example.
Consider $f(x)=x$. Then $g(x)=x^{20}$ and $$\nabla g(x)=\left[\begin{array}{c} 1\cdot x_2\cdot x_3\cdots x_{20}\\
x_1\cdot 1 \cdot x_3\cdots x_{20}\\
x_1\cdot x_2\cdot 1\cdots x_{20}\\
\vdots \\
x_1\cdot x_2\cdot x_3\cdots 1\end{array}\right].$$
This is equal to zero when $x_1=x_2=\cdots x_{20}=0$, but $f'(x_i)=1$ for all $i$. |
solve $(3x-1)^{x-1}=1$ | Hint: if $$(x-1)\log(3x-1) = 0$$
Then either
$$\log(3x-1) = 0 $$
or
$$x-1 = 0 $$ |
What type of aperiodic tiling is used by Turkish Airlines on their bathroom walls? | This arabesque pattern basically consists of a pair of parallel straight lines, symmetric about the $x$-axis, rotated by the angle $\dfrac {2 \pi}{n}$ for integer $n$. The patterns are not aperiodic, but feature selective segment deletions for aesthetic appeal in circular symmetry.
Simple modifications of the basic pattern, e.g. circular arcs and removal of crowding segments at the center, can be noticed. In the second picture, the cases $n=4$ and $n=8$ are seen with an angle offset.
Many choices of vectors are possible; such figures are easily generated as rotate-and-copy patterns in CAS geometry software. |
Finding a general solution to a differential equation, using the integration factor method | When the DE is in this form: $y'+p(x)y=q(x)$, the integrating factor is $m(x)=e^{\int{p(x)dx}}$.
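For completeness, a sketch of why this works: since $m'=pm$, multiplying the DE through by $m(x)$ turns the left-hand side into an exact derivative,
$$m y' + m p\, y=(m y)' = m\,q \quad\Longrightarrow\quad y(x) = \frac{1}{m(x)}\left(\int m(x)\,q(x)\,dx + C\right).$$ |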
How to define the image measure ("joint density") of $(X, Y)$, where $Y = h (X)$? | The image measure is given by the formula
$$
\mu_{X,Y}(A)=P((X,Y)\in A)=\int_{\Bbb R} 1_A(x,h(x))f_X(x)\,dx.
$$
As $\mu_{X,Y}$ is concentrated on the graph of $h$, which is a Borel subset of $\Bbb R^2$ of Lebesgue measure $0$ (Fubini), $\mu_{X,Y}$ is not absolutely continuous with respect to $2$-dimensional Lebesgue measure, so it does not admit a density. The utility of the delta function formalism in this context is not apparent. |
Definition of Disconnected Subspaces | We cannot say that $Y$ is disconnected if it is contained in the union of two disjoint nonempty open subsets of $X$. For example, if $X=\mathbb{R}$ and $Y=(1,2)$, then
$$Y\subset (-1,0)\cup (1,2)$$
even though $Y$ is connected.
If $X$ is a topological space and $Y\subseteq X$, then certainly, if $Y$ is disconnected as a subset of $X$, i.e. if $Y=A\cup B$ for non-empty disjoint open $A,B\subset X$, then it is disconnected in its subspace topology, because both $A$ and $B$ are open subsets of $Y$ in the subspace topology.
However, the opposite direction is not true. That is, if $Y=U\cup V$ for non-empty disjoint open $U,V\subset Y$, then we have that $U=Y\cap A$ and $V=Y\cap B$ for some non-empty open $A,B\subseteq X$, but it need not be the case that we can always find $A$ and $B$ that are disjoint.
For example, let $X=\{1,2,3\}$ given the topology $T=\{\varnothing,\{2\},\{1,2\},\{2,3\},X\}$. Let $Y=\{1,3\}\subset X$. Then $Y$ is disconnected in the subspace topology, as $U=\{1\}$ and $V=\{3\}$ are disjoint non-empty sets that are open in the subspace topology of $Y$ and whose union is $Y$, but $Y$ cannot be covered by disjoint non-empty open subsets of $X$. |
Is there any matrix $2\times 2$ such that $A\neq I$ but $ A^3=I$? | Rotation by $2\pi/3$ in the plane. Find the $2 \times 2$ matrix that gives you this linear transformation.
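For reference, that rotation matrix is
$$A=\begin{pmatrix}\cos\frac{2\pi}{3}&-\sin\frac{2\pi}{3}\\ \sin\frac{2\pi}{3}&\cos\frac{2\pi}{3}\end{pmatrix}=\begin{pmatrix}-\tfrac12&-\tfrac{\sqrt3}{2}\\ \tfrac{\sqrt3}{2}&-\tfrac12\end{pmatrix},$$
so $A^3$ is rotation by $2\pi$, i.e. $A^3=I$, while $A\neq I$. |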
When $N$ has a Poisson distribution | The law of total expectation states that $$\operatorname{E}[S_N] = \operatorname{E}[\operatorname{E}[S_N \mid N]],$$ and the law of total variance states $$\operatorname{Var}[S_N] = \operatorname{E}[\operatorname{Var}[S_N \mid N]] + \operatorname{Var}[\operatorname{E}[S_N \mid N]].$$ When $S_N = X_1 + \cdots + X_N$, that is to say, $S_N$ is the sum of $N$ iid random variables, then the law of total expectation plus linearity of expectation gives $$\operatorname{E}[S_N] = \operatorname{E}[\operatorname{E}[X_1 + \cdots + X_N \mid N]] = \operatorname{E}[\operatorname{E}[X_1] + \cdots + \operatorname{E}[X_N]] = \operatorname{E}[N \operatorname{E}[X]] = \operatorname{E}[N]\operatorname{E}[X],$$ where $X_i \sim X$ for $i = 1, 2, \ldots$ is the common distribution of the $X_i$s. Note that in this case the $X_i$s need not be independent, only identically distributed, because linearity of expectation $$\operatorname{E}[X_1 + \cdots + X_n] = \operatorname{E}[X_1] + \cdots + \operatorname{E}[X_n] $$ does not require iid (but the subsequent reduction to $n \operatorname{E}[X]$ does require each expectation $\operatorname{E}[X_i]$ to be equal).
For the total variance, we already know from the above that the conditional expectation is $$\operatorname{E}[S_N \mid N] = N \operatorname{E}[X].$$ The conditional variance does require the iid property: $$\operatorname{Var}[S_N \mid N] \overset{\text{iid}}{=} \operatorname{Var}[X_1] + \cdots + \operatorname{Var}[X_N] = N \operatorname{Var}[X]$$ since if independence is not satisfied, you would have nonzero covariance terms. Then $$\operatorname{Var}[S_N] = \operatorname{E}[N]\operatorname{Var}[X] + \operatorname{Var}[N \operatorname{E}[X]] = \operatorname{E}[N] \operatorname{Var}[X] + \operatorname{E}[X]^2 \operatorname{Var}[N].$$
In the case that $N \sim \operatorname{Poisson}(\lambda)$, we have $\operatorname{E}[N] = \operatorname{Var}[N] = \lambda$, thus the above further simplifies to $$\operatorname{E}[S_N] = \lambda \mu, \\ \operatorname{Var}[S_N] = \lambda(\sigma^2 + \mu^2),$$ where $\mu = \operatorname{E}[X]$ and $\sigma^2 = \operatorname{Var}[X]$. |
Fodor's lemma on singular cardinals | I’m pretty sure that all you can get without very strong extra conditions is the weak form of the pressing-down lemma that says that the function must be bounded on a stationary set. For example, let $\kappa = \omega_{\omega_1}$, $\mu = \omega_1$, and $C = \{\omega_\xi:\xi\in\omega_1\}$, and define $\varphi:\kappa \to \kappa$ by setting $\varphi(\alpha) = \xi$ for $\omega_\xi \le \alpha < \omega_{\xi+1}$; clearly $\varphi$ is regressive, but it’s not constant on any stationary subset of $\kappa$. It’s not clear to me that any reasonable conditions would avoid examples like this, though of course what’s reasonable depends on what you want the result for.
To prove the weak form, suppose that $\kappa > \operatorname{cf}\kappa = \mu > \omega$; then $\kappa$ has a cub $C = \{\alpha_\xi:\xi \in \mu\}$, where the enumeration is continuous. Let $S$ be a stationary subset of $\kappa$; then $S_C = \{\xi \in \mu:\alpha_\xi \in S\}$ is a stationary subset of $\mu$. Let $\varphi$ be a regressive function on $\kappa$. Define $$\varphi_C:S_C \to \mu:\xi \mapsto \min\{\eta\in\mu:\varphi(\alpha_\xi) \le \alpha_\eta\};$$
clearly $\varphi_C(\xi)\le\xi$. Suppose that $\varphi_C(\xi)=\xi$; then for every $\eta<\xi$ we have $\alpha_\eta<\varphi(\alpha_\xi)<\alpha_\xi$, and since $C$ is closed, $\xi$ must be a successor ordinal. Thus, $\varphi_C$ is regressive on $\{\xi\in S_C:\operatorname{cf}\xi\ge\omega\}$, which is a stationary subset of $\mu$, and therefore $\varphi_C$ is constant on some stationary subset $T$ of $S_C$, say with value $\eta_0$. Then for each $\xi\in T$, $\varphi_C(\xi)=\eta_0$, so $\varphi(\alpha_\xi)\le\alpha_{\eta_0}$. Since $\{\alpha_\xi:\xi \in T\}$ is stationary in $\kappa$, we’re done: $\varphi$ is bounded on the stationary set $\{\alpha_\xi:\xi \in T\}$.
We actually do get a little more. For every $\eta<\eta_0$, $\varphi(\alpha_\xi)>\alpha_\eta$, so $\varphi(\xi)\ge\sup\{\alpha_\eta:\eta<\eta_0\}=$ $\alpha_{\sup\{\eta:\eta<\eta_0\}}$; thus, $\varphi$ is constant on a stationary set if $\eta_0$ happens to be a limit ordinal. If $\eta_0=\gamma+1$, however, all we can say is that $\varphi$ maps some stationary set into $[\alpha_\gamma,\alpha_{\gamma+1}]$. |
Suficient condition for tensor product of vector spaces.. | (i) is not correct, it should be $\langle \mathrm{im}(\phi) \rangle = G$.
In order to solve the "exercise" (which in fact, is the equivalence between your wrong definition (which should be seen as a characterization) and the correct definition), you only have to show (i). To do that, apply the given universal property to see that the inclusion $\langle \mathrm{im}(\phi) \rangle \hookrightarrow G$ induces a bijection $\hom(G,H) \to \hom(\langle \mathrm{im}(\phi) \rangle,H)$ for every $H$. This implies that $\langle \mathrm{im}(\phi) \rangle \to G$ is an isomorphism (Yoneda Lemma; any argument (especially the ones which will appear in the other answers) is a repetition of the proof of the Yoneda Lemma). |
what is the appropriate inner product | First define $f(u):=(u^2-1)e^{-u^4/2},\,g(u):=(u^3+2)e^{-u^4/2}$, so the inequality we want to prove is$$\int_{\Bbb R}f(x)g(x)dx\int_{\Bbb R}f(y)g(y)dy\le\int_{\Bbb R}f^2(x)dx\int_{\Bbb R}g^2(y)dy.$$In terms of the usual inner product $\langle h_1,\,h_2\rangle:=\int_{\Bbb R}h_1(u)h_2(u)du$, this becomes $\langle f,\,g\rangle^2\le\langle f,\,f\rangle\langle g,\,g\rangle$, which is just the result we were asked to use. |
Show that the finite subsets of the natural numbers are bounded. | A good proof! I just have some pointers regarding notation.
In your inductive hypothesis (second paragraph), rather say "Suppose it is true for some $n \in N$." You can go on to clarify what this means (that there exists some $M \in N$...).
Be careful in the $n+1$ case: you don't want to confuse the bound $M$ from the previous case with the bound for $n+1.$ Since the bound $M$ is defined by the $n$ case, you don't want to say $M = k$; rather, introduce a new variable for the bound for the case $n+1.$ Also, it is not necessary to introduce the variable $k$; you can use $f(n+1)$ as is.
Just to check: are you taking the natural numbers to include 0? This will influence your base case. |
How to prove that for any $c∈C$, $HW = HD(c)$ | Fix $c \in C$.
$$
HD(c)
= \{d_H(x, c) \mid x \in C\}
= \{w_H(x - c) \mid x \in C\}
$$
A vector space is closed under addition and scalar multiplication, so $x - c \in C$ if and only if $x \in C$. Thus
$$
\{w_H(x - c) \mid x \in C\}
= \{w_H(x) \mid x \in C\}
= HW.
$$ |
Length of a Continuous Curve | The basic fact you are (or should be) using is that if $S$ is any set of real numbers, and $X=x_1, \dotsc, x_n, \dotsc$ is any convergent sequence of elements of $S$, then $L=\lim X \leq \sup S.$ This is obvious, because if $L > \sup S,$ then (since $x_i \to L$) there is an $i$ such that $x_i > \sup S,$ contradicting the supremum property. |
Permutations with restrictions on item positions | One can succinctly express the count of possible matchings of items to allowed positions (assuming it is required to position each item and distinct items are assigned distinct positions) by taking the permanent of the biadjacency matrix relating items to allowed positions.
In the example above we would express the count, taking items $a,b,c$ as columns and $1,2,3$ as rows:
$$ \operatorname{perm} \begin{pmatrix} 1 & 1 & 0 \\ 1 & 1 & 1 \\ 0 & 1 & 1 \end{pmatrix} = 3 $$
Sadly the computation of a matrix permanent, even in the restricted setting of "binary" matrices (having entries $0,1$), was shown by Valiant (1979) to be $\#P-$complete. The topic was discussed in this previous Math.SE Answer. See also this slightly more recent Math.SE Question.
A naive approach to computing a permanent exploits the expansion by (unsigned) cofactors in $O(n!\; n)$ operations (similar to the high school method for determinants). A clever algorithm by H.J. Ryser (1963) allows the exact evaluation of an $n\times n$ permanent in $O(2^n n)$ operations (based on inclusion-exclusion).
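As an illustration, here is a minimal MATLAB sketch of Ryser's formula (the function name is mine); it loops over all nonempty column subsets, so the cost is indeed exponential:

    function p = ryser_perm(A)
        % perm(A) = (-1)^n * sum over nonempty subsets S of columns of
        % (-1)^|S| * prod_i ( sum_{j in S} a_ij )   (Ryser, 1963)
        n = size(A,1);
        p = 0;
        for s = 1:2^n-1
            cols = bitget(s, 1:n) == 1;     % subset S encoded in the bits of s
            rowsums = sum(A(:,cols), 2);    % sum over j in S of a_ij, per row
            p = p + (-1)^nnz(cols) * prod(rowsums);
        end
        p = (-1)^n * p;
    end

On the matrix above, ryser_perm([1 1 0; 1 1 1; 0 1 1]) returns 3.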
A deterministic polynomial time algorithm for exact evaluation of permanents would imply $FP=\#P$, which is an even stronger complexity theory statement than $NP=P$. So the prospects for this appear extremely dim at present. Interest in boson sampling as a model for quantum computing draws upon a connection with evaluation of permanents. |
Evaluating a sequence given a recurrent relation for consecutive values | Let me walk you through it. Start by applying the definition a few times, for $x=1, 2, 3, 4, \ldots$:
$$\mbox{for } x=1: \quad f(2) = 2f(1) + 1 = 2 \cdot 5 + 1$$
$$\mbox{for } x=2: \quad f(3) = 2f(2) + 1 = 2 \left[ 2 \cdot 5 + 1 \right] + 1 = 2^2\cdot 5 + 2 +1$$
$$\mbox{for } x=3: \quad f(4) = 2f(3) + 1 = 2 \left[ 2^2\cdot 5 + 2 + 1 \right] + 1= 2^3\cdot 5 + 2^2 +2 + 1$$
$$\mbox{for } x=4: \quad f(5) = 2f(4) + 1 = 2 \left[ 2^3\cdot 5 + 2^2 +2 + 1 \right] + 1 = 2^4\cdot 5 + 2^3 + 2^2 + 2 + 1$$
Perhaps you start seeing a pattern, which leads you to guess that
$$f(n+1) = 2^n\cdot 5 + 2^{n-1} + 2^{n-2} + \ldots + 2^2 + 2^1 + 2^0 = 2^n\cdot 5 + \sum_{k=0}^{n-1} 2^k$$
This is the expression suggested by Fred. As Arthur points out, you may now prove that this is correct for any $n \ge 1$ using mathematical induction. (For a quick-and-dirty argument, you may plug the expression into the original formula and check that it holds true.)
Incidentally, given that
$$\sum_{k=0}^{n-1} 2^k = 2^n - 1$$
the expression derived by Fred may be rewritten as
$$f(n+1) = 2^n\cdot 5 + \sum_{k=0}^{n-1} 2^k = 2^n\cdot 5 + 2^n - 1 = 2^n\cdot 6 - 1$$
as recommended by Arthur.
Finally, using this expression,
$$f(3) - f(0) = 23-2=21.$$ |
Relationship between a function's limits at infinity and its derivative's ones | Let $g(x)=(1-x^2)^2$ if $x\in[-1,1]$ and $0$ for $|x|>1$. Show that $g(x)$ is differentiable and, in fact, the derivative is continuous. $g(x)$ can be seen as a differentiable "bump" that starts rising at $x=-1$, reaching a peak value of $1$ at $x=0$, and returning to $0$ at $x=1$.
Now, define $f(x)$ as:
$$f(x)=\sum_{k=1}^{\infty} \frac{1}{k}g(k^2(x-k))$$
$f(x)$ can be seen as a function which is mostly zero except with bumps with a maximum of $1/k$ in the intervals $(k-1/k^2,k+1/k^2)$, where $k$ is a positive integer.
Claim: $\lim_{x\to\infty} f(x) = 0$ and $\lim_{x\to\infty} f'(x)$ does not exist.
Proof: Now, $f(k)=\frac{1}{k}$ for all positive integers $k$, and if $x>k$, then $0\leq f(x)<\frac{1}{k}$. This shows that $f(x)\to 0$ as $x\to\infty$.
Also $f(k-1/k^2)=0$. By the mean value theorem, this means that for some $c\in (k-1/k^2,k)$, $f'(c)=\frac{f(k)-f(k-1/k^2)}{1/k^2} = k$. This shows that $f'(x)$ does not converge to zero. In fact, it is not even bounded.
However, if $f(x)$ converges to a finite value, and $f'(x)$ converges at all, then $f'(x)$ converges to zero. You can prove this again with the mean value theorem. See the proof later in the answer.
If $f(x)$ is concave or convex, then $f'(x)$ is monotonic. That means that if $f'(x)$ does not converge to any value, then for some $N$, we have $f'(x)>1$ for all $x> N$ or $f'(x)<-1$ for all $x>N$. This is easily shown to not be possible if $f(x)\to L$, since it would mean that $|f(x+1)-f(x)|>1$ for $x>N$.
This means that $f'(x)$ converges, and thus it must converge to $0$.
Originally, I wrote that it was true if $f(x)$ was monotonic, but I don't think that's so.
If we define $h(x)=\sum_{k=1}^{\infty} g(k^3(x-k))$ and $f(x)=\int_0^x h(t)\,dt$ then $f'(x)=h(x)$ does not converge to zero, but $f(x)$ is monotonically increasing, and $f(x)$ is bounded by $2\sum_k \frac{1}{k^2}$, so $f(x)$ converges, but $f'(x)$ does not converge.
We could even make $h(x)$ unbounded by making it $h(x)=\sum_{k=1}^{\infty} kg(k^4(x-k))$.
If you want a counterexample with $f$ strictly increasing, use $1-e^{-x} + \int_{0}^x h(t)\,dt$.
One last thing I didn't prove: If $f'(x)$ and $f(x)$ both converge to finite values, then $f'(x)$ converges to $0$.
The key is the result:
Lemma: If $f$ is differentiable and $\lim_{x\to\infty} f(x)=L$, then $\liminf_{x\to\infty} |f'(x)|=0$.
Proof:
This follows from the mean value theorem.
Since $f(x)\to L$, there is an $N$ so that for $x>N$, $|f(x)-L|<1/2$. Thus, for $x>N$ and any $D>0$, $|f(x+D)-f(x)|<1$.
Now, given any $\epsilon>0$ and any $M$ pick $x>\max(M,N)$ and set $D=\frac{1}{\epsilon}$.
By the mean value theorem, there must be a $c\in[x,x+D]$ such that $$f'(c)=\frac{f(x+D)-f(x)}{D}=\epsilon(f(x+1/\epsilon)-f(x))$$ and thus $|f'(c)|<\epsilon$.
So $\liminf_{x\to\infty} |f'(x)|=0$.
Corollary:
If $f(x)$ and $f'(x)$ converge to finite values as $x\to\infty$, then $\lim_{x\to\infty} f'(x)=0$.
Proof: Since $\lim_{x\to\infty} |f'(x)|=|\lim_{x\to\infty} f'(x)|$, and $\liminf |f'(x)|=\lim |f'(x)|$ when the right hand exists, you have the result.
Finally, if $f(x)\to\infty$ you can get any values for $\alpha=\liminf |f'(x)|$. For example, $f(x)=\alpha x$ for $\alpha\neq 0$ gives $f'(x)=\alpha$. $$f(x)=\begin{cases}\log x&x\geq1\\
x-1&x<1\end{cases}$$
has $f(x)\to\infty$ as $x\to\infty$ but $f'(x)\to 0$.
For concave $f$, you always have $\lim f'(x)$ in $[-\infty,\infty]$, but without concavity, you can get any values $\alpha\leq\beta$ for $\alpha=\liminf f'(x)$ and $\limsup f'(x)$. |
If $A$ is self-adjoint, then $\left\|A\right\|=\sup_{x\in H\setminus\{0\}}\frac{\lvert\langle Ax,x\rangle\rvert}{\left\|x\right\|^2}$ | Let $B\in\mathfrak L(H)$. By the Cauchy-Schwarz inequality, $$c:=\sup_{x\in H\setminus\{0\}}\frac{|\langle Bx,x\rangle_H|}{\left\|x\right\|_H^2}\le\left\|B\right\|_{\mathfrak L(H)}=\sup_{\left\|x\right\|_H,\:\left\|y\right\|_H\:\le\:1}|\langle Bx,y\rangle_H|.\tag2$$ Now assume that $B$ is self-adjoint. Then, $$\langle B(x+y),(x+y)\rangle_H-\langle B(x-y),x-y\rangle_H=4\langle Bx,y\rangle_H\tag3$$ and hence $$|\langle Bx,y\rangle_H|\le c\frac{\left\|x+y\right\|_H^2+\left\|x-y\right\|_H^2}4=c\frac{\left\|x\right\|_H^2+\left\|y\right\|_H^2}2\tag4$$ by the parallelogram law for all $x,y\in H$, i.e. $$\sup_{\left\|x\right\|_H,\:\left\|y\right\|_H\:\le\:1}|\langle Bx,y\rangle_H|\le c.\tag5$$ Thus, $$\left\|B\right\|_{\mathfrak L(H)}=c.\tag6$$ |
Proving that $ (A \triangle B) - C= (A - C) \triangle (B -C)$ | As far as I can tell, your solution has no mistakes; however, it can be done more briefly. You can use set algebra to establish the equality by showing that one side can be rewritten as the other. My advice is to always work from the more complicated side and reduce it to the simpler one. Here is my proof:
$$(A \triangle B) - C= (A - C) \triangle (B -C)$$
Start:
$$(A - C) \triangle (B -C)$$
Def Triangle operator
$$((A-C)-(B-C))\cup ((B-C)-(A-C))$$
Def -operator
$$((A\cap C^c)\cap(B\cap C^c)^c)\cup ((B\cap C^c)\cap(A\cap C^c)^c)$$
DeMorgan
$$((A\cap C^c)\cap(B^c\cup C))\cup ((B\cap C^c)\cap(A^c\cup C))$$
Distribute each intersection over the union; the terms containing $C^c\cap C=\varnothing$ drop out
$$(A\cap C^c\cap B^c)\cup (B\cap C^c\cap A^c)$$
commutative property and def -
$$((A-B)-C)\cup((B-A)-C)$$
Def of $\triangle$, using $(X-C)\cup(Y-C)=(X\cup Y)-C$
$$(A \triangle B) - C$$ |
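If you want to double-check the identity before (or after) proving it, a brute-force sketch over random subsets of a small universe works well; in Python, `^` is the symmetric difference of sets:

```python
import random

U = range(20)
for _ in range(1000):
    A = {x for x in U if random.random() < 0.5}
    B = {x for x in U if random.random() < 0.5}
    C = {x for x in U if random.random() < 0.5}
    assert (A ^ B) - C == (A - C) ^ (B - C)  # ^ is symmetric difference
print("identity holds on all samples")
```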
Defective Markov transition matrix and relation to its limiting distributions | For an ergodic chain it still converges to a stationary distribution. By the Perron-Frobenius theorem, $1$ is a simple eigenvalue, so all other eigenvalues satisfy $|\lambda_i| < 1$. Now you can build a basis of generalized eigenvectors (a Jordan basis). Though it's a bit more involved, the terms coming from eigenvalues other than $1$ disappear in the limit. See Rosenthal, Jeffrey S. "Convergence rates for Markov chains." SIAM Review 37.3 (1995): 387-405 for more details.
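As an illustration, here is a small Python sketch of the convergence. The transition matrix is my own simple ergodic example; it happens to be diagonalizable, but the limiting behavior shown is the same in the defective case:

```python
import numpy as np

P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])        # ergodic: irreducible and aperiodic

print(np.linalg.matrix_power(P, 100))     # every row ~ stationary distribution

# the stationary distribution directly: left eigenvector for eigenvalue 1
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmin(np.abs(w - 1))])
print(pi / pi.sum())                      # [0.25, 0.5, 0.25]
```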
Rouche's theorem problem | Hint: Do you know this one? $(1+x)^n\ge 1+nx$
Take $f(z)=z^n$, $g(z)=nz-1$, then show that $|f(z)|>|g(z)|$ on $|z|=1+\sqrt{{2\over(n-1)}}$.
Hence $f$ and $f+g$ have the same number of roots inside the specified circle.
A mixture of AP and GP | Your working is sound. Here is what you do to sum a mixture of an A.P. and a G.P.
Given a sum of the form,
$$\underbrace{a+(a+d)k+((a+d)k+d)k+\ldots+((a+d)k+\cdots+d)k}_{\text {n terms}}$$
We want to obtain a closed form solution.
$$\underbrace{a+ak+ak^2+\cdots+ak^n}_{\text{Geometric Progression} = S_1} +(dk)+(dk^2+dk)+(dk^3+dk^2+dk)+\cdots+(dk^n+dk^{n-1}+\cdots+dk)$$
$$S_1+ ndk+(n-1)dk^2+(n-2)dk^3+\cdots+dk^n$$
$$S_1+\underbrace{n(dk+dk^2+\cdots+dk^n)}_{\text{Geometric Progression}=S_2}-d(k^2+2k^3+3k^4+\cdots+(n-1)k^n)$$
Can you do it now?
I think there might be a better solution though. |
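In the meantime, here is a quick numeric sketch checking the decomposition above, with terms $t_0=a$ and $t_{j+1}=(t_j+d)k$ for $j=0,\dots,n-1$ (so $n+1$ terms, matching the sums as written; the sample values of $a,d,k,n$ are arbitrary):

```python
a, d, k, n = 3.0, 2.0, 1.5, 6

t, direct = a, a                       # direct sum of the n+1 terms
for _ in range(n):
    t = (t + d) * k
    direct += t

S1 = sum(a * k**j for j in range(n + 1))
S2 = n * sum(d * k**j for j in range(1, n + 1))
correction = d * sum((j - 1) * k**j for j in range(2, n + 1))
print(direct, S1 + S2 - correction)    # the two agree
```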
Find the automorphisms for the Galois Group of the minimial polynomial $x^4+1$. | You’re dealing with the four primitive eighth roots of unity. Call any one of them $\zeta$, and let this be fixed. The others now are $\zeta^3$, $\zeta^5$, and $\zeta^7=\zeta^{-1}$. Your four automorphisms are the identity, which takes $\zeta$ to $\zeta$, and the three others, which I suppose you can call $\sigma_3$, $\sigma_5$, and $\sigma_7$, with $\sigma_i$ taking $\zeta$ to $\zeta^i$. You can check, for instance, that $\sigma_3\circ\sigma_5=\sigma_7$ in this way: $$\sigma_3(\sigma_5(\zeta))=\sigma_3(\zeta^5)=[\sigma_3(\zeta)]^5=(\zeta^3)^5=\zeta^{15}=\zeta^7=\sigma_7(\zeta)\,.$$ I leave the rest to you. |
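If you like, you can check the whole composition table at once: since $\sigma_i\circ\sigma_j=\sigma_{ij\bmod 8}$, the exponents just multiply mod $8$. A tiny Python sketch:

```python
exps = [1, 3, 5, 7]
print((3 * 5) % 8)                         # 7: sigma_3 o sigma_5 = sigma_7
for i in exps:
    print([(i * j) % 8 for j in exps])     # each row permutes {1, 3, 5, 7}
```

Every non-identity element squares to the identity, so the group is the Klein four-group.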
Maximum number of circle packing into a rectangle | Consider the following diagram of a triangular packing:
If the circles have radius $r$, then each pair of horizontal red lines is a distance $r$ apart, and they're a distance $r$ from the edges. Each pair of vertical blue lines is a distance $r \sqrt 3$ apart, and they're still a distance $r$ from the edges.
So if you want the triangular packing to have $m$ circles in each column, and $n$ columns, then the rectangle must be at least $(2m+1) \cdot r$ units tall and $(2 + (n-1)\sqrt3) \cdot r$ units long. (Also, if the rectangle is only $2m \cdot r$ units tall, we can alternate columns with $m$ and $m-1$ circles.)
If the rectangle is $257 \times 157$ and the radius of a circle is $\sqrt{\frac{10}{\pi}}$, then:
If we make $257$ the vertical dimension, then the rectangle is a bit over $144 \cdot r$ units tall, and a bit over $(2 + 49\sqrt3) \cdot r$ units wide. So we can arrange the circles in $50$ columns that alternate between $72$ and $71$ circles, for $25 \cdot 72 + 25 \cdot 71 = 3575$ circles.
If we make $157$ the vertical dimension, then the rectangle is a bit over $87 \cdot r$ units tall, and a bit over $(2 + 82\sqrt3) \cdot r$ units wide. So we can arrange the circles in $83$ columns that have $43$ circles each, for $3569$ circles.
We pick the first option, which gave us the greater value.
If you want to know the exact number of circles that can fit, there is nothing better to be done than this calculation. But you can estimate the number of circles that will fit by knowing that the limiting density of the triangular packing is $\frac{\pi}{2\sqrt 3}$.
The $257 \times 157$ rectangle has area $40349$, but at most a $\frac{\pi}{2\sqrt 3}$ fraction of that area can be used: at most area $\frac{40349 \pi}{2\sqrt 3} \approx 36592.5$. If all circles have area $10$, then at most $3659$ circles can fit in that area. As you can see, this is an overestimate, because we aren't using the space around the edges of the packing as efficiently as possible. |
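For completeness, here is a short Python sketch reproducing both counts from the column formulas above (the helper name and the floor-based rounding are mine):

```python
from math import sqrt, pi, floor

r = sqrt(10 / pi)

def count(height, width):
    n = floor((width / r - 2) / sqrt(3)) + 1    # number of columns
    m = floor(height / (2 * r))                 # circles in a full column
    if height >= (2 * m + 1) * r:               # every column can hold m circles
        return n * m
    return (n + 1) // 2 * m + n // 2 * (m - 1)  # alternate m and m-1 circles

print(count(257, 157))   # 3575
print(count(157, 257))   # 3569
```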
Quaternions. Problem with understanding essence of. | Well it is always tricky trying to answer the question "why do things work they way they do" because the answer is usually a moving target.
Unfortunately, none of the lectures or books I have seen or read explain the principle of how quaternions work in 4D,
Well, Hamilton was not really 'thinking in $4$ dimensions' AFAIK... he was very interested in doing three dimensional geometry with quaternions. If you still think this is a sticking point you will have to explain what exactly you want to understand. As with most mathematics, you can get a long way by simply "getting used" to how things work, and then forming your own pictures as you go. Expecting a clear picture from the outset is often too ambitious.
I explain it to myself as three Cartesian planes with one common real component, which defines a vector position in 4D.
I do not think it is useful to think of quaternions as a vector position in $4$-D (that is going no further than thinking of $\mathbb R^4$.) Complex numbers are certainly a lot more than vector positions in $2$-D. You could think of $3$-space as two orthogonal Cartesian planes meeting on a real line, and you can think of $4$-space as three mutually orthogonal planes meeting on a real line. But this has more to do with $\mathbb R^3$ and $\mathbb R^4$ and not really anything to do with the quaternions.
This is my attempt to somehow imagine a quaternion.
Why is imagining quaternions any more challenging than imagining integers, rational numbers, or real numbers? It's just another, albeit different, number system that you can add, multiply and divide in. IMO more strenuous attempts to "imagine" ("imagine as an object in reality"?) do not yield anything useful compared to the amount of thinking that goes into it.
Is it far from reality?
I think we see this question sometimes, but it doesn't have an answer. I don't know what reality you're talking about. The usefulness of whatever picture a person has is relative to their own understanding of the subject, and stands on the merits of its own appeal. There is no standard of reality to measure a description against.
Why do we disregard the real part of the coordinate in the first part of Euler's formula?
I don't know what you're talking about. As I understand it, we do not disregard that part. I will give you my heuristic for rationalizing quaternion rotation below. I'm excerpting a couple slides from a talk I gave on quaternions:
Helpful identities
If the coefficients of $q$ have Euclidean length $1$, then $q^{-1}=\bar q$.
If $v$ and $w$ have real part zero, then
The real part of $vw$ is $-(v\bullet w)$.
The pure quaternion part of $vw$ is $v\times w$.
If $u^2=-1$, $(\cos(\theta)+u\sin(\theta))(\cos(\theta)-u\sin(\theta))=\cos(\theta)^2+\sin(\theta)^2=1$ (basic trigonometry)
If $u^2=-1$, $(\cos(\theta)+u\sin(\theta))(\cos(\theta)+u\sin(\theta))=\cos(2\theta)+u\sin(2\theta)$ (De Moivre's formula)
Rationalizing quaternions' multiplication action on the model of $3$-space
The model of $3$-space I'm referring to, of course, is the space of quaternions with real part zero. As usual, we take $q = \cos(\theta/2)+u\sin(\theta/2)$ as the rotation quaternion, where $u$ is a unit vector pointing along the axis of rotation and $\theta$ is the angle of rotation around the axis measured using the right-hand rule. We aim to make the 'sandwich' action look more like what happens in complex arithmetic
$u$ is unmoved by $q$: $$quq^{-1}=
(\cos(\theta/2)+u\sin(\theta/2))u(\cos(\theta/2)-u\sin(\theta/2))=\\
(\cos(\theta/2)+u\sin(\theta/2))(\cos(\theta/2)-u\sin(\theta/2))u=\\
(\cos(\theta/2)^2-u^2\sin(\theta/2)^2)u=\\
(\cos(\theta/2)^2+\sin(\theta/2)^2)u=u$$
if $v$ is a unit length pure quaternion orthogonal to $u$: $$qvq^{-1}=
(\cos(\theta/2)+u\sin(\theta/2))v(\cos(\theta/2)-u\sin(\theta/2))=\\
(\cos(\theta/2)+u\sin(\theta/2))(\cos(\theta/2)+u\sin(\theta/2))v=\\
(\cos(\theta/2)+u\sin(\theta/2))^2v=\\
(\cos(\theta)+u\sin(\theta))v\leftarrow\text{looks like a rotation in the complex plane}$$
$q$ leaves $u$ unchanged and rotates its normal plane by $\theta$. Everything else follows rigidly, so we have the rotation explained in terms that look like complex arithmetic.
There is one critical thing to notice here, though: in the last expression, $u$ and $v$ would both be $i$ in complex arithmetic. Let me try to explain that. The circle of quaternions that cause rotations around $u$ live in the plane $P$ spanned by $1$ and $u$. They are acting on the set $P^\perp$, the orthogonal complement of the first plane. You can see we need at least $4$ dimensions to fit these together in the same space.
I don't know the right way to explain how this can be aligned with complex multiplication, but I believe there is a concrete rationalization. In simple terms, I think it has to do with shifting perspective from the things you are operating on living in $P^\perp$ to operating on things in $P$. In harder terms, I believe it has to do with a duality between $P$ and $P^\perp$, which I've seen explained in some texts on Clifford/geometric algebra, but I do not properly know.
Final word
I believe also there is another good explanation of how the 'sandwich' action arises, an explanation that relies on exponential maps and Lie algebra, which again I have not fully absorbed. I leave it to someone who is more familiar with that field to provide a complementary answer along those lines. |
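As a complement, here is a tiny numeric sketch of the sandwich action, using plain tuples $(w,x,y,z)$ for quaternions; all names here are mine:

```python
import math

def qmul(p, q):  # Hamilton product of two quaternions (w, x, y, z)
    pw, px, py, pz = p; qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

theta = math.pi / 2                  # rotate 90 degrees about the z-axis
q  = (math.cos(theta/2), 0, 0, math.sin(theta/2))
qc = (q[0], -q[1], -q[2], -q[3])     # conjugate = inverse for a unit q

v = (0, 1, 0, 0)                     # the pure quaternion i, i.e. the x-axis
print(qmul(qmul(q, v), qc))          # ~ (0, 0, 1, 0): x-axis -> y-axis
```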
Lagrange multipliers - maximum and minimum values given constraint | I do not see where you used Lagrange multipliers. Any critical points will satisfy the Lagrange multipliers equation
$$
\begin{bmatrix}
yz & xz & xy
\end{bmatrix} = \lambda \begin{bmatrix}
2x & 4y & 6z
\end{bmatrix} \ .
$$
This gives you the system of equations
$$
\begin{align}
yz &= 2\lambda x & (1) \\
xz &= 4\lambda y & (2) \\
xy &= 6\lambda z & (3) \\
96 &= x^2+2y^2+3z^2 & (4)
\end{align}
$$
First consider $\lambda = 0$ and then consider other cases. Try this and see how it goes for you. I hope this helps.
EDIT:
If you assume $x,y,z\neq 0$, you can solve for $\lambda$ in each of $(1)$, $(2)$, and $(3)$. Equating these expressions for $\lambda$ and substituting into $(4)$, you obtain, as you did, $6y^2=96$, which gives $y=\pm4$. You similarly obtain $x^2 = 32$, or $x=\pm4\sqrt{2}$, and $z^2 = \frac{32}{3}$, or $z=\pm4\sqrt{\frac{2}{3}}$. With all of the $\pm$'s, you get a few critical points. Plug them in to see which ones are the largest/smallest.
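If you'd rather let a computer grind through the cases, here is a sympy sketch of the same system; the $\lambda=0$ branch shows up as the critical points where $xyz=0$:

```python
import sympy as sp

x, y, z, lam = sp.symbols('x y z lam', real=True)
eqs = [sp.Eq(y * z, 2 * lam * x), sp.Eq(x * z, 4 * lam * y),
       sp.Eq(x * y, 6 * lam * z), sp.Eq(x**2 + 2 * y**2 + 3 * z**2, 96)]
sols = sp.solve(eqs, [x, y, z, lam], dict=True)

f = x * y * z
print(sorted({f.subs(s) for s in sols}))   # [-128*sqrt(3)/3, 0, 128*sqrt(3)/3]
```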
{Thinking}: Why equivalent percentage increase of A and decrease of B is not the same end result? | The current question seems to be this. Sorry, but this is "math language" with some prose sprinkled in.
Say I have 100 cupcakes selling for 1 dollar each. If I increase the number of cupcakes by 66%, then I'll have 166 cupcakes. $$\frac{\text{# of cupcakes}}{\text{price}}=\frac{166}{100}=1.66$$
If I have 100 cupcakes selling for 1 dollar each, and I instead decrease the price by 66%, then I'll be selling each one for 34 cents each, and we get
$$\frac{\text{# of cupcakes}}{\text{price}}=\frac{100}{34}=2.94$$
So the issue seems to be this: Increasing the numerator by some percentage and decreasing the denominator by the same percentage don't give the same result. So what gives? Maybe this is just me, but I find that math helps me make this intuitive. Look at multiplying by a percentage like this: adding 66% is multiplying by 1.66=(1+.66), and subtracting 66% is like multiplying by .34=(1-.66). So if our percentage is $x$ (66% here), we find that
$$\frac{a(1+x)}{b}\neq\frac{a}{b(1-x)}$$
plug some numbers in to see that this is the same situation. Now $a$ and $b$ cancel, so we find that this is just saying that
$$1+x\neq\frac1{1-x}$$
Now comes the real question: Why might people expect something different to happen? I think we need to look at what people think is happening here, where intuition works perfectly, and see why they're applying it somewhere where it doesn't belong.
Let's look at a different situation, the one people might be mistaking the above for: scaling. If we scale the numerator up by $k$, then we get
$$\frac{ka}b$$
If we scale the denominator down by $k$, we get
$$\frac a{\frac 1 k b}=\frac {ka} b$$
So here they are the same! This is what you might think the above situation was. But it clearly isn't. So what's the difference?
The difference is that you were talking about adding and subtracting percentages. So when you say you took off 66%, this is the reverse of adding 66%. But fractions don't work with addition in this way:
$$\frac{5+2}{3}\neq\frac{5}{3-2}$$
If you phrased it in terms of multiplication, everything would work. The reverse of scaling up is scaling down. So let's repeat, but doing that instead.
Say I have 100 cupcakes selling for 1 dollar each. If I scale the number of cupcakes up by 1.66, then I'll have 166 cupcakes. $$\frac{\text{# of cupcakes}}{\text{price}}=\frac{166}{100}=1.66$$
If I have 100 cupcakes selling for 1 dollar each, and I instead scale the price down by 1.66, then I'll be selling each one for 34 cents each, and we get
$$\frac{\text{# of cupcakes}}{\text{price}}=\frac{100}{\frac{100}{1.66}}=\frac{100}{60.24}=1.66$$
So the best I can say is that some people think that adding/subtracting percentages is the same thing as multiplying/dividing by them, which it isn't. A 66% decrease doesn't undo a 66% increase. This is because percentages are all about scaling a number, so talking about adding and subtracting them in the first place is really just horribly obscuring what's really going on. This I think is a language issue. Look at the differences here:
"Let's say I have 100 people who have 100 houses between them. If I double the number of people, then there will be 2 people to every house. If I halve the number of houses, there will be two people to every house."
"Let's say I have 100 people who have 100 houses between them. If I add 100% of people, then there will be 2 people to every house. If I subtract 100% of houses, there will be 2 people to every house."
Some careful reading would reveal that the second situation is wrong, and that the two are not the same. Doubling is the same thing as adding 100%, but halving is not the same thing as subtracting 100%. Talking about adding/subtracting percentages is just awkward and obscures what's really going on, which is scaling. |
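If a numeric demonstration helps, here is the asymmetry in three lines (a Python sketch):

```python
x = 0.66
print(1 + x)               # 1.66: add 66% to the numerator
print(1 / (1 - x))         # 2.94...: subtract 66% from the denominator
print((1 + x) * (1 - x))   # 0.5644, not 1: a 66% cut does not undo a 66% raise
```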
Is the integral test for convergence still applicable? | Yes, the important thing is that it is decreasing after a while at least.
This is because a finite number of terms isn't relevant for the convergence of the series.
How to minimize the maximum of $n$ multivariate functions? | You can try to solve the following problem:
$$
\underset{l_1\in \mathbb{N},l_2\in \mathbb{N},t}{\text{minimize}}\quad t\\
\text{subject to} \quad f_1(l_1,l_2)\leq t\\
\quad \quad \quad \quad f_2(l_1,l_2)\leq t\\
\quad \quad \quad \quad f_3(l_1,l_2)\leq t\\
\quad \quad \quad \quad f_4(l_1,l_2)\leq t
$$ |
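If the $f_i$ are cheap to evaluate and $l_1,l_2$ range over a bounded grid, you can also just brute-force the minimax directly; a Python sketch with made-up $f_i$ (your real $f_1,\dots,f_4$ and grid bounds go in their place):

```python
def f1(l1, l2): return (l1 - 3)**2 + l2
def f2(l1, l2): return l1 + (l2 - 4)**2
def f3(l1, l2): return abs(l1 - l2)
def f4(l1, l2): return 10 - l1 - l2

best = min((max(f(l1, l2) for f in (f1, f2, f3, f4)), l1, l2)
           for l1 in range(1, 21) for l2 in range(1, 21))
print(best)   # (minimal t, l1, l2)
```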
Proving sequence is Cauchy | Hint:
Use
$$x_j<x_{j+1}<\sqrt{2}<y_j$$
$$x_{j+1} - x_j <y_j-x_j <\frac{1}{2^j}$$
And for $n,p\ge 1,$
$$x_{n+p}-x_n<\sum_{j=n}^{n+p-1}\frac{1}{2^j}=\frac{1}{2^{n-1}}(1-\frac{1}{2^p})$$
$$<\frac{1}{2^{n-1}}$$
So, given an $ \epsilon>0$, because of $$\lim_{n\to +\infty}\frac{1}{2^{n-1}}=0$$
there exists $ N \in \Bbb N $ such that
$$(\forall p\ge 1) \;\;(\forall n\ge N) \;\;$$
$$|x_{n+p}-x_n|<\frac{1}{2^{n-1}}<\epsilon$$
Done. |
The two regular expressions abc and abc(φ)* are equivalent. (T/F) | If in your notation $\varphi$ denotes the empty string $\varepsilon$, they are equivalent; likewise if $\varphi$ denotes the empty language $\emptyset$, since $\emptyset^*=\{\varepsilon\}$. If $\varphi$ is anything else, they are not.
Can every theorem involving countable things necessarily be proven by induction? | I assume you found something like$$\left(\sum_{i=1}^ni\right)^2-\left(\sum_{i=1}^{n-1}i\right)^2=\left(\sum_{i=1}^ni-\sum_{i=1}^{n-1}i\right)\left(\sum_{i=1}^ni+\sum_{i=1}^{n-1}i\right)=n\cdot n^2=n^3.$$I think whether all theorems on countable sets are provable by induction is an open question; the hard part would be checking some ordering exists for which a proof of the inductive step must exist. Whether I'm right or wrong, this question is probably a better fit for MathOverflow than for math.se, because it looks very, very nontrivial.
But I suspect far more theorems are thereby provable than first seem to be. There's a very complicated inductive argument in Wiles's proof of (the semistable case of) the modularity theorem, for which he needed not only a sensible ordering of E- & M-series, but also a way to make the inductive step work for said ordering. (I'm not an expert on it, but I expect he worked backwards from needing the inductive step to work to inspire him in how he'd order them.) |
Degree of composition extension field, given disjoint subfields | Consider $K=\mathbb Q$, $M=\mathbb Q(\sqrt[3]2)$ and $N=\mathbb Q(\varepsilon_3\sqrt[3]2)$, where $\varepsilon_3$ is the third primitive root of unity.
Clearly, $MN=\mathbb Q(\sqrt[3]2,\varepsilon_3)$. By calculating degrees of the chains $\mathbb Q\subset\mathbb Q(\sqrt[3]2)\subset \mathbb Q(\sqrt[3]2,\varepsilon_3)$ and $\mathbb Q\subset\mathbb Q(\varepsilon_3)\subset \mathbb Q(\sqrt[3]2,\varepsilon_3)$ we get $[MN:\mathbb Q]=6$.
Since $M$ is a real field and $N$ is not, $M\neq N$, so $\mathbb Q\subseteq M\cap N\subsetneq M$. Since $[M:\mathbb Q]=3$ (minimal polynomial is $x^3-2$) is prime, by the chain rule $[M\cap N:\mathbb Q]=1$, i.e. $M\cap N=\mathbb Q$. Also $[N:\mathbb Q]=3$ (minimal polynomial is $x^3-2$). |
Concept of order statistics | They are the composition of measurable functions with the random vector $X=(X_{1}, \ldots, X_{n})$, so they are random variables (since the composition of a measurable function with a random variable is a random variable).
Take for example the case $n=2$. Then we have two random variables $X_1,X_2$, so $X=(X_1,X_2)$ is a random vector. In this case your $X_{(1)}$ would be $X_{(1)}=\min(X)$ and your $X_{(2)}=\max(X)$. The functions $g=\min$ and $h=\max$ are measurable, so $X_{(1)}$ and $X_{(2)}$ are random variables.
For a generic $n$ you would consider $n$ different measurable functions $g_i$, where $g_i$ gives you the $i$-th smallest component of $X=(X_{1}, \ldots, X_{n})$.
What is the probability that you have a Straight Flush if you have a Flush? | By definition of a conditional probability,
$$
P(\text{straight flush}\mid\text{flush})=\frac{P(\text{straight flush}\cap\text{flush})}{P(\text{flush})}=\frac{P(\text{straight flush})}{P(\text{flush})}
$$
since if we have a straight flush, we also have a flush. |
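Plugging in the standard $5$-card counts (a sketch): there are $40$ straight flushes (including royal flushes) and $4\binom{13}{5}=5148$ flushes of any kind.

```python
from math import comb

p_straight_flush = 40 / comb(52, 5)
p_flush = 4 * comb(13, 5) / comb(52, 5)
print(p_straight_flush / p_flush)   # 40/5148 ~ 0.00777
```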
Possible Number Combos That I can not figure out | Hint: There are $\binom{4}{1}$ ways to choose the QB. For each such way, there are $\binom{8}{2}$ ways to choose the RB's. For every way of choosing the QB and the RB's, there are $\binom{12}{3}$ ways to choose the WR's. And so on. |
Prove an inequality using Cauchy-Schwarz. | If $x=0$, there is nothing to prove. If not, choose $y=x$; then $\|x\|^{2}\leq c\|x\|$, and now divide both sides by $\|x\|$.
How can I effectively solve linear system $Ax = b$ with some known variables | Apply a permutation matrix so that the known values of $x$ are all at the bottom of the vector.
Then, your system is
$$A\begin{bmatrix}x_{unknown}\\x_{known}\end{bmatrix} = \begin{bmatrix}b_{unknown}\\b_{known}\end{bmatrix}$$
Now, you can also write $A$ as a block matrix $$A=\begin{bmatrix}A_{11}&A_{12}\\A_{21}&A_{22}\end{bmatrix}$$and get a new, smaller system to solve, since
$$A_{11} x_{unknown} + A_{12} x_{known} = b_{unknown}$$
meaning that $$A_{11}x_{unknown} = b^*$$
where $b^*=b_{unknown} - A_{12}x_{known}$ |
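A numpy sketch of this (assuming the permutation has already been applied, with the known entries at the bottom; the numbers are made up):

```python
import numpy as np

A = np.array([[4., 1., 2.],
              [1., 3., 0.],
              [2., 0., 5.]])
b = np.array([7., 8., 9.])
x_known = np.array([1.5])                 # say the last entry of x is known

k = len(x_known)
A11, A12 = A[:-k, :-k], A[:-k, -k:]
x_unknown = np.linalg.solve(A11, b[:-k] - A12 @ x_known)
print(x_unknown)
```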
Likelihood of N runs given a certain set of black and white balls | C Monsour has provided you with a correct answer. Here is another method.
A sequence of $9$ black and $7$ white balls is completely determined by choosing $9$ of the $9 + 7 = 16$ positions for the black balls. Therefore, there are
$$\binom{16}{9}$$
sequences in the sample space.
Since there are four runs of black balls and three runs of white balls, a favorable sequence must begin with a black ball. Therefore, to count the favorable sequences, we need to count the number of ways of dividing the nine black balls into four runs and the seven white balls into three runs.
The number of ways nine black balls can be placed in four runs is the number of solutions of the equation
$$b_1 + b_2 + b_3 + b_4 = 9$$
in the positive integers. A particular solution corresponds to the placement of three addition signs in the eight spaces between successive ones in a row of nine ones.
$$1 \square 1 \square 1 \square 1 \square 1 \square 1 \square 1 \square 1 \square 1$$
For instance, placing an addition sign in the third, fifth, and eighth spaces corresponds to the solution $b_1 = 3$, $b_2 = 2$, $b_3 = 3$, $b_4 = 1$. The number of such solutions is the number of ways we can select three of the eight spaces between successive ones to be filled with addition signs, which is
$$\binom{8}{3}$$
The number of ways seven white balls can be placed in three runs is the number of solutions of the equation
$$w_1 + w_2 + w_3 = 7$$
in the positive integers. A particular solution corresponds to the placement of two addition signs in the six spaces between successive ones in a row of seven ones. Therefore, the number of such solutions is the number of ways we can choose two of the six spaces between successive ones in a row of seven ones to be filled with addition signs, which is
$$\binom{6}{2}$$
Thus, the number of favorable cases is
$$\binom{8}{3}\binom{6}{2}$$
which gives the probability
$$\frac{\dbinom{8}{3}\dbinom{6}{2}}{\dbinom{16}{9}}$$ |
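A brute-force sketch confirming the count, enumerating all $\binom{16}{9}=11440$ arrangements:

```python
from itertools import combinations
from math import comb

favorable = 0
for blacks in combinations(range(16), 9):
    bset = set(blacks)
    seq = ['B' if i in bset else 'W' for i in range(16)]
    # a run starts wherever a color differs from its predecessor
    b_runs = sum(c == 'B' and (i == 0 or seq[i - 1] != 'B') for i, c in enumerate(seq))
    w_runs = sum(c == 'W' and (i == 0 or seq[i - 1] != 'W') for i, c in enumerate(seq))
    favorable += (b_runs == 4 and w_runs == 3)

print(favorable, comb(8, 3) * comb(6, 2))   # both 840
print(favorable / comb(16, 9))              # the probability
```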
Differential Equations/Uniquness | Hint: ignoring the initial condition $y(0) = \sqrt{2}$, the function $\tilde{y}(t) \equiv 0$ is a solution to your differential equation.
Hint 2: argue by contradiction: if your solution is not always positive, then there must be some time $t_0$ such that $y(t_0) = 0$. Use Hint 1 and the uniqueness theorem. |
If $A,B$ are diagonalisable, does $AB$ diagonalisable imply $BA$ diagonalisable? | Counterexample:
\begin{aligned}
AB&=\pmatrix{1&0&0\\ 0&0&0\\ 0&0&1}\pmatrix{0&0&0\\ 1&1&0\\ 0&0&1}=\pmatrix{0&0&0\\ 0&0&0\\ 0&0&1},\\
BA&=\pmatrix{0&0&0\\ 1&1&0\\ 0&0&1}\pmatrix{1&0&0\\ 0&0&0\\ 0&0&1}=\pmatrix{0&0&0\\ 1&0&0\\ 0&0&1}.
\end{aligned} |
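A quick numeric confirmation (sketch): the eigenvalue $0$ has algebraic multiplicity $2$ for both products, but its geometric multiplicity, $3-\operatorname{rank}(M)$, differs.

```python
import numpy as np

A = np.array([[1., 0., 0.], [0., 0., 0.], [0., 0., 1.]])
B = np.array([[0., 0., 0.], [1., 1., 0.], [0., 0., 1.]])

for name, M in (("AB", A @ B), ("BA", B @ A)):
    # geometric multiplicity of eigenvalue 0 is 3 - rank(M)
    print(name, 3 - np.linalg.matrix_rank(M))
# AB: 2 (matches the algebraic multiplicity), BA: 1 (defective)
```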
Besides proving new theorems, how can a person contribute to mathematics? | You can create new jobs for mathematicians, e.g. by funding institutes like Jim Simons. Arguably this does much more for mathematics than actually doing mathematics due to replaceability: the marginal effect of becoming a mathematician is that you do marginally better mathematics than the next best candidate for your job, which is a much smaller effect than creating a new mathematician job where there wasn't one before.
You can also work on tools for mathematicians to use like arXiv (or MathOverflow!). Arguably this also does much more for mathematics than actually doing mathematics. Incidentally, arXiv was developed by a physicist, Paul Ginsparg, and almost none of the mathematics graduate students I've talked to about this know his name. |
Given $f(x,y)$ whats the probability $X>Y$. Limits of double integration | It is very convenient that the random variables are called $X$ and $Y$.
Draw and label the usual axes. The joint density function of $X$ and $Y$ "lives" in the first quadrant. We want to find the probability that $X\gt Y$. Draw the line $y=x$. The probability that $X\gt Y$ is the probability that the ordered pair $(X,Y)$ will end up below the line $y=x$. In our case, that means in the first quadrant and below the line $y=x$.
The joint density is symmetric about $y=x$. The probability of ending up on the line $y=x$ is $0$. It follows that $\Pr(X\gt Y)=\frac{1}{2}$.
But that does not answer your question about integration. So let us assume that the problem is a little more complicated, and that the parameters of the two independent exponentials are possibly different, say $\lambda$ for $X$ and $\mu$ for $Y$. The joint density is $\lambda\mu e^{-\lambda x}e^{-\mu y}$ for $x,y\gt 0$.
Call this function $f(x,y)$.
Let $D$ be the part of the first quadrant that is below the line $y=x$. The required probability is the double integral
$$\iint_D f(x,y) dA =\iint_D f(x,y)\,dy\,dx $$
We want to express the double integral as an iterated integral. So we will integrate first with respect to $y$, and then with respect to $x$, or maybe the other way around.
First with respect to $y$: Now our picture comes handy. Our region $D$ is the part of the first quadrant below $y=x$. So for any $x$, the variable $y$ travels from $y=0$ to $y=x$. Then $x$ travels from $0$ to $\infty$. Our iterated integral is
$$\int_{x=0}^\infty\left(\int_{y=0}^x f(x,y)\,dy\right)\,dx.$$
First with respect to $x$: Note that in the region $D$, $x$ "starts" at $y$ and goes to $\infty$. Then $y$ travels from $0$ to $\infty$. That yields the integral
$$\int_{y=0}^\infty\left(\int_{x=y}^\infty f(x,y)\,dx\right)\,dy.$$
We can do the integration in either of the two ways. It turns out that integrating first with respect to $x$ makes the algebra somewhat simpler. |
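For reference, here is a sympy sketch carrying out the second iterated integral; it returns $\mu/(\lambda+\mu)$, which reduces to $\frac12$ when $\lambda=\mu$, matching the symmetry argument above:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
lam, mu = sp.symbols('lambda mu', positive=True)
f = lam * mu * sp.exp(-lam * x) * sp.exp(-mu * y)

inner = sp.integrate(f, (x, y, sp.oo))     # first with respect to x
prob = sp.integrate(inner, (y, 0, sp.oo))
print(sp.simplify(prob))                   # mu/(lambda + mu)
```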
How to identify stiffness of a second order non-linear ode | Stiffness concerns how 'hard' a problem is to solve numerically. If a problem is stiff, it typically means that you would have to use a very small time-step in an explicit scheme to solve it without seeing spurious instabilities. This means that you will be waiting a while to simulate out to a reasonable time. A stiff solver is more stable somehow (typically by being implicit/semi-implicit) and allows you to take a larger time-step. MATLAB does all of this time-step selecting business for you 'under the hood' unless you provide it with input options. The only thing it can't really do for you is select the most optimal scheme to solve your problem. That's what choosing ode15s, or ode45, or ode23s, is all about. If you are getting a solution from any of these methods, regardless of the stiffness of your problem, it is correct (up to different numerical accuracies for different schemes); you just may have waited longer than you needed to.
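Here is a Python sketch of the same phenomenon with scipy (roughly, MATLAB's ode45 corresponds to RK45 and ode15s to an implicit BDF method; the test problem is my own choice):

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):                 # a classic stiff test problem
    return [-1000.0 * (y[0] - np.cos(t))]

for method in ("RK45", "BDF"):
    sol = solve_ivp(rhs, (0, 10), [0.0], method=method, rtol=1e-6)
    print(method, "steps:", sol.t.size)
# both answers are correct; the explicit RK45 just needs far more steps
```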
Trapezoidal Rule Negative Error | The trapezoidal rule is estimating your function as a sequence of line segments over a set of intervals.
i.e. at each point of the interval $[a,b]$, $\lambda f(a) + (1-\lambda) f(b)$ with $\lambda \in [0,1]$ is being used as a stand-in for $f(\lambda a + (1-\lambda) b)$.
Since $f(x)$ is convex, $\lambda f(a) + (1-\lambda) f(b)\ge f(\lambda a + (1-\lambda) b)$ |
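A quick check with the convex function $f(x)=x^2$ on $[0,1]$ (sketch):

```python
f = lambda x: x * x
xs = [i / 10 for i in range(11)]
trap = sum((f(xs[i]) + f(xs[i + 1])) / 2 * (xs[i + 1] - xs[i]) for i in range(10))
print(trap, 1 / 3)   # 0.335 > 1/3: the chords overestimate the convex f
```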