Help with a Poisson Process problem | Let $\{N(t):t\geqslant0\}$ be a Poisson process with rate $\lambda>0$, $T>0$, and $n$ a positive integer. Then
$$
\mathbb P(N(t)\leqslant n-1) = \sum_{k=0}^{n-1}e^{-\lambda t}\frac{(\lambda t)^k}{k!}.
$$
Here $\lambda=\frac34$, $T=5$, and $n=2$, so we have
$$
\mathbb P(N(5)\leqslant 1) = e^{-\frac34\cdot5}\left(1 + \frac34\cdot 5\right)= \frac{19}4e^{-\frac{15}4}\approx0.1117093.
$$
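A quick numerical check (a Python sketch; `scipy.stats.poisson` is the standard Poisson distribution object):

```python
from scipy.stats import poisson

# P(N(5) <= 1) with rate 3/4: the count N(5) is Poisson with mean 3.75
print(poisson.cdf(1, 0.75 * 5))  # 0.1117093...
```

This matches the closed form above. |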
Show that $\varphi (a) = a^n$ is an automorphism of $G$, if $G$ is abelian and $GCD(n, |G|)=1$ | I assume you have shown that $\phi$ is a homomorphism.
Showing that a homomorphism is injective is equivalent to showing that its kernel is trivial.
Let $a \in \ker \phi$. We want to show that $a=1$. Let $g$ be the order of $G$ and let $k$ be the order of $a$. By Lagrange's theorem, $k \mid g$. Since $a \in \ker \phi$, we know that $a^n = 1$. Hence $k \mid n$. Since $\gcd(n,g) = 1$, there exist $x,y \in \mathbb{Z}$ such that $nx + gy = 1$. Now $k$ divides both $g$ and $n$, so it divides any integer combination of them, in particular $nx + gy$. So $k \mid nx + gy = 1$. Therefore $k=1$ (orders of elements are $>0$). So $a$ has order $1$. It follows that $a$ is the neutral element.
$\ker \phi$ is trivial, so $\phi$ is injective.
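As a quick sanity check, here is a Python sketch for cyclic groups $\mathbb{Z}_m$ (written additively, so $\varphi$ becomes $a \mapsto na \bmod m$; the range bounds are arbitrary): the map is injective exactly when $\gcd(n,m)=1$.

```python
from math import gcd

for m in range(1, 30):
    for n in range(1, 30):
        # a -> n*a mod m is injective iff its image has m distinct values
        injective = len({n * a % m for a in range(m)}) == m
        assert injective == (gcd(n, m) == 1)
```

The assertion never fails. |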
Distance from $f(0)$ to the boundary of $D$ if $f$ maps open unit disk to $D$ conformally | Hint: Let $R= d(f(0),\partial D).$ Define $g(z) = f(0)+Rz.$ Then $g$ maps the open unit disc onto $B(f(0),R)\subset D.$ Consider $f^{-1}\circ g.$ |
Cover time chess board (king) | There is in principle no difficulty in answering this question.
As I point out in my answer here, calculating the expected cover
time of some set $\cal S$ reduces to calculating the expected hitting time of every possible non-empty subset $A$ of $\cal S$:
$$\mathbb{E}(\text{cover time})=\sum_A (-1)^{|A|-1} \mathbb{E}(T_A)$$
These hitting times are defined by $T_A=\inf(n\geq 0: X_n\in A)$.
Look out below! Ignorant of chess rules, I didn't realize that a king can move diagonally. The calculations below are based on a piece that can only move in four ways: north, south, east, or west.
Just to illustrate, let me show you the solution for a $2\times 2$ chessboard:
The expected time to cover the other 3 squares $\{a,b,c\}$ is equal to
$$\mathbb{E}(T_{a})+\mathbb{E}(T_{b})+\mathbb{E}(T_{c})-\mathbb{E}(T_{a,b})-\mathbb{E}(T_{a,c})-\mathbb{E}(T_{b,c})+\mathbb{E}(T_{a,b,c})$$
Standard Markov chain theory uses linear algebra to find these expected hitting times
$$\mathbb{E}(T_{a})=\mathbb{E}(T_{c})=3, \mathbb{E}(T_{b})=4,
\mathbb{E}(T_{a,b})=\mathbb{E}(T_{b,c})=2, \mathbb{E}(T_{a,c})=\mathbb{E}(T_{a,b,c})=1$$
Putting it all together, we find that the expected cover time is $3+3+4-2-2-1+1=6$.
Note that I counted the king's initial position as already covered. If you
require a return to your starting point you can modify the above technique.
The number of terms in the sum makes this method impractical for an $8\times 8$ chessboard, however!
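For small boards, though, the computation is easy to automate. Here is a Python sketch of the method for the $2\times 2$ example above (four-direction piece; hitting times via the standard linear system, then inclusion-exclusion):

```python
import itertools
import numpy as np

# 2x2 board; the piece moves north/south/east/west only
cells = [(0, 0), (0, 1), (1, 0), (1, 1)]

def neighbors(c):
    x, y = c
    return [(x + dx, y + dy) for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]
            if (x + dx, y + dy) in cells]

def hitting_time(A, start):
    # solve (I - Q) h = 1 on the states outside A, with h = 0 on A
    states = [c for c in cells if c not in A]
    idx = {c: i for i, c in enumerate(states)}
    M = np.eye(len(states))
    for c in states:
        for nb in neighbors(c):
            if nb not in A:
                M[idx[c], idx[nb]] -= 1 / len(neighbors(c))
    return np.linalg.solve(M, np.ones(len(states)))[idx[start]]

start = (0, 0)
others = [c for c in cells if c != start]
cover = sum((-1) ** (len(A) - 1) * hitting_time(set(A), start)
            for r in range(1, len(others) + 1)
            for A in itertools.combinations(others, r))
print(cover)  # 6.0
```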
Added: If my calculations are correct, the expected cover time for the $3\times 3$ board is $${140803109038245\over 4517710919176}\approx 31.1669$$ |
Parameterize Ellipse for Line Integral Computation | It is actually a circle. The equation is $(x-\frac a 2)^{2}+y^{2}=\frac {a^{2}} 4$. You can parametrize it by writing $x=\frac {a} 2+\frac {|a|} 2 \cos \theta, y=\frac {|a|} 2 \sin \theta$. |
Determine coefficient λ so vectors p and q are mutually perpendicular | Hint: we get
$$\vec{p}\cdot \vec{q}=3\lambda\vec{a}^2+51\vec{a}\cdot \vec{b}-\lambda\vec{a}\cdot \vec{b}-17\vec{b}^2$$
and use that $$\vec{x}^2=|\vec{x}|^2$$ |
This cant be right can it? Negative exponents in algebra book. | When you use slashes for division it is not clear where the precedence is. The official convention, discussed many places on this site, is that you compute from left to right, so $1/1/1/3^4=((1/1)/1)/3^4=(1/1)/3^4=1/3^4=3^{-4}$. The exponent binds more tightly than division. Your book, however, has used stacked fractions, which come with implied parentheses. $\dfrac 1{\frac 1{3^4}}=1/(1/3^4)=1/3^{-4}=3^4$. The grouping is the cause of the disagreement.
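Programming languages follow the same left-to-right convention, which gives a quick check (a Python sketch):

```python
print(1 / 1 / 1 / 3**4)   # 0.012345679... = 3**(-4): slashes group left to right
print(1 / (1 / 3**4))     # 81.0 = 3**4: the stacked fraction's implied parentheses
```

Both readings are internally consistent; only the grouping differs. |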
Highschool Inequality | We have
$$
\sqrt{(t^2-4)^2+(3t-6)^2}=\sqrt{|t-2|^2 ((t+2)^2+3^2)}< b \sqrt{(t+2)^2+3^2}.
$$
Moreover, by the triangle inequality the last term is $\le$ than
$$
b \sqrt{(|t-2|+4)^2+3^2}<b \sqrt{\left(b^2+8b+9\right)+16}.
$$
Therefore it would be sufficient that
$$
b\sqrt{b^2+8b+25}<a,
$$
which is equivalent to
$$
b^2(b^2+8b+25)<a^2.
$$
Obviously the LHS is increasing in $b$ [with $b>0$], therefore it is sufficient to choose a sufficiently small $b>0$. |
The equivalence classes and their representatives | "I can show that ∼ is an equivalence relation. For the reflexivity, simmetry and transitivity to hold simultaneously, λ should be 1, except if there's a mistake in my reasoning."
No, $\lambda$ is arbitrary. For reflexivity, take $\lambda=1$. But to prove symmetry, let $(a,b,c)=\lambda(x,y,z)$ for some $\lambda\ne 0$. Then $(x,y,z)=\frac{1}{\lambda}(a,b,c)$.
Thus $(a,b,c)\sim (x,y,z)$ implies $(x,y,z)\sim(a,b,c)$. Similarly for transitivity.
The quotient set is the projective plane over the reals. A transversal is given by
$$\{(1,b,c)\mid b,c\in\Bbb R\}\cup \{(0,1,c)\mid c\in\Bbb R\}\cup \{(0,0,1)\}.$$
The points with first component 0 are the points at infinity. |
Is my general formula for polynomial multiplication right? | Using your notation, we know that the product $p(x)q(x)$ is a polynomial of degree $n+m$.
Hence one has $$p(x)q(x) = \sum_{k=0}^{n+m} c_k x^k,$$
with coefficients $c_k$ to be determined.
Let $k \in \{ 0, \cdots, n+m \}$. Now the question is: how do you obtain terms in $x^k$ in the multiplication? Here are the ways to get some $x^k$, and we'll just have to add them altogether.
multiply the constant term of $p$ by the term of degree $k$ of $q$;
multiply the term of degree $1$ of $p$ by the term of degree $k-1$ of $q$;
...
multiply the term of degree $j$ of $p$ by the term of degree $k-j$ of $q$;
...
multiply the term of degree $k$ of $p$ by the constant term of $q$.
Therefore, one has
$$c_k = \sum_{i=0}^k p_i q_{k-i}$$
where $p_i$ (resp. $q_i$) denotes the $i$-th coefficient of $p$ (resp. $q$).
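In code, the coefficient list of the product is the discrete convolution of the two coefficient lists; a minimal Python sketch:

```python
def poly_mul(p, q):
    # p[i] and q[j] are the coefficients of x**i and x**j
    c = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            c[i + j] += pi * qj
    return c

# (1 + 2x)(3 + x^2) = 3 + 6x + x^2 + 2x^3
print(poly_mul([1, 2], [3, 0, 1]))  # [3, 6, 1, 2]
```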
That gives us the general formula you're looking for:
$$ \bbox[lightgreen,5px,border:2px solid green]{\left( \sum_{i=0}^{\deg p} p_i x^i\right) \cdot \left( \sum_{j=0}^{\deg q} q_j x^j\right) = \sum_{k=0}^{\deg p + \deg q} \sum_{l=0}^k p_l q_{k-l} x^k}.$$ |
A divisibility question involving Fermat's theorem use | By Fermat's theorem, $a^6 \equiv 1 \pmod{7}$ thus $a^6-1 = 7k$ for some $k \in \mathbb{N}$. Likewise, $a^6 = (a^2)^3 \equiv 1 \pmod{3}$ thus $a^6-1 = 3m$. Finally, $a^6-1 = (a-1)(a+1)(a^4+a^2+1)$. For any odd $a$, the numbers $a-1$ and $a+1$ are consecutive even numbers, so one of them is divisible by $4$ and the other by $2$; thus the whole expression is divisible by $8$. |
Find $\lim_{x \to 1} \cos(\pi \cdot x) \cdot \sqrt{\frac{(x-1)^2}{(x^2-1)}} \cdot \frac{1-x}{x^2+x-1}$ (I need a review of my resolution please :) ) | Your answer is correct.
The limit can be split into a product of limits, provided each of them exists:
$$\lim_{x \to 1} \cos(\pi \cdot x) \cdot \sqrt{\frac{(x-1)^2}{(x^2-1)}} \cdot \frac{1-x}{x^2+x-1}$$
$$=\lim_{x \to 1} \cos(\pi \cdot x) \cdot \lim_{x \to 1}\sqrt{\frac{x-1}{x+1}} \cdot \lim_{x \to 1}\frac{1-x}{x^2+x-1}$$
$$=(-1)\cdot 0\cdot 0=\color{blue}{0}$$ |
What does relaxing the iid assumptions mean? Intuitive and technical perspectives. | Some parts of your question are not clear, but if you want examples in statistics in which weaker assumptions than i.i.d. can bear fruit, here are two:
The weak law of large numbers says that if $X_1,X_2,X_3,\ldots$ are uncorrelated (a weaker assumption than independence) random variables each with expected value $\mu$ and variance $\sigma^2$ (a weaker assumption than identical distribution) then
$$
\forall\varepsilon>0\ \lim_{n\to\infty} \Pr\left(\left|\frac{X_1+\cdots+X_n}{n}-\mu\right| < \varepsilon \right) = 1.
$$
The Gauss--Markov theorem considers a model $Y=X\beta+\varepsilon$ where $X\in\mathbb R^{n\times p}$ with $n\gg p$, $\beta\in\mathbb R^{p\times 1}$, $\varepsilon\in\mathbb R^{n\times 1}$ and so $Y\in\mathbb R^{n\times 1}$, and $X$ and $Y$ are observable data, $\beta$ unobservable and fixed, and $\varepsilon$ is unobservable and random. "Fixed" means not random; and "random" means something that changes if a new sample is taken. Thus $\varepsilon$ and hence $Y$ change if a new sample is taken, while the observable $X$ and the unobservable $\beta$ remain the same. One assumes the entries in $\varepsilon$ have expected value $0$ and all have the same variance $\sigma^2<\infty$ (a weaker assumption than identical distribution) and they are uncorrelated (a weaker assumption than independence). The conclusion is that the best linear unbiased estimator (B.L.U.E.) of $\beta$ is $\hat\beta=(X^TX)^{-1}X^T Y$, the least-squares estimator. ("Linear" means linear as a function of $Y$.) |
Find the inverse of a function like $f(t,x,y)=(\frac{x}{t+1},\frac{y}{t+1})$ | This function is not invertible because it is not injective; for example:
$$f(0,1,1)=f(1,2,2)=\cdots=(1,1)$$ |
Solving system of ODEs in Matlab | function testODE
k=pi/2; p=[-40:40];
TT=1;
z=zeros(1,numel(p)-2);
w=-2*cos(k);
gamma=1;
z=zeros(1,numel(p)-2)
v=-2.5;
v1=-2.625;
v2=-2.375;
psi3=TT*exp(3*i*k);
psi2=TT*exp(2*i*k);
psi1=-psi3+(v2-w+gamma*abs(psi2).^2).*psi2;
psi0=-psi2+(v1-w+gamma*abs(psi1).^2).*psi1;
R0=(psi0.*exp(-i*k)-psi1)./(exp(-i*k)-exp(i*k));
R1=(psi0.*exp(i*k)-psi1)./(exp(i*k)-exp(-i*k));
y = (p <= 1) .* (R0 * exp(1i*k*p) + R1 * exp(-1i*k*p)) + (p >= 2) .* (TT * exp(1i*k*p));
z(1+(numel(p)-2)/2:2+(numel(p)-2)/2)=[v*y(1+ceil(end/2)),v*y(2+ceil(end/2))]
f = @(t,y) [(R0 * exp(1i*k*p(1)) + R1 * exp(-1i*k*p(1))).*exp(-1i*w*t); diff(y,2)+2*y(2:end-1); (TT * exp(1i*k*p(end))).*exp(-1i*w*t)];
y0 = y;
t0 = 0;
T = 5;
N = 50;
[Y,t] = RK42d(f, t0, T, y0, N);
figure;
hold on;
plot(p, real(Y), 'r');   % real part
plot(p, imag(Y), 'b');   % imaginary part
end
function [Y, t] = RK42d(f, t0, T, y0, N)
%the input of f and y0 must both be vectors
%composed of two functions and their respective
%initial conditions
%the vector Y will return a vector as well
h = (T - t0)/(N - 1); %Calculate and store the step size
Y = zeros(N, numel(y0)); %Initialize the solution array
t = linspace(t0,T,N); % A vector to store the time values
Y(1,:) = y0'; % Start Y vector at the initial values.
for i = 1:(N-1)
y = Y(i,:)';
k1 = f(t(i),y);
k2 = f(t(i) +0.5*h, y +0.5*h*k1);
k3 = f(t(i) +0.5*h, y +0.5*h*k2);
k4 = f(t(i) +h , y +h*k3);
Y(i+1,:) = y + (h/6)*(k1+ 2.*k2 + 2*k3 + k4);
%Update approximation
end
end |
Tangent plane to the surface $\cos(x)\sin(y)e^z = 0$? | Let $f : U\subset \mathbb{R}^3\to \mathbb{R}$ be given by $f(x,y,z)=\cos x\sin y e^z$ then we have that your surface is indeed a level set $M = f^{-1}(0)$. Then it's easy: remember that the gradient of a function is orthogonal to the level sets. Using this we have that
$$\nabla f(x,y,z)=(-\sin x\sin ye^z, \cos x\cos y e^z, \cos x\sin y e^{z})$$
So that at $(\pi/2,1,0)$ we have $\nabla f(\pi/2,1,0)=(-\sin 1,0,0)$, so that the normal is a multiple of the vector $e_1$, and hence since the magnitude of the normal vector doesn't matter, we can pick the normal vector to be $e_1$. The tangent plane at $(\pi/2,1,0)$ is therefore the plane $x=\pi/2$, which is parallel to the $yz$ plane. |
double summation derivation problem | You missed the fact that what you have to write is the derivative with respect to $\tau_i$ $$\frac{\partial{L}}{\partial{\tau_{\color{red}{i}}}}=-2\sum_{j=1}^n(y_{ij}-\hat\mu-\hat\tau_i)$$ As for the sign, do not worry about it, since you will set the derivatives equal to $0$. |
Prove that the ordering relation $<_Z$ on the integers is well defined. | There seems to be one way to solve this with only the operations and relations defined in $\mathbb{N}$, provided one knows of the following theorem:
T1: Let $a,b,c$ be natural numbers. Then $a<b \iff a+c<b+c$
The proof builds on some other theorems shown during the construction of the natural numbers that I won't prove here.
Showing that $a+d<c+b \iff a'+d'<c'+b'$ is then rather straightforward because we don't need to worry about adding terms to the inequality, since it is possible to "remove" them again by theorem T1:
By (1.) we can "create" $a'$ from $a$ by adding $b'$:
$a+d < c+b \iff a+b'+d < c+b+b' \iff a'+b+d < c+b+b'$
In the same way we can create $d'$ from $d$ by (2.):
$a'+b+d < c+b+b' \iff a'+b+d+c' < c+b+b'+c' \iff a'+b+d'+c < c+b+b'+c' \iff a'+d'+(b+c)<c'+b'+(b+c)$
By theorem T1 we can then conclude that this is equivalent to $a'+d' < c'+b'$ |
Is $\mathbb{Q} \otimes_{\mathbb{Z}} \mathbb{Z}[x]$ isomorphic to $\mathbb{Q}[x]$ as a $\mathbb{Z}$-module? | I think your proof is correct but if it is not the mistake is at the very end, in the line:
'Putting the two diagrams together and use uniqueness of Universal Mapping property of tensor product yields $\phi \tilde{\tau}=Id$'.
The proofs that the maps exist and that the diagrams are commutative are perfectly fine and easy to follow. The fact that $\phi \tilde{\tau}=Id$ can also be verified directly from the definitions you gave so the conclusion is correct as well.
However I am not entirely sure how this conclusion would follow from the uniqueness of the UM property. This might however be a result of my own lack of familiarity with category theory.
Still: this uniqueness you are talking about is the thing that gives us $\tilde{\tau}$, right? And so it follows from the diagrams being commutative and the uniqueness of the UM property that if $\phi$ has a (right) inverse $\phi^{-1}$ then this inverse must equal $\tilde{\tau}$. But how do we know that $\phi$ has a right inverse? Perhaps you could clarify this step a bit further.
EDIT: with the added diagram I believe the proof is watertight. |
Multiplication of rational with irrational number? | Yes. If $ab$ and $a$ are both rational and $a \neq 0$, then $\frac{ab}{a} = b$ is rational as well. Given that $b$ is irrational, we must have $a=0$. Since $b$ is irrational it is nonzero, so $\frac{a}{b}$ is well-defined and equals $0$. |
Definiteness of a submatrix of a positive definite matrix | In this case $A_w$ is positive definite. If $B$ is positive definite then
$W^TBW$ is always positive semidefinite, and is positive definite if
the columns of $W$ are linearly independent.
In general, for a column vector $x$,
$$x^T(W^TBW)x=(Wx)^TB(Wx)\ge0$$
as $B$ is positive definite. If in addition, $x\ne 0$ and $W$ has linearly
independent columns, then $Wx\ne0$ and
$$x^T(W^TBW)x=(Wx)^TB(Wx)>0.$$ |
Reverse Z-Value for normal distrubtion | This is indeed a question about the inverse of a normal cumulative distribution function (CDF). Such inverse functions are often called quantile functions. The particulars of the computation depend on whether you are using software or printed normal tables.
Software: If you have software available, you can use your equation $P(X \le C) = 0.22,$
where $X \sim \mathsf{Norm}(\mu = 4,\, \sigma = 7).$ In R statistical
software normal quantile functions are denoted qnorm, so the answer $-1.405$ can
be obtained as:
qnorm(.22, 4, 7)
# [1] -1.405352
This is similar to @Henry's code, in which the argument lower.tail=F
takes 22% of the probability from the upper tail, whereas qnorm
without this argument takes 22% from the lower tail (according to the
usual definition of the CDF).
Other statistical and mathematical software and calculators perform
similar computations using different syntax.
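For instance, in Python (a sketch using scipy.stats):

```python
from scipy.stats import norm

# quantile function (inverse CDF) of Norm(mu = 4, sigma = 7) at 0.22
print(norm.ppf(0.22, loc=4, scale=7))  # -1.4053515...
```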
The plot below shows the PDF and CDF of $\mathsf{Norm}(4, 7)$ with
red dashed lines indicating key quantities.
Printed tables: If you are doing this with printed standard normal CDF tables,
then you need to 'standardize' much as you have done:
$$P(X \le C) = P\left(\frac{X - \mu}{\sigma} \le \frac{C - 4}{7}\right)
= P\left(Z \le \frac{C-4}{7}\right) = 0.22,$$
where $Z \sim \mathsf{Norm}(0,1).$ Looking in the body of the table
you can find a z-score that corresponds as nearly as possible to 0.22
(where the meaning of 'corresponds' depends on the format of the table). Here, the corresponding
'z-score' in the margin of the table is $-0.77.$ (Most tables allow no
more than two-place accuracy).
Then $(C - 4)/7 = -0.77,$ so that $C \approx -1.39.$ But you can trust only two significant digits of the answer,
so it amounts to $-1.4.$ By interpolation in the body of the table, you
might get a little more accuracy. But your answer is about as close
as you can reliably get using printed normal tables. |
Find $\lim_{x \to \infty} (-1)^{x-1}\sin(\pi\sqrt{x^2+0.5x+1})$ with $x\in\mathbb{N}$ | Note that
$$\sqrt{x^2+0.5x+1}-x=\frac{0.5x+1}{\sqrt{x^2+0.5x+1}+x}$$
Then, if $x$ is an integer, recalling that $\sin(t-x\pi)=(-1)^x\sin(t)$, it follows that
$$(-1)^{x-1}\sin(\pi\sqrt{x^2+0.5x+1})=-\sin(\pi\sqrt{x^2+0.5x+1}-\pi x)\\=
- \sin\left(\frac{\pi(0.5x+1)}{\sqrt{x^2+0.5x+1}+x}\right).$$
Can you take it from here? |
Does $g$ always normalize $H \cap H^g$? | No.
As a counterexample take $G$ to be the split extension of an elementary abelian group of order $16$ by a cyclic group of order $4$ with this action
$$
G=\left\langle x_1,x_2,x_3,x_4, g\mid x_i^2=1, x_i x_j=x_j x_i, g^4=1, x_i^g=x_{(i \bmod 4)+1}\right\rangle
$$
and let $H=\langle x_1,x_2\rangle$.
Then $H\cap H^g=\langle x_2\rangle$ is not normalised by $g$. |
Can someone explain to me in layman's terms the difference between a dot product and inner product? | "Dot product" is a special case of "inner product". We define "dot product" on spaces $\mathbb R^n$. But "inner product" is also defined on other spaces; and indeed there are other inner products on $\mathbb R^n$ in addition to the dot product. See the Wikipedia page for more information. |
Proof using derivative information to find limit | You made a mistake. (It's not a big one but it impacts your answer).
On Line $3$, you should have had $\sqrt{x}+\sqrt{a}$ in numerator.
Now, redo the steps (easy), and you'll end up with $2\sqrt{a}$. |
Inequality. $\frac{a}{\sqrt{b}}+\frac{b}{\sqrt{c}}+\frac{c}{\sqrt{a}}\geq3$ | Let $a=x^2,b=y^2,c=z^2$ with $x,y,z>0$; the problem becomes
$$x^2+y^2+z^2=3\Longrightarrow \dfrac{x^2}{y}+\dfrac{y^2}{z}+\dfrac{z^2}{x}\ge 3$$
note
$$(x^2+y^2+z^2)^2=x^4+y^4+z^4+2x^2y^2+2z^2x^2+2y^2z^2$$
use
the AM-GM inequality, then we have
$$4\dfrac{x^2}{y}+2x^2y^2+x^4\ge 7\sqrt[7]{\left(\dfrac{x^2}{y}\right)^4 (x^2y^2)^2\, x^4}=7x^{\frac{16}{7}}$$
and the same
$$4\dfrac{y^2}{z}+2y^2z^2+y^4\ge 7y^{\frac{16}{7}}$$
$$4\dfrac{z^2}{x}+2z^2x^2+z^4\ge 7z^{\frac{16}{7}}$$
so
$$4\sum\dfrac{x^2}{y}+(x^2+y^2+z^2)^2\ge 7\sum x^{\frac{16}{7}}$$
$$\Longleftrightarrow x^{\frac{16}{7}}+y^{\frac{16}{7}}+z^{\frac{16}{7}}\ge x^2+y^2+z^2$$
use AM-GM inequality
we have
$$7x^{\frac{16}{7}}+1=x^{\frac{16}{7}}+x^{\frac{16}{7}}+\cdots+x^{\frac{16}{7}}+1\ge 8\sqrt[8]{x^{\frac{16}{7}\cdot 7}}=8x^2 $$
$$7\sum x^{\frac{16}{7}}+3\ge 8\sum x^2,\quad\text{and since }x^2+y^2+z^2=3:\quad 7\sum x^{\frac{16}{7}}+\sum x^2\ge \sum 8x^2$$
so
$$\sum x^{\frac{16}{7}}\ge \sum x^2$$
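A quick numerical spot-check of the transformed inequality (a Python sketch; random points rescaled onto the constraint surface):

```python
import random

random.seed(0)
for _ in range(10_000):
    x, y, z = (random.uniform(0.01, 1.0) for _ in range(3))
    s = (3 / (x*x + y*y + z*z)) ** 0.5   # rescale so that x^2 + y^2 + z^2 = 3
    x, y, z = s*x, s*y, s*z
    assert x*x/y + y*y/z + z*z/x >= 3 - 1e-9
```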
In general, we have
if $x^n+y^n+z^n=3$ and $2p+q>2n$ with $p,q,n\in \mathbb N^{+}$, then
$$\dfrac{x^p}{y^q}+\dfrac{y^p}{z^q}+\dfrac{z^p}{x^q}\ge 3$$ |
Second cohomology of a torsion-free hyperbolic group | Based on my comments:
First of all, the question about hyperbolic groups is very different from the one about fundamental groups of closed connected manifolds of negative curvature: "Most" hyperbolic groups are very much unlike "manifold groups." Second: The mentioned paper by Epstein and Fujiwara is interesting but totally irrelevant for the purpose of your question. Now, your real question is:
Is there an example of a closed connected even-dimensional manifold $M$ of negative curvature such that $b_2(M)=0$?
Here is what I know: The first interesting case, of course, is of 4-dimensional manifolds. Such a manifold $M$ would have positive Euler characteristic (see the references here), hence, effectively, you are asking about the existence of a negatively curved 4-dimensional rational homology sphere. This is an open problem (stated explicitly for manifolds of constant negative curvature by Bruno Martelli, I think). If there is such a hyperbolic 4-manifold, it would have the smallest possible volume among hyperbolic 4-manifolds.
Among locally-symmetric manifolds of negative curvature, complex-hyperbolic ones always have $b_2>0$ (because of the Kähler class). I do not believe there are any explicitly known examples (say, meaning that somebody computed their Betti numbers) of closed real-hyperbolic manifolds of dimension $\ge 6$. There are also no known vanishing theorems for $b_2$ in this class of manifolds. (All the known results are on the "nonvanishing side"; they are of the type: There exists a finite-sheeted covering space with positive Betti numbers $b_i$ for some values of $i$.) This leaves one with quotients of quaternionic-hyperbolic spaces (and of the Cayley-hyperbolic plane). While there are no explicitly known examples (again, meaning that somebody computed Betti numbers), there might be vanishing/nonvanishing theorems for $b_2$ known in this class.
As for negatively curved manifolds of dimension $\ge 4$ which are not locally-symmetric, there is only a handful of constructions (which mostly use locally-symmetric manifolds as their starting point) and no known construction can ensure vanishing of $b_2$.
Thus, unless there are known vanishing results for $b_2$ in the case of torsion-free cocompact discrete subgroups of isometries of quaternionic-hyperbolic spaces ${\mathbf H}{\mathbb H}^n, n\ge 2$, your question should be treated as an open problem. |
A user's guide to Penrose graphical notation? | There are nice notes from a tutorial given at Siggraph in 2002 here : http://research.microsoft.com/apps/pubs/default.aspx?id=79791 |
If $a = \mathrm {sup}\ B$, how to show that the following holds? | Induction is not needed here, but rather the definition of supremum. Since $a = \sup B$ is the least upper bound on $B$, any other upper bound must be at least as large as $a$. In particular, since
$$a - \frac 1 n < a$$
it cannot be an upper bound for $B$. Likewise, since $$a < a + \frac 1 n$$
$a + 1/n$ is still an upper bound on $B$.
This statement really has nothing to do with natural numbers (suggesting that induction won't be fruitful): It's true that
$a = \sup B$ if and only if for every $\epsilon > 0$, $a + \epsilon$ is an upper bound on $B$, and $a - \epsilon$ is not. |
Packing custom length squares into a rectangle with a custom ratio | I think a better way to view this is to multiply the sides of the original rectangle up until they are both natural numbers and think of a grid of unit squares that you need to cover with squares with natural side. Your $2.5 \times 1$ rectangle becomes $5 \times 2$ and your $0.65 \times 1$ becomes $13 \times 20$. Your two examples use the greedy algorithm: find the largest square that fits in the remaining region, cut it out, and continue. The good news is that the process will terminate. The bad news is that it may not be optimal. An example is a $5 \times 6$ rectangle, which the greedy algorithm cuts into a $5 \times 5$ square and five $1 \times 1$'s for a total of six. You can cut it into two $3 \times 3$ squares and three $2 \times 2$'s for five.
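On an integer rectangle the greedy procedure is essentially the Euclidean algorithm; a minimal Python sketch (the function name is mine):

```python
def greedy_squares(a, b):
    # repeatedly cut the largest square (side min(a, b)) off an a x b rectangle
    squares = []
    while a > 0 and b > 0:
        if a > b:
            a, b = b, a
        squares.append(a)
        b -= a
    return squares

print(greedy_squares(5, 2))   # [2, 2, 1, 1]
print(greedy_squares(5, 6))   # [5, 1, 1, 1, 1, 1]
```

As the $5\times 6$ case shows, the greedy output of six squares can be beaten by a five-square dissection. |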
Do all fields with internal absolute values arise either as ordered fields or like $\mathbb{C}$ from them? | $\DeclareMathOperator{\id}{id}$
I am editing my message yet again to bring you if not an answer, a few ideas.
What I was trying to find as a counterexample was an ordered field $F$ equipped with its absolute value $| \ |_F$, a proper subfield $E$ and a morphism $\varphi: F_{>0} \rightarrow E_{>0}$ fixing $E_{>0}$ (we would then define $\varphi(0) := 0$ and $\forall x \in F_{>0}, \varphi(-x):= -\varphi(x)$) satisfying $\forall x,y \in F^{\times}, |\varphi(x+y)|_F \leq |\varphi(x)|_F + |\varphi(y)|_F$. The reason I thought this probably existed is that, $E^{\times}$ and $F^{\times}$ being abelian groups without torsion, there are plenty of morphisms $F_{>0} \rightarrow E_{>0}$ defined by choosing a rationally independent basis* of $E_{>0}$ and completing it into a rationally independent basis of $F_{>0}$, then taking projections or other sorts of endomorphisms. However, this does not ensure the condition $\forall x,y \in F^{\times}, |\varphi(x+y)|_F \leq |\varphi(x)|_F + |\varphi(y)|_F$, and we even know that this condition can never hold in some cases, for instance if $E$ is dense in $F$, because it forces $\varphi$ to be continuous with respect to the order topology, and so $\varphi$ should be $\id_F$.
So I tried finding concrete examples rather than choice-based ones and I couldn't. It would be interesting to understand whether this type of method can work or not.
*it's like a basis over $\mathbb{Q}$ (every element is a unique linear combination in this basis with coefficients in $\mathbb{Q}$) except not every linear $\mathbb{Q}$-combination need have a meaning in the group because not every element need have an $n$-th root for every $n$. It exists (and can be completed like a regular basis can) for any abelian group without torsion, assuming AC. |
Proving homogenous quadratic inequalities | $$
5x^2-4xy+6y^2 = 2(x-y)^2 + 3x^2 + 4y^2,
$$
where each term is non-negative. |
Prove the expression using deduction | You can use simplification (sometimes called conjunction elimination):
1. $A(x) ∧ B(x) \rightarrow A(x)$
2. $A(x) ∧ B(x) \rightarrow B(x)$.
From 1 and 2, we have $\exists x(A(x) ∧ B(x)) \rightarrow \exists x(A(x)) ∧ \exists x(B(x))$.
The converse of the above is not true, because if $\exists x_1$ such that $A(x_1)$ and $\exists x_2$ such that $B(x_2)$ there is not enough information to conclude that $x_1 = x_2$. In other words, we cannot conclude that $\exists x$ such that both $A(x)$ and $B(x)$. |
Understanding a simple differential equation using "separation of variables" | ${(1)}$ Well, technically we shouldn't treat ${dx(t)}$ or ${dt}$ as numbers. But in this context, yes, we are treating ${dx(t)}$ as ${x'(t)dt}$. It's just a rearrangement of ${\frac{dx(t)}{dt} = x'(t)}$.
${(2)}$ The ${+c}$ is required. When you integrate indefinitely, you always need to add a constant. This is because ${\frac{d}{dx}(f(x)+c)=\frac{d}{dx}f(x)}$ for any constant ${c \in \mathbb{R}}$, and the integral is meant to be a sort of "inverse" to the derivative. If I tell you I thought of a function, and the derivative of that function is ${2x}$ - did I think of ${x^2+1}$ or just ${x^2}$? You don't know, and there is no way to know without further information. Hence you must say "you thought of ${x^2 + c}$ for some constant $c$, but I don't know what the constant you chose is".
You may ask: "why isn't there a ${+c}$ on the left hand integral then too?" and that's because the constant on the left could just be moved to the other side and you would end up with "constant - constant", which is just another constant. So yes, we did have to add a "${+c}$" to both sides, but the ${+c}$ on one side can just be absorbed into the constant on the other side.
${(3)}$ Well, as you said, ${dx(t)=x'(t)dt}$. So the integral becomes
$${\int \frac{x'(t)dt}{x(t)}}$$
And after substituting ${u=x(t)}$ you get
$${\int \frac{du}{u}=\ln(u)+c}$$
But since ${u=x(t)}$ then
$${=\ln(x(t))+c}$$
Edit: I actually saw your question posted here: What is the meaning of the differential of a variable when the variable is the value of a constant function? with your example of ${\int_{1}^{2}df(t)}$. What this evaluates to actually depends on what you mean by your bounds. In other words, do you mean
$${\int_{t=1}^{t=2}df(t)}$$
or do you mean
$${\int_{f=1}^{f=2}df(t)}$$
? If you mean the first, the answer is
$${\int_{t=1}^{t=2}f'(t)dt=f(2)-f(1)}$$
And so obviously the fact $f$ is a function of $t$ really does matter. If you mean the second, then
$${\int_{f=1}^{f=2}df(t)=\int_{f=1}^{f=2}df=2-1=1}$$
This may confuse you as (in the pre-edit version of my answer) I said it didn't matter - and that ${\int \frac{dx(t)}{dx}=\int \frac{dx}{x}}$ will give you the same answer. The reason it didn't matter is because we hadn't picked any actual bounds yet, and things get kinda "hidden away" by ${u}$ substitution. The proof of separation of variables actually proves for us that ${\frac{dy}{dx}=\frac{f(x)}{g(y)}}$ implies that
$${\int g(y)dy = \int f(x)dx}$$
by using a $u$ substitution. So technically, we cannot just say ${\int \frac{dx(t)}{x(t)}=\int \frac{dx}{x}}$ - it requires further proof, and the proof relies on substitution. Meaning the first way I mentioned is a bit circular, and I have removed it from my answer. Substitution is the proper way - but you don't need to do this every time. |
Finding the least square solution with 3 variables | You want to fit a plane that passes through the origin, there is no intercept term.
We do not need to introduce the intercept column. That is, remove the first column of your proposed design matrix and proceed to solve for $c$ in $$X^TXc = X^TZ$$
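As a concrete illustration (a numpy sketch; the data here is synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                 # design matrix, no intercept column
c_true = np.array([1.0, -2.0, 0.5])
Z = X @ c_true + 0.01 * rng.normal(size=100)

# normal equations X^T X c = X^T Z (np.linalg.lstsq is the numerically safer route)
c = np.linalg.solve(X.T @ X, X.T @ Z)
print(c)  # approximately [ 1.  -2.   0.5]
```

The recovered coefficients match `c_true` up to the noise level. |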
Connected unbounded sets $S\subset \Bbb{R}^n$ such that $x\mapsto ||x||^t$ is uniformly continuous on $S$? | Let $r:(0,\infty)\to(0,\infty)$ and $\omega:(0,\infty)\to S^{n-1}$ be smooth functions that define a curve $\gamma$ by $\gamma(s)=r(s)\omega(s)$ in spherical coordinates.
I want to choose $S=\gamma((0,\infty))$, so I need to assume $r(s)\to\infty$ as $s\to\infty$.
I will also assume $r$ to be increasing and $t$ to be positive.
Let $f(x)=\|x\|^t$.
To ensure uniform continuity, I want the local Lipschitz constant of $f|_S$ at $\gamma(s)$,
$$
L(s)=\frac{\frac{d}{ds}f(\gamma(s))}{\|\dot\gamma(s)\|},
$$
to be uniformly bounded.
We have
$$
\frac{d}{ds}f(\gamma(s))
=
tr^{t-1}\dot r
$$
and
$$
\|\dot\gamma(s)\|^2
=
\|\dot r\omega+r\dot\omega\|^2
=
\dot r^2+r^2\|\dot\omega\|^2.
$$
For the last equation, note that $2\omega\cdot\dot\omega=\frac{d}{ds}\|\omega\|^2=0$.
Thus
$$
L(s)^2
=
t^2\frac{r^{2(t-1)}\dot r^2}{\dot r^2+r^2\|\dot\omega\|^2}
=
t^2\frac{r^{2(t-1)}}{1+(\|\dot\omega\|/\dot\ell)^2},
$$
where $\ell=\log(r)$.
For the Archimedean spiral $r=\omega=s$ we get $L(s)^2=t^2s^{2(t-1)}/(1+s^2)$, which stays bounded if and only if $t\leq2$.
This was expected.
A uniform bound on $L(s)$ does not suffice for uniform continuity; if the "spiral" $S$ is too tight, $f|_S$ is not uniformly continuous.
To make this issue easier to handle, let me assume that $\omega$ is periodic with some period $p>0$.
I'm not sure if a periodic choice is optimal, but I have a vague feeling that an "optimal spiral" is periodic enough for the argument to work.
Note that if you want uniform continuity with respect to the path metric, bounding $L(s)$ is enough.
If you want it w.r.t. the induced Euclidean metric, it is not.
Suppose we want Hölder continuity with exponent $\alpha\in(0,1]$.
Then we get the requirement that
$$
r(s+p)^t-r(s)^t\lesssim (r(s+p)-r(s))^\alpha.
$$
(I don't want to keep track of multiplicative constants anymore, and I will assume $r$ to be so nice that I can make some approximations.)
The function $r$ cannot grow too fast if we want $L(s)$ to remain bounded, so
$$
r(s+p)^t-r(s)^t\approx t(r(s+p)-r(s))r(s)^{t-1}
$$
should be a reasonable approximation.
This combined with the above estimate gives
$$
(r(s+p)-r(s))^{1-\alpha}r(s)^{t-1}\lesssim 1.
$$
Approximating $r(s+p)-r(s)\approx p\dot r(s)$, we get
$$
\dot r(s)^{1-\alpha}r(s)^{t-1}\lesssim 1.
$$
This condition is not necessary for uniform continuity if $r(s+p)-r(s)$ has a uniform lower bound.
To make $L(s)$ bounded we should have
$$
r^{2(t-1)}
\lesssim
(\|\dot\omega\|/\dot\ell)^2.
$$
If we choose the parametrization so that $\|\dot\omega\|$ is constant, we end up with two requirements (if $r(s+p)-r(s)\to0$ as $r\to\infty$):
$\dot r^{1-\alpha}r^{t-1}\lesssim1$,
$\dot r r^{t-2}\lesssim1$.
Assuming $\alpha<1$ (which is not very restrictive), the first condition can be rewritten as $\dot r r^{(t-1)/(1-\alpha)}\lesssim1$.
The condition then becomes
$$
\dot r r^{\max\{(t-1)/(1-\alpha),t-2\}}\lesssim1.
$$
If the spiral tightens up so that $r(s+p)-r(s)\to0$ as $r\to\infty$, the modulus of continuity of $N_t|_S$ should be as bad as that of $N_t$ in all of $\mathbb R^n$ (although this does not somehow show up in the calculation above).
It seems that the most promising way to go is to demand that $r(s+p)-r(s)\gtrsim1$ and $\dot r r^{t-2}\lesssim1$.
This answer is not conclusive, though... |
A "geometrical" representation for Ramsey's theorem | Partitioning $[\omega]^{(n)}$ into $k$ pieces can be interpreted as colouring the complete $n$-uniform hypergraph, with vertex set $\omega$, in $k$ colours.
When $n=2$ the complete $2$-uniform hypergraph is the complete graph with vertex set $\omega$.
For a definition of hypergraph see http://en.wikipedia.org/wiki/Hypergraph. Briefly it is the generalisation of a graph where edges can join more than two vertices. And by complete $n$-uniform I mean all the edges contain exactly $n$ vertices and every possible edge of size $n$ is in the hypergraph. |
$\sin(A)$, where $A$ is a matrix | Yes, $e^A$ always converges for a square matrix $A.$ In practice, $e^A$ is found by writing the Jordan normal form of $A,$ let us call it $J,$ with a basis of characteristic vectors for $A$ written as the columns of another matrix, call it $P.$ Then we have $$ J = P^{-1} A \; P.$$ It is possible that $J$ is diagonal, and this is guaranteed if the eigenvalues of $A$ are distinct. Otherwise, there are a few 1's in the diagonal immediately above the main diagonal of $J.$
Oh, before I forget, $$ \sin x = \frac{e^{ix}- e^{-ix}}{2 i}$$ and the same formula gives you $\sin A.$
Meanwhile, it is not difficult to find $e^J$ as $J$ is diagonal or, at least, in Jordan form, and the matrix with only the diagonal elements of $J$ commutes with the matrix that has only the off-diagonal elements of $J.$ Then, finally, we use the identity $$ e^A = e^{P J P^{-1}} = P \; e^J P^{-1}$$ which follows easily from formal properties of the defining power series.
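A small numerical illustration (a Python sketch; it assumes $A$ is diagonalizable, so that $J$ is diagonal):

```python
import numpy as np

def sin_matrix(A):
    # sin(A) = P diag(sin(lambda_i)) P^{-1} when A = P J P^{-1} with J diagonal
    vals, P = np.linalg.eig(A)
    S = P @ np.diag(np.sin(vals)) @ np.linalg.inv(P)
    return S.real  # sin of a real matrix is real; .real drops rounding residue

A = np.array([[0.0, 1.0], [-1.0, 0.0]])  # A^2 = -I, so sin(A) = sinh(1) * A
print(sin_matrix(A))
```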
EDIT: the fundamental fact of life is that, IF $A,B$ commute, then $e^{A+B} = e^A e^B = e^B e^A.$ If $A,B$ do not commute there is no assurance. Meanwhile, for the identity matrix $I$ and a real or complex number $z,$ we do get $e^{zI} = e^z I.$ Put that together, you get $e^{A + 2 \pi i I} = e^A.$ Also, $e^{0I} = I,$ and $e^{-A} = \left( e^A \right)^{-1}.$ |
Is equality proof valid if terms are moved across equality? | Why not just drop the first line after "Suppose"? Everything after that is correct and proves the proposition. Good job! |
Improper integral with two infinite bounds | Observe the integrand is an even function, thus: $I = 2\displaystyle \int_{0}^\infty \dfrac{dy}{1+y^2} = 2\tan^{-1} y|_{y=0}^\infty = \pi$ |
Problem defining a function via Step function and Dirac's Delta | $H(\frac{\pi}{2})$ does not depend on $x$. It is not a function of $x$. It is a constant. And it should not become $\delta$ when taking the second derivative.
It also means that your definition of $f$ is incorrect. Likely you intended
$$f(x) = H(x + \frac{\pi}{2}) \cos x - H(x - \frac{\pi}{2}) \cos x$$
Now you need to differentiate it. |
Prove $\sum_{k=0}^{n-2}{n-k \choose 2} = {n+1 \choose 3}$ | The right hand side ${n+1 \choose 3}$ is the number of ways to choose 3 elements from $\{1,2,\ldots,n+1\}$. Let $A$ denote the set of all 3-subsets of $\{1,2,\ldots,n+1\}$. We want to show that $|A|$ is equal to the left hand side.
Let $A_i$ denote the set of all 3-subsets of $\{1,2,\ldots,n+1\}$ which have $i$ as their largest element. For example, if $i=n+1$, then $A_{n+1}$ is the set of all 3-subsets which contain $n+1$, and the number of ways to choose the remaining 2 elements is ${n \choose 2}$. Hence, $|A_{n+1}| = {n \choose 2}$. More generally, $|A_i| = {i-1 \choose 2}$, for $i=3,\ldots,n+1$, because the remaining 2 elements must be chosen from $\{1,\ldots,i-1\}$. Note that $i$ has to be at least 3 because the largest of 3 elements will be at least 3.
Using the fact that the $A_i$'s $(i=3,4,\ldots,n+1)$ are disjoint and their union is all of $A$, we obtain the desired formula. |
Relative homology of ball and sphere | You have the exact sequence $$\underset{=0}{\underbrace{\tilde{H}_k(B^n)}} \to \tilde{H}_k(B^n,S^{n-1}) \to \tilde{H}_{k-1}(S^{n-1}) \to \underset{=0}{\underbrace{\tilde{H}_{k-1}(B^n)}},$$ so $$\tilde{H}_k(B^n,S^{n-1}) \simeq \tilde{H}_{k-1}(S^{n-1})= \left\{ \begin{array}{cl} \mathbb{A} & \text{if} \ k=n \\ 0 & \text{otherwise} \end{array} \right..$$
If you do not know $\tilde{H}_k(S^n)$, you can use Mayer-Vietoris sequence in order to show $$\tilde{H}_k(S^n) \simeq \tilde{H}_{k-1}(S^{n-1})$$ by decomposing $S^n$ in two hemispheres. |
Can someone check my proof (Abbott exercise 6.2.10) | Nice proof. It looks fine to me and I didn't find any mistakes. (I would leave this as a comment instead but I do not have enough reputation points) |
Formula for $ (I+\varepsilon A)^{-1} = ?$ | Assuming that $ \epsilon$ is a small quantity you have \begin{equation} (I+\epsilon A)^{-1} =I-\epsilon A +O(\epsilon ^2) \end{equation}. This can be easily verified directly:
\begin{equation}
(I+\epsilon A)(I-\epsilon A)=
I -\epsilon A+\epsilon A- \epsilon ^2 A^2=I-\epsilon ^2 A^2
\end{equation}
so the answer is indeed accurate up to order $\epsilon$. |
Probability. Exclusive and Non Exclusive Events | Well, Dilip Sarwate actually took the solution one step further and introduced some set theory into it. I shall break it down simply and illustrate what he meant. Let's just use the first $20$ integers. In other words, $\{1, 2, \cdots, 20\}$. He let the set $\bf{A}$ be the set of numbers divisible by $2$ and $\bf{B}$ be the set of numbers divisible by $3$. I shall assume $A$ and $B$ would mean the number is "divisible by $2$ and divisible by $3$". Hence, you are finding $\text{Pr}(A\cap B)$.
In my example, $\bf A$ = $\{2, 4, \cdots, 20\}$ and hence $\text{Pr}(A)=\frac{10}{20}$.
Also, $\bf B$ = $\{3, 6, \cdots, 18\}$ and hence $\text{Pr}(B)=\frac{6}{20}$.
To find $\bf A \cap \bf B$, simply count the elements that exist both in sets $\bf A$ and $\bf B$. How many elements are there? Divide that number by $20$ and you should get your answer. With regards to his final statement
\begin{equation}\bf A \cup \bf B = \bf A + \bf B - (\bf A \cap \bf B)\end{equation}
Basically, form the set of elements which appear in at least one of $\bf A$ and $\bf B$ (do not double-count elements; by definition a set contains each element only once). Its size is the number of elements in $\bf A$ plus the number in $\bf B$, minus the number in both sets $\bf A$ and $\bf B$, which takes care of the double-counting.
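A short check of all of this in Python (sets make the counting automatic):

```python
N = 20
A = {n for n in range(1, N + 1) if n % 2 == 0}
B = {n for n in range(1, N + 1) if n % 3 == 0}
print(len(A & B) / N)                                      # Pr(A and B) = 3/20
print(len(A | B) / N, (len(A) + len(B) - len(A & B)) / N)  # both give 13/20
```

The two expressions for the union agree, as they must. |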
Show $\operatorname{Aut}(C_2 \times C_2)$ is isomorphic to $S_3=D_6$ | Notice that $C_2\times C_2$ is a vector space over the field $\mathbb{Z}_2$. Hence $\operatorname{Aut}(C_2\times C_2)\cong GL(2,2)$. It is easy to see that $GL(2,2)\cong S_3$. |
Let $V$ be a vector space and let $\{ \alpha_1 ,\alpha_2, ..., \alpha_n \}$ be a basis of $V$ | Try to first get some intuition by specializing the problem. Take a vector space you know well, like $\mathbb R^3$, and choose a simple basis you know and love, for instance the standard basis. Now, construct the $\beta _i$ (there are only three to construct) and check if it's a basis or not. That way you will gain a better understanding of the question and you might start getting ideas on how to prove your answer. |
If $f$ is a polynomial of degree $n$, then $f(x) \equiv 0\pmod p$ has at most $n$ solutions. | Remember that you are trying to prove that the congruence $f(x)\equiv 0\pmod{p}$ has at most $n$ solutions. (Added. There must be an assumption that $a_n\not\equiv 0\pmod{p}$ missing, by the way)
The argument is essentially the same one we use to prove that a polynomial of degree $n$ has at most $n$ roots in any field: we do induction on the degree.
If $f$ is of degree $0$, a nonzero constant, then it has no solutions and we are fine.
Assume the result holds for any polynomial of degree less than $n$, and that $f$ has degree $n\gt 0$. If $f$ has no roots, then we are done: $0$ is less than $n$, so $f$ satisfies the conclusion. If $f$ has at least one root $r$, then we can use the Factor Theorem to write $f(x) = (x-r) g(x)$ for some polynomial $g$ of degree $n-1\lt n$. If $s$ is a root of $f$ different from $r$, then $0 = f(s) = (s-r)g(s)$; since $(s-r)\neq 0$, then $g(s)=0$. That is, all other roots of $f$ come from roots of $g$. By the induction hypothesis, $g$ has at most $n-1$ roots. So $f$ has at most $1+n-1 = n$ roots.
This proves that, whether $f$ has at least one root or no roots, it has at most $n$ roots, proving the result.
The argument here is the same. If $f(x)\equiv 0 \pmod{p}$ has no solutions, then you are already done: $0$ is less than $n$. So we can consider the case in which there is at least one solution. It's not that the conclusion will not hold in this latter case (in fact, we will prove it does), it's that the conclusion does not immediately follow: from "at least one solution" we cannot immediately conclude "at most $n$ solutions", whereas from "no solutions" we can immediately conclude "and so at most $n$ (since $0$ is less than $n$)".
Now, we use a lemma:
Lemma. Let $r$ be any integer. Then for any nonnegative integer $t$, $x-r$ divides $x^t - r^t$.
Proof. Simply note that
$$(x-r)(x^{t-1}+x^{t-2}r+\cdots +xr^{t-2}+r^{t-1}) = x^t-r^t.\quad\Box$$
Note well: The fact that $x-r$ divides $x^t-r^t$ does not depend on whether $r$ is a root of $f(x)$ or not: it's a simple algebraic fact. It's always true.
Now, if $f(x)$ does have roots modulo $p$, then it has at least one; call it $r$. If $f(x)\equiv 0\pmod{p}$ has at least one root $r$, then we have $f(r)\equiv 0\pmod{p}$. We have
$$\begin{align*}
f(x) &\equiv f(x) - 0\pmod{p}\\
&\equiv f(x) -f(r)\pmod{p}\\
&\equiv a_nx^n + \cdots + a_0 - (a_nr^n +\cdots + a_0)
\pmod{p}\\
&\equiv a_n(x^n - r^n) + a_{n-1}(x^{n-1}-r^{n-1}) + \cdots + a_1(x-r) + (a_0-a_0)\pmod{p}
\end{align*}$$
Now, by the Lemma, $(x-r)$ divides each of $x^n-r^n$, $x^{n-1}-r^{n-1},\ldots,x-r$. So $x-r$ divides
$$a_n(x^n-r^n)+a_{n-1}(x^{n-1}-r^{n-1}) + \cdots + a_1(x-r).$$
(This argument takes the place of the Factor Theorem in the case above).
The conclusion is that if $f(r)\equiv 0\pmod{p}$, then we can write
$$f(x) \equiv (x-r)g(x)\pmod{p}$$
for some polynomial $g(x)$ with integer coefficients. Now the argument proceeds as in the real case: if $f(s)\equiv 0\pmod{p}$, then $(s-r)g(s)\equiv 0 \pmod{p}$, so $g(s)\equiv 0\pmod{p}$. So any other root of $f$ gives a root of $g$, and since $g$ has at most $n-1$ roots modulo $p$, then $f$ has at most $1+(n-1)=n$ roots modulo $p$. |
Transforming a cone graph. | A general form for the cone (that has the central axis at $(x,y)=(23.5,0)$) is
$$
(x-23.5)^2 + y^2 = (kz+c)^2
$$
for some constants $k,c$. Notice that the cone that you have currently defined has $k=1, c=-22$. The circle has $k=0, c=111.6$
Your wish is that at $z=0$, the cross-section of the cone coincides with the circle, so that the right-hand side is equal to $111.6^2$. That means
$$
(k\cdot 0 + c)^2 = 111.6^2
$$
We get $c= \pm 111.6$.
On the other hand, you want that the cone's top is at $z=22$. What corresponds to the top part of the cone? Answer: When the right-hand side is equal to zero. Therefore, this condition becomes
$$
k \cdot 22 + c = 0
$$
Which value of $c$ do we plug in? Well, we know that we want to ultimately write the expression as
$$
\sqrt{(x-23.5)^2 + y^2} = kz+c \geq 0
$$
and we want the expression to be valid in the range $0\leq z \leq 22$. Therefore, the sign of $c$ should be positive and $k$ should be negative.
Plugging in the solution, we get
$$
(x-23.5)^2 + y^2 = \left(-\frac{111.6}{22} z + 111.6 \right)^2
$$
And if you want to "cut out" the part above $z=22$, you can take the square roots to obtain
$$
\sqrt{(x-23.5)^2 + y^2} = -\frac{111.6}{22} z + 111.6
$$
(You can still re-arrange the terms if you want, but I believe this is a valid solution) |
Prove that $\text{trace}(X^T Y)\le \sqrt{\text{trace}(X^T X)\text{trace}(Y^T Y)}$ | It's the Cauchy–Schwarz inequality for the Frobenius inner product. |
Permutation around round table. | Because it is a circular table, we use the ordinary convention that arrangements equivalent by a rotation are to be considered the same. Equivalently, we can assume that one of the chairs is a throne and Alicia sits there.
Her left neighbour (a boy) can be chosen in $5$ ways, and for each of these ways her right neighbour can be chosen in $4$ ways, for a total of $(5)(4)$.
For every such arrangement, we are left with $5$ seats in which to put the remaining people. Think of where the remaining boys will sit. There are $3$ places. These determine $4$ "gaps" (including the two endgaps) into which we must slip the two girls, one girl per gap. Choosing the gaps can be done in $\binom{4}{2}$ ways, and then the girls can be permuted in $2!$ ways and the boys in $3!$ ways. That gives a total of $(5)(4)\binom{4}{2}(2!)(3!)$.
Note that the argument works generally. |
Family of holomorphic functions {f}, and their derivatives | Yes, "family" is the same as "set". Saying it is locally uniformly bounded means that for every $z_0 \in D$ there exist $M$ and $r > 0$ such that $|f(z)| < M$ for all $f$ in the family and all $z$ with $|z - z_0| < r$.
Hint: express $f'$ in terms of a contour integral involving $f$. |
Strong maximum modulus principle | By definition, $z_0\in D$ is a local maximum of $|f|$ if it is a global maximum of $|f|$ on some neighborhood $U\subset D$ such that $z_0\in U$. Suppose such a local maximum exists at $z_0$; then by the weak version, $f$ must be constant on the neighborhood $U\ni z_0$. But since $D$ is connected and $U\subset D$, the fact that $f$ is constant on $U$ implies that $f$ is constant on all of $D$ by the identity theorem. Thus $|f|$ can only have a local maximum in $D$ if $f$ is constant, so a non-constant holomorphic function cannot attain a local maximum modulus in $D$. |
What is the Cauchy-Schwarz inequality for matrix products? | You can verify $\operatorname{Tr}A^\dagger B$ is an inner product on matrices. Its Cauchy-Schwarz inequality is$$|\operatorname{Tr}A^\dagger B|^2\le(\operatorname{Tr}A^\dagger A)(\operatorname{Tr}B^\dagger B).$$In components, this is$$\left|\sum_{ij}A_{ij}^\ast B_{ij}\right|^2\le\sum_{ij}|A_{ij}|^2\sum_{kl}|B_{kl}|^2.$$ |
Build the sum of number parts and restore original number | I don't know the name, though there is a way to reconstruct this number.
Maybe it's best to demonstrate this with an example. Say we are given $137171$. Then we want to find some initial number ABCDEF, where A,B,C,D,E,F are digits from 0-9. It is important to note that we can write this in an equation. We want to find ABCDEF such that $137171 = A(111111) + B(11111) + C(1111) + D(111) + E(11) + F(1)$. The method works like this: find the largest A value such that the total sum so far is less than $137171$, then find the largest B value such that the total sum so far is less than $137171$, etc. until we either reach $137171$ or determine that it is impossible.
Why does this work? Well, for the A value for example, we can't have an A that makes the total sum bigger than $137171$, since the final result would then be bigger than $137171$. If we picked a number that is 1 smaller than the maximum value that makes the total sum less than $137171$, we need $B(11111) + C(1111) + D(111) + E(11) + F(1) > 111111$ However, this is not possible, since $9(11111) + 9(1111) + 9(111) + 9(11) + 9(1) = 111105 < 111111$.
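The greedy reconstruction is short in code; a Python sketch (the six-digit assumption and the final validity check are mine):

```python
def restore(s, digits=6):
    # weights 111111, 11111, ..., 1: the contribution of each original digit
    weights = [int('1' * k) for k in range(digits, 0, -1)]
    result = []
    for w in weights:
        d = min(9, s // w)   # largest digit keeping the running total <= s
        result.append(d)
        s -= d * w
    return ''.join(map(str, result)) if s == 0 else None

print(restore(137171))  # '123456'
```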
So we fix the max possible A value, then fix the max possible B value, and so on until we find ABCDEF. Hope this helps! |
Number of Elliptic Curves over Fp | Elliptic curves over an algebraically closed field are isomorphic if and only if they have the same $j$-invariant, so computing the $j$-invariants will tell you when two curves are not isomorphic at least (if the $j$-invariants are different!)
Over a non-algebraically closed field elliptic curves of fixed $j$-invariant (not equal to 0, 1728) are all quadratic twists of each other, that is there exists $D\in \mathbf F_p^\times$ s.t. $E_1\cong E_2^{(D)}$, where the $D$th quadratic twist is defined by
$$y^2 = x^3 + ax +b \mapsto Dy^2 = x^3 +ax+b$$
this is short Weierstrass form, so lets use $p\ne 2,3$,
the quadratic twist is isomorphic to the original curve when $D$ is a square.
For a finite field the group $\mathbf F_p^\times/(\mathbf F_p^\times)^2 \cong C_2$, so any two non-squares differ by a square; hence each elliptic curve has only one quadratic twist that is not isomorphic to the original. This is what gives us roughly $2p$ curves: excluding $0, 1728$ we have $\approx p-2$ $j$-invariants, each of which gives two non-isomorphic quadratic twists over $\mathbf F_p$.
In Sage you can use the .is_quadratic_twist method of an elliptic curve to check if two curves are quadratic twists, and .is_isomorphic to check isomorphism. You can also find the twisted curves using .quadratic_twist.
Using these methods you can reduce your list down to the exact number of curves, or build up the complete list starting from the set of all $j$-invariants.
Note that when $j = 0,1728$ this is more complicated as you also get sextic and quartic twists! |
The simple extension $\mathbb Q(i+\sqrt{2})$ | The standard way is to see (using what you showed) that $\mathbb Q(i + \sqrt{2}) = \mathbb Q(i,\sqrt{2})$ (the inclusion of the right side in the left being obvious, and the opposite inclusion being the first part of (a)).
Now write $\mathbb Q(i,\sqrt{2})$ as an iterated extension, namely $\mathbb Q(\sqrt{2})(i).$ The first of these is quadratic, and the second has degree either $1$ or $2$. But since $i \not\in \mathbb Q(\sqrt{2})$ (there are many ways to check this, but the easiest is perhaps to observe that $\mathbb Q(\sqrt{2}) \subset \mathbb R,$ while $i \not\in \mathbb R$), its degree must be $2$. The second part of (a) follows. |
Difficult limits question | I suppose $n,m$ are positive integers.
As $x \to 0$ we have:
\begin{align}
\cos(x^n) &= 1 - \frac{x^{2n}}{2} + O(x^{4n})
\\
e^{\cos(x^n)} &= e e^{-x^{2n}/2+O(x^{4n})}
= e\left(1-\frac{x^{2n}}{2}+O(x^{4n})\right)
= e-\frac{e x^{2n}}{2}+O(x^{4n})
\\
e^{\cos(x^n)} - e &= -\frac{e x^{2n}}{2}+O(x^{4n})
\\
\frac{e^{\cos(x^n)} - e}{x^m} &= -\frac{e x^{2n-m}}{2}+O(x^{4n-m})
\end{align}
So to get limit $-e/2$ we need $2n-m = 0$. That is: $m=2n$ so $\frac{m}{n} = 2$. |
Show that if $w^3=1$, then $1+w+w^2=0$ | The conditional is clearly not true for $w=1$. The hypothesis holds ($1^3=1$), but the conclusion does not ($1+1+1^2 \neq 0$). Therefore, you have to assume that $w \neq 1$ in order to conclude the proof. |
How to calculate the optimal amount to bet? | If you continually bet $10\%$ of capital with $0.1$ chance of returning $20x$ you will win almost surely. Your position after $n$ wins and $m$ losses only depends on $n$ and $m$, not on the order of wins and losses. Each win triples your bankroll, while each loss multiplies it by $0.9$. You will have $3^n\cdot 0.9^m$ times your original stake. The law of large numbers is on your side. After one win and nine losses your capital has multiplied by $3 \cdot 0.9^9\approx 1.16$. You will never run out of money because you never bet your whole bankroll. After a large number of games the winning fraction will be close to $10\%$ and you will be way ahead. The bet is in your favor-take it!
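A quick Monte Carlo sketch of this accounting in Python (each win triples the bankroll, each loss multiplies it by $0.9$):

```python
import random

random.seed(1)
finals = []
for trial in range(5):
    bankroll = 1.0
    for _ in range(1000):
        bankroll *= 3 if random.random() < 0.1 else 0.9
    finals.append(bankroll)
print(finals)  # typically enormous: the per-bet log-growth is positive
```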
You might be interested to look up the Kelly criterion for how much to bet in a favorable case. If every dollar is equally useful to you, the highest expectation is to bet all your money each time. If twice as much money is not twice as good, you don't necessarily want to bet all your money. |
A random variable $X$ has support in $[a,b]$, and $\mathbb{E}X=b$. Prove that $P(X\geq\mathbb{E}X)=1$ | From your assumptions:
$$\max_{\omega \in \Omega} X(\omega) \leq b \;\rm{and}\; E[X]=\int_{\Omega} X(\omega)dP(\omega)=b \implies E\left[\frac{X(\omega)}{b}\right]=1$$
We can use the upper bound $b$ on $X$ to say:
$$ \frac{X(\omega)}{b}\leq 1\; \textrm{surely} \implies1=E\left[\frac{X(\omega)}{b}\right]=\int_{\Omega} \frac{X(\omega)}{b}dP(\omega)\leq \int_{\Omega} dP(\omega)= 1 $$
Therefore,
$$\int_{\Omega} \frac{X(\omega)}{b}\,dP(\omega) = 1 \implies \frac{X(\omega)}{b}= 1\ \textrm{a.s.} \implies P(X=b)=P(X=E[X])=1.$$ Since $\{X=b\}\subset\{X\geq E[X]\}$, monotonicity of measure then gives $P(X\geq E[X])=1$. |
Converting cross product form of a line to vectorial form | Take the cross product of LHS and RHS with $l$ and then apply the double cross product formula:
$$l \times (r \times l) = l \times b $$
$$(l \cdot l) r - (l \cdot r) l = l \times b $$
Up to you now... |
Reference for Topological Groups | Alexander V. Arhangel'skii, Mikhail G. Tkachenko, “Topological groups and related structures”, Atlantis Press, Paris; World Sci. Publ., NJ, 2008.
Lev S. Pontrjagin “Topological groups”, Gordon and Breach, 1966. |
What two probability distributions (other than the Gaussians) convolved together give a Gaussian pdf? | Given that Fourier transform turns convolution into product, the answer is either trivially no (the only PDF which convolved with itself gives a Gaussian is Gaussian itself) or trivially yes (for any PDF with a nowhere vanishing Fourier transform there is another PDF which convolved with it gives a Gaussian). |
Solve $ \frac{2}{\pi}\int_{-\pi}^\pi\frac{\sin\frac{9x}{2}}{\sin\frac{x}{2}}dx $ | Such integral equals:
$$\frac{4}{\pi}\int_{-\pi/2}^{\pi/2}\frac{\sin(9x)}{\sin(x)}\,dx=\frac{4}{\pi}\int_{-\pi/2}^{\pi/2}\frac{e^{9ix}-e^{-9ix}}{e^{ix}-e^{-ix}}\,dx \tag{1}$$
that is:
$$ \frac{4}{\pi}\int_{-\pi/2}^{+\pi/2}\left(e^{8ix}+e^{6ix}+\ldots+1+\ldots+e^{-6ix}+e^{-8ix}\right)\,dx=\frac{4}{\pi}\int_{-\pi/2}^{\pi/2}1\,dx=\color{red}{4}\tag{2} $$
since $\int_{-\pi/2}^{\pi/2}\cos(2nx)\,dx = 0$. |
Find an example of weak convergent sequence which is not Cauchy | Hint: $L^2(\mathbb R)$ is a separable Hilbert space. For example, the weighted Hermite polynomials form a complete orthonormal basis for $L^2(\mathbb R)$.
Prove that, if $H$ is a separable Hilbert space with complete orthonormal basis $\{ e_i \}$, then the sequence $e_1, e_2, e_3, \dots $ weakly converges to $0$, but is not Cauchy.
[Remember, any vector in $x \in H$ can be written in the form $x =\sum_i a_i e_i$, where $a_i = \langle x, e_i \rangle$, and note that $\sum_i |a_i|^2 = || x ||^2 < \infty$ by Parseval's theorem.]
Alternatively, consider $f_n = \tfrac 1 {\sqrt{\pi}}\sin( n x) 1_{[0,2\pi]}$. This is an orthonormal sequence in $L^2(\mathbb R)$, though it is not a complete orthonormal basis. Note that for any $x \in L^2 (\mathbb R)$, we have $\sum_n \langle x, f_n \rangle^2 \leq ||x||^2$ by Bessel's inequality. Then use the same idea... |
Question about Christoffel symbols of Riemann metric | $g^{il}g_{lj}=\delta^i_j$ is constant, so using Leibniz,
$$ 0 = g^{il}{}_{,k} g_{lj} + g^{il}g_{lj,k} $$
Contracting with $g^{mj}$,
$$ g^{im}{}_{,k} = - g^{mj}g^{il} g_{lj,k}, $$
and then you can insert your previous expression for $g_{lj,k}$ (and in particular, it's not zero). The covariant derivative $g^{im}{}_{;k}$ is zero, by essentially the same argument; the point is that both derivatives are set up to have the Leibniz property over contraction. |
Why is $\ker(id\otimes \cdot b:R/(a)\otimes_R R \to R/(a)\otimes_R R)=R/(d)$? | First of all, we need $b$ to be a non-zero divisor, such that $0 \to R \xrightarrow{\cdot b} R \to R/(b) \to 0$ is indeed a projective resolution.
Then you are on the right track, that you have to compute the kernel of
$$R/(a) \xrightarrow{\cdot b} R/(a).$$
You should check that the kernel is generated by $\frac{a}{d}$ and indeed $\frac{a}{d} \cdot R/(a) \cong R/(d)$.
The hardest part should be that any element of the kernel is a multiple of $\frac{a}{d}$. For that you should precisely define, what it means to be a greatest common divisor in your case (This is not a priori clear in a non-UFD).
I think the weakest definition is the following: Whenever $c|a$ and $c|b$, we have $c|d$. Use this as follows:
Let $x$ be in the kernel, i.e. $xb \in (a)$, hence $a|xb$. It is clear that $a|xa$, hence $a|xd$, so $x\frac{d}{a} \in R$, which means that $x$ is a multiple of $\frac{a}{d}$. |
If $(\pi,V)$ is irreducible, then $\pi(G)$ spans $\operatorname{End}(V)$ | Here is a simple argument inspired by the Lie algebra structure on $\mathrm{End}(V)$:
Since $e\mapsto I_V$, where $e$ is the identity element of $G$, we know that diagonal elements lie in the image. Hence it suffices to show surjectivity onto the set of traceless matrices $\mathrm{End}(V)_{tr=0}$. Since $\mathbb{C}[G]\to \mathrm{End}(V)$ is $\mathbb{C}$-linear, it suffices to show that a basis of $\mathrm{End}(V)$ is in the image.
Let $v\in V$ be some non-zero vector, and consider the set $S=\{g\cdot v: g\in G\}$. Since $V$ is irreducible, $\mathrm{span}(S)=V$, so we have a basis $\{g_i\cdot v\}_{i=1}^{\dim(V)}$ for some $g_i\in G$.
I claim that it suffices to show that the elementary matrices $\{E_{i,j}\}_{i\neq j}$ are in the image, where $E_{i,j}$ is the matrix with entry $1$ at $(i,j)$ and zero elsewhere. The reason is that we have the relation $$E_{i,i}-E_{j,j}=E_{i,j}E_{j,i}-E_{j,i}E_{i,j},$$ so that if the $\{E_{i,j}\}$ lie in the image of $\mathbb{C}[G]$, so do these diagonal matrices. Together these guys span $\mathrm{End}(V)_{tr=0}$, so we would be done.
Our choice of basis of $V$ ensures $g_jg_i^{-1}\mapsto E_{i,j}$, so the map is surjective. |
Multivariable chain rule exercise | Well, if
$$
\frac{\partial g}{\partial r} = \frac{\partial f}{\partial x}\cos\theta + \frac{\partial f}{\partial y}\sin\theta
$$
Differentiating with respect to $\theta$,
$$
\frac{\partial^2 g}{\partial \theta \partial r} = \frac{\partial}{\partial \theta}\left(\frac{\partial f}{\partial x}\cos\theta + \frac{\partial f}{\partial y}\sin\theta\right)
$$
I'll do the first term:
\begin{align}
\frac{\partial}{\partial \theta}\left(\frac{\partial f}{\partial x}\cos\theta\right) &= \cos\theta\frac{\partial}{\partial \theta} \left(\frac{\partial f}{\partial x}\right) + \frac{\partial (\cos \theta)}{\partial \theta} \frac{\partial f}{\partial x} \\
&= \cos\theta\frac{\partial}{\partial \theta} \left(\frac{\partial f}{\partial x}\right) - \sin \theta\frac{\partial f}{\partial x}
\end{align}
Using the chain rule, and remembering that $\frac{\partial f}{\partial x}$ is itself a function of $x = r\cos\theta$ and $y = r\sin\theta$,
\begin{align}
\frac{\partial}{\partial \theta} \left(\frac{\partial f}{\partial x}\right) &= \frac{\partial x}{\partial \theta}\frac{\partial^2 f}{\partial x^2} + \frac{\partial y}{\partial \theta}\frac{\partial^2 f}{\partial x\,\partial y} \\
&= -r\sin\theta\,\frac{\partial^2 f}{\partial x^2} + r\cos\theta\,\frac{\partial^2 f}{\partial x\,\partial y}
\end{align}
and so
$$
\frac{\partial}{\partial \theta}\left(\frac{\partial f}{\partial x}\cos\theta\right) = -r\sin\theta\cos\theta\,\frac{\partial^2 f}{\partial x^2} + r\cos^2\theta\,\frac{\partial^2 f}{\partial x\,\partial y} - \sin \theta\frac{\partial f}{\partial x}.
$$
Can you take it from here?
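Once you have assembled the remaining term in the same way, you can check your final formula symbolically; the following SymPy sketch encodes the full result for comparison (so treat it as a spoiler):

```python
import sympy as sp

r, th, X, Y = sp.symbols('r theta X Y')
F = X**3 * sp.sin(Y) + sp.exp(X * Y)  # an arbitrary smooth test function f(x, y)
x, y = r * sp.cos(th), r * sp.sin(th)

g = F.subs({X: x, Y: y})
lhs = sp.diff(g, r, th)  # d^2 g / (d theta d r)

Fx, Fy = sp.diff(F, X), sp.diff(F, Y)
Fxx, Fxy, Fyy = sp.diff(F, X, X), sp.diff(F, X, Y), sp.diff(F, Y, Y)
sub = {X: x, Y: y}
rhs = (r * sp.sin(th) * sp.cos(th) * (Fyy - Fxx).subs(sub)
       + r * (sp.cos(th)**2 - sp.sin(th)**2) * Fxy.subs(sub)
       - sp.sin(th) * Fx.subs(sub) + sp.cos(th) * Fy.subs(sub))

print(sp.simplify(lhs - rhs))  # 0
```

If the difference prints $0$, the assembled formula is correct. |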
Prove $b^{2} (\cot A + \cot B) = c^{2}(\cot A + \cot C)$ | We have
$$\cot A+\cot B=\frac{\cos A}{\sin A}+\frac{\cos B}{\sin B}
=\frac{\cos A\sin B+\sin A\cos B}{\sin A\sin B}$$
Now, by using the formula $\sin (A+B)=\sin A\cos B+\cos A\sin B\,\,\,\,$ and $\,\,\,\,\sin C=\sin (\pi-C)$ we get
$$b^2(\cot A+\cot B)=\frac{b^2\sin (A+B)}{\sin A\sin B}=\frac{b^2\sin C}{\sin A\sin B}...(1)$$
In a similar way we get
$$c^2(\cot A+\cot C)=\frac{c^2\sin B}{\sin A\sin C}...(2)$$
In order to prove that $(1)$ and $(2)$ are equal it will be sufficient to prove:
$$\frac{b^2\sin C}{\sin B}=\frac{c^2\sin B}{\sin C}$$
Which is equivalent to
$$\frac{b^2}{\sin^2 B}=\frac{c^2}{\sin^2 C}$$
And the last equality holds due to the Sine Law.
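As a quick numerical confirmation with an arbitrary triangle (NumPy sketch; sides are taken proportional to the sines of the opposite angles, by the Sine Law):

```python
import numpy as np

A, B = 0.7, 1.1
C = np.pi - A - B
b, c = np.sin(B), np.sin(C)  # side lengths, up to a common scale factor

cot = lambda t: np.cos(t) / np.sin(t)
print(b**2 * (cot(A) + cot(B)))
print(c**2 * (cot(A) + cot(C)))
```

The two printed values agree, as the identity predicts. |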
Clear the unkown matrix Y from the equation | If $^\ast$ just conjugates, you simply have
$${\bf s}=V(Y{\bf v})^\ast=VY^{\ast}{\bf v}^\ast$$
However, you can't extract a matrix if you only know its action on a single vector. You need to know its action on at least as many linearly independent vectors as the dimension of the matrix before you can extract it. |
Fourier analysis notation - Sh and Ch | The hyperbolic functions are the "real" counterparts of the ordinary trigonometric ones.
$$\text{ch}(x)=\cosh(x)=\frac{e^x+e^{-x}}2\leftrightarrow \cos(x)=\frac{e^{ix}+e^{-ix}}2,$$
$$\text{sh}(x)=\sinh(x)=\frac{e^x-e^{-x}}2\leftrightarrow \sin(x)=\frac{e^{ix}-e^{-ix}}{2i}.$$
They are odd and even linear combinations of the exponential, so they easily appear with the latter.
Their name stems from the relation
$$c^2-s^2=1$$ which corresponds to a hyperbola, to be compared with
$$c^2+s^2=1$$ for the circular functions. |
how to compute expectation and variance of r.v.(geometric d.)? | Your moment generating function is incorrect. So your mistake is one you made earlier
The MGF of a geometric distribution is
$$f(z)=\mathbb E(e^{zX}) = \frac{1-p}{1-pe^t}.$$
So for the mean and variance you can calculate $\mathbb E(X) = f'(0)$ and $\mathbb E(X^2) = f''(0)$. I'm assuming this either is homework or an exercise in a book you're reading, so I'll let you finish it off yourself.
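If you want to check your final answer, here is a SymPy sketch (using the same convention as above):

```python
import sympy as sp

p, t = sp.symbols('p t', positive=True)
f = (1 - p) / (1 - p * sp.exp(t))  # the MGF

EX = sp.diff(f, t).subs(t, 0)      # E(X)   = f'(0)
EX2 = sp.diff(f, t, 2).subs(t, 0)  # E(X^2) = f''(0)
print(sp.simplify(EX))             # p/(1 - p)
print(sp.simplify(EX2 - EX**2))    # variance: p/(1 - p)**2
```

Compare these with what you get by hand. |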
Having a problem when figuring out a better way to solve $\int \sqrt{3-2x} \: \: x \, \, \, dx$ | Since you are looking for a simpler way, here is one:
$$u= 3-2x \Rightarrow du =-2 dx, x= \frac{3-u}{2}$$
Then you get
$$\int \sqrt{3-2x}\; x \, dx = -\frac{1}{2} \int \sqrt{u}\, \frac{3-u}{2}\, du= -\frac{1}{4} \int \left(3 u^\frac{1}{2} -u^\frac{3}{2}\right) du \,.$$
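You can confirm that this substitution produces a valid antiderivative with a short SymPy sketch:

```python
import sympy as sp

x, u = sp.symbols('x u')
anti = (-sp.Rational(1, 4) * sp.integrate(3 * sp.sqrt(u) - u**sp.Rational(3, 2), u)
        ).subs(u, 3 - 2 * x)
print(sp.simplify(sp.diff(anti, x) - sp.sqrt(3 - 2 * x) * x))  # 0
```

Differentiating the result recovers the original integrand, so the antiderivative is correct up to a constant. |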
Reformulation using Trig sum | Since we require $$A\cos d\cos\omega t-A\sin d\sin\omega t=c_1\cos\omega t+c_2\sin\omega t$$
to hold for all $t$, we have $A\cos d=c_1$, $A\sin d=-c_2$. Then we have $$A^2(\cos^2d+\sin^2d)=A^2=c_1^2+c_2^2.$$
We can choose $A=\pm\sqrt{c_1^2+c_2^2}$. The resultant phase angles $d$ differ by $\pi$. |
Tensor contraction with multiple indices | Both calculations are wrong, I'm afraid. Since when does $\epsilon^{\lambda\eta}\sigma_{\eta\nu} = \sigma^\lambda{}_\nu$? This is valid only for metric tensor, but $\epsilon^{\lambda\eta}$ ($\begin{pmatrix}0&1\\-1&0\end{pmatrix}$) is not a metric tensor, assuming $\epsilon^{\lambda\eta}$ is the two-dimensional Levi-Civita symbol.
Suppose $T^{\beta\delta} = \epsilon^{\alpha\beta}\sigma_{\gamma\alpha}\epsilon^{\gamma\delta}$, and the metric is identity. Since there are only 4 distinct elements, we could carry out the computation directly:
$T^{00} = \epsilon^{10}\sigma_{11}\epsilon^{10} = \sigma_{11} = -\sigma_{00}$
$T^{01} = \epsilon^{10}\sigma_{01}\epsilon^{01} = -\sigma_{01}$
$T^{10} = \epsilon^{01}\sigma_{10}\epsilon^{10} = -\sigma_{10}$
$T^{11} = \epsilon^{01}\sigma_{00}\epsilon^{01} = \sigma_{00} = -\sigma_{11}$
or use the identities $\epsilon_{ab}=\epsilon^{ab}$ and $\epsilon_{ab}\epsilon^{cd} = \delta_a^c\delta_b^d - \delta_a^d\delta_b^c$ to arrive at
$$ T^{\beta\delta} = (\delta^{\alpha\gamma}\delta^{\beta\delta} - \delta^{\alpha\delta}\delta^{\beta\gamma})\sigma_{\gamma\alpha} =\delta^{\beta\delta}\operatorname{tr}(\sigma) -\sigma^{\beta\delta} = -\sigma^{\beta\delta}, $$ where the last equality uses that $\sigma$ is traceless (which the component computation above also assumed when writing $\sigma_{11}=-\sigma_{00}$).
Edit: If you define $\epsilon^{\mu\nu}$ to be the metric, then your first formula is wrong, because index raising is done by
$$\huge x^{\color{red}\mu} = g^{\color{red}\mu\color{green}\nu} x_{\color{green}\nu} \tag{Correct} $$
and not
$$\huge x^{\color{red}\mu} = g^{\color{#FF4000}\nu\color{#808000}\mu} x_{\color{green}\nu} \tag{Wrong} $$
especially when your metric $g^{\mu\nu}$ is asymmetric.
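As a sanity check, the contraction can be done numerically with einsum (NumPy sketch). For a general $\sigma$ it gives $\delta^{\beta\delta}\operatorname{tr}(\sigma)-\sigma^{\beta\delta}$, which reduces to $-\sigma^{\beta\delta}$ once $\sigma$ is traceless:

```python
import numpy as np

eps = np.array([[0.0, 1.0], [-1.0, 0.0]])  # two-dimensional Levi-Civita symbol
s = np.random.rand(2, 2)
s -= 0.5 * np.trace(s) * np.eye(2)         # make sigma traceless

# T^{beta delta} = eps^{alpha beta} sigma_{gamma alpha} eps^{gamma delta}
T = np.einsum('ab,ca,cd->bd', eps, s, eps)
print(np.allclose(T, -s))                  # True
```

Dropping the traceless projection reproduces the extra $\operatorname{tr}(\sigma)$ term instead. |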
Is the condition $x\in\mathbb{R}$ necessary to the set statement $\{x \in\mathbb{R} \vert x> 0\}$? | It's absolutely necessary. You could have for instance
$$\{x \in \mathbb{Q}: x>0\} $$
which only includes positive rationals, not the irrationals.
It's common to abbreviate these to $\mathbb{R}_{>0}$ and $\mathbb{Q}_{>0}$ respectively. |
Martingale associated to Markov chain | If this is still of interest, such a process can be represented as a PDMP (piecewise-deterministic Markov process) with time as one of the state components. In his book, M. Davis gives an explicit characterization of the extended generator of a PDMP together with its domain, i.e. you will find both necessary and sufficient conditions on $f$. |
Rolling Dice Game, Probability of Ending on an Even Roll | The problem is not clear as stated.
Interpretation $\#1$: If you interpret it as "find the probability that the game ends in an evenly numbered round" you can reason recursively.
Let $P$ denote the answer. The probability that the game ends in the first round is $\frac 26+\frac 46\times \frac 46=\frac 79$. If the game does not end in the first round, then from that point on the probability of finishing in an even overall round is $1-P$ (the parities swap). Thus $$P=\frac 79\times 0 +\frac 29\times (1-P)\implies \boxed{P=\frac 2{11}}$$
as in your solution.
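A quick Monte Carlo run supports this (Python sketch; per round the first roll ends the game with probability $\frac26$, otherwise the second roll ends it with probability $\frac46$, giving the $\frac79$ above):

```python
import random

trials, even = 10**6, 0
for _ in range(trials):
    rounds = 0
    while True:
        rounds += 1
        # first player stops with prob 2/6; otherwise second player with prob 4/6
        if random.random() < 2 / 6 or random.random() < 4 / 6:
            break
    even += rounds % 2 == 0
print(even / trials)  # ~ 2/11 = 0.1818...
```

With $10^6$ trials the estimate lands close to $0.1818$.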
Interpretation $\#2$: If the problem meant "find the probability that $B$ wins given that $A$ starts" that too can be solved recursively. Let $\Psi$ denote that answer and let $\Phi$ be the probability that $B$ wins given that $B$ starts. Then $$\Psi=\frac 46\times \Phi$$ and $$\Phi=\frac 46 +\frac 26\times \Psi$$ This system is easily solved and yields $$\boxed {\Psi=\frac 47}$$ as desired. |
Hahn-Banach and the Fundamental Theorem of Calculus for Banach-space valued functions | We have $\phi \left(\int_x^{y}f'\,d\mu -(f(y)-f(x))\right)=0$ for all $\phi \in E^{*}$. So the only thing left is to show that if $u \neq 0$ in $E$ then there exists $\phi \in E^{*}$ such that $\phi (u) \neq 0$. This is where you can apply the Hahn–Banach argument: for $u \neq 0$ there exists $\phi \in E^{*}$ such that $\|\phi\|=1$ and $\phi (u) =\|u\| >0$. |
Chromatic number of plane and sphere | Regarding 2: Apparently, it is typical to use the Euclidean distance in the ambient 3-space rather than arc-length along the sphere.
That link also partially addresses your item 3: The lower and upper bounds depend on the radius. The interesting question is not what happens with spheres so large they essentially act like planes. It's "What radii correspond to changes in coloring behaviour?". |
Finding the number of consecutive objects | Let the number of stations between the starting point and the first halt be $x_1$, between the first and second halt be $x_2$, between the second and third halt be $x_3$, and between the third halt and the destination be $x_4$.
Now,
$$x_1+x_2+x_3+x_4+3=14,$$ where the $+3$ accounts for the three stations at which the train halted.
That is,
$$x_1+x_2+x_3+x_4=11$$
Now, no two of the halting stations will be consecutive precisely when $x_2\ge1$ and $x_3\ge1$.
Let $x_2=y_2+1$ and $x_3=y_3+1$. Then, we have $y_2,y_3\ge0$.
Let $x_1=y_1$ and $x_4=y_4$.
Thus, the solution will be the number of non negative integer solutions of the equation,
$$y_1+y_2+y_3+y_4=9$$
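By stars and bars this count is $\binom{9+3}{3}=\binom{12}{3}=220$, which a brute-force enumeration confirms (Python sketch; stations are numbered $0$ to $13$):

```python
from itertools import combinations

# choose 3 of the 14 intermediate stations with no two of them adjacent
count = sum(1 for h in combinations(range(14), 3)
            if all(b - a >= 2 for a, b in zip(h, h[1:])))
print(count)  # 220
```

Both approaches give $220$. |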
Characteristic polynomial of a unitary matrix. | A unitary matrix with real entries satisfies $A^TA=I$, where $I$ is the $n\times n$ identity matrix. By definition, the characteristic polynomial $p_A(t)=\det(tI-A)$, hence
$$ t^np_A(1/t)=t^n\det((1/t)I-A)=\det(I-tA)=\det(A^TA-tA)$$
$$=\det(A^T-tI)\det(A)=(-1)^n\det((tI-A)^T)\det(A)=(-1)^n\det(tI-A)\det(A)$$
$$ =(-1)^n\det(A)p_A(t) $$
Finally, $\det(A)=\pm 1$ since $A^TA=I$, so we get the desired result.
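The identity is easy to test numerically (NumPy sketch; np.poly returns the coefficients of $\det(tI-A)$):

```python
import numpy as np

n = 4
Q, _ = np.linalg.qr(np.random.randn(n, n))  # a random orthogonal matrix
p = np.poly(Q)                              # characteristic polynomial coefficients

t = 1.7
lhs = t**n * np.polyval(p, 1 / t)
rhs = (-1)**n * np.linalg.det(Q) * np.polyval(p, t)
print(np.isclose(lhs, rhs))                 # True
```

The check passes for any sample point $t\neq 0$. |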
Existence of zeroes of holomorphic Functions | Hints: for (1), use the identity principle to prove that if all the derivatives at $z_0$ are zero, then $f=0$ (look at the Taylor expansion in some ball around $z_0$). For (2) use that the zeros of any holomorphic function are isolated: write $f(z)=(z-z_0)^{{\rm ord}(f,z_0)}g(z)$, with $g$ holomorphic and $g(z_0)\neq 0$ and then compute that integral. |
Finding a narrower confidence interval for a given CI, sample mean and size | Hint: The (usual) formula for a confidence interval for a population proportion is different from the formulas for a confidence interval for a population mean. That formula involves the sample proportion, a confidence coefficient, and the sample size -- not a standard deviation (at least, not as a separate variable that you need to find the value of). |
How many ways to sit people on a bench in the same order | Hint:
Choose which of the positions on the bench are left empty versus occupied by people. Given such a selection, the people will sit in the prescribed order in the seats available to them.
This gives $\binom{n}{m}$ ways. |
Binary Integer Programming | You will need 480 variables of the form $P01S01$ to $P08S60$.
These variables must be integer variables equal to 0 or 1.
$P03S40=1$ means that student 40 is allocated to project 3.
$P03S40=0$ means that student 40 is not allocated to project 3.
You will need up to 5 constraints for each project.
So if project 14 needs at least one programmer you need to set $P14S01+P14S08+P14S45+... \geq 1$ where students 1, 8, 45 etc are programmers.
That's the easy bit. The harder bit is in defining your objective function.
To encourage high creativity and high teamwork you might want to multiply together the creativity and teamwork scores for all the students in each project and add these to the objective function.
If the creativity score (a value from 1 to 10; zero causes problems) for students 01 to 60 is given by the variable $C01$ to $C60$ and the teamwork score is given by the variable $T01$ to $T60$ then the product can be found by evaluating: $C01^{P01S01} \times C02^{P01S02} \times ... C60^{P01S60} \times T01^{P01S01} \times T02^{P01S02} \times ... T60^{P01S60}$
To encourage the combination of low creativity and high helpfulness you might want to multiply together the "uncreativity" = 11-creativity and the helpfulness scores for all the students in each project and add these to the objective function.
If the creativity score (a value from 1 to 10; zero causes problems) for students 01 to 60 is given by the variable $C01$ to $C60$ and the helpfulness score is given by the variable $H01$ to $H60$ then the product can be found by evaluating: $(11-C01)^{P01S01} \times (11-C02)^{P01S02} \times ... (11-C60)^{P01S60} \times H01^{P01S01} \times H02^{P01S02} \times ... H60^{P01S60}$
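To make the setup concrete, here is a minimal PuLP sketch of the variables and the staffing constraints (the student data here is made up, and the product-form objectives above are nonlinear, so they would need to be linearized or handed to a nonlinear solver):

```python
from pulp import LpProblem, LpVariable, LpMaximize, lpSum

projects = range(1, 9)    # P01..P08
students = range(1, 61)   # S01..S60
programmers = {1, 8, 45}  # hypothetical: the students who can program

prob = LpProblem("allocation", LpMaximize)
x = LpVariable.dicts("P_S", (projects, students), cat="Binary")

for s in students:        # each student joins exactly one project
    prob += lpSum(x[p][s] for p in projects) == 1
for p in projects:        # each project gets at least one programmer
    prob += lpSum(x[p][s] for s in programmers) >= 1
# (objective omitted -- see the discussion above)
```

The remaining skill constraints follow the same pattern. |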
How can I show that $A=\begin{pmatrix}a&b\\b&d\end{pmatrix}$ with $b\neq 0$ is diagonalizable? | Here's a different option.
Instead of saying to yourself "Those were tedious computations", you could instead say "What have I really proved?"
Lemma: The characteristic polynomial of a $2 \times 2$ matrix $M = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$ is
$$\lambda^2 - \text{tr}(M) \lambda + \text{det}(M)
$$
where $\text{tr}(M)$ and $\text{det}(M)$ denote the trace and the determinant:
$$\text{trace}(M) = a + d, \qquad \text{det}(M) = ad - bc
$$
Proof: (tedious but suddenly, perhaps, more interesting computations)
Theorem: If a $2 \times 2$ matrix $M$ satisfies the inequality
$$\text{trace}^2(M) - 4 \, \text{det}(M) > 0
$$
then it is diagonalizable.
Proof: $\text{trace}^2(M) - 4 \, \text{det}(M)$ is the discriminant of the characteristic polynomial, therefore if it is positive then there are two distinct characteristic roots. QED
Corollary: If $M = \pmatrix{a & b \\ b & d}$ and $b \ne 0$ then $M$ is diagonalizable.
Proof: $\text{trace}^2(M) - 4 \, \text{det}(M) = (a+d)^2 - 4 (ad-b^2) =$ (still one last tedious, but perhaps more enlightening, computation) $= (a-d)^2 + 4b^2 > 0$.
And now you can start to ask yourself: What happens with $3 \times 3$ matrices?.... or $4 \times 4$? ............
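(For the record, the last computation can be delegated to SymPy:

```python
import sympy as sp

a, b, d = sp.symbols('a b d', real=True)
M = sp.Matrix([[a, b], [b, d]])
disc = M.trace()**2 - 4 * M.det()
print(sp.simplify(disc - ((a - d)**2 + 4 * b**2)))  # 0
```

which confirms the discriminant identity.) |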
Floating point representation | I see $2$ mistakes here. In the first number you should be reading
$$(-1)^s2^{e-B}\left(\color{red}{1}+\frac M{16}\right)=(-1)^02^{5-3}\left(1+\frac{14}{16}\right)=7.5=x$$
You forgot about the implicit leading $1$ bit in $\text{IEEE-754}$ binary floating point formats. The fraction $\frac{14}{16}$ is there because the mantissa was $1110_2=14$ and there are $4$ bits in the mantissa and $2^4=16$. Now, the second mistake is that the question didn't ask you to decode the original input according to the rules of format B, but rather to convert the value we just got to format B. Thus
$$x=7.5=(-1)^s2^{e-7}\left(1+\frac M8\right)$$
(The denominator of $8$ because there are $3$ mantissa bits and $2^3=8$.) Since $x>0$, $s=0$ and since $2^2\le x<8$, $e-B=2=e-7$ so $e=9$ then
$$1+\frac M8=\frac x{2^{e-B}}=\frac x{2^2}=\frac {15}8=1+\frac78$$
So $M=7$ and we have $x=[s]\,[e]\,[M]=0\,1001\,111$
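A tiny Python sketch of the same decode/re-encode round trip (field widths and biases as in the problem; rounding and overflow corner cases are ignored):

```python
import math

def decode(s, e, M, bias, mant_bits):
    # value of a normalized float with an implicit leading 1 bit
    return (-1)**s * 2**(e - bias) * (1 + M / 2**mant_bits)

x = decode(0, 5, 14, bias=3, mant_bits=4)  # format A: s=0, e=101, M=1110
print(x)                                   # 7.5

e = math.floor(math.log2(x)) + 7           # format B exponent: 9
M = round((x / 2**(e - 7) - 1) * 8)        # format B mantissa: 7
print(e, M, decode(0, e, M, 7, 3))         # 9 7 7.5
```

The round trip reproduces $7.5$ exactly. |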
My proof that if $P(A) \subseteq P(B)$, then $A \subseteq B$ | Half-line proof, without using elements:
$A\in\mathcal P(A)$, hence $A\in\mathcal P(B)$, which means $A\subseteq B$. |
Show explicitly $X \to CX$ is Cofibration | To set conventions: I will define $CX = X\times I /X\times \{0\}$, I will identify $X \cong X\times \{1\} \subset CX$. We define a retraction $r: CX \times I \to (X\times I) \cup (CX \times \{0\})$ as follows:
$$r(((x,s),t))=
\begin{cases}
((x,1), t-2 +2s) & \text{if}\ \ \ 1-\frac{1}{2}t\le s \le 1\\
((x, \frac{2s}{2-t}),0)& \text{if}\ \ \ 0 \le s \le 1-\frac{1}{2}t
\end{cases}$$
It is necessary to check that the function is well defined; once that is done, each piece is clearly continuous, so $r$ is continuous by the pasting lemma. One also needs to check that it is a retraction.
Doing the case of $X = \{*\}$ by drawing a picture of a square and visualizing the retraction onto the left and top edge gives the idea for the formulas.
There is a more sophisticated approach to this type of thing using NDR pairs, and it is fairly easy to prove that $(CX, X)$ is an NDR pair. One advantage is it works out such formulas behind the scenes.
Finally, once you have obtained $r$, how do you obtain $m: CX\to Y^I$? First, note that $H: X\to Y^I$ defines by duality a map $H':X\times I \to Y$ given by $H'(x,t) = H(x)(t)$. The map $f: CX\to Y$ can be viewed as a map $f': CX\times \{0\} \to Y$ in the obvious way, by identification. Thus, we obtain a map $H'\cup f' : (X\times I) \cup (CX \times \{0\}) \to Y$, which agrees on the overlap by the commutativity of your diagram above. Therefore, $H'\cup f'$ is continuous. Now, define $m': CX\times I \to Y$ by $m' = (H'\cup f') \circ r$. The dual map $m: CX \to Y^I$ is the map $m$ you seek. For clarity, the map $m$ is defined by $m((x,s))(t) = m'((x,s),t)$. Hopefully, this clarifies the connection between the retraction and the lifting problem above. |
Confused about "total derivative" | Let $U\subseteq \mathbb{R}^n$ be an open, nonempty set, $f: U\rightarrow \mathbb{R}^m$ be a function and $x_0\in U$ some point. We say that $f$ is (Frechet-)differentiable in $x_0$ if there exists a linear function $L_x: \mathbb{R}^n \rightarrow \mathbb{R}^m$ such that
$$ \lim_{h\rightarrow 0} \frac{\Vert f(x_0+h)-f(x_0)-L_x(h) \Vert_{\mathbb{R}^m}}{\Vert h \Vert_{\mathbb{R}^n}} =0.$$
In this case we call $L_x$ the total derivative of $f$ at the point $x_0$. Usually we write $Df(x_0)$ instead of $L_x$.
So what is this total derivative? It is the best "linear approximation". I.e. if you want to fit some linear function at $f$ in $x_0$, then you want to pick the total derivative. How is this related to the "usual" derivative we see in college? If we have $g:\mathbb{R} \rightarrow \mathbb{R}$ which is differentiable in $x_0$, then $g'(x_0)$ is the slope of the tangent at $g$ in $x_0$. This line is the best fit you can have at $g$ in $x_0$. In one dimension the line is uniquely determined by its slope (and the fact that it has to go through $(x_0,g(x_0))$). Hence, there is the following correspondence between derivative and total derivative in one dimension: $g'(x_0)= Dg(x_0)[1]$. Or in other words
$$ Dg(x_0):\mathbb{R}\rightarrow \mathbb{R}, Dg(x_0)[h] = g'(x_0) \cdot h. $$ |
Show that the following set is open | A function is continuous if and only if the preimage of every open set is open.
You have a function from $\mathbb R^5 \to \mathbb R^2$
$f(v,w,x,y,z) = \left(x^2e^{v+w^{100}},\ xy-z^2\right)$
The set $(2,\infty)\times(-\infty,-1)$ in the co-domain is an open set.
Since the function is continuous, the pre-image of this set is open. |
Sum of uncountably many positive numbers | Note that
$$ \{\,\beta:P(A_\beta)>0\,\}=\bigcup_{n\in\mathbb N} \{\,\beta:P(A_\beta)>\tfrac1n\,\}.$$
So if the left-hand side is uncountable, at least one set on the right must be uncountable. But then already the sum over countably many of these $A_\beta$ contributes more than $\sum_{k=1}^\infty\frac1n=\infty$. |
Small and large categories when category theory is taken as the foundation of mathematics | Category theory wants to contain set theory (Set is an important category! Also, sets are basically discrete categories), so all of the usual reasons compel you to pay attention to size issues. They can even show up without appealing to sets; e.g. there can't be a category of all categories.
A more category-theoretic flavoring of size issues is:
A small category is a category object in Set
A locally small category is a Set-enriched category
so even if there weren't size issues, smallness would still be an important topic in category theory, even if it were just a special case of enriched category theory and internal category theory. |
proof of this theorem | Hint: Write $Q(x) = Q(x) - Q(a) = a_n(x^n - a^n) + a_{n-1}(x^{n-1} - a^{n-1}) + \cdots + a_1(x-a)$, using that $Q(a)=0$.
Factor each $x^k - a^k$ as $(x - a)P_k(x)$, where $P_k(x)=x^{k-1}+x^{k-2}a+\cdots+a^{k-1}$; then $(x-a)$ divides $Q(x)$. |
Find the probability that X = Y | This probability is generally $0$, for the same reason that $\mathbb{P}(X = x)$ is $0$ for any particular $x$.
To find a probability involving $X$ and $Y$, you integrate their joint density function over the corresponding region of the $xy$-plane. If your joint density function is actually supported on a 2-dimensional region of the plane, then integrating over any 1-dimensional subset of the plane will give you $0$, because it can be fit inside a region with arbitrarily small measure. (Here, "measure" refers to the Lebesgue measure on $\mathbb{R}^2$.) |