title | upvoted_answer
---|---
Find quotient and remainder using long division $(x^6+x^4+x^2+1)÷(x^2+1)$ | $$x^6+x^4+x^2+1=x^4(x^2+1)+x^2+1=(x^2+1)(x^4+1),$$
which gives that the quotient is $x^4+1$ and the remainder is $0$. |
Non-orthonormal basis vectors impossible? | The coordinates may be orthonormal even though the basis vectors are not orthonormal.
Let $v_1=(1,3)$ and $v_2= (3,2)$; then the coordinates of $v_1$ with respect to the basis $\{ v_1, v_2 \}$ are $(1,0)$.
Also, the coordinates of $v_2$ with respect to the basis $\{ v_1, v_2 \}$ are $(0,1)$.
The coordinates are orthonormal but the vectors are not. |
Determine $\lambda\in R$ so that the following equation has 2 real, distinct solutions. | Hint: defining $$f(x)=2x+2\ln(x)-\lambda(x-\ln(x))$$ then $$f'(x)=2+\frac{2}{x}-\lambda\left(1-\frac{1}{x}\right),$$ so $$f'(x)=0$$ if $$x_E=\frac{\lambda+2}{\lambda-2}$$ and $$f''(x_E)=-\frac{(-2+\lambda)^2}{\lambda+2}.$$ Then it must hold that
$$f''(x_E)<0$$ and $$f(x_E)=(\lambda+2)\left(\ln(x_E)-1\right)>0$$ |
Existence of a unique Lobachevski transform in the Half-Plane Model. | I think you made a mistake in your proof of Proposition 1.
I've already proved that given two sets $\{A',B',C'\}$ and $\{A,B,C\}$ of points in the Riemann sphere, there exist exactly two Möbius transforms mapping $A \to A'$, $B \to B'$ and $C \to C'$, where if $\phi$ is one of them, then $\rho \circ \phi$ is the other, where $\rho$ is the reflection over the conformal circle through $A',B',C'$. (Proposition 1)
The second transformation only works if $\{A',B',C'\}$ are on a hyperbolic line (a Euclidean line orthogonal to the boundary, or a circle with its centre on the boundary).
When the points are not on such a hyperbolic line, then $\rho$ does not map the hyperbolic plane into itself.
The second statement you have to prove is, I think, that there is only one transformation (as for 3 points not on a hyperbolic line).
On the boundary some special situations exist:
Betweenness does not hold (as on a circle); you can have transformations
like
$ A \to A , B \to C \text{ and } C \to B $
Fixing $A$ and $B$ does not fix $C$ (2 points on the boundary line are always infinitely far away from each other); you can have transformations like $A \to A , B \to B , C \to D \text{ with } C \not= D $.
Hope this helps |
Central limit theorem - Coin toss | You can define $X_i$ as you suggest, though not all Euros have a head (is it the map side they all have, or the other side, which sometimes has a head?). Let's define $X_i$ as an indicator of the map side, so you either want at least $110$ or no more than $90$ map sides showing.
You then want $\mathbb P \left(\sum X_i \ge 110\right)+\mathbb P \left(\sum X_i \le 90\right)$
Assuming independence, you can then use the binomial distribution for an exact calculation, or a Gaussian approximation with cutoffs at $90.5$ and $109.5$ (or, using symmetry, double the probability of no more than $90.5$). The probability would be almost $18\%$.
For example in R:
> # exact binomial: P(X <= 90) + P(X >= 110)
> pbinom(90, size=200, prob=0.5) + 1 - pbinom(109, size=200, prob=0.5)
[1] 0.178964
> # normal approximation with continuity correction (doubled by symmetry)
> 2 * pnorm(90.5, mean=200*0.5, sd=sqrt(200*0.5*0.5))
[1] 0.1791092 |
Spectral radius of a complete tripartite graph | I can confirm the statement, but I have no idea why one would expect it to be true. I computed the spectral radii numerically with numpy, and they both turn out to be $12$. I used the function numpy.linalg.eig, which gives the associated eigenvectors as well as the eigenvalues. The eigenvectors are returned normalized, but in this case they were of simple structure, and I was easily able to find eigenvectors with integer components so that I could do exact arithmetic.
Here is my python script for confirming the eigenvectors. In case you don't know python, A is the adjacency matrix of $K_{4,4,12}$ and B that of $K_{2,9,9}$. The eigenvector of A is
$$
(3,3,3,3,3,3,3,3,2,2,2,2,2,2,2,2,2,2,2,2)$$
and the eigenvector of B is $$
(3,3,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2)$$
import numpy as np
# adjacency matrix of K_{4,4,12}: zero blocks within parts, ones between parts
A = np.ones((20,20), dtype=int)
A[0:4,0:4] = 0
A[4:8,4:8] = 0
A[8:20,8:20] = 0
x = np.array(8*[3] + 12*[2])   # candidate eigenvector
y = np.array(20*[0], dtype=int)
z = 12*x - A@x                 # should vanish if A x = 12 x
assert all(y[i]==z[i] for i in range(20))
# adjacency matrix of K_{2,9,9}
B = np.ones((20,20), dtype=int)
B[0:2,0:2] = 0
B[2:11,2:11] = 0
B[11:20,11:20] = 0
x = np.array(2*[3] + 18*[2])   # candidate eigenvector
y = np.array(20*[0], dtype=int)
z = 12*x - B@x
assert all(y[i]==z[i] for i in range(20))
We note that the matrices are irreducible, so the Perron-Frobenius theorem applies. One of its assertions is that the Perron-Frobenius eigenvalue is the only one with an associated eigenvector of non-negative elements, which proves that $12$ is the maximum eigenvalue in both cases.
EDIT On second thought, with this example as a guide, we should be able to work out the spectral radius of any complete $k$-partite graph. We expect the Perron-Frobenius eigenvector to be a "block vector" of $k$ blocks, where each block is a constant vector whose length is the size of the corresponding part of the graph. I can't do this off the top of my head, but it doesn't sound hard. I don't see a formula, but this approach reduces the problem to finding the eigenvalues of a $k\times k$ matrix. In the present case, we would just have to show that the largest eigenvalues of $$
\pmatrix{
0&4&12\\
4&0&12\\
4&4&0
} \text { and }
\pmatrix{
0&9&9\\
2&0&9\\
2&9&0}
$$
are both $12$.
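As a quick numerical check of this reduction (a minimal sketch I added; the helper name kpartite_radius is not from the original script):
import numpy as np
def kpartite_radius(sizes):
    # reduced k x k matrix: entry (i, j) is the size of part j for i != j, zero diagonal
    k = len(sizes)
    M = np.tile(np.asarray(sizes, dtype=float), (k, 1))
    np.fill_diagonal(M, 0.0)
    return max(abs(np.linalg.eigvals(M)))
print(kpartite_radius([4, 4, 12]))  # 12.0
print(kpartite_radius([2, 9, 9]))   # 12.0
Both calls return $12$, matching the block-vector argument above. |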
Prove or disprove this condition for convergence of a sequence | HINT: Suppose that $\langle a_n:n\in\Bbb N\rangle$ is not convergent. If it has an unbounded subsequence, then it’s easy to find an $f:\Bbb N\to\Bbb N$ such that $a_{n+f(n)}-a_n\not\to 0$, so assume that $\langle a_n:n\in\Bbb N\rangle$ is bounded but not convergent. Then the sequence has two subsequences converging to different limits, say $L$ and $M$. Use these two subsequences to construct a function $f:\Bbb N\to\Bbb N$ and a subsequence $\langle a_{n_k}:k\in\Bbb N\rangle$ such that $a_{n_k+f(n_k)}-a_{n_k}\to L-M\ne 0$ as $k\to\infty$. |
Smoothness for morphism of schemes | By definition, there exists an open subscheme $U$ of $X$ containing all closed points of $X$ such that $U \to Y$ is smooth. If $U \not= X$, then $X \setminus U$ is a nonempty closed subscheme and the morphism from $X$ to $Y$ is singular at each point of $X \setminus U$. But $X \setminus U$ is quasi-compact, hence contains closed points, and these are also closed in $X$. This contradicts the first sentence. So $U$ must coincide with $X$. |
Solving a system of differential equations with a periodic solution | The $\begin{pmatrix} \sin(\omega x)\\0\end{pmatrix}$ tells you that the solution is a column vector. Let's express it as $\begin{pmatrix} y_1\\y_2\end{pmatrix}$ and try to find the results for $y_1$ and $y_2$. If you expand the expression with the matrix, you find 2 equations:
$\displaystyle \frac{dy_1}{dx} = y_2 + \sin(\omega x)$,
$\displaystyle \frac{dy_2}{dx} = -y_1$.
If you differentiate both equations with respect to $x$, you get
$\displaystyle \frac{d^2y_1}{dx^2} = \frac{dy_2}{dx} + \omega \cos(\omega x) = -y_1 + \omega \cos(\omega x)$,
$\displaystyle \frac{d^2y_2}{dx^2} = -\frac{dy_1}{dx} = -y_2 - \sin(\omega x)$.
Rearranging these expressions,
$\displaystyle \frac{d^2y_1}{dx^2} + y_1 = \omega \cos(\omega x)$,
$\displaystyle \frac{d^2y_2}{dx^2} + y_2 = - \sin(\omega x)$.
For $y_1$, the solution of the homogeneous equation is
$y_1^h = A \sin(x) + B \cos(x)$
A particular solution of the inhomogeneous equation has the form
$y_1^p = \kappa_1 \cos(\omega x)$.
You can find that $\kappa_1$ must be
$\displaystyle \kappa_1 = \frac{\omega}{1-\omega^2}$.
Therefore
$\displaystyle y_1 = A \sin(x) + B \cos(x) + \frac{\omega}{1-\omega^2} \cos(\omega x)$
To obtain $y_2$ you only need to use one of the first equations in this answer to find that
$\displaystyle y_2 = \frac{dy_1}{dx} - \sin(\omega x) = A \cos(x) - B \sin(x) - \frac{1}{1-\omega^2} \sin(\omega x)$
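A quick symbolic check of this solution (a minimal sketch I added, using sympy):
import sympy as sp
x, w, A, B = sp.symbols('x omega A B')
y1 = A*sp.sin(x) + B*sp.cos(x) + w/(1 - w**2)*sp.cos(w*x)
y2 = sp.diff(y1, x) - sp.sin(w*x)
# both residuals of the original first-order system should simplify to 0
print(sp.simplify(sp.diff(y1, x) - y2 - sp.sin(w*x)))
print(sp.simplify(sp.diff(y2, x) + y1))
Both residuals print $0$, confirming the general solution (for $\omega^2\neq 1$). |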
Question re: derivation of formula for volume of cone | I suspect the discrepancy is about the top disk in the pile - in particular, suppose you wanted to place disks of thickness $\frac{h}3$ in the cone. Since the text asks for the disks to be inscribed, we find that the disk resting on the base of the cone has radius $\frac{2}3r$, since it extends up to a height of $\frac{h}3$ above the base, where the cone has that radius. The disk atop that would have a radius of $\frac{1}3r$. Then, the disk atop that would have a radius of $0$, since it reaches to the top of the cone, where the radius goes to $0$.
So, you could either count $2$ disks, excluding the top one, or count $3$ disks but have one of them be degenerate. Presumably, the next step is to sum up the volumes of these disks, in which case it doesn't matter whether you included or excluded the top disk of volume zero.
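To see the sum converge to the usual formula, here is a small numerical sketch (my own illustration, with disks of thickness $h/n$):
import math
def inscribed_disk_volume(r, h, n):
    dh = h / n
    # the disk whose top sits at height (i+1)*dh is inscribed with radius r*(1-(i+1)/n);
    # the top disk (i = n-1) is degenerate with radius 0
    return sum(math.pi * (r * (1 - (i + 1) / n))**2 * dh for i in range(n))
for n in (3, 10, 1000):
    print(n, inscribed_disk_volume(1.0, 1.0, n))
The values approach $\pi r^2 h/3 \approx 1.0472$ for $r=h=1$, whether or not the degenerate top disk is counted. |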
Uniqueness quantification of planes in vector space | A plane extends infinitely in all directions it encompasses. Rotating the plane about a normal vector therefore does not change the plane. |
What is the derivative of $\frac{1}{(1-x)^2}$? | There are some paradoxes in mathematics, but not in your examples.
$$-\frac{2}{(1-x)^3} = \frac{2}{(x-1)^3}$$ And $$(\cos(x))'=-\sin(x)$$ while
$$(\cos(-x))'=-\sin(-x)\cdot(-x)'= \sin(-x)=-\sin(x)$$ |
displacement between vectors in 3D | Your intuition for the last displacement is correct, but you are getting some confusion in the signs. Using a slightly different notation:
$r_1=(2.9,4.5,3.3)$
$\Delta r_{12}=(4.9,2.1,5.8)$, so $r_2 =r_1+\Delta r_{12}=(7.8,6.6,9.1)$
$\Delta r_{23} = (1.5,1.5,3.0)$, so $r_3 = r_2 + \Delta r_{23} = (9.3,8.1,12.1)$
The total displacement is $r_3-r_1 = \Delta r_{12} + \Delta r_{23} = (6.4,3.6,8.8)$
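If you want to double-check the arithmetic, here is a quick sketch (my own addition):
import numpy as np
r1  = np.array([2.9, 4.5, 3.3])
d12 = np.array([4.9, 2.1, 5.8])
d23 = np.array([1.5, 1.5, 3.0])
print(r1 + d12)        # r2 = [7.8 6.6 9.1]
print(r1 + d12 + d23)  # r3 = [ 9.3  8.1 12.1]
print(d12 + d23)       # total displacement = [6.4 3.6 8.8]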
I hope this helps. |
Quadratic variation of $X(s)=W_{s+\epsilon}-W_{s}$ | The proof is rather similar to the case of the Brownian motion. Let's fix some notation first:
$$S^{\Pi}(f,s) := \sum_{j=1}^n |f(s_j)-f(s_{j-1})|^2$$
where $\Pi = \{0=s_0<\ldots<s_n=s\}$ is a partition of $[0,s]$ and $f: \mathbb{R} \to \mathbb{R}$ an arbitrary function.
Let $\Pi$ a partition of $[0,s]$ such that $\text{mesh} \, \Pi := \max |s_j-s_{j-1}| < \varepsilon$. We have
$$\begin{align*} \mathbb{E}(S^{\Pi}(X,s)) &= \sum_{j=1}^n \mathbb{E} \big[ \big| (W_{s_j+\varepsilon}-W_{s_{j-1}+\varepsilon})-(W_{s_j}-W_{s_{j-1}}) \big|^2 \big] \\ &= \sum_{j=1}^n \mathbb{E}[|W_{s_j+\varepsilon}-W_{s_{j-1}+\varepsilon}|^2]+ \mathbb{E}[|W_{s_j}-W_{s_{j-1}}|^2] \\ &= \sum_{j=1}^n (s_j-s_{j-1})+(s_j-s_{j-1}) = 2s \end{align*}$$
where we used $W_t-W_r \sim N(0,t-r)$, $t>r$, and the independence of $W_{s_j+\varepsilon}-W_{s_{j-1}+\varepsilon}$ and $W_{s_j}-W_{s_{j-1}}$ (they are independent because the mesh size of $\Pi$ is smaller than $\varepsilon$). This shows that $2s$ is a good candidate for the quadratic variation of $X$.
Now we want to show that $\mathbb{E}[(S^{\Pi}(X,s)-2s)^2] \to 0$ as $\text{mesh} \, \Pi \to 0$. To do so, we have to calculate expressions of the form
$$C_{j,k} := \mathbb{E} \big[ \big( (\Delta(s_{j-1}+\varepsilon,s_j+\varepsilon)-\Delta(s_{j-1},s_j))^2-2(s_j-s_{j-1}) \big) \cdot \big( (\Delta(s_{k-1}+\varepsilon,s_k+\varepsilon)-\Delta(s_{k-1},s_k))^2-2(s_k-s_{k-1}) \big) \big]$$
where $\Delta(r,t) := W_t-W_r$. A straightforward calculation shows that $C_{j,k} =0$ for $j \neq k$. Thus, we obtain
$$\begin{align*} \mathbb{E}[(S^{\Pi}(X,s)-2s)^2] &= \mathbb{E} \left[ \left( \sum_{j=1}^n \big( \Delta(s_{j-1}+\varepsilon,s_j+\varepsilon)-\Delta(s_{j-1},s_j) \big)^2 - 2 (s_j-s_{j-1}) \right)^2 \right] \\ &= \sum_{j=1}^n \mathbb{E} \bigg[ \bigg( \big(\underbrace{\Delta(s_{j-1}+\varepsilon,s_j+\varepsilon)-\Delta(s_{j-1},s_j)}_{\sim N(0,2 (s_j-s_{j-1}))}\big)^2 -2(s_j-s_{j-1}) \bigg)^2 \bigg] \end{align*}$$
The sum converges to $0$ as $\text{mesh}\, \Pi \to 0$. This follows, as in the case of the Brownian motion, by applying the scaling property, i.e. $N(0,2(s_j-s_{j-1})) \sim \sqrt{2(s_j-s_{j-1})}\, N(0,1)$.
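A small Monte Carlo sketch of this result (my own addition; grid, horizon and seed are arbitrary):
import numpy as np
rng = np.random.default_rng(0)
s, eps, n = 1.0, 0.1, 200_000        # mesh s/n is much smaller than eps
dt = s / n
m = n + int(round(eps / dt)) + 1     # number of grid increments up to s + eps
W = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), m))))
k = int(round(eps / dt))
X = W[k:k + n + 1] - W[:n + 1]       # X(s_j) = W_{s_j + eps} - W_{s_j}
print(np.sum(np.diff(X)**2))         # close to 2*s = 2.0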
(There are a lot of calculations, so don't hesitate to ask if you don't get along with it.) |
Trace-Preserving Matrices | No. Let
$$A=\begin{pmatrix}
\frac 12&-\sqrt{\frac 38}\\
\sqrt{\frac 32}&\frac 12\\
\end{pmatrix}$$
and
$$B=\begin{pmatrix}
\frac 23&0\\
0&\frac 13\\
\end{pmatrix}$$
Then $B>0$, $\mathrm{tr}(B)=1$, and $A^\dagger BA=B$ but
$$A^\dagger A=\begin{pmatrix}
\frac 74&\sqrt{\frac 3{32}}\\
\sqrt{\frac 3{32}}&\frac 58\\
\end{pmatrix}$$
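A quick numerical check of this counterexample (a sketch I added; the entries are real, so $A^\dagger = A^T$):
import numpy as np
A = np.array([[0.5, -np.sqrt(3/8)],
              [np.sqrt(3/2), 0.5]])
B = np.diag([2/3, 1/3])
print(np.round(A.T @ B @ A, 12))   # reproduces B
print(np.round(A.T @ A, 12))       # visibly not the identity
So $A$ preserves $B$ (and in particular $\mathrm{tr}(A^\dagger BA)=\mathrm{tr}(B)=1$) without being unitary. |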
How to prove the observability condition for the following state sytem? | Checking the rank of the observability matrix $\mathcal{O}$ is equivalent to the Hautus test, which for observability can be formulated as
$$
\text{rank}
\begin{bmatrix}
C \\ A - \lambda\,I
\end{bmatrix}=n \quad \forall\,\lambda \in \mathbb{C}
$$
with $A\in\mathbb{R}^{n \times n}$.
In this case it is easier to prove observability using the Hautus test. Since the given $A$ is just one Jordan block with eigenvalue $\mu$, the matrix whose rank is tested can only lose rank when $\lambda = \mu$. Performing this substitution gives
$$
\begin{bmatrix}
C \\ A - \mu\,I
\end{bmatrix} =
\begin{bmatrix}
c_1 & c_2 & c_3 &\cdots & c_n \\
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 1 & \ddots & \vdots \\
\vdots & \vdots & \ddots & \ddots & 0 \\
0 & 0 & \cdots & 0 & 1 \\
0 & 0 & \cdots & 0 & 0
\end{bmatrix}
$$
From here it is straightforward to show that the rank of that matrix equals $n$ if and only if $c_1 \neq 0$.
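A small numerical sketch of the test (my own addition, for a sample block size and eigenvalue):
import numpy as np
def hautus_rank(c, mu):
    # stack C on top of A - mu*I, where A is a single Jordan block with eigenvalue mu
    n = len(c)
    A = mu * np.eye(n) + np.diag(np.ones(n - 1), k=1)
    M = np.vstack([np.atleast_2d(np.asarray(c, dtype=float)), A - mu * np.eye(n)])
    return np.linalg.matrix_rank(M)
print(hautus_rank([1, 0, 0, 0], mu=2.0))  # 4: full rank, observable
print(hautus_rank([0, 1, 1, 1], mu=2.0))  # 3: rank deficient, since c_1 = 0
As expected, the stacked matrix has rank $n$ exactly when $c_1 \neq 0$. |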
Product of quasi-projective varieties is quasi-projective | Your proof is not correct since it assumes that the topology on $X\times Y$ is the product topology of the Zariski topologies on $X$ and $Y$. This is definitely not the case (in the book you are reading there should be an exercise illustrating this).
A good way is to use the Segre embedding as other answers suggested: take $X=U_X\cap Z_X\subseteq \mathbf{P}^n$ and $Y=U_Y\cap Z_Y \subseteq \mathbf{P}^m$ for appropriate open sets $U_X,U_Y$ and closed sets $Z_X,Z_Y$. Then there is a morphism
$$\sigma_{m,n} : \mathbf{P}^n\times \mathbf{P}^m \to \mathbf{P}^{mn + n +m}$$
which is a closed embedding. Hence it is enough to show that $W=\sigma_{m,n}(X\times Y)$ is locally closed and irreducible.
Let $\pi_1,\pi_2$ be the two projection maps onto the two factors of $\mathbf{P}^n\times \mathbf{P}^m$. These maps are morphisms and note that
$$W = \sigma_{m,n}(\pi_1^{-1}(X)\cap \pi_2^{-1}(Y))$$
Each of the two sets on the right side is locally closed in $\mathbf{P}^n\times \mathbf{P}^m$, since $X,Y$ are quasi-projective and the $\pi_i$ are continuous. Indeed
$$W= \sigma_{m,n}(\pi_1^{-1}(U_X\cap Z_X)\cap \pi_2^{-1}(U_Y\cap Z_Y))=\\
= \sigma_{m,n}(\pi_1^{-1}(Z_X)\cap \pi_2^{-1}(Z_Y))\cap \sigma_{m,n}(\pi_1^{-1}(U_X)\cap \pi_2^{-1}(U_Y))$$
is intersection of a closed set and an open set (recall that the Segre map is a homeomorphism). Irreducibility can be shown by contradiction. |
Map, injection or both? | A (binary) relation between sets $X$ and $Y$ is nothing else than a subset of the Cartesian product $X × Y$. In your question we have $R \subset \mathbb N \times \mathbb N$. In my opinion it is misleading to write it in the form $n \mapsto n^3- 3n^2 - n$; I would prefer to write
$$R = \{ (n,m) \in \mathbb N \times \mathbb N \mid m = n^3- 3n^2 - n \} .$$
I recommend to have a look at https://en.wikipedia.org/wiki/Binary_relation.
$R$ is not a function $\mathbb N \to \mathbb N$. As you have shown, for $n = 1, 2,3$ we do not have $m \in \mathbb N$ such that $(n,m) \in R$. However, we may regard it as a partial function, which means that for each $n \in \mathbb N$ we have at most one $m \in \mathbb N$ such that $(n,m) \in R$. The restriction $\mathbb N \setminus \{1, 2, 3\} \to \mathbb N$ is a function.
$R$ is an injective relation. This means that if $(n, m) \in R$ and $(n', m) \in R$, then $n = n'$. In fact, consider the function $\phi : \mathbb R \to \mathbb R,\phi(x ) = x^3 -3x^2-x$. Its derivative $\phi'(x) = 3x^2 - 6x -1$ is positive for $x > \xi = 1 + \sqrt{4/3}$, thus $\phi$ is strictly increasing on $(\xi,\infty)$. Since $ 4 > \xi$, we get injectivity.
$R$ is not a surjective relation, which means that there exists $m \in \mathbb N$ such that for all $n \in \mathbb N$ we have $(n,m) \notin R$. In fact, we have $\phi(4)= 12, \phi(5) = 45$. Since $\phi$ is strictly increasing on $(\xi,\infty)$, we may take $m = 13$. |
Solving Volterra integral equation | Hint. Assume $u(\cdot)$ is continuous over $[0,\infty)$. Then one may differentiate the initial equation twice, using the Leibniz integral rule getting
$$
u''(t)+u(t)=\frac{5}4t, \quad t\geqslant 0,
$$
which can be classically solved using $u(0)=0$ and $u'(0)=\dfrac54$. |
The cardinality of a set of functions | The definition of $h$ is fixed as $h(i)=f(i)$ for $i\ge k$, and it can be assigned at will either $0$ or $1$ for $0< i <k$. This gives $2\cdot2\cdot...\cdot2=2^{k-1}$ possible functions, so the cardinality of $A$ is $2^{k-1}$.
As noted in the comments, this answer is for $\mathbb N=\{1,2,3,...\}$. If $0\in \mathbb N$, then the answer is $2^k$. |
Why do equations with two distinct variables with 2 distinct linear equations work? | A single equation in one variable picks out a value in $\Bbb R$. An equation in two variables picks out a line in $\Bbb R^2$. Two equations pick out two lines, which generally intersect in a point. Basically each equation eliminates one degree of freedom, so you need as many equations as unknowns. |
What does it mean that $\sin(t) = 0$ for $t = 0$ or $\pi$ mod $2\pi$ | Your prof says $\sin(t)=0$ iff
$$
\frac{t}{2\pi}\in\mathbb{Z}\quad\text{or}\quad\frac{t-\pi}{2\pi}\in\mathbb{Z}
$$
which, as you put it, is a "fancy pants way" to say $\sin(t)=0$ iff $t=k\pi$ for $k\in\mathbb{Z}$. |
How to solve $F e = \vec{0}$? | I don't have enough reputation for commenting: You can simply eliminate one of the rows of $F^T$ and obtain a $2\times 3$ matrix. Then the null space of that matrix is your subspace of solutions. Also, the subspace is spanned by any nonzero element from it. One immediate solution would be the Matlab command e = null(F') |
Prove that the group $\mathbb{Z}^{n}$ is generated by at least $n$ elements | Assume $\mathbb Z^n$ is generated by $v_1,\ldots, v_m$. Then these $m$ vectors, viewed as elements of $\mathbb Q^n$, also span $\mathbb Q^n$ as a $\mathbb Q$-vector space (for $v\in\mathbb Q^n$, there is some $N\in\mathbb N$ with $Nv\in \mathbb Z^n$; then $Nv=\sum a_iv_i$ implies $v=\sum\frac{a_i}{N}v_i$). Since $\dim\mathbb Q^n=n$, this implies $m\ge n$. |
Show that when $p$ is a principal type, it must contain the formula that implies it | As pointed out in the comment, this question only makes sense for complete types. So let $p$ be a principal type, with principal formula $φ \in F_n$, and suppose toward a contradiction that $φ \not\in p$; since $p$ is complete, this gives $¬φ \in p$. Add new constant symbols $c = (c_1, c_2, ..., c_n)$. The principality condition gives us that $T \cup \{φ(c)\} \models ψ(c)$ for all $ψ(x) \in p$, and so $T \cup \{φ(c)\} \models ¬φ(c)$. It follows that $T \models ¬φ(c)$, and therefore $T \models ¬\exists x φ(x)$. This contradicts the fact that $φ$ is realised in all models of $T$, i.e. $Τ \models \exists x φ(x)$. Thus $φ \in p$. |
A problem involving dense orbits of a continuous map | This is essentially the same as saying that if any sequence $(p_0,p_1,...,p_n,...)$ is dense, then taking out a finite number of elements still leaves it a dense sequence $(p_n,p_{n+1},...)$.
If you have in any neighborhood of $x$ an infinite number of elements of the first sequence, you have still infinitely many elements of the second sequence in it. |
A set $A_n$ of $n$ elements such that for each pair of elements, one is a member of the other | This is not allowed, in fact. If we had $a\in b\in a,$ the set $c=\{a,b\}$ would violate the axiom of regularity. The axiom of regularity says that any nonempty set has an element disjoint from itself, but $b\in a \cap c$ and $a\in b\cap c$, a contradiction. |
Pigeonhole Principle - Pairs of Sets with distinct sums | You cannot get all of {3,...,17}. If you have both 3 and 4, they must be 1+2 and 1+3, so 2 and 3 are on the same side. Then to also get 5, 4 must be on the same side as 2 and 3. Repeat this argument up to 16. But then you cannot get 17. A similar argument shows that you cannot get all 15 of {185,...,199}. |
Limit as $x$ approaches to $0$ is nonzero divided by $0$ | Why did you get stuck when you got $\displaystyle\lim_{x\to0}\frac{2x+3}{15x^4}$? This limit is equal to $+\infty$. Therefore, the limit that you're after is $+\infty$ too. |
The smallest integers having $2^n$ divisors | You do use something special about $2$, namely that it is prime:
If you replace $2$ with a different prime $q$,
you end up with
$$ f(q^n)p^{q^{k+1}-q^k}\ge f(q^{n+1})=Np^{q^{\ell}-q^{\ell-1}}\ge f(q^n)p^{q^{\ell}-q^{\ell-1}},$$
so $k\ge \ell-1$ and again $k=\ell-1$ and $f(q^{n+1})=p^{(q-1)q^k}f(q^n)$. |
$xy=1$ find the minimum value of $x^{6}+x^{4}y^{2}+x^{2}y^{4}+y^{6} $ | AM-GM says
$$
\frac{x^6+x^4y^2+x^2y^4+y^6}{4}\geq\sqrt[4]{x^6\cdot x^4y^2\cdot x^2y^4\cdot y^6}=1
$$
where equality is obtained iff all the four terms in the numerator are equal; note that the product under the root is $(xy)^{12}=1$ since $xy=1$. And it is possible to make them equal (take $x=y=1$), so we can obtain equality, and the minimum value is $4$. |
Does the derivative with respect to something have to be a variable? | Intuitively, the derivative of $f$ with respect to $u$ is the limit of the change in $f$ as $u$ changes, divided by the change in $u$, as the change in $u$ vanishes. This does not require $u$ to be a "variable" in the usual sense: you can certainly ask for the rate of change of, say, $f(x) = \sin(x^2+1)$ with respect to $u=x^2$. So, no, it does not have to be an "independent variable" in the sense that you seem to be thinking about.
In fact, that's what the Chain Rule is all about! It tells you that if $f$ depends on $g$ and $g$ depends on $x$, then the rate of change of $f$ with respect to $x$ is equal to the rate of change of $f$ with respect to $g$, times the rate of change of $g$ with respect to $x$:
$$\frac{df}{dx} = \frac{df}{dg}\;\frac{dg}{dx}.$$
Here, we usually have $g$ a function, not a "variable". Yet we can talk about the derivative of $f$ with respect to $g$.
Every time you have a function, you can try to talk about the rate of change of the function with respect to something else, provided you have some way of quantifying the change. |
Ferrers Diagram Partition | Let's illustrate this situation for the case $r=3$. Initially $k=0$. Then the total is $2r+k=6$ and we seek the partitions with exactly $r+k=3$ nonzero parts. We discover that there are three of them:
$6=4+1+1$
$6=3+2+1$
$6=2+2+2$
We translate these into Ferrers diagrams:
$4+1+1$
° ° ° °
°
°
$3+2+1$
° ° °
° °
°
$2+2+2$
° °
° °
° °
Now increment $k$ to $1$. This increases the total to $7$ and the number of required terms to $4$. Again there are three partitions satisfying these conditions:
$7=4+1+1+1$
$7=3+2+1+1$
$7=2+2+2+1$
To get a total as small as $7$ with as many as four nonzero parts, we need all the partitions to have a $1$ at the end. We see this in the Ferrers diagrams:
$4+1+1+1$
° ° ° °
°
°
°
$3+2+1+1$
° ° °
° °
°
°
$2+2+2+1$
° °
° °
° °
°
Compare the two sets of diagrams: the $k=1$ diagrams are just the $k=0$ diagram with one more single-unit row attached to the bottom.
What happens in general is that each $(r,k)$ diagram, for a total of $2r+k$ with $r+k$ nonzero parts, matches a diagram for $(r,0)$ plus each of $k$ additional units in a separate, additional row. |
E(XY) = E(X).E(Y|X) . Is this true for mean = zero. | Your equation (1) is misleading. The joint density function of two random variables is not related to their product. Further we more commonly use $f$ for probability density functions, rather than $\mathsf P$. So the joint PDF is:
$$f_{X,Y}(x,y) = f_X(x)f_{Y\mid X}(y\mid x)$$
The expected value of the product of two random variables is then:
$$\begin{align}
\mathsf E[XY]
& = \iint_{\mathcal{X\times Y}} xy\; f_{X,Y}(x,y)\operatorname d y \operatorname d x
\\[1ex]
& = \iint_{\mathcal{X\times Y}} xy\;f_{Y\mid X}(y\mid x)f_X(x)\operatorname d y \operatorname d x
\\[1ex]
& = \int_\mathcal X x\left(\int_\mathcal{Y\mid X=x} y\;f_{Y\mid X}(y\mid x)\operatorname d y\right)f_X(x)\operatorname dx
\\[1ex] & = \int_\mathcal X x\;\mathsf E[Y\mid X=x]\;f_X(x)\operatorname dx
\\[1ex] & = \mathsf E[X\;\mathsf E[Y\mid X]]
\end{align}$$
This is not the same thing as $\mathsf E[X]\;\mathsf E[Y\mid X]$. |
Finding a dual basis for the vector space of polynomials degree less than or equal to 2 | Let $\{f_1,f_2,f_3\}$ be the basis of $V$ dual to $\{X_1,X_2,X_3\}$, i.e.
$$X_i(f_j)=\delta_{ij}. $$
Let us find $f_1,f_2,f_3$. By definition, $f_i=a_i+b_it+c_it^2$. Then
$$1\stackrel{!}{=}X_1(f_1)=a_1+\frac{b_1}{2}+\frac{c_1}{3}$$
$$0\stackrel{!}{=}X_2(f_1)=b_1+2c_1 $$
$$0\stackrel{!}{=}X_3(f_1)=a_1$$
which implies $a_1=0$, $b_1=3$ and $c_1=-\frac{3}{2}$, i.e. $f_1(t)=3t-\frac{3}{2}t^2$.
Similar computations lead to $f_2$ and $f_3$, by solving
$$X_1(f_2)=X_3(f_2)=X_1(f_3)=X_2(f_3)=0,$$
and
$$X_2(f_2)=X_3(f_3)=1. $$
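Equivalently, all three dual polynomials can be read off at once by inverting the matrix of the functionals (a sketch I added, assuming, as the computations above suggest, $X_1(f)=\int_0^1 f$, $X_2(f)=f'(1)$ and $X_3(f)=f(0)$ for $f=a+bt+ct^2$):
import numpy as np
# rows: X1, X2, X3 applied to the monomials 1, t, t^2 (functionals as inferred above)
M = np.array([[1.0, 0.5, 1/3],
              [0.0, 1.0, 2.0],
              [1.0, 0.0, 0.0]])
# column j of the inverse holds the coefficients (a_j, b_j, c_j) of f_j
print(np.linalg.inv(M))
The first column comes out as $(0, 3, -\tfrac{3}{2})$, matching $f_1(t)=3t-\tfrac{3}{2}t^2$ above. |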
Solve this $\;2\sqrt[4]{\frac{x^4}{3} + 4} = 1 + \sqrt {\frac{3}{2}} |y| \ldots$ | Following Ron Gordon's suggestion, and noting that if $(x,y)$ is a solution, then $(\pm x, \pm y)$ are all solutions, we need to solve:
$$2\sqrt[4]{\frac{x^4}{3}+4}=1+\sqrt{\frac{3}{2}}x$$
Both sides to the fourth power:
$$\frac{16x^4}{3}+64=1+2\sqrt6x+9x^2+3\sqrt6x^3+\frac{9x^4}{4}\\$$
Moving everything to one side and multiplying by $12$, it turns out that this expression factors:
$$(x-\sqrt6)^2(37x^2+38\sqrt6x+126)=0$$
So that $x=\sqrt6$. The quadratic has no real roots.
Answers are then $(\pm\sqrt6,\pm\sqrt6)$. |
How can we show the identity of the two following equations. Plane Earth Loss Model. | In this expression I think that the absolute value denotes the complex modulus (if that is not clear,
read Absolute value in the complex numbers
).
Let's suppose:
$$ b = \left(\frac{\lambda}{4\pi d}\right)^2 $$
$$ a = k\,\frac{2h_Th_R}{d} $$
Then we have to prove that:
$$b\,\left|1-e^{ja}\right|^{2} = 2\,b\,\left|1-\cos(a)\right|$$
First of all, by Euler's formula:
$$ e^{jx} = \cos(x) + j\,\sin(x),$$
which implies:
$$b\,\left|1-e^{ja}\right|^{2} = b\,\left|1-\cos(a)-j\,\sin(a)\right|^2.$$
But:
$$|x+j\,y|^2 = x^2+y^2 $$
for every complex number $x+jy$. Then:
$$b\,\left|1-e^{ja}\right|^{2} = b\left((1-\cos(a))^2+\sin(a)^2\right).$$
Expanding the square and using the identity
$$ \cos(x)^2+\sin(x)^2 = 1,$$
we have:
$$ b\left((1-\cos(a))^2+\sin(a)^2\right) = b\,(2-2\cos(a)) = 2\,b\,(1-\cos(a)),$$
which (since $1-\cos(a)\geq 0$) is what we wanted to prove. |
Probability of voting | The probability of a tie is
$$P_{tie}=\frac{\binom{50}{25}}{2^{50}}$$
thus the probability that neither group reaches a tie is
$$P = (1-P_{tie})^2$$
(where all events are assumed to be independent). This gives
$$P = \left(1 - \frac{\binom{50}{25}}{2^{50}}\right)^2 \approx \left(1 - \frac{1}{5\sqrt{\pi}}\right)^2\approx 0.79$$
where in the second step we approximated using Stirling's formula.
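A direct check of both numbers (a small sketch I added):
from math import comb, pi, sqrt
p_tie = comb(50, 25) / 2**50
print((1 - p_tie)**2)            # exact: about 0.788
print((1 - 1/(5*sqrt(pi)))**2)   # Stirling approximation: about 0.787
Both values round to the quoted $0.79$. |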
Topologies in a Riemannian Manifold | First, you need to stipulate that $M$ is connected for the distance function to turn $M$ into a metric space.
When $M$ is connected, the answer is yes, the metric topology is the same as the given manifold topology. The proof boils down to showing that the Riemannian metric is uniformly comparable to the Euclidean metric in small coordinate balls. You can read a proof in my Introduction to Smooth Manifolds (2nd ed., Theorem 13.29). |
prove : if E(X) doesn't exist $E(x^2)$ too doesn't exist. | If $E(X)$ doesn't exist, $E(X_+)$ or $E(X_-)$ or both are infinite. An application of the Jensen inequality with the function $x^2$ will show you that $E(X^2) \geq E(X_+^2) \geq E(X_+)^2 = \infty$ (or the same calculation with $X_-$ if that's infinite).
The problem with your calculation is that not every random variable has a density. |
Special types of prime numbers | There are infinitely many primes that divide Fibonacci numbers.
See for instance
Infinite primes via Fibonacci numbers
and the original paper
Another proof of the infinite primes theorem |
differential equation as taylor series | You have
$\begin{align}
x^\prime(t) & = g(x(t)) \\
x^{\prime\prime}(t) & = g^\prime(x(t))x^\prime(t) \\
x^{(3)}(t) & = g^{\prime\prime}(x(t))\left(x^\prime(t)\right)^2 + g^{\prime}(x(t))x^{\prime\prime}(t) \\
...
\end{align}$
Now you have to find a pattern by continuing to differentiate the above equations, and you have to set $t=0$ in the end.
Note: this will only make sense if $x(t)$ is analytic, i.e. the Taylor series has a positive radius of convergence.
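Here is a small symbolic sketch of the procedure (my own addition, for the sample choice $g(x)=x^2$ with $x(0)=1$, whose exact solution is $1/(1-t)$):
import sympy as sp
t = sp.symbols('t')
x = sp.Function('x')
g = lambda u: u**2
# build x'(t), x''(t), ... by differentiating and substituting x'(t) = g(x(t))
derivs = [x(t), g(x(t))]
for _ in range(3):
    derivs.append(sp.diff(derivs[-1], t).subs(sp.Derivative(x(t), t), g(x(t))))
# evaluate at t = 0 with x(0) = 1 and assemble the Taylor polynomial
taylor = sum(d.subs(x(t), 1) / sp.factorial(n) * t**n for n, d in enumerate(derivs))
print(sp.expand(taylor))   # 1 + t + t**2 + t**3 + t**4, the start of 1/(1 - t)
The printed polynomial reproduces the geometric series, as expected for this $g$. |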
Logical Truth via Truth Trees | Yes, you need to negate it for the setup. Then, if all branches close, it is a tautology, and if you get a finished and open branch, it is not a tautology. |
Max mutual information with variance constraint | I don't have a very illuminating answer or anything, just numerical evidence that the Gaussian is not the maximiser above.
I'll work with natural logs for convenience.
Note that, by independence, $I(X_1; X_1 + X_2 + Z) = h(X_1 + X_2 + Z) - h(X_2 + Z).$
If $X$ is Gaussian of variance $\sigma^2$, then we can explicitly compute the functional above to be $\frac{1}{2} \log (2 - 1/(1+\sigma^2))$. This is increasing with $\sigma$, and under the constraints, the best any Gaussian can do is $\frac{1}{2} \log (3/2) \le 0.20274.$
On the other hand, consider $X $ uniform on $\{+ 1, -1\}$.
Let $\varphi$ be the density of the standard Gaussian. Note then that the density of $X + Z$ is $(\varphi(u-1) + \varphi(u+1))/2$ and the density of $X_1 + X_2 + Z$ is $(\varphi(u-2) + \varphi(u+2) + 2\varphi(u))/4.$
At this point, I simply enter these expressions into wolfram alpha. By these computations,
$h(X_1 + X_2 +Z) \ge 1.960$ - see https://bit.ly/2RWQuyl
$h(X_2 + Z) \le 1.756$ - see https://bit.ly/2FWPo0A
Thus, for this $X$, the functional is $\ge 0.204.$
(In case you want to check the above - W|A helpfully displays the latex-ed out versions of the expressions keyed in). |
Finding a system of equations that defines a line $r$ | What you get is a different representation of the same line.
Compare
\begin{align*}
(1,2,3)+t(3,1,2) &= (0,\tfrac53,\tfrac73)+u(1,\tfrac13,\tfrac23)
\end{align*}
From the first coordinate you can read that $u=1+3t$. If you plug that into the other coordinates, you will find that the equation holds in this case as well. So it's just a different representation of the same line. |
Are $\mathbb{R}$ and $\mathbb{Q}$ the only nontrivial subfields of $\mathbb{R}$? | There are infinitely many such fields. A particular example is $\{x+y\sqrt 2\mid x,y\in \mathbb Q \}$. It is easy to verify that this is a proper subfield of $\mathbb R$ that properly contains $\mathbb Q$. More generally, let $\alpha \in \mathbb R$ be any irrational number. Then, since the intersection of any family of subfields of a given field is again a field, there exists the smallest subfield of $\mathbb R$ containing $\alpha$. This field is usually denoted by $\mathbb Q(\alpha)$. It certainly contains $\mathbb Q$ and it is not difficult to show (take it as a nice exercise) that it must be properly contained in $\mathbb R$.
The above already gives you a huge (infinite) repository of intermediate fields. But there are more. Recall that an algebraic real number is a real number that is the root of a polynomial with integer coefficients. It requires a bit of general field theory to prove that the collection of all real algebraic numbers forms a field. The existence of transcendental numbers shows that this field is properly contained in $\mathbb R$. And there are even more intermediate fields.
Just to place things in the right context, recall that any field $F$ containing $\mathbb Q$ can be seen as a vector space over $\mathbb Q$. Every vector space has a dimension. The dimension of $\mathbb R$ over $\mathbb Q$ is infinite. So in that sense the field extension $\mathbb R:\mathbb Q$ is very large. For every natural number $n$ there is an intermediate field $\mathbb Q\subseteq F\subseteq \mathbb R$ whose dimension over $\mathbb Q$ is $n$. |
an equivalent definition of connectedness | As you point out, usually a property can be defined in various equivalent ways. Sometimes the equivalence of different formulations is not obvious at all. It might even be a deep theorem. In such a case it may be crucial to choose the relevant formulation when proving a theorem.
However, sometimes the equivalence is very easy to see or almost trivial as in your case of defining disconnectedness in two ways: if you have the separation $X=A\cup B$, you can define a continuous function $f:X\to \{0,1\}$ by $f[A]=\{0\}$ and $f[B]=\{1\}$; conversely if you have the continuous function $f$, then the sets $f^{-1}\{0\}$ and $f^{-1}\{1\}$ give a separation. This means that it does not matter very much which formulation you use when proving theorems about connected/disconnected spaces, since you can do the above easy deductions inside the proof if needed.
Trying to think of a case when the functional definition of disconnectedness is (a little bit) more useful, I could not come up with anything but the following (perhaps artificial) example.
Assume you have a family $\{X_i : i\in I\}$ of disconnected spaces. Then by the functional definition of disconnectedness you get, for each $i\in I$, nonempty disjoint sets $A_i,B_i$ with $A_i\cup B_i=X_i$ and a continuous surjective function $f_i:X_i \to \{0,1\}$ with $f[A_i]=\{0\}$ and $f[B_i]=\{1\}$. Now consider the function $f:\prod_{i\in I}X_i\to \{0,1\}^I$ defined by $f((x_i)_{i\in I})=(f_i(x_i))_{i\in I}$. This function $f$ is also continuous and surjective since $f_i,i\in I$ are. Hence the space $\{0,1\}^I$ is a continuous image of $\prod_{i\in I}X_i$. In particular, the Cantor set $\{0,1\}^\mathbb{N}$ (and thus any compact metrisable space) is a continuous image of any countably infinite (or larger) product of disconnected spaces. |
Spanning list reduction to basis & relation to Algorithm Correctness | I'm not sure if I understand what you're asking, but if you show that the algorithm does what it is supposed to do, namely reduce a spanning list to a basis, then that gives you a proof of the theorem. In fact, it gives you more, because it also tells you how to reduce a spanning set into a basis if you were actually given one.
That said, if you would want to implement the procedure outlined in the proof of the book on, say, a computer, then you might want to add some details on checking whether $v_j$ is in the span of $v_{1},\ldots,v_{j-1}$. For the purpose of the theoretical algorithm, this is somewhat irrelevant, because we know that it either is in the span, or it's not, and the algorithm will proceed either way.
Edit. Regarding the steps in the comment:
1. and 3. are verified at the end of the proof: using (2.21) it is observed that the output of the algorithm is indeed a basis. 2. is true because at each step $j$, you turn your attention to vector $v_j$, and since there are finitely many vectors, you will surely at some point reach the last vector. I do not understand 4. As for 5.: the proof gives no indication of how to implement the algorithm in practice, or how long it would take for a computer to finish it. In that sense, it might not be considered a true algorithm. In fact, let me rephrase 'Step $j$' as follows:
Step $j$: Ask the Lord Almighty if $v_j$ is in the span of $v_1,\ldots,v_{j-1}$. If so, delete it; if not, leave it unchanged.
I claim that the proof is still valid when phrased in this rather crazy way. Indeed, all that matters is that there is an outcome to the step: either $v_j$ is in the span, or it's not. Whether a computer can actually figure that out is besides the point: there's an answer either way and you get to move on to step $j+1$ either way. |
Continuous Time Random Walk on $\mathbb{Z}$, Probability to be $>k$ as $t \to \infty$ | Just elaborating on Sangchul Lee's answer.
Consider the discrete version of this random walk. Denote by $S_n$ the current location in the random walk, then
$$S_n = \sum_{i=1}^nX_i$$
where the $X_i$ are independent random variables taking the values $\pm 1$ with probability $\frac{1}{2}$ each.
Then, according to the central limit theorem, $\frac{S_n}{\sqrt{n}}$ converges in distribution to the normal distribution $N(0,1)$.
Let $k$ be some positive integer, and let $\epsilon > 0$.
For $n$ large enough, $\frac{k}{\sqrt{n}} < \epsilon$.
From this $n$ onwards, $\mathbb{P}(S_n>k)=\mathbb{P}(\frac{S_n}{\sqrt{n}}>\frac{k}{\sqrt{n}})\geq \mathbb{P}(\frac{S_n}{\sqrt{n}}>\epsilon).$
By the definition of convergence in distribution, $\mathbb{P}(\frac{S_n}{\sqrt{n}}>\epsilon)\xrightarrow{n \to \infty} 1 -\phi(\epsilon)$,
where $\phi$ is the cumulative distribution function of the normal distribution.
$\epsilon$ was an arbitrary positive number, so
$\lim_{n\to \infty}\mathbb{P}(S_n>k)\geq \sup_{\epsilon>0}[1-\phi(\epsilon)]=\frac{1}{2}.$
But $\mathbb{P}(S_n>k) \leq \mathbb{P}(S_n>0) \leq \frac{1}{2}$ by symmetry, so $\lim_{n\to \infty}\mathbb{P}(S_n>k) \leq \frac{1}{2}.$
Thus, $\lim_{n\to \infty}\mathbb{P}(S_n>k) = \frac{1}{2}.$ In particular, $\lim_{n\to \infty}\mathbb{P}(|S_n|>k) = 1.$
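A quick simulation of these limits (my own sketch; the parameters are arbitrary):
import numpy as np
rng = np.random.default_rng(1)
n, k, trials = 10_000, 5, 2_000
S = rng.choice([-1, 1], size=(trials, n)).sum(axis=1)
print(np.mean(S > k))            # close to 1/2
print(np.mean(np.abs(S) > k))    # close to 1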
P.S. This holds for the continuous version of the random walk because the continuous Markov chain is just a slowed down / accelerated version of the discrete one. So the limiting behaviour, as time approaches infinity, is the same. |
Gaussian curvature is identically zero if $\vec n_u$ and $\vec n_v$ are linearly dependent | Use the fact that $\vec{x}_u \cdot \vec{n} = 0$, and differentiate with respect to $u$ and $v$. Do a similar computation with $v$ instead of $u$. See what this makes the second fundamental form. Can you now use the fact that $n_u$ and $n_v$ are dependent to show that the determinant vanishes? |
Finding the area of a triangle, given the distance between center of incircle and circumscribed circle | Hint:-
Let the equal sides be $x$,
Distance of circumcenter from A, OA = circumradius, $R=\dfrac{abc}{4\triangle}$
Distance of incenter from A, MA = $r\csc{\dfrac{A}{2}}$
You will get MA - OA in terms of $x$ and $\beta$, but MA - OA = k
Then you can substitute the value of $x$ in terms of $k$ and $\beta$ in the formula for the area, i.e.
$\dfrac{1}{2}\cdot2x\cos\beta\cdot x\sin\beta$ |
Does $F$ have a local minimum at $0$? | Clearly $F(0,0)=0$, and $\partial_y F = xe^x+e^y+ye^y$ and $\partial_x F = ye^x+xye^x-e^x$. We have $\partial_y F(0,0)=1$ and $\partial_x F(0,0)=-1$.
Therefore we can apply Dini's theorem, getting that, locally, $C$ is described by $(x,\phi(x))$ with $\phi(x)$ a differentiable function with $\phi'(0)= -\frac{\partial_x F(0,0)}{\partial_y F(0,0)}=1$.
Now we have to see if $(0,0)$ is a critical point for $f$ on $C$:
$\frac{d}{dx} f(x,\phi(x)) = \partial_x f(x,\phi(x)) + \partial_y f(x,\phi(x))\,\phi'(x)$, which at $0$ becomes $-2\cdot 1+2\cdot 1=0$.
Since $H(f)$ is positive definite we can conclude it is a minimum point. |
product of two algebras is still an algebra? | The product $A\times B$ is an $R$-algebra, just defining all the operations (including multiplication by elements of $R$) coordinatewise. It doesn't have any particular relation to $A\otimes B$, though.
(If all your rings are commutative, then the two constructions are in a certain sense dual: $A\times B$ is the product of $A$ and $B$ and $A\otimes_R B$ is the coproduct of $A$ and $B$ (in the category of commutative $R$-algebras).) |
Is the poset of natural numbers cofiltered? | You're right, this poset is both filtered and cofiltered. It's a rather degenerate example of a cofiltered poset, though, as it has a least element, or in categorical language an initial object. We are often interested in filtered colimits and cofiltered limits, but the limit of a diagram indexed by a category admitting an initial object is given trivially, by evaluation at that object. |
What are the possible dimensions of the kernel of T? | Here’s one suggestion (not rigorous but to give you some intuition):
The kernel is the set of vectors in the domain that are mapped to zero in the codomain. The dimension of the kernel can be thought of as the number of dimensions that get ‘squashed’ by the transformation. By ‘squashed’, I mean, for example, all of the vectors in a $3$-dimensional space being mapped to a $2$-dimensional plane. You can imagine a cube, or some other $3$-dimensional object, being squashed until it is flat.
We are mapping from a $5$-dimensional space to a $3$-dimensional space, so we are already forced to squash $2$ dimensions. Therefore the dimension of the kernel is at least $2$. If all of the vectors are mapped to zero by the transformation, then all $5$ dimensions of the domain will be squashed, meaning that the dimension of the kernel is at most $5$. So we have $2 \leq dim(Null(T)) \leq 5$.
If you want to use the rank-nullity theorem, we can instead consider the image of $T$. In the case where all vectors are mapped to zero, the image clearly has dimension zero. It is also clear that the dimension of the image can be at most $3$, which will be the case if the ‘output’ vectors occupy all of the space we are mapping to. So we have $0 \leq dim(Im(T)) \leq 3$ which, by the rank-nullity theorem ($dim(Im(T)) + dim(Null(T)) = 5$ in this case), implies the result above. |
Regarding Schwarz lemma | Apply the Schwarz Lemma to $f=G^{-1}\circ g$. |
planar graphs with minimum degree 4 | There are infinitely many such graphs. Have fun ;-) |
Why is $\mathbb{Q}[i]$ not a subgroup of $\mathbb{R}$? | $\mathbb{Q}(i)$ is not a subset of the set of real numbers, so it is not a subgroup. |
Show that $d_2$ defined by $d_2(x,y)=\frac{|x-y|}{1+|x-y|}$ is a metric | $d_2 (x,y)+d_2 (y,z)=\frac{d(x,y)}{1+d(x,y)}+\frac{d(y,z)}{1+d(y,z)}$
$ \geq
\frac{d(x,y)}{1+d(x,y)+d(y,z)}+ \frac{d(y,z)}{1+d(x,y)+d(y,z)} = \frac{d(x,y)+d(y,z)}{1+d(x,y)+d(y,z)}$
$= 1-\frac{1}{1+d(x,y)+d(y,z)} \geq 1-\frac{1}{1+d(x,z)}$ (using the triangle inequality $d(x,y)+d(y,z)\geq d(x,z)$)
$= \frac{d(x,z)}{1+d(x,z)}=d_2 (x,z)$ |
Model of cardinality $\lambda$ where every definable subset is either finite or has size $\lambda$ | $P$ is irrelevant in the statement, so let's forget about it.
What you wrote is almost right. If $\varphi(x)$ is a non-algebraic formula with one free variable, then it's consistent to add $\lambda$-many new constant symbols and say that they all satisfy $\varphi(x)$. But it's not necessarily consistent to add $\lambda$-many new constant symbols and say they all don't satisfy $\varphi(x)$, unless $\lnot \varphi(x)$ is also non-algebraic! So it's better to just handle $\varphi$ and $\lnot \varphi$ separately.
Now the details of the construction depend on whether "definable set" means definable with parameters or not. Let's assume we want to handle definable sets with parameters, since this is a little bit trickier.
Start with a model $M_0\models T$ of cardinality $\lambda$ (which exists by Löwenheim-Skolem). Let $T_{M_0}$ be the elementary diagram of $M_0$ (the complete theory of $M_0$ in the language $L_{M_0}$ with a constant symbol for every element of $M_0$ - note that this language has cardinality $\lambda$).
Make a list of all the non-algebraic formulas in one variable with parameters from $M_0$. There are $\lambda$-many of these. For each one, say $\varphi(x,\overline{a})$, introduce $\lambda$-many new constant symbols and add to $T_{M_0}$ the axiom $\varphi(c,\overline{a})$ for each new constant $c$, as well as the axioms $c\neq d$ for each pair of new constants $c$ and $d$. The resulting language still has cardinality $\lambda$, so by Löwenheim-Skolem, the resulting theory has a model $M_1$ of cardinality $\lambda$, and $M_0\preceq M_1$ since $M_1\models T_{M_0}$.
Now $M_1$ is a model of $T$ of cardinality $\lambda$ such that every set definable with parameters from $M_0$ is finite or has cardinality $\lambda$. But what about sets definable with parameters from $M_1$ which are not in $M_0$? To deal with these, we repeat the construction above, building an elementary chain $M_0\preceq M_1\preceq M_2\preceq \dots$. The union of this chain is a model of cardinality $\lambda$ such that every definable set with parameters is finite or has cardinality $\lambda$. Why? The finitely many parameters all appear in some $M_n$, so already the definable set has size $\lambda$ in $M_{n+1}$, and it can only grow when we take the union.
If you only care about definable sets without parameters, then there's no need for the elementary chain: you're already done after the first step. |
Using the Intermediate Value Theorem to prove the existence of a number$\;$ | I'm sorry to say that, but, yes, there was enough information given.
The symbol $\sqrt[3]{20}$ denotes by definition a number $w$ with the property that $w^3=20$. Therefore, it is natural to find $w$ by looking for a solution of the equation $$x^3=20.$$
In the context of the Intermediate Value Theorem (as mentioned in the problem statement), it is quite evident that we are supposed to view the left-hand side of this equation as a function (that we easily recognize as being continuous, just as needed for the theorem), and the constant on the right-hand side as the function value.
Recalling the statement of the IVT, the obvious task then is to find real numbers $a,b$ with $f(a)<20<f(b)$. And we do not even have to look far for these, as they are literally given in the problem statement as $2$ and $3$. |
Can a hermitian, rational polynomial have non-zero odd and real coefficients in the numerator/denominator? | For an easier typing, let me use $X$ instead of $\omega$. Saying that $X$ is real essentially amounts to working with polynomials (as opposed to polynomial functions) - if this line makes no sense to you, just ignore it.
We can always write
$$\chi \left( X \right) = \frac {\sum _n \left( c_n + {\rm i} c_n ^\dagger\right) X^n} {\sum _n \left( d_n + {\rm i} d_n ^\dagger \right) X^n}$$
as
$$\chi \left( X \right) = \frac {\left( \sum _n c_{2n} X^{2n} + {\rm i} \sum _n c_{2n+1} ^\dagger X^{2n+1} \right) + \left( {\rm i} \sum _n c_{2n} ^\dagger X^{2n} + \sum _n c_{2n+1} X^{2n+1} \right)} {\left( \sum _n d_{2n} X^{2n} + {\rm i} \sum _n d ^\dagger _{2n+1} X^{2n+1} \right) + \left( {\rm i} \sum _n d_{2n} ^\dagger X^{2n} + \sum _n d _{2n+1} X^{2n+1} \right)}.$$
Let us introduce the notations
$$P = \sum _n c_{2n} X^{2n} + {\rm i} \sum _n c_{2n+1} ^\dagger X^{2n+1} \\
R = {\rm i} \sum _n c_{2n} ^\dagger X^{2n} + \sum _n c_{2n+1} X^{2n+1} \\
Q = \sum _n d_{2n} X^{2n} + {\rm i} \sum _n d ^\dagger _{2n+1} X^{2n+1} \\
S = {\rm i} \sum _n d_{2n} ^\dagger X^{2n} + \sum _n d _{2n+1} X^{2n+1}, $$
so that
$$\chi = \frac {P + R} {Q + S} .$$
Note that, given how we have grouped terms, we have
$$P(-X) = P^* (X), \quad Q(-X) = Q^* (X), \quad R(-X) = -R^* (X), \quad S(-X) = -S^* (X) .$$
Using the above relations,
$$\frac {P(-X) + R(-X)} {Q(-X) + S(-X)} = \chi (-X) = \chi^* (X) = \frac {P^* (X) + R^* (X)} {Q^* (X) + S^* (X)} = \frac {P(-X) - R(-X)} {Q(-X) - S(-X)} ,$$
whence, after cross-multiplying, cancelling the identical terms, grouping the remaining ones and changing $-X$ into $X$, we get
$$SP = QR .$$
If $P = 0$, then $QR = 0$.
1.1. If $Q = 0$, then
$$\chi = \frac R S = \frac {{\rm i} R} {{\rm i} S}$$
and the numerator and denominator have exactly the required form.
1.2. If $Q \ne 0$, then $R = 0$, so $\chi = 0$, which again has the required form (trivially).
If $P \ne 0$, then
$$ S = \frac {QR} P .$$
2.1. If $P + R = 0$, then $\chi = 0$ has the required form.
2.2. If $P + R \ne 0$, then
$$\chi = \frac {P + R} {Q + \frac {QR} P} = \frac {P (P + R)} {Q (P + R)} = \frac P Q$$
which again has the required form. |
Definite Gamma function integral | A good approximation for the proposed integral is
$$ I(a) = \int_a^{a+1}\,\Gamma(x) dx \sim \frac{a-1-\epsilon}{\log{(a-\epsilon)}}\,\Gamma(a) \, , \, \epsilon \approx 0.12 $$
The form with $\epsilon=0$ can be shown to be a first order approximation to that obtained with the saddle point strategy, and the given $\epsilon=0.12$ is an empirical constant that gives agreement of about 3 digits for all $a>2$ and not just in the limit $a \to \infty$.
By differentiation with respect to $a$ (I actually did it first through operator methods) one can see that
$$ I(a) = \int_0^{\infty}\,e^{-t}t^{a-1}\big(\frac{t-1}{\log{t}}\big) dt \,. $$
The saddle point strategy, to first order, says to put the $e^{-t}t^{a}$ into an exponential $\exp(-h(t)) = \exp(-(t-a\log{t})).$ Differentiate $h(t)$ with respect to $t$ and set the expression to zero to find the saddle point $t=a.$ The 'slowly varying' function $g(t),$ with $g(t)=(t-1)/\log(t)$ for our case, is pulled through the integral sign and evaluated at the saddle point. Instead of going through the saddle point machinery (which, for $g(t) = 1$, many will recognize as the manner in which to obtain the Stirling approximation to the $\Gamma$ function), we stop. We realize that what we have left after pulling the 'slowly varying' function through IS the gamma function. |
Time Average Mean of X(t)=A, where A is a r.v. Ergodic vs. non-ergodic. | I guess when you calculate the time average integral, you are supposed to treat r.v.'s as if they were constants:
$$\bar{x} = \lim \limits_{T \to \infty} \frac{1}{2T} \int \limits_{-T}^{T} x(t)~dt$$
$$\bar{x} = \lim \limits_{T \to \infty} \frac{1}{2T} \int \limits_{-T}^{T} A~dt$$
$$\bar{x} = A ~\lim \limits_{T \to \infty} \frac{1}{2T} \int \limits_{-T}^{T} 1~dt$$
$$\bar{x} = A ~\lim \limits_{T \to \infty} \frac{1}{2T} \bigg[t\bigg]_{-T}^{T} $$
$$\bar{x} = A ~\lim \limits_{T \to \infty} \frac{1}{2T} \bigg[T-(-T)\bigg]$$
$$\bar{x} = A ~\lim \limits_{T \to \infty} \frac{2T}{2T}$$
$$\bar{x} = A ~\lim \limits_{T \to \infty} 1$$
$$\bar{x} = A$$ |
Inequality on pairs of projections in Kato's book | Initially, Kato assumes that $P$ and $Q$ are orthogonal projectors satisfying
$\,\|(1-Q)P\|<1$. He then proves
$$\|(1-P)\,Q_0\|\:\leqslant\:\|(1-Q)P\,\|\tag{6.55}$$
where $Q_0$ is the orthogonal projector onto
$N_0\!:=\,\operatorname{Im}\left(Q|_{\operatorname{Im}P}\right) \,=\,\operatorname{Im}(QP)\,$.
$(6.55)$ shows that $Q_0$ and $P$ also satisfy the initial assumptions, hence
$$\|(1-Q_0)\,Z\|\:\leqslant\:\|(1-P)\,Q_0\|\tag{see page 58, line 3}$$
follows, this time with $Z$ being the orthogonal projector onto
$$\operatorname{Im}\left(P|_{\operatorname{Im}Q_0}\right) \,=\,\operatorname{Im}(PQ_0)\,.$$
But in fact one has $Z=P\,$ because $\operatorname{Im}(PQ_0)=\operatorname{Im}(P)\,$. The preceding equality follows from section I.4.6 in Kato's book, which applies to the pair $P,Q_0\,$ as these are close by since $\|P-Q_0\|<1$ holds by $(6.58)$. |
How to solve this indefinite integral using integral substitution? | HINT:
Make the substitution $x=2\sec^2(\theta)$ and arrive at
$$4\int \sec^3(\theta)\,d\theta$$ |
$6p|\:a+b+c\ $ then $:6p|\:a^p+b^p+c^p$ | Assuming $p$ is an odd prime greater than $3$, you have, directly by FlT, that $a^p+b^p+c^p \equiv a+b+c \equiv 0 (\bmod{p}).$ Now you just have to show that $6\mid a^p+b^p+c^p$. Since $2\mid a+b+c$, then either $0$ or $2$ of $a, b, c$ are odd. Therefore either $0$ or $2$ of $a^p, b^p, c^p$ are odd, so $2 \mid a^p+b^p+c^p.$ Likewise, every integer is congruent to $-1, 0$ or $1$ modulo $3$, so raising it to an odd power doesn't change the congruence class. So if $3\mid a+b+c$ it also divides $a^p+b^p+c^p$. |
For which values $p$ does $\int_0^\infty x\sin(x^p) dx $ converge? | By the Laplace transform
$$\lim_{M\to +\infty}\int_{0}^{M}z^{\frac{2}{p}-1}\sin(z)\,dz =\frac{1}{\Gamma\left(1-\frac{2}{p}\right)}\int_{0}^{+\infty}\frac{ds}{s^{2/p}(1+s^2)} $$
is a convergent integral for any $p>2$. |
Conditional probability with students seating | I don't think that there is a $\frac{1}{3}$ chance that the octet will be at the beginning or end. Wouldn't it be $\frac{2}{7}$? That would make the probability $\frac{2}{7}\cdot\frac{1}{6}+\frac{5}{7}\cdot\frac{2}{6}=\frac{2}{7}$
Alternately, you could think of the octet as a single entity. There are $7!$ ways the group can be arranged. Now lump the ninth friend in with the octet and there are now $6!\cdot2!$ ways this can be arranged. $\frac{6!2!}{7!}=\frac{2}{7}$ |
Dimension of the set of mxn complex component matrices over real numbers | Yes, the reasoning is correct. It would be $2mn$ ($mn$ for the real part of each position and $mn$ for the imaginary part of each position). |
Unique general solution for linear PDE with method of characteristic | There are more solutions, which a different formulation of the "method of characteristics" may have found. Any solution to this linear transport PDE has to be constant along characteristics, but this does not reduce to $u=f(x^2-y^2)$.
Let $u$ be any solution. The characteristic curves (which can be defined independently of $u$) solve
$$ \partial_s \binom{X}{Y} = \binom{Y}{X},$$
And by chain rule
$$ \partial_s \big( u(X,Y) \big) = u_x(X,Y)X_s + u_y(X,Y) Y_s = Y u_x(X,Y) + Xu_y(X,Y) = 0. $$
Having a look at a plot of the characteristics, which are (as you correctly said) the level sets of $x^2 - y^2$, we see that the characteristic curves $x^2 - y^2 = c$ cover the whole of $\mathbb R^2$, so that every value of $u$ is determined as long as we give enough initial data. But this plot also shows us another thing: every level set except for $x^2 - y^2 = 0$ is made up of a pair of connected components, and when you prescribe $u(x,y) = f(x^2-y^2)$, you are forcing $u$ to take the same value on each such pair of components. This is not necessary.
One way to obtain all possible solutions is to prescribe the values of $u$ on both the $x$- and $y$-axes, with a compatibility condition at the characteristic point $(x,y)=(0,0)$:
$$ u(x,0) = f_1(x), \\ u(0,y) = f_2(y), \\ f_1(0) = f_2(0).$$ |
Simplex method - infeasible basic variables | If your pivot operation took any of the basis variables negative, you miscalculated the distance you could go in the pivot operation. Did you compare all values of the pivot row ratios and choose the most restrictive constraint to dictate the value of the newly introduced basis element? |
Evaluate $g^{(51)}(0)$ of $g(x)=\int_{1}^{x^2}\left(\cos(t)-1\right)^6t^8\,dt$ | Here is a discussion which shows at what derivative orders there will be a nonzero $g^{(n)}(x=0)$.
Let $f(t)=\left(\cos(t)-1\right)^6t^8 $ and $F(t)$ its antiderivative. Then $g(x) = F(x^2) - F(1)$, $g'(x) = 2 x F'(x^2) = 2 x f(x^2)$. Going further,
$$g''(x) = 2 f(x^2) + (2 x)^2 f'(x^2)\\
g'''(x) = 12 x f'(x^2)+ (2 x)^3 f''(x^2)\\
g^{(4)}(x) = 12 f'(x^2) + 24 x^2 (f'(x^2) +f''(x^2)) + (2 x)^4 f^{(3)}(x^2)\; .\\
$$
I.e. the only terms which could possibly contribute to $g^{(n)}(x=0) $ are the leading derivatives of $f(t)$. Let's look at those.
Notice $f(x^2=0) = 0$, $f'(x^2=0) = 0$ etc. We have to look for derivatives of $f$ which are not zero at $x=0$.
Let $f(t)=a(t) \cdot b(t) = \left(\cos(t)-1\right)^6 \cdot t^8 $. Then by repeated application of the product rule,
$$
f^{(n)}(t)= \sum_{k=0}^n {\binom{n}{k}} a^{(n-k)}(t)b^{(k)}(t)
$$
As $b(t) = t^8$, the only nonzero value of $b^{(k)}(0)$ appears at $k =8$ where $b^{(k)}(0) = 8!$. We have $a(t) = \left(\cos(t)-1\right)^6 = 2^6 \sin^{12}(t/2)$. That means we need to take at least the twelfth derivative of $a(t)$ before we get $a(0) \ne 0$. So for $n \ge 20$, we have
$$
f^{(n)}(t)= 8! {\binom{n}{8}} a^{(n-8)}(t)
$$
The $m$th derivative $a^{(m)}(t=0)$ will be determined, by repeated application of the chain rule, only by the one term proportional to $\cos^{12}(t/2)$, since all other terms have factors $\sin^k(t/2)$ which make them zero at $t =0$. Since we need to generate this term by repeated differentiation of $a(t) = 2^6 \sin^{12}(t/2)$, we have that all $a^{(m)}(t=0)$ vanish for odd $m$, since then no such isolated term $\cos^{12}(t/2)$ will occur. As $m = n-8$, we can state that for odd $n$, $f^{(n)}(t=0)= 0$.
Now let's return to the derivatives of $g$ which we started to evaluate at the top. It is clear that for all odd $n$, $g^{(n)}(x=0) = 0$ since all terms will have factors of $x$. For even $n$, $g^{(n)}(x)$ will have a leading term with $f^{(-1+n/2)}(x^2)$. As we have seen that these factors become nonzero only at the 20th, 22nd, 24th derivative etc., we finally obtain that $g^{(n)}(x=0) \ne 0$ holds for all $n = 42 + 4\, k$ with $k \in \mathbb{N}_0$.
For obtaining the value of $g^{(n)}(x=0)$, let's look at the first nonzero case at $n=42$. From the discussion of the first few derivatives of $g(x)$ above, we have that $g^{(42)}(x=0)$ is determined only by $g^{(42)}(x=0) = c_{42} \cdot f^{(20)}(t=0)$. Now $f^{(20)}(t=0) = 8! {\binom{20}{8}} a^{(12)}(t=0) = 8! {\binom{20}{8}} 2^{-6} \cdot 12!$.
The factor $ c_{42}$ is more difficult. We have to sum up the number of all ways to arrive from $g'(x) = 2 x f(x^2)$ at $g^{(42)}(x) = c_{42} \cdot f^{(20)}(x^2) + \cdots$. The following thought experiment helps in determining the factor. Set $f(z) = z^{20}$. Then $f^{(20)}(z) = 20! $ and $f^{(n)}(z) = 0 $ for $n \ge 21$. So we have $g^{(42)}(x=0) = c_{42} \cdot f^{(20)}(z = x^2=0) + \cdots = 20! c_{42} $. On the other hand, with this setting we have $g'(x) = 2 x^{41}$ and hence $g^{(42)}(x) = 2 \cdot 41!$. Hence $c_{42} = 2 \frac{41!}{20!}$.
Compare that by the same method, $c_{2} = 2 \frac{1!}{0!} = 2$ and $c_{4} = 2 \frac{3!}{1!} = 12$ as we have already established.
In total, we have $g^{(42)}(x=0) = 2 \frac{41!}{20!}\, 8! {\binom{20}{8}}\, 2^{-6} \cdot 12! = 2^{-5}\, 41! \simeq 1.0454\cdot 10^{48}$.
I have checked that this result is indeed true by direct computation of $g^{(42)}(x=0)$ in Matlab. |
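For anyone who wants to reproduce the check without Matlab, here is a Python/sympy sketch using the Taylor-coefficient route: $g^{(42)}(0)=42!\,[x^{42}]g(x)$, and since $g'(x)=2xf(x^2)$, the coefficient $[x^{42}]g(x)$ equals $c_{20}/21$ where $c_{20}=[t^{20}]f(t)$:

```python
from sympy import symbols, cos, factorial

t = symbols('t')
f = (cos(t) - 1)**6 * t**8

# c20 = [t^20] f(t); the Taylor series of f starts at t^20
c20 = f.series(t, 0, 21).removeO().coeff(t, 20)

# g'(x) = 2x f(x^2)  =>  [x^42] g(x) = c20 / 21, so g^(42)(0) = 42! * c20 / 21
g42 = factorial(42) * c20 / 21
print(g42, g42 == factorial(41) / 2**5)   # prints 41!/32 and True
```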
Use infinite series to evaluate $\lim_{x \rightarrow \infty} (x^3 - 5x^2 + 1)^{\frac{1}{3}} - x$ | Let $1/x=h$ to find
$$\lim_{h\to0^+}\dfrac{(1-5h+h^3)^{1/3}-1^{1/3}}h$$
Now rationalize the numerator using $$a^3-b^3=(a-b)(a^2+ab+b^2)$$ |
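Either route gives the limit $-\tfrac{5}{3}$: rationalizing as above, or expanding $(1-5h+h^3)^{1/3}=1-\tfrac{5}{3}h+O(h^2)$ with the binomial series. A one-line sympy sanity check:

```python
from sympy import symbols, limit, oo, cbrt

x = symbols('x', positive=True)
print(limit(cbrt(x**3 - 5*x**2 + 1) - x, x, oo))   # -5/3
```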
Motivation Of Correlation Coefficient Formula | Suppose we have a scatterplot of heights $X$ and weights $Y$ of $n$ subjects.
The 'center of the data cloud' is at the point $(\bar X,\,\bar Y)$.
One might expect a positive association between heights and weights.
Points above and to the right of the center make a positive
contribution to the sum $\sum (X_i -\bar X)(Y_i - \bar Y).$
So also do points below and to the left of center.
Points that might suggest a negative association will be above and
to the left of center or below and to the right of center. For them,
the product $(X_i -\bar X)(Y_i - \bar Y)$ will have a negative
and a positive factor, thus a negative product. So such points
will make a negative contribution to $\sum (X_i -\bar X)(Y_i - \bar Y).$
The denominator is essentially the product of the numerators of the
standard deviations of $X$ and $Y$. The effect of the denominator is to
make $r$ a quantity without units. In the US system of measurements
the numerator has units 'foot-pounds', and the denominator has the
same units, so $r$ has no units. If the subjects were weighed and measured
in the metric system, the correlation of their weights and heights would
be numerically the same as if they were weighed and measured in the US
system.
Also, inclusion of the denominator scales correlations $r$ so that
they lie between $-1$ and $+1,$ where $r = 1$ means the points
perfectly fit an upward sloping line (regardless of the numerical
value of the slope, which has units), and $r = -1$ means the points
perfectly fit a downward sloping line.
If either the SD of the X's or the SD of the Y's is 0, then the points
lie on either a vertical line or a horizontal line, respectively. In
either case the denominator of $r$ would be $0$ and the correlation
is not defined.
In the plot below, there is a strong linear component to the positive
association of $X$ and $Y$: $r = 0.968.$ The horizontal and vertical grid
lines cross at $(\bar X, \bar Y).$ Each of the dark green points makes a positive
contribution to the numerator of $r,$ as discussed above. Also, the
two red points make (slight) negative contributions to the numerator of $r.$ |
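If it helps to see the pieces of the formula, here is a small numpy sketch (made-up heights and weights) computing $r$ from the cross-product sum and checking it against the built-in routine:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(68, 3, size=50)             # hypothetical heights
y = 4 * x + rng.normal(0, 8, size=50)      # hypothetical weights, positively related

dx, dy = x - x.mean(), y - y.mean()
# numerator: sum of cross-products; each term is positive for points
# up-right or down-left of the center, negative for the other two quadrants
r = np.sum(dx * dy) / np.sqrt(np.sum(dx**2) * np.sum(dy**2))

print(r, np.corrcoef(x, y)[0, 1])          # the two values agree
```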
Riemann zeta function and Bernoulli function | We have that, if $0<x\leq1$, the Bernoulli polynomial can be written as $$B_{2n+1}\left(x\right)=\left(-1\right)^{n+1}\frac{2\left(2n+1\right)!}{\left(2\pi\right)^{2n+1}}\sum_{k\geq1}\frac{\sin\left(2\pi kx\right)}{k^{2n+1}}.$$ You can find a proof in Apostol, "Introduction to analytic number theory", page 267. It remains to note that, if $k$ is a positive integer, $$\int_{0}^{1}\sin\left(2\pi kx\right)\cot\left(\pi x\right)dx=1.$$ |
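The last identity is easy to check numerically (the integrand stays bounded near $0$ and $1$, since $\sin(2\pi kx)$ vanishes exactly where $\cot(\pi x)$ blows up); a quick scipy sketch:

```python
import numpy as np
from scipy.integrate import quad

for k in (1, 2, 5):
    # integrand: sin(2*pi*k*x) * cot(pi*x)
    val, err = quad(lambda x, k=k: np.sin(2*np.pi*k*x) / np.tan(np.pi*x), 0, 1)
    print(k, val)   # each value is 1 up to quadrature error
```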
Testing Function Convexity | Let $\lambda, \mu \in (0,1)$ be such that $\lambda + \mu = 1$ and $x = (x_1, \ldots, x_n), y= (y_1, \ldots, y_n) \in \mathbb{R}^n$. We wish to show that $$f(\lambda x + \mu y) \le \lambda f(x) + \mu f(y)$$
We shall use Hölder's inequality for conjugate exponents $\frac1{\lambda}$ and $\frac{1}\mu$. We have:
\begin{align}
f(\lambda x + \mu y) &= \ln \sum_{i=1}^n e^{\lambda x_i + \mu y_i} \\
&= \ln \sum_{i=1}^n e^{\lambda x_i}e^{\mu y_i}\\
&\le \ln\, \left[\left(\sum_{i=1}^n e^{\lambda x_i\cdot \frac1\lambda}\right)^{\lambda}\left(\sum_{i=1}^n e^{\mu y_i\cdot \frac1\mu}\right)^{\mu}\right]\\
&= \lambda \ln \sum_{i=1}^n e^{x_i} + \mu \ln \sum_{i=1}^n e^{y_i}\\
&= \lambda f(x) + \mu f(y)
\end{align}
Hence, $f$ is convex on $\mathbb{R}^n$. |
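A quick numerical sanity check of the inequality on random data (a sketch):

```python
import numpy as np

def f(v):
    # log-sum-exp
    return np.log(np.sum(np.exp(v)))

rng = np.random.default_rng(1)
for _ in range(1000):
    x, y = rng.normal(size=5), rng.normal(size=5)
    lam = rng.uniform()
    assert f(lam*x + (1 - lam)*y) <= lam*f(x) + (1 - lam)*f(y) + 1e-12
print("convexity inequality held on all samples")
```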
Find intersection of several intervals mathematically | All the numbers $1 + 60k$, $k \in \mathbb{N}$ |
Real eigenvalues of $B = P^T A P$ with orthogonal and tall $P$ | This is not true. E.g. $A=\pmatrix{0&-1&1\\ 1&0&2\\ 1&1&1}$ has a real spectrum $\{-1,0,2\}$ but
$$
B=\pmatrix{1&0&0\\ 0&1&0} \pmatrix{0&-1&1\\ 1&0&2\\ 1&1&1}\pmatrix{1&0\\ 0&1\\ 0&0}
=\pmatrix{0&-1\\ 1&0}
$$
has only non-real eigenvalues. |
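A quick numpy check of this counterexample:

```python
import numpy as np

A = np.array([[0., -1., 1.],
              [1.,  0., 2.],
              [1.,  1., 1.]])
P = np.array([[1., 0.],
              [0., 1.],
              [0., 0.]])               # orthonormal (but not square) columns

print(np.linalg.eigvals(A))            # approximately -1, 0, 2 -- all real
print(np.linalg.eigvals(P.T @ A @ P))  # 1j and -1j -- not real
```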
Notation: $L_p$ vs $\ell_p$ | $\ell^p$ spaces are a special case of $L^p$ spaces.
If $(X,\mu)$ is a measure space, $L^p(X)$ (or $L^p_{\mathbb{R}}(X)$) is the (Banach) space of all measurable functions
$f\colon X\to \mathbb{R}$ such that $$\int_X |f|^p\,d\mu\lt \infty.$$
In the special case in which $X=\mathbb{N}$ and $\mu$ is the counting measure, functions $f\colon\mathbb{N}\to\mathbb{R}$ can be taken to be sequences of elements of $\mathbb{R}$, and the integral is the sum of the terms of the sequence. That is, $L^p(\mathbb{N})$ is the set of sequences $(x_i)$ such that $\sum |x_i|^p\lt\infty$. To denote this special case, which occurs very often, we use $\ell^p$.
(You can replace $\mathbb{R}$ with any normed vector space, replacing the absolute value with the norm.) |
Interest of some Dirichlet characters | Quadratic Dirichlet characters arise naturally for reciprocity in number theory, i.e., for Gauss sums and Gauss's quadratic reciprocity law, but also in $L$-functions and many other topics. As an example, the proof of Dirichlet's theorem on primes in arithmetic progressions uses such $L$-functions. A detailed and explicit account is given in Tom Apostol's book "Introduction to analytic number theory". |
Complement of the difference of the sets | Using the property $A \backslash B = A \cap B^c$ and De Morgan's laws, you get the following chain of equalities:
$$
(A \backslash B)^c = (A \cap B^c)^c = A^c \cup (B^c)^c = A^c \cup B
$$ |
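For a concrete sanity check with finite sets (a small Python sketch, with an arbitrary universe):

```python
U = set(range(10))                 # universe
A, B = {1, 2, 3, 4}, {3, 4, 5}

lhs = U - (A - B)                  # (A \ B)^c
rhs = (U - A) | B                  # A^c ∪ B
print(lhs == rhs)                  # True
```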
Sequence of 1's and 2's | As @lulu mentioned in a comment, the Thue-Morse sequence is an infinite sequence with this property.
We can prove by strong induction on $n$ that no sequence of length $n\ge 1$ occurs triply repeated:
First, assume that $n$ is odd. A basic property of the Thue-Morse sequence is that repeated single symbols only occur at odd positions -- that is, $a_{2n-1}=a_{2n}$ is possible, but $a_{2n}=a_{2n+1}$ is not. So if an odd-length sequence appears even two times in succession, it cannot have any neighboring symbols equal, because one of the copies of it would appear at a position of wrong parity. So the repeated sequence must have a strict alternation between 1s and 2s. But this means it begins and ends with the same symbols, so if there are three copies next to each other, there will be a repeated symbol at each of the two joins between copies. But these repetitions will also have different parity, which is again impossible.
On the other hand, suppose $n$ is even. Then taking just those symbols that are at even indices in the original Thue-Morse sequence will create a sequence of length $n/2$ that appears at half the index in the T-M sequence. (This is because the sequence of even-indexed symbols of the T-M sequence is the same as the sequence itself). But since $n/2<n$, the induction hypothesis says that this is impossible. |
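A short brute-force sketch that generates a prefix of the Thue-Morse sequence (using the standard 0/1 alphabet; relabel to 1/2 if you like) and confirms that no block occurs three times in a row:

```python
def thue_morse(n):
    # t[i] = parity of the number of 1-bits in the binary expansion of i
    return [bin(i).count("1") % 2 for i in range(n)]

t = thue_morse(2000)
cube_free = all(
    t[i:i+L] != t[i+L:i+2*L] or t[i+L:i+2*L] != t[i+2*L:i+3*L]
    for L in range(1, len(t) // 3 + 1)
    for i in range(len(t) - 3*L + 1)
)
print(cube_free)   # True: no triple repetition of any block
```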
Determine if a sequence of independent rv's satisfies the Lindeberg condition | For $(a.)$: $\sum_{k=1}^nσ^2_k$ is a geometric sum, so $$\sum_{k=1}^n 2^{-k}=-1+\sum_{k=0}^n2^{-k}=-1+\frac{1-2^{-n-1}}{1-\frac12}=1-2^{-n}$$ Since, $σ^2_k=2^{-k}$ is a decreasing sequence, you have that $$\lim_{n\to\infty}\frac{\max_{1\le k\le n}σ^2_k}{\sum_{k=1}^nσ^2_k}=\lim_{n\to\infty}\frac{σ^2_1}{1-2^{-n}}=\frac{\frac12}{1-0}=\frac12\neq 0$$
For $(b.)$: I do not agree with $σ^2_1=1$. I think that $σ^2_k=1-2^{-k}+2^k$ for any $k\ge 1$ (but anyway this does not affect the solution). Again, using the geometric sum
$$\sum_{k=1}^nσ_k^2=\sum_{k=1}^n(1+2^k)-\sum_{k=1}^n2^{-k}=n+(2^{n+1}-2)-(1-2^{-n})=n+2^{n+1}-3+2^{-n}$$ Hence, since $σ^2_k$ is increasing, you have that $$\lim_{n\to\infty}\frac{\max_{1\le k\le n}σ^2_k}{\sum_{k=1}^nσ^2_k}=\lim_{n\to\infty}\frac{2^n+1-2^{-n}}{n+2^{n+1}-3+2^{-n}}=\lim_{n\to\infty}\frac{1+2^{-n}-2^{-2n}}{n2^{-n}+2-3\cdot2^{-n}+2^{-2n}}=\frac12\neq 0$$ So, if I did not miscalculate, neither sequence of rv's satisfies the condition of asymptotically negligible variances. |
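A numerical sanity check of both limits (a sketch; $2^k$ overflows double precision beyond $k\approx 1000$, so $n$ is kept moderate):

```python
import numpy as np

for label, var in (("(a)", lambda k: 2.0**-k),
                   ("(b)", lambda k: 1 - 2.0**-k + 2.0**k)):
    for n in (10, 100, 500):
        v = var(np.arange(1, n + 1))
        print(label, n, v.max() / v.sum())   # tends to 1/2 in both cases
```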
limit set of a bounded solution | Indeed, since $x$ is bounded, for any sequence $t_n\to\infty$ the sequence $x(t_n)$ contains a convergent subsequence (Bolzano-Weierstrass); its limit belongs to the limit set of $x$. |
write $A^{6}$ as a Linear combination of $I , A , A^{2}$ | Hint: Use the Cayley-Hamilton Theorem to write $A^{3}$ as $aI+bA+cA^{2}$. Keep multiplying by $A$ to see that any power of $A$ is a linear combination of $I,A$ and $A^{2}$. For $A^{6}$ it is not difficult to compute the coefficients. |
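A sketch of how the reduction goes in practice (the matrix $A$ from the question is not shown here, so an arbitrary $3\times3$ matrix stands in; sympy's polynomial remainder does the repeated-multiplication bookkeeping, since $x^6 \bmod p_A(x)$ has degree at most $2$):

```python
from sympy import Matrix, symbols, rem, eye

x = symbols('x')
A = Matrix([[2, 1, 0],
            [0, 1, -1],
            [1, 0, 3]])                    # stand-in matrix

p = A.charpoly(x).as_expr()                # Cayley-Hamilton: p(A) = 0
r = rem(x**6, p, x)                        # x^6 mod p(x), degree <= 2
c0, c1, c2 = [r.coeff(x, k) for k in range(3)]

assert A**6 == c0*eye(3) + c1*A + c2*A**2  # A^6 as a combination of I, A, A^2
print(c0, c1, c2)
```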
Relationship between existence and computable function | This is false in general: let $K$ be the halting set (i.e. $K=\{x:\varphi_x(x)~ \mathrm{halts}\}$) and let $A=\{1\}$. Clearly $A$ is recursive. Define $g$ as $g(x,n)=1$ if $\varphi_x(x)$ halts within $n$ steps and $0$ otherwise. We have
$$ \forall x \quad x\in K \iff (\exists n)( g(x,n)\in A) $$
But $K\not\le_m A$. In particular there is no computable $f$ that chooses an $n$ s.t. $x\in K \iff g(x,n)\in A$ (that would be equivalent to computably producing the number of steps a machine needs to halt). |
Show that this subset of $L^r$ is closed | Basically you look at a sequence $\{g_n\}$ in $K_r$. Suppose $g_n \to g$ in $L^r$.
Then, passing to a subsequence along which $g_n \to g$ $\lambda$-a.e., we have $\int_{p>0} \frac{g^r}{p^{r-1}} d \lambda \le \liminf\int_{p>0} \frac{g_n^r}{p^{r-1}} d \lambda < \infty$ by Fatou's lemma. We used the fact that $g_n \in L^r$ to ensure that the right-hand side is finite.
Next we prove $\int g d\lambda = 0$.
$g$ is the $L^r$ limit of a sequence of nonnegative functions and is thus nonnegative $\lambda$-a.e. Hence $\int g \,d\lambda \geq 0$.
Also, by Fatou's lemma, $\int g \,d\lambda \leq \liminf \int g_n \,d\lambda = 0$.
Therefore $\int g \,d\lambda = 0$.
This is enough to prove that $g \in K_r$ i.e. $K_r$ is closed. |
Probability on drawing colored balls | The question asks for $$\text {the probability of selecting a red marble at any given draw},$$
not $$\text {the probability of selecting a red marble in at least one of the $n$ draws}.$$
The first time you draw a marble, the probability is:
$${\text{Red Marbles}\over\text{Total marbles}}={\frac{r}{r+g}}$$
The second time you draw a marble, the probability is:
$${\frac{r}{r+g}}*{\frac{r+c}{r+g+c}}+{\frac{g}{r+g}}*{\frac{r}{r+g+c}}={\frac{r}{r+g}}$$
And so on...
As you can see, the probability does not depend on $n$ or $c$. |
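A quick simulation sketch (with arbitrary values $r=3$, $g=5$, $c=2$) confirming that the probability of red stays at $r/(r+g)$ on every draw:

```python
import random

def red_frequency_per_draw(r, g, c, draws=4, trials=100_000):
    red_counts = [0] * draws
    for _ in range(trials):
        nr, ng = r, g
        for d in range(draws):
            if random.random() < nr / (nr + ng):   # red drawn
                red_counts[d] += 1
                nr += c                            # add c of the drawn color
            else:
                ng += c
    return [count / trials for count in red_counts]

print(red_frequency_per_draw(3, 5, 2))   # every entry close to 3/8 = 0.375
```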
why does this inequality hold with expectations of supremums | Hint: For every collection $\Xi$ of integrable random variables such that $\eta=\sup\{\xi\mid\xi\in\Xi\}$ is integrable, one has $\xi\leqslant\eta$ for every $\xi$ in $\Xi$, hence $E[\xi]\leqslant E[\eta]$ for every $\xi$ in $\Xi$, which is equivalent to
$$\sup\limits_{\xi\in\Xi}E[\xi]\leqslant E[\eta]=E\left[\sup\limits_{\xi\in\Xi}\xi\right].$$
Can you adapt this idea to your setting? |
Probability of placing two books and another two books together on a shelf | Assuming that all $5$ books are distinguishable, there are $5!$ ways to order them on the shelf, so that will be the denominator of your probabilities. You are correct about the second problem: there are $4!$ ways to order the $4$ red and blue books, and then just $1$ way to shove the green book into the middle position.
For the first question you can start by listing the ways in which the blue books can be together and the red books can be together: $BBRRG$, $BBGRR$, $GBBRR$, $RRBBG$, $RRGBB$, and $GRRBB$. Then you can start thinking about how many different possibilities there are for the red and blue books in each of these general arrangements.
Or you can be a bit more analytical and notice that this can only happen when the green book is at one end or in the middle. Thus, there are $3$ possible locations for the green book. Once it has been placed, we have to decide which of the pairs of blue and red books will come first; that can be done in $2$ ways. Finally, we have to decide which of the red books will come first in its pair and which of the blue books will come first in its pair; each of these is a two-way choice. Can you put the pieces together now to finish it off? |
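If you want to check your count afterwards, brute-force enumeration over all $5!$ shelf orders is cheap (a sketch, labelling the books $R_1,R_2,B_1,B_2,G$):

```python
from itertools import permutations

books = ["R1", "R2", "B1", "B2", "G"]

def pair_together(order, colour):
    # positions of the two books of the given colour must be adjacent
    i, j = (k for k, b in enumerate(order) if b.startswith(colour))
    return j - i == 1

favorable = sum(pair_together(p, "R") and pair_together(p, "B")
                for p in permutations(books))
print(favorable, favorable / 120)   # 24 arrangements, probability 1/5
```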
normal subgroup and index of subgroup | Let me try.
Since $N$ is normal, $HN$ is subgroup of $G$.
Now, you have $x = [N: H\cap N] \leq [G:H]$, so $x$ is finite.
Moreover, you have $x = [N: H\cap N] = [HN : H]$.
On the other hand, $([G:H],|N|) = 1$, i.e. $([G:HN][HN:H],\,[N: H \cap N]\,|H \cap N|) = 1$, so you have $[HN:H] = [N : H\cap N] = 1$; then $N = H\cap N$, i.e. $N \leq H$. |
Smallest trace of a matrix product where one is given and the other is orthogonal | Up to a change of orthonormal basis, we may assume that $\Sigma=diag(\lambda_i)$ where $\lambda_i>0$.
If $V=[v_{i,j}]$ is orthogonal, then its columns are unit vectors, so $|v_{i,i}|\leq 1$ and
$tr(V\Sigma)=\sum_i v_{i,i}\lambda_i\geq -\sum_i \lambda_i=tr((-I)\Sigma)$.
Conclusion. The $\min$ is reached for $V=-I$ and its value is $-tr(\Sigma)$. |
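A quick random check with numpy (sampling orthogonal matrices via QR, a sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = np.diag([3.0, 2.0, 0.5])          # example positive diagonal

traces = []
for _ in range(10_000):
    q, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # random orthogonal matrix
    traces.append(np.trace(q @ sigma))

print(min(traces) >= -np.trace(sigma) - 1e-9)          # True: nothing beats -tr(Sigma)
print(np.trace(-np.eye(3) @ sigma), -np.trace(sigma))  # both equal -5.5
```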
Proving the divergence of a limit | Note that for $n > 9$, $n > 3\sqrt{n}$. So if $M > 0$, choose $N(M) > \max\{9,M^2\}$. Then for $n \ge N$, $|n - 2\sqrt{n}| = n - 2\sqrt{n} > \sqrt{n} \ge \sqrt{N} > M$. |
Differential geometry problem about curves | The curvature vector is defined to be the derivative of the unit tangent vector. Let us say that $\gamma(s)$ is the arc length parametrization of your curve. That is to say: $$F(\gamma(s))=0$$ $$G(\gamma(s))=0.$$ To find the tangent vector, you differentiate and obtain: $$\nabla F(\gamma)\cdot\gamma'=0$$ $$\nabla G (\gamma)\cdot \gamma'=0.$$
To get the answer to the second question you need the direction of $\gamma''$ (having supposed $\|\gamma'\|=1$). So you differentiate once more and get: $$\mathcal{H}F(\gamma)[\gamma',\gamma']+ \nabla F(\gamma)\cdot\gamma''=0$$ $$\mathcal{H}G(\gamma)[\gamma',\gamma']+ \nabla G(\gamma)\cdot\gamma''=0.$$
where $\mathcal{H}F$ is the Hessian matrix and the square brackets denote the evaluation of the associated bilinear form.
Once you solve it you will get the answer. This will answer to your third question too. |
Condition of invertible diagonal operator on Hilbert space | $u(e_n)=\lambda_ne_n$. So, if $u$ is invertible, then $\lambda_n \neq 0$, $u^{-1}(e_n)=\frac 1 {\lambda_n}e_n$ and $\frac 1 {|\lambda_n|} \leq \|u^{-1}\|$ proving that $\inf_n |\lambda_n| >0$.
Conversely, if $\inf_n |\lambda_n| >0$ then $v(\sum \alpha_n e_n)=\sum \frac1 {\lambda_n} \alpha_n e_n$ defines a bounded operator and $u\circ v=v\circ u=I$. |
What periodic functions have to do with rational numbers? | The solution of this system is made of functions with two distinct periods: $2\pi$ and $2k\pi$. For the solution to be globally periodic, $k$ must be rational, say $k=\dfrac pq$, so that after $q$ periods of the second you have seen $p$ periods of the first.
In other words, rational numbers have to do with periodic functions in that the respective periods must be commensurable. |
Inner and outer expansions | Now insert that for $x\approx 0$ you have
$$\frac{\sin x}{x}\approx 1-\frac16x^2$$
and
$$
\coth(x)=\frac{e^x+e^{-x}}{e^x-e^{-x}}\approx\frac{1+\frac12x^2}{x+\frac16x^3}\approx \frac1x+\frac13x
$$
to cancel out the singularities at $x=0$. |
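Both expansions are easy to confirm with sympy (a sketch):

```python
from sympy import symbols, sin, coth, series

x = symbols('x')
print(series(sin(x)/x, x, 0, 4))   # 1 - x**2/6 + O(x**4)
print(series(coth(x), x, 0, 4))    # 1/x + x/3 - x**3/45 + O(x**4)
```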
Why graphs are so important | Graphs are a common method to visually illustrate relationships in data. The purpose of a graph is to present data that are too numerous or complicated to be described adequately in the text, and to do so in less space.
Wikipedia says,
Graphs can be used to model many types of relations and processes in physical, biological, social and information systems. Many practical problems can be represented by graphs. Emphasizing their application to real-world systems, the term network is sometimes defined to mean a graph in which attributes (e.g. names) are associated with the vertices and edges.
In computer science, graphs are used to represent networks of communication, data organization, computational devices, the flow of computation, etc. For instance, the link structure of a website can be represented by a directed graph, in which the vertices represent web pages and directed edges represent links from one page to another. A similar approach can be taken to problems in social media, travel, biology, computer chip design, mapping the progression of neuro-degenerative diseases, and many other fields. The development of algorithms to handle graphs is therefore of major interest in computer science. The transformation of graphs is often formalized and represented by graph rewrite systems. Complementary to graph transformation systems focusing on rule-based in-memory manipulation of graphs are graph databases geared towards transaction-safe, persistent storing and querying of graph-structured data. |
Is a weak derivate of $f$ always a classical derivative of some $g$? | It is true for $n=1$. In this case $W^{1,p}$ is the space of $p$-absolutely continuous functions, and these are differentiable a.e., which is even stronger than your requirement. For higher derivatives you can simply iterate.
It can fail for $n>1$ (of course Sobolev embedding theorems ensure that it still true if the function has enough weak derivatives). Let $\Omega$ be bounded, $(q_k)$ a dense subset of $\Omega$ and
$$u(x)=\sum_{k=1}^\infty 2^{-k}\log\log(1+\|x-q_k\|^{-1}).$$
This limit exists in $W^{1,n}(\Omega)$ and is nowhere locally (essentially) bounded. In particular, whichever representative of $u$ and null set $Z$ you choose, there is always a set $\Omega'$ of full measure such that for all $x\in \Omega'$ there exists a sequence $(x_j)$ in $\Omega\setminus Z$ that converges to $x$ and satisfies $|u(x_j)|\to \infty$. Thus $u|_{\Omega\setminus Z}$ is discontinuous for each null set $Z$. |