Simple linear regression for predictive purposes | You know that $Y$ has the expression
$$
Y = 0.159 + 0.4\times 0.2 + e
$$with $e$ a random variable with expectation 0. Hence the best prediction is
$$\hat Y = 0.159 + 0.4\times 0.2
$$
which is the expected value of the second year batting average, given that the first year average is $0.200$.
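A quick simulation illustrates why the mean is the best prediction in the squared-error sense (a minimal sketch; the normal noise and its spread are illustrative assumptions — only the zero mean matters):

```python
import numpy as np

rng = np.random.default_rng(1)
m = 0.159 + 0.4 * 0.2                      # the prediction above
e = rng.normal(0.0, 0.03, size=100_000)    # illustrative noise with expectation 0
Y = m + e
for guess in (m - 0.02, m, m + 0.02):
    print(guess, np.mean((Y - guess) ** 2))  # squared error is smallest at guess = m
```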
Consider the shape of the distribution of the second year average, where $m = 0.159 + 0.4\times 0.2$. Since the distribution is symmetric around $m$, $m$ is the wisest choice. |
How to show that a lattice is isomorphic to another lattice? | There are several methods to do that. The main question is how the lattices are given and which properties they have.
In case the lattice is doubly founded, it is sufficient to consider all bijective mappings that map supremum irreducibles (elements with a single lower neighbour) to supremum irreducibles and infimum irreducibles (with a single upper neighbour) to infimum irreducible elements. At least that is the method that is used in Formal Concept Analysis to discuss lattice properties with the use of so-called reduced formal contexts. This is done in the following way:
Let $(L,≤)$ be a complete lattice, $G$ the set of supremum irreducible elements called objects, and $M$ the set of infimum irreducible elements called attributes. Then we can define a binary relation $I$ between $G$ and $M$ by the formula $I=(G×M)∩{≤}$. A consequence of the fundamental theorem of Formal Concept Analysis is that two doubly founded or complete lattices are isomorphic iff their reduced contexts are isomorphic. There are additional refinements available in case you can fix certain automorphisms of the lattice or the context.
In case your lattice is not given as a data set, you have to use the usual algebraic methods to find a bijective mapping that is either an order isomorphism or preserves both infimum and supremum.
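For small lattices that are given as explicit data, the brute-force search for such a bijection is easy to write down. A minimal sketch (function names are mine; since the order of a lattice determines its infima and suprema, checking the order in both directions suffices):

```python
from itertools import permutations

def find_isomorphism(elems1, leq1, elems2, leq2):
    """Search for a bijection f with x <= y iff f(x) <= f(y).
    leq1, leq2 are sets of pairs (x, y) meaning x <= y.
    Exponential in the size -- only sensible for small examples,
    or after restricting to the irreducibles as described above."""
    if len(elems1) != len(elems2):
        return None
    for image in permutations(elems2):
        f = dict(zip(elems1, image))
        if all(((x, y) in leq1) == ((f[x], f[y]) in leq2)
               for x in elems1 for y in elems1):
            return f
    return None
```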
In case you want to get a more detailed answer, you should provide some more information about your lattice. |
Finding the minimal polynomial of an $n \times n$ matrix | Let
$$
U=\begin{pmatrix} 0 & -1 \\ 1 & -1 \\ 2 & -1 \\ \vdots & \vdots \\ n-1 & -1
\end{pmatrix}
\;\;\;\text{and}
\;\;\;
V=\begin{pmatrix} 1 & 1 & 1 & \cdots & 1 \\ 0 & 1 & 2 & \cdots & n-1
\end{pmatrix}
$$
Then $A=UV.$ Therefore, $\mathrm{rank}(A)=2,$ and $A$ has only $2$ non-zero eigenvalues. Since $A$ is real and skew-symmetric ($A^T=-A$), $iA$ is Hermitian. This means $A$ is diagonalizable (in $\mathbb{C}$) and the minimal polynomial is square-free. As $iA$ has real eigenvalues ($iA$ is Hermitian), the non-zero eigenvalues of $A$ are purely imaginary. Furthermore, the non-zero eigenvalues must appear in conjugate pairs, because $A$ is real.
If we put all this together, we get $\mu_A(x) = x^3+ax$ for a suitable $a.$
In order to find $a$, we take a look at $\mu_A(A)$ :
$$
\mu_A(A) = A^3+aA = UVUVUV+aUV = U\left((VU)^2+aI\right)V = 0
$$
$(VU)^2$ can be computed using Faulhaber's formulas. We find:
$$
(VU)^2 = \begin{pmatrix}
-\frac{n^2(n^2-1)}{12} & 0 \\ 0 & -\frac{n^2(n^2-1)}{12}
\end{pmatrix}
$$
Therefore, $a=\frac{n^2(n^2-1)}{12}$ and we have found our minimal polynomial.
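For small $n$ this is easy to sanity-check numerically. A sketch in numpy, assuming (as the factorization $A=UV$ indicates) that $A$ has entries $a_{ij}=i-j$:

```python
import numpy as np

n = 7
i = np.arange(n, dtype=float)
U = np.column_stack([i, -np.ones(n)])
V = np.vstack([np.ones(n), i])
A = U @ V                                  # A[i, j] = i - j
a = n**2 * (n**2 - 1) / 12
print(np.allclose(V @ U @ V @ U, -a * np.eye(2)))  # (VU)^2 = -a I
print(np.allclose(A @ A @ A + a * A, 0))           # mu_A(A) = A^3 + aA = 0
```

Both checks print `True`. |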
Traveling Salesman with exceptions | This problem, as stated, is too vague to answer. What is critical is the relationship between $m,n$. If $m=n$, then this is identical to TSP. If $m=n-1$ or similar, it is polynomially equivalent to TSP. If $m=2$, then the problem is polynomial-time: find the shortest path between the two vertices with Dijkstra's algorithm. |
Is there a symbol for "preserves"? | A map between relational structures over the same signature is exactly the same as a homomorphism between these structures, so I would just say that $f$ is a homomorphism; I'm not aware of any other notation for that. |
Is the determinant differentiable? | As others have noted, since $A=(a_{ij})_{i,j=1\dots n}$ has determinant
$$
\det A = \sum_{\sigma\in S_n} \epsilon_\sigma\prod_{k=1}^n a_{k,\sigma(k)}
$$
which is a polynomial expression in the $a_{ij}$, the map $\det: \mathbb R^{n\times n}\to\mathbb R$ is infinitely differentiable. The first derivative with respect to $a_{ij}$ is calculated as
$$
\frac{\partial}{\partial a_{ij}} \det A = (\operatorname{adj} A)_{ji},
$$
where $\operatorname{adj} A$ is the adjugate matrix of $A$.
We can also look at the total derivative (or Fréchet derivative) $\mathrm D\det: \mathbb R^{n\times n}\to L(\mathbb R^{n\times n},\mathbb R)$ which assigns to every $A\in\mathbb R^{n\times n}$ the linear map $\mathrm D \det(A) : \mathbb R^{n\times n}\to \mathbb R$ given by
$$
(\mathrm D\det(A))(B) = \sum_{i,j} \left( \frac{\partial}{\partial a_{ij}} \det A\right) b_{ij}= \sum_{i,j} (\operatorname{adj}A)_{ji}b_{ij} = \operatorname{tr}((\operatorname{adj} A)B).$$
For invertible $A$ we can use $A^{-1}=\frac{1}{\det A}\operatorname{adj}A$ to get the expression
$$
(\mathrm D\det(A))(B) = \det(A)\operatorname{tr}(A^{-1} B).
$$
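A finite-difference check of this last expression (a sketch; the random test matrices are mine, and a generic random $A$ is invertible):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
h = 1e-6
fd = (np.linalg.det(A + h * B) - np.linalg.det(A - h * B)) / (2 * h)
exact = np.linalg.det(A) * np.trace(np.linalg.solve(A, B))  # det(A) tr(A^{-1} B)
print(fd, exact)                           # agree to roughly 1e-8
```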
This allows us to use the chain rule to calculate the derivative of functions like $f(t)=\det(A(t))$ where $A:\mathbb R\to\mathbb R^{n\times n}$ is a differentiable matrix-valued function. By the chain rule, we have
\begin{align}
f'(t) &= \left(\mathrm Df(t)\right)(1) = \left(\mathrm D(\det\circ A)(t)\right)(1) =
\left(\mathrm D \det(A(t)) \circ \mathrm D A(t)\right)(1) \\&=
\left(\mathrm D \det(A(t))\right)\left(\mathrm D A(t)(1)\right) =
\left(\mathrm D \det(A(t))\right)\left(\frac{\mathrm d A(t)}{\mathrm dt}\right) \\&=
\operatorname{tr}\left(\left(\operatorname{adj} A(t)\right)\frac{\mathrm d A(t)}{\mathrm dt}\right).
\end{align} |
How to find the square root of a matrix if its eigenvalues are the same. | If $S$ is a $2\times2$ matrix whose only eigenvalue is $0$, then $S$ is similar to a matrix of the form$$\begin{pmatrix}0&a\\0&0\end{pmatrix}.$$Therefore, $S^2$ is the null matrix.
On the other hand,$$\begin{pmatrix}2&a\\0&2\end{pmatrix}^2=\begin{pmatrix}4&4a\\0&4\end{pmatrix}.$$So, take $a=\frac14$. |
Radon measure with lebesgue decomposition | I suppose $\nu_a$ stands for the absolutely continuous part and $\nu_b$ for the singular part.
$\|\mu+\nu\|=\|\mu\|+\|\nu\|$ if $\mu \perp \nu$. In the Lebesgue decomposition we do have $\nu_a \perp \nu_b$. Hence, $\|\nu\|=\|\nu_a\|+\|\nu_b\|$. |
Are there any conditions that are necessary for the existence of a Hamiltonian path in a graph? | Hamiltonian cycle implies biconnected, which in turn implies that every node has degree at least two.
Hamiltonian path implies connected and at most two nodes of degree one. |
Rank of a direct sum of groups | You are mixing up the terms rank (see here) and order (see here). For any two abelian groups $A$ and $B$, it is true that
$$\mathrm{rank}(A\oplus B)=\mathrm{rank}(A)+\mathrm{rank}(B),$$
but the rank of any finite abelian group is $0$, so the rank of $E$ in your question is still $r$. |
Lovasz Extension Intuition | An input to $f^L$ is a point in the $N$-dimensional unit cube $[0,1]^N$. An input to $f$ is one of the $2^N$ corners of this cube. So $f$ attaches a number $f(s)$ to each corner $s$, and the goal is to "intelligently" or at least "reasonably" extend this to attach a number to every point in the whole cube.
Lovász takes advantage of the fact that this cube can be "reasonably" subdivided into $N!$ simplices. (An $N$-dimensional simplex in Euclidean $N$-dimensional space is, by definition, the convex hull of $N+1$ points that don't all lie in an $(N-1)$-dimensional hyperplane.) I'll describe the subdivision below, and point out that each of the simplices has its $N+1$ corners among the corners of the cube, but first let me get to the main point of the construction. Each point in a simplex is a weighted average of the corners of that simplex. So if you have numbers $f(s)$ attached to the corners $s$, then it is reasonable to attach, to any weighted average $x$ of the corners $s$, the corresponding weighted average of the $f$-values. That is, for any weights $\lambda_i$, you would define $f(\sum_i\lambda_is_i)$ to be $\sum_i\lambda_if(s_i)$. As in any weighted average, the weights $\lambda_i$ should be non-negative real numbers that add up to $1$. (A faster but less explicit way to say the same thing is to define $f$ to be linear on each of the simplices and to agree with the given values at the corners of the simplex.)
I still have to tell you what these simplices are and how they fit together to make the cube. For most points $x$ in the cube $[0,1]^N$, the $N$ coordinates $x_1,\dots, x_N$ are all distinct. Let me temporarily consider only those points. For any such $x$, we can list its $N$ coordinates in decreasing order, say $x_{\sigma(1)}>x_{\sigma(2)}>\dots>x_{\sigma(N)}$ for some permutation $\sigma(1),\sigma(2),\dots,\sigma(N)$ of $\{1,2,\dots,N\}$. For any fixed permutation $\sigma$, all the points $x$ whose coordinates are ordered according to that $\sigma$ constitute (the interior of) one of the desired simplices.
So far, I've ignored the points $x$ that have two or more coordinates equal; these points will be on the boundary of two or more of these simplices.
The corners of the simplex corresponding to the permutation $\sigma$ are the $N+1$ points obtained as follows. Pick some $k$ in the range $0\leq k\leq N$ and let $s\in\{0,1\}^N$ be the point with $s_{\sigma(i)}=1$ for $0<i\leq k$ and $s_{\sigma(i)}=0$ for $k<i\leq N$. These $N+1$ points $s$ (one for each choice of $k$) are what were called $S_0,\dots,S_N$ in the question. That notation tacitly identifies an $N$-component vector $s$ of $0$'s and $1$'s (i.e., an element of $\{0,1\}^N$) with a subset of $\{1,2,\dots,N\}$, namely the set $\{i:s_i=1\}$. The notation $1_S$ used in the question means the vector that corresponds to the subset $S$ of $\{1,2,\dots,N\}$.
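Putting the pieces together, the extension is a few lines of code. A minimal sketch (names are mine; `f` takes a frozenset of indices from $\{0,\dots,N-1\}$):

```python
import numpy as np

def lovasz_extension(f, x):
    """Evaluate f^L at x in [0,1]^N by writing x as a weighted average
    of the corners 1_{S_0}, ..., 1_{S_N} of its simplex and averaging
    the f-values with the same weights."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    sigma = np.argsort(-x)                  # coordinates in decreasing order
    xs = np.concatenate(([1.0], x[sigma], [0.0]))
    total = 0.0
    for k in range(N + 1):                  # corner S_k = top-k coordinates
        lam = xs[k] - xs[k + 1]             # weight lambda_k; the weights sum to 1
        total += lam * f(frozenset(sigma[:k]))
    return total
```

For a modular set function such as $f(S)=|S|$, the extension is linear: `lovasz_extension(lambda S: len(S), [0.3, 0.9, 0.1])` returns $1.3=0.3+0.9+0.1$, as it should. |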
least-squares estimation | (1) is linear in the parameters $(a, b, c)$. If your data points are $(x_i, y_i)_{i=1}^m$, what you want to do is minimize the sum $S=\sum_{i=1}^m (y_i-(ax_i^2+bx_i+c))^2$.
For each parameter (for example, $a$), differentiate $S$ with respect to that parameter. This gives one equation. In the example,
$$\begin{aligned}
\frac{\partial S}{\partial a}
&=\sum_{i=1}^m \frac{\partial }{\partial a}(y_i-(ax_i^2+bx_i+c))^2\\
&=\sum_{i=1}^m 2(y_i-(ax_i^2+bx_i+c))\frac{\partial }{\partial a}(y_i-(ax_i^2+bx_i+c))\\
&=\sum_{i=1}^m 2(y_i-(ax_i^2+bx_i+c))(-x_i^2)\\
&=-2\sum_{i=1}^m x_i^2(y_i-(ax_i^2+bx_i+c))\\
&=-2\left(\sum_{i=1}^m x_i^2y_i-\sum_{i=1}^m x_i^2ax_i^2-\sum_{i=1}^m x_i^2bx_i-\sum_{i=1}^m x_i^2c\right)\\
&=-2\left(\sum_{i=1}^m x_i^2y_i-a\sum_{i=1}^m x_i^4-b\sum_{i=1}^m x_i^3-c\sum_{i=1}^m x_i^2\right)
\end{aligned}$$
Setting this partial derivative to zero, we get
$$\sum_{i=1}^m x_i^2y_i
=a\sum_{i=1}^m x_i^4+b\sum_{i=1}^m x_i^3+c\sum_{i=1}^m x_i^2.
$$
This is one of the three equations in $a, b, c$ that are needed. Do the same with $\frac{\partial S}{\partial b} = 0$ and $\frac{\partial S}{\partial c} = 0$. Then solve these three linear equations for $a, b,$ and $c$. This is the least squares method.
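In code, assembling and solving the three normal equations takes a few lines (a numpy sketch; the function name and sample data are mine):

```python
import numpy as np

def fit_parabola(x, y):
    """Least-squares fit of y ~ a x^2 + b x + c via the normal equations."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    M = np.vstack([x**2, x, np.ones_like(x)]).T    # columns for a, b, c
    # M.T @ M contains sum x^4, sum x^3, sum x^2, ... and M.T @ y contains
    # sum x^2 y, sum x y, sum y: exactly the three equations derived above.
    return np.linalg.solve(M.T @ M, M.T @ y)

a, b, c = fit_parabola([0, 1, 2, 3, 4], [1.0, 3.1, 9.2, 19.1, 32.8])
```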
(2) is linear in $a$ but nonlinear in $n$. You can make it linear in its parameters by writing $\ln(y) = \ln(a)+n \ln(x)$. Then the parameters are $\ln(a)$ and $n$.
Anyway, this is a start. Linear least squares problems, like (1), are a lot easier than nonlinear ones, like (2). |
Is there a universal or general “matrix-builder” notation? | I have seen the notation $(c(k,\ell))_{k\ell}$ or maybe a variant like $(c(k,\ell))_{k=1,\ell=1}^{y,z}$ or $(c(k,\ell))_{k=1\ldots y,\ell=1\ldots z}$. With the first, one usually specifies the dimensions in the surrounding text.
The notation is sort of an "anonymous" tensor, where contraction ends up being substitution. |
What is the Jacobi amplitude? | Some relations:
\begin{align*}
u &= F(\phi| m) \\
&= \int_{0}^{\phi} \frac{d\theta}{\sqrt{1-m\sin^2 \theta}} \\
&= \int_{0}^{\sin \phi} \frac{dx}{\sqrt{(1-x^2)(1-mx^2)}} \\
&= \int_{0}^{\operatorname{sn} u}
\frac{dt}{\sqrt{(1-t^2)(1-mt^2)}} \\
x &= \sin \phi \\
&= \operatorname{sn} (u|m) \\
&= \sin [F^{-1}(u|m)] \\
\operatorname{sn}^{-1} (x|m) &= F(\sin^{-1} x|m) \\
\operatorname{am} (u|m) &= \phi \\
&= F^{-1}(u|m) \\
\operatorname{sn} (u|m) &= \sin \phi \\
\end{align*}
Note that Mathematica or WolframAlpha uses $m=k^2$. |
Limit of nth root of 1/n as n goes to infinity. | You have
$$\sqrt[n]{\frac{1}{n}} = \exp \left( -\frac{\ln(n)}{n}\right)$$
Now when $n$ tends to $+\infty$, $-\frac{\ln(n)}{n}$ tends to $0$,
so you deduce that $\sqrt[n]{\frac{1}{n}}$ tends to $\exp(0)=1$. |
Finding eigenvalues of a 3x3 matrix given determinant and trace | Suppose your eigenvalues are $x$ (repeated twice) and $y$.
Your matrix $A$ is similar to a diagonal matrix $B$ which has its eigenvalues on its diagonal.
Now, similar matrices have the same determinant and the same trace, thus we can get to the following equations:
$$2x+y = -1$$
$$x^2y=45$$
The first one is the sum of the diagonal (we know that there are $2$ distinct eigenvalues; thus, one of them shows up $2$ times on the diagonal).
The second one is the product of the diagonal (determinant of diagonal matrix).
From the second equation, $$y=\frac{45}{x^2},$$ and substituting into the first gives $2x^3+x^2+45=0$, whose only real root is $$x=-3.$$
If $x=-3$, then $y=5$; indeed $x^2y=45$ and $2x+y=-1$.
And that's our answer :) |
Smart variable change for non linear system | I don't know if you have tried the usual transformation of $x$,$y$ to polar coordinates $r,\theta$. Your equations then take an (apparently) simpler form:
$$
r = \epsilon
$$
$$
\tan\theta = \rho \frac{1}{\zeta-\epsilon\cos\theta}
$$
Introducing the first inside the second you get a transcendental equation that may be easier to solve (at least numerically):
$$
\zeta\tan\theta-\epsilon\sin\theta=\rho
$$
Maybe you'll find a solution there :/ |
Meaning of definition of a linear subspace of $\mathrm{P}_2(x)$ | $$p(0) = \alpha p(1)$$
$$\iff c = \alpha(a + b + c)$$
$$\implies W = \big\{ p(x) = ax^2 + bx + \alpha(a + b + c) \big\} $$
To show a set is a subspace, you have to show the 3 following:
W is non-empty
W is closed under addition
W is closed under scalar multiplication.
W is non-empty
You usually prove this by showing the $0$ vector of the superspace belongs to the subspace.
In fact,
$$0 = 0x^2 + 0x + 0$$ and $0 = \alpha(0 + 0+ 0)$
$\implies 0 \in W$
W is closed under scalar multiplication and addition
Let $p(x)$ and $q(x) \in W$. They satisfy the property of $W$, namely,
$p(x) = ax^2 + bx + c = ax^2 + bx + \alpha(a + b + c)$
$q(x) = dx^2 + ex + f =dx^2 + ex + \alpha(d + e + f)$
Now
$$p(x) + kq(x) = ax^2 + bx + \alpha(a + b + c) + kdx^2 + kex + k\alpha(d + e + f)$$
$$= (a + kd)x^2 + (b + ke)x + \alpha(a + b + c+ kd + ke + kf)$$
Take
$$A = a + kd$$
$$B = b + ke$$
$$C = c + kf$$
$$\implies p(x) + kq(x) = Ax^2 + Bx + \alpha(A + B + C)$$
Thus $p(x) + kq(x) \in W$
You can now conclude W is a subspace of $P_2$ |
Is the convex function of a DC function convex ? Or is it DC? | In general, the composition $H$ is not convex. As a counterexample, let $D=[2,3]\subset\mathbb{R}$. Let $h_1,h_2:D\to\mathbb{R}$ be given by
$$ h_1(x)=\frac{1}{x}\quad\text{and}\quad h_2(x)=x^2. $$
Then $h\equiv h_1-h_2$ is d.c. Let $g:D\to\mathbb{R}$ be given by
$$ g(x)=\frac{1}{x}. $$
The function $g$ is convex on $D$, but the function
$$ H(x)=g(h(x))=\frac{1}{\frac{1}{x}-x^2} $$
is concave on $D$.
In general, the composition is d.c., under some mild assumptions. See Theorem H in this paper.
Edit
As Kavi Rama Murthy notes in the comments, to show that $H$ need not be convex, you could take any two convex functions $h_1,h_2$ such that $h_1-h_2$ is not convex (e.g. $h(x)=0-x^2$) and then compose with $g(x)=x$ (the identity). This is algebraically much simpler than my counterexample above. |
number of one-one function;a set to itself | For a finite set this is simple. A function between finite sets of the same size is injective if and only if it is surjective if and only if it is bijective. So the set of injective functions is simply the set of permutations, i.e. the ways you can "re-order" $\{a,b,c\}$; there are $3!=6$ of them. For infinite sets this becomes much more difficult. |
Finding constant to satisfy convergence of infinite series using MacLaurin expansion | Consider the sequence $$a_n=e - \left(1+\frac{1}{n}\right)^n- \frac{eb}{n}$$ For large $n$, you have
$$\left(1+\frac{1}{n}\right)^n=e-\frac{e}{2 n}+\frac{11 e}{24 n^2}+O\left(\frac{1}{n^3}\right)$$ making
$$a_n=\frac{\frac{e}{2}-e b}{n}-\frac{11 e}{24 n^2}+O\left(\frac{1}{n^3}\right)$$ and since the harmonic series does not converge, $\sum a_n$ converges only if the $\frac 1n$ term vanishes, that is, $b=\frac 12$; the remaining $O\left(\frac{1}{n^2}\right)$ terms then give a convergent series. |
How is distance in sequence spaces measured? | In $\ell_p$ (for $1\le p<\infty$): $$d\bigl((a_n),(b_n)\bigr)=\sqrt[p]{\sum_{n \in \Bbb N} |a_n-b_n|^p}$$
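For finite truncations of sequences this is one line of code (a sketch; $p=\infty$, where one takes $\sup_n |a_n-b_n|$ instead, is not handled):

```python
def lp_distance(a, b, p):
    # l^p distance between two (truncated) sequences, for 1 <= p < infinity
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1 / p)

print(lp_distance([1, 2, 3], [0, 0, 0], 2))   # sqrt(14)
```

The $p=2$ case is the familiar Euclidean distance. |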
How can I understand the First Isomorphism Theorem better? | See, $R/I$ is a ring, whose elements are equivalence classes under the relation $x \sim y$ if $x - y \in I$. That is, if I ask you "name an element of $R / I$", then you would tell me the name of some element in $R$, say $r \in R$, and I will take the equivalence class of $r$, which is denoted $r+I$, as the element of $\frac RI$ which you are referring to.
Let's take an example. Take the ring $\mathbb Z $ under usual addition and multiplication, along with the ideal $3\mathbb Z = \{3n : n \in \mathbb Z\}$, the multiples of $3$.
Then, what are the elements of $\frac{\mathbb Z}{3\mathbb Z}$? You name any element of $\mathbb Z$, and I will interpret the equivalence class it belongs to, as the element you are referring to. For example, if you say $73$, then $73 + 3\mathbb Z$ is an element of $\frac{\mathbb Z}{3 \mathbb Z}$. Similarly, so is $65 + {3 \mathbb Z}$.
Now, every element of $\frac{\mathbb Z}{3 \mathbb Z}$ is of the form $r + {3 \mathbb Z}$ where $r \in \mathbb Z$.
So what is the isomorphism given to you actually doing? Given an element $E$ of $\frac RI$, it is an equivalence class, therefore there is some $r$ such that $E = r + I$. Now, find where that $r$ goes under $\phi$, and send $E$ to $\phi(r)$.
For example, if $\phi : \mathbb Z \to S$ (where $S$ is some ring) is a map whose kernel is exactly $3 \mathbb Z$, then the isomorphism between $\frac{\mathbb Z}{3\mathbb Z}$ and $S$ is like this :
The isomorphism would take $73 + {3\mathbb Z}$ to $\phi(73)$. It would take $65 +{3 \mathbb Z}$ to $\phi(65)$. It would take $1 + {3\mathbb Z}$ to $\phi(1)$. And so on.
Now, I hope at least you know what the isomorphism is in the general case. But there is a deeper issue.
If I give you an equivalence class, $E$, in $\frac RI$, I told you there is an element such that $E = r+I$. But $r$ need not be unique! For example, $1 +3 \mathbb Z$ is equal to $73 + 3\mathbb Z$, because $73 - 1 = 72 = 3 \times 24 \in 3\mathbb Z$, so $1 \sim 73$, and related elements belong to the same equivalence class, so $1 + 3 \mathbb Z = 73 + 3\mathbb Z$.
But then, what will $\phi$ do when confronted with this? If I give it $1 + 3\mathbb Z$, then it will return $\phi(1)$, but if I give it $73 + 3\mathbb Z$, then it will return $\phi(73)$.
How can $\phi $ return two elements, $\phi(1)$ and $\phi(73)$, while taking the same equivalence class $1 + 3\mathbb Z = 73 + 3\mathbb Z$ as input? Every function has a single input and returns a single output, but it seems there could be two outputs here, $\phi(1)$ and $\phi(73)$!
This is not the case: actually, $\phi(1) = \phi(73)$, by the fact that $\phi(72) = 0$ (what is the kernel of $\phi$?) and $73 = 72 +1$, so $\phi(73) = \phi(72 + 1) = \phi(72) + \phi(1) = 0 + \phi(1) = \phi(1)$. So, $\phi$ is not doing anything wrong. More formally, we say it is well-defined in this situation.
Putting the deeper issue to rest, I hope you now see :
what the isomorphism from $R/I$ to the image of $\phi$ is doing.
why it is well-defined (I gave an example, but you can generalize and work it out).
As to why it is an isomorphism, well, that I shall explain only if you require it to be done. |
Fundamental group of two circles joined | As you presumably had to learn covering space theory to compute $\pi_1(S^1)$, this may be the best approach. If you can guess that the fundamental group of the wedge is $G=\mathbb{Z}*\mathbb{Z}$, then you can just observe that $G$ acts freely on the infinite tree with degree four at each node: the generator $a$ corresponds to moving right by one interval, and $b$ to moving up, if we visualize this graph embedded in the plane. The quotient by this action is $S^1\vee S^1$, and the graph is contractible, so it must be the universal cover of $S^1\vee S^1$ with deck transformations isomorphic to $\pi_1(S^1\vee S^1)$. |
Alignment of the "formal independence" and "real world independence" | No. There is no way to say whether the events of rain and heads are correlated. Perhaps the increased humidity associated with rain affects the drag coefficient of air in such a way as to increase the probability of heads. That is, we cannot make an exact conclusion.
The justification is that the forces underlying the two systems interact only weakly, so the randomness in one system is approximately unaffected by the randomness in the other. The conclusion is always approximate; we never conclude $P(AB)=P(A)P(B)$ exactly, except in trivial cases.
I like to think of it like this: imagine a coupled differential equation $y_1'=F_1(y_1,y_2)$, $y_2'=F_2(y_1,y_2)$. To say that $y_1$ and $y_2$ are "real-world independent" would mean there are functions $\tilde F_1$ and $\tilde F_2$ so that $\tilde F_1(y_1)\approx F_1(y_1,y_2)$ uniformly in $y_2$, and the same for $\tilde F_2$, in such a way that the solutions to $\tilde y_1'=\tilde F_1(\tilde y_1)$ and $\tilde y_2'=\tilde F_2(\tilde y_2)$ are within $\epsilon$ of $y_1$ and $y_2$ on the interval $[0,T]$, for some $\epsilon$ too small to be detected by human measurement, for some $T$ longer than the lifetime of humanity. If $F_1$ and $F_2$ are random functions, this means that $P(AB)\approx P(A)P(B)$, where $A$ is an event for $y_1$ and $B$ for $y_2$.
On a frictionless pool table, two elastically colliding pool balls are set in motion in random directions. They will almost surely land in a pocket. Let $A$ be the event that the first ball lands in a corner pocket, and $B$ that the second ball lands in a corner pocket. I would not call these events real world independent since the pool balls interact. But it should be the case that $P(AB)=P(A)P(B)$. |
Number of elements of given order in the group | It looks like you're trying to apply Möbius inversion with $g(m) = m$ and $f(m) = N_m.$ However, in order to apply Möbius inversion, you must have
$$
g(m) = \sum_{d\mid m} f(d)
$$
for all $m\in\Bbb N.$ In particular, this would mean that
$$
m = \sum_{d\mid m}N_d,
$$
even if $m\nmid\# G.$
Now, take any nontrivial $G$ with $\#G = n$, and consider $m = n^2.$ For any $d\mid n^2$ with $d \nmid n,$ we have $N_d = 0,$ since element orders divide $\#G.$ Then
$$\sum_{d\mid n^2} N_d = \sum_{d\mid n} N_d + \sum_{d\mid n^2, d \nmid n} N_d= n + 0\neq n^2.$$
Thus, Möbius inversion doesn't apply here!
Here's a concrete counterexample: let $G = S_3.$ Then $N_1 = 1, N_2 = 3,$ and $N_3 = 2.$ However,
$$
\sum_{d\mid 2}\mu(d)\frac{2}{d} = 2\mu(1) + \mu(2) = 2 + (-1) = 1\neq 3 = N_2.
$$ |
Proving sets to be independent | Use the fact that $P(A\cup B)=P(A)+P(B)-P(A\cap B)$. |
Using Other User's Data to estimate probability of clicking a link | The simple approach is to find all the other users that have been shown link 6 and compute the fraction of them that clicked on it. That corresponds to assuming all your users behave similarly. The next level is to look at the correlations between users clicking on links and find users who click the same links as user 1, then compute the fraction of those who clicked on link 6. Further on, you can examine their whole Facebook page, zip code, car they own, etc. and try to improve your prediction. Many companies have whole departments devoted to this. You need to make your question much more specific. |
Why is $\mathbb R/\mathbb Z$ a circle? | If we take $\mathbb{R}/\mathbb{Z}$ as the group quotient, i.e. we mod out via the equivalence relation $x \sim y$ iff $x - y \in \mathbb{Z}$ — not the quotient that identifies $\mathbb{Z}$ to a single point and keeps all other points separate (which this notation also often denotes):
Then $0 \sim 1$, so we can approach $0 = 1$ from two directions, both $\frac{1}{n} $ and $1 - \frac{1}{n}$ have the same limit, so all points have near neighbours on both sides. You have sort of bent $[0,1)$ so that the 1 gets close to 0, and eventually identical with it. You glue the $0$ and $1$ together, as it were.
To see an explicit homeomorphism, let $f(t) = e^{2\pi it}, f: \mathbb{R} \rightarrow S^1$, then $f$ is continuous. You have to check that $x \sim y$ iff $f(x) = f(y)$. Then define $\hat{f}: \mathbb{R}/\mathbb{Z} \rightarrow S^1$, by $\hat{f}([x]) = f(x)$. The previous condition on $f$ and $\sim$ is a reformulation that $f$ is well-defined (as $x \sim y \rightarrow f(x) = f(y)$ so the representative of the class does not matter for the value) and 1-1 (as $f(x)= f(y) \rightarrow x \sim y \rightarrow [x] = [y]$). It's continuous: if $O \subset S^1$ is open, then $\hat{f}^{-1}[O]$ is open by the definition of quotient topology iff $q^{-1}[\hat{f}^{-1}[O]] = f^{-1}[O]$ is open in $\mathbb{R}$, and this is just continuity of $f$.
We don't need to show openness of $\hat{f}$ directly: $\mathbb{R}/\mathbb{Z}$ is compact (being the image of $[0,1]$ under the quotient map) and $S^1$ is Hausdorff, so the continuous bijection $\hat{f}$ is automatically a homeomorphism. |
River boat round trip | Hint: if the current is $c$, the speed upstream is $20-c$ and downstream is $20+c$ |
Deriving singular values based on column properties | All right, as I said in comment, your equation can be written as $$Y^TY=I$$ with $Y=\sqrt{D}X$. This shows that the singular values of $Y$ are $1$ ($p$ times); equivalently, the eigenvalues of $YY^T$ are $1$ ($p$ times) and $0$ ($d-p$ times). Now it is possible to find the singular values of $X$. |
Why isn't $\sum_{k=0}^\infty (-1)^kz^{-2k-1}=\sum_{-\infty}^{-1}\frac{z^m}{i^{m+1}}$? | In the first sum an important property is that it only has odd exponents. As an example the coefficient for $z^{-2}$ is zero and so forth. However this property is not preserved when I do the manipulation $[\text{let } m=-2k-1 \iff k=-\frac{m+1}{2}]$ since I get a square root. As an example the coefficient for $z^{-2}$ therefore becomes $\frac{1}{i}\neq0$. A better manipulation would be $m=-k$. |
Prove irreducibility of a polynomial | $x^3-m$ is reducible iff it has a factor of degree 1 iff it has a root iff $m$ is a cube. In particular, $x^3-m$ is irreducible when $m$ is squarefree (and $m \neq 1$). |
Simple Projection Proof | Pick $v\in V$. You can write this uniquely as $u+w$ with $u\in U,w\in W$. You're defining $P$ by $P(v)=P(u+w)=u$, the projection onto $U$.
Then it follows $$(i)\;\;\operatorname{im} P=U$$ $$(ii)\;\;\operatorname{ker} P=W$$
$(i)$ Pick $u\in U$. Then $P(u)=u$ so $u\in \operatorname{im} P$. Conversely, if $v\in\operatorname{im} P$ we have that $P(x)=v$ for some $x\in V$. But we can write $x=u+w$ uniquely with $u\in U,w\in W$ so that $P(u+w)=u=v$. So $v\in U$.
$(ii)$ If $w\in W$ then we can write $w=0+w$ with $0\in U,w\in W$ so $$P(w)=P(0+w)=0$$ and $w\in \operatorname{ker}P$. Conversely if $P(v)=0$ write $v=u+w$ with $u\in U,w\in W$. Then $P(v)=P(u+w)=u=0$. Thus $u=0$, $v=w$ and $v\in W$. |
Enumerating the primitive recursive functions without repetition | In general, if I can enumerate a list of computable functions I know to be total, then I can enumerate that same list without repetition.
Here's how to do this. Suppose I have an enumeration $\{f_n: n\in\mathbb{N}\}$ of some total computable functions. Say $f_n$ looks new at stage $s$ if for every $m<n$, there is a $k<s$ such that $f_n(k)\not=f_m(k)$; that is, $f$ is visibly different from each of the previous $f_m$s, just by looking at the first $s$-many bits.
Since each $f_n$ is total, the predicate "$f_n$ looks new at stage $s$" is computable (just look at the first $s$ bits of $f_1, f_2, . . . , f_n$ and compare). So to build a list without repetitions, we just wait until we see some $f_n$ look new, and then add it to our list; since we only add $f_n$s which look new, we're guaranteed to not get repetition, and as long as we do things in a reasonable order, every $f_n$ will wind up represented on the list.
Formally, here's how we build our no-repetitions list:
Note that $f_1$ automatically looks new at stage $1$; let $n_1=1$.
Let $s_1$ be the least natural number such that some $f_n$ ($n\not= n_1$) looks new at stage $s_1$; let $n_2$ be the least such $n$.
In general, having defined $n_i$, let $s_i$ be the least natural number such that some $f_n$ ($n\not=n_1, n_2, . . . , n_i$) looks new at stage $s_i$; and let $n_{i+1}$ be the least such $n$.
We get a sequence of numbers $n_1, n_2, n_3, . . .$ this way. Our no-repetitions list is just $$f_{n_1}, f_{n_2}, f_{n_3}, . . .$$ It's now a good exercise to show that this list has no repetitions, and for each $n$ there is some $i$ such that $f_n=f_{n_i}$.
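Here is a minimal sketch of this construction in Python, with a finite list of total $0/1$-valued functions standing in for the enumeration (everything concrete here is illustrative; note the generator scans forever once all distinct functions have appeared):

```python
from itertools import count, islice

def without_repetition(fs):
    """Yield indices of pairwise distinct functions from fs.  Index n is
    added once f_n 'looks new at stage s': it differs from every function
    already chosen on some argument k < s."""
    chosen = []
    for s in count(1):                              # stage s
        for n in range(min(s, len(fs))):
            if n in chosen:
                continue
            if all(any(fs[n](k) != fs[m](k) for k in range(s))
                   for m in chosen):
                chosen.append(n)
                yield n

fs = [lambda k: 0, lambda k: k % 2, lambda k: 0, lambda k: 1]
print(list(islice(without_repetition(fs), 3)))      # [0, 1, 3]: f_2 = f_0 is skipped
```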
A huge caveat:
There is no effective procedure to tell where a given function shows up on this repetition-free list. Specifically, let's say I want to know where $f_{12}$ lands. Unfortunately, there's no way to find which $i$ satisfies $f_{n_i}=f_{12}$! This is a good exercise, and is why this construction doesn't contradict the fact that deciding equality of primitive recursive functions isn't effective.
You may also be interested in the much harder construction of a Friedberg enumeration: there is a computable sequence $e_i$ such that
Every partial recursive function $\varphi_e$ is equal to some $\varphi_{e_i}$, and
$\varphi_{e_i}=\varphi_{e_j}$ iff $i=j$.
That is, we can list the partial recursive functions without repetition! This is really crazy, and may seem to contradict the fixed point theorem. The saving grace is, again, the fact that this list is basically terrible: there's no way to tell where a specific function appears on it. In particular, basic operations on indices are no longer computable. |
Absolute value, and multiplying by x | I would compute with absolute values longer:
$$\Bigl\lvert\frac{-2}x\Bigr\rvert=\frac{2}{\lvert x\rvert}<1\iff\lvert x\rvert>2\iff x>2\enspace\text{or}\enspace x<-2. $$ |
What is $\pmatrix{m+n\cr n}$ asymptotically? | Assuming that $m$ is fixed we have for $n\gg m$ by definition of binomial coefficient:
$$
\binom{m+n}{n}\approx\frac{n^m}{m!}.
$$ |
What is the variance of residual of regression (matrix form)? | Your second formula assumes independence (or at least zero covariance) of $Y$ and $PY$, which does not hold.
See here, you forgot two important terms.
$$var (Y - PY) = var (Y) + var(PY) + cov(Y, -PY) + cov(-PY, Y) $$ |
Limit of a composite function that is not continuous | Since you mention you are interested in the reasoning behind the answers, let's go into that first. To be more general, let $A,B,C\subseteq\mathbb{R}$ be non-empty and let's say $f:A\to B$ and $g:B\to C$ are two functions. Given some $a\in A$, we can say that $$\lim_{x\to a}g(f(x))=g\left(\lim_{x\to a}f(x)\right)$$ provided that $g$ is continuous at $\lim\limits_{x\to a}f(x)$.
In your case, we had two points of interest: $x=-3$ and $x=0$. Both of these cases fail to satisfy the above conditions, so we have to resort to another method, that being left and right sided limits.
In the case where $x=-3$, we can approach $x$ from the left and the right; approaching from the left, graphically, tells us that $\lim\limits_{x\to -3^-}f(f(x))=-1$ and approaching from the right tells us that $\lim\limits_{x\to -3^+}f(f(x))=-1$. This means $\lim\limits_{x\to -3}f(f(x))=-1$.
Now, in the case where $x=0$, approaching from the left of zero yields $\lim\limits_{x\to 0^-}f(f(x))=-2$ and approaching from the right similarly yields $\lim\limits_{x\to 0^+}f(f(x))=-2$.
Since you don't need to be completely rigorous in your justification, to compute, for example, the left sided limits, simply move your finger near the left of $-3$ and approximate the output of $f$ where your finger is. For example, let's say your finger is near $x'=-3.5$. Then $f(-3.5)=2$ and $f(f(-3.5))=f(2)=-1$. From the right, let's say your finger is near $x'=-2.5$. Then $f(f(-2.5))=f(2)=-1$. Since these limits agree, you can be sure your answer is $-1$. This is not a very precise answer, but the reasoning behind it is left and right sided limits if you're more interested. Trying to provide epsilon-delta proofs would be a good way to strengthen your understanding of why this works. |
Teenager solves Newton dynamics problem - where is the paper? | In the document Comments on some recent work by Shouryya Ray by Prof. Dr. Ralph Chill and Prof. Dr. Jürgen Voigt (Technische Universität Dresden), dated June 4, 2012, it is written:
Conducting an internship at the Chair of Fluid Mechanics at TU
Dresden, Shouryya Ray encountered two ordinary differential equations
which are special cases of Newton's law that the derivative of the
momentum of a particle equals the forces acting on it. In the first
one, which describes the motion of a particle in a gas or fluid, this
force is the sum of a damping force, which depends quadratically on
the velocity, and the (constant) gravitational force.
$$\begin{eqnarray*} \dot{u} &=&-u\sqrt{u^{2}+v^{2}},\qquad
u(0)=u_{0}>0 \\ \dot{v} &=&-v\sqrt{u^{2}+v^{2}}-g,\quad v(0)=v_{0}.
\end{eqnarray*}\tag{1}$$ Here, $u$ and $v$ are the horizontal and
vertical velocity, respectively.
(...)
The second equation reads $$ \ddot{z}=-\dot{z}-z^{3/2},\qquad
z(0)=0,\dot{z}(0)=z_{1},\tag{2} $$ and describes the trajectory of the
center point $z(t)$ of a spherical particle during a normal collision
with a plane wall.
(...)
Let us come back to problem (1) which was the starting point of the media stories. In the context of Shouryya Ray's work it was an unfortunate circumstance, that a recent article from 2007$^8$ claims that no analytical solution of problem (1) was known, or that it was known only in special cases, namely falling objects$^9$. This might have misled Shouryya Ray who was not aware of the classical theory of ordinary differential equations.
(...)
To conclude, Shouryya Ray has obtained analytic solutions of the problem (1), by transforming it successively to the problems (3)-(5), and by applying a recent result of D. Dominici in order to obtain a recursive formula for the coefficients of the power series representation of $\psi$. He then validated his results numerically. Given the level of prerequisites that he had, he made great progress. Nevertheless all his steps are basically known to experts and we emphasize that he did not solve an open problem posed by Newton.
(...)
We hope that this small text gives the necessary information to the mathematical community, and that it allows the community to both put in context and appreciate the work of Shouryya Ray who plans to start a career in mathematics and physics.
The function $\psi$ is given by
$$\psi (t)=(v_{0}-g\Psi (t))/u_{0},$$
where
$$\Psi (t)=\int_{0}^{t}\exp \left[ \int_{0}^{\tau }\sqrt{u^{2}(s)+v^{2}(s)}ds
\right] d\tau .$$
I've read about this text on this blog post.
PS. Also in Spanish the Francis (th)E mule Science's News post El problema de Newton y la solución que ha obtenido Shouryya Ray (16 años) discusses these problems. |
Question about motion. | So, starting at $t=0$ the object has velocity $v(0)=0$ and acceleration $f\ \frac{m}{s^2}$.
Thus at time $t$, by the above mentioned formulas, the object will have travelled
$$x(t)=\frac12t^2f\ \text{ meters}, $$
and will have a velocity of
$$v(t)=ft \ \frac{m}{s}.$$
Between time $t$ and $2t$ it will have travelled, accounting for $\color{red}{\text{the increment in acceleration}}$
$$x(2t)-x(t)=v(t)\cdot t+ \frac12t^2(f\color{red}{+f})\ \text { meters}$$
and will have a final velocity of
$$v(2t)=v(t)+t(f+f)\ \frac{m}{s}.$$
It is now quite clear, this you can prove by inspection, that between time $kt$ and time $(k+1)t$ your object will travel
$$x((k+1)t)-x(kt)=v(kt)\cdot t+ \frac12t^2((k+1)f)\ \text { meters}$$
and its final velocity will be
$$v((k+1)t)-v(kt)=t((k+1)f)\ \frac{m}{s}.$$
Now we sum all contributions, to see that
$$v(Kt)=\sum_{k=0}^{K-1}\left[v((k+1)t)-v(kt)\right]+v(0)=\sum_{k=0}^{K-1}\left[t((k+1)f)\right]+0=tf \sum_{k=0}^{K-1}(k+1)=tf\frac{K(K+1)}{2},$$
so that in particular,
$$v(nt)=tf\frac{n(n+1)}{2}.$$
Similarly
\begin{align*}x(nt)& =\sum_{k=0}^{n-1}\left[x((k+1)t)-x(kt)\right]+x(0)=\sum_{k=0}^{n-1}\left[v(kt)\cdot t+ \frac12t^2((k+1)f)\right]+x(0)\\ & =t^2f\sum_{k=0}^{n-1}\frac{k(k+1)}{2}+ \frac12t^2 f\sum_{k=0}^{n-1} (k+1)\\& = t^2f\sum_{k=0}^{n-1}\left[\frac{k(k+1)}{2}+\frac{k+1}{2}\right] \\ & = \frac{t^2f}{2}\sum_{k=0}^{n-1}(k+1)^2 \\ & = \frac{t^2f}{2}\cdot\frac{n(n+1)(2n+1)}{6}
\end{align*}
Where in the last step I have used the sum of squares formula (How to get to the formula for the sum of squares of first n numbers?).
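These closed forms are easy to check against the step-by-step recursion (a short sketch; the values of $f$, $t$, $n$ are illustrative):

```python
f, t, n = 2.0, 0.5, 10      # illustrative acceleration unit, time step, step count
x = v = 0.0
for k in range(n):          # between kt and (k+1)t the acceleration is (k+1)f
    x += v * t + 0.5 * t**2 * (k + 1) * f
    v += t * (k + 1) * f
print(v, t * f * n * (n + 1) / 2)                      # both 55.0
print(x, t**2 * f / 2 * n * (n + 1) * (2*n + 1) / 6)   # both 96.25
```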
Remark: I suppose the suggested answer is missing a $1/6$ factor |
A graph is bipartite iff its cycles are even. (need help with proof) | You have first dealt with the easy part of the "iff": A bipartite graph cannot have an odd cycle. The correct idea is there, but as a "printed proof" it leaves much to be desired.
Now comes the other part: A (maybe infinite) graph with no odd cycles can be made bipartite. Here I cannot see a satisfying logical line. In your first sentence you talk about "all cycles", which is fine, but already in your second sentence you talk about "that" cycle, which you haven't defined. Note that you cannot assume that there is a single cycle going through all the vertices or edges. But you may assume (after noting this in a preamble) that the given graph is connected.
The second part requires a construction that partitions the vertex set $V$ into two parts. Here is a hint: Call two vertices $v_1$, $v_2\in V$ equivalent if they can be joined with a path of even length. |
Examine $\int \limits_{0}^{\infty} \frac{\ln x}{1+x^{2}} d x$ for convergence by using the direct comparison test | As a side-effect of your analysis, it's enough to check only one of $x\to0$ and $x\to\infty$ as they contribute equally.
For small $x$, then, what is an obviously good approximation for $\frac{1}{1+x^2}$ which happens to be an upper bound for it? |
How to prove that idempotency is redundant here in lattices? | Let $x\vee x=s$. It follows from 4) $x\wedge s=x$. Hence $x\vee x=x\vee (x\wedge s)=x$ (again from 4)). |
How to solve $x'(t) =\frac {10 - x(t)}{2000}$, such that $x(0) = 0$, without using complex numbers | This equation is separable
$$\displaystyle \frac {dx}{dt} = -\frac {(x - 10)}{2000}, x(0) = 0$$
$$\int \frac {dx}{x-10}=-\frac 1 {2000}\int dt$$
$$\ln|x-10|=-\frac t {2000}+K$$
Since $x<10$ here, this gives
$$ \displaystyle 10-x=Ae^{-\frac t {2000}},$$
$$ \displaystyle x=10+Ce^{-\frac t {2000}}$$
The constant for the initial condition given is
$$x(0)=0 \implies C=-10$$
$$ \implies \displaystyle x(t)=10(1-e^{-\frac t {2000}})$$ |
Is there a fast way to compute the lowest eigenvalue of this symmetric PD matrix in this specific scenario? | There are many ways to do this. One way would be inverse iteration, which is essentially power iteration with $C^{-1}$. However, this requires a solve at each step.
Another possibility is to observe that the matrix $\alpha I - C$ has eigenvalues $\alpha - \lambda_i$ where $\lambda_i$ is an eigenvalue of $C$. Therefore, if we pick $\alpha$ so that $|\alpha-\lambda_\min| > |\alpha - \lambda_i|$ for all $\lambda_i$ except $\lambda_{\min}$, the top eigenvalue of $\alpha I - C$ will correspond to the bottom eigenvalue of $C$. We can then compute the top eigenvalue of $\alpha I - C$, which will give us the smallest eigenvalue of $C$.
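A minimal sketch of that shifted power iteration (numpy; it takes a suitable $\alpha$ as given — one concrete choice is discussed just below — and its convergence speed depends on the spectral gap):

```python
import numpy as np

def smallest_eigenvalue(C, alpha, iters=1000):
    # Power iteration on alpha*I - C: its dominant eigenvalue is
    # alpha - lambda_min(C) whenever alpha > lambda_max(C).
    v = np.random.default_rng(0).standard_normal(C.shape[0])
    for _ in range(iters):
        v = alpha * v - C @ v               # apply alpha*I - C
        v /= np.linalg.norm(v)
    return v @ (C @ v)                      # Rayleigh quotient ~ lambda_min(C)
```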
A simple way to ensure this is to pick $\alpha > \lambda_{\max}$. If you want, you could compute the top eigenvalue of $C$ and use this. Otherwise you could use the fact that, $\lambda_{\max}(C) \leq \lambda_\max(A^HDA) + \lambda_\max(M) \leq 1 + \lambda_\max(M)$ |
$T_1 : V_1 \to V_2$ and $T_2 : V_2 \to V_1$ both onto. Then does it imply that $V_1$ and $V_2$ are isomorphic as vector spaces? | Yes. In the infinite-dimensional case one can still show that any two bases have the same cardinality, and hence define the dimension to be the cardinality of a basis. Your hypothesis implies that $\dim(V_1)\ge\dim(V_2)$ and $\dim(V_2)\ge\dim(V_1)$, hence $\dim(V_1)=\dim(V_2)$, hence $V_1$ and $V_2$ are isomorphic (a bijection between the bases extends to an isomorphism). |
How to prove induction theorem from the axioms of the real numbers? | What you have to realize is that you still need some set theory (I think) to do this. One reason is that the concept of a field is defined in terms of sets (a field $(\mathbb R, +, \cdot)$ is a set $\mathbb R$ together with...). You will also probably need second order logic to state the existence of $\mathbb N$.
Also you have to realize that no matter how you do it, it's a lot of work ahead, basically the same amount as if you start at $\mathbb N$. The difference is that you go the other way around - instead of extending $\mathbb N$ you identify $\mathbb N$, $\mathbb Z$, $\mathbb Q$ as subsets of $\mathbb R$. For this reason I will not go into details, but rather sketch the approach.
The big question is how to identify $\mathbb N$, since that set has a central role in the induction principle. The answer lies in the fact that it basically has to fulfil the axioms of Peano, but we don't have to be that primitive now.
What we do is to consider (associative) submonoids under addition that also contain the multiplicative identity (they contain the additive identity too, by definition). We observe that such submonoids exist, as $\mathbb R$ is one. We construct $\mathbb N$ as the intersection of all these (underlying sets).
Now there are a number of steps required to prove the induction principle. We note that since $\mathbb N\subseteq\mathbb R$ we can use the operations from $\mathbb R$ on $\mathbb N$ (although we haven't proven that $\mathbb N$ is closed under these; in fact it isn't closed under subtraction and division).
Proposition 1
If $x\in\mathbb N$, $x\ne 0$ and $x\ne 1$, then there exist $a,b\in\mathbb N$ distinct from $x$ such that $x=a+b$.
If there weren't, we could form a monoid by excluding $x$.
Proposition 2
If $x\in\mathbb N$ and $x\ne 0$ we have that $x\ge 1$. (Corollary: all elements in $\mathbb N$ are non-negative)
If we have a monoid we still get a monoid if we remove all elements between $0$ and $1$ and the negative elements.
Proposition 3
If $x\notin\mathbb N$ we have that the distance $\inf_{k\in\mathbb N} |x-k|$ to $\mathbb N$ is strictly greater than $0$.
This is shown by splitting into the integers greater than $x$ and those less than $x$. We see that for at most two integers $k$ we have $|x-k|<1$. The infimum must be attained at one of these and can't be zero because $x\notin \mathbb N$.
Proposition 4
Every bounded above (below) non-empty subset of $\mathbb N$ has a maximum (minimum), that is a $\sup$ ($\inf$) that is inside the subset.
This is shown by taking the $\sup$ ($\inf$) and using proposition 3 to show that it's inside the subset.
Proposition 5
If $x\in \mathbb N$ and $x\ne 0$, there exists an $a\in \mathbb N$ such that $x = a+1$.
We use proposition 2 to show that in proposition 1 $a$ and $b$ are between $0$ and $x$. We then use proposition 4 to find a minimal $b$ that fulfils the equation $x=a+b$. We use proposition 1 to show that this $b$ must be $1$.
Now we're ready to show the induction principle:
Proposition 6
Assume that $K\subseteq\mathbb N$ such that $0\in K$ and that for every $x\in K$ we also have that $x+1\in K$. Then $K=\mathbb N$.
Consider $L = \mathbb N\setminus K$. If $L\ne\emptyset$ it will be bounded below by $0$ because it's a subset of $\mathbb N$. So there exists a minimal element $n\in L$. Now if $n=0$ it would contradict that $0\in K$. Otherwise we have from proposition 5 that $n=a+1$ for some $a$, and since $a<n$ we have $a\notin L$, so $a\in K$ and by assumption $n=a+1\in K$, which is also a contradiction. Therefore we conclude that $L=\emptyset$, which means that $K = \mathbb N$.
As for the rest of the axioms of Peano, we see that axioms 1-5 follow directly as they are true for any field. For axioms 6-8 we define $S(n) = n+1$ and they're quite straightforward to prove.
After this one can take roughly the standard approach in building up from Peano's axioms, but things are a bit easier since we don't have to do "funny constructs". We can for example define rational numbers as quotients of integers (since we already have a field).
As for recursively defined functions (on $\mathbb N$), that is an allowed construct. Until you prove that, however, you should not use notation that suggests that it is in fact a function. For example if we take exponentiation we will define it as $x^0 = 1$ and $x^{n+1}=x^nx$, but before the proof we will use the relation $E$ defined as $(x, 0)Ey \leftrightarrow y=1$ and $(x,n+1) E z \leftrightarrow \exists y: (x,n) E y\land z=xy$. We then by induction (over $n$) prove that for every $x\in\mathbb R$ and every $n\in\mathbb N$ we have exactly one $a$ such that $(x,n)Ea$, which makes $E$ a function. The proof is somewhat straightforward, but some attention has to be paid to the case where $x=0$ (in which case $z=0y=0$ regardless of $y$, which therefore exists). After this we introduce the expression $x^n$ via $(x,n) E x^n$.
Then one goes on to prove the standard rules for exponentiation by induction. Then these rules makes it natural to define $x^{-n}$ as $1/x^n$ and it's defined for integers. Now we reprove the standard rules for exponentiation (but this time we don't need induction). The next steps is a bit tricky, but we define $x^{p/q} = a$ if there's a number $a$ such that $x^p = a^q$ - we use completeness to prove that such a number always exists (except for the exceptions). The final part is to define $x^a$ for arbitrary $a$ which is again done via completeness. We have to reprove the rules of exponentiation for each of these two steps too. |
Is every fiber of a morphism between varieties of pure dimension? | No, this is not true. Take the blow-up of the affine space $\mathbb{A}^n$ at the origin (see wikipedia). Fibers over any point except the origin are just single points, so $0$-dimensional, and the fiber over the point $0$ is $\mathbb{P}^{n-1}$. |
Matrices equipped with the rank distance form a metric space | Your answers for the first and second points are correct. The second is not hand-wavy by any means, you have said the crucial thing, namely that $X-Y$ is a scalar multiple of $Y-X$, so their rank is the same.
As for the third one, if $Z - X = a$ and $Y - Z = b$, then you essentially have to prove that $\operatorname{rank}(a+b) \leq \operatorname{rank}(a) + \operatorname{rank}(b)$.
However, note that if some set of vectors spans the columns of $a$, and another set of vectors spans the columns of $b$, then the union of these vectors spans the columns of $a+b$. From here, I urge you to use the definition of rank to understand why this statement follows.
EDIT : Suppose that $\{e_i\}$ span the columns of $a$, and $\{f_j\}$ span the columns of $b$. My claim is that $\{e_i\} \cup \{f_j\}$ span the columns of $a+b$.
It's easy to see why. Suppose $A$ is a column of $a+b$, then we know that $A= A_a + A_b$, where $A_a$ and $A_b$ are the corresponding columns of $a$ and $b$ which were added to give the column $A$ of $a+b$.
Now, $A_a$ is spanned by the $\{e_i\}$, so there exist constants $c_i$ such that $A_a = \sum_{i} c_ie_i$. Similarly, since $A_b$ is spanned by $\{f_j\}$, we get that $A_b = \sum_j d_jf_j$ for some constants $d_j$.
Adding these two gives $A = \sum_i c_ie_i + \sum_j {d_jf_j}$. Hence, you can see that $A$ is spanned by $\{e_i\} \cup \{f_j\}$. Since this holds for every column, the rank of $a+b$ is at most the size of $\{e_i\} \cup \{f_j\}$, which is at most $\operatorname{rank} a + \operatorname{rank} b$. Hence, the inequality follows. |
dimension of the intersection of subspaces | The $t$ just means transpose, so you are working with column vectors (you are probably supposed to have the $t$ inside your definitions for $U$ and $V$ as well).
Also notice that $U$ and $V$ are each either n- or (n-1)-dimensional. This is because $U$ is the orthogonal complement of $\langle (a_{1},\ldots, a_{n})^{t} \rangle$, or equivalently, the null space of the matrix $\begin{bmatrix} a_{1} & \ldots & a_{n} \end{bmatrix}$, which will have either 0 or 1 pivot positions.
Notice that if $(b_{1},\ldots, b_{n})^{t} \in \langle (a_{1},\ldots, a_{n})^{t} \rangle$, then $U \subset V$. Try to show this, it will help.
(You can also notice that $U \cap V$ is the orthogonal complement of $W$). |
find the minimum value of $\frac{a}{b}+\frac{b}{c}+\frac{c}{a}$ | This is easy using the AM-GM inequality. We get
$$
\frac{\frac ab + \frac bc + \frac ca}{3}\geq \sqrt[3]{\frac ab\cdot \frac bc \cdot \frac ca} = 1
$$
so $\frac ab+\frac bc+\frac ca\ge 3$: the minimum value is $3$, attained when $a=b=c$. |
The limiting distribution of the central limit theorem | Yes, the limit of $A_n$ as you define it will be $\delta$-distributed and it can only have one value, its expectation, $\mu$. The right way to do it is to define $$B_n =\frac{1}{\sqrt{n}}(X_1+\dots+X_n)-\sqrt n\,\mu$$ and then it will be distributed as $\mathcal N(0, \sigma)$.
Alternatively, if you like to be able to think in terms of $A_n$, you can interpret the statement as: for large $n$, $A_n$ is approximately distributed as $\mathcal N(\mu, \sigma/\sqrt n)$. Its distribution gets narrower as $n$ gets larger.
The moral of the story is that when you add up random variables, the mean grows linearly with $n$, but the fluctuations grow with $\sqrt n$. So if you divide by $n$, the fluctuations disappear, and the mean approaches the expected value. |
What's the difficulty in finding instantaneous velocity? | He means that whereas it's easy to define the average velocity over a period (as displacement divided by time), it's much harder to define instantaneous velocity. So instead of thinking initially about an instantaneous velocity, he considers the average velocity over a very short period of time. |
Finding a matrix $Q \in \mathbb{R}^{d\times r}$ such that $Q^\top Q=I_r$ and $(QQ^\top)_{ii}=h_{ii}$ | (For convenience, I write $h_i$ instead of $h_{ii}$.)
You may start with $Q=\pmatrix{I_r\\ 0}$. The idea is to fix the diagonal entries of $QQ^T$ one by one, by applying Givens rotations to $Q$ recursively. More specifically, suppose at some stage, we have $Q^TQ=I_r$ and
$$
QQ^T=\left[\begin{array}{c|c}P&\ast
\\ \hline\ast&\begin{matrix}q_{k+1}\\ &\ddots\\ &&q_d\end{matrix}
\end{array}\right],\tag{$\dagger$}
$$
where the diagonal entries of $P$ are members of $\{h_1,\ldots,h_d\}$. By relabelling the $h_i$s if necessary, we may assume that it is $(h_1,\ldots,h_k)$. We also suppose that the bottom right subblock of $QQ^T$ in $(\dagger)$ is a diagonal matrix $\operatorname{diag}(q_{k+1},\ldots,q_d)$ such that
$\sum_{i=k+1}^dh_i=\sum_{i=k+1}^dq_i$,
$q_{k+1}\ge\cdots\ge q_s>h_{k+1}\ge\cdots\ge h_d>q_{s+1}\ge\cdots\ge q_d$ for some $s$.
Now, note that
\begin{align}
&\pmatrix{\cos t&-\sin t\\ \sin t&\cos t}
\pmatrix{q_s\\ &q_{s+1}}
\pmatrix{\cos t&\sin t\\ -\sin t&\cos t}\\
=&\pmatrix{q_s\cos^2 t+q_{s+1}\sin^2 t&\ast\\ \ast&q_s\sin^2 t+q_{s+1}\cos^2 t},
\end{align}
Therefore, by applying an appropriate Givens rotation $R$ to the $s$-th and $(s+1)$-th rows of $Q$, we may turn one of the $s$-th or $(s+1)$-th diagonal entries of $(RQ)(RQ)^T$ into any desired convex combination of $q_s$ and $q_{s+1}$. In particular,
if $q_s-h_{k+1}<h_d-q_{s+1}$, let us make the $s$-th diagonal entry of $(RQ)(RQ)^T$ becomes $h_{k+1}$;
if $q_s-h_{k+1}\ge h_d-q_{s+1}$ instead, let us make the $(s+1)$-th diagonal entry of $(RQ)(RQ)^T$ equal to $h_d$.
Note that the entries in $P$ are unaffected and we still have $(RQ)^T(RQ)=Q^TQ=I_r$. Perform a further permutation to move the newly set diagonal entry to position $(k+1,k+1)$. If the other diagonal entry involved in the rotational transform also equals to some $h_i$, perform one more permutation to move that diagonal entry to the $(k+2,k+2)$-th position. The resulting matrix is still of the form $(\dagger)$ (but $k$ is now incremented by $1$ or $2$), with the bottom right subblock remains diagonal. More importantly, since the trace is preserved and due to the way we set the new $(k+1)$-th diagonal entry, conditions 1 and 2 in the above are also satisfied in the resulting matrix.
So, we have reduced the dimension of the problem by $1$ or $2$. Proceed recursively, we can construct a matrix $Q$ with orthonormal columns so that the diagonal of $QQ^T$ is a permutation of $(h_1,\ldots,h_d)$. Now, apply a final permutation to the rows of $Q$ so that the diagonal of $QQ^T$ is exactly $(h_1,\ldots,h_d)$. |
Calculating Separable Closures | Here are a few details, completing Zev's completely correct, succinct comment (which I upvoted, of course).
An important first remark is that the polynomial $f(X)=X^4+tX^2+t\in k[X]$ is irreducible over $k=\mathbb F_2(t)=\operatorname{Frac}(\mathbb F_2[t])$: this is Eisenstein's criterion in all its splendour, applied with the prime $t\in \mathbb F_2[t]$. Hence $[K:k]=4$.
The polynomial $g(X)=X^2+tX+t\in k[X]$ is then also irreducible, so that $\alpha^2$ is of degree $2$ over $k$, since it is killed by $g$.
Moreover, since $g$ is not a polynomial in $X^2$ and is irreducible (an important, often forgotten condition!), it is separable over $k$ and so is $\alpha ^2$, one of its roots.
So we have the tower of quadratic extensions $k\subset k(\alpha ^2)\subset K$ with $k\subset k(\alpha ^2)$ separable and $k(\alpha ^2)\subset K=k(\alpha )$ obviously purely inseparable.
Only one intermediate field of $k\subset K$ can perform both these feats: $K_{sep}$.
Hence $K_{sep}=k(\alpha^2)$. |
Pooling Log Inequality | Recall the weighted AM-GM-HM inequality: for any finite list of non-negative reals $x_i$ and $w_i,$ with $\sum w_i >0,$ $$ \frac{ \sum w_i x_i}{\sum w_i } \ge \left( \prod {x_i}^{w_i}\right)^{1/\sum w_i} \ge \frac{\sum w_i}{ \sum \frac{w_i}{x_i}}.$$
We'll apply this with $w_1 = x_1 = a, w_2 = x_2 = b$. This gives $$ \frac{a^2 + b^2}{a+b} \ge \left( a^a b^b \right)^{1/(a+b)} \ge \frac{a+b}{ a/a + b/b} = \frac{a+b}{2}.$$
Since $-\log$ is a decreasing function, this gives the inequality (after raising by $a+b$, taking $-\log$, and rearranging, and using the convention $0\log 0 = 0$,)
$$ -(a+b)\log \frac{a^2 + b^2}{a+b} \le - a\log a - b\log b \le -(a+b) \log \frac{a+b}{2}.$$
Finally, note that $(a+b)^2 \ge a^2+b^2$ for $a, b \ge 0,$ which means that $-\log(a+b) \le -\log \frac{a^2 + b^2}{a+b}$, finishing the argument.
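The whole chain is easy to spot-check numerically (a sketch with random inputs):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = rng.uniform(1e-3, 10.0, size=(2, 100_000))
left  = -(a + b) * np.log((a**2 + b**2) / (a + b))
mid   = -a * np.log(a) - b * np.log(b)
right = -(a + b) * np.log((a + b) / 2)
print(np.all(-(a + b) * np.log(a + b) <= left))   # True
print(np.all((left <= mid) & (mid <= right)))     # True
```

Both lines print `True`. |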
Is this a counter example to Stromquist's Theorem? | What about the square on the bottom, with it and the triangle's mutual side returned?
Edit: Btw. Don't be disheartened that your counterexample didn't come through- looking for counterexamples to proved theorems will never give a genuine counterexample (unless ZFC turns out to be inconsistent!!;)), but it can provide valuable insight into what the theorem means.
Here you can see from experience that 'most of the square' may be contained within the curve- this is something that may not be obvious looking at the examples of Stromquist in action in books on the topic.
Trying to find counterexamples, IMHO, is a great way to develop as a mathematician- just do not be afraid to be wrong ;) |
Given two different ranges, probability of a number in one range being greater than the other | You can count all the possible pairs and the ones that answer the actual question, then divide them (good/all) to get the probability.
Now we are dealing with ordered pairs of numbers, call Range 1 $X$ and Range 2 $Y$. We want $P(X>Y)$. Now the good pairs are
$$(16,15),\ (17,15),\ .., (20,15);\ (17,16),\ .. (20,16); ...; (20,19).$$
We can count these for example according to the possible $Y$'s: the minimal possible $Y$ such that $X>Y$ is $Y=15$, then $X$ can be $16..20$, that is $5$ possibilities. If $Y=16$, we have $4$ possibilities, and so on. So it is $5+ 4+ 3+ 2+ 1=15$ possibilities altogether.
And the number of all possible pairs is $11\cdot 10=110$.
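The count is quick to verify by brute force (a sketch; the concrete endpoints are my assumption, chosen only to be consistent with the $11\cdot 10$ pairs and the list above):

```python
range1 = range(10, 21)   # assumed Range 1 (X): 11 integers
range2 = range(15, 25)   # assumed Range 2 (Y): 10 integers
good = sum(1 for x in range1 for y in range2 if x > y)
total = len(range1) * len(range2)
print(good, total, good / total)   # 15 110 0.1363...
```

So $P(X>Y)=\frac{15}{110}=\frac{3}{22}$. |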
Question on the proof that totally bounded and complete implies sequentially compact | If $D_n$ weren't finite you couldn't conclude that infinitely many elements of $\sigma$ are contained in one element of $D_n$ (there could be one in each, for instance)...
Secondly $\sigma $ is a sequence, hence has infinitely many terms, some or all of which may be equal (such as when $X $ is finite)... |
If $F$ has characteristic $P>0$ prove that $A=\begin{pmatrix}1&\alpha\\0&1\end{pmatrix}$ satisfies $A^p=I$. | You can check that
$$
A^2=\left(\begin{matrix}
1&2\alpha\\ 0&1
\end{matrix}\right).
$$
By induction
$$
A^p=\left(\begin{matrix}
1&p\alpha\\ 0&1
\end{matrix}\right),
$$
but $p\alpha=0$ for $F$ has characteristic $p$, so $A^p=I$. |
Inner products and norms | No. No matter which bilinear form $(x,y)\mapsto x^T A y$ you choose, a norm not satisfying the parallelogram identity can never be induced as $\|x\|=\sqrt{x^T A x}$. |
Prove that if $f$ is convex and upper bounded, it must be constant. | Assume $f(x)<M$ for all $x$. Consider two points $x_1,x_2$. If $f(x_1)\ne f(x_2)$, then the line $\ell$ through $(x_1,f(x_1))$ and $(x_2,f(x_2))$ intersects the line $y=M$. Since $f(x)$ must lie on or above $\ell$ for all $x\notin[x_1,x_2]$, this is not possible. |
Let $(R, +, \cdot)$ be a finite ring without zero divisors, show that $R$ has a neutral element for $\cdot$. | So at this point, you have $a^{i-j}a = a$. We want to show that $a^{i-j}$ is in fact the identity element for $\cdot$. Now, this amounts to saying that for any $b \in R$, $a^{i-j}\cdot b = b$. So we have the following:
Set $c = a^{i-j} \cdot b$. Multiplying on the left by $a$,
$$a \cdot c = a \cdot a^{i-j} \cdot b = (a^{i-j}a) \cdot b = a \cdot b.$$
But since we have no zero divisors, $ab = ac$ implies that $b = c$, that is, $a^{i-j} \cdot b = b$, as desired.
Whittaker model for $\mathrm{GL}(2, \mathbb{R})$ | For any local field $k$, an intertwining operator from the induced model for a principal series to the Whittaker model (such as referred to by Paul Garrett in his comment above) is explicitly: $$\phi \mapsto W_\phi,$$ where $$W_\phi(g) = \int_{k} \phi \left( ( \begin{smallmatrix} & -1 \\ 1 & \end{smallmatrix}) ( \begin{smallmatrix} 1 & x\\ & 1 \end{smallmatrix}) g \right) \psi(x)\,dx,$$
where $\psi$ is the chosen additive character for your Whittaker model.
In general, this only converges conditionally, but if you interpret the integral as the limit over an increasing sequence of compact subsets of $k$ defined by $|x|\leq R$, $R \to \infty$, then it does give an intertwiner. |
Does $(\mathbf A+\epsilon \mathbf I)^{-1}$ always exist? Why? | Recall that $\det(A-t I)=p_A(t)$ is the characteristic polynomial of $A$, which is a degree $N$ polynomial in $t$, where $N$ is the number of rows of $A$. Hence, it is a continuous function in $t$, and has at most $N$ roots, so there is some largest negative root $t_0$, or possibly no negative roots at all (in that case, choose $t_0=-1$ just for completeness). For all $0 < \epsilon < - t_0$, $p_A(-\epsilon)$ is nonzero, meaning that $\det(A+ \epsilon I) \ne 0$, so $A+\epsilon I$ is invertible.
EDIT: Corrected signs to agree with the question |
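A small numerical illustration (my own example matrix, with eigenvalues $\pm 1$, so $t_0 = -1$ and the claim predicts invertibility exactly for $0<\epsilon<1$):

```python
import numpy as np

A = np.array([[0.0, 1.0], [1.0, 0.0]])   # eigenvalues -1 and 1

for eps in (1e-6, 0.1, 0.5, 0.999, 1.0):
    d = np.linalg.det(A + eps * np.eye(2))
    print(eps, d)   # det = eps^2 - 1: nonzero on (0, 1), zero at eps = 1 = -t0
```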
Generalized Gronwall inequality | Use the hint from Martin R plus the generalized Gronwall inequality from Lemma 2.7 in G. Teschl, Ordinary Differential Equations and Dynamical Systems. |
Is the union of all $ \left[ \dfrac {1} {n},+\infty\right) $ equal to the union of all $ \left(\dfrac {1} {n},+\infty\right) $? | Note that we have $B_{n+1}\supset A_n$, which means that $\bigcup_{n=1}^\infty B_{n+1}$ clearly contains $\bigcup_{n=1}^\infty A_n$, by the same argument. But $B_{n+1}\supset B_n$ implies that $\bigcup_{n=1}^\infty B_{n}=\bigcup_{n=1}^\infty B_{n+1}$ (adding $B_1$ to the union changes nothing, since anything that's in $B_1$ is also in $B_2$).
This means that the two unions are equal.
One can also argue from the definition of union: What does $x\in \bigcup_{n=1}^\infty A_{n}$ mean? It means that there exists some $k$ such that $x\in A_k$. But clearly, this also means that $x\in B_{k+1}$, which by definition of union means that $x\in \bigcup_{n=1}^\infty B_{n}$. Therefore $\bigcup_{n=1}^\infty A_{n}\subseteq \bigcup_{n=1}^\infty B_{n}$. Inclusion the other way is proven similarly (except you can get away with using the same index $k$ on $A_k$ and $B_k$). |
Vector Calculus - Minimizing integral | Generally speaking you can try the following:
1. Solve the integral.
2. Compute the first derivative of the solution.
3. Solve for the zeros of the first derivative of the solution.
4. Compute the second derivative at each zero.
5. Any zero at which the second derivative is positive represents a local minimum.
Use the Inclusion-exclusion principle in order to count the number of positive integers $\le 1000$ that can't be divided by $7, 11$ and $13$. | If it's "and":
$$
1000-\left\lfloor\frac{1000}{1001}\right\rfloor=1000
$$
If it's "or":
$$
\scriptsize1000-\left\lfloor\frac{1000}7\right\rfloor-\left\lfloor\frac{1000}{11}\right\rfloor-\left\lfloor\frac{1000}{13}\right\rfloor+\left\lfloor\frac{1000}{77}\right\rfloor+\left\lfloor\frac{1000}{91}\right\rfloor+\left\lfloor\frac{1000}{143}\right\rfloor-\left\lfloor\frac{1000}{1001}\right\rfloor=720
$$ |
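Both readings are easy to confirm by brute force; a minimal sketch:

```python
# Count integers in [1, 1000] under each reading of the question.
nums = range(1, 1001)

# "and": exclude only numbers divisible by all of 7, 11, 13 (i.e. by 1001)
not_all_three = sum(1 for n in nums
                    if not (n % 7 == 0 and n % 11 == 0 and n % 13 == 0))

# "or": exclude numbers divisible by any of 7, 11, 13
none_of_them = sum(1 for n in nums if n % 7 and n % 11 and n % 13)

print(not_all_three, none_of_them)   # 1000 720
```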
Show that $(U_{n})_{n \leq N}$ is the smallest $\mathcal{F}_{n}$-super-martingale that dominates $X_{n}$ | $(U_n)$ dominates $X_n$ means $X_n \leq U_n$. Smallest means if $(V_n)$ is another super-martingale such that $X_n \leq V_n$ then $U_n \leq V_n$. |
What IS the successor function without saying $S(n) = n + 1$? | Yep, that works as a successor function! You do need to make sure the number concept is the minimal concept, otherwise you might have a set containing more than just the nodes of the linked list. Otherwise, the way you're doing it is actually pretty swell, and you're right to find the "naive successor function definition" problematic and cyclic.
A successor function $S:\mathbb{N}\to \mathbb{N}$ is any function with the following properties:
For all $x\in \mathbb{N}, S(x) \neq x$
$S$ is one-to-one.
There is some element $e\in\mathbb{N}$ such that, for all $x\in \mathbb{N}, S(x) \neq e$
Also, $\mathbb{N}$ is the minimal set on which you can define such an $S$.
Any such function on any such set can serve as a successor function. Note that the only things needed for that definition are logic, sets, and functions; I never use addition or zero or any semblance of numbers. Also, the successor function needn't be unique; multiple different implementations can fit this "successor interface." You can check out a video by PBS on defining the successor function here.
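In the spirit of that "interface" language, here is a minimal Peano-style sketch in Python (the class and names are my own illustration, not a standard library):

```python
# Naturals as a linked structure: no arithmetic, just S and a base element.
class Nat:
    def __init__(self, pred=None):
        self.pred = pred            # None marks the element e that is never S(x)

def S(x):
    return Nat(x)                   # structurally, S(x) != x and S is injective

zero = Nat()
three = S(S(S(zero)))

def count_S(x):                     # for display only: how many times S was applied
    return 0 if x.pred is None else 1 + count_S(x.pred)

print(count_S(three))               # 3
```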
For what values $a, b$ parabola $y = ax^2$ will be tangent with the line $y=2x+b$? | Hints:
1. The two curves intersect, so we have to solve $ax^2=2x+b$.
2. The derivative of $y=ax^2$ (which is $2ax$) equals $2$, the slope of the line, at a solution of the quadratic equation above.
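An equivalent route, for readers with a CAS handy, is the discriminant condition (tangency means the quadratic has a double root); a sketch with sympy:

```python
import sympy as sp

a, b, x = sp.symbols('a b x')

# Tangency: a*x**2 = 2*x + b has a double root, i.e. zero discriminant.
disc = sp.discriminant(a*x**2 - 2*x - b, x)
print(disc)                          # 4*a*b + 4
print(sp.solve(sp.Eq(disc, 0), b))   # [-1/a]: tangent exactly when a*b = -1
```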
Can you construct a rectangle with a given side, equal to a square? | Let $PC$ be the side length of the given square and $PA$, with $\angle CPA=90^\circ$, the given side length of the desired rectangle. Let $O$ be the intersection of the bisector of $AC$ with $\overleftrightarrow{AP}$. The circle around $O$ through $A$ (and $C$) intersects $AP$ in a second point $B$. Then $PB$ is the other side length of the desired rectangle. |
what is the direct limit of free group of rank $n$ | Yes, it is if you have the right inclusions.
I'm assuming you're taking $F(\{x_1,...,x_n\}) \to F(\{x_1,...,x_{n+1}\}), x_i\mapsto x_i$ for instance.
Then if you have a reduced word $w$ on $\{x_n, n<\omega\}$ it's a finite word so it belongs to some $F(\{x_1,...,x_n\})$ and is reduced there too; so it belongs to the direct limit.
Now if it were equal to $1$ in the direct limit it would mean that there is some $m\geq n$ such that $w=1$ in $F(\{x_1,...,x_m\})$. But as the word is reduced this amounts to $w=\epsilon$.
Hence the direct limit is simply the set of reduced words on $\{x_n, n<\omega\}$, where none of them is $1$ except for the empty word; that is: the direct limit is the free group on countably many generators.
A more general way to see it is the following (for the readers with a bit of knowledge in category theory): the direct limit is actually a colimit, and since the functor sending a set to the free group is a left adjoint, it commutes with the colimit. Hence $\varinjlim_n F(\{x_1,...,x_n\}) = F(\varinjlim_n \{x_1,...,x_n\})$. But the inclusions have been well chosen, so $\varinjlim_n \{x_1,...,x_n\} = \{x_n, n<\omega\}$ and this is exactly what we wanted.
Extending a discrete valuation | I do hope that this wasn’t in a published text. For, one must never never never put the quantifiers at the end of the statement.
The first statement should have been written, “$\forall\alpha,\beta\in K, v(\alpha+\beta)\ge\min\bigl(v(\alpha),v(\beta)\bigr) $”, and the intention of the author or instructor was probably, “$\forall\alpha\in K$ with $v(\alpha)\ge0$, we have $v(1+\alpha)\ge0$” .
Perhaps you will see now that the way to go from the second formulation to the first is to start with two elements $\gamma,\delta\in K$, say $v(\gamma)\ge v(\delta)$, and apply the second formulation to $\gamma/\delta$. |
Find the probability of $P(\overline{Z}<0.5)$. | Hint: $\overline Z$ is normal with mean $0$ and variance $\frac 1 {16}$. |
If $A$ and $B$ are $n×n$ matrices such that $AB=B$ and $BA=A$ then find the value of $A^{4} + B^{4} - A^{2} -B^ {2} + I$ | Hint: $$A^2=A(BA)=(AB)A=BA=A$$ |
Prove the relations $\mathcal{R}_{1}$ and $\mathcal{R}_{2}$ are functions and the inverse of each other | Your work shows that the domain of both relations is full, and if they are functional, then $R_2\circ R_1(x)=x$ and $R_1\circ R_2(y)=y$ for all $x\in X $ and $y\in Y$, so they are inverses to each other.
Thus, only functionality is left to prove.
Let $xR_1y$ and $xR_1z$. Then $\exists x': yR_2x'R_1y$, but as $x(R_2\circ R_1)x'$, we must have $x=x'$ by hypothesis.
Now this implies $yR_2xR_1z$, so $y(R_1\circ R_2)z$ which yields $y=z$ again by the hypothesis. |
Measure theory and the extrema four inequality | Your definitions of the limit superior and limit inferior seem to be wrong; they differ from the standard ones in measure theory textbooks, which are
\begin{equation}
\varliminf_{n\to\infty}E_n=\bigcup_{n=1}^{\infty}\bigcap_{k\geq n}E_k,\qquad\varlimsup_{n\to\infty}E_n=\bigcap_{n=1}^{\infty}\bigcup_{k\geq n}E_k.
\end{equation}
Limit of sequence in which each term is defined by the average of preceding two terms | $$2x_n = x_{n-1} + x_{n-2}$$
$$2x_2 = x_{1} + x_{0}\\
2x_3 = x_{2} + x_{1}\\
2x_4 = x_{3} + x_{2}\\
2x_5 = x_{4} + x_{3}\\
...\\
2x_n = x_{n-1} + x_{n-2}$$
Now sum all of these equations; the repeated middle terms cancel, leaving
$$2x_n+x_{n-1}=2x_1+x_0$$
Supposing that $x_n$ has a limit $L$, letting $n\to \infty$ we get:
$$2L+L=2x_1+x_0\to L=\frac{2x_1+x_0}{3}$$ |
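A quick numeric check of the limit formula (with starting values $x_0=1$, $x_1=4$, chosen arbitrarily for illustration):

```python
# Iterate x_n = (x_{n-1} + x_{n-2}) / 2 and compare with (2*x1 + x0) / 3.
x0, x1 = 1.0, 4.0
a, b = x0, x1
for _ in range(100):
    a, b = b, (a + b) / 2

print(b, (2 * x1 + x0) / 3)   # both 3.0
```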
Does every logic have sequent calculus, if not - what are alternatives to them? What prohibits to make general sequent calculus for universal logic? | It depends what you mean by "logic". Is second-order logic a logic? If so, then the answer to your question is no: second-order logic has no associated sequent calculus, since it is not compact (so in particular there is no way to represent entailment in second-order logic in a finitary way).
Note that there are other non-compact logics - say, infinitary logic $\mathcal{L}_{\omega_1\omega}$. However, second-order logic is a more compelling counterexample: unlike $\mathcal{L}_{\omega_1\omega}$, which does have a kind of proof system associated to it (via an infinitary sequent calculus, developed by Lopez-Escobar and later Barwise, if I recall correctly) which is reasonably set-theoretically absolute, second-order logic is just completely terrible.
Specifically, the question of validity in second-order logic - what second-order sentences are true in all models - is a fundamentally set-theoretic one. For example, there is a sentence $\varphi$ in second-order logic which is valid iff the Continuum Hypothesis holds, and similarly for many other set-theoretic statements (this general phenomenon is reflected in the ludicrous size of the Hanf number of second-order logic).
Maybe this suggests that you should restrict attention to compact logics. If so, though, you'll be hard pressed to find examples other than first-order logic (and its sublogics) without invoking some set-theoretic ideas. This is because of Lindstrom's theorem, which states that there is no logic strictly stronger than first-order logic which is compact and has the Lowenheim-Skolem property. So, if you want a logic stronger than first-order logic, it had better do something complicated with respect to uncountable structures, specifically. An example is first-order logic with a quantifier for "there are uncountably many," which was shown to be (countably) compact by Keisler if I recall correctly; there are other more technical examples known. |
Question about meaning and definition of zero map | The zero map $X\to K$ is the function that takes everything to $0$, i.e. $f(x)=0$ for all $x\in X$. If $f$ is not this function, i.e. if there exists any $x\in X$ so that $f(x)\neq0$, then it is not the zero map. Note that $f(0)=f(0+0)=f(0)+f(0)$ and hence $f(0)=0$ whenever $f$ is linear, so it is not possible for a linear map to satisfy $f(x)\neq0$ for all $x$. |
Reducible Quadratic form | The paper in question concerns binary quadratic forms $Q(x,y)=Ax^2+Bxy+Cy^2$ with coefficients in the rational numbers. Such a form
is reducible if and only if there are linear forms $L_1(x,y)=Ux+Vy$ and $L_2(x,y)=U'x+V'y$ with rational coefficients
such that $Q=L_1L_2$. Irreducibility (over the rational numbers) is equivalent to the assumption that the discriminant $\Delta=B^2-4AC$ of the form $Q(x,y)$ is not a square in the rational numbers. |
Borsuk-Ulam theorem for $n = 1$ | Yes, it is. Now note that $g(-x)=-g(x)$ and use the connectedness of $S^1$.
An inequality involving $a_i, b_i$ such that $\sum_{i=1}^n a_i = \sum_{i=1}^n b_i = 1$ | Let me write $a_j=c_j^2$, $b_j=d_j^2$. Then
$$
\sum |c_j^2-d_j^2|=\sum (c_j+d_j)|c_j-d_j| \le \| c+d\|_2\|c-d\|_2\le 2\|c-d\|_2 ,
$$
by CS and since $c,d$ are unit vectors in $\ell^2$. So the square of the sum on the RHS of your inequality is estimated by $4\|c-d\|_2^2=8(1-\langle c,d\rangle)$, as desired. |
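For the skeptical reader, a quick random test of the resulting bound $\left(\sum_j |a_j-b_j|\right)^2 \le 8\left(1-\sum_j \sqrt{a_jb_j}\right)$ (a sketch, assuming the $a_j,b_j$ are probability vectors as in the question):

```python
import random
from math import sqrt

n = 5
for _ in range(1000):
    a = [random.random() for _ in range(n)]; s = sum(a); a = [t / s for t in a]
    b = [random.random() for _ in range(n)]; s = sum(b); b = [t / s for t in b]
    lhs = sum(abs(x - y) for x, y in zip(a, b)) ** 2
    rhs = 8 * (1 - sum(sqrt(x * y) for x, y in zip(a, b)))
    assert lhs <= rhs + 1e-9
print("bound holds on all samples")
```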
log power rule - how to manipulate polynomials | $3 = 2^{log_2^3}$
Let $\log_2 3= \lambda$
Thus, the equation becomes:
$$3=2^{log_2^3}=2^\lambda$$
But, from equation 2, we can see that $\log_2 3=\lambda$. Or, $2^\lambda= 3$
Hope this answers your question. |
Stewart theorem validity on a sphere | We have $\sin a \cos d = \sin m \cos b + \sin n \cos c$ for a spherical triangle. And by third-order approximation, we can recover the original Stewart theorem $a^3 + 3ad^2 = m^3 + 3mb^2 + n^3 + 3nc^2$. |
Proof verification of $a\sup S=\sup(aS)$ | Your proof is essentially correct, but I would write it a bit differently.
Let $u=\sup S$. Then, for all $s\in S$, $s\le u$, which implies $as\le au$. Therefore $au\ge\sup aS$, because it is an upper bound for $aS$.
Let now $\varepsilon>0$. We want to find $t\in aS$ such that $t>au-\varepsilon$. Since $u=\sup S$, there exists $s\in S$ with $s>u-\frac{\varepsilon}{a}$; then
$$
as>au-\varepsilon
$$
so $t=as\in aS$ is an element with the required property. Hence no number smaller than $au$ is an upper bound for $aS$, and therefore $au=\sup(aS)$.
Finding minimum value of observation for a given power in hypothesis testing. | As you found, the UMP test is given by the Neyman-Pearson Lemma, with rejection region determined by
$$\mathbb {P}[\overline{X}_n>k|\mu=\mu_0]=\alpha$$
Now $\overline{X}_n>k$ is your decision rule ($k$ now is fixed) and you can calculate the power (usually denoted $\gamma$, because $\beta$ is normally used for the type II error):
$$\mathbb {P}[\overline{X}_n>k|\mu=\mu_1]=\gamma$$
Once this is understood, finally fix $\gamma$ and solve for $n$.
Example
$\mu_0=5$
$\mu_1=6$
$\alpha=5\%$
$n=4$ (and $\sigma=1$, so that $\overline{X}_4$ has standard deviation $1/2$)
The critical region is
$$2(\overline {X}_4-5)=z_{0.95}\approx 1.6449\rightarrow \overline {X}_4=5.8224$$
Thus your decision rule is
$$\overline {X}_4\geq 5.8224$$
and you can calculate the power
$$\gamma=\mathbb {P}[\overline {X}_4\geq 5.8224|\mu=6]=1-\Phi(-0.36)\approx 64\%$$
Now suppose you want a fixed power $\gamma \geq 90\%$: simply re-solve the same inequality in $n$,
$$\mathbb {P}[\overline {X}_n\geq 5.8224|\mu=6]\geq 0.90$$
Getting
$$(5.8224-6)\sqrt{n}\leq-1.2816$$
That is
$$n\geq\Bigg \lceil \Bigg(\frac{1.2816}{0.1776}\Bigg)^2\Bigg\rceil=53$$ |
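The whole computation is easy to reproduce with scipy (a sketch, assuming a normal population with known $\sigma=1$ as in the example):

```python
from math import ceil, sqrt
from scipy.stats import norm

mu0, mu1, alpha, n = 5.0, 6.0, 0.05, 4

k = mu0 + norm.ppf(1 - alpha) / sqrt(n)      # critical value: 5.8224
power = 1 - norm.cdf((k - mu1) * sqrt(n))    # about 0.64

# Keep the same decision rule and grow n until the power reaches 0.90:
n_needed = ceil((norm.ppf(0.90) / (mu1 - k)) ** 2)

print(k, power, n_needed)                    # 5.8224  0.639  53
```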
$L^1$ function satisfying extra condition is $L^p$ for $p\in[1,2)$. | Let $S_c$ be the set $S_c= \{x | f(x) > c \}$. Note that $\int_{S_c} f \geq cm(S_c)$. We then have, by the hypothesis, that $\sqrt{m(S_c)} \geq cm(S_c)$, which implies that $m(S_c) \leq \frac{1}{c^2}$. Similarly, note that $\{ x | (f(x))^p > c\} = S_{c^{1/p}}$, hence $m(\{ x | (f(x))^p > c\}) \leq c^{-2/p}$.
Finally, note that $\int_{S_1} f^p \le m(S_1) + \int_1^\infty m(f^p > c)\, dc \le 1 + \int_1^\infty c^{-2/p}\, dc$ (using $m(S_1)\le 1$ from the first paragraph), which is finite when $p<2$; for $p=2$ the bound diverges, and the function $(4x)^{-0.5}$ shows the claim indeed fails there.
If $\ell_{A_\mathfrak{p}}(N)<\infty$, then is it true that $\operatorname{Hom}_A(N,E(A/\mathfrak{q}))=0$? | Suppose $\phi\in\operatorname{Hom}_A(N,E(A/\mathfrak{q}))$, and $\phi(n)=e\neq0$. We will derive a contradiction.
Since $N$ is of finite length over $A_{\mathfrak{p}}$, $\mathfrak{p}^kN=0$ for some $k$. Since $\mathfrak{p}\not\subset\mathfrak{q}$, $\exists a\in\mathfrak{p}^k\backslash\mathfrak{q}$, so $a$ acts as a unit on $E(A/\mathfrak{q})$. Then
$$0=\phi(an)=ae\neq0,$$
a contradiction. |
Find the length of the boundary from parametric equation | The parameter domain is the square $[0,1] \times [0,1]$ in the $uv$-plane. The key thing is that the boundary of the parameter domain is mapped to the boundary of the surface, so you can split that boundary of the parametric surface into four (parametric) curves corresponding to the sides of the square in the parameter domain:
for $v=0$, let $u:0 \to 1$;
for $v=1$, let $u:0 \to 1$;
for $u=0$, let $v:0 \to 1$;
for $u=1$, let $v:0 \to 1$.
For example, in the first case with $v=0$, the parametric curve corresponding to one of the four 'sides' is given by:
$$\begin{pmatrix} x(u) \\ y(u) \\ z(u) \end{pmatrix} =
\begin{pmatrix} e^u+e^{-u} \\ 2u \\ 0 \end{pmatrix} \quad,\quad 0 \le u \le 1$$
And the length of the curve is given by the formula you had in mind:
$$\int_{0}^{1} \sqrt{x'(u)^2+y'(u)^2+z'(u)^2} \,\mbox{d}u = \ldots$$
You can do this for each of the four sides.
To get an idea, you can use WolframAlpha for a plot of the surface and (one of) the curve(s):
Note that it automatically gives you the arc length as well - so you can check. |
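As a cross-check, here is a quick numeric computation of the first side's length (the other three are analogous); the closed form $e-e^{-1}$ follows since the integrand simplifies to $2\cosh u$:

```python
from math import sinh, sqrt, e
from scipy.integrate import quad

# Side v = 0: (x, y, z) = (e^u + e^{-u}, 2u, 0), so x' = 2*sinh(u), y' = 2.
speed = lambda u: sqrt((2 * sinh(u))**2 + 2**2)

length, _ = quad(speed, 0, 1)
print(length, e - 1 / e)   # both 2.3504...
```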
Is it possible to find the product of two numbers given their difference? | Think about this: 1 and 2 differ by 1. Their product is 2. But also, 1,000,000 and 1,000,001 differ by 1. Is their product also 2? |
How can one integrate with respect to a function not in the integrand? | You can think of it like this: fix $t$, then calculate $y = \gamma(t)$ and $x = |\gamma|(t)$. Since $x$ is monotonic you can find its inverse, so if you are given $x$ you can calculate $t$ and then $y$. In other words, given $x$ you can always find $y(x)$. The integral is then
$$
\int dx\;f(y(x))
$$
which I guess will make more sense to you |
Sharp inequality for real numbers | By induction. I omit the base case ($n = 3$): I checked it and it works.
Suppose the hypothesis is true for $n-1$.
Let $x := c_2 + \dotsc + c_n$. We know that $c_2 < \frac{x}{2}$. By the induction hypothesis applied to the $(n-1)$-tuple $\bigl(\frac{c_2}{x},\dotsc ,\frac{c_n}{x}\bigr)$ we obtain
$$c_2 + c_2^{\ast} c_3 + \dotsc + c_2^{\ast}\dotsc c_{n-2}^{\ast} c_{n-1} < \frac{x}{2}.$$
Then
$$\begin{align}&\quad c_1+c_1^*c_2+\dotsc+c_1^*\dotsc c_{n-2}^*c_{n-1} \\
&=c_1+c_1^*\bigg(c_2+c_2^*c_3+\dotsc+c_2^*\dotsc c_{n-2}^*c_{n-1}\bigg) \\
&<c_1+c_1^*\frac{x}{2}\le\frac{1}{2}.\end{align}$$ |
Convergence a.e of a sequence of functions in $L^{p}$ | Put $g_k = k^{-(1+\alpha/ p)} f_k.$ By homogeneity, we have $$\| g_k \|_p \le C k^{-1}.$$ In particular, by the Chebyshev inequality, we see that for any $\epsilon > 0$, $$\lvert \{x \in \mathbb R^n : \lvert g_k(x) \rvert > \epsilon\} \rvert \le \frac{\| g_k \|^p_p}{\epsilon^p} \le \frac{C^p}{k^p\epsilon^p} \to 0 \,\,\,\, \text{ as } k \to \infty.$$ This shows that $g_k \to 0$ in measure. Here is a proof that convergence in measure implies convergence a.e. along a subsequence: Convergence in $L^p$ and convergence almost everywhere. |
Improper Real Integral Involving Circular Functions Using Complex Transformation | Notice that $1-2a\cos\theta+a^2 = (1-ae^{i\theta})(1-ae^{-i\theta})$, hence
$$\frac{1}{1-2a\cos\theta+a^2}=\sum_{j\geq 0}a^j e^{ji\theta}\sum_{k\geq 0}a^k e^{-ki\theta}\tag{1}$$
has a simple Fourier cosine series. The same holds for $\cos^3\theta$:
$$ \cos^3\theta = \frac{3}{4}\cos\theta+\frac{1}{4}\cos(3\theta) \tag{2}$$
and since $\int_{0}^{2\pi}\cos(n\theta)\cos(m\theta)\,d\theta=\pi\,\delta(m,n) $,
$$ \int_{0}^{2\pi}\frac{\cos^3\theta}{1-2a\cos\theta+a^2}\,d\theta=\pi\left[\frac{3}{4}\sum_{|j-k|=1}a^{j+k}+\frac{1}{4}\sum_{|j-k|=3}a^{j+k}\right]\tag{3}$$
that simplifies to:
$$ \pi\left[\frac{3}{4}\sum_{j\geq 0}a^{2j+1}+\frac{3}{4}\sum_{j\geq 1}a^{2j-1}+\frac{1}{4}\sum_{j\geq 0}a^{2j+3}+\frac{1}{4}\sum_{j\geq 3}a^{2j-3}\right]$$
then to:
$$\boxed{ \int_{0}^{2\pi}\frac{\cos^3\theta}{1-2a\cos\theta+a^2}\,d\theta = \color{red}{\frac{a (3+a^2)}{1-a^2}\cdot\frac{\pi}{2}}}\tag{4}$$
This question is closely related to the residue theorem and the Poisson kernel.
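A numeric sanity check of $(4)$ (a sketch; any $|a|<1$ works, as required by the geometric series in $(1)$):

```python
from math import cos, pi
from scipy.integrate import quad

a = 0.5
integrand = lambda t: cos(t)**3 / (1 - 2 * a * cos(t) + a**2)

numeric, _ = quad(integrand, 0, 2 * pi)
closed = a * (3 + a**2) / (1 - a**2) * pi / 2
print(numeric, closed)   # both 3.4034...
```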
What do Subscripted numbers in an equation mean? | In this case the subscripts tell you which term of the sequence you’re looking at: $F_n$ is the $n$-th term of the sequence. This particular sequence is the Fibonacci sequence, which is defined by setting $F_0=0$ and $F_1=1$, thereby establishing the zero-th and first terms, and defining the rest recursively by the relationship that you quoted in your question: $$F_n=F_{n-1}+F_{n-2}\tag{1}$$ for all $n>1$. The formula $(1)$ then says that the $n$-th Fibonacci number is the sum of the $(n-1)$-st and $(n-2)$-nd Fibonacci numbers. When $n=2$, that says that $$F_2=F_1+F_0=1+0=1\;;$$ then when $n=3$ it says that $$F_3=F_2+F_1=1+1=2\;,$$ when $n=4$ it says that $$F_4=F_3+F_2=2+1=3\;,$$ and so on.
In this way we have an infinite sequence $\langle F_n:n\in\Bbb N\rangle=\langle0,1,1,2,3,5,8,\dots\rangle$. In general $\langle x_n:n\in\Bbb N\rangle$ is an infinite sequence $\langle x_0,x_1,x_2,x_3,\dots\rangle$, the subscripts indicating the position of each term in the sequence. In the sequence the order matters. That is, although the sets $\{x_0,x_1,x_2,x_3,\dots\}$ and $\{x_1,x_0,x_3,x_2,\dots\}$ are identical, the sequences $\langle x_0,x_1,x_2,x_3,\dots\rangle$ and $\langle x_1,x_0,x_3,x_2,\dots\rangle$ are not.
You can think of these subscripts simply as labels to keep the positions straight, just as we can use $\langle x_1,x_2,x_3\rangle$ for an ordered triple representing a point in $3$-space. From a more formal point of view, however, a sequence is actually just a function. For example, the sequence $$\langle x_0,x_1,x_2,x_3,\dots\rangle$$ of real numbers is a shorthand for the function $$x:\Bbb N\to\Bbb R:n\mapsto x_n\;,$$ so that we could just as well write $x(n)$ as $x_n$. |
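Since a sequence is just a function of the index, the recurrence translates directly into code; a small sketch:

```python
# Build the Fibonacci sequence from F_0 = 0, F_1 = 1 and F_n = F_{n-1} + F_{n-2}.
F = [0, 1]
for n in range(2, 10):
    F.append(F[n - 1] + F[n - 2])

print(F)      # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
print(F[4])   # the subscript is just a position label: F_4 = 3
```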
Forming a group from the product of two other groups? | In the original version of the question, you described a process of attaching elements of one group to elements of the other. This isn't the right picture, and I think the simplest example to start with is the direct product.
If you're familiar with the Cartesian product of sets, the direct product of groups is just "the Cartesian product for groups" (and this can be made precise via category theory if you're interested). Given groups $\mathcal{G}=(G,*_\mathcal{G})$ and $\mathcal{H}=(H,*_\mathcal{H})$, the direct product $\mathcal{G}\times \mathcal{H}$ is the group whose elements are ordered pairs $(g, h)\in G\times H$ - that is, elements of the Cartesian product of the underlying sets of $G$ and $H$ - and where the group operation is given coordinatewise:$$(a,b)*_{\mathcal{G}\times\mathcal{H}}(c,d)=(a*_\mathcal{G}c,b*_\mathcal{H}d).$$
For example, if we take $G$ and $H$ to each be the group of real numbers under addition, their direct product is exactly what we should expect: it's just $\mathbb{R}^2$ with "vector addition" $(a,b)+(c,d)=(a+c,b+d)$ (I'm assuming you've seen vectors before, maybe in a physics class, here; if not, forget the phrase in quotes).
Note that every pair of elements is allowed in the direct product, so there's no "matching up" of elements which occurs at the beginning of the construction. (Incidentally, we can take the direct product of more than two groups - even of infinitely many groups at once! - and there's a related thing, the direct sum, which is different in the infinite case - but that's a side issue.)
Now what about other ways to combine groups?
(For simplicity, from now on I'll conflate a group with its underlying set.)
The semidirect product, as the name suggests, is a generalization of the direct product. Elements of the semidirect product are still pairs of elements from the appropriate groups, but the way they interact is more complicated: now we don't just have the two groups $G$ and $H$, but we also have (as your friend mentioned) a fixed group homomorphism $\phi$ from $H$ to $Aut(G)$ - or, in more intuitive language, an action $\phi$ of $H$ on $G$. These three pieces of data combine together to form the semidirect product $G\rtimes_\phi H$:
Elements of $G\rtimes_\phi H$ are ordered pairs $(a,b)\in G\times H$. That is, the underlying set of $G\rtimes_\phi H$ is the same as that of the direct product; we're not going to change what an element is, just how elements relate to each other.
Multiplication is given "coordinatewise but with $\phi$ creeping in": we let $$(a,b)*(c,d)=(a\phi_b(c), bd).$$ (Here as is often the convention I'm writing "$\phi_b(c)$" instead of "$\phi(b)(c)$;" it's a lot clearer this way, at least in my opinion.)
Note that in the case of the "trivial action" where $\phi: H\rightarrow Aut(G)$ is the trivial homomorphism (equivalently, $\phi_y(x)=x$ for all $y\in H, x\in G$), we have $a\phi_b(c)=ac$ and we just get the direct product back! So this really is a generalization.
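To make the multiplication rule concrete, here is a minimal Python sketch; the dihedral group $D_5=\mathbb{Z}_5\rtimes\mathbb{Z}_2$ (with $\mathbb{Z}_2$ acting by negation) is my own illustrative choice, not from the question:

```python
n = 5

def phi(b, c):
    """The action phi_b(c): b = 0 acts trivially, b = 1 by negation."""
    return c % n if b == 0 else (-c) % n

def mult(p, q):
    (a, b), (c, d) = p, q
    return ((a + phi(b, c)) % n, (b + d) % 2)   # (a * phi_b(c), bd), written additively

r, s = (1, 0), (0, 1)        # a rotation and a reflection
print(mult(s, r))            # (4, 1), i.e. s r = r^{-1} s: the dihedral relation
```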
So you see that a similar picture is occurring: we're not matching elements of the groups up with each other, but rather every possible pair of elements plays a role in the group we build. The direct and semidirect products each start by taking the Cartesian product of the underlying sets of the groups in question and then go from there, the direct product putting forth as little effort as possible and the semidrect product doing something which at first probably seems kind of bizarre (but it is very useful).
There are other ways to combine groups - free product, wreath product, ... - which are quite different, but I'll stop here for now; I think digesting the (semi)direct product is a good starting point. |