How to fix the confusion of power? | The confusion here comes from the way we define the power function for complex numbers.
Consider a complex number $z$ in polar (or exponential) form, given as $z = Re^{i\theta}$. Note that the argument $\theta$ isn't unique: letting $\alpha = \theta + 2\pi$ yields $Re^{i\alpha} = Re^{i(\theta + 2\pi)} = Re^{i\theta}e^{2i\pi} = Re^{i\theta}$.
Now let's consider the natural logarithm of $z$: using the properties of logarithms gives us $\log{z} = \ln R + i \theta$. Since $\theta$ can take on many different values, it should be clear that $\log$ can as well, and this should make sense given that $e^z$ is periodic.
The reason the multi-valuedness of $\log z$ matters is that it is used directly in the definition of complex-valued power functions: for any $z \neq 0$, $z^x = e^{x\log{z}}$. So, since $\log$ is multi-valued, so is the power function, which is why you get multiple answers here.
The fact that these functions are multi-valued may seem like a detriment to their usefulness, but we can define a single value for them using the concept of branches. The idea here is similar to what we do with $\arcsin$ in the real numbers: there are infinitely many angles with the same sine, but if we restrict the output of the function to a certain range, then we can define a single principal result.
In the complex numbers we do this by restricting the values that the argument $\theta$ of a complex number can take to an interval of length $2\pi$, typically requiring $0 \leq \theta < 2\pi$. Now $\log$ (and by proxy the power function) has one value on this branch. However, crossing the line $\theta = 0$ between branches has ramifications, and this is likely what causes the identity in question to fail in this instance.
So in your example with $(-1)^{\frac{15}{2}}$, which answer you get as the principal result depends on what branch of the function you work on. On the "default" principal branch with $0 \leq \theta < 2\pi$, you would get:
$$(-1)^{\frac{15}{2}} = e^{\frac{15}{2}\log(-1)} = e^{\frac{15}{2}i\pi} = \cos\left(\tfrac{15}{2}\pi\right)+i\sin\left(\tfrac{15}{2}\pi\right) = -i$$
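For instance, Python's `cmath` uses the principal branch $-\pi < \theta \le \pi$, for which $\log(-1) = i\pi$ as well, so it reproduces both values once we shift the logarithm by $2\pi i$ ourselves (a quick check, not part of the original answer):

```python
import cmath

log_neg1 = cmath.log(-1 + 0j)          # principal branch: i*pi
principal = cmath.exp(7.5 * log_neg1)  # branch where arg(-1) = pi
other = cmath.exp(7.5 * (log_neg1 + 2j * cmath.pi))  # branch where arg(-1) = 3*pi
print(principal, other)  # approximately -1j and 1j
```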
However, if you use $2\pi \leq \theta < 4\pi$, you would get:
$$(-1)^{\frac{15}{2}} = e^{\frac{15}{2}\log(-1)} = e^{\frac{15}{2}\cdot3i\pi} = \cos\left(\tfrac{45}{2}\pi\right)+i\sin\left(\tfrac{45}{2}\pi\right) = i$$ |
Integrating $\arccos(x)$ with the residue theorem | For $\arccos(z)$, the points $z=1, z=-1$ are branch points, no good for computing via the residue theorem. When this happens, we may sometimes evaluate by finding the residues outside the contour. But here the point $z=\infty$ is also a branch point, so we still cannot use the residue theorem.
However, $\arccos z - i\log z$ (with the appropriate branches of $\arccos$ and $\log$) is single-valued at $z=\infty$, with residue $0$, so for a contour $\gamma$ that surrounds both $1$ and $-1$ once in the clockwise direction,
we get $\oint_\gamma \arccos z\;dz = \oint_\gamma i\log z\;dz = 2 \pi$. The limit of this branch of $\arccos z$ approaching the real axis from below is $\arccos x$, but
the limit approaching from above is $-\arccos x$. So
$$
2\pi = \int_1^{-1} (-\arccos x)\;dx + \int_{-1}^1 \arccos x\;dx
= 2\int_{-1}^1 \arccos x\;dx
$$
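A quick numerical sanity check of $\int_{-1}^1 \arccos x\,dx = \pi$ (a midpoint-rule sketch, not part of the original argument):

```python
import math

# Midpoint rule for the integral of arccos over [-1, 1].
n = 200_000
h = 2.0 / n
total = sum(math.acos(-1 + (i + 0.5) * h) for i in range(n)) * h
print(total)  # ~ 3.14159...
```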
Therefore our answer is $\pi$. |
Finite Difference Boundary Conditions | With Dirichlet conditions you can either build it into the right hand side of the equation, or you can introduce a trivial equation for those conditions as actual variables. It will be the same.
With Neumann or Neumann-like (e.g. Robin) conditions you want to be careful that you discretize the boundary derivative in a way which has the same order as your discretization of the interior derivatives. One way to do this with finite differences is to use "ghost points". I confess that this is rather hard to motivate within the finite difference framework but it gives results that are much like those you get in the finite element framework.
The point here is that the values $u_0,u_{n+1}$ are thought of as being variables, and there is an equation involving the "interior" derivative at these points. You then add additional points $u_{-1},u_{n+2}$ and choose their values to satisfy the derivative condition. For example, with homogeneous Neumann conditions you take the approximation of the derivative at the first point to be $\frac{u_1-u_{-1}}{2\Delta}$ and set that to zero so that $u_{-1}=u_1$. Same for $u_{n+2}$. This then actually enters into the problem because you have an equation for the second derivative at the leftmost point: $u_{-1}-2u_0+u_1=b_0$, and $u_{-1}=u_1$ so this is $-2u_0+2u_1=b_0$. |
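A minimal sketch of this ghost-point bookkeeping in Python (0-based indexing, so the ghost points are $u_{-1}$ and $u_n$; the matrix is the scaled second-difference operator):

```python
def neumann_laplacian(n):
    """Second-difference matrix (times dx^2) on nodes u_0 .. u_{n-1}, with
    homogeneous Neumann conditions imposed by eliminating the ghost points
    u_{-1} = u_1 and u_n = u_{n-2}."""
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        A[i][i] = -2.0
        if i > 0:
            A[i][i - 1] = 1.0
        if i < n - 1:
            A[i][i + 1] = 1.0
    # Ghost-point elimination doubles the off-diagonal boundary entries,
    # exactly as in u_{-1} - 2 u_0 + u_1 = -2 u_0 + 2 u_1 above.
    A[0][1] = 2.0
    A[-1][-2] = 2.0
    return A

A = neumann_laplacian(4)
print(A[0])  # [-2.0, 2.0, 0.0, 0.0]
```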
$5^n$ is relatively prime with 13, n in $\mathbb{N}$ | You need to show that $13$ doesn't divide $5^n$ for any $n\in \mathbb{N}$. Assume that $13$ does divide $5^n$, so that $13$ is a factor of $5^n$. But $13$ is prime and $5$ is the only prime factor of $5^n$, so by the uniqueness of the prime factorization, $13$ cannot divide $5^n$, a contradiction. |
I am trying to find a regular expression for {Strings whose number of 0s is not divisible by 4}, with and without the use of complement. | Let $L = 1^*(01^*01^*01^*01^*)^* = \{u \in \{0,1\}^* \mid |u|_0 \equiv 0 \bmod 4 \}$ (the leading $1^*$ covers the words containing no $0$ at all). Then the language you are looking for is
$$
L(01^* + 01^*01^* + 01^*01^*01^*).
$$
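The combined expression can be checked mechanically. The sketch below writes $L$ as $1^*(01^*01^*01^*01^*)^*$ so that words consisting only of $1$s are covered, and tests every short word:

```python
import re
from itertools import product

# fullmatch test of L(01* + 01*01* + 01*01*01*) against the zero-count.
pattern = re.compile(r"1*(?:01*01*01*01*)*(?:01*|01*01*|01*01*01*)")

all_ok = all(
    (pattern.fullmatch(w := "".join(bits)) is not None) == (w.count("0") % 4 != 0)
    for n in range(9)
    for bits in product("01", repeat=n)
)
print(all_ok)  # True
```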
Can you see why? |
Show that a set of vectors spans a subspace of $\mathbb R^3$ even when it doesn't span $\mathbb R^3.$ | Row reducing (check details for yourself),
$$\pmatrix{1&3&4&|&x\cr 2&4&5&|&y\cr 3&5&6&|&z\cr}
\sim\pmatrix{1&3&4&|&x\cr 0&-2&-3&|&y-2x\cr 0&0&0&|&x-2y+z\cr}\ .$$
Therefore
$$\eqalign{
(x,y,z)\in\hbox{span(your vectors)}\quad
&\Leftrightarrow\quad\hbox{the system has a solution}\cr
&\Leftrightarrow\quad x-2y+z=0\ .\cr}$$ |
Zeno-like riddle with additional complication: Runner with dog running back and forth at different speeds | This is a very famous problem. There is a hard way to solve it (formulate an infinite sum and calculate what it converges to) and a very easy way (think like a physicist and apply the relation distance = speed $\times$ time).
I've heard the anecdote that a psychologist noticed that mathematicians always did it the hard way and needed several minutes to solve the puzzle, while physicists always did it the easy way and needed only seconds. When he asked John von Neumann to solve the puzzle, von Neumann answered within seconds. The psychologist asked "But you are a mathematician, you're supposed to do it the hard way and evaluate the sum!" von Neumann answered "That's what I did..." |
What happens to the notion of semantic entailment in logics other than boolean? | In multi-valued logics there is still an ordering on the truth values. A statement $A$ entails a statement $B$ if $A$ is stronger than $B$ under this ordering (usually written $A \le B$). (Typically in such logics you can't view assumptions as a set; you have to treat them as a multiset, because $A \land A$ might be a stronger statement than $A$. Hence I'm replacing your set of assumptions by a single assumption $A$ that is the conjunction of a multiset of assumptions.) Fuzzy logic, where the truth values are real numbers, is a good example of such a logic. Multi-valued logics of this kind often don't admit the law of contraction: $A \to A \land A$ is not admissible. |
Find the density function of $X+Y$ when $X,Y \sim_{i.i.d} U[0,1]$ | Think geometrically. Let $(X,Y)$ be a random coordinate in the unit square $[0,1] \times [0,1]$. Then $U = X+Y \le u$ means $(X,Y)$ lies in the subset of the unit square on or below the line $X+Y = u$. For $0 < u \le 1$, this is a triangle with vertices $(0,0), (u,0), (0,u)$. If $1 < u \le 2$, then it is a pentagon with vertices $(0,0), (1,0), (1,u-1), (u-1,1), (0,1)$, whose complement in the square is the triangle with vertices $(1,u-1), (1,1), (u-1,1)$. What is the area of this region as a function of $u$?
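These areas can be sanity-checked by Monte Carlo (a sketch; the piecewise CDF below just encodes the triangle and pentagon areas described above):

```python
import random

def cdf(u):
    """CDF of U = X + Y read off from the areas above."""
    if u <= 0:
        return 0.0
    if u <= 1:
        return u * u / 2            # triangle area
    if u <= 2:
        return 1 - (2 - u) ** 2 / 2  # square minus the corner triangle
    return 1.0

random.seed(0)
n = 200_000
hits = sum(random.random() + random.random() <= 1.5 for _ in range(n))
print(abs(hits / n - cdf(1.5)))  # small Monte Carlo error
```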
Now with this insight, can you work with your method in a way that allows you to arrive at the same conclusion? |
Show that a transcendental simple extension has infinitely many intermediate fields. | Hint: Let $F$ be the base field, and $K = F(\alpha)$ the simple transcendental extension. Are the intermediate subfields $F(\alpha^{2}), F(\alpha^{3}), \ldots$ distinct? Why? |
If $PAP^{-1} = B$, does there exist $Q$ with positive determinant such that $QAQ^{-1} = B$? | No.
A counterexample for $n = 2$ is $A = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$ and $B = -A$. Then, the only $2\times 2$-matrices $P$ that satisfy $PA = BP$ are those of the form $\begin{pmatrix} a & b \\ b & -a \end{pmatrix}$. Obviously, if such a matrix $P$ has real entries, then its determinant is $\leq 0$. |
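A quick brute-force check (a sketch) that every matrix of this form does intertwine $A$ and $B$; the converse, that these are the only solutions of $PA = BP$, is the small computation left to the reader:

```python
# Check PA = BP for P = [[a, b], [b, -a]], A as above, B = -A.
A = [[0, -1], [1, 0]]
B = [[0, 1], [-1, 0]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

ok = all(
    matmul([[a, b], [b, -a]], A) == matmul(B, [[a, b], [b, -a]])
    for a in range(-3, 4) for b in range(-3, 4)
)
print(ok)  # True
```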
NFA of $k$ states recognizing all words of length $\le k$ | a. If $A$ is nonempty, there is a path through $N$ from a start state to an accepting state. By removing cycles, we may suppose the path has length at most $k$. The word along the edges of this path is a word of length $\le k$ accepted by $N$.
b. See d. below
c. Use the subset construction to find a DFA $D$ accepting $A$. By construction, $D$ has at most $2^k$ states. Let $D'$ be the DFA obtained from $D$ by swapping accepting and nonaccepting states. Then $D'$ accepts $\overline A$. Apply the argument from the first part to find a word of length $\le 2^k$ which $D'$ accepts, or equivalently which lies in $\overline A$.
d. Let the alphabet consist of one letter '1', and consider words as their corresponding natural numbers in unary. Fix a set of natural numbers $X$. For each $n\in X$, there is an NFA $N_n$ of $n$ states which accepts all numbers which are not divisible by $n$ (it's just a cycle whose single nonaccepting state is the start state). Let a new NFA $N$ be the union of the $N_n$ for $n\in X$ with an additional start state (which is accepting), which has a single $\epsilon$ edge to each of the start states of the $N_n$. The natural numbers not accepted by $N$ are precisely the nonzero multiples of $\text{lcm}(X)$, and $N$ has $1 + \sum_{n\in X} n$ states.
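A small simulation of this union NFA (a sketch; acceptance is just the divisibility test described above):

```python
def accepted(n, X):
    """The union NFA accepts the unary word 1^n iff n == 0 (the extra
    accepting start state) or some component N_m sees n % m != 0."""
    return n == 0 or any(n % m != 0 for m in X)

X = [2, 3]  # 1 + 2 + 3 = 6 states in total
rejected = [n for n in range(1, 20) if not accepted(n, X)]
print(rejected)  # the nonzero multiples of lcm(2, 3) = 6
```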
We can for instance take $X$ to be the primes less than $p$ for some $p$ to obtain one parameterized sequence of NFA's $N_p$ with the required property. Indeed, arguing very coarsely, $N_p$ has at most $p^2$ states but the first number not accepted is at least $2^{\pi(p)} \ge 2^{p / \log p -1}$ for sufficiently large $p$. Unfortunately this is only super polynomial in the number of states, and not quite exponential. I'm not sure if the idea can be improved to be properly exponential. |
EigenValues and EigenVectors in PCA? | I believe that PCA is one of the best-suited applications of eigenvectors and eigenvalues.
Raw data can be coupled (made interdependent) across variables.
Some couplings can be thought of as linear transformations.
Eigenvalues and eigenvectors can be used to decouple a linear transformation.
One can then measure how accurately the new coordinates represent the data.
Package it all nicely under the name PCA.
I found a good YouTube video that explains PCA well.
To explain coupling, think of two variables, the area $A$ and perimeter $P$ of a rectangle. We know that $A = \text{width} \times \text{height}$ and $P = 2(\text{width}+\text{height})$. Thus $A$ and $P$ are coupled through the width and height of the rectangle: changing $A$ will also change $P$ via the change in width and height. Though the example I've given is not linear, you get the idea. |
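To make the eigenvalue story concrete, here is a toy sketch (data and numbers are made up for illustration): two strongly coupled variables, their covariance matrix, and its eigenvalues, with the first principal component capturing nearly all the variance:

```python
import math

# Toy 2-D data: two strongly "coupled" (correlated) variables.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8), (5.0, 10.1)]
n = len(data)
mx = sum(x for x, _ in data) / n
my = sum(y for _, y in data) / n

# Entries of the 2 x 2 covariance matrix.
sxx = sum((x - mx) ** 2 for x, _ in data) / (n - 1)
syy = sum((y - my) ** 2 for _, y in data) / (n - 1)
sxy = sum((x - mx) * (y - my) for x, y in data) / (n - 1)

# Its eigenvalues, via the quadratic formula on the characteristic polynomial.
tr, det = sxx + syy, sxx * syy - sxy ** 2
disc = math.sqrt(tr * tr / 4 - det)
lam1, lam2 = tr / 2 + disc, tr / 2 - disc

# "Decoupling": the first principal component captures almost all variance.
ratio = lam1 / (lam1 + lam2)
print(ratio)
```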
Suppose $p>q$, $X\sim \text{Bernoulli}(p)$, $Y\sim \text{Bernoulli}(q)$. Couple $X$ and $Y$ to maximise $P(X=Y)$. | The first diagonal entry is the probability that $X=Y=0$, and we know that both of the following statements are true:
Since $\Pr[X=Y=0] \le \Pr[X=0]$, it is at most $1-p$.
Since $\Pr[X=Y=0] \le \Pr[Y=0]$, it is at most $1-q$.
However, $p>q$, so the first constraint is stronger, and we can forget about the second constraint.
Similarly, the second diagonal entry is the probability that $X=Y=1$, and we know that both of the following statements are true:
Since $\Pr[X=Y=1] \le \Pr[X=1]$, it is at most $p$.
Since $\Pr[X=Y=1] \le \Pr[Y=1]$, it is at most $q$.
However, $p>q$, so the second constraint is stronger, and we can forget about the first constraint.
This shows that $\Pr[X=Y]$ is at most $1-p+q$. Algebraically, $$\Pr[X=Y] \le \Pr[X=Y=0] + \Pr[X=Y=1] \le (1-p) + q.$$ We need to show that this upper bound is also a lower bound, and that's what the table does. |
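For concreteness, here is one joint table meeting the bound (an illustrative construction; the zero cell is the one forced by the argument above):

```python
# One joint table achieving the bound (illustrative numbers, p > q):
# all mass that can sit on the diagonal does, and P(X=0, Y=1) = 0.
p, q = 0.7, 0.4
joint = {(0, 0): 1 - p, (1, 1): q, (1, 0): p - q, (0, 1): 0.0}

prob_x1 = joint[(1, 0)] + joint[(1, 1)]  # marginal P(X = 1), should be p
prob_y1 = joint[(0, 1)] + joint[(1, 1)]  # marginal P(Y = 1), should be q
prob_eq = joint[(0, 0)] + joint[(1, 1)]  # P(X = Y), should be 1 - p + q
print(prob_x1, prob_y1, prob_eq)
```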
The restriction of a covering map on the connected component of its definition domain | I'll use the definition of covering map as it appears in Hatcher's Algebraic Topology: A continuous map $p:Y\to X$ is called a covering map, if for every $x\in X$ there is an open neighborhood $U$ around $x$ whose preimage is a (possibly empty) disjoint union of open sets, each of which is mapped homeomorphically onto $U$ via $p$. Note that this definition does not require a covering map to be surjective.
Still, if the codomain is connected, the map must be surjective by the following argument:
Assume $x\notin p(Y)$. Then its preimage $p^{-1}(x)$ is empty. There is an open $U$ containing $x$ such that $p^{-1}(U)$ equals $\bigsqcup_{\alpha\in I}U_\alpha$ where $U_\alpha\approx U$ and all $U_\alpha$ are open. But since $x$ is not in the image of $p$, the disjoint union must be the empty union. This means that $U$ does not intersect $p(Y)$, so $p(Y)$ is closed.
On the other hand, $p$ is an open map. If $V\subset Y$ is an open set containing $y$ and $U$ is the evenly covered neighborhood of $p(y)$, then $y$ is contained in some $U_\alpha$. Since $V\cap U_\alpha$ is open, $p(V\cap U_\alpha)$ is an open subset of $U$, thus an open set in $X$ which is contained in $p(V)$, so $p(V)$ is open.
If $X$ is connected, then $p(Y)$ must be all of $X$, being a clopen subset of a connected space.
To prove that the restriction of the $p$ in your problem to the connected component $Z$ is also a covering map, take an $x\in X$ and an open neighborhood $U$ such that $p^{-1}(U)$ equals $\bigsqcup_{\alpha\in I}U_\alpha$ where $U_\alpha\stackrel p\approx U$ and all $U_\alpha$ are open. Since $X$ is locally connected, there is an open connected $V$ such that $x\in V\subset U$. Its preimage is $\bigsqcup_{\alpha\in I}V_\alpha$ where $V_\alpha$ is simply the preimage of $V$ in the particular $U_\alpha$. Each $V_\alpha$ is connected, so it is either entirely in $Z$ or it is disjoint to $Z$. If you delete all the $V_\alpha$'s which are not in $Z$ from the union, you obtain the preimage of $V$ under the restriction of $p$. This means that this restricted $p$ still is a covering map. |
using Dirichlet transforms to show infinity of primes | Ah, probably a typo. I think this should be correct:
$$F(z)=\sum_{r\ge 1} \hat f(r)\, z^r=\sum_{r\ge 1}\sum_{d|r}f(d)\,z^r=\sum_{d\ge 1}\sum_{n\ge 1}f(d)z^{nd}=\sum_df(d)\cdot \frac{z^d}{1-z^d}\,.$$
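As a numerical check of this identity (a sketch): take $f \equiv 1$, so that $\hat f(r)$ is the number of divisors of $r$, and compare truncations of both sides:

```python
def num_divisors(r):
    return sum(1 for d in range(1, r + 1) if r % d == 0)

# With f = 1, the transform is f_hat(r) = d(r), the number of divisors,
# and the identity becomes sum_r d(r) z^r = sum_d z^d / (1 - z^d).
z, N = 0.3, 60  # truncate both sides; the remaining terms are O(z^N)
lhs = sum(num_divisors(r) * z**r for r in range(1, N))
rhs = sum(z**d / (1 - z**d) for d in range(1, N))
print(abs(lhs - rhs))  # tiny truncation error
```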
Update: In the finalized version (which you mentioned only in a comment), these formulas are all correct, and the conclusion for having infinitely many primes follows:
Consider the convolution unit: $\hat u(1)=1$ and $\hat u(n)=0$ if $n>1$, with that, by the inversion formula, we have
$$u(n)=\sum_{d|n} \mu(n/d)\,\hat u(d)=\mu(n)\,.$$
Now the support of $\hat u$ is $\{1\}$, which is finite, so by the lemma the support of $u=\mu$ must be infinite. However, if we had only finitely many primes, there would be only finitely many square-free numbers (all the products of distinct primes), so $\mu$ would be of finite support, a contradiction. |
On the Axiom of Choice for Conglomerates and Skeletons | I don't know what kind of background set theory you're working with, but it seems that for any reasonable choice, the "axiom of choice for conglomerates" is equivalent to the "axiom of choice for classes". A surjection $f:\{X_i\}_{i\in I}\to\{Y_j\}_{j\in J}$ of conglomerates can be interpreted as a surjective relation between the index classes $I$ and $J$ (namely, $i$ is related to $j$ if $f(X_i)=Y_j$), and so if you can well-order $I$, you can describe a right inverse (send $Y_j$ to $X_i$ for the $<$-least $i$ such that $f(X_i)=Y_j$, for some well-ordering $<$ of $I$). |
For square images, do SVD and diagonalization always produce the same result in compressing images? | Intuitively, SVD generates orthogonal features from the image. The eigenvectors are not necessarily orthogonal. At a given cutoff in the number of features used to reconstruct the image, it is more useful to have vectors that are different from each other, and orthogonal is as different as you can get. |
Derivative of multivariable function | I don't know if this answers your question, but in the case of a transformation from $\mathbb{R}^n$ to $\mathbb{R}$, $f'(\mathbf{u})$ is the gradient of $f$ at the point $\mathbf{u}$:
$$ \left (\frac{\partial f}{\partial x_1}(\mathbf{u}),\frac{\partial f}{\partial x_2}(\mathbf{u}),\frac{\partial f}{\partial x_3}(\mathbf{u}),\cdots \right)$$ which is a $1\times n$ matrix, or your vector $\mathbf{y}$. This is the content of the famous Riesz representation theorem. So applying some vector $\mathbf{x}$ to this matrix yields the directional derivative in direction $\mathbf{x}$ at the point $\mathbf{u}$. Mathematically written: $$ f'(\mathbf{u})\mathbf{x} = \nabla f(\mathbf{u})\cdot\mathbf{x} $$ |
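A numerical illustration (the field `f` below is a made-up example, not from the question): the central-difference gradient applied to a direction $\mathbf{x}$ recovers the directional derivative:

```python
def f(u):
    # A hypothetical example field on R^3 (assumed for illustration).
    x, y, z = u
    return x * y + z * z

def grad(f, u, h=1e-6):
    """Central-difference approximation of the gradient, i.e. the 1 x n
    matrix representing f'(u)."""
    g = []
    for i in range(len(u)):
        up = list(u); up[i] += h
        um = list(u); um[i] -= h
        g.append((f(up) - f(um)) / (2 * h))
    return g

u = [1.0, 2.0, 3.0]
x = [0.5, -1.0, 2.0]  # direction vector
dir_deriv = sum(gi * xi for gi, xi in zip(grad(f, u), x))
print(dir_deriv)  # grad f(u) . x = 2*0.5 + 1*(-1) + 6*2 = 12
```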
Problem in existence of constant $C_K$ in Theorem 6.2.2 in Esmonde and Murty's Problems in Algebraic Number Theory | Well,
$$N(M) = N(\prod_{|t| \leq H_K} t) = \prod_{|t| \leq H_K} N(t)$$
Since $H_K$ is independent of $A$, it is clear that $N(M) = C_K$ is independent of $A$ as well.
The authors did not choose just those $t$'s matching an $\alpha \in A$, but instead all of the $|t| \leq H_K$ which is how the seeming dependence is removed. |
Graph theory resource for mathematical Olympiads | Here are three links:
See here for some problems in graph theory used by its author in engaging students preparing for IMO at the camp.
and here for some elementary notes in graph theory.
And, a book whose title suits your description is "Graph Theory for the Olympiad Enthusiast", published by the South African Math Society. You probably know places where you can find books; I'll say no more about that here.
Another thing I'll add is this book: "Pearls in Graph Theory: A Comprehensive Introduction", published by Academic Press. This is a wonderful book that touches on topics that are non-routine for beginners. |
$\mathbb{Z}$-invariant function on $\mathbb{R}^3$ manifold. | Just take $f(x,y,z) = yz$. This is unbounded, smooth, and $$f(\phi_k(x,y,z))= ({\rm e}^ky)({\rm e}^{-k}z) = yz,$$so it passes to the quotient as an unbounded smooth function. |
Another elementary number theory problem | Modular arithmetic respects multiplication: $ab \equiv (a\bmod{r})\,(b\bmod{r}) \pmod{r}$.
$n\pmod{7}$ can take on any value in $\{0,1,2,\cdots,6\}$. So just square each value and recalculate the mod 7. For example, $4^2\equiv 2\pmod{7}$. Show that the resulting remainders are $0,1,2,4$. |
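The whole computation fits in one line (a quick check):

```python
# Squares of all residues modulo 7.
residues = sorted({n * n % 7 for n in range(7)})
print(residues)  # [0, 1, 2, 4]
```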
Galois Group of the Hilbert Class Field | $\def\QQ{\mathbb{Q}}\def\ZZ{\mathbb{Z}}$In general, it is not true that $Gal(L/\QQ) \cong Gal(K/\QQ) \ltimes \mathrm{Cl}(K)$.
What is true
Suppose that we have a short exact sequence of groups
$$0 \to A \to H \to G \to 0$$
where $A$ is abelian. Since $A$ is normal, the group $H$ acts on $A$ by conjugation; since $A$ is abelian, this action factors through $H/A \cong G$. So, in any such setting, we get an action of $G$ on $A$.
In your notation, Galois theory gives us
$$0 \to \mathrm{Cl}(K) \cong Gal(L/K) \to Gal(L/\QQ) \to Gal(K/\QQ) \to 0.$$
It turns out that this action of $Gal(K/\QQ)$ is the same as the natural action of $Gal(K/\QQ)$ on the class group. I don't know a reference for this, but it's not hard to see that $\sigma \cdot \mathrm{Frob}_{\mathfrak{p}} \cdot \sigma^{-1} = \mathrm{Frob}_{\sigma \mathfrak{p}}$ for an unramified prime $\mathfrak{p}$ of $K$ and an element $\sigma \in G$. The result follows from this plus an extremely weak version of Chebotarev density (just enough to know that the values of Frobenius generate the Galois group).
However, this does not mean that the short exact sequence is semidirect and, as we will see below, it need not be.
A strategy for finding a counterexample
Suppose that we could construct a Galois extension $F/\QQ$, with Galois group $H$, and an abelian subgroup $A$ of $H$ such that
(1) The sequence $0 \to A \to H \to H/A \to 0$ did not split and
(2) For every prime $\mathfrak{p}$ of $F$ (including the infinite primes), the inertia group $I_{\mathfrak{p}}$ is disjoint from $A$.
I claim that taking $K$ to be the fixed field of $A$ will give a counterexample.
The extension $F/K$ is abelian (since $A$ is abelian) and unramified (by the condition on inertia groups). So we have $F \subseteq L$ and we have a commuting diagram
$$\begin{matrix}
0 &\to& \mathrm{Cl}(K) &\longrightarrow& Gal(L/\QQ) &\longrightarrow& Gal(K/\QQ) &\to & 0 \\
& & \downarrow & & \downarrow & & \| & & \\
0 &\to& A &\longrightarrow& Gal(F/\QQ) &\longrightarrow& Gal(K/\QQ) &\to & 0 \\
\end{matrix}$$
Suppose for the sake of contradiction that we had a map $Gal(K/\QQ) \to Gal(L/\QQ)$ splitting the top sequence. Then the composite map $Gal(K/\QQ) \to Gal(L/\QQ) \to Gal(F/\QQ)$ would split the bottom sequence, contradicting (1).
This makes it clear why counterexamples are hard to find. Since $\QQ$ has no unramified extensions, one of the $I_\mathfrak{p}$ must be nontrivial. So, just on the level of group theory, we need to find a non-semidirect short exact sequence $0 \to A \to H \to G \to 0$, with $A$ abelian and a nontrivial subgroup $I \subset H$ so that $I \cap A = \{ e \}$. If $I \to G$ is surjective, this implies that the sequence is semidirect after all. Just on the group theory level, we see that there are no examples with $G$ a cyclic group of prime order.
A counterexample
Let $\zeta$ be a primitive $85$-th root of unity. Recall that $$Gal(\QQ(\zeta)/\QQ) \cong (\ZZ/85)^{\ast} \cong (\ZZ/5)^{\ast} \times (\ZZ/17)^{\ast} \cong (\ZZ/4) \times (\ZZ/16).$$
We will always write elements of this group as ordered pairs in $(\ZZ/4) \times (\ZZ/16)$.
Let $F$ be the fixed field of $\{ 0 \} \times (4 \ZZ/16)$. So $Gal(F/\QQ) \cong (\ZZ/4) \times (\ZZ/4)$. The only ramified primes are $5$, $17$ and $\infty$ with inertia groups $\ZZ/4 \times \{0 \}$, $\{ 0 \} \times \ZZ/4$ and $(2,0)$ respectively. Let $A$ be the order $2$ subgroup generated by $(2,2)$. We see that $A \cap I_{\mathfrak{p}}$ is trivial for every $\mathfrak{p}$. Finally, we have $A \cong \ZZ/2$, $H \cong (\ZZ/4) \times (\ZZ/4)$ and $H/A \cong (\ZZ/4) \times (\ZZ/2)$, so the sequence is not semidirect.
I have not actually computed $\mathrm{Cl}(K)$ for this example.
One can build similar counterexamples whenever $H \cong (\ZZ/q^2)^2$ for some prime $q$, being a little careful about the prime at $\infty$ if $q=2$.
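The non-splitting can also be confirmed by pure counting: if the sequence split, then since $H$ is abelian we would have $H \cong A \times H/A \cong \ZZ/2 \times (\ZZ/4 \times \ZZ/2)$, but this group and $(\ZZ/4)^2$ have different order statistics. A short enumeration (a sketch):

```python
from itertools import product

def order_counts(mods):
    """Histogram {order: count} of element orders in Z/m1 x ... x Z/mk."""
    counts = {}
    for g in product(*(range(m) for m in mods)):
        cur, k = g, 1
        while any(cur):
            cur = tuple((c + e) % m for c, e, m in zip(cur, g, mods))
            k += 1
        counts[k] = counts.get(k, 0) + 1
    return counts

# (Z/4)^2 versus Z/2 x (Z/4 x Z/2): different numbers of order-4 elements,
# so the middle group of a split sequence could not be (Z/4)^2.
print(order_counts([4, 4]), order_counts([2, 4, 2]))
```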
Smaller counterexample
I messed around a bit more, and concluded that there is nothing stopping me from taking $H$ to be the dihedral group of order $8$ and $A$ to be its (two-element) center. For example, let $F$ be the Galois closure of $\QQ(\sqrt{1+8 \sqrt{-3}})$ (so $K = \QQ(\sqrt{-3}, \sqrt{193})$). If I haven't made any dumb errors, the only ramified primes are $3$, $193$ and $\infty$, and the inertia groups are all noncentral two-element subgroups. Even if I got this particular example wrong, I'm pretty sure there is no general obstacle to making an example like this. |
Prove that $\|a\|+\|b\| + \|c\| + \|a+b+c\| \geq \|a+b\| + \|b+c\| + \|c +a\|$ in the plane. | Note that
$$(\|a\|+\|b\|+\|c\|-\|b+c\|-\|a+c\|-\|a+b\|+\|a+b+c\|)(\|a\|+\|b\|+\|c\|+\|a+b+c\|)=(\|b\|+\|c\|-\|b+c\|)(\|a\|-\|b+c\|+\|a+b+c\|)+(\|c\|+\|a\|-\|c+a\|)(\|b\|-\|c+a\|+\|a+b+c\|)+(\|a\|+\|b\|-\|a+b\|)(\|c\|-\|a+b\|+\|a+b+c\|)$$
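A numerical spot check of this identity on random vectors in the plane (a sketch):

```python
import math, random

def norm(v):
    return math.hypot(v[0], v[1])

def nsum(*vs):
    return math.hypot(sum(v[0] for v in vs), sum(v[1] for v in vs))

random.seed(1)
max_err = 0.0
for _ in range(1000):
    a, b, c = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(3)]
    na, nb, nc = norm(a), norm(b), norm(c)
    nab, nbc, nca = nsum(a, b), nsum(b, c), nsum(c, a)
    nabc = nsum(a, b, c)
    lhs = (na + nb + nc - nbc - nca - nab + nabc) * (na + nb + nc + nabc)
    rhs = ((nb + nc - nbc) * (na - nbc + nabc)
           + (nc + na - nca) * (nb - nca + nabc)
           + (na + nb - nab) * (nc - nab + nabc))
    max_err = max(max_err, abs(lhs - rhs))
print(max_err)  # tiny (floating-point error only)
```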
Each factor on the right-hand side is nonnegative by the triangle inequality (for the second factors, note e.g. $\|b+c\| = \|(a+b+c)-a\| \le \|a+b+c\| + \|a\|$), and the second factor on the left is positive, so the left-hand difference is nonnegative. Done! |
Proof that a set $C$ is convex $\iff$ its intersection with any line is convex | So your proof is correct, but a little confusing -- especially the second part. It's easier to do directly instead of via contradiction. A little simplification:
$\Leftarrow$: Let $x,y\in C$ and let $\lambda\in[0,1]$. We wish to prove $\lambda x + (1-\lambda)y\in C$.
Let $L$ be the line through $x$ and $y$. $x$ and $y$ are in $C\cap L$. By convexity of $C\cap L$, we have that $\lambda x + (1-\lambda)y\in C\cap L$ and hence clearly $\lambda x + (1-\lambda)y\in C$. |
Velocity Vector Parallel To xy Plane | Part A:
It's hard to check your work with part A since you only gave an answer, but what you should've done is taken the dot product of the velocity vector with the normal vector to the $xy$-plane, which would be $\langle 0, 0, 1 \rangle$, and set it equal to $0$. (Why will this give us what we want?)
Doing this, we get $\mathbf{v}(t) \cdot \langle 0, 0, 1 \rangle = 10t-40 = 0$, which gives $t = 4$.
So, unless I've done something wrong, our answers come out to be off by a factor of $2$.
Part B:
Very helpful hint: Minimize the square of the magnitude of the velocity vector instead. You will get the same answer, but this will make it such that you don't have to deal with yucky square roots! Also, in your work, it looks like you didn't square everything properly like you needed to.
At any rate, you should get:
$$v(t) = \langle 10t, 10, 10t-40 \rangle$$
$$|v(t)|^2 = 100t^2 + (10t-40)^2 + 100$$
And now I'll let you finish up with the usual process of setting the derivative equal to zero. |
How many $7$-digit numbers can be generated with numbers in $S=\{1,2,3,4\}$ such that all of the numbers in $S$ are used at least once? | By a brute-force counting algorithm, I obtained a total of $8400$ 7-digit numbers that can be generated with the digits in $S=\{1,2,3,4\}$ such that all of the digits are used at least once.
A possible combinatorial solution for this problem is as follows. Once the four digits from $1$ to $4$ have been placed in some positions within our number, there remain three other places $A_1,A_2,A_3$ that can be filled with any digits. We have to consider different cases for this triple.
If $A_1,A_2,A_3$ are all equal, then our $7$-digit number includes four equal digits and three other different digits (i.e., it is obtained by rearranging a pattern of the form $aaaabcd$, where each letter can be any of the four possible digits). We can place the four equal digits in $\binom{7}{4}$ ways, and then the remaining different digits in $3!$ ways. Any of the $4$ possible digits can be the one occurring four times, so the possible $7$-digit numbers in this case are
$$4\binom{7}{4}3!=840$$
If two of the three numbers $A_1,A_2,A_3$ are equal, then our $7$-digit number includes three equal digits, a pair of other equal digits, and two other different digits (i.e., it is obtained by rearranging a pattern of the form $aaabbcd$). We can place the three equal digits in $\binom{7}{3}$ ways, then the pair of equal digits in $\binom{4}{2}$ ways, and then the remaining different digits in $2$ ways. The pair of digits occurring three and two times can be chosen among the possible four digits in $2\binom{4}{2}$ ways, so the possible $7$-digit numbers in this case are
$$4\binom{4}{2}\binom{7}{3}\binom{4}{2}=5040$$
Lastly, if the three numbers $A_1,A_2,A_3$ are all different, then our $7$-digit number includes three pairs of equal digits and another different digit (i.e., it is obtained by rearranging a pattern of the form $aabbccd$). Let us focus on the pairs $aa$, $bb$, and $cc$. We can place the first pair of equal digits in $\binom{7}{2}$ ways, the second pair in $\binom{5}{2}$ ways, and the third in $\binom{3}{2}$ ways, leaving a single way to fill the remaining empty place with the last digit. Note that, by placing the three pairs in this manner, we account for all combinations resulting from the different orders in which $aa$, $bb$, and $cc$ are inserted in the number. The triple of digits $a,b,c$ occurring in pairs can be chosen among the possible four digits in $4$ ways, so the possible $7$-digit numbers in this case are
$$4 \binom{7}{2} \binom{5}{2} \binom{3}{2}=2520$$
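The brute-force count mentioned at the start can be reproduced in a few lines, and it also confirms the three cases separately (grouped by the multiset of digit multiplicities):

```python
from collections import Counter
from itertools import product

# Count 7-digit words over {1,2,3,4} using every digit at least once,
# bucketed by the sorted multiplicities of the digits.
counts = Counter()
for w in product("1234", repeat=7):
    if set(w) == set("1234"):
        counts[tuple(sorted(Counter(w).values()))] += 1
print(counts)
# Counter({(1, 1, 2, 3): 5040, (1, 2, 2, 2): 2520, (1, 1, 1, 4): 840})
```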
Collecting all the results, we get that, as expected on the basis of the brute-force counting, the total number sought is
$$840+5040+2520=8400$$
corresponding to $\approx 51.3\%$ of all $4^7=16384$ possible $7$-digit numbers obtainable with the digits from $1$ to $4$. |
Indiscreet topology on quotient space | Let $V = \{x_\alpha : \alpha < \mathfrak{c}\}$ be a set of representatives for the equivalence relation (one point from each class of $\sim$), a so-called Vitali set. So $\mathbb{R}/\mathbb{Q}$ can essentially be seen as $V$. Now let $O$ be any non-empty open set in this quotient space. Then $q^{-1}[O]$ is non-empty open in $\mathbb{R}$. For every $\alpha$, the set $x_\alpha + \mathbb{Q}$ is dense in the reals (all shifts are homeomorphisms of the reals), so it intersects $q^{-1}[O]$. But if $x\in q^{-1}[O]$ can be written as $x = x_\alpha + r$ with $r \in \mathbb{Q}$, then $x \sim x_\alpha$, showing that $q(x) = q(x_\alpha) \in O$. As this holds for all $\alpha$, $O$ must contain all classes of $\sim$, so the only non-empty open set is the whole space, ergo it's indiscrete. |
Determining a recurrence relation (Homework) | This is the key phrase of your solution:
set of all strings of length 3 minus strings without 2 consecutive nucleotides of the same type
The only problem is that for some reason you subtract 17, not 16.
How many strings do not contain two consecutive letters of the same type? You start a string with any letter ($4$ possibilities), and at each step add one of the two possible letters of the type opposite to the last added. For $n=3$ you have $4\cdot2^2=16$.
In general, $d_n=4^n-4\cdot2^{n-1}=4^n-2^{n+1}$. So, the sequence starts with 0, 8, 48, 224 etc.
But this is not a recurrence relation. For that, consider how you would build all the strings of length $n$: either take a string of length $n-1$ already satisfying the condition and add any letter to it ($4\cdot d_{n-1}$ strings), or take a string of length $n-1$ not satisfying the condition and add one of the two letters of the same type as its last letter ($2\cdot(4^{n-1}-d_{n-1})$ strings). Now sum the two to obtain the recurrence relation (valid for $n\ge2$), and check that the closed form above solves it. |
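Putting it together (a sketch): the recurrence $d_n = 4d_{n-1} + 2(4^{n-1}-d_{n-1})$ for $n\ge2$, the closed form, and a brute-force count all agree:

```python
from itertools import product

def brute(n):
    """Count length-n strings over 4 letters (2 types x 2 letters per type)
    that contain two consecutive letters of the same type."""
    letters = [(t, i) for t in range(2) for i in range(2)]
    return sum(
        any(w[j][0] == w[j + 1][0] for j in range(n - 1))
        for w in product(letters, repeat=n)
    )

d = {1: 0}
for n in range(2, 8):
    # d_n = 4 d_{n-1} + 2 (4^{n-1} - d_{n-1}) = 2 d_{n-1} + 2 * 4^(n-1)
    d[n] = 2 * d[n - 1] + 2 * 4 ** (n - 1)

ok_closed = all(d[n] == 4**n - 2 ** (n + 1) for n in d)
ok_brute = all(brute(n) == d[n] for n in range(1, 7))
print(ok_closed, ok_brute)  # True True
```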
Does there exist a positive integer $k$ such that $A = 2 ^ k + 3 ^ k$ is a square number? | Note that for $k\ge2$ we have $A=2^k+3^k\equiv (-1)^k \pmod 4$ (for $k=1$, $A=5$ is not a square), and a square number must be congruent to $0$ or $1$ modulo $4$, so $k$ must be even. Say $k=2m$.
Now $A=2^{2m}+3^{2m}=4^m+9^m\equiv(-1)^m+(-1)^m \pmod 5$, and a square number must be congruent to $0$ or $\pm1$ modulo $5$, so this is impossible. |
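A quick empirical check (a sketch) that no small $k$ produces a square, consistent with the proof:

```python
from math import isqrt

# No square among 2^k + 3^k for k up to 200.
no_squares = all(isqrt(2**k + 3**k) ** 2 != 2**k + 3**k for k in range(1, 200))
print(no_squares)  # True
```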
How was the integral formula for the binomial coefficient discovered? | The trig identity $$\cos(x)^n = 2^{-n}\sum_{k=0}^n {n \choose k}\cos((n-2k)x)$$
combined with the result $$ \int_{-\pi}^\pi \cos(nx)\cos(mx)\, dx = \pi\,\delta_{n,m}$$ when $n$ or $m$ is nonzero ($2\pi$ when $n=m=0$), easily gives the integral formula you want. |
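Concretely, the two facts combine to give $\binom{n}{k} = \frac{2^n}{2\pi}\int_{-\pi}^{\pi} \cos(x)^n \cos((n-2k)x)\,dx$, which a midpoint rule confirms numerically (a sketch):

```python
import math

def binom_via_integral(n, k, steps=4096):
    """Midpoint-rule evaluation of (2^n / (2*pi)) * integral over [-pi, pi]
    of cos(x)^n * cos((n - 2k) x) dx, which should equal C(n, k)."""
    h = 2 * math.pi / steps
    total = 0.0
    for i in range(steps):
        x = -math.pi + (i + 0.5) * h
        total += math.cos(x) ** n * math.cos((n - 2 * k) * x)
    return 2**n / (2 * math.pi) * total * h

print(round(binom_via_integral(10, 3)))  # 120
```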
Convergence of $\int_0^1 \frac{\sqrt{x-x^2}\ln(1-x)}{\sin{\pi x^2}} \mathrm{d}x.$ | If you write $x = 1 - \delta$, you obtain
$$\int_0^{1-\varepsilon} \frac{\sqrt{(1-\delta)(1-(1-\delta))}\,\ln \delta}{\sin \bigl(\pi(1-\delta)^2\bigr)}\, d\delta = \int_0^{1-\varepsilon} \frac{\sqrt{\delta(1-\delta)}\,\ln \delta}{\sin \bigl(\pi(2\delta-\delta^2)\bigr)}\,d\delta,$$
and the integrand of that can be compared to the harmless
$$\frac{\sqrt{x}\,\ln x}{2\pi x}.$$ |
Is a unit the same thing as an invertible element? | Yes
Here is the link on wikipedia for unit in ring theory, which is what I assume you mean.
http://en.wikipedia.org/wiki/Unit_(ring_theory) |
Limit to infinity of polynomial fraction | Let us prove your statement by simply showing $$\frac{x^2}{x-2}=x+\frac{4}{x-2}+2$$
So $$\frac{x^2}{x-2}=\frac{x^2-4+4}{x-2}=\frac{x^2-4}{x-2}+\frac{4}{x-2}=\frac{(x-2)(x+2)}{x-2}+\frac{4}{x-2}=x+2+\frac{4}{x-2}$$
Now take the limit as $x\to\infty$ of both sides: $$\lim_{x\to\infty}\frac{x^2}{x-2}=\lim_{x\to\infty}\left(x+2+\frac{4}{x-2}\right)=\lim_{x\to\infty}\left(x+2\right)+\lim_{x\to\infty}\frac{4}{x-2}=\lim_{x\to\infty}\left(x+2\right)=\infty,$$ since the term $\frac{4}{x-2}\to 0$. |
Prove that $f(x)=x\sin(x)$ isn't uniformly continuous | hint
Take
$$x_n=2n^2\pi$$
$$y_n=x_n+\frac 1n$$
it is clear that
$$\lim_{n\to +\infty}(y_n-x_n)=0$$
and, by MVT,
$$f(y_n)-f(x_n)=(y_n-x_n)f'(c_n)$$
$$=\frac 1n(\sin(c_n)+c_n\cos(c_n))$$
with
$$2n^2\pi<c_n<2n^2\pi+\frac 1n$$
and
$$2n\pi<\frac{c_n}{n}$$
$$0<\sin(c_n)<\sin(\frac 1n)$$
$$\cos(\frac 1n)<\cos(c_n)<1$$
thus
$$\lim_{n\to+\infty}\bigl(f(y_n)-f(x_n)\bigr)=+\infty$$ |
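A numeric illustration of the hint (my own sketch): the gaps $y_n-x_n$ shrink while the differences $f(y_n)-f(x_n)$ blow up, which is exactly what uniform continuity forbids.

```python
import math

def f(x):
    return x * math.sin(x)

# y_n - x_n = 1/n -> 0 while f(y_n) - f(x_n) ~ 2*pi*n -> infinity.
jumps = []
for n in [10, 100, 1000]:
    x = 2 * n**2 * math.pi
    y = x + 1.0 / n
    jumps.append(f(y) - f(x))
assert jumps[0] < jumps[1] < jumps[2]  # the differences grow...
assert jumps[2] > 1000                 # ...roughly like 2*pi*n
```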
LCM of time periods of a Fourier series | The LCM of $24$ and $36$ is $72,$ but $24\times 36 = 864.$ Simply multiplying doesn't generally find the LCM. Especially when you have a pattern such as that in $2p,\ 2p/2,\ 2p/3,\ 2p/4,\ 2p/5,\ \ldots$ In that case, $2p$ is the LCM. |
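A tiny check of both claims (the value of $p$ below is a hypothetical rational, just for illustration):

```python
import math
from fractions import Fraction

# lcm(24, 36) is 72, not 24 * 36 = 864.
assert math.lcm(24, 36) == 72

# For the periods 2p, 2p/2, 2p/3, ..., each one divides 2p, so 2p is the LCM.
p = Fraction(3, 7)  # hypothetical value of p
for k in range(1, 10):
    period = 2 * p / k
    assert (2 * p) / period == k  # 2p is an integer multiple of each period
```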
Which are the equivalence classes for the following relation? | You can check for yourself.
$\boxed{\color{green}{\checkmark}}$ Do each of the nine pairs occur in one and only one equivalence class?
$\boxed{\color{green}{\checkmark}}$ Are all the pairs in the same class equivalent?
$\boxed{\color{green}{\checkmark}}$ Are none of the pairs in different classes equivalent?
It looks like you have all boxes ticked to me. |
18th derivative of $\arctan(x^2)$ at point $x=0$ | Hint. One may write
$$\frac{\mathrm d^{18}}{\mathrm dx^{18}} \arctan(x^2)=\frac{\mathrm d^{17}}{\mathrm dx^{17}} \frac{2x}{1+x^4}$$ then, by a partial fraction decomposition, one has
$$
\frac{2x}{1+x^4}=\frac12 \Re\left( \frac{i}{x-\frac{1-i}{\sqrt{2}}}\right)-\frac12 \Re\left( \frac{i}{x-\frac{1+i}{\sqrt{2}}}\right).
$$ Then using
$$
\frac{\mathrm d^n}{\mathrm dx^n} \frac{1}{x-a}=\frac{(-1)^n\:n!}{(x-a)^{n+1}}.
$$
one finally gets
$$
\left.\frac{\mathrm d^{18}}{\mathrm dx^{18}} \arctan(x^2)\right|_{x=0}=711\: 374\: 856\: 192\: 000.
$$ |
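The value can also be obtained directly from the Maclaurin series, which gives an independent cross-check:

```python
import math

# arctan(x**2) = sum_{m>=0} (-1)**m * x**(4m+2) / (2m+1); the x**18 term comes
# from m = 4 with coefficient 1/9, so the 18th derivative at 0 equals 18!/9.
value = math.factorial(18) // 9
assert value == 711_374_856_192_000
print(value)
```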
How to find the second derivative of $2x^3 + y^3 = 5$? | Your answer only requires some re-arrangement:
$$
2x^3 + y^3 = 5
$$
multiply by $x$ on both sides to get:
$$
2x^4 + xy^3 = 5 x
$$
or:
$$
xy^3 = 5x - 2x^4
$$
...(substitute this in your equation to get):
$$
\frac{-4(5x - 2x^4 )- 8x^4}{y^5}\to y''=\frac{-20x}{y^5}
$$ |
Characterization of Dirac measure using integrals | If there is a measurable cardinal $\kappa$, there is a $\{0,1\}$-valued measure on $\mathcal P(X)$, where $X$ is a set of cardinality $\kappa$, that is not a Dirac measure. It is relatively consistent with ZFC that there exist no measurable cardinals; it is believed to be relatively consistent that there are (but we can't prove it).
Bottom line: you're not going to be able to prove or disprove this.
Are you sure there wasn't some restriction on $X$? For example, if $X$ has cardinality $\le c$ it's easy. |
Complex Logarithm Derivation | This is a direct extension of the real situation: if $y=e^x$ then $x=\ln y$. The exponential and natural logarithm are inverses. In the complex case we want the exponential and logarithm to be inverses as well and so we define it as such and investigate the consequences. If $z=e^w$ then $w=\ln z$ and we see that the the logarithm is defined in terms of the complex exponential's argument. If we begin with $z=x+iy=re^{it}$ you can now plug this into the definition and see why things are the way they are. The only thing remaining is to observe that Euler's formula indicated that the complex exponential is cyclic in nature and this appears in the logarithm as the different branches. |
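The inverse relationship and the branch ambiguity can be seen concretely with Python's `cmath` (an illustration I'm adding, not part of the original answer):

```python
import cmath, math

z = 2 * cmath.exp(1j * math.pi / 3)  # z = R e^{i theta} with R = 2, theta = pi/3
# The argument is only defined up to 2*pi:
assert cmath.isclose(z, 2 * cmath.exp(1j * (math.pi / 3 + 2 * math.pi)))
# cmath.log picks the principal branch: log z = ln R + i*theta, theta in (-pi, pi].
w = cmath.log(z)
assert math.isclose(w.real, math.log(2)) and math.isclose(w.imag, math.pi / 3)
assert cmath.isclose(cmath.exp(w), z)  # exp and log are inverses on that branch
```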
Eigenvalues of the one-dimensional Schrödinger operator | Suppose $-f''(x)+u(x)f(x)=\lambda f(x)$, where $-f''+uf\in L^2(\mathbb{R})$ and $f\in L^2(\mathbb{R})$ is twice absolutely continuous. Let $\mu = \inf u(x)$. Then
\begin{align}
-\langle f'',f\rangle & = \lambda\|f\|^2-\langle uf,f\rangle \\
\|f'\|^2 & = \lambda \|f\|^2-\langle uf,f\rangle \\
& \le \lambda \|f\|^2-\inf u(x)\|f\|^2 \\
& = (\lambda-\inf u(x))\|f\|^2
\end{align}
Therefore $\lambda \ge \inf u(x)$, and in fact $\lambda > \inf u(x)$. (If $\lambda=\inf u(x)$, then $\|f'\|=0$, which forces $f=C$ and $f\in L^2$ forces $C=0$.) |
Evaluate the limit of $\lim_{n\to\infty}\frac{1}{n^2}\left(\frac{2}{1}+\frac{9}{2}+\frac{64}{9}+\cdots+\frac{(n+1)^{n}}{n^{n-1}}\right)$ | We have that by Stolz-Cesaro
$$\frac{a_n}{b_n}=\frac{\sum_{k=1}^{n}\frac{(k+1)^{k}}{k^{k-1}}}{n^2}$$
$$\frac{a_{n+1}-a_n}{b_{n+1}-b_n}=\frac{\frac{(n+2)^{n+1}}{(n+1)^{n}}}{(n+1)^2-n^2}=\frac{(n+2)^{n+1}}{(2n+1)(n+1)^{n}}=\frac{n+2}{2n+1}\left(1+\frac{1}{n+1}\right)^{n} \to \frac e 2$$ |
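A direct numeric check of the limit (not needed for the proof, just reassuring):

```python
import math

# (1/n**2) * sum_{k=1}^n (k+1)**k / k**(k-1) tends to e/2.
def ratio(n):
    # note (k+1)**k / k**(k-1) = k * (1 + 1/k)**k
    return sum(k * (1 + 1 / k) ** k for k in range(1, n + 1)) / n**2

assert abs(ratio(2000) - math.e / 2) < 1e-3
```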
Trace-Determinant Inequality for MLE Estimation of Multivariate Normal Distribution | Taking $\Sigma = p^{-1} I_p$ and $B =I_p$ then the LHS is $e^{-p^2} p^{p^2} = (e^{-p} p^p)^p$. Note that $e^{-p} p^p \to \infty$ as $p\to \infty$ while $\inf_{b >0} (2b)^b e^{-b} = e^{-1/2}$. Then the inequality can not hold. |
cogroup object in the category of pointed set. | The answer is yes: every cocartesian category admits cogroups. In the case of $\mathbf{Set}^*$, the category of pointed sets, a cogroup is nothing but a set-theoretic cogroup where the comultiplication, counit and inverse maps preserve the point.
In details a cogroup in $\mathbf{Set}^*$ amounts to the following data:
a pointed set $(X,x)$
a comultiplication map $m \colon X \to X \vee X$ where $$X \vee X= X \times \{0\} \cup X \times \{1\}/\sim $$ (with $\sim$ being the smallest equivalence relation such that $(x,0)\sim (x,1)$) such that $m(x)$ is the equivalence class of $(x,0)$ and $(x,1)$, that is $$m(x)=[(x,0)]=\{(x,i) \in X \times \{0\} \cup X \times \{1\} \colon (x,i) \sim (x,0)\}$$
counit is given by the unique morphism $$e \colon X \to \bullet$$ with value in the initial object of $\mathbf{Set}^*$, which is the singleton set having its only element as a chosen point
the coinverse map is a map $i \colon X \to X$ such that $i(x)=x$, i.e. it fixes the base point.
These data have to satisfy the axioms of a cogroup (which are the dual of groups' axioms).
As for an example of cogroup: in $\mathbf{Set}^*$ you have always the trivial cogroup $(\bullet,m^\bullet,e_\bullet,i_\bullet)$, having as support the singleton set, and whose structural morphisms (comultiplication, counit and coinverse) are given by the only possible morphisms from this set to itself (remind that $\bullet \vee \bullet \cong \bullet$). |
A question on almost disjoint collection | Let $T={}^{<\omega}E=\bigcup_{n\in\omega}{}^nE$, the set of functions from finite ordinals into $E$. Then $\langle T,\subseteq\rangle$ is a tree of height $\omega$. Note that for $s\in{}^mE$ and $t\in{}^nE$, $s\subseteq t$ iff $t\upharpoonright m=s$. Each $\sigma\in{}^\omega E$ defines a branch through $T$; for $\sigma\in{}^\omega E$ let $B_\sigma=\{\sigma\upharpoonright n:n\in\omega\}$, the set of nodes on that branch. Each $B_\sigma$ is a countably infinite subset of $T$. $|T|=\omega\cdot|E|=|E|$, so there is a bijection $\varphi:T\to E$; for each $\sigma\in{}^\omega E$ let $A_\sigma=\varphi[B_\sigma]$, and let $\mathscr{A}=\left\{A_\sigma:\sigma\in{}^\omega E\right\}$. I leave it to you to verify that $\mathscr{A}$ has the desired properties. |
Proving divisibility using induction | Hint $\ $ The inductive step follows by telescopy. $ $ Let $f(k)= a^k-1$
Then $\,\color{#0a0}{f(k\!+\!1)\!-\!f(k)} = a^{k+1}-a^k = (a\!-\!1)a^k$ is divisible by $\,a\!-\!1$
so if $\,\color{#c00}{f(k)}$ is divisible by $\,a\!-\!1\,$ then so too is $\,f(k\!+\!1) = \color{#0a0}{f(k+1)\!-\!f(k)} + \color{#c00}{f(k)}$
Remark $\, $ Summing the above makes the telescopic cancellation explicit
$$\begin{align}a^{k+1}-1 =\, &\,\ \ \color{#c00}{a^{k+1}\!-a^k}+\color{#0a0}{a^k\!-a^{k-1}} + \cdots+ a^1\!-a^0\\[.3em]
=\, &\,(a-1)\, (\color{#c00}{a^k} + \color{#0a0}{a^{k-1}}+\cdots+1),\ \ \text{or, more formally}\\ f(k\!+\!1)-f(0) =\,& \sum_{i=0}^k (f(i\!+\!1)\!-\!f(i)) = \sum_{i=0}^ka^i(a\!-\!1) = (a\!-\!1)\sum_{i=0}^k a^i\end{align} $$
The first equality above, expressing $f(k\!+\!1)-f(0)$ as the sum of its first differences, has a trivial (and obvious) proof by induction. Once you prove this general form inductively, you can use it as a lemma to inductively prove many special cases - which are ubiquitous. You can find many examples and much further discussion of telescopic induction in various prior posts. |
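A quick spot check of both the divisibility claim and the telescoping identity behind it:

```python
# a - 1 divides a**k - 1, via the explicit factorization from the telescoping sum.
for a in range(2, 20):
    for k in range(0, 30):
        assert (a**k - 1) % (a - 1) == 0
        assert a**k - 1 == (a - 1) * sum(a**i for i in range(k))
```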
Regulators and uniqueness | Different regularizations may lead to different regularized values. For instance
$$ \lim_{\lambda\to 0}\sum_{n\geq 0}n e^{-\lambda n} = +\infty $$
while the zeta regularization of $\sum_{n\geq 1} n $ gives the (in)famous value $\zeta(-1)=-\frac{1}{12}$.
If we take a hybrid between smoothed sums and the zeta regularization we have:
$$\sum_{n\geq 1}'' n = \sum_{N\geq 1}'\frac{N+1}{2} = \frac{\zeta(0)+\zeta(-1)}{2}=-\frac{7}{24}.$$
We also have a class of regularizations that depends on a positive parameter $\delta$: the Bochner-Riesz mean. There isn't a single regularization: a regularization is just a (somewhat arbitrary) way to extend the concept of convergence. About integrals, the Cauchy principal value can be interpreted as the Fourier transform of a distribution. About series, we may say that
$$ \sum_{n\geq 1}' a_n = L$$
à-la-Cesàro if $$\lim_{N\to +\infty}\frac{A_1+\ldots+A_N}{N}=L,$$
i.e. if the sequence of partial sums is converging on average. A convergent series is also a Cesàro-convergent series, but with such an extension
$$ {\sum_{n\geq 0}}'(-1)^n = \frac{1}{2}=\lim_{\lambda\to 0}\sum_{n\geq 0}(-1)^n e^{-\lambda n}$$
where $\sum_{n\geq 0}(-1)^n$ is not convergent in the usual sense. |
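The smoothed-sum value $1/2$ for $\sum(-1)^n$ can be checked numerically (my own illustration):

```python
import math

# Exponentially smoothed summation assigns 1/2 to 1 - 1 + 1 - ...:
# sum_{n>=0} (-1)**n * exp(-lam*n) = 1 / (1 + exp(-lam)) -> 1/2 as lam -> 0+.
for lam in [1.0, 0.1, 0.01]:
    s = sum((-1) ** n * math.exp(-lam * n) for n in range(100_000))
    assert math.isclose(s, 1 / (1 + math.exp(-lam)), rel_tol=1e-9)
assert abs(1 / (1 + math.exp(-0.01)) - 0.5) < 0.005
```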
A repeatedly rolling dice with square integers on faces | I did a long calculation to get that the value was 1/2 for each probability.
We can come up with a symmetry argument pretty easily to show this, too.
If the game lasts $n$ turns, we can show that there are the same number of ways to get even as odd by listing the die rolls. Take the pairings $1\leftrightarrow 4$, $16\leftrightarrow 25$.
If $n=1$, the game ends on the first roll, then the value is $9$ or $36$, equally likely even or odd.
Then, when $n>1$, the first roll is one of $1,4,16,25$; swap it with its paired number, and you get back a run of the game ending after $n$ turns with the opposite parity (since $1\equiv 4\pmod 3$ and $16\equiv 25\pmod 3$, this swap doesn't change when the game ends).
So for each $n$, the number of games that end in exactly $n$ rolls are exactly equal for even and odd results.
For (b), the game either ends on the first roll, $1/3$ of the time, or, since $1\equiv 4\equiv 16\equiv 25\pmod{3}$ and $9\equiv 36\equiv 0\pmod{3}$, the game ends when the first roll is not $9,25$ and you've gotten a total of three values from $1,4,16,25$.
So the expected length is:
$$\frac{1}{3}\cdot 1 + \frac{2}{3}(1+2E)$$
where $E$ is the expected number of rolls until you get one value from $1,4,16,25$. But $E=3/2$, so the expected length of the game is:
$$\frac{9}{3}=3$$ |
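A Monte Carlo check of both answers. I'm assuming the game is: repeatedly roll a die with faces $1,4,9,16,25,36$ and stop once the running total is divisible by $3$ (that is what the mod-3 discussion above suggests).

```python
import random

rng = random.Random(0)
faces = [1, 4, 9, 16, 25, 36]
lengths, evens, trials = 0, 0, 200_000
for _ in range(trials):
    total, rolls = 0, 0
    while True:
        total += rng.choice(faces)
        rolls += 1
        if total % 3 == 0:
            break
    lengths += rolls
    evens += (total % 2 == 0)
assert abs(lengths / trials - 3) < 0.05    # expected length 3
assert abs(evens / trials - 0.5) < 0.02    # even and odd totals equally likely
```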
How to calculate the variance of linear prediction parameters? | It seems like the best way to calculate the variance on the parameters returned by the LPSVD algorithm is to use the Cramér-Rao bound as the estimate. If anyone has a more elegant solution I'd like to hear it. |
Testing convergence of $\sum\limits_{n=2}^{\infty} \frac{\cos{\log{n}}}{n \cdot \log{n}}$ | This problem appears in the Nordic university-level mathematics team-competition, NMC, 2010, with solution at the beginning of the following pdf: http://cc.oulu.fi/~phasto/competition/2010/solutions2010.pdf.
I found it by searching for the phrase: series "cos(log(n))". |
Why is $ g'(s)=\frac{\partial f(t,y+s(x-y))}{\partial x}(x-y)$ | The function $f(t,x)$ depends on two variables: the first $t$ and the second $x$. The notation $\frac{\partial f}{\partial x}$ means the derivative with respect the second variable. When finding
$$
\frac{d}{ds}f(t,y+s(x-y))
$$
we have by the chain rule
$$\begin{align}
\frac{d}{ds}f(t,y+s(x-y))&=\frac{\partial f}{\partial \text{ first variable}}\frac{dt}{ds}+\frac{\partial f}{\partial \text{ second variable}}\frac{d}{ds}(y+s(x-y))\\
&=\frac{\partial f}{\partial x}(t,y+s(x-y))\,(x-y).
\end{align}$$ |
Find the argument of $-1/2+iT$ | Both are obviously correct up to multiples of $\pi$. Now because of $T>0$ the first one, $\pi-\arctan(2T)$, has a value in $[\frac\pi2,\pi]$ and the second one, $\frac\pi2+\arctan(\frac1{2T})$, has its value also in $[\frac\pi2,\pi]$, so both formulas give identical results. |
The implication sign of Group Closure | Well if $G$ is all you have, then there is no alternative to $x,y\in G$. However, for a subgroup $H<G$, taking an element $g_1g_2=h\in H$ does not mean that $g_1$ and $g_2$ are elements of $H$. For example, $g_1=g_2^{-1}$ implies that $h=1\in H$, but $g_1$ need not be in $H$. |
The inequality. Regional olympiad 2015 | We have
$$
\sum_{cyc} \sqrt{a+\frac{1}{a}}=\sum_{cyc} \sqrt{a+\frac{ab+bc+ca}{a}}=\sum_{cyc} \sqrt{a+\frac{bc}{a}+b+c}\ge\sum_{cyc} \sqrt{2\sqrt{bc}+b+c}=\sum_{cyc} \sqrt{\left(\sqrt b+\sqrt c\right)^2}=\sum_{cyc} \left(\sqrt b+\sqrt c\right)=2\left(\sqrt a+\sqrt b+\sqrt c\right)
$$
as desired.
Note: The inequality is due to the well known inequality between the arithmetic mean and the geometric mean, which states that:
$$
\frac{a_1+\cdots+a_n}{n}\ge\left(a_1\cdots a_n\right)^\frac{1}{n}
$$
Taking $n=2$, $a_1=a$ and $a_2=\frac{bc}{a}$ this yields:
$$
\frac{a+\frac{bc}{a}}{2}\ge\left(a\cdot\frac{bc}{a}\right)^\frac12=\sqrt{bc}\iff a+\frac{bc}{a}\ge2\sqrt{bc}
$$
This is, as shown above, enough to prove your inequality. |
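A numeric spot check. The first step substitutes $1 = ab+bc+ca$, so I'm assuming the problem's constraint is $ab+bc+ca=1$ for positive reals $a,b,c$:

```python
import math, random

rng = random.Random(1)
for _ in range(10_000):
    a, b = rng.uniform(0.05, 2.0), rng.uniform(0.05, 2.0)
    if a * b >= 1:
        continue  # need c = (1 - ab)/(a + b) > 0
    c = (1 - a * b) / (a + b)  # forces ab + bc + ca = 1
    lhs = sum(math.sqrt(t + 1 / t) for t in (a, b, c))
    rhs = 2 * (math.sqrt(a) + math.sqrt(b) + math.sqrt(c))
    assert lhs >= rhs - 1e-12
```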
gather terms of expression | When you expand, you can use the fact that $\mathbf{x}^{\top}\boldsymbol\Sigma^{-1}\boldsymbol\mu$ and $\boldsymbol\mu^{\top}\boldsymbol\Sigma^{-1}\mathbf{x}$ are the same. This follows from the symmetry of the inner product $(\mathbf{x},\mathbf{y}) \mapsto \mathbf{x}^{\top} \boldsymbol\Sigma^{-1}\mathbf{y}$. Therefore,
$$ (\mathbf{x}-\boldsymbol\mu)^{\top}\boldsymbol\Sigma^{-1}(\mathbf{x}-\boldsymbol\mu) + (\boldsymbol\mu - \boldsymbol\mu_{0})^{\top} \mathbf{S}^{-1} (\boldsymbol\mu - \boldsymbol\mu_{0}) \tag{$\star$}$$
expands into :
$$ \underbrace{ \boldsymbol\mu^{\top} \big( \boldsymbol\Sigma^{-1} + \mathbf{S}^{-1} \big) \boldsymbol\mu }_{(1)} - \underbrace{ 2 \boldsymbol\mu^{\top} \big( \boldsymbol\Sigma^{-1}\mathbf{x} + \mathbf{S}^{-1}\boldsymbol\mu_{0} \big) }_{(2)} + \underbrace{\ldots}_{(3)} $$
where $(1)$ are terms which are quadratic in $\boldsymbol\mu$, $(2)$ are terms which are linear in $\boldsymbol\mu$ and $(3)$ are terms which do not depend on $\boldsymbol\mu$. When you expand $(\star)$ (using bilinearity), you will always obtain quadratic and linear terms. Looking specifically for one or the other may help you instead of expanding and then grouping the terms.
If you want to differentiate this quantity with respect to $\boldsymbol\mu$, define :
$$ f : \boldsymbol\mu \in \mathbb{R}^n \mapsto \boldsymbol\mu^{\top}\big( \boldsymbol\Sigma^{-1} + \mathbf{S}^{-1} \big) \boldsymbol\mu - 2 \boldsymbol\mu^{\top}\big( \boldsymbol\Sigma^{-1} \mathbf{x} + \mathbf{S}^{-1} \boldsymbol\mu_{0} \big) \in \mathbb{R}. $$
Differentiating $f$ and setting its differential equal to $0$ is the same as finding the gradient of $f$ and setting this vector to $0$. In order to find the gradient of $f$ at $\boldsymbol\mu$, remember that it is defined by :
$$ \forall \mathbf{h} \in \mathbb{R}^n, \; f(\boldsymbol\mu + \mathbf{h}) = f(\boldsymbol\mu) + \mathbf{h}^{\top} \nabla f(\boldsymbol\mu) + o(\Vert \mathbf{h} \Vert). $$
Then, again, identifying in $f(\boldsymbol\mu + \mathbf{h})$ the terms that are linear in $\mathbf{h}$ will give you the gradient of $f$. |
Subgroup of $\mathbb{R}$ either dense or has a least positive element? | Let $G$ be an additive subgroup of $\mathbb{R}$. Suppose that there does not exists the least positive element, i.e. $\inf\{|x|:x\in G-\{0\}\}=0$. Then we can prove that $G$ is dense in $\mathbb{R}$ as follows: suppose that $y\in\mathbb{R}$, for any $\epsilon>0$, there exist $x\in G$ such that $|x|<\epsilon$. We can assume that $x>0$, otherwise, we can take $-x$ which belongs to $G$ since $G$ is an additive group. Then there exists an integer $n$ such that
$$nx\leq y<(n+1)x,$$
which implies that
$$|y-nx|<x<\epsilon,$$
where $nx\in G$ since $G$ is an additive group. This shows that $G$ is dense in $\mathbb{R}$. |
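A concrete instance (my own illustration): $G=\{m+n\sqrt 2\}$ has no least positive element, and its elements come arbitrarily close to any real number.

```python
import math

# Fractional parts n*sqrt(2) mod 1 approach any target in [0, 1),
# illustrating density of the subgroup {m + n*sqrt(2)}.
target = 0.123
best = min(abs((n * math.sqrt(2)) % 1 - target) for n in range(1, 20_001))
assert best < 1e-3
```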
Is $z \mapsto \operatorname{Re}(z)$ a linear map? | The map $\operatorname{Re}: \Bbb C \to \Bbb C$ is a linear map when we regard $\Bbb C$ as a real vector space. Indeed, we can write any element $z \in \Bbb C$ as $x + iy$ (that is, decompose it w.r.t. the real basis $\{1, i\}$). Then, for any $a, b \in \Bbb R$ and $x + iy, x' + iy' \in \Bbb C$, we have $$\operatorname{Re} [a(x + iy) + b(x' + iy')] = ax + bx' = a \operatorname{Re} (x + iy) + b \operatorname{Re} (x' + iy') .$$
On the other hand, this is not a linear map when we regard $\Bbb C$ as a complex vector space (that is, $\operatorname{Re}$ is not complex-linear). Indeed, we have
$$\operatorname{Re}(i) = 0 \neq i = i \operatorname{Re} 1 .$$ |
Sum of 3 smallest divisors add to 17 | Where does my solution go astray?
Note that, for $3\cdot 13^a$, the exponent $a$ can be larger than $1$.
We get
$$3\times 13^2=507,\quad 5^2\times 11=275,\quad 5\times 11^2=605$$ |
Characterization of open maps in terms of nets | Proposed alternate characterization:
A map $f:X\rightarrow Y$ is open if for any $(x,y)\in f$ and net $(y_\beta)$ in $Y$ with limit $y$, there is a net $(x_\alpha)$ in $X$ with limit $x$, and such that $f\left(\{x_\alpha\}\right)\subseteq \{y_\beta\}$.
First, let's show that this is sufficient to demonstrate that a map is open in the usual sense (taking open sets to open sets). Assume that it's not: then despite the hypothesis, there's an open set $U\subseteq X$ such that $f(U)$ is not open, i.e., there is some $y \in f(U)$ such that every neighborhood of $y$ contains a point not in $f(U)$. Let $x \in U$ be such that $f(x)=y$, and let $(y_\beta)$ be a net in $Y$ indexed by the neighborhoods of $y$ (ordered by inclusion), and such that $y_\beta \in \beta \setminus f(U)$ for each neighborhood $\beta$. That is, $\{y_{\beta}\}$ and $f(U)$ are disjoint. Clearly $(y_\beta)$ has limit $y$, since it is eventually in $V$ for any neighborhood $V$ of $y$ (in particular, $y_\beta \in \beta \subseteq V$ whenever $\beta \ge V$). But for any net $(x_\alpha)$ in $X$ such that $f(\{x_\alpha\})\subseteq \{y_\beta\}$, the sets $f(\{x_\alpha\})$ and $f(U)$ are disjoint; hence $x_\alpha \not\in U$ for all $\alpha$, and therefore $(x_\alpha)$ is not in $U$ eventually (or ever) and cannot have limit $x$. This is a contradiction, so the hypothesis above is sufficient to establish that $f$ is open.
To show that it is necessary, assume that $f$ is an open map. Let $(x,y)\in f$ and let $(y_\beta)$ be a net in $Y$ with limit $y$. Then $(y_\beta)$ is eventually in each neighborhood of $y$. Because $f$ is open, for each neighborhood $U$ of $x$, there is a neighborhood $V$ of $y$ such that $V\subseteq f(U)$; then $(y_\beta)$ is eventually in $f(U)$, and in particular there is some $\beta(U)$ in the index set of $(y_\beta)$ such that $y_{\beta(U)} \in f(U)$, and hence some $x_U\in U$ satisfies $f(x_U)=y_{\beta(U)}$. Let $(x_\alpha)$ be a net in $X$ indexed by the neighborhoods of $x$ (ordered by inclusion) such that $x_\alpha\in \alpha$ and $f(x_{\alpha})=y_{\beta(\alpha)}$ for each neighborhood $\alpha$ of $x$. For each neighborhood $U$ of $x$, $(x_\alpha)$ is eventually in $U$, since $x_\alpha \in \alpha \subseteq U$ whenever $\alpha \ge U$. We conclude that $(x_\alpha)$ has limit $x$, and since $f(\{x_\alpha\}) \subseteq \{ y_\beta \}$ as well, this completes the proof.
Since we've shown both implications, we have demonstrated that the alternate characterization in terms of nets is equivalent to the usual one in terms of open sets and/or neighborhoods. $\;\square$ |
Apparently simple integral | Substitute $x = y^2$ and use long division to simplify the integrand.
Thus, we have $dx = 2 y dy$. Substituting in the original integrand, we have:
$$\int \frac{x}{3+\sqrt{x}} dx = \int \frac{y^2}{3+y} 2y dy$$
i.e.,
$$\int \frac{x}{3+\sqrt{x}} dx = \int \frac{2y^3}{3+y} dy$$ |
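Carrying out the suggested long division gives $2y^3/(3+y)=2y^2-6y+18-54/(3+y)$, so an antiderivative in $y=\sqrt x$ is $F(y)=\tfrac23 y^3-3y^2+18y-54\ln(3+y)$. A numeric cross-check (my own sketch):

```python
import math

def F(y):
    # antiderivative of 2y**3/(3+y), obtained by long division
    return (2 / 3) * y**3 - 3 * y**2 + 18 * y - 54 * math.log(3 + y)

# Compare F(2) - F(1) with a Simpson-rule evaluation of ∫_1^4 x/(3+sqrt(x)) dx.
a, b, steps = 1.0, 4.0, 10_000
h = (b - a) / steps
s = sum((1 if i in (0, steps) else 4 if i % 2 else 2)
        * ((a + i * h) / (3 + math.sqrt(a + i * h)))
        for i in range(steps + 1))
assert abs(s * h / 3 - (F(2) - F(1))) < 1e-9
```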
Are "unlabeled" graphs isomorphism classes of labeled graphs? How can people say things like $K_n$ is "the" complete graph on $n$ vertices? | The math is more important than the formal description of the math.
Yes, if we're defining a (simple) graph as a pair $(V,E)$, where each $e \in E$ is a subset of $V$ of size $2$, then there are multiple graphs that are "complete graphs on $n$ vertices". So we could let $K_n$ denote the isomorphism class of complete graphs on $n$ vertices, and sometimes we refer to such an isomorphism class as an "unlabeled graph".
But in practice, it's convenient to make statements like "The complete graph on $n$ vertices has $\binom n2$ edges", and pretty much everyone does it when the vertex set is irrelevant, which is almost always. It would be more technically correct to say "Every complete graph on $n$ vertices has $\binom n2$ edges", but this statement is a bit confusing. It reminds us of the statement "Every tree on $n$ vertices has $n-1$ edges", but unlike the statement about trees, the statement about complete graphs doesn't actually need to consider multiple different cases. Once you've proven that one complete graph on $n$ vertices has $\binom n2$ edges, you know that they all do.
In actual usage, it makes the most sense to interpret $K_n$ as denoting an arbitrary complete graph, whose vertex set we don't care about, because everything we say will work no matter what it is.
P.S. I'd be careful about the word "labeled graph". This, in actual usage, can be understood to mean something like "graph with vertex set $\{1, 2, \dots, n\}$ for some $n$". For example:
When we ask "How many unlabeled trees are there on $n$ vertices?" it's fair to translate that into "How many isomorphism classes of trees with $n$ vertices are there?"
When we ask "How many labeled trees are there on $n$ vertices?" the answer is $n^{n-2}$, not $\infty$: we are asking about the number of trees on a fixed set of $n$ vertices, which doesn't need to be $\{1,2,\dots,n\}$ but might as well be. |
Example from "Beginner Logic" (E.J.Lemmon) | $\newcommand{\ass}[2]{ #1 \qquad &(#1)#2 \qquad & A }$
$\newcommand{\di}[4]{ #4 \qquad &(#1)#2 \lor #3 \qquad & #4 \lor I }$
$\newcommand{\mpp}[4]{ #4 \qquad &(#1)#2 \qquad & #3MPP }$
$\newcommand{\de}[5]{ #5 \qquad &(#1) #2 \lor #3 \qquad & #4 \lor E }$
$\newcommand{\cp}[5]{ #5 \qquad &(#1) #2 \to #3 \qquad & #4 CP }$
No, because disjunction introduction means that if either $P$ or $R$ (or both) then we can conclude that $P \lor R$. So, if $P \lor R$ and $R$ we could still have $\lnot P$.
So, the inverse is true: $P \vdash P \lor R$ but not necessarily $P \lor R \vdash P$. We would get there in this case: $$P \lor R, \ \lnot R \vdash P.$$ |
Explanation of the statement of Great Picard Theorem | It's the second possibility. And don't forget the “with one possible exception” part. Think about $e^{1/z}$, for instance. |
$P\cdot (Q \times P)$ where $P$ and $Q$ are vectors | $Q\times P$ is perpendicular to the plane spanned by $P$ and $Q$, so $P\cdot(Q\times P)=0$.
You are right that $(P\cdot Q)\times P$ does not make sense since $P\cdot Q$ is a scalar. |
Property of some composite Mersenne numbers | In $\Bbb{Z}[\zeta_3]=\Bbb{Z}[x]/(x^2+x+1)$, if $q \equiv 1\bmod 6$ is a prime number then $\Bbb{F}_q[x]/(x^2+x+1)$ isn't an integral domain thus $(q)$ isn't a prime ideal, and since $\Bbb{Z}[\zeta_3]$ is a PID it means
$(q)= (a+b\sqrt{-3})(a-b\sqrt{-3})$ and
$\Bbb{Z}[\zeta_3]/(a+b\sqrt{-3}) \cong \Bbb{F}_q$.
$2$ is a $6$-th power in $\Bbb{F}_q$ iff it is both a cube and a square in $\Bbb{Z}[\zeta_3]/(a+b\sqrt{-3})$, and in that case $2^{(q-1)/6}\equiv 1\bmod q $, i.e. $q\ | \ 2^{(q-1)/6}-1$.
Quadratic reciprocity says $2$ is a square when $q\equiv \pm 1\bmod 8$,
Cubic reciprocity says $2$ is a cube when $3|b$. |
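An empirical check. I'm relying on two classical facts here: for a prime $q\equiv 1\bmod 3$, $2$ is a cube mod $q$ iff $q=a^2+27b^2$ (Gauss; this is the "$3\mid b$" condition in $q=a^2+3b^2$), and $2$ is a square mod $q$ iff $q\equiv\pm1\bmod 8$.

```python
# For primes q = 1 (mod 6), 2 is a 6th power mod q exactly when it is both a
# square and a cube, and then q | 2^((q-1)/6) - 1.
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

for q in range(7, 1000):
    if not (is_prime(q) and q % 6 == 1):
        continue
    cube = any(a * a + 27 * b * b == q for b in range(1, 7) for a in range(1, 32))
    square = q % 8 in (1, 7)
    sixth = pow(2, (q - 1) // 6, q) == 1
    assert sixth == (cube and square), q
```

For instance $q=31$: $31=2^2+27$, $31\equiv-1\bmod 8$, and indeed $31\mid 2^5-1$.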
ODE for mixing problem | My attempt.
Over a period of time $\Delta t$, a quantity $a \Delta t$ will flow to the jar, from the source at temperature $T_{source}$. The jar has volume $L$, and for simplicity I define $n = L / a \Delta t$.
Assuming that the updated temperature is just a (volume)-weighted average, the updated temperature $T_1$ will read
$$ T_1 = \frac{n T_0 + T_{source}}{n+1}$$
Where $T_0$ stands for the temperature before the quantity of fluid is added.
The temperature difference $\Delta T = T_1 - T_0$ then equals
$$ \Delta T = \frac{T_{source} - T_0}{n+1}$$
For small $\Delta t$, $n = L/(a\Delta t)$ is large, so $$ \Delta T \approx \frac{(T_{source} - T_0)\, a \Delta t}{L}, $$ and in the limit $\Delta t \to 0$,
$$ \frac{\Delta T } {\Delta t} \to \dot{T}$$
And one recovers the ODE
$$ \dot{T} = \frac{(T_{source} - T)a}{L}$$
The larger the jar, the slower the temperature change, as one would expect. The cooler the temperature at the inlet, the faster the change. |
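A quick simulation check (illustrative parameter values of my choosing): a forward-Euler integration of the ODE reproduces the exact exponential solution.

```python
import math

# dT/dt = (T_source - T) * a / L, exact solution
# T(t) = T_source + (T0 - T_source) * exp(-a t / L).
T_source, T0, a, L = 80.0, 20.0, 0.5, 10.0  # hypothetical values
dt, t_end = 1e-4, 5.0
T = T0
for _ in range(int(t_end / dt)):
    T += (T_source - T) * a / L * dt
exact = T_source + (T0 - T_source) * math.exp(-a * t_end / L)
assert abs(T - exact) < 1e-2
```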
When is $a^2+b^2=x^2+y^2$, where $a\ne b\ne x\ne y$? | The relation $$a^2+b^2=x^2+y^2$$ happens if the points $(\pm a, \pm b)$ and $(\pm x, \pm y )$ are on the same circle.
Thus there are infinitely many solutions for each $(a,b)\ne (0,0)$ |
Why is $\int_{-\infty}^\infty |g(x)|\,dx = \int_0^\infty \mu(\{x : |g(x)| \ge t\})\,dt$ true? | Observe
\begin{align}
\int^\infty_0 \mu\{x \mid |g(x)|\geq t\}\ dt =& \int^\infty_0 \int^\infty_{-\infty} \chi_{\{x \in \mathbb{R} \mid |g(x)| \geq t\}}\ dxdt\\
=&\ \int^\infty_{-\infty} \int^\infty_0\chi_{\{x \in \mathbb{R} \mid |g(x)| \geq t\}}\ dtdx\\
=&\ \int^\infty_{-\infty} \int^{|g(x)|}_0 \ dtdx \\
=&\ \int^\infty_{-\infty} |g(x)|\ dx.
\end{align}
Edit: Note that the interchanging of the integrals is allowed because everything is non-negative. |
Trying to understanding the proof of the fact that Kazhdan property (T) implies expanders. | The statement is quite obvious if you look at it closely. The group $\Gamma$ acts on $V$ (via left multiplication) through the subgroup $\Gamma/N$ of the full permutation group $S_n$, where $n=|V|$, acting by permuting the elements of $V$. Now, $S_n$ clearly preserves $H_0$. It also preserves the constant function $\chi_V$. Now, given any function $f$ on $V$ you consider the difference
$$
\bar f= f-\frac{1}{n}\sum_{x\in V} f(x)
$$
By construction $\bar f$ is in $H_0$, while $f- \bar f$ is a constant function, which, thus, belongs to ${\mathbb C} \chi_V$. Hence, we obtained the required direct sum decomposition of $H$ which is invariant under $S_n$ and, hence, under $\Gamma$. |
What is the formular to find the total number of ways to build a ring out of square tiles with a hole in the middle? | Given $T$ tiles, we will need to put $4$ in the corners and (hopfully) there will be an even number of tiles to build the walls. So let $t=\frac{T-4}{2}$. Now it is easier to think about the internal rectangle that they encircle; In your example $t=4$ and this could give rise to rectangles $1 \times 3$,$2 \times 2$ or $3 \times 1$ and with this double counting the sequence would be $t-1$
\begin{eqnarray*}
1,2,3,4,\cdots
\end{eqnarray*}
Removing the double counting gives the sequence
\begin{eqnarray*}
1,1,2,2,3,3,4,\cdots
\end{eqnarray*}
or $ \lfloor \frac{t}{2} \rfloor $. |
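Counting the internal $w\times h$ rectangles directly confirms the two sequences:

```python
# Unordered pairs (w, h) with w, h >= 1 and w + h = t.
for t in range(2, 60):
    ordered = [(w, t - w) for w in range(1, t)]
    unordered = {tuple(sorted(p)) for p in ordered}
    assert len(ordered) == t - 1       # with double counting
    assert len(unordered) == t // 2    # without, i.e. floor(t/2)
```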
Help with expected frecuencies | You are provided experimental results in the form of sample values and frequencies, and the type of distribution to which they should belong.
From the sample you estimate the parameters for that distribution.
Using those parameters you then calculate theoretical results, and compare these to the experimental.
For the "normal distribution" experiment, the sample results are collected by intervals. So you obtain the sample mean, you sum the product of frequency and average of each interval, then divide by the sum of frequencies to obtain $150.625$.
$$ \dfrac{\frac{6(135+140)}2+\frac{18(140+145)}2+\frac{32(145+150)}2+\frac{61(150+155)}2+\frac{22(155+160)}2+\frac{5(160+165)}2}{6+18+32+61+22+5}$$
Which is the usual $\bar x = \frac{\sum_i x_i f_i}{\sum_i f_i}$ where $x_i=\tfrac12(x_{i,\min}+x_{i,\max})$
The sample variance is obtained similarly. |
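The same grouped-mean computation in code:

```python
# Grouped-sample mean from interval midpoints and frequencies.
edges = [135, 140, 145, 150, 155, 160, 165]
freqs = [6, 18, 32, 61, 22, 5]
mids = [(lo + hi) / 2 for lo, hi in zip(edges, edges[1:])]
mean = sum(f * m for f, m in zip(freqs, mids)) / sum(freqs)
assert mean == 150.625
```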
Properties of exponential function to solve $3x=e^x$ | Let $f(x) = 3x - e^x$. Then $f(0) = -1$ and $f(1) = 3 - e > 0$, so by the IVT $f$ has a zero in $(0,1)$. But $\lim_{x\to\infty}f(x) = -\infty$, so there must be another zero in $(1,\infty)$. On the other hand, $f(x) < 0$ for all $x < 0$, so there are no zeros in $(-\infty,0)$. Thus $f$ has exactly two zeros.
There will not always be two solutions. Consider $x = e^x$ and $x + 1 = e^x$. |
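The two roots guaranteed by the argument can be located numerically (my own sketch, plain bisection):

```python
import math

def bisect_root(f, lo, hi, tol=1e-12):
    # plain bisection; assumes f changes sign on [lo, hi]
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

f = lambda x: 3 * x - math.exp(x)
r1 = bisect_root(f, 0.0, 1.0)   # f(0) < 0 < f(1)
r2 = bisect_root(f, 1.0, 3.0)   # f(1) > 0 > f(3)
assert 0 < r1 < 1 < r2 and abs(f(r1)) < 1e-9 and abs(f(r2)) < 1e-9
```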
What is the intuition behind Gramian method for linear independence? and Is there $simple$ proof of it? | Consider integrable functions of the form $f_{i}:[a,b]\to\mathbb{R}$ with $i:1,\,2\,\cdots,\,n$. Now assume that the set containing the functions $f_i$ is linearly dependent, then there exist real coefficients $c_i$ (with not all zero) such that
$$
\sum_{i=1}^{n}c_{i}f_i(x)=0.
$$
Define a vector containing the coefficient as $\gamma=[c_1,\,c_2,\cdots,\,c_n]$, and a vector containing the functions as $F(x)=[f_1(x),\,f_2(x),\cdots,\,f_n(x)]$. Take the squared $\mathcal{L}_2$-norm of the previous expression, which yields
$$
\int_{a}^{b}\Big(\sum_{i=1}^{n}c_{i}f_i(x)\Big)^2\text{d}x=
\gamma\int_{a}^{b}F^\top(x)F(x)\text{d}x\,\gamma^\top=\gamma\, G\,\gamma^\top=0.
$$
This means that the (gramian) matrix $G$ is singular. For the contrary, suppose that the set is linearly independent, then such matrix has to be non-singular, otherwise there exist constants $c_i$ which make the quadratic form zero. That is why the non-singularity of the gramian is a sufficient and necessary condition for linear independence. |
Nilradical of a commutative ring | The nilradical consists of all nilpotent elements of the ring, that is, those elements $a$ such that there exists $n>0$ with $a^n=0$.
The proof can be found in every textbook on commutative algebra. One part is essentially obvious: suppose $a$ is nilpotent and let $P$ be a prime ideal. Then $a^n=0$ for some $n>0$, so $0=a^n=aa^{n-1}$, which implies that either $a\in P$ or $a^{n-1}\in P$; continuing by induction, we conclude that $a\in P$ anyway.
The other direction is a bit more complicated (and more interesting). It consists in showing that if $a$ is not nilpotent, then there is a prime ideal $P$ such that $a\notin P$.
For instance, in the ring $\mathbb{Z}/24\mathbb{Z}$ there are nilpotent elements, namely $6$, $12$ and $18$, besides $0$. The nilradical is indeed $\{0,6,12,18\}$ (a nilpotent element must have both $2$ and $3$ in its factorization, when looked at as an integer). |
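The $\mathbb{Z}/24\mathbb{Z}$ example is small enough to verify by brute force:

```python
# Nilradical of Z/24Z: elements a with a**n = 0 (mod 24) for some n >= 1.
nilradical = {a for a in range(24) if any(pow(a, n, 24) == 0 for n in range(1, 25))}
assert nilradical == {0, 6, 12, 18}
```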
Universal covering space of $S_{2}/\sim$, where $\sim$ is certain relation. | Your space is homotopy equivalent to the sphere $S^2$ with a line segment (in blue in the picture) joining $p$ and $q$:
By continuously deforming the line segment, you can fuse the endpoints and you end up with $S^2 \vee S^1$, the wedge of a sphere and a circle at a point (I've drawn the circle outside the sphere, it doesn't change anything):
Since the sphere $S^2$ is simply connected, the universal cover of this space looks like a "necklace". It's an infinite (in both directions) number of spheres, and each sphere is linked to the next by a line segment:
Each sphere is mapped homeomorphically onto the $S^2$ component of $S^2 \vee S^1$, and each segment is projected onto the circle component by identifying endpoint. |
$A^5 = A$ is diagonalizable, $A - \overline A^{T}$ is diagonalizable | If minimal polynomial of $A$ has distinct roots, then $A$ is diagonalizable. (This is a standard theorem in linear algebra.) So 1 is true since $x^{5} -x = x(x-1)(x+1)(x-i)(x+i)$ has distinct roots and the minimal polynomial of $A$ should divide this.
For 2, you don't need the condition $A^{5} = A$. The matrix $B = A -\bar{A}^{T}$ is skew-Hermitian (it satisfies $\bar{B}^{T} = -B$), so $C = iB$ is Hermitian; Hermitian matrices are always diagonalizable, and so is $B = -iC$.
Without mentioning anything about minimal polynomial, let's try to prove it directly. (I think this proof may work for any matrices with distinct eigenvalues)
Let's define $V_{\lambda} = \{x\in \mathbb{C}^{n}\,:\, Ax = \lambda x\}$, eigenspace of $A$ for the eigenvalue $\lambda\in \{0, \pm 1, \pm i\}$. Then diagonalizability of $A$ is equivalent to
$$
\mathbb{C}^{n} = \bigoplus_{\lambda} V_{\lambda}
$$
In fact, the sum of eigenspaces for distinct eigenvalues is always direct (eigenvectors belonging to distinct eigenvalues are linearly independent). Hence we only need to show that $\sum_{\lambda} V_{\lambda} = \mathbb{C}^{n}$.
For any given $v \in \mathbb{C}^{n}$, define
$$
v_{0} = v - A^{4}v\\
v_{1} = Av +A^{2}v + A^{3}v +A^{4}v\\
v_{-1} = Av - A^{2}v + A^{3}v - A^{4}v\\
v_{i} = Av - iA^{2}v - A^{3}v + iA^{4}v \\
v_{-i} = Av + iA^{2}v - A^{3}v - iA^{4}v.
$$
Then $v_{\lambda}\in V_{\lambda}$ by direct computation with $A^{5} = A$, and
$$
v= v_{0} + \frac{1}{4} (v_{1}- v_{-1} -iv_{i} + iv_{-i})
$$
proves the desired claim. |
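The decomposition can be checked numerically. A sketch in NumPy, using a hypothetical matrix $A$ built to satisfy $A^5=A$ (a conjugated diagonal with eigenvalues drawn from $\{0,\pm1,\pm i\}$):

```python
import numpy as np

rng = np.random.default_rng(0)
# Build a (generally non-normal) A with A^5 = A by conjugating a suitable diagonal.
P = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
A = P @ np.diag([0, 1, -1, 1j, -1j]) @ np.linalg.inv(P)
assert np.allclose(np.linalg.matrix_power(A, 5), A)

v = rng.standard_normal(5) + 1j * rng.standard_normal(5)
A1, A2, A3, A4 = A @ v, A @ A @ v, A @ A @ A @ v, A @ A @ A @ A @ v
v0  = v - A4
v1  = A1 + A2 + A3 + A4
vm1 = A1 - A2 + A3 - A4
vi  = A1 - 1j * A2 - A3 + 1j * A4
vmi = A1 + 1j * A2 - A3 - 1j * A4

# Each piece lands in the claimed eigenspace ...
for lam, w in [(0, v0), (1, v1), (-1, vm1), (1j, vi), (-1j, vmi)]:
    assert np.allclose(A @ w, lam * w)
# ... and the pieces recombine to v.
assert np.allclose(v, v0 + (v1 - vm1 - 1j * vi + 1j * vmi) / 4)
```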
By substituting $z = h(t)$ show that $\delta(h(t))=\sum\limits_{i}\frac{\delta(t−t_i)}{\mid h^{\prime}(t_i)\mid}$ | I think the way to think about this is as a local phenomenon which can be extended easily. If $h$ is differentiable and $h'$ is nowhere zero, then the inverse function theorem says that $h$ is locally invertible. (The true inverse function theorem is a little bit stronger than this, but I'm just going with this version.) Therefore if we consider the interval $[a,b]$ (which is small enough so that $h$ is invertible on it), then by doing a change of variable we get
$$\int_a^b \delta(h(t)) f(t)\,dt = \int_{h(a)}^{h(b)} \delta(z) f(h^{-1}(z))\frac{1}{h'(h^{-1}(z))}\,dz.$$
If $0\in [h(a),h(b)]$, then this becomes $\frac{f(h^{-1}(0))}{h'(h^{-1}(0))}$; when $h$ is decreasing the limits come out reversed, and undoing the swap is what produces the absolute value $|h'(t_i)|$ in the final formula. If you sum all of the local bits (the integrals over the small intervals $[a,b]$ around the roots $t_i$ of $h$), you have your result.
Side note: This approach really bothers me because you have to first give meaning to $\delta\circ h$ in the first place. As a distribution, this isn't defined a priori, but the integral approach makes it easy to argue why this should be the definition of $\delta\circ h$. |
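One way to make the formula concrete is to replace $\delta$ with a narrow Gaussian and integrate numerically. A sketch with the hypothetical test case $h(t)=t^2-1$ (roots $\pm1$, $|h'(\pm1)|=2$) and $f(t)=e^t$, for which the formula predicts $\frac{e+e^{-1}}{2}=\cosh(1)$:

```python
import numpy as np

sigma = 1e-2                       # width of the Gaussian approximating delta
t = np.linspace(-3, 3, 600_001)
dt = t[1] - t[0]
h = t**2 - 1                       # roots at t = ±1, |h'(±1)| = 2
f = np.exp(t)

# delta_sigma(x) = exp(-x^2 / (2 sigma^2)) / (sigma sqrt(2 pi))
delta_approx = np.exp(-h**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
integral = np.sum(delta_approx * f) * dt   # endpoints contribute ~0, so a plain sum is fine

predicted = (np.exp(1) + np.exp(-1)) / 2   # sum_i f(t_i) / |h'(t_i)| = cosh(1)
print(integral, predicted)                 # both ≈ 1.5431
```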
How to identify limit ordinals? | There can be several answers, depending on what you mean by "identify".
The simplest answer would be to look at the Cantor normal form of $\alpha$, and see if it has any finite ordinal there. If the answer is no, then $\alpha$ is a limit ordinal (or $0$, which may or may not be a limit ordinal depending on your convention) and otherwise it is a successor ordinal.
Another answer would be that $\alpha$ is a limit ordinal if and only if for every $\beta<\alpha$, $\beta+1<\alpha$ (with the same caveat about $0$ as before). Although it seems not to be exactly what you are looking for. |
Prove that the following function is $C^\infty$ in the point $\xi=0$ | Hint. Use Taylor's formula with integral remainder for $\xi \mapsto e^{i\xi \lambda}$, then try to use some "regularity under the integral sign" theorem (differentiation under the integral sign). Conclude by induction. |
Find A,B such that the given function is a solution to the given differential equation | It's easier than you think.
$$x(t)= A\sin(t)+B\cos(t)$$
Now differentiate to find $x'(t)$
$$x'(t)= A\cos(t) - B\sin(t) $$
Sub this into your original equation and solve for A and B
$$A\cos(t) - B\sin(t) - 3A\sin(t) - 3B\cos(t) = \frac{\cos(t)}{2}$$
Simplifying to
$$\cos(t)[A - 3B - \frac12] - \sin(t)[B + 3A] = 0$$
Since the right-hand side is $0$ and this must hold for all $t$, the coefficients of $\cos(t)$ and $\sin(t)$ must each equal $0$.
Hence
$A - 3B - \frac12 = 0 $ and $B + 3A = 0$
Solve to get $A = \frac{1}{20}$ and $B = \frac{-3}{20}$ |
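A quick numerical check (the equation, read off from the substitution above, is $x'-3x=\frac{\cos t}{2}$):

```python
import math

A, B = 1 / 20, -3 / 20

def x(t):
    return A * math.sin(t) + B * math.cos(t)

def xprime(t):
    return A * math.cos(t) - B * math.sin(t)

# Verify x' - 3x = cos(t)/2 at a few sample points.
for t in (0.0, 0.7, 1.9, 3.3):
    assert abs(xprime(t) - 3 * x(t) - math.cos(t) / 2) < 1e-12
print("checks out")
```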
Hausdorff Lindelöf Space is Regular? | The argument does not go through. A quite complicated construction called the “irrational-slope topology” is given by Steen and Seebach (1978) with the following properties:
Hausdorff; but
not regular; yet
second-countable (and hence Lindelöf).
I think the problem is as follows. In proving that every Lindelöf regular space is normal, you first take two disjoint closed sets, say $A$ and $B$, and separate each $a\in A$ from $B$ (using regularity) by an open neighborhood of $a$ whose closure does not cut into $B$ and vice versa.
But!
If the space is only Hausdorff and you're trying to prove regularity along the same lines (i.e., show that if $a\notin B$ and $B$ is closed, then $a$ and $B$ can be separated by open neighborhoods), then the closure of the analogous open neighborhood of $a$ that separates it from some $b\in B$ (using the Hausdorff property) may actually cut into $B$ at some other point $b'\in B$! That's why an analogous argument breaks down. |
Finding range of the expression $\sum_{cyc}\frac{|x+y|}{|x|+|y|}$ where $x,y,z\in\mathbb{R}-\{0\}$ | (Fill in the gaps as needed. If you're stuck, show your work and explain why you're stuck.)
Hint: If $xy \geq 0$, show that $ \frac{ | x+y | } { |x| + |y| } = 1$.
If $ xy < 0$, show that $ 0 \leq \frac{ | x+y | } { |x| + |y| } < 1 $.
Hence, conclude that
$$ 1 \leq \sum \frac{ | x+y | } { |x| + |y| } \leq 3. $$ |
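Both endpoints are actually attained, which is easy to confirm with a quick Python spot check (random sampling plus the two extremal examples):

```python
import random

def S(x, y, z):
    # the cyclic sum |x+y|/(|x|+|y|) + |y+z|/(|y|+|z|) + |z+x|/(|z|+|x|)
    return sum(abs(a + b) / (abs(a) + abs(b)) for a, b in ((x, y), (y, z), (z, x)))

# Both endpoints are attained:
print(S(1, 1, 1))    # 3.0  (all three terms equal 1)
print(S(1, -1, 1))   # 1.0  (two terms vanish, one equals 1)

# Random sampling never leaves [1, 3]:
random.seed(0)
vals = [S(random.uniform(-5, 5), random.uniform(-5, 5), random.uniform(-5, 5))
        for _ in range(10_000)]
print(min(vals), max(vals))
```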
A Generalized Water Filling Problem: $\max_{1^Tx\in[0,1]}\sum_j\ln(\sum_ia_{i,j}x_i)$ | As far as I understand, the problem reads as
$$
\max \sum_{j\in J}\ln(a_j^{\top} x)
$$
subject to
$$
0 \le \left< x,1 \right>\\
\left< x, 1 \right> \le 1
$$
This can be formulated as a Lagrangian multipliers with slack variables problem
$$
L(x,\lambda, \epsilon) = \sum_{j\in J} \ln{a_j^{\top} x}+\lambda_1(\left< x,1 \right>-\epsilon_1^2)+\lambda_2(\left< x,1 \right>-1+\epsilon_2^2)
$$
with stationary points given by the solutions for
$$
\nabla_{x_i}L = \sum_{j\in J}\frac{a_j^i}{a_j^{\top}x}+\lambda_1+\lambda_2 = 0\\
\nabla_{\lambda_1}L = \left< x,1 \right>-\epsilon_1^2=0\\
\nabla_{\lambda_2}L = \left< x,1 \right>-1+\epsilon_2^2=0\\
\nabla_{\epsilon_1}L = \lambda_1\epsilon_1 = 0\\
\nabla_{\epsilon_2}L = \lambda_2\epsilon_2 = 0
$$
I hope this helps. |
Calculate $\int_{γ}\frac{\sin z}{z^4}dz$ | Use the residue theorem: $z=0$ is the only singularity inside $\gamma$, so
$$\int_{\gamma}\frac{\sin(z)}{z^4}dz=2\pi i \operatorname{Res}(f;z=0)=2\pi i \left(-\frac{1}{6}\right)=-\frac{i\pi}{3}$$
Alternatively, by Cauchy's integral formula for derivatives,
$$\int_\gamma\frac{g(z)}{(z-z_0)^{n+1}}dz=\frac{2\pi i}{n!}g^{(n)}(z_0)$$
where $g(z)=\sin(z)$ is holomorphic at $z_0=0$. Then
$g^{(3)}(0)=-\cos(0)=-1$, hence $$\int_{\gamma}\frac{\sin(z)}{z^4}dz=\frac{2\pi i}{3!}(-1)=-\frac{i\pi}{3} $$ |
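Both computations can be double-checked by evaluating the contour integral numerically; a sketch assuming $\gamma$ is the positively oriented circle $|z|=2$ (any circle around the origin gives the same value):

```python
import numpy as np

theta = np.linspace(0, 2 * np.pi, 4001)
z = 2 * np.exp(1j * theta)          # gamma: |z| = 2, positively oriented
dz = 2j * np.exp(1j * theta)        # dz/dtheta

# The trapezoid rule is spectrally accurate for smooth periodic integrands.
vals = np.sin(z) / z**4 * dz
integral = np.sum((vals[:-1] + vals[1:]) / 2) * (theta[1] - theta[0])

print(integral, -1j * np.pi / 3)    # both ≈ -1.0472j
```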
Integral curves and null geodesics | Actually, by setting $K'=\alpha K$ and calculating the covariant derivative with respect to $K'$ we have
$$\nabla_{K'}K'=\alpha K(\alpha)K+\alpha^2\nabla_K K=(\alpha K(\alpha)+\lambda\alpha^2)K.$$
So if $\alpha$ satisfies $\alpha K(\alpha)+\lambda\alpha^2=0$, we have the desired vector field, whose corresponding integral curve is a geodesic. |
Showing an inequality with ln | How about we exponentiate both sides? We get
$$e^{(\ln(x)+\ln(y))/2}=e^{\ln(x)/2}e^{\ln(y)/2}=\sqrt{xy}$$
and
$$e^{\ln((x+y)/2)}=(x+y)/2$$
Now the result is immediate per the AM-GM inequality. |
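In other words, since $e^t$ is increasing, $\frac{\ln x+\ln y}{2}\le\ln\frac{x+y}{2}$ is equivalent to $\sqrt{xy}\le\frac{x+y}{2}$. A quick numerical spot check:

```python
import math
import random

random.seed(1)
for _ in range(10_000):
    x, y = random.uniform(1e-6, 100), random.uniform(1e-6, 100)
    lhs = (math.log(x) + math.log(y)) / 2
    rhs = math.log((x + y) / 2)
    assert lhs <= rhs + 1e-12      # AM-GM, after exponentiating both sides
print("inequality holds on all samples")
```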
Computing the double integral $\int_{-\infty}^{\infty}\int_{0}^{\infty} \cos k\xi \cdot u(\xi ) \, dkd\xi$ | As I pointed out in the comment, the integral does not converge in its current form. So let us consider the iterated integral with the order of integration reversed:
$$ I = \int_{0}^{\infty}\int_{-\infty}^{\infty} \cos(k\xi)u(\xi) \, \mathrm{d}\xi\mathrm{d}k, $$
where $c = \sqrt{a_0/(6+a_0)}$. Although a direct computation is available, a more natural solution is to relate the integral to Fourier transform of $u$. Indeed, if we define $\mathcal{F}[u](\xi) = \int_{\mathbb{R}} u(x)e^{-i\xi x} \, \mathrm{d}x$, then it is well-known that the inverse transform $\mathcal{F}^{-1}$ is given by $\mathcal{F}^{-1}[u](x) = \frac{1}{2\pi} \int_{\mathbb{R}} u(\xi)e^{ix\xi} \, \mathrm{d}\xi$, and so,
$$ I = \frac{1}{2} \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \cos(k\xi) u(\xi) \, \mathrm{d}\xi\mathrm{d}k = \pi (\mathcal{F}^{-1}\mathcal{F}u)(0) = \pi u(0) = \pi a_0. $$
Alternatively, we may adopt some regularization method. In this approach, we consider the following function
$$ I(\epsilon) = \int_{0}^{\infty}\int_{-\infty}^{\infty} \cos(k\xi)e^{-\epsilon k}u(\xi) \, \mathrm{d}\xi\mathrm{d}k $$
for $\epsilon \geq 0$. Of course, $I(0)$ corresponds to the original integral. Also, from the previous remark, we know that $I(\epsilon) \to I(0)$ as $\epsilon \to 0^+$. (This is due to a version of Abel's theorem.) Moreover, for $\epsilon > 0$ the integrand of $I(\epsilon)$ is integrable, and so, we may apply Fubini's theorem to compute
\begin{align*}
I(\epsilon)
&= \int_{-\infty}^{\infty} \int_{0}^{\infty} \cos(k\xi)e^{-\epsilon k}u(\xi) \, \mathrm{d}k\mathrm{d}\xi \\
&\qquad = \int_{-\infty}^{\infty} \frac{\epsilon}{\epsilon^2 + \xi^2} u(\xi) \, \mathrm{d}\xi
\stackrel{(\xi = \epsilon s)}= \int_{-\infty}^{\infty} \frac{1}{1 + s^2} u(\epsilon s) \, \mathrm{d}s.
\end{align*}
Then letting $\epsilon \to 0^+$ and applying the dominated convergence theorem to interchange the order of integration and limit gives
\begin{align*}
I(0)
= \int_{-\infty}^{\infty} \lim_{\epsilon \to 0^+} \frac{1}{1 + s^2} u(\epsilon s) \, \mathrm{d}s
= \int_{-\infty}^{\infty} \frac{1}{1 + s^2} u(0) \, \mathrm{d}s
= \pi a_0.
\end{align*} |
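The regularized limit is easy to test numerically. A sketch with the hypothetical choice $u(\xi)=e^{-\xi^2}$ (so $u(0)=1$ and the limit should be $\pi$); the substitution $s=\tan t$ turns the last integral into one over a finite interval:

```python
import numpy as np

def I(eps, n=200_001):
    # I(eps) = ∫ u(eps*s)/(1+s^2) ds, with s = tan(t) so that ds/(1+s^2) = dt
    t = np.linspace(-np.pi / 2 + 1e-9, np.pi / 2 - 1e-9, n)
    u = np.exp(-(eps * np.tan(t)) ** 2)        # u(xi) = exp(-xi^2), u(0) = 1
    return np.sum((u[:-1] + u[1:]) / 2) * (t[1] - t[0])

for eps in (0.5, 0.1, 0.01):
    print(eps, I(eps))      # tends to pi * u(0) = pi as eps -> 0+
```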
Converse of a consequence to the FT of Calculus | The claim is false. There exist functions which are differentiable everywhere, but whose derivatives are not continuous. (Example on Math.SE.) Let $F_0$ be such a function, and $f$ its derivative. Given a pair $(x_0,y_0)$, set $$F(x) = F_0(x) - F_0(x_0) + y_0.$$ We then have $$F'(x) = F_0'(x) = f(x)$$ and $$F(x_0) = F_0(x_0) - F_0(x_0) + y_0 = y_0,$$ as required. (Uniqueness holds since antiderivatives are always unique up to a constant offset.) |
How can I prove a limit is infinity? | You have to show: $\forall \epsilon >0 \; \exists \delta>1$ such that: $\delta>x>1 \implies$ $\frac{x^2}{x-1} > \epsilon$.
Note that the function $f(x)=\frac{x^2}{x-1}$ is decreasing on $(1,2]$.
Therefore it is sufficient to find a $1 < \delta <2$ with the property that $\frac{\delta^2}{\delta-1} \ge \epsilon$. The condition above is then automatically true for any $1 < x < \delta$, since the function is decreasing.
For $\epsilon>4$, $ \delta = \frac{1}{2} \left(\epsilon-\sqrt{\epsilon-4} \sqrt{\epsilon}\right)$ satisfies this condition with equality: it is the root of $\delta^2-\epsilon\delta+\epsilon=0$ that lies in $(1,2)$. (For $\epsilon \le 4$, any $\delta\in(1,2)$ works, because $\frac{x^2}{x-1}>4$ on $(1,2)$.) |
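Numerically: $\frac{\delta^2}{\delta-1}=\epsilon$ is the quadratic $\delta^2-\epsilon\delta+\epsilon=0$, and taking the root that lies in $(1,2)$ gives a valid $\delta$ for every $\epsilon>4$. A quick check:

```python
import math

def delta(eps):
    # the smaller root of d^2 - eps*d + eps = 0; it lies in (1, 2) for eps > 4
    return (eps - math.sqrt(eps - 4) * math.sqrt(eps)) / 2

for eps in (5.0, 10.0, 1e4):
    d = delta(eps)
    assert 1 < d < 2
    assert abs(d * d / (d - 1) - eps) < 1e-6 * eps   # equality up to rounding
    # every x in (1, d) then satisfies x^2/(x-1) > eps, since f is decreasing
print("delta works for arbitrarily large eps")
```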
What are the steps to solve this problem? | HINT
To have a valid triangle, all side lengths must be positive, and in your problem, you are likely assuming $n$ is an integer. So what values can $n$ take?
UPDATE
The assumption is to take factorials of non-negative integers, since you have $(2-n)!$ you must have $n = 0,1,2$. Plug each and check if you have valid triangles, then compute the area as needed. |
The second solution for $xy''+y=0$. | Let $y(x)=\sum_{n=0}^\infty c_nx^{n+r}$ be a solution. Then $$y''(x)=\sum_{n=0}^\infty (n+r)(n+r-1)c_nx^{n+r-2}.$$
Therefore
$$xy''(x)=\sum_{n=0}^\infty(n+r)(n+r-1)c_nx^{n+r-1}.$$
Hence
$$xy''(x)+y(x)=r(r-1)c_0x^{r-1}+\sum_{n=0}^\infty (n+r)(n+r+1)c_{n+1}x^{n+r}+\sum_{n=0}^\infty c_nx^{n+r}.$$
This requires $r(r-1)=0$ and
$$(n+r)(n+r+1)c_{n+1}=-c_n\tag{1}$$
for all $n\ge 0$.
The roots $r=0$ and $r=1$ of $r(r-1)=0$ differ by an integer. Therefore, there exist two linearly independent solutions
$$y_1(x)=\sum_{n=0}^\infty c_n x^{n+1}$$
and
$$y_2(x)=ky_1(x)\ln x+\sum_{n=0}^\infty b_n x^{n+0}.$$
If $c_0=1$, then from $(1)$ with $r=1$, we conclude that $$c_n=\frac{(-1)^n}{n!(n+1)!}$$ for all $n\ge 0$. That is,
$$y_1(x)=\sum_{n=0}^\infty\frac{(-1)^n}{n!(n+1)!}x^{n+1}.$$
We now want to find $y_2(x)$. Up to rescaling, we may assume $k=1$. Note that
$$y_2'(x)=\frac{y_1(x)}{x}+y'_1(x)\ln x+\sum_{n=0}^\infty nb_nx^{n-1}$$
and
$$y_2''(x)=-\frac{y_1(x)}{x^2}+\frac{2y'_1(x)}{x}+y_1''(x)\ln x+\sum_{n=0}^\infty n(n-1)b_nx^{n-2}.$$
Hence
$$0=xy_2''(x)+y_2(x)=-\frac{y_1(x)}{x}+2y'_1(x)+xy_1''(x)\ln x+\sum_{n=0}^\infty n(n-1) b_nx^{n-1}+y_1(x)\ln x+\sum_{n=0}^\infty b_n x^n.$$
But $xy_1''(x)+y_1(x)=0$, so
$$0=xy_2''(x)+y_2(x)=-\frac{y_1(x)}{x}+2y_1'(x)+\sum_{n=0}^\infty \big((n+1)nb_{n+1}+b_n\big)x^n.$$
Note that
$$\frac{y_1(x)}{x}=\sum_{n=0}^\infty\frac{(-1)^n}{n!(n+1)!}x^n$$
and
$$y_1'(x)=\sum_{n=0}^\infty\frac{(-1)^n}{n!n!}x^n.$$
Therefore we have
$$\sum_{n=0}^\infty\left((n+1)nb_{n+1}+b_n-\frac{(-1)^n}{n!(n+1)!}+\frac{2(-1)^n}{n!n!}\right)x^n=0.$$
Thus
$$(n+1)nb_{n+1}+b_n=\frac{(-1)^{n+1}(2n+1)}{n!(n+1)!}$$
for all $n=0,1,2,\ldots$. This shows that $b_0=-1$.
For $n\ge 2$,
$$b_{n}=\frac{(-1)^{n}(2n-1)}{(n-1)n!n!}-\frac{b_{n-1}}{n(n-1)}.$$
Let $B_n=(-1)^nn!(n-1)!b_n$, so $B_n=\frac{2n-1}{n(n-1)}+B_{n-1}$.
Thus
$$B_n-B_{n-1}=\frac{1}{n-1}+\frac{1}{n}$$
for $n\ge 2$. Hence
$$B_n-B_1=\left(1+\frac12\right)+\left(\frac12+\frac13\right)+\ldots+\left(\frac1{n-1}+\frac1n\right).$$
Wlog, we can set $B_1=1$ (otherwise we add an appropriate multiple of $y_1(x)$ to $y_2(x)$). Therefore,
$$B_n=2H_n-\frac1n,$$
where $H_n=\sum_{j=1}^n\frac1j$ is the $n$th Harmonic number (with $H_0=0$). This shows that
$$b_n=\frac{(-1)^n}{n!n!}\left(2nH_n-1\right).$$
Observe that this formula works for $n=0$ as well. That is
$$y_2(x)=\sum_{n=0}^\infty \frac{(-1)^n}{n!(n+1)!}x^{n+1}\ln x+\sum_{n=0}^\infty \frac{(-1)^n}{n!n!}\left(2nH_n-1\right)x^n.$$
All solutions $y(x)$ to $xy''(x)+y(x)=0$ are linear combinations of $y_1(x)$ and $y_2(x)$. |
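As a sanity check, one can truncate the series for $y_2$ and verify numerically that $xy_2''+y_2\approx 0$; a sketch, approximating the second derivative by central differences:

```python
import math

N = 25  # truncation order; the terms decay like 1/(n!)^2

def y1(x):
    return sum((-1) ** n * x ** (n + 1) / (math.factorial(n) * math.factorial(n + 1))
               for n in range(N))

def y2(x):
    s, H = 0.0, 0.0
    for n in range(N):
        if n > 0:
            H += 1.0 / n                      # harmonic number H_n, with H_0 = 0
        b = (-1) ** n * (2 * n * H - 1) / math.factorial(n) ** 2
        s += b * x ** n
    return y1(x) * math.log(x) + s

h = 1e-4
for x in (0.3, 0.7, 1.2):
    ypp = (y2(x + h) - 2 * y2(x) + y2(x - h)) / h ** 2
    assert abs(x * ypp + y2(x)) < 1e-5
print("x*y2'' + y2 vanishes to numerical precision")
```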
$f:M\rightarrow M$ is an isometry and $ker(Id - d_{\gamma(t)}f) = \mathbb{R} \dot{\gamma}(t)$, show that $\gamma$ is a geodesic | This is a consequence of the fact that, under your given assumptions, $\gamma$ constitutes a local parameterization of an immersed submanifold of $M$ which is fixed by an isometry. But such immersed submanifolds are totally-geodesic. In this instance, the immersed submanifold is of dimension 1 and an immersed, totally-geodesic submanifold of dimension 1 is just a geodesic. |
Demonstrating convergence of a sequence of functions | $g_n$ converges pointwise to 1 only on $]0,1]$. In this example you have $g(0)=0$ because $g_n(0)=0~ \forall n \in \mathbb{N}$.
To prove your assumption (on $]0,1]$ and for pointwise convergence), take any number $x_0 \in ]0,1]$ and prove that there is an $N$ such that for any $n \geq N$, $g_n(x_0)=1$. |
Find the area enclosed by the curve and the chord $AB$. | The vertex $V=(0,1)$ is the point of the curve where the ordinate is maximum. Now,
let $\left(b,-\frac{1}{4}b^2+1\right)$ be the coordinates of $B$; since $\angle AVB$ is right,
\begin{align}
m_{AV}m_{VB}&=-1\\[4pt]
\frac{1-0}{0-(-2)}\cdot\frac{-\frac{1}{4}b^2+1-1}{b-0}&=-1\\[4pt]
\frac{1}{2}\cdot\frac{-b}{4}&=-1\\[2pt]
b&=8
\end{align}
It follows that $B$ has coordinates $(8,-15)$, we must find the equation of the line $AB$:
\begin{align}
y-y_1&=m(x-x_1)\\
y-0&=\left\{\frac{-15-0}{8-(-2)}\right\}[x-(-2)]\\[3pt]
y&=-\frac{3}{2}(x+2)
\end{align}
The area enclosed between the curve $y=-\frac{1}{4}x^2+1$ and the chord $AB$ can be found using an integral:
\begin{align}
\text{Area }&=\int_{-2}^8\left\{-\frac{1}{4}x^2+1-\left[-\frac{3}{2}(x+2)\right]\right\}\,dx
\end{align} |
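Evaluating this integral (the integrand simplifies to $-\frac{1}{4}x^2+\frac{3}{2}x+4$) gives $\frac{125}{3}$, which is easy to confirm with exact arithmetic:

```python
from fractions import Fraction

def F(x):
    # antiderivative of -x^2/4 + 3x/2 + 4
    x = Fraction(x)
    return -x**3 / 12 + 3 * x**2 / 4 + 4 * x

area = F(8) - F(-2)
print(area)  # 125/3
```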
Why is the Riemann mapping theorem important? | For the importance of the Riemann mapping theorem, see Wikipedia.
The Riemann mapping theorem can be generalized to the biholomorphic classification of Riemann surfaces: these are essentially the Riemann sphere, the whole complex plane, and the open unit disc. This classification is known as the uniformization theorem.
For methods to compute $f$ that are useful in applications, see the Schwarz–Christoffel mapping. |
Use Laurent series to evaluate | By inspection, the only pole of the function relevant in $\;|z|\le2\;$ is zero, so we need a Laurent series in an annulus of the form $\;0<|z|<r\le2\;$ :
$$\frac1{4z-z^2}=\frac1z\cdot\frac1{4-z}=\frac1{4z}\cdot\frac1{1-\frac z4}\stackrel{\text{Why can we?}}=\frac1{4z}\sum_{n=0}^\infty\frac{z^n}{4^n}=\sum_{n=0}^\infty\frac{z^{n-1}}{4^{n+1}}\implies$$
$$\implies a_{-1}\stackrel{\text{with}\;n=0\;}=\frac14$$
and then
$$\oint_{|z|=2}\frac{dz}{4z-z^2}=2\pi i\cdot\frac14=\frac{\pi i}2$$ |
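A quick numerical spot check of the expansion: the partial sums of $\sum_{n\ge0} z^{n-1}/4^{n+1}$ do converge to $\frac{1}{4z-z^2}$ inside the annulus, and the $n=0$ term carries the residue:

```python
import math

z = 1.0 + 0.5j                    # a sample point in the annulus 0 < |z| < 4
exact = 1 / (4 * z - z**2)
partial = sum(z ** (n - 1) / 4 ** (n + 1) for n in range(60))
assert abs(exact - partial) < 1e-12   # the expansion is valid there

# The n = 0 term carries z^{-1}, so a_{-1} = 1/4 and the integral is 2*pi*i/4:
print(2j * math.pi / 4)           # ≈ 1.5708j, i.e. pi*i/2
```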
Quaternions disadvantages in Quadrotor UAV control on $SE(3)$ | The $3$-sphere is mentioned because that is what the usual description of quaternions as rotations uses.
The $3$-sphere inside $\mathbb H$ consists of all the elements of length $1$. These elements can be parametrized by a unit pure quaternion $u$ and a real number $\theta$ in the form $\cos(\theta/2)+u\sin(\theta/2)$. This is the form of a rotation of angle $\theta$ counterclockwise around the vector in the direction $u$.
But the thing is that $q=\cos(\theta/2)+u\sin(\theta/2)$ and $-q=-\cos(\theta/2)-u\sin(\theta/2)$ perform exactly the same rotation! When you apply $q$ as $qxq^{-1}$, it is also obvious that $(-q)x(-q)^{-1}=(-1)^2qxq^{-1}=qxq^{-1}$. Of course $q$ and $-q$ are antipodes of each other.
This is the source of the $2$-to-$1$ correspondence between the unit quaternions and $SO(3)$. I don't know about the nature of the problem they produce in control systems exactly, but I imagine that it's important to safely choose the version that minimizes motion in the guidance system. Otherwise you might swing "the long way around." |
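The double cover is easy to see in code. A minimal pure-Python sketch, rotating $(1,0,0)$ by $90^\circ$ about the $z$-axis with both $q$ and its antipode $-q$:

```python
import math

def qmul(p, q):
    # Hamilton product of quaternions represented as (w, x, y, z).
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def rotate(q, v):
    # For a unit quaternion, q^{-1} is the conjugate; rotate v as q v q^{-1}.
    qc = (q[0], -q[1], -q[2], -q[3])
    return qmul(qmul(q, (0.0,) + v), qc)[1:]

theta = math.pi / 2
q = (math.cos(theta / 2), 0.0, 0.0, math.sin(theta / 2))  # rotation about z
mq = tuple(-c for c in q)                                  # its antipode

v = (1.0, 0.0, 0.0)
print(rotate(q, v))    # ≈ (0, 1, 0): 90° counterclockwise about z
print(rotate(mq, v))   # the same rotation, from the antipodal quaternion
```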
$\chi^2$ test and sampling variance | First note that $X' = \frac{X - \overline{X}}{\sigma}$ and $Y' = \frac{Y - \overline{Y}}{\sigma}$ are $\mathcal{N}_{0,2}$ - just take expectations and variances. By symmetry considerations, $X'^2, Y'^2$ have densities $2\psi_{0,2}$ on positive reals, where $\psi_{\mu ,\sigma }$ is the Gaussian density function. Now, since $f(x)=\sqrt{x}$ is invertible differentiable, $\mathbb{P} (X'^2 \le t ) = \displaystyle\int_{-\infty }^{\sqrt{t}} 2\psi_{0,2}(x)dx.$ Hence the density of $X'^2$ can be computed using the chain rule - $(2\psi_{0,2}(\sqrt{x}) / (2\sqrt{x})=\frac{1}{\sqrt{4\pi }} e^{-x/4} x^{-1/2} = \gamma_{1/4, 1/2} (x),$ where $\gamma $ is a gamma density. Now it's an exercise (use convolutions or change of variables) to show that the sum of $\Gamma_{\alpha_1 , \beta }, \Gamma_{\alpha_2, \beta }$ distributed random variables is $\Gamma_{\alpha_1 + \alpha_2, \beta}$ distributed - in particular, your random variable is $\Gamma_{1/2, 1/2} \sim \chi_1^2.$ |
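If (as the notation suggests, though the question itself isn't quoted here, so this is an assumption on my part) $X,Y$ are two i.i.d. normal samples and $\overline X=\overline Y=(X+Y)/2$ is their common sample mean, the conclusion $\big((X-\overline X)^2+(Y-\overline Y)^2\big)/\sigma^2\sim\chi^2_1$ is easy to corroborate by simulation:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, reps = 3.0, 2.0, 200_000

x = rng.normal(mu, sigma, reps)
y = rng.normal(mu, sigma, reps)
xbar = (x + y) / 2                 # sample mean of each pair (n = 2)
T = ((x - xbar) ** 2 + (y - xbar) ** 2) / sigma ** 2

# chi^2_1 has mean 1, and P(T <= 1) = P(|Z| <= 1) ≈ 0.6827
print(T.mean(), (T <= 1).mean())
```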