Bijection Explanation? | Let's look at a smaller example. Suppose there are $7$ spaces in the parking lot, and $4$ cars arrive.
On the left, we list all of the ways that four cars can park so that there are no two adjacent empty spaces. On the right, we list the results of following the instructions in the solution: between each pair of consecutive empty spaces, delete one of the intervening cars.
_ X _ X _ X X _ _ _ X X
_ X _ X X _ X _ _ X _ X
_ X _ X X X _ _ _ X X _
_ X X _ X _ X _ X _ _ X
_ X X _ X X _ _ X _ X _
_ X X X _ X _ _ X X _ _
X _ X _ X _ X X _ _ _ X
X _ X _ X X _ X _ _ X _
X _ X X _ X _ X _ X _ _
X X _ X _ X _ X X _ _ _
On the left, we have a complicated object that is not obvious how to count. On the right, we see something simpler: every possible way to park $2$ cars in a row of $5$ spots is listed exactly once. The number of such arrangements on the right is clearly $\binom{5}2$. Since the left and right columns have the same number of arrangements, this also counts the number of arrangements on the left.
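If you want to see the matching counts without trusting the table, here is a small brute-force sketch (in Python; the helper is illustrative and not part of the original answer):

```python
from itertools import combinations

def arrangements(spots, cars, forbid_adjacent_empty):
    """All ways to place `cars` X's in `spots` positions, optionally
    discarding rows that contain two adjacent empty spaces."""
    rows = []
    for occupied in combinations(range(spots), cars):
        row = ''.join('X' if i in occupied else '_' for i in range(spots))
        if forbid_adjacent_empty and '__' in row:
            continue
        rows.append(row)
    return rows

left = arrangements(7, 4, forbid_adjacent_empty=True)    # the left column
right = arrangements(5, 2, forbid_adjacent_empty=False)  # the right column
print(len(left), len(right))  # 10 10
```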
This is the purpose of clever bijections. You make a perfect matching between a mysterious set and a simple one, and use what you know about the simple one to explain the mysterious one. |
Counting question - binomial coefficients | With $\binom n2\binom{n-2}2/2$ you are choosing four elements, but then you are splitting that selection into two unordered groups of two. Hence it is doing more than merely selecting four elements ($\binom n4$), and produces a larger number: each $4$-element subset can be split into two unordered pairs in exactly $3$ ways, so in fact $\binom n2\binom{n-2}2/2 = 3\binom n4$. |
Show that a Star graph is balanced | All trees are balanced, so in particular stars are balanced.
An $n$-vertex tree has average degree $2-\frac 2n$. Any subgraph of average degree at least $2$ can be reduced to a subgraph of minimum degree at least $2$ by repeatedly removing leaves, but a subgraph of minimum degree at least $2$ would contain a cycle. Trees don't have cycles, so any subgraph of a tree has average degree less than $2$. However, for a $k$-vertex subgraph, the largest possible average degree less than $2$ is $2 - \frac 2k \le 2 - \frac 2n$.
(In fact, all trees are strictly balanced, since the only way to get $2 - \frac2k = 2 - \frac2n$ is to have $k=n$, taking the entire tree as a subgraph.) |
How to identify a curve | You have it. If $\kappa=a/c$ and $\tau=b/c$, where $c=\sqrt{a^2+b^2}$, then your curve is congruent to (differs by a rigid motion from) the circular helix $\alpha(t)=(a\cos t, a\sin t, bt)$. Given $\kappa$ and $\tau$, you can determine $a$ and $b$ by algebra. |
Existence of limit of two variables | With $f(x,y)=y\ln(|x|)$, the limit $\lim_{(x,y)\to(0,0)}y\ln(|x|)$ doesn't exist, since along the path $y=\frac{k}{\ln(|x|)}$ we have
$$\lim_{x\to 0}f\left(x,\frac{k}{\ln(|x|)}\right)=k,$$
which depends on the choice of $k$. |
lower bound for Kruskal's weak tree function | It appears that you are using ordered trees (sometimes called structured trees) in your construction. Both tree(n) and TREE(n) are defined using unordered trees, where all children can be swapped arbitrarily. So for example the trees ((())()) and (()(())) are considered isomorphic, and they both can be homeomorphically embedded into (((()))()).
If we define otree(n) to be the same as tree(n) except using ordered trees, then otree(3) is indeed a fairly large number, larger than Graham's number.
Forget the above, I misunderstood the construction a little bit; your sequence appears to be correct, and improves the current known lower bound. Kudos!
I calculate this new bound to be $3 \cdot 2^{49} - 2$, or 1,688,849,860,263,934. |
Euler's Gamma function | You began well by writing $\Gamma\big(n+3\big)=\Gamma\Big(\big(n+2\big)+1\Big).~$ Now apply $\Gamma\big(x+1\big)=x~\Gamma\big(x\big).~$ Then repeat the process. |
Nice application of fixed point theorems | You could show that the Brouwer fixed point theorem implies Sperner's Lemma (and then perhaps some combinatorial applications thereof) or prove the existence of Nash Equilibrium in any game. In the same realm, but not exactly implications, are the Borsuk–Ulam theorem and the ham sandwich theorem, which are interesting in their own right. |
lagrangian minimisation problem and Karush-Kuhn-Tucker conditions | Note first that since $L(x,y,z,k) = −xyz−k(xy+2z(x+y)−50) = L(y,x,z,k)$, the problem is symmetric in $x,y$.
Let's proceed with brutal arithmetic. We have 4 equations in 4 unknowns:
$$
\begin{split}
0 &= -L_x = yz + k(y + 2z)\\
0 &= -L_y = xz + k(x + 2z)\\
0 &= -L_z = xy + 2k(x + y)\\
50 &= xy+2z(x+y) \quad (\text{from } c=0)
\end{split}
$$
Claim 1. $x \neq 0$ and $y \neq 0$ and $x + y \neq 0$.
Proof.
Note first that if $x=0$, then the second constraint implies $k=0$ or $z=0$. If $x=k=0$, the first constraint implies $yz=0$ and the last constraint implies $2yz=50$, which is impossible.
Similarly, if $x=z=0$, the first constraint implies $ky = 0$. As we proved, $k=0$ is impossible when $x=0$, so we must have $y=0$, giving $x=y=z=0$, which contradicts the last constraint.
To establish the last claim, note that if $x+y =0$, the third constraint yields $xy=0$ and the last one yields $xy=50$, which is a contradiction.
QED
Now, solving the last constraint for $z$ and the $L_z$ constraint for $k$, we get
$$k = \frac{-xy}{2(x+y)} \text{ and } z = \frac{50-xy}{2(x+y)}.$$
Both of these are well-defined since $x+y\neq0$ by Claim 1. Plug both of these into the first constraint, getting
$$0 = \frac{y(50-xy)}{2(x+y)}
- \frac{xy^2}{2(x+y)}
- \frac{2xy(50-xy)}{2^2(x+y)^2}$$
and now multiply both sides by $2(x+y)^2 \neq 0$ to get
$$0 = y(50-xy)(x+y) - xy^2(x+y) - xy(50-xy).$$
Divide by $y \neq 0$ and bring the last term into the first term and divide by $y$ again:
$$
\begin{split}
0 &= (50-xy)(x+y) - xy(x+y) - x(50-xy)\\
0 &= y(50-xy) - xy(x+y) \\
0 &= 50-xy - x(x+y) = 50 - 2xy - x^2\\
50 &= x^2 + 2xy
\end{split}
$$
and because the problem is symmetric, the symmetric constraint must hold (if you want, you can derive it the same way from the second constraint):
$$
\begin{split}
x^2 + 2xy &= 50 \\
y^2 + 2xy &= 50
\end{split}
$$
Hence subtracting them yields $x^2 = y^2$, so $x = \pm y$, but $x \neq -y$ by Claim 1, so $x = y$. Now $50 = x^2 + 2xy = 3x^2$ implies $x = \pm \sqrt{50/3} = y$, and there are two solutions: $(\sqrt{50/3},\sqrt{50/3})$ and $(-\sqrt{50/3},-\sqrt{50/3})$.
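As a numerical sanity check (a sketch assuming sympy is available; this is not part of the original solution), one can solve the four stationarity equations directly:

```python
import sympy as sp

x, y, z, k = sp.symbols('x y z k', real=True)
L = -x*y*z - k*(x*y + 2*z*(x + y) - 50)

# Set all four partial derivatives to zero; the k-derivative recovers the constraint.
solutions = sp.solve([sp.diff(L, v) for v in (x, y, z, k)], [x, y, z, k], dict=True)
print(solutions)  # includes the solution with x = y = sqrt(50/3)
```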
Since $x,y$ are lengths, we know only one will make sense: $x = y = \sqrt{50/3}$. Now plug this back in to find $z$ and $k$ as desired. |
If $x_n \rightarrow x$ and $Ax_n \rightharpoonup y$, why does $y = Ax$? | This is false. Let $f$ be a discontinuous linear functional on $X$ and $Ax=f(x)x_0$ where $x_0$ is a fixed non-zero vector. Since $f$ is not continuous its kernel $M$ is not closed. So there is a sequence $\{x_n\}$ in $M$ converging to some point $x$ not belonging to $M$. Now $Ax_n=0$ for all $n$ so $Ax_n\to 0$ weakly, in fact in the norm. But $y \neq Ax$ so $Ax_n$ does not tend to $Ax$.
If $A$ is a bounded operator the conclusion is true and is easy to prove: just show that $x^{*}(y)=x^{*}(Ax)$ for all $x^{*}\in X^{*}$. |
An entire function with finite covering group is a polynomial. | What you are trying to prove is false. I will be using an example given by Alex Eremenko here. Take
$$
f(z)= \int_{0}^z \exp(t^2)dt.
$$
Eremenko observed that $f$ is an entire function without critical points whose image is all of ${\mathbb C}$. Let us identify its covering transformations $\phi$, i.e. homeomorphisms ${\mathbb C}\to {\mathbb C}$ such that $f\circ \phi=f$. It is easy to see that $\phi$ has to be biholomorphic, hence, of the form
$$
\phi(z)=az+b, a\ne 0.
$$
Take the equation $f(az+b)=f(z)$ and differentiate it twice with respect to $z$. We obtain first $a\exp((az+b)^2)= \exp(z^2)$ and then (dividing the second derivative identity by the first)
$$
a(az+b)= z.
$$
Thus, $a=\pm 1$, $b=0$. Therefore, there are exactly two deck-transformations, $z\mapsto \pm z$. However, $f(z)$ is clearly not a polynomial. |
Please help me clear confusion over principal roots and identities for n-th radicals | It is quite likely that your textbook deals with it that way in order not to have to treat separately the case in which $n$ is even (in which case no negative number has an $n$th root) and the case in which $n$ is odd (in which case each number has one and only one $n$th root).
And, yes, those assertions are correct. |
When is the nth component of a (co)vector equal to its scalar product with the nth element of its dual basis? | The confusion here stems from interpreting the coefficients of the basis vectors as the actual basis vectors themselves. In this particular exercise, if you take for instance,
$$\vec e_1 = \begin{bmatrix} 2 & 1\end{bmatrix} ^\top $$
the question begging to be answered is: what is $2$ and what is $1$? They certainly form a vector in the sense of a list, but they are actually coefficients of another (tacitly assumed) basis, which we could symbolize as $\{\color{red}{\vec u_1},\color{red}{\vec u_2}\},$ so that
$$\vec e_1 = 2\color{red}{\vec u_1} + 1 \color{red}{\vec u_2}$$
and, likewise,
$$\vec e_2 = \begin{bmatrix}-1 & 3 \end{bmatrix}^\top $$
really implies,
$$\vec e_2 = -1\color{red}{\vec u_1} + 3 \color{red}{\vec u_2}$$
Your proposed system of equations in the OP is simply designed to end up recovering orthonormal covector coordinates. This is what your matching of the LHS of the system of equations with the corresponding coefficient winds up producing, now assuming an underlying co-vector basis $\{\color{blue}{ \tilde u^1},\color{blue}{ \tilde u^2}\}$:
$$\tilde e^1 = 1\color{blue}{\tilde u^1} + 0 \color{blue}{\tilde u^2}$$
and
$$\tilde e^2 = 0\color{blue}{\tilde u^1} + 1 \color{blue}{\tilde u^2}$$
in your proposed answer.
But, by skipping the Kronecker delta setup for the dual vector space basis pairing with the vector space basis in your proposed answer, you are simply deferring addressing how you match vector and co-vector basis:
What would be the inner product of these basis vectors and covectors? For instance,
$$\begin{align}
\langle \tilde e^1,\vec e_1\rangle &= \left(1\color{blue}{\tilde u^1} + 0 \color{blue}{\tilde u^2}\right)\left(2\color{red}{\vec u_1} + 1 \color{red}{\vec u_2}\right)\\
&= 2 \color{blue}{\tilde u^1}\color{red}{\vec u_1}+1\color{blue}{\tilde u^1}\color{red}{\vec u_2}
\end{align}$$
leave both $ \color{blue}{\tilde u^1}\color{red}{\vec u_1}$ and $\color{blue}{\tilde u^1}\color{red}{\vec u_2}$ undefined.
The way the exercise is actually solved in the book implies that the un-spoken underlying vector basis, $\{\color{red}{\vec u_1},\color{red}{\vec u_2}\}$ are the orthonormal standard Euclidean basis, linked to the co-vector basis $\{\color{blue}{\tilde u^1},\color{blue}{\tilde u^2}\}$ through the Kronecker function, so that
$$\begin{align}
\langle \tilde e^1,\vec e_1\rangle &= \left(\frac 3 7\color{blue}{\tilde u^1} + \frac 1 7 \color{blue}{\tilde u^2}\right)\left(2\color{red}{\vec u_1} + 1 \color{red}{\vec u_2}\right)\\
&= \frac 6 7 \color{blue}{\tilde u^1}\color{red}{\vec u_1}+\frac 3 7 \color{blue}{\tilde u^1}\color{red}{\vec u_2}+\frac 2 7 \color{blue}{\tilde u^2}\color{red}{\vec u_1}+\frac 1 7\color{blue}{\tilde u^2}\color{red}{\vec u_2}\\
&=\frac 6 7 \cdot 1 + \frac 1 7 \cdot 1\\
&=1
\end{align}$$
works out as implicitly desired only if $\color{blue}{\tilde u^\alpha}\color{red}{\vec u_\beta}=\delta^\alpha_\beta.$
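Numerically, the dual basis covectors are simply the rows of the inverse of the matrix whose columns are $\vec e_1,\vec e_2$. A minimal sketch (assuming numpy; the entries are the ones from this exercise):

```python
import numpy as np

E = np.array([[2.0, -1.0],
              [1.0,  3.0]])   # columns are e_1 and e_2 in the standard basis
E_dual = np.linalg.inv(E)     # rows are the covectors e~^1 and e~^2

print(E_dual)                 # [[ 3/7, 1/7], [-1/7, 2/7]]
print(E_dual @ E)             # the identity: <e~^i, e_j> = delta^i_j
```

The first row reproduces the coefficients $\frac 3 7, \frac 1 7$ used above. |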
Solving a second Order ODE of a catenary curve | I'm starting from the "Since the sum of the forces..", since that is where the confusion starts.
Notice that the tangential force is the gradient of the Catenary (curve) and the slope can be found by using
$$\frac{\text{rise}}{\text{run}} = \frac{W}{H} = \frac{w}{h} \ \ \ (1)$$
Where $w$ and $h$ are the magnitudes of $W$ and $H$, respectively. But the paper also states that "the magnitude, $w$, is proportional to the length $s$ of the chain between the origin and the point $(x, y)$" i.e.
$$w \propto s \implies \frac{w}{h} = \frac{\mu s}{h} \ \ \ (2)$$
where $\mu$ is the weight density of the chain.
So we have the gradient of the Catenary, $y'(x)$, is given by
$$\begin{align}
y'(x) &= \frac{\text{rise}}{\text{run}} \\
&= \frac{\mu s}{h} \ \ \ (3)\\
\end{align}$$
But remember, $s$ is the length of the chain from the point $(x, y)$ to the origin, also known as the arc length, which has an integral equation
$$s = \int_{0}^{x} \sqrt{1 + y'(t)^{2}} dt \ \ \ (4)$$
Substituting $(4)$ into $(3)$ gives
$$y'(x) = \frac{\mu}{h} \int_{0}^{x} \sqrt{1 + y'(t)^{2}} dt$$
Taking the derivative of both sides with respect to $x$ (the integral is just an application of the fundamental theorem of calculus) and setting $\alpha = \frac{\mu}{h}$, we get
$$y''(x) = \alpha \sqrt{1 + y'(x)^{2}}$$
They then solved the equation by squaring both sides of the equation and taking the derivative of both sides with respect to $x$, using implicit differentiation i.e.
$$\begin{align}
y''^{2} &= \alpha^{2} (1 + y'^{2}) \\
\implies \frac{d}{dx} y''^{2} &= \frac{d}{dx} \alpha^{2} (1 + y'^{2}) \\
\implies 2y'' y''' &= 2 \alpha^{2} y' y'' \\
\end{align}$$
Dividing by $2y''$
$$\implies y''' = \alpha^{2} y' \ \ \ (5)$$
Then they used the change of variables,
$$z = y' \implies z' = y'' \implies z'' = y'''$$
Which makes $(5)$ become
$$z'' = \alpha^{2}z$$
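For completeness (this last step is not spelled out above), the general solution is $z = C_1 e^{\alpha x} + C_2 e^{-\alpha x}$. Assuming the vertex of the chain is at $x = 0$ with horizontal tangent, we have $z(0) = y'(0) = 0$, and the first-order equation $y'' = \alpha\sqrt{1+y'^2}$ gives $z'(0) = y''(0) = \alpha$; hence $C_1 = -C_2 = \frac12$ and
$$y'(x) = \sinh(\alpha x) \implies y(x) = \frac{1}{\alpha}\cosh(\alpha x) + C,$$
the familiar catenary. |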
Algebraic geometry over general fields via Galois theory | Let $k$ be a (perfect) field, $\bar{k}$ a fixed algebraic closure, and let $G_{\bar{k}/k}$ be the Galois group of the extension $\bar{k}/k$. You may define affine and projective spaces as usual, i.e., over $\bar{k}$. However, it makes sense to let $$\mathbb A^n(k) = \left\{(x_1,...,x_n):x_i \in k\right\} \subset \mathbb A^n(\bar{k}),$$ and similarly $\mathbb P^n(k) \subset \mathbb P^n(\bar{k})$. In terms of Galois theory, we note that $$\mathbb A^n(k) = \left\{p \in \mathbb A^n(\bar{k}): p^\sigma = p~\mathrm{for~all}~\sigma \in G_{\bar{k}/k}\right\},$$ where $p = (x_1,...,x_n)$ and $p^\sigma = (x_1^\sigma,...,x_n^{\sigma})$, $\sigma \in G$. Similarly, if $f \in k[x_1,...,x_n]$ and $p \in \mathbb A^n$, then $$f(p^\sigma) = f(p)^\sigma.$$ Thus if we have an algebraic set $V$ over $\bar{k}$ in the usual sense, then we can define $$V(k) = \left\{p \in V:p^\sigma = p~\mathrm{for~all}~\sigma \in G\right\}.$$ There is much more you could generalize, but I'll stop here. If you are interested, please read Silverman's The Arithmetic of Elliptic Curves. |
Self-envelope of oscillatory sinc(x) function | HINT
According to the suggestions given in the comments by Jack and Arthur, we need to check that
$\frac{\sin x}x=\pm \frac1x$ determines the intersection points (i.e. $\sin x=\pm1$), and that
at those points the two curves are tangent (their derivatives agree). |
Show that if $\kappa$ is an uncountable cardinal, then $\kappa$ is an epsilon number | The following is intended as a half-outline/half-solution.
We will prove by induction that every uncountable cardinal $\kappa$ is an $\epsilon$-number, and that the family $E_\kappa = \{ \alpha < \kappa : \omega^\alpha = \alpha \}$ has cardinality $\kappa$.
Suppose that $\kappa$ is an uncountable cardinal such that the two above facts are known for every uncountable cardinal $\lambda < \kappa$.
If $\kappa$ is a limit cardinal, note that in particular $\kappa$ is a limit of uncountable cardinals. By normality of ordinal exponentiation it follows that $$\omega^\kappa = \lim_{\lambda < \kappa} \omega^\lambda = \lim_{\lambda < \kappa} \lambda = \kappa,$$ where the limit is taken only over the uncountable cardinals $\lambda < \kappa$.
Also, it follows that $E_\kappa = \bigcup_{\lambda < \kappa} E_\lambda$, and so $| E_\kappa | = \lim_{\lambda < \kappa} | E_\lambda | = \kappa$.
If $\kappa$ is a successor cardinal, note that $\kappa$ is regular. Note, also, that every uncountable cardinal is an indecomposable ordinal. Therefore $\kappa = \omega^\delta$ for some (unique) ordinal $\delta$. As $\omega^\kappa \geq \kappa$, we know that $\delta \leq \kappa$. It suffices to show that $\omega^\beta < \kappa$ for all $\beta < \kappa$. We do this by induction: assume $\beta < \kappa$ is such that $\omega^\gamma < \kappa$ for all $\gamma < \beta$.
If $\beta = \gamma + 1$, note that $\omega^\beta = \omega^\gamma \cdot \omega = \lim_{n < \omega} \omega^\gamma \cdot n$. By indecomposability it follows that $\omega^\gamma \cdot n < \kappa$ for all $n < \omega$, and by regularity of $\kappa$ we have that $\{ \omega^\gamma \cdot n : n < \omega \}$ is bounded in $\kappa$.
If $\beta$ is a limit ordinal, then $\omega^\beta = \lim_{\gamma < \beta} \omega^\gamma$. Note by regularity of $\kappa$ that $\{ \omega^\gamma : \gamma < \beta \}$ must be bounded in $\kappa$.
To show that $E_\kappa$ has cardinality $\kappa$, note that by starting with any ordinal $\alpha < \kappa$ and defining the sequence $\langle \alpha_n \rangle_{n < \omega}$ by $\alpha_0 = \alpha$ and $\alpha_{n+1} = \omega^{\alpha_n}$ we have that $\alpha_\omega = \lim_{n < \omega} \alpha_n < \kappa$ is an $\epsilon$-number. Use this fact to construct a strictly increasing $\kappa$-sequence of $\epsilon$-numbers less than $\kappa$.
(There must be an easier way, but I cannot think of it.) |
Hartshorne EX. III.10.3 | Your strategy is inappropriate to this question. The first part is a straightforward application of Nakayama's lemma: $\Omega_{X/Y}$ is coherent, so $(\Omega_{X/Y})_x$ is a finitely generated module over the local ring $\mathcal{O}_{X,x}$. Then the statement about the dimension is the same as $(\Omega_{X/Y})_x\otimes_{\mathcal{O}_{X,x}} k(x)= (\Omega_{X/Y})_x\otimes_{\mathcal{O}_{X,x}} \mathcal{O}_{X,x}/\mathfrak{m}_x = (\Omega_{X/Y})_x/\mathfrak{m}_x(\Omega_{X/Y})_x=0$, or $\mathfrak{m}_x(\Omega_{X/Y})_x=(\Omega_{X/Y})_x$, so by Nakayama, this means that $(\Omega_{X/Y})_x=0$. This means that the stalk of $\Omega_{X/Y}$ is zero at every point, or that $\Omega_{X/Y}$ is the zero sheaf. So we've shown (i) implies (ii).
The rest of the argument will make frequent use of some basic facts about $\Omega_{X/Y}$:
Hartshorne II.8.2A: For a ring map $A\to B$ and a base extension $A\to A'$ which gives a ring map $A' \to B':= B\otimes_A A'$, we have $\Omega_{B'/A'}=\Omega_{B/A}\otimes_B B'$.
Hartshorne II.8.2A: For $S\subset B$ multiplicatively closed, $S^{-1}\Omega_{B/A}=\Omega_{S^{-1}B/A}$.
If $R\subset A$ is multiplicatively closed and $R$ maps to invertible elements of $B$, then $\Omega_{B/A}=\Omega_{B/R^{-1}A}$. (Proof: apply the Leibniz rule to $1=f(s)f(s)^{-1}$.)
For any surjective morphism of rings $A\to B$, we have $\Omega_{B/A}=0$.
Hartshorne II.8.3A: If $A\to B \to C$ are maps of rings, we have a natural exact sequence of $C$-modules $$\Omega_{B/A}\otimes_B C \to \Omega_{C/A} \to \Omega_{C/B}\to 0$$
Hartshorne II.8.7A: Let $B$ be a local ring with maximal ideal $\mathfrak{m}$ containing a field $k$ isomorphic to its residue field. Then there is an isomorphism $\mathfrak{m}/\mathfrak{m}^2\to \Omega_{B/k}\otimes_B k$.
If $A\to B$ is a ring map with $\Omega_{B/A}=0$, then the induced map on reductions $A_{red}\to B_{red}$ also has $\Omega_{B_{red}/A_{red}}=0$.
The final statement comes from an application of II.8.3A to the sequences of ring maps $A\to A_{red}\to B_{red}$ and $A\to B\to B_{red}$: the first gives you that $\Omega_{B_{red}/A}\cong \Omega_{B_{red}/A_{red}}$ since $\Omega_{A_{red}/A}=0$ as the ring map $A\to A_{red}$ is surjective, and the second gives you that $0=\Omega_{B/A}\otimes_B B_{red}$ surjects on to $\Omega_{B_{red}/A}\cong\Omega_{B_{red}/A_{red}}$ because $\Omega_{B_{red}/B}=0$.
To show (ii) implies (i), the fact that our map is flat and of finite type implies it is open because of exercise III.9.1. Let $Y'$ be an irreducible component of $Y$ and let $X'$ be an irreducible component of $X$ which maps in to $Y'$. Now take an affine open $Y''$ of $Y'$ which is open in $Y$ and some affine open $X''$ of $X'$ open in $X$ mapping in to it. As our map is open, the image of $X''$ in $Y''$ is open and thus dense, so the generic point of $X''$ maps to the generic point of $Y''$ (further, this map of generic points is exactly the same as the map of generic points one gets with $X'\to Y'$).
By the final quoted statement above and the fact that taking reductions does not change dimensions nor the fact that the generic point of $X''$ maps to the generic point of $Y''$, we may replace $X''$ and $Y''$ by their reductions while maintaining their dimensions and $\Omega_{X''/Y''}=0$. Now look at the map on generic points: by the localization property above, we get that the module of differentials associated to the induced map of fraction fields vanishes, so by II.8.6a, we have that these fraction fields are of the same transcendence degree and thus $X''$ and $Y''$ (and therefore $X'$ and $Y'$) have the same dimension. (Explicitly, if $A\to B$ is the map of domains induced by $X''\to Y''$, then let $S=B\setminus \{0\}$ and let $R=A\setminus\{0\}$, so $0=S^{-1}\Omega_{B/A}=\Omega_{S^{-1}B/A}=\Omega_{S^{-1}B/R^{-1}A}$, and this last module is the module of differentials associated to the induced map on fraction fields.)
To show that (ii) and (iii) are the same, we argue affine-locally, which lets us treat this as an algebra problem. To be clear: we want to show that a finite type map of $k$-algebras $f:A\to B$ has $\Omega_{B/A}=0$ iff it is unramified. By the localization properties above, it is not hard to see that this is equivalent to the stalk-local problem: if we have a finite type local map of local rings $A\to B$, then unramified is equivalent to $\Omega_{B/A}=0$.
Suppose $A,B$ are local rings with maximal ideals $\mathfrak{m},\mathfrak{n}$ and residue fields $E,F$ respectively and $f:A\to B$ is a local ring map of finite type between them. If we assume unramified, then by base change we get that $\Omega_{B/A}\otimes_A E = \Omega_{(B/\mathfrak{m}B)/E}$, and as $\mathfrak{m}B=\mathfrak{n}$ by assumption, we get that this last module is just the module of differentials associated to the map of residue fields $E\to F$. By assumption, this is separable algebraic, so by II.8.6a we have that it vanishes, and then by Nakayama we get that $\Omega_{B/A}=0$ as requested.
For the reverse direction, we'll show that if either the conditions $\mathfrak{m}B=\mathfrak{n}$ or "$F$ is a separable extension of $E$" is violated, then $\Omega_{B/A}\neq 0$. Start with the extension of fields: write $E\to B/\mathfrak{m}B \to F$, and applying II.8.3A, we get the following exact sequence.
$$ \Omega_{(B/\mathfrak{m}B)/E} \otimes_{B/\mathfrak{m}B} F \to \Omega_{F/E}\to \Omega_{(B/\mathfrak{m}B)/F}\to 0$$
The final term vanishes because the ring map $B/\mathfrak{m}B\to B/\mathfrak{n}=F$ is surjective, so the first map is a surjection. In particular, if the extension of fields $E\subset F$ is not algebraic separable then by II.8.6a we get the middle term is nonzero, then the left term must be nonzero, and as $\Omega_{(B/\mathfrak{m}B)/E}\cong \Omega_{B/A}\otimes_A E$, we get $\Omega_{B/A}\neq 0$.
There are two ways we can fail to have $\mathfrak{m}B=\mathfrak{n}$ - either $\sqrt{\mathfrak{m}B}$ is equal to $\mathfrak{n}$ or not. If they're not equal, II.8.3a applied to $E\to B/\mathfrak{m}B\to B/\sqrt{\mathfrak{m}B}$ combined with the observation that $B/\sqrt{\mathfrak{m}B}$ has a fraction field which is of positive transcendence degree over $E$ gives that $\Omega_{B/A}\neq 0$ similarly to the above conclusion.
In the other case, $\sqrt{\mathfrak{m}B}=\mathfrak{n}$ but $\mathfrak{m}B\neq \mathfrak{n}$, we have that $B/\mathfrak{m}B$ is Artinian and thus finite-dimensional as an $E$-vector space. Let $K$ be an algebraic closure of $A/\mathfrak{m}$. Then $B/\mathfrak{m}B \otimes_{E} K$ is again Artinian and a finite product of Artinian local rings with $K$ as their residue field. As $B/\mathfrak{m}B$ has a nilpotent element, $(B/\mathfrak{m}B)\otimes_{E} K$ has a nilpotent element and there exists some local ring factor $B'\subset K\otimes_{E} B/\mathfrak{m}B$ with nonzero nilpotent maximal ideal $\mathfrak{q}$. By II.8.7, we have that $\mathfrak{q}/\mathfrak{q}^2\cong \Omega_{B'/K}\otimes_{B'} K$ and thus the RHS is nonzero, and so $\Omega_{B'/K}$ is nonzero by Nakayama. As $\Omega_{B'/K}$ is a localization of $\Omega_{((B/\mathfrak{m}B)\otimes_{E} K)/K}\cong \Omega_{(B/\mathfrak{m}B)/E}\otimes_{E} K$ at a maximal ideal, $\Omega_{(B/\mathfrak{m}B)/E}$ is nonzero and we are done.
The equivalence of (ii) and (iii) seems like kind of a pain to me if you're trying to do it with just the tools Hartshorne has available at this point. I prefer the StacksProject definitions which define all of these things through standard smooth/etale/unramified ring maps, which makes everything a bit more straightforward. |
Statistical Distance between two points? | This is basically using a vector from $z$-scores, with the assumption that the mean of each coordinate is $0$.
In $\mathbb{R}^n$ we get:
$$\vec{z}:=(\frac{x_1}{\sigma_{1}},\frac{x_2}{\sigma_{2}},...,\frac{x_n}{\sigma_{n}})$$
Where $\sigma_i$ is the standard deviation along direction $i$.
In this case, we can re-cast $d(O,P)$ as:
$$d(O,P)=||\vec{z}||_2$$
The generalization to any $L^p$-norm is obvious:
$$d(O,P)=||\vec{z}||_p$$
Extension to Correlated Variables
Note the relationship of the above $L^2$ norm to the quadratic form in the exponent of the multivariate Gaussian with uncorrelated components. This suggests that we can model the general $L^2$ statistical distance using this quadratic form. Let $\Sigma$ be the covariance matrix of your variables and $\mu$ the point you want to calculate the distance to from $x$ (in your post, you chose the origin, $\mu = 0$). Then the $L^2$ norm in your post can be rewritten as:
$$d(O,P):= \sqrt{x^T\Sigma^{-1} x}$$
Where $\Sigma$ is a diagonal matrix of the variances. You can incorporate correlated variables using the full covariance matrix.
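(The quadratic form $x^T\Sigma^{-1}x$ is the squared Mahalanobis distance to $\mu=0$.) A minimal numerical sketch, assuming numpy and with illustrative values:

```python
import numpy as np

Sigma = np.array([[4.0, 1.0],
                  [1.0, 2.0]])              # an example covariance matrix
x = np.array([1.0, 3.0])                    # point whose distance to mu = 0 we want

d = np.sqrt(x @ np.linalg.solve(Sigma, x))  # sqrt(x^T Sigma^{-1} x)
print(d)
```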
Not sure if there's a natural extension to $L^p$ |
show that the limit of $\lim_{n \to \infty} n^{\frac{1}{n}} = 1$ | Hint: Note that $n^{1/n} \geq 1$ for $n\geq 1$ so there is a sequence of non-negative reals $\delta(n)$ s.t. $n^{1/n} = 1 + \delta (n)$ for each $n$. Show that $\delta(n) \to 0$ as $n\to \infty$ by raising both sides to the power of $n$.
Solution: In particular, note that:
$n = (1+\delta (n))^n = 1+n\delta (n) + \dfrac{n(n-1)}{2}[\delta(n)]^2 + … + [\delta (n)]^n$
$\implies n \geq \dfrac{n(n-1)}{2}[\delta(n)]^2$ (As all other terms are non-negative)
$\implies 0\leq \delta (n) \leq \sqrt{\dfrac{2}{n-1}} \to 0$ as $n\to \infty$
$\implies \delta (n) \to 0$ as $n\to \infty$ (By squeezing)
$\implies 1+ \delta (n) \to 1$ as $n \to \infty$
Hence $\displaystyle\lim_{n\to \infty} n^{1/n} = 1$, as desired. |
Returning a deck of cards to its original state with overhand shuffle | I don't think the problem can be solved satisfactorily without knowing the specific values of $N$ (total cards) and numbers of cards in individual blocks.
To restore to the initial state, do you allow the blocks to be the same as those you used to shuffle?
If yes, then simply use the same block sizes and reverse the order of the moves.
If no, then the problem cannot be solved without knowing the details. Based on your example, if the restoring process allows only block size of 6, there is no way to restore to the initial state. |
Squares sharing a side have different colors | Let $n\ge 1$ and $0\le r\le g\le b$ with $r+g+b=n^2$ and $b\le\frac 12 n^2$.
Then $b\ge \frac13n^2\ge r$.
Consider a black/white checkerboard pattern with black in the top left corner. Then there are $k:=\lceil \frac{n^2}2\rceil$ black and $w:=\lfloor \frac{n^2}2\rfloor$ white fields. By the given restrictions, $b\le w$.
Label the fields first black from top left to bottom right, then white from top left to bottom right:
$$\begin{matrix}{1}&k+1&2&k+2&\cdots &\begin{cases}\lceil\frac n2\rceil\\ k+\lfloor \frac n2\rfloor\end{cases}\\
k+\lfloor \frac n2\rfloor+1& \lceil \frac n2\rceil+1&k+\lfloor \frac n2\rfloor+2&\lceil \frac n2\rceil+2&\cdots&\begin{cases}k+n\\ n\end{cases}\\
n+1&k+n+1&n+2&k+n+2&\cdots&\begin{cases}\lceil\frac n2+n\rceil\\ k+\lfloor \frac n2\rfloor+n\end{cases}\\
\vdots&\vdots&\vdots&\vdots&\ddots&\vdots\\
\cdots&\cdots&\cdots&\cdots&\cdots&k
\end{matrix} $$
Note that going two rows down always increases the label by $n$ and that the labels of adjacent squares differ by
$k$, $k-1$, $k+1$, $k+\lfloor \frac n2\rfloor$, or $k-\lceil\frac n2\rceil$, hence at least by $$\tag1k-\left\lceil\frac n2\right\rceil=\frac{n^2-n}2=w-\left\lfloor \frac n2\right\rfloor.$$
For $n\ge 3$, we have $\frac {n^2}3\le\frac{n^2-n}2$ and therefore
$$\tag2 r\le\frac{n^2-n}2.$$
Note that $(2)$ holds also when $n<3$.
Now we place blue squares on the (by label) first $b$ fields, green squares on the last $g$ fields, and red squares on the remaining $r$ fields.
All blue squares are on black fields because $b\le k$, hence no two are adjacent.
All green squares are on white fields because $g\le w$, hence no two are adjacent.
Because of $(2)$, any two red square labels differ by at most $\frac{n^2-n}2-1$. It follows from $(1)$ that they are not adjacent.
We conclude that
$$f(n)\ge \left\lceil\frac12n^2\right\rceil.$$
On the other hand, we partition the $n\times n$ boards into $\lceil \frac {n^2}2\rceil$ regions, namely $\lfloor \frac{n^2}2\rfloor $ dominoes and possibly one single field. Within each of these regions, at most one blue square is allowed. Hence,
$f(n)\le \left\lceil\frac12n^2\right\rceil$ and ultimately
$$f(n)= \left\lceil\frac12n^2\right\rceil.$$ |
Further Problem on Tangents Requiring the Use of Differentiation | we have $$y(x)=x^3-2x$$ then $$y'(x)=3x^2-2$$ then $$y'(a)=3a^2-2$$ and the tangent line has the form
$$y(x)=(3a^2-2)x+n$$ for $x=a$ we get $$y(a)=a^3-2a$$
and you can compute $n$:
$$a^3-2a=(3a^2-2)a+n$$
can you finish? |
How to verify that $P \in \langle Q_1, \dots, Q_m \rangle$ in $\mathbb C[x_1, \dots , x_n]$? | There is Buchberger's algorithm for finding a Gröbner basis.
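Once a Gröbner basis $G$ of $\langle Q_1, \dots, Q_m \rangle$ is computed, $P$ belongs to the ideal if and only if the remainder of $P$ on division by $G$ is zero. A minimal sketch (assuming sympy's `groebner` API; the generators and $P$ are illustrative):

```python
import sympy as sp

x, y = sp.symbols('x y')
Q1, Q2 = x**2 + y, x*y - 1          # illustrative generators
P = sp.expand(x*Q1 + y*Q2)          # built to lie in the ideal <Q1, Q2>

G = sp.groebner([Q1, Q2], x, y, order='lex')
print(G.contains(P))                # True  <=>  P is in the ideal
print(G.reduce(P)[1])               # the remainder on division by G: 0
```

A zero remainder certifies membership. |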
Proving that $\displaystyle \int_0^x \frac{\sin t}{t+1}dt > 0$ for all $x >0$ | You're on the right track.. you also have to show that ${\displaystyle \int_{2k\pi}^x {\sin(t) \over t + 1}\,dt > 0}$ for all $2k\pi < x < 2(k+1)\pi$, since you have
$$ \int_{0}^x {\sin(t) \over t + 1}\,dt = \sum_{i = 0}^{k-1} \bigg(\int_{2i\pi}^{(2i + 1)\pi} {\sin(t) \over t + 1}\,dt + \int_{(2i+1)\pi}^{(2i + 2)\pi} {\sin(t) \over t + 1}\,dt\bigg) + \int_{2k\pi}^x {\sin(t) \over t + 1}\,dt $$
You've shown the first sum is positive, but you still have to show the last term is positive too. For that part, I suggest showing that ${\displaystyle \int_{2k\pi}^x {\sin(t) \over t + 1}\,dt}$ increases as $x$ goes from $2k\pi$ to $(2k + 1)\pi$, and then decreases as $x$ goes from $(2k + 1)\pi$ to $(2k + 2)\pi$. From what you've done already, you know it will not decrease all the way to zero.
And now for the "slick trick" solution: Note that the derivative of $1 - \cos(x)$ is $\sin(x)$, and the derivative of ${\displaystyle {1 \over t + 1}}$ is ${\displaystyle -{1 \over (t + 1)^2}}$. So integrating by parts you have
$$\int_{0}^x {\sin(t) \over t + 1}\,dt = {1 - \cos(t) \over t + 1}\bigg|_{t = 0}^{t = x} +
\int_0^x {1 - \cos(t) \over (t + 1)^2}\,dt$$
$$= {1 - \cos(x) \over x + 1} + \int_0^x {1 - \cos(t) \over (t + 1)^2}\,dt$$
Since $1 - \cos(t) \geq 0$ for all $t$, the first term is nonnegative. Similarly, the integrand of the second term is nonnegative (and not identically zero), so the resulting integral is positive for $x > 0$. |
Verify that the followings are indeed functors | For 2.a) it's about the $\mathscr P$-image of the identity function being itself an identity function.
By definition, we have $\mathscr P(1_A)(X)=1_A(X)=X$ for every $X\in \mathscr P(A)$, so it's indeed the identity function.
For 2.b) you should prove $\mathscr P(g\circ f) =\mathscr P(g)\circ \mathscr P(f)$, using again the definition of $\mathscr P$ on arrows. |
Resolve apparent paradox re probability of getting "a head ahead" tossing a coin. | Consider the following binary sequence:
0110001111000001111110000000...
Where each block of identical bits is longer than the previous one.
We can split it as follows:
0 11000 111100000 1111110000000 ...
Where each block has more zeros than ones.
But you can split it another way:
011 0001111 00000111111 ...
Where each block has more ones than zeros.
So if you don't consider this contradictory, your coin toss paradox also is not contradictory. If however you find this contradictory, your error is basically in rearranging a divergent series:
$1 - 2 + 3 - 4 + 5 - 6 ...$ |
Standard Matrix vs. Identity Matrix | The $n\times n$ identity matrix is always $I=[\delta_{ij}]$ where $\delta_{ij}=\begin{cases} 1 & \text{if } i=j \\ 0 & \text{otherwise}\end{cases}$. For example the $3\times 3$ identity matrix is $I=\begin{bmatrix}1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$
The standard matrix of a linear transformation $T:\mathbb{R}^n \to \mathbb{R}^m$ is an $m \times n$ matrix $A$ with the property that for all vectors $x$ evaluating the linear transformation is the same as multiplying by the matrix. That is $T(x)=Ax$.
So it is possible for an identity matrix to be a standard matrix, but it's not always the case. |
Can We understand Vector Bundles as pushouts? | The quotient of an object $X$ by an equivalence relation $R\subset X\times X$, in any category with enough structure, is defined as the coequalizer of the two projections $R\stackrel{\to}\to X$. Any coequalizer can be written as the pushout of the corresponding span $X\leftarrow R\sqcup X\to X$, so in that sense your vector bundle is a pushout, but that's not a very natural viewpoint.
The most natural diagram to use here has, I think, a more complicated shape. Consider the poset structure on the set $S'=A\sqcup A\times A$ generated by $(\alpha,\beta)\leq \alpha$ and $(\alpha,\beta)\leq \beta$, for each $(\alpha,\beta)\in A\times A$. It is most natural, but immaterial, to also remove the diagonal of $A\times A$ from $S'$, yielding finally a poset $S$. When $A$ contains two points, this construction yields the poset $\alpha\leftarrow (\alpha,\beta)\rightarrow \beta$, so that colimits along $S$ for larger index sets $A$ are generalized pushouts.
Now your vector bundle is the colimit of the $S$-indexed diagram of spaces sending $\alpha\mapsto U_\alpha\times \mathbb{R}^n$, $(\alpha,\beta)\mapsto p^{-1}(U_\alpha\cap U_\beta)$, and the comparison maps $(\alpha,\beta)\leq \alpha,(\alpha,\beta)\leq \beta$ to the restrictions of $h_\alpha$ and $h_\beta$, respectively. This colimit has exactly the desired effect of identifying $(x,v)$ with $h_\beta h_\alpha^{-1}(x,v)$ when sensible.
It might be worth remarking that the same story describes how to describe a manifold as a colimit of its coordinate patches, and for many other situations in which objects are built by gluing together local data. |
Why is the sample distribution of the Exponential distribution Gamma distributed? | The sum of $n$ IID exponential random variables with rate $\beta$ is well known to follow a gamma distribution with parameters $n, \beta$. See: Gamma Distribution out of sum of exponential random variables.
I assume you can prove the following:
Let $c>0$ and let $X\sim \operatorname{Gamma}(r,\lambda)$ with $r,\lambda>0$. It is well-known that $cX\sim\operatorname{Gamma}(r, \lambda/c)$.
Using these two, you can prove your desired result. |
Specific position / Restricted permutation [I don't exactly know the name of this topic] | Try to count the opposite: The number of permutations with A not in the first place AND with E in the last place.
There are 6! permutations with E in the last place. From the remaining 6 letters there are 5! with A in the first place (all permutations of RTICL). So we have 6!-5! permutations with A not in the first place and E in the last place. The total number of permutations is 7! so the result is: 7!-6!+5! |
Have people tried to find the area under a curve by means other than integration? | It depends on what we mean by "integrating."
There were many quadratures, both ancient and early modern, that did not involve the calculation of antiderivatives in the first-year calculus sense. By the middle of the seventeenth century, all curves of the shape $y=x^\alpha$ had in principle been dealt with, along with a number of others, such as the cycloid. One might mention in this context the Method of Fermat, and the indivisibles of Torricelli and Cavalieri. This was all quite a number of years before the official appearance of the calculus at the hands of Newton and Leibniz. |
calculate radius of circle that by given length of square that is inside it | Mark the center of circle as point $O$.
Mark the intersection of the red line and the square edge $BC$ as $F$.
Draw a line from $O$ to $B$ in your diagram. The length of this line must be equal to the radius of the circle, $r$.
The length of the line from $O$ to $F$ must then be equal to $8-r$ as $EO=r$ and $EF=8$.
Now use Pythagoras' theorem in triangle $OBF$ to get:
$$r^2=4^2+(8-r)^2$$and solve for $r$ |
About a/the definition of plane. | No, $X$ is always just one point in the plane.
The quote does not give any name to the plane (or collection of points) itself.
You should parse it as "the collection of all [points $X$ such that ...]", not "[the collection of all points] $X$". |
Solving an integral using first two terms of the Maclaurin series for $f(x)$ | When you take only two terms, the approximation of $\sin(4x^2)$ by only these two terms is not good enough in a left neighborhood of $0.73$, so you need to take three terms.
Calculating the integral with three terms gives:
$$\int_0^{0.73}4x^2-\frac{32 x^6}{3}+\frac{128 x^{10}}{15}\,dx=0.3746$$ |
Complex analysis inequalities | If $z$ is real, then $|e^{iz}|=1$ and $|z^2+1|=z^2+1=|z|^2+1$, hence
$\left|\frac{e^{iz}}{z^2 + 1}\right|=\frac{1}{z^2+1}=\frac{1}{|z|^2+1}$. |
Sum of all the $x$- and $y$-intercepts of the graph of $f(x)$ is equal to the sum of all the $x$- and $y$-intercepts of the graph of $f^{-1}(x)$ | You said $f(x)$ is one-to-one, so $f^{-1}(x)$ is well defined. (I will also assume that $f(x)$ actually intersects the $x$- and $y$-axes, for obvious reasons.) We have the sum $$ x_1+f(0),$$ where $f(x_1)=0$ and let's say $f(0)=y_1$.
This means that $f^{-1}(0)=x_1$ and $f^{-1}(y_1)=0$, so $x_1$ is the $y$-intercept of $f^{-1}(x)$ and $y_1$ is its $x$-intercept.
If we count the $x$- and $y$-intercepts of $f^{-1}(x)$ in the same way we did for $f(x)$, we get
$$y_1+f^{-1}(0)=f(0)+x_1,$$ which is the same sum indeed. |
Can someone help me understand the intuition with the formula for finding the kth Percentile? | One way to think about the above formula would be using the basic definition of how a percentage of anything is computed.
$$k = \frac{i}{n+1}100$$
$n+1$ is used here simply as a matter of indexing (there could be a $0^{th}$ percentile.) |
Simplifying expression Mobius Function | It is the product $$F(n)=(1-p_1)(1-p_2)\cdots(1-p_k)$$ where the $p_i$ are the distinct prime divisors of $n.$
This is because $g(n)=\mu(n)n$ is multiplicative, and hence so is $F,$ and $F(p^k)=1-p$ for prime $p.$
In your case, $3500$ has prime divisors $2,5,$ and $7$. So:
$$F(3500)=(1-2)(1-5)(1-7)=-24$$
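A quick brute-force check of this value (a Python sketch; the Möbius function is written out by hand to keep it self-contained):

```python
from sympy import factorint, divisors

def mobius(n):
    factors = factorint(n)
    if any(e > 1 for e in factors.values()):
        return 0
    return (-1) ** len(factors)

def F(n):
    return sum(mobius(d) * d for d in divisors(n))

print(F(3500))   # -24, matching (1-2)(1-5)(1-7)
```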
You could write it as $F(n)=n\sum_{d\mid n} \mu(d)\frac{1}{n/d}.$ Then you get,
by Möbius inversion: $$\frac{1}{n}=\sum_{d\mid n}\frac{F(d)}{d}$$
Or:
$$1=\sum_{d\mid n}\frac{n}{d}F(d)$$
That won't help you compute $F$ much however. |
Is this analysis problem involving induction flawed? | I agree that if we define $\mathbb{N}$ as a "set of positive integers", the question is redundant. However if we say that $\mathbb{N}$ is a set of natural numbers (and, after all, that is exactly what $\mathbb{N}$ stands for), defined by Peano axioms, then the question is no longer redundant.
As a side note, calling $\mathbb{N}$ a "set of positive integers" is, IMHO, bad style. Unfortunately, there is no standard convention for "set of positive integers". Many books use $\mathbb{Z}^{+}$ for that; however, at least one book I'm familiar with uses $\mathbb{Z}^{+}$ for "group of integers under the regular addition". Maybe $\mathbb{Z}_{+}$ is better (?)
Different definitions of differentiability | Yes, there is a big advantage: to prove things! Proving the properties of differentiable functions using the second definition is at least as simple and usually strictly simpler than using the first one.
Consider, for instance, the proof of the fact that if $f$ is differentiable at $a$ and $g$ is differentiable at $f(a)$, then $g\circ f$ is differentiable at $a$. So, there is some matrix $A$ and some function $\varphi$ defined near $0$ such that $\lim_{x\to a}\frac{\varphi(x-a)}{\|x-a\|}=0$ and that$$f(x)=f(a)+A(x-a)+\varphi(x-a);$$also, there is some matrix $B$ and some function $\mu$ defined near $0$ such that $\lim_{x\to f(a)}\frac{\mu(x-f(a))}{\|x-f(a)\|}=0$ and that$$g(x)=g(f(a))+B(x-f(a))+\mu(x-f(a)).$$So, let $\eta(x)=B\varphi(x)+\mu\bigl(Ax+\varphi(x)\bigr)$. Then\begin{align}g\bigl(f(x)\bigr)&=g\bigl(f(a)\bigr)+B(f(x)-f(a))+\mu(f(x)-f(a))\\&=g\bigl(f(a)\bigr)+B\bigl(A(x-a)+\varphi(x-a)\bigr)+\mu\bigl(A(x-a)+\varphi(x-a)\bigr)\\&=g\bigl(f(a)\bigr)+(BA)(x-a)+B\varphi(x-a)+\mu\bigl(A(x-a)+\varphi(x-a)\bigr)\\&=g\bigl(f(a)\bigr)+(BA)(x-a)+\eta(x-a).\end{align}
Now, compare this with a proof using the first definition. |
Class functions of finite group in matrix entry space | The $a_{ij}$ are linearly independent. It suffices to prove that $\rho_t B=B \rho_t$, where $B$ is the matrix of coefficients. By Schur's Lemma, $B=cI$ for some scalar $c$. |
The minimum value of $\frac{(x+\frac{1}{x})^6-(x^6+\frac{1}{x^6})-2}{(x+\frac{1}{x})^3+x^3+\frac{1}{x^3}}$ | hint: let
$$f(x)=\dfrac{\left(x+\dfrac{1}{x}\right)^6-\left(x^6+\dfrac{1}{x^6}\right)-2}{\left(x+\dfrac{1}{x}\right)^3+x^3+x^{-3}}=3\left(x+\dfrac{1}{x}\right)\ge 6$$
because
$$\left(x+\dfrac{1}{x}\right)^6-\left(x^6+\dfrac{1}{x^6}\right)-2=\left(x+\dfrac{1}{x}\right)^6-\left(x^3+\dfrac{1}{x^3}\right)^2$$
so
$$f(x)=\left(x+\dfrac{1}{x}\right)^3-\left(x^3+\dfrac{1}{x^3}\right)=3\left(x+\dfrac{1}{x}\right),$$ and for $x>0$ we have $x+\dfrac{1}{x}\ge 2$ by AM–GM, so the minimum value is $6$, attained at $x=1$. |
Question about inequality in linear algebra | You basically have it, but $\|u\|^2 \ge 0$ is not "given by definition". Rather, it follows from the fact that the square of a real number is nonnegative, and $\|u\|$ is a real number. It would be just fine if you said that $\|v\|^2 + \|u\|^2 \ge \|v\|^2$ since $\|u\|^2 \ge 0$. |
Symmetric Polynomials and Automorphisms of Complex Polynomial Rings | This conjecture is false. Choose a multiple $d=k n$ of $n$, with $k>1$, such that $n(n-1)/2$ divides $d$ (that is, $d/\binom{n}{2} = 2k/(n-1)$ is an integer $r$.) The monomial $x_1^k\cdots x_n^k$, which has total degree $d$, can be divided up into $n(n-1)/2$ degree-$r$ factors in any number of arbitrary ways, having no invariance properties whatsoever.
For example, if $n=3$ you can take $k=2$, and write
$$x_1^2 x_2^2 x_3^2=(x_1^2)(x_2 x_3)(x_2 x_3).$$ |
Sum of medians of a triangle | Say that the triangle is $ABC$. The vector giving the median from $A$ to $BC$ is $(AC+AB)/2$. Similarly, the one from $B$ to $AC$ is $(BA+BC)/2$, and the one from $C$ to $BA$ is $(CB+CA)/2$. Adding these, we get zero since $CB=-BC$, etc. |
How to solve the exponential equation $\exp({\frac{b+a}{a}u})+\exp({\frac{b}{a}u})=w$? | Numerically.
Letting $x=e^u$ (as anon does in a comment) the equation becomes $x^{(b+a)/a}+x^{b/a}=x^{b/a}(x+1)=w$ and then $$x^b(x+1)^a=v$$ where $v=w^a$. Even if $a,b$ are positive integers, making this a polynomial equation, there will be no analytic solution if $a,b$ are relatively prime and $a+b\ge5$. So you would have to resort to numerical methods, such as Newton's Method, which generally depend on a knowledge of Calculus.
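For instance, a minimal Newton iteration for $x^b(x+1)^a = v$ might look like this (a sketch with illustrative values; it is not part of the original answer):

```python
import math

def solve_x(a, b, v, x0=1.0, tol=1e-12, max_iter=100):
    """Newton's method for g(x) = x^b (x+1)^a - v = 0 with x > 0."""
    x = x0
    for _ in range(max_iter):
        g = x**b * (x + 1)**a - v
        gp = x**(b - 1) * (x + 1)**(a - 1) * (b * (x + 1) + a * x)  # g'(x)
        x_next = x - g / gp
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

a, b, w = 2, 1, 10                # illustrative values
x = solve_x(a, b, w**a)           # solves x(x+1)^2 = 100; the root is x = 4
u = math.log(x)                   # since x = e^u
print(u, math.exp((b + a) / a * u) + math.exp(b / a * u))  # check: sum is w = 10
```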
[Added 9 July]
Even in the case $a=2$, $b=1$, the equation $$e^{3u/2}+e^{u/2}=w$$ becomes $$x^3+x=w$$ on setting $x=e^{u/2}$, and while there is a formula for solving that one, an analogue of the quadratic formula but for cubics, that formula is not generally included in the school curriculum nowadays. You won't have any trouble finding it on the web, if you want it - just look for "cubic formula".
I suppose the best way to solve your equation is to find some software designed to do it for you. |
Zeta Regularization and Products of Primes | A proof can be found in this article or this preprint. |
Contour for Principal Value Integral | For contour integrals with integration limits $0$ and $\infty$, it is an elementary to trick to plug in a $\ln z$; i.e. we instead consider the contour integral:
$$\oint_C \frac{z\ln z}{(z-\gamma)(z^2+\Omega^2)(e^z-1)}dz$$
where $C$ is the contour as shown in the image:
I will elaborate.
Let $$f(z)= \frac{z}{(z-\gamma)(z^2+\Omega^2)(e^z-1)}$$ and let $J$ be your integral.
$$I_2+I_4=\int^\infty_0 f(t)\ln t dt$$
$$I_8+I_6=\int^0_\infty f(t)\ln(te^{2\pi i})\,dt=\int^0_\infty \left(f(t)\ln t+2\pi i\,f(t)\right)dt$$
Then, $$I_2+I_4+I_6+I_8=-2\pi i\int^\infty_0 f(t)dt =-2\pi i J$$
Moreover, $I_1$ and $I_5$ vanish in the limit.
Also,
$$I_3=-\frac12\cdot 2\pi i\cdot\ln\gamma \lim_{z\to \gamma}f(z)(z-\gamma)$$ The negative sign is due to the clockwise direction of $I_3$ and because $I_3$ is a semicircle so there is a $\frac12$ factor.
Similarly,
$$I_7=-\frac12\cdot2\pi i\cdot(\ln\gamma+2\pi i)\lim_{z\to\gamma}f(z)(z-\gamma)$$
Just to sum things up a little bit, we have
$$-2\pi iJ=2\pi i\sum\text{Res} -I_3-I_7$$
The calculation of the residue is quite straightforward.
At $+ i\Omega$, the residue is
$$(\ln\Omega+\frac{i\pi}2)\lim_{z\to i\Omega}f(z)(z-i\Omega)$$
At $-i\Omega$, the residue is
$$(\ln\Omega+\frac{3\pi i}2)\lim_{z\to -i\Omega}f(z)(z+i\Omega)$$
At $2\pi in$ where $n>0$, the residue is
$$(\ln (2\pi n)+\frac{i\pi}2)\lim_{z\to2\pi in}f(z)(z-2\pi i n)$$
At $2\pi in$ where $n<0$, the residue is
$$(\ln (2\pi |n|)+\frac{3i\pi}2)\lim_{z\to2\pi in}f(z)(z-2\pi i n)$$
Note that $J$ is real, so considering only the real part of the residue is sufficient. |
How to change the limits of integration | $s=-r^{2}$ gives $ds=-2rdr$ so $dr =-\frac 1 {2r} ds$. Also, as $r$ increases from $0$ to $\infty$, $s$ decreases from $0$ to $-\infty$. |
$\epsilon$-$\delta$ Proof of Limit in $\mathbb{R}^4$ | you don't need an explicit coordinate system. Any
$$ \frac{(a,b,c,d)}{\sqrt{a^2 + b^2 + c^2 + d^2}} $$
is a unit vector; let us use different names, maybe
$(p,q,r,s)$ such that $p^2 + q^2 + r^2 + s^2 = 1.$
Then your point
$$ (a,b,c,d) = t (p,q,r,s) $$
so that $$ a^2 + b^2 + c^2 + d^2 = t^2 $$
Next, $$(ad-bc)^2 = t^4 (pq-rs)^2$$
Now, how big can $|pq-rs|$ be? We know $-1 \leq p,q,r,s \leq 1$ |
Normal distribution, absolute values | Since $\Phi(-y)=1-\Phi(y)$ you should actually get $2\Phi(K/2)=1.76$, but you otherwise have the right strategy. |
Effect of perturbation on Perron eigenvalues | I don't have a good reference on perturbations of PF eigenvalues but it is fairly simple to make examples. For $\epsilon\geq 0$:
$$ M_\epsilon = \left( \begin{matrix} 1 & 1 & 0 \\ 0 & 1 & 1\\
\epsilon & 0 & 1\end{matrix}\right) $$
has characteristic polynomial $p(\lambda)=(\lambda-1)^3-\epsilon$, so the leading eigenvalue is $\lambda=1+\epsilon^{1/3}$, i.e. it is $\frac{1}{3}$-Hölder continuous in $\epsilon$.
In dimension $d$ you may at worst get $\frac{1}{d}$-Hölder.
It can not get worse simply because you look at zeroes of a polynomial with coefficients that are polynomials in matrix elements. A root of order $d$ behaves $1/d$-Hölder continuously in the matrix elements and an isolated zero depends analytically upon the matrix.
In particular, from this you also see that in the strictly positive case, the leading eigenvalue is isolated (PF-theorem), so it behaves analytically with the matrix. |
Subalgebras of certain C*-algebras | This is not even true for a singleton $X=\{\ast\}$. In this case $C(X,M_n(\mathbb{C}))\simeq M_n(\mathbb{C})$. Consider, for instance, $A$ the subalgebra of diagonal matrices. Then $A\simeq \mathbb{C}^n$. So for $n\geq 2$, this is not isomorphic to any $C(Y,M_m(\mathbb{C}))$ for $Y\subseteq X$, as the latter are isomorphic to $M_m(\mathbb{C})\not\simeq \mathbb{C}^n$. |
Prove: Every compact metric space is separable | Hint: Consider the countable family of coverings $\mathcal U_n=\left\{B\left(x,\frac1n\right)\mid x\in X\right\}$ for all $n$, use compactness to distill a countable family of points, and show it is dense. |
Which way will produce the following integral? | I assume $\gamma$ is supposed to be a closed contour. Let the residues of your integrand at the poles $p_j$ be $r_j$ and the winding number of $\gamma$ around $p_j$ be $w_j$. Then what you want is $\sum_j r_j w_j = 0$. There are many ways to do this. |
Is there an abbreviation for "we want to prove that"? | In my undergraduate math classes, I have seen some faculty use "WTS:" as shorthand for "Want to show:". I found this to be quite convenient, but only after the meaning was explained: until then it was quite confusing, so do take @GiuseppeNegro's comment above into account.
If you feel you are going to be making repeated use of it in a document/exam paper then you could mention at the start what this abbreviation stands for exactly. |
Can there exist a holomorphic function $f:\mathbb{C}\backslash\{1/n:n\geq1\}\rightarrow\mathbb{H}$? | Let $g(z)=\frac 1 {f(z)+i}$. The $g$ is also analytic on the same domain and it is bounded: $|g(z)| \leq 1$. Hence it has a removable singularity at the points $\frac 1 n$. It extends to a bounded entire function so it is a constant by Liouville's Theorem. It follows that $f$ is itself a constant. |
Constructing a rational function from its asymptotes | Your work is correct. Note that your solutions are the ''simplest'' rational functions that satisfy the requests. Obviously you can find infinitely many other rational functions that do the same, but have some other property. |
Solving equation involving Euler's totient function | Let $n=p_1^{k_1}\cdot p_2^{k_2}\cdots p_r^{k_r}$, where the $p_i$ are distinct prime numbers.
Then, $\varphi(n)=p_1^{k_1-1}(p_1-1)p_2^{k_2-1}(p_2-1)\cdots p_r^{k_r-1}(p_r-1)=2^2=1\cdot 4=2\cdot 2$.
Now, at most one odd prime can divide $n$: two distinct odd primes would contribute even factors $p_1-1$ and $p_2-1$ with $(p_1-1)(p_2-1)\mid 4$, forcing $p_1-1=2=p_2-1$, which is impossible since $p_1\neq p_2$. If $r=1$: either $p_1=2$, $k_1=3$ (so $n=8$), or $p_1-1=4$, i.e. $p_1=5$, $k_1=1$ (so $n=5$). If $r=2$, then $p_1=2$ and $2^{k_1-1}(p_2-1)=4$: either $k_1=1$, $p_2=5$ (so $n=10$) or $k_1=2$, $p_2=3$ (so $n=12$).
Solutions are $n=5$, $8$, $10$, and $12$.
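A one-line check (assuming sympy's `totient`; since $\varphi(n)\to\infty$, a finite search suffices):

```python
from sympy import totient

print([n for n in range(1, 1000) if totient(n) == 4])   # [5, 8, 10, 12]
```

These are exactly the four solutions above. |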
If a process is dominated by an integrable variable, then this process is of class $\mathcal D$ | If you have any collection $\mathcal A$ of random variables that is dominated by an integrable random variable $Y$, then $\mathcal A$ is uniformly integrable. This is because for any $\epsilon>0$, there exists some $\delta>0$ such that $$ P(A) < \delta \implies E[|Y|1_A]<\epsilon \implies \sup_{X \in \mathcal A} E[|X|1_A]< \epsilon$$
Now note that if $|X(t, \omega)| \leq |Y(\omega)|$, then for any a.s. finite stopping time $\tau$ we also have that $|X(\tau(\omega),\omega)| \leq |Y(\omega)|$; hence the collection $\mathcal A:=\{X_{\tau}:\tau$ a finite stopping time$\}$ is dominated by $Y$. |
Winding Number and Area Preserving Maps | Yes; it is possible to have fixed points of an arbitrary negative index. To get an example, think first of the index $-1$ fixed point at the origin for the map $(x,y) \mapsto (2x, y/2)$. This fixed point is hyperbolic and its local dynamics consists of four "hyperbolic sectors". You can make examples with more than 4 hyperbolic sectors, to get fixed points of other negative indices. For instance the fixed point in this picture has index $-2$ (and can be made area-preserving). In general if you have a similar dynamics with $2k$ sectors the index will be $1-k$.
So the absolute value of the index may be anything. What is true however is that the index itself (for an area-preserving homeomorphism) can't be greater than $1$. This is proved here:
Pelikan, S., Slaminka, E. (1987). A bound for the fixed point index of area-preserving homeomorphisms of two-manifolds. Ergodic Theory and Dynamical Systems, 7(3), 463-479. https://doi.org/10.1017/S0143385700004132
In the case of $C^1$ diffeomorphisms there is an earlier version (with much easier proof) here:
Simon, Carl P. "A Bound for the Fixed-Point Index of an Area-Preserving Map with Applications to Mechanics.." Inventiones mathematicae 26 (1974): 187-200. |
Orders of numbers modulo another number | Hint: The definition of order requires that it be the minimum positive exponent producing 1.
For example, if we're working modulo $5$ (say), then $2$ has order $4$, because
$$2^1=2,\quad 2^2=4,\quad 2^3=3,\quad \fbox{$2^4=1\strut$}$$ but $2^2=4$ has order $2$, because
$$4^1=4,\quad \fbox{$4^2=1\strut$}$$ |
Show that an element belongs to unique factorization domain. | Hint $\: $ Mimic the proof of the Rational Root Test, which works over any domain where gcds exist, i.e $\rm\:(a,b) = 1,\ 0 = b^n\:\! f(a/b) =\: a^n + b\:(\cdots)\ \Rightarrow\ b\:|\:a^n\ \Rightarrow\ b\:|\:1\:$ by Euclid's Lemma. |
Multivariate normal distribution, calculating transform distribution and conditional expectations | So, I finally got it right. Here is the full solution.
We are given that the transform of $(X_1,X_2,X_3)$ to $(Y_1,Y_2,Y_3)$ is
$$\begin{pmatrix}
Y_1\\
Y_2\\
Y_3
\end{pmatrix}=\begin{pmatrix}
1&0 &1 \\
2&-1 &0 \\
0&-1 &2
\end{pmatrix}\begin{pmatrix}
X_1\\
X_2\\
X_3
\end{pmatrix}.$$
Since $\pmb{Y}$ is a linear transformation of the multivariate normal vector $\pmb{X}$, as StubbornAtom commented, $\pmb{Y}$ is also a multivariate normal vector. We may then combine the components any way we want and that (vector) combination will also be normally distributed. We choose the normal bivariate vectors as $(Y_1,Y_3)$ and $(Y_2,Y_3)$. From here we can easily calculate the conditional distribution of $Y_3$ given $Y_1=y_1$ and $Y_2=y_2$, respectively. The conditional distributions will also be (univariate) normally distributed with mean (or expected value, which we are looking for!) given as
$$\mu_3 + \rho \frac{\sigma_3}{\sigma_{i}}(y_{i}-\mu_{i})=\mu_3 + \frac{\text{Cov}(Y_i,Y_3)}{\sigma_{i}^2}(y_{i}-\mu_{i}),\quad i=1,2 .$$
Here $\text{Cov}(Y_i,Y_3)$ is the covariance between $Y_i$ and $Y_3$, and can be found as the $i,3$ (or $3,i$ since symmetry) element of the covariance matrix $\Sigma$.
From here we can calculate the desired results. |
On a proof regarding linear sub-spaces | The answer to the Question 1 is no. Here is a counterexample. Take $n=4$ and let's denote by $e_i$ the i$^{\text{th}}$ vector of the canonical basis of $\Bbb R^4$. If $B_1=e_1$, $B_2=e_4$ and if $$AB_1=e_2, \;A^2B_1=e_3,\;A^3B_1=e_1$$ and $AB_2=e_3$ then $$r_1=3,\;r_2=1<r_1$$ but $\langle B_1, AB_1, B_2\rangle$ doesn't contain $AB_2=e_3$.
Edit: The matrix A is
$$\begin{bmatrix} 0&0&1&0\\ 1&0&0&0\\ 0&1&0&1\\ 0&0&0&0 \end{bmatrix}$$ |
Number of ternary sequences of length $n$ with the property that $|x_i - x_{i - 1}| = 1$ for each $i$ such that $2 \leq i \leq n$ | Most of your reasoning is correct, but the final result is not.
As you observed, every other element of a ternary sequence of length $n$ with the property that $|x_i - x_{i - 1}| = 1$ for $i = 2, 3, \ldots, n$ is a $1$.
Case 1: The sequence begins with $1$.
As you observed, exactly $\lfloor \frac{n}{2} \rfloor$ elements of the sequence are not equal to $1$. Each such entry can be filled in two ways, with either a $0$ or $2$. Hence, there are
$$2^{\lfloor \frac{n}{2} \rfloor}$$
such sequences.
Case 2: The sequence begins with $0$ or $2$.
As you observed, exactly $\lceil \frac{n}{2} \rceil$ elements of the sequence do not have a $1$ in that position. Each such entry can be filled in two ways, with either a $0$ or $2$. Hence, there are
$$2^{\lceil \frac{n}{2} \rceil}$$
such sequences.
Total: Since the two cases are mutually exclusive and exhaustive, the number of admissible sequences is
$$2^{\lfloor \frac{n}{2} \rfloor} + 2^{\lceil \frac{n}{2} \rceil}$$
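A brute-force sketch (in Python; purely for reassurance, not part of the argument) confirms the formula for small $n$:

```python
from itertools import product

def count(n):
    """Count ternary sequences with |x_i - x_{i-1}| = 1 for 2 <= i <= n."""
    return sum(
        all(abs(s[i] - s[i - 1]) == 1 for i in range(1, n))
        for s in product((0, 1, 2), repeat=n)
    )

for n in range(1, 11):
    assert count(n) == 2**(n // 2) + 2**((n + 1) // 2)
print("formula verified for n = 1..10")
```

Here the integer divisions `n // 2` and `(n + 1) // 2` play the roles of $\lfloor n/2\rfloor$ and $\lceil n/2\rceil$. |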
Choosing a $x \in$ $X$, with $X$ an infinte set, $<$ a well-ordering on $X$, such that $x < x'$ for only finitely many $x' \in X$ | No, if the well order has a maximal element, using it is a counterexample. |
Legendre symbol simplification | For the first question
$$
\left(\dfrac{19}{29}\right) = \left(\dfrac{29}{19}\right)
= \left(\dfrac{10}{19}\right)
$$
since $29$ is $1$ mod $4$.
For the second,
$$
\left(\dfrac{5}{19}\right) =
\left(\dfrac{19}{5}\right) =
\left(\dfrac{-1}{5}\right) = 1
$$
since $5$ is $1$ mod $4$
and
$$
\left(\dfrac{2}{19}\right) = -1
$$
because $19$ is $3$ mod $8$. (http://www.maa.org/programs/faculty-and-departments/classroom-capsules-and-notes/the-quadratic-character-of-2)
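These values are quick to double-check (a sketch assuming sympy's `legendre_symbol`):

```python
from sympy.ntheory import legendre_symbol

print(legendre_symbol(19, 29), legendre_symbol(10, 19))  # -1 -1
print(legendre_symbol(5, 19), legendre_symbol(2, 19))    #  1 -1
```

Combining, $\left(\dfrac{19}{29}\right)=\left(\dfrac{2}{19}\right)\left(\dfrac{5}{19}\right)=-1$. |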
For a matrix $A$ with an unknown coefficient, how do I show the number of solutions in $Ax=b$? | There's a general result:
A non-homogeneous linear system $\;A[x]=[b]$ has solutions if and only if the matrix $A$ and the augmented matrix $[Ab]$ have the same rank.
Furthermore, this common rank is the codimension of the set of solutions (which is an affine space).
Proceeding to row reduction, you should find that there's a unique solution, except if $a=\pm 1$. |
Correlation Coefficient as Cosine | $\newcommand{\lowersub}[1]{_{\lower{0.5ex}{\small #1}}}$
The Euclidean norm of an $n$-dimensional vector $(X_1, X_2,...,X_n)^\top$ is ${\lVert x\rVert}_2 =\sqrt{\sum\limits_{k=1}^n {X}^2_k}$, which is a measure of the distance from the origin.
In a similar way, $\sqrt{\mathsf E\Big(\!\big(X-\mathsf E(X)\big)^2\Big)}$ is a measure of the magnitude a random variable deviates from its mean. Thus the standard deviation may be regarded as the weighted norm of the centred random variable.
( As A.S. commented, centering gives a more meaningful comparison of how two random variables deviate from their means. )
For discrete random variable we have:
$$\lVert X-\mu\lowersub X\rVert = \sqrt{\sum_{\forall x} (x-\mu\lowersub{X})^2\mathsf P(X=x)}$$
And for a continuous real-valued random variable we have:
$$\lVert X-\mu\lowersub{X}\rVert = \sqrt{\int_\Bbb R (x-\mu\lowersub{X})^2\,f\lowersub{X}(x)\operatorname d x}$$
Likewise, the angle between two $n$-dimensional vectors $\vec x, \vec y$ is related to their inner product and norms by
$$\cos \alpha\lowersub{\vec x,\vec y} = \frac{\langle \vec x, \vec y\rangle}{{\lVert \vec x\rVert}_2{\lVert \vec y\rVert}_2}$$
Similarly, the covariance of two centered random variables is analogous to an inner product, and so we have the concept of correlation as the cosine of an angle.
$$\rho\lowersub{X,Y}=\dfrac{\mathsf E((X-\mu\lowersub{X})(Y-\mu\lowersub{Y}))}{\sqrt{\mathsf E((X-\mu\lowersub{X})^2)\,\mathsf E((Y-\mu\lowersub{Y})^2)}}=\dfrac{\langle X-\mu\lowersub{X} , Y-\mu\lowersub{Y}\rangle}{\lVert X-\mu\lowersub{X}\rVert\,\lVert Y-\mu\lowersub{Y}\rVert} = \cos \theta\lowersub{X,Y}$$ |
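For finite samples the analogy is exact: the Pearson correlation of two data vectors equals the cosine of the angle between the centred vectors. A minimal sketch in numpy (all names are mine):

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=1000)
    y = 0.6 * x + 0.8 * rng.normal(size=1000)

    xc, yc = x - x.mean(), y - y.mean()   # centre both samples
    cos_theta = xc @ yc / (np.linalg.norm(xc) * np.linalg.norm(yc))

    print(cos_theta, np.corrcoef(x, y)[0, 1])   # the two values agree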
Concerning crossproduct and orthonormality of vectors | Combining them into a matrix
$$A=\begin{bmatrix}
u_1 & v_1 & w_1\\
u_2 & v_2 & w_2\\
u_3 & v_3 & w_3
\end{bmatrix}$$
then what you've observed can be restated as:
$$(\textsf{row }i)\cdot(\textsf{row }j)=\begin{cases}
1 & \text{if }i=j,\\
0 & \text{if }i\neq j
\end{cases}$$
In other words, you've observed that if we make a square matrix $A$ with orthonormal columns, then the rows of $A$ are also orthonormal to each other.
The rows of $A$ are just the columns of the transpose matrix, denoted $A^{\mathsf{T}}$.
A square matrix whose columns are orthonormal to each other is called an "orthogonal matrix" (poor naming choice, unfortunately). See here on Wikipedia.
Thus, the final restatement of what you've observed is that the transpose of an orthogonal matrix is again an orthogonal matrix.
How to prove this observation?
Note that because the columns of $A$ are orthonormal, we have
$$A^{\mathsf{T}}A=\begin{pmatrix}
u\cdot u & u\cdot v & u\cdot w\\
v\cdot u & v\cdot v & v\cdot w\\
w\cdot u & w\cdot v & w\cdot w
\end{pmatrix}=\begin{pmatrix}
1 & 0 & 0\\
0 & 1 & 0\\
0 & 0 & 1
\end{pmatrix}=I$$
It is an important fact that for any square matrices $A$ and $B$, if we have $BA=I$, then we must also have $AB=I$. There is a stack exchange thread about that here.
But now, observe that because $A^{\mathsf{T}}A=I$, we must have $AA^{\mathsf{T}}=I$, so that
$$AA^{\mathsf{T}}=\begin{pmatrix}
(\textsf{row }1)\cdot(\textsf{row }1) & (\textsf{row }1)\cdot(\textsf{row }2) & (\textsf{row }1)\cdot(\textsf{row }3)\\
(\textsf{row }2)\cdot(\textsf{row }1) & (\textsf{row }2)\cdot(\textsf{row }2) & (\textsf{row }2)\cdot(\textsf{row }3)\\
(\textsf{row }3)\cdot(\textsf{row }1) & (\textsf{row }3)\cdot(\textsf{row }2) & (\textsf{row }3)\cdot(\textsf{row }3)
\end{pmatrix}=\begin{pmatrix}
1 & 0 & 0\\
0 & 1 & 0\\
0 & 0 & 1
\end{pmatrix}=I$$
Thus the rows of $A$ are also orthonormal to each other. |
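A quick numerical illustration of this fact (a minimal sketch in numpy; the QR decomposition is just a convenient way to produce orthonormal columns):

    import numpy as np

    rng = np.random.default_rng(0)
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # columns are orthonormal

    assert np.allclose(Q.T @ Q, np.eye(3))  # orthonormal columns
    assert np.allclose(Q @ Q.T, np.eye(3))  # hence orthonormal rows too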
If $\lim\limits_{x \to \infty} f'(x) = L$ and $\lim\limits_{n \to \infty} f(n) = A$ exists, prove that $L = 0$. | If we can show that $|L|\lt \epsilon$ for every $\epsilon \gt 0$ then we will have that $|L|\le 0\Rightarrow L=0$.
Let $\epsilon\gt 0$ be given. Since $$\lim_{x\to \infty} f^{\prime}(x)=L$$ we can find $M$ such that $x\ge M\Rightarrow |f^{\prime}(x) - L|\lt \frac{\epsilon}{2}$. Since $$\lim_{n\to\infty}f(n)=A$$ we can find $N$ such that $n\ge N\Rightarrow |f(n)-A|\lt \frac{\epsilon}{4}$.
Consider an integer $m\gt \max(M,N)$. The mean value theorem tells us we can find $c\in (m,m+2)$ such that $$f^{\prime}(c) = \frac{f(m+2) - f(m)}{2}$$ Then $|f^{\prime}(c)| = \left|\frac{f(m+2) - f(m)}{2}\right|\le \frac{1}{2}\big(|f(m+2)-A|+|A-f(m)|\big)\lt \frac{1}{2}\left(\frac{\epsilon}{4}+\frac{\epsilon}{4}\right)=\frac{\epsilon}{4}\lt\frac{\epsilon}{2}$
Now, $|f^{\prime}(c)-L| = |L-f^{\prime}(c)|\lt\frac{\epsilon}{2}\Rightarrow -\frac{\epsilon}{2}\lt L-f^{\prime}(c)\lt \frac{\epsilon}{2}\Rightarrow f^{\prime}(c)-\frac{\epsilon}{2}\lt L\lt f^{\prime}(c)+\frac{\epsilon}{2}$.
But $-\frac{\epsilon}{2}\lt f^{\prime}(c)\lt \frac{\epsilon}{2}$ so $-\epsilon = \frac{-\epsilon}{2} + \frac{-\epsilon}{2}\lt f^{\prime}(c)+\frac{-\epsilon}{2}\lt L\lt f^{\prime}(c)+\frac{\epsilon}{2}\lt \frac{\epsilon}{2} + \frac{\epsilon}{2}=\epsilon$.
Therefore $|L|\lt \epsilon$ for every $\epsilon\gt 0$, and so $L$ must be equal to $0$.
Construct bijective functions | No, there is not a certain trick. Sometimes it is hard to find a bijective function, sometimes it is not.
Lets observe the sets that we are supposed to map onto each other.
$[0,\infty)$ and $(0,\infty)$.
The only difference in these sets is that $0$ is an element of the $[0,\infty)$, while it is not an element of $(0,\infty)$.
So the first question we might ask is "What do we map 0 onto", because the sets are so similar that the first function that comes to mind might be the identity. But this does not work, since then $0$ has no image.
So we have to choose some element of $(0,\infty)$ to map $0$ onto.
In your solution this element is $1$, which makes sense. It is then natural to continue in the same way, mapping $1$ onto $2$, $2$ onto $3$, and so on, while every non-integer is mapped to itself.
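Written out, this is the standard explicit bijection $f:[0,\infty)\to(0,\infty)$; a minimal sketch in Python (the function name is mine):

    def f(x):
        """Bijection from [0, inf) onto (0, inf)."""
        if x >= 0 and x == int(x):
            return x + 1   # 0 -> 1, 1 -> 2, 2 -> 3, ...
        return x           # every non-integer maps to itself

    assert f(0) == 1 and f(1) == 2
    assert f(0.5) == 0.5 and f(3.7) == 3.7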
Are all polynomial expressions an algebraic expression. | Algebraic equation and polynomial equation are synonyms. They are both asking for the roots of a polynomial.
But algebraic expression has a wider sense than polynomial expression (rarely used) in that it allows divisions and rational exponents. (If you allow just division, you get a rational expression.)
In decreasing order of generality: the algebraic expressions contain the rational expressions, which contain the polynomial expressions, which contain the arithmetic expressions.
Collection of congruences | Well, from what I can tell, three main witty things are done with congruences in number theory: we use Fermat's Little Theorem and Euler's Theorem, devise divisibility rules, or combine congruences with the Chinese Remainder Theorem. I suppose we might also use Quadratic Reciprocity upon occasion (I think it's particularly important). I can't think of many other techniques that are commonly used.
All of these, and the more fundamental properties of congruences, should be handled in most Elementary Number Theory books. I happen to know that Rosen's Intro to Elementary Number Theory has all of these in it. I also love that book. |
Partial Differential from Implicit Expression | Taking into account the fact that you want to obtain $$\left(\frac{\partial{\rho}}{\partial{T}}\right)_{p}$$ as Chinny84 commented, you need to consider that $p$ is constant. So, you have
$$F(\rho,T)=-\frac{A \rho ^2 \left(1-\frac{a \rho }{M}\right)}{M^2}+\frac{\rho ^2 R T
\left(-\frac{b B \rho }{M}+B+\frac{M}{\rho }\right) \left(1-\frac{c \rho }{M
T^3}\right)}{M^2}-p=0$$ So, compute $$\left(\frac{\partial{F(\rho,T)}}{\partial{T}}\right)_{\rho}$$ $$\left(\frac{\partial{F(\rho,T)}}{\partial{\rho}}\right)_{T}$$ and, as usual, compute $$\left(\frac{\partial{\rho}}{\partial{T}}\right)_{p}=- \frac {\left(\frac{\partial{F(\rho,T)}}{\partial{T}}\right)_{\rho}}{\left(\frac{\partial{F(\rho,T)}}{\partial{\rho}}\right)_{T}}$$ |
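For concreteness, the computation can be delegated to a computer algebra system; a minimal sketch with sympy, with symbols named after the formula above:

    import sympy as sp

    rho, T, p = sp.symbols('rho T p', positive=True)
    A, a, b, B, c, M, R = sp.symbols('A a b B c M R', positive=True)

    F = (-A * rho**2 * (1 - a * rho / M) / M**2
         + rho**2 * R * T * (-b * B * rho / M + B + M / rho)
           * (1 - c * rho / (M * T**3)) / M**2
         - p)

    # (d rho / d T)_p = -F_T / F_rho
    drho_dT = -sp.diff(F, T) / sp.diff(F, rho)
    print(sp.simplify(drho_dT))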
Prove there exists a unique polynomial | Among all polynomials that have properties a) and c) (such polynomials exist, for example $f$ itself), let $f^*$ be one that minimizes the sum of the exponents of its nonzero monomials. Then show that $f^*$ has property b). Indeed, if a monomial $ax_1^{e_1}x_2^{e_2}\cdots x_n^{e_n}$ occurs with $a\ne 0$ and some $e_i\ge p$, we can use the Fermat identity $x^p\equiv x\pmod p$ to observe that the polynomial $g$ obtained from $f^*$ by replacing $ax_1^{e_1}x_2^{e_2}\cdots x_n^{e_n}$ with $ax_1^{e_1}x_2^{e_2}\cdots x_i^{e_i-p+1}\cdots x_n^{e_n}$ also has properties a) and c) and thus contradicts the minimality of $f^*$. We conclude that no such monomial exists in $f^*$.
As for uniqueness: count the number of polynomial functions $\mathbb Z_p^n\to \mathbb Z_p$ and compare it with the number of polynomials satisfying b).
Confusion about conditional expectation | What you give as the second approach is not correct.
Be aware that: $$\mathbb E(Y\mid A)=\frac{\mathbb E[Y\mathbf1_A]}{P(A)}$$You forgot the division by $P(A)$. |
Proving union of countably infinite sets | While I think your idea is correct, I would not give you full points for it if it were an exam answer. The problems:
What are $a_i$ and $b_i$? If they are elements of $A$ and $B$, then you should write that down.
If the $a_i$ are elements of $A$, and the $b_i$ are elements of $B$, then $f$ is not well defined, as you only map to $n$ elements of $A$ and to an infinite amount of elements from $B$, which is not possible since $B$ is finite and $A$ is infinite.
Your proof that $f$ is a bijection is lacking. It is fairly easy to see, but you made no effort to prove it. In fact (see previous comment), $f$ is not a bijection as it does not map to $a_{n+1}$ and is therefore not surjective. |
Proving an isomorphism without proving that it is Onto | It is not saying that the two groups are isomorphic. It is just saying that the first group is isomorphic to the image of the map. By definition, the map is onto its image but that image is not necessarily the whole of the second group, it might be a subset / subgroup.
For example, the obvious map from $\Bbb{Z}$ to $\Bbb{Q}$ regarded as groups under addition is a homomorphism and one to one. The image of the map is the integers within $\Bbb{Q}$ which is not all of $\Bbb{Q}$. So, it has proved that there is a subgroup of $\Bbb{Q}$ isomorphic to $\Bbb{Z}$ but not that all of $\Bbb{Q}$ is isomorphic to $\Bbb{Z}$. |
Finding the stochastic differential equation satisfied by process Y | You have a typo in your first equation; it should be
$$dY_t=\beta X_t^{\beta-1}dX_t+\beta(\beta-1)X_t^{\beta-2}(dX_t)^2$$
But it looks to me that you are correct, and that even this equation is wrong: the factor $\frac{1}{2}$ is missing.
If $Y_t =f(X_t)$, with $f(x)=x^{\beta}$, I do expect the $\frac{1}{2}$:
$$dY_t=df(X_t)=\frac{\partial f(X_t)}{\partial x}dX_t + \color{red} {\frac{1}{2}} \cdot\frac{\partial^2 f(X_t)}{\partial x^2}(dX_t)^2$$ |
On equivalence between limsup and supremum of subsequential limits | First, write $$p:=\limsup_{n\to\infty} S_n.$$
Your attempt might work, but I think it is quite hard, because the set $E$ contains the limits of all subsequences $\{e_n\}\subset \{S_n\}$; that is a lot of limits, and there are some really strange ones in there.
Instead, just assume that the claim is not true, so that
$$\sup E > \limsup S_n \quad \quad\lor\quad \quad \sup E < \limsup S_n$$
I guess you will find it really easy to prove that this cannot happen; in the first case, for instance, there would be a subsequence $e_n$ with
$$ \lim_{n\to\infty} e_n > \limsup_{n\to\infty} S_n $$
Find the probability that at least two of the chosen socks have the same color out of 18 socks: 2 pairs brown, 3 pairs blue, 4 pairs yellow. | Option A:
The total number of ways to select $3$ socks is $\binom{4+6+8}{3}=816$
The number of ways to select $1$ sock of each color is $\binom41\cdot\binom61\cdot\binom81=192$
Hence the probability for at least $2$ socks of the same color is $1-\frac{192}{816}$
Option B:
The total number of ways to select $3$ socks is $\binom{4+6+8}{3}=816$
The number of ways to select $3$ brown is $\binom43 = 4$
The number of ways to select $3$ blue is $\binom63 = 20$
The number of ways to select $3$ yellow is $\binom83 = 56$
The number of ways to select $2$ brown and $1$ blue is $\binom42\cdot\binom61= 36$
The number of ways to select $2$ brown and $1$ yellow is $\binom42\cdot\binom81= 48$
The number of ways to select $2$ blue and $1$ brown is $\binom62\cdot\binom41= 60$
The number of ways to select $2$ blue and $1$ yellow is $\binom62\cdot\binom81=120$
The number of ways to select $2$ yellow and $1$ brown is $\binom82\cdot\binom41=112$
The number of ways to select $2$ yellow and $1$ blue is $\binom82\cdot\binom61=168$
This adds up to $624$ ways for selecting at least $2$ socks of the same color
Hence the probability for at least $2$ socks of the same color is $\frac{624}{816}$ |
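Both options can be confirmed by brute-force enumeration; a minimal sketch in Python:

    from itertools import combinations

    socks = ['brown'] * 4 + ['blue'] * 6 + ['yellow'] * 8

    picks = list(combinations(range(18), 3))
    same = sum(1 for p in picks if len({socks[i] for i in p}) < 3)

    print(len(picks), same)   # 816 624, so the probability is 624/816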
If $f \in L^1(-\infty, \infty)$ , and $G(u) = \frac{f(x+u) - f(x^+)}{\pi u}$, is $G(u) \in L^1(0^+, \infty)$ true? | This is not true in general.
As a counterexample, look at $f : t \mapsto \exp(-t^2) \in L^1(\mathbb{R})$.
For every $x \in \mathbb{R}$, $G(u) = \frac{\exp\left(-(x+u)^2\right)-\exp(-x^2)}{\pi u} \notin L^1(\mathbb{R}^{+})$, since $G(u)\sim -\frac{\exp(-x^2)}{\pi u}$ as $u \to \infty$.
Therefore I have no idea why the Riemann-Lebesgue lemma can be applied here. |
How do I take the derivative of a matrix equation? | $L = (y-Xw)'(y-Xw)=(y'-w'X')(y-Xw)=y'y-w'X'y-y'Xw+w'X'Xw$
$$dL=-dw'X'y-y'Xdw+dw'X'Xw+w'X'Xdw=-dw'X'y-dw'X'y+dw'X'Xw+dw'X'Xw$$
$$dL=dw'(-2X'y+2X'Xw)$$
I took the total derivative first. Then I used the fact that the terms are scalars, so transposing them does not change them. In the last equation, the bracket on the right is the gradient.
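The resulting gradient $-2X'y+2X'Xw = 2X'(Xw-y)$ can be checked against finite differences; a minimal sketch in numpy (all names are mine):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(20, 3))
    y = rng.normal(size=20)
    w = rng.normal(size=3)

    L = lambda w: (y - X @ w) @ (y - X @ w)   # the loss above
    grad = 2 * X.T @ (X @ w - y)              # the gradient derived above

    eps = 1e-6                                # central finite differences
    fd = np.array([(L(w + eps * e) - L(w - eps * e)) / (2 * eps)
                   for e in np.eye(3)])
    assert np.allclose(grad, fd, atol=1e-4)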
Matrix representation of Heisenberg group | I think the answer is as follows. We have
$$ \begin{pmatrix}
1 & a & c\\
0 & 1 & b\\
0 & 0 & 1\\
\end{pmatrix}
\begin{pmatrix}
1 & a' & c'\\
0 & 1 & b'\\
0 & 0 & 1\\
\end{pmatrix}=
\begin{pmatrix}
1 & a+a' & c+c'+ab'\\
0 & 1 & b+b'\\
0 & 0 & 1\\
\end{pmatrix} $$
which corresponds to $(a,b,c)\circ (a',b',c') = (a + a', b + b', c + c' + ab')$.
And $$ \begin{pmatrix}
1 & 0 & 0 & y \\
x & 1 & y & z \\
0 & 0 & 1 & - x\\
0 & 0 & 0 & 1 \\
\end{pmatrix}
\begin{pmatrix}
1 & 0 & 0 & y' \\
x' & 1 & y' & z' \\
0 & 0 & 1 & - x'\\
0 & 0 & 0 & 1 \\
\end{pmatrix}
=\begin{pmatrix}
1 & 0 & 0 & y+y' \\
x+x' & 1 & y+y' & z + z' + xy'-yx' \\
0 & 0 & 1 & - x-x'\\
0 & 0 & 0 & 1 \\
\end{pmatrix}$$
which corresponds to $(x,y,z).(x',y',z')=(x + x', y + y', z + z' + xy'-yx')$.
It follows that, the matrix representation of $H^3$, if the law in $H^3$ is $(x,y,z) . (x',y',z') = (x + x', y + y', z + z' + xy'-yx')$, is given by
$$\begin{pmatrix}
1 & 0 & 0 & y \\
x & 1 & y & z \\
0 & 0 & 1 & - x\\
0 & 0 & 0 & 1 \\
\end{pmatrix}, \quad x,y,z \in \mathbb R$$
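The claimed correspondence is easy to verify numerically; a minimal sketch in numpy:

    import numpy as np

    def M(x, y, z):
        """The 4x4 representation given above."""
        return np.array([[1, 0, 0,  y],
                         [x, 1, y,  z],
                         [0, 0, 1, -x],
                         [0, 0, 0,  1]], dtype=float)

    x, y, z, xp, yp, zp = 1.0, 2.0, 3.0, 4.0, 5.0, 6.0
    lhs = M(x, y, z) @ M(xp, yp, zp)
    rhs = M(x + xp, y + yp, z + zp + x * yp - y * xp)
    assert np.allclose(lhs, rhs)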
Thank you for any remarks or comments.
Difference of two binomial random variables with identical distribution | From the various answers at the linked question, it looks as if
$$P(|X-Y|=0)={2n \choose n} \frac{1}{2^{2n}}$$
while for positive $z$ $$P(|X-Y|=z)={2n \choose n+z} \frac{1}{2^{2n-1}}$$ |
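Assuming, as in the linked question, that $X$ and $Y$ are independent $\mathrm{Bin}(n,\frac12)$, both formulas are easy to verify by direct convolution; a minimal sketch in Python:

    from math import comb

    n = 5
    pmf = {}  # distribution of |X - Y| by direct convolution
    for x in range(n + 1):
        for y in range(n + 1):
            p = comb(n, x) * comb(n, y) / 4 ** n
            pmf[abs(x - y)] = pmf.get(abs(x - y), 0.0) + p

    assert abs(pmf[0] - comb(2 * n, n) / 2 ** (2 * n)) < 1e-12
    for z in range(1, n + 1):
        assert abs(pmf[z] - comb(2 * n, n + z) / 2 ** (2 * n - 1)) < 1e-12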
Range of convergent of $\int_0^\infty \frac{\arctan(5x)dx}{x^a}$ where $a\in\Bbb R$ | You've identified the two trouble spots, at $0$ and $\infty,$ and written the integral as a sum of integrals to deal with it. Good. And your answer to 1) is fine.
You have the wrong answer to $2).$ Recall that $\dfrac{\arctan u}{u} \to 1$ as $u\to 0.$ Thus $\arctan (5x)$ is like $5x$ for small $x.$ Thus in 2) you're essentially looking at
$$\int_0^1 \frac{x}{x^a}\, dx.$$
Give it another try. |
Discrete Math Informal Proofs Using Mathematical Induction | Base Step: $2 \cdot 3^{1-1} = 2 = 3^1 - 1$
The inductive hypothesis is: $\sum_{n=1}^{k} 2 \cdot 3^{n-1} = 3^k - 1$
We must show that, under the inductive hypothesis, $$3^k - 1 + 2 \cdot 3^k = 3^{k + 1} - 1$$
We verify this as $$3^k - 1 + 2 \cdot 3^k = 3^k(1 + 2) - 1$$
$$= 3^{k+1} - 1$$ |
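As a quick sanity check of the identity $\sum_{n=1}^{k} 2 \cdot 3^{n-1} = 3^k - 1$, a minimal sketch in Python:

    # Verify the closed form for the first few values of k
    for k in range(1, 12):
        assert sum(2 * 3 ** (n - 1) for n in range(1, k + 1)) == 3 ** k - 1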
Number of generators of a finitely generated module | Because $M$ is finitely generated, so are its quotients $M/IM$, so certainly there is an integer $n$ such that for some maximal ideal $I$ we have $\dim M/IM=n$, while for every other maximal ideal $J$, $\dim M/JM\leqslant n$.
It is worth pointing out that one usually looks at localizations of modules to define "rank" (this comes from the geometric interpretation of localization): if $P$ is finitely generated projective, then the localizations of $P$ at primes are free (by Kaplansky), and if they all have the same rank, it makes sense to define the rank of $P$ as this common number. One can prove, in fact, that for a fixed f.g. projective $P$, the map
$$\operatorname{Spec}(R) \longrightarrow \mathbb Z$$
$$ \mathfrak p \mapsto \dim P_{\mathfrak p}$$
is locally constant (this is a version of local freeness), so f.g.p modules have a rank when $R$ is say connected. |
Convergence of difference of series | This is indeed true. We compute
\begin{eqnarray*}
\sum_{m}\left|a_{n,m+1}-a_{n,m}\right| & = & \sum_{m}\left|\sum_{k}\left[p_{n,\left(m+1\right)-k}-p_{n,m-k}\right]p_{n,k}\right|\\
& \leq & \sum_{k}\left[p_{n,k}\cdot\sum_{m}\left|p_{n,\left(m+1\right)-k}-p_{n,m-k}\right|\right]\\
& \overset{\ell=m-k}{=} & \sum_{k}\left[p_{n,k}\sum_{\ell}\left|p_{n,\ell+1}-p_{n,\ell}\right|\right]\\
& \overset{\sum_{k}p_{n,k}=1}{=} & \sum_{\ell}\left|\int_{\left(\ell+1\right)/n}^{\left(\ell+2\right)/n}f\left(y\right)\,{\rm d}y-\int_{\ell/n}^{\left(\ell+1\right)/n}f\left(x\right)\,{\rm d}x\right|\\
& \overset{x=y-\frac{1}{n}}{=} & \sum_{\ell}\left|\int_{\ell/n}^{\left(\ell+1\right)/n}\left[f\left(x+\frac{1}{n}\right)-f\left(x\right)\right]\,{\rm d}x\right|\\
& \leq & \sum_{\ell}\int_{\ell/n}^{\left(\ell+1\right)/n}\left|f\left(x+\frac{1}{n}\right)-f\left(x\right)\right|\,{\rm d}x\\
& = & \int_{\mathbb{R}}\left|f\left(x+\frac{1}{n}\right)-f\left(x\right)\right|\,{\rm d}x\\
& \xrightarrow[n\to\infty]{} & 0.
\end{eqnarray*}
Here, the last step used continuity of translation with respect to
the $L^{1}$ norm, see e.g. Continuity of $L^1$ functions with respect to translation. |
How do I plot a linear, exponential and logarithmic function with same starting points? | If you have two points $(x_0,y_0)$ and $(x_1,y_1)$ that the curves must pass through, and the equations for the quadratic and the logarithmic curves are
$$q(x)=ax^2+bx+c\\
L(x)=\log(\alpha x+\beta)$$
For the quadratic you get two equations
$$y_0=ax_{0}^2+bx_0+c\\
y_1=ax_{1}^2+bx_1+c$$
and three unknowns. Thus one of the parameters can be chosen freely (for example, you can impose $a=1$).
For the logarithmic you have
$$e^{y_0}=\alpha x_0+\beta\\
e^{y_1}=\alpha x_1+\beta$$
Note that I have inverted the logarithm. You obtain again a linear system of 2 equations with 2 variables, which can be solved. I hope it will be useful for you! |
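A minimal sketch in Python solving both systems (the sample points are made-up illustrative values):

    import numpy as np

    x0, y0 = 1.0, 2.0   # made-up sample points
    x1, y1 = 4.0, 3.0

    # Quadratic q(x) = a x^2 + b x + c: fix a = 1, solve for b and c
    a = 1.0
    b, c = np.linalg.solve([[x0, 1.0], [x1, 1.0]],
                           [y0 - a * x0 ** 2, y1 - a * x1 ** 2])

    # Logarithm L(x) = log(alpha x + beta): exponentiate both equations
    alpha, beta = np.linalg.solve([[x0, 1.0], [x1, 1.0]],
                                  [np.exp(y0), np.exp(y1)])

    q = lambda x: a * x ** 2 + b * x + c
    L = lambda x: np.log(alpha * x + beta)
    print(q(x0), q(x1), L(x0), L(x1))   # reproduces y0, y1 for both curves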
Is it possible to solve this equation using the Lambert function | No. Lambert's W function solves only equations of the form $xe^x=\text{constant}$, whereas here we have $xe^x=\text{variable}$.
The orbits of normal subgroup of a transitive group form a block system. | As for your first two questions, remember what you are trying to prove: the orbits of $N$ on $\Omega$ form a block system. By definition, a block system for the action of $G$ on $\Omega$ is a partition of $\Omega$ that is left invariant by $G$. So in order to show that the orbits of $N$ on $\Omega$ form a block system, you need to show that these partition $\Omega$ and are $G$-invariant. It's just by the definition of what you are trying to prove really.
As for the third question, we want to show that the orbits of $N$ are $G$-invariant. That means that if $\Lambda$ is an orbit of $N$ and $g \in G$, then $\Lambda g$ is also an orbit of $N$. You don't ask about this so I suppose that the proof that $\Lambda g$ is a union of orbits of $N$ is clear. Now we need to show that $\Lambda g$ is actually just one orbit. This follows from the fact that $N$ acts transitively on $\Lambda g$. Indeed, suppose that $\Lambda_1, \Lambda_2$ are two orbits contained in $\Lambda g$. Since $N$ acts transitively on $\Lambda g$, given $x_1 \in \Lambda_1$ and $x_2 \in \Lambda_2$, there exists $n \in N$ such that $x_1 n = x_2$. Therefore $x_1$ and $x_2$ are in the same $N$-orbit. So $\Lambda_1 = \Lambda_2$ (recall that two orbits are either equal or disjoint). We conclude that $\Lambda g$ is a union of orbits any two of which are equal, so it is just an orbit. |
Calculate entropy of modified base32 hash | This answer assumes you use all $36$ letters and numbers. I don't know how many letters and numbers are among the $32$. You should be able to update it to reflect your alphabet.
To calculate the entropy, you just need to calculate the number of ids and take the base 2 log. If there are six letters, you have $26^6=308,915,776$ ids. To get the number with numbers included, note that the number with five letters and a number is $\frac {10}{26}$ of that, so the total is $26^6\sum_{i=0}^6 \left(\frac{10}{26}\right)^i=26^6\frac{1-(\frac{10}{26})^7}{1-\frac{10}{26}}\approx 5\times 10^8$. The base $2$ log of this is about $28.9$. This should be compared with $\log_2 36^6 \approx 31.0$, so you are losing about $2.1$ bits of entropy. I assumed you always have letters before numbers. You can reduce this a bit if you allow both orders of numbers and letters.
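The arithmetic can be reproduced directly; a minimal sketch in Python, following the letters-before-numbers assumption above:

    from math import log2

    # Number of ids with i trailing digits and 6 - i leading letters
    total = sum(26 ** (6 - i) * 10 ** i for i in range(7))

    print(total)                        # 501363136, about 5e8
    print(log2(total))                  # about 28.9 bits
    print(log2(36 ** 6))                # about 31.0 bits
    print(log2(36 ** 6) - log2(total))  # roughly 2.1 bits lost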
Optimizing Group Spending | This idea is studied in economics, where it is called the velocity of money.
You should be careful that not all transactions create value in the same way. If you pay me $\$1$ to erase your blackboard, this is equivalent to and creates the same value as if you sell me the dirty blackboard for $\$100$ and then buy it back for $\$101$ once it is clean. However, the total size of the transactions may look different, $\$1$ versus $\$201$. |
Order of cyclic subgroup | You don't need Bézout's lemma. Let $g$ be a generator of $D$. Then
$$k\cdot g \in D[n] \iff (nk)\cdot g = 0 \iff m\mid nk.$$
Now just write $n = \nu d$ and $m = \mu d$ and you see that
$$m\mid nk \iff \mu \mid \nu k \iff \mu \mid k,$$
since $\gcd(\mu,\nu) = 1$. |
Oral exercise from an entrance examination: polynomials and vector spaces | Let $F'$ be a complement of $F $, so that $E $ is the direct sum of $F $ and $F'$. Write $v_j=w_j+z_j $, with $w_j\in F $ and $z_j\in F'$. Then $$f (t)=\sum_jP_j (t)w_j+\sum_jP_j (t)z_j. $$ By removing elements (and changing the polynomials accordingly) if necessary, we may assume that $z_1,\ldots, z_d $ are linearly independent.
Now let $Y=\{t:\ f (t)\in F\} $. Then $$Y=\{t:\ P_1 (t)=\cdots=P_d (t)=0\}. $$ So either $Y=k $ (when $P_1=\cdots=P_d=0$), or $Y $ is finite (as a nonzero polynomial has finitely many roots). This gives the two cases. |
Calculate contour integral | Solve $z^4 = -81$; you will get four roots, so you can write $\frac{1}{z^4 + 81}=\frac{1}{(z-a)(z-b)(z-c)(z-d)}$, where $a,b,c,d$ are the roots of $z^4 = -81$. Then use partial fractions: $\frac{1}{(z-a)(z-b)(z-c)(z-d)} = \frac{A}{z-a}+\frac{B}{z-b}+\frac{C}{z-c}+\frac{D}{z-d}$. Find the values of $A,B,C,D$, substitute them into the equation above, and then integrate using Cauchy's integral formula.