title | upvoted_answer
---|---
Image of Sylow Subgroup under Automorphism | Hints:
The group $\;P\;$ acts on the set $\;Syl_q\;$ of all the Sylow $\;q$-subgroups of $\;G\;$, for each prime $\;q\;$ dividing $\;|G|\;$, by $\;\sigma\cdot Q:=\sigma(Q)\;$ for all $\;Q\in Syl_q\;$. By the orbit-stabilizer theorem, we get that
$$|\mathcal O(Q)|=[P:P_Q]\;,\;\;P_Q=\text{Stab}(Q):=\{\pi\in P\;;\;\;\pi(Q)=Q\;\}$$
If $\;p\mid |G|\;$ then $\;|Syl_p|\equiv1\pmod p\;$, and we're done, as it must be that $\;|\mathcal O(Q)|=1\;$ for some $\;Q\in Syl_p\;$ (why?)
Otherwise (i.e., there exists a prime $\;q\neq p\;$ dividing the group's order), using $\;Syl_q^P:=\{ Q\in Syl_q\;:\;\;\sigma(Q)=Q\;,\;\;\forall\sigma\in P\}\;$ , the fixed point theorem gives us
$$|Syl_q^P|\equiv|Syl_q|\pmod p$$
Finish the proof now. |
Map from schemes to stacks | No, a map $X \to BG$ is determined by a map $f:X \to S$ and a $f^*G$-torsor over $X$, while those maps that factor through $s_0$ correspond to trivial torsors. So, as soon as there are nontrivial torsors over $X$, there are maps that do not factor through $s_0$. |
Show that holomorphic function $f: \mathbb{C} \rightarrow \mathbb{C}$ is constant | Your hypothesis reads: $$f(x+iy) = u(x,y) + i(a u(x,y) + b),$$ that is, the point $f(x+iy)$ as a point in $\Bbb R^2$ belongs to the said line. The Cauchy-Riemann equations read, abbreviating the notation: $$u_x = a u_y, \quad u_y = -au_x.$$ So $u_x = -a^2u_x \implies (1+a^2)u_x = 0$ and... oh, you can conclude now! (if you need one more push, please tell me) |
Weak convergence and convergence almost everywhere | Because $W_0^{1,p}$ is compactly embedded in $L^p$, the embedding operator is compact, so it maps weakly convergent sequences to strongly convergent ones; i.e., a weakly convergent sequence in $W_0^{1,p}$ is strongly convergent in $L^p$. Finally, you can extract a further subsequence which is pointwise a.e. convergent: see
Does convergence in $L^{p}$ implies convergence almost everywhere? |
Showing one-one onto | Hint: a bijective, continuous function is monotonic: it is either strictly increasing, or strictly decreasing.
edit: As mentioned in the comments, monotonicity just gets you injectivity; $\arctan(x)$, for instance, is monotonic but not bijective onto $\mathbb R$. You also need surjectivity: you have to show your function actually does have all of $\mathbb R$ as its range. |
Finding a diagonalizable endomorphism $f : \mathbb{R}^4 \to \mathbb{R}^4$ such that $\text{ker}(f) = \text{im}(f)$ | Here is a very quick proof: show that because $\ker (f) = \operatorname{im}(f)$, it must hold that $f \neq 0$ but $f^2 = 0$. However, the only diagonalizable endomorphism $f$ for which $f^2 = 0$ is the zero endomorphism. So, $f$ cannot be diagonalizable.
Regarding the points that you have written: first of all, you have not proved that $b_1,b_2$ are the only eigenvectors of $f$. Second, showing that the example you tried to make failed to be diagonalizable while satisfying the condition does not prove that there are no such endomorphisms. |
If $X$ is a cone, show that $I(X)$ is homogeneous. | It is enough to show that if a polynomial $p(T_1,... ,T_n)=\sum p_d(T_1,... ,T_n)$ vanishes on the line $l=\mathbb C\cdot a $ where $a=(a_1,...,a_n)\neq 0\in X\subset \mathbb C^n$, each of the homogeneous components $p_d(T_1,... ,T_n)$ of degree $d$ of the polynomial $p(T_1,... ,T_n)$ vanishes on $l$.
But since $p(z\cdot a)=\sum p_d(z\cdot a_1,... ,z\cdot a_n)=\sum z^d\cdot p_d( a_1,... , a_n)=0$ is a polynomial in $z$ vanishing for all $z\in \mathbb C$, that polynomial is zero and we necessarily must have $p_d( a_1,... , a_n)=0$ for all $d $, i.e. all homogeneous components $p_d$ of $p$ vanish on $l$.
We have thus proved that $I(X)$ is indeed a homogeneous ideal. |
Real function injectivity proof | Directly: for $\,0<x<1\;$
$$f(x)=(1-x)e^{\frac x{1-x}\log x}\implies $$
$$f'(x)=-e^{\frac x{1-x}\log x}+(1-x)e^{\frac x{1-x}\log x}\left(\frac{\log x}{(1-x)^2}+\frac1{1-x}\right)=$$
$$=e^{\frac x{1-x}\log x}\left(\frac{\log x}{1-x}\right)<0$$ |
So I have a question. I got a number in 6*1*8 where the total number has to be divisible with 9 and not 8. How do I find * (the two unknown digits)? | The sum of the digits must be divisible by 9 for the whole number to be divisible by 9.
The last 3 digits of the number must be divisible by 8 , for the whole number to be divisible by 8. |
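Combining the two rules, a short brute-force search finds every candidate; here I assume (from the question) that the number has the form $6a1b8$ with unknown digits $a$ and $b$:

```python
# Brute-force search for digits a, b making 6a1b8 divisible by 9 but not by 8.
# Digit sum: 6 + a + 1 + b + 8 = 15 + a + b must be a multiple of 9,
# while the last three digits "1b8" (i.e. 108 + 10b) must NOT be a multiple of 8.
solutions = []
for a in range(10):
    for b in range(10):
        n = int(f"6{a}1{b}8")
        if n % 9 == 0 and n % 8 != 0:
            solutions.append((a, b, n))
```

The digit-sum condition forces $a+b\in\{3,12\}$, and the last-three-digits condition only rules out $b\in\{2,6\}$, so the search returns nine numbers (e.g. $60138$).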
Taylor polynomial of $e^{\frac{x^2}{2}}$ at $x=0$ | Let $f : x \mapsto e^{x^2/2}$.
First, $f(0)=1$ so the first coefficient is $a_0=1$.
Differentiate once:
$$
f'\left(x\right)=xe^{x^2/2}
$$
which equals $0$ at $x=0$, so $a_1=0$.
Again:
$$
f''\left(x\right)=\left(xe^{x^2/2} \right)'=e^{x^2/2}+x\left(xe^{x^2/2}\right)
$$
which equals $1$ at $x=0$, so the coefficient is $a_2=f''(0)/2!=1/2$, as expected.
You can keep doing it to find the first terms of the polynomial. |
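A quick numerical sanity check (a sketch using central finite differences, with a step size chosen by hand) reproduces $a_0=1$, $a_1=0$, $a_2=1/2$. Note also the shortcut: substituting $t=x^2/2$ into $e^t=\sum_k t^k/k!$ gives all coefficients at once, $a_{2k}=\frac1{2^k k!}$.

```python
import math

def f(x):
    return math.exp(x * x / 2)

h = 1e-5
a0 = f(0.0)                                     # f(0)
a1 = (f(h) - f(-h)) / (2 * h)                   # f'(0) via central difference
a2 = (f(h) - 2 * f(0.0) + f(-h)) / (h * h) / 2  # f''(0)/2!
```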
How to convert into system of differential equations | Hint: take $y'=z$ and rearrange. |
Given $a > b+c$, $e>d+f$, and $i>g+h$, can the quantity $a(ei-hf) + b(-di+fg) - c(dh+eg)$ ever be zero? | Consider $\det\left(\begin{array}{ccc} a & b & c\\ d & e & f \\ g & h & i \end{array}\right).$ This gives your expression. From the given condition, the matrix is diagonally dominant, so it's invertible, whence your expression is never zero. |
let $K \subset U \subset X$, $(X,d)$ metric space $U$ open and $K$ compact, prove the exist an $r>0$ such that $d(x,K)\leq r \rightarrow x \in U$ | Hint: Since $U$ is open and $K\subseteq U,$ what can you say for each $x\in U$ (and in particular each $x\in K$)? Don't forget that $K$ is compact, so that any open cover can be reduced to a finite subcover. |
Is there a generalization of "factorization" involving roots for multivariate polynomials? | I think the proper generalization is as follows: if $p=p(X_1,X_2,\ldots,X_n)$ is a polynomial over an algebraically closed field $F$ and $q$ is a nonzero polynomial over $F$ in $n$ variables which is irreducible, and such that $q(X)=0\implies p(X)=0$, then $q$ divides $p$. This follows from Hilbert's Nullstellensatz.
To see that this is a generalization, notice that for polynomials of single variable over algebraically closed fields, the irreducible polynomials are exactly the polynomials of degree $1$ and a polynomial in one variable of degree $1$ has only a single point in its zero set. |
The ring of linear maps is isomorphic to the ring of matrices (finite-dimensional case) | Let $V$ be a finite-dimensional vector space with basis $\beta=\left\{v_1, \dots, v_n\right\}$.
We construct a map $\phi:M_n(\mathbb{K})\rightarrow L(V)$ by mapping a matrix $A$ to the linear transformation $L_A:V\rightarrow V$ defined by $L_A(v)=AX$ where $X$ is given by $\begin{pmatrix}
\lambda_1\\ \vdots \\ \lambda_n
\end{pmatrix}$ where $v=\sum_{i=1}^n\lambda_iv_i$.
Conversely, we construct a map $\psi:L(V)\rightarrow M_n(\mathbb{K})$ as follows:
Let $f\in L(V)$, then for each $i$ there exist $\mu_{j,i}\in \mathbb{K}$ such that $$f(v_i)=\sum_{j=1}^n\mu_{j,i}v_j.$$
In this way we obtain a matrix $f_{\beta}^{\beta}=(\mu_{j,i})_{1\leq i,j\leq n}$, whose $i$-th column holds the coordinates of $f(v_i)$. Hence the map $\psi$ is given by $\psi(f)=f^{\beta}_{\beta}$.
It remains to show that these two maps are inverses of each other.
We first show that $(\psi\circ\phi)(A)=A$. Notice that $L_A(v_i)=Ae_i$. Here $e_i=\begin{pmatrix}
0\\
\vdots\\
1\\
\vdots\\
0
\end{pmatrix}$, the column vector with zeroes everywhere except at the $i$-th position where there is a $1$. Now $Ae_i=\begin{pmatrix}
a_{1i}\\
a_{2i}\\
\vdots\\
a_{ni}
\end{pmatrix}$ is simply the $i$-th column of $A$. It follows that $\mu_{j,i}=a_{j,i}$, so $f^{\beta}_{\beta}=(\mu_{j,i})=(a_{j,i})=A$. This shows that $\psi\circ \phi=Id_{M_n(\mathbb{K})}$.
Can you show $\phi\circ \psi=Id_{L(V)}$ as well?
Once you have figured this out you should do the more general case as well: let $V$ and $W$ be finite-dimensional vector spaces of dimension $n$ and $m$ respectively. Then $L(V,W)\cong M_{m\times n}(\mathbb{K})$. A very important remark is that the isomorphism depends on a choice of bases in both $V$ and $W$. Hence, after choosing bases $\alpha$ and $\beta$ of $V$ and $W$ respectively, any linear map $f:V\rightarrow W$ corresponds uniquely to an $m\times n$ matrix with entries in $\mathbb{K}$.
Understanding the above correspondence is the magical key to understanding everything of basic finite-dimensional linear algebra. All further topics such as eigenvectors, eigenbases, diagonalization and so on become easy once you truly understand this correspondence. |
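The correspondence is concrete enough to compute: the $i$-th column of $\psi(f)$ is just the coordinate vector of $f(v_i)$. A minimal sketch in the standard basis of $\mathbb R^2$ (the particular map $f$ below is an arbitrary illustration, not from the question):

```python
def f(v):
    # a sample linear map: f(v1, v2) = (2*v1 + v2, v1 - v2)
    return [2 * v[0] + v[1], v[0] - v[1]]

def psi(f, n):
    # entry (j, i) of psi(f) is mu_{j,i}: column i holds the coordinates of f(e_i)
    cols = [f([1 if j == i else 0 for j in range(n)]) for i in range(n)]
    return [[cols[i][j] for i in range(n)] for j in range(n)]

def phi(A):
    # phi(A) is the linear map v -> A v
    return lambda v: [sum(A[j][i] * v[i] for i in range(len(v))) for j in range(len(A))]

A = psi(f, 2)   # [[2, 1], [1, -1]]
g = phi(A)      # phi(psi(f)) agrees with f on every vector
```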
Is the image of the intersection of images of an automorphism equal to the intersection? | No, it is not true. It's true that $f[A] \subseteq A$. (If $a \in f^n[X]$ for every $n \geq 0$, then $f(a) \in f^{n+1}[X]$ for every $n \geq 0$, so in every $f^n[X]$ for every $n \geq 1$; and $f^0[X] = X$.) But the reverse containment doesn't have to hold.
Here's a counterexample, with $A \not\subseteq f[A]$. Let
$$
X = \{(m,n) \in \mathbb{Z}^2 \mid 1 \leq m \leq n\} \cup \{(0,n) \mid n \leq 0 \}.
$$
Define $f : X \to X$ by
$$
f(m,n) = \begin{cases}
(m-1,n) & \text{if $m \geq 2$} \\
(0,0) & \text{if $m=1$} \\
(0,n-1) & \text{if $m=0$}
\end{cases}
$$
Now $(0,0) = f^n(n,n)$. But $(0,0) \neq f(x,y)$ for any $(x,y)$ such that $(x,y) \in f^n[X]$ for all $n \geq 0$. Indeed if $f(x,y) = (0,0)$ then $(x,y) = (1,n)$ for some $n \geq 1$, and $(x,y) = f^{n-1}(n,n)$, but $(x,y) \notin f^n[X]$.
If you'd prefer a representation as a directed graph ("dots and arrows") the idea here is that there is a point with an incoming path of length $n$, for every $n \geq 1$, and one single outgoing path. |
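The map is simple enough to implement, and iterating it confirms the claim $(0,0)=f^n(n,n)$, so $(0,0)$ lies in every $f^n[X]$ (a small sketch of the $f$ defined above):

```python
def f(m, n):
    # the map from the counterexample above
    if m >= 2:
        return (m - 1, n)
    if m == 1:
        return (0, 0)
    return (0, n - 1)   # m == 0

def iterate(pt, k):
    for _ in range(k):
        pt = f(*pt)
    return pt
```

For example, `iterate((5, 5), 5)` returns `(0, 0)` while `iterate((5, 5), 4)` returns `(1, 5)`, the last preimage of $(0,0)$ on that chain.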
Finding angle between two vectors . | The angle between any 2 vectors has the range $[0,\pi]$.
Now, the principal range of $\cos^{-1}$ is $[0,\pi]$, while the range of $\sin^{-1}$ is from $[-\frac{\pi}{2},\frac{\pi}{2}]$.
So, $\sin^{-1}$ cannot describe angles from $[\frac{\pi}{2},\pi]$ properly, while $\cos^{-1}$ can.
For example, if $A×B = 0$, then the angle could be either $0$ or $\pi$: you cannot tell whether $A$ and $B$ are parallel or anti-parallel just from the cross product. |
Taylor polynomial and remainder | It is much simpler than what you've done:
Start from Newton's expansion for $(1+u)^\alpha$, with $\alpha=\frac13$ at order $1$, together with Taylor-Lagrange formula:
$$(1+u)^{\tfrac13}= 1+\frac13u-\frac19\frac1{(1+\xi)^{\tfrac53}}u^2\qquad\text{for some $\;\xi\;$ between $0$ and $u$.}$$
Then perform the substitution $u=x^2$:
$$(1+x^2)^{\tfrac13}= \underbrace{1+\frac13x^2}_{T_2(x)}-\frac19\frac1{(1+\xi)^{\tfrac53}}x^4\qquad\text{for some $\;\xi\;$ between $0$ and $x^2$.}$$
Now, as $0< \xi< x^2\le 1$, we know that
$$|f(x)-T_2(x)|\le\frac19x^4\le\frac19|x|^3, \quad\text{so }\; C=\frac19.$$
Furthermore we know the error will be negative. |
Solution of equations involving logarithm and simple equation | Since $c-n = \log(n)+1$, $c-1 = n+\log(n)$, and in turn $e^{c-1}= n e^n$. Thus $n={\rm W}(e^{c-1})$ where W is Lambert W function.
EDIT: Adding details in response to the OP's request.
Put $x=e^{c-1} (> 0)$. Then, there exists a unique (real) number $n=n(x)$ such that $x=ne^n$. For example, if $c=2$, so that $x=e$, then $n=1$ (since $e=1e^1$). As another example, if $c=3$, so that $x=e^2$, then $n \approx 1.5571455989976$; indeed, letting $\hat n=1.5571455989976$, $\hat n e^{\hat n}$ is very close to $e^2$. The unique solution $n=n(x)$ of the equation $x=ne^n$ ($x > 0$) is given by (defined) $n={\rm W}(x)$, where ${\rm W}$ is the Lambert W function (thus, for example, ${\rm W}(e)=1$ and ${\rm W}(e^2) \approx 1.5571455989976$). While the function W is rather complicated, it can be evaluated immediately using WolframAlpha. So, given the constant $c$, just ask WolframAlpha to compute ${\rm W}(e^{c-1})(=n)$ (note that the function W is implemented as ProductLog). |
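If you'd rather not rely on a CAS, ${\rm W}(x)$ for $x>0$ is easy to compute yourself with Newton's method on $g(n)=ne^n-x$ (a sketch; the starting guess $1.0$ is an arbitrary choice that works for moderate $x$):

```python
import math

def lambert_w(x, tol=1e-12):
    # solve n * exp(n) = x for n, with x > 0, by Newton's method
    n = 1.0
    for _ in range(100):
        g = n * math.exp(n) - x
        if abs(g) < tol:
            break
        n -= g / ((n + 1) * math.exp(n))  # g'(n) = (n + 1) e^n
    return n

c = 3
n = lambert_w(math.exp(c - 1))  # about 1.5571455989976, as in the example above
```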
Equivalence in Algebraic Manifolds (Variety) | I will give a few hints for the implications:
$i) \Rightarrow ii):$ If $V$ is a point, then its ideal must be maximal. (...)
$ii) \Rightarrow iii):$ This one is kinda trivial. $K$ is a $K$-vector space over itself.
$iii) \Rightarrow i):$ If the coordinate ring is finite-dimensional, then it is Artinian, hence has dimension zero. |
Is the following proof that $\sqrt2$ is irrational valid? | It is wrong. For instance, from the equality $\frac21=\frac{x^2}{y^2}$, you deduce that $2=x^2$ and that $1=y^2$. Why? What about the equality $\frac1{\sqrt2}=\frac{\sqrt2}2$ (which is valid)? Do you deduce from it that $1=\sqrt2$? |
Expected global clustering coefficient for Erdős–Rényi graph | If there are $3 \binom n3 p^3$ triangles in expectation, and $3 \binom n3 p^2$ connected triples, the global clustering coefficient should approach their ratio (which is $p$).
Of course, naively taking their ratios doesn't work: $\mathbb E[\frac {X}Y]$ is not the same thing as $\frac{\mathbb E[X]}{\mathbb E[Y]}$. This is one of the main challenges in dealing with the expected value of a ratio. Instead, we'll show that both quantities are concentrated around their mean, and proceed that way.
Let $X$ denote the number of triangles in $\mathcal G(n,p)$, counted so that $\mathbb E[X] = 3\binom n3 p^3$: for consistency with connected triplets, count the $3\binom n3$ choices of a potential path $P_3$, each with a $p^3$ chance that both edges of the path and the edge that closes the triangle are present.
Moreover, the number of triangles is $3n$-Lipschitz in the edges of the graph (changing one edge changes the number of triangles by at most $3n$) so by McDiarmid's inequality
$$
\Pr[|X - \mathbb E[X]| \ge n^{2.5}] \le 2 \exp \left(-\frac{2n^5}{\binom n2 (3n)^2}\right) \le 2 e^{-4n/9}.
$$
If we let $Y$ be the number of connected triplets, then the expected number of them is $3\binom n3 p^2$: there are $3\binom n3$ potential copies of $P_3$ in the graph, and each is realized with a probability of $p^2$. This is actually $2n$-Lipschitz in the edges of the graph (each edge is part of at most $2n$ paths on $3$ vertices) but we'll round that up to $3n$-Lipschitz so that the same bound applies.
As a result, with probability $1 - 4 e^{-4n/9}$, both values are within $n^{2.5}$ of their expected value, and so the ratio $\frac{X}{Y}$ satisfies
$$
\frac{3\binom n3 p^3 - n^{2.5}}{3\binom n3 p^2 + n^{2.5}} \le \frac{X}{Y} \le \frac{3\binom n3 p^3 + n^{2.5}}{3\binom n3 p^2 - n^{2.5}}
$$
and therefore $\frac XY = p + O(n^{-0.5})$ in these cases. The remaining cases occur with probability $4e^{-4n/9}$, and the ratio is between $0$ and $1$ even then, so they can affect the expected value by at most $4e^{-4n/9}$ as well. Therefore $\mathbb E[\frac XY] = p + O(n^{-0.5})$, which approaches $p$ as $n \to \infty$. |
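A quick simulation illustrates the concentration in the answer above: the empirical global clustering coefficient $3\cdot\#\{\text{triangles}\}/\#\{\text{connected triples}\}$ lands very close to $p$. This is a sketch with arbitrarily chosen $n$, $p$, and seed:

```python
import random
from itertools import combinations

def global_clustering(n, p, seed=0):
    # sample G(n, p) and return 3 * (#triangles) / (#connected triples)
    rng = random.Random(seed)
    adj = [set() for _ in range(n)]
    for u, v in combinations(range(n), 2):
        if rng.random() < p:
            adj[u].add(v)
            adj[v].add(u)
    triangles = sum(1 for u, v, w in combinations(range(n), 3)
                    if v in adj[u] and w in adj[u] and w in adj[v])
    # connected triples: for each center vertex, choose 2 of its neighbors
    triples = sum(len(a) * (len(a) - 1) // 2 for a in adj)
    return 3 * triangles / triples

r = global_clustering(200, 0.3)
```

For $n=200$ and $p=0.3$ the result is within a few hundredths of $p$, consistent with the $p + O(n^{-0.5})$ estimate.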
Homotopy equivalence of fibres of a Hurewicz fibration | @Kevin Carlson gave the right hint, but I'll flesh out the details. Since $B$ is path-connected, we have that there is a path $\gamma$ in $B$ which starts at $b$ and ends at $b'$. Now consider the following diagram
$\require{AMScd}$
\begin{CD}
p^{-1}(b) @>{g}>> E\\
@V{i_0}VV @VV{p}V\\
p^{-1}(b)\times I @>>{h}> B
\end{CD}
Let $g$ denote the inclusion of the fiber above our point $b$ into the total space. We can see then that the composite $p\circ g$ sends everything to the point $b$. This means that $h(x,0)=b$ for any $x\in p^{-1}(b)$.
Let's define $h(x,t) = \gamma(t)$ to be our path between $b$ and $b'$. We now invoke the fact that $p$ is a Hurewicz fibration to see that it satisfies the homotopy lifting property. That is, there is a map (not necessarily unique) $\widetilde{h}: p^{-1}(b) \times I \to E$ which commutes with the above diagram (I would draw it but AMScd is a horrible package).
Finally, $\widetilde{h}(-,1)$ maps $p^{-1}(b)$ into $p^{-1}(b')$; running the same construction along the reverse path $\bar\gamma$ gives a map the other way, and the two composites are homotopic to the identities, so the fibers are homotopy equivalent. |
Question about the limit definitions of derivative and definite integral | Yes. What you call the "most natural" value is the value that ensures continuity (when possible).
A function is continuous at a point when it has no "jump" there; this is formalized by the $\epsilon/\delta$ neighborhoods approach. When a function is undefined at a point, but nearby values are available, it is a natural choice to assign the value that ensures continuity. This is the very definition of a limit: we expect $f(x)$ to be the same as $f(x\pm\delta)$ with a small $\epsilon$ error.
Take some function defined as $f(x)=2+x$ for $x>0$ and undefined otherwise. If you want to extend the definition to $x=0$, the obvious choice will be $f(0)=2$. This extension is possible because you know the values of the function for points arbitrarily close to $0$. By contrast, an extension of $f$ at $-1$ is not well defined.
This is the exact situation you face when defining the derivative or integral: they can only be defined in terms of limits and are the values that respect continuity. For example, let $f(h)$ be the slope of the chord of the parabola $y=x^2$ between the points $(1,1)$ and $(1+h,(1+h)^2)$: it equals $\frac{\Delta y}{\Delta x}=\frac{1+2h+h^2-1}{1+h-1}=2+h$, for $h>0$. But for $h=0$, this slope is undefined (we are missing a second point to define the straight line). If we are interested in the slope of the tangent, then we use continuity and admit that $f(0)=2$. |
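A numerical illustration of this exact example: the chord slope $f(h)=2+h$ is undefined at $h=0$, but its values for small $h$ pin down the tangent slope $2$.

```python
def chord_slope(h):
    # slope of the chord of y = x^2 between (1, 1) and (1 + h, (1 + h)**2)
    return ((1 + h) ** 2 - 1) / h

# slopes for h = 0.1, 0.01, ..., 1e-6 decrease toward 2
slopes = [chord_slope(10 ** -k) for k in range(1, 7)]
```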
Baire’s Category Theorem counterexample | Yes, they do, because $\mathbb Q \setminus \{a\}$ is open and dense for each $a\in \mathbb Q$. When you take the intersection of these sets for every $a$ you get $\varnothing$; notice that this is a countable intersection, since there are countably many rationals. |
Extended GCD of polynomials | Suppose that $\deg h = \deg g \geq \deg f$ and $h(x) = 1 + g(x)$.
If we have
$$ a(x) f(x) + b(x) g(x) = h(x) $$
then we also have
$$ a(x) f(x) + (b(x) - 1) g(x) = 1 $$
If we require $\deg a < \deg g$ and $\deg b < \deg f$, then there is only one possibility for the values of $a(x)$ and $b(x)$, and we can find $f(x)$ and $g(x)$ so that we must have $\deg a = \deg g - 1$ and $\deg b = \deg f - 1$.
In particular, we cannot suppose that we can find $a(x)$ and $b(x)$ so that $\deg a \leq \deg h - \deg f$ and $b=0$, as you suppose.
I'm fairly certain that if you require $\deg h \geq \deg f + \deg g$, then your conjecture holds true. You can probably shave 1 or 2 off of that lower bound, but my initial thoughts are that you can't do better than that if you want actual guarantees. |
Is this space not normal? | Try separating the closed set $\mathbb{P} = \mathbb{R}\setminus \mathbb{Q}$
from the closed set $\{0\}$. |
Maximum of Random Variables | Let $A:=\left\{\max_{i\leqslant k_n}|X_{ni}|>\varepsilon\right\}$ and $B:=\left\{\sum_{i\leqslant k_n}X_{ni}^2 1_{\{|X_{ni}|>\varepsilon\}}>\varepsilon^2\right\}$.
$A\subseteq B$: if $\omega$ is such that $\max_{i\leqslant k_n}\left|X_{ni} \left(\omega\right)\right|>\varepsilon$, then there exists $i_0\leqslant k_n$ such that $\left|X_{ni_0} \left(\omega\right)\right|>\varepsilon$. Consequently,
$$\sum_{i\leqslant k_n}X_{ni}\left(\omega\right)^2 1_{\{|X_{ni}\left(\omega\right)|>\varepsilon\}}\geqslant
X_{ni_0}\left(\omega\right)^2 1_{\{|X_{ni_0}\left(\omega\right)|>\varepsilon\}}\gt\varepsilon^2.$$
$A^c\subseteq B^c$: if $\omega\in A^c$, then for all $i\leqslant k_n$, $\left|X_{ni} \left(\omega\right)\right|\leqslant\varepsilon$ hence $\sum_{i\leqslant k_n}X_{ni}\left(\omega\right)^2 1_{\{|X_{ni}\left(\omega\right)|>\varepsilon\}}=0$ and $\omega\in B^c$. |
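The two inclusions together say $A=B$, which a brute-force check on random samples also confirms (a sketch with arbitrary sample sizes and $\varepsilon=1$):

```python
import random

def in_A(xs, eps):
    # the event max_i |X_i| > eps
    return max(abs(x) for x in xs) > eps

def in_B(xs, eps):
    # the event sum of X_i^2 over indices with |X_i| > eps exceeds eps^2
    return sum(x * x for x in xs if abs(x) > eps) > eps * eps

rng = random.Random(1)
agree = all(
    in_A(xs, 1.0) == in_B(xs, 1.0)
    for xs in ([rng.uniform(-2, 2) for _ in range(5)] for _ in range(10000))
)
```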
Problem regarding the connectedness of the Topologist's Sine Curve | First color:
$$\color{fuchsia}{Then\ \ \gamma_1(t_0 +\delta)\gt0}$$
This follows from the fact that $t_0$ is the LUB of $\gamma^{-1}(B)$; hence, any $s>t_0$ cannot satisfy $\gamma(s) \in B$. Then it must lie in $A$, where the first coordinate (namely, $\gamma_1$) is greater than $0$.
Second color:
$$\color{fuchsia} {\text{Hence there exists a }n\in \mathbb N\text{ s.t. } \gamma_1 (t_0) < { 2\over {4n+1}} < 1}$$
This must be a typo. I think he meant:
$$ {\text{Hence there exists a }n\in \mathbb N\text{ s.t. } \gamma_1 (t_0) < { 2\over {4n+1}} < \gamma_1(t_0+\delta)}$$
and this follows since $\frac{2}{4n+1} \rightarrow 0$.
Third color:
$$\color{fuchsia}{|\gamma_2(t)-\gamma_2(t_0)|\ge 1}.$$
I think something is missing here. He doesn't know the value of $\gamma_2(t_0)$. We could fix this by dividing into cases: if $\gamma_2(t_0)> 0$, choose the $n$-stuff in the second color step so as to make $\sin (t)=-1$; if $\gamma_2(t_0)<0$, choose the $n$-stuff in the second color step so as to make $\sin(t)=1$. |
Prove that a set is a topology. | Here $X=\mathbb R$. you should check this conditions(A is a subset of $P(\mathbb R))$:
1) $\emptyset \in A$
2)$\mathbb R \in A$
3)closed under finite intersections.
4)closed under arbitrary unions.
since, all the given sets satisfy 1,2, check the conditions 3,4. |
Show that the largest eigenvalue of $A$ lies in the given interval | first things, it is positive definite, by Sylvester's Law of Inertia
The law can also be stated as follows: two symmetric square matrices
of the same size have the same number of positive, negative and zero
eigenvalues if and only if they are congruent $ S ′ = A S A^T \; , \; $ with $A$ nonsingular
$$ Q^T D Q = H $$
$$\left(
\begin{array}{rrrrrr}
1 & 0 & 0 & 0 & 0 & 0 \\
\frac{ 1 }{ 5 } & 1 & 0 & 0 & 0 & 0 \\
\frac{ 1 }{ 5 } & \frac{ 4 }{ 9 } & 1 & 0 & 0 & 0 \\
\frac{ 1 }{ 5 } & - \frac{ 1 }{ 9 } & - \frac{ 1 }{ 13 } & 1 & 0 & 0 \\
\frac{ 1 }{ 5 } & - \frac{ 1 }{ 9 } & - \frac{ 1 }{ 13 } & - \frac{ 3 }{ 10 } & 1 & 0 \\
\frac{ 1 }{ 5 } & - \frac{ 1 }{ 9 } & - \frac{ 1 }{ 13 } & - \frac{ 3 }{ 10 } & - \frac{ 3 }{ 7 } & 1 \\
\end{array}
\right)
\left(
\begin{array}{rrrrrr}
5 & 0 & 0 & 0 & 0 & 0 \\
0 & \frac{ 9 }{ 5 } & 0 & 0 & 0 & 0 \\
0 & 0 & \frac{ 13 }{ 9 } & 0 & 0 & 0 \\
0 & 0 & 0 & \frac{ 10 }{ 13 } & 0 & 0 \\
0 & 0 & 0 & 0 & \frac{ 7 }{ 10 } & 0 \\
0 & 0 & 0 & 0 & 0 & \frac{ 4 }{ 7 } \\
\end{array}
\right)
\left(
\begin{array}{rrrrrr}
1 & \frac{ 1 }{ 5 } & \frac{ 1 }{ 5 } & \frac{ 1 }{ 5 } & \frac{ 1 }{ 5 } & \frac{ 1 }{ 5 } \\
0 & 1 & \frac{ 4 }{ 9 } & - \frac{ 1 }{ 9 } & - \frac{ 1 }{ 9 } & - \frac{ 1 }{ 9 } \\
0 & 0 & 1 & - \frac{ 1 }{ 13 } & - \frac{ 1 }{ 13 } & - \frac{ 1 }{ 13 } \\
0 & 0 & 0 & 1 & - \frac{ 3 }{ 10 } & - \frac{ 3 }{ 10 } \\
0 & 0 & 0 & 0 & 1 & - \frac{ 3 }{ 7 } \\
0 & 0 & 0 & 0 & 0 & 1 \\
\end{array}
\right)
= \left(
\begin{array}{rrrrrr}
5 & 1 & 1 & 1 & 1 & 1 \\
1 & 2 & 1 & 0 & 0 & 0 \\
1 & 1 & 2 & 0 & 0 & 0 \\
1 & 0 & 0 & 1 & 0 & 0 \\
1 & 0 & 0 & 0 & 1 & 0 \\
1 & 0 & 0 & 0 & 0 & 1 \\
\end{array}
\right)
$$
Next, since $7I - H$ is also positive definite while $6I - H$ is indefinite, the largest eigenvalue lies between $6$ and $7$:
$$ Q_7^T D_7 Q_7 = 7I-H $$
$$\left(
\begin{array}{rrrrrr}
1 & 0 & 0 & 0 & 0 & 0 \\
- \frac{ 1 }{ 2 } & 1 & 0 & 0 & 0 & 0 \\
- \frac{ 1 }{ 2 } & - \frac{ 1 }{ 3 } & 1 & 0 & 0 & 0 \\
- \frac{ 1 }{ 2 } & - \frac{ 1 }{ 9 } & - \frac{ 1 }{ 6 } & 1 & 0 & 0 \\
- \frac{ 1 }{ 2 } & - \frac{ 1 }{ 9 } & - \frac{ 1 }{ 6 } & - \frac{ 1 }{ 8 } & 1 & 0 \\
- \frac{ 1 }{ 2 } & - \frac{ 1 }{ 9 } & - \frac{ 1 }{ 6 } & - \frac{ 1 }{ 8 } & - \frac{ 1 }{ 7 } & 1 \\
\end{array}
\right)
\left(
\begin{array}{rrrrrr}
2 & 0 & 0 & 0 & 0 & 0 \\
0 & \frac{ 9 }{ 2 } & 0 & 0 & 0 & 0 \\
0 & 0 & 4 & 0 & 0 & 0 \\
0 & 0 & 0 & \frac{ 16 }{ 3 } & 0 & 0 \\
0 & 0 & 0 & 0 & \frac{ 21 }{ 4 } & 0 \\
0 & 0 & 0 & 0 & 0 & \frac{ 36 }{ 7 } \\
\end{array}
\right)
\left(
\begin{array}{rrrrrr}
1 & - \frac{ 1 }{ 2 } & - \frac{ 1 }{ 2 } & - \frac{ 1 }{ 2 } & - \frac{ 1 }{ 2 } & - \frac{ 1 }{ 2 } \\
0 & 1 & - \frac{ 1 }{ 3 } & - \frac{ 1 }{ 9 } & - \frac{ 1 }{ 9 } & - \frac{ 1 }{ 9 } \\
0 & 0 & 1 & - \frac{ 1 }{ 6 } & - \frac{ 1 }{ 6 } & - \frac{ 1 }{ 6 } \\
0 & 0 & 0 & 1 & - \frac{ 1 }{ 8 } & - \frac{ 1 }{ 8 } \\
0 & 0 & 0 & 0 & 1 & - \frac{ 1 }{ 7 } \\
0 & 0 & 0 & 0 & 0 & 1 \\
\end{array}
\right)
= \left(
\begin{array}{rrrrrr}
2 & - 1 & - 1 & - 1 & - 1 & - 1 \\
- 1 & 5 & - 1 & 0 & 0 & 0 \\
- 1 & - 1 & 5 & 0 & 0 & 0 \\
- 1 & 0 & 0 & 6 & 0 & 0 \\
- 1 & 0 & 0 & 0 & 6 & 0 \\
- 1 & 0 & 0 & 0 & 0 & 6 \\
\end{array}
\right)
$$
$$ Q_6^T D_6 Q_6 = 6I - H $$
$$\left(
\begin{array}{rrrrrr}
1 & 0 & 0 & 0 & 0 & 0 \\
- 1 & 1 & 0 & 0 & 0 & 0 \\
- 1 & - \frac{ 2 }{ 3 } & 1 & 0 & 0 & 0 \\
- 1 & - \frac{ 1 }{ 3 } & - 1 & 1 & 0 & 0 \\
- 1 & - \frac{ 1 }{ 3 } & - 1 & - \frac{ 3 }{ 2 } & 1 & 0 \\
- 1 & - \frac{ 1 }{ 3 } & - 1 & - \frac{ 3 }{ 2 } & 3 & 1 \\
\end{array}
\right)
\left(
\begin{array}{rrrrrr}
1 & 0 & 0 & 0 & 0 & 0 \\
0 & 3 & 0 & 0 & 0 & 0 \\
0 & 0 & \frac{ 5 }{ 3 } & 0 & 0 & 0 \\
0 & 0 & 0 & 2 & 0 & 0 \\
0 & 0 & 0 & 0 & - \frac{ 5 }{ 2 } & 0 \\
0 & 0 & 0 & 0 & 0 & 20 \\
\end{array}
\right)
\left(
\begin{array}{rrrrrr}
1 & - 1 & - 1 & - 1 & - 1 & - 1 \\
0 & 1 & - \frac{ 2 }{ 3 } & - \frac{ 1 }{ 3 } & - \frac{ 1 }{ 3 } & - \frac{ 1 }{ 3 } \\
0 & 0 & 1 & - 1 & - 1 & - 1 \\
0 & 0 & 0 & 1 & - \frac{ 3 }{ 2 } & - \frac{ 3 }{ 2 } \\
0 & 0 & 0 & 0 & 1 & 3 \\
0 & 0 & 0 & 0 & 0 & 1 \\
\end{array}
\right)
= \left(
\begin{array}{rrrrrr}
1 & - 1 & - 1 & - 1 & - 1 & - 1 \\
- 1 & 4 & - 1 & 0 & 0 & 0 \\
- 1 & - 1 & 4 & 0 & 0 & 0 \\
- 1 & 0 & 0 & 5 & 0 & 0 \\
- 1 & 0 & 0 & 0 & 5 & 0 \\
- 1 & 0 & 0 & 0 & 0 & 5 \\
\end{array}
\right)
$$ |
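The pivots of the $LDL^T$ factorizations above can be recomputed in exact rational arithmetic: all pivots of $7I-H$ come out positive while $6I-H$ produces a negative pivot, which is exactly the claimed bracketing. A sketch using Python's `fractions`:

```python
from fractions import Fraction

H = [[5, 1, 1, 1, 1, 1],
     [1, 2, 1, 0, 0, 0],
     [1, 1, 2, 0, 0, 0],
     [1, 0, 0, 1, 0, 0],
     [1, 0, 0, 0, 1, 0],
     [1, 0, 0, 0, 0, 1]]

def pivots(M):
    # diagonal of D in the LDL^T factorization, via symmetric Gaussian elimination
    A = [[Fraction(x) for x in row] for row in M]
    n = len(A)
    d = []
    for k in range(n):
        d.append(A[k][k])
        for i in range(k + 1, n):
            r = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= r * A[k][j]
    return d

def shifted(t):
    # the matrix t*I - H
    return [[(t if i == j else 0) - H[i][j] for j in range(6)] for i in range(6)]
```

Here `pivots(H)` reproduces the diagonal $5,\frac95,\frac{13}9,\frac{10}{13},\frac7{10},\frac47$ displayed above.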
Prove that $\int_{B(x,r)}|\nabla u|^2\leq \frac{C}{r^2}\int_{B(x,2r)}|u|^2$ | Hint
Using brute force :
$$\int_{B(x,2r)}u^2|\nabla \xi|^2-\int_{B(x,2r)}u\xi^2 f(u)\leq \int_{B(x,2r)}u^2|\nabla \xi|^2+\int_{B(x,2r)}|u|\xi^2 |f(u)|
$$$$\leq \int_{B(x,2r)}u^2|\nabla \xi|^2+\int_{B(x,2r)}|u|^2\leq\left(1+\frac{C}{r^2}\right)\int_{B(x,2r)}|u|^2.$$
Using the fact that $\Omega $ is bounded, you can conclude. |
simplex method: zero nonbasic variables, zero the leaving variable | Indeed, this has to do with the fact that we are pivoting from one extreme point to another. When you are on an extreme point, necessarily some of your variables equal $0$; do you see why?
In two dimensions, you need at least two lines to define a point. In terms of linear programming, an extreme point is the intersection of two active constraints. And for a constraint to be active, necessarily the corresponding slack variable equals $0$. So in two dimensions, you need at least two variables with value $0$ to be on an extreme point (and the other variables to be nonnegative). |
Changing limit and derivative operator | You can't take the derivative in a limit. But the way a continuously differentiable map can have a limit while the derivative doesn't. Example: $f(x) = 1+ \frac{\sin x^2}{x}$ (defined for $x>0$).
However, you can solve the problem in following way.
Suppose that $a >0$; then for $x$ large enough, say $x\ge M >0$, you have $f^\prime(x) \ge a/2$. Applying the Mean Value Theorem, $f(x) \ge \frac a2(x-M) + f(M)$ for $x \ge M$, which implies that $\lim\limits_{x \to \infty} f(x) = \infty$, in contradiction with $\lim\limits_{x \to \infty} f(x)=1$.
You can proceed in a similar way for $a<0$.
Hence the only option is $a=0$. |
Limit of square root function | HINT: multiply the given term by $$\frac{\sqrt{x^2+5x+2}+\sqrt{x^2+x+1}}{\sqrt{x^2+5x+2}+\sqrt{x^2+x+1}}$$ |
Size of factors in number ring | If $x$ is even in $z=x+y\sqrt 2$ then $z$ is a multiple of $\sqrt 2$, hence $\rho(z)\le 1$.
On the other hand, if $|N(z)|=|x^2-2y^2|$ is the square of a prime $p$ (and $x+y\sqrt 2$ is not a prime in $R$), then any nontrivial factor must satisfy $N(a+b\sqrt2)=a^2-2b^2=\pm p$, which implies $a^2\ge p$ or $b^2\ge \frac p2$, hence $||a+b\sqrt 2||\ge \sqrt{\frac p2}$ and finally $\rho(z)\ge \sqrt{\frac p2}$. |
Related rate volume increasing | $$\frac{dV}{dt} = \text{area} \cdot \frac{dh}{dt}$$
Solve for $dh/dt$. |
Strange eigenvector of a transition probability matrix | First, there is no such thing as “the” eigenvector, since any non-zero scalar multiple of an eigenvector is also an eigenvector for the same eigenvalue.
Clearly, the eigenvector of $1$ that you found and the one the author uses are scalar multiples of each other, so they are equivalent. The author has normalized it so that the sum of its components is $1$, i.e., so that it’s a probability vector that turns out to be the limiting state of the process represented by the matrix. |
$\int_{0}^{\infty}(-1)^{[x^2]}$ converge? | For integer $k\ge1$,
$$
\int_{\sqrt{k-\frac12}}^{\sqrt{k+\frac12}}(-1)^{\left[x^2\right]}\,\mathrm{d}x=(-1)^k\left(\sqrt{k+\frac12}-\sqrt{k-\frac12}\right)
$$
Summing yields
$$
\begin{align}
\int_0^{\sqrt{n+\frac12}}(-1)^{\left[x^2\right]}\,\mathrm{d}x
&=\sqrt{\frac12}+\sum_{k=1}^n(-1)^k\left(\sqrt{k+\frac12}-\sqrt{k-\frac12}\right)\\
&=\sqrt{\frac12}+\sum_{k=1}^n(-1)^k\frac1{\sqrt{k+\frac12}+\sqrt{k-\frac12}}
\end{align}
$$
which converges by the Alternating Series Test. |
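The partial sums are easy to tabulate, and the alternating-series error bound (the first omitted term) shows how slowly they settle. A small numerical sketch:

```python
import math

def partial(n):
    # integral of (-1)^[x^2] from 0 to sqrt(n + 1/2), via the telescoped sum above
    s = math.sqrt(0.5)
    for k in range(1, n + 1):
        s += (-1) ** k / (math.sqrt(k + 0.5) + math.sqrt(k - 0.5))
    return s

# by the alternating series test, |partial(m) - partial(n)| for m > n
# is at most the first omitted term a_{n+1}
gap = abs(partial(2000) - partial(1000))
bound = 1 / (math.sqrt(1001.5) + math.sqrt(1000.5))
```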
let $a\sim b$ iff for some integer $k$, $a^k = b^k$ | Note that $a^{mn}=b^{mn}$ and $b^{nm}=c^{nm}$.
Remark: The relation is uninteresting if we allow exponents to be $0$. |
Geometric intuition on $\langle x, A^\top y\rangle = \langle y, Ax\rangle$ | Use the singular value decomposition $A=P\Sigma Q$ where $P$ and $Q$ are orthogonal, and $\Sigma$ is diagonal. Regard $A$ as acting on $V$ and $A^T$ as acting on the isometric dual space $V^*$. The SVDs of $A$ and $A^T$ make it clear that the "geometry" of the action of one is the same as the action of its transpose partner on the dual. The equality $\langle x,A^Ty\rangle=\langle y,Ax\rangle$ simply unravels this via the inner product, which of course provides the isometry between the two spaces. |
Suppose f is continuously differentiable at least twice with the condition that f is an even function. | This is not true in general.
Take the even function defined for $x \ge 0$ by $f(x)=x^3$. |
Solving X'=AX with a Repeated root | The solution you wrote is correct. On a minor note, the nullspace is not what you wrote but the span of $(-2,1,0)$ and $(-1,0,1)$.
The nontrivial case (not in this problem) occurs when the dimension of $N(A-\lambda I)$ does not agree with the multiplicity, for instance you have eigenvalues $1,1,5$ but $N(A- 1I)$ has dimension one. In that case you need to look into the generalized eigenspaces. |
Contour integration $\log(x)/(1-x^8)$ | Hint:
Consider a circular-sector contour $\Gamma=\Gamma_1\cup\Gamma_2\cup\Gamma_3$ of opening angle $\pi/8$: $\Gamma_1$ along the positive real axis (with a small indent of radius $\varepsilon$ around the pole at $z=1$), $\Gamma_2$ the arc of radius $R$, and $\Gamma_3$ the ray at angle $\pi/8$.
It is trivial to show that
$$\lim_{R\to\infty}\int_{\Gamma_2}\frac{\ln(z)}{1-z^8}~\mathrm dz=0$$
$$\lim_{(\varepsilon,R)\to(0^+,\infty)}\int_{\Gamma_1}\frac{\ln(z)}{1-z^8}~\mathrm dz=\int_0^\infty\frac{\ln(z)}{1-z^8}~\mathrm dz$$
$$\lim_{(\varepsilon,R)\to(0^+,\infty)}\int_{\Gamma_3}\frac{\ln(z)}{1-z^8}~\mathrm dz=-e^{\pi i/8}\int_0^\infty\frac{\ln(z)+\frac{i\pi}8}{1+z^8}~\mathrm dz$$
and the indent likewise tends to zero. Thus, we have
$$\int_0^\infty\frac{\ln(z)}{1-z^8}~\mathrm dz=e^{\pi i/8}\int_0^\infty\frac{\ln(z)+\frac{i\pi}8}{1+z^8}~\mathrm dz+\oint_\Gamma\frac{\ln(z)}{1-z^8}~\mathrm dz$$
The right integral may then be dealt with using the residue theorem.
Now consider $f(z)=\frac{[\ln(z)]^2}{1+z^8}$ and the keyhole contour with $C=C_1\cup C_2\cup C_3\cup C_4$ and $r,R$ being the inner radius and outer radius respectively.
We then have:
$$\lim_{(r,R)\to(0^+,\infty)}\int_{C_2}f(z)~\mathrm dz=\lim_{(r,R)\to(0^+,\infty)}\int_{C_4}f(z)~\mathrm dz=0$$
$$\lim_{(r,R)\to(0^+,\infty)}\int_{C_1}f(z)~\mathrm dz=\int_0^\infty\frac{[\ln(z)]^2}{1+z^8}~\mathrm dz$$
$$\lim_{(r,R)\to(0^+,\infty)}\int_{C_3}f(z)~\mathrm dz=-\int_0^\infty\frac{[\ln(z)+2\pi i]^2}{1+z^8}~\mathrm dz$$
Adding these together, we find that
$$\begin{align}\lim_{(r,R)\to(0^+,\infty)}\oint_Cf(z)~\mathrm dz&=\int_0^\infty\frac{[\ln(z)]^2-[\ln(z)+2\pi i]^2}{1+z^8}~\mathrm dz\\&=\int_0^\infty\frac{-4\pi i\ln(z)+4\pi^2}{1+z^8}~\mathrm dz\end{align}$$
Thus,
$$\int_0^\infty\frac{\ln(z)}{1+z^8}~\mathrm dz=-\frac1{4\pi}\Im\left[\lim_{(r,R)\to(0^+,\infty)}\oint_Cf(z)~\mathrm dz\right]$$
$$\int_0^\infty\frac1{1+z^8}~\mathrm dz=\frac1{4\pi^2}\Re\left[\lim_{(r,R)\to(0^+,\infty)}\oint_Cf(z)~\mathrm dz\right]$$
Now apply residue theorem and you are done! |
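As a sanity check on the real-part identity, the residue computation should reproduce the standard value $\int_0^\infty\frac{\mathrm dz}{1+z^8}=\frac{\pi}{8\sin(\pi/8)}$ (the $n=8$ case of $\int_0^\infty\frac{\mathrm dx}{1+x^n}=\frac{\pi}{n\sin(\pi/n)}$), which composite Simpson quadrature confirms. This is a numerical sketch; truncating the integral at $z=50$ costs only about $50^{-7}/7$:

```python
import math

def simpson(g, a, b, n):
    # composite Simpson's rule with n (even) subintervals
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += g(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

approx = simpson(lambda x: 1 / (1 + x ** 8), 0.0, 50.0, 200_000)
exact = math.pi / (8 * math.sin(math.pi / 8))
```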
Physicist trying to understand GIT quotient | Apparently $V/G$ is a very badly behaved space. I do not know really why though. I can imagine that there might be some singularities but can they not be resolved e.g. by blowing up? Also, why sometimes this space is not Hausdorff?
The simplest way to think about this is just to consider an example, and the best one is probably the following:
Consider the action of $\mathbb C^* = \mathbb C -0$ on $\mathbb C^2$, where the action is given by
$$(x,y) \mapsto (\lambda x, \lambda^{-1} y) $$
What are the orbits for this action? There are the orbits of the form $xy = c \neq 0$ for any nonzero complex number $c$. Then there are the axial orbits $\{ (x,0) : x \neq 0\}$ and $\{ (0,y): y \neq 0\}$. Finally there is the zero orbit, which just contains one point $0$.
The vast majority of the orbits are of the first type, which suggests the quotient $\mathbb C^2 /\mathbb C^*$ should be $\mathbb C$, but then the question remains: what happens to the other orbits? The axial orbits and the zero orbit all lie arbitrarily close to one another (a sequence of points in an axial orbit can converge to zero, but zero is not in that orbit). Therefore the space obtained by taking the quotient naively would be $\mathbb C$, but with three copies of the point $0$, which is a non-Hausdorff space. Since we are quotienting a variety, we would hope to get another variety, and this is clearly not one.
GIT deals with this by declaring any orbit which contains zero in the closure to be unstable, and the quotient is defined only on the (poly)stable points (I don't want to go into what stability means, since it's a bit complicated and there are multiple competing definitions. I'll link some stuff to read if you want to know more at the bottom.)
To me this is the space of prime ideals that are invariant under $G$. But I do not see how exactly this is related to the original space we wanted to construct. It seems quite different actually.
Intuitively what is this space $V//G$ and why is it useful?
Again, best to think of an example. In the example above, the polynomial ring associated to the variety $\mathbb C^2$ is the whole polynomial ring in two variables $\mathbb C[x,y]$. Then under the action above, a polynomial $f(x,y)$ gets sent to $f(\lambda x, \lambda^{-1} y)$, and we see therefore that the polynomial $f = xy$ is invariant under the action of $\mathbb C^*$. It's not hard to show that all invariant polynomials are generated by this one, so the invariant ring is
$$ \mathbb C[x,y]^{\mathbb C^*} = \mathbb C[xy]$$
It's clear then that
$$\operatorname{Spec}(\mathbb C[x,y]^{\mathbb C^*}) = \operatorname{Spec}(\mathbb C[xy]) = \mathbb C,$$
and in this we see that the axial orbits and the zero orbit all lie in the same invariant class ($xy =0$); hence the GIT quotient treats all three as equivalent.
(Try and do this example again for yourself but with a different space and a different action. It's a good exercise. The only way to get your head around this stuff is lots of examples in my opinion.)
As for why the GIT quotient is useful: well, it's the correct quotient to use in algebraic geometry. Since taking quotients is so common in geometry, it's no surprise that mathematicians want a good theory about how to do it. Most interesting to me personally is how it relates to the symplectic reduction through the Kempf-Ness theorem. That's projective GIT, and that's the really interesting bit to me. There's also an infinite-dimensional analogue of the Kempf-Ness theorem that concerns the theory of connections on principal $U(n)$-bundles up to gauge equivalence.
Anyway some resources. I learnt GIT and symplectic reduction from the notes by Richard Thomas. There's also a book by Dolgachev on Invariant theory that's pretty good for an algebraic perspective, and the original GIT was of course found in the book by Mumford. I believe the latest edition has some stuff on symplectic reduction as well. There's also a book on invariants and moduli by Mukai which is simply brilliant. I'm currently writing some stuff about this for my master's project, I'll link it here when I'm done. |
Find all natural numbers n such that $7n^3 < 5^n$ | Base Case: For $n=4$, we have $7(4)^3 = 448 < 625 = 5^4$, which works.
Induction Hypothesis: Assume that the claim holds true for all $n \in \{4,\ldots,k\}$, where $k>3$.
It remains to prove the inequality true for $n=k+1$. Indeed, observe that:
\begin{align*}
7(k+1)^3 &= 7(k^3 + 3k^2 + 3k + 1) \\
&< 7(k^3 + 3k^2 + 9k + 27) \\
&= 7(k^3 + (3)k^2 + (3)^2k + (3)^3) \\
&< 7(k^3 + (k)k^2 + (k)^2k + (k)^3) & \text{since }3 < k\\
&= 7(4k^3) \\
&= 4(7k^3) \\
&< 5(7k^3) \\
&< 5(5^k) & \text{by the induction hypothesis}\\
&= 5^{k+1}\\
\end{align*}
as desired. This completes the induction. |
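For a quick empirical check of the base case and the small exceptions (illustration only):

```python
# Brute-force check: 7n^3 < 5^n holds from n = 4 onward (here up to 29)
# and fails exactly at n = 1, 2, 3.
holds = [n for n in range(1, 30) if 7 * n**3 < 5**n]
fails = [n for n in range(1, 30) if 7 * n**3 >= 5**n]
print(fails)  # [1, 2, 3]
assert holds[0] == 4
```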
Find the generalized eigenspaces corresponding to the distinct eigenvalues of T | Hint:
Your linear transformation can be realized as the following matrix:
$$\begin{bmatrix}0&-1\\1&0\end{bmatrix}.$$ |
Euler's transformation to derive that $\sum\limits_{n=1}^{\infty}\frac{1}{n^2}=\sum\limits_{n=1}^{\infty}\frac{3}{n^2\binom{2n}{n}}$ | A proof by pure creative telescoping has been linked in the comments above, but there also is an interesting proof that comes from manipulations of a logarithmic integral.
If we set
$$ I = -\int_{0}^{\pi/2}\log\left(1-\frac{1}{4}\sin^2 x\right)\frac{dx}{\sin x}$$
by expanding $-\log\left(1-\frac{1}{4}\sin^2 x\right)$ as a Taylor series in $\sin x$ we get that
$$\begin{eqnarray*}
\color{blue}{I} = \sum_{n\geq 1}\frac{1}{n 4^n}\int_{0}^{\pi/2}\sin(x)^{2n-1}\,dx = \sum_{n\geq 1}\frac{1}{4n(2n-1)\binom{2n-2}{n-1}}=\color{blue}{ \frac{1}{2}\sum_{n\geq 1}\frac{1}{n^2\binom{2n}{n}}}
\end{eqnarray*}$$
and by applying the tangent half-angle substitution we get that:
$$ I = \int_{0}^{1}-\log\left(1-\left(\frac{t}{1+t^2}\right)^2\right)\frac{dt}{t} $$
where the rational function $1-\left(\frac{t}{1+t^2}\right)^2$ can be expressed through products and ratios of polynomials of the form $1-t^m$. Eureka, since:
$$ I_m=-\int_{0}^{1}\frac{\log(1-t^m)}{t}\,dt = \sum_{n\geq 1}\frac{1}{n}\int_{0}^{1}t^{mn-1}\,dt = \sum_{n\geq 1}\frac{1}{mn^2}=\frac{\zeta(2)}{m}$$
implies:
$$ \color{blue}{I} = \left(I_2-2 I_4+I_6\right)=\color{blue}{\frac{1}{6}\zeta(2)}$$
as wanted.
Since we have
$$ 2\arcsin^2(x) = \sum_{n\geq 1}\frac{(2x)^{2n}}{n^2\binom{2n}{n}} $$
as proved here, the acceleration formula implies that $\zeta(2)=6\arcsin^2\left(\frac{1}{2}\right)=\frac{\pi^2}{6}$. |
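The accelerated series converges geometrically (the terms decay roughly like $4^{-n}$), which is easy to confirm numerically:

```python
import math

# Numerical check of the identity: sum of 3/(n^2 * C(2n, n)) equals pi^2/6.
total = sum(3 / (n * n * math.comb(2 * n, n)) for n in range(1, 40))
assert abs(total - math.pi**2 / 6) < 1e-12
print(total)
```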
Coins with Combinations | Let $W(n)$ be the number of ways both of you get $n$ coins. Then what you want is $\sum_{k=0}^6 W(k)$.
Now, how many ways are there for both of you to get $n$ heads? There are $\binom{6}{n}$ ways for you to get $n$ heads and $\binom{6}{n}$ ways for your friend to get $n$ heads, therefore $W(n)=\binom{6}{n}^2$.
So what you want is $\sum_{n=0}^6\binom{6}{n}^2$.
Now take a look and see that $\binom{6}{n}^2=\binom{6}{n}\cdot \binom{6}{6-n}$, where the first factor can be viewed as the number of ways to walk on a lattice (moving only up and right) to the point $(n,6-n)$, and the second factor is the number of ways to get from that point to the point $(6,6)$. Since every path on the lattice with moves only up and right passes through that diagonal exactly once, the sum of all these terms is the number of paths from $(0,0)$ to $(6,6)$ moving only up and right on the lattice. But this is also the number of ways to choose $6$ moves up out of $12$ moves total, namely $\binom{12}{6}$. Therefore $\sum_{n=0}^6\binom{6}{n}^2=\binom{12}{6}=924$ |
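The lattice-path identity is easy to confirm directly:

```python
import math

# Direct check of the Vandermonde-type identity used above.
total = sum(math.comb(6, n) ** 2 for n in range(7))
assert total == math.comb(12, 6) == 924
print(total)  # 924
```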
find the equation of the tangent line to the curve 8/x^2+x+2 at x=2 | The slope of a curve at a certain point is the derivative evaluated at that point.
Let $y=\frac{8}{x^2}+x+2$
Find its derivative;
$$y'=-\frac{16}{x^3}+1$$
Then evaluate $y'(2)$;
$$y'(2)=-\frac{16}{8}+1=-1$$
The equation of the tangent line is $$y-y(2)=y'(2)(x-2).$$ Since $y(2)=\frac{8}{4}+2+2=6$, this gives $y=-x+8$. |
questions about a sum of logarithmic integrals | This is a very different way to look at this sum, using number theory and the prime number theorem. I hope this approach explains some context where such sums may appear; it is not meant to be the shortest solution (for that, just evaluate the sum), but it is motivated by looking at the average of $\omega(n)$, the number of distinct prime factors of $n$.
Consider $\omega(n)=\sum_{p|n}1$, and look at $\sum_{n\leq x}\omega(n).$ Then $$\sum_{n\leq x}\omega(n)=\sum_{n\leq x}\sum_{p|n}1=\sum_{p\leq x}\sum_{k\leq\frac{x}{p}}1=\sum_{p\leq x}\left[\frac{x}{p}\right].$$ Using the hyperbola method, the middle term also is $$\sum_{p\leq x}\sum_{k\leq\frac{x}{p}}1=\sum_{pk\leq x}1=\sum_{k\leq\sqrt{x}}\sum_{p\leq\frac{x}{k}}1+\sum_{k>\sqrt{x}}\sum_{p\leq\frac{x}{k}}1$$ $$=\sum_{k\leq\sqrt{x}}\pi\left(\frac{x}{k}\right)+\sum_{p\leq\sqrt{x}}\left(\left[\frac{x}{p}\right]-\left[\sqrt{x}\right]\right).$$ Now, by the prime number theorem, $\pi\left(\frac{x}{k}\right)=\text{li}\left(\frac{x}{k}\right)+O\left(\frac{x}{k}e^{-c\sqrt{\log\frac{x}{k}}}\right).$ Rewriting again, and taking into account that $\left[x\right]=x+O(1)$ this is $$\sum_{k\leq\sqrt{x}}\text{li}\left(\frac{x}{k}\right)+x\sum_{p\leq\sqrt{x}}\frac{1}{p}+O\left(\frac{x}{\log x}+\sum_{k\leq\sqrt{x}}\frac{x}{k}e^{-c\sqrt{\log\frac{x}{k}}}\right).$$ Hence $$\sum_{n\leq x}\omega(n)=\sum_{p\leq x}\left[\frac{x}{p}\right]=\sum_{k\leq\sqrt{x}}\text{li}\left(\frac{x}{k}\right)+x\sum_{p\leq\sqrt{x}}\frac{1}{p}+O\left(\frac{x}{\log x}\right).$$ The middle sum is $$\sum_{p\leq x}\left[\frac{x}{p}\right]=x\sum_{p\leq x}\frac{1}{p}+O\left(\frac{x}{\log x}\right)$$ so we conclude $$\sum_{k\leq\sqrt{x}}\text{li}\left(\frac{x}{k}\right)=x\sum_{\sqrt{x}<p\leq x}\frac{1}{p}+O\left(\frac{x}{\log x}\right)=x\left(\log\log x-\log\log\sqrt{x}\right)+O\left(\frac{x}{\log x}\right)$$ $$=x\log2+O\left(\frac{x}{\log x}\right).$$
Remark: Also, why are you looking at the double logarithm? $\log \log x$ will be extremely close to $\log \log y$ for almost all small $x$ and $y$. For example, $\log \log(110 000) -\log \log (100 000)=0.0082..$ |
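The opening identity $\sum_{n\leq x}\omega(n)=\sum_{p\leq x}\left[\frac{x}{p}\right]$ is exact for every $x$, which a short brute-force script (illustration only) confirms:

```python
# Brute-force verification of: sum_{n<=x} omega(n) = sum_{p<=x} floor(x/p).
def primes_up_to(x):
    sieve = [True] * (x + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(x**0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [p for p in range(2, x + 1) if sieve[p]]

x = 1000
ps = primes_up_to(x)
omega_sum = sum(sum(1 for p in ps if n % p == 0) for n in range(1, x + 1))
floor_sum = sum(x // p for p in ps)
assert omega_sum == floor_sum
print(omega_sum)
```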
Rado–Kneser–Choquet Theorem proof | The theorem you want is on page 9 - the extension of the analytic argument principle to sense preserving harmonic ones; then you just apply it to $f-a$ for any $a$ not in the image of the Jordan curve $f(\mathbb{T})$ since then $f-a$ satisfies the same properties as $f$ (non-vanishing homeomorphism on the boundary etc) and then by the argument principle and discretness of $2\pi N$, $f-a$ has the same number of zeros inside $\mathbb{D}$ when $a$ belongs to the same connected component of $C- f(\mathbb{T})$, so no zeros if $a$ is outside and precisely one zero if $a$ is inside since otherwise $f$ cannot be a homeomorphism at the boundary.
There are two subtle points here, both implied by the sense-preserving property of $f$, first being that zeros are discrete (there are harmonic functions, even harmonic polynomials like $\Re(z)$, with non-discrete zero set) and second is that they all have positive index (otherwise you could have two zeros with index 1 and one with index -1, say since only zeros of analytic functions are apriori guaranteed to have positive index), so if the argument change is $2\pi$ it means there is a unique zero. |
Proving this sum is irrational | First of all, let us find a nonzero annihilator polynomial of $\sqrt{2}+\sqrt[3]{5}.$ Let $p=x^2-2$ and $q=x^3-5$, $p$ is a nonzero annihilator of $\sqrt{2}$ and $q$ is a nonzero annihilator polynomial of $\sqrt[3]{5}$. Hence, the resultant with respect to $t$ of the polynomials $p(t)$ and $q(x-t)$ is a nonzero annihilator polynomial of $\sqrt{2}+\sqrt[3]{5}$. After computation, one has: $$\textrm{res}_t(p(t),q(x-t))=x^6-6x^4-10x^3+12x^2- 60x+17.$$
By the rational root theorem, this polynomial has no rational root (the only candidates are $\pm1$ and $\pm17$, and none of them works). Whence the result. |
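A quick numerical sanity check of the computed resultant (illustration only): the displayed sextic does vanish at $\sqrt{2}+\sqrt[3]{5}$, and at none of the rational-root candidates $\pm1,\pm17$.

```python
# sqrt(2) + cbrt(5) is (numerically) a root of
# x^6 - 6x^4 - 10x^3 + 12x^2 - 60x + 17.
a = 2 ** 0.5 + 5 ** (1 / 3)
value = a**6 - 6 * a**4 - 10 * a**3 + 12 * a**2 - 60 * a + 17
assert abs(value) < 1e-9

# None of the candidate rational roots annihilate the polynomial.
for r in (1, -1, 17, -17):
    assert r**6 - 6 * r**4 - 10 * r**3 + 12 * r**2 - 60 * r + 17 != 0
print("ok")
```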
Why it's not true that $\frac{d}{dt}\int f(t,x)dx=\int \frac{\partial }{\partial t}f(t,x)dx$ where $f(t,x)=g(x)sign(t-x)$? | AFAIK there is another condition in the stated "theorem" that we require: that $f(\cdot,x)\in C^1(\Bbb R )$ for a.e. $x\in \Bbb R $, which clearly fails in your case, because $g(x)\operatorname{sign}(\cdot-x)$ is discontinuous for each $x\in \Bbb R $ when $g\neq 0$.
In view of this, it seems that this condition can be weakened to assert that $\partial f(\cdot,x)$ must exist, as a function of its first argument, for a.e. $x\in \Bbb R $, and that it must be Lebesgue-integrable. Then probably the condition in the stated theorem must be changed from
For a.e. $(t,x)\in \mathbb R^+\times \mathbb R$, $\frac{\partial }{\partial t}f(t,x)$ exists
to
For a.e. $x\in \mathbb R$, $\partial f({\cdot},x)$ exist and is Lebesgue-integrable
In any case this condition doesn't hold either for your function, because $\partial f(\cdot,x)$ doesn't exist (as a function on all of $\Bbb R$) for any chosen $x$ when $g\neq 0$.
Anyway we can check that $F'=2g$:
$$
\begin{align*}
F(t)&=\int_{\Bbb R }g(x)\operatorname{sign}(t-x)\,\mathrm d x\\
&=\int_{\Bbb R }g(t-x)\operatorname{sign}(x)\,\mathrm d x\\
&=\int_{0}^{\infty }g(t-x)\,\mathrm d x-\int_{-\infty }^0g(t-x)\,\mathrm d x\\
&=\int_{-\infty }^tg(w)\,\mathrm d w-\int_t^{\infty }g(w)\,\mathrm d w\\
&=2G(t)
\end{align*}
$$
with the change of variable $w:=t-x$ and where $G$ is some primitive of $g$ (such a primitive exists because $g$ is a test function), so $F'=2g$ as expected. On the other hand, for each fixed $t$ we have $\partial_t[g(x)\operatorname{sign}(t-x)]=0$ for every $x\neq t$, hence
$$\int_{\Bbb R }\partial _t f(t,x)\,\mathrm d x=\int_{\Bbb R }0\,\mathrm d x=0\neq 2g(t)=F'(t)$$
whenever $g(t)\neq 0$. Note that one cannot reach the same conclusion by first substituting $w:=t-x$ and then differentiating under the integral sign: the substitution moves the $t$-dependence into the smooth factor, and indeed
$$
\begin{align*}
\int_{\Bbb R }\partial_t[ g(t-x)\operatorname{sign}(x)]\,\mathrm d x&=\int_{0}^{\infty }g'(t-x)\,\mathrm d x-\int_{-\infty }^0g'(t-x)\,\mathrm d x\\
&=\int_{-\infty }^tg'(w)\,\mathrm d w-\int_t^{\infty }g'(w)\,\mathrm d w\\
&=2g(t),
\end{align*}
$$
which recovers $F'(t)$; in that representation the integrand is smooth in $t$, so differentiation under the integral sign is legitimate. |
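A small numerical illustration of the mismatch, taking $g(x)=e^{-x^2}$ for simplicity (a Schwartz function rather than a compactly supported test function, but the phenomenon is the same): here $F(t)=\sqrt{\pi}\,\operatorname{erf}(t)$, so $F'(t)=2g(t)$, whereas $\partial_t f(t,x)=0$ for a.e. $x$.

```python
import math

# g(x) = exp(-x^2); then F(t) = integral of g(x) sign(t - x) dx
# equals sqrt(pi) * erf(t), so F'(t) = 2 g(t), while the pointwise
# derivative of the integrand in t is 0 for a.e. x.
g = lambda x: math.exp(-x * x)
F = lambda t: math.sqrt(math.pi) * math.erf(t)

t, h = 0.5, 1e-6
numeric_Fprime = (F(t + h) - F(t - h)) / (2 * h)   # central difference
assert abs(numeric_Fprime - 2 * g(t)) < 1e-6
print(numeric_Fprime, 2 * g(t))
```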
$xf(x) - yf(y) = (x-y)f(x+y)$ | Write the given functional equation for the three pairs $(x,y)$, $(y,z)$, $(z,x)$, and add up. You then obtain
$$0=(x-y)f(x+y)+(y-z)f(y+z)+(z-x)f(z+x)\ .$$
We now have a functional equation with three free variables. Put
$$x:={t+1\over2},\quad y:={t-1\over2},\quad z:={1-t\over2}\ ,$$
and you get
$$0=1\cdot f(t)+(t-1)\cdot f(0)-t\cdot f(1)\ ,$$
or
$$f(t)=f(0)+t\bigl(f(1)-f(0)\bigr)\qquad\forall t\in{\mathbb R}\ .$$
This shows that $f$ has to be of the form $f(t)=at+b$ with arbitrary constants $a$, $b$. |
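As a spot check, any affine $f(t)=at+b$ indeed satisfies the original equation identically (the sample points below are arbitrary):

```python
# Check that f(t) = a*t + b satisfies x f(x) - y f(y) = (x - y) f(x + y).
def satisfies(a, b):
    f = lambda t: a * t + b
    pts = [(-2.0, 3.5), (0.25, -1.75), (7.0, 7.0)]
    return all(abs(x * f(x) - y * f(y) - (x - y) * f(x + y)) < 1e-9
               for x, y in pts)

assert satisfies(2.0, -3.0) and satisfies(0.0, 5.0)
print("ok")
```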
Proof based problem related to non-trivial solution of a linear equation system | You are already finished, because
$$
\frac{1}{1+a}+\frac{1}{1+b}+\frac{1}{1+c}-2=\frac{ - 2abc - ab - ac - bc + 1}{abc + ab + ac + a + bc + b + c + 1}=0.
$$ |
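Assuming, as above, that the hypothesis on the system forces $2abc+ab+ac+bc=1$, the identity can be spot-checked numerically with sample values (hypothetical) engineered to satisfy the constraint:

```python
# For each (a, b), solve 2abc + ab + ac + bc = 1 for c, then check
# that the three fractions sum to 2.
for a, b in [(0.3, 0.7), (1.5, 0.2), (2.0, 3.0)]:
    c = (1 - a * b) / (2 * a * b + a + b)   # enforce the constraint
    total = 1 / (1 + a) + 1 / (1 + b) + 1 / (1 + c)
    assert abs(total - 2) < 1e-12
print("ok")
```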
An extension of Dominated Convergence Theorem | As $f_n \to f$ almost everywhere, we can choose $X_0$ such that $m(X_0)=m(X)$ and
$$\lim_{n \to \infty} f_n(x)=f(x)$$
for all $x \in X_0$. Then
$$\tilde{f}(x) := \begin{cases} f(x) & x \in X_0 \\ 0 & \text{otherwise} \end{cases}.$$
defines a measurable function satisfying $\tilde{f}=f$ almost everywhere. By assumption, $|f_n| \leq g$ almost everywhere and therefore we find
$$|f| = \lim_{n \to \infty} |f_n| \leq g$$
almost everywhere. Hence, $|\tilde{f}| \leq g$ almost everywhere. Consequently, we conclude that $\tilde{f}$ is integrable and we may apply the "standard" dominated convergence theorem to get the second statement. |
condition of potentially good reduction of representations | The direction $\rho(I_K)< \infty \implies \rho(I_{K'})=1$ for a finite extension:
$\rho(I_{K})< \infty$ means that $I_K \cap \textrm{ker}\rho$ has finite index in $I_K$. But $I_K \cap \textrm{ker} \rho$ is also a closed subgroup of $I_K$, hence it is open in $I_K$. Therefore there is an open subgroup $G_{K'}=G(K^s/K')$ of $G_K$, given by a finite extension $K'/K$, such that $I_K \cap G_{K'} \subseteq I_K \cap \textrm{ker} \rho$. But $I_K \cap G_{K'}=I_{K'}$, and by the above containment we have $\rho(I_{K'})=1$.
The reverse direction:
Since $G_{K'}$ has finite index in $G_K$, $I_{K'}$ has finite index in $I_K$. By the condition $\rho(I_{K'})=1$, we have $I_{K'} \subseteq \textrm{ker}\rho \cap I_K \subseteq I_K$. So $[I_K: \textrm{ker}\rho \cap I_K]= |\rho(I_{K})| < \infty$. |
A multivariate function is convex iff it is convex in all axes? | If $g$ is any non-convex function on $\mathbb R$ with $g(0)=0$ then $f(x)=g(x_1)g(x_2)...g(x_n)$ gives a counterexample. |
Transitive Relations and functions | Not quite, but close. The function $f:X \to X$ defined by $f(x)=x$ is a transitive relation. Your proof fails because you don't know that $b \neq c$.
Edited to add:
I believe your proof does show that $f$ is a transitive relation $\iff f \circ f = f$. |
Spectrum of a bounded operator $T$ satisfying $T^n=I$ | Suppose $T^{n}=I$. Let $p_{k}$ be the Lagrange polynomials
$$
p_{k}=\prod_{j=0,j\ne k}^{n-1}(\lambda-e^{2\pi ji/n})\left/\prod_{j=0,j\ne k}^{n-1}(e^{2\pi ki/n}-e^{2\pi ji/n})\right.\;.
$$
Notice that $p_{0}+\cdots+p_{n-1}=1$ because it is an (n-1)-st degree polynomial that equals $1$ at all n-th roots of unity. Define $P_{k}=p_{k}(T)$. Then $P_{0}+\cdots+P_{n-1}=I$. Furthermore $P_{j}P_{k}=P_{k}P_{j}=0$ for $j \ne k$ because such a product can be represented as $p(T)$ where $(\lambda^{n}-1)$ divides $p$. So,
$$
I = (P_{0}+\cdots+P_{n-1})^{2}=P_{0}^{2}+\cdots+P_{n-1}^{2}.
$$
Using this, one obtains
$$
\begin{align}
P_{k}^{2}= (I-\sum_{j=0,j\ne k}^{n-1}P_{j})^{2} & =I-2\sum_{j=0,j\ne k}^{n-1}P_{j}
+ \sum_{j=0,j\ne k}^{n-1}P_{j}^{2} \\
& = I-2(I-P_{k})+(I-P_{k}^{2})
\end{align}
$$
Therefore, $P_{k}^{2}=P_{k}$ is a projection. Let $\lambda =e^{2\pi i/n}$. Then $(T-\lambda^{k}I)P_{k}=0$, and
$$
T=T(P_{0}+\cdots+P_{n-1}) = \lambda^{0}P_{0}+\cdots+\lambda^{n-1}P_{n-1}.
$$
In other words, $\mathcal{H}$ is a direct sum of closed subspaces on which $T$ is $\lambda^{k}$ times the identity. A particular $k$ can be missing in such a sum if $P_{k}=0$, which is certainly possible because there is no assumption that $\lambda^{n}-1$ is a minimal annihilating polynomial for $T$.
Conversely, given any finite direct sum decomposition of $\mathcal{H}$ into non-trivial closed subspaces $M_{j}$, it is possible to define $T$ as a scalar multiple of the identity on each $M_{j}$ with scalars taken to be any $n$-th root of unity. Then $T^{n}=I$. So the spectrum of such a $T$ can be any non-empty subset of the n-th roots of unity.
These arguments work the same for any annihilating polynomial with distinct roots. |
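The Lagrange-projection construction is easy to see in action for a diagonal $T$ (a hypothetical example with $n=4$, and with one root of unity absent from the spectrum, so that $P_2=0$ as allowed above):

```python
import cmath

n = 4
roots = [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]

def p(k, z):
    """Lagrange polynomial p_k: equals 1 at roots[k], 0 at the other n-th roots."""
    num, den = 1, 1
    for j in range(n):
        if j != k:
            num *= z - roots[j]
            den *= roots[k] - roots[j]
    return num / den

# A hypothetical diagonal T with T^4 = I; P_k = p_k(T) acts entrywise.
diag_T = [roots[0], roots[1], roots[1], roots[3]]
P = [[p(k, t) for t in diag_T] for k in range(n)]

for k in range(n):
    # each P_k is (numerically) a projection: its diagonal entries are 0 or 1
    assert all(abs(v * v - v) < 1e-12 for v in P[k])
assert all(abs(v) < 1e-12 for v in P[2])   # eigenvalue -1 is absent, so P_2 = 0

for i, t in enumerate(diag_T):
    # the P_k resolve the identity and reconstruct T = sum_k lambda^k P_k
    assert abs(sum(P[k][i] for k in range(n)) - 1) < 1e-12
    assert abs(sum(roots[k] * P[k][i] for k in range(n)) - t) < 1e-12
print("projections verified for n =", n)
```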
How to introduce variations on the competitive Lotka-Volterra model? | I will provide examples in each case.
life expectancy of both species, parasitism, diseases, lack of food depending on the season
(1) Life expectancy: this is usually modeled by a death term. For example, $x' = bx-dx$. In this case, the life expectancy of the $x$ population is $\frac{1}{d}$, where $d$ is the (exponential) death rate. To incorporate this into your model, just add death terms to both populations.
(2) Parasitism: you can use another compartment $z(t)$, which represents the parasitic species. The specific interaction terms between the parasite and its host will depend on the biology. For example, you can have $azy$, where $a$ is the rate at which the parasite consumes nutrition from its host. Then $z'(t) = azy$, and add $-azy$ to the $y'(t)$ expression.
(3) Disease: you can either try to do this using the epidemiology-type model (SIR model) or implicitly (by having disease-effect term).
(4) Lack of food depending on the season: this can be modeled explicitly by having a compartment $n(t)$ that represents the available nutrition. Then model the growth of $x(t)$ and/or $y(t)$ with respect to the available nutrition. A reasonable approach is a cell-quota model. To incorporate the seasonal effect, you can use something like $n'(t) = \sin(at)$, where the periodicity of the sine function represents seasonality. |
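A minimal forward-Euler sketch combining (1) and (4): all parameter values below are hypothetical, a logistic-type self-limitation term is added so the populations stay bounded, and the seasonal effect is folded directly into the growth rates rather than tracked in a separate nutrient compartment.

```python
import math

# Competitive Lotka-Volterra with death terms (life expectancy 1/d)
# and a seasonal modulation of the birth rates (hypothetical values).
b1, b2 = 1.0, 0.8          # birth rates
d1, d2 = 0.1, 0.1          # death rates; life expectancies are 1/d1, 1/d2
a11, a22 = 0.05, 0.06      # intraspecific competition (self-limitation)
a12, a21 = 0.02, 0.03      # interspecific competition
s, period = 0.5, 12.0      # seasonal amplitude and period

x, y = 5.0, 5.0
dt, T = 0.001, 50.0
for i in range(int(T / dt)):
    t = i * dt
    season = 1 + s * math.sin(2 * math.pi * t / period)   # food supply cycle
    dx = x * (b1 * season - d1 - a11 * x - a12 * y)
    dy = y * (b2 * season - d2 - a21 * x - a22 * y)
    x, y = x + dt * dx, y + dt * dy

print(round(x, 2), round(y, 2))
```

For serious work, replace the hand-rolled Euler step with an adaptive ODE solver.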
How can I prove that this function is uniformly continuous? | You can show that it is differentiable, and has bounded derivative. Then use Lagrange's mean value theorem. |
Bourbaki-Witt to Tarski-Knaster Fixed Point Theorem | Yes, this proof works. When $\lambda$ is a limit ordinal, you can prove $g(\lambda)\leq g(\lambda^+)$ as follows. For any $\alpha<\lambda$, $g(\alpha)\leq g(\lambda)$ and hence $g(\alpha)\leq g(\alpha^+)=f(g(\alpha))\leq f(g(\lambda))=g(\lambda^+)$. Since $g(\lambda)$ is the least upper bound of all these $g(\alpha)$, we must have $g(\lambda)\leq g(\lambda^+)$. |
On exisitance a finite group | No, there is not.
As the (gap) tag was given this is presumably intended as a request on how to do this calculation. First construct the group and convert it to a PcGroup:
f:=FreeGroup("a","b","x");
rels:=ParseRelators(f,"a8=b8=x4=1,a4=b4=x2,[a,x]=[b,x]=1,[[a,b],b]=[[a,b],a]=[[b,a],a]=[[b,a],b]=1");
g:=f/rels;
Size(g);
g:=Image(IsomorphismPcGroup(g));
The command IsCentralFactor tests whether a group is capable, in this case it returns false, thus the group is not capable. |
powers of a Markov matrix equals identity | $A$ and $A^{-1}=A^6$ have only $\geq 0$ entries. Then $A$ is a pseudo permutation.
Prove the above assertion and deduce an explicit solution of your exercise. |
Finding circles on lattice points with arbitrary origin. | I wrote some Python code to find minimal lattice circles for a puzzle I was solving. See Enigma 136: Twelve-point square, where I give a list of minimal lattice circles with radius less than 1800 (and some other ones that are known to be minimal), along with some other circles which I've found, but are not necessarily minimal.
The smallest unverified circles are for n=29 and n=31 points on the circumference. |
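In the same spirit, here is a tiny brute-force helper (with the center fixed at the origin for simplicity; the puzzle above allows arbitrary centers, which a full search must handle):

```python
# Count lattice points on the circle x^2 + y^2 = N.
def lattice_points_on_circle(N):
    bound = int(N**0.5) + 1
    return sum(1 for x in range(-bound, bound + 1)
               for y in range(-bound, bound + 1)
               if x * x + y * y == N)

assert lattice_points_on_circle(25) == 12   # (+-5,0), (0,+-5), (+-3,+-4), (+-4,+-3)
print(lattice_points_on_circle(25))
```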
Is $x$ in $ax+b$ every value, or a specific unspecified value? | Too long for a comment, and there's already a good short answer.
This interesting question is an instance of a common misunderstanding that most students eventually resolve intuitively. An "$x$" sometimes means a particular number, usually one you must find, sometimes a typical number (or element of the domain of a function). The difference is rarely mentioned explicitly.
The $x$ in $y = mx +b$ is the second kind. What's really being specified is the function $f$ defined by the rule
$$
f(\text{anything}) = m \times \text{ anything} + b
$$
- no need to mention $x$ or $y$. This is often written as
the function $f(x) = mx+b$
even though $f(x)$ isn't the function, $f$ is.
Should we ban that abuse of the language? That's a hard question to answer. Most of the time students can understand from the context what's going on. In those cases the extra cumbersome prose would be more confusing than helpful. But some of the time the abuse leads to confusion. |
Intuition behind being able to choose many different deltas when proving with epsilon delta limits | You can't choose just any $\delta$. But if you have a given $\delta$ that fits, then any $\delta'$ with $0<\delta'<\delta$ will also fit. This is because it says: "If $\textrm{something}<\delta$, then $\textrm{whatever}$". But if this $\textrm{something}$ is smaller than $\delta'$, then it is automatically smaller than $\delta$ as well, so $\textrm{whatever}$ still holds for all $\textrm{something}<\delta'$. |
Lower bound and upper bound functions | For part a), take:
$$f(x) = \begin{cases} -x+6 \text{ if}\text{ } 3 \leq x < 4\\ 4 \text{ if} \text{ }x = 4\\ x-2 \text{ if} \text{ }4 < x \leq 5\end{cases}$$.
For part b), take:
$$f(x) = \begin{cases} -x+6 \text{ if}\text{ } 3 \leq x < 4\\ 4 \text{ if} \text{ }x = 4\\ x-2 \text{ if} \text{ } 4 < x \leq 5\\ 7 \text{ if} \text{ } x > 5\end{cases}$$.
For part c), take:
$f(x) = x$. |
Arithmetic mean of 4 numbers word problem | You're correct. Since the average must be greater than $8$, it cannot include $8$. And since $8\cdot 4 = 32$, when adding all the terms, you can't have $32$. This also means you cannot have $56$; all numbers must be between $32$ and $56$ non-inclusive, so $$33, 34, \dots, 54, 55.$$ |
Find the sum of the series | HINT:
Using Fermat's Little theorem, $$n^p-n\equiv0\pmod p$$ where $p$ is any prime and $n$ is any integer
$\displaystyle\implies n^7-n\equiv0\pmod 7\implies \frac{n^7-n}7$ is an integer
Show that $k(n)$ is an integer for all integers $n$ |
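The congruence itself is easy to spot-check:

```python
# Spot check of Fermat's little theorem for p = 7: n^7 is congruent to n mod 7.
assert all((n**7 - n) % 7 == 0 for n in range(-50, 51))
print("ok")
```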
Noetherian module over two rings | We have $S \cong R/I$ where $I$ is the kernel of the homomorphism. Now, the $R$-submodules of $M$ are precisely the $R/I$-submodules of $M$, which means that $M$ satisfies the ascending chain condition as an $R$-module if and only if it satisfies the ascending chain condition as an $R/I$-module.
Note that the isomorphism $S \cong R/I$ is sufficient for the result to follow, because of the way we define the action of $S$ on $M$. |
Solving ill posed linear equations | What do you mean by "best solution"?
If your problem is ill-posed, then you need regularization. In this case, conjugate gradient provides a form of regularization. Also, you may wish to consider using Truncated Singular Value Decomposition (TSVD), which consists in filtering out, during the matrix inversion, the least relevant singular values, which are responsible for the ill-posedness. You may also wish to consider Tikhonov regularization (leading, under the SVD approach, to a particular shaping of the singular values).
As reference on inverse problems (especially linear inverse problems) and related (regularized) solution strategies, a good book is M.Bertero, P.Boccacci, Introduction to Inverse Problems in Imaging, IOP Publishing, Bristol. |
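A toy illustration of why the filtering helps (all numbers hypothetical; with a diagonal matrix the SVD is explicit, so naive inversion, TSVD and Tikhonov all reduce to scalar operations on the singular values):

```python
# Two singular values; the tiny one makes the problem ill-posed.
sigma = [1.0, 1e-6]
x_true = [1.0, 1.0]
noise = [0.0, 1e-3]          # small data error on the second component
b = [s * x + e for s, x, e in zip(sigma, x_true, noise)]

naive = [bi / s for bi, s in zip(b, sigma)]                     # 1/sigma amplifies noise
tsvd = [bi / s if s > 1e-3 else 0.0 for bi, s in zip(b, sigma)] # drop small sigma
alpha = 1e-6
tikhonov = [s * bi / (s * s + alpha) for bi, s in zip(b, sigma)]  # damp small sigma

err = lambda x: max(abs(a - t) for a, t in zip(x, x_true))
print(err(naive), err(tsvd), err(tikhonov))
# naive error is ~1000, while TSVD and Tikhonov keep the error ~1
```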
Relation between inf and intersection in a complete lattice | In your lattice the order is given by the $\subseteq$-relation, so $A\subseteq B$ iff $a\in A\Rightarrow a\in B$; therefore $\inf_{(L,\subseteq)}(\mathcal{A})=\bigcap\limits_{Y\in\mathcal{A}}Y$.
More generally if $(L,\leq)$ is a complete lattice then we say a $S \subseteq L$ is a closure system if for any $X\subseteq S$, we have $\inf_{(L,\leq)}(X)\in S$; and an easy result to prove is that every complete lattice is isomorphic to a closure system on the power set of some set. |
Ultrafilters - when did it start? | The notion of filter was defined for the first time probably by Henri Cartan [Cart1] (see also [B, §6.1]). However, some authors mention that the notion of filterbase was used by L. Vietoris [V] under the name "Kranz" prior to H. Cartan (see [P, p.46], [R1] or [R2]). R. M.Dudley [D, p.76] mentions that C. Caratheodory used filterbases in [Cara1] and M. H. Stone was in fact dealing with filters in [S].
[B] N. Bourbaki. Elements of Mathematics. General Topology. Chapters I-IV. Springer-Verlag, Berlin, 1989.
[Cara1] C. Carathéodory. Über die Begrenzung einfach zusammenhängender Gebiete. Math. Ann., 73:323-370, 1913.
[Cart1] H. Cartan. Théorie des filtres. C. R. Acad. Sci. Paris, 205:595-598, 1937.
[Cart2] H. Cartan. Filtres et ultrafiltres. C. R. Acad. Sci. Paris, 205:777-779, 1937.
[D] R. M. Dudley. Real Analysis and Probabilty. Cambridge University Press, Cambridge, 2002.
[P] G. Preuss. Foundations of Topology. Kluwer, Dordrecht, 2002.
[R1] H. Reitberger. The contributions of L. Vietoris and H. Tietze to the foundations of general topology. In C. E. Aull and R. Lowen, editors, Handbook of the history of general topology, Volume 1, pages 31-40. Kluwer, Dordrecht, 1997.
[R2] H. Reitberger. Leopold Vietoris (1891-2002). Notices of the American Mathematical Society, 49(10):1231-1236, 2002.
[S] Marshall Harvey Stone. The theory of representations for Boolean algebras. Trans. Amer. Math. Soc., 40:37-111, 1936.
[V] L. Vietoris. Stetige Mengen. Monatshefte f. Math., 31:545-555, 1921. |
A tough integral : $\int_{0}^{\infty }\frac{\sin x \text{ or} \cos x}{\sqrt{x^{2}+z^{2}}}\ln\left ( x^{2}+z^{2} \right )\mathrm{d}x$ | For the second integral
Note that
$$ K_\nu(az)=\frac{\Gamma(\nu+1/2)(2z/a)^\nu}{\sqrt{\pi}}\int_0^\infty\frac{\cos at }{(t^2+z^2)^{\nu+1/2}} dt$$
By differentiation with respect to $\nu$
\begin{align}
\frac{\partial K_\nu(az)}{ \partial \nu} &=\left(\psi(\nu+1/2)+\log(2z/a)\right)K_\nu(az)\\&\quad-\frac{\Gamma(\nu+1/2)(2z/a)^\nu}{\sqrt{\pi}}\int_0^\infty\frac{\cos a t }{(t^2+z^2)^{\nu+1/2}} \log(t^2+z^2)\,dt
\end{align}
where $\psi$ denotes the digamma function.
Note that (see the Addendum below)
$$\left.\frac{\partial K_\nu(z)}{ \partial \nu} \right|_{\nu=0} = 0$$
This implies
$$\int_0^\infty\frac{\cos a t }{\sqrt{t^2+z^2}} \log(t^2+z^2)\,dt = \left(\psi(1/2)+\log(2z/a) \right)K_0(az) $$
Note that
$$\psi(1/2)+\log(2z/a) = -\gamma -2\log(2)+\log(2)+\log(z/a) =\log(z/2a) -\gamma$$
Hence
$$\mathcal{J}=\left ( \log (z/2a)-\gamma \right )K_0\left ( az \right )$$
Addendum
$$K_\nu (z) = \int^\infty_0 e^{-z\cosh t} \cosh(\nu t)\,dt$$
This implies
$$\frac{\partial K_\nu(z)}{ \partial \nu} = \int^\infty_0t e^{-z\cosh t} \sinh(\nu t)\,dt$$
Hence we have
$$\left.\frac{\partial K_\nu(z)}{ \partial \nu} \right|_{\nu=0} = 0$$ |
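The digamma value $\psi(1/2)=-\gamma-2\log 2$ used in the simplification can be checked numerically from the series $\psi(x)=-\gamma+\sum_{n\geq0}\left(\frac{1}{n+1}-\frac{1}{n+x}\right)$:

```python
import math

# Check psi(1/2) = -gamma - 2 ln 2 via the series representation of digamma.
gamma = 0.5772156649015329          # Euler-Mascheroni constant
psi_half = -gamma + sum(1 / (n + 1) - 1 / (n + 0.5) for n in range(10**6))
assert abs(psi_half - (-gamma - 2 * math.log(2))) < 1e-5
print(psi_half, -gamma - 2 * math.log(2))
```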
Let $F$ be a Galois extension over $\mathbb{Q}$ with $[F:\mathbb{Q}]=2^n$, then all elements in $F$ are constructible | The idea is to try to deduce the result for $F$ by induction on $n$.
First, to get your feet wet, you want to show that if $[F:\mathbb{Q}]=2$, then all elements of $F$ are constructible. Feel comfortable with that one.
Then, how would the general proof by induction work?
You want to argue that an $F$ that is Galois over $\mathbb{Q}$ with $[F:\mathbb{Q}]=2^{n}$ has a subextension $K$, $\mathbb{Q}\subseteq K\subseteq F$, with $K$ Galois over $\mathbb{Q}$, $[F:K]=2$, and $[K:\mathbb{Q}]=2^{n-1}$. Then you would use induction to show that everything in $K$ is constructible. And then you'd use an argument similar to the one you used for the case $n=1$ in order to show that since everything in $K$ is constructible, and $[F:K]=2$, then everything in $F$ is constructible. So your induction hypothesis would look something like
If $L$ is Galois over $\mathbb{Q}$, and has $[L:\mathbb{Q}]=2^k$ for some $k\lt n$, then every element of $L$ is constructible.
To do it that way you would want to use a subgroup of $\mathrm{Aut}(F/\mathbb{Q})$ which is of index $2^{n-1}$ (rather than of order $2^{n-1}$), and which is normal (so that the corresponding extension $K$ is normal over $\mathbb{Q}$).
You can approach it going "the other way", with a subgroup of order $2^{n-1}$ (which is necessarily normal, being of index $2$), as you ask. How would an inductive argument look in that case? The subgroup $H$ of order $2^{n-1}$ gives you an intermediate field $K$, $\mathbb{Q}\subseteq K\subseteq F$, with $[F:K] = 2^{n-1}$, $[K:\mathbb{Q}]=2$. So you would know, from the case $n=1$, that everything in $K$ is constructible, and you would want to argue inductively that everything in $F$ is constructible.
So here, your induction hypothesis should be somewhat different: the induction hypothesis I quote above would not let you conclude that everything in $F$ is constructible from the fact that everything in $K$ is constructible, because $K$ is not $\mathbb{Q}$ so the induction hypothesis would not apply.
You need a different induction hypothesis: one that doesn't "care" what the base field is, as long as everything in it is constructible. So your induction hypothesis should look like:
If $K$ is an extension of $\mathbb{Q}$ in which all elements are constructible, and $L$ is a field extension of $K$ with $[L:K]=2$, then every element of $L$ is constructible.
Then you could use that induction hypothesis to conclude everything in $F$ is constructible.
But this introduces a new problem: in order to prove the $n=1$ of the induction hypothesis I proposed first, you would need to prove that if $K$ is a field with $[K:\mathbb{Q}]=2$, then everything in $K$ is constructible; this is not too hard (I hope). But in order to use the second induction hypothesis, the case $n=1$ needs to be more general: now you need to show that if $K$ is any extension of $\mathbb{Q}$ in which every element is constructible, and $L$ is a field extension of $K$ with $[L:K]=2$, then every element of $L$ is constructible. Because the second induction hypothesis assumes more than the first, you need to prove more in the base case if you want to use it.
This is exactly what Chris pointed out: his "first" is the proof of the $n=1$ case of the second proposed induction hypothesis above; his "then" is the inductive step.
(Of course, you are just trading when and where you will prove that if everything in $K$ is constructible and $[L:K]=2$, then everything in $L$ is constructible; in the first argument, you need to do that to finish off the inductive step; in the second argument, you need it to get started. The one advantage of the method you propose is that you only need to know that a group of order $2^n$ has a subgroup of order $2^{n-1}$; with the first method, you need to know it has a normal subgroup of order $2$. This is not very hard, though, one just shows the center is nontrivial.)
Added after edit to the question.
I think you are complicating your life again by trying to attack the entire problem in one fell swoop somehow, rather than just taking it one chunk at a time. Your original idea of trying to use induction somehow has a lot of merit, and means that you don't really need to invoke theorems on solvability and normal series, just the fact that a group of order $2^{n+1}$ must have a normal/central subgroup of order $2$ (or if you want to go by your original attempt, just using that a group of order $2^{n+1}$ must have a subgroup of order $2^n$, see the final comments after the next horizontal rule).
The key is indeed showing that:
Result we hope to prove. If $[F:K]=2$ (and $F/K$ is a Galois extension; but this is immediate, because any extension of degree $2$ in a field of characteristic different from $2$ is always a Galois extension), and all elements of $K$ are constructible, then all elements of $F$ are constructible.
Assume for a moment that you have already managed to prove this. How will the induction argument go? If $F$ is a Galois extension with $[F:\mathbb{Q}]=2^1$ (that is, $n=1$), then the result will follow because every element of $\mathbb{Q}$ is certainly constructible, so by the result we are assuming you have proven, every element in $F$ is constructible. Done.
Now, assume inductively that for any finite Galois extension $K$ of the rationals, if $[K:\mathbb{Q}]=2^k$, then every element of $K$ is constructible. We want to show that if $F$ is an extension of $\mathbb{Q}$ with $[F:\mathbb{Q}]=2^{k+1}$, then every element of $F$ is constructible. If we can prove this, then this will prove the result you want by induction on $n$.
So, say $F$ is a Galois extension of $\mathbb{Q}$ with $[F:\mathbb{Q}]=2^{k+1}$. Then we know that $\mathrm{Gal}(F/\mathbb{Q})$ is a group with $2^{k+1}$ elements, and thus has nontrivial center; the center is abelian of order $2^r$ for some $r\geq 1$, so there is a central subgroup of order $2$, call it $N$. Then $N\triangleleft G$, $[G:N]=2^k$, so the fixed field of $N$, call it $K$, satisfies $\mathbb{Q}\subseteq K\subseteq F$, $[K:\mathbb{Q}]=[G:N]=2^k$, $[F:K]=2$, and $K$ is Galois over $\mathbb{Q}$ because $N$ is normal in $G$. By the induction hypothesis, every element of $K$ is constructible. Now, we have $[F:K]=2$, and every element of $K$ is constructible, so by the Result-we-hope-to-prove, we conclude that every element of $F$ is constructible, and we are done. QED, RIP, $\Box$.
So, how about that "Result-we-hope-to-prove"? Well, suppose that, as we've suggested, you manage to prove that:
Lemma-we-hope-to-prove. If $K\subseteq F\subseteq\mathbb{C}$ are field extensions, and $[F:K]=2$, then there exists $\xi\in F$ with $\xi^2\in K$ such that $F=K(\xi)=K[\xi]$.
Then: assuming every element of $K$ is constructible, then notice that every element of $F$ can be written (uniquely) as $a+b\xi$ with $a,b\in K$. Now, $a$ and $b$ are both constructible, so if $\xi$ is constructible, then $a+b\xi$ is constructible (product of constructible numbers is constructible, sums of constructible numbers are constructible), so every element of $F$ is constructible. So, assuming the Lemma, it will all fall to showing $\xi$ is constructible. Since $\xi^2 = k\in K$, and $k$ is constructible, then <insert valid argument here>. This proves the Result-we-hope-to-prove, modulo proving the Lemma-we-hope-to-prove.
So now we are down to proving the Lemma-we-hope-to-prove.
Since $[F:K]=2$, if you take $\alpha\in F$, $\alpha\notin K$ (it must exist), then $\{1,\alpha\}$ is a basis for $F$ over $K$ (linearly independent because $\alpha\notin K$, and of the correct cardinality to be a basis). Since $\{1,\alpha,\alpha^2\}$ is linearly dependent (too many vectors), but $\{1,\alpha\}$ is linearly independent, then $\alpha^2$ is a linear combination of $1$ and $\alpha$. So we can find $b,c\in K$ such that $\alpha^2+b\alpha+c = 0$. That is, the minimal polynomial of $\alpha$ over $K$ is $f(x) = x^2+bx+c$.
But we know exactly what the roots of $f(x)$ are: they are
$$\frac{-b+\sqrt{b^2-4c}}{2}\quad\text{and}\quad\frac{-b-\sqrt{b^2-4c}}{2}.$$
So $\alpha$ must be one of them. Letting $r$ be the first root, note that the second root is $-r-b$, so replacing $\alpha$ by $-\alpha-b$ if necessary (remember that $b$ is in $K$), we may assume that $\alpha = \frac{-b+\sqrt{b^2-4c}}{2}$.
Now... what about that $\sqrt{b^2-4c}$ ? Is it in $F$? Is it in $K$? What is $K\left(\sqrt{b^2-4c}\right)$, anyway?
Note. If you change what you are trying to prove to the apparently more general:
Let $F\subseteq\mathbb{C}$ be a Galois extension of $K$ such that $[F:K]=2^n$. If all elements of $K$ are constructible then all elements of $F$ are constructible.
Then you get a result which implies the one you want, by taking $K=\mathbb{Q}$ (since certainly every element of $\mathbb{Q}$ is constructible). In this approach, the base of the induction would be the Result-we-hope-to-prove, and the induction hypothesis would be:
Induction hypothesis. If $M\subseteq\mathbb{C}$ is a Galois extension of $L$, $[M:L]=2^k$, and every element of $L$ is constructible, then every element of $M$ is constructible.
Then instead of using a central subgroup $N$ of $\mathrm{Aut}(F/K)$ of order $2$, you can take the subgroup $H$ of order $2^k$ that you know exists inside $\mathrm{Gal}(F/K)$. Letting $L$ be the fixed field of $H$, you have that $[L:K]=2$, $[F:L]=2^k$, and $F$ is Galois over $L$. By the Result-we-hope-to-prove you know every element of $L$ is constructible (since $H\triangleleft G$, so $L$ is Galois over $K$), and then by the induction hypothesis applied to $F/L$ you know that every element of $F$ is constructible. It all still comes down, though, to the Result-we-hope-to-prove. |
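Not needed for the argument above, but here is a concrete instance of such a tower (my own illustration): $\alpha=\sqrt2+\sqrt3$ lives in $\mathbb{Q}\subset\mathbb{Q}(\sqrt2)\subset\mathbb{Q}(\sqrt2,\sqrt3)$, a chain of degree-$2$ steps, and satisfies $x^4-10x^2+1=0$. This can be checked with exact rational arithmetic over the basis $\{1,\sqrt2,\sqrt3,\sqrt6\}$:

```python
from fractions import Fraction as F

# Elements of Q(sqrt2, sqrt3) written as coefficient 4-tuples over the
# basis {1, sqrt2, sqrt3, sqrt6}; this field is a tower of two degree-2
# extensions, so [Q(sqrt2, sqrt3) : Q] = 2^2.

def mul(p, q):
    a, b, c, d = p
    e, f, g, h = q
    # uses sqrt2^2 = 2, sqrt3^2 = 3, sqrt6^2 = 6, sqrt2*sqrt3 = sqrt6,
    # sqrt2*sqrt6 = 2*sqrt3, sqrt3*sqrt6 = 3*sqrt2
    return (a*e + 2*b*f + 3*c*g + 6*d*h,
            a*f + b*e + 3*(c*h + d*g),
            a*g + c*e + 2*(b*h + d*f),
            a*h + d*e + b*g + c*f)

def add(p, q):
    return tuple(x + y for x, y in zip(p, q))

def scale(k, p):
    return tuple(k * x for x in p)

one = (F(1), F(0), F(0), F(0))
alpha = (F(0), F(1), F(1), F(0))      # alpha = sqrt2 + sqrt3

a2 = mul(alpha, alpha)                # 5 + 2*sqrt6
a4 = mul(a2, a2)                      # 49 + 20*sqrt6
residue = add(add(a4, scale(F(-10), a2)), one)
print(a2, a4, residue)                # residue is (0, 0, 0, 0)
```

Since $\alpha^2=5+2\sqrt6\notin\mathbb{Q}$, the minimal polynomial of $\alpha$ has degree $4=2^2$, matching the tower.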
Can a generating set for a polynomial ideal have less elements than a minimal Gröbner base? | Let $I=\langle x^3-2xy, x^2y-2y^2+x \rangle$ be an ideal in $K[x,y]$ with grlex and $y\prec x$.
Then a minimal Gröbner basis has $k=3$ elements, e.g.,
$$
G=\{ x^2,xy,y^2-\frac{1}{2}x \}.
$$
But $I$ is generated by $2$ elements. |
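If sympy is available, this example can be reproduced directly (a sketch, not part of the original answer; `groebner` computes the reduced Gröbner basis, which here has the three stated elements):

```python
from sympy import symbols, groebner

x, y = symbols('x y')
# I = <x^3 - 2xy, x^2 y - 2y^2 + x>, graded lex order with y < x
G = groebner([x**3 - 2*x*y, x**2*y - 2*y**2 + x], x, y, order='grlex')
print(G.exprs)   # three elements: x**2, x*y, y**2 - x/2
```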
Is there any formula of monadic second-order logic that is only satisfied by an infinite set? | Skolem proved a quantifier elimination result for Peirce's "Calculus of Classes". See this article in the Stanford Encyclopaedia of Philosophy for some references. This calculus amounts to the first-order theory over the signature $(\subseteq; \emptyset, -, \cup, \cap)$ of type $(2; 0, 1, 2, 2)$ whose intended interpretation is the set of subsets of some universe with $\subseteq$ being the subset relation,with $\emptyset$ denoting the empty set and with $-$, $\cup$ and $\cap$ denoting complementation, union and intersection.
Skolem's result shows that every sentence is equivalent to a propositional combination of sentences $L_n$ ($n = 1, 2, \ldots$), where $L_n$ means "the universe has at least $n$ elements". Monadic second order logic can be reduced to the theory of the Calculus of Classes by mapping sets to themselves and by treating elements as singleton sets, noting that singleton sets are the atoms for the subset relation. A satisfiable propositional combination of the sentences $L_n$ is satisfiable in a finite universe, hence a satisfiable sentence of the form $\exists x.\phi$ is satisfiable with a witness for $x$ that is finite.
What is the value of $1\%0$? | "$1\%0$" would be equivalent to $1-\Big\lfloor\frac10\Big\rfloor\times0$.
Modulo by $0$ is as meaningless as Division by $0$. |
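The same convention shows up in programming languages; in Python, for instance (my own illustration), the modulo operator raises the same error as division:

```python
def try_eval(expression):
    """Evaluate an expression string, reporting ZeroDivisionError."""
    try:
        return repr(eval(expression))
    except ZeroDivisionError:
        return "ZeroDivisionError"

results = {e: try_eval(e) for e in ("1 / 0", "1 % 0", "7 % 3")}
print(results)   # both 1/0 and 1%0 raise ZeroDivisionError; 7 % 3 is 1
```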
How to solve this ODE $(x^3+y^3)\,dx=3x^2y\,dy$ | $u =\dfrac{y}{x} \implies y = ux \implies y' = u + xu' \implies 3u + 3xu' = \dfrac{1}{u} + u^2\implies 3x\dfrac{du}{dx} = u^2+\dfrac{1}{u}-3u= \dfrac{u^3-3u^2+1}{u}\implies \displaystyle \int \dfrac{udu}{u^3-3u^2+1}= \displaystyle \int \dfrac{dx}{3x} \implies \displaystyle \sum_{\{c: c^3-3c^2+1 = 0\}} \dfrac{\log (u-c)}{c-2}= \ln |x|+K, K:$ constant according to Wolfram Alpha.
Note: If this is a math model for solving an application, the best practical approach would be a numerical one such as Euler's method.
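The Euler's method suggestion can be sketched as follows (my own illustration, not from the answer; the initial condition $y(1)=1$ is arbitrary):

```python
def f(x, y):
    # right-hand side of dy/dx = (x^3 + y^3) / (3 x^2 y)
    return (x**3 + y**3) / (3 * x**2 * y)

def euler(x0, y0, x_end, h):
    """Explicit Euler: y_{n+1} = y_n + h * f(x_n, y_n)."""
    x, y = x0, y0
    while x < x_end - 1e-12:
        y += h * f(x, y)
        x += h
    return y

# First-order convergence: halving the step barely changes the result.
coarse = euler(1.0, 1.0, 2.0, 1e-3)
fine = euler(1.0, 1.0, 2.0, 5e-4)
print(coarse, fine)
```

Since $y'>0$ for $x,y>0$, the numerical solution should increase from $y(1)=1$.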
Division by rational (decimal) number meaning | Short answer:
Multiplication is repeated addition of the same value, so if your are adding some value to itself over and over again, multiplication by the number of times this value occurs in the sum will give the final result. This is where the concept of multiplication comes from.
Division undoes multiplication. This is where it comes from. So if you know what the sum is, and the number of times the value was included in the summation, then the value that was being added to itself can be obtained by division.
(1) If I may be allowed to skip over the added complications of fractional numbers, this is what is happening in your example. Each Euro is worth some number $c$ of CZK. The value of 1.5 Euros is $1.5 \times c$. But we know that $1.5$ Euros is worth the same as $42$ CZK, so we get $1.5 \times c = 42$.
We rewrite this as a division:
$$c = \frac {42}{1.5} = 28$$
to see that the value of each Euro is $28$ CZK.
(2) that is the definition of "average": The average of a collection of values is the number that if every one of values in the collection were changed to this number, then the sum of all the values would not change. From this definition, it follows that
$$\text{Count}\times\text{Average} = \text{Sum}$$ and therefore that $$\text{Average} = \frac{\text{Sum}}{\text{Count}}$$
(3) "mean" and "average" are the same thing. They are just two different words for the same concept.
Long answer:
Suppose you buy 8 widgets that cost 2 Euro each. How much does it cost? Well, you add them up:
$$2 + 2 + 2 + 2 + 2 + 2 + 2 + 2 = 16$$
That's rather tiring, having to repeatedly add the same thing. So we introduce a second operation: multiplication. Since there are $8$ widgets altogether, we define $8 \times 2 = 16$. A little development of rules for multiplying, and we have a useful tool that makes a lot of common calculations much easier. But remember that at its base, multiplication is just repeated addition of the same number.
But when you have multiplication, sometimes you end up needing to ask the opposite question: "I bought some wingdings at a price of $1.5$ Euros each, and the total charge was $30$ Euros. How many wingdings did I purchase? (Fortunately, I am buying in a place without sales tax.) If I call the total number of wingdings purchased $N$, then the total cost will be
$$\underbrace{1.5 + 1.5 + ... + 1.5}_{N\text{ times}} = N \times 1.5$$ Since I paid $30$ Euros total, it must be that $$N \times 1.5 = 30$$ So I need some means of undoing that multiplication by $1.5$, so I can figure out what $N$ is. For this reason, we invented division. With additional development of rules and processes, we find convenient means of calculating that $$\frac {30}{1.5} = 20$$ which means the same thing as saying $20 \times 1.5 = 30$. So I bought $20$ wingdings. No other number when multiplied by $1.5$ Euros would result in a total cost of $30$ Euros.
But in these two examples, everything costs exactly the same amount. Suppose next I buy 10 items total: some mix of widgets (2 Euros each), wingdings (1.5 Euros each), and whachamacallits (2.5 Euros each). The total cost now is 20 Euros. How many did I buy of each? Unfortunately, this question cannot be answered fully. There are several different combinations that would result in 20 Euros:
All widgets: $10 \times 2 = 20$.
8 widgets, 1 wingding, 1 whachamacallit: $8 \times 2 + 1 \times 1.5 + 1 \times 2.5 = 20$.
...
5 wingdings, 5 whachamacallits: $5 \times 1.5 + 5 \times 2.5 = 20$.
Well then, if I can't figure out how many I have of each, is there anything useful I can say? Suppose instead of all costing different amounts, everything had cost the same. What would this amount be, given that $10$ items costs $20$ Euros? We divide to see that if everything cost the same, it would be $2$ Euros.
While this example is contrived, there are a number of cases where there really isn't a good way to control how many items come at one value instead of another. For example, manufacturing variation causes the weight of supposedly identical products to vary from item to item. So you cannot say exactly how much each widget weighs. But, if you weigh 1000 widgets, sum up the weight and divide by 1000 to figure out how much they would weigh if they really were the same weight, then it is likely that the 1000 widgets represents a good sample of the values that might occur from the manufacturing variation. So if you get in another order of 2000 widgets, you can now predict that the weight of this 2000 will be close to twice the weight of the first 1000 even though they don't all weigh the same. Now you don't have to weigh every widget in the future to have a good idea how much any large number of them will weigh.
For this reason, we define the concept of "average" or "mean":
The average of a collection of values is the number that, if all of the values in the collection were changed to this number, then the sum of the collection would be same.
Now if all of the items in the collection were changed to this same average, then the sum of the new collection would be the number of items times the average. Since the average is chosen to leave the sum unchanged, adding the average value to itself once for each value in the collection (i.e., multiplying the average by the collection count) must give the original sum of the collection:
$$\text{Count}\times\text{Average} = \text{Sum}$$
Rewriting it using division gives the familiar formula:
$$\text{Average} = \frac{\text{Sum}}{\text{Count}}$$ |
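These formulas can be checked on a tiny data set (my own numbers, echoing the varying-price example above):

```python
values = [2.0, 1.5, 2.5, 2.0, 2.0]       # e.g. prices that vary per item
total = sum(values)
count = len(values)
average = total / count                   # Average = Sum / Count

# Replacing every value by the average leaves the sum unchanged:
print(average, count * average, total)    # 2.0 10.0 10.0
```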
A question about behaviour of prime numbers in totally real field.. | For a number field $F$,
The primitive element theorem gives $F=\Bbb{Q}(a)$, and we can take $a$ to be an algebraic integer with monic minimal polynomial $f\in \Bbb{Z}[x]$. If $p \nmid Disc(f)$ and $p \mid f(n)$ for some integer $n$, then $f$ has a root mod $p$ and so isn't irreducible in $\Bbb{F}_p[x]$; thus $(p)$ isn't a prime ideal of $\Bbb{Z}[x]/(f(x))$, and hence it is not a prime ideal of $O_F$. Whence there are infinitely many non-inert primes.
If finitely many primes of $\Bbb{Q}$ split completely in $F$ then $$\zeta_F(s) = \prod_p\prod_{P \ prime \ \in O_F,P \ni p}\frac1{1-N(P)^{-s}}$$
converges for $\Re(s) > 1/2$.
It is known to be untrue because for $F$ totally real and $s > 1$
$$\Gamma(s)^n\zeta_F(s)\ge \Gamma(s)^n\sum_{a\in (O_F\setminus\{0\})/O_F^\times} N(a)^{-s}=\int_{\Bbb{R}_{>0}^n / U} f(x)\Big(\prod_{j=1}^n x_j\Big)^{s-1}\prod_{j=1}^n dx_j$$ where $n=[F:\Bbb{Q}]$, $$f(x_1,\ldots,x_n)=\sum_{a\in O_F\setminus\{0\}} \exp\Big(-\sum_{j=1}^n x_j |\sigma_j(a)|\Big)$$ and $$U = \{ (|\sigma_1(u)|,\ldots,|\sigma_n(u)|) : u\in O_F^\times\}$$
A more careful analysis shows that $\zeta_F(s)$ has a simple pole at $s=1$ which means that a positive density of the primes split completely in $F$. |
Ernst Zermelo's counterexample | This does not work.
Zermelo's problem is to find a set of reals $C$ such that for each real $r$, there is exactly one real $s\in C$ such that $r\sim s$ (that is, $r-s\in\mathbb{Q}$).
If we replace "exactly" with "at least," then this is easy: e.g. just take $C=\mathbb{R}$. Similarly, if we replace "exactly" with "at most," we can just take $C=\emptyset$ (:P).
The difficulty comes when we try to hit each rational equivalence class exactly once. Given a rational equivalence class $W$, we might hope that one of the following will work:
Pick the smallest positive element of $W$.
Pick the unique $r$ such that $W$ has the form $\{r+q: q\in\mathbb{Q}\}$.
The former doesn't work because each rational equivalence class is dense; the latter doesn't work since any $r\in W$ will have this property.
Finally, we might hope something like the following will work:
Pick the simplest element of $W$.
For instance, if $W$ is the rational equivalence class of $\sqrt{2}$, then clearly $\sqrt{2}$ is the simplest element in $W$. $17+\sqrt{2}$ is also in $W$, but is more complicated.
The problem is: what does "simplest" mean? Any attempt to pin this down will quickly run into trouble. If you play around with this for a while, you'll see what I mean . . . |
How to judge the rank relationship of matrix equation $AB=E$ | Clearly, we have $r(A) \leq m$. And we have $m = r(E) = dim(im(E)) \leq dim(im(A)) = r(A)$. Then $r(A) = m$.
We have $B^T A^T = (AB)^T = E^T = E$. Then by the same reasoning, $r(B^T) = m$. Then $r(B) = m$. |
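A concrete instance (my own sketch, with exact rank computation by row reduction): a $2\times3$ matrix $A$ and $3\times2$ matrix $B$ with $AB=E$, both of rank $m=2$.

```python
from fractions import Fraction as F

def rank(M):
    """Rank by Gaussian elimination with exact Fraction arithmetic."""
    M = [[F(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                factor = M[i][c] / M[r][c]
                M[i] = [a - factor * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# A is 2x3, B is 3x2, AB = E (the 2x2 identity), so r(A) = r(B) = 2.
A = [[1, 0, 3], [0, 1, 5]]
B = [[1, 0], [0, 1], [0, 0]]
print(matmul(A, B), rank(A), rank(B))
```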
Handling variable process iterations with some probability of failure | The expected time $\tau$ to get from one checkpoint to the next satisfies
$$
\tau=p^mmt+\sum_{k=0}^{m-1}p^k(1-p)\left((k+1)t+\tau\right)=p^mmt+\frac{mp^{m+1}-(m+1)p^m+1}{1-p}t+\left(1-p^m\right)\tau\;,
$$
and thus
$$
\tau=-\frac{1-p^{-m}}{1-p}t\;.
$$
Assuming $m\mid n$, that makes the total cost
$$
\frac nm\left(c-\frac{1-p^{-m}}{1-p}t\right)\;.
$$
Setting the derivative with respect to $m$ to $0$ yields a transcendental equation for $m$ that you'd need to solve to in order to optimize with respect to $m$. |
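A numeric sketch of the above (my own; the parameter values are arbitrary): the closed form for $\tau$ agrees with solving the defining equation directly, and the total cost can be minimized over the divisors $m$ of $n$ by brute force instead of via the transcendental equation.

```python
def tau_closed(p, m, t):
    # tau = -(1 - p^{-m}) / (1 - p) * t
    return -(1 - p**-m) / (1 - p) * t

def tau_direct(p, m, t):
    # from tau = p^m m t + sum_{k<m} p^k (1-p)((k+1)t + tau):
    # the tau terms collect into (1 - p^m) tau, so tau = work / p^m.
    work = p**m * m * t + sum(p**k * (1 - p) * (k + 1) * t
                              for k in range(m))
    return work / p**m

def total_cost(p, m, t, c, n):
    # (n/m)(c + tau), i.e. checkpoint cost plus expected time per block
    return (n / m) * (c + tau_closed(p, m, t))

p, t, c, n = 0.99, 1.0, 5.0, 120
assert all(abs(tau_closed(p, m, t) - tau_direct(p, m, t)) < 1e-9
           for m in range(1, 41))

divisors = [m for m in range(1, n + 1) if n % m == 0]
best_m = min(divisors, key=lambda m: total_cost(p, m, t, c, n))
print(best_m, total_cost(p, best_m, t, c, n))
```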
A query related to number theoretic problem | An explanation to the OP's answer
Each $a_i$ appears in exactly $4$ terms in the sum. Call these four terms $t_{i1},t_{i2},t_{i3},t_{i4}$ and define $S_i=S-\displaystyle\sum_{j=1}^4 t_{ij}$. If we change $a_i$ to $-a_i$ keeping the rest $a_j$ unchanged, all $t_{ij},1\le j\le4$ will also change sign while $S_i$ remains unaffected as it doesn't contain $a_i$. We will have a new sum, $S'$, given by$$S'=S_i-\sum_{j=1}^4t_{ij}=S-2\sum_{j=1}^4t_{ij}$$Now all you need to show is that $2\mid\displaystyle\sum_{j=1}^4t_{ij}$. This is true, as we can either have all $t_{ij}$'s equal to $\pm1\left(\displaystyle\sum_{j=1}^4t_{ij}=\pm4\right)$, or three of them $\pm1$ and the fourth $\mp1\left(\displaystyle\sum_{j=1}^4t_{ij}=\pm2\right)$, or two $\pm1$ and the other two $\mp1\left(\displaystyle\sum_{j=1}^4t_{ij}=0\right)$. This means $S'\equiv S\mod4$.
Now, if we progressively change all negative $a_i$ to $-a_i$, ultimately each term of the sum will become $1$, and the final sum $S'=\underbrace{1+1\cdot\cdot\cdot+1}_{\text{n times}}=n\equiv S=0\mod 4$. Thus, $n\equiv0\mod4$. |
how to prove $\prod_{k=1}^{p-1} \sin(\frac{\pi k}{p}) = \frac{p}{2^{p-1}}$? | Kindly see this article page number $5$. |
Prove $k\vec v = \vec 0$ if and only if $k=0$ or $\vec v = 0$ | OP seems to have lost interest in this question, but I'll go ahead and fill in at least one answer.
I don't know what the referenced set of 10 axioms were, but here is one conventional set of axioms for a vector space $V$ over a field $K$. $V$ must have a binary operation $+\colon V \times V \to V$, and a map $\cdot\colon K \times V \to V$, satisfying:
$V$ is closed under addition: for all $\vec v$ and $\vec w$ in $V$, $\vec v + \vec w \in V$.
Vector addition is associative: or all $\vec u$, $\vec v$, and $\vec w$ in $V$, $(\vec u + \vec v) + \vec w = \vec u + (\vec v + \vec w)$.
An additive identity exists: there is a vector $\vec 0$ in $V$ with the property that $\vec v + \vec 0 = \vec v$ for all $\vec v \in V$.
Additive inverses exist: for all $\vec v\in V$ there exists a unique $\vec w \in V$ such that $\vec v+\vec w = \vec 0$. (We usually write $-\vec v$ for this $\vec w$.)
Vector addition is commutative: For all $\vec v,\vec w \in V$, $\vec v + \vec w = \vec w + \vec v$.
$V$ is closed under scalar multiplication: For all $k\in K$ and $\vec v \in V$, $k\vec v \in V$.
scalar multiplication distributes over vector addition: for all $k \in K$ and $\vec v$, $\vec w\in V$, $k(\vec v + \vec w) = k\vec v + k\vec w$.
scalar multiplication distributes over scalar addition: for all $k, k'\in K$ and $\vec v\in V$, $(k + k')\vec v = k\vec v + k' \vec v$.
scalar multiplication is associative: for all $k, k'\in K$ and $\vec v\in V$, $k(k'\vec v) = (kk')\vec v$.
The multiplicative identity in $K$ acts as the identity on $V$: For all $\vec v \in V$, $1\vec v = \vec v$.
If these are the 10, then the statement
For all $k\in K$ and $\vec v \in V$, $k\vec v = \vec 0$ iff $k=0$ or $\vec v=\vec 0$.
can be proven quickly.
Proof ($\Leftarrow$): First, we show $k \vec 0 = \vec 0$ for all $k\in K$:
$$
k\vec 0 = k(\vec 0+\vec 0) = k\vec 0 + k \vec 0
$$
by axiom 7. By axiom 4, we can add the inverse of $k\vec 0$ to each side, and end up with $\vec 0 = k \vec 0$. Similarly, if $v\in V$, then
$$
0 \vec v = (0+0)\vec v = 0\vec v + 0 \vec v \implies 0\vec v = \vec 0
$$
by axioms 8 and 4. So if either $k=0$ or $\vec v=\vec 0$, then $k\vec v = \vec 0$.
($\Rightarrow$): Suppose $k\vec v=\vec 0$ where $k\neq 0$. Since $K$ is a field, $k$ has a multiplicative inverse $k^{-1}$ such that $k^{-1}k = 1$. Therefore
$$
\vec 0 = k^{-1}\vec 0 = k^{-1} (k \vec v) = (k^{-1}k)\vec v = 1 \vec v = \vec v
$$
by axioms 9 and 10. So $\vec v = \vec 0$. Therefore, if $k\vec v=\vec 0$, either $k=0$ or $\vec v = \vec 0$.
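The statement can also be brute-force checked in a small vector space over a finite field, e.g. $GF(5)^2$ over $GF(5)$ (my own illustration; the scalar action is multiplication mod $5$):

```python
# Brute-force check of "k*v = 0  iff  k = 0 or v = 0" in the
# vector space GF(5)^2 over the field GF(5).
P = 5
field = range(P)
vectors = [(a, b) for a in range(P) for b in range(P)]
zero = (0, 0)

def smul(k, v):
    # scalar multiplication, componentwise mod P
    return ((k * v[0]) % P, (k * v[1]) % P)

ok = all(
    (smul(k, v) == zero) == (k == 0 or v == zero)
    for k in field for v in vectors
)
print(ok)  # True
```

This works precisely because $GF(5)$ is a field: every nonzero scalar is invertible, mirroring the $k^{-1}$ step in the proof.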
How many elements must be selected from the set $S$ to ensure at least two elements have a sum of 110? | Your method doesn't work, you've correctly identified that there are 12 pairs that could sum to 110. What you need to do then is identify the minimum number of elements you need to select to guarantee you have both elements of a pair. You identified that there are $\{7, 103\}, \{11, 99\}, \cdots, \{51, 59\}$ as 12 pairs, as well as $\{55\}$ and $\{3\}$ that cannot fit into any pair. Thus we have 14 sets overall. If we picked 14 elements, it could be the case that we picked $\{ 7, 11, \cdots, 51, 3, 55\}$, none of which can form a pair that sums to 110. Adding a 15th element guarantees it will be in one of those pairs, by the pigeonhole principle. The question asks: how many elements must be selected to guarantee there is any pair that sums to 110, so it doesn't matter which specific one it is. |
Notion of surjective functions | As you said, from the definition of surjective function, se say $f: A\rightarrow B$ is surjective ($f$ is onto B$)$ if $\forall y \in B\, \exists x \in A(f(x)=y)$.
So this means that if you want to prove that a given function $f$ is surjective then you must show that given $y \in B$ there exists $x \in A$ such that $f(x)=y$. You don't need to show that some kind of property holds for all $a$ in $A$ here. Instead, you need to show that a property holds for all $y \in B$: the property $\exists x \in A(f(x)=y)$. So you must fix an arbitrary element $y \in B$ and then show that there exists some element of $A$ such that $f(x)=y$. |
Generating Function and Mean | I presume that in $$G(z) = a_0 + a_1 z + a_2 z^2 + \dots,$$
$a_k$ represents the probability that a certain discrete random variable (call it $X$) takes the value $k$, for each $k$.
For the mean $E[X]$ defined as $E[X] = a_0(0) + a_1(1) + a_2(2) + a_3(3) + \dots$, we can differentiate to get $$G'(z) = a_1 + 2a_2z + 3a_3z^2 + 4a_4z^3 + \dots,$$
whose value at $z = 1$ is exactly the mean we want.
For the mean $E[X^2]$ defined as $E[X^2] = a_0(0^2) + a_1(1^2) + a_2(2^2) + a_3(3^2) + \dots$ (which is not what you wrote, but what I presume you actually want), you can multiply the above $G'(z)$ by $z$, and differentiate again:
$$zG'(z) = a_1z + 2a_2z^2 + 3a_3z^3 + 4a_4z^4\dots,$$
so
$$(zG'(z))' = a_1 + 2^2 a_2z + 3^2 a_3z^2 + 4^2a_4z^3 + \dots.$$
The left-hand side is $G'(z) + zG''(z)$, and setting $z = 1$ in this (namely $G'(1) + G''(1)$) gives the value of $E[X^2]$. |
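Both formulas can be checked on a concrete distribution, say a fair six-sided die (my own example, not from the question), by manipulating the coefficient list of $G(z)$ directly:

```python
from fractions import Fraction as F

# G(z) for a fair six-sided die: a_k = 1/6 for k = 1..6.
a = [F(0)] + [F(1, 6)] * 6            # a[k] = P(X = k)

def deriv(coeffs):
    """Coefficient list of the derivative of a polynomial."""
    return [k * c for k, c in enumerate(coeffs)][1:]

def eval_at(coeffs, z):
    return sum(c * z**k for k, c in enumerate(coeffs))

def shift_up(coeffs):
    """Coefficient list of z * p(z)."""
    return [F(0)] + coeffs

g1 = deriv(a)                   # G'(z)
mean = eval_at(g1, 1)           # E[X] = G'(1)
g2 = deriv(shift_up(g1))        # (z G'(z))'
second_moment = eval_at(g2, 1)  # E[X^2] = G'(1) + G''(1)

print(mean, second_moment)      # 7/2 and 91/6
```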
Poisson distribution | I assume that the following scenarios are counted:
two customers in 1pm to 2pm slot, none in the other slot
no customers arrive in 1pm to 2pm slot, two arrive in 3pm to 4pm slot
one customer in 1pm to 2pm slot, one customer in the other slot
These scenarios are mutually exclusive, so the probability is:
$$\begin{align}
\Pr(\text{exactly two})&=\frac{\Gamma^2 e^{-\Gamma}}{2!}\frac{\Gamma^0 e^{-\Gamma}}{0!}+\frac{\Gamma^0 e^{-\Gamma}}{0!}\frac{\Gamma^2 e^{-\Gamma}}{2!}+\frac{\Gamma^1 e^{-\Gamma}}{1!}\frac{\Gamma^1 e^{-\Gamma}}{1!} \\[2ex]
&=(\tfrac{1}{2}+\tfrac{1}{2}+1)\Gamma^2 e^{-2\Gamma} \\[2ex]
&=2\Gamma^2 e^{-2\Gamma} \\[2ex]
&=2\times7^2 e^{-14}
\end{align}$$
because there are $\Gamma=7$ expected arrivals per hour.
If my interpretation of the question is correct, your answer is off by a factor of 2.
Since Poissonian events are independent of each other and do not depend on when timing intervals start, the split periods can be treated as a single continuous two hour period, as André Nicolas does. |
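That final remark can be checked numerically (my own sketch): summing over the three mutually exclusive splits gives exactly the single two-hour Poisson probability with rate $2\Gamma = 14$.

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    return lam**k * exp(-lam) / factorial(k)

lam = 7.0   # expected arrivals per hour

# Sum over the three mutually exclusive (j, 2 - j) splits:
split = sum(poisson_pmf(j, lam) * poisson_pmf(2 - j, lam) for j in range(3))

# Treating the two disjoint hours as one 2-hour window (rate 14):
combined = poisson_pmf(2, 2 * lam)

print(split, combined)   # both equal 2 * 7**2 * exp(-14)
```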
$T(v_1), \ldots,T(v_k)$ are independent if and only if $\operatorname{span}(v_1,\ldots,v_k)\cap \ker(T)=\{0\}$ | Let $v=c_1v_1+\cdots+c_nv_n\in\mathrm{ker}~T$, noting that also $v\in\mathrm{span}~\{v_1,\ldots,v_n\}$, then $$T(v)=T(c_1v_1+\cdots+c_nv_n)=c_1T(v_1)+\cdots+c_nT(v_n)=0$$
implies $c_1=\cdots=c_n=0$ since $T(v_1),\ldots,T(v_n)$ are linearly independent. It follows that $\mathrm{span}~\{v_1,\ldots,v_n\}\bigcap\mathrm{ker}~T=\{0\}$. For the converse, if $c_1T(v_1)+\cdots+c_nT(v_n)=0$, then the same equation shows $v=c_1v_1+\cdots+c_nv_n\in\mathrm{span}~\{v_1,\ldots,v_n\}\bigcap\mathrm{ker}~T=\{0\}$, so $v=0$ and hence $c_1=\cdots=c_n=0$ by the linear independence of $v_1,\ldots,v_n$.
Trouble showing the addition of the fractional parts are a group. | Note that in your answer,
$$[[x+y]+z]$$
does not make sense. To work it out you would have to begin with
$$[x+y]+z\ ,$$
which is a set plus an individual number, which cannot be done. The associative law states that
$$(\,[x]+[y]\,)+[z]=[x]+(\,[y]+[z]\,)\ :$$
if you carefully use the definition of addition, and make sure you do the steps in the right order, you should find this is not too hard to prove.
Re: well-defined, let's start with an example of something which is not well defined. Suppose you try to order your equivalence classes by defining
$$[x]>[y]\quad\hbox{if and only if}\quad x>y\ .$$
Then we have, for example,
$$[0.5]>[0.2]\quad\hbox{and}\quad [0.5]<[1.2]\ .$$
But this does not make sense, because the equivalence class $[1.2]$ is the same as $[0.2]$, so we have said that it is both less than and greater than $[0.5]$. We say that the order specified here is not well-defined: the point is that since $[0.2]=[1.2]$, anything sensible that we say about this class cannot depend on the specific numbers $0.2$ or $1.2$.
Looking at addition, we have for example
$$[0.2]+[0.5]=[0.7]\ .$$
If we calculate $[1.2]+[0.5]$ it is really the same sum, so we should get the same answer. And indeed,
$$[1.2]+[0.5]=[1.7]=[0.7]\ .$$
What you have to prove is that addition always works "properly" in this respect, that is,
$$\hbox{if}\quad [x_1]=[x_2]\ \hbox{and}\ [y_1]=[y_2]\ ,\quad\hbox{then}\quad
[x_1+y_1]=[x_2+y_2]\ .$$
Good luck! |
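A small numerical illustration of well-definedness and associativity, using exact rationals (my own sketch; `frac` picks the canonical representative of each class in $[0,1)$):

```python
from fractions import Fraction as F
from math import floor

def frac(x):
    """Canonical representative of [x] in [0, 1)."""
    return x - floor(x)

def add_classes(x, y):
    # [x] + [y] := [x + y], computed on canonical representatives
    return frac(frac(x) + frac(y))

# Well-defined: different representatives give the same sum.
assert frac(F(2, 10)) == frac(F(12, 10))          # [0.2] = [1.2]
assert add_classes(F(2, 10), F(5, 10)) == add_classes(F(12, 10), F(5, 10))

# Associativity on a sample triple.
lhs = add_classes(add_classes(F(1, 2), F(2, 3)), F(3, 4))
rhs = add_classes(F(1, 2), add_classes(F(2, 3), F(3, 4)))
print(lhs, rhs)   # 11/12 11/12
```

Of course a finite check is only an illustration; the proof itself must handle arbitrary representatives as described above.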
Is the measure an open ball always nonzero? | First, there is no notion of "ball" in an arbitrary measure space.
Second, if $X$ is a metric space with a measure $\mu$, yes it is possible for a ball to have zero measure even in $\mathbb R^n$. For instance in $\mathbb R$ you could define $\mu(E) = \lambda(E \cap [0,1])$ where $\lambda$ is Lebesgue measure. |
Is there a choice homomorphism? | $\mathbb{R}/\mathbb{Q}$ is a $\mathbb{Q}$-vector space. Choose a $\mathbb{Q}$-basis for $\mathbb{R}/\mathbb{Q}$, and pick a preimage for each basis vector in $\mathbb{R}$. By mapping each basis vector to the chosen preimage, you have successfully constructed a $\mathbb{Q}$-linear map $\mathbb{R}/\mathbb{Q} \to \mathbb{R}$ that splits the quotient map. This is in particular an additive homomorphism.
More generally, any short exact sequence $$0 \to U \to V \to W \to 0 $$ of vector spaces splits. |
Equality of two analytic functions on $\overline{\mathbb{D}}.$ | Here is the approach using maximum modulus principle.
Clearly $f$ and $g$ have the same zeros with the same multiplicities in $\overline{\mathbb{D}}$. For if not, their moduli could not be equal at (and near) a zero of one of them.
Since $f$ and $g$ have same zeros, $f/g$ is analytic in $\overline{\mathbb{D}}$. So $|f/g|$ reaches maximum on $\partial\overline{\mathbb{D}}$ ($\partial\overline{\mathbb{D}}$ is boundary of $\overline{\mathbb{D}}$) by maximum modulus principle. Likewise $g/f$ is analytic in $\overline{\mathbb{D}}$ and reaches maximum on $\partial\overline{\mathbb{D}}$.
So $|f/g|$ reaches its maximum and minimum on $\partial\overline{\mathbb{D}}$. If $|f(z)|=|g(z)|=0$ on $\partial\overline{\mathbb{D}}$, then $f(z)=g(z)=0$ on $\partial\overline{\mathbb{D}}$, contradicting $f(z_0)=g(z_0)\not=0\,\,\,\text{for some}\,\,\ z_0\in\mathbb{D}$. So $|f/g|=1$ on $\partial\overline{\mathbb{D}}$ and $f(z)=g(z)$ in entire $\overline{\mathbb{D}}$.
Expectation of Chi square distribution | $$\chi_{8}^2=Gamma(4;1/2)$$
where $1/2$ is the rate parameter. Now it is not difficult to calculate any moment you want by solving the integrals.
Alternative method:
Given that $Y\sim \chi_8^2$,
$$Y=Z_1^2+\dots +Z_8^2$$
where the $Z_i$ are i.i.d. standard Gaussian. The moments of the standard Gaussian are well known, thus...
$$\mathbb{E}[Y]=\underbrace{1+1+\dots +1}_{\text{8 times}}=8$$
$$\mathbb{V}[Y]=8\Bigg[\frac{4!}{2^2\cdot 2!}-1\Bigg]=16$$ |
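A quick cross-check of both computations (my own sketch), using the Gamma parameterization (shape $4$, rate $1/2$) and the sum-of-squares decomposition:

```python
from math import factorial

# Via the Gamma(shape=4, rate=1/2) form of chi^2_8:
# mean = shape/rate, variance = shape/rate^2.
shape, rate = 4, 0.5
mean_gamma = shape / rate          # 8.0
var_gamma = shape / rate**2        # 16.0

# Via Y = Z_1^2 + ... + Z_8^2 with Z_i standard normal:
# E[Z^2] = 1 and E[Z^4] = 4!/(2^2 * 2!) = 3, so Var(Z^2) = 3 - 1 = 2.
k = 8
ez4 = factorial(4) // (2**2 * factorial(2))
mean_sum = k * 1
var_sum = k * (ez4 - 1)

print(mean_gamma, var_gamma, mean_sum, var_sum)   # 8.0 16.0 8 16
```

Both routes give $\mathbb{E}[Y]=8$ and $\mathbb{V}[Y]=16$, matching the general $\chi^2_k$ facts $\mathbb{E}[Y]=k$ and $\mathbb{V}[Y]=2k$.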