Some Sum of Squares Inequality | This is not true. Suppose $\sigma_{(m,1)}=1$ for every $m$ and $\sigma_{(m,k)}=\epsilon>0$ for every $k>1$. Then
\begin{aligned}
A&=\sum_{k=1}^{K}\sum_{m=L+1}^{NL}\sigma_{(m,k)}^2
=(N-1)L\left[1+(K-1)\epsilon^2\right],\\
B&=\frac{1}{LK}\left(\sum_{k=1}^K\sum_{m=1}^{NL}\sigma_{(m,k)}\right)^2
=\frac{N^2L}{K}\left[1+(K-1)\epsilon\right]^2.
\end{aligned}
Therefore $A>B$ when $\epsilon$ is sufficiently small and $(N-1)K>N^2$.
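As a quick numerical sanity check, here is a small script with hypothetical values $N=2$, $K=5$, $L=3$, $\epsilon=0.01$ (chosen so that $(N-1)K>N^2$):
```python
# Evaluate A and B from the formulas above for one concrete choice of sigma.
N, K, L, eps = 2, 5, 3, 0.01
sigma = {(m, k): 1.0 if k == 1 else eps
         for m in range(1, N * L + 1) for k in range(1, K + 1)}
A = sum(sigma[(m, k)] ** 2
        for k in range(1, K + 1) for m in range(L + 1, N * L + 1))
B = sum(sigma.values()) ** 2 / (L * K)
print(A, B, A > B)  # 3.0012 2.59584 True
```
|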
Prime ideals of $M_{n,n}(R)$ | The claim is false.
Let $I$ be an ideal of $R$.
The kernel of the ring homomorphism $M_{n,n}(R)\to M_{n,n}(R/I)$ (component-wise) is $M_{n,n}(I)$ so that we have $M_{n,n}(R)/M_{n,n}(I)\cong M_{n,n}(R/I)$.
Note that for $n>1$ and $I\ne R$ the latter will have non-trivial zero divisors such as $\begin{pmatrix}0&a\\0&0\end{pmatrix}$, even if $R/I$ has no zero-divisors!
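A one-line check makes this concrete: for $a\neq 0$,
$$\begin{pmatrix}0&a\\0&0\end{pmatrix}\begin{pmatrix}0&a\\0&0\end{pmatrix}=\begin{pmatrix}0&0\\0&0\end{pmatrix},$$
so the matrix is a nonzero element whose square is zero.
|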
Operator Norm and Hilbert-Schmidt | The operator norm is defined as $\sup\left\{\frac{\lVert Tx\rVert}{\lVert x\rVert} : x\in H, x\neq 0\right\}$. Since by hypothesis each term of the set $\left\{\frac{\lVert Tx\rVert}{\lVert x\rVert} : x\in H, x\neq 0\right\}$ is $\leq \lVert T\rVert_{HS}$, we get what we want.
Of course, the most difficult part is to prove that $\lVert Tx\rVert\leq \lVert x\rVert\cdot \lVert T\rVert_{HS}$, which needs the Bessel-Parseval inequality. |
Opposite of an undercategory | Not quite: you have to switch to $D$ over $*$. Given any $X$ in a category $C$, $(X/C)^{op}$ has as objects the maps $X\to c$ in $C$, and as morphisms $(X\to c)\to(X\to c')$ the maps
$c'\to c$ in $C$ making the triangle commute in $C$. Alternatively, write the objects as maps $c\to X$ in $C^{op}$ and similarly for the morphisms, and you have an isomorphism $(X/C)^{op}\cong C^{op}/X$. |
Converse of Lagrange's theorem on the group other than symmetric group $S_{n}$ and Alternating group $A_{n}$ | Consider $G=H/Z(H)$ where $H=SL(2,\Bbb Z_7)$. This group is a simple group of order $168$. So this group has no subgroup of order $84$, even though $84$ divides $168$: such a subgroup would have index $2$, hence be normal, contradicting simplicity. |
If $\mathbb{S}$ is any non-empty set, why is $\varnothing - \mathbb{S} = \varnothing$ true? | There is no $x\in \varnothing$, so there is no $x$ such that $x\in \varnothing$ and $x\not\in \mathbb{S}$. |
Coordinate vector of a point relative to a basis | Personally I would skip the matrices and do a proof directly from the definitions of $f,g,h$.
Suppose $af + bg + ch = 0$ for some $a,b,c \in \mathbb{R}$. Then by definition of $f,g,h$ we have $a+b+c=0, a+b=0, a=0$ which implies that $a=b=c=0$. So $f,g,h$ are linearly independent.
Now suppose $k \in F(G)$ and $k(x)=a, k(y)=b, k(z)=c$ for some $a,b,c \in \mathbb{R}$. Then $k = cf + (b-c)g + (a-b)h$. So $f,g,h$ span $F(G)$.
We can also use the formula above to find the coordinates of the function $k$ in your original question. |
Solving a first order quasilinear PDEs using the idea of characteristics | I'm not completely sure, but it seems that
$u_t=1+(t-u-u_t t)f'(x+\frac{1}{2}t^2-ut)$
and
$u_x=(1 - u_x t)f'(x+\frac{1}{2}t^2-ut)$,
so that
$u_t+uu_x = 1 + (1-u_t -uu_x)tf'(x+\frac{1}{2}t^2-ut)$,
hence
$(u_t+uu_x-1)[1+tf'(x+\frac{1}{2}t^2-ut)]=0$.
Since $f$ is an arbitrary function, the equation above implies $u_t+uu_x-1=0$. |
Binomial coefficient real life example. | The issue is that the trials are not independent from each other. Having chosen a junior first, for example, changes the probability of now choosing a senior.
You must look at how many ways are there to choose two seniors (which is exactly $\binom{20}{2}$) and how many ways are there to choose six juniors (which is exactly $\binom{15}{6}$) and multiply them. This gives you the overall number of valid arrangements.
To get the probability, simply divide by the number of overall arrangements possible (which is $\binom{35}{8}$).
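If you want to check the arithmetic, Python's math.comb does it in one line (the $20$ seniors and $15$ juniors here are read off from the binomial coefficients above):
```python
from math import comb

# Favorable outcomes: 2 of the 20 seniors and 6 of the 15 juniors,
# out of all ways to choose 8 people from the 35.
p = comb(20, 2) * comb(15, 6) / comb(35, 8)
print(p)  # ~0.0404
```
|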
minimal projections in matrix-algebras | Here is the answer again: $p$ isn't minimal in $M_4(\mathbb{C})$. Consider $q= \begin{pmatrix} 1 & 0&0&0 \\ 0 & 0&0&0\\0 & 0&0&0\\ 0 & 0&0&0 \end{pmatrix}$. We have $q=q^*=q^2$ and $0\le q\le p$, but $p\neq q$, because $q$ has a smaller image than $p$. |
Prove that every prime number greater than 3 is either one more or one less than a multiple of $6$ | Doing a proof by contrapositive isn't the same as doing a proof by contradiction. Here you're supposed to assume that a given integer $x$ is not $\equiv 1\pmod 6$ and not $\equiv 5 \pmod 6$ and then conclude that $x$ is not a prime number. There are four cases to consider, and you have already spelled out the logic to take it from here.
In essence, because you're trying to prove the statement "if $x$ is prime then $x\equiv 1\pmod 6$ or $x\equiv 5 \pmod 6$,"
you can do so by proving that "if $x\not\equiv 1\pmod 6$ and $x\not\equiv 5\pmod 6$, then $x$ is not prime." Done directly, this should not lead you to absurd conclusions such as "there are no primes larger than 3." |
Transpose of a 3D Tensor | It looks like this is happening.
Suppose the tensor components are $a_{ijk}$ where $i$ tells you which matrix you're talking about, $j$ says which row you're talking about, and $k$ says which column you're talking about. The transpose you presented then has components $a_{kji}$, so transpose seems to reverse the indices, or at least swap the first and last indices.
Of course your $jk$ indices can be swapped without changing $a_{ijk}$, which means $a_{jki}$ is another possibility; you should have made the rows/columns in your choice of $a$ distinguishable so you can better see what the operation is doing. |
Transforming to polar coordinates | As said in the comments you cannot just "reverse" the partial derivative like this.
Instead you can actually calculate $\dfrac{\partial r}{\partial x}$ by also differentiating $\theta$ with respect to $x$ (while $y$ is fixed).
$\dfrac{\partial \theta}{\partial x}=\dfrac{-y}{x^2\left(1+\dfrac{y^2}{x^2}\right)}=\dfrac{-y}{x^2+y^2}=\dfrac{-\sin \theta}{r}$
$\begin{align}\dfrac{\partial r}{\partial x}
&=\dfrac{\partial}{\partial x}\left(\dfrac{x}{\cos \theta}\right)
=\dfrac 1{\cos \theta}+x\dfrac{\partial}{\partial \theta}\left(\dfrac 1{\cos \theta}\right)\dfrac{\partial \theta}{\partial x}
=\dfrac 1{\cos \theta}+x\left(\dfrac{\sin \theta}{\cos^2 \theta}\right)\left(\dfrac{-\sin \theta}{r}\right)\\\\
&=\dfrac 1{\cos \theta}+x\left(\dfrac{-\sin^2 \theta}{x\cos \theta}\right)
=\dfrac{1-\sin^2 \theta}{\cos \theta}=\cos \theta
\end{align}$
Now both results agree.
Yet you can notice that you still needed to express $\theta$ in terms of $x,y$ to get its proper derivative. Thus what you somehow tried to avoid for $r$ you had to do for $\theta$, so it was preferable to go for your second calculation right away. |
$\theta: \mathcal {F}\rightarrow \mathcal {G}$ is an isomorphism iff $\theta_x:\mathcal {F}_x\to\mathcal {G}_x $ is an isomorphism for any $x $. | You can follow your nose and prove this directly. I've written a proof below which probably uses some form of the lemma you mentioned.
The stalk $F_x$ can be thought of as the group of equivalence classes $(t,V)$, where $V$ is an (open) neighborhood of $x$ and $t \in F(V)$. We have $(t_1, V_1) = (t_2,V_2)$ if and only if there exists an open neighborhood $W$ of $x$, contained in $V_1 \cap V_2$, such that $t_1|_W = t_2|_W$. If $(t,V) \in F_x$, then $\theta_x(t,V)$ is defined to be $(\theta(V)t,V)$. This is well defined.
Assume $F_x \rightarrow G_x$ is an isomorphism for all $x$. It's not difficult to show that each map $F(W) \rightarrow G(W)$ is injective. Let's show that that each map $\theta(U):F(U) \rightarrow G(U)$ is surjective. Let $s \in G(U)$. For each $x \in U$, let $s(x)$ be the image of $s$ in $G_x$. Now $\theta_x$ is surjective, so there exists a neighborhood $V_x$ of $x$ and an element $(t_x,V_x) \in F_x$ (for $t_x \in F(V_x)$) such that $\theta_x(t_x,V_x) = s(x)$. Using the definition of $\theta_x$, we can choose $V_x$ so that it is contained in $U$ and so that $\theta(V_x)t_x = s|_{V_x}$.
I claim that $t_x$ and $t_y$ agree on $W := V_x \cap V_y$ for all $x, y \in U$. Indeed, $\theta(W)$ is injective, and maps $t_x|_W - t_y|_W$ to $s|_W - s|_W = 0$. Since $\{V_x : x \in U\}$ forms an open cover of $U$, the sheaf axiom tells us that there exists a $t \in F(U)$ such that $t|_{V_x} = t_x$ for all $x \in U$.
We now have $\theta(U)t = s$, because they agree on the open cover $\{V_x\}$ of $U$. |
Problem about a sequence of independent random variables with mean $0$ and variance $1$. | If you are familiar with $L^{2}$ space and expansions w.r.t. orthonormal sets then this result is immediate. The given sequence is orthonormal in $L^{2}$ so $EX_nY \to 0$ for every $Y \in L^{2}$, in particular for bounded $Y$. In fact much more is true: $\sum_n (EX_nY)^{2} \leq EY^{2} <\infty$. |
Linear recurrent sequences and matrices. | This should be the same as the linear systems we often get, just $y_n = A^n x.$ Here the matrix $A$ is square, of the size you are calling $d.$
The Cayley-Hamilton Theorem says that $A$ satisfies a polynomial of degree no larger than $d.$ The same equation is satisfied by the column vectors $y_{k+d}, y_{k+d-1}, \ldots, y_k.$ That is why the $d$ entries of $y$ each obey the same linear recursion.
Finally, your $u_n$ obeys the same recursion; the coefficients are those of the characteristic polynomial of $A,$ or of the minimal polynomial if that has smaller degree.
Here is a recent example: How does one solve this recurrence relation?
I have answered many questions the same way... |
Optimize stocks for the investment | I think you have posed the knapsack problem:
The knapsack problem or rucksack problem is a problem in combinatorial
optimization: Given a set of items, each with a weight and a value,
determine the number of each item to include in a collection so that
the total weight is less than or equal to a given limit and the total
value is as large as possible. It derives its name from the problem
faced by someone who is constrained by a fixed-size knapsack and must
fill it with the most valuable items.
The problem is essentially difficult (in a well-defined technical sense). There
are clever computer programs that find the answer, but for long lists they are not as fast as you'd hope.
https://en.wikipedia.org/wiki/Knapsack_problem
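For modest instance sizes the standard dynamic-programming solution is fast enough; here is a minimal sketch (0/1 knapsack with integer weights, and the data below is purely hypothetical):
```python
def knapsack(values, weights, capacity):
    # dp[w] = best total value achievable with total weight <= w
    dp = [0] * (capacity + 1)
    for v, wt in zip(values, weights):
        # iterate weights downwards so each item is used at most once
        for w in range(capacity, wt - 1, -1):
            dp[w] = max(dp[w], dp[w - wt] + v)
    return dp[capacity]

# e.g. values = expected returns, weights = prices, capacity = budget
print(knapsack([60, 100, 120], [10, 20, 30], 50))  # 220
```
|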
Norm of powers of a maximal ideal (in residually finite rings) | Very much like the previous answers you got for a similar problem, I suggest you try to work out the case of $A=F[t^2,t^3]$ ($F$ a finite field, in which case $A$ is a residually finite integral domain) and $M=(t^2,t^3)$. You should run into trouble already for $k=2$. |
Can one define the $\omega$th term of a geometric sequence? | Short version: sort of but not very satisfyingly.
For $r,a,n$ possibly infinite cardinals, cardinal multiplication and cardinal exponentiation do indeed give meaning to the expression $r\cdot a^n$. However, fixing $r$ and $a$, as a function of $n$ this isn't very interesting since cardinal arithmetic is so "coarse." For example, as long as $a,r\le 2^{\aleph_0}$ (and $r>1$) we'll always have $a\cdot r^{\aleph_0}=2^{\aleph_0}$ since $$2^{\aleph_0}\cdot (2^{\aleph_0})^{\aleph_0}=2^{\aleph_0}\cdot 2^{\aleph_0\cdot \aleph_0}=2^{\aleph_0}.$$ So things get pretty boring.
Ordinal arithmetic is somewhat more interesting than cardinal arithmetic, but it still trivializes a lot in this case. In particular, for any finite $n$ we have $$n^\omega=\sup_{k\in\omega}n^k=\omega$$ in the sense of ordinal exponentiation.
This is an instance of a more general situation: "ordinary" mathematical operations often only extend into the transfinite at the cost of losing lots of the original nature. Cardinal exponentiation is a good positive example of this: it is incredibly interesting, but completely fails to look like ordinary exponentiation - and indeed the key ingredient which makes cardinal exponentiation interesting (namely cofinality) doesn't even make sense at the finite level. |
Definition and decidability of bounded quantifiers | Is this the correct definition of a bounded quantifier?
I would say you are talking about "boundable" existential quantifiers, rather than bounded quantifiers.
It is true that terms in Peano arithmetic are polynomials, so in the language of Peano arithmetic if we have a bounded quantifier $(\exists x < t(y)) R(x,y)$ then $t(y)$ is a polynomial in $y$.
You have defined what it means to be able to replace the existential quantifier in $(\forall y)(\exists x)R(x,y)$ with a bounded quantifier.
Has the defining sentence to be true or provable?
You will get two different notions depending on whether you want the defining sentence to be true or provable. This follows from the next answer.
Is the property of a sentence to have a bounded quantifier decidable?
No. Let $P(y)$ be any formula with one free variable $y$. Let $S(x,y)$ be a formula that is satisfied exactly when $x = 2^y$. Let $R(x,y)$ be
$$[x = 0 \land (\forall z < y) P(z)] \lor [S(x,y) \land (\exists z < y)\lnot P(z)].$$
Note that $(\forall y)(\exists x)R(x,y)$ holds, provably in PA. But the quantifier is boundable (by a polynomial) if and only if $(\forall y)P(y)$ holds. And the quantifier is provably boundable if and only if $(\forall y)P(y)$ is provable. In general, the latter two are not equivalent. |
Greens function method for Newtonian potential | The constraint at infinity is more powerful than a Cauchy condition. Think of $u''=0$ in $\mathbb{R}$. If you request that $\lim_{x \to \pm\infty} u(x)=0$, then $u=0$ is the only solution.
You should not confuse a Cauchy problem with a boundary value problem: their theories are rather different. |
Why is $\operatorname{SO}(p,q)$ isomorphic to $\operatorname{SO}(q,p)$? | Let $Q$ be an $n$-ary quadratic form over any field $F$ of characteristic different from $2$, represented by the symmetric matrix $A$: i.e.,
$Q(x) = x^T A x$. Then the orthogonal group of $Q$ is
$O(Q) = \{P \in \operatorname{GL}_n(F) \mid P^T A P = A\}$
and the special orthogonal group of $Q$ is
$\operatorname{SO}(Q) = \{P \in \operatorname{SL}_n(F) \mid P^T A P = A\}$.
Now let $\alpha$ be a nonzero element of $F$. Then for all $P \in \operatorname{GL}_n(F)$ (resp. in $\operatorname{SL}_n(F)$), if
$P^T A P = A$
then scaling both sides by $\alpha$ gives
$P^T (\alpha A) P = (\alpha A)$,
so $O(Q) \subset O(\alpha Q)$ (resp. $\operatorname{SO}(Q) \subset \operatorname{SO}(\alpha Q)$). Replacing $\alpha$ by $\alpha^{-1}$ shows that we have $O(Q) = O(\alpha Q)$ and $\operatorname{SO}(Q) = \operatorname{SO}(\alpha Q)$.
Notice that you have implicitly used that equivalent quadratic forms have conjugate (hence isomorphic) orthogonal groups. Actually that argument is (slightly) more involved than the one above. Combining them one sees that the conjugacy class of $O(Q)$ or $\operatorname{SO}(Q)$ depends only on the similarity class of the quadratic form. For nondegenerate quadratic forms over $\mathbb{R}$ this amounts precisely to your statement about switching the $p$ and the $q$. |
Probability convergence in distribution | For every $u\leqslant\frac12$, $\mathrm e^{-u-u^2}\leqslant1-u\leqslant\mathrm e^{-u}$ (can you prove this?). Thus, for every $n\geqslant2k$,
$$
\exp\left(-\frac{a_k}{n}-\frac{b_k}{n^2}\right)\leqslant\prod_{i=1}^k\left(1-\frac{i}n\right)\leqslant\exp\left(-\frac{a_k}n\right),
$$
where
$$
a_k=\sum_{i=1}^ki=\frac{k(k+1)}{2},\qquad b_k=\sum_{i=1}^ki^2\leqslant k^3.
$$
Let $x\gt0$. If $k=\lfloor x\sqrt{n}\rfloor$, then $n\geqslant2k$ for every $n$ large enough and $k\to\infty$ when $n\to\infty$ hence $a_k\sim\frac12k^2\sim\frac12x^2n$ and $b_k=o( n^2)$, thus the lower and upper bounds of the product both converge to $\mathrm e^{-x^2/2}$.
This shows (and I am rewriting this part of your post because there is a misprint in it) that
$$
P[X_n\gt x\sqrt{n}]\to\mathrm e^{-x^2/2},\qquad P[X_n\leqslant x\sqrt{n}]\to1-\mathrm e^{-x^2/2},
$$
hence $X_n/\sqrt{n}\to X$ in distribution, where the density of $X$ is the function $f_X$ defined by
$$
f_X(x)=x\mathrm e^{-x^2/2}\mathbf 1_{x\geqslant0}.
$$
Alternatively, $X_n^2/(2n)$ converges in distribution to a standard exponential random variable. |
Groups quasi-isometric to $\mathbb{Z}^n$ | This can be proved using the fact that the Hirsch length of a polycyclic group is a quasi-isometric invariant (this is a result of Bridson). Let's denote the Hirsch length of a group by $h(G)$.
Now Bass proved that a virtually nilpotent group $G$ has polynomial growth, and the degree of that growth is given by
$$ d(G) = \sum_i i\cdot\operatorname{rk}(G_i/G_{i+1}).$$
Here $G_i$ is the $i$-th term of the lower central series.
The converse is Gromov's polynomial growth theorem, that groups with polynomial growth are virtually nilpotent.
So now let's put all this together. Let $G$ be a finitely generated group quasi-isometric to $\mathbb{Z}^n$. Then by Gromov's theorem there is a subgroup of finite index $H\le G$, which is nilpotent (and so polycyclic).
Now it is easy to check that $h(\mathbb{Z}^n)=d(\mathbb{Z}^n)=n$. So $h(H)=d(H)=n$. But this implies (using Bass's formula) that $H'$ is finite. It is easy to then see that $[H:Z(H)]$ is finite (see below).
Now $Z(H)$ is an abelian group, also quasi-isometric to $\mathbb{Z}^n$. Thus, again by Bass's formula, $Z(H)\cong \mathbb{Z}^n\times T$, for some finite abelian group $T$.
But then $[G:\mathbb{Z}^n] = |T|\cdot[H:Z(H)]\cdot[G:H]$, which is finite.
Proof that $|H'|$ finite implies $[H:Z(H)]$ finite.
Let $H$ be generated by $h_1,h_2,\ldots,h_m$. Now the conjugates of $h_i$ are contained in the set $h_iH'$, which by assumption is finite. Thus $h_i$ has finitely many conjugates, and so $C_H(h_i)$ has finite index in $H$. But then $Z(H)=\cap C_H(h_i)$ is also finite index. |
Dice 10000 probability of not scoring | It's easier to directly count the number of ways where you don't get three of a kind using just $2,3,4,6$. There are ${4 \choose 3} \frac{6!}{(2!)^3} = 360$ ways to do it with three pairs and ${4 \choose 2} \frac{6!}{(2!)^2} = 1080$ ways to do it with two pairs and two separate rolls. So the final probability becomes $\frac{360+1080}{6^6} \approx 0.0309$.
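A brute-force check over all $6^6$ outcomes (assuming "not scoring" means every die shows $2,3,4$ or $6$ and no value occurs three or more times):
```python
from itertools import product
from collections import Counter

non_scoring = 0
for roll in product(range(1, 7), repeat=6):
    counts = Counter(roll)
    # no 1s or 5s, and no three-of-a-kind (or better)
    if counts[1] == 0 and counts[5] == 0 and max(counts.values()) < 3:
        non_scoring += 1

print(non_scoring, non_scoring / 6**6)  # 1440 0.0308641...
```
|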
What are good resources to learn Mathematics for a working professional? | Take a look at the lecture notes by William Chen, and the books by The Trillia Group. Rummaging around the 'net for lecture notes for first /second math courses at universities around the world should net a nice collection. And you will also find homeworks and exams with solutions, for practice.
In any case, do the math: It is almost never worth it to print out some book (many are available on-line free, possibly in form of older editions or preprints) if you can buy a nicely bound copy at your favorite on-line bookstore.
Oh, and stay around. You'll learn more by looking at the questions (and answers) flying around here than any class could teach you. Less structure, sure. But definitely more fun. |
Then $H/Z(H)$ is isomorphic to ______________? | Hint
Consider $\varphi :H\to \mathbb Q^2$ defined by $$\varphi \begin{pmatrix}1&a&b\\0&1&c\\0&0&1\end{pmatrix}=(a,c).$$
Tasks :
Check it's a group homomorphism.
Is it surjective ?
What is its kernel ?
Conclusion ? |
linear function on $\mathbb{C}^{3}$ | For any vector spaces $\mathbf{V}$ and $\mathbf{W}$, and for any linear transformation $f\colon\mathbf{V}\to\mathbf{W}$, if $\beta$ is a basis for $\mathbf{V}$, then $f(\beta) = \{f(\mathbf{v})\mid \mathbf{v}\in\beta\}$ spans $\mathrm{Im}(f)$. (In fact, this is true if $\beta$ is a spanning set for $\mathbf{V}$; it doesn't have to be a basis, but it may as well be, since any linear dependencies that may exist among elements of $\beta$ will necessarily exist among elements of $f(\beta)$ as well...)
In particular, for your $f$, taking $\beta$ to be the standard basis we have $f(1,0,0) = (1,2,2+i)$, $f(0,1,0)=(-1,i,0)$, and $f(0,0,1)=(i,0,-1)$ will necessarily span $\mathrm{Im}(f)$.
That is, $\mathrm{Im}(f) = \mathrm{span}\Bigl( (1,2,2+i), (-1,i,0), (i,0,-1)\Bigr)$.
Now, this is a spanning set for $\mathrm{Im}(f)$, not necessarily a basis for $\mathrm{Im}(f)$. But: every spanning set contains a basis. So all we have to do is go through the three vectors, discarding any vector that is a linear combination of the previous ones. Since we know (by the Rank-Nullity Theorem) that the image of $f$ has dimension $2$, noting that $(1,2,2+i)$ and $(-1,i,0)$ are linearly independent tells us that they are a basis for the image.
(Indeed, note that
$$(i,0,-1) = \left(-\frac{2}{5}+\frac{1}{5}i\right)(1,2,2+i) + \left(-\frac{2}{5}-\frac{4}{5}i\right)(-1,i,0)$$
so the third vector in the spanning set is already in the span of the first two.) |
Natural Algebraic Structures on the Set of Automorphisms of a Structure | The most fruitful structure to attach to $M$ that I'm aware of is that of a topological group. For any tuples of elements $\overline{a}$ and $\overline{b}$ from $M$ of the same length, let $$U_{\overline{a},\overline{b}} = \{\sigma\in \text{Aut}(M) \mid \sigma(\overline{a}) = \overline{b}\},$$ and take these sets $U_{\overline{a},\overline{b}}$ as a basis for the topology on $\text{Aut}(M)$. See section 4.1 of Hodges' (longer) Model Theory. You may also be interested in the exercises in this section.
In the case of an $\aleph_0$-categorical countable structure $M$, an amazing amount of information about $M$ is encoded in $\text{Aut}(M)$ as a topological group:
Note that $\text{Sym}(M)$, the group of all permutations of the underlying set of $M$, with its usual topology, is the automorphism group of the reduct of $M$ to the empty language. There is a one-to-one correspondence between reducts of $M$ (up to equivalence) and closed subgroups of $\text{Sym}(M)$ containing $\text{Aut}(M)$. Here a reduct of $M$ is a structure $N$ with the same underlying set as $M$ in which every function, relation, and constant on $N$ is definable in the language of $M$. Two reducts are equivalent if they're reducts of each other. This is essentially in Hodges.
If $\text{Aut}(M)\cong \text{Aut}(N)$ as topological groups, and $M$ and $N$ are $\aleph_0$-categorical and countable, then $M$ and $N$ are bi-interpretable. The standard reference for this is this paper by Ahlbrandt and Ziegler.
I believe the same results are true for general countable structures, if you're willing to let "definable" mean $L_{\omega_1,\omega}$-definable instead of first-order definable.
This is just scratching the surface of the work done on the relationship between $M$ and the topological group $\text{Aut}(M)$. Here's another theorem that occurs to me:
In his paper Unimodular Minimal Structures, Hrushovski showed that if $M$ is a strongly minimal structure, then for every finite-dimensional algebraically closed substructure $M'$ of $M$, $\text{Aut}(M')$ is locally compact, and unimodularity of $\text{Aut}(M')$ (its right- and left-invariant Haar measures agree) is equivalent to a model-theoretic condition on $M$ (also called unimodularity), which in turn implies that $M$ is locally modular (in particular it doesn't interpret an algebraically closed field). |
How do I prove $2^0 + 2^1 + 2^2 + 2^3 +\cdots + 2^{d-1} \le n - 1$ $\space$ if $\space$ $d = \lfloor \log_2 n \rfloor$? | Observe that the sum $\sum_{k=0}^{d-1} 2^k$ is a geometric series which has the closed form $(2^{d}-1)/(2-1) = 2^{d}-1$.
Because $d = \lfloor \log_2n \rfloor$, we have
$$d = \lfloor \log_2n \rfloor \leq \log_2n$$
by definition of the floor function. Raising $2$ to both sides of the inequality and subtracting $1$ thereafter, we deduce that
$$2^d - 1 \leq 2^{\log_2n} - 1 = n -1 $$
as desired.
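A quick empirical check of the inequality for small $n$:
```python
from math import floor, log2

for n in range(2, 1000):
    d = floor(log2(n))
    # 2^0 + 2^1 + ... + 2^(d-1) = 2^d - 1 <= n - 1
    assert sum(2 ** k for k in range(d)) <= n - 1
```
|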
If $\{a_{n}\} \to \pm \infty$, then show that $\liminf a_{n} =\limsup a_{n} = \pm \infty$ | Suppose $a_n\to+\infty$. You have that $\inf_{k\geq n}a_k$ is increasing in $n$. Since $a_n\to+\infty $, for every $M$ there is an $N$ such that $a_k>M$ for all $k\geq N$, and thus $\inf_{k\geq N}a_k\geq M$; therefore $\inf_{k\geq n}a_k$ is not bounded and thus $\liminf_{n\to\infty }a_n=+\infty $. To conclude that $\limsup_{n\to\infty }a_n=+\infty $, just remark that
$$\inf_{k\geq n}a_k\leq \sup_{k\geq n}a_k$$
for all $n$. |
Understanding the proof of Euler's formula | I'm also not sure what it is you don't understand; I'll try to explain the only thing I can see that might be confusing.
You ask how we can 'legally' consider a graph with fewer edges. Of course we can consider whatever we want, so I suppose you mean something like why we can consider it instead of the larger graph. In the equation $V+(F-1)-(E-1)=V+F-E=2$, the variables $V$, $F$ and $E$ are the vertex/face/edge counts of the larger graph with $n+1$ faces, and the part $V+(F-1)-(E-1)=2$ is the induction hypothesis applied to the smaller graph with the edge removed: It has $n$ faces, so by the induction hypothesis Euler's formula holds for it; since it has $V$ vertices, $F-1$ faces and $E-1$ edges, we have $V+(F-1)-(E-1)=2$. Now cancel the ones to obtain $V+F-E=2$. This is Euler's formula for the larger graph with $n+1$ faces, which is what we needed to prove to complete the induction. |
Is Gradient really the direction of steepest ascent? | What will help your intuition the most is remembering that the derivative (the gradient) is a local feature, it only depends on what the function is at that point, and not any distance away.
You may be visualizing a function which buckles down in the gradient direction, so it's not the steepest ascent some distance away -- but at the point where you find the tangent plane it is the steepest ascent for at least a very small distance.
At a point where a function is differentiable, the function is almost planar in a very, very small region around that point. Remember to visualize the local region as nearly a plane, and your intuition will be happier with the gradient. |
Finding $\lim x_n$ when $\left( 1+\frac{1}{n}\right)^{n+x_n}=1+\frac{1}{1!}+\frac{1}{2!}+\dots+\frac{1}{n!}$ | The limit is indeed $\frac{1}{2}$. Due to Taylor's formula with integral remainder,
$$ \sum_{k=0}^{n}\frac{1}{k!} = e-\int_{0}^{1}\frac{(1-t)^n}{n!}\,e^t\,dt=e\left(1+O\left(\tfrac{1}{(n+1)!}\right)\right)=\exp\left(1+O\left(\tfrac{1}{(n+1)!}\right)\right)\tag{1} $$
while
$$ \left(1+\frac{1}{n}\right)^{n+x}=\exp\left[(n+x)\left(\frac{1}{n}-\frac{1}{2n^2}+o\left(\frac{1}{n^2}\right)\right)\right]=\exp\left[1+\frac{x-\frac{1}{2}}{n}+o\left(\frac{1}{n}\right)\right]\tag{2} $$
so by equating the RHSs of $(1)$ and $(2)$ we get $\lim_{n\to +\infty}x_n=\frac{1}{2}$ as expected. |
The value of $\int_0^{2\pi} g(re^{i(\theta + \phi)}) \, d\phi$ is independent of $\theta$? | The mapping
$$
\varphi \mapsto re^{i(\theta+\varphi)}
$$
maps the line to the circle of radius $r$ centered at $0$, and wraps once around the circle every time $\varphi$ increases by $2\pi$. You go around the whole circle once. It doesn't matter at which point on the circle you start. The parameter $\theta$ only identifies the starting point. |
Trouble evaluating a limit | Applying L'Hospital once, we get
$$\lim_{x\to 2k}\frac{2x-2k}{-\pi\sin(\pi x)}$$
which is no longer a $\dfrac00$ indeterminate form, so you can't apply L'Hospital a second time. |
Prove that injection of 0 or 1 in strings of regular lang > regular lang | Well, take an accepting automaton A for the language $L$.
Then the accepting automaton $A'$ for the language $L'=INS(L)$ is given as follows:
1) Take the automaton $A$ and the starting state of $A$ is the starting state of $A'$.
2) Take the ending states of $A$ in $A'$ and extend them by transitions $\stackrel{0}{\rightarrow}$ and $\stackrel{1}{\rightarrow}$ that connect them to the starting state of another copy of $A$.
3) The ending states of the second copy of $A$ are the ending states of $A'$.
So the automaton $A'$ looks as follows
$$\rightarrow A \stackrel{0/1}{\rightarrow} A.$$
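Here is a minimal sketch of this two-copy construction, assuming $A$ is given as a nondeterministic automaton whose transitions are stored in a dict (all names here are hypothetical):
```python
def ins_automaton(delta, start, finals, insert_symbols=("0", "1")):
    # delta: dict mapping (state, symbol) -> set of successor states.
    # Build A' from two disjoint copies of A, tagged 1 and 2; a single
    # 0/1 transition jumps from each final state of copy 1 to the
    # start state of copy 2.
    new_delta = {}
    for (q, a), targets in delta.items():
        new_delta[((q, 1), a)] = {(t, 1) for t in targets}
        new_delta[((q, 2), a)] = {(t, 2) for t in targets}
    for f in finals:
        for c in insert_symbols:
            new_delta.setdefault(((f, 1), c), set()).add((start, 2))
    return new_delta, (start, 1), {(f, 2) for f in finals}
```
|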
Non-monotonic decrease of residuals in Conjugate Gradients: | The residual $\|Ax-b\|$ is not necessarily monotonically decreasing.
Check out Stephen Boyd's slides on the conjugate gradient method.
He shows an example where it increases (slide 22).
http://see.stanford.edu/materials/lsocoee364b/11-conj_grad_slides.pdf
However, the distance from the solution (in terms of the inner product defined by $A$)
$$\|x-x^{\star}\|_{A}^{2} = (x-x^{\star})^{T} A (x-x^{\star})$$
is monotonically decreasing. This implies that the objective $f(x) = x^{T} A x - 2 b^{T} x$ is also monotonically decreasing since
$$\|x-x^{\star}\|_{A}^{2} = x^{T} A x - 2 b^{T} x + \|x^{\star}\|_{A}^{2}$$
where $x^{\star}$ satisfies $A x^{\star} = b$.
Note the difference between the objective above and the squared residual
$$\|A x - b\|^2 = x^T A^T A x - 2 b^T A x + b^T b$$
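A small self-contained experiment (not from the slides) that runs plain CG and records both quantities; on an ill-conditioned SPD matrix the residual norm may oscillate while the $A$-norm error keeps decreasing:
```python
import numpy as np

def cg_history(A, b, iters):
    # Plain conjugate gradient, recording ||Ax-b|| and ||x-x*||_A.
    x_star = np.linalg.solve(A, b)
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    res, err = [], []
    for _ in range(iters):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p, r = r_new + beta * p, r_new
        res.append(np.linalg.norm(b - A @ x))
        e = x - x_star
        err.append(np.sqrt(e @ A @ e))
    return res, err

rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 1e-3 * np.eye(50)  # SPD and poorly conditioned
res, err = cg_history(A, rng.standard_normal(50), 30)
print(sum(res[i + 1] > res[i] for i in range(len(res) - 1)))  # often > 0
print(all(err[i + 1] <= err[i] * (1 + 1e-8)
          for i in range(len(err) - 1)))                      # True
```
|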
Why could integral of the both side of a differential equation be wrong | I don't see any mistakes in your reasoning. But, as logarithm alluded to in a comment, perhaps you were supposed to write h in terms of g only, without mentioning f. If so, then you've written a correct statement, but you haven't answered the question. |
Existence of a nonsingular submatrix | Because the corresponding $r\times r$ minor is non-zero. |
Number of relations that are both symmetric and reflexive | To be reflexive, it must include all pairs $(a,a)$ with $a\in A$. To be symmetric, whenever it includes a pair $(a,b)$, it must include the pair $(b,a)$. So it amounts to choosing which $2$-element subsets from $A$ will correspond to associated pairs. If you pick a subset $\{a,b\}$ with two elements, it corresponds to adding both $(a,b)$ and $(b,a)$ to your relation.
How many $2$-element subsets does $A$ have? Since $A$ has $n$ elements, it has exactly $\binom{n}{2}$ subsets of size $2$.
So now you want to pick a collection of $2$-element subsets. There are $\binom{n}{2}$ of them, and you can either pick or not pick each of them. So you have $2^{\binom{n}{2}}$ ways of picking the pairs of distinct elements that will be related.
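A brute-force check for small $n$ (feasible since there are only $2^{n^2}$ relations to scan):
```python
from itertools import product

def count_reflexive_symmetric(n):
    elems = range(n)
    pairs = [(a, b) for a in elems for b in elems]
    count = 0
    for bits in product([0, 1], repeat=len(pairs)):
        rel = {p for p, keep in zip(pairs, bits) if keep}
        if all((a, a) in rel for a in elems) \
           and all((b, a) in rel for (a, b) in rel):
            count += 1
    return count

print(count_reflexive_symmetric(3))  # 8 == 2**binom(3, 2)
```
|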
Show that the vertex lies on the surface $z^2(\frac{x}{a}+\frac{y}{b})=4(x^2+y^2)$ | The generator line of the first cone is $\frac{x-\alpha}{l_1}=\frac{y-\beta}{m_1}=\frac{z-\gamma}{n_1}.$
Setting $y=0$ gives
$$\frac{x-\alpha}{l_1}=\frac{0-\beta}{m_1}=\frac{z-\gamma}{n_1}\implies x=-\beta\frac{l_1}{m_1}+\alpha,\quad z=-\beta\frac{n_1}{m_1}+\gamma\tag1$$
Also,
$$\frac{l_1}{m_1}=\frac{x-\alpha}{y-\beta},\quad \frac{n_1}{m_1}=\frac{z-\gamma}{y-\beta}\tag2$$
From $(1)(2)$,
$$z^2=4ax\implies \left(-\beta\cdot \frac{z-\gamma}{y-\beta}+\gamma\right)^2=4a\left(-\beta\cdot\frac{x-\alpha}{y-\beta}+\alpha\right)\tag3$$
The generator line of the second cone is $\frac{x-\alpha}{l_2}=\frac{y-\beta}{m_2}=\frac{z-\gamma}{n_2}.$
Setting $x=0$ gives
$$\frac{0-\alpha}{l_2}=\frac{y-\beta}{m_2}=\frac{z-\gamma}{n_2}\implies y=-\alpha\frac{m_2}{l_2}+\beta,\quad z=-\alpha\frac{n_2}{l_2}+\gamma\tag4$$
Also,
$$\frac{m_2}{l_2}=\frac{y-\beta}{x-\alpha},\quad \frac{n_2}{l_2}=\frac{z-\gamma}{x-\alpha}\tag5$$
From $(4)(5)$,
$$z^2=4by\implies \left(-\alpha\cdot \frac{z-\gamma}{x-\alpha}+\gamma\right)^2=4b\left(-\alpha\cdot \frac{y-\beta}{x-\alpha}+\beta\right)\tag6$$
Setting $z=0$ in $(3)$ and $(6)$ gives
$$\left(-\beta\cdot \frac{0-\gamma}{y-\beta}+\gamma\right)^2=4a\left(-\beta\cdot\frac{x-\alpha}{y-\beta}+\alpha\right)\implies y^2\gamma^2=4a(\alpha y-\beta x)(y-\beta)$$
$$\implies b\alpha (\gamma^2-4a\alpha) y^2+4ab\alpha \beta xy+4ab\alpha^2\beta y-4ab\alpha \beta^2 x=0\tag7$$
$$\left(-\alpha\cdot \frac{0-\gamma}{x-\alpha}+\gamma\right)^2=4b\left(-\alpha\cdot \frac{y-\beta}{x-\alpha}+\beta\right)\implies x^2\gamma^2=4b(x\beta-\alpha y)(x-\alpha)$$
$$\implies a\beta (\gamma^2-4b\beta) x^2+4ab\alpha\beta xy+4a\beta^2 b\alpha x-4ba\beta \alpha^2 y=0\tag8$$
The four intersection points of $(7)$ with $(8)$ satisfy
$$b\alpha (\gamma^2-4a\alpha) y^2+4ab\alpha \beta xy+4ab\alpha^2\beta y-4ab\alpha \beta^2 x+k(a\beta (\gamma^2-4b\beta) x^2+4ab\alpha\beta xy+4a\beta^2 b\alpha x-4ba\beta \alpha^2 y)=0$$
Since it represents a circle for some $k$, we have to have $k=-1$ and
$$b\alpha (\gamma^2-4a\alpha)=-a\beta (\gamma^2-4b\beta) ,$$
i.e.
$$\gamma^2\left(\frac{\alpha}{a}+\frac{\beta}{b}\right)=4(\alpha^2+\beta^2)$$ |
proving that a group G has in order equal to a power of 2 | By Cauchy's theorem if the order of the group were divisible by an odd prime $p$ it would have an element of order $p$. |
Solve $2\sin(x)^2 -\sin(x) = 1$ by hand | I'll be happy to give you a starting point.
Let $u = \sin{x}$. Then our equation $2 \sin^{2}(x) - \sin{x} = 1$ becomes $2 u^{2} - u = 1$. Then $2u^{2} - u - 1 = 0$, and factoring gives: $(2u + 1)(u - 1) = 0$.
The price for this starting point is now you must attempt the rest of the problem on your own and post your attempts before receiving any further help (at least from me). |
Homology with local coefficients and flat bundles | Your general statement is simply false. For instance, if the action is trivial, your statement would say that the homology of any space agrees with the homology of its universal cover, which is ridiculous.
What is true is that if $A = \Bbb Z[\pi_1(X)/H]$ as a $\pi_1(X)$-module, and $p: X' \to X$ is the covering space associated to the subgroup $H \subset \pi_1(X)$, then $\mathcal A = \tilde X \times_{\pi_1 X} A$ gives a local coefficient system over $X$ (if you like to think of a local coefficient system as being a bundle of abelian groups over $X$). Then we have $$H_*(X;\mathcal A) \cong H_*(X'; \Bbb Z).$$
What this says is that a very specific local coefficient system gives the homology of this covering space. Not that you can compute the homology of general coefficient systems by passing to a covering space. |
Is $F(x,y)=x$ an atomic formula? | It is an atomic formula. The atomic formulas include all formulas of the shape $s=t$ where $s$ and $t$ are terms. The terms $s$ and $t$ can be quite complicated. For instance $G(F(x,y))$ is a term, where $x$ and $y$ are variable symbols, $F$ is a binary function symbol, and $G$ is a unary function symbol. |
Variation on Birthday Problem - Probability that 47 of 191 students have birthdays on two conditions. | Very low. The usual birthday paradox comes up because there are so many pairs of people that can share a birthday. When you want so many to have these matches it becomes difficult. For condition $2$, you should be able to write a program to find the probability that two will match. It will already be low, as a single zero in the time will make sure there is no match. For condition $1$, the chance that $47$ have a birthday in the same calendar week (not quite the same as what you ask) is about ${191 \choose 47}52^{-46}$, which Alpha evaluates as about $1.5 \cdot 10^{-34}$. |
Linearly dependent columns | $A,B$ are matrices which represent linear maps $f,g$ of $R^n$ in the standard basis. $AB=0$, i.e. $f\circ g=0$; since $B$ is not zero, the image of $g$ is not zero, so the kernel of $f$ is not zero. Hence the rank of $A$ is $<n$. This implies that the columns of $A$, which are $n$-vectors contained in the image of $f$ (a space of dimension $<n$), are dependent. |
Mathematical function to check whether its parameter is zero or not | Knuth suggests using the Iverson bracket notation $[x\ne0]$ for that function. You can also try $|\mbox{sgn}(x)|$. See http://en.wikipedia.org/wiki/Sign_function |
How to find missing side for this isosceles triangle problem? | Note that the triangles ABD and CBD are isosceles; the triangles ABC and BCD are similar. Therefore,
$$AC = x = 1+ \frac1x \implies x^2-x-1=0$$
which yields $x = \frac12(1+\sqrt5)$. |
Proving a and b are perfect squares if and only if $\gcd(a,b)$ and $\operatorname{lcm}(a,b)$ are perfect squares | Hint: For any given prime $p$, let the exponent of $p$ in the prime factorization of $a$ be $a_p$ and let the exponent of $p$ in the prime factorization of $b$ be $b_p$ (where $0$ is allowed as an exponent in both cases). In terms of $a_p$ and $b_p$, what are the exponents of $p$ in $\operatorname{lcm}(a, b)$ and in $\gcd(a, b)$ respectively? |
Show that $|\lambda_i(A)|<1$ iff $|\lambda_i(\beta A)|<1$ $\forall \beta: |\beta|\leq 1$ | The last equivalence if you wish:
"$\Rightarrow$" Take $\beta=1$.
"$\Leftarrow$" Multiply the inequality by $|\beta|$.
P.S. To me, if something "went too fast" in this proof it was the very first fact: why $\lambda_i(\beta A)=\beta \lambda_i(A)$? It is the only nontrivial passage. |
Distance is (uniformly) continuous | The reason $d(x,A)$ is a uniformly continuous function is that $|d(x,A) - d(y,A)| \le |x -y|$ for all $x$ and $y$. Let $x$ and $y$ be given. For every $z\in A$, $$d(x,A) \le |x-z| \le |x-y| + |y-z|.$$ Thus $d(x,A) - |x-y|$ is a lower bound for the set $\{|y-z|:z\in A\}$. Hence $d(x,A) - |x - y| \le d(y,A)$, i.e., $d(x,A) - d(y,A) \le |x - y|$. A similar argument shows that $d(y,A) - d(x,A) \le |x - y|$. Hence $|d(x,A) - d(y,A)| \le |x - y|$. |
Binomial Theorem Identities | There’s no real difference: they express the same fact about binomial coefficients. To see this, first replace $n$ by $m$ and $k$ by $\ell$ in the second equation to get
$$\binom{m-1}\ell+\binom{m-1}{\ell-1}=\binom{m}\ell\;.\tag{1}$$
Now let $\ell=k+1$ and $m=n+1$; then $(1)$ becomes
$$\binom{n}{k+1}+\binom{n}k=\binom{n+1}{k+1}\;,$$
which is your first equation.
In terms of Pascal’s triangle each of these equations says that each entry is the sum of the two above it.
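Both forms of the identity are easy to confirm numerically:
```python
from math import comb

for n in range(1, 12):
    for k in range(n):
        assert comb(n, k + 1) + comb(n, k) == comb(n + 1, k + 1)
```
|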
Image in freely homotopic group and conjugacy | $f$ is a function.
The input of $f$ is an element $[g] \in \pi_1(X,x_0)$.
To determine the output $f([g])$, recall that $g : [0,1] \to X$ is a path, i.e. a continuous function, with $g(0)=g(1)=x_0$. Also, $[g]$ is the path homotopy class of $g$, meaning the homotopy class relative to the endpoints. The output $f([g])$ is an element of $[S^1,X]$, which is the set of homotopy classes of continuous functions $S^1 \to X$. So, using $g$, one must write a formula for a continuous function $G : S^1 \to X$. This formula will do, once one checks that it is well defined when $t=0,1$:
$$G(e^{2 \pi i t}) = g(t)
$$
Once you've done that, $f([g]) = [G] \in [S^1,X]$ is the homotopy class of the function $G$. Now you have to prove that this is a well-defined formula, by showing that if $[g]=[h]$, namely if $g,h : [0,1] \to X$ have endpoints at $x_0$ and are path homotopic, then $[G]=[H]$, namely $G,H : S^1 \to X$ are homotopic.
Of course, you still have to prove what it asks you to prove, namely: given two closed paths $g,h : [0,1] \to X$, the equation $[G]=[H]$ holds if and only if $[g]$ and $[h]$ are conjugate elements of the group $\pi_1(X,x_0)$, which by definition means that there exists $[k] \in \pi_1(X,x_0)$ such that $[k] [g] [k]^{-1} = [h]$, which translates to the statement that there exists a closed path $k : [0,1] \to X$ with endpoints at $x_0$ such that the concatenation $k * g * \bar k$ is path homotopic to $h$. |
Taylor Expansion with Remainder and Norm | As pointed out by Calvin, the $|d|$ comes from the fact that the directional derivative is in $\mathbb{R}^{n}$. Let me say some words here.
If $U$ is an open subset of $\mathbb{R}^{n}$ and $f: U \to \mathbb{R}$ has all its $n$ partial derivatives continuous over $U$, we say $f$ is of class $C^{1}$. Now, let's assume $f$ is $C^{1}$ and take $U \subset \mathbb{R}^{2}$ for simplicity. Fix $\bar{x} = (\bar{x_{1}},\bar{x_{2}}) \in U$ and take $v = (h,k)$ such that $\bar{x}+v \in B \subset U$, where $B$ is an open ball with center $\bar{x}$. Define $r(v) = r(h,k)$ by $$r(v) = f(\bar{x_{1}}+h,\bar{x_{2}}+k)-f(\bar{x_{1}},\bar{x_{2}})-\frac{\partial f}{\partial x}h-\frac{\partial f}{\partial y}k.$$
Now, we can write $$ r(v) = f(\bar{x_{1}}+h,\bar{x_{2}}+k)-f(\bar{x_{1}},\bar{x_{2}}+k)+f(\bar{x_{1}},\bar{x_{2}}+k)-f(\bar{x_{1}},\bar{x_{2}}) - \frac{\partial f}{\partial x}h - \frac{\partial f}{\partial y}k $$
Now, we can use the Mean Value Theorem for real functions to find $\theta_{1},\theta_{2} \in (0,1)$ such that
$$r(v) = \frac{\partial f}{\partial x}(\bar{x_{1}}+\theta_{1}h,\bar{x_{2}}+k)h +\frac{\partial f}{\partial y}(\bar{x_{1}},\bar{x_{2}}+\theta_{2}k)k -\frac{\partial f}{\partial x}h - \frac{\partial f}{\partial y}k $$
Thus $$ \frac{r(v)}{|v|} = \bigg{[} \frac{\partial f}{\partial x}(\bar{x_{1}}+\theta_{1}h,\bar{x_{2}}+k)-\frac{\partial f}{\partial x}(\bar{x_{1}},\bar{x_{2}})\bigg{]}\frac{h}{\sqrt{h^{2}+k^{2}}} + \bigg{[}\frac{\partial f}{\partial y}(\bar{x_{1}},\bar{x_{2}}+\theta_{2}k)-\frac{\partial f}{\partial y}(\bar{x_{1}},\bar{x_{2}})\bigg{]}\frac{k}{\sqrt{h^{2}+k^{2}}}.$$
If we take $v \to 0$, the continuity of the derivatives implies that the terms inside both $[\cdots]$ go to zero and because $h/\sqrt{h^{2}+k^{2}} \le 1$ and, $k/\sqrt{h^{2}+k^{2}} \le 1$, we readily see that $r(v)/|v| \to 0$. |
If $ x\in X $ is an accumulation point of set $ A$ in $ X$, is then $ f(x) $ an accumulation point of set $f(A) $ in $Y$? Where $f$ is continuous. | Consider $f:\mathbb{R}\rightarrow\{0\}$ and $A=(0,1)$, then $0$ is an accumulation point of $A$, but $f(0)$ is not an accumulation point of $\{0\}$. |
What exactly is a manifold? | First off, the definition is not totally standardized. I ran across the following comment on the talk page of the WP article, which I thought was very helpful in explaining this possible source of confusion:
Those of us who were introduced to manifolds via point set topology
(as in Munkres) have a gut feeling that this is what manifolds are,
and that the differential structure is an overlay. Those of us who
were introduced to manifolds via the differential structure (as in
Spivak) have a gut feeling that that is what manifolds are.
Both the WP article and this book have helpful lists of things that are and aren't manifolds. I would suggest having these lists handy while going through actual definitions of manifolds, because otherwise it's hard to understand why the different aspects of the definitions make sense.
It's also helpful to prepare yourself with an intuitive, informal idea of what we're trying to encapsulate in a formal definition. The basic idea is that we want to be able to describe a geometry stripped of (1) any notion of measurement, and (2) any notion of what is a straight line. However, we want to preserve distinctions like the distinction between a torus and a sphere.
Informally, here is a definition that I like. An n-dimensional manifold is a space M with the following properties:
M1. Dimension: M’s dimension is n.
M2. Homogeneity: No point has any property that distinguishes it from any other point.
M3. Completeness: M is complete, in the sense that specifying an arbitrarily small neighborhood gives a unique definition of a point.
If you work through some of the examples collected earlier, you'll see that this pretty much does the job. You can verify that the following are manifolds: the real line, a circle, the open half-plane $y>0$, the union of two disjoint planes. And that the following are not: a line glued to a plane, the rational numbers, the closed half-plane $y \ge 0$.
Here is a previous question I asked about formalizing the above definition.
The more typical definition is that a manifold is a space that is locally like $\mathbb{R}^n$. I dislike it philosophically, but it's easier to formalize than M1-M3 above. To formalize it, you can say that a manifold is a space in which any sufficiently small neighborhood is homeomorphic to an open set in $\mathbb{R}^n$. Homeomorphic means that you can find a homeomorphism between them. A homeomorphism is, intuitively, a process of stretching and distorting something without cutting or gluing. More formally, a homeomorphism is a function that is invertible and continuous in both directions. |
Counterexample of which the solution of Laplace equation is in Hilbert space $H^{2}(\Omega)$ of infinite strip domain. | The statement does hold for $a=0,$ where it becomes a form of Poisson's equation.
Consider the extension of $u$ to $\overline u:\mathbb R\times (\mathbb R/2h\mathbb Z)\to\mathbb C$ that is odd in the $y$-direction, i.e. $\overline u(x,y)=-\overline u(x,-y).$ By a similar argument as in the proof of the Sobolev extension theorem, this extension is still in $H^2,$ so in fact it must satisfy the equation $-\Delta \overline u=\overline f$ where $\overline f$ is the extension of $f$ that is odd in the $y$-direction. This extension is defined on an abelian group so has a Fourier transform. I will drop the overlines from now on - they were just used to justify the Fourier transform.
The Fourier transform $\hat u$ can be thought of as satisfying
$$u(x,y)=\int \sum_{k=1}^\infty \hat u(p,k) e^{i p x}\sin(\pi k y/h) dp$$
so that the $x$ derivative becomes multiplication by $ip$, and the $y$ derivative is like multiplication by $\pi k/h$ (and switching $\sin\mapsto \cos\mapsto-\sin$), so
$$\|u\|^2_{H_2(\Omega)}\geq C\int \sum_{k=1}^\infty |\hat u(p,k)|^2 (1+p^2+( k/h)^2+p^4+p^2( k/h)^2+( k/h)^4) dp$$
and
$$\|f\|^2_{L_2(\Omega)}\leq C\int \sum_{k=1}^\infty |\hat u(p,k)|^2 (p^4+(k/h)^4) dp$$
where the $C$'s may be different - I am using $C$ to ignore constant factors, and with the right constants these would be equalities. These easily give $\|u\|_{H_2(\Omega)}\leq C(1+h^2) \|f\|_{L_2(\Omega)}$ for some $C$. (Use $1\leq h^2(k/h)^2$ and some "$2ab\leq a^2+b^2$" inequalities.) |
Finding a special subsequence of any Cauchy sequence | Take a point $x_{1, 1}$ such that the open ball $B(x_{1, 1}, \varepsilon_1)$ contains infinitely many points. Denote the subsequence of points of $(x_n)$ in this ball by $(x_{1, k})$. In particular, we may re-order the sequence so that the sequential order of terms in $(x_n)$ is preserved in $(x_{1, k})$.
Now, there is a point $x_{2, 1}$ such that the intersection $B(x_{1, 1}, \varepsilon_1) \cap B(x_{2, 1}, \varepsilon_2)$ contains infinitely many points. Denote this subsequence of points by $(x_{2, k})$ with the same ordering condition as before. Indeed, $(x_{2, k})$ is clearly a subsequence of $(x_{1, k})$.
Continue this process. Taking the sequence $(x_{j, 1}) := (x_{n_j})$ works. |
If $x = y^3$ in $\mathbb{Z} + \frac{-1 + \sqrt{-3}}{2}\mathbb{Z}$, then there is some $w \in \mathbb{Z} + \sqrt{-3}\mathbb{Z}$ such that $x = w^3$. | Let $\phi$ be a third root of unity, i.e. $\phi^3=1$ and $\phi^2+\phi+1=0$.
The trick is the following:
Lemma. If we are given some element $y=a+b\phi \in \mathbb Z+\phi\mathbb Z$,
then one of the three elements $y,y\phi,y\phi^2$ is contained in
$\mathbb Z+\sqrt{-3}\mathbb Z$.
Proof: We have $\phi^2=-(\phi+1)$, i.e. $y\phi^2=-(y\phi + y)$. The sum of two elements in $(\mathbb Z+\phi\mathbb Z)\setminus (\mathbb Z+\sqrt{-3}\mathbb Z)$ is clearly in $\mathbb Z+\sqrt{-3}\mathbb Z$. So if both summands on the RHS are not contained in $\mathbb Z+\sqrt{-3}\mathbb Z$, then the LHS is. $\small\Box$
Furthermore we have $\phi^3=1$, i.e. $$y^3=(y\phi^2)^3=(y\phi)^3.$$ Thus one of those three guys is the $w$ you search for.
Note that this solution is somehow a priori clear, there is no chance to avoid this Lemma: If $x=y^3=w^3$, we have that $\frac{w}{y}$ is a third root of unity, i.e. $w \in \{y,y\phi,y\phi^2\}$. So the truth of the Lemma I proved was the only chance we had to begin with. |
Solving a Polynomial Expression | HINT:
$$7=2x^2-3xy-2y^2=x(\underbrace{2x+y})-2y(\underbrace{2x+y})=(2x+y)(x-2y)$$
As $x,y$ are integers so will be $2x+y,x-2y$
What are the integer divisors of $7?$
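A quick brute-force search confirms the integer solutions obtained from the factor pairs of $7$:
```python
# Scan a window of integers for solutions of 2x^2 - 3xy - 2y^2 = 7.
sols = [(x, y) for x in range(-20, 21) for y in range(-20, 21)
        if 2 * x * x - 3 * x * y - 2 * y * y == 7]
print(sols)  # [(-3, -1), (3, 1)]
```
|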
Find the smallest number b such that the function | HINT: To make it invertible, you have to make it one-to-one. This cubic is increasing when $x$ is very negative or very positive, so the way to do this is to make it increasing. Thus, you want the smallest $b$ that ensures that $f\,'(x)\ge 0$ for all $x$. Its graph will then look like that of $y=x^3$, rising to a single point with a horizontal tangent and then rising again, instead of rising, falling, and then rising again. |
Question about rotation of spheres. | The equation $x^2+y^2=1$ represents a circle with center at the origin $(0, 0)$ & a radius of $1$ unit. The line $x=0$ represents the y-axis which equally divides the circle into two halves.
Thus the region bounded by $x=0$ & $x^2+y^2=1$ will be a semicircle which by rotating about the x-axis generates a semi-sphere whose volume is $$=\int_{x=0}^{x=1}\pi y^2dx$$ $$=\pi \int_{0}^{1} (1-x^2)dx$$
$$=\pi \left[x-\frac{x^3}{3}\right]_{0}^{1}$$
$$=\pi \left[1-\frac{1^3}{3}-0\right]$$
$$=\pi \left[\frac{2}{3}\right]$$ $$=\color{red}{\frac{2}{3}\pi}$$
Which is equal to the volume of the semi-sphere with unit radius $$=\frac{2}{3}\pi(\text{radius})^3$$ $$=\frac{2}{3}\pi(1)^3=\frac{2}{3}\pi$$
Both the above values are same hence, the answer is correct. |
Determining whether $y=\sqrt{x^3+x^2+x+1}$ is one-to-one. | If you knew what derivatives were, then you could easily do this question. With that knowledge, I knew the function was one-one.
But to come to your question, you are on the right track, but a little more hard work is required:
$$
x^3+x^2+x=y^3+y^2+y \implies (x^3-y^3) + (x^2-y^2) + (x-y) = 0
$$
As it turns out, we can factorize the above expression:
$$
(x^3-y^3) + (x^2-y^2) + (x-y) = (x-y)(1 + x+y+x^2+xy+y^2) =0
$$
Now, all we need to show is that $1 + x+y+x^2+xy+y^2$ is a strictly positive quantity, so that $x-y=0$ is forced by the previous statement.
Now, we complete squares, and we can rewrite the expression:
\begin{split}
1 + x+y+x^2+xy+y^2 & = \frac{(x^2+2xy+y^2) + (x^2+2x+1) + (y^2+2y+1)}{2} \\ & = \frac{(x+y)^2 + (x+1)^2 + (y+1)^2}{2}
\end{split}
The three squares cannot vanish simultaneously ($x=y=-1$ would force $x+y=-2\neq 0$), which means that $1+x+y+x^2+xy+y^2 > 0$.
Hence, from the previous equality, we get $x-y=0$, so that $x=y$. Hence, the function given is one-one.
If you do not know derivatives, then little tricks like completing squares and factorisations are immensely helpful, but in general you are going to struggle if I give you some function with lots of $\ln$s and sines and cosines. So you'll have to wait until calculus to get a more comprehensive answer to this question. |
Solving periodic equation $\cos 7\theta=\cos 3\theta+\sin 5\theta$. | Since $\cos 7\theta-\cos 3\theta=-2\sin 5\theta\sin 2\theta$, the equation becomes $\sin 5\theta\,(1+2\sin 2\theta)=0$. So either $$\sin 5\theta=0$$
$\sin x =0$ has solutions $x=n\pi$
Thus we get $$5\theta=n \pi$$
$$\theta= \frac{n\pi}{5}\hspace{10pt} n\in Z$$
Also $$\sin 2\theta=-\dfrac{1}{2}$$
$$2\theta=2n\pi-\frac{\pi}{6} \implies \theta=n\pi-\frac{\pi}{12}\hspace{10pt} n\in Z$$
$$2\theta=(2n+1)\pi+\frac{\pi}{6} \implies \theta=n\pi+\frac{7\pi}{12}\hspace{10pt} n\in Z$$
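A numerical spot-check of a few solutions from each family:
```python
from math import cos, sin, pi

def solves(theta):
    return abs(cos(7 * theta) - cos(3 * theta) - sin(5 * theta)) < 1e-9

candidates = [n * pi / 5 for n in range(-5, 6)] \
    + [n * pi - pi / 12 for n in range(-3, 4)] \
    + [n * pi + 7 * pi / 12 for n in range(-3, 4)]
print(all(solves(t) for t in candidates))  # True
```
|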
How should I state the null and alternative hypotheses, when the alternative speaks in favour of the null one? | I don't think there's an error in the question, you could perform a one-tailed test.
This is how I would choose to define the null and alternate hypothesis
$$
H_0: p \leq 0.5\\
H_1: p > 0.5
$$
I'll explain why,
The alternate hypothesis can be defined with $\leq$ or $\geq$ operators, I'm not sure where you read that you cannot use these operators in the alternate hypothesis, or maybe you were referring to traditional two-tailed tests.
When I was first learning hypothesis testing, I used this book - "Statistics" by Robert S. Witte and John S. Witte (commonly referred to as Witte and Witte). I strongly recommend it, it will provide you with strong basics, suitable for whatever your current comfort level is in regard to the topics.
Rejecting the null hypothesis is a stronger decision than accepting it.
Assuming we plan to perform the $Z$ test, recall the shape of the standard normal ($Z$) distribution.
You are given a sample and you want to check to see if the sample arises from a particular distribution (or given the sample, you want to check if the mean of the population distribution is some $\mu_0$, i.e. your hypothesis is $H_0: \mu = \mu_0$), so you compute the $z$ score of the sample and if the $z$ score lies within the range $[-1.96, 1.96]$ you supposedly "accept" the hypothesis with 95% probability of your decision being right. But if the mean of the population distribution were slightly shifted to the right or left, it is highly likely that the sample could still arise from the new distribution, since the two distributions overlap substantially.
In that case the "acceptance regions" will also have a really large overlap, and hence it might also be the case that $\mu_0$ is not the right mean of the population.
Therefore, accepting the hypothesis is always a weak decision, but when you are faced with a scenario where you end up not rejecting the null hypothesis, it's better to state that you cannot reject the null hypothesis given the samples you were provided with instead of stating that you "accept" the hypothesis
Let us now look at the other possible decision, rejecting the null hypothesis. Explaining why this decision is a strong decision is easy,
At $\alpha = 0.05$: what is the probability that we reject the null hypothesis given that it is actually true (or, what is the probability with which we make a mistake when the decision we take with the samples we have is the rejection of the null hypothesis)? It is $\alpha$ (which is usually low).
Because of this reason, rejecting the null hypothesis is a strong decision.
When we're dealing with a hypothesis involving operators like $\geq$ and $\leq$, we usually perform one-tailed tests. In two-tailed tests, we would reject an equality claim if the sample leans away from the claim value (lower or higher), in one-tailed tests we're only concerned with one direction.
So in your question, if suppose all 100 people sampled have coffee before breakfast, this wouldn't cause you to reject the claim made but if none of the 100 have coffee before breakfast you have strong evidence to reject the claim (i.e. you only care about one direction)
Now, to solve your question, I'm going to assume $\alpha = 0.05$. When performing the $Z$ test, these will be the decision rules I would follow
If $z \geq 1.65$, I reject the null hypothesis (which states that the proportion $p$ is at most $0.5$) and thereby accept the claim with $95\%$ confidence (you have a sample that has a significantly higher proportion than $0.5$, a significant sample)
If $z < 1.65$, I accept the null hypothesis and thereby do not accept the claim made (which is that more than $50\%$ of people have coffee before breakfast)
I would assume the data follows a binomial distribution and use its normal approximation to perform the test.
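A minimal sketch of that test in Python (the sample count of 60 successes out of 100 is a made-up illustration):
```python
import math
from scipy.stats import norm

def one_sided_proportion_ztest(successes, n, p0=0.5):
    # Normal approximation to the binomial:
    # z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n), right-tailed p-value.
    p_hat = successes / n
    z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)
    return z, 1 - norm.cdf(z)

z, p = one_sided_proportion_ztest(60, 100)
print(z, p)  # z ~ 2.0, p ~ 0.023 -> reject H0 at alpha = 0.05
```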
The figures I referred to are from the book I mentioned. I apologise if I got something wrong, or failed to answer or understand your question. |
Proof that $H(X) \leq \log(|A|)$ (Shannon entropy) | For any probability distributions $p, q$ on $A$, Gibbs inequality states that
$$
H(p)=-\sum_{k\in A} p(k)\log p(k)\leq -\sum_{k\in A} p(k)\log q(k)
$$
with equality iff $p=q$. Take $q$ to correspond to a uniform distribution on $A$. Then
$$
H(p)\leq -\sum_{k\in A} p(k)\log \frac{1}{|A|}=\log|A|
$$
with equality iff $p$ is a uniform distribution, as desired.
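A quick numerical illustration of the bound:
```python
import math

def entropy(p):
    # Shannon entropy in nats.
    return -sum(q * math.log(q) for q in p if q > 0)

p = [0.5, 0.25, 0.125, 0.125]
print(entropy(p), math.log(len(p)))  # 1.2130... <= 1.3862...
```
|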
What are some sources to learn projective geometry? | Check out Elliptic Tales. Very good. |
Can an element with no pre-image be the pre-image of some element? | If I understand what you're asking then consider two functions:
$f: A \rightarrow B, f(x)=x^2$ where $A = B = \mathbb{R}$
$g: X\rightarrow Y, g(x)=x^2$ where $X = \mathbb{R}$ and $Y = \mathbb{R}_{\geq 0}$ (real numbers $\geq 0$).
Note that $-1 \in A$ and $-1 \in B$, but certainly there is no real number $r$ such that $r^2=-1$. As such, we say that $f$ is not onto. However, note that we still have $f(-1) = 1$, so $-1$ is an element with no pre-image but is itself the preimage of something.
Now, consider $g$ and note that $\forall y \in Y$, there is some $x\in X$ such that $g(x)=y$, namely $x=\pm\sqrt{y}$. Since $g$ can map to any element of $Y$ (the co-domain), we call $g$ an "onto" or "surjective" map. Notice that simply by changing our codomain the mapping $x\mapsto x^2$ can be surjective. As such, it's often very important to specify domains and codomains along with mappings when talking about functions.
Hopefully this cleared some things up! |
Whether a low-rank matrix is still low-rank after deleting one row? | The matrix is at most rank $r$ and at worst rank $r-1$.
The rank of a matrix is the number of linearly independent columns it has. So, if you remove one column, then you have two options:
If the column was linearly independent of the rest, then the number of linearly independent columns has decreased by one. Therefore, the new rank is $r-1$.
If the column was a linear combination of some other columns, then the number of linearly independent columns remained the same. Therefore, the rank of the new matrix is $r$.
For example, if I have the $1\times 2$ matrix $\begin{bmatrix}1 & 0\end{bmatrix}$ (which is of rank $1$) and I remove one column, then the rank of the new matrix will either be $1$ (if I remove the second column) or $0$ (if I remove the first column).
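The toy example, checked with numpy:
```python
import numpy as np

A = np.array([[1, 0]])
print(np.linalg.matrix_rank(A))         # 1
print(np.linalg.matrix_rank(A[:, 1:]))  # 0  (first column removed)
print(np.linalg.matrix_rank(A[:, :1]))  # 1  (second column removed)
```
|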
Banach spaces with a bounded linear functional constant on some normalized Hamel basis | This is true for all $V$. We may assume $\dim V>1$; then by Hahn-Banach there exists a bounded functional $f:V\to\mathbb{R}$ of norm greater than $1$. Let $H=f^{-1}(\{1\})$ and let $S$ be the unit sphere of $V$. It suffices to show that $S\cap H$ spans $V$, since then a maximal linearly independent subset of $S\cap H$ will be a normalized Hamel basis $\mathcal{B}$ with $f_{\mathcal{B}}=f$.
First, since $f$ has norm greater than $1$, there is some unit vector $u\in S$ such that $|f(u)|>1$. Let $z=u/f(u)$, so $z\in H$ and $\|z\|<1$. Now let $v\in H$ be distinct from $z$ and observe that the entire line $L=\{tv+(1-t)z:t\in\mathbb{R}\}$ is contained in $H$. As $t\to\pm\infty$, $\|tv+(1-t)z\|\to\infty$ since $v\neq z$. But also, when $t=0$, $\|tv+(1-t)z\|=\|z\|<1$. Thus by the intermediate value theorem, there are at least two points on $L$ of norm $1$, one for a negative value of $t$ and one for a positive value of $t$. The span of those two points then contains $L$, and in particular contains $v$. That is, $v$ is in the span of $S\cap H$.
We have thus shown that every element of $H$ except for $z$ is in the span of $S\cap H$. Since $\dim V>1$, $H$ contains more than one point and in particular contains an entire line through $z$. But then $z$ can be written as a linear combination of two other points on that line, so $z$ is also in the span of $S\cap H$. Thus the span of $S\cap H$ contains all of $H$.
Finally, suppose $v\in V$ is arbitrary. If $f(v)\neq 0$ then $v$ is a scalar multiple of an element of $H$ and thus is in the span of $S\cap H$. If $f(v)=0$ then we can fix an element $w\in H$; then $w$ and $w+v$ are both in $H$, and thus $v=(w+v)-w$ is in the span of $S\cap H$. Thus $S\cap H$ spans all of $V$. |
Splitting field for an irreducible polynomial which is not separable | I think the following is a counterexample. Let $F=\mathbf{Z}_2(t)$ as earlier. Consider the polynomial $f(x)=x^4+x^2+t^3\in F[x]$. If $z$ is one of its zeros, then $y=z^2$ is a zero of the separable irreducible polynomial $g(x)=x^2+x+t^3$. The other zero of $g(x)$ is easily seen to be $y+1$, so $F[y]$ is the splitting field of $g(x)$, and $F[y]/F$ is a Galois extension.
So $z$ and $z+1$ are the zeros of $f(x)$ - both of multiplicity two. The splitting field of $f$ is thus $F[z]$. This is not isomorphic to $F$. There are probably several ways of seeing this. The first one that comes to mind is probably unnecessarily advanced. Namely we see that $F[z]$ has $F[y]$ as a subfield. And we can identify $F[y]$ as the function field of the elliptic curve $y^2+y=t^3$, which has genus $1$ and thus cannot be isomorphic to a rational function field such as $F$. OTOH, if $F[z]$ were isomorphic to $F$, then by Lüroth's theorem its subfield $F[y]$ would also be a genus zero function field. |
Application of compactness theorem. | We can indeed use compactness here! Although it's much easier to use the version for first-order logic than the propositional one, so that's what I'll do.
One direction is trivial. Now suppose for every finite set $S$, there are linear orders $<_i^S$ with the desired property.
Then consider the following language $L$:
We have a binary relation symbol "$<$"
. . . and $n$ more binary relation symbols "$<_i$" for $1\le i\le n$
. . . and a bunch of constant symbols, $c_a$, for $a\in A$;
and the theory $T$ in this language consisting of the following sentences:
$<$ is a partial order and $<_i$ is a linear order for each $1\le i\le n$
the sentence "$c_a<c_b$" for $a<b$
and the sentence "$\forall x, y(x<y\iff [(x<_1y)\wedge (x<_2y)\wedge . . . \wedge (x<_ny)])$".
By assumption, every finite subset of $T$ has a model; by Compactness, the whole of $T$ has a model. Now let $M$ be any such model; we get some binary relations $\prec_i$ on $A$ for $1\le i\le n$, given by $$a\prec_ib \iff M\models c_a<_ic_b.$$ And it's not hard to show that these relations have the desired property.
OK, now what about propositional logic? Can we make do with just the propositional version of compactness?
The answer is yes, but it's somewhat ad hoc. Here's what we do. We consider, as above, a really big language. It's a propositional language now, so it just consists of "propositional atoms" (or letters, or whatever your text calls them). Here are the ones we'll use (and these are all distinct from each other):
For $a, b\in A$ and $1\le i\le n$, we have the atom "$p_{a, b, i}$", which intuitively means "$a<_ib$".
For $a, b\in A$ we have the atom "$q_{a, b}$", which intuitively means "$a<b$".
Now, can you see how to write down a set $S$ of propositions in these symbols which expresses "the relations $<_1, <_2, . . . , <_n$ show that $A$ is $<(n+1)$-dimensional"?
Do you see why - assuming each finite subset of $A$ is $<(n+1)$-dimensional - $S$ is finitely satisfiable?
Finally, do you see how to go from a truth assignment for $S$ to the desired family of orderings on $A$? |
Examples for Hilbert's Quote | $$x_{n+1} =\frac{\alpha+\beta x_n +\gamma x_{n-1} +\delta x_{n-2}}{A+Bx_{n}+Cx_{n-1}+Dx_{n-2}}, \quad n= 0, 1, \ldots,$$
where the parameters $\alpha, \beta, \gamma, \delta, A, B, C, D$ are non-negative real numbers and the initial conditions $x_{-2}, x_{-1}, x_0$ are arbitrary non-negative real numbers such that the denominator is always positive.
We are primarily concerned with the boundedness nature of solutions, the stability of the equilibrium points, the periodic character of the equation, and with convergence to periodic solutions including periodic trichotomies.
If we allow one or more of the parameters in the equation to be $0$, then we can see that the equation contains
$$(2^4 -1)(2^4 -1)= 225 $$
special cases, each with positive parameters and positive or non-negative initial conditions.
According to David Hilbert, "The art of doing mathematics consists in finding that special case which contains all the germs of generality," and according to Paul Halmos, "The source of all good mathematics is the special case, the concrete example."
The special cases of this equation contain many of the germs of generality of the theory of difference equations of order greater than one, about which, at the beginning of the third millennium, we know surprisingly little. The mathematics behind these special cases is also beautiful, surprising, and interesting.
The methods and techniques we develop to understand the dynamics of various special cases of rational difference equations, and the theory that we obtain, will also be useful in analysing any mathematical model that involves difference equations.
Is it correct that $\text{Cov}(X,Y) = \text{E}((X-\text{E}(X))Y)$? | Interesting; I wasn't aware of this fact, but I don't see any errors in your derivation. (it's not a concept I've ever dealt much with)
At first glance, it looks like the usual definition has pedagogical advantages, making obvious two key properties:
Covariance is symmetric: $\operatorname{Cov}(X,Y) = \operatorname{Cov}(Y,X)$
Covariance depends only on how much the random variables differ from their means
As for computation, I think the alternative you suggest only has niche uses: I imagine that for nearly every purpose, at least one of $E((X - \mu_x)(Y - \mu_y))$ or $E(XY) - \mu_x \mu_y$ is more computationally convenient than $E((X - \mu_x) Y)$: the former due to smaller numbers and better numerical stability, and the latter due to ease of tabulation and simpler formulas. |
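For what it's worth, here is a quick numerical check of the identity on sample moments (a Python/NumPy sketch; the data are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
y = 0.5 * x + rng.normal(size=10_000)  # correlated with x

mx, my = x.mean(), y.mean()
lhs = ((x - mx) * y).mean()           # E((X - mu_x) Y), sample version
rhs = (x * y).mean() - mx * my        # E(XY) - mu_x mu_y
usual = ((x - mx) * (y - my)).mean()  # the usual covariance formula
print(lhs, rhs, usual)  # all three agree up to floating-point error
```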
Solving a third degree matrix equation | From the spectral theorem, you can diagonalize $X$:
$X = P^{-1} D P$ with $D=\text{diag}(\lambda_1,...,\lambda_n)$ and $\lambda_i >0$ for all $i$.
Let $\tilde A = P A P^{-1}$ and $\tilde B = P B P^{-1}$.
Then the new equation to solve is:
$\tilde A = D(I+ \tilde B-D^{2})$ and you have $n$ unknowns ($\lambda_1,...,\lambda_n$).
In particular, for $i \neq j$, you get $\tilde a_{i,j} = \lambda_i \tilde b_{i,j}$. From this you can see that you will not always get a solution for any $A$ and $B$. |
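To see the off-diagonal relation concretely, here is a minimal NumPy sketch (the eigenvalues and $\tilde B$ are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
lam = rng.uniform(0.5, 2.0, size=n)  # hypothetical eigenvalues of X
D = np.diag(lam)
Bt = rng.normal(size=(n, n))         # a hypothetical B-tilde

At = D @ (np.eye(n) + Bt - D @ D)    # the transformed equation

# Off-diagonal entries satisfy At[i, j] = lam[i] * Bt[i, j]
for i in range(n):
    for j in range(n):
        if i != j:
            assert np.isclose(At[i, j], lam[i] * Bt[i, j])
print("off-diagonal relation verified")
```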
tangent plane to a surface, does it exist in origo? | This figure is a (double) cone with vertex at $(0, 0, 0)$ (in English, the "origin"). Yes, the point $(3, 4, 5)$ does lie on that cone, and the tangent plane at that point is given by $3x + 4y - 5z = 0$. Yes, that plane contains the origin. In fact, the tangent plane at any point on the cone contains the origin. There is, however, no plane that is tangent to the cone *at* its vertex. The cone is not "smooth" there. |
PDF of X - Y given pdf of X + Y | By symmetry, Y and b+a-Y are identically distributed hence, by independence of (X,Y), the pairs (X,Y) and (X,b+a-Y) are identically distributed. This implies that X+Y and X+(b+a-Y) are identically distributed hence X-Y is distributed like (X+Y)-(b+a), whose distribution is known. |
Fourier: $f(x) = x$ with $0 \leq x \leq 2\pi$ (Exercise almost solved) | Fix your integral when $n=0,$ because $\frac1n=\frac10$ is not a thing.
Combine terms for $n$ and $-n$ when $n>0.$
You are correct that, for $n\neq 0$ $$\int_0^{2\pi} x\cos nx\,dx=0.$$ It is not because $x\cos nx$ is odd. There are no odd functions on $[0,2\pi].$ Even if you extend $x\cos nx$ to $[-2\pi,2\pi]$ by repeating the values by period $2\pi,$ it wouldn’t be odd. That’s because $f(x)=x$ is always positive on $[0,2\pi].$
You can avoid breaking up into sine and cosine integrals by integrating directly:
$$\int_0^{2\pi} xe^{inx }\,dx$$
Using integration by parts where $u=x, dv=e^{inx}\,dx.$ Handle the case when $n=0$ separately. |
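If you want to double-check the computation, a short SymPy sketch (my own verification, not part of the exercise) confirms both cases:

```python
import sympy as sp

x = sp.symbols('x', real=True)

# n != 0: integration by parts gives -2*pi*I/n; check a few concrete n
for n in (1, 2, 3):
    I_n = sp.integrate(x * sp.exp(sp.I * n * x), (x, 0, 2 * sp.pi))
    print(n, sp.simplify(I_n))  # -2*I*pi/n

# n = 0 must be handled separately
print(sp.integrate(x, (x, 0, 2 * sp.pi)))  # 2*pi**2
```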
Line segment connecting points in a convex set | Let $r(t_1)=t_1a + (1-t_1)b$ and $r(t_2)=t_2a+(1-t_2)b$ be two points on the line connecting a and b. I claim $tr(t_1)+(1-t)r(t_2)$ also takes the above form of a point on the line connecting and b. So
$tr(t_1)+(1-t)r(t_2)=t(t_1a+(1-t_1)b)+(1-t)(t_2a+(1-t_2)b)=(tt_1+(1-t)t_2)a+(t(1-t_1)+(1-t)(1-t_2))b$.
Since you asked for a hint, I did not complete the computation, but you start this way and continue until you get the expression into the same form as $r(t)$ given in the question. You must check that the two coefficients add up to $1$ and that each coefficient of $a$ and $b$ is in $[0,1]$. |
Definition of Ideal in a Ring | Here is a counterexample. Take the ring $R=x\mathbb Z[x]$, consisting of polynomials with integer coefficients and constant coefficient zero. Consider the set $S=\mathbb Nx + x^2\mathbb Z[x]$, consisting of elements of $R$ with nonnegative $x$-coefficient. The set $S$ is clearly closed under addition, and multiplying an element of $R$ and an element of $S$ gives a multiple of $x^2$, which is an element of $S$. But $S$ is not closed under subtraction. |
Can $S^2$ be turned into a topological group? | We'll show that if $S^n$ is a topological group then $n$ must be odd.
Argument 1: Suppose $m : S^n \times S^n \to S^n$ is a topological group structure on $S^n$. For $g \in S^n$ not equal to the identity, $m(g, -) : S^n \to S^n$ has no fixed points, and so by the Lefschetz fixed point theorem, its Lefschetz trace must be $0$. On the other hand, $S^n$ is path-connected, so $m(g, -)$ is homotopic to $m(e, -) = \text{id}_{S^n}$, and in particular the two have the same action on homology and hence the same Lefschetz trace. But the Lefschetz trace of the identity is the Euler characteristic, which is $1 + (-1)^n$; hence $n$ must be odd. $\Box$
More generally, the same argument shows that if a compact path-connected triangulable space has a topological group structure then its Euler characteristic must be $0$.
Argument 2: I think this is a condensed form of Vladimir Sotirov's argument. Any $H$-space structure on $S^n$ induces a Hopf algebra structure on the cohomology $H^{\bullet}(S^n)$, and in particular a coproduct $\Delta$. If $x$ denotes a generator of $H^n(S^n)$, then for degree reasons and because of the existence of a counit, the coproduct must take the form
$$\Delta(x) = 1 \otimes x + x \otimes 1 \in H^{\bullet}(S^n) \otimes H^{\bullet}(S^n)$$
and hence
$$\Delta(x^2) = (1 \otimes x + x \otimes 1)^2 = 1 \otimes x^2 + (-1)^{n^2} x \otimes x + x \otimes x + x^2 \otimes 1$$
where here we recall that on a tensor product of cohomology rings the cup product takes the form
$$(a \otimes b) \cup (c \otimes d) = (-1)^{\deg(b) \deg(c)} (a \cup c) \otimes (b \cup d)$$
for homogeneous $a, b, c, d$. Since $x^2 = 0$ and $x \otimes x$ is not torsion, the above relation can only hold if $(-1)^{n^2} + 1 = 0$; hence $n$ must be odd. $\Box$
More generally, Hopf proved that over a field, the cohomology of a connected $H$-space with finitely generated cohomology is an exterior algebra on odd generators. |
Poker hand probabilities | To calculate the probability of having at least a pair (i.e., some two cards sharing a rank): $P(\textrm{pair}) = 1 - P(\textrm{no pair}) = 1 - \frac{48}{51} \cdot \frac{44}{50} \cdot \frac{40}{49} \cdot \frac{36}{48} = \frac{2053}{4165} \approx 0.4929$.
To calculate the probability of having exactly one pair (and no better hand): There are $13$ possibilities for the rank of the pair, and $6$ possibilities for the suits of the cards in the pair. Furthermore, there are $12 \choose 3$ possibilities for the ranks of the other cards, which can be of any suit. So the probability is: $$P(\textrm{exactly one pair}) = \frac{13 \cdot 6 \cdot {12 \choose 3} \cdot 4^3}{52 \choose 5} = \frac{352}{833} \approx 0.4226$$. |
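Both numbers are easy to confirm exactly (a Python sketch using exact rational arithmetic):

```python
from fractions import Fraction
from math import comb

# P(at least one pair) = 1 - P(all five ranks distinct)
no_pair = Fraction(48, 51) * Fraction(44, 50) * Fraction(40, 49) * Fraction(36, 48)
print(1 - no_pair, float(1 - no_pair))  # 2053/4165 ~ 0.4929

# P(exactly one pair): 13 ranks, C(4,2) suit pairs, C(12,3) other ranks, 4^3 suits
one_pair = Fraction(13 * comb(4, 2) * comb(12, 3) * 4**3, comb(52, 5))
print(one_pair, float(one_pair))  # 352/833 ~ 0.4226
```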
Find the number of common tangents to $y^2=2012x$.... | Let $(s,t)$ be the point on $y^2=2012x$ and let $(u,v)$ be the point on $xy=(2013)^2$.
Then, we have
$$t^2=2012s,\tag1$$
$$uv=(2013)^2.\tag2$$
Since
$$y^2=2012x\Rightarrow 2y\cdot\frac{dy}{dx}=2012,$$
we know that the $y$-axis is tangent to this curve at the origin. However, since $y$-axis is not tangent to the curve $xy=(2013)^2$, we may assume that $t\not=0$. Then, we have
$$\frac{dy}{dx}=\frac{2012}{2y}.\tag3$$
On the other hand, since $u\not=0$, we have
$$xy=(2013)^2\Rightarrow y+x\cdot\frac{dy}{dx}=0\Rightarrow \frac{dy}{dx}=-\frac yx.\tag4$$
From $(1),(2),(3),(4)$, the tangent line both at $(s,t)$ and at $(u,v)$ can be represented as
$$y-t=\frac{2012}{2t}(x-s)\iff y=\frac{2012}{2t}x+\frac{2t^2-2012s}{2t}=\frac{2012}{2t}x+\frac t2$$
and
$$y-v=-\frac{v}{u}(x-u)\iff y=-\frac{v^2}{(2013)^2}x+2v$$
respectively.
Hence, we have
$$\frac{2012}{2t}=-\frac{v^2}{(2013)^2},\ \ \frac t2=2v\Rightarrow v^3=-\frac{2012(2013)^2}{8},\ t=4v.$$
Thus, we know that there is only one such set $(s,t,u,v)$. Hence, the answer is $1$. |
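As a numerical cross-check (a Python sketch; tangency to each curve holds automatically by the computation above, so the binding condition is that the two lines coincide):

```python
# Solve v^3 = -2012 * 2013^2 / 8 and t = 4v, then compare the two lines
v = -((2012 * 2013**2 / 8) ** (1 / 3))
t = 4 * v

slope_parabola, intercept_parabola = 2012 / (2 * t), t / 2
slope_hyperbola, intercept_hyperbola = -v**2 / 2013**2, 2 * v

print(abs(slope_parabola - slope_hyperbola))          # ~0 (floating-point)
print(abs(intercept_parabola - intercept_hyperbola))  # exactly 0 since t = 4v
```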
The convergence of complex numbers is equivalent to the convergence of absolute values and arguments | The claim is not true: $$z_n = -1 + \dfrac{(-1)^n}{n}\,i$$ clearly converges to $-1$, but $\operatorname{Arg}(z_n) \approx \pi$ for even $n$ and $\approx -\pi$ for odd $n$.
(The implication in the other direction is valid though.) |
AB - BA = Z commutator | You can exhibit two matrices $A$ and $B$ that satisfy the given equation when $x=0$. You may try to solve the smaller problem
$$
XY-YX=Z:=\pmatrix{1&0\\ 0&-1}\tag{1}
$$
first. Then enlarge $X$ and $Y$ to two $3\times3$ matrices $A$ and $B$ by inserting a zero row and a zero column in the middle of each of $X$ and $Y$.
To solve $(1)$, you may pick an $X$ at random (for this particular $Z$ in $(1)$, don't pick a diagonal matrix; do you know why?) and solve for $Y$. Since $Y$ has four entries, you have a system of four linear equations in four unknowns. It's usually solvable unless the choice of $X$ is very bad. One very good choice for our current problem $(1)$ is $X=\pmatrix{0&1\\ 1&0}$.
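With that choice of $X$, the four linear equations are easy to solve by hand; one solution, with a quick NumPy verification (my own sketch):

```python
import numpy as np

X = np.array([[0, 1],
              [1, 0]])
Y = np.array([[0, 0],
              [1, 0]])  # one solution of the four linear equations
Z = np.array([[1, 0],
              [0, -1]])

assert np.array_equal(X @ Y - Y @ X, Z)
print(X @ Y - Y @ X)  # [[1, 0], [0, -1]]
```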
If you want to know more about the equation $XY-YX=Z$, see Kahan's paper Only Commutators Have Trace Zero. |
Why does the pushout preserve monic in an abelian category? | The universal properties are not the only tools available. In fact, this stability of epimorphisms under pullbacks does not hold in arbitrary categories (see the linked question). We really have to use the fact that we are in an abelian category. You can find the proof in the standard sources, for example:
S. Mac Lane, Categories for the working mathematician, Prop. 2 in section VIII.4.
P. Freyd, Abelian categories, Prop. 2.54
Stacks project, Tag 08N4 |
$P(X \geq t) \leq \frac{E(X^2)}{E(X^2) + t^2}$, where $E(X) = 0$ and $E(X^2)$ is finite. | If $t$ is not necessarily positive, then I think this is wrong.
Let $X\equiv 0$.
For $t=-1$ you get
$$1=\mathbb P(X\geq -1)\leq 0 = \frac{0}{0+1}\implies 1\leq 0\implies \unicode{x21af}$$
Another example would be to take $X$ uniform on $\{-1,1\}$ (so that $E(X)=0$ and $E(X^2)=1$), and pick $t= -1$.
You get:
$$1=\mathbb P(X\geq -1)\leq \frac{1}{2} = \frac{1}{1+1}\implies 1\leq\frac{1}{2}\implies \unicode{x21af}$$
I think there are a lot more cases, where this does not apply. |
Constructing a Banach space of cardinality $\beth_{\omega+1}$ | Let us try to estimate $|B|$ from below. Every space $B_{n-1}^*$ ($n\geqslant 1)$ is of the form $C(K_n)$ for some compact Hausdorff space. For example, $K_1 = \beta\mathbb{N}$ and $|K_1| = \beth_1$. In particular, by the Riesz–Markov–Kakutani representation theorem each space $B_n$ is isometric to the space $M(K_n)$ of Radon measures on $K_n$. Moreover, we can embed injectively $K_{n}$ ($n\geqslant 1$) into $B_n$ via $$x\mapsto \delta_x\;(x\in K_n).$$ The subspace $D_n:=\overline{\mbox{span}}\{\delta_x \colon x\in K_{n}\}\subset B_n$ is isometric to $\ell_1(K_{n})$.
We shall prove inductively that $|B_{n}|\geqslant \beth_{n}$ for each $n\geqslant 2$.
Suppose that $|D_n|\geqslant \beth_{n}$. It is clear that $|B_n|\geqslant |D_n|=|\ell_1(K_n)|=|K_n|$. In particular, $|K_n|\geqslant \beth_n$.
We claim that $|D_{n+1}|\geqslant \beth_{n+1}$. Indeed, $$|B_n^{**}|\geqslant |D_n^{**}|=|C(\beta |K_{n}|)^{*}|\geqslant |\beta |K_{n}||\geqslant 2^{\beth_{n}}=\beth_{n+1},$$
where the last inequality follows from Pospíšil's theorem.
We have thus proved that $|B|\geqslant \beth_\omega$. However, if $\kappa$ is the cardinality of a Banach space, then $\kappa^{\aleph_0}=\kappa$. Consequently, $|B|\geqslant (\beth_\omega)^{\aleph_0}=\beth_{\omega+1}$ (see this thread for the proof).
The opposite inequality is quite easy. |
Prove that for each prime $p$ there exists a nonabelian group of order $p^3$ | Your approach doesn't seem fruitful. All of the techniques you are using could at best show that it's not impossible to have an abelian group of order $p^3$. :)
Basically this comes down to the fact that $|\text{Aut}(\mathbb{F}_p^2)|=(p^2-1)(p^2-p)$, and so, in particular is divisible by $p$. Thus, there is a non-trivial semi-direct product $\mathbb{F}_p^2\rtimes (\mathbb{Z}/p\mathbb{Z})$.
Explicitly you can construct a non-abelian group of order $p^3$ by looking at the Heisenberg group:
$$H(p):=\left\{\begin{pmatrix}1 & a & b\\ 0 & 1 & c\\ 0 & 0 & 1\end{pmatrix}:a,b,c\in\mathbb{F}_p\right\}$$
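A quick check that $H(p)$ is indeed non-abelian (a Python/NumPy sketch for $p=3$; any prime works):

```python
import numpy as np

p = 3

def heis(a, b, c):
    return np.array([[1, a, b],
                     [0, 1, c],
                     [0, 0, 1]])

g, h = heis(1, 0, 0), heis(0, 0, 1)
print(np.array_equal((g @ h) % p, (h @ g) % p))  # False: H(p) is non-abelian
```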
It's probably worth mentioning that there are always exactly $5$ groups of order $p^3$, two non-abelian, for each prime $p$. See here.
I feel like it's worth mentioning the following interesting fact:
Call a number $n$ nilpotent if $n=p_1^{e_1}\cdots p_m^{e_m}$ such that for all $i,j$ and for all $1\leqslant k\leqslant e_i$ we have that $p_i^k\not\equiv 1\mod p_j$.
Theorem:
All groups of order $n$ are nilpotent if and only if $n$ is a nilpotent number.
All groups of order $n$ are abelian if and only if $n$ is a cubefree nilpotent number.
All groups of order $n$ are cyclic if and only if $n$ is a squarefree nilpotent number (this is equivalent to $(n,\varphi(n))=1$). |
Easy question about vector spaces | $\mathbb{R}$ is a vector space over $\mathbb{Q}$ and $\mathbb{Q}(\sqrt{2})$ and the dimensions are the same.
But $\dim_{K'} F = (\dim_K F)(\dim_{K'} K)$, so if $\dim_K F$ is finite then $\dim_{K'} K = 1$. |
Evaluate $\lim_{n\to{\infty}}\frac{e(1-\frac{1}{n})^n-1}{n^{\alpha}}=c$ | $$\begin{aligned}
(1-1/n)^n
&=\exp(n\ln(1-1/n))\\
&=\exp(-n(1/n+1/(2n^2)+O(1/n^3)))\\
&=\exp(-1-1/(2n)+O(1/n^2))\\
&=\exp(-1/(2n)+O(1/n^2))/e\\
&=(1-1/(2n)+O(1/n^2))/e
\end{aligned}$$
so the numerator is
$-1/(2n)+O(1/n^2)$.
For the limit to be a nonzero constant, we must have
$\alpha=-1$.
The limit is then
$c=-1/2$
so
$12(c-\alpha)=12(-1/2-(-1))=12(1/2)=6$.
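A quick numerical check (Python sketch) that the numerator behaves like $-1/(2n)$, so $\alpha=-1$ and $c=-1/2$:

```python
import math

for n in (10, 100, 1000, 10_000):
    numerator = math.e * (1 - 1 / n) ** n - 1
    print(n, n * numerator)  # tends to c = -1/2
```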
$f(A)=\left\{f(x):x\in A\right\},f^{-1}(B)=\left\{x\in X:f(x)\in B\right\}$ | The key to understanding these is that you can start with $x \in A \Rightarrow f(x) \in f(A) \Rightarrow x \in f^{-1}(f(A)) \Rightarrow A \subseteq f^{-1}(f(A))$, and equality occurs if $f$ is one-to-one. Thus choice $A)$ has to be true. To see where the injectivity of $f$ plays into the proof, look at the step you need to show: $f^{-1}(f(A)) \subseteq A$. Let $x \in f^{-1}(f(A)) \Rightarrow f(x) \in f(A) \Rightarrow f(x) = f(a)$ for some $a \in A \Rightarrow x = a$ (since $f$ is one-to-one) $\Rightarrow x \in A$. Thus $f^{-1}(f(A)) \subseteq A$, and together with the first part, you have $A = f^{-1}(f(A))$.
Using a generating function to determine $a_n$ from a general second order recurrence relation | Hint: expand the generating function in partial fractions. You'll want to consider two cases, depending on whether or not the denominator has distinct roots. |
$\sum_{k=1}^{\infty}|a_kb_k| < \infty$ $\forall \sum_{k=1}^\infty|a_k|$ convergent. Prove $\{b_k\}_{k=1}^\infty$ is bounded. For continuous functions? | Answer for the first part. Suppose $(b_k)$ is not bounded. Then there is a subsequence $(k_j)$ such that $|b_{k_j}| >j$ for all $j$. Let $a_k=0$ if $k \notin \{k_1,k_2,\dots\}$ and $a_{k_j}=\frac 1 {j b_{k_j}}$. Then $\sum |a_k| <\infty$ (since $|a_{k_j}|<1/j^2$) but $\sum |a_kb_k| =\infty$ (since $|a_{k_j}b_{k_j}|=1/j$).
Why is $\int_{0}^{t} e^{nt} \mathrm{\ dt} = \frac{1}{n} \left(e^{nt} - 1\right)$? [solved; notation is also faulty in the first place] | $$\int_0^t e^{nz} \mathrm{\ d}z=\left[\frac{1}{n}e^{nz}\right]_0^t=\frac{e^{nt}-1}{n}$$ |
Challenging inequality with three variables | $\alpha$ can simply be factored out, so we can set it to $1$ and then ignore it.
At Least Two of $\boldsymbol{a,b,c}$ Must be Equal
To ensure that
$$
\delta\left[f(a)+f(b)+f(c)\right]=f'(a)\,\delta a+f'(b)\,\delta b+f'(c)\,\delta c=0
$$
for all variations $\delta a,\delta b,\delta c$ so that
$$
\delta(abc)=abc\left(\frac{\delta a}{a}+\frac{\delta b}{b}+\frac{\delta c}{c}\right)=0
$$
orthogonality requires a $\lambda$ so that
$$
af'(a)=bf'(b)=cf'(c)=\lambda
$$
For $f(x)=\left(\frac{x}{x^{11}+1}\right)^{1/\beta}$, if we look at $xf'(x)$, we see that it is $2$-to-$1$ everywhere, except at the extreme points. This means that whatever $\lambda$ we have, there are at most two values of $x$ so that $xf'(x)=\lambda$. That is, at least two of $a,b,c$ must be equal.
Optimal Value of $\boldsymbol{a}$
Without loss of generality, assume that $b=a$ and $c=a^{-2}$. Then we want to maximize $2f(a)+f\!\left(a^{-2}\right)$:
$$
\begin{align}
\frac{\mathrm{d}}{\mathrm{d}a}\left(2f(a)+f\!\left(a^{-2}\right)\right)
&=\frac2a\left(af'(a)-a^{-2}f'\!\left(a^{-2}\right)\right)\\
&=0
\end{align}
$$
is true when $a=1$. However, if $1\lt\beta\lt1.088$, there are two values of $a$ where the derivative vanishes. Looking at plots, it appears that the critical point at $a=1$ gives the maximum; in fact, the plot for $\beta=1$ shows that the critical point at $a=1$ gives the maximum.
Greater values of $\beta$ give a smaller maximum for $a\lt1$.
If we accept that $a=b=c=1$ gives the maximum, we get
$$
\left(\frac{a}{a^{11}+1}\right)^{1/\beta}+\left(\frac{b}{b^{11}+1}\right)^{1/\beta}+\left(\frac{c}{c^{11}+1}\right)^{1/\beta}\le3\left(\frac12\right)^{1/\beta}
$$ |
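A numerical spot-check of the claimed maximum along the constraint $abc=1$ with $b=a$, $c=a^{-2}$ (a Python sketch for the hypothetical choice $\beta=2$):

```python
import numpy as np

beta = 2.0
f = lambda x: (x / (x**11 + 1)) ** (1 / beta)

a = np.linspace(0.2, 3.0, 2001)
vals = 2 * f(a) + f(a**-2.0)  # b = a, c = a^{-2} so that abc = 1
print(vals.max(), 3 * 0.5 ** (1 / beta))  # max ~ 2.1213, attained near a = 1
```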
Show that the sum function $f(x) = \sum_{n=1}^\infty \frac{1}{ \sqrt{n} } (\exp(-x^2/n)-1)$ is continuous | Showing that $f$ is continuous on $[-K, K]$ for all $K>0$ is enough, because for a given $x \in \mathbb{R}$, you can let $K=|x|+1$ and get that $f$ is continuous on $[-|x|-1, |x|+1]$, hence continuous at the interior point $x$. |
Convergence of $\sum_{n=1}^{\infty }\left ( 1+\frac{1}{n} \right )^{n^{2}}\cdot \frac{(-1)^n}{ne^{n}}$ | We have
\begin{align}u_n=\left ( 1+\frac{1}{n} \right )^{n^{2}} \frac{(-1)^n}{ne^{n}}&=\frac{(-1)^n}{ne^n}e^{n^2\log(1+\frac{1}{n})}=\frac{(-1)^n}{ne^n}e^{n^2\left(\frac{1}{n}-\frac{1}{2n^2}+\frac{1}{3n^3}+o\left(\frac{1}{n^3}\right)\right)}\\&=\frac{(-1)^n}{n}e^{-1/2}\left(1+\frac{1}{3n}+o\left(\frac{1}{n}\right)\right)=\frac{(-1)^ne^{-1/2}}{n}+O\left(\frac{1}{n^2}\right)\end{align}
hence the series $\displaystyle \sum_{n=1}^\infty u_n$ converges, as the sum of a convergent alternating series and an absolutely convergent one; moreover, since
$$|u_n|\sim_\infty \frac{e^{-1/2}}{n}$$
the series $\displaystyle \sum_{n=1}^\infty |u_n|$ is divergent.
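Numerically (a Python sketch, computing through logarithms to avoid overflow), $n|u_n|$ indeed tends to $e^{-1/2}\approx 0.6065$, confirming that $\sum|u_n|$ diverges like a multiple of the harmonic series:

```python
import math

def abs_u(n):
    # log|u_n| = n^2 * log(1 + 1/n) - n - log(n), exponentiated at the end
    return math.exp(n * n * math.log1p(1 / n) - n - math.log(n))

for n in (10, 100, 1000, 10_000):
    print(n, n * abs_u(n), math.exp(-0.5))  # ratio tends to e^{-1/2}
```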