Offsetting a 2-D polynomial | I interpret this as a question of recentering a polynomial. The formula
$$
\sum_{k=0}^p a_k \left(z+z'\right)^k = \sum_{\ell=0}^p \left(\sum_{k=\ell}^p \binom{k}{\ell} (z')^{k-\ell} a_k \right) z^\ell=\sum_{\ell=0}^p b_{\ell}\, z^\ell
$$
can be derived using Taylor series. |
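To sanity-check the recentering formula numerically (my own addition, not part of the original answer), here is a short Python sketch: the coefficients $b_\ell$ computed by the binomial sum must reproduce the shifted polynomial at every point.

```python
from math import comb

def recenter(a, zp):
    """Given coefficients a_k of sum_k a_k z^k, return b_l with
    sum_k a_k (z + zp)^k = sum_l b_l z^l, via the binomial sum."""
    p = len(a) - 1
    return [sum(comb(k, l) * zp ** (k - l) * a[k] for k in range(l, p + 1))
            for l in range(p + 1)]

def poly_eval(c, z):
    return sum(ck * z ** k for k, ck in enumerate(c))

a, zp = [3, -1, 4, 2], 5          # arbitrary example data
b = recenter(a, zp)
for z in (-2, 0, 1, 7):           # both sides of the identity agree at every test point
    assert poly_eval(a, z + zp) == poly_eval(b, z)
```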
if $\lim_{n\rightarrow\infty}\int_E|f_n|=\int_E|f|$ then $\int_E|f-f_n|\rightarrow 0$ | Perhaps you can simply use Fatou's Lemma:
\begin{align*}
2\int|f|=\int\liminf_{n}(|f_{n}|+|f|-|f_{n}-f|)\leq\liminf_{n}\int(|f_{n}|+|f|-|f_{n}-f|),
\end{align*}
so
\begin{align*}
2\int|f|\leq 2\int|f|-\limsup_{n}\int|f_{n}-f|,
\end{align*}
so
\begin{align*}
\limsup_{n}\int|f_{n}-f|\leq 0.
\end{align*}
Anyway, now I see why Fatou's Lemma applies: the integrand is nonnegative, since $|f_{n}-f|\leq|f_{n}|+|f|$ gives $|f_{n}|+|f|-|f_{n}-f|\geq 0$; rearranging also gives $|f_{n}-f|-|f_{n}|\leq|f|$, so $|f_{n}-f|+|f|-|f_{n}|\leq 2|f|$. |
Finding expectation using cumulative distribution function. | That formula only works when your random variable is confined to the integers, which yours is not (since it can be $2.2$ with positive probability). Instead you can use the formula $E[X]=\int_0^\infty (1-F(x)) dx - \int_{-\infty}^0 F(x) dx$. This formula holds for general $X$ provided that the subtraction makes sense (i.e. you don't have $\infty-\infty$). The formula you wrote is obtained if you split the integrals into the intervals $[n,n+1)$ for integers $n$, provided that $F$ is always constant on such intervals (which again is not the case here).
You can also just write down the PMF and use the usual sum over all possible values. The PMF will be $p(x)=\lim_{y \to x^+} F(y)-\lim_{y \to x^-} F(y)$ for all $x$ where $F$ has a jump. |
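As an illustration (a hypothetical two-atom distribution of my choosing, not from the question), the general formula $E[X]=\int_0^\infty (1-F(x))\,dx - \int_{-\infty}^0 F(x)\,dx$ can be checked numerically in Python, including a non-integer atom like the $2.2$ mentioned above:

```python
# Hypothetical discrete distribution with a non-integer atom (as in the question,
# where 2.2 has positive probability); the values here are made up.
atoms = {-1.0: 0.3, 2.2: 0.7}

def F(x):
    """CDF of the distribution above."""
    return sum(p for v, p in atoms.items() if v <= x)

def expectation_via_cdf(lo=-10.0, hi=10.0, n=200_000):
    """Numerically evaluate int_0^inf (1-F) dx - int_{-inf}^0 F dx (midpoint rule)."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * h
        total += (1 - F(x)) * h if x >= 0 else -F(x) * h
    return total

exact = sum(v * p for v, p in atoms.items())   # ordinary sum over the PMF
approx = expectation_via_cdf()
assert abs(approx - exact) < 1e-3
```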
does brownian motion and poisson random measure have to be independent? | Yes, they are independent; for a proof see e.g.
N. Ikeda, S. Watanabe: Stochastic Differential Equations and Diffusion Processes, Theorem II.6.3.
Note that a similar question (for the case of Poisson processes, not Poisson random measures) has been discussed on mathoverflow. |
Problem on Bayes's Theorem | We are told that Messi is on the team. The rest of the team can be chosen in $\binom{S-1}{N-1}$ ways. We are invited to assume that all these ways are equally likely.
We want to count the number of ways in which the team has $i$ members from Messi's group, not including Messi. These $i$ members can be chosen in $\binom{M-1}{i}$ ways, and the rest of the $N-1-i$ members of the team can be chosen in $\binom{S-M}{N-1-i}$ ways. So the probability the team has $i$ additional members from Messi's group is
$$\frac{\binom{M-1}{i}\binom{S-M}{N-1-i}}{\binom{S-1}{N-1}}$$
Add these probabilities up for $i=K$ to $M-1$.
Remark: If for example the number of members in $M$ is greater than $N$, then some of the terms in the sum will involve binomial coefficients $\binom{a}{b}$, where $a\lt b$. That is no problem, if under such conditions we define $\binom{a}{b}$ to be $0$. |
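A quick Python check (with made-up values for $S$, $N$, $M$, $K$, since the question's numbers aren't reproduced here) confirms that these hypergeometric probabilities sum to $1$ over all $i$, which is Vandermonde's identity in disguise:

```python
from math import comb

# Hypothetical sizes (not from the question): S players in the pool, a team of N
# that includes Messi, M players in Messi's group (Messi included), threshold K.
S, N, M, K = 20, 11, 6, 3

def prob_i_extra(i):
    """P(the team contains exactly i members of Messi's group besides Messi)."""
    if not (0 <= i <= M - 1 and 0 <= N - 1 - i <= S - M):
        return 0.0   # infeasible i: the corresponding binomial coefficient is 0
    return comb(M - 1, i) * comb(S - M, N - 1 - i) / comb(S - 1, N - 1)

total = sum(prob_i_extra(i) for i in range(M))
assert abs(total - 1) < 1e-12

# the answer's final sum, P(at least K additional group members on the team)
p_at_least_K = sum(prob_i_extra(i) for i in range(K, M))
```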
Show that the collection of all rectangles $[a_1,b_1]\times\dots\times[a_n, b_n]$ with $a_i,b_i$ rational can be arranged in a sequence | We know that $\mathbb Q$ can be arranged in a sequence. Therefore, we can create an infinite array:
$$p_1, q_{11}, q_{12},q_{13},\dots\\ p_2, q_{21}, q_{22},q_{23},\dots\\p_3, q_{31}, q_{32},q_{33},\dots\\\vdots$$
where
each rational number appears as exactly one $p_i$ (i.e., for each $x\in\mathbb Q$, there exists exactly one $i$ such that $x=p_i$);
for each $i$, the numbers $q_{i1},q_{i2},\dots$ form a non-repeating list of all rational numbers larger than $p_i$ (i.e., for each $i$ and each $y\in \mathbb Q$ such that $y>p_i$, there exists exactly one $j$ such that $y=q_{ij}$).
Can you continue from here?
Hint:
For each interval $[a,b]$ with rational $a,b$, there exists exactly one pair $(i,j)$ such that $a=p_i$ and $b=q_{ij}$. |
UMVUE of $\frac{p}{1-p}$ when $X\sim bin(n,p)$ | The UMVUE does not exist for $\frac{p}{1-p}$, because no unbiased estimator exists at all. Suppose $T$ were unbiased:
$$E[T(X)]=\sum_{t=0}^{n}T(t){n\choose t}p^t(1-p)^{n-t}=\frac{p}{1-p}$$
$$\sum_{t=0}^{n}T(t){n\choose t}\left(\frac{p}{1-p}\right)^t(1-p)^{n}=\frac{p}{1-p}$$
$$\sum_{t=0}^{n}T(t){n\choose t}\left(\frac{p}{1-p}\right)^t=\frac{p}{1-p}\cdot\frac{1}{(1-p)^{n}}$$
Setting $\lambda=\frac{p}{1-p}$, so that $\frac{1}{1-p}=1+\lambda$, this says that for all $\lambda>0$
$$\sum_{t=0}^{n}T(t){n\choose t}\lambda^t=\lambda\,(1+\lambda)^n.$$
But that cannot happen: the left-hand side is a polynomial in $\lambda$ of degree at most $n$, while the right-hand side has degree $n+1$, so the two sides cannot agree for all $\lambda>0$.
Another way: clearly $\frac{p}{1-p}=-1+\frac{1}{1-p}$, and $\frac{1}{1-p}$ is not U-estimable (for the same reason that $\frac{1}{p}$ is not), so $\frac{p}{1-p}=-1+\frac{1}{1-p}$ is not U-estimable either. |
If some sequence $(x_{n})$ is convergent in $X$ show that $(x_{n})$ is stationary (eventually constant). | Hint: Suppose $x_n\rightarrow x$ but there are $x_n\neq x$ with $n$ arbitrarily large. Consider the neighborhood of $x$ defined by $V:=X - \{x_n | x_n\neq x\}$. Does the sequence $(x_n)$ eventually lie in this neighborhood? |
Prove that $\dim_{R/(p)}(p^nM/p^{n+1}M)=0$ | That is because, more generally, given two ideals $I$ and $J$ in a ring $R$, one has
$$I\cdot R/J=(I+J)/J.\tag1$$
This relation yields here
$$p^n M=p^n\cdot R/(q^r)=(p^nR+q^rR)/(q^r)=R/(q^r), $$
since the ideals $(p^n)$ and $(q^r)$ are coprime.
Proof of $\;(1)$ :
$ I\cdot R/J=\{\,i+J\mid i\in I\,\}$
$ (I+J)/J=\{\,i+j+J\mid i\in I,\, j\in J\,\}=\{\,i+(j+J)\mid i\in I,\, j\in J\,\}$.
Just observe that $j+J=J$. |
Estimation of $\psi (n)$ in number theory | \begin{align*}
e^{\psi(2n+1)} \int_{0}^{1} x^{n} (1-x)^n dx &= \text{lcm}(1,2,...,2n+1) \int_{0}^{1} x^n \sum_{k=0}^{n}\binom{n}{k}(-1)^k x^k dx\\
&= \text{lcm}(1,2,...,2n+1) \sum_{k=0}^{n}\binom{n}{k}(-1)^k \int_{0}^{1} x^{n+k} dx\\
&= \text{lcm}(1,2,...,2n+1) \sum_{k=0}^{n}\binom{n}{k}(-1)^k \frac{1}{n+k+1}\\
&\in \mathbb{Z}
\end{align*}
$e^{\psi(2n+1)} \int_{0}^{1} x^{n} (1-x)^n dx > 0$ is obvious; being a positive integer, it is at least $1$.
Since
\begin{align*}
e^{\psi(2n+1)} \int_{0}^{1} x^{n} (1-x)^n dx &= e^{\psi(2n+1)} \frac{n!}{2^n(2n+1)!!}\\
&\geq 1
\end{align*}
we have
$$
e^{\psi(2n+1)} \geq \frac{2^n(2n+1)!!}{n!} \geq 2^{2n}
$$
which ends the proof.
I am so sorry that I work it out right after I asked this question. Thank you all! If my proof has any mistake, please let me know. |
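For reassurance, here is a Python check (my addition) of the two facts the proof rests on, using $e^{\psi(m)}=\operatorname{lcm}(1,\dots,m)$: the scaled Beta integral is a positive integer, and the resulting bound $e^{\psi(2n+1)}\geq 2^{2n}$ holds for small $n$.

```python
from fractions import Fraction
from math import comb, lcm

def beta_integral(n):
    """Exact value of int_0^1 x^n (1-x)^n dx = sum_k C(n,k)(-1)^k / (n+k+1)."""
    return sum(Fraction((-1) ** k * comb(n, k), n + k + 1) for k in range(n + 1))

for n in range(1, 12):
    L = lcm(*range(1, 2 * n + 2))             # e^{psi(2n+1)} = lcm(1, ..., 2n+1)
    val = L * beta_integral(n)
    assert val.denominator == 1 and val > 0   # a positive integer, hence >= 1
    assert L >= 4 ** n                        # the bound e^{psi(2n+1)} >= 2^{2n}
```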
How to prove that $Ax = e^x$ has two solutions when $e < A < \infty$ | Consider the function $f(x)=e^x-Ax$. Then $f(0)=1$. We have $f'(x)=e^x-A$, $f''(x)=e^x$. As the second derivative is always positive, our function is convex. The derivative has a single zero at $x=\log A$, so $f$ has a minimum at that point. This means that
$$
f(x)\geq f(\log A)=A-A\log A.
$$
As $\lim_{x\to\infty}f(x)=\infty$, if the minimum is negative, then $f$ will have two roots (and none if the minimum is positive).
Assuming $A>0$, we have $A\log A>A$ precisely when $\log A>1$, i.e. $A>e$.
In conclusion, we have
Two points where $Ax=e^x$ when $A>e$;
One point where $Ax=e^x$ when $A=e$;
No points where $Ax=e^x$ when $0<A<e$. |
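A numerical illustration (a sketch, not part of the original answer): for $A>e$, a simple bisection on either side of the minimum at $x=\log A$ locates the two roots.

```python
import math

def roots_of_Ax_eq_expx(A):
    """For A > e, locate the two solutions of A*x = e^x by bisection,
    one on each side of the minimum of f(x) = e^x - A*x at x = log(A)."""
    assert A > math.e
    f = lambda x: math.exp(x) - A * x
    xmin = math.log(A)           # f(xmin) = A - A*log(A) < 0, while f(0) = 1 > 0

    def bisect(lo, hi):
        for _ in range(200):
            mid = (lo + hi) / 2
            if f(lo) * f(mid) <= 0:
                hi = mid
            else:
                lo = mid
        return (lo + hi) / 2

    hi = xmin
    while f(hi) < 0:             # march right until f turns positive
        hi += 1.0
    return bisect(0.0, xmin), bisect(xmin, hi)

r1, r2 = roots_of_Ax_eq_expx(5.0)
assert r1 < math.log(5.0) < r2
assert abs(math.exp(r1) - 5.0 * r1) < 1e-9
assert abs(math.exp(r2) - 5.0 * r2) < 1e-9
```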
Understanding solution of binomial problem | We want to show by induction on $n$: If $1\leq j,k\leq p-1$ and $n\equiv k \pmod{p-1}$ then
\begin{align*}
\binom{n}{j}+\binom{n}{(p-1)+j}+\binom{n}{2(p-1)+j}+\binom{n}{3(p-1)+j}+\cdots\equiv \binom{k}{j} \pmod{p}
\end{align*}
The recombination of the sums that OP is asking about is an essential part of the induction step. To better see what's going on, we look at the entire induction step.
We obtain
\begin{align*}
\sum_{l\geq 0}&\binom{n}{l(p-1)+j}\\
&=\sum_{l\geq 0}\binom{m+(p-1)}{l(p-1)+j}\tag{1}\\
&=\sum_{l\geq 0}\sum_{i=0}^{\min\{l(p-1)+j,p-1\}}\binom{m}{l(p-1)+j-i}\binom{p-1}{i}\tag{2}\\
&\equiv \sum_{l\geq 0}\sum_{i=0}^{\min\{l(p-1)+j,p-1\}}(-1)^i\binom{m}{l(p-1)+j-i}\tag{3}\\
&=\sum_{i=0}^j(-1)^i\binom{m}{j-i}+\sum_{l>0}\sum_{i=0}^{p-1}\binom{m}{l(p-1)+j-i}\tag{4}\\
&=\sum_{i=0}^j(-1)^i\binom{m}{j-i}+\sum_{i=0}^{p-1}\binom{m}{p-1+j-i}+\sum_{i=0}^{p-1}\binom{m}{2(p-1)+j-i}+\cdots\tag{5}\\
&=\left[\color{blue}{\binom{m}{j}}-\binom{m}{j-1}+\cdots-(-1)^j\binom{m}{1}+(-1)^j\binom{m}{0}\right]\\
&\qquad +\left[\color{blue}{\binom{m}{(p-1)+j}}-\binom{m}{(p-1)+j-1}+\cdots+\binom{m}{j}\right]\\
&\qquad +\left[\color{blue}{\binom{m}{2(p-1)+j}}-\binom{m}{2(p-1)+j-1}+\cdots+\binom{m}{(p-1)+j}\right]\tag{6}\\
&\qquad \ \,\vdots\\
&=\color{blue}{\sum_{l\geq 0}\binom{m}{l(p-1)+j}}\\
&\qquad+\left[(-1)^j\binom{m}{0}-(-1)^j\binom{m}{1}+\cdots-\binom{m}{j-1}\right]\\
&\qquad+\left[\binom{m}{j}-\binom{m}{j+1}+\cdots-\binom{m}{(p-1)+j-1}\right]\\
&\qquad+\left[\binom{m}{(p-1)+j}-\binom{m}{(p-1)+j+1}+\cdots+\binom{m}{2(p-1)+j-1}\right]\tag{7}\\
&\qquad \ \,\vdots\\
&=\color{blue}{\sum_{l\geq 0}\binom{m}{l(p-1)+j}}+(-1)^j\sum_{i=0}^m\binom{m}{i}(-1)^i\tag{8}\\
&=\color{blue}{\sum_{l\geq 0}\binom{m}{l(p-1)+j}}+(-1)^j(1-1)^m\\
&=\color{blue}{\sum_{l\geq 0}\binom{m}{l(p-1)+j}}\\
&\equiv \binom{k}{j}\pmod{p}\tag{9}
\end{align*}
and the induction step is completed.
Comment:
In (1) we set $n=m+p-1$.
In (2) we apply Vandermonde's identity.
We explicitly write the lower and upper limits of the inner sum, which will be useful in the next steps.
In (3) we use $\binom{p-1}{i}\equiv (-1)^i\pmod{p}$ corresponding to problem 1.1.a in the paper.
In (4) we split the summand with $l=0$ since the inner sum has upper limit $j$ while in all other cases ($l>0$) the inner sum has upper limit $p-1$.
In (5) we write the summands with $l=0,1$ and $l=2$ explicitly.
In (6) we expand each of the sums with $l=0,1$ and $l=2$.
In (7) we collect the first summand (marked in blue) of each of the sums and write the other summands in reverse order.
In (8) we observe that all the terms from the expanded sums are consecutive terms starting with $(-1)^j\binom{m}{0}$ and they can be written as one sum.
In (9) we apply the induction hypothesis.
Note: In the paper there is a typo in the derivation of the formula on page 8. In the expression
\begin{align*}
+\left(\binom{m}{2(p-1)+j}-\binom{m}{2(p-1)+j-1}+\cdots +\binom{m}{\color{blue}{2}(p-1)+j}\right)+\cdots
\end{align*}
the blue marked factor $\color{blue}{2}$ is not correct and should be deleted. The last summand is $\binom{m}{(p-1)+j}$. |
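For the skeptical reader, a brute-force Python check of the stated congruence for small primes (my addition, taking $k$ to be the representative of $n$ modulo $p-1$ in $\{1,\dots,p-1\}$):

```python
from math import comb

def lhs(n, j, p):
    """sum_{l >= 0} C(n, l*(p-1) + j); comb returns 0 once l*(p-1)+j > n."""
    return sum(comb(n, l * (p - 1) + j) for l in range(n // (p - 1) + 2))

for p in (3, 5, 7):
    for n in range(1, 40):
        k = n % (p - 1) or (p - 1)   # representative of n mod p-1 in {1,...,p-1}
        for j in range(1, p):
            assert lhs(n, j, p) % p == comb(k, j) % p
```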
Solving for x by minimizing $\|Ax-b\|_{\infty}$ in matlab | The problem
$$
\min_x\|Ax-b\|_\infty
$$
is equivalent to
$$\tag{*}
\min_{x,t} t \quad \text{subject to} \quad -t\boldsymbol{1}\leq Ax-b\leq t\boldsymbol{1}.
$$
To see this note that the inequality constraint in (*) says that $\|Ax-b\|_\infty\leq t$.
Now your Matlab code is nothing but translating (*) to a form digestible by a standard linear programming solver. |
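Since the Matlab code isn't reproduced here, a small language-agnostic check of the key equivalence (my addition, with made-up data): for any fixed $x$, the smallest feasible $t$ in (*) is exactly $\|Ax-b\|_\infty$.

```python
# Made-up data; for a fixed x, the smallest t satisfying -t <= (Ax - b)_i <= t
# for all i is exactly max_i |(Ax - b)_i| = ||Ax - b||_inf.
A = [[1.0, 2.0], [3.0, -1.0], [0.5, 0.5]]
b = [1.0, 2.0, 3.0]
x = [0.7, -0.2]

residual = [sum(aij * xj for aij, xj in zip(row, x)) - bi
            for row, bi in zip(A, b)]
inf_norm = max(abs(r) for r in residual)

feasible = lambda t: all(-t <= r <= t for r in residual)
assert feasible(inf_norm)                 # t = ||Ax - b||_inf is feasible
assert not feasible(inf_norm - 1e-9)      # nothing smaller is
```

To actually minimize over $x$ as well, one would hand exactly this LP to a solver, e.g. `linprog` in Matlab or `scipy.optimize.linprog` in Python.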
Concentration of the norm (sub-gaussianity) | Using $e^x \geq 1+x$
\begin{align*}
\mathbb{E}exp\left (\frac{(\|X\|_2-\sqrt{n})^2}{(\mathbb{E}\|X\|_2-\sqrt{n})^2}\right ) & \geq 1+\mathbb{E}\frac{(\|X\|_2-\sqrt{n})^2}{(\mathbb{E}\|X\|_2-\sqrt{n})^2} \\
& = 1+\frac{\mathbb{E}\|X\|_2^2-2\sqrt{n}\mathbb{E}\|X\|_2+n}{(\mathbb{E}\|X\|_2)^2-2\sqrt{n}\mathbb{E}\|X\|_2+n} \\
&\geq 2
\end{align*}
hence
$$
|\mathbb{E}\|X\|_2-\sqrt{n}| \leq \|\|X\|_2-\sqrt{n}\|_{\psi_2}\leq CK^2
$$
Other solutions replace $CK^2$ with $o(1)$. |
Trouble with a proof. I cannot prove this without inf many proofs for each and every case. | Take $a_2=0$, $b=0$, $a_1=\sum_{n=0}^k x^{n}$ with $x$ arbitrary. Am I missing something, or is this so trivial a question? |
Determine the range of ${\rm i}\frac{\rm d}{{\rm d}t}$ on $\left\{f\in L^2([0,1]):f\text{ is absolutely continuous and }f(0)=f(1)=0\right\}$ | The description of the range is not correct. If $f(x)=(1-x)\sqrt x$ then it is easy to see that $if' \notin L^{2}$ even though $f$ is AC and vanishes at $0$ and $1$. |
Fact about polynomials | Why would that be true? Take $n\ge 3$ and $Q_3=\ldots = Q_n=1$. Then you can solve for $Q_1$ and $Q_2$ in the system (Cramer's rule) to get them as rational fractions. Now multiply all the $Q_i$ by a common denominator. |
Distribution of shoe production | Without further information, you can't compute these probabilities exactly. You can, however, get some inequalities. For part (a), for example, let $p$ be the probability that production is at least $800$. Since production can't be negative, the mean production $\mu$ is at least $p\times 800$. Thus we have $$800p\leq300\implies p\leq\frac 38.$$
For part (b) you have more information, so you can do a bit better: we can invoke Chebyshev's Inequality. We have $\sigma =\sqrt {150}\approx 12.25$, and we are interested in $k=\frac {100}{\sigma}$, from which we deduce that $$P(|X-300|\geq100)\leq\frac 1{k^2}=\frac{150}{100^2}=0.015.$$
Thus the probability you want is bounded below by $0.985$. |
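The arithmetic can be double-checked in a few lines of Python (my addition):

```python
import math

mu, var = 300.0, 150.0
sigma = math.sqrt(var)

# part (a): since X >= 0, Markov's inequality gives P(X >= 800) <= mu/800 = 3/8
p_a = mu / 800
assert abs(p_a - 3 / 8) < 1e-12

# part (b): Chebyshev with k = 100/sigma gives P(|X - 300| >= 100) <= 1/k^2
k = 100 / sigma
p_b = 1 / k ** 2                 # = var / 100^2 = 150/10000 = 0.015
assert abs(p_b - 0.015) < 1e-9
assert abs((1 - p_b) - 0.985) < 1e-9
```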
Every group with 5 elements is an abelian group | Hint: suppose your group is not abelian. Then you can find two different elements, say (after renumbering) $g_1$ and $g_2$, not equal to the identity, such that $g_1g_2 \neq g_2g_1$. Then the elements $\{e, g_1,g_2, g_1g_2, g_2g_1\}$ are all different. Now try to derive a contradiction (look at $g_1^2$ - which element of the set is this? Do the same for $g_1g_2g_1$). |
CNF/ Create a cnf variable from some forumals of CNF | Correct me if I am wrong, but I am assuming you want something like S XOR T: $S \oplus T$.
This is equivalent to $(T \lor S) \land \lnot (T \land S)$:
To get this in CNF:
$$(T \lor S) \land \lnot (T \land S) \equiv (T \lor S) \land (\lnot T \lor \lnot S)\tag{1}$$
Now, if we want to express this using your assignments of $T = (A \lor C) \land (B \lor C)$ and $S=(D \lor E) \land (F \lor G)$, we get, by substitution into the right-hand side of $(1)$:
$$ \lbrace[(A \lor C) \land (B \lor C)] \lor [(D \lor E) \land (F \lor G)]\rbrace \land \lbrace \lnot[(A \lor C) \land (B \lor C)] \lor \lnot[(D \lor E) \land (F \lor G)]\rbrace\tag{2}$$
Let me know if you need help with changing $(2)$ into CNF, and/or if I've misunderstood your question altogether! |
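A truth-table check of $(1)$ and of the substitution leading to $(2)$ (my addition; variable names as in the question):

```python
from itertools import product

def xor_form(T, S):
    return (T or S) and not (T and S)

def cnf_form(T, S):              # right-hand side of (1)
    return (T or S) and ((not T) or (not S))

# (1): both forms compute exclusive-or on every assignment
for T, S in product([False, True], repeat=2):
    assert bool(xor_form(T, S)) == bool(cnf_form(T, S)) == (T != S)

# (2): after substituting T = (A or C) and (B or C), S = (D or E) and (F or G)
def check2(A, B, C, D, E, F, G):
    T = (A or C) and (B or C)
    S = (D or E) and (F or G)
    return bool((T or S) and ((not T) or (not S))) == (T != S)

assert all(check2(*vals) for vals in product([False, True], repeat=7))
```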
Is there a characterization of groups with the property $\forall N\unlhd G,\:\exists H\leq G\text{ s.t. }H\cong G/N$? | In the comments, Charles Hudgins was kind enough to link me to Ying, J.H. Arch. Math (1973) 24: 561. This was a paper leading up to John Hsiao Ying's 1973 PhD thesis, Relations between subgroups and quotient groups of finite groups. As far as I can tell, the thesis is not available online anywhere; however, I was recently able to track it down from the library at State University of New York at Binghamton. I will summarize here what I learned from reading his research, and also aggregate some of the information from the help I've gotten from you lovely people. Shout out to Dr. Ying, if you're out there somewhere.
There are still open questions littered throughout this answer and around this topic in general, which will hopefully draw some interest from all of you. Please feel free to ask new questions based on this one, or to edit this answer with updated information if you know something I don't.
Definitions
First of all, it turns out there is some existing terminology for studying the relationship between subgroups and quotients. As we have seen in the history of this question on MSE and MO, it is easy to confuse the condition defining these groups with a number of subtly different conditions, each of which has different consequences. The paper introduces the groups that are the topic of this question as those "satisfying condition (B)," but the thesis goes on to give them a name: Q-dual groups. There are several related definitions, the most relevant of which I will condense as follows.
An S-dual group satisfies $\forall H\leq G,\:\exists N\unlhd G\text{ s.t. }H\cong G/N$ -- that is, each subgroup of $G$ is isomorphic to a quotient group of $G$.
A Q-dual group satisfies $\forall N\unlhd G,\:\exists H\leq G\text{ s.t. }H\cong G/N$ -- that is, each quotient group of $G$ is isomorphic to a subgroup of $G$.
A group which is S-dual and Q-dual is self-dual.
Actually, there are two definitions for a self-dual group. The one I've picked comes from Fuchs, Kertész, and Szele (1953). In the context of Abelian groups, there is an alternative definition, which Baer will roar at you about in these papers if you care.
Examples and nonexamples
So, what do we know about Q-dual groups?
To begin with, Q-dual groups are rare. Here Derek Holt presents evidence that most groups are not Q-dual, where "most" is defined in the sense that if $g(n)$ denotes the fraction of isomorphism classes of finite groups of order $\leq n$ that are not Q-dual, $g(n)\to 1$ as $n\to \infty$. On the other hand, it's also important to note that S-dual groups are more rare than Q-dual groups, which is something that Ying points out.
Let's look at some examples, some from the comments, some from papers. Here are some groups which are Q-dual:
Finite simple groups
Symmetric groups
Hall-complemented groups (for each $H\leq G$, there is a $K\leq G$ with $H \cap K = \mathbf{1}$ and $G=HK$)
Higman-Neumann-Neumann type universal groups (see Higman, Neumann, and Neumann (1949))
The semidirect product $A\rtimes \langle z \rangle$ of an abelian $p$-group $A$ with a power automorphism $z$ of prime power order.
Some groups that are not Q-dual:
Some simple examples: $C_3\rtimes C_4$, $C_4\rtimes C_4$, $C_5\rtimes C_4$ with a nontrivial center
Any quasisimple group (that isn't simple)
The commutator subgroup of a free group of rank $2$
Finitely generated linear groups over a field of characteristic $0$. More generally, by Selberg's lemma and Malcev's theorem, any torsion-free hyperbolic group that is residually finite is not Q-dual (which may constitute all torsion-free hyperbolic groups; see here)
Relating this to other properties,
Q-dual groups may or may not be solvable and vice versa
Q-dual groups may or may not be nilpotent and vice versa (this relationship discussed in a section below)
Q-dual groups may or may not be $p$-groups and vice versa
the Q-dual property is not subgroup closed (see example in last section)
Reduction to smaller order
It's often desirable to throw away irrelevant parts of a group when studying a property.
Let $G$ be Q-dual. If there exists an element of prime order in the center that is not a commutator, then $G$ has a non-trivial, cyclic direct factor.
This dovetails nicely with the next theorem.
Let $G$ be Q-dual. If $G=H\times \langle x \rangle$, $H$ and $\langle x \rangle$ are Q-dual.
This lets us reduce away cyclic direct factors.
Building examples of Q-dual groups
To build new Q-dual groups from old ones, take any Q-dual group $H$ that has a unique non-trivial minimal normal subgroup. Given a prime $p\not\mid |H|$, $H$ has a faithful irreducible representation on an elementary abelian $p$-group $C_p^{\; n}$, so we can build $G=C_p^{\; n}\rtimes H$, which is Q-dual by the theorem. You can keep going with that, tacking on as many primes as you like.
Next we have a large class of easily constructed Q-dual groups.
Let $A$ be an elementary abelian $p$-group and $\varphi\in\operatorname{Aut}(A)$ have prime order. Then $A\rtimes \langle \varphi \rangle$ is Q-dual.
It's possible that this family comprises all nonabelian Q-dual $p$-groups of class $2$ with exponent $p$, but this is open.
Relationship to nilpotency
Ying's work focuses largely on nilpotent groups, based on the observation that the Q-dual condition is most pronounced in groups with many normal subgroups. Together with the (almost surely true) conjecture that most finite groups are nilpotent, this seems like a pretty good place to start.
A nilpotent group $G$ is Q-dual if and only if all of its Sylow subgroups are Q-dual.
(Actually this is proven in a paper by A.E. Spencer, which I have yet to get ahold of. I'll post a link when I do.)
This is fair enough, and lets us reduce to studying $p$-groups. From here, he delves into nilpotency class.
Let $G$ be an odd order Q-dual $p$-group of class $2$. Then $G'$ is elementary abelian.
The additional hypotheses that $p$ be odd and the nilpotency class be $2$ are important. The counterexample given is the dihedral group of order $16$, whose commutator subgroup is cyclic of order $4$, and which has nilpotency class $3$. It's important to note that this is a $2$-group, however, and it may be that this is what causes the generalization to fail, not the higher class. In particular, it is still open (as of 1973!) whether odd $p$-groups of class greater than $2$, or $2$-groups of class $2$, have elementary abelian commutator subgroups.
Let $G$ be an odd order Q-dual $p$-group of class $p$. Furthermore, suppose $\Omega_1(G)$ is abelian. Then $G=A\rtimes \langle z \rangle$ where $A$ is abelian, $z$ has order $p$, and $[a,z]=a^{\operatorname{exp}(A)/p}$ for all $a\in A$.
When $\operatorname{exp}(G)>p^2$, this becomes an if and only if.
Let $G$ be a $p$-group of class $2$ with $\operatorname{exp}(G)>p^2>4$. Then $G$ is Q-dual if and only if $G=A\rtimes \langle z \rangle$ where $A$ is abelian, $z$ has order $p$, and $[a,z]=a^{\operatorname{exp}(A)/p}$ for all $a\in A$.
That is a pretty thorough characterization of this special case. When $\operatorname{exp}(G)\leq p^2$, things get more complicated.
Example. Let $p$ be an odd prime, $|a|=p^2$, $|b|=|c|=p$, $[a,x]=a^p$, $[a,y]=b$, $[c,z]=a^p$, and all other commutators between $a,b,c,x,y$ and $z$ equal to $1$. Then $\left(\langle a\rangle\times \langle b\rangle\times \langle c\rangle\right)\rtimes \langle x,y,z\rangle$ is a finite Q-dual $p$-group of class $p$ and exponent $p^2$.
This example shows that the Q-dual property is not subgroup closed, via the subgroup $\langle a,c,x,z\rangle$. It also shows that finite Q-dual $p$-groups of class $2$ need not contain an abelian maximal subgroup. It is still open whether there are counterexamples of this nature for odd primes $p$. |
For which real $a$ does the series $\sum_{k=0}^{\infty}\frac{1}{(1+6k)^{ia}}-\frac{1}{(5+6k)^{ia}}$ vanish? | Your character is $\chi(n)=(\frac{n}3)1_{2\ \nmid\ n}$, the Dirichlet series is $L(\chi,s)=\sum_{n\ge 1}\chi(n)n^{-s}$.
It doesn't converge anywhere on $\Re(s)=0$. You need the analytic continuation, for a non-trivial Dirichlet character it amounts to $L(\chi,it)=\lim_{\sigma\to 0^+}L(\chi,\sigma+it)$.
$$L(\chi,0)=\lim_{s\to 0} L(\chi,s)=\lim_{s\to 0} s\int_1^\infty (\sum_{n\le x} \chi(n))x^{-s-1}dx$$
The mean value of $\sum_{n\le x} \chi(n)$ is $\frac23$.
Since $\sum_{n\le x} \chi(n)-\frac23$ is periodic with mean zero, the integral
$$\int_1^\infty \left(\sum_{n\le x} \chi(n)-\frac23\right)x^{-s-1}dx$$ converges for $\Re(s) > -1$. Whence
$$L(\chi,0)=\lim_{s\to 0} s\int_1^\infty \frac23 x^{-s-1}dx= \frac23$$
Next, following the same argument as for $\zeta(s)$ we can show that $L((\frac{n}3),s)\zeta(s)$ has no zeros on $\Re(s)=1$, whence $L((\frac{n}3),s)$ has no zeros on $\Re(s)=1$. The functional equation implies that $L((\frac{n}3),s)$ has no zeros on $\Re(s)=0$, and $$L(\chi,s)=(1+2^{-s})L((\frac{n}3),s)$$ implies that its zeros on $\Re(s)=0$ are at $\frac{(2 \Bbb{Z}+1)i\pi}{\log 2}$. |
Homology of universal cover of $S^1 \vee S^1 \vee S^2$ is not the same as homology of $\Bbb R^2$ | The universal cover of $T$ is contractible, hence its homology vanishes. Now consider the universal cover $\hat X$ of $X$. You know that $H_2(X) \neq 0$, and you also know that you have the inclusion $i: S^2 \to X$ and that $H_2(i):H_2(S^2)\to H_2(X)$ is non-trivial. But $S^2$ is simply connected, hence there is a lift $\tilde i:S^2 \to \hat X$, and since $H_2(i)=H_2(p\tilde i)=H_2(p)H_2(\tilde i)$ is non-trivial as mentioned before, so is $H_2(\tilde i)$. In particular $\hat X$ has non-trivial homology. |
Sign of a function in a given interval | There are at least two simple ways.
(1) The sign is constant over each of the intervals, so you need only pick one point in each interval and evaluate the function there to determine the sign over the entire interval. (But note that the second interval should be $(-7,2)$.) For example, $$f(-8)=(-8-3)(2-(-8))(-8+7)=(-11)(10)(-1)=110\;,$$ so the function is positive on $(\leftarrow,-7)$.
(2) Make a sign graph:
 x-3:  -    -    -    -    -    0    +
 2-x:  +    +    +    0    -    -    -
 x+7:  -    0    +    +    +    +    +
f(x):  +    0    -    0    +    0    -
      ------|---------|---------|-----
           -7         2         3
On each interval the signs of the factors are constant, so to get the sign of $f(x)$ on the interval just count minus signs: an odd number of negative factors gives you a negative product, and an even number of them gives you a positive product. |
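Method (1) is easy to automate; a short Python sketch (my addition) evaluates one test point per interval:

```python
def f(x):
    return (x - 3) * (2 - x) * (x + 7)

# one test point per interval determines the sign on the whole interval
tests = [(-8, "+"), (0, "-"), (2.5, "+"), (4, "-")]
for x, expected in tests:
    assert ("+" if f(x) > 0 else "-") == expected

assert f(-8) == 110      # the worked example in method (1)
```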
How to show $p_n < p_1 + p_2 + \cdots + p_{n-1}$? | You seem to have the right basic idea, but where you wrote
$$p_{n+1} < p_1 + p_2 +\dotsc + p_{n-1} + p_n < p_n + p_n = 2p_n$$
the second inequality is wrong. It would be better to first invoke Bertrand's postulate to establish $p_{n+1}\lt 2p_n$ and then apply the inductive hypothesis to get
$$p_{n+1}\lt2p_n=p_n+p_n\lt p_1+p_2+\cdots+p_{n-1}+p_n$$ |
How to divide class into groups to maximize the level of collaboration? | Let's enumerate the children using A-O just so that each person can be represented by a letter and then take some pieces of paper that you can divide into 4 sections that each section represents a group. At least this would be the easiest way for me to picture this in my head and then each piece of paper can represent a rotation.
While you could start with the easy part of:
Group 1: A,B,C,D
Group 2: E,F,G,H
Group 3: I,J,K,L
Group 4: M,N,O
Then you could rotate by flipping the matrix-like outline above:
Group 1: A,E,I,M
Group 2: B,F,J,N
Group 3: C,G,K,O
Group 4: D,H,L
Diagonals from either may be another route to getting another grouping (visually, this is starting at one letter and then going over one and down one):
Group 1: M,J,G,D
Group 2: A,F,K,O
Group 3: N,L,E,B
Group 4: C,H,I
The key is to find further groupings so that, at least initially, each person works with others they haven't worked with before, and then to allow some duplication. After the three rotations above, each person has worked with between 7 other people (e.g. H, who has been in the three-person group twice) and 9 (those who stayed in four-person groups and got three new partners each time). Those landing in Group 4 get slightly shortchanged, but I'd think this is a typical scheduling problem in that regard. |
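A short Python tally of the three rotations above (my addition) confirms the 7-to-9 partner count:

```python
# The three rotations described above (groups of the 15 children A..O).
rotations = [
    [["A","B","C","D"], ["E","F","G","H"], ["I","J","K","L"], ["M","N","O"]],
    [["A","E","I","M"], ["B","F","J","N"], ["C","G","K","O"], ["D","H","L"]],
    [["M","J","G","D"], ["A","F","K","O"], ["N","L","E","B"], ["C","H","I"]],
]

partners = {c: set() for c in "ABCDEFGHIJKLMNO"}
for rotation in rotations:
    for group in rotation:
        for child in group:
            partners[child].update(p for p in group if p != child)

counts = {c: len(ps) for c, ps in partners.items()}
# after three rotations everyone has met between 7 and 9 of the other 14 children
assert min(counts.values()) == 7 and max(counts.values()) == 9
```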
Subobject Classifier of a Topos is Injective | Thinking about maps to $\Omega$ as subobjects, this is just saying that if you have a subobject $C\subseteq A$, then there is a subobject $D\subseteq B$ such that $C=A\cap D$ as subobjects of $B$. How do you find such a $D$? Well, you can just take $D=C$!
To make this precise, let $i:C\to A$ be the subobject classified by $f$. Then $gi:C\to B$ is monic since $i$ and $g$ are, so it is classified by a map $h:B\to\Omega$. Now in the diagram
$$\require{AMScd}
\begin{CD}
C @>{1}>> C @>{}>> 1\\
@V{i}VV @VV{gi}V @VV{t}V\\
A @>{g}>> B @>{h}>> \Omega
\end{CD}
$$
the right square is a pullback by definition of $h$ and the left square is a pullback since $g$ is monic. This implies the outer rectangle is a pullback, i.e. that $hg$ classifies the subobject $i:C\to A$, so $hg=f$. |
How to solve $x \geq \sum_{i=1}^{\lfloor\log_2{n}\rfloor} \frac{1}{y+\log_2{x}+i}$ | If $\log_2(n)$ is not necessarily an integer, I suppose you want to take the sum up to $m = \lfloor \log_2(n) \rfloor$. If you write $z = y + \log_2(x)$ your inequality says
$$ x \ge \sum_{i=1}^{m} \frac{1}{i+z}$$
For any real $z$ that is not one of $-1, -2, \ldots, -m$, the right side is a real number $f(z)$, and the inequality is true for all points on the curve $y + \log_2(x) = z$, i.e. $x = 2^{z-y}$, to the right of $x=f(z)$. The boundary of your region is described by the curve $x = \sum_{i=1}^m \frac{1}{i+z}$, which is not particularly pleasant. Here is what your region looks like in the case $m=3$. |
Finite abelian group contains an element with order equal to the lcm of the orders of its elements | A finite abelian group can be written as a (finite) direct product of cyclic groups:
$$
G=C_{m_1}\times C_{m_2}\times\dots\times C_{m_r}
$$
where $C_n$ denotes a cyclic group of order $n$. Thus the order of any element in $G$ divides $\operatorname{lcm}(m_1,m_2,\dots,m_r)$. On the other hand, if $g_i$ is a generator of $C_{m_i}$, the element
$$
g=(g_1,g_2,\dots,g_r)
$$
has order precisely $\operatorname{lcm}(m_1,m_2,\dots,m_r)$.
Fill in the details. |
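To make the construction concrete, here is a small Python illustration (my addition, using the hypothetical example $G=C_4\times C_6$ in additive notation):

```python
from math import gcd, lcm

def order_in_cyclic(g, m):
    """Order of the residue g in the additive cyclic group Z_m."""
    return m // gcd(g, m)

moduli = (4, 6)                              # hypothetical example: G = C_4 x C_6
g = tuple(1 for _ in moduli)                 # (g_1, ..., g_r), each a generator
order_g = lcm(*(order_in_cyclic(gi, mi) for gi, mi in zip(g, moduli)))
assert order_g == lcm(*moduli) == 12         # order of g is lcm(m_1, ..., m_r)

# and every element's order divides that lcm
for a in range(moduli[0]):
    for b in range(moduli[1]):
        o = lcm(order_in_cyclic(a, moduli[0]), order_in_cyclic(b, moduli[1]))
        assert order_g % o == 0
```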
Linear transformation dimensions of domain and range | I will make some assumptions, but I think this is the basic idea:
$v$ is an element of a subspace of dimension $d$, so $$v= \alpha_1 \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_n\end{pmatrix} + \alpha_2 \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n\end{pmatrix} + \cdots + \alpha_d \begin{pmatrix} d_1 \\ d_2 \\ \vdots \\ d_n\end{pmatrix}$$
Since $$A = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \\ a_{m1} & & a_{mn} \end{pmatrix}$$
Then $$Av = \begin{pmatrix} a_{11} (\alpha_1a_1+\alpha_2b_1+\cdots+\alpha_dd_1) + \cdots + a_{1n} (\alpha_1a_n+\alpha_2b_n+\cdots+\alpha_dd_n) \\ \vdots \\ a_{m1} (\alpha_1a_1+\alpha_2b_1+\cdots+\alpha_dd_1) + \cdots + a_{mn} (\alpha_1a_n+\alpha_2b_n+\cdots+\alpha_dd_n) \end{pmatrix}$$
$$ = \begin{pmatrix} \alpha_1c_{11}+\alpha_2c_{12}+\cdots+\alpha_dc_{1d} \\ \vdots \\ \alpha_1c_{m1}+\alpha_2c_{m2}+\cdots+\alpha_dc_{md}\end{pmatrix}$$
$$ = \alpha_1 \begin{pmatrix} c_{11} \\ \vdots \\ c_{m1} \end{pmatrix} + \cdots + \alpha_d \begin{pmatrix} c_{1d} \\ \vdots \\ c_{md} \end{pmatrix} $$
This vector is an element of a $d$-dimensional subspace if the vectors $$\begin{pmatrix} c_{11} \\ \vdots \\ c_{m1} \end{pmatrix}, \cdots, \begin{pmatrix} c_{1d} \\ \vdots \\ c_{md} \end{pmatrix} $$ are linearly independent. The vector may be an element of a subspace of dimension less than $d$ if these vectors are linearly dependent; this must be the case when $$m < d$$ since $d$ vectors in $\mathbb{R}^m$ cannot then be independent. |
Diagonalization of a simple matrix. Arriving at a contradiction | The eigenvalues are $\{3,0,0\}.$ So $0$ is a degenerate eigenvalue, and not a simple root of the characteristic polynomial. What is the dimension of its corresponding eigenspace? According to Mathematica, the eigenvectors corresponding to the eigenvalues $\{3,0,0\}$ are:
$$\left\{\left[\begin{matrix}1\\1\\1\end{matrix}\right],\left[\begin{matrix}-1\\0\\1\end{matrix}\right],\left[\begin{matrix}-1\\1\\0\end{matrix}\right]\right\}. $$
As these are clearly linearly independent, the geometric multiplicity of the eigenvalue $\lambda=0$ (the dimension of its eigenspace) is $2$, matching its algebraic multiplicity and making your matrix diagonalizable. |
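Assuming the matrix in question is the $3\times3$ all-ones matrix (a guess on my part, but consistent with the quoted eigendata), the claims can be verified in plain Python:

```python
# Assuming the matrix is the 3x3 all-ones matrix (hypothetical, but consistent
# with the quoted eigenvalues {3,0,0} and eigenvectors).
A = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

v1, v2, v3 = [1, 1, 1], [-1, 0, 1], [-1, 1, 0]
assert matvec(A, v1) == [3 * x for x in v1]   # eigenvalue 3
assert matvec(A, v2) == [0, 0, 0]             # eigenvalue 0
assert matvec(A, v3) == [0, 0, 0]             # eigenvalue 0

# independence: det of the matrix with columns v1, v2, v3 is nonzero
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

det = sum(x * y for x, y in zip(v1, cross(v2, v3)))
assert det == -3   # nonzero, so the eigenvectors span R^3 and A is diagonalizable
```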
Meaning of index of a multiplication symbol in a Cartesian product | As you say, these indices are used to denote a pullback rather than a (cartesian) product. In a general category, given two arrows with the same codomain $X\stackrel{f}\longrightarrow Z\stackrel{g}\longleftarrow Y$, their pullback is an object, denoted $X\ {_f\times_g }\ Y$ (or more frequently, $X\times_Z Y$), and equipped with two maps $\pi_1 :X\ {_f\times_g }\ Y\to X$ and $\pi_2:X\ {_f\times_g }\ Y\to Y$ such that $f\circ\pi_1=g\circ\pi_2$, wich is universal among such objects; this means that for every object $W$ and arrows $u:W\to X$ and $v:W\to Y$ such that $f\circ u=g\circ v$, there exist a unique map $w:W\to X\ {_f\times_g }\ Y$ such that $\pi_1 \circ w=u$ and $\pi_2\circ w=v$.
By contrast, the product of $X$ and $Y$ is defined by the same data of an object with two maps, but without the condition $f\circ\pi_1=g\circ\pi_2$, and it has the same universal property, but without the condition $f\circ u=g\circ v$; so the pullback is a bit like a product but with an additional restriction. Incidentally, the pullback is often called "fibered product".
In the set-theoretical case, the pullback can be easily obtained from the product: whereas the product is the set of pairs $(x,y)$ such that $x\in X$ and $y\in Y$, the pullbacks is the set of such pairs with the additional condition that $f(x)=g(y)$. The two maps $\pi_1$ and $\pi_2$ are then just the restriction of the product projections to this set (i.e. $\pi_1(x,y)=x$ and $\pi_2(x,y)=y$).
In this specific case, the pullback $C_{\mathrm{mor}}\ {_{s} \times_{t}}\ C_{\mathrm{mor}}$ is the set of pairs of arrows $(\alpha,\beta)$ such that the source/domain of $\alpha$ is the target/codomain of $\beta$; in other words, it is the set of pairs of arrows that can be composed in your category $C$. |
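A set-theoretic sketch in Python (my addition; the sets and maps are made up) of the pullback construction just described:

```python
from itertools import product

def pullback(X, Y, f, g):
    """Set-theoretic pullback: pairs (x, y) with f(x) == g(y), together with
    the two projections restricted from the cartesian product."""
    P = {(x, y) for x, y in product(X, Y) if f(x) == g(y)}
    return P, (lambda p: p[0]), (lambda p: p[1])

# made-up example: f and g land in a common codomain {0, 1}
X, Y = {0, 1, 2, 3}, {"a", "b", "c"}
f = lambda x: x % 2
g = lambda y: {"a": 0, "b": 1, "c": 0}[y]

P, pi1, pi2 = pullback(X, Y, f, g)
assert all(f(pi1(p)) == g(pi2(p)) for p in P)   # f . pi1 = g . pi2
assert len(P) == 2 * 2 + 2 * 1                  # evens x {a,c} plus odds x {b}
```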
Anti-derivative of Lognormal CDF | The I_function is not correct. Your R code should be more like the following.
erfc <- function(x) 1 - (2 * pnorm(x * sqrt(2)) - 1)
I_function<-function(x,mu,sigma){
return( (x*erfc((mu - log(x))/(sqrt(2)*sigma)) -
exp(mu + sigma**2/2)*erfc((mu + sigma**2 - log(x))/(sqrt(2)*sigma)))/2)
}
integral_over_LogNormalCDF<-function(lowerBound,upperBound,mu,sigma){
return(I_function(upperBound,mu,sigma)-I_function(lowerBound,mu,sigma))
}
The test values you used result in the following:
integral_over_LogNormalCDF(2,3,10,3)
# 0.001231376
integral_over_LogNormalCDF(2,3,1,3)
# 0.4879823
integral_over_LogNormalCDF(2,3,-1,3)
# 0.7376231 |
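As an independent sanity check, here is a Python port of the same closed form (the function names are mine), compared against direct numerical integration of the lognormal CDF with a composite Simpson rule; it reproduces the $\mu=1$ value above:

```python
import math

def I(x, mu, sigma):
    """Antiderivative of the lognormal CDF; same closed form as the R code."""
    return (x * math.erfc((mu - math.log(x)) / (math.sqrt(2) * sigma))
            - math.exp(mu + sigma ** 2 / 2)
            * math.erfc((mu + sigma ** 2 - math.log(x)) / (math.sqrt(2) * sigma))) / 2

def lognorm_cdf(t, mu, sigma):
    # standard normal CDF: Phi(z) = erfc(-z / sqrt(2)) / 2
    return math.erfc(-(math.log(t) - mu) / (sigma * math.sqrt(2))) / 2

def simpson(f, a, b, n=1000):
    """Composite Simpson rule with an even number n of subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

mu, sigma, a, b = 1.0, 3.0, 2.0, 3.0
closed = I(b, mu, sigma) - I(a, mu, sigma)
numeric = simpson(lambda t: lognorm_cdf(t, mu, sigma), a, b)
assert abs(closed - numeric) < 1e-9
assert abs(closed - 0.4879823) < 1e-6   # the mu = 1 test value above
```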
Show that $C_1(1+||x||^2)^k \leq \sum_{|\alpha| \leq k} x^{2\alpha} \leq C_2 (1+||x||^2)^k$ | The guess in the comments ( $(1+\|x\|^2)^k = \sum_{|\alpha|\le k} x^{2\alpha}$) is not correct; since none of the indices repeat, all coefficients on the right are +1, while the left hand side has positive coefficients, at least as large as binomial coefficients. Alternatively, verify that the identity is wrong if $x=(1,0,\dots,0)$: the right hand side is $k+1$ while the left hand side is $2^k$.
For the upper bound,
$$\sum_{|\alpha|\le k} x^{2\alpha} \le \sum_{j=0}^k \|x\|^{2j} \sum_{|\alpha|=j}1 \le C \sum_{j=0}^k \binom{k}{j}(\|x\|^2)^j = C (1+\|x\|^2)^k. $$
Here, $C = \sup_{j=0}^k\sum_{|\alpha|=j}1$ is a constant depending on the known quantities $k,n$.
For the lower bound, if $j\neq 0$, then
$$ \sum_{|\alpha|=j} x^{2\alpha} = \|x\|^{2j}
\sum_{|\alpha| = j}
\frac{x^{2\alpha}}{\|x\|^{2j}} \ge \|x\|^{2j} \frac{\sum_{m=1}^n x_m^{2j}}{\|x\|^{2j}} \ge c\|x\|^{2j}$$
where $c = \inf_{x\neq 0} f(x)$ and $f(x) := \frac{\sum_{m=1}^n x_m^{2j}}{\|x\|^{2j}} \ge 0$.
Note that $c=d(j)^{2j}>0$ by the equivalence of the $\ell^2$ and $\ell^{2j}$ norms on $\mathbb R^n$. ($d(j)$ depends on $n$ and $j$.) A similar bound obviously holds for $j=0$. Now
$$\sum_{|\alpha|\le k} x^{2\alpha} \ge \frac{\inf_{j'=0}^k d(j')^{2j'}}{\max_{j''=0}^k \binom{k }{j''}}\sum_{j=0}^k \binom{k }{j}(\|x\|^2)^j = \tilde c (1+\|x\|^2)^k. $$
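Both bounds are easy to probe numerically. In the sketch below the constants are my own crude (but valid) choices, obtained by estimates of the same kind as above: $C_2$ is the number of multi-indices with $|\alpha|\le k$, and $C_1=(2n)^{-k}$:

```python
import itertools, math, random

def multi_indices(n, k):
    """All alpha in N^n with |alpha| <= k."""
    return [a for a in itertools.product(range(k + 1), repeat=n) if sum(a) <= k]

def poly_sum(x, k):
    """The sum over |alpha| <= k of x^(2*alpha)."""
    return sum(math.prod(xi ** (2 * ai) for xi, ai in zip(x, a))
               for a in multi_indices(len(x), k))

n, k = 3, 4
C2 = math.comb(n + k, n)   # number of multi-indices: a valid upper constant
C1 = (2 * n) ** (-k)       # a crude valid lower constant

random.seed(0)
for _ in range(200):
    x = [random.uniform(-10, 10) for _ in range(n)]
    ratio = poly_sum(x, k) / (1 + sum(t * t for t in x)) ** k
    assert C1 <= ratio <= C2
```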
PS: this question doesn't have anything to do with the fractional-sobolev-spaces tag (nor do your other questions).
Pullback of line bundles and divisors | a) No, it is not true that $\mathcal{O}_X(C)=f^*\mathcal{O}_Y(C')$.
For example take $X=\mathbb P^2_{x:y:z}, Y=\mathbb P^2_{u:v:w}, C=V(x)$ and let $$f(x:y:z)=(u:v:w)=(x^2:yz:z^2)$$ so that $C'= V(u)$.
Then $\mathcal O_X(C)=\mathcal O_{\mathbb P^2}(1)$, whereas $f^*(\mathcal O_Y(C'))=f^*(\mathcal O_{\mathbb P^2}(1))=\mathcal O_{\mathbb P^2}(2)$
b) The same set-up also gives a negative answer to your second question:
Take for $L'$ the line bundle $\mathcal O_{\mathbb P^2}(1)$. Then $f^*(L')=\mathcal O_{\mathbb P^2}(2)$ and you may take for $C$ the conic $z^2=xy$.
Its image $f(C)$ is the curve $C'$ given by $ w^3=uv^2$ which nor all thy Piety nor Wit will lure to $|L'|$ . |
Remainder term for the expansion of $\sqrt{1+\sin{x}}$ | Yes, that is exactly right.
Your remainder term will be
$$R_2(x) = \int_0^x \frac{f^{(3)} (t)}{2!} (x - t)^2 \, dt$$
where you need to substitute the expression for $f^{(3)}(t)$. Your text might have some assumption on $f$ that you need to check to be sure that you can write the remainder in this integral form. |
Extension of equivalent norms | Since the norms $\|\cdot\|$ and $|\cdot|$ are equivalent on $W$, for some $k$ we have $|w|\le k\|w\|$ for all $w\in W$.
Let $B_1=\{x\in X:\|x\|\le1/k\}$ and $B_2=\{w\in W:|w|\le1\}$. Let $U$ be the convex hull of $B_1\cup B_2$.
The plane of the diagram represents the subspace $W$. The central dot is the origin. The red pentagon is the boundary of the unit ball $B_2$. The green triangle is the boundary of the intersection of the unit ball $B$ wrt $\|\cdot\|$ and $W$. The smaller triangle is that dilated by $1/k$, ie the boundary of $B_1$. So $|w|\le1$ for any point on the smallest triangle and hence it lies entirely inside the red pentagon.
$B_2$ lies entirely in the plane, but there will be points of $B_1$ above and below the plane. So it is clear that $U\cap W=B_2$.
We take $U$ as the unit ball for the desired norm $\|\cdot\|_1$.
Since $U\cap W=B_2$, the norm $\|\cdot\|_1$ coincides with $|\cdot|$ on $W$.
Since we could also expand the green triangle to contain the red pentagon, it is clear that $\|\cdot\|$ and $\|\cdot\|_1$ are equivalent on $X$. |
Math floor and limits? $\lim \limits_{n \to \infty}\frac{n}{2}\left\lfloor \frac{3}{n}\right\rfloor$ | If $n\gt 3$ then $\left\lfloor \dfrac3n\right\rfloor=0$ because $\dfrac3n\lt1$. Thus the first is $0$.
The second one is equal to either $\dfrac23$, $\dfrac23\cdot\dfrac{k}{k+1}$, or $\dfrac23\cdot\dfrac{k}{k+2}$, where $n=3k+r$ with $k,r\in \mathbb Z$ and $0\le r\lt3$. In the limit $n\rightarrow\infty$, $k\rightarrow\infty$, so the limit is $\dfrac23$.
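A quick numeric check of the first limit:

```python
import math

def f(n):
    return n / 2 * math.floor(3 / n)

# the floor vanishes as soon as 3/n < 1, i.e. for every n > 3
assert f(1) == 1.5 and f(2) == 1.0 and f(3) == 1.5
assert all(f(n) == 0 for n in range(4, 10_000))
```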
A sequence of continuous functions converging to sgn(F) a.e. on a finite measurable set | Given $n \in \mathbb N$, take disjoint compact sets $A$, $B$, $C$ such that $F < 0$ on $A$, $F = 0$ on $B$ and $F > 0$ on $C$, with $m(E \backslash (A \cup B \cup C)) < 2^{-n}$. Use Tietze Extension Theorem to find a continuous $F_n$ so $F_n = -1$ on $A$, $0$ on $B$ and $1$ on $C$. |
Expected return for Craps Casino game | There is a paper entitled "Expected Value and the Game of Craps" by Blake Thornton.
Not only does it have exercises to work on while you learn, there's even a colorful craps table to look at so as to be inspired....
http://www.dehn.wustl.edu/~blake/courses/WU-Ed6021-2012-Summer/handouts/Expected_Value.pdf |
Work-at-home days such that the office is always staffed | The team has a total of eight days off and twelve days in per week. You can get that with one person at home two of the days and two people at home the other three days. You have two or three people in the office each day.
There are many other combinations, but it is easy to guarantee at least one person in the office or even two. |
How many combinations can you get from a three times three matrix | You said it: it is a combination, therefore the number of options for $k$ asterisks in an $n= 3 \times 3$ matrix is $$ {n \choose k} = \frac{n!}{k! (n-k)!}.$$
Total number of options is given by
$$ {9 \choose 0} + {9 \choose 1} +{9 \choose 2} + ... +{9 \choose 9} = 2^9.$$
Edit: my solution includes the option without asterisks (the empty matrix).
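The sum is easy to verify directly:

```python
import math

n = 9
total = sum(math.comb(n, k) for k in range(n + 1))
assert total == 2 ** n == 512
# if the empty matrix is not wanted, there are 2^9 - 1 = 511 options
assert total - 1 == 511
```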
How to solve a matrix for the span of columns. | Apply the reduced echelon form to the $augmented$ matrix: $$
\begin{bmatrix}
2 & 5 & 4 & b_{1}\\
1 & 2 & -1 & b_{2}\\
-1 & -1 & 7 & b_{3}\\
\end{bmatrix}
\quad
$$
This will give $$
\begin{bmatrix}
1 & 0 & -13 & -2b_{1}+5b_{2}\\
0 & 1 & 6 & b_{1}-2b_{2}\\
0 & 0 & 0 & b_{1}-3b_{2}-b_{3}\\
\end{bmatrix}
\quad
$$
Thus the solution to the system will be $x_{1}-13x_{3}=-2b_{1}+5b_{2},$ $x_{2}+6x_{3}=b_{1}-2b_{2}$ and $b_{1}-3b_{2}-b_{3}=0.$ In other words, provided that $b_{1}-3b_{2}-b_{3}=0$, there will be infinitely many solutions.
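The row reduction can be replayed exactly over the rationals. The sketch below (my own, purely illustrative) tracks the right-hand side symbolically as coefficient vectors in $(b_1,b_2,b_3)$ and recovers both the reduced rows and the consistency condition:

```python
from fractions import Fraction

# rows: coefficients of x1, x2, x3 followed by coefficients of b1, b2, b3
rows = [
    [2, 5, 4,   1, 0, 0],
    [1, 2, -1,  0, 1, 0],
    [-1, -1, 7, 0, 0, 1],
]
rows = [[Fraction(v) for v in r] for r in rows]

# Gauss-Jordan elimination on the first three columns
for col in range(3):
    pivot = next((r for r in range(col, 3) if rows[r][col] != 0), None)
    if pivot is None:
        continue
    rows[col], rows[pivot] = rows[pivot], rows[col]
    rows[col] = [v / rows[col][col] for v in rows[col]]
    for r in range(3):
        if r != col and rows[r][col] != 0:
            f = rows[r][col]
            rows[r] = [a - f * b for a, b in zip(rows[r], rows[col])]

assert rows[0][3:] == [-2, 5, 0]    # x1 - 13*x3 = -2*b1 + 5*b2
assert rows[1][3:] == [1, -2, 0]    # x2 + 6*x3  =    b1 - 2*b2
assert rows[2][:3] == [0, 0, 0]
# zero row: -b1 + 3*b2 + b3 = 0, equivalently b1 - 3*b2 - b3 = 0
assert rows[2][3:] == [-1, 3, 1]
```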
Sketch the image of the circle $|z-1| \leq 1$ under the map $w = z^2$. Compute the area of the image. | That circle is the set $\{\cos(\theta)+1+\sin(\theta)i\mid\theta\in[0,2\pi]\}$. And you have, for each $\theta\in[0,2\pi]$,\begin{align}(\cos(\theta)+1+\sin(\theta)i)^2&=\cos^2(\theta)+2\cos(\theta)+1-\sin^2(\theta)+i\bigl(2\sin(\theta)+2\sin(\theta)\cos(\theta)\bigr)\\&=\bigl(2+2\cos(\theta)\bigr)\bigl(\cos(\theta)+\sin(\theta)i\bigr).\end{align}Computing the area that you are interested in in polar coordinates, you get\begin{align}\int_0^{2\pi}\int_0^{2+2\cos(\theta)}r\,\mathrm dr\,\mathrm d\theta&=\int_0^{2\pi}2\bigl(1+\cos(\theta)\bigr)^2\,\mathrm d\theta\\&=\int_0^{2\pi}3+4\cos(\theta)+\cos(2\theta)\,\mathrm d\theta\\&=6\pi.\end{align} |
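Numerically, the polar integral can be confirmed with a midpoint rule (which is extremely accurate for smooth periodic integrands):

```python
import math

# area enclosed by r(theta) = 2 + 2*cos(theta): (1/2) * integral of r^2
N = 10_000
dtheta = 2 * math.pi / N
area = sum(0.5 * (2 + 2 * math.cos((i + 0.5) * dtheta)) ** 2 * dtheta
           for i in range(N))
assert abs(area - 6 * math.pi) < 1e-9
```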
Sigma hierarchy of logical formulae | You need to get your head round the idea of a bounded wff -- i.e. all the quantifiers are bounded -- so we never have an occurrence of $\forall n$, for example, but always $(\forall n < \tau)$ for some term. These are known as $\Delta_0$ or $\Sigma_0$ wffs. A bounded universal quantifier is like a finite conjunction, and a bounded existential quantifier is like a finite disjunction. So wffs with only bounded quantifiers are morally equivalent to unquantified wffs. Hence they are Jolly Nicely Behaved in various ways.
Then, [logical equivalents of] existentially quantified $\Delta_0$ wffs are $\Sigma_1$ wffs and [logical equivalents of] universally quantified $\Delta_0$ wffs are $\Pi_1$ wffs. [Think $\Sigma$ for logical Sum and $\Pi$ for logical product.]
You might find some further help in §26 of http://www.logicmatters.net/resources/pdfs/gwt/GWT2e.pdf |
Study the differentiability of these functions | That's not the best approach.
It should be clear that $f$ is differentiable at $0$. If $z\ne0$, then $f$ is differentiable at $z$ if and only if $\varphi(z)=e^{\overline z}$ is differentiable at $z$. And it is much easier to apply the Cauchy-Riemann equations method to $\varphi$. If $x,y\in\Bbb R$, then $$\varphi(x+yi)=e^x(\cos(y)-\sin(y)i)$$ and therefore you should take $u(x,y)=e^x\cos(y)$ and $v(x,y)=-e^x\sin(y)$.
Quotient ring is Prime iff the ideal is prime ideal | Let $R$ be a ring.
Then let $f:R \to R/A$, defined by $f(r)=r+A$, be the natural ring homomorphism, with $\text{ker}(f)=A$.
Hint:
(Correspondence theorem)
Note that, for any prime ideal $J \subseteq R/A$, $f^{-1}(J)$ is a prime ideal in $R$ which contains $A$.
Note that $ f^{-1}((0)+A)=A $, where $(0)$ is the zero ideal. Note that
$R/A$ is a prime ring
Iff
$(0)+A$ is prime ideal in $R/A$
Iff
$A$ is prime ideal in $R$. |
Find a linear transformation | Primes denote the transpose:
$$T((7,0)')=T(1\cdot (1,3)'+3\cdot (2,-1)')$$
$$=1\cdot T((1,3)')+3 \cdot T((2,-1)')=1\cdot 4+3\cdot (-3)=-5$$
In order to find the proper linear combination solve $(7,0)'=a(1,3)'+b(2,-1)'$, to get $a=1$ and $b=3$. At the second equality I used the linearity of $T$. |
How to show that $\sum_{i=1}^n | \langle f, f_i\rangle |^2 \leq \Vert f \Vert^2$ | Quoting from the wikipedia page for Bessel's inequality, with $f=x$ and $f_i=e_i$, the key point is:
$$0 \le \left\| x - \sum_{k=1}^n \langle x, e_k \rangle e_k\right\|^2 = \|x\|^2 - 2 \sum_{k=1}^n |\langle x, e_k \rangle |^2 + \sum_{k=1}^n | \langle x, e_k \rangle |^2 = \|x\|^2 - \sum_{k=1}^n | \langle x, e_k \rangle |^2$$ |
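A numerical illustration, assuming NumPy is available (the orthonormal $f_i$ are produced here by a QR factorization; the setup is mine):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 6, 3
Q, _ = np.linalg.qr(rng.normal(size=(d, n)))  # columns = orthonormal f_1..f_n
f = rng.normal(size=d)                        # an arbitrary vector

coeffs = Q.T @ f                              # the inner products <f, f_i>
assert np.sum(coeffs ** 2) <= f @ f + 1e-12   # Bessel's inequality
```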
Counting elements of $y^2 - y = x^3$ in finite fields | Use Lemma 4.2 of the lecture notes. |
Past coin tosses affect the latest one if you know about them? | If, as you said, it's an unbiased and fair coin, then it has an equal chance of coming up heads as tails on the 100th flip, by definition, because that's what an unbiased fair coin is: it is a coin that has an equal chance to come up heads or tails on any flip.
On the other hand, if you see a coin come up tails 99 times in a row you had best revisit your assumption that it is an unbiased fair coin. It would be foolish to bet on it coming up heads after that.
Addendum: Here's the reference you asked for: Gambler's fallacy |
Prove this limit $\lim \limits_{x\to\infty}f(x)=0$ | We are given $f(x) + \int_0^x f \to L.$ To show $f(x)\to 0,$ we need to show $ \int_0^x f \to L.$ But note
$$\tag 1 \int_0^x f =\frac{e^x\int_0^x f}{e^x}.$$
Since the denominator on the right $\to \infty,$ we can contemplate using L'Hopital. Let's try it: The quotient of derivatives is
$$\tag 2 \frac{e^x(f(x)+\int_0^x f)}{e^x} = f(x)+\int_0^x f.$$
The right hand side of $(2)\to L$ by hypothesis. Therefore, by L'Hopital, $(1)\to L,$ and we are done. |
Eigenvalues and eigenvectors of orthogonal projection matrix | Let $V$ be the subspace of $\Bbb R^3$ defined by
$$
V=\{(x,y,z)\in\Bbb R^3:x+2y+z=0\}
$$
Now, note that every vector in $V$ is of the form
$$
(-2y-z,y,z)=y(-2,1,0)+z(-1,0,1).
$$
Putting $v_2=(-2,1,0)$ and $v_3=(-1,0,1)$ then gives $V=\operatorname{Span}\{v_2,v_3\}$.
The map $P:\Bbb R^3\to \Bbb R^3$ is defined by fixing $V$ and orthogonally projecting the vectors not in $V$ onto $V$.
The vector $v_1=(1,2,1)$ is normal to the given plane and $P$ sends this vector to the origin. Hence $P(v_1)=0\cdot v_1$ so that $v_1$ is an eigenvector of $P$ with eigenvalue $\lambda_1=0$.
Next, since $P$ fixes $V=\operatorname{Span}\{v_2,v_3\}$, we have $P(v_2)=v_2$ and $P(v_3)=v_3$. Hence $v_2$ and $v_3$ are eigenvectors of $P$ with eigenvalue $1$. |
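With NumPy one can build $P$ explicitly as $I - \frac{nn^T}{n^Tn}$ (a standard formula for the orthogonal projection, consistent with the description above) and confirm the spectrum:

```python
import numpy as np

n = np.array([1.0, 2.0, 1.0])                 # normal vector of x + 2y + z = 0
P = np.eye(3) - np.outer(n, n) / n.dot(n)     # orthogonal projection onto V

v1 = n
v2 = np.array([-2.0, 1.0, 0.0])
v3 = np.array([-1.0, 0.0, 1.0])
assert np.allclose(P @ v1, 0)                 # eigenvalue 0
assert np.allclose(P @ v2, v2)                # eigenvalue 1
assert np.allclose(P @ v3, v3)                # eigenvalue 1
assert np.allclose(np.linalg.eigvalsh(P), [0.0, 1.0, 1.0])
```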
Curve for similar triangles | Using your picture from above, set $2k = \overline{AC}$, $h = \textrm{dist}(B, AC)$, and $\theta = \angle AED$.
For fixed $k, h, \theta$, there clearly is a unique point $D$ such that the two triangles $\triangle AED$ and $\triangle ADC$ are similar.
Set $a := \overline{AE} = k - h\cot(\theta), s := \overline{DE}$.
By the cosine law, $\overline{AD}^2 = a^2 + s^2 - 2as\cos(\theta)$. Also, similarity says that $$\frac{\overline{AE}}{\overline{AD}} = \frac{\overline{AD}}{\overline{AC}},$$
or $a^2 + s^2 - 2 as \cos(\theta) = 2ak$, so that
\begin{align*}
s &= a \cos(\theta) + \sqrt{ a^2\cos^2(\theta) + a(2k-a)}\\ &= (k - h \cot(\theta))\cos(\theta) + \sqrt{(k - h \cot(\theta))^2\cos^2(\theta) + k^2 - h^2 \cot^2(\theta)}
\end{align*}
Unfortunately, there seems to be no simpler expression for $s$ (that I could find, at least). In any case, we now have a closed form expression for $D$ in polar coordinates by centering at $B$ and noting that
$$r := \overline{BD} = \frac{h}{\sin(\theta)} + s.$$
Hope this helps (but I don't think this would look like a cardioid in general). |
Triangle inequality squared? | Disclaimer: the ket notation is painful to watch.
I think it's the ket notation that is not allowing you to see the forest for the trees. From the triangle inequality:
$$
\|x+y\|\leq\|x\|+\|y\|.
$$
Now square, and expand the binomial on the right:
$$
\|x+y\|^2\leq(\|x\|+\|y\|)^2=\|x\|^2+2\|x\|\,\|y\|+\|y\|^2.
$$
Actually, this is the way one usually proves the triangle inequality via Cauchy-Schwarz:
$$
\|x+y\|^2=\|x\|^2+2\operatorname{Re}\langle x,y\rangle+\|y\|^2\leq\|x\|^2+2\|x\|\,\|y\|+\|y\|^2=(\|x\|+\|y\|)^2.
$$
The minus sign on the left makes no difference, as the triangle inequality gives
$$
\|x-y\|=\|x+(-y)\|\leq\|x\|+\|-y\|=\|x\|+\|y\|.
$$ |
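For random vectors, a NumPy spot check of the squared inequality (for both signs):

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(100):
    x, y = rng.normal(size=5), rng.normal(size=5)
    bound = (np.linalg.norm(x) + np.linalg.norm(y)) ** 2
    assert np.linalg.norm(x + y) ** 2 <= bound + 1e-12
    assert np.linalg.norm(x - y) ** 2 <= bound + 1e-12
```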
If $p : E \to X$ is a covering map and $E$ is path connected, then all the fibers have the same cardinality. | Another way to see this is by using the uniqueness of path-lifting.
Given two points $x$ and $y \in X$, consider a path from $x$ to $y$ (this will exist since $E$ is path-connected and $p$ is continuous). Then, given any point $x_0 \in p^{-1} (x)$, you can get a lift of that path starting from $x_0$, and by uniqueness there will be a unique $y_0$ where the lift of the initial path ends. This gives an injective map $p^{-1}(x) \to p^{-1} (y)$, and by the same argument we get an injective map $p^{-1}(y) \to p^{-1} (x)$. This proves that all the fibers have the same cardinality.
(But honestly speaking, somehow in this proof I am also using the fact that $p$ is a local homeomorphism, so this proof is not very far from the proof you've mentioned.)
Existence of a random variable $X$ such that the moment generating function of $X$ is given by $\exp(t^3c)$ for some number $c$? | J. Marcinkiewicz derived conditions under which functions of a certain form can be characteristic functions. Among them was:
If the moment generating function of a random variable $X$ is the exponential of a polynomial $P$ i.e. $E[e^{tX}] = e^{P(t)}$, then $P$ has degree at most two and $X$ is a normal random variable.
Therefore, there are no random variables with MGF $e^{t^3c}$ or $e^{t^n c}$ for $n > 2$; in fact far more cases are excluded thanks to the above theorem. (The analogous conditions carry over from characteristic functions to the MGF.)
As it turns out, thanks to Bochner's theorem the condition for a function to be a characteristic function (generalization of MGF) is only positive definiteness, continuity at the origin, and being $1$ at the origin. These conditions hold for $e^{P(t)}$ if $P(0) = 0$, so only positive definiteness has to be checked, and it turns out that this is the condition violated if the power of $t$ is higher than $2$. |
Exercise Real Analysis | I put it here; hopefully it won't be a problem.
Let us define the function $g: A \rightarrow B$ as follows:
$$ x\mapsto \begin{cases} f^{-1}(x),& \text{if }\,x\in \bigcup_{i=1}^\infty D_i\\
x,& \text{if }\, x\in A\backslash\bigcup_{i=1}^\infty D_i \end{cases} $$
We claim that the function is well-defined and is a bijection. Since the $D_i$ are pairwise disjoint, there is no ambiguity. Also we have $\bigcup_{i=1}^\infty D_i \subset f[C]$, and so the inverse function of $f$ exists [since $f$ is a bijection if we restrict the target set to its image].
Suppose $x,x'\in A$ and $x\not= x'$. We will show that $g(x)\not= g(x')$, i.e., $g$ is a $1:1$ map.
If $x,x'\in \bigcup_{i=1}^\infty D_i$, then $x=f(y_{i-1})\in f[D_{i-1}]$ and $x'=f(y_{k-1})\in f[D_{k-1}]$ for $i,k>0$. Clearly $y_{i-1}\not= y_{k-1}$ because $f$ is a function. Then $g(f(y_{i-1}))=y_{i-1}$ and $g(f(y_{k-1}))=y_{k-1}$, and by what we said above the result follows. If $x,x'\in A\backslash \bigcup_{i=1}^\infty D_i$ the result is trivial. It remains to check the case $x \in \bigcup_{i=1}^\infty D_i$ and $x'\in A\backslash \bigcup_{i=1}^\infty D_i$. Then $g(x)= y_{i-1}\in D_{i-1}$ [$x= f(y_{i-1})$] and $g(x')=x'$. For the sake of contradiction suppose that $x'=y_{i-1}$. We divide the claim into two parts: when $i=1$ and when $i>1$. For the former, $x'=y_{i-1}\in B\backslash A$ but $x' \in A$, a contradiction. For the latter, $x'=y_{i-1} \in D_{i-1}\subset \bigcup_{i=1}^\infty D_i$, which is again a contradiction. Therefore $x'\not=y_{i-1}$.
In any case we have $g(x)\not= g(x')$. Hence $g$ is an injective map.
To conclude we have to show that $g$ is a surjective map. Let
$$z\in B= B\backslash A \cup A = B\backslash A \cup \bigg(\bigcup_{n=1}^\infty D_n \cup A \backslash \bigcup_{n=1}^\infty D_n \bigg) $$
we will show that there is some $x\in A$ such that $g(x)= z$. It is not difficult to show that the three sets are disjoint. Then we analyze the entire problem by cases again.
If $z\in B \backslash A$, so $f(z)\in f[D_0]=D_1\subseteq \bigcup_{n=1}^\infty D_n \subseteq A$ and then $g(f(z))= z$ as desired. If $z\in \bigcup_{n=1}^\infty D_n$ then we have $z\in D_i$ for $i>0$, so $f(z)\in D_{i+1} \subseteq \bigcup_{n=1}^\infty D_n \subseteq A$, furthermore $g(f(z))= z$. If $z \in A\backslash \bigcup_{n=1}^\infty D_n$, then $g(z)=z$ and we're done.
Thus the map is surjective and injective, i.e., bijective. Furthermore, $\#A =\#B$.
Corollary: (Bernstein Thm) Let $f: A \rightarrow B$ and $g: B \rightarrow A$ be injective maps. Then there exists a bijective map $h: A\rightarrow B$.
Proof: Clearly $f: A \rightarrow f[A]$ is a bijection. We define $\tilde h:= f \circ g$. Now, since $f[A]\subseteq B$ and $\tilde h: B \rightarrow f[A]$ is an injection, there exists a bijective map $h: f[A]\rightarrow B$, as we have shown. Hence the composition with $f$ gives us the desired bijection.
Structure sheaf of a locally ringed space can be pulled back from an open cover | The idea of patching things together from smaller pieces is good, but your guess as to how specifically to do this is not quite correct. One constructs a fiber product $X\times_Z Y$ by having maps $X\to Z$ and $Y\to Z$, but in your case, you have maps $U_i\cap U_j\to U_i$ and $U_j$, which go the wrong way to use a fiber product. Instead, the categorical construction you want to use is a colimit - this correctly describes how to glue things together.
To be specific, if you have an open covering $\{U_i\}_{i\in I}$ of your locally ringed space, then we can form the poset $\mathcal{P}$ of nonempty finite subsets of $I$ ordered by inclusion, and it is not so hard to show that $\varinjlim_{p\in\mathcal{P}} \bigcap_{i\in p} U_i$ is just $X$ (apply the universal property on the topological space side and check the standard sheaf gluing conditions).
The thing that would go wrong if you "defined $\mathcal{O}_X(X)$ to be smaller" is that $\mathcal{O}_X$ would fail to be a sheaf: you would violate the gluing property of sheaves, and then $(X,\mathcal{O}_X)$ would not be a locally ringed space.
Solve $\int{\frac{x-1}{x\sqrt{3x^2-4}}}\;dx$ | You are essentially finished. We have $\sec u=\frac{\sqrt{3}x}{2}$ and $\tan u=\frac{\sqrt{3x^2-4}}{2}$. Thus your procedure gives as first term
$$\frac{1}{\sqrt{3}}\log\left|\frac{\sqrt{3}x+\sqrt{3x^2-4}}{2}\right|.$$
This is equal to
$$\frac{1}{\sqrt{3}}\log\left|\sqrt{3}x+\sqrt{3x^2-4}\right|-\frac{1}{\sqrt{3}}\log 2.$$
So the two versions differ by a constant, which can be absorbed in the constant of integration. |
Solution of $f(x)=0.5 \cdot x^{(T)}Ax-b^T \cdot x+c$ | Hint: Since $A$ is symmetric positive definite, what can you say about its eigenvalues and diagonalization? Are any of them $0$? |
Sum of random variables at least $\log n$ | Let $\ell(n)=\lfloor \ln(n)\rfloor+1$, and let all $E[X_i]=\frac{1}{\ell(n)}$ for $i\leq \ell(n)$ and $E[X_i]=0$ otherwise. Essentially, you are concentrating all the "mass" into the first $\ell(n)\approx\ln(n)$ variables (the minimum number whose sum can exceed $\ln(n)$), divided evenly.
The probability that $X_i=1$ for $i\leq \ell(n)$ is $\left(\frac{1}{\ell(n)}\right)^{\ell(n)}\approx\left(\frac{1}{\ln(n)}\right)^{\ln(n)}=n^{-\ln\ln(n)}\approx n^{-\ln\ln(n)+1}$, where the last term is the Chernoff Bound you obtained.
Note that the (multiplicative) error "hidden" in the first of the two $\approx$ (due to having to use $\lfloor \ln(n)\rfloor+1$ instead of $\ln(n)$ because the $X_i$ are discrete) is of the order of $\ln(n)$, so smaller than that "hidden" in the second $\approx$ which is $n$. The latter (which is, after all, just a $+1$ added to a $-\ln\ln(n)$ exponent) is mostly a consequence of the approximations required to produce a manageable formula like the $\frac{e^\delta}{(1+\delta)^{1+\delta}}$ one you used, from the "full-power" Chernoff-Hoeffding bound written in terms of relative entropy for $n$ independent $X_i$ with values in $[0,1]$ and expected sum $\mu$:
$\Pr\left[\sum_{i=1}^n X_i \geq \mu+\lambda\right]\leq e^{-nH_{\mu/n}(\mu/n+ \lambda)}$, where $H_p(x)=x\ln(\frac{x}{p})+(1-x)\ln(\frac{1-x}{1-p})$ is the relative entropy of $x$ with respect to $p$.
Proof by Induction: Series of binomial coefficients with same k-length subsets | Write it as
$$ \text{stuff} + \dotsb + \binom nk = \binom{n+1}{k+1} $$
Do you know any identity involving $\binom nk$ and $\binom{n+1}{k+1}$? |
MatLab: Motion of charged particle in electromagnetic field in cylinder | You have to take the scale of your constants into account. The Lipschitz constant L of the ODE system can be taken as abs(Q/m*B)=1.6180874e+07. For the RK4 method to work sensibly you need L*h between 1e-4 and 1e-2; for the larger choice this is roughly h=1e-9. The upper integration limit then has to be chosen to conform with the number of integration steps; below I used 10000 steps to see a significant portion of the solution curve.
Next, correct the use of indices: if yDot and yDotDot are at positions 3 and 4 in the output dudt of uprim, then y and yDot also have to be read from these positions in the state vector.
R = 0.012;
a = 0.015;
b = 0;
w = 0.2;
v0 = 455*10^3;
function dudt = uprim(t, u)
m = 9.1091*10^-31;
Q = -1.6021*10^-19;
E = 20.0;
B = 0.92*10^-4;
x = u(1);
xDot = u(2);
y = u(3);
yDot = u(4);
r = sqrt( (x - a).^2 + (y - b).^2 );
xDotDot = (Q/m)*(E + B*yDot*(1 - w*(r/R)));
yDotDot = (Q/m)*( - B*xDot*(1 - w*(r/R)));
dudt = zeros(size(u));
dudt(1) = xDot;
dudt(2) = xDotDot;
dudt(3) = yDot;
dudt(4) = yDotDot;
end
h = 1e-9;
t = 0:h:1e-5;
u = zeros(4, length(t)); % state: [x; xDot; y; yDot]
u(1, 1) = a - R;
u(2, 1) = v0;
u(3, 1) = 0;
u(4, 1) = 0;
for i = 1:(length(t) - 1)
k1 = uprim(t(i) , u(:, i) );
k2 = uprim(t(i) + 0.5*h, u(:, i) + 0.5*h*k1);
k3 = uprim(t(i) + 0.5*h, u(:, i) + 0.5*h*k2);
k4 = uprim(t(i) + h, u(:, i) + h*k3);
u(:, i + 1) = u(:, i) + (h/6)*(k1 + 2*k2 + 2*k3 + k4);
end
figure(1);
plot(t, u(1,:), t, u(3,:));
figure(2);
plot(u(1,:),u(3,:));
The left figure shows the x and y components as functions of t, the right one the curve (x,y).
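The step-size heuristic can be checked in a few lines (Python; constants copied from the script above):

```python
Q = -1.6021e-19   # charge (C)
m = 9.1091e-31    # mass (kg)
B = 0.92e-4       # magnetic field (T)

L = abs(Q / m * B)        # Lipschitz-constant estimate of the ODE system
assert 1.6e7 < L < 1.7e7  # ~1.6180874e+07, as quoted above

h = 1e-9
assert 1e-2 <= L * h < 2e-2   # L*h ~ 1.6e-2, at the top of the suggested band
```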
Prove that a quadratic equation has no roots in a congruence class | EDITED QUESTION: Apparently we also know $p=421.$
$$ x^2 + 12 x - 37 \equiv 0 \pmod p $$
Add $73$ to both sides:
$$ x^2 + 12 x + 36 \equiv 73 \pmod p $$
$$ (x+6)^2 \equiv 73 \pmod p $$
let $w = x+6$
$$ w^2 \equiv 73 \pmod p $$
This has solutions for some $p$ but not others. |
Uniform Continuity of $x \sin x$ | Let $f(x)=x\sin x$ and $x_n=\pi n$ and $y_n=\pi n+\frac{1}{n}$
then $\displaystyle\lim_{n\to\infty}( y_n-x_n)=0$ and
$f(y_n)-f(x_n)=(\pi n+\frac{1}{n})\sin(\pi n+\frac{1}{n})-\pi n\sin(n\pi)=\pi(-1)^n+o(1)\not \to 0$ as $n\to\infty$.
Follow-up to question 3121498, asked in February 2019 | This is a partial answer, which proves that $p^k + 1 \neq 2m$ holds in general for all odd perfect numbers.
Let $n = p^k m^2$ be an odd perfect number with special prime $p$.
Suppose to the contrary that $p^k + 1 = 2m$.
Then $m^2 - p^k = m^2 - (2m - 1) = m^2 - 2m + 1 = (m - 1)^2$, which contradicts the proposition in the OP. |
Find a transformation $3 \times 3$ matrix given a set of input and output $3 \times1$ matrices | "Gets them close enough" is the tricky part. How do you measure how far apart they are? If the answer is "I use the usual Euclidean metric," so that the distance from $(a, b, c)$ to $(x, y, z)$ is $\sqrt{(a-x)^2 + (b-y)^2 + (c-z)^2}$, then you can minimize the sum of squared errors by using the Moore-Penrose pseudo-inverse.
Here's (roughly) the trick. You want
$$
Mv_i = u_i
$$
for some given list of "input" vectors $v_1, v_2, \ldots, v_k$ and "target vectors" $u_1, u_2, \ldots, u_k$, where each of these is in $\Bbb R^3$, and $M$ should be a $3 \times 3$ matrix.
If you put the $v$-vectors into a $ 3 \times k$ matrix (each vector becomes one column of the matrix $V$), and do the same thing for the $u$-vectors, making a matrix $U$, then you're hoping that
$$
M V = U
$$
where $M$ is $3 \times 3$, and $V$ and $U$ are each $3 \times k$. For $k > 3$, this problem is generally not solvable. But if you look at
$$
R = MV - U
$$
for a particular $M$, that's a matrix that shows the "errors" -- how far $Mv_i$ is from $u_i$. And if the sum of the squares of the entries of $R$ is small...then you've done well.
Now that is a calculus problem: over all $3 \times 3 $ matrices $M$, consider the function
$$
f(M) = \sum_{i,j}(MV - U)^2_{i,j}
$$
and minimize it. You take derivatives, set them to zero, and (after a good deal of algebra) you arrive at a formula for $M$.
Let me lead you there in a slightly different way. Let's suppose that there is such a matrix $M$ for your $U$ and $V$ data. Start from
$$
MV = U
$$
and multiply both sides by $V^t$ to get
$$
M (VV^t) = (U V^t)
$$
The matrices in parentheses are now $3 \times 3$, and unless your set of $v$-vectors is particularly bad (e.g., they all lie in one plane!), the matrix $V V^t$ is invertible. So you find that
$$
M = (UV^t)(V V^t)^{-1}
$$
Now that's the answer for $M$ in the case that $MV = U$ does have a solution. The surprising thing (which you get by doing the calculus I mentioned) is that even if $MV = U$ doesn't have an exact solution, the matrix
$$
M^{*} = (UV^t)(V V^t)^{-1}
$$
is the one for which $f(M)$ is minimized, i.e., for which the transformed $v$-vectors are as close as possible (in the aggregate) to the corresponding $u$-vectors. |
Calculating the minimum in geometry | Ok if i understand correctly you need $Q$ with maximal height, this is an isoperimetric problem with fix $y=b+c$ then $Q$ must be on the perpendicular bissector of $[AB]$ and thus $QA=QB=y/2$.
The question may be different, since he wants the closest point $Q$ to $W$ with $QA+QB=y$ fixed, which is quite different...
Finding cumulative distribution function satisfying $ P(X \leq x \:\cup \: Y \leq y) = P(X+Y \leq x + y) $ | There is no such $F$: if $f(x)=1-F(x)$ then we have $f(x+y)=f(x)f(y)$. Also $f$ is monotone. The only such functions are of the type $f(x)=e^{cx}$ so $F(x)=1-e^{cx}$. So your extra condition $F(1)=1$ can never be satisfied. |
Solving the trigonometric equation $\sin3x=\cos2x$ | This is $\cos(\frac\pi2-3x)=\cos(2x)$. Now, since $\cos A=\cos B\iff \exists k\in\Bbb Z, A-B=2k\pi\lor\exists h\in\Bbb Z, A+B=2h\pi$, this becomes $$\frac\pi2-3x=2k\pi+2x\lor \frac\pi2-3x=2h\pi-2x\\ x=-\frac25\pi k+\frac\pi{10}\lor x=-2h\pi+\frac\pi2$$
Now, the only such angles in $[0,\pi]$ are obtained for $k=0,-1,-2$ or for $h=0$. Therefore the solution is $x\in\left\{\frac\pi{10},\frac\pi2,\frac9{10}\pi\right\}$ |
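A quick numeric confirmation of the solution set:

```python
import math

# the three solutions in [0, pi]
for x in (math.pi / 10, math.pi / 2, 9 * math.pi / 10):
    assert abs(math.sin(3 * x) - math.cos(2 * x)) < 1e-12

# the whole first family x = pi/10 - (2/5)*pi*k also satisfies the equation
for k in range(-5, 6):
    x = math.pi / 10 - 2 * math.pi * k / 5
    assert abs(math.sin(3 * x) - math.cos(2 * x)) < 1e-12
```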
Can anyone solve this crazy combination problem? There are 9 blocks, 3 red, 3 yellow and 3 blue. Each set of colours is numbered 1 to 3. | We can arrange the blocks in a line in $9!$ ways. For each such arrangement, we have to break it into $4$ towers, with from $0$ to $3$ blocks in a tower. This is the number of ways to put $9$ indistinguishable balls into $4$ buckets, with no more than $3$ balls in a bucket. One way to compute this is with a combination of stars and bars and inclusion-exclusion.
There are $\binom{12}{3}=220$ ways to put the balls in the buckets with no restrictions. There are $4$ ways to choose a bucket in which to place $4$ balls. Then there are $\binom{8}{3}=56$ ways to distribute the remaining $5$ balls. But distributions with $4$ balls in two buckets have been subtracted twice, so we have to add them back in. There are $\binom42=6$ ways to choose the two buckets, and $4$ ways to distribute the last ball, giving $$220-4\cdot56+6\cdot4=20$$ |
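Both counts (brute force and inclusion-exclusion) can be checked in a couple of lines:

```python
import math
from itertools import product

# distributions of 9 blocks over 4 towers, at most 3 blocks per tower
dists = [d for d in product(range(4), repeat=4) if sum(d) == 9]
assert len(dists) == 20

# the same count by inclusion-exclusion, exactly as in the answer
assert math.comb(12, 3) - 4 * math.comb(8, 3) + 6 * 4 == 20
```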
Square root limit problem | Notice that the limit makes sense only from the right, i.e. for $\,x>0\,$:
$$\lim_{x\to 0^+}\left(\frac1{\sqrt x}-\frac1{\sqrt{x^2+x}}\right)=\lim_{x\to 0^+}\frac{x^2+x-\left(\sqrt x\right)^2}{\sqrt x\sqrt{x^2+x}\left(\sqrt{x^2+x}+\sqrt x\right)}=$$
$$=\lim_{x\to 0^+}\frac{x^2}{\sqrt x\sqrt{x^2+x}\left(\sqrt{x^2+x}+\sqrt x\right)}\frac{\frac1{x^2}}{\frac1{x^2}}=$$
$$=\lim_{x\to 0^+}\frac{\frac{x^2}{x^2}}{\frac{\sqrt x}{\sqrt x}\cdot\frac{\sqrt{x^2+x}}{x}\left(\frac{\sqrt{x^2+x}+\sqrt x}{\sqrt x}\right)}=\lim_{x\to 0^+}\frac1{ 1\cdot\sqrt{1+\frac1x}\left(\sqrt{x+1}+1\right)}=0$$ |
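The computed limit matches a direct numeric check; near $0^+$ the difference behaves like $\sqrt x/2$:

```python
def g(x):
    return 1 / x ** 0.5 - 1 / (x ** 2 + x) ** 0.5

for x in (1e-2, 1e-4, 1e-6, 1e-8):
    assert 0 < g(x) < x ** 0.5   # roughly sqrt(x)/2, hence tends to 0
```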
Finding a regular process which is a martingale | I will post the solution using the suggestion by saz here for those who are interested.
Applying Ito formula to function $F(x,t)=tx^2$ we get
$$tB_t^2=2\int_0^tsB_sdB_s+\int_0^tB^2_sds+\int_0^tsds.$$
It can be proven that integral $\int_0^tsB_sdB_s$ is a martingale. Since $A_t$ is a regular process, we get:
$$A_t=\int_0^t\left(B^2_s+s\right)ds.$$ |
Equivalency in the elementary measure theory | Let $A_n =\left\{x\in X :f(x)>\frac{1}{n} \right\} .$ If for some $k$ we had $\mu (A_k ) >0 ,$ then $$\int_X fd\mu \geqslant \int_{A_k} fd\mu \geqslant \frac{1}{k} \mu (A_k ) >0 ,$$ contradicting the hypothesis that $\int_X f d\mu =0 .$
Therefore $\mu (A_k ) =0 $ for all $k\in\mathbb{N}$ and hence $$\mu (\{x\in X :f(x)>0\} ) =\lim_{n\to \infty } \mu (A_n )=0 .$$ |
How does relation R on A not satisfy these conditions? | a. The professor is right: that relation is transitive, seeing as $R\circ R=R$.
b. Your professor is right: the relation is not transitive since you have $1R2\land 2R1\land \neg 1R1$.
c. I see a red $\checkmark$. Doesn't that denote a good answer? |
Left (resp. right) adjoint functor fully faithful iff unit (resp. counit) isomorphism | For any pair of objects $X,X' \in \mathcal C$, we can consider the induced map $\mathrm{Hom}_{\mathcal C}(X,X') \to \mathrm{Hom}_{\mathcal D}(\mathcal F(X),\mathcal F(X')) \to \mathrm{Hom}_{\mathcal C}(X,GF(X'))$. Via naturality, we know that this composite map is actually equal to $f \mapsto \eta_{X'} \circ f$. This is just $\mathrm{Hom}_{\mathcal C}(-,\eta_{X'})$. From the Yoneda lemma, we know that this is a bijection for all $X$ iff $\eta_{X'}$ is an isomorphism.
On the other hand, $\mathrm{Hom}_{\mathcal D}(\mathcal F(X),\mathcal F(X')) \to \mathrm{Hom}_{\mathcal C}(X,GF(X'))$ is always a bijection, since we have an adjunction, so that the composite map $\mathrm{Hom}_{\mathcal C}(X,X') \to \mathrm{Hom}_{\mathcal D}(\mathcal F(X),\mathcal F(X')) \to \mathrm{Hom}_{\mathcal C}(X,GF(X'))$ is a bijection iff the map $\mathrm{Hom}_{\mathcal C}(X,X') \to \mathrm{Hom}_{\mathcal D}(\mathcal F(X),\mathcal F(X'))$ is a bijection. This is precisely the condition that $\mathcal F$ is fully faithful.
The statement for $\varepsilon$ is dual to the one about $\eta$. |
Relation between the eigenvalues of a matrix A and the eigenvalues of its hermitian and skew-hermitian parts | Note: My convention for $H$ and $K$ is to take $H = \frac{A + A^*}{2}$ and $K = \frac{A - A^*}{2i}$, so that $A = H + iK$. With this convention, $H$ and $K$ are both Hermitian. Notably, they have real eigenvalues.
First, note that
$$
\DeclareMathOperator{\tr}{tr}
\sum |\lambda_i|^2 \leq \tr(A^*A)
$$
Something along these lines is usually proven together with the spectral theorem for normal matrices following Schur's theorem (see for example Horn and Johnson). Notably, the inequality becomes equality if and only if $A$ is normal.
Now, expand $\tr(A^*A)$ in terms of $H$ and $K$ to get
$$
\tr(A^*A) =
\tr[(H+iK)^*(H+iK)]=
\tr(H^*H)+ i\tr(HK) - i \tr(HK)+\tr(K^*K)=\\
\tr(H^*H)+ \tr(K^*K)
$$
Note, however, that both $H$ and $K$ are normal.
More directly still: since both $H$ and $K$ are Hermitian, we have
$$
\tr(H^*H)+ \tr(K^*K) =
\tr(H^2)+ \tr(K^2) = \sum_{i} \alpha_i^2 + \sum_i \beta_i^2
$$
where we note that these eigenvalues must be real. |
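The trace expansion above is easy to verify numerically (an illustrative check, not part of the argument; the matrix $A$ below is an arbitrary example), using plain Python complex arithmetic:

```python
# Check tr(A*A) = tr(H^2) + tr(K^2) for H = (A+A*)/2, K = (A-A*)/(2i).

def dagger(M):
    """Conjugate transpose of a 2x2 matrix given as nested lists."""
    return [[M[j][i].conjugate() for j in range(2)] for i in range(2)]

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace(M):
    return M[0][0] + M[1][1]

A = [[1 + 2j, 3 - 1j], [0.5j, -2 + 1j]]   # arbitrary sample matrix
Ad = dagger(A)
H = [[(A[i][j] + Ad[i][j]) / 2 for j in range(2)] for i in range(2)]
K = [[(A[i][j] - Ad[i][j]) / (2j) for j in range(2)] for i in range(2)]

lhs = trace(matmul(Ad, A))                 # tr(A*A), i.e. the Frobenius norm squared
rhs = trace(matmul(H, H)) + trace(matmul(K, K))
```

Here `lhs` equals the sum of squared moduli of the entries of $A$, and `rhs` matches it to rounding error.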
Bivariate normal random variables decomposition | Use $Y = \gamma X + \eta$ to define $\eta := Y - \gamma X$. Now, we need to show two things:
1) $\eta$ has a normal distribution with mean $0$ and variance $Var(Y) - \gamma^2$
2) $\eta$ is independent of $X$
Make the transformation,
$$ \begin{pmatrix}\eta \\ X \end{pmatrix} = \underbrace{\begin{pmatrix}-\gamma & 1 \\ 1 & 0\end{pmatrix}}_A\begin{pmatrix}X \\ Y \end{pmatrix}$$
You should know that if $Z$ is a bivariate normal vector with mean vector $0$ and covariance matrix $\Sigma$, then $AZ$ is also bivariate normal with mean vector $0$ and covariance matrix, $A\Sigma A^T$. Use this fact to compute the covariance matrix of $\begin{pmatrix}\eta \\ X \end{pmatrix}$ above. You will see that the covariance matrix is diagonal, which implies independence for jointly normal vectors. Moreover, the upper left diagonal element of the covariance matrix will be, $Var(Y) - \gamma^2$ as desired.
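For concreteness, here is a small numeric check of that computation (illustrative only; it assumes $Var(X)=1$ and $\gamma=Cov(X,Y)$, consistent with the variance $Var(Y)-\gamma^2$ above):

```python
# Compute A * Sigma * A^T by hand for 2x2 matrices and confirm it is diagonal.

def matmul2(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

gamma, var_y = 0.8, 2.0                 # sample parameters
Sigma = [[1.0, gamma], [gamma, var_y]]  # covariance of (X, Y)
A = [[-gamma, 1.0], [1.0, 0.0]]
At = [[A[j][i] for j in range(2)] for i in range(2)]

C = matmul2(matmul2(A, Sigma), At)      # covariance of (eta, X)
```

The off-diagonal entries of `C` vanish, and `C[0][0]` equals $Var(Y)-\gamma^2$.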
Discrete Cumulative distribution for negative binomial function | HINT
I would say use binomial distribution:
$P(Y=x) = {50\choose x} 0.03^x 0.97^{50-x}$
then $P(Y>50) = P(0)+P(1)+P(2)$,
since $Y > 50$ exactly when the success count in the first $50$ attempts is $0$, $1$ or $2$.
Clarification: $\{Y> 50\}$ is the event of finding at most $2$ successes among the first $50$ of an indefinite sequence of independent Bernoulli trials, each with identical success rate $0.03$ (so the third success occurs some unspecified time after). $$\mathsf P(Y>50)=\binom{50}0 (0.03)^0(0.97)^{50}+\binom{50}1 (0.03)^1(0.97)^{49}+\binom{50}2 (0.03)^2(0.97)^{48}$$
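The sum can be evaluated directly with Python's standard library (a numeric companion to the hint above, not part of the original answer):

```python
from math import comb

p, n = 0.03, 50
# P(at most 2 successes in 50 trials) = sum of the three binomial terms
prob = sum(comb(n, x) * p**x * (1 - p)**(n - x) for x in range(3))
# prob is approximately 0.811
```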
Let $(x_n)$ and $(y_n)$ be Cauchy sequences. Give a direct proof that $(2x_n + y_n)$ is a Cauchy sequence. | Let $\epsilon>0$. The fact that $(x_n)_{n\in\mathbb{N}}$ is a Cauchy sequence gives you some natural number $N_1\in \mathbb{N}$ such that for all $n,m>N_1$ it holds that $|x_n-x_m|<\epsilon$. Because $(y_n)_n$ is a Cauchy sequence too, you get some $N_2$ with the analogous property for $(y_n)_n$. Now let $N:=\max(N_1,N_2)$. Then for all $n,m>N$ it holds that $|(2x_n+y_n)-(2x_m+y_m)|<\ldots$. Hint: use the triangle inequality.
Construction of an $n$-Sphere | Yes, that's the way to do it. The two $n$-balls are the upper and lower hemispheres of $S^n$. If you think of $S^n$ as the set of point in $\mathbb R^{n+1}$ with unit norm, then you can decompose this into two sets according to whether the first coordinate is $\geq 0$ or $\leq 0$. Each of these sets is homeomorphic to an $n$-ball by forgetting the first coordinate, giving a projection into $\mathbb R^n$ whose image is all points of norm $\leq 1$. |
Proving existence of solutions for second-order differential equation $u''(t)=-w^2\sin(u(t))$ | You can rewrite
$$ \begin{cases}
u''(t) = -w^2 \sin(u(t)),& t\in \mathbb{R}_{\geq 0};\\
u(0)=u_0; \\
u'(0)=v_0.
\end{cases}$$
as
$$ \begin{cases}
\begin{pmatrix}
u(t) \\ v(t)
\end{pmatrix}' =
\begin{pmatrix}
v(t) \\ -w^2 \sin(u(t))
\end{pmatrix}; \\
\begin{pmatrix}
u(0) \\ v(0)
\end{pmatrix}
= \begin{pmatrix}
u_0 \\ v_0
\end{pmatrix}
\end{cases} $$
by setting $v(t)=u'(t)$. Note that the function
$$F: \mathbb{R}^2 \rightarrow \mathbb{R}^2, \ F( u, v)= (v,-w^2 \sin(u)) $$
is Lipschitz (as $\sin$ has bounded derivative). Thus, by the Picard–Lindelöf theorem our second Cauchy problem has a unique solution. As the second system is equivalent to the first one, it has a unique solution as well.
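As a purely illustrative companion (not part of the existence proof), one can integrate the first-order system with a hand-rolled classical RK4 step and watch the pendulum energy $E=\tfrac12 v^2-w^2\cos(u)$, which the true solution conserves, stay essentially constant; the parameter values below are arbitrary:

```python
from math import sin, cos

w = 2.0

def F(u, v):
    """Right-hand side of the first-order system (u', v')."""
    return v, -w * w * sin(u)

def rk4_step(u, v, h):
    k1u, k1v = F(u, v)
    k2u, k2v = F(u + h/2 * k1u, v + h/2 * k1v)
    k3u, k3v = F(u + h/2 * k2u, v + h/2 * k2v)
    k4u, k4v = F(u + h * k3u, v + h * k3v)
    return (u + h/6 * (k1u + 2*k2u + 2*k3u + k4u),
            v + h/6 * (k1v + 2*k2v + 2*k3v + k4v))

def energy(u, v):
    return v * v / 2 - w * w * cos(u)

u, v = 0.5, 0.0          # u_0, v_0
E0 = energy(u, v)
h = 1e-3
for _ in range(10_000):  # integrate up to t = 10
    u, v = rk4_step(u, v, h)
drift = abs(energy(u, v) - E0)
```

The tiny energy drift and the bounded amplitude are consistent with the unique global solution guaranteed by Picard–Lindelöf.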
Show that if $f'$ is strictly increasing, then $\frac{f(x)}{x}$ is increasing over $(0,\infty)$ | By the MVT (note that $f(0)=0$ here), there is some $c \in (0,x)$ such that $$\frac{f(x)}{x}=\frac{f(x)-f(0)}{x-0}=f'(c)$$
Since $x>c$, this gives us that $$f'(x)>f'(c)=\frac{f(x)}{x}$$
Thus, $f'(x)x-f(x)>0$ if $x>0$.
But the derivative of $\frac{f(x)}{x}$ is $$\frac{f'(x)x-f(x)}{x^2}>0$$Our proof is done. |
Reference: Calabi-Yau toric varieties | Lagrangian torus fibration and mirror symmetry of Calabi-Yau hypersurface in toric variety, by Wei-Dong Ruan, see page 13, corollary 2.3. https://arxiv.org/pdf/math/0007028.pdf
Brian Greene also has a more general discussion on this in String Theory on Calabi Yau Manifolds, see in particular sections 9.3-9.5. https://arxiv.org/pdf/hep-th/9702155.pdf |
Excision in homology: $H(D^2, S^1)$ | Here's an example of how I've seen excision used.
Proposition: Let $M$ be a surface. Then $H_2(M,M\setminus\{*\})\cong\mathbb Z$.
Proof:
The point $*$ is contained in some closed disk $D\subset M$ with boundary $\partial D\cong S^1$. Now apply excision with $Z=M\setminus D$. Then you get
$$H_2(M,M\setminus\{*\})\cong H_2(D,D\setminus\{*\})\cong H_2(D,\partial D)$$
and from the long exact sequence of the pair $(D^2,S^1)$, you show that $H_2(D,\partial D)\cong\mathbb Z$. (As you mentioned.)
$\Box$
The analogous result for $n$-manifolds is very useful for defining what an orientation of a topological manifold is. |
Combination problem for N items in M identical groups | This appears to be a problem listed in the Twelvefold Way: counting surjective maps from an $N$-element set to an $M$-element set, modulo permutations of the latter set (so that the $M$ non-empty pre-images effectively lose their label; all that remains is a partition of the $N$-element set into $M$ unlabelled subsets). The numbers $g(N,M)$ are called Stirling numbers of the second kind, and often written $\genfrac\{\}0{}NM$.
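These numbers obey the standard recurrence $S(n,k)=k\,S(n-1,k)+S(n-1,k-1)$, so $g(N,M)$ is easy to compute; a minimal sketch in Python:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Number of ways to partition an n-element set into k nonempty blocks."""
    if n == k:
        return 1            # includes S(0, 0) = 1
    if k == 0 or n == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)
```

For instance `stirling2(4, 2)` returns `7`: the seven ways to split $\{1,2,3,4\}$ into two unlabelled nonempty parts.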
Sequence $a_k=1-\frac{\lambda^2}{4a_{k-1}},\ k=2,3,\ldots,n$. | The sequence $(a_k)$ is defined by iterating the homography $h:a\mapsto1-\lambda^2/(4a)$ hence the two fixed points of $h$ yield a dilation $h$ is conjugate to. Those fixed points are $\alpha_\pm=\frac12(1\pm\mu)$ where $\mu^2+\lambda^2=1$, and
$$
\frac{h(a)-\alpha_+}{h(a)-\alpha_-}=\beta\cdot\frac{a-\alpha_+}{a-\alpha_-},
$$
for some $\beta$ which can be computed noting that $h(0)=+\infty$ hence $\beta=\alpha_-/\alpha_+$. In particular, if $a_1=1$ and $a_n=h^{n-1}(a_1)=0$, one gets
$$
\frac{\alpha_+}{\alpha_-}=\beta^{n-1}\cdot\frac{1-\alpha_+}{1-\alpha_-}=\left(\frac{\alpha_-}{\alpha_+}\right)^{n-1}\cdot\frac{\alpha_-}{\alpha_+},
$$
that is,
$$
\left(\frac{\alpha_+}{\alpha_-}\right)^{n+1}=1.
$$
Furthermore, $a_k\ne0$ for every $k\lt n$ (otherwise $a_{k+1}$ is undefined) hence the same computation yields
$$
\left(\frac{\alpha_+}{\alpha_-}\right)^{k+1}\ne1,\qquad k\lt n.
$$
All this means that $\alpha_+=\omega\cdot\alpha_-$, where $\omega$ is a primitive $(n+1)$-th root of $1$, that is, $\omega^{n+1}=1$ and $\omega^{k}\ne1$ for every $1\leqslant k\leqslant n$. Hence $\mu=\frac{\omega-1}{\omega+1}$ and $\lambda^2=1-\mu^2=\frac{4\omega}{(1+\omega)^2}$. Writing $\omega$ as $\omega=\mathrm e^{2\mathrm i\theta}$, one gets
$$
\lambda^2=\frac4{(\mathrm e^{\mathrm i\theta}+\mathrm e^{-\mathrm i\theta})^2}=\frac1{\cos^2(\theta)},
$$
where $\theta$ can be any angle such that $(n+1)\theta=0\pmod{\pi}$ and $k\theta\ne0\pmod{\pi}$ for every $1\leqslant k\leqslant n$.
Finally, the answer is NOT what is written in the question but
$$
\lambda=\frac{1}{\cos\left(\frac{k\pi}{n+1}\right)}\quad\text{for some}\ 1\leqslant k\leqslant n\ \text{such that}\ \mathrm{gcd}(k,n+1)=1.
$$ |
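A quick numeric confirmation of this formula (a check, not a proof): with $\lambda=1/\cos\left(\frac{k\pi}{n+1}\right)$ and $\gcd(k,n+1)=1$, iterating the recursion from $a_1=1$ lands exactly on $a_n=0$:

```python
from math import cos, pi

n, k = 7, 3                            # gcd(3, 8) = 1
lam2 = 1 / cos(k * pi / (n + 1)) ** 2  # lambda^2

a = 1.0                                # a_1
for _ in range(n - 1):                 # compute a_2, ..., a_n
    a = 1 - lam2 / (4 * a)
```

After the loop, `a` holds $a_n$, which is $0$ up to floating-point rounding.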
Find $\lim\limits_{n\to \infty} [(1+\frac{1}{n})^n-(1+\frac{1}{n})]^{-n}$ | If $n\ge2$, then
$$\left(1+{1\over n}\right)^n\ge1+{n\over n}+{n(n-1)\over2n^2}\gt{17\over8}$$
which means
$$\left(1+{1\over n}\right)^n-\left(1+{1\over n}\right)\gt{9\over 8}-{1\over n}$$
For $n\ge16$ (chosen to keep the arithmetic simple), we have
$$\left(1+{1\over n}\right)^n-\left(1+{1\over n}\right)\gt{17\over16}$$
and thus
$$\left(\left(1+{1\over n}\right)^n-\left(1+{1\over n}\right)\right)^{-n}\lt\left(16\over17\right)^n\to0$$ |
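Numerically the convergence is extremely fast — already for moderate $n$ the quantity is astronomically small (an illustrative check, not needed for the proof):

```python
def term(n):
    b = (1 + 1/n) ** n - (1 + 1/n)  # the base, eventually close to e - 1
    return b ** (-n)

val = term(200)
```

Since the base exceeds $1$ and grows with $n$, `term(n)` decreases rapidly toward $0$.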
Show that $ \int_0^1 \sup_{n\in\mathbb N}\{ f_n(x) \}dm(x)=\infty $ | Suppose that $\int_0^1 g(x)dm(x)<\infty.$ Since $g \ge 0$ and $g$ is measurable, $g$ is integrable.
Furthermore we have $0 \le f_n \le g$ for all $n$ on $[0,1]$.
Now invoke the dominated convergence theorem to get a contradiction. |
Matlab Recursion Loop | Here's how I would do the loop:
A = [1.52, -.7; .56, .4];    % iteration matrix
x = zeros(2,26);             % column k holds the iterate x_{k-1}
x(:,1) = [0;1];              % initial vector x_0
for i = 1:25
    x(:,i+1) = A*x(:,i);     % recursion x_{k+1} = A*x_k
end
Which kinds of compositions of invertible elementary and nonelementary functions are elementary? | I answer myself.
The elementary functions are closed regarding composition. That means, the composition of two elementary functions is elementary again.
Let $^{-1}$ denote the compositional inverse.
Let $g$ be an elementary function.
Keep in mind that $f$ and $g$ are elementary.
We formulate the problems as $h(f(x))=g(x)$ and $f(h(x))=g(x)$.
1.)
$$h(f(x))=g(x)$$
Because $h$ is invertible:
$$f(x)=h^{-1}(g(x))$$
If $h^{-1}$ is nonelementary, $h^{-1}(g(x))$ is elementary, and the equation is valid.
If $h^{-1}$ is elementary, $h^{-1}(g(x))$ is elementary, and the equation is valid.
A left inverse of $h$ is sufficient.
2.)
$$f(h(x))=g(x)$$
Because $f$ is invertible:
$$h(x)=f^{-1}(g(x))$$
If $f^{-1}$ is nonelementary, $f^{-1}(g(x))$ is nonelementary, and the equation is valid.
If $f^{-1}$ is elementary, $f^{-1}(g(x))$ is nonelementary only if $g$ is nonelementary.
A left inverse of $f$ is sufficient.
$ $
Functions with a left or right inverse are treated in MathStackexchange: Are elementary compositions of nonelementary functions also nonelementary? |
Calculate the value of $ l$ | HINT.
The foot of the height of the pyramid is the circumcenter of the base. Find height $h$ of the pyramid from volume and base area, then find circumradius $r$ of the base. You have then:
$$
l=\sqrt{h^2+r^2}.
$$ |
If $x$ is a real number, then $|x+1| \leq 3$ implies that $-4 \leq x \leq 2$. | If $a$ and $b$ are real numbers and if $b \ge 0$, then the inequality $|a| \le b$ exactly means
$$-b \le a \le b.$$
Hence, if $b=3$, then
$$|a| \le 3 \iff -3 \le a \le 3.$$
Now let $a=x+1$. Then we get
$$|x+1| \le 3 \iff -3 \le x+1 \le 3 \iff -4 \le x \le 2.$$ |
Find the solution set of $|z-1| = 1$, where $z\in \mathbb{C}$ | Let $z=x+iy$, where $i^2=-1$. Then we have
$$1=|z-1|=|x+iy-1|=\sqrt{(x-1)^2+y^2},$$
so squaring both sides leads to the equation
$$(x-1)^2+y^2=1.$$
This is the equation of a circle of radius 1, centered at the point $(1,0)$. |
Injective and quasi injective modules | Hint: Use the definition of the injectivity in the case of $\iota:N\to M$ being the inclusion. |
Submodules and quotients of free modules over Noetherian local rings | Your mistake is in believing that freeness of $F$ implies that the sequence splits. It is projectivity of $B$ that ensures the sequence splits. Projectivity of $A$ doesn't obviously relate to the sequence splitting without some additional hypotheses (presumably what's in the reference you give). |
how to prove (or disprove) for any $A, B \in \mathbb Z$ there exist factors $X_1, X_2 \in \mathbb Z$ such that $X_1A + X_2B = 1$? | Such $X_1, X_2$ exist if and only if $\gcd(A,B)=1$, by Bézout's identity.
Finding area between two curves using double integral | Convert to polar coordinates
$$ r= \sqrt{\cos 2 \theta}$$
Area in first quadrant
$$=\int_0 ^{\pi/4} \frac{r^2}{2}\; d\theta = \int_0^{\pi/4} \frac12 {\cos 2 \theta} \;d\theta =\left.\dfrac{\sin 2 \theta}{4}\right|^{\pi/4}_0 = \frac 14 $$
By symmetry, the area for $x\ge0$ is twice this, i.e. $\frac12$.
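A midpoint Riemann sum confirms the first-quadrant value $\frac14$ numerically (illustrative only):

```python
from math import cos, pi

# Midpoint rule for the integral of cos(2*theta)/2 over [0, pi/4]
N = 100_000
h = (pi / 4) / N
area = sum(cos(2 * (i + 0.5) * h) / 2 for i in range(N)) * h
```

The sum approaches $0.25$, matching the closed-form evaluation above.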
uncertain point in textbook's solution of a "distance from point to line" problem | $2-\frac24=\frac32$, not $\frac34$. With that corrected, your answer from the Pythagorean theorem is equal to the book's.
The book is using a different method: noting that triangle $BA_1C_1$ is equilateral, it scales up the known height of an equilateral triangle of side length $1$.