title | upvoted_answer
---|---
How to calculate the limit that seems very complex.. | HINT:
$$C+i\cdot S=\sum_{r=0}^n\left[\binom nr\left(\cos\dfrac{r+1}{n^2}+i\sin\dfrac{r+1}{n^2}\right)\right]$$
$$=\sum_{r=0}^n\binom nre^{i(r+1)/n^2}$$
$$=e^{i/n^2}\sum_{r=0}^n\binom nr(e^{i/n^2})^r$$
$$=e^{i/n^2}(1+e^{i/n^2})^n$$
Now $1+e^{2iy}=2\cos y(\cos y+i\sin y)$
and use de Moivre's Formula |
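As a quick numerical sanity check (not part of the original hint), the closed form can be compared against the displayed sum in Python; the sketch below assumes $C+iS$ is exactly that sum.

```python
import cmath
from math import comb

def lhs(n):
    # C + i*S as the sum of binom(n, r) * exp(i*(r+1)/n^2) over r = 0..n
    return sum(comb(n, r) * cmath.exp(1j * (r + 1) / n**2) for r in range(n + 1))

def rhs(n):
    # closed form e^{i/n^2} (1 + e^{i/n^2})^n
    w = cmath.exp(1j / n**2)
    return w * (1 + w) ** n

for n in (3, 10, 50):
    l, r = lhs(n), rhs(n)
    print(n, abs(l - r) / abs(r))   # relative differences at the level of rounding error
```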
Showing monotone convergence of recursive relation with $ x_{n+1} = \sin(x_{n}) $ | This requires an additional assumption. For example, if $x_1=-\frac {\pi} 2$ then $x_2=-1 >x_1$.
If $x_n \geq 0$ for all $n$ then this result is true and it follows from the inequality $\sin x \leq x$ for all $x \geq 0$.
Proof of $\sin x \leq x$ for $x \geq 0$: let $f(x)=x-\sin x$. Then $f(0)=0$ and $f'(x)=1-\cos x \geq 0$. Hence $f$ is monotonically increasing, so $f(x) \geq f(0)=0$ for $x \geq 0$. |
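A short numerical illustration (a sketch, assuming a positive starting value as the argument above requires):

```python
import math

x = 1.0                      # any x_1 >= 0; x_1 = 1 is assumed here for illustration
for n in range(10):
    nxt = math.sin(x)
    print(f"x_{n+1} = {x:.6f} -> x_{n+2} = {nxt:.6f}  (decrease: {x - nxt:.2e})")
    x = nxt                  # the sequence decreases monotonically toward 0
```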
Syntactic proof that Peirce's law doesn't hold in simply-typed lambda calculus | You are right in your intuition that proving that a derivation does not exist is somewhat cumbersome. Soundness and completeness of formal proof systems provide us with an interesting duality:
$\vDash \phi \Leftrightarrow$ all models satisfy $\phi$ $\Leftrightarrow$ there exists no model that does not satisfy $\phi$
(soundness) $\Leftarrow$ $\Rightarrow$ (completeness)
$\vdash \phi \Leftrightarrow$ there exists a derivation that proves $\phi$
$\not \vDash \phi \Leftrightarrow$ not all models satisfy $\phi$ $\Leftrightarrow$ there exists a model that does not satisfy $\phi$
(completeness) $\Leftarrow$ $\Rightarrow$ (soundness)
$\not \vdash \phi \Leftrightarrow$ there exists no derivation that proves $\phi$ $\Leftrightarrow$ all potential derivations can not prove $\phi$
Proving a universally quantified statement ("all") is often an awkward thing to do because you need to make an argument that any entity from a possibly infinite domain (such as any structure or any possible derivation) has the desired property (such as (not) satisfying or (not) proving a formula), while proving an existentially quantified statement ("there is") is very easy, because it suffices to just provide one example (one counter-model/one proof) and you're done.
So if you are supposed to show $\vDash$, rather than making a possibly cumbersome semantic argument about validity in all models, soundness and completeness allow us to show $\vdash$ instead, i.e. to provide one syntactic derivation that proves the desired formula.
On the other hand, showing $\not \vdash$, i.e. arguing that there can be no derivation, can be annoying, but soundness and completeness enable us to just show that $\not \vDash$ holds by providing one counter-model, and you're done.
So the easier way to show that $\not \vdash t: ((\sigma \to \tau) \to \sigma) \to \sigma$ would be to provide a semantic argument showing that there are models of typed $\lambda$-calculus/intuitionistic logic in which the formula $((\sigma \to \tau) \to \sigma) \to \sigma$ does not hold. A semantics for intuitionistic logic is e.g. Kripke semantics.
If you are, however, interested in syntactic proof, you need to make an argument that no syntactic derivation is possible, i.e. any attempt to prove the formula will eventually lead to a dead-end where it is not possible to continue the proof, and you need to reason why.
So let's attempt to build up a derivation tree:
The only reasonable way to start (bottom-up) is to first (un-)do an implication introduction to get rid of $((\sigma \to \tau) \to \sigma)$: Implication elimination would require an even larger major premise and a suitable minor premise$^1$, and introducing a new variable is not an option because the set of assumptions (the left-hand-side of the sequent) has to be empty in the conclusion formula.
The next reasonable step after $(\to I)$ is to make use of $x: ((\sigma \to \tau) \to \sigma)$ - which can in turn be derived by one application of the identity rule - to show $x(\lambda y.M): \sigma$ by an application of $(\to E)$, which requires a proof of the antecedent, $\lambda y.M: (\sigma \to \tau)$.
By a similar argument as in the first step, the only useful continuation for the latter node is another $(\to I)$, which adds $y: \sigma$ as a new assumption, and we now have to prove $M: \tau$. This is the point where we get stuck.
The first observation is that we can't use $(\to I)$ anymore because $\tau$ is not a function type.
Simply adding $\tau$ to the context, i.e. introducing a new variable $z$ which has the type $\tau$, and closing the branch with $(Id)$ is not an option either, because we would need to carry this additional assumption down the tree, and end up with the sequent $z: \tau \vdash \lambda x.(x(\lambda y.z)): ((\sigma \to \tau) \to \sigma) \to \sigma$, with a context containing $z: \tau$ instead of being empty. So we would not have a proof (= a derivation of the form $\vdash t: \alpha$, with no assumptions on the left-hand side of the sequent), but instead the derivability of the type would depend on the stipulation of some variable $z$ being declared to have the type $\tau$. But then our term would contain a free variable, which does not solve the type inhabitation problem: That for any type $\alpha$, we could find a trivial "inhabitation" of the form $x:\alpha \vdash x:\alpha$ is uninteresting - instead, type inhabitation asks about the existence of a closed term.
So alternatively, we could attempt to introduce new declarations of the form $... \to \tau$ to the context, and continue the tree by an application of elimination rules; however, this would merely shift the problem up the tree:
If we choose $\ldots \vdash \sigma \to \tau$ as the major premise with $\ldots \vdash \sigma$ (derived by an application of the $(Id)$ rule) as the minor premise to the $(\to E)$ rule, an endless loop would occur, since a similar rule application is already used one step lower in the tree. The fact that the context now additionally contains $\sigma$ does not make a difference in this case.
If we choose $\vdash \tau \to \tau$ as the major premise to the $(\to E)$ rule, another endless loop would occur, since the other branch (the minor premise) would need to contain $\vdash \tau$ with the same context again.
And introducing new type variables different from $\sigma$ and $\tau$ wouldn't help either, but only leave us with even more to prove.
So there is no way to continue the upper right branch of the derivation tree, and at no point could we have made a smarter choice by selecting a different rule.
Since the tree can not be resolved, the type is not inhabited.
As you see, proving that a formula is not derivable can be rather cumbersome, because you need to make sure that every way of trying to build up a tree will eventually fail - I hope my proof was comprehensive enough in this respect.
Another thing you might find interesting w.r.t. the correspondence to the $\{\to\}$-fragment of intuitionistic logic: The formula $(((((\sigma \to \tau) \to \sigma) \to \sigma) \to \tau) \to \tau)$, i.e. the above formula with two succedents $\tau$ appended, is derivable/inhabited. This formula represents the double negation of Peirce's law (under the assumption that $\tau$ stands for $\bot$, and $\neg \neg \phi$ is seen as an abbreviation of $(\phi \to \bot) \to \bot$). This formula can be derived in intuitionistic logic, and in particular in the $\{\to\}$-fragment, i.e. in positive implication logic.$^2$ Thus, the type $(((((\sigma \to \tau) \to \sigma) \to \sigma) \to \tau) \to \tau)$ is inhabited (take $\alpha$ for $\sigma$ and $\beta$ for $\tau$):
An inhabitant (a $\lambda$-term) corresponding to this type (formula) would be $\lambda x.(x(\lambda y.y(\lambda z.x(\lambda w.z))))$. You see that the part $\lambda x.(x(\lambda y.M))$ is the same as before, except that now $M$ can be resolved.
Some more words on Peirce from a proof-theoretic perspective:
A derivation of Peirce's law in classical logic requires an application of the reductio ad absurdum rule (or something equivalent, like double negation elimination): An assumption $\neg \alpha$ is made from which $\bot$ is derived, then $\alpha$ is concluded while discharging the assumption $\neg \alpha$. Again, you would need to argue why RAA is necessary, in other words, why there can be no derivation without this step.
RAA/DNE is part of/admissible in natural deduction for classical logic, but not intuitionistic logic: In IL, only ex falso quodlibet sequitur is available, which allows us to conclude anything from $\bot$ but does not allow us to discharge assumptions of the form $\neg \alpha$. In fact, from a proof-theoretic perspective, the presence or absence of RAA/DNE is exactly what tells the two systems apart; the remaining set of rules is the same; so IL is basically CL with RAA (equivalently, EFQ+DNE) replaced by the weaker EFQ rule.
Since RAA/DNE is necessary for the proof but this rule is not available in IL, Peirce is not derivable in ND (or any other system) for IL, and in turn not for positive implication logic, the $\{\to\}$-fragment of IL.
$^1$ In a formal proof, major premise = the premise with the formula that contains the occurrence of the operator to be eliminated by the rule, minor premises = the other premises. In the $(\to E)$ rule, with a conclusion of the form $\Gamma \vdash \beta$ and two premises, the major premise is the premise of the form $\Gamma \vdash \alpha \to \beta$, and the minor premise is the one of the form $\Gamma \vdash \alpha$.
$^2$ In fact, it can be shown that for any formula which is only valid classically but not in IL, its double negation can be derived in IL. Unlike in classical logic, in IL $\neg \neg \phi$ is not equivalent to $\phi$: it is a weaker proposition, so $\phi$ implies $\neg \neg \phi$, but not necessarily the other way round. |
Proof of $\int\limits_{A}f=\int\limits_{\mathbb{R}}f{1}_A$ for the Lebesgue integral | In the comments section ThomasE. proposed a completely different yet beautiful approach that I will now present here.
First a lemma: \begin{equation}\int\limits_{\mathbb{R}}f=\int\limits_{A}f+\int\limits_{A^c}f\end{equation}
Proof: Let $s_1,s_2$ be any simple functions on $A$ and $A^c$ respectively and define
$$s(x)=\begin{cases}s_1(x) & \text{if $x\in A$,}\\ s_2(x) &\text{if $x\in A^c$.}\end{cases}$$
Then, \begin{gather}\int\limits_{A}f+\int\limits_{A^c}f=\sup\left\{\int\limits_{A}s_1:0\le s_1\le f|_A\right\}+
\sup\left\{\int\limits_{A^c}s_2:0\le s_2\le f|_{A^c}\right\}\\
\int\limits_{A}f+\int\limits_{A^c}f=\sup\left\{\int\limits_{A}s_1+\int\limits_{A^c}s_2:0\le s_1\le f|_A\text{ and }0\le s_2\le f|_{A^c}\right\}
=\sup\left\{\int\limits_{\mathbb{R}}s:0\le s\le f\right\}\\
\int\limits_{A}f+\int\limits_{A^c}f\le \int\limits_{\mathbb{R}}f
\end{gather}
and
\begin{gather} \int\limits_{\mathbb{R}}f=\sup\left\{\int\limits_{\mathbb{R}}s:0\le s\le f\text{ and }s\text{ is simple}\right\}=
\sup\left\{\int\limits_{A}s+\int\limits_{A^c}s:0\le s|_A\le f|_A\text{ and }0\le s|_{A^c}\le f|_{A^c}\right\}\\
\int\limits_{\mathbb{R}}f=\sup\left\{\int\limits_{A}s:0\le s|_A\le f|_A\right\}+\sup\left\{\int\limits_{A^c}s:0\le s|_{A^c}\le f|_{A^c}\right\}\le\int\limits_{A}f+\int\limits_{A^c}f
\end{gather}
Thus, the Lemma is proven. Now
\begin{equation}\int\limits_{A}f=\int\limits_{A}f+\int\limits_{A^c}0=\int\limits_{A}f1_A+\int\limits_{A^c}f1_A=\int\limits_{\mathbb{R}}f1_A\end{equation}
This all seems to be correct to my eyes, but is it? |
Finding an odd determinant | Here's another solution which, from the sound of the OP's description of his/her work, at least appears to give a different view of things.
Write
$\mathbf x = \begin{pmatrix} x_1 \\ x_2 \\ . \\ . \\ . \\ x_n \end{pmatrix} \tag{1}$
and
$\mathbf y = \begin{pmatrix} y_1 \\ y_2 \\ . \\ . \\ . \\ y_n \end{pmatrix}; \tag{2}$
then
$A = I + \mathbf x \mathbf y^T, \tag{3}$
and we seek to show that $\det A = 1 + \mathbf y^T \mathbf x = 1 + \sum_1^n y_i x_i$. Note that if $\mathbf y = 0$, then $A = I$, so we have $\det A = 1$ and we are done; a similar argument applies if $\mathbf x = 0$; we thus assume that $\mathbf x \ne 0 \ne \mathbf y$. We examine the eigen-structure of $\mathbf x \mathbf y^T$. Observe that
$(\mathbf x \mathbf y^T) \mathbf x = (\mathbf y^T \mathbf x) \mathbf x, \tag{4}$
showing that $\mathbf x \ne 0$ is an eigenvector of $\mathbf x \mathbf y^T$ with eigenvalue $\mathbf y^T \mathbf x$. Since $\mathbf y \ne 0$, we must have $y_i \ne 0$ for some $i$. Consider the vectors $\mathbf z_j$, $j \ne i$, with components $\mathbf z_{jk} = -\delta_{jk} + y_i^{-1}y_j \delta_{ik}$; then $\mathbf z_j \ne 0$ for all $j$ and $\mathbf y^T \mathbf z_j = \mathbf z_j^T \mathbf y = 0$ for all $j$ as well, since
$\mathbf z_j^T \mathbf y = \sum_{k = 1}^{k = n} (-\delta_{jk} + y_i^{-1}y_j \delta_{ik})y_k = -y_j + y_j = 0. \tag{5}$
Finally, the $\mathbf z_j$ are linearly independent, for if we take $n - 1$ scalars $\alpha_j$, $j \ne i$, we see that the components of $\sum_{j = 1, j \ne i}^n \alpha_j \mathbf z_j$ are given by $(\sum_{j = 1, j \ne i}^n \alpha_j \mathbf z_j)_k = -\alpha_k$ as long as $k \ne i$, so that $\sum_{j = 1, j \ne i}^n \alpha_j \mathbf z_j = 0$ forces $\alpha_j = 0$ for all $j \ne i$. These arguments also establish that
$(\mathbf x \mathbf y^T) \mathbf z_j = 0, \; j \ne i, \tag{6}$
whence the $\mathbf z_j$ are a set of $n - 1$ linearly independent eigenvectors of $\mathbf x \mathbf y^T$, all with eigenvalue $0$. Thus we see that $\mathbf x \mathbf y^T$ has $n - 1$ eigenvalues $0$ and one eigenvalue $\mathbf y^T \mathbf x$. Things are finished off by exploiting the simple fact that for any square matrix $B$, $B \mathbf v= \lambda \mathbf v \Leftrightarrow (B + \mu I)\mathbf v = (\lambda + \mu)\mathbf v$. Applying this to $A = I + \mathbf x \mathbf y^T$ we see that the eigenvalues of $A$ are $1$ ($n - 1$ times) and $1 + \mathbf y^T \mathbf x$ (one time). Multiplying the eigenvalues together shows that
$\det A = 1 + \mathbf y^T \mathbf x, \tag{7}$
as was hypothesized.
Hope this helps. Cheerio,
and as always,
Fiat Lux!!! |
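As a quick numerical illustration of the conclusion (a sketch with randomly chosen vectors, not part of the original argument):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
x = rng.standard_normal(n)
y = rng.standard_normal(n)

A = np.eye(n) + np.outer(x, y)    # A = I + x y^T
print(np.linalg.det(A))           # determinant of A
print(1 + y @ x)                  # 1 + y^T x -- should agree up to rounding
```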
Problem 4 Barry Simon a comprehensive course in analysis part 1. | It is obvious that $f=g$ a.e. implies $|f(y)| \leq \sup_x|g(x)|$ a.e. so $|f(y)| \leq RHS$ a.e. Hence LHS $\leq $RHS. On the other hand if $g(x)=f(x)$ when $|f(x)| \leq \|f\|_{\infty}$ and $0$ otherwise then $f=g$ a.e. and $\sup_x|g(x)| \leq \|f\|_{\infty}$ so RHS $\leq$ LHS. |
If $x$ $\in \mathbb{R}$, then $(x^{2} + 1)^{2}\geq 2x$ | If $x\in\mathbb R$, then $x^2+1\ge1$.
On the other hand, $(x-1)^2\ge0$, so $x^2+1\ge 2x$, so $1\ge\dfrac{2x}{x^2+1}$.
Therefore, $x^2+1\ge\dfrac{2x}{x^2+1}$. |
Can a double-factorial be a perfect square? | No, for odd $n,$ there is a prime between $(n-1)/2$ and $n.$ The exponent of this prime in factoring $n!!$ is one, that is, odd.
Edit: Looking at even numbers and the definition, it appears that
$$ (2n)!! = 2^n n! $$
in which case we may ignore the exponent of $2$ and concentrate on $n!,$ which cannot be a square for $n \geq 3$ either, also because of an odd prime. See here. |
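A small brute-force check of the claim (a sketch, not part of the original argument):

```python
import math

def double_factorial(n):
    return math.prod(range(n, 0, -2))

def is_square(m):
    r = math.isqrt(m)
    return r * r == m

# only the trivial cases n = 0, 1 give a perfect square; the list below is empty
print([n for n in range(2, 200) if is_square(double_factorial(n))])
```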
convert a double integral to a triple integral | Let $\Omega = \{ (x,y,z) \;|\; (x,y) \in D \mbox{ and } 0 \leq z \leq 1-x-y \}$. Then turn the triple integral into a double $+$ a single integral...
$$(p-1)\iiint\limits_{\Omega} x^{m-1}y^{n-1}z^{p-2} \,dV = \iint\limits_D \left(\int_0^{1-x-y} (p-1)x^{m-1}y^{n-1}z^{p-2}\,dz\right)\,dA$$
$$= \iint_D \left. x^{m-1}y^{n-1}z^{p-1} \right|_0^{1-x-y}\,dA =
\iint_D x^{m-1}y^{n-1}(1-x-y)^{p-1}\,dA$$ |
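As a quick numerical sanity check (a sketch assuming $D$ is the triangle $\{x,y\ge 0,\ x+y\le 1\}$ and sample exponents $m,n,p$), both sides can be estimated by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, p = 2, 3, 2                    # assumed sample exponents
N = 2_000_000

# left-hand side: (p-1) * integral of x^{m-1} y^{n-1} z^{p-2} over Omega, via the unit cube
x, y, z = rng.random((3, N))
in_omega = z <= 1 - x - y
lhs = (p - 1) * np.mean(np.where(in_omega, x**(m-1) * y**(n-1) * z**(p-2), 0.0))

# right-hand side: integral of x^{m-1} y^{n-1} (1-x-y)^{p-1} over D, via the unit square
x, y = rng.random((2, N))
in_D = x + y <= 1
rhs = np.mean(np.where(in_D, x**(m-1) * y**(n-1) * (1 - x - y)**(p-1), 0.0))

print(lhs, rhs)                      # the two estimates should agree closely
```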
Metric spaces and σ-closure-preserving bases (Nagata's metrization theorem) | For each $n\in\Bbb Z^+$ let $\mathscr{G}_n$ be a locally finite open refinement of the open cover
$$\left\{B\left(x,\frac1n\right):x\in X\right\}\;.$$
Locally finite families are closure-preserving, so each $\mathscr{G}_n$ is closure-preserving. Let $x\in X$ and $n\in\Bbb Z^+$; then $\{G\in\mathscr{G}_n:x\in G\}$ is finite, so $V_n(x)=\bigcap\{G\in\mathscr{G}_n:x\in G\}$ is an open nbhd of $x$. Fix any $G_x\in\{G\in\mathscr{G}_n:x\in G\}$; then $G_x\subseteq B\left(y,\frac1n\right)$ for some $y\in X$, so
$$\operatorname{diam}V_n(x)\le\operatorname{diam}G_x\le\operatorname{diam}B\left(y,\frac1n\right)\le\frac2n\;.\tag{1}$$
Thus, for each $n\in\Bbb Z^+$ we have
$$x\in V_{3n}(x)\subseteq B\left(x,\frac1n\right)\;,$$
and $\{V_n(x):n\in\Bbb Z^+\}$ is therefore a local base at $x$.
Added: Let $\mathscr{G}=\bigcup_{n\in\Bbb Z^+}\mathscr{G}_n$. The fact that $\mathscr{G}$ is a base for $X$ also follows easily from $(1)$: for any $G\in\mathscr{G}_{3n}$ we have
$$x\in G\subseteq B\left(x,\frac1n\right)\;.$$ |
Counting Conjugacy Classes: $A^6 = I$ | Please read my comments under the OP's question. In this answer, I would like to only address the case $m=6$, that is, the situation the OP asks for. I do not know how to deal with the general exponent $m$.
As we have established, the characteristic polynomial involved in each indecomposable block of $A$ is one of the following four cyclotomic polynomials: $$\Phi_1(x)=x-1\,,\,\,\Phi_2(x)=x+1\,,\,\,\Phi_3(x)=x^2+x+1\,,\text{ and }\Phi_6(x)=x^2-x+1\,.$$ Now, $A$ must have at least one indecomposable block of size $2\times 2$. Let $a$ and $b$ denote the number of $2$-by-$2$ indecomposable blocks of $A$ and the number of $1$-by-$1$ blocks of $A$. Then, $a\geq 1$ and $2a+b=k$. Denote by $J_1,J_2,\ldots,J_a$ the $2$-by-$2$ blocks of $A$, whereas $K_1,K_2,\ldots,K_b$ the $1$-by-$1$ blocks. We may assume that $J_1,J_2,\ldots,J_p$ have $\Phi_6$ as the characteristic polynomial, whereas $J_{p+1},J_{p+2},\ldots,J_a$ have $\Phi_3$ as the characteristic polynomial. Similarly, suppose that $K_1,K_2,\ldots,K_q$ have $\Phi_2$ as the characteristic polynomial, whilst $K_{q+1},K_{q+2},\ldots,K_b$ have $\Phi_1$ as the characteristic polynomial.
If $p\geq 1$, then there is no restriction on $q$. Thus, for each fixed $a\in\Biggl\{1,2,\ldots,\left\lfloor\dfrac{k}{2}\right\rfloor\Biggr\}$, there are $a$ ways to choose $p\geq 1$ and $b+1=k-2a+1$ ways to choose $q$. Thus, for $p\geq 1$, there are
$$\begin{align}\sum_{a=1}^{\left\lfloor\frac{k}2\right\rfloor}\,a(k-2a+1)&=(k-1)\,\sum_{a=1}^{\left\lfloor\frac{k}2\right\rfloor}\,a-4\,\sum_{a=1}^{\left\lfloor\frac{k}2\right\rfloor}\,\frac{a(a-1)}{2}
\\
&=\frac{k-1}{2}\,\left\lfloor\frac{k}{2}\right\rfloor\,\left(\left\lfloor\frac{k}{2}\right\rfloor+1\right)-\frac{2}{3}\,\left\lfloor\frac{k}{2}\right\rfloor\,\left(\left\lfloor\frac{k}{2}\right\rfloor+1\right)\,\left(\left\lfloor\frac{k}{2}\right\rfloor-1\right)
\end{align}$$
corresponding conjugacy classes.
If $p=0$, then $q\geq1$ is required. Thus, for each fixed $a\in\Biggl\{1,2,\ldots,\left\lfloor\dfrac{k}{2}\right\rfloor\Biggr\}$, there are $b=k-2a$ ways to choose $q\geq 1$. Hence, for $p=0$, there are
$$\sum_{a=1}^{\left\lfloor\frac{k}2\right\rfloor}\,(k-2a)=k\,\left\lfloor\frac{k}{2}\right\rfloor-\left\lfloor\frac{k}{2}\right\rfloor\,\left(\left\lfloor\frac{k}{2}\right\rfloor+1\right)$$
corresponding conjugacy classes.
This shows that
$$N(k,6)=\frac{1}{6}\,\left\lfloor\frac{k}{2}\right\rfloor\,\left(3\,(k-3)\,\left\lfloor\frac{k}{2}\right\rfloor-4\,\left\lfloor\frac{k}{2}\right\rfloor^2+9\,k-5\right)$$
is the total number of conjugacy classes. We have
$$N(1,6)=0\,,\,\,N(2,6)=1\,,\,\,N(3,6)=3\,,\,\,N(4,6)=7\,,\text{ and }N(5,6)=12\,.$$
(The OP miscounted something. From his list, there should be $12$ distinct conjugacy classes for $k=5$.)
Note that
$$\frac{(k-1)(k^2+10k-3)}{24}\leq N(k,6)\leq \frac{k(k-1)(k+10)}{24}\,.$$
The left-hand side is an equality iff $k$ is an odd positive integer. The right-hand side is an equality iff $k=1$ or $k$ is an even positive integer. |
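The closed form is easy to confirm against a brute-force count of the admissible block data (a sketch, using the parametrization by $(a,p,q)$ described above):

```python
def N_formula(k):
    m = k // 2
    return m * (3 * (k - 3) * m - 4 * m**2 + 9 * k - 5) // 6

def N_bruteforce(k):
    # a = number of 2x2 blocks, p of which have Phi_6; b = k - 2a 1x1 blocks,
    # q of which have Phi_2; we need a >= 1 and (p >= 1 or q >= 1)
    return sum(1
               for a in range(1, k // 2 + 1)
               for p in range(a + 1)
               for q in range(k - 2 * a + 1)
               if p >= 1 or q >= 1)

print([(k, N_formula(k), N_bruteforce(k)) for k in range(1, 11)])
```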
radius of convergence for a taylor sum | By the ratio test, using
$$(2 (n+1))!=(2n)!\,(2n+1)(2n+2), $$
$$(n+1)!=n!\,(n+1), $$ and
$$2^{2 (n+1)}=4\cdot 2^{2n} ,$$
we find
$$\lim_{n\to\infty}\left|\frac {a_{n+1}}{a_n}\right|=\lim_{n\to\infty} \frac {(2n+1)(2n+2)}{4(n+1)^2}=1,$$
so $R=1$. |
Complexity of generating a prime larger than $N$ | I believe it's unknown if there is an algorithm that can compute a prime larger than $N$ in polynomial time (that is, $O((\log{N})^k)$ for some $k$). Cramér's conjecture implies that exhaustive search (counting up from $N+1$ and testing with AKS) runs in polynomial time. |
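A minimal sketch of that exhaustive search (using sympy's `isprime`, a probabilistic/BPSW-style test, in place of AKS):

```python
from sympy import isprime

def next_prime_above(N):
    n = N + 1
    while not isprime(n):          # exhaustive search upward from N + 1
        n += 1
    return n

# under Cramer's conjecture only O((log N)^2) candidates are tried
print(next_prime_above(10**12))
```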
Word problem -- water pipes and pool | Let $p_1$, $p_2$, and $p_3$ be the fraction of the pool filled in one minute by pipes 1, 2, and 3, respectively. Their reciprocals $\dfrac 1{p_1}$, $\dfrac 1{p_2}$, and $\dfrac 1{p_3}$ are the times required to fill the pool (for example, if $p_1 = \dfrac 1{10}$, then one tenth of the pool is filled in one minute so it takes 10 minutes to fill).
Since pipes 1, 2, and 3 working together fill the pool in 6 minutes, they fill $\dfrac 16$ of the pool in one minute. The equation is $$p_1 + p_2 + p_3 = \frac 16.$$
Since pipe 2 fills the pool in $\dfrac 34$ of the time that pipe 1 does, you have $$ \frac 1{p_2} = \frac{3}{4} \cdot \frac{1}{p_1}.$$
Since pipe 3 takes 10 minutes longer than pipe 2, you have $$\frac{1}{p_3} = \frac{1}{p_2} + 10.$$
Now you can determine $p_1$, $p_2$, and $p_3$. |
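If you want to let a computer do the last step, here is a sketch using sympy (the equations are exactly the three above):

```python
from sympy import Eq, Rational, solve, symbols

p1, p2, p3 = symbols("p1 p2 p3", positive=True)
eqs = [
    Eq(p1 + p2 + p3, Rational(1, 6)),        # together the pipes fill 1/6 of the pool per minute
    Eq(1 / p2, Rational(3, 4) / p1),         # pipe 2 needs 3/4 of the time of pipe 1
    Eq(1 / p3, 1 / p2 + 10),                 # pipe 3 needs 10 minutes longer than pipe 2
]
print(solve(eqs, [p1, p2, p3], dict=True))   # rates; the fill times are the reciprocals
```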
existence and uniqueness solution Cauchy problem $y'=y(y+1)e^{-y}$ | For $t < 0$, there is no problem. The reason is that $y$ is increasing for $y(0) > 0$ and decreasing for $y(0) < -1$. Hence we only consider the case $t > 0$.
For $y(0) > 0$, $y(t) > 0$. Since $f(t,y)$ is Lipschitz in $y$ for $y\geq 0$, you have
$$
y'(t) \leq Cy(t)
$$
for some $C$. This implies that $y(t)\leq y(0)e^{Ct}$.
For $y(0) < -1$, $y(t) < -1$. Put $u = -y-1$. Then
$$
u'(t) = u(u+1) e^{u+1} \geq u^2e^u \geq u^4.
$$
By integration,
$$
\frac{1}{u(0)^3} - \frac{1}{u(t)^3} \geq 3t.
$$
This means the solution can't be global (otherwise we would get $u(t) < 0$ for large $t$). |
Poisson bracket of function of coordinates in terms of canonical brackets | The axioms of the Poisson bracket imply that for $f\in C^\infty(M)$, the mapping $\lbrace f,\cdot\rbrace:C^\infty(M)\to C^\infty(M)$ is a vector field (i.e. a derivation on the ring of $C^\infty(M)$ functions). Therefore like all vector fields, it satisfies
$$
\lbrace f,\cdot \rbrace = \sum_j\lbrace f,x^j\rbrace \frac{\partial}{\partial x^j}
$$
(this is a standard result about vector fields, that essentially follows from Taylor's theorem). So
$$
\lbrace f,g\rbrace = \sum_j\lbrace f,x^j\rbrace \frac{\partial g}{\partial x^j}.
$$
However since $\lbrace\cdot,\cdot\rbrace$ is antisymmetric, the same applies to $f$, so
$$
\lbrace f,g\rbrace = \sum_{ij}\lbrace x^i,x^j\rbrace \frac{\partial f}{\partial x^i}\frac{\partial g}{\partial x^j}.
$$ |
Lagrangian step for optimizing a concave function | This problem can be solved by using the following theorem (one of my favorites):
A convex function on a compact convex set attains its maximum at an extreme point of the set.
Note that $\min_C \sum_i \Psi(c_i)$ is equivalent to $\max_C -\sum_i \Psi(c_i)$ and that $-\sum_i \Psi(c_i)$ is a convex function (because the sum of convex/concave functions is convex/concave).
The extreme points of the compact convex set defined by $\sum_i c_i = 1$ and $c_i \geq 0$ are $C_j=[\delta_{ij}]$ with $\delta_{ij}=1$ if $i=j$ and $\delta_{ij}=0$ otherwise. It's easy to see that each $C_j$ is a solution, and that the sought minimum is $\Psi(1)+(k-1)\Psi(0)$. (There might exist more solutions $C$ than these in case $\Psi$ is not strictly concave.)
So let's see how we can find these solutions using the Lagrangian. The above solutions $C_j$ correspond to $\gamma = -\Psi'(c_j)$ and $\alpha_i = \Psi'(c_i) + \gamma$.
You might ask: "But what is so special about this solution, and why does it help me to solve the minimization problem?" The important points here are the "complementary slackness" conditions (see KKT conditions), which come into play here, because we are dealing with inequalities. (The Lagrange multiplier technique is often introduced only for the case of equalities, but generalizations like the KKT conditions allow one to use it also for the case of inequalities.)
Because of $\sum_i c_i=1$, there must be at least one $j$ with $c_j>0$. Because of the "complementary slackness" conditions, the assumption $c_j>0$ leads to $\alpha_j=0$, which implies $\gamma = -\Psi'(c_j)$. For the other $c_i$, we can now distinguish the cases $\alpha_i=0$ and $\alpha_i\neq0$. Because of the "complementary slackness" conditions, $\alpha_i\neq0$ implies $c_i=0$. The case $\alpha_i=0$ leads to $c_i = \Psi'^{-1}(-\gamma)$. By that approach you would have to check $2^{k-1}$ candidate solutions, which is theoretically feasible, but not very practical. So you better make use of the special knowledge provided by "convexity"/"concavity" instead of only relying on the general Lagrangian technique. (Note however that the minimization of concave functions over convex domains is still an NP-hard problem...) |
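A small numerical illustration of the extreme-point theorem (a sketch assuming the concrete concave function $\Psi(c)=\sqrt c$ and $k=4$, which are not from the original problem): random points of the simplex never beat the vertex value $\Psi(1)+(k-1)\Psi(0)$.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 4
psi = np.sqrt                                        # assumed concrete concave Psi with Psi(0) = 0

vertex_value = psi(1.0) + (k - 1) * psi(0.0)         # value at an extreme point C_j
C = rng.dirichlet(np.ones(k), size=100_000)          # random points of the simplex
print(vertex_value, psi(C).sum(axis=1).min())        # no sampled point beats the vertex value
```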
If $x$ leaves remainder $2$ when divided by $8$, what will the remainder be when $x + 9$ is divided by $8$? | The easiest way:
Note that $x-2$ is a multiple of $8$, hence $x-2=8k,\ k\in\mathbb{Z}.$
Therefore $$x+9=(x-2)+11=(x-2)+8+3=8k+8+3=8(k+1)+3.$$
Now we can conclude that the remainder is $3$.
The cool way:
We have that
$$x\equiv2\ (mod\ 8)$$
Adding $9$ to both sides, we have
$$x+9\equiv11\equiv3\ (mod \ 8).$$ |
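A one-line check of the arithmetic in Python:

```python
# every x leaving remainder 2 when divided by 8 gives remainder 3 for x + 9
print(all((x + 9) % 8 == 3 for x in range(2, 10_000, 8)))   # True
```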
Does this function define first-order ordinary differential equations? | This is one sense in which a differential equation can be homogeneous, but be cautioned that this is not the more common sense of the term, which is very different!
The usual sense of homogeneous in the context of differential equations is the following: A linear ordinary differential equation is one of the form
$$\frac{d^m}{dx^m} y(x) + A_{m - 1}(x) \frac{d^{m - 1}}{dx^{m - 1}} y(x) + \cdots + A_1(x) \frac{d}{dx} y(x) + A_0(x) y(x) = F(x).$$
It is furthermore homogeneous iff $F(x)$ (called the source term) is $0$, or equivalently, iff $y(x) = 0$ is a solution. A key feature of homogeneous linear o.d.e.s is that their solutions comprise a vector space under the usual addition and scalar multiplication of functions.
We see immediately that an o.d.e. $y' = f(x, y)$ is a homogeneous linear o.d.e. if $f(x, y)$ is linear in $y$, that is, if $\frac{\partial^2 f}{\partial y^2} = 0$ and $f(x, 0) = 0$.
Wikipedia describes a second notion of homogeneous first-order equations that I hadn't encountered before. It's closer to the notion you describe but still distinct. A first-order differential equation of the form $$P(x, y) \,dx + Q(x, y) \,dy = 0$$ is of homogeneous type if $P, Q$ are homogeneous functions of the same degree: A function $R(x, y)$ is homogeneous of degree $m$ iff $R(\lambda x, \lambda y) = \lambda^m R(x, y)$. Rearranging, we can rewrite such an equation (ignoring the places where $Q$ takes on the value zero) as $$\frac{d}{dx} y(x) = -\frac{P(x, y)}{Q(x, y)},$$ and the right-hand side satisfies $$-\frac{P(\lambda x, \lambda y)}{Q(\lambda x, \lambda y)} = -\frac{\lambda^m P(x, y)}{\lambda^m Q(x, y)} = -\frac{P(x, y)}{Q(x, y)},$$ so any homogeneous first-order differential equation of homogeneous type can be put in the indicated form.
Conversely, we can rewrite the given equation $y' = f(x, y)$ as $$-f(x, y) \,dx + dy = 0,$$ and both of the coefficients are homogeneous of degree zero, hence we can view the given equation as one of homogeneous type, and hence the notion in the question agrees with the notion of homogeneous type, at least up to the zero set of $Q$. |
A car moves in a straight line with velocity $t^{−2}−1/25 ft/s$. | $$
\int_3^5 \left(\frac{1}{t^2} - \frac{1}{25} \right)\,dt + \left|\int_5^6 \left(\frac{1}{t^2} - \frac{1}{25} \right)\,dt \right| = \frac{9}{150} = \frac{3}{50}
$$
and
$$
\int_3^6 \left(\frac{1}{t^2} - \frac{1}{25} \right)\,dt= \frac{7}{150}
$$
Does this help? |
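A quick numerical check of the two quantities (a sketch using scipy, not part of the original answer):

```python
from scipy.integrate import quad

v = lambda t: 1 / t**2 - 1 / 25

forward = quad(v, 3, 5)[0]         # v >= 0 on [3, 5]
backward = quad(v, 5, 6)[0]        # v <= 0 on [5, 6]
print(forward + abs(backward))     # total distance: 9/150 = 3/50 = 0.06
print(quad(v, 3, 6)[0])            # net displacement: 7/150 ~ 0.0467
```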
Prove or disprove convergence for the series: $\sum_{n=2}^{\infty}\left((1+\frac{1}{n})^n-e\right)^{\sqrt{\log(n)}}$ | I will assume that the absolute value of $\left(1+\tfrac{1}{n}\right)^n-e$ is taken since otherwise the terms don't exist, as mentioned in the comment.
The difference between $e$ and $\left(1+\tfrac{1}{n}\right)^n$ is of order $\tfrac{1}{n}$ (expand the binomial to the second order term to see that; this is mentioned in the comments, too). So for large $n$ the terms are of order $n^{-\sqrt{\log(n)}}$. For $n$ large enough, they are bounded from above by, say, $n^{-2}$; hence, the series converges. |
Partial derivative of an Integral - potentially trivial | $C(K)=\int_K^{\infty} x\rho (x)dx-K \int_K^{\infty} \rho(x)dx$. So the first derivative w.r.t. $K$ is $-K\rho (K)-\int_K^{\infty} \rho(x)dx+K\rho (K)=-\int_K^{\infty} \rho(x)dx$ (where we have used the product rule). Hence the second derivative is $\rho (K)$. |
Uniform continuity supremum | A start: Take $d=2$ for example. We know that there is $C$ such that $|x-y|\le 1 \implies |f(y)-f(x)|\le C.$ Look at $(m,n) \in \mathbb N^2.$ Then
$$f(m,n) - f(0,0) = (f(m,n)-f(m,n-1)) + (f(m,n-1)-f(m,n-2)) + \cdots + (f(m,1) - f(m,0))$$ $$ + (f(m,0)-f(m-1,0)) + \cdots + (f(1,0)-f(0,0)).$$
This is $\le C(m+n).$ And $m+n \le \sqrt 2 \sqrt {m^2 + n^2}$ $ = \sqrt 2|(m,n)-(0,0)|.$ Thus we have
$$f(m,n) - f(0,0) \le C\sqrt 2|(m,n)-(0,0)|.$$
That looks good, no? See if you can use this idea for the full proof. |
Borel measurable functions | Hint 1:
Proposition 1: Let $(E,\mathcal{A})$ be a measurable space and let $f:(E,\mathcal{A})\to (F_1,\mathcal{B}_1)$, $g:(E,\mathcal{A})\to(F_2,\mathcal{B}_2)$ be two measurable functions. Then the product function
\begin{align*}h:(E,\mathcal{A})&\to(F_1\times F_2,\mathcal{B}_1\otimes\mathcal{B}_2)\\x&\mapsto(f(x),g(x))\end{align*}
is measurable.
Where $\mathcal{B}_1\otimes\mathcal{B}_2$ denotes the product sigma algebra on $F_1\times F_2$.
Proof
We need to show that the preimage of a measurable set with respect to $h$ is measurable. Since the collection
$$\mathcal{C}=\{B_1\times B_2 ; B_1 \in \mathcal{B}_1 , B_2 \in \mathcal{B}_2\}$$
generates $\mathcal{B}_1\otimes\mathcal{B}_2$, it suffices to check this on an arbitrary set $B_1\times B_2\in\mathcal{C}$.
Then we have
$$h^{-1}(B_1\times B_2) = f^{-1}(B_1) \cap g^{-1}(B_2)$$
but $f^{-1}(B_1)$ is measurable since $f$ is measurable and similarly $g^{-1}(B_2)$ is measurable since $g$ is. The intersection of two measurable sets is again measurable, hence $h^{-1}(B_1\times B_2) = f^{-1}(B_1) \cap g^{-1}(B_2)$ is measurable.
Hint 2
Proposition 2: The composition of two measurable functions is measurable
Proof
Let $f$ and $g$ be two measurable functions then we have
$$ (g \circ f)^{-1}(C) = \underbrace{f^{-1}(\underbrace{g^{-1}(C))}_{\text{measurable}}}_{\text{measurable}}$$
Do you see how to proceed? |
Minimizing area of a triangle with two fixed point and a point on parabola | HINT:
$1$st of all, it should be $(t^2,t)$
Now the area $$\frac12\left|\det\begin{pmatrix} -1 & 0 & 1 \\ 0 & 1 & 1 \\ t^2 & t&1\end{pmatrix}\right|$$
$$=\frac12 (t^2-t+1)=\frac{(2t-1)^2+3}8\ge \frac38$$
So, the area will be minimum if $t=\frac12$ |
Let $\{e_k\}_{k=1}^\infty$ denote an orthonormal system. Show that $ T\{c_k\}_{k=1}^\infty = \sum_{k=1}^\infty c_ke_k$ is well-defined and bounded | Since $\{e_k\}$ is orthonormal, you have
$$
\left\|\sum_{k=m+1}^n c_ke_k\right\|^2=\sum_{k=m+1}^n|c_k|^2,
$$
and the $\ell^2$ condition then guarantees that your series converges.
As for the second question, not only is $T$ bounded, but it is isometric, with the same computation:
$$
\|Tc\|^2=\left\|\sum_{k=1}^\infty c_ke_k\right\|^2=\sum_{k=1}^\infty|c_k|^2=\|c\|^2.
$$ |
Prove $\sum_{i=n}^{2n}i=\frac{3}{2}n(n+1)$ using induction | Hint:
$$
\sum_{k = n + 1}^{2n + 2} k
= \left(\sum_{k = n}^{2n + 2} k\right) - n
= \left(\sum_{k = n}^{2n} k\right) + \left(2n + 1 + 2n + 2\right) - n
= \left(\sum_{k = n}^{2n} k\right) + 3n + 3.
$$
Explanation:
In the very first expression my first summand is the $n + 1$-th term. After the equality sign I begin at the $n$-th term, which is not in the first expression, so I have to subtract it again.
For the next equality, it's the same procedure, only the other way around. In the second term, I add until the $2n + 2$-th term. I only want to add until $2n$, so I have to subtract the $2n + 2$-th and $2n + 1$-th term. Note that the $j$-th term of the series is always $j$. |
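A one-line numerical check of the identity being proven (independent of the induction):

```python
# check sum_{i=n}^{2n} i = 3n(n+1)/2 for small n
print(all(sum(range(n, 2 * n + 1)) == 3 * n * (n + 1) // 2 for n in range(1, 200)))   # True
```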
How does one find the Laplace transform for the product of the Dirac delta function and a continuous function? | Here is how (check Laplace transform of $\delta(t-a)$)
$$ \int_{0}^{\infty} g(t)e^{-st} dt = \int_{0}^{\infty}\delta(t-2\pi) \cos t\, e^{-st}\,dt = \cos(2\pi)e^{-2\pi s }= e^{-2\pi s }. $$ |
Why is $\int_{\partial D}x\,dy$ invalid for calculating area of $D$? | You can use $\int_{\partial D} x\,dy$ to compute area in this context. The "familiar formula" does have a more symmetric look to it -- maybe that's why you find it more familiar.
There are infinitely many formulas like this that work. In general you need two functions $P$ and $Q$ such that $Q_x-P_y=1$. Then $\int_{\partial D} P\,dx+Q\,dy$ will compute the area.
$P=-y/2$ and $Q=x/2$ gives your familiar formula.
$P=0$ and $Q=x$ is the formula in question.
One could also use $P=-y$ and $Q=0$ (i.e. $\int_{\partial D} -y\,dx$) to compute the area.
Those 3 choices are standard ones presented in traditional multivariate calculus texts. But of course there are infinitely many other choices as well. |
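A numerical illustration of those three choices (a sketch using the unit disk, whose boundary is parametrized by $x=\cos t$, $y=\sin t$; not part of the original answer):

```python
import numpy as np

n = 200_000
t = np.linspace(0, 2 * np.pi, n, endpoint=False)
x, y = np.cos(t), np.sin(t)
dx = -np.sin(t) * (2 * np.pi / n)          # x'(t) dt
dy = np.cos(t) * (2 * np.pi / n)           # y'(t) dt

print(np.sum(x * dy))                      # P = 0,    Q = x
print(np.sum(-y * dx))                     # P = -y,   Q = 0
print(np.sum((x * dy - y * dx) / 2))       # P = -y/2, Q = x/2
print(np.pi)                               # all three line integrals are close to pi
```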
Do negative binomials imply negative factorials exist? | TLDR: Factorials of negative integers don't exist under usual definitions.
Proving the identity:
Using the definition of the generalized binomial coefficient:
$$\binom{n}{k}=\frac{n^{\underline{k}}}{k!}~~~~~~\text{where}~n\in\Bbb C, k\in\Bbb N$$
where $n^{\underline{k}}=\underbrace{n(n-1)(n-2)\cdots(n-k+2)(n-k+1)}_{k~\text{terms}}$ is the falling factorial.
We have:
$\begin{array}{rlr}\binom{-n}{k}&=\frac{(-n)^{\underline{k}}}{k!}\\
&=\frac{(-n)(-n-1)(-n-2)\cdots(-n-k+2)(-n-k+1)}{k!}\\
&=\frac{(-n)(-n-1)(-n-2)\cdots(-n-k+2)(-n-k+1)}{k!}\cdot (-1)^{k}\cdot (-1)^k&\text{as}~((-1)^k)^2=1\\
\color{red}{(\dagger)}&=\frac{(n)(n+1)(n+2)\cdots(n+k-2)(n+k-1)}{k!}\cdot (-1)^k&\text{distributing}~(-1)^k~\text{across numerator}\\
&=\frac{(n+k-1)(n+k-2)\cdots (n+2)(n+1)(n)}{k!}\cdot (-1)^k&\text{rearranging numerator}\\
&=\frac{((n+k-1))((n+k-1)-1)\cdots ((n+k-1)-k+2)((n+k-1)-k+1)}{k!}\cdot(-1)^k&\text{rewriting terms in numerator}\\
&=\frac{(n+k-1)^{\underline{k}}}{k!}\cdot (-1)^k&\text{recognizing falling factorial}\\
&=\binom{n+k-1}{k}\cdot (-1)^k&\text{generalized binomial coefficient}\end{array}$
Technically, we could have stopped sooner at $\color{red}{(\dagger)}$ had we noted that $\frac{n^{\overline{k}}}{k!}=\binom{n+k-1}{k}$ where $n^{\overline{k}}$ is the rising factorial, $n^{\overline{k}}=n(n+1)(n+2)\cdots(n+k-1)$.
Comments about involvement of negative factorials:
Factorials of negative numbers do not appear in the above proof or definitions at all. In fact, going to the gamma function, a generalization of factorials to all complex numbers, one has that $\Gamma(n)$ is a pole for all negative integers $n$ (similar to a division by zero error).
Ignoring this, if you try to look at something like $(-5)!$ as $(-5)(-6)(-7)(-8)\cdots=\lim\limits_{n\to\infty}\prod\limits_{k=5}^n (-k)$, as an infinite product it will clearly diverge as the related infinite series $\lim\limits_{n\to\infty}\sum\limits_{k=5}^n\ln(-k)$ diverges.
As for your algebra implying the negative factorial's existence, that would be a happy accident due to the cancellations involved. In terms of logical implications, $P\Rightarrow \text{True}$ does not imply that $P$ is true. By assuming it exists, you concluded a true statement, but that by itself does not prove that it exists. |
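The identity itself (which, as argued above, never needs a negative factorial) can be checked numerically; the sketch below defines the generalized binomial coefficient exactly via the falling factorial.

```python
from math import comb, prod

def falling(n, k):
    # falling factorial n(n-1)...(n-k+1); n may be negative
    return prod(n - i for i in range(k))

def gen_binom(n, k):
    # generalized binomial coefficient via the falling factorial
    return falling(n, k) // prod(range(1, k + 1))

print(all(gen_binom(-n, k) == (-1)**k * comb(n + k - 1, k)
          for n in range(1, 10) for k in range(10)))   # True
```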
Two homogenous system are equivalent if they have the same answer | This cannot be done without the rank theorem or some other theorem of similar power.
We are given an $(m_A\times n)$-matrix $A$ and an $(m_B\times n)$-matrix $B$. Denote by ${\rm ker}(A)\subset{\mathbb R}^n$ the solution space of $Ax=0$, by $A_{i\!-}\in{\mathbb R}^n$ the $i^{\rm th}$ row vector of $A$, and by $${\rm row}(A):=\langle A_{1\!-}\>,\ldots, A_{m_A\!-}\rangle\subset{\mathbb R}^n$$ the row space of $A$. Similarly for $B$. We have to prove the following claim:
$${\rm row}(A)={\rm row}(B)\qquad\Leftrightarrow\qquad{\rm ker}(A)={\rm ker}(B)\ .
$$
For this it is sufficient to prove
$${\rm row}(B)\subset {\rm row}(A)\qquad\Leftrightarrow\qquad{\rm ker}(A)\subset{\rm ker}(B)\ .$$
Proof of $\Rightarrow\>: \quad$ Assume ${\rm row}(B)\subset {\rm row}(A)$, and consider a vector $x\in{\rm ker}(A)$. Let $B_{i\!-}\> x=\sum_{k=1}^n B_{ik}x_k=0$ be an equation of the $B$-system. As $B_{i\!-}=\sum_{j=1}^{m_A} \lambda_j A_{j\!-}$ for certain $\lambda_j\in{\mathbb R}$ we obtain
$$B_{i\!-}\>x=\sum_{j=1}^{m_A}\lambda_j A_{j\!-}\>x=0\ .$$
Since this is true for all $i\in[m_B]$ it follows that $x\in{\rm ker}(B)$. (This was the easy part.)
Proof of $\Leftarrow\>: \quad$ If ${\rm row}(B)\not\subset {\rm row}(A)$ then there is a row $B_{i\!-}$ of $B$ that does not belong to ${\rm row}(A)$. Denote by $A'$ the matrix obtained from $A$ by adding the row $B_{i\!-}$ at the bottom. Then $A'$ has rank one larger than $A$, hence ${\rm ker}(A')\subset{\rm ker}(A)$ has dimension one less than ${\rm ker}(A)$. It follows that there are vectors $x\in{\rm ker}(A)\setminus{\rm ker}(A')$. These vectors do not satisfy the last $A'$-equation $B_{i\!-}\>x=0$, hence do not belong to ${\rm ker}(B)$. This proves ${\rm ker}(A)\not\subset{\rm ker}(B)$. |
proof relating to limit definition of e | The inequality $1<(x+1)\ln(1+\frac{1}{x})$ becomes $\frac{1}{x+1} <\ln(1+\frac{1}{x})$ upon dividing by $x+1$, and the inequality $x\ln(1+\frac{1}{x})<1$ becomes $\ln(1+\frac{1}{x})<\frac{1}{x}$ upon dividing by $x$. |
Flipping a set of unfair coins | Let $X$ be the number of heads we observe.
Suppose we wish to find $P(X=3)$
There are ${5 \choose 3}=10$ ways to select the coins that will show up as heads. In particular
$$H_1H_2H_3$$
$$H_1H_2H_4$$
$$H_1H_2H_5$$
$$H_1H_3H_4$$
$$H_1H_3H_5$$
$$H_1H_4H_5$$
$$H_2H_3H_4$$
$$H_2H_3H_5$$
$$H_2H_4H_5$$
$$H_3H_4H_5$$
The respective probabilities for these are
$$P(H_1H_2H_3)=0.38\cdot0.18\cdot0.71\cdot0.34\cdot0.71$$
$$P(H_1H_2H_4)=0.38\cdot0.18\cdot0.29\cdot0.66\cdot0.71$$
$$P(H_1H_2H_5)=0.38\cdot0.18\cdot0.29\cdot0.34\cdot0.29$$
$$P(H_1H_3H_4)=0.38\cdot0.82\cdot0.71\cdot0.66\cdot0.71$$
$$P(H_1H_3H_5)=0.38\cdot0.82\cdot0.71\cdot0.34\cdot0.29$$
$$P(H_1H_4H_5)=0.38\cdot0.82\cdot0.29\cdot0.66\cdot0.29$$
$$P(H_2H_3H_4)=0.62\cdot0.18\cdot0.71\cdot0.66\cdot0.71$$
$$P(H_2H_3H_5)=0.62\cdot0.18\cdot0.71\cdot0.34\cdot0.29$$
$$P(H_2H_4H_5)=0.62\cdot0.18\cdot0.29\cdot0.66\cdot0.29$$
$$P(H_3H_4H_5)=0.62\cdot0.82\cdot0.71\cdot0.66\cdot0.29$$
Summing these, we get
$$P(X=3)\approx 0.286$$
Similarly, for finding $P(X=4)$, there are ${5 \choose 4}=5$ ways to pick the four successes and for finding $P(X=5)$, there are ${5 \choose 5}=1$ ways to pick the five successes.
These computations aren't very fun so perhaps a computer program can be implemented. |
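Such a program is short; the sketch below brute-forces the $2^5$ head/tail patterns, assuming the success probabilities $p=(0.38, 0.18, 0.71, 0.66, 0.29)$ read off the factors listed above.

```python
from itertools import product

p = [0.38, 0.18, 0.71, 0.66, 0.29]   # assumed P(coin i shows heads)

dist = {k: 0.0 for k in range(len(p) + 1)}
for outcome in product([0, 1], repeat=len(p)):        # 2^5 = 32 head/tail patterns
    prob = 1.0
    for pi, heads in zip(p, outcome):
        prob *= pi if heads else 1 - pi
    dist[sum(outcome)] += prob

for k, prob in dist.items():
    print(f"P(X = {k}) = {prob:.4f}")                 # the whole distribution at once
```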
Finding the transcendental equation and the form of the eigenfunctions given a regular Sturm-Liouville problem | The non-zero solutions of $X''+\mu X = 0$ with $X(\pi)=0$ can be normalized so that $X'(\pi)=1$. The resulting solution $X_{\mu}(x)$ has the form
$$
X_{\mu}(x) = \frac{\sin(\sqrt{\mu}(x-\pi))}{\sqrt{\mu}}.
$$
This is correct even for $\mu=0$ if you take the limiting case as $\mu\rightarrow 0$:
$$
X_{0}(x)=\lim_{\mu\rightarrow 0}X_{\mu}(x)=\lim_{\mu\rightarrow 0}\frac{\sin(\sqrt{\mu}(x-\pi))}{\sqrt{\mu}(x-\pi)}(x-\pi)=x-\pi.
$$
Using this form of $X_{\mu}(x)$, the eigenvalue equation becomes
\begin{align}
0 & =X_{\mu}(0)-hX_{\mu}'(0) \\
& =\frac{\sin(\sqrt{\mu}(-\pi))}{\sqrt{\mu}}-h\cos(\sqrt{\mu}(-\pi)) \\
& = -\frac{\sin(\sqrt{\mu}\pi)}{\sqrt{\mu}}-h\cos(\sqrt{\mu}\pi).
\end{align}
The solutions $\mu$ are the zeros of an entire function of $\mu$. Note that
$\mu=0$ is a solution iff $h=-\pi$. For $\mu\ne 0$, the above may be written as the transcendental equation
$$
\tan(\sqrt{\mu}\pi)=-h\sqrt{\mu}.
$$
If you plot both of these functions, you can graphically see the non-zero values of $\sqrt{\mu}$ that are solutions. |
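Instead of (or in addition to) plotting, the roots can be located numerically; the sketch below assumes a sample value $h=1$ and works with $F(s)=\sin(s\pi)+h\,s\cos(s\pi)$, where $s=\sqrt{\mu}$, so there are no tangent singularities to avoid.

```python
import numpy as np
from scipy.optimize import brentq

h = 1.0                                                        # assumed sample value of h
F = lambda s: np.sin(np.pi * s) + h * s * np.cos(np.pi * s)    # F(s) = 0  <=>  tan(pi s) = -h s

s_grid = np.linspace(0.01, 10, 4001)
roots = [brentq(F, a, b) for a, b in zip(s_grid, s_grid[1:]) if F(a) * F(b) < 0]
print([round(s**2, 4) for s in roots])                         # the eigenvalues mu = s^2
```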
Special Contour integral question | You can do it by parametrizing the curve as suggested by @EricTowers. (The answer is not independent of $r$ for $\mu$, though.)
A possibly easier way to do it is to note that if $n\neq -1$, then $F(z) = \dfrac1{n+1}(z-z_0)^{n+1}$ is an anti-derivative of $f(z) = (z-z_0)^n$ and the (complex version) of the fundamental theorem of calculus gives you a quick way to compute the integral.
In the case $n=-1$, you can still find an antiderivative, $F(z) = \log (z-z_0)$, but you have to take some care in choosing which branch of the complex logarithm to use. (Note that if $\mu$ had been a full circle, you wouldn't be able to find an anti-derivative in the $n=-1$ case.) |
Integral $\int_{-\infty}^{\infty}\frac{1}{e^{\frac{x-\mu}{T}}+1}\frac{\gamma}{(x-x_{0})^{2}+\frac{\gamma^{2}}{4}}dx$ | Using the substitution $x = \mu + T u$ and the abbreviations $u_0 = \frac{x_0 - \mu}{T}$ and $\beta = \frac{\gamma}{2T}$, we can write your integral as
$$ f \colon (0,\infty) \times \mathbb{R} \to \mathbb{R} \, , \, f (\beta, u_0) = 2 \beta \int \limits_{-\infty}^\infty \frac{\mathrm{d} u}{(\mathrm{e}^{u} + 1)[(u-u_0)^2 + \beta^2]} \equiv 2 \beta \int \limits_{-\infty}^\infty g_{\beta,u_0}(u) \, \mathrm{d} u \, . $$
$g_{\beta,u_0}$ has simple poles at $u_0 \pm \mathrm{i} \beta$ with residues
$$ \operatorname{Res}(g_{\beta,u_0},u_0 \pm \mathrm{i} \beta) = \pm \frac{1}{2 \mathrm{i} \beta (\mathrm{e}^{u_0 \pm \mathrm{i} \beta} + 1)}$$
and at $\pm (2n+1) \pi \mathrm{i}$ (so only at the odd-integer multiples of $\pi \mathrm{i}$ !) with residues
$$ \operatorname{Res}(g_{\beta,u_0},\pm (2n+1) \pi \mathrm{i}) = - \frac{1}{[(2n+1) \pi \mathrm{i} \mp u_0]^2 + \beta^2}$$
for $n \in \mathbb{N}_0$ . The integrals of $g_{\beta,u_0}$ along semi-circles (avoiding the poles on the imaginary axis) vanish in the limit of large radii, so we can use the residue theorem to evaluate the integral.
Closing the contour in the upper half-plane (the lower half-plane works just as well) yields
$$ f(\beta,u_0) = 2 \beta \, 2 \pi \mathrm{i} \left[\operatorname{Res}(g_{\beta,u_0},u_0 + \mathrm{i} \beta) + \sum \limits_{n=0}^\infty \operatorname{Res}(g_{\beta,u_0},(2n+1) \pi \mathrm{i}) \right] \, .$$
Now we plug in the values of the residues, perform a partial fraction decomposition in the infinite sum and use the series formula for the digamma function $\psi$ to find
$$ f(\beta,u_0) = 2 \pi \left[\frac{1}{\mathrm{e}^{u_0 + \mathrm{i} \beta} + 1} - \frac{1}{2 \pi \mathrm{i}} \left(\psi \left(\frac{1}{2} + \frac{\beta + \mathrm{i} u_0}{2 \pi}\right) - \psi \left(\frac{1}{2} + \frac{-\beta + \mathrm{i} u_0}{2 \pi}\right) \right)\right] \, .$$
Finally, we apply the reflection formula to the second digamma function and simplify the result:
\begin{align}
f(\beta,u_0) &= 2 \pi \left[\frac{1}{\mathrm{e}^{u_0 + \mathrm{i} \beta} + 1} - \frac{1}{2 \mathrm{i}} \tan\left(\frac{\beta - \mathrm{i} u_0}{2}\right)\right. \\
&\phantom{222222}- \left.\frac{1}{2 \pi \mathrm{i}} \left(\psi \left(\frac{1}{2} + \frac{\beta + \mathrm{i} u_0}{2 \pi}\right) - \psi \left(\frac{1}{2} + \frac{\beta - \mathrm{i} u_0}{2 \pi}\right) \right)\right] \\
&= 2 \pi \left[\frac{1}{2} - \frac{1}{\pi} \operatorname{Im} \left(\psi \left(\frac{1}{2} + \frac{\beta + \mathrm{i} u_0}{2 \pi}\right) \right) \right] = \pi - 2 \operatorname{Im} \left[\psi \left(\frac{1}{2} + \frac{\beta + \mathrm{i} u_0}{2 \pi}\right) \right] \, .
\end{align}
Returning to the original parameters, we end up with
$$ \int \limits_{-\infty}^\infty \frac{1}{\mathrm{e}^{(x - \mu)/T} + 1} \frac{\gamma}{(x-x_0)^2 + \gamma^2/4} \, \mathrm{d} x = \pi - 2 \operatorname{Im} \left[\psi \left(\frac{1}{2} + \frac{\frac{\gamma}{2} + \mathrm{i} (x_0 - \mu)}{2 \pi T}\right) \right] \, . $$
Note that in the special case $x_0 = \mu$ the result is simply $\pi$. This can also be shown using elementary methods, so the implicit assumption $u_0 + \mathrm{i} \beta \not\in (2 \mathbb{Z} + 1) \pi \mathrm{i}$ used in the computation of the residues is justified. |
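The closed form can also be checked numerically, e.g. with mpmath; the sketch below uses assumed sample values $\mu=0$, $T=1$, $x_0=2$, $\gamma=3$.

```python
import mpmath as mp

mu, T, x0, gamma = 0, 1, 2, 3      # assumed sample parameter values

f = lambda x: gamma / ((mp.exp((x - mu) / T) + 1) * ((x - x0)**2 + gamma**2 / 4))
numeric = mp.quad(f, [-mp.inf, mp.inf])

arg = mp.mpf(1) / 2 + (gamma / 2 + 1j * (x0 - mu)) / (2 * mp.pi * T)
closed = mp.pi - 2 * mp.im(mp.digamma(arg))
print(numeric, closed)             # the two values should agree
```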
Maximum flow in a network proof | I'll reformulate the statement a bit to avoid ambiguity (although it is possible that the ambiguity is only in my head due to my poor knowledge of English): in any network $G=(V, E)$ there exists a maximum flow $f$ that can be found by a sequence of at most $|E|$ augmenting paths.
One way to prove that is by considering a maximum flow $f$ that has the minimal possible value of $\sum_{u,v \in V} |f(u, v)|$. Such a flow exists, because the space of all max flows is compact and the function $f \to \sum_{u,v \in V} |f(u, v)|$ is continuous.
Now consider a new graph $G'=(V, E')$ where $E'=\{(u,v)|f(u, v)>0\}$. It is easy to prove that since $\sum_{u,v \in V} |f(u, v)|$ is minimal, this graph $G'$ is acyclic. Now it is quite easy to see that flow $f$ can be found by a sequence of at most $|E'|$ augmenting paths whose edges belong to $E'$.
NOTE: about that ambiguity that I mentioned earlier. If the statement actually means that any maximum flow can be found by a sequence of at most $|E|$ augmenting paths, then it is simply not true. To build a counterexample one can just take any network and attach to it a new cycle (somewhere on the side) with huge edge capacities, and send a huge flow along that cycle. To build such a flow one would need to augment the flow a huge number of times. |
Schwartz Reflection over analytic curve | Rough sketch of the idea: Let $g$ be some biholomorphic function on an open set that restricted to $\mathbb R$ parameterizes a part of a curve. Let $f$ be defined on an open neighborhood of a real segment, intersected with the closed upper half plane. Suppose that $f$ maps the real segment into the curve parameterized by $g$ and that $g^{-1} \circ f$ satisfies the conditions for Schwarz’ reflection principle. (Note that it maps reals to reals.) Then $g^{-1} \circ f$ extends holomorphically over the real line. Apply $g$ again to get an extension of $f$, whose image extends over (is “reflected in”) the curve. |
Prove that $\iint_{\left[ 0,1 \right] \times \left[ 0,1 \right]}{\frac{f\left( x \right)}{f\left( y \right)}\text{d}x\text{d}y\ge 1}$ | Suppose $\int_{0}^{1} f$ and $\int_{0}^{1} \frac{1}{f}$ exist and let $I$ be the given integral.
Observe that the given integral can be written as
$$ I =
\left(\int_{0}^{1} f(x) \, dx \right)\left(\int_0^1\frac{1}{f(x)} \, dx\right)
$$
Then by the Cauchy–Schwarz inequality for integrals we have
$$
I \geq \left(\int_0^1 1\, dx\right)^2 = 1
$$ |
Taylor series leads to two different functions - why? | When you integrate, you should include a constant of integration. What you see here is that when integrating the functions, you get different constants of integration. This is why your answers differ by only a constant, namely $a\ln a$ (you can see this by use of $\log$ rules).
If you take care with the limits or boundary conditions in the integration step, then the answers will agree exactly. |
Is my understanding of weights and weight spaces of subalgebras of $\mathfrak{gl}(V)$ correct? | Yes, that is exactly correct. Each $V_\lambda$ is a simultaneous eigenspace for every linear map in $M$. $\lambda$ is the function that assigns to every linear map in $M$ the eigenvalue associated to its action on vectors in $V_\lambda$.
This example may be helpful: Take $V = \mathbb C^3$ and take
$$M = \left\{ \left( \begin{array}{ccc} a & 0 & 0 \\ 0 & b & 0 \\ 0 & 0 & c\end{array} \right)\in \mathfrak{gl}(V) \mid a+ b+ c = 0 \right\} .$$
(This is actually a Cartan subalgebra of $\mathfrak{sl}_3 \mathbb C$, by the way.)
The three simultaneous eigenspaces are
$$ V_{\lambda_1} = \mathbb C \left( \begin{array}{c} 1 \\ 0 \\ 0 \end{array} \right) , \ \ \ V_{\lambda_2} = \mathbb C \left( \begin{array}{c} 0 \\ 1 \\ 0 \end{array} \right), V_{\lambda_3} = \mathbb C \left( \begin{array}{c} 0 \\ 0 \\ 1 \end{array} \right), $$
and the weights, which are functions assigning each matrix in $M$ to its eigenvalue when acting on the respective space, are
$$ \lambda_1 : \left( \begin{array}{ccc} a & 0 & 0 \\ 0 & b & 0 \\ 0 & 0 & c\end{array} \right) \mapsto a,$$
$$ \lambda_2 : \left( \begin{array}{ccc} a & 0 & 0 \\ 0 & b & 0 \\ 0 & 0 & c\end{array} \right) \mapsto b,$$
$$ \lambda_3 : \left( \begin{array}{ccc} a & 0 & 0 \\ 0 & b & 0 \\ 0 & 0 & c\end{array} \right) \mapsto c,$$
Note that in this example, $a + b+ c = 0$ for all matrices in $M$. Therefore,
$$ \lambda_1 + \lambda_2 + \lambda_3 = 0.$$
so only two of the three weights are linearly independent. This "dependency between weights" is quite a common feature of weights of Lie algebras. |
Let $M$ be a noetherian $R$-module and $I=\mathrm{Ann}_R(M)$. Then $R/I$ is a noetherian ring for all ring $R$? | It is not true that $R/\text{ann}_R(M)$ must be right Noetherian if $M$ is a Noetherian right $R$-module. Any right primitive ring that is not right Noetherian is a counterexample: take $M$ to be a faithful simple module (which is certainly Noetherian). Examples of such rings include endomorphism rings of infinite dimensional vector spaces, free algebras on more than one generator, and group algebras of free groups on more than one generator. (See, e.g., ``The Algebraic Structure of Group Rings'' by Passman, Corollary 9.2.11 for the group algebra case.)
If you want a noncommutative result, you have to assume $M$ is a bimodule. For example, if $M$ is an $R$-$R$-bimodule that is Noetherian as both a left and a right $R$-module, then $R/\text{ann}_R(M)$ is right (or left) Noetherian, where the annihilator is the right (or left) annihilator.
To see this, suppose $m_1,\dots,m_n$ generate $M$ as a left $R$-module and define a right $R$-module map $\phi:R\to M^n$ by $\phi(r)=(m_1r,\dots,m_nr)$. Because $m_1,\dots,m_n$ generate $M$ as a left $R$-module, the kernel of $\phi$ is $\text{ann}_R(M)$, where the annihilator is the right annihilator of $M$. Thus $R/\text{ann}_R(M)$ embeds in $M^n$ as a right module and so is right Noetherian.
We can weaken the hypotheses. If we assume $M$ is an $S$-$R$-bimodule for some ring $S$ and assume that ${}_SM$ is finitely generated and $M_R$ is Noetherian, then the same proof shows $R/\text{ann}_R(M)$ is right Noetherian. |
Is finding the basis of the image the same thing as finding the image? | Well, if $T:V \to W$ is a linear map, then $\mathrm{Im}(T)$ is a subspace of $W$ and $\ker(T)$ is a subspace of $V$. In particular, they are closed under the vector space operations. Hence, if you find a basis for the corresponding subspaces, you have succeeded in fully describing them.
On the other hand, I think a generating set would do the job just fine. As in, if
$\ker T=\{a_1v_1+\dots +a_nv_n \mid a_i \in \mathbb{F}\}$ for some fixed collection $\{v_1,\dots,v_n\}\subset V$, then you have also described the kernel, although this description is suboptimal. |
Show that $f'(a)=\frac{1}{2\pi}\int_0^{2\pi}\mathrm{e}^{-\mathrm{i}\theta}f(a+\mathrm{e}^{\mathrm{i}\theta})\mathrm{d}\theta$ | Use the general version of Cauchy Integral Formula:
$$f^{(n)}(a) = \frac{1}{2 \pi i} \int_C \frac{f(z)}{(z-a)^{n+1}} dz$$
For the first derivative you get exactly $(z-a)^2$ in the denominator.
By the way, the problem is not true as stated. Since you need to integrate over a circle of radius $1$ around $a$, you need your function to be analytic on a closed disk of radius at least $1$; if the function is only analytic on a small neighbourhood of $a$, the claim fails. Note that, as given, the problem doesn't guarantee that $f(a+e^{it})$ even makes sense. |
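When the analyticity assumption does hold, the identity is easy to confirm numerically; the sketch below uses the entire function $f=\exp$ and $a=0$, so the unit circle poses no problem.

```python
import numpy as np

a = 0.0
f = np.exp                        # an entire function, so analyticity is not an issue

theta = np.linspace(0, 2 * np.pi, 100_000, endpoint=False)
value = np.mean(np.exp(-1j * theta) * f(a + np.exp(1j * theta)))
print(value)                      # should be close to f'(a) = e^a = 1
```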
Does a bijection and a homomorphism imply isomorphism? | It's not true. The existence of a homomorphism shows nothing, since there is always the trivial homomorphism. For instance there is a bijection $\mathbb Z \to \mathbb Q$, and also a homomorphism $\mathbb Z \to \mathbb Q$ (one for each rational number $q$, namely $1\mapsto q$), but they are not isomorphic. |
Determining whether a sequence of functions converges uniformly | You're almost done. Since
$$f_n\left(\frac1{n+1}\right)=n\left(1+\frac1n\right)^{-(n+1)}\sim_\infty e^{-1}n\xrightarrow{n\to\infty}\infty\ne0$$
then the sequence doesn't converge uniformly on $[0,1]$.
Notice that also by your work you can prove easily that we have the uniform convergence on every interval $[a,1]$ for $0<a<1$. |
Can there be an infinite $\kappa$ with $\kappa\to(\omega)^\omega_2$ in $\mathsf{ZF}$? | Yes, it is consistent to have such cardinals. In fact, it is consistent relative to an inaccessible cardinal that $\omega\to(\omega)^\omega_2$. This is a famous result of Mathias, in
MR0491197 (58 #10462). Mathias, A. R. D. Happy families. Ann. Math. Logic 12 (1977), no. 1, 59–111.
(It is still open whether the inaccessible cardinal is required.)
The result holds in Solovay's model where all sets of reals are Lebesgue measurable. It also holds under the assumption of $\mathsf{AD}^+$, in particular, in all known models of $\mathsf{AD}$.
$\mathsf{AD}^+$ is a technical strengthening of the axiom of determinacy $\mathsf{AD}$, and it is open whether both theories are actually the same. All our techniques to obtain models of determinacy give us models of $\mathsf{AD}^+$. It is open whether $\mathsf{AD}$ suffices.
Actually, $\mathsf{AD}$ gives us many additional examples of cardinals $\kappa$ with the required partition property, and much more. For instance, determinacy implies that $\omega_1\to(\omega_1)^{\omega_1}_2$. This is the strong partition property. A good deal of infinitary combinatorics under $\mathsf{AD}$ is about establishing such partition properties at various cardinals. The consistency strength of such an uncountable (well-orderable) cardinal is higher than for $\omega\to(\omega)^\omega_2$, as the assumption readily gives us that $\omega_1$ and $\omega_2$ are measurable cardinals which, in turn, gives us inner models of $\mathsf{ZFC}$ with two measurable cardinals (and more).
I suggest Jech's set theory book or Kanamori's The higher infinite to learn more about the subject. There is a very nice short book by Kleinberg devoted solely to consequences of these partition properties,
MR0479903 (58 #109). Kleinberg, Eugene M. Infinitary combinatorics and the axiom of determinateness. Lecture Notes in Mathematics, Vol. 612. Springer-Verlag, Berlin-New York, 1977. iii+150 pp. ISBN: 3-540-08440-1. |
Simpson rule or trapezoidal rule | For Simpson's rule, $S(f)$, there exists a point $\xi \in [0,1]$ such that the error
$$
E(S(f),n) = \left|\int_0^1 f(x) dx - S(f) \right| = \frac{1}{180 n^4} |f^{(4)}(\xi)| \leq \frac{e}{180 n^4}.
$$
For the trapezoidal rule, T(f), the error is
$$
E(T(f),n) = \left|\int_0^1 f(x) dx - T(f) \right| = \frac{1}{12 n^2} |f''(\xi)| \geq \frac{1}{12 n^2}.
$$
We thus have
$$
E(S(e^x),5) \leq 3e-05
$$
and
$$
E(T(e^x),10) \geq 8e-04,
$$
hence we can be sure that Simpson's rule will be the better one. In fact, I got $E(S(e^x),5) = 7.3415e-06$ and $E(T(e^x),10) = 0.0012$, so the difference is somewhat larger than the theoretical bounds predict. |
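A sketch comparing both composite rules on $\int_0^1 e^x\,dx$ in Python (here $n$ is taken to be the number of subintervals, which is one possible reading of the error formulas; the exact errors depend on that convention):

```python
import numpy as np

def trapezoid_rule(f, a, b, n):
    x = np.linspace(a, b, n + 1)
    y = f(x)
    return (b - a) / n * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

def simpson_rule(f, a, b, n):          # n must be even
    x = np.linspace(a, b, n + 1)
    y = f(x)
    return (b - a) / (3 * n) * (y[0] + 4 * y[1:-1:2].sum() + 2 * y[2:-1:2].sum() + y[-1])

exact = np.e - 1
print(abs(simpson_rule(np.exp, 0, 1, 10) - exact))    # Simpson error
print(abs(trapezoid_rule(np.exp, 0, 1, 10) - exact))  # trapezoid error, much larger
```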
What are all conditions on a finite sequence $x_1,x_2,...,x_m$ such that it is the sequence of orders of elements of a group? | I'm afraid that your question is very broad, and I doubt that it is answerable in full.
Let's start here.
For $n\in \mathbb{N}$, denote by $\pi(n)$ the set of prime divisors of $n$.
Definition. The prime graph of a finite group $G$, denoted $\Gamma(G)$, is a graph with vertex set $\pi(|G|)$ with an edge between primes $p$ and $q$ if and only if there is an element of order $pq$ in $G$.
Your question requires, among other things, knowing how many elements of order $pq$ there are in the group for every two primes $p,q$ dividing $|G|$, and in particular whether that number is nonzero. So answering your question would also tell us the answer to, "What are all the possible prime graphs of a finite group?" (In fact, it would be but a small corollary.) However, prime graphs are the subject of ongoing research, and there are still many unsolved problems in the world of prime graphs (most of which are easier than a full characterization).
We can perhaps reformulate your question into this language, though. Graphs can be generalized to hypergraphs, which are comprised of a vertex set $V$ and an edge set $E\subseteq \mathcal{P}(V)$ (i.e. the restriction that edges must connect at most two vertices in $V$ is removed).
For $n\in \mathbb{N}$, denote by $\overline{\pi}(n)$ the set of prime power divisors of $n$.
Definition. Define the weighted prime power hypergraph of a finite group $G$ as the hypergraph $\Gamma_H(G)$ with vertex set $\overline{\pi}(n)$ with a hyperedge $S$ if and only if the elements of $S$ are pairwise coprime and there exists an element of order $\prod_{s\in S}s$ in $G$. Furthermore, let each edge $S$ be given the weight $w_S$, which is equal to the number of elements of order $\prod_{s\in S}s$ in $G$.
Note that $\sum_{S\in E}w_S=|G|$. Naturally we can additionally define a strict total order on the edgeset of $\Gamma_H(G)$ where $S<T\Leftrightarrow \prod_{s\in S}s<\prod_{t\in T}t$. Sorting the weights by this order is equivalent to your $x_k$ sequence.
Thus your question is precisely "what are all possible ordered weight sequences of the prime power hypergraph of a finite group?" which is clearly a very complicated question.
The only thing I can really say about it is the following. For solvable groups $G$, it has been proven that $|\pi(|G|)|\leq 4\operatorname{rank}(\Gamma_H(G))$ asymptotically. Additionally, it is conjectured that $|\pi(|G|)|\leq 3\operatorname{rank}(\Gamma_H(G))$ for all solvable groups. So, for solvable groups, you will always have to count the weights of edges whose size is up to at least one third the size of the vertex set.
To move forward, I would suggest you try to substantially narrow your question. You could restrict your question to a certain class of groups ($p$-groups? abelian groups? symmetric groups?). Furthermore I think it would be more likely to find an answer if you removed the ordering condition on $x_k$, as it adds a level of difficulty to the problem that I suspect outweighs any possible insight it would afford.
For further reading I'd recommend my answer to this similar question concerning sets of element orders, a related notion to yours. |
Circle and Rectangle: Solving Systems of Nonlinear Equation | By the formula $2(a+b) = 28$, $a+b=14$, where $a$ and $b$ are the sides of the rectangle. Since $a,b$ have to be positive, we can conclude that the maximum length of one side is smaller than $14$.
This length is smaller than $20$, the diameter of the circle, so any rectangle with this perimeter can possibly be cut from the sheet.
As for translating the rectangle, we just have to translate the rectangle up or down until it touches the circle. Since there is a point where the rectangle passes from being fully inside to partially outside, at one point the rectangle must touch the circle exactly.
Let there be a rectangle that is parallel to the $x$-axis, and a circle centred at $(0,0)$. Then, we can find a chord, and thus a rectangle, that intersects the circle at two points (internally).
By some observation, the lines at the top and bottom of the rectangle have equations $y = \frac{b}{2}$ and $y = -\frac{b}{2}$, where $b$ is the length of the chord, and also the top/bottom side of the rectangle.
Since you already have the $y$-coordinates (because they lie on those lines), we can substitute $y=\frac{b}{2}$ into the circle equation $x^2+y^2=10^2$ and solve for $x$. This should give you the coordinates of the points you need.
Reduction modulo homomorphism from Z_m to Z_n | Consider such a map $\phi:\mathbb Z/m\mathbb Z\to\mathbb Z/n\mathbb Z$; we claim that $\phi$ is a homomorphism if and only if $n\mid m$.
If: Suppose that $n\mid m$. We have $$x\equiv \phi(x)\pmod{m}\iff m\mid x-\phi(x)\implies n\mid x-\phi(x)\iff x\equiv \phi(x)\pmod{n}$$ Implying that $\phi(x+y)\equiv x+y\equiv \phi(x)+\phi(y)\pmod{n}$ $\Box$
Only if: Such a homomorphism $\phi$ is clearly surjective, since $$\phi(k)\equiv k\cdot\phi(1)\equiv k\pmod{n}$$ It follows from the First Isomorphism Theorem that $\mathbb Z_m/\text{ker}(\phi)\cong \text{im}(\phi)=\mathbb Z_n$, and in particular that $n=|\mathbb Z_n|$ divides $|\mathbb Z_m|=m$ $\Box$
Having trouble understanding the solution for the 10x10 board problem | It's true for any integer matrix $M=(m_{ij})_{m \times n}$ without repeated numbers. The sum of the entries of $M$ is at most $$nm\max(M)-\sum_{k=1}^{nm-1} k.$$ In the particular matrix in the earlier question, it was an $8 \times (i+1)$ matrix with maximum $\mathbf{x}_{i7}$.
Let's use this as a running example: $$M=\begin{array}{|ccccc|} \hline 12 & 30 & 36 & 7 & 34 \\ 24 & 38 & 25 & 6 & 28 \\ 20 & 32 & -5 & 37 & 0 \\ \hline \end{array}.$$
Let $X=(x_{ij})_{m \times n}$ be the matrix defined by $x_{ij}=\max(M)$.
So, in our example $\max(M)=38$ and so $$X=\begin{array}{|ccccc|} \hline 38 & 38 & 38 & 38 & 38 \\ 38 & 38 & 38 & 38 & 38 \\ 38 & 38 & 38 & 38 & 38 \\ \hline \end{array}.$$
Let $D=(d_{ij})_{m \times n}$ be defined by $D=X-M$.
So, in our example $$D=\begin{array}{|ccccc|} \hline 26 & 8 & 2 & 31 & 4 \\ 14 & 0 & 13 & 32 & 10 \\ 18 & 6 & 43 & 1 & 38 \\ \hline \end{array}.$$
We have defined $X$ and $D$ such that $M=X-D$. So the sum of entries in $M$ is the sum of entries in $X$ minus the sum of entries in $D$.
By definition, the sum of entries in $X$ is $nm\max(M)$.
The matrix $D$ contains $0$ and $nm-1$ distinct positive integers. The smallest possible sum of $nm-1$ distinct positive integers is $\sum_{k=1}^{nm-1} k$ (which are the $nm-1$ smallest positive integers). Hence the sum of entries in $M$ is at most $$nm\max(M)-\sum_{k=1}^{nm-1} k.$$ (We could make this last step formal by creating a sorted list from the entries of $D$ and showing each element is no smaller than the corresponding element in the sequence $(0,1,\ldots,nm-1)$.) |
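For the example above, a few lines of Python confirm the bound (the matrix and variable names below are just for this illustration):

```python
M = [[12, 30, 36,  7, 34],
     [24, 38, 25,  6, 28],
     [20, 32, -5, 37,  0]]

entries = [x for row in M for x in row]
nm = len(entries)                           # n*m = 15 entries
bound = nm * max(entries) - sum(range(nm))  # 15*38 - (0+1+...+14) = 465

print(sum(entries), bound)                  # 324 <= 465
assert sum(entries) <= bound
```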
PDE for a damped oscillator | $$u_t + uu_x = -\gamma u $$
Charpit-Lagrange characteristic ODEs :
$$\frac{dt}{1}=\frac{dx}{u}=\frac{du}{-\gamma u}$$
A first characteristic equation comes from solving $\frac{dx}{u}=\frac{du}{-\gamma u}$ :
$$u+\gamma x=c_1$$
A second characteristic equation comes from solving $\frac{dt}{1}=\frac{dx}{u}=\frac{dx}{c_1-\gamma x}$ :
$$\gamma x+e^{-\gamma t}=c_2$$
The general solution of the PDE on implicit form $c_1=F(c_2)$ is :
$$\boxed{u=\gamma x+F(\gamma x+e^{-\gamma t})}$$
$F$ is an arbitrary function (to be determined according to the initial condition).
Condition :
$$u(x,0)=f(x)=\gamma x+F(\gamma x+1)$$
$$F(\gamma x+1)=f(x)-\gamma x$$
Let $\gamma x+1=X\quad\implies\quad x=\frac{X-1}{\gamma}$
$$F(X)=f\left(\frac{X-1}{\gamma}\right)-X+1$$
Now the function $F(X)$ is known. We put it into the above general solution where $X=\gamma x+e^{-\gamma t}$
$$u=\gamma x+f\left(\frac{(\gamma x+e^{-\gamma t})-1}{\gamma}\right)-(\gamma x+e^{-\gamma t})+1$$
$$\boxed{u(x,t)=f\left(x+\frac{e^{-\gamma t}-1}{\gamma}\right)+1-e^{-\gamma t}}$$ |
Am I missing something with row reduction to find the determinant of a matrix? | We have that
$$\begin{vmatrix}4&-4&2&1\\1&2&0&3\\2&0&3&4\\0&3&2&1\end{vmatrix} = \frac18\begin{vmatrix}4&-4&2&1\\0&12&-2&11\\0&4&4&7\\0&3&2&1\end{vmatrix} $$
and then we can use Laplace on the first row or proceed further to obtain
$$=\frac18\frac1{12}\begin{vmatrix}4&-4&2&1\\0&12&-2&11\\0&0&14&10\\0&0&10&-7\end{vmatrix}=\frac18\frac1{12}\frac1{14}\begin{vmatrix}4&-4&2&1\\0&12&-2&11\\0&0&14&10\\0&0&0&-198\end{vmatrix}=-99$$
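A quick numerical cross-check of the result, for instance with NumPy:

```python
import numpy as np

A = np.array([[4, -4, 2, 1],
              [1,  2, 0, 3],
              [2,  0, 3, 4],
              [0,  3, 2, 1]], dtype=float)

print(np.linalg.det(A))   # approximately -99.0
```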
Lie algebra of nilpotent Lie group | No. If the Lie algebra (i.e. the first construction) is denoted $\mathfrak{g}$, then the second one will be isomorphic to the associated Carnot-graded Lie algebra $\mathrm{Car}(\mathfrak{g})$.
Since there exist finite-dimensional nilpotent real Lie algebras (in dimension $\ge 5$) $\mathfrak{g}$ that are not isomorphic to their associated Carnot-graded Lie algebra, and since any finite-dimensional nilpotent real Lie algebra is isomorphic to a subalgebra of $\mathfrak{ut}(n,\mathbf{R})$ for some $n$, you get a negative answer to your question (at least for $n$ large enough, probably for all $n\ge 4$ actually). |
What is the maximal size of an equal-distance set in $\mathbb{R}^n$? | Yes it's true, and the proof is by induction on the cardinality $k = |A|$.
By scaling, we can reduce to the case that the equal-distance is $1$.
Now one proves a stronger statement by induction, namely that $k \le n+1$ and there is an isometry $\mathbb{R}^n \to \mathbb{R}^n$ that takes $A$ to an equilateral $k-1$ simplex of side length $1$ contained in $\mathbb{R}^{k-1}$. This is obviously true for $k=1$.
Assuming it is true for numbers $<k$, let $A'$ be obtained by removing one point of $A$, and so there is a similarity taking $A'$ to the vertex set of an equilateral $k-2$ simplex $\sigma$ of side length $1$ in $\mathbb{R}^{k-2}$. So we may assume that $A'$ is the vertex set of this simplex. Now take the set of unit radius spheres around the points of $A'$ and intersect them. That set of intersections is a sphere of some dimension whose diameter is strictly less than $1$. Therefore the set $A$ can contain at most one point on that sphere. And any single point on that sphere, union $A'$, is the vertex set of an equilateral $k-1$ simplex, which we can rotate to be in $\mathbb{R}^{k-1}$ by a rotation that fixes $\mathbb{R}^{k-2}$.
Root of a polynomial in (0,1) | Let $L\left(x\right):=q\cdot f_{K}\left(x\right)-f_{K}\left(1-x\right)$.
We have $L\left(0\right)=-1<0$ and $L\left(1\right)=q>0$, so by
continuity, $L\left(x\right)=0$ has at least one solution (existence).
Towards uniqueness, you have shown that there exists a unique $x$
such that $f_{K}\left(x\right)-f_{K}\left(1-x\right)=0$, so we write
$L\left(x\right)=\left(q-1\right)\cdot f_{K}\left(x\right)$. Show
that it has a unique maximum $x^{*}$ by taking the derivative with respect to $x$.
Show then that $L\left(x^{*}\right)>0.$ Since $L\left(0\right)<0$,
$L\left(x^{*}\right)>0$ and $L\left(\cdot\right)$ is strictly increasing
in $\left[0,x^{*}\right]$ we have that it has a unique solution in
that interval. |
Another question of finding eigenvalues with parameters | According to Maple, the characteristic polynomial of $A$ is
$$\begin{align}
x^3 &{}+(-c-a-t-b-s)x^2 \\&{}+ (-a^2+2as+st+at+sc+tc+ab+bt)x \\&{}+ (tac-asc-2ast+scb-abt+a^2t-tsc-tcb),\end{align}$$ and it does not factor. This means the problem is pretty hopeless. The constant term is also clearly not $-\det(A')$, so either you mistyped the matrix $A$ in the first place, or you made errors in the row operations leading to $A'$ (but which would not lead to solving this problem anyway if done correctly, just to get the right constant term of the characteristic polynomial).
Added. With the now modified matrix, the simplest thing is to conjugate by $I_3-E_{1,2}-E_{1,3}$ (with $E_{i,j}$ an elementary matrix; conjugation means left-multiply by this matrix and right-multiply by the inverse $I_3+E_{1,2}+E_{1,3}$); this transforms $A$ into the similar matrix
$$
A'=\begin{pmatrix}a+b+c&b&c\\0&s&0\\0&0&t\end{pmatrix},
$$
which being triangular clearly has eigenvalues $a+b+c$, $s$, and $t$, and so does $A$.
A question of convergence of a series closely related to two sequences of prime numbers | All primes $\ge 3$ are of the form $6k \pm 1$ and Dirichlet theorem of primes in arithmetic progression says that asymptotic density of primes of the form $6k+1$ and $6k-1$ or $6k-5$ is equal. Hence
$$
\pi(x) = 2 + \pi_{6k+1}(x) + \pi_{6k-1}(x) \approx 2\pi_{6k+1}(x) \approx 2\pi_{6k-5}(x)
$$
Hence by the prime number theorem $q_n \approx r_n \approx 2n\log n$. This implies
$$
|q_n - r_n| < k_1 n\log n
$$
for some positive constant $k_1$. Hence
$$
\sum_{n \le x} \frac{1}{|q_n - r_n|} > \sum_{n \le x} \frac{k_2}{n\log n} > \sum_{n \le x} \frac{k_2}{p_n} > k_2\log\log x
$$
where $p_n$ is the $n$-th prime and $k_2$ is some positive constant. This is clearly divergent. |
Rational polynomial of degree n with n-2 real roots, 2 complex roots and Galois group not being $S_n$ | Sure: for instance, $f(T)=T^4-2$ works. Its splitting field is $\mathbb{Q}(\sqrt[4]{2},i)$ which has degree $8$ and thus the Galois group is not $S_4$. |
alternative rule for negation introduction | Together with the proof given in @Hailey's answer, it only remains to show that $\neg P\vee Q$ can be derived from $P\Rightarrow Q$. For this you need the law of the excluded middle.
For the proof we have the premise $P\Rightarrow Q$. Now suppose $P$ holds, then by modus ponens and the premise we have that $Q$ holds. From this $\neg P\vee Q$ holds by disjunction introduction. Therefore $P\Rightarrow\neg P\vee Q$. Now also $\neg P\Rightarrow\neg P$. Therefore $\neg P\Rightarrow\neg P\vee Q$ again by disjunction introduction. From these two implications $P\vee\neg P\Rightarrow\neg P\vee Q$ can be derived, which is a form of disjunction elimination which I leave out here. Lastly by the law of the excluded middle we can take $P\vee\neg P$ and together with modus ponens we have $\neg P\vee Q$, as required.
[Note: This has been voted up, however the fully correct answer is the one that begins 'The trick is...'.] |
Bounding the extrema of polynomials from $\frac{d^n}{dx^n} \exp(-1/x)$ | I thought of this after typing the question...
Let $p_n(x) = \sum_{k=0}^{n-1} p_{n,k} x^k$. We know $p_{n,0} = 1$ for all $n$. Also, $$p_{n+1,n} =-2n p_{n,n-1} + (n-1) p_{n,n-1} = -(n+1) p_{n,n-1} $$ More generally,
$$ p_{n+1,k} = p_{n,k}-2np_{n,k-1} + (k-1)p_{n,k-1}$$
(with the understanding that for $k>n-1$ or $k<0$, $p_{n,k}=0$.) Thus
$$ P_{n+1} := \sup_k |p_{n+1,k}| \le 2(n+1) P_n$$
This means that
$$P_{n}\le 2^{n} n! $$
so
$$\sup_{x\in [0,a]} |p_n(x)|\le2^n n!\sum_{k=0}^{n-1} a^k =\frac{2^n n!(a^{n}-1)}{a-1}$$
Now, recall that for $x>0$, $e^{x} > \frac{x^j}{j!}$. Rearranging,
$$ e^{-x} \le \frac{j!}{x^j}\implies e^{-1/x} \le x^j j! \implies \frac{ e^{-1/x}}{x^{2n} } \le (2n)! \le 4^n n!^2$$
so I'm left with the bound
$$ \sup_{x\in [0,a]} |f^{(n)}(x)| \le C_a^{n+1} n!^3$$
So the function is Gevrey of order 3. Don't know if I can improve it... |
What simple function $f(i)$ produces an evenly distributed pseudorandom output for $i \in [1, 2, ...)$? | If all you care about is aperiodicity and the uniform distribution of values of $f(i)$ (as opposed to the joint distribution of tuples of values like $(f(i),f(i+1))$) you should be content with the equidisribution theorem of Weyl. Examples like $$f(i)=\lfloor r (\pi i \bmod 1)\rfloor$$
or
$$f(i)=\lfloor r (\sqrt 2 i \bmod 1)\rfloor$$
do the trick.
Abandoning Weyl's theorem, you could use a recipe like $$f(i) = (i+k)\bmod r \text{ for $i$ in the range } r^{k-1}\le i < r^k,$$
which is to say
$$f(i) = (i+\lfloor \log_r (ri)\rfloor) \bmod r,$$
which can be computed by examining the base $r$ representation of $i$ and not evaluating the logarithm directly.
Or, for another example in this vein, if $i$ has base $r$ representation $i=\sum_k b_k r^k$ for $0\le b_k<r$, let $f(i)=\left(\sum_k b_k\right) \bmod r$. For instance, if $r=2$ this is the parity of the number of $1$s in the binary representation of $i$. These examples are especially easy to compute and not periodic in the precise sense of the word.
All of the above use the particular definition of "uniformly distributed" that is explained in the cited article. |
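As a rough illustration of the last recipe, here is a short Python sketch of the base-$r$ digit-sum function (the function name is made up for the example); empirically the $r$ output values occur in roughly equal proportions:

```python
from collections import Counter

def digit_sum_mod(i, r):
    """Sum of the base-r digits of i, reduced mod r."""
    s = 0
    while i > 0:
        s += i % r
        i //= r
    return s % r

r = 5
counts = Counter(digit_sum_mod(i, r) for i in range(1, 100001))
print(counts)   # each of the r = 5 values shows up roughly 20000 times
```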
How to apply the method least squares polynomial of single degree? | We can rewrite the regression equations slightly as
$$
z_0 c_0 + z_1 c_1 + z_2 c_2 = y_t - a
$$
then arrange them as linear system
$$
A x = b
$$
with
$$
A = (z_0, z_1, z_2) \\
x = (c_0, c_1, c_2)^t \\
b = (y_t-a)
$$
where the $z_i$ and $y_t-a$ are column vectors each, having $k$ components if your data list has $k$ lines of such data. $k \ge 3$ would be needed.
Then extend to
$$
A^t A x = A^t b
$$
then
$$
x = (A^t A)^{-1} A^t b \quad (*)
$$
is a least squares approximation, minimizing
$$
\| A x - b \|
$$
in the Euclidean norm.
Example Calculation
Entering some matrix $A$ and vector $b$ for four lines of data:
octave:1> A = [1, 2, 3; 2, 3, 1; 3, 3, 2; 2, 2, 3]
A =
1 2 3
2 3 1
3 3 2
2 2 3
octave:2> b = [1;2;2;1]
b =
1
2
2
1
Calculating the transposed matrix $A^t$:
octave:3> A'
ans =
1 2 3 2
2 3 3 2
3 1 2 3
Calculating the approximation $x$:
octave:4> x = inv(A'*A) * A' * b
x =
0.079755
0.674847
-0.153374
Checking the error vector and its length:
octave:5> e = A*x - b
e =
-0.030675
0.030675
-0.042945
0.049080
octave:6> el = norm(e)
el = 0.078326 |
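The same computation can be reproduced in Python with NumPy, either through the normal equations $(*)$ or with the built-in least-squares routine (a sketch for comparison with the Octave session above):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [2, 3, 1],
              [3, 3, 2],
              [2, 2, 3]], dtype=float)
b = np.array([1, 2, 2, 1], dtype=float)

x_normal = np.linalg.solve(A.T @ A, A.T @ b)      # normal equations (*)
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)   # built-in least squares

print(x_normal)                          # approx. [ 0.0798  0.6748 -0.1534]
print(np.linalg.norm(A @ x_lstsq - b))   # approx. 0.0783
```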
Convolution with Gaussian, without distribution theory, part 2 | As in the solution available here:
Let $f \in L^1\cap L^p$ at first. Then, by Young's inequality,
$$ \|u(t,\cdot)\|_{\infty} \le \|\Gamma(t,\cdot)\|_{p'} \|f\|_p $$
And
$$ \|u(t,\cdot)\|_{1} \le \|f\|_1 $$
By $L^p$-Interpolation, we have that
$$ \|u(t,\cdot)\|_p \le \|u(t,\cdot)\|_1^{1/p}\|u(t,\cdot)\|_{\infty}^{(p-1)/p}\le C(f,p)\|\Gamma(t,\cdot)\|_{p'} ^{(p-1)/p} $$
Where $p'^{-1} + p^{-1} = 1$. But
$$\|\Gamma(t,\cdot)\|_{r}^r = \frac{1}{(4\pi t)^{nr/2}}\int e^{\frac{-r|x|^2}{4t}}dx = \frac{1}{(4\pi t)^{nr/2}} c(n) \int_{0}^{\infty} s^{n-1}e^{-\frac{rs^2}{4t}}ds = \\ \frac{c(n)}{(4\pi t)^{nr/2}} \sqrt{\frac{4t}{r}}^n \int_0^{\infty} w^{n-1}e^{-w^2} dw $$
This shows that, for $r>1$, $\|\Gamma(t,\cdot)\|_r \rightarrow 0$ as $t \rightarrow \infty$. Thus, we have proved the Theorem for $f \in L^1 \cap L^p$.
For the general case, let $g \in L^1\cap L^p$ be such that $\|g-f\|_p \le \varepsilon$, and then
$$ \|u_f(t,\cdot)\|_p \le \|u_g(t,\cdot)-u_f(t,\cdot)\|_p + \|u_g(t,\cdot)\|_p \le \varepsilon + \varepsilon$$
if $t$ is big enough, where we have used once again that $\|u(t,\cdot)\|_p\le \|f\|_p$. So, the Theorem is proved.
how to find the character of the series: $\sum_{n=1}^{\infty}\left(\frac{n+1}{n}\right)^{-n^{3}}$ | Root test shows
$$\lim_{n\to\infty}\sqrt[n]{\left(\dfrac{n+1}{n}\right)^{-n^3}}=\lim_{n\to\infty}\left(1+\dfrac{1}{n}\right)^{-n^2}=e^{-\infty}=0<1$$
Joint probability function of dependent uniform and exponential random variables | Given the event $Y=y$, you have
$$
\operatorname{E}(XY^3 \mid Y=y) = \operatorname{E}(Xy^3\mid Y=y) = y^3 \operatorname{E}(X\mid Y=y) = y^3 \cdot y = y^4.
$$
So then you need the expected value of $Y^4$:
$$
\operatorname{E}(Y^4) = \int_0^2 y^4 \left( \frac 1 2 \, dy \right) = \cdots.
$$
This is an instance of the law of total expectation:
$$
\operatorname{E}(XY^3) = \operatorname{E}(\operatorname{E}(XY^3\mid Y)) = \operatorname{E}(Y^4).
$$ |
Variance of unbiased estimator | You can get the variance of $W$ from the variance of $Y$, and using the formulas that if $U$ and $V$ are independent random variables, and $\alpha$ is a constant, then $\text{Var}(U+V) = \text{Var}(U) + \text{Var}(V)$, and $\text{Var}(\alpha U) = \alpha^2 \text{Var}(U)$. |
Verify if the function $\, f\colon \Bbb Z \to \Bbb R $ defined by $\, f(n)=n^3-3n$ is injective | Combining the above discussions in the comments, I can see that for the first question, the given function is not injective.
Counter example: $(-1)^3-3(-1)=2=(2)^3-3(2)$, but $-1\neq 2$.
For the 2nd question, use the following hints:
hint 1 : use the fact that a polynomial of degree $\leq 3$ is reducible if and only if it has a root.
hint 2 : Just show that the polynomial has a root by using the intermediate value theorem; you don't need to know what that root actually is. |
Show that $\lim_{p \to \infty}||x||_p=||x||_M$ | Given $x = \big( x_1, x_2, \ldots, x_n \big)$, denote $\ell = \max\big\{ \vert x_1 \vert , \ldots, \vert x_n \vert \big\}$.
Case 1. If $\ell = 0$, the proof is trivial.
Case 2. If $\ell > 0$, then
$$\big\| x \big\|_p = \left( \displaystyle \sum_{i=1}^n|x_i|^p\right)^{\frac{1}{p}} \leq \big( n \ell^p \big)^{1/p} = n^{1/p} \cdot \ell \xrightarrow{p\to +\infty} \ell \,\, . $$
This implies that
$$\limsup_{p\to+\infty} \big\| x \big\|_p\leq \ell \,\, .$$
On the other side,
$$ \big\| x \big\|_p = \left( \displaystyle \sum_{i=1}^n|x_i|^p\right)^{\frac{1}{p}} \geq \ell \Longrightarrow \liminf_{p\to+\infty} \big\| x \big\|_p\geq\ell $$
Combining these two inequalities, we get the desired limit.
Q.E.D. |
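A quick numerical illustration of this limit (with an arbitrary example vector):

```python
import numpy as np

x = np.array([3.0, -7.0, 2.0, 5.0])
for p in (1, 2, 5, 20, 100):
    print(p, np.sum(np.abs(x) ** p) ** (1 / p))
print("max:", np.max(np.abs(x)))   # the p-norms approach 7 as p grows
```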
Prove Taylor series converges to $f$. | I'm almost sure what you need for convergence is:
$$|f^{(n)}(x)|<R^n$$
In such a case you would have the following:
$${R_n}\left( x \right) = \int\limits_a^x {\frac{{{{\left( {x - t} \right)}^n}}}{{n!}}{f^{\left( {n + 1} \right)}}\left( t \right)dt} $$
Set $$t = x + \left( {a - x} \right)u$$
$${R_n}\left( x \right) = \frac{{{{\left( {x - a} \right)}^{n + 1}}}}{{n!}}\int\limits_0^1 {{u^n}{f^{\left( {n + 1} \right)}}\left[ {x + \left( {a - x} \right)u} \right]du} $$
Then
$$\eqalign{
& 0 \leqslant \left| {{R_n}\left( x \right)} \right| \leqslant \frac{{{{\left| {x - a} \right|}^{n + 1}}}}{{n!}}{R^{n + 1}}\int\limits_0^1 {{u^n}du} \cr
& 0 \leqslant \left| {{R_n}\left( x \right)} \right| \leqslant \frac{{{{\left| {x - a} \right|}^{n + 1}}}}{{\left( {n + 1} \right)!}}{R^{n + 1}} \cr} $$
And for $n \to \infty$ we have that $|R_n(x)| \to 0$
Here's my take on your condition. If
$${f^{\left( {n + 1} \right)}}\left( x \right) \leqslant C\frac{{\left( {n + 1} \right)!}}{{{R^{n + 1}}}}$$
Then you'd have
$$\eqalign{
& 0 \leqslant \left| {{R_n}\left( x \right)} \right| \leqslant C \frac{{{{\left| {x - a} \right|}^{n + 1}}}}{{n!}}\frac{{\left( {n + 1} \right)!}}{{{R^{n + 1}}}}\int\limits_0^1 {{u^n}du} \cr
& 0 \leqslant \left| {{R_n}\left( x \right)} \right| \leqslant C{\left( {\frac{{\left| {x - a} \right|}}{R}} \right)^{n + 1}} \cr} $$
And the limit would be $0$ if $\left| {x - a} \right| < R$ |
Probability of exactly three of a kind in a roll of 5 dice | Assume the dice are distinguishable. Then there are $6^5$ possible outcomes since there are six possible outcomes for each of the five dice.
Three of a kind: There are $\binom{5}{3}$ ways for three of the five dice to display the same outcome and six possible outcomes those three dice could display. There are $\binom{5}{2}2!$ ways for the two remaining dice to display two of the five other possible values (as there are $\binom{5}{2}$ ways to select two of the remaining five values and $2!$ ways to arrange those values on the remaining two distinct dice), giving
$$\binom{5}{3}\binom{6}{1}\binom{5}{2}2! = 1200$$
favorable outcomes.
Thus, the probability that three of a kind is obtained is
$$\Pr(\text{three of a kind}) = \frac{1200}{6^5}$$
Edit: Evidently, we were also supposed to consider all cases in which exactly three of the dice show the same outcome, so we must add the results for a full house.
Full house: There are $\binom{5}{3}$ ways for three of the five dice to display the same outcome and six outcomes those dice could display. There are five possible outcomes the other two dice could both display. Hence, there are
$$\binom{5}{3}\binom{6}{1}\binom{5}{1} = 300$$
ways to obtain a full house. Thus, the probability of obtaining a full house is
$$\frac{300}{6^5}$$
Total: The probability that exactly three of the dice display the same number is found by adding the probabilities for three of a kind and a full house, which yields
$$\frac{1200}{6^5} + \frac{300}{6^5} = \frac{1500}{6^5}$$
as the given answer states.
Check: We know that the total number of outcomes is $6^5 = 7776$.
All different: There are $\binom{6}{5}$ ways of selecting five different outcomes and $5!$ arrangements of those outcomes on the dice. Thus, there are
$$\binom{6}{5}5! = 720$$
ways to obtain five different numbers.
One pair: There are $\binom{5}{2}$ ways for two of the dice to display the same outcome and six possible outcomes those two dice could display. There are $\binom{5}{3}3!$ ways for the three remaining dice to display three of the remaining five values. Hence, there are
$$\binom{5}{2}\binom{6}{1}\binom{5}{3}3! = 3600$$
ways to obtain a pair.
Two pairs: There are $\binom{6}{2}$ possible outcomes for the pairs. There are $\binom{5}{2}$ ways for two of the five dice to show the smaller of those outcomes and $\binom{3}{2}$ ways for two of the other three dice to show the larger of those outcomes. There are four possible outcomes for the remaining die. Hence, there are
$$\binom{6}{2}\binom{5}{2}\binom{3}{2}\binom{4}{1} = 1800$$
ways to obtain two pairs.
Three of a kind: We showed above that there are $1200$ ways to obtain three of a kind.
Full house: We showed above that there are $300$ ways to obtain a full house.
Four of a kind: There are $\binom{5}{4}$ ways for four of the five dice to display the same outcome and six outcomes those dice could display. There are five possible outcomes for the remaining die. Hence, there are
$$\binom{5}{4}\binom{6}{1}\binom{5}{1} = 150$$
ways to obtain four of a kind.
Five of a kind: All the dice must show the same outcome. There are six possible outcomes. Hence, there are $6$ ways to obtain five of a kind.
Total: The above cases are mutually exclusive and exhaustive. Observe that
$$720 + 3600 + 1800 + 1200 + 300 + 150 + 6 = 7776 = 6^5$$ |
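Since there are only $6^5 = 7776$ ordered outcomes, the counts above are easy to confirm by brute force, for example:

```python
from itertools import product
from collections import Counter

three_of_a_kind = full_house = 0
for roll in product(range(1, 7), repeat=5):
    counts = sorted(Counter(roll).values(), reverse=True)
    if counts == [3, 1, 1]:
        three_of_a_kind += 1
    elif counts == [3, 2]:
        full_house += 1

print(three_of_a_kind, full_house)            # 1200 300
print((three_of_a_kind + full_house) / 6**5)  # 1500/7776
```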
If $W$ is a Brownian motion, then how can I justify a statement about $\Delta W_{t_{i+1}^{n}}^{2}-\Delta t_{i+1}^{n}$ | For fixed $s \leq t$ set $X:=W_t-W_s$ and $u:= t-s$. Claim: It holds that $$\mathbb{E}[(X^2- u)^4] = M \cdot u^4 \tag{1}$$ for some constant $M$ not depending on $s$ and $t$.
Proof: Since $(W_t)_{t \geq 0}$ is a Brownian motion, the random variable $X$ is Gaussian with mean zero and variance $t-s$. In particular, $X = \sqrt{t-s} W_1$ in distribution. Thus,
$$\mathbb{E}[(X^2-u)^4] = \mathbb{E}[((t-s) W_1^2- (t-s))^4] = (t-s)^4 \underbrace{\mathbb{E}[(W_1^2-1)^4]}_{=:M<\infty}.$$
Applying $(1)$ it follows that
$$\mathbb{E}[(\Delta W_{t_{i+1}^n}^2-\Delta t_{i+1}^n)^4] = M (\Delta t_{i+1}^n)^{\color{red}{4}}.$$ |
Binomial coefficients bounded by entropy exponential | $$\sum_{k=\lceil n x \rceil}^{n}{n \choose k} = \sum_{k=\lceil nx \rceil}^{n}{n \choose n-k} = \sum_{j=0}^{\lfloor n \lambda \rfloor}{n \choose j} $$
where in the latter $ n - \lceil n x \rceil=\lfloor n(1-x)\rfloor = \lfloor n \lambda\rfloor $, with $\lambda = 1-x$, $0\le \lambda < 1/2$
Then see here. |
Finding pattern, grouping all possible pairs of first N natural number , with certain condition | This is equivalent to a round-robin tournament, for which there is a standard scheduling algorithm: Imagine the numbers sitting in opposite pairs at a long table; fix one of them (say, the $1$) and for each group rotate the others around the table. Thus, in group $k$ with $1\le k\lt N$, the $1$ is paired with $k+1$ and otherwise $(i,j)$ are paired if $i+j\equiv2k\bmod N-1$. |
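A short Python sketch of this scheduling rule (assuming an even number $N$ of items, so that each group is a perfect matching):

```python
def round_robin_groups(N):
    """Groups 1..N-1; each group pairs up all of 1..N (N assumed even)."""
    others = list(range(2, N + 1))            # everything except the fixed 1
    groups = []
    for _ in range(N - 1):
        row = [1] + others[:N // 2 - 1]
        col = list(reversed(others[N // 2 - 1:]))
        groups.append(list(zip(row, col)))
        others = [others[-1]] + others[:-1]   # rotate around the table
    return groups

for group in round_robin_groups(6):
    print(group)   # every pair from {1,...,6} appears in exactly one group
```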
If $a$ and $b$ are the roots of $z^2 - 2z + 4 = 0$ then what is $a^n + b^n + ab$ ($n$ is a natural number)? | HINT:
$$z=1\pm\sqrt3i=2\left(\cos\dfrac\pi3\pm i\sin\dfrac\pi3\right)$$
Using de Moivre's formula,
$$z^n=2^n\left(\cos\dfrac{n\pi}3\pm i\sin\dfrac{n\pi}3\right)$$ |
$C_n:=A_n\cap (A_1\cup\cdots\cup A_{n-1})^c$ pairwise disjoint? | Say $m<n$. Note that
$$ C_m \cap C_n= A_m \cap \left( \cap_{i=1}^{m-1}A_i^c\right) \cap A_n \cap \left( \cap_{i=1}^{n-1}A_i^c\right).$$
If $x\in C_m \cap C_n$, then $x\in A_m$ and $x\in \cap_{i=1}^{n-1}A_i^c$. But ($m<n$)
$$ \cap_{i=1}^{n-1}A_i^c \subseteq A_m^c.$$
Find k such that f(x) is increasing. | For $f$ to increase for all $x$ we need $f' \ge 0$ for all $x$.
$f' = 3x^2 + 2kx + 3 = ax^2 + bx + c$, we know this is an upward facing parabola ($a > 0$), and so if the vertex lies on or above the $x$-axis, we are okay.
The vertex has $x$-coordinate $-b/2a = -k/3$ and $y$-coordinate $f'(-k/3) = k^2/3 - 2k^2/3 + 3 = 3 - k^2/3$, which we require to be larger than or equal to zero.
This occurs if $|k| \le 3$. So it seems like your answer is correct? |
How to check if a sequence is random? | No, you can't. There are many tests for randomness: having roughly half the bits be zero, having the right number of strings of $0$'s of various lengths, having the right proportion of each eight bit chunk, etc. If you search for "randomness test" you can read about many of them. Generally they can prove a string is not random (at a certain confidence level), but not that it is random. For example, suppose I gave you the string created by XORing the strings of $e$ starting from the millionth bit and $\pi$ starting from the billionth. We would expect this string to pass all the statistical tests you might try, but it has a very simple rule behind it. Unless you recognize the string somehow, you are unlikely to figure out the rule.
Number of solutions of the function | $f$ is even, thus if $-2\le x\le 0$, $f(x)=2^{-x}-1$.
we have
$f (1)=f (-1)=2-1=1$
and
$$[-10,20]=\bigcup_{k=0}^6 [-10+4k,-10+4k+4] \cup [18,20].$$
In each interval of the form $[-10+4k,-6+4k]$, there are two roots, and there is one root in $[18,20]$,
thus
$$\text{Total} = 2\times 7+1=15 \text{ roots}$$
which are:
$$S=\{-9+2n : n=0,1,\dots,14\}$$
$$=\{-9,-7,-5,...,17,19\} $$ |
how many ways are there to distribute 10white and 10black balls into 20 distinct boxes so that at most one box is empty | There is one empty box, and therefore a box that contains two balls. The unlucky box can be chosen in $\binom{20}{1}$ ways, and for each choice the lucky box can be chosen in $\binom{19}{1}$ ways.
Suppose now that we have chosen the empty box and the lucky box. There are $3$ cases: (i) the lucky box contains two whites; (ii) the lucky box contains two blacks; (iii) the lucky box gets a black and a white.
For Case (i), we choose $8$ boxes from the remaining $18$ to get the remaining $8$ whites. This can be done in $\binom{18}{8}$ ways.
Obviously Case (ii) has the same number of possibilities as Case (i).
For Case (iii), we need to choose $9$ places from the remaining $18$ for the whites. |
What's the relationship between quadratics and convex functions | Not all quadratic functions are convex. For instance, $f(x)=-x^2$ is not convex. And not all convex functions are quadratic, like $f(x)=e^x$. |
Probability and choices of events in a finite sample space | I think you are conflating the definition of an event with the definition of an outcome. You are correct that there are only three outcomes in this sample space.
However, an event is a set of [possibly multiple] outcomes. The solution enumerates all possible sets of the three outcomes. |
If G is a group of 2 x 2 matrices under matrix multiplication and a,b,c,d are integers modulo 2 , ab - bc is not equal to 0 it's order is 6? | $$\det\begin{pmatrix}1&1\\0&1\end{pmatrix}^2 = \det\begin{pmatrix}1&0\\0&1\end{pmatrix} = 1 \ne 0$$ |
What is the set of all matrices satisfying $\mathfrak{so}(n)$ definition? | For two congruent matrices $S_1, S_2$, you get isomorphic Lie algebras. Since over an algebraically closed field (like $\mathbb C$), any two symmetric matrices of full rank are congruent, the choice here doesn't matter.
But if the base field is $\mathbb R$ as you write twice, symmetric matrices are up to congruence classified by their signature (Sylvester's Theorem), and the different definitions you have encountered give non-isomorphic Lie algebras: $S=I_n$ has signature $(n,0)$ and defines the compact form which is commonly denoted as the real Lie algebra $\mathfrak{so}_n$. On the other extreme, $S=\begin{pmatrix}
1&0&0\\
0&O&I_{k}\\
0&I_{k}&O
\end{pmatrix}$ for $n=2k+1$ (or $S=\begin{pmatrix}
O&I_{k}\\
I_{k}&O
\end{pmatrix}$ for $n=2k$) have signature $(k+1,k)$ (or $(k,k)$, respectively), and with them one defines the split real Lie algebras which are commonly denoted as something like $\mathfrak{so}_{k+1,k}$ (or $\mathfrak{so}_{k,k}$, respectively). For other signatures, there are more real Lie algebras, corresponding to the various indefinite orthogonal groups.
Note however that the complexifications of any of these, by what was said above for algebraically closed fields, become all isomorphic to each other. Maybe the confusion stems from some sources not carefully distinguishing between (non-isomorphic) real Lie algebras and their (isomorphic) complexification.
Added later: One should really think of these constructions not only in terms of matrices, but in terms of symmetric bilinear forms (equivalently, quadratic forms) which are, in one way or another, what defines these Lie algebras, or their corresponding Lie groups (as the transformations which leave that form invariant one way or another). That perspective makes it quite clear that congruence of matrices will give isomorphic Lie algebras, because it is nothing else but base change. Cf. also https://math.stackexchange.com/a/3489788/96384, "Step 2" in https://math.stackexchange.com/a/3708980/96384, and Let $gl_S(n,F) = \{x \in gl(n,F) : x^tS = -Sx \}$ and $T = P^tSP$. Show $gl_S(n,F) \cong gl_T(n,F)$.
Also, I would like to point out for future readers that in general, even non-congruent matrices can give rise to isomorphic Lie algebras: There is a "coarser" equivalence relation, namely "congruence up to scalars", which at least in many cases is actually equivalent to isomorphism of the corresponding Lie algebras. For example even over $\mathbb R$, the matrices $I_n$ and $-I_n$ are not congruent, but both give rise to the classical compact $\mathfrak{so}_n$: One can flip the number of $1$'s and $-1$'s in the signature, i.e. multiply $S$ with $-I_n$, and still get isomorphic Lie algebras. Cf. comments to $gl_S(n,F) \cong gl_T(n,F) \rightarrow T \cong S?$, and for very abstract context, https://math.stackexchange.com/a/3981832/96384. |
Euler method with infinite gradient at initial value | Others can probably give a more "conventional" approach but one thing you can do, because $y'(x)$ is independent of $y(x)$, is take the following approach for the first step (at least).
For the initial slope, think in terms of angle rather than slope. You want to proceed with a certain angle but you can't use $\theta_1=\frac{\pi}{2}$... which is obviously no good.
Instead take as angle the mid-angle of $\theta_1=\pi/2$ and the angle of the curve at $x=h$:
$$\theta_2:=\tan^{-1}\left(y'(h)\right)=\tan^{-1}\left(\frac{1}{\sqrt{h}}\right),$$
and so take as initial angle:
$$\theta_0=\frac{\theta_1+\theta_2}{2}=\frac{\frac{\pi}{2}+\tan^{-1}\left(\frac{1}{\sqrt{h}}\right)}{2},$$
and so an initial slope of:
$$m_0:=\tan(\theta_0).$$
From here you can do normal Euler or perhaps keep it going with slope
$$m_k=\frac{y'(x_k)+y'(x_{k+1})}{2}.$$
This is an adaptation of Heun's Method. With a step-size of $h=0.1$ this gives an error for $y_1\approx y(x_1)$ of the order of $0.015$. |
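A minimal sketch of this scheme in Python, assuming the underlying problem is $y'(x)=1/\sqrt{x}$ with $y(0)=0$ (consistent with the $y'(h)=1/\sqrt h$ used above), whose exact solution is $y=2\sqrt x$:

```python
import math

def solve(h, steps):
    """Averaged-angle first step, then averaged-slope (Heun-like) steps,
    for y' = 1/sqrt(x) with y(0) = 0."""
    dydx = lambda x: 1.0 / math.sqrt(x)
    theta0 = 0.5 * (math.pi / 2 + math.atan(dydx(h)))  # mid-angle at the start
    x, y = h, math.tan(theta0) * h
    for _ in range(steps - 1):
        y += 0.5 * (dydx(x) + dydx(x + h)) * h
        x += h
    return x, y

x, y = solve(0.1, 10)
print(y, 2 * math.sqrt(x))   # numerical value vs the exact y = 2*sqrt(x) at x = 1
```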
If $X$ and $Y$ are two NON independent random normal variables, what is the distribution of $Z = \frac{X}{Y^n}$ | I'm not sure we can completely dismiss criteria for optimality of BMI.
BMI is an artificial index which I suppose is intended to be larger
for people who are obese than for people who are scrawny.
BMI may be highly correlated with some 'better' measure of obesity,
for example volume of fatty tissue as a percentage of total body volume.
When you have an unquestionably better measure of obesity, you might use to 'fine tune' BMI.
One method might be to set $\log(Y_i) = a + b\log(W_i) + c\log(H_i),$
where for each subject $i$ out of a large number $n$ of randomly
selected subjects $Y_i$ is better measure, $W_i$ is the weight and $H_i$
is the height. Then you should do the usual regression diagnostics, to
make sure the method is valid. BMI uses $a = 0,\,b = 1$ and $c = -2.$ If your
regression study were to give appreciably different values for $a, b$ and $c,$
then your new 'BMI' might be more useful.
However, on dimensionality grounds, the existing BMI might make good
sense. Clearly, weight is going to increase with height in 'ideally
proportioned' people, and perhaps the increase is proportional to height${}^2$---probably, not as height${}^3$ because people aren't spheres. So BMI = weight/height${}^2$
seems vaguely reasonable.
As you suggest, you might explore the relationship between $W$ and $H,$ but it should be based
on data from appropriately chosen people. Then you could consider
the regression model $\log W = a + b\log H$. But you really need to
explore this relationship between $W$ and $H$ in real data. I agree that there is no point to an an analysis based on the assumption that $W$ and $H$
are independent. I suppose that BMI works as well as it does just because it anticipates a relationship between height and weight.
You say that you "know that $W\sim N(a\mu_{H} + \mu_{\epsilon}, a^2\sigma^2_{H} + \sigma^2_{\epsilon}).$" (Do you know the $\mu$'s and $\sigma$s?) Do you have the data that gives rise to this? If so, and if the subjects were suitable, that would be the basis for the kind of regression I mentioned in the previous
paragraph.
Of course, one can find the distribution of $Z = W/H^n,$ for suitably chosen
or modeled data on $W$ and $H,$ but frequently such distributions of ratios of normals have
bad properties, so I'm not sure that is the best approach. |
$f(a+h)= \sum_{k=0}^{n} (1/k!)f^{(k)}(a) h^k$ for any polynomial f(x) (Comparing summations) | As the formula
$$f(a+h)= \sum_{k=0}^{n} \frac{1}{k!}f^{(k)}(a) h^k$$ is linear in $f$, it is sufficient to prove it for monomials. And for a monomial $p(x) = x^n$ of degree $n$ and $0 \le k \le n$
$$p^{(k)}(a) = \frac{n!}{(n-k)!}a^{n-k}.$$ Therefore
$$\sum_{k=0}^{n} \frac{1}{k!}f^{(k)}(a) h^k= \sum_{k=0}^{n}\frac{n!}{(n-k)!k!}a^{n-k}h^k = p(a+h)$$ according to the binomial theorem
$$(a+h)^n = \sum_{k=0}^{n} \binom{n}{k} a^{n-k}h^k.$$
Note: your formula is only valid if $n$ is greater than or equal to the degree of the polynomial.
simple maths problem | You already know how to calculate $c(X)$, the cost of a single car with $X$ seats. The number of cars you need is $\sum_i \lceil \frac {n_i}X \rceil$ where $n_i$ is the population at location $i$. Just loop over $X$ from $1$ to the maximum population of any location, add up the sum, and pick the best. |
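In code, that loop might look roughly like this (the cost function and populations below are made-up placeholders, since the actual cost formula is not given here):

```python
def best_seat_count(populations, cost_of_car):
    """Loop over every seat count X and keep the cheapest total."""
    best = None
    for X in range(1, max(populations) + 1):
        cars = sum((n + X - 1) // X for n in populations)   # sum of ceil(n_i / X)
        total = cars * cost_of_car(X)
        if best is None or total < best[1]:
            best = (X, total)
    return best

# Made-up example data: cost of a car grows linearly with its seat count.
print(best_seat_count([13, 7, 25, 4], lambda X: 100 + 20 * X))
```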
Proving an identity related to the torsion of a connection. | I think the term $fXY$ should really be $fX(1)Y$. And $X(1) = 0$. ($fXY$ doesn't really mean anything.)
Mean Value Theorem, Indeterminate forms and L'Hospital's rule | I don't think you understand the Mean Value Theorem (MVT) correctly (as far as I understood your question).
For $f(x)$ and $g(x)$ MVT gives us the following:
$\exists$ $c_1 \in [a, b]: f^{'}(c_1) = \dfrac{f(b) - f(a)}{b - a}$
$\exists$ $c_2 \in [a, b]: g^{'}(c_2) = \dfrac{g(b) - g(a)}{b - a}$
$\exists$ $c_3 \in [a, b]: \dfrac{f^{'}(c_3)}{g^{'}(c_3)} = \dfrac{f(b) - f(a)}{g(b) - g(a)}$
$c_1, c_2, c_3$ are not related in any way, and MVT gives us no information about the actual values of $c_1, c_2, c_3$, we only know that these points do exist. In addition, $c_1, c_2, c_3$ don't have to be unique. |
what does a critical number is doing for a function?? | Think about the definition of critical number - it involves the derivative. If the derivative (or slope) of a function is negative, the function is decreasing. If the derivative (or slope) of a function is positive, the function is increasing. So at a critical number, the slope is zero. Does this guarantee the slope is changing from positive to negative (or vice versa)?
As an example, think about the function $f(x) = x^3$. Does it have a critical number? Is the function switching from increasing to decreasing or decreasing to increasing at the critical number? |
Express $\log_5 288$ in terms of decimal logarithms $\log 2$ and $\log 3$ | HINT:
Using $$\log_ab=\frac{\log_cb}{\log_ca},$$
$$\log_5{288}=\log_5(2^53^2)=5\log_52+2\log_53=\frac{5\log_{10}2+2\log_{10}3}{\log_{10}5}$$
Now $\displaystyle 1=\log_{10}{10}=\log_{10}2+\log_{10}5\iff \log_{10}5=1-\log_{10}2=\cdots$ |
Is the function differentiable at $0$? | The problem is slightly tricky because we don't know the anti-derivative of $\cos (1/x)$ but we do know that function $g(x) = x^{2}\sin(1/x), g(0) = 0$ is continuous and differentiable for all $x$ and $$g'(x) = 2x\sin (1/x) - \cos(1/x),g'(0) = 0$$ and hence $$g(h) = \int_{0}^{h}(2t\sin (1/t) - \cos (1/t))\,dt$$ or $$\int_{0}^{h}\cos(1/t)\,dt = 2\int_{0}^{h}t\sin(1/t)\,dt - g(h)$$ and hence
\begin{align}
F'(0) &= \lim_{h \to 0}\frac{1}{h}\int_{0}^{h}\cos(1/t)\,dt\notag\\
&= \lim_{h \to 0}\frac{2}{h}\int_{0}^{h}t\sin(1/t)\,dt - \frac{g(h)}{h}\notag\\
&= 2\lim_{h \to 0}\frac{1}{h}\int_{0}^{h}t\sin(1/t)\,dt\notag\\
\end{align}
Now $t\sin(1/t)$ has a removable discontinuity at $t = 0$ and its limit is $0$ as $t \to 0$ hence by Fundamental Theorem of Calculus the limit $$\lim_{h \to 0}\frac{1}{h}\int_{0}^{h}t\sin(1/t)\,dt$$ above is $0$ and therefore $F'(0) = 0$. |
$(x,y)$ pairs in lattice $Z^2$ that are co-prime with euclidean-norm at most $k$ | This is the Primitive Circle Problem, the asymptotics are still $\Theta(k^2)$ (and the constant is known to be $6/\pi$). |
Can unit vectors $i, j, $ and $k$ be elements of a group? | The unit vectors $i,j,k$ can all be elements of the same abelian group. One specific example is if you take your set to be all vectors in $\mathbb{R}^3$ and use the operation of vector addition. |
Estimate of n factorial: $n^{\frac{n}{2}} \le n! \le \left(\frac{n+1}{2}\right)^{n}$ | Not equal, but by a standard inequality: $\sqrt{ab} \le \frac{a+b}{2}$, so $ab \le \frac{(a+b)^2}{4}$.
So all products $i\cdot ((n+1)-i)$ are bounded above by $\frac{(i + ((n+1)-i))^2}{4} = \frac{(n+1)^2}{4}$. So there is no equality in general, only an upper bound. Because we have $(n!)^2$ we get rid of the square roots again, so $n! \le \left(\frac{n+1}{2}\right)^n$ (where the $n$-th power comes from the fact that we have $n$ terms in the $(n!)^2$ expression).
The first equality is just rearranging terms. |
Prove that for any integers $a,b,c,$ there exists a positive integer $n$ such that the number $n^3+an^2+bn+c$ is not a perfect square. | Let $a,b,c \in \mathbb Z$, and let $f(n)=n^3+an^2+bn+c$, $n \in \mathbb N$. We show that at least one of $f(1)$, $f(2)$, $f(3)$, $f(4)$ is not a perfect square. We use the fact that $m^2 \equiv 0\:\text{or}\:1\pmod{4}$ for $m \in \mathbb Z$.
Suppose $f(n)$ is a perfect square, $n \in \{1,2,3,4\}$. We note that
$$ \begin{eqnarray*} f(1) \equiv a+b+c+1\pmod{4}, \\
f(2) \equiv 2b+c \pmod{4}, \\
f(3) \equiv a+3b+c+3 \pmod{4}, \\
f(4) \equiv c \pmod{4}. \end{eqnarray*} $$
Since $f(3)-f(1)$, $f(4)-f(2)$ are both even, each must be divisible by $4$. But then $4$ must divide both $2b$ and $2(b+1)$. This is impossible.
Therefore, at least one of $f(1)$, $f(2)$, $f(3)$, $f(4)$ must be a non-square, as claimed. $\blacksquare$ |
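The claim is also easy to confirm empirically for small coefficients, for instance:

```python
from math import isqrt

def is_square(m):
    return m >= 0 and isqrt(m) ** 2 == m

def f(n, a, b, c):
    return n**3 + a * n**2 + b * n + c

for a in range(-20, 21):
    for b in range(-20, 21):
        for c in range(-20, 21):
            assert any(not is_square(f(n, a, b, c)) for n in (1, 2, 3, 4))
print("for all |a|,|b|,|c| <= 20, some f(n) with n in {1,2,3,4} is a non-square")
```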
Find a test for $H_{0} : \sigma_{1}^{2} \ne \sigma_{2}^{2}$, against $H_{1} : \sigma_{1}^{2} =\sigma_{2}^{2}$ | I think you're testing the hypothesis $H_0 : \mu_1 = \mu_2$. At least your test statistic seems to suggest so. Mind you, I am only a beginner in this field, so you might be right and I might be wrong. Also, I know only about the equality case (usually the $H_0$ is based on equality, right-tailed, left-tailed or two-tailed). Anyways, here goes
For testing $H_{0} : \sigma_{1}^{2}= \sigma_{2}^{2}$, the appropriate test statistic is
$$F_0 = \frac{S_1^2}{S_2^2}$$
where the reference distribution of $F_0$ is the $F$ distribution with $n-1$ degrees of freedom for numerator and $m-1$ degrees of freedom for denominator. The null hypothesis would be rejected if $F_0 \gt F_{\alpha/2, n-1,m-1}$ or if $F_0 \lt F_{1-(\alpha/2), n-1,m-1}$
You can read more about it in the book Design of Experiments by Montgomery, Chapter 2, the ending section. |
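As a sketch of how one might carry out this two-sided $F$-test in practice (with made-up samples, purely for illustration):

```python
import numpy as np
from scipy import stats

x = np.array([5.1, 4.8, 5.6, 5.0, 5.3, 4.9])        # sample 1 (n observations)
y = np.array([4.7, 5.9, 5.2, 4.4, 5.8, 5.5, 5.1])   # sample 2 (m observations)

alpha = 0.05
F0 = x.var(ddof=1) / y.var(ddof=1)                  # S1^2 / S2^2
lo = stats.f.ppf(alpha / 2, len(x) - 1, len(y) - 1)
hi = stats.f.ppf(1 - alpha / 2, len(x) - 1, len(y) - 1)

print(F0, lo, hi)
print("reject H0" if F0 < lo or F0 > hi else "do not reject H0")
```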
prove a continuous mapping theorem for $g_n(X_n)\stackrel{d}{\to}g(X)$? | No: consider $g_n$ defined in the following way: $g_n$ is $0$ outside $[0,2/n]$,
takes the value $1$ at $1/n$ and is interpolated linearly on $[0,1/n]$ and $[1/n,2/n]$. Let $X_n=1/n$. Then $X_n\to 0$, $g_n\to 0$ pointwise but $g_n\left(X_n\right)=1$ for each $n$.
However, if the convergence of $\left(g_n\right)_{n\geqslant 1}$ to $g$ is uniform, then everything goes well, as
$$g_n\left(X_n\right)=\underbrace{g_n\left(X_n\right)-g_n\left(X\right)}_{\to 0\mbox{ in probability} } +\underbrace{g_n\left(X \right).}_{\to g(X)\mbox{ in distribution} } $$ |