title | upvoted_answer
---|---
Example of an uncountable metric space where every point is isolated | Real numbers with the discrete metric. I.e. $d(x,y) = 1$ if $x \neq y$. |
Binomial confidence intervals and the Central Limit Theorem | Suppose we have $x = 21$ successes in $n = 50$ trials. Here are a few
approaches to getting a 95% CI for success probability $\theta.$
Using binomial tails in quest of an "exact" interval. Without using a normal approximation, the rough idea to get a 95% CI for binomial success probability $\theta$
is to find $\theta_1$ such that $P(X \le x; n, \theta_1) \approx .975$
and $\theta_2$ such that $P(X \le x; n, \theta_2) \approx .025.$ Then
the CI is $(\theta_1, \theta_2).$
This idea is discussed in Section 3 of these course notes.
Briefly, an important difficulty is that the discreteness of the binomial distribution
prevents getting 'tail probabilities' of exactly .025 for either $\theta_i.$
One solution is to fuss about putting a little more probability in one tail
and a little less in the other tail in order to get as close to a 95% CI
as possible. Another solution (the 'conservative approach') is to allow tail probabilities smaller than .025
to ensure at least 95% confidence.
Below is a rough demonstration (with no fussing) that I programmed in R statistical
software. It gives the CI $(.300, .568).$
x = 21; n = 50
th = seq(0, 1, by=.001) # grid of values of 'theta' in [0,1]
cdf = pbinom(x, n, th)
d1 = abs(cdf - .975)
th1 = th[d1 == min(d1)]; th1
## 0.3
d2 = abs(cdf - .025)
th2 = th[d2 == min(d2)]; th2
## 0.568
In R, a refined version of this style of CI is part of the output of the function binom.test, which includes the CI $(0.282, 0.568).$ From what I can tell,
this function uses a conservative approach, hence the slightly smaller
left endpoint. [By default, the function tests whether the coin is fair, although other hypotheses may be specified.]
binom.test(21,50)
Exact binomial test
data: 21 and 50
number of successes = 21, number of trials = 50, p-value = 0.3222
alternative hypothesis: true probability of success is not equal to 0.5
95 percent confidence interval:
0.2818822 0.5679396
sample estimates:
probability of success
0.42
Bayesian Posterior Interval. Sometimes Bayesian posterior intervals
are used as confidence intervals. Based on a 'noninformative' $\mathsf{Unif}(0,1)$ prior distribution and the binomial likelihood function from
$x = 21$ and $n = 50$, one obtains the posterior distribution
$\mathsf{Beta}(x + 1, n-x + 1).$ Cutting probability .025 from each tail
of this distribution one obtains the interval $(0.293, 0.558).$
qbeta(c(.025,.975), x+1,n-x+1)
## 0.2934549 0.5583072
Agresti-Coull Interval. The so-called 'Wald'
CI, based on $\hat \theta = x/n,$ is of the form $\hat \theta \pm 1.96\sqrt{\hat\theta(1-\hat\theta)/n}.$ You are correct that this
kind of interval can give very bad results for small $n,$ especially
when $\theta$ is far from $1/2.$ Not only does it use a normal
approximation that may not be appropriate for such $n$ and $\theta,$ it
also assumes (under the square root sign) that $\hat\theta = x/n$ is
the same as $\theta.$ "Bad" means
that the true coverage probability of a
"95%" CI can be far from $95\%$, often much lower.
However, a considerable improvement is achieved by using
$\tilde n = n + 4$ and $\tilde\theta = (x+2)/\tilde n$ to make the CI
$\tilde \theta \pm 1.96\sqrt{\tilde\theta(1-\tilde\theta)/\tilde n}.$
This is called the Agresti-Coull interval; for confidence levels other
than 95%, slight adjustments are made. You can google 'Agresti interval'
for details. The Agresti interval for $x = 21$ and $n = 50$ is $(0.294, 0.558)$.
pa = (x+2)/(n+4); pm = -1:1
pa + pm*1.96*sqrt(pa*(1-pa)/(n+4))
## 0.2940364 0.4259259 0.5578154
While the Agresti intervals use a normal approximation, they do not
assume that $\tilde \theta$ is the same as $\theta.$
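For readers without R, here is a quick Python transcription (mine, not part of the original answer) of the Wald and Agresti-Coull computations for $x = 21$ and $n = 50$:

```python
from math import sqrt

x, n = 21, 50

# Wald interval: theta_hat +/- 1.96 * sqrt(theta_hat*(1-theta_hat)/n)
p_hat = x / n
half_wald = 1.96 * sqrt(p_hat * (1 - p_hat) / n)
wald = (p_hat - half_wald, p_hat + half_wald)

# Agresti-Coull: replace n by n+4 and theta_hat by (x+2)/(n+4)
p_ac = (x + 2) / (n + 4)
half_ac = 1.96 * sqrt(p_ac * (1 - p_ac) / (n + 4))
agresti = (p_ac - half_ac, p_ac + half_ac)

print(wald)     # roughly (0.283, 0.557)
print(agresti)  # roughly (0.294, 0.558), matching the R output above
```
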
When $n$ is quite small, all of the CIs based on a normal approximation
perform badly. However, when $n$ is small it is increasingly difficult
to find appropriate binomial quantiles to make an "exact" interval of approximate
confidence 95% (or any other desired level). |
Volume integral and Variations | Proceed from the basic principles. Let $\eta$ be an admissible function. Writing $\text{grad} u =\nabla u$ and applying Taylor's expansion up to the terms of the second order we obtain
$$I\left[u+\epsilon\eta\right]= \int_{V}f(u+\epsilon\eta,\nabla(u+\epsilon\eta))dV=I[u]+ \epsilon\int_{V}\left[f_{u}\eta+f_{\nabla u}\nabla\eta\right]dV+O\left(\epsilon^{2}\right)$$
where $f_{\nabla u}$ is the shorthand notation for the vector whose components are $f_{u_x},f_{u_y},f_{u_z}$. Now, by the definition of the variation via the Gateaux differential, we deduce
$$\delta I\left[u,\eta\right]=\left.\frac{d}
{d\epsilon}I\left[u+\epsilon\eta\right]\right|_{\epsilon=0}=\int_{V}\left[f_{u}\eta+f_{\nabla u}\nabla\eta\right]dV$$
The idea is, as usual, to get rid of the gradient of the variation $\eta$. Using the product rule for $\nabla$,
$$f_{\nabla u}\nabla\eta=\nabla\left(f_{\nabla u}\eta\right)-\nabla f_{\nabla u}\eta$$
$$\delta I\left[u,\eta\right]=\int_{V}\left[f_{u}-\nabla f_{\nabla u}\right]\eta dV+\int_{V}\nabla\left(f_{\nabla u}\eta\right)dV$$
Now use the divergence theorem to transform the second term:
$$\int_{V}\nabla\left(f_{\nabla u}\eta\right)dV=\int_{\partial V}\boldsymbol{n}f_{\nabla u}\eta dS$$
where $\boldsymbol{n}$ is the outward normal of the boundary $\partial V$.
Finally
$$\delta I\left[u,\eta\right]=\int_{V}\left[f_{u}-\nabla f_{\nabla u}\right]\eta dV+\int_{\partial V}\boldsymbol{n}f_{\nabla u} \eta dS$$
Using the necessary condition
$$\delta I\left[u,\eta\right]=0$$
and the fundamental lemma, the first term will give the Euler-Ostrogradski (Euler-Lagrange) equation. The second term will give the (natural) boundary condition. |
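To make that conclusion explicit (this spelling-out is mine, using the same notation as above): since $\eta$ is arbitrary in the interior of $V$, and unrestricted on $\partial V$ when no boundary values are prescribed, the two integrals must vanish separately, giving

$$f_{u}-\nabla f_{\nabla u}=0 \quad \text{in } V$$

(the Euler-Ostrogradski equation) and

$$\boldsymbol{n}f_{\nabla u}=0 \quad \text{on } \partial V$$

(the natural boundary condition).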
Prove $A\cup (A\cap B) = A$ | Without context, I would do this without applying rules. First I would show:
$$ A \cup (A \cap B) \subset A$$
then I would show $$A \subset A \cup (A \cap B) .$$ These two together imply the relation you're trying to prove.
For the first statement, assume you have $a \in A \cup (A \cap B)$. Because $a$ is in this union, it is either in $A$ or in the intersection of $A$ and $B$... It must be in $A$ in either of these cases.
For the second statement, assume you have $a \in A$. Every element of $A$ is also in $A \cup C$ for any set $C$, so $a \in A\cup (A \cap B)$. |
Applications of Riesz Representation Theorem | It is given that $\|T(c_k)\| \leq \sqrt B \sqrt {\sum |c_k|^{2}}$ for all $(c_k) \in l_2$. This gives $|\sum c_k \langle f_k, f \rangle|\leq \|f\| \sqrt B \sqrt {\sum |c_k|^{2}}$. Fix $N$ and take $c_k$ to be the complex conjugate of $\langle f_k, f \rangle$ for $k \leq N$ and $0$ for $k >N$. After division by $\sqrt {\sum |c_k|^{2}}$ this gives $\sum\limits_{k=1}^{N} |\langle f_k, f \rangle|^{2} \leq \|f\|^{2}B$. Let $N \to \infty$ to finish the proof. |
Predicates and Indirectly Proving the last step of Mathematical Induction | Essentially, what you have proved is nothing else than:
Assume : $ k^2 \geq 2k + 3 $ then $ k \geq 3 $
This can be easily derived:
$$
k^2 - 2k - 3 = (k-3)(k+1) \geq 0 \rightarrow k \geq 3
$$
So your proof is correct (it is just a rearrangement of the above steps),
but it is not using the inductive principle,
since you are not proving that $P(k+1)$ is true.
What you have done is determine the set to which the assumption confines $k$:
$$IH(k) \rightarrow k \in A$$ |
Alternative way for expressing "No student in this class has sent e-mail to exactly two other students in this class." | It's not correct. The first thing that strikes me is that if $y$ is unique, then how can you say $z$ is unique when it has the same property as $y$? Clearly they're not individually unique; it is the pair that you want to be unique. |
finding a point of intersection | It is useful to make a sketch of the two circles. Let the point we are looking for be $P=(0,b)$.
By the Pythagorean Theorem, the square of the distance from $P$ to the points of tangency to the first circle is given by
$$\left((0-3)^2+(b-6)^2\right) -16.$$
You can write a similar expression for the square of the distance from $P$ to the points of tangency with the second circle.
Equate the two squares of distance, and solve for $b$. |
Series : Every rearrangement converges $\implies$ absolutely convergent | The contrapositive of your statement follows from the Riemann series theorem.
To be more precise, suppose for a contradiction that $\sum z_i$ is a sum that is convergent under every rearrangement to the same limit $\ell$, but it is not absolutely convergent. Then $\sum|\Re z_i|+\sum|\Im z_i|\ge \sum|z_i|=\infty$, so either $\sum\Re z_i$ or $\sum\Im z_i$ is a conditionally convergent real series, so the Riemann series theorem gives a rearrangement $\sigma$ such that one of these two converges to something different from their original sum, and then $\sum z_{\sigma(i)}=\sum\Re z_{\sigma(i)}+\sum\Im z_{\sigma(i)}\ne\ell$ in contradiction to the hypothesis. |
Number theory problem | Some hints:
Prove that $x^p-x$ has $p$ distinct roots in $\mathbb{Z}/p\mathbb{Z}$, so every element mod $p$ is a root with multiplicity $1$.
Use the root theorem (the element $a$ is a root of $q(x)$ if and only if $x-a$ divides $q(x)$) to find a factorization of $x^p-x$. Show/verify that every divisor of $x^p-x$ has distinct roots.
If $P(x)$ has $d$ distinct roots, use the root theorem to write a factorization of $P(x)$ and verify that it divides $x^p-x$. |
Calculate trigonometric integral: $\int \sin(x)[\sec(2x)]^{3/2}dx$ | Firstly, note $$\sec 2x = \frac{1}{2\cos^2(x) - 1}$$ and hence
$$\int \sin(x)[\sec(2x)]^{3/2} \ dx= \int \sin(x) \left[{2\cos^2(x) - 1}\right]^{-3/2} \ dx $$
Taking $u = \cos x$ yields
$$-\int ({2u^2 - 1})^{-3/2} \ du$$
We can solve the above integral firstly for only nonnegative $u$ and use the fact that the function is even (hence its primitive odd) to extend it in the final step. Thus there is no problem with taking only the principal root when solving for $u$ in our next step; it all works out.
Take $t = ({2u^2 - 1})^{-\frac 12}$ so that $$dt = \frac{-2u}{({2u^2 - 1})^{\frac 32}} \ du$$ and we get
$$\int \frac{1}{2u} \ dt = \frac{1}{2\sqrt 2}\int \frac{{2t \ dt}}{\sqrt{1 + t^2}} = \frac{1}{\sqrt{2}}\sqrt{1 + t^2} + C = \frac{u}{(2u^2 - 1)^{\frac 12}} + C = {\cos x}{\sqrt{\sec 2x}} + C$$ |
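A quick numerical sanity check of the final antiderivative (my addition, not part of the answer): differentiate $F(x)=\cos x\sqrt{\sec 2x}$ by a central difference and compare with the integrand $\sin x\,[\sec 2x]^{3/2}$ at a point where $\cos 2x > 0$.

```python
from math import sin, cos, sqrt

def sec(t):
    return 1.0 / cos(t)

def F(t):
    # the claimed antiderivative cos(x) * sqrt(sec(2x))
    return cos(t) * sqrt(sec(2 * t))

def integrand(t):
    return sin(t) * sec(2 * t) ** 1.5

t0, h = 0.3, 1e-6
deriv = (F(t0 + h) - F(t0 - h)) / (2 * h)  # central-difference F'(t0)
print(abs(deriv - integrand(t0)) < 1e-6)   # True
```
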
What is the meaning of: "... the antecedent is a contradiction." | Standardly, “contradiction” is used widely in logic, to include self-contradictions, and any wff that comes out false on all valuations counts as a (self)-contradiction in this sense, even if not of the explicitly self-contradictory form $(\alpha \land \neg\alpha)$.
So for a start, any negation of a tautology -- for example, ‘$\neg(P \lor \neg P)$’ -- counts as a contradiction in this sense. So does ‘$\bot$’, the primitive absurdity constant, although it is quite unstructured. Likewise for any wff $\gamma$ that entails both some wff and its negation: $\gamma$ can't possibly be true, so it comes out false on all relevant valuations, and hence counts as a contradiction in the wide sense. |
Hybrid equivalence of Polynomial-like maps | It represents $\frac{\partial \phi}{\partial \bar z}$, the Wirtinger derivative defined by
$$ \frac{\partial \phi}{\partial \bar z} := \frac 12 \left(\frac{\partial \phi}{\partial x} + i\frac{\partial \phi}{\partial y}\right) $$ |
Finding a basis for the Eisenstein space $\mathcal{E}_1(12,\chi)$ | I don't know about how to generate that Eisenstein space, but from quadratic reciprocity for $O_K = Z[\frac{\sqrt{-3}+1}{2}]$ which is a PID we have
$$\sum_{a,b \ne (0,0)} |a+b \frac{\sqrt{-3}+1}{2}|^{-2s} = |O_K^\times| \zeta_K(s) =6 \zeta(s) L(s,(\frac{-3}{.}))=6 \zeta(s) L(s,(\frac{.}{12}))\\ =6 \sum_{n=1}^\infty n^{-s} \sum_{d | n, d \ odd} (\frac{d}{3})(-1)^{(d-1)/2}$$
So that with the quadratic form $f(a,b) = |a+b \frac{\sqrt{-3}+1}{2}|^2$ we have
$$\sum_{a,b} q^{f(a,b)} = 1 +6 \sum_{n=1}^\infty q^n \sum_{d | n, d \ odd} (\frac{d}{3})(-1)^{(d-1)/2} \in M_1(\Gamma_1(12))$$
Then with the quadratic form $Q(a,b) = |a+b\sqrt{-3}|^2$ I think it is not of level $12$ but of level $12 [O_K : Z[\sqrt{-3}]] = 24$, that's why you won't find it in your Eisenstein space. |
Is it true for arbitrary topological spaces? | Counterexample: $A = (0,1) = U$, $V = (2,3)$.
As $U$ and $V$ are open, it is preferred to simply require
them to be disjoint instead of separated.
Did not Rudin require a point of $A$ to be in $U$ and
another point of $A$ to be in $V$? |
Irrational inequality. Symbolab and Wolfram Alpha have different answers | $\sqrt{x^2-4}$ is not negative. If it is $0$, the inequality holds; otherwise, the denominator must be positive. On the other hand, $x\neq 2$, to avoid division by zero.
The correct solution is Wolfram Alpha's. |
Proving that $2^{2^n} + 5$ is always composite by working modulo $3$ | The basic idea is, work modulo 3. What happens, modulo 3, when you raise 2 to an even power? |
A specific example of a CW complex and a few questions concerning it. | We will show that the complement is open. Note that your space is inside the closed ball $\bar B(0,1)$. Take any point outside the ball. It clearly has a neighbourhood disjoint from $Y$ (it is enough to take a neighbourhood disjoint from $\bar B(0,1)$). Now take any point $x \in \bar B(0,1)\setminus Y$. In polar coordinates it is given by $(\phi,r)$ for some $(\phi,r)\in (0,2\pi)\times (0,1]$. There is a unique $n$ such that $\frac{2\pi}{n+1}<\phi<\frac{2\pi}{n}$. Now for appropriately small $\varepsilon$ the open ball $B(x,\varepsilon)$ is disjoint from $Y$ (easy and very formal details left to the reader).
They want $x(0)\mapsto 0$ and $x(n)\mapsto e^{2\pi i/n}$, $D^1_n \overset{f_n}{\to} I_n$, where $n>0$. More precisely if $D_n^1=[0,1]$, then the map $f_n$ is $f_n(t) = t\cdot e^{2\pi i/n}$.
The inverse function is not continuous. The sequence $y_n=e^{2\pi i/n}$ approaches $y_1=e^{2\pi i}=1$, but $\psi^{-1}(y_n)=x(n)$ doesn't approach $\psi^{-1}(y_1)=x(1)$ (you can easily find an open neighbourhood of $x(1)$ disjoint from the other $x(n)$ - such a neighbourhood is, for example, the image of $(0,1] \subseteq D_1^1$).
3.5. Bonus. Actually the topologies of $X$ and $Y$ differ also at $x(0)$ and $0$ ($\psi^{-1}$ is not continuous at $0$). Imagine the following open neighbourhood of $x(0)$:
$N=\bigcup_{n=1}^\infty [0,1/n)_n$, where $[0,1/n)_n\subseteq [0,1]_n=D_n^1$ (it is open as its inverse image is open in every cell). Its image via $\psi$ (inverse image via $\psi^{-1}$) is not a neighbourhood of $0$: the sequence $z_n=\frac{e^{2\pi i/n}}{n}$ obviously approaches $0$ (in polar coordinates the radius tends to $0$), but it is disjoint with $\psi(N)$ (sequence $a_n$ has limit $a_0$ if and only if for each neighbourhood of $a_0$ there are only finitely many $a_n$ outside it). The same with different words: there is no ball around $0$ in $Y$ that could be included in $\psi(N)$. |
Intersection of two Field Extensions of a function field | It is certainly possible for the two splitting fields to contain elements outside of $K(t_1,t_2)$ but algebraic over $K$.
Consider the case $K=\Bbb{Q}$, $p_1(x)=x^3-t_1$, $p_2(x)=x^3-t_2$. To get the splitting field of either $p_1(x)$ or $p_2(x)$ you need to include the third roots of unity, $\omega$ and $\omega^2$. And $K(t_1,t_2,\omega)$ will then be contained in the intersection of the splitting fields.
I think that if $K$ is algebraically closed, then this cannot happen, and the intersection of the two splitting fields will be just $K(t_1,t_2)$. I'm afraid I don't have a clean argument in mind though. |
Intuition calculating average price change | Using $p$ for prices, $q$ for quantities (amounts) the two formulae are
$${{\sum_i (p_{2i}/p_{1i}) q_{1i}} \over {\sum_i q_{1i}}}-1$$
and
$${{\sum_i p_{2i}q_{1i}} \over {\sum_i p_{1i}q_{1i}}}-1
={{\sum_i (p_{2i}/p_{1i}) p_{1i}q_{1i}} \over {\sum_i p_{1i}q_{1i}}}-1$$
So both are weighted averages of price changes, with the weights being the first-period quantities in the former and the first-period values ($p_{1i}q_{1i}$) in the latter. There is no right answer, since only the user can decide which system of weights better captures the relative importance of commodities.
However there is one argument against the first formula. The quantities depend on the choice of units. If you start measuring one commodity in grams rather than kilograms then its importance in the index would go up thousandfold. This is not a problem in the second case.
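A toy illustration of this unit-dependence (my example; the numbers are invented): rescaling the second commodity from kilograms to grams changes the quantity-weighted mean of price relatives but leaves the second formula untouched.

```python
def mean_of_relatives(p1, p2, q1):
    # first formula: quantity-weighted average of price relatives, minus 1
    return sum(b / a * q for a, b, q in zip(p1, p2, q1)) / sum(q1) - 1

def laspeyres(p1, p2, q1):
    # second formula: cost of the period-1 basket at new vs old prices, minus 1
    return sum(b * q for b, q in zip(p2, q1)) / sum(a * q for a, q in zip(p1, q1)) - 1

p1, p2, q1 = [10.0, 2.0], [12.0, 3.0], [5.0, 4.0]            # good 2 in kg
p1g, p2g, q1g = [10.0, 0.002], [12.0, 0.003], [5.0, 4000.0]  # good 2 in grams

m_kg, m_g = mean_of_relatives(p1, p2, q1), mean_of_relatives(p1g, p2g, q1g)
l_kg, l_g = laspeyres(p1, p2, q1), laspeyres(p1g, p2g, q1g)
print(m_kg, m_g)  # the first formula differs across unit systems
print(l_kg, l_g)  # the second formula is identical in both
```
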
The second formula also has the advantage of being well known: it is called the Laspeyres index. The problem you are trying to solve has plagued economists for a long time, so much so that cutting-edge macroeconomic texts of the early twentieth century would devote full chapters to it. |
Finite Field Question | Each linear map $A: \mathbb{F}_2^5 \to \mathbb{F}_2^5$ corresponds to a unique $5\times 5$ matrix with entries in $\mathbb{F}_2$. There are $2^{25}=33554432$ such matrices. This immediately rules out the possibility that there are an infinite number of maps $A: \mathbb{F}_2^5 \to \mathbb{F}_2^5$ (injective or otherwise).
By the rank-nullity theorem, each injective linear map $A: \mathbb{F}_2^5 \to \mathbb{F}_2^5$ corresponds to a unique matrix in the general linear group $GL(5, \mathbb{F}_2)$. The
order of the group is
$$
\begin{align}
\prod_{k=0}^{n-1}(2^n - 2^k) &= (2^5 - 1)(2^5 - 2)(2^5 - 4)(2^5 - 8)(2^5 - 16) \\ &=31 \cdot 30 \cdot 28 \cdot 24 \cdot 16 = 9999360
\end{align}\tag1
$$
which means that $\frac{9999360}{2^{25}} \approx 29.8\%$ of the matrices in $\mathbb{F}_2^{5\times 5}$ correspond to injective maps.
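A one-line check of the count (my addition):

```python
# order of GL(5, F_2): product over k of (2^5 - 2^k), per formula (1)
n = 5
order = 1
for k in range(n):
    order *= 2 ** n - 2 ** k

print(order)                  # 9999360
print(order / 2 ** (n * n))   # about 0.298, i.e. 29.8% of all 5x5 matrices
```
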
The formula $(1)$ captures the fact that the first column of an invertible matrix in $\mathbb{F}_2^{5\times 5}$ can be anything but the zero vector, the second can be anything but the zero vector and the first column, the third anything but any of the four linear combinations of the first two columns and so on. |
Definition of Relative Version of Cross Product in Homology | If you have a triple $(X, A, B)$ with $A$ and $B$ open in $X$, you can define a relative cup product
$$H^*(X, A) \times H^*(X, B) \to H^*(X, A \cup B)$$
Define this on the (co)chain level by taking cochains $\alpha \in C^k(X, A)$ and $\beta \in C^\ell(X, B)$, whose cup product lives in the subgroup $C^{k+\ell}(X, A + B) \subset C^{k+\ell}(X)$ of cochains vanishing on sums of chains in $A$ and chains in $B$; that $\alpha \cup \beta$ does indeed vanish on such sums is just a matter of checking the definitions. All that remains is to push $\alpha \cup \beta$ to something cohomologous to a cochain living in the subgroup $C^{k+\ell}(X, A \cup B)$. This can be done by noting that the inclusion $C^*(X, A \cup B) \hookrightarrow C^*(X, A + B)$ induces an isomorphism on cohomology (five lemma + (co)chain-level statement of excision).
The geometric point is that if you think of $\alpha \cup \beta$ as a piecewise-linear function on $X$ (appropriately triangulated), it vanishes on sums of simplices in $A$ and simplices in $B$. It vanishes on any chain in $A \cup B$ (which need not be of that form: the chain could contain simplices which go half into $A$ and half into $B$) because that's what excision says: if you triangulate the chain and refine it enough, it will consist entirely of simplices in $A$ and simplices in $B$, on which $\alpha \cup \beta$ vanishes. That is all that happens "up to cohomology".
That being said, $p_1^*(a)$ lives in $H^*(X \times Y, A \times Y)$ and $p_2^*(b)$ lives in $H^*(X \times Y, X \times B)$. Their cup product lives in $H^*(X \times Y, A \times Y \cup X \times B)$. |
Homeomorphic images of an "almost" basic open set in $2^{\omega}$ | Not necessarily.
First, for any finite $0$-$1$ sequence $s$, let $O_s = \{x \in 2^\omega: s \subseteq x\}$. And for any $x \in 2^\omega$ and $n \in \omega$, let $x/n \in 2^\omega$ be the sequence such that for each $i\in\omega$, $x/n(i) = x(i+n)$; that is, $x/n$ just cuts off the first $n$ elements of $x$.
Now let $f$ be the constant $0$ function, $X = 2^\omega - \{f\}$ and $Y = O_{\langle 1\rangle} \cup O_{\langle 0, 0\rangle} - \{f\}$. Now let $\sigma: Y \rightarrow X$ with $\sigma(x) = x$ for $x\in O_{\langle 1\rangle}$ and $\sigma(x) = x/1$ for $x \in O_{\langle 0, 0\rangle}$. $\sigma$ is clearly bijective. To see that it is continuous, take a convergent sequence of functions and note that after some step they all fall into one "component" of $Y$, after which it becomes easy. To see that its inverse is continuous, take another convergent sequence of functions and look at their first coordinates; again it is straightforward, as above. So $\sigma$ is a homeomorphism.
Also note that $Y$ is not of the form $Y = O'-M'$ for some basic open set $O'$ and meager set $M'$. Because if it were so, then the only choice for $O'$ would be $O_\emptyset$, but then we must have $O_{\langle 0, 1\rangle} \subseteq M'$, which is a contradiction.
As you requested, I think I have an example where the $O'$ can't even be any open set. First, for $x,y \in 2^\omega$, let $x*y \in 2^\omega$ be such that $x*y(2n+1) = x(n)$ and $x*y(2n) = y(n)$. $x*y$ is just the sequence which has $x$ as its odd-indexed subsequence and $y$ as its even-indexed subsequence.
Now let $f$ and $X$ be as above, and let $\sigma:X \rightarrow 2^\omega$ with $\sigma(x) = x*f$. You can again see that $\sigma$ is a homeomorphism onto its range. The range of $\sigma$, which we denote by $Y$, is the set of all functions which are $0$ on their even-indexed subsequence, minus $f$. You can see that $Y$ is itself nowhere dense. So it is meager, and so it can't have the above representation. |
What is the Maclaurin series representation of $(1 - \frac{x}{5})^{-4}$ | $$(1 - \frac{x}{5})^{-4}=\frac{1}{(1-\frac{x}{5})^4}$$
use
$$\frac{1}{1-x}=\sum_{n=0}^{\infty}x^n$$
$$(\frac{1}{1-x})'''=\sum_{n=3}^{\infty}n(n-1)(n-2)x^{n-3}$$
$$\frac{6}{(1-x)^4}=\sum_{n=3}^{\infty}n(n-1)(n-2)x^{n-3}$$
or
$$\frac{1}{(1-x)^4}=\frac{1}{6}\sum_{n=3}^{\infty}n(n-1)(n-2)x^{n-3}$$
$$\frac{1}{(1-x)^4}=\frac{1}{6}\sum_{n=0}^{\infty}(n+1)(n+2)(n+3)x^{n}$$
now let $x\rightarrow \frac{x}{5}$ |
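As a sanity check (mine, not part of the answer), compare a partial sum of the resulting series $\sum_{n\ge 0}\frac{(n+1)(n+2)(n+3)}{6}\left(\frac{x}{5}\right)^n$ with the closed form at a sample point:

```python
x = 0.5
exact = (1 - x / 5) ** -4
# partial sum of the Maclaurin series derived above, evaluated at x/5
partial = sum((n + 1) * (n + 2) * (n + 3) / 6 * (x / 5) ** n for n in range(60))
print(abs(partial - exact) < 1e-12)  # True
```
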
Show that there are no values of $c$ such that $f(x) = c/x$ can serve as the values of the probability distribution | You need the fact that the harmonic series $\sum_x \frac 1 x$ is divergent. So we cannot have $c$ such that $c \sum_x \frac 1 x=1$. |
local property of a curve to deduce a global property | I assume by the curve's 'trace' you mean its image in Euclidean space.
(1) Let the fixed point be $\mathbf{p}$, and parametrize the curve by $\mathbf{x}(t)$. If the displacement vector going from $\mathbf{x}$ to $\mathbf{p}$ is parallel to the normal of the curve, then it must also be orthogonal to the tangent vector, and the tangent vector is parallel to $\mathbf{x}'(t)$. So we have
$$(\mathbf{x}(t)-\mathbf{p})\cdot\mathbf{x}'(t)=0 $$
The left hand side is actually the derivative of a simple expression. Thus, integrating yields
$$\frac{1}{2}\|\mathbf{x}(t)-\mathbf{p} \|^2=C.$$
This is precisely the equation of a circle of radius $\sqrt{2C}$ centered at $\mathbf{p}$.
(2) The curve here will actually be an open ray emanating outward from $\mathbf{p}$, if I'm understanding the question correctly. If the displacement $\mathbf{x}(t)-\mathbf{p}$ is parallel to the tangent vector, and so is $\mathbf{x}'(t)$, and the curve is regular, then there is a scalar function $\alpha(t)$ such that
$$\alpha(t)(\mathbf{x}(t)-\mathbf{p})=\mathbf{x}'(t).$$
This can be solved with the integrating factor method, but we'll take a shortcut. Reparametrize the curve so that the velocity is exactly $\mathbf{x}(s)-\mathbf{p}$, and then define $\mathbf{y}(s)=\mathbf{x}(s)-\mathbf{p}$. Then
$$\frac{d\mathbf{y}}{ds}=\mathbf{y}(s)$$
whose only solution is $\mathbf{y}(s)=e^s\mathbf{a}$ for some constant vector $\mathbf{a}$, which means that $\mathbf{x}(s)=\mathbf{p}+e^s\mathbf{a}$. |
How many isomorphic graphs does this iso class of 5 vertices and 5 edges have? | To get the number of graphs for the second graph, we start by choosing a triangle. There are $\binom{5}{3} = 10$ ways to do this. We then choose two of those vertices to be adjacent to $c$ and $e$. There are $\binom{3}{2} = 3$ ways to do this. Then we choose which is $c$ and which is $e$, and there are two ways to do this. So by the rule of product, $10 \cdot 3 \cdot 2 = 60$. |
Teaching algebra in a culturally relevant way while fitting Common Core standards | I completely believe that your kids do not understand x and y, let alone exponents, let alone negative exponents. And the amount of time it takes to make up this ground is considerable. Of course you can't do it in the time allowed.
However, in terms of getting more relevant, one thing they all like is sports. You might poll them about which sports engage them the most (at least they will pay attention while you do that). Say it is basketball. Suppose someone needs to take 1000 practice free throws. Suppose the first day they take 5 throws, the 2nd day they double that, the 3rd day they double down again, etc? How many days to get to 1000 free throws? (And how much better will they be after that? Maybe the needed number is 5000 or 10,000 -- what do they think?). I'm sure you can invent many other problems. Perhaps negative exponents can be expressed as penalties?
Another thing that interests them is money. You might talk about scams. We all get scammed, even those of us who know a lot of math, if only because the worst scammers manage to lobby their way into laws that enable or even require the scams. It is easier here to discuss percentages than exponents, but I'll bet you can imagine some financial scenarios, legitimate or otherwise.
I've passed along your question to a friend who has substantial experience teaching math to inner city students. If she has any suggestions, she or I will enter them here. |
How to find the left and right side limits of $f(x)=\frac{\sqrt{|1+x|}-1}{x}$, $f:\mathbb{R}\to\mathbb{R}$ | Use that $$\frac{\sqrt{|x+1|}-1}{x}=\frac{|x+1|-1}{x(\sqrt{|x+1|}+1)}$$ |
In triangle ABC, O is the circumcentre and H is the orthocentre. If the circle BOC passes through H, prove that angle A = 60 | As I see it, the circle BOC is the circumcircle of triangle BOC. We have four points $B$, $H$, $O$, $C$ on the same circle, so $\angle BHC = \angle BOC$.
Note that $\angle BHC = 180^\circ - \angle A$ and $\angle BOC = 2 \angle A$.
Summing up, we have $2\angle A = 180^\circ - \angle A$, or $\angle A = 60^\circ$. |
Recurrence relation for the determinant of a tridiagonal matrix | The recurrence is obtained by developing the determinant along the last column (or, equivalently, along the last row). |
$x,y$ and $z$ are consecutive integers, $\frac {1}{x}+\frac {1}{y}+\frac {1}{z}$... | Write $x = y-1$ and $z = y+1$. Then we have $$\frac{1}{y-1} + \frac{1}{y} + \frac{1}{y+1} = \frac{2y}{y^2 - 1} + \frac{1}{y} = \frac{3y^2 - 1}{y^3 - y} > \frac{1}{45}$$ Since $3y^2 - 1< 3y^2$, this implies $$\frac{3y}{y^2 - 1} = \frac{3y^2}{y^3 - y} > \frac{1}{45} \iff 135y > y^2- 1 \iff y-\frac{1}{y} < 135$$ As $y$ is assumed to be a positive integer, it follows that $y\le 135$. From here, you can check that $y=135$ certainly works for the initial inequality, so the largest possible value of $x+y+z = 3y$ is $3\cdot 135 = 405$. |
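An exact-arithmetic check of the boundary cases in the previous answer (my addition, using Python's `fractions`): the sum exceeds $\frac{1}{45}$ at $y = 135$ but not at $y = 136$.

```python
from fractions import Fraction

def s(y):
    # 1/(y-1) + 1/y + 1/(y+1) in exact rational arithmetic
    return Fraction(1, y - 1) + Fraction(1, y) + Fraction(1, y + 1)

ok_135 = s(135) > Fraction(1, 45)
ok_136 = s(136) > Fraction(1, 45)
print(ok_135, ok_136)  # True False
print(3 * 135)         # 405, the maximal x + y + z
```
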
Distribution of a product of normal distributions : why am I wrong? | Firstly, your notation is confused. You should not use a probability mass function when you mean a probability density function.
Secondly, the convolution is based on the chain rule of differentiation and the law of total probability.
$$\begin{align}f_{XY}(w) ~&=~ \int_\Bbb R \underset{\text{Jacobian Determinant}}{\underbrace{\left\lVert\frac{\partial(s,w/s)}{\partial (s,w)}\right\rVert} f_X(s)}f_Y(w/s)\operatorname d s \\[1ex] &=~ \frac 1{2\pi} \int_\Bbb R\lvert s^{-1}\rvert \exp(-s^2/2)\exp(-w^2/2s^2)\operatorname d s \\[1ex] &=~\frac 1\pi \int_0^\infty s^{-1}\,\mathsf e^{-(s^2+w^2/s^2)/2}\operatorname d s\end{align}$$
Thirdly, that's not going to resolve into elementary functions.
(Hint Topic: Modified Bessel Function of the Second Kind.) |
Some equivalences for Ideals of the ring of real valued functions | For $2) \implies 3)$, assume that $I_N$ is prime, but $N$ has two distinct points $x, y$. Construct functions $f, g$ that don't belong to $I_N$, but their product does. (Hint: let $f$ be zero on $x$, but not on $y$).
For $3) \implies 1)$, note that if $A \subset B$, then $I_B \subset I_A$, and the latter inclusion is strict whenever the former is. |
how to derive a joint pdf | It is not possible to find the joint density $f(x,y)$ from $f_X$ and $f_Y$ in general. When $X$ and $Y$ are independent $f(x,y)=f_X(x)f_Y(y)$. |
Is this a hyperbolic PDE? | You are right, but $\Delta(x, y)=b^2-4ac$. If we have a second-order PDE
$$a(x, y)u_{xx}+b(x, y)u_{xy}+c(x, y)u_{yy}+ F(x, y, u, u_x, u_y)=0$$ (called a semilinear equation), then we can classify the equation according to the sign of $\Delta(x, y)$: it is hyperbolic on $\Omega$ if $\Delta(x, y)>0$, parabolic on $\Omega$ if $\Delta(x, y)=0$, and elliptic on $\Omega$ if $\Delta(x, y)<0$, for all $(x, y)\in\Omega$. |
Calculating the possibility of winning numbers | The number of possible outcomes is
$$\binom{49}{6}$$
which is the number of ways to select a subset of six of the $49$ numbers.
The number of ways to select exactly $k$ winning numbers is the number of ways we can select $k$ of the six winning numbers and $6 - k$ of the $43$ other numbers in the draw, which is
$$\binom{6}{k}\binom{43}{6 - k}$$
Hence, the probability of selecting exactly $k$ of the winning numbers is
$$\frac{\dbinom{6}{k}\dbinom{43}{6 - k}}{\dbinom{49}{6}}$$
In particular, the probability of selecting all six winning numbers is
$$\frac{\dbinom{6}{6}\dbinom{43}{0}}{\dbinom{49}{6}} = \frac{1}{\dbinom{49}{6}}$$
which makes sense since there is only one way to correctly select all six numbers.
For $k = 2$, we get
$$\frac{\dbinom{6}{2}\dbinom{43}{4}}{\dbinom{49}{6}}$$
I will leave the calculations for $k = 0, 1, 3, 4, 5$ to you.
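The remaining cases are easy to tabulate numerically (my code, assuming the 6-of-49 setup above):

```python
from math import comb

total = comb(49, 6)  # number of equally likely draws
# probability of matching exactly k of the six winning numbers
probs = {k: comb(6, k) * comb(43, 6 - k) / total for k in range(7)}

print(total)     # 13983816
print(probs[6])  # 1 / C(49,6), about 7.15e-08
print(probs[2])  # about 0.1324
```
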
The events $k = 0, 1, 2, 3, 4, 5, 6$ are mutually exclusive and exhaustive, so their probabilities should add up to $1$. |
Does Lagrange's theorem sometimes imply that there are subgroups? | The simplest counterexample is that $A_4$, the group of even permutations on 4 items, has 12 elements, but contains no subgroups of order 6.
I believe the general question of when an $mn$-group contains a subgroup of order $n$ is still an open research area. Cauchy's theorem guarantees that if $p$ is prime, then a group of order $pn$ contains an element of order $p$, and therefore a cyclic subgroup of order $p$. So for your example, every group of order 15 does contain subgroups of orders 3 and 5. (And, obviously, 1.) As N.S. noted, the Sylow theorems are important here.
This page discusses the issue in more detail, including some other circumstances when the converse of Lagrange's theorem holds. I think the Wikipedia article on Lagrange's theorem also discusses the converse. |
Green function of 1D Laplacian | To show this result, we apply a Fourier sine series expansion to the function
$$
G(x) = \begin{cases} x(1-y) & x < y \\ y(1 - x) & x > y \end{cases}.
$$
(Note that this is reversed from the original question, but I'm pretty sure it's correct.)
We want to find the expansion in terms of sine waves:
$$
G(x) = \sum b_n \sin (\pi n x)
$$
Over the range $[0,1]$, we have
$$
\int_0^1 \sin (\pi n x) \sin (\pi m x) \, dx = \frac{1}{2} \delta_{mn},
$$
and so
$$
\int_0^1 G(x) \sin (\pi m x) \, dx = \sum_n \left[ b_n \int_0^1 \sin (\pi n x) \sin (\pi m x) \, dx \right] = \frac{1}{2} b_m.
$$
So we can calculate the coefficients of the series as
$$
b_n = 2 \int_0^1 G(x) \sin (\pi n x) \, dx.
$$
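As a numerical cross-check (my addition): for a fixed $y$, a midpoint-rule evaluation of $b_n = 2\int_0^1 G(x)\sin(\pi n x)\,dx$ agrees with $b_n = 2\sin(n\pi y)/(n\pi)^2$, the closed form you should obtain when you carry out the integral.

```python
from math import sin, pi

y = 0.3  # fixed source point

def G(x):
    return x * (1 - y) if x < y else y * (1 - x)

def b_numeric(n, steps=200000):
    # midpoint-rule approximation of 2 * int_0^1 G(x) sin(n pi x) dx
    h = 1.0 / steps
    return 2 * h * sum(G((i + 0.5) * h) * sin(n * pi * (i + 0.5) * h)
                       for i in range(steps))

errs = [abs(b_numeric(n) - 2 * sin(n * pi * y) / (n * pi) ** 2)
        for n in (1, 2, 3)]
print(all(e < 1e-8 for e in errs))  # True
```
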
I'll leave it to you to do from here. |
Number of distinct labelings of a binary tree | The correct "formula" is
$$ \text{Number of distinct labellings of } T = \frac{n!}{|\operatorname{Aut}(T)|}. $$
Now what on Earth is $\operatorname{Aut}(T)$? This is the automorphism group of the tree. For example, for the first tree, if we switch the two leaves there is no effect. The operations of "swapping the two leaves" and "doing nothing" (the identity function) are automorphisms of the tree.
If we label the tree as
1
|
2 - 4 - 3
then we have $6$ possible permutations that keep the tree structure intact. Namely, we can:
do nothing
swap $1$ and $2$
swap $2$ and $3$
swap $3$ and $1$
rotate left: $1 \mapsto 2 \mapsto 3 \mapsto 1$
rotate right: $1 \mapsto 3 \mapsto 2 \mapsto 1$
These 6 operations are the automorphism group of the tree: the entire set of automorphisms. It is a "group" because we can compose two automorphisms by applying one after the other and get a new automorphism. Since there are 6 automorphisms, there are $4!/6 = 4$ distinct labellings.
Example 2:
Consider the tree with vertices $1,\dots,n - 1, n$ and edges $1 \sim n$, $2 \sim n$ through $n - 1 \sim n$. This is the same picture as the first tree but with $n - 1$ vertices attached to the centre rather than $3$.
The automorphism group of this tree is the symmetric group $S_{n - 1}$ which consists of all permutations of $1,2,\dots, n - 1$ which are obtained from shuffling the symbols around.
For example with $n - 1 = 7$, I can take the permutation $5137642$ which corresponds to the function that sends $1$ to $5$, $2$ to $1$, $3$ to $3$, $4$ to $7$, $5$ to $6$, $6$ to $4$ and $7$ to $2$.
The symmetric group has $(n - 1)!$ elements (by definition in fact). To see that this number is $(n - 1)(n - 2) \cdots 2 \cdot 1$ notice that there are $n - 1$ ways to pick the first element of the permutation and having done that, there are $n - 2$ ways to pick the second, and so on. Thus there are
$$ \frac{n!}{(n - 1)!} = n $$
distinct ways to label this tree.
Example 2':
For $n = 5$ above you are looking at all ways to permute $1,2,3,4$. You can
Swap a pair (6 pairs)
Swap two pairs (swap a pair then the remaining two vertices, this double counts so you get 6/2 = 3)
Identity (1)
Choose three vertices and then rotate among those three (4 choices of three vertices * two directions to rotate = 8)
6 more which permute $1,2,3,4$ in a cyclic fashion:
$1 \mapsto 2 \mapsto 3 \mapsto 4 \mapsto 1$
$1 \mapsto 2 \mapsto 4 \mapsto 3 \mapsto 1$
$1 \mapsto 3 \mapsto 2 \mapsto 4 \mapsto 1$
$1 \mapsto 3 \mapsto 4 \mapsto 2 \mapsto 1$
$1 \mapsto 4 \mapsto 2 \mapsto 3 \mapsto 1$
$1 \mapsto 4 \mapsto 3 \mapsto 2 \mapsto 1$
for a total of 6 + 3 + 1 + 8 + 6 = 24
Example 3:
Consider the cycle on $n \ge 3$ vertices. This isn't a tree but the same formula applies. The automorphism group here is the dihedral group.
This automorphism group consists of $n$ rotations and $n$ reflections for a total of $2n$ elements (draw this for $n = 3, 4, 5$ to see what's going on). Thus there are
$$ \frac{n!}{2n} = \frac{(n - 1)!}{2} $$
distinct ways to label the cycle graph. |
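All of these examples can be sanity-checked by brute force: relabel the vertices in all $n!$ ways and count the distinct edge sets that result. A sketch (the graphs and vertex numbering are my own encoding):

```python
from itertools import permutations

def distinct_labellings(n, edges):
    """Count distinct labelled copies of a graph on vertices 0..n-1."""
    return len({frozenset(frozenset((p[a], p[b])) for a, b in edges)
                for p in permutations(range(n))})

star4 = [(0, 3), (1, 3), (2, 3)]               # the first tree: centre 3, leaves 0,1,2
cycle5 = [(i, (i + 1) % 5) for i in range(5)]  # the cycle on 5 vertices

print(distinct_labellings(4, star4))   # 4  = 4!/|Aut| = 24/6
print(distinct_labellings(5, cycle5))  # 12 = 5!/(2·5)
```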
Derivative of mixed matrix terms with inverse matrix | We can restate user7530's answer in an equivalent way. If $f(G)=KG^{-1}J$, then the derivative is the linear map $Df_G:H\mapsto -KG^{-1}HG^{-1}J$, where $H$ is a square matrix. For 2., if $g(G)=J^T{G^{-1}}^TKG^{-1}J$, then we differentiate a product: $Dg_G:H\mapsto -J^T(G^{-1}HG^{-1})^TKG^{-1}J-J^T{G^{-1}}^TKG^{-1}HG^{-1}J$. If $K$ is a symmetric matrix, then the derivative is $-U-U^T$ where $U=J^T{G^{-1}}^TKG^{-1}HG^{-1}J$. |
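A finite-difference sanity check of the first formula, with arbitrary $2\times 2$ sample matrices (pure Python; a numerical check, not a proof):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def inv(A):                                # inverse of a 2x2 matrix
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]

def add(A, B, c=1.0):                      # A + c·B
    return [[A[i][j] + c * B[i][j] for j in range(2)] for i in range(2)]

K = [[2.0, 1.0], [0.0, 3.0]]               # arbitrary sample matrices
J = [[1.0, 0.0], [2.0, 1.0]]
G = [[1.0, 0.5], [0.25, 2.0]]
H = [[0.3, -0.7], [1.1, 0.2]]
t = 1e-6

f = lambda G: matmul(K, matmul(inv(G), J))          # f(G) = K G⁻¹ J
fd = add(f(add(G, H, t)), f(add(G, H, -t)), -1.0)   # f(G+tH) - f(G-tH)
fd = [[x / (2 * t) for x in row] for row in fd]     # central difference
exact = matmul(K, matmul(inv(G), matmul(H, matmul(inv(G), J))))
err = max(abs(fd[i][j] + exact[i][j]) for i in range(2) for j in range(2))
print(err < 1e-4)  # True: finite difference matches -K G⁻¹ H G⁻¹ J
```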
Is this a misuse of the term "probability space"? | Let us assume that $\Omega$ is finite or countable.
I think the main confusion here is the relationship between a probability measure and a random variable. A probability measure $\mathbb P$ is a function $\mathbb P:\mathcal F \rightarrow [0, 1]$ for some $\sigma$-algebra. A random variable, on the other hand, is a function $X: \Omega \rightarrow S$ where $S$ is some set, not $[0, 1]$ in general. I.e. it maps outcomes to some other values, whereas the probability measure assigns probabilities to events in the $\sigma$-algebra.
The distribution:
You are correct in thinking that a random variable can be characterised by its distribution, but the two things are not the same. The distribution assigns probabilities to the events that $X$ takes certain values. The distribution $P^X$ of $X$ is defined by
$$
P^X(A) = \mathbb P(\lbrace \omega: X(\omega) \in A \rbrace)
$$
for $A \subseteq S$. Let $S'$ be the image of $\Omega$ under $X$. Indeed, the distribution of $X$ defines a probability measure on $(S', 2^{S'})$, but not on the original space $(\Omega, 2^{\Omega})$.
So you see, the distribution of $X$ defines a probability measure on some other space than $(\Omega, 2^{\Omega})$. Meanwhile you can define alternative probability measures on the measurable space $(\Omega, 2^{\Omega})$ as much as you want.
So when they say "Let $\mathbb Q$ and $\mathbb P$ be defined on the same probability space" they mean two different probability measures on the measurable space $(\Omega, 2^{\Omega})$.
This is of course not an exhaustive treatment of the problem, but hope it serves to clarify some parts of it. |
Problem with gcd and lcm | If I have understood the problem correctly, given
$$20=\gcd(8n^2 + 6n; 8n^2 + 10n) =2n\gcd(4n + 3; 4n + 5)=2n\gcd(4n + 3; 2)$$
we have to find
$$\text {lcm}(n^2 + n; n^2 + 3n)=n\text {lcm}(n+1; n+3).$$
The first line tells us something about $n$...
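The gcd simplification in the first display can be verified numerically (a quick check in Python):

```python
from math import gcd

# Check gcd(8n² + 6n, 8n² + 10n) = 2n · gcd(4n + 3, 2) for many n.
ok = all(gcd(8*n*n + 6*n, 8*n*n + 10*n) == 2 * n * gcd(4*n + 3, 2)
         for n in range(1, 500))
print(ok)  # True
```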
P.S. I read from your profile that you are an 8th-grade student. So please ask if you need further help. |
Prove that $\| \frac{1}{x}\int_{0}^{x} f(s) ds \|_2 \leq 2\|f\|_2$ | We can use the other inequality to get
$$\left\lVert x \mapsto \frac{1}{x} \int_0^x f(s) \, \mathrm{d} s \right \rVert_2^2 \leq \int \limits_0^\infty \frac{2 \sqrt{x}}{x^2} \int \limits_0^x \sqrt{s} |f(s)|^2 \, \mathrm{d} s \, \mathrm{d} x = 2 \int \limits_0^\infty \frac{1}{x^{3/2}} \int \limits_0^x \sqrt{s} |f(s)|^2 \, \mathrm{d} s \, \mathrm{d} x \, .$$
Changing the order of integration (Tonelli's theorem) we find
$$ \left\lVert x \mapsto \frac{1}{x} \int_0^x f(s) \, \mathrm{d} s \right \rVert_2^2 \leq 2 \int \limits_0^\infty \sqrt{s} |f(s)|^2 \int \limits_s^\infty \frac{1}{x^{3/2}} \, \mathrm{d} x \, \mathrm{d} s = 4 \int \limits_0^\infty |f(s)|^2 \, \mathrm{d} s = 4 \lVert f \rVert_2^2$$
as desired. |
2 test for convergence problems: $\sum^{\infty}_{k=1}5^k/(3^k+4^k)$and $\sum^{\infty}_{n=1}\tan\left(1/n\right)$ | For the first problem
$$\frac{\frac{5^{k+1}}{3^{k+1}+4^{k+1}}}{\frac{5^k}{3^k+4^k}}$$
$$=\frac{5^{k+1}(3^k+4^k)}{5^k(3^{k+1}+4^{k+1})}$$
$$=5\cdot\frac{\left(\frac34\right)^k+1}{3\cdot\left(\frac34\right)^k+4}$$
So, using the Ratio Test: $$\lim_{k\to\infty}\frac{\frac{5^{k+1}}{3^{k+1}+4^{k+1}}}{\frac{5^k}{3^k+4^k}}=\frac54>1,$$ so the series diverges. |
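A quick numerical check of that limit (the index $50$ is an arbitrary choice):

```python
def a(k):
    return 5**k / (3**k + 4**k)

ratio = a(51) / a(50)
print(ratio)  # ≈ 1.25, matching the Ratio Test limit 5/4
```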
$\left| (4 \mathbb{N} -1) \cap \mathbb{P} \right| \ = \ \infty$ where $\mathbb{P}$ is the set of prime numbers. | Suppose $p_1,\cdots,p_n$ are all of them.
Then consider $N=4p_1\cdots p_n-1$. Since $N\equiv 3\pmod 4$, while a product of primes each $\equiv 1\pmod 4$ is itself $\equiv 1\pmod 4$, $N$ must have a prime factor $\equiv 3\pmod 4$; but no $p_i$ divides $N$, a contradiction. |
Problem with some Bayesian posterior pdf functions. | It is very simple.
I'm aware of the basics like Posterior∝Likelihood×Prior
That is all you have to do to finish your exercise:
Prior density (skip all the quantities not depending on $\theta$ and substitute = with $\propto$):
$$\pi(\theta)\propto \theta^{a-1}e^{-b \theta}$$
model density:
$$p(y|\theta)=\theta e^{-\theta y}$$
Posterior density:
$$\pi(\theta|y)\propto \theta^a e^{-\theta(b+y)}$$
where you immediately recognize the kernel of a $\mathrm{Gamma}(a+1;\,b+y)$ distribution
Just for the sake of completeness, you can write the exact posterior density, with the normalization constant required
$$\pi(\theta|y)=\frac{(b+y)^{a+1}}{\Gamma(a+1)}\theta^{(a+1)-1}e^{-\theta(b+y)}$$ |
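As a numerical sanity check of the normalization (the sample values for $a$, $b$, $y$ are arbitrary):

```python
import math

a, b, y = 2.0, 1.5, 0.7                      # arbitrary sample values

def posterior(theta):                        # the normalized density above
    return (b + y)**(a + 1) / math.gamma(a + 1) * theta**a * math.exp(-theta * (b + y))

# Midpoint Riemann sum over [0, 20]; the tail beyond 20 is negligible here.
h = 1e-3
total = h * sum(posterior((k + 0.5) * h) for k in range(20000))
print(round(total, 4))  # ≈ 1.0
```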
How to formalize my argument for existence of $\lim \limits_{\epsilon \to 0^{+}} \int_{\epsilon}^{c} \frac{\sin x}{x} dx$ | As $\lim_{x\to 0}\frac{\sin x}{x}=1$ we find that the singularity at $x=0$ is removable. As the only discontinuities of $\frac{\sin x}{x}$ on $\left[0,c\right]$ are removable for all $c\geq 0$ we find that the integral exists. |
Direct sum, product, sum and intersection of ideals | There's no reason for the equality $I+J=IJ.$ If you want a specific example, consider $I=J=(2)\subset \mathbb Z.$ Then $I+J=(2)$ whereas $IJ=(4).$ In general we have the relationship $IJ\subset I\cap J\subset I+J.$ The containment $IJ\subset I\cap J$ is actually an equality if $I$ and $J$ are comaximal, that is, if $I+J=R$ is the unit ideal (as long as $R$ is commutative).
In your particular example it is true that $I^2=(2).$ However, $1\in R,$ so $1=1^2\in R^2.$ Thus $R^2=R$ as ideals.
This is all contained in chapter 7 of Dummit and Foote's book. |
How do I solve these systems of equations | Note that from the first equation (part a) we have $l_2 = 6-l_1$. Now plug the $l_2$ into the other equation: $l_1^2 + (6-l_1)^2 = 20$. This is a quadratic which we solve: $$l_1^2+(36-12l_1+l_1^2)=20 \\ \Rightarrow 2l_1^2 - 12l_1 + 16 = 0 \\ \Rightarrow l_1^2 - 6l_1 + 8 = 0 \\ \Rightarrow (l_1 - 4)(l_1-2) = 0$$ Therefore $l_1 = 2$ and $l_2=6-l_1=4$ or $l_1=4$ and $l_2=6-l_1=2$. That just means one square has side length 2 and the other has side length 4.
You can apply the same method (called substitution) for the second set of equations. |
Providing counterexamples for a false statement | It's always good to be careful with definitions.
A standard definition for "disjoint family" is:
A family of sets $\{A_i\}_{i\in I}$ is disjoint if $$\bigcap_{i\in I} A_i=\emptyset$$
So, following the standard definition, your statement says:
If $A\cap B\cap C=\emptyset$, then at least one of $A\cap B, B\cap C$ or $A\cap C$ has no element.
This statement is not true: take $A=\{1,2\}$, $B=\{2,3\}$, $C=\{1,3\}$. Then $A\cap B\cap C=\emptyset$, yet each of $A\cap B$, $B\cap C$ and $A\cap C$ is nonempty. |
Simple exercise in fundamental groups and identification degree | Let $(\Bbb S^1,\times)$ be the topological group with identity element $1$, where $\times$ denotes the usual multiplication of complex numbers. For any two continuous functions $\alpha,\beta:[0,1]\to \Bbb S^1$, define $\alpha\bullet\beta:[0,1]\to \Bbb S^1$ as $\alpha\bullet\beta(t)=\alpha(t)\times\beta(t)$ for all $t\in [0,1]$. Let $C_1$ be the constant loop in $\Bbb S^1$ based at $1$. Also, $*$ stands for concatenation of two paths.
For two loops $\gamma,\delta$ in $\Bbb S^1$ based at $1$ we have $\gamma*C_1\simeq_{\text{rel }1}\gamma\simeq_{\text{rel }1} C_1*\gamma$ as $C_1$ represents the identity element of $\pi_1(\Bbb S^1,1)$, so that $$\gamma*\delta=(\gamma*C_1)\bullet (C_1*\delta)\simeq_{\text{rel }1}\gamma\bullet \delta\simeq_{\text{rel }1}(C_1*\gamma)\bullet (\delta*C_1)=\delta*\gamma.$$
In the above, to check the "$=$" just plug in an arbitrary $t\in [0,1]$ on both the left- and right-hand sides. So, we completed part $(i)$.
Note that $\deg:\pi_1(\Bbb S^1,1)\to \Bbb Z$ is a group isomorphism.
Also, $\pi_1\big(\Bbb S^1\times \Bbb S^1,(1,1)\big)\cong\pi_1(\Bbb S^1,1)\times \pi_1(\Bbb S^1,1)$ via projections induced maps. So, write an element of $\pi_1\big(\Bbb S^1\times \Bbb S^1,(1,1)\big)$ as $\big([\omega_1],[\omega_2]\big)$ for some $[\omega_1],[\omega_2]\in \pi_1(\Bbb S^1,1)$. Then, $$\Phi_\#\big([\omega_1],[\omega_2]\big)=[\omega_1\bullet\omega_2]=[\omega_1*\omega_2].$$ So, using degree isomorphism $$\Phi_\#\big(\deg[\omega_1],\deg[\omega_2]\big)=\deg[\omega_1*\omega_2]=\deg[\omega_1]+\deg[\omega_2].$$ |
What is $\mathbb Z[t]/(t,5)$? | Using 3rd isomorphism theorem $$\mathbb Z[t]/(5,t)\cong (\mathbb Z[t]/(t))/((5,t)/(t))$$
Now,
$$\mathbb Z[t]/(t)\cong \mathbb Z$$
and $$(5,t)/(t)\cong (5).$$
Therefore $$\mathbb Z[t]/(5,t)\cong \mathbb Z/(5)=\mathbb F_5.$$ |
Integral and its limit | Expand $\cos^{\frac1n}x$ in powers of $\frac{1}{n}$ using that
$$\cos^{\varepsilon}x=1+\varepsilon\ln\cos x+O\left(\varepsilon^2\right).$$
Now the limit becomes equal to
$$\lim_{n\to \infty}n\int_{0}^{\pi/2}\left(1-\cos^{\frac1n}x\right)dx=-\int_0^{\pi/2}\ln\cos x\,dx=\frac{\pi\ln 2}{2}.$$
P.S. The integral can also be evaluated in terms of gamma functions (see here), but this is of little help for evaluating the limit. |
In a connected metric space, to prove $d(s,y)<\epsilon$, where $y\in B(a,r+\epsilon)\setminus B(a,r) $ and $s$ is some point in boundary of $B(a,r)$. | Let $S=\left\{\langle x,y\rangle\in\Bbb R^2:x^2+y^2=1\text{ and }x\ne 1\right\}$, and let $X=\Bbb R^2\setminus S$ with the usual metric; because $\langle 1,0\rangle\in X$, $X$ is still connected. Let $a=\langle 0,0\rangle$ and $r=1$, and for any $\epsilon>0$ let $y=\left\langle -1-\frac{\epsilon}2,0\right\rangle$; then $y\in B(a,r+\epsilon)\setminus B(a,r)$, but the only point in the boundary of $B(a,r)$ is $\langle 1,0\rangle$, and $d(y,\langle 1,0\rangle)=2+\frac{\epsilon}2$, which is greater than or equal to $\epsilon$ whenever $\epsilon\le 4$. |
logic equivalence question? | You made a small mistake as after the Conditional Law (and DeMorgan) you should get:
$$(p \land ¬q) \lor (q \land ¬p) \lor r $$
That is, you get:
$$((p \rightarrow q) \land (q \rightarrow p)) \rightarrow r \Leftrightarrow \text{ (Conditional Law)}$$
$$\neg ((\neg p \lor q) \land (\neg q \lor p)) \lor r \Leftrightarrow \text{ (DeMorgan)}$$
$$\neg (\neg p \lor q) \lor \neg (\neg q \lor p) \lor r \Leftrightarrow \text{ (DeMorgan x 2)}$$
$$(p \land \neg q) \lor (q \land \neg p) \lor r \Leftrightarrow \text{ (Commutation)}$$
$$(q \land \neg p) \lor(p \land \neg q) \lor r \Leftrightarrow \text{ (Commutation x 2)}$$
$$(\neg p \land q) \lor(\neg q \land p) \lor r$$ |
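Since these are propositional formulas, the equivalence can be confirmed by checking all $8$ truth assignments (note that for Python booleans, `p <= q` computes the implication $p \rightarrow q$):

```python
from itertools import product

lhs = lambda p, q, r: not ((p <= q) and (q <= p)) or r      # ((p→q)∧(q→p))→r
rhs = lambda p, q, r: ((not p) and q) or ((not q) and p) or r

equivalent = all(lhs(*v) == rhs(*v) for v in product([False, True], repeat=3))
print(equivalent)  # True
```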
Weierstrass $\wp$ function doubly periodic | You don't need complex analysis for this one. Just write down the series definition for $\wp(u+\omega)$. If you understand what is summed over, it should be pretty clear that you are summing "the same terms" as before.
Once you have this idea, making a rigorous proof is an easy exercise in analysis. |
What is the most complete formally verified "foundations of mathematics" compilation available for free on the web? | I doubt that a succinct unbiased answer can be given, so I'll try to give a broad overview of what's out there instead.
Basically, there are a number of different approaches to logic and foundations in general, but for the vast majority of everyday mathematical statements/purposes, most are suitable. So it's hard to say which ones are "more complete" than others, if they all can do the job. For examples of getting into the weeds about differences across theorem provers, see Proof strength of Calculus of (Inductive) Constructions on MO or Importing (translating) Mizar into Coq (Axiomatic set theory into constructive type theory) here on MathSE.
Depending on where exactly you draw the line, some proportion of the theorem checkers and similar on Freek Wiedijk's list are up to the task of verifying an approach to foundations. Currently, he lists 44 "first order provers", 38 "proof checkers", 26 "tactic provers", and 43 "theorem provers", where his definitions of these categories are given on this explanation page. I don't honestly know what proportion of these have a publicly available foundation for math. And as you noted, some (like Coq) have multiple.
Since you ask about the "best" one, a proxy might be to measure how many notable theorems have been verified with a given system (though in a case like Coq, different theorems might have been on top of different foundations). Wiedijk has a list of 100 notable theorems and which have been verified in major systems, where major means they have verified many of the theorems or a theorem not covered by the others. At the time of writing, the major contenders are HOL Light, Isabelle, Metamath, Coq, Mizar, Lean, and ProofPower.
All of those major systems have a variety of different approaches to foundations, representing theorems/proofs, verifying theorems, etc. The goals of a project like the Metamath Proof Explorer are very different than the goals of, say, most people trying to verify things with Coq. |
Use induction: postage $\ge 64$ cents can be obtained using $5$ and $17$ cent stamps. | You can exhibit combinations of stamps for $64,65,66,67$ and $68$ cents as your base case, and then use the inductive step of adding a $5$ cent stamp for any greater amount.
The inductive phase addresses values of $69$ cents or more and can assume that the previous $5$ lesser values all have valid combinations of stamps.
More explicitly:
Base case:
$n=64$: $2\times$ $17$c stamps and $6\times$ $5$c stamps.
$n=65$: you can fill these in
$n=66$:
$n=67$:
$n=68$:
Inductive step: For any $k\ge 69$, assume that each of the five preceding values $n$ cents can be made with $a_n\times$ $17$c stamps and $b_n\times$ $5$c stamps.
Then for $k$ cent value set $a_k = a_{k-5}$ and $b_{k} = b_{k-5}+1$, giving:
$\begin{align}
17 a_k + 5 b_k &= 17a_{k-5} + 5(b_{k-5}+1)\\
&= 17a_{k-5} + 5b_{k-5} + 5\\
&= (k-5) + 5 = k\\
\end{align}$
as required |
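A brute-force search also confirms the claim (this is just a check, not part of the induction):

```python
def stamps(n):
    """Return (a, b) with 17a + 5b = n and a, b >= 0, or None if impossible."""
    for a in range(n // 17 + 1):
        if (n - 17 * a) % 5 == 0:
            return a, (n - 17 * a) // 5
    return None

all_ok = all(stamps(n) is not None for n in range(64, 500))
print(stamps(64), all_ok)  # (2, 6) True
```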
How to rigorously prove from set theory that piecewise functions exist? | Existence: $h$ is the set $(f \cap (S_1 \times T)) \cup (g \cap (S_2 \times T))$.
Uniqueness: if $h_1,h_2$ are two such functions, then $h_1 \cap (S_i \times T)=h_2 \cap (S_i \times T)$ for $i=1,2$ so $h_1=h_1 \cap (S \times T)=h_2 \cap (S \times T)=h_2$. |
Jacobian matrix of $F:\mathbb{R}^3 \times \mathbb{R}^3 \to \mathbb{R}^3$ | Since you're dealing with the identity chart, this becomes a simple multivariable calculus problem, so you can forget all about manifolds. $F$ is a bilinear map so the derivative is very simple. Bilinear maps should be thought of as products, so we're just going to apply the product rule here: given a point $(u,v)\in\Bbb{R}^3\times \Bbb{R}^3$ and $(\xi,\eta)\in\Bbb{R}^3\times \Bbb{R}^3$, we have
\begin{align}
DF_{(u,v)}[\xi,\eta]&= F(u,\eta)+F(\xi,v)\\
&=u \times \eta + \xi\times v
\end{align}
(The really short way of summarizing all of this is to say $d(u\times v)=u\times dv+ du\times v$). This is now a linear function of $(\xi,\eta)$, so finding the matrix representation means we successively plug in basis vectors. For example, calculate the following six quantities:
$DF_{(u,v)}[(1,0,0,0,0,0)] = u\times (0,0,0)+(1,0,0)\times v = (1,0,0)\times v=(0,-v_3,v_2)$
$DF_{(u,v)}[(0,1,0,0,0,0)]$
$DF_{(u,v)}[(0,0,1,0,0,0)]$
$DF_{(u,v)}[(0,0,0,1,0,0)]$
$DF_{(u,v)}[(0,0,0,0,1,0)]$
$DF_{(u,v)}[(0,0,0,0,0,1)]$
I leave the other computations to you. For each of these you should get certain elements of $\Bbb{R}^3$ as I've demonstrated for the first one. Now, take these $6$ elements of $\Bbb{R}^3$ and arrange them in columns to form a $3\times 6$ matrix. That is the matrix representation of $DF_{(u,v)}$ in terms of standard bases, and equivalently it is also the matrix representation of $F_{*,(u,v)}$ in terms of the corresponding bases. |
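As a sanity check, a central-difference approximation of the Jacobian at an arbitrary sample point $(u,v)$ matches the formula $DF_{(u,v)}[\xi,\eta]=u\times\eta+\xi\times v$ column by column:

```python
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

u = (1.0, 2.0, 3.0)                 # arbitrary sample point
v = (0.5, -1.0, 4.0)
h = 1e-6

def column(col):                    # column `col` of the 3×6 Jacobian at (u, v)
    w = list(u + v)
    wp, wm = w[:], w[:]
    wp[col] += h
    wm[col] -= h
    Fp, Fm = cross(wp[:3], wp[3:]), cross(wm[:3], wm[3:])
    return [(Fp[i] - Fm[i]) / (2 * h) for i in range(3)]

jac = [column(c) for c in range(6)]           # the six columns, each in R³
# By DF[ξ,η] = u×η + ξ×v: column 0 is (1,0,0)×v, column 3 is u×(1,0,0).
print([round(x, 6) for x in jac[0]])  # [0.0, -4.0, -1.0]
print([round(x, 6) for x in jac[3]])  # [0.0, 3.0, -2.0]
```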
What is non-MDS codes? | Well, by definition an MDS code is a linear code that achieves the Singleton bound (so we have equality in the inequality).
So it stands to reason that a non-MDS code is one where the Singleton bound is strict, so $$A_q(n,d) < q^{n-d+1}$$
So there must be some square submatrix of your $P$ that is singular, according to the characterisation theorem quoted on the linked page. So you can make your own example that way. |
Set notation confusion (Empty Sets) | As you say, $\varnothing$ is the empty set; you can indeed represent it as $\{\}$.
$\{\varnothing\}$ is a set with one member; that member is the empty set. If you think informally of a set as a box, $\varnothing$ is an empty box, and $\{\varnothing\}$ is a box that contains an empty box and nothing else. You could write it $\{\{\}\}$.
$\{\varnothing,\{\varnothing\}\}$ is a set with two elements; one of those elements is the empty set, and the other one is the set whose only element is the empty set. In the box metaphor $\{\varnothing,\{\varnothing\}\}$ is a box that contains two other boxes; one of those boxes is empty, and the other one contains an empty box. You could write this set $$\bigg\{\{\},\Big\{\{\}\Big\}\bigg\}\;,$$ where I’ve used different sizes of braces to make it easier to see which ones match. |
Finding the limit: $\lim_{x\to \infty}\frac{1}{2}x\sin {\frac{180(x-2)}{x}}$ | You really shouldn't work in degrees; more specifically, the sin function itself is defined (despite what you might have learned in high school) with 'radian' arguments. Formulae like $e^{ix}=\cos x+i\sin x$, or $\frac{d}{dx}\sin x=\cos x$, rely on it. There's another formula that relies on it that's the critical one here: $\lim_{x\to 0}\frac{\sin x}x=1$. (Incidentally, another way of thinking about this might be that since angles are 'dimensionless', unlike length or spans of time or mass, $180^\circ$ is literally just a fancy way of writing '$\pi$')
Now, writing your function 'properly', you have $f(x)=\frac12x\sin\left(\pi(1-\frac2x)\right)$ $= \frac12 x\sin(\pi-\frac{2\pi}{x})$. Using the symmetry of the $\sin$ function, this is equal to $\frac12x\sin(\frac{2\pi}x)$. Now, we can substitute $y=\frac1x$; taking the limit as $x\to\infty$ is the same as taking the limit as $y\to 0$ (technically only from positive $y$, but that's moot here), and so your limit is equal to $\frac12\lim_{y\to 0}\dfrac{\sin(2\pi y)}{y}$. But $\lim_{y\to 0}\frac{\sin(ay)}y$ $= a\lim_{y\to 0}\frac{\sin(ay)}{ay}$ $=a$; this gives your limit as $\frac12\cdot2\pi=\pi$. |
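A quick numerical check (converting degrees to radians for the evaluation):

```python
import math

def f(x):
    return 0.5 * x * math.sin(math.radians(180 * (x - 2) / x))

val = f(1e6)
print(val)  # ≈ π, as x → ∞
```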
Proof of inequality harmonic progression | $\frac{1}{n}<\log\left(\frac{2n+1}{2n-1}\right)=2\operatorname{arctanh}\left(\frac{1}{2n}\right)$ gives that $H_n=\sum_{k=1}^{n}\frac{1}{k}$ is less than $\log(2n+1)$, and the comparison with a piecewise-constant function (or just $\log\left(1+\frac{1}{n}\right)<\frac{1}{n}$) gives that $H_n$ is greater than $\int_{1}^{n+1}\frac{dx}{x}=\log(n+1)$. In particular the least $n$ such that $H_n>4$ is between $\frac{e^4-1}{2}$ and $e^4-1$, hence between $26$ and $54$. Actually such $n$ is $31$. |
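A direct computation confirms that the least such $n$ is $31$, inside the stated range:

```python
import math

H, n = 0.0, 0
while H <= 4:                 # accumulate the harmonic sum until it exceeds 4
    n += 1
    H += 1 / n
print(n, (math.e**4 - 1) / 2, math.e**4 - 1)  # 31, between ≈26.8 and ≈53.6
```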
Invariant of two functions | This problem hinges upon realizing that the set formed by the following $6$ transformations is closed under composition: $$x,\quad \frac 1x, \quad 1-x, \quad \frac 1{1-x},\quad \frac x{x-1},\quad \frac {x-1}x$$
(This is easily checked by direct calculation)
It follows that if we take any function and sum its translates over these six functions we will get a function invariant under both $x\mapsto \frac 1x$ and $x\mapsto (1-x)$ (and the other operations of course).
Taking $f(x)=x$ just returns the constant function $3$, but if we take $f(x)=x^2$ we get $$F(x)=x^2+(1-x)^2+\left(\frac 1x\right)^2+\left(\frac 1{1-x}\right)^2+\left(\frac x{x-1}\right)^2+\left(\frac {x-1}x\right)^2$$ and this function is a non-trivial example of what we are looking for. |
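A numerical check that $F$ is invariant under both generators (the test point is arbitrary, avoiding $0$ and $1$):

```python
def F(x):
    ts = (x, 1 - x, 1 / x, 1 / (1 - x), x / (x - 1), (x - 1) / x)
    return sum(t * t for t in ts)

x = 0.37
invariant = abs(F(x) - F(1 / x)) < 1e-9 and abs(F(x) - F(1 - x)) < 1e-9
print(invariant)  # True
```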
Check that for every natural $n \ge 1$, has: | $$\int_0^1(1-x^2)^ndx=x(1-x^2)^n\bigg|_0^1-\int_0^1xd[(1-x^2)^n]$$
$$=-\int_0^1xn(1-x^2)^{n-1}(-2x)dx$$
$$=2n\int_0^1x^2(1-x^2)^{n-1}dx$$
$$=-2n\int_0^1(1-x^2)^ndx+2n\int_0^1(1-x^2)^{n-1}dx$$
Hence
$$(2n+1)\int_0^1(1-x^2)^ndx=2n\int_0^1(1-x^2)^{n-1}dx$$
$$\int_0^1(1-x^2)^ndx=\frac{2n}{2n+1}\int_0^1(1-x^2)^{n-1}dx$$
For the second part
$$\int_0^1(1-x^2)^ndx=\frac{2n}{2n+1}\int_0^1(1-x^2)^{n-1}dx$$
$$=\frac{2n}{2n+1}\frac{2(n-1)}{2(n-1)+1}\int_0^1(1-x^2)^{n-2}dx$$
$$=\frac{2n}{2n+1}\frac{2(n-1)}{2(n-1)+1}\frac{2(n-2)}{2(n-2)+1}\int_0^1(1-x^2)^{n-3}dx$$
$$\cdots$$
$$=\frac{2n(2n-2)(2n-4)\cdots 2}{(2n+1)(2n-1)(2n-3)\cdots 3}\int_0^1(1-x^2)^0dx$$
$$=\frac{[2n(2n-2)(2n-4)\cdots 2]^2}{(2n+1)!}$$
$$=\frac{2^{2n}(n!)^2}{(2n+1)!}$$ |
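Both the recursion and the closed form can be checked against a midpoint Riemann sum:

```python
import math

def I(n, steps=100000):                     # ∫₀¹ (1-x²)ⁿ dx, midpoint rule
    h = 1.0 / steps
    return h * sum((1 - ((k + 0.5) * h)**2)**n for k in range(steps))

def closed(n):                              # 2^(2n) (n!)² / (2n+1)!
    return 2**(2*n) * math.factorial(n)**2 / math.factorial(2*n + 1)

for n in (1, 2, 5):
    print(n, round(I(n), 8), round(closed(n), 8))
```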
Dual and Second Dual Basis | If I understood your question correctly...
Since the space is finite dimensional, a basis for the dual space is $\left\{ y_{1},y_{2},y_{3}\right\} $ with
$$y_{i}\left(e_{j}\right)=\delta_{i,j}\equiv\begin{cases} 1 & \text{if }i=j\\ 0 & \text{otherwise} \end{cases}.$$
Since $y_{i}$ is linear, $y_{i}\left(x\right)$ (where $x\in\mathbb{R}^{3}$) is determined by the previous equation by writing $x=c_{1}e_{1}+c_{2}e_{2}+c_{3}e_{3}$ and
$$
y_i\left(x\right)=y_{i}\left(c_{1}e_{1}+c_{2}e_{2}+c_{3}e_{3}\right)=c_{1}y_{i}\left(e_{1}\right)+c_{2}y_{i}\left(e_{2}\right)+c_{3}y_{i}\left(e_{3}\right).
$$
Since the dual space is finite-dimensional too, you can continue in
this way to construct a basis for the second dual. See https://en.wikipedia.org/wiki/Dual_space#Finite-dimensional_case. |
Determining the height of a dome given the width and original length. | A circle centred on the $x$-axis, intersecting the $y$-axis at $(0,3)$ and $(0,-3)$, with equation
$$(x-a)^2+y^2 = \pi^2,$$ solves the problem. Using the Pythagorean theorem, $a=\sqrt{\pi^2-9}$. You can check that the arc length is $r\theta = \pi\cdot\frac{8}{\pi} = 8$. |
Consider this grammar | Everything you stated is correct. It is context-sensitive (CS) and unrestricted. |
Compute fourier series of $f(x)=e^{bx}$ on $(-\pi,\pi)$. | Things get simpler if we use complex way. We find that
$$\begin{align*}
a_n+ib_n&=\frac1{\pi}\int_{-\pi}^\pi e^{bx}e^{inx}\mathrm dx \\&=\frac1{\pi(b+in)}\left(e^{(b+in)\pi}-e^{-(b+in)\pi}\right)\\
&=\frac{(b-in)(-1)^n}{\pi(b^2+n^2)}2\sinh(b\pi)\\
&=\frac{(-1)^nb}{\pi(b^2+n^2)}2\sinh(b\pi)-i\frac{(-1)^nn}{\pi(b^2+n^2)}2\sinh(b\pi).
\end{align*}$$ So we have
$$
a_n=\frac{(-1)^nb}{\pi(b^2+n^2)}2\sinh(b\pi),\quad n\ge 1
$$ and
$$
b_n=-\frac{(-1)^nn}{\pi(b^2+n^2)}2\sinh(b\pi) ,\quad n\ge 1.
$$ It looks like you have the wrong sign for $b_n$. |
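A midpoint-rule check of both coefficients (the sample values of $b$ and $n$ are arbitrary):

```python
import math

b, n = 0.5, 2                                # arbitrary sample values

def coeff(trig, steps=200000):               # (1/π) ∫_{-π}^{π} e^{bx} trig(nx) dx
    h = 2 * math.pi / steps
    return (h / math.pi) * sum(
        math.exp(b * (-math.pi + (k + 0.5) * h)) * trig(n * (-math.pi + (k + 0.5) * h))
        for k in range(steps))

a_num, b_num = coeff(math.cos), coeff(math.sin)
a_exact = (-1)**n * b / (math.pi * (b*b + n*n)) * 2 * math.sinh(b * math.pi)
b_exact = -(-1)**n * n / (math.pi * (b*b + n*n)) * 2 * math.sinh(b * math.pi)
print(round(a_num, 6), round(a_exact, 6))
print(round(b_num, 6), round(b_exact, 6))
```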
Boolean Algebra - Tautology for (D or Not D or (anything)) | Yes, it is correct, by the reason you mentioned.
EDIT: In more detail, since disjunction is associative, you end with something of the form $T\vee P$, with $T$ true and $P$ your "anything else". |
Why not, first order logic to DNF conversion? | Both have their uses, and neither is easier to convert to. DNF gives you the truth table of a formula: it shows you exactly which assignments of truth values to atomic formulas make the entire formula true. Converting a formula to CNF, and converting to DNF, are both NP-hard.
One reason why CNF gets more attention: the method of resolution, used in automated theorem-provers, requires conversion to CNF. Another: in complexity theory, one of the most famous and earliest NP-complete problems, 3-SAT, involves satisfiability of CNF formulas with at most 3 literals per clause. The dual problem of the validity of a formula, for which DNF is arguably better suited, clearly involves all assignments, and is co-NP-complete. |
General solution of differential equation of order 3 | You can find the particular solution by doing an integration by parts.
$\frac12 \int_0^t (t-s)^2 e(s)\,ds = \frac12[0-0] + \int_0^t (t-s) E(s)\,ds$, where $E(s)$ is the primitive of $e(s)$ such that $E(0)=0$.
Similarly we have: $\int_0^t (t-s) E(s)\,ds = [0-0] + \int_0^t F(s)\,ds$, where $F(s)$ is the primitive of $E(s)$ such that $F(0)=0$.
And $u(t)=\int_0^t F(s)\,ds$ is a solution of $u'''(t)=e(t)$.
EDIT, to be clearer :
u(t)=$\int_0^t u'(s)ds + u(0)$
u(t)=$\int_0^t (t-s) u''(s)ds + u'(0)t + u(0)$
$u(t)=\frac12 \int_0^t (t-s)^2 u'''(s)\, ds + \frac{u''(0)}{2}t^2 + u'(0)t + u(0) = \frac12 \int_0^t (t-s)^2 e(s)\, ds + \frac{u''(0)}{2}t^2 + u'(0)t + u(0) $ |
Nice ODE that has wildly behaving solutions | The answer is no. Your equation is linear, and the solutions to an $n$-th order linear equation (with $b = 0$) form an $n$-dimensional vector space; see any standard differential equations book for this. You haven't specified how "bad" you would like solutions to be, but a typical minimal requirement for chaos (necessary but not sufficient) is a nonlinear equation of at least third order.
Edit: one of the common examples comes from the Rossler system, see Wikipedia, which is a system of 3 first order equations. From them you can derive a really horrible looking 3rd order equation, but this is not usually done because the system is more agreeable for study. The solution trajectories are chaotic and elegant. |
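To get a feel for the Rossler system mentioned above, here is a minimal RK4 integration (the classic chaotic parameters $a=b=0.2$, $c=5.7$, the step size, and the initial point are my own choices, not from the original answer):

```python
def rossler(s, a=0.2, b=0.2, c=5.7):
    x, y, z = s
    return (-y - z, x + a * y, b + z * (x - c))

def rk4_step(s, h):                     # one classical Runge-Kutta step
    k1 = rossler(s)
    k2 = rossler(tuple(s[i] + h / 2 * k1[i] for i in range(3)))
    k3 = rossler(tuple(s[i] + h / 2 * k2[i] for i in range(3)))
    k4 = rossler(tuple(s[i] + h * k3[i] for i in range(3)))
    return tuple(s[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(3))

s = (1.0, 1.0, 1.0)
for _ in range(20000):                  # integrate to t = 200
    s = rk4_step(s, 0.01)
print(s)                                # a point near the chaotic attractor
```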
Precision of a simulated estimate | You've estimated a proportion of $\hat{p}=0.833564$ from $n=1000000$ trials. You can for example construct a $95\%$ confidence interval:
$$[\hat{p}-1.96\sqrt{\frac{\hat{p}(1-\hat{p})}{n}},\hat{p}+1.96\sqrt{\frac{\hat{p}(1-\hat{p})}{n}}]=[0.8329,0.8343]$$
So with $95\%$ confidence you can say that you are off by less than $0.00073$ i.e. inside the above interval.
BTW the true answer is $\frac{5}{6}=0.8333...$ as you can see by computing a simple integral and therefore in reality you are off by just $0.00023$. |
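The computation itself is a few lines (reproducing the interval up to rounding):

```python
import math

p_hat, n = 0.833564, 10**6
half = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)   # half-width of the 95% CI
lo, hi = p_hat - half, p_hat + half
print(round(half, 5), round(lo, 4), round(hi, 4))
print(lo < 5 / 6 < hi)  # True: the interval covers the exact answer 5/6
```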
How to prove $f(x)=ax$ if $f(x+y)=f(x)+f(y)$ and $f$ is locally integrable | Integrate the functional equation with respect to $x$ between $0$ and $1$. The result is the equation
$$
\int_y^{y+1} f(u) du = \int_0^1 f(x) dx + f(y) \text .
$$
The integral on the left side exists and is continuous in $y$ because $f$ is locally integrable. Therefore the right side is also continuous in $y$; that is, $f$ is continuous! The rest is clear sailing. |
How do I read this triple summation? $\sum_{1\leq i < j < k \leq 4}a_{ijk}$ | $$
\sum_{i=1}^2 \sum_{j=i+1}^3 \sum_{k=j+1}^4 a_{ijk} =
\sum_{k=3}^4 \sum_{j=2}^{k-1} \sum_{i=1}^{j-1} a_{ijk} =
a_{123}+a_{124}+a_{134}+a_{234}
$$ |
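Both orderings enumerate exactly the triples $1\le i<j<k\le 4$, which is easy to confirm (the sample values $a_{ijk}=100i+10j+k$ are arbitrary):

```python
from itertools import combinations

a = lambda i, j, k: 100 * i + 10 * j + k            # arbitrary sample values

s1 = sum(a(i, j, k) for i, j, k in combinations(range(1, 5), 3))
s2 = sum(a(i, j, k) for i in range(1, 3)
                    for j in range(i + 1, 4)
                    for k in range(j + 1, 5))
print(s1 == s2, s1)  # True: 123 + 124 + 134 + 234 = 615
```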
How do I simplify integrating factor? | Use that $$-\log(\cos(x))=\log\frac{1}{\cos(x)}$$ |
Is it true that for p prime if $g^{(p-1)/2} = -1$ then g is a generator? | You claim that $g^{(p-1)/2} = -1$ implies that $g$ is a generator.
Does it not follow that if $g$ is a generator, $g^3$ is also a generator?
Is this true for $p = 7$, where the group of units is cyclic of order 6? |
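The hint can be made concrete with a few lines of Python for $p=7$ (taking $g=3$, which is a generator):

```python
p = 7
order = lambda g: next(k for k in range(1, p) if pow(g, k, p) == 1)

g = 3                                        # a generator mod 7
print(order(g), pow(g, (p - 1) // 2, p))     # 6 6: g^3 ≡ -1 and g generates
h = pow(g, 3, p)                             # h = g³ = 6 ≡ -1 (mod 7)
print(order(h), pow(h, (p - 1) // 2, p))     # 2 6: h^3 ≡ -1 yet h is not a generator
```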
arithmetic with quantum integers | A bit long for a comment:
There's a "more standard" way of making the quantum integers into a ring via so-called quantum addition: $[x]_q\oplus_q[y]_q=[x]_q+q^x[y]_q$. If you work it out, this will give you $[x]_q\oplus_q [y]_q=[x+y]_q$.
There's a similar definition for multiplication: $[x]_q\otimes_q [y]_q=[x]_q[y]_{q^x}$, which if you work out gives you $[x]_q\otimes_q [y]_q=[xy]_q$. For more details, see the first few pages of this paper. Most importantly, they make the set of $[n]_q$ into a ring. In section 3 of the paper, they work out more classical results of $[x]_q[y]_q$ and $[x+y]_q$. |
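A quick exact check of both identities, using the convention $[n]_q=1+q+\dots+q^{n-1}$ (the sample $q$, $x$, $y$ are arbitrary; `Fraction` keeps the arithmetic exact):

```python
from fractions import Fraction

def bracket(n, q):                 # [n]_q = 1 + q + ... + q^(n-1)
    return sum(q**k for k in range(n))

q, x, y = Fraction(17, 10), 3, 4   # arbitrary sample values
add_ok = bracket(x, q) + q**x * bracket(y, q) == bracket(x + y, q)
mul_ok = bracket(x, q) * bracket(y, q**x) == bracket(x * y, q)
print(add_ok, mul_ok)  # True True
```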
Why is $L^p(S) \subseteq L^q(S)$? | In general there need not be any inclusion between $L^p(S,\mu)$ and $L^q(S,\mu)$, for an arbitrary measure space $(S,\mu).$
However, under suitable assumptions on the measure there are inclusions; see for example Wikipedia. The criteria for inclusion of $L^p$ spaces, for $0<p<q\leq\infty$, are:
$L^q(S,\mu)\subseteq L^p(S,\mu)$ iff $S$ does not contain sets of finite but arbitrarily large measure,
$L^p(S,\mu)\subseteq L^q(S,\mu)$ iff $S$ does not contain sets of non-zero but arbitrarily small measure.
Let's see this with some examples.
For example, let $S$ be the set of positive natural numbers, $S=\{n\in\mathbb{N}\mid n>0\}$, with counting measure. This space has sets of arbitrarily large measure, and the sequence $1/n^{1/p}$ is in $L^q$ but not $L^p$, so we have $L^q\not\subseteq L^p.$
Or let $S$ be the real interval $(0,1)$ with the Lebesgue measure. This space has sets of arbitrarily small measure, and the function $1/x^{1/q}$ is in $L^p$ but not $L^q$, so we have $L^p\not\subseteq L^q.$
The interval $(0,\infty)$ has sets of arbitrarily large and small measure, so neither inclusion holds, as the functions $1/x^{1/q}$ and $1/x^{1/p}$ show.
If the measure is both bounded above and bounded below, then there are inclusions going both ways, and $L^p$ and $L^q$ are isomorphic as topological vector spaces. I think this only happens if $S$ is finite and so $L^p$ is finite-dimensional, where all norms are equivalent.
So the upshot is, if we would like to prove one or the other inclusion, we must adopt one of the assumptions about our measure space.
So let's adopt assumption 1; then $S$ must have finite measure. Now $p<q$, so $q/p>1$, so $x\mapsto x^{q/p}$ is a convex function, and by Jensen's inequality we have
$$\left(\frac{1}{\mu(S)}\int\lvert f\rvert^p\right)^{q/p}\leq \frac{1}{\mu(S)}\int \lvert f\rvert^q,$$
so $\lVert f\rVert_p\leq \mu(S)^{\frac{1}{p}-\frac{1}{q}}\lVert f\rVert_q$ and so $L^q(S)\subseteq L^p(S).$
Alternatively, apply Hölder's inequality to $f^p\in L^{\frac{q}{p}}$ and $\chi_S\in L^{\frac{q}{q-p}},$ giving
$$\lVert\chi_S f^p\rVert_1\leq\lVert\chi_S\rVert_{\frac{q}{q-p}}\lVert f^p\rVert_{\frac{q}{p}},$$
and take the $p$th root.
For assumption 2, to answer your title question and show that $L^p(S)\subseteq L^q(S)$, let's assume we have a lower bound, so $\mu(E)\geq m$ for all measurable $E\subseteq S$ of nonzero measure.
Given a simple function
$$f=\sum a_i\chi_{E_i}$$
where $E_i$ are disjoint measurable sets and $\chi_{E_i}$ are the indicator functions, we have
$$\lVert f\rVert_p^p=\sum \lvert a_i\rvert^p\mu(E_i)$$
and so
$$\frac{\sum \lvert a_i\rvert^p\mu(E_i)}{\lVert f\rVert_p^p}=1$$
but if a sum of nonnegative terms is equal to one, then each term is at most one, so
$$\frac{\lvert a_i\rvert^p\mu(E_i)}{\lVert f\rVert_p^p}\leq 1,$$ and, taking $p$th roots,
$$
\frac{\lvert a_i\rvert\mu(E_i)^{1/p}}{\lVert f\rVert_p}\leq 1.
$$
A fundamental fact about exponentiation with exponents $p<q$, and the one at the heart of the inequality we seek, is that if the base $x>1$ then $x^p<x^q$, while if $0<x\leq 1$ then $x^p\geq x^q.$
Thus we have
$$
\frac{\lvert a_i\rvert^q\mu(E_i)^{q/p}}{\lVert f\rVert_p^q}\leq\frac{\lvert a_i\rvert^p\mu(E_i)}{\lVert f\rVert_p^p}\leq 1,
$$
and therefore, summing over $i$,
$$
\frac{\sum\lvert a_i\rvert^q\mu(E_i)^{q/p}}{\lVert f\rVert_p^q}\leq\frac{\sum\lvert a_i\rvert^p\mu(E_i)}{\lVert f\rVert_p^p}=1.
$$
In order to transform the left-hand side of this inequality into the $\lVert\cdot\rVert_q$ norm, we use the boundedness $\mu(E_i)\geq m$ to write
$$\mu(E_i)^{q/p}=\mu(E_i)\mu(E_i)^{\frac{q-p}{p}}\geq\mu(E_i)m^{\frac{q-p}{p}},$$
so that our inequality becomes
$$
\frac{m^{\frac{q-p}{p}}\sum\lvert a_i\rvert^q\mu(E_i)}{\lVert f\rVert_p^q}\leq\frac{\sum\lvert a_i\rvert^q\mu(E_i)^{q/p}}{\lVert f\rVert_p^q}\leq\frac{\sum\lvert a_i\rvert^p\mu(E_i)}{\lVert f\rVert_p^p}=1.
$$
Finally, clearing denominators and taking the $q$th root gives us
$$\lVert f\rVert_q\leq \frac{1}{m^{\frac{1}{p}-\frac{1}{q}}}\lVert f\rVert_p,$$
for simple functions $f$. Since the simple functions are dense, the inequality extends to all of $L^p$, giving us the desired inclusion $L^p(S)\subseteq L^q(S).$
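As a quick numerical sanity check (not part of the proof), one can compare the two norms on the counting-measure space $\mathbb{N}_{>0}$, where every nonempty set has measure at least $m=1$, so the bound reads $\lVert f\rVert_q\leq\lVert f\rVert_p$. The choice of exponents and sequence below is mine, purely for illustration:

```python
# Sanity check of ||f||_q <= m^{-(1/p - 1/q)} ||f||_p on the counting
# measure (m = 1), using a truncation of the sequence f(n) = 1/n.
p, q = 2.0, 3.0  # p < q
f = [1.0 / n for n in range(1, 10_000)]

norm_p = sum(abs(x) ** p for x in f) ** (1 / p)
norm_q = sum(abs(x) ** q for x in f) ** (1 / q)

# With m = 1 the constant m^{-(1/p - 1/q)} is 1, so the inequality
# reduces to ||f||_q <= ||f||_p.
assert norm_q <= norm_p
```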
Prove that $(a,b)$ is not compact based on the definition of compactness | Your proof that $[a,b]$ is compact is incorrect, as is your proof that $(a,b)$ is compact. It is not enough to take a single open cover and show it has a finite subcover; all open covers must have a finite subcover. Your example with $O[n]$ is enough to show $(a,b)$ is not compact. We have found an open cover with no finite subcover, thus it cannot be true that every open cover has a finite subcover.
To show $[a,b]$ is compact is quite tricky, due to the aforementioned need to show every open cover has a finite subcover. I would recommend looking at Wikipedia's proof of the Heine-Borel Theorem. |
Test the following series for convergence | Note that since $-1\le \sin(n\pi/4)\le 1$ for all $n$, then
$$\frac23\le \frac{1}{1+\frac12 \sin(n\pi/4)}\le 2$$
Therefore, we find that
$$\frac23 \sum_{n=1}^N \frac1{n^2}\le \sum_{n=1}^N \frac1{n^2\left(1+\frac12 \sin(n\pi/4)\right)}\le 2 \sum_{n=1}^N \frac1{n^2}$$
Can you finish now? |
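This is not a proof, but the sandwich above is easy to illustrate numerically for a particular partial sum (the choice $N=1000$ is mine):

```python
import math

# Check the comparison bound on partial sums for one value of N:
# (2/3) * sum 1/n^2  <=  sum 1/(n^2 (1 + sin(n*pi/4)/2))  <=  2 * sum 1/n^2
N = 1000
base = sum(1 / n**2 for n in range(1, N + 1))
series = sum(1 / (n**2 * (1 + 0.5 * math.sin(n * math.pi / 4)))
             for n in range(1, N + 1))

assert (2 / 3) * base <= series <= 2 * base
```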
Solve the comparison | As $67\equiv11\pmod{28}$
$$67x+17\equiv0\pmod{28}\iff 11x+17\equiv0\iff11x\equiv-17\equiv11\pmod{28}$$
$$\iff x\equiv1\pmod{28}$$
Using property $\#12$ of this, and since $(11,28)=1$.
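Since the modulus is tiny, the conclusion is easy to confirm by brute force (a one-off check, not part of the modular-arithmetic argument):

```python
# All residues x in {0, ..., 27} with 67x + 17 ≡ 0 (mod 28).
# Since gcd(67, 28) = 1, there should be exactly one: x ≡ 1 (mod 28).
solutions = [x for x in range(28) if (67 * x + 17) % 28 == 0]
assert solutions == [1]
```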
For which value of $\delta$ is $\sum_{n=1}^{\infty}\frac{x}{n(1+n^{\delta}x^{2})}$ uniformly convergent on $\mathbb{R}$? | Notice that we have
$$
\left| \sum_{n=N+1}^{\infty} \frac{x}{n(1+n^{\delta}x^2)} \right|
\leq \int_{N}^{\infty} \frac{|x|}{t(1+t^{\delta}x^2)} \,dt.
$$
The last integral can be computed by applying the substitution $u=x^2 t^{\delta}$. (Of course, we may exclude the trivial case $x = 0$ so that the substitution makes sense.)
$$
\int_{N}^{\infty} \frac{|x|}{t(1+t^{\delta}x^2)} \, dt
= \frac{|x|}{\delta} \int_{N^{\delta} x^2}^{\infty} \frac{du}{u(1+u)}
= \frac{|x|}{\delta}\log\left( 1+\frac{1}{N^{\delta} x^2} \right).
$$
It is easy to check that
$$ f(x) = \begin{cases}
x\log(1+x^{-2}), & x \neq 0 \\
0, & x = 0
\end{cases}$$
is uniformly bounded on $\mathbb{R}$. Let $C > 0$ be any bound of $f$. Then
$$ \forall x \in \mathbb{R} \ : \quad \left| \sum_{n=N+1}^{\infty} \frac{x}{n(1+n^{\delta}x^2)} \right|
\leq \frac{|f(N^{\delta/2} x)|}{\delta N^{\delta/2}} \leq \frac{C}{\delta N^{\delta/2}} $$
This shows that the series converges uniformly on $\mathbb{R}$ for every $\delta > 0$.
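One can estimate the constant $C$ numerically (the grid and cutoff below are arbitrary choices of mine; since $f$ is even, a positive grid suffices):

```python
import math

# Estimate sup of f(x) = x * log(1 + x^{-2}) on a grid, illustrating
# that f is uniformly bounded on R (this is the constant C above).
xs = [k / 1000 for k in range(1, 100_001)]  # grid on (0, 100]
sup_f = max(x * math.log(1 + x**-2) for x in xs)

# The maximum is about 0.80 (attained near x = 0.5), so C = 1 works.
assert sup_f < 1.0
```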
How to show that two arcs are parallel with respect to poincare metric of the unit disc? | The proof is relatively easy using the upper plane model. I will sketch it as it reduces to simple euclidean geometry proofs. Take a Mobius transform of the disc to the upper plane that sends $a$ to $0$ and $b$ to $\infty$; then the two circular arcs become lines through the origin $L,M$ say.
The perpendicular from a point $P \in L$ is (contained in) the Euclidean circle centered on the real axis whose tangent at the intersection $Q$ with $M$ is perpendicular to $M$. This immediately implies that the center is the origin, and the distance in question is the hyperbolic distance between $P$ and $Q$, where $P, Q$ lie on the same circle centered at the origin; we need to show that it is constant regardless of $P \in L$.
But it is a well-known result that if we let $A_P, B_P$ be the intersections of that circle with the real axis, the hyperbolic distance from $P$ to $Q$ is $|\log \frac{|PA_P||QB_P|}{|PB_P||QA_P|}|$ (where $|PA_P|$ is just the usual Euclidean distance).
However, it is obvious that $PA_P \parallel P'A_{P'}$ for any $P, P' \in L$, as the triangles $OPA_P, OP'A_{P'}$ are isosceles at the origin ($|OP|=|OA_P|$, etc.) and hence similar, so all those ratios are constant in $P,Q$ as required (they are just $r/r'$ or $r'/r$), and we are done!
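The radius-independence can also be checked numerically in the upper half-plane, using the standard formula $\cosh d(P,Q)=1+\frac{|P-Q|^2}{2\,\mathrm{Im}\,P\,\mathrm{Im}\,Q}$; the two line angles below are arbitrary sample values:

```python
import cmath, math

def hyp_dist(P, Q):
    # Hyperbolic distance between two points of the upper half-plane.
    return math.acosh(1 + abs(P - Q) ** 2 / (2 * P.imag * Q.imag))

alpha, beta = 0.4, 1.1  # angles of the two Euclidean lines through 0

# Distance between P = r e^{i alpha} on L and Q = r e^{i beta} on M,
# for several radii r: it should not depend on r.
dists = [hyp_dist(r * cmath.exp(1j * alpha), r * cmath.exp(1j * beta))
         for r in (0.5, 1.0, 2.0, 7.0)]

assert max(dists) - min(dists) < 1e-12
```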
How do you define the inverse of an (exponential Lie) operator? | If I understand the question correctly, if we have any linear operator $A$ for which the exponential $\exp(A)$ is meaningful, then $A$ commutes with $-A$, and so
$$
\exp(A) \exp(-A) = \exp(A + (-A)) = \exp(\mbox{the zero operator}) = I.
$$
The inverse of $(I + B)^{-1}$ (again, if the latter is defined) is $(I+B)$. |
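A small numerical illustration of $\exp(A)\exp(-A)=I$ (the matrix and the truncation order are my choices; the Taylor series is used only because it keeps the sketch dependency-free):

```python
# Check exp(A) exp(-A) = I for a 2x2 matrix, with a plain truncated
# Taylor-series matrix exponential in pure Python.

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_exp(A, terms=30):
    result = [[1.0, 0.0], [0.0, 1.0]]  # identity
    power = [[1.0, 0.0], [0.0, 1.0]]   # running A^n
    fact = 1.0
    for n in range(1, terms):
        power = mat_mul(power, A)
        fact *= n
        result = [[result[i][j] + power[i][j] / fact for j in range(2)]
                  for i in range(2)]
    return result

A = [[0.3, -1.2], [0.7, 0.1]]
negA = [[-x for x in row] for row in A]
prod = mat_mul(mat_exp(A), mat_exp(negA))

identity_error = max(abs(prod[i][j] - (1.0 if i == j else 0.0))
                     for i in range(2) for j in range(2))
assert identity_error < 1e-10
```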
Integration of $\sin(2x)$ via two different methods | See this related post. This is a common mistake while doing integration.
For example, if $f(x)=g(x)$, then would you say $\displaystyle \int f(x) \mathrm dx=\int g(x) \mathrm dx$? |
Completing the square to decompose quadratic forms in three variables | Here is a method. Usually we take the matrix $H$ to be the Hessian matrix of second partial derivatives. In the special case of integer coefficients with all the mixed coefficient terms even, we can get $H$ all integers by taking half the Hessian matrix.
Outcome this time:
$$ P^T H P = D $$
$$\left(
\begin{array}{rrr}
1 & 0 & 0 \\
- 1 & 1 & 0 \\
- 1 & 3 & 1 \\
\end{array}
\right)
\left(
\begin{array}{rrr}
1 & 1 & - 2 \\
1 & 0 & 1 \\
- 2 & 1 & - 4 \\
\end{array}
\right)
\left(
\begin{array}{rrr}
1 & - 1 & - 1 \\
0 & 1 & 3 \\
0 & 0 & 1 \\
\end{array}
\right)
= \left(
\begin{array}{rrr}
1 & 0 & 0 \\
0 & - 1 & 0 \\
0 & 0 & 1 \\
\end{array}
\right)
$$
$$ Q^T D Q = H $$
$$\left(
\begin{array}{rrr}
1 & 0 & 0 \\
1 & 1 & 0 \\
- 2 & - 3 & 1 \\
\end{array}
\right)
\left(
\begin{array}{rrr}
1 & 0 & 0 \\
0 & - 1 & 0 \\
0 & 0 & 1 \\
\end{array}
\right)
\left(
\begin{array}{rrr}
1 & 1 & - 2 \\
0 & 1 & - 3 \\
0 & 0 & 1 \\
\end{array}
\right)
= \left(
\begin{array}{rrr}
1 & 1 & - 2 \\
1 & 0 & 1 \\
- 2 & 1 & - 4 \\
\end{array}
\right)
$$
It is the diagonal elements of $D$ and the rows of $Q$ that give the expression,
$$ \color{red}{ (x+y-2z)^2 - (y-3z)^2 + z^2 } $$
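As a sanity check, one can verify numerically that the red expression agrees with the quadratic form $v^T H v$ of the half-Hessian $H$ (a pure-Python sketch; the sample points are arbitrary):

```python
# Verify that (x+y-2z)^2 - (y-3z)^2 + z^2 equals the quadratic form
# v^T H v for the half-Hessian H, on a few integer points.
H = [[1, 1, -2], [1, 0, 1], [-2, 1, -4]]

def form(x, y, z):
    v = (x, y, z)
    return sum(v[i] * H[i][j] * v[j] for i in range(3) for j in range(3))

for (x, y, z) in [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 2, 3), (-2, 5, 1)]:
    assert (x + y - 2*z)**2 - (y - 3*z)**2 + z**2 == form(x, y, z)
```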
Algorithm discussed at http://math.stackexchange.com/questions/1388421/reference-for-linear-algebra-books-that-teach-reverse-hermite-method-for-symmetr
https://en.wikipedia.org/wiki/Sylvester%27s_law_of_inertia
$$ H = \left(
\begin{array}{rrr}
1 & 1 & - 2 \\
1 & 0 & 1 \\
- 2 & 1 & - 4 \\
\end{array}
\right)
$$
$$ D_0 = H $$
$$ E_j^T D_{j-1} E_j = D_j $$
$$ P_{j-1} E_j = P_j $$
$$ E_j^{-1} Q_{j-1} = Q_j $$
$$ P_j Q_j = Q_j P_j = I $$
$$ P_j^T H P_j = D_j $$
$$ Q_j^T D_j Q_j = H $$
$$ H = \left(
\begin{array}{rrr}
1 & 1 & - 2 \\
1 & 0 & 1 \\
- 2 & 1 & - 4 \\
\end{array}
\right)
$$
==============================================
$$ E_{1} = \left(
\begin{array}{rrr}
1 & - 1 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1 \\
\end{array}
\right)
$$
$$ P_{1} = \left(
\begin{array}{rrr}
1 & - 1 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1 \\
\end{array}
\right)
, \; \; \; Q_{1} = \left(
\begin{array}{rrr}
1 & 1 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1 \\
\end{array}
\right)
, \; \; \; D_{1} = \left(
\begin{array}{rrr}
1 & 0 & - 2 \\
0 & - 1 & 3 \\
- 2 & 3 & - 4 \\
\end{array}
\right)
$$
==============================================
$$ E_{2} = \left(
\begin{array}{rrr}
1 & 0 & 2 \\
0 & 1 & 0 \\
0 & 0 & 1 \\
\end{array}
\right)
$$
$$ P_{2} = \left(
\begin{array}{rrr}
1 & - 1 & 2 \\
0 & 1 & 0 \\
0 & 0 & 1 \\
\end{array}
\right)
, \; \; \; Q_{2} = \left(
\begin{array}{rrr}
1 & 1 & - 2 \\
0 & 1 & 0 \\
0 & 0 & 1 \\
\end{array}
\right)
, \; \; \; D_{2} = \left(
\begin{array}{rrr}
1 & 0 & 0 \\
0 & - 1 & 3 \\
0 & 3 & - 8 \\
\end{array}
\right)
$$
==============================================
$$ E_{3} = \left(
\begin{array}{rrr}
1 & 0 & 0 \\
0 & 1 & 3 \\
0 & 0 & 1 \\
\end{array}
\right)
$$
$$ P_{3} = \left(
\begin{array}{rrr}
1 & - 1 & - 1 \\
0 & 1 & 3 \\
0 & 0 & 1 \\
\end{array}
\right)
, \; \; \; Q_{3} = \left(
\begin{array}{rrr}
1 & 1 & - 2 \\
0 & 1 & - 3 \\
0 & 0 & 1 \\
\end{array}
\right)
, \; \; \; D_{3} = \left(
\begin{array}{rrr}
1 & 0 & 0 \\
0 & - 1 & 0 \\
0 & 0 & 1 \\
\end{array}
\right)
$$
==============================================
If I flip 10 dice, what's the probability I get 1 1, 2 2s, 3 3s, and 4 4s? | Look at a smaller example: rolling one $1$ and two $2$’s with three dice. There are clearly $3$ ways to get this result: $122,212$, and $221$. Thus, the probability is $\dfrac3{6^3}=\dfrac1{72}$.
Now compute it using your bit-string method: your target is $01001111$, one string out of a possible $\dbinom83=56$, giving the incorrect result $\dfrac1{56}$. Thus, the method cannot be right, and your doubts are justified.
The problem is indeed the one that you identified at the end of the question. There is only one way to roll ten $1$’s; there are $\dbinom{10}5$ ways to roll five $1$’s and five $2$’s.
Added: To return to the larger problem, you want to count the ways of getting the desired outcome. There are $\dbinom{10}4$ ways to pick which $4$ dice come up $4$, $\dbinom63$ ways to choose which of the remaining $6$ dice come up $3$, and so on, so the total number of ways is
$$\binom{10}4\binom63\binom32\binom11=\frac{10!}{4!3!2!1!}=12,600\;,$$ the multinomial coefficient $$\binom{10}{4,3,2,1}\;.$$ |
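The count and the resulting probability are easy to confirm in a couple of lines:

```python
from math import comb, factorial

# Favourable outcomes for one 1, two 2s, three 3s and four 4s
# among ten dice, as a product of binomial coefficients.
ways = comb(10, 4) * comb(6, 3) * comb(3, 2) * comb(1, 1)
assert ways == factorial(10) // (factorial(4) * factorial(3)
                                 * factorial(2) * factorial(1))
assert ways == 12600

probability = ways / 6**10  # about 2.08e-4
```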
composition of permutations example | $(12)(34)$ is in the kernel of the defined mapping because it takes $y_i\to y_i\,,i=1,2,3$. For instance, since $y_1=\{1,2\}\cup\{3,4\}$, and $(12)(34)$ takes $\{1,2\}\to \{1,2\}$ and $\{3,4\}\to\{3,4\}$, it takes $y_1$ to $y_1$.
If we look at $y_2$, we see that $\{1,3\}\to\{2,4\}$ and $\{2,4\}\to\{1,3\}$. Thus $(12)(34)$ takes $y_2$ to $y_2$.
Similarly for $y_3$.
So, $(12)(34)\to e\in S_3$ under the mapping.
As far as working with them, consider $(1234)$. From your calculations, we get $y_1\to y_3$. And $y_2\to y_2$. And finally $y_3\to y_1$. Thus, under the mapping Artin has defined, we get $(1234)\in S_4\to (13)\in S_3$. |
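These computations can be checked mechanically. The sketch below uses an ad-hoc encoding of mine: each $y_i$ is a partition of $\{1,2,3,4\}$ into two pairs, stored as a frozenset of frozensets, and a permutation acts by relabelling the elements inside each pair:

```python
def pairing(*pairs):
    return frozenset(frozenset(p) for p in pairs)

y1 = pairing((1, 2), (3, 4))
y2 = pairing((1, 3), (2, 4))
y3 = pairing((1, 4), (2, 3))

def act(perm, y):
    # Apply a permutation (given as a dict) to a partition.
    return frozenset(frozenset(perm[i] for i in pair) for pair in y)

p12_34 = {1: 2, 2: 1, 3: 4, 4: 3}  # the product (12)(34)
p1234 = {1: 2, 2: 3, 3: 4, 4: 1}   # the 4-cycle (1234)

# (12)(34) fixes all three partitions, so it lies in the kernel.
assert all(act(p12_34, y) == y for y in (y1, y2, y3))

# (1234) swaps y1 and y3 and fixes y2, i.e. it maps to (13) in S_3.
assert act(p1234, y1) == y3
assert act(p1234, y2) == y2
assert act(p1234, y3) == y1
```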
Claim on Wikipedia in connection with integrability and Risch's algorithm. Any references? | Comment: maybe this idea helps:
$x^4+10x^2-96x-71=(x^2+5)^2-96(x+1)$
Let:
$x^2+5=u\Rightarrow 2x\,dx=du\Rightarrow dx=\frac {du}{2x}=\frac{du}{2\sqrt{u-5}}$
Putting this in integrand you get:
$$ \frac{dx}{x^4+10x^2-96x-71}=\frac{du}{[u^2-96(\sqrt{u-5}+1)](2\sqrt{u-5})},$$
which can be split into two fractions. This is not possible if you replace $71$ by $72$.
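The algebraic identity behind the substitution is quickly confirmed numerically (the sample points below are arbitrary):

```python
# Check x^4 + 10x^2 - 96x - 71 = (x^2 + 5)^2 - 96(x + 1) on many points.
for k in range(-50, 51):
    x = k / 7  # includes non-integer sample points
    lhs = x**4 + 10 * x**2 - 96 * x - 71
    rhs = (x**2 + 5) ** 2 - 96 * (x + 1)
    assert abs(lhs - rhs) < 1e-9
```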
Discrete mathematics - understanding proof by induction | When they write $1+2+\ldots + (k+1)$, they mean summing up every positive integer up to $k+1$. The integer before $k+1$ is $k$.
Hence $$1+2+\ldots + (k+1) =(1+2+\ldots + k)+ (k+1)\tag{1} $$
We already know that $1+2+\ldots + k = \frac{k(k+1)}{2}$ by the induction hypothesis, substitute this into $(1)$ and we have
$$(1+2+\ldots + k)+ (k+1)= \frac{k(k+1)}{2}+(k+1)$$ |
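Though the point of the exercise is the inductive argument itself, the closed form is easy to spot-check directly:

```python
# Direct (non-inductive) check of 1 + 2 + ... + n = n(n+1)/2
# for the first few hundred values of n.
for n in range(1, 500):
    assert sum(range(1, n + 1)) == n * (n + 1) // 2
```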
Given three non-negative numbers $a,b,c$ so that $a+b+c=3$. Prove that $\prod\limits_{cyc}(\!2+ a^{2}\!)+abc\geqq 28$ . | Let $a+b+c=3u$, $ab+ac+bc=3v^2$ and $abc=w^3$.
Thus, we need to prove that
$$\prod_{cyc}(2u^2+a^2)+u^3w^3\geq28u^6$$ or $f(w^3)\geq0,$ where
$$f(w^3)=w^6-11u^3w^3+16u^6-24u^4v^2+18u^2v^4.$$
But $$f'(w^3)=2w^3-11u^3\leq0,$$ which says that it's enough to prove our inequality for a maximal value of $w^3$,
which happens for equality case of two variables.
Let $b=a$ and $c=3-2a$.
That is, we need to prove that
$$(2+a^2)^2(2+(3-2a)^2)+a^2(3-2a)\geq28$$ or
$$(a-1)^2(4a^4-4a^3+15a^2-16a+16)\geq0,$$ which is obvious.
Done! |
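A numerical grid check of the single-variable reduction (not a proof, just a confirmation of the final step; the grid is mine, with $a\in[0,1.5]$ so that $c=3-2a$ stays nonnegative):

```python
# Check (2 + a^2)^2 (2 + (3-2a)^2) + a^2 (3 - 2a) >= 28 on a grid,
# with equality at a = 1 (i.e. a = b = c = 1).
def g(a):
    return (2 + a**2) ** 2 * (2 + (3 - 2 * a) ** 2) + a**2 * (3 - 2 * a)

values = [g(k / 1000) for k in range(0, 1501)]
assert min(values) >= 28 - 1e-9
assert abs(g(1.0) - 28) < 1e-9
```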
Describe an $O(N)$ time algorithm for determining if there is an integer in a sequence $A$ and an integer in a sequence $B$ such that $x = a + b$ | Allocate a hash map $H$. For each $a \in A$, set $H[x - a]$ to $1$. Then, for each $b \in B$, if $H[b]$ is $1$, you are done. Hash-map insertion and lookup take $O(1)$ expected time, and each list is traversed at most once, so the algorithm runs in $O(n)$ expected time.
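A sketch of this in Python, using a dict as the hash map (the function name and return convention are mine, not from the question):

```python
# Return a pair (a, b) with a in A, b in B and a + b == x, or None.
# One pass over A records the needed complements, one pass over B
# looks them up: O(len(A) + len(B)) expected time overall.
def find_pair(A, B, x):
    needed = {}              # complement -> the a that produced it
    for a in A:
        needed[x - a] = a
    for b in B:
        if b in needed:
            return (needed[b], b)
    return None

assert find_pair([1, 4, 7], [2, 5, 9], 9) == (7, 2)
assert find_pair([1, 4, 7], [2, 5, 9], 100) is None
```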
Independent families versus generators in boolean algebras | This was answered on Math Overflow here. The gist is that an independent family of subsets of $\kappa$, along with the subsets of $\kappa$ of size $< \kappa$, can never generate all of $\mathcal{P}(\kappa)$ as a Boolean algebra. |
A problem regarding functional calculus. | If $0\in F_i$, you are done. Otherwise, as $F_i$ is compact, there exists $\delta>0$ such that $|z|>\delta$ for all $z\in F_i$. Then the function $g:z\mapsto zf_i(z)$ satisfies $|g(z)|\geq\delta$ for all $z\in F_i$ and so $h=1/g$ is well-defined and analytic on $F_i$. This implies that $Tf_i(T)|_{A_i}$ is invertible (with inverse $h(T)$), and so $0\not\in\sigma(T|_{A_i})$. |
Show that there is no non-trivial solution for $x^2 + 2z^2 = 10y^2$ | Modulo $5$ the equation becomes
$$
x^2+2z^2\equiv0\pmod5.
$$
Note that for an integer $n$ not divisible by $5$, we have $n^2\equiv\pm1\pmod5$. Hence
$$
x^2+2z^2\equiv\pm1\pm2\not\equiv0\pmod5,
$$
for $x,z$ not divisible by $5$.
So any non-trivial solution to the equation must have $5$ dividing both $x$ and $z$, and hence $y$ as well. But if $(x, y, z)$ is a solution, then $(x/5, y/5, z/5)$ is another solution, whose entries are again divisible by $5$, and so on. This would make $x, y, z$ divisible by arbitrarily high powers of $5$, which is impossible for nonzero integers. So no non-trivial solutions to the equation exist.
Hope this helps. |
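The key residue computation is finite, so it can be checked exhaustively:

```python
# For x, z not divisible by 5, x^2 + 2 z^2 is never 0 modulo 5.
residues = {(x * x + 2 * z * z) % 5
            for x in range(1, 5) for z in range(1, 5)}
assert 0 not in residues
```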
Question about notation in First Variation Equation | The $x$ is indeed there to represent differentiation wrt. those variables and not $t$, the RHS of your first equation has indices $i,j$ varying from $1$ to $n$, so that you get the matrix of the derivative.
If the subindex $x$ weren't there, one could interpret it as saying that one needs to compute all derivatives of a function from $R^{1+n}$ into $R^n$. Also, because the function $\phi (t;x_0)$ denotes the solution at time $t$ with initial condition $x_0$, the author speaks of differentiability wrt. them, meaning wrt. the value of $x_0$ (which is not fixed until you solve the system: each choice yields a different solution curve).
In other words: for fixed $x_0$, you can think of $\phi$ as a curve in $R^n$, parametrized by $t$ and if you vary $x_0$ in an neighborhood you get a family of curves which depends differentiably on the choice of starting point. |