title | upvoted_answer |
---|---|
Properties of 3-vector dot product | Upon Jeff's request, here is the solution I provided. The validity of the proof as it stands requires that: $$\sum_{1\leqslant i<j\leqslant n}b_ic_ib_jc_j\geqslant 0.$$
Using Lagrange's identity, for $(a_i)$ and $(b_ic_i)$, one has:
$$\textrm{TD}(A,B,C)^2=\|A\|^2\cdot\sum_{i=1}^n(b_ic_i)^2-\underbrace{\sum_{1\leqslant i<j\leqslant n}(a_ib_jc_j-a_jb_ic_i)^2}_{\geqslant 0}.$$
Hence, one derives:
$$\textrm{TD}(A,B,C)^2\leqslant\|A\|^2\cdot\sum_{i=1}^n(b_ic_i)^2.\tag{1}$$
Besides, one has: $$\langle B,C\rangle^2=\sum_{i=1}^n(b_ic_i)^2+\underbrace{2\sum_{1\leqslant i<j\leqslant n}b_ic_ib_jc_j}_{\geqslant 0}.$$
Therefore, one derives: $$\sum_{i=1}^n(b_ic_i)^2\leqslant\langle B,C\rangle^2.\tag{2}$$
Using $(1)$ and $(2)$, one has: $$|\textrm{TD}(A,B,C)|\leqslant \|A\|\cdot|\langle B,C\rangle|.\tag{3}$$
Finally, using the Cauchy-Schwarz inequality, one gets: $$|\textrm{TD}(A,B,C)|\leqslant\|A\|\cdot\|B\|\cdot\|C\|.$$
Remark. Inequality $(3)$ is stronger than the one you are interested in. |
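A quick numerical sanity check of inequality $(3)$ (a sketch in Python; it assumes $\textrm{TD}(A,B,C)=\sum_i a_ib_ic_i$ and draws $b,c\geq 0$ so that the hypothesis above holds):

import numpy as np

rng = np.random.default_rng(1)
for _ in range(1000):
    a = rng.normal(size=6)
    b, c = rng.uniform(0, 1, size=(2, 6))  # b_i c_i >= 0, so the hypothesis holds
    td = np.sum(a * b * c)
    assert abs(td) <= np.linalg.norm(a) * abs(np.dot(b, c)) + 1e-12
print("inequality (3) held in all trials")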
Is $z_1^T z_1 A + z_2^T z_2 A ... + z_n^T z_n A = Z^T Z A$? | If $z_k$ denotes the $k$th row of the matrix $Z$, then we have
$$
z_1^Tz_1A + \cdots + z_n^Tz_nA = \\
(z_1^Tz_1 + \cdots + z_n^Tz_n)A = \\
\pmatrix{z_1^T & z_2^T & \cdots & z_n ^T} \pmatrix{z_1\\z_2\\ \vdots \\ z_n}A =\\
Z^TZA
$$ |
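A quick numerical check of this identity (a sketch with NumPy; the shapes are arbitrary assumptions):

import numpy as np

rng = np.random.default_rng(0)
Z = rng.normal(size=(5, 3))  # rows z_1, ..., z_n
A = rng.normal(size=(3, 4))

# sum of the rank-one terms z_k^T z_k A, with z_k the k-th row of Z
lhs = sum(np.outer(Z[k], Z[k]) @ A for k in range(Z.shape[0]))
rhs = Z.T @ Z @ A
print(np.allclose(lhs, rhs))  # True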
Local behaviour of a family of differential equations | I am not fluent in the topic, but I know some words to start: that problem is an example of a singular perturbation problem. A related topic is perhaps the idea of matched asymptotics. Hope this helps. |
Is my application of Burnside's Lemma correct in this combinatorial problem? | Using Burnside I get the following result. By inspection we have the following cycle index:
$$\frac{1}{2}(a_1^5 + a_1 a_2^2).$$
We need to compute the number of digit sequences fixed by these two permutations, the identity and the flip. Furthermore the flip exchanges sixes and nines at the same time as it permutes slots.
The identity fixes all $10^5$ digit sequences. For the flip according to Burnside we must have constant values on the cycles taking into account that a six and a nine placed on a two-cycle are also constant because the flip simultaneously exchanges those digits. We cannot have a six or a nine on the one-cycle because they wouldn't be constant on that cycle. This gives the following four possibilities:
$$8\times 8\times 8 + 8 \times 8\times 2 + 8\times 2\times 8 + 8\times 2\times 2.$$
The first factor in these four products corresponds to the choice for the one-cycle and the next two to the choices for the two two-cycles. These are eight or two, depending on whether we choose a digit that is not six nor nine, or we choose either six and nine or nine and six.
Burnside now yields
$$\frac{1}{2}
\left(10^5 + 8\times 8\times 8 + 8 \times 8\times 2 +
8\times 2\times 8 + 8\times 2\times 2\right) = 50400.$$
The following code snippet was used to verify the above calculation.
#! /usr/bin/perl -w
#
MAIN: {
my %seen;
for(my $n=0; $n<10**5; $n++){
my $m = sprintf "%05d", $n;
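# flip: reverse the string, then swap 6s and 9s via the placeholders x and y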
my $rev = reverse $m;
$rev =~ s/6/x/g;
$rev =~ s/9/y/g;
$rev =~ s/x/9/g;
$rev =~ s/y/6/g;
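# canonical key for the orbit of $m under the flip: the numerically sorted pair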
my @l = sort { $a <=> $b } ($m, $rev);
$seen{$l[0] . "-" . $l[1]} = 1;
}
print scalar(keys %seen);
print "\n";
} |
Does every real line bundle admit a flat connection? | This result is true and your approach is correct. Not an intuition, but: the obstruction to the existence of a flat connection is the curvature, which vanishes here because the dimension is $1$.
https://en.wikipedia.org/wiki/Connection_(vector_bundle)#Curvature |
Finding the limit of a set | Using $$\lim_{n \to \infty} \frac1n\sum_{r=1}^n f\left(\frac rn\right)=\int_0^1f(x)dx$$
$$\lim_{n\to\infty}\frac1{\sqrt n}\sum_{1\le r \le n}\frac1{\sqrt r}=\lim_{n\to\infty}\frac1n\sum_{1\le r \le n}\frac1{\sqrt {\frac rn}}=\int_0^1\frac{dx}{\sqrt x}=2$$ |
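A quick numerical check of this limit (a sketch in plain Python):

from math import sqrt
for n in (10**3, 10**5, 10**6):
    s = sum(1/sqrt(r) for r in range(1, n + 1))
    print(n, s / sqrt(n))  # approaches the integral's value 2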
Non-separable metric space implies an uncountable set with lower bounded distances? | For each $r>0$, by $A_r$ we denote the family of all subsets $S\subset X$ with the property
$$
x\neq y,\; x,y\in S \implies d(x,y)>r
$$
Assume all sets in $A_r$ are at most countable for each $r>0$; then $X$ is separable. Indeed, fix $r>0$; then by Zorn's lemma $A_r$ has a maximal element $S_r$, which is countable by assumption. From maximality it follows that $d(x,S_r)\leq r$ for all $x\in X$. Now consider the countable set
$$
S_0=\bigcup\limits_{q\in\mathbb{Q}_+} S_q
$$
By construction it is dense in $X$, hence $X$ is separable.
As a consequence, if $X$ is not separable, then there is an uncountable set $S\in A_r$ for some $r>0$.
Similarly, one can show that a metric space has a dense set of cardinality $\kappa$ if there is no set of cardinality $>\kappa$ with pairwise distances between elements bigger than some constant. |
Why $\mathbb Q(\sqrt 3,\sqrt 5)=\mathbb Q(\sqrt 3+\sqrt 5)$? | No isomorphism is needed:
$$(\sqrt 3-\sqrt 5)(\sqrt 3+\sqrt 5)=-2,$$
i.e. $\sqrt 3-\sqrt 5\in \mathbb Q(\sqrt 3+\sqrt 5)$,
and
$$\sqrt 3=\frac{\sqrt 3+\sqrt 5+(\sqrt 3-\sqrt 5)}{2},$$
$$\sqrt 5=\frac{\sqrt 3+\sqrt 5-(\sqrt 3-\sqrt 5)}{2}.$$ |
Hilbert's Hotel and Infinities for Pre-university Students | Real numbers are best avoided until properly treated, which requires a fair dose of analysis. Uncountability can be demonstrated on other sets. I have, on several occasions already, explained Hilbert's Hotel to kids of various ages. After the hotel seems never to fill up, I bring infinitely many families to the hotel. Each family has a child, and each child has an infinite string of toys. There are only two kinds of toys, so each child's string consists of just these two types. The diagonal argument is then given by hypothesizing that each family gets a room, and then looking for the kid whose string of toys is the complement of the diagonal string. Voilà, the hotel can't accommodate all families.
Alternatively/complementarily, you can prove Cantor's Theorem (that the power set of a set is always of larger cardinality than the set) to almost anybody. |
Convergence of $\frac{4}{\pi}\sum_{m=1}^{\infty}\frac{2m-1}{4m^2-4m-3}\sin[ (2m-1)x]$ | The series you got is the odd extension of this function to $[-\pi,\pi]$, and it is periodic with period $2\pi$.
You are seeing in the partial sums the Gibbs phenomenon.
If you were to take the Cesàro sums of the series, you would see it converge to $0$ at $x=0$ and similar points; see Fejér's theorem. |
$ι:U→V$ is an embedding, $Q:=ιι^*$, $L∈(ℝ^d)$, $Φ∈\text{HS}(U,ℝ^d)$ $⇒$ $\text{tr}LΦ\sqrt Q(Φ\sqrt Q)^*$ doesn't depend on $ι$ | I am not 100% sure of the exact definition of the object you are using, but I will make some comments anyway.
1) I think there is a typo. The sense of the question seems to require $\Phi\in\operatorname{HS}(V,\mathbb R^d)$, and not $\Phi\in\operatorname{HS}(U,\mathbb R^d)$ as you write.
2) $x, u$ are decorations; all that matters is that $L \in\mathfrak L(\mathbb R^d)$ and $L^*=L$.
3) The statement "$e_n:=\sqrt Qf_n\;\;\;\text{for }n\in N:=\left\{n\in\mathbb N:\lambda_n>0\right\}$" looks suspicious: $e_n \in U$, and $\sqrt Qf_n \in V$. There should be a $\iota$ or $\iota^*$ somewhere to place left hand side and right hand side on the same space. The relation
$$ e_n:=\lambda_n^{-\frac{1}{2}}\iota^*(f_n)\;\;\;\text{for }n\in N:=\left\{n\in\mathbb N:\lambda_n>0\right\}$$
seems to work with the rest of the statement, if my understanding of the HS embedding is correct.
With this relation, $\iota(e_n) = \lambda_n^{-\frac{1}{2}}\iota\iota^*(f_n) = \lambda_n^{-\frac{1}{2}}Q(f_n) = \lambda_n^{\frac{1}{2}}f_n, \; \iota^* \iota(e_n) =\lambda_n e_n $ and
$$ \sum_n ||\iota(e_n)||_V ^2 = \sum_n\lambda_n .$$
4) Now let's go over (2) in your proof.
\begin{equation}
\begin{split}
\operatorname{tr}LBB^\ast&=\sum_{n\in\mathbb N}\sqrt Qf_n\cdot\Phi^\ast L\Phi\sqrt Qf_n\\
&=\sum_{n\in\mathbb N} <\lambda_n^{\frac{1}{2}} f_n| \Phi^\ast L\Phi \lambda_n^{\frac{1}{2}} f_n>_V\\
&=\sum_{n\in N} <\iota(e_n) | \Phi^\ast L\Phi \iota(e_n)>_V \\
&=\sum_{n\in N} \lambda_n^{-1} <\iota \iota^* \iota(e_n) | \Phi^\ast L\Phi \iota(e_n)>_V \\
&=\sum_{n\in N} \lambda_n^{-1} < \iota(e_n) | \iota (\iota^* \Phi^\ast L\Phi \iota \; e_n) >_V \\
&=\sum_{n\in N} \lambda_n^{-1} <e_n | \iota^* \Phi^\ast L\Phi \iota e_n >_{HS} \\
\end{split}
\end{equation}
The trace of $\iota^* \Phi^\ast L\Phi \iota $ is:
\begin{equation}
\begin{split}
\operatorname{tr}\iota^* \Phi^\ast L\Phi \iota &=
\sum_{n\in N} \lambda_n^{-1} <e_n , \iota^* \Phi^\ast L\Phi \iota \; e_n >_{U} \\
\end{split}\tag{a}
\end{equation}
whereas the trace of $LBB^*$ is
\begin{equation}
\begin{split}
\operatorname{tr}LBB^\ast&=\sum_{n\in N} <\iota(e_n) | \Phi^\ast L\Phi \iota(e_n)>_V \\
&=\sum_{n\in N} <e_n | \iota^* \Phi^\ast L\Phi \iota(e_n)>_U \\
\end{split}\tag{b}
\end{equation}
(a) and (b) are different.
It is easy to trace the computations when all spaces are finite dimensional, keeping track of the 3 different dot products at play (i.e. the dot product in $U$, the one in $V$, and the HS product induced on $U$ by the HS embedding). |
What are super-translations? | A supertranslation is a diffeomorphism on an asymptotically-flat Riemannian or pseudo-Riemannian manifold, such that the result is also asymptotically flat. (The diffeomorphism doesn't need to be an isometry.)
(Of course, I didn't define asymptotic flatness. I don't know the definition, but I do know that in an asymptotically-flat manifold, if you go far enough in any direction, the metric becomes as close to being flat as you wish.)
The group of supertranslations of a certain manifold is called the BMS group, after Bondi, van der Burg, Metzner and Sachs, who discussed them in the context of gravitational waves in the following papers:
*Bondi, Hermann, M. G. J. Van der Burg, and A. W. K. Metzner. "Gravitational waves in general relativity. VII. Waves from axi-symmetric isolated systems." Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences. Vol. 269. No. 1336. The Royal Society, 1962.
*Sachs, Rainer K. "Gravitational waves in general relativity. VIII. Waves in asymptotically flat space-time." Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences. Vol. 270. No. 1340. The Royal Society, 1962. |
PRA: Rare event approximation with $P(A\cup B \cup \neg C)$? | Removing all the negatives certainly gives an upper bound. But if one looks at the logic of the inclusion-exclusion argument, whenever we have just added, we have added too much (except possibly at the very end). So at any stage just before we start subtracting again, our truncated expression gives an upper bound.
Thus one obtains upper bounds by truncating after the first sum, or the third, or the fifth, and so on. |
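A small brute-force illustration of these truncation (Bonferroni) bounds on a toy probability space (a sketch; the three events are arbitrary assumptions):

from itertools import combinations

outcomes = set(range(10))  # ten equally likely outcomes
A, B, C = {0, 1, 2, 3}, {2, 3, 4, 5}, {5, 6, 7, 8}
events = [A, B, C]
P = lambda E: len(E) / len(outcomes)

union = P(A | B | C)
S1 = sum(P(E) for E in events)
S2 = sum(P(E & F) for E, F in combinations(events, 2))
S3 = P(A & B & C)
print(union, "<=", S1)            # truncating after the first sum: upper bound
print(S1 - S2, "<=", union)       # truncating after the second sum: lower bound
print(union, "<=", S1 - S2 + S3)  # after the third sum: upper bound (exact for three events)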
Sum of divergent and convergent sequence proof. | Your proof is correct!
As Hagen von Eitzen notes in the comments, the core of the proof is the fact that $s_n$ is eventually bounded below. And "eventually bounded below" and "bounded below" are the same thing. |
About the continuous function | Take $x=0, y=\frac{1}{n}$, then
$$f(x,y)=f\left(0,\frac{1}{n}\right)=0\to 0.$$
Take $x=y=\frac{1}{n}$, then
$$f(x,y)=f\left(\frac{1}{n},\frac{1}{n}\right)=\frac{\sin \frac{2}{n^2}}{\frac{2}{n^2}}\to1.$$
Hence the function $f$ is discontinuous at $(0,0)$, independently of the value $f(0,0).$ |
Discrete Math Course | How many you’re required to take depends entirely on the institution and departments involved. How many you ought to take depends on the sort of applied mathematics and physics you want to do. A standard introductory course of the sort that you’re now taking ought to be required of anyone going into any field of applied mathematics, but if your interests lean heavily towards physics, you might not need much more than that. Then again, you might: I know a theoretical chemist whose work involved a lot of combinatorics. |
Graph Theoretic model plan? Combinatorics | Hint: Look at the graph of a single pool first. There are $7$ nodes, one for each team, and an edge drawn between them if they are playing. Each node has degree $5$. |
unanimity game, calculate core/shapley value | A payoff vector is in the core as long as it is not blocked by $T$, the only coalition that can block. We have $v(N)=v(T)=1$, and everything in which $\sum_{i\in T}x_i< 1$ can be blocked by $T$. On the other hand, if $\sum_{i\in T}x_i=1$, the only way to make a member of $T$ better off is by making another member of $T$ worse off. So everything that gives the whole pie to $T$ is in the core.
Everyone outside $T$ is a null player and should therefore get a value of zero. By symmetry, everyone in $T$ gets the same value. By efficiency, these values should add up to $v(N)$. Hence, the Shapley value is $\phi_i=v(N)/|T|$ for all $i\in T$. |
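A brute-force check via the permutation definition of the Shapley value (a sketch, assuming the unanimity game $v(S)=1$ iff $S\supseteq T$, here with the illustrative choices $N=\{1,2,3,4\}$ and $T=\{1,2\}$):

from itertools import permutations

N = [1, 2, 3, 4]
T = {1, 2}
v = lambda S: 1 if T <= set(S) else 0

shapley = {i: 0.0 for i in N}
perms = list(permutations(N))
for order in perms:
    coalition = []
    for i in order:
        before = v(coalition)
        coalition.append(i)
        # average marginal contribution over all arrival orders
        shapley[i] += (v(coalition) - before) / len(perms)
print(shapley)  # members of T get 1/|T| = 0.5, null players get 0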
Binomial Distribution and approximation | Hints and Comments:
You seem to be mixing up two different limiting properties.
(1) As the sample size $n$ of a survey becomes large the sample proportion of those who like lemons tends to become ever closer to the population probability that a randomly chosen person likes lemons [That is based on the Law of Large Numbers.]
(2) The sum or average of a large number $n$ of random variables (with finite variances) tends to be nearly normal. [That is based on the Central Limit Theorem.] A binomial distribution $\mathsf{Binom}(n, p)$ is the sum of $n$ Bernoulli (0-1) random variables, and so the CLT applies to allow reasonable
normal approximations of binomial distributions, especially when $n$ is large and $p$ is not too near 0 or 1 (near 1/2 is best).
Maybe that is enough to help you sort out your question. If not, please
deconvolve the LLN and CLT issues and ask about them separately.
Sample proportion: You interview $n=1000$ people and find that $X=331$ of them like lemons. Then $\hat p = X/n = 0.331$ is the sample proportion.
For large $n$, $\hat p$ is close to the population proportion $p$ who like lemons.
Finding the mean: Suppose $X \sim \mathsf{Binom}(n=3,p=.5).$ Then
by definition
$$\mu_X = E(X) = \sum_{k=0}^3 kP(X=k) = 0(1/8) + 1(3/8) + 2(3/8) + 3(1/8) = 12/8 = 3/2.$$
For the binomial distribution, it turns out that there is a
convenient formula: $E(X) = np = 3(1/2) = 3/2.$ Somewhat similarly, one
can show that $Var(X) = np(1-p).$
Some other families of distributions have analogous special formulas for expectations and variances. You can take the attitude that there are
confusingly many of them, or you can be glad that when special formulas
exist for the mean and variance of a type of distribution they save a lot
of work.
Suppose 63% of 1200 randomly sampled people (that's 756 of them) like lemons. Then what is the question? The best point estimate of the population proportion $p$ who like lemons is $\hat p = 756/1200 = .63.$ Of course, $\hat p$ isn't
exactly $p$. You may wonder how far off the estimate might be.
Using the normal approximation to the binomial distribution, one can find a 95% confidence interval (CI) for $p.$ The simplest form of CI (OK for $n$ over 1000) is $\hat p \pm 1.96\sqrt{ \frac{\hat p(1-\hat p)}{1200} }.$ That
computes to the interval $(0.603, 0.657),$ which has $\hat p$ as its center. |
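The arithmetic above, as a quick sketch in Python:

from math import sqrt

n, x = 1200, 756
p_hat = x / n                      # 0.63
half = 1.96 * sqrt(p_hat * (1 - p_hat) / n)
print(p_hat - half, p_hat + half)  # roughly (0.603, 0.657)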
Flux of a vector field through a sphere | hint
$$\iint_S \vec{f}\cdot \vec{ds}=$$
$$\iiint_V \nabla\cdot \vec{f} dv=$$
$$\iiint_Vdxdydz=$$
volume of the sphere
$$=\frac 43 \pi\,(\text{radius})^3.$$
But the equation of your sphere is
$$x^2+y^2+z^2=4=R^2,$$ so the radius is $2$. |
Property of Banach algebra with involution | In the definition (that I know) of an involutive Banach algebra it is assumed that $\Vert A^*\Vert=\Vert A\Vert$. Now we use the submultiplicative property of the norm to get
$$
\Vert A^* A\Vert\leq\Vert A^*\Vert\Vert A\Vert=\Vert A\Vert\Vert A\Vert=\Vert A\Vert^2
$$ |
Can I use divergence theorem on a plane? | There is a two-dimensional divergence theorem referring to domains $S\subset{\mathbb R}^2$, their boundaries $\partial S$, and a vector field ${\bf v}$ defined in a domain $\Omega\supset S$. This theorem is equivalent to "Green's theorem in the plane".
From your question one gets the impression that in fact you have a three-dimensional problem with a vector field $${\bf v}(x,y,z):=\bigl(u(x,y,z),v(x,y,z),w(x,y,z)\bigr)\ ,\tag{1}$$
and you are told to compute the flux of ${\bf v}$ through a certain elliptical disc $E$. You have to compute this flux as a standard flux integral, using a parametrization of $E$.
The three-dimensional divergence theorem applies to three-dimensional bodies $B$, their boundaries $\partial B$, and a vector field $(1)$. There is no body $B$ present in your problem. |
Strong deformation retraction between CW-complexes, Brown theorem proof | Your space $T$ is nothing other than the reduced double mapping cylinder of the maps $\iota : A \hookrightarrow X$ and $g : A \to Y$. To see that, note that $I^+ \wedge A = (I \times A)/(I \times \{a_0\})$.
The subspaces $A_1$ and $A_2$ of $T$ are copies of the reduced mapping cylinders of $\iota$ and of $g$. Thus you get the usual strong deformation retractions $f : A_1 \to X$ and $r : A_2 \to Y$. Hence the inclusions $i_1 : X \to A_1$ and $i_2 : Y \to A_2$ are homotopy equivalences and induce bijections $i_1^* : F(A_1) \to F(X)$ and $i_2^* : F(A_2) \to F(Y)$.
Now consider $v \in F(X)$ and $u \in F(Y)$.
Let $\bar v = (i_1^*)^{-1}(v) \in F(A_1)$. Then by definition $\bar v \mid_X = i^*_1(\bar v) = i^*_1( (i_1^*)^{-1}(v)) = v$.
Let $\bar u = (i_2^*)^{-1}(u) \in F(A_2)$. Then $\bar u \mid_Y = u$. |
differential geometric notation | You are correct, except for the extra $(e_k)$ at the last step.
There are two points with the upper and lower indices. One is that formulas should be compatible with the Einstein summation convention — so that you sum over an index that appears once up and once down. The other is that the position of the index keeps track of whether you're dealing with a contravariant object (vector or multi-vector) or covariant object (dual vector, etc.). Of course, the basis vectors of a vector will have a lower index ($e_j$) and the basis vectors of a dual vector will have an upper index ($\alpha^j$), so the corresponding coordinates are the reverse — the coordinates of a vector have upper indices and the coordinates of a dual vector have lower indices.
Note that a linear map maps vectors to vectors, so it can be written as a tensor of type $(1,1)$. The linear map $T$ with matrix $(a^i_j)$ mapping $V$ to $V$, using basis $\{e_i\}$ on both sides, can be represented as $T = \sum\limits_{i,j} a^i_j e_i\otimes \alpha^j$ (where $\{\alpha^j\}$ is the dual basis for $V^\vee$). Notice that this is equivalent to writing $T(e_j) = \sum\limits_i a^i_j e_i$. |
linear form on the polynomials vector space satisfying $\phi(1) \ne 0$ | HINT:
By the division algorithm
$$P= (X-a)\cdot Q + P(a)$$
so $$\phi(P) = \phi( (X-a) Q) + P(a) \phi(1)= P(a) \phi(1) = \lambda P(a)$$ |
How to prove this theorem for exponential equations | Suppose $a^x = a^y$. Taking logarithms,
$$x\log a = y\log a,\qquad\text{i.e.}\qquad (x-y)\log a = 0,$$
and $\log a = 0$ iff $a = 1$.
Draw your own conclusions. |
Consider the sequence $f_n(x) = (\sin(πnx))^n , n = 1, 2, ...,$ on the interval $[0,1].$ | Let us estimate
\begin{eqnarray*}
\left\Vert f_{n}\right\Vert _{1} & = & \int_{0}^{1}\left|\sin\left(\pi nx\right)\right|^{n}\, dx\\
& \overset{y=\frac{n}{2}x}{=} & \frac{2}{n}\cdot\int_{0}^{n/2}\left|\sin\left(2\pi y\right)\right|^{n}\, dy\\
& \leq & \frac{2}{n}\cdot\int_{0}^{n}\left|\sin\left(2\pi y\right)\right|^{n}\, dy\\
& = & 2\int_{0}^{1}\left|\sin\left(2\pi y\right)\right|^{n}\, dy,
\end{eqnarray*}
where I used that $y\mapsto\left|\sin\left(2\pi y\right)\right|$
is $1$-periodic in the last step.
Now observe that $\left|\sin\left(2\pi y\right)\right|^{n}\leq1$
and $\left|\sin\left(2\pi y\right)\right|^{n}\xrightarrow[n\rightarrow\infty]{}0$
as long as $\left|\sin\left(2\pi y\right)\right|\neq1$. But $\left|\sin\left(2\pi y\right)\right|=1$
holds only for $y\in\left\{ \frac{1}{4},\frac{3}{4}\right\} $, i.e.
on a finite set, hence on a null-set. Thus, $\left|\sin\left(2\pi y\right)\right|^{n}\xrightarrow[n\rightarrow\infty]{}0$
almost everywhere.
Using dominated convergence, we conclude $\left\Vert f_{n}\right\Vert _{1}\xrightarrow[n\rightarrow\infty]{}0$.
It is well-known (cf. Subsequence convergence in $L^p$) that every $L^{1}$-convergent sequence $\left(f_{n}\right)_{n\in\mathbb{N}}$
has a subsequence $\left(f_{n_{k}}\right)_{k\in\mathbb{N}}$ so that
$f_{n_{k}}\left(x\right)\xrightarrow[k\rightarrow\infty]{}f\left(x\right)$
almost everywhere, where $f$ is the $L^{1}$-limit.
Hence, we can even take $E\subset\left[0,1\right]$ with $m\left(E\right)=1$,
because there is a subsequence $\left(f_{n_{k}}\right)_{k\in\mathbb{N}}$
converging to zero almost everywhere. |
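A numerical illustration of $\left\Vert f_{n}\right\Vert _{1}\to 0$ (a sketch with NumPy, approximating the integral by an average over a fine grid):

import numpy as np

x = np.linspace(0, 1, 200_001)
for n in (5, 10, 20, 40, 80):
    fn = np.abs(np.sin(np.pi * n * x)) ** n
    print(n, fn.mean())  # the L^1 norms decrease towards 0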
the summation of $(-1)^k(1\cdot3\cdot5\cdots(2k-1))x^{2k}$ | Assume $x\ne0$.
Then by applying the ratio test to this power series, one has
$$
\begin{align}
L = \lim_{k\to\infty}\left|\frac{a_{k+1}}{a_k}\right|&=\lim_{k\to\infty}\left|\frac{(-1)^{k+1}(1\cdot3\cdot5\cdots(2k-1)(2k+1))}{(-1)^k(1\cdot3\cdot5\cdots(2k-1))}\cdot x^2\right|
\\\\&=\lim_{k\to\infty}(2k+1)|x|^2
\\\\&=\infty,
\end{align}
$$ Since the limit is infinite for every $x\neq0$, the radius of convergence is $R=0$.
The given series never converges except for $x=0$. |
For what integers $n$ does $\varphi(n)=n-2$? | Assume $n$ has more than one prime factor, and let two of these prime factors be $p$ and $q$. Then $p$, $q$ and $n$ are three distinct numbers that are not coprime to $n$, but $\phi(n)$ counts the numbers up to $n$ coprime to $n$, so $n-2=\phi(n)\le n-3$. Contradiction.
Thus, $n$ has one prime factor. So $n=p^{e}$ for some prime $p$ and natural number $e$. So $$p^{e}-2=n-2=\phi(n)=\phi(p^{e})=p^{e}-p^{e-1}$$
So $p^{e-1}=2$. By the Fundamental Theorem of Arithmetic, $p=2$ and $e=2$. So the only solution is $n=4$. We are done.
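A brute-force confirmation (a sketch; SymPy's totient is used for convenience):

from sympy import totient

print([n for n in range(1, 10_000) if totient(n) == n - 2])  # [4]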
Is the function of the limit = the limit of the function when I'm talking about continuous functions of ordinals? | As we generally only speak of limits of non-decreasing sequences of ordinals, we may assume that the function $f$ is itself non-decreasing.
Let $\alpha = \lim_{\xi \to \gamma} \alpha_\xi$. If $f ( \alpha ) \neq \lim_{\xi \to \gamma} f ( \alpha_\xi )$ there are two possibilities:
If $f ( \alpha ) < \lim_{\xi \to \gamma} f ( \alpha_\xi )$ then there must be a $\beta < \gamma$ such that $f ( \alpha ) < f ( \alpha_\xi )$ for all $\beta \leq \xi < \gamma$. This clearly contradicts that $f$ is non-decreasing.
If $f ( \alpha ) > \lim_{\xi \to \gamma} f ( \alpha_\xi )$, then by continuity we have that $\lim_{\xi \to \gamma} f ( \alpha_\xi ) < f ( \alpha ) = \lim_{\beta \to \alpha} f ( \beta )$, and therefore there must be a $\delta < \alpha$ such that $\lim_{\xi \to \gamma} f ( \alpha_\xi ) < f ( \beta )$ for all $\delta \leq \beta < \alpha$. But as $\alpha = \lim_{\xi \to \gamma} \alpha_\xi$ there is a $\xi < \gamma$ such that $\delta < \alpha_\xi$. This again contradicts that $f$ is non-decreasing. |
A specific question about a factor theorem proof | It's exactly what's happening.
Observe that $q_1$ is just the constant $1$ polynomial. |
Prove question $(A\setminus B) \cup (B\setminus C) = A\setminus C$ , $ (A\setminus B)\setminus C= A\setminus(B\cup C)$ | For the first one, suppose that $(A \setminus B) \cup (B \setminus C)$ is not empty. Take any $x \in (A \setminus B) \cup (B \setminus C)$. Then either $x \in A \setminus B$ or $x \in B \setminus C$. Note that in this particular case, both cannot be true (why?). If $x \in A \setminus B$, then $x \in A$ and $x \not \in B$. If $x \in B \setminus C$, then $x \in B$ and $x \not \in C$. This does not imply that $x \in A \setminus C$. If $x \in A \setminus B$, one of the possibilities above, then this does not give us any information about whether $x \in C$.
For example, suppose $A = \{1,2,3\},\ B = \{1,2\}$, and $C = \{3\}$. Then $3 \in A \setminus B$ and so $3 \in (A \setminus B) \cup (B \setminus C)$, but $A \setminus C = \{1,2\}$ and so $3 \not \in A \setminus C$. |
Show by mathematical induction that if $n$ is a positive integer, then $(2n)!\lt 2^{2n}(n!)^{2}.$ | Another proof :
$$\frac{(2n)!}{(n!)^2}=\binom{2n}{n}< \sum\limits_{k=0}^{2n} \binom{2n}{k} = 2^{2n}$$ |
Help with a technique in factoring a polynomial of four terms and two variables | You can write your expression as $$x^3+(-4y)^3+(-2)^3-3\cdot x \cdot (-4y)\cdot (-2)$$ which is in the form $$s^3+t^3+u^3-3stu$$
This has a standard factorisation which you can derive as follows:
Let $s,t,u$ be the roots of the cubic $w^3-aw^2+bw-c=0$, so that $a=s+t+u;\ b= st+tu+us;\ c=stu$.
Substitute successively $w=s,t,u$ and add to obtain:
$$(s^3+t^3+u^3)-a(s^2+t^2+u^2)+b(s+t+u)-3c=0$$
Using the expressions for $a,b,c$ and reorganising a little, this becomes $$(s^3+t^3+u^3)-3stu=(s+t+u)(s^2+t^2+u^2-st-tu-su)$$
You can apply this in your case to extract one factor. You might also want to note that the quadratic factor satisfies $$2(s^2+t^2+u^2-st-tu-su)=(s-t)^2+(t-u)^2+(u-s)^2$$ so is non-negative for real $s,t,u$. Over the complex numbers take $\omega ^3=1$ a complex cube root of unity, then using $\omega+\omega^2=-1$ $$(s+\omega t +\omega^2 u)(s+\omega^2 t+\omega u)=s^2+t^2+u^2-st-tu-us$$
I've left you to apply this to your own case. |
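Both the cubic identity and the resulting factorisation can be confirmed with a computer algebra system (a sketch with SymPy; the original four-term expression is assumed to be $x^3-64y^3-24xy-8$, as rewritten above):

from sympy import symbols, expand

s, t, u, x, y = symbols('s t u x y')
identity = (s**3 + t**3 + u**3 - 3*s*t*u
            - (s + t + u)*(s**2 + t**2 + u**2 - s*t - t*u - u*s))
print(expand(identity))  # 0

expr = x**3 - 64*y**3 - 24*x*y - 8
factored = (x - 4*y - 2)*(x**2 + 16*y**2 + 4 + 4*x*y - 8*y + 2*x)
print(expand(expr - factored))  # 0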
Where is the fallacy in this coupling argument of two Bernoulli variables? | Identical distributions are not the same as pointwise almost sure identity.
A mistake is that $-Y_j$ is not what you write but rather $-Y^j = 2\mathbf{1}_{U^j \gt p^j} - 1$. Of course, $-Y^j$ has the same distribution as $2\mathbf{1}_{U^j \leqslant1-p^j} - 1$ since $U^j$ is uniform on $(0,1)$, hence the distributions of $1-U^j$ and $U^j$ coincide, but the pointwise identity in your post fails.
More generally, note that $-X\leqslant X$ almost surely is logically equivalent to $X\geqslant0$ almost surely. And in your setting, $[X\lt0]$ has positive probability...
What happens here is that, simply because $P(Y^j=+1)\geqslant\frac12$ for every $j$, each random variable $-Y^j$ is stochastically dominated by $Y^j$, hence (possibly enlarging the sample space) there exist some independent random variables $(Y'_j)$ such that each $Y'_j$ is distributed like $-Y^j$ and $Y'_j\leqslant Y^j$ almost surely. Then, every $\text{logit}(p^j)$ being nonnegative, $X'=\sum\limits_j\text{logit}(p^j)Y'_j$ is such that $X'$ is distributed like $-X$ and $X'\leqslant X$ almost surely. Thus, $[X\lt0]$ has the same probability as $[X'\gt0]$ and $[X'\gt 0]\subseteq[X\gt0]$. Together, these two assertions imply that $P(X\lt0)\leqslant P(X\gt0)$. |
How to calculate the following integral? | There's a well known trick for this integral:
\begin{align*}
\left(\int_{-\infty}^\infty e^{-\frac{x^2}{2}}\,\mathrm{d}x\right)^2 &= \int_{-\infty}^\infty\int_{-\infty}^\infty e^{-\frac{x^2+y^2}{2}}\,\mathrm{d}x\mathrm{d}y \\
&= \int_0^\infty\int_0^{2\pi} e^{-\frac{r^2}{2}}r\,\mathrm{d}\theta\mathrm{d}r \\
&= 2\pi \left[-e^{-\frac{r^2}{2}}\right]_{0}^\infty \\
&= 2\pi
\end{align*}
Hence, $\int_{-\infty}^\infty e^{-\frac{x^2}{2}}\,\mathrm{d}x = \sqrt{2\pi}$ |
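A quick numerical check (a sketch using SciPy's quad):

from math import exp, inf, pi, sqrt
from scipy.integrate import quad

val, err = quad(lambda x: exp(-x**2 / 2), -inf, inf)
print(val, sqrt(2 * pi))  # both are ~2.5066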
Very Complex Permutation and Combinations with conditions | Denote by $S_n$ the number of admissible strings of length $n$. Then $S_1=36$, $S_2=36^2=1296$. Furthermore the $S_n$ satisfy the following recursion:
$$S_n=35 S_{n-1}+35 S_{n-2}\qquad(n\geq3)\ .$$
Proof. Any admissible string of length $n\geq3$ ends with two different letters or two equal letters. It is obtained by appending one letter to an admissible string of length $n-1$ or a pair of two equal letters to a string of length $n-2$, where in both cases we may choose from $35$ letters.$\quad\square$
The resulting problem can be handled with the "Master Theorem" on difference equations. |
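The recursion can be sanity-checked by brute force over a smaller alphabet (a sketch; admissibility is assumed to mean no three equal letters in a row, which is what the proof uses):

from itertools import product

def brute(q, n):  # count admissible strings of length n over q letters
    return sum(
        all(w[i] != w[i+1] or w[i+1] != w[i+2] for i in range(n - 2))
        for w in product(range(q), repeat=n)
    )

q = 4
S = {1: q, 2: q * q}
for n in range(3, 8):
    S[n] = (q - 1) * (S[n-1] + S[n-2])
    print(n, S[n], brute(q, n))  # the two counts agree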
How can you find out solution for linear differential equation with 2 variables? | Integrating both sides of
$$\frac{S'}{S}=\frac{-1}{r}$$
yields
$$\log |S| = -\log r + C$$
which can be simplified to
$$S = \frac{A}{r}$$
where $A$ is a constant that takes the place of $\pm e^C$.
Integrate once more to get $g(r) = A\log r+B$.
But note that the same answer could be expressed as $g(r) = A\log r^2+B$; the square inside the logarithm only contributes to $A$, which is an indefinite constant anyway.
In terms of $x,y$ the second form is preferable, because it's simpler: $A\log(x^2+y^2)+B$ instead of $A\log\sqrt{x^2+y^2}+B$. |
accepted language notation for CFG for recurring 1's and 0's | First of all, the grammar you wrote down allows any number of $0$'s and $1$'s in any order, as long as the string starts with a $1$. Your regex does not encode this information, so your regex is wrong.
I'm not sure why you are mixing regular expression notation and the "language" notation, but usually people don't combine those.
Note that when you write a regex, the operators allowed are $$( \ ) \ * \ | $$
So, the regex corresponding to your grammar would be
$$R = 1 \ (0 \vert 1) \ (0 \vert 1)^*$$
Since the grammar forces you to have at least one $0$ or $1$ in the $A$ production.
If you are allowed the use of the $+$ operator (which usually stands for one or more occurrences, unlike $*$ which is zero or more occurrences), then your regex can be
$$
R = 1 \ (0 | 1)^+
$$ |
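An empirical check that the regex and the grammar agree (a sketch; the grammar is read as generating exactly the strings over $\{0,1\}$ of length at least two that start with $1$):

import re
from itertools import product

pattern = re.compile(r'1(0|1)+')
in_grammar = lambda w: len(w) >= 2 and w[0] == '1'

for n in range(1, 6):
    for w in (''.join(t) for t in product('01', repeat=n)):
        assert bool(pattern.fullmatch(w)) == in_grammar(w)
print("regex and grammar agree on all strings up to length 5")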
Modifying a Textbook Theorem in Real Analysis | Yes, if $k$ is assumed positive.
Consider fixed but arbitrary $x,y \in \mathbb{R}$ and $k>0$. Suppose
$$\mathop{\forall}_{\varepsilon>0} x \leq y+k\cdot\varepsilon$$
Then using the positivity of $k$, we deduce:
$$\mathop{\forall}_{\varepsilon>0} x \leq y+k\cdot(\varepsilon \cdot k^{-1})$$
So $$\mathop{\forall}_{\varepsilon>0} x \leq y+\varepsilon$$
Hence by the quoted theorem, we have $$x \leq y.$$
We conclude that:
$$\left(\mathop{\forall}_{\varepsilon>0} x \leq y+k \cdot \varepsilon\right) \rightarrow x \leq y$$
In summary:
Proposition.
$$\mathop{\forall}_{k>0}\,\,\mathop{\forall}_{x,y \in \mathbb{R}}\left[\left(\mathop{\forall}_{\varepsilon>0} x \leq y+k \cdot \varepsilon\right) \rightarrow x \leq y\right]$$
Remark. I think there's something wrong with present-day mathematical notation. Notice that it's kind of unclear how positivity was used. A good notation would not suffer from these kinds of problems. |
Prove the following conditional divisibility | Because $n$ is odd (note: for $n=2$, $\frac{a^2+b^2}{a+b}$ need not be an integer) we have $$a^n+b^n=(a+b)(a^{n-1}-a^{n-2}b+a^{n-3}b^2-a^{n-4}b^3+\cdots-a^1b^{n-2}+b^{n-1})$$
We compute $$gcd(a^{n-1}-a^{n-2}b+a^{n-3}b^2-a^{n-4}b^3+\cdots-a^1b^{n-2}+b^{n-1},a+b)=$$
$$gcd(a^{n-1}-a^{n-2}b+a^{n-3}b^2-a^{n-4}b^3+\cdots-a^1b^{n-2}+b^{n-1}\color{red}{-a^{n-2}(a+b)},a+b)=$$
$$gcd(-2a^{n-2}b+a^{n-3}b^2-a^{n-4}b^3+\cdots-a^1b^{n-2}+b^{n-1}\color{red}{+2a^{n-3}(a+b)},a+b)=$$
$$gcd(3a^{n-3}b^2-a^{n-4}b^3+\cdots-a^1b^{n-2}+b^{n-1}\color{red}{-3a^{n-4}(a+b)},a+b)=$$
$$\cdots$$
$$=gcd(nb^{n-1},a+b)$$
However, since $gcd(a,b)=1$, also $1=gcd(a+b,b)=gcd(a+b,b^{n-1})$. Hence we can reduce the original to $gcd(n,a+b)$. Since $n$ is prime this is either $1$ or $n$. |
How can $|\mathbb{N}| = |\mathbb{Z}|$? | Let's consider an example that is a bit easier to deal with:
\begin{align}
A &= \{1,2,3,4,\dots\}, \\
B &= \{0,1,2,3,\dots\}.
\end{align}
It's easy to see that $A\subsetneq B$ and yet $A\to B, k \mapsto k-1$ is a bijection.
The conclusion that $|A|<|B|$ doesn't work for infinite sets, since there are always elements to "fill the gaps". In this example, removing $0$ from $B$ you get a gap at $0$, but $1$ can fill that gap. Now there's a gap at $1$, but $2$ can fill that gap, … Since this "gap filling process" never ends, there is no gap that can't be filled and we get $|A|=|B|$. |
solutions to $\int_{-\infty}^\infty \frac{1}{x^n+1}dx$ for even $n$ | For any complex number $\alpha$ such that $0<\text{Re}(\alpha)<1$, let
$$I(\alpha):=\int_0^\infty\,\frac{t^{\alpha-1}}{t+1}\,\text{d}t\,,$$
where the branch cut of the map $z\mapsto z^{\alpha-1}$ is taken to be the positive-real axis.
For a real number $\epsilon\in(0,1)$, consider the positively oriented keyhole contour $\Gamma_\epsilon$ given by
$$\begin{align}\left[\epsilon\,\exp(\text{i}\epsilon),\frac{1}{\epsilon}\,\exp(\text{i}\epsilon)\right]&\cup\left\{\frac{1}{\epsilon}\,\exp(\text{i}t)\,\Big|\,t\in[\epsilon,2\pi-\epsilon]\right\}
\\&\cup\left[\frac{1}{\epsilon}\,\exp\big(\text{i}(2\pi-\epsilon)\big),\epsilon\,\exp\big(\text{i}(2\pi-\epsilon)\big)\right]\cup\Big\{\epsilon\,\exp(\text{i}t)\,\Big|\,t\in[2\pi-\epsilon,\epsilon]\Big\}\,.\end{align}$$
Then,
$$\lim_{\epsilon\to0^+}\,\oint_{\Gamma_\epsilon}\,\frac{z^{\alpha-1}}{z+1}\,\text{d}z=I(\alpha)-\exp\big(2\pi\text{i}(\alpha-1)\big)\,I(\alpha)=-2\text{i}\,\exp(\pi\text{i}\alpha)\,\sin(\pi\alpha)\,I(\alpha)\,.$$
Via the Residue Theorem,
$$\lim_{\epsilon\to0^+}\,\oint_{\Gamma_\epsilon}\,\frac{z^{\alpha-1}}{z+1}\,\text{d}z=2\pi\text{i}\,\text{Res}_{z=-1}\left(\frac{z^{\alpha-1}}{z+1}\right)=2\pi\text{i}\,(-1)^{\alpha-1}=-2\pi\text{i}\,\exp(\pi\text{i}\alpha)\,.$$
Hence, $$\int_0^\infty\,\frac{t^{\alpha-1}}{t+1}\,\text{d}t=I(\alpha)=\frac{\pi}{\sin(\pi\alpha)}\,.$$
Now, take $\alpha:=\dfrac1n$ for some integer $n\geq 2$. Then,
$$\frac{\pi}{\sin\left(\frac{\pi}{n}\right)}=\int_0^\infty\,\frac{t^{\frac1n-1}}{t+1}\,\text{d}t=\int_0^\infty\,\frac{n}{x^n+1}\,\text{d}x\,,$$
by setting $x:=t^{\frac1n}$. This proves the equality
$$\int_0^\infty\,\frac{1}{x^n+1}\,\text{d}x=\frac{\pi}{n\,\sin\left(\frac{\pi}{n}\right)}\,.$$ If $n$ is even, then
$$\int_{-\infty}^{+\infty}\,\frac{1}{x^n+1}\,\text{d}x=2\,\int_0^\infty\,\frac{1}{x^n+1}\,\text{d}x=\frac{2\pi}{n\,\sin\left(\frac{\pi}{n}\right)}=\frac{\pi}{\frac{n}{2}\,\sin\left(\frac{\pi}{n}\right)}\,.$$
In fact, it can be seen that $I(\alpha)=\text{B}(\alpha,1-\alpha)=\Gamma(\alpha)\,\Gamma(1-\alpha)$, where $\text{B}$ and $\Gamma$ are the usual beta and gamma functions, respectively. Therefore, this gives a proof of the Reflection Formula for complex numbers $\alpha$ such that $0<\text{Re}(\alpha)<1$. Then, using analytic continuation, we prove the Reflection Formula for all $\alpha\in\mathbb{C}$. |
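A numerical verification of the final formula for a few even $n$ (a sketch with SciPy):

from math import pi, sin
from scipy.integrate import quad

for n in (2, 4, 6, 8):
    val, _ = quad(lambda x: 1 / (x**n + 1), -float('inf'), float('inf'))
    print(n, val, pi / ((n / 2) * sin(pi / n)))  # the two columns agree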
Closed form for binomial sum with absolute value | If we assume that $n$ is even, $n=2m$, our sum times $2^n$ equals:
$$ \sum_{j=0}^{m-1}\binom{2m}{j}(2m-2j)+\sum_{j=m+1}^{2m}\binom{2m}{j}(2j-2m) =\sum_{j=0}^{m-1}\binom{2m}{j}(4m-4j)$$
where:
$$\sum_{j=0}^{m-1}\binom{2m}{j} = \frac{4^m-\binom{2m}{m}}{2}$$
and:
$$\sum_{j=0}^{m-1}\binom{2m}{j}j = 2m\sum_{j=0}^{m-2}\binom{2m-1}{j}=2m\cdot\frac{2^{2m-1}-\binom{2m}{m}}{2}.$$
The case $n=2m+1$ can be managed in a similar way. |
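A quick check of these pieces against the direct sum (a sketch; the sum in question is assumed to be $2^{-n}\sum_j \binom{n}{j}\,|n-2j|$):

from math import comb

for m in range(1, 8):
    direct = sum(comb(2*m, j) * abs(2*m - 2*j) for j in range(2*m + 1))
    first = (4**m - comb(2*m, m)) // 2
    second = 2*m * (2**(2*m - 1) - comb(2*m, m)) // 2
    assert direct == 4*m*first - 4*second
print("pieces match the direct sum for m = 1..7")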
What is the maximum value of the function x times cos(x) | From
$$(x\cos x)'=\cos x-x\sin x=0$$ you know that the local extrema occur at the roots of this transcendental equation. They have no closed-form solution and you need to resort to numerical methods, but there are infinitely many of them.
You can rewrite the equation
$$x=\cot x$$ and observe that when $x$ is large, $x$ must be close to a multiple of $\pi$, let $k\pi+\delta$. Then
$$k\pi+\delta=\cot(k\pi+\delta)\approx\frac1\delta-\frac\delta3$$ from the Laurent series.
This gives the solutions
$$\delta=\frac{-3k\pi\pm3\sqrt{k^2\pi^2+\frac{16}3}}8.$$
The value for $k=0$ is $0.866$, not too bad. |
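A quick numerical cross-check of the $k=0$ root (a sketch; plain bisection on $\cos x-x\sin x$):

from math import cos, sin

g = lambda x: cos(x) - x * sin(x)
lo, hi = 0.5, 1.5  # g(0.5) > 0 > g(1.5)
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
print(lo)  # ~0.8603, close to the 0.866 approximation above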
Countability of certain subset of $\mathbb{R}$ | Hint: Denote $I_{n}=(\frac{1}{\sqrt{n}},1)$ for all $n$. Note that if we have infinitely many $x_{k}\in S\cap I_{n}$ for $k\in\mathbb{N}$, then
\begin{align*}
1>x_{1}^{2}+...+x_{k}^{2}=\sum_{i=1}^{k}x_{i}^{2}\geq \sum_{i=1}^{k}\frac{1}{n}
\end{align*}
for all $k\in\mathbb{N}$.
And $(0,1)$ is a countable union of the sets $I_{n}$. |
Derivation of statistical test for equality of two regression slopes | Under $H_0$, $\alpha = \beta$, and given that the $\epsilon_i$, $i=1,2$, follow a normal distribution, you have
$$
\frac{ \hat{\alpha} - \hat{\beta} }{\sigma_{\hat{\alpha} - \hat{\beta} }}\sim N (0,1),
$$
where $\sigma_{\hat{\alpha} - \hat{\beta} } = \sigma\sqrt{\frac{(n_2 - 2) S^2_{x_2} + (n_1 - 2)S^2_{x_1}}{ S^2_{x_1} S^2_{x_2} }}$.
Note that (abusing notation, but you will get the idea) $$
S_{x_1}^2 + S_{x_2}^2 = \sigma^2\left(
\frac{\chi^2_{n_1-2}}{n_1-2}
+
\frac{\chi^2_{n_2-2}}{n_2-2}
\right).
$$
Recall that for independent r.v.s $\chi^2_{n_1-2} + \chi^2_{n_2-2} = \chi ^2 _{n_1 + n_2 - 4}.$ Try to work out the algebra to show that
$$
\frac{ \hat{\alpha} - \hat{\beta} }{\sigma_{\hat{\alpha} - \hat{\beta} }}
/\sqrt{\frac{ \chi^2_{n_1 + n_2 - 4} } { n_1 + n_2 - 4} }
\sim t_{n_1 + n_2 - 4}.
$$ |
Prove by contraposition help. | In general, the contrapositive of a conditional:
'If P then Q'
is the statement:
'If not Q then not P'
Applied to your statement, we would thus get:
'If $n$ is a perfect square, then $n$ is not a positive integer such that $n \equiv 2 \pmod{4}$ or $n \equiv 3 \pmod{4}$'
... But somehow I doubt that's what they meant. In fact, the original statement was probably meant as:
'For any positive integer $n$, it holds that if $n \equiv 2 \pmod{4}$ or $n \equiv 3 \pmod{4}$, then $n$ is not a perfect square'
So then by taking the contrapositive of the contrapositive of the conditional that is part of that general statement about positive integers, we get:
'For any positive integer $n$, it holds that if $n$ is a perfect square, then it is not the case that $n \equiv 2 \pmod{4}$ or $n \equiv 3 \pmod{4}$'
...which makes a lot more sense.
Indeed, to prove this statement:
Take $n$ to be a positive integer and assume it is a perfect square. So, $n=k^2$ with $k$ an integer. $k$ is either even or odd. If $k$ is even, then $k=2m$ for some integer $m$, and so $n=(2m)^2=4m^2$. Hence, $n \equiv 0 \pmod{4}$. If $k$ is odd, then $k=2m+1$ for some integer $m$, and so $n=(2m+1)^2=4m^2+4m+1$, and hence $n \equiv 1 \pmod{4}$. So, it is not the case that $n \equiv 2 \pmod{4}$ or that $n \equiv 3 \pmod{4}$. |
Why does the Ratio Test prove that this particular sequence converges on 0? | To say that a sequence $\{b_n\}_{n \in \mathbb{N}}$ converges to a limit $L$ is to say that, for all $\varepsilon > 0$, there is an $N \in \mathbb{N}$ such that $n > N$ implies $|b_n - L| < \varepsilon$.
Rephrased into colloquial English: For any positive number $\varepsilon$, the sequence of those $b$-subscript elements eventually gets within $\varepsilon$ of $L$, and stays at least that close to the limit. The key word in this description is the same term that you italicized:
Naturally, eventually the ratio will be less than $1$, but doesn't the theorem kind of imply that the ratio of any two consecutive elements of the sequence will have a ratio greater than $−1$ and less than $1$?
No, the theorem does not say this. The theorem's hypothesis is precisely that the sequence made from a ratio of consecutive terms converges to $l$, which means that eventually this sequence (i.e., these ratios) will be within $\varepsilon$ of $l$. |
Lebesgue measure has the Darboux property | I assume $A$ is defined to be a subset of $\mathbb{R}$.
Define $f(x) = \lambda(A \cap [-x,x])$. Then $f(x)$ is continuous, $f(0) = 0$, and $\lim_{x \to +\infty} f(x) = \lambda(A)$, so by the intermediate value theorem, there is some value of $x$ for which $f(x) = b$. |
$\lim_{n\rightarrow \infty} \binom{n}{3} \frac{1}{n^{1.1}}$ | $$\lim_{n \to \infty} \frac{n^3+O(n^2)}{6n^{1.1}}= \lim_{n \to \infty} \frac{n^{1.9}+O(n^{0.9})}{6}=\infty$$ |
tangent line to a circle is perpendicular to the radius | Consider the point-slope formula:
$$\left( y-y_1 \right)=m\left( x-x_1 \right).$$
This is nothing more than a consequence of the definition of slope. More specifically,
$$m=\frac{\left( y_2-y_1 \right)}{\left( x_2-x_1 \right)}\Rightarrow \left( y_2-y_1 \right)=m\left( x_2-x_1 \right).$$
If we are in a situation where we already know the slope $m$ and we already know a point $( x,y )$, we can drop one of the points in that last form (the $_2$ in $x_2$ and $y_2$ for example) and write the point-slope formula as above.
Using the point-slope form
$$(y−y_1)=m(x−x_1),$$
with $m=\frac{3}{4}$, and the point $(1,-3)$, we substitute our given $m$, $x$, and $y$ to get:
$$(y−(−3))=\frac{3}{4}(x−1).$$
If you want to convert this to the slope intercept form then the process is
\begin{align*}
&(y−(−3))=\frac{3}{4}(x−1) \\
\Rightarrow & y+3=\frac{3}{4}x-\frac{3}{4} \\
\Rightarrow & y=\frac{3}{4}x-\frac{3}{4}-3 \\
\Rightarrow & y=\frac{3}{4}x-\frac{15}{4}.
\end{align*}
Note that I still recommend my original solution where we start with the slope-intercept form. It is much faster (and seems much more sensible to me), although I have noticed a strange reluctance to demonstrate that method directly in the textbook (Bittinger) that I am currently using for my classes. |
The estimation for $\int_{\tau}^\infty \frac{e^{t({1/2-\epsilon})}}{\sqrt{\cosh(t)-\cosh(\tau)}}dt$ | First of all, we have
$$\frac{\sinh(h)}{h} = \sum_{n=0}^\infty \frac{h^{2n}}{(2n+1)!} \ge 1$$
for all $h \in \mathbb{R}$. Now, using the fundamental theorem of calculus, we find that
$$\cosh(t)-\cosh(\tau) = \int_{\tau}^t \sinh(h) \, dh \geq \int_{\tau}^{t} h \, d h = \frac{1}{2} (t^2-\tau^2).$$
Next, we consider the integral over the subregion $[\tau,\tau+1]$: Inserting the latter inequality we obtain
$$
\int_{\tau}^{\tau+1} \frac{e^{t(1/2-\varepsilon)}}{\sqrt{\cosh(t)-\cosh(\tau)}} dt \leq 2^{1/2} e^{(\tau+1)/2} \int_{\tau}^{\tau+1} \frac{1}{\sqrt{t^2-\tau^2}} dt
$$
To evaluate the integral, we note that $f(t) := \sqrt{t^2 - \tau^2}$ is a bijective mapping from $[\tau,\tau+1]$ onto $[0,\sqrt{2\tau+1}]$ and hence
\begin{align*}
\int_{\tau}^{\tau+1} \frac{1}{\sqrt{t^2-\tau^2}} dt &= \int_0^{\sqrt{2 \tau+1}} \frac{1}{\sqrt{s^2+\tau^2}} ds \\
&= \int_0^{\sqrt{2 \tau+1}/\tau} \frac{1}{\sqrt{s^2+1}} ds = \mathrm{arsinh} \bigg( \frac{\sqrt{2\tau+1}}{\tau} \bigg).
\end{align*}
To end this, we recall that
$$\mathrm{arsinh}(x)= \log \left(x+ \sqrt{x^2+1}\right)$$
and thus
$$ \mathrm{arsinh}\bigg( \frac{\sqrt{2\tau+1}}{\tau} \bigg) = \log \bigg( \sqrt{2 \tau^{-1} + \tau^{-2}} + 1 + \tau^{-1}\bigg) \ll |\log{\tau}|,$$
as can be easily checked. (Here we use Vinogradov's notation $A \ll B$, meaning that $A \le c B$ for some $c>0$.) For $t \geq 1 + \tau$ one has $\sinh(h)/e^h \ge C:= (1-e^{-2})/2$ and therefore
$$\cosh(t)-\cosh(\tau) = \int_{\tau}^t \sinh(h) \, dh \geq C( e^{t} - e^{\tau}),$$
Hence, we get
$$\int_{\tau+1}^\infty \frac{e^{t(1/2-\varepsilon)}}{\sqrt{\cosh(t)-\cosh(\tau)}} dt \le \frac{1}{\sqrt{C}} \int_{\tau+1}^\infty \frac{e^{-t\varepsilon}}{\sqrt{1-e^{\tau-t}}} dt \ll \frac{e^{-\varepsilon(\tau+1)}}{\varepsilon}.$$
Taking both together, we obtain that
$$\tag{1}\int_{\tau}^{\infty} \frac{e^{t(1/2-\varepsilon)}}{\sqrt{\cosh(t)-\cosh(\tau)}} dt \leq C_\varepsilon (1+ |\log\tau|)$$
for all, say, $\tau \leq 1$.
If $\tau \ge 1$, then the second inequality holds here as well. For the first region we have $t \ge 1$ and thus $\cosh(t) - \cosh(\tau) \gg e^t - e^\tau$. So, we can apply the second argument in the form
$$
\int_{\tau}^{\tau+1} \frac{e^{t(1/2-\varepsilon)}}{\sqrt{\cosh(t)-\cosh(\tau)}} dt \ll \int_\tau^{\tau+1} \frac{e^{t(1/2-\varepsilon)}}{\sqrt{e^t - e^\tau}} dt \leq \int_\tau^{\tau+1} \frac{1}{\sqrt{1- e^{\tau-t}}} dt
$$
and this can be further rewritten as
$$\int_0^1 \frac{1}{\sqrt{1-e^{-t}}} dt = \int_{e^{-1}}^1 \frac{1}{x \sqrt{1-x}} dx \ll \int_{e^{-1}}^1 \frac{1}{\sqrt{1-x}} dx \ll 1.$$
Thus, the inequality (1) holds for all $\tau >0$ as claimed. |
String permutations: ways to reorder DONALDDUCK with restrictions: | The last letter has to be $D$.
Hence the question is equivalent to
How many ways are there to reorder the string ONALDDUCK when the first letter can't be K
Consider the number of possible arrangements with no restriction. The answer would be $\frac{9!}{2}$.
Now, consider the number of possible arrangements where the first letter must be $K$. We have $\frac{8!}{2}$.
$$\frac{9!-8!}{2}=\frac{8 \cdot 8!}{2}$$ |
Rotations of sphere $\mathbb S^2$ | The first rotation is the reflection through $L$ followed by reflection through $M$. For the second, it's $M$ followed by $N$. The $M$'s cancel, and you're left with a rotation with axis $OR$. |
Are there non-trivial logics that exhibit soundness and completeness that are not first order? | Henkin showed that the higher-order logic known as Church's simple type theory is sound and complete for what are now called Henkin models. Henkin's paper Completeness in the Theory of Types is not very long and is very readable (once you know that Church and Henkin write $\alpha\beta$ for the type of functions from type $\beta$ to type $\alpha$, which we would now write as $\beta \rightarrow \alpha$).
Surprisingly, because higher-order logic is so much more expressive than first-order logic, Henkin's completeness proof is in many ways a lot simpler than the proof of completeness for first-order logic. |
How to work out the formula that connects several numbers | If you think there is a linear relationship between the $a, b, c$, etc., and $x$, then you could find the least-squares solution to the system of equations $\mathbf {Ay = X}$. The matrix $\mathbf A$ will consist of rows of the form $[a_i\ b_i\ c_i \ldots]$, and $\mathbf X$ is a column vector containing the values $x_i$. The vector $\mathbf y$ corresponds to the weights in your weighted average.
The system $\mathbf {Ay = X}$ does not necessarily have a solution, but you can find the "best fit" by multiplying both sides by $\mathbf A^t$ and solving the resulting system; i.e., $\mathbf {A}^t\mathbf{Ay} = \mathbf{A}^t\mathbf{X}$.
Thus the best-fit solution for your weights is $\mathbf{\hat y} = (\mathbf{A}^t\mathbf{A})^{-1}\mathbf{A}^t\mathbf{X}$. |
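A sketch of this computation with NumPy (the data are made up; `np.linalg.lstsq` computes the same least-squares solution more stably than forming $(\mathbf{A}^t\mathbf{A})^{-1}$ explicitly):

import numpy as np

# each row is one observation (a_i, b_i, c_i); x holds the x_i
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 1.0, 0.0],
              [3.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
x = np.array([2.1, 1.0, 2.6, 1.1])

weights, *_ = np.linalg.lstsq(A, x, rcond=None)
print(weights)      # the best-fit weights y
print(A @ weights)  # the reconstructed x values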
Determine the irreducible polynomial for $\alpha=\sqrt{3}+\sqrt{5}$ over $\mathbb{Q}(\sqrt{10})$ | Suppose by contradiction that
$$\sqrt{2}=a+b\sqrt{3}+c\sqrt{5}+d \sqrt{15} \,.$$
Squaring both sides and using the fact that $1, \sqrt{3}, \sqrt{5}, \sqrt{15}$ are linearly independent over $\mathbb Q$, you get
$$2=a^2+3b^2+5c^2+15d^2 \,.$$
$$ab+5cd =0 \,.$$
$$ac+3bd=0 \,.$$
$$ad+bc=0 \,.$$
From the last two equations we get
$$3bd^2=-acd=bc^2 \,.$$
If $b\neq 0$, then $3d^{2}=c^{2}$, which forces $c=d=0$ (since $\sqrt{3}\notin\mathbb Q$); but then $ab+5cd=0$ gives $a=0$, and $2=a^{2}+3b^{2}=3b^{2}$ has no rational solution. Hence $b=0$.
It follows that
$$2=a^2+5c^2+15d^2 \,.$$
$$5cd =0 \,.$$
$$ac=0 \,.$$
$$ad=0 \,.$$
From the last two equations it follows that two of $a,c,d$ must be $0$, and then you get from the first equation that $x^2 \in \{ 2, \frac{2}{5}, \frac{2}{15} \}$ where $x$ is the nonzero one... Contradiction with $x \in \mathbb Q$. |
Is this the correct way of drawing a combinatorial circuit based on the disjunctive normal form and logic table? | The "Logic Gate" diagram is okay for what you have except that is not how you draw an "or-gate".
However, your function is not okay. Notice that you have $f(1,1,0) = (1\land 1\land 0)\lor(0\land 0 \land 0)$ which is $0$. You need it to be $1$.
$$f(X,Y,Z) ~\ne~ (X \land Y \land Z) \lor (\lnot X \land \lnot Y \land Z)$$
It is close though. (Maybe a typo?) |
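A one-line check of the offending row (a sketch in Python; $1$ and $0$ stand for true and false):

f = lambda X, Y, Z: (X and Y and Z) or ((not X) and (not Y) and Z)
print(f(1, 1, 0))  # False, although the table reportedly requires 1 here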
Find numbers a, b, so that gcd(Fₙ, Fₙ₊₁) = aFₙ + bFₙ₊₁ holds using Euclidean Algorithm | Here you want to replace $a$ and $b$ with $a_n$ and $b_n$. We have
$$a_nF_{n} + b_n F_{n+1}=gcd(F_n, F_{n+1}) =gcd(F_{n-1}, F_n)=a_{n-1}F_{n-1}+ b_{n-1}F_n$$
Replace $F_{n+1}$ with $F_n+F_{n-1}$, we get
$$(a_n + b_n - b_{n-1})F_n + (b_n-a_{n-1})F_{n-1}=0$$
If we let $a_n + b_n - b_{n-1}=0$ and $b_n-a_{n-1}=0$, we can get an $F$-type sequence again. For example, replacing $a_n$ in the first equation with $b_{n+1}$ and letting $c_n=(-1)^nb_n$, we get $$c_{n+1}=c_n+c_{n-1}$$
We can let $b_0=1$ and $b_1=-1=a_0$; then it can be shown that $c_n=F_{n+1}$ (assume $F_0=0$, $F_1=1$), and thus $$b_n=(-1)^{n}F_{n+1}$$ and $$a_n = (-1)^{n+1} F_{n+2}.$$
And we are looking at a famous identity $$F_{n+1}^2-F_nF_{n+2} = (-1)^{n}gcd(F_n, F_{n+1})=(-1)^{n}$$
Hope you feel this is interesting. |
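A short loop confirming the coefficients (a sketch; $F_0=0$, $F_1=1$ as above):

from math import gcd

F = [0, 1]
for _ in range(20):
    F.append(F[-1] + F[-2])

for n in range(1, 15):
    a = (-1) ** (n + 1) * F[n + 2]
    b = (-1) ** n * F[n + 1]
    assert a * F[n] + b * F[n + 1] == gcd(F[n], F[n + 1]) == 1
print("identity verified for n = 1..14")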
Find the upper and lower Riemann sums $U(f,P)$ and $L(f,P)$ for discontinuous function | Why do you say $L = 1/4$? You need to find $\inf f(x)$ for $x$ in those intervals. $f(x) = 0$ whenever $x$ is irrational, and there are irrational numbers in there, so the inf has to be $\leq 0$. In this case it is $0$, and so $L=0$. |
Find value for equation to be contained in interval | For $x=\pi$, $$\pi-\frac{\pi-M}{1+\epsilon}\in[0,\pi] \\ \iff \frac{\pi-M}{1+\epsilon} \in [0,\pi] $$ But this is true as $$\frac{\pi-M}{1+\epsilon} \le \frac{\pi-0}{1+0} = \pi $$ and $$\ge \frac{\pi-\pi}{1+1} = 0$$ |
Is $E[Bin(X,p)]=E[X]p$? | $E[Y] = E[E[Y|X]]$
The conditional expectation is $pX$. Then, $E[pX] = pE[X] = \mu p$ |
Proving $\left( 1-\frac{2}{n} \right )^{\frac {n\ln n}{4}}-\left( 1-\frac{1}{n} \right )^{\frac {2n\ln n}{4}}<0$ | The hint:
$$\frac{\left(1-\frac{1}{n}\right)^2}{1-\frac{2}{n}}>1.$$
Thus, $$\left(\frac{\left(1-\frac{1}{n}\right)^2}{1-\frac{2}{n}}\right)^{\frac{n\ln{n}}{4}}>1.$$ |
Derivative of sigmoid function that contains vectors | You have $$w^Tx=\sum_{i=1}^D w_ix_i$$ For the derivative with respect to $w_i$ you can write the function as $$\frac 1{1+e^{-\sum_{j=1}^D w_jx_j}}=\frac 1{1+e^{-\sum_{j=1,j\ne i}^D w_jx_j}e^{-w_ix_i}}$$
The term with the sum does not contain $w_i$, so you can consider it a constant when you take the derivative. |
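Carrying the differentiation through (a standard computation, stated here for completeness, with $\sigma(z)=1/(1+e^{-z})$ denoting the sigmoid):
$$\frac{\partial}{\partial w_i}\,\sigma(w^Tx)=\sigma(w^Tx)\bigl(1-\sigma(w^Tx)\bigr)x_i.$$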
Laurent series of f(z) | Note: Regarding your question, the series we take is important. Here I provide a complete Laurent expansion around $z=0$. From this you should be able to check our results (which slightly differ).
The function
\begin{align*}
f(z)&=\frac{1}{4}\frac{1}{z+\frac{5}{4}}+\frac{1}{2}\frac{1}{z+\frac{3}{2}}\\
\end{align*}
has two simple poles at $-\frac{5}{4}$ and $-\frac{3}{2}$.
Since we want to find a Laurent expansion with center $0$, we look at the poles $-\frac{5}{4}$ and $-\frac{3}{2}$ and see they determine three regions.
\begin{align*}
|z|<\frac{5}{4},\qquad\quad
\frac{5}{4}<|z|<\frac{3}{2},\qquad\quad
\frac{3}{2}<|z|
\end{align*}
The first region $ |z|<\frac{5}{4}$ is a disc with center $0$, radius $\frac{5}{4}$ and the pole $-\frac{5}{4}$ at the boundary of the disc. In the interior of this disc both fractions, with poles $-\frac{5}{4}$ and $-\frac{3}{2}$, admit a representation as power series at $z=0$.
The second region $\frac{5}{4}<|z|<\frac{3}{2}$ is the annulus with center $0$, inner radius $\frac{5}{4}$ and outer radius $\frac{3}{2}$. Here we have a representation of the fraction with pole $-\frac{5}{4}$ as principal part of a Laurent series at $z=0$, while the fraction with pole at $-\frac{3}{2}$ admits a representation as power series.
The third region $|z|>\frac{3}{2}$ containing all points outside the disc with center $0$ and radius $\frac{3}{2}$ admits for all fractions a representation as principal part of a Laurent series at $z=0$.
A power series expansion of $\frac{1}{z+a}$ at $z=0$ is
\begin{align*}
\frac{1}{z+a}&=\frac{1}{a}\cdot\frac{1}{1+\frac{z}{a}}\\
&=\sum_{n=0}^{\infty}\frac{1}{a^{n+1}}(-z)^n
\end{align*}
The principal part of $\frac{1}{z+a}$ at $z=0$ is
\begin{align*}
\frac{1}{z+a}&=\frac{1}{z}\cdot\frac{1}{1+\frac{a}{z}}=\frac{1}{z}\sum_{n=0}^{\infty}\frac{a^n}{(-z)^n}
=-\sum_{n=0}^{\infty}\frac{a^n}{(-z)^{n+1}}\\
&=-\sum_{n=1}^{\infty}\frac{a^{n-1}}{(-z)^n}
\end{align*}
We can now obtain the Laurent expansion of $f(z)$ at $z=0$ for all three regions.
Region 1: $|z|<\frac{5}{4}$
\begin{align*}
f(z)&=\frac{1}{4}\sum_{n=0}^{\infty}\left(\frac{4}{5}\right)^{n+1}(-z)^n
+\frac{1}{2}\sum_{n=0}^{\infty}\left(\frac{2}{3}\right)^{n+1}(-z)^n\\
&=\sum_{n=0}^{\infty}\left(\frac{4^n}{5^{n+1}}+\frac{2^{n}}{3^{n+1}}\right)(-z)^n
\end{align*}
Region 2: $\frac{5}{4}<|z|<\frac{3}{2}$
\begin{align*}
f(z)&=-\frac{1}{4}\sum_{n=1}^{\infty}\left(\frac{5}{4}\right)^{n-1}\frac{1}{(-z)^n}
+\frac{1}{2}\sum_{n=0}^{\infty}\left(\frac{2}{3}\right)^{n+1}(-z)^n\\
&=-\sum_{n=1}^{\infty}\frac{5^{n-1}}{4^n}\frac{1}{(-z)^n}+\sum_{n=0}^{\infty}\frac{2^n}{3^{n+1}}(-z)^n
\end{align*}
Region 3: $\frac{3}{2}<|z|$
\begin{align*}
f(z)&=-\frac{1}{4}\sum_{n=1}^{\infty}\left(\frac{5}{4}\right)^{n-1}\frac{1}{(-z)^n}
-\frac{1}{2}\sum_{n=0}^{\infty}\left(\frac{3}{2}\right)^{n-1}\frac{1}{(-z)^n}\\
&=-\sum_{n=1}^{\infty}\left(\frac{5^{n-1}}{4^n}+\frac{3^{n-1}}{2^n}\right)\frac{1}{(-z)^n}
\end{align*} |
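The Region 1 expansion can be checked with SymPy (a sketch comparing the first few coefficients):

from sympy import Rational, series, symbols

z = symbols('z')
f = Rational(1, 4) / (z + Rational(5, 4)) + Rational(1, 2) / (z + Rational(3, 2))

taylor = series(f, z, 0, 5).removeO()
for n in range(5):
    predicted = (Rational(4)**n / 5**(n + 1) + Rational(2)**n / 3**(n + 1)) * (-1)**n
    assert taylor.coeff(z, n) == predicted
print("first five coefficients agree")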
Is there a simpler way of categorizing all the subgroups of $D_n$? | I thought to give my question another try today. For what it's worth, the following seems somewhat shorter and makes use of more elementary notions:
We'll consider two different types of subgroups: those that contain reflections (elements of the form $r^xs$), and those that do not.
If $H \leq D_n$ doesn't contain a reflection, then it is a subgroup of $\langle r \rangle$, and is thus equal to $\langle r^d \rangle$ where $d|n$.
Suppose $H \leq D_n$ does contain a reflection. Then since $1=r^n \in H$, $H \cap \langle r \rangle \neq \emptyset$. Let $d$ be the smallest positive number such that $r^d \in H$. If $r^e \in H-\langle r^d \rangle$, then writing $e=dk+f$ with $0<f<d$ gives $r^f \in H$, which contradicts $d$ being minimal. Thus $H=\langle r^d \rangle \cup \{ \text{reflections} \}$. Let $r^xs \in H$ so that $x$ is again minimal. We can see $H$ contains
$$\{ r^{dk},r^{dk+x}s:0\leq k \leq \frac{n}{d}-1 \}$$
and if $H$ contains another reflection $r^ys$ with $0\leq y<n$, then $r^yss^{-1}r^{-x} = r^{y-x} \in \langle r^d \rangle$ implies $y=dh+x$. Therefore $h=\frac{y-x}{d}<\frac{n}{d}$ and
$$H=\{ r^{dk},r^{dk+x}s:0\leq k \leq \frac{n}{d}-1 \}.$$
One can then use the same argument as Konrad to show this set is equal to $\langle r^d,r^is \rangle$ where $d|n$ & $0\leq i \leq d-1$. If $\langle r^d, r^is \rangle = \langle r^e, r^js \rangle$ then by the argument presented before $d=e$, so all such sets are different. Furthermore $\langle r^d \rangle \neq \langle r^e, r^js \rangle$ since only the second one contains a reflection, so the two lists of subgroups are disjoint. |
How to have idea to prove trigonometric identities | You had the right idea. Let's just follow through with it.
When you get to here
$$\cot^2(a)\cdot \frac{1}{\sin^2(b)}-\frac 1{\sin^2(a)} = \cot^2(a)\csc^2(b)-\csc^2(a)$$
you should take a look at what you're trying to get to.
We need to get something minus one, that tells us we should look at the Pythagorean identity:
$$\cos^2(a)+\sin^2(a) =1 \\ \implies \cot^2(a)+1=\csc^2(a)$$
We get the second equation by dividing the first by $\sin^2(a)$.
Plugging this in for $\csc^2(a)$ we get
$$\cot^2(a)\csc^2(b)-\csc^2(a) = \cot^2(a)\csc^2(b)-\cot^2(a)-1$$
Factor the $\cot^2(a)$ from the first two terms to get
$$\cot^2(a)\csc^2(b)-\cot^2(a)-1 = \cot^2(a)(\csc^2(b)-1)-1$$
Then from that same identity from before we see that $\csc^2(b)-1=\cot^2(b)$ and we're done. |
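A one-line check with SymPy (a sketch; it verifies $\cot^2(a)\csc^2(b)-\csc^2(a)=\cot^2(a)\cot^2(b)-1$):

from sympy import cot, csc, simplify, symbols

a, b = symbols('a b')
print(simplify(cot(a)**2 * csc(b)**2 - csc(a)**2 - (cot(a)**2 * cot(b)**2 - 1)))  # 0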
Equilateral triangle trisected | Use the Law of Cosines:
\begin{align}|AD|^2
&=|AB|^2 +|BD|^2-2|AB|\cdot |BD|\cos(60^{\circ})\\
&=|AB|^2\left(1+\frac{1}{3^2}-2\cdot\frac{1}{3}\cdot\frac{1}{2}\right)\\
&=\frac{7}{9}\cdot |AB|^2.
\end{align} |
Find the value of the constant $a$ which minimizes $E[(Y-aX)^2]$ | In the OP it says
$2aE[X^2] + 2E[XY] = 0\\a = \frac {E[X^2]}{E[XY]}$
This is inverted (and the sign of the second term is off); it should say
$2aE[X^2] - 2E[XY] = 0\\a = \frac {E[XY]}{E[X^2]}$
$\mu_X = E[X]\\
\sigma_X^2 = E[X^2] - E[X]^2\\
E[X^2] = \sigma^2_X + \mu^2_X\\
\text{cov}_{X,Y} = E[XY] - E[X]E[Y]\\
E[XY] = \text {cov}_{X,Y} + \mu_X\mu_Y$
$a = \frac {E[XY]}{E[X^2]} = \frac {(\text{cov}_{X,Y} + \mu_X\mu_Y)}{(\sigma^2_X + \mu^2_X)}$ |
What is the area of the triangle? | $$|\triangle PBC| = \frac12|CP||BQ| = \frac12k^2$$ |
Proving $f$ is Lebesgue integrable iff $|f|$ is Lebesgue integrable. | That $f$ is measurable and $\int |f| \,d\mu < \infty$ is the definition of Lebesgue-integrability of $f$. (There's no need for $f$ to be non-negative. Rather, one first defines the value of the integral for non-negative functions and then one uses that to define the Lebesgue integral of functions whose ranges may include both negative and non-negative numbers.)
That $|f|$ is Lebesgue-integrable would therefore involve the absolute value of the absolute value of $f$. But that's the same as the absolute value of $f$.
The statement "$f$ is Lebesgue integrable if and only if $|f|$ is Lebesgue integrable." is not true unless there is an assumption that $f$ is measurable. A counterexample is that $f$ is $-1/2$ plus the indicator function of a non-measurable set in a space whose total measure is $1/2$. Then $|f|$ is integrable (and its integral is $1/4$) but $f$ is not measurable. |
Error propagation through recurrence relation | Let
$$\frac{\varepsilon_n}{\varepsilon_0}=\sum_{k=0}^{\infty} f(k, n) \varepsilon_0^k$$
where $f(k, n)$ is your $p_k(n)$.
Note that
\begin{align}
\sum_{k=0}^{\infty} f(k, n+1) \varepsilon_0^k=\frac{\varepsilon_{n+1}}{\varepsilon_0}& =\frac{\varepsilon_n(1+\varepsilon_n)}{\varepsilon_0} \\
&=\left(\sum_{k=0}^{\infty} f(k, n) \varepsilon_0^k\right)\left(1+\varepsilon_0\sum_{k=0}^{\infty} f(k, n) \varepsilon_0^k\right) \\
& =\left(\sum_{k=0}^{\infty} f(k, n) \varepsilon_0^k\right)+\varepsilon_0\left(\sum_{k=0}^{\infty} f(k, n) \varepsilon_0^k\right)^2 \\
&=\left(\sum_{k=0}^{\infty} f(k, n) \varepsilon_0^k\right)+\left(\sum_{k=0}^{\infty} \left(\sum_{j=0}^{k-1} f(j, n)f(k-1-j, n)\right)\varepsilon_0^k\right)
\end{align}
Thus $$f(k, n+1)=f(k, n)+\left(\sum_{j=0}^{k-1} f(j, n)f(k-1-j, n)\right)$$
Note: This equation holds $\forall n,k \in \mathbb{Z}, n, k \geq 0$. In the following proof by strong induction, there is no restriction on $n$. Any equation with $n$ is assumed to hold for all non-negative integers $n$.
We now prove by strong induction on $k$ that $f(k, n)$ is a polynomial in $n$ of degree $k$ with leading coefficient $1$.
The above equation implies $f(0, n+1)=f(0, n)$ (the sum in the second term is empty), so by induction $f(0, n)=1 \, \forall n \in \mathbb{Z}, n \geq 0$.
Suppose that the statement holds for $0 \leq k \leq l$. Then by the induction hypothesis,
$f(i, n)f(l-i, n)$ is a polynomial in $n$ with degree $l$ and leading coefficient $1 \, \forall i$, so $f(l+1, n+1)-f(l+1, n)=\left(\sum\limits_{i=0}^{l} f(i, n)f(l-i, n)\right)$ is a polynomial in $n$ with degree $l$ and leading coefficient $(l+1)$. Thus there exist $a_0, a_1, \ldots , a_{l-1}$ such that
$$f(l+1, n+1)-f(l+1, n)=(l+1)!\binom{n}{l}+\sum_{i=0}^{l-1}{a_i\binom{n}{i}}$$
Note: To see why this is true, note that both $f(l+1, n+1)-f(l+1, n)$ and $(l+1)!\binom{n}{l}$ are polynomials in $n$ with degree $l$ and leading coefficient $l+1$, so their difference is a polynomial in $n$ with degree $\leq (l-1)$. Any polynomial $P(n)$ can be written as a linear combination of binomial coefficients $\binom{n}{i}$, where $0 \leq i \leq \deg(P(n))$, so in particular the above equation holds for some $a_i$.
Thus
\begin{align}
f(l+1, n)& =f(l+1, 0)+\sum_{j=0}^{n-1}{(f(l+1, j+1)-f(l+1, j))} \\
&=\sum_{j=0}^{n-1}{(f(l+1, j+1)-f(l+1, j))} \\
& =\sum_{j=0}^{n-1}{\left((l+1)!\binom{j}{l}+\sum_{i=0}^{l-1}{a_i\binom{j}{i}}\right)} \\
& =(l+1)!\binom{n}{l+1}+\sum_{i=0}^{l-1}{a_i\binom{n}{i+1}}
\end{align}
Note: $f(l+1, 0)=0$ since $l+1 \geq 1$ and $1=\frac{\varepsilon_0}{\varepsilon_0}=\sum_{k=0}^{\infty} f(k, 0) \varepsilon_0^k$.
This is indeed a polynomial in $n$ of degree $l+1$ and leading coefficient $1$.
We are thus done by strong induction.
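As a concrete check, the recursion gives $f(1,n)=n$ and $f(2,n)=n^2-n$, which matches the direct expansion $\varepsilon_2 = \varepsilon_1(1+\varepsilon_1) = \varepsilon_0+2\varepsilon_0^2+2\varepsilon_0^3+\varepsilon_0^4$ at $n=2$. |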
Compute $\iiint_R 6z \ dV.$ | Yes, it is correct.\begin{align}
&\int_0^1 \int_0^{5-5x} \int_0^{-5x-y+5} 6z \,\,\, dz dy dx \\
&=\int_0^1 \int_0^{5-5x} 3(-5x-y+5)^2 \,\,dy dx \\
&=\int_0^1 \int_0^{5-5x} 3(5x+y-5)^2 \,\,dy dx \\
&=\int_0^1 -(5x+0-5)^3 \, dx \\
&=-5^3 \int_0^1(x-1)^3 \, dx \\
&= -5^3 \frac{(x-1)^4\mid_0^1}{4}\\
&=\frac{125}{4}
\end{align}
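If you want to double-check the arithmetic, the following short snippet (assuming SymPy is available) reproduces the value:

# Symbolic check of the iterated integral with SymPy.
from sympy import symbols, integrate

x, y, z = symbols('x y z')
inner = integrate(6*z, (z, 0, -5*x - y + 5))   # innermost integral in z
middle = integrate(inner, (y, 0, 5 - 5*x))     # middle integral in y
result = integrate(middle, (x, 0, 1))          # outermost integral in x
print(result)

This prints $\frac{125}{4}$, in agreement with the computation above. |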
Functions $f$ such that $f(x+1)-f(x-1)=2f'(x)$. | Let $f(x)$ be any differentiable function defined on $[0,1]$ and let $g(x)$ be any integrable function defined on $[1,2]$. Extend $f$ for $x\in (1,2]$ by defining
$$f(x) = \int_1^x g(t)\,dt + c$$
where $c$ is chosen to make $f$ continuous at $1$. Also require that $g(1) = f'(1)$, so that $f$ will also be differentiable at $1$. Extend $f$ to $x \in (2,3]$ by
$$f(x) = f(x-2) + 2g(x-1),$$
which is just the formula $f(x+1) = f(x-1) + 2f'(x)$ rewritten using $f' = g$ on $(1,2)$.
Now $f$ is defined on $[0,3]$ and satisfies the formula given in the question.
Repeat the above starting with $f$ so far defined on $[0,3]$ to extend the definition of $f$ to $[0,4]$, etc. to extend the definition to $[0, \infty)$.
One can work this procedure backwards to extend the definition of $f$ to all of $\mathbb{R}$. Thus any such $f$ satisfying the original formula can be created from starting functions $f$ and $g$ as described.
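Note that there are also globally smooth solutions: any quadratic $f(x)=ax^2+bx+c$ satisfies the formula, since $f(x+1)-f(x-1) = 4ax+2b = 2f'(x)$. |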
Working out annual income | Let's say Kristin makes $k$ dollars. Try setting an equation for yourself. When you're done, the equation is also shown below:
$$\frac{p}{100} \times 28000 + \frac{p+2}{100} \times (k - 28000) = \frac{p+0.25}{100} k$$
Now solve for $k$. |
Contraction mapping theorem for metric spaces - why is this part of the proof notable? | It turns out that this is useful in some instances such as the following example:
Solve $x^7 - x^3 - 21x + 5 = 0$, for $x \in [0, 1]$, with error less than $10^{-6}$.
We can use the CMT and iteration to solve the equivalent equation $\frac{x^7 - x^3 + 5}{21} = x$.
After some work and setting $f(x) = \frac{x^7 - x^3 + 5}{21}$ we can show that $f$ is a contraction map with $K = \frac{1}{3}$ and hence $\frac{K}{1-K} = \frac{1}{2}$.
Then, using the fact that $|x - x_n| \leq \frac{K}{1-K} |x_n - x_{n-1}|$, we see that all we need to do is iterate until two successive terms are within $2 \cdot 10^{-6}$ of each other (i.e. $|x_n - x_{n-1}| < 2 \cdot 10^{-6}$).
So $d(x, x_n) \leq \frac{K}{1-K}d(x_{n-1}, x_n)$ actually tells us when we can stop in this example.
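For concreteness, here is a minimal sketch of the iteration in Python (the starting point $x_0 = 0$ is an arbitrary choice in $[0,1]$):

# Fixed-point iteration for f(x) = (x^7 - x^3 + 5)/21 on [0, 1].
def f(x):
    return (x**7 - x**3 + 5) / 21

x_prev, x_curr = 0.0, f(0.0)
while abs(x_curr - x_prev) >= 2e-6:  # the stopping rule derived above
    x_prev, x_curr = x_curr, f(x_curr)
print(x_curr)

The printed value is within $10^{-6}$ of the root of $x^7 - x^3 - 21x + 5$ in $[0,1]$. |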
What is the coefficient of$ x^6y^1$ in the expansion of $(3x^2+y)^4$? | Recall that
$$(a+b)^4=\sum_{k=0}^4{4\choose k}a^kb^{4-k}$$
so in your case we take $k=3$ to get the desired coefficient: $3^3{4\choose 3}=27\times 4$
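Written out in full, $(3x^2+y)^4 = 81x^8 + 108x^6y + 54x^4y^2 + 12x^2y^3 + y^4$, so the coefficient of $x^6y$ is $108$. |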
A number that is even and prime. | "There is exactly one even prime number" can be expressed formally as
$$ \exists x: [(Even(x) \land Prim(x)) \land \forall y : [Even(y) \land Prim(y) \implies x=y]]. $$
The first part says there is at least one even prime number. The second part says there is at most one even prime number. |
Lower bound of a positive self-adjoint operator | My proof below is incomplete: for operators, positivity does not mean invertibility. I believe, however, that the case in which the infimum is zero can be handled separately.
I'm not sure how you proved the statement about the sup without the spectral theorem, but that'll be enough for the opposite statement.
Note that since $A$ is positive, it is invertible, and $A^{-1}$ is positive. We have
$$
\sup_{\|x\| = 1} \|A^{-1}x\| = \sup_{\|x\| = 1}|\langle x,A^{-1}x\rangle|
$$
from there, we note that
$$
\inf_{\|x\| = 1} \|Ax\| = \left[\sup_{\|x\| = 1} \frac{1}{\|Ax\|}\right]^{-1} =
\left[\sup_{x \neq 0} \frac{x^*x}{x^*AAx}\right]^{-1/2} =(\text{set }y = Ax)\\
\left[\sup_{x \neq 0} \frac{y^*A^{-1}A^{-1}y}{y^*y}\right]^{-1/2} = \left[\sup_{\|y\|=1}\|A^{-1}y\|\right]^{-1}
$$
Similarly, show that
$$
\inf_{\|x\| = 1}|\langle x,A x\rangle| = \left[\sup_{\|y\| = 1}|\langle y,A^{-1}y \rangle| \right]^{-1}
$$
the statement follows. |
If a sequence of random variables is uniformly bounded, is its limit bounded as well? | Suppose $\{X_n\}$ is uniformly bounded, i.e. there exists $M>0$ such that $\sup_n|X_n|\leqslant M$ a.s., and that $X_n\stackrel{\mathrm{a.s.}}\longrightarrow X$. Then for almost every outcome we may choose $N$ (depending on the outcome) such that $|X_N-X|<1$. Hence with probability $1$,
$$|X| = |X-X_N+X_N| \leqslant |X-X_N|+|X_N|\leqslant M+1, $$
so that $X$ is bounded. |
How can I simplify after applying both the product rule and power chain rule? | $(x^2 + 4)^2 (3)(2x^3-1)^2(6x^2) + (2x^3-1)^3(2)(x^2+4)(2x)$
We start by multiplying together the numerical constants in each term. I added color to show the common factors. Please tell me if those detract from the answer.
$18 \color{blue}{x^2}\color{red}{(x^2 + 4)^2} \color{green}{(2x^3-1)^2} + 4\color{blue}{x}\color{green}{(2x^3-1)^3}\color{red}{(x^2+4)}=$
Now we factor out the common factors
$2x(x^2+4)(2x^3-1)^2(9x(x^2+4)+2(2x^3-1))$
Finally, we multiply out the inside and add like terms.
$2x(x^2+4)(2x^3-1)^2(9x^3+36x+4x^3-2)$
$2x(x^2+4)(2x^3-1)^2(13x^3+36x-2)$
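As a quick sanity check, one can verify the factorization symbolically (assuming SymPy is available):

# Verify that the factored form equals the original expression.
from sympy import symbols, expand

x = symbols('x')
original = (x**2 + 4)**2 * 3 * (2*x**3 - 1)**2 * (6*x**2) \
         + (2*x**3 - 1)**3 * 2 * (x**2 + 4) * (2*x)
factored = 2*x * (x**2 + 4) * (2*x**3 - 1)**2 * (13*x**3 + 36*x - 2)
print(expand(original - factored))

The difference expands to $0$, confirming the factorization. |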
Construct a sequence of continuous functions which converges pointwise to $\lfloor x \rfloor$ | Note that $(x-k)^n + k \rightarrow k$ for $x \in [k,k+1)$, since $0 \leq (x-k) < 1$.
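Explicitly, $f_n(x) = (x - \lfloor x\rfloor)^n + \lfloor x\rfloor$ works: each $f_n$ is continuous (at an integer $k$ the left-hand limit is $1+(k-1)=k$, matching the value $0+k$), and $f_n \to \lfloor x \rfloor$ pointwise. |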
Cauchy problem - number of solutions? | The second one is simple since the equation is separable if you write it as $$x'=\frac{1}{3} y^{-\frac{2}{3}}$$ (viewing $x$ as a function of $y$), so $$x+c=\sqrt[3]{y}$$ and then $$y=(x+c)^3$$ If you insert the condition, then .....
I am sure that you can take from here. |
How to bound the error of an approximation for ln|k+x|? | First question
For $x \in (-1,0)$, $$\sum_{n=1}^{\infty}\left(-1\right)^{n-1}\cdot\frac{x^n}{n}$$ is not an alternating series! Since $x<0$, every term has the same (negative) sign, and as $x \to -1$ the series approaches minus the harmonic series, which diverges. This is coherent with what you found in Desmos.
Second question
If your goal is to evaluate $\ln(k+x)$, I would better use
$\ln \frac{1}{x} = - \ln x$ to only deal with $x \gt 1$ and then $$\ln(a+x) = \ln a + \ln \left(1 + \frac{x}{a}\right).$$
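For example, $\ln 10 = \ln 8 + \ln\left(1+\frac{1}{4}\right) = 3\ln 2 + \ln(1.25)$, and the series for $\ln(1.25)$ converges quickly. |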
Proving continuity of a function using epsilon and delta | Let $\epsilon>0$ be an arbitrary number. Since the function $f$ is continuous at $p$ and $f(p)\ne 0$, there exists a number $\delta>0$ such that $|f(x)-f(p)|<\min\{\epsilon|f(p)|^2/2, |f(p)|/2\}$ for each $x\in X$ such that $|x-p|<\delta$. In particular $|f(x)|\geq |f(p)|-|f(p)-f(x)|>|f(p)|/2$, so $1/|f(x)|<2/|f(p)|$. Then
$$\left|\frac 1{f(x)}-\frac 1{f(p)}\right|=\left|\frac {f(p)-f(x)}{f(x)f(p)}\right|=$$ $$|f(p)-f(x)|\cdot\frac 1{|f(p)|}\cdot \frac 1{|f(x)|}
\le \frac{\epsilon|f(p)|^2}{2}\cdot \frac 1{|f(p)|}\cdot\frac 2{|f(p)|}=\epsilon. $$ |
non-PID where all prime ideals are maximal | $\mathbb{Z}[x]$ is not a PID, but the ideal $(2)$ is prime and not maximal. The ring $k[x]/(x^2)$ is an example of a ring with exactly one prime ideal, but is not a domain, so not a PID. In $k[x,y]/(x,y)^2$ you again have exactly one prime ideal, and it is not principal. |
Finding the expectation of the Gamma density function | Let $k > 0$. Then $$\begin{align*}
\operatorname{E}[X^k] &= \int_{x=0}^\infty x^k f(x;\alpha,\beta) \, dx \\
&= \int_{x=0}^\infty x^k \frac{x^{\alpha-1} e^{-x/\beta}}{\Gamma(\alpha)\beta^\alpha} \, dx \\
&= \int_{x=0}^\infty \frac{\Gamma(\alpha+k) \beta^k}{\Gamma(\alpha)} \cdot \frac{x^{\alpha+k-1} e^{-x/\beta}}{\Gamma(\alpha+k) \beta^{\alpha+k}} \, dx \\
&= \frac{\Gamma(\alpha+k)\beta^k}{\Gamma(\alpha)} \int_{x=0}^\infty f(x;\alpha+k,\beta) \, dx.
\end{align*}$$ Since $f$ is a probability density for any positive choice of parameters, this last integral is equal to $1$; therefore, $$\operatorname{E}[X^k] = \frac{\Gamma(\alpha+k) \beta^k}{\Gamma(\alpha)}.$$ When $k$ is a positive integer, $\Gamma(\alpha+k) = (\alpha+k-1)(\alpha+k-2)\ldots \alpha \Gamma(\alpha)$ by the recursion relation, thus we would have $$\operatorname{E}[X^k] = \prod_{m=0}^{k-1} (\alpha+m).$$ When $k = 1$, this simply yields $\Gamma(\alpha+1) = \alpha \Gamma(\alpha)$ hence $$\operatorname{E}[X] = \alpha \beta$$ as claimed. Our more general approach furnishes the higher raw moments, e.g., $$\operatorname{E}[X^2] = (\alpha+1)\alpha \beta^2$$ which also gets us the variance $$\operatorname{Var}[X] = \operatorname{E}[X^2] - \operatorname{E}[X]^2 = \alpha \beta^2.$$ |
proof of sobolov inequality | You are missing a square: you should have
$$\int_{-\infty}^\xi (u^2 + u_x^2) \, dx \le \|u\|_{H^1}^2.$$
Since $\displaystyle 0 \le \int_{\mathbf R} f(x)^2 \, dx$ your inequality leads to
$$0 \le \int_{\mathbf R} f(x)^2 \, dx \le \|u\|_{H^1}^2 - 2 u(\xi)^2$$ so that
$$2u(\xi)^2 \le \|u\|_{H^1}^2.$$
Now take the square root. |
Why is Hausdorff measure Borel regular? | $\DeclareMathOperator{\diam}{diam}$First off, we can replace the $C_j$ with either open or closed sets for the reasons described in your second image. Since you seem to be a little confused on that point, let's look at that in a little more detail. First, suppose that we cover $A$ by a collection $C_j$. The diameter of the closure of a set is equal to the diameter of that set i.e. $\diam \overline{C}_j = \diam C_j$ for all $j$. But then
$$\mathcal{H}_{\delta}^{m}(A)
= C(m) \inf \sum_{j} (\diam C_j)^m
= C(m) \inf \sum_{j} (\diam \overline{C}_j)^m,
$$
where the infimum is taken over all $\delta$-coverings of $A$ (as above). Since the size-$\delta$ approximation of $\mathcal{H}^m(A)$ doesn't depend on whether or not we take the covering sets to be closed, neither does the $m$-dimensional content.[1]
On the other hand, replacing the $C_j$ by open sets is slightly more delicate. However, it can be done: for any $\varepsilon > 0$, we can define a collection of sets of the form
$$
C_{j,\varepsilon} := \left\{ x : d(x,C_j) < \frac{\varepsilon}{2^{j+1}} \right\} =: U_j, $$
where $d(x,C_j)$ denotes the distance from $x$ to $C_j$, i.e. $\inf_{y\in C_j} d(x,y)$. Notice that if we fatten up each of the $C_j$ into an open set, then (at worst) we are increasing the diameter by twice the amount of fattening-up. Hence $\diam U_j \le \diam C_j + \frac{\varepsilon}{2^j}$, so
\begin{align}
\mathcal{H}_{\delta}^{m}(A)
&= C(m) \inf \sum_{j} (\diam C_j)^m \\
&\le C(m) \inf \sum_{j} (\diam U_j)^m \\
&\le C(m) \inf \sum_{j} \left[(\diam C_j)^m + \mathcal{O}\left( \frac{\varepsilon}{2^j} \right)\right] \\
&= \left[ C(m) \inf \sum_{j} (\diam C_j)^m\right] + \mathcal{O}(\varepsilon).
\end{align}
I'm sweeping a lot of details into that big-oh, so it would be a good idea to convince yourself that it is correct, and that I am not lying to you. The basic idea is that we can fatten up all of the sets in a cover by just a little bit in order to get an open cover. If we don't fatten things up too much, we end up with the same thing in the limit.
Alternatively, we can play the same game of fattening up a $\delta$-cover by a very small $\varepsilon$, then consider the $(\delta+\varepsilon)$-cover by open sets. Again, there are details that I am hiding, but you should be able to fill them in.
In short, we can replace the arbitrary $C_j$ in the original definition of the Hausdorff content with either open or closed $C_j$, and still get the same Hausdorff outer measure for any set.
This gets to the second part of your question: why does this imply that $\mathcal{H}^m$ is a regular Borel measure? It is usually a good idea to start with the definitions:
Definition: An outer measure $\mu$ is Borel if every Borel set $A$ is $\mu$-measurable, i.e. if
$$ \mu(B) = \mu(B\cap A) + \mu(B\setminus A) $$
for any set $B$.
Showing that Hausdorff measure is Borel is nontrivial. The usual trick is to first show that Hausdorff measure is a metric outer measure, then invoke a theorem which states that all metric outer measures are Borel measures. I don't see how this particular property is a corollary of the fact that we can use either open or closed covers, but I'll sketch the proof here (I think that Folland's book on real analysis has a more complete proof, and one of Falconer's books almost certainly spells it out).
Definition: An outer measure $\mu^\ast$ is said to be a metric outer measure if
$$ \mu^{\ast}(A\cup B) = \mu^{\ast}(A) + \mu^{\ast}(B) $$
whenever $\rho(A,B) > 0$, where $\rho(A,B)$ is the minimum distance between any two points in $A$ and $B$ (basically, we are requiring that $A$ and $B$ are contained in disjoint open sets; i.e. there is a fixed distance $\delta_0$ such that there are nonintersecting balls of radius $\delta_0$ centered at any two points in $A$ and $B$, respectively).
By construction $\mathcal{H}^m$ is an outer measure for any $m$ (we really only need to check subadditivity, which is not hard). On the other hand, if $A$ and $B$ are such that $\rho(A,B) = \delta_0$, then we can cover both $A$ and $B$ by countable collections of sets of radius $\min\{ \delta_0/3, \delta/2\}$ for any $\delta > 0$. Taking an infimum as $\delta \to 0$, we get the desired result.
Regularity, on the other hand, is a corollary of the fact that we can replace arbitrary covers with either open or closed covers. Recall:
Definition: $\mu$ is regular if for every set $A$ there exists an Borel set $B$ such that $A \subseteq B$ and $\mu(A) = \mu(B)$.
For each $n\in\mathbb{N}$, there exists some countable cover $\mathscr{U}_n = \{U_{n,j}\}$ of $A$ such that
Each set $U_{n,j} \in \mathscr{U}_n$ is open with $\diam U_{n,j} < \frac{1}{n}$, and
$C(m)\sum_{j} (\diam U_{n,j})^m < \mathcal{H}_{1/n}^{m}(A) + \frac{1}{n}$.
Let
$$ B := \bigcap_{n} \bigcup_{j} U_{n,j}. $$
By construction $B$ is Borel (it is a countable intersection of countable unions of open sets, therefore Borel—it probably even belongs to one of those fancy $G_{\sigma\delta}$ or $F_{\delta\sigma}$ classes of sets, but I can never remember the precise definitions of the sets in the hierarchy, so I won't embarrass myself by bringing up those kinds of sets. Oh... shoot.). Also note that $B$ has been built so that
$$ \mu(A) = \mu(B), $$
which gives the regularity result. (Again, convince yourself that this is true.)
[1] Note that $C(m)$ is some constant that depends on $m$. Specifically, it is $\omega_m / 2^m$. I typically define the Hausdorff content without this constant, since it seems like a distraction to me, and can always be recovered later if need be. |
How can I get $\sin{(\frac{1}{x})}$ to always be positive? | No, there will be no factor $\pm x^i$ that makes it positive, since $\sin (1/x)$ keeps alternating in sign. Such a factor would have to be a polynomial with the same alternating character, and thus with an infinite number of roots, which is impossible. However, it may be possible to find an infinite series expansion that does the job. |
How can one relate inverse of a differential operator to an integral operator? | For linear, constant coefficient problems, you can use the Fourier transform.
Take $P$ to be an arbitrary constant coefficient linear partial differential operator on $\mathbb{R}^n$. Associated to $P$ is its polynomial symbol $p(\xi)$. Formally speaking, one can treat a constant coefficient linear partial differential operator as a polynomial in the usual partial derivations $\partial_{x^i}$. We say that a (complex-coefficient) polynomial $p(\xi)$ over $\mathbb{R}^n$, where $\xi = (\xi_1, \xi_2, \ldots, \xi_n)$, is the symbol of $P$ if we formally insert $p(i\nabla)$ we will recover the operator $P$.
Some examples, for the partial derivative $\partial_{x^1}$, you get $-i\xi_1$. For the Laplacian $\sum_{i = 1}^n \partial_{x^i}^2$, you get $- \sum_{i = 1}^n \xi_i^2$.
The reason we care about the symbol is because they are nice under the Fourier transform. Namely, the Fourier transform of $Pf(x)$ (say $f$ is Schwartz class) is precisely $p(\xi)\hat{f}(\xi)$.
So for if $f$ and $u$ are Schwartz functions such that
$$ Pf = u $$
we get
$$ p\hat{f} = \hat{u}$$
by taking the Fourier transform of both side, and since now $p$ is a polynomial, we can divide through by
$$ \hat{f} = \frac{1}{p} \hat{u} $$
Now, if $\frac{1}{p}$ were actually a Schwartz function, then we can take the inverse Fourier transform on both sides and use that the Fourier transform interchanges products and convolutions, to arrive at
$$ f = \check{(p^{-1})} * u $$
and your convolution kernel $g(x,y)$ is nothing more than just what is given by the inverse Fourier transform
$$ g(x,y) = \check{(p^{-1})}(x-y) $$
Unfortunately, since $p$ is a polynomial, $1/p$ can never be in Schwartz class. There are two problems. The first is generic: since $p$ is polynomial, $1/p$ can have at most polynomial decay near $\infty$, so can never be "rapidly decaying" as required by Schwartz functions. This, however, can usually be amended by considering enlargement of the domain of Fourier transform to include tempered distributions. (In some cases you can even get by with using the $L^2$ theory for Fourier transform.) The second, slightly less generic problem, is that of zeros of $p$. When $p$ has a zero, $1/p$ has a singularity, and so is not smooth, and hence not in Schwartz class. The question about how to deal with this problem has driven much of mid twentieth century harmonic analysis, and has led to the study of singular integrals.
A discussion of singular integrals is way beyond the scope here. And for distribution theory I suggest the book of Friedlander and Joshi. In your particular case, the operator $(I - \partial^2_{xx})$ has symbol $1 + \xi^2$, so you don't have the second type of problem described above. The only issue is that the symbol for $(I-\partial^2_{xx})^{-1}\partial_x$ is $(-i\xi)/(1 + \xi^2)$, which is not absolutely integrable (it only decays like $1/\xi$). So you will have to use the theory of distributions (or the $L^2$ theory) to get the convolution kernel.
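Incidentally, in this particular case the kernel can be written down in closed form, and one can check it directly without computing any Fourier integrals: $g(x) = \frac12 e^{-|x|}$ satisfies $(I-\partial^2_{xx})g = \delta$, so $(I-\partial^2_{xx})^{-1}u = \frac12\int e^{-|x-y|}u(y)\,dy$, and differentiating gives $(I-\partial^2_{xx})^{-1}\partial_x u = -\frac12\int \operatorname{sgn}(x-y)\,e^{-|x-y|}\,u(y)\,dy$.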
Lastly, the Fourier transform method rather strongly depends on your domain being the whole real line (or some $\mathbb{R}^n$). For boundary value problems it doesn't work so well. In the case of the interval, you can use some Fourier series methods (by considering things on a circle), but it tends to interact strangely with prescription of boundary values. |
Can we represent a line of length equal to irrational numbers? | Assume you don't know anything about rational vs. irrational, and are given the following problem: "Draw a line $\ell$, choose an arbitrary point $0\in\ell$ and another point $1\in\ell$. Where is the point that should get the label $\sqrt{2}$?" You would go ahead and find this point with a little construction, and without having any doubts that this is exactly the point you wanted.
Of course this construction lives in an ideal mathematical world, and we all know that the lines drawn with a pencil have a certain thickness, etc. But this is a completely other matter.
Concerning scaling: Given any two points $A$, $B\in\ell$ you can scale their distance $|AB|$ by a factor $\sqrt{2}$ using the three points you already have and drawing a few parallels. |
Power of prime ideal | Hint $\ $ In a domain D, every power of a prime ideal is principal iff D is a PID. Indeed, if every power of a prime ideal is principal then every prime ideal is principal, so D is a PID, by the proof below. Conversely, in a PID every ideal is principal, hence so is every power of a prime ideal.
Below is a proof of the key step, from my 2008/11/10 Ask an Algebraist post.
In reply to "principal prime ideals imples PID", posted by Student on November 9, 2008:
Let $\rm\:R\:$ be an integral domain. Let every prime ideal in $\rm\:R\:$ be principal.
Prove that $\rm\:R\:$ is a principal ideal domain (PID).
Below I give a simpler, more general way to view the proof, and references.
First let's recall a standard proof, e.g. that given by P.L. Clark, paraphrased below.
Proof $\rm\:\! 1\!:\:$ If not then the set of all nonprincipal ideals is nonempty.
Let $\rm\:\{I_j\}\:$ be a chain of nonprincipal ideals and put $\rm\:I = \bigcup_j I_j.\:$ If $\rm\:I = (x)\:$ then $\rm\:x \in I_j\:$ for some $\rm\:j,\:$ so $\rm\:I = (x) \subset I_j\:$ implies $\rm\:I = I_j\:$ is principal $\rm\:\Rightarrow\Leftarrow\:$
Thus by Zorn's Lemma there is an ideal $\rm\:I\:$ which is maximal with respect to the property of not being principal. As is so often the case for ideals maximal with respect to some property or other, we can show that $\rm\:I\:$ must be prime. Indeed, suppose that $\rm\:ab \in I\:$ but neither $\rm\:a\:$ nor $\rm\:b\:$ lies in $\rm\:I.\:$ So ideal $\rm\:J = (I,a)\:$ is strictly larger than $\rm\:I,\:$ so principal: $\rm\:J = (c).\:$ $\rm\:(I:a)\ :=\ \{r \in R : ra \in I\}\:$ is an ideal containing $\rm\:I\:$ and $\rm\:b,\:$ so strictly larger than $\rm\:I\:$ and thus principal: say $\rm\:(I:a) = (d).\:$ Let $\rm\:i\in I,\:$ so $\rm\:i = uc.\:$ Now $\rm\:u(c) \subset I\:$ so $\rm\:ua \in I\:$ so $\rm\:u \in (I:a).\:$ Thus we may write $\rm\:u = vd\:$ and $\rm\:i = vcd.\:$ This shows $\rm\:I \subset (cd).\:$ Conversely, $\rm\:d \in (I:a)\:$ implies $\rm\:da \in I\:$ so $\rm\:d(I,a) = dJ \subset I\:$ so $\rm\:cd \in I.\:$ Therefore $\rm\:I = (cd)\:$ is principal $\rm\:\Rightarrow\Leftarrow\quad$ QED
I show that the second part of the proof is just an ideal-theoretic version of a well-known fact about integers. Namely suppose that the integer $\rm\:i>1\:$ isn't prime. Then, by definition, there are integers $\rm\:a,b\:$ such that $\rm\:i\mid ab,\:$ $\rm\:i\nmid a,\:$ $\rm\:i\nmid b.\:$ This immediately yields a proper factorization of $\rm\:i,\:$ namely $\rm\ i = c\:\! (i:c)\:$ where $\rm\:c = (i,a).\:$ Thus: not prime $\Rightarrow$ reducible $ $ (or: irreducible $\Rightarrow$ prime). Below is an analogous constructive proof generalized to certain ideals.
Theorem $\rm\ \:$ Suppose that $\rm\:R\:$ is a ring, and suppose that $\rm\:I\ne 1$ is an ideal of $\rm\:R\:$ which satisfies the property: $\ $ ideal $\rm\:J\supset I\: \Rightarrow\: J\mid I\:.\:$ Then $\rm\:I\:$ not prime $\rm\:\Rightarrow\:$ $\rm\:I\:$ reducible (properly).
Proof $\rm\:\ \ I\:$ not prime $\rm\:\Rightarrow\:$ $\rm\:\exists\: a,b \notin I\:$ with $\rm\:ab \in I.\:$ $\rm\:A\: :=\: (I,a)\supset I\: \Rightarrow\: A\mid I,\:$ say $\rm\:I = AB\:$; wlog we may assume that $\rm\:b \in B\:$ since $\rm\:A(B,b) = AB\:$ via $\rm\:Ab = (I,a)b \subset I = AB.\:$ Finally the factors $\rm\:A,B\:$ in $\rm\:I = AB\:$ are proper: $\rm\:A = (I,a),\:$ $\rm a \notin I\:$; $\rm\:\ B \supseteq (I,b),\:$ $\rm b \notin I.\quad$ QED
The contains $\Rightarrow$ divides hypothesis: $\rm\:J\supset I\: \Rightarrow\: J\mid I\:$ is true for principal ideals $\rm\:J\:$ (hence proof 1), and also holds true for all ideals in a Dedekind domain. Generally such ideals $\rm\:J\:$ are called multiplication ideals. Rings whose ideals satisfy this property are known as multiplication rings. Their study dates back to Krull.
The OP's problem is Exercise $1\!-\!1\!-\!10,\: p. 8\:$ in Kaplansky: Commutative Rings, viz.
$10.\:$ (M. Isaacs) In a ring $\rm\:R\:$ let $\rm\:I\:$ be maximal among non-principal ideals. Prove that $\rm\:I\:$ is prime. (Hint: adapt the proof of Theorem $7$. We have $\rm\:(I,a) = (c).$ This time take $\rm\:J = \{x\in R\mid xc \in I\}.$ Since $\rm\:J \supset (I,b),\:$ $\rm\:J\:$ is principal. Argue that $\rm\:I = Jc\:$ and so is principal.)
See also this AaA post and the following AMS Math Review.
Mott, Joe Leonard. $ $ Equivalent conditions for a ring to be a multiplication ring.
Canad. J. Math. 16 1964 429--434. $ $ MR 29:119 13.20 (16.00)
If "ring" is taken to mean a commutative ring with identity and a multiplication ring is a "ring" in which, when $\rm\:A\:$ and $\rm\:B\:$ are ideals with $\rm\:A \subset B,\:$ there is an ideal $\rm\:C\:$ such that $\rm\:A = BC,\:$ then it is shown that the following statements are equivalent.
$\rm\:R\:$ is a multiplication ring;
if $\rm\:P\:$ is a prime ideal of $\rm\:R\:$ containing ideal $\rm\:A\:$ then there is an ideal $\rm\:C\:$ such that $\rm\:A = PC\!\:;$
$\rm\:R\:$ is a ring in which the following three
conditions are valid:
a. $\ $ every ideal is equal to the intersection of its isolated primary components;
b. $\ $ every primary ideal is a power of its radical;
c. $\ $ if $\rm\:P\:$ is a minimal prime of $\rm\:B\:$ and $\rm\:n\:$ is the least positive
integer such that $\rm\:P^n\:$ is an isolated primary component of $\rm\:B,\:$ and if $\rm\:P^n \neq P^{n+1},\:$ then $\rm\:P\:$ does not contain the intersection of the remaining isolated primary components of $\rm\:B.\:$ (Here an isolated $\rm\:P\!\!-\!primary$ component of $\rm\:A\:$ is the intersection of all $\rm\:P$-primary ideals that contain $\rm\:A.)$
Reviewed by H. T. Muhly |
definition of an isomorphism between field extensions | A morphism of field extensions is either injective or the zero map. This can be seen by considering the kernel of such a morphism. It is an ideal of the domain, which is a field, and therefore, must either be zero or the entire field. So if you identify the image with another field, then that rules out the case of the zero map. |
How many numbers are there between $200$ and $400$ which are divisible by $11$ but not by $2$? | I would look at all the multiples of $11$ between $200$ and $400$ and exclude all those that are even. In other words, look at all the multiples of $11$ that can be found in the described range and exclude those that are the result of multiplying $11$ by an even number. This should yield the result in no time.
The same idea laid out differently would be to find the smallest and the largest number that, multiplied by $11$, lands in the range, and then count all the odd numbers between them (endpoints included).
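Concretely: the smallest multiplier is $19$ (since $11\cdot 19=209$) and the largest is $36$ (since $11\cdot 36=396$), so we count the odd integers from $19$ to $35$, giving $\frac{35-19}{2}+1=9$ numbers. |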
A tricky integration $I=\int _0 ^\infty \log\left(x+\frac{1}{x}\right)\frac{1}{1+x^2}dx$ | One may write
$$
I=\int _0^\infty \frac{\log(1+x^2)}{1+x^2}dx-\int _0^\infty \frac{\log x}{1+x^2}dx,
$$ by the change of variable $x \to 1/x$, one has
$$
\int _0^\infty \frac{\log x}{1+x^2}dx=-\int _0^\infty \frac{\log x}{1+x^2}dx=0,
$$ by the change of variable $x=\tan \theta$, one gets
$$
\int _0^\infty \frac{\log(1+x^2)}{1+x^2}dx=\!\int _0^{\large \frac{\pi}2} \!\frac{\log(\cos^{-2} \theta)}{1+\tan^2 \theta}(1+\tan^2 \theta)d\theta=-2\int _0^{\large \frac{\pi}2}\! \log(\cos \theta) \:d \theta=\pi \log 2,
$$ where we have used a well-known result.
Finally,
$$
I=\int _0^\infty \frac{\log \left(x+\frac1x \right)}{1+x^2}dx=\pi \log 2.
$$
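A quick numerical check (assuming SciPy is available) agrees:

# Numerically confirm the integral equals pi*log(2).
import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda x: np.log(x + 1/x) / (1 + x**2), 0, np.inf)
print(val, np.pi * np.log(2))

Both printed values are approximately $2.17759$. |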
Show that a group is isomorphic to $\Bbb R / \langle 2\pi \rangle$. | You may want to justify that you can see $\mathbb R/\langle2\pi\rangle$ as $[0,2\pi)$ with cyclic addition. Then you can map directly $t\longmapsto(\cos t,\sin t)$ and show that it is an isomorphism (note that the morphism property was given to you basically by definition). |
Anti-symmetric if $AB= 1$ and $BA=0$ but every vertex has loops? | Yes. A relation $\mathrel{R}$ is antisymmetric if
$$x\mathrel{R}y\quad\text{and}\quad y\mathrel{R}x\quad\text{implies}\quad x=y\;.$$
This means that if $x\ne y$, you can’t have both $x\mathrel{R}y$ and $y\mathrel{R}x$: you can have at most one of them. It says nothing at all about what you can (or must) have when $x=y$.
In terms of the associated graph, if you never have both an edge $A\to B$ and an edge $B\to A$ when $A\ne B$, then the relation is antisymmetric; you can have as many or as few loops as you like. (If you have a loop at every vertex, the relation is reflexive.)
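For instance, the relation $\le$ on $\mathbb{Z}$ is antisymmetric, and its graph has a loop at every vertex because $x\le x$ for every $x$. |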
how to calculate fourier transform of a power of radial function | Yes... you use the way the definition of Fourier transform changes under rotations and dilations. Namely, recall that
$$\hat{f}(\xi) = \int_{R^n} f(x) e^{-2\pi i x \cdot \xi} \,dx$$
So that if $f(x) = |x|^{-a}$ this becomes
$$\hat{f}(\xi) = \int_{R^n} |x|^{-a} e^{-2\pi i x \cdot \xi} \,dx$$
Note this is an improper integral, so you have to be careful here with how one interprets the above. Next, note that if $R$ is a rotation, then $Rx \cdot \xi = x \cdot R^{-1}{\xi}$. So changing variables in the above, turning $x$ into $Rx$, and observing that $|x| = |Rx|$, leads to
$$\hat{f}(\xi) = \int_{R^n} |x|^{-a} e^{-2\pi i x \cdot R^{-1}\xi} \,dx$$
$$ = \hat{f}(R^{-1}\xi)$$
Since $R$ is any rotation this means that $\hat{f}$ is radial.
Next, change variables $x \rightarrow tx$ for some $t > 0$ in the above definition of $\hat{f}$. This time you get
$$\hat{f}(\xi) = \int_{R^n} |tx|^{-a} e^{-2\pi i tx \cdot \xi} t^n \,dx$$
$$ = |t|^{n-a}\int_{R^n} |x|^{-a} e^{-2\pi i x \cdot t\xi} \,dx$$
So $\hat{f}(\xi) = |t|^{n-a} \hat{f}(t\xi)$, or equivalently $\hat{f}(t\xi) =
|t|^{a-n}\hat{f}(\xi)$. This means on each radial line, $\hat{f}(t\xi)$ is equal to $C|t|^{a-n}$. And since $\hat{f}(\xi)$ is a radial function, $C$ is independent of which radial line you're on. This gives what you're looking for.
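Explicitly, writing $\xi = t\omega$ with $t=|\xi|$ and $|\omega|=1$, the two properties combine to give $\hat{f}(\xi) = |\xi|^{a-n}\,\hat{f}(\omega) = C\,|\xi|^{-(n-a)}$. |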