title | upvoted_answer
---|---
Showing that a homogeneous ideal is prime. | We wish to prove:
If $S$ is a $\mathbb{Z}$-graded ring and $\mathfrak{p}$ is a homogeneous ideal of $S$ satisfying $ab \in \mathfrak{p}$ implies $a$ or $b$ in $\mathfrak{p}$ for homogeneous $a$ and $b$, then $\mathfrak{p}$ is prime.
So take two arbitrary elements $a,b \in S$, and suppose for contradiction that $ab \in \mathfrak{p}$ but neither $a$ nor $b$ is in $\mathfrak{p}$. Let $a = \sum a_d$ and $b = \sum b_d$ be their homogeneous decompositions. Since $a \not \in \mathfrak{p}$, some $a_d \not \in \mathfrak{p}$, and since all but finitely many $a_d$ are $0$, there exists a largest integer $d$ such that $a_d \not \in \mathfrak{p}$. Similarly, there exists a largest integer $e$ such that $b_e \not \in \mathfrak{p}$.
Since $ab \in \mathfrak{p}$ and $\mathfrak{p}$ is a homogeneous ideal, all the homogeneous components of $ab$ are in $\mathfrak{p}$. The degree-$(d+e)$ component of $ab$ is $\sum a_i b_j$, where we sum over all pairs $(i,j)$ with $i+j = d+e$. But each such pair $(i,j)$, other than $(d,e)$, must have either $i>d$ or $j>e$, and hence (by the maximality of $d$ and $e$) we have $a_i b_j \in \mathfrak{p}$. Subtracting these terms from the component, $a_d b_e \in \mathfrak{p}$ also, yet neither $a_d$ nor $b_e$ is in $\mathfrak{p}$, which contradicts the hypothesis on $\mathfrak{p}$ for the homogeneous elements $a_d$ and $b_e$.
In short: If $a,b$ is a general counterexample for the primality of $\mathfrak{p}$, then $a_d, b_e$ is a homogeneous counterexample. |
Flipping a fair coin 1000 times. | No. It is the correct procedure, but not the correct value.
Note that up to the 994-th index the expected value of the indicator for starting a substring of seven consecutive heads is an identical non-zero constant, but for all indices after that it is zero (because fewer than seven places remain in the string).
$\forall i\in\{1\,\ldots\,994\}: \mathsf E(X_i)=2^{-7}$, and $\forall i\in\{995\,\ldots\,1000\}:\mathsf E(X_i)=0$ |
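A quick check of this expectation with exact arithmetic (the variable name is mine):

```python
from fractions import Fraction

# X_i = indicator that a run of seven heads starts at flip i.
# Flips 1..994 each contribute 2^-7; flips 995..1000 contribute 0,
# since fewer than seven positions remain.
expected_starts = 994 * Fraction(1, 2**7)
print(expected_starts)   # 497/64, i.e. about 7.77
```

By linearity of expectation, the expected number of such starting positions is $994 \cdot 2^{-7} = 497/64 \approx 7.77$.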
Why does continuity on $\mathbb{T}$ imply that $f(-\pi) = f(\pi)$? | The book identifies a function that is $2\pi$-periodic as a function on the circle $\mathbf{T}$ (see Section 4.1 of the book):
So when $f$ is a function on the circle, it must be true that
$$
f(-\pi)=f(-\pi+2\pi)=f(\pi).
$$
"Continuity" is actually not quite relevant here, although the author mentioned it. |
Problem solving $2\times 2$ equation system | $$
(1)\quad 2x+3y = 10 \\
(2)\quad 4x-y = -1
$$
Adding $-2\cdot(1)$ to $(2)$:
$$
2x+3y=10
\\0x-7y=-21
$$
So clearly $y=3$, and substituting back into $(1)$ it follows that
$$
2x=1
\\x=\frac{1}{2}
$$ |
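As a sanity check, the same elimination can be carried out in exact arithmetic (a short sketch; the coefficient names are mine):

```python
from fractions import Fraction as F

# (1) 2x + 3y = 10,  (2) 4x - y = -1
a1, b1, c1 = F(2), F(3), F(10)
a2, b2, c2 = F(4), F(-1), F(-1)

m = -a2 / a1                          # the multiplier -2 used above
b2e, c2e = b2 + m * b1, c2 + m * c1   # gives -7y = -21
y = c2e / b2e
x = (c1 - b1 * y) / a1                # back-substitute into (1)
print(x, y)                           # 1/2 3
```

Both original equations are satisfied by $x = \tfrac12$, $y = 3$.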
Central limit theorem problem with sample size. | For a 95% probability, you want to compute a 95% confidence interval in which the top and bottom of the interval are 0.4 inches more and less than the mean value. In other words, you want the "margin of error" to be 0.4. I'm assuming from your formula above that the standard deviation of an individual value is 2.5.
Then the margin of error $= 0.4 = 1.96 \cdot \mathrm{sd} / \sqrt{n}$.
(1.96 is the z-score corresponding to a two-tailed 95% confidence interval)
Solving (with sd $= 2.5$) gives $n = (1.96 \cdot 2.5 / 0.4)^2 \approx 150.06$, so rounding up, $n = 151$. |
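The arithmetic in Python, with the $z$-score and standard deviation assumed above:

```python
import math

z, sd, moe = 1.96, 2.5, 0.4     # z-score, assumed sd, target margin of error
n = (z * sd / moe) ** 2
print(n, math.ceil(n))          # about 150.06; round up to 151
```

Note that since $n$ must be an integer and the margin of error may not exceed $0.4$, the value $150.06$ is rounded up.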
I need to know if this is correct. Set Theory | If by $a-b$ you mean set $a$ without the elements in set $b$, then the answer is the empty set, because both sets have the same elements. |
Finding a solution when the determinant is zero | In $Ax=b$ form, there will be at least one solution if and only if $b$ is in the column space of $A$.
If $A$ is a square matrix, there is a unique solution if and only if $\det(A) \ne 0$.
Putting these tests together we have for all square matrices $A$, $Ax=b$ has
no solution if $b$ is not in the column space of $A$.
a unique solution if $\det(A) \ne 0$.
infinitely many solutions if $b$ is in the column space of $A$ but $\det(A) = 0$.
A more general but also slightly more tedious (sometimes) method that will work for non-square matrices is to row reduce the augmented matrix $[A\mid b]$. If you get
a row with $[0\mid a]$ where the $0$ represents a row of zeros and $a\ne 0$, then there is no solution.
a pivot in each column and no rows with $[0\mid a]$ where $a\ne 0$, then there is a unique solution.
at least one column without a pivot and no rows with $[0\mid a]$ where $a\ne 0$, then there are infinitely many solutions. |
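The row-reduction test can be automated; here is a stdlib sketch (the function name and examples are mine) that reduces $[A\mid b]$ and reports which of the three cases holds:

```python
from fractions import Fraction as F

def classify(A, b):
    # Row-reduce the augmented matrix [A|b] and report whether Ax=b
    # has no solution, a unique solution, or infinitely many.
    M = [[F(v) for v in row] + [F(c)] for row, c in zip(A, b)]
    rows, cols = len(M), len(M[0]) - 1
    pivots = 0
    for col in range(cols):
        piv = next((r for r in range(pivots, rows) if M[r][col] != 0), None)
        if piv is None:
            continue                      # no pivot in this column
        M[pivots], M[piv] = M[piv], M[pivots]
        M[pivots] = [v / M[pivots][col] for v in M[pivots]]
        for r in range(rows):
            if r != pivots and M[r][col] != 0:
                fac = M[r][col]
                M[r] = [v - fac * w for v, w in zip(M[r], M[pivots])]
        pivots += 1
    if any(all(v == 0 for v in row[:-1]) and row[-1] != 0 for row in M):
        return "no solution"              # a row [0 | a] with a != 0
    return "unique" if pivots == cols else "infinitely many"

print(classify([[1, 2], [0, 1]], [3, 1]))    # unique
print(classify([[1, 2], [2, 4]], [3, 6]))    # infinitely many
print(classify([[1, 2], [2, 4]], [3, 7]))    # no solution
```

The three return values correspond exactly to the three pivot conditions listed above.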
Why does $\frac{n!n^x}{(x+1)_n}=\left(\frac{n}{n+1}\right)^x\prod_{j=1}^{n}\left(1+\frac{x}{j}\right)^{-1}\left(1+\frac{1}{j}\right)^x$ | $$\prod_{j=1}^n\left(\frac{j+1}j\right)^x$$ telescopes and equals $(n+1)^x$. Multiplying
by
$$\left(\frac n{n+1}\right)^x$$
gives $n^x$.
$$\prod_{j=1}^n\left(1+\frac xj\right)^{-1}=\frac{n!}{(x+1)(x+2)\cdots(x+n)}
=\frac{n!}{(x+1)^{(n)}}$$
etc. (I prefer $x^{(n)}$ and $x_{(n)}$ for the rising and falling factorials resp.) |
Show that $\int _0 ^\infty \frac{x^{p-1}}{1+x} dx=\frac{\pi}{\sin(p \pi)}, 0<p<1$ | Basically it comes from this theorem:
Theorem: $\displaystyle \int_0^\infty {x^m\over{x^n+1}}dx={{\pi \over n} \over \sin\left((m+1){\pi \over n }\right) },\qquad n-m \ge 2$
Proof:
Using a contour integral we can write:
$$\oint_C {z^m \over {z^n+1}} \,dz$$
if $C$ is chosen appropriately. The integrand has $n$ first-order singularities, at the $n$ $n$-th roots of $-1$; these singular points are uniformly spaced around the unit circle in the complex plane. Thus, using Euler's formula, we can write:
$$ -1=e^{i(1+2k)\pi}$$
singular points are located at:
$$ z_k=(-1)^{1 \over n}=e^{i({{1+2k} \over n})\pi},k=0,1,2,3,...,n-1 $$
For other values of $k$ these same $n$ points simply repeat.
Now focus on one of these singular points, $k=0$.
Pick $C$ to enclose just that one singularity, at $z=z_0=e^{i{\pi \over n}}$,
so the central angle of the wedge is ${2\pi \over n}$ and the singularity sits at half that, ${\pi \over n}$.
The contour's three portions are:
$$ C_1=z=x,dz=dx,0 \le x \le T $$
$$ C_2=z=Te^{i\theta},dz=iTe^{i\theta}d\theta,0 \le \theta \le {2\pi \over n} $$
$$ C_3=z=re^{{i2\pi}\over n},dz=e^{{i2\pi}\over n}dr,0 \le r \le T $$
so:
$$ \oint_C {z^m \over {z^n+1}} \,dz= \int_0^T {x^m \over {x^n+1}}dx + \int_0^{2\pi \over n} {(Te^{i\theta})^m \over {(Te^{i\theta})^n+1}}iTe^{i\theta}d\theta+ \int_T^0 {(re^{{i2\pi}\over n})^m \over {(re^{{i2\pi}\over n})^n+1}}e^{{i2\pi}\over n}dr$$
$$=\int_0^T {x^m \over {x^n+1}}dx- \int_0^T { {r^me^{i(m+1){2\pi \over n}}} \over {r^n+1} }dr + \int_0^{2\pi \over n}{ {T^{m+1}e^{im\theta}} \over {T^{n}e^{in\theta}+1} }ie^{i\theta}d\theta $$
Now clearly as $T \to \infty$ the $\theta$-integral goes to zero because $m+1<n$. Also
$$\int_0^T { {r^me^{i(m+1){2\pi \over n}}} \over {r^n+1} }dr=e^{i(m+1){2\pi \over n}} \int_0^T {x^m \over {x^n+1}}dx $$
so as $T \to \infty$:
$$\oint_C {z^m \over {z^n+1}} \,dz=\int_0^\infty {x^m \over {x^n+1}}dx(1-e^{i(m+1){2\pi \over n}})$$
or as :
$$(1-e^{i(m+1){2\pi \over n}})=-2i\sin\left((m+1){\pi \over n}\right)e^{i(m+1){\pi \over n}}$$
we have:
$$\oint_C {z^m \over {z^n+1}} \,dz=-2i\sin\left((m+1){\pi \over n}\right)e^{i(m+1){\pi \over n}}\int_0^\infty {x^m \over {x^n+1}}dx $$
since we can write the integrand of the contour integral as a partial fraction
expansion:
$$ {z^m \over {z^n+1}}={N_0 \over {z-z_0}}+{N_1 \over {z-z_1}}+...+{N_{n-1} \over {z-z_{n-1}}} $$
where the $N$'s are constants. Integrating this expansion term-by-term:
$$ \oint_C {z^m \over {z^n+1}} \,dz=N_0\oint_C {dz \over {z-z_0}} $$
using Cauchy's integral theorem, all the other integrals are zero, since the only singularity of the integrand inside $C$ is $z_0$; now using Cauchy's integral formula with $f(z)=1$ we get:
$$ -2i\sin\left((m+1){\pi \over n}\right)e^{i(m+1){\pi \over n}}\int_0^\infty {x^m \over {x^n+1}}dx=2\pi iN_0 $$
The final step is to calculate $N_0$:
$$ {(z-z_0)z^m \over {z^n+1}}=N_0+{N_1(z-z_0) \over {z-z_1}}+... $$
and if $z \to z_0$ then:
$$ N_0=\lim_{z \to z_0} {(z-z_0)z^m \over {z^n+1}}={0 \over 0} $$
thus, using L'Hôpital's rule:
$$ N_0=\lim_{z \to z_0} {(z-z_0)z^m \over {z^n+1}}={z_0^{m-n+1} \over n} $$
using $z_0=e^{i\pi \over n}$:
$$ N_0=-{e^{i( {m+1 \over n} )\pi} \over n } $$
Inserting this into the result, we finally get:
$$ \bbox[5px,border:2px solid red]
{
\displaystyle \int_0^\infty {x^m\over{x^n+1}}dx={ {2\pi i}\left(-{e^{i( {m+1 \over n} )\pi} \over n }\right) \over {-2i\sin\left((m+1){\pi \over n}\right)e^{i(m+1){\pi \over n}}} }={{\pi \over n} \over \sin\left((m+1){\pi \over n}\right)}
}
$$
now in order to calculate what you asked:
Define $t=x^n$; then:
$${dt \over dx} = nx^{n-1} $$ so:
$$\int_0^\infty { t^{{m} \over n} \over t+1} \left({dt \over nt^{{n-1}\over n} }\right)={1 \over n}\int_0^\infty { t^{{{m+1} \over n}-1} \over {t+1}}dt $$
Now define $$a={{m+1} \over n} $$ and finally get (for $0<a<1$):
$$ \bbox[5px,border:2px solid green]
{
\displaystyle \int_0^\infty {x^{a-1}\over{x+1}}dx= {\pi \over \sin(a\pi)}
}
$$ |
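A numerical check of the boxed identity, using only the standard library. The folding of $(1,\infty)$ onto $(0,1)$ and the substitution used to tame the endpoint singularity are my own choices, not part of the proof above:

```python
import math

def simpson(f, a, b, n=2000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2*i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2*i * h) for i in range(1, n // 2))
    return s * h / 3

def beta_integral(p, s=10):
    # Integral of x^(p-1)/(1+x) over (0, oo) for 0 < p < 1.
    # Fold (1, oo) onto (0, 1) via x -> 1/x, giving the integrand
    # (x^(p-1) + x^(-p))/(1+x) on (0, 1); then substitute x = u^s
    # so the integrand becomes smooth at u = 0.
    g = lambda u: s * (u**(s*p - 1) + u**(s*(1 - p) - 1)) / (1 + u**s)
    return simpson(g, 0.0, 1.0)

for p in (0.25, 0.5, 0.75):
    print(p, beta_integral(p), math.pi / math.sin(p * math.pi))
```

For each tested $p$ the quadrature agrees with $\pi/\sin(p\pi)$ to well below $10^{-6}$.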
Solve differential equation: $y''-\dfrac{1}{x}y'+\dfrac{\alpha^2}{x^2}y=0$ | Differential equations of this form are called Euler differential equations, and they can usually be transformed into equations with constant coefficients via exponential substitution (you can read more here)
If we use the substitution $x=e^t\Rightarrow t=\log(x)\Rightarrow y'=\frac{1}{x}y_t,\ y''=-\frac{1}{x^2}y_t+\frac{1}{x^2}y_{tt}$, we get an equation with constant coefficients:
$y_{tt}-2y_t+\alpha^2 y=0$
Can you take it from here? |
Why are the two answers different? (integrating exponents) | You have
$$
e^{2\pi i x/T}=\cos(2\pi x/T)+i\sin(2\pi x/T).
$$
Thus, clearly $e^{2\pi i x/T}\neq 1$ for $0<x<T$ (for example, $x=T/2$ gives $e^{\pi i}=-1$).
You have to be more careful when using complex powers. |
Poisson Process Decomposition Problems | Notice for $t>0$ we have $N_1(t) \sim \text{Poisson}(70t)$, $N_2(t) \sim \text{Poisson}(30t)$, and $$N(t)=N_1(t)+N_2(t) \sim \text{Poisson}(100t)$$ Moreover, $$E(N(1)|N_2(1)>2)=\sum_{k=3}^{\infty}k \cdot \frac{P(N(1)=k,N_2(1)>2)}{P(N_2(1)>2)}$$ Also note for $k\geq 3$ $$\{N(1)=k\}\cap \{N_2(1)>2\}=\coprod_{j=3}^k\{N_1(1)=k-j\}\cap\{N_2(1)=j\}$$ Since the union on the right hand side is disjoint, $$P(N(1)=k,N_2(1)>2)=\frac{e^{-100}}{k!}\sum_{j=3}^k{k \choose j}30^j70^{k-j}$$ Using the Binomial Theorem, $$P(N(1)=k,N_2(1)>2)=e^{-100}\frac{100^k}{k!}\Bigg(1-(0.7)^k\bigg[1+\frac{3k}{7}+\frac{9k(k-1)}{98}\bigg]\Bigg)$$ Meanwhile, $$P(N_2(1)>2)=1-P(N_2(1)\in \{0,1,2\})=1-481e^{-30}$$ Wolfram Alpha says that $$E(N(1)|N_2(1)>2)=\sum_{k=3}^{\infty}k \cdot \frac{P(N(1)=k,N_2(1)>2)}{P(N_2(1)>2)}=\frac{100(e^{30}-346)}{e^{30}-481}\approx 100$$ For part (b) the expected number of arrivals by $N_1$ on $\big[0,t_1\big]\dot\cup\big[t_2,t\big]$ is $70(t+t_1-t_2)$ and since $$N_1(t_2)-N_1(t_1)|N(t_2) - N(t_1)=n \sim \text{Binomial}\Big(n,\frac{7}{10}\Big)$$ we see that $$E(N_1(t)|N(t_2)-N(t_1)=n)=\frac{7n}{10}+70(t+t_1-t_2)$$ |
Is this use of the Divergence Theorem correct? | Note that the spherical cap is not a closed surface, i.e., it does not bound a region. In order to apply the Divergence Theorem, you must add in the "base" — the disk of radius $3$ in the $xy$-plane. So the answer is most definitely not $0$. |
Need a help in the proof of an example of Riesz representation theorem. | Yes, you are correct: Cauchy-Schwarz is applicable and hence the functional is bounded. The inequality holds because the proof of Cauchy-Schwarz only uses the structure of inner product spaces, i.e. only uses the properties of inner products and vectors. So no reference to basis, hence dimension, is necessary. |
How to prove limit of x/(x^2+1) is 2/5 as x approaches 2 using epsilon delta definition | The strategy is right, however you made some transcription errors from one line to the next, esp. changing $5(x^2+1)$ to $5x^2+1$. Calculations you publish should be readable and not contain trivial errors.
So yes, at some point you get to
$$
\left|\frac{2x-1}{5(x^2+1)}\right|·|x-2|\le \frac{2|x-2|+3}{5}·|x-2|
$$
At this point one usually introduces an artificial restriction like $δ\le 1$ to control the first factor as
$$
2|x-2|+3<2δ+3\le 5
$$
In consequence, the upper bound is $|x-2|$, which allows us to choose
$$
δ=\min(1,ε)
$$ |
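An empirical check of the final choice $δ=\min(1,ε)$ (the sampling loop is mine; it relies on the bound $|f(x)-2/5|\le|x-2|$ derived above):

```python
import random

f = lambda x: x / (x**2 + 1)

random.seed(0)
for eps in (0.5, 0.1, 0.01):
    delta = min(1, eps)
    for _ in range(10_000):
        x = 2 + random.uniform(-delta, delta)
        # the bound |f(x) - 2/5| <= |x - 2| from the estimate above
        assert abs(f(x) - 2 / 5) <= abs(x - 2) < eps + 1e-12
print("delta = min(1, eps) works at every sampled point")
```

This does not replace the proof, but it confirms that the chosen $δ$ keeps $f$ within $ε$ of $2/5$ on the sampled neighbourhood.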
Tables and histories of methods of finding $\int\sec x\,dx$? | I don't agree that evaluating this integral requires unexpected methods. In particular, it is possible to integrate any rational expression involving trigonometric functions using the substitution
$$
u = \tan(x/2).
$$
This substitution has the nice property that
$$
\sin x \;=\; \frac{2u}{1+u^2},\qquad \cos x \;=\; \frac{1-u^2}{1+u^2}, \qquad\text{and}\qquad dx \;=\; \frac{2\,du}{1+u^2}.
$$
This is a perfectly standard technique, though it usually isn't taught in calculus classes anymore.
Applying this to the integral of secant gives
$$
\int \sec x\,dx \;=\; \int \frac{dx}{\cos x} \;=\; \int \frac{2\,du}{1-u^2} \;=\; \ln\left|\frac{1+u}{1-u}\right|+C \;=\; \ln\left|\frac{1+\tan(x/2)}{1-\tan(x/2)}\right|+C
$$
Most computer algebra systems use this technique as part of their integration algorithm, so this is the answer that you tend to get if you ask a computer for the integral of secant.
By the way, if this trick strikes you as clever, be aware that it works just as well to integrate rational expressions of sine and cosine using Euler's identity and the substitution $u=e^{ix}$. This tends to involve a lot of complex numbers, but it might seem more straightforward than the above substitution.
In any case, the only sense in which the integral of secant is difficult is that it can't be evaluated easily using the bag of tricks that we tend to teach in calculus classes nowadays. However, I don't think there's anything mathematically "natural" about the set of tricks that we teach, so I don't think the difficulty of integrating secant has any real mathematical significance.
Edit: By the way, the substitution $u = \tan(\theta/2)$ corresponds to a certain parameterization of the circle by rational functions. In particular, this is essentially the stereographic projection of the unit circle from the point $(-1,0)$ to the $y$-axis, with the $\theta/2$ coming from the fact that an inscribed angle is half of the corresponding central angle. This same parameterization can be used to enumerate all Pythagorean triples.
Edit 2: To illustrate the point further, here's a way of integrating secant that only involves an "obvious" substitution. Let $u = \cos x$. Then
$$
du \;=\; - \sin x\,dx \;=\; -\sqrt{1-u^2}\,dx
$$
so
$$
\int \sec x\,dx \;=\; \int \frac{dx}{\cos x} \;=\; \int -\frac{du}{u\sqrt{1-u^2}} \;=\; \mathrm{sech}^{-1}u+ C \;=\; \mathrm{sech}^{-1}(\cos x) + C.
$$
Now, you might object to using the derivative formula for $\mathrm{sech}^{-1}$, on the grounds that this isn't usually covered in a first-year calculus course. But again, it seems arbitrary to me that we cover inverse trig functions but not inverse hyperbolic trig functions in first-year calculus. |
Prime ideals $I=(X+Y,X-Y)$ | Note that $\Bbb{Z}[X,Y]/(X-Y)\cong\Bbb{Z}[T]$ because the ring homomorphism
$$\Bbb{Z}[X,Y]\ \longrightarrow\ \Bbb{Z}[T]:\ P(X,Y)\ \longmapsto P(T,T),$$
is surjective and has kernel $(X-Y)$. Therefore
$$\Bbb{Z}[X,Y]/I\cong(\Bbb{Z}[X,Y]/(X-Y))/(\overline{X}+\overline{Y})\cong\Bbb{Z}[T]/(2T),$$
which is not an integral domain because $2T=0$ whereas $2,T\neq0$. So $I$ is not prime. |
Percentage Change: Decomposition | You can calculate the percentage change of two measurements as:
\begin{equation}
\text{Percentage change} = \frac{\Delta X}{X_1}\times100\% = \frac{X_2-X_1}{X_1}\times100\%
\end{equation}
where $X_1$ and $X_2$ are the old and the new measurement, respectively.
For the overall employment this gives an increase of 49.8% in your case, as you mention. Now you can simply do the same for the national and immigrant group separately:
\begin{equation}
\frac{195229-138332}{138332}\times 100\% = 41.1\%
\end{equation}
and
\begin{equation}\frac{39344-18189}{18189}\times 100\% = 116\%
\end{equation}
So from this we see that the national and immigrant employments in that period increased by 41.1% and 116%, respectively. Therefore a much larger relative change in new employments is seen in the immigrant group; however, the immigrant group has a much smaller population, so it doesn't affect the total percentage very much. |
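The same arithmetic in Python (the helper and group names are mine; the overall figure computes to about 49.87%, consistent with the roughly 49.8% quoted above):

```python
def pct_change(old, new):
    # percentage change relative to the old value
    return (new - old) / old * 100

national  = pct_change(138332, 195229)
immigrant = pct_change(18189, 39344)
total     = pct_change(138332 + 18189, 195229 + 39344)
print(f"national: {national:.1f}%, immigrant: {immigrant:.0f}%, total: {total:.2f}%")
```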
Prove: $\sup (S \cup T) = \max \{\sup(S), \sup(T)\}$, where $S$ and $T$ are non-empty subsets of $\mathbb{R}$ | Let $m_S:=\sup S$, $m_T:=\sup T$ and $m:=\sup (S\cup T)$.
Part 1:
$S\subseteq S\cup T\implies m_S\leq m$.
$T\subseteq S\cup T\implies m_T\leq m$.
This can be captured in: $\max(m_S,m_T)\leq m$.
Part 2:
If $x\in S\cup T$ then we have the possibilities:
$x\in S$ leading to $x\leq m_S\leq \max(m_S,m_T)$
$x\in T$ leading to $x\leq m_T\leq \max(m_S,m_T)$.
So $\max(m_S,m_T)$ is an upper bound for $S\cup T$.
Conclusion: $m\leq\max(m_S,m_T)$.
You proved the second part, but the first part was left under-developed. |
Deduce that $\sum_{k \in \mathbb{N}} \frac{k^2}{(4k^2-1)^2} = \frac{\pi^2}{64}$ | Hint: You are missing the sines in your series.
Split your sum into even and odd $n$.
\begin{equation}
\sum_{n=2}^{\infty} \frac{2n(1+(-1)^n)}{\pi(n^2-1)}=\left(\sum_{k=1}^{\infty} \frac{4k\left(1+(-1)^{2k}\right)}{\pi(4k^2-1)}\right)+\left(\sum_{k=1}^{\infty} \frac{2(2k+1)\cdot 0}{\pi((2k+1)^2-1)}\right)\\=\sum_{k=1}^{\infty} \frac{8k}{\pi(4k^2-1)}
\end{equation}
Now use Parseval's identity. |
Working out $\operatorname{Proj} k[x_0,...x_n]/(x_0^2,x_0x_1)$ | I think $X$ is not generally reduced. Let's look at the case $X = \operatorname{Proj}(k[x_0,x_1,x_2]/(x_0^2,x_0x_1))$.
We examine this scheme in the affine chart $U$ defined by $x_2 \neq 0$. Since localization and taking degree zero pieces both commute with quotients, we have $X\cap U = \operatorname{Spec}(k[x,y]/(x^2,xy))$. That is, we invert $x_2$, set $x = x_0/x_2$ and $y = x_1/x_2$ to be coordinates on $U$, and obtain the local generators for the ideal, which "don't change" since they have no $x_2$ terms. (We are dehomogenizing the generators with respect to $x_2$.) The ideal of $X\cap U$ is $(x^2,xy) = (x)\cap (x,y)^2$, which implies that $X\cap U = \operatorname{Spec}(k[x,y]/(x,y)^2) \cup \operatorname{Spec}(k[x,y]/(x))$. This is the union of a (reduced) line with an embedded order two fat point, which is nonreduced. This shows exactly where the nonreducedness lies, since if we localize at $x_1\neq 0$, then we get a reduced line.
If $n>2$, a similar phenomenon occurs, except that the line becomes a hyperplane, and the embedded point becomes an embedded codimension $2$ fat linear subscheme. (What happens if $n=1$?) |
Generator of rational functions unchanged under $\sigma(X) = X + 1$ | $L^G=K(X^p-X)$. To prove this use that $[L:L^G]=p$, $[L:K(X^p-X)]=p$ and $K(X^p-X)\subseteq L^G$. |
Continunity of a two variable function in Apostol's analysis | The function $f(x,y)$ is not continuous at $(0,0)$, because if you approach this point from e.g. the line $y=x$, then you can consider the limit of $f(x,y)=f(x,x)$ as $x$ approaches $0$. We have that $f(x,x)=1$ for all $x\neq 0$, which is different from the value at $(0,0)$, which is $f(0,0)=0$.
The partial derivatives, however, exist at $(0,0)$, because these are derivatives taken along the axes, and $f$ is defined in a very specific way on the axes. |
Evaluate the Integral : $\int_{2}^{1}\frac{dt}{8-3t}$ | You may just write
$$
\int^2_1\frac{dt}{8-3t}=-\frac13\int_1^2\frac{1}{t-\frac83}dt=-\frac13 \left[\ln \left| t-\frac83\right| \right]_1^2=\frac13 \ln\left(\frac52\right).
$$ |
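A quick numerical confirmation with the midpoint rule (stdlib only):

```python
import math

f = lambda t: 1 / (8 - 3 * t)
n = 100_000
h = 1 / n
approx = h * sum(f(1 + (k + 0.5) * h) for k in range(n))  # midpoint rule on [1, 2]
exact = math.log(5 / 2) / 3
print(approx, exact)   # both about 0.30543
```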
Renewal process with Bernoulli interarrival times. | Following @Did's tip, we have
\begin{align}
\mathbb E[N(t)] &= \mathbb E[N(\lfloor t\rfloor)]\\
&= \mathbb E\left[\sum_{k=1}^{\lfloor t\rfloor}X_k \right]\\
&= \sum_{k=1}^{\lfloor t\rfloor}\mathbb E[X_k] \\
&= \frac{\lfloor t\rfloor}p\leqslant\frac{t+1}p.
\end{align} |
Solve $y'(t) = \dfrac{1}{1+ty}$ | This equation is equivalent to
$$\frac{dt}{dy}=1+ty\\\frac{dt}{dy}-yt=1$$ which is linear with factor $e^{\int (-y)dy}=e^{-\frac{y^2}{2}}$ and so
$$t=e^{\frac{y^2}{2}}\left\{\int e^{-\frac{y^2}{2}}\cdot1dy+c\right\}$$
Then use the error function to evaluate the integral. |
Asymptotics for series $\sum_{n \leq x} n/\log n$ | Let $$s_n = \sum^n_{k=2} \frac{k}{\log k}.$$ Then, obviously $s_n\to\infty$ as $n\to\infty$, and $$\lim_{n\to\infty}\frac{s_{n+1}-s_n}{(n+1)^2-n^2}=\lim_{n\to\infty}\frac{n+1}{(2n+1)\log(n+1)}=0.$$ Due to the Stolz–Cesàro theorem, this implies
$$\lim_{n\to\infty}\frac{s_n}{n^2}=0.$$
For your more general question, you can just replace $n^2$ by $\sum_{k\le n}f(k)$ and $\log(n)$ by $\log(n)^\varepsilon$. |
A Catalan number proof involving an $2$ by $n$ array | Call a (possibly incomplete) arrangement of numbers in the $2 \times n$ array admissible if each number is greater than all those to its left or above it. Now consider constructing a complete admissible arrangement by adding numbers $1, 2, \ldots, 2n$ one by one to the empty array in increasing order. If the numbers $1, 2, \ldots, m$ have been admissibly arranged, then there can be no empty cells above or to the left of the occupied cells, for in that case, putting any number from $m+1, \ldots, 2n$ in the empty cells would make the arrangement inadmissible.
Therefore at each step the lowest unassigned number can only be assigned to the leftmost empty cell in either row, or
only in the top row if both rows contain the same number of assigned cells. In the Figure, $?$ denotes one of $1, 2, 3, 4 $ (admissibly arranged), and $*$ a possible assignment for number 5.
The process of adding one number at a time in the leftmost empty cell of a row can be represented as a path in a grid. Starting at $(0,0)$, when we add the new number to the top row, we move right horizontally by one unit, and when we place the number in the bottom row, we move down in the grid by one unit (we represent the grid downwards). A mapping between the $2 \times 5$ array and the corresponding path is shown in the Figure.
Thus the number of admissible arrays is the same as the number of paths that lie above the diagonal and hence equals $C_n$. |
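For small $n$ the count of admissible arrays can be brute-forced and compared with $C_n$ (the helper names are mine):

```python
from itertools import permutations
from math import comb

def admissible_count(n):
    # Brute-force: count 2 x n arrays filled with 1..2n whose rows
    # increase left-to-right and whose columns increase top-to-bottom.
    count = 0
    for p in permutations(range(1, 2 * n + 1)):
        top, bottom = p[:n], p[n:]
        rows_ok = all(top[i] < top[i + 1] and bottom[i] < bottom[i + 1]
                      for i in range(n - 1))
        cols_ok = all(top[i] < bottom[i] for i in range(n))
        if rows_ok and cols_ok:
            count += 1
    return count

def catalan(n):
    return comb(2 * n, n) // (n + 1)

print([admissible_count(n) for n in range(1, 5)])   # [1, 2, 5, 14]
print([catalan(n) for n in range(1, 5)])            # [1, 2, 5, 14]
```

The brute-force counts match the Catalan numbers $1, 2, 5, 14$ for $n = 1, \ldots, 4$.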
Modular arithmetic, remainder of a term in a sequence when divided by 100 | It turns out that $b_n=87$ when $n>2$. In fact, $3^{3^3}=7\,625\,597\,484\,987$ and therefore $b_3=87$. And, by the Euler-Fermat theorem, $3^{40}\equiv1\pmod{100}$. So$$3^{87}\equiv3^7\equiv87\pmod{100}.$$ |
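These congruences are easy to verify with Python's built-in modular exponentiation:

```python
# b_3 = 3^(3^3) = 3^27 ends in 87
assert 3**27 == 7_625_597_484_987
assert 3**27 % 100 == 87

# Euler-Fermat: 3^40 = 1 (mod 100), so exponents only matter mod 40
assert pow(3, 40, 100) == 1

# 87 = 7 (mod 40), and 3^7 = 2187 = 87 (mod 100): the last two digits are stable
assert pow(3, 87, 100) == pow(3, 7, 100) == 87
print("all congruences check out")
```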
Uniformly converging | It's not hard to see that $f_n\to 0$ pointwise everywhere. If $f_n$ converged uniformly to $0$ on $\mathbb R,$ we would have
$$\tag 1 \sup_{\mathbb R}|f_n|\to 0.$$
But for each $n>1,$ $f_n(2/n) = 1/2.$ Thus the suprema in $(1)$ are all at least $1/2$ for $n>1,$ showing $(1)$ fails. Thus $f_n$ fails to converge uniformly to $0$. |
Fitting power law with loglog or exponential? | Almost as Count Iblis commented, when you use least-squares methods, you want to minimize
$$SSQ_1=\sum_{i=1}^n \left(y_i^{(calc)}-y_i^{(exp)} \right)^2$$ When you linearized the model, you minimize
$$SSQ_2=\sum_{i=1}^n \left(\log\left(y_i^{(calc)}\right)-\log\left(y_i^{(exp)}\right) \right)^2$$
$$\log\left(y_i^{(calc)}\right)-\log\left(y_i^{(exp)}\right)=\log\left(\frac{y_i^{(calc)} }{y_i^{(exp)} } \right)=\log\left(1+\frac{y_i^{(calc)} -y_i^{(exp)}}{y_i^{(exp)} } \right)$$ If the errors are "small", using $\log(1+\epsilon)\sim \epsilon$, you then have
$$\log\left(y_i^{(calc)}\right)-\log\left(y_i^{(exp)}\right) \sim \frac{y_i^{(calc)} -y_i^{(exp)}}{y_i^{(exp)} }$$ which means that
$$SSQ_2 \sim \sum_{i=1}^n \left(\frac{y_i^{(calc)} -y_i^{(exp)}}{y_i^{(exp)} } \right)^2$$ which means that, using linearization and $SSQ_2$, you minimize more or less the sum of the squares of the relative errors while, using $SSQ_1$ you minimize the sum of the squares of the absolute errors.
For illustration purposes, let us consider the following data set
$$ \left(
\begin{array}{cc}
x & y \\
1 & 15 \\
2 & 30 \\
3 & 52 \\
4 & 80 \\
5 & 125 \\
6 & 200
\end{array}
\right)$$ and for simplicity use the model $y=e^{a+bx}$. If we take logarithms and perform the linear regression we shall get
$$\log(y)=2.32851+0.504671 x\tag 1$$ to which will correspond $SSQ_2=0.03665$.
Using the nonlinear regression, we shall get
$$y=\exp({2.49690+0.467135 x})\tag 2$$ which, as you noticed, shows different values for the parameters.
Now, let us perform the nonlinear regression using
$$SSQ_3=\sum_{i=1}^n \left(\frac{y_i^{(calc)} -y_i^{(exp)}}{y_i^{(exp)} } \right)^2$$ We shall obtain
$$y=\exp({2.30824+0.507829 x})\tag 3$$ Observe how close are the parameters in $(1)$ and $(3)$. Moreover, $SSQ_3=0.03676$ so close to $SSQ_2$ !
In any manner, when you face nonlinear regression problems, you need estimates of the parameters to be tuned. Linearization (when possible) is the way to go. But, when you have the estimates, you must continue with the nonlinear regression, since what is measured is $y$ and not any of its possible transforms. |
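The linearized fit $(1)$ can be reproduced with a closed-form least-squares line on $(x_i, \log y_i)$, stdlib only (the nonlinear fits $(2)$ and $(3)$ need an iterative optimizer and are not repeated here):

```python
import math

xs = [1, 2, 3, 4, 5, 6]
ys = [15, 30, 52, 80, 125, 200]
ls = [math.log(y) for y in ys]

# closed-form ordinary least squares for log(y) = a + b*x
m = len(xs)
xbar = sum(xs) / m
lbar = sum(ls) / m
b = sum((x - xbar) * (l - lbar) for x, l in zip(xs, ls)) / \
    sum((x - xbar) ** 2 for x in xs)
a = lbar - b * xbar
ssq2 = sum((a + b * x - l) ** 2 for x, l in zip(xs, ls))
print(f"log(y) = {a:.5f} + {b:.6f} x,  SSQ2 = {ssq2:.5f}")
```

This recovers the coefficients $2.32851$ and $0.504671$ of equation $(1)$ and the residual $SSQ_2 \approx 0.03665$ quoted above.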
Find the curvature of the curve $\beta(s) = \int_{0}^{s} N(t) dt$ | The mistake is in your expression for $N'$. There you use a Frenet-Serret equation that only holds for arc length parametrized curves. But $\alpha$ is not arc length parameterized; its speed is equal to $2$.
In general, for a regular curve $\alpha$, the usual Frenet-Serret formulas pick up an extra factor $v_\alpha$ (the speed of the curve), because of the chain rule. So
$$
\begin{cases}
T'_\alpha=v_\alpha\kappa_\alpha N_\alpha\\
N'_\alpha=-v_\alpha\kappa_\alpha T_\alpha+v_\alpha \tau_\alpha B_\alpha\\
B'_\alpha=-v_\alpha \tau_\alpha N_\alpha
\end{cases}.
$$
Consequently, $$|N'_\alpha|=v_\alpha|-3T_\alpha-4 B_\alpha|=10.$$ |
Degree of maps on the 3-sphere | Amplifying Jason DeVito's comment above, you may want to look at this MO page (which includes a nice answer by Jason), and also this one, where the term index is used, rather than degree. |
Determine asymptotic complexity of the code | The function is clearly trying to make sure that all elements less than $x$ end up on the left-hand side, and all elements greater than $x$ on the right-hand side.
To do this, it scans left-to-right with the index $l$, looking for an index $l$ where $A[l] > x$.
Similarly, it scans right-to-left with the index $r$, looking for an $A[r]$ such that $A[r] < x$.
Knowing this, we can design the worst-case input. For example, for an array of length $n$, the worst case would be one where:
$l$ would have to look till $n- 1$ from $0 \to n - 1$
$r$ would have to look till $0$ from $n - 1 \to 0$
Hence, an input may look like
$$S = [x - 1, x, x, x, \ldots, x + 1]$$
For this input $S$, $l$ will have to go to the $(n-1)$-th index to find the $x + 1$ element, which is greater than $x$. This is one search of length $n$.
By a similar argument, $r$ will have a search of length $n$ from $n-1 \to 0$ to find the $x - 1$
If we consider our basic operator to be comparisons $<$ and $>$ that are done in the loop:
there are $n$ $<$ comparisons for $l$
there are $n$ $>$ comparisons for $r$.
Hence, the total number of comparisons for a list of length $n$ is $2n$, which is in $\Theta(n)$
Hence, the algorithm is in $\Theta(n)$ |
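A sketch of the two scans with a comparison counter. The original code isn't shown in the question, so this is a reconstruction of the behaviour described above, with names of my own choosing:

```python
def scan_count(A, x):
    # Count the comparisons made by the two scans described above:
    # l walks right until A[l] > x, r walks left until A[r] < x.
    comps = 0
    l = 0
    while l < len(A):
        comps += 1                 # comparison A[l] > x
        if A[l] > x:
            break
        l += 1
    r = len(A) - 1
    while r >= 0:
        comps += 1                 # comparison A[r] < x
        if A[r] < x:
            break
        r -= 1
    return comps

n, x = 100, 5
S = [x - 1] + [x] * (n - 2) + [x + 1]   # the worst case described above
print(scan_count(S, x))                 # 2n = 200 comparisons
```

On the worst-case input, each scan makes $n$ comparisons, for $2n$ in total.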
Solving the Cauchy problem $u(0,y) = \sin y$ for PDE $u_x = yu_y$ | Follow the method in http://en.wikipedia.org/wiki/Method_of_characteristics#Example:
$\dfrac{dx}{dt}=1$ , letting $x(0)=0$ , we have $x=t$
$\dfrac{dy}{dt}=-y$ , letting $y(0)=y_0$ , we have $y=y_0e^{-t}=y_0e^{-x}$
$\dfrac{du}{dt}=0$ , letting $u(0)=f(y_0)$ , we have $u(x,y)=f(y_0)=f(e^xy)$
$u(0,y)=\sin y$ :
$f(y)=\sin y$
$\therefore u(x,y)=\sin(e^x y)$ |
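A finite-difference check (my own, using central differences) that $u(x,y)=\sin(e^x y)$ satisfies $u_x = y\,u_y$ and the initial condition $u(0,y)=\sin y$:

```python
import math

def u(x, y):
    # candidate solution u(x, y) = sin(e^x * y)
    return math.sin(math.exp(x) * y)

h = 1e-6
ok = True
for x, y in [(0.0, 0.3), (0.5, -1.2), (1.0, 0.7)]:
    ux = (u(x + h, y) - u(x - h, y)) / (2 * h)   # central differences
    uy = (u(x, y + h) - u(x, y - h)) / (2 * h)
    ok &= abs(ux - y * uy) < 1e-5                # PDE: u_x = y u_y
    ok &= abs(u(0.0, y) - math.sin(y)) < 1e-12   # initial condition
print(ok)   # True
```

Note that the original PDE is $u_x = y u_y$, matching the characteristic system $dy/dt=-y$ above after writing it as $u_x - y u_y = 0$... with the sign convention used in the linked method.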
Proof of the First Isomorphism Theorem for Groups | Your proof of normality, definition of $\phi$ and proof that $\phi$ is well-defined are spot on (though I'd personally write something like "let $H = \ker(\varphi)$" at the start to avoid writing $\ker(\varphi)$ everywhere, as it makes things look slightly cleaner). Also, your proof that the map is a homomorphism is correct.
As for your starred line, as you're showing injectivity, I'd suggest replacing it with a simpler argument. In particular, as you have a homomorphism, to show injectivity it suffices to show that the kernel is trivial. But if $gH \in \ker(\phi)$, then by definition $\varphi(g) = 1$, so $g \in H$ and $gH = H$, which is the identity element in the quotient group.
An alternative proof - which implicitly proves the claim about trivial kernel that I mentioned above - is to say that if $\phi(gH) = \phi(hH)$, then $\varphi(g) = \varphi(h)$, so that $\varphi(gh^{-1}) = 1$ and $gh^{-1} \in H$. Thus $gH = hH$, as required.
I don't entirely follow your argument at $(*)$; what is the $\pi$ that crops up in the penultimate expression? Also, do you mean to write $\phi^{-1}\varphi[h\ker(\varphi)]$ in the second expression? My guess is that $\pi$ is the projection map from $G$ to $G/\ker(\varphi)$, and that in the end you're slightly assuming what you're trying to prove.
As an example of a case where this might not prove injectivity, let's suppose that $J$ and $H$ are normal subgroups of $G$ with $J < H$ (strict containment). Then, if $H$ is the kernel of some homomorphism $\varphi$ out of $G$, we consider the map
$$\phi: G/J \longrightarrow \varphi(G)$$
given by
$$\phi(gJ) = \varphi(g).$$
In particular, this isn't an injective map. Then following your line $(*)$, we say that
$$\phi^{-1}(hJ) = \phi^{-1}\varphi[hJ] = \phi^{-1}\varphi(\pi^{-1}\{hJ\}),$$
where $\pi$ is the projection $G \rightarrow G/J$. Does your argument then prove injectivity in this case? As if it does, then there's a mistake somewhere!
(Alternatively, you might take $\pi$ to be the projection map $G \rightarrow G/\ker(\varphi) = G/H$. But in this case you can't work with it unless you know exactly what $\ker(\varphi)$ is, in which case you already know if it's injective or not!) |
The Ring Game on $K[x,y,z]$ | I computed the nimbers of a few rings, for what it's worth. I don't see any sensible pattern so perhaps the general answer is hopelessly hard. This wouldn't be surprising, because even for very simple games like sprouts starting with $n$ dots no general pattern is known for the corresponding nimbers.
OK so the way it works is that the nimber of a ring $A$ is the smallest ordinal which is not in the set of nimbers of $A/(x)$ for $x$ non-zero and not a unit. The nimber of a ring is zero iff the corresponding game is a second player win -- this is a standard and easy result in combinatorial game theory. If the nimber is non-zero then the position is a first player win and his winning move is to reduce the ring to a ring with nimber zero.
Fields all have nimber zero, because zero is the smallest ordinal not in the empty set. An easy induction on $n$ shows that for $k$ a field and $n\geq1$, the nimber of $k[x]/(x^n)$ is $n-1$; the point is that the ideals of $k[x]/(x^n)$ are precisely the $(x^i)$. In general an Artin local ring of length $n$ will have nimber at most $n-1$ (again trivial induction), but strict inequality may hold. For example if $V$ is a finite-dimensional vector space over $k$ and we construct a ring $k\oplus \epsilon V$ with $\epsilon^2=0$, this has nimber zero if $V$ is even-dimensional and one if $V$ is odd-dimensional; again the proof is a simple induction on the dimension of $V$, using the fact that a non-zero non-unit element of $k\oplus\epsilon V$ is just a non-zero element of $V$, and quotienting out by this brings the dimension down by 1. In particular the ring $k[x,y]/(x^2,xy,y^2)$ has nimber zero, which means that the moment you start dealing with 2-dimensional varieties things are going to get messy. But perhaps this is not surprising -- an Artin local ring is much more complicated than a game of sprouts and even sprouts is a mystery.
Rings like $k[[x]]$ and $k[x]$ have nimber $\omega$, the first infinite ordinal, as they have quotients of nimber $n$ for all finite $n$. As has been implicitly noted in the comments, the answer for a general smooth connected affine curve (over the complexes, say) is slightly delicate. If there is a principal prime divisor then the nimber is non-zero and probably $\omega$ again; it's non-zero because P1 can just reduce to a field. But if the genus is high then there may not be a principal prime divisor, by Riemann-Roch, and now the nimber will be zero because any move will reduce the situation to a direct sum of rings of the form $k[x]/(x^n)$ and such a direct sum has positive nimber as it can be reduced to zero in one move. So there's something for curves. For surfaces I'm scared though because the Artin local rings that will arise when the situation becomes 0-dimensional can be much more complicated.
I don't see any discernible pattern really, but then again the moment you leave really trivial games, nimbers often follow no discernible pattern, so it might be hard to say anything interesting about what's going on. |
Is this sequence always periodic? | Multiplying both sides by $n$ we have $$n^2-n(a_i+a_{i+1})+a_ia_{i+1}-a_{i+2}+1=n$$therefore $$a_{i+2}=(a_i-n)(a_{i+1}-n)-n+1$$defining $b_i=a_i-n$ we have $$b_{i+2}=b_ib_{i+1}-2n+1$$therefore $b_i$ is periodic if and only if $a_i$ is periodic. Now choose $n={1\over 2}$, so that $$b_{i+2}=b_{i+1}b_i$$ where for $b_1=2$ and $b_2=2$ we have$${b_3=2^2\\b_4=2^3\\b_5=2^5\\b_6=2^8\\b_7=2^{13}\\\vdots}$$therefore $$\Large b_i=2^{f_i}$$ where $f_i$ is the famous Fibonacci sequence. Since $f_i$ grows unbounded, so do $b_i$ and $a_i$, which implies the non-periodicity of $a_i$. |
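As a quick sanity check, here is a short computational sketch (function names are mine) verifying that the recurrence $b_{i+2}=b_{i+1}b_i$ with $b_1=b_2=2$ produces powers of $2$ whose exponents are Fibonacci numbers.

```python
# Sketch: verify that b_1 = b_2 = 2 and b_{i+2} = b_{i+1} * b_i yields
# b_i = 2^{f_i} with f_i the Fibonacci sequence 1, 1, 2, 3, 5, 8, 13, ...
def b_sequence(n_terms):
    seq = [2, 2]
    while len(seq) < n_terms:
        seq.append(seq[-1] * seq[-2])
    return seq

def fibonacci(n_terms):
    fib = [1, 1]
    while len(fib) < n_terms:
        fib.append(fib[-1] + fib[-2])
    return fib

terms = b_sequence(10)
exponents = fibonacci(10)
assert all(t == 2 ** e for t, e in zip(terms, exponents))
```

In particular the seventh term is $2^{13}=8192$, and the terms grow without bound, so $b_i$ (hence $a_i$) cannot be periodic.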
Why is the decision problem of the "Travelling Salesman" $\in \mathcal{NP}$? | In my understanding, a decision problem can't "return" the tour as well
You have misunderstood the verifier-based definition of NP. NP is a class of decision problems, but there is an additional requirement that there exists a proof that a "yes" answer is correct and that proof be verifiable in polynomial time. The need for a decision problem and the need for a polynomial time checkable proof are separate requirements.
Note that if you have an oracle that can answer yes/no questions about NP problems, you can use it to construct a proof, be it a Travelling Salesman tour, clique or a variable assignment that satisfies a Boolean formula. This is because all NP problems are downward self-reducible, meaning queries about subproblems of the main problem can be used to prune the search space down to a solution. |
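The verifier-based definition can be made concrete. Below is a sketch (in Python, with a made-up distance matrix; the function name is mine) of a polynomial-time verifier for the TSP decision problem: given a certificate (a claimed tour), checking that it visits every city exactly once and stays within the budget takes linear time in the number of cities.

```python
# Sketch of a polynomial-time certificate verifier for decision-TSP:
# "is there a tour of total cost <= budget?"
def verify_tour(dist, budget, tour):
    n = len(dist)
    if sorted(tour) != list(range(n)):      # must visit every city exactly once
        return False
    cost = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
    return cost <= budget

dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
assert verify_tour(dist, 21, [0, 1, 3, 2])      # cost 2 + 4 + 3 + 9 = 18 <= 21
assert not verify_tour(dist, 17, [0, 1, 3, 2])  # 18 > 17
```

Finding the tour may be hard; checking a proposed one is easy, which is exactly what membership in NP requires.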
A null set is a subset of other sets | It's a bit tricky.
Suppose you are looking at some set $S$ and you want to know if $\varnothing \subset S$. So you ask yourself:
Is every element of $\varnothing$ an element of $S$?
You might think that the answer to this question is "No" because $\varnothing$ doesn't have any elements. But in fact that is precisely the reason that the answer to the question is "Yes". Because $\varnothing$ has no elements, there aren't any elements in $\varnothing$ that aren't in $S$. Which is precisely the criterion that you need in order to say that $\varnothing \subset S$.
It might help if instead of defining subset using an affirmative formulation:
$T \subset S$ means that every element of $T$ is also an element of $S$
you instead use the equivalent negative formulation:
$T \subset S$ means that there aren't any elements of $T$ that aren't also elements of $S$.
Edited to add:
Just an afterthought. If you are new to studying mathematics you might find it helpful to know that this approach -- restating "Property $P$ is always true" as "Property $P$ is never false" -- is a fairly common technique in proving something. This type of argument is very close to what is known as an indirect proof ("indirect" because instead of showing that something is true, we show that it can't be false). |
Subgroup of semidirect product | Alas, it not true.
Possibly the simplest example is the dihedral group of order $8$,
$$
G = \langle a, b : a^4 = 1, b^2 = 1, b^{-1} a b = a^{-1} \rangle,
$$
with $A = \langle a \rangle$ and $B = \langle b \rangle$.
You can see that $H = \langle b a \rangle$, a subgroup of order $2$ which intersects $A$ trivially, is not conjugate to $B$.
This is because the centralizer of $b$ is $\langle b, a^{2} \rangle$, of order $4$. Thus $b$ has two conjugates, which are $b$ itself and $b^{a} = a^{-1} b a = b (b^{-1} a^{-1} b) a = b a^{2}$. |
Does there exist a skew-symmetric matrix $A\in \mathbb{F}_3^{n\times n}$ with $\det(A)\not=0$ for odd $n$? | $\det(A)=\det(A^T)=\det(-A)=(-1)^n \det(A)$.
If $n$ is odd this clearly implies $2\det(A)=0$, and in $\mathbb F_3$ this implies $\det(A)=0$. |
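For $n=3$ this can be checked exhaustively. A sketch (helper name is mine): over $\mathbb F_3$ the diagonal of a skew-symmetric matrix must vanish (since $2$ is invertible), so each matrix is determined by the three entries above the diagonal, giving only $27$ cases.

```python
from itertools import product

# Sketch: every 3x3 skew-symmetric matrix over F_3 has determinant 0 (mod 3).
def det3(m):
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

for a, b, c in product(range(3), repeat=3):
    A = [[0, a, b], [-a % 3, 0, c], [-b % 3, -c % 3, 0]]
    assert det3(A) % 3 == 0
```

In fact the determinant of a $3\times 3$ skew-symmetric matrix vanishes identically over any commutative ring, in line with the parity argument above.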
Verify my proof that $N$ in $ N = (P_1 \cdot P_2...P_n)+1 $ must be odd. | That is really complicated, but yes, you are right.
An easier way to see it would be the following:
As $P_1 = 2$, we have that
$$P_1\cdot P_2 \cdot \ldots \cdot P_n = 2\cdot P_2 \cdot \ldots \cdot P_n$$
is even.
Now $1$ is odd, and you might know that even plus odd always gives odd, case closed. :) |
Number theory equation(diophantine)! | We deal only with positive $x$ and $y$, so this is quite incomplete.
We can take care of $x+y\le 3$ by inspection. So assume $x+y\ge 4$. Bring the $x^2+y^2$ stuff to the left side. We get
$$(8x+8y-15)(x^2+y^2) =15(xy+1).$$
The left side is $\ge 17(x^2+y^2)\ge (17)(2xy)$, since $8x+8y-15\ge 8\cdot 4-15=17$ and $x^2+y^2\ge 2xy$. So it is bigger than the right side, because $34xy>15xy+15$ for positive integers $x,y$. |
Finding $\lim_{(x,y)\rightarrow (0,0)} \frac{\tan(x^2+y^2)}{\arctan(\frac{1}{x^2+y^2})} $ | $$\lim\limits_{(x,y)\to (0,0)} \frac{\tan\left(x^2+y^2\right)}{\arctan\left(\frac{1}{x^2+y^2}\right)}$$
Using polar coordinates, we have
$$\lim\limits_{r\to 0^+} \frac{\tan\left(r^2\cos^2\phi+r^2\sin^2\phi\right)}{\arctan\left(\frac{1}{r^2\cos^2\phi+r^2\sin^2\phi}\right)}$$
$$=\lim\limits_{r\to 0^+} \frac{\tan\left(r^2\left(\cos^2\phi+\sin^2\phi\right)\right)}{\arctan\left(\frac{1}{r^2\left(\cos^2\phi+\sin^2\phi\right)}\right)}$$
$$=\lim\limits_{r\to 0^+} \frac{\tan\left(r^2\right)}{\arctan\left(\frac{1}{r^2}\right)}= \frac{0}{\left(\frac{\pi}{2}\right)}=0$$ |
Clarification needed for proof of infinitely many primes of the form $4k+3$ | The usual proof goes along the following lines: assume that $3=p_1,7=p_2,p_3,\ldots,p_m$ are some distinct primes of the form $4k+3$. The huge number
$$ N = -1+4\prod_{j=1}^{m}p_j $$
is $\equiv -1\pmod{4}$, hence it must have a prime divisor $\equiv -1\pmod{4}$. However, none of $p_1,p_2,\ldots,p_m$ divides $N$, since they all divide $N+1$, hence there must be an extra prime number of the form $4k+3$.
A similar approach is to construct a sequence of integers $\{a_n\}_{n\geq 1}$ with the properties that $a_n\equiv -1\pmod{4}$ for every $n\geq 1$, and for every $m>n\geq 1$
$$ \gcd(a_n,a_m)=1 $$
holds. It follows that there must be at least one different prime of the form $4k+3$ for each element of the sequence. A sequence fulfilling such constraints is, for instance,
$$ a_1=3,\qquad a_n = -1+4\prod_{k=1}^{n-1}a_k.$$ |
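The claimed properties of this sequence are easy to check computationally for the first few terms; here is a short sketch.

```python
from math import gcd, prod

# Sketch: build a_1 = 3, a_n = -1 + 4 * a_1 * ... * a_{n-1}, then check that
# every term is congruent to -1 (mod 4) and that the terms are pairwise coprime.
a = [3]
for _ in range(4):
    a.append(-1 + 4 * prod(a))

assert all(t % 4 == 3 for t in a)      # t = -1 = 3 (mod 4)
assert all(gcd(a[i], a[j]) == 1
           for i in range(len(a)) for j in range(i + 1, len(a)))
```

Coprimality holds for the same reason as in the classical argument: for $m>n$, $a_m\equiv -1\pmod{a_n}$, so $\gcd(a_n,a_m)=\gcd(a_n,-1)=1$.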
When $\Bbb Z_n$ is a domain. Counterexample to $ab \equiv 0 \Rightarrow a\equiv 0$ or $b\equiv 0\pmod n$ | No, your answer is incorrect.
Either/Or means it's sufficient that one of them abides the condition.
And in the specific example at hand, one of them indeed does, as $3|6$.
A proper counterexample would be $8\times9\equiv0\pmod{12}$.
$$[a=8,b=9,n=12]\implies[ab\equiv0\pmod{n}]\wedge[a<n]\wedge[b<n]\wedge[a\nmid n]\wedge[b\nmid n]$$ |
linearity of determinant | The property key to understanding this is the fact that the determinant of a Matrix with two identical rows is $0$:
$T(a(i)) = \det{[a(1)\ldots a(j-1) \quad a(i) \quad a(j+1)\ldots a(n)]} = 0$ for $i\neq j $
This can be proved by permuting the free column, that we have set to $a(i)$, with the fixed $i^{th}$ row. We obtain a new $T'(a(i))= -T(a(i))$, but both determinants are equal so the only possibility is $T(a(i))=0$.
Then:
$T(k\, a(i)+ x) = \\
\det{[a(1)\ldots a(j-1) \quad ka(i) \quad a(j+1)\ldots a(n)]} + \det{[a(1)\ldots a(j-1) \quad x \quad a(j+1)\ldots a(n)]} = \\
k\,T(a(i)) + \det{[a(1)\ldots a(j-1) \quad x \quad a(j+1)\ldots a(n)]} = \det{[a(1)\ldots a(j-1) \quad x \quad a(j+1)\ldots a(n)]},$ since $T(a(i))=0$. |
Let $X$ be a countable set. Then which of the following are true? | All four options are correct. Consider the spaces $X_1=\big\{\frac{1}{n}:n\in \Bbb N\big\}$ and $X_2=\{0\}\cup\big\{\frac{1}{n}:n\in \Bbb N\big\}$ with usual distance metric $|\cdot|$ on $\Bbb R$.
Now consider a bijection between $X$ and these spaces: let $f:X\to Y$ be a bijection, where $Y$ is either $X_1$ or $X_2$. Then $d(a,b)=|f(a)-f(b)|$ for all $a,b\in X$ is a metric on $X$.
$X_1$ gives you options 2. and 4. and $X_2$ gives you options 1. and 3. |
Prove that set is closed | $C$ is a subset of the base $5$ numbers between $\frac{1}{2}$ and $1$. If $x$ is any real number we can express it as a base $5$ number where the fractional part is $\{c: c=\sum_{i=1}^\infty \frac{c_i}{5^i}\}$ where $c_i\in \{0,1,2,3,4\}$.
Therefore, if $x\notin C$, either $x\lt \frac{1}{2}$, $x\gt 1$, or at least one digit $c_i$ of $x$ is neither $2$ nor $4$.
If $x\lt\frac{1}{2}$ or $x\gt 1$ we can find an open interval separating $x$ from $C$.
If some digit of $x$ is neither $2$ nor $4$, let $c_k$ be the first such digit. Then the distance from $x$ to any other point in $C$ is greater than or equal to $\frac{1}{5^k}$, so the open ball of radius $\frac{1}{5^{k+1}}$ about $x$ is disjoint from $C$.
Therefore, the complement of $C$ is open and $C$ is closed. |
Solving $Ax = b$ using least squares (minimize $||Ax -b||_2$) | You can use the normal equations, i.e. the least squares solution $x_0$ is found by solving (for $x_0$) $$A^TAx_0 = A^T b. \quad(\star)$$
So calculate the matrix $A^TA$ and the vector $A^T b$, and you are left with solving the system of linear equations $(\star)$, which you can do by standard methods, like Gaussian elimination.
A proof that $x_0$ is a least squares solution if and only if it satisfies $A^T Ax_0 = A^T b$ can be found here.
$\newcommand{\CS}{\mathcal{C}}\newcommand{\R}{\mathbb{R}}$If you don't want to use the normal equations, you can use the method in my comment below. (Using the normal equations should be easier though.) Here is an explanation of that method. (I.e. why for the solution, $Ax$ should be the projection of $b$ onto the column space of $A$.)
Let $\CS(A)$ denote the column space of $A$. As $x$ varies, $Ax$ will take on all values in $\CS(A)$, effectively by definition of column space. But remember $\left\| Ax-b\right\|_2$ is the (Euclidean) distance between $Ax$ and $b$. So to minimise $\left\| Ax-b\right\|_2$ over $x$, you want the $x$ such that $Ax$ is closest to $b$ (in Euclidean distance). In other words, for the solution, $Ax$ should be the element in $\CS(A)$ that is closest to $b$ (since $Ax\in\CS(A)$ always).
Now, there is a theorem that if $W$ is a subspace of $\R^n$, the closest vector (using Euclidean distance) in $W$ to a given vector $b\in\R^n$ is the vector $\mathrm{proj}_W(b)$ (i.e. the projection of $b$ onto $W$). So for your problem, the closest vector in $\CS(A)$ to $b$ is the projection of $b$ onto $\CS(A)$, say $\widehat{b}$. Hence for the solution $x$, we must have $Ax=\widehat{b}$, since we said that for the solution, $Ax$ must be the closest vector to $b$ in $\CS(A)$. |
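As a concrete illustration of the normal-equations route, here is a small sketch with made-up data. It solves $A^TAx_0=A^Tb$ for a $3\times 2$ system by hand (Cramer's rule on the $2\times 2$ normal equations) and then checks the defining property of the least-squares solution: the residual $Ax_0-b$ is orthogonal to every column of $A$.

```python
# Sketch: least squares for a tiny 3x2 system via the normal equations.
A = [[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]]
b = [1.0, 2.0, 2.0]

# Form A^T A (2x2) and A^T b (length 2).
ata = [[sum(A[k][i] * A[k][j] for k in range(3)) for j in range(2)]
       for i in range(2)]
atb = [sum(A[k][i] * b[k] for k in range(3)) for i in range(2)]

# Solve the 2x2 system by Cramer's rule.
det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
x0 = [(atb[0] * ata[1][1] - ata[0][1] * atb[1]) / det,
      (ata[0][0] * atb[1] - atb[0] * ata[1][0]) / det]

# The residual r = A x0 - b must be orthogonal to both columns of A.
r = [sum(A[k][j] * x0[j] for j in range(2)) - b[k] for k in range(3)]
assert all(abs(sum(A[k][i] * r[k] for k in range(3))) < 1e-12
           for i in range(2))
```

The orthogonality of the residual to the column space is exactly the geometric statement above: $Ax_0$ is the projection of $b$ onto $\mathcal C(A)$.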
Domain of functions involving arcsine or arccosine | For the first function, the fraction is defined when $\arcsin \frac{x}{4} \ne 0 \implies \frac{x}{4} \ne \sin 0=0 \implies x\ne 0$
So the domain of the first function is $[-4,4]-\{0\}$
For the second function we need $$2+\arccos \frac{x+1}{4} \ne 0,$$
that is, $$\arccos \frac{x+1}{4} \ne -2.$$
But $\arccos$ takes values only in $[0,\pi]$, so it can never equal $-2$, and the denominator never vanishes. The only restriction is that $\arccos \frac{x+1}{4}$ be defined, i.e. $-1\le \frac{x+1}{4}\le 1$, so the domain of the second function is $[-5,3]$. |
Statement not provable from ZF | You’ve not actually derived the Tikhonov product theorem from $(*)$ and $(**)$: your argument does not show that the arbitrary product of compact spaces is compact. However, you can derive AC itself. Let $\{A_i:i\in I\}$ be a non-empty set of non-empty sets. Without loss of generality assume that the $A_i$ are pairwise disjoint and disjoint from $I$. For $i\in I$ let $X_i=A_i\cup\{i\}$. By $(*)$ there are topologies $\tau_i$ for $i\in I$ such that each $\langle X_i,\tau_i\rangle$ is a compact Hausdorff space, and there is no harm in assuming that $\{i\}\in\tau_i$.
By $(**)$ $X=\prod\{X_i:i\in I\}$ is a compact Hausdorff space. For $i\in I$ let $\pi_i:X\to X_i$ be the projection map, and let $F_i=\pi_i^{-1}[A_i]$; $A_i$ is closed in $X_i$, and $\pi_i$ is continuous, so $F_i$ is closed in $X$. Let $\mathscr{F}=\{F_i:i\in I\}$; it’s not hard to check that $\mathscr{F}$ is a centred family of closed sets in $X$. The key is that if $J$ is a finite subset of $I$, one need only choose an $a_i\in A_i$ for each $i\in J$ in order to define a point $x_J$ of $\bigcap\{F_i:i\in J\}$, since one can set $\pi_i(x_J)=i$ for $i\in I\setminus J$. Thus, $\bigcap\mathscr{F}\ne\varnothing$, and any point of $\bigcap\mathscr{F}\ne\varnothing$ is a choice function for $\{A_i:i\in I\}$. |
What is the side-view of a cylinder? | The only way a disc comes to look like a straight line is if your eye is in the same plane as that disc. Since your eye cannot lie in the same plane as both end discs, one of them will have to look a bit ellipsoid. |
Show that the radius of the circle inscribed in a right triangle with natural sides is a natural number | Hint: the radius of the inscribed circle in a right triangle is $$r=\frac{c_1+c_2-i}{2},$$ where $c_1$ and $c_2$ are the legs and $i$ is the hypotenuse. The numerator $c_1+c_2-i$ is always even, since $c_1^2+c_2^2=i^2$ forces $c_1+c_2$ and $i$ to have the same parity. |
Series - calculating the sum | The series can be written out as $1-x^2+x^4-\cdots$, which is a geometric series with initial term $1$ and ratio $-x^2$. Therefore it converges for $|x| < 1$ and equals $\frac{1}{1+x^2}$. |
Number of pairs $(A, B)$ such that $A \subseteq B$ and $B \subseteq \{1, 2, \ldots, n\}$ | We use $[n]$ to denote the set of natural numbers $\{1,2,\ldots,n\}$.
We obtain
\begin{align*}
\color{blue}{\sum_{{(A,B)}\atop{A\subseteq B\subseteq[n]}}1}
&=\sum_{B\subseteq [n]}\sum_{A\subseteq B}1\\
&=\sum_{j=0}^n\sum_{{B\subseteq [n]}\atop{|B|=j}}\sum_{k=0}^j\sum_{{A\subseteq B}\atop{|A|=k}}1\tag{1}\\
&=\sum_{j=0}^n\binom{n}{j}\sum_{k=0}^j\binom{j}{k}\tag{2}\\
&=\sum_{j=0}^n\binom{n}{j}2^j\tag{3}\\
&\,\,\color{blue}{=3^n}\tag{4}
\end{align*}
Comment:
In (1) we rearrange the sum according to terms with increasing size of subsets $A$ and $B$.
In (2) we use the fact that the number of subsets of size $q$ of a finite set $X$ is $\binom{|X|}{q}$.
In (3) and (4) we apply the binomial theorem.
Example: $n=2$
We have the following $3^2=9$ pairs $(A,B)$ when considering $[2]=\{1,2\}$:
\begin{align*}
&(\emptyset,\emptyset),\\
&(\emptyset,\{1\}),\,(\{1\},\{1\}),\\
&(\emptyset,\{2\}),\,(\{2\},\{2\}),\\
&(\emptyset,\{1,2\})\,(\{1\},\{1,2\}),\,(\{2\},\{1,2\}),\,(\{1,2\},\{1,2\})
\end{align*} |
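The identity $\sum 2^{|B|}\binom{n}{|B|}=3^n$ can also be checked by brute force for small $n$; here is a sketch (function name is mine).

```python
from itertools import combinations

# Sketch: count pairs (A, B) with A subset of B subset of {1, ..., n}
# by summing 2^|B| over all subsets B, and compare with 3^n.
def count_pairs(n):
    universe = range(1, n + 1)
    total = 0
    for k in range(n + 1):
        for B in combinations(universe, k):
            total += 2 ** len(B)   # each subset A of B gives one pair
    return total

assert all(count_pairs(n) == 3 ** n for n in range(6))
```

The answer $3^n$ also has a direct interpretation: each element of $[n]$ independently lands in one of three places (in both $A$ and $B$, in $B$ only, or in neither).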
Bounds / Approximation to sum of squares of sum | We see \begin{align*}\sum^N_{i=1} \lvert x_i + y_i \rvert^2 &= \sum^N_{i=1} \lvert x_i\rvert^2 + x_i \overline{y_i}+ \overline{x_i} y_i + \lvert y_i \rvert^2 \\
&=\sum^N_{i=1} \lvert x_i \rvert^2 + \lvert y_i \rvert^2 + 2 \text{Re}\{\overline{x_i} y_i\}.
\end{align*} However, the real part of a complex number is always less than the magnitude, so $$\sum^N_{i=1} \lvert x_i + y_i \rvert^2 \le \sum^N_{i=1} \lvert x_i\rvert^2 + \lvert y_i \rvert^2 + 2 \lvert x_i \rvert \lvert y_i \rvert.$$ Now we use $ab \le \frac 1 2 (a^2 + b^2)$ which holds for all $a,b \in \mathbb R$. Thus $$\sum^N_{i=1} \lvert x_i + y_i \rvert^2 \le \sum^{N}_{i=1} \lvert x_i \rvert^2 + \lvert y_i \rvert^2 + 2\cdot \frac 1 2 (\lvert x_i \rvert^2 + \lvert y_i \rvert^2 ) = 2\left(\sum^N_{i=1} \lvert x_i \rvert^2 + \sum^N_{i=1} \lvert y_i \rvert^2 \right).$$ This bound is tight. To see this, you can make $\text{Re}(\overline{x_i} y_i) \le \lvert x_i \rvert \lvert y_i \rvert$ an equality by taking $x_i,y_i$ real and positive. You can make $2\lvert x_i \rvert \lvert y_i \rvert \le \lvert x_i \rvert^2 + \lvert y_i \rvert^2$ and equality by taking $x_i = y_i$. |
Minimizing $4\sec^{2}(x)+9\csc^{2}(x)$ for $x$ in the first quadrant. Discrepancy in solution | Hint:
$2\sec x-3\csc x=0$ may not give us the minimum value as $\sec x\csc x$ is not constant
Use
$$4\sec^2x+9\csc^2x=4(1+\tan^2x)+9(1+\cot^2x)=13+(2\tan x-3\cot x)^2+2\cdot2\cdot3=25+(2\tan x-3\cot x)^2,$$ so the minimum value is $25$, attained when $2\tan x=3\cot x$, i.e. $\tan^2x=\frac32$. |
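A quick numerical scan over the first quadrant (a sketch; grid resolution is my choice) confirms what the completed square predicts: the minimum of $4\sec^2x+9\csc^2x$ is $25$, not the value one would get by setting $2\sec x-3\csc x=0$.

```python
import math

# Sketch: scan (0, pi/2) numerically.  The identity
# 4 sec^2 x + 9 csc^2 x = 25 + (2 tan x - 3 cot x)^2
# predicts a minimum of 25, attained where tan^2 x = 3/2.
def f(x):
    return 4 / math.cos(x) ** 2 + 9 / math.sin(x) ** 2

xs = [k * (math.pi / 2) / 100000 for k in range(1, 100000)]
m = min(f(x) for x in xs)
assert abs(m - 25) < 1e-6
```

Evaluating at $x=\arctan\sqrt{3/2}$ gives exactly $4\cdot\frac52+9\cdot\frac53=10+15=25$.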
Markov Chains - Proof that a finite chain has at least one recurrent state | If the state space is infinite, then all states can be transient -- think of a chain on the positive integers that deterministically marches off to infinity. The cited proof fails in the infinite-state case because it it not always true that
$$
\lim_n\sum_j p_{ij}(n) = \sum_j \lim_n p_{ij}(n),
$$
the reason being that you are attempting to interchange the order of two limits (the sum over $j$ is also a limit), and you are not assured of the same result after interchanging. |
Linear independence of columns and rows of a matrix. | The rank is the dimension of the column space (a.k.a. range or image) of a matrix. The column space is the span of the columns of the matrix. So one way to think of the rank is the size of a basis of the column space.
Some consequences:
If the matrix has $n$ columns, and $k$ of the columns are linearly independent, then the rank is at least $k$, since the span of these $k$ columns (which has dimension $k$) is a subspace of the column space.
If all $n$ columns of the matrix are linearly independent, then the columns themselves are a basis of the column space, so the rank is $n$. |
Proving Sard's theorem | In John Milnor's book, "Topology from the Differentiable Viewpoint", there is a proof of Sard's theorem.
We have a map $g=f\circ h^{-1}$,
where $f:U \to \mathbb{R}^p$ with $U$ an open set in $\mathbb{R}^n$,
and $h:U\to \mathbb{R}^n$ defined by $h(x)=(f_1(x),x_2,\ldots,x_n)$.
$h$ is a diffeomorphism onto an open set $V'\subset\mathbb{R}^n$; in particular it is a bijection, so $h^{-1}:V'\to U$ makes sense.
For each point $(t,x_2,\ldots,x_n)\in V'$,
the first coordinate of $g(t,x_2,\ldots,x_n)=f(h^{-1}(t,x_2,\ldots,x_n))$ is $f_1(h^{-1}(t,x_2,\ldots,x_n))=t$, by the definition of $h$.
Thus $g$ carries the hyperplane $\{t\}\times\mathbb{R}^{n-1}$ into the hyperplane $\{t\}\times\mathbb{R}^{p-1}$.
Therefore the composition $g=f\circ h^{-1}$ maps hyperplanes to hyperplanes. |
Prove that the set of $\mathbb{N} \times \mathbb{N} \times \mathbb{N}$ is countable. | If you already know that $\Bbb{N\times N}$ is countable, fix a bijection $f\colon\Bbb{N\times N\to N}$. Now consider $g\colon\Bbb{N\times N\times N\to N\times N}$ defined as: $$g(n,m,k)=(n,f(m,k)),$$ or better yet $h\colon\Bbb{N\times N\times N\to N}$ defined as $$h(n,m,k)=f(n,f(m,k))$$ and cut out the middle-man.
As my freshman discrete mathematics professor used to tell us, go home and convince yourself this is correct. |
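To convince yourself concretely, one can take $f$ to be the Cantor pairing function (a standard bijection $\Bbb{N\times N\to N}$) and check the composed map on a finite box; this is a sketch, with names matching the answer.

```python
# Sketch: f is the Cantor pairing function, and h(n, m, k) = f(n, f(m, k))
# as in the answer.  Injectivity is checked by brute force on a finite box.
def f(n, m):
    return (n + m) * (n + m + 1) // 2 + m

def h(n, m, k):
    return f(n, f(m, k))

values = {h(n, m, k) for n in range(20) for m in range(20) for k in range(20)}
assert len(values) == 20 ** 3     # no collisions: h is injective on the box
```

Since $f$ is a bijection, $h$ is a bijection $\Bbb{N\times N\times N\to N}$, not merely an injection.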
Show that the set of all finite subsets of $\mathbb{N}$ is countable. | Define three new digits:
$$\begin{align} \{ &= 10 \\ \} &= 11\\ , &= 12 \end{align}$$
Any finite subset of $\mathbb{N}$ can be written in terms of $0,1,\cdots,9$ and these three characters, and so any expression of a subset of $\mathbb{N}$ is just an integer written base-$13$. For instance,
$$\{1,2\} = 10 \cdot 13^4 + 1 \cdot 13^3 + 12 \cdot 13^2 + 2 \cdot 13^1 + 11 \cdot 13^0 = 289872$$
So for each finite $S \subseteq \mathbb{N}$, take the least $n_S \in \mathbb{N}$ whose base-$13$ expansion gives an expression for $S$. This defines an injection into $\mathbb{N}$.
More generally, any set whose elements can be written as finite strings from some finite (or, in fact, countable) alphabet, is countable. |
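The encoding is easy to implement; here is a sketch (names are mine) that reproduces the worked example.

```python
# Sketch: encode a finite subset of N as a base-13 integer using the extra
# digits '{' = 10, '}' = 11, ',' = 12, as in the answer.
DIGITS = {str(d): d for d in range(10)}
DIGITS.update({'{': 10, '}': 11, ',': 12})

def encode(subset):
    text = '{' + ','.join(str(x) for x in sorted(subset)) + '}'
    value = 0
    for ch in text:              # Horner evaluation of the base-13 string
        value = value * 13 + DIGITS[ch]
    return value

assert encode({1, 2}) == 289872
assert encode({1, 2}) != encode({1, 3})   # distinct sets, distinct codes
```

Sorting the elements fixes one canonical string per set, so `encode` is a well-defined injection into $\mathbb N$.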
$x^2 +2ax+b=0$ where $x_1$ and $x_2$ are real solutions. For which value of $b$ function $f(b)=|x_1-x_2|$ reaches maximum if $|x_1-x_2|=2m$ | We need $a^2-b > 0$, that is $a^2 > b$.
$$a^2-b\le m^2$$
$$b\ge a^2-m^2$$
In summary, $$b \in [a^2-m^2, a^2)$$ |
Want to show that sequences in little l 1 can be represented by orthonormal basis | You have to show that $\lim_{n\to\infty} \sum_{i=1}^n a_ie_i = a$.
By definition, we have
$$\left\|a - \sum_{i=1}^n a_ie_i\right\|_1 = \left\|(\underbrace{0, \ldots, 0}_{n}, a_{n+1}, a_{n+2}, \ldots)\right\|_1 = \sum_{i=n+1}^\infty |a_i| \xrightarrow{n\to\infty} 0$$
because $\sum_{i=1}^\infty |a_i|$ is a convergent series since $a \in \ell^1$. |
Existence and boundedness of partials in neighborhood implies continuity at point | Hint: $f$ is differentiable, hence continuous, along line segments parallel to the axes. So mimic the proof that $C^1$ implies differentiable. |
Artin-Wedderburn theorem and square dimension | The (unique) left simple module of the algebra $M_r(\Bbb{C})$ is the column space $\Bbb{C}^r$. In other words the simple component of dimension $r^2$ has a simple module of dimension $r$.
Observe that in case of a finite group $G$ of order $n$ and dimensions of simple representations $d_1,d_2,\ldots,d_k$ we have the equation
$$
n=d_1^2+d_2^2+\cdots+d_k^2.
$$
It all fits magically together :-) |
Trying to understand how to count with more than 1 group of balls | You clarified in comments that when you said,
...we have (3b,2w,1w) or (3b,1w,2w) as the same, but different from (2w,3b,1w) for example, and so we divide by $2!$,
you meant that the color labels, BWW, are the same for the first two, but different for the last one, WBW. I agree with all your answers, and I agree, if I understand you correctly, that you divide by $2!$ because W appears twice as a color label.
One observation: when you write
number of possibilities, in 5 draws, of 2b, 3y (no replacement, order important)? $\frac{7×6}{2!}×\frac{15×14×13}{3!}×5!$,
we can interpret this as $\binom{7}{2}\binom{15}{3}\cdot5!$, or as the number of ways of carrying out the process
choose $2$ black balls;
choose $3$ yellow balls;
arrange the $5$ selected balls in some order.
An alternative expression is
$$
\frac{5!}{2!\,3!}\times(7\cdot6)\times(15\cdot14\cdot13),
$$
which can be interpreted as $\binom{5}{2,3}P(7,2)P(15,3)$, or the number of ways of carrying out the process
label $5$ slots with color labels B, B, Y, Y, Y;
select and arrange balls in the B slots;
select and arrange balls in the Y slots.
Interestingly, the second method can be applied to the situation where the balls are replaced. In this case you get
$$
\frac{5!}{2!\,3!}\times7^2\times15^3.
$$
This is similar to a problem that was in an earlier version of your post.
Added in response to comment: I regard it as a mistake to say that you divide by $2!$ to consider $(a,d,e)$ and $(a,e,d)$ the same. There are two ways to think about what the $2!$ is really for, depending on which of the two process above you are using.
If you are using the first process, there are $\frac{3}{1}$ sets containing one black ball, $\frac{2\cdot1}{2!}$ sets containing two white balls (there's your division by $2!$), and $3!$ ways of arranging the elements of a three-element set formed by taking the union of a set of one black ball and a set of two white balls. When you divide by $3!$ to go to the unordered version of the problem, you are simply undoing the final arrangement step. You could say that you divided by $2!$ because the sets $\{d,e\}$ and $\{e,d\}$ are the same set, and not because we want the ordered triples $(a,d,e)$ and $(a,e,d)$ to be considered the same. (Dividing by $3!$ takes care of the latter.)
If you are using the second process, there are $\frac{3!}{2!}$ ways to put the color labels B, W, W on the slots (there's your division by $2!$), $3$ ways to put a black ball in the B slot, and $2\cdot1$ ways to put white balls in the two W slots. To do the unordered version of the problem, you divide by $3!$ to turn the number of ordered triples into the number of sets. You could say that you divided by $2!$ because the color labelings BWW and BWW are the same labeling. (Normally there are $3!$ permutations of three labels, but when two of the labels are identical, $3!$ overcounts the true number of permutations by a factor of $2!$.) |
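The with-replacement formula at the end can be sanity-checked by brute force with smaller numbers (my own toy parameters): $3$ draws with replacement from $2$ labelled black and $3$ labelled white balls, order important, exactly $1$ black and $2$ white.

```python
from itertools import product

# Sketch: the slot-labelling method predicts 3!/(1! 2!) * 2^1 * 3^2 = 54
# ordered sequences with exactly one black ball.
balls = ['b0', 'b1', 'w0', 'w1', 'w2']
count = sum(1 for seq in product(balls, repeat=3)
            if sum(x.startswith('b') for x in seq) == 1)
assert count == 3 * 2 * 9   # 54
```

Replacement makes the per-slot choices independent, which is why the powers $2^1$ and $3^2$ replace the falling factorials of the without-replacement case.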
Prove that the union of the interior of a set and the boundary of the set is the closure of the set | Based on the flaws suggested in the comments this I think (IMHO) this is an easier way to approach some parts of the proof.
To prove the line that $x \in ∂X \implies x \in \overline A $
Suppose $x$ is in the boundary of $A$ and $x$ is not in some closed set $B$ which contains $A$. Then $x \in B^c$ which is open and hence there is a neighbourhood $V_x$of $x$ which entirely avoids $A$ leading to a contradiction since every neighbourhood of $x$ must contains elements in $A$ and $A^c$.
Similar reasoning can be used to show that $x \in \overline A \implies x \in A^{\circ}$ or $x \in ∂X$.
Suppose $x \in \overline A$ and $x$ is an exterior point of $A$. Then there is a neighbourhood of $x$ which entirely avoids $A$. But then there is a closed set which contains $A$ but not $x$. This leads to a contradiction since $x \in \overline A \implies x$ is in every closed set containing $A$. Then $x$ is not an exterior point of $A \implies x$ is either an interior point or a boundary point of $A \implies x \in A^{\circ}$ or $x \in ∂X$ |
Is $A_P$ a GCD domain? | Here is an "element-wise" solution. We'll prove the following more general result (this appears as exercise 15.10 in Pete L. Clark's notes on commutative algebra).
Theorem: Let $D$ be a GCD domain. If $S$ is a multiplicatively closed subset of $D$ such that $0\notin S$, then $S^{-1}D$ is a GCD domain.
Proof: Let $\alpha, \beta\in S^{-1}D$. If we write $\alpha=a/s$ and $\beta=b/t$, with $a,b\in D$ and $s,t\in S$, we claim that $\gcd(\alpha,\beta)=\gcd(a,b)/1$. Then it follows immediately that $S^{-1}D$ is a GCD domain.
Let's put $\gcd(a,b)=d$, so $d\mid a$ and $d\mid b$. This means there are $a',b'\in D$ such that $a=da'$ and $b=db'$. Then $$\frac{d}{1}\cdot \frac{a'}{s}=\frac{a}{s},$$ $$\frac{d}{1}\cdot \frac{b'}{t}=\frac{b}{t}.$$
Therefore $d/1\mid a/s$ and $d/1\mid b/t$. Now, let $\gamma\in S^{-1}D$ such that $\gamma\mid a/s$ and $\gamma\mid b/t$. If we write $\gamma=c/u$, with $u\in S$, then there are $a''/s', b''/t'\in S^{-1}D$ ($s',t'\in S$) such that $c/u\cdot a''/s'=a/s$ and $c/u\cdot b''/t'=b/t$. Thus $$\frac{a}{s}=\frac{ca''}{us'},$$ $$\frac{b}{t}=\frac{cb''}{ut'}.$$
This leads us to $aus'=ca''s$ and $but'=cb''t$, so if we set $us'=s_1$ and $ut'=s_2$ we have $s_1,s_2\in S$ and $c\mid as_1$, $c\mid bs_2$. Then $c\mid as_1s_2$ and $c\mid bs_1s_2$. Therefore $$c\mid \gcd(as_1s_2,bs_1s_2)=s_1s_2\gcd(a,b)=s_1s_2d.$$
Let's set $s_1s_2=s_3$, then $s_3\in S$ and $c\mid ds_3$. Thus there is $x\in D$ such that $ds_3=cx$ and hence $$\frac{c}{u}\cdot \frac{xu}{s_3}=\frac{d}{1}.$$
This means that $\gamma\mid d/1$ and so we conclude that $d/1=\gcd(\alpha, \beta)$. |
Closed-form solution to $y''=-1/y^2$? | There are implicit solutions
$$ t = a + c k^3 \ln\left(c k^2 + y + \sqrt{y^2 + 2 c k^2 y}\right) - k \sqrt{y^2 + 2 c k^2 y}$$
and
$$ t = a - c k^3 \ln\left(c k^2 + y + \sqrt{y^2 + 2 c k^2 y}\right) + k \sqrt{y^2 + 2 c k^2 y}$$
But we would not expect these to have explicit closed-form solutions for $y$ as functions of $t$. |
Prove that a subset of an infinite subset of $\mathbb{R}$ is countable. | SKETCH: As Asaf said, the fact that you’re working in $\Bbb R$ is irrelevant; just let $A$ be any infinite set. Construct $H$ recursively, one point at a time. The fact that $A$ is infinite means that at every stage, when you’ve picked only the first $n$ points, say, there’s sure to be a point left to choose. Try to write it up yourself. If you get stuck, I’ve left a spoiler-protected write-up below; mouse-over to see it.
Let $a_0\in A$ be arbitrary. Since $A$ is infinite, $A\setminus\{a_0\}\ne\varnothing$, so we may choose any $a_1\in A\setminus\{a_0\}$. In general suppose that $n\in\Bbb N$, and we have already chosen distinct elements $a_k\in A$ for $k<n$; $A$ is infinite, so $A\setminus\{a_k:k<n\}\ne\varnothing$, and we may choose $a_n\in A\setminus\{a_k:k<n\}$. In the end let $H=\{a_n:n\in\Bbb N\}$. By construction the map $\Bbb N\to H:n\mapsto a_n$ is a bijection. |
interest being compounded annually | The formula used isn't right.
For a principal $P$, rate of interest $r$%, time in years $n$, the amount $A$ when compounded annually is given by
$$A=P(1+\frac{r}{100})^n$$
Thus
$$31104=P(1+\frac{20}{100})^3$$
$$31104=P(\frac{6}{5})^3$$
$$P=\frac{31104\cdot 125}{216}=18000$$
The principal must be $18000$. |
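The answer is easy to verify by plugging the principal back into the compound-interest formula; a one-line sketch:

```python
# Sketch: 18000 at 20% compounded annually for 3 years should grow to 31104.
P, r, n = 18000, 20, 3
A = P * (1 + r / 100) ** n
assert round(A) == 31104
```

Indeed $18000\cdot 1.2^3 = 18000\cdot 1.728 = 31104$.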
Dual isogenies of complex tori in Birkenhake-Lange | In the case of isogenies, the isogeny $f$ is given by an invertible linear transformation $L:V_X\to V_Y$, where $V_X$ is the tangent space of $X$ at $0$ (and similarly for $V_Y$ and $Y$). Therefore $g$ is induced by $nL^{-1}$, and so is also analytic. |
How to understand Bass 3.7 | I think this notation means that the sequence $$\mu_1 (A), \mu_2 (A), \ldots$$ is increasing. |
Find the symmetrical matrix $A$ so that $Q(\vec x) = \vec x^TA \vec x$ | $A$ is called the matrix associated with the quadratic form $Q$. The general procedure is rather simple: put the coefficients of $x_i^2$ in the diagonal $a_{ii}$ and divide the coefficient of $x_{ij}$ in $2$, writing it twice in $A$: once in $a_{ij}$ and once in $a_{ji}$.
In your example, the coefficient of $x_1^2$ is $1$ so $a_{11}=1$, the coefficient of $x_2^2$ is $1$ so $a_{22}=1$. The coefficient of $x_1x_2$ is $1$ so $a_{12}=a_{21}=\frac{1}{2}$, resulting in:
$$
A=\begin{pmatrix}
1 & 0.5 \\ 0.5 & 1
\end{pmatrix}
$$
Here is another example. Consider $Q(\underline{x})=x_1^2+2x_2^2+x_3^2+2x_1x_2+x_3x_2$. Then the matrix associated with $Q$ is:
$$
\begin{pmatrix}
1 & 1 & 0 \\
1 & 2 & 0.5\\
0 & 0.5 & 1
\end{pmatrix}
$$
For further information see here. |
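A quick numerical check of the rule, using the second example above (helper names are mine):

```python
# Sketch: verify that x^T A x reproduces
# Q(x) = x1^2 + 2 x2^2 + x3^2 + 2 x1 x2 + x2 x3
# for the matrix built by halving each cross coefficient.
A = [[1.0, 1.0, 0.0],
     [1.0, 2.0, 0.5],
     [0.0, 0.5, 1.0]]

def quad_form(A, x):
    return sum(x[i] * A[i][j] * x[j] for i in range(3) for j in range(3))

def Q(x):
    x1, x2, x3 = x
    return x1**2 + 2*x2**2 + x3**2 + 2*x1*x2 + x2*x3

for x in [(1.0, 0.0, 0.0), (1.0, 2.0, 3.0), (-1.0, 0.5, 2.0)]:
    assert abs(quad_form(A, x) - Q(x)) < 1e-12
```

Splitting each cross coefficient symmetrically is what makes $A$ the unique symmetric matrix with $Q(\vec x)=\vec x^TA\vec x$.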
hamiltonian mechanics | The equation is just $\omega(X,Y)+\mathrm{d}H(Y)=0$ for any vector field $Y$ (just be careful with sign conventions), where one solves for $X$.
If a diffeomorphism $\phi$ is given, now it is easy to apply it on both sides of the equation and see how $X$ is written after the change of variables. |
Prove that the boundary and the interior of a manifold are disjoint. | There is definitely something to show: A priori it might be that for a given point $p\in M$ there exist two different charts - one that takes $p$ to the boundary of $\mathbb{H}^k=[0,\infty)\times \mathbb{R}^{k-1}$ and one that takes it to the interior. We have to show that this is impossible.
Suppose we had such charts, denoted with $\varphi_i:U\rightarrow \mathbb{H}^k$ ($i=1,2$) for which $\varphi_1(p)\in \mathrm{int}\mathbb{H}^k$ and $\varphi_2(p)\in \partial\mathbb{H}^k$. Then there would be a small neighbourhood $V\subset \mathrm{int}\mathbb{H}^k$ around $x=\varphi_1(p)$ such that the composition $f = \varphi_2\circ\varphi_1^{-1}\vert_V:V\rightarrow f(V)\subset\mathbb{H}^k$ was a diffeomorphism onto its image. Then $f(x)\in \partial \mathbb{H}^k$, which is forbidden by the following Lemma:
Lemma. Suppose $V\subset \mathrm{int}\mathbb{H}^k$ is open and $f:V\rightarrow f(V)\subset \mathbb{H}^k$ is a diffeomorphism onto its image. Then $f(V)\cap \partial \mathbb{H}^d = \emptyset$.
This Lemma, depending on your point of departure, is definitely non-trivial. E.g. in Lee's smooth manifold book it is Excercise 7.7 and follows from the earlier Proposition 7.16, which asserts that $f$, viewed as map into $\mathbb{R}^k$, is open. But $f(V)$ is only an open subset of $\mathbb{R}^k$ if it does not intersect $\partial \mathbb{H}^k$.
Another way of proving the Lemma is using algebraic topology - and this is what I alluded to in my comment about 'poking holes'. This might be a somewhat heavy gun for the situation at hand, but has the advantage that one can relax the requirement from $f$ being a diffeomorphism to it merely being a homeomorphism. The idea is as follows: Without loss of generality you can assume that $V$ is an open ball around $x$, but then $f\vert_{V\backslash x}:V\backslash x \rightarrow f(V)\backslash f(x)$ is a homeomorphism between a space which is homotopic to $S^{k-1}$ and a space which is contractible. But this is impossible, as one can check e.g. by looking at homology groups. |
Why is The Following equality true? (limit of a sum and integrals) | Note that:
$$\int\limits_{0}^{1} f(x) dx \approx \sum_{k=1}^{n}\int_{a_{k-1}}^{a_k}f(x)dx \approx \sum_{k=1}^{n} f(a_k)(a_k-a_{k-1}),$$
using the right Riemann sum. In your case, use $a_k=\frac{k}{n}$ and $f(x)=(1+x)^{-3}$. Then take the limit $n\to\infty$.
The $\frac{1}{n}$ term on the LHS in your expression is, precisely, the length of each subinterval: $a_k-a_{k-1}$. |
The meaning of "surjective" in the context of smooth manifolds | The condition "surjective" means exactly that the matrix of $DF_a$ has rank $m$: $DF_a$ is a linear transformation between vector spaces, and a linear map into $\mathbb{R}^m$ is surjective precisely when its rank is $m$. |
Number of twenty one digit numbers such that Product of the digits is divisible by $21$ | To have the product of the digits divisible by $21$ you just need (at least one of $3,6,$ or $9$ and at least one $7$) or at least one $0$. Compute how many numbers do not meet this criterion and subtract from all the numbers. How many $21$ digit numbers do not include a $7$? How many include no $3,6,9$? What happens with the ones that have neither of these? How many have no $0$? |
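The characterisation behind the hint -- the digit product is divisible by $21$ exactly when a $0$ appears, or a $7$ appears together with one of $3,6,9$ -- can be brute-forced for short lengths; a sketch (my own helper):

```python
from itertools import product
from math import prod

# Sketch: compare "digit product divisible by 21" against the
# characterisation "(a 0 appears) or (a 7 appears with a 3, 6, or 9)".
def criterion(digits):
    return 0 in digits or (7 in digits and any(d in digits for d in (3, 6, 9)))

for n in (2, 3):
    for digits in product(range(10), repeat=n):
        if digits[0] == 0:      # skip leading zeros: not an n-digit number
            continue
        assert (prod(digits) % 21 == 0) == criterion(digits)
```

The characterisation works because among the digits $1,\ldots,9$ only $7$ contributes a factor of $7$ and only $3,6,9$ contribute a factor of $3$, while a digit $0$ makes the product $0$.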
Proof: $\sum_{i=1}^pE_i \doteq \bigoplus_{i=1}^p E_i \leftrightarrow \forall i\in \{1,...,p\}(E_i \cap \sum_{t \in \{1,...,p\}-\{i\}}E_t=\{0\})$ | Proof of $\bf\leftarrow$
Let $e_i\in E_i$ such that
$$e_1+e_2+\cdots+e_n=0$$
so for all $i$ we have
$$e_i=-\sum_{j\ne i} e_j\in E_i \cap \sum_{j \in \{1,...,n\}-\{i\}}E_j=\{0\}$$
Proof of $\bf\rightarrow$ (by contraposition)
If there's $i$ such that
$$E_i \cap \sum_{j \in \{1,...,n\}-\{i\}}E_j\ne\{0\}$$
then let $x\ne0$ lie in this intersection, so that
$$x=e_i=\sum_{j\ne i} e_j$$
for some $e_i\in E_i$ and $e_j\in E_j$. Then
$$e_i-\sum_{j\ne i} e_j=0$$
with $e_i\ne0$, so $0$ has a nontrivial representation and $E_1+E_2+\cdots+E_p$ isn't a direct sum. QED. |
How to use the chain rule to find the derivative of the function? | we set $$f(x)=\frac{u}{v}$$ then we have $$f'(x)=\frac{u'v-uv'}{v^2}$$ where
$$u=3x^2+2\sqrt{x^3+\frac{4}{x^4}}$$ then we have
$$u'=6x+2\frac{1}{2}\left(x^3+\frac{4}{x^4}\right)^{-1/2}\left(3x^2-\frac{16}{x^5}\right)$$
and $$v'=3x^2\sqrt{x^2+4}+(x^3-4)\frac{1}{2}\left(x^2+4\right)^{-1/2}\cdot 2x$$
Can you finish this? |
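A quick symbolic check of the two stated derivatives (sympy assumed available; $v$ is read off from the given $v'$ as $v=(x^3-4)\sqrt{x^2+4}$):

```python
# Verify u' and v' from the answer symbolically.
import sympy as sp

x = sp.symbols('x', positive=True)
u = 3*x**2 + 2*sp.sqrt(x**3 + 4/x**4)
v = (x**3 - 4)*sp.sqrt(x**2 + 4)

# the formulas claimed in the answer (with 2 * 1/2 already cancelled in u')
u_claimed = 6*x + (x**3 + 4/x**4)**sp.Rational(-1, 2)*(3*x**2 - 16/x**5)
v_claimed = 3*x**2*sp.sqrt(x**2 + 4) + (x**3 - 4)*x*(x**2 + 4)**sp.Rational(-1, 2)

assert sp.simplify(sp.diff(u, x) - u_claimed) == 0
assert sp.simplify(sp.diff(v, x) - v_claimed) == 0
print("u' and v' check out")
```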
Prove $x^2 − y^2 = 2xyz$ has no solutions in positive integers | Dividing $x$ and $y$ by their greatest common divisor, we may assume $x$ and $y$ are coprime. But $y^2 = x (x - 2 y z)$ is divisible by $x$, and similarly $x^2$ is divisible by $y$, so ... |
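Not a proof, but a quick brute-force sanity check that no small positive solutions exist:

```python
# Search a small box for positive-integer solutions of x^2 - y^2 = 2xyz.
N = 60
solutions = [(x, y, z)
             for x in range(1, N + 1)
             for y in range(1, N + 1)
             for z in range(1, N + 1)
             if x * x - y * y == 2 * x * y * z]
assert solutions == []
print("no solutions with 1 <= x, y, z <=", N)
```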
Why is $\phi$ an epimorphism of rings? | Think $Z_m=\{0,1,2,...,m-1\}$ if $a\in Z_m$ then $a\in Z$ and $a\cong a \mod (m)$ |
Proving that $\mathrm{card}(2^{\mathbb{N}})=\mathrm{card}(\mathbb{N}^\mathbb{N})$ | I'll just point out that when you write $2^{\mathbb N^{\mathbb{N}}}$ , there is a certain ambiguity to it.
To make sure that it is clear what you have in mind, you should write $(2^{\mathbb N})^{\mathbb{N}}$. (This is what you have used here.)
The other possible meaning is $2^{(\mathbb N^{\mathbb{N}})}$.
In fact, when someone writes $a^{b^c}$, they usually mean $a^{(b^c)}$. (If someone wants to write $(a^b)^c$, they can write $a^{bc}$ instead. See How to evaluate powers of powers (i.e. $2^3^4$) in absence of parentheses? or $x^{y^z}$: is it $x^{(y^z)}$ or $(x^y)^z$? |
Circle tangent to a parabola | It is simpler to translate the parabola by $1$ towards the left, solving the problem for the parabola $y=x^2$ and then re-translating the final result by one unit towards the right. In this way, we can search a circle with its center on the $y $-axis, which typically has an equation of the form
$$x^2 +(y-k)^2=1$$
where $k $ is the $y $-coordinate of the center of the circle. The points of intersection between the parabola and the circle are the solutions of the system
$$\left\{\begin{array}{lll}
x^2 +(y-k)^2=1 \\
y=x^2
\end{array}\right.$$
The solutions for $y $ are
$$y = \frac {1}{2} \left(2 k \pm \sqrt{5 - 4 k} - 1 \right) $$
Because we are searching for points where the curves are tangent, the two solutions for $y $ have to coincide, i.e. the discriminant $5-4k$ under the square root must vanish. This occurs for $k=\frac {5}{4} $. This leads to the circle
$$x^2 +(y-\frac {5}{4})^2=1$$
Re-translating towards the right we get the final equation of the circle
$$(x-1)^2 +(y-\frac {5}{4})^2=1$$
whose center is in $(1,\frac {5}{4}) $. Here is a graph of the two tangent curves. |
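A quick symbolic check of the tangency computation (sympy assumed available): substituting $y=x^2$ into $x^2+(y-k)^2=1$ gives a quadratic in $y$ whose discriminant $5-4k$ vanishes exactly at $k=5/4$.

```python
# Substitute x^2 = y and force a double root in the resulting quadratic.
import sympy as sp

y, k = sp.symbols('y k', real=True)
quadratic = y + (y - k)**2 - 1          # y^2 + (1 - 2k) y + k^2 - 1
disc = sp.discriminant(sp.expand(quadratic), y)

assert sp.simplify(disc - (5 - 4*k)) == 0
assert sp.solve(sp.Eq(disc, 0), k) == [sp.Rational(5, 4)]
print("tangency at k = 5/4")
```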
Comparing volumes of $d$-dimensional unit-balls to upper bound kissing-number. | This can indeed be done by comparing volumes. The volume $V_d(r)$ of a $d$-dimensional sphere of radius $r$ is
$$V_d(r)=C_dr^d,$$
for some positive constant $C_d$ independent of $r$. It follows that the number of pairwise disjoint unit $d$-spheres in a $d$-sphere of radius $3$ is at most
$$\frac{V_d(3)}{V_d(1)}=\frac{C_d3^d}{C_d1^d}=3^d,$$
and hence the kissing number $\kappa(d)$ in dimension $d$ satisfies $\kappa(d)+1\leq 3^d$, as desired. |
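For perspective, here is the bound $\kappa(d)+1\le 3^d$ checked against the dimensions where the kissing number is known exactly (the bound is tight only in dimension $1$):

```python
# Known exact kissing numbers versus the volume bound 3^d - 1.
known_kissing = {1: 2, 2: 6, 3: 12, 4: 24, 8: 240, 24: 196560}

for d, kappa in known_kissing.items():
    assert kappa + 1 <= 3 ** d      # the bound from the volume argument
    print(d, kappa, 3 ** d - 1)
```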
What I did wrong in finding the radius of convergence for this problem? | I'm not sure what you did where you wrote "here is my approach" (though I think there are several mistakes...), but it seems to be pretty direct:
$$a_n:=\frac{n!}{4n^n}\implies\frac{a_{n+1}}{a_n}=\frac{(n+1)!}{4(n+1)^{n+1}}\cdot\frac{4n^n}{n!}=\frac{n^n}{(n+1)^n}=\frac1{\left(1+\frac1n\right)^n}\xrightarrow[n\to\infty]{}\frac1e$$
and thus the radius of convergence is $\;R=e\;$ and we have convergence for
$$\frac{|x|}e<1\iff |x|<e\iff -e<x<e$$ |
Computing $N_{S_n}(S_m)$ with $m<n.$ | Here are a couple of observations to help point you in the right direction.
It’s definitely the case that $S_m\subsetneqq N_{S_n}(S_m)$ when $n\ge m+2$: the transposition $(m+1\;m+2)$ commutes with every element of $S_m$ (their supports are disjoint), so it normalizes $S_m$ without belonging to it.
Now suppose that $n=m+1$ and $m>1$. If $\sigma\in S_n$ does not fix $n$, let $k=\sigma(n)$. There is a $\tau\in S_m$ such that $\tau(k)\ne k$, and hence $(\sigma^{-1}\tau\sigma)(n)=\sigma^{-1}(\tau(k))\ne\sigma^{-1}(k)=n$, so that $\sigma^{-1}\tau\sigma\notin S_m$ and $\sigma$ does not normalize $S_m$. |
Evaluate $\lim\limits_{n \to \infty}\sum_{k=1}^{n}\frac{1}{\sqrt{n+1-k}\cdot\sqrt{k}}$ | You can turn the sum into two Riemann sums each of which converges to $\frac \pi 2$:
$$\sum_{k=1}^{n}\frac{1}{\sqrt{n+1-k}\cdot\sqrt{k}} = \sum_{k=1}^{n}\frac{n+1-k + k}{\sqrt{n+1-k}\cdot\sqrt{k}}\cdot \frac 1{n+1}$$
$$=\frac 1{n+1}\sum_{k=1}^{n}\sqrt{\frac{n+1-k}{k}} + \frac 1{n+1}\sum_{k=1}^{n}\sqrt{\frac{k}{n+1-k}}$$
$$= \underbrace{\frac 1{n+1}\sum_{k=1}^{n}\sqrt{\frac{1}{\frac{k}{n+1}}-1}}_{\stackrel{n\to \infty}{\rightarrow}\int_0^1\sqrt{\frac 1x-1}dx=\frac \pi2} + \underbrace{\frac 1{n+1}\sum_{k=1}^{n}\sqrt{\frac{1}{\frac{1}{\frac{k}{n+1}}-1}}}_{\stackrel{n\to \infty}{\rightarrow}\int_0^1\sqrt{\frac{1}{\frac 1x -1}}dx=\frac \pi2}$$ |
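A numerical check that the partial sums really approach $\pi$ (convergence is slow because the limiting integrands blow up at the endpoints):

```python
# Partial sums of sum_{k=1}^{n} 1/(sqrt(n+1-k) sqrt(k)).
import math

def s(n):
    return sum(1 / (math.sqrt(n + 1 - k) * math.sqrt(k))
               for k in range(1, n + 1))

assert abs(s(10**6) - math.pi) < 0.01
print(s(10**6))
```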
Describe the continuous extensions of bounded linear functional | Hints:
By continuity, $\lambda$ has a unique extension to the closure $\overline{\mathcal{M}}$.
$\overline{\mathcal{M}}$ is a Hilbert space and so the Riesz representation theorem applies there.
Every $x \in \mathcal{H}$ can be written as the sum of its orthogonal projection onto $\overline{\mathcal{M}}$ and its orthogonal projection onto the orthogonal complement $\overline{\mathcal{M}}^\perp$. Note that orthogonal projection is a continuous linear operator.
General tip: When working in a Hilbert space, if you think you need the Hahn-Banach theorem, you're probably wrong. Hahn-Banach is a "non-constructive" tool, but whatever you want from it, in a Hilbert space you can define it explicitly. |
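A finite-dimensional sketch of the hints (numpy assumed available; the subspace, vector, and functional below are made-up examples): the extension of $\lambda$ is $\lambda\circ P$, where $P$ is the orthogonal projection onto $\overline{\mathcal M}$. It agrees with $\lambda$ on the subspace and vanishes on the orthogonal complement.

```python
# Extend a functional on M = span{m} in R^3 by precomposing with the
# orthogonal projection P onto M.  (Example data, not the general setup.)
import numpy as np

m = np.array([1.0, 2.0, 2.0])
P = np.outer(m, m) / (m @ m)             # orthogonal projection onto span{m}

lam = lambda x: 3.0 * (x @ m)            # a functional given on M
lam_ext = lambda x: lam(P @ x)           # the extension: precompose with P

assert np.isclose(lam_ext(2.0 * m), lam(2.0 * m))             # agrees on M
assert np.isclose(lam_ext(np.array([2.0, -1.0, 0.0])), 0.0)   # kills M-perp
```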
Left coset of some subgroup is right coset of other subgroup | Hint: If $g \in G$ and $H < G$ then $gh = (ghg^{-1})g$ for all $h \in H$. |
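The identity in the hint, $gh=(ghg^{-1})g$, says precisely that the left coset $gH$ equals the right coset $(gHg^{-1})g$ of the conjugate subgroup. A quick check with permutations of $\{0,1,2\}$ (a sketch in plain Python):

```python
# Verify gh = (g h g^{-1}) g for all g, h in S_3, with permutations as tuples.
from itertools import permutations

def compose(p, q):                  # (p ∘ q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

for g in permutations(range(3)):
    for h in permutations(range(3)):
        conj = compose(compose(g, h), inverse(g))   # g h g^{-1}
        assert compose(g, h) == compose(conj, g)
print("gh = (g h g^{-1}) g for all g, h in S_3")
```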
Convergence of Hypergeometric Distribution to Binomial | Suppose you have the following relative approximations $\frac{f_1(n)}{g_1(n)} \to 1$ and $\frac{f_2(n)}{g_2(n)} \to 1$.
Then you can use this to help compute the limit of $f_1/f_2$ via
$$\lim_{n \to \infty} \frac{f_1(n)}{f_2(n)}
= \left(\lim_{n \to \infty} \frac{f_1(n)}{f_2(n)}\right)
\frac{\lim_{n \to \infty} g_1(n)/f_1(n)}{\lim_{n \to \infty} g_2(n)/f_2(n)}
= \lim_{n \to \infty}
\frac{f_1(n)}{f_2(n)} \frac{g_1(n)/f_1(n)}{g_2(n)/f_2(n)}
= \lim_{n \to \infty} \frac{g_1(n)}{g_2(n)}.$$
This is what is being hidden in the "$\sim$" step where the factorials are replaced using Stirling's approximation.
Response to comment:
My answer is not talking about $\frac{\text{Hypergeometric PMF}}{\text{Binomial PMF}} \to 1$. You are correct that the desired result is $\text{Hypergeometric PMF} \to \text{Binomial PMF}$.
My answer is talking about why you can replace the factorials in $\frac{M! (N-M)! \cdots}{x! \cdots}$ individually with Stirling's approximation (the "$\sim$" step that your question is asking about).
Specifically, the authors really are doing
\begin{align}
\lim_{M,N \to \infty} \frac{{M \choose x}{N - M \choose K- x}}{{N \choose K}}
&= \lim_{M,N \to \infty} \frac{M!(N-M)!K!(N-K)!}{x!(M-x)!(N-M-K+x)!(K-x)!N!}
\\
&= \lim_{M,N \to \infty} \frac{\sqrt{2\pi}M^{M+1/2}e^{-M}...\sqrt{2\pi}(N-K)^{N-K+1/2}e^{K-N}}{\sqrt{2\pi}x^{x + 1/2}e^{-x}...\sqrt{2\pi}N^{N+1/2}e^{-N}}
\end{align}
and my answer above justifies the last equality. Dropping the limits and using "$\sim$" is a common shorthand. |
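The replacement step rests on $n!/g(n)\to 1$ for $g(n)=\sqrt{2\pi}\,n^{n+1/2}e^{-n}$; numerically the ratio is already close to $1$ for modest $n$:

```python
# Stirling's approximation: n! / (sqrt(2 pi) n^(n+1/2) e^(-n)) -> 1.
import math

def stirling_ratio(n):
    g = math.sqrt(2 * math.pi) * n ** (n + 0.5) * math.exp(-n)
    return math.factorial(n) / g

assert abs(stirling_ratio(100) - 1) < 1e-3   # ~ 1 + 1/(12 n)
print(stirling_ratio(100))
```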
Show that $B$ $\in$ $GL_{n-1}($K$)$ $\Rightarrow$ $A$ $\in$ $GL_n$($K$) | HINT:
$$
\det A=-\det B\neq0
$$ |
Is there an elementary way to see that there is only one complex manifold structure on $R^2$? | There are two, not one. The open unit disc is homeomorphic to $\Bbb R^2$, and it's not biholomorphic to $\Bbb C$, so it's a distinct complex structure.
You will have trouble proving this in an easy way. Let me restate the uniformization theorem:
Every simply connected complex surface is biholomorphic to the plane, sphere, or open unit disc.
But I can tell you all of the simply connected surfaces without boundary: there's $S^2$ and $\Bbb R^2$ and that's it! So we can rephrase the uniformization theorem as follows:
Every two complex structures on $S^2$ are biholomorphic. Every complex structure on $\Bbb R^2$ is biholomorphic to either the complex plane or open unit disc.
So the uniformization theorem does not say much more than what you're asking for; if someone asked me for a proof I would tell them to read the uniformization theorem. (You can't just use Riemann mapping - it's not obvious that a complex manifold homeomorphic to $\Bbb R^2$ has a holomorphic embedding into $\Bbb C$.) |
Double integral with polar? | Set $x=r\cos(t)$ and $y=r\sin(t)$. We then have
\begin{align}
\iint_R e^{-(x^2+y^2)/2}dydx &= \int_{0}^{2\pi} \int_{r=0}^1 e^{-r^2/2}rdrdt = 2\pi \int_0^1 e^{-r^2/2} d(r^2/2)\\
& = 2\pi \int_0^{1/2}e^{-t}dt = 2\pi (1-e^{-1/2})
\end{align} |
Loci circle question | You have a common side (hypotenuse) PO in both the right angled triangles POS and POT. The side OS=OT as you mentioned. So PT=PS by Pythagoras. In fact, triangles POT and POS are congruent |
Example of conditional expectation under finite probability space | I will do just one of the cases you ask about and leave the rest to you.
The sigma-algebra generated by $X$ has as its atoms $ab$ and $cd$, because $X$ is constant on those sets. So, $\sigma(X) = \{\emptyset, ab, cd, abcd\}$.
By definition, the conditional expectation $E[Y \mid X]$ is a $\sigma(X)$-measurable random variable satisfying
$$\int_A E[Y \mid X] dP = \int_A Y dP, \ \ \forall A \in \sigma(X).$$
To say that $E[Y \mid X]$ is $\sigma(X)$-measurable in this context means simply that $E[Y \mid X]$ is constant on the atoms of $\sigma(X)$, which, as we saw above, are $ab$ and $cd$. So just from the $\sigma(X)$-measurability, we know we must have
$$E[Y \mid X](a) = E[Y \mid X](b) \,\ \text{and} \,\ E[Y \mid X](c) = E[Y \mid X](d).$$
By the integration property, we know that $E[Y \mid X]$ must preserve averages over $A \in \sigma(X)$. Take $A = ab$, for example. $Y$'s average over $ab$ is $0$, so $E[Y \mid X]$'s average over $ab$ must be $0$ as well. But, in view of $E[Y \mid X](a) = E[Y \mid X](b)$, this is possible only if $E[Y \mid X](a) = E[Y \mid X](b)=0$. The exact same reasoning applies to $cd$. Thus, we conclude that $E[Y \mid X]$ is everywhere equal to $0$. |
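A concrete computation with made-up values consistent with the answer (uniform $P$; $Y(a)=1$, $Y(b)=-1$, $Y(c)=2$, $Y(d)=-2$, so $Y$ averages to $0$ on each atom):

```python
# Conditional expectation on a 4-point space: average Y over each atom
# of sigma(X), then assign that average to every point of the atom.
P = {w: 0.25 for w in 'abcd'}
Y = {'a': 1, 'b': -1, 'c': 2, 'd': -2}
atoms = [('a', 'b'), ('c', 'd')]        # atoms of sigma(X)

cond_exp = {}
for atom in atoms:
    mass = sum(P[w] for w in atom)
    avg = sum(Y[w] * P[w] for w in atom) / mass   # average of Y on the atom
    for w in atom:
        cond_exp[w] = avg

assert all(cond_exp[w] == 0 for w in 'abcd')
print(cond_exp)
```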