Difference between group algebra over a field and algebra over the same field? | An $F$-algebra $A$ generated by a multiplicative subgroup $G$ does not have to be isomorphic to the group algebra $F[G]$, regardless of any finiteness conditions. For a very simple example, take $F=\mathbb{Q}$: then $A=\mathbb{Q}$ is generated by the group $G=\{1,-1\}$ as a $\mathbb{Q}$-algebra, but it is not isomorphic to $\mathbb{Q}[G]$, which is two-dimensional over $\mathbb{Q}$.
In order to conclude that $A$ is isomorphic to $F[G]$, you need to additionally know that $G$ is $F$-linearly independent as a subset of $A$. That means exactly that the canonical homomorphism $F[G]\to A$ which is the identity on $G$ is an isomorphism, since $F[G]$ consists of formal linear combinations of elements of $G$. |
Limiting sequence in two ways. | In the telescoping sum you wrote, $$\lim_{N\to \infty} \sum_{n=1}^N\left[\sqrt{(n + 1)} - \sqrt{n}\right]$$
We can rearrange the terms so that we have the sum $$\lim_{N\to \infty}(-\sqrt1 - \sqrt2 - ... -\sqrt{N} + \sqrt2 + \sqrt3 +...+\sqrt{N} + \sqrt{N+1})$$
Once we cancel like terms, we have $\lim_{N \to \infty} (\sqrt{N+1} - \sqrt{1}) = \lim_{N \to \infty} (\sqrt{N+1} - 1) = \infty$
What this tells us is that even though the difference between consecutive terms tends to $0$, the sum of the consecutive differences need not converge. |
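A quick numerical illustration (a minimal Python sketch; the cutoffs are arbitrary choices of mine):

```python
import math

# Partial sums of sum_{n=1}^{N} (sqrt(n+1) - sqrt(n)).  Each term tends
# to 0, yet the telescoped value sqrt(N+1) - 1 grows without bound.
for N in (10, 1000, 100000):
    partial = sum(math.sqrt(n + 1) - math.sqrt(n) for n in range(1, N + 1))
    print(N, partial, math.sqrt(N + 1) - 1)  # the last two columns agree
```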
$a\equiv b\pmod{n}\iff a/x\equiv b/x\pmod{n/\gcd(x,n)}$ for integers $a,b,x~(x\neq 0)$ and $n\in\Bbb Z^+$? | More simply, write $\ \bar a = a/x,\ \bar b = b/x\ $ and $y = \bar a - \bar b.\ $ Then we get a $1$-line proof:
$$\,n\mid a\!-\!b=xy\iff n\mid xy,ny\color{#0a0}\iff n\mid (xy,ny)\!\color{#c00}{\overset{\rm D}{=}}\!(x,n)y \iff n/(x,n)\mid y$$
We employed the universal property $\color{#0a0}\iff$ along with the $\,\color{#c00}{\overset{\rm D}{=}} $ Distributive Law of the gcd. |
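A brute-force check of the equivalence over a small range (a minimal Python sketch; the bounds are arbitrary, and $a,b$ are restricted to multiples of $x$ so the quotients are integers):

```python
from math import gcd

# Check: a == b (mod n)  <=>  a/x == b/x (mod n/gcd(x, n)).
for n in range(1, 12):
    for x in range(1, 12):
        for abar in range(-8, 9):          # abar = a/x
            for bbar in range(-8, 9):      # bbar = b/x
                lhs = (abar * x - bbar * x) % n == 0
                rhs = (abar - bbar) % (n // gcd(x, n)) == 0
                assert lhs == rhs
print("equivalence verified on the sample range")
```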
Proof that $e^{ab} = \left(e^a\right)^b$, using the series for $e^x$ | The problem with trying to prove this is that you need a rigorous definition for $x^y$ where $x$ can be any positive number and $y$ can be any real number. Now, we already have a rigorous definition for $\exp(y)=e^y$ for any $y\in\Bbb{R}$ as $\sum_{n=0}^\infty \frac{y^n}{n!}$. Therefore, one usually writes $x^y$ in terms of the exponential function:
$$x^y=\exp(y\ln x)$$
Thus, the easiest way to prove $(e^a)^b=e^{ab}$ is to just use this definition of $x^y$:
$$(e^a)^b=\exp(b\ln e^a)=\exp(ba)=e^{ab}$$
I know that's not really the series solution you wanted, but I still think it is helpful to see this kind of proof. |
A simple exercise on integration by residues and an application of Jordan's lemma | You are mixing up factors of $i$. There are different ways to write a correct formula, e.g.:
$$\oint_C \sum_{\substack{1\leq h\leq n\\(h,n)=1}}\frac{e^{\pm \,2\pi \frac{h}{n}z}}{z^2+1}dz=\pi\mu(n)$$
when $C$ is a contour encircling $i$ but not $-i$. Depending on the choice of sign in the exponential you need either ${\rm Re}\; z$ bounded from below or from above along the contour for the integral to be convergent, so you will not be able to relate this to an integral over ${\Bbb R}$. Note that in your 3rd formula there is an extra factor of $i$ in the exponent. The following is correct given the definitions in your text:
$$\int_{-R}^{R}+\int_{C_R}=\pi\sum_{\substack{1\leq h\leq n\\(h,n)=1}}e^{-2\pi \frac{h}{n}}$$
The second equality in your 4th equation is not correct (and not justified by your text).
What is your aim with these calculations? |
Using an inverse Fourier transform to evaluate integral of $\mathrm{sinc}^2 x$ | You have the inverse Fourier transform $$\frac{1}{2\pi}\int\limits_{-\infty}^{+\infty} \tilde{f}(k)^2 e^{ikx} dk = \mathcal{F}^{-1}[\tilde{f}(k)^2] = f(x) * f(x) = \int\limits_{-\infty}^{+\infty} f(y)f(x-y) dy$$
where the right-hand side is the convolution product. Setting $x=0$ you get $$\frac{1}{2\pi}\int\limits_{-\infty}^{+\infty} \tilde{f}(k)^2 dk = \int\limits_{-\infty}^{+\infty} f(y)f(-y) dy = 2$$ where the evaluation is obtained by explicitly computing the integral of products of theta functions. Then, simplifying gives
$$4 \pi = \int\limits_{-\infty}^{+\infty} \tilde{f}(k)^2 dk = 4 \int\limits_{-\infty}^{+\infty}\frac{\sin^2 k}{k^2} dk$$ and therefore $$\int\limits_{-\infty}^{+\infty}\frac{\sin^2 x}{x^2} dx = \pi \, . $$ |
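The final value is easy to confirm numerically (a minimal SciPy sketch; note that `np.sinc(x/pi)` equals $\sin(x)/x$ and is safe at $x=0$):

```python
import numpy as np
from scipy.integrate import quad

# The oscillatory tail makes the quadrature slow, but the value of the
# integral of sin(x)^2/x^2 over R comes out close to pi.
val, err = quad(lambda x: np.sinc(x / np.pi)**2, 0, np.inf, limit=1000)
print(2 * val, np.pi)   # ~3.14159 each
```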
Evaluating $\int_0^\infty \frac{\ln |x-1|}{x^2((\ln x)^2-(\ln x)(\ln|x-1|))} dx$ | $\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
The integral $\color{#f00}{diverges}$ because
\begin{align}
&{\ln\pars{\verts{x - 1}} \over x^{2}
\bracks{\vphantom{\Large A}%
\ln^{2}\pars{x} - \ln\pars{x}\ln\pars{\verts{x - 1}}}}
\stackrel{\mrm{as}\ x\ \to\ \infty}{\sim}
{\ln\pars{x} \over x^{2}\braces{\vphantom{\Large A}%
\ln^{2}\pars{x} - \ln\pars{x}\bracks{\vphantom{\large A}\ln\pars{x} - 1/x}}} = \color{#f00}{1 \over x}
\end{align} |
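The $1/x$ tail is easy to confirm numerically (a small Python sketch; the sample points are mine): the integrand times $x$ tends to $1$, so the integral diverges.

```python
import math

# Integrand of the original integral.
def integrand(x):
    L, M = math.log(x), math.log(abs(x - 1))
    return M / (x**2 * (L**2 - L * M))

for x in (1e3, 1e6, 1e9):
    print(x, x * integrand(x))   # tends to 1
```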
I know how to approach the die problem when it is simple; however, the given problem is kind of confusing. | OK, let's see. I think there are a couple of steps here. It helps to label the relevant random variables (this would help in the asking of the question, too). So let $X_1$, $X_2$ and $X_3$ be the random variables corresponding to the outcomes of the dice throws. If I've understood correctly, we are looking for:
$$
\mathbb{P}( X_1 = X_2 = 1 | X_3 > X_1 + X_2).
$$
For convenience, I'm going to write $Y = X_1 + X_2$. The problem doesn't distinguish between $X_1$ and $X_2$, and we are only ever interested in their sum (because $X_1 = X_2 = 1$ if and only if $Y=2$). So we are looking for
$$
\mathbb{P}( Y = 2 | X_3 > Y).
$$
Now by the usual Bayes' rule:
$$
\mathbb{P}( Y = 2 | X_3 > Y) = \frac{\mathbb{P}(Y=2\ \text{and}\ X_3 > Y)}{\mathbb{P}(X_3 > Y)} = \frac{\mathbb{P}(X_3 > Y | Y = 2) \mathbb{P}(Y= 2)}{\mathbb{P}(X_3 > Y)}.
$$
So to compute this, we will need to know $\mathbb{P}(X_3 > Y)$. By the law of total probability and the fact that we cannot have $X_3 > Y$ unless $Y \leq 5$, we have:
\begin{align}
\mathbb{P}(X_3 > Y) &= \sum_{j=2}^5 \mathbb{P}(X_3 > Y\ \text{and}\ Y = j) \\
&= \sum_{j=2}^5 \mathbb{P}(X_3 > Y | Y = j)\, \mathbb{P}(Y=j) \\
&= \frac{4}{6}\, \frac{1}{36} + \frac{3}{6}\, \frac{2}{36} + \frac{2}{6}\, \frac{3}{36} + \frac{1}{6}\, \frac{4}{36} \\
&= \frac{20}{216} \\
&= \frac{5}{54}
\end{align}
And the numerator in the expression we are trying to calculate is
$$
\mathbb{P}(X_3 > Y | Y = 2) \mathbb{P}(Y= 2) = \frac{4}{6}\frac{1}{36} = \frac{1}{54}
$$
So the answer is
$$
\frac{1}{5}.
$$ |
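Since the sample space has only $6^3 = 216$ equally likely outcomes, the answer can be confirmed by exact enumeration (a minimal Python sketch):

```python
from fractions import Fraction
from itertools import product

num = den = 0
for x1, x2, x3 in product(range(1, 7), repeat=3):
    if x3 > x1 + x2:           # the conditioning event
        den += 1
        if x1 == x2 == 1:      # the event of interest
            num += 1
print(Fraction(num, den))      # 1/5
```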
Prove that $B$ is non singular and that $AB^{-1}A=A$ | Let $\mathbf{1}_n$ denote the $n\times n$ matrix with all entries equal to one and $E_n$ denote the $n\times n$ identity matrix. Then
$$
A_n = b\mathbf 1_n + (a-b)E_n = b\mathbf 1_n -nbE_n
$$
and
$$
B_n = A_n + \frac{1}{n}\mathbf 1_n = \left(b+\frac{1}{n}\right)\mathbf 1_n - nbE_n.
$$
In general, for a matrix $X_n=\alpha \mathbf 1_n + \beta E_n$, that is
$$
X =
\begin{pmatrix}
\alpha+\beta & \alpha & \alpha & \cdots & \alpha \\
\alpha & \alpha+\beta & \alpha & \cdots & \alpha \\
\alpha & \alpha & \alpha+\beta & \cdots & \alpha \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\alpha & \alpha & \alpha & \cdots & \alpha+\beta
\end{pmatrix},
$$
we can calculate the determinant by first subtracting the second row from the first to get
$$
\det X_n = \det
\begin{pmatrix}
\beta & -\beta & 0 & \cdots & 0 \\
\alpha & \alpha+\beta & \alpha & \cdots & \alpha \\
\alpha & \alpha & \alpha+\beta & \cdots & \alpha \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\alpha & \alpha & \alpha & \cdots & \alpha+\beta
\end{pmatrix},
$$
and then Laplace expand with respect to the first row to obtain
$$
\det X_n =
\beta
\det X_{n-1} + \beta
\det
\begin{pmatrix}
\alpha & \alpha & \cdots & \alpha \\
\alpha & \alpha+\beta & \cdots & \alpha \\
\vdots & \vdots & \ddots & \vdots \\
\alpha & \alpha & \cdots & \alpha+\beta
\end{pmatrix}.
$$
For the second matrix, subtract the first row from all others to get
$$
\det X_n =
\beta
\det X_{n-1} + \beta
\det
\begin{pmatrix}
\alpha & \alpha & \cdots & \alpha \\
0 & \beta & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \beta
\end{pmatrix}
=
\beta \det X_{n-1} + \alpha \beta^{n-1}
$$
Since $\det X_1=\alpha+\beta$, this recursion yields $$\det X_n=n\alpha\beta^{n-1} +\beta^n=\beta^{n-1}(n\alpha+\beta).$$
Thus, $X_n$ is invertible if and only if $\beta\neq 0$ and $n\alpha+\beta\neq 0$. In this case, we may guess that (or consider the adjugate to see that) $X^{-1}$ is of the form $X^{-1} = \gamma \mathbf 1 + \delta E_n$ as well and work out that
$$
X^{-1} = -\frac{\alpha}{\beta(n\alpha+\beta)} \mathbf 1_n + \frac{1}{\beta} E_n.
$$
In the example at hand,
$$
\det B_n = (-nb)^{n-1}(n\left(b+\frac{1}{n}\right)-nb)= (-nb)^{n-1}
$$
which is non-zero if and only if $b$ is non-zero. And
$$
B_n^{-1} = \frac{nb+1}{n^2b} \mathbf 1_n - \frac{1}{nb} E_n.
$$
However, to check that $AB^{-1}A=A$ is satisfied it is enough to show $A^2=AB$, since all matrices of the given form commute. |
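Both the determinant formula and the inverse formula are easy to spot-check numerically (a NumPy sketch; the random parameters are mine, assuming $\beta\neq 0$ and $n\alpha+\beta\neq 0$):

```python
import numpy as np

rng = np.random.default_rng(0)
for n in (2, 3, 5):
    alpha, beta = rng.normal(size=2)
    X = alpha * np.ones((n, n)) + beta * np.eye(n)
    assert np.isclose(np.linalg.det(X), beta**(n - 1) * (n * alpha + beta))
    Xinv = (-alpha / (beta * (n * alpha + beta)) * np.ones((n, n))
            + np.eye(n) / beta)
    assert np.allclose(X @ Xinv, np.eye(n))
print("determinant and inverse formulas verified")
```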
If $2(bc^2+ca^2+ab^2) = b^2c+c^2a+a^2b+3abc, $ then prove that triangle $ABC$ is equilateral | Since $a,b,c$ are sides of a triangle we may write $(a,b,c) = (y+z,z+x,x+y)$
with $x,y,z>0$. The difference between the two sides is then
$$
x^3 + y^3 + z^3 \; + \; x^2 y + y^2 z + z^2 x
- 2 (xy^2 + yz^2 + zx^2).
$$
By cyclic symmetry we may assume $x \geq z$ and $y \geq z$, and write
$x = (1 + \alpha) z$ and $y = (1 + \beta) z$ with $\alpha,\beta \geq 0$
to find $z^3$ times
$$
2(\alpha^2 - \alpha\beta + \beta^2) + \beta(\alpha-\beta)^2 + \alpha^3,
$$
in which each term is nonnegative, and all are zero iff
$\alpha = \beta = 0$ $-$ which in turn is equivalent to $x=y=z$,
and thus $a=b=c$, so triangle $ABC$ is equilateral as claimed.
$\Box$
[Added later: The solution above applies a general technique for such
problems; but I see that for the present question
Hari Shankar posted (in a comment) a link to an
AoPS item
that gives an even simpler conclusion that retains the cyclic symmetry:
$$
x^3 + y^3 + z^3 \; + \; x^2 y + y^2 z + z^2 x
- 2 (xy^2 + yz^2 + zx^2) = x(x-z)^2 + y(y-x)^2 + z (z-y)^2,
$$
again with all terms nonnegative; so equality implies that
they all vanish and the rest follows as before. That link also
gives the example $(a:b:c) = (10:3:24)$ of a rational point on the cubic
$2(bc^2 + ca^2 + ab^2) = b^2c + c^2a + a^2b + 3abc$ with positive variables
that do not satisfy the triangle inequality.] |
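The sum-of-squares identity at the end can be verified mechanically (a one-off SymPy check):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
lhs = (x**3 + y**3 + z**3 + x**2*y + y**2*z + z**2*x
       - 2*(x*y**2 + y*z**2 + z*x**2))
rhs = x*(x - z)**2 + y*(y - x)**2 + z*(z - y)**2
print(sp.expand(lhs - rhs))  # 0
```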
meaning of subscript notation $\ 1_{a=a'}$ | I can't say for sure, since there isn't much context; but, frequently, the notation "$1_A$" (where $A$ is some statement) is used to represent the indicator function for that statement; that is,
$$
1_A=\begin{cases}1 & \text{if $A$ is true}\\0 & \text{if $A$ is false}\end{cases}.
$$
So, in this case, I would suspect that $1_{a=a'}$ is meant to be $1$ if $a=a'$, and $0$ if $a\neq a'$. |
Integers of the form $x^2-ny^2$ | Things are not as simple as you seem to think, even for positive forms. Among primes with $ (p|23) = 1,$ approximately $1/3$ are integrally represented by $x^2 + 23 y^2.$ Below are all the primes in question, up to 1000. Among primes $p \neq 2,3,23,$ with $ (p|23) = 1,$ the ones with $p = x^2 + 23 y^2 $ are those for which $x^3 - x + 1$ has three distinct roots $\pmod p.$ Also note that $x^2 + 23 y^2$ and $x^2 + x y + 6 y^2$ represent exactly the same odd numbers. Then, $ 3x^2 + 2xy + 8 y^2$ and $2 x^2 + x y + 3 y^2$ represent exactly the same odd numbers. See Hudson and Williams 1991 at http://zakuski.utsa.edu/~jagy/inhom.html
p = x^2 + 23 y^2
p p % 23
----------------------------
23 0
59 13
101 9
167 6
173 12
211 4
223 16
271 18
307 8
317 18
347 2
449 12
463 3
593 18
599 1
607 9
691 1
719 6
809 4
821 16
829 1
853 2
877 3
883 9
991 2
997 8
0 1 2 3 4 6 8 9 12 13 16 18 : mod 23
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
p = 3 x^2 + 2 x y + 8 y^2
p p % 23
----------------------------
3 3
13 13
29 6
31 8
41 18
47 1
71 2
73 4
127 12
131 16
139 1
151 13
163 2
179 18
193 9
197 13
233 3
239 9
257 4
269 16
277 1
311 12
331 9
349 4
353 8
397 6
409 18
439 2
443 6
461 1
487 4
491 8
499 16
509 3
541 12
547 18
577 2
587 12
601 3
647 3
653 9
673 6
683 16
739 3
761 2
811 6
823 18
857 6
859 8
863 12
887 13
929 9
947 4
967 1
1 2 3 4 6 8 9 12 13 16 18 : mod 23 |
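The first table can be reproduced by brute force (a minimal Python sketch using SymPy's isprime; the search bounds just cover all values below 1000):

```python
from sympy import isprime

rep = sorted({x*x + 23*y*y
              for x in range(32) for y in range(7)
              if x*x + 23*y*y < 1000 and isprime(x*x + 23*y*y)})
print(rep)   # 23, 59, 101, 167, 173, 211, ...
```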
Radon measure as a representation of density | Suppose you want to examine the gravitational field of a circular wire. By "wire" I mean that the cross-sectional diameter is so much less than the diameter of the circle that it is negligible. It is common in such cases to treat the wire as infinitely thin, and to just use path integrals. For example, the gravitational field $\mathbf g$ at a point $\mathbf p$ is given by $$\mathbf g = \int_C \frac{G(\mathbf {r - p})}{|\mathbf{r-p}|^3}\ell\, ds$$
where $\mathbf r$ is the integration variable, $ds$ is the arclength measure, and $\ell$ is the linear density of mass on the wire. We may want to consider the case where $\ell$ is not constant.
Similarly we may want to consider the gravitation of a thin disk,
$$\mathbf g = \int_D \frac{G(\mathbf {r - p})}{|\mathbf{r-p}|^3}\alpha\, dA$$
or of a ball
$$\mathbf g = \int_B \frac{G(\mathbf {r - p})}{|\mathbf{r-p}|^3}\rho\, dV$$
It would be nice to be able to unify all three cases and discuss them together instead of having to develop the same mathematics three times: once for path integration, once for surface integration, and once for volume integration (and a fourth time, for the gravitation of several discrete particles, or more times for combinations of all of them). Unfortunately, there is no volumetric density function $\rho$ that corresponds to an infinitely thin disk, wire, or particle, i.e., none that gives the correct result under Lebesgue or Riemann integration.
This can only be handled mathematically by expanding our concept of integration. There are two common approaches to this. One approach is to generalize what is being integrated; this gives us "generalized functions" such as the Dirac delta function. The other is to express the mass distribution in the measure instead of as a density function in the integrand. This is the approach you are asking about.
In this approach we specify the mass of regions in space (i.e., of well-behaved sets), instead of the limiting ratio of mass-to-volume at points. In the three integrals above, we replace $\ell\,ds, \alpha\,dA, \rho\,dV$ with the mass-measure $dM$. The measure is defined by requiring for any set $U$ in the topological $\sigma$-algebra, $M(U)$ is the total mass in it.
We can define $M$ from a volumetric density by $$M(U) = \iiint_U \rho\, dV$$ as you've mentioned. For the disk or wire, we can define it by $$M(U) = \iint_{U\cap D}\alpha\,dA\quad\text{or}\quad M(U) = \int_{U\cap C}\ell\,ds$$ For point particles of mass $m_i$ at locations $\mathbf p_i$, we can define it as $$M(U) = \sum_{p_i \in U} m_i$$
But these are not the only possibilities for $M$. It does not need to derive from any of these forms. Effectively, $M$ allows us to describe any mass distribution we want, without having to limit ourselves to particles or linear, areal, or volume densities. When Peter Markowich refers to the measure as being a representation of the "mass density", he is merely trying to tie the idea to a known concept. But the wording isn't strictly correct (and I doubt he intended you to take it that way). $M$ is the mass distribution. The density is the relation of that distribution to length, area, or volume.
All of this works without restricting ourselves to Radon measures. However, just as one could invent density functions $\rho$ that are highly non-physical (for example, setting $\rho(x,y,z)$ to be the number of rational entries in $\{x, y, z\}$ - mathematically possible, but physically ludicrous), one can also come up with mass distributions $M$ that make no sense. For most things, it doesn't matter if the mass distribution is realistic or not. But allowing the worst cases limits the theorems that can be proved. Any physically viable distribution (even when allowing idealizations such as infinitely thin disks or wires) will be a Radon measure, which is simply a measure that respects the topology. If $\ell, \alpha, \rho$ are continuous, and the number of discrete points is finite, then each of the mass distributions defined above is a Radon measure.
When the Radon–Nikodym derivative of $M$ with respect to Lebesgue measure exists, then it is indeed a mass density function $\rho$ that could be used instead. This is the purpose for which the Radon–Nikodym derivative was defined. But this concept extends the traditional idea of density exactly because it covers situations where the Radon–Nikodym derivative does not exist. The Radon–Nikodym derivative (with respect to volume measure) does not exist for the circle, disk, and point measures defined above. |
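To make the point concrete, here is a toy mass distribution that has no density with respect to volume (a minimal Python sketch; the particle data are mine): the measure is defined directly on regions, encoded here as membership predicates.

```python
# Point-particle mass measure M(U) = sum of m_i over particles p_i in U.
particles = [((0.0, 0.0, 0.0), 2.0), ((1.0, 0.0, 0.0), 3.0)]  # (point, mass)

def M(U):
    """Mass of the region U, where U is a predicate on points."""
    return sum(m for p, m in particles if U(p))

print(M(lambda p: sum(c*c for c in p) <= 0.25))  # ball of radius 1/2: 2.0
print(M(lambda p: True))                          # whole space: 5.0
```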
The decision boundary found by your classifier? | Suppose $g$ 'classifies' as $y=1$ when its argument is $>0$ and as $y = 0$ when its argument is $<0$. (Most classifier functions work this way.)
You have that the argument is $\theta_0 + \theta_1 x_1 + \theta_2x_2$ and we are asked which kind of classification results from the choice $\theta_0 = -6, \theta_1 = 1, \theta_2 = 0$.
The first thing to note is that $\theta_2 = 0$, which means that the argument reduces to $\theta_0 + \theta_1 x_1$, which is independent of $x_2$. This is what you see immediately in the picture: the classification boundary is independent of $x_2$, i.e. has the same value for every $x_2$ and is therefore parallel to the $x_2$-axis. Furthermore, we see that $\theta_0 = -6$, i.e. if $x_1= 6$, then $\theta_0 + \theta_1 x_1 = 0$, i.e. the decision boundary where the argument switches sign is precisely the line $x_1 = 6$ (with $x_2$ arbitrary). |
Positive curvature on holomorphic vector bundles | In the expression $F_{X,Y}$, $X$ and $Y$ are vectors living in the real tangent space $TM \subset TM \otimes \mathbb C$. But of course $Z_i$ and $\bar Z_i$ do not lie in the real tangent space.
For example, consider the two-form $\omega = idx \wedge dy$. Then using $dx = \frac{1}{2}(dz + d\bar z), dy = \frac{1}{2i}(dz - d\bar z)$ you see that $\omega = \frac{1}{2} d\bar z \wedge dz$. So writing it in terms of real differential forms gives a purely imaginary coefficient but in terms of the complex coordinates you get a real coefficient. |
Solve a recurrence with $n/\log n$ | Calling $\log_2 n = a_n$ and considering $a_n > 0$
$$
T\left((a_n)^{\log_{a_n}n}\right)=T\left((a_n)^{\log_{a_n}n-1}\right)+1
$$
Calling now $\mathbb{T}(\cdot)=T\left((a_n)^{(\cdot)}\right)$ and $z = \log_{a_n}n$ we have the recurrence
$$
\mathbb{T}(z)=\mathbb{T}(z-1)+1
$$
with solution
$$
\mathbb{T}(z)=C_0(z)+z
$$
with $C_0(z)$ a generic periodic function with period $1$, which we will assume constant; transforming back,
$$
T(n) = C_0 + \frac{\log_2 n}{\log_2(\log_2 n)}
$$
hence
$$
T(n) = \Theta\left(\frac{\log_2 n}{\log_2(\log_2 n)}\right)
$$
NOTE
for $a > 0, b > 0$
$$
b = a^{\log_a b}
$$ |
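One can sanity-check the growth rate numerically (a rough Python sketch; the base case and test values are mine, and convergence of the ratio is slow):

```python
import math

def T(n):
    """Iterate T(n) = T(n / log2 n) + 1 down to a small base case."""
    count = 0
    while n > 4:
        n /= math.log2(n)
        count += 1
    return count

for n in (10**4, 10**8, 10**16):
    est = math.log2(n) / math.log2(math.log2(n))
    print(n, T(n), est, T(n) / est)   # the ratio stays bounded
```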
Embedding of $\mathbb{R}$ in $\mathbb{C}$ | As you've constructed it, no, it would not be accurate to say $x \in \mathbb{C}$ since you explicitly stated that $f: \mathbb{R} \to \mathbb{C}$.
In response to your closing comments: mappings don't assert anything. They're just instructions for generating something based on what input was provided. (Although the person doing the math could make such an assertion and use that mapping as a persuasive argument.) |
What is the universe of a sub algebra generated by $ \{(a \wedge b)\vee(c \wedge b') : a,c \in C\}$ ? | Let $X =\{(a \wedge b) \vee (c \wedge b') \mid a, c \in C\}$ and let $Y$ be the universe of the subalgebra of $B$ generated by $Z = C \cup \{b\}$. We want to show that $X = Y$.
$Y \subseteq X$: since $Y$ is the smallest subalgebra of $B$ containing $Z$, it is sufficient to show that $X$ is a subalgebra and $Z \subseteq X$. Fix some $c \in C$: $c = c \wedge 1 = c \wedge (b \vee b') = (c \wedge b) \vee (c \wedge b')$. Hence $C \subseteq X$. Also $b = (1 \wedge b) \vee (0 \wedge b')$. Hence $b \in X$, so $Z \subseteq X$. It is left to show that $X$ is a subalgebra of $B$. Fix arbitrary elements $x, y \in X:$ $$x = (a_x \wedge b) \vee (c_x \wedge b'),\quad y = (a_y \wedge b) \vee (c_y \wedge b').$$ Use the laws of a Boolean algebra to show that the following equalities hold:
$$x \vee y = ((a_x \vee a_y) \wedge b) \vee ((c_x \vee c_y) \wedge b'),$$
$$x \wedge y = ((a_x \wedge a_y) \wedge b) \vee ((c_x \wedge c_y) \wedge b'),$$
$$x'= (a_x' \wedge b) \vee (c_x' \wedge b').$$
Since $C$ is a subalgebra of $B$ we have $x \wedge y, x \vee y, x' \in X$ hence $X$ is a subalgebra of $B$.
$X \subseteq Y$: since $Y$ is the subalgebra generated by $Z$, the value of any Boolean term in elements of $Z$ also lies in $Y$; and every element of $X$ is of the form $(a \wedge b) \vee (c \wedge b'),$ where $a, b, c \in Z$. |
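Since an identity holds in every Boolean algebra as soon as it holds in the two-element algebra $\{0,1\}$, the three equalities can be checked by brute force (a minimal Python sketch):

```python
from itertools import product

def form(a, c, b):
    # (a AND b) OR (c AND NOT b), on bits
    return (a & b) | (c & (1 - b))

for ax, cx, ay, cy, b in product((0, 1), repeat=5):
    x, y = form(ax, cx, b), form(ay, cy, b)
    assert x | y == form(ax | ay, cx | cy, b)
    assert x & y == form(ax & ay, cx & cy, b)
    assert 1 - x == form(1 - ax, 1 - cx, b)
print("all three identities hold")
```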
Calculation of $\int_{0}^{\arcsin\tanh\ell}(\sin\theta)^{2n+1}\,d\theta$ | Hint:
For your original problem there is no need to calculate $\enspace\displaystyle \int_{0}^{\arcsin\tanh\ell}(\sin\theta)^{2n+1}\,d\theta \enspace $ .
$\displaystyle f(\ell):=\int_{0}^{\arcsin\tanh\ell}\operatorname{arctanh}\sin\theta\,d\theta \enspace\implies\enspace f'(\ell)=\frac{\ell}{\cosh \ell}$
We get:
$\displaystyle f(\ell)= 2 \left(C - \Im(\text{Li}_2(i/e^\ell)) - \ell \cot^{-1}(e^\ell) \right) $
where $\,C\,$ is the Catalan constant (e.g. https://en.wikipedia.org/wiki/Catalan%27s_constant )
and $\,\cot^{-1}\,$ is the inverse of $\,\cot$; $\,\Im(\cdot)\,$ denotes the imaginary part;
and $\enspace \text{Li}_2(z)\enspace$ is the Dilogarithm (or Spence's) function
(e.g. http://mathworld.wolfram.com/Dilogarithm.html) |
A $4\times4$ determinant with entries $\pm1$ is divisible by 8 | Multiplying rows and columns of $B$ by $-1$ as necessary, which at most changes the sign of $\det(B)$, we can assume
$$B=\pmatrix{1&1&1&1\\-1&\pm1&\pm1&\pm1\\-1&\pm1&\pm1&\pm1\\-1&\pm1&\pm1&\pm1}$$
Adding the first row to each of the others gives
$$\det(B)=\det\pmatrix{1&1&1&1\\0&2a&2b&2c\\0&2d&2e&2f\\0&2g&2h&2i}=8\det\pmatrix{a&b&c\\d&e&f\\g&h&i}$$
where $a,b,c,d,e,f,g,h,i$ are all $0$s or $1$s; in particular the last determinant is an integer, so $\det(B)$ is divisible by $8$. |
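Since there are only $2^{16}=65536$ sign patterns, the claim can also be confirmed exhaustively (a short NumPy sketch):

```python
import itertools
import numpy as np

for bits in itertools.product((-1.0, 1.0), repeat=16):
    B = np.array(bits).reshape(4, 4)
    assert round(np.linalg.det(B)) % 8 == 0
print("all 65536 determinants are divisible by 8")
```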
Analog of $C^{\infty}$ multiplication for discrete "vector fields" | This somewhat depends on what features of the module structure $\Omega^0M\times\Omega^1M\to\Omega^1M$ you're interested in replicating. One common choice is the cup product, which replicates the properties of the wedge product of differential forms in simplicial cohomology. It is defined in various ways in various contexts, and oftentimes one is only interested in the induced product on cohomology. Using your definition of simplicial cochains, we could, for instance, define
$$
(a^p\smile b^q)(v_0,\cdots,v_{p+q}) \\
=\frac{1}{(1+p+q)!}\sum_{\pi\in\mathcal{P}(0,\cdots,p+q)}\operatorname{sgn}(\pi)a^p(v_{\pi(0)},\cdots,v_{\pi(p)})b^q(v_{\pi(p)},\cdots,v_{\pi(p+q)})
$$
where $a^p$ is a $p$-cochain, $b^q$ is a $q$-cochain, $v_0,\cdots,v_{p+q}$ are vertices of a $p+q$-simplex, $\mathcal{P}$ denotes the set of permutations of a sequence, and $\operatorname{sgn}$ denotes the sign of a permutation. This product has a lot of the properties of the wedge product, for instance being graded-commutative, and making the coboundary operator an antiderivation.
For $0$ and $1$-cochains, for instance, this would give
$$
(a^0\smile b^1)(v_0,v_1)=\frac{1}{2}(a^0(v_0)+a^0(v_1))b^1(v_0,v_1)
$$ |
Finite group in $GL(n,\mathbb{Q})$ is conjugate to finite group in $GL(n,\mathbb{Z})$? | As Noam D. Elkies has said, every finite subgroup of $GL(n,\mathbb{Q})$ is conjugate to a subgroup of $GL(n,\mathbb{Z})$. The proof is well known; see for example the book of J.P. Serre on Lie Algebras and Lie Groups, Appendix $3$, Theorem $1$. A crucial lemma is that for a finite subgroup $H$ of $GL(n,\mathbb{Q})$ there exists a lattice $M$ which $H$ maps onto itself. |
Inequality from Analysis Qual | By the Cauchy–Schwarz inequality, $$N=\sqrt{a_1}\cdot\frac{1}{\sqrt{a_1}}+\cdots+\sqrt{a_n}\cdot\frac{1}{\sqrt{a_n}}\leq \sqrt{a_1+\cdots+a_n}\sqrt{\frac{1}{a_1}+\cdots+\frac{1}{a_n}}$$
Now square both sides of the inequality. |
Parameterized curve describing trajectory of thrown object | Hint: Use the method called "completing the square". In your case
$$\tag{1} v_0^2-2guv_0\sin\beta+g^2u^2= \left(gu-v_0 \sin \beta \right)^2+v_0^2 \cos ^2\beta
. $$
This suggests the substitution $$y= g u-v_0 \sin \beta .$$
Spoiler below:
We need to calculate $$s(t) = \int_0^t \|k'(u)\|\, du.$$
Using (1) for the expression $ \|k'(u)\|$ and performing the above-mentioned substitution (with $du = dy/g$) yields
$$\begin{align}s(t) &= \int_0^t \sqrt{v_0^2-2guv_0\sin\beta+g^2u^2}\, du \\ &= \int_0^t \sqrt{\left(gu-v_0 \sin \beta \right)^2+v_0^2 \cos ^2\beta\, }du\\&=\frac{1}{g}\int_{- v_0 \sin\beta}^{gt- v_0 \sin\beta} \sqrt{y^2 + v_0^2 \cos ^2\beta}\,dy.\end{align} $$
The last integral is usually solved via the substitution $y= v_0\cos\beta \sinh x$. We obtain
$$\begin{align}s(t) &= \frac{v_0^2\cos^2\beta}{g}\int_{-\mathop{\rm asinh} \tan \beta}^{\mathop{\rm asinh} (g t/(v_0 \cos \beta)- \tan \beta)} \overbrace{\cosh^2 x}^{(1+\cosh 2x)/2}\,dx\\&= \frac{v_0^2\cos^2\beta}{g} \left(\frac{1}{2} x+ \frac{1}{4}\sinh(2x) \right)\Bigg|_{x=-\mathop{\rm asinh} \tan \beta}^{\mathop{\rm asinh} (g t/(v_0 \cos \beta)- \tan \beta)}\end{align}$$ |
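The closed form can be checked against direct quadrature of $\|k'(u)\|$ (a Python sketch; the parameter values are arbitrary test data of mine):

```python
import math
from scipy.integrate import quad

g, v0, beta, t = 9.81, 20.0, 0.6, 1.7                    # test values

speed = lambda u: math.sqrt(v0**2 - 2*g*u*v0*math.sin(beta) + (g*u)**2)
direct, _ = quad(speed, 0, t)

F = lambda x: x/2 + math.sinh(2*x)/4                     # antiderivative of cosh^2
x1 = -math.asinh(math.tan(beta))
x2 = math.asinh(g*t/(v0*math.cos(beta)) - math.tan(beta))
closed = v0**2 * math.cos(beta)**2 / g * (F(x2) - F(x1))
print(direct, closed)                                    # the two values agree
```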
Prove this is a group | You haven't stated that $G$ is nonempty; this is essential. However, under this assumption, we can prove that $(G,\cdot)$ is a group.
Let $g\in G$. Then by hypothesis (3), there exists $e\in G$ such that $g\cdot e = g$. Let $h\in G$ be arbitrary. Then there exists $c\in G$ such that $g\cdot c = h$ (by hypothesis (3) again), so that
\begin{align*}
h\cdot e &= (g\cdot c)\cdot e\\
&= g\cdot (c\cdot e)\\
&= g\cdot (e\cdot c)\\
&= (g\cdot e)\cdot c\\
&= g\cdot c\\
&= h.
\end{align*}
Thus, $e$ is an identity element for all of $G$. Right inverses exist for all elements of $G$ by (3), and the fact that $e$ is both a left and a right identity and that the inverses described are both left and right inverses follows from commutativity of $\cdot$. |
Series: determining convergence of series problem | I believe the expression
$$\sum_{1}^{\infty} a_n + 1$$
means
$$\left(\sum_{1}^{\infty} a_n \right) + 1$$
If it were meant as you suggest, the expression would instead most likely be written as
$$\sum_{1}^{\infty} \left( a_n + 1 \right)$$
This would avoid potential ambiguity. |
Assume $x \geq 1$ and deduce $x^{1/n} \geq 1$ | Your proof is valid.
You could have a direct proof as well
Note that the function $f(x)= x^{1/n}$ is strictly increasing on $(0,\infty)$ since its derivative is positive.
Therefore,
$$ x\ge1\implies x^{1/n}\ge 1^{1/n}=1$$ |
Prove using induction that $n^3 − n$ is divisible by 6 whenever $n > 0$. | In your assumption step, you need to assume the statement is true for $n=k$, i.e. $k^3-k$ is divisible by $6$.
In the induction step, expand and simplify $(k+1)^3-(k+1)$:
$$\begin{aligned}(k+1)^3-(k+1)&=k^3+3k^2+3k+1-k-1\\&=k^3-k+3k^2+3k\\&=(k^3-k)+3k^2+3k\\&=(k^3-k)+3k(k+1)\end{aligned}$$
Note that $k^3-k$ is divisible by $6$ by the inductive hypothesis (the assumption). $k$ and $k+1$ are consecutive integers, so their product must be even. Hence $3k(k+1)$ is also divisible by $6$. Thus, whenever $k^3-k$ is divisible by $6$, we also have that $(k+1)^3-(k+1)$ is divisible by $6$. |
Let $f$ be a function such that $f'(x)=\frac{1}{x}$ and $f(1) = 0$, show that $f(xy) = f(x) + f(y)$ | From the definition $f'(x) = \frac{1}{x}$ with $f(1)=0$ we have $$f(x) \equiv \int_1^x \frac{dt}{t}$$
By a simple change of variables $z = ty$ to the integral above we get the result
$$f(x) = \int_y^{xy}\frac{dz}{z} = \int_1^{xy}\frac{dz}{z} - \int_1^y \frac{dz}{z} = f(xy) - f(y)$$
Note that we can also prove the logarithmic property $f(x^r) = r f(x)$ by applying the change of variables $z = t^{1/r}$. Taking it further, we can also quite easily show that $f$ has a unique inverse $E(x)$ satisfying $E'(x) = E(x)$ and $E(x+y) = E(x)E(y)$, so $E(x) = e^x$ where $e = E(1)$ is defined by $\int_1^e\frac{dt}{t} = 1$. This gives an alternative definition of the exponential function and of the number $e$. |
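The functional equation can also be confirmed numerically straight from the integral definition (a minimal SciPy sketch):

```python
from scipy.integrate import quad

f = lambda x: quad(lambda t: 1/t, 1, x)[0]   # f(x) = integral of 1/t from 1 to x
for x, y in ((2.0, 3.0), (0.5, 8.0)):
    print(f(x*y), f(x) + f(y))               # the pairs agree
```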
Applied/Numerical Linear Algebra-Suggestions for Project | Not sure if this will be too advanced, but there's the discrete Radon transform. The idea is basically: say you have a matrix of unknown numbers, you take as your data the sum along any row, column, or diagonal, and your goal is to recover the original numbers.
This paper by Beylkin should be understandable to someone with your background:
http://amath.colorado.edu/pub/wavelets/papers/BEYLKI-1987.pdf
A key tool in the inversion process is the fast Fourier transform (FFT) which is perhaps the most important numerical algorithm ever. The abstract notion of a Fourier transform is also very important to signal processing (and endless other branches of mathematics).
One application is to medical imaging / the CT scan: if the numbers are density of tissue inside someone's body, the sum along a line (row/column/etc.) corresponds to the attenuation of an x-ray passing through that tissue. Thus, by probing someone's body with x-rays, you map the density of the stuff inside them.
The continuous analog of this is the Radon transform / X-ray transform, and they require some heavier math, but I would still recommend reading about them as background. |
Properties of the Laplace transform | The fourth option has bad syntax and doesn't make sense as written. I imagine it's asking if $\mathcal L[cfg] = c FG$ is true for $c \in \mathbb R$.
You're correct that the $\mathcal L[f\cdot g] \ne \mathcal L[f]\cdot\mathcal L[g]$ in general.
The correct relationship between the product of functions and the Laplace transform is given by the convolution theorem:
$\mathcal L[f*g] = \mathcal L[f]\cdot\mathcal L[g]$
The definition of $*$ is:
$(f * g)(t) = \displaystyle \int_0^t f(s)g(t-s)\,\mathrm ds$
(edit: In some contexts, the definition of $*$ has the bounds of integration be $-\infty$ to $+\infty$)
This is pronounced "$f$ convolved with $g$" or "the convolution of $f$ with $g$"
If you're asked to prove the first and second answers wrong, it's enough to find one counterexample.
Khan Academy has a few videos on this.
https://www.khanacademy.org/math/differential-equations/laplace-transform/convolution-integral/v/introduction-to-the-convolution |
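The convolution theorem can be checked symbolically for concrete functions (a SymPy sketch; the choices $f=e^{-t}$, $g=\sin t$ are mine):

```python
import sympy as sp

t, s, tau = sp.symbols('t s tau', positive=True)
f, g = sp.exp(-t), sp.sin(t)

conv = sp.integrate(f.subs(t, tau) * g.subs(t, t - tau), (tau, 0, t))
lhs = sp.laplace_transform(conv, t, s, noconds=True)
rhs = (sp.laplace_transform(f, t, s, noconds=True)
       * sp.laplace_transform(g, t, s, noconds=True))
print(sp.simplify(lhs - rhs))  # 0
```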
Summing $S_{n,m}=\sum_{k=1}^{n} (-1)^k~ k^{m} ~ {n \choose k}$ for $m<n.$ | The LHS of $(3)$ is not $-\dfrac{S_{n,m}}{m!}$ but rather $\displaystyle\frac{1}{m!}\sum_{k=\color{red}{0}}^{n}(-1)^k\color{red}{(}k\color{red}{{}+1)}^m\binom{n}{k}$.
To get to $S_{n,m}$, consider just $f(x)=(1-e^x)^n$ instead. |
First order partial differential equation question | Fix $y_0$ and define $v$ by $v(s)=u(s,y_0+s)$; then your differential equation reads $$
v'(s)=-v(s)\cdot(L-y_0-s)^{-1},
$$
solved by $v(s)=w(y_0)\cdot(L-y_0-s)$ for a given function $w$. Now $u(t,y)=v(s)$ for $s=t$ and $y_0=y-t$ hence $u(t,y)=w(y-t)\cdot(L-y)$.
The initial condition yields $w(-t)\cdot L=at$ hence $u(t,y)=(a/L)\cdot(t-y)\cdot(L-y)$. |
Integration doubt in notation. | No. The first notation is more common for integrals over intervals, but the second type of notation is more convenient for more general domains (for example a complicated domain in $\mathbb R ^2$). |
True or false? For all $A,B \in \mathbb{R}^{n \times n}$ we have that det(A+B) $\neq$ det(A)$+$ det(B) | Your reasoning is right, but you cannot fix $n=0$ if you want to prove the statement is false for all $n$.
You are right when you say a counter-example is sufficient.
For example, take the null matrix $A$ of $\mathbb R^{n\times n}$, and $B:=A$.
You have $\det(A+B)=0=\det (A)+\det(B)$, so the statement is false. |
Derivative of $\frac{d}{dt} f(\gamma(t))$ with differential operators $\frac{\partial}{\partial z}$ and $\frac{\partial}{\partial \overline{z}}$ | What you are saying is that you have the path in the complex plane that is specified by a parametrization $\gamma(t)$. So you have that $z(t) = \gamma(t) \Rightarrow \bar{z} = \bar{\gamma}$. Then from the multivariable chain rule you have that
$$ \frac{df(z,\bar{z})}{dt} = \frac{\partial f}{\partial z}\frac{d z}{d t} + \frac{\partial f}{\partial \bar{z}}\frac{d \bar{z} }{d t}$$
Hence,
$$ \frac{df(z,\bar{z})}{dt} = \frac{\partial f}{\partial z} \gamma'(t) + \frac{\partial f}{\partial \bar{z}} \bar \gamma'(t)$$ |
Constructing a Hamiltonian system with a given number of saddles and centers | I don't know if you can manage it with polynomials, but you can do it.
The Hamiltonian $H(q,p)$ gives you the system
$$ \eqalign{ \frac{dq}{dt} &= \dfrac{\partial H}{\partial p}\cr
\frac{dp}{dt} &= -\dfrac{\partial H}{\partial q}\cr}$$
with equilibria at the critical points of $H$.
Now think of drawing level curves of $H$. Around the one centre the curves
will be closed. Each saddle will have level curves crossing in an X shape. You must be careful to make no closed level curves other than those surrounding that one centre. For example, you might have a configuration like this in the case $n=5$.
EDIT: Ah, you can do it with a polynomial: actually a configuration looking very much like that
occurs with
$$ H(p,q) = p^2 + q^2 + p^5-10p^3q^2+5pq^4 = p^2 + q^2 + \text{Re}((p+iq)^5) $$
or more generally, to have $n \ge 3$ saddles and one centre you could try this with $(p+iq)^n$
instead of $(p+iq)^5$. |
Verifying answer of bayes rule | We have:
$$P(D_1)=P(D_2)=.5$$
$$P(Q_1)=.6, \, P(Q_2)=.7$$
Note that $P(Q_i)$ describes the probability that one candidate will qualify. If we want both applicants from $D_i$ to qualify, we need two successes, i.e.:
$$P(Q_i)^2$$
Note that the above describes the conditional probability of qualification of both candidates, when they're both from $D_i$, i.e.:
$$P(Q|D_i)=P(Q_i)^2$$
For the first question - both candidates are from one group, therefore probability that they'll both qualify for this position is
$$P(Q)=P(D_1)P(Q|D_1)+P(D_2)P(Q|D_2)$$
For the second:
$$P(D_2|Q)=\frac{P(Q|D_2)P(D_2)}{P(Q)}$$ |
Relationship between semiconvexity and Lipschitz continuity | For a continuous counterexample, consider a function whose graph is a lower semicircle. It is obviously convex, but its slope blows up at each endpoint. This tells us we can expect only local Lipschitz continuity. |
How to determine whether a given ideal of an order of a quadratic number field is invertible or not | Let $\sigma$ be the unique non-identity automorphism of $K/\mathbb{Q}$.
We denote $\sigma(\alpha)$ by $\alpha'$ for $\alpha \in K$.
Let $I = [a, r + \omega]$ be a primitive ideal of $R$, where $a \gt 0, r$ are rational integers.
Let $\theta = (r + \omega)/a$.
The minimal polynomial of $\theta$ over $\mathbb{Q}$ is $x^2 - px + q = (x - \theta)(x - \theta')$,
where $p = \theta + \theta', q = \theta\theta'$.
Let us compute $p$ and $q$.
$p = (r + \omega)/a + (r + \omega')/a = (2r + D)/a$.
$q = (r + \omega)(r + \omega')/a^2 = (r^2 + rD + (D^2 - D)/4)/a^2$.
Hence $\theta$ is a root of a polynomial $ax^2 + bx + c$, where $b = -ap = -(2r + D),
c = aq = (r^2 + rD + (D^2 - D)/4)/a$.
By this question, $N(r + \omega) = (r + \omega)(r + \omega')$ is divisible by $a$.
Hence $c$ is an integer.
Note that $(-b + \sqrt D)/2 = r + \omega$.
The discriminant of this polynomial is $b^2 - 4ac = (2r + D)^2 - 4(r^2 + rD + (D^2 - D)/4) = D$.
By this question, $I = [a,\ (-b + \sqrt D)/2]$ is invertible if and only if gcd$(a, b, c) = 1$.
By Lemma 2 of my answer to the question,
$(R : I) = \mathbb{Z} + \mathbb{Z}\frac{(b + \sqrt D)}{2a}$.
Suppose $I$ is invertible.
By this question, $I^{-1} = (R : I) = \mathbb{Z} + \mathbb{Z}\frac{(b + \sqrt D)}{2a}$. |
How to get correct angle when using inverse tangent for quadrant II or quadrant III? | We know that $\tan(x)$ is not a bijective function on its whole domain, so to define the inverse we have to restrict the domain to $\left(-\frac{\pi}{2},\frac{\pi}{2}\right)$. This explains why we can't obtain angles such as $\pi$ or $-\frac{3\pi}{4}$ that we find in the third and second quadrants. To overcome this, we can add $\pi$ for the angles in the second quadrant. So:
$$\alpha=\arctan(v)+\pi$$
where $v\leq0, v\in R$ and so $\arctan(v)\leq0$
For the angles in third quadrant, we have:
$$\beta=\arctan(v)+\pi$$
where $v\geq 0$ and so $\arctan(v)\geq 0$. |
Every connected $\omega$-stable group has a zero element? | I'm not very familiar with the theory of $\omega$-stable groups, but I think you've made two mistakes:
You've assumed that $a\models q$, for which there is no reason ($a$ is just any realization of $p$, of which $q$ is an extension, but by no means the only one). In fact, this is impossible, as any element of $G_1$ has isolated type over $G_1$ and generic type can't be isolated (because isolated types have rank $0$ and the group is infinite, and so it has a nonzero rank).
Another mistake, though possibly easily fixed, is that you tacitly assumed that $G_1$ is connected. Maybe you can do that, but I don't know that and you've certainly not said so. |
Intersection of Ideals of coprime elements in a UFD | What you are trying to prove is false; consider $D=\Bbb{Z}[x,y]$ and $a=x$, $b=y$, $c=x+y$. Then
\begin{eqnarray*}
(a,b)\cap(c)&=&(x,y)\cap(x+y)=(x+y),\\
(ac,bc)&=&\big(x(x+y),y(x+y)\big),
\end{eqnarray*}
which are not the same; if $x+y\in\big(x(x+y),y(x+y)\big)$ then there exist $f,g\in\Bbb{Z}[x,y]$ such that
$$x+y=x(x+y)f+y(x+y)g,$$
from which it follows that $1=xf+yg$, which is of course impossible (evaluate at $x=y=0$ to get $1=0$). |
Ask book to deeply understand partially ordered sets | E.Harzheim, Ordered Sets, Springer Verlag, 2005. |
Continuity of multidimensional function $f(x,y) = \frac{x\exp\left({-1}/{y^2}\right)}{x^2+\exp\left({-2}/{y^2}\right)}$ | Note that
$$\frac{t^me^{\frac{-1}{t^{2n}}}}{t^{2m}+e^{\frac{-2}{t^{2n}}}}
=\frac{t^m}{t^{2m}e^{\frac{1}{t^{2n}}}+e^{\frac{-1}{t^{2n}}}}\to 0$$
indeed
$$t^{2m}e^{\frac{1}{t^{2n}}}=\frac{e^{\frac{1}{t^{2n}}}}{\frac1 {t^{2m}}}\to \infty$$
To show that $f(x,y)$ is not continuous at $(x,y)=(0,0)$ note that for $x=0$
$$\frac{xe^{\frac{-1}{y^2}}}{x^2+e^{\frac{-2}{y^2}}}=0$$
but for $x=t$ and $\left(-\frac1{y^2}\right)=\log t$ as $t\to 0^+$
$$\frac{xe^{\frac{-1}{y^2}}}{x^2+e^{\frac{-2}{y^2}}}=\frac{t^2}{2t^2}=\frac12$$ |
Does $\lim_{e \rightarrow 0}$ $ \int^{+e}_{-e} f(x) dx \rightarrow 0$ for all finite $f(x)$? | Yes. Note that if $f$ is Riemann integrable on $[-\varepsilon,\varepsilon]$, then in particular $f$ is bounded on this interval, and we have the inequality:
$$\left|\int_{-\varepsilon}^\varepsilon f(x)\,dx\right|\leq\sup_{|x|\leq\varepsilon}|f(x)|\int_{-\varepsilon}^\varepsilon\,dx=\sup_{|x|\leq\varepsilon}|f(x)|\cdot2\varepsilon.$$
As $\varepsilon\to0$, the rightmost quantity vanishes, and so the desired result follows from the squeeze theorem. |
$A^tA-AA^t$ in Mathematical Physics | I can't think of a perfectly suitable description of what $AA^{T}$ and $A^{T}A$ can represent (certainly out of my comfort zone with regard to fluid dynamics), other than to say that for some special types of groups the transpose matrix is actually the inverse; the rotation matrices have this property. So geometrically the rotation matrices preserve length.
A more rigorous geometric interpretation is to consider a parameterization of a $k$-manifold $M \subset \mathbb{R}^{n}$, say $\gamma: \mathbb{R}^{k} \rightarrow \mathbb{R}^{n}$.
The Jacobian $J=[D\gamma]$ is such that $J^{T}J$ is the metric induced on $M$ by the embedding in $\mathbb{R}^{n}$. |
Poisson Distribution to determine number of defective tyres. | I think it will be clearer, if a bit longer, to write out in some detail about what you're approximating with what. That's because there are really two $n$s and two $p$s in the model, but you're only going to use one of each in your Poisson approximation.
First, we have the probability distribution for the number of defective tires in a given lot of 10. This is a binomial random variable with $n=10,p=(1/500)$. The probability that this is zero is the probability that a given lot of tires has no defective tires in it. This is exactly $(499/500)^{10}$.
Next, we have the probability distribution for the number of lots in a consignment of 10000 lots which contain no defective tires. Now the probability of a lot to have no defective tires is $(499/500)^{10}$ (we already computed it), so this is a binomial random variable with $n=10000$ and $p=(499/500)^{10}$.
Now you approximate that binomial variable with a Poisson variable with parameter $\lambda = np$ for those values of $n$ and $p$. |
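To see how good the approximation is, one can compare the two pmfs (a SciPy sketch; numerically it is most convenient on the complement count - lots containing a defective tire - where the success probability is small):

```python
from scipy import stats

p_lot = (1 - 1/500) ** 10      # P(a lot of 10 has no defective tire) ~ 0.980
n2, q = 10000, 1 - p_lot       # number of lots; P(lot has a defective) ~ 0.020

binom = stats.binom(n2, q)
poisson = stats.poisson(n2 * q)
for k in (150, 198, 250):
    print(k, binom.pmf(k), poisson.pmf(k))   # the columns nearly agree
```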
Group Theory Simple Proof | The wording is a little unclear, but I’m assuming that (1)-(3) hold for every choice of $a,b$, and $c$ in $S$.
You know, using (2) and (4), that $bp=pb=b$ for any choice of $b$, so it’s $bq$ and $br$ that you have to worry about. If $b=q$, you’ll get $bq=qq=q$, but it’s not clear whether $br=qr$ will be equal to $q$. A similar problem arises if $b=r$: we don’t know whether $rq=r$ or not. Of course by (2) $rq=qr$, so the real question is what $qr$ is. All we know is that it must be one of $q$ and $r$ if such a $b$ is going to exist. So why not just try letting $b=qr$, whatever it is?
Then $(qr)p=p(qr)=qr$ by (2) and (4), $(qr)q=(rq)q=r(qq)=rq=qr$ by (2), (3), (1), and (2) again, and $(qr)r=q(rr)=qr$ by (3) and (1).
However, there’s a potential trap here: we want $qr$ to be whichever one of $q$ and $r$ makes this work, but how do we know that $qr$ isn’t actually $p$ instead? The answer is in Henry’s comment below: if $qr$ were equal to $p$, we’d have $(qr)r=pr=r$ by (4), but also $(qr)r=q(rr)=qr=p$ by (3) and (1), and $r$ would have to be equal to $p$, which we know is not true. We can be assured, therefore, that $qr\ne p$. |
Short exact sequence inducing $0$ map | No. Let the first exact sequence $0\to\Bbb R\to\Bbb R^2\to\Bbb R\to 0$ be the short exact sequence defined by $x\mapsto(x,0)$ and $(x,y)\mapsto y$ respectively and the second $0\to\Bbb R\to\Bbb R\to 0\to 0$ defined by the identity. Then we can choose the vertical map as $h:\Bbb R^2\to\Bbb R,\quad (x,y)\mapsto y$. |
the boundary normal vectors | Your claim is true. The first equation follows from $-\max(\cdots)=\min(-\cdots)$. The second follows from a generalization of the minimax theorem; for example, you can use Sion's minimax theorem. All the conditions on the function $f$ are satisfied trivially in your case because your function is linear. |
How do I show that every group of order 90 is not simple? | This is a very very late answer. But anyway I'll post for the use of people who are learning form this site (like myself).
First note that $90=2\times 3^2\times 5$ is of the form $pq^2r$ with three different primes (like $60=2^2\times 3\times 5$). Therefore this case is a bit harder than common exam/homework problems, where we have orders of the form $p, p^2, p^3, pq, pq^2, pqr.$
Assume a group $G$ of order $90$ is simple. Using the Sylow theorems, with standard notation, it is not difficult to prove that $n_3=10$ and $n_5=6.$ So we have $6\times(5-1)=24$ elements of order $5$, and if the $3$-Sylow subgroups intersect trivially then we have $10\times(9-1)=80$ nonidentity elements in $3$-Sylow subgroups; but this contradicts the order of the group, as $24+80\gt 90.$ Therefore some two $3$-Sylow subgroups must have a non-trivial intersection.
Let $H, K\in Syl_3(G)$ with $|H\cap K|=3$; then $$|HK|=\dfrac{|H||K|}{|H\cap K|}=27.$$ Further, $[H : H\cap K]=[K: H\cap K]=3$ is the smallest prime factor of both $|H|$ and $|K|.$ Therefore $H\cap K$ is normal in both $H$ and $K.$ Now let us look at the normalizer $N=N_G(H\cap K)$ of the intersection in $G.$ It is clear that $HK\subset N$, and therefore $|N|\ge 27$, $|N|$ divides $90$, and $|N|$ is divisible by $9.$ Hence we have only two possibilities: $|N|\in\{45, 90\}.$
If $N=G,$ then $H\cap K$ is normal in $G.$ Otherwise $[G:N]=2$ and then $N$ is normal in $G.$ Either way we have a contradiction with our assumption, and hence $G$ cannot be simple. |
The necessary and sufficient condition for a regular n-gon to be constructible by ruler and compass. | Suppose that your regular polygon has a vertex on the $x$-axis. Then the first vertex, counted counterclockwise, has coordinates $(\cos(\frac{2\pi}{n}),\sin(\frac{2\pi}{n}))$.
Hence if you know the construction of the polygon, by projecting the first vertex onto the $x$-axis, which can be done with a ruler and a compass, you can get $\cos(\frac{2\pi}{n})$.
Conversely, if you have a construction of $\cos(\frac{2\pi}{n})$, by intersecting the unit circle with the vertical line passing through the point $(\cos(\frac{2\pi}{n}), 0)$ you get the first vertex of the regular polygon, and therefore the angle which enables you to build the full polygon. |
Show that number of elements in $(\mathbb{Z}/n\mathbb{Z})^\times$ is $\varphi(n)$ | The key to these kinds of things is a clever observation called Bezout's Lemma. It says that if $x$ and $y$ are natural numbers with highest common factor equal to $d$, then you can find integers $a$ and $b$ such that
$$
ax + by = d.
$$
It is proved by applying Euclid's algorithm repeatedly and keeping track of what is happening. The relevance here is that if $m$ is relatively prime to $n$, then you have
$$
am + bn = 1
$$
for some numbers $a$ and $b$. i.e. given such $m$, you can find $a$ with
$$
am = (\text{multiple of}\ n) + 1...
$$ |
Rewrite the definition of supremum | Yes:
If $x<\sigma$, then $\epsilon=\sigma-x>0$, hence $\exists y\in S: y>\sigma-\epsilon=\sigma-\sigma+x=x$.
Conversely, let $\epsilon>0$. Then $x=\sigma-\epsilon<\sigma$, hence $\exists y\in S:y>x=\sigma-\epsilon$. |
Freaky Polynomial: $P_n(x)=\left(x\frac{d}{dx}\right)^n f(x)$ | Here is a supplementary to user @bonsoon's comment as to why the Stirling numbers of the second kind pop up. This begins with the identity
$$ x^n = \sum_{k=0}^{n} \left\{ {n \atop k} \right\} (x)_k, $$
where
$\left\{{n \atop k}\right\}$ is the Stirling number of the second kind, which counts the number of ways of partitioning the set $\{1,\cdots,n\}$ into $k$ parts, and
$(x)_k = x(x-1)\cdots(x-k+1)$ is the falling factorial.
(See the Wikipedia article, for instance.) Now if we introduce two operators $D = \frac{d}{dx}$ and $L = x\frac{d}{dx}$, then they satisfy $ L^n(x^a) = a^n x^a $ and $ D^n(x^a) = (a)_n x^{a-n} $, and so,
$$ L^n(x^a)
= a^n x^a
= \sum_{k=0}^{n} \left\{ {n \atop k} \right\} (a)_k x^a
= \sum_{k=0}^{n} \left\{ {n \atop k} \right\} x^k D^k (x^a). $$
Since both $L$ and $D$ are linear, the same identity holds for any polynomial $f(x)$ in place of $x^a$, yielding
$$ L^n f = \sum_{k=0}^{n} \left\{ {n \atop k} \right\} x^k D^k f. $$
Of course, this extends to any $C^n$-function $f$ by polynomial approximation. |
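For a concrete test of the operator identity, here is a SymPy check on an arbitrary polynomial (my choice of $f$ and $n$):

```python
import sympy as sp
from sympy.functions.combinatorial.numbers import stirling

x = sp.symbols('x')
f, n = x**7 + 3*x**2 + 1, 4

lhs = f
for _ in range(n):                 # apply L = x d/dx  n times
    lhs = x * sp.diff(lhs, x)

rhs = sum(stirling(n, k) * x**k * sp.diff(f, x, k) for k in range(n + 1))
print(sp.expand(lhs - rhs))        # 0
```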
Interesting riddle game question? | Take the cases and number them from 1 to 100 (from one end to another). Then look how much the odd cases add up and how much the even cases add up. If the odd ones add more take number 1. After this your adversary will only be able to take even cases. This will lead to you being able to take an odd case, which again will lead the other person to only pick evens. the strategy is almost the same if the even ones add up more. |
Finite Approximation | Note: this answer is incomplete. Hopefully it clears up some things for the OP nevertheless. It should not be accepted in its current state.
First note that there appears to be a typo on the page -- $F$ should equal $[f_0, f_1, f_2, f_3, f_4]$, i.e. $F$ should only have five components, not six. Otherwise the multiplication in the equation you mention would not make sense (it's possible to multiply a $3 \times 5$ matrix with a $5 \times 1$ matrix, but it is not possible to multiply a $3 \times 5$ matrix with a $6 \times 1$ matrix). Plus $f_5$ was never mentioned as one of the approximations which are being looked for.
In other words, the first thing to understand about equation 1 is that there is a typo in its definition.
Equation 2 follows from Equation 1 by (a) multiplying equation 1 by $1/4$, and then (b) adding it to $1/4$ times the equation above Equation 1, which describes $F''$.
I.e., we add $1/4 \cdot F''$ ($1/4$ times the approximation to $f''$) to $1/4$ times the equation describing the approximation to $xf(x)$, to get $1/4$ times the approximation to $f'' + xf(x)$.
Note that $(\Delta x)^{-2} = 4$, which is why everything is being multiplied by $1/4$, to simplify the expression somewhat. And since $(1/4) \cdot 0 = 0$, it doesn't affect the end result.
Also, there may be another typo on the page -- at the top of the page, it is written $N = 4$ (upper-case $N$), but later on the page it is written $n = 4$ (lower-case $n$) -- I suspect they are meant to denote the same quantity, even though they are represented by different symbols.
The reason why (I think) the matrices which are being multiplied with $F$ from the left are $3 \times 5$ is because of the description of the problem, where we are only supposed to find the values of the approximations at interior points. The boundary points are $x_0$ and $x_4$, so the interior points are $x_1, x_2, x_3$ -- note that there are only three interior points, and that multiplying the matrices in question on the left of $F$ leads to a $3 \times 1$ vector. Also the only points which occur in the left matrix of equation 1 are $x_1, x_2, x_3$, which are the interior points, which again corresponds to/matches the problem description. |
complex number 5th roots of unity | Notice that $z^5=e^{2i\pi}=1$, hence $z$ is a root of the polynomial $x^5-1$. But $x^5-1=(x-1)(x^4+x^3+x^2+x+1)$, and obviously $z\neq 1$. So... |
If $U$ and $W$ are subsets of $V$ and $U \subset W$ then $W^{\perp} \subset U^{\perp}$ | The proof starts with taking an arbitrary $v\in W^\bot$, and proves that $v\in U^\bot$.
Therefore, it proves the statement:
$$\forall v: v\in W^\bot\implies v\in U^\bot,$$ and this statement is, by definition, equivalent to the statement $W^\bot\subset U^\bot$. |
$18$ mice were placed in $3$ groups, with all groups equally large. In how many ways can the mice be placed into $3$ groups? | It depends if groups are distinguishable. For e.g. if you put 6 mouse each in 3 different color boxes then your textbook is correct, if you put them in same color boxes then your teacher is correct.
Take a smaller case say 4 mouse and 2 boxes and then think about. You might want to write down all the possibilities to give you clarity.
Ans clearly mentions in the way textbook has treated order in which groups are formed matter (i.e. they are distinguishable you can label them 1,2,3) |
A Method for Finding the Expansion of $\sin m\theta$ and $\cos m\theta$ | I hope that the process would be clearer using summations instead.
Considering the differential equation $$(1-x^2)y''-xy'+m^2y=0$$ and using
$$y=\sum_{i=0}^k a_ix^i\qquad y'=\sum_{i=0}^k ia_ix^{i-1}\qquad y''=\sum_{i=0}^k i(i-1)a_ix^{i-2}$$ we then have (expanding $(1-x^2)y''$ as $y''-x^2y''$)
$$\sum_{i=0}^k i(i-1)a_ix^{i-2}-\sum_{i=0}^k i(i-1)a_ix^{i}-\sum_{i=0}^k ia_ix^{i}+\sum_{i=0}^k m^2a_ix^i=0 $$ which can be compacted (notice that the last three summations involve $x^i$) as
$$\sum_{i=0}^k i(i-1)a_ix^{i-2}-\sum_{i=0}^k(i^2-m^2)a_ix^i=0$$ Now, comparing coefficients of the same degree $n$, we then have $$(n+1)(n+2)a_{n+2}-(n^2-m^2)a_n=0\implies a_{n+2}=\frac{n^2-m^2}{(n+1)(n+2)}a_n$$ and since it is a second order differential equation, $a_0$ and $a_1$ remain arbitrary for the time being. |
A question about applying Birkhoff's ergodic Theorem. | Your choice for $(a,b)$ does not give rationally independent elements, because $0\cdot a+1\cdot b=0$ (with $n=0$, $m=1$) hence the transformation is not ergodic. |
Any set of $n$ linearly independent elements in $A^n$ form a basis. | Hint: Try $A=\mathbb Z$, $n=1$, and $x_1=2$.
In general, $\{x_1,\ldots,x_n\}$ is a basis iff the matrix expressing them in the canonical basis is invertible in $A$, which happens iff its determinant is a unit in $A$. |
In what sense can the Lie algebra of $GL_n(\mathbb{C})$ be "identified" with $M_n(\mathbb C)$? | The group $G:=\text{GL}_n(\mathbb C)$ is an open subset of $M_n(\mathbb C)$. A vector field on $G$ is a map
$$
X:G\to M_n(\mathbb C).
$$
We denote by $\ell(x)X$ the left action of $x\in G$ on such a vector field $X$, that is, we define the vector field $\ell(x)X$ by
$$
(\ell(x)X)(y):=xX(x^{-1}y).
$$
Thus, $X$ is invariant under this action if and only if $X(x)=xX(1)$ for all $x$ in $G$. Such a vector field is said to be left invariant. In particular, for each $a\in M_n(\mathbb C)$ there is a unique left invariant vector field $\widetilde a$ such that $\widetilde a(1)=a$. We have $\widetilde a(x)=xa$ for all $x$ in $G$, and $\widetilde a$ is $C^\infty$. We must check
$$
\left[\ \widetilde a\ ,\ \widetilde b\ \right]=\widetilde{[a,b]}
$$
for all $a,b\in M_n(\mathbb C)$.
Let $e_{ij}\in M_n(\mathbb C)$ be the matrix with a one in the $(i,j)$ position and zeros elsewhere. It suffices to verify that the above display holds for $a=e_{ij}$ and $b=e_{pq}$.
We have
$$
X_{ij}(x):=\widetilde{e_{ij}}(x)=x\ e_{ij}=\sum_{p,q}\ x_{pq}\ e_{pq}\ e_{ij}=\sum_p\ x_{pi}\ e_{pj}.
$$
If we write
$$
X_{ij}=\sum_k\ x_{ki}\ \ \frac{\partial}{\partial x_{kj}}\quad,\quad
X_{pq}=\sum_r\ x_{rp}\ \ \frac{\partial}{\partial x_{rq}}\quad,
$$
then the Lie bracket $[X_{ij},X_{pq}]$ is just the commutator of the differential operators $X_{ij}$ and $X_{pq}$, and we get
$$
[\widetilde e_{ij},\widetilde e_{pq}]=[X_{ij},X_{pq}]=\delta_{jp}\ X_{iq}-\delta_{qi}\ X_{pj}=\widetilde{[e_{ij},e_{pq}]},
$$
as was to be shown. |
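The commutator identity for the matrix units, which drives the computation, can be verified directly (a small NumPy sketch for $n=3$):

```python
import numpy as np

n = 3
E = lambda i, j: np.outer(np.eye(n)[i], np.eye(n)[j])   # matrix unit e_ij

for i in range(n):
    for j in range(n):
        for p in range(n):
            for q in range(n):
                lhs = E(i, j) @ E(p, q) - E(p, q) @ E(i, j)
                rhs = (j == p) * E(i, q) - (q == i) * E(p, j)
                assert np.array_equal(lhs, rhs)
print("[e_ij, e_pq] = d_jp e_iq - d_qi e_pj verified")
```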
Regular closed subset of H-closed space | Let $X$ be H-closed, and let $F$ be a regular closed set in $X$. Let $\mathscr{U}$ be a relatively open cover of $F$. For each $U\in\mathscr{U}$ there is an open $V_U$ in $X$ such that $U=F\cap V_U$; let
$$\mathscr{V}=\{X\setminus F\}\cup\{V_U:U\in\mathscr{U}\}\;.$$
$\mathscr{V}$ is an open cover of $X$, so it has a finite proximate subcover $\mathscr{V}_0$. Let
$$\mathscr{U}_0=\{U\in\mathscr{U}:V_U\in\mathscr{V}_0\}\;;$$
clearly $\mathscr{U}_0$ is a finite subset of $\mathscr{U}$. Since $\operatorname{cl}(X\setminus F)\cap\operatorname{int}F=\varnothing$, and $\bigcup\mathscr{V}_0$ is dense in $X$, $\bigcup\{V_U:U\in\mathscr{U}_0\}$ must be dense in $\operatorname{int}F$, and hence $\bigcup\mathscr{U}_0$ must be dense in $\operatorname{int}F$. Thus,
$$F=\operatorname{cl}\operatorname{int}F\subseteq\bigcup_{U\in\mathscr{U}_0}\operatorname{cl}U\subseteq F\;,$$
$\mathscr{U}_0$ is a proximate subcover of $\mathscr{U}$, and $F$ is H-closed.
It is also true that a space $X$ is H-closed iff every open filter in $X$ has a cluster point, and we can use this characterization instead. Let $\mathscr{U}$ be a relatively open filter on $F$, and let $\mathscr{B}=\{U\cap\operatorname{int}F:U\in\mathscr{U}\}$. Clearly $U\cap\operatorname{int}F\ne\varnothing$ for each $U\in\mathscr{U}$, so $\mathscr{B}$ is an open filterbase in $X$. $X$ is H-closed, so the filter $\mathscr{V}$ generated by $\mathscr{B}$ has a cluster point $x\in X$, which is evidently also a cluster point of $\mathscr{U}$. And $\operatorname{int}F\in\mathscr{V}$, so every nbhd of $x$ meets $\operatorname{int}F$, and therefore $x\in\operatorname{cl}\operatorname{int}F=F$, so that $\mathscr{U}$ has a cluster point in $F$. |
Prove the unit ball in the space $\ell_2$ of infinite sequences is not totally bounded. | Consider the set $\{v_n : n=1,2,3,\ldots\}$ in $\ell_2$ where $v_n$ is the point $(x_1,x_2,x_3,\ldots)$ for which $x_n=1$ and $x_k=0$ for all $k\ne n.$ What happens if you try to cover it with balls of radius $1/10\text{?}$ Each ball can cover only one of these points. All of these points are on the boundary of the closed unit ball. Thus the closed unit ball cannot be covered by only finitely many open balls of radius $1/10.$ |
If $\lim\limits_{x\to b^-} f(x)=\infty$ Then $\lim\limits_{x\to b^-}f'(x)=\infty $ | Using the contrapositive approach, we assume that $f'(x)$ is bounded for all $a<x<b$ for some $a$. Let's call this bound $B$. Then, from the MVT, there exists $a<\xi<x$ such that
$$f(x)=f(a) +f'(\xi)(x-a)$$
Thus,
$$\begin{align}
|f(x)|&\le |f(a)| +|f'(\xi)||(x-a)|\\\\
&\le |f(a)|+B|x-a|
\end{align}$$
for $a<x<b$. But this implies that $f$ is bounded for $x\in (a,b)$, which is a contradiction. Therefore, $f'$ is unbounded.
Now, inasmuch as $f''>0$ is given, we have that $f'$ is increasing. Therefore, as $f'$ is increasing and unbounded we must have $$\lim_{x\to b^{-}}f'(x)=\infty$$ |
Showing that G is solvable | You're almost there:
$\;N\;$ is solvable (why?)
$\;G/N\;$ is also solvable (again, why?)
and thus $\;G\;$ is solvable. |
indeterminate forms of limits | 1) This limit is not an indeterminate form. Let $a \in ]0,+\infty[$. Since, $a^{x} = \exp(x\ln(a))$ for all $x \in \mathbb{R}$, you have :
$$
\lim \limits_{x \rightarrow +\infty} a^{x} =
\begin{cases}
+\infty & \text{if } a > 1 \\
1 & \text{if } a = 1 \\
0 & \text{if } a < 1 \\
\end{cases}
$$
2) For example :
$$ \lim \limits_{x \rightarrow +\infty} \left( 1+\frac{1}{x} \right)^{x} = e $$
3) You have :
$$ \frac{\sin(\frac{1}{x})}{\frac{1}{x}} = x \sin(\frac{1}{x}) $$
and since $\sin(\frac{1}{x})$ is bounded, we have :
$$ \lim \limits_{x \rightarrow 0} \frac{\sin(\frac{1}{x})}{\frac{1}{x}} = 0 $$ |
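A quick numeric illustration of (2) and (3) in R (purely illustrative; the sample points are arbitrary):

x <- 10^(1:7)
(1 + 1/x)^x      # tends to exp(1) = 2.718282...
x <- 10^(-(1:7))
x * sin(1/x)     # tends to 0, since |sin(1/x)| <= 1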
Making sense of weak dominance exercise | a) What happens if you bid more than your valuation and win? So if you value something at 98 and bid 98.01, you have utility -0.01 < 0 if you win. So how can you improve your outcome here?
b) Suppose you have valuation $v$ and bid $b < v$. What happens if someone else bids $b^{\prime} \in (b, v)$? How could you improve your outcome? |
Equivalent (?) definitions of Axiom A | First of all, the difference between B1 and J1,2 is immaterial and I imagine that B1 is simply a typo on the author's part. The fact is that you might as well always take $\leq^0$ to be the original order; furthermore, you can freely throw away finitely many of the orders and start the sequence $\leq^n$ further along without affecting anything. I'm not sure why Bartoszyński allows nontransitive orders. Perhaps he has some arguments where this is relevant but taking the transitive envelope isn't possible.
Regarding your final question, a name for a foo always means a name which is forced by 1 to be a foo. In particular an ordinal name $\dot{\alpha}$ is a name such that $1\Vdash\dot{\alpha}\in\mathrm{Ord}$. This convention is used all over the place.
As for the issue of B3 vs J4, they are equivalent. The idea behind this is that ordinal names are morally the same thing as maximal antichains. Starting with an ordinal name $\dot{\alpha}$ you can extract a maximal antichain of conditions which force $\dot{\alpha}$ to have a particular value. Conversely, given a maximal antichain $A$ you can take an enumeration $f$ of $A$ and then mix the (check names for the) values of $f$ over $A$ to get an ordinal name.
With this idea it is clear that B3 and J4 are equivalent. Starting from B3 and an ordinal name $\dot{\alpha}$, assign a maximal antichain $A$ to $\dot{\alpha}$ as above and apply B3. Your $B$ is now the values of $\dot{\alpha}$ supported on the countable subset given by B3. Conversely, starting from J4 and an antichain $A$, we can assume $A$ is maximal and assign to it an ordinal name $\dot{\alpha}$ as above. Apply J4 to it to get a countable set of values $B$. Your countable set of conditions now consists of those whose value in the enumeration you took when building $\dot{\alpha}$ lies in $B$. |
On the uniqueness of a simple continued fraction | It is unique in both directions; you can find the proof here. In short, by mathematical induction one shows that if the first $k$ partial quotients agree then the $(k+1)$-th must as well, since both $a_{k+1}$ and $b_{k+1}$ are the integer parts of the same number. In other words, since $a_{k+2}$ and $b_{k+2}$ are each at least $1$, the tails $[a_{k+1}, a_{k+2}, \ldots]$ and $[b_{k+1}, b_{k+2}, \ldots]$ satisfy $a_{k+1} \le [a_{k+1}, a_{k+2}, \ldots] < a_{k+1}+1$ and $b_{k+1} \le [b_{k+1}, b_{k+2}, \ldots] < b_{k+1}+1$, so they could never agree if $b_{k+1} \ne a_{k+1}$. The same argument, assuming $a_1, b_1 \ne 0$ (in your notation; in the link that would be $a_2$, $b_2$), also secures the initial step of the induction. |
How to solve/transform/simplify an equation by a simple algorithm? | This is generally known as "computer algebra," and there are entire books and courses on the subject. There's no single magic bullet. Generally it relies on things like specifying canonical forms for certain types of expressions and massaging them. Playing with the form, it seems to know how to simplify a rational expression, but not for instance that $\sin^2 x + \cos^2 x = 1$. |
Bounded Operator and p-norm (more difficult than it seems). | If I set $x_i=|T(e_i)|^{q-1}$ when $T(e_i)>0$ and $x_i=-|T(e_i)|^{q-1}$ when $T(e_i)<0$ then $x_iT(e_i)=|T(e_i)|^q$, and since $p(q-1)=q$ (from $\frac1p+\frac1q=1$) we get $\|\mathbf{x}\|_p=(\sum_{i=1}^{k}|T(e_i)|^q)^{\frac1p}$, hence $\frac{|T(\mathbf{x})|}{\|\mathbf{x}\|_p}=(\sum_{i=1}^{k}|T(e_i)|^q)^{\frac{1}{q}}$, for $\mathbf{x}=(x_1,...,x_k)$. |
Halting Problem as Incompleteness in Formal System | Yes. The "arithmetization" techniques developed by Gödel for the incompleteness theorem allow us to write just about any claim about Turing machines as a first-order statement in the language of arithmetic. Gödel showed it suffices to have the constants $0$, $1$, operations $+$, $\times$, the relation $=$, and quantification over the natural numbers.
In particular, "such-and-such Turing machine terminates on such-and-such input" can be encoded as an arithmetical sentence in a systematic way, and we can then search for a proof of either it or its negation in our favorite proof system for arithmetical statements. Since it is easy to search for proofs (just enumerate all possible symbol strings and check whether each of them is a valid proof of what you're looking for), such a search would give an algorithm for the halting problem unless some of those sentences are undecidable.
The central trick in the construction is the beta function which allows you to express quantification over arbitrarily long finite sequences of natural numbers by a finite number of quantifiers over single natural numbers. Once you can quantify over such sequences, expressing a Turing machine computation is as easy as
There exists sequences $(s_i)_{0\le i\le n}$, $(L_i)_{0\le i\le n}$, $(R_i)_{0\le i\le n}$, intuitively encoding the state of the Turing machine at step $i$ of the computation, and the content of the tape to the left and right of the read head at step $i$ such that for each $i<n$, the numbers $(s_i,L_i,R_i,s_{i+1},L_{i+1},R_{i+1})$ are in a certain relation to each other and $s_n$ is a halting state.
Representing tapes as numbers is easy enough; just use base-$b$ representation for a $b$ large enough to contain the tape alphabet. |
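As a toy sketch of the beta-function trick in R (the sequence, the 0-based indexing, and the brute-force search for $b$ are my own illustrative choices):

beta <- function(b, c, i) b %% (1 + (i + 1) * c)   # Goedel's beta function
a <- c(2, 0, 3)                                    # sequence to encode
cc <- factorial(max(length(a), max(a)))            # c = 3! = 6; moduli 7, 13, 19 are pairwise coprime
b <- 0
while (any(beta(b, cc, seq_along(a) - 1) != a)) b <- b + 1   # CRT guarantees a solution exists
b                        # 611
beta(b, cc, 0:2)         # 2 0 3 -- the sequence recovered from the single pair (b, c)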
Calculate distribution of mean and variance given Gaussian data points | The standard distribution theory for this model with $X_1, X_2, \dots, X_n$ a random sample from $\mathsf{Norm}(\mu, \sigma)$ is as follows:
$$\bar X \sim \mathsf{Norm}(\mu, \sigma/\sqrt{n}),$$
$$\frac{\sum_{i=1}^n(X_i - \mu)^2}{\sigma^2} \sim \mathsf{Chisq}(n),$$
$$\frac{(n-1)S^2}{\sigma^2} \sim \mathsf{Chisq}(n-1),$$
$$ T = \frac{\bar X - \mu}{S/\sqrt{n}} \sim \mathsf{T}(n-1),$$
where $\bar X = \frac 1 n \sum_{i=1}^n X_i,\,$ $E(\bar X) = \mu;\,$ $S^2 =
\frac{1}{n-1}\sum_{i=1}^n (X_i - \bar X)^2,\,$ $E(S^2) = \sigma^2.$ And finally, for normal data (only) $\bar X$ and
$S^2$ are stochastically independent random variables--even though not
functionally independent.
$\mathsf{Chisq}$ denotes a chi-squared distribution with the designated degrees of freedom, and $\mathsf{T}$ denotes Student's t distribution with the designated degrees of freedom. You can find formal definitions and density functions of these
distributions on the relevant Wikipedia pages.
The first displayed relationship is most often used when $\sigma$ is known
and $\mu$ is to be estimated by $\bar X.$
The second relationship is most often used when $\mu$ is known
and $\sigma^2$ is to be estimated by $\frac 1 n \sum_{i=1}^n(X_i - \mu)^2.$
These relationships are easily shown using standard probability formulas, moment generating functions, and
the definition of the chi-squared distribution.
The last two displayed relationships and the independence of $\bar X$ and $S^2$ are often used
when both $\mu$ and $\sigma$ are unknown. Then ordinarily, $\mu$ is estimated by
$\bar X,\,$ $\sigma^2$ by $S^2,\,$ and $\sigma$ by $S$ (even though $E(S) < \sigma).$ Proofs are more advanced and are
discussed in mathematical statistics texts.
For the special case $n = 5,\, \mu = 100,\, \sigma=10$ a simulation in R statistical software of 100,000 samples suggests (but of course does not prove) that $\bar X \sim \mathsf{Norm}(\mu, \frac{\sigma}{\sqrt{n}}),\,$
$Q = \frac{(n-1)S^2}{\sigma^2} \sim \mathsf{Chisq}(4)$ and that $\bar X$ and
$S$ are independent. The code below also illustrates $E(\bar X) = 100,\,$
$E(S) < 10.\,$ $E(S^2) = 100,$ and $r = 0,$ within the margin of simulation error (accuracy to two, maybe three significant digits).
set.seed(3218) # retain for exactly same simulation; delete for fresh run
m = 10^5; n = 5; mu = 100; sg = 10
MAT = matrix(rnorm(m*n, mu, sg), nrow=m) # m x n matrix: 10^5 samples of size 5
a = rowMeans(MAT) # m sample means (averages)
s = apply(MAT, 1, sd); q = (n-1)*s^2/sg^2 # m sample SD's and values of Q
mean(a)
## 100.0139 # aprx E(x-bar) = 100
mean(s); mean(s^2)
## 9.412638 # aprx E(S) < 10
## 100.3715 # aprx E(S^2) = 100
cor(a, s)
## -0.00194571 # approx r = 0
par(mfrow=c(1,3)) # enable 3 panels per plot
hist(a, prob=T, col="skyblue2", xlab="Sample Mean", main="Normal Dist'n of Sample Mean")
curve(dnorm(x, mu, sg/sqrt(n)), add=T, lwd=2, col="red")
hist(q, prob=T, col="skyblue2", ylim=c(0,.18), xlab="Q", main="CHISQ(4)")
curve(dchisq(x, n-1), add=T, lwd=2, col="red")
plot(a, s, pch=".", xlab="Sample Means", ylab="Sample SD", main="Illustrating Indep")
par(mfrow=c(1,1)) |
Describing a set of elements in a complex plane | Your answer to the first is correct and in fact clever.
By the Pythagorean theorem, $S_2$ consists of all $z$ such that the angle at $z$ in the triangle formed by $z,z_1,z_2$ is $\pi /2$. |
$a, b, c, d \in \mathbb{R}^+$, $\log_a(b) = 8/9, \log_b(c) = -3/4$ and $\log_c(d) = 2$, find the value of $\log_d(abc)$ | From the definition of logarithms, you have
$$\log_{a}(b) = 8/9 \implies b = a^{8/9} \tag{1}$$
$$\log_{b}(c) = -3/4 \implies c = b^{-3/4} \tag{2}$$
$$\log_{c}(d) = 2 \implies d = c^{2} \tag{3}$$
Now,
\begin{align}
\log_{d}(abc) = \log_{d}(a) + \log_{d}(b) + \log_{d}(c) \tag{4}
\end{align}
From (3), $$d = c^{2} \implies d^{1/2} = c \implies \log_{d}(c) = 1/2.$$ Similarly, from (2) and (3) we have $$d = c^{2} = (b^{-3/4})^{2} = b^{-3/2} \implies d^{-2/3} = b \implies \log_{d}(b) = -2/3.$$
Following the same analogy, from (1) and (2),
$$d = b^{-3/2} = (a^{8/9})^{-3/2} = a^{-4/3} \implies d^{-3/4} = a \implies \log_{d}(a) = -3/4.$$
I hope you can solve the rest. |
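A quick numeric sanity check in R of the three values just derived (the base $a=2$ is an arbitrary choice; any $a>0$, $a\ne1$ works; cc stands for $c$ to avoid masking R's c()):

a <- 2
b <- a^(8/9); cc <- b^(-3/4); d <- cc^2
log(a, base = d)    # -0.75      = -3/4
log(b, base = d)    # -0.6666... = -2/3
log(cc, base = d)   #  0.5       =  1/2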
Uniform convergence, differentiability theorem's converse | This is not technically a counterexample, since it does not satisfy the second condition, but I believe it gives the right idea of why it will not work (I am not sure).
Consider the sequence
$$ f_{n}(x) = \frac{1}{n} \sin(n^{2}x). $$
This converges uniformly to $f(x) = 0$, but clearly the derivative is not going to converge uniformly to zero. Of course, it will not converge at all; the point, however, is that even if the change over an interval is bounded, the rate of change can be unbounded if the size of the interval shrinks fast enough.
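A numeric illustration of that point in R (the window $[0,1]$ and $n=50$ are arbitrary choices):

n <- 50
x <- seq(0, 1, length.out = 1e5)
max(abs(sin(n^2 * x) / n))   # about 1/n = 0.02: uniform convergence to 0
max(abs(n * cos(n^2 * x)))   # exactly n = 50: the derivatives blow up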
So probably a counterexample could be constructed by replacing the $n^{2}x$ with something more complicated. I think perhaps that $nx^{-1}$ might work (basically the topologist's sine function but with the scaling factor). |
Showing a sequence of functions converges uniformly on any bounded interval | Hint 1: Decompose the problem into two easier questions:
$$
g_n(x)-f(x)=f_n\left(x+\frac{1}{n}\right)-f\left(x+\frac{1}{n}\right)+f\left(x+\frac{1}{n}\right)-f(x).
$$
Hint 2: Continuous implies uniformly continuous on compact intervals (Heine-Cantor).
Details: For all $x\in\mathbb{R}$
$$
\lvert f_n\left(x+\frac{1}{n}\right)-f\left(x+\frac{1}{n}\right)\rvert\leq \|f_n-f\|_\infty.
$$
So the left hand side term of the sum above converges uniformly to $0$ on $\mathbb{R}$, a fortiori on $[a,b]$.
Now $f$ is continuous on $[a,b+1]$, so it is uniformly continuous on this compact interval. This means that for every $\epsilon>0$, there exists $\delta>0$ such that $|f(y)-f(x)|\leq \epsilon$ for all $x,y$ in $[a,b+1]$ such that $|x-y|\leq\delta$. Now take $N$ large enough to have $\frac{1}{N}\leq \delta$. Then for all $n\geq N$ and all $x\in [a,b]$, we have $|x+\frac{1}{n}-x|=\frac{1}{n}\leq\frac{1}{N}\leq\delta$ and $a\leq x<x+\frac{1}{n}\leq b+1$. So
$$
\lvert f\left(x+\frac{1}{n}\right)-f(x) \rvert\leq \epsilon.
$$
This proves that the right hand side of the sum above converges uniformly to $0$ on $[a,b]$.
So $g_n-f$, which is the sum of these two terms, converges uniformly to $0$ on $[a,b]$. |
What is the difference between homomorphism and isomorphism? | Isomorphisms capture "equality" between objects in the sense of the structure you are considering. For example, $2 \mathbb{Z} \ \cong \mathbb{Z}$ as groups, meaning we could re-label the elements in the former and get exactly the latter.
This is not true for homomorphisms--homomorphisms can lose information about the object, whereas isomorphisms always preserve all of the information. For example, the map $\mathbb{Z} \rightarrow \mathbb{Z}/ 2\mathbb{Z}$ given by $z \mapsto z \text{ mod 2}$ loses a ton of information but is still a homomorphism.
Alternatively, isomorphisms are invertible homomorphisms (again emphasizing the preservation of information -- you can revert the map and go back). |
Functions and Relations - Help! | $f+g$ will be defined as $(f+g)(x)=f(x)+g(x)$ for all $x\in D_1\cap D_2$. |
A problem on limit superior | First, any point at distance more than $\sqrt2$ from the origin is in $A_n$ for no $n$ hence it is not in $A=\limsup A_n$ either.
Next, consider a point $(x,y)=(r\cos\alpha,r\sin\alpha)$ at distance less than $\sqrt2$ from the origin, thus $r\lt\sqrt2$. Let $C_\beta$ denote the square $[-1,1]^2$ rotated by the angle $\beta$. Then each $C_\beta$ contains the rays starting at $(0,0)$ of length $\sqrt2$ in the directions $\beta\pm\pi/4$ hence $(x,y)$ is in the interior of $C_{\alpha+\pi/4}$ and, by continuity, $(x,y)$ is in the interior of $C_\beta$ for every $\beta$ close enough to $\alpha+\pi/4$, say every $\beta$ in some interval $(\beta_1,\beta_2)\subset(0,2\pi)$ with positive length.
Since $\theta$ is irrational, there exists infinitely many integers $n$ such that the fractional part of $n\theta$ is in the interval $(\beta_1/2\pi,\beta_2/2\pi)\subset(0,1)$. For each such $n$, $C_{2\pi n\theta}=C_\beta$ for some $\beta$ in the interval $(\beta_1,\beta_2)$ hence $(x,y)$ is in $C_{2\pi n\theta}$. This happens for infinitely many $n$ hence $(x,y)$ is in $A$.
Finally, if $(x,y)$ is at distance exactly $\sqrt2$ of the origin, say $(x,y)=(\sqrt2\cos\alpha,\sqrt2\sin\alpha)$, then $(x,y)$ is in $C_{2\pi n\theta}$ if and only if $2\pi n\theta=\alpha+\pi/4\pmod{\pi/2}$. This happens for at most one value of $n$ hence $(x,y)$ is not in $A$.
This proves that $A$ is the open disk centered at zero with radius $\sqrt2$. |
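A numeric illustration of the second paragraph in R (a sketch; the irrational $\theta$ and the test point are arbitrary choices):

theta <- sqrt(2)                          # an irrational rotation parameter
inC <- function(x, y, beta) {             # is (x,y) in [-1,1]^2 rotated by beta?
  u <-  cos(beta) * x + sin(beta) * y     # rotate the point by -beta instead
  v <- -sin(beta) * x + cos(beta) * y
  max(abs(u), abs(v)) <= 1
}
r <- 1.3; alpha <- 0.7                    # any point with r < sqrt(2)
x <- r * cos(alpha); y <- r * sin(alpha)
sum(sapply(1:10000, function(n) inC(x, y, 2 * pi * n * theta)))   # hits keep accumulating as N grows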
Something like an incomplete gamma function | Making the change of variables $t=-u$, we have
$$ \int_0^z t^{-b}e^t \,dt = -\int_{0}^{-z} (-u)^{-b}e^{-u} \,du=(-1)^{-b+1}\int_{0}^{-z} u^{-b}e^{-u} \,du=(-1)^{1-b}\gamma(1-b,-z), $$
where $\gamma(s,x)$ is the lower incomplete gamma function. |
Is $f\left(x\right)=nx^{n}\left(1-x\right)$ uniformly convergent on $x\in\left[0,\:1\right]$ | Hint You can find easily the maximum of $f_n$ (which is positive so equal to $\vert f_n\vert$) on the interval $[0,1]$ and check if $\sup_{0\le x\le 1} f_n(x)$ converges to $0$ when $n\to\infty$ or not. |
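Following the hint (the critical point comes from $f_n'(x)=nx^{n-1}(n-(n+1)x)=0$, i.e. $x=n/(n+1)$), a short R check that the sup does not go to $0$:

f <- function(x, n) n * x^n * (1 - x)
n <- 10^(2:5)
f(n / (n + 1), n)   # approaches exp(-1) = 0.3678794, not 0: no uniform convergence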
Shared eigenvectors between $A$ and $A^k$ | No. Consider the matrix
$$N := \begin{pmatrix}0 & 1\\0 & 0\end{pmatrix}:$$
Its only eigenvalue is $0$, and the corresponding eigenspace $L$ is spanned by the first basis vector,
$$\begin{pmatrix}1 \\ 0\end{pmatrix}.$$
On the other hand, $N^2$ is the zero matrix, and so every vector in $\mathbb{R}^2$ is an eigenvector of $N^2$ (of eigenvalue zero), and in particular every vector in $\mathbb{R}^2 - L$ is an eigenvector of $N^2$ but not $N$.
On the other hand, all of the eigenvectors of $N$ are generalized eigenvectors of $N^2$. Edit: This property does not generally hold for counterexamples, however, as avid19's instructive example shows. |
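A quick R verification of the example (purely illustrative):

N <- matrix(c(0, 0, 1, 0), 2, 2)   # column-major, so the rows are (0 1) and (0 0)
N %*% N                            # the zero matrix
N %*% c(0, 1)                      # (1, 0): so (0,1) is an eigenvector of N^2 but not of N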
Why normal equation can have infinite solutions? | A solution $\beta$ always exists because the image of $X^T$ is equal to the image of $X^TX$. Now for any $x\in\ker X$ you have that $\beta+x$ is a solution. So whenever $X^TX$ is not invertible there are infinitely many solutions. |
Prove a closed set is a countable union of compact sets, in $\mathbb{R}^n$. | Consider $\bar{B}(0,n)$, the closed ball with center $0$ and radius $n$. If $C$ is a closed set, consider
$$
C_n=C\cap\bar{B}(0,n)
$$
Then $C_n$ is a closed subset of $\bar{B}(0,n)$, which is compact, hence $C_n$ is compact. Obviously
$$
C=\bigcup_{\substack{n\in\mathbb{N}\\n>0}}C_n
$$ |
Basis of null space notation | It’s there in the definition of $W$: its defining equation says that it is the null space of $a^t$, which is a $1\times4$ matrix. |
What is the value of $a+b+c+d$? | Consider $(iii)-(ii)$, we have
$$(a+c)(d-b) = (ad-bc) - (ab-cd) = 110 - 58 = 52$$
Since $10 \le a, c \le 20 \implies 20 \le a+c \le 40$ and the only divisor
of $52$ between $20$ and $40$ is $26$, we find
$$a+c = 26, d - b = 2$$
Consider $(ii)+(iii)$, we have
$$(a-c)(b+d) = (ab-cd) + (ad - bc) = 58+110 = 168$$
Since $10 \le b, d \le 20 \implies 20 \le b+d \le 40$ and the divisors of
$168$ between $20$ and $40$ are $21, 24, 28$, we have 3 possibilities.
$$
\begin{cases}
b+d = 21,a-c = 8\\
b+d = 24,a-c = 7\\
b+d = 28,a-c = 6
\end{cases}$$
Notice
$$\begin{cases}
a+c = 26 &\implies a-c = 26 - 2c \equiv 0\pmod 2\\
d-b = 2 &\implies b+d = 2 + 2b \equiv 0 \pmod 2
\end{cases}
$$ This rules out the first and second cases. This leaves us with one and
only one possibility:
$$b+d = 28\quad\implies\quad a+b+c+d = 26 + 28 = 54$$ |
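A quick back-substitution check in R (the values follow from $a+c=26$, $a-c=6$, $b+d=28$, $d-b=2$):

a <- 16; b <- 13; c <- 10; d <- 15
a * b - c * d    # 58, matches (ii)
a * d - b * c    # 110, matches (iii)
a + b + c + d    # 54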
Let $x$ be an integer and $n$ be a positive integer. Find the smallest $n$ such that $x^4+n^2$ is not a prime for any $x$. | Hint: $n$ is not very big. When you find an $n$ that doesn't seem to give any primes, try factoring the polynomial $x^4 + n^2$. |
Count the number of restricted multigraphs | You may want to look at the so-called configuration model. In particular if you fix the degree sequence of nodes $\mathbf{k} = (k_1,\ldots,k_n)$ the number of loopy multigraphs satisfying the given degree sequence is
$$
\binom{2m}{k_1,\ldots,k_n}
$$
where this is intended as the multinomial coefficient. |
Is this assignment of the topos of sheaves functorial? | It is much easier to see what is going on if you forget about sites – after all, when $\mathcal{C}$ has a subcanonical topology, the category sheaves on $\mathcal{C}_{/ X}$ is equivalent to the category of sheaves on $\mathcal{C}$ sliced over $h_X$. Thus your question is really about the assignment $A \mapsto \mathcal{E}_{/ A}$, where $\mathcal{E}$ is a Grothendieck topos and $A$ varies in $\mathcal{E}$.
Clearly, $A \mapsto \mathcal{E}_{/ A}$ can be made into a strict covariant functor: given $A \to B$, postcomposition defines a functor $\mathcal{E}_{/ A} \to \mathcal{E}_{/ B}$. On the other hand, it can also be made into a contravariant pseudofunctor: pullback along $A \to B$ defines a functor $\mathcal{E}_{/ B} \to \mathcal{E}_{/ A}$. |
Using natural deduction to prove that $p \implies q \vdash \lnot p \lor q$ | I assume that you are trying to prove : $p \to q \vdash \lnot p \lor q$.
1) $p \to q$ --- premise
2) $\lnot (\lnot p \lor q)$ --- assumed [a]
3) $\lnot p$ --- assumed [b]
4) $\lnot p \lor q$ --- from 3) by $\lor$-intro
5) $\bot$ --- from 2) and 4)
6) $p$ --- from 3) and 5) by $\lnot$-elim, discharging [b]
7) $q$ --- from 1) and 6) by $\to$-elim
8) $\lnot p \lor q$ --- from 7) by $\lor$-intro
9) $\bot$ --- from 2) and 8)
10) $\lnot p \lor q$ --- from 2) and 9) by $\lnot$-elim, discharging [a]. |
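For the record, a machine-checked version of the same sequent, as a sketch in Lean 4 (it uses Classical.em in place of the explicit reductio steps above):

-- p → q ⊢ ¬p ∨ q, by cases on the excluded middle for p
example (p q : Prop) (h : p → q) : ¬p ∨ q :=
  (Classical.em p).elim
    (fun hp  => Or.inr (h hp))   -- if p holds, h gives q
    (fun hnp => Or.inl hnp)      -- otherwise ¬p directly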
Concrete Mathematics 2.10 - 2.13 Cannot find part of the solution | The problem is that the summation factor $s_n$ isn’t defined for $n=0$ and $n=1$. Thus, you can go back only as far as $S_2$:
$$\begin{align*}
S_n&=S_2+\sum_{k=3}^ns_kc_k\\
&=s_2a_2C_2+\sum_{k=3}^ns_kc_k\\
&=\frac13\cdot2\cdot3+\sum_{k=3}^n\frac{4k}{k(k+1)}\\
&=2+4\sum_{k=3}^n\frac1{k+1}\;,
\end{align*}$$
and
$$\begin{align*}
C_n&=\frac{S_n}{s_na_n}\\
&=\frac{n+1}2\left(2+4\sum_{k=3}^n\frac1{k+1}\right)\\
&=(n+1)\left(1+2\sum_{k=1}^n\frac1{k+1}-2\sum_{k=1}^2\frac1{k+1}\right)\\
&=(n+1)\left(1+2\sum_{k=1}^n\frac1{k+1}-\frac53\right)\\
&=(n+1)\left(2\sum_{k=1}^n\frac1{k+1}-\frac23\right)\\
&=2(n+1)\sum_{k=1}^n\frac1{k+1}-\frac23(n+1)\;,
\end{align*}$$
for $n\ge 3$. Substituting $n=2$, we see that the formula still works, but it fails for $n=0$ and $n=1$. |
Express $\frac{9x}{(2x+1)^2(1-x)}$ as a sum of partial fractions with constant numerators. Answer doesn't match with solution provided in book. | You have to have
$$9x=A(2x+1)\color{red}{(1-x)}+B(1-x)+C(2x+1)^2.$$ |
Chromatic number of a hypercube | Yes, you're right: All hypercube graphs are bipartite, and "bipartite" means exactly "has chromatic number $\le 2$". |
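A small illustrative sketch in R for $Q_3$ (dimension chosen arbitrarily): colour each vertex by the parity of its bit count; adjacent vertices differ in exactly one bit, so their colours differ.

popcount <- function(v) sum(as.integer(intToBits(v)))   # number of 1-bits
colours <- sapply(0:7, popcount) %% 2                   # parity colouring of Q_3
colours   # 0 1 1 0 1 0 0 1 -- neighbours always get opposite colours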
Factorization of parabolic subgroups. | The proof just uses the root subgroup structure of $G$ and the fact that up to conjugation every parabolic is a standard parabolic, so we need only prove this when $P = P_I$ for some set $I$ of simple roots.
If $R$ is the root system and $I \subseteq \Delta$ is some subset of simple roots then let $R_I = R \cap \mathbb ZI$. Let $T$ be the torus and for any root $\alpha \in R$ let $U_\alpha$ be the root subgroup corresponding to $\alpha$. Now $P_I$ is generated by $T$ and root subgroups $U_\alpha$ where $\alpha \in R \setminus R_I^-$. $U_I$ is generated by the root subgroups $U_\alpha$ where $\alpha \in R_I^+$ and $L_I$ is generated by $U_\alpha$ where $\alpha \in R \setminus R_I$.
You need to show that $U_I$ is the unipotent radical of $P_I$ and that $L_I$ is reductive, but once you have that you just observe that $L_IU_I$ is generated by $T$ and root subgroups $U_\alpha$ where $\alpha$ is in
$$R_I^+ \cup (R \setminus R_I) = R \setminus R_I^-.$$
But that's what generates $P_I$ so $L_IU_I = P_I$. |
Setting the limit while finding convolution | You do indeed have a solution for every value of $t$; $y(t)$ is this:
$$y(t)=\begin{cases}
0 &\text{if}\; t\le 1\\
{1\over 2}-e^{-(t-1)}+{1\over 2} e^{2(t-1)} &\text{if}\;1<t\le 2\\
e^{-(t-2)}-{1\over 2} e^{2(t-2)}-\left(e^{-(t-1)}-{1\over 2} e^{2(t-1)}\right) &\text{if}\;2<t
\end{cases}$$
The solution comes straight from this integral
$$y(t)=\int_0^t q(t-\tau)r(\tau)\mathrm d\tau\;\;\text{with}\;r(t)=\begin{cases}
0 &\text{if}\; t\le 1\\
1 &\text{if}\;1<t\le 2\\
0 &\text{if}\;2<t
\end{cases}$$
To calculate the integral you have to split it into two or three parts (when $1<t\le 2$ and $2<t$, respectively) because the function $r$ is defined piecewise. The word "handling" refers to how to manage the integral limits in the convolution when the function is defined piecewise. The solution $y$ at some $t$ comes from integrating from $0$ to $t$, so you compute the pieces separately, adding to each piece the contribution of the previous ones (you add zero to the second piece, and zero plus the integral from $1$ to $2$ to the third).
$$y(t)=\begin{cases}
\int_0^t q(t-\tau)·0\mathrm d\tau &\text{if}\; t\le 1\\
\int_0^1 q(t-\tau)·0\mathrm d\tau+\int_1^t q(t-\tau)·1\mathrm d\tau&\text{if}\;1<t\le 2\\
\int_0^1 q(t-\tau)·0\mathrm d\tau+\int_1^2 q(t-\tau)·1\mathrm d\tau+\int_2^t q(t-\tau)·0\mathrm d\tau&\text{if}\;2<t
\end{cases}$$
You can see the integrals are all from $0$ to $t$, but separated by intervals. Finally,
$$y(t)=\begin{cases}
0 &\text{if}\; t\le 1\\
0+\int_1^t q(t-\tau)·1\mathrm d\tau&\text{if}\;1<t\le 2\\
0+\int_1^2 q(t-\tau)·1\mathrm d\tau+0&\text{if}\;2<t
\end{cases}$$ |
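As a numeric cross-check in R (a sketch only: the kernel $q(s)=e^{-s}+e^{2s}$ is inferred from the closed-form pieces above, not stated explicitly):

q <- function(s) exp(-s) + exp(2 * s)   # inferred from the solution pieces
y_num <- function(t) {                  # the convolution, restricted to where r = 1
  if (t <= 1) return(0)
  integrate(function(tau) q(t - tau), 1, min(t, 2))$value
}
y_num(1.5)                              # numeric convolution value
0.5 - exp(-0.5) + 0.5 * exp(1)          # closed form for 1 < t <= 2: the same number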