title | upvoted_answer
---|---|
One-sided limit epsilon delta proof | Let $x \gt 3$:
$|\dfrac{x+1}{x+2}-4/5| =$
$|\dfrac{5x+5-4x-8}{5(x+2)}|= \dfrac{x-3}{5(x+2)} \lt$
$\dfrac{x-3}{5 \cdot 3}=\dfrac{x-3}{15}.$
Let $\epsilon>0$ be given.
Choose $\delta =15 \epsilon.$
Then $3 <x<3+\delta$ $(0<x-3<\delta)$ implies
$|\dfrac{x+1}{x+2}-4/5| \lt $
$\dfrac{x-3}{15} <\delta/15=\epsilon$. |
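The choice $\delta=15\epsilon$ can be sanity-checked numerically (an illustration only; random sampling proves nothing by itself):

```python
import random
random.seed(0)

def f(x):
    return (x + 1) / (x + 2)

# for several epsilons, sample x in (3, 3 + delta) and check |f(x) - 4/5| < eps
for eps in (0.5, 0.1, 0.001):
    delta = 15 * eps
    for _ in range(1000):
        x = 3 + random.uniform(0, delta)
        assert abs(f(x) - 4 / 5) < eps
```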
Petals for points near the origin | Let $P(z)=z^{p+1} - z$ where $p$ is a positive integer. How many petals does $P$ have at the origin?
In the parabolic case the critical orbit looks like an $n$-armed star.
The arms tend to the attracting directions near the fixed point, and the number of attracting petals equals $n$.
So I have checked critical orbits using this Maxima CAS script. Here are examples for p=2 and p=13
My results are listed below:
p = 1 ( degree = 2) there are 2 petals
p = 2 ( degree = 3) there are 2 petals
p = 3 ( degree = 4) there are 6 petals
p = 4 ( degree = 5) there are 4 petals
p = 5 ( degree = 6) there are 10 petals
p = 6 ( degree = 7) there are 6 petals
p = 7 ( degree = 8) there are 14 petals
p = 8 ( degree = 9) there are 8 petals
p = 9 ( degree = 10) there are 18 petals
p = 10 ( degree = 11) there are 10 petals
p = 13 ( degree = 14) there are 26 petals
so:
for p odd there are 2*p petals
for p even there are p petals |
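These counts can be cross-checked symbolically. Since $P'(0)=-1$, the origin is parabolic with multiplier $-1$, so (by the standard Leau–Fatou petal count; this framing is mine, not the original answer's) the petals of $P$ are those of the second iterate $P\circ P(z)=z+cz^{k+1}+\cdots$, and their number is $k$. A sketch using truncated power-series arithmetic:

```python
def pmul(a, b, N):
    """Multiply two truncated polynomials (coefficient lists, index = degree)."""
    c = [0] * (N + 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if bj and i + j <= N:
                    c[i + j] += ai * bj
    return c

def petals(p, N=40):
    """Petal count of P(z) = z^(p+1) - z at 0, read off P(P(z)) = z + c z^(k+1) + ..."""
    P = [0] * (N + 1)
    P[1] = -1
    P[p + 1] += 1
    # compute P(P(z)) = P(z)^(p+1) - P(z), truncated at degree N
    power = [1] + [0] * N
    for _ in range(p + 1):
        power = pmul(power, P, N)
    PP = [power[d] - P[d] for d in range(N + 1)]
    PP[1] -= 1                      # subtract z; lowest surviving degree is k+1
    k_plus_1 = next(d for d in range(N + 1) if PP[d] != 0)
    return k_plus_1 - 1

print([petals(p) for p in range(1, 7)])   # [2, 2, 6, 4, 10, 6]
```

This reproduces the table, including $p=13\mapsto26$, matching the conjecture $2p$ for odd $p$ and $p$ for even $p$.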
Find an Eulerian path or circuit in a graph given its adjacency matrix | I'll make my comment an answer/hint, if only to reduce the unanswered queue by $\epsilon$.
Hint: From the adjacency matrix, you can see that the graph is $3$-regular. In particular, every vertex has odd degree, so there are more than $2$ vertices of odd degree. |
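To make the hint concrete (the matrix below is a stand-in, $K_4$, since the original adjacency matrix is not reproduced here; any $3$-regular graph behaves the same way):

```python
# Stand-in adjacency matrix: K4, a 3-regular graph (the original matrix is not shown).
A = [
    [0, 1, 1, 1],
    [1, 0, 1, 1],
    [1, 1, 0, 1],
    [1, 1, 1, 0],
]

degrees = [sum(row) for row in A]
odd = sum(1 for d in degrees if d % 2 == 1)

# A connected graph has an Eulerian circuit iff 0 vertices have odd degree,
# and an Eulerian path iff exactly 2 do.  In a 3-regular graph *every* degree
# is odd, and a graph always has an even number of odd-degree vertices.
print(degrees, odd)   # [3, 3, 3, 3] 4
assert odd not in (0, 2)   # neither an Eulerian circuit nor an Eulerian path
```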
I do not know how to prove this. An undirected graph $G$ with $n$ vertices is connected if that graph has more than $\frac{(n-1)(n-2)}{2}$ edges. | Let $G(V,E)$ be a graph with $|V|=n$ vertices and $|E|=m$ edges. We need to show that if $m>(n-1)(n-2)/2$, then $G$ is connected. It's sufficient to show the contrapositive, i.e., if $G$ is disconnected, then $m\leq (n-1)(n-2)/2$.
First note that a disconnected graph with the most edges is the union of a complete graph on $n-1$ vertices and an isolated vertex: if $G$ is disconnected, its vertex set splits into two nonempty parts of sizes $a$ and $n-a$ with no edges between them, giving at most $\binom{a}{2}+\binom{n-a}{2}$ edges, which is maximized at $a=1$ (or $a=n-1$). Hence a disconnected graph has at most $\binom{n-1}{2}=(n-1)(n-2)/2$ edges. QED. |
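Both directions can be checked exhaustively for small $n$ (a brute-force sketch, not part of the proof):

```python
from itertools import combinations

def is_connected(n, edges):
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    seen, stack = {0}, [0]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in seen:
                seen.add(w); stack.append(w)
    return len(seen) == n

for n in range(2, 7):
    bound = (n - 1) * (n - 2) // 2
    all_edges = list(combinations(range(n), 2))
    # every graph with more than (n-1)(n-2)/2 edges is connected ...
    for m in range(bound + 1, len(all_edges) + 1):
        assert all(is_connected(n, e) for e in combinations(all_edges, m))
    # ... and the bound is sharp: K_{n-1} plus an isolated vertex is disconnected
    kn1 = list(combinations(range(n - 1), 2))
    assert len(kn1) == bound and not is_connected(n, kn1)
print("verified for n = 2..6")
```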
In a four hand draw without replacement, what is the probability that no subsequent ranks are the same? | Well, the first card we draw does not depend on any previous draw, so we just assign it a $\frac{52}{52}$ probability. The second draw cannot be the same rank as the previous, and we still have $51-3=48$ cards left in the deck that do not have the previous rank, so we assign our second draw a $\frac{48}{51}$ probability.
Now the difficult part:
Suppose our first three draws were $K,Q,K$ (for example). The probability of getting the same rank on our third draw as on our first is $\frac{3}{50}$. Now, for the fourth draw, there are only $49-2=47$ cards left in the deck that are not Kings, so we assign our fourth draw a probability of $\frac{47}{49}$.
Now suppose our first three draws were $K,Q,J$ (for example). The probability of our third draw matching neither of the first two ranks is $\frac{44}{50}$ (since there are still $50-3-3=44$ cards left in the deck that are not Queens or Kings). For the final draw, there are $49-3=46$ cards still left in our deck that are not Jacks, so we assign this probability to be $\frac{46}{49}$.
If we combine these two cases, our ultimate probability becomes:
$\frac{52}{52}\cdot \frac{48}{51}\cdot \frac{3}{50}\cdot\frac{47}{49}+\frac{52}{52}\cdot \frac{48}{51}\cdot \frac{44}{50}\cdot\frac{46}{49}=\frac{3464}{4165}\approx83.17\%$ |
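The case analysis can be confirmed by exact enumeration over rank sequences (this verification script is my addition, not part of the original answer):

```python
from fractions import Fraction

def p_no_repeat(draws=4, ranks=13, per_rank=4):
    """P(no two *consecutive* draws share a rank), drawing without replacement."""
    def go(remaining, prev, counts, total):
        if remaining == 0:
            return Fraction(1)
        acc = Fraction(0)
        for r in range(ranks):
            if r != prev and counts[r] > 0:
                counts[r] -= 1
                acc += Fraction(counts[r] + 1, total) * go(remaining - 1, r, counts, total - 1)
                counts[r] += 1
        return acc
    return go(draws, None, [per_rank] * ranks, ranks * per_rank)

p = p_no_repeat()
print(p, float(p))   # 3464/4165, about 0.8317
```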
Why $\lim_{R\to\infty}\int_{0}^{\pi}\sin(R^{2}e^{2i\theta})iRe^{i\theta}\:\mathrm{d}\theta = -\sqrt{\frac{\pi}{2}}$ | Let $u=Re^{i\theta}$, so that $du=iRe^{i\theta}\,d\theta$. Thus
$$\lim_{R\to \infty}\int_0^{\pi}\sin(R^2e^{i2\theta})iRe^{i\theta}\,d\theta=-\int_{-\infty}^{\infty}\sin(u^2)\,du=-\sqrt{\frac{\pi}{2}}$$
where we used the well-known result for the Fresnel integral.
NOTE:
To evaluate the Fresnel integral, let's analyze the following complex integral.
$$\begin{align}
\oint e^{iz^2}\,dz&=\int_0^Re^{ix^2}\,dx+\int_0^{\pi/4}e^{iR^2e^{i2\phi}}iRe^{i\phi}\,d\phi+\int_{R/\sqrt2}^0e^{i(1+i)^2t^2}(1+i)\,dt\\\\
&=\int_0^Re^{ix^2}\,dx+\int_0^{\pi/4}e^{iR^2e^{i2\phi}}iRe^{i\phi}\,d\phi+(1+i)\int_{R/\sqrt2}^0e^{-2t^2}\,dt
\end{align}$$
Now, as $R\to \infty$, the first integral becomes
$$\lim_{R\to \infty}\int_0^Re^{ix^2}\,dx=\int_0^{\infty}\cos (x^2)\, dx+i\int_0^{\infty}\sin (x^2)\, dx$$
the second integral goes to zero since
$$\begin{align}
\left|\int_0^{\pi/4}e^{iR^2e^{i2\phi}}iRe^{i\phi}d\phi\right|&\le R\int_0^{\pi/4}e^{-R^2\sin 2\phi}\,d\phi\\\\
&\le R\int_0^{\pi/4}e^{-4R^2\phi/\pi}\,d\phi\\\\
&=\frac{\pi}{4}\frac{1-e^{-R^2}}{R}\to 0
\end{align}$$
and the third integral becomes the Gaussian Integral
$$(1+i)\int_{\infty}^0e^{-2t^2}\,dt=-(1+i)\sqrt{\frac{\pi}{8}}$$
Since the contour integral is zero, we have
$$\bbox[5px,border:2px solid #C0A000]{\int_0^{\infty}\cos (x^2)\, dx=\int_0^{\infty}\sin (x^2)\, dx=\sqrt{\frac{\pi}{8}}}$$ |
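Because $e^{iz^2}$ is entire, the three pieces of the sector contour must sum to zero for every finite $R$, which can be checked numerically (a sketch using composite Simpson quadrature; $R=4$ and the step counts are arbitrary choices, and the diagonal piece is parametrized as $z=(1+i)t$ with $t$ running from $R/\sqrt2$ down to $0$ so that it meets the arc at $Re^{i\pi/4}$):

```python
import cmath, math

def simpson(f, a, b, n=20000):
    """Composite Simpson rule for a complex-valued f on [a, b]; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b) \
        + 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1)) \
        + 2 * sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3

R = 4.0
# piece 1: along the real axis, z = x, x in [0, R]
I1 = simpson(lambda x: cmath.exp(1j * x * x), 0.0, R)
# piece 2: the arc z = R e^{i phi}, phi in [0, pi/4]
I2 = simpson(lambda p: cmath.exp(1j * (R * cmath.exp(1j * p)) ** 2)
             * 1j * R * cmath.exp(1j * p), 0.0, math.pi / 4)
# piece 3: back along the ray z = (1+i) t, t from R/sqrt(2) down to 0
I3 = simpson(lambda t: cmath.exp(1j * ((1 + 1j) * t) ** 2) * (1 + 1j),
             R / math.sqrt(2), 0.0)

assert abs(I1 + I2 + I3) < 1e-8          # Cauchy: the closed contour integral is 0
# piece 1 is already within the arc bound pi/(4R) of the limit (1+i)*sqrt(pi/8)
target = (1 + 1j) * math.sqrt(math.pi / 8)
assert abs(I1 - target) < 0.25
```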
What is interesting about $\sqrt{\log_x{\exp{\sqrt{\log_x{\exp{\sqrt{\log_x{\exp{\cdots}}}}}}}}}=\log_x{e}$? | Elaborating on what Jack said, assume we have an $n$th root instead of a square root:
$$y = \sqrt[n]{\log_x{\exp{\sqrt[n]{\log_x{\exp{\sqrt[n]{\log_x{\exp{\cdots}}}}}}}}}$$
Then
$$y = \sqrt[n]{\log_x{\exp\left(y\right)}}$$
$$y = \sqrt[n]{y\log_x{e}}$$
$$y^n = y\log_x{e}$$
$$y^{n-1} = \log_xe$$
Obviously, with $n = 2$, $n-1 = 1$, meaning $y$ itself equals $\log_xe$.
This can be expanded upon though. |
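Numerically, the $n=2$ case can be watched converging: the infinite nesting is the limit of iterating $y \mapsto \sqrt{\log_x(\exp y)} = \sqrt{y\log_x e}$ (the base $x=10$ below is an arbitrary choice):

```python
import math

x = 10.0                      # arbitrary base > 1
c = 1 / math.log(x)           # log_x(e)

# iterate y <- sqrt(log_x(exp(y))) = sqrt(y * log_x(e))
y = 1.0
for _ in range(80):
    y = math.sqrt(y * c)

print(y, c)
assert abs(y - c) < 1e-12     # the nested radical converges to log_x(e)
```

Near the fixed point the map contracts errors by a factor of about $2$ per step, so $80$ iterations reach machine precision.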
Converting the coordinates of a point to cylindrical coordinates with positive values. | Your $r$ is correct.
Your $\theta$ should be in the third quadrant, with $\tan\theta = 1/2$.
Thus it is $$\theta = \arctan (1/2) + \pi $$ |
Show that $\operatorname{Lip}_\alpha $ is not an homogeneous Banach space | I will use $a$ instead of $\alpha$. Let $T$ be the interval $[0,2\pi]$.
Let $f$ be a function of the shape
$f(t)=t^a$ for $t\in[0,1]$,
$f$ is continuous and smooth on $(0,2)$ with $f'(2)=0$,
$f$ is zero on $[2,2\pi]$.
We identify $f:T\to \Bbb R$ (with $f(0)=f(2\pi)$) with a $2\pi$-periodic function on $\Bbb R$.
The quantity inside the supremum defining $r(f)$ is bounded, because (we may and do assume $h>0$, and then) the function
$$
\begin{aligned}
T^2&\to\Bbb R\ ,\\
(t,h)&\to\frac 1{h^a}(f(t+h)-f(t))&&\text{ for }t\in T\ ,\ h\in(0,2\pi]\ ,\\
(t,0)&\to\lim_{h\searrow0}
\frac 1{h^a}(f(t+h)-f(t))=\delta_0(t)&&\text{ for }t\in T\ ,
\end{aligned}
$$
is bounded. (It is continuous on the compact set obtained from $T^2$ by removing a "small open (quarter)disc" with center in $(0,0)$. Then the control around $(0,0)$ comes from $(t+h)^a-t^a\le h^a$. This holds since for $t=0$ or $h=0$ this is clear, and else by $a$-homogeneity it is enough to show $(1+h)^a\le 1+h^a$, i.e. $1+h\le 1+\frac 1ah^a\le (1+h^a)^{1/a}$.)
So $f$ is an element in the given Banach space. Let now $\tau>0$ be the number used for the shift of $f$. The shifted function $f_\tau$ is zero in a neighborhood of $0$, so $f-f_\tau=f$ in this small neighborhood. So for the special $t=0$ we have
$$
\begin{aligned}
r(f_\tau-f)
&\ge
\sup_{0<h<\min(\tau,1)}\frac{|(f_\tau-f)(0+h)-(f_\tau-f)(0)|}{h^a}
\\
&=
\sup_{0<h<\min(\tau,1)}\frac{|f(0+h)-f(0)|}{h^a}
\\
&=
\sup_{0<h<\min(\tau,1)}\frac{h^a}{h^a}
\\
&=
1\ .
\end{aligned}
$$ |
Sum of Lucas Numbers | Define a recurrence $a_{n+2}=a_{n+1}+a_n$.
Exercise 1: Prove that $a_n$ and $b_n$, both sequences satisfying the recurrence, are linearly independent if and only if their initial condition vectors $(a_0,a_1)$ and $(b_0,b_1)$ are linearly independent.
Two sequences are wholly determined by initial conditions. If $\vec{a}=\lambda \vec{b}$ then $a_n=\lambda b_n$ by induction, and the converse holds trivially. Logically, the equivalence of two statements is equivalent to the equivalence of their negations: dependence is the negation of independence.
Exercise 2: Let $\alpha=(1+\sqrt{5})/2$ and $\beta=(1-\sqrt{5})/2$ be the two roots of the characteristic polynomial $x^2-x-1=0$. Show that $a_n=\alpha^n$ and $b_n=\beta^n$ are linearly independent solutions.
Multiply $\alpha^2-\alpha-1=0$ by $\alpha^n$ to get $\alpha^{n+2}=\alpha^{n+1}+\alpha^n$; the same applies to $\beta$. This shows they are both solutions. The vectors $(1,\alpha)$ and $(1,\beta)$ are linearly independent because $\alpha\ne\beta$.
E1 proves that the vector space of solutions to the recurrence is two dimensional. E2 provides an explicit basis for this vector space. Hence $L_n=u \alpha^n+v\beta^n$ for some $u,v$. Write the conditions as
$$\begin{pmatrix}L_0 \\ L_1 \end{pmatrix}
= \begin{pmatrix}1 & 1 \\ \alpha & \beta \end{pmatrix}
\begin{pmatrix} u \\ v\end{pmatrix}
=\begin{pmatrix} 2 \\ 1\end{pmatrix}. $$
Solving this linear system gives $u=v=1$, hence $L_n=\alpha^n+\beta^n$. You may use this in addition to the geometric sum formula to compute $\sum L_k$, and the characteristic polynomial to rearrange further. |
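A quick numeric check of the closed form, together with one standard consequence of the geometric-sum computation alluded to above, $\sum_{k=0}^n L_k = L_{n+2}-1$ (stating this explicit identity is my addition):

```python
import math

alpha = (1 + math.sqrt(5)) / 2
beta = (1 - math.sqrt(5)) / 2

# Lucas numbers from the recurrence L_{n+2} = L_{n+1} + L_n, L_0 = 2, L_1 = 1
L = [2, 1]
for n in range(2, 30):
    L.append(L[-1] + L[-2])

# they match the closed form L_n = alpha^n + beta^n ...
assert all(round(alpha ** n + beta ** n) == L[n] for n in range(30))

# ... and the geometric-sum rearrangement gives sum_{k=0}^n L_k = L_{n+2} - 1
assert all(sum(L[: n + 1]) == L[n + 2] - 1 for n in range(28))
```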
Mutual Uniqueness of Operations in PA models | The following closely related results may interest you.
$1.$ The isomorphism type of the multiplicative semigroup of a model of PA determines, up to isomorphism, the additive semigroup of the model. This is not surprising, since one can define a relation $R(y,x)$ that behaves like $y=2^x$.
$2.$ For countable models of PA, the isomorphism type of the additive semigroup determines the isomorphism type of the multiplicative semigroup.
The above two quite old results are due to Ehrenfeucht.
$3.$ For uncountable models of PA, the isomorphism type of the additive semigroup does not determine the isomorphism type of the multiplicative semigroup.
Information can be found in this paper by Kossak, Nadel, and Schmerl, which unfortunately does not seem to be freely available. |
Short expression $\frac{x^2-y^2}{x(x-y)}+\frac{x^2-y^2}{x(x+y)}$ | Don't know how you got from $\frac{x^2-y^2}{x^2-yx}$ to $\frac{y^2}{xy}$,
but fractions just don't work that way. To add those, you need a common denominator. Here you could use $x(x-y)(x+y)$, but it's a lot faster to note that $x^2-y^2=(x+y)(x-y)$ and then simplify both fractions so that they get common denominator $x$. |
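Carrying the hint through explicitly: with $x^2-y^2=(x+y)(x-y)$,
$$\frac{x^2-y^2}{x(x-y)}+\frac{x^2-y^2}{x(x+y)}=\frac{x+y}{x}+\frac{x-y}{x}=\frac{2x}{x}=2,$$
valid provided $x\neq0$ and $x\neq\pm y$.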
Differentiation involving sigma notation | If $n=0$ then the term $\frac{nx^{n-1}}{(n+1)!}$ already vanishes so you don't need to include it.
So you have $$\frac{d}{dx}\sum_{n=0}^{\infty}\frac{x^n}{(n+1)!}=\frac{d}{dx}(1+\frac{x}{2!}+\frac{x^2}{3!}+....+\frac{x^n}{(n+1)!}+....)$$
$$=\frac{1}{2!}+\frac{2x}{3!}+....+\frac{nx^{n-1}}{(n+1)!}+...=\sum_{n=0}^{\infty}\frac{nx^{n-1}}{(n+1)!}=\sum_{n=1}^{\infty}\frac{nx^{n-1}}{(n+1)!}$$ |
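As a sanity check, the series is the Maclaurin series of $(e^x-1)/x$ (a standard closed form, not stated above), so the term-by-term derivative should agree with the derivative of that closed form:

```python
import math

def series_derivative(x, terms=60):
    # sum_{n>=1} n x^(n-1) / (n+1)!
    return sum(n * x ** (n - 1) / math.factorial(n + 1) for n in range(1, terms))

def closed_form_derivative(x):
    # d/dx (e^x - 1)/x = (x e^x - e^x + 1) / x^2
    return (x * math.exp(x) - math.exp(x) + 1) / x ** 2

for x in (0.3, 0.7, 1.5):
    assert abs(series_derivative(x) - closed_form_derivative(x)) < 1e-12
```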
Is continuous function space with standard inner product on $\big[0,\frac{1}{2}\big]$ not complete? | It's not complete. Take the step function $f$ with $f(x)=0$ for $x \le 1/4$ and $f(x)=1$ for $x > 1/4$. You can show that $f$ can be approximated arbitrarily well in the $L^2$ norm by elements of $C[0,\frac12]$ (replace the jump by steeper and steeper continuous ramps; these form a Cauchy sequence), but $f$ is not equal almost everywhere to any function in $C[0,\frac12]$. The completion of $C[0,\frac12]$ in this norm is the space called $L^2\big[0,\frac12\big]$. |
If $f_n$ are measurable and $f_n\to f$ almost everywhere, then $f$ is measurable (values on a topological space) | Let us first show that if $f_n \to f$ everywhere, then $f$ is also measurable, if $Y$ is a metric space. The proof is more or less literally taken from Serge Lang's "Real and Functional Analysis". A different (shorter) proof is given in Dudley's book "Real Analysis and Probability", Theorem 4.2.2.
First, let $U \subset Y$ be open. For each $x \in f^{-1}(U)$, we have $f_n (x) \to f(x) \in U$ and thus $f_n (x) \in U$, i.e. $x \in f_n^{-1}(U)$ for all sufficiently large $n \geq n_x$. This proves
$$
f^{-1}(U) \subset \bigcap_{m=1}^\infty \bigcup_{k=m}^\infty f_k^{-1}(U). \qquad (\dagger)
$$
Now, let $A \subset Y$ be closed. If
$$
x \in \bigcap_{m=1}^\infty \bigcup_{k=m}^\infty f_k^{-1}(A),
$$
then for each $m$ arbitrarily large, there is some $k \geq m$ with $f_k (x) \in A$. This implies that there is some subsequence $(f_{k_\ell})$ with $f_{k_\ell} (x) \in A$ for all $\ell$. But $f_{k_\ell} (x) \to f(x)$. Since $A$ is closed, this yields $f(x) \in A$, i.e. $x \in f^{-1}(A)$. Hence,
$$
\bigcap_{m=1}^\infty \bigcup_{k=m}^\infty f_k^{-1}(A) \subset f^{-1}(A). \qquad (\ddagger)
$$
Now, let $U \subset Y$ be open and define
\begin{eqnarray*}
A_n &:=& \{y \in Y \mid {\rm dist}(y, U^c) \geq 1/n\},\\
U_n &:=& \{y \in Y \mid {\rm dist}(y, U^c) > 1/n\}.
\end{eqnarray*}
Since the ${\rm dist}$ function is continuous, we see that $U_n$ is open and $A_n$ is closed with $U_n \subset A_n \subset U$ and $\bigcup_n U_n = \bigcup_n A_n = U$. Here, the equality to $U$ uses the fact that $U$ is open.
Using $(\ddagger)$, we see
$$
f^{-1}(U) = \bigcup_n f^{-1}(U) \supset \bigcup_n \bigcap_{m=1}^\infty \bigcup_{k=m}^\infty f_k^{-1}(A_n) \supset \bigcup_n \bigcap_{m=1}^\infty \bigcup_{k=m}^\infty f_k^{-1}(U_n).
$$
Conversely, $(\dagger)$ implies
$$
f^{-1}(U) = \bigcup_n f^{-1}(U_n) \subset \bigcup_n \bigcap_m \bigcup_{k=m}^\infty f_k^{-1}(U_n).
$$
All in all, we get equality. Since the right-hand side is a measurable set, $f^{-1}(U)$ is measurable. This shows that $f$ is measurable.
Now, if $X$ is a complete measure space and if $f = g$ a.e. with $g$ measurable, then $f$ is also measurable. To see this, let $N \subset X$ be of measure zero with $f = g$ on $N^c$. Now, if $U \subset Y$ is open, then
$$
f^{-1}(U) = [g^{-1}(U) \cap N^c] \cup [f^{-1}(U) \cap N],
$$
where $[g^{-1}(U) \cap N^c]$ is measurable because $g$ is and $[f^{-1}(U) \cap N]$ is measurable as a subset of a null-set, because the measure space is assumed complete.
Thus, if the convergence is only true almost everywhere, i.e. on $N^c$, where $N \subset X$ is of measure zero, then $g_n := f_n \cdot \chi_{N^c}$ is measurable with $g_n \to f \cdot \chi_{N^c}$ pointwise. Hence, $f \cdot \chi_{N^c}$ is measurable. But $f = f \cdot \chi_{N^c}$ on $N^c$, i.e. almost everywhere. Hence, $f$ is measurable.
Finally, we show that the statement is false in general if $Y$ is not a metric space. This counterexample is taken from Dudley's book, Proposition 4.2.3.
We take $Y = I^I$, where $I =[0,1]$ is the unit interval. We equip $I$ with the usual product topology and we define
$$
f_n : I \to I^I, x\mapsto (y\mapsto \max \{0, 1- n|x-y|\}).
$$
Since $I$ is first countable, $f_n$ is continuous if we can show that $x_k \to x \in I$ implies $f_n(x_k) \to f_n(x)$. But this simply means $f_n(x_k)(y) \to f_n (x)(y)$ for all $y \in I$, which is easy to see.
Now define
$$
f : I \to I^I, x \mapsto (y \mapsto \chi_\Delta (x,y)),
$$
where $\Delta = \{(x,x) \in I\times I \mid x\in I\}$.
It is easy to see $f_n (x)(y) \to f(x)(y)$ for all $x,y\in I$. But this means $f_n(x) \to f(x)$ for all $x \in I$.
Now, let $E \subset I$ be an arbitrary subset (potentially nonmeasurable) and let
$$
W := \{g \in I^I \mid \exists y\in E : g(y) > 1/2 \} = \bigcup_{y \in E} \{g \in I^I \mid g(y)>1/2\}.
$$
Then $W \subset I^I$ is open, but $f^{-1}(W) = E$. If we take $E$ to be nonmeasurable (w.r.t. the Lebesgue $\sigma$-algebra), this shows that $f$ is not Lebesgue-measurable, although all $f_n$ are continuous and hence measurable. |
Finding a point d distance away from another point only given a slope | If $m$ is the given slope, you also have $y_2-y_1=m(x_2-x_1)$, so $$d=\sqrt{1+m^2}|x_2-x_1|\\x_2=x_1\pm \frac d{\sqrt{1+m^2}}$$ |
Is $f(x) = \sum_{n\geq 1} \frac{\cos n x }{\sqrt{n}}$ monotonic on $(0,0.1)$? | Using a Riemann sum, $t=nx$ and $\mathrm{d}t=x$. As $x\to0$,
$$
\begin{align}
f(x)
&=\sum_{n=1}^\infty\frac{\cos(nx)}{\sqrt{n}}\\
&=\frac1{\sqrt{x}}\sum_{n=1}^\infty\frac{\cos(nx)}{\sqrt{nx}}x\\
&\sim\frac1{\sqrt{x}}\int_0^\infty\frac{\cos(t)}{\sqrt{t}}\,\mathrm{d}t\\
&=\sqrt{\frac\pi{2x}}
\end{align}
$$
Using Riemann-Stieltjes integration:
$$
\begin{align}
f(x)
&=\sum_{n=1}^\infty\frac{\cos(nx)}{\sqrt{n}}\\
&=\int_{0^+}^\infty\frac{\cos(tx)}{\sqrt{t}}\,\mathrm{d}(t-\{t\})\\
&=\int_{0^+}^\infty\frac{\cos(tx)}{\sqrt{t}}\,\mathrm{d}t
-\int_{0^+}^\infty\frac{\cos(tx)}{\sqrt{t}}\,\mathrm{d}\{t\}\\
&=\frac1{\sqrt{x}}\int_0^\infty\frac{\cos(t)}{\sqrt{t}}\,\mathrm{d}t
-\int_0^\infty\frac1{\sqrt{t}}\,\mathrm{d}\{t\}+O\!\left(x^2\right)\\
&=\sqrt{\frac\pi{2x}}+\zeta\!\left(\tfrac12\right)+O\!\left(x^2\right)
\end{align}
$$
If we continue in this fashion, we get
$$
\sum_{n=1}^\infty\frac{\cos(nx)}{\sqrt{n}}=\sqrt{\frac\pi{2x}}+\zeta\!\left(\tfrac12\right)-\frac{\zeta\!\left(-\tfrac32\right)}2x^2+\frac{\zeta\!\left(-\tfrac72\right)}{24}x^4+O\!\left(x^6\right)
$$ |
Rationalizing complex numbers when the denominator has a (+ 1) added to it | If $Z,W\in\mathbb{C}$ (and $Z\neq-1$) and if you wish to express $\dfrac{Z+2W+3}{Z+1}$ as a quotient whose denominator is a real number, the natural option consists in multiplying both numerator and denominator with $\overline{Z+1}$, thereby getting $$\frac{Z+2W+3}{Z+1}=\frac{(Z+2W+3)\left(\overline{Z+1}\right)}{(Z+1)\left(\overline{Z+1}\right)}=\frac{(Z+2W+3)\left(\overline Z+1\right)}{\lvert Z+1\rvert^2}.$$ |
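A quick numeric spot-check with Python's built-in complex type (the sample values for $Z$ and $W$ are arbitrary):

```python
Z, W = 2 + 3j, 1 - 1j   # arbitrary sample values

lhs = (Z + 2 * W + 3) / (Z + 1)
rhs = (Z + 2 * W + 3) * (Z + 1).conjugate() / abs(Z + 1) ** 2
assert abs(lhs - rhs) < 1e-12

# the rewritten denominator (Z+1) * conj(Z+1) = |Z+1|^2 is real:
assert abs(((Z + 1) * (Z + 1).conjugate()).imag) < 1e-12
```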
Let T be an infinite set and let A(T) be the group of permutations of T. | You can view each permutation of the set $T$ as a bijective function $f : T \to T$. If you're familiar with the notion of a symmetry group, this is much the same concept.
The symmetry group $S_n$ is essentially the set of permutations of the set $\{1,2,3,...,n\}$. We can envision these as mapping the set $\{1,...,n\}$ to itself, where $k \mapsto f(k)$ in the sense that $k$ goes to the position $f(k)$ in the permutation.
Explicitly, consider $S_4$, the set of permutations of $\{1,...,4\}$. Then we can define the permutation $4123$ by a function $f$ doing the mappings
$$1 \mapsto 2, 2 \mapsto 3, 3 \mapsto 4, 4 \mapsto 1$$
because $1$ is in the second position, $2$ in the third, and so on, in the permutation. So instead of saying $(4123) \in S_4$ (to use the cycle notation for permutations), we can say $f \in S_4$ (where $f$ is defined by the above mapping).
You're dealing with an infinite set, $T$, not a finite set of integers, but the idea seems pretty similar. You can permute the elements of $T$ and define the permutation through some function. For example, you could define a permutation on $\Bbb Z$ by $f(n) = n+1$ for all integers $n$.
In this case, then, the notion that $f(t)=t$ means $t$ is "fixed" by the permutation: it ends up in the same position as where it started. For example, a permutation on $\Bbb N$ could be defined by sending every element to itself, i.e. $f(t)=t$. Or you could define a permutation on $\Bbb R$ by $f(t)=t/2$, in which case the only $t$ such that $f(t)=t$ is $t=0$.
Similarly, $f(t) \ne t$ means each input is not sent to its original location (whenever the notion of "location" makes sense for $T$). If you're familiar with the notion of a derangement, the notion is similar. |
Feedback on Euclidean Algorithm: $gcd(277, 301)$ | GCD(277,301):
$301 - (277 \times 1) = 24$
$277 - (24 \times 11) = 13$
$24 - (13 \times 1) = 11$
$13 - (11 \times 1) = 2$
$11 - (2 \times 5) = 1$
$2 - (1 \times 2) = 0$
Thus the result is $\mbox{GCD}(277, 301) = 1$.
Expressed differently, we have:
Divisors of $277: 1, 277~$ (that is, a prime)
Divisors of $301: 1, 7, 43, 301$
What is the greatest common divisor between the two? Answer $ = 1$. |
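The same remainder steps, mechanized (a small sketch of the Euclidean algorithm):

```python
def gcd(a, b):
    """Euclidean algorithm by repeated remainders, as in the steps above."""
    while b:
        a, b = b, a % b
    return a

steps = []
a, b = 301, 277
while b:
    steps.append((a, b, a % b))
    a, b = b, a % b
print(steps)   # [(301, 277, 24), (277, 24, 13), (24, 13, 11), (13, 11, 2), (11, 2, 1), (2, 1, 0)]
assert gcd(277, 301) == 1
```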
Solving the following ODE. | Starting from your simplification:
$$\frac{dx}{dt}+\frac{dy}{dt}=k_1(a-x-y)x+k_2(a-x-y)y$$
$$\frac{dx}{dt}+\frac{dy}{dt}\;=\left[k_1x(a-x-y)+k_2y(a-x-y)\right]$$
$$\int\left(dx+dy\right)=\int\bigg(ak_1x+ak_2y-k_1x^2-k_2yx-k_1xy-k_2y^2\bigg)dt$$
$$x+y=\int\bigg(ak_1x+ak_2y-k_1x^2-k_2yx-k_1xy-k_2y^2\bigg)dt$$
We can't integrate $x$ and $y$ with respect to $t$ like this, because they are unknown functions of $t$. |
Is this a valid identity: $\binom{n}r = \sum_{j=0}^k\binom{n-k}{r-j} \binom{k}j$? | The correct identity is
$$\binom{n}{r} = \sum_{j=0}^r \binom{n-k}{r-j} \binom{k}{j}$$
That is, the upper limit of the sum should be $r$, not $k$.
Here's a combinatorial argument. First note that $\binom{n}{r}$ is the number of $r$-element subsets of an $n$-element set. Suppose your $n$-element set contains $k$ objects of one type (say, cats) and $n-k$ elements of another (say, dogs). In any given $r$-element subset, there will be exactly $j$ cats and $r-j$ dogs for some $0 \le j \le r$; these choices of $j$ partition the set of $r$-element subsets, and there are $\binom{n-k}{r-j} \binom{k}{j}$ subsets containing exactly $r-j$ of the $n-k$ dogs and exactly $j$ of the $k$ cats. Thus we arrive at the above identity.
As stated, your identity doesn't type-check when $r<k$: the terms with $j>r$ would contain binomial coefficients $\binom{n-k}{r-j}$ with negative lower argument. |
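The corrected identity is easy to verify exhaustively for small parameters; conveniently, Python's `math.comb(n, k)` returns $0$ for $k>n$, so out-of-range terms vanish on their own:

```python
from math import comb

# Vandermonde-style identity: C(n, r) = sum_j C(n-k, r-j) * C(k, j)
for n in range(12):
    for k in range(n + 1):
        for r in range(n + 1):
            assert comb(n, r) == sum(comb(n - k, r - j) * comb(k, j)
                                     for j in range(r + 1))
```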
One parent is a cystic fibrosis carrier, and the other has no cystic fibrosis gene | Each parent has two copies of the chromosomes and we are given that only one parent has only one bad copy.
The child receives one random chromosome from each parent. Therefore it cannot receive more than one bad copy.
Assuming that cystic fibrosis is recessive (i.e. one has symptoms only if both copies are "bad"), the answer to (a) must be $0$ and the answer to (d) must be $1$. Since (b) is correct, it follows that the answer to (c) is also $\frac12$, since (b) and (c) must add up to $1$ (we've already seen that "not have cystic fibrosis" is not a restriction).
However, if cystic fibrosis were dominant (i.e. one has symptoms as soon as one of the copies is "bad"), the answer to (a) would be $0.5$ (just as for (b)) and hence the answer to (d) would be the remaining $0.5$. Since "no symptoms" is the same as "no carrier", again the answer to (c) is $0.5$. |
video lectures on topological K-theory? | I lately found these lectures on Topological K-Theory, which I have started watching myself. I am not sure how well they match with Karoubi, but the first half seems to introduce the basic concepts of K-theory. The second half is more focused on applications to D-branes (it's more of a physics thing and maybe less interesting for you?). They don't have video recordings, but a good set of handwritten notes and audio recordings. |
Name of meta-properties | I'd call such properties "logical". In the precise sense, that they are defined by reference to the properties of a (formal) language in an arbitrary context.
As you observe, a property whose definition involves quantifying separately over formulae and subsets of the universe is neither first- nor second-order. Classification into orders is not really helpful here. The property would find a home somewhere in the Russell-Whitehead ramified theory of types, but that doesn't really add much to your understanding of what it means or how it behaves.
Of course in a sufficiently rich context (a model of ZFC, for example) you can treat both the language, and sets of elements, as first-order concepts: this is more or less what Goedel does in the incompleteness theorems. So there's really no need to get hung up about order or type unless you're thinking philosophically about the foundations of mathematics.
For actual mathematical purposes calling "definability" a logical property of a set (in a way that "being a subset of the direct product of the set of all elephants and the set of all angels" isn't), and "being a set-builder" a logical property of a formula (unlike "rhyming"), is clear enough. |
Length of curve in metric space | Take a partition $Q=\{y_0,\dots, y_n\}$ such that $\Sigma(Q)>L(\gamma)-\epsilon$.
By the uniform continuity of $\gamma$, there exists $\delta>0$ such that $d(\gamma(t),\gamma(s))<\epsilon/n$ whenever $|t-s|<\delta$
Let $P=\{x_0,\dots,x_m\}$ be any partition with $\|P\|<\delta$. For each $y_j$ there exists $x_{k(j)}$ such that $|x_{k(j)}-y_j|<\delta$.
Use uniform continuity to estimate $\Sigma(\{x_{k(j)}\colon j=0,\dots,n\})$ from below.
There are some things to tidy up here, but this being homework, I'll leave the rest to you. |
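To see the quantities involved concretely: for the unit circle $\gamma(t)=(\cos t,\sin t)$ on $[0,2\pi]$, the partition sums $\Sigma(P)$ increase under refinement toward $L(\gamma)=2\pi$ (a numeric illustration only; it is not part of the requested proof):

```python
import math

def sigma(n):
    """Polygonal length of the unit circle over the uniform partition with n pieces."""
    pts = [(math.cos(2 * math.pi * k / n), math.sin(2 * math.pi * k / n))
           for k in range(n + 1)]
    return sum(math.dist(pts[k], pts[k + 1]) for k in range(n))

sums = [sigma(n) for n in (4, 8, 16, 32, 1000)]
assert all(a < b for a, b in zip(sums, sums[1:]))   # refining increases the sum
assert abs(sums[-1] - 2 * math.pi) < 1e-4           # and it approaches L = 2*pi
```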
Using Wilson's theorem for finding divisibility | $12! = 9! \times 10 \times 11 \times 12 \equiv -1 $mod $13$.
Also $12! \equiv (-1)(-2)(-3)\times 9!$ mod $13$,
so $9! \times 6 \equiv 1$ mod $13$
Thus $9! \equiv 11$ mod $13$ as the inverse of $6$ in $\frac{\mathbb{Z}}{13\mathbb{Z}}$ is $11$ |
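The congruences are easy to confirm directly:

```python
from math import factorial

assert factorial(12) % 13 == 13 - 1     # Wilson: 12! ≡ -1 (mod 13)
assert (factorial(9) * 6) % 13 == 1     # 9! * 6 ≡ 1 (mod 13)
assert factorial(9) % 13 == 11          # hence 9! ≡ 11 ...
assert (6 * 11) % 13 == 1               # ... the inverse of 6 mod 13
```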
Finding the last $17$ digits of $1707 \uparrow \uparrow 1783$. | For reference the number $1707 \uparrow \uparrow 1783$ uses Knuth's up-arrow notation for tetration. For every $n\in\Bbb{N}$, the number $1707\uparrow\uparrow n$ is the $n$-th term in the sequence $(s_n)_{n\in\Bbb{N}}$ defined by $s_0=1$ and $s_{n+1}=1707^{s_n}$.
To determine the remainder of $s_{1783}=1707 \uparrow \uparrow 1783$ after division by $10^{17}$ we repeatedly use Euler's theorem (applicable since $\gcd(1707,10^{17})=1$), which tells us that if $s_{1782}\equiv t_1\pmod{\varphi(10^{17})}$ then
$$s_{1783}=1707^{s_{1782}}\equiv1707^{t_1}\pmod{10^{17}}.$$
Repeating the same argument we see that, if $s_{1783-k}\equiv t_k\pmod{\varphi^k(10^{17})}$ then
$$s_{1784-k}=1707^{s_{1783-k}}\equiv1707^{t_k}\pmod{\varphi^{k-1}(10^{17})}.$$
Of course for positive integers $a$ and $b$ we have $\varphi(2^a5^b)=2^{a+1}5^{b-1}$,
so
$$\varphi^k(10^{17})=\begin{cases}
2^{17+k}5^{17-k}&\text{ if }k\leq17,\\
2^{51-k}&\text{ if }17\leq k\leq51,\\
1&\text{ if }k\geq51\end{cases},$$
and hence we may take $t_{50}=1$, as $\varphi^{50}(10^{17})=2$ and $s_{1733}\equiv1\pmod{2}$. As $\varphi^{49}(10^{17})=4$ it follows that
$$s_{1734}=1707^{s_{1733}}\equiv1707^1\equiv3\pmod{4},$$
and so we may take $t_{49}=3$. Repeating then yields
\begin{eqnarray*}
t_{48}=s_{1735}&=&1707^{s_{1734}}&\equiv&1707^3&\equiv&3^3\equiv3&\pmod{8},\\
t_{47}=s_{1736}&=&1707^{s_{1735}}&\equiv&1707^3&\equiv&11^3\equiv3&\pmod{16},\\
\vdots\\
t_{17}=s_{1766}&=&1707^{s_{1765}}&\equiv&1707^{5418148179}&\equiv&14008082771&\pmod{2^{34}},
\end{eqnarray*}
and now repeat a few more times to compute $s_{1783}\pmod{10^{17}}$, the last $17$ digits. |
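The whole $\varphi$-chain can be automated. Since $\gcd(1707,10^{17})=1$ and every modulus in the chain has the form $2^a5^b$ (also coprime to $1707$), plain Euler's theorem applies at each level, so adding $\varphi(m)$ to the reduced exponent is harmless. A sketch (the helper names are mine):

```python
def phi(m):
    """Euler's totient by trial division (fast here: moduli are 2^a * 5^b)."""
    result, n, p = m, m, 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:
                n //= p
            result -= result // p
        p += 1
    if n > 1:
        result -= result // n
    return result

def tower_mod(a, h, m):
    """a ^^ h (tetration) mod m, assuming gcd(a, every modulus in the phi-chain) = 1."""
    if m == 1:
        return 0
    if h == 1:
        return a % m
    p = phi(m)
    e = tower_mod(a, h - 1, p)      # exponent reduced mod phi(m)
    return pow(a, e + p, m)         # +p keeps the exponent positive; a^p ≡ 1 (mod m)

# sanity check: 3^^3 = 3^27 = 7625597484987 ends in 987
assert tower_mod(3, 3, 1000) == 987
# the tower mod 10^17 stabilizes long before height 1783
assert tower_mod(1707, 60, 10**17) == tower_mod(1707, 1783, 10**17)
print(tower_mod(1707, 1783, 10**17))
```

The recursion terminates quickly because $\varphi^{51}(10^{17})=1$, so the depth is at most about $52$ regardless of the tower height.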
Proof of $(x^n)' = nx^{n-1}$ using the natural log and the chain rule | The entire premise is invalid for $x,y\leq 0$ because you take the logarithm. It is impossible to remedy this completely without removing the logarithm from at least part of the procedure.
You can, with some fiddling, fix it for negative $x,y$ by inserting negative signs in strategic places (and you might be expected to do this, or you might not), but for $x=y=0$ there is no salvation if you are to use logarithms. Here you must use other means.
So I wouldn't worry too much about fixing it. Mentioning that this proof has a limited validity will possibly give you some bonus marks, but that's probably the extent of it. |
$AB=I$ implies $BA=I$ in a $M_n(R)$ when $R$ is a semiring? | This is proven in Inversion of Matrices over a Commutative Semiring by Reutenauer and Straubing.
The proofs aren't especially short though (as you request in the comments).
The first two paragraphs give good context for their paper:
It is a well-known consequence of the elementary theory of vector spaces
that if $A$ and $B$ are $n$-by-$n$ matrices over a field (or even a skew field) such
that $AB = 1$, then $BA = 1$. This result remains true for matrices over a
commutative ring, however, it is not, in general, true for matrices over noncommutative
rings.
In this paper we show that if $A$ and $B$ are $n$-by-$n$ matrices over a
commutative semiring, then the equation $AB = 1$ implies $BA = 1$. We give
two proofs: one algebraic in nature, the other more combinatorial. Both
proofs use a generalization of the familiar product law for determinants over
a commutative semiring.
It's worth mentioning that "semi-ring" for them requires having a multiplicative identity, so they do prove the result you desire. |
Laplace Operator in $3D$ | Yes, this is correct and there is a quicker way if you are interested only in the radial part. Let $u(x)=f(|x|)$ be a radially symmetric function. Its gradient is $\nabla u(x)=f'(|x|)\frac{x}{|x|}$. Therefore, the flux of the gradient across the outward-oriented sphere $|x|=r$ is $4\pi r^2 f'(r)$.
Consider the spherical shell $r<|x|<r+h$. Net flux of $\nabla u$ out of this shell is
$$4\pi (r+h)^2 f'(r+h)-4\pi r^2 f'(r)$$
By the divergence theorem, this is equal to $\iiint_{r<|x|<r+h}\Delta u\,dV$. Dividing by $h$ and letting $h\to 0$ we obtain
$$
\iint_{|x|=r} \Delta u(x)\,dS = 4\pi (r^2 f'(r))'
$$
Finally, divide by the area $4\pi r^2$ to find the Laplacian itself:
$\Delta u(x) = r^{-2} (r^2 f'(r))' $ where $r=|x|$.
Similarly, in $n$ dimensions the radial part of $\Delta$ is
$$
r^{1-n} (r^{n-1} u_r)_r = u_{rr}+(n-1)r^{-1}u_r
$$ |
Prove that if $4^n-1$ is divisible by $5$, $n$ must be even | Your approach is fine in proving that if $n$ is even then $4^n-1$ is divisible by $5$. You have the base case, now you can assume $5|4^k-1$ with $k$ even and show $5|4^{k+2}-1$ because $5|4^{k+2}-4^k$. This is the converse of what you were asked to prove, but it is useful. You were really asked to prove that if $n$ is odd, $5$ does not divide $4^n-1$. Having shown it works for even $n$, you can then say that for odd $n$, $4^n-1=4\cdot 4^{n-1}-1=3\cdot4^{n-1}+(4^{n-1}-1)$ and the last term is divisible by $5$ while the other is not. |
Show that a system of linear equations has a unique solution | This is not always true.
We can rewrite $\lambda_j\times \lambda_h'=\lambda'_j\times \lambda_h$ as $\frac{\lambda_j}{\lambda_j'} = \frac{\lambda_h}{\lambda_h'}$. If we can show from these $K$ equations that
$$
\frac{\lambda_1}{\lambda_1'} = \frac{\lambda_2}{\lambda_2'} = \dots = \frac{\lambda_K}{\lambda_K'}
$$
then there is a constant $C$ such that $\lambda_i = C \lambda_i'$ for all $i$; from knowing that $$\lambda_1 + \dots + \lambda_K = \lambda_1'+ \dots + \lambda_K'= 1,$$ we deduce that $C=1$ and therefore $\lambda_i = \lambda_i'$ for all $i$.
However, we cannot necessarily conclude that all the ratios $\frac{\lambda_j}{\lambda_j'}$ are equal. This depends on the $K$ specific equations we chose. Let $G$ be the graph with vertex set $\{1,\dots,K\}$ and an edge $hj$ whenever we choose the pair $(h,j)$ to form an equation. The requirement to have a unique solution is that $G$ must be connected. If so, for any $a,b \in \{1,\dots,K\}$ there is a path from $a$ to $b$ in $G$, and we get $\frac{\lambda_a}{\lambda_a'} = \dots = \frac{\lambda_b}{\lambda_b'}$ by transitivity along that path.
But here is an example (for $K=5$) without a unique solution. Choose the $5$ equations
\begin{align}
\lambda_1 \lambda_2' &= \lambda_1'\lambda_2 \\
\lambda_1 \lambda_3' &= \lambda_1'\lambda_3 \\
\lambda_1 \lambda_5' &= \lambda_1'\lambda_5 \\
\lambda_2 \lambda_5' &= \lambda_2'\lambda_5 \\
\lambda_3 \lambda_5' &= \lambda_3'\lambda_5
\end{align}
Together, these equations are equivalent to $\frac{\lambda_1}{\lambda_1'} = \frac{\lambda_2}{\lambda_2'} = \frac{\lambda_3}{\lambda_3'} = \frac{\lambda_5}{\lambda_5'}$, but they leave out $\lambda_4$ entirely. So all $5$-tuples $\lambda$ with
$$
(\lambda_1, \lambda_2, \lambda_3, \lambda_4, \lambda_5) = (A \lambda_1', A\lambda_2', A\lambda_3', B\lambda_4', A\lambda_5')
$$
satisfy these $5$ equations, and if $A(\lambda_1'+\lambda_2'+\lambda_3'+\lambda_5') + B \lambda_4'= 1$, then $\lambda_1 + \dots + \lambda_5 = 1$ also holds. For any $0 < A < \frac{1}{\lambda_1'+\lambda_2'+\lambda_3'+\lambda_5'}$, we can set $B = \frac{1 - A(\lambda_1'+\lambda_2'+\lambda_3'+\lambda_5')}{\lambda_4'}$ and get a valid solution this way.
This is essentially the only kind of counterexample, though: cases where some variables are left out entirely. If every variable appears in at least one equation, then:
For every $i = 2,\dots,K-1$, either $\lambda_1 \lambda_i' = \lambda_1'\lambda_i$ or $\lambda_i\lambda_K' = \lambda_i'\lambda_K$ is an equation, forcing $\frac{\lambda_i}{\lambda_i'}$ to be equal to either $\frac{\lambda_1}{\lambda_1'}$ or $\frac{\lambda_K}{\lambda_K'}$.
To get $K$ equations, we need to either include both such equations for some $i$, or else include the equation $\lambda_1 \lambda_K' = \lambda_1'\lambda_K$. In either case, we can conclude that $\frac{\lambda_1}{\lambda_1'} = \frac{\lambda_K}{\lambda_K'}$. Therefore all $\frac{\lambda_i}{\lambda_i'}$ are equal. |
Derivative of vector $Q=(x_i-\mu-Lf_i)^t\Psi^{-1}(x_i-\mu-Lf_i)$ | Let's use the double-dot product as a convenient infix notation for the trace, i.e.
$$\,\,A:B={\rm tr}(A^TB)$$
Let's also define two new variables
$$\eqalign{
M &= \Psi^{-1} \cr
v &= (Lf_i + \mu - x_i) \implies dv = L\,df_i \cr
}$$
Then we can write the function, differential, and gradient as
$$\eqalign{
Q &= M:vv^T \cr
dQ &= M:(dv\,v^T+v\,dv^T) = (M+M^T)v:dv \cr
&= 2Mv:L\,df_i = 2L^TMv:df_i \cr
\frac{\partial Q}{\partial f_i}
&= 2L^TMv \,\,=\, 2L^T\Psi^{-1}(Lf_i + \mu - x_i) \cr
}$$
Set the gradient to zero and solve for the minimizer
$$\eqalign{
L^T\Psi^{-1}(Lf_i) &= L^T\Psi^{-1}(x_i-\mu) \cr
f_i &= (L^T\Psi^{-1}L)^{-1}L^T\Psi^{-1}(x_i-\mu) \cr
}$$ |
O'Neil's problem 2.10 on Tensor Derivations. Literature on the subject. | Your argument to show that $D^1_0$ is $\mathcal F(M)$-linear if and only if $D_0^0=0$ using the product rule is completely correct. Now consider the properties of $D^1_0$ in this case: it is an $\mathcal F(M)$-linear bundle map $TM \to TM$, and thus $$B : (X,\omega) \mapsto \omega(D^1_0 X)$$ is an $\mathcal F(M)$-linear map $TM \otimes TM^* \to \mathcal F(M)$; i.e. a $(1,1)$-tensor. (Depending on your definition of $(1,1)$ tensor this may require the additional step of showing that $\mathcal F(M)$-linear maps on tensor products of the tangent bundle correspond to smooth tensor fields; but I assume you're aware of this correspondence.) |
Conditional expectation of random sums | To prove that, you need to prove two things: $\mathbb E [S_W|W]$ is such that
It is $\sigma(W)$-measurable: it is quite easy to show that $\sum_{i=0}^W \mathbb{E}[X_i\mid W]$ is a measurable function of $W$.
For any $\sigma(W)$-measurable random variable $Y$, $\mathbb E[\mathbb E[S_W|W] Y] = \mathbb E[S_W Y]$.
Rewriting the last part, you need to show that
$$\mathbb{E} \left[\sum_{i=0}^W \mathbb E[X_i|W] Y\right]=\mathbb{E} \left[\sum_{i=0}^W X_i Y\right]$$
This can sometimes be done as
\begin{align*}
\mathbb{E} \left[\sum_{i=0}^W \mathbb E[X_i|W] Y\right]&=\mathbb{E} \left[\sum_{i=0}^W \mathbb E[X_i Y|W]\right]\\
&=\mathbb{E} \left[\mathbb E\left[\sum_{i=0}^W X_i Y\middle|W\right]\right]\\
&=\mathbb{E} \left[\sum_{i=0}^W X_i Y\right]\\
\end{align*}
The first line uses the fact that $Y$ is $\sigma(W)$-measurable (so it can be taken inside the conditional expectation); the last line is just the tower property of conditional expectation. You can find all these properties here
For the second line, it is not always true for the infinite case and the answer is not fully known but a sufficient condition for it to hold is that $\sum_{i=0}^\infty \mathbb{E}\left[|X_i|\middle|W=\infty\right]$ converges. So if $\mathbb P (W=\infty)\neq 0$, but you can prove the above convergence, or if $\mathbb P(W=\infty)=0$, then you are done. Otherwise there would be a bit more work for proving the equality. |
Uniqueness of inverse of linear maps | I'll show that for a right-inverse to be unique, the kernel of $\phi$ must vanish, and $\phi$ must therefore be injective.
Write $V = V/\ker(\phi)\oplus \ker(\phi)$ and let $\psi : W \to V/\ker(\phi)\oplus\ker(\phi) $ be a right-inverse.
Then $\psi$ can be written as $\psi = \psi_1 + \psi_0$, where
$\psi_1 : W \to V/\ker(\phi)$ and
$\psi_0 : W \to \ker(\phi)$
Composing $\psi_0$ with any automorphism $\mu : \ker(\phi) \to \ker(\phi)$ yields a right-inverse $\psi_\mu = \psi_1 + \mu \circ \psi_0$ with $\phi \circ \psi_\mu = \phi\circ(\psi_1 + \mu \circ \psi_0) = \phi\circ\psi_1 = \phi\circ(\psi_1 + \psi_0) = id_W$. (More directly: adding to $\psi$ any nonzero map $\tau : W \to \ker(\phi)$ yields another right-inverse, since $\phi\circ(\psi+\tau) = \phi\circ\psi = id_W$.)
Proving by induction that $a_n = 2^n - 1$, $\forall n \in N$ | This is my simple solution by induction:
Base:
$a_0 = 2^0 - 1 = 0$
Hypothesis:
Suppose $a_n = 2^n - 1$ is true for an arbitrary $n \in N$.
Inductive step:
Let's prove that $a_{n+1} = 2^{n+1} - 1$ is also true.
We know that $a_{n + 1} = 2a_n + 1$, so I will replace $a_n$ with $2^n - 1$.
Now we have:
$a_{n + 1} = 2\cdot (2^n - 1) + 1$
$a_{n + 1} = (2^{n+1} - 2) + 1$
$a_{n + 1} = 2^{n+1} - 1$
We have proved that if $a_n = 2^n - 1$, then follows $a_{n + 1} = (2^{n+1} - 1)$. |
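A quick computational check of the closed form against the recurrence $a_{n+1} = 2a_n + 1$, $a_0 = 0$:

```python
# Verify a_n = 2^n - 1 against the recurrence a_{n+1} = 2*a_n + 1, a_0 = 0.
a = 0
for n in range(50):
    assert a == 2**n - 1
    a = 2*a + 1
```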
Definition of improper divisor in algebra | If $a = bc$ where $b$ is an associate of $a$, then $a = bu$ for some unit $u$. So $bc = bu$, so $bcu^{-1} = b$, so $b(1 - cu^{-1}) = 0$. So if $b$ is nonzero (and the last statement is obviously not true if $a = b = 0$) then since $R$ is a domain, $1 - cu^{-1} = 0$, and so $c = u$. |
How can I conclude that the integers $a r_1, a r_2 ... ar_{\phi(m)}$ modulo $m$ are a permutation of the integers $r_1,r_2,...,r_{\phi(m)}$? | $(ar, m) = 1 \implies (a,m) = 1$ and $(r, m) = 1$. Proof: if not for either, then there's $d \neq 1$ dividing both $ar$ and $m$.
You already have that $r_i \neq r_j \pmod m$, since if they were equal then $a$ times them would be equal, which you say is not the case. Thus the $r_i$ make up $\phi(m)$ representatives of the multiplicative group of units $\pmod m$.
And since $a$ is also a unit (first paragraph), the map $U \to U, x \mapsto ax$ is a permutation of the units $U$. Choose representatives equal to the original set, and you have that $a$ permutes those representatives.
Note: For any group $G$, and $g \in G$, the map $x \mapsto gx$ is a permutation of $G$. Prove that.
Note: $a \in \Bbb{Z}$ is congruent/equal to a unit $\pmod m$ iff $(a,m) = 1$.
Note: a unit element in a ring is an element with a multiplicative inverse in that ring.
Note: $\phi(m) = $ the number of elements in the multiplicative group of units $\pmod m$, or $|U|$ from above.
Note: some of these notes are immediate implications of the others. |
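A small computational illustration (with arbitrarily chosen $m = 12$, $a = 7$): multiplying the units mod $m$ by $a$ permutes them.

```python
from math import gcd

m, a = 12, 7                                  # arbitrary choices with gcd(a, m) = 1
units = [r for r in range(1, m) if gcd(r, m) == 1]   # r_1, ..., r_phi(m)
assert gcd(a, m) == 1

mapped = sorted(a*r % m for r in units)
assert mapped == sorted(units)                # a*r_i is a permutation of the r_i
```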
The zero function is integrable in $\pmb{ANY}$ set and its integral is zero. | Because the result is intuitively obvious it is easy to give a proof with true statements that are not fully justified. If your objective is to be really precise, then I would make the following improvements.
(1) Given $m_R(\pmb{0})\le 0\le M_R(\pmb{0})$ and assuming that $m_R(\pmb{0})< 0\ < M_R(\pmb{0})$ you assert that for any $\epsilon > 0$ there exist $x,y \in Q$ such that
$$0=\pmb{0}(x)<m_R(\pmb{0})+\epsilon\,\,\,\text{and}\,\,\, 0=\pmb{0}(y)>M_R(\pmb{0})-\epsilon$$
and this clearly is impossible. As it may hold for some $\epsilon$, produce a specific example where it fails to hold. For example, with $\epsilon = -m_R(\mathbf{0})/2 > 0$ we get the contradiction $0=\pmb{0}(x)<m_R(\pmb{0})/2 <0$.
(2) The proof that $\int_Q \mathbf{0} = 0$ is indirect and a little cumbersome. Why not simply say that for all $P$ we have
$$0 = L(f,P) \leqslant \int_Q \mathbf{0} \leqslant U(f,P) = 0$$
(3) A minor detail: in proving that $\int_S \mathbf{0} =0 $ for any bounded set $S$, you start with the definition
$$\int_S \mathbf{0} := \int_Q\mathbf{0}_S,$$
where $Q$ can be any rectangle containing $S$. I would add that $\mathbf{0}_S$ is everywhere continuous and, therefore, integrable on $Q$ regardless of the content of boundary $\partial S$. |
Find continuous $f$ with period $1$ such that $f(x) =\int_0^1 f(x-t)f(t) dt$ | Quick look: since $f$ is continuous and periodic, it has a (unique) Fourier series of the form $f(x) = \sum_{k\in\mathbb Z} c_k e^{i2\pi kx}$.
Inject this form in the integral and you get (I don't even care about switching sums and integrals at this stage):
$$f(x) = \int_0^1 \sum_{k,m\in\mathbb Z} c_k c_m e^{i2\pi k(x-t)}e^{i2\pi mt}dt = \sum_{k,m\in\mathbb Z} c_k c_m e^{i2\pi kx} \int_0^1 e^{i2\pi(m-k)t}dt$$
the term under the integral is nil unless $m=k$, hence:
$$f(x) = \sum_{k\in\mathbb Z} c_k e^{i2\pi kx} = \sum_{k\in\mathbb Z} c_k^2 e^{i2\pi kx}$$
And you get: $\forall k, c_k = c_k^2$, i.e. $c_k \in \{0,1\}$.
As stated in the other (better) answer, Plancherel theorem then implies $\sum |c_k|^2 < \infty$, which can only happen if $c_k=0$ for all but a finite number of $k$. So $f$ is of the form $\displaystyle f(x) = \sum_{m=1}^N e^{i2\pi k_m x}$ with $k_m\in \mathbb Z$ all distinct. |
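A numerical sanity check: for a finite sum of exponentials with $c_k \in \{0,1\}$ (the indices below are an arbitrary choice), the periodic convolution reproduces $f$. The integral is approximated by a Riemann sum, which is exact up to rounding for trigonometric polynomials of degree below the number of nodes.

```python
import cmath

def f(x):                                   # c_0 = c_2 = c_5 = 1, all other c_k = 0
    return sum(cmath.exp(2j*cmath.pi*k*x) for k in (0, 2, 5))

N = 256                                     # Riemann-sum nodes
def conv(x):                                # int_0^1 f(x - t) f(t) dt
    return sum(f(x - j/N)*f(j/N) for j in range(N)) / N

for x in (0.0, 0.17, 0.5, 0.83):
    assert abs(conv(x) - f(x)) < 1e-9
```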
A set with the largest possible cardinality? | Either you take as (true or not) statement something like well-formed formulae in the language of set theory. Then there are only countably many such statement, but you cannot express $x\in X$ for every set $X$ simply because in general you do not have a way to describe $X$.
Or you allow/consider only such sets $X$ that have an explicit description in the language of set theory. Then you will only have countably many such definable sets and no contradiction arises.
Or you allow arbitrary sets and extend your language with a symbol for every set. That gives you problems because already the set of symbols of your language is a proper class and in the end so is $T$. |
Value of $n$ required for harmonic series to cross a certain value. | From this answer, to compute the smallest $n$ such that $H_n$ exceeds an integer $N$,
$$\log\left(n+\frac{1}{2}\right)+\gamma>N$$
$$\log\left(n+\frac{1}{2}\right)>N-\gamma$$
$$n+\frac{1}{2}>e^{N-\gamma}$$
$$n>e^{N-\gamma}-\frac{1}{2}$$
so
$$n=\lceil e^{N-\gamma}-\frac{1}{2}\rceil$$ |
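A quick check of this formula against direct summation for small $N$:

```python
import math

GAMMA = 0.5772156649015329                  # Euler-Mascheroni constant

def n_predicted(N):
    return math.ceil(math.exp(N - GAMMA) - 0.5)

def n_direct(N):                            # smallest n with H_n > N, by summation
    h, n = 0.0, 0
    while h <= N:
        n += 1
        h += 1.0/n
    return n

for N in range(1, 8):
    assert n_predicted(N) == n_direct(N)
```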
Combinatorics ( colouring a checkerboard) | A solution that does not use Burnside's lemma explicitly:
There are $m^{4t^2}$ possible boards, and naively we must divide this by $4$. However:
$m^{2t^2}$ boards are invariant under a $180^\circ$ turn, so we must "boost" the count by $m^{2t^2}$ so that these boards are counted once (not $0.5$ times)
$m^{t^2}$ boards are invariant under a $90^\circ$ turn, so we must "boost" the count by $2m^{t^2}$ so that these boards are counted once (before the first bullet point they are counted $0.25$ times; the $180^\circ$ boost raises this to $0.5$, and this boost supplies the remaining $0.5$)
Thus the final answer is, writing $z=m^{t^2}$ (the number of ways to colour a fixed $t×t$ grid with $m$ colours),
$$\frac{z^4+z^2+2z}4$$ |
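The formula can be checked by brute force for small parameters, e.g. $t = 1$, $m = 2$ (a $2\times 2$ board, so $z = 2$ and the formula gives $(16+4+4)/4 = 6$):

```python
from itertools import product

def rot(board):                   # 90-degree rotation of (a, b, c, d): top row a b, bottom row c d
    a, b, c, d = board
    return (c, a, d, b)

def canonical(board):             # lexicographically least board in the rotation orbit
    orbit = [board]
    for _ in range(3):
        orbit.append(rot(orbit[-1]))
    return min(orbit)

m, t = 2, 1
boards = {canonical(b) for b in product(range(m), repeat=4*t*t)}
z = m**(t*t)
assert len(boards) == (z**4 + z**2 + 2*z)//4   # = 6
```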
The free group $F_3$ being a quotient of $F_2$ | If $F_3$ were a quotient of $F_2$, then $\mathbb{Z}^3$ would be, but $\mathbb{Z}^3$ cannot be generated by fewer than $3$ elements. To me it seems easier to see directly that $\mathbb{Z}^3$ needs at least $3$ generators than the corresponding statement for $F_3$, perhaps because it's easy to visualize.
The rank of a group is the smallest cardinality of a generating set. Here's a list of some facts about ranks of groups (including that the rank of $F_3$ is $3$) on Wikipedia. |
Help with a simple number theory proof | Hints:
$1000\equiv -1\mod 7$
$8>7$
Pigeon Hole Principle |
Stirling numbers of the first kind - identity from wikipedia. | Here is a proof in two parts, the first algebraic and the second combinatorial.
First step. Recall that the bivariate generating function of the Stirling Numbers of the first kind represents the species $$\mathfrak{P}(\mathfrak{C}(\mathcal{Z}))$$ and is given by
$$\exp\left(u\log\frac{1}{1-z}\right).$$
It follows that the sum that we want to evaluate is given by
$$n! [z^n] \sum_{p=k}^n {p\choose k} [u^p] \exp\left(u\log\frac{1}{1-z}\right)
= n! [z^n] \sum_{p=k}^n {p\choose k} \frac{1}{p!} \left(\log\frac{1}{1-z}\right)^p.$$
As we are extracting the coefficient of $z^n$ and the logarithmic term starts at $z$ we may extend the sum to infinity, getting
$$n! [z^n] \sum_{p=k}^\infty {p\choose k} \frac{1}{p!} \left(\log\frac{1}{1-z}\right)^p.$$
This turns into
$$n![z^n] \frac{1}{k!} \left(\log\frac{1}{1-z}\right)^k
\sum_{p=k}^\infty {p\choose k} \frac{k!}{p!} \left(\log\frac{1}{1-z}\right)^{p-k}$$
which becomes
$$n![z^n] \frac{1}{k!} \left(\log\frac{1}{1-z}\right)^k
\sum_{p=k}^\infty \frac{1}{(p-k)!} \left(\log\frac{1}{1-z}\right)^{p-k}$$
or
$$n![z^n] \frac{1}{k!} \left(\log\frac{1}{1-z}\right)^k
\exp\log\frac{1}{1-z}$$
which is finally equal to
$$n![z^n] \frac{1}{1-z}\times\frac{1}{k!} \left(\log\frac{1}{1-z}\right)^k.$$
Second step. The key observation is that we can recognize the univariate exponential generating function from the previous step. It represents the species
$$\mathfrak{E}(\mathcal{Z})\times\mathfrak{P}_k(\mathfrak{C}(\mathcal{Z}))$$
which consists of a set of $k$ cycles paired with a permutation. But there is a bijection between this species (on $n$ nodes) and the species
$$\mathfrak{P}_{k+1}(\mathfrak{C}(\mathcal{Z}))$$
(on $n+1$ nodes), which is obtained straightforwardly by removing the element $n+1$ from the cycle it is on, which leaves $k$ cycles and a permutation. Now the count of the second species is $$\left[ n+1 \atop k+1 \right]$$ and we are done. |
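The identity $\sum_{p=k}^n \binom{p}{k}\left[n\atop p\right] = \left[n+1\atop k+1\right]$ can be checked numerically with the standard recurrence for the unsigned Stirling numbers of the first kind:

```python
from math import comb

def stirling1(nmax):
    # c[n][k] = unsigned Stirling number of the first kind,
    # via the recurrence c(n+1, k) = c(n, k-1) + n*c(n, k), c(0, 0) = 1
    c = [[0]*(nmax + 2) for _ in range(nmax + 2)]
    c[0][0] = 1
    for n in range(nmax + 1):
        for k in range(1, n + 2):
            c[n+1][k] = c[n][k-1] + n*c[n][k]
    return c

c = stirling1(10)
for n in range(1, 10):
    for k in range(0, n + 1):
        assert sum(comb(p, k)*c[n][p] for p in range(k, n + 1)) == c[n+1][k+1]
```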
What are the main differences between nets and ordinary sequences | This is how I understand it.
The definition of convergent sequence in a normed space is that:
for every $e>0$ there is $d>0$ such that $|a_n-a|<e$ when $n>d$.
This, translated to neighborhoods, means that
for every neighborhood of $a$ there is a tail of the sequence completely inside the neighborhood.
For the normed case we can produce a system of neighborhoods of a point that is countable, totally ordered by inclusion, and such that for every neighborhood of the point there is one in the system that is inside. Then one just needs that for each element of the system of neighborhood to contain a tail of the sequence.
So, in the absence of a norm we just need to imitate this. We consider a neighborhood system, which need not be countable. This forces us to consider $a_I$ for arbitrary sets $I$. Next the system may be forced to not be totally ordered by inclusion. So we need to define what would be the tail of the 'sequence' $a_I$. We define it by the properties that we need them to satisfy.
If $U_1$ and $U_2$ are neighborhoods of the system of neighborhoods, we want tails of $a_I$ to be in $U_1$ and in $U_2$. But $U_1\cap U_2$ is also open and therefore must also contain a tail of $a_I$.
So, for $U_i$, $i=1,2$, we have $d_i\in I$ such that $a_j\in U_i$ for $j>d_i$.
We also want something like this for $U_1\cap U_2$, but we would like it to be similar as the usual definition of limit.
We want that for every neighborhood of $a$ "eventually" all elements of the sequence are in there. So, we need to impose, at least, that the tails $j\geq d_i$, $i=1,2$ merge. Otherwise the tails for $U_1,U_2$, and $U_1\cap U_2$ may not have anything to do with each other. Imposing that these tails have non-empty intersection means that we have $d$ such that $d\geq d_1$ and $d\geq d_2$. So, we impose this property on $I$.
For every $d_1,d_2\in I$ there is $d\in I$ with $d\geq d_1$, and $d\geq d_2$. |
How to find an example of a non Abelian group of arbitrary finite order? eg. $39$ | There are composite orders (e.g. 15 or 765, or prime squares) such that all groups of that order will be abelian, and there is no all-purpose construction, but here are a few constructions of nonabelian groups that cover lots of orders:
If the order is even and $>4$, one can construct a dihedral group.
If the order is divisible by a prime power $p^k$ and by another prime $q$ dividing the order of $GL_k(p)$ (where $q$ could be $p$ as well, so this in particular covers all orders that are multiples of a prime cubed), one can form a nonabelian semidirect product $(\mathbb{Z}/p)^k \rtimes \mathbb{Z}/q$.
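For the first bullet: a minimal check that the dihedral group (here of order $6$, acting on a triangle) is nonabelian, with group elements represented as permutation tuples:

```python
# Dihedral group of order 2n as permutations of vertices 0..n-1:
# r = rotation, s = reflection; composing them in opposite orders differs.
n = 3
r = tuple((i + 1) % n for i in range(n))    # rotation: i -> i+1 (mod n)
s = tuple((-i) % n for i in range(n))       # reflection: i -> -i (mod n)

def compose(p, q):                          # (p o q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(n))

assert compose(r, s) != compose(s, r)       # so the dihedral group is nonabelian
```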
Scope of variable in a converse statement | Given a propositional logic statement of the form
$ p \to q$
the converse is $q \to p$,
so you merely flip the LHS and RHS of the arrow.
With first order logic you have to take more care. Your first statement isn't actually well defined, since the existential quantifier needs to have the whole statement as its scope to apply to the RHS.
Correcting this depends a bit on what you are trying to express. A slight fix would be $$\exists g (g \in G \to \exists a,b(a,b \in G \wedge g = ab))$$
However, I'm guessing you are trying to say that if $g$
is an element of a group then there exist elements
$a$ and $b$
in the group such that $g = ab,$
which would be $$\forall g (g \in G \to \exists a,b(a,b \in G \wedge g = ab))$$
In this case you really need to define what you mean by "converse"; if you take the converse of the propositional logic statement within the quantifier we get:
$$\forall g (\exists a,b(a,b \in G \wedge g = ab) \to g \in G)$$
which says that for all $g,$ if $g$ is the product of two elements in the group $G$ then $g \in G,$ which is indeed true. |
Laplace and Fourier Transforms to derive an Unsteady Fundamental Solution | More a long comment than a full answer.
In some sense your guess is correct, since this is exactly how these transforms are used when they are applied jointly to a PDE: in the first explicit reference I recall, Richard Briggs explicitly says that in his monograph he will "perform Laplace transformation with respect to time and a Fourier transformation with respect to the spatial coordinate ..." ([1], §2,2 p. 12).
However, the reason for performing the Laplace transform with respect to the time variable for systems of PDEs is subtler than the mere need to solve an (at most second order, in most cases) ordinary differential equation. Indeed, as shown in this Q&A (already cited in comments), if you have a single PDE you can simply apply to it the Fourier transform with respect to the spatial variable and solve the resulting ODE by any elementary methods. Now let's consider what happens if you have the following system of first order (with respect to time) PDEs:
$$
\partial_t\mathbf u= \mathbf A(\partial_\mathbf{x})\mathbf{u}\label{ex}\tag{Ex.}
$$
where
$\mathbf{u}=(u_1,\ldots,u_n)$, $n\ge 2$ is the unknown $n$-dimensional vector,
$\mathbf{x}=(x_1,\ldots,x_m)$, $m\ge 1$ is the $m$-dimensional spatial variable $\implies\partial_\mathbf{x}=(\partial_{x_1},\ldots,\partial_{x_m})$ is the $m$-dimensional vector of the partial derivatives respect to the components of $\mathbf{x}$,
$\mathbf A(\partial_\mathbf{x})$ is a $n\times n$ matrix partial differential operator whose entries are polynomial in the variables $\partial_\mathbf{x}$ with complex coefficients.
If you apply to \eqref{ex} the Fourier transform respect to $\mathbf{x}$ i.e. $\mathscr{F}_{\bf{x}\mapsto\boldsymbol{\xi}}$ you get the following system of ODEs
$$
\frac{\mathrm{d}\mathbf u}{\mathrm{d}t}= \mathbf A(2\pi i\boldsymbol{\xi})\mathbf{u}\label{e}\tag{Ex.'}
$$
which is easily (almost from the theoretic point of view) solvable by calculating its fundamental matrix
$$
e^{t\mathbf{A}(2\pi i\boldsymbol{\xi})}\label{fs}\tag{FS}
$$
by putting
$$
\mathbf{u}=e^{t\mathbf{A}(2\pi i\boldsymbol{\xi})}\mathbf{u}_0
$$
where $\mathbf{u}_0$ is the initial condition for the system \eqref{ex}. However, as Peter Henrici notes in his monumental work [2] §12.5, p. 537 example 7, calculating \eqref{fs} is not an easy task and can also hide the structure of the solution with respect to the spatial variables. Therefore, when dealing with systems of PDEs, it is strongly advisable to algebrize the problem completely, i.e. to transform the problem at hand into the linear-algebra problem of solving a (possibly nonhomogeneous) determined system of linear equations.
In our case, the 3D Stokes system, assuming the notation of [3], pp. 898-899, and putting $\mathbf u=(u,v,w)$, $\mathbf x=(x,y,z)$, $\boldsymbol\xi=(\xi_1, \xi_2,\xi_3)$, we have
$$
\left\{
\begin{split}
0 &= \frac{\partial{u}}{\partial x} + \frac{\partial{v}}{\partial y} + \frac{\partial{w}}{\partial z}\\
\rho \dfrac{\partial u}{\partial t} &= -\frac{\partial{p}}{\partial x} + \mu \nabla^2 u + \alpha_1\delta(\mathbf x)\delta(t)\\
\rho \dfrac{\partial v}{\partial t} &= -\frac{\partial{p}}{\partial y} + \mu \nabla^2 v + \alpha_2\delta(\mathbf x)\delta(t)\\
\rho \dfrac{\partial w}{\partial t} &= -\frac{\partial{p}}{\partial z} + \mu \nabla^2 w + \alpha_3\delta(\mathbf x)\delta(t)\\
\end{split}
\right.\label{st}\tag{ST}
$$
Obviously, since we are dealing with fundamental solutions, we should work in the framework of generalized functions, for example distributions: thus we assume that $p, u,v,w$ belong to the space of Schwartz distributions $\mathscr{S}^\prime(\Bbb R^3\times\Bbb R)$, in order to be able to do Fourier analysis.
Applying Laplace transformation $\mathscr{L}_{t\mapsto s}$ we first get
$$
\left\{
\begin{split}
0 &= \frac{\partial\hat{u}}{\partial x} + \frac{\partial\hat{v}}{\partial y} + \frac{\partial\hat{w}}{\partial z}\\
\rho s \hat{u} &= -\frac{\partial\hat{p}}{\partial x} + \mu \nabla^2 \hat{u} + \alpha_1\delta(\mathbf x)\\
\rho s \hat{v} &= -\frac{\partial\hat{p}}{\partial y} + \mu \nabla^2 \hat{v} + \alpha_2\delta(\mathbf x)\\
\rho s \hat{w} &= -\frac{\partial\hat{p}}{\partial z} + \mu \nabla^2 \hat{w} + \alpha_3\delta(\mathbf x)\\
\end{split}
\right.,
$$
and then applying the Fourier transform respect to the $\bf{x}$ variable $\mathscr{F}_{\bf{x}\mapsto\boldsymbol{\xi}}$
$$
\left\{
\begin{split}
0 &= \xi_1\hat{u} + \xi_2\hat{v} + \xi_3\hat{w}\\
\rho s \hat{u} &= -2\pi i\xi_1\hat{p} - 4 \mu \pi^2 \Vert\boldsymbol\xi\Vert^2 \hat{u} + \alpha_1\\
\rho s \hat{v} &= -2\pi i\xi_2\hat{p} - 4 \mu \pi^2 \Vert\boldsymbol\xi\Vert^2 \hat{v} + \alpha_2\\
\rho s \hat{w} &= -2\pi i\xi_3\hat{p} - 4 \mu \pi^2 \Vert\boldsymbol\xi\Vert^2 \hat{w} + \alpha_3\\
\end{split}
\right..
$$
(by abuse of notation but for simplicity reasons, we do not change the symbols $ \hat{p}, \hat{u}, \hat{v}, \hat{w}$) and thus we finally get
$$
\begin{pmatrix}
\xi_1 & \xi_2 & \xi_3 & 0 \\
(\rho s + 4 \mu \pi^2 \Vert\boldsymbol\xi\Vert^2) & 0 & 0 & 2\pi i\xi_1 \\
0 & (\rho s + 4 \mu \pi^2 \Vert\boldsymbol\xi\Vert^2) & 0 & 2\pi i\xi_2 \\
0 & 0 & (\rho s + 4 \mu \pi^2 \Vert\boldsymbol\xi\Vert^2) & 2\pi i\xi_3 \\
\end{pmatrix}
\begin{pmatrix}
\hat{u}\\
\hat{v}\\
\hat{w}\\
\hat{p}
\end{pmatrix}=
\begin{pmatrix}
0\\
\alpha_1\\
\alpha_2\\
\alpha_3\\
\end{pmatrix}
$$
Now we have a fully algebraic, nonhomogeneous determined linear system which is solvable by elementary means. The solution vector we obtain is the Laplace transform with respect to time and the Fourier transform with respect to the spatial variable of the fundamental solution of the Stokes system \eqref{st}: in order to reconstruct the fundamental solution, we simply inverse-transform the found algebraic expressions component-wise with the aid of tables; even if this is not the easiest task around, it is nevertheless less daunting than calculating first \eqref{fs} and then its inverse Fourier transform.
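A numerical sanity check of the transformed system, with arbitrary numeric values for $\rho$, $\mu$, $s$, $\boldsymbol\xi$, $\alpha$ (assumed only for illustration): solving the last three equations for $\hat u, \hat v, \hat w$ in terms of $\hat p$ and substituting into the divergence equation gives $\hat p = \frac{\boldsymbol\xi\cdot\alpha}{2\pi i\Vert\boldsymbol\xi\Vert^2}$, and the resulting quadruple satisfies all four equations.

```python
import math

# Arbitrary numerical values, for illustration only
rho, mu, s = 1.3, 0.7, 2.0 + 0.5j
xi = (0.4, -1.1, 0.6)
alpha = (1.0, -2.0, 0.5)

xi2 = sum(x*x for x in xi)                       # ||xi||^2
D = rho*s + 4*mu*math.pi**2*xi2                  # common diagonal factor

# From the divergence equation: p_hat = (xi . alpha) / (2 pi i ||xi||^2)
p = sum(x*a for x, a in zip(xi, alpha)) / (2j*math.pi*xi2)
u = [(a - 2j*math.pi*x*p)/D for x, a in zip(xi, alpha)]

# Check all four transformed equations
assert abs(sum(x*c for x, c in zip(xi, u))) < 1e-12          # divergence
for x, a, c in zip(xi, alpha, u):
    assert abs(rho*s*c - (-2j*math.pi*x*p - 4*mu*math.pi**2*xi2*c + a)) < 1e-12
```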
References
[1] Richard J. Briggs, Electron-stream Interaction with Plasmas, M.I.T. Press research monographs 29, Cambridge, Mass.: M.I.T. Press, pp. 187 (1964).
[2] Henrici, Peter, Applied and computational complex analysis. Vol. 2: Special functions-integral transforms-asymptotics-continued fractions, Wiley Classics Library. New York: Wiley. ix, 662 p. (1991). ZBL0925.30003.
[3] Tsai, C. C.; Young, D. L.; Fan, C. M.; Chen, C. W., "MFS with time-dependent fundamental solutions for unsteady Stokes equations", Engineering Analysis with Boundary Elements 30, No. 10, 897-908 (2006). ZBL1195.76324. |
How does the method of Lagrange multipliers fail (in classical field theories with local constraints)? | Generically, the $m$ equations $g_i(x)=0$ define a manifold $S$ of dimension $d:=n-m$. At each point $p\in S$ the $m$ gradients $\nabla g_i(p)$ are orthogonal to the tangent space $S_p$ of $S$ at $p$. The condition rnk$(\nabla g(p))=m$ means that these $m$ gradients are linearly independent, so that they span the full orthogonal complement $S_p^\perp$ which has dimension $m=n-d$. At a conditionally stationary point $p$ of $f$ the gradient $\nabla f(p)$ is in $S_p^\perp$, and if the rank condition is fulfilled, there will be constants $\lambda_i$ such that $\nabla f(p)=\sum_{i=1}^m \lambda_i\nabla g_i(p)$. In this case the given "recipe" will find the point $p$.
Consider now the following example where the rank condition is violated: The two constraints
$$g_1(x,y,z):=x^6-z=0,\qquad g_2(x,y,z):=y^3-z=0$$
define a curve $S\subset{\mathbb R}^3$ with the parametric representation $$S: \quad x\mapsto (x,x^2,x^6)\qquad (-\infty < x <\infty).$$
The function $f(x,y,z):=y$ assumes its minimum on $S$ at the origin $o$. But if we compute the gradients
$$\nabla f(o)=(0,1,0), \qquad \nabla g_1(o)=\nabla g_2(o)=(0,0,-1),$$
it turns out that $\nabla f(o)$ is not a linear combination of the $\nabla g_i(o)$. As a consequence Lagrange's method will not bring this conditionally stationary point to the fore. |
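The failure can also be seen numerically: finite-difference gradients at the origin give $\nabla f(o)=(0,1,0)$ while both $\nabla g_i(o)=(0,0,-1)$, so no choice of multipliers can work.

```python
def grad(F, p, h=1e-6):                  # central finite-difference gradient
    g = []
    for i in range(3):
        pp = list(p); pp[i] += h
        pm = list(p); pm[i] -= h
        g.append((F(pp) - F(pm)) / (2*h))
    return g

f  = lambda p: p[1]
g1 = lambda p: p[0]**6 - p[2]
g2 = lambda p: p[1]**3 - p[2]

o = [0.0, 0.0, 0.0]
gf, gg1, gg2 = grad(f, o), grad(g1, o), grad(g2, o)

# Both constraint gradients point along (0, 0, -1) ...
assert max(abs(a - b) for a, b in zip(gg1, [0, 0, -1])) < 1e-6
assert max(abs(a - b) for a, b in zip(gg2, [0, 0, -1])) < 1e-6
# ... but grad f = (0, 1, 0) has a nonzero second component, so it cannot be a
# linear combination l1*gg1 + l2*gg2 (those all have second component 0).
assert abs(gf[1] - 1.0) < 1e-6 and abs(gg1[1]) < 1e-6 and abs(gg2[1]) < 1e-6
```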
Discrete Topology, Separable iff Countable | It's slightly nicer to avoid unnecessary proofs by contradiction: suppose $X$ is separable and discrete, so that there is a countable dense subset $D$ of $X$. As all sets are closed in the discrete topology, $X=\overline{D}=D$ is countable.
How does this Taylor Polynomial work? | I think you have some signs wrong.
$n=0$ term: $(-1)^0 = 1$ (note that an empty product is considered to be 1).
$n=1$ term: $(-1)^1 (\frac{1}{2}) x^1 = -\frac{1}{2} x $ (again in the numerator, an empty product)
$n=2$ term: $(-1)^2 \frac{1}{2\cdot 4} x^2 = \frac{1}{8} x^2$
$n=3$ term: $(-1)^3 \frac{1 \cdot 3}{2 \cdot 4 \cdot 6} x^3 = - \frac{1}{16} x^3$
$n=4$ term: $(-1)^4 \frac{1 \cdot 3 \cdot 5}{2 \cdot 4 \cdot 6 \cdot 8} x^4 = \frac{5}{128} x^4$
etc |
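The listed terms follow the pattern $(-1)^n \frac{1\cdot 3\cdots(2n-3)}{2\cdot 4\cdots(2n)}\,x^n$; the coefficients above can be checked exactly with rational arithmetic:

```python
from fractions import Fraction
from math import prod

def coeff(n):
    # (-1)^n * (1*3*...*(2n-3)) / (2*4*...*(2n)); empty products equal 1
    num = prod(range(1, 2*n - 2, 2))
    den = prod(range(2, 2*n + 1, 2))
    return (-1)**n * Fraction(num, den)

assert [coeff(n) for n in range(5)] == [
    Fraction(1), Fraction(-1, 2), Fraction(1, 8), Fraction(-1, 16), Fraction(5, 128)]
```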
Stokes' Theorem which boundaries to integrate | To answer both questions: $\partial S$ really means the whole boundary of the surface $S$, and $S$ really means the whole surface $S$. If you pick and choose parts of the boundary or parts of the surface to parametrise, then you're integrating over a different region.
If $S$ is a surface built up from two smaller surfaces $S_1$ and $S_2$ such that $S_1$ and $S_2$ only intersect at a curve, then the integral over $S$ of a function is equal to the sum of the integrals over $S_1$ and $S_2$. This means that if you have a cylinder with a base but no lid, you need to parametrise the cylindrical part and the base, integrate over both and add them together.
Likewise, if $\partial S$ consists of three separate curves, you need to integrate over each curve and add them together. |
Transformation of a sphere and computing an integral by using sphere coordinates | let
$$x=2\,\rho\,\cos\theta\sin\phi$$
$$y=3\,\rho\,\sin\theta\sin\phi$$
$$z=6\,\rho\,\cos\phi$$
we have
$$\frac{\partial(x,y,z)}{\partial(\rho,\theta,\phi)}=36\rho^2\sin\phi$$
and
$$V=144\int_{0}^{2\pi}\int_{0}^{\pi}\int_{0}^{1}\rho^{4}\cos^{2}\theta\,\sin^{3}\phi\,d\rho\,d\phi\,d\theta$$
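The integral factors into one-dimensional pieces, $144\cdot\frac{1}{5}\cdot\pi\cdot\frac{4}{3}=\frac{192\pi}{5}$; a midpoint-rule check:

```python
import math

# Midpoint-rule evaluation of the three 1D factors of
#   V = 144 * int rho^4 d rho * int cos^2(theta) d theta * int sin^3(phi) d phi,
# compared with the factored value 144 * (1/5) * pi * (4/3) = 192*pi/5.
def midpoint(F, a, b, n):
    h = (b - a)/n
    return h*sum(F(a + (j + 0.5)*h) for j in range(n))

I_rho   = midpoint(lambda r: r**4, 0, 1, 1000)
I_theta = midpoint(lambda t: math.cos(t)**2, 0, 2*math.pi, 1000)
I_phi   = midpoint(lambda p: math.sin(p)**3, 0, math.pi, 1000)

V = 144*I_rho*I_theta*I_phi
assert abs(V - 192*math.pi/5) < 1e-3
```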
Dirichlet forms and its definition | Just in case the OP is curious the way I used to be. The density assumption needs to be posed in the first place because a vector space we're dealing with now is no longer a finite-dimensional one.
It is well-known that on a finite-dimensional vector space, a symmetric form can define a linear operator and vice versa (via Riesz representation theorem). We want to carry this correspondence to an infinite-dimensional case. However, in such a situation arises the notion of unbounded operators. And it is very common that Dirichlet form theory deals with unbounded operators.
One important fact about them is any closed unbounded operator cannot be defined everywhere, i.e. the best we can hope is its domain being just dense in an underlying Hilbert space. In order to maintain the correspondence between linear operators and symmetric forms, it makes sense we should allow a symmetric form to be defined just on a dense subspace, not the entire space. You may see how a symmetric form is defined out of a linear operator and vice versa from [Ma, Rockner] Introduction to the Theory of (Non-Symmetric) Dirichlet Forms, page 22-23.
Hope this helps. |
Trigonometric quadratic formula? And other trig solutions for roots of polynomials? | For a warm up here, I recommend you browse the section Wikipedia has on solving the cubic equation using trigonometry. A direct link can be found here. As you mention, the quartic equation can be solved the same way. Further note that one the page for the Quintic Function we have the following quote:
In 1858 Charles Hermite showed that the Bring radical could be characterized in terms of the Jacobi theta functions and their associated elliptic modular functions, using an approach similar to the more familiar approach of solving cubic equations by means of trigonometric functions.
I think the main problem you are having here is that the Abel–Ruffini theorem is often stated as such: "There exists no algebraic solution to the general quintic". This definition leaves a little bit up to the imagination, as the definition of algebraic solution does not mention trigonometric equations. As far as I can tell, the question of whether or not any solution expressible in terms of elementary functions can be expressed in terms of roots is an open question. Take for example the following quote from Dave L. Renfro, taken from a post on Math.SE concerning "polynomials with degree 5 solvable in elementary functions" (which you might find an interesting read):
As of 1999 this was not known -- see Conjecture 2 at the bottom of p. 442 of What is a closed-form number? by Timothy Y. Chow [American Mathematical Monthly 106 #5 (May 1999), 440-448]. I suspect it's still not known, since I believe such a result would have become relatively well known given its intrinsic interest and the fact that understanding the issue at hand doesn't require a lot of advanced mathematical training.
Moreover, here we find a post linking to a couple others, explaining how we can solve the general quintic in terms of the Jacobi Theta Function (which is, in some essence, a trigonometric function) by transforming the general quintic into Brioschi form.
While this is an interesting step, this is not quite what we want. On Wikipedia's page on Closed-form expressions we find the following quote:
Similarly solutions of cubic and quartic (third and fourth degree) equations can be expressed using arithmetic, square roots, and cube roots, or alternatively using arithmetic and trigonometric functions. However, there are quintic equations without closed-form solutions using elementary functions, such as $x^5 − x + 1 = 0$.
This quote makes the claim you desire. Now, how do we prove this? The obvious way is to show that the solutions must be in terms of an antiderivative, as we have known algorithms for determining whether or not an antiderivative can be expressed in terms of elementary functions. In my searchings, I found the following paper by R. Bruce King entitled "Beyond the Quartic Equation", which seems to disprove your conjecture that any polynomials of degree $\geq 5$ can be solved in terms of trigonometric functions. The link is primarily focused on disproving your conjecture for the quintic, but mentions generalizations in a later section (although these pages appear to not be in the free preview. I'll let you know if I find a better link. I would likewise appreciate it if someone else could find such a link).
I hope this provides a comprehensive analysis of your question! If you see any glaring flaws in my answer please share and I will do my best to amend them. Hopefully this should answer your question in as simple a manner as possible!
Note: Admittedly, it's a little hard to get a hold of a good Galois Theory course while in high school, so I don't have a strong enough background to answer with that approach. The only result I use from the field seems to concern the Risch Algorithm, which is used implicitly in the final paper I share. |
Why is Riemann integration used in complex analysis and not Lebesgue integration? | It does not matter much, since most of the things one integrates are a-priori continuous and compactly supported. For that matter, one could easily abstract the properties of "integrals" one needs, without specifying a construction of such integrals.
Unsurprisingly, at the time Cauchy developed the basic ideas, there was scarcely any formalization of any notion of "integral", but "everyone knew how they behaved (in nice circumstances)". By later in the 19th century, the formalization of integrals as Riemann integrals more-than-sufficed for basic complex analysis, although occasionally things like Lebesgue Dominated Convergence or Monotone Convergence would make things simpler to explain.
In terms of textbooks and coursework or logical development of any sort, it is obviously simpler to develop basic complex analysis early, as soon as one has any reasonable notion of integral, rather than waiting for a more sophisticated notion (such as Lebesgue's), because the issues addressed in the more sophisticated scenarios mostly are irrelevant to basic complex analysis.
Also, until relatively recently, complex analysis was often studied by engineering and physics students who had most definitely not encountered Lebesgue integration, but who had seen some version of Riemann's construction. So, again, since Riemann's construction more than suffices, there was no reason to add "burdens" for this population. |
Prove a function is holomorphic | Write
$$f(x,y)=u(x,y)+i v(x,y),\quad g(\xi,\eta)=a(\xi,\eta)+ib(\xi,\eta)$$
where $u$, $v$, $a$, $b$ are real-valued functions defined in $A$, resp. $B$. Then by definition of $g$ one has
$$a(\xi,\eta)+ib(\xi,\eta)=g(\xi+i\eta)=\overline{f(\xi-i\eta)}=u(\xi,-\eta)-iv(\xi,-\eta)$$
and therefore
$$a(\xi,\eta)=u(\xi,-\eta),\quad b(\xi,\eta)=-v(\xi,-\eta)\qquad\bigl((\xi,\eta)\in B\bigr)\ .$$
It follows that, e.g. $$b_\eta(\xi,\eta)=-v_y(\xi,-\eta)\cdot(-1)=v_y(\xi,-\eta)\ .$$
Since $u$ and $v$ satisfy the CR-equations in the variables $x$ and $y$ we conclude that
$$a_\xi(\xi,\eta)=u_x(\xi,-\eta)=v_y(\xi,-\eta)=b_\eta(\xi,\eta)\ ,$$
and similarly
$$a_\eta(\xi,\eta)=-u_y(\xi,-\eta)=v_x(\xi,-\eta)=-b_\xi(\xi,\eta)\ .$$
This shows that $g$ fulfills the CR-equations in the variables $\xi$ and $\eta$.
But there is also a direct approach, which in my view is simpler and more in tune with a complex world description.
As $f$ is holomorphic in $A$, for each point $z_0\in A$ (held fixed in the following) there is a complex number $C$ such that
$$f(z)-f(z_0)=C(z-z_0)+o(|z-z_0|)\qquad (z\in A, \ z\to z_0)\ .$$
Let a point $w_0\in B$ be given, and put $z_0:=\bar w_0$. Then by definition of $g$ one has
$$g(w)-g(w_0)=\overline{f(\bar w)}-\overline{f(\bar w_0)}=\overline{f(\bar w)-f( z_0)}=\overline{C(\bar w -z_0)+o(|\bar w-z_0|)}\qquad(w\in B)\ .$$
As $|\bar w -z_0|=|w-w_0|$ it follows that
$$g(w)-g(w_0)=\bar C(w-w_0)+o(|w-w_0|)\qquad(w\in B, \ w\to w_0)\ .$$
It follows that $g'(w_0)=\bar C$, and as $w_0\in B$ was arbitrary, we conclude that $g$ is holomorphic in $B$. |
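Not part of the original answer, but the conclusion $g'(w_0)=\overline{f'(\bar w_0)}$ can be sanity-checked numerically. Here is a small sketch using $f=\exp$ (an arbitrary choice of holomorphic test function) and symmetric difference quotients:

```python
import cmath

def f(z):
    # an arbitrary holomorphic test function
    return cmath.exp(z)

def g(w):
    # g(w) = conjugate(f(conjugate(w)))
    return f(w.conjugate()).conjugate()

w0 = 0.3 + 0.7j
h = 1e-6

# difference quotients of g along the real and imaginary directions;
# for a holomorphic function they must agree (Cauchy-Riemann equations)
d_real = (g(w0 + h) - g(w0 - h)) / (2 * h)
d_imag = (g(w0 + h * 1j) - g(w0 - h * 1j)) / (2 * h * 1j)

# the proof predicts g'(w0) = conjugate(f'(conjugate(w0))); f' = f here
expected = cmath.exp(w0.conjugate()).conjugate()
```

Both difference quotients agree with the predicted derivative to within the finite-difference error.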
Find the limit of $\lim_{n \to \infty}\frac{(-1)^{n^2}}{3} + 2 - \frac{1}{(-e)^n}$ | No. $\lim_{n \to \infty} (-1)^{n^2}$ does not exist, as it is an oscillating sequence taking the values $1$ and $-1$. So $\lim_{n \to \infty}\frac{(-1)^{n^2}}{3} \neq \frac{1}{3}$. As $e > 1$, $\lim_{n \to \infty}e^n = \infty$ and hence the reciprocal tends to $0$.
Continuous functions defined on $[a, b]$ with values in a non-locally convex topological vector space that are not integrable? | Consider the space $\ell_p$ for $p = 1/4$. This space, equipped with the distance $\rho(x, y) = \|x - y\|^p$, is a complete metrizable topological vector space that is not locally convex. Denote by $e_n$ the vectors of the canonical basis of $\ell_p$, that is $e_1 = (1, 0, 0, ...)$, $e_2 = (0, 1, 0, 0, ...)$, etc. Let $\Delta_n = [2/(2n+1), 1/n]$. The intervals $\Delta_n$ are mutually disjoint and tend to 0. Now define the required function $f: [0, 1] \to \ell_p$ as follows: on every $\Delta_n$ the function $f$ takes the constant value $e_n/n$; between two neighboring intervals $\Delta_n$, $\Delta_{n+1}$ define $f$ by means of linear interpolation (in order to make it continuous), and put $f(0) = 0$.
Expectation of a quadratic form of Bernoulli random variables | I just found a closed-form expression for the expectation, which can be evaluated in polynomial time.
\begin{align}
&\mathbb{E} \left[\left( \displaystyle\sum_{i=1}^N X_i \right)^{-2} \sum_{\substack{1\le i, j\le N \\ i\neq j}} X_iX_j B_{ij} \right] \\
&= \sum_{m=2}^N \mathbb{E} \left[\left( \displaystyle\sum_{i=1}^N X_i \right)^{-2} \sum_{\substack{1\le i, j\le N \\ i\neq j}} X_iX_j B_{ij} \left| \sum_{i=1}^N X_i = m \right. \right] \mathbb{P}\left( \sum_{i=1}^N X_i = m \right) \\
&= \sum_{m=2}^N \frac{1}{m^2} \sum_{\substack{1\le i, j\le N \\ i\neq j}} B_{ij} \mathbb{E} \left[ X_iX_j \left| \sum_{i=1}^N X_i = m \right. \right] \mathbb{P}\left( \sum_{i=1}^N X_i = m \right) \\
&= \sum_{m=2}^N \frac{1}{m^2} \sum_{\substack{1\le i, j\le N \\ i\neq j}} B_{ij} \mathbb{P} \left( X_i = 1, X_j = 1 \left| \sum_{i=1}^N X_i = m \right. \right) \mathbb{P}\left( \sum_{i=1}^N X_i = m \right) \\
&= \sum_{m=2}^N \frac{1}{m^2} \sum_{\substack{1\le i, j\le N \\ i\neq j}} B_{ij} \mathbb{P} \left( \left. \sum_{i=1}^N X_i = m \right| X_i = 1, X_j = 1 \right) \mathbb{P}\left( X_i = 1, X_j = 1 \right) \\
&= \sum_{m=2}^N \frac{1}{m^2} \sum_{\substack{1\le i, j\le N \\ i\neq j}} B_{ij} p_i p_j \sum_{n=0}^N \frac{1}{(N+1)} A^{-nm+2n} \prod_{\substack{k=1\\k\neq i,j}}^N (p_k A^n + (1-p_k)) \\
\end{align}
Note that $m$ starts from 2 as when $m < 2$ the conditional expectation is 0.
Instead of $O(2^N)$, this formula can be evaluated in $O(N^5)$ time.
A Question about Triangle Inequality | Well acute makes a difference.
You must know that $8^2+(\sqrt{161})^2=15^2$, with $\sqrt{161}\approx 12.69$. Thus the third side must be greater than $12.69$, else you will have an obtuse (or right) angle.
Next you must also note that $(8,15,17)$ are also Pythagoras triplets. Thus, the third side must be less than $17$.
Thus set of all possible values of the third side is $\{13, 14, 15, 16\}$ |
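A brute-force check of this count (not in the original answer). The two given sides $8$ and $15$ are assumed, and a triangle is acute exactly when the square of its largest side is less than the sum of the squares of the other two:

```python
def is_acute_triangle(a, b, c):
    # triangle inequality, then the acuteness test on the largest side
    s = sorted([a, b, c])
    return s[0] + s[1] > s[2] and s[2] ** 2 < s[0] ** 2 + s[1] ** 2

# integer candidates for the third side
valid_third_sides = {c for c in range(1, 40) if is_acute_triangle(8, 15, c)}
```

This reproduces the set $\{13, 14, 15, 16\}$ from the answer.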
Linear Map and Scalars Question | For $J=1,2,...,n$ let us denote $e_j:=(0,...,0,1,0,...,0) \in \mathbb F^n$
($1$ in the $j$-th place).
Then $T(e_j) \in \mathbb F^m$. Hence there are $A_{j1},...,A_{jm} \in \mathbb F$ such that
$T(e_j)=(A_{j1},...,A_{jm} )$ |
Deriving a joint cdf from a joint pdf | We always have $X<Y$. So if $x > y$ then
$$F(x,y)-F(y,y)=\int_y^x \int_{-\infty}^y f(u,v) du dv = \int_y^x \int_{-\infty}^y 0 du dv = 0.$$
In other words, if $x>y$ then $F(x,y)=F(y,y)$. So that reduces the problem to case 3. In case 3 you just have to calculate the integral:
$$\int_{-\infty}^y \int_{-\infty}^{\min \{ v,x \}} 2 e^{-u-v} du dv.$$
I got the limits here by rephrasing the region of integration defined as $\{ (u,v) : u \leq x, \, v \leq y, \, u \leq v \}$ |
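As a sketch (assuming the density in question is $f(u,v)=2e^{-u-v}$ on $0<u<v$ and $0$ elsewhere, which matches the computation above), a midpoint Riemann sum can confirm that the density integrates to $1$ over the region $u<v$:

```python
import math

def f(u, v):
    # assumed joint density: 2*exp(-u-v) on 0 < u < v, else 0
    return 2.0 * math.exp(-u - v) if 0.0 < u < v else 0.0

# midpoint Riemann sum over the truncated square [0, 15]^2
# (the tail mass beyond 15 is negligible)
h = 0.025
steps = 600
total = sum(f((i + 0.5) * h, (j + 0.5) * h) * h * h
            for i in range(steps) for j in range(steps))
```

The sum comes out close to $1$; the remaining discrepancy is the usual discretization error along the discontinuity $u=v$.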
Problem in proof of open mapping theorem? | For any $y\in Y$, set $y^\prime = \dfrac {\eta y} {\|y\|}$.
$y_0 +y^\prime \in \overline W$, so we can find $x^\prime\in 2kU$ such that $\|Tx^\prime - y^\prime\| < \dfrac {\eta\epsilon}{\|y\|}$.
Then (1) holds for $y$, with $x = \dfrac {\|y\| x^\prime}\eta$.
Or, just choose $\eta$ such that $y_0+y\in W$ if $\|y\| \leq \eta$ instead. |
Application of MVT to prove inequality | The MVT says that:
$\frac{f(x)-f(0)}{x-0} = f'(\xi) $ for some $\xi\in[0,x]$
but since $f(0) = 0$ that just gives
$f(x) = xf'(\xi)$
Hence by the increasing nature of $f'$ the result follows.
Having said that, your solution also appears to be perfectly valid. |
Is this a compact space? | For $n\in\Bbb Z^+$ let $x^{(n)}=\langle x_k^{(n)}:k\in\Bbb Z^+\rangle$ be the sequence defined by
$$x_k^{(n)}=\begin{cases}
1,&\text{if }k=n\\
0,&\text{otherwise}\;.
\end{cases}$$
Clearly $\langle x^{(n)}:n\in\Bbb Z^+\rangle$ is a sequence in $A$, and it has no convergent subsequence; indeed, any two distinct terms are distance $1$ apart. |
Russell's paradox with bounded comprehension | Your definition is circular. You define $S$ using $A$, and you define $A$ using $S$.
And that's what precludes the definition in this case. |
Functions which satisfy $f(n)+2f(f(n))=3n+5$ | Induction! Suppose $f(n)=n+1$. Then, $$f(n)+2f(f(n))=3n+5$$
is equivalent to $$n+1+2f(n+1)=3n+5$$
so we get $$2f(n+1)=2(n+2)$$
So $f(n+1)=n+2$, which is what we wanted.
Moreover, as you calculated $f(1)=2$, our induction is complete. (note that this is a very important step) |
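A quick check of the conclusion (my addition, not part of the answer): the function $f(n)=n+1$ does satisfy the functional equation for every $n$ in a large range.

```python
def f(n):
    # the solution obtained by induction
    return n + 1

# verify f(n) + 2 f(f(n)) = 3n + 5 over a large range of n
ok = all(f(n) + 2 * f(f(n)) == 3 * n + 5 for n in range(1, 1001))
```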
Parametrization of a sphere such that distances are separable | \begin{align}d(r_1,r_2)&=\cos^{-1}(r_1\cdot r_2), \\
&=\cos^{-1}(u_1v_1+u_2v_2), \end{align}
If $\gcd(a_i, a_j)=1$ for $i\ne j$ then $\gcd(a_2 ... a_n, ..., a_1 ... a_{n-1})=1$? | HINT:
Assume there exists a prime that divides all the products of $n-1$ elements. Now, since it divides $a_2\ldots a_n$ it must divide one of the numbers $a_2$, $\ldots$ , $a_n$. Assume that number is $a_n$. Now that prime must divide the product where $a_n$ does not appear, so one of the numbers $a_1$, $\ldots$, $a_{n-1}$, contradiction.
Obs: You can also multiply all the equations from Bezout if you look carefully at the terms obtained. |
Evaluate the following limits | Notice we can rewrite the numerator and denominator as
$$\begin{align}
\verb/numerator/
&= (\sqrt{x+3} - 2) + (\sqrt{x}-1)
= \frac{(x+3)-4}{\sqrt{x+3}+2} + \frac{x-1}{\sqrt{x}+1}\\
&= (x-1)\left(\frac{1}{\sqrt{x+3}+2} + \frac{1}{\sqrt{x}+1}\right)\\
\verb/denominator/
&= (\sqrt{x^2+3}-2) + (\sqrt{x}-1)
= \frac{(x^2+3)-4}{\sqrt{x^2+3}+2} + \frac{x-1}{\sqrt{x}+1}\\
&= (x-1)\left(\frac{x+1}{\sqrt{x^2+3}+2} + \frac{1}{\sqrt{x}+1}\right)
\end{align}
$$
The complicated ratio we have equals to
$$\frac{
\displaystyle\;\frac{1}{\sqrt{x+3}+2} + \frac{1}{\sqrt{x}+1}
}{
\displaystyle\;\frac{x+1}{\sqrt{x^2+3}+2} + \frac{1}{\sqrt{x}+1}
}
\quad\to\quad
\frac{
\displaystyle\;\frac{1}{2+2} + \frac{1}{1+1}
}{
\displaystyle\;\frac{1+1}{2+2} + \frac{1}{1+1}
}
= \frac34
\quad\text{ as }x \to 1
$$ |
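The value $3/4$ can be confirmed numerically (a check I am adding, not part of the answer) by evaluating the simplified ratio near $x=1$:

```python
import math

def ratio(x):
    # numerator: sqrt(x+3) + sqrt(x) - 3; denominator: sqrt(x^2+3) + sqrt(x) - 3
    num = math.sqrt(x + 3) + math.sqrt(x) - 3
    den = math.sqrt(x * x + 3) + math.sqrt(x) - 3
    return num / den

left = ratio(1 - 1e-6)   # approach from the left
right = ratio(1 + 1e-6)  # approach from the right
```

Both one-sided values are within $10^{-4}$ of $0.75$.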
Arzela-Ascoli-type embedding: Is $H^1(0,T;X)$ compactly embedded in $C([0,T];X)$? | Having infinite-dimensional $X$ is problematic, because even the space of constant $X$-valued functions is not compactly embedded in $C([0, 1];X)$. Indeed, the image of its unit ball under the embedding into $C([0,1];X)$ is an isometric copy of the unit ball of $X$; not a totally bounded set.
The result you are looking for is the Aubin–Lions(-Simon) lemma where an additional compactness condition is imposed on the values taken by the functions in our space. |
Necessary and sufficient condition for constant function | It will be constant.
Hint: For any two points $(x_1, y_1)$, $(x_2, y_2)$, first move from $(x_1, y_1)$ to $(x_1, y_2)$, then to $(x_2, y_2)$.
(Or just apply Chain rule. Your conditions imply that $\nabla f$ is zero everywhere). |
Solving Poisson's equation on $B_1(0)\subset \mathbb{R}^2$ | General method
The standard result to use here asserts that the solution $w$ to
\begin{cases}
\Delta w = 0 & \quad \text{in}\quad B_1(0)\\
w=P_m(x,y) & \quad \text{on} \quad \partial B_1(0)
\end{cases}
where $P_m(x,y)$ is a polynomial on $\mathbb{R}^2$ restricted to $\partial B_1(0)$, is again a polynomial, of the form $$w(x,y)=(1-(x^2+y^2))q(x,y)+P_m(x,y),$$ where $q$ is a polynomial of degree $m-2$. For example, you can find the proof in Theorem 5.1 in Chapter 5 of the book "Harmonic Function Theory" by Axler, Bourdon, Ramey.
Application
We note that $\Delta \frac{y^3}{6}=y$ and we reduce to the previous case defining $$w(x,y):=u(x,y)-\frac{y^3}{6}$$
In our case $P_3(x,y)=1-\frac{y^3}{6}$ and hence we search for $$q(x,y)=a+bx+cy.$$
Imposing $\Delta w = 0$ we compute $a=b=0$ and $c=-1/8$. Hence we obtain
$$
u(x,y)=\frac{y\left(x^2+y^2\right)}{8}-\frac{y}{8}+1.
$$ |
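A finite-difference sanity check (my addition, not part of the answer). With $a=b=0$ and $c=-1/8$, the method produces the candidate $u(x,y)=\frac{y}{8}(x^2+y^2-1)+1$; the code below checks that it satisfies $\Delta u = y$ in the interior and $u=1$ on $\partial B_1(0)$:

```python
import math

def u(x, y):
    # candidate solution from the method: (y/8)(x^2 + y^2 - 1) + 1
    return (y / 8.0) * (x * x + y * y - 1.0) + 1.0

def laplacian(fn, x, y, h=1e-4):
    # standard 5-point finite-difference Laplacian
    return (fn(x + h, y) + fn(x - h, y) + fn(x, y + h) + fn(x, y - h)
            - 4.0 * fn(x, y)) / (h * h)

# interior: Laplacian should equal y (exact for a cubic, up to roundoff)
pde_err = max(abs(laplacian(u, x, y) - y)
              for x in (-0.5, 0.0, 0.3) for y in (-0.4, 0.1, 0.6))

# boundary: u should equal 1 on the unit circle
bdry_err = max(abs(u(math.cos(t), math.sin(t)) - 1.0)
               for t in (0.0, 1.0, 2.0, 3.0, 5.0))
```

Both errors are at the level of floating-point roundoff, since the 5-point stencil is exact on cubics.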
Do positive definite matrices always have an LU decomposition? | Yes, it's true that the corollary also holds for non-symmetric PD matrices. However, it probably turns out that the author/teacher is interested in symmetric PD matrices in particular, so that's all that is being mentioned. In a lot of times, we "don't want to bother thinking about" non-symmetric PD matrices.
I expect that this is leading up to the Cholesky decomposition, which only applies to symmetric PD matrices.
Factor the Quadratic | $-16t^2+32t+20=-16(t^2-2t-5/4) $. Roots of $t^2-2t-5/4$ are
$$\frac{2\pm \sqrt{4+5}}{2} = 1\pm 3/2,$$
hence
$$-16t^2+32t+20=-16(t-(1-3/2))(t-(1+3/2))=-16(t-5/2)(t+1/2)=-4(2t-5)(2t+1)$$ |
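A quick check of the factorization (my addition): a quadratic identity that holds at more than two points holds everywhere, so exact integer agreement at many points confirms it.

```python
def original(t):
    return -16 * t ** 2 + 32 * t + 20

def factored(t):
    return -4 * (2 * t - 5) * (2 * t + 1)

# exact integer arithmetic; agreement at 21 points settles a quadratic identity
agree = all(original(t) == factored(t) for t in range(-10, 11))
```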
Doubts in Integration of greatest integer function. | $[x]$ denotes the greatest integer function (Otherwise known as the floor function), which takes the largest integer less than or equal to $x$.
Since you cannot integrate $\int [x]~dx$ on its own, you must separate it into several functions which you can integrate.
For example, with the first example:
$$\int_0^1 [5x]~dx$$
It is known that $[5x]=0$ when $0\leq x <0.2$, that $[5x]=1$ when $0.2\leq x<0.4$, and so on. Since constants can be integrated, we can split the integral accordingly:
$$\int_0^1 [5x]~dx=\int_0^{0.2} 0~dx+\int_{0.2}^{0.4} 1~dx+\int_{0.4}^{0.6} 2~dx+\int_{0.6}^{0.8} 3~dx+\int_{0.8}^{1} 4~dx$$ |
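The splitting gives $0.2\,(0+1+2+3+4)=2$; a midpoint Riemann sum of $[5x]$ over $[0,1]$ confirms this (my check, not in the original):

```python
import math

# value from the piecewise-constant decomposition
exact = 0.2 * sum(range(5))  # 0.2 * (0 + 1 + 2 + 3 + 4)

# midpoint Riemann sum of floor(5x) on [0, 1]
n = 10000
riemann = sum(math.floor(5 * (k + 0.5) / n) for k in range(n)) / n
```

The midpoints never land on the jump points $0.2, 0.4, \dots$, so the sum is exact.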
IMO 1987 - function such that $f(f(n))=n+1987$ | Here is an alternative approach. It is obvious that such an $f$ must be an injection. Now, look at sets $\mathbb{N}$, $A = f(\mathbb{N})$ and $B = f(f(\mathbb{N})) = \{n + 1987 \ | \ n \in \mathbb{N}\}$.
It is easy to see that $B \subset A \subset \mathbb{N}$, and also from the injectivity of $f$ that $f$ induces a bijection between the disjoint sets $\mathbb{N} \setminus A$ and $A \setminus B$. Therefore, $\mathbb{N} \setminus B = (\mathbb{N} \setminus A) \cup (A \setminus B)$ must contain an even number of elements. But $|\mathbb{N} \setminus B| = 1987$, which is a contradiction, and we are done. |
Why is $\sum x(1-x)$ equal to $1-\sum x^2$? | Note that $\sum_{i=1}^C P(i|t)=1$, that is how the $1$ is obtained in the simplification.
\begin{align}I_g(t) &= \sum_{i=1}^c p(i|t) (1 - p(i|t)) \\&= \sum_{i=1}^c (p(i|t) - p(i|t)^2) \\&= \sum_{i=1}^c p(i|t) - \sum_{i=1}^cp(i|t)^2\\&=1- \sum_{i=1}^cp(i|t)^2 \end{align} |
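Numerically (my addition), the identity holds for any probability vector summing to $1$:

```python
# any probability vector summing to 1 (sample values)
probs = [0.1, 0.25, 0.3, 0.35]

lhs = sum(p * (1 - p) for p in probs)   # sum p(1 - p)
rhs = 1 - sum(p * p for p in probs)     # 1 - sum p^2
```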
Contraction Mapping Theorem. Any $\{ x,f(x),f(f(x)),\ \ldots) \} $ converges to the unique fixed point of f. (Abbott p 114 q4.3.9 d) | 1st question: $y$ is the limit of the sequence $x_n=f^n(x)$ ($f$ applied $n$ times to $x$)
$y$ is also the fixed point of the map $f$, and the sequence converges to this point because $f$ is a contraction.
2nd question: Because we get from this $|x'-y|\lt |x'-y|$ and the only way this can hold is if $x'=y$ |
Prove by induction $ (V^nx)(t) =\int_{0}^{t} \frac{(t-s)^{n-1}}{(n-1)!}x(s)ds $ | It's not a proof for general case, but you can use Cauchy repeated integration formula. (Here is the wikipedia article on it, which contains proof)
https://en.wikipedia.org/wiki/Cauchy_formula_for_repeated_integration |
Every maximal ideal is prime... why not converse? | If $R$ is any integral domain which is not a field, then $(0)$ is a prime ideal in $R$, but is not a maximal ideal. For example, $R=\mathbb{Z}$ is an integral domain which is not a field. |
Fundamental group of $\mathbb{S^n}$ for $n \ge 2$ | The problem with this argument is that there are loops $\alpha:I\to \mathbb{S}^n$ which are surjective, they cover entire sphere. Such loops are known as space-filling curves and they exist for any $n\geq 1$ by the Hahn-Mazurkiewicz theorem. Such loops always pass through (any) $p\in \mathbb{S}^n$ regardless of applied rotation (or any other homeomorphism). And so there is no valid choice of $p$ (as a point that $\alpha$ misses).
However it is the case that every loop is homotopic to a non-surjective loop. You can apply the following reasoning Proving directly that $S^2$ is simply connected: is a surjective loop homotopic to a non-surjective one? to any $S^n$ for $n\geq 2$. And with that missing piece your proof is complete. |
Expectation of conditional event for throwing a fair dice | $P\left(X=2\right)=\frac{3}{6}\times\frac{3}{6}=\frac{1}{4}$
$P\left(Y=1\wedge X=2\right)=\frac{1}{6}\times\frac{3}{6}=\frac{1}{12}$
hence $P\left(Y=1\mid X=2\right)=\frac{4}{12}=\frac{1}{3}$
$P\left(Y=2\wedge X=2\right)=0$ hence $P\left(Y=2\mid X=2\right)=0$
for $n\geq3$:
$P\left(Y=n\wedge X=2\right)=\frac{2}{6}\times\frac{3}{6}\times\left(\frac{5}{6}\right)^{n-3}\times\frac{1}{6}=\frac{1}{36}\times\left(\frac{5}{6}\right)^{n-3}$
hence $P\left(Y=n|X=2\right)=\frac{4}{36}\times\left(\frac{5}{6}\right)^{n-3}=\frac{1}{9}\times\left(\frac{5}{6}\right)^{n-3}$
This enables you to find $\mathbb{E}\left(Y\mid X=2\right)=\sum_{n=1}^{\infty}nP\left(Y=n\mid X=2\right)$
An alternative route:
Working unconditionally we have $\mathbb{E}Y=\frac{1}{6}\times1+\frac{5}{6}\times\left(1+\mathbb{E}Y\right)$
leading to $\mathbb{E}Y=6$.
Working under condition $X=2$ face $3$ will be seen at the first roll
with probability $\frac{1}{3}$. If face $3$ does not appear at the first roll
then from the third roll onward we can go on unconditionally. This is because the condition does not affect the probabilities connected with these rolls.
This gives us equation:
$\mathbb{E}\left(Y\mid X=2\right)=\frac{1}{3}\times1+\frac{2}{3}\times\left(2+\mathbb{E}Y\right)=\frac{17}{3}$ |
Help me to understand the summation notation | It's $$\sum_{i_1,i_2\in \{1,2\}}d_{i_1}e^{-d_{i_1}}d_{i_2}e^{-d_{i_2}}$$
$$=d_1e^{-d_1}d_1e^{-d_1}+d_1e^{-d_1}d_2e^{-d_2}+d_2e^{-d_2}d_1e^{-d_1}+d_2e^{-d_2}d_2e^{-d_2}$$
$$=d_1^2e^{-2d_1}+2d_1e^{-d_1}d_2e^{-d_2}+d_2^2e^{-2d_2}$$
$$=(d_1e^{-d_1}+d_2e^{-d_2})^2$$ |
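The expansion can be double-checked numerically for arbitrary sample values of $d_1, d_2$ (my check, not part of the answer):

```python
import math

d = [0.7, 1.9]  # arbitrary sample values for d_1, d_2

# the double sum over i_1, i_2 in {1, 2}
double_sum = sum(d[i] * math.exp(-d[i]) * d[j] * math.exp(-d[j])
                 for i in range(2) for j in range(2))

# the claimed closed form
square = (d[0] * math.exp(-d[0]) + d[1] * math.exp(-d[1])) ** 2
```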
Show that a map with some properties is closed | This follows from the following lemma:
If $f:X\to Y$ is continuous and each point in $Y$ has a neighborhood $V$ such that the restriction of $f:f^{-1}(V)→V$ is closed, then $f$ is closed.
So if $Y$ is locally compact, what kind of neighborhood could you choose so that the restriction is closed?
Edit: Here is a proof for the lemma above: Let $\mathcal V$ be a cover of $Y$ such that each $y\in Y$ has some $V\in\cal V$ as a neighborhood. It is easy to show that $Y$ is coherent with $\mathcal V$, meaning that a subset $C$ is closed if $C\cap V$ is closed in $V$ for each $V\in\cal V$.
Now let $f:X→ Y$ have the property above. Take a closed $C$ in $X$. We have $f(C)\cap V=f(C\cap f^{-1}(V))$, which is closed in $V$ since $f$ is closed as a map $f^{-1}(V)→V$ for every $V\in\cal V.$ Hence $f(C)$ is closed in $Y.$
How to tell if $\sum_{n=1}^\infty\frac{\ln(n)}{n^2}$ converges using Integral Test? | By integral test we should obtain
$$\int_1^\infty \frac{\ln x}{x^2} dx=\left[-\frac{1+\ln x}{x}\right]_1^\infty=1$$
or by $\ln x=u \implies \frac1x dx=du$
$$\int_0^\infty \frac{u}{e^u} du=\left[-\frac{u+1}{e^u}\right]_0^\infty$$
If you are not forced to use integral test, as an effective alternative, we can use limit comparison test with
$$\sum_{n=1}^\infty\frac{1}{n^p}$$
with $p>1$ such that
$$\frac{\frac{ln(n)}{n^2}}{\frac{1}{n^p}}\to 0$$ |
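A small check of the antiderivative used in the integral test (my addition): $\frac{d}{dx}\left[-\frac{1+\ln x}{x}\right]=\frac{\ln x}{x^2}$, and the improper integral evaluates to $0-(-1)=1$.

```python
import math

def F(x):
    # antiderivative: -(1 + ln x) / x
    return -(1.0 + math.log(x)) / x

def integrand(x):
    return math.log(x) / (x * x)

# symmetric difference quotient of F vs the integrand at sample points
h = 1e-6
deriv_err = max(abs((F(x + h) - F(x - h)) / (2 * h) - integrand(x))
                for x in (1.5, 2.0, 5.0, 10.0))

# the improper integral: lim_{b -> inf} F(b) - F(1) = 0 - (-1) = 1
integral_value = 0.0 - F(1.0)
```

Since the integral is finite, the series converges by the integral test.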
How do you prove a sequence is monotonic? | Hint: If you can find a differentiable function $f$ defined on an interval $(a,\infty)$ such that $a_i = f(i)$, then the sequence $(a_i)$ is eventually monotonic if $f'$ is eventually nonzero (since that would make $f$ eventually increasing or eventually decreasing). If the interval $(a,\infty)$ contains all indices $i$, then the entire sequence is monotonic. |
find the intersection between two sets | There is a small error in the description of the sets $A$, $B$ and $C$. For $x\in A$ you can prove that there exists a $n\in\mathbb Z$ such that $x=7n+28$. Then you have to write
$$
A=\{x\in\mathbb Z~:~\exists n\in\mathbb Z\text{ such that }x=7n+28\}.
$$
The restriction $n\in\mathbb N$ is wrong and $\forall$ is wrong too.
But you can do it better. Because of $28=4\cdot 7$ you can say
$$
A=\{x\in\mathbb Z~:~\exists n\in\mathbb Z\text{ such that }x=7n\}.
$$
Further, you get
$$
B=\{x\in\mathbb Z~:~\exists n\in\mathbb Z\text{ such that }x=8n+4\}
$$
$$
C=\{x\in\mathbb Z~:~\exists n\in\mathbb Z\text{ such that }x=9n\}.
$$
From $x\in A\cap B\cap C$ you can conclude, since $7$, $8$ and $9$ are pairwise coprime (Chinese remainder theorem), that there exists $n\in\mathbb Z$ such that $x=7\cdot(8n+4)\cdot 9=504n+252$. Hence,
$$
A\cap B\cap C\subset\{x\in\mathbb Z~:~\exists n\in\mathbb Z\text{ such that }x=504n+252\}=:D.
$$
Now, consider $x\in D$ and you have to argue why $x\in A\cap B\cap C$ holds, which is easy to see. |
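A brute-force confirmation (my addition) that the three membership conditions pin down exactly the arithmetic progression $504n+252$:

```python
LIMIT = 20000

# integers divisible by 7, congruent to 4 mod 8, and divisible by 9
brute = {x for x in range(-LIMIT, LIMIT)
         if x % 7 == 0 and x % 8 == 4 and x % 9 == 0}

# the claimed description 504n + 252, restricted to the same window
closed_form = {504 * n + 252 for n in range(-50, 50)
               if -LIMIT <= 504 * n + 252 < LIMIT}
```

Note that Python's `%` returns nonnegative residues for negative integers, so the congruence tests are correct on the whole window.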
Parenthesis in Domain | It means that $f$ is $C^2$ on $U$. It is just to specify to a domain on which we have the $C^2$ property.
If $U$ happens to be non-open, then it means that there exists $\widetilde{U}$ an open neighborhood of $U$ and $F\in C^2(\widetilde{U})$ such that $F_{\vert U}=f$. |
What is the logical reason to use a proof by contradiction | There's a subtle distinction here, in that you can prove a negation without using contradiction.
Proof-by-contradiction is what we usually call the use of the double-negation law: "from $\neg \neg P$, I can deduce $P$".
But you're trying to deduce something of the form $\neg P$ ("the set $A$ is not finite"), and there's one obvious way to do that without using double-negation: suppose $P$ and then deduce a contradiction. That is automatically a proof of $\neg P$, without ever having to invoke the axiom $\neg \neg P \to P$.
If you're familiar with the usual definition of $\neg P$ as $P \to \bot$ (where $\bot$ is the "false" symbol), then this becomes clearer. How else would you try and prove $P \to \bot$, without assuming $P$ and showing $\bot$?
So my answer to your question is: you're trying to prove the statement that "if $A$ is finite then I can prove false", and the easiest way to start doing this is to assume $A$ is finite and then just directly prove false. |
Do the generators of the subfields of a field extension $K$ "generate" $K$ in a linear sense? | If $K/\mathbb{Q}$ is cyclic, with Galois group $\mathbb{Z}/n\mathbb{Z}$, then there are as many subfields of $K$ as there are divisors of $n$. Of course unless $n\leqslant 2$ this number is strictly less than $n$, so generators of subfields cannot form a linear basis of $K$, since the dimension of $K$ is $n$. |
Centers of quotients of Lie Groups | Let $Z=Z(G)$. Let $a \in G$ such that $aZ$ is in the center of $G/Z$. Then for all $b \in G$ we have $abZ=baZ$ so that $ba=abz_b$ ($=az_bb$) for some $z_b \in Z$. Thus $bab^{-1}=az_b$. Consider the normal subgroup, $N$, generated by $a$ and $Z$. In particular, $N = \{a^nz \,|\, n \in \mathbb{Z};\;z \in Z\}$. If you can show this is discrete [$N = \cup_{n\in\mathbb{Z}} a^nZ$], you'll have $N \subseteq Z$ (actually $N=Z$) by part (a) and so $a\in Z$ and so $aZ=Z$. |
Prove that $ AA^T=0\implies A = 0$ | Let
$$A=(a_{ij})\implies A^t=(a_{ji})\implies AA^t=(b_{ij})=\left(\sum_{k=1}^ma_{ik}a_{jk}\right)$$
so that
$$0=\sum_{i=1}^nb_{ii}=\sum_{i=1}^n\sum_{k=1}^ma_{ik}^2$$
Complete the proof now. |
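To make the idea concrete: the diagonal entries of $AA^t$ are row-wise sums of squares, so $\operatorname{tr}(AA^t)$ is the sum of the squares of all entries of $A$, and a sum of real squares vanishes only if every entry is $0$. A small numeric illustration (my addition):

```python
A = [[1.0, -2.0, 0.5],
     [0.0, 3.0, -1.0]]
n, m = len(A), len(A[0])

# diagonal entries b_ii of A A^t: sum over k of a_ik^2
diag = [sum(A[i][k] * A[i][k] for k in range(m)) for i in range(n)]
trace = sum(diag)

# sum of squares of all entries of A
sum_of_squares = sum(A[i][k] ** 2 for i in range(n) for k in range(m))
```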
Epsilon delta proof for this function | Take $\varepsilon>0$. Since $\lim_{t\to0}\dfrac{\sin t}t=1$, there is some $\delta'>0$ such that $\lvert t\rvert<\delta'\implies\left\lvert\dfrac{\sin t}t-1\right\rvert<2\varepsilon$. Let $\delta=\sqrt{\delta'}$. Then $$\lvert t\rvert<\delta\implies\lvert t^2\rvert<\delta'\implies\left\lvert\frac{\sin(t^2)}{2t^2}-\frac12\right\rvert<\varepsilon.$$
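Numerically (my check, not part of the proof), $\frac{\sin(t^2)}{2t^2}$ indeed approaches $\frac12$ as $t\to 0$:

```python
import math

def h_fn(t):
    return math.sin(t * t) / (2 * t * t)

# errors |h(t) - 1/2| for t = 10^{-1}, ..., 10^{-5}
errors = [abs(h_fn(10.0 ** (-k)) - 0.5) for k in range(1, 6)]
```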
Equivalent normal vectors to a surface | There are multiple questions. Roughly in the order they're asked:
Your parametrization covers the sphere if $0 \leq \varphi \leq \pi$. As written (with $0 \leq \varphi \leq \pi/2$) you get only the hemisphere $z \geq 0$.
The cross product $r_{\theta} \times r_{\varphi}$ is inward-pointing (as Triatticus notes).
I haven't checked the numerical evaluation of your integral, but using the correct limits of integration causes more cancellation than is currently present. (The integral of $2yz$ over the entire sphere vanishes.)
In your second attempt, using the Cartesian form of the normal, you've
Used a normal vector of length $2$ (rather than a unit normal);
Written $ds = d\theta\, d\varphi$, while the surface element is
$$
ds = \|r_{\theta} \times r_{\varphi}\|\, d\theta\, d\varphi.
$$ |