title | upvoted_answer |
---|---|
Why was Frobenius concerned with the groups which are today called "Frobenius groups"? | There are two equivalent definitions of a Frobenius group: (1) a transitive non-regular permutation group in which no non-identity element fixes more than one point; and (2) a group $G$ with a nontrivial subgroup $H$ such that $H \cap H^g = 1$ for all $g \in G \setminus H$. (The $H$ in (2) is the point stabilizer (the Frobenius complement) in (1).) In some contexts in group theory, the second definition seems the more natural.
I have done a quick bit of searching, and found the following. Burnside started investigating Frobenius groups two or three years before Frobenius proved the main result on the existence of a Frobenius kernel in a paper in 1901. Burnside was almost certainly trying to prove this result himself. The proof by Frobenius is an application of induced characters, which Frobenius had already been studying for some years (he had earlier proved the Frobenius Reciprocity Theorem, for example), and the trivial-intersection property of the conjugates of the subgroup $H$ is particularly meaningful in the context of induced characters, which enabled him to complete the proof.
So I think the answer to your question is probably that he was investigating an existing conjecture of Burnside, which he succeeded in proving.
Disclaimer: I have no particular expertise in the history of mathematics, so don't treat any of the above as gospel truth! |
Order topology continuous functions | Here’s a counterexample if $g$ is not continuous. Let $X=Y=\Bbb R$, let $f(x)=0$ for all $x\in\Bbb R$, and let
$$g(x)=\begin{cases}
-1,&\text{if }x\le 0\\
1,&\text{if }x>0\,.
\end{cases}$$
Then $\{x\in\Bbb R:f(x)\le g(x)\}=\{x\in\Bbb R:0\le g(x)\}=(0,\to)$, which is not closed.
Your argument goes astray at the beginning, because it’s not necessarily true that $\{f(x):f(x)>g(x)\}$ is open. Suppose that $X=Y=\Bbb R$, $f(x)=0$ for all $x\in\Bbb R$, and $g(x)=x$ for all $x\in\Bbb R$; then
$$\{f(x):f(x)>g(x)\}=\{0\}\,,$$
which is not open.
However, the idea of showing that $U=\{x\in X:f(x)>g(x)\}$ is open is a good one. Let $x_0\in U$, $a=g(x_0)$, $b=f(x_0)$. Suppose first that there is some $c\in(a,b)$. Let $V_a=(\leftarrow,c)$ and $V_b=(c,\to)$; $V_a$ is an open nbhd of $a$ in $Y$, and $V_b$ is an open nbhd of $b$. Let $W_a=g^{-1}[V_a]$ and $W_b=f^{-1}[V_b]$; $f$ and $g$ are continuous, so $W_a$ and $W_b$ are open nbhds of $x_0$ in $X$. Let $W=W_a\cap W_b$; $W$ is an open nbhd of $x_0$, and for each $x\in W$ we have $f(x)>c>g(x)$, so $W\subseteq U$.
If there is no such $c$, then $b$ is the immediate successor of $a$ in $Y$. In that case let $V_a=(\leftarrow,b)=(\leftarrow,a]$ and $V_b=(a,\to)=[b,\to)$ and proceed to define $W_a,W_b$, and $W$ as before. If $x\in W$, then $f(x)\in V_b$, so $f(x)>a$, while $g(x)\in V_a$, so $g(x)\le a$, and therefore $f(x)>g(x)$, i.e., $x\in U$. Thus, in this case we also find that $x_0\in W\subseteq U$. In short, every point of $U$ has an open nbhd contained in $U$, so $U$ is open, as desired. |
The Sobolev type embedding for negative Sobolev space | Let $E:W^{1,q'}_0(\Omega)\to L^{q'}(\Omega)$ denote the compact Sobolev embedding operator. Let $I:L^q(\Omega) \to (L^{q'}(\Omega))^*$ be the usual isomorphism between these two spaces. Since $E$ is compact, $E^*$ is compact, and the operator $E^*I:L^q(\Omega) \to (W^{1,q'}_0(\Omega))^*$ is compact, as well. Hence $E^*Iu_k$ converges strongly in $(W^{1,q'}_0(\Omega))^*$.
It remains to check that this convoluted construction does the right thing:
Let $v\in W^{1,q'}_0(\Omega)$ be given. Then
$$
(E^*Iu)(v) = (Iu)(Ev) = \int_\Omega u(x) (Ev)(x) \ dx= \int_\Omega u(x) v(x) \ dx,
$$
thus $E^*I$ maps the function $u$ to the functional $v\mapsto \int_\Omega u v$, which is the standard identification of a function in $L^q$ with a functional in $(W^{1,q'})^*$. In this sense, $u_k$ converges strongly to $u$ in $W^{-1,q}$. |
linear transformation $T_1T_2$ is bijective. Then what can we say about the rank of $T_1$ and $T_2$ | As you've alluded to, the rank of $T_1 T_2$ is at most $\min\{m,n\}$ in general. If $T_1 T_2$ is bijective, its rank is exactly $m$. Hence $n \geq m$. You've required $m \neq n$, so $n > m$.
The rank of any linear map is bounded above by the dimensions of both its domain and target space, so the rank of each $T_i$ is at most $m$; and since $\operatorname{rank}(T_1T_2) \leq \min\{\operatorname{rank} T_1, \operatorname{rank} T_2\}$, each rank is also at least $m$. Hence the rank of $T_i$ is exactly $m$ for $i = 1,2$.
(Note that $T_1 T_2$ is injective if it's bijective! For linear maps between finite dimensional vector spaces, injectivity, surjectivity, bijectivity, and having a trivial kernel are equivalent.) |
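A quick numerical illustration of the claims above, with a made-up pair of maps for $n=3>m=2$ (the matrices are mine, chosen only so that the product is the identity):

```python
# T2 : R^2 -> R^3 (so n = 3 > m = 2) and T1 : R^3 -> R^2, as matrices.
T2 = [[1, 0],
      [0, 1],
      [1, 1]]
T1 = [[1, 0, 0],
      [0, 1, 0]]

def matmul(A, B):
    """Naive matrix product (matrices as lists of rows)."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

product = matmul(T1, T2)
is_identity = product == [[1, 0], [0, 1]]
# T1 T2 is bijective (it is the identity on R^2), so it has rank m = 2;
# consequently both T1 and T2 must have rank exactly 2 as well.
```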
translating a usual proof into natural deduction. | Hint
Start from assumption 1) : $∃x∃y(R(x,y)∧R(y,x))$ and apply double $\exists$-elim :
2) $R(a,b) ∧ R(b,a)$ --- assumed for $\exists$-elim.
From transitivity ($\phi$), by $\forall$-elim, we get :
3) $R(a,b) ∧ R(b,a) → R(a,a)$.
Thus, with 2) :
4) $R(a,a)$.
From irreflexivity ($\psi$ : you have a typo) we get, by $\forall$-elim :
5) $\lnot R(a,a)$.
Now we have a contradiction : $\bot$ and we can close the double $\exists$-elim deriving :
6) $\bot$.
In this way we have : $\lnot ∃x∃y(R(x,y)∧R(y,x))$.
Now we are left with the boring part : to derive the equivalent $∀x∀y(R(x,y) → ¬R(y,x))$. |
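For what it's worth, the whole little derivation can be machine-checked; here is a sketch in Lean 4 (the hypothesis names, and stating transitivity as a curried implication, are my own choices, not part of the original exercise):

```lean
-- From transitivity (φ) and irreflexivity (ψ) we derive
-- ¬∃x∃y (R x y ∧ R y x), mirroring steps 1) to 6) above.
example {α : Type} (R : α → α → Prop)
    (trans : ∀ x y z, R x y → R y z → R x z)  -- φ
    (irrefl : ∀ x, ¬ R x x) :                 -- ψ
    ¬ ∃ x, ∃ y, R x y ∧ R y x :=
  fun ⟨a, b, hab, hba⟩ =>
    -- 2) to 4): instantiate transitivity at a, b, a to get R a a;
    -- 5) to 6): contradict irreflexivity to reach ⊥.
    irrefl a (trans a b a hab hba)
```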
Possible to have a continuous sequence? | As you may have gathered from the comments to your question: It is interesting to ask whether a function (say, $\Bbb{R} \to \Bbb{R}$) is continuous, but (at least in the "default settings", i.e. probably anything you encounter in introductory analysis) it isn't interesting to ask whether a sequence is continuous.
You conjectured that this isn't interesting because it's never continuous... But in fact it's uninteresting because it's always continuous!
To understand why this is, we need to generalize the notion of continuity. I'll start with something you may have heard in your analysis class, which is continuity of real functions:
Definition 1 (continuity in introductory analysis). A function $f : \Bbb{R} \to \Bbb{R}$ is continuous if for every $x_0 \in \Bbb{R}$ and $\epsilon > 0$ there exists $\delta > 0$ so that whenever $x \in (x_0 - \delta, x_0 + \delta)$, we have $f(x) \in (f(x_0) - \epsilon, f(x_0) + \epsilon)$.
Good. Okay, now let's try to explore some properties of continuous functions and hopefully we can get something we can formulate in a more general setting.
Take a continuous function $f$ and some open interval $(a,b)$ (let's say $a<b$ are real numbers). What can be said about the inverse image $f^{-1}\left((a,b)\right)$? First, let's recall what this means:
$$f^{-1}((a,b)) = \{ x \in \Bbb{R} : f(x) \in (a,b) \}$$
That is, it's the set of $x$ values which, after applying $f$, fall into our interval $(a,b)$. (Note that $f$ does not have to be invertible for this to make sense.)
Now, take some $y_0 \in (a,b)$. This point is inside an open interval, so it can't be at the edge - it must be bigger than $a$ and smaller than $b$. In particular, there is some $\epsilon >0$ such that the entire interval $(y_0 - \epsilon, y_0 + \epsilon)$ is contained in $(a,b)$.
Let's say $x_0$ is some point that satisfies $f(x_0) = y_0$ (there could be none, there could be one, or there could be many of those...) In any case, since $f$ is continuous, we have (from definition 1 above) that there exists $\delta>0$ so that the entire $\delta$-neighborhood of $x_0$ goes into the $\epsilon$-neighborhood of $y_0$, which is contained in $(a,b)$.
This means that whenever $f$ sends $x$ somewhere inside the interval $(a,b)$, it sends an entire open interval (maybe a very small one) around $x$ still inside the interval $(a,b)$. In other words, the inverse image $f^{-1}\left((a,b)\right)$ contains an interval around every point inside it.
We can formulate our findings as a theorem:
Theorem 2. If a function $f : \Bbb{R} \to \Bbb{R}$ is continuous, then for any interval $(a,b)$, the inverse image $f^{-1}((a,b))$ is a union of intervals.
In fact, the reverse direction is also true. I'll leave it for you as an exercise (you don't have to do the exercise right now - you'll receive it again in topology class someday):
Exercise 3. If a function $f : \Bbb{R} \to \Bbb{R}$ has for any interval $(a,b)$ that the inverse image $f^{-1}((a,b))$ is a union of intervals, then $f$ is continuous.
The theorem and the exercise together form a necessary and sufficient criterion for the continuity of a real function. This means it can be regarded as an alternative definition of continuity. However, we want to generalize beyond $\Bbb{R}$, since we want to understand what a continuous sequence is (whose domain is $\Bbb{N}$).
What structure that's specific to $\Bbb{R}$ have we used? Intervals. Other spaces, say $\Bbb{N}$, may have nothing you would naturally call an interval. Therefore, let's extract the intervals out of the definition:
Definition 4. An open set in $\Bbb{R}$ is a subset of $\Bbb{R}$ that's either empty or may be represented as a union of open intervals.
Definition 5 (continuity in topology). A function $f$ is continuous if for any open set $U$, the inverse image $f^{-1}(U)$ is also open.
From Theorem 2 and Exercise 3 you can see that the two definitions of continuity coincide: A real function is continuous in the sense of analysis if and only if it's continuous in the sense of topology.
However, the last definition is much easier to generalize: Let's say you're given a function $f : X \to Y$ where $X$ and $Y$ are some sets, and you need to find out whether the function is continuous. The question you should ask is then: What are the topologies? That is, which sets in the spaces $X$ and $Y$ are considered "open"? If one of them is $\Bbb{R}$ then usually Definition 4 (or an equivalent definition) is used.
However, if you want to know whether a sequence $f : \Bbb{N} \to \Bbb{R}$ is continuous, then you also need to know the topology (that is, which sets are open) in $\Bbb{N}$, where there are no intervals. In fact, $\Bbb{N}$ has quite a boring (default) topology:
Definition 6. An open set in $\Bbb{N}$ is just any set.
One way to understand this definition: We want to view $\Bbb{N}$ as a subspace of $\Bbb{R}$. So to get an open set in $\Bbb{N}$, we take an open set in $\Bbb{R}$ and remove any non-natural numbers. If you think about it - any set of natural numbers may be generated this way! (Take a small interval around any natural number you want in your open set, and union all the intervals you took.)
From this we finally get the easy
Corollary 7. Any sequence $f : \Bbb{N} \to \Bbb{R}$ is continuous.
Proof. Let $U$ be an open set in $\Bbb{R}$. Its inverse $f^{-1}(U)$ is some set of natural numbers we know nothing about. But no matter - it's open by Definition 6. So $f$ is continuous by Definition 5. QED. |
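To make Definitions 5 and 6 concrete, here is a small optional computation with the sequence $f(n)=1/n$ (my own example): the preimage of an open interval is just some set of naturals, and any such set is open by Definition 6.

```python
def f(n):
    """The sequence f(n) = 1/n, viewed as a function N -> R (n >= 1)."""
    return 1 / n

def preimage(interval, n_max=1000):
    """Preimage of the open interval (a, b) among n = 1..n_max."""
    a, b = interval
    return {n for n in range(1, n_max + 1) if a < f(n) < b}

U = (0.25, 2.0)
pre = preimage(U)
# pre is {1, 2, 3}: some set of naturals, automatically open (Definition 6),
# so nothing can go wrong -- every sequence is continuous (Corollary 7).
```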
Lower bound for $\sum{a_ib_i}$ | We can easily reach $0$ by setting $a_1 := 1,\ a_i := 0\ \forall i > 1$ and $b_1 := 0,\ b_2 := 1,\ b_i := 0\ \forall i > 2$. Since all terms must be non-negative, this is an actual lower bound.
Also using Cauchy-Schwarz and the fact that $0 \leq a_i, b_i \leq 1$ :
$$
\sum_{i=1}^n a_ib_i \leq \left(\sum_{i=1}^n a_i^2\right)^{1/2}\left(\sum_{i=1}^n b_i^2\right)^{1/2} \leq \left(\sum_{i=1}^n a_i\right)^{1/2}\left(\sum_{i=1}^n b_i\right)^{1/2} = 1
$$
Where the bound can be reached with $a_1 = b_1 = 1, \ a_i = b_i = 0\ \forall i>1$. Overall we have
$$
0 \leq \sum_{i=1}^n a_ib_i \leq 1
$$ |
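Assuming, as the bounds above suggest, that $\sum_i a_i=\sum_i b_i=1$ with $0\le a_i,b_i\le 1$, here is a quick random sanity check of both bounds (the sampling scheme is just a convenience):

```python
import random

def inner_sum(a, b):
    return sum(x * y for x, y in zip(a, b))

def random_simplex_point(n):
    """A random vector with entries in [0, 1] summing to 1."""
    w = [random.random() for _ in range(n)]
    s = sum(w)
    return [x / s for x in w]

random.seed(0)
checks = []
for _ in range(1000):
    n = random.randint(2, 10)
    a = random_simplex_point(n)
    b = random_simplex_point(n)
    checks.append(0 <= inner_sum(a, b) <= 1)

all_within_bounds = all(checks)
# The extreme cases from the answer:
lower = inner_sum([1, 0], [0, 1])   # attains 0
upper = inner_sum([1, 0], [1, 0])   # attains 1
```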
Inverse Fourier transform of $F(w)=\frac{1-\sum_{i=1}^n{a_ie^{-{r_k}{b_i}|w|}}}{|w|}$ | I shall follow the convention that the IFT of $F$ is $\int_{\mathbb R}F(w)e^{2\pi i w t}dw$; rescale if necessary. First of all, a crucial lemma: $$\int_0^{\infty } \frac{e^{-aw} \cos (b w)-e^{-cw} \cos (d w)}{w} \, dw=\frac{1}{2} \log \left(\frac{c^2+d^2}{a^2+b^2}\right)$$
which follows directly by applying Feynman's trick to either $a$ or $c$. Now, for the case $m=1$, the IFT equals
$$I_1=\int_{\mathbb R} \frac{1-\sum_i a_i e^{-r_1 b_i |w|}}{|w|}e^{2\pi i w t}dw=2\int_{\mathbb R^+} \frac{1-\sum_i a_i e^{-r_1 b_i w}}{w}\cos(2\pi t w)dw=\sum_i a_i K_1(r_1 b_i)$$
where we have reflected the part over $\mathbb R^-$, applied Euler's formula, and rewritten $1$ as $\sum_i a_i$. Now the lemma gives
$$K_1(s)=\int_{\mathbb R^+}\frac{1-e^{-s w}}{w}\cos(2\pi t w)dw=\frac{1}{2} \left(\log \left(\frac{s^2}{4 t^2}+\pi ^2\right)-2 \log (\pi )\right)$$
which completes the evaluation of this case. Note that this generalizes readily to the cases $m=2, 3$, namely elementary closed forms of the IFT $I_2=\sum_{i,j} a_i a_j K_2(r_1 b_i, r_2 b_j)$ and $I_3=\sum_{i,j,k} a_i a_j a_k K_3(r_1 b_i, r_2 b_j, r_3 b_k)$, where
$\small K_2(s,y)=\int_{\mathbb R^+}\frac{1-e^{-s w}}{w}\frac{1-e^{-y w}}{w}\cos(2\pi t w)dw=-\frac{1}{2} s \log \left(s^2+4 \pi ^2 t^2\right)+\frac{1}{2} s \log \left((s+y)^2+4 \pi ^2 t^2\right)+\frac{1}{2} y \log \left((s+y)^2+4 \pi ^2 t^2\right)-2 \pi t \tan ^{-1}\left(\frac{2 \pi t}{s+y}\right)+2 \pi t \tan ^{-1}\left(\frac{2 \pi t}{s}\right)-\frac{1}{2} y \log \left(4 \pi ^2 t^2+y^2\right)+2 \pi t \tan ^{-1}\left(\frac{2 \pi t}{y}\right)-\pi ^2 t$
$\scriptsize K_3(s,y,z)=\int_{\mathbb R^+}\frac{1-e^{-s w}}{w}\frac{1-e^{-y w}}{w}\frac{1-e^{-z w}}{w}\cos(2\pi t w)dw=\frac{1}{4} \left(2 \log (s) s^2+\log \left(\frac{4 \pi ^2 t^2}{s^2}+1\right) s^2-2 \log (s+y) s^2-\log \left(\frac{4 \pi ^2 t^2}{(s+y)^2}+1\right) s^2-2 \log (s+z) s^2+2 \log (s+y+z) s^2-\log \left(\frac{4 \pi ^2 t^2}{(s+z)^2}+1\right) s^2-8 \pi t \tan ^{-1}\left(\frac{2 \pi t}{s}\right) s+8 \pi t \tan ^{-1}\left(\frac{2 \pi t}{s+y}\right) s+8 \pi t \tan ^{-1}\left(\frac{2 \pi t}{s+z}\right) s-8 \pi t \tan ^{-1}\left(\frac{2 \pi t}{s+y+z}\right) s-4 y \log (s+y) s-2 y \log \left(\frac{4 \pi ^2 t^2}{(s+y)^2}+1\right) s-4 z \log (s+z) s+4 y \log (s+y+z) s+4 z \log (s+y+z) s-2 z \log \left(\frac{4 \pi ^2 t^2}{(s+z)^2}+1\right) s-8 \pi t y \tan ^{-1}\left(\frac{2 \pi t}{y}\right)+8 \pi t y \tan ^{-1}\left(\frac{2 \pi t}{s+y}\right)-8 \pi t z \tan ^{-1}\left(\frac{2 \pi t}{z}\right)+8 \pi t z \tan ^{-1}\left(\frac{2 \pi t}{s+z}\right)+8 \pi t y \tan ^{-1}\left(\frac{2 \pi t}{y+z}\right)+8 \pi t z \tan ^{-1}\left(\frac{2 \pi t}{y+z}\right)-8 \pi t y \tan ^{-1}\left(\frac{2 \pi t}{s+y+z}\right)-8 \pi t z \tan ^{-1}\left(\frac{2 \pi t}{s+y+z}\right)-8 \pi ^2 t^2 \log (s)-4 \pi ^2 t^2 \log \left(\frac{4 \pi ^2 t^2}{s^2}+1\right)+8 \pi ^2 t^2 \log (2 \pi t)-8 \pi ^2 t^2 \log (y)+2 y^2 \log (y)+8 \pi ^2 t^2 \log (s+y)-2 y^2 \log (s+y)-4 \pi ^2 t^2 \log \left(\frac{4 \pi ^2 t^2}{y^2}+1\right)+y^2 \log \left(\frac{4 \pi ^2 t^2}{y^2}+1\right)+4 \pi ^2 t^2 \log \left(\frac{4 \pi ^2 t^2}{(s+y)^2}+1\right)-y^2 \log \left(\frac{4 \pi ^2 t^2}{(s+y)^2}+1\right)-8 \pi ^2 t^2 \log (z)+2 z^2 \log (z)+8 \pi ^2 t^2 \log (s+z)-2 z^2 \log (s+z)+8 \pi ^2 t^2 \log (y+z)-2 y^2 \log (y+z)-2 z^2 \log (y+z)-4 y z \log (y+z)-8 \pi ^2 t^2 \log (s+y+z)+2 y^2 \log (s+y+z)+2 z^2 \log (s+y+z)+4 y z \log (s+y+z)-4 \pi ^2 t^2 \log \left(\frac{4 \pi ^2 t^2}{z^2}+1\right)+z^2 \log \left(\frac{4 \pi ^2 t^2}{z^2}+1\right)+4 \pi ^2 t^2 \log \left(\frac{4 \pi ^2 t^2}{(s+z)^2}+1\right)-z^2 \log 
\left(\frac{4 \pi ^2 t^2}{(s+z)^2}+1\right)+4 \pi ^2 t^2 \log \left(\frac{4 \pi ^2 t^2}{(y+z)^2}+1\right)-y^2 \log \left(\frac{4 \pi ^2 t^2}{(y+z)^2}+1\right)-z^2 \log \left(\frac{4 \pi ^2 t^2}{(y+z)^2}+1\right)-2 y z \log \left(\frac{4 \pi ^2 t^2}{(y+z)^2}+1\right)+(s+2 \pi t+y+z) (s-2 \pi t+y+z) \log \left(\frac{4 \pi ^2 t^2}{(s+y+z)^2}+1\right)\right)$
There is no essential difficulty in calculating $K_2$ and $K_3$ in spite of their complicated forms (due to which I used Mathematica to generate the result). Take $K_2$ for example: just apply Feynman's trick again on $s$ and use the lemma again. For $K_3$, just differentiate with respect to $s$ twice, use the lemma, and integrate back twice (with good care of boundary values taken). The method described here should work for all $m>3$, since essentially we are integrating nothing but the forms $\int s^k \log(s^2+a^2) ds$ and $\int s^k \tan^{-1}(s)ds$, which will be expressible by $\log, \tan^{-1}$ again (no matter how large $m$ is) without using polylog terms (like in, for instance, this question).
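As a numerical sanity check on the lemma, one can integrate for sample parameters; plain Simpson's rule suffices, since the integrand extends continuously to $w=0$ (with value $c-a$) and decays exponentially. The parameter values below are arbitrary:

```python
import math

def lemma_integrand(w, a, b, c, d):
    if w == 0.0:
        return c - a  # removable singularity: limit of the quotient at 0
    return (math.exp(-a * w) * math.cos(b * w)
            - math.exp(-c * w) * math.cos(d * w)) / w

def simpson(f, lo, hi, n):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (hi - lo) / n
    total = f(lo) + f(hi)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(lo + i * h)
    return total * h / 3

a, b, c, d = 1.0, 2.0, 3.0, 4.0
numeric = simpson(lambda w: lemma_integrand(w, a, b, c, d), 0.0, 60.0, 100_000)
closed_form = 0.5 * math.log((c**2 + d**2) / (a**2 + b**2))
```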
For OP's convenience, this command should generate the expression of $K_3$.
Integrate[(1 - Exp[-s w]) (1 - Exp[-y w]) (1 - Exp[-z w])/w^3*
Cos[2 Pi t w], {w, 0, Infinity},
Assumptions -> s > 0 && y > 0 && z > 0 && t > 0] // FullSimplify
One may use them to write down explicit forms of IFT for certain $m$ and parameters. End of story. |
Has circle $0$ edges or infinite edges? | A side is defined to be a line segment between $2$ points. In a circle, the points are infinitely close together, making it impossible to fit a line segment between them. Therefore a circle, geometrically, has $0$ sides.
If a circle were allowed to have some other number of sides, then theoretically any side could be split into multiple sides meeting at a $180^\circ$ angle, creating a situation where a square has $56$ sides. |
Question about whether there is a unique functional in Riesz Representation Theorem | Yes. The map $u\longmapsto \langle \cdot,u\rangle$ is a (linear) bijection from $V$ onto $V^*$. If $u_1$ and $u_2$ give the same functional, you have $\langle v,u_1\rangle=\langle v,u_2\rangle$ for all $v\in V$; we may write this as $\langle v,u_1-u_2\rangle=0$ for all $v$, in particular for $v=u_1-u_2$ and we get $u_1=u_2$. So the map is injective. The surjectivity of the map is precisely the Riesz Representation Theorem.
The interesting thing is that this works also in infinite dimension, when $V$ is a Hilbert space. |
Cancellation property for continuous functions. | If $f$ is onto, then the continuity condition on $f$ is superfluous.
Since $f$ is onto, so is $f^s$.
So from the condition $f^{s + k} = f^s$ we obtain $f^k\circ f^s = f^{s + k} = f^s = \text{id}\circ f^s$. Since $f^s$ is onto, we can cancel $f^s$ on the right to obtain $f^k = \text{id}$. |
Proof Verification: $\lim_{x \to 0} \cos(1/x) $ does not converge | To complete the proof, we would also need to show that $\cos n$ oscillates.
As an alternative, let us consider, for $n\in \mathbb{Z}$ with $n\to \infty$,
$x_n=\frac1{2\pi n}\to 0\implies \cos \frac1{x_n}=\cos 2\pi n=1$
$x_n=\frac1{\pi (2n+1)}\to 0\implies \cos \frac1{x_n}=\cos ((2n+1)\pi)=-1$
thus the limit doesn't exist since we have two subsequences with different limits. |
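The two subsequences are easy to check numerically (a trivial sketch):

```python
import math

def seq_peaks(n):
    """x_n = 1/(2*pi*n): here cos(1/x_n) = cos(2*pi*n) = 1."""
    x = 1 / (2 * math.pi * n)
    return math.cos(1 / x)

def seq_troughs(n):
    """x_n = 1/((2n+1)*pi): here cos(1/x_n) = cos((2n+1)*pi) = -1."""
    x = 1 / ((2 * n + 1) * math.pi)
    return math.cos(1 / x)

peaks = [seq_peaks(n) for n in range(1, 6)]
troughs = [seq_troughs(n) for n in range(1, 6)]
# Both x_n -> 0, yet the values along them stay at 1 and -1 respectively.
```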
Proving a function with a certain property on a dense set does not have bounded variation | I think I found a counterexample.
(We mostly describe the construction below; the details of the proof are mostly left out. Ask in the comments if some details require additional explanation.)
In order to construct this function $f$ of bounded variation,
we first define the functions
$$
h:[-1,1]\to\Bbb R,
\quad
y\mapsto \operatorname{sgn}(y) |y|^{\frac13},
\\
h_a:[0,1]\to\Bbb R,
\quad
y\mapsto h(y-a),
$$
where $a\in [0,1]$.
One can show that $h_a$ is monotone, continuous
and satisfies
$$
\sup_{y\in[0,1]}|h_a(y)|\leq 1,
\qquad
\limsup_{y\to a}\frac{|h_a(a)-h_a(y)|}{|a-y|^\frac12}=\infty.
$$
Let $\alpha:\Bbb N\to \Bbb Q\cap [0,1]$ be
a bijective function, i.e. an enumeration of the rational numbers in $[0,1]$.
We then define the function
$$
f:[0,1]\to\Bbb R,
\quad
y \mapsto \sum_{n=1}^\infty
2^{-n}h_{\alpha(n)}(y).
$$
The function $f$ is monotone, continuous (as the uniform limit
of continuous functions), and has bounded variation.
Moreover, one can show that
$$
\limsup_{y\to x}\frac{|f(x)-f(y)|}{|x-y|^\frac12}=\infty
$$
holds for all $x\in \Bbb Q\cap [0,1]$, which is a dense set in $[0,1]$. |
Probability of Lottery Match - Two Entries Compared With Two Draws | Let $p=\frac{1}{14000000}$, i.e. the probability of winning in any one draw.
The probability of winning in just one of the two draws is $2p(1-p)$ and the probability of winning in either of the two draws, or both, is $$1-(1-p)^2$$ |
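With the figures above, a quick numerical consistency check ("at least one win" should equal "exactly one" plus "both"):

```python
p = 1 / 14_000_000          # probability of winning any one draw

exactly_one = 2 * p * (1 - p)        # win in just one of the two draws
at_least_one = 1 - (1 - p) ** 2      # win in either draw, or both
both = p ** 2

# Consistency: "at least one" = "exactly one" + "both".
difference = abs(at_least_one - (exactly_one + both))
```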
Question on notation (topology & fiber bundles) | Because someone asked for a follow-up, I'll state what I (think I) know.
If $\tilde{B}\to B$ is the universal cover, then the fundamental group $\pi_1(B)$ of $B$ satisfies $\tilde{B}/\pi_1(B)\cong B$. In particular, $\pi_1(B)$ acts freely on $\tilde{B}$ by deck transformations, so for $x,x'\in\tilde{B}$, $x\sim x'$ if and only if $x$ differs from $x'$ by a deck transformation if and only if $x=g(x')$ for some $g\in\pi_1(B)$.
Now, if $\varphi:\pi_1(B)\to\operatorname{Homeo}(F)$ is a homomorphism, then each $g\in\pi_1(B)$ also determines a homeomorphism $\varphi(g):F\to F$.
So, to form the quotient $\tilde{B}\times_{\pi_1(B)} F$, you're considering the relation $(\tilde{B}\times F)/\sim$ where $(x,y)\sim(x',y')$ if and only if $x,x'\in\tilde{B}$ satisfy $x=g(x')$ and $y,y'\in F$ satisfy $y=\varphi(g)(y')$ for some $g\in\pi_1(B)$.
To me, this is analogous to the formation of a mapping torus (see https://en.wikipedia.org/wiki/Mapping_torus).
As for this question:
Surely this is an example of a more general notation, and based on the excerpt above, it seems like it's used to denote the usual direct product of two spaces modulo an action by some group?
My question, I guess: To what extent is that accurate? What are the properties of the two spaces whose product is being taken? What is the product of the action by which we're quotienting?
I think the notations $\times_G$, $\rtimes_\varphi$, etc. almost always refer to taking some product in some category (see https://en.wikipedia.org/wiki/Product_(category_theory)) and quotienting out by some morphism relationship in that category (https://en.wikipedia.org/wiki/Quotient_category). Such has been my experience, at least. |
Gaussian Elimination SPD matrix | I guess the answer should be based on the Cholesky factorization of an SPD matrix: $A = LL^T$ with $L$ lower triangular, which always exists when $A$ is symmetric positive definite.
Since this is the case, forward and backward substitution give the result.
So there is no need for pivoting. |
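A minimal sketch of the idea in plain Python (the SPD matrix is a made-up example): factor $A=LL^T$ with no pivoting anywhere, then solve by forward and backward substitution.

```python
import math

def cholesky(A):
    """Cholesky factor L (lower triangular) of an SPD matrix, A = L L^T.
    No pivoting is performed at any point."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)   # positive for SPD A
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

def solve_spd(A, b):
    """Solve A x = b via A = L L^T and two triangular solves."""
    L = cholesky(A)
    n = len(b)
    y = [0.0] * n
    for i in range(n):                 # forward substitution: L y = b
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    x = [0.0] * n
    for i in reversed(range(n)):       # backward substitution: L^T x = y
        x[i] = (y[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))) / L[i][i]
    return x

A = [[4.0, 2.0], [2.0, 3.0]]           # symmetric positive definite
b = [1.0, 2.0]
x = solve_spd(A, b)
residual = max(abs(sum(A[i][j] * x[j] for j in range(2)) - b[i])
               for i in range(2))
```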
Relative Velocity and closest approach | Hint: draw a picture. Label the current locations of the ships and the direction of travel of he container ship. If the container ship starts at $(0,0)$, its position at time $t$ is $(\frac {15t}{\sqrt 2},\frac {15t}{\sqrt 2})$. The pilot boat starts at $(0,-20)$. Write an equation that gives the pilot boat position as a function of time and heading, then match it up with the container ship position.
The difference between nautical miles and statute miles is immaterial because the speeds and distances are measured in consistent units. If they were feet it wouldn't matter, either. |
Check my solution for solving this IVP $y''' + 3 y'' + 4y' + 12 y = 0$ | It should be easy for you to check the conditions directly, remembering that the cosine term drops out of the first derivative and the sine term drops out of the other two conditions.
I do get the same result as yours. |
Counting with maximum restriction | A point to note is that you can't have more than five of two different fruits. Using stars and bars, compute the number of ways to choose ten fruits with no restriction. Then compute the number of ways to choose ten fruits that include exactly $6,7,8,9,10$ apples, leaving $4,3,2,1,0$ others. Subtract four times the sum of those because any fruit could be the one with more than $5$. You don't need inclusion-exclusion because of the first sentence. |
Question about an equivalent definition of a simple group | Let $D$ denote the dyadic rationals, i.e. rational numbers of the form $\frac{a}{2^n}$ for integers $a$ and $n \geq 0$. Then I claim that $G = D/\mathbb{Z}$ is a counterexample.
Since $G$ is abelian, every subgroup of $G$ is normal. Let $D_k \subseteq D$ be the subgroup of rational numbers of the form $\frac{a}{2^k}$. It is not hard to show that this is a complete list of subgroups of $D$ containing $\mathbb{Z}$, and it follows that all subgroups of $G$ are of the form $D_k/\mathbb{Z}$. We have $G/(D_k/\mathbb{Z}) \cong D/D_k$, and the isomorphism $D/D_k \cong D/\mathbb{Z}$ is given by the map $\phi:D \rightarrow D$ of multiplication by $2^k$.
Edit: This is the same as egreg's example for p = 2. |
Finding result of composing operations many times | The coefficients are Stirling numbers of the second kind.
We can write
$$P^n=\sum_{k=1}^n a_{n,k}x^k D^k$$
where $D=\frac d{dx}$. Then
$$P^{n+1}=\sum_{k=1}^n a_{n,k}(xD)(x^k D^k)
=\sum_{k=1}^n a_{n,k}(kx^k D^k+ x^{k+1} D^{k+1})$$
so that
$$a_{n+1,1}=a_{n,1},$$
$$a_{n+1,n+1}=a_{n,n}$$
and
$$a_{n+1,k}=a_{n,k-1}+ka_{n,k}$$
These recurrences are the same as for the Stirling numbers, so
$$a_{n,k}=S(n,k).$$ |
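The identification can be confirmed numerically: build $a_{n,k}$ from the recurrence above, compare with the closed form for $S(n,k)$, and test the resulting operator identity $(xD)^n x^m = m^n x^m$ via falling factorials.

```python
from math import comb, factorial

def stirling2(n, k):
    """Closed form for Stirling numbers of the second kind."""
    return sum((-1) ** j * comb(k, j) * (k - j) ** n
               for j in range(k + 1)) // factorial(k)

def coeffs(n):
    """a_{n,k} from the recurrence a_{n+1,k} = a_{n,k-1} + k*a_{n,k}."""
    a = {1: 1}  # P^1 = x D
    for _ in range(n - 1):
        a = {k: a.get(k - 1, 0) + k * a.get(k, 0)
             for k in range(1, max(a) + 2)}
    return a

def falling(m, k):
    """Falling factorial m (m-1) ... (m-k+1)."""
    out = 1
    for i in range(k):
        out *= m - i
    return out

match_stirling = all(coeffs(n)[k] == stirling2(n, k)
                     for n in range(1, 8) for k in range(1, n + 1))
# Since x^k D^k x^m = m(m-1)...(m-k+1) x^m, the expansion of P^n means
# sum_k S(n,k) * falling(m,k) = m^n:
operator_identity = all(
    sum(stirling2(n, k) * falling(m, k) for k in range(1, n + 1)) == m ** n
    for n in range(1, 8) for m in range(1, 9))
```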
Finding multiplicative factors of $p\in[\sqrt{2},2)\cap\mathbb{Q}$ | Pick any rational $q\in(\sqrt{p},\sqrt{2})$.
Then $q^2>p$. Let $r=p/q<q$. So $rq=p$, we've chosen $q$ so that $q\in(0,\sqrt{2})$ and we've chosen $r>0$ with $r<q<\sqrt{2}$.
You can find a specific $q$ as follows.
If $p=\frac{p_1}{p_2}$ with $p_i$ integers, then $2-p\geq\frac{1}{p_2}$.
Find a positive solution to the Pell-like equation $a^2-2b^2=-1$ with $b^2>p_2$.
Then $$2-\frac{a^2}{b^2}=\frac{1}{b^2}<\frac{1}{p_2}=2-p.$$
So $p<\frac{a^2}{b^2}<2$.
Set $q=\frac{a}{b}$. |
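A concrete run of this recipe in exact arithmetic, using the standard recurrence $(a,b)\mapsto(3a+4b,\,2a+3b)$ for solutions of $a^2-2b^2=-1$ starting from $(1,1)$ (the sample $p=3/2$ is mine):

```python
from fractions import Fraction

def pell_solution(min_b_sq):
    """First solution of a^2 - 2 b^2 = -1 with b^2 > min_b_sq."""
    a, b = 1, 1
    while b * b <= min_b_sq:
        a, b = 3 * a + 4 * b, 2 * a + 3 * b
    assert a * a - 2 * b * b == -1
    return a, b

p = Fraction(3, 2)           # an example p in [sqrt(2), 2)
p1, p2 = p.numerator, p.denominator
a, b = pell_solution(p2)     # need b^2 > p2
q = Fraction(a, b)
r = p / q

factored_ok = (r * q == p)
q_in_range = (p < q * q < 2)     # so sqrt(p) < q < sqrt(2)
r_less_than_q = (r < q)
```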
Initial strict monoidal category | Under the forgetful 2-functor $U$ from strict monoidal categories to categories, we get a description of the cube category as an initial object in the comma category $C/U$, whose objects are strict monoidal categories with a functor from your categories $C$ and whose morphisms are strict monoidal functors which commute with the structure functors from $C$ after applying $U$.
Decategorifying by one level for intuition's sake, this is like asking for an initial object in the category of monoids under a set. But this is just the free monoid on that set! More generally, initial objects of the kind described above arise in the wild as the unit $C\to UFC$ of an adjunction.
So we see that this is really a question about the existence of free strict monoidal categories. These can be constructed in basically the same way as free monoids: take as objects, finite strings of objects from the base category, and as morphisms, sequences of morphisms between each object in a string. The monoidal unit is the empty string, etc. It's more or less obvious that this gives an adjoint (technically, a 2-adjoint) to the forgetful 2-functor $U$, with unit the inclusion of length-1 strings, so that your proposed initial objects will all exist.
EDIT: The above isn't quite right for this situation, since the cube category is not free: the one object of $C$ has to be mapped to the monoidal unit $I$. One should handle this by thinking not just about free objects, but about imposing relations on them, in this case the relation $c=I$. This is now an issue of colimits within strict monoidal categories, for which I think the best approach is to generalize the notion of algebraic theory to a 2-algebraic theory, and show that the 2-categories of models of such are cocomplete. In this particular case, it's easy to describe the colimit: $c$s just disappear from strings of objects, so you have objects $x^{\otimes n}$ for every $n\geq 0$, representing the $n$-cube. |
Are there numbers such that A + B = 10A+B? | Hint: Note that $\overline{AB}=10A+B$, where $\overline{xy}$ represents the two-digit number with tens digit $x$ and units digit $y$. E.g. if $x=3$, $y=6$, then $\overline{xy}=36$.
Solution:
\begin{align}
A+B&=10A+B \tag{Given}\\
0&=9A \tag{Subtraction Property}\\
9A&=0 \tag{Symmetric Property}\\
A&=0 \tag{Division Property}
\end{align}
When $A=0$, your number works. However, if $A=0$, it is not a 2-digit number, and thus does not meet the criteria. |
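A brute-force sweep over digit pairs confirms that $A=0$ is the only possibility:

```python
# All digit pairs (A, B) with A + B == 10*A + B.
solutions = [(A, B) for A in range(10) for B in range(10)
             if A + B == 10 * A + B]

only_A_zero = all(A == 0 for A, B in solutions)
count = len(solutions)   # one solution for each choice of B
```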
If AH and BG are angle bisectors, how would I find IJ? | Hint:
Point $I$ is the incenter of $\Delta ABC$. Let $B = (0,0)$, $C = (4,0)$ and $A = (4,3)$. Then, the y-coordinate of $I$ is:
$$\frac{3\cdot0+4\cdot3 + 5\cdot0}{12}$$ |
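The hint's value can be double-checked against the general incenter formula $I=\frac{aA+bB+cC}{a+b+c}$, where $a,b,c$ are the side lengths opposite $A,B,C$:

```python
import math

A, B, C = (4.0, 3.0), (0.0, 0.0), (4.0, 0.0)

def dist(P, Q):
    return math.hypot(P[0] - Q[0], P[1] - Q[1])

a = dist(B, C)   # side opposite A: 4
b = dist(C, A)   # side opposite B: 3
c = dist(A, B)   # side opposite C: 5

incenter_y = (a * A[1] + b * B[1] + c * C[1]) / (a + b + c)
hint_value = (3 * 0 + 4 * 3 + 5 * 0) / 12
```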
Contemporary mathematician one should know about | Wikipedia has a list of 21st-century mathematicians while the book Mathematicians : an outer view of the inner world has the following list in portrait form with their autobiography:
Adebisi Agboola --
Michael Artin --
Michael Francis Atiyah --
Manjul Bhargava --
Bryan John Birch --
Joan S. Birman --
David Harold Blackwell --
Enrico Bombieri --
Richard Ewen Borcherds --
Andrew Browder --
Felix E. Browder --
William Browder --
Lennart Axel Edvard Carleson --
Henri Cartan --
Sun-Yung Alice Chang --
Alain Connes --
John Horton Conway --
Kevin David Corlette --
Ingrid Chantal Daubechies --
Pierre Deligne --
Persi Warren Diaconis --
Simon Donaldson --
Noam D. Elkies --
Gerd Faltings --
Charles Louis Fefferman --
Robert Fefferman --
Michael Freedman --
Israel Moiseevich Gelfand --
William Timothy Gowers --
Phillip Griffiths --
Mikhael Leonidovich Gromov --
Benedict H. Gross --
Robert Clifford Gunning --
Eriko Hironaka --
Heisuke Hironaka --
Friedrich E. Hirzebruch --
Vaughan Frederick Randal Jones --
Nicholas Michael Katz --
Robion Kirby --
Frances Kirwan --
Joseph John Kohn --
János Kollár --
Bertram Kostant --
Harold William Kuhn --
Robert Phelan Langlands --
Peter David Lax --
Robert D. MacPherson --
Paul Malliavin --
Benoit Mandelbrot --
William Alfred Massey --
John N. Mather --
Barry Mazur --
Margaret Dusa McDuff --
Curtis McMullen --
John Willard Milnor --
Maryam Mirzakhani --
Cathleen Synge Morawetz --
David Mumford --
John Forbes Nash, Jr. --
Edward Nelson --
Louis Nirenberg --
George Olatokunbo Okikiolu --
Kate Adebola Okikiolu --
Andrei Okounkov --
Roger Penrose --
Arlie Petters --
Marina Ratner --
Kenneth Ribet --
Peter Clive Sarnak --
Marcus du Sautoy --
Jean-Pierre Serre --
James Harris Simons --
Yakov Grigorevich Sinai --
Isadore Manuel Singer --
Yum-Tong Siu --
Stephen Smale --
Elias Menachem Stein --
Dennis Parnell Sullivan --
Terence Chi-Shen Tao --
Robert Endre Tarjan --
John T. Tate --
William Paul Thurston --
Gang Tian --
Burt Totaro --
Karen Keskulla Uhlenbeck --
Sathamangalam Ranga Iyengar Srinivasa Varadhan --
Michèle Vergne --
Marie-France Vigneras --
Avi Wigderson --
Andrew John Wiles --
Shing-Tung Yau --
Don Zagier. |
Symmetric group acting on polynomial | We have $\sigma(1) = 2$ and $\sigma(2) = 3$, so the first factor $(x_1-x_2)$ becomes $(x_{\sigma(1)}-x_{\sigma(2)}) = (x_2 - x_3)$. Similarly for the rest. Does that clear things up? |
How to prove $X$~$B(n,p)$ and $Y$~$B(m,p)$? | Hint: Use the definition of the conditional density, $f_{X|Y=y}(x)=\frac{f_{XY}(x,y)}{f_Y(y)}$; when $X$ and $Y$ are independent, this reduces to $f_X(x)$. |
Penrose-Moore Inverse is a special case of the Right-Inverse of a matrix? | $A$ being a right inverse of $T$ just means that $TA=I$ where $I$ is the identity matrix on the codomain of $T$. A given matrix may or may not have a right inverse.
The Moore-Penrose inverse $T^+$ of a right-invertible matrix is indeed a right inverse, but the Moore-Penrose inverse is defined for all matrices, including those that have no right inverse, thus it is not a special case of the right inverse.
To see that for a right-invertible matrix, $T^\#$ is indeed a right inverse, you can just calculate:
$$TT^\# = T(T^+ + T^\bot X) = TT^+ + TT^\bot X = I +0X = I$$
Note that this uses the relation $TT^+=I$ which obviously is only true for right-invertible matrices.
Conversely if $T^+$ and $S$ are both right inverses of $T$, then $T(S−T^+)=0$, hence $S−T^+$ takes its values from the kernel, and thus it can be written as $T^\bot X$ for some matrix $X$. |
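Both points are easy to see in a small exact computation (the matrices are made-up examples): for a full-row-rank $T$, the Moore-Penrose inverse $T^+=T^T(TT^T)^{-1}$ is a right inverse, while a rank-deficient $S$ has no right inverse at all even though $S^+$ exists.

```python
from fractions import Fraction as F

# A full-row-rank T (2x3): T^+ = T^T (T T^T)^{-1} is a right inverse.
T = [[F(1), F(0), F(2)],
     [F(0), F(1), F(1)]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def inv2(M):
    """Inverse of a 2x2 matrix (exact rational arithmetic)."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det, M[0][0] / det]]

Tt = transpose(T)
T_plus = matmul(Tt, inv2(matmul(T, Tt)))    # the Moore-Penrose inverse here
TTplus = matmul(T, T_plus)
is_right_inverse = TTplus == [[F(1), F(0)], [F(0), F(1)]]

# A rank-deficient S has no right inverse: its second row is twice its
# first, so every column of S @ X lies in span{(1, 2)} and S @ X can never
# be the identity. Yet the Moore-Penrose inverse of S still exists.
S = [[F(1), F(2)],
     [F(2), F(4)]]
```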
Inverse of a $\ln$ function | If you are allowed to use the Lambert $W$ function, then the procedure is roughly (there are a couple of domain considerations that must be taken into account): \begin{align} 4y&=(-1+2\ln x)x^2=(-1+\ln x^2)x^2\\ \frac{4y}{e}&=\frac{x^2}{e}\ln\left(\frac{x^2}{e}\right)&\text{call }s:=\frac{x^2}{e},\ t:=\frac{4y}{e}\\\frac ts&=\ln s\\ e^{t/s}&=s\\\frac ts e^{t/s}&=t\\\frac ts&=W(t)\\\frac{x^2}{e}=s&=\frac{t}{W(t)}=\frac{4y}{e\cdot W(4y/e)}\\x&=2\sqrt{\frac{y}{W(4y/e)}}\end{align} |
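A round-trip sanity check of the inversion, with a small hand-rolled Newton iteration for the principal branch of $W$ to avoid external dependencies (note that, writing $s=x^2/e$, the radical works out to $x=2\sqrt{y/W(4y/e)}$):

```python
import math

def lambert_w(t, tol=1e-14):
    """Principal branch of W for t > 0, via Newton on w*e^w = t."""
    w = math.log(1 + t)          # a safe starting guess for t > 0
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - t) / (ew * (w + 1))
        w -= step
        if abs(step) < tol:
            break
    return w

x = 3.0
y = (-1 + 2 * math.log(x)) * x * x / 4     # the forward map 4y = (-1+2 ln x) x^2
x_recovered = 2 * math.sqrt(y / lambert_w(4 * y / math.e))
```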
Sum of series $\sum^{10}_{i=1}i\bigg(\frac{1^2}{1+i}+\frac{2^2}{2+i}+\cdots \cdots +\frac{10^2}{10+i}\bigg)$ | The sum to calculate is
$$
S=\sum_{i,j=1}^{10}\frac{j^2i}{i+j}=[\text{rename }i\leftrightarrow j]=\sum_{i,j=1}^{10}\frac{i^2j}{j+i}.
$$
Hence
$$
2S=\sum_{i,j=1}^{10}\frac{i^2j+j^2i}{i+j}=\sum_{i,j=1}^{10}\frac{ij(i+j)}{i+j}=\sum_{i,j=1}^{10}ij=\Big(\sum_{i=1}^{10}i\Big)\Big(\sum_{j=1}^{10}j\Big)=\Big(\sum_{k=1}^{10}k\Big)^2=55^2.
$$
Finally
$$
S=\frac{55^2}{2}=\frac{(50+5)^2}{2}=\frac{2500+500+25}{2}=\frac{3025}{2}.
$$ |
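The identity is easy to confirm by brute force with exact arithmetic:

```python
from fractions import Fraction as F

# S = sum over i,j of j^2 * i / (i + j), for i, j = 1..10
S = sum(F(j * j * i, i + j) for i in range(1, 11) for j in range(1, 11))
print(S)        # 3025/2
```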
What is the Laplace Inverse Transform of $\ln(s)/(s(s+a))$? | We assume that $a>0$.
The inverse Laplace Transform of $F(s)=\frac{\log(s)}{s(s+a)}$ is given by the Bromwich integral
$$\begin{align}
f(t)&\equiv\mathscr{L^{-1}}\{F\}(t)\\\\
&=\frac1{2\pi i}\int_{\gamma-i\infty}^{\gamma+i\infty}F(s)e^{st}\,ds\\\\
&=\frac1{2\pi i}\int_{\gamma-i\infty}^{\gamma+i\infty}\frac{\log(s)}{s(s+a)}\,e^{st}\,ds\\\\
&=\underbrace{\frac1{2\pi ia}\int_{\gamma-i\infty}^{\gamma+i\infty}\frac{\log(s)}{s}\,e^{st}\,ds}_{=g(t)}-\underbrace{\frac1{2\pi ia}\int_{\gamma-i\infty}^{\gamma+i\infty}\frac{\log(s)}{s+a}\,e^{st}\,ds}_{h(t)}\tag1
\end{align}$$
We proceed by evaluating the first integral on the right-hand side of $(1)$ by deforming the Bromwich contour around the branch cut along the negative real axis and the pole at $s=0$. Letting $\epsilon>0$, we find
$$\begin{align}
g(t)&=\frac1{2\pi i a}\left(\int_{-\epsilon}^{-\infty}\frac{\log(|s|)+i\pi}{s}\,e^{st}\,ds-\int_{-\epsilon}^{-\infty}\frac{\log(|s|)-i\pi}{s}\,e^{st}\,ds\right)\\\\
&+\frac1{2\pi i a}\int_{-\pi}^\pi \frac{\log(\epsilon e^{i\phi})}{\epsilon e^{i\phi}}\,e^{\epsilon e^{i\phi} t}\,i\epsilon e^{i\phi}\,d\phi\\\\
&=\frac1a \int_{\epsilon t}^\infty \frac{e^{-s}}{s}\,ds+\frac1a \log(\epsilon)\\\\
&=-\frac1a(\gamma+\log(t))+o(1)
\end{align}$$
where $o(1)\to 0$ as $\epsilon\to 0$. Letting $\epsilon\to 0^+$ yields
$$\bbox[5px,border:2px solid #C0A000]{g(t)=-\frac1a(\gamma+\log(t))}\tag 2$$
It is straightforward to show that $\mathscr{L}\{g\}(s)=\frac{\log(s)}{as}$
.
Next, we evaluate the second integral on the right-hand side of $(1)$ by deforming the Bromwich contour around the branch cut along the negative real axis and the pole at $s=-a$. Letting $\epsilon>0$, we find
$$\begin{align}
h(t)&=\frac1{2\pi i a}\left(\int_0^{-a+\epsilon}\frac{\log(|s|)+i\pi}{s+a}\,e^{st}\,ds-\int_0^{-a+\epsilon}\frac{\log(|s|)-i\pi}{s+a}\,e^{st}\,ds\right)\\\\
&+\frac1{2\pi i a}\left(\int_{-a-\epsilon}^{-\infty}\frac{\log(|s|)+i\pi}{s+a}\,e^{st}\,ds-\int_{-a-\epsilon}^{-\infty}\frac{\log(|s|)-i\pi}{s+a}\,e^{st}\,ds\right)\\\\
&+\frac1{2\pi i a}\int_0^\pi \frac{\log(a)+i\pi +\log\left(1-\frac{\epsilon e^{i\phi}}{a}\right)}{\epsilon e^{i\phi}}\,e^{-at}e^{\epsilon e^{i\phi} t}\,i\epsilon e^{i\phi}\,ds\\\\
&+\frac1{2\pi i a}\int_{-\pi}^0 \frac{\log(a)-i\pi +\log\left(1-\frac{\epsilon e^{i\phi}}{a}\right)}{\epsilon e^{i\phi}}\,e^{-at}e^{\epsilon e^{i\phi}t}\,i\epsilon e^{i\phi}\,ds\\\\
&=-\frac{e^{-at}}a \left(\int_0^{at}\frac{e^s-1}{s}\,ds+\log(a)-\log(\epsilon)\right)\\\\
&-\frac{e^{-at}}a \left(\gamma+\log(t)+\log(\epsilon)\right)\\\\
&+\frac{e^{-at}}a \log(a)+o(1)
\end{align}$$
where $o(1)\to 0$ as $\epsilon\to 0$. Letting $\epsilon\to 0^+$ yields
$$\bbox[5px,border:2px solid #C0A000]{h(t)=-\frac{e^{-at}}a\left(\gamma+\log(t)+\int_0^{at}\frac{e^s-1}{s}\,ds\right)}\tag 3$$
Using Frullani's integral, it is straightforward to show that $\mathscr{L}\{h\}(s)=\frac{\log(s)}{a(s+a)}$
Substituting $(2)$ and $(3)$ into $(1)$ yields
$$\bbox[5px,border:2px solid #C0A000]{f(t)=-\frac1a(\gamma+\log(t))+\frac{e^{-at}}a\left(\gamma+\log(t)+\int_0^{at}\frac{e^s-1}{s}\,ds\right)}$$ |
Finding the 3D coordinates of an unknown point from three known points | It would involve solving a system of 3 equations with 3 unknowns, where the solution $(x,y,z)$ represents the intersection point of the following three 3D spheres:
$$
(x-a_1)^2 + (y-b_1)^2 + (z-c_1)^2 = (D_1)^2\\
(x-a_2)^2 + (y-b_2)^2 + (z-c_2)^2 = (D_2)^2\\
(x-a_3)^2 + (y-b_3)^2 + (z-c_3)^2 = (D_3)^2
$$ |
Joy of Cats Proposition 8.16(4) | The proposition, as written, is true if and only if the forgetful functor $U$ is conservative. Indeed, if there exists $m\colon B\to A$ such that $U(m)$ is an iso but $m$ is not an iso, then $m$ is a monomorphism in $\mathbf{A}$ since $U$ reflects monomorphisms, and $f=U(m)$ is an extremal epimorphism, but it is not extremally generating since it factors through $U(m)$, even though $m$ is not an isomorphism. Conversely, if $U$ is conservative then you can finish your proof easily.
Note that surjective functions in the category of sets coincide with extremal epimorphisms (even without the axiom of choice), and the forgetful functor $\mathbf{Top}\to \mathbf{Set}$ preserves monomorphisms. So if the proposition was true, it would imply that every surjective function from a set to a topological space is extremally generating. But as mentioned in Example 8.17(3), this is only true for discrete topological spaces. Indeed, if $(A,\tau)$ is a topological space with a non-discrete topology, then any surjective function $X\to A$ factors through the underlying function of the non-invertible continuous monomorphism $id_A\colon (A,\mathcal{P}(A))\to (A,\tau)$ (note that this is an example of the situation in the first paragraph).
I suspect the proposition was meant to apply to concretely generating arrows instead. Indeed, if you ask that $m$ be initial in the sense of Definition 8.6(1) then you can easily prove that $U(m)$ being an iso does imply that $m$ is an iso, because the inverse of $m$ must also be an $\mathbf{A}$-morphism. And note that this is compatible with Examples 8.17(3) and (4), which says that surjective functions are concretely generating. |
Weighted average of multiple points | Try the formula
$$(w_1x_1+w_2x_2+w_3x_3, w_1y_1+w_2y_2+w_3y_3)$$
where $w_1,w_2,w_3$ are positive weights adding up to $1$. |
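For example, with three points and (hypothetical) weights $0.5, 0.3, 0.2$:

```python
pts = [(0.0, 0.0), (4.0, 0.0), (0.0, 10.0)]
ws = [0.5, 0.3, 0.2]                  # positive weights summing to 1
cx = sum(w * x for w, (x, _) in zip(ws, pts))
cy = sum(w * y for w, (_, y) in zip(ws, pts))
print(cx, cy)                         # 1.2 2.0
```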
How can I get Maclaurin series for $\frac{x^2 + 3e^x}{e^{2x}}$? | Hint: I would start by simplifying:
$$
\frac{x^2+3e^x}{e^{2x}}=x^2e^{-2x}+3e^{-x}.
$$ |
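A check of this simplification: build a truncated Maclaurin series from the two exponential expansions (coefficients computed exactly with fractions) and compare it against the closed form at a small $x$.

```python
import math
from fractions import Fraction as F

N = 12
coeffs = [F(3 * (-1) ** k, math.factorial(k)) for k in range(N)]   # 3e^{-x}
for k in range(N - 2):                                             # x^2 e^{-2x}
    coeffs[k + 2] += F((-2) ** k, math.factorial(k))
x = 0.1
series = sum(float(c) * x ** k for k, c in enumerate(coeffs))
exact = (x ** 2 + 3 * math.exp(x)) / math.exp(2 * x)
print(coeffs[:4])            # 3, -3, 5/2, -5/2
```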
If $F$ is a finite field with $|F|=k$, and $P_F$ is the set of polynomial functions from $F$ to $F$, then $|P_F|\leq k^k$ | Hint: $P_F$ is a subset of the set of maps of set of $F\rightarrow F$. |
Notation Question - Indicating Asymptotic Values | A lot of times, if we want to express some value very close to $x$, we write $x+\epsilon$, for all $\epsilon>0$. This fits nicely in with a lot of the definitions of limits, and other calculus concepts. |
Poisson process - 2D | Your ideas seem to lead nowhere. Even leaving aside the correctness of (1), its right-hand side implicitly assumes that the Poisson processes are given on the same probability space (how exactly?).
So I will give some ideas, which you, hopefully, will be able to elaborate.
Denote by $\Pi_\lambda$ a Poisson point process in $\mathbb R^2$ with intensity $\lambda$, a random measure of the form $\Pi_\lambda = \sum_{n=1}^\infty \delta_{x_\lambda(n)}$, where $\{x_\lambda(n), n\ge 1\}$ are your Poisson points.
Theorem. For any integrable function $f$,
$$
\lambda^{-1}\int_{\mathbb R^2} f(x) \Pi_\lambda(dx)\to \int_{\mathbb{R}^2}f(x) dx, \ \lambda\to \infty, \tag{1}
$$
in probability.
Proof The convergence $(1)$ is easy to see for indicators of bounded sets (e.g. by Chebyshev or LLN). By linearity, it extends to simple functions with bounded support.
Now for arbitrary integrable $f$ let $(f_n,n\ge 1)$ be a sequence of simple functions with bounded support such that $\int_{\mathbb{R}^2} |f_n(x) - f(x)|dx \to 0$, $n\to\infty$.
Now write for any $\varepsilon>0$
$$
\mathbb{P}\left(\left|\lambda^{-1}\int_{\mathbb R^2}f(x) \Pi_\lambda(dx) - \int_{\mathbb R^2}f_n(x)dx\right|>\varepsilon \right)\\
\le \mathbb{P}\left(\left|\lambda^{-1}\int_{\mathbb R^2}f_n(x) \Pi_\lambda(dx) - \int_{\mathbb R^2}f_n(x)dx\right|>\frac\varepsilon2 \right)\\ + \mathbb{P}\left(\left|\lambda^{-1}\int_{\mathbb R^2}\big(f(x)-f_n(x)\big) \Pi_\lambda(dx)\right|>\frac\varepsilon2 \right). \tag{2}
$$
Using Markov inequality, estimate
$$
\mathbb{P}\left(\left|\lambda^{-1}\int_{\mathbb R^2}\big(f(x)-f_n(x)\big) \Pi_\lambda(dx)\right|>\frac\varepsilon2 \right) \\\le \frac2{\varepsilon\lambda}\mathbb{E}\left[\left|\int_{\mathbb R^2}\big(f(x)-f_n(x)\big) \Pi_\lambda(dx)\right|\right]
\le \frac2\varepsilon \int_{\mathbb R^2}|f(x)-f_n(x)\big|dx.
$$
Finally, first let $\lambda \to \infty$ in $(2)$ and then $n\to\infty$, arriving at the desired statement. |
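The theorem can be illustrated by simulation on the unit square (a sketch; the intensity and the test function $f(x,y)=xy$, with $\int f = 1/4$, are arbitrary choices):

```python
import math
import random

random.seed(0)

def poisson_points(lam):
    # Poisson process of intensity lam restricted to the unit square:
    # a Poisson(lam)-distributed number of points, placed independently, uniformly
    n, t = 0, 0.0
    while True:
        t += random.expovariate(lam)     # exponential inter-arrival times
        if t > 1.0:
            break
        n += 1
    return [(random.random(), random.random()) for _ in range(n)]

lam = 20000.0
estimate = sum(x * y for x, y in poisson_points(lam)) / lam
print(estimate)                          # ~ 1/4, the integral of xy over the square
```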
Need help with notation - Supremum | "Sup" stands for "supremum", not "support" ("support" would usually be abbreviated "supp" instead). So $$\sup_u |\nabla\tau(u)|$$ means the supremum of all values of $|\nabla\tau(u)|$; that is, the smallest number $s$ such that $|\nabla\tau(u)|\leq s$ for all $u$.
The symbol "$\sup\limits_u$" can be read aloud as "(the) sup (over $u$) of" or "the supremum (over $u$) of", where the parts in parentheses are optional and "sup" is pronounced like "soup". |
How do you show 1-forms are differentials of functions? | One can always proceed naively by supposing there is such a function, say, $f(x, y, z)$, differentiating to compute $df = \sum f_{x^a} \,dx^a$, comparing like coefficients, and solving the resulting p.d.e.
In the case of the example function, comparing the $dx^1$ coefficients gives $f_{x^1} = x^1 x^2$, so integrating gives $f(x^1, x^2, x^3) = \frac{1}{2} (x^1)^2 x^2 + g(x^2, x^3)$ for some $g$. Now, what does comparing the other coefficients give?
As Lord Shark the Unknown pointed out in a comment under the original question, if $\alpha = df$ for some $f$, then $d\alpha = d^2 f = 0$, giving a necessary condition for $\alpha$ to be a differential of a function. This has the advantage that the check requires only differentiation, not integration. In our case, we have
$d\alpha = -x^1 dx^1 \wedge dx^2 + x^2 dx^2 \wedge dx^3 + x^3 dx^1 \wedge dx^3 \neq 0$, so we can conclude immediately that $\alpha$ is not exact. This necessary condition is also sufficient if the domain of $\alpha$ is simply connected, but otherwise it need not be. |
Sequence of Events - Basic understanding | If you have any sample space, label it $\Omega$, then the Cartesian product $\Omega{\times}\Omega$ is also a sample space, and so is any $n$-fold Cartesian power $\Omega^n$ (for any positive integer $n$).
Any event drawn from the sample space $\Omega^n$ is also a sequence of $n$ events drawn from the sample space $\Omega$.
For instance, take the sample space of a fair coin toss, $\Omega=\{{\small\rm H},{\small\rm T}\}$. A sequence of the events resulting from two fair coin tosses, such as $({\small\rm T},{\small\rm H})$, is an event from the sample space $\{({\small\rm H},{\small\rm H}),({\small\rm H},{\small\rm T}),({\small\rm T},{\small\rm H}),({\small\rm T},{\small\rm T})\}$, which is $\{{\small\rm H},{\small\rm T}\}^2$. |
What's the asymptotic relation of $\log^{2}n$ and $\sqrt{n}$? | As Carl Schildkraut commented, you did not plot the functions over a sufficiently large range.
In the real domain, equation $$\log^2(x)=\sqrt x$$ has three roots expressed in terms of Lambert function $$x_1=e^{-4 W\left(\frac{1}{4}\right)}\approx 0.442394$$ $$x_2=e^{-4 W\left(-\frac{1}{4}\right)}\approx 4.17708$$ $$x_3=e^{-4 W_{-1}\left(-\frac{1}{4}\right)}\approx5503.66$$ which makes $$\log^2(x)\geq\sqrt x \qquad 0\leq x \leq x_1$$ $$\log^2(x)\leq\sqrt x \qquad x_1\leq x \leq x_2$$ $$\log^2(x)\geq\sqrt x \qquad x_2\leq x \leq x_3$$ $$\log^2(x)\leq\sqrt x \qquad x_3\leq x \leq \infty$$
Concerning function $$f(x)=\frac{\log^2(x)}{\sqrt x}$$ its derivatives are$$f'(x)=-\frac{(\log (x)-4) \log (x)}{2 x^{3/2}}$$ $$f''(x)=\frac{\log (x) (3 \log (x)-16)+8}{4 x^{5/2}}$$ The first derivative cancels for $x=1$ and $x=e^4$. The second derivative test shows that the first solution corresponds to a minimum $(f(1)=0,f''(1)=2)$ and that the second solution corresponds to a maximum $(f(e^4)=\frac{16}{e^2},f''(e^4)=-\frac{2}{e^{10}})$ |
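The largest crossover point $x_3$ is easy to confirm by bisection:

```python
import math

g = lambda x: math.log(x) ** 2 - math.sqrt(x)
lo, hi = 1000.0, 10000.0          # g(1000) > 0 > g(10000)
for _ in range(60):
    mid = (lo + hi) / 2
    if g(mid) > 0:
        lo = mid
    else:
        hi = mid
print(lo)                          # ~ 5503.66; beyond this, sqrt(x) dominates
```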
Evaluating definite integral of a function | Hint:
$$\begin{align}\int_1^{100} dx \frac{f(x)}{x} = \int_1^{10} dx \frac{f(x)}{x} + \int_{10}^{100} dx \frac{f(x)}{x} \end{align}$$
and sub $x=100/u$ in the 2nd integral. |
Why $E[X|\mathcal{G}]=X$ if $X$ is $\mathcal{G}$-measurable? | To show $E[X|\mathcal{G}]$ equals some $Y$. You need to check:
$Y$ is $\mathcal{G}$-measurable;
$\int_A Y(\omega)dP(\omega)=\int_A X(\omega)dP(\omega)$ for all $A\in\mathcal{G}$.
In this case, the claim is $Y=X$ works. (1) holds by assumption and (2) holds trivially. |
Strategy in a blocking game | Since this is an impartial game under the normal play convention, every position has a Sprague-Grundy value. If it is 0, it is a win for the previous player; otherwise, it is a win for the next player.
Indeed, this game is called cram. The Sprague-Grundy value of the 5x5 board is 0. Thus, the second player will always have a winning strategy. |
A formula for higher order derivatives of inverse function | As requested by the OP, I add a link to a paper containing the formula and the bibliographic data (in Japanese). |
Which line does the matrix project onto | The line the map projects onto is the line of fixed points of the linear map. So you have to find the vectors $\begin{pmatrix}
x\\
y
\end{pmatrix}$ for which $$\begin{pmatrix}
1/2 & -1/2 \\
-1/2 & 1/2
\end{pmatrix}\begin{pmatrix}
x\\
y
\end{pmatrix}=\begin{pmatrix}
x \\
y
\end{pmatrix}$$ namely the ones on the line $L \equiv x+y=0$. |
Why is this map continuous? (Cofibrations) | The domain $D$ of the first function is the preimage of the open set $(0,\infty)$ under the continuous map $f(x,t)=u(x)-t$, thus $D$ is open. The closure $\overline D$ is then a subset of $D':=f^{-1}[[0,∞)]$, which is the closed set $D'=\{(x,t)\in X\times I\mid t\le u(x)\}$. If we extend $j$ continuously to $D'$, then we can glue it with the second map, provided that we extended it in a compatible way. The only possible extension is:
$$\bar j=\begin{cases}
H(x,t/u(x)), &\text{ if }t\le u(x),\ 0<u(x)\\
H(x,1)=r(x)=x, &\text{ if }x\in A,\ t=0
\end{cases}$$
Here $\bar j:D'\to X$ restricts to $j$ on $D$, and it is compatible with the second function. Clearly $\bar j$ is continuous on $(X\setminus A)\times I$ and on $\operatorname{int}A\times\{0\}$, so we only need to check continuity for a point $(a,0),\ a\in\partial A.$
Lemma: Let $X$ be a space, $A\subset X$, and $H:X\times I\to X$ a homotopy such that $H(a,t)=a$ for $a\in A.$ If $V$ is open, then there is an open $W\subseteq V$ such that $V\cap A\subseteq W$ and for every $w\in W$ the path $H(w,t)$ is contained in $V$.
Let us take an open neighborhood $V$ around $a=\bar j(a,0)$. We have to find a neighborhood $W\times[0,\delta)$ of $(a,0)$ such that $\bar j[W\times[0,\delta)]\subseteq V$. Now, by the lemma there is an open $W\subseteq V$ such that $H[W\times I]\subseteq V$. This is just the neighborhood we need. |
Second difficulty in understanding the proof of theorem 1.14 in Hungerford. | For ease of understanding. It is clearer what happens in the case $A = A_1 \oplus \cdots \oplus A_n$, so that gets explained first. Then the slight generalization $A \cong A_1 \oplus \cdots \oplus A_n$ via the isomorphism $f$ is discussed; the special case is recovered when $f = \mathrm{id}_A$. |
Combinatorics - how many elements in a product. | Note that there are $\binom{n}{2}$ choices of $j$ and $k$. For each of those choices, the coefficient in the multinomial theorem is
$$
\frac{4!}{\underbrace{0!0!0!\dots2!\dots0!0!\dots2!\dots0!0!}_{n-2\text{ zeroes, and }2\text{ twos}}}
$$
Another Approach
For each of the $\binom{n}{2}$ choices of $j$ and $k$, there are $\binom{4}{2}$ ways to pick two factors to supply $x_j$ and the other two factors to supply $x_k$.
$$
(x_1+x_2+\dots+x_n)(x_1+x_2+\dots+x_n)(x_1+x_2+\dots+x_n)(x_1+x_2+\dots+x_n)
$$ |
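Both counts are easy to verify by expanding the fourth power directly (here with $n=5$ variables):

```python
from collections import Counter
from itertools import product

n = 5
# expand (x_1 + ... + x_n)^4: choose one variable from each of the 4 factors
terms = Counter(tuple(sorted(pick)) for pick in product(range(1, n + 1), repeat=4))
coeff = terms[(1, 1, 2, 2)]        # coefficient of x_1^2 x_2^2
pairs = [m for m in terms if m[0] == m[1] and m[2] == m[3] and m[1] != m[2]]
print(coeff, len(pairs))           # 6 = C(4,2), and 10 = C(5,2) such monomials
```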
Proof of Casorati-Weierstrass? | Your reasoning is pretty much correct. Since $f$ is holomorphic on $D_r(z_0)\setminus\{z_0\}$, and $\lim_{z\to z_0}f(z)=\frac{1}{g(z_0)}-w$ exists, it follows the singularity at $z_0$ is removable, contradicting that it is essential. This is one of the many ways to tell if a singularity is removable.
This is what they mean when they say $f$ is holomorphic at $z_0$; they mean the singularity is removable there, so it can be made holomorphic by an appropriate definition of $f(z_0)$. |
Confusing problem determining the exact formula for area of 'complex' shape | In case this is your diagram:
Think of the area in the middle as:
You have twice the total area of the two sectors, $FEC$ and $BEG$, less the areas of $\triangle PGB$ and $\triangle FQC$, less the area of $\triangle PEQ$. Since the polygons are symmetric, you just have to solve for the area of one of each:
$$\frac12 A=2 A_{FEC}-2A_{\triangle FQC}-A_{\triangle PEQ}\tag{1}$$
WLOG, consider your square to lie on the $xy$-plane, centered at $(0,0)$. Thus, your vertices will be a combination of the points $\left(\pm\frac n2,\pm\frac n2\right)$.
Consider the intersection between the two circles centered at $B,C$. Using the formula for circles, with $r=n$, we know they intersect at:
$$E=\left(0,\frac{1}{2}\left(\sqrt{3}n-n\right)\right)$$
The circles intersect the $x$-axis at the following points:
$$F=\left(-\frac{1}{2}\left(\sqrt{3}n-n\right),0\right)\\
G=\left(\frac{1}{2}\left(\sqrt{3}n-n\right),0\right)$$
For $O=(0,0)$, it's very clear that $FO=EO$, therefore $FE=FO\sqrt2$, and using the cosine law, we get that:
$$\left(\frac{n-\sqrt{3} n}{\sqrt{2}}\right)^2=-2 n^2 \cos (\angle FCE)+n^2+n^2\\
\angle FCE=\frac\pi6=30^\circ$$
Therefore, $$A_{FEC}=\pi n^2 \cdot \frac{30}{360}=\frac{\pi n^2}{12}$$
To find $A_{FQC}$, consider the line $EC:=\frac{1}{2}\left(\sqrt{3}-1\right)n-\sqrt{3}x$, which intersects the $x$-axis at $Q=\left(\frac{\left(\sqrt{3}-1\right)n}{2\sqrt{3}},0\right)$. Since we already know $F$, then $FQ=\frac{n}{\sqrt{3}}$. Using the distance formula tells us that $QC=\frac{n}{\sqrt{3}}$, thus $A_{FQC}$ is:
$$A_{FQC}=\frac12\cdot n\cdot \frac{n}{\sqrt{3}}\cdot \sin(30^\circ)=\frac{n^2}{4 \sqrt{3}}$$
To find $A_{\triangle PEQ}$, notice that $PQ=2QO=\frac{\left(\sqrt{3}-1\right) n}{\sqrt{3}}$. Solving for the length of $EQ$, tells us that $\triangle PEQ$ is equilateral, therefore:
$$A_{\triangle PEQ}=\frac12 \left(\frac{\left(\sqrt{3}-1\right) n}{\sqrt{3}}\right)^2\cdot \sin \frac \pi3=\frac{1}{6} \left(2 \sqrt{3}-3\right) n^2$$
Going back to $(1)$, we now get that the area of the figure in the middle is:
$$A=\frac{1}{3} \left(-3 \sqrt{3}+\pi +3\right) n^2 \tag{2}$$
Consider the circle centered at $B:=\frac{1}{2}\left(\sqrt{3n^2-4nx-4x^2}-n\right)$, for $n=2$, the area in question, for $G=\frac{1}{2} \left(2 \sqrt{3}-2\right)$, should be:
$$A=4\int_0^G \frac{1}{2} \left(\sqrt{-4 x^2-8 x+12}-2\right) \, dx$$
Which evaluates to:
$$A=4\cdot\left(\frac{1}{3} \left(-3 \sqrt{3}+\pi +3\right)\right)$$
which is exactly what we get from $(2)$ |
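The closed form can also be checked against a straightforward numerical quadrature of the integral above (composite Simpson's rule, $n=2$):

```python
import math

def simpson(f, a, b, n=2000):          # composite Simpson's rule, n even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

G = math.sqrt(3) - 1
f = lambda x: 0.5 * (math.sqrt(-4 * x * x - 8 * x + 12) - 2)
area = 4 * simpson(f, 0.0, G)
closed_form = 4 * (-3 * math.sqrt(3) + math.pi + 3) / 3
print(area, closed_form)               # both ~ 1.2606
```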
Divide an equilateral triangle into at least $100$ regions? | Let us start with one line, parallel to each side of the triangle. Consider the following diagram:
$$n=3$$
By drawing 3 lines, we split it up into 7 segments.
Now what we can do is split up the inner triangle with the same logic:
$$ n = 6 $$
Note that this splits up the inner triangle into another 7 parts. In addition to this, it also adds 6 other segments into the triangle, between the first and second sets of lines.
This means that with $6$ lines, we have $7 + 7 + 6 = 20 $ segments
It is obvious that if we add another 3 lines, one each parallel to the sides that it will split up the inner triangle into another 7 segments.
$$n=9$$
In addition to this 7, we see again, it adds another 6 segments, next to the 6 created after the previous step. And another 6 is also added, inside 1st set of lines but outside the second set.
Therefore, here we have $20 + 7 + 6 + 6 =39$ segments.
At this point we can put together a formula for this. Let $n$ be the number of lines divided by 3 and $f(n)$ be the number of segments. (We will work in sets of 3 as it is easier.) Then we have $$f(1) = 7,\quad f(2)=20,\quad f(3) = 39$$
For each addition of 1 in $n$, the inner triangle is split into 7. We also see an increase in multiples of 6.
We know that when $n=2$, we have $7 + 7 + 6$ segments.
When $n=3$, we have $7+7+7+6+6+6$ segments.
We can see a pattern here.
Thus, we can make the following equation:
$$ f(n) = 7n + 6(n-1) + 6(n-2) + \cdots + 6\cdot 1,\quad n>0$$
$$\implies f(n) = 7n + 6\big((n-1) + (n-2) + \cdots + 1\big) = 7n + 6\cdot\frac{(n-1)n}{2} = 7n + 3n(n-1)$$
So, now we have $$f(n) = 3n^2 + 4n$$
Now we want to investigate the case with 100 segments.
Let $f(n) = 100$
Then $$ 100 = 3n^2 + 4n \implies n \approx 5.14$$
Therefore, we need at least $n=6$ for 100 segments (as $n$ must be a whole number, and $f(5)=95<100$ while $f(6)=132$).
Therefore, we need $3(6) = 18$ lines for at least 100 segments in the triangle. |
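The count can be double-checked in a few lines, fitting the quadratic through the hand counts $f(1)=7$, $f(2)=20$, $f(3)=39$ above:

```python
def f(n):
    # 7 new inner-triangle regions per stage, plus rings of 6 extra segments
    return 3 * n * n + 4 * n

assert [f(k) for k in (1, 2, 3)] == [7, 20, 39]     # matches the hand counts
n = 1
while f(n) < 100:
    n += 1
print(n, 3 * n)        # sets of lines needed, and the total number of lines
```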
Number of solutions of an equality with absolute value operator | You can solve this by "going from the outside to the inside" of the equation and analyzing the different cases.
For your first example, it has to be true that
$$||x-1|-2|\in\{8,0\},$$
for the $0$ case, it follows that
$$|x-1|=2\iff x\in\{3,-1\}.$$
For the $8$ case, it follows that
$$|x-1|-2=\pm 8\implies |x-1|\in\{10,-6\},$$
which is only possible for
$$|x-1|=10\implies x-1=\pm 10,$$
which leads to $$x\in\{11,-9\}.$$
So, there are $4$ solutions in total, namely
$$x\in\{-9,-1,3,11\}.$$ |
Eigen value and eigen vectors | The characteristic polynomial is
\begin{align}
\det\begin{pmatrix}
0-X & 1 & 0 \\
0 & 0-X & 1 \\
1 & -3 & 3-X
\end{pmatrix}
&=
-X\det\begin{pmatrix}-X & 1 \\ -3 & 3-X\end{pmatrix}
-\det\begin{pmatrix}0 & 1 \\ 1 & 3-X\end{pmatrix}
\\
&=-X(-3X+X^2+3)+1
\\[6px]
&=-X^3+3X^2-3X+1=(1-X)^3
\end{align}
There cannot exist three linearly independent eigenvectors: the matrix would be diagonalizable and similar to
$$
\begin{pmatrix}1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1\end{pmatrix}
$$
but the identity matrix is only similar to itself.
Actually, if we compute the rank of $A-I$ (where $A$ is the given matrix), we get, with Gaussian elimination,
$$
A-I=\begin{pmatrix}
-1 & 1 & 0 \\
0 & -1 & 1 \\
1 & -3 & 2
\end{pmatrix}
\to
\begin{pmatrix}
-1 & 1 & 0 \\
0 & -1 & 1 \\
0 & -2 & 2
\end{pmatrix}
\to
\begin{pmatrix}
-1 & 1 & 0 \\
0 & -1 & 1 \\
0 & 0 & 0
\end{pmatrix}
$$
so the rank is $2$ and the eigenspace relative to $1$ has dimension $3-2=1$. No set of two eigenvectors can be linearly independent. |
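These conclusions can be verified numerically: with $A$ the given matrix, Cayley-Hamilton and $(1-X)^3$ give $(A-I)^3=0$ while $(A-I)^2\neq 0$, and $(1,1,1)^T$ spans the one-dimensional eigenspace.

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A = [[0, 1, 0], [0, 0, 1], [1, -3, 3]]
N = [[A[i][j] - int(i == j) for j in range(3)] for i in range(3)]   # A - I
N2 = matmul(N, N)
N3 = matmul(N2, N)
print(N3)                                        # the zero matrix
print(matmul(N, [[1], [1], [1]]))                # (1,1,1) is an eigenvector
```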
Find the distance between two lines | A point on $L_1$ has the form $(-1,2-t,t)$ and a point on $L_2$ has the form $(1+s,2-s,1)$. The square of the distance between these points is
$$ D = (2+s)^2 + (s-t)^2 + (t-1)^2.$$
The problem is to find the points where $D$ is minimized. $$\frac{\partial D}{\partial s} = 2(2+s) + 2 (s-t) = 4 + 4s - 2t
$$ $$\frac{\partial D}{\partial t} = -2(s-t) + 2(t-1) = -2s + 4t - 2.$$
The critical point is where both partial derivatives vanish, i.e., when $s = -1$ and $t=0$.
Plug these back into $D$ to get the minimum square distance $D = 1^2 + 1^2 + 1^2 = 3$. The distance between the lines is $\sqrt{3}$. |
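A quick check of the critical point and the resulting distance:

```python
def D(s, t):
    # squared distance between the parametrized points on L1 and L2
    return (2 + s) ** 2 + (s - t) ** 2 + (t - 1) ** 2

s, t = -1, 0
assert 4 + 4 * s - 2 * t == 0 and -2 * s + 4 * t - 2 == 0   # both partials vanish
print(D(s, t), D(s, t) ** 0.5)                              # 3 and sqrt(3)
```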
Compare analytical and numerical Sine Transform | int const Pi = 3.14159265359;
The above is equivalent to int const Pi = 3; as far as C++ is concerned, which is not a good choice when you care about precision.
Try the following, instead, which also adds more decimals, since the double type has about 16 significant digits (at least if the implementation is IEEE-754 compliant):
double const Pi = 3.14159265358979324;
[ EDIT ] for (int i = 0; i <= N; i++) {
The above will run the loop $N+1$ times due to the <= inclusive comparison. But the arrays are only allocated for $N$ elements double *X = new double[N]; so the last iteration invokes undefined behavior (out-of-bounds write). Everything that happens from that point on is technically undefined by the language standards. In practice, the code is corrupting some random memory locations which may, or may not, affect the end results. |
How to find eigenvalues of the following block circulant matrix | $$\det(B) = \det(A-C)^{N-1}\cdot\det(A+(N-1)C)$$
I fiddled with numerics until I found this formula. I have no proof other than it works for (lotsa) values I tested.
Using this you can get the eigenvalues via the characteristic equation:
$$0 = \det(B - \lambda I_{PN} ) = \det((A-\lambda I_P)-C)^{N-1}\cdot\det((A-\lambda I_P)+(N-1)C)$$ |
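For what it is worth, the formula checks out exactly in small cases. This sketch assumes $B$ is the $PN\times PN$ block circulant with $A$ in every diagonal block and $C$ in every off-diagonal block; the test matrices are arbitrary.

```python
from fractions import Fraction as F

def det(M):
    # exact determinant via fraction-based Gaussian elimination
    M = [[F(x) for x in row] for row in M]
    n, sign, d = len(M), 1, F(1)
    for col in range(n):
        piv = next((r for r in range(col, n) if M[r][col] != 0), None)
        if piv is None:
            return F(0)
        if piv != col:
            M[col], M[piv] = M[piv], M[col]
            sign = -sign
        d *= M[col][col]
        for r in range(col + 1, n):
            fac = M[r][col] / M[col][col]
            M[r] = [a - fac * b for a, b in zip(M[r], M[col])]
    return sign * d

def block_matrix(A, C, N):
    # A on the diagonal blocks, C in every off-diagonal block
    p = len(A)
    B = [[0] * (p * N) for _ in range(p * N)]
    for bi in range(N):
        for bj in range(N):
            blk = A if bi == bj else C
            for i in range(p):
                for j in range(p):
                    B[bi * p + i][bj * p + j] = blk[i][j]
    return B

A = [[2, 1], [0, 3]]
C = [[1, 0], [2, 1]]
N = 3
B = block_matrix(A, C, N)
AmC = [[a - c for a, c in zip(ra, rc)] for ra, rc in zip(A, C)]
ApNC = [[a + (N - 1) * c for a, c in zip(ra, rc)] for ra, rc in zip(A, C)]
print(det(B), det(AmC) ** (N - 1) * det(ApNC))    # both 256
```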
Prove that ${\{f_n\}}_{n\in\Bbb{N}}$ has a subsequence that converges uniformly to a continuous function on $[0,1]$ | Claim: The sequence $(f_n)_{n\in\mathbb N}$ is equicontinuous. That is, for each $x\in[0,1]$ and $\varepsilon>0$, there exists some $\delta>0$ such that if $y\in[0,1]$ and $|y-x|<\delta$, then $$|f_n(y)-f_n(x)|<\varepsilon\quad\text{for every $n\in\mathbb N$.}$$
Proof: Fix $n\in\mathbb N$. First, by Hölder's inequality, one has that for any $x\in[0,1]$ and $y\in[0,1]$ such that $x\geq y$:
\begin{align*}
\int_{[y,x]}|f_n'|\,\mathrm dt=&\,\int_{[y,x]}|f_n'|\times 1\,\mathrm dt\leq\left(\int_{[y,x]}|f_n'|^4\mathrm dt\right)^{1/4}\left(\int_{[y,x]}1^{4/3}\mathrm dt\right)^{3/4}\\
\leq&\,\left(\int_{[0,1]}|f_n'|^4\mathrm dt\right)^{1/4}\left(x-y\right)^{3/4}\leq\sqrt[4]{7(x-y)^{3}}.
\end{align*}
Second, absolute continuity implies that
$$f_n(x)=f_n(0)+\int_{[0,x]}f_n'\,\mathrm dt=13+\int_{[0,x]}f_n'\,\mathrm dt\quad\text{for every $x\in[0,1]$}.$$ Now, for any $x\in[0,1]$ and $y\in[0,1]$ such that $x\geq y$,
$$|f_n(x)-f_n(y)|=\left|\int_{[y,x]}f_n'\,\mathrm dt\right|\leq\int_{[y,x]}|f_n'|\,\mathrm dt\leq\sqrt[4]{7(x-y)^{3}}.$$ Since $n\in\mathbb N$ was arbitrary and the function $y\mapsto\sqrt[4]{7(x-y)^{3}}$ is continuous on the interval $[0,x]$ for any $x\in(0,1]$, the equicontinuity of the sequence $(f_n)_{n\in\mathbb N}$ follows. $\blacksquare$
Claim: The sequence $(f_n)_{n\in\mathbb N}$ is pointwise bounded. That is, for each $x\in[0,1]$, there exists a constant $M_x\geq 0$ such that $$|f_n(x)|\leq M_x\quad\text{for each $n\in\mathbb N$}.$$
Proof: Fix $x\in[0,1]$. For any $n\in\mathbb N$, one has that $$|f_n(x)|\leq13+\int_{[0,x]}|f_n'|\,\mathrm dt\leq13+\sqrt[4]{7x^3}.$$ Therefore, the sequence $(f_n)_{n\in\mathbb N}$ is pointwise bounded. $\blacksquare$
Given that $[0,1]$ is a compact Hausdorff space, the Arzelà–Ascoli theorem implies that the set $$\{f_n\,|\,n\in\mathbb N\}$$ is precompact in $C[0,1]$ with respect to the supremum norm. The existence of a uniformly convergent subsequence readily follows. |
Minimizing RSS for model with missing observations. Dummy variable vs Dropping observations | In model $2$, let's call $\bar{y}_m$ the mean of the $y_i$ for which $x_i$ is missing, and similarly $RSS_m$ their residual sum of squares, while $RSS_p$ is the residual sum of squares where $x_i$ is present. Clearly $RSS_2=RSS_p+RSS_m$.
$RSS_p=\sum\limits_\text{present} (y_i - \beta_1-\beta_2 x_i)^2$ and this is minimised when equal to $RSS_1$. We can achieve by setting $\beta_1$ and $\beta_2$ as in model $1$
$RSS_m=\sum\limits_\text{missing} (y_i - \beta_1-\beta_3)^2$ and this is minimised when $\bar{y}_m = \beta_1+\beta_3$. We can achieve this minimum for any $\beta_1$ by setting $\beta_3 = \bar{y}_m-\beta_1$
Since the minimum of a sum is at least the sum of the minima:
$RSS_2=RSS_p+RSS_m$ is minimised in model $2$ by using the $\beta_1$ and $\beta_2$ found in model $1$ together with $\beta_3 = \bar{y}_m-\beta_1$. |
derive formula for height of tower on a hill | We have $$\tan A=\frac{t+h}d\iff d=\frac{t+h}{\tan A}$$
Similarly, $$\tan B=\frac hd\iff d=\frac h{\tan B}$$
Compare the two values of $d$ |
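Eliminating $d$ between the two relations gives $t=h\left(\frac{\tan A}{\tan B}-1\right)$. A numeric sanity check (the values $h=30$, $t=20$, $d=40$ are made up):

```python
import math

h, t, d = 30.0, 20.0, 40.0
A = math.atan((t + h) / d)     # angle of elevation to the top of the tower
B = math.atan(h / d)           # angle of elevation to the top of the hill
t_rec = h * (math.tan(A) / math.tan(B) - 1)
print(t_rec)                   # recovers t = 20
```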
Differentiability at a point $(0,0)$ | I'm assuming you mean to set $f(0,0) = 0$. In that case, the function most definitely is continuous, so trying to show it's not won't work :)
If it were differentiable, the derivative would have to be given by the matrix
$$Df(0,0)=\begin{bmatrix}\frac{\partial f}{\partial x}(0,0) & \frac{\partial f}{\partial y}(0,0)\end{bmatrix}.$$
Both these partial derivatives are $0$ (note that $f(x,0) = f(0,y) = 0$ for all $(x,y)$). On the other hand, if we try to compute the directional derivative in the direction of the vector $\mathbf v=(2,1)$ we obtain
$$\lim_{t\to 0} \frac{f(2t,t)-f(0,0)}{t} = \lim_{t\to 0} \frac{6t^4}{t|t|^3},$$
which might look like $6$, but in fact does not exist.
Caveat: I do not know what sort of course you're taking. In some courses, directional derivatives require unit vectors; in other courses, they do not. Either way, it doesn't matter here.
But it's a basic Theorem that if $f$ is differentiable at $(0,0)$ we can compute the directional derivative at in direction $\mathbf v$ by taking $Df(0,0)\mathbf v$, which in this case would be $0$ for all vectors $\mathbf v$.
Thus, $f$ cannot be differentiable at $(0,0)$. |
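The two one-sided limits of the quotient above can be seen numerically:

```python
g = lambda t: 6 * t ** 4 / (t * abs(t) ** 3)
print([g(10.0 ** -k) for k in (2, 4, 6)])     # ~ 6.0 from the right
print([g(-(10.0 ** -k)) for k in (2, 4, 6)])  # ~ -6.0 from the left
```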
Selfnormalizing sub-algebra and direct sum decomposition | In Humphreys book it is proved long before Theorem 18.3, that every Cartan subalgebra $H$ is self-normalizing. Also, there is the following exercise in Humphreys book (and since you say that $H$ is abelian, I suppose that this is what you want):
Exercise 5: If L is semisimple, H a maximal toral subalgebra, prove that H is self-normalizing.
The solution is online here. |
Compute Integral over Implicit Domain | This is best understood in probabilistic terms.
Since the pdf of the standard normal distribution is given by
$$ f(x)=\frac{1}{\sqrt{2\pi}}e^{-x^2/2} \tag{1}$$
we have that
$$ P = \frac{1}{(2\pi)^{n/2}}\int_{D}e^{-(x_1^2+\ldots+x_n^2)/2}\,d\mu \tag{2}$$
is the probability that the sum of $n$ independent standard normal variables lies in $[-\sqrt{n},\sqrt{n}]$.
This sum is just a normal variable with mean zero and $\sigma^2=n$, hence:
$$ P = \frac{1}{\sqrt{2n\pi}}\int_{-\sqrt{n}}^{\sqrt{n}}\exp\left(-\frac{x^2}{2n}\right)\,dx =\frac{1}{\sqrt{2\pi}}\int_{-1}^{1}e^{-y^2/2}\,dy=\operatorname{Erf}\left(\frac{1}{\sqrt{2}}\right)\tag{3}$$
and:
$$ \int_{D}e^{-(x_1^2+\ldots+x_n^2)/2}\,d\mu = (2\pi)^{n/2}\operatorname{Erf}\left(\frac{1}{\sqrt{2}}\right).$$ |
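The probabilistic identity is easy to confirm by simulation (here with $n=5$):

```python
import math
import random

random.seed(1)
n, trials = 5, 200000
hits = sum(
    abs(sum(random.gauss(0.0, 1.0) for _ in range(n))) <= math.sqrt(n)
    for _ in range(trials)
)
print(hits / trials, math.erf(1 / math.sqrt(2)))   # both ~ 0.6827
```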
Matrix representation of a partition | The important component of the question here seems to be: can we define a natural binary operation on the set P of partitions of $\{1,2,\ldots,n\}$ to form a group G? Cayley's Theorem tells us that if G exists, it is isomorphic to a subgroup of the symmetric group on G. So G will be isomorphic to a group of permutation matrices.
However, finding a natural binary operation on P is going to be tricky. Of course, P is just a set, and it would be possible to construct some highly-contrived binary operation on P, but typically it would not preserve the structure of the partitions.
As an off-the-top-of-my-head example of why I think it should be tricky: |P| is given by the Bell Numbers, which can be a prime number, whence G must be a cyclic group and each non-identity element of G must somehow generate G. |
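For reference, the first few Bell numbers, computed via the Bell (Aitken) triangle; note that |P| is already prime for $n=2$ and $n=3$:

```python
def bell(n):
    # Bell numbers via the Bell (Aitken) triangle
    row = [1]
    for _ in range(n):
        nxt = [row[-1]]
        for x in row:
            nxt.append(nxt[-1] + x)
        row = nxt
    return row[0]

print([bell(k) for k in range(6)])    # [1, 1, 2, 5, 15, 52]
```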
Recurrency procedure for carpet | I have a way to do this without recursion, but again, I'm not sure if it meets the requirements of your assignment. This is far too complicated for me to type in a comment box, but it's a provisional answer only.
I would do it with a queue. First draw a square of side $S$, centered at the origin. Now for $n=0,1,2,3$ enqueue a record of the form
$$\left(\frac S2, x_n, y_n, n, 2\right)$$ These are to be interpreted as $$(side,center_x, center_y, type, stage),$$
where
$$x_n=\frac{3S\sqrt2}{4}\cos\left(\frac{n\pi}{2}+\frac{\pi}{4}\right)\\
y_n=\frac{3S\sqrt2}{4}\sin\left(\frac{n\pi}{2}+\frac{\pi}{4}\right)
$$
Now, while the queue is not empty, do the following:
Deque a record
Draw the square
If the next stage would exceed the limit, dequeue another record
Otherwise, use the same sort of procedure to enqueue $3$ more records
In the last step, we reduce the current side by a factor of $2$ and increase the stage by $1$. We get the centers of the squares in much the same way as in the first step. First, pretend that the current square is centered at the origin, so we can use exactly the same formula. Then just add the $x$ and $y$ coordinates of the current center to the computed $x$ and $y$ coordinates.
The only wrinkle is that we don't add a square of the type opposite to the current type to the queue. (I made an earlier edit saying this is unnecessary, but I was wrong.)
Your computer graphics system may not allow for negative coordinates, but this is no problem. Just add the $x$ and $y$ coordinates of the first square to the $x$ and $y$ coordinates of the centers of the first $4$ squares in the queue.
Mathematically, it looks like this will draw the squares counter-clockwise, but since the $y$-coordinate in computer graphics increases as the point moves down, I believe this will draw them clockwise on the screen.
This should draw the big square, then the $4$ stage $2$ squares, then $3$ stage $3$ squares touching each stage $2$ square, and so on. When the stage $3$ squares are drawn, first $3$ will be drawn, in clockwise order, around one of the stage $2$ squares, then we will move clockwise to the next stage $2$ square and draw $3$ stage $3$ squares touching it, and so on.
This worked for me:
from math import sin, cos, sqrt, pi
import tkinter as tk
from argparse import ArgumentParser
from collections import namedtuple

Square = namedtuple('Square', 'side x y type stage'.split())

parser = ArgumentParser()
parser.add_argument('side', type=int, help='side of big square in pixels')
parser.add_argument('stages', type=int, help='number of stages')
parser.add_argument('--color', help='string recognized by Tk', default='red')
args = parser.parse_args()

root = tk.Tk()
canvas = tk.Canvas(root, height=1000, width=1000)
canvas.pack()
x0, y0 = int(canvas['width'])//2, int(canvas['height'])//2

def drawSquare(side, x, y):
    canvas.create_rectangle(x - side//2, y - side//2, x + side//2, y + side//2,
                            fill=args.color, outline=args.color)

drawSquare(args.side, x0, y0)

Q = []
for n in range(4):
    theta = n*pi/2 + pi/4
    r = 3*sqrt(2)*args.side/4
    x = x0 + r*cos(theta)
    y = y0 + r*sin(theta)
    Q.append(Square(side=args.side//2, x=x, y=y, type=n, stage=1))

while Q:
    sq = Q.pop(0)
    drawSquare(sq.side, sq.x, sq.y)
    stage = sq.stage + 1
    if stage == args.stages:
        continue
    for n in range(4):
        if n == (sq.type + 2) % 4:
            continue  # skip the square opposite to the current type
        theta = n*pi/2 + pi/4
        r = 3*sqrt(2)*sq.side/4
        x = sq.x + r*cos(theta)
        y = sq.y + r*sin(theta)
        Q.append(Square(side=sq.side//2, x=x, y=y, type=n, stage=stage))

root.mainloop()
With the arguments 200 5 this produced |
Solving a linear differential equations | A separable first-order differential equation is of the form
$$\dfrac{dy}{dx}=a(x)b(y)$$
Our differential equation being
$$\dfrac{dy}{dx}+p(x)y=f(x)$$
we want to reduce it to the first form above, and thus we're looking for the appropriate functions $a(x)$ and $b(y)$. Therefore we can write
\begin{align}a(x)b(y)&=f(x)-p(x)y\\&=p(x)\left(\dfrac{f(x)}{p(x)}-y\right)\end{align}
And we set $a(x)=p(x)$. Now, $\frac{f(x)}{p(x)}$ must be independent of $x$, so we set $f(x)=kp(x)$, with $k\in\mathbb{R}$.
Therefore, our reduced equation is
$$\dfrac{dy}{dx}=p(x)\cdot(k-y)$$
which is indeed separable as
$$\int\dfrac{dy}{k-y}=\int p(x)dx$$
Also note that in the case $k=0$, we get the first-order linear homogeneous equation
$$\dfrac{dy}{dx}+p(x)y=0$$ |
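For illustration, separating and integrating gives $y = k - Ce^{-\int p(x)\,dx}$; here is a small finite-difference check of the reduced equation with the illustrative choices $p(x)=x$, $k=3$, $C=2$ (all values are mine):

```python
from math import exp

k, C = 3.0, 2.0
p = lambda x: x
y = lambda x: k - C * exp(-x**2 / 2)   # candidate solution, since ∫p dx = x²/2

h = 1e-6
for x in (0.3, 1.0, 2.5):
    numeric = (y(x + h) - y(x - h)) / (2 * h)   # numerical y'
    assert abs(numeric - p(x) * (k - y(x))) < 1e-5
```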
Let $f(x) = x^3-3x^2+6$ for $x\in \mathbb{R}$ and $g(x) = \max\{f(t); x+1\leq t\leq x+2\}$ for $-3\leq x\leq0$. Need help in understanding $g(x)$. | Hint. The derivative of $f$ is $f'(x) = 3x^2-6x=3x(x-2)$, hence $f$ is increasing in $(-\infty,0]$, it is decreasing in $[0,2]$, and again increasing in $[2,+\infty)$. Now you should be able to find
$$\max_{t\in[x+1,x+2]} f(t)$$
for any $x\in\mathbb{R}$. |
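Following the hint, on $[x+1,\,x+2]$ the maximum of $f$ is attained either at an endpoint or at one of the critical points $t=0$, $t=2$ if it lies in the interval; a brute-force version in Python (function names are mine):

```python
def f(t):
    return t**3 - 3*t**2 + 6

def g(x):
    # candidates: the endpoints plus any critical point of f inside [x+1, x+2]
    candidates = [x + 1, x + 2]
    for c in (0, 2):
        if x + 1 <= c <= x + 2:
            candidates.append(c)
    return max(f(t) for t in candidates)
```

For example $g(-3)=f(-1)=2$ (the interval $[-2,-1]$ lies where $f$ is increasing), while $g(-1)=f(0)=6$.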
query on how to find extremum points | Observe for $f(x, y)$, we see that the minimal value is $0$ and $(0, 0)$ gives that value. Moreover, any deviation from $(0, 0)$ will give you a positive value. Hence $(0, 0)$ is a local minimum.
In the case of $g(x, y)$, we see that
\begin{align}
g(x, y) = x^4+y^2-10x^2y
\end{align}
which means
\begin{align}
\nabla g(x, y) = (x(4x^2-20y), 2y-10x^2)=(0, 0)
\end{align}
means
\begin{align}
x\cdot (4x^2-20y)=&\ 0\\
2y-10x^2=&\ 0.
\end{align}
Hence $(0, 0)$ is the only critical point. However, $(0, 0)$ is only a saddle point: along every straight line through the origin the function is positive near $(0,0)$ (for instance $g(x,0)=x^4>0$), yet along the parabola $y=x^2$ we get $g(x, x^2)=x^4+x^4-10x^4=-8x^4<0$ for small $x\ne 0$. |
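A quick numerical illustration of this saddle behaviour, comparing the axis direction with the parabola $y=x^2$:

```python
def g(x, y):
    return x**4 + y**2 - 10 * x**2 * y

# along the x-axis the origin looks like a minimum...
assert all(g(x, 0) > 0 for x in (0.1, 0.05, 0.01))
# ...but along the parabola y = x², g(x, x²) = x⁴ + x⁴ - 10x⁴ = -8x⁴ < 0
assert all(g(x, x**2) < 0 for x in (0.1, 0.05, 0.01))
```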
Finding all $F $ automorphisms | If $t$ were mapped to some constant $u \in F$, note that $u$ is also mapped to $u$ (since the automorphism is supposed to fix $F$), thus the mapping is not a bijection and thus cannot be an automorphism. |
How to derive the weak form of a system of PDEs? | The weak form of a single PDE asserts an integral equality for all "test functions" in a suitable vector space.
If you retain the distinct test functions when summing several weak forms, so that we still quantify universally over them, then this summed-up form is equivalent to the system of weak forms because we could set all but one of the test functions to zero in order to recover a single weak form. |
Probability - Conditional statements with union and intersection | Note that $A$ and $B$ are not independent so $P(A\cap B)\not=P(A) P(B)$.
Rather, $P(A\cap B)=P(A) P(B\vert A)$.
This should give you (a) and then (b) and (c) just need to be corrected accordingly. The same reasoning applies to (d). |
If $n = m^2 + 1$ and $x$ is a square modulo $n$, then how to show that $n - x$ is also a square modulo $n$? | One way to see it is this: the equality $ n-1=m^2$ tells you that, modulo $ n $, $-1$ is a square, say $-1\equiv z^2$. So
$$
n-x\equiv -x=-y^2\equiv z^2y^2=(zy)^2.
$$ |
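A brute-force check of the claim for, say, $m=4$, so $n=m^2+1=17$:

```python
m = 4
n = m * m + 1                                   # n = 17
squares = {pow(a, 2, n) for a in range(n)}      # all squares modulo n

assert (m * m) % n == n - 1                     # -1 ≡ m² is a square mod n
assert all((n - x) % n in squares for x in squares)
```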
For $\triangle ABC$ with circumradius $R$ and nine-point center $E$, prove that $EA+EB+EC\le3R$ | We need the help of two triangle inequalities to prove the inequality mentioned in OP’s statement. We provide the proof of one of them while leaving the other for OP to try his/her hand at.
Consider the scalene triangle $UVW$ shown in $\mathrm{Fig.\space 2}$. Its sides and one of its medians (i.e. $WM$) have lengths $u$, $v$, $w$, and $m$. We shall write,
$$u^2=m^2+\frac{w^2}{4}+m\times w\times \cos\left(\omega\right)\qquad\left(\mathrm{for}\space \triangle MVW\right), \tag{1}$$
$$v^2=m^2+\frac{w^2}{4}-m\times w\times \cos\left(\omega\right)\qquad\left(\mathrm{for}\space \triangle UMW\right). \tag{2}$$
When we add (1) to (2), we get,
$$u^2+v^2 =2m^2+\frac{w^2}{2}\quad\rightarrow\quad 4m^2= 2u^2+2v^2-w^2. \tag{3}$$
As shown below, equation (3) can be transformed into the first of the two sought triangle inequalities, which is, in fact, the mainstay of the proof of OP’s inequality.
$$4m^2= 2u^2+2v^2-w^2= u^2+v^2+\left(u^2+v^2-w^2\right) = u^2+v^2+2uv\cos\left(\phi\right)$$
Since the largest value of $\cos\left(\phi\right)$ is $1$, we can write,
$$4m^2\le u^2+v^2+2uv=\left(u+v\right)^2 \quad\rightarrow\quad u+v\ge 2m. \tag{4}$$
Here, the equality holds only for the degenerate triangle. The purpose of the numerical values shown in the diagram is to convince you that this inequality holds.
Now, turn your attention to $\mathrm{Fig.\space 1}$, which depicts the configuration described by OP and auxiliary points and segments introduced by us. $H$ and $O$ are the orthocenter and circumcenter of the triangle $ABC$ respectively. We assume that OP is aware of the fact that the center of the nine-point circle $E$ is the midpoint of the segment $HO$. We denote the radius of the circumcircle $ABC$ as $R$.
Consider the triangle $AOH$, where $AE$ is one of the medians. We can apply the inequality (4) to this triangle to obtain $2AE\le AO+AH$. Since $AO=R$, this becomes $2AE\le R+AH$. In a similar vein, we can state $2BE\le R+BH$ and $2CE\le R+CH$ by considering the triangles $BHO$ and $CHO$ respectively. When we add these three inequalities together, we have,
$$2\left(AE+BE+CE\right)\le 3R+\left(AH+BH+CH\right). \tag{5}$$
Let us try to express the segment $AH_b$ as a function of $R$. Consider the isosceles triangle $ABO$. There, we have $AB=2R\sin\left(\hat{C}\right)$. Next, consider the right-angled triangle $ABH_b$. It is easy to see that $$AH_b =AB\cos\left(\hat{A}\right)= 2R\sin\left(\hat{C}\right) \cos\left(\hat{A}\right). \tag{6}$$
Finally, we consider the right-angled triangle $AHH_b$ to obtain
$$AH_b =AH\sin\left(\hat{C}\right) \tag{7}.$$
Now, we have two expressions for $AH_b$, i.e. (6) and (7). From them, it follows,
$$AH_b =2R\sin\left(\hat{C}\right) \cos\left(\hat{A}\right)= AH\sin\left(\hat{C}\right)\qquad\rightarrow\qquad AH=2R\cos\left(\hat{A}\right).$$
Similarly, we are able to express both $BH$ and $CH$ as $BH=2R\cos\left(\hat{B}\right)$ and $CH=2R\cos\left(\hat{C}\right)$ respectively. When we substitute these values in (5), we get,
$$2\left(AE+BE+CE\right)\le 3R+2R\left(\cos\left(\hat{A}\right)+\cos\left(\hat{B}\right)+\cos\left(\hat{C}\right)\right). \tag{8}$$
It is up to OP to prove using trigonometry that, for a given triangle,
$$ \cos\left(\hat{A}\right)+\cos\left(\hat{B}\right)+\cos\left(\hat{C}\right)\le \frac{3}{2}.$$
Once OP has done that, he/she can insert it in (8) to get the sought inequality, i.e.
$$AE+BE+CE\le 3R, $$
where equality holds if and only if $\triangle ABC$ is an equilateral triangle. This is because, in an equilateral triangle, the two centers in question coincide. |
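The final inequality is easy to sanity-check numerically: compute the circumcenter $O$, the orthocenter $H = A+B+C-2O$ (Euler), the nine-point center $E=\tfrac12(O+H)$, and compare $EA+EB+EC$ with $3R$. The helper below is my own sketch, not part of the proof:

```python
from math import hypot

def circumcenter(A, B, C):
    ax, ay = A; bx, by = B; cx, cy = C
    d = 2 * (ax*(by - cy) + bx*(cy - ay) + cx*(ay - by))
    ux = ((ax*ax + ay*ay)*(by - cy) + (bx*bx + by*by)*(cy - ay)
          + (cx*cx + cy*cy)*(ay - by)) / d
    uy = ((ax*ax + ay*ay)*(cx - bx) + (bx*bx + by*by)*(ax - cx)
          + (cx*cx + cy*cy)*(bx - ax)) / d
    return ux, uy

def nine_point_sum_vs_3R(A, B, C):
    O = circumcenter(A, B, C)
    H = (A[0] + B[0] + C[0] - 2*O[0], A[1] + B[1] + C[1] - 2*O[1])
    E = ((O[0] + H[0]) / 2, (O[1] + H[1]) / 2)   # nine-point center
    R = hypot(A[0] - O[0], A[1] - O[1])
    s = sum(hypot(P[0] - E[0], P[1] - E[1]) for P in (A, B, C))
    return s, 3 * R

s, bound = nine_point_sum_vs_3R((0, 0), (4, 0), (1, 3))
assert s <= bound          # EA + EB + EC ≤ 3R
```

For an equilateral triangle the two sides agree to machine precision, matching the equality case.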
Estimate the Number of Conjugacy Classes of $G$ | Let $G$ be a non-abelian finite group. The class formula asserts that $|G|=|Z(G)| + \sum_i |Cl_G(x_i)|$, where $Cl_G(x_i)$ is the conjugacy class of certain non-central $x_i$. Now observe two things:
(1) $|G/Z(G)|$ cannot be $1, 2$ or $3$. Why? Because if $G/Z(G)$ is cyclic, then $G$ must be abelian (group theory folklore!)
(2) $|Cl_G(x_i)|$ cannot be equal to $1$, because if it were then $x_i \in Z(G)$, and in the class formula we have already counted the contribution of the center of $G$. So what do we learn from these two observations? From (1): $|G/Z(G)| \geq 4$, so $|G| \geq 4|Z(G)|$. And from (2): $|Cl_G(x_i)| \geq 2$.
Now let us combine these two facts into the class formula:
$|G| \geq |Z(G)| + (c(G)-|Z(G)|)\cdot 2= 2c(G)-|Z(G)|\geq 2c(G)-\frac{1}{4}|G|$. From this it follows that $c(G)\leq\frac{5}{8}|G|$. Can this bound be attained? Yes, have a look at $G=Q$, the quaternion group of order 8. For your question (c) try to do the same as above, but then you know that there is a conjugacy class $|Cl_G(x_i)| \geq p$. For the other conjugacy classes you still need to take $2$ as a minimal cardinality estimate.
Bonus remark 1. There is also a lower bound on the number of conjugacy classes $c(G)$. Note that if $x \in G-Z(G)$, then $Z(G) \subsetneq C_G(x)$, where $C_G(x)$ is the centralizer of $x$ in $G$. We can conclude that for non-central elements $x$, we have $|C_G(x)|\geq 2|Z(G)|$, or equivalently $|Cl_G(x)| \leq \frac{|G|}{2|Z(G)|}$. And hence, working with the class formula again
$|G| \leq |Z(G)| + (c(G) - |Z(G)|)\cdot\frac{|G|}{2|Z(G)|} = |Z(G)| + \frac{|G|c(G)}{2|Z(G)|} - \frac{1}{2}|G|$. Hence $\frac{3}{2}|G| \leq |Z(G)| + \frac{|G|c(G)}{2|Z(G)|} \leq \frac{1}{4}|G| + \frac{|G|c(G)}{2|Z(G)|}$, and elaborating this a bit further yields the lower bound
$c(G) \geq \frac{5}{2}|Z(G)|$.
Note that also this bound is sharp, again take $G$ to be the quaternion group of order 8.
Bonus remark 2. Try to show that if $G$ is a non-abelian group of odd order, then even $c(G)\leq\frac{11}{27}|G|$.
Bonus remark 3. If $G$ is a non-abelian $p$-group, then $c(G)\leq\frac{p^2+p-1}{p^3}|G|$. The bound is sharp by the way and is attained by so-called extra-special $p$-groups. |
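To see that $c(Q)=5$ does attain both bounds for the quaternion group of order $8$ (indeed $\tfrac58\cdot 8 = 5 = \tfrac52\cdot|Z(Q)|$), here is a brute-force class count, with $Q$ realized as $2\times2$ complex matrices (my own construction, not part of the answer):

```python
# Q8 = {±1, ±i, ±j, ±k} as 2x2 complex matrices (qi, qj, qk are the quaternion units)
one = ((1, 0), (0, 1))
qi  = ((1j, 0), (0, -1j))
qj  = ((0, 1), (-1, 0))
qk  = ((0, 1j), (1j, 0))

def mul(a, b):
    return tuple(tuple(sum(a[r][t] * b[t][c] for t in range(2)) for c in range(2))
                 for r in range(2))

def neg(a):
    return tuple(tuple(-x for x in row) for row in a)

G = [one, neg(one), qi, neg(qi), qj, neg(qj), qk, neg(qk)]

def inv(a):
    return next(b for b in G if mul(a, b) == one)

classes = {frozenset(mul(mul(g, x), inv(g)) for g in G) for x in G}

assert len(G) == 8 and len(classes) == 5   # c(Q8) = 5, |Z(Q8)| = 2
```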
Function $f = (f_1,\dots,f_n)$ s.t. $[f_1(x),\dots,f_n(x)]$ goes from $[1,0,\dots,0]$ to $[0,\dots,0,1]$ in a smooth way | $$f_k(x)={q^{k-1}\over\sum\limits_{i=0}^{n-1}q^i},\text{ where }q={x\over1-x}$$
would do the job. |
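A quick numerical look, with $n=3$, at how this interpolates between the two corner vectors (variable names are mine):

```python
n = 3

def f(x):
    q = x / (1 - x)
    denom = sum(q**i for i in range(n))
    return [q**(k - 1) / denom for k in range(1, n + 1)]

lo, hi = f(0.001), f(0.999)
assert lo[0] > 0.99 and hi[-1] > 0.99      # close to [1,0,0] and [0,0,1]
assert abs(sum(lo) - 1) < 1e-12            # the components always sum to 1
```

At $x=\tfrac12$ we get $q=1$ and all components equal $\tfrac1n$.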
Find an equation of the tangent line to the curve $y=x^3-3x+1$ at the given point $(2,3)$ | Hint: Given
$$
y=x^3-3x+1,
$$
we have that
$$
y'=3x^2-3.
$$ |
Differential reading problem | This is a well-known equation. See e.g.
http://en.wikipedia.org/wiki/Logistic_function |
Angle bisector divides the triangle into two triangles. Find the area of one of them. | I don't know whether it's invertendo - componendo - invertendo or something else. These answers were intended for me, so they are written in the language that I can understand. So if you didn't get something, do ask. |
Projection linear transformation: explain the wording | Sure, so $(0,0)$ gets projected down to $0$. But in general, the projection isn't 'straight down' like what you're assuming.
For example, the point $(1,2)$ gets projected to $0$ as well, since the line with slope $2$ going through the point $(1,2)$ passes through the origin. Similarly, $(2,3)$ will be projected to $1$. |
Surface integral - can it be simplified? | Switch to (modified) cylindrical coordinates: $x=r\cos(t)$, $y=y$
and $z=r\sin(t)$. Then your surface becomes: $r^2=\cos^2(y)$.
Notice that when $0 \leq y \leq \pi/2$, $\cos(y)$ is positive so you can take the square root of the above equation and get: $r=\cos(y)$.
Parameterize using $t$ and $y$ as parameters:
$$X(t,y)=\langle \cos(y)\cos(t), y , \cos(y)\sin(t) \rangle$$
since $r=\cos(y)$. Here, $0\leq t \leq 2\pi$ and $0 \leq y \leq \pi/2$.
Now $X_t = \langle -\cos(y)\sin(t),0, \cos(y)\cos(t) \rangle$ and $X_y = \langle -\sin(y)\cos(t),1, -\sin(y)\sin(t) \rangle$ whose cross product is:
$X_t \times X_y = \langle \cos(y)\cos(t),\sin(y)\cos(y), \cos(y)\sin(t) \rangle$ and so $dS = |X_t \times X_y|\,dy\,dt = \cos(y)\sqrt{1+\sin^2(y)}\,dy\,dt$
Thus your surface integral becomes $$\int_0^{2\pi}\int_0^{\pi/2}\sin(y)\sqrt{1+\sin^2(y)}\cos(y)\,dy\,dt$$
$u$-sub with $u=\sin(y)$ and $du=\cos(y)\,dy$ then to integrate $u\sqrt{1+u^2}$ sub again $w=1+u^2$ and $dw=2u\,du$.
You take it from there. :) |
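Carrying the two substitutions through gives $2\pi\int_0^1 u\sqrt{1+u^2}\,du = \frac{2\pi}{3}\left(2\sqrt2-1\right)$ (assuming I have followed them correctly); a midpoint-rule check of the $y$-integral:

```python
from math import sin, cos, sqrt, pi

def integrand(y):
    return sin(y) * sqrt(1 + sin(y)**2) * cos(y)

N = 100_000
h = (pi / 2) / N
approx = 2 * pi * h * sum(integrand((j + 0.5) * h) for j in range(N))

exact = 2 * pi * (2 * sqrt(2) - 1) / 3
assert abs(approx - exact) < 1e-6
```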
Complex integrals and poles | An analytic function $f$ has a pole of order $n$ at $z_0$ if and only if the limit $\lim_{z\to z_0}(z-z_0)^nf(z)$ exists and it is a complex number different from $0$.
So, if $f$ has a pole of order $1$ at $z_0$, then $\lim_{z\to z_0}(z-z_0)f(z)=w$, for some $w\in\Bbb C\setminus\{0\}$. But then$$\lim_{z\to z_0}(z-z_0)^2f^2(z)=\lim_{z\to z_0}\left((z-z_0)f(z)\right)^2=w^2\ne0,$$and therefore $f^2$ has a pole of order $2$ at $z_0$. |
Finding an equation of two perpendicular tangent lines of a parabola | I'm assuming you don't want any calculus involved here (too bad!), so let $\,(a,b)\,,\,(c,d)\,$ be the points on the parabola through which pass two tangent lines to it that are perpendicular:
$$\text{First tangent: we need to solve the system}\;\;\;\;y^2=4px\;\;,\;\;y-b=m(x-a)\Longrightarrow$$
$$(m(x-a)+b)^2=4px$$
$$\text{Second tangent: we need to solve the system}\;\;\;\;y^2=4px\;\;,\;\;y-d=-\frac{1}{m}(x-c)\Longrightarrow$$
$$\left(-\frac{1}{m}(x-c)+d\right)^2=4px$$
Of course, solving the above take into account that
$$b^2=4pa\;\;,\;\;d^2=4pc$$
since both points were chosen to be on the parabola.
Also, remember that two straight lines (none of which is horizontal/vertical) with slopes $\,m_1\,,\,m_2\,$ are perpendicular iff $\,m_1m_2=-1\,$, and this the reason we took the second tangent's slope to be $\,-1/m\,$ |
Math 'equal to?' symbol | This notation is not standard. One can define an equivalence relation
on $\mathbb{R}^{n}$ s.t $v\sim u\iff u=\alpha v$ for $0\neq\alpha\in\mathbb{R}$
and then we can write for example
$$
\begin{pmatrix}6\\
2\\
2
\end{pmatrix}\sim\begin{pmatrix}3\\
1\\
1
\end{pmatrix}
$$
Regarding the use of the hat symbol and vectors - it is standard that
if $0\neq v\in\mathbb{R}^{n}$ then we denote
$$
\hat{v}=\frac{v}{\|v\|}
$$
in this case $\hat{v}$ spans the same one dimensional subspace as
$v$ and satisfies $\|\hat{v}\|=1$ |
in the Proof that √x is continuous on its domain [0,∞).. where did the √x go ?? | Let's back up a bit and do this properly:
Prove $f(x) = \sqrt x$ is continuous at any point $a \in [0,\infty)$
Let $\epsilon > 0$ be given and suppose first that $a > 0$. Take $\delta = \epsilon \sqrt a\ $. If $$|x-a| < \delta$$
Then $$|\sqrt x - \sqrt a| = \frac{|x - a|}{|\sqrt x + \sqrt a|} \leq \frac{|x - a|}{\sqrt a} < \frac {\epsilon \sqrt a}{ \sqrt a} = \epsilon $$
Edit If $a = 0$, then put $\delta = \epsilon^2$, so that if $x < \delta$, then $\sqrt x < \sqrt {\epsilon^2} = \epsilon$. Note that I dropped the absolute values, since this only makes sense for positive $x$ |
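One can test the chosen $\delta$ numerically, say at $a=4$ with $\varepsilon = 0.1$, so $\delta = \varepsilon\sqrt a = 0.2$ (the sample points are mine):

```python
from math import sqrt

a, eps = 4.0, 0.1
delta = eps * sqrt(a)

# every x with |x - a| < delta satisfies |sqrt(x) - sqrt(a)| < eps
xs = [a - delta + j * (2 * delta) / 1000 for j in range(1, 1000)]
assert all(abs(sqrt(x) - sqrt(a)) < eps for x in xs)
```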
"Commutativity" of Tor functor | Here's an outline. Let $A$ and $B$ be $R$-modules and $\mathcal{P}$ and $\mathcal{Q}$ be projective resolutions of them respectively. We have quasi-isomorphisms
$$ \mathcal{P} \otimes_R B \leftarrow \mathrm{Tot}(\mathcal{P} \otimes_R \mathcal{Q}) \rightarrow A \otimes_R \mathcal{Q} $$
which in turn induce isomorphisms
$$ H_i(\mathcal{P} \otimes_R B) \cong H_i(\mathrm{Tot}(\mathcal{P} \otimes_R \mathcal{Q})) \cong H_i(A \otimes_R \mathcal{Q}). $$
But then by definition of Tor, we have $\mathrm{Tor}_i^R(A, B) \cong H_i(\mathrm{Tot}(\mathcal{P} \otimes_R \mathcal{Q}))$. |
Solve for $y$ explicitly or prove that it is impossible | It is impossible. We can solve for $x$, and then graph $x$ as a function of $y$. $x=\frac{\sin(y)+e^y}{y}$. (See below, $y$ is on the horizontal axis, and $x$ is on the vertical axis)
Since there are two values for $y$ that give $x=10$, $y$ is not a function of $x$, or how could we pick what value to assign $y$ when $x=10$?
Therefore we cannot solve for $y$, since solving for $y$ expresses $y$ as a function of $x$.
Edit
To answer your second question, Wolfram Alpha can graph implicit functions like this. The picture we get this time is the following:
This is a really interesting curve, but this time, it is neither a function of $x$ nor $y$ for the same reasons as above, so we cannot solve the equation for either $x$ or $y$.
Edit 2
In general to show that a curve defined implicitly in terms of $x$ and $y$ cannot be solved for one of the variables say $y$, what you need to do is find two points on the curve where the value of $x$ is the same, but the points differ at their $y$ coordinate. See the first part of my answer, there were two points on the curve with $x=10$, and different $y$ coordinates. Then if you had $y=f(x)$, then how would you know what $f(10)$ is? |
Probability of getting a "double" in at least two throws of two dies | The probability of rolling a double on 2 six sided dice is $\frac{1}{6}$. So the probability of not rolling one is $\frac{5}{6}$.
The probability of this happening twice is $\left(\frac{5}{6}\right)^2=\frac{25}{36}$. The expected payoff is \$0.22 - deal me in! |
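In exact fractions (so no rounding; `Fraction` is from the standard library):

```python
from fractions import Fraction

p_double = Fraction(6, 36)                 # 6 doubles among 36 outcomes
p_no_double_twice = (1 - p_double) ** 2    # (5/6)² = 25/36
p_at_least_one = 1 - p_no_double_twice     # 11/36

assert p_no_double_twice == Fraction(25, 36)
assert p_at_least_one == Fraction(11, 36)
```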
Does every map in dual have a predual in infinite-dimensional spaces? | EDIT: both $\aleph_0^{\aleph_0}<\aleph_1^{\aleph_1}$ and $\aleph_0^{\aleph_0}=\aleph_1^{\aleph_1}$ are consistent with the ZFC axioms (note that the former is forced by the Continuum Hypothesis). This and the argument below show at the very least that the answer to your question cannot be a solid yes. See Asaf Karagila's answer here for more insight about $(1)$.
My goal is to prove that as soon as $V$ is "big enough", this is impossible for cardinality reasons as $|\operatorname{End}V|<|\operatorname{End}V^*|$ (so there can be no surjective map from $\operatorname{End}V$ to $\operatorname{End}V^*$).
I will need the following result for which I couldn't find a reference but which seems reasonable: $$\text{If $a,b$ are cardinals such that $a<b$, then $a^a<b^b$.}\tag{1}$$
Since an endomorphism is determined by the choice of an element of $V$ for each element of a basis, you have $$|\operatorname{End}V|=|V|^{\dim V}\qquad\text{ and }\qquad|\operatorname{End}V^*|=|V^*|^{\dim V^*}$$
Lemma$^{\star}$: If $|V|>|k|$ where $k$ is the ground field, then $\dim V=|V|$.
Therefore, assuming $|V|>|k|$, you have $$|\operatorname{End}V|<|\operatorname{End}V^*|\Longleftrightarrow |V|^{|V|}<|V^*|^{|V^*|}$$
Lemma$^{\star}$: If $V$ is not finite-dimensional, then $\dim V<\dim V^*$.
Corollary (of the two lemmas together): If $|V|>|k|$, then $|V|<|V^*|$.
Conclusion (up to $(1)$):$$|V|>|k|\implies |\operatorname{End}V|<|\operatorname{End}V^*|$$
$^{\star}$(note that this is a link) |
Integration of $f(-x)$ | Let consider the change of variable $y=-x \implies dx=-dy$ then
$$\int_{0}^{\infty} f(-x)\ dx=-\int_{0}^{-\infty} f(y)\ dy=\int_{-\infty}^0 f(y)\ dy$$ |
Proving Singmaster's Conjecture: Can you prove there are finitely many solutions of $\binom{n+x-1}{n}=y$? | Hint: Except for $\binom{n}{0}=\binom{n}{n}=1$ we have $\binom{n}{k}\geq n$. |
Book on Riemannian geometry | John Lee's Introduction to Smooth Manifolds is a good place to start. He covers the fundamentals of the subject, and the beginning half is filled with really important concepts required to better understand Riemannian manifolds. He also has a book called Riemannian Manifolds, which is pretty good but quite advanced in comparison to the smooth manifolds book. |
Is this an equivalence relation (reflexivity, symmetry, transitivity) (2) | First: just to be sure you understand: your relation $R$ is not an equivalence relation if there exists $x$ such that it is not the case that $x R x$: when, e.g., $\theta(x) = 0 \pmod {2\pi}$, since in this situation, it is NOT the case that $\theta(x) \ne 0 \pmod {2\pi}\;$ AND $\;\theta(x) \ne 0 \pmod {2\pi}$
That is, the relation IS reflexive if and only if FOR ALL $x \in \mathbb{C}$, it is true that $\theta(x) \ne 0 \pmod {2\pi}\;$ AND $\;\theta(x) \ne 0 \pmod {2\pi}$.
You need reflexivity (as well symmetry and transitivity), so if $x$ satisfies $\theta(x) = 0 \pmod {2\pi} $, then you do not have $x R x$, hence it is not an equivalence relation. Not knowing exactly how $\theta(x)$ is defined makes it difficult to ascertain whether or not it can be the case that reflexivity fails. But if, say, $\theta: \mathbb{C} \to \mathbb{R}$ is onto, then reflexivity fails.
Your relation defines what it means for $s$ to be related to $q$: $$sRq \iff (\theta(s) \ne 0 \pmod {2\pi} \land \theta(q) \ne 0\pmod {2\pi}).$$ So it defines the set of all ordered pairs $(s, q)$, $s, q \in \mathbb{C}$, which satisfy the given relation. To obtain the inverse relation, you need to determine what relation defines how $q$ is related to $s$: what relation $R^{-1}$ defines the set of all ordered pairs $(q, s)$, when $(s, q)\in R$?
In this case, it turns out that the inverse relation $R^{-1}$ defines exactly the same relation as does $R$: simply commute the conditions: $s \ R^{-1} q \iff \theta(q) \ne 0 \pmod {2\pi} \land \theta(s) \ne 0\pmod {2\pi}$. Then you have $q\ R \ s $. |
inverse Laplace transform of gamma function | The Gamma function has poles at $z=0$ and at the negative integers. When $z=-k$, then the residue at that pole is $(-1)^k/k!$. Now, the inverse transform is
$$\frac1{i 2 \pi} \int_{c-i \infty}^{c+i \infty} ds \, \Gamma(p+1+T s) e^{s (t-T \log{N})} $$
The poles of this Gamma function are at $p+1+T s = -k$, or $s=-(p+1+k)/T$ for $k \in \{0,1,2,\ldots \}$. Thus, the ILT is
$$e^{-(p+1)(t-T \log{N})/T} H(t-T \log{N})\sum_{k=0}^{\infty} \frac{(-1)^k}{k!} e^{-k (t-T \log{N})/T} $$
where $H$ is the Heaviside function, or, summing the series,
$$e^{-(p+1)(t-T \log{N})/T} \exp{\left [-e^{-(t-T \log{N})/T}\right ]} H(t-T \log{N}) $$
which, aside from the Heaviside factor, agrees with your expected result. |
When does a Character Table have Non-real Entries, and how do I Compute them? | A question from the Cambridge Part II Maths course on Representation Theory:
Let $x$ be an element of order $n$ in finite group $G$. Say, without detailed proof, why:
if $\chi$ is a character of $G$, then $\chi(x)$ is a sum of $n$th roots of unity;
$\tau(x)$ is real for every character $\tau$ iff $x$ is conjugate to $x^{-1}$;
$x$ and $x^{-1}$ have the same number of conjugates in $G$.
Prove that the number of irreducible characters which take only real values is equal to the number of self-inverse conjugacy classes. |
Deriving a composite function | The chain rule says that the derivative of $a\circ b$ is $(a^{\prime}\circ b)\cdot b^{\prime}$ (provided $a^\prime$ and $b^\prime$ exist).
In your case, $a\equiv f$ and $b(\lambda)\equiv x^{0}+\lambda(x^{1}-x^{0})$.
Note that $a^{\prime}=f^{\prime}$ and $b^{\prime}(\lambda)=x^{1}-x^{0}$, from which the result follows. |
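A finite-difference check of this, in one dimension with the assumed choices $f(x)=x^2$, $x^0=1$, $x^1=3$:

```python
f  = lambda x: x * x
fp = lambda x: 2 * x                 # f'

x0, x1 = 1.0, 3.0
b = lambda lam: x0 + lam * (x1 - x0)
comp = lambda lam: f(b(lam))         # the composite f(x0 + λ(x1 - x0))

h = 1e-6
for lam in (0.2, 0.5, 0.9):
    numeric = (comp(lam + h) - comp(lam - h)) / (2 * h)
    assert abs(numeric - fp(b(lam)) * (x1 - x0)) < 1e-6
```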
Why can't I get good approximation when choosing values away from point of expansion? (Taylor series) | Yes, there is a range. It depends on the function and the expansion point. Sometimes it is infinite, but not in your case. Here it is $(0,2]$.
The point of expansion is the point around which you expand (1 in your case). If you expand around some other point, you'll get different series and (generally speaking) different range of convergence. |
I don't understand this application of Holder's inequality | What Serg says is pretty true. I will assume $f$ to be positive (need to think about general case).
Note that for $p=1$ the claim is trivially true.
Now, let $p\in (1,\infty)$ with Hölder conjugate $q \in (1,\infty)$, i.e. $\frac{1}{p}+\frac{1}{q}=1$.
Then we have by Hölder
\begin{align*}
\int_0^x f(t) ~\mathrm{d}t &\leq \left(\int_0^x 1^q~\mathrm{d}t\right)^{\frac{1}{q}}\cdot \left(\int_0^x f^p~\mathrm{d}t\right)^{\frac{1}{p}}\\
&=x^{\frac{1}{q}}\cdot \left(\int_0^x f^p~\mathrm{d}t\right)^{\frac{1}{p}}.
\end{align*}
Raising this to the power of $p$ we obtain the claim as $\frac{p}{q}=p-1$. |
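A numerical spot-check of the displayed estimate with the illustrative choice $f(t)=t$ and $p=q=2$, where $\int_0^x t\,dt = x^2/2$ and the right-hand side is $\sqrt x\,(x^3/3)^{1/2} = x^2/\sqrt3$:

```python
from math import sqrt

for x in (0.5, 1.0, 2.0, 10.0):
    lhs = x**2 / 2                        # ∫₀ˣ f
    rhs = sqrt(x) * sqrt(x**3 / 3)        # x^{1/q} (∫₀ˣ f^p)^{1/p}
    assert lhs <= rhs
```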
Unfamiliarity with complex geometry | It is hard to understand what happens in the computations. (Starting from $m=0$ and $n=2$. If these are the affixes of $M,N$, then I expect them to move around a center.)
Here are some words and a complex numbers approach to the story.
For a point $Q$ in the plane I will denote by $z_Q\in\Bbb C$ the affix of $Q$.
Here is a picture for the convenience of the reader.
We will place $A$ in the origin, since we will use a formula which is nice to use when $A$ is in the origin.
The center $C$ of the circle on which the opposite points $M,N$ move is taken on the real axis to simplify calculations, so let $c\in\Bbb R$ be the affix of $C$.
(I am not using the letter $O$ from the OP to avoid confusions.)
Let $O'$ be the circumcenter of $AMN$. Then we have:
$$
\begin{aligned}
z_A &= 0\ ,\\
z_C &= c\in\Bbb R\ ,\\
z_M &= c+re^{it}\text{ for some suitable }t\in\Bbb R\ ,\\
z_N &= c-re^{it}\ ,\\
z_{O'} &=\frac{z_Mz_N(\bar z_M-\bar z_N)}{\bar z_Mz_N-\bar z_N z_M}\\
&=
\frac
{(c^2-r^2e^{2it})\Big(\ (c+re^{-it})-(c-re^{-it})\ \Big)}
{(c+re^{-it})(c-re^{it})-(c-re^{-it})(c+re^{it})}
\\
&=
\frac 1
{-2cr(e^{it}-e^{-it})}
(c^2-r^2e^{2it})\cdot 2re^{-it}
\\
&=\frac 1{c\cdot 2i\sin t}\color{blue}{(r^2e^{it}-c^2e^{-it})}\ .
\end{aligned}
$$
The quantity in the blue bracket is explicitly
$$ (r^2-c^2)\cos t + (r^2+c^2)i\sin t\ .$$
This shows that the real part of $z_{O'}$ is constant,
$$
\Re z_{O'}=\frac{(r^2+c^2)i\sin t}{c\cdot 2i\sin t}
=
\frac 1{2c}(r^2+c^2)\ .
\ .
$$
This corresponds to the "plan" from the OP: the projection of $O'$ on $AC$ (the real axis) is a fixed point, namely the point $P$ with $z_P=\frac 1{2c}(r^2+c^2)\in\Bbb R$.
$\square$
(The imaginary part involves the cotangent function, so that the whole perpendicular in $P$ on $AC$ is taken, not only a part of it.) |
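The constancy of $\Re z_{O'}$ can also be confirmed numerically, straight from the circumcenter formula used above (the values of $c$ and $r$ are mine):

```python
from cmath import exp

c, r = 3.0, 2.0

def circumcenter(t):
    zM = c + r * exp(1j * t)
    zN = c - r * exp(1j * t)
    # circumcenter of the triangle with vertices 0, zM, zN
    num = zM * zN * (zM.conjugate() - zN.conjugate())
    den = zM.conjugate() * zN - zN.conjugate() * zM
    return num / den

expected = (r**2 + c**2) / (2 * c)
for t in (0.3, 1.1, 2.0, 2.8):
    assert abs(circumcenter(t).real - expected) < 1e-9
```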