Möbius Transformation help Hey guys, I need help on these 2 questions that I am having trouble with. 1) Show that the Möbius transformation $z \rightarrow \frac{2}{1-z}$ sends the unit circle and the line $x = 1$ to the lines $x = 1$ and $x = 0$, respectively. 2) Now deduce from this that the non-Euclidean distance between the unit circle and the line $x = 1$ tends to zero as these non-Euclidean lines approach the x-axis. I know that the non-Euclidean distance between the lines $x=0$ and $x=1$ goes to zero as $y$ approaches $\infty$. But I don't know how to show that the unit circle goes to $x=1$ and $x=1$ goes to $x=0$, or how to deduce the second part from this. It's from one of my old past tests and I am trying to practice, but I got stuck on this question. Please help out, thank you.
These are exercises 8.6.5, 8.6.6 from "The Four Pillars of Geometry." I'll use the terminology of the book. #1: Call the transform $\varphi(z) = \dfrac{2}{1-z}$. We know that Möbius transformations send non-Euclidean lines (circles and lines) to non-Euclidean lines. If we calculate the image of three points from the unit circle, we can uniquely determine the image of the unit circle as a whole. Pick three points from the unit circle, say $\{-1, 1, i\}$. We have: $$ \varphi(-1) = 1,\quad \varphi(1) = \infty,\quad \varphi(i) = \frac{2}{1-i} = 1 + i $$ All points lie on the line $x = 1$. Hence, the image of the unit circle is the line $x = 1$. Similarly for the line $x = 1$, pick the points $\{1, \infty, i + 1\}$: $$ \varphi(1) = \infty,\quad \varphi(\infty) = 0,\quad \varphi(1+i) = 2i $$ All points lie on the line $x = 0$. Hence, the image of the line $x = 1$ is the line $x = 0$. #2: We will show that the non-Euclidean distance between the unit circle and line $x = 1$ approaches zero as both non-Euclidean lines approach the point $1$ on the real axis. Consider a point $p_1$ on the unit circle approaching the point $1$, and another point $p_2$ on the line $x = 1$ approaching the point $1$. Their images under $\varphi$ both approach $\infty$ on the lines $x = 1$ and $x = 0$ respectively by #1. Since we know that the non-Euclidean distance is invariant under all Möbius transformations, and we also know that the distance between the lines $x = 0$ and $x = 1$ approaches $0$ as $y$ approaches $\infty$, it follows that the non-Euclidean distance between $p_1$ and $p_2$ approaches $0$ as they approach the $x$ axis.
$\int_{0}^{\pi/2} (\sin x)^{1+\sqrt2} dx$ and $\int_{0}^{\pi/2} (\sin x)^{\sqrt2\space-1} dx $ How do I evaluate $$\int_{0}^{\pi/2} (\sin x)^{1+\sqrt2} dx\quad \text{ and }\quad \int_{0}^{\pi/2} (\sin x)^{\sqrt2\space-1} dx \quad ?$$
$$\beta(x,y) = 2 \int_0^{\pi/2} \sin^{2x-1}(a) \cos^{2y-1}(a) da \implies \int_0^{\pi/2} \sin^{m}(a) da = \dfrac{\beta((m+1)/2,1/2)}2$$ Hence, $$\int_0^{\pi/2} \sin^{1+\sqrt2}(a) da = \dfrac{\beta(1+1/\sqrt2,1/2)}2$$ $$\int_0^{\pi/2} \sin^{\sqrt2-1}(a) da = \dfrac{\beta(1/\sqrt2,1/2)}2$$
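If you want to sanity-check these closed forms numerically, here is a short Python sketch (it assumes SciPy's standard `scipy.special.beta` and `scipy.integrate.quad`; it only confirms the identities and is not part of the derivation):

```
# Check  ∫_0^{π/2} sin^m(a) da = β((m+1)/2, 1/2) / 2  for m = 1+√2 and m = √2-1
import numpy as np
from scipy.special import beta
from scipy.integrate import quad

for m in (1 + np.sqrt(2), np.sqrt(2) - 1):
    integral, _ = quad(lambda a: np.sin(a) ** m, 0, np.pi / 2)
    closed_form = beta((m + 1) / 2, 0.5) / 2
    print(m, integral, closed_form)  # the last two columns should agree
```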
Nonzero derivative implies function is strictly increasing or decreasing on some interval Let $f$ be a differentiable function on the open interval $(a,b)$. Suppose $f'(x)$ is not identically zero. Show that there exists a subinterval $(c,d)$ such that $f(x)$ is strictly increasing or strictly decreasing on $(c,d)$. How does one prove this? I think this statement is wrong...
The statement is indeed wrong. You can construct for example a function $f:\mathbb{R} \rightarrow\mathbb{R}$ which is differentiable everywhere such that both $\{x \in \mathbb{R} : f'(x) > 0\}$ and $\{x \in \mathbb{R} : f'(x) < 0\}$ are dense in $\mathbb R$ and thus $f$ is monotone on no interval. You can find such a construction on page 80 of A Second Course on Real Functions by van Rooij and Schikhof. See also here.
Decomposable elements of $\Lambda^k(V)$ I have a conjecture. I have a problem proving or disproving it. Let $w \in \Lambda^k(V)$ be a $k$-vector. Then $W_w=\{v\in V: v\wedge w = 0 \}$ is a $k$-dimensional vector space if and only if $w$ is decomposable. For example, for $u=e_1\wedge e_2 + e_3 \wedge e_4$ we have $W_u = 0$.
Roughly another 2 years later let me try giving a different proof. Suppose first $w$ is a decomposable $k$-vector. Then there exist $k$ linearly independent vectors $\{e_i\}$, $i=1,\ldots k$ such that $w=e_1\wedge\cdots\wedge e_k$. Complete $\{e_i\}$ to a basis of $V$, it is then clear that $e_i \wedge w=0$ for $i=1,\ldots k$, $e_i\wedge w\neq 0 $ for $i=k+1, \ldots n$, hence $\mathrm{dim}(W_w)= k$. Suppose now $\mathrm{dim}(W_w)= k$, and let $\{e_i\}$, $i=1,\ldots k$ be a basis of $W_w$. By using the following fact: Let $w$ be a $k$-vector, $u$ a non-vanishing vector. Then $w \wedge u =0 $ if and only if $w= z\wedge u $ for some $(k-1)$-vector $z$. one establishes that $w$ is of the form $w=c e_1 \wedge\cdots\wedge e_k$ for some constant $c$.
Probability - how come $76$ has better chances than $77$ on a "Luck Machine" round? A luck machine shows $3$ digits (each from $0$ to $9$) on a screen. Define $X_1$ to be the event that the result contains $77$, and $X_2$ to be the event that the result contains $76$. $p(X_1)=2\cdot\frac1{10}\cdot\frac1{10}\cdot\frac9{10}+\frac1{10}\cdot\frac1{10}\cdot\frac1{10}$ and $p(X_2)=2\cdot\frac1{10}\cdot\frac1{10}$. If I wasn't mistaken in my calculation, we get $p(X_2)>p(X_1)$. Can anyone explain this shocking outcome?
If I understand your question correctly, you have a machine that generates 3 digits (10³ possible outcomes) and you want to know the probability of event $X_1$: either "7 7 x" or "x 7 7", and compare this with event $X_2$: either "7 6 x" or "x 7 6". Notice that the 2 sub-events "7 7 x" and "x 7 7" are not disjoint (they have "7 7 7" as a common outcome) while the 2 sub-events "7 6 x" and "x 7 6" are disjoint. $P(X_1) = P($"7 7 x" $\cup $ "x 7 7"$) = P($"7 7 x"$) + P($"x 7 7"$) - P($"7 7 7"$)$ $P(X_2) = P($"7 6 x" $\cup $ "x 7 6"$) = P($"7 6 x"$) + P($"x 7 6"$)$ This explains why $P(X_1) < P(X_2)$.
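If it helps, the two probabilities can also be checked by brute force over all $10^3$ equally likely outcomes; a minimal Python sketch:

```
# Count three-digit strings containing "77" versus "76"
count_77 = sum("77" in f"{k:03d}" for k in range(1000))
count_76 = sum("76" in f"{k:03d}" for k in range(1000))
print(count_77 / 1000)  # 0.019 -- the patterns "77x" and "x77" overlap at "777"
print(count_76 / 1000)  # 0.020 -- the patterns "76x" and "x76" are disjoint
```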
Let $ P $ be a non-constant polynomial in z. Show that $ P(z) \rightarrow \infty $ as $ z \rightarrow \infty $ This is a homework problem I have and I am having some trouble with it. I had thought I solved it, but I found out an algebraic mistake made my proof incorrect. Here is what I have so far. Let $ P(z) = \sum\limits_{k=0}^n a_k z^k $ $\def\abs#1{\left|#1\right|}$ Then, $ \abs{a_nz^n} = \abs{\sum\limits_{k=0}^n a_k z^k - \sum\limits_{k=0}^{n-1} a_kz^k} \leq \abs{\sum\limits_{k=0}^n a_k z^k} + \abs{\sum\limits_{k=0}^{n-1} a_kz^k} $ by the triangle inequality. Let $ M \gt 0$; then $\exists R \in \mathbb{R} \text{ s.t. } \abs{z} \gt R \implies M \lt \abs{a_n z^n} $ since it is known that $ \abs{a_n z^n} $ converges to infinity. So by choosing such an $R$ as above and choosing a sufficiently large $z$, you get: $ M \lt \abs{a_n z^n} \leq \abs{\sum\limits_{k=0}^n a_k z^k} + \abs{\sum\limits_{k=0}^{n-1} a_kz^k} \implies M - \abs{\sum\limits_{k=0}^{n-1} a_kz^k} \lt \abs{\sum\limits_{k=0}^n a_k z^k} $ However, this is not sufficient to prove the hypothesis and I am at a loss of what to do. Help would be great.
Hints: $$(1)\;\;\;\;\;|P(z)|=\left|\;\sum_{k=0}^na_kz^k\;\right|=|z|^n\left|\;\sum_{k=0}^na_kz^{k-n}\;\right|$$ $$(2)\;\;\;\;\forall\,w\in\Bbb C\;\;\wedge\;\forall\;n\in\Bbb N\;,\;\;\left|\frac{w}{z^n}\right|\xrightarrow[z\to\infty]{}0$$
Prove that for any real numbers $a,b$ we have $\lvert \arctan a−\arctan b\rvert\leq \lvert a−b\rvert$. Prove that for any real numbers $a,b$ we have $\lvert \arctan a−\arctan b\rvert\leq \lvert a−b\rvert$. This should have an application of the mean value theorem.
Let $f(x)=\arctan x$. For $a\neq b$, the mean value theorem gives a point $c$ between $a$ and $b$ such that $$|f(a)-f(b)|=f^{\prime}(c)\, |a-b|.$$ Since $$f^{\prime}(x)=\frac{1}{1+x^2}\leq1,$$ we have $f^{\prime}(c)\leq1$ and therefore $|\arctan a-\arctan b|\leq |a-b|$. (The case $a=b$ is trivial.)
differentiate f(x) using L'Hopital and another problem 1) Evaluate: $\ \ \ \ \lim_{x\to1}(2-x)^{\tan(\frac{\pi}{2}x)}$ 2) Show that the inequality holds: $\ \ \ x^\alpha\leq \alpha x + (1-\alpha)\ \ \ (x\geq0, \,0<\alpha <1)$ Please help me with these. Either a hint or a full proof will do. Thanks.
For the first problem: another useful method is to take the log of the function to get $\frac{\log(2-x)}{\frac{1}{\tan(\frac{\pi x}{2})}}$; now you can apply L'Hospital's rule, take the limit and exponentiate it. I get $e^{2/\pi}$, but you should check the algebra more thoroughly yourself.
Find the value of $\lim_{n \rightarrow \infty} \sqrt{1+\left(\frac1{2n}\right)^n}$ Find the limit of the sequence as it approches $\infty$ $$\sqrt{1+\left(\frac1{2n}\right)^n}$$ I made a table of the values of the sequence and the values approach 1, so why is the limit $e^{1/4}$? I know that if the answer is $e^{1/4}$ I must have to take the $\ln$ of the sequence but how and where do I do that with the square root? I did some work getting the sequence into an indeterminate form and trying to use L'Hospitals but I'm not sure if it's right and then where to go from there. Here is the work I've done $$\sqrt{1+\left(\frac1{2n}\right)^n} = \frac1{2n} \ln \left(1+\frac1{2n}\right) = \lim_{x\to\infty} \frac1{2n} \ln \left(1+\frac1{2n}\right) \\ = \frac 1{1+\frac1{2n}}\cdot-\frac1{2n^2}\div-\frac1{2n^2}$$ Thank you
We have $\lim_{n\to\infty} (1+\frac{x}{n})^n = e^x$. Hence $\lim_{n \to \infty} (1+\frac{x}{2n})^{2n} = e^x$, then taking square roots (noting that the square root is continuous on $[0,\infty)$), we have $\lim_{n \to \infty} \sqrt{(1+\frac{x}{2n})^{2n}} = \lim_n (1+\frac{x}{2n})^{n} = \sqrt{e^x} = e^\frac{x}{2}$, and finally $\lim_{n\to\infty} \sqrt{(1+\frac{x}{2n})^{n}} = e^\frac{x}{4}$. Setting $x=1$ gives the desired result.
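A quick numerical check of this (with $x=1$, matching the question) in Python, just to see the value $e^{1/4}\approx1.284$ emerge; this is only an illustration of the limit, not a proof:

```
import math

x = 1.0
for n in (10, 1000, 100000):
    print(n, math.sqrt((1 + x / (2 * n)) ** n))   # approaches e^(x/4)
print(math.exp(x / 4))                             # ≈ 1.28403
```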
Combinatorics riddle: keys and a safe There are 8 crew members. The leading member wants only a crew of 5 people or more to be able to open a safe he bought. He gave each member an equal number of keys and locked the safe with several locks. What is the minimal number of locks he should use so that: 1) at least 5 crew members are required to open the safe? 2) any team of 5 crew members can open the safe, but no team of 4 crew members can? Conditions 1 and 2 stand individually, of course.
This can easily be generalized, replacing 8 and 4 with a variable each. Let the set of keys (and locks) be denoted by $K$. Let the set of keys that crew member $i$ doesn't receive be $K_i$. For distinct indices, Condition 1 states that $K_{ijkl} = K_i \cap K_j \cap K_k \cap K_l \neq \emptyset$ (no 4 crew can open the safe), and condition 2 states that $K_i \cap K_j \cap K_k \cap K_l \cap K_m = \emptyset$ (any 5 crew can open the safe). Claim: $ K_{ijkl} \cap K_{i'j'k'l'} = \emptyset$ for distinct quadruple sets. Proof: Suppose not. Then $\{i, j, k, l, i', j', k', l'\}$ is a set with at least 5 distinct elements, contradicting condition 2. $_\square$ Hence, there must be at least $8 \choose 4$ locks. Claim. $8 \choose 4$ is sufficient. Proof: Label each of the locks with a distinct quadruple from 1 to 8. To each crew member, give him the key to the lock, if it doesn't have his number on it. Then, any 4 crew members $i, j, k, l$ will not be able to open lock $\{i, j, k, l\}$. Any 5 crew members will be able to open all locks.
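Here is a small Python sketch that verifies the $\binom84=70$-lock construction described above (label each lock by a 4-subset of the crew and give a member the keys to exactly the locks not carrying his number); it is just a brute-force confirmation of the two claims:

```
from itertools import combinations

crew = range(8)
locks = list(combinations(crew, 4))                       # 70 locks, one per 4-subset
keys = {m: {L for L in locks if m not in L} for m in crew}

def can_open(team):
    held = set().union(*(keys[m] for m in team))
    return held == set(locks)                             # need a key for every lock

assert all(can_open(t) for t in combinations(crew, 5))       # every 5-team succeeds
assert not any(can_open(t) for t in combinations(crew, 4))   # no 4-team succeeds
print(len(locks))  # 70
```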
Prove that if $\gcd(a, n) = d$ then $\langle[a]\rangle = \langle[d]\rangle$ in $\mathbb Z_n$? I am not sure how to start this problem and hope someone can help me out.
Well, $d$ divides $a$, so in any $\Bbb Z_k$ it should be clear that $[a]\in\langle[d]\rangle$, whence $\langle[a]\rangle\subseteq\langle[d]\rangle$. The reverse inclusion doesn't generally hold, but since $d=\gcd(a,n)$, then there exist $x,y\in\Bbb Z$ such that $d=ax+ny$, so in $\Bbb Z_n$ we have $[d]=[a][x]$, which lets us demonstrate the reverse inclusion in a similar fashion.
Proving $n+3 \mid 3n^3-11n+48$ I'm really stuck while trying to prove this statement: $\forall n \in \mathbb{N},\quad (n+3) \mid (3n^3-11n+48)$. I couldn't even figure out how to start.
The statement as written is true, and the proof is the identity $$(3n^2 - 9n + 16)(n+3) = 3n^3 - 11n + 48,$$ which shows directly that $n+3$ divides $3n^3-11n+48$ for every $n$. (If you had accidentally written the divisibility backwards, i.e. $3n^3-11n+48 \mid n+3$, then $n=0$ would give a counterexample, since $48$ does not divide $3$.)
Notatebility of Uncountable Sets I have noticed a pattern. The set of integers is infinite. Therefore one at first would think it impossible to come up with a notation allowing the representation of all integers. This though becomes easy actually to get around. Simply allow larger integers to take up larger notation. Now look at the rational numbers. They are dense, so there are an infinite number of rational numbers between 0 and 1. Seemingly impossible, but not. By dividing two integers, you can come up with any rational numbers. Algebraic, and even all the computable numbers can be notated. Yet when you get to the real numbers, the great uncountable set, there is no finite notation to represent all the real numbers. Why is it that countable sets can be notated, but it seems that uncountable sets can not? Has this been studied before?
Other people have answered why uncountable sets cannot be notated. I'll just add that some countable sets cannot be "usefully" notated. So it depends on your definition of notation. For example, the set of all Turing machines is countable. The set of Turing machines that do not halt is a subset, so is also countable. But there is no practical notation - that is, no programmatic way - which allows for the enumeration of the set of non-halting Turing machines. A notation system is only as useful as it is computable, and there is no computable enumeration of this set.
Are all large cardinal axioms expressible in terms of elementary embeddings? An elementary embedding is an injection $f:M\rightarrow N$ between two models $M,N$ of a theory $T$ such that for any formula $\phi$ of the theory, we have $M\vDash \phi(a) \ \iff N\vDash \phi(f(a))$ where $a$ is a list of elements of $M$. A critical point of such an embedding is the least ordinal $\alpha$ such that $f(\alpha)\neq\alpha$. A large cardinal is a cardinal number that cannot be proven to exist within ZFC. They often appear to be critical points of an elementary embedding of models of ZFC where $M$ is the von Neumann hierarchy, and $N$ is some transitive model. Is this in fact true for all large cardinal axioms?
There is an online PDF of lecture slides by Woodin on the "Omega Conjecture" in which he axiomatizes the type of formulas that are large cardinals. I do not know how exhaustive his formulation is. See the references under http://en.wikipedia.org/wiki/Omega_conjecture
Chopping arithmetic on terms such as $\pi^2$, $\pi^3$ or $e^3$ I have a problem where I have to use 3-digit chopping with numbers such as $\pi^2$, $\pi^3$, $e^3$, etc. If I wanted to 3-digit chop $\pi^2$, do I square the true value of $\pi$ and then chop, or do I chop $\pi$ first to 3.14 then square it?
If you chop $\pi$ then square then chop, what you are really chopping is $3.14^2$ not $\pi^2$. So you must take $\pi^2$ and then chop it.
What does 1 modulo p mean? For example, from Gallian's text: Sylow Test for Nonsimplicity Let $n$ be a positive integer that is not prime, and let $p$ be a prime divisor of $n$. If 1 is the only divisor of $n$ that is equal to 1 modulo p, then there does not exist a simple group of order $n$. I of course understand what $a$ mod $b$ is (e.g. 19 mod 10 = 9); but wouldn't 1 mod anything positive be 1? [edit: changed from 0]
To say that a number $a$ is $1$ modulo $p$ means that $p$ divides $a - 1$. So, in particular, the numbers $1, p + 1, 2p + 1, \ldots$ are all equal to $1$ modulo $p$. As you're studying group theory, another way to put it is that $a = b$ modulo $p$ if and only if $\pi(a) = \pi(b)$ where $\pi\colon\mathbb Z \to \mathbb Z/p$ is the factor homomorphism.
Calculate an infinite continued fraction Is there a way to algebraically determine the closed form of any infinite continued fraction with a particular pattern? For example, how would you determine the value of $$b+\cfrac1{m+b+\cfrac1{2m+b+\cfrac1{3m+b+\cdots}}}$$? Edit (2013-03-31): When $m=0$, simple algebraic manipulation leads to $\dfrac{b+\sqrt{b^2+4}}{2}$. The case where $m=2$ and $b=1$ is $\dfrac{e^2+1}{e^2-1}$, and I've found out through WolframAlpha that the case where $b=0$ is an expression related to the Bessel function: $\dfrac{I_1(\tfrac2m)}{I_0(\tfrac2m)}$. I'm not sure why this happens.
Reference: D. H. Lehmer, "Continued fractions containing arithmetic progressions", Scripta Mathematica vol. 29, pp. 17-24 Theorem 1: $$ b+\frac{1}{a+b\quad+}\quad\frac{1}{2a+b\quad+}\quad\frac{1}{3a+b\quad+}\quad\dots = \frac{I_{b/a-1}(2/a)}{I_{b/a}(2/a)} $$ Theorem 2 $$ b-\frac{1}{a+b\quad-}\quad\frac{1}{2a+b\quad-}\quad\frac{1}{3a+b\quad-}\quad\dots = \frac{J_{b/a-1}(2/a)}{J_{b/a}(2/a)} $$ and other results...
Factor $(a^2+2a)^2-2(a^2+2a)-3$ completely I have this question that asks to factor this expression completely: $$(a^2+2a)^2-2(a^2+2a)-3$$ My working out: $$a^4+4a^3+4a^2-2a^2-4a-3$$ $$=a^4+4a^3+2a^2-4a-3$$ $$=a^2(a^2+4a+2)-4a-3$$ I am stuck here. I don't know how to proceed correctly.
Or if you missed Jasper Loy's trick, you can guess and check a value of $a$ for which $$f(a) = a^4 +4a^3 +2a^2 −4a−3 = 0.$$ E.g. f(1) = 0 so $(a-1)$ is a factor and you can use long division to factorise it out.
nCr question choosing 1 - 9 from 9 I've been trying to rack my brain for my high school maths to find the right calculation for this but I've come up blank. I would like to know how many combinations there are of choosing 1-9 items from a set of 9 items. i.e. There are 9 ways of selecting 1 item. There is 1 way of selecting 9 items. For 2 items you can choose... 1,2 1,3 1,4 1,5 1,6 1,7 1,8 1,9 2,3 2,4 and so on. How many ways in total are there of selecting any number of items from a set of 9 (without duplicates, i.e. you can't have 2,2). Also, order is not important. So 1,2 is the same as 2,1.
Line up the items in front of you, in order. To any of them you can say YES or NO. There are $2^9$ ways to do this. This is the same as the number of bit strings of length $9$. But you didn't want to allow the all NO's possibility (the empty subset). Thus there are $2^9-1$ ways to choose $1$ to $9$ of the objects. Remark: There are $\dbinom{9}{k}$ ways of choosing exactly $k$ objects. Here $\dbinom{n}{k}$ is a Binomial Coefficient, and is equal to $\dfrac{n!}{k!(n-k)!}$. This binomial coefficient is called by various other names, such as $C(n,k)$, or ${}_nC_k$, or $C^n_k$. So an alternate, and much longer way of doing the count is to find the sum $$\binom{9}{1}+\binom{9}{2}+\binom{9}{3}+\cdots +\binom{9}{9}.$$
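For anyone who wants to double-check the count, a two-line Python confirmation that the binomial sum and $2^9-1$ agree:

```
from math import comb

print(sum(comb(9, k) for k in range(1, 10)), 2**9 - 1)  # 511 511
```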
Definition of limit in category theory - is $X$ a single object of $J$ or a subset of $J$? Let $F : J → C$ be a diagram of type $J$ in a category $C$. A cone to $F$ is an object $N$ of $C$ together with a family $ψ_X : N → F(X)$ of morphisms indexed by the objects $X$ of $J$, such that for every morphism $f : X → Y$ in $J$, we have $F(f) \centerdot ψ_X = ψ_Y$. http://en.wikipedia.org/wiki/Limit_(category_theory) So, is $X$ here referring to a single object of $J$ or several objects of $J$ grouped together? (In terms of set, is $X$ an element of $J$ or a subset of $J$?)
The cone is indexed by all objects of $J$. Before dealing with limits in general, you should understand products and their universal property. Of course all factors are involved.
Proving that an injective function is bijective I am having a lot of trouble starting this proof. I would greatly appreciate any help I can get here. Thanks. Let $n\in \mathbb{N}$. Prove that any injective function from $\{1,2,\ldots,n\}$ to $\{1,2,\ldots,n\}$ is bijective.
Another hint: Prove it by induction. It’s clear for $n=1$. Otherwise if the statement holds for some $n$, take an injective map $σ \colon \{1, …, n+1\} → \{1, …, n+1\}$. Assume $σ(n+1) = n+1$ – why can you do this? What follows?
Eigenvalues of a matrix with only one non-zero row and non-zero column. Here is the full question. 1) Only the last row and the last column can contain non-zero entries. 2) The matrix entries can take values only from $\{0,1\}$. It is a kind of binary matrix. I am interested in the eigenvalues of this matrix. What can we say about them? In particular, when are all of them positive?
At first I thought I understood your question, but after reading those comments and answers here, it seems that people here have very different interpretations. So I'm not sure if I understand it correctly now. To my understanding, you want to find the eigenvalues of $$ A=\begin{pmatrix}0_{(n-1)\times(n-1)}&u\\ v^T&a\end{pmatrix}, $$ where $a$ is a scalar, $u,v$ are vectors and all entries in $a,u,v$ are either $0$ or $1$. The rank of this matrix is at most $2$. So, when $n\ge3$, $A$ must have some zero eigenvalues. In general, the eigenvalues of $A$ include (at least) $(n-2)$ zeros and $$\frac{a \pm \sqrt{a^2 + 4v^Tu}}{2}.$$ Since $u,v$ are $0-1$ vectors, $A$ has exactly one positive eigenvalue and one negative eigenvalue if $v^Tu>0$, and the eigenvalues of $A$ are $\{a,0,0,\ldots,0\}$ if $v^Tu=0$.
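A quick numerical sanity check of this eigenvalue description on a random $0$-$1$ instance (NumPy only; the seed and the size $n=6$ are arbitrary choices for illustration):

```
import numpy as np

rng = np.random.default_rng(0)
n = 6
u = rng.integers(0, 2, size=n - 1)   # random 0-1 column
v = rng.integers(0, 2, size=n - 1)   # random 0-1 row
a = 1

A = np.zeros((n, n))
A[:-1, -1] = u
A[-1, :-1] = v
A[-1, -1] = a

eig = np.sort(np.linalg.eigvals(A).real)
pred = sorted([0.0] * (n - 2) + [(a + s * np.sqrt(a**2 + 4 * (v @ u))) / 2 for s in (1, -1)])
print(eig)   # should match pred up to rounding
print(pred)
```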
Show that $f '(x_0) =g'(x_0)$. Assume that $f$ and $g$ are differentiable on interval $(a,b)$ and $f(x) \le g(x)$ for all $x \in (a,b)$. There exists a point $x_0\in (a,b)$ such that $f(x_0) =g(x_0)$. Show that $f '(x_0) =g'(x_0)$. I am guessing we create a function $h(x) = f(x)-g(x)$ and try to come up with the conclusion using the definition of differentiability at a point. I am not sure how. Some useful facts: A function $f:(a,b)\to \mathbb R$ is differentiable at a point $x_0$ in $(a,b)$ if $$\lim_{h\to 0}\frac{f(x_0 + h)-f(x_0)}{h}=\lim_{x\to x_0}\frac{f(x)-f(x_0)}{x-x_0}.$$
$h(x)=g(x)-f(x)$ is differentiable, and $h(x_0)=0$ and $h(x)\geq 0$ hence $h$ has a minimum at $x_0$ and hence $h'(x_0)=0$ And as $$h'(x_0)=0 \implies f'(x_0)=g'(x_0)$$ we are done.
dimension of a coordinate ring Let $I$ be an ideal of $\mathbb{C}[x,y]$ such that its zero set in $\mathbb{C}^2$ has cardinality $n$. Is it true that $\mathbb{C}[x,y]/I$ is an $n$-dimensional $\mathbb{C}$-vector space (and why)?
The answer is no, but your very interesting question leads to about the most elementary motivation for the introduction of scheme theory in elementary algebraic geometry. You see, if the common zero set $X_{\mathrm{classical}}=V_{\mathrm{classical}}(I)$ consists set-theoretically (I would even say physically) in $n$ points, then $N:=\dim_\mathbb C \mathbb{C}[x,y]/I\geq n$. If $N\gt n$, this is an indication that some interesting geometry is present: $X_{\mathrm{classical}}$ is described by equations that are not transversal enough, so that morally they describe a variety bigger than the naked physical set. The most elementary example is given by $I=\langle y,y-x^2\rangle$: we have $V_{\mathrm{classical}}(I)=\{(0,0)\}=\{O\}$ Everybody feels that it is a bad idea do describe the origin as $V(I)$, i.e. as the intersection of a parabola and one of its tangents: a better description would be to describe it by the ideal $J=\langle x,y\rangle,$ in other words as the intersection of two transversal lines. However the ideal $I$ describes an interesting structure, richer than a naked point, and this structure is called a scheme. This is all reflected in the strict inequality $$\dim_\mathbb C \mathbb{C}[x,y]/I=2=N\gt \dim_\mathbb C \mathbb{C}[x,y]/J=1=n=\text { number of physical points}.$$ Scheme theory in its most elementary incarnation disambiguates these two cases by adding the relevant algebra in the algebro-geometric structure, now defined as pairs consisting of a physical set plus an algebra: $$V_{\mathrm{scheme}}(J)=(\{O\},\mathbb{C}[x,y]/J )\subsetneq V_{\mathrm{scheme}}(I)= (\{O\},\mathbb{C}[x,y]/I ).$$ Bibliography Perrin's Algebraic Geometry is the most elementary introduction to this down-to-earth vision of schemes (cf. beginning of Chapter VI).
$\tan B\cdot \frac{BM}{MA}+\tan C\cdot \frac{CN}{NA}=\tan A. $ Let $\triangle ABC$ be a triangle and $H$ be the orthocenter of the triangle. If $M\in AB$ and $N \in AC$ such that $M,N,H$ are collinear prove that : $$\tan B\cdot \frac{BM}{MA}+\tan C\cdot \frac{CN}{NA}=\tan A. $$ Thanks :)
Here $D$ denotes the foot of the altitude from $A$ (so $A$, $H$, $D$ are collinear and $AD\perp BC$), and $K$ is the point where the line through $M$, $H$, $N$ meets the line $BC$. By Menelaus' theorem in $\triangle ABD$: $\dfrac{BM}{MA}\cdot\dfrac{AH}{HD}\cdot\dfrac{DK}{KB}=1$, i.e. $\dfrac{BM}{MA}=\dfrac{BK\cdot HD}{AH\cdot DK}$. In $\triangle ACD$: $\dfrac{CN}{NA}\cdot\dfrac{AH}{HD}\cdot\dfrac{DK}{KC}=1$, i.e. $\dfrac{CN}{NA}=\dfrac{CK\cdot HD}{AH\cdot DK}$. Also $\tan B=\dfrac{AD}{BD}$, $\tan C=\dfrac{AD}{DC}$, $\tan A=\dfrac{BC}{AH}$. Then LHS $=\dfrac{AD}{BD}\cdot\dfrac{BK\cdot HD}{AH\cdot DK}+\dfrac{AD}{DC}\cdot\dfrac{CK\cdot HD}{AH\cdot DK}=\dfrac{AD\cdot HD}{AH\cdot DK}\left(\dfrac{BK}{BD}+\dfrac{CK}{DC}\right)$. Now $\dfrac{BK}{BD}+\dfrac{CK}{DC}=\dfrac{BK}{BD}+\dfrac{KB+BD+DC}{DC}=\dfrac{BK}{BD}+\dfrac{BK}{DC}+\dfrac{BD}{DC}+\dfrac{DC}{DC}=\dfrac{BK\cdot BC}{BD\cdot DC}+\dfrac{BC}{DC}=\dfrac{BC}{DC}\cdot\dfrac{BK+BD}{BD}=\dfrac{BC\cdot DK}{DC\cdot BD}$. Hence LHS $=\dfrac{AD\cdot HD}{AH\cdot DK}\cdot\dfrac{BC\cdot DK}{DC\cdot BD}=\dfrac{BC\cdot AD\cdot HD}{AH\cdot DC\cdot BD}$. Clearly $\triangle BHD \sim \triangle ACD$, so $\dfrac{BD}{AD}=\dfrac{HD}{CD}$, i.e. $\dfrac{AD\cdot HD}{DC\cdot BD}=1$. Therefore LHS $=\dfrac{BC}{AH}=\tan A$. For $\tan A=\dfrac{BC}{AH}$, we can prove it as follows: $A=\angle BAD+\angle DAC=\angle HCD+\angle HBD$, so $\tan A=\tan(\angle HBD+\angle HCD)=\dfrac{\tan\angle HBD+\tan\angle HCD}{1-\tan\angle HBD\cdot\tan\angle HCD}=\dfrac{\dfrac{HD}{BD}+\dfrac{HD}{DC}}{1-\dfrac{HD\cdot HD}{BD\cdot DC}}=\dfrac{HD\cdot BC}{BD\cdot DC-HD^2}=\dfrac{HD\cdot BC}{AD\cdot HD-HD^2}=\dfrac{HD\cdot BC}{HD(AD-HD)}=\dfrac{BC}{AH}$.
A subspace of a vector space A subspace of a vector space $V$ is a subset $H$ of $V$ that has three properties: a) The zero vector of $V$ is in $H$. b) $H$ is closed under vector addition. That is, for each $u$ and $v$ in $H$, the sum $u+v$ is in $H$. c) $H$ is closed under multiplication by scalars. That is, for each $u$ in $H$ and each scalar $c$, the vector $cu$ is in $H$. It would be great if someone could "dumb" this down. It already seems extremely simple, but I'm having a very difficult time applying these properties.
If your original vector space is $V=\mathbb R^3$, then the possible subspaces are: 1) the whole space; 2) any plane that passes through $0$; 3) any line through $0$; 4) the singleton set $\{0\}$. One reading of the definition is that $H$ is a subspace of $V$ if it is a subset of $V$ and it is also a vector space under the same operations as in $V$.
Showing $n\Bbb{Z}$ is a group under addition Let $n$ be a positive integer and let $n\Bbb{Z}=\{nm\mid m \in\Bbb{Z}\}$. I need to show that $\left< n\Bbb{Z},+ \right>$ is a group. And I need to show that $\left< n\Bbb{Z},+ \right>\cong\left< \Bbb{Z},+ \right>$. Added: If $n\mathbb Z$ is a subgroup of $\mathbb Z$ then it must be closed under "+". The identity element 0 is an element of the subgroup. And for any element $a$ in $n\mathbb Z$, its inverse $-a$ must be in $n\mathbb Z$...
Since every element of $n\mathbb Z$ is an element of $\mathbb Z$, we can do an easier proof that it is a group by showing that it is a subgroup of $\mathbb Z$. It happens to be true that if $H\subset G$ is a nonempty subset, where $G$ is a group, then $H$ is a group if it satisfies the single condition that if $x,y\in H$, then $x\ast y^{-1}\in H$, where $\ast$ is the group operation of $G$ and $y^{-1}$ is the inverse of $y$ in $G$. In order to find an isomorphism $\varphi:\mathbb Z\to n\mathbb Z$, ask yourself what $\varphi(1)$ should be. Can you go from here?
Maximum number of points a minimum distance apart in a semicircle of certain radius You have a circle of certain radius $r$. I want to put a number of points in either of the semicircles. However, no two point can be closer than $r$. The points can be put anywhere inside the semicircle, on the straight line, inside area, or on the circumference. There is no relation among the points of the two semicircles. But as you can see, eventually they will be the same. How do I find the maximum number of points that can be put inside the semicircle?
The answer is five points. Five points can be achieved by placing one at the center of the large circle and four others equally spaced around the circumference of one semicircle (the red points in the picture below). To show that six points is impossible, consider disks of radius $s$ about each of those five points, where $r/\sqrt3 < s < r$. These five smaller disks completely cover the large half-disk; so for any six points in the large half-disk, at least two of them must lie in the same smaller disk. But then those two points are closer than $r$ to each other.
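To see concretely that the five-point configuration works, here is a tiny Python check of the pairwise distances (center of the circle plus four arc points at $0°, 60°, 120°, 180°$, with $r=1$):

```
import itertools, math

r = 1.0
pts = [(0.0, 0.0)] + [(r * math.cos(math.radians(t)), r * math.sin(math.radians(t)))
                      for t in (0, 60, 120, 180)]
dists = [math.dist(p, q) for p, q in itertools.combinations(pts, 2)]
print(min(dists))  # ≈ 1.0 = r, so no two of the five points are closer than r
```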
Radius of convergence of power series $\sum c_n x^{2n}$ and $\sum c_n x^{n^2}$ I've got a start on the question I've written below. I'm hoping for some help to finish it off. Suppose that the power series $\sum_{n=0}^{\infty}c_n x^n$ has a radius of convergence $R \in (0, \infty)$. Find the radii of convergence of the power series $\sum_{n=0}^{\infty}c_n x^{2n}$ and $\sum_{n=0}^{\infty}c_n x^{n^2}$. From Hadamard's Theorem I know that the radius of convergence for $\sum_{n=0}^{\infty}c_n x^n$ is $R=\frac{1}{\alpha}$, where $$\alpha = \limsup_{n \to \infty} |a_n|^{\frac{1}{n}}.$$ Now, applying the Root Test to $\sum_{n=0}^{\infty}c_n x^{2n}$ gives $$\limsup |a_nx^{2n}|^{\frac{1}{n}}=x^2 \cdot \limsup |a_n|^{\frac{1}{n}}=x^2 \alpha$$ which gives a radius of convergence $R_1 = \frac{1}{\sqrt{\alpha}}$. Now for the second power series. My first thought was to take $$\limsup |a_nx^{n^2}|^{\frac{1}{n^2}}=|x| \cdot \limsup |a_n|^{\frac{1}{n^2}}$$ but then I'm stuck. I was trying to write the radius of convergence once again in terms of $\alpha$. Any input appreciated and thanks a bunch.
$$ \limsup_{n\rightarrow\infty} |c_n|^{\frac{1}{n}}=\alpha <\infty $$ gives that there exists $N\geq 1$ such that if $n>N$ then $|c_n|^{\frac{1}{n}}< \alpha+1$. Then $|c_n|^{\frac{1}{n^2}}< (\alpha+ 1)^{\frac{1}{n}}$ for all $n>N$. It follows that $\limsup_{n\rightarrow\infty} |c_n|^{\frac{1}{n^2}}\leq 1$. Also, there is a subsequence $\{n_k\}$ such that $|c_{n_k}|^{\frac{1}{n_k}}$ converges to $\alpha$. For that subsequence, we have $$ |c_{n_k}|^{\frac{1}{n_k^2}} \rightarrow 1 $$ as $k\rightarrow\infty$. This implies that $\limsup_{n\rightarrow\infty} |c_n|^{\frac{1}{n^2}}=1$. Therefore, the radius of convergence of $\sum c_n x^{n^2}$ is 1.
A question about a basis for a topology If a subset $B$ of a powerset $P(X)$ has the property that finite intersections of elements of $B$ are empty or again elements of $B$, does the collection of all unions of sets from $B$ form a topology on $X$ then? My book A Taste of Topology says this is indeed the case, but I wonder how you can guarantee that $X$ will be open. For example, the empty set has the required properties, so that would mean that the empty set is a topology for any set $X$, which is impossible.
It is usually taken that the empty intersection of subsets of a set is the entire set, similar to how the empty product is often taken to be the multiplicative identity or the empty sum is often taken to be the additive identity.
To prove a property of the greatest common divisor Suppose the integer $d$ is the greatest common divisor of the integers $a$ and $b$. How does one prove that there exist integers $r$ and $s$ such that $$d = r \cdot a + s \cdot b\ ?$$ I know a proof from abstract algebra (it is in Michael Artin's book "Algebra"), but I hope to find a number-theoretic proof.
An approach through elementary number theory: It suffices to prove this for relatively prime $a$ and $b$, so suppose this is so. Denote the set of integers $0\le k\le b$ that are relatively prime to $b$ by $\mathfrak B$. Then $a$ lies in the residue class of one of the elements of $\mathfrak B$. Define a map $\pi$ from $\mathfrak B$ into itself by sending $k\in \mathfrak B$ to the residue class of $ka$. If $k_1a\equiv k_2a\pmod b$, then, as $\gcd (a,b)=1$, $b\mid (k_1-k_2)$, so that $k_1=k_2$ (here $k_1$ and $k_2$ are positive integers less than $b$). Hence this map is injective. Since the set $\mathfrak B$ is finite, it follows that $\pi$ is also surjective. So there is some $k$ such that $ka\equiv 1\pmod b$. This means that there is some $l$ with $ka-1=lb$, i.e. $ka-lb=1$. Barring mistakes. Thanks and regards. P.S. The reduction step is: Given $a, b$ with $\gcd(a,b)=d$, we know that $\gcd(\frac{a}{d},\frac{b}{d})=1$. So, if the relatively prime case has been settled, then there are $m$ and $n$ such that $m\frac{a}{d}+n\frac{b}{d}=1$, and hence $ma+nb=d$.
Non-commutative or commutative ring or subring with $x^2 = 0$ Does there exist a non-commutative or commutative ring or subring $R$ with $x \cdot x = 0$, where $0$ is the zero element of $R$, $\cdot$ is the multiplication (the second binary operation), and $x$ is not the zero element, excluding the case where the addition (the abelian group operation) and the multiplication of any two elements always give zero? Edit: most seem to be focused on the non-commutative case. What about the commutative case? Edit 2: It is fine to relax the restriction to the following: there exist countably (and possibly uncountably) infinitely many $x$'s in $R$ that satisfy $x \cdot x = 0$ (so this means that there may be elements of the ring that do not satisfy $x \cdot x =0$), excluding the case where the addition and multiplication of any two elements always give zero.
Example for the non-commutative case: $$ \pmatrix{0 & 1 \\ 0 & 0} \pmatrix{0 & 1 \\ 0 & 0} = \pmatrix{0 & 0 \\ 0 & 0}. $$ Example for the commutative case: Consider the ring $\mathbb{Z} / 4\mathbb{Z}$. What is $2^2$ in this ring?
Determine the points on the parabola $y=x^2 - 25$ that are closest to $(0,3)$ Determine the points on the parabola $y=x^2 - 25$ that are closest to $(0,3)$ I would like to know how to go about solving this. I have some idea of solving it. I believe you have to use implicit differentiation and the distance formula but I don't know how to set it up. Hints would be appreciated.
Just set up a distance squared function: $$d(x) = (x-0)^2 + (x^2-25-3)^2 = x^2 + (x^2-28)^2$$ Minimize this with respect to $x$. It is easier to work with the square of the distance rather than the distance itself because you avoid the square roots which, in the end, do not matter when taking a derivative and setting it to zero.
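If you want to see where that leads, here is a short SymPy sketch carrying out the minimization (only a check of the calculus, using the same $d(x)$ as above):

```
import sympy as sp

x = sp.symbols('x', real=True)
d = x**2 + (x**2 - 28)**2             # squared distance from (x, x^2 - 25) to (0, 3)
crit = sp.solve(sp.diff(d, x), x)     # critical points: x = 0 and x = ±sqrt(55/2)
print(crit)
print([(c, sp.simplify(c**2 - 25), d.subs(x, c)) for c in crit])
# d equals 784 at x = 0 but only 111/4 at x = ±sqrt(55/2),
# so the closest points are (±sqrt(55/2), 5/2).
```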
How do the floor and ceiling functions work on negative numbers? It's clear to me how these functions work on positive real numbers: you round up or down accordingly. But if you have to round a negative real number: to take $\,-0.8\,$ to $\,-1,\,$ then do you take the floor of $\,-0.8,\,$ or the ceiling? That is, which of the following are true? $$\lfloor-0.8\rfloor=-1$$ $$\text{or}$$ $$\lceil-0.8\rceil=-1$$
The first is correct: you round "down" (i.e. to the greatest integer LESS THAN OR EQUAL TO $-0.8$). In contrast, the ceiling function rounds "up" to the least integer GREATER THAN OR EQUAL TO $-0.8$, which is $0$. $$ \begin{align} \lfloor{-0.8}\rfloor & = -1\quad & \text{since}\;\; \color{blue}{\bf -1} \le -0.8 \le 0 \\ \\ \lceil {-0.8} \rceil & = 0\quad &\text{since} \;\; -1 \le -0.8 \le \color{blue}{\bf 0} \end{align}$$ In general, we must have that $$\lfloor x \rfloor \leq x\leq \lceil x \rceil\quad \forall x \in \mathbb R$$ And so it follows that $$-1 = \lfloor -0.8 \rfloor \leq -0.8 \leq \lceil -0.8 \rceil = 0$$ K.Stm's suggestion is a nice, intuitive way to recall the relation between the floor and the ceiling of a real number $x$, especially when $x\lt 0$. Using the "number line" idea and plotting $-0.8$ with the two closest integers that "sandwich" $-0.8$, we see that the floor of $x= -0.8$ is the first integer immediately to the left of $-0.8$, and the ceiling of $x= -0.8$ is the first integer immediately to the right of $-0.8$, and this strategy can be used whatever the value of the real number $x$.
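In case it is useful, the same distinction is visible directly in Python's `math` module:

```
import math

print(math.floor(-0.8))  # -1  (rounds toward -infinity)
print(math.ceil(-0.8))   #  0  (rounds toward +infinity)
print(int(-0.8))         #  0  (int() truncates toward zero, i.e. acts like ceil on negatives)
```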
How does linear algebra help with computer science? I'm a Computer Science student. I've just completed a linear algebra course. I got 75 points out of 100 points on the final exam. I know linear algebra well. As a programmer, I'm having a difficult time understanding how linear algebra helps with computer science? Can someone please clear me up on this topic?
The page Coding The Matrix: Linear Algebra Through Computer Science Applications (see also this page) might be useful here. In the second page you read among others In this class, you will learn the concepts and methods of linear algebra, and how to use them to think about problems arising in computer science. I guess you have been given a standard course in linear algebra, with no reference to applications in your field of interest. Although this is standard practice, I think that an approach in which the theory is mixed with applications is to be preferred. This is surely what I did when I had to teach Mathematics 101 to Economics majors, a few years ago.
Describe all matrices similar to a certain matrix. Math people: I assigned this problem as homework to my students (from Strang's "Linear Algebra and its Applications", 4th edition): Describe in words all matrices that are similar to $$\begin{bmatrix}1& 0\\ 0& -1\end{bmatrix}$$ and find two of them. Square matrices $A$ and $B$ are defined to be "similar" if there exists square invertible $M$ with $A = M^{-1}BM$ (or vice versa, since this is an equivalence relation). The answer to the problem is not in the text, and I am embarrassed to admit I am having trouble solving it. The problem looked easy when I first saw it. The given matrix induces a reflection in the $x_2$-coordinate, but I don't see how the geometry helps. A similar matrix has to have the same eigenvalues, trace, and determinant, so its trace is $0$ and its determinant is $-1$. I spent a fair amount of time on it, with little progress, and I can spend my time more productively. This problem is #2 in the problem set, which suggests that maybe there is an easy solution. I would settle for a hint that leads me to a solution. EDIT: Thanks to Thomas (?) for rendering my matrix in $\LaTeX$. Stefan (STack Exchange FAN)
Draw a picture: your matrix mirrors the $e_2$ vector and doesn't change the $e_1$ vector. The matrix is in the orthogonal group but not in the special orthogonal group. Show that every matrix $$\begin{pmatrix} \cos(\alpha) & \sin(\alpha) \\ \sin(\alpha) & -\cos(\alpha)\\ \end{pmatrix} $$ does the same. Those are the nicest matrices that can happen to you, but there are some more (those matrices appear when $M$ itself is in the orthogonal group). When $M$ is not in the orthogonal group, the conjugation still won't change the eigenvalues (I am not sure if you already know what eigenvalues are): $\lambda$ is an eigenvalue for a vector $v\neq 0$ if $$ A \cdot v=\lambda v,$$ which means the vector is only enlarged or made smaller by the matrix, but not rotated or anything like that. As $A$ has the eigenvalues $1$ and $-1$, you will always find vectors $v_1,v_2$ such that $$ B \cdot v_1= v_1$$ and $$ B\cdot v_2= -v_2.$$ So those matrices leave one vector unchanged, and the other one is "turned around". The eigenvectors of the matrix $$ \begin{pmatrix} a & b \\c & d \\ \end{pmatrix}\cdot \begin{pmatrix} 1 & 0 \\ 0 & -1\\ \end{pmatrix} \cdot \begin{pmatrix} a & b \\c & d \\ \end{pmatrix}^{-1}$$ are $$\begin{pmatrix} \frac{a}{c} \\ 1 \end{pmatrix} \qquad \begin{pmatrix} \frac{b}{d} \\1 \end{pmatrix} $$ (with eigenvalues $1$ and $-1$ respectively) when $c$ and $d$ are not zero.
Showing that $\ln(b)-\ln(a)=\frac 1x \cdot (b-a)$ has one solution $x \in (\sqrt{ab}, {a+b\over2})$ for $0 < a < b$ For $0<a<b$, show that $\ln(b)-\ln(a)=\frac 1x \cdot (b-a)$ has one solution $x \in (\sqrt{ab}, {a+b\over2})$. I guess that this is an application of the Lagrange theorem, but I'm unsure how to deal with $a+b\over2$ and $\sqrt{ab}$ since Lagrange's theorem offers a solution $\in (a,b)$.
Hint: Apply the mean value theorem to $\ln$ on $[a,b]$: there is an $x\in(a,b)$ with $\ln b-\ln a=\frac1x(b-a)$. It then remains to check that this $x$ satisfies $\sqrt{ab}<x<\frac{a+b}2$, i.e. that the logarithmic mean $\frac{b-a}{\ln b-\ln a}$ lies strictly between the geometric and arithmetic means of $a$ and $b$.
$\cos(\arcsin(x)) = \sqrt{1 - x^2}$. How? How does that bit work? How is $$\cos(\arcsin(x)) = \sin(\arccos(x)) = \sqrt{1 - x^2}$$
You know that "$\textrm{cosine} = \frac{\textrm{adjacent}}{\textrm{hypotenuse}}$" (so the cosine of an angle is the adjacent side over the hypotenuse), so now you have to imagine that your angle is $y = \arcsin x$. Since "$\textrm{sine} = \frac{\textrm{opposite}}{\textrm{hypotenuse}}$", and we have $\sin y = x/1$, draw a right triangle with a hypotenuse of length $1$ and the side opposite your angle $y$ of length $x$. Then, use the Pythagorean theorem to see that the other (adjacent) side must have length $\sqrt{1 - x^2}$. Once you have that, use the picture to deduce $$ \cos y = \cos\left(\arcsin x\right) = \sqrt{1 -x^2}. $$
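A quick numerical illustration (not a proof) with $x=0.6$, where the triangle is the familiar 3-4-5 right triangle scaled to hypotenuse 1:

```
import math

x = 0.6
print(math.cos(math.asin(x)))   # 0.8
print(math.sin(math.acos(x)))   # 0.8
print(math.sqrt(1 - x**2))      # 0.8
```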
Non-Deterministic Turing Machine Algorithm I'm having trouble with this question: Write a simple program/algorithm for a nondeterministic Turing machine that accepts the language: $$ L = \left\{\left. xw w^R y \right| x,y,w \in \{a,b\}^+, |x| \geq |y|\right\} $$
Outline: First nondeterministically choose where the cut-off between $w$ and $w^{\text{R}}$ is. Then compare and cross-out symbols to the left and right of this cut off until you find the first $i$ such that the symbol $i$ cells to the left of the cut-off is different than the symbol $i$ cells to the right of the cut off. Now check that there are at least as many non-crossed symbols to the left as there are to the right.
Which CSL rules hold in Łukasiewicz's 3-valued logic? CSL is classical logic. So I'm talking about the basic introduction and elimination rules (conditional, biconditional, disjunction, conjunction and negation). I'm not talking about his infinite-valued logical theory, but the 3-valued one where any atomic sentence can be given T,F or I. (See the Wikipedia article here: http://en.wikipedia.org/wiki/Three-valued_logic
1) I believe that Conditional introduction works: My experience is that the problem lies in getting a valid derivation from a set of premises, given the other rules that don't work. 2) Conditional elimination does not. This is the chief thing that cripples Lukasiewicz logic as a logic. Modus Ponens and the transitivity of the conditional both fail for the Lukasiewicz conditional, as they properly should, given that it allows conditionals with a value of I. If your conditionals are doubtful, you should not expect inference using them to be valid. (It is not generally known that this can be remedied: Define Spq as NCCpqNCpq and use that for the conditional) 3) Biconditional introduction works. 4) Biconditional elimination works. 5) Conjunction introduction works. 6) Conjunction elimination works. 7) Disjunction introduction works. 8) Disjunction elimination does not work. 9) Negation introduction does not work. 10) Negation elimination does not work. Classically, 9 and 10 depend on the Law of the Excluded Middle, which doesn't hold in Lukasiewicz logic.
Evaluating $\lim_{x \to 0} \frac{\sqrt{x+9}-3}{x}$ The question is this. In $h(x) = \dfrac{\sqrt{x+9}-3}{x}$, show that $\displaystyle \lim_{x \to 0} \ h(x) = \frac{1}{6}$, but that $h(0)$ is undefined. In my opinion, if I use the expression $\displaystyle \lim_{x \to 0} \dfrac{\sqrt{x+9-3}}{x}$, with the $-3$ inside the square root, I get an undefined expression, but if I put the $-3$ outside the square root and calculate the limit $\displaystyle \lim_{x \to 0} \dfrac{\sqrt{x+9}-3}{x}$, I will get $\frac{1}{6}$. Here is a screenshot of the original question. If needed I can post the PDF of the homework.
You cannot pull the negative 3 out of the square root. For example: $$\sqrt{1-3} = \sqrt{2}i \ne \sqrt{1} - 3 = -2$$
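And just to see the limit numerically (an illustration, not a substitute for the usual algebraic argument of rationalizing the numerator):

```
import math

def h(x):
    return (math.sqrt(x + 9) - 3) / x   # undefined at x = 0 (0/0 form)

for x in (0.1, 0.001, 1e-6, -1e-6):
    print(x, h(x))                      # all close to 1/6 ≈ 0.16667
```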
Show that $\displaystyle\lim_{x\rightarrow 0}\frac{5^x-4^x}{x}=\log_e\left({\frac{5}{4}}\right)$ 1) Show that $\displaystyle\lim_{x\rightarrow 0}\frac{5^x-4^x}{x}=\log_e\left({\frac{5}{4}}\right)$ 2) If $0<\theta < \frac{\pi}{2} $ and $\sin 2\theta=\cos 3\theta~~$ then find the value of $\sin\theta$
1) $\displaystyle\lim_{x\rightarrow 0}\frac{5^x-4^x}{x}$ $=\displaystyle\lim_{x\rightarrow 0}\frac{5^x-1-(4^x-1)}{x}$ $=\displaystyle\lim_{x\rightarrow 0}\frac{5^x-1}{x}$ $-\displaystyle\lim_{x\rightarrow 0}\frac{4^x-1}{x}$ $=\log_e5-\log_e4 ~~~~~~$ $[\because\displaystyle\lim_{x\rightarrow 0}\frac{a^x-1}{x}=\log_e a$ for $a>0]$ $=\log_e\left(\frac{5}{4}\right)$
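A quick numerical check of part 1 in Python (illustration only):

```
import math

for x in (0.1, 0.001, 1e-6):
    print(x, (5**x - 4**x) / x)
print(math.log(5 / 4))   # ≈ 0.22314, the claimed limit
```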
The minimum value of $a^2+b^2+c^2+\frac1{a^2}+\frac1{b^2}+\frac1{c^2}?$ I came across the following problem : Let $a,b,c$ are non-zero real numbers .Then the minimum value of $a^2+b^2+c^2+\dfrac{1}{a^2}+\dfrac{1}{b^2}+\dfrac{1}{c^2}?$ This is a multiple choice question and the options are $0,6,3^2,6^2.$ I do not know how to progress with the problem. Can someone point me in the right direction? Thanks in advance for your time.
Might as well take advantage of the fact that it's a multiple choice question. First, is it possible that the quantity is ever zero? Next, can you find $a, b, c$ such that $$a^2+b^2+c^2+\dfrac{1}{a^2}+\dfrac{1}{b^2}+\dfrac{1}{c^2} = 6?$$
Can twice a perfect square be divisible by $q^{\frac{q+1}{2}} + 1$, where $q$ is a prime with $q \equiv 1 \pmod 4$? Can twice a perfect square be divisible by $$q^{\frac{q+1}{2}} + 1,$$ where $q$ is a prime with $q \equiv 1 \pmod 4$?
Try proving something "harder": Theorem: Let $n$ be a positive integer. Then there exists a positive integer $k$ such that $n \mid 2k^2$. Proof: take $k=n$; then $2k^2=2n^2$ is clearly divisible by $n$. In particular this applies to $n=q^{\frac{q+1}{2}}+1$, so the answer to your question is yes.
Case when there are more leaves than non-leaves in the tree Prove that there are more leaves than non-leaves in a tree that has no vertices of degree 2. Ideas: If the graph doesn't have vertices of degree 2, this means that the vertices of the graph have degree 1 or $\geq$ 3. Vertices with degree 1 are leaves and vertices with degree $\geq$ 3 are non-leaves. In particular, suppose the root has degree 3 (therefore 3 vertices on level 1), level 2 has 6 vertices, and level $i$ has $3\cdot 2^{i-1}$ vertices. Let's assume there are $n$ levels; therefore, according to the assumption, $1+\sum_{i=1}^{n-1} 3\cdot 2^{i-1} < 3\cdot 2^{n-1}$. There are a few problems: currently I don't have an idea how to show that the above inequality is true. In addition, do I need to consider particular cases where not all leaves are on the same level of the tree, or where not all non-leaves have the same degree? Intuitively, all these cases just increase the number of leaves. I will appreciate any idea or hint on how to show that the claim is right.
If $D_k$ is the number of vertices of degree $k$ then $\sum k \cdot D_k=2E$ where $E$ is the number of edges. In a tree, $E=V-1$ with $V$ the number of vertices. So if $D_2=0$ you have $$1D_1+3D_3+4D_4+...=2E=2V-2\\ =2(D_1+D_3+D_4+...)-2.$$ From this, $$2D_1-2-D_1=(3D_3+4D_4+...)-(2D_3+2D_4+...),$$ $$D_1-2=1D_3+2D_4+3D_5+...,$$ and then $$D_1>D_3+2D_4+3D_5+... \ge D_3+D_4+D_5+...$$ The sum $D_3+D_4+D_5+...$ on the right here is the number of non leaves, since $D_2=0$, while $D_1$ is the number of leaves, showing more leaves than non leaves.
Being ready to study calculus Some background: I have a degree in computer science, but the math was limited and this was 10 years ago. High school was way before that. A year ago I relearnt algebra (factoring, solving linear equations, etc). However, I have probably forgotten some of that. I never really studied trigonometry properly. I want to self study calculus and other advanced math, but I feel there are some holes that I should fill before starting. I planned on using MIT OCW for calculus but they don't have a revision course. Is there a video course or book that covers all of this up to calculus? (No textbooks with endless exercises please.) I would like to complete this in a few weeks. Given my background, I think this is possible.
The lecture notes by William Chen cover the requested material nicely. The Trillia Group distributes good texts too.
Minimize $\sum a_i^2 \sigma^2$ subject to $\sum a_i = 1$ $$\min_{a_i} \sum_{i=1}^{n} {a_i}^2 \sigma^2\text{ such that }\sum_{i=1}^{n}a_i=1$$ and $\sigma^2$ is a scalar. The answer is $a_i=\frac{1}{n}$. I tried Lagrangian method. How can I get that answer?
$\displaystyle \sum_{i=1}^{n}(x-a_i)^2\ge0,\forall x\in \mathbb{R}$ $\displaystyle \Rightarrow \sum_{i=1}^{n}(x^2+a_i^2-2xa_i)\ge0$ $\displaystyle \Rightarrow nx^2+\sum_{i=1}^{n}a_i^2-2x\sum_{i=1}^{n}a_i\ge0$ Now we have a quadratic in $x$ which is always greater than or equal to zero, which implies that the quadratic has either a repeated real root or two non-real complex roots. This implies that the discriminant is less than or equal to zero. Discriminant $=\displaystyle D=4\left(\sum_{i=1}^{n}a_i\right)^2-4n\sum_{i=1}^{n}a_i^2\le0$ $\displaystyle \Rightarrow\left(\sum_{i=1}^{n}a_i\right)^2-n\sum_{i=1}^{n}a_i^2\le0$ $\displaystyle \Rightarrow 1-n\sum_{i=1}^{n}a_i^2\le 0$ $\displaystyle \Rightarrow \frac{1}{n}\le \sum_{i=1}^{n}a_i^2$ Equality holds iff the quadratic has a repeated real root. But then $\displaystyle \sum_{i=1}^{n}(x-a_i)^2=0$ for some $x\in \mathbb{R}$ $\Rightarrow x=a_i,\forall 1\le i\le n$ $\Rightarrow \sum _{i=1}^{n}x=\sum _{i=1}^{n}a_i=1$ $\Rightarrow x=a_i=\frac{1}{n},\forall 1\le i\le n$ Now as $\sigma^2\ge 0$, the minimum of $\sum_{i=1}^{n}a_i^2\sigma^2$ is attained when $\sum_{i=1}^{n}a_i^2$ is also minimal. I think this is a much better and more elementary solution than solving it using Lagrange multipliers.
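As an extra sanity check (not needed for the proof), one can compare $\sum a_i^2$ at $a_i=1/n$ with a few random feasible points in Python; every random point on the constraint gives a larger sum of squares:

```
import numpy as np

rng = np.random.default_rng(0)
n = 5
uniform = np.full(n, 1 / n)
print((uniform**2).sum())            # 1/n = 0.2, the claimed minimum

for _ in range(3):
    a = rng.normal(size=n)
    a += (1 - a.sum()) / n           # shift so that sum(a) = 1
    print(a.sum(), (a**2).sum())     # sums to 1, sum of squares >= 0.2
```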
Moduli Spaces of Higher Dimensional Complex Tori I know that the space of all complex 1-tori (elliptic curves) is modeled by $SL(2, \mathbb{R})$ acting on the upper half plane. There are many explicit formulas for this action. Similarly, I have been told that in the higher dimensional cases, the symplectic group $Sp(2n, \mathbb{R})$ acts on some such space to give the moduli space of complex structures on higher dimensional complex tori. Is there a reference that covers this case in detail and gives explicit formulas for the action? In the 1-dimensional case, all complex tori can be realized as algebraic varieties, but this is not the case for higher dimensional complex tori. Does the action preserve complex structures that come from abelian varieties?
You could check Complex tori by Christina Birkenhake and Herbert Lange, published by Birkhäuser. In Chapter 7 (Moduli spaces), Section 4 (Moduli spaces of nondegenerate complex tori), Theorems 4.1 and 4.2 should answer your doubts. Hope it helps.
How to calculate $ \lim_{s \to \infty} \frac{ab + (ab)^2 + ... (ab)^s}{1 +ab + (ab)^2 + ... (ab)^s} $ I'm trying to calculate this limit expression: $$ \lim_{s \to \infty} \frac{ab + (ab)^2 + ... (ab)^s}{1 +ab + (ab)^2 + ... (ab)^s} $$ Both the numerator and denominator should converge, since $0 \leq a, b \leq 1$, but I don't know if that helps. My guess would be to use L'Hopital's rule and take the derivative with respect to $s$, which gives me: $$ \lim_{s \to \infty} \frac{s (ab)^{s-1}}{s (ab)^{s-1}} $$ but this still gives me the non-expression $\frac{\infty}{\infty}$ as the solution, and applying L'Hopital's rule repeatedly doesn't change that. My second guess would be to divide by some multiple of $ab$ and therefore simplify the expression, but I'm not sure how that would help, if at all. Furthermore, the solution in the tutorial I'm working through is listed as $ab$, but if I evaluate the expression that results from L'Hopital's rule, I get $1$ (obviously).
If $ab=1,$ $$ \lim_{s \to \infty} \frac{ab + (ab)^2 + ... (ab)^s}{1 +ab + (ab)^2 + ... (ab)^s}= \lim_{s \to \infty} \frac{s}{s+1}=\lim_{s \to \infty} \frac1{1+\frac1s}=1$$ If $ab\ne1, $ $$\lim_{s \to \infty} \frac{ab + (ab)^2 + ... (ab)^s}{1 +ab + (ab)^2 + ... (ab)^s}$$ $$=\lim_{s \to \infty} \frac{(ab)^{s+1}-ab}{(ab)^{s+1}-1}$$ If $|ab|<1, \lim_{s \to \infty}(ab)^s=0$ then $$\lim_{s \to \infty} \frac{(ab)^{s+1}-ab}{(ab)^{s+1}-1}=ab$$ Similarly if $|ab|>1,\lim_{s \to \infty}\frac1{(ab)^s}=0$ then $$\lim_{s \to \infty} \frac{(ab)^{s+1}-ab}{(ab)^{s+1}-1}=\lim_{s \to \infty} \frac{1-\frac1{(ab)^s}}{1-\frac1{(ab)^{s+1}}}=1$$
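A quick numerical illustration of the $|ab|<1$ case (here $ab=0.5$), showing the partial-sum ratio settling at $ab$:

```
ab = 0.5
terms = [ab**i for i in range(60)]   # 1, ab, ab^2, ...
num = sum(terms[1:])                 # ab + ab^2 + ... + ab^59
den = sum(terms)                     # 1 + ab + ... + ab^59
print(num / den)                     # ≈ 0.5 = ab
```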
Finitely additive probabilities and Integrals My question is the following: if $(\Omega, \mathcal{A}, P)$ is a probability space with $P$ simply additive (not necessarily $\sigma$-additive) and $f,g$ are two real-valued, positive, bounded, $\mathcal{A}$-measurable functions, is it true that $\int f+g\,dP=\int f\,dP+\int g\,dP$? The definition of $P$ simply additive is the following (I took it from Example for finitely additive but not countably additive probability measure): 1) For each $E \subset \Omega$, $0 \le P(E) \le 1$. 2) $P(\Omega) = 1$. 3) If $E_1$ and $E_2$ are disjoint subsets, $P(E_1 \cup E_2) = P(E_1) + P(E_2)$.
You would need to define the meaning of the integral for a finitely additive measure. I am not sure if there is standard and well established definition in that context. And I think the notion will generally not be very well behaved. For instance, suppose you allow not all sets to have a measure, but allow measures on $\mathbb{N}$ to be defined only on finite and cofinite sets (as with normal measures, you only define them on $\sigma$-algebras). Then, you have a nice measure $\mu$ that is $0$ on finite sets and $1$ on cofinite sets. Let $f(n) := n$, or $g(n) = n \pmod{2}$. What values would you assign to $\int f $ and $\int g$ ? That being said, linearity is a "must" for all notions of integrals I have ever seen. I am far from being an expert, but (variously generalised) integrals are things that take a map, map it to a number, and always (1) preserve positivity and (2) are linear. Edit: If you are referring to the example of ultrafilter measures mentioned under the link, they do lead to well defined "integrals". That is, you can define $\int f = c$ if and only if the set $$\{ n \ : \ f(n) \in (c-\epsilon, c+\epsilon) \}$$ has measure $1$, or equivalently belongs to the ultrafilter. This gives you a well defined integral, that has all the usual nice properties (except Fubini's theorem, perhaps).
Finding equilibrium with two dependent variables I was thinking while in the shower. What if I wanted to set two hands of the clock so that the long hand is at a golden angle to the short hand? I thought: set the short hand (hour hand) at 12, and then set the long hand (minute hand) at whatever is a golden angle to 12. I realized, though, that the short hand moves at 1/12 the speed of the long hand, so I would need to move the short hand 1/12 of whatever the long hand's angle from 12 is. However, doing this will affect the angle between the two hands, so I would need to move the long hand further. But again, I would need to move the short hand further because the long hand was moved further, and this creates a vicious circle until some kind of equilibrium is found. Rather than doing this through trial and error, there must be a way to solve this problem mathematically. Maybe with an equation? My math isn't very good, but I'm very interested in math, so this is why I'm posting this question. Could someone give me the solution to this problem while explaining it very thoroughly? Thanks for reading. I'm interested in seeing what answers pop up on this question. :) PS: Please help me with the tags for this question, as I don't know which tags would be appropriate.
Start at 0:00 where the two hands are on the 12. You know that the long hand advances 12 times faster than the short one, so if the long hand is at an angle $x$, the angle between the two hands is $x - x/12 = 11x/12$. So you just need to place the long hand at an angle $12 \alpha/11$ so that the angle between the two is $\alpha$.
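As a worked example in Python (taking the "golden angle" to mean $360°(1-1/\varphi)\approx137.5°$, which is an assumption about what the asker intends):

```
golden_angle = 360 * (1 - 1 / ((1 + 5**0.5) / 2))   # ≈ 137.5078 degrees
long_hand = 12 * golden_angle / 11                  # ≈ 150.01 degrees from 12
short_hand = long_hand / 12                         # the hour hand sits at 1/12 of that
print(long_hand - short_hand, golden_angle)         # both ≈ 137.5078
```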
Prove that if $\mathcal P(A) \cup \mathcal P(B)= \mathcal P(A\cup B)$ then either $A \subseteq B$ or $B \subseteq A$. Prove that for any sets $A$ or $B$, if $\mathcal P(A) \cup \mathcal P(B)= \mathcal P(A\cup B)$ then either $A \subseteq B$ or $B \subseteq A$. ($\mathcal P$ is the power set.) I'm having trouble making any progress with this proof at all. I've assumed that $\mathcal P(A) \cup \mathcal P(B)= \mathcal P(A\cup B)$, and am trying to figure out some cases I can use to help me prove that either $A \subseteq B$ or $B \subseteq A$. The statement that $\mathcal P(A) \cup \mathcal P(B)= \mathcal P(A\cup B)$ seems to be somewhat useless though. I can't seem to make any inferences with it that yield any new information about any of the sets it applies to or elements therein. The only "progress" I seem to be able to make is that I can conclude that $A \subseteq A \cup B$, or that $B \subseteq A \cup B$, but I don't think this gives me anything I don't already know. I've tried going down the contradiction path as well but I haven't been able to find anything there either. I feel like I am missing something obvious here though...
Hint: Try instead to prove the contrapositive: If $A \nsubseteq B$ and $B \nsubseteq A$, then $\mathcal{P} ( A ) \cup \mathcal{P} ( B ) \neq \mathcal{P} ( A \cup B )$. Remember that $E \nsubseteq F$ means that there is an element of $E$ which is not an element of $F$.
History of Conic Sections Recently, I came to know that the ancient Greeks had already studied conic sections. I find myself wondering if they knew about things like the directrix or eccentricity. (I mean familiar with these concepts in spirit, not in terminology.) This is just the appetizer. What I really want to understand is what would make someone even think of these (let me use the word) contrived constructions for conic sections. I mean, let us pretend for a while that we are living in the $200$ BC period. What would motivate the mathematicians of our time (which is $200$ BC) to study the properties of figures that are obtained on cutting a cone at different angles to its vertical? Also, what would lead them to deduce that if the angle of cut is acute then the figure obtained has the curious property that for any point on that figure the sum of its distances from some $2$ fixed points is a constant? And in the grand scheme of things, how did our fellow mathematicians deduce the concepts of directrix and eccentricity? (I am not sure if this was an ancient discovery, but in all, yes, I will find it really gratifying to understand the origin of conic sections.) Please shed some light on this whenever convenient. I will really find it helpful. Thanks
There are several ideas about where it might have come from. One such idea is the construction of burning mirrors, for which a parabola is the best shape, because it concentrates the light in a single point, and the distance between the mirror and that point can be calculated by the use of geometry (see Diocles' On Burning Mirrors; I can really recommend this book for its introduction alone). Conics were also useful in the construction of different sundials. I have researched the topic quite a bit, but sadly I am yet to understand how they "merged" all these seemingly unrelated topics into cutting a cone. Most likely the lost work On Solid Loci by Euclid would provide some more insight.
Is $\mathrm{GL}_n(K)$ divisible for an algebraically closed field $K?$ This is a follow-up question to this one. To reiterate the definition, a group $G$ (possibly non-abelian) is divisible when for all $k\in \Bbb N$ and $g\in G$ there exists $h\in G$ such that $g=h^k.$ Let $K$ be an algebraically closed field. For which $n$ is $\mathrm{GL}_n(K)$ divisible? (It is clearly true for $n=0$ and $n=1$.) The linked question is about the case of $K=\Bbb C$ and the answer is "for all $n$" there.
${\rm GL}(n,K)$ is not divisible when $K$ has finite characteristic $p$ and $n >1.$ The maximum order of an element of $p$-power order in ${\rm GL}(n,K)$ is $p^{e+1},$ where $p^{e} < n \leq p^{e+1}$ ($e$ a non-negative integer). There is an element of that order, and it is not the $p$-th power of any element of ${\rm GL}(n,K).$
Darboux integral too sophisticated for Calculus 1 students? I strongly prefer Darboux's method to the one commonly found in introductory level calculus texts such as Stewart, but I'm worried that it might be a bit overwhelming for my freshman level calculus class. My aim is to develop the theory, proving all of the results we need such as FTC, substitution rule, etc. If I can get everyone to buy into the concepts of lub and glb, this should be a fairly neat process. But that's potentially a big "if". Even worse, maybe they just won't care about the theory since they know I will not ask them to prove anything in an assignment. It seems to me that there is very little middle ground here. You have to present integration the right way, paying attention to all the details, or accept a decent amount of sloppiness. In either case the class will likely lose interest. Questions: Would you attempt the Darboux method? I would be using Spivak's Calculus / Rudin's Real Analysis as guides. I suppose there's no way of dumbing this down. Otherwise, could you recommend a good source for the standard Riemann integral? Stewart just doesn't do it for me. Thanks
Part of the point of college is to learn the craft, not just the subject. But of all the things you could give the students by a single deviation from the textbook, why Darboux integrals? Not infinite series, recognizing measures in an integral, or linear maps? Each of those would greatly clarify foundations, allowing students to remake calculus after their intuitions.
Finding a subspace whose intersections with other subspaces are trivial. On p.24 of John M. Lee's Introduction to Smooth Manifolds (2nd ed.), he constructs the smooth structure of the Grassmannian. And when he tries to show the Hausdorff condition, he says that for any 2 $k$-dimensional subspaces $P_1$, $P_2$ of $\mathbb{R}^n$, it is always possible to find an $(n-k)$-dimensional subspace whose intersection with both $P_1$ and $P_2$ is trivial. My question: I think it is also intuitively obvious that we can always find an $(n-k)$-dimensional subspace whose intersections with $m$ subspaces $P_1,\ldots,P_m$ are all trivial, which is a more general situation. But I can't prove it rigorously. Could you help me with this generalized case? Thank you.
For $j=1,2,\ldots,m$, let $B_j$ be a basis of $P_j$. It suffices to find a set of vectors $S=\{v_1,v_2,\ldots,v_{n-k}\}$ such that $S\cup B_j$ is a linearly independent set of vectors for each $j$. We will begin with $S=\phi$ and put vectors into $S$ one by one. Suppose $S$ already contains $i<n-k$ vectors. Now consider a vector $v_{i+1}=v_{i+1}(x)$ of the form $(1,x,x^2,\ldots,x^{n-1})^T$. For each $j$, there are at most $n-1$ different values of $x$ such that $v_{i+1}(x)\in\operatorname{span}\left(\{v_1,v_2,\ldots,v_i\}\cup B_j\right)$, otherwise there would exist a non-invertible Vandermonde matrix that corresponds to distinct interpolation nodes. Therefore, there exist some $x$ such that $v_{i+1}(x)\notin\operatorname{span}\left(\{v_1,v_2,\ldots,v_i\}\cup B_j\right)$ for all $j$. Put this $v_{i+1}$ into $S$, $S\cup B_j$ is a linearly independent set. Continue in this manner until $S$ contains $n-k$ vectors. The resulting $\operatorname{span}(S)$ is the subspace we desire.
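A rough numpy sketch of this greedy construction (an illustration only: `complement` is a hypothetical helper name, and the exact linear-independence test is replaced by a floating-point rank computation, so keep $n$ small):

```python
import numpy as np

def complement(bases, n):
    # bases: list of k x n arrays whose rows span P_1, ..., P_m
    # returns an (n-k) x n array whose rows span a subspace meeting every P_j only in 0
    k = bases[0].shape[0]
    S = np.empty((0, n))
    x = 0.0
    while S.shape[0] < n - k:
        x += 1.0
        v = np.array([x**i for i in range(n)])   # point on the moment curve
        ok = all(
            np.linalg.matrix_rank(np.vstack([B, S, v])) == B.shape[0] + S.shape[0] + 1
            for B in bases
        )
        if ok:                                    # v avoids span(S together with B_j) for every j
            S = np.vstack([S, v])
    return S
```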
Isomorphism of measurable spaces Hi, can you help me understand this proposition and its proof? Definition 24 is: A measurable space $(T, \mathcal{T})$ is said to be separable if there exists a sequence $(A_n)$ in $\mathcal{T}$ which generates $\mathcal{T}$ and such that the functions $\chi_{A_n}$ separate the points of $T$. Please, thank you.
Clearly, $h:T\to h(T)$ is a surjection. The fact that $h$ is an injection follows from the fact that the functions $\{1_{A_n}\}_{n\in \Bbb N}$ separate the points of $T$: if $t\neq s$, then for some $n$ the indicator function separates them, that is, $1_{A_n}(t)\neq 1_{A_n}(s)$, and hence $h(t)\neq h(s)$. I didn't get your second confusion; could you elaborate?
Solving a tricky equation involving logs How can you solve $$a(1 - 1/c)^{a - 1} = a - b$$ for $a$? I get $(a-1)\ln(1-1/c) = \ln(1-b/a)$ and then I am stuck. All the variables are real and $c>a>b>1$.
You have to solve this numerically, since no standard function will do it. To get an initial value, in $a(1 - 1/c)^{a - 1} = a - b$, use the first two terms of the binomial series to get $(1 - 1/c)^{a - 1} \approx 1-(a-1)/c$. This gives $a(1-(a-1)/c) \approx a-b$, i.e. $a(a-1)/c \approx b$, or $a^2-a \approx bc$. Completing the square, $a^2-a+1/4 \approx bc$, $(a-1/2)^2 \approx bc$, $a \approx \sqrt{bc}+1/2$.
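A minimal numerical sketch of this (Python/scipy as an illustration; the sample values of $b,c$ are arbitrary):

```python
import math
from scipy.optimize import fsolve

def solve_a(b, c):
    f = lambda a: a * (1 - 1 / c) ** (a - 1) - (a - b)
    a0 = math.sqrt(b * c) + 0.5      # initial estimate from the expansion above
    return fsolve(f, a0)[0]

print(solve_a(2.0, 10.0))            # about 5.4 for these sample values
```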
Easy way to simplify this expression? I'm teaching myself algebra 2 and I'm at this expression (I'm trying to find the roots): $$ x=\frac{-1-2\sqrt{5}\pm\sqrt{21-4\sqrt{5}}}{4} $$ My calculator gives $ -\sqrt{5} $ and $ -\frac12 $ and I'm wondering how I would go about simplifying this down without a calculator. Is there a relatively painless way to do this that I'm missing? Or is it best to just leave these to the calculator? Thanks!
You'll want to try to write $21-4\sqrt 5$ as the square of some number $a+b\sqrt 5$. In particular, $$(a+b\sqrt 5)^2=a^2+2ab\sqrt 5+5b^2,$$ so we'll need $ab=-2$ and $21=a^2+5b^2$. Some quick trial and error shows us that $a=1,b=-2$ does the job. Since $2\sqrt5>1$, this gives $\sqrt{21-4\sqrt5}=2\sqrt5-1$, and the two roots become $\frac{-1-2\sqrt5+(2\sqrt5-1)}{4}=-\frac12$ and $\frac{-1-2\sqrt5-(2\sqrt5-1)}{4}=-\sqrt5$.
Stiff differential equation where Runge-Kutta $4$th order method can be broken Is there a stiff differential equation that cannot be solved by the Runge-Kutta 4th order method, but which has an analytical solution for testing?
Cleve Moler, in this note, gives an innocuous-looking DE he attributes to Larry Shampine that models flame propagation. The differential equation is $$y^\prime=y^2-y^3$$ with initial condition $y(0)=\frac1{h}$, and integrated over the interval $[0,2h]$. The exact solution of this differential equation is $$y(t)=\frac1{1+W((h-1)\exp(h-t-1))}$$ where $W(t)$ is the Lambert function, which is the inverse of the function $t\exp\,t$. (Whether the Lambert function is considered an analytical solution might well be up for debate, but I'm in the camp that considers it a closed form.) For instance, in Mathematica, the following code throws a warning about possible stiffness:

```mathematica
With[{h = 40},
 y /. First @ NDSolve[{y'[t] == y[t]^2 - y[t]^3, y[0] == 1/h}, y, {t, 0, 2 h},
   Method -> {"ExplicitRungeKutta", "DifferenceOrder" -> 4}]]
```
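For readers working in Python, here is a hedged sketch of the same experiment with scipy. SciPy has no classical fixed-step RK4, so this compares an explicit Runge–Kutta pair (RK45) with an implicit stiff solver (Radau); the explicit method needs dramatically more function evaluations as $h$ grows, which is the practical symptom of stiffness:

```python
from scipy.integrate import solve_ivp

h = 100.0
flame = lambda t, y: y**2 - y**3

for method in ("RK45", "Radau"):
    sol = solve_ivp(flame, (0.0, 2.0 * h), [1.0 / h],
                    method=method, rtol=1e-6, atol=1e-10)
    print(method, sol.nfev)   # explicit RK45 needs far more RHS evaluations
```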
How one can show that $\int _0 ^1 ((h')^2-h^2)dx \ge0 $ for all $h\in C^1[0,1]$ and $h(0) = 0$? How one can show that $\int _0 ^1 ((h')^2-h^2)dx \ge0 $ for all $h\in C^1[0,1]$ and $h(0) = 0$?
By the fundamental theorem of Calculus: $$ \left|h(x)\right| = \left|\int_0^x h'(t) \,dt\right| \le \int_0^x \left|h'(t)\right| \,dt \le \int_0^1 \left|h'(x)\right| \,dx $$ By Cauchy-Schwarz (or Jensen's) inequality: $$ \left|h(x)\right|^2 \le \left(\int_0^1 \left|h'(x)\right| \,dx\right)^2 \le \int_0^1 \left|h'(x)\right|^2 \,dx $$ Integrate both sides with respect to $x$ from $0$ to $1$ to get: $$ \int_0^1 \left|h(x)\right|^2\,dx \le \int_0^1 \left|h'(x)\right|^2 \,dx $$
Probability question relating to a prison break I am stuck on a question regarding a prisoner trapped in a cell with 3 doors that has a probability associated with each door chosen (say $.5$ for door $A$, $.3$ for door $B$ and $.2$ for door $C$). The first door leads to his own cell after traveling $2$ days, whereas the second door leads to his own cell after $3$ days and the third to freedom after $1$ day. "A prisoner is trapped in a cell containing three doors. The first door leads to a tunnel that returns him to his cell after two days of travel. The second leads to a tunnel that returns him to his cell after three days of travel. The third door leads immediately to freedom. a) Assuming that the prisoner will always select doors 1, 2, and 3 with probability 0.5, 0.3, 0.2, what is the expected number of days until he reaches freedom? b) Assuming that the prisoner is always equally likely to choose among those doors that he has not used, what is the expected number of days until he reaches freedom? (In this version, for instance, if the prisoner initially tries door 1, then when he returns to the cell, he will now select only from doors 2 and 3.) c) For parts (a) and (b) find the variance of the number of days until the prisoner reaches freedom." In the problem I was able to find $E[X]$ (the expected number of days until the prisoner is free, where $X$ is the number of days to be free). Where I get stuck is how to find the variances for this problem. I do not know how to find $E[X^2 \mid \text{door } 1 \text{ (or } 2 \text{ or } 3\text{) chosen}]$. My understanding is that he does not learn from choosing the wrong door. Could anyone help me out? Thank you very much
Let's go through it for part (a) first. Let $X$ denote the number of days until this prisoner gains freedom. I think you already have $E[X|D=1], E[X|D=2], E[X|D=3]$: $E[X|D=1] = E[X + 2]$ $E[X|D=2] = E[X + 3]$ $E[X|D=3] = E[0]$ So we have $E[X^2|D=1] = E[(X + 2)^2] = E[X^2] + 4E[X] + 4$ $E[X^2|D=2] = E[(X + 3)^2]= E[X^2] + 6E[X] + 9$ $E[X^2|D=3] = E[0^2] = 0 $ and now you can solve it, because you have $E[X]$ already.
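If you want to check the numbers at the end, here is a small sympy sketch for part (a), taking door 3 to mean immediate freedom as above:

```python
from sympy import symbols, Eq, solve, Rational

EX, EX2 = symbols('EX EX2')
p1, p2, p3 = Rational(1, 2), Rational(3, 10), Rational(1, 5)

# condition on the first door chosen
eq1 = Eq(EX,  p1 * (EX + 2)           + p2 * (EX + 3)           + p3 * 0)
eq2 = Eq(EX2, p1 * (EX2 + 4 * EX + 4) + p2 * (EX2 + 6 * EX + 9) + p3 * 0)

sol = solve([eq1, eq2], [EX, EX2])
mean = sol[EX]
variance = sol[EX2] - mean ** 2
print(mean, variance)    # 19/2 and 455/4
```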
Group theory - left/right $H$-cosets and quotient sets $G/H$ and $H \setminus G$. Let $G$ be a group and $H$ be a subgroup of $G$. The left $H$-cosets are the sets $gH, g \in G$. The set of left $H$-cosets is the quotient set $G/H$. The right $H$-cosets are the sets $Hg, g\in G$. The set of right $H$-cosets is the quotient $H \setminus G.$ The set $G$ is the union of left (respectively, right) $H$-cosets, each of which has $\left|H\right|$ elements. We deduce that the order of $H$ divides the order of $G$, and that the number of left $H$-cosets equals the number of right $H$-cosets. What I wrote can be found in Groups and Symmetries - Yvette Kosmann-Schwarzbach. What should I understand from that statement? Is it OK if I say that: $$G=gH \cup Hg?$$ $$\left|G/H\right|=\left|H \setminus G\right|?$$ What does it mean that the number of left $H$-cosets equals the number of right $H$-cosets? And why do we deduce that the order of $H$ divides the order of $G$? Thanks :)
It means that $G=\cup_{g\in G} gH=\cup_{g\in G} Hg$, and that the map $x\rightarrow gx$ is a bijection between $H$ and $gH$, therefore $|gH|=|H|$. It can be deduced from this that $|G|=|G/H|\,|H|$. A similar statement holds for right cosets.
Right translation - left coset - orbits We can remark that the left coset $gH$ of $g \in G$ relative to a subgroup $H$ of $G$ is the orbit of $g$ under the action of $H \subset G$ acting by right translation. What is that right translation? and how can I prove that the orbit of $g$ under the action of $H \subset G$ acting by right translation is $gH$ ?
Right translation can equally be read as "right multiplication"; the only difference is the suggestion of commutativity that "multiplication" can carry. As to your second query, let the subgroup $H$ act on $G$ by right multiplication: $$h \cdot g = gh \qquad \forall g \in G \quad \forall h \in H$$ For any $g \in G$, the orbit $H \cdot g$ is the set $$\{ h \cdot g : h \in H \} = \{ gh: h \in H \} = gH.$$ Note that we didn't need the operation to be commutative here.
Trouble with wording of this math question "Find the derivative as follows (you need not simplify)": a) $y = 2^x f (x)$, where $f (x)$ is a differentiable function and so is $f '(x)$: find $\frac{d^2x}{dx^2}$. That's the exact wording of the question, and no additional information is given outside of this question. Can anyone make sense out of this question? I'm not looking for a solved answer, just an idea of what I'm actually supposed to be doing. Thanks
We are given: $$y = 2^x f(x)$$ where $f(x)$ and $f'(x)$ are differentiable functions and asked to find the second derivative. This is just an application of the product rule, so $\displaystyle \frac{d}{dx} \left(2^x f(x)\right) = 2^x (f(x) \log(2) + f'(x))$ $\displaystyle \frac{d^2}{dx^2} \left(2^x f(x)\right) = 2^x (f(x) \log^2(2) + 2\log(2) f'(x) + f''(x))$
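A quick symbolic check of this with sympy (just a verification sketch):

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')
y = 2 ** x * f(x)

print(sp.expand(sp.diff(y, x)))      # equals 2^x (f(x) log 2 + f'(x))
print(sp.expand(sp.diff(y, x, 2)))   # equals 2^x (f(x) log^2 2 + 2 log 2 f'(x) + f''(x))
```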
Determine and classify all singular points Determine and find residues for all singular points $z\in \mathbb{C}$ for (i) $\frac{1}{z\sin(2z)}$ (ii) $\frac{1}{1-e^{-z}}$ Note: I have worked out (i), but (ii) seems still not easy.
Thanks to Mhenni Benghorbal for the hint! I have worked out (i) so far: The singular points for (i) are $\frac{k\pi}{2}, k \in \mathbb{Z}$. The case $k=0$ was justified by Mhenni. For $k \neq 0$ the singular points are simple poles, since $$\lim_{z \to \frac{k\pi}{2}}\frac{z-\frac{k\pi}{2}}{z\sin(2z)}=\lim_{z \to \frac{k\pi}{2}}(-1)^k\frac{z-\frac{k\pi}{2}}{z\sin(2(z-\frac{k\pi}{2}))}=(-1)^k\frac{1}{k\pi}\neq0$$ which also gives the corresponding residues. For (ii): Clearly the singularities are $2ki\pi, k\in\mathbb{Z}$, which are isolated poles (since the reciprocal has isolated zeros there). But I have a problem computing limits such as $$\lim_{z \to 0} \frac{z e^z}{e^z-1}.$$ Note that L'Hospital's rule may not apply here, and the method of series expansion does not seem applicable here either.
Why $\log(n!)$ isn't zero I have wondered why $\log (n!)$ isn't zero for $n \in \mathbb N$, because I think that $\log (1)$ is zero, so after multiplying by all the following terms the result should still become zero. Thanks in advance.
Might as well make an answer of it. $$\begin{align*} \lg(n!)&=\lg(1\cdot2\cdot3\cdot\ldots\cdot n)\\ &=\lg 1+\lg 2+\lg 3+\ldots+\lg n\\ &=\lg 2+\lg 3+\ldots+\lg n\;, \end{align*}$$ so it won’t be $0$ unless $n=1$ (or $n=0$): you’re adding the logs, not multiplying them.
How to prove $\sum\limits_{k=0}^n{n \choose k}(k-1)^k(n-k+1)^{n-k-1}= n^n$? How do I prove the following identity directly? $$\sum_{k=0}^n{n \choose k}(k-1)^k(n-k+1)^{n-k-1}= n^n$$ I thought about using the binomial theorem for $(x+a)^n$, but got stuck, because I realized that my $x$ and $a$ in this case are dynamic variables. Any hints? Don't give me the answer; I really want to think through this on my own, but a nudge in the correct direction would be awesome. Thanks!
A nice proof uses the Lagrange–Bürmann inversion formula. Start defining: \begin{equation} C(z) = z e^{C(z)} \end{equation} which gives the expansion: \begin{equation} e^{\alpha C(z)} = \alpha \sum_{n \ge 0} \frac{(\alpha + n)^{n - 1} z^n}{n!} \end{equation} Then you have: \begin{equation} e^{(\alpha + \beta) C(z)} = e^{\alpha C(z)} \cdot e^{\beta C(z)} \end{equation} Expanding and comparing both sides gives Abel's binomial theorem: \begin{equation} (\alpha + \beta) (\alpha + \beta + n)^{n - 1} = \sum_{0 \le k \le n} \binom{n}{k} (\alpha + k)^{k - 1} (\beta + n - k)^{n - k - 1} \end{equation}
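Independently of the proof, the identity in the question is easy to sanity-check numerically (a Python sketch; exact rational arithmetic is used so that the $1^{-1}$ factor at $k=n$ stays exact):

```python
from fractions import Fraction
from math import comb

def abel_sum(n):
    total = Fraction(0)
    for k in range(n + 1):
        total += comb(n, k) * Fraction(k - 1) ** k * Fraction(n - k + 1) ** (n - k - 1)
    return total

for n in range(1, 9):
    assert abel_sum(n) == n ** n   # holds for every n tested
```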
Conditional probabilities, urns I found this question interesting and apparently it has to do with conditional probabilities: An urn contains six black balls and some white ones. Two balls are drawn simultaneously. They have the same color with probability 0.5. How many white balls are in the urn? As far as I am concerned, I would say it is two white balls...
Long hint/walkthrough: Let the number of white balls be denoted $w$. The probability of pulling two white balls will be $\frac{w}{6+w}\cdot\frac{w-1}{6+w-1}$ since the probability of choosing a white ball will be $P(w_1)=\frac{w}{w+6}$ and since there is one less white ball the probability of choosing another will be $P(w_2)=\frac{w-1}{6+w-1}$. To find the probability that both these events will occur we multiply their probability $P(w_1\cap w_2)=P(w_1)\cdot P(w_2)$. Note that this is only the probability of finding two white balls. What will be the probability of finding two black balls? To find the probability that one OR another event occurs we add the probability of each event ($P(x\cup y)=P(x)+P(y)$). So what is the probability of choosing two of the same color balls? Can we find a mathematical way to express the probability of choosing one ball of each color? If so, we can set these equations equal and have $P(w_1\cap w_2)+P(b_1\cap b_2)=P(b_1\cap w_2)+P(w_1\cap b_2)$. Since our only variable should be $w$ this will allow us to solve.
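If you want to check your algebra at the end, here is a sympy sketch that carries the hint to its conclusion (note that it does reveal the final answer):

```python
from sympy import symbols, Eq, solve, Rational

w = symbols('w', positive=True)

p_same = (w * (w - 1) + 6 * 5) / ((w + 6) * (w + 5))   # P(both white) + P(both black)
print(solve(Eq(p_same, Rational(1, 2)), w))             # the admissible numbers of white balls
```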
Strictly increasing function on positive integers giving value between $100$ and $200$ I'm looking for some sort of function $f$ that can take any integer $n>0$ and give a real number $100 \le m \lt 200$ such that if $a \lt b$ then $f(a) \lt f(b)$. How can I do that? I'm a programmer and I need this for an application of mine.
$f(n)=200-2^{-n}$ satisfies your criteria.
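Since you mention this is for a program, a direct translation (a sketch; note the floating-point caveat in the comment):

```python
def f(n: int) -> float:
    # Strictly increasing on positive integers, always in [100, 200).
    # Caveat: with IEEE doubles, 2**-n drops below the spacing of floats near 200
    # once n is roughly 45 or more, so values stop being distinct; switch to
    # decimal/Fraction if n can get that large.
    return 200.0 - 2.0 ** (-n)

print([f(n) for n in (1, 2, 3, 10)])   # 199.5, 199.75, 199.875, ...
```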
Irreducible components of the variety $V(X^2+Y^2-1,X^2-Z^2-1)\subset \mathbb{C}^3.$ I want to find the irreducible components of the variety $V(X^2+Y^2-1, \ X^2-Z^2-1)\subset \mathbb{C}^3$ but I am completely stuck on how to do this. I have some useful results that can help me decompose $V(F)$ when $F$ is a single polynomial, but the problem seems much harder even with just two polynomials. Can someone please help me? EDIT: In trying to answer this question, I knew it would be useful to know if the ideal $I=(X^2+Y^2-1, X^2-Z^2-1)$ was a prime ideal of $\mathbb{C}[X,Y,Z]$ but I'm finding it hard to describe the quotient ring. Is it a prime ideal?
$\newcommand{\rad}{\text{rad}\hspace{1mm}} $ The ideal $(x^2 + y^2 - 1,x^2 - z^2 - 1)$ is equal to the ideal $(y^2 + z^2 ,x^2 - z^2 - 1)$. This is because \begin{eqnarray*} (y^2 + z^2) + (x^2 - z^2 - 1) &=& y^2 + x^2 - 1\\ (x^2 + y^2 - 1) - (x^2 - z^2 - 1) &=& y^2 + z^2. \end{eqnarray*} Thus we get \begin{eqnarray*} V(x^2 + y^2 - 1,x^2 - z^2 - 1) &=& V( y^2 + z^2,x^2 - z^2 - 1) \\ &=& V\left( (y+zi)(y- zi),x^2 - z^2 - 1\right) \\ &=& V(y+zi,x^2 - z^2 - 1) \cup V(y-zi,x^2 - z^2 - 1).\end{eqnarray*} Now we claim that the ideals $(y+zi,x^2 - z^2 - 1)$ and $(y-zi,x^2 - z^2 - 1)$ are prime ideals. I will only show that the former is prime because the proof for the latter is similar. By the Third Isomorphism Theorem we have \begin{eqnarray*} \Bbb{C}[x,y,z]/(y+ zi,x^2 - z^2 - 1) &\cong& \Bbb{C}[x,y,z]/(y+zi) \bigg/ (y+ zi,x^2 - z^2 - 1)/ (y + zi) \\ &\cong& \Bbb{C}[x,z]/(x^2 - z^2 - 1)\end{eqnarray*} because $\Bbb{C}[x,y,z]/(y + zi) \cong \Bbb{C}[x,z]$. At this point there are two ways to proceed, one of which is showing that $x^2 - z^2 - 1$ is irreducible. However there is a nicer approach which is the following. Writing \begin{eqnarray*} x &=& \frac{x+z}{2} + \frac{x-z}{2} \\ z &=& \frac{z + x}{2} + \frac{z-x}{2}\end{eqnarray*} this shows that $\Bbb{C}[x,z] = \Bbb{C}[x+z,x-z]$. Then upon factoring $x^2 - z^2 - 1$ as $(x+z)(x-z) - 1$ the quotient $\Bbb{C}[x,z]/(x^2 - z^2 - 1)$ is isomorphic to $\Bbb{C}[u][v]/(uv - 1)$ where $u = x+z, v = x-z$. Now recall that $$\left(\Bbb{C}[u] \right)[v]/(uv - 1) \cong \left(\Bbb{C}[u]\right)_{u} $$ where the subscript denotes localisation at the multiplicative set $\{1,u,u^2,u^3 \ldots \}$. Since the localisation of an integral domain is an integral domain, this completes the proof that $(y+zi,x^2 - z^2 - 1)$ is prime and hence a radical ideal. Now use Hilbert's Nullstellensatz to complete the proof that your algebraic set decomposes into irreducibles as claimed in Andrew's answer.
Proving $\,f$ is constant. Let $\,f:[a,b] \rightarrow \Bbb R $ be continuous and $\int_a^b f(x)g(x)\,dx=0$, whenever $g:[a,b] \rightarrow \Bbb R $ is continuous and $\int_a^b g(x)\,dx=0$. Show that $f$ is a constant function. I tried a bunch of things including the mid-point integral theorem(?) but to no avail. I'd appreciate an explanation of a solution because I really don't see where to go with this one..
Suppose $f$ is nonconstant. Define $g(x) = f(x)-\bar{f}$, where $\bar{f}:= \frac{1}{b-a} \int_a^b f(x)dx$. Then $\int_a^b g = 0$. Then $$\int_a^b f(x)g(x)dx = \int_a^b f(x) \big(f(x)-\bar{f}\big) dx = \int_a^b \big(f(x)-\bar{f}\big)^2 dx >0$$ The reason that this last term is larger than zero, is that $f$ is non-constant, so we can find at least one value $x \in [a,b]$ such that $f(x) \neq \bar{f}$. By continuity, there exists some interval $[c,d] \ni x$ such that $f(y) \neq \bar{f}$ for $y \in [c,d]$, and thus $(f(y)-\bar{f})^2>0$ for $y\in [c,d]$. Thus if $f$ is nonconstant, then we can find at least one $g$ for which $\int_a^b g = 0$ and $\int_a^b fg >0$.
Funny integral inequality Assume $f(x) \in C^1([0,1])$,and $\int_0^{\frac{1}{2}}f(x)\text{d}x=0$,show that: $$\left(\int_0^1f(x)\text{d}x\right)^2 \leq \frac{1}{12}\int_0^1[f'(x)]^2\text{d}x$$ and how to find the smallest constant $C$ which satisfies $$\left(\int_0^1f(x)\text{d}x\right)^2 \leq C\int_0^1[f'(x)]^2\text{d}x$$
Solution 2: by the Cauchy–Schwarz inequality we have $$\int_{0}^{\frac{1}{2}}[f'(x)]^2dx\int_{0}^{\frac{1}{2}}x^2dx\ge\left(\int_{0}^{\frac{1}{2}}xf'(x)dx\right)^2=\left[\dfrac{1}{2}f(\dfrac{1}{2})-\int_{0}^{\frac{1}{2}}f(x)dx\right]^2$$ so $$\int_{0}^{\frac{1}{2}}[f'(x)]^2dx\ge 24\left[\dfrac{1}{2}f(\dfrac{1}{2})-\int_{0}^{\frac{1}{2}}f(x)dx\right]^2$$ By the same method (this time with the weight $x-1$ on $[\frac12,1]$), we have $$\int_{\frac{1}{2}}^{1}[f'(x)]^2dx\ge 24\left[\dfrac{1}{2}f(\dfrac{1}{2})-\int_{\frac{1}{2}}^{1}f(x)dx\right]^2$$ Adding the two inequalities and using $2(a^2+b^2)\ge (a-b)^2$ with $a=\dfrac{1}{2}f(\dfrac{1}{2})-\int_{0}^{\frac{1}{2}}f(x)dx$ and $b=\dfrac{1}{2}f(\dfrac{1}{2})-\int_{\frac{1}{2}}^{1}f(x)dx$, we get $$\int_{0}^{1}[f'(x)]^2dx\ge 12\left(\int_{0}^{1}f(x)dx-2\int_{0}^{\frac{1}{2}}f(x)dx\right)^2$$
Differential Forms, Exterior Derivative I have a question regarding differential forms. Let $\omega = dx_1\wedge dx_2$. What would $d\omega$ equal? Would it be 0?
The differential form $\omega = dx_1 \wedge dx_2$ is constant hence we have $$ d\omega = d(dx_1 \wedge dx_2) = d(1) \wedge dx_1 \wedge dx_2 \pm 1 \, ddx_1 \wedge dx_2 \pm 1 \, dx_1 \wedge ddx_2$$ and because $d^2 = 0$, we have $$ d \omega = 0.$$
Prove set is dense This is a pretty basic and general question. I have to prove (if true) that the sum of two dense sets is dense as well. Let $A, B$ be nonempty dense sets in $\mathbb R$. Then $A+B=\{a+b\mid a\in A, b\in B\}$ is also dense. Can anyone give me a pointer as to how one may prove this (just the method)? is it algebraic or with $\sup /\inf/\text{sequences}$ etc. Thanks!
Hint:

* If $A$ is dense in $\mathbb{R}$, then $A+2013 = \{a+2013 \mid a \in A\}$ is also dense in $\mathbb{R}$.
* A union of dense sets is dense; in particular, $A+B = \bigcup_{b \in B}(A+b)$.

Good luck!
Vector Fields Question 4 I am struggling with the following question: Prove that any left invariant vector field on a Lie group is complete. Any help would be great!
Call your Lie group $G$ and your vector field $V$. It suffices to show that there exists $\epsilon > 0$ such that given $g \in G$ (notice the order of the quantifiers!) there exists an integral curve $\gamma_g : (-\epsilon, \epsilon) \rightarrow G$ with $\gamma_g (0) = g$, i.e. a curve $\gamma$ starting at $g$ with $\gamma_\ast (\frac{d}{dt}|_s) = V_{\gamma(s)}$. Now there exists an integral curve $\gamma_e$ through the identity, $e$, defined on some open neighborhood $(-\epsilon, \epsilon)$ of $0$. This is the $\epsilon$ we choose. For any $g \in G$ we use the left invariance of $V$ to check that $L_g \circ \gamma_e$ is an integral curve: $L_{g\ast} (\gamma_{e\ast} (\frac{d}{dt}|_s)) = L_{g\ast} ( V_{\gamma_e (s)}) = V_{L_g \circ \gamma_e (s)}$. This completes the proof.
Minimize $\|A-XB\|_F$ subject to $Xv=0$ Assume we are given two matrices $A, B \in \mathbb R^{n \times m}$ and a vector $v \in \mathbb R^n$. Let $\|\cdot\|_F$ be the Frobenius norm of a matrix. How can we solve the following optimization problem in $X \in \mathbb R^{n \times n}$? $$\begin{array}{ll} \text{minimize} & \|A-XB\|_F\\ \text{subject to} & Xv=0\end{array}$$ Can this problem be converted to a constrained least squares problem with the optimization variable being a vector instead of a matrix? If so, does this way work? Are there some references about solving such constrained linear least Frobenius norm problems? Thanks!
As the others show, the answer to your question is affirmative. However, I don't see what's the point of doing so, when the problem can actually be converted into an unconstrained least square problem. Let $Q$ be a real orthogonal matrix that has $\frac{v}{\|v\|}$ as its last column. For instance, you may take $Q$ as a Householder matrix: $$ Q = I - 2uu^T,\ u = \frac{v-\|v\|e_n}{\|v-\|v\|e_n\|}, \ e_n=(0,0,\ldots,0,1)^T. $$ Then $Xv=0$ means that $XQe_n=0$, which in turn implies that $XQ$ is a matrix of the form $XQ = (\widetilde{X},0)$, where $\widetilde{X}\in\mathbb{R}^{n\times(n-1)}$. So, if you partition $Q^TB$ as $Q^TB = \pmatrix{\widetilde{B}\\ \ast}$, where $\widetilde{B}$ is $(n-1)\times m$, then your minimisation problem can be rewritten as the unconstrained least-square problem $$\min_{\widetilde{X}\in\mathbb{R}^{n\times(n-1)}} \|A-\widetilde{X}\widetilde{B}\|_F.$$ And the least-norm solution is given by $\widetilde{X} = A\widetilde{B}^+$, where $\widetilde{B}^+$ denotes the Moore-Penrose pseudoinverse of $\widetilde{B}$. Once $\widetilde{X}$ is obtained, you can recover $X = (\widetilde{X},0)Q^T$.
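A minimal numpy sketch of this construction (`solve_constrained` is just an illustrative name; the near-parallel case $v\approx\|v\|e_n$ is handled crudely):

```python
import numpy as np

def solve_constrained(A, B, v):
    # least-norm minimiser of ||A - X B||_F subject to X v = 0
    n = len(v)
    e = np.zeros(n); e[-1] = 1.0
    w = v - np.linalg.norm(v) * e
    if np.linalg.norm(w) < 1e-12 * np.linalg.norm(v):
        Q = np.eye(n)                              # v is already a multiple of e_n
    else:
        u = w / np.linalg.norm(w)
        Q = np.eye(n) - 2.0 * np.outer(u, u)       # Householder: Q e_n = v/||v||
    B_tilde = (Q.T @ B)[:-1, :]                    # drop the last row of Q^T B
    X_tilde = A @ np.linalg.pinv(B_tilde)          # unconstrained least squares
    return np.hstack([X_tilde, np.zeros((n, 1))]) @ Q.T
```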
Vertical line test A vertical line crossing the x-axis at a point $a$ will meet the set in exactly one point $(a, b)$ if $f(a)$ is defined, and $f(a) = b$. If the vertical line meets the set of points in two points then $f(a)$ is undefined?
The highlighted proposition is one way of describing the vertical line test, which determines whether $f$ is a function. If there is one and only one point of intersection between $x = a$ and $f(x)$, then $f$ is a function. If there are two or more points of intersection between $x = a$ and $f(x)$, then $f$ maps a given $x = a$ to two (or more) distinct values, $f(a) = b, f(a) = c, \; b \neq c$, and hence fails to be a function, by definition of a function. $f$ may, however, still be a relation.
Adjust percentage I'm stuck on what I would think is a simple problem. A group of $3$ people are selling a product through a store. Under the current arrangement, the store gets $30$% of the price the product is sold for. The group of $3$ get the remaining $70$%. The group of $3$ split up the remaining $70$% as $25$%, $25$%, and $20$%. If we considered the remaining $70$% as $100$% which would be divided up in the same proportions, what would those percentages be?
If you are confused by the percentages, it is always better to write down statements to make it easier. If $70$% is equivalent to $100$%, then what is $25$% equivalent to? $(25\cdot100/70)$% = $(2500/70)$% $\approx 35.714$%. The other $25$% share is likewise $(2500/70)$% $\approx 35.714$%, and the $20$% share becomes $(20\cdot100/70)$% = $(2000/70)$% $\approx 28.571$%. Hope the answer is clear!
In Search of a More Elegant Solution I was asked to determine the maximum and minimum value of $$f(x,y,z)=(3x+4y+5z^{2})e^{-x^{2}-y^{2}-z^{2}}$$ on $\mathbb{R}^{3}$. Now, I employed the usual strategy; in other words, calculating the partial derivatives, setting each to zero, and then solving for $x,y,z$ before comparing the values at the stationary points. I obtained $$M=5e^{-3/4}$$ as the maximum value and $$m=(-5e^{-1/2})/{\sqrt{2}}$$ as the minimum value, both of which turned out to be correct. However, as I decided to solve for $x,y,z$ by the method of substitution, the calculations became somewhat hostile. I'm sure there must be a simpler way to arrive at the solutions, and I would be thrilled if someone here would be so generous as to share such a solution.
We have $$\frac\partial{\partial x}f(x,y,z)=(3-2x(3x+4y+5z^2))e^{-x^2-y^2-z^2}$$ $$\frac\partial{\partial y}f(x,y,z)=(4-2y(3x+4y+5z^2))e^{-x^2-y^2-z^2}$$ $$\frac\partial{\partial z}f(x,y,z)=(10z-2z(3x+4y+5z^2))e^{-x^2-y^2-z^2}$$ At a stationary point, either $z=0$ and then $3y=4x$, $x=\pm\frac3{10}\sqrt 2$; or $3x+4y+5z^2=5$ and then $x=\frac3{10}$, $y=\frac25$, $z^2=\frac12$.
Find the value of : $\lim_{x\to\infty}x\left(\sqrt{x^2-1}-\sqrt{x^2+1}\right)=-1$ How can I show/explain the following limit? $$\lim_{x\to\infty} \;x\left(\sqrt{x^2-1}-\sqrt{x^2+1}\right)=-1$$ Some trivial transformation seems to be eluding me.
The expression can be multiplied with its conjugate and then: $$\begin{align} \lim_{x\to\infty} x\left(\sqrt{x^2-1}-\sqrt{x^2+1}\right) &= \lim_{x\to\infty} x\left(\sqrt{x^2-1}-\sqrt{x^2+1}\right)\left(\frac{\sqrt{x^2-1}+\sqrt{x^2+1}}{\sqrt{x^2-1}+\sqrt{x^2+1}}\right) \cr &=\lim_{x\to\infty} x\left(\frac{x^2-1-x^2-1}{\sqrt{x^2-1}+\sqrt{x^2+1}}\right) \cr &=\lim_{x\to\infty} x\left(\frac{-2}{\sqrt{x^2-1}+\sqrt{x^2+1}}\right) \cr &=\lim_{x\to\infty} \frac{-2}{\frac{\sqrt{x^2-1}}{x} + \frac{\sqrt{x^2+1}}{x}} \cr &=\lim_{x\to\infty} \frac{-2}{\sqrt{\frac{x^2}{x^2}-\frac{1}{x^2}} + \sqrt{\frac{x^2}{x^2}+\frac{1}{x^2}}} \cr &=\lim_{x\to\infty} \frac{-2}{\sqrt{1-0} + \sqrt{1+0}} \cr &=\lim_{x\to\infty} \frac{-2}{1+1} \cr &= -1\end{align}$$
Does a such condition imply differentiability? Let function $f:\mathbb{R}\to \mathbb{R}$ be such that $$ \lim_{\Large{(y,z)\rightarrow (x,x) \atop y\neq z}} \frac{f(y)-f(z)}{y-z}=0. $$ Is it then $f'(x)=0$ ?
What is $f'(x)=\displaystyle\lim_{y\to x}\dfrac{f(y)-f(x)}{y-x}$? Let $\epsilon>0$. From the hypothesis it follows that exists a $\delta>0$ such that if $y\neq z$ and $0<\|(y,z)-(x,x)\|<\delta$ then $\left|\dfrac{f(y)-f(z)}{y-z}\right|<\epsilon$. Now if $0<|y-x|<\delta$ then $0<\|(y,x)-(x,x)\|=|y-x|<\delta$ and $y\neq x$. Therefore $\left|\dfrac{f(y)-f(x)}{y-x}\right|<\epsilon$. It follows that $f'(x)=\displaystyle\lim_{y\to x}\dfrac{f(y)-f(x)}{y-x}=0.$
Best Fake Proofs? (A M.SE April Fools Day collection) In honor of April Fools Day $2013$, I'd like this question to collect the best, most convincing fake proofs of impossibilities you have seen. I've posted one as an answer below. I'm also thinking of a geometric one where the "trick" is that it's very easy to draw the diagram wrong and have two lines intersect in the wrong place (or intersect when they shouldn't). If someone could find and link this, I would appreciate it very much.
Let me prove that the number $1$ is a multiple of $3$. To accomplish such a wonderful result we are going to use the symbol $\equiv$ to denote "congruent modulo $3$". Thus, what we need to prove is that $1 \equiv 0$. Next I give you the proof: $1\equiv 4$ $\quad \Rightarrow \quad$ $2^1\equiv 2^4$ $\quad \Rightarrow \quad$ $2\equiv 16$ $\quad \Rightarrow \quad$ $2\equiv 1$ $\quad \Rightarrow \quad$ $2-1\equiv 1-1$ $\quad \Rightarrow \quad$ $1\equiv 0$.
A question on estimates of surface measures If $\mathcal{H}^s $ is $s$ dimensional Hausdorff measure on $ \mathbb{R}^n$, is the following inequality true for all $ x \in \mathbb{R}^n,\ R,t > 0 $ ? $$ \mathcal{H}^{n-1}(\partial B(x,t)\cap B(0,R)) \leq \mathcal{H}^{n-1}(\partial B(0,R)) $$ If the answer is not affirmative then a concrete counter example of two such open balls would be very helpful.
The definition of $\mathcal H^{s}$ implies that it does not increase under $1$-Lipschitz maps: $\mathcal H^s(f(E))\le \mathcal H^s(E)$ if $f$ satisfies $|f(a)-f(b)|\le |a-b|$ for all $a,b\in E$. The nearest point projection $\pi:\mathbb R^n \to B(x,t)$ is a $1$-Lipschitz map. (Note that when $y\in B(x,t)$, the nearest point to $y$ is $y$ itself; that is, the projection is the identity map on the target set). It remains to show that $$\partial B(x,t)\cap B(0,R) \subset \pi(\partial B(0,R))\tag1$$ Given a point $y\in \partial B(x,t)\cap B(0,R)$, consider the half-line $\{y+tn: t\ge 0\}$ where $n$ is an outward normal vector to $\partial B(x,t)$ at $y$. This half-line intersects $\partial B(0,R)$ at some point $z$. An easy geometric argument shows that $\pi(z)=y$.
Le Cam's theorem and total variation distance Le Cam's theorem gives the total variation distance between the sum of independent Bernoilli variables and a Poisson random variable with the same mean. In particular it tells you that the sum is approximately Poisson in a specific sense. Define $$S_n = X_1+\dots+X_n \text{ and } \lambda_n = p_1+\dots+p_n$$ where $P(X_i = 1) = p_i$. The theorem states that $$\sum_{k=0}^{\infty}\left| P(S_n=k)-\frac{\lambda_n^k e^{-\lambda_n}}{k!}\right| < 2\sum_{i=1}^n p_i^2.$$ I am having problems understanding what this tells you about the relationship between their cdfs. That is between $P(S_n < x)$ and $P(\operatorname{Poiss}(\lambda_n)) < x)$. In particular, can you given $n$ and $x$ give a bound on the difference or can you say that as $n$ grows the difference tends to zero?
Let $Y_n$ be any Poisson random variable with parameter $\lambda_n$. Then, for every $x$, $$ \left|P(S_n < x)-P(Y_n < x)\right|=\left|\sum_{k\lt x} P(S_n=k)-P(Y_n=k)\right|\leqslant\sum_{k=0}^{\infty}\left| P(S_n=k)-P(Y_n=k)\right|. $$ Hence, $$ \left|P(S_n < x)-P(Y_n < x)\right| < 2\sum_{i=1}^n p_i^2.$$
Struggling with an integral with trig substitution I've got another problem with my CalcII homework. The problem deals with trig substitution in the integral for integrals following this pattern: $\sqrt{a^2 + x^2}$. So, here's the problem: $$\int_{-2}^2 \frac{\mathrm{d}x}{4 + x^2}$$ I graphed the function and because of symmetry, I'm using the integral: $2\int_0^2 \frac{\mathrm{d}x}{4 + x^2}$ Since the denominator is not of the form: $\sqrt{a^2 + x^2}$ but is basically what I want, I ultimately decided to take the square root of the numerator and denominator: $$2 \int_0^2 \frac{\sqrt{1}}{\sqrt{4+x^2}}\mathrm{d}x = 2 \int_0^2 \frac{\mathrm{d}x}{\sqrt{4+x^2}}$$ From there, I now have, using the following: $\tan\theta = \frac{x}{2} => x = 2\tan\theta => dx = 2\sec^2\theta d\theta$ $$ \begin{array}{rcl} 2\int_{0}^{2}\frac{\mathrm{d}x}{4+x^2}\mathrm{d}x & = & \sqrt{2}\int_{0}^{2}\frac{\mathrm{d}x}{\sqrt{4+x^2}}\mathrm{d}x \\ & = & \sqrt{2}\int_{0}^{2}\frac{2\sec^2(\theta)}{\sqrt{4+4\tan^2(\theta)}}\mathrm{d}\theta \\ & = & \sqrt{2}\int_{0}^{2}\frac{2\sec^2(\theta)}{2\sqrt{1+\tan^2(\theta)}}\mathrm{d}\theta \\ & = & \sqrt{2}\int_{0}^{2}\frac{\sec^2(\theta)}{\sqrt{\sec^2(\theta)}}\mathrm{d}\theta \\ & = & \sqrt{2}\int_{0}^{2}\frac{\sec^2(\theta)}{\sec(\theta)}\mathrm{d}\theta \\ & = & \sqrt{2}\int_{0}^{2}\sec(\theta)\mathrm{d}\theta \\ & = & \sqrt{2}\left [\ln{\sec(\theta)+\tan(\theta)} \right|_{0}^{2}] \\ & = & \sqrt{2}\left [ \ln{\frac{\sqrt{4+x^2}}{2}+\frac{x}{2} } \right|_{0}^{2} ] \end{array} $$ I'm not sure if I've correctly made the integral look like the pattern it's supposed to have. That is, trig substitutions are supposed to be for $\sqrt{a^2 + x^2}$ (in this case that is, there are others). This particular problem is an odd numbered problem and the answer is supposed to be $\frac{\pi}{4}$. I'm not getting that. So, the obvious question is, what am I doing wrong? Also note, I had trouble getting the absolute value bars to produce for the ln: don't know what I did wrong there either. Thanks for any help, Andy
Hint: you can cut your work considerably by using the trig substitution directly in the proper integral and proceeding (there is no need to take the square root of the denominator): You have $$2\int_0^2 \frac{dx}{4+x^2}\quad\text{and NOT} \quad 2\int_0^2 \frac{dx}{\sqrt{4+x^2}}$$ But that's good, because this integral (on the left) is what you have and is already in the form where it is appropriate to use the following substitution: Let $x = 2 \tan \theta$, which you'll see is standard for integrals of this form. As suggested by Andrew in the comments, we can arrive at his suggested result, and as shown in Wikipedia: Given any integral in the form $$\int\frac{dx}{{a^2+x^2}}$$ we can substitute $$x=a\tan(\theta),\quad dx=a\sec^2(\theta)\,d\theta, \quad \theta=\arctan\left(\tfrac{x}{a}\right)$$ Substituting gives us: $$ \begin{align} \int\frac{dx}{{a^2+x^2}} & = \int\frac{a\sec^2(\theta)\,d\theta}{{a^2+a^2\tan^2(\theta)}} \\ \\ & = \int\frac{a\sec^2(\theta)\,d\theta}{{a^2(1+\tan^2(\theta))}} \\ \\ & = \int \frac{a\sec^2(\theta)\,d\theta}{{a^2\sec^2(\theta)}} \\ \\ & = \int \frac{d\theta}{a} \\ &= \tfrac{\theta}{a}+C \\ \\ & = \tfrac{1}{a} \arctan \left(\tfrac{x}{a}\right)+C \\ \\ \end{align} $$ Note, you would have gotten precisely the correct result had you not taken the square root of $\sec^2\theta$ in the denominator, i.e., if you had not evaluated the integral of the square root of your function.
Cayley's Theorem question: examples of groups which aren't symmetric groups. Basically, Cayley's Theorem says that every finite group, say $G$, is isomorphic to a subgroup of the group $S_G$ of all permutations of $G$. My question: why is there the word "subgroup of"? If we omit this word, is the statement wrong? brief examples would be nice. Thank you guys so much!
The symmetric group $S_n$ has order $n!$ whereas there exists a group of any order (eg. $\mathbb{Z}_n$ has order $n$).
Learning Mathematics using only audio. Are there any mathematics audio books or other audio sources for learning mathematics, for example math podcasts which really go into detail? I ask this because the trip from my house to school takes about 1 hour, and staring at a screen in the car makes me dizzy. I know about podcasts and such, but those don't really teach you math; they talk about math news and mathematicians. I ask this because many topics can be understood just by giving them a lot of thought and don't necessarily require pen and paper. Regards.
I can't really point to a source, but I find the question quite relevant, as audiobooks on mathematical subjects can also be important for blind people. Learnoutloud has a repository of audiobooks and podcasts about math and statistics, and related novels as well. Nevertheless, it seems to offer no advanced math repository. PapersOutLoud may be a good project.
Interior of closure of an open set The question is is the interior of closure of an open set equal the interior of the set? That is, is this true: $(\overline{E})^\circ=E^\circ$ ($E$ open) Thanks.
Let $\varepsilon>0$; I claim there is an open set of measure (or total length, if you like) less than $\varepsilon$ whose closure is all of $\mathbb R$. To see this, simply enumerate the rationals $\{r_n\}$ and then for each $n\in\mathbb N$ choose an open interval about $r_n$ of length $\varepsilon/2^n$. The union of those intervals has the desired property. Such a set is its own interior (it is open), while the interior of its closure is all of $\mathbb R$; since the set has measure at most $\varepsilon$, it cannot be all of $\mathbb R$, so the two interiors differ.
Double dot product vs double inner product Anything involving tensors has 47 different names and notations, and I am having trouble getting any consistency out of it. This document (http://www.polymerprocessing.com/notes/root92a.pdf) clearly ascribes to the colon symbol (as "double dot product"): $\mathbf{T}:\mathbf{U}=T_{ij} U_{ji}$ while this document (http://www.foamcfd.org/Nabla/guides/ProgrammersGuidese3.html) clearly ascribes to the colon symbol (as "double inner product"): $\mathbf{T}:\mathbf{U}=T_{ij} U_{ij}$ Same symbol, two different definitions. To make matters worse, my textbook has: $\mathbf{\epsilon}:\mathbf{T}$ where $\epsilon$ is the Levi-Civita symbol $\epsilon_{ijk}$ so who knows what that expression is supposed to represent. Sorry for the rant/crankiness, but it's late, and I'm trying to study for a test which is apparently full of contradictions. Any help is greatly appreciated.
I know this might not serve your question as it is very late, but I myself am struggling with this as part of a continuum mechanics graduate course. The way I want to think about this is to compare it to a 'single dot product.' For example: \begin{align} \textbf{A} \cdot \textbf{B} &= A_{ij}B_{kl} (e_i \otimes e_j) \cdot (e_k \otimes e_l)\\ &= A_{ij} B_{kl} (e_j \cdot e_k) (e_i \otimes e_l) \\ &= A_{ij} B_{kl} \delta_{jk} (e_i \otimes e_l) \\ &= A_{ij} B_{jl} (e_i \otimes e_l) \end{align} where the dot product occurs between the basis vectors closest to the dot product operator, i.e. $e_j \cdot e_k$. So now $\mathbf{A} : \mathbf{B}$ would be as follows: \begin{align} \textbf{A} : \textbf{B} &= A_{ij}B_{kl} (e_i \otimes e_j):(e_k \otimes e_l)\\ &= A_{ij} B_{kl} (e_j \cdot e_k) (e_i \cdot e_l) \\ &= A_{ij} B_{kl} \delta_{jk} \delta_{il} \\ &= A_{ij} B_{jl} \delta_{il}\\ &= A_{ij} B_{ji} \end{align} But I found that a few textbooks give the following result: $$ \textbf{A}:\textbf{B} = A_{ij}B_{ij}$$ But based on the operation carried out before, this is actually the result of $$\textbf{A}:\textbf{B}^t$$ because \begin{align} \textbf{A} : \textbf{B}^t &= A_{ij}B_{kl} (e_i \otimes e_j):(e_l \otimes e_k)\\ &= A_{ij} B_{kl} (e_j \cdot e_l) (e_i \cdot e_k) \\ &= A_{ij} B_{kl} \delta_{jl} \delta_{ik} \\ &= A_{ij} B_{il} \delta_{jl}\\ &= A_{ij} B_{ij} \end{align} But I finally found why this is not the case! The definition of tensor contraction is not the way the operation above was carried out, rather it is as follows: \begin{align} \textbf{A} : \textbf{B}^t &= \textbf{tr}(\textbf{AB}^t)\\ &= \textbf{tr}(\textbf{BA}^t)\\ &= \textbf{tr}(\textbf{A}^t\textbf{B})\\ &= \textbf{tr}(\textbf{B}^t\textbf{A}) = \textbf{A} : \textbf{B}^t\\ \end{align} and if you do the exercise, you'll find that: $$\textbf{A}:\textbf{B} = A_{ij} B_{ij} $$
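Since the whole issue is which convention a given author uses, a quick numerical illustration (numpy used only as an example language) makes the difference concrete:

```python
import numpy as np

T = np.arange(9.0).reshape(3, 3)
U = np.arange(9.0, 18.0).reshape(3, 3)

print(np.einsum('ij,ji->', T, U))   # T_ij U_ji = tr(T U): the first document's convention
print(np.einsum('ij,ij->', T, U))   # T_ij U_ij = tr(T U^T): the second document's convention
```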
Can we extend any metric space to any larger set? Let $(X,d)$ be metric space and $X\subset Y$. Can $d$ be extended to $Y^2$ so that $(Y,d)$ is a metric space? Edit: how about extending any $(\Bbb Z,d)$ to $(\Bbb R,d)$
Let $Z=Y\setminus X$. Let $\kappa=|X|$. If $|Z|\ge\kappa$, we can index $Y=\{z(\xi,x):\langle\xi,x\rangle\in\kappa\times X\}$ in such a way that $z(0,x)=x$ for each $x\in X$. Now define $$\overline d:Y\times Y\to\Bbb R:\langle z(\xi,x),z(\eta,y)\rangle\mapsto\begin{cases} d(x,y),&\text{if }\xi=\eta\\ d(x,y)+1&\text{if }\xi\ne\eta\;. \end{cases}$$ Then $\overline d$ is a metric on $Y$ extending $d$. If $|Z|<\kappa$, you can still use the same basic idea. Let $\varphi:Z\to X$ be any injection, and define $$\overline d:Y\times Y\to\Bbb R:\langle x,y\rangle\mapsto\begin{cases} d(x,y),&\text{if }x,y\in X\\ d((\varphi(x),\varphi(y)),&\text{if }x,y\in Z\\ d(x,\varphi(y))+1,&\text{if }x\in X\text{ and }y\in Z\\ d(\varphi(x),y)+1,&\text{if }x\in Z\text{ and }y\in X\;. \end{cases}$$
How do I show that these sums are the same? My textbook says that I should check that $$ \sum_{i=0}^\infty \frac{\left( \lambda\mathtt{I} + \mathtt{J}_k \right)^i}{i!} $$ is in fact the same as the product of sums $$ \left( \sum_{i=0}^\infty \frac{\left( \lambda\mathtt{I}\right)^i}{i!} \right) \cdot \left( \sum_{j=0}^k\frac{\left( \mathtt{J}_k \right)^j}{j!} \right)$$ Where $ \mathtt{J}_k $ is all zero exept first super diagonal that has all ones. But I can't figure out how to do it. [edit] To clarify: I'm working towards a definition of $f(\mathtt{A})$ where $f$ is a "nice" function, and $\mathtt{A}$ is an arbitrary square matrix. The text basically goes like this. $\mathtt{B} = f(\mathtt{A})$ defined as $b_{ij} = f(a_{ij})$ is a bad idea because $f(\mathtt{A})$ where $f(x) = x^2$ is generally not the same as $\mathtt{A}^2$ and so on. BUT, we know that for numbers, $e^x = \sum_{k=0}^{\infty} \frac{x^k}{k!} $ so lets try this for matrices. Then it goes on to show that for diagonal matrices the power series gives the same result as if we would apply the function on the diagonal elements then its expanded to diagonalizable matrices. Then to Jordan blocks, and thats where these sums come in.
Hints: $$(1)\;\;\;\;\;\;\;\;\;\;\;\;\;\;\sum_{k=0}^\infty\frac{X^k}{k!}=e^X$$ $$(2)\;\;\;\;\;\;\;\;J_k^{n}=0\;,\;\;\;\text{where $\,n\,$ is the number of rows of the matrix}\;J_k$$ $$(3)\;\;\;\;\;\;\;\;e^{X+Y}=e^Xe^Y\;\;\text{whenever}\;XY=YX\text{, and }\lambda I\text{ commutes with }J_k.$$
Does every section of $J^r L$ come from some section $s\in H^0(C,L)$, with $L$ line bundle on a compact Riemann surface? I am working with jet bundles on compact Riemann surfaces. So if we have a line bundle $L$ on a compact Riemann surface $C$ we can associate to it the $r$-th jet bundle $J^rL$ on $C$, which is a bundle of rank $r+1$. If we have a section $s\in H^0(C,L)$ then there is an induced section $D^rs\in H^0(C,J^rL)$ which is defined, locally on an open subset $U\subset C$ trivializing both $L$ and $\omega_C$, as the $(r+1)$-tuple $(f,f',\dots,f^{(r)})$, where $f\in O_C(U)$ represents $s$ on $U$. Question 1. Does every section of $J^rL$ come from some $s\in H^0(C,L)$ this way? Question 2. Do you know of any reference for a general description of the transition matrices attached to $J^rL$? I only know them for $r=1$ up to now and I am working on $r=2$. Thank you in advance.
This is rather old, so maybe you figured out the answers already. The answer to Q1 is no: not every global section of $J^r L$ comes from the "prolongation" of a section of $L$, not even locally. Consider for example the section of $J^1(\mathcal{O}_\mathbb{C})$ given in coordinates by $(0,1)$ (the constant sections $0$ and $1$). This is obviously not of the form $(f,f')$. The second question: maybe you can find the explicit formulas for the transition of charts in Saunders, The Geometry of Jet Bundles.
Hyperbolic cosine I have an A level exam question I'm not too sure how to approach: a) Show $1+\frac{1}{2}x^2>x, \forall x \in \mathbb{R}$ b) Deduce $ \cosh x > x$ c) Find the point P such that it lies on $y=\cosh x$ and its perpendicular distance from the line $y=x$ is a minimum. I understand how to show the first statement, by finding the discriminant of $1+\frac{1}{2}x^2-x>0$, but trying to apply this to part b doesn't seem to work: $$\cosh x > x \Rightarrow \frac{1}{2}(e^x+e^{-x}) > x \Rightarrow e^x+e^{-x}>2x \Rightarrow e^{2x}-2xe^x+1>0$$ Applying $\Delta>0$ to this because I know the two functions do not intersect: $$\Delta>0 \Rightarrow 4x^2-4>0 \Rightarrow x^2>1 \Rightarrow |x|>1$$ This tells me that in fact, the two do meet, but only when $|x|>1$, what have I done wrong here? This last bit I don't know how to approach, I was thinking maybe vectors in $\mathbb{R}^2$ were involved to find perpendicular distances? Thanks in advance.
a) You're right, you can do this with the discriminant and it is very natural. But you can also use the well-known inequality: $2ab\leq a^2+b^2$ which follows from the expansion of $(a-b)^2\geq 0$. So you get $$ 2x=2\cdot x\cdot 1\leq x^2+1^2=x^2+1<x^2+2\qquad \forall x\in\mathbb{R}. $$ Then divide by $2$. b) By definition, $\cosh x$ is the even part of $e^x$. Since $e^x=\sum_{n\geq 0}\frac{x^n}{n!}$ for every $x$, you are left with $$ \cosh x=\sum_{n\geq 0}\frac{x^{2n}}{(2n)!}=1+\frac{x^2}{2}+\frac{x^4}{24}+\ldots\qquad\forall x\in\mathbb{R}. $$ Now, using inequality a) and the fact that every term of the series is nonnegative: $$ \cosh x=1+\frac{x^2}{2}+\frac{x^4}{24}+\ldots\geq 1+\frac{x^2}{2}> x\qquad\forall x\in\mathbb{R}. $$ c) The distance from the point $(x_0,y_0)$ to the line $x-y=0$ is $$ d=\frac{|x_0-y_0|}{\sqrt{1^2+(-1)^2}}=\frac{1}{\sqrt{2}}|x_0-y_0|. $$ So for a point $(x,\cosh x)$ on the graph of the hyperbolic cosine, we get $$ d(x)=\frac{1}{\sqrt{2}}|x-\cosh x|=\frac{1}{\sqrt{2}}(\cosh x-x) $$ as $\cosh x > x$ for every $x$. Now study the variations of $d(x)$ to find its minimum. It will occur at a critical point $d'(x_0)=0$. And actually, there is only one such critical point, solution of $\sinh x=1$. So your minimum occurs at $$ x_0=\mbox{arsinh} 1=\log (1+\sqrt{2})\qquad y_0=\cosh x_0=\sqrt{2}. $$ Click on this link for a drawing of the situation.
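For part (c), a quick numerical cross-check of the minimiser (a sketch with scipy):

```python
import numpy as np
from scipy.optimize import minimize_scalar

d = lambda x: (np.cosh(x) - x) / np.sqrt(2)      # distance from (x, cosh x) to the line y = x
res = minimize_scalar(d)
print(res.x, np.log(1 + np.sqrt(2)))             # both approximately 0.8814
print(np.cosh(res.x))                            # approximately sqrt(2)
```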
$\mbox{Ker} \;S$ is $T$-invariant when $TS=ST$ Let $T,S:V\to V$ be linear transformations such that $TS=ST$; then $\ker(S)$ is $T$-invariant. My solution: $$\{T(v)\in V:TS(v)=0 \}=\{T(v)\in V:ST(v)=0 \}\subseteq\ker(S)$$ If it's right, then why is $$\{T(v)\in V:ST(v)=0 \}=\ker(S)?$$ Thank you.
What you wrote is not correct. You simply have to check that if $v$ belongs to $\mbox{Ker} S$, then $Tv$ also lies in $\mbox{Ker} S$. So assume $Sv=0$. Then $$STv=TSv=T0=0$$ where the last equality holds because a linear transformation always sends $0$ to $0$. Therefore $v \in \mbox{Ker} S$ implies $Tv\in \mbox{Ker} S$. In other words $$T(\mbox{Ker} S)\subseteq\mbox{Ker} S$$ that is, $\mbox{Ker} S$ is invariant under $T$. Note: if $S$ is injective, this is of course trivial. But if not, this says that the eigenspace of $S$ with respect to the eigenvalue $0$ is invariant under $T$. More generally, every eigenspace of $S$ is invariant under $T$. When $S$ and $T$ are two commuting diagonalizable matrices, this is the key remark when showing that they are simultaneously diagonalizable. Note: as pointed out by Marc van Leeuwen, it is not more difficult to show that $\mbox{Im S}$ is invariant under $T$, as $TSv=STv$. And finally, every polynomial in $S$ still commutes with $T$, so you can replace $S$ by $p(S)$ everywhere if you want.
If $e^A$ and $e^B$ commute, do $A$ and $B$ commute? It is known that if two matrices $A,B \in M_n(\mathbb{C})$ commute, then $e^A$ and $e^B$ commute. Is the converse true? If $e^A$ and $e^B$ commute, do $A$ and $B$ commute? Edit: Addionally, what happens in $M_n(\mathbb{R})$? Nota Bene: As a corollary of the counterexamples below, we deduce that if $A$ is not diagonal then $e^A$ may be diagonal.
Here's an example over $\mathbb{R}$, modeled after Harald's answer: let $$A=\pmatrix{0&-2\pi\\ 2\pi&0}.$$ Again, $e^A=I$. Now choose any $B$ that doesn't commute with $A$.
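One can verify this example numerically (a quick sketch using scipy's matrix exponential):

```python
import numpy as np
from scipy.linalg import expm

A = 2 * np.pi * np.array([[0.0, -1.0], [1.0, 0.0]])
B = np.array([[0.0, 1.0], [0.0, 0.0]])            # any B that doesn't commute with A

print(np.round(expm(A), 12))                       # the identity matrix (up to rounding)
print(np.allclose(A @ B, B @ A))                   # False: A and B do not commute
print(np.allclose(expm(A) @ expm(B), expm(B) @ expm(A)))   # True: their exponentials do
```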
Is there an algorithm to find all subsets of a set? I'm trying to find a way to find all subsets of a set. Is there an algorithm to calculate this?
An algorithm is a type of finite procedure operating on finite data as input and generating a finite output. So you can only have an algorithm to find the subsets of $\Sigma$ if $\Sigma$ is finite. (You've been given some hints for that case, but it is important to stress that these hints only work for finite $\Sigma$.)
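For the finite case, the standard recipe is to take all combinations of every size (a Python sketch using itertools):

```python
from itertools import chain, combinations

def powerset(iterable):
    # all subsets of a finite iterable, from the empty set up to the full set
    s = list(iterable)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

print(list(powerset([1, 2, 3])))
# [(), (1,), (2,), (3,), (1, 2), (1, 3), (2, 3), (1, 2, 3)]
```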
Is $\mathbb{R}$ a subset of $\mathbb{R}^2$? Is it correct to say that $\mathbb{R}$ is a subset of $\mathbb{R}^2$? Or, put more generally, given $n,m\in\mathbb{N}$, $n<m$, is $\mathbb{R}^n$ a subset of $\mathbb{R}^m$? Also, strictly related to that: what is then the "relationship" between the set $\{(x,0)\in\mathbb{R}^2,x\in\mathbb{R}\}\subset\mathbb{R}^2$ and $\mathbb{R}$? Do they coincide (I would say no)? As vector spaces, do they have the same dimension (I would say yes)? If you could give me a reference book for this kind of stuff, I would really appreciate it. Thank you very much in advance. (Please correct the tags if they are not appropriate)
I wouldn't say so: even though every one-dimensional subspace of $\mathbb{R}^n$ is isomorphic to $\mathbb{R}$, there is no natural embedding. A more or less funny thing is that even though nearly everyone says that $\mathbb{R}\not\subset\mathbb{R}^2$, many mathematicians say that $\mathbb{R}\subset\mathbb{C}$, even though there is a canonical transformation from $\mathbb{C}$ to $\mathbb{R}^2$. I guess at some point one stops distinguishing between things which are isomorphic but not the same.
Lie Groups induce Lie Algebra homomorphisms I am having a difficult time showing that if $\phi: G \rightarrow H$ is a Lie group homomorphism, then $d\phi: \mathfrak{g} \rightarrow \mathfrak{h}$ satisfies the property that for any $X, Y \in \mathfrak{g},$ we have that $d\phi([X, Y]_\mathfrak{g}) = [d\phi(X), d\phi(Y)]_\mathfrak{h}$. Any help is appreciated!!!!!!!!
Let $x \in G$. Since $\phi$ is a Lie group homomorphism, we have that $$\phi(xyx^{-1}) = \phi(x) \phi(y) \phi(x)^{-1} \tag{$\ast$}$$ for all $y \in G$. Differentiating $(\ast)$ with respect to $y$ at $y = 1$ in the direction of $Y \in \mathfrak{g}$ gives us $$d\phi(\mathrm{Ad}(x) Y) = \mathrm{Ad}(\phi(x)) d\phi(Y). \tag{$\ast\ast$}$$ Differentiating $(\ast\ast)$ with respect to $x$ at $x = 1$ in the direction of $X \in \mathfrak{g}$, we obtain $$d\phi(\mathrm{ad}(X) Y) = \mathrm{ad}(d\phi(X)) d\phi(Y)$$ $$\implies d\phi([X, Y]_{\mathfrak{g}}) = [d\phi(X), d\phi(Y)]_{\mathfrak{h}}.$$
Best estimate for random values Due to work related issues I can't discuss the exact question I want to ask, but I thought of a silly little example that conveys the same idea. Lets say the number of candy that comes in a package is a random variable with mean $\mu$ and a standard deviation $s$, after about 2 months of data gathering we've got about 100000 measurements and a pretty good estimate of $\mu$ and $s$. Lets say that said candy comes in 5 flavours that are NOT identically distributed (we know the mean and standard deviation for each flavor, lets call them $\mu_1$ through $\mu_5$ and $s_1$ trough $s_5$). Lets say that next month we will get a new batch (several packages) of candy from our supplier and we would like to estimate the amount of candy we will get for each flavour. Is there a better way than simply assuming that we'll get "around" the mean for each flavour taking into account that the amount of candy we'll get is around $\mu$? I have access to all the measurements made, so if anything is needed (higher order moments, other relevant data, etc.) I can compute it and update the question as needed. Cheers and thanks!
It depends on your definition of "better." You need to define your risk function. If your risk function is MSE, you can do better than simply using the sample means. The idea is to use shrinkage, which as the name suggests means to shrink all your $\mu_i$ estimates slightly towards 0. The amount of shrinkage should be proportional to the sample variance $s^2$ of your data (noisier data calls for more shrinkage) and inversely proportional to the number of data points $n$ that you collect. Note that the James-Stein estimator is only better for $m \ge 3$ flavors. In general, some form of regularization is always wise in empirical problems.
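A sketch of the (positive-part) James–Stein idea in this setting, assuming the per-flavour means have roughly equal, known variance; shrinking toward the grand mean instead of $0$ is a common variant (all names here are illustrative):

```python
import numpy as np

def james_stein(sample_means, sigma2, n):
    # sample_means: the m >= 3 per-flavour sample means
    # sigma2: variance of a single observation, n: observations per flavour
    z = np.asarray(sample_means, dtype=float)
    m = z.size
    var_of_mean = sigma2 / n
    shrink = max(0.0, 1.0 - (m - 2) * var_of_mean / float(z @ z))   # positive part
    return shrink * z
```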