Arcs of a circle Geometry and angles If a central angle of measure $30°$ is subtended by a circular arc of length $6\,\mathrm{m}$, how many meters in length is the radius of the circle? A. $\frac{π}{36}$ B. $\frac{1}{5}$ C. $π$ D. $\frac{36}{π}$ E. $180$ How do I find the length of the radius? The answer is (D) by the way.
HINT: The circumference of a circle of radius $r$ is $2\pi r$. The $30^\circ$ angle is $\frac{30}{360}=\frac1{12}$ of the total angle at the centre of the circle, so $6$ metres is $\frac1{12}$ of the circumference of the circle. The whole circumference is therefore $6\cdot12=72$ metres, which, as already noted, is $2\pi r$. Therefore $r$ is ... ?
trying to verify pdf for distance between normally distributed points Math people: I am trying to find the probability density function for the distance between two points in $\mathbb{R}^3$ selected independently according to the Gaussian pdf $F(\mathbf{z}) = \left(\frac{1}{\sqrt{2\pi}}\right)^3 \exp(-\frac{1}{2}|\mathbf{z}|^2)$. I keep getting $f(t) = \frac{t^2\exp(-\frac{t^2}{4})}{2\sqrt{\pi}}$. I found a paper (http://www.pupr.edu/hkettani/papers/HICS2008.pdf) which, if I understand correctly, states that the actual pdf is $\sqrt{\frac{2}{\pi}} t^2\exp(-\frac{t^2}{2})$ (see p. 10, second "$f(x)$"). That paper actually proves a much more general result and gives this as a special case. I am reluctant to just take their word for it, since I used only Calc 3 techniques and their arguments are much more sophisticated and difficult. Has anyone seen this problem before, perhaps as a calculus or probability exercise? Assuming that either their formula or mine is right, is there an easy way to test which is right experimentally (generating random vectors, etc.)? Stefan (STack Exchange FAN)
The square of the distance is $r^2=(X_1-X_2)^2+(Y_1-Y_2)^2+(Z_1-Z_2)^2$, where the $X_i$s, $Y_i$s, and $Z_i$s are independent standard normal r.v.s. So, $r^2/2$ is $\chi^2$ distributed with $3$ degrees of freedom, and as you say, the distance $r$ has density $r^2\exp(-\frac{r^2}{4})/(2\sqrt{\pi}).$ p. 10 of the paper you mention is treating a different case, where each component of the difference between the two points is standard normal.
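The experimental check asked about is quick to run. Here is a minimal Monte Carlo sketch (numpy; the sample size, range and bin count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10**6

# two independent points in R^3, each drawn from the standard Gaussian density F
p1 = rng.standard_normal((n, 3))
p2 = rng.standard_normal((n, 3))
d = np.linalg.norm(p1 - p2, axis=1)

# empirical density of the distance
hist, edges = np.histogram(d, bins=100, range=(0.0, 8.0), density=True)
t = 0.5 * (edges[:-1] + edges[1:])

f_asker = t**2 * np.exp(-t**2 / 4) / (2 * np.sqrt(np.pi))  # formula derived by the asker
f_paper = np.sqrt(2 / np.pi) * t**2 * np.exp(-t**2 / 2)    # the paper's p. 10 formula

print("max deviation from f_asker:", np.abs(hist - f_asker).max())  # small
print("max deviation from f_paper:", np.abs(hist - f_paper).max())  # large
```

The histogram lands on the $t^2\exp(-t^2/4)/(2\sqrt{\pi})$ curve, consistent with the explanation above.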
$\lim_{n \rightarrow \infty} \frac{(2n)!}{n^{2n}}$ I am trying to show $$\lim_{n \rightarrow \infty} \frac{(2n)!}{n^{2n}} = 0.$$ I tried breaking it down, and got stuck when trying to show that $\frac{2^{n}n!}{n^{n}}$ goes to $0$.
It is not true that $\left( \frac{(2n)!}{n^{2n}} \right)=\left( \frac{2^{n}n!}{n^{n}} \right)$ The left side has a factor $2n-1$ in the numerator while the right side does not. But you can use Stirling's approximation to say $$\frac {(2n)!}{n^{2n}}\approx \frac {(2n)^{2n}}{(ne)^{2n}}\sqrt{4 \pi n}$$ and the powers of $\frac 2e$ take it to zero.
Does $\sin(x)=y$ have a solution in $\mathbb{Q}$ besides $x=y=0$? Is there a way to show that the only solution of $$\sin(x)=y$$ with $x,y\in \mathbb{Q}$ is $x=y=0$? I am searching for a way to prove it with the tools you learn in linear algebra and analysis 1+2 (with the knowledge of a second-semester student).
Sorry for the previous spam. I shall prove it for $\cos$: the cosine of any nonzero rational number is irrational. By a polynomial argument, it suffices to prove this for integers. Suppose that $m\in\mathbb{N}$ and $\cos(m)\in\mathbb{Q}$. For a fixed prime number $p>m$, consider for $x\in(0,m)$ the polynomial $$f(x) = \frac{(x-m)^{2p}(m^2-(x-m)^2)^{p-1}}{(p-1)!}$$ and $$F(x) = \sum_{n = 0}^{2p-1} (-1)^{n+1}f^{(2n)}(x),$$ which satisfies $$(F'(x)\sin(x)-F(x)\cos(x))' = F''(x)\sin(x) +F(x)\sin(x) = f(x)\sin(x),$$ since the other terms cancel. Hence $$\int_0^m f(x)\sin(x)\,dx = F'(m)\sin(m)-F(m)\cos(m)+F(0).$$ Since $f$ is a polynomial in $(x-m)^2$, we have $F'(m) = 0$, and we can see that $$f(m-x) = x^{2p}(m^2-x^2)^{p-1}/(p-1)!.$$ By computing, $p\mid f^{(l)}(m)$ for every $l$. That means $F(m)$ is a multiple of $p$ by the definition of $F$, say $pM$. If $\cos(m) = s/t$, then $$t\int_0^m f(x)\sin(x)\,dx = -spM+tN$$ is an integer, while $$f\le \frac{m^{4p-2}}{(p-1)!},$$ thus $$t\int_0^m f(x)\sin(x)\,dx\le t\frac{m^{4p-2}}{(p-1)!}\cdot m <1$$ when $p$ is large enough, contradicting the fact that it is a (nonzero) integer. The same argument applies to $\sin$.
Property kept under base change and composition is preserved by products Is the following true, and why? Let $P$ be a property of morphisms preserved under base change and composition. Let $X\to Y$ and $X'\to Y'$ be morphisms of $S$-schemes with property $P$. Then the unique morphism $X\times_S X' \to Y\times_S Y'$ has property $P$.
Yes, this is true. The canonical morphism $X\times_S X'\to Y\times_S Y'$ is the composition of $X\times_S X'\to Y\times_S X'$ and $Y\times_S X'\to Y\times_S Y'$. Both of these satisfy property $P$ because they are obtained by base change (the first one is the base change of $X\to Y$ along $Y\times_S X'\to Y$; the second one is similar). As $P$ is stable under composition, your canonical morphism satisfies $P$.
Schoen Estimates (part 3) I'm referring to the article 'Estimates for stable minimal surfaces in three dimensional manifolds' by Richard Schoen. In the first paragraph of the proof of Theorem 2 the author seems to assert that the universal covering of $M$ is conformally equivalent to the unit disk (with the standard metric). But I have some doubts about this. In fact, if $M$ is complete and non-compact, its universal covering space has to be conformally equivalent to the complex plane (applying methods of Fischer-Colbrie and Schoen and observing that non-negative Ricci curvature implies non-negative scalar curvature). Thanks
Indeed, $M$ could be simply the plane, which is its own universal cover. So yes, their statement that "we may assume that $M$ is represented by a conformal immersion $f:D\to N$" [where $D=D_1$ is presumably the unit disk from page 116] needs to be modified. But I think it suffices to replace $M$ with $B_R(P_0)$, which is a non-complete manifold covered by $D$. Like other results in the paper, Theorem 2 is local: there is no statement made about the points outside of $B_R(P_0)$.
How to find (or 'generate') a combinatorial meaning for the given expression $\left(\dfrac{6(k-n)(k-1)}{(n-2)(n-1)}+1\right)\dfrac{30}{n(n+1)(n+2)}$ (for $n\geq 3$ and $1\leq k \leq n$) The expression comes from the question https://math.stackexchange.com/questions/304876/please-help-to-find-function-for-given-inputs-and-outputs where it is used to get the answer to some unknown problem (I believe it's combinatorial, possibly involving a Monte-Carlo method). The OP didn't give any interpretation of the expression. Can you please share your best ideas about a combinatorial interpretation of the expression? Big thanks in advance. P.S. There's a tiny possibility that it was a mathematical/SE joke.
This post does not show a meaning of the expression, but explains how one might arrive at it. I want a quadratic function with zero mean defined on $\{1,\dots,n\}$. Naturally, it should be symmetric about the midpoint of the interval. The obvious symmetric function is $k(k-n-1)$, but it does not have mean zero. The mean is $$\frac{1}{n}\sum_{k=1}^n k(k-n-1) = -\frac{(n+1)(n+2)}{6}$$ So I subtract that, arriving at $k(k-n-1)+\frac{(n+1)(n+2)}{6}$. But maybe I also care about the second moment, which is $$\sum_{k=1}^n \left(k(k-n-1)+\frac{(n+1)(n+2)}{6}\right)^2 = \frac{(n-2)(n-1)n(n+1)(n+2)}{180}$$ Dividing by the second moment yields $$\frac{180}{(n-2)(n-1)n(n+1)(n+2)}\left(k(k-n-1)+\frac{(n+1)(n+2)}{6}\right) $$ which is the formula in question.
An improper integral with the Glaisher-Kinkelin constant Show that: $$\int_0^\infty \frac{\text{e}^{-x}}{x^2} \left( \frac{1}{1-\text{e}^{-x}} - \frac{1}{x} - \frac{1}{2} \right)^2 \, \text{d}x = \frac{7}{36}-\ln A+\frac{\zeta \left( 3 \right)}{2\pi ^2}$$ where $A$ is the Glaisher-Kinkelin constant. I see Chris's question is a bit related to this one: Evaluate $\int_0^1\left(\frac{1}{\ln x} + \frac{1}{1-x}\right)^2 \mathrm dx$
Here is an identity for $\ln(A)$ that may assist: $\displaystyle \ln(A)-\frac{1}{4}=\int_{0}^{\infty}\frac{e^{-t}}{t^{2}}\left(\frac{1}{e^{t}-1}-\frac{1}{t}+\frac{1}{2}-\frac{t}{12}\right)dt$. I think Coffey has done work in this area. Try searching for his papers on the Stieltjes constants, log integrals, Barnes G, log Gamma, etc. Another interesting identity is $\displaystyle 2\int_{0}^{1}\left(x^{2}-x+\frac{1}{6}\right)\log\Gamma(x)dx=\frac{\zeta(3)}{2{\pi}^{2}}$. Just some thoughts that may help put it together.
Compound interest derivation of $e$ I'm reviewing stats and probability, including Poisson processes, and I came across: $$e=\displaystyle \lim_{n\rightarrow \infty} \left(1+\frac{1}{n}\right)^n$$ I'd like to understand this more fully, but so far I'm struggling. I guess what I'm trying to understand is how you prove that it converges. Can anyone point me toward (or provide) a good explanation of this?
It's not too hard to prove, but it does rely on a few things. (In particular the validity of the Taylor expansion of $\ln$ around $1$ and that $\exp$ is continuous.) Consider in general the sequence $n\ln(1+x/n)$, which is defined for all $x$, positive or negative, provided $n$ is large enough. (In fact the proof that follows can also be modified slightly to work for complex $x$.) Using the power series representation $\ln(1+y) = \sum_{i=1}^\infty (-1)^{i+1}\frac{y^i}{i}$ we get $$n\ln(1+\frac{x}{n}) = x + \sum_{i=2}^\infty(-1)^{i+1}\frac{x^i}{in^{i-1}}$$ The absolute values of the terms in the series on the right hand side are bounded above by a geometric series (for $n$ large enough) whose sum tends to $0$ as $n$ tends to infinity. Hence $$\lim_{n\rightarrow \infty} n\ln(1+\frac{x}{n}) = x$$ And so using the fact that $\exp(x)$ is a continuous function: $$\lim_{n\rightarrow \infty}\bigg(1+\frac{x}{n}\bigg)^n = \exp\bigg(\lim_{n\rightarrow \infty}n\ln(1+\frac{x}{n})\bigg) = \exp(x)$$ In particular, set $x = 1$ to get the result you ask for.
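If you just want to watch the convergence (and see how slow it is), a quick numeric sketch:

```python
import math

for n in [10, 100, 10_000, 1_000_000]:
    approx = (1 + 1 / n) ** n
    print(f"n = {n:>9}: (1+1/n)^n = {approx:.10f}, error = {math.e - approx:.2e}")
# the error shrinks like e/(2n), which matches the i = 2 term
# of the series expansion above
```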
How to compute the SVD of $2\times 2$ matrices? What's an efficient algorithm to get the SVD of $2\times 2$ matrices? I've found papers about doing SVD on $2\times 2$ triangular matrices, and I've seen the analytic formula to get the singular values of a $2\times 2$ matrix. But how to use either of these to get the SVD of an arbitrary $2\times 2$ matrix? Are the general algorithms built on these, or are these just some special cases?
The SVD of a $2\times 2$ matrix has a closed-form formula, which can be worked out by writing the rotation matrices in terms of a single unknown angle each, and then solving for those angles as well as the singular values. It is worked out here, for instance.
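For the singular values alone there is an especially simple closed form: their squares are the eigenvalues of $A^TA$, whose trace is $Q=a^2+b^2+c^2+d^2$ and whose determinant is $(ad-bc)^2$. A sketch, with a library SVD used only as a reference check:

```python
import numpy as np

def singular_values_2x2(a, b, c, d):
    """Closed-form singular values of [[a, b], [c, d]].

    The squared singular values are the eigenvalues of A^T A:
    roots of s^2 - Q s + (ad - bc)^2 with Q = a^2 + b^2 + c^2 + d^2.
    """
    q = a*a + b*b + c*c + d*d
    disc = np.sqrt(max(q*q - 4 * (a*d - b*c)**2, 0.0))
    return np.sqrt((q + disc) / 2), np.sqrt((q - disc) / 2)

A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(singular_values_2x2(*A.ravel()))
print(np.linalg.svd(A, compute_uv=False))   # should agree
```

Recovering the two rotation angles as well takes more care with signs, which is exactly what the write-up referenced above works out.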
Wave Equation, Energy methods. I am reading the book of Evans, Partial Differential Equations, wave equation section 2.4, subsection 2.4.3: Energy methods, arriving at the theorem: Theorem 5 (Uniqueness for wave equation). There exists at most one function $u \in C^{2}(\overline{U}_{T})$ solving $u_{tt} -\Delta u=f$ in $U_{T}$, $u=g$ on $\Gamma_{T}$, $u_{t}=h$ on $U \times \{t=0\}$. Proof. If $\tilde{u}$ is another such solution, then $w:=u-\tilde{u}$ solves $w_{tt} -\Delta w=0$ in $U_{T}$, $w=0$ on $\Gamma_{T}$, $w_{t}=0$ on $U \times \{t=0\}$. Define the "energy" $$e(t):=\frac{1}{2} \int_{U} w^{2}_{t}(x,t)+ |Dw(x,t)|^{2}\,dx \qquad (0\leq t \leq T).$$ We compute $$\dot{e}(t)=\int_{U} w_{t}w_{tt}+ Dw \cdot Dw_{t}\,dx \qquad \left(\dot{}\ = \frac{d}{dt}\right)$$ $$=\int_{U}w_{t}(w_{tt} - \Delta w)\,dx=0.$$ There is no boundary term since $w=0$, and hence $w_{t}=0$, on $\partial U \times [0,T]$. Thus for all $0\leq t \leq T$, $e(t)= e(0)=0$, and so $w_{t}, Dw \equiv 0$ within $U_{T}$. Since $w \equiv 0$ on $U \times \{t=0\}$, we conclude $w=u-\tilde {u}\equiv 0$ in $U_{T}$. I have two questions: 1) What is the motivation for the definition of $e(t)$? 2) How do we justify the equality $\int_{U} w_{t}w_{tt}+ Dw \cdot Dw_{t}\,dx =\int_{U}w_{t}(w_{tt} - \Delta w)\,dx$? Thanks very much.
This quantity, the definition of $e(t)$, can be easily recognized as the Hamiltonian (basically another name for energy) of the system by a physics student. Let's explore more details. The PDE $w_{tt}-\Delta w=0$ is equivalent to a variational problem (under some boundary conditions, maybe), whose Lagrangian (or Lagrangian density) is $$ \mathcal L = \frac{1}{2}(w_t^2-(\nabla w)\cdot(\nabla w)) $$ There is a standard process for obtaining the Hamiltonian from the Lagrangian. First, calculate the canonical momentum $\pi$: $$ \pi = \frac{\partial \mathcal L}{\partial w_t}=w_t. $$ Then use the Legendre transformation: $$ \mathcal H = \pi w_t - \mathcal L = \frac{1}{2} (w_t^2+(\nabla w)^2), $$ which is the integrand in the definition of $e(t)$. This is actually the Hamiltonian density and should be integrated over space to give the total energy. P.S. As a somewhat more complicated case, consider the following equation: $$ u_{tt} - \nabla \cdot (c^2(x) \nabla u)+q(x)u = 0 $$ with some appropriate boundary conditions and initial conditions imposed. The two given functions $c,q\ge 0$ depend on $x$ only. You can check that the Lagrangian is $$ \mathcal L = \frac{1}{2}(u_t^2 - c^2 (\nabla u)^2-qu^2), $$ which gives the Hamiltonian $$ \mathcal H = \frac{1}{2} (u_t^2+c^2(\nabla u)^2+qu^2) $$
Functions of algebra that deal with real numbers If the function $f$ satisfies the equation $f(x+y)=f(x)+f(y)$ for every pair of real numbers $x$ and $y$, what are the possible values of $f(0)$? A.  Any real number B.  Any positive real number C.  $0$ and $1$ only D.  $1$ only E.  $0$ only The answer for this problem is E. To find the answer, do you have to plug $0$ into the functional equation to prove it?
Just to add to the collection above, if the vector space is $V$: $$f(v)=f(v+0)=f(v)+f(0)\Longrightarrow f(0)=0$$ for any $v\in V$
Exercise of complex variable, polynomials. Calculate the number of zeros in the right half-plane of the following polynomial: $$z^4+2z^3-2z+10$$ Please, it's the last exercise that I have to do. Help! P.S.: I don't know how to do it.
Proceed as in the previous problem for the first quadrant. You will find one root. And note that the roots come in conjugate pairs, so there are 2 roots in the right half-plane. For the zero in the first quadrant, consider the argument principle: if $Z$ is the number of zeroes of $f$ inside the plane region delimited by the contour $\gamma$, then $\Delta_\gamma(\textrm{arg}f)=2\pi Z$, i.e. the variation of the argument of $f$ along $\gamma$ equals $Z$ times $2\pi$. Take a path from the origin, following the real axis to the point $M>0$, then make a quarter of a circle of radius $M$, reaching the point $iM$, and then go back to the origin along the imaginary axis. Now try to determine the variation of the argument of $f(z)$ along this path for $M\to\infty$:

* along the real axis, the function is $f(t)=t^4+2t^3-2t+10$, which is real and positive for $t\geq0$, so the total change of argument along this part of the path is $0$;
* along the arc $Me^{i\theta}$ for $0\leq\theta\leq \pi/2$, if $M$ is very large, the function is close to $g(\theta)=M^4e^{i4\theta}$; therefore the argument goes from $0$ to $2\pi$;
* along the imaginary axis, the function's argument doesn't change (in the limit).

So the total change of the argument is $2\pi$, implying that the function has only one zero in that quadrant.
A question about independence This is a question I met while reading Shannon's channel coding theorem. Assume a random variable $X$ is transmitted through a noisy channel with transition probability $p(y|x)$. At the receiver a random variable $Y$ is obtained. Assume we have an additional random variable $X'$ which is independent of $X$. How do we show that $X'$ is independent of the bivariate random variable $(X,Y)$ and that $X'$ is independent of $Y$? It looks obvious because $Y$ is generated only from $X$, but I just cannot prove it rigorously, i.e., that $p(x,y|x')=p(x,y)$ and $p(y|x')=p(y)$. Thanks a lot for your answer!
The two given conditions are: (1) $X'$ is independent of $X$; (2) $Y$ is generated only from $X$, i.e., $p(y|x)=p(y|x,x')$. This actually means $X'\to X\to Y$ forms a Markov chain. Now let's prove $X'$ is independent of the bivariate random variable $(X,Y)$, i.e., $p(x,y|x')=p(x,y)$: $$p(x,y|x')=p(x|x')p(y|x,x')=p(x)p(y|x,x') \text{ (from (1))} =p(x)p(y|x) \text{ (from (2))} =p(x,y).$$ It can easily be shown that if $X'$ is independent of $(X,Y)$, then $X'$ is independent of each component. I have shown $X'$ is independent of $(X,Y)$, so $X'$ is also independent of $Y$.
what is the logic behind the 'UPC-A' check digit? In UPC-A barcode symbology, the 12th digit is known as the check digit, and it is used by the scanner to determine whether it scanned the number correctly. The check digit is calculated as follows:

1. Add the digits in the odd-numbered positions (first, third, fifth, etc.) together and multiply by three.
2. Add the digits in the even-numbered positions (second, fourth, sixth, etc.) to the result.
3. Find the result modulo 10 (i.e. the remainder when divided by 10; 10 goes into 58 five times with 8 left over).
4. If the result is not zero, subtract the result from ten.

For example, in a UPC-A barcode "03600029145x" where x is the unknown check digit, x can be calculated by adding the odd-numbered digits ($0 + 6 + 0 + 2 + 1 + 5 = 14$), multiplying by three ($14 \times 3 = 42$), adding the even-numbered digits ($42 + (3 + 0 + 0 + 9 + 4) = 58$), calculating modulo ten ($58 \bmod 10 = 8$), and subtracting from ten ($10 - 8 = 2$). The check digit is thus 2. What is the logic behind calculating the check digit in such a manner? Can't two different combinations of digits produce the same check digit?
Just a remark on your question Can't two different combination of digits produce the same check digit? Of course this happens, after all there are $10^{11}$ possibilities for the first $11$ digits, and only $10$ for the $12$-th one. So plenty of valid codes will share the same $12$-th digit, and a mistake that takes you from one to the other can't be caught. But as noted by @bubba, the code aims at catching two of the commonest errors.
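For concreteness, the four steps in the question transcribe directly into code; a minimal sketch:

```python
def upc_a_check_digit(digits11):
    """Check digit for the first 11 digits of a UPC-A code,
    following the four steps described in the question."""
    odd = sum(digits11[0::2])   # positions 1, 3, 5, ... (1-indexed)
    even = sum(digits11[1::2])  # positions 2, 4, 6, ...
    r = (3 * odd + even) % 10
    return 0 if r == 0 else 10 - r

print(upc_a_check_digit([0, 3, 6, 0, 0, 0, 2, 9, 1, 4, 5]))  # -> 2, as in the example
```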
Vantage point of character theory I am not sure whether I can frame my question properly, or whether at this point my understanding permits me to comprehend the perspectives of the answers to come, but somehow I find it pretty amazing that, when doing representation theory of finite groups over characteristic $0$ fields, the character of a representation plays such an important role. Looked at in isolation, the trace of a matrix hardly reveals anything about the matrix, except if the matrix is diagonal. Does the fact that there exist bases in which $\rho(g)$ is diagonal (for nice underlying fields, at least) for any element $g$ of the group $G$ make a significant difference in our considerations? Also, characters being equal implying equivalence of representations is such a stunning fact. Is there any intuitive basis for this? And lastly, what kind of information comes under the realm of character theory, and what is purely representation theory's realm? I apologize for such a vague question; reading representation theory for the first time, surely I don't understand things too well. But any motivation towards the perspective and vantage points of the subject will be very beneficial.
I know I'm digging up an old question, but no one seems to have brought up this point, although darij grinberg did briefly allude to it in the comments: For any fixed $g \in G$, knowing the trace of $\rho(g)$ doesn't tell you much. However, the character contains the additional information of the trace of $\rho(g^k)$ for all $k$. If $\lambda_1, \ldots, \lambda_n$ are the eigenvalues of $\rho(g)$, then $$\operatorname{Tr}(\rho(g^k)) = (\lambda_1)^k + \cdots + (\lambda_n)^k.$$ By Newton's identities, knowing this for all $k$ allows one to recover the data of the multiset $\{\lambda_1, \ldots, \lambda_n\}$. Since $\rho(g)$ is diagonalizable, it follows that you know the similarity type of $\rho(g)$.
Solve $\frac{(b-c)(1+a^2)}{x+a^2}+\frac{(c-a)(1+b^2)}{x+b^2}+\frac{(a-b)(1+c^2)}{x+c^2}=0$ for $x$ Is there any smart way to solve the equation: $$\frac{(b-c)(1+a^2)}{x+a^2}+\frac{(c-a)(1+b^2)}{x+b^2}+\frac{(a-b)(1+c^2)}{x+c^2}=0$$ Using Maple I can find $x \in \{1,\ ab+bc+ca\}$
I have a partial solution, as follows: Note that $\frac{(b-c)(1+a^2)}{x+a^2}=\frac{(b-c)\left((x+a^2)+(1-x)\right)}{x+a^2}=(b-c)+\frac{{(b-c)}(1-x)}{x+a^2}$. Likewise, $\frac{(c-a)(1+b^2)}{x+b^2}=(c-a)+\frac{(c-a)(1-x)}{x+b^2}$ and $\frac{(a-b)(1+c^2)}{x+c^2}=(a-b)+\frac{(a-b)(1-x)}{x+c^2}$. Now, $\frac{(b-c)(1+a^2)}{x+a^2}+\frac{(c-a)(1+b^2)}{x+b^2}+\frac{(a-b)(1+c^2)}{x+c^2}=\frac{(b-c)(1-x)}{x+a^2}+\frac{(c-a)(1-x)}{x+b^2}+\frac{(a-b)(1-x)}{x+c^2}$ as $(b-c)+(c-a)+(a-b)=0$. Hence $(1-x)\left(\frac{b-c}{x+a^2}+\frac{c-a}{x+b^2}+\frac{a-b}{x+c^2}\right)=0$ and so $x=1$ or $\frac{b-c}{x+a^2}+\frac{c-a}{x+b^2}+\frac{a-b}{x+c^2}=0$. For the second factor, note that at $x=ab+bc+ca$ we have $x+a^2=(a+b)(a+c)$, $x+b^2=(a+b)(b+c)$ and $x+c^2=(a+c)(b+c)$, so over the common denominator $(a+b)(b+c)(c+a)$ the numerator becomes $(b-c)(b+c)+(c-a)(c+a)+(a-b)(a+b)=(b^2-c^2)+(c^2-a^2)+(a^2-b^2)=0$. This recovers Maple's second root $x=ab+bc+ca$.
Proving that $\|A\|$ is finite. Let $|v|$ be the Euclidean norm on $\mathbb{R^n}$. For $A\in \mathrm{Mat}_{n\times n}(\mathbb{R})$ we define $\displaystyle \|A\|:= \sup_{\large v\in \mathbb{R^n},\,v \neq 0}\frac{|Av|}{|v|}$. How do we show that $\|A\|$ is finite for every $A$? It would be very helpful if someone could give hints. I think I should show that $\|\cdot\|$ is bounded, but I don't know how...
Hint: Let $S=\{v\in\mathbb{R}^n\;:\;|v| = 1\}$, $N = \{\frac{|Av|}{|v|}\;:\;v\in\mathbb{R}^n,\;v\ne 0\}$, $N' = \{|Av|\;:\;v\in\mathbb{R}^n,\;|v|=1\}$. Step 1: $\|A\| = \sup N = \sup N'$. Step 2: Show that $x\mapsto|Ax|$ is a continuous map. $S$ is closed and bounded in $\mathbb{R}^n$, therefore compact, so $|Ax|$ attains its max on $S$. Done. Do you happen to be in a Linear Algebra II class?
Subgroups of order $p$ and $p^{n-1}$ in a group of order $p^n$. I have a group $G$ of order $p^n$ for $n \ge 1$ and $p$ a prime. I am looking for two specific subgroups within $G$: one of order $p$ and one of order $p^{n-1}$. I don't think I would use the Sylow theorems here because those seem to apply to groups with a "messier" order than simply $p^n$. Would Cauchy's Theorem allow me to generate the two requisite subgroups? I could use it to find an element of order $p$ and an element of order $p^{n-1}$ and then consider the cyclic subgroups generated by these two elements?
Let $P$ act on itself by conjugation. $1$ appears in an orbit of size $1$, and everything else appears in an orbit of size $p^k$ for some $k$. Since the sum of the orbit sizes is equal to $|P|$, which is congruent to $0\bmod{p}$, there has to be at least one more orbit of size $1$. Orbits of size $1$ under conjugation contain elements which commute with everything in the group; together they form the center $Z(P)$. Now suppose inductively that there exists $S\unlhd P$ with $|S|=p^k$. Then by the above lemma and Cauchy's theorem $P/S$ has a central subgroup $\overline{Q}$ of order $p$. Lifting $\overline{Q}$ back to $P$, we obtain a normal subgroup of order $p^{k+1}$.
Show that $(m^2 - n^2, 2mn, m^2 + n^2)$ is a primitive Pythagorean triplet Show that $(m^2 - n^2, 2mn, m^2 + n^2)$ is a primitive Pythagorean triplet. First, I showed that $(m^2 - n^2, 2mn, m^2 + n^2)$ is in fact a Pythagorean triplet: $$(m^2 - n^2)^2 + (2mn)^2 = m^4 -2m^2n^2 + n^4 + 4m^2n^2 = m^4 + 2m^2n^2 + n^4 = (m^2 + n^2)^2,$$ which shows that it respects $a^2+b^2 = c^2$. Let $p$ be a prime number with $p\mid(m^2 + n^2)$ and $p\mid(m^2 - n^2)$; if $\gcd(m^2 + n^2, m^2 - n^2) = 1$ ... $p \mid (m^2 + n^2)$, so $p \mid m^2$ and $p \mid n^2$ ... that would mean $(m^2 + n^2)$ and $(m^2 - n^2)$ are coprime. I'm kind of lost when I begin to show the gcd $= 1$... I think I know what to do, just not sure how to do it correctly. Thanks
To show $(m^2 - n^2)^2 + (2mn)^2 = (m^2 + n^2)^2$ is equivalent to showing $(m^2 - n^2)^2 + (2mn)^2 - (m^2 + n^2)^2 = 0$, so \begin{align*} (m^2 - n^2)^2 + (2mn)^2 - (m^2 + n^2)^2 &= m^4 - 2m^2n^2 + n^4 + 4m^2n^2 - m^4 - 2m^2n^2 - n^4 \\ &= m^4 + n^4 - m^4 - n^4 \\ &= 0.\end{align*}
Sine not a Rational Function Spivak This is Chapter 15 Question 31 in Spivak: a) Show $\sin$ is not a rational function. By the definition of a rational function, a rational function cannot be $0$ at infinitely many points unless it is $0$ everywhere. Obviously, $\sin$ has infinitely many points where it is $0$ and infinitely many points where it is not zero, thus it is not a rational function. b) Show that there do not exist rational functions $f_0, \ldots, f_{n-1}$ such that $(\sin x)^n + f_{n-1}(x)(\sin x)^{n-1} + \ldots + f_0({x}) = 0$ for all $x$. First choose $x = 2k\pi$, so $f_0(x) = 0$ for $x = 2k\pi$. Since $f_0$ is rational and vanishes at infinitely many points, $f_0(x) = 0$ for all $x$. Thus we can write $\sin x[(\sin x)^{n-1} + f_{n-1}(x)(\sin x)^{n-2} + \ldots +f_1(x)] = 0$. Question: The second factor is $0$ for all $x \neq 2k\pi$. How does this imply that it is $0$ for all $x$? And how does this lead to the result?
Hint: Use continuity of $\sin$ and rational functions.
Stadium Seating A circular stadium consists of 11 sections with aisles in between, with 13 tiers of concrete steps for the final section, section K. Seats are placed along every concrete step, with each step 0.45 m wide. The arc AB at the front of the first row is 14.4 m long, while the arc CD at the back of the back row is 20.25 m long. 1. How wide is each concrete step? 2. What is the length of the arc at the back of row 1, row 2, row 3, and so on?
If the inner radius of row $i$ is $r_i$, then the arc length of the inner arc of that row is given by $$L_i = \frac{2\pi r_i}{11}.$$ If the width $w$ of each step is constant, then $$r_i = r_1 + (i-1)w$$ and the arc length of the outer arc of step $i$ is $$M_i = \frac{2\pi (r_i+w)}{11}.$$ You now have two unknowns -- $r_1$ and $w$ -- and two equation relating the unknowns to known quantities, namely $L_1$ and $M_{13}$. Solve for $w$ and $r_1$ and then plug into the equation above to find $M_i$.
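A quick numeric solve of these two equations, as a sketch (it takes the model above at face value; the 0.45 m quoted in the problem suggests the textbook may intend a slightly different geometry, e.g. with the aisles subtracted from each section's arc):

```python
import numpy as np

L1, M13 = 14.4, 20.25  # given front and back arc lengths, in metres
k = 2 * np.pi / 11     # arc length per unit radius for one of the 11 sections

# L1 = k * r1  and  M13 = k * (r1 + 13 w): two equations, two unknowns
w = (M13 - L1) / (13 * k)
r1 = L1 / k
print(f"step width w = {w:.3f} m, front radius r1 = {r1:.2f} m")

# arc lengths at the back of rows 1..13
for i in range(1, 14):
    print(i, round(k * (r1 + i * w), 2))
```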
If the matrix of a linear map is independent from the basis, then the map is a multiple of the identity map. Let $V$ be a finite dimensional vector space over $F$, and let $$T:V\to V$$ be a linear map. Suppose that given any two bases $B$ and $C$ for $V$, we have that the matrix of $T$ with respect $B$ is equal to that with respect to $C$. How can we show that this implies that there exists some $\lambda\in F$ such that $T(v)=\lambda v$, $\forall v\in V$?
Hint: This is equivalent to saying that every nonzero vector is an eigenvector of $T$.
Legendre symbol: what is the proof that it is a homomorphism? I know that one property of the Legendre symbol is that it is a homomorphism. However, I have not been able to find a proof that this is the case. If someone could give me or point me to a thorough proof of this, that would be great. I am going with the definition: $\sigma(x) = 1$ when $x=y^2$, $\sigma(x)= -1$ otherwise, where $\sigma$ is a map $\sigma: {\mathbb{Z}_p}^{\times} \rightarrow \{-1,1\}$. EDIT: How would we sketch a proof that the symbol is a homomorphism using the following fact: if $(G, *)$ is a finite group, $H\subset G$ is a subgroup, and we have the equivalence relation on $G$: $x \sim y$ iff $\exists h \in H$ s.t. $y= x*h$, with $P$ an equivalence class, then we know $\# P = \# H$ and $\# H$ divides $\# G$. Basically, how would we prove, using this fact, that the set of squares in $\mathbb{Z}_p^{\times}$ is a subgroup of $\mathbb{Z}_p^{\times}$?
I am wondering if it is permissible to use primitive roots modulo a prime $p$. Suppose so; then we look at the definition of the Legendre symbol and give a proof that it is a homomorphism. So fix a prime $p$ first. Let $x$ be a number not divisible by $p$. Suppose that we already have at our disposal a primitive root modulo $p$, denoted by $g$. Then it is easily seen that $g$ is a non-residue of $p$, and that every integer not divisible by $p$ is congruent to $g^n$ for some $n$. Moreover, $x$ is a residue if and only if $x\equiv g^{2k}$ for some $k$. Then the fact that the Legendre symbol is a homomorphism follows from the rules even+even=even, even+odd=odd, and odd+odd=even. Suppose we are not given the existence of a primitive root modulo $p$. Then we use another approach: Let $p$ be an odd prime. Define a homomorphism from $\mathbb Z_p^*=\{1, \ldots,p-1\}$ to itself by squaring: we send $x$ to $x^2$. Its kernel, that is, $\{x: x^2\equiv 1\pmod p\}$, consists of two numbers: $1$ and $p-1$. Its image is the set of residues modulo $p$, which is also a group $\mathbb S_p$ under multiplication. Since multiplication of integers is abelian, the group $\mathbb S_p$ is a normal subgroup of $\mathbb Z_p^*$. So we can form a quotient $\mathbb Z_p^*/\mathbb S_p$. It is of order $2$, hence isomorphic with the cyclic group of order $2$, $\{1,-1\}$. And we send $x\in \mathbb Z_p^*$ to its image in $\mathbb Z_p^*/\mathbb S_p$: this is the Legendre symbol. By this definition, it is obvious that the Legendre symbol is a homomorphism. I wonder also if we can talk about the wonderful interpretation due to Zolotarev: it is nothing but the theory of permutations, yet it is a magical observation.
Tricky T/F... convergence T or F? 1) If $x_n \rightarrow 0$ and $x_n \neq 0$ for all $n$, then the sequence $\{1/x_n\}$ is unbounded. Also similarly... 2) If $\{x_n\}$ is unbounded and $x_n \neq 0$ for all $n$, then $1/x_n \rightarrow 0$. For the first one, I would say that is true because the limit of $\{1/x_n\}$ would approach infinity, thus making it unbounded? And the second one also seems true by similar logic. Am I overlooking something here?
The first one is true; but note the sequence $(1/x_n)$ need not converge to $\infty$. The sequence $(1/|x_n|)$, however, would. Consider here, for example, the sequence $(1/2,-1/3,1/4,-1/5,\ldots)$. For the second one, consider the sequence $(1,1,2,1,3,1,4,1,5,\ldots)$. Note this sequence is unbounded, but the sequence of reciprocals does not converge to $0$. (If you knew $(x_n)$ converged to $\infty$ (or to $-\infty$), then you could conclude the sequence of reciprocals would converge to $0$.)
Find $\lim_{n\to \infty}\frac{1}{\ln n}\sum_{j,k=1}^{n}\frac{j+k}{j^3+k^3}.$ Find $$\lim_{n\to \infty}\frac{1}{\ln n}\sum_{j,k=1}^{n}\frac{j+k}{j^3+k^3}\;.$$
Here is another sketch of a proof. Let $$J_n = \{(j, k) : 0 \leq j, k < n \text{ and } (j, k) \neq (0, 0) \}.$$ Then for each $(j, k) \in J_n$ and $(x_0, y_0) = (j/n, k/n)$, we have $$ \frac{x_0 + y_0}{(x_0+\frac{1}{n})^{3} + (y_0 + \frac{1}{n})^3} \leq \frac{x+y}{x^3 + y^3} \leq \frac{x_0 + y_0 + \frac{2}{n}}{x_0^3 + y_0^3} $$ for $(x, y) \in [x_0, x_0 + \frac{1}{n}] \times [y_0, y_0 + \frac{1}{n}]$. Thus if we let $D_n$ be the closure of the set $[0, 1]^2 - [0, 1/n]^2$, then $$ \sum_{(j,k) \in J_n} \frac{j+k}{(j+1)^3 + (k+1)^3} \leq \int_{D_n} \frac{x+y}{x^3 + y^3} \, dxdy \leq \sum_{(j,k) \in J_n} \frac{j+k+2}{j^3 + k^3}. $$ It is not hard to establish the relation that $$ \sum_{(j,k) \in J_n} \frac{j+k}{(j+1)^3 + (k+1)^3} = \sum_{j,k=1}^{n} \frac{j+k}{j^3 + k^3} + O(1) $$ and that $$ \sum_{(j,k) \in J_n} \frac{j+k+2}{j^3 + k^3} = \sum_{j,k=1}^{n} \frac{j+k}{j^3 + k^3} + O(1). $$ By noting that \begin{align*} \int_{D_n} \frac{x+y}{x^3 + y^3} \, dxdy &= 2\int_{\frac{1}{n}}^{1} \int_{0}^{y} \frac{x+y}{x^3 + y^3} \, dxdy \\ &= (2 \log n) \int_{0}^{1} \frac{1}{x^2 - x + 1} \, dx \\ &= \frac{4 \pi \log n}{3\sqrt{3}}, \end{align*} we obtain the asymptotic formula $$ \frac{1}{\log n} \sum_{j,k=1}^{n} \frac{j+k}{j^3 + k^3} = \frac{4 \pi}{3\sqrt{3}} + O\left( \frac{1}{\log n} \right) $$ and hence the answer is $\displaystyle \frac{4 \pi}{3\sqrt{3}} $.
Powers and the logarithm By example:

* $4^{\log_2(n)}$ evaluates to $n^2$
* $2^{\log_2(n)}$ evaluates to $n$

What is the rule behind this?
Here's a method that relies more on applying an appropriate strategy than on formulas. If you want to rewrite $4^{\log_2(n)}$ as a power of $n$, then you simply want to solve for $u$ in the following equation: $$4^{\log_2(n)} = n^u$$ The standard way of solving for something that appears in the exponent of an exponentiated expression is to take the logarithm (to some fixed base) of both sides. Since $\log_2$ already shows up, we may as well take the logarithm-base-$2$ of both sides (otherwise, two different kinds of logarithms will show up, and we'd have to take care of this later): $$\log_{2}\left(4^{\log_2(n)}\right) = \log_{2}\left(n^u\right)$$ $$\log_{2}(n) \cdot \log_{2}(4) = u \cdot \log_{2}(n)$$ $$\log_{2}(4) = u$$ $$2 = u$$ In the last step I used the fact that $\log_2(4) = \log_{2}(2^2) = 2$. However, if I didn't know this, or if the numbers weren't nice (i.e. we got something like $\log_2(5) = u$), we'd still have what we wanted -- a numerical expression for the exponent of $n.$ The same method allows you to (even more easily) determine that $2^{\log_2(n)} = n.$ Here's an application of the same method to some second semester calculus (in the U.S.) semi-challenging $p$-series convergence/divergence problems.
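A quick numeric sanity check of both identities:

```python
import math

for n in [2, 10, 1000]:
    print(4 ** math.log2(n), n ** 2)  # equal, up to floating-point rounding
    print(2 ** math.log2(n), n)
```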
Computing $\lim_{n\to\infty}n\sum_{k=1}^n\left( \frac{1}{(2k-1)^2} - \frac{3}{4k^2}\right)$ What ways would you propose for the limit below? $$\lim_{n\to\infty}n\sum_{k=1}^n\left( \frac{1}{(2k-1)^2} - \frac{3}{4k^2}\right)$$ Thanks in advance for your suggestions, hints! Sis.
OK, it turns out that $$\sum_{k=1}^n\left( \frac{1}{(2k-1)^2} - \frac{3}{4k^2}\right) = \sum_{k=1}^{n} \frac{1}{(k+n)^2}$$ This may be shown by observing that $$\sum_{k=1}^n \frac{1}{(2k-1)^2} = \sum_{k=1}^{2 n} \frac{1}{k^2} - \frac{1}{2^2} \sum_{k=1}^n \frac{1}{k^2}$$ The desired limit may then be rewritten as $$\lim_{n \rightarrow \infty} \frac{1}{n} \sum_{k=1}^{n} \frac{1}{(1 + (k/n))^2}$$ which you may recognize as a Riemann sum, equal to $$\int_0^1 dx \: \frac{1}{(1+x)^2} = \frac{1}{2}$$
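A numeric check of the value $1/2$, as a sketch:

```python
n = 10**6
s = sum(1 / (2*k - 1)**2 - 3 / (4 * k**2) for k in range(1, n + 1))
print(n * s)  # close to 1/2; the remaining gap is O(1/n)
```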
Differentiation in 3D of a sine, root and fractions in one! -> to find the normal to a function I have to find the normal to this function at a given point $(x, z)$. I have done A-level maths, but that was some time ago, and I don't think it was covered to this level; I am now doing a CS degree. I thought the best way would be to differentiate the function to get the tangent, then get the normal. I have no idea where to start. Any ideas on how I should go about it? $$y = \frac{\sin (\sqrt{x^2+z^2})}{\sqrt{x^2+z^2}}$$
For the surface $y = f(x,z)$, first compute the gradient of $f$. In your case: $$f(x,z) = \frac{\sin{\sqrt{x^2+z^2}}}{\sqrt{x^2+z^2}}$$ $$\begin{align}\nabla f &= \left ( \frac{\partial f}{\partial x}, \frac{\partial f}{\partial z}\right ) \\ &= \left ( \frac{x \cos \left(\sqrt{x^2+z^2}\right)}{x^2+z^2 }-\frac{x \sin \left(\sqrt{x^2+z^2}\right)}{\left(x ^2+z^2\right)^{3/2}}, \frac{z \cos \left(\sqrt{x^2+z^2}\right)}{x^2+z^2 }-\frac{z \sin \left(\sqrt{x^2+z^2}\right)}{\left(x ^2+z^2\right)^{3/2}}\right ) \end{align}$$ A normal to the surface at the point $(x, f(x,z), z)$ is then $$\left ( -\frac{\partial f}{\partial x},\; 1,\; -\frac{\partial f}{\partial z}\right ),$$ which becomes a unit normal after dividing by $$\sqrt{1 + \left ( \frac{\partial f}{\partial x} \right )^2+\left ( \frac{\partial f}{\partial z} \right )^2}.$$
Why isn't this a valid argument to the "proof" of the Axiom of Countable Choice? I am having a little trouble identifying the problem with this argument: Let $\{A_1, A_2, \ldots, A_n, \ldots\}$ be a sequence of sets. Let $X:= \{n \in \mathbb{N} : $ there is an element of the set $A_n$ associated to $n \}$ (1) $A_1$ is not empty. Therefore, there exists a $x_1 \in A_1$ (2) Given an $A_n$ (and a $x_n \in A_n$), we have that $A_{n+1}$ is non-empty and, therefore, there exists a $x_{n+1} \in A_{n+1}$ So, by induction, $X=\mathbb{N}$ and the axiom of countable choice is "proven". Where is the error?
To avoid Choice, you need to have a definitive way of choosing an element from each set. For example, if each $A_n$ is a pair of shoes, you may always choose the left. A typical useful case is when each $A_n$ has a distinguished member such as a unique minimum that you can choose. For example, Baire Category Theorem for a complete SEPARABLE metric space can be proved by a trick like this.
Combination of three items with no adjacent items the same I am looking for a closed expression for calculating the number of combinations of $n = n_1 + n_2 + n_3$ objects arranged in an ordered list, where there are $n_1$ $a$, $n_2$ $b$ and $n_3$ $c$, under the constraint that $a$ may not appear adjacent to $a$, $b$ not adjacent to $b$, and $c$ not adjacent to $c$. If $n_1 = n_2 = n_3$, the following is an upper bound on the number of combinations: $3$ ways of picking the first, and $2$ ways of picking all subsequent objects gives $$3 \cdot 2^{n-1}$$ A solution only exists if $n_1$ is not greater than $1+n_2+n_3$ and similarly for $n_2$ and $n_3$ Thanks to @Byron Schmuland for pointing me to a general answer; the integral that needs to be evaluated looks a bit daunting. Now I need a solution to the integral for three items like the one listed on page 9 of the paper cited in the answer for two items.
In addition to the formula linked above there is a nice generating function. The number of solutions with $i$ $a$'s, $j$ $b$'s and $k$ $c$'s is the coefficient of $a^ib^jc^k$ in $$\frac{1}{1 - \frac{a}{1+a} - \frac{b}{1+b} - \frac{c}{1+c}}.$$ This is a consequence of the beautiful "Carlitz-Scoville-Vaughan" theorem; see this MathOverflow question for an explanation.
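A brute-force check of the generating function for one small case, as a sketch (sympy; the exponents $(i,j,k)=(2,2,1)$ are an arbitrary test choice):

```python
from itertools import permutations
from sympy import diff, factorial, symbols

a, b, c = symbols('a b c')
F = 1 / (1 - a/(1 + a) - b/(1 + b) - c/(1 + c))

def coeff(i, j, k):
    # coefficient of a^i b^j c^k via the Taylor formula at the origin
    return diff(F, a, i, b, j, c, k).subs({a: 0, b: 0, c: 0}) / (
        factorial(i) * factorial(j) * factorial(k))

def brute(i, j, k):
    # count arrangements of the multiset with no two adjacent letters equal
    word = 'a' * i + 'b' * j + 'c' * k
    good = lambda w: all(x != y for x, y in zip(w, w[1:]))
    return sum(good(w) for w in set(permutations(word)))

print(coeff(2, 2, 1), brute(2, 2, 1))  # both give the same count
```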
Let $G$ be a group. Let $x\in G$. Assume that for every $y\in G$, $xyx=y^3$. Prove that $x^2=e$ and $y^8=e$ for all $y\in G$. To put this in context, this is my first week of abstract algebra. Let $G$ be a group. Let $x\in G$. Assume that for every $y\in G$, $xyx=y^3$. Prove that $x^2=e$ and $y^8=e$ for all $y\in G$. A hint would be appreciated.
$\textbf{Full answer:}$ Let $x\in G$. Suppose $(\forall y\in G)(xyx=y^3)$. (i) Set $y=e$ to get $xyx=xex=x^2=e=e^3=y^3$. (The OP already knew this part). (ii) The hypothesis is $(\forall y\in G)(xyx=y^3)$. Let $y\in G$ be taken arbitrarily. By the hypothesis we have $xyx=y^3$. Cubing both sides we get $(xyx)(xyx)(xyx)=y^9$, i.e., $xy(x^2)y(x^2)yx=y^9$. Therefore $xyyyx=y^9$, that is, $xy^3x=y^9$. Recalling that $y^3=xyx$ it follows $y^9=xy^3x=x(xyx)x=x^2yx^2=eye=y$. From $y^9=y$ you can conclude that $y^8=e$ by multiplying by $y^{-1}$.
Appearance of Formal Derivative in Algebra When studying polynomials, I know it is useful to introduce the concept of a formal derivative. For example, over a field, a polynomial has no repeated roots iff it and its formal derivative are coprime. My question is, should we be surprised to see the formal derivative here? Is there some way we can make sense of the appearance of the derivative (which to me is an analytic object) in algebra? I suspect it might have something to do with the fact that the derivative is linear and satisfies the product rule, which makes it a useful object to consider. It would also be interesting to hear an explanation which explains this in the context of algebraic geometry. Thanks!
Formal derivatives appear naturally when trying to rewrite a polynomial in $ x $ (over a characteristic $ 0 $ field) as a polynomial in $ (x-c) $: Let $ f(x) = a_n x^n + \ldots + a_1 x + a_0 \in K[x] $ (with $ a_n \neq 0 $). Given any $ c \in K $, we can expand $ f(x) $ as a polynomial in $ (x-c) $ to get $ \displaystyle f(x) = \sum_{j=0}^{n} a_j x^j = \sum_{j=0}^{n} a_j (x-c+c)^j $ $ \displaystyle = \sum_{j=0}^{n} a_j \bigg( \sum_{r = 0}^{j} \binom{j}{r} (x-c)^{r} c^{j-r} \bigg) $ $ \displaystyle = \sum_{0 \leq r \leq j \leq n} (x-c)^{r} a_j \binom{j}{r}c^{j-r} $ $ \displaystyle = \sum_{r=0}^{n} (x-c)^{r} \bigg( \sum_{j=r}^{n} a_j \binom{j}{r}c^{j-r} \bigg) $ $ \big[ $ Here "$ \binom{j}{r} $" should be understood as $ 1 + \ldots + 1 $ with $ \frac{j(j-1)\ldots(j-(r-1))}{r!} $ many $ 1$s $ \big] $ Now assuming $ \text{char}(K) = 0 $ we can write $ \binom{j}{r} = (r!)^{-1} ( j(j-1)\ldots(j-r+1) )$ [ where again each term "$ n $" should be understood as $ 1 + \ldots + 1 $ with $n$ many $1$s ], giving us $ f(x) \displaystyle = \sum_{r=0}^{n} \frac{(x-c)^{r}}{r!} \bigg( \sum_{j=r}^{n} a_j \: j(j-1)\ldots(j-(r-1)) \: c^{j-r} \bigg) $. But $ \displaystyle \sum_{j=r}^{n} a_j \: j(j-1)\ldots(j-(r-1)) \: c^{j-r} $ is just $ (D^{r} f)(c) $, where $ D : K[x] \rightarrow K[x] $ is a map sending $ c_0 + c_1 x + c_2 x^2 + \ldots + c_n x^n $ $ \mapsto c_1 + 2 c_2 x + \ldots + n c_n x^{n-1} $. To summarise: Let $ f(x) = a_0 + \ldots + a_n x^n \in K[x] $, $ c \in K $, and $ \text{char}(K) = 0 $. Then $ f(x) = f(c) + (Df)(c) \: (x-c) $ $ + \frac{(D^2 f)(c)}{2!} \: (x-c)^2 + \ldots + \frac{(D^n f)(c)}{n!} \: (x-c)^n $, where $ D : K[x] \rightarrow K[x] $ sends $ c_0 + c_1 x + \ldots + c_n x^n $ $ \mapsto c_1 + 2 c_2 x + \ldots + n c_n x^{n-1} $ [ especially if $ \alpha \in K $ is a root of $ f $, its multiplicity is the least $ j $ for which $ (D^j f)( \alpha ) \neq 0 $ ]. Edit: [This should work in general characteristic] For every $f \in K[x]$ there is a unique $\psi _f \in K[x,y]$ such that $f(y) - f(x) = (y-x) \psi _f (x,y).$ Explicitly, taking $f(x) = \sum _{0} ^{n} a _i x ^i,$ we get $f(y) - f(x)$ ${= \sum _{0} ^{n} a _i (y ^i - x ^i)}$ ${= (y-x) \left( \sum _{1} ^{n} a _i (y ^{i-1} + y ^{i-2} x + \ldots + x ^{i-2} y + x ^{i-1}) \right).}$ Now $K[x] \longrightarrow K[x,y]$ sending $f(x) \mapsto \psi _f (x,y)$ is linear: $f(y) - f(x) = (y-x) \psi _f,$ and $g(y) - g(x) = (y-x) \psi _g$; taking ${\alpha \cdot (\text{eq}1) + \beta \cdot (\text{eq}2)}$ gives $\psi _{\alpha f + \beta g} = \alpha \psi _f + \beta \psi _g.$ Also $\psi _{fg} = \psi _f g + f \psi _g + (y-x) \psi _f \psi _g$, since $(fg)(y)$ ${= (f(x) + (y-x) \psi _f)(g(x) + (y-x) \psi _g)}$ ${ = (fg)(x) + (y-x)(\psi _f g+ f \psi _g + (y-x) \psi _f \psi _g).}$ So the map $K[x] \overset{D}{\longrightarrow} K[x]$ sending $f(x) \mapsto \psi _f (x,x)$ is linear, with the "Leibniz rule" $D(fg) = (Df)g + f(Dg).$ From the expression for $\psi _f$ we get the explicit description $D\left( \sum a _i x ^i \right) = \left(\sum i a _{i} x ^{i-1} \right).$
A sine integral $\int_0^{\infty} \left(\frac{\sin x }{x }\right)^n\,\mathrm{d}x$ The following question comes from Some integral with sine post $$\int_0^{\infty} \left(\frac{\sin x }{x }\right)^n\,\mathrm{d}x$$ but now I'd be curious to know how to deal with it by methods of complex analysis. Some suggestions, hints? Thanks!!! Sis.
A complete asymptotic expansion for large $n$ may be derived as follows. We write $$ \int_0^{ + \infty } {\left( {\frac{{\sin t}}{t}} \right)^n {\rm d}t} = \int_0^\pi {\left( {\frac{{\sin t}}{t}} \right)^n {\rm d}t} + \int_\pi ^{ + \infty } {\left( {\frac{{\sin t}}{t}} \right)^n {\rm d}t} . $$ Notice that $$ \int_\pi ^{ + \infty } {\left( {\frac{{\sin t}}{t}} \right)^n {\rm d}t} = \frac{\pi }{{\pi ^n }}\int_1^{ + \infty } {\left( {\frac{{\sin (\pi s)}}{s}} \right)^n {\rm d}s} = \mathcal{O}(\pi ^{ - n} ) $$ as $n\to+\infty$. By Laplace's method, $$ \int_0^{ \pi } {\left( {\frac{{\sin t}}{t}} \right)^n {\rm d}t} = \int_0^\pi {\exp \left( { - n\log (t\csc t)} \right){\rm d}t} \sim \sqrt {\frac{{3\pi }}{{2n}}} \sum\limits_{k = 0}^\infty {\frac{{a_k }}{{n^k }}} $$ as $n\to+\infty$, where $$ a_k = \left( {\frac{3}{2}} \right)^k \frac{1}{{k!}}\left[ {\frac{{{\rm d}^{2k} }}{{{\rm d}t^{2k} }}\left( {\frac{{t^2 }}{{6\log (t\csc t)}}} \right)^{k + 1/2} } \right]_{t = 0} . $$ Accordingly, \begin{align*} \int_0^{ + \infty } {\left( {\frac{{\sin t}}{t}} \right)^n {\rm d}t} &\sim \sqrt {\frac{{3\pi }}{{2n}}} \sum\limits_{k = 0}^\infty {\frac{{a_k }}{{n^k }}} \\ & = \sqrt {\frac{{3\pi }}{{2n}}} \left( {1 - \frac{3}{{20n}} - \frac{{13}}{{1120n^2 }} + \frac{{27}}{{3200n^3 }} + \frac{{52791}}{{3942400n^4 }} + \ldots } \right) \end{align*} as $n\to+\infty$.
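A numeric comparison of the expansion against quadrature, as a sketch (the integrand decays like $t^{-n}$, so truncating the integral at $t=60$ is harmless for the values of $n$ shown):

```python
import numpy as np
from scipy.integrate import quad

def exact(n):
    val, _ = quad(lambda t: (np.sin(t) / t)**n, 1e-12, 60.0, limit=500)
    return val

def asymptotic(n):
    return np.sqrt(3 * np.pi / (2 * n)) * (1 - 3/(20*n) - 13/(1120*n**2)
                                           + 27/(3200*n**3) + 52791/(3942400*n**4))

for n in [4, 10, 40]:
    print(n, exact(n), asymptotic(n))  # agreement improves as n grows
```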
How can I prove that the function $f(x,y)= \frac{x^2}{y}$ is convex for $y \gt 0$? I take the Hessian matrix of $\displaystyle \frac{x^2}{y}$, and I get: $$H = \displaystyle\pmatrix{\frac{2}{y} & -\frac{2x}{y^2} \\-\frac{2x}{y^2} & \frac{2x^2}{y^3}}$$ Furthermore, we have: $$H = \frac{2}{y^3}\displaystyle\pmatrix{y^2 & -xy \\-xy & x^2}$$ I need to prove that $H$ is a positive semidefinite matrix for $y>0$. I think I am close to the answer, but lack some knowledge to prove it, since I tried to calculate $\det(H)$ and it ends up equal to $0$. Can anyone help? Thanks.
You're basically done: a symmetric matrix $(a_{ij})_{i,j\le n}$ is positive semidefinite iff the principal subdeterminants $\det(a_{ij})_{i,j\in H}$ are $\ge 0$ for all $H\subseteq\{1,2,\ldots,n\}$. Here the $1\times 1$ principal minors are $\frac{2}{y} > 0$ and $\frac{2x^2}{y^3} \ge 0$, and the full determinant is $\frac{2}{y}\cdot\frac{2x^2}{y^3}-\frac{4x^2}{y^4}=0$, so $H$ is indeed positive semidefinite for $y>0$.
Convergence of a particular double series For the double series $$ \sum_{m,n=1}^{\infty} \frac{1}{(m+n)^p} , $$ I was wondering when it converges. I want to use double integrals to estimate it, but I don't know how to write the process accurately... Could you show me a detailed computation? Thanks!
While we wait for someone to do it using double integrals, here's another way: Given a positive integer $k$, the number of pairs $m,n$ with $m\ge1$, $n\ge1$, and $m+n=k$ is $k-1$. So your sum is $\sum_{k=2}^{\infty}(k-1)/k^p$, and the usual single-series methods apply: since $(k-1)/k^p \sim 1/k^{p-1}$, the double series converges exactly when $p>2$.
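For a concrete check, summing the single series gives $\sum_{k=2}^{\infty}(k-1)/k^p=\zeta(p-1)-\zeta(p)$ for $p>2$, and a truncated double sum agrees numerically; a sketch:

```python
import numpy as np
from scipy.special import zeta

p, N = 3.0, 2000
m = np.arange(1, N + 1)
double_sum = (1.0 / (m[:, None] + m[None, :])**p).sum()
print(double_sum, zeta(p - 1) - zeta(p))  # close; a tail of order N^(2-p) remains
```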
Really stuck on a partial derivatives question OK, so I'm really stuck on a question. It goes: Consider $$u(x,y) = xy \frac {x^2-y^2}{x^2+y^2}$$ for $(x,y) \neq (0,0)$ and $u(0,0) = 0$. Calculate $\frac{\partial u} {\partial x} (x,y)$ and $\frac{\partial u} {\partial y} (x,y)$ for all $(x,y) \in \Bbb R^2$. Show that $\frac {\partial^2 u} {\partial x \partial y} (0,0) \neq \frac {\partial^2 u} {\partial y \partial x} (0,0)$. Check, using polar coordinates, that $\frac {\partial u}{\partial x}$ and $\frac {\partial u}{\partial y}$ are continuous at $(0,0)$. Any help really appreciated. Cheers
We are given: $$u(x, y)=\begin{cases} xy \frac {x^2-y^2}{x^2+y^2}, & (x, y) \ne (0,0)\\ 0, & (x, y) = (0,0). \end{cases}$$ I am going to multiply out the numerator for ease in calculations, so we have: $$\tag 1 u(x, y)=\begin{cases} \frac {x^3y - xy^3}{x^2+y^2}, & (x, y) \ne (0,0)\\ 0, & (x, y) = (0,0). \end{cases}$$ We are asked to:

* (a) Find $\displaystyle \frac{\partial u} {\partial x} (x,y)$ for all $(x,y) \in \Bbb R^2$
* (b) Find $\displaystyle \frac{\partial u} {\partial y} (x,y)$ for all $(x,y) \in \Bbb R^2$
* (c) Show that $\displaystyle \frac {\partial^2 u} {\partial x \partial y} (0,0) \neq \frac {\partial^2 u} {\partial y \partial x} (0,0)$
* (d) Check, using polar coordinates, that $\displaystyle \frac {\partial u}{\partial x}$ and $\displaystyle \frac {\partial u}{\partial y}$ are continuous at $(0,0)$

Using $(1)$, for part $(a)$, we get for $(x,y)\ne(0,0)$: $$\tag 2 \frac{\partial u} {\partial x} (x,y) = \frac{(3x^2y- y^3)(x^2+y^2) - 2x(x^3y - xy^3)}{(x^2 + y^2)^2} = \frac{x^4y + 4x^2y^3-y^5}{(x^2+y^2)^2}$$ Using $(1)$, for part $(b)$, we get for $(x,y)\ne(0,0)$: $$\tag 3 \frac{\partial u} {\partial y} (x,y) = \frac{(x^3-3xy^2)(x^2+y^2) - 2y(x^3y - xy^3)}{(x^2+y^2)^2} = \frac{x^5 - 4x^3y^2-xy^4}{(x^2+y^2)^2}$$ Next, we need the mixed partials, so using $(2)$ (differentiating $(3)$ with respect to $x$ gives the same expression), we have for $(x,y)\ne(0,0)$: $$\tag 4 \frac {\partial^2 u} {\partial x \partial y} (x, y) = \frac{(x^4+ 12x^2y^2-5y^4)(x^2+y^2)^2 - 2(x^2+y^2)(2y)(x^4y + 4x^2y^3-y^5)}{(x^2+y^2)^4} = \frac{x^6 + 9x^4y^2-9x^2y^4-y^6}{(x^2+y^2)^3} = \frac {\partial^2 u} {\partial y \partial x} (x, y)$$ Thus, away from the origin, $$\tag 5 \frac {\partial^2 u} {\partial x \partial y} (x, y) = \frac {\partial^2 u} {\partial y \partial x} (x, y) = \frac{x^6 + 9x^4y^2-9x^2y^4-y^6}{(x^2+y^2)^3}, \qquad (x, y) \ne (0,0);$$ at the origin the mixed partials have to be computed from the difference quotients. Now, for part $(c)$, we want to show that $\frac {\partial^2 u} {\partial x \partial y} (0,0) \neq \frac {\partial^2 u} {\partial y \partial x} (0,0)$, so we need to find each mixed partial at the origin. We have: $$\tag 6 \frac{\partial^2 u} {\partial x \partial y} (0,0) = \lim\limits_{h \to 0} \frac{\frac{\partial u}{\partial x} (0, h) - \frac{\partial u}{\partial x} (0, 0)}{h} = \lim\limits_{h \to 0} \frac{-h^5/h^4}{h} = \lim\limits_{h \to 0} \frac{-h}{h} = -1,$$ and $$\tag 7 \frac{\partial^2 u} {\partial y \partial x} (0,0) = \lim\limits_{h \to 0} \frac{\frac{\partial u}{\partial y} (h, 0) - \frac{\partial u}{\partial y} (0, 0)}{h} = \lim\limits_{h \to 0} \frac{h^5/h^4}{h} = \lim\limits_{h \to 0} \frac{h}{h} = +1.$$ $\therefore$, for part $(c)$, we have shown that: $$\frac {\partial^2 u} {\partial x \partial y} (0,0) \neq \frac {\partial^2 u} {\partial y \partial x} (0,0)$$ as desired. Can you handle part $(d)$? Regards
Convergence of a sum of rational functions I have the complex functions $f_n(z) = 1/(1+z^n)$ and I'm supposed to determine where $\sum_{n=1}^\infty f_n(z)$ converges for $z \in\mathbb{C}$. Extra info: I was only able to determine convergence for $|z| \gt 1$. The argument applies the ratio test: $|\frac{f_{n+1}}{f_n}| = \frac{|1 + z^n|}{|1+z^{n+1}|}$. Now note that the denominator satisfies $|1 + z^{n+1}| = |1 -(-z^{n+1})| \ge |1 - |-z^{n+1}|| = |1 - |z|^{n+1}| \ge \frac{1}{K}|z|^{n+1}$ for large $n$ and any fixed $K \gt 1$. So the ratio can be bounded: $\frac{|1 + z^n|}{|1+z^{n+1}|} \le K\frac{1 + |z|^n}{|z|^{n+1}} = \frac{K}{|z|^{n+1}} + \frac{K}{|z|}$, which limits to $\frac{K}{|z|}$, so convergence is guaranteed by the ratio test when $|z| \gt K$, and $K$ was arbitrarily greater than $1$.
For any point $z \in \bar{\mathbb D}$ you can apply the divergence test to your summation to conclude that it does not converge: since $|1+z^n| \le 2$ there, the terms satisfy $|f_n(z)| \ge 1/2$ wherever they are defined, so they do not tend to $0$.
A question about normal subgroups $H<G$, $aHa^{-1}<G$. Then $H$ is isomorphic to $aHa^{-1}$. I want to show $aha^{-1}\in H$ for all $h\in H$, $a\in G$, but I cannot figure it out. Any hint is appreciated.
To show that $H\cong aHa^{-1}$, you need to show that for any $a\in G$, the conjugation $$\kappa_a : G\to G,\quad g\mapsto aga^{-1}$$ is an automorphism of $G$. The proof is easy: For $g,h\in G$ we have $$\kappa_a(gh) = agha^{-1} = ag(a^{-1}a)ha^{-1} = (aga^{-1})(aha^{-1}) = \kappa_a(g)\kappa_a(h)\text{,}$$ so $\kappa_a$ is a homomorphism. To show that $\kappa_a$ is bijective, check $\kappa_a\circ\kappa_{a^{-1}} = \kappa_{a^{-1}}\circ\kappa_a = \operatorname{id}$. Now every subgroup $H$ is isomorphic to $\kappa_a(H) = aHa^{-1}$.
Multidimensional Hensel lifting I have a question about a practical application of (some) generalised form of Hensel's Lemma. I cannot find it stated in an appropriate form in Bourbaki or anywhere else, so here goes ... Let $p$ be an odd prime: we work over the $p$-adic numbers $Q_p$ with ring of integers $Z_p$ and residue field $F_p$. I have a bunch of quadratic and quartic polynomial equations in N variables, with coefficients in $F_p$, and I have a (non-unique) solution set in $F_p^{N}$. To simplify things assume that the number of variables N exceeds the total number of equations. Is it possible to conclude from the non-vanishing of some sort of generalised Jacobian determinant, that my solution set lifts to $Z_p$?
The same proof method works in higher dimension. Let $F: \mathbb{Z}_p^m \to \mathbb{Z}_p^n$ be our system of polynomial functions. If $x \in \mathbb{Z}_p^m$ satisfies $$ F(x) \equiv 0 \pmod {p^e} $$ then $$ F(x) \equiv p^e y \pmod {p^{e+1}} $$ for some vector $y$. Furthermore, the Taylor expansion about $x$ tells us $$ F(x + p^e z) \equiv p^e y + p^e dF(x) \cdot z \pmod{p^{e+1}} $$ so as long as $$ dF(x) \cdot z \equiv -y \pmod p$$ always has a solution for the vector $z$ (e.g. more variables than equations and $dF(x)$ has full rank), then lifts exist.
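As a toy illustration of one lifting step (a sketch: the system, the prime, and the brute-force search standing in for the linear solve $dF(x)\cdot z \equiv -y \pmod p$ are all illustrative choices):

```python
from itertools import product

def lift_step(F, x, p, e):
    """Given F(x) = 0 (mod p^e), find z with F(x + p^e z) = 0 (mod p^(e+1)).

    Brute force over z in {0,...,p-1}^m stands in for solving the
    linear system dF(x).z = -y (mod p); fine for tiny p and m.
    """
    m = len(x)
    for z in product(range(p), repeat=m):
        cand = [xi + p**e * zi for xi, zi in zip(x, z)]
        if all(f(*cand) % p**(e + 1) == 0 for f in F):
            return cand
    return None

# one polynomial equation in two variables: x^2 + y^2 - 2 over Z_7
F = [lambda x, y: x**2 + y**2 - 2]
sol = [3, 0]  # 9 + 0 - 2 = 7 = 0 (mod 7)
for e in range(1, 4):
    sol = lift_step(F, sol, 7, e)
    print(f"solution mod 7^{e + 1}: {sol}")
```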
Roots of the equation $I_1(b x) - x I_0(b x) = 0$ I'm interested in the roots of the equation: $I_1(bx) - x I_0(bx) = 0$ Where $I_n(x)$ is the modified Bessel function of the first kind and $b$ is real positive constant. More specifically, I'm interested in the behaviour of the largest non-negative root for $x \geq 0$. I have solved the problem numerically in a crude fashion to see what the function looks like, but as far as I can tell there isn't a closed form. If the value of the largest root is given by $x(b)$, then based on the numerical solution I think the following is true: $x(b) < 0 \; $ for $\; b < 2$ $\lim_{b\to\infty} x(b) = 1$ Can anyone give me some pointers to the possible approaches I might take to approximating $x(b)$ for $b>2$ ? Thanks!
Notice that the equation can also be rewritten as $$ \frac{I_1(b x)}{I_0(b x)} = x $$ Using the asymptotic series expansion of this ratio for large $b$ and some fixed $x>0$ we get: $$ 1 - \frac{1}{2 b x} - \frac{1}{8 b^2 x^2} - \frac{1}{8 b^3 x^3} + o\left(b^{-3}\right) = x $$ resulting in $$ x(b) = 1 - \frac{1}{2 b} - \frac{3}{8 b^2} - \frac{9}{16 b^3} + o\left(b^{-3}\right) $$ Confirming $\lim_{b \to + \infty} x(b) = 1$.
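A numeric comparison, as a sketch (scipy's `i0`/`i1`; the bracketing interval is chosen by hand to contain the nontrivial root):

```python
from scipy.optimize import brentq
from scipy.special import i0, i1

def x_of_b(b):
    # largest positive root of I1(b x) - x I0(b x) = 0
    return brentq(lambda x: i1(b * x) - x * i0(b * x), 0.3, 0.999999)

for b in [3, 5, 10, 30]:
    approx = 1 - 1/(2*b) - 3/(8*b**2) - 9/(16*b**3)
    print(b, x_of_b(b), approx)  # the expansion closes in as b grows
```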
Proving {$b_n$}$_{n=1}^\infty$ converges given {$a_n$}$_{n=1}^\infty$ and {$a_n b_n$}$_{n=1}^\infty$ Suppose {$a_n$}$_{n=1}^\infty$ and {$b_n$}$_{n=1}^\infty$ are sequences such that {$a_n$}$_{n=1}^\infty$ converges to $A\neq0$ and {$a_n b_n$}$_{n=1}^\infty$ converges. Prove that {$b_n$}$_{n=1}^\infty$ converges. What I have so far: $b_n = {a_n b_n \over a_n}$ $\to$ $C \over A$, $A\neq0$. |$b_n - {C \over A}$| = |${a_n b_n \over a_n} - {C \over A}$| = |${Aa_nb_n - Ca_n \over Aa_n}$| $ \leq $ |${1 \over Aa_n}||Aa_nb_n - Ca_n$|=|${1 \over Aa_n}||a_n(Ab_n - C)$| $\leq |{1 \over Aa_n}||a_n||(Ab_n - C)$|. Note: since $a_n$ converges, there is $M>0$ such that |$a_n| \leq M$ for all $n \in\Bbb N$. Thus, |${1 \over Aa_n}||a_n||(Ab_n - C)$| = |${1 \over M}||M||(Ab_n - C)$|. And this is where I get lost. Any thoughts? Or am I completely wrong to begin with?
Hint: Let $A=\lim\limits_{n\to\infty}a_n$ and $B=\frac{\lim\limits_{n\to\infty}a_nb_n}{A}$. Since $|A|>0$, for $n$ large enough, $|a_n-A|\le\frac{|A|}{2}$. Show that then, $|a_n|\ge\frac{|A|}{2}$. Then note that $$ a_n(b_n-B)=(a_nb_n-AB)-B(a_n-A) $$
Linear Transformations and the Identity Suppose that $V$ is a finite dimensional vector space and $T$ is a linear transformation from $V$ to $V$. Prove that $T$ is a scalar multiple of the identity iff $ST=TS$ for every linear transformation $S$ from $V$ to $V$.
One direction is trivial. For the other, take $\{v_1,...,v_n\}$ to be any basis for $V$, and consider the transformations $E_{i,j}:V\to V$ induced by $$v_k\mapsto\begin{cases} v_j & \text{if }k=i,\\0 & \text{otherwise.}\end{cases}$$ What can we conclude about $T$ from the fact that $E_{i,j}T=TE_{i,j}$ for all $i,j\in\{1,...,n\}$?
How to rewrite $\sin^4 \theta$ in terms of $\cos \theta, \cos 2\theta,\cos3\theta,\cos4\theta$? I need help with writing $\sin^4 \theta$ in terms of $\cos \theta, \cos 2\theta,\cos3\theta, \cos4\theta$. My attempts so far have been unsuccessful and I constantly get developments that are way too cumbersome and not elegant at all. What is the best way to approach this problem? I know that the answer should be: $\sin^4 \theta =\frac{3}{8}-\frac{1}{2}\cos2\theta+\frac{1}{8}\cos4\theta$ Please explain how to do this. Thank you!
Write $$\sin^4{\theta} = \left ( \frac{e^{i \theta} - e^{-i \theta}}{2 i} \right )^4$$ and use the binomial theorem. $$\begin{align}\left ( \frac{e^{i \theta} - e^{-i \theta}}{2 i} \right )^4 &= \frac{1}{16} (e^{i 4 \theta} - 4 e^{i 2 \theta} + 6 - 4 e^{-i 2 \theta} + e^{-i 4 \theta}) \\ &= \frac{1}{8} (\cos{4 \theta} - 4 \cos{2 \theta} + 3)\end{align}$$ Item of curiosity: the Chebyshev polynomials are defined such that $$T_n(\cos{\theta}) = \cos{n \theta}$$
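A symbolic confirmation, as a sketch: rewriting in complex exponentials, exactly as above, reduces the difference to $0$.

```python
from sympy import Rational, cos, exp, sin, symbols

theta = symbols('theta')
lhs = sin(theta)**4
rhs = Rational(3, 8) - cos(2*theta)/2 + cos(4*theta)/8
# rewrite the trig functions as complex exponentials and expand
print((lhs - rhs).rewrite(exp).expand())  # -> 0
```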
Find an example of two elements $a,b$ in a finite group $G$ such that $|a| = |b| = 2$, $a \ne b$ and $|ab|$ is odd. Find an example of two elements $a,b$ in a finite group $G$ such that $|a| = |b| = 2$, $a \ne b$ and $|ab|$ is odd. Any ideas as to how I would go about finding it? Thanks
There is a very general example you should know about, that of dihedral groups. A dihedral group has order $2n$, for any $n \ge 2$, and it is generated by two elements of order $2$ whose product has order $n$. (For the question as asked, take $n$ odd; e.g. $n=3$ gives the symmetric group $S_3$, where $a=(1\,2)$ and $b=(1\,3)$ have order $2$ and $ab=(1\,3\,2)$ has order $3$.) Probably the simplest way to see these groups is as a group of bijective maps on $\mathbf{Z}_{n}$, $$ a : x \mapsto -x, \qquad b: x \mapsto -x-1. $$ (Coefficients are written as integers, but meant to be in $\mathbf{Z}_{n}$.) These are clearly elements of order $2$, while because of $$ a \circ b(x) = a(b(x)) = -(-x - 1) = x + 1 $$ $ab = a \circ b$ has order $n$. (Geometrically, such a group is the group of congruences of a regular $n$-gon.) If you do the same thing over $\mathbf{Z}$, you find that $ab$ has infinite order.
If $P(A \cup B \cup C) = 1$, $P(B) = 2P(A) $, $P(C) = 3P(A) $, $P(A \cap B) = P(A \cap C) = P(B \cap C) $, then $P(A) \le \frac14$ We have ($P$ is probability): $P(A \cup B \cup C) = 1$ ; $P(B) = 2P(A) $ ; $P(C) = 3P(A) $ and $P(A \cap B) = P(A \cap C) = P(B \cap C) $. Prove that $P(A) \le \frac{1}{4} $. Well, I tried with the fact that $ 1 = P(A \cup B \cup C) = 6P(A) - 3P(A \cap B) + P(A \cap B \cap C) $ but I got stuck... Could anyone help me, please?
This solution is quite possibly messier than necessary, but it’s what first occurred to me. For convenience let $x=P\big((A\cap B)\setminus C\big)$ and $y=P(A\cap B\cap C)$. Let $a=P(A)-2x-y$, $b=P(B)-2x-y$, and $c=P(C)-2x-y$; these are the probabilities of $A\setminus(B\cup C)$, $B\setminus(A\cup C)$, and $C\setminus(A\cup B)$, respectively. (A Venn diagram is helpful here.) Now $$b+2x+y=2(a+2x+y)=2a+4x+2y\;,$$ so $b=2a+2x+y$. Similarly, $$c+2x+y=3(a+2x+y)=3a+6x+3y\;,$$ so $c=3a+4x+2y$. Then $$\begin{align*} 1&=P(A\cup B\cup C)\\ &=a+b+c+3x+y\\ &=6a+9x+4y\\ &=4(a+2x+y)+2a+x\\ &=4P(A)+2a+x\;. \end{align*}$$ Since $2a+x\ge 0$, we must have $P(A)\le \frac{1}{4}$.
Is this the way to estimate the amount of lucky twins? To estimate the amount of prime twins between $3$ and $x$ we just take $x \prod_{p}(1-2/p)$ where $p$ runs over the primes between $3$ and $\sqrt x$. Lucky numbers are similar to prime numbers. Does this imply that a good way to estimate the amount of lucky twins between $3$ and $x$ is $x \prod_{l}(1-2/l)$ where $l$ runs over the lucky numbers between $3$ and $\sqrt x$? *EDIT:* I think I can improve the question by stating it as follows. $1$) If $x$ goes to infinity and $l(x)$ denotes the amount of lucky numbers between $3$ and $x$, does $\dfrac{x \prod_{l}(1-2/l)}{l(x)}=\text{Constant}$? $2$) If $x$ goes to infinity, does $\dfrac{\prod_{p}(1-2/p)}{\prod_{l}(1-2/l)}=\text{Constant}$? If no theoretical answer is possible, are $1$) and $2$) supported numerically?
I think that to answer this question one would need to start from the paper by Bui and Keating on the random sieve (of which the lucky sequence is a particular realization) and then carefully examine the proofs that they cite of Mertens' theorems for that sieve. The issue is whether there is a correction factor like Mertens' constant or the twin prime constant in the product that you wrote down for the number of lucky twins, and if there is, is it calculated (up to $1+o(1)$ factors) by the same formula as for the primes. http://arxiv.org/abs/math/0607196 The paper gives an asymptotic formula for random twin primes that does not have any twin prime constant, but the explanation there is not exactly about the same situation. It looks to me like some analysis of the proofs of the analogues of Mertens' theorems is needed to understand what product over random primes is the correct estimate of the number of random twin primes. I cannot tell the answer to the question from Bui and Keating paper by itself, but would guess that up to multiplication by a constant, the product you wrote down is, for nearly all realizations of the random sieve (ie., with probability 1) asymptotic to the number of random-sieve twins, and this is the heuristic guess for what happens with the lucky numbers.
Isomorphism with Lie algebra $\mathfrak{sl}(2)$ Let $L$ be a Lie algebra on $\mathbb{R}$. We consider $L_{\mathbb{C}}:= L \otimes_{\mathbb{R}} \mathbb{C}$ with bracket operation $$ [x \otimes z, y \otimes w] = [x,y] \otimes zw $$ for all $x,y \in L$ and $z,w \in \mathbb{C}$. We have that $L_{\mathbb{C}}$ is a Lie algebra. If $L= \mathbb{R}^{3}$ and for $x,y \in L$ we define $[x,y]:= x \wedge y$ (where $\wedge$ denotes the usual vector (cross) product). We have that $(L, \wedge)$ is a Lie algebra. I have to prove that $L_{\mathbb{C}} \simeq \mathfrak{sl}(2)$. In order to do this I'd like to prove that $L \simeq \mathfrak{so}(3,\mathbb{R})$. Then, because $\mathfrak{so}(3,\mathbb{R}) \otimes \mathbb{C} \simeq \mathfrak{sl}(2)$ and $\mathfrak{sl}(2)$, up to isomorphism, is the unique $3$-dimensional semisimple algebra, I complete my proof. So my questions are: 1) How to prove that $(\mathbb{R}^{3}, \wedge) \simeq \mathfrak{so}(3, \mathbb{R})$ ? 2) Why $\mathfrak{so}(3,\mathbb{R}) \otimes \mathbb{C} \simeq \mathfrak{sl}(2)$ ?
For (1): Let $$i:=(1,0,0),\quad j:=(0,1,0)\text{ and } k:=(0,0,1)\in\mathbb R^3$$ and $$ A:=\left(\begin{array}{ccc}0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0\end{array}\right),\quad B:=\left(\begin{array}{ccc}0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & -1 & 0\end{array}\right) \text{ and } C:=\left(\begin{array}{ccc}0 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0\end{array}\right) \in \mathfrak{so}(3,\mathbb R). $$ So, $$i\wedge j = k,\quad i\wedge k = -j\text{ and }j\wedge k = i \in \mathbb R^3$$ and $$[A,B] = C,\quad [A,C] = -B\text{ and }[B,C]=A\in \mathfrak{so}(3,\mathbb R).$$ Then the linear application $\mathbb R^3\to\mathfrak{so}(3,\mathbb R)$ given by $$\begin{array}{rcl} i & \to & A \\ j & \to & B \\ k & \to & C \end{array}$$ is an isomorphism of Lie algebras. For (2): Let $$ X:=\left(\begin{array}{cc}0 & i \\ i & 0\end{array}\right),\quad Y:=\left(\begin{array}{cc}i & 0 \\ 0 & -i\end{array}\right)\text{ and } Z:=\left(\begin{array}{cc}0 & 1 \\ -1 & 0\end{array}\right)\in\mathfrak{sl}(2,\mathbb C) $$ and $$ X'=A\otimes 2,\quad Y'=B\otimes 2\text{ and } Z'=C\otimes 2\in\mathfrak{so}(3,\mathbb R)\otimes\mathbb C, $$ for $A$, $B$ and $C\in\mathfrak{so}(3,\mathbb R)$ as in item (1). So, $$[X,Y] = 2Z,\quad [X,Z] = -2Y\text{ and }[Y,Z]=2X\in \mathfrak{sl}(2,\mathbb C)$$ and $$[X',Y'] = 2Z',\quad [X',Z'] = -2Y'\text{ and }[Y',Z']=2X'\in \mathfrak{so}(3,\mathbb R)\otimes\mathbb C.$$ Then, the linear application $\mathfrak{sl}(2,\mathbb C)\to\mathfrak{so}(3,\mathbb R)\otimes\mathbb C$ given by $$\begin{array}{rcl} X & \to & X' \\ Y & \to & Y' \\ Z & \to & Z' \end{array}$$ is an isomorphism of Lie algebras.
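Not part of the original answer, but for (1) the bracket relations are easy to verify mechanically. Here is a small Python/NumPy sketch (matrix names as in the answer above) checking that the commutators of $A,B,C$ mirror the cross products of $i,j,k$:

```python
import numpy as np

A = np.array([[0, 1, 0], [-1, 0, 0], [0, 0, 0]])
B = np.array([[0, 0, 0], [0, 0, 1], [0, -1, 0]])
C = np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]])

def comm(X, Y):
    # Lie bracket of matrices: [X, Y] = XY - YX
    return X @ Y - Y @ X

i, j, k = np.eye(3, dtype=int)
print(np.array_equal(comm(A, B), C), np.array_equal(np.cross(i, j), k))    # True True
print(np.array_equal(comm(A, C), -B), np.array_equal(np.cross(i, k), -j))  # True True
print(np.array_equal(comm(B, C), A), np.array_equal(np.cross(j, k), i))    # True True
```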
quotient of finitely presented module $\DeclareMathOperator\Coker{Coker}$ Assume the exact sequence $$A \xrightarrow{f} B \xrightarrow{} \Coker f \xrightarrow{} \{0\} $$ where $A$ is a finitely generated module and $B$ is a finitely presented module. Is it true that $\Coker f$ is a finitely presented module? In general, is the quotient of a finitely presented module by a finitely generated submodule finitely presented?
This is true. I'll denote the ring by $R$, while remaining silent about whether these are left or right modules. Write $B = R^n/I$ for some finitely generated submodule $I \subseteq R^n$. Then the image of $f \colon A \to B$ is of the form $J/I$ for some submodule $J \subseteq R^n$, and $J/I$ is a finitely generated module. Then $J$ is finitely generated as well: taking finitely many elements of $J$ whose images generate $J/I$, along with a finite generating set for $I$, will give a finite generating set for $J$. Thus $\operatorname{Coker}(f) \cong R^n/J$ is finitely presented. If you're looking for a reference, see Exercise 4.8 in T.Y. Lam's book Exercises in Modules and Rings.
limit of the error in approximating definite integral with midpoint rule I want to calculate $\lim_{n \rightarrow \infty} n^2 |\int_{[0,1]}f(x)-I_n(x)|$ where $I_n$ is the integral approximation by midpoint rule: $I_n=\frac{1}{n}\sum_{k=1}^nf(c_k)$ and $c_k$ is the point in the middle of $k^{th}$ interval. My attempt: I know the error is bounded by $\frac{max_{[0,1]} f''(x)}{24n^2}$. Also by Lagrange theorem, we know that the remainder at point $x$ is $\frac{f^{(n+1)}(\xi)(x-x_\circ)^{n+1}}{(n+1)!}$. Now if I want to use the Lagrange formula I'll have to integrate over the whole $[0,1]$ interval. This is what confuses me, because if $n$ is even the integral will become zero and if it odd nonzero. Even if I can integrate, $\lim_{n \rightarrow \infty} n^2 \frac{f^{(n+1)}(\xi)(x-x_\circ)^{n+1}}{(n+1)!}$ is $\infty \times 0$ and I don't know how to deal with it. I appreciate any help.
Assume $f''$ is continuous. The error in a single Midpoint Rule interval of length $h$ is $$ \int_a^{a+h} f(x)\ dx - h f(a+h/2) = \frac{h^3}{24} f''(\xi)$$ for some $\xi \in [a,a+h]$. (Note the sign: for convex $f$ the Midpoint Rule underestimates the integral, so this error is positive.) The error for $n$ equal subintervals of $[0,1]$ is the sum of the errors in each: if $h = 1/n$ and $x_j = j h$ we get $$\int_0^1 f(x)\ dx - I_n = \sum_{j=0}^{n-1} \left(\int_{x_j}^{x_{j+1}} f(x)\ dx - \frac{1}{n} f(x_j + h/2)\right) = \frac{1}{24n^3} \sum_{j=0}^{n-1} f''(\xi_j) $$ where $\xi_j \in [x_j, x_{j+1}]$. Now notice that $\displaystyle \frac{1}{n} \sum_{j=0}^{n-1} f''(\xi_j)$ is a Riemann sum for $\int_0^1 f''(x)\ dx = f'(1) - f'(0)$, so ...
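A quick numeric illustration of the limit (a sketch I'm adding, not part of the original answer): with $f(x)=e^x$ the claimed limit is $(f'(1)-f'(0))/24=(e-1)/24\approx 0.0716$, and the scaled errors do approach it:

```python
import math

def midpoint_rule(f, n):
    # composite Midpoint Rule on [0, 1] with n equal subintervals
    h = 1.0 / n
    return h * sum(f((j + 0.5) * h) for j in range(n))

f, exact = math.exp, math.e - 1          # integral of e^x on [0,1] is e - 1
for n in (10, 100, 1000):
    print(n, n**2 * (exact - midpoint_rule(f, n)))   # -> (e-1)/24 ~ 0.0716
```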
Applications of the Isomorphism theorems In my study of groups, rings, modules etc, I've seen the three isomorphism theorems stated and proved many times. I use the first one ( $G/\ker \phi \cong \operatorname{im} \phi$ ) very often, but I can't recall having ever used the other two. Can anyone give some examples where they are used in a crucial way in some proof? For clarity, let us say that the 2nd one is : $(M/L)/(N/L) \cong M/N$ under the appropriate conditions, and the 3rd one is $(M+N)/N \cong M/(M\cap N).$
E.g., in studying solvable groups: the standard proof that a group $G$ is solvable if and only if a normal subgroup $N$ and the quotient $G/N$ are both solvable uses precisely these two theorems, $(M+N)/N\cong M/(M\cap N)$ when intersecting a subnormal series with $N$, and $(M/L)/(N/L)\cong M/N$ when pushing a series down to the quotient.
Cantor set + Cantor set =$[0,2]$ I am trying to prove that $C+C =[0,2]$ ,where $C$ is the Cantor set. My attempt: If $x\in C,$ then $x= \sum_{n=1}^{\infty}\frac{a_n}{3^n}$ where $a_n=0,2$ so any element of $C+C $ is of the form $$\sum_{n=1}^{\infty}\frac{a_n}{3^n} +\sum_{n=1}^{\infty}\frac{b_n}{3^n}= \sum_{n=1}^{\infty}\frac{a_n+b_n}{3^n}=2\sum_{n=1}^{\infty}\frac{(a_n+b_n)/2}{3^n}=2\sum_{n=1}^{\infty}\frac{x_n}{3^n}$$ where $x_n=0,1,2, \ \forall n\geq 1$. Is this correct?
The short answer to this question is "your argument is correct". To justify the answer consider some particular $n_{0}\in \mathbb{N}$. Since $$\sum_{n=1}^{\infty}\dfrac{a_{n}}{3^{n}},\sum_{n=1}^{\infty}\dfrac{b_{n}}{3^{n}}\in C$$ we have that $ x_{n_{0}}=\dfrac{a_{n_{0}}+b_{n_{0}}}{2}\in \{0,1,2\} $. ($ x_{n_{0}}=0 $ if $ a_{n_{0}}=b_{n_{0}}=0 $. $ x_{n_{0}}=2 $ if $ a_{n_{0}}=b_{n_{0}}=2 $. Otherwise $ x_{n_{0}}=1 $. ) Then clearly $$\sum_{n=1}^{\infty}\dfrac{x_{n}}{3^{n}}\in [0,1].$$ So $$2\sum_{n=1}^{\infty}\dfrac{x_{n}}{3^{n}}=\sum_{n=1}^{\infty}\dfrac{a_{n}}{3^{n}}+\sum_{n=1}^{\infty}\dfrac{b_{n}}{3^{n}}\in [0,2].$$ Hence $ C+C\subseteq [0,2] $. To complete the proof you must show the other direction as well. To show $ [0,2]\subseteq C+C $ it is enough to show $ [0,1]\subseteq \dfrac{1}{2}C+\dfrac{1}{2}C $. Observe that $ b\in \dfrac{1}{2}C $ if and only if there exists $ t\in C $ such that $ b=\dfrac{1}{2}t $. Hence $$ b\in \dfrac{1}{2}C\text{ if and only if }b=\sum\limits_{n = 1}^\infty \frac{b_n}{3^n}\text{ ; where }b_{n}=0\text{ or }1. $$ Now let $ x\in [0,1] $. Then $$ x= \sum\limits_{n = 1}^\infty \frac{x_n}{3^n}\text{ ; where }x_{n}=0,1\text{ or }2. $$ Here we need to find $ y,z\in \dfrac{1}{2}C $ such that $ x=y+z $. Let's define $ y=\sum\limits_{n = 1}^\infty \frac{y_n}{3^n} $ and $ z=\sum\limits_{n = 1}^\infty \frac{z_n}{3^n} $ as follows. For each $n\in \mathbb{N}$, $ y_{n}=0 $ if $ x_{n}=0 $ and $ y_{n}=1 $ if $ x_{n}=1,2 $. For each $n\in \mathbb{N}$, $ z_{n}=0 $ if $ x_{n}=0,1 $ and $ z_{n}=1 $ if $ x_{n}=2 $. Thus $y,z\in \dfrac{1}{2}C $ and for each $n\in \mathbb{N}$, $ y_{n}+z_{n}=0 $ if $ x_{n}=0 $ , $ y_{n}+z_{n}=1 $ if $ x_{n}=1 $ and $ y_{n}+z_{n}=2 $ if $ x_{n}=2 $. Therefore $x=y+z\in \dfrac{1}{2}C+\dfrac{1}{2}C$ and hence $[0,1] \subseteq \dfrac{1}{2}C+\dfrac{1}{2}C$. $\square $
Riesz representation theorem on dual space The Riesz representation theorem on Hilbert spaces is well known, It asserts we can represent a bounded linear function on a Hilbert space $H$ with an inner product on $H$ and vice-versa. My question: Given an inner product in $H^*$, say $(a,b)_{H^*}$, can I write it as $$(a,b)_{H^*} = \langle a, f \rangle_{H^*, H}$$ where $f \in H$? This is the RRT applied to the Hilbert space $H^*$ with its dual $H$. I think it works but I never saw it so I should get it clarified.
Well, for an arbitrary inner product on $X:=H^*$ it is not going to work, since then $X^*$ need not be isomorphic to $H$. On the other hand, the Riesz representation gives a linear isomorphism $H\to H^*$, and if the inner product is defined via this isomorphism, i.e. if $$(\langle x,-\rangle,\langle y,-\rangle)_{H^*}=\langle x,y\rangle_{H} $$ for all $x,y\in H$, then your claim is valid: Let $a,b\in H^*$, then $a=\langle x,-\rangle$ and $b=\langle y,-\rangle$ for some $x,y\in H$ by Riesz representation, and $a(y)=\langle x,y\rangle$ so we have $$(a,b)_{H^*} = (a,\langle y,-\rangle)=a(y)=\langle a,y\rangle_{H^*,H}\ .$$
Expectation of $X$ given a Cumulative Function I'm finding it hard to compute the expectation of $X$: the CDF (cumulative distribution function) is $1 - x^{-a}$, $1 \leqslant x < \infty$. What is $E[X]$?
First you have to establish the proper domain for the random variable $X$. That can be done by using $F(x_{\min}) = 0$. You random variable will be supported on $[x_{\min}, \infty)$. * *Find the probability density function, $f_X(x)$ by differentiating the cumulative function, $f_X(x) = F_X^\prime(x)$. *Apply the law of the unconscious statistician: $$ \mathbb{E}(X) = \int_{x_\min}^\infty x f_X(x) \mathrm{d}x $$ *For what values of parameter $a$ does the integral exist? *Once you found the answer, you have found the mean of the Pareto distribution.
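To make the steps concrete, here is a small Monte Carlo sketch (my addition; the value $a=3$ is just an example). It samples from the given CDF by inverse transform, $x=(1-u)^{-1/a}$, and compares the sample mean with the Pareto mean $a/(a-1)$, which exists only for $a>1$:

```python
import random

a = 3.0                                   # example shape parameter, a > 1
n = 10**6
# inverse-transform sampling: solve u = F(x) = 1 - x^(-a) for x
sample = [(1.0 - random.random()) ** (-1.0 / a) for _ in range(n)]
print(sum(sample) / n)                    # ~ a/(a-1) = 1.5
```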
$T:P_3 → P_3$ be the linear transformation such that... Let $T:P_3 \to P_3$ be the linear transformation such that $T(2 x^2)= -2 x^2 - 4 x$, $T(-0.5 x - 5)= 2 x^2 + 4 x + 3$, and $T(2 x^2 - 1) = 4 x - 4.$ Find $T(1)$, $T(x)$, $T(x^2)$, and $T(a x^2 + b x + c)$, where $a$, $b$, and $c$ are arbitrary real numbers.
Hint: $T$ is a linear transformation, so $T(a+bx+cx^2)=aT(1)+bT(x) +cT(x^2)$. $$T(2 x^2)= -2 x^2 - 4 x \to T( x^2)= - x^2 - 2 x$$ $$T(2 x^2 - 1) = 4 x - 4 \to 2T( x^2) - T(1) = 4 x - 4 \to T(1)=2T(x^2)-4x+4=-2 x^2 - 8x +4$$ $$T(-0.5 x - 5)= 2 x^2 + 4 x + 3\to -0.5T( x) - 5T(1)= 2 x^2 + 4 x + 3\to T(x)=-2\left(2x^2+4x+3+5T(1)\right)=16x^2+72x-46$$
Truth set of $-|x| \lt 2$? An exercise in my Algebra I book (Pearson and Allen, 1970, p. 261) asks for the graph of the truth set for $-\left|x\right| \lt 2; x \in \mathbb{R}$. I've re-stated the inequality in the equivalent form of $\left|x\right| \gt -2$. I know that the truth set of $\left|x\right| = -2$ is $\emptyset$, but I'm not certain how to handle the inequality in conjunction with the absolute value. I suspect the truth set is $\{x \mid x \ge 0\}$, but I am not certain whether this is correct, or how to prove it using the algebraic concepts I've learned thus far. (I suspect this is a flaw with the book as this is not the first time it assumes knowledge that hasn't yet been presented.) Is the truth set I arrived at correct? Is there a simple proof of the solution using Algebra I concepts (i.e. the field axioms, basic order properties, etc.)? Bibliography Pearson, H. R and Allen, F. B., 1970. Modern Algebra - A Logical Approach (Book I). Boston: Ginn and Company.
$$-|x|<2\stackrel{\text{multiplication by}\,\,(-1)}\Longleftrightarrow |x|>-2$$ So: for what values of (real) $\,x\,$ it is true that $\,|x|>-2\,$ ? Hint: this is a rather huge subset of the reals...
Can any mathematical relation be called an 'operator'? Mathematics authors agree that $+,-,/,\times$ are basic operators. There are also logical operators like $\text{or, and, xor}$ and the unary negation operator $\neg$. Where there seems to be a disagreement, however, is whether certain symbols used in the composition of propositions qualify as operators. By this definition: an operation is an action or procedure which produces a new value from one or more input values, called "operands" symbols that denote a relation should also be operators. For instance, $a<b$ takes integral operands and returns a value in the Boolean domain. Even something like $x \in S$ should be an operator that takes an element and a set, and also returns a truth value. The same could be said for the equality operator $=$. Computer scientists call these 'relational operators', but mathematicians rarely do this. Is there a reason for this discrimination?
A serious disadvantage of treating logical connectives as operators returning a value (that is, as functions) is that in non-classical logics their "range" may fail to correspond to any two-element set $\{\top,\bot\}$, and according to the Wikipedia page Truth value, may even fail to correspond to any set whatsoever: "[n]ot all logical systems are truth-valuational in the sense that logical connectives may be interpreted as truth functions. For example, intuitionistic logic lacks a complete set of truth values...." (Note that in particular, the naive attempt to consider logical connectives as mapping into a three-element set $\{\top,\bot,\text{some third value}\}$ does not work.)
Prove that any finite set of words is regular. How many states is sufficient for a single word $w_1...w_m$? DFA Prove that any finite set of words is regular. How many states is sufficient for a single word $w_1...w_m$? For part 2, wouldn't it require M states if the word length is M?
A language over the alphabet $\Sigma$ is a regular language if it is built up by the following clauses: * *$\emptyset$, $\{\epsilon\}$, and $\{a\}$ for $a\in\Sigma$ are regular. *If $L_1$ and $L_2$ are regular languages, then $L_1\cup L_2$, $L_1L_2$, and $L_1^*$ are also regular. *No language is regular unless it is obtained from those two clauses. A finite set of words over some alphabet is a finite union of finite concatenations of singleton languages, hence clearly regular. As for the second part, you need $m+1$ states (hint: there is a tiny element in the first clause that should give you the reason for this); a small construction sketch follows below.
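Here is the construction sketch promised above (my addition, not from the original answer): a DFA for a single word $w_1\dots w_m$ with $m+1$ live states, where state $i$ means "the first $i$ letters of $w$ have been read"; any mismatch falls into an implicit dead state.

```python
def word_dfa(w):
    # states 0..len(w); state i = "have read the prefix w[:i]"
    delta = {(i, c): i + 1 for i, c in enumerate(w)}
    return list(range(len(w) + 1)), delta, 0, {len(w)}

def accepts(dfa, s):
    states, delta, q, accepting = dfa
    for c in s:
        if (q, c) not in delta:
            return False          # implicit dead (trap) state
        q = delta[(q, c)]
    return q in accepting

dfa = word_dfa("abba")
print(len(dfa[0]))                                                      # 5 = m + 1 states
print(accepts(dfa, "abba"), accepts(dfa, "abb"), accepts(dfa, "abbb"))  # True False False
```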
How is the set of all programs countable? I'm having a hard time seeing how the number of programs is not uncountable, since for every real number, you can create a program that's prints out that number. Doesn't that immediately establish uncountably many programs?
All the answers that fit the question "the number of programs is countable" use the discrete, finite definition of a program, using either finite memory, a finite (countable) instruction set, etc. However, in the old analog days where voltage was considered as output, it was a trivial task to construct a circuit that produced all the possible voltages between 0 and 1. Now of course some physics-savvy people would point out that voltage is discrete, therefore you don't really end up producing all the real numbers as voltage outputs between 0 and 1. But that is a physical constraint. So yes, a classical program with all of its finite/countable restrictions on memory, instructions, etc. can be shown to end up as a point in a countable set. But an analog machine, like the ones constructed with pulleys and ropes by the Mayans, could indeed produce all the real numbers between 0 and 1; the rest of the real line could have been achieved by a multiplication factor (again some type of pulley-and-rope computation). So the statement that the set of all programs is countable depends on the computation model in which it is set; otherwise it is neither true nor false.
Examples of categories where morphisms are not functions Can someone give examples of categories where objects are some sort of structure based on sets and morphisms are not functions?
Let $G$ be a directed graph. Then we can think of $G$ as a category whose objects are the vertices in $G$. Given vertices $a, b \in G$ the morphisms from $a$ to $b$ are the set of paths in the graph $G$ from $a$ to $b$ with composition being concatenation of paths. Note that we allow "trivial" paths that start at a given vertex $a$, traverse no edges, and end at the starting vertex $a$. We have to do this for the category to have identities. Also we can create categories whose morphisms are not exactly maps but instead equivalence classes of maps. For example given an appropriately nice ring $R$ we can create the stable module category whose objects are $R$-modules. Given two $R$-modules $A$ and $B$ the set of morphisms from $A$ to $B$ is the abelian group of $R$-module homomorphisms from $A$ to $B$ modulo the subgroup of homomorphisms that factor through a projective $R$-module.
Finding the standard deviation of a set of angles. My question is: given a set of angles in the form of degrees, minutes and seconds, once you have found the mean, how do you find the standard deviation? I know how to find the average or mean of a set (see below) but I'm not sure how to find the standard deviation. For example say we have the following set of angles: $$39^\circ 15'01''$$ $$39^\circ 14'15''$$ $$39^\circ 14'32''$$ The average is $39^\circ 14'36''$. Now, how do I find the standard deviation? I looked at the wiki page, but can't make sense of it using degrees minutes seconds instead.
Your first number is $39 + \dfrac{15}{60} + \dfrac{1}{60^2} = 39.250277777\ldots$. Deal similarly with the others. And remember to do as little rounding as possible at least until the last step. Rounding should always wait until the last step except when you know how much effect it will have on the bottom line. One way to avoid it is to work with exact fractions. (Except that the seconds are, of course, probably already rounded.) Once all three angles are in decimal degrees, compute the standard deviation of those decimal values as usual; a short numeric sketch follows.
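The numeric sketch mentioned above (my addition): convert each angle to decimal degrees, take the sample standard deviation, and convert back to seconds of arc if desired.

```python
import statistics

def dms_to_degrees(d, m, s):
    # degrees/minutes/seconds -> decimal degrees
    return d + m / 60 + s / 3600

angles = [dms_to_degrees(39, 15, 1),
          dms_to_degrees(39, 14, 15),
          dms_to_degrees(39, 14, 32)]
print(statistics.mean(angles))           # 39.2433... = 39 deg 14' 36''
sd = statistics.stdev(angles)            # sample standard deviation, in decimal degrees
print(sd, sd * 3600)                     # ~ 0.00646 degrees ~ 23.3 seconds of arc
```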
Cyclic groups and generators For each of the groups $\mathbb Z_4$,$\mathbb Z_4^*$ indicate which are cyclic. For those that are cyclic list all the generators. Solution $\mathbb Z_4=${0,1,2,3} $\mathbb Z_4$ is cyclic and all the generators of $\mathbb Z_4=${1,3} Now if we consider $\mathbb Z_4^*$ $\mathbb Z_4^*$={1,3} How do i know that $\mathbb Z_4^*$ is cyclic? In our lecture notes it says that the $\mathbb Z_4^*$ is cyclic and the generators of $\mathbb Z_4^*$=3 Can anyone help me on the steps to follow in order to prove the above?
Consulting here, you can easily see that modulo $4$ there are two relatively prime congruence classes, $1$ and $3$, so $(\mathbb{Z}/4\mathbb{Z})^\times \cong \mathrm{C}_2$, the cyclic group with two elements. Any group of order $2$ is cyclic, generated by its non-identity element; here that element is $3$, which is why $3$ generates $\mathbb Z_4^*$.
Show $\sum_{n=1}^{\infty}\frac{\sinh\pi}{\cosh(2n\pi)-\cosh\pi}=\frac1{\text{e}^{\pi}-1}$ and another Show that : $$\sum_{n=1}^{\infty}\frac{\cosh(2nx)}{\cosh(4nx)-\cosh(2x)}=\frac1{4\sinh^2(x)}$$ $$\sum_{n=1}^{\infty}\frac{\sinh\pi}{\cosh(2n\pi)-\cosh\pi}=\frac1{\text{e}^{\pi}-1}$$
OK, I have figured out the second sum using a completely different method. I begin with the following result (+): $$\sum_{k=1}^{\infty} e^{-k t} \sin{k x} = \frac{1}{2} \frac{\sin{x}}{\cosh{t}-\cos{x}}$$ I will prove this result below; it is a simple geometrical sum. In any case, let $x=i \pi$ and $t=2 n \pi$; then $$\begin{align}\frac{\sinh{\pi}}{\cosh{2 n \pi}-\cosh{\pi}} &= 2 \sum_{k=1}^{\infty} e^{-2 n \pi k} \sinh{k \pi}\end{align}$$ Now we can sum: $$\begin{align}\sum_{n=1}^{\infty} \frac{\sinh{\pi}}{\cosh{2 n \pi}-\cosh{\pi}} &= 2 \sum_{n=1}^{\infty} \sum_{k=1}^{\infty} e^{-2 n \pi k} \sinh{k \pi}\\ &= 2 \sum_{k=1}^{\infty} \sinh{k \pi} \sum_{n=1}^{\infty}e^{-2 n \pi k}\\ &= 2 \sum_{k=1}^{\infty} \frac{\sinh{k \pi}}{e^{2 \pi k}-1} \\ &= \sum_{k=1}^{\infty} \frac{e^{\pi k} - e^{-\pi k}}{e^{2 \pi k}-1} \\ &= \sum_{k=1}^{\infty} e^{-\pi k} \\ \therefore \sum_{n=1}^{\infty} \frac{\sinh{\pi}}{\cosh{2 n \pi}-\cosh{\pi}} &= \frac{1}{e^{\pi}-1} \end{align}$$ To prove (+), write as the imaginary part of a geometrical sum. $$\begin{align} \sum_{k=1}^{\infty} e^{-k t} \sin{k x} &= \Im{\sum_{k=1}^{\infty} e^{-k (t-i x)}} \\ &= \Im{\left [ \frac{1}{1-e^{-(t-i x)}} \right ]} \\ &= \Im{\left [ \frac{1}{1-e^{-t} \cos{x} - i e^{-t} \sin{x}} \right ]}\\ &= \frac{e^{-t} \sin{x}}{(1-e^{-t} \cos{x})^2 + e^{-2 t} \sin^2{x}}\\ &= \frac{\sin{x}}{e^{t}-2 \cos{x} + e^{-t}} \\ \therefore \sum_{k=1}^{\infty} e^{-k t} \sin{k x} &= \frac{1}{2} \frac{\sin{x}}{\cosh{t}-\cos{x}}\end{align}$$ QED
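These sums converge very fast, so both identities are easy to sanity-check numerically. Here is an mpmath sketch (my addition, not part of the proof) verifying the second sum and the first identity at the sample point $x=1$:

```python
from mpmath import mp, nsum, cosh, sinh, exp, pi, mpf, inf

mp.dps = 30
lhs2 = nsum(lambda n: sinh(pi) / (cosh(2 * n * pi) - cosh(pi)), [1, inf])
print(lhs2, 1 / (exp(pi) - 1))            # agree to ~30 digits

x = mpf(1)                                # first identity, tested at x = 1
lhs1 = nsum(lambda n: cosh(2 * n * x) / (cosh(4 * n * x) - cosh(2 * x)), [1, inf])
print(lhs1, 1 / (4 * sinh(x) ** 2))       # agree to ~30 digits
```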
Continuity of the lebesgue integral How does one show that the function, $g(t) = \int \chi_{A+t} f $ is continuous, given that $A$ is measurable, $f$ is integrable and $A+t = \{x+t: x \in A\}$. Any help would be appreciated, thanks
Notice that $$ |g(t+h)-g(t)| \le \int_{(A+t+h)\Delta (A+t)} |f| $$ and $(A+t+h)\Delta(A+t)=\big((A+h)\Delta A\big)+t$, which has the same Lebesgue measure as $(A+h)\Delta A$, so it is enough to prove that $$ |(A+h)\Delta A| \to 0 \qquad \text{as }h \to 0 $$ where $\Delta$ is the symmetric difference, since $$ \int_{A_k} |f| \to 0 $$ whenever $|A_k|\to 0$, by absolute continuity of the integral.
Sum of dihedral angles in Tetrahedron I'd like to ask if someone can help me out with this problem. I have to determine the lower and upper bounds for the sum of the dihedral angles in an arbitrary tetrahedron (the largest and smallest sums I can get) and prove them. I'm OK with a hint for the proof, but I'd be grateful for the bounds themselves and the reason behind them. Thanks
Lemma: The sum of the 4 internal solid angles of a tetrahedron is bounded above by $2\pi$. Start with a non-degenerate tetrahedron $\langle p_1p_2p_3p_4 \rangle$. Let $p = p_i$ be one of its vertices and $\vec{n} \in S^2$ be any unit vector. Aside from a set of measure zero in choosing $\vec{n}$, the projections of $p_j, j = 1\ldots4$ onto a plane orthogonal to $\vec{n}$ are in general position (i.e. no 3 points are collinear). When the images of the vertices are in general position, a necessary condition for either $\vec{n}$ or $-\vec{n}$ to belong to the inner solid angle at $p$ is that $p$'s image lies in the interior of the triangle formed by the images of the other 3 vertices. So aside from a set of exceptions of measure zero, the unit vectors in the 4 inner solid angles are "disjoint". When one views the tetrahedron $\langle p_1p_2p_3p_4 \rangle$ as the convex hull of its vertices, the vertices are extremal points. This in turn implies that for any unit vector, $\vec{n}$ and $-\vec{n}$ cannot belong to the inner solid angle of $p$ at the same time. From this we can conclude (up to a set of exceptions of measure zero) that at most half of the unit vectors belong to the 4 inner solid angles of a tetrahedron. The almost disjointness of the inner solid angles then forces their sum to be at most $2\pi$. Back to original problem Let $\Omega_p$ be the internal solid angle and $\phi_{p,i}, i = 1\ldots 3$ be the three dihedral angles at vertex $p$. The wiki page mentioned by @joriki tells us: $$\Omega_p = \sum_{i=1}^3 \phi_{p,i} - \pi$$ Notice each $\Omega_p \ge 0$ and we have shown $\sum_{p}\Omega_{p} \le 2\pi$. We get: $$\begin{align} & 0 \le \sum_p \sum_{i=1}^3 \phi_{p,i} - 4\pi \le 2\pi\\ \implies & 2\pi \le \frac12 \sum_p \sum_{i=1}^3 \phi_{p,i} \le 3\pi \end{align}$$ When we sum the dihedral angles over $p$ and $i$, every dihedral angle will be counted twice. This means the expression $\frac12 \sum_p \sum_{i=1}^3 \phi_{p,i}$ above is nothing but the sum of the 6 dihedral angles of a tetrahedron.
application of Lowenheim-Skolem theorem So if a minimal model of ZF exists, it is said that it is a countable set, by Lowenheim-Skolem. So, is Lowenheim-Skolem saying that for any countable theory which has an infinite model there exists a standard model (respecting the usual element relation) that is countably infinite?
No. The existence of standard models is strictly stronger. It is consistent that there are no standard models. To see this, note that the standard models are well-founded, in the sense that there is no infinite decreasing chain of standard models such that $M_{n+1}\in M_n$, simply because $\in$ itself is well-founded and standard models use the real $\in$ for their membership relation. So there is a minimal standard model. But this model has the standard $\omega$ for its integers, so it cannot possibly satisfy $\lnot\text{Con}(\mathsf{ZFC})$, so it must have a model of $\sf ZFC$ inside, but this model cannot be standard. See also: Transitive ${\sf ZFC}$ model on Cantor's Attic.
Baire one extension of continuous functions I struggle with the following comment in Sierpinski's Hypothèse du continu, p. 49. For every continuous function $f(x):X \rightarrow \mathbb{R}$, where $X \subseteq \mathbb{R}$, there exist a Baire one function $g(x): \mathbb{R} \rightarrow \mathbb{R}$ such that $g(x) = f(x)$ for all $x \in X$. What if both $X$ and $\mathbb{R}\backslash X$ are dense and uncountable ?
I think it can be proved by the following: For arbitrary $X\subset\mathbb{R}$, continuous $f:X\longrightarrow\mathbb{R}$ can be extended to a function $F:\mathbb{R}\longrightarrow\mathbb{R}$ such that $F^{-1}(A)$ is a $G_\delta$ set in $\mathbb{R}$ for every closed $A\subset\mathbb{R}$ (a Lebesgue-one function such that $F|_X=f$). Then the Lebesgue-Hausdorff theorem implies that the Lebesgue-one function $F$ is also Baire-one.
How to get as much pie as possible! Alice and Bob are sharing a triangular pie. Alice will cut the pie with one straight cut and pick the bigger piece, but Bob first specifies one point through which the cut must pass. What point should Bob specify to get as much pie as possible? And in that case how much pie can Alice get? The approach I took was to put an arbitrary point in a triangle, and draw lines from it to each corner of the triangle. Now the cut will only go through two of the three triangles we now have, so the aim is to get as close to 50% of the remaining two triangles as possible?
You can consider the triangle to be equilateral, as you can make any triangle equilateral with a linear transformation. That transformation will preserve the ratio of areas. The centroid is the obvious point to pick. If Alice cuts parallel to a side, she leaves Bob $\frac 49$, because the centroid is $\frac 23$ of the way along the altitude, so the piece cut off is a similar triangle with ratio $\frac 23$ and hence area $\frac 49$. Alice then gets $\frac 59$ of the pie, and one can check that no other line through the centroid leaves Bob with less than $\frac 49$.
Let $k$ and $n$ be any integers such that $k \ge 3$ and $k$ divides $n$. Prove that $D_n$ contains exactly one cyclic subgroup of order $k$ a) Find a cyclic subgroup $H$ of order $10$ in $D_{30}$. List all generators of $H$. b) Let $k$ and $n$ be any integers such that $k \ge 3$ and $k$ divides $n$. Prove that $D_n$ contains exactly one cyclic subgroup of order $k$. My attempt at a), the elements of $D_{30}$ of order 10 are $r^{3n},sr^{3n}, 0\le n\le 4$ so any cyclic groups of the corresponding elements would work. The generators of the elements would be of the form $\langle a^j \rangle$ where $\gcd(10, j) = 1$ so $j =1,3,7,9,11,13$. Any ideas as to how I should attempt b)?
Let $$D_{2n} = \langle r, s \mid r^n = 1, s^2 = 1, s r = r^{-1}s \rangle$$ be the dihedral group of order $2n$ generated by rotations ($r$) and reflections ($s$) of the regular $n$-gon. From the presentation it is clear that every element can be put into the form $s^i r^j$ where $i$ is $0$ or $1$. So the cyclic subgroups of $D_{2n}$ are the cyclic subgroups generated by elements of the form $r^j$ and $s r^j$. * *Since $r$ generates $C_n$ we uniquely have $C_d \le C_n \le D_{2n}$ for every $d|n$ by the lemma. *Since $s r^i s r^i = 1$ the second form only generates $C_2$ subgroups. This shows that there may be many different $C_2$ subgroups of $D_{2n}$, but the $C_d$ subgroups for $d \ge 3$ are all unique. Lemma For $d|n$, there is a unique subgroup of $C_n$ isomorphic to $C_d$. proof: Let $m=ab$; every cyclic group $C_m$ has exactly $a$ elements $g$ such that $g^a=1$ (in fact these elements are $b$, $2b$, ...). So $C_n$ has exactly $d$ elements such that $g^d=1$, and if $C'$ is a subgroup of $C_n$ isomorphic to $C_d$ then it too has exactly $d$ elements like this: they must be exactly the same elements then!
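For part a) one can also confirm the count by brute force. Below is a short Python sketch (my addition) that lists all cyclic subgroups of the dihedral group with $n=30$ rotations and checks that exactly one has order $10$:

```python
def dihedral(n):
    # elements as permutations of Z_n: rotations x -> x+i, reflections x -> i-x
    rot = [tuple((x + i) % n for x in range(n)) for i in range(n)]
    ref = [tuple((i - x) % n for x in range(n)) for i in range(n)]
    return rot + ref

def generated(g):
    # the cyclic subgroup <g>, computed by iterating composition with g
    e = tuple(range(len(g)))
    H, x = {e}, g
    while x != e:
        H.add(x)
        x = tuple(g[x[i]] for i in range(len(g)))   # x := g composed with x
    return frozenset(H)

n, k = 30, 10
subgroups = {generated(g) for g in dihedral(n)}
print(sum(len(H) == k for H in subgroups))           # 1: unique cyclic subgroup of order 10
print(sorted({len(H) for H in subgroups}))           # orders of cyclic subgroups that occur
```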
Real Analysis Question! Consider the equation $\sin(x^2 + y) − 2x= 0$ for $x ∈ \mathbb{R}$ with $y ∈ \mathbb{R}$ as a parameter. Prove the existence of neighborhoods $V$ and $U$ of $0$ in $\mathbb{R}$ such that for every $y ∈ V$ there exists a unique solution $x = ψ(y) ∈ U$. Prove that $ψ$ is a $C^\infty$ mapping on $V$, and that $ψ'(0) = \frac{1}{2}.$ I know that the solution has to do with the inverse and implicit function theorems but I just can't figure it out! Any help would be much appreciated!
Implicit differentiation of $$ \sin(x^2+y)-2x=0\tag{1} $$ yields $$ y'=2\sec(x^2+y)-2x\tag{2} $$ $(1)$ implies $$ |x|\le\frac12\tag{3} $$ $(2)$ and $(3)$ imply $$ \begin{align} |y'| &\ge2|\sec(x^2+y)|-2|x|\\ &\ge2-1\\ &=1\tag{4} \end{align} $$ By the Inverse Function Theorem, $(4)$ says that for all $y$, $$ |\psi'(y)|\le1\tag{5} $$ and $\psi\in C^\infty$. Furthermore, $x=0$ is the only $x\in\left[-\frac12,\frac12\right]$ so that $\sin(x^2)=2x$. $(2)$ says that $y'=2$ at $(0,0)$. Therefore, $$ \psi'(0)=\frac12\tag{6} $$ Since $\psi'$ is continuous, there is a neighborhood of $y=0$ so that $\psi'(y)>0$. Thus, $\psi$ is unique in that neighborhood of $y=0$.
Probability of choosing two equal bits from three random bits Given three random bits, pick two (without replacement). What is the probability that the two you pick are equal? I would like to know if the following analysis is correct and/or if there is a better way to think about it. $$\Pr[\text{choose two equal bits}] = \Pr[\text{2nd bit} = 0 \mid \text{1st bit} = 0] + \Pr[\text{2nd bit} = 1 \mid \text{1st bit} = 1]$$ Given three random bits, once you remove the first bit the other two bits can be: 00, 01, 11, each of which occurring with probability $\frac{1}{2}\cdot\frac{1}{2}=\frac{1}{4}$. Thus, $$\Pr[\text{2nd bit} = 0] = 1\cdot\frac{1}{4} + \frac{1}{2}\cdot\frac{1}{4} + 0\cdot\frac{1}{4} = \frac{3}{8}$$ And $\Pr[\text{2nd bit} = 1] = \Pr[\text{2nd bit} = 0]$ by the same analysis. Therefore, $$\Pr[\text{2nd bit}=0 \mid \text{1st bit} = 0] = \frac{\Pr[\text{1st and 2nd bits are 0}]}{\Pr[\text{1st bit}=0]} = \frac{1/2\cdot3/8}{1/2} = \frac{3}{8}$$ and by the same analysis, $\Pr[\text{2nd bit} = 1 \mid \text{1st bit} = 1] = \frac{3}{8}$. Thus, $$\Pr[\text{choose two equal bits}] = 2\cdot\frac{3}{8} = \frac{3}{4}$$
Whatever the first bit picked, the probability the second bit matches it is $1/2$. Remark: We are assuming what was not explicitly stated, that $0$'s and $1$'s are equally likely. One can very well have "random" bits where the probability of $0$ is not the same as the probability of $1$.
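A quick simulation (my addition) supports $1/2$ rather than the $3/4$ from the question's analysis; the two picked bits are independent fair bits:

```python
import random

trials = 10**6
matches = 0
for _ in range(trials):
    bits = [random.randrange(2) for _ in range(3)]   # three fair random bits
    i, j = random.sample(range(3), 2)                # pick two positions, no replacement
    matches += bits[i] == bits[j]
print(matches / trials)                              # ~ 0.5
```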
How to solve the following differential equation We have the following DE: $$ \dfrac{dy}{dx} = \dfrac{x^2 + 3y^2}{2xy}$$ I don't know how to solve this. I know we need to write it as $y/x$ but I don't know how to in this case.
$y=vx$ so $y'=xv'+v$. Your equation is $xv'+v={{1 \over{2v}}+{{3v} \over {2}}}$. Now clean up to get $xv'={{v^2+1}\over{2v}}$. Now separate ${2v dv \over {v^2+1}} = {dx \over x}$. Edit ${{x^2+3y^2} \over {2xy}} = {{{x^2}\over{2xy}}+{{3y^2}\over{2xy}}}={ x \over {2y}}+{{3y}\over{2x}}$
Fractional Derivative Implications/Meaning? I've recently been studying the concept of taking fractional derivatives and antiderivatives, and this question has come to mind: If a first derivative, in Cartesian coordinates, is representative of the function's slope, and the second derivative is representative of its concavity, is there any qualitative relationship between a 1/2 derivative and its original function? Or a 3/2 derivative with its respective function?
There are several approaches to fractional derivatives. I use the Grunwald-Letnikov derivative and its generalizations to the complex plane and the two-sided derivatives. However, most papers use what I call "walking dead" derivatives: the Riemann-Liouville and Caputo. If you want to start, don't lose time with them. There are some attempts to give interpretations to the FD: Prof. Tenreiro Machado and also Prof. Podlubny. The best interpretation, in my opinion, is the system interpretation: there is a linear system called the differintegrator with transfer function $H(s)=s^a$, for $\operatorname{Re}(s)>0$ (forward, causal case) or $\operatorname{Re}(s)<0$ (backward, anti-causal case). The impulse response of the causal system is $\frac{t^{-a-1}}{\Gamma(-a)}u(t)$, where $u(t)$ is the Heaviside function. Send me a mail and I'll send you some papers [email protected]
3D - derivative of a point's function, is it the tangent? If I have (for instance) this formula which associates a $(x,y,z)$ point $p$ to each $u,v$ couple (on a 2D surface in 3D): $p=f(u,v)=(u^2+v^2+4,2uv,u^2−v^2) $ and I calculate the $\frac{\partial p}{\partial u}$, what do I get? The answer should be "a vector tangent to the point $p$" but I can't understand why. Shouldn't I obtain another point?
Take a fixed location where $(u,v) = (u_0,v_0)$. Think about the mapping $u \mapsto f(u,v_0)$. This is a curve lying on your surface, which is formed by allowing $u$ to vary while $v$ is held fixed. In fact, in my business, we would say that this is an "isoparametric curve" on the surface. By definition, $\frac{\partial f}{\partial u}(u_0)$ is the first derivative vector of this curve at $u= u_0$. In other words, it's the "tangent" vector of this curve.
Fubini's theorem for Riemann integrals? The integrals in Fubini's theorem are all Lebesgue integrals. I was wondering if there is a theorem with conclusions similar to Fubini's but only involving Riemann integrals? Thanks and regards!
To see the difficulties of Fubini with Riemann integrals, study two functions $f$ and $g$ on the rectangle $[0,1]\times[0,1]$ defined by: (1) $\forall$integer $i\ge0$, $\forall$odd integer $j\in[0,2^i]$, $\forall$integer $k\ge0$, $\forall$odd integer $\ell\in[0,2^k]$, define $f(j/2^i,\ell/2^k)=\delta_{ik}$ (here, $\delta_{ik}$ is the Kronecker delta, equal to one if $i=k$ and $0$ if not) and $g(j/2^i,\ell/2^k)=1/2^i$; and (2) $\forall x,y\in[0,1]$, if either $x$ or $y$ is not a dyadic rational, define $f(x,y)=0$ and $g(x,y)=0$. Then both iterated Riemann integrals of $f$ are zero, i.e., $\int_0^1\int_0^1 f(x,y)\,dx\,dy=\int_0^1\int_0^1 f(x,y)\,dy\,dx=0$. However, the Riemann integral, over $[0,1]\times[0,1]$, of $f$ does not exist. Also, the Riemann integral, over $[0,1]\times[0,1]$, of $g$ is zero. However, $\forall$dyadic rational $x\in[0,1]$, the Riemann integral $\int_0^1 g(x,y)\,dy$ does not exist. Consequently, $\int_0^1\int_0^1 g(x,y)\,dy\,dx$ does not exist, in the Riemann sense.
Sub-lattices and lattices. I have read in a textbook that $ \mathcal{P}(X) $, the power-set of $ X $ under the relation ‘contained in’ is a lattice. They also said that $ S := \{ \varnothing,\{ 1,2 \},\{ 2,3 \},\{ 1,2,3 \} \} $ is a lattice but not a sub-lattice. Why is it so?
The point of confusion is that a lattice can be described in two different ways. One way is to say that it is a poset such that finite meets and joins exist. Another way is to say that it is a set upon which two binary operations (called meet and join) are given that satisfy a short list of axioms. The two definitions are equivalent in the sense that using the first definition's finite meets and joins gives us the two binary operations, and the structure imposed by the second definition allows one to recover a poset structure, and these processes are inverse to each other. So now, if $L$ is a lattice and $S\subseteq L$ then $S$ is automatically a poset, indeed a subposet of $L$. But, even if with that poset structure it is a lattice it does not mean that it is a sublattice of $L$. To be a sublattice it must be that for all $x,y\in S$, the join $x\vee y$ computed in $S$ is the same as that computed in $L$, and similarly for the meet $x\wedge y$. This much stronger condition does not have to hold. Indeed, as noted by Gerry in the comment, the meet $\{1,2\}\wedge \{2,3\}$ computed in $\mathcal P({1,2,3})$ is $\{2\}$, while computed in the given subset it is $\emptyset$. None the less, it can immediately be verified that the given subset is a lattice since under the inclusion poset, all finite meets and joins exist.
Good book recommendations on trigonometry I need to find a good book on trigonometry, I was using trigonometry demystified but I got sad when I read this line: Now that you know how the circular functions are defined, you might wonder how the values are calculated. The answer: with an electronic calculator! I know a book which seems to be really good: Loney's Plane Trigonometry, I'm just not sure if the book is up to date.
Not much has changed in basic trigonometry for a century, so Loney's book is not really out of date. Another book of the same genre as Loney's: Henry Sinclair Hall and Samuel Ratcliffe Knight, Macmillan and Company, 1893 (plane trigonometry, 404 pages).
Solve $x'(t)=(x(t))^2-t^2+1 $ How can we solve $$x'(t)=(x(t))^2-t^2+1 $$? I have tried to check whether it is exact, separable, homogeneous, or Bernoulli. It doesn't resemble any of them. Can anyone help? Thank you. The source of the question is a CEU entrance examination.
The non-linear ODE $$x'=P(t)+Q(t)x+R(t)x^2$$ is called Riccati's equation. If $x_1$ is a known particular solution of it, then we can obtain a family of solutions of the ODE of the form $x(t)=x_1+u$ where $u$ is a solution of $$u'=Ru^2+(Q+2x_1R)u$$ or, setting $w=u^{-1}$, of the linear ODE $$w'+(Q+2x_1R)w=-R.$$ Here, as @Ishan noted correctly, one particular solution of your ODE is $x_1=t$ and $$R=1,~~Q=0,~~P=1-t^2$$ and so we first solve $$w'+(0+2t\times1)w=-1$$ or $w'+2tw=-1$. Multiplying by the integrating factor $e^{t^2}$ gives $(e^{t^2}w)'=-e^{t^2}$, so the solution of the latter ODE is $$w=e^{-t^2}\left(C-\int_{t_0}^te^{k^2}dk\right)$$ and, since $x=t+u=t+w^{-1}$, so far we get $$x(t)=t+\frac{e^{t^2}}{C-\int_{t_0}^te^{k^2}dk}$$
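The closed form can be verified symbolically. Since $\int_{0}^{t}e^{k^2}dk=\frac{\sqrt{\pi}}{2}\operatorname{erfi}(t)$, a SymPy sketch (my addition, taking $t_0=0$) checks that the residual of the ODE vanishes:

```python
import sympy as sp

t, C = sp.symbols('t C')
# x(t) = t + e^{t^2} / (C - integral of e^{k^2} from 0 to t)
x = t + sp.exp(t**2) / (C - sp.sqrt(sp.pi) / 2 * sp.erfi(t))
residual = sp.simplify(x.diff(t) - (x**2 - t**2 + 1))
print(residual)   # 0
```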
DNA sequence in MATLAB I want to count how many times synonymous and non-synonymous mutations appear in a sequence of DNA, given the number of synonymous and non-synonymous mutations in each 3-letter codon. I.e. given that AAA has 7 synonymous and 1 non-synonymous mutation, and CCC has 6 and 3 respectively, then the sequence AAACCC would have 13 synonymous and 4 non-synonymous mutations. However, these sequences could have 10k+ letters with a total of 64 different 3-letter combinations... How could I set up an M-file, using for / else if statements, to count the mutations? Thanks
Assuming you have filtered out the data errors and each time you nicely have three letters, here is one approach: 1) Make your data look like this: AAA CCC ACA CAC ... 2) Count how many times each of the 64 options occurs. 3) Multiply each found count by the corresponding syn and non-syn mutation numbers and sum. That should be it! Note that steps 2 and 3 can easily be achieved with Excel as well. If you are not fluent in MATLAB it will probably even be quicker. A small sketch of the counting loop is below.
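The sketch mentioned above, written in Python for brevity (my addition; the same lookup-table logic ports directly to a MATLAB M-file, e.g. with a containers.Map). The table entries besides AAA and CCC are placeholders you would fill with the real counts for all 64 codons:

```python
# (synonymous, non-synonymous) mutation counts per codon -- fill in all 64
COUNTS = {
    "AAA": (7, 1),
    "CCC": (6, 3),
    # ... remaining 62 codons ...
}

def count_mutations(seq):
    syn = nonsyn = 0
    for i in range(0, len(seq) - len(seq) % 3, 3):   # walk the sequence codon by codon
        s, n = COUNTS.get(seq[i:i + 3], (0, 0))
        syn += s
        nonsyn += n
    return syn, nonsyn

print(count_mutations("AAACCC"))   # (13, 4), matching the example in the question
```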
Ideals in Dedekind domain If $I$ is a non-zero ideal in a Dedekind domain such that $I^m$ and $I^n$ are principal, equal to $(a)$ and $(b)$ respectively, how does one show that $I^{(m,n)}$ is principal? Try: $(m,n) = rm + sn$, so $I^{(m,n)} = (a)^r(b)^s$, where $r$, $s$ can be positive or negative. The case where both are positive is OK, but how do I handle the other cases?
Even in the other cases the argument works; you just have fractional ideals instead. Viewing $I$ as an element $\overline{I}$ of the ideal class group of $A$, your question can be stated as: suppose $\overline{I}^m=\overline{I}^n=0$ in the ideal class group; then $\overline{I}^{(m,n)}=0$. This statement is true in any group.
How can I calculate the limit of this? What is the limit? $$\lim_{n\rightarrow\infty}\dfrac{3}{(4^n+5^n)^{\frac{1}{n}}}$$ I don't get this limit. Really, I don't know if it even has a limit.
Denote the function $$ f(n) = \frac{3}{(4^n +5^n)^{\frac{1}{n}}} $$ Recall that the logarithm is a continuous function, hence consider $$ L(f(n)) = \log 3-\frac{\log(4^n +5^n)}{n}\\ \lim_{n \to \infty} L(f(n)) = \log 3 - \lim_{n \to \infty}\frac{\log(4^n +5^n)}{n}=\log 3 - \lim_{n \to \infty} \frac{4^n \log 4 + 5^n \log 5}{4^n + 5^n} \\ =\log 3 - \log 5=\log \bigg(\frac{3}{5} \bigg) $$ I used L'Hospital's rule here and then divided the fraction through by $5^n$. Hence $\lim_{n \to \infty}f(n)=\frac{3}{5}$
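A numeric sanity check (my addition); rewriting $(4^n+5^n)^{1/n}=5\,(1+(4/5)^n)^{1/n}$ avoids overflow for large $n$:

```python
def f(n):
    # 3 / (4^n + 5^n)^(1/n) = 3 / (5 * (1 + (4/5)^n)^(1/n))
    return 3 / (5 * (1 + (4 / 5) ** n) ** (1 / n))

for n in (10, 100, 1000):
    print(n, f(n))   # tends to 3/5 = 0.6
```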
If $M(r)$ for $a \leq r \leq b$ by $M(r)=\max\{\frac {r}{a}-1,1-\frac {r}{b}\}$, then $\min \{M(r):a \leq r \leq b\}=$? I faced the following problem that says: Let $0<a<b$. Define a function $M(r)$ for $a \leq r \leq b$ by $M(r)=\max\{\frac {r}{a}-1,1-\frac {r}{b}\}$. Then $\min \{M(r):a \leq r \leq b\}$ is which of the following: 1. $0$ 2. $\frac {2ab}{a+b}$ 3. $\frac {b-a}{b+a}$ 4. $\frac {b+a}{b-a}$ I do not know how to progress with it. Can someone point me in the right direction? Thanks in advance for your time.
If $a\leq r \leq \dfrac{2ab}{a+b}$ show that $M(r)=1-\dfrac{r}{b}$ and for $\dfrac{2ab}{a+b}\leq r \leq b$ that $M(r)=\dfrac{r}{a}-1$. As Did suggested a picture (of $\dfrac{r}{a}-1, 1-\dfrac{r}{b}$) will help. What is $\dfrac{2ab}{a+b}$? To find $\min \{M(r):a\leq r\leq b\}$ note that $\dfrac{r}{a}-1$ is increasing and $1-\dfrac{r}{b}$ decreasing.
getting the inner corner angle I have four points that make a concave quad: now I want to get the inner angle of the (b) corner in degrees. Note: the inner angle is greater than 180 degrees.
Draw $ac$ and use the law of cosines at $\angle b$ (here $68$, $50$ and $226$ are the squared lengths $|ab|^2$, $|bc|^2$ and $|ac|^2$ computed from the four points), then subtract from $360$ $226=68+50-2\sqrt{50\cdot 68} \cos \theta \\ \cos \theta\approx -0.926 \\ \theta \approx 157.83 \\ \text{Your angle } \approx 202.17$
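In code, the same arithmetic (my addition, using the squared lengths from the answer):

```python
import math

ab2, bc2, ac2 = 68, 50, 226                 # squared side lengths around corner b
cos_theta = (ab2 + bc2 - ac2) / (2 * math.sqrt(ab2 * bc2))
theta = math.degrees(math.acos(cos_theta))  # angle of triangle abc at b
print(theta, 360 - theta)                   # ~ 157.83, reflex angle ~ 202.17
```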
Sum of $\prod 1/n_i$ where $n_1,\ldots,n_k$ are divisions of $m$ into $k$ parts. Fix $m$ and $k$ natural numbers. Let $A_{m,k}$ be the set of all divisions of $m$ into $k$ parts. That is: $$A_{m,k} = \left\{ (n_1,\ldots,n_k) : n_i >0, \sum_{i=1}^k n_i = m \right\} $$ We are interested in the following sum $s_{m,k}$: $$s_{m,k} = \sum_{ (n_1,\ldots,n_k) \in A_{m,k} } \prod_{i=1}^k \frac{1}{n_i} $$ Can you find $s_{m,k}$ explicitly, or perhaps its generating function or exponential generating function? EDIT: Since order matters in the $(n_1,\ldots,n_k)$ this is not exactly a partition.
I am not sure I understand the problem correctly, but $$ g(z) = \left( \frac{z}{1} + \frac{z^2}{2} +\frac{z^3}{3} + \ldots + \frac{z^q}{q} + \ldots \right)^k = \left( \log \frac{1}{1-z} \right)^k $$ looks like a good candidate to me, so that $$ s_{m,k} = [z^m] \left( \log \frac{1}{1-z} \right)^k.$$ This is the exponential generating function for a sequence of $k$ cycles containing a total of $m$ nodes, $$\mathfrak{C}(\mathcal{Z}) \times \mathfrak{C(\mathcal{Z})} \times \mathfrak{C(\mathcal{Z})} \times \cdots \times \mathfrak{C(\mathcal{Z})} = \mathfrak{C}^k(\mathcal{Z}) ,$$ so that $m! [z^m] g(z)$ gives the number of such sequences. In particular, since $\sum_{m\ge k}\left[{m\atop k}\right]\frac{z^m}{m!}=\frac{1}{k!}\left(\log\frac{1}{1-z}\right)^k$, we get the closed form $s_{m,k}=\frac{k!}{m!}\left[{m\atop k}\right]$, where $\left[{m\atop k}\right]$ is the unsigned Stirling number of the first kind. Since the components are at most $m$ we could truncate the inner logarithmic term at $z^m/m$, but I suspect the logarithmic form is more useful for asymptotics.
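One can check the coefficient claim by brute force over all ordered compositions; the following SymPy sketch (my addition) compares the direct sum with the series coefficient for small $m,k$:

```python
from fractions import Fraction
from itertools import product
from math import prod
import sympy as sp

def s_direct(m, k):
    # sum of prod(1/n_i) over ordered k-tuples of positive integers with sum m
    return sum(Fraction(1, prod(c))
               for c in product(range(1, m + 1), repeat=k) if sum(c) == m)

z = sp.symbols('z')
m, k = 7, 3
series_coeff = (sp.log(1 / (1 - z)) ** k).series(z, 0, m + 1).removeO().coeff(z, m)
print(s_direct(m, k), sp.nsimplify(series_coeff))   # the two rationals agree
```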
Proving the statement using Resolution? I'm trying to solve this problem for my logical programming class: Every child loves Santa. Everyone who loves Santa loves any reindeer. Rudolph is a reindeer, and Rudolph has a red nose. Anything which has a red nose is weird or is a clown. No reindeer is a clown. John does not love anything which is weird. (Conclusion) John is not a child. Here is my theory: 1. K(x) => L(x,s) % Every child loves Santa 2. L(x,s) => ( D(y) => L(x,y) ) % Everyone who loves Santa loves any reindeer 3. D(r) & R(r) % Rudolph is a reindeer, and Rudolph has a red nose. 4. R(x) => ( W(x) v C(x) ) % Anything which has a red nose is weird or is clown 5. ~( D(x) & C(x) ) % No reindeer is a clown. 6. W(x) => ~L(j,x) % John does not love anything which is weird. 7. ?=> ~K(j) % John is not a child? Here are the clauses in CNF: 1. ~K(x) v L(x,s) 2. ~L(x,s) v ~D(y) v L(x,y) 3. D(r) 4. R(r) 5. ~R(x) v W(x) v C(x) 6. ~D(x) v ~C(x) 7. ~W(x) v ~L(j,x) 8. K(j) I cannot seem to get an empty statement by Resolution. Is there a mistake in my theory or the conclusion indeed does not follow? Edit: Resolution [3,6] 9. ~C(r) [4,5] 10. W(r) v C(r) [9,10] 11. W(r) [8,1] 12. L(j,s) [12,2] 13. ~D(y) v L(j,y) [11,7] 14. ~L(j,r) [13,14] 15. ~D(r) Thank you for your help!
The conclusion is correct. In fact your own edit already finishes: resolving your clause 15, $\lnot D(r)$, with clause 3, $D(r)$, yields the empty clause. I will let you tidy this up and fill in the gaps, but you might want to consider the following W(r) v C(r) W(r) ~L(j,r) ~L(j,s) ~K(j)
Exposition On An Integral Of An Absolute Value Function At the moment, I am trying to work on a simple integral, involving an absolute value function. However, I am not just trying to merely solve it; I am undertaking to write, in detail, of everything I am doing. So, the function is $f(x) = |x^2 + 3x - 4|$. I know that this isn't an algebraic-like function, so we can't evaluate it as one; but, by using the definition of absolute value, we can rewrite it as one. The function $f(x)$, without the absolute value signs, can take on both positive and negative values; so, in order to retain the strictly positive output that the absolute value function demands, we have to put a negative sign in front of the algebraic definiton of our absolute vaue function on the interval where the values yield a negative value, so we'll get a double negative -(-), resulting in the positve we want. This is how far I've gotten so far. From what i've been taught, in order to find the intervals where the function is positive and where it is negative, you have to find the values that make the function zero, and create test intervals from those values. For instance, the zeros of the function above are $x = -4$ and $x = 1$; our test intervals are then $(- \infty, -4)$, $(-4, 1)$, and $(1, \infty)$ My question is, why does finding the zeros of the function guarantee that we will find those precise test intervals?
Since polynomials are continuous and the absolute value of a continuous function is continuous, the zeros are the only points where a sign change is possible: by the intermediate value theorem, a continuous function which takes both a positive and a negative value on an interval must vanish somewhere in between, so on each interval between consecutive zeros the sign is constant. That is why the zeros $x=-4$ and $x=1$ produce exactly those test intervals. Then you can write your function piecewise and integrate.
Define a domain filter of a function Let $\mathbb{B}, \mathbb{V}$ two sets. I have defined a function $f: \mathbb{B} \rightarrow \mathbb{V}$. $\mathcal{P}(\mathbb{B})$ means the power set of $\mathbb{B}$, I am looking for a function $g: (\mathbb{B} \rightarrow \mathbb{V}) \times \mathcal{P}(\mathbb{B}) \rightarrow (\mathcal{P}(\mathbb{B}) \rightarrow \mathbb{V})$ which can filter the domain of $f$ by a subset of $\mathbb{B}$, that means $g: (f, \mathbb{S}) \mapsto h$ such that the domain of $h$ is $\mathbb{S}$ and $\forall x \in \mathbb{S}, h(x) = f(x)$. I am wondering if this kind of function exists already. If not, is there a better way to define it? Could anyone help?
What you denote $g(f,\mathbb S)$ is usually called the restriction of $f$ to $\mathbb S$, and denoted $f|\mathbb S$ or $f\upharpoonright \mathbb S$. Sometimes this notation is used even if $\mathbb S$ is not contained in the domain of $f$, in which case it is understood to be $f\upharpoonright A$, where $A=\mathbb S\cap{\rm dom}(f)$. (And, agreeing with Trevor's comment, the range of $g$ should be $\bigcup_{\mathbb S\in\mathcal P(\mathbb B)}(\mathbb S\to \mathbb V)$ rather than ${\mathcal P}(\mathbb B)\to\mathbb V$. Anyway, I much prefer the notation $A^B$ or ${}^B A$ instead of $B\to A$.)
Does every sequence of rationals, whose sum is irrational, have a subsequence whose sum is rational Assume we have a sequence of rational numbers $a=(a_n)$. Assume we have a summation function $S: \mathscr {L}^1 \mapsto \mathbb R, \ \ S(a)=\sum a_n$ ($\mathscr {L}^1$ is the sequence space whose sums of absolute values converges). Assume also that $S(a) \in \mathbb R \setminus \mathbb Q$. I would like to know if every such sequence $a$ has a subsequence $b$ (infinitely long) such that $S(b) \in \mathbb Q$. Take as an example $a_n = 1/n^2$. Then $S(a)=\pi^2/6$. But $a$ has a subsequence $b=(b_n)=(1/(2^n)^2)$ (ie. all squares of powers of $2$). Then $S(b)=4/3$. Is this case with every such sequence?
No; for example, if $(n_i)$ is a strictly increasing sequence of positive integers, then we can imitate the proof of the irrationality of $e$ to see that $$\sum_{i=1}^\infty \frac{1}{n_1 \dots n_i} \notin \mathbf Q.$$ But every sub-series of this series has the same property (it just amounts to grouping some of the $n_i$ together).
Prove $||a| - |b|| \leq |a - b|$ I'm trying to prove that $||a| - |b|| \leq |a - b|$. So far, by using the triangle inequality, I've got: $$|a| = |\left(a - b\right) + b| \leq |a - b| + |b|$$ Subtracting $|b|$ from both sides yields, $$|a| - |b| \leq |a - b|$$ The book I'm working from claims you can achieve this proof by considering just two cases: $|a| - |b| \geq 0$ and $|a| - |b| < 0$. The first case is pretty straightforward: $$|a| - |b| \geq 0 \implies ||a| - |b|| = |a| - |b| \leq |a - b|$$ But I'm stuck on the case where $|a| - |b| < 0$ Cool, I think I got it (thanks for the hints!). So, $$|b| - |a| \leq |b - a| = |a - b|$$ And when $|a| - |b| < 0$, $$||a| - |b|| = -\left(|a| - |b|\right) = |b| - |a| \leq |a - b|$$
Hint: If $|a|-|b|<0$, rename $a$ to $b'$ and $b$ to $a'$.
How to prove that two non-zero linear functionals defined on the same vector space and having the same null-space are proportional? Let $f$ and $g$ be two non-zero linear functionals defined on a vector space $X$ such that the null-space of $f$ is equal to that of $g$. How to prove that $f$ and $g$ are proportional (i.e. one is a scalar multiple of the other)?
Let $H$ be the null space and take a vector $v$ outside $H$. The point is that $H+\langle v\rangle$ is the whole vector space, which I assume you know (i.e. $H$ has codimension 1). Then $f(v)$ and $g(v)$ uniquely determine the functions $f$ and $g$, and all $x\in X$ can be written as $x=h+tv$ with $h\in H$, so for $x\notin H$ (that is, $t\neq 0$): $$ f(x) / g(x) = f(tv)/g(tv) = f(v)/g(v). $$ For $x\in H$ both $f(x)$ and $g(x)$ vanish, so in all cases $f(x)=\lambda g(x)$ with $\lambda=f(v)/g(v)$.
Integrate $\iint_D\exp\{\min(x^2, y^2)\}\mathrm{d}x\mathrm{d}y$ Compute the integral: \begin{equation} \iint\limits_{\substack{\displaystyle 0 \leqslant x \leqslant 1\\\displaystyle 0 \leqslant y \leqslant 1}}\exp\left\{\min(x^2, y^2)\right\}\mathrm{d}x\mathrm{d}y \end{equation} $D$ is the rectangle with vertices $(0,0)$, $(0,1)$, $(1,0)$ and $(1,1)$ and $\min(x^2,y^2)$ is the minimum of the numbers $x^2$ and $y^2$. I don't have a clue about how to integrate this function. I thought of using Taylor series but I am not sure if this would work either. Can anyone help me out?
Note that the derivative of $x\mapsto e^{x^2}$ is $x\mapsto 2xe^{x^2}$, hence by symmetry along the line $x=y$ $$\begin{align} \int_0^1\int_0^1e^{\min\{x^2,y^2\}}\,\mathrm dy\,\mathrm dx &= 2\int_0^1\int_x^1e^{x^2}\,\mathrm dy\,\mathrm dx\\ &=2\int_0^1(1-x)e^{x^2}\,\mathrm dx\\ &=2\int_0^1e^{x^2}\,\mathrm dx-\int_0^12xe^{x^2}\,\mathrm dx\\ &=2\int_0^1e^{x^2}\,\mathrm dx-e+1. \end{align}$$ Unfortunately, the remaining integral is non-elementary. A similar integal with max instead of min is much easier: $$\begin{align} \int_0^1\int_0^1e^{\max\{x^2,y^2\}}\,\mathrm dy\,\mathrm dx &= 2\int_0^1\int_0^xe^{x^2}\,\mathrm dy\,\mathrm dx\\ &=2\int_0^1 x e^{x^2}\,\mathrm dx\\ &=e-1. \end{align}$$
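Numerically (my addition): $2\int_0^1 e^{x^2}dx=\sqrt{\pi}\,\operatorname{erfi}(1)$, so the min-integral equals $\sqrt{\pi}\,\operatorname{erfi}(1)-e+1\approx 1.207$, which SciPy's double quadrature confirms; the max version gives $e-1\approx 1.718$:

```python
import numpy as np
from scipy import integrate, special

# on [0,1]^2 we have min(x^2, y^2) = min(x, y)^2 since x, y >= 0
val_min, _ = integrate.dblquad(lambda y, x: np.exp(min(x, y) ** 2), 0, 1, 0, 1)
closed_min = np.sqrt(np.pi) * special.erfi(1) - np.e + 1
print(val_min, closed_min)        # both ~ 1.2070

val_max, _ = integrate.dblquad(lambda y, x: np.exp(max(x, y) ** 2), 0, 1, 0, 1)
print(val_max, np.e - 1)          # both ~ 1.7183
```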
Algebraic Number Theory - Lemma for Fermat's Equation with $n=3$ I have to prove the following, in my notes it is lemma before Fermat's Equation, case $n=3$. I was able to prove everything up to the last two points: Let $\zeta=e^{(\frac{2\pi i}{3})}$. Consider $A:=\mathbb{Z}[\zeta]=\{a+\zeta b \quad|\quad a,b\in \mathbb{Z}\}$. Then * *$\zeta$ is a root of the irreducible poly. $X^2+X+1$. *The field of fractions of $A$ is $\mathbb{Q}(\sqrt{-3})$ *The norm map $N:\mathbb{Q}(\sqrt{-3})\rightarrow \mathbb{Q},$ given by $a+\sqrt{-3}b \mapsto a^2+3b^2$ is multiplicative and sends every element in $A$ to an element in $\mathbb{Z}$. In particular, $u\in A$ is a unit iff $N(u)\in\{-1,1\}$. Moreover, if $N(a)=\pm$ prime number, then $a$ is irreducible. *The unit group $A^x$ is cyclic of order $6$. ($A^x=\{\pm 1, \pm\zeta, \pm\zeta^2\}$) *The ring $A$ is Euclidean with respect to the norm $N$ and hence a unique factorisation domain. *The element $\lambda=1-\zeta$ is a prime element in $A$ and $3=-\zeta^2\lambda^2$. *The quotient $A$ / $(\lambda)$ is isomorphic to $\mathbb{F}_3$. *The image of the set $A^3=\{a^3|a\in A\}$ under $\pi: A \rightarrow A / (\lambda^4)=A / (9)$ is equal to $\{0+(\lambda^4),\pm 1+(\lambda^4),\pm \lambda^3+(\lambda^4)\}$ I was not able to prove 7 and 8. For 7 I do not even know which isomorphism, I guess it should be an isomorphism of rings? I hope anybody knows what to do or has at least some hints, Thanks in advance for your help!
For 7), note that 6) tells you that $3 \in (\lambda)$, and since by 6) $\lambda$ is prime, $A \ne (\lambda)$. Moreover $a + \zeta b = a + (1-\lambda) b \equiv a + b \pmod{\lambda}$. So if you want an explicit isomorphism, it is $a + \zeta b + (\lambda) \mapsto a+ b \pmod{3}$.
Properties of Equivalence Relation Compared with Equality I'm reading about congruences in number theory and my textbook states the following: The congruence relation on $\mathbb{Z}$ enjoys many (but not all!) of the properties satisfied by the usual relation of equality on $\mathbb{Z}$. The text then does not go into detail as to what properties they are describing. So what are the properties they are talking about? I've already showed that congruences are reflexive, symmetric, and transitive, so why in general is this not the same as equality? Is there some property that all equivalence relations will never share with the equality relation? I appreciate all responses.
An equivalence relation is the equality relation if and only if its congruence classes are all singletons. Most equivalence relations do not have this characteristic. The equivalence classes of (most) congruence relations on $\Bbb Z$, for example, are infinite.
Two trivial questions in general topology I'd appreciate some guidance regarding the following 2 questions. Number 1 should be clear, and number 2 is more of a discussion: * *Let $X$ be a topological space. Let $E$ be a dense subset. Can $E$ be finite without $X$ being finite? Or countable? *Let $X$ be a topological space. I came across the following definition of "isolated point": a point $x$ which is not a limit point of $X\setminus\{x\}$. Or, for a metric space: a point $x$ such that there is an open ball centered on $x$ not containing any other point of $X$. These definitions make no sense to me: how can a ball of a topological space $X$ contain points not from $X$? It sounds like there is some kind of bigger topological space containing $X$ which is implicitly referred to. In that context, what meaning can one give to "an isolated point of $X$"?
* *Let $X = \mathbb{R}$ and define the topology $\mathcal{T} = \{\mathbb{R}, \emptyset, \{0\}\}$. The set $\{0\}$ is dense in $(\mathbb{R}, \mathcal{T})$ (the only closed set containing $0$ is $\mathbb{R}$ itself, so its closure is clearly all of $\mathbb{R}$), and it's a finite set. *Consider $X = [0,1] \cup \{2\}$ with the usual Euclidean metric. The ball of radius $1$ about the point $2$ (in $X$) is just the set $\{2\}$. You don't need to have a larger space in mind when you talk about the isolated points of $X$. The ambient space $\mathbb{R}$ doesn't play any role here. Alternatively, you could think about the metric $d(x,y) = 1$ if $x \neq y$ and $0$ otherwise on $\mathbb{R}$. Then the ball of radius $1$ about any point in $\mathbb{R}$ is the singleton $\{x\}$.
Understanding Primitive Polynomials in GF(2)? This is an entire field over my head right now, but my research into LFSRs has brought me here. It's my understanding that a primitive polynomial in $GF(2)$ of degree $n$ indicates which taps will create an LFSR. Such as $x^4+x^3+1$ is primitive in $GF(2)$ and has degree $4$, so a 4 bit LFSR will have taps on bits 4 and 3. Let's say I didn't have tables telling me what taps work for what sizes of LFSRs. What process can I go through to determine that $x^4+x^3+1$ is primitive and also show that $x^4+x^2+x+1$ is not (I made that equation up off the top of my head from what I understand about LFSRs, I think it's not primitive)? Several pages online say that you should divide $x^e+1$ (where e is $2^n-1$ and $n$ is the degree of the polynomial) by the polynomial, e.g. for $x^4+x^3+1$, you do $(x^{15}+1)/(x^4+x^3+1)$. I can divide polynomials but I don't know what the result of that division will tell me? Am I looking for something that divides evenly? Does that mean it's not primitive?
For a polynomial $p(x)$ of degree $n$ with coefficients in $GF(2)$ to be primitive, it must satisfy the condition that $2^n-1$ is the smallest positive integer $e$ with the property that $$ x^e\equiv 1\pmod{p(x)}. $$ You got that right. For a polynomial to be primitive, it is necessary (but not sufficient) for it to be irreducible. Your "random" polynomial $$ x^4+x^2+x+1=(x+1)(x^3+x^2+1) $$ fails this litmus test, so we don't need to check anything more. Whenever I am implementing a finite field of characteristic two in a computer program, at the beginning I generate a table of discrete logarithms. While doing that I also automatically verify that $2^n-1$ is, indeed, the smallest power of $x$ that yields remainder $1$. For an in situ example see the latter half of my answer to this question, where I verify that $x^3+x+1$ is primitive by showing that we need to go up to $x^7$ to get a remainder equal to $1$. Doing it by hand becomes tedious after a while. There are several shortcuts available, if you know that $p(x)$ is irreducible and if you know the prime factorization of $2^n-1$. These depend on the fact that the multiplicative group of $GF(2^n)$ is always cyclic. Your example of $p(x)=x^4+x^3+1$ is a case in point. It is relatively easy to decide that it is irreducible. Then the theory of cyclic groups tells us that the smallest $e$ will be a factor of $15$. So to prove that it is actually equal to fifteen, it suffices to check that none of $e=1,3,5$ work. This is easy. The only non-trivial check is that $$ x^5\equiv x^5+x(x^4+x^3+1)=x^4+x\equiv x^4+x+(x^4+x^3+1)=x^3+x+1\not\equiv1\pmod{x^4+x^3+1}, $$ and this gives us the proof. Even with the shortcuts, finding a primitive polynomial of a given degree is a task I would rather avoid, so I use look-up tables. My favorite on-line source is at http://web.eecs.utk.edu/~plank/plank/papers/CS-07-593/primitive-polynomial-table.txt After you have one primitive polynomial, you often want to find other closely related ones. For example, when calculating generating polynomials of a BCH code or an LFSR of a Gold sequence (or another sequence with known structure), you encounter the following task. The given primitive polynomial is the so-called minimal polynomial of any one of its roots, say $\alpha$. Those constructions require you to find the minimal polynomial of $\alpha^d$ for some $d$. For example, $d=3$ or $d=5$ are very common. The minimal polynomial of $\alpha^d$ will be primitive iff $\gcd(d,2^n-1)=1$, and this often holds. Then relatively basic algebra of field extensions gives you an algorithm for finding the desired minimal polynomial. Space won't permit me to get into that here, though.
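If you want to experiment, here is a small brute-force test in the spirit of the criterion above (a sketch under the convention that a polynomial over $GF(2)$ is encoded as an integer bitmask, bit $i$ holding the coefficient of $x^i$; the function names are mine):

```python
# Brute-force primitivity test over GF(2); polynomials are encoded as
# Python ints, with bit i holding the coefficient of x^i.

def gf2_mod(a, m):
    """Remainder of a divided by m in GF(2)[x]."""
    dm = m.bit_length() - 1
    while a.bit_length() - 1 >= dm:
        a ^= m << (a.bit_length() - 1 - dm)
    return a

def is_primitive(p):
    """True iff p is primitive over GF(2): x has order 2^n - 1 mod p."""
    n = p.bit_length() - 1
    e, r = 2**n - 1, 1
    for k in range(1, e + 1):
        r = gf2_mod(r << 1, p)   # multiply by x, then reduce mod p
        if r == 1:
            return k == e        # the first return to 1 must happen at e
    return False                 # x never returns to 1 (e.g. p has zero constant term)

print(is_primitive(0b11001))  # x^4 + x^3 + 1     -> True
print(is_primitive(0b10111))  # x^4 + x^2 + x + 1 -> False (order of x is 7)
```

For degree-$n$ inputs this does up to $2^n-1$ reduction steps, which is fine for LFSR-sized $n$ but is exactly the kind of brute force the shortcuts above avoid.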
Friends Problem (Basic Combinatorics) Let $k$ and $n$ be fixed integers. In a group of $k$ people, any group of $n$ people all have a friend in common. * *If $k=2 n + 1$ prove that there exists a person who is friends with everyone else. *If $k=2n+2$, give an example of a group of $k$ people satisfying the given condition, but with no person being friends with everyone else. Thanks :)
The first part of this is just an expansion of Harald Hanche-Olsen’s answer. For the second part number the $2n+2$ people $P_1,\dots,P_{n+1},Q_1,\dots,Q_{n+1}$. Divide them into pairs: $\{P_1,Q_1\},\{P_2,Q_2\},\dots,\{P_{n+1},Q_{n+1}\}$. The two people in each pair are not friends; i.e., $P_k$ is not friends with $Q_k$ for $k=1,\dots,n+1$. However, every other possible pair of people in the group are friends. In particular, * *$P_k$ is friends with $P_i$ whenever $1\le i,k\le n+1$ and $i\ne k$, *$Q_k$ is friends with $Q_i$ whenever $1\le i,k\le n+1$ and $i\ne k$, and *$P_k$ is friends with $Q_i$ whenever $1\le i,k\le n+1$ and $i\ne k$. In short, two people in the group are friends if and only if they have different subscripts. Clearly no person in the group is friends with everyone else in the group. Suppose, though, that $\mathscr{A}$ is a group of $n$ of these people. The people in $\mathscr{A}$ have altogether at most $n$ different subscripts, so there is at least one subscript that isn’t used by anyone in $\mathscr{A}$; let $k$ be such a subscript. Then everyone in $\mathscr{A}$ has a different subscript from $P_k$ and is therefore friends with $P_k$ (and for that matter with $Q_k$). I hope to get to the first part a bit later.
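In the meantime, the construction for the second part is easy to verify mechanically for small $n$ (a brute-force sketch; the names are invented for the illustration):

```python
# Verify the k = 2n+2 construction for a small n: people are (subscript, side),
# and two people are friends iff their subscripts differ.
from itertools import combinations

n = 3
people = [(i, side) for i in range(n + 1) for side in "PQ"]
friends = lambda x, y: x[0] != y[0]

# No one is friends with everyone else (each P_k is not friends with Q_k):
assert all(any(not friends(p, q) for q in people if q != p) for p in people)

# Every group of n people has a common friend outside the group:
for group in combinations(people, n):
    assert any(all(friends(p, q) for q in group)
               for p in people if p not in group)

print("construction verified for n =", n)
```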
I need to ask a question about vectors and cross product? When you take the determinant on 3 vectors, you calculate and get the volume of that specific shape, correct? When you take the cross-product of 2 vectors, you calculate and get the area of that shape and you also get the vector perpendicular to the plane, correct?
Kind of. When you take the determinant of a matrix whose rows are $n$ vectors in $\mathbb{R}^n$, you get the signed volume of the parallelepiped spanned by those vectors. For instance, the determinant of the identity matrix (whose rows are the standard unit vectors) gives the volume of the unit box in $n$ dimensions; the $3\times3$ identity matrix gives the volume of the unit cube. When you calculate a cross product via a determinant, however, the matrix whose determinant you take has as its first row the unit vectors themselves, so this "determinant" is a formal mnemonic rather than a number. For instance \begin{align} \det\begin{pmatrix} \hat{i}&\hat{j}&\hat{k}\\1&0&0\\0&1&0 \end{pmatrix}=\hat{k} \end{align} It does NOT return a scalar value: the cross product is a vector, perpendicular to the two inputs, whose length equals the area of the parallelogram they span.
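As a concrete check (a quick NumPy sketch; the specific vectors are just examples):

```python
# The determinant of three stacked vectors gives a signed volume (a scalar),
# while the cross product of two vectors gives a vector whose length is an area.
import numpy as np

u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])

w = np.cross(u, v)
print(w)                                   # [0. 0. 1.] -- a vector, not a scalar
print(np.linalg.norm(w))                   # 1.0: area of the square spanned by u, v
print(np.linalg.det(np.stack([u, v, w])))  # 1.0: volume of the unit cube
```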
Proving $\prod((k^2-1)/k^2)=(n+1)/(2n)$ by induction $$P_n = \prod^n_{k=2} \left(\frac{k^2 - 1}{k^2}\right)$$ Someone already helped me see that $$P_n = \frac{1}{2}\cdot\frac{n + 1}{n} $$ Now I have to prove, by induction, that the formula for $P_n$ is correct. The basis step: $n = 2$ is true, $$P_2 = \frac{3}{4} = \frac{1}{2}\cdot\frac{2+1}{2} $$ Inductive step: Assuming $P_k = \frac12\cdot\frac{k+1}k$ is true for $k \ge2$, I need to prove $$P_{k+1} = \frac12\cdot\frac{k+2}{k+1}$$ So I am stuck here. I have been playing around with $P_k$ and $P_{k+1}$ but I can't figure out how to connect the hypothesis with what I am trying to prove.
We do the same thing as in the solution of Brian M. Scott, but slightly backwards. We are interested in the question $$\frac{n+2}{2(n+1)}\overset{?}{=}\frac{n+1}{2n}\cdot\frac{(n+1)^2-1}{(n+1)^2}.$$ The difference-of-squares factorization $(n+1)^2-1=n(n+2)$ settles things: the right-hand side becomes $$\frac{n+1}{2n}\cdot\frac{n(n+2)}{(n+1)^2}=\frac{n+2}{2(n+1)},$$ as desired.
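As a sanity check (not a substitute for the induction), one can confirm the closed form with exact rational arithmetic:

```python
# Verify P_n = (n+1)/(2n) for small n using exact fractions.
from fractions import Fraction

def P(n):
    prod = Fraction(1)
    for k in range(2, n + 1):
        prod *= Fraction(k * k - 1, k * k)
    return prod

assert all(P(n) == Fraction(n + 1, 2 * n) for n in range(2, 100))
print(P(10))  # 11/20
```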
Can we test if a number is a lucky number in polynomial time? I know primality tests exist in polynomial time. But can we test if a number is a lucky number in polynomial time ?
I doubt you're going to get a fully satisfactory answer to this question. As far as I know, no polynomial-time algorithm is known for deciding whether a number is lucky. For comparison, the fact that Primes is in P was not shown until 2004, even though prime numbers and primality tests had been studied for a very long time. I certainly can't say whether a polynomial-time algorithm for lucky numbers exists; the problem is plausibly in $\mathrm{NP}$, and if it turned out to be $\mathrm{NP}$-complete, then finding a polynomial-time test would settle $\mathrm{P}=\mathrm{NP}$. Looking at the lucky numbers, they seem to have much less exploitable structure than the primes. Possibly one could develop a large theory of lucky numbers and then use this to make progress. But examining the sieve algorithm, I highly doubt there's going to be a way to make it run in time polynomial in the size of the input, i.e., the number of digits of $n$.
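For concreteness, here is the sieve itself (a straightforward sketch; note that sieving up to $n$ already costs far more than polynomial in the number of digits of $n$, which is the whole difficulty):

```python
# The lucky-number sieve: start from the odd numbers, then repeatedly let the
# next surviving number give the step and delete every step-th survivor.

def lucky_numbers(limit):
    nums = list(range(1, limit + 1, 2))   # removing every 2nd number leaves the odds
    i = 1
    while i < len(nums) and nums[i] <= len(nums):
        step = nums[i]
        del nums[step - 1 :: step]        # delete every step-th remaining number
        i += 1
    return nums

print(lucky_numbers(100))
# [1, 3, 7, 9, 13, 15, 21, 25, 31, 33, 37, 43, 49, 51, 63, 67, 69, 73, 75, 79, 87, 93, 99]
```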
Sketch complex curve $z(t) = e^{(-1+i)t}$, $0 \le t \le b$ for some $b>0$ Sketch the complex curve $z(t) = e^{(-1+i)t}$, $0 \le t \le b$, for some $b>0$. I tried plotting this using Mathematica, but I get two curves. Also, how do I find its length? Is it just the integral? This curve doesn't converge, right? Edit: I forgot the $t$ after the $-1$, so it's not a circle of radius $e^{-1}$.
(You have received answers for the rest, so let me focus on the length.) The length of the curve is given by $$\int_0^b |z'(t)|\,dt = \int_0^b |(-1+i)e^{(-1+i)t}|\,dt = \int_0^b \sqrt 2 e^{-t}\,dt.$$ I think you can work out what happens when $b\to\infty$ now.
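If you want to double-check numerically (a quick NumPy sketch, not part of the original answer), compare the polygonal approximation of the curve's length with the closed form $\sqrt 2\,(1-e^{-b})$ obtained from the integral above:

```python
# Compare the chord-length (polygonal) approximation of the spiral's length
# with the closed form sqrt(2) * (1 - e^{-b}) from evaluating the integral.
import numpy as np

b = 10.0
t = np.linspace(0.0, b, 200001)
z = np.exp((-1 + 1j) * t)

print(np.abs(np.diff(z)).sum())        # ~ 1.4142..., sum of chord lengths
print(np.sqrt(2) * (1 - np.exp(-b)))   # closed form; tends to sqrt(2) as b grows
```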