Understanding bicomplex numbers I found, by chance, the set of bicomplex numbers. These numbers particularly caught my attention because of their similarity to my previous personal research and question. I should say that I can't really understand the fact that $j^2=+1$ (and most other abstract algebra) without using matrix interpretations. When I looked a bit at the bicomplex numbers, the thing that surprised me a lot was the fact that $ij=ji=k$ and $k^2=-1$, because using matrix representations we get: $$ij=\begin{pmatrix} 0 & -1\\ 1 & 0 \end{pmatrix} \begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix}=\begin{pmatrix} -1 & 0\\ 0 & 1 \end{pmatrix}$$ $$ji=\begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix} \begin{pmatrix} 0 & -1\\ 1 & 0 \end{pmatrix}=\begin{pmatrix} 1 & 0\\ 0 & -1 \end{pmatrix}$$ So $ij$ should be different from $ji$. Then whether we define $k=ij$ or $k=ji$, we get either way $$k^2= I$$ I looked at the Wikipedia article and did some internet searching but couldn't find a matrix representation for the bicomplex $k$. So I want to ask for some clarification about where the relations $ij=ji$ and $k^2=-1$ come from.
The algebra you described in the question is the tessarines. There are two matrix representations of tessarines. * *For a tessarine $z=w_0+w_1i+w_2j+w_3ij$ the matrix representation is as follows: $$\left( \begin{array}{cccc} {w_0} & -{w_1} & {w_2} & -{w_3} \\ {w_1} & {w_0} & {w_3} & {w_2} \\ {w_2} & -{w_3} & {w_0} & -{w_1} \\ {w_3} & {w_2} & {w_1} & {w_0} \\ \end{array} \right)$$ One has to use the 4x4 matrix. After performing the operations, the coefficients of the resulting tessarine can be read off from the first column of the resulting matrix. *Alternatively, one can use a 2x2 matrix of the form $z=a+bj=\left( \begin{array}{cc} a & b \\ b & a \\ \end{array} \right)$ where both $a$ and $b$ are complex numbers. Since complex numbers are built into most computer algebra systems, I prefer the second method.
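To see the second representation in action, here is a small Python sketch (numpy; the variable names are mine, not from any tessarine library). With $2\times 2$ complex matrices, $i$ is the complex scalar $\mathrm{i}$ times the identity, so it commutes with $j$, and $k=ij$ squares to $-1$, resolving the questioner's puzzle:

```python
import numpy as np

# Tessarine z = a + b*j as the 2x2 complex matrix [[a, b], [b, a]].
i = np.array([[1j, 0], [0, 1j]])           # a = 1j, b = 0
j = np.array([[0, 1], [1, 0]], dtype=complex)
k = i @ j

assert np.allclose(i @ j, j @ i)           # ij = ji
assert np.allclose(i @ i, -np.eye(2))      # i^2 = -1
assert np.allclose(j @ j, np.eye(2))       # j^2 = +1
assert np.allclose(k @ k, -np.eye(2))      # k^2 = -1
```

The point is that the $2\times2$ real matrices in the question represent $i$ and $j$ over $\mathbb{R}$, where they cannot commute; over $\mathbb{C}$, $i$ becomes a scalar and the tessarine relations hold.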
Bayesian learning Imagine we assume there are two different types of coins: * *Coin A: a fair coin, p(heads) = 0.5. *Coin B: biased to heads at p(heads)=0.7. We then want to learn from samples which coin we are flipping. Assume a naive prior over the two coins, so we have a Beta distribution, $\beta_0(1,1)$. You flip the coin and see heads. Since you know the probability that coin A would generate heads is 0.5 and you know the probability that coin B would generate heads is 0.7, we update our distribution as: $$ \beta_1 = (1+\frac{0.5}{1.2},1+\frac{0.7}{1.2}) \approx (1.4167, 1.5833) $$ Is this the correct way to update the distribution or will it improperly bias the distribution in some way?
I don't understand how a beta distribution enters into it. A beta distribution is usually used in the context of an unknown probability lying anywhere in $[0,1]$. We have no such parameter here; all we have is an unknown binary choice between coins $A$ and $B$. The most natural prior in this case is one that assigns probability $1/2$ to both coins. The a priori probability of flipping heads with coin $A$ was $\frac12\cdot0.5=0.25$, and the a priori probability of flipping heads with coin $B$ was $\frac12\cdot0.7=0.35$, so since heads was flipped the probability that the coin is coin $A$ is $0.25/(0.25+0.35)=5/12$ and the probability that the coin is coin $B$ is $0.35/(0.25+0.35)=7/12$.
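The discrete Bayes update described above can be spelled out in a few lines of Python (a sketch; the coin labels come from the question):

```python
# Prior: 1/2 on each coin; likelihood of heads: 0.5 for A, 0.7 for B.
prior = {"A": 0.5, "B": 0.5}
likelihood_heads = {"A": 0.5, "B": 0.7}

joint = {c: prior[c] * likelihood_heads[c] for c in prior}   # 0.25 and 0.35
total = sum(joint.values())                                  # P(heads) = 0.6
posterior = {c: joint[c] / total for c in joint}

assert abs(posterior["A"] - 5/12) < 1e-12
assert abs(posterior["B"] - 7/12) < 1e-12
```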
Finding a continuous function $f: \mathbb{R} \to \mathbb{R}$ such that $f(\mathbb{R})$ is neither open nor closed Find a bounded, continuous function $f: \mathbb{R} \to \mathbb{R}$ such that $f(\mathbb{R})$ is neither open nor closed.
Take $f(x) = \arctan(x^2)$. Then, $f(\mathbb{R}) = [0, \pi/2)$.
Is there something faulty about this statement? Show any prime of the form $3k+1$ is of the form $6k+1$. I came up with my own solution that made perfect sense to me, but when I read the text's solution, it argued that the primes of the particular form are $6k+1 = 3(2k)+1$. But doesn't that really say the primes of the form $3k+1$ are of the form $6m+1$? It seems to me as though there's some misuse of notation here -- allowing $k = 2k$. So should the exercise be phrased as $6m+1$ instead?
Another way to phrase it is "if $k$ is a positive integer such that $3k+1$ is prime, then $k$ is even." The proof, of course, is easy: if $k$ is odd, then $k=2h+1$ for some integer $h$. But then $3k+1 = 3(2h+1)+1 = 6h+4$ is even and greater than $2$, and therefore not prime.
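A quick brute-force sanity check of the claim (plain Python with naive trial-division primality; a sketch, not part of the proof):

```python
def is_prime(n):
    # simple trial division, fine for small n
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Every prime of the form 3k+1 has k even, i.e. is of the form 6m+1.
for k in range(1, 2000):
    if is_prime(3*k + 1):
        assert k % 2 == 0
        assert (3*k + 1) % 6 == 1
```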
Probability of draws at random with replacement of five tickets $400$ draws are made at random with replacement from $5$ tickets that are marked $-2, -1, 0, 1,$ and $2$ respectively. Find the expected value of the number of times positive numbers appear. Expected value of $X$, the number of times a positive number appears, per draw: $E(X)= (1\cdot(1/5))+ (2\cdot(1/5))= (1/5) + (2/5)=3/5=0.6$? $400\cdot 3/5=240$ Expected value of $X$, the number of times a positive number appears: $E(X)=240/400=3/5=0.6$?
The idea is right, but note that $0$ is not positive. So the probability we get a positive on any draw is $\frac{2}{5}$. So if $X_i=1$ if we get a positive on the $i$-th draw, with $X_i=0$ otherwise, then $E(X_i)=\frac{2}{5}$. Now use the linearity of expectation to conclude that the expected number of positives in $400$ draws is $(400)\left(\frac{2}{5} \right)$.
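The answer $400\cdot\frac25=160$ agrees with a direct simulation (Python sketch; the seed and trial count are arbitrary choices of mine):

```python
import random

random.seed(0)
tickets = [-2, -1, 0, 1, 2]
trials = 5000

# average, over many experiments, of the number of positives in 400 draws
total = sum(sum(1 for _ in range(400) if random.choice(tickets) > 0)
            for _ in range(trials)) / trials

assert abs(total - 400 * 2/5) < 1.0   # expected number of positives is 160
```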
An NFA with $\Sigma = \{1\}$ with $x^2$ accepting runs on strings $1^x$ for all $x \geq 0$ - how to construct? One of my homework assignments requires us to construct an NFA over the alphabet $\{1\}$ which has exactly $x^2 + 3$ accepting runs over the input string 1^x for all $x \in \mathbb{N}$. Now, the +3 part is simple - I've got LaTeX code for a state diagram for this using tikz automata: (Now with a pretty diagram!) However, the $x^2$ part is proving really hard for me to figure out. I'm not sure how to do this with a finite number of states, and because this is an NFA, this is especially tricky. Any and all suggestions of how to think about this and what approach to take would be very helpful.
There's no such NFA. Assume there was an NFA over the alphabet ${1}$ which accepts all strings whose length is $n^2 + 3$ for some $n \in \mathbb{N}$. I.e., the NFS is supposed to accept strings of lengths $3,4,7,12,19,28,39,\ldots$. If there was such an NFS, the language $$ \Omega = \left\{\underline{1}^{n^2+3} \,:\, n \in \mathbb{N}\right\} $$ would be regular. According to the pumping lemma for regular languages, we'd then have an $N$ such that if $x \in \Omega$ and $|x| \geq N$, then there are $x_l,x_2,x_3$ with $|x_2| > 0$, $x_1x_2x_3=x$ and $x_1\underbrace{x_2\ldots x_2}_{k\text{ times}}x_3 = x_1x_2^kx_3 \in \Omega$ for all $k \geq 1$. For our language, that implies there there's an $N$ such that if for some $x > N$ there's an $n \in \mathbb{N}$ with $x = n^2 + 3$, then the same thing works for $x + mx_2$ ($x_2 \neq 0$) for all $m \geq 1$. Set $z = x-3$, then the statement is $$ z \geq N, z \text{ is a square} \implies \exists{x_2\in\mathbb{N}^+}:\, \forall m \geq 1:\, z + mx_2 \text{ is a square}\text{.} $$ That's obviously impossible, since the distance between two consecutive squares gets larger and larger, and once it get's larger than $x_2$, the right-hand side cannot hold.
Is G isomorphic to $\mathbb{Z} \oplus \mathbb{Z}$? If $ G=\{3^{m}6^{n}|m,n \in \mathbb{Z}\}$ under multiplication then i want prove that this G is isomorphic to $\mathbb{Z} \oplus \mathbb{Z}$.Can any one help me to solve this example? please help me. thanks in advance. Can i define $\phi:\mathbb{Z} \oplus \mathbb{Z} to G$ as $\phi\big((m,n)\big)=3^m 6^n$
Hint: $2^k=3^{-k}6^k$.${}{}{}{}{}{}{}{}$
Prove that $\log X < X$ for all $X > 0$ I'm working through Data Structures and Algorithm Analysis in C++, 2nd Ed, and problem 1.7 asks us to prove that $\log X < X$ for all $X > 0$. However, unless I'm missing something, this can't actually be proven. The spirit of the problem only holds true if you add several extra qualifiers, because it's relatively easy to provide counterexamples. First, it says that $\log_{a} X < X$ for all $X > 0$, in essence. But if $a = -1$, then $(-1)^{2} = 1$. Therefore $\log_{-1} 1 = 2$. Thus, we must assume $a$ is positive. If $a < 1$, then $a^2 < 1$, so $\log_a(a^2) = 2 > a^2$. Therefore we must assume that $a \geq 1$. Now, the book says that unless stated otherwise, it's generally speaking about base 2 for logarithms, which are vital in computer science. However, even then - if $a$ is two and $X$ is $\frac{1}{16}$, then $\log_{a} X$ is $-4$. (Similarly for base 10, try taking the log of $\frac{1}{10}$ on your calculator: it's $-1$.) Thus we must assume that $X \geq 1$. ...Unless I'm horribly missing something here. The problem seems quite different if we have to prove it for $X \geq 1$. But even then, I need some help solving the problem. I've tried manipulating the equation as many ways as I could think of but I'm not cracking it.
One way to approach this question is to consider the minimum of $x - \log_a x$ on the interval $(0,\infty)$. For this we can compute the derivative, which is $1 - \frac{1}{x\log_e a}$. Thus the derivative is zero at a single point, namely $x = 1/\log_e a,$ and is negative to the left of that point and positive to the right. Thus $x - \log_a x$ decreases as $x$ approaches $1/\log_e a$ from the left, and then increases as we move away from this point to the right. Thus the minimum value is achieved at $x = 1/\log_e a$. (Here I'm assuming that $a > 1$, so that $\log_e a > 0$; the analysis of the problem is a little different if $a < 1$, since then for $x < a < 1$, we have $\log_a x > 1 > x,$ and the statement is not true.) Now this minimum value is equal to $1/\log_e a + (\log_e \log_e a)/\log_e a,$ and you want this to be $> 0 $. This will be true provided $a > e^{1/e}$ (as noted in the comments).
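For the computer-science case $a=2$, the analysis can be spot-checked numerically (a sketch; the sample points are arbitrary):

```python
import math

a = 2.0
g = lambda x: x - math.log(x, a)     # the function being minimized
x_min = 1 / math.log(a)              # critical point 1/ln(a) ≈ 1.4427

# the critical point really is the minimum, and the minimum value is positive
samples = [0.05, 0.5, 1.0, x_min, 2.0, 10.0, 100.0]
assert all(g(x) >= g(x_min) for x in samples)
assert g(x_min) > 0                  # hence log2(x) < x for all x > 0
```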
Normal subgroup of a normal subgroup Let $F,G,H$ be groups such that $F\trianglelefteq G \trianglelefteq H$. I am asked whether we necessarily have $F\trianglelefteq H$. I think the answer is no but I cannot find any counterexample with usual groups. Is there a simple case where this property is not true?
Let $p$ be a prime, and let $G$ be a $p$-group of order $p^3$. Let $H \leq G$ be a non-normal subgroup of order $p$ (equivalently, $H$ is of order $p$ and not central). Then $H$ is contained in a subgroup $K \leq G$ of order $p^2$. In this case $H \trianglelefteq K \trianglelefteq G$, but $H$ is not normal in $G$. For example, $G$ could be the Heisenberg group, which is the set $$\left\{ \begin{pmatrix} 1 & a & b \\ 0 & 1 & c \\ 0 & 0 & 1 \end{pmatrix} : a, b, c \in \mathbb{Z}_p \right\}$$ of matrices under multiplication. In the case $p = 2$ this group is isomorphic to $D_8$, which is the example given in another answer.
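For $p=2$, the chain $H\trianglelefteq K\trianglelefteq G$ with $H\not\trianglelefteq G$ can be verified by brute force; here is a pure-Python sketch with $G\cong D_8$ realized as permutations of the square's four vertices (the helper names are mine):

```python
def compose(p, q):                 # (p∘q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

e = (0, 1, 2, 3)
r = tuple((i + 1) % 4 for i in range(4))   # rotation of the square by 90 degrees
s = tuple((-i) % 4 for i in range(4))      # reflection fixing vertices 0 and 2

def generate(gens):                # close a generating set under composition
    elems, frontier = {e}, {e}
    while frontier:
        new = {compose(g, h) for g in frontier for h in gens} - elems
        elems |= new
        frontier = new
    return elems

def is_normal(H, G):
    return all(compose(compose(g, h), inverse(g)) in H for g in G for h in H)

G = generate({r, s})                                   # D_8, order 8
K = {e, compose(r, r), s, compose(compose(r, r), s)}   # a Klein four subgroup
H = {e, s}                                             # order-2 subgroup

assert len(G) == 8
assert is_normal(H, K) and is_normal(K, G)
assert not is_normal(H, G)         # normality is not transitive
```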
Complex roots of polynomial equations with real coefficients Consider the polynomial $x^5 +ax^4 +bx^3 +cx^2 +dx+4$ where $a, b, c, d$ are real numbers. If $(1 + 2i)$ and $(3 - 2i)$ are two roots of this polynomial then what is the value of $a$ ?
Adding to lab bhattacharjee's answer, Vieta's formulas basically tell you that the negative fraction of the last term (the constant) divided by the coefficient of the first term is equal to the product of the roots (for odd degree). Letting $r$ be the 5th root of the polynomial (since we know 4), $$-\frac{4}{1} = (1−2i)(1+2i)(3+2i)(3-2i)r$$ We get $1-2i$ and $3+2i$ as the two other roots because they're the conjugates of the roots you gave us: for a polynomial with real coefficients, the conjugates of roots are also roots. Vieta's formulas also state that the negative fraction of the coefficient of the term immediately after the first term divided by the coefficient of the first term is equal to the sum of the roots. So $$-\frac{a}{1} = (1−2i)+(1+2i)+(3+2i)+(3-2i)+r$$ Using these two equations, you can solve for $a$ and $r$.
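Carrying the two Vieta equations through numerically (a numpy sketch; `known` and `r` are my own names):

```python
import numpy as np

known = [1+2j, 1-2j, 3+2j, 3-2j]      # the given roots and their conjugates
r = -4 / np.prod(known).real          # product of all five roots equals -4
coeffs = np.poly(known + [r])         # monic coefficients, highest degree first

assert np.allclose(coeffs.imag, 0)    # a, b, c, d come out real
assert np.isclose(coeffs[-1].real, 4) # constant term is 4, as required
a = coeffs[1].real                    # a = -(sum of the five roots)
assert np.isclose(a, -(8 + r))
```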
Simple/Concise proof of Muir's Identity I am not a math student and I am having trouble finding a short proof of Muir's identity. Even a slightly lengthy but easy-to-understand proof would be helpful. Muir's identity: $$\det(A)= (\operatorname{pf}(A))^2;$$ the identity is given in the first paragraph of the following link http://en.wikipedia.org/wiki/Pfaffian I am expecting a proof which uses minimal advanced mathematics. Any reference to a textbook or link would do. I would be very grateful if any of you could point me in that direction. P.S. I have done all the googling required and I wasn't satisfied with the results, so please don't post results from Google's first page. Thanks in advance
This answer does not show the explicit form of $\textrm{pf}(A)$ but it proves that such a form must exist as a polynomial in the entries of $A$. Let $A$ be a generic skew-symmetric $n \times n$ matrix with indeterminate entries $A_{i j}$ on row $i$ column $j$ for $1 \leq i < j \leq n$. I will prove by induction in $n$ that $\det(A)$ is the square of a polynomial in the indeterminates $A_{i j}$. If $n = 2$ then $$ \det \begin{pmatrix} 0& A_{1 2} \\ -A_{1 2}& 0 \end{pmatrix} = A_{1 2}^2 $$ is a square polynomial. Let $$ B = \begin{pmatrix}1 \\ & 1 \\ & -A_{1 3} & A_{1 2} \\ & -A_{1 4} & & A_{1 2} \\ & \vdots & & & \ddots \\ & -A_{1 n} & & & & A_{1 2} \end{pmatrix} $$ where all unlisted entries are equal to zero. Then the product $$ C = B\,A\,B^{T} $$ is skew-symmetric and takes the form $$ C = \begin{pmatrix} 0& A_{1 2} & 0 & \dotsc & 0\\ -A_{1 2} & 0 & \ast & \dotsc & \ast\\ 0 & \ast & \ddots & \ddots & \vdots\\ \vdots & \vdots & \ddots & & \ast \\ 0 & \ast & \dotsc & \ast & 0 \end{pmatrix} $$ where each asterisk denotes some polynomial in the indeterminates. Let $A'$ be the bottom right $(n-2) \times (n-2)$ skew-symmetric sub-matrix of $C$. By induction $\det(A')$ is the square of a polynomial. From the explicit form of $C$ it follows that $$ A_{1 2}^{2n-4} \det(A) = \det(B \, A \, B^T) = \det(C) = A_{1 2}^2 \det(A') $$ or $$\det(A) = A_{1 2}^{6-2n} \det(A').$$ Now the right-hand side must be a polynomial (because $\det(A)$ is) and since it is also a square we are done.
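For small $n$ the identity is easy to sanity-check numerically; e.g. for $n=4$ the Pfaffian has the well-known explicit form $\mathrm{pf}(A)=a_{12}a_{34}-a_{13}a_{24}+a_{14}a_{23}$ (a numpy sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4))
A = M - M.T                                 # random skew-symmetric matrix

pf = A[0, 1]*A[2, 3] - A[0, 2]*A[1, 3] + A[0, 3]*A[1, 2]
assert np.isclose(np.linalg.det(A), pf**2)  # Muir: det(A) = pf(A)^2
```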
Find a polar representation for a curve. I have the following curve: $(x^2 + y^2)^2 - 4x(x^2 + y^2) = 4y^2$ and I have to find its polar representation, but I don't know how. I'd like to get some help; thanks in advance.
Just as the Cartesian has two variables, we will have two variables in polar form: $$x = r\cos \theta,\;\;y = r \sin \theta$$ We can also use the fact that $x^2 + y^2 = (r\cos \theta)^2 + (r\sin\theta)^2 = r^2 \cos^2\theta + r^2\sin^2 \theta = r^2\underbrace{(\sin^2 \theta + \cos^2 \theta)}_{= 1} =r^2$ This gives us $$r^4 - 4r^3\cos \theta - 4r^2 \sin^2\theta = 0 \iff r^2 - 4r\cos\theta - 4\sin^2\theta = 0$$
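The substitution can be double-checked symbolically (a sympy sketch):

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
x, y = r*sp.cos(th), r*sp.sin(th)

cartesian = (x**2 + y**2)**2 - 4*x*(x**2 + y**2) - 4*y**2
polar = r**4 - 4*r**3*sp.cos(th) - 4*r**2*sp.sin(th)**2

assert sp.simplify(cartesian - polar) == 0
```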
Explicit Functions on $\mathbb{C}$ The following question is on last year's complex analysis exam paper, and I'm a little stuck on it. (i) For $f(z)=e^{z^2}$, find the explicit formulas for $u(x,y)$ and $v(x,y)$ such that: $f(x+iy)=u(x,y)+iv(x,y)$ (ii) Find all functions $v: \mathbb{R}^2\rightarrow\mathbb{R}^2$ such that $f(x+iy)=u(x,y)+iv(x,y)$, where $u(x,y)=x^3-3xy^2$ for $(x,y) \in \mathbb{R}^2$, is differentiable on $\mathbb{C}$. My working: (i) $z^2=(x+iy)(x+iy) = x^2-y^2+2ixy$ $e^{z^2}=e^{x^2-y^2+2ixy}$ But I'm not really sure where to go from here to find $u(x,y)$ and $v(x,y)$. (ii) I don't have a clue what to do with this; any help in the right direction would be great. I'm thinking maybe it has something to do with the Cauchy-Riemann equations, as $f$ is differentiable on $\mathbb{C}$.
From Euler's formula $e^{iz}=\cos z+i\sin z$, $e^{z^2}=e^{x^2-y^2+2ixy}=e^{x^2-y^2}\cos(2xy)+ie^{x^2-y^2}\sin(2xy)$ $f$ being $\mathbb{C}$-differentiable implies $u,v$ are $\mathbb{R}$-differentiable and satisfy the Cauchy-Riemann equations $$u_x=v_y$$ $$u_y=-v_x$$ Then $$v_y(x,y)=3x^2-3y^2\longrightarrow v=3x^2y-y^3+H(x)$$ $$v_x(x,y)=6xy\longrightarrow v=3x^2y+K(y)$$ By comparison, you get $$v(x,y)=3x^2y-y^3+C$$ which, for $C=0$, is the imaginary part of $f(z)=z^3$
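Both parts can be checked with sympy (a sketch: a numeric spot check for part (i), and the Cauchy-Riemann equations for part (ii)):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# part (i): u + iv should reproduce e^{z^2} at a sample point
u1 = sp.exp(x**2 - y**2) * sp.cos(2*x*y)
v1 = sp.exp(x**2 - y**2) * sp.sin(2*x*y)
z0 = 0.7 + 0.3j
lhs = complex((u1 + sp.I*v1).subs({x: z0.real, y: z0.imag}))
assert abs(lhs - complex(sp.exp(sp.sympify(z0)**2))) < 1e-12

# part (ii): u = x^3 - 3xy^2 and v = 3x^2 y - y^3 satisfy Cauchy-Riemann
u2, v2 = x**3 - 3*x*y**2, 3*x**2*y - y**3
assert sp.simplify(sp.diff(u2, x) - sp.diff(v2, y)) == 0
assert sp.simplify(sp.diff(u2, y) + sp.diff(v2, x)) == 0
```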
$k(tx,ty)=tk(x,y)$ then $k(x,y)=Ax+By$ A friend asked me today the following question: Let $k(x,y)$ be differentiable in all of $\mathbb{R}^{2}$ such that for every $(x,y)$ and for every $t$ it holds that $$k(tx,ty)=tk(x,y)$$ Prove that there exist $A,B\in\mathbb{R}$ such that $$k(x,y)=Ax+By$$ I want to use the chain rule somehow, but I am having difficulty using it (I am a bit rusty). I believe I can get $$\frac{\partial k}{\partial (tx)}\cdot\frac{\partial (tx)}{\partial t}+\frac{\partial k}{\partial (ty)}\cdot\frac{\partial (ty)}{\partial t}=k(x,y)$$ hence $$\frac{\partial k}{\partial (tx)}\cdot x+\frac{\partial k}{\partial (ty)}\cdot y=k(x,y)$$ but I don't see how this helps. Can someone please help me out?
First, $k(0,0)=0$ (take $t=0$ in the identity). Then, by differentiability of $k$ at the origin, $k(x, y)=\lim_{t\to 0}\frac{k(tx, ty)}{t}=xk_x+yk_y$ where $k_x=\partial_xk|_{(x,y)=(0,0)}, k_y=\partial_yk|_{(x,y)=(0,0)}$.
What is the Fourier transform of $f(x)=e^{-x^2}$? I remember there is a special rule for this kind of function, but I can't remember what it was. Does anyone know?
Caveat: I'm using the normalization $\hat f(\omega) = \int_{-\infty}^\infty f(t)e^{-it\omega}\,dt$. A cute way to derive the Fourier transform of $f(t) = e^{-t^2}$ is the following trick: Since $$f'(t) = -2te^{-t^2} = -2tf(t),$$ taking the Fourier transform of both sides will give us $$i\omega \hat f(\omega) = -2i\hat f'(\omega).$$ Solving this differential equation for $\hat f$ yields $$\hat f(\omega) = Ce^{-\omega^2/4}$$ and plugging in $\omega = 0$ finally gives $$ C = \hat f(0) = \int_{-\infty}^\infty e^{-t^2}\,dt = \sqrt{\pi}.$$ I.e. $$ \hat f(\omega) = \sqrt{\pi}e^{-\omega^2/4}.$$
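The result is easy to verify against the integral definition (a sketch using scipy's `quad`; only the cosine part contributes because the Gaussian is even):

```python
import numpy as np
from scipy.integrate import quad

def ft(w):
    # Fourier transform of e^{-t^2} at frequency w, under the stated convention
    val, _ = quad(lambda t: np.exp(-t**2) * np.cos(w*t), -np.inf, np.inf)
    return val

for w in (0.0, 1.0, 2.5):
    assert np.isclose(ft(w), np.sqrt(np.pi) * np.exp(-w**2 / 4))
```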
Expected number of edges: does $\sum\limits_{k=1}^m k \binom{m}{k} p^k (1-p)^{m-k} = mp$ Find the expected number of edges in $G \in \mathcal G(n,p)$. Method $1$: Let $\binom{n}{2} = m$. The probability that any set of edges $|X| = k$ is the set of edges in $G$ is $p^k (1-p)^{m-k}$. So the probability that $G$ has $k$ edges is $$\binom{m}{k} p^k ( 1-p )^{m-k}$$ This implies that $$E(X) = \sum_{k=1}^m k \binom{m}{k} p^k (1-p)^{m-k}$$ Method $2$: Choose an indicator random variable $X_e : \mathcal G(n,p) \to \{ 0,1 \}$ as follows: $$X_e(G) = \begin{cases} 1 & e \in E(G) \\ 0 & e \notin E(G) \end{cases}$$ So $E(X) = \sum_{e \in K_n} E(X_e(G)) = m p$ since each event $e \in E(G)$ and $f \in E(G)$ are independent. How do you reconcile these answers? I'm looking for either a mistake in reasoning or a direct proof that: $$\sum_{k=1}^m k \binom{m}{k} p^k (1-p)^{m-k} = mp$$ for $0 < p < 1$.
Related problems: (I), (II). Consider the function $$ f(x)=( xp+(1-p) )^m = \sum_{k=0}^{m} {m\choose k} p^k(1-p)^{m-k}x^k $$ Differentiating the above equation with respect to $x$ yields $$ \implies mp( xp+(1-p) )^{m-1} = \sum_{k=1}^{m}{m\choose k} k p^k(1-p)^{m-k}x^{k-1}. $$ Substituting $x=1$ into the above equation gives the desired result.
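The identity can be confirmed numerically for a sample $m$ and $p$ (a quick sketch):

```python
from math import comb

m, p = 10, 0.3
mean = sum(k * comb(m, k) * p**k * (1-p)**(m-k) for k in range(1, m+1))
assert abs(mean - m*p) < 1e-12   # binomial mean is m*p = 3.0
```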
Need help solving - $ \int (\sin 101x) \cdot\sin^{99}x\,dx $ I have a complicated integral to solve. I tried to split $(101 x)$ and proceed, but I am getting a pretty nasty answer while evaluating using parts. Are there any simpler methods to evaluate this integral? $$ \int\!\sin (101x)\cdot\sin^{99}(x)\, dx $$
Let's use the identity $$\sin(101x)=\sin(x)\cos(100x)+\cos(x)\sin(100x)$$ Then the integral becomes $$\int\sin^{100}(x)\cos(100x)dx+\int\sin^{99}(x)\sin(100x)\cos(x)dx$$ Integrating the first term by parts gives $$\int\sin^{100}(x)\cos(100x)dx=\frac{1}{100}\sin^{100}(x)\sin(100x)-\int\sin^{99}(x)\sin(100x)\cos(x)dx$$ Plugging this in, we see the remaining integrals cancel (up to a constant) and we are left with $$\int\sin^{99}(x)\sin(101x)dx=\frac{1}{100}\sin^{100}(x)\sin(100x)+C$$
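That the remaining integrals really cancel can be spot-checked by differentiating the claimed antiderivative (a sympy sketch; I evaluate numerically at a few points rather than rely on heavy trig simplification):

```python
import sympy as sp

x = sp.symbols('x')
F = sp.sin(x)**100 * sp.sin(100*x) / 100            # the claimed antiderivative
residual = sp.diff(F, x) - sp.sin(101*x) * sp.sin(x)**99

for v in (0.7, 1.3, 2.1):                           # numeric spot checks
    assert abs(float(residual.subs(x, v))) < 1e-9
```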
A parameterized elliptical integral (Legendre elliptic integral) $$ K(a,\theta)=\int_{0}^{\infty}\frac{t^{-a}}{1+2t\cos(\theta)+t^{2}}dt $$ For $$ -1<a<1;$$ $$-\pi<\theta<\pi$$ I know this integral to be a known tabulated Legendre elliptic integral, but the very fact that the numerator is parameterized completely throws a curveball. Using: $$ K(a,\theta)=\int_{0}^{\infty}\frac{t^{-a}}{(1+t^{2}) + 2t\cos(\theta)}dt $$ and letting $2\gamma = \theta$, $$ \rightarrow K(a,\theta)=\int_{0}^{\infty}\frac{t^{-a}}{(1+t^{2}) + 2t \cos(2\gamma)}dt $$ the trig function can then be manipulated using the double-angle identity $\cos(2\gamma)=1-2\sin^{2}(\gamma)$, turning it into a sine function: $$ \rightarrow K(a,\theta)=\int_{0}^{\infty}\frac{t^{-a}}{(1+t^{2}) + 2t(1-2\sin^{2}(\gamma))}dt $$ So it does have a sine in the denominator, which is suggestive of a Legendre elliptic integral. The rest is just solving for $k$ and simplifying the expression. Which leaves me with the parameter $a$. I have no idea what to do there. Any help is certainly appreciated.
This is not an elliptic integral; it can be expressed in terms of elementary functions: \begin{align} K(a,\theta)=\int_0^{\infty}\frac{t^{-a}dt}{t^2+2\cos\theta\, t+1}=\frac{1}{2i\sin\theta}\int_0^{\infty}\left(\frac{t^{-a}}{t+e^{-i\theta}}-\frac{t^{-a}}{t+e^{i\theta}}\right)dt=\\ =\frac{1}{2i\sin\theta}\left(\frac{\pi e^{ia\theta}}{\sin\pi a}-\frac{\pi e^{-ia\theta}}{\sin\pi a}\right)=\frac{\pi\sin (a\theta)}{\sin\theta\sin\pi a}. \end{align}
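The closed form checks out against numerical quadrature (a scipy sketch; I split the integral at $t=1$ because of the integrable singularity $t^{-a}$ at the origin):

```python
import numpy as np
from scipy.integrate import quad

def K(a, theta):
    f = lambda t: t**(-a) / (1 + 2*t*np.cos(theta) + t**2)
    head, _ = quad(f, 0, 1)       # integrable singularity t^{-a} at 0
    tail, _ = quad(f, 1, np.inf)
    return head + tail

a, theta = 0.3, 1.2
closed = np.pi * np.sin(a*theta) / (np.sin(theta) * np.sin(np.pi*a))
assert np.isclose(K(a, theta), closed, rtol=1e-5)
```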
Let $\ \varphi \, : \, V\rightarrow V\ $ be a linear transformation. Prove that $\ Im(\varphi \, \circ \varphi) \subseteq Im \,\varphi\ $ Let V be a vector space and $\ \varphi \, : \, V\rightarrow V\ $ be a linear transformation. Prove that: $$\ Im(\varphi \, \circ \varphi) \subseteq Im \,\varphi\ $$ I am struggling to see what conditions I have to verify, in order to prove it.
Recall that $$\mathrm{Im}(\varphi)=\{\varphi(x)\quad|\quad x\in V\}$$ Now take $y\in \mathrm{Im}(\varphi\circ \varphi)$ then there's $x\in V$ such that $$y=\varphi\circ \varphi(x)= \varphi( \underbrace{\varphi(x)}_{z\in V})=\varphi(z)\in \mathrm{Im}(\varphi)$$ so we have $$\ Im(\varphi \, \circ \varphi) \subseteq Im \,\varphi\ $$
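The same inclusion can be illustrated concretely for a matrix map (a numpy sketch; the rank-1 example is my own): adjoining the columns of $A^2$ to those of $A$ never enlarges the column space.

```python
import numpy as np

A = np.array([[1., 2.], [2., 4.]])     # a rank-1 map, so Im(A) is a line
# Im(A∘A) ⊆ Im(A): stacking the columns of A@A onto A does not raise the rank
assert (np.linalg.matrix_rank(np.hstack([A, A @ A]))
        == np.linalg.matrix_rank(A) == 1)
```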
Integration by parts, Reduction I was able to complete part (a) easily by using integration by parts. I ended up getting: $$I(n) = -\frac{1}{n} \cos x\cdot \sin^{n-1}x + \frac{n-1}{n}\cdot I(n-2)$$ For question (b), when I integrated $1/\sin^4x$ and subbed in $n = -4$, I got the following expression: $$\frac{1}{4}\cdot\cos x\cdot\sin^{-5}x + \frac{5}{4} \int \sin^{-6}x\, dx$$ My question is, how do I integrate $\sin^{-6}x$? It's not the same as integrating $\sin^{6}x$, which will actually get you somewhere. It feels like I'm going in a loop when integrating $\sin^{-6}x$. I might have gone wrong somewhere; help would be very much appreciated :)
Putting $n=-2,$ in $$I_n=-\frac1n\cos x\sin^{n-1}x+\frac{n-1}nI_{n-2}$$ we get $$I_{-2}=-\frac1{(-2)}\cos x\sin^{-2-1}x+\frac{(-2-1)}{(-2)}I_{-2-2}$$ $$\implies \frac32I_{-4}=I_{-2}-\frac{\cos x}{2\sin^3x}$$ Now, $$I_{-2}=\int\sin^{-2}xdx=\int \csc^2xdx=-\cot x+C$$ Can you finish it from here?
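One way to finish (shown here only for checking purposes) is $I_{-4}=\frac23\left(-\cot x-\frac{\cos x}{2\sin^3 x}\right)+C$; differentiating this back confirms it (a sympy sketch with numeric spot checks):

```python
import sympy as sp

x = sp.symbols('x')
# I_{-4} = (2/3) * (I_{-2} - cos x / (2 sin^3 x)), dropping the constant
I4 = sp.Rational(2, 3) * (-sp.cot(x) - sp.cos(x) / (2*sp.sin(x)**3))
residual = sp.diff(I4, x) - 1/sp.sin(x)**4

for v in (0.5, 1.0, 2.5):
    assert abs(float(residual.subs(x, v))) < 1e-9
```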
Ways of merging two incomparable sorted lists of elements keeping their relative ordering Suppose that, for a real application, I have ended up with a sorted list A = {$a_1, a_2, ..., a_{|A|}$} of elements of a certain kind (say, Type-A), and another sorted list B = {$b_1, b_2, ..., b_{|B|}$} of elements of a different kind (Type-B), such that Type-A elements are only comparable with Type-A elements, and likewise for Type-B. At this point I seek to count the following: in how many ways can I merge both lists together, in such a way that the relative ordering of Type-A and Type-B elements, respectively, is preserved? (i.e. that if $P_M(x)$ represents the position of an element of A or B in the merged list, then $P_M(a_i)<P_M(a_j)$ and $P_M(b_i)<P_M(b_j)$ for all $i<j$) I've tried to figure this out constructively by starting with an empty merged list and inserting elements of A or B one at a time, counting in how many ways each insertion can be done, but since this depends on the placement of previous elements of the same type, I've had little luck so far. I also tried explicitly counting all possibilities for different (small) lengths of A and B, but I've been unable to extract any potential general principle in this way.
For merging $N$ sorted lists, here is a good way to see that the solution is $$\frac{(|A_1|+\dots+|A_N|)!}{|A_1|!\dots |A_N|!}$$ All the $|A_1|+\dots+|A_N|$ elements can be permuted in $(|A_1|+\dots+|A_N|)!$ ways. Among these, any arrangement in which the ordering of the $A_1$ elements differs from the given order has to be thrown out. If you keep the positions of everything except $A_1$ fixed, you can generate $|A_1|!$ permutations by shuffling $A_1$. Only 1 among these $|A_1|!$ arrangements is valid. Hence the number of arrangements containing the right order of $A_1$ is $$\frac{(|A_1|+\dots+|A_N|)!}{|A_1|!}$$ Now repeat the argument for $A_2, A_3, \dots, A_N$.
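The multinomial count can be cross-checked against brute-force enumeration for small lists (a Python sketch; `merge_count` is my own helper name):

```python
from math import factorial
from itertools import permutations

def merge_count(*sizes):
    # multinomial coefficient (sum of sizes)! / (s1! s2! ... sN!)
    total = factorial(sum(sizes))
    for s in sizes:
        total //= factorial(s)
    return total

# brute force for |A| = 2, |B| = 3: distinct interleavings of 'AABBB'
brute = len(set(permutations('AABBB')))
assert brute == merge_count(2, 3) == 10
```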
Binary Decision Diagram of $(A\Rightarrow C)\wedge (B\Rightarrow C)$? I made a Binary Decision Diagram for $(A\vee B)\Rightarrow C$, which I think is correct. Now I want to make a Binary Decision Diagram for $(A\Rightarrow C) \wedge (B\Rightarrow C)$ but I can't. I can make 2 BDDs, one for $(A\Rightarrow C)$ and one for $(B\Rightarrow C)$. In the picture below is just the BDD for $(A\Rightarrow C)$, because the other is the same, just with $B$ instead of $A$. How can I make one BDD for $(A\Rightarrow C) \wedge (B\Rightarrow C)$?
$(A\Rightarrow C)\land(B\Rightarrow C) \equiv (\lnot A\lor C)\land(\lnot B\lor C)\equiv (\lnot A\land\lnot B)\lor C$. So, this is true iff $C$ is true, or both $A$ and $B$ are false -- which is exactly the same truth function as $(A\vee B)\Rightarrow C$, so the BDD you already built for that formula works here too.
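A truth-table check (a Python sketch) confirms that $(A\Rightarrow C)\wedge(B\Rightarrow C)$ is the same Boolean function as $(A\vee B)\Rightarrow C$ from the first part, so the same BDD serves for both:

```python
from itertools import product

implies = lambda p, q: (not p) or q

for A, B, C in product((False, True), repeat=3):
    conj = implies(A, C) and implies(B, C)
    assert conj == implies(A or B, C)            # same function as part 1
    assert conj == ((not A and not B) or C)      # the simplified form
```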
Prove by mathematical induction that $1 + 1/4 +\ldots + 1/4^n \to 4/3$ Please help. I haven't found any text on how to prove this sort of problem by induction: $$ \lim_{n\to +\infty}1 + \frac{1}{4} + \frac{1}{4^2} + \cdots+ \frac{1}{4^n} = \frac{4}{3} $$ I can't quite get how one can prove such a thing. I can prove basic divisibility statements by induction, but not this. Thanks.
If you want to use proof by induction, you have to prove the stronger statement that $$ 1 + \frac{1}{4} + \frac{1}{4^2} + \cdots+ \frac{1}{4^n} = \frac{4}{3} - \frac{1}{3}\frac{1}{4^n} $$ The limit then follows, since $\frac{1}{3}\frac{1}{4^n}\to 0$ as $n\to\infty$.
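The stronger identity is easy to verify exactly with rational arithmetic (a Python sketch):

```python
from fractions import Fraction

for n in range(12):
    partial = sum(Fraction(1, 4**k) for k in range(n + 1))
    assert partial == Fraction(4, 3) - Fraction(1, 3) / 4**n
```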
Spanned divisors and Base Points Let $X$ be a smooth algebraic variety. We say that a line bundle $\xi\in H^1(X,\mathcal{O}^\ast)$ is spanned if for each $x\in X$ there is a global section $s\in H^0(X,\mathcal{O}(\xi))$ with $s(x)\neq 0$. Let $\xi=[D]$ be the line bundle associated to a divisor $D$. If $\xi$ is spanned, why can we say that the linear system $|D|$ is base-point free? Basically for each $x$ I want to find $D'\in|D|$ such that $x\notin D'$.
Assume that the line bundle $\mathscr{O}(D)$ is spanned. So given $x \in X$, there exists a section $s \in H^0(X, \mathscr{O}(D))$ such that $s(x) \neq 0$. Let $D'$ denote the divisor of zeroes of the section $s$; then $x$ is not contained in $\mathrm{Supp}(D')$. The remaining issue is to show that $D'$ is linearly equivalent to $D$. To do this, we need to show there's a rational function $f$ on $X$ such that $\mathrm{div} f = D'-D$. But there's a correspondence (explained in detail e.g. in Shafarevich Volume 2, VI.1.4) between global sections of the bundle $\mathscr{O}(D)$ and rational functions $f$ on $X$ such that $\mathrm{div} f + D \geq 0$; moreover, if $s \in H^0(X,\mathscr{O}(D))$ has zero-set $D'$, and $s$ corresponds to a rational function $f$, then $D'=\mathrm{div} f + D$. So our chosen section $s$ from the first paragraph yields a rational function $f$ such that $D'-D=\mathrm{div} f$, which means the two divisors are linearly equivalent, as we wanted.
Roots of cubic polynomial lying inside the circle Show that all roots of $a+bz+cz^2+z^3=0$ lie inside the circle $|z|=\max\{1,|a|+|b|+|c| \}$ Now this problem is given in Beardon's Algebra and Geometry, third chapter, on complex numbers. What might be relevant for this problem: * *the author previously discussed roots of unity; *a little (I mean about a page of informal discussion) about cubic and quartic equations; *then gave a proof of the fundamental theorem of algebra (the existence of a root was given as an informal proof and the rest using induction) and then a corollary of it (if $p(z) = q(z)$ at $n + 1$ distinct points then $p(z) = q(z)$ for all $z$, where both polynomials are of degree at most $n$); I was trying to see how I should approach it, with no success for quite some time. Looking on the net I found that this would be kind of easy with Rouche's theorem, but I was not given that. So is it possible to solve it in a simple way with what was given? Thanks!
I'd simply be looking at showing that the $z^3$ term is dominant, so there can be no roots beyond the bound: if $|z| > \max\{1,|a|+|b|+|c|\}$, then $$|a+bz+cz^2| \le |a|+|b||z|+|c||z|^2 \le (|a|+|b|+|c|)|z|^2 < |z|\cdot|z|^2 = |z^3|,$$ so $a+bz+cz^2+z^3$ cannot vanish there. I don't think it is at all sophisticated.
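The dominant-term bound can be tested numerically on random cubics (a numpy sketch; the sample size and distribution are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(200):
    a, b, c = rng.normal(size=3) * 3
    bound = max(1.0, abs(a) + abs(b) + abs(c))
    roots = np.roots([1, c, b, a])          # a + b z + c z^2 + z^3
    assert all(abs(z) <= bound + 1e-8 for z in roots)
```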
How to write this conic equation in standard form? $$x^2+y^2-16x-20y+100=0$$ Standard form? Circle or ellipse?
Recall that one of the usual standard forms is: $(x - a)^{2} + (y - b)^{2} = r^{2}$ where... * *(a,b) is the center of the circle *r is the radius of the circle Rearrange the terms to obtain: $x^{2} - 16x + y^{2} - 20y + 100 = 0$ Then, by completing the squares, we have: $(x^{2} - 16x + 64) + (y^{2} - 20y + 100) = 64$ $(x - 8)^{2} + (y - 10)^{2} = 8^{2}$ Thus, we have a circle with radius $r = 8$ and center $(a,b) = (8,10)$
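A quick symbolic check that the completed-square form matches the original equation (a sympy sketch):

```python
import sympy as sp

x, y = sp.symbols('x y')
original = x**2 + y**2 - 16*x - 20*y + 100
standard = (x - 8)**2 + (y - 10)**2 - 64    # circle, center (8,10), radius 8
assert sp.expand(standard - original) == 0
```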
Number of Invariant Subspaces of a Jordan Block I'm asking this question on behalf of a person I'm supposed to be tutoring who has this problem as part of eir homework. The problem is "How many invariant subspaces are there of a transformation $T$ that sends $v\mapsto J_{\lambda,n}v$" where $J_{\lambda,n}$ is a Jordan block. We are pretty sure the answer is $n+1$, where the spaces are the trivial space and the ones spanned by sets of columns of this form: $\Big\{$ $\pmatrix{1 \\ 0 \\ 0 \\ \vdots \\ 0 \\ 0 \\ \vdots \\ 0}$ , $~\pmatrix{0 \\ 1 \\ 0 \\ \vdots \\ 0 \\ 0 \\ \vdots \\ 0}$ , $~\pmatrix{0 \\ 0 \\ 1 \\ \vdots \\ 0 \\ 0 \\ \vdots \\ 0}$ , $~\cdots$ , $~\pmatrix{0 \\ 0 \\ 0 \\ \vdots \\ 1 \\ 0 \\ \vdots \\ 0}$ $\Big\}$ . But we are not sure how to explain that there aren't others. Can anyone help us, or at least put us on the right track?
You are right. Let $J_{\lambda,n}=\lambda\,I+N$ where $N$ is the nilpotent part, which maps $e_i\mapsto e_{i-1}$ if $i>1$ and $e_1\mapsto 0$. * *Observe that the invariant subspaces of $\lambda\,I+N$ coincide with those of $N$. *Assume that $v=(v_1,..,v_k,0,..,0)$ with $v_k\ne 0$, and consider its generated $N$-invariant subspace $V$. We have $V\ni N^{k-1}v=v_k\,e_1$, so $e_1\in V$. Similarly, by $N^{k-2}v,e_1\in V$ we can conclude $e_2\in V$, and so on, until $e_k\in V$. Thus, indeed $V={\rm span}(e_1,..,e_k)$.
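The argument can be illustrated numerically (a numpy sketch; the sample vector is mine): for a vector whose top nonzero coordinate is the $k$-th, the $N$-orbit spans exactly $\mathrm{span}(e_1,\dots,e_k)$.

```python
import numpy as np

n = 5
N = np.diag(np.ones(n - 1), 1)       # nilpotent part: N e_1 = 0, N e_i = e_{i-1}
v = np.array([2., -1., 3., 0., 0.])  # top nonzero coordinate is v_3, so k = 3

# columns v, Nv, N^2 v, ... span the N-invariant subspace generated by v
orbit = np.column_stack([np.linalg.matrix_power(N, m) @ v for m in range(n)])
assert np.linalg.matrix_rank(orbit) == 3

# and that span coincides with span(e_1, e_2, e_3)
E3 = np.eye(n)[:, :3]
assert np.linalg.matrix_rank(np.hstack([orbit, E3])) == 3
```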
Limit $\frac{\tan^{-1}x - \tan^{-1}\sqrt{3}}{x-\sqrt{3}}$ without L'Hopital's rule. Please solve this without L'Hopital's rule? $$\lim_{x\rightarrow\sqrt{3}} \frac{\tan^{-1} x - \frac{\pi}{3}}{x-\sqrt{3}}$$ All I figured out how to do is to rewrite this as $$\frac{\tan^{-1} x - \tan^{-1}\sqrt{3}}{x-\sqrt{3}}$$ Any help is appreciated!
We want $$L = \lim_{x \to \sqrt{3}} \dfrac{\arctan(x) - \pi/3}{x - \sqrt3}$$ Let $\arctan(x) = t$. We then have $$L = \lim_{t \to \pi/3} \dfrac{t-\pi/3}{\tan(t) - \sqrt{3}} = \lim_{t \to \pi/3} \dfrac{t-\pi/3}{\tan(t) - \tan(\pi/3)} = \dfrac1{\left.\dfrac{d \tan(t)}{dt} \right\vert_{t=\pi/3}} = \dfrac1{\sec^2(t) \vert_{t=\pi/3}} = \dfrac14$$
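A numeric difference quotient agrees with the value $1/4$ (a quick sketch):

```python
import math

x0 = math.sqrt(3)
for h in (1e-3, 1e-5, 1e-7):
    q = (math.atan(x0 + h) - math.pi/3) / h
    assert abs(q - 0.25) < 1e-2   # derivative of arctan at sqrt(3) is 1/(1+3)
```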
Induction proof: $\dbinom{2n}{n}=\dfrac{(2n)!}{n!n!}$ is an integer. Prove using induction: $\dbinom{2n}{n}=\dfrac{(2n)!}{n!n!}$ is an integer. I tried but I can't do it.
This is one instance of a strange phenomenon: proving something seemingly more complicated makes things simpler. Show that $\binom{n}{k}$ is an integer for all $0\leq k\leq n$, and that will show what you want. To do so, use induction on $n$ together with the identity $$\binom{n+1}{k}=\binom{n}{k}+\binom{n}{k-1}.$$
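That recursion, with its base cases, is exactly why every binomial coefficient is an integer; a Python sketch computing $\binom{n}{k}$ only by integer additions and checking the $\binom{2n}{n}$ case:

```python
from math import factorial

def binom(n, k):
    # Pascal's rule: C(n,k) = C(n-1,k) + C(n-1,k-1); integers in, integers out
    if k < 0 or k > n:
        return 0
    if n == 0:
        return 1
    return binom(n - 1, k) + binom(n - 1, k - 1)

for n in range(9):
    assert binom(2*n, n) == factorial(2*n) // (factorial(n) ** 2)
    assert factorial(2*n) % factorial(n) ** 2 == 0   # the division is exact
```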
Comparing the eigenvalues $\lambda_1(D_a)$ and $\lambda_1(D_c)$ Let $f(x)$ be a smooth function on $[-1,1]$ such that $f(x)>0$ for all $x\in(-1,1)$ and $f(-1)=f(1)=0$. Consider $\gamma\subset\Bbb{R}^2$, the graph of $f(x)$. Let $T_a$ be the symmetry with respect to the $x$-axis and $T_c$ the central symmetry with respect to the origin. Now consider two domains: $D_a$, bounded by the curves $\gamma$ and $T_a(\gamma)$, and $D_c$, bounded by the curves $\gamma$ and $T_c(\gamma)$. Let $\lambda_1(D_a)$ and $\lambda_1(D_c)$ be the first eigenvalues of the Dirichlet problem on $D_a$ and $D_c$ respectively. How do $\lambda_1(D_a)$ and $\lambda_1(D_c)$ compare? (Is $\lambda_1(D_c)\geq\lambda_1(D_a)$ or $\lambda_1(D_a)\geq \lambda_1(D_c)$?) There is a theorem in Strauss's book which says: if $\Omega_1\subset\Omega_2$ then $\lambda_1(\Omega_1)\geq\lambda_1(\Omega_2)$. Thanks.
Fact 1. In the special case $$f(-x)\le f(x),\qquad 0\le x\le 1\tag0$$ the inequality $\lambda_1(D_a)\le \lambda_1(D_c)$ holds. Proof. Let $u$ be the first eigenfunction for $D_c$. Extend it to $\mathbb R^2$ by zero outside of $D_c$. For $(x,y)\in\mathbb R^2$ define $$v(x,y)=\begin{cases} \max(u(x,y),u(-x,y))\quad & x\ge 0 \\ \min(u(x,y),u(-x,y))\quad & x\le 0 \end{cases}$$ (This is called the polarization of $u$ with respect to the $y$-axis.) Then the following hold: $$\int_{\mathbb R^2} v^2 = \int_{\mathbb R^2} u^2 \tag1$$ $$\int_{\mathbb R^2} |\nabla v|^2 = \int_{\mathbb R^2} |\nabla u|^2 \tag2$$ $$v=0\quad \text{ on } \partial D_a\tag3$$ Here (1) and (3) are relatively easy (you need assumption (0) to prove (3)). The equality (2) is not very straightforward unless you know something about Sobolev spaces. The paper An approach to symmetrization via polarization should give you an idea of what is going on here. Since $v$ is an eligible function in the variational definition of $\lambda_1$, we have $$\lambda_1(D_a)\le \frac{\int |\nabla v|^2}{\int v^2}=\lambda_1(D_c) \tag4$$ as claimed. Fact 2. There exist functions $f$ for which $\lambda_1(D_a)<\lambda_1(D_c)$. Let $f=\chi_{[2/5,3/5]}$ (I know it's neither continuous nor positive). Then $D_a$ is a rectangle of dimensions $2\times (1/5)$ while $D_c$ is the union of two rectangles of dimensions $1\times (1/5)$. The fundamental frequency of the larger rectangle is smaller. Therefore, $\lambda_1(D_a)<\lambda_1(D_c)$ in this case. The function $f$ can be approximated by smooth positive functions. The first eigenvalue depends continuously on the domain in various senses (you'll have to dig in the literature). Conclusion: $\lambda_1(D_a)<\lambda_1(D_c)$ holds for some domains. An alternative proof of Fact 2 can be given by following the proof of Fact 1 and observing that (4) is a strict inequality in general. Indeed, $v$ is in general not differentiable, while eigenfunctions are; therefore $v$ is not an eigenfunction.
Can every infinite set be divided into pairwise disjoint subsets of size $n\in\mathbb{N}$? Let $S$ be an infinite set and $n$ be a natural number. Does there exist partition of $S$ in which each subset has size $n$? * *This is pretty easy to do for countable sets. Is it true for uncountable sets? *If (1) is true, can it be proved without choice?
A geometric answer, for cardinality $c$. For $x, y \in S^1$ (the unit circle), let $x \sim y$ iff $x$ and $y$ are vertices of the same regular $n$-gon centered at the origin. Now the title of this question speaks of infinite sets generally, which is a different question. The approach by Hagen von Eitzen is probably about the best you'll find in the general case.
Prove by mathematical induction for any prime number$ p > 3, p^2 - 1$ is divisible by $3$? Prove by mathematical induction for any prime number $p > 3, p^2 - 1$ is divisible by $3$? Actually the above expression is divisible by $3,4,6,8,12$ and $24$. I have proved the divisibility by $4$ like: $$ \begin{align} p^2 -1 &= (p+1)(p-1)\\ &=(2n +1 +1)(2n + 1 - 1)\;\;\;\text{as $p$ is prime, it can be written as $(2n + 1)$}\\ &= (2n + 2)(2n)\\ &= 4(n)(n + 1) \end{align} $$ Hence $p^2 - 1$ is divisible by 4. But I cannot prove the divisibility by $3$.
Hint: $p \equiv \pm 1 \pmod 3 \implies p^2 \equiv 1 \pmod 3$ for every prime $p>3$.
Solving systems of linear equations using matrices, 3 equations, 4 variables I understand how to solve systems of linear equations when they have the same number of variables as equations. But what about when there are only three equations and 4 variables? For example, when I was looking through an exam paper, I came across this question: $$w + x + y + z = 1$$ $$2w + x + 3y + z = 7$$ $$2w + 2x + y + 2z = 7$$ The question does not explicitly ask for us to solve using matrices, but it is in a question about matrices... Any help would be appreciated!
There are a couple of things you have to pay attention to when solving a system of equations. The first thing you want to pay attention to is the rank of the corresponding matrix, defined as the number of pivot rows in the Reduced Row Echelon form of your matrix (that you arrive at via Gaussian elimination). You can think of the rank as the number of independent equations. For example, if you have $a + b = 3$ and $2a + 2b = 6$, those equations are not independent. The second one does not tell you anything that the first one doesn't tell you already. So instead of characterizing a system as "m equations with n unknowns", treat it as "m independent equations with n unknowns". The next thing you have to know is how to identify the solution space. Linear algebra tells you that if you have a matrix of rank r and n columns (unknowns), you will have n - r free variables that can take any value. Linear algebra also tells you that the complete solution space consists of any particular solution plus the null space of the matrix. To find both a particular solution and a basis for the null space, you will want to use the Reduced Row Echelon form.
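To make the mechanics concrete, here is a small exact-arithmetic Gauss–Jordan sketch applied to the system in the question (the `rref` helper is my own, not from any library):

```python
from fractions import Fraction

def rref(M):
    """Gauss-Jordan elimination to reduced row echelon form, in exact arithmetic.
    Returns (R, pivots) where pivots lists the pivot columns."""
    M = [[Fraction(v) for v in row] for row in M]
    rows, cols = len(M), len(M[0])
    pivots, r = [], 0
    for c in range(cols - 1):            # last column is the right-hand side
        pr = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pr is None:
            continue
        M[r], M[pr] = M[pr], M[r]
        piv = M[r][c]
        M[r] = [v / piv for v in M[r]]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                f = M[i][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
        if r == rows:
            break
    return M, pivots

# augmented matrix [A | b] for the system in the question
aug = [[1, 1, 1, 1, 1],
       [2, 1, 3, 1, 7],
       [2, 2, 1, 2, 7]]
R, pivots = rref(aug)
# rank 3 with four unknowns -> one free variable; reading off R:
# w = 16, y = -5, x = -10 - z, with z free
```

Here the rank is 3, so there is $4-3=1$ free variable, and every solution is $(16,\,-10-z,\,-5,\,z)$.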
How to solve equations of algebra? Let $a_i>0, b_i>0$ ($i=1,2,\ldots,N$). How to prove that there exist unique $x_i>0$ ($i=1,2,\ldots,N$) such that $$a_ix_i^{b_i}+x_1+x_2+\cdots+x_N=1,\;\;i=1,2,\ldots,N.$$ Thank you.
Replace $x_1+\ldots+x_N$ in the equation by $S$. A solution to the problem must have $0<S<1$, since $0<a_ix_i^{b_i}=1-S$. Then $$ x_i=\left(\frac{1-S}{a_i}\right)^{1/b_i}=f_i(S). $$ Each $f_i$ is a continuous, nonnegative function on $[0,1]$, so the same is true for the sum $F(S)=\sum_{i=1}^N f_i(S)$. $F$ is strictly monotonically decreasing with $F(0)=\sum_i\left(\frac{1}{a_i}\right)^{1/b_i}>0$ and $F(1)=0$. Since $F(S)-S$ is then strictly decreasing and changes sign on $[0,1]$, there is a unique point where $F(S)=S$. Now you should be able to conclude.
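Because $F$ is strictly decreasing, the fixed point can also be found numerically by bisection; here is a sketch with made-up sample data $a=(2,3)$, $b=(1,2)$ (the function name is mine):

```python
def solve_system(a, b, tol=1e-14):
    """Find the unique S in (0,1) with F(S) = S by bisection (F is strictly
    decreasing), then recover x_i = ((1-S)/a_i)^(1/b_i)."""
    def F(S):
        return sum(((1 - S) / ai) ** (1 / bi) for ai, bi in zip(a, b))
    lo, hi = 0.0, 1.0                 # F(0) > 0 and F(1) = 0 < 1
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if F(mid) > mid:              # root of F(S) - S lies to the right
            lo = mid
        else:
            hi = mid
    S = (lo + hi) / 2
    return [((1 - S) / ai) ** (1 / bi) for ai, bi in zip(a, b)]

# made-up sample data: a = (2, 3), b = (1, 2)
x = solve_system([2.0, 3.0], [1.0, 2.0])
```

Each $x_i$ then satisfies $a_i x_i^{b_i}+x_1+\cdots+x_N=1$ up to the bisection tolerance.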
Rotating x,y points 45 degrees I have a two dimensional data set that I would like to rotate 45 degrees such that a 45 degree line from the points (0,0 and 10,10) becomes the x-axis. For example, the x,y points (1,1), (2,2), and (3,3) would be transformed to the points (0,1), (0,2), and (0,3), respectively, such that they now lie on the x-axis. The points (1,0), (2,0), and (3,0) would be rotated to the points (1,1), (2,2), and (3,3). How can I calculate how to rotate a series of x,y point 45 degrees?
Here is some good background about this topic in wikipedia: Rotation Matrix
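In case a concrete sketch helps: rotating clockwise by $45^\circ$ (i.e. by $-45^\circ$) sends the line $y=x$ onto the $x$-axis. One caveat about the example in the question: a rotation preserves distances from the origin, so $(1,1)$ lands at $(\sqrt2,0)$, not at $(1,0)$. A minimal implementation of the rotation matrix:

```python
import math

def rotate(points, degrees):
    """Rotate 2-D points about the origin by the given angle (counterclockwise;
    pass a negative angle for clockwise)."""
    t = math.radians(degrees)
    c, s = math.cos(t), math.sin(t)
    return [(c * x - s * y, s * x + c * y) for x, y in points]

# Clockwise 45 degrees maps points on the line y = x onto the x-axis.
on_diagonal = rotate([(1, 1), (2, 2), (3, 3)], -45)
```

After this transform, all three points have $y$-coordinate $0$, with $x$-coordinates $\sqrt2, 2\sqrt2, 3\sqrt2$.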
An intuitive idea about the fundamental group of $\mathbb{RP}^2$ Can someone explain to me, with an example, the meaning of $\pi_1(\mathbb{RP}^2,x_0) \cong \mathbb{Z}_2$? We consider the real projective plane as a quotient of the disk. I didn't receive an exhaustive answer to this question from my teacher; in fact he said that the loop $2a$ with base point $P$ is homotopically equivalent to the constant loop with base point $P$, but this doesn't resolve my doubts. Obviously I can calculate it, so the problem is NOT how to calculate it using the Van Kampen theorem; rather, I need to get an idea of why, for every loop $a$, $[2a] = [1]$.
You can see another set of related pictures here, which gives the script for this video Pivoted Lines and the Mobius Band (1.47MB). The term "Pivoted Lines" is intended to be a non technical reference to the fact that we are discussing rotations, and their representations. The video shows the "identification" of the Projective Plane as a Mobius Band and a disk, the identification being shown by a point moving from one to the other. Then the point makes a loop twice round the Mobius Band, as in the above, and this loop moves off the Band onto the disk and so to a point. Thus we are representing motion of motions!
Singular asymptotics of Gaussian integrals with periodic perturbations At the bottom of page 5 of this paper by Giedrius Alkauskas it is claimed that, for a $1$-periodic continuous function $f$, $$ \int_{-\infty}^{\infty} f(x) e^{-Ax^2}\,dx = \sqrt{\frac{\pi}{A}} \int_0^1 f(x)\,dx + O(1) \tag{1} $$ as $A \to 0^+$. How can I prove $(1)$? I'm having a hard time doing it rigorously since I'm unfamiliar with Fourier series. If I ignore convergence and just work formally then I can get something that resembles statement $(1)$. Indeed, since $f$ is $1$-periodic we write $$ f(x) = \int_0^1 f(t)\,dt + \sum_{n=1}^{\infty} \Bigl[a_n \cos(2\pi nx) + b_n \sin(2\pi nx)\Bigr]. $$ Multiply this by $e^{-Ax^2}$ and integrate term by term, remember that $\sin(2\pi nx)$ is odd, and get $$ \begin{align} \int_{-\infty}^{\infty} f(x) e^{-Ax^2}\,dx &= \int_{-\infty}^{\infty} e^{-Ax^2}\,dx \int_0^1 f(x)\,dx + \sum_{n=1}^{\infty} a_n \int_{-\infty}^{\infty} \cos(2\pi nx) e^{-Ax^2}\,dx \\ &= \sqrt{\frac{\pi}{A}} \int_0^1 f(x)\,dx + \sqrt{\frac{\pi}{A}} \sum_{n=1}^{\infty} a_n e^{-\pi^2 n^2/A} \\ &= \sqrt{\frac{\pi}{A}} \int_0^1 f(x)\,dx + O\left(A^{-1/2} e^{-\pi^2/A}\right) \end{align} $$ as $A \to 0^+$. I suppose the discrepancy in the error stems from the fact that $f$ need not be smooth (I think in the paper it's actually nowhere differentiable). Obviously there are some issues, namely the convergence of the series and the interchange of summation and integration. Resources concerning the analytic properties of Fourier series which are relevant would be much appreciated. Answers which do not use Fourier series are also very welcome.
Your estimate of error term is correct. The following are just some supplementary details to make your argument more rigorous. Let $(f_n)_{n\ge 1}$ be the sequence of partial sums of the Fourier series of $f$, i.e. $$ f_n(x) = \int_0^1 f(t)\,dt + \sum_{k=1}^n \big(a_k \cos(2\pi kx) + b_k \sin(2\pi kx)\big). $$ Note that $f_n$ converges to $f$ in $L^2([0,1])$, so by Cauchy-Schwarz's inequality, as $n\to\infty$, $$\int_0^1|f(t)-f_n(t)| dt\le \big(\int_0^1|f(t)-f_n(t)|^2dt\big)^{\frac{1}{2}}\to 0.\tag{1}$$ Also note that, for every $n\ge 1$, $$|a_n|=2\cdot\big|\int_0^1 f(t)\cos(2\pi nt) dt\big|\le 2\int_0^1|f(t)|dt.\tag{2}$$ As you have shown, $$\int_{-\infty}^{+\infty}f_n(x)e^{-Ax^2}dx=\sqrt{\frac{\pi}{A}}(\int_0^1f(t)dt+\sum_{k=1}^na_ke^{\frac{-k^2\pi^2}{A}}).\tag{3}$$ From $(2)$ and $(3)$ we know, $$\big|\sqrt{\frac{A}{\pi}}\int_{-\infty}^{+\infty}f_n(x)e^{-Ax^2}dx-\int_0^1f(t)dt\big|\le 2\int_0^1|f(t)|dt\cdot\sum_{k=1}^\infty e^{\frac{-k^2\pi^2}{A}}\le M e^{\frac{-\pi^2}{A}},\tag{4}$$ where $M>0$ is independent of $n\ge 1$ and $0<A\le 1$. Since for every $m\in\mathbb{Z}$, $$\int_m^{m+1}|f_n(x)-f(x)|dx=\int_0^1|f_n(x)-f(x)|dx,$$ $$ \int_{-\infty}^{+\infty}|f_n(x)-f(x)|e^{-Ax^2}dx\le2 \int_0^1|f_n(x)-f(x)|dx\cdot\sum_{m=0}^\infty e^{-Am^2}<\infty.\tag{5} $$ Letting $n\to\infty$ in $(5)$, from $(1)$ we know that $$\lim_{n\to\infty}\int_{-\infty}^{+\infty}|f_n(x)-f(x)|e^{-Ax^2}dx=0.\tag{6}$$ Combining $(4)$ and $(6)$, it follows that $$|\int_{-\infty}^{+\infty}f(x)e^{-Ax^2}dx-\sqrt{\frac{\pi}{A}}\int_0^1f(t)dt|\le M\sqrt{\frac{\pi}{A}} e^{\frac{-\pi^2}{A}}.\tag{7}$$
Dirichlet series generating function I am stuck on how to do this question: Let $d(n)$ denote the number of divisors of $n$. Show that the Dirichlet series generating function of the sequence $\{d(n)^2\}$ equals $\zeta^4(s)/\zeta(2s)$, where $\zeta(s)$ is the Riemann zeta function. (I apologize, I am not very accustomed to LaTeX.) Any help is highly appreciated, I am studying for an exam. Everywhere I have looked on the internet says it is obvious, but none of them seem to want to explain why or how to do it, so please help. Thank you
There are many different ways to approach this, depending on what you are permitted to use. One simple way is to use Euler products. The Euler product for $$Q(s) = \sum_{n\ge 1} \frac{d(n)^2}{n^s}$$ is given by $$ Q(s) = \prod_p \left( 1 + \frac{2^2}{p^s} + \frac{3^2}{p^{2s}} + \frac{4^2}{p^{3s}} + \cdots \right).$$ This should follow by inspection considering that $d(n)$ is multiplicative and $d(p^v) = v+1$, with $p$ prime. Now note that $$\sum_{k\ge 0} (k+1)^2 z^k = \sum_{k\ge 0} (k+2)(k+1) z^k - \sum_{k\ge 0} (k+1) z^k \\= \left(\frac{1}{1-z}\right)'' - \left(\frac{1}{1-z}\right)' = \frac{1+z}{(1-z)^3}.$$ It follows that the Euler product for $Q(s)$ is equal to $$ Q(s) = \prod_p \frac{1+1/p^s}{(1-1/p^s)^3}.$$ On the other hand, we have $$ \zeta(s) = \prod_p \frac{1}{1-1/p^s}$$ so that $$ \frac{\zeta^4(s)}{\zeta(2s)} = \prod_p \frac{\left(\frac{1}{1-1/p^s}\right)^4}{\frac{1}{1-1/p^{2s}}} = \prod_p \frac{1-1/p^{2s}}{\left(1-1/p^s\right)^4} = \prod_p \frac{1+1/p^s}{\left(1-1/p^s\right)^3}.$$ The two Euler products are the same, QED.
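The identity is also easy to sanity-check numerically: the $n$-th Dirichlet coefficient of $\zeta^4(s)/\zeta(2s)$ is $\sum_{d^2\mid n}\mu(d)\,d_4(n/d^2)$, where $d_4$ is the coefficient of $\zeta^4(s)$ (the number of ordered factorizations into four factors), and this should equal $d(n)^2$. A brute-force sketch (helper names are my own):

```python
from math import isqrt

N = 200

def divisor_count(n):
    return sum(1 for d in range(1, n + 1) if n % d == 0)

def mobius(n):
    """Mobius function by trial division."""
    mu, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0          # squared prime factor
            mu = -mu
        p += 1
    return mu if n == 1 else -mu

def dirichlet_convolve(f, g):
    h = [0] * (N + 1)
    for d in range(1, N + 1):
        for m in range(d, N + 1, d):
            h[m] += f[d] * g[m // d]
    return h

one = [0] + [1] * N                  # coefficients of zeta(s)
d2 = dirichlet_convolve(one, one)    # zeta^2  -> d(n)
d4 = dirichlet_convolve(d2, d2)      # zeta^4  -> d_4(n)

# zeta^4(s)/zeta(2s): 1/zeta(2s) contributes mu(d) at n = d^2
coeff = [0] * (N + 1)
for n in range(1, N + 1):
    coeff[n] = sum(mobius(d) * d4[n // (d * d)]
                   for d in range(1, isqrt(n) + 1) if n % (d * d) == 0)
```

For all $n\le 200$ the computed `coeff[n]` agrees with $d(n)^2$.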
How big is the size of all infinities? "Not only infinite - it's "so big" that there is no infinite set so large as the collection of all types of infinity..." What does exactly mean? How many infinities are there? I've heard there are more than infinite infinities? What does that mean? Is that true? Will anyone ever be able to know how many infinities there are? Does God know how many there are? Are there so many that God doesn't even know?
In the world of natural numbers it is known that $2 ^ a \neq 3 ^ b$ for any pair of positive integers a and b. This is true for any pair of primes. So if we believe there is only one infinitely large natural number ($\infty$) From the above statement: $2 ^ \infty \neq 3 ^ \infty$ Let $2 ^ \infty $ be $\infty_{2}$ Let $3 ^ \infty $ be $\infty_{3}$ Then $\infty_{2}$ and $\infty_{3}$ are two distinct infinitely large natural numbers. Since there is an infinite number of primes we could have chosen in place of 2 and 3, the implication is that there is an infinite number of infinitely large natural numbers.
Evaluating Complex Integral. I am trying to evaluate the following integrals: $$\int\limits_{-\infty}^\infty \frac{x^2}{1+x^2+x^4}dx $$ $$\int\limits_{0}^\pi \frac{d\theta}{a\cos\theta+ b} \text{ where }0<a<b$$ My very limited text has the following substitution: $$\int\limits_0^\infty \frac{\sin x}{x}dx = \frac{1}{2i}\int\limits_{\delta}^R \frac{e^{ix}-e^{-ix}}{x}dx \cdots $$ Is the same sort of substitution available for the polynomial? Thanks for any help. I apologize in advance for slow responses, I have a disability that limits me to an on-screen keyboard.
For the first one, write $\dfrac{x^2}{1+x^2+x^4}$ as $\dfrac{x}{2(1-x+x^2)} - \dfrac{x}{2(1+x+x^2)}$. Now $$\dfrac{x}{(1-x+x^2)} = \dfrac{x-1/2}{\left(x-\dfrac12\right)^2 + \left(\dfrac{\sqrt3}2 \right)^2} + \dfrac{1/2}{\left(x-\dfrac12\right)^2 + \left(\dfrac{\sqrt3}2 \right)^2}$$ and $$\dfrac{x}{(1+x+x^2)} = \dfrac{x+1/2}{\left(x+\dfrac12\right)^2 + \left(\dfrac{\sqrt3}2 \right)^2} - \dfrac{1/2}{\left(x+\dfrac12\right)^2 + \left(\dfrac{\sqrt3}2 \right)^2}$$ I trust you can take it from here. For the second one, from Taylor series of $\dfrac1{(b+ax)}$, we have $$\dfrac1{(b+a \cos(t))} = \sum_{k=0}^{\infty} \dfrac{(-a)^k}{b^{k+1}} \cos^{k}(t)$$ Now $$\int_0^{\pi} \cos^k(t) dt = 0 \text{ if $k$ is odd}$$ We also have that $$\color{red}{\int_0^{\pi}\cos^{2k}(t) dt = \dfrac{(2k-1)!!}{(2k)!!} \times \pi = \pi \dfrac{\dbinom{2k}k}{4^k}}$$ Hence, $$I=\int_0^{\pi}\dfrac{dt}{(b+a \cos(t))} = \sum_{k=0}^{\infty} \dfrac{a^{2k}}{b^{2k+1}} \int_0^{\pi}\cos^{2k}(t) dt = \dfrac{\pi}{b} \sum_{k=0}^{\infty}\left(\dfrac{a}{2b}\right)^{2k} \dbinom{2k}k$$ Now from Taylor series, we have $$\color{blue}{\sum_{k=0}^{\infty} x^{2k} \dbinom{2k}k = (1-4x^2)^{-1/2}}$$ Hence, $$\color{green}{I = \dfrac{\pi}{b} \cdot \left(1-\left(\dfrac{a}b\right)^2 \right)^{-1/2} = \dfrac{\pi}{\sqrt{b^2-a^2}}}$$
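Both the partial-fraction split and the closed form $\pi/\sqrt{b^2-a^2}$ are easy to cross-check numerically; a quick sketch with composite Simpson quadrature (plain Python, nothing library-specific):

```python
import math

def simpson(f, lo, hi, n=2000):
    """Composite Simpson rule; n must be even."""
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    s += 4 * sum(f(lo + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(lo + i * h) for i in range(2, n, 2))
    return s * h / 3

# second integral with (a, b) = (1, 2): expect pi / sqrt(b^2 - a^2) = pi / sqrt(3)
a, b = 1.0, 2.0
val = simpson(lambda t: 1 / (b + a * math.cos(t)), 0, math.pi)

# partial-fraction split used for the first integral, checked at a sample point
x = 1.7
lhs = x ** 2 / (1 + x ** 2 + x ** 4)
rhs = x / (2 * (1 - x + x ** 2)) - x / (2 * (1 + x + x ** 2))
```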
Practice Preliminary exam - evaluate the limit This is from a practice prelim exam and I know I should be able to get this one. $$ \lim_{n\to\infty} n^{1/2}\int_0^\infty \left( \frac{2x}{1+x^2} \right)^n $$ I have tried many different $u$-substitutions but to no avail. I have tried $$ u = \log(1+x^2) $$ $$ du = \frac{2x}{1+x^2}dx $$ but did not get anywhere
Let $x = \tan(t)$. We then get that \begin{align} I(n) & = \int_0^{\infty} \left(\dfrac{2x}{1+x^2} \right)^n dx = \int_0^{\pi/2} \sin^n(2t) \sec^2(t) dt = 2^n \int_0^{\pi/2} \sin^n(t) \cos^{n-2}(t) dt\\ & = 4\int_0^{\pi/2}\sin^2(t) \sin^{n-2}(2t)dt \tag{$\star$} \end{align} Replacing $t$ by $\pi/2-t$, we get $$I(n) = 4 \int_0^{\pi/2} \cos^2(t) \sin^{n-2}(2t) dt \tag{$\perp$}$$ Adding $(\star)$ and $(\perp)$, we get that $$2I(n) = 4 \int_0^{\pi/2} \sin^{n-2}(2t)dt \implies I(n) = 2 \int_0^{\pi/2} \sin^{n-2}(2t)dt$$ I trust you can take it from here, using this post, which evaluates $\displaystyle \int_0^{\pi} \sin^{k}(t) dt$ and using Stirling (or) Wallis formula.
Prove that $ \sum_{n=0}^\infty |x_n|^2 = +\infty \Rightarrow \sum_{n=0}^\infty |x_n| = +\infty $ I initially wanted to prove that a function is well defined, and I concluded that it is enough to prove the following statement for a sequence $x_n$: $ \sum_{n=0}^\infty |x_n|^2 = +\infty \Rightarrow \sum_{n=0}^\infty |x_n| = +\infty $ or, equivalently, $ \sum_{n=0}^\infty |x_n| < +\infty \Rightarrow \sum_{n=0}^\infty |x_n|^2 < +\infty $ Any help?
Hint: If $\sum |a_n| < \infty$, then $\displaystyle \lim_{n \to \infty} a_n = 0$. In particular for large $n$, we have $|a_n| \leq 1$. Then $$ |a_n|^2 = |a_n| |a_n| \leq \dots $$
Which probability law? It may be a basic probability law in another form, but I cannot figure it out. Why can we say the following: $P(A∩B|C) = $$P(A|B∩C)P(B|C)$ Thank you.
You should simply expand upon the definition: $P(A \wedge B | C) = \frac{P(A \wedge B \wedge C)}{P(C)}$. $P(A | B \wedge C) = \frac {P( A \wedge B \wedge C)}{P( B \wedge C )}$. $P(B|C)=\frac{P( B \wedge C)}{P(C)}$. Now, we solve for $P(A \wedge B \wedge C)$ in the first two above and equate them: $P(A \wedge B | C) \cdot P(C)=P(A | B \wedge C) \cdot P( B \wedge C )$. Solve for $P(B \wedge C)$ in the third line and plug it in to the expression above.
Can't prove this elementary algebra problem $x^2 + 8x + 16 - y^2$ First proof: $(x^2 + 8x + 16) – y^2$ $(x + 4)^2 – y^2$ $[(x + 4) + y][(x + 4) – y]$ 2nd proof where I mess up: $(x^2 + 8x) + (16 - y^2)$ $x(x + 8) + (4 + y)(4 - y)$ $x + 1(x + 8)(4 + y)(4 - y)$ ???? I think I'm breaking one of algebra's golden rules, but I can't find it.
Let us fix your second approach. $$ \begin{align} &x^2+8x+16-y^2\\ =&x^2+8x+(4-y)(4+y)\\ =&x^2+x\big[(4-y)+(4+y)\big]+(4-y)(4+y)\\ =&(x+4-y)(x+4+y). \end{align} $$
Longest antichain of divisors I need a way to calculate the length of the longest antichain among the divisors of a number $N$ (for example, $720 \mapsto 6$, or $1450 \mapsto 4$), with divisibility as the order relation. Is there a universally applicable way to approach this problem for a given $N$?
If $N=\prod_p p^{e_p}$ is the prime factorization of $N$, then the longest such chain has length $1+\sum_p e_p$ (if we count $N$ and $1$ as part of the chain, otherwise subtract $2$) and can be realized by dividing by a prime in each step. Thus with $N=720=2^4\cdot 3^2\cdot 5^1$ we find $720\xrightarrow{\color{red}2}360\xrightarrow{\color{red}2}180\xrightarrow{\color{red}2}90\xrightarrow{\color{red}2}45\xrightarrow{\color{red}3}15\xrightarrow{\color{red}3}5\xrightarrow{\color{red}5}1$ and with $N=1450=2^1\cdot 5^2\cdot 29^1$ we find $1450\xrightarrow{\color{red}2}725\xrightarrow{\color{red}5}145\xrightarrow{\color{red}5}29\xrightarrow{\color{red}{29}}1$ (in red is the prime I divide by in each step; I start with the smallest, but the order does not matter).
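For cross-checking the sample values in the question ($720\mapsto 6$, $1450\mapsto 4$), note these count a different quantity than the stepwise divisor sequence above: divisors with the same number of prime factors counted with multiplicity never divide one another, so each such "level" is an antichain, and by a classical theorem of de Bruijn, Tengbergen and Kruyswijk the largest level is a maximum antichain in the divisor lattice. A brute-force sketch (function names are mine):

```python
def omega_with_multiplicity(n):
    """Number of prime factors of n, counted with multiplicity."""
    count, p = 0, 2
    while p * p <= n:
        while n % p == 0:
            n //= p
            count += 1
        p += 1
    return count + (1 if n > 1 else 0)

def max_antichain(n):
    """Size of the largest level of the divisor lattice of n: divisors grouped
    by omega; no divisor in a level divides another divisor in the same level."""
    levels = {}
    for d in range(1, n + 1):
        if n % d == 0:
            k = omega_with_multiplicity(d)
            levels[k] = levels.get(k, 0) + 1
    return max(levels.values())
```

This reproduces $720\mapsto 6$ and $1450\mapsto 4$.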
If $S\times\mathbb{R}$ is homeomorphic to $T\times\mathbb{R}$, and $S$ and $T$ are compact, can we conclude that $S$ and $T$ are homeomorphic? If $S \times \mathbb{R}$ is homeomorphic to $T \times \mathbb{R}$ and $S$ and $T$ are compact, connected manifolds (according to an earlier question if one of them is compact the other one needs to be compact) can we conclude that $S$ and $T$ are homeomorphic? I know this is not true for non compact manifolds. I am mainly interested in the case where $S, T$ are 3-manifolds.
For closed 3-manifolds, taking the product with $\mathbb{R}$ doesn't change the fundamental group, so if the two products are homeomorphic, the original spaces have the same fundamental group. Closed 3-manifolds that are irreducible and non-spherical are determined up to homeomorphism by their fundamental group, so in that case $S$ and $T$ are homeomorphic.
Group of invertible elements of a ring has never order $5$ Let $R$ be a ring with unity. How can I prove that group of invertible elements of $R$ is never of order $5$? My teacher told me and my colleagues that problem is very hard to solve. I would be glad if someone can provide me even a small hint because, at this point, I have no clue how to attack the problem.
Here are a couple of ideas: * *$-1 \in R$ is always invertible. If $-1 \neq 1$, then it follows that $R^*$ should have even order, a contradiction. Therefore, $1=-1$ in ring $R$, so in fact $R$ contains a subfield isomorphic to $\mathbb{F}_2$. *Let $a$ be the generator of $R^*$. Consider the subring $N \subseteq R$ generated by $1$ and $a$. Then in fact $N \simeq \mathbb{F}_2[x] / (f(x))$, where $\mathbb{F}_2[x]$ is the polynomial ring over $\mathbb{F}_2$, and $f(x)$ is some polynomial from that ring. *Since $a^5=1$, it follows that $f(x)$ divides $x^5+1$. This leaves only finitely many options for $f$, and therefore for $N$. Then you can deal with each case separately and see that in each case $|N^*| \neq 5$.
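Step 3 can also be checked by brute force: over $\mathbb{F}_2$ we have $x^5+1=(x+1)(x^4+x^3+x^2+x+1)$, so there are only a few candidate subrings $N$, and none of them has exactly $5$ units. A sketch with polynomials over $\mathbb{F}_2$ encoded as bitmasks (the encoding and helper names are my own):

```python
def pmul(a, b):
    """Multiply polynomials over GF(2); polynomials encoded as bitmasks."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def pmod(a, m):
    """Remainder of a modulo m over GF(2)."""
    dm = m.bit_length() - 1
    while a and a.bit_length() - 1 >= dm:
        a ^= m << (a.bit_length() - 1 - dm)
    return a

X5_PLUS_1 = 0b100001                 # x^5 + 1

# divisors f of x^5 + 1 with deg f >= 1 (bitmask encoding: 3 = x+1, etc.)
divisors = [m for m in range(2, 64) if pmod(X5_PLUS_1, m) == 0]

def unit_count(m):
    """Number of invertible elements of F_2[x]/(m), by brute force."""
    deg = m.bit_length() - 1
    return sum(1 for a in range(1, 1 << deg)
               if any(pmod(pmul(a, b), m) == 1 for b in range(1, 1 << deg)))

counts = [unit_count(m) for m in divisors]
```

The divisors found are $x+1$, the irreducible quartic $x^4+x^3+x^2+x+1$, and $x^5+1$ itself, with unit-group orders $1$, $15$, $15$ respectively; in particular, $5$ never occurs.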
Integrating a school homework question. Show that $$\int_0^1\frac{4x-5}{\sqrt{3+2x-x^2}}dx = \frac{a\sqrt{3}+b-\pi}{6},$$ where $a$ and $b$ are constants to be found. Answer is: $$\frac{24\sqrt3-48-\pi}{6}$$ Thank you in advance!
First of all write $$4x-5 = \mu\, \frac{d(3+2x-x^{2})}{dx}+\tau.$$ Comparing coefficients, we find that $\mu=-2$ and $\tau=-1$. Then $$\int\frac{4x-5}{\sqrt{3+2x-x^{2}}}\,dx=\mu\int\frac{d(3+2x-x^{2})}{\sqrt{3+2x-x^{2}}}+\tau\int\frac{dx}{\sqrt{3+2x-x^{2}}}$$$$= -2\left(2\sqrt{3+2x-x^{2}}\right)-\int\frac{d(x-1)}{\sqrt{2^{2}-(x-1)^{2}}}$$$$=-4\sqrt{3+2x-x^{2}}-\sin^{-1}\left(\frac{x-1}{2}\right).$$ Now if you put in the limits $x=0$ and $x=1$, you'll get the stated answer $\frac{24\sqrt3-48-\pi}{6}$.
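If you want to double-check both the antiderivative and the final value numerically, here is a quick sketch using Simpson's rule (plain Python):

```python
import math

def simpson(f, lo, hi, n=2000):
    """Composite Simpson rule; n must be even."""
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    s += 4 * sum(f(lo + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(lo + i * h) for i in range(2, n, 2))
    return s * h / 3

# antiderivative found above
F = lambda x: -4 * math.sqrt(3 + 2 * x - x * x) - math.asin((x - 1) / 2)

integral = simpson(lambda x: (4 * x - 5) / math.sqrt(3 + 2 * x - x * x), 0, 1)
closed_form = (24 * math.sqrt(3) - 48 - math.pi) / 6
# integral, F(1) - F(0) and closed_form all agree (about -1.5954)
```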
Finding Markov chain transition matrix using mathematical induction Let the transition matrix of a two-state Markov chain be $$P = \begin{bmatrix}p& 1-p\\ 1-p& p\end{bmatrix}$$ Questions: a. Use mathematical induction to find $P^n$. b. When $n$ goes to infinity, what happens to $P^n$? Attempt: I'm able to find $$P^n = \begin{bmatrix}1/2 + 1/2(2p-1)^n& 1/2 - 1/2(2p-1)^n\\ 1/2 - 1/2(2p-1)^n & 1/2 + 1/2(2p-1)^n\end{bmatrix}$$ I don't know how to use induction to get there, even though I know the induction steps: the case $n=1$ is true; supposing it's true for $n = k$, we need to prove it's also true for $k+1$...
Your initial $P^1$ matrix has first row $[p,1-p]$ and second row the reverse of that. Your goal matrix for $P^n$ also has its entries in the same form, with first row say $[a_n,b_n]$ and second row the reverse of that. So an approach would be to multiply the matrix for $P^n$ by the matrix $P$, and its top row will be $$[pa_n+(1-p)b_n,(1-p)a_n+pb_n],$$ with bottom row the reverse of that. Now you just have to check that when $$a_n=1/2+(1/2)(2p-1)^n, \\ b_n = 1/2-(1/2)(2p-1)^n,$$ and the above calculations are done, you obtain the formulas for $a_{n+1}$ and $b_{n+1}$ in the new version of row one of $P^{n+1}$. That is, the form where the above values of $a_n,b_n$ have their $n$ replaced by $n+1$. It seems this should be just simple algebra, though admittedly I didn't check, I'm just suggesting an approach. ADDED: Actually the algebra is very simple, if you do the $1/2$ part separately from the $\pm (2p-1)^n$: $p\cdot (1/2)+(1-p)\cdot(1/2)=1/2$, while $$p\cdot (1/2)(2p-1)^n +(1-p) \cdot (-1/2)(2p-1)^n = \\ p \cdot (1/2)(2p-1)^n +(p-1) \cdot(1/2)(2p-1)^n= \\ (2p-1)\cdot(1/2) (2p-1)^n,$$ which is of course $(1/2)(2p-1)^{n+1}.$
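The induction can also be cross-checked numerically for a concrete $p$, and this makes part (b) visible too: since $|2p-1|<1$ for $0<p<1$, the term $(2p-1)^n\to 0$ and $P^n$ tends to the matrix with all entries $1/2$. A quick sketch:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

p = 0.7
P = [[p, 1 - p], [1 - p, p]]
Pn = [[1.0, 0.0], [0.0, 1.0]]            # P^0 = identity
ok = True
for n in range(1, 11):
    Pn = matmul(Pn, P)                   # now Pn = P^n
    a = 0.5 + 0.5 * (2 * p - 1) ** n     # predicted diagonal entry
    b = 0.5 - 0.5 * (2 * p - 1) ** n     # predicted off-diagonal entry
    ok = ok and abs(Pn[0][0] - a) < 1e-12 and abs(Pn[0][1] - b) < 1e-12
```

By $n=10$ the entries are already within about $10^{-4}$ of $1/2$.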
If $\{w^k|w\in L\}$ regular implies L regular? If L is a language and the language $$\tilde{L}:=\{x^k,x\in L, k\in\mathbb{N}\}$$ is regular, does that imply that L is regular? ($|L|<\infty$ gives equivalence) We came across this question when trying to prove an explicit example, namely Prove that $L=\{a^{p^k}\mid p\text{ prime}, k\in\mathbb{N}\}$ is not regular. It was fairly easy to show that $L_1=\{\underbrace{a^{p^1}}_{=a^p}\mid p\text{ prime}\}$ is not regular. If the above statement were true, this would give a very easy proof for the assignment. I am asking out of interest, there are other ways to solve the question (I am NOT asking for answers on this example). I tried proving it via the pumping lemma, explicit construction of a FSA and the Myhill-Nerode theorem, however I was unable to find any notable results. I am unsure if this statement is true or if we can find a counterexample (I have been unable to do so)
No. Consider $L = \{a^{n^2} \colon n \ge 1\}$. As $a \in L$, your $\tilde{L} = \mathcal{L}(a^*)$, which is regular, but $L$ isn't.
Find the value of $\int_{-\infty}^\infty \int_{-\infty}^\infty e^{-(x^2+xy+y^2)} \, dx\,dy$ Given that $\int_{-\infty}^\infty e^{-x^2} \, dx=\sqrt{\pi}$. Find the value of $$\int_{-\infty}^\infty\int_{-\infty}^\infty e^{-(x^2+xy+y^2)} \, dx\,dy$$ I don't understand how I find this double integral by using the given data. Please help.
For fun, I want to point out that much more can be said. The spectral theorem for real symmetric matrices tells us that real symmetric matrices are orthogonally diagonalizable. Thus, if $A$ is some symmetric matrix, then there exists an orthogonal matrix $U$ and diagonal matrix $D$ such that $A$ can be written as $A=U^{-1} DU$. Hence $x^TAx=x^TU^TDUx=(Ux)^TD(Ux)$. The Jacobian det of the transformation $x\mapsto Ux$ is simply $1$, so we have a change of variables $u=Ux$: $$\begin{align} \int_{{\bf R}^n}\exp\left(-x^TAx\right)\,dV & =\int_{{\bf R}^n}\exp\left(-(\lambda_1u_1^2+\cdots+\lambda_nu_n^2)\right)\,dV \\[6pt] & =\prod_{i=1}^n\int_{-\infty}^{+\infty}\exp\left(-\lambda_i u_i^2\right)du_i \\[6pt] & =\prod_{i=1}^n\left[\frac{1}{\sqrt{\lambda_i}}\int_{-\infty}^{+\infty}e^{-u^2} \, du\right] \\[6pt] & =\sqrt{\frac{\pi^n}{\det A}}.\end{align}$$ Note that we don't even have to compute $U$ or $D$. The above calculation is the generalized version of the "completing the squares" approach when $n=2$ (which is invoked elsewhere in this thread). This formula is in fact the basis for the Feynman path integral formulation of the functional determinant from quantum field theory; since technically the integral diverges we need to compare them instead of looking at individual ones outright. Wikipedia has some more details.
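For the integral in the question, $A=\begin{pmatrix}1&1/2\\1/2&1\end{pmatrix}$ has $\det A=3/4$, so the formula predicts $\sqrt{\pi^2/\det A}=2\pi/\sqrt3$. A crude grid-sum check (truncating to $[-8,8]^2$, which is harmless given the Gaussian decay):

```python
import math

R, n = 8.0, 320
h = 2 * R / n
pts = [-R + i * h for i in range(n + 1)]

# plain Riemann sum of exp(-(x^2 + xy + y^2)) over the truncated grid
total = h * h * sum(math.exp(-(x * x + x * y + y * y)) for x in pts for y in pts)

# A = [[1, 1/2], [1/2, 1]] has det 3/4, so the formula predicts:
closed_form = math.sqrt(math.pi ** 2 / 0.75)     # = 2*pi/sqrt(3)
```

The agreement is far better than the naive $O(h^2)$ estimate suggests, since the trapezoid-type rule converges extremely fast for smooth, rapidly decaying integrands.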
Showing $(v - \hat{v})\,\bot\,v$ $\fbox{Setting}$ Let $V$ be an inner-product space with $v \in V$. Suppose that $\mathcal{O} = \{u_1, \ldots, u_n\}$ forms an orthonormal basis of $V$. Let $\hat{v} = \left\langle u_1, v\right\rangle u_1 + \ldots + \left\langle u_n,v\right\rangle u_n$ denote the Fourier Expansion of $v$ with respect to $\mathcal{O}$. $\fbox{Question}$ How do we show that $\left\langle v - \hat{v}, \hat{v} \right\rangle = \left\langle v, \hat{v} \right\rangle - \left\langle \hat{v}, \hat{v} \right\rangle$ is $0$?
$$ \begin{align} \langle v-\hat{v},\hat{v}\rangle &=\left\langle\color{#C00000}{v}-\color{#00A000}{\sum_{k=1}^n\langle v,u_k\rangle u_k},\color{#0000FF}{\sum_{k=1}^n\langle v,u_k\rangle u_k}\right\rangle\\ &=\left\langle\color{#C00000}{v},\color{#0000FF}{\sum_{k=1}^n\langle v,u_k\rangle u_k}\right\rangle-\left\langle\color{#00A000}{\sum_{j=1}^n\langle v,u_j\rangle u_j},\color{#0000FF}{\sum_{k=1}^n\langle v,u_k\rangle u_k}\right\rangle\\ &=\color{#0000FF}{\sum_{k=1}^n\langle v,u_k\rangle}\langle\color{#C00000}{v},\color{#0000FF}{u_k}\rangle-\color{#00A000}{\sum_{j=1}^n\langle v,u_j\rangle}\color{#0000FF}{\sum_{k=1}^n\langle v,u_k\rangle}\langle\color{#00A000}{u_j},\color{#0000FF}{u_k}\rangle\\ &=\sum_{k=1}^n\langle v,u_k\rangle^2-\color{#00A000}{\sum_{j=1}^n\langle v,u_j\rangle}\color{#0000FF}{\langle v,u_j\rangle}\tag{$\ast$}\\ &=\sum_{k=1}^n\langle v,u_k\rangle^2-\sum_{j=1}^n\langle v,u_j\rangle^2\\[6pt] &=0 \end{align} $$ $(\ast)$ since $\langle u_j,u_k\rangle=1$ when $k=j$ and $0$ otherwise.
The set of numbers whose decimal expansions contain only 4 and 7 Let $S$ be the set of numbers in $X=[0,1]$ that, when expanded in decimal form, use the digits 4 and 7 only. The following are the problems. a), Is S countable ? b), Is it dense in $X$ ? c), Is it compact ? d), Is it perfect ? For a), I want to say that it is intuitively, but I have no idea how to prove this. I tried to come up with a bijection between $S$ and $\Bbb Z$ but I couldn't find one. For b), my understanding of a set being "dense" means that every point of $X$ is either a limit point of $S$ or a point in $S$. Am I right? Even if I were, I am not sure how to show this. For c), my intuition tells me that it is because it is bounded. So if I could show that it is closed I will be done, I think. But I am still iffy with the idea of limit points, and I am not sure what kind of limit points there are in $S$. For d), Because I can't show that it's closed I am completely stuck. I am teaching myself analysis, and I only know up to abstract algebra. Since I never took topology, please give me an explanation that helps without knowledge of advanced math.
Hints: For b, can you get close to $0.2?$ For c, you are correct that it is bounded, so you need to investigate closed. Let $y \in [0,1]$, but $y \not \in S$. Then there is some digit of $y$ that is not $4$ or $7$.... For d, you need to show that any point of $S$ is a limit of a sequence of other elements of $S$. Let $x \in S$. Can you find a list of other elements in $S$ that get closer and closer to $x$? For starters, can you describe another point within $\pm 0.1$ of $x$?
Calculating time to 0? Quick question format: Let $a_n$ be the sequence given by the rule: $$a_0=k,a_{n+1}=\alpha a_n−\beta$$ Find a closed form for $a_n$. Long question format: If I have a starting value $x=100000$ then first multiply $x$ by $i=1.05$, then subtract $e=9000$. Let's say $y$ is how many times you do it. Anyone know of a formula to get $e$ given $y,x,i$ and given the total must be $0$ after $y$ "turns"? Example: $$y=1: xi-e=0$$ $$y=2: ((xi-e)i-e)=0$$ and so on...
Let $u_0 = x$ and $u_{n+1} = iu_n - e$. To solve this recurrence, let $l$ be the fixed point, $l= il-e$, i.e. $l = e/(i-1)$, and set $v_n = u_n - l$. Then $v_{n+1}= i v_n$ and $v_n = i^n v_0$, so $u_n = l + i^n v_0 = l + i^n (x - l)$. Here $l = e/(i-1) = 180000$, so $u_n = 180000 - 80000\cdot1.05^n$. Hence $u_n \leq 0$ is equivalent to $$180000 - 80000\cdot1.05^n \leq 0$$ $$1.05^n \geq 180000/80000 = 2.25$$ $$n \geq \frac{\log 2.25}{\log 1.05}\approx 16.62,\quad\text{i.e. } n \geq 17$$ (And $u_{17} \neq 0$ !) General case : Let $l = e/(i-1)$. If $i > 1$ and $x < l$, then $u_n \leq 0$ is equivalent to $$l + i^n(x-l) \leq 0$$ $$i^n \geq \frac{l}{l-x}$$ $$n \geq \frac{\log(l) - \log(l-x)}{\log(i)}$$ So $$y = \left\lfloor\frac{\log(l) - \log(l-x)}{\log(i)}\right\rfloor + 1$$
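A quick numerical cross-check: since $x<l$ here, the condition $u_n\le 0$ amounts to $i^n\ge l/(l-x)$, which is what the closed form below encodes (variable names are mine):

```python
import math

x, i, e = 100000.0, 1.05, 9000.0
l = e / (i - 1)                 # fixed point of u -> i*u - e (here 180000); need x < l

# direct simulation of the recurrence
u, y = x, 0
while u > 0:
    u = i * u - e
    y += 1

# closed form (valid for i > 1 and x < l, assuming the log-ratio is not an integer)
y_formula = math.floor((math.log(l) - math.log(l - x)) / math.log(i)) + 1
```

Both the simulation and the formula give $y=17$ for the numbers in the question.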
Is Vector Calculus useful for pure math? I have the option to take a vector calculus class at my uni but I have received conflicting opinions from various professors about this class's use in pure math (my major emphasis). I was wondering what others thought about the issue. I appreciate any advice.
The answer depends on your interests, and on the place you continue your education. In some areas in the world, PhD's are very specialized so any course that is not directly related to the subject matter is not necessary. One could complete a pure math PhD and not know vector or multivariate calculus. However, this is increasingly rare; more and more PhD programs are following the American model, expecting their graduates to have some breadth of knowledge in addition to the specialized skills required for the PhD thesis itself. Such programs would consider a lack of vector calculus a serious deficiency and may not even consider your application into any math PhD program, pure or applied. Certainly this is the case at most universities in the U.S.
Homogeneous measure on the positive real half-line Define a measure $\mu\not=0$ on the positive reals $\Bbb R_{>0}$ such that for any measurable set $E\subset\Bbb R_{>0}$ and $a\in \Bbb R_{>0}$, we have $\mu(aE)= \mu(E)$, where $aE=\{ax: x\in E\}$. I am totally blank about this problem. I have pondered it several times but didn't get any idea. This exercise illustrates how abstract measures can be: Lebesgue measure is translation-invariant, but on intervals it completely fails to satisfy this scaling property.
I'll answer in greater generality. A common way to construct a measure is to take a nonnegative locally integrable function $w$ and define $\mu(E)=\int_E w(x)\,dx$. This does not give all measures (only those that are absolutely continuous with respect to $dx$) but for many examples that's enough. In terms of $w$, the desired condition translates to $$\int_E w(x)\,dx = \int_{aE} w(x)\,dx\tag1$$ A way to get a handle on (1) is to bring both integrals to the same domain of integration. So, change the variable $x=ay$ in the second one, so that it becomes $\int_{E} a\,w(ay)\,dy$, which is the same as $\int_{E} a\,w(ax)\,dx$, because the name of the integration variable does not matter. So, (1) takes the form $$\int_E w(x)\,dx = \int_{E} a\, w(ax)\,dx \tag2$$ or, better yet, $$\int_E ( w(x)-a\, w(ax))\,dx = 0\tag3$$ For (3) to hold for every measurable set, the integrand should be zero almost everywhere. So, we need a function $w$ such that $w(ax)=w(x)/a$ for all $x$. In particular, $w(a)=w(1)/a$ for all $a$, which tells us what the function is. (The value of $w(1)$ can be chosen to be any positive number.) The above can be generalized to produce measures such that $$\mu(aE) =a^p \mu(E)$$ for all $a>0$, where $p$ can be any fixed real number.
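As a quick numerical sanity check, the density $w(x)=1/x$ indeed gives a scale-invariant measure: for an interval $(p,q)$ one gets $\mu((p,q))=\log(q/p)$, which does not change under $E\mapsto aE$. A sketch (the interval and scale factor below are arbitrary choices):

```python
from math import log

# mu(E) = integral over E of dx/x; for an interval (p, q) this equals log(q/p).
def mu(p, q):
    return log(q / p)

p, q, a = 2.0, 5.0, 7.3
assert abs(mu(a * p, a * q) - mu(p, q)) < 1e-12
print(mu(p, q))  # log(5/2), independent of a
```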
Closed form for $\sum_{n=1}^\infty\frac{(-1)^n n^4 H_n}{2^n}$ Please help me to find a closed form for the sum $$\sum_{n=1}^\infty\frac{(-1)^n n^4 H_n}{2^n},$$ where $H_n$ are harmonic numbers: $$H_n=\sum_{k=1}^n\frac{1}{k}=\frac{\Gamma'(n+1)}{n!}+\gamma.$$
$$\sum_{n=1}^\infty\frac{(-1)^n n^4 H_n}{2^n}=\frac{28}{243}+\frac{10}{81} \log \left(\frac{2}{3}\right).$$ Hint: Change the order of summation: $$\sum_{n=1}^\infty\frac{(-1)^n n^4 H_n}{2^n}=\sum_{n=1}^\infty\sum_{k=1}^n\frac{(-1)^n n^4}{2^n k}=\sum_{k=1}^\infty\sum_{n=k}^\infty\frac{(-1)^n n^4}{2^n k}.$$
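Since $|(-1)^n n^4 H_n/2^n|$ decays geometrically, the closed form can be confirmed to high precision by direct summation. A sketch using mpmath:

```python
from mpmath import mp, mpf, log, fsum

mp.dps = 40
H, terms = mpf(0), []
for n in range(1, 400):
    H += mpf(1) / n                                # harmonic number H_n
    terms.append((-1)**n * mpf(n)**4 * H / mpf(2)**n)
s = fsum(terms)

closed = mpf(28)/243 + mpf(10)/81 * log(mpf(2)/3)
print(s)  # ~0.065
assert abs(s - closed) < mpf(10)**-30
```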
Why isn't there a continuously differentiable surjection from $I \to I \times I$? I was asked this question recently in an interview. Why can't there be a continuously differentiable map $f \colon I \to I \times I$ which is also surjective? This is in contrast to merely continuous maps, where we have examples of space-filling curves. I have one proof which is direct via Sard's theorem. But the statement seems simple enough to have a proof using at most the implicit (and inverse) function theorem.
The image of any continuously differentiable function $f$ from $I$ to $I\times I$ has measure zero; in particular, it cannot be the whole square. To see this, note that $\|f'\|$ has a maximum value $M$ on $I$. This implies that the image of any subinterval of $I$ of length $\epsilon$ lies inside a disk of diameter $M\epsilon$, by the mean value theorem. The union of these $\approx 1/\epsilon$ disks, which contains the image of $f$, has measure at most $\approx \pi M^2\epsilon/4$. Now let $\epsilon$ tend to $0$.
Question on showing points of discontinuity of a function are removable (or not) The question is as follows: Given function: $F(x,y)=\frac{x + 2y}{\sin(x+y) - \cos(x-y)}$ Tasks: a/ Find the points of discontinuity b/ Decide whether the points of discontinuity from part a are removable Here is my work so far: (1) For part a, I think the points of discontinuity should have the form $(0, \frac{\pi}{4} + n\pi)$ or $(\frac{\pi}{4} + n\pi, 0)$, since they make the denominator zero. For convenience in part b, I choose to deal specifically with the point $(0, \frac{\pi}{4})$ (2) Recall the definition: a point of discontinuity $x_0$ is removable if the limits of the function along every path are equal to each other as they approach $x_0$. In particular, if the function is 1-dimensional, we get the notion of "left" and "right" limits. But here we talk about paths from every possible direction. However, these limits need not equal $f(x_0)$, which can be defined or undefined. (3) I'm having trouble "finding" such paths @_@ I came up with these two, by fixing the x-coordinate and varying the y-coordinate: $F(x, x^2 - \frac{\pi}{4})$ and $F(x, x^2 - x - \frac{\pi}{4})$ They both have the limit $\frac{\pi}{2\sqrt{2}}$ as x approaches 0 (by my calculation) But what can I say about these results? I feel that the discontinuities of $F(x,y)$ should not be removable, but I don't know if my thought is correct. Would someone please help me with this question? Thank you in advance ^^
Observe that $$F(x,y)=\frac{x+2y}{2\cos\left(y+\frac{\pi}{4}\right)\sin\left(x-\frac{\pi}{4}\right)},$$ which follows from the formula $\sin a - \sin b = 2\cos\frac{a+b}{2}\sin\frac{a-b}{2}$ together with $\cos b = \sin(\frac{\pi}{2}-b)$. Thus the set of discontinuities consists of the points $(x, \frac{\pi}{4}+n\pi)$ and $(\frac{\pi}{4}+n\pi, y)$ for all real $x,y$ and integer $n$, i.e. lines parallel to the $x$- and $y$-axes: a grid. A discontinuity can only be removable where the numerator also vanishes, so we look at the intersections of the line $x+2y=0$ with this grid. For example, at $(\frac{\pi}{4}, -\frac{\pi}{8})$ we have $\cos(-\frac{\pi}{8}+\frac{\pi}{4})\neq 0$, so consider $$\lim_{(x,y)\to(\pi/4,\,-\pi/8)}\frac{x+2y}{\sin(x-\frac{\pi}{4})}.$$ Making the substitution $x'=x-\frac{\pi}{4}$ and $y'=y+\frac{\pi}{8}$, this becomes $$\lim_{(x',y')\to(0,0)}\frac{x'+2y'}{\sin x'}.$$ Along the curve $y'=0$ the limit is $1$, while along the curve $x'+2y'=0$ the limit is $0$, so the limit does not exist. You can try this in general at the other such points; one gets an expression of the form $\lim_{(x',y')\to(0,0)}\frac{x'+2y'}{\pm\sin x'}$ or $\frac{x'+2y'}{\pm\sin y'}$. Since in all cases the limit does not exist, the discontinuity is not removable at any point.
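The two path limits can be checked with a CAS. A sketch using SymPy, in the shifted coordinates $x' = x-\pi/4$, $y' = y+\pi/8$:

```python
import sympy as sp

xp, yp = sp.symbols("xp yp")          # xp, yp stand for x', y'
expr = (xp + 2*yp) / sp.sin(xp)

# Path y' = 0: the limit of x'/sin(x') as x' -> 0 is 1.
l1 = sp.limit(expr.subs(yp, 0), xp, 0)

# Path x' = -2y': the numerator vanishes identically, so the limit is 0.
l2 = sp.limit(expr.subs(xp, -2*yp), yp, 0)

print(l1, l2)  # 1 0, so the two-variable limit does not exist
```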
When are the binomial coefficients equal to a generalization involving the Gamma function? Let $\Gamma$ be the Gamma function and abbreviate $x!:=\Gamma(x+1)$, $x>-1$. For $\alpha>0$ let us generalize the binomial coefficients in the following way: $$\binom{n+m}{n}_\alpha:=\frac{(\alpha n+\alpha m)!}{(\alpha n)!(\alpha m)!}$$ Of course for $\alpha=1$ this reduces to the ordinary binomial coefficients. My question is: How can one show that this is really the only case where they coincide? That is, $\displaystyle\binom{n+m}{n}=\binom{n+m}{n}_\alpha$ for all $n,m\in\mathbb{N}$ implies $\alpha=1$. Or even more general (but currently not of interest to me): $\displaystyle\binom{n+m}{n}_\alpha=\binom{n+m}{n}_\beta$ implies $\alpha=\beta$. I tried to use Stirling's formula: $\Gamma(z+1)\sim\sqrt{2\pi z}\bigl(\frac{z}{e}\bigr)^z$ as $z\to\infty$, but didn't get very far. Thanks in advance for any help.
Set, for example, $m=1$ and consider the limit $n\rightarrow\infty$. Then $$ {\alpha n+\alpha\choose \alpha n}\sim \frac{(\alpha n)^{\alpha}}{\alpha!},\qquad {n+1 \choose n}\sim n.$$ It is clear that the only possibility for both asymptotics to agree is $\alpha=1$. In the more general situation, the same argument shows that $\alpha=\beta$.
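Numerically the mismatch is easy to see. A sketch with mpmath ($\alpha = 2$ is an arbitrary choice): the generalized coefficient with $m=1$ grows like $(\alpha n)^\alpha/\alpha!$, which for $\alpha\neq 1$ cannot agree with the linear growth of $\binom{n+1}{n}=n+1$.

```python
from mpmath import mp, loggamma, exp, log

mp.dps = 30
alpha, n = 2, 10**6

# log of the generalized coefficient Gamma(an + a + 1) / (Gamma(an + 1) Gamma(a + 1))
lognum = loggamma(alpha*n + alpha + 1) - loggamma(alpha*n + 1) - loggamma(alpha + 1)
# log of the claimed asymptotic (alpha*n)^alpha / alpha!
logasym = alpha * log(alpha * n) - loggamma(alpha + 1)

ratio = exp(lognum - logasym)
print(ratio)  # tends to 1 as n grows; here it is 1 + O(1/n)
assert abs(ratio - 1) < 1e-4
```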
Existence of whole number between two real numbers $x$ and $x +1$? How to prove that there is a whole number, integer, between two real numbers $x$ and $x+1$ (in case $x$ is not whole)? I need this for an exercise solution in my Topology class, so I can, probably, use more than just axioms from set theory. Any ideas?
This can be proved using the decimal expansion of $x$. If $x\geq 0$ has the decimal expansion $n_0.n_1n_2n_3\cdots$, where $n_0$ is its integer part (a whole number), then $x+1$ has the decimal expansion $(n_0+1).n_1n_2n_3\cdots$. Then it is clear that $x < n_0+1 < x+1$ if $x$ is not a whole number itself. (For negative $x$ the argument is analogous.)
What does "Solve $ax \equiv b \pmod{337}$ for $x$" mean? I have a general question about modular equations: Let's say I have this simple equation: $$ax\equiv b \pmod{337}$$ I need to solve the equation. What does "solve the equation" mean? There are an infinite number of $x$'s that will be correct. Does $x$ need to be integer? Thanks very much in advance, Yaron.
Write $p = 337$. This is a prime number. First of all, if $a \equiv 0 \pmod{p}$ then there is a solution if and only if $b \equiv 0 \pmod{p}$, and then all integers $x$ are a solution. If $a \not\equiv 0 \pmod{p}$, then the (infinite number of integer) solutions will form a congruence class modulo $p$. These can be found using Euclid's algorithm to find an inverse of $a$ modulo $p$, very much as you would do over the real numbers, say. That is, use Euclid to find $u, v \in \Bbb{Z}$ such that $a u + p v = 1$, and then note that $x_{0} = u b$ is a solution, as $a x_{0} = a u b = b - p v b \equiv b \pmod{p}$. (Here $u$ is the inverse of $a$ modulo $p$, as $a u \equiv 1 \pmod{p}$.) Then note that if $x$ is any solution, then $a (x - x_{0}) \equiv 0 \pmod{p}$, which happens if and only if $p \mid x - x_{0}$, so that $x \equiv x_{0} \pmod{p}$. Thus the set of solutions is the congruence class of $x_{0}$.
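The procedure can be sketched in code: extended Euclid produces the inverse $u$, and $x_0 = ub \bmod p$ is the representative solution in $\{0,\dots,336\}$. The values $a=5$, $b=12$ below are hypothetical, chosen only for illustration:

```python
def egcd(a, b):
    """Return (g, u, v) with a*u + b*v == g == gcd(a, b)."""
    if b == 0:
        return (a, 1, 0)
    g, u, v = egcd(b, a % b)
    return (g, v, u - (a // b) * v)

def solve_mod(a, b, p=337):
    """Solve a*x == b (mod p) for prime p and a not divisible by p."""
    g, u, _ = egcd(a % p, p)       # g == 1, u is the inverse of a mod p
    return (u * b) % p

x0 = solve_mod(5, 12)              # hypothetical a = 5, b = 12
assert (5 * x0 - 12) % 337 == 0
print(x0)                           # the unique solution in {0, ..., 336}
```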
What is the precise statement of the theorem that allows us to "localize" our knowledge of derivatives? Most introductory calculus courses feature a proof that Proposition 1. For the function $f : \mathbb{R} \rightarrow \mathbb{R}$ such that $x \in \mathbb{R} \Rightarrow f(x)=x^2$ it holds that $x \in \mathbb{R} \Rightarrow f'(x)=2x$. In practice though, we freely use the following stronger result. Proposition 2. For all partial functions $f : \mathbb{R} \rightarrow \mathbb{R}$, setting $X = \mathrm{dom}(f)$, we have that for all $x \in X$, if there exists a neighborhood of $x$ that is a subset of $X$, call it $A$, such that $a \in A \Rightarrow f(a)=a^2$, then we have $f'(x)=2x$. What is the precise statement of the theorem that lets us get from the sentences that we actually prove, like Proposition 1, to the sentences we actually use, like Proposition 2?
I’ll give a first try in answering this: How about: Let $f : D_f → ℝ$ and $g : D_g → ℝ$ be differentiable in an open set $D ⊂ D_f ∩ D_g$. If $f|_D = g|_D$, then $f'|_D = g'|_D$. I feel this is not what you want. Did I misunderstand you?
Evaluating this integral : $ \int \frac {1-7\cos^2x} {\sin^7x \cos^2x} dx $ The question : $$ \int \frac {1-7\cos^2x} {\sin^7x \cos^2x} dx $$ I tried dividing by $\cos^2 x$ and splitting the fraction. That turned out to be complicated(Atleast for me!) How do I proceed now?
The integral equals $$\int \frac{dx}{\sin^7x\cos^2x}-7\int\csc^7x\,dx$$ Using the reduction formula $$\frac{m-1}{m+n}\int\sin^{m-2}x\cos^nx\, dx=\frac{\sin^{m-1}x\cos^{n+1}x}{m+n}+\int \sin^mx\cos^nx\, dx$$ repeatedly, $$\text{we can reach from }\int \frac{dx}{\sin^7x\cos^2x}\text{ to } \int \frac{\sin x\,dx}{\cos^2x}$$ Now use the Reduction Formula of $\int\csc^nx\,dx$ for the second/last integral
Symmetric Groups and Commutativity I just finished my homework which involved, among many things, the following question: Let $S_{3}$ be the symmetric group on $\{1,2,3\}$. Determine the number of elements that commute with (23). Now, solving this was unproblematic; for those interested, the answer is 2. However, it got me thinking whether or not there is a general solution to this type of question. Thus, my question is: Let $S_{n}$ be the symmetric group on $\{1,\dots,n\}$. Determine the number of elements in $S_{n}$ that commute with $(ij)$ where $1\leq i,j \leq n$ and $i\neq j$.
Let $\pi, \phi\in S_{\Omega}$. We say that $\pi$ and $\phi$ are disjoint if whenever $\pi$ moves $\omega\in \Omega$, then $\phi$ does not move $\omega$. For example, $(2,3)$ and $(4,5)$ in $S_6$ are disjoint; indeed, $\{2,3\}\cap\{4,5\}=\emptyset$. Theorem: If $\pi, \phi\in S_{\Omega}$ are disjoint then $\pi\phi=\phi\pi$. To answer the general question: a permutation $\sigma$ commutes with $(i\,j)$ if and only if $\sigma(i\,j)\sigma^{-1}=(\sigma(i)\,\sigma(j))$ equals $(i\,j)$, i.e. if and only if $\sigma$ maps the set $\{i,j\}$ to itself. There are $2$ choices on $\{i,j\}$ (fix both or swap them) and $(n-2)!$ choices on the remaining points, so exactly $2\cdot(n-2)!$ elements of $S_n$ commute with $(ij)$. For $n=3$ this gives $2$, matching your computation.
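A brute-force check of the count $2\cdot(n-2)!$ for small $n$ (a sketch; permutations are encoded as tuples acting on $\{0,\dots,n-1\}$ and the transposition used is $(0\,1)$):

```python
from itertools import permutations
from math import factorial

def compose(p, q):
    # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(p)))

def centralizer_size(n):
    t = tuple([1, 0] + list(range(2, n)))   # the transposition (0 1)
    return sum(1 for p in permutations(range(n))
               if compose(p, t) == compose(t, p))

for n in range(3, 7):
    assert centralizer_size(n) == 2 * factorial(n - 2)
print([centralizer_size(n) for n in range(3, 7)])  # [2, 4, 12, 48]
```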
Find the following integral: $\int {{{1 + \sin x} \over {\cos x}}dx} $ My attempt: $\int {{{1 + \sin x} \over {\cos x}}dx} $, given : $u = \sin x$ I use the general rule: $\eqalign{ & \int {f(x)dx = \int {f\left[ {g(u)} \right]{{dx} \over {du}}du} } \cr & {{du} \over {dx}} = \cos x \cr & {{dx} \over {du}} = {1 \over {\cos x}} \cr & so: \cr & \int {{{1 + \sin x} \over {\cos x}}dx = \int {{{1 + u} \over {\cos x}}{1 \over {\cos x}}du} } \cr & = \int {{{1 + u} \over {{{\cos }^2}x}}du} \cr & = \int {{{1 + u} \over {\sqrt {1 - {u^2}} }}du} \cr & = \int {{{1 + u} \over {{{(1 - {u^2})}^{{1 \over 2}}}}}du} \cr & = \int {(1 + u){{(1 - {u^2})}^{ - {1 \over 2}}}} du \cr & = {(1 - u)^{ - {1 \over 2}}} + u{(1 - {u^2})^{ - {1 \over 2}}}du \cr & = {1 \over {({1 \over 2})}}{(1 - u)^{{1 \over 2}}} + u - {1 \over {\left( {{1 \over 2}} \right)}}{(1 - {u^2})^{{1 \over 2}}} + C \cr & = 2{(1 - u)^{{1 \over 2}}} - 2u{(1 - {u^2})^{{1 \over 2}}} + C \cr & = 2{(1 - \sin x)^{{1 \over 2}}} - 2(\sin x){(1 - {\sin ^2}x)^{{1 \over 2}}} + C \cr & = {(1 - \sin x)^{{1 \over 2}}}(2 - 2\sin x) + C \cr} $ This is wrong, the answer in the book is: $y = - \ln |1 - \sin x| + C$ Could someone please explain where I integrated wrongly? Thank you!
You replaced $\cos^2 x$ by $\sqrt{1-u^2}$, it should be $1-u^2$. Remark: It is easier to multiply top and bottom by $1-\sin x$. Then we are integrating $\frac{\cos x}{1-\sin x}$, easy.
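Both fixes can be confirmed symbolically: differentiating the book's antiderivative recovers the integrand. A sketch with SymPy:

```python
import sympy as sp

x = sp.symbols("x")
integrand = (1 + sp.sin(x)) / sp.cos(x)
antiderivative = -sp.log(1 - sp.sin(x))   # the book's answer, up to a constant

residual = sp.simplify(sp.diff(antiderivative, x) - integrand)
print(residual)  # 0
```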
clarity on a question The original question is : Let $0 < a_{1}<a_{2}<\dots<a_{mn+1}\;\;$ be $\;mn+1\;$ integers. Prove that you can either $\;m+1\;$ of them no one of which divides any other or $\;n+1\;$ of them each dividing the following. (1966 Putnam Mathematical Competition) The question has words missing , so could someone tell me what the corrected version of this question is
I think it is this question Given a set of $(mn + 1)$ unequal positive integers, prove that we can either $(1)$ find $m + 1$ integers $b_i$ in the set such that $b_i$ does not divide $b_j$ for any unequal $i, j,$ or $(2)$ find $n+1$ integers $a_i$ in the set such that $a_i$ divides $a_{i+1}$ for $i = 1, 2, \dots , n$. 27th Putnam 1966
reflection groups and hyperplane arrangement We know that for the braid arrangement $A_\ell$ in $\mathbb{C}^\ell$: $$\Pi_{1 \leq i < j \leq \ell} (x_i - x_j)=0,$$ $\pi_1(\mathbb{C}^\ell - A_\ell) \cong PB_\ell$, where $PB_\ell$ is the pure braid group. Moreover, the reflection group that is associated to $A_\ell$ is the symmetric group $S_\ell$, and it is known that there is an exact sequence $PB_\ell \rightarrow B_\ell \rightarrow S_\ell$. My question is the following: let $L$ be a reflection arrangement (associated to the reflection group $G_L$) in $\mathbb{C}^\ell$. What is the connection between $\pi_1(\mathbb{C}^\ell - L)$ and $G_L$, or the Artin group associated to $G_L$? Thank you!
I have found out that Brieskorn proved the following (using the above notation): $$ \pi_1(\mathbb{C}^\ell - L) \cong \text{ker}(A_L \rightarrow G_L) $$ where $A_L$ is the corresponding Artin group and $G_L$ the reflection group.
Truth of Fundamental Theorem of Arithmetic beyond some large number Let $n$ be a ridiculously large number, e.g., $$\displaystyle23^{23^{23^{23^{23^{23^{23^{23^{23^{23^{23^{23^{23}}}}}}}}}}}}+5$$ which cannot be explicitly written down provided the size of the universe. Can a Prime factorization of $n$ still be possible? Does it enter the realms of philosophy or is it still a tangible mathematical concept?
Yes, there exists a unique prime factorization. No, we probably won't ever know what it is.
When the ordinal sum equals the Hessenberg ("natural") sum Let $\alpha_1 \geq \ldots \geq \alpha_n$ be ordinal numbers. I am interested in necessary and sufficient conditions for the ordinal sum $\alpha_1 + \ldots + \alpha_n$ to be equal to the Hessenberg sum $\alpha_1 \oplus \ldots \alpha_n$, most quickly defined by collecting all the terms $\omega^{\gamma_i}$ of the Cantor normal forms of the $\alpha_i$'s and adding them in decreasing order. Unless I am very much mistaken the answer is the following: for all $1 \leq i \leq n-1$, the smallest exponent $\gamma$ of a term $\omega^{\gamma}$ appearing in the Cantor normal form of $\alpha_i$ must be at least as large as the greatest exponent $\gamma'$ of a term $\omega^{\gamma'}$ appearing in the Cantor normal form of $\alpha_{i+1}$. And this holds just because if $\gamma' < \gamma$, $\omega^{\gamma'} + \omega^{\gamma} = \omega^{\gamma} < \omega^{\gamma} + \omega^{\gamma'} = \omega^{\gamma'} \oplus \omega^{\gamma}$. Nevertheless I ask the question because: 1) I want reassurance of this: I have essentially no experience with ordinal arithmetic. 2) Ideally I'd like to be able to cite a standard paper or text in which this result appears. Bonus points if there happens to be a standard name for sequences of ordinals with this property: if I had to name it I would choose something like unlaced or nonoverlapping. P.S.: The condition certainly holds if each $\alpha_i$ is of the form $\omega^{\gamma} + \ldots + \omega^{\gamma}$. Is there a name for such ordinals?
You are exactly right about the "asymmetrically absorptive" nature of standard ordinal addition (specifically with regard to Cantor normal form). Your condition is necessary and sufficient (sufficiency is easy, and you've shown necessity). I don't know of any standard name for such sequences, though. As for your $\omega^\gamma+\cdots+\omega^\gamma$ bit, that isn't appropriate for Cantor normal form. We require the exponents to be listed in strictly decreasing order.
How to find length of a rectangular tile when viewing at some angle I have a question on angles. I have a rectangular tile. when looking straight I can find the width of the tile, but how do I find the apparent width when I see the same rectangular tile at some angle. Below I have attached an image for more clarity. So how do I find y in the image below?
It depends on your projection. If you assume orthogonal projection, so that the apparent length of line segments is independent of their distance the way your images suggest, then you cannot solve this, since a rectangle of any aspect ratio might appear as a rectangle of any other aspect ratio by simply aligning it with the image plane and then rotating it around one of its axes of symmetry. So you can't deduce the original aspect ratio from the apparent one, much less the original lengths.
prove or disprove invertible matrix with given equations Given a non-scalar matrix $A$ of size $n\times n$ over $\mathbb{R}$ that satisfies the equation $$A^2 + 2A = 3I$$ and given a matrix $B$, also of size $n\times n$, defined by $$B = A^2 + A- 6I$$ Is $B$ an invertible matrix?
Hint: The first equation implies that $A^2+2A-3I=(A-I)(A+3I)=0$. Hence the minimal polynomial $m_A(x)$ of $A$ divides $(x-1)(x+3)$. What happens if $x+3$ is not a factor of $m_A(x)$?
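A concrete check. The diagonal matrix below is one arbitrary non-scalar choice with eigenvalues in $\{1,-3\}$, so it satisfies the hypothesis; any non-scalar solution must have $-3$ as an eigenvalue, which is what makes $B$ singular:

```python
import numpy as np

A = np.diag([1.0, 1.0, -3.0])      # non-scalar, satisfies A^2 + 2A = 3I
I = np.eye(3)
assert np.allclose(A @ A + 2 * A, 3 * I)

B = A @ A + A - 6 * I              # = (A + 3I)(A - 2I)
print(np.linalg.det(B))            # 0: B is singular, hence not invertible
```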
Estimate derivatives in terms of derivatives of the Fourier transform. Let us suppose that $f: \mathbb{R}^n \to \mathbb{R}$ is a smooth function. Furthermore, for every $\alpha$ multi-index, there exists $C_\alpha > 0$ such that $$ |D^\alpha f(\xi)| \leq \frac{C_\alpha}{(1+|\xi|)^{|\alpha|}}. $$ Does it follow that, for every $\alpha$, there exists $C'_\alpha > 0$ such that $$ |D^\alpha (\mathcal{F}^{-1}(f))(x)| \leq \frac{C'_\alpha}{|x|^{n+|\alpha|}} $$ where $\mathcal{F}^{-1}$ is the inverse Fourier transform (which exists since $f \in \mathcal{S}'$)? I tried to do it using the definition, but it is really messed up because $\mathcal{F}^{-1}$ is in general in $\mathcal{S}'$. For instance, if $f$ is a constant function, then its inverse transform is a dirac $\delta$, then I should give it a pointwise meaning, and I don't know when this is possible. Any help would be really appreciated.
This settles it. See Theorem 9, it also settles regularity issues.
Unable to solve expression for $x$ I'm trying to solve this expression for $x$: $$\frac{x^n(n(1-x)+1)}{(1-x)^2}=0$$ I'm not sure where to begin (especially getting rid of the $x^n$ part), any hints or tips are appreciated.
Hint: Try multiplying both sides by the denominator, to get rid of it. Note that this may introduce extraneous solutions if what we multiplied by is $0$, so you have to consider the case of $(x - 1)^2 = 0$ separately. Finally, to solve an equation of the form $a \cdot b = 0$, you can divide by $a \neq 0$ on both sides to get $b = 0$, but this may lead to missing solutions if $a = 0$ is also a solution. So again, consider the cases $a = 0$ and $a \neq 0$ (and therefore $b = 0$) separately.
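For a concrete exponent (the choice $n=3$ below is arbitrary) a CAS confirms the two roots $x=0$ and $x=(n+1)/n$, with $x=1$ excluded by the denominator:

```python
import sympy as sp

x = sp.symbols("x")
n = 3
expr = x**n * (n * (1 - x) + 1) / (1 - x)**2

roots = sp.solve(expr, x)
print(roots)  # [0, 4/3], i.e. x = 0 and x = (n + 1)/n
```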
Summation of independent discrete random variables? We have a summation of independent discrete random variables (rvs) $Y = X_1 + X_2 + \ldots + X_n$. Assume the rvs can take non-negative real values. How can we find the probability mass function of $Y$? Is there any efficient method like the convolution for integer case?
Since the random variables are continuous, you would speak of their probability density function (instead of the probability mass function). The probability density function (PDF) of $Y$ is simply the (continuous) convolution of the PDFs of the random variables $X_i$. Convolution of two continuous random variables is defined by $$(p_{X_1}*p_{X_2})(x)=\int_{-\infty}^{\infty}p_{X_1}(x-y)p_{X_2}(y)\;dy$$ EDIT: I was assuming your RVs are continuous, but maybe I misunderstood the question. Anyway, if they are discrete then (discrete) convolution is also the correct answer. EXAMPLE: Let $X_1$ and $X_2$ be two discrete random variables, where $X_1$ takes on values $1/2$ and $3/4$ with probabilities $0.5$, and $X_2$ takes on values $1/8$ and $1/4$ with probabilities $0.4$ and $0.6$, respectively. So we have $p_{X_1}(x)=0.5\delta(x-1/2) + 0.5\delta(x-3/4)$ and $p_{X_2}=0.4\delta(x-1/8)+0.6\delta(x-1/4)$. Let $Y=X_1+X_2$. Then $p_Y(x)$ is given by the convolution of $p_{X_1}(x)$ and $p_{X_2}(x)$: $$p_Y(x)=0.2\delta(x-5/8)+0.3\delta(x-3/4)+0.2\delta(x-7/8)+0.3\delta(x-1)$$
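The discrete convolution in the example can be computed directly with a small sketch:

```python
def convolve(p1, p2):
    """PMF of X1 + X2 for independent discrete X1, X2, given as {value: prob}."""
    out = {}
    for x, px in p1.items():
        for y, py in p2.items():
            out[x + y] = out.get(x + y, 0.0) + px * py
    return out

pX1 = {0.5: 0.5, 0.75: 0.5}
pX2 = {0.125: 0.4, 0.25: 0.6}
pY = convolve(pX1, pX2)
print(pY)  # {0.625: 0.2, 0.75: 0.3, 0.875: 0.2, 1.0: 0.3}
```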
Could someone please explain the theory behind finding if a given point is inside a circle on a grid? Let us say I have a grid of 1000 x 1000, and on that grid is drawn a circle, the circle could be anywhere. If I then pick a random point from the grid with an x and y co-ordinate I can work out if the point is inside the circle by performing the following math, xCoord = x co-ordinate; yCoord = y co-ordinate; xCenter = x co-ordinate of the center of the circle yCenter = y co-ordinate of the center of the circle radius = the circle's radius ((xCoord - xCenter) ^ 2 + (yCoord - yCenter) ^ 2) < (radius ^ 2) If the radius ^ 2 is less than ((xCoord - xCenter) ^ 2 + (yCoord - yCenter) ^ 2) then it means that the co-ordinates were inside the circle. I am struggling alot to wrap my head around this and cannot seem to work out how it works out that the co-ordinates were inside the circle. Could someone please break this down for me and explain how it is worked out (what is going on in a logical manner so to speak)? Sorry if my question is formatted wrong it is my first question on this site. Thanks
Let me try to answer in words only. You have a circle. To fill it in, as with a paint program, you take every point whose distance from the center of the circle is less than the radius of the circle. Simple enough. To test if a point is inside the circle, calculate the distance from the center point to your point. If it is less than the radius, the point is in your circle.
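In code, the test is exactly the squared-distance comparison from the question; comparing squares avoids taking a square root. A sketch:

```python
def in_circle(x, y, cx, cy, r):
    """True if (x, y) lies strictly inside the circle with center (cx, cy), radius r."""
    dx, dy = x - cx, y - cy
    return dx * dx + dy * dy < r * r

# (3, 4) is at distance 5 from the origin (a 3-4-5 triangle).
print(in_circle(3, 4, 0, 0, 6))   # True:  25 < 36
print(in_circle(3, 4, 0, 0, 5))   # False: 25 < 25 fails, the point is on the boundary
```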
Mean number of particle present in the system: birth-death process, $E(X_t|X_0=i)$, $b_i=\frac{b}{i+1}$, $d_i=d$ Let $\{X_t\}$ be a birth–and–death process with birth rate $$ b_i = \frac{b}{i+1}, $$ when $i$ particle are in the system, and a constant death rate $$ d_i=d. $$ Find the expected number of particle in the system at time $t$, given that $X_0=i$. Define $$ f(t)=E(X_t), $$ and $$ p_n=P\left(X_t=n | X_0=i \right). $$ Using the foward equation, $$ f'(t)=\sum_{n=1}^\infty n \left( p_{n-1} \frac{b}{n} + p_{n+1}d + p_n\left( 1 - \frac{b}{n+1}-d\right)\right). $$ After simplification, I have $$ f'(t)=p_0 b - \sum_{n=1}^\infty\frac{b}{n+1}p_n + d + f(t), $$ and I don't see how to solve this differential equation.
Not sure one can get explicit formulas for $E[X_t]$ but anyway, your function $f$ is not rich enough to capture the dynamics of the process. The canonical way to go is to consider $u(t,s)=E[s^{X_t}]$ for every $t\geqslant0$ and, say, every $s$ in $(0,1)$. Then, pending some errors in computations done too quickly, the function $u$ solves an integro-differential equation similar to $$ \frac{s}{1-s}\cdot\frac{\partial u}{\partial t}(t,s)=d\cdot(u(t,s)-u(t,0))-b\int_0^su(t,r)\mathrm dr, $$ with initial condition $u(0,s)=s^i$. Assuming one can solve this (which does not seem obvious at first sight), your answer is $$ E[X_t]=\frac{\partial u}{\partial s}(t,1). $$
Mnemonic for centroid of a bounded region The centroid of a region bounded by two curves is given by: $ \bar{x} = \frac{1}{A}\int_a^b{x\left[f(x)-g(x)\right]dx} $ $ \bar{y} = \frac{1}{A}\int_a^b{\left[\frac{(f(x)+g(x)}{2}(f(x)-g(x))\right]dx} = \frac{1}{2A}\int_a^b{\left(f^2(x) - g^2(x)\right)dx}$ where A is just the area of that region. But I have a terrible time remembering those formulas (when taken in conjunction with all of the other things that need to be remembered), and which which moment uses which formula. Does anybody know a good mnemonic to keep track of them? Hopefully this isn't off topic. Thanks
In order to remember those formulas, you have to use them repeatedly on many problems involving finding centroid of areas bounded by two curves. Have faith in the learning process and you will remember it after using it many times, just like playing online games.
calculate the number of possible words A word can be at most 63 characters long. It can be a combination of: * letters from a to z * numbers from 0 to 9 * a hyphen - but only if it is not the first or the last character of the word I'm trying to calculate the possible number of combinations for a domain name. I took the stats facts here: https://webmasters.stackexchange.com/a/16997 I have a very poor, elementary level of math, so I got this address from a friend to ask this. If someone could write me a formula for how to calculate this, or give me the exact number, or any useful information, that would be great.
Close - but 26 letters plus 10 numbers plus the hyphen is 37 characters total, so for words of exactly 63 characters it would be (36^2)(37^61): 36 choices for each of the first and last characters and 37 for each of the 61 interior ones; for "at most 63" you would sum the analogous counts over all lengths. Now granted, that's just the number of alphanumeric combinations; whether those combinations are actually words would require quite a bit of proofreading.
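Summing over the possible lengths gives the "at most 63" count. A sketch, with a brute-force cross-check on a tiny two-letter alphabet:

```python
from itertools import product

def count_names(end_chars, max_len):
    """end_chars symbols allowed anywhere; '-' additionally allowed in the interior."""
    mid_chars = end_chars + 1
    total = end_chars                                   # length-1 names
    for L in range(2, max_len + 1):
        total += end_chars**2 * mid_chars**(L - 2)
    return total

# The length-63 term matches the (36^2)(37^61) count above.
assert count_names(36, 63) - count_names(36, 62) == 36**2 * 37**61

# Brute force on the alphabet {a, b} plus hyphen, max length 3.
brute = sum(1 for L in (1, 2, 3) for s in product("ab-", repeat=L)
            if s[0] != "-" and s[-1] != "-")
assert brute == count_names(2, 3) == 18
print(count_names(36, 63))
```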
Dense set in $L^2$ Let $ \Omega\subset \mathbb{R}^n$ with $m(\Omega^c)=0 $. Then how can we show that $ \mathcal{F}(C_{0}^{\infty}(\Omega))$ (here $ \mathcal{F}$ denotes the fourier transform) is dense in $L^2$(or $L^p$)? Besides, I'm also interested to know if the condition that $m(\Omega^c)=0$ can be weakened to some more general set. Thanks for your help.
To deal with the case $\Omega=\Bbb R^n$, take $f\in L^2$. Then by Plancherel's theorem, theorem 12 in these lecture notes, we can find $g\in L^2$ such that $f=\mathcal F g$. Now approximate $g$ by smooth functions with compact support and use the fact that $\mathcal F$ is an isometry of $L^2$, so it maps dense sets to dense sets.
Prove that a connected graph G, with 11 vertices and 52 edges, is Hamiltonian Is this graph always, sometimes, or never Eulerian? Give a proof or a pair of examples to justify your answer. Could G contain an Euler trail? Must G contain an Euler trail? Fully justify your answer.
$G$ is obtained from $K_{11}$ by removing three edges $e_1,e_2,e_3$. We now label the vertices of $G$ in the following way: $$e_1=(1,3)$$ Label the unlabeled vertices of $e_2$ by the smallest unused odd numbers, and then label the unlabeled vertices of $e_3$ by the smallest unused odd numbers. Note that by our choices $e_3 \neq (1,11)$: if $e_3$ uses the vertex labeled $1$, its remaining vertex is labeled by a number $\leq 9$. Label the remaining vertices in any way. Every removed edge now joins two odd-labeled vertices, and the only edge of the cycle below whose endpoints are both odd is $(1,11)$, which cannot be a removed edge ($e_2$'s new labels are at most $7$, and $e_3$ was just excluded). Then $1-2-3-4-5-6-7-8-9-10-11-1$ is a Hamiltonian cycle in $G$.
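The labeling argument can be verified exhaustively over all $\binom{55}{3} = 26235$ choices of removed edges. A sketch (only the removed edges need labels; the leftover vertices never touch a removed edge, so their labels are irrelevant to the check):

```python
from itertools import combinations

VERTS = range(11)
EDGES = list(combinations(VERTS, 2))                  # the 55 edges of K_11
CYCLE = {frozenset((i, i + 1)) for i in range(1, 11)} | {frozenset((1, 11))}

def removed_edges_avoid_cycle(removed):
    e1, e2, e3 = removed
    lab, odds = {e1[0]: 1, e1[1]: 3}, [5, 7, 9, 11]
    for e in (e2, e3):                                # smallest unused odd labels
        for v in e:
            if v not in lab:
                lab[v] = odds.pop(0)
    return all(frozenset((lab[u], lab[v])) not in CYCLE for u, v in removed)

assert all(removed_edges_avoid_cycle(r) for r in combinations(EDGES, 3))
print("the cycle 1-2-...-11-1 always avoids the removed edges")
```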
Triple Integral over a disk How do I integrate $$z = \frac{1}{x^2+y^2+1}$$ over the region above the disk $x^2+y^2 \leq R^2$?
Use polar coordinates: $x^2+y^2 = r^2$, etc. An area element is $dx\, dy = r \, dr \, d\theta$. The integral over the disk is $$\int_0^R dr \, r \: \int_0^{2 \pi} d\theta \frac{1}{1+r^2} = 2 \pi \int_0^R dr \frac{r}{1+r^2}$$ You can substitute $u=r^2$ to get for the integral $$\pi \int_0^{R^2} \frac{du}{1+u}$$ I trust that you can evaluate this.
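SymPy confirms the evaluation; the answer comes out as $\pi\log(1+R^2)$. A sketch:

```python
import sympy as sp

r, theta, R = sp.symbols("r theta R", positive=True)

inner = sp.integrate(r / (1 + r**2), (r, 0, R))      # log(R**2 + 1)/2
result = sp.integrate(inner, (theta, 0, 2 * sp.pi))  # the theta integral is trivial
print(sp.simplify(result))  # pi*log(R**2 + 1)
```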
probable squares in a square cake There is a probability density function defined on the square [0,1]x[0,1]. The pdf is finite, i.e., the cumulative density is positive only for pieces with positive area. Now Alice and Bob play a game: Alice marks two disjoint squares, Bob chooses the square that contains the maximum probability, and Alice gets the other square. The goal of Alice is to maximize the probability in her square. Obviously, in some cases Alice can assure herself a probability of 1/2, for example, if the pdf is uniform in [0,$1 \over 2$]x[0,1], she can cut the squares [0,$1 \over 2$]x[0,$1 \over 2$] and [0,$1 \over 2$]x[$1 \over 2$,1], both of which contain $1 \over 2$. However, in other cases Alice can assure herself only $1 \over 4$, for example, if the pdf is uniform in [0,1]x[0,1]. Are there pdfs for which Alice cannot assure herself even $1 \over 4$ ? What is the worst case for Alice?
I think Alice can always assure herself at least $1 \over 4$ cdf, in the following way. First, in each of the 4 corners, mark a square that contains $1 \over 4$ cdf. Since the pdf is finite, it is always possible to construct such a square, by starting from the corner and increasing the square gradually, until it contains exactly $1 \over 4$ cdf. There is at least one corner, in which the side length of such a square will be at most $1 \over 2$ . Suppose this is the lower-left corner, and the side length is a, so square #1 is [0,a]x[0,a], with a <= $1 \over 2$. Now, consider the following 3 squares: * *To the right of square #1: [a,1]x[0,1-a] *On top of square #1: [0,1-a]x[a,1] *On the top-right of square #1: [a,1]x[a,1] The union of these squares covers the entire remainder after we remove square #1. This remainder contains $3 \over 4$ cdf. So, the sum of cdf in all 3 squares is at least $3 \over 4$ (probably more, because the squares overlap). Among those 3, select the one with the greatest cdf. It must contain at least $1 \over 3$ of $3 \over 4$, i.e., at least $1 \over 4$. This is square #2. So, Alice can always cut two squares that contain at least $1 \over 4$ cdf. Note that this procedure relies on the fact (that I mentioned in the original question) that the pdf is finite. Otherwise, it may not always be possible to construct a square with $1 \over 4$ cdf.
How to simplify $\frac{(\sec\theta -\tan\theta)^2+1}{\sec\theta \csc\theta -\tan\theta \csc \theta} $ How to simplify the following expression : $$\frac{(\sec\theta -\tan\theta)^2+1}{\sec\theta \csc\theta -\tan\theta \csc \theta} $$
The numerator becomes $(\sec\theta -\tan\theta)^2+1=\sec^2\theta+\tan^2\theta-2\sec\theta\tan\theta+1=2\sec\theta(\sec\theta -\tan\theta)$ So, $$\frac{(\sec\theta -\tan\theta)^2+1}{\sec\theta \csc\theta -\tan\theta \csc \theta}$$ $$=\frac{2\sec\theta(\sec\theta -\tan\theta)}{\csc\theta(\sec\theta -\tan\theta)}=2\frac{\sec\theta}{\csc\theta}(\text{ assuming } \sec\theta -\tan\theta\ne0)$$ $$=2\frac{\sin\theta}{\cos\theta}=2\tan\theta$$
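A quick numerical spot-check of the simplification (a sketch; the sample angles are arbitrary, chosen away from the points where $\sec\theta-\tan\theta=0$ or the expression is undefined):

```python
from math import sin, cos, tan, isclose

def lhs(t):
    sec, csc = 1 / cos(t), 1 / sin(t)
    return ((sec - tan(t))**2 + 1) / (sec * csc - tan(t) * csc)

for t in (0.3, 0.7, 1.1, -0.4):
    assert isclose(lhs(t), 2 * tan(t), rel_tol=1e-12)
print("lhs simplifies to 2*tan(theta)")
```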
Metric spaces and distance functions. I need to provide an example of a space of points X and a distance function d, such that the following properties hold: * *X has a countable dense subset *X is uncountably infinite and has only one limit point *X is uncountably infinite and every point of X is isolated I'm really bad with finding examples... Any help will be greatly appreciated! Thank you. (I got the third one)
For the first question, hint: $\mathbb{Q}$ is a countable set. For the third question, hint: think about the discrete metric on a space. For the second question: Let $X=\{x\in\mathbb{R}\:|\: x>1 \mbox{ or }x=\frac{1}{n}, n\in\mathbb{N}_{\geq 1}\}\cup\{0\}$. Let * $d(x,x)=0$ for all $x$, * $d(x,y)=1$ if $x\neq y$ and ($x> 1$ or $y>1$), * $d(x,y)=|x-y|$ if $x\leq 1$ and $y\leq 1$. Then every point with $x>1$ is isolated, each $\frac{1}{n}$ is isolated, and $0$ is the only limit point, while $X$ is uncountable.
Group $\mathbb Q^*$ as direct product/sum Is the group $\mathbb Q^*$ (rationals without $0$ under multiplication) a direct product or a direct sum of nontrivial subgroups? My thoughts: Consider subgroups $\langle p\rangle=\{p^k\mid k\in \mathbb Z\}$ generated by a positive prime $p$ and $\langle -1\rangle=\{-1,1\}$. They are normal (because $\mathbb Q^*$ is abelian), intersects in $\{1\}$ and any $q\in \mathbb Q^*$ is uniquely written as quotient of primes' powers (finitely many). So, I think $\mathbb Q^*\cong \langle -1\rangle\times \bigoplus_p\langle p\rangle\,$ where $\bigoplus$ is the direct sum. And simply we can write $\mathbb Q^*\cong \Bbb Z_2\times \bigoplus_{i=1}^\infty \Bbb Z$. Am I right?
Yes, you're right. Your statement can be generalized to the multiplicative group $K^*$ of the fraction field $K$ of a unique factorization domain $R$. Can you see how? In fact, if I'm not mistaken it follows from this that for any number field $K$, the group $K^*$ is the product of a finite cyclic group (the group of roots of unity in $K$) with a free abelian group of countable rank, so of the form $K^* \cong \newcommand{\Z}{\mathbb{Z}}$ $\Z/n\Z \oplus \bigoplus_{i=1}^{\infty} \Z.$ Here it is not enough to take the most obvious choice of $R$, namely the full ring of integers in $K$, because this might not be a UFD. But one can always choose an $S$-integer ring (obtained from $R$ by inverting finitely many prime ideals) with this property and then apply Dirichlet's S-Unit Theorem.
Compute the Centroid of a Semicircle without Calculus Can the centroid of a semicircle be computed without deferring to calculus or a limiting procedure?
The following may be acceptable to you as an answer. You can use the centroid theorem of Pappus. I do not know whether you really mean half-circle (a semi-circular piece of wire) or a half-disk. Either problem can be solved using the theorem of Pappus. When a region is rotated about an axis that does not go through the region, the volume of the solid generated is the area of the region times the distance travelled by the centroid. A similar result holds when a piece of wire is rotated: the surface area of the solid is the length of the wire times the distance travelled by the centroid. In the case of rotating a semi-circular disk, or a semi-circular piece of wire, the volume (respectively, surface area) is known. Remark: The result was known some $1500$ years before Newton was born. And the volume of a sphere, also the surface area, were known even before that. The ideas used to calculate volume and area have, in hindsight, limiting processes at their heart. So if one takes a broad view of the meaning of "calculus," we have not avoided it.
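The two Pappus computations can be written out numerically; the sketch below takes radius $r=1$ and uses only the known formulas for the volume and surface area of a sphere:

```python
import math

r = 1.0

# Half-disk: rotating it about the diameter sweeps out a ball,
# so V = A * (2 * pi * ybar) gives the centroid height ybar.
area = math.pi * r**2 / 2
ball_volume = 4 / 3 * math.pi * r**3
y_disk = ball_volume / (2 * math.pi * area)
assert math.isclose(y_disk, 4 * r / (3 * math.pi))

# Semicircular wire: rotating it sweeps out the sphere's surface,
# so S = L * (2 * pi * ybar).
length = math.pi * r
sphere_area = 4 * math.pi * r**2
y_wire = sphere_area / (2 * math.pi * length)
assert math.isclose(y_wire, 2 * r / math.pi)
```

This recovers the classical values $4r/(3\pi)$ for the half-disk and $2r/\pi$ for the wire.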
Determining probability of certain combinations Say I have a set of numbers 1,2,3,4,5,6,7,8,9,10 and I take 10 C 4; I know that equals 210. But let's say I want to know how often 3 appears in those combinations; how do I determine that? I now know the answer to this is $\binom{1}{1}$ $\binom{9}{3}$ I am trying to apply this to solve a problem in my math book. School is over so I can't ask my professor; I'm just trying to get a head start on next year. There are 4 numbers S,N,M,K S stands for students in the class N the number of kids going on the trip M my friend circle including me K the number of my friends I need on the trip with me to enjoy myself. I have to come up with a general solution to find the probability that I will enjoy the trip if I am chosen to go on it. So far I came up with $\binom{S-1}{N-1}$ /($\binom{M-1}{K}$ $\binom{N-K-1}{S-K-1}$) it works for cases like 10 4 6 4 & 3 2 2 1 but doesn't work for 10 10 5 3 any help is appreciated
Corrected: First off, your fraction is upside-down: $\binom{S-1}{N-1}$ is the total number of groups of $N$ students that include you, so it should be the denominator of your probability, not the numerator. Your figure of $\binom{M-1}K\binom{N-K-1}{S-K-1}$ also has an inversion: it should be $\binom{M-1}K\binom{S-K-1}{N-K-1}$, where $\binom{S-K-1}{N-K-1}$ is the number of ways of choosing the $N-(K+1)$ students on the trip who are not you or your $K$ friends who are going. After those corrections you have $$\frac{\binom{M-1}K\binom{S-K-1}{N-K-1}}{\binom{S-1}{N-1}}\;.$$ The denominator counts all possible groups of $N$ students that include you. The first factor in the numerator is the number of ways to choose $K$ of your friends, and the second factor is the number of ways to choose enough other people (besides you and the $K$ friends already chosen) to make up the total of $N$. However, this counts any group of $N$ students that includes you and more than $K$ of your friends more than once: it counts such a group once for each $K$-sized set of your friends that it contains. To avoid this difficulty, replace the numerator by $$\binom{M-1}K\binom{S-M}{N-K-1}\;;$$ now the second factor counts the ways to fill up the group with students who are not your friends, so the product is the number of groups of $N$ students that contain you and exactly $K$ of your friends. Of course now you have to add in similar terms for each possible number of friends greater than $K$, since you’ll be happy as long as you have at least $K$ friends with you: it need not be exactly $K$. There are $\binom{M-1}{K+1}\binom{S-M}{N-K-2}$ groups of $N$ that include you and exactly $K+1$ of your friends, another $\binom{M-1}{K+2}\binom{S-M}{N-K-3}$ that contain you and exactly $K+2$ of your friends, and so on, and the numerator should be the sum of these terms: $$\sum_i\binom{M-1}{K+i}\binom{S-M}{N-K-1-i}\;.\tag{1}$$ You’ll notice that I didn’t specify bounds for $i$. 
$\binom{n}k$ is by definition $0$ when $k>n$ or $k<0$, so we really don’t have to specify them: only the finitely many terms that make sense are non-zero anyway. Alternatively, you can count the $N$-person groups that include you and fewer than $K$ of your friends and subtract that from the $\binom{S-1}{N-1}$ groups that include you; the difference must be the number that include you and at least $K$ of your friends, i.e., the number that is counted by $(1)$. The number that include you and $i$ of your friends is $\binom{M-1}i\binom{S-M}{N-1-i}$, so the number of groups that include you and fewer than $K$ of your friends is $$\sum_{i=0}^{K-1}\binom{M-1}i\binom{S-M}{N-1-i}\;.$$ This is going to be a shorter calculation than $(1)$ if $K$ is small.
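A quick numerical check that the direct sum $(1)$ and the complement count give the same probability, including the asker's test cases; the helper names here are my own:

```python
from math import comb

def c(n, k):
    # binomial with the convention C(n, k) = 0 for k < 0 or k > n
    return comb(n, k) if 0 <= k <= n else 0

def p_at_least(S, N, M, K):
    # direct sum (1): you plus at least K of your M-1 friends in the group of N
    num = sum(c(M - 1, i) * c(S - M, N - 1 - i) for i in range(K, M))
    return num / c(S - 1, N - 1)

def p_complement(S, N, M, K):
    # complement: subtract the groups with fewer than K friends
    bad = sum(c(M - 1, i) * c(S - M, N - 1 - i) for i in range(K))
    return 1 - bad / c(S - 1, N - 1)

for S, N, M, K in [(10, 4, 6, 4), (3, 2, 2, 1), (10, 10, 5, 3)]:
    assert abs(p_at_least(S, N, M, K) - p_complement(S, N, M, K)) < 1e-12

assert p_at_least(10, 10, 5, 3) == 1.0   # the whole class goes, friends included
```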
Proper way to define this multiset operator that does a pseudo-intersection? It's been a while since I've done anything with set theory and I'm trying to find a way to describe a certain operator. Let's say I have two multisets: $A = \{1,1,2,3,4\}$ $B = \{1,5,6,7\}$ How can I define the operator $\mathbf{O}$ such that $ A \mathbf{O} B= \{1,1,1\}$ Thanks!
Let us represent multisets by ordered pairs $\newcommand{\tup}[1]{\langle #1\rangle}\tup{x,i}$ where $x$ is the element and $i>0$ is the number of times that $x$ is in the set. Let me write the two multisets in this notation now: $$A=\{\tup{1,2},\tup{2,1},\tup{3,1},\tup{4,1}\},\quad B=\{\tup{1,1},\tup{5,1},\tup{6,1},\tup{7,1}\}.$$ We take the elements appearing in both multisets and sum their multiplicities: $$A\mathrel{\mathbf{O}}B=\{\tup{x,i+j}\mid\tup{x,i}\in A\land\tup{x,j}\in B\}=\{\tup{1,3}\}.$$
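With multiplicity dictionaries this operator is a one-liner; a sketch using Python's `Counter` (the function name is my own):

```python
from collections import Counter

def pseudo_intersect(A, B):
    # keep the elements present in both multisets, summing multiplicities
    return Counter({x: A[x] + B[x] for x in A if x in B})

A = Counter([1, 1, 2, 3, 4])
B = Counter([1, 5, 6, 7])
result = pseudo_intersect(A, B)
assert result == Counter({1: 3})
assert sorted(result.elements()) == [1, 1, 1]   # i.e. {1, 1, 1}
```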
Determining Fourier series for $\lvert \sin{x}\rvert$ for building sums My math problem is a bit more tricky than it sounds in the caption. I have the following task (which I in fact do not understand): "Determine the Fourier series for $f(x)=\lvert \sin{x}\rvert$ in order to find the sum of the series: $\frac{1}{1\cdot3}+\frac{1}{3\cdot5}+\frac{1}{5\cdot7}+\dots$" My approach: first, calculating the Fourier series. There is no period, interval or point given. I think it must turn out to be something like this: $a_{n} = \frac{2}{\pi} \int_{0}^{\pi} |\sin x|\cos nx \, dx$ but what would be the next step to evaluate the series? And second: the other given series. I think it is all about the odd numbers, so I have in mind that $2n-1$ and $2n+1$ are the two possible definitions. So it could be something like this: $\frac{1}{1\cdot 3}+\frac{1}{3\cdot 5}+\frac{1}{5\cdot 7}+\dots = \sum\limits_{n=1}^{\infty} \frac{1}{(2n-1)(2n+1)}$ But I cannot make the connection between this series, its sum and $|\sin x|$. Still, I think the sum should be something around $\sum\limits_{n=1}^{\infty} \frac{1}{(2n-1)(2n+1)} = \frac{1}{2}$ (but I cannot prove it yet). Please help me! P.S.: I know the other "Convergence of Fourier series for $|\sin{x}|$" question here at Stack Exchange, but I think it doesn't fit my problem; besides, I don't understand their explanation and no way is shown there to derive the solution oneself. P.P.S.: edits were made only to improve LaTeX and/or language
You were on the right track. First, calculate the Fourier series of $f(x)$ (you can leave out the magnitude signs because $\sin x \ge 0$ for $0\le x \le \pi$ ): $$a_n=\frac{2}{\pi}\int_{0}^{\pi}\sin x\cos nx\;dx= \left\{ \begin{array}{l}-\frac{4}{\pi}\frac{1}{(n+1)(n-1)},\quad n \text{ even}\\ 0,\quad n \text{ odd} \end{array}\right .$$ For the constant term we get $\frac{a_0}{2}=\frac{2}{\pi}$, since $a_0=\frac{2}{\pi}\int_{0}^{\pi}\sin x\;dx=\frac{4}{\pi}$. Therefore, the Fourier series is $$f(x) = \frac{2}{\pi}-\frac{4}{\pi}\sum_{n \text{ even}}^{\infty}\frac{1}{(n+1)(n-1)}\cos nx=\\ =\frac{2}{\pi}\left ( 1-2\sum_{n=1}^{\infty}\frac{1}{(2n+1)(2n-1)}\cos 2nx \right )\tag{1}$$ Now we can evaluate the series. Set $x=\pi$, then we have $f(\pi)=0$ and $\cos 2n\pi = 1$. Evaluating (1) with $x=\pi$ we get $$0 = 1 - 2\sum_{n=1}^{\infty}\frac{1}{(2n+1)(2n-1)}$$ which gives your desired result $\sum_{n=1}^{\infty}\frac{1}{(2n+1)(2n-1)}=\frac{1}{2}$.
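The value $\frac12$ can also be confirmed directly, since the series telescopes: $\frac{1}{(2n-1)(2n+1)}=\frac12\left(\frac{1}{2n-1}-\frac{1}{2n+1}\right)$. A numerical sketch checking the partial sums and the Fourier identity at $x=\pi$:

```python
import math

def partial_sum(N):
    return sum(1 / ((2 * n - 1) * (2 * n + 1)) for n in range(1, N + 1))

# telescoping: the partial sums equal (1 - 1/(2N+1)) / 2, which tends to 1/2
for N in [1, 2, 10, 1000]:
    assert math.isclose(partial_sum(N), (1 - 1 / (2 * N + 1)) / 2)
assert abs(partial_sum(100000) - 0.5) < 1e-5

# cross-check: the truncated Fourier series of |sin x| is ~0 at x = pi
x = math.pi
approx = 2 / math.pi * (1 - 2 * sum(math.cos(2 * n * x) / ((2 * n - 1) * (2 * n + 1))
                                    for n in range(1, 100001)))
assert abs(approx) < 1e-4
```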
How to prove $n$ is prime? Let $n \gt 1$ and $$\left\lfloor\frac n 1\right\rfloor + \left\lfloor\frac n2\right\rfloor + \ldots + \left\lfloor\frac n n\right\rfloor = \left\lfloor\frac{n-1}{1}\right\rfloor + \left\lfloor\frac{n-1}{2}\right\rfloor + \ldots + \left\lfloor\frac{n-1}{n-1}\right\rfloor + 2$$ and $\lfloor \cdot \rfloor$ is the floor function. How to prove that $n$ is a prime? Thanks in advance.
You know that $$\left( \left\lfloor\frac n 1\right\rfloor - \left\lfloor\frac{n-1}{1}\right\rfloor \right)+\left( \left\lfloor\frac n2\right\rfloor - \left\lfloor\frac{n-1}{2}\right\rfloor\right) + \ldots + \left( \left\lfloor\frac{n}{n-1}\right\rfloor - \left\lfloor\frac{n-1}{n-1}\right\rfloor\right) + \left\lfloor\frac n n\right\rfloor=2 \,.$$ You know that $$\left( \left\lfloor\frac n 1\right\rfloor - \left\lfloor\frac{n-1}{1}\right\rfloor \right)=1$$ $$\left\lfloor\frac n n\right\rfloor =1$$ $$\left( \left\lfloor\frac n k\right\rfloor - \left\lfloor\frac{n-1}{k}\right\rfloor\right) \geq 0, \qquad \forall 2 \leq k \leq n-1 \,.$$ Since everything adds up to $2$ and the first and last terms already contribute $1+1=2$, the remaining non-negative terms must all vanish; thus for all $2 \leq k \leq n-1$ we have $$ \left\lfloor\frac n k\right\rfloor - \left\lfloor\frac{n-1}{k}\right\rfloor = 0 \Rightarrow \left\lfloor\frac n k\right\rfloor = \left\lfloor\frac{n-1}{k}\right\rfloor $$ This means that $k \nmid n$: if $k \mid n$ then $\left\lfloor\frac n k\right\rfloor = \frac nk = \left\lfloor\frac{n-1}{k}\right\rfloor + 1$. Since this is true for all $2 \leq k \leq n-1$, you are done.
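There is a tidy reformulation: writing $D(n)=\sum_{k=1}^n\lfloor n/k\rfloor$, the difference $D(n)-D(n-1)$ counts the divisors of $n$, so the hypothesis says $n$ has exactly two divisors. A brute-force sketch checking this for small $n$ (function names are my own):

```python
def D(n):
    # summatory divisor function: D(n) = sum_{k=1}^{n} floor(n/k)
    return sum(n // k for k in range(1, n + 1))

def is_prime(n):
    return n > 1 and all(n % k for k in range(2, int(n**0.5) + 1))

for n in range(2, 500):
    num_divisors = sum(1 for k in range(1, n + 1) if n % k == 0)
    assert D(n) - D(n - 1) == num_divisors        # the difference counts divisors
    assert (D(n) - D(n - 1) == 2) == is_prime(n)  # "difference = 2" iff n is prime
```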
Examples of Diophantine equations with a large finite number of solutions I wonder, if there are examples of Diophantine equations (or systems of such equations) with integer coefficients fitting on a few lines that have been proven to have a finite, but really huge number of solutions? Are there ones with such a large number of solutions that we cannot write any explicit upper bound for this number using Conway chained arrow notation? Update: I am also interested in equations with few solutions but where a value in a solution is very large itself.
Let $b$ be a non-zero integer and let $n$ be a positive integer. The equation $y(x-b)=x^n$ has only finitely many integer solutions. The first solution is: $x=b+b^n$ and $y=(1+b^{n-1})^n$. The second solution is: $x=b-b^n$ and $y=-(1-b^{n-1})^n$, cf. [1, page 7, Theorem 9] and [2, page 709, Theorem 2]. The number of integer solutions to $(y(x-2)-x^{\textstyle 2^n})^2+(x^{\textstyle 2^n}-s^2-t^2-u^2)^2=0$ grows quickly with $n$, see [3]. References [1] A. Tyszka, A conjecture on rational arithmetic which allows us to compute an upper bound for the heights of rational solutions of a Diophantine equation with a finite number of solutions, http://arxiv.org/abs/1511.06689 [2] A. Tyszka, A hypothetical way to compute an upper bound for the heights of solutions of a Diophantine equation with a finite number of solutions, Proceedings of the 2015 Federated Conference on Computer Science and Information Systems (eds. M. Ganzha, L. Maciaszek, M. Paprzycki), Annals of Computer Science and Information Systems, vol. 5, 709-716, IEEE Computer Society Press, 2015. [3] A. Tyszka, On systems of Diophantine equations with a large number of integer solutions, http://arxiv.org/abs/1511.04004
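A quick sketch verifying that the two stated solution families really satisfy $y(x-b)=x^n$:

```python
# check both solution families for a few sample values of b and n
for b in [2, -3, 5]:
    for n in [2, 3, 4]:
        x1, y1 = b + b**n, (1 + b**(n - 1))**n
        assert y1 * (x1 - b) == x1**n

        x2, y2 = b - b**n, -((1 - b**(n - 1))**n)
        assert y2 * (x2 - b) == x2**n
```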
Probability related finance question: Need a more formal solution You are offered a contract on a piece of land which is worth $1,000,000$ USD $70\%$ of the time, $500,000$ USD $20\%$ of the time, and $150,000$ USD $10\%$ of the time. We're trying to maximize profit. The contract says you can pay $x$ dollars for someone to determine the land's value, from which you can decide whether or not to pay $300,000$ USD for the land. What is $x$? i.e., how much is this contract worth? $700,000 + 100,000 + 15,000 = 815,000$ is the land's expected worth. So if we just blindly buy, we net ourselves $515$k. I originally was going to say that the max we pay someone to value the land was $x < 515$k (right?) but that doesn't make sense. We'd only pay that max if we had to hire someone to determine the value. We'd still blindly buy the land all day since its expected value is greater than $300$k. Out of curiosity: what is the max we'd pay someone to value the land? Anyway, to think about this another way: if we pay someone $x$ to see the value of the land, we don't pay the $300,000$ USD in the $10\%$ of cases where the land is worth only $150,000$, and we still purchase the land whenever it is worth more than $300,000$. The quickest way to think of it is to say "valuing saves us $150,000$ USD $10\%$ of the time and does nothing the other $90\%$, so it is worth $15,000$ USD". Is there a more formal way to think of this problem?
There is no unique arbitrage-free solution to this pricing problem with $3$ outcomes, so you will need to impose more assumptions (such as risk neutrality, which is what your expected-value computation implicitly does) to get a numerical value for the contract.
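For reference, the risk-neutral expected-value arithmetic that the question walks through can be sketched as follows (probabilities are kept in percent so the integer arithmetic stays exact; this is exactly the kind of extra modelling assumption being imposed):

```python
# (value, probability in percent)
values = [(1_000_000, 70), (500_000, 20), (150_000, 10)]
price = 300_000

# buying blind: pay the price in every state
ev_blind = sum(p * v for v, p in values) // 100 - price
assert ev_blind == 515_000

# buying after a free appraisal: pay only when the value exceeds the price
ev_informed = sum(p * (v - price) for v, p in values if v > price) // 100
assert ev_informed == 530_000

# value of perfect information = what the appraisal adds
x = ev_informed - ev_blind
assert x == 15_000
```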
What is the centroid of a hollow spherical cap? I have a unit hollow sphere which I cut along a diameter to generate two equivalent hollow hemispheres. I place one of these hemispheres on an (x,y) plane, letting it rest on the circular planar face where the cut occurred. If the hemisphere was solid, we could write that its centroid in the above case would be given as $(0,0,\frac{3}{8})$. Given that the hemisphere is hollow, can we now write its centroid as $(0,0,\frac{1}{2})$?
Use $z_0=\displaystyle {{\int z \,ds}\over {\int ds}}$ where $z=\cos\phi$ and $ds=\sin\phi \,d\theta \,d\phi$ on the unit hemisphere. The $\theta$ integration cancels between numerator and denominator, and $\int_0^{\pi/2}\cos\phi\sin\phi \,d\phi=\frac12$ while $\int_0^{\pi/2}\sin\phi \,d\phi=1$. That gives your $1/2$.
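A midpoint-rule check of that surface integral (the discretization is my own choice):

```python
import math

# z0 = (integral of z ds) / (integral of ds) with z = cos(phi),
# ds proportional to sin(phi) dphi (the theta integral cancels),
# phi running from 0 to pi/2
M = 100_000
h = (math.pi / 2) / M
num = den = 0.0
for i in range(M):
    phi = (i + 0.5) * h
    w = math.sin(phi) * h        # surface-area weight
    num += math.cos(phi) * w
    den += w
z0 = num / den
assert abs(z0 - 0.5) < 1e-8
```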
Simple dice questions. There are two dice. Assume the dice are fair. What does the following probability represent? $$\frac{1}{6} + \frac{1}{6} - \frac{1}{36}$$ What does this represent: $$\frac{1}{6} \cdot \frac{5}{6} + \frac{1}{6} \cdot \frac{5}{6}$$ This represents the probability of rolling just a single $X$ (where $X$ is a specified number on the dice, not a combined value, with $X$ between $1$ and $6$), right? What does this represent: $1- \left(\frac{5}{6} \cdot\frac{5}{6} \right)$? And about $1/6 + 1/6 - 1/36$ again: is this also the probability of rolling a single 6? If so, why do we subtract the $1/36$ at the end (the probability of both dice showing six)? Don't we want to include that possibility since we're looking for the probability of rolling a single 6?
1> Same as 3 [but obtained via the inclusion–exclusion principle]. 2> P[exactly one of the two dice shows the specified number $X$]. 3> P[you get the particular outcome at least once].
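A Monte Carlo sketch checking all three expressions at once, taking $X=6$ (the seed and trial count are arbitrary choices of mine):

```python
import random

random.seed(0)
trials = 200_000
at_least_one = exactly_one = 0
for _ in range(trials):
    a, b = random.randint(1, 6), random.randint(1, 6)
    hits = (a == 6) + (b == 6)
    at_least_one += hits >= 1
    exactly_one += hits == 1

p_incl_excl = 1/6 + 1/6 - 1/36                   # expression 1 ...
assert abs(p_incl_excl - (1 - 25/36)) < 1e-12    # ... equals expression 3
assert abs(at_least_one / trials - p_incl_excl) < 0.01   # at least one six

p_exactly_one = (1/6) * (5/6) + (1/6) * (5/6)            # expression 2
assert abs(exactly_one / trials - p_exactly_one) < 0.01  # exactly one six
```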
A formal proof required using real analysis If $\int_0^1f^{2n}(x)dx=0$, prove that $f(x)=0$, where $f$ is a real valued continuous function on [0,1]? It is obvious, since $f^{2n}(x) \geq 0$, the only way this is possible is when $f(x)=0$. I am looking for any other formal way of writing this proof i.e. using concepts from real analysis. Any hints will be appreciated.
Assume there exists $c\in(0,1)$ such that $f^{2n}(c)>0$ (if this holds only at an endpoint, continuity gives an interior point with the same property). Then by definition of continuity there exists $0<\delta< \min(c,1-c)$ such that $$|x-c|<\delta\implies |f^{2n}(x)-f^{2n}(c)|<\frac{1}{2} f^{2n}(c)$$ In particular, if $|x-c|<\delta$ then $f^{2n}(x)>\frac{1}{2} f^{2n}(c)$. Therefore $$\int_0^1 f^{2n}(x) dx =\int_0^{c-\delta} f^{2n}(x) dx+\int_{c-\delta}^{c+\delta} f^{2n}(x) dx+\int_{c+\delta}^1 f^{2n}(x) dx\\ \ge 0+\int_{c-\delta}^{c+\delta} \frac{f^{2n}(c)}{2} dx+0 =\delta\, f^{2n}(c)>0 $$ a contradiction.
Partitions of an interval and convergence of nets Let $\mathscr{T}$ be the set of partitions $\tau = (\tau_0 = 0 < \tau_1 < \dots < \tau_N = 1)$ of the interval $[0,1]$ (where $N$ is not fixed). This becomes a directed set by setting $\tau < \tau^\prime$ iff $\tau^\prime$ is a subdivision. Now we can look at the nets $$ s_\tau := \sum_{j=1}^N (\tau_j - \tau_{j-1})^2$$ or $$ r_\tau := N \sum_{j=1}^N (\tau_j - \tau_{j-1})^3$$ We should have both $s_\tau \longrightarrow 0$ and $r_\tau \longrightarrow 0$ in the sense of nets, but I don't know how to prove it. Does anybody have an idea?
A general hint and a partial illustration. As I understand it, you must show that for each $\varepsilon>0$ there are partitions $\tau_1$ and $\tau_2$ such that $s_{\tau_1’}<\varepsilon$ and $r_{\tau_2’}<\varepsilon$ for each $\tau_1’>\tau_1$ and $\tau_2’>\tau_2$. Since $(a+b)^2\ge a^2+b^2$ for non-negative $a$ and $b$, the net $s_\tau$ is monotonically decreasing under refinement. So it is enough to show that for each real $\varepsilon>0$ there is a partition $\tau$ such that $s_\tau<\varepsilon$; the uniform partition into $N$ pieces gives $s_\tau=1/N$.
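A small sketch illustrating both the monotonicity under refinement and the uniform-partition bound (the sample partitions are my own):

```python
def s(tau):
    # s_tau = sum of squared gap lengths of the partition tau of [0, 1]
    return sum((b - a) ** 2 for a, b in zip(tau, tau[1:]))

def uniform(N):
    return [i / N for i in range(N + 1)]

# the uniform partition with N pieces gives s = 1/N, so s can be made small
assert abs(s(uniform(10)) - 1 / 10) < 1e-12
assert abs(s(uniform(1000)) - 1 / 1000) < 1e-12

# refining a partition can only decrease s, since (a+b)^2 >= a^2 + b^2
tau = [0, 0.3, 0.7, 1.0]
refined = [0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0]
assert set(tau) <= set(refined)
assert s(refined) <= s(tau)
```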
Unexpected approximations which have led to important mathematical discoveries On a regular basis, one sees at MSE approximate numerology questions like * *Prove $\log_{{1}/{4}} \frac{8}{7}> \log_{{1}/{5}} \frac{5}{4}$, *Prove $\left(\dfrac{2}{5}\right)^{{2}/{5}}<\ln{2}$, *Comparing $2013!$ and $1007^{2013}$ or yet the classical $\pi^e$ vs $e^{\pi}$. In general I don't like this kind of problems since a determined person with calculator can always find two numbers accidentally close to each other - and then ask others to compare them without calculator. An illustration I've quickly found myself (presumably it is as difficult as stupid): show that $\sin 2013$ is between $\displaystyle \frac{e}{4}$ and $\ln 2$. However, sometimes there are deep reasons for "almost coincidence". One famous example is the explanation of the fact that $e^{\pi\sqrt{163}}$ is an almost integer number (with more than $10$-digit accuracy) using the theory of elliptic curves with complex multiplication. The question I want to ask is: which unexpected good approximations have led to important mathematical developments in the past? To give an idea of what I have in mind, let me mention Monstrous Moonshine where the observation that $196\,884\approx 196\,883$ has revealed deep connections between modular functions, sporadic finite simple groups and vertex operator algebras. Many thanks in advance for sharing your insights.
The most famous, most misguided, and most useful case of approximation fanaticism comes from Kepler's attempt to match the orbits of the planets to a nested arrangement of platonic solids. Fortunately, he decided to go with his data instead of his desires and abandoned the approximations in favor of Kepler's Laws. Kepler's Mysterium Cosmographicum has unexpected close approximations, and they led to a major result in science.
Determining power series for $\frac{3x^{2}-4x+9}{(x-1)^2(x+3)}$ I'm looking for the power series for $f(x)=\frac{3x^{2}-4x+9}{(x-1)^2(x+3)}$ My approach: the given function is a combination of two problems. First I made some transformations, so the function looks easier: $$\frac{3x^{2}-4x+9}{(x-1)^2(x+3)}=\frac{3x^{2}-4x+9}{x^3+x^2-5x+3}$$ Now I have two polynomials. I thought the problem might be even easier, thinking about the function as: $$\frac{3x^{2}-4x+9}{x^3+x^2-5x+3}= (3x^{2}-4x+9)\cdot \frac{1}{x^3+x^2-5x+3}$$ Assuming the power series of $3x^{2}-4x+9$ is just $3x^{2}-4x+9$ itself, I hoped I could find the power series by multiplying the series of the two easier functions.. yeah, I am stuck. $\sum\limits_{n=0}^{\infty}a_{n}\cdot x^{n}=(3x^{2}-4x+9)\cdot \ ...?... =$ Solution
You can use the partial fraction decomposition: $$ \frac{3x^{2}-4x+9}{(x-1)^2(x+3)}= \frac{A}{1-x}+\frac{B}{(1-x)^2}+\frac{C}{1+\frac{1}{3}x} $$ and sum up the series you get, which are known. If you do the computation, you find $A=0$, $B=2$ and $C=1$, so $$ \frac{3x^{2}-4x+9}{(x-1)^2(x+3)}= \frac{2}{(1-x)^2}+\frac{1}{1+\frac{1}{3}x} $$ The development of $(1-x)^{-2}$ can be deduced from the fact that $$ \frac{1}{1-x}=\sum_{n\ge0}x^n $$ so, by differentiating, we get $$ \frac{1}{(1-x)^2}=\sum_{n\ge1}nx^{n-1}=\sum_{n\ge0}(n+1)x^n $$ The power series for the other term is again easy: $$ \frac{1}{1+\frac{1}{3}x}=\sum_{n\ge0}\frac{(-1)^n}{3^n}x^n $$ so your power series development is $$ \frac{3x^{2}-4x+9}{(x-1)^2(x+3)}= \sum_{n\ge0}\left(2n+2+\frac{(-1)^n}{3^n}\right)x^n $$
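A numerical sketch confirming the final coefficient formula against the original rational function:

```python
def f(x):
    return (3 * x**2 - 4 * x + 9) / ((x - 1) ** 2 * (x + 3))

def series(x, terms=60):
    # coefficients 2n + 2 + (-1)^n / 3^n from the partial fractions above
    return sum((2 * n + 2 + (-1) ** n / 3**n) * x**n for n in range(terms))

for x in [0.1, -0.2, 0.5]:        # points inside the radius of convergence
    assert abs(f(x) - series(x)) < 1e-12

assert f(0) == 3.0                # constant term: 2*0 + 2 + 1 = 3
```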
Subgroup transitive on the subsets with same cardinality Maybe there is some very obvious insight that I miss here, but I've asked this question also to other people and nothing meaningful came out: If you have a subgroup $G$ of $S_n$ (the symmetric group on $n$ elements), you can consider the natural action of $G$ on the subsets of $\{1,...,n\}$; my question is this: what are the $G$ that act transitively on the subsets of the same cardinality; that is, whenever $A,B \subseteq \{1,...,n\}$ with $|A|=|B|$, there is $g \in G$ so that $gA=B$?
By the Livingstone Wagner Theorem, (Livingstone, D., Wagner, A., Transitivity of finite permutation groups on unordered sets. Math. Z. 90 (1965) 393–403), if $n \ge 2k$ and $G$ is transitive on $k$-subsets with $k \ge 5$, then $G$ is $k$-transitive. Using the classification of finite simple groups, $A_n$ and $S_n$ are the only finite $k$-transitive groups for $k \ge 6$, and the only 4- and 5-transitive groups are the Mathieu groups, so (at least for $n \ge 10$), the only permutation groups that are $k$-homogeneous for all $k$ are $A_n$ and $S_n$. See also the similar discussion in Group actions transitive on certain subsets