Solve $\frac{1}{2x}+\frac{1}{2}\left(\frac{1}{2x}+\cdots\right)$ If $$\displaystyle \frac{1}{2x}+\frac{1}{2}\left(\frac{1}{2x}+ \frac{1}{2}\left(\frac{1}{2x} +\cdots\right) \right) = y$$ then what is $x$? I was thinking of expanding the brackets and trying to notice a pattern, but since it effectively goes to infinity, I don't think I can expand it properly, can I?
Expand. The first term is $\frac{1}{2x}$. The sum of the first two terms is $\frac{1}{2x}+\frac{1}{4x}$. The sum of the first three terms is $\frac{1}{2x}+\frac{1}{4x}+\frac{1}{8x}$. And so on. The sum of the first $n$ terms is $$\frac{1}{2x}\left(1+\frac{1}{2}+\frac{1}{4}+\cdots+\frac{1}{2^{n-1}}\right).$$ As $n\to\infty$, the inner sum approaches $2$. So the whole thing approaches $\frac{1}{x}$. So we are solving the equation $\frac{1}{x}=y$, i.e. $x=\frac{1}{y}$.
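As a quick sanity check (not part of the original answer), one can build the nested expression from the inside out and watch it converge to $1/x$; here in Python with the arbitrary sample value $x=3$:

```python
# Evaluate the nested expression from the inside out for a sample x = 3
x = 3.0
val = 0.0
for _ in range(60):        # 60 levels of nesting is plenty in double precision
    val = 1/(2*x) + val/2  # one more layer: 1/(2x) + (1/2)(inner value)
print(val, 1/x)            # both print 0.3333333333333333
```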
Bounds on $ \sum\limits_{n=0}^{\infty }{\frac{a(a+1)\cdots\left( a+n-1 \right)}{\left( a+b \right)\cdots\left( a+b+n-1 \right)}\frac{{{z}^{n}}}{n!}}$ I have a confluent hypergeometric function $ {}_{1}{{F}_{1}}\left( a,a+b,z \right)$ where $z<0$ and $a,b>0$ are integers. I am interested in finding bounds on the value it can take, or an approximation for it. Since $$0<\frac{a(a+1)\cdots\left( a+n-1 \right)}{\left( a+b \right)\cdots\left( a+b+n-1 \right)}<1, $$ I was thinking that ${{e}^{z}}$ would be an upper bound. Is that right? I am not sure about the lower bound; can this go to $-\infty$? $$ {}_{1}{{F}_{1}}\left( a,a+b,z \right)=\sum\limits_{n=0}^{\infty }{\frac{a(a+1)\cdots\left( a+n-1 \right)}{\left( a+b \right)\cdots\left( a+b+n-1 \right)}\frac{{{z}^{n}}}{n!}}\le {{e}^{z}}$$ Thanks!
You may be interested in the asymptotic formula, $$ {}_1F_1(a,a+b,z) = \frac{\Gamma(a+b)}{\Gamma(b)} (-z)^{-a} + O(z^{-a-1}) $$ as $\operatorname{Re} z \to -\infty$ (see, e.g., [1]). Note, in particular, that it is not true that ${}_1F_1(a,a+b,x) \leq e^x$ for $x \in \mathbb{R}$ large and negative. [1] Bateman Manuscript Project, Higher Transcendental Functions. Vol. 1, p. 248,255,278. [pdf]
Larger Theory for root formula Consider the quadratic equation: $$ax^2 + bx + c = 0$$ and the linear equation: $$bx + c = 0.$$ We note the solution of the linear equation is $$x = -\frac{c}{b}.$$ We note the solution of the quadratic equation is $$\frac{-b \pm \sqrt{b^2 - 4ac}}{2a}.$$ Suppose we take the limit as $a$ approaches 0 in the quadratic formula: ideally we should get the expression $-\frac{c}{b}$ left over, but this is clearly not the case. This means that the quadratic formula does not generalize the linear formula; it is instead streamlined to only solve quadratic equations. What would the general formula be? Basically, what is the formula that is equivalent to the quadratic formula when $a$ is non-zero and breaks down to the linear case when $a = 0$?
Hint $\ $ First rationalize the numerator, then take the limit as $\rm\:a\to 0.$ $$\rm \frac{-b + \sqrt{b^2 - 4ac}}{2a}\ =\ \frac{2c}{-b -\sqrt{b^2 - 4ac}}\ \to\ -\frac{c}b\ \ \ as\ \ \ a\to 0 $$ Remark $\ $ The quadratic equation for $\rm\,\ z = 1/x\,\ $ is $\rm\,\ c\ z^2+ b\ z + a = 0\,\ $ hence $$\rm z\ =\ \dfrac{1}{x}\ =\ \dfrac{-b \pm \sqrt{b^2-4\:a\:c}}{2\:c} $$ Inverting the above now yields the sought limit as $\rm\:a\to 0,\:$ so effectively removing the apparent singularity at $\rm\: a = 0\:.\ $
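A small numerical illustration (my own addition; the values $b=2$, $c=3$ are arbitrary) showing that the rationalized form indeed tends to $-c/b$, and that it is also the numerically stabler way to evaluate this root for tiny $a$:

```python
from math import sqrt

b, c = 2.0, 3.0                                # arbitrary nonzero coefficients
for a in (1e-4, 1e-8, 1e-12):
    naive = (-b + sqrt(b*b - 4*a*c)) / (2*a)   # suffers cancellation as a -> 0
    stable = 2*c / (-b - sqrt(b*b - 4*a*c))    # rationalized form from the hint
    print(a, naive, stable)
print(-c/b)                                    # the limit: -1.5
```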
Restricted Permutations and Combinations Tino, Colin, Candice, Derek, Esther, Mary and Ronald are famous artists. Starting next week, they will take turns to display their work and each artist's work will be on display at the London Show for exactly one week, so that the display of the artworks will last the next seven weeks. In how many ways can a display schedule be developed if $(a)$ there is no restriction? ANS 7! EASY Suppose that a revised timetable has been drawn up and the artists in $(a)$ are to display their work in groups of twos or threes so that the entire exercise takes at most $3$ weeks. In how many ways can Colin and Candice find themselves in the same group? Hint: We need to explore all possible scenarios - the possible distributions of the teams in terms of numbers are: $2,2,3; 2,3,2;$ and $3,2,2.$ Candice and Colin can be in the $1st, 2nd$ or $3rd$ week. This thinking gives us $9$ possible scenarios in which Colin and Candice may end up being in the same team. Answer: $150$ How to get the $150$? I have scratched my head to no avail. What I assumed is that if they are both in a group that has only two people, we count that group as one person, and if they are in a group that has $3$ people we count the group as two people. I tried: In both cases the $n$ is reduced to $7-2$, and $r$ to either $3-2$ or $2-2$. Hence: $(5C1*2!*3!*2! + 5C0*2!*2!*2!) * 3!$ and many other futile attempts. The problems are: 1) Classification: how do I classify this problem? As strictly permutation, strictly combination, or mixed? 2) Is sampling with or without replacement? 3) How do I change $n$ and $r$: by subtracting one from each, or by subtracting $p$, the number of objects that will always occur? $7-2=5$? or $7-1=6$? The correct reasoning approach will be greatly appreciated.
Where do you get $2*6!$ for (a)? I find $7!$ For the three week problem, let us start by assuming $3,2,2$. We can multiply by $3$ at the end to take care of cyclic permutations of weeks. The two pairs are differently named bins in this case. The two C's can be together in one of the twos in $2\text{(which week)}*{5 \choose 3}\text{(who is in the triplet)}=20$ ways. They can be two of the triplet in $5\text{(the other member of the triplet)}*{4 \choose 2}\text{(the first other couple)}=30$ ways. Multiplying by $3$ gives a total of $150$ ways.
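For readers who want to double-check the $150$, here is a brute-force enumeration in Python (my own addition, not part of the answer); Colin and Candice are taken to be indices $1$ and $2$:

```python
from itertools import product

# Assign each of the 7 artists a week in {0, 1, 2}; a schedule is valid when
# the week sizes are {2, 2, 3}; require Colin (index 1) and Candice (index 2)
# to share a week.
count = 0
for weeks in product(range(3), repeat=7):
    sizes = sorted(weeks.count(w) for w in range(3))
    if sizes == [2, 2, 3] and weeks[1] == weeks[2]:
        count += 1
print(count)  # 150
```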
Proving the symmetry of the Ricci tensor? Consider the Ricci tensor: $R_{\mu\nu}=\partial_{\rho}\Gamma_{\nu\mu}^{\rho} -\partial_{\nu}\Gamma_{\rho\mu}^{\rho} +\Gamma_{\rho\lambda}^{\rho}\Gamma_{\nu\mu}^{\lambda} -\Gamma_{\nu\lambda}^{\rho}\Gamma_{\rho\mu}^{\lambda}$ In the most general case, is this tensor symmetric? If yes, how does one prove it?
This misunderstands the previous post by assuming different conventions about the ordering of indices on the curvature tensor. The previous post assumes the convention that $$2\nabla_{[j}\nabla_{k]}v^\ell = {R^\ell}_{ijk}v^i.$$ With this convention, the argument is correct.
Easiest and most complex proof of $\gcd (a,b) \times \operatorname{lcm} (a,b) =ab.$ I'm looking for an understandable proof of this theorem, and also a complex one involving beautiful math techniques such as analytic number theory, or something else. I hope you can help me on that. Thank you very much
Let $\ell= \text{lcm}(a,b), g=\text{gcd}(a,b)$ for positive integers $a,b$. By the division algorithm there exist integers $q,r$ with $0\leq r < \ell$ such that $ab = q\ell + r$. Observing that both $a$ and $b$ divide both $ab$ and $q\ell$, we conclude they both divide $r$. As $r$ is a common multiple, we must have $\ell \leq r$ or $r\leq 0$, so $r=0$. Therefore $s=\frac{ab}{\ell}$ is an integer. Observe that $\frac{a}{s}=\frac{\ell}{b}$ and $\frac{b}{s}=\frac{\ell}{a}$ are both integers, so $s$ is a common divisor. Thus $g\geq s = \frac{ab}{\ell}$. On the other hand, $\frac{ab}{g}=a\frac{b}{g}=b\frac{a}{g}$ where $\frac{b}{g}$ and $\frac{a}{g}$ are both integers, so $\frac{ab}{g}$ is a common multiple of $a$ and $b$. As $\frac{ab}{g}>0$ we conclude $\frac{ab}{g}\geq \ell$, therefore $\frac{ab}{\ell}\geq g$ and we are done.
Concatenation of 2 finite Automata I have some problems understanding the algorithm for the concatenation of two NFAs. For example: how do I concatenate A1 and A2? A1:

    #    a    b
    ->s  {s}  {s,p}
    p    {r}  {0}
    *r   {r}  {r}

A2:

    #    a    b
    ->s  {s}  {p}
    p    {s}  {p,r}
    *r   {r}  {s}

Any help would be greatly appreciated.
We connect the accepting states of A1 to the starting point of A2, assuming that -> means start and * means accepting state. (I labelled the states according to the original automata, deleted * from r1 and -> from s2, and added s2 to every transition that can reach r1: once an A1-word would be accepted, we can jump to A2.)

    #     a        b
    ->s1  {s1}     {s1,p1}
    p1    {r1,s2}  0
    r1    {r1,s2}  {r1,s2}
    s2    {s2}     {p2}
    p2    {s2}     {p2,r2}
    *r2   {r2}     {s2}
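To make the construction concrete, here is a small Python sketch of the same idea (not from the original post; the dict-based NFA encoding and the `concat_nfa` name are my own, and it ignores the corner case where A1 accepts the empty word):

```python
def concat_nfa(nfa1, nfa2):
    """Concatenate two NFAs, mirroring the construction above.

    Each NFA is a dict with keys 'delta' (mapping (state, symbol) to a set
    of successor states), 'start', and 'accept' (a set). State names are
    assumed to be disjoint between the two automata.
    """
    delta = {}
    # Copy A1's transitions; whenever a transition can reach an accepting
    # state of A1, also allow jumping to A2's start state.
    for (q, sym), targets in nfa1['delta'].items():
        new_targets = set(targets)
        if targets & nfa1['accept']:
            new_targets.add(nfa2['start'])
        delta[(q, sym)] = new_targets
    # A2's transitions are copied unchanged.
    delta.update({k: set(v) for k, v in nfa2['delta'].items()})
    # Start in A1; accept only in A2's accepting states.
    return {'delta': delta, 'start': nfa1['start'], 'accept': set(nfa2['accept'])}
```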
Applications of group cohomology to algebra I started learning about group cohomology (of finite groups) from two books: Babakhanian and Hilton&Stammbach. The theory is indeed natural and beautiful, but I could not find many examples of its uses in algebra. I am looking for problems stated in more classical algebraic terms which are solved elegantly or best understood through the notion of group cohomology. What I would like to know the most is: what can we learn about a finite group $G$ by looking at its cohomology groups relative to various $G$-modules? The one example I did find is $H^2(G,M)$ classifying extensions of $M$ by $G$. So, my question is: What problems on groups/rings/fields/modules/associative algebras/Lie algebras are solved or best understood through group cohomology? Examples in algebraic number theory are also welcome (this is slightly less interesting from my current perspective, but I do remember the lecturer mentioning this concept in a basic algebraic number theory course I've taken some time ago).
Here's a simple example off the top of my head. A group is said to be finitely presentable if it has a presentation with finitely many generators and relations. This, in particular, implies that $H_2(G)$ is of finite rank. (You can take nontrivial coefficient systems here too.) So you get a nice necessary condition for finite presentability. The proof of this fact is simple. If $G$ is finitely presented, you can build a finite $2$-complex that has $G$ as its fundamental group. To get an Eilenberg-MacLane space $K(G,1)$ you add $3$-cells to kill all of $\pi_2$, then you add $4$-cells to kill all of $\pi_3$, etc. You end up building a $K(G,1)$ with a finite $2$-skeleton.
Integrate: $\oint_c (x^2 + iy^2)ds$ How do I integrate the following with $|z| = 2$ and $s$ is the arc length? The answer is $8\pi(1+i)$ but I can't seem to get it. $$\oint_c (x^2 + iy^2)ds$$
Parametrize $C$ as $\gamma(t) = 2e^{it} = 2(\cos t + i \sin t)$ for $t \in [0, 2\pi]$. From the definition of the path integral, we have: $$ \oint_C f(z) \,ds = \int_a^b f(\gamma(t)) \left|\gamma'(t)\right| \,dt $$ Plug in the given values to get: \begin{align} \oint_C (x^2 + iy^2) \,ds &= \int_0^{2\pi} 4(\cos^2{t} + i \sin^2{t})\left| 2ie^{it}\right| \,dt \\ &= 8\left(\int_0^{2\pi} \cos^2 t\,dt + i \int _0^{2\pi} \sin^2 t\,dt\right) \end{align} The last 2 integrals should be straightforward. Both evaluate to $\pi$. Hence, the original integral evaluates to $8\pi(i + 1)$.
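A quick numerical confirmation of the value $8\pi(1+i)$ (my own addition, using the same parametrization; the Riemann sum over a full period is very accurate here):

```python
import numpy as np

n = 200000
t = np.linspace(0, 2*np.pi, n, endpoint=False)
x, y = 2*np.cos(t), 2*np.sin(t)
integrand = (x**2 + 1j*y**2) * 2          # |gamma'(t)| = 2
print(integrand.sum() * (2*np.pi / n))    # ~ 25.1327 + 25.1327j
print(8*np.pi*(1 + 1j))                   # the exact value 8*pi*(1+i)
```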
Square root is operator monotone This is a fact I've used a lot, but how would one actually prove this statement? Paraphrased: given two positive operators $X, Y \geq 0$, how can you show that $X^2 \leq Y^2 \Rightarrow X \leq Y$ (or that $X \leq Y \Rightarrow \sqrt X \leq \sqrt Y$, but I feel like the first version would be easier to work with)? Note: $X$ and $Y$ don't have to commute, so $X^2 - Y^2$ is not necessarily $(X+Y)(X-Y)$.
Here is a proof which works more generally for $x,y\ge 0$ in a $C^*$-algebra, where the spectral radius satisfies $r(z)\le \|z\|=\sqrt{\|z^*z\|}$ for every element $z$, and $r(t)=\|t\|$ for every normal element $t$. The main difference with @user1551's argument is that we will use the invertibility of $y$. Other than that, the idea is essentially the same. If $y$ is not invertible, replace $y$ by $y+\epsilon1$ in the following argument, and then let $\epsilon$ tend to $0$ at the very end. Now assume that $y$ is invertible and note that $$x^2\le y^2\quad\Rightarrow\quad y^{-1}x^2y^{-1}\le y^{-1}y^2y^{-1}=1\quad\Rightarrow\quad\left\|y^{-1}x^2y^{-1}\right\|\le 1.$$ Since $t=y^{-1/2}xy^{-1/2}$ and $z=xy^{-1}$ are similar, they have the same spectral radius and therefore $$\left\|t\right\|=r(t)=r\left(z\right)\le\|z\|=\sqrt{\|z^*z\|}=\sqrt{\left\|y^{-1}x^2y^{-1}\right\|}\le 1.$$ It follows that $y^{-1/2}xy^{-1/2}\le 1$, whence $x\le y$ as desired.
Order of an element in a group G Suppose that $a$ is an element of order $n$ in a group $G$. Prove: i) $a^i = a^j$ if and only if $i \equiv j \pmod n$; ii) if $d = (m,n)$, then the order of $a^m$ is $n/d$. I am trying to teach myself this and came to this question. How would you solve this? Can someone please show how to?
If $i \equiv j \pmod n$, then $i = j + kn$ for some $k \in \Bbb Z$. It follows that: $$ a^i = a^{j + kn} = a^ja^{kn} = a^j (a^n)^k = a^j $$ If you haven't proved the power properties $a^{p+q} = a^pa^q$ and $a^{pq} = (a^p)^q$ for $p, q \in \Bbb Z^+$, this is a good exercise to do now. Try using induction. Now, if $a^i = a^j$, then: $$ a^i a^{-j} = a^{i - j} = 1 $$ And this is only possible if $i - j$ is a multiple of $n$. (i) follows. The property $(a^n)^{-1} = a^{-n}$ was used here. Again, try proving it via induction. For (ii), assume the order of $a^m$ is a number smaller than $n/d$, and try to use (i) to reach a contradiction.
What series test would you use on these and why? I just started learning series, and I am trying to put everything together... I have a few random problems just to see what kind of strategy you would use here...

* $\displaystyle\sum_{n=1}^\infty\frac{n^n}{(2^n)^2}$
* $\displaystyle\sum_{n=1}^\infty\frac2{(2n - 1)(2n + 1)}$; telescoping series?
* $\displaystyle\sum_{n=1}^\infty\frac1{n(1 + \ln^2 n)}$
* $\displaystyle\sum_{n=1}^\infty\frac1{\sqrt{n (n + 1)}}$; integral test, since you could do a u-substitution?
General and mixed hints: $$\frac{2}{(2n-1)(2n+1)}=\left(\frac{1}{2n-1}-\frac{1}{2n+1}\right)$$ $$\frac{n}{4}\xrightarrow[n\to\infty]{}\infty$$ $$\frac{2^n}{2^n(1+n^2\log^22)}\le\frac{1}{\log^22}\cdot\frac{1}{n^2}$$
Can the matrix $A=\begin{bmatrix} 0 & 1\\ 3 & 3 \end{bmatrix}$ be diagonalized over $\mathbb{Z}_5$? I'm stuck on finding eigenvalues that are in the field; please help. Given matrix: $$ A= \left[\begin{matrix} 0 & 1\\ 3 & 3 \end{matrix}\right] $$ whose entries are from $\mathbb{Z}_5 = \{0, 1, 2, 3, 4\}$, find, if possible, matrices $P$ and $D$ over $\mathbb{Z}_5$ such that $P^{−1} AP = D$. I have found the characteristic polynomial: $x^2-3x-3=0$. Since it's over $\mathbb{Z}_5$, $x^2-3x-3=x^2+2x+2=0$. But from there I'm not sure how to find the eigenvalues; once I get the eigenvalues that are in the field it will be easy to find the eigenvectors and create the matrix $P$.
Yes, over $\Bbb Z_5$: from $\lambda^2 -3\lambda-3=0$ over $\Bbb Z_5$ we have $\Delta=9+12=4+2=6$ (since $9\equiv 4$ and $12\equiv 2$ in $\Bbb Z_5$), so $\Delta=1$, and thus $\lambda_1=\frac{3+1}{2}=2$ and $\lambda_2=\frac{3-1}{2}=1$ (note that $2^{-1}=3$ in $\Bbb Z_5$). For $\lambda_1$ we solve $\left( \left[\begin{matrix} 0 & 1\\ 3 &3 \end{matrix}\right]-\left[\begin{matrix} 2 & 0\\ 0 &2 \end{matrix}\right] \right)\left[\begin{matrix} x\\ y \end{matrix}\right]=0$, i.e. $$-2x+y=0, \qquad 3x+y=0,$$ and both equations reduce to $$y=2x,$$ so the eigenspace of $\lambda_1$ is $\{(x,2x): x\in\Bbb Z_5\}$, which has dimension $1$ with basis $\{(1,2)\}$. For $\lambda_2$: $\left( \left[\begin{matrix} 0 & 1\\ 3 &3 \end{matrix}\right]-\left[\begin{matrix} 1 & 0\\ 0 &1 \end{matrix}\right] \right)\left[\begin{matrix} x\\ y \end{matrix}\right]=0$ gives $y=x$, so the eigenspace of $\lambda_2$ is $\{(0,0),(1,1),(2,2),(3,3),(4,4)\}$, which has dimension $1$ with basis $\{(1,1)\}$. In the basis $\{(1,2),(1,1)\}$, i.e. with $P=\left[\begin{matrix} 1 & 1\\ 2 &1 \end{matrix}\right]$, the matrix is diagonalized as $D=P^{-1}AP=\left[\begin{matrix} 2 & 0\\ 0 &1 \end{matrix}\right]$.
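Since $\Bbb Z_5$ is tiny, everything above can be confirmed by brute force; a short Python check (my own addition):

```python
# Brute-force check over Z_5
p = 5
A = [[0, 1], [3, 3]]

# Eigenvalues: lam with det(A - lam*I) = 0 (mod 5)
eigs = [lam for lam in range(p)
        if ((A[0][0] - lam) * (A[1][1] - lam) - A[0][1] * A[1][0]) % p == 0]
print(eigs)  # [1, 2]

# Eigenvectors: all v = (x, y) with A v = lam v (mod 5)
for lam in eigs:
    vecs = [(x, y) for x in range(p) for y in range(p)
            if (A[0][0]*x + A[0][1]*y - lam*x) % p == 0
            and (A[1][0]*x + A[1][1]*y - lam*y) % p == 0]
    print(lam, vecs)  # multiples of (1,1) for lam=1, of (1,2) for lam=2
```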
Separating $\frac{1}{1-x^2}$ into multiple terms I'm working through an example that contains the following steps: $$\int\frac{1}{1-x^2}dx$$ $$=\frac{1}{2}\int\frac{1}{1+x} - \frac{1}{1-x}dx$$ $$\ldots$$ $$=\frac{1}{2}\ln{\frac{1+x}{1-x}}$$ I don't understand why the separation works. If I attempt to re-combine the terms, I get this: $$\frac{1}{1+x} - \frac{1}{1-x}$$ $$=\frac{1-x}{1-x}\frac{1}{1+x} - \frac{1+x}{1+x}\frac{1}{1-x}$$ $$=\frac{1-x - (1+x)}{1-x^2}$$ $$=\frac{-2x}{1-x^2} \ne \frac{2}{1-x^2}$$ Or just try an example, and plug in $x = 2$: $$2\frac{1}{1-2^2} = \frac{-2}{3}$$ $$\frac{1}{1+2} -\frac{1}{1-2} = \frac{1}{3} + 1 = \frac{4}{3} \ne \frac{-2}{3}$$ Why can $\frac{1}{1-x^2}$ be split up in this integral, when the new terms do not equal the old term?
The thing is $$\frac{1}{1-x}\color{red}{+}\frac 1 {1+x}=\frac{2}{1-x^2}$$ What you might have seen is $$\frac{1}{x-1}\color{red}{-}\frac 1 {x+1}=\frac{2}{1-x^2}$$ Note the denominator is reversed in the sense $1-x=-(x-1)$.
PRA: Rare event approximation with $P(A\cup B \cup \neg C)$? The rare event approximation for events $A$ and $B$ means the upper-bound approximation $P(A\cup B)=P(A)+P(B)-P(A\cap B)\leq P(A)+P(B)$. Now by the inclusion-exclusion principle $$P(A\cup B\cup \neg C)= P(A)+P(B)+P(\neg C)-P(A\cap B)-P(A\cap \neg C) -P(B\cap\neg C) +P(A\cap B\cap \neg C) \leq P(A)+P(B)+P(\neg C)+P(A\cap B\cap \neg C)$$ and now by Wikipedia, this is the general form, so does the rare-event approximation mean removal of the minus terms in the general form of the inclusion-exclusion principle, i.e. the below? $$\mathbb P\left(\cup_{i=1}^{n}A_i\right)\leq \sum_{I\subset\{1,3,...,2h-1\}; |I|=k}\mathbb P(A_I)$$ where $2h-1$ is the last odd term in $\{1,2,3,...,n\}$. Example For example in the case of three events, is the below a true rare-event approximation? $$P(A\cup B\cup \neg C) \leq P(A)+P(B)+P(\neg C)+P(A\cap B\cap \neg C)$$ P.s. I am studying a probability-risk-assessment course, Mat-2.3117.
Removing all the negatives certainly gives an upper bound. But if one looks at the logic of the inclusion-exclusion argument, whenever we have just added, we have added too much (except possibly, at the very end). So at any stage just before we start subtracting again, our truncated expression gives an upper bound. Thus one obtains upper bounds by truncating after the first sum, or the third, or the fifth, and so on.
We break a unit length rod into two pieces at a uniformly chosen point. Find the expected length of the smaller piece.
With probability ${1\over2}$ each the break takes place in the left half, resp. in the right half of the rod. In both cases the average length of the smaller piece is ${1\over4}$. Therefore the overall expected length of the smaller piece is ${1\over4}$.
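A one-line Monte Carlo check of the value $1/4$ (my own addition):

```python
import random

trials = 10**6
total = sum(min(u, 1 - u) for u in (random.random() for _ in range(trials)))
print(total / trials)  # approximately 0.25
```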
Section of unions of open subschemes I'm stuck at a line in Hartshorne's text (p. 82). Could someone help me please? Fact. Suppose that $X$ is a scheme having $U$ and $V$ as two non-empty disjoint open subsets of $X$. Then $\mathcal{O}_X(U \cup V) = \mathcal{O}_X(U) \times \mathcal{O}_X(V)$. I know how to prove this when $X$ is affine, but I don't know how to reduce the general case to the affine case. The fact sounds intuitively reasonable: since $U$ and $V$ are disjoint, a "function" is defined on $U \cup V$ iff it is defined independently on $U$ and on $V$. However, I can't prove this rigorously, since I'm stuck at the first difficulty with algebraic geometry using scheme language: there is no formula for $\mathcal{O}_X(U)$! It would be nice to know a rigorous proof of this fact. Thanks!
There is a canonical homomorphism $(\rho^{U\cup V}_U, \rho^{U\cup V}_V) : \Gamma(U\cup V) \to \Gamma(U) \times \Gamma(V)$ induced by the restriction homomorphisms. This being an isomorphism follows directly from the fact that the structure sheaf is a sheaf: injectivity is precisely the fact that a section over $U \cup V$ restricts to $0$ on both $U$ and $V$ iff it is $0$ on the union; and surjectivity is precisely the fact that two sections on $U$ and $V$ respectively can be glued to a section on the union (since they agree trivially on the empty intersection).
Reasoning about the gamma function using the digamma function I am working on evaluating the following expression: $\log\Gamma(\frac{1}{2}x) - \log\Gamma(\frac{1}{3}x)$ If I'm understanding correctly, the above is an increasing function, which can be demonstrated by the following argument using the digamma function $\frac{\Gamma'}{\Gamma}(x) = \int_0^\infty\left(\frac{e^{-t}}{t} - \frac{e^{-xt}}{1-e^{-t}}\right)dt$: $\frac{\Gamma'}{\Gamma}(\frac{1}{2}x) - \frac{\Gamma'}{\Gamma}(\frac{1}{3}x) = \int_0^\infty\frac{1}{1-e^{-t}}(e^{-\frac{1}{3}xt} - e^{-\frac{1}{2}xt})dt > 0 \quad (x > 1)$ Please let me know if this reasoning is incorrect or if you have any corrections. Thanks very much! -Larry
This answer is provided with help from J.M. $\log\Gamma(\frac{1}{2}x) - \log\Gamma(\frac{1}{3}x)$ is an increasing function, which can be shown using the series representation of the digamma function $\psi$. The function is increasing if we can show: $\frac{d}{dx}(\log\Gamma(\frac{1}{2}x) - \log\Gamma(\frac{1}{3}x)) > 0$ We can show this using the digamma function $\psi(x)$: $$\frac{d}{dx}(\log\Gamma(\frac{1}{2}x) - \log\Gamma(\frac{1}{3}x)) = \frac{\psi(\frac{1}{2}x)}{2} - \frac{\psi(\frac{1}{3}x)}{3}$$ $$\frac{\psi(\frac{1}{2}x)}{2} - \frac{\psi(\frac{1}{3}x)}{3} = -\gamma + \sum_{k=0}^\infty(\frac{1}{k+1} - \frac{1}{k + {\frac{1}{2}}}) + \gamma - \sum_{k=0}^\infty(\frac{1}{k+1} - \frac{1}{k+\frac{1}{3}})$$ $$= \sum_{k=0}^\infty(\frac{1}{k+\frac{1}{3}} - \frac{1}{k+\frac{1}{2}})$$ Since for all $k\ge 0$: $k + \frac{1}{3} < k + \frac{1}{2}$, it follows that for all $k\ge0$: $\frac{1}{k+\frac{1}{3}} > \frac{1}{k+\frac{1}{2}}$ and therefore: $$\sum_{k=0}^\infty(\frac{1}{k+\frac{1}{3}} - \frac{1}{k+\frac{1}{2}}) > 0.$$
Name of $a*b=c$ and $b*a=-c$ $A_+=(A,+,0,-)$ is a noncommutative group where inverse elements are $-a$. $A_*=(A,*)$ is not associative and is not commutative. $\mathbf A=(A,+,*)$ is a structure where 1) if $a*b=c$ then $b*a=-c$ holds, and 2) $(a*b)+a=b+(a*b)$. A - What is the structure $\mathbf A$ called? B - What are the names of 1) and 2) in abstract algebra (even if not in the same structure)? C - What is this structure? Has it been studied already, and does it have other interesting (or obvious) properties that I don't see? Thanks in advance, and I apologize for errors in my English. Update As Lord_Farin explained to me, $\mathbf A=(A,+,*)$ can't be a (closed) structure, since property 2) implies that $a*0$ and $0*a$ are absorbing elements of $(A,+)$, which is impossible because $A_+$ is a non-trivial group (in the definition). Anyway, I notice that my questions B (about property 2)) and C are still open in the case of a "generic" $A_+$.
From 2) we have $(a*0)+a = 0+(a*0) = a*0 = (a*0)+0$ which contradicts the fact that $(A,+)$ is a group (which implies "left-multiplication" by $(a*0)$, i.e. $x \mapsto (a*0)+x$, is injective). Thus the structure $(A,+,*)$ cannot exist.
Rank of a matrix. Let a non-zero column matrix $A_{m\times 1}$ be multiplied with a non-zero row matrix $B_{1\times n}$ to get a matrix $X_{m\times n}$ . Then how to find rank of $X$?
Let me discuss a shortcut for finding the rank of a matrix. The rank of a matrix is always equal to the number of independent equations; the number of equations equals the number of rows, and the variables in each equation correspond to the columns. Suppose there is a $3\times3$ matrix with rows

    row 1:  1 2 3
    row 2:  3 4 2
    row 3:  4 5 6

So there will be three equations:

$$x + 2y + 3z = 0 \tag{1}$$
$$3x + 4y + 2z = 0 \tag{2}$$
$$4x + 5y + 6z = 0 \tag{3}$$

None of these equations can be obtained by adding or subtracting two of the others, or by multiplying or dividing a single equation by a constant, so there are three independent equations, and the rank of the matrix above is $3$. Consider another $3\times3$ matrix with rows

    row 1:  10 11 12
    row 2:   1  2  7
    row 3:  11 13 19

with equations:

$$10x + 11y + 12z = 0 \tag{4}$$
$$x + 2y + 7z = 0 \tag{5}$$
$$11x + 13y + 19z = 0 \tag{6}$$

Equation (6) can be obtained by adding equations (4) and (5), so there are only two independent equations, and the rank of this matrix is $2$. This method can be applied to a matrix of any order.
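In practice one rarely does this by hand; a quick check with NumPy (my own addition) confirms both example ranks, and also illustrates the original question about $X = AB$ with $A$ an $m\times 1$ column and $B$ a $1\times n$ row:

```python
import numpy as np

# The two example matrices from the answer
M1 = np.array([[1, 2, 3], [3, 4, 2], [4, 5, 6]])
M2 = np.array([[10, 11, 12], [1, 2, 7], [11, 13, 19]])
print(np.linalg.matrix_rank(M1))  # 3
print(np.linalg.matrix_rank(M2))  # 2  (row 3 = row 1 + row 2)

# The original question: X = A @ B, A a nonzero column, B a nonzero row
A = np.array([[1.0], [2.0], [3.0]])
B = np.array([[4.0, 5.0]])
print(np.linalg.matrix_rank(A @ B))  # 1: every row of X is a multiple of B
```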
How to compute the area of the shadow? If we cannot use integration, then how do we compute the area of the shadow? It seems easy, but actually it is not. Thanks!
Let $a$ be a side of the square. Consider the following diagram The area we need to calculate is as follows. $$\begin{eqnarray} \color{Black}{\text{Black}}=(\color{blue}{\text{Blue}}+\color{black}{\text{Black}})-\color{blue}{\text{Blue}}. \end{eqnarray}$$ Note that the blue area can be calculated as $$\begin{eqnarray}\color{blue}{\text{Blue}}=\frac14a^2\pi-2\cdot\left(\color{orange}{\text{Yellow}}+\color{red}{\text{Red}}\right).\end{eqnarray}$$ We already know most of the lengths. What's stopping us from calculating the black area is lack of known angles. Because of symmetry, almost any angle would do the trick. It's fairly easy to calculate angles of triangle $\begin{eqnarray}\color{orange}{\triangle POA}\end{eqnarray}$, if we use cosine rule. $$\begin{eqnarray} |PA|^2&=&|AO|^2+|PO|^2-2\cdot|AO|\cdot|PO|\cos\angle POA\\ a^2&=&\frac{a^2}{4}+\frac{2a^2}{4}-2\cdot\frac a2\cdot\frac{a\sqrt2}{2}\cdot\cos\angle POA\\ 4a^2&=&3a^2-2a^2\sqrt2\cos\angle POA\\ 1&=&-2\sqrt2\cos\angle POA\\ \cos\angle POA&=&-\frac{1}{2\sqrt2}=-\frac{\sqrt2}{4}. \end{eqnarray}$$ Now, because of symmetry, we have $\angle POA=\angle POB$, so $\angle AOB=360^\circ-2\angle POA$. So the cosine of angle $\angle AOB$ can be calculated as follows: $$\begin{eqnarray} \cos\angle AOB&=&\cos(360^\circ-2\angle POA)=\cos(2\pi-2\angle POA)\\ \cos\angle AOB&=&\cos(-2\angle POA)=\cos(2\angle POA)\\ \cos\angle AOB&=&\cos^2(\angle POA)-\sin^2(\angle POA)\\ \cos\angle AOB&=&\cos^2(\angle POA)-(1-\cos^2(\angle POA))\\ \cos\angle AOB&=&2\cos^2(\angle POA)-1\\ \cos\angle AOB&=&2\cdot\left(-\frac{\sqrt2}{4}\right)^2-1=-\frac34\\ \end{eqnarray}$$ From this, we can easily calculate the sine of angle $\angle AOB$, using Pythagorean identity. $$ \sin\angle AOB=\sqrt{1-\frac9{16}}=\sqrt\frac{16-9}{16}=\frac{\sqrt7}4 $$ Going this way, I believe it's not hard to calculate other angles and use known trigonometry-like formulas for area. Then you can easily pack it together using the first equation with colors.
Rudin Theorem 1.35 - Cauchy Schwarz Inequality Any motivation for the sum that Rudin considers in his proof of the Cauchy-Schwarz Inequality? Theorem 1.35 If $a_1,...,a_n$ and $b_1, ..., b_n$ are complex numbers, then $$\Biggl\vert\sum_{j=1}^n a_j\overline{b_j}\Biggr\vert^2 \leq \sum_{j=1}^n|a_j|^2\sum_{j=1}^n|b_j|^2.$$ For the proof, he considers this sum to kick it off: $$\sum_{j=1}^n \vert Ba_j - Cb_j\vert^2, \text{ where } B = \sum_{j=1}^n \vert b_j \vert^2 \text{ and } C = \sum_{j=1}^na_j\overline{b_j}.$$ I don't see where it comes from. Any help? Thank-you.
He does it because it works. Essentially, as you see, $$\sum_{j=1}^n |Ba_j-Cb_j|^{2}$$ is always greater than or equal to zero. He then shows that $$\tag 1 \sum_{j=1}^n |Ba_j-Cb_j|^{2}=B(AB-|C|^2)$$ and, having assumed $B>0$, this means $AB-|C|^2\geq 0$, which is the Cauchy-Schwarz inequality.

ADD Let's compare two different proofs of Cauchy-Schwarz in $\Bbb R^n$.

PROOF1. We can see the Cauchy-Schwarz inequality is true whenever ${\bf x}=0$ or ${\bf{y}}=0$, so discard those. Let ${\bf x}=(x_1,\dots,x_n)$ and ${\bf y}=(y_1,\dots,y_n)$, so that $${\bf x}\cdot {\bf y}=\sum_{i=1}^n x_iy_i$$ We wish to show that $$|{\bf x}\cdot {\bf y}|\leq ||{\bf x}||\cdot ||{\bf y}||$$ Define $$X_i=\frac{x_i}{||{\bf x}||}$$ $$Y_i=\frac{y_i}{||{\bf y}||}$$ Because for any $x,y$ $$(x-y)^2\geq 0$$ we have that $$x^2+y^2\geq 2xy$$ Using this with $X_i,Y_i$ for $i=1,\dots,n$ we have that $$X_i^2 + Y_i^2 \geqslant 2{X_i}{Y_i}$$ and summing up through $1,\dots,n$ gives $$\eqalign{ & \frac{{\sum\limits_{i = 1}^n {y_i^2} }}{{||{\bf{y}}|{|^2}}} + \frac{{\sum\limits_{i = 1}^n {x_i^2} }}{{||{\bf{x}}|{|^2}}} \geqslant 2\frac{{\sum\limits_{i = 1}^n {{x_i}{y_i}} }}{{||{\bf{x}}|| \cdot ||{\bf{y}}||}} \cr & \frac{{||{\bf{y}}|{|^2}}}{{||{\bf{y}}|{|^2}}} + \frac{{||{\bf{x}}|{|^2}}}{{||{\bf{x}}|{|^2}}} \geqslant 2\frac{{\sum\limits_{i = 1}^n {{x_i}{y_i}} }}{{||{\bf{x}}|| \cdot ||{\bf{y}}||}} \cr & 2 \geqslant 2\frac{{\sum\limits_{i = 1}^n {{x_i}{y_i}} }}{{||{\bf{x}}|| \cdot ||{\bf{y}}||}} \cr & ||{\bf{x}}|| \cdot ||{\bf{y}}|| \geqslant \sum\limits_{i = 1}^n {{x_i}{y_i}} \cr} $$ NOTE How may we add the absolute value signs to conclude?

PROOF2. We can see the Cauchy-Schwarz inequality is true whenever ${\bf x}=0$ or ${\bf{y}}=0$, or $y=\lambda x$ for some scalar. Thus, discard those hypotheses. Then consider the polynomial (here $\cdot$ is the inner product) $$\displaylines{ P(\lambda ) = \left\| {{\bf x} - \lambda {\bf{y}}} \right\|^2 \cr = ( {\bf x} - \lambda {\bf{y}})\cdot({\bf x} - \lambda {\bf{y}}) \cr = {\left\| {\bf x} \right\|^2} - 2\lambda {\bf x} \cdot {\bf{y}} + {\lambda ^2}{\left\| {\bf{y}} \right\|^2} \cr} $$ Since ${\bf x}\neq \lambda{\bf y}$ for any $\lambda \in \Bbb R$, $P(\lambda)>0$ for each $\lambda\in\Bbb R$. It follows the discriminant is negative, that is $$\Delta = b^2-4ac={\left( {-2\left( {{\bf x} \cdot y} \right)} \right)^2} - 4{\left\| {\bf x} \right\|^2}{\left\| {\bf{y}} \right\|^2} <0$$ so that $$\displaylines{ {\left( {{\bf x}\cdot {\bf{y}}} \right)^2} <{\left\| {\bf x} \right\|^2}{\left\| {\bf{y}} \right\|^2} \cr \left| {{\bf x} \cdot {\bf{y}}} \right| <\left\| {\bf x}\right\| \cdot \left\| {\bf{y}} \right\| \cr} $$ which is Cauchy-Schwarz, with equality if and only if ${\bf x}=\lambda {\bf y}$ for some $0\neq \lambda \in\Bbb R$ or either vector is null.

One proof shows the Cauchy-Schwarz inequality is a direct consequence of the known fact that $x^2\geq 0$ for each real $x$. The other is shorter and sweeter, and uses the fact that a norm is always nonnegative, and properties of the inner product of vectors in $\Bbb R^n$, plus the fact that a polynomial in $\Bbb R$ with no real roots must have negative discriminant.
Find a simple formula for $\binom{n}{0}\binom{n}{1}+\binom{n}{1}\binom{n}{2}+...+\binom{n}{n-1}\binom{n}{n}$ $$\binom{n}{0}\binom{n}{1}+\binom{n}{1}\binom{n}{2}+...+\binom{n}{n-1}\binom{n}{n}$$ All I could think of so far is to turn this expression into a sum. But that does not necessarily simplify the expression. Please, I need your help.
Hint: it's the coefficient of $T$ in the binomial expansion of $(1+T)^n(1+T^{-1})^n$, which is equivalent to saying that it's the coefficient of $T^{n+1}$ in the expansion of $(1+T)^n(1+T^{-1})^nT^n=(1+T)^{2n}$, namely $\binom{2n}{n+1}$.
Solve recursive equation $ f_n = \frac{2n-1}{n}f_{n-1}-\frac{n-1}{n}f_{n-2} + 1$ Solve the recursive equation: $$ f_n = \frac{2n-1}{n}f_{n-1}-\frac{n-1}{n}f_{n-2} + 1$$ $f_0 = 0, f_1 = 1$ What I have done so far: $$ f_n = \frac{2n-1}{n}f_{n-1}-\frac{n-1}{n}f_{n-2} + 1- [n=0]$$ I multiplied it by $n$ and obtained: $$ nf_n = (2n-1)f_{n-1}-(n-1)f_{n-2} + n- n[n=0]$$ $$ \sum nf_n x^n = \sum(2n-1)f_{n-1}x^n-\sum (n-1)f_{n-2}x^n + \sum n x^n $$ $$ \sum nf_n x^n = \sum(2n-1)f_{n-1}x^n-\sum (n-1)f_{n-2}x^n + \frac{1}{(1-x)^2} - \frac{1}{1-x} $$ But I do not know what to do with the parts involving $n$. I suppose that differentiation or integration could be useful here, but I am not sure. Any HINTS?
Let's take a shot at this: $$ f_n - f_{n - 1} = \frac{n - 1}{n} (f_{n - 1} - f_{n - 2}) + 1 $$ This immediately suggests the substitution $g_n = f_n - f_{n - 1}$, so $g_1 = f_1 - f_0 = 1$: $$ g_n - \frac{n - 1}{n} g_{n - 1} = 1 $$ First order linear non-homogeneous recurrence; the summing factor $n$ is simple to see here: $$ n g_n - (n - 1) g_{n - 1} = n $$ Summing: $$ \begin{align*} \sum_{2 \le k \le n} (k g_k - (k - 1) g_{k - 1}) &= \sum_{2 \le k \le n} k \\ n g_n - 1 \cdot g_1 &= \frac{n (n + 1)}{2} - 1 \\ g_n &= \frac{n + 1}{2} \\ f_n - f_{n - 1} &= \frac{n + 1}{2} \\ \sum_{1 \le k \le n} (f_k - f_{k - 1}) &= \sum_{1 \le k \le n} \frac{k + 1}{2} \\ f_n - f_0 &= \frac{1}{2} \left( \frac{n (n + 1)}{2} + n \right) \\ f_n &= \frac{n (n + 3)}{4} \end{align*} $$ Maxima tells me this checks out. Pretty!
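A short numerical check of the closed form (my own addition; the answer used Maxima, here is a Python equivalent):

```python
# Check f_n = n(n+3)/4 against the recurrence in floating point
f = [0.0, 1.0]
for n in range(2, 50):
    f.append((2*n - 1)/n * f[n-1] - (n - 1)/n * f[n-2] + 1)
assert all(abs(f[n] - n*(n + 3)/4) < 1e-9 for n in range(50))
print("closed form verified")
```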
Integral solutions of hyperboloid $x^2+y^2-z^2=1$ Are there integral solutions to the equation $x^2+y^2-z^2=1$?
We can rewrite the equation as $x^2 + y^2 = 1 + z^2$, so if we pick a $z$ then we just need to find all possible ways of expressing $z^2 + 1$ as a sum of two squares (as noted in the comments, we always have at least one way: $z^2 + 1^2$). This is a relatively well known problem, and there will in general be multiple possible solutions for $x$ and $y$; there is another question on this site about efficiently finding such representations, should you wish to do so.
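For small $z$ the representations are easy to enumerate directly; a sketch in Python (my own addition, including the helper name `two_square_reps`):

```python
from math import isqrt

def two_square_reps(m):
    """All pairs (x, y) with 0 <= x <= y and x^2 + y^2 = m."""
    return [(x, isqrt(m - x*x))
            for x in range(isqrt(m // 2) + 1)
            if isqrt(m - x*x)**2 == m - x*x]

for z in range(1, 8):
    print(z, two_square_reps(z*z + 1))
# e.g. z = 7 gives both (1, 7) and (5, 5): 50 = 1 + 49 = 25 + 25
```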
Bounded partial derivatives imply continuity As stated in my notes: Remark: Suppose $f: E \to \mathbb{R}$, $E \subseteq \mathbb{R}^n$, and $p \in E$. Also, suppose that $D_if$ exists in some neighborhood of $p$, say, $N(p, h)$ where $h>0$. If all partial derivatives of $f$ are bounded, then $f$ is continuous on $E$. I found a sketch of the proof here. I'm wondering if I can adapt this proof as follows: $$\begin{split} f(x_1+h_1,\dots,x_n+h_n)-f(x_1,\dots,x_n) &= f(x_1+h_1,x_2+h_2,\dots,x_n+h_n)-f(x_1,x_2+h_2,\dots,x_n+h_n) \\ &+ f(x_1,x_2+h_2,\dots,x_n+h_n)-f(x_1,x_2,x_3+h_3,\dots,x_n+h_n) \\ &+ \cdots \\ &+ f(x_1,\dots,x_{n-1},x_n+h_n)-f(x_1,\dots,x_n) \end{split}$$ However, I'm not sure how to apply the contraction principle to finish off the proof. Is there a more efficient way to prove the above remark?
The proof is a combination of two facts: * *A function of one real variable with a bounded derivative is Lipschitz. *Let $Q\subset \mathbb R^n$ be a cube aligned to coordinate axes. If a function $f:Q\to\mathbb R$ is Lipschitz in each variable separately, then it is Lipschitz. The proof of 2 involves a telescoping sum such as $$\begin{split} f(x,y,z)-f(x',y',z')&= f(x,y,z)-f(x',y,z) \\ & + f(x',y,z)-f(x',y',z)\\&+f(x',y',z)-f(x',y',z') \end{split}$$
How to prove boundary of a subset is closed in $X$? Suppose $A\subseteq X$. Prove that the boundary $\partial A$ of $A$ is closed in $X$. My knowledge:

* $A^{\circ}$ is the interior
* $A^{\circ}\subseteq A \subseteq \overline{A}\subseteq X$

My proof was as follows: To show $\partial A = \overline{A} \setminus A^{\circ}$ is closed, we have to show that the complement $( \partial A) ^C = X\setminus{}\partial A =X \setminus (\overline{A} \setminus A^{\circ})$ is open in $X$. This is the set $A^{\circ}\cup (X \setminus\overline{A})$. Then I claim that $A^{\circ}$ is open by definition: $a\in A^{\circ} \implies \exists \epsilon>0: B_\epsilon(a)\subseteq A$, and as this is true for all $a$, by the definition of open sets, $A^{\circ}$ is open. My next claim is that $X \setminus \overline{A}$ is open. This is true because its complement $\overline{A}$ is closed in $X$, hence $X \setminus \overline{A}$ is open in $X$. My concluding claims are: We have a union of two open sets in $X$; by a proposition in my textbook, this set is open in $X$. Therefore its complement is closed, which is what we had to show. What about this?
From your definition, directly, $$ \partial A=\overline{A}\setminus \mathring{A}=\overline{A}\cap (X\setminus \mathring{A}) $$ is the intersection of two closed sets. Hence it is closed. No need to prove that the complement is open, it just makes it longer and more complicated. Also, keep in mind that a set $S$ is open in $X$ if and only if its complement $X\setminus S$ is closed in $X$. This should be pavlovian.
Matrix Manifolds Question I am not sure at all how to do the following question. Any help is appreciated. Thank you. Consider $SL_n \mathbb{R}$ as a group and as a topological space with the topology induced from $\mathbb{R}^{n^2}$. Show that if $H \subset SL_n \mathbb{R}$ is an abelian subgroup, then the closure $\overline{H}$ of $H$ in $SL_n \mathbb{R}$ is also an abelian subgroup.
Hint: The map $\overline{H}\times \overline{H}\to \overline{H}$ defined by $(a,b)\mapsto aba^{-1}b^{-1}$ is continuous. Since $\overline{H}$ is Hausdorff, and the map is constant on a dense subset of its domain, it must be constant everywhere.
Why does the series $\sum\limits_{n=2}^\infty\frac{\cos(n\pi/3)}{n}$ converge? Why does this series $$\sum\limits_{n=2}^\infty\frac{\cos(n\pi/3)}{n}$$ converge? Can't you use a limit comparison with $1/n$?
Note that $$\cos(n\pi/3) = 1/2, \ -1/2, \ -1, \ -1/2, \ 1/2, \ 1, \ 1/2, \ -1/2, \ -1, \ \cdots $$ so your series is just 3 alternating (and convergent) series interleaved. Exercise: Prove that if $\sum a_n, \sum b_n$ are both convergent, then the sequence $$a_1, a_1+b_1, a_1+b_1+a_2, a_1+b_1+a_2+b_2, \cdots $$ is convergent. Applying that twice proves your series converges.
Probability that a stick randomly broken in five places can form a tetrahedron Edit (June. 2015) This question has been moved to MathOverflow, where a recent write-up finds a similar approximation as leonbloy's post below; see here. Randomly break a stick in five places. Question: What is the probability that the resulting six pieces can form a tetrahedron? Clearly satisfying the triangle inequality on each face is a necessary but not sufficient condition; an example is provided below. Furthermore, another commenter kindly points to a reference that may be of help in resolving this problem. In particular, it relates the question of when six numbers can be edges of a tetrahedron to a certain $5 \times 5$ determinant. Finally, a third commenter points out that since one such construction is possible, there is an admissible neighborhood around this arrangement, so that the probability is in fact positive. In any event, this problem is far harder than the classic $2D$ "form a triangle" one. Several numerical attacks can be found below; I will be grateful if anyone can provide an exact solution.
If the stick pieces are $s_1$ (longest) to $s_6$ (shortest), picture the tetrahedron with the longest side $s_1$ out of view. Then $s_2$ is the spine, and any combination of pairs from $\{s_3,s_4,s_5,s_6\}$ can make the two side triangles. Hence $s_3+s_6$ needs to be longer than $s_2$ ($P=0.25$), and $s_4+s_5$ needs to be longer than $s_2$ ($P=0.25$), so $P(\text{can form})=0.25\cdot 0.25=0.0625$.
Transitive closure proof (Pierce, ex. 2.2.7) Simple exercise taken from the book Types and Programming Languages by Benjamin C. Pierce. This is a definition of the transitive closure of a relation R. First, we define the sequence of sets of pairs: $$R_0 = R$$ $$R_{i+1} = R_i \cup \{ (s, u) | \exists t, (s, t) \in R_i, (t, u) \in R_i \}$$ Finally, define the relation $R^+$ as the union of all the $R_i$: $$R^+=\bigcup_i R_i$$ Show that $R^+$ is really the transitive closure of R. Questions:

* I would like to see the proof (I don't have enough mathematical background to make it myself).
* Isn't the final union superfluous? Won't $R_n$ be the union of all the previous $R_i$?
We need to show that $R^+$ contains $R$, is transitive, and is minimal among all such relations. $R\subseteq R^+$ is clear from $R=R_0\subseteq \bigcup R_i=R^+$. Transitivity: By induction on $j$, show that $R_i\subseteq R_j$ if $i\le j$. Assume $(a,b), (b,c)\in R^+$. Then $(a,b)\in R_i$ for some $i$ and $(b,c)\in R_j$ for some $j$. This implies $(a,b),(b,c)\in R_{\max(i,j)}$ and hence $(a,c)\in R_{\max(i,j)+1}\subseteq R^+$. Now for minimality, let $R'$ be transitive and containing $R$. By induction show that $R_i\subseteq R'$ for all $i$, hence $R^+\subseteq R'$, as was to be shown. As for your specific question #2: Yes, $R_n$ contains all previous $R_k$ (a fact the proof above uses as an intermediate result). But neither is $R_n$ merely the union of all previous $R_k$, nor does there necessarily exist a single $n$ that already equals $R^+$. For example, on $\mathbb N$ take the relation $aRb\iff a=b+1$. Then $aR^+b\iff a>b$, but $aR_nb$ implies that additionally $a\le b+2^n$.
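On a finite set the chain $R_0\subseteq R_1\subseteq\cdots$ stabilizes, so the construction can be run literally as written; a small Python sketch (my own addition):

```python
def transitive_closure(R):
    """Iterate R_{i+1} = R_i U {(s, u) : (s, t), (t, u) in R_i} to a fixed point.

    Terminates for relations on finite sets, where the chain stabilizes.
    """
    Ri = set(R)
    while True:
        new = Ri | {(s, u) for (s, t) in Ri for (t2, u) in Ri if t == t2}
        if new == Ri:
            return Ri
        Ri = new

print(sorted(transitive_closure({(1, 2), (2, 3), (3, 4)})))
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```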
Orthogonality of Legendre Functions The Legendre Polynomials satisfy the following orthogonality condition: The definite integral of $P(n,x) \cdot P(m,x)$ from $-1$ to $1$ equals $0$, if $m$ is not equal to $n$: $$\int_{-1}^1 P(n,x) \cdot P(m,x) dx = 0. \qquad (m \neq n)$$ Based on this, I am trying to evaluate the integral from $-1$ to $1$ of $x \cdot P(n-1,x) \cdot P(n,x)$ for some given $n$: $$\int_{-1}^1 x \cdot P(n-1,\;x) \cdot P(n,x) dx.$$ If I integrate this by parts, letting $x$ be one function and $P(n-1,x) \cdot P(n,x)$ be the other function, then I get zero, but according to my textbook, its value is non-zero. What am I doing wrong?
Integration by parts is $\int f'g+\int fg'=fg\ (+C)$, so for the definite integral, it is $$\int_a^b f'g+\int_a^b fg'=[fg]_a^b=f(b)g(b)-f(a)g(a)\,.$$ Now we have $f=x$ and $g'=P_{n-1}(x)\cdot P_n(x)$. That is, $g$ is the antiderivative of $P_{n-1}\cdot P_n$. The orthogonality relation only allows us to conclude that $g(1)-g(-1)=[g]^1_{-1}=0$. Not less and not more. In particular, $g\ne 0$. So, now it yields: $$\int_{-1}^1 g+\int_{-1}^1 x\cdot P_{n-1}(x)\cdot P_n(x)=1\cdot g(1)-(-1)\cdot g(-1)=2\cdot g(1)\,.$$ This can be evaluated, knowing the explicit form of the $P_n$'s, but then it's probably not simpler than simply writing these out in the original integral...
Absolute convergence of the series $\sum\limits_{n=1}^{\infty} (-1)^n \ln\left(\cos \left( \frac{1}{n} \right)\right)$ This sum $$\sum_{n=1}^{\infty} (-1)^n \ln\left(\cos \left( \frac{1}{n} \right)\right)$$ apparently converges absolutely, but I'm having trouble understanding how so. First of all, doesn't it already fail the alternating series test? the $B_{n+1}$ term is greater than the $B_n$ term, correct?
Since $$1-\cos{x}\underset{x\to{0}}{\sim}{\dfrac{x^2}{2}}\;\; \Rightarrow \;\; \cos{\dfrac{1}{n}}={1-\dfrac{1}{2n^2}} +o\left(\dfrac{1}{n^2} \right),\;\; n\to\infty$$ and $$\ln(1+x)\underset{x\to{0}}{\sim}{x},$$ thus $$\ln\left(\cos { \dfrac{1}{n} }\right)\underset{n\to{\infty}}{\sim}{-\dfrac{1}{2n^2}}.$$ Hence the series converges absolutely by limit comparison with $\sum \dfrac{1}{n^2}$.
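A quick numerical look at the asymptotic (my own addition): the ratio of $\ln\cos(1/n)$ to $-\frac{1}{2n^2}$ should tend to $1$.

```python
from math import cos, log

for n in (10, 100, 1000):
    print(n, -2 * n**2 * log(cos(1/n)))  # tends to 1
```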
Counting strictly increasing and non-decreasing functions $f$ is non-decreasing if $x \lt y$ implies $f(x) \leq f(y)$ and increasing if $x < y$ implies $f(x) < f(y)$.

* How many $f: [a]\to [b]$ are nondecreasing?
* How many $f: [a] \to [b]$ are strictly increasing?

Where $[a]=\{1,2,\ldots, a\}$ and $[b]=\{1,2,\ldots, b\}$.
Strictly increasing is easy: we need to choose the $a$ elements of $[b]$ that will be the range of our function, so there are $\binom{b}{a}$ such functions.
How will studying "stochastic processes" help me as a mathematician? I wish to decide if I should take a course called "INTRODUCTION TO STOCHASTIC PROCESSES" which will be held next semester at my University. I can make an uneducated guess that stochastic processes are important in mathematics. But I am also curious to know how. I.e., in what fields/methods will a basic understanding of stochastic processes help me do better mathematics?
See this similar question. Stochastic processes are very useful in actuarial science and mathematical finance.
Parallel transport for a conformally equivalent metric Suppose $M$ is a smooth manifold equipped with a Riemannian metric $g$. Given a curve $c$, let $P_c$ denote parallel transport along $c$. Now suppose you consider a new metric $g'=fg$ where $f$ is a smooth positive function. Let $P_c'$ denote parallel transport along $c$ with respect to $g'$. How are $P_c$ and $P_c'$ related? A similar question is: let $K:TTM \rightarrow TM$ denote the connection map associated to $g$ and $K'$ the one associated to $g'$. How are $K$ and $K'$ related? In case it's helpful, recall the definition of $K$: given $V\in T_{(x,v)}TM$, let $z(t)=(c(t),v(t))$ be a curve in $TM$ such that $z(0)=(x,v)$ and $\dot{z}(0)=V$. Then set $$K(V):=\nabla_{t}v(0).$$
Both the parallel transport and the connection map are determined by the connection, in your case this is the Levi-Civita connection of metric $g$ whose transformation is known (see e.g. this answer). For the connection map you already have a formula in the definition, just use the facts and get the expression. With regards to the parallel transport I guess the best way would be to start with the equations $$ \dot{V}^{k}(t)= - V^{j}(t)\dot{c}^{i}(t)\Gamma^{k}_{ij}(c(t)) $$ that describe the parallel transport (see the details e.g. in J.Lee's "Riemannian manifolds. An Introduction to Curvature"). The Christoffel symbols of the conformally rescaled metric are given in this Wikipedia article. Using them we get the equations of the conformally related parallel transport.
Convergence of the infinite series $ \sum_{n = 1}^\infty \frac{1} {n^2 - x^2}$ How can I prove that for every $ x \notin \mathbb Z$ the series $$ \sum_{n = 1}^\infty \frac{1} {n^2 - x^2}$$ converges uniformly in a neighborhood of $ x $?
Apart from the first few summands, we have $n^2-y^2>\frac12n^2$ for all $y\approx x$, hence the tail is (uniformly near $x$) bounded by $2\sum_{n>N}\frac1{n^2}$.
Law of Quadratic Reciprocity Equivalent Statement Let $p,q$ be two distinct odd primes. Then $(\frac{q}p)=1 \iff p=\pm\beta^2 \pmod{4q}$ for some odd $\beta$. Show that this statement is equivalent to the Law of Quadratic Reciprocity. I'm trying to grapple with what the question is actually asking me to show. Do I split into various cases of what $p$ and $q$ could possibly be (i.e. $1 \pmod 4$ and $3 \pmod 4$) and then show that in each case, the statement holds?
We do one of the four cases. Because the case where $p$ and $q$ are both of the shape $4k+1$ is "too easy" and does not fully illustrate the problems we can bump into, we deal with the case $p$ of the form $4k+3$ and $q$ of the form $4k+1$. Suppose that $(q/p)=1$, with $p$ of the form $4k+3$ and $q$ of the form $4k+1$. We want to show that $p\equiv \pm \beta^2\pmod{4q}$ for some odd $\beta$. Note that by Quadratic Reciprocity we have $(p/q)=1$. So $p$ is a quadratic residue modulo $q$. This means that $p\equiv \alpha^2\pmod{q}$ for some $\alpha$. But $-1$ is a quadratic residue of $q$, since $q$ is of the form $4k+1$. So $-1\equiv \gamma^2\pmod{q}$ for some $\gamma$, and therefore $$p\equiv -(\alpha\gamma)^2\pmod{q}.$$ Without loss of generality we may assume that $\alpha\gamma$ is odd. If it isn't, replace it by $q-\alpha\gamma$. Since the square of an odd number is congruent to $1$ modulo $4$, we have $$p\equiv -(\alpha\gamma)^2\pmod{4}.$$ It follows that $p\equiv -(\alpha\gamma)^2\pmod{4q}$. The reverse direction is straightforward. Reverse directions are not really needed if we deal with the "forward" direction in all four cases.
Notation for "absolute value" in multiplicative group. In an additive number group (e.g. $(\mathbb{Z},+)$) there is a well known notation for absolute value, namely $|a|$, which coincides with $\max(a,-a)$, for $a \in \mathbb{Z}$. When the context is a multiplicative number group instead, is there a similar notation, which would coincide with $\max(a,\frac{1}{a})$?
If you're working with a multiplicative group $G\subseteq\Bbb{R}$, you can definitely say $$ \operatorname{abs}(g) := \max\{g,g^{-1}\}\quad\textrm{for }g\in G. $$ The question is whether or not it is useful to the study of the group $G$ in any way. Also, when it comes to the question of notation, $\left|g\right|$ is normally used to mean the order of $g\in G$, which is the smallest $n\in\Bbb{N}$ such that $g^n = e$ (where $e\in G$ is the identity) or equivalently, the order of the subgroup of $G$ generated by $g$. As far as I know, there is no standard notation for $\max\{g,g^{-1}\}$ when $g\in G\subseteq\Bbb{R}$.
Prove that $2222^{5555}+5555^{2222}=3333^{5555}+4444^{2222} \pmod 7$ I am utterly new to modular arithmetic and I am having trouble with this proof. $$2222^{5555}+5555^{2222}=3333^{5555}+4444^{2222} \pmod 7$$ It's because $2+5=3+4=7$, but it's not so clear to me in the presence of powers. Maybe some explanation would help. EDITED: Some serious typo. EDIT: Since some arguments against it appeared, here is WolframAlpha. EDIT: The above is incorrect. I appreciate the proofs that it is wrong. Sorry for the others.
First recall that since $7$ is prime, $x^6 = 1 \pmod{7}$ for any $x$ not divisible by $7$. Now, we have $$ 2222 = \begin{cases} 2 \pmod{6} \\ 3 \pmod{7} \end{cases}, \quad 3333 = 1 \pmod{7}$$ $$4444 = -1 \pmod{7}, \quad 5555 = \begin{cases} 5 \pmod{6} \\ 4 \pmod{7} \end{cases}$$ Then we can reduce each side of the equation to $$ 3^5 + 4^2 = 1^5 + (-1)^2 \pmod{7}$$ Then the LHS is $0$ but the RHS is $2$, so the statement is false. EDIT: For reference, I'm testing the conjecture $2222^{5555} + 5555^{2222} = 3333^{5555} + 4444^{2222}$.
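Python's three-argument `pow` does modular exponentiation directly, so the claim is easy to test (my own addition):

```python
lhs = (pow(2222, 5555, 7) + pow(5555, 2222, 7)) % 7
rhs = (pow(3333, 5555, 7) + pow(4444, 2222, 7)) % 7
print(lhs, rhs)  # 0 2: the claimed congruence is false
```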
Darts on a ruler probability If two points are selected at random on an interval from 0 to 1.5 inches, what is the probability that the distance between them is less than or equal to 1/4"?
Draw the square with corners $(0,0)$, $(1.5,0)$, $(1.5,1.5)$, and $(0,1.5)$. Imagine the points are chosen one at a time. Let random variable $X$ be the first chosen point, and $Y$ the second chosen point. We are invited to assume that $X$ and $Y$ are uniformly distributed in the interval $[0,1.5]$ and independent. (Uniform distribution is highly implausible with real darts.) Then $(X,Y)$ is uniformly distributed in the square just drawn. Consider the two lines $y=x+\frac{1}{4}$ and $y=x-\frac{1}{4}$. The two points are within $\frac{1}{4}$ inch of each other if the random variable $(X,Y)$ falls in the part of our square between the two lines. Call that part of the square $A$. Then our probability is the area of $A$ divided by the area of the whole square. Remark: It is easier to first find the area of the part of the square which is not in $A$. This consists of two isosceles right triangles with legs $\frac{5}{4}$, so their combined area is $\frac{25}{16}$. The area of the whole square is $\frac{9}{4}$, so the area of $A$ is $\frac{11}{16}$. Thus our probability is $\dfrac{\frac{11}{16}}{\frac{9}{4}}=\dfrac{11}{36}$.
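A Monte Carlo check of $11/36$ (my own addition):

```python
import random

trials = 10**6
hits = sum(abs(random.uniform(0, 1.5) - random.uniform(0, 1.5)) <= 0.25
           for _ in range(trials))
print(hits / trials, 11/36)  # both approximately 0.3056
```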
Exact number of events to get the expected outcomes Suppose in a competition 11 matches are to be played, each having one of 3 distinct outcomes as possibilities. In how many ways can one predict the outcomes of all 11 matches such that exactly 6 of the predictions turn out to be correct?
The $6$ matches on which our prediction is correct can be chosen in $\binom{11}{6}$ ways. For each of these choices, we can make wrong predictions on the remaining $5$ matches in $2^5$ ways. Thus the total number is $$\binom{11}{6}2^5.$$
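Since there are only $3^{11}=177147$ possible prediction sheets, the count is easy to verify exhaustively (my own addition):

```python
from itertools import product

# Fix an arbitrary outcome for the 11 matches (by symmetry the choice is
# irrelevant) and count predictions agreeing in exactly 6 places.
outcome = (0,) * 11
count = sum(1 for pred in product(range(3), repeat=11)
            if sum(p == o for p, o in zip(pred, outcome)) == 6)
print(count)  # 14784 == C(11, 6) * 2**5
```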
Show that $\lim \limits_{n\rightarrow\infty}\frac{n!}{(2n)!}=0$ I have to show that $\lim \limits_{n\rightarrow\infty}\frac{n!}{(2n)!}=0$. I am not sure if it is correct, but I did it like this: $(2n)!=(2n)\cdot(2n-1)\cdot(2n-2)\cdot ...\cdot(2n-(n-1))\cdot (n!)$ so I have $$\displaystyle \frac{1}{(2n)\cdot(2n-1)\cdot(2n-2)\cdot ...\cdot(2n-(n-1))}$$ and $$\lim \limits_{n\rightarrow \infty}\frac{1}{(2n)\cdot(2n-1)\cdot(2n-2)\cdot ...\cdot(2n-(n-1))}=0$$ Is this correct? If not, why?
Hint: $$ 0 \leq \lim_{n\to \infty}\frac{n!}{(2n)!} \leq \lim_{n\to \infty} \frac{n!}{(n!)^2} = \lim_{k \to \infty, k = n!}\frac{k}{k^2} = \lim_{k \to \infty}\frac{1}{k} = 0.$$
Is the vector $(3,-1,0,-1)$ in the subspace of $\Bbb R^4$ spanned by the vectors $(2,-1,3,2)$, $(-1,1,1,-3)$, $(1,1,9,-5)$?
To find whether $(3,-1,0,-1)$ is in the span of the other vectors, solve the system: $$(3,-1,0,-1)=\lambda_1(2,-1,3,2)+\lambda _2(-1,1,1,-3)+\lambda _3(1,1,9,-5)$$ If you get a solution, then the vector is in the span. If you don't get a solution, then it isn't. It's worth noting that the span of $(2,-1,3,2), (-1,1,1,-3), (1,1,9,-5)$ is exactly the set $\left\{\lambda_1(2,-1,3,2)+\lambda _2(-1,1,1,-3)+\lambda _3(1,1,9,-5):\lambda _1, \lambda _2, \lambda _3\in \Bbb R\right\}$ Alternatively you can consider the matrix $\begin{bmatrix}2& -1 &3 & 2\\ -1 &1 &1 &-3\\ 1 & 1 & 9 & -5 \\ 3 &-1 &0 & -1\end{bmatrix}$. Compute its determinant. If it's not $0$, then the four vectors are linearly independent. If it is $0$ they are linearly dependent. What does that tell you?
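Numerically, the least-squares residual settles the question at once (my own addition, not part of the answer): a zero residual would mean the vector lies in the span.

```python
import numpy as np

# Columns of V are the three spanning vectors
V = np.array([[2, -1, 3, 2],
              [-1, 1, 1, -3],
              [1, 1, 9, -5]], dtype=float).T    # shape (4, 3)
target = np.array([3, -1, 0, -1], dtype=float)

coeffs, *_ = np.linalg.lstsq(V, target, rcond=None)
print(np.linalg.norm(V @ coeffs - target))  # clearly nonzero: not in the span
```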
Rate of change of a cube's area with respect to the space diagonal The space diagonal of a cube shrinks at $0.02\rm\, m/s$. How fast is the area shrinking when the space diagonal is $0.8\rm\, m$ long? I try: Space Diagonal = $s_d = \sqrt{a^2+b^2+c^2}=\sqrt{3a^2}$ where $a$ is the length of one side. Area = $a^2$. Rate of change of $s_d$ with respect to $a$: $${\mathrm d\over \mathrm da}s_d={\mathrm d\over \mathrm da}\sqrt{3a^2}={\sqrt{3}a \over \sqrt{a^2}}$$ Rate of change of the area with respect to $a$: $${\mathrm d\over \mathrm da}\mathrm{area}={\mathrm d\over \mathrm da}a^2={2a}$$ I'm stuck when it comes to calculating one thing from another! However, I have no problem when it comes to position, velocity and acceleration! Can anybody solve this?
The simplest way is to express the area $a$ of the cube as a function of the length $d$ of the space diagonal. Given $d$ the side length $s$ of the cube is $$s={1\over\sqrt{3}} \ d\ ,$$ and the total surface area $a$ then becomes $$a=6s^2=2 d^2\ .$$ Now all quantities appearing here are in fact functions of $t$; therefore at all times $t$ we have $$a(t)=2d^2(t)\ .$$ It follows that $$a'(t)=4d (t) \ d'(t)\ ,$$ and the data $d(t_0)=0.8$ m, $d'(t_0)=-0.02$ m/sec imply that $a'(t_0)=-0.064$ m$^2$/sec.
Fourier series of a function Consider $$ f(t)= \begin{cases} 1 \mbox{ ; } 0<t<1\\ 2-t \mbox{ ; } 1<t<2 \end{cases}$$ Let $f_1(t)$ be the Fourier sine series and $f_2(t)$ be the Fourier cosine series of $f$, $f_1(t)=f_2(t), 0<t<2$. Write the form of the series (without computing the coefficients) and graph $f_1$ and $f_2$ on [-4,4] (including the endpoints $\pm 4$) using *'s to identify the value of the series at points of discontinuity. I think we have: $f_1(t)=\sum \limits_{n=1}^{\infty} b_n \sin \frac{n \pi t}{2}$ $f_2(t)=\frac{a_0}{2}+\sum \limits_{n=1}^{\infty} a_n \cos \frac{n \pi t}{2}$ I think we have $f_2=1$ and for $0<t<2, f_1=f_2=1$ Can we do anything else? Can someone help me with the end? Thank you
OK, first we plot our function (plot not reproduced here). We know that at jump discontinuities the series converges to their arithmetic mean, so the first approximation is just taking $\frac{1}{2}$ (plot not reproduced here). The cosine terms, and then the sine terms, can be plotted in turn (plots not reproduced here).
Generating functions. Number of solutions of an equation. Let's consider two equations: $x_1+x_2+\cdots+x_{19}=9$, where $x_i \le 1$, and $x_1+x_2+\cdots+x_{10}=10$, where $x_i \le 5$. The point is to find which equation has the greater number of solutions. What I have found is: number of solutions for the first equation: $\binom{19}{9}=92378$; generating function for the second equation: $$\left(\frac{1-x^6}{1-x}\right)^{10}=(1-x^6)^{10} \cdot (1-x)^{-10}$$ Here I completely do not know how to find the coefficient of $x^{10}$. Wolfram said that it is $85228$, so theoretically I have a solution, but I would like to know a more generic way to solve such problems. Any ideas?
Generating functions is a generic way. To continue on your attempt, you can apply Binomial theorem, which applies to negative exponents too! We have that $$ (1-x)^{-r} = \sum_{n=0}^{\infty} \binom{-r}{n} (-x)^n$$ where $$\binom{-r}{n} = \dfrac{-r \times (-r -1) \times \dots \times (-r - n +1)}{n!} = (-1)^n\dfrac{r(r+1)\dots(n+r-1)}{n!} $$ Thus $$ \binom{-r}{n} =(-1)^n\binom{n + r- 1}{n}$$ And so $$ (1-x)^{-r} = \sum_{n=0}^{\infty} \binom{n +r -1}{n} x^n$$ Now your problem becomes finding the coefficient in the product of two polynomials, as you can truncate the infinite one for exponents $\gt 10$. I will leave that to you.
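With a computer algebra system the coefficient extraction is a one-liner; a SymPy check of both counts (my own addition):

```python
from math import comb
from sympy import symbols, expand

x = symbols('x')
gf = expand(sum(x**k for k in range(6))**10)  # ((1 - x**6)/(1 - x))**10
print(gf.coeff(x, 10))  # 85228 solutions of the second equation
print(comb(19, 9))      # 92378 solutions of the first equation
```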
True or False: Every finite dimensional vector space can be made into an inner product space with the same dimension.
I think it depends on what field you are using for your vector space. If it is $\mathbf{R}$ or $\mathbb{C}$, the answer is definitely "yes" (see the comments, which are correct). I am pretty sure it is "yes" if your field is a subfield of $\mathbb{C}$ that is closed (the word "stable" also seems to be standard) under complex conjugation, such as the algebraic numbers. Otherwise, e.g. if your field is $\mathbf{F}_2$, I don't know. I consulted Wikipedia's article on inner product spaces and they only dealt with the case where the field was the reals or the complex numbers. EDIT: Marc's answer is better than mine. See his comment regaring subfields of $\mathbf{C}$. Some such subfields are not stable under complex conjugation and cannot be used as a field for an inner product space. I am pretty sure that if $\mathbb{K}$ is any ordered field (which may or may not be a subfield of $\mathbb{R}$) you can use it as the field for an inner product space.
Solution of a Sylvester equation? I'd like to solve $AX -BX + XC = D$ for the matrix $X$, where all matrices have real entries and $X$ is a rectangular matrix, while $B$ and $C$ are symmetric matrices and $A$ is an outer product matrix (i.e., $A = vv^T$ for some real vector $v$), while $D$ is not symmetric. The matrices $A,B,C,D$ are fixed while $X$ is the unknown. How can this equation be solved? Secondly, is there any case where the solution of this equation has a closed form?
More generally, Sylvester's equation of the form $$AX+XB=C$$ can be put into the form $$M\cdot \textrm{vec}X=L$$ for larger matrices $M$ and $L$. Here $\textrm{vec}X$ is a stack of all columns of matrix $X$. How to find the matrix $M$ and $L$, is shown in chapter 4 of this book: http://www.amazon.com/Topics-Matrix-Analysis-Roger-Horn/dp/0521467136 Indeed, $M=(I\otimes A)+(B^T\otimes I)$, and $L=\textrm{vec}C$, where $\otimes$ denotes the Kronecker product. Special case with $M$ invertible, we have $\textrm{vec}X=M^{-1}L$.
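A minimal sketch of this vectorization approach in NumPy (my own addition; the function name and the random test data are mine, and it assumes the resulting $mn\times mn$ matrix is invertible):

```python
import numpy as np

def solve_sylvester_like(A, B, C, D):
    """Solve (A - B) X + X C = D by vectorization.

    X is m x n; A, B are m x m; C is n x n. Uses column-major vec, for which
    vec(M X + X N) = (I_n kron M + N^T kron I_m) vec(X).
    """
    m, n = D.shape
    M = np.kron(np.eye(n), A - B) + np.kron(C.T, np.eye(m))
    x = np.linalg.solve(M, D.flatten(order='F'))
    return x.reshape((m, n), order='F')

# Self-check on random data
rng = np.random.default_rng(0)
A, B = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
C, D = rng.normal(size=(3, 3)), rng.normal(size=(4, 3))
X = solve_sylvester_like(A, B, C, D)
print(np.allclose((A - B) @ X + X @ C, D))  # True (when M is invertible)
```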
Evaluate $\int_0^\pi\frac{1}{1+(\tan x)^\sqrt2}\ dx$ How can we evaluate $$\int_0^\pi\frac{1}{1+(\tan x)^\sqrt2}\ dx$$ Can you keep this at Calculus 1 level please? Please include a full solution if possible. I tried this every way I knew and I couldn't get it.
This is a Putnam problem from years ago. There is no Calc I solution of which I'm aware. You need to put a parameter (new variable) in place of $\sqrt 2$ and then differentiate the resulting function of the parameter (this is usually called "differentiating under the integral sign"). Most students don't even learn this in Calc III!
Homeomorphism vs diffeomorphism in the definition of k-chain In "Analysis and Algebra on Differentiable Manifolds", 1st Ed., by Gadea and Masqué, in Problem 3.2.4, the student is asked to prove that circles cannot be boundaries of any 2-chain in $\mathbb{R}^2-\{0\}$. I understand the solution, which makes use of the differential of the angle $\theta$. The pdf at http://www.math.upenn.edu/~ryblair/Math%20600/papers/Lec17.pdf mentions that a singular k-cube $c$ is a homeomorphism of the unit k-cube. Since discs are homeomorphic to unit squares, if $c$ is only required to be a homeomorphism, the circle can be a boundary of a 2-chain. But a disc is not diffeomorphic to the unit square. Is it correct to say that for the exercise to make sense, the definition of a singular k-cube to consider has to be the one using a diffeomorphism?
As discussed in the comments, the unit circle defines a singular 1-simplex, $\sigma$ (i.e. a continuous map from the closed interval) which is not the boundary of any singular 2-chain (i.e. any formal sum of continuous maps from the standard 2-simplex) in the punctured plane. One way to see this is by noting that the punctured plane deformation retracts onto the unit circle and that $\sigma$ represents a generator for $H_1(S^1)$ (which can be seen by using a Mayer-Vietoris sequence, for instance). It should be mentioned that you can see the importance of the punctured origin in the definition of the 1-form $d\theta$ you mention, which does not extend smoothly to the entire plane.
Write in array form, product of disjoint cycles, product of 2-cycles... In symmetric group $S_7$, let $A= (2 3 5)(2 7 5 4)$ and $B= (3 7)(3 6)(1 2 5)(1 5)$. Write $A^{-1}$, $AB$, and $BA$ in the following ways: (i) Array Form (ii) Product of Disjoint Cycles (iii) Product of $2$-Cycles Also are any of $A^{-1}, AB$, or $BA$ in $A_7$?
I won’t do the problem, but I will answer the same questions for the permutation $A$; see if you can use that as a model. I assume throughout that cycles are applied from left to right. $A=(235)(2754)=(235)(2754)(1)(6)$; that means that $A$ sends $1$ to $1$; $2$ to $3$; $3$ to $5$ to $4$; $4$ to $2$; $5$ to $2$ to $7$; $6$ to $6$; and $7$ to $5$. Thus, its two-line or array representation must be $$\binom{1234567}{1342765}\;.$$ Now that we have this, it’s easy to find the disjoint cycles. Start with $1$; it goes to $1$, closing off the cycle $(1)$. The next available input is $2$; it goes to $3$, which goes to $4$, which goes to $2$, giving us the cycle $(234)$. The next available input is $5$; it goes to $7$, which goes right back to $5$, and we have the cycle $(57)$. Finally, $(6)$ is another cycle of length $1$. Thus, $A=(1)(234)(57)(6)$ or, if you’re supposed to ignore cycles of length $1$, simply $(234)(57)$. Finally, you should know that a cycle $(a_1a_2\dots a_n)$ can be written as a product of $2$-cycles in the following way: $$(a_1a_2\dots a_n)=(a_1a_2)(a_1a_3)\dots(a_1a_n)\;.$$ Thus, $A=(23)(24)(57)$. One further hint: one easy way to find $A^{-1}$ is to turn the two-line representation of $A$ upside-down (and then shuffle the columns so that the top line is in numerical order).
How to show measurability of a function implies existence of bounding simple functions If $(X,\mathscr{M},\mu)$ is a measure space with $\mu(X) < \infty$, and $(X,\overline{\mathscr{M}},\overline{\mu})$ is its completion and $f\colon X \to \mathbb{R}$ is bounded. Then $f$ is $\overline{\mathscr{M}}$-measurable (and hence in $L^1(\overline{\mu}))$ iff there exist sequences $\{\phi_n\}$ and $\{\psi_n\}$ of $\mathscr{M}$-measurable simple functions such that $\phi_n \le f \le \psi_n$ and $\int (\psi_n - \phi_n)d \mu < n^{-1}$. In this case, $\lim \int \phi_n d \mu = \lim \int \psi_n d \mu = \int f d \bar{\mu}$. I am able to prove everything except the part that $f$ is $\overline{\mathscr{M}}$-measurable $\implies$ there exist sequences $\{\phi_n\}$ and $\{\psi_n\}$ of $\mathscr{M}$-measurable simple functions such that $\phi_n \le f \le \psi_n$ and $\int (\psi_n - \phi_n)d \mu < n^{-1}$. I know that $f$ is $\overline{\mathscr{M}}$-measurable $\implies$ there exists an $\mathscr{M}$-measurable function $g$ s.t. $f=g$ $\overline{\mu}$-almost everywhere but I'm not sure where to proceed after that Thank you!
A set $N$ is $(\mathcal M,\mu)$-negligible if we can find $N'\in\mathcal M$ such that $\mu(N')=0$ and $N\subset N'$. Recall that $$\overline{\mathcal M}^{\mu}=\{B\cup N,B\in\mathcal M,N\mbox{ is }(\mathcal M,\mu)-\mbox{negligible}\}.$$ It can indeed be shown that the latter collection is a $\sigma$-algebra, the smallest containing both the $\mathcal M$-measurable sets and the $(\mathcal M,\mu)$-negligible ones. First, as $f$ is $\overline{\mathcal M}^{\mu}$-measurable, it can be approximated pointwise by simple functions, that is, linear combinations of characteristic functions of elements of $\overline{\mathcal M}^{\mu}$. So if we deal with the case $f=\chi_S$, where $S\in\overline{\mathcal M}^{\mu}$, we write it as $S=B\cup N$, and we notice that $$\chi_B\leqslant \chi_S\leqslant \chi_{B\cup N'},$$ where $N'$ is as in the definition of negligible.
How do I use Maple 12 to solve a differential equation by Euler's method? Consider the differential equation $y^{\prime}=y-2$ with initial condition $y\left(0\right)=1$. a) Use Euler's method with 4 steps of size 0.2 to estimate $y\left(0.8\right)$. I know how to do this by hand; however, I have Maple 12 installed and was trying to figure out how to do this with Maple, and then make a graph showing each step of the function. Any suggestions? I have tried looking on MaplePrimes, but it keeps pointing me to functions for newer versions of Maple, which I don't have. I posted this question to use as a model, because I have solved this problem by hand and it will help me edit it for other differential equations. P.S. I hope this is the proper place to ask this question; if not, please tell me where would be a better place.
Maybe this would help; just change the initial condition and step size: http://homepages.math.uic.edu/~hanson/MAPLE/euler.html I am not sure it is for Maple 12, but I think the commands would be the same. Just try it, and if there are errors, post them here. You could also ask at https://stackoverflow.com/questions
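For cross-checking whatever Maple produces, here is a plain Python reference implementation (my own sketch, not Maple code) of the same four Euler steps:

```python
# Euler's method for y' = y - 2, y(0) = 1, with 4 steps of size h = 0.2.
def euler(f, x0, y0, h, steps):
    xs, ys = [x0], [y0]
    for _ in range(steps):
        y0 = y0 + h * f(x0, y0)
        x0 = x0 + h
        xs.append(x0)
        ys.append(y0)
    return xs, ys

xs, ys = euler(lambda x, y: y - 2, 0.0, 1.0, 0.2, 4)
for x, y in zip(xs, ys):
    print(f"x = {x:.1f}, y = {y:.5f}")
# Exact solution: y = 2 - e^x, so y(0.8) ~ -0.22554; Euler gives -0.07360.
```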
Notation for $X - \mathbb{E}(X)$? Let $X$ be a random variable with expectation value $\mathbb{E}(X)=\mu$. Is there a (reasonably standard) notation to denote the "centered" random variable $X - \mu$? And, while I'm at it, if $X_i$ is a random variable, $\forall\,i \in \mathbf{n} \equiv \{0,\dots,n-1\}$, and if $\overline{X} = \frac{1}{n}\sum_{i\in\mathbf{n}} X_i$, is there a notation for the random variable $X_i - \overline{X}$? (This second question is "secondary". Feel free to disregard it.)
I've never seen any specific notation for these. They are such simple expressions that there wouldn't be much to gain by abbreviating them further. If you feel you must, you could invent your own, or just say "let $Y = X - \mu$". One way people often avoid writing out $X - \mu$ is by a statement like "without loss of generality, we can assume $E[X] = 0$" (provided of course that we actually can).
Clarification of sequence space definition Let $(x_n)$ denote a sequence whose $n$th term is $x_n$, and $\{x_n\,:\,n\in\mathbb{N}\}$ denote the set of all elements of the sequence. I have a text that states: Note that $\{x_n\,:\,n\in\mathbb{N}\}$ can be a finite set even though $(x_n)$ is an infinite sequence. To me this seems to be a contradiction. Can anyone restate this quotation to shed light on its meaning? I am confused about the difference between $(x_n)$, $x_n$, and $\{x_n\,:\,n\in\mathbb{N}\}$. Thanks all.
A sequence of real numbers is a function, not a set. Thus, for instance, the sequence $(x_n)$ is actually a function $f:\mathbb N \to \mathbb R$, where we have the equality $x_n =f(n)$. Now, the image of the function is the set $\{x_n\mid n\in \mathbb N\}$, which is a very different thing. An example where this associated set is finite, even though the sequence is infinite, is the sequence $(x_n)$ where $x_n=1$ for all $n$: the associated set is just $\{1\}$.
Euclidean circle question Let $c_1$ be a circle with center $O$. Let angle $ABC$ be an inscribed angle of the circle $c_1$. i) If $O$ and $B$ are on the same side of the line $AC$, what is the relationship between $\angle ABC$ and $ \angle AOC$? ii) If $O$ and $B$ are on opposite sides of the line $AC$, what is the relationship between $\angle ABC$ and $\angle AOC$? I guess $\angle ABC=(1/2) \angle AOC$ but I don't know how to explain why.
Here $O$ and $B$ are on the same side of the line $AC$; you can figure out the other part. Since $OA$, $OB$, $OC$ are radii, the triangles $AOB$ and $COB$ are isosceles; write $\angle OBA=\angle OAB=y$ and $\angle OBC=\angle OCB=x$, so that $\angle ABC=x+y$. Then $\angle AOB=180-2y$ and $\angle COB=180-2x$, so $\angle AOB+ \angle COB=360-2(x+y) \implies \angle AOC=2(x+y) = 2 \angle ABC$. This is known widely as the Inscribed Angle theorem, as RobJohn said in his comment. :)
Mystery about irrational numbers I'm new here, as you can see. There is a mystery about $\pi$ that I heard before and want to check if it's true. They told me that if I convert the digits of $\pi$ into letters, eventually I could read the Bible, any book ever written, and even the history of my life! This happens because $\pi$ is irrational and will display all kinds of finite combinations if we keep looking at its digits. If that's true then I could use this argument for any irrational. My question is: Is this true?
$\pi$ is just another number, like $5.243424974950134566032 \dots$; you can apply your argument to that number too. Continue its expansion with the number of particles in the universe, the number of stars, the number of pages in the Bible, the number of letters in the Bible, and so on, and DO NOT STOP doing so (if you stop, the number becomes rational). On the other hand, there are irrational numbers formed from just two or three distinct digits in their decimal expansion; take for example $0.kkk0kkkk0kkk000k0000kk\ldots$, where $0<k \le 9$. An irrational number is just defined as a number that is NOT rational, in other words one which can't be expressed in the form $\dfrac{p}{q}$. So irrationality alone does not force the digits to contain every finite combination. When I was reading about $\pi$, I just found this picture, Crazy $\pi$.
Calculus word problem: expansion of air How would I solve this problem? The adiabatic law for the expansion of air is $P V^{1.4}=C$, where $P$ is pressure, $V$ is volume, and $C$ is a certain constant. At a given instant the volume is 30 cubic feet and the pressure is 60 psi. At what rate is the pressure changing if the volume is decreasing at a rate of 2 cubic feet per second? I know that $\frac{dv}{dt}=-2$ $P(1.4)\frac{dv}{dt}+V^{1.4}\frac{dp}{dt}=0$ $60(1.4)(-2)+30^{1.4}\frac{dp}{dt}=0$ $-168+116.9417\frac{dp}{dt}$ $168=116.9417\frac{dp}{dt}$ $\frac{dp}{dt}=1.4366$ but would this be right?
Note: $PV^{\gamma}=C \implies \dfrac{dP}{dt}\cdot V^{\gamma}+(\gamma)V^{(\gamma-1)} \cdot \dfrac{dV}{dt}\cdot P=0$
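Carrying the hint through with the numbers given (my own completion of the hint, not part of the original answer): solving for $dP/dt$ gives $$\frac{dP}{dt} = -\gamma\,\frac{P}{V}\,\frac{dV}{dt} = -1.4\cdot\frac{60}{30}\cdot(-2) = 5.6\ \text{psi per second}.$$ This also shows where the attempt in the question goes wrong: the product rule must be applied with $\frac{d}{dt}V^{1.4} = 1.4\,V^{0.4}\,\frac{dV}{dt}$, not $1.4\,\frac{dV}{dt}$.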
Prove two inequalities about limit inferior and limit superior I wish to prove the following two inequalities: Suppose $X$ is a subset of $\Bbb R$, the functions $f$ and $g$ map $X\to \Bbb R$, and $x_{0}\in X$ is a limit point. Then: $$\limsup_{x\to x_0}(f(x)+g(x))\le \limsup_{x\to x_0}(f(x))+\limsup_{x\to x_0}(g(x))$$ and $$\liminf_{x\to x_0}(f(x))+\liminf_{x\to x_0}(g(x))\le \liminf_{x\to x_0}(f(x)+g(x)).$$ I did try to use proof by contradiction, but I did not get the desired results. Any help on them, please.
Contradiction is not recommended, as there is a natural direct approach. 1- limsup: recall that $$ \limsup_{x\rightarrow x_0}h(x)=\inf_{\epsilon>0}\sup_{0<|x-x_0|<\epsilon}h(x)=\lim_{\epsilon>0}\sup_{0<|x-x_0|<\epsilon}h(x) $$ where the rhs is the limit of a nonincreasing function of $\epsilon$. Note that the condition $x\in X$ should be added everywhere we take a sup above. Let's say it is here implicitly to alleviate the notations. The only thing you need to obtain the desired inequality is essentially the fact that for two nonempty subsets $A,B$ of $\mathbb{R}$ $$ \sup (A+B)\leq \sup A+\sup B. $$ I prove it at the end below. Now fix $\epsilon>0$. The latter provides the second inequality in the following $$ \limsup_{x\rightarrow x_0}f(x)+g(x)\leq \sup_{0<|x-x_0|<\epsilon}f(x)+g(x)\leq \sup_{0<|x-x_0|<\epsilon}f(x)+\sup_{0<|x-x_0|<\epsilon}g(x), $$ while the first inequality is simply due to the fact that the limsup is the inf, hence a lower bound, of the $\epsilon$ suprema. The desired inequality follows by letting $\epsilon$ tend to $0$ in the rhs. 2- liminf: you can use a similar argument with $\inf (A+B)\geq \inf A+\inf B$. Or you can simply use the following standard trick $$ \liminf_{x\rightarrow x_0}h(x)=-\limsup_{x\rightarrow x_0}-h(x) $$ which follows from $\inf S=-\sup (-S)$ (good exercise on sup/inf upper/lower bound manipulation). It only remains to apply the limsup inequality to $-f$ and $-g$. Sup inequality proof: it is trivial if one of the two sets $A,B$ is not bounded above. So assume both are bounded above. Now recall that $\sup S$ is the least upper bound of the set $S$, when it is bounded above. In particular, it is an upper bound of $S$. For every $a\in A$, we have $a\leq \sup A$, and for every $b\in B$, we get $b\leq \sup B$. So for every $x=a+b\in A+B$, we have $x\leq \sup A+\sup B$. Thus $\sup A+\sup B$ is an upper bound of $A+B$. Hence it is not smaller than the least upper bound of the latter, namely $\sup(A+B)$. QED.
Prove that the eigenvalues of a real symmetric matrix are real I am having a difficult time with the following question. Any help will be much appreciated. Let $A$ be an $n×n$ real matrix such that $A^T = A$. We call such matrices “symmetric.” Prove that the eigenvalues of a real symmetric matrix are real (i.e. if $\lambda$ is an eigenvalue of $A$, show that $\lambda = \overline{\lambda}$ )
Consider the real operator $$u := (x \mapsto Ax)$$ for all $x \in \mathbb{R}^{n}$ and the complex operator $$\tilde{u} := (x \mapsto Ax) $$ for all $x \in \mathbb{C}^{n}$. Both operators have the same characteristic polynomial, say $p(\lambda) = \det(A - \lambda I)$. Since $A$ is symmetric, $\tilde{u}$ is a Hermitian operator. By the spectral theorem for Hermitian operators, all the eigenvalues (i.e. the roots of $p(\lambda)$) of $\tilde{u}$ are real. Hence, all the eigenvalues (i.e. the roots of $p(\lambda)$) of $u$ are real. We have shown that the eigenvalues of a symmetric matrix are real numbers as a consequence of the fact that the eigenvalues of a Hermitian matrix are real.
Help in proving that $\nabla\cdot (r^n \hat r)=(n+2)r^{n-1}$ Show that$$\nabla \cdot (r^n \hat r)=(n+2)r^{n-1}$$ where $\hat r$ is the unit vector along $\bar r$. Please give me some hint. I am clueless as of now.
You can also use Cartesian coordinates and using the fact that $r \hat{r} = \vec{r} = (x,y,z)$. \begin{align} r^n \hat{r} &= r^{n-1} (x,y,z) \\ \nabla \cdot r^n \hat{r} & = \partial_x(r^{n-1}x) + \partial_y(r^{n-1}y) + \partial_z(r^{n-1}z) \end{align} Each term can be calculated: $\partial_x(r^{n-1}x) = r^{n-1} + x (n-1) r^{n-2} \partial_x r$ $\partial_x r = \frac{x}{r}$. (Here, I used the fact that $r = \sqrt{x^2+y^2+z^2}$.) The terms involving $y$ and $z$ are exactly the same with $y$ and $z$ replacing $x$. And so, \begin{align} \nabla \cdot r^n \hat{r}& = r^{n-1} + x(n-1)r^{n-2}\frac{x}{r} + r^{n-1} + y(n-1)r^{n-2}\frac{y}{r} + r^{n-1} + z(n-1)r^{n-2}\frac{z}{r}\\ & = 3r^{n-1}+(n-1)(x^2 + y^2 + z^2)r^{n-3} \\ & = 3r^{n-1} + (n-1)r^2 r^{n-3} \\ & = (n+2)r^{n-1} \end{align} Of course, polar is the easiest :)
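As a sanity check (my own addition, not part of the answer), the Cartesian computation can be verified symbolically, for instance with SymPy:

```python
import sympy as sp

# Verify div(r^n r_hat) = (n+2) r^(n-1), writing r^n r_hat = r^(n-1) (x, y, z).
x, y, z, n = sp.symbols('x y z n', positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)
F = [r**(n - 1) * c for c in (x, y, z)]
div = sum(sp.diff(F[i], v) for i, v in enumerate((x, y, z)))
print(sp.simplify(div - (n + 2) * r**(n - 1)))  # expected: 0
```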
Is there a continuous function with these properties (not piecewise) In short, I'm wondering if I can find a continuous non-piecewise function with these properties. I've found one that was close but not perfect. It's actually really useful to have something like this to scale it. Sorry about the lack of formatting (EDIT: Thank you) $$\begin{aligned} f(0) &= 1\\ f(x) &> 0\\ f'(0) &> 1\\ f'(x) &> 0\\ f''(0) &= 0\\ \lim_{x\to\infty} f '(x) &= 1 \\ \lim_{x\to -\infty} f(x) &= 0 \end{aligned} $$ The second and fourth constraints were added afterwards, so keep that in mind while viewing comments/answers Bonus points if you can find a function that has an elementary integral and that approaches y = x + 1 I've managed to get a function that displays all of the properties except the inflection at zero (and obviously the decrease in $f'(x)$), being hyperbolic (simply $\sqrt{1 + x^2 / 4}+ x / 2$). The actual purpose isn't for mathematical purposes, it's for a program. In short I need to normalize a value so that it cannot drop below 0 but has no cap therein, and should approach additivity but with diminishing returns, rather than increasing that I would get with the hyperbolic one above. Thus, for most of the positive portion I need it to be concave down. If it's integrable I can do more things with it without breaking time-independence but most of the time it won't matter.
After playing around with various compositions and games, here's an analytic one: $$f(x) = \frac 2 \pi \frac {e^x} {e^x + 1} x \arctan x + e^{2x - (2 + \frac 1 \pi)x^2}$$ The first part gives you the limits at $\pm \infty$ and the second part makes adjustments at $0$. So what was this good for anyway? :)
Condition for $\det(A^{T}A)=0$ Is it always true that $\det(A^{T}A)=0$ for $A$ an $n \times m$ matrix with $n<m$? From some notes I am reading on regression analysis, and from some trials, it would appear this is true. It is not a result I have seen, surprisingly. Can anyone provide a proof? Thanks.
From the way you wrote it, the product $A^TA$ is of size $m \times m$. However, its rank is at most $n$, which is smaller. The matrix $A^T A$, being square and of non-maximal rank, has determinant $0$.
Proof that the sum of the cubes of any three consecutive positive integers is divisible by three. So this question has less to do about the proof itself and more to do about whether my chosen method of proof is evidence enough. It can actually be shown by the Principle of Mathematical Induction that the sum of the cubes of any three consecutive positive integers is divisible by 9, but this is not what I intend to show and not what the author is asking. I believe that the PMI is not the authors intended path for the reader, hence why they asked to prove divisibility by 3. So I did a proof without using the PMI. But is it enough? It's from Beachy-Blair's Abstract Algebra Section 1.1 Problem 21. This is not for homework, I took Abstract Algebra as an undergraduate. I was just going through some problems that I have yet to solve from the textbook for pleasure. Question: Prove that the sum of the cubes of any three consecutive positive integers is divisible by 3. So here's my proof: Let a $\in$ $\mathbb{Z}^+$ Define \begin{equation} S(x) = x^3 + (x+1)^3 + (x+2)^3 \end{equation} So, \begin{equation}S(a) = a^3 + (a+1)^3 + (a+2)^3\end{equation} \begin{equation}S(a) = a^3 + (a^3 + 3a^2 + 3a + 1) + (a^3 +6a^2 + 12a +8) \end{equation} \begin{equation}S(a) = 3a^3 + 9a^2 + 15a + 9 \end{equation} \begin{equation}S(a) = 3(a^3 + 3a^2 + 5a + 3) \end{equation} Hence, $3 \mid S(a)$. QED
Your solution is fine, provided you intended to prove that the sum is divisible by $3$. If you intended to prove divisibility by $9$, then you've got more work to do! If you're familiar with working $\pmod 3$, note @Math Gems comment/answer/alternative. (Though to be honest, I would have proceeded as did you, totally overlooking the value of Math Gems approach.)
If $J$ is the $n×n$ matrix of all ones, and $A = (l−b)I +bJ$, then $\det(A) = (l − b)^{n−1}(l + (n − 1)b)$ I am stuck on how to prove this by induction. Let $J$ be the $n×n$ matrix of all ones, and let $A = (l−b)I +bJ$. Show that $$\det(A) = (l − b)^{n−1}(l + (n − 1)b).$$ I have shown that it holds for $n=2$, and I'm assuming that it holds for the $n=k$ case, $$(l-b)^{k-1}(l+(k-1)b)$$ but I'm having trouble proving that it holds for the $k+1$ case. Please help.
I think that it would be better to use $J_n$ for the $n \times n$ matrix of all ones (and similarly $A_n, I_n$), so it is clear what the dimensions of the matrices are. Proof by induction on $n$ that $\det(A_n)=(l-b)^n+nb(l-b)^{n-1}$: When $n=1, 2$, this is easy to verify. We have $\det(A_1)=\det(l)=l=(l-b)^1+b(l-b)^0$ and $\det(A_2)=\det\begin{pmatrix} l & b \\ b & l \end{pmatrix}=l^2-b^2=(l-b)^2+2b(l-b)$. Suppose that the statement holds for $n=k$. Consider $$A_{k+1}=(l-b)I_{k+1}+bJ_{k+1}=\left(\begin{array}{ccccc} l & b & b & \ldots & b \\ b & l & b & \ldots & b \\ \ldots & \ldots & \ldots &\ldots &\ldots \\ b & b & b & \ldots & l \end{array}\right)$$ Now subtracting the second row from the first gives \begin{align} \det(A_{k+1})& =\det\left(\begin{array}{ccccc} l & b & b & \ldots & b \\ b & l & b & \ldots & b \\ \ldots & \ldots & \ldots &\ldots &\ldots \\ b & b & b & \ldots & l \end{array}\right) \\ & =\det\left(\begin{array}{ccccc} l-b & b-l & 0 & \ldots & 0 \\ b & l & b & \ldots & b \\ \ldots & \ldots & \ldots &\ldots &\ldots \\ b & b & b & \ldots & l \end{array}\right) \\ & =(l-b)\det(A_k)-(b-l)\det\left(\begin{array}{cccccc} b & b & b & b & \ldots & b \\ b & l & b & b & \ldots & b \\ b & b & l & b & \ldots & b \\ \ldots & \ldots & \ldots &\ldots &\ldots & \ldots \\ b & b & b & b & \ldots & l \end{array}\right) \end{align} Now taking the $k\times k$ matrix in the last line above and subtracting the first row from all other rows gives an upper triangular matrix whose diagonal entries are $b$ followed by $k-1$ copies of $l-b$: \begin{align} \det\left(\begin{array}{cccccc} b & b & b & b & \ldots & b \\ b & l & b & b & \ldots & b \\ b & b & l & b & \ldots & b \\ \ldots & \ldots & \ldots &\ldots &\ldots & \ldots \\ b & b & b & b & \ldots & l \end{array}\right) & =\det\left(\begin{array}{cccccc} b & b & b & b & \ldots & b \\ 0 & l-b & 0 & 0 & \ldots & 0 \\ 0 & 0 & l-b & 0 & \ldots & 0 \\ \ldots & \ldots & \ldots &\ldots &\ldots & \ldots \\ 0 & 0 & 0 & 0 & \ldots & l-b \end{array}\right) \\ & =b(l-b)^{k-1} \end{align} Therefore we have (using the induction hypothesis) \begin{align} \det(A_{k+1}) & =(l-b)\det(A_k)-(b-l)\left(b(l-b)^{k-1}\right) \\ & =(l-b)((l-b)^k+kb(l-b)^{k-1})+(l-b)(b(l-b)^{k-1}) \\ & =(l-b)^{k+1}+(k+1)b(l-b)^k \end{align} We are thus done by induction. Finally, note that $(l-b)^n+nb(l-b)^{n-1}=(l-b)^{n-1}\big(l+(n-1)b\big)$, which is exactly the form asked for.
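A quick numeric spot check of the closed form (my own addition, with arbitrary test values):

```python
import numpy as np

# Check det((l-b)I + bJ) = (l-b)^(n-1) (l + (n-1) b) for a few sizes.
l, b = 2.5, 0.7
for n in range(1, 6):
    A = (l - b) * np.eye(n) + b * np.ones((n, n))
    expected = (l - b) ** (n - 1) * (l + (n - 1) * b)
    assert np.isclose(np.linalg.det(A), expected), n
print("formula verified for n = 1..5")
```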
Units (invertibles) of polynomial rings over a field If $R$ is a field, what are the units of $R[X]$? My attempt: Let $f,g \in R[X]$ with $f(X)g(X)=1$. Then the only solutions of this equation have both $f,g \in R$. So $U(R[X])$ consists of the nonzero elements of $R$. Is this correct?
If $f=a_0+a_1x+\cdots+a_mx^m$ has degree $m$, i.e. $a_m\ne 0$, and $fg=1$ for some $g=b_0+\cdots+b_n x^n$ (and $b_n\ne 0$), then observe that $$0=\deg(1)=\deg(fg)=\deg(a_0b_0+\cdots+a_mb_nx^{n+m})=m+n$$ as $a_mb_n\ne 0$. Hence $m+n=0$ and so $m=n=0$ as $m,n\ge 0$. Hence $f\in R^*$ and $R[x]^*=R^*$ (the star denotes the set of units).
Euler's proof for the infinitude of the primes I am trying to recast the proof of Euler for the infinitude of the primes in modern mathematical language, but am not sure how it is to be done. The statement is that: $$\prod_{p\in P} \frac{1}{1-1/p}=\prod_{p\in P} \sum_{k\geq 0} \frac{1}{p^k}=\sum_n\frac{1}{n}$$ Here $P$ is the set of primes. What bothers me is the second equality above, which is obtained by the distributive law applied not necessarily finitely many times. Is that justified?
It might be instructive to see the process of moving from a heuristic argument to a rigorous proof. Probably the simplest thing to do when considering a heuristic argument involving infinite sums (or infinite products or improper integrals or other such things) is to consider its finite truncations. i.e. what can we do with the following? $$\prod_{\substack{p \in P \\ p < B}} \frac{1}{1 - 1/p}$$ Well, we can repeat the first step easily: $$\prod_{\substack{p \in P \\ p < B}} \frac{1}{1 - 1/p} = \prod_{\substack{p \in P \\ p < B}} \sum_{k=0}^{+\infty} \frac{1}{p^k}$$ Because all summations involved are all absolutely convergent (I think that's the condition I want? my real analysis is rusty), we can distribute: $$ \ldots = \sum_{n} \frac{1}{n} $$ where the summation is over only those integers whose prime factors are all less than $B$. At this point, it is easy to make two very rough bounds: $$ \sum_{n=1}^{B-1} \frac{1}{n} \leq \prod_{\substack{p \in P \\ p < B}} \frac{1}{1 - 1/p} \leq \sum_{n=1}^{+\infty} \frac{1}{n} $$ And now, we can take the limit as $B \to +\infty$ and apply the squeeze theorem: $$ \prod_{\substack{p \in P}} \frac{1}{1 - 1/p} = \sum_{n=1}^{+\infty} \frac{1}{n} $$
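A small numeric illustration of the squeeze (my own addition; SymPy is used only to enumerate primes): both the truncated product and the partial harmonic sum grow without bound, with the harmonic sum sitting below the product, exactly as in the bound above.

```python
from sympy import primerange

# Compare the truncated Euler product over primes p < B with the partial
# harmonic sum H_{B-1}; the squeeze above gives H_{B-1} <= product.
for B in (10, 100, 1000, 10000):
    prod = 1.0
    for p in primerange(2, B):
        prod *= 1 / (1 - 1 / p)
    H = sum(1 / n for n in range(1, B))
    print(B, round(H, 3), round(prod, 3))
```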
If a group has 14 men and 11 women, how many different teams can be made with $6$ people that contains exactly $4$ women? A group has $14$ men and $11$ women. (a) How many different teams can be made with $7$ people? (b) How many different teams can be made with $6$ people that contains exactly $4$ women? Answer key to a is $257$ but I can't figure out how to get $257$? There's no answer key to b though, but here's my attempt: $$\binom{25}{6} - \left[\binom{14}{6} + \binom{14}{5} + \binom{14}{4} + \binom{14}{3} + \binom{14}{1} + \binom{11}{6} + \binom{11}{5}\right]$$ What I'm trying to do here is subtracting all men, all women, 5 men, 4 men, 3 men, 1 men, and 5 women team from all possible combination of team. Thanks
Hint: For the first one, the number of ways to choose $k$ women and $m$ men is $\binom{11}{k}\binom{14}{m}$ (a product, not a sum), summed over the pairs with $k+m=7$. For the second one, the number of ways to choose $4$ women is $\binom{11}{4}$ AND the number of ways to choose $2$ men is $\binom{14}{2}$; when you have an AND, you multiply them.
Another Series $\sum\limits_{k=2}^\infty \frac{\log(k)}{k}\sin(2k \mu \pi)$ I ran across an interesting series in a paper written by J.W.L. Glaisher. Glaisher mentions that it is a known formula but does not indicate how it can be derived. I think it is difficult. $$\sum_{k=2}^\infty \frac{\log(k)}{k}\sin(2k \mu \pi) = \pi \left(\log(\Gamma(\mu)) +\frac{1}{2}\log \sin(\pi \mu)-(1-\mu)\log(\pi)- \left(\frac{1}{2}-\mu\right)(\gamma+\log 2)\right)$$ Can someone suggest a method of attack? $\gamma$ is the Euler-Mascheroni Constant. Thank You!
It suffices to do these integrals: $$ \begin{align} \int_0^1 \log(\Gamma(s))\;ds &= \frac{\log(2\pi)}{2} \tag{1a}\\ \int_0^1 \log(\Gamma(s))\;\cos(2k \pi s)\;ds &= \frac{1}{4k},\qquad k \ge 1 \tag{1b}\\ \int_0^1 \log(\Gamma(s))\;\sin(2k \pi s)\;ds &= \frac{\gamma+\log(2k\pi)}{2k\pi},\qquad k \ge 1 \tag{1c} \\ \int_0^1 \frac{\log(\sin(\pi s))}{2}\;ds &= \frac{-\log 2}{2} \tag{2a} \\ \int_0^1 \frac{\log(\sin(\pi s))}{2}\;\cos(2k \pi s)\;ds &= \frac{-1}{4k},\qquad k \ge 1 \tag{2b} \\ \int_0^1 \frac{\log(\sin(\pi s))}{2}\;\sin(2k \pi s)\;ds &= 0,\qquad k \ge 1 \tag{2c} \\ \int_0^1 1 \;ds &= 1 \tag{3a} \\ \int_0^1 1 \cdot \cos(2k \pi s)\;ds &= 0,\qquad k \ge 1 \tag{3b} \\ \int_0^1 1 \cdot \sin(2k \pi s)\;ds &= 0,\qquad k \ge 1 \tag{3c} \\ \int_0^1 s \;ds &= \frac{1}{2} \tag{4a} \\ \int_0^1 s \cdot \cos(2k \pi s)\;ds &= 0,\qquad k \ge 1 \tag{4b} \\ \int_0^1 s \cdot \sin(2k \pi s)\;ds &= \frac{-1}{2k\pi},\qquad k \ge 1 \tag{4c} \end{align} $$ Then for $f(s) = \pi \left(\log(\Gamma(s)) +\frac{1}{2}\log \sin(\pi s)-(1-s)\log(\pi)- \left(\frac{1}{2}-s\right)(\gamma+\log 2)\right)$, we get $$ \begin{align} \int_0^1 f(s)\;ds &= 0 \\ 2\int_0^1f(s) \cos(2k\pi s)\;\;ds &= 0,\qquad k \ge 1 \\ 2\int_0^1f(s) \sin(2k\pi s)\;\;ds &= \frac{\log k}{k},\qquad k \ge 1 \end{align} $$ and the formula follows as a Fourier series: $$ f(s) = \sum_{k=1}^\infty \frac{\log k}{k}\;\sin(2 k\pi s),\qquad 0 < s < 1. $$ reference Gradshteyn & Ryzhik, Table of Integrals Series and Products (1a) 6.441.2 (1b) 6.443.3 (1c) 6.443.1 (2a) 4.384.3 (2b) 4.384.3 (2c) 4.384.1
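As a numeric sanity check (my own addition, using SciPy's `gammaln`; convergence of the partial sums is slow because of the $\log k/k$ decay), the series can be compared with the closed form at an interior point:

```python
import math
from scipy.special import gammaln

# Compare a partial sum of sum_{k>=2} log(k)/k sin(2 k pi s) with the
# closed form at s = 0.3.
s, N = 0.3, 200000
gamma_e = 0.5772156649015329  # Euler-Mascheroni constant

partial = sum(math.log(k) / k * math.sin(2 * math.pi * k * s)
              for k in range(2, N))
closed = math.pi * (gammaln(s)
                    + 0.5 * math.log(math.sin(math.pi * s))
                    - (1 - s) * math.log(math.pi)
                    - (0.5 - s) * (gamma_e + math.log(2)))
print(partial, closed)  # agreement improves (slowly) as N grows
```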
Function design: a logarithm asymptotic to one? I want to design an increasing monotonic function asymptotic to $1$ when $x\to +\infty $ that uses a logarithm. Also, the function should have "similar properties" to $\dfrac{x}{1+x}$, i.e.: * *increasing monotonic *$f(x)>0$ when $x>0$ *gets close to 1 "quickly", say $f(10)>0.8$
How about $$ f(x)=\frac{a\log(1+x)}{1+a\log(1+x)},\quad a>0? $$ Notes: * *I have used $\log(1+x)$ instead of $\log x$ to avoid issues near $x=0$ and to make it more similar to $x/(1+x)$. *Choose $a$ large enough to have $f(10)>0.8$. *You can see the graph of $f$ (for $a=1$) compared to $x/(1+x)$.
If $z$ is a complex number of unit modulus and argument $\theta$ If $z$ is a complex number such that $|z|=1$ and $\text{arg}\, z=\theta$, then what is $$\text{arg}\frac{1 + z}{1+ \overline{z}}?$$
Multiplying both numerator and denominator by $z$, and using $z\bar z = |z|^2 = 1$, we get: $$\arg\left(\frac{1+z}{1+\bar{z}}\right)=\arg\left(\frac{z+z^{2}}{z+1}\right)=\arg\left(\frac{z(1+z)}{1+z}\right)=\arg\left(z\right)$$ We are told that $\arg(z)=\theta$, therefore: $$\arg\left(\frac{1+z}{1+\bar{z}}\right)=\theta$$
Is the gradient of a function in $H^2_0$ in $H^1_0$? Suppose we have $f\in H^2_0(U)$, so $f$ is the limit of some sequence $(g_n)$ of smooth compactly supported functions on $U\subset\mathbb{R}^n$ (assume bounded & smooth boundary) and $f$ is in the Sobolev space $H^2(U)$. Does this imply that $\frac{\partial f}{\partial x_i}\in H^1_0(U)$ for all $i \in \{1,...,n\}$? Clearly $f\in H^1(U)$, but I'm not sure if it's the limit of a sequence in $C^{\infty}_c(U)$. Can we just take the sequence $(\frac{\partial g_n}{\partial x_i})$, or does this derivative not commute with the limit $n\rightarrow\infty$?
If $g_n$ converges towards $f$ in $H^2(U)$, you have $$ \sum_{|\alpha|\le 2}\left\| \partial^\alpha f - \partial^\alpha g_n \right\|^2_{L^2(U)} \to 0,$$ where the sum ranges over all multiindices $\alpha$, with $|\alpha| \le 2$. In order to prove $\partial g_n / \partial x_i \to \partial f / \partial x_i$ in $H^1(U)$, you have to prove$$ \sum_{|\alpha|\le 1}\left\| \partial^\alpha \partial_{x_i} f - \partial^\alpha \partial_{x_i} g_n \right\|^2_{L^2(U)} \to 0.$$ Now you use $$\sum_{|\alpha|\le 1}\left\| \partial^\alpha \partial_{x_i} f - \partial^\alpha \partial_{x_i} g_n \right\|^2_{L^2(U)} \le \sum_{|\alpha|\le 2}\left\| \partial^\alpha f - \partial^\alpha g_n \right\|^2_{L^2(U)}.$$ Hence, the (components of the) gradient belongs to $H_0^1(U)$.
Let $A$ be any uncountable set, and let $B$ be a countable subset of $A$. Prove that $|A| = |A - B|$ I am going over my professor's answer to the following problem and to be honest I am quite confused :/ Help would be greatly appreciated! Let $A$ be any uncountable set, and let $B$ be a countable subset of $A$. Prove that $|A| = |A - B|$. The answer key that I am reading right now follows this idea: It says that $A-B$ is infinite and proceeds to define a new denumerable subset of $A-B$, called $C$. Of course, since $C$ is countably infinite, we can write $C$ as $\{c_1,c_2,c_3,\dots\}$. Once we have the set $C$, we know that the union of $C$ and $B$ must be denumerable (from another proof), since $B$ is countable and $C$ is denumerable. This is where I start to have trouble. The rest of the solution goes like this... Since the union of $C$ and $B$ is denumerable, there is a bijective function $f$ that maps the union of $C$ and $B$ to $C$ again. The solution then proceeds to define another function $h$ that maps $A$ to $A-B$. I am just so lost. The thing is I don't even understand the point of constructing a new subset $C$ or defining functions like $f$ or $h$. So I suppose my question is, in general, how would one approach this problem? I am not mathematically inclined unfortunately, and a lot of the steps in almost all of these problems seem arbitrary and random. Help would be really appreciated on this problem, and some general ideas on how to solve problems like these!!! Thank you so very much!
$|A-B|\leq |A|$ is obvious. For the reverse inequality, Use $|A|=|A\times A|$. Denote the bijection $A\rightarrow A\times A$ by $\phi$, then $\phi(B)$ embeds into a countable subset of $A\times A$. Then consider the projection $\pi_1$ onto the first coordinate, of the set $\phi(B)$. Namely $\pi_1 \phi(B)$. Since $|\pi_1\phi(B)|\leq |B|$, and $|B|<|A|$, we see that $A-\pi_1\phi(B)$ is nonempty. Pick any element $x \in A-\pi_1\phi(B)$, then we see that $\{x\}\times A$ embeds into $(A\times A)-\phi(B)$. This gives the reverse inequality $|A|\leq |A-B|$.
$X(n)$ and $Y(n)$ divergent doesn't imply $X(n)+Y(n)$ divergent. Please, give me an example where $X(n)$ and $Y(n)$ are both divergent series, but $(X(n) + Y(n))$ converges.
Try $x_n = n, y_n = -n$. Then both $x_n,y_n$ clearly diverge, but $x_n+y_n = 0$ clearly converges. Or try $x_n = n, y_n = \frac{1}{n}-n$ if you want something less trivial. Again both $x_n,y_n$ diverge, but $x_n+y_n = \frac{1}{n}$ converges.
Alternate proof for $2^{2^n}+1$ ends with 7, n>1. I have a proof by induction that $2^{2^n}+1$ ends with 7. I've been trying to prove that within the theory of rings and ideals, but haven't achieved it yet. The statement is equivalent to: $2^{2^n}-6$ ends with zero, so prove that for $$ e \in \mathbb{N} : e=2^n \Rightarrow 2^e-6 \in (10)\subset\mathbb{Z}$$ I'm not sure if this is the easiest equivalent statement to prove in the language of rings. Any help? Alternatively, $$ e\in \bar{0}\in\mathbb{Z}_4 \Rightarrow 2^e-6 \in \bar{0} \in \mathbb{Z}_{10} $$ is also proven by induction.
Here is an alternate proof; it is basically the same idea as TA's or Marvis's, but probably presented in a more "elementary" way. For $n \geq 2$ we have $$2^{2^n}-16=16^{2^{n-2}}-16=16[16^{\alpha}-1]=16(16-1)(\mbox{junk})$$ with $\alpha = 2^{n-2}-1$. Since $16$ is even and $16-1=15$ is divisible by $5$, it follows that $2^{2^n}-16$ is a multiple of $10$; that is, $2^{2^n}\equiv 6 \pmod{10}$, and hence $2^{2^n}+1$ ends with $7$.
Order of infinite dimension norms I know that $$\|{f}\|_{L^1(0,L)}\leq\|{f}\|_{L^2(0,L)}\leq\|{f}\|_{\mathscr{C}^1(0,L)}\leq\|{f}\|_{\mathscr{C}^2(0,L)}\leq\|{f}\|_{\mathscr{C}^{\infty}(0,L)}$$ But I don't know where to put in this chain this norm: $\|{f}\|_{L^{\infty}}=\inf\{C\in\mathbb{R},\left|f(x)\right|\leq{C}\text{ almost everywhere}\}$ Thanks pals.
If the norm $\mathcal C^1$ is defined by $\sup_{0<x<L}|f(x)|+\sup_{0<x<L}|f'(x)|$, then we have $$\lVert f\rVert_{L^2}\leqslant \sqrt L\lVert f\rVert_{L^{\infty}}\leqslant \sqrt L\lVert f\rVert_{\mathcal C^1}.$$
indexed family of sets, not pairwise disjoint but whole family is disjoint I've seen this problem before, but can't remember how to finish it: Define an indexed family of sets $ \{A_i : i \in \mathbb{N} \}$ in which for any $m,n\in \mathbb{N}, A_m \cap A_n \not= \emptyset$ and $\bigcap A_i = \emptyset$. The closest I came was something to the effect of $A_i = \{(0, 1/n): n \in \mathbb{N} \}$, but I know that doesn't meet the last criterion. Suggestions on how to fix/finish it? Am I even as close as I think I am?
How about $A_i =\{n\in\mathbb N\mid n>i\}$? But your choice, more properly written $A_i=(0,1/i)$, is also fine.
Given a dense subset of $\mathbb{R}^n$, can we find a line that intersects it in a dense set? I have some difficulties with the following question. Let $S$ be a dense subset of $\mathbb{R}^n$. Can we find a straight line $L\subset\mathbb{R}^n$ such that $S\cap L$ is a dense subset of $L$? Note. From the counterexample of Brian M. Scott, I would like to ask more: if we suppose that $S$ has full measure, can we find a straight line $L$ such that $S\cap L$ is a dense subset of $L$?
For each $n>1$ it’s possible to construct a dense subset $D$ of $\Bbb R^n$ such that every straight line in $\Bbb R^n$ intersects $D$ in at most two points. Let $\mathscr{B}=\{B_n:n\in\omega\}$ be a countable base for $\Bbb R^n$. Construct a set $D=\{x_n:n\in\omega\}\subseteq\Bbb R^n$ recursively as follows. Given $n\in\omega$ and the points $x_k$ for $k<n$, no three of which are collinear, observe that there are only finitely many straight lines containing two points of $\{x_k:k<n\}$; let their union be $L_n$. $L_n\cup\{x_k:k<n\}$ is a closed, nowhere dense set in $\Bbb R^n$, so it does not contain the open set $B_n$, and we may choose $x_n\in B_n\setminus\big(L_n\cup\{x_k:k<n\}\big)$. Clearly $D$, so constructed, is dense in $\Bbb R^n$, since it meets every member of the base $\mathscr{B}$, and by construction no three points of $D$ are collinear.
Defining a metric space I'm studying for actuarial exams, but I always pick up mathematics books because I like to challenge myself and try to learn new branches. Recently I've bought Topology by D. Kahn and am finding it difficult. Here is a problem that I think I'm answering sufficiently, but any help would be great if I am off. If $d$ is a metric on a set $S$, show that $$d_1(x,y)=\frac{d(x,y)}{1+d(x,y)}$$ is a metric on $S$. The conditions for being a metric are $d(X,Y)\ge{0}, d(X,Y)=0$ iff $X=Y$, $d(X,Y)=d(Y,X)$, and $d(X,Y)\le{d(X,Z)+d(Z,Y)}$. Thus, we simply go axiom by axiom. 1) Since both $d(x,y)\ge{0}$ and $1+d(x,y)\ge{0},$ it is clear that $d_1(x,y)\ge{0}$. (Is this a sufficient analysis?) 2) $d_1(x,x)=\frac{d(x,x)}{1+d(x,x)}=\frac{0}{1+0}=0$. 3) $d_1(x,y)=\frac{d(x,y)}{1+d(x,y)}=\frac{d(y,x)}{1+d(y,x)}=d_1(y,x).$ 4) $d_1(x,y)=\frac{d(x,y)}{1+d(x,y)}\le{\frac{d(x,z)+d(z,y)}{1+d(x,z)+d(z,y)}}=\frac{d(x,z)}{1+d(x,z)+d(z,y)}+\frac{d(z,y)}{1+d(x,z)+d(z,y)}\lt\frac{d(x,z)}{1+d(x,z)}+\frac{d(z,y)}{1+d(z,y)}=d_1(x,z)+d_1(z,y).$ However, #4 is strictly less, not less than or equal to, according to my analysis, so where did I go wrong?
There is something wrong in 4), just as Brian comments. Here is a proof for you: Proof: Notice that $f(x)=\frac{x} {1+x}$ is increasing on $\mathbb R^+$: to see this, let $g(x)=\frac{1}{x+1}$. It is easy to see that $g(x)$ is decreasing on $\mathbb R^+$, and note that $f(x)+g(x)=1$. Therefore, $f(x)$ is an increasing function on $\mathbb R^+$. Since $d(x,y) \le d(x,z)+d(z,y)$, we have $d_1(x,y)=\frac{d(x,y)}{1+d(x,y)}\le{\frac{d(x,z)+d(z,y)}{1+d(x,z)+d(z,y)}}=\frac{d(x,z)}{1+d(x,z)+d(z,y)}+\frac{d(z,y)}{1+d(x,z)+d(z,y)}\le\frac{d(x,z)}{1+d(x,z)}+\frac{d(z,y)}{1+d(z,y)}=d_1(x,z)+d_1(z,y).$ Hope this is helpful for you. ADDed: the last inequality need not be strict, since $d(z,y)$ or $d(x,z)$ could be zero; this is why the triangle inequality for $d_1$ is stated with $\le$, which is all that is required.
How to read this expression? How should I read this expression: $$\frac{1}{4} \le a \lt b \le 1$$ Does it mean $a,b$ lie between $\displaystyle \frac{1}{4}$ and $1$? Or that $a$ is less than $b$, which in turn is less than or equal to $1$? So $a+b$ won't be greater than $1$?
Think of the number line. The numbers $\{\tfrac{1}{4}, a, b, 1\}$ are arranged from left to right. The weak inequalities on either end indicate that $a$ could be $\tfrac{1}{4}$ and $b$ could be $1$. However, the strict inequality in the middle indicates that $a$ never equals $b$.
What does it mean to have a determinant equal to zero? After looking in my book for a couple of hours, I'm still confused about what it means for a $(n\times n)$-matrix $A$ to have a determinant equal to zero, $\det(A)=0$. I hope someone can explain this to me in plain English.
Take a $2 \times 2$ matrix, call it $A$, and plot its column vectors in a coordinate system. $A = [[2,1],[4,2]]$ (NumPy notation of a matrix). The two column vectors read off from $A$ are $x=[2,4]$ and $y=[1,2]$. If you plot them, you can see that they lie in the same span: one is a multiple of the other. That means the vectors $x$ and $y$ do not enclose an area, hence $\det(A)$ is zero. The determinant measures the (signed) area of the parallelogram formed by the vectors.
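A one-line check of this example (my own addition):

```python
import numpy as np

# The columns [2, 4] and [1, 2] are parallel, so the parallelogram they
# span is degenerate and the determinant vanishes.
A = np.array([[2, 1],
              [4, 2]])
print(np.linalg.det(A))  # 0.0 up to floating-point rounding
```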
How can I calculate the expected number of changes of state of a discrete-time Markov chain? Assume we have a 2 state Markov chain with the transition matrix: $$ \left[ \begin{array} (p & 1-p\\ 1-q & q \end{array} \right] $$ and we assume that the first state is the starting state. What is the expected number of state transitions in $T$ periods? (I want to count the number of state changes from state 1 to state 2, and the other way round).
1. Let $s=(2-p-q)^{-1}$, then $\pi_0=(1-q)s$, $\pi_1=(1-p)s$, defines a stationary distribution $\pi$. If the initial distribution is $\pi$, at each step the distribution is $\pi$ hence the probability that a jump occurs is $$ r=(1-p)\pi_0+(1-q)\pi_1=2(1-p)(1-q)s. $$ In particular, the mean number of jumps during $T$ periods is exactly $rT$. 2. By a coupling argument, for any distribution $\nu$, the difference between the number of jumps of the Markov chain with initial distribution $\nu$ and the number of jumps of the Markov chain with initial distribution $\pi$ is exponentially integrable. 3. Hence, for any starting distribution, the mean number of jumps during $T$ periods is $rT+O(1)$. 4. Note that $$ \frac1r=\frac12\left(\frac1{1-p}+\frac1{1-q}\right), $$ and that $\frac1{1-p}$ and $\frac1{1-q}$ are the mean lengths of the intervals during which the chain stays at $0$ and $1$ before a transition occurs to $1$ and $0$ respectively. Hence the formula. For a Markov chain with $n$ states and transition matrix $Q$, one would get $$ \frac1r=\frac1n\sum_{k=1}^n\frac1{1-Q_{kk}}. $$
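A quick Monte Carlo check of the formula (my own addition, with hypothetical values $p=0.8$, $q=0.6$): starting from state 1, the mean number of jumps over $T$ periods should be close to $rT$, up to the $O(1)$ correction from point 3.

```python
import numpy as np

rng = np.random.default_rng(0)
p, q, T, trials = 0.8, 0.6, 1000, 500
P = np.array([[p, 1 - p],
              [1 - q, q]])
r = 2 * (1 - p) * (1 - q) / (2 - p - q)

total_jumps = 0
for _ in range(trials):
    state = 0                        # start in the first state
    for _ in range(T):
        new = rng.choice(2, p=P[state])
        total_jumps += (new != state)
        state = new
print(total_jumps / trials, r * T)   # close, up to the O(1) correction
```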
Proving that $L = \{0^k \mid \text{$k$ is composite}\}$ is not regular by pumping lemma Suppose $L = \{0^k \mid \text{$k$ is composite}\}$. Prove that this language is not regular. What bugs me in this lemma is that when I choose a string in $L$ and try to consider all cases of dividing it into three parts so that in each case it violates lemma, I always find one case that does not violate it. A bit of help will be appreciated. Thanks in advance. My attempt: * *Suppose that $L$ is regular. *Choosing string $x = 0^{2k}$ where $k$ is prime ($2k$ pumping constant) *We can divide $x$ into three parts $u, v, w$ such that: $$|uv| \le 2k \qquad |v| > 0\qquad uv^iw \in L \text{ for $i \ge 0$}$$ *If $u$ and $w$ are empty, all conditions are met. It is the same when I change $2$ for any other number. Maybe I'm choosing wrong.
You can’t assume that the pumping constant is even. If you want to start with a word of the form $0^{2k}$ for some $k$, that’s fine, but you can’t take $2k$ to be the pumping constant $p$; you can only assume that $2k\ge p$. But trying to use the pumping lemma directly to prove that $L$ is not regular is going to be a bit difficult. I would use the fact that the regular languages are closed under complementation, so if $L$ is regular, so is $$L'=\{0^n:n\text{ is prime}\}\;.$$ Now apply the pumping lemma to $L'$. I’ve done it in the spoiler-protected text below, but I think that you can probably do it yourself without the help. Let $p$ be the pumping length, and start with $x=0^n$ for some prime $n\ge p$. Decompose $x$ in the usual way as $uvw$, so that $|uv|\le p$, $|v|>0$, and $uv^kw\in L$ for $k\ge 0$. Let $a=|uw|$ and $m=|v|>0$; then for each $k\ge 0$ you have $|uv^kw|=a+km$. In other words, the lengths of the words $uv^kw$ form an arithmetic sequence with first term $a$ and constant difference $m$. You should have no trouble showing that this sequence must contain a composite number.
Let $a$ be a prime element in a PID. Show that $R/(a)$ is a field. Let $a$ be a prime element in a PID $R$. Show that $R/(a)$ is a field. My attempt: Since $a$ is prime, $(a)$ is a prime ideal of $R$. Since $R$ is a PID, every nonzero prime ideal of $R$ is maximal. This implies that $(a)$ is maximal and hence $R/(a)$ is a field. Is my proof correct?
Almost. But you should just snip out ", $R$ is also a UFD and hence" (a phrase from an earlier version of your proof). The fact that every non-zero prime is maximal is a fact true about PIDs but NOT about UFDs. Indeed, consider that $\mathbb{Z}[x]/(x)$ is an integral domain but not a field.
Additive set function properties I am reading an introduction to measure theory, which starts by defining $\sigma$-rings, then additive set functions and their properties, which are given without proof. I was able to prove two of them, which are very easy: $\phi(\emptyset)=0$ and $\phi(A_1\cup \ldots\cup A_n)=\displaystyle\sum_{i=1}^n\phi(A_i)$ for pairwise disjoint $A_i$. There are three more which I couldn't do, which are: 1- $\phi(A_1\cup A_2)+\phi(A_1 \cap A_2)=\phi(A_1)+\phi(A_2)$ for any two sets $A_1$ and $A_2$. 2- If $\phi(A) \ge 0$ for all $A$ and $A_1 \subset A_2$ then $\phi(A_1) \le \phi(A_2)$. 3- If $A_1 \subset A_2$ and $|\phi(A_1)|< + \infty$ then $\phi(A_2-A_1)=\phi(A_2)-\phi(A_1)$. Here $\phi$ is an additive set function. I need some hints on how to proceed.
I was able to prove the third property and then use it to prove the first. Let $B \subset A$. We have $(A \setminus B) \cap B = \emptyset$, so since $\phi$ is additive, $\phi(A \setminus B) + \phi(B)=\phi((A\setminus B) \cup B)=\phi(A)$. Then $\phi(A \setminus B)=\phi(A)-\phi(B)$; this proves the third one. Now for the first: $A \cup B= \big(A\setminus (A\cap B)\big) \cup \big(B \setminus (A\cap B)\big) \cup (A \cap B)$, so $\phi(A \cup B)= \phi(A)-\phi(A\cap B)+\phi(B)- \phi(A\cap B) + \phi(A \cap B)$. Hence upon adding $\phi(A \cap B)$ to both sides we get $\phi(A \cup B) + \phi(A \cap B)= \phi(A) + \phi (B)$. Can anyone help with the second one? Edit: Here's the second one. Since $\phi(A) \ge 0$ for all sets, $\phi(A\setminus B)\ge 0$; hence by the third property $\phi(A)-\phi(B) \ge 0$ when $B \subset A$.
Logarithm calculation result I am carrying out a review of a network protocol, and the author has provided a function to calculate the average steps a message needs to take to traverse a network. It is written as $$\log_{2^b}(N)$$ Does the positioning of the $2^b$ pose any significance during calculation? I can't find an answer either way. The reason is, they have provided the results of their calculations and according to their paper, the result would be $1.25$ (given $b= 4$ and $N= 32$). Another example was given this time $N= 50$, $b=4$ giving a result of $1.41$. I don't seem to be able to get the same result if I were to apply the calculation and so it's either my method/order of working or their result is incorrect (which I doubt). Can someone help to provide the correct way of calculating the values, and confirm the initial results? My initial calculation was calculate $\log(2^4) \cdot 32$... Clearly it's totally wrong (maths is not a strong point for me).
The base of the logarithm is $2^b$. You want to find an $x$ such that $(2^b)^x = N$, i.e. $2^{bx} = N$. You can rewrite that as $$x = \dfrac{\log N}{b}$$ if you take the $\log$ to base-2.
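Checking this against the paper's figures (my own arithmetic, for verification): for $N=32$, $b=4$ we get $\log_2 32 = 5$ and $x = 5/4 = 1.25$; for $N=50$, $b=4$ we get $\log_2 50 \approx 5.644$ and $x \approx 1.41$. Both match the quoted results, so the only significance of the positioning of $2^b$ is that it is the base of the logarithm.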
An ambulance problem involve sum of two independent uniform random variables An ambulance travels back and forth at a constant speed along a road of length $L$. At a certain moment of time, an accident occurs at a point uniformly distributed on the road.[That is, the distance of the point from one of the fixed ends of the road is uniformly distributed over ($0$,$L$).] Assuming that the ambulance's location at the moment of the accident is also uniformly distributed, and assuming independence of the variables, compute the distribution of the distance of the ambulance from the accident. Here is what I have so far: $X$ = point where the accident happened $Y$ = location of the ambulance at the moment. $D = |X-Y|$, represents the distance between the accident and the ambulance $P(D \leq d) = $$\mathop{\int\int}_{(x,y)\epsilon C} f(x,y) dx dy$ where $C$ is the set of points where $|X-Y| \leq d$ I'm having trouble setting up the limit for the integral. It would be greatly appreciated if someone can upload a picture of the area of integration.
Here is a rough sketch of the integration region: [diagram not reproduced] The $x$ and $y$ axes go between $0$ and $L$. The "shaded" (for lack of a better word) region represents those $X$ and $Y$ such that $|X-Y| \le d$. The integration region is split in 3 pieces, which I hope you can see from this admittedly crude diagram: $$P(|X-Y| \le d) = \frac{1}{L^2} \left [\int_0^d dx \: \int_0^{d+x} dy + \int_d^{L-d} dx \: \int_{-d+x}^{d+x} dy + \int_{L-d}^{L} dx \: \int_{-d+x}^{L} dy \right ]$$ So you can check, the result I get is $$P(|X-Y| \le d) = \frac{d}{L} \left ( 2 - \frac{d}{L} \right)$$ You can also see it from the difference between the area of the whole region minus the area of the 2 right triangles outside the "shaded" region.
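A quick simulation check of the final formula (my own addition, with $L=1$, $d=0.3$):

```python
import numpy as np

# Empirical P(|X - Y| <= d) for X, Y uniform on (0, L) vs. d/L (2 - d/L).
rng = np.random.default_rng(1)
L, d, N = 1.0, 0.3, 10**6
X = rng.uniform(0, L, N)
Y = rng.uniform(0, L, N)
print(np.mean(np.abs(X - Y) <= d), d / L * (2 - d / L))  # both ~ 0.51
```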
Principal axis of a matrix I am trying to find the definition of the principal axes of a matrix. I saw this phrase in an exercise: Let $A$ be a positive matrix, $f:G\longrightarrow \mathbb{R}$ a smooth function, $G$ an open set in $\mathbb{R}^n$. I need to find the orthogonal coordinate transformation $y=Px$ such that the main axes in the $y$ coordinates are the principal axes of $A$. The book says to diagonalize $A$, $PAP^t=D$, and to choose $P$ to be the transformation. What is the definition of the principal axes of a matrix? Thanks.
Often, principal axes of a matrix refer to its eigenvectors. With this diagonalization, $P$ is the matrix of eigenvectors.
Examples of 2D wave equations with analytic solutions I need to numerically solve the following wave equation $$\nabla^2\psi(\vec{r},t) - \frac{1}{c(\vec{r})^2}\frac{\partial^2}{\partial t^2}\psi(\vec{r},t) = -s(\vec{r},t)$$ subject to zero initial conditions $$\psi(\vec{r},0)=0, \quad \left.\frac{\partial}{\partial t}\psi(\vec{r},t)\right|_{t=0}=0$$ where $\vec{r} \in \mathbb{R}^2$ and $t \in \mathbb{R}$. The problem is that I don't know if my numerical solution is right or not, so I wonder if there are some simple cases where the analytic solution can be calculated (besides the Green's function, i.e. the solution when $c(\vec{r}) \equiv $ const and $s(\vec{r},t) = \delta(\vec{r})\delta(t)$), so I can compare it with my numerical solution. Thanks!
Solution using Green functions and the Sommerfeld radiation condition, in cylindrical coordinates: \begin{eqnarray} u_s(\rho, \phi, t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \{ -i \pi H_0^{(2)}\left(k | \sigma - \sigma_s| \right) F(\omega) e^{i\omega t} d\omega\} \end{eqnarray} where $s$ denotes the source position. More details and an implementation can be found in this post from Computational Science. It uses the following reference: Morse and Feshbach, 1953, p. 891 - Methods of theoretical physics: New York, McGraw-Hill Book Co., Inc. Two snapshots were shown here (not reproduced): $\mathbb{R}^2$ on a $[200, 200]$ lattice with a space increment of 100 meters (not reflected in the axes) and velocity 8000 m/s. The source function $s(\vec{r},t) = \delta(\rho, \phi)f(t)$, sampled with $\Delta t = 0.05$ seconds and obviously placed at the origin $(\rho=0,\phi=0)$, was also plotted; in fact it can be anything you want as long as you can take its Fourier transform.
Find the area of the surface obtained by revolving $\sqrt{1-x^2}$ about the x-axis? So I began by choosing my formulas: Since I know the curve is being rotated around the x-axis I choose $2\pi\int yds$ where $y=f(x)=\sqrt{1-x^2}$ $ds=\sqrt{1+[f'(x)]^2}$ When I compute ds, I find that $ds=\sqrt{x^6-2x^4+x^2+1}$ Therefore, my integral becomes: $2\pi\int(1-x^2)^{\frac{1}{2}}\sqrt{x^6-2x^4+x^2+1}dx$ Am I on the right track, because this integral itself seems very hard to solve?
No. $$f'(x) = -\frac{x}{\sqrt{1-x^2}} \implies 1+f'(x)^2 = \frac{1}{1-x^2}$$ so $y\,ds = \sqrt{1-x^2}\cdot\frac{dx}{\sqrt{1-x^2}} = dx$, and the surface area is $2\pi\int_{-1}^{1}dx = 4\pi$, the area of the unit sphere, as expected.
Does Separable + First Countable + Sigma-Locally Finite Basis Imply Second Countable? A topological space is separable if it has a countable dense subset. A space is first countable if it has a countable basis at each point. It is second countable if there is a countable basis for the whole space. A collection of subsets of a space is locally finite if each point has a neighborhood which intersects only finitely many sets in the collection. A collection of subsets of a space is sigma-locally finite (AKA countably locally finite) if it is the union of countably many locally finite collections. My question is, if a space is separable, first countable, and has a sigma-locally finite basis, must it also be second countable? I think the answer is yes, because I haven't found any counterexample here. Any help would be greatly appreciated. Thank You in Advance. EDIT: I fixed my question. I meant that the space should have a locally finite basis, not be locally finite itself, which doesn't really mean much.
For example: Helly Space; Right Half-Open Interval Topology; Weak Parallel Line Topology. These spaces are all separable, first countable and paracompact, but not second countable. Note that in a paracompact space every open cover has a refinement that is the union of just one locally finite collection.
Proof of Proposition/Theorem V in Gödel's 1931 paper? Proposition V in Gödel's famous 1931 paper is stated as follows: For every recursive relation $ R(x_{1},...,x_{n})$ there is an n-ary "predicate" $r$ (with "free variables" $u_1,...,u_n$) such that, for all n-tuples of numbers $(x_1,...,x_n)$, we have: $$R(x_1,...,x_n)\Longrightarrow Bew[Sb(r~_{Z(x_1)}^{u_1}\cdot\cdot\cdot~_{Z(x_n)}^{u_n})] $$ $$\overline{R}(x_1,...x_n)\Longrightarrow Bew[Neg~Sb(r~_{Z(x_1)}^{u_1}\cdot\cdot\cdot~_{Z(x_n)}^{u_n})]$$ Gödel "indicate(s) the outline of the proof" and basically says, in his inductive step, that the construction of $r$ can be formally imitated from the construction of the recursive function defining relation $R$. I have been trying to demonstrate the above proposition with more rigor, but to no avail. I have, however, consulted "On Undecidable Propositions of Formal Mathematical Systems," the lecture notes taken by Kleene and Rosser from Gödel's 1934 lecture, which have been much more illuminating; but still omits the details in the inductive step from recursive definition, stating "the proof ... is too long to give here." So can anyone give me helpful hint for the proof of the above proposition, or even better, a source where I can find such a demonstration? Thanks!
I'm not completely familiar with Gödel's notation, but I think this is equivalent to theorem 60 in Chapter 2 of The Logic of Provability by George Boolos, which has fairly detailed proofs of this sort of thing (all in chapter 2).
Permutation for finding the smallest positive integer Let $\pi = (1,2)(3,4,5,6,7)(8,9,10,11)(12) \in S_{12}$. Find the smallest positive integer $k$ for which $$\pi^{(k)}=\pi \circ \pi \circ\ldots\circ \pi = \iota$$ Generalize. If a $\pi$'s disjoint cycles have length $n_1, n_2,\dots,n_t$, what is the smallest integer $k$ so that $\pi^{(k)} = \iota$? I'm confused with this question. A clear explanation would be appreciated.
$\iota$ represents the identity permutation: every element in $\{1, 2, ..., 12\}$ is mapped to itself: $\quad \iota = (1)(2)(3)(4)(5)(6)(7)(8)(9)(10)(11)(12)$. Recall the definition of the order of any element of a finite group. In the case of a permutation $\pi$ in $S_n$, the exponent $k$ in your question represents the order of $\pi$. That is, if $\;\pi^k = \underbrace{\pi\circ \pi \circ \cdots \circ \pi}_{\large k \; times}\; = \iota,\;$ and if $k$ is the least such positive integer such that $\pi^k = \iota$, then $k$ is the order of $\pi$. A permutation expressed as the product of disjoint cycles has order $k$ equal to the least common multiple of the lengths of the cycles. So, if $\pi = (1,2)(3,4,5,6,7)(8,9,10,11)(12) \in S_{12}$, Then the lengths $n_i$ of the 4 disjoint cycles of $\pi$ are, in order of their listing above, $n_1 = 2, \; n_2 = 5, \; n_3 = 4,\;n_4 = 1.\;$. So the order $k$ of $\pi$ is given by the least common multiple $$\;\text{lcm}\,(n_1, n_2, n_3, n_4) = \operatorname{lcm}(2, 5, 4, 1) = 20.$$ That is, $\pi^k = \pi^{20} = \iota,\;$ and there is NO positive integer $n<k = 20\,$ such that $\pi^n = \iota$. What this means is that $$\underbrace{\pi \circ \pi \circ \cdots \circ \pi}_{\large 20 \; factors} = \iota$$ and $$\underbrace{\pi \circ \pi \circ \cdots \circ \pi}_{ n \; factors,\;1 \,\lt n\, \lt 20} \neq \iota$$
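A small code sketch of the generalization (my own addition): the order of a permutation is the lcm of its disjoint-cycle lengths.

```python
from math import gcd
from functools import reduce

# Order of a permutation = lcm of the lengths of its disjoint cycles.
def perm_order(cycle_lengths):
    return reduce(lambda a, b: a * b // gcd(a, b), cycle_lengths, 1)

print(perm_order([2, 5, 4, 1]))  # 20
```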
How do i prove that $\det(tI-A)$ is a polynomial? In wikipedia, it's said "$\det(tI-A)$ can be explicitly evaluated using exterior algebra", but i have not learned exterior algebra yet and i just want to know whether it is polynomial, not how it looks like. How do i prove that $\det(tI-A)$ is a polynomial in $\mathbb{F}[t]$ where $\mathbb{F}$ is a field and $A$ is an $n\times n$ matrix?
To prove that $\det (tI-A)$ is a polynomial you must know some definition, or some properties, of the determinant. The most straightforward and least mystical approach is to use Laplace's formula: http://en.wikipedia.org/wiki/Laplace_expansion This would give a rather quick way of proving that $\det (tI-A)$ is a polynomial, and for almost the same amount of work, that it is a monic polynomial of degree $n$ (if $A$ is an $n\times n$ matrix).
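For intuition, here is a small symbolic illustration (my own addition, using SymPy): for a generic $3\times 3$ matrix, $\det(tI-A)$ comes out as a monic polynomial of degree $3$ in $t$, exactly as the Laplace-expansion argument predicts.

```python
import sympy as sp

# det(t*I - A) for a generic symbolic 3x3 matrix A: a monic cubic in t.
t = sp.symbols('t')
A = sp.Matrix(3, 3, lambda i, j: sp.Symbol(f'a{i}{j}'))
p = sp.Poly((t * sp.eye(3) - A).det(), t)
print(p.degree(), p.LC())  # 3 1
```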
Find minimum in a constrained two-variable inequation I would appreciate if somebody could help me with the following problem: Q: find minimum $$9a^2+9b^2+c^2$$ where $a^2+b^2\leq 9, c=\sqrt{9-a^2}\sqrt{9-b^2}-2ab$
Maybe this comes to your rescue. Consider $b \ge a \ge 0$. Expanding, $(\sqrt{9-a^2}\sqrt{9-b^2}-2ab)^2=(9-a^2)(9-b^2)+4a^2b^2-4ab \sqrt{(9-a^2)(9-b^2)}$. This attains its minimum when $4ab \sqrt{(9-a^2)(9-b^2)}$ is maximal. Applying AM-GM: $\dfrac{9-a^2+9-b^2}{2} \ge \sqrt{(9-a^2)(9-b^2)} \implies 9- \dfrac{9}{2} \ge \sqrt{(9-a^2)(9-b^2)}$ and $\dfrac{a^2+b^2}{2} \ge ab \implies 18 \ge 4ab$
$f^{-1}(U)$ is regular open set in $X$ for regular open set $U$ in $Y$, whenever $f$ is continuous. Let $f$ be a continuous function from space $X$ to space $Y$. If $U$ is a regular open set in $Y$, is it true that $f^{-1}(U)$ is a regular open set in $X$?
Not necessarily. Consider the absolute value function $x \mapsto | x |$ and the inverse image of $(0,1)$: it is $(-1,0)\cup(0,1)$, which is not regular open, since the interior of its closure is $(-1,1)$.
Quicksort analysis problem This is a problem from a probability textbook, not a CS one, if you are curious. Since I'm too lazy to retype the $\LaTeX$ I will post an ugly stitched screenshot: [screenshot of the multi-part problem (parts a-d) not reproduced here] This seems ridiculously hard to approach, and it doesn't help that all the difficult problems have no solutions in the textbook (the uselessly easy ones do :P ). How would I attack it?
The question is already broken into pieces in order to help you out. a) This is the law of total expectation, using the fact that the pivot is chosen randomly. b) Once we have a pivot, we need to split the remaining $n-1$ numbers into $2$ groups (one comparison each), and then solve the two sub-problems; one of of size $i-1$ and one of size $n-i$, respectively. The recursion comes from plugging in the result from part b to the formula from part a. c) You can derive this from the recursion in part b. d) Use the recursion from part c to work out what $C_{n+1}$ should be, using the fact that the harmonic sum $\sum_{i=1}^{n+1} \frac{1}{i}$ is approximately $\log (n+1)$ for large $n$.
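To see where the recursion lands numerically, here is a small Monte Carlo sketch (my own addition; it uses the convention from part b that partitioning $n$ items costs $n-1$ comparisons, under which the known closed form for the expected count is $2(n+1)H_n-4n$):

```python
import random

# Average comparison count of randomized quicksort vs. 2(n+1)H_n - 4n.
def comparisons(xs):
    if len(xs) <= 1:
        return 0
    pivot = random.choice(xs)
    less = [x for x in xs if x < pivot]
    more = [x for x in xs if x > pivot]
    return len(xs) - 1 + comparisons(less) + comparisons(more)

n, trials = 200, 300
avg = sum(comparisons(list(range(n))) for _ in range(trials)) / trials
H = sum(1 / k for k in range(1, n + 1))
print(avg, 2 * (n + 1) * H - 4 * n)  # the two values should be close
```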
How can I prove that $xy\leq x^2+y^2$? How can I prove that $xy\leq x^2+y^2$ for all $x,y\in\mathbb{R}$ ?
$$x^2+y^2-xy=\frac{(2x-y)^2+3y^2}4=\frac{(2x-y)^2+(\sqrt3y)^2}4$$ Now, the square of any real number is $\ge0$. So $(2x-y)^2+(\sqrt3y)^2\ge0$, with equality iff each term is $0$. Hence $x^2+y^2-xy\ge 0$, i.e. $xy\le x^2+y^2$ for all $x,y\in\mathbb{R}$.
Linear Recurrence Relations I'm having trouble understanding the process of solving simple linear recurrence relation problems. The problem in the book is this: $$ 0=a_{n+1}-1.5a_n,\ n \ge 0 $$ What is the general process, and purpose, of solving this? Unfortunately there is a very large language barrier between my professor and myself, which is quite a problem.
The general solution to the equation $$a_{n+1} = k a_n$$ is $$a_n = B \cdot k^n$$ for some constant $B$, which is determined by an initial condition. In your problem $k=1.5$, so $a_n = a_0\,(1.5)^n$. That is the point of "solving" a recurrence: a closed form lets you compute any term directly, without iterating the relation.