Inverse bit in Chinese Remainder Theorem I need to solve the system of equations: $$x \equiv 13 \mod 11$$ $$3x \equiv 12 \mod 10$$ $$2x \equiv 10 \mod 6.$$ So I have reduced this to $$x \equiv 2 \mod 11$$ $$x \equiv 4 \mod 10$$ $$x \equiv 2 \mod 3$$ so now I can use CRT. So to do that, I have done $$x \equiv \{ 2 \times (30^{-1} \mod 11) \times 30 + 4 \times (33^{-1} \mod 10) \times 33 + 2 \times (110^{-1} \mod 3) \times 110 \} \mod 330$$ $$= \{ 2 (8^{-1} \mod 11) \cdot 30 + 4(3^{-1} \mod 10)\cdot33 + 2(2^{-1} \mod 3) \cdot 110 \} \mod 330$$ but now I'm stuck on what to do. What do the inverse bits mean? If I knew that I could probably simplify the rest myself.
Consider $3 \mod 5$. If I multiply this by $2$, I get $2 \cdot 3 \mod 5 \equiv 1 \mod 5$. Thus when I multiply by $2$, I get the multiplicative identity. This means that I might call $3^{-1} = 2 \mod 5$.
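If you want to check the final CRT answer numerically once the inverses make sense, a few lines of code suffice (a sketch in Python; `pow(a, -1, m)` computes a modular inverse in Python 3.8+):

```python
# Sanity check for the CRT computation above (Python 3.8+ for pow(a, -1, m)).
residues = [2, 4, 2]          # x mod 11, x mod 10, x mod 3
moduli   = [11, 10, 3]
N = 11 * 10 * 3               # 330

x = 0
for r, m in zip(residues, moduli):
    Ni = N // m               # 30, 33, 110 respectively
    x += r * pow(Ni, -1, m) * Ni
x %= N

print(x)                      # the unique solution modulo 330
for r, m in zip(residues, moduli):
    assert x % m == r
```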
Probability of getting exactly 2 heads in 3 coins tossed with order not important? I have been thinking of this problem for the past 3-4 hours; I have come up with this problem, it is not a homework exercise. Let's say I have 3 coins and I toss them. Here order is not important, so the possible sample space should be 0 H, 1 H, 2 HH, 3 HHH (H being heads), i.e. TTT, HTT, HHT, HHH, since P(T) = P(H) = 1/2; here we have fair coins only. Since each and every outcome is equally likely, the answer should be 1/4 (is this correct?), and if that is correct, all of the probabilities don't add up to one. Will I have to do some manipulation to make them add up to one, or am I doing something wrong? EDIT In my opinion, with order being not important, there should be only 4 possible outcomes. All of the answers have ignored that condition.
The sample space has size $2^3 = 8$ and consists of triples $$ \begin{array}{*{3}{c}} H&H&H \\ H&H&T \\ H&T&H \\ H&T&T \\ T&H&H \\ T&H&T \\ T&T&H \\ T&T&T \end{array} $$ The events $$ \begin{align} \{ 0 \text{ heads} \} &= \{TTT\}, \\ \{ 1 \text{ head} \} &= \{HTT, THT, TTH\}, \end{align} $$ and I'll let you figure out the other two. The probabilities are, for example, $$ P(\{ 1 \text{ head} \}) = \frac{3}{8}. $$ This is called a binomial distribution, and the sizes of the events "got $k$ heads out of $n$ coin flips" are called binomial coefficients.
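If it helps, the whole sample space is small enough to enumerate by machine (a sketch in Python using ordered triples, which are the equally likely outcomes):

```python
from itertools import product
from collections import Counter

# Enumerate all 2^3 = 8 equally likely ordered outcomes and count heads.
counts = Counter(flips.count('H') for flips in product('HT', repeat=3))
total = sum(counts.values())  # 8

for k in sorted(counts):
    print(f"P({k} heads) = {counts[k]}/{total}")
# P(2 heads) = 3/8, and the four probabilities sum to 1.
```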
Intuitive proofs that $\lim\limits_{n\to\infty}\left(1+\frac xn\right)^n=e^x$ At this link someone asked how to prove rigorously that $$ \lim_{n\to\infty}\left(1+\frac xn\right)^n = e^x. $$ What good intuitive arguments exist for this statement? Later edit: . . . where $e$ is defined as the base of an exponential function equal to its own derivative. I will post my own answer, but that shouldn't deter anyone else from posting one as well.
$$ e^x=\lim_{m\rightarrow \infty}\left(1+\frac{1}{m}\right)^{mx} $$ Let $mx=n$, so $m=\frac{n}{x}$ $$e^x=\lim_{n\rightarrow\infty}\left(1+\frac{x}{n}\right)^n$$
Radicals in a fraction: simplification I cannot for the life of me figure out how this fraction got simplified. Please tell me how the first fraction got simplified into the second one. I've provided initial fraction and its simplified answer: $$ -\frac{p \cdot (-1 /(2\sqrt{517-p}) )}{\sqrt{517-p}} = \frac{1}{2(517-p)} $$
$$ -\frac{p \cdot \frac{-1}{2\sqrt{517-p}}}{\sqrt{517-p}} \\\\$$ $$ -\frac{p \cdot -\frac{1}{2\sqrt{517-p}}}{\sqrt{517-p}} \\\\$$ $$ --\frac{p \cdot \frac{1}{2\sqrt{517-p}}}{\sqrt{517-p}} \\\\$$ $$ \frac{p \cdot \frac{1}{2\sqrt{517-p}}}{\sqrt{517-p}} \\\\$$ $$ \frac{\frac{p}{2\sqrt{517-p}}}{\sqrt{517-p}} \\\\$$ $$ \frac{p}{2\sqrt{517-p} \cdot \sqrt{517-p}} \\\\$$ $$ \frac{p}{2(517-p)} \\\\$$ Which step does not make sense to you?
Combinatorial Proof I have trouble coming up with combinatorial proofs. How would you justify this equality? $$ n\binom {n-1}{k-1} = k \binom nk $$ where $n$ is a positive integer and $k$ is an integer.
We have a group of $n$ people, and want to count the number of ways to choose a committee of $k$ people with Chair. For the left-hand side, we select the Chair first, and then $k-1$ from the remaining $n-1$ to join her. For the right-hand side, we choose $k$ people, and select one of them to be Chair.
Does the sequence converge? I am trying to prove whether the sequence $a_n=(\root n\of e-1)\cdot n$ is convergent. I know that the sequences $x_n=(1+1/n)^n$ and $y_n=(1+1/n)^{n+1}$ tend to the same limit, which is $e$. Can anyone prove whether the above sequence $a_n$ is convergent? And if so, find the limit. My trial was to write $a_n$ as $a_n=n(e^{1/n}-1)$ and taking $1/n=m$ so that $a_n=\frac{1}{m}(e^m-1)$ and taking the limit $\lim_{x\to 0^+}\frac{e^x-1}{x}$, but I don't know how to continue. Thanks to everyone who solves this for me.
Let $e^{1/n}-1 = x$. We then have $\dfrac1n = \log(1+x) \implies n = \dfrac1{\log(1+x)}$. Now as $n \to \infty$, we have $x \to 0$. Hence, $$\lim_{n \to \infty}n(e^{1/n}-1) = \lim_{x \to 0} \dfrac{x}{\log(1+x)} = 1$$
Does there exist a positive integer $n$ such that it will be twice of $n$ when its digits are reversed? Does there exist a positive integer $n$ such that it will be twice of $n$ when its digits are reversed? We define $f(n)=m$ where the digits of $m$ and $n$ are reversed, such as $f(12)=21, f(122)=221, f(10)=01=1$, so we cannot say $f(f(n))=n$, but $f(f(n))=n/10^k$. So we need to find a solution to $f(n)=2n$. If $f(n)=2n$ and the first digit of $n$ is 2, then the last digit of $n$ is 1 or 6, and so on. So the first digit of $n$ is even. There are some solutions to the equation $f(n)=\frac{3}{2}n$, such as $n=4356, 43956$, but there is no solution to $f(n)=2n$ when $n<10^7$. Edit: Since Alon Amit has proved that $f(n)=2n$ has no positive solution, I wonder whether $f(n)=\frac{3}{2}n$ has only finitely many solutions. Any suggestion is appreciated, thanks in advance!
There is no such integer $n$. Suppose there is, and let $b = n \bmod 10$ be its units digit (in decimal notation) and $a$ its leading digit, so $a 10^k \leq n < (a+1)10^k$ for some $k$ and $1 \leq a < 10$. Since $f(n) = 2n$ is larger than $n$, and $f(n)$ has leading digit $b$ and at most as many digits as $n$, we must have $b > a$. At the same time, $2b \equiv a \bmod 10$ because $(2b \bmod 10)$ is the units digit of $2n$ and $a$ is the units digit of $f(n)$. This means that $a$ is even, as you pointed out. * $a$ cannot be $0$, by definition. * If $a=2$, $b$ must be $1$ (impossible since $b>a$) or $6$. But the leading digit of $2n$ can only be $4$ or $5$, since $4\times 10^k \leq 2n < 6\times 10^k$ and the right inequality is strict (in plain English, when you double a number starting with $2$, the result must start with $4$ or $5$). * If $a=4$, by the same reasoning $b$ must be $7$, which again fails to be a possible first digit for twice a number whose leading digit is $4$. * If $a=6$, we have $b=8$. Impossible since $2n$ must start with $1$. * If $a=8$, $b$ must be $9$. Impossible again, for the same reason. So no $a$ is possible, QED. Edit: The OP has further asked if $f(n) = \frac{3}{2}n$ has only finitely many solutions. The answer to that is No: Consider $n=43999...99956$ where the number of $9$'s is arbitrary. One can check that $f(n) = \frac{3}{2}n$ for those values of $n$.
Every injective function is an inclusion (up to a unique bijection) Let $X$ be a set and let $A$ be a subset of $X$. Let $i:A\longrightarrow X$ be the usual inclusion of $A$ in $X$. Then $i$ is an example of an injective function. I want to show that every injective function is of this kind. More precisely: for every set $Y$ and every injective function $f:X\longrightarrow Y$, there exist a subset $B$ of $Y$ and a bijection $g:X\longrightarrow B$ such that $f$ factors through $B$, i.e. $f=j\circ g$, where $j$ is the inclusion of $B$ in $Y$. Moreover, $g$ is unique with respect to this property. I can take $B:=f(X)$ and $g:=f$ (so that $g$ is the same as $f$ as a rule, but with a different codomain) and it is easily checked that everything works. Moreover $g$ is unique, since $j\circ g=f=j\circ g'$ implies $g=g'$ by injectivity of $j$. There is something that does not convince me at all in the uniqueness part. I mean, $g$ is unique if I fix $B=f(X)$, but what about the uniqueness of $B$? Is there a $B'$, different from $B$, and a $g'$ from $X$ to $B'$ bijective, such that $j'\circ g'=f$ holds?
No. If $j' \circ g' = f$ then $j'(g'(x)) = f(x)$ for all $x \in X$. But $j'$ is the inclusion of $B'$ in $Y$, so it acts by the identity on elements of $B'$, which the $g'(x)$ are, by definition of $g' : X \rightarrow B'$. Hence $g'(x) = f(x)$ for all $x \in X$, so $B' = f(X)$.
When are the sections of the structure sheaf just morphisms to affine space? Let $X$ be a scheme over a field $K$ and $f\in\mathscr O_X(U)$ for some (say, affine) open $U\subseteq X$. For a $K$-rational point $P$, I can denote by $f(P)$ the image of $f$ under the map $$\mathscr O_X(U) \to \mathscr O_{X,P} \twoheadrightarrow \mathscr O_{X,P}/\mathfrak m_P = K.$$ This yields a map $f:U(K)\to K$. Giving $U$ the induced subscheme structure, when does this uniquely define a morphism $f:U\to\mathbb A_K^1$ of schemes? It certainly works when $X$ is a variety (and $K$ algebraically closed), so there should be some "minimal" set of conditions for this interpretation to make sense. Thanks a lot in advance!
The scheme $\mathrm{Spec}(k[X])=\mathbf{A}_k^1$ is the universal locally ringed space with a morphism to $\mathrm{Spec}(k)$ and a global section (namely $X$). What I mean by this is that for any locally ringed space $X$ with a morphism to $\mathrm{Spec}(k)$ (equivalently, $\mathscr{O}_X(X)$ is a $k$-algebra) and any global section $s\in\mathscr{O}_X(X)$, there is a unique morphism of locally ringed $k$-spaces $X\rightarrow\mathbf{A}_k^1$ such that $X\mapsto s$ under the pullback map $f^*:k[X]=\mathscr{O}_{\mathbf{A}_k^1}(\mathbf{A}_k^1)\rightarrow\mathscr{O}_X(X)$. This is a special case of my answer here: on the adjointness of the global section functor and the Spec functor
An identity related to Legendre polynomials Let $m$ be a positive integer. I believe the following identity $$1+\sum_{k=1}^m (-1)^k\frac{P(k,m)}{(2k)!}=(-1)^m\frac{2^{2m}(m!)^2}{(2m)!}$$ where $P(k,m)=\prod_{i=0}^{k-1} (2m-2i)(2m+2i+1)$, is true, but I don't see a quick proof. Anyone?
Clearly $P(k,m) = (-1)^k 4^k \cdot (-m)_k \cdot \left(m+\frac{1}{2}\right)_k$, where $(a)_k$ stands for the Pochhammer symbol. Thus the sum on the left-hand side of your equality is $$ 1 + \sum_{k=1}^\infty (-1)^k \frac{P(k,m)}{(2k)!} = \sum_{k=0}^\infty 4^k \frac{(-m)_k \left(m+\frac{1}{2}\right)_k}{ (2k)!} = \sum_{k=0}^\infty \frac{(-m)_k \left(m+\frac{1}{2}\right)_k}{ \left(\frac{1}{2}\right)_k k!} = {}_2F_1\left( -m, m + \frac{1}{2}; \frac{1}{2}; 1 \right) $$ Per identity 07.23.03.0003.01 ${}_2F_1(-m,b;c;1) = \frac{(c-b)_m}{(c)_m}$: $$ {}_2F_1\left( -m, m + \frac{1}{2}; \frac{1}{2}; 1 \right) = \frac{(-m)_m}{\left(\frac{1}{2}\right)_m} = \frac{(-1)^m m!}{ \frac{(2m)!}{m! 2^{2m}} } = (-1)^m \frac{4^m}{\binom{2m}{m}} $$ The quoted identity follows as a solution of the contiguity relation for $f_m(z) = {}_2F_1(-m,b;c;z)$: $$ (m+1)(z-1) f_m(z) + (2+c+2m-z(b+m+1)) f_{m+1}(z) - (m+c+1) f_{m+2}(z) = 0 $$ Setting $z=1$ and assuming that $f_m(1)$ is finite, the recurrence relation drops the order and can be solved in terms of Pochhammer symbols.
Given one primitive root, how do you find all the others? For example: if $5$ is a primitive root of $p = 23$. Since $p$ is a prime there are $\phi(p - 1)$ primitive roots. Is this correct? If so, $\phi(p - 1) = \phi(22) = \phi(2) \phi(11) = 10$. So $23$ should have $10$ primitive roots? And, to find all the other primitive roots we need powers of $5$, say $k$, such that $\gcd(k, p - 1) = d> 1$. Again, please let me know if this is true or not. So, the possible powers of $5$ are: $1, 2, 11, 22$. But this only gives four other primitive roots. So I don't think I'm on the right path.
The possible powers of $5$ are all the $k$ such that $\gcd(k,p-1)=1$, so $k$ is in the set $\{1, 3, 5, 7, 9, 13, 15, 17, 19, 21\}$ and $5^k$ is in the set $\{5, 10, 20, 17, 11, 21, 19, 15, 7, 14\}$, which is exactly of length 10.
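A brute-force check of this (a sketch in Python; it recomputes the primitive roots of $23$ directly and compares them with the powers $5^k$, $\gcd(k,22)=1$):

```python
from math import gcd

p = 23

def order(g, p):
    """Multiplicative order of g modulo the prime p."""
    x, n = g % p, 1
    while x != 1:
        x = x * g % p
        n += 1
    return n

primitive_roots = sorted(g for g in range(1, p) if order(g, p) == p - 1)
from_powers = sorted(pow(5, k, p) for k in range(1, p - 1) if gcd(k, p - 1) == 1)

print(primitive_roots)                  # [5, 7, 10, 11, 14, 15, 17, 19, 20, 21]
print(from_powers == primitive_roots)   # True: exactly phi(22) = 10 of them
```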
Ordered groups - examples Let $G=BS(m,n)$ denote the Baumslag–Solitar groups defined by the presentation $\langle a,b: b^m a=a b^n\rangle$. Question: Find $m,n$ such that $G$ is an ordered group, i.e. $G$ is a group on which a partial order relation $\le $ is given such that for any elements $x,y,z \in G$, from $x \le y$ it follows that $xz \le yz$ and $zx \le zy$.
From Wikipedia: A group $G$ is a partially ordered group if and only if there exists a subset $H\subset G$ such that: i) $1 \in H$ ii) If $a ∈ H$ and $b ∈ H$ then $ab ∈ H$ iii) If $a ∈ H$ then $x^{-1}ax ∈ H$ for each $x\in G$ iv) If $a ∈ H$ and $a^{-1} ∈ H$ then $a=1$ For every $n,m$ can you find such a subset? Here's one thought: let $n=m=1$. Let $H=\{a^n:n\in\mathbb N\}$, where we include $a^0=1$. I believe this satisfies all the conditions and thus $G=BS(1,1)$ (which is the free abelian group on two generators) has a partial ordering.
To get an MDS code from a hyperoval in a projective plane: explain how we can get an MDS code of length $q+2$ and dimension $q-1$ from a hyperoval in a projective plane $PG(2,q)$ with $q$ a power of $2$. HINT: a hyperoval $Q$ is a set of $q+2$ points such that no three points in $Q$ are collinear. You are expected to get a $[q+2,q-1,4]$ code from this; take points to be the one-dimensional subspaces and blocks to be the lines (two-dimensional subspaces).
As an addition to the answer of Jyrki Lahtonen: The standard way to get projective coordinates of the points of a hyperoval over $\mathbb F_q$ is to take the vectors $[1 : t : t^2]$ with $t\in\mathbb F_q$ together with $[0 : 1 : 0]$ and $[0 : 0 : 1]$. Placing these vectors into the columns of a matrix, in the example $\mathbb F_4 = \{0,1,a,a^2\}$ (with $a^2 + a + 1 = 0$) a possible check matrix of a $[6,3,4]$ MDS code is $$ \begin{pmatrix} 1 & 0 & 0 & 1 & 1 & 1 \\ 0 & 1 & 0 & 1 & a & a^2 \\ 0 & 0 & 1 & 1 & a^2 & a \end{pmatrix} $$ This specific code is also called Hexacode.
Differentiably redundant functions. I am looking for a differentiably redundant function of order 6 from the following. (a) $e^{-x} + e^{-x/ 2} \cos({\sqrt{3x} \over 2})$ (b) $e^{-x} + \cos(x)$ (c) $e^{x/2}\sin({\sqrt{3x} \over 2})$ I know that (b) has order 4, but I cannot solve for (a) and (c). It would be a huge waste of time if I took the derivatives and calculate them, so there must be a simple way to solve this. According to the book, it is related to $1/2 \pm i\sqrt{3} /2$, but why is that?
"Differentiably redundant function of order $n$" is not a standard mathematical term: this is something that GRE Math authors made up for this particular problem. Define a function $f(x)$ to be differentiably redundant of order $n$ if the $n$th derivative $f^{(n)}(x)=f(x)$ but $f^{(k)}(x)\ne f(x)$ when $k<n$. Which of the following functions is differentiably redundant of order $6$? By the way, this is not a shining example of mathematical writing: "when $k<n$" should be "when $0<k<n$" and, more importantly, $\sqrt{3x}$ was meant to be $\sqrt{3}x$ in both (A) and (C). This looks like a major typo in the book. If you are familiar with complex numbers, the appearance of both $-1/2$ and $\sqrt{3}/2$ in the same formula is quite suggestive, especially since both exponential and trigonometric functions appear here. Euler's formula $e^{it}=\cos t+i\sin t$ should come to mind. Let $\zeta=-\frac12+i\frac{\sqrt{3}}{2}$: then $$e^{-x/2}\cos \frac{\sqrt{3}x}{2} = \operatorname{Re}e^{\zeta x},\qquad e^{-x/2}\sin \frac{\sqrt{3}x}{2} = \operatorname{Im}\, e^{\zeta x}$$ Differentiating $n$ times, you get the factor of $\zeta^n$ inside of $\operatorname{Re}$ and $\operatorname{Im}$. Then you should ask yourself: what is the smallest positive integer $n$ such that $\zeta^n=1$? Helpful article.
Solving for $f(n+1)$ when $f(k)$ is known for $k=0,1,...,n$ I posted earlier about polynomials but this is different type of problem I think. I seem to have an answer but I mistrust it.... A polynomial $f(x)$ where deg[$f(x)$]$\le{n}$ satisfies $f(k)=2^k$ for $k=0,1,...,n$. Find $f(n+1)$. So $f(k)=2^k \Rightarrow 2^{-k}f(k)-1=0.$ Thus, there exists a constant c such that $$2^{-k}f(k)-1=cx(x-1)(x-2)...(x-n)$$ Now, let $x=n+1$. Then $$2^{-(n+1)}f(n+1)-1=c(n+1)(n+1-1)(n+1-2)...(n+1-n)=c(n+1)!$$ Therefore, $f(n+1)=2^{n+1}[c(n+1)!+1]$. Plugging in known values of k we obtain $c=0$ which just shows that $f(n+1)=2^{n+1}$. Is this right? It seems right, but I've seen another problem of the sort and it plays out differently.
You are right to mistrust your answer: it's easy to check that it's incorrect in the case $n=1$ (and, for that matter, $n=0$). The mistake you made is in concluding that $2^{-k}f(k) - 1$ must have a certain form; that expression is not a polynomial, so you can't use results about polynomials to categorize it. In fact, you're not off by that much: the answer is that $f(n+1) = 2^{n+1}-1$. (This gives the right value for $n=0$ and $n=1$, and you can check without too much difficulty that it's also right for $n=2$; that's probably enough data to suggest the pattern.) To me, the nicest way to solve this problem is to prove it by induction on $n$. For clarity, let $f_n$ denote the polynomial described in the problem for a particular $n$. Then consider $$f_n(x+1) - f_n(x).$$ Given what we know about $f_n$, we can show that this new expression is a polynomial of degree at most $n-1$, and its values at $x=0,1,\dots,n-1$ are $2^0,2^1,\dots,2^{n-1}$. In other words, $f_n(x+1) - f_n(x)$ must equal $f_{n-1}(x)$! And now the induction hypothesis should give you what you want almost immediately.
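If you want to see the pattern $f(n+1)=2^{n+1}-1$ emerge numerically, you can interpolate through the points $(k,2^k)$, $k=0,\dots,n$, and evaluate at $n+1$ (a sketch using exact rational Lagrange interpolation):

```python
from fractions import Fraction

def lagrange_eval(points, x):
    """Evaluate the unique interpolating polynomial through `points` at `x`."""
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if j != i:
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total

for n in range(1, 8):
    pts = [(k, 2 ** k) for k in range(n + 1)]
    print(n, lagrange_eval(pts, n + 1), 2 ** (n + 1) - 1)
# the last two columns agree: f(n+1) = 2^(n+1) - 1
```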
Disjunction: Why did the inclusive "OR" become the convention? In How to Prove it by Velleman, for defining disjunctions, he gives the difference between exclusive "OR", and inclusive "OR." Given two events $P$ and $Q$, the disjunction is defined for them as: * *Inclusive: One of $P$ and $Q$, or both. *Exclusive: One of $P$ and $Q$, but not both. Quoting from his book: "In mathematics, or always means inclusive or, unless specified otherwise, ..." (Velleman, 2006, p.15) My question is - Why did the inclusive definition of disjunction become the convention? Was it coincidental, or is there some aspect to the inclusive definition that makes it more convenient?
George Boole, when he originally developed his Laws of Thought to apply mathematics to reasoning, used the exclusive or. However, the system was quite cumbersome in comparison to modern methods. As others took up his ideas, they found that the inclusive or was far better behaved and easier to work with. For instance, suppose we want to say "It is not the case that P or Q but not both". We get a "Either it is not the case that P and not the case that Q, or it is the case that both P and Q". Contrast this with "It is not the case that P or Q or both". To negate this, we have "It is not the case that P and it is not the case that Q".
Summation of n-squared, cubed, etc. How do you in general derive a formula for summation of n-squared, n-cubed, etc...? Clear explanation with reference would be great.
In general, if you want to derive a formula for the sum of $1^k+\cdots+n^k,$ let $$f(x)=\sum_{j=0}^{k+1}a_jx^j$$ and solve $$f(x)-f(x-1)=x^k$$ for the coefficients $a_j$. The polynomial that results will have the property that $f(n)=1^k+\cdots+ n^k$ for all positive integers $n$.
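This method of undetermined coefficients is easy to automate (a sketch using SymPy; here $k=3$, and $a_0$ is set to $0$ so that $f(0)=0$, which makes $f(n)$ the desired sum):

```python
import sympy as sp

x = sp.symbols('x')
k = 3                                                # derive the formula for 1^3 + ... + n^3

a = sp.symbols(f'a0:{k + 2}')                        # unknown coefficients a_0 .. a_{k+1}
f = sum(a[j] * x**j for j in range(k + 2))

# Require f(x) - f(x-1) = x^k identically: all coefficients of the difference vanish.
eqs = sp.Poly(sp.expand(f - f.subs(x, x - 1) - x**k), x).all_coeffs()
sol = sp.solve(eqs, a[1:], dict=True)[0]             # a_0 is not constrained

formula = sp.factor(f.subs(sol).subs(a[0], 0))
print(formula)                                       # x**2*(x + 1)**2/4
```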
Understanding directional derivative and the gradient I'm having trouble understanding the proof of directional derivative and the gradient. Could someone give me a easy-to-read proof of the directional derivative and explain why does the gradient point to the direction of maximum increase? Thank you very much for any help! =)
As for the gradient pointing in the direction of maximum increase, recall that the directional derivative is given by the dot product $$\nabla f(x)\cdot\textbf{u},$$ where $$\nabla f(x)$$ is the gradient at the point $\textbf{x}$ and $\textbf{u}$ is the unit vector in the direction we are considering. Recall also that this directional derivative is the rate of increase/decrease of the function in the direction of $\textbf{u}$ at the point $\textbf{x}$. The dot product has two equivalent definitions: $$\textbf{u}\cdot\textbf{v}=u_1v_1+u_2v_2+...+u_nv_n$$ or $$\textbf{u}\cdot\textbf{v}=||\textbf{u}||\,||\textbf{v}||\cos(\theta),$$ where $\theta$ is the angle between the vectors. Using this second definition, the fact that $\textbf{u}$ is a unit vector, and knowledge of trigonometry, we see that the directional derivative $D_{\textbf{u}}$ at $\textbf{x}$ is bounded: $$-||\nabla f(x)||=\cos(\pi)||\nabla f(x)||\leq \cos(\theta)||\nabla f(x)||$$ $$\leq \cos(0)||\nabla f(x)||=||\nabla f(x)||$$ Since $0\leq||\nabla f(x)||$, we see that the maximum rate of change must occur when $\theta=0$, that is, in the direction of the gradient. As for the directional derivative, consider the direction of the vector $(2,1)$. We want to know how the value of the function changes as we move in this direction at a point $\textbf{x}$. Well, for infinitesimally small units, we are moving $2$ units in the $x$ direction and $1$ unit in the $y$ direction, so the change in the function is $2$ times the change in $f$ that we get as we move $1$ unit in the $x$ direction plus $1$ times the change in $f$ that we get as we move $1$ unit in the $y$ direction. Finally, we divide by the norm of $(2,1)$ so that we have the change for $1$ unit in this direction.
Study the equivalence of these norms I have two Hilbert spaces $H_1$ and $H_2$ and I consider a set of functions $f$ which decompose as $f=g+h$ with $g\in H_1$ and $h\in H_2$. I know that this decomposition is unique. So I define the following norm $$\Vert f\Vert=(\Vert g\Vert_{H_1}^2+\Vert h\Vert_{H_2}^2)^{\frac{1}{2}}$$ Is this equivalent to $$|||f|||=\Vert g\Vert_{H_1}+\Vert h\Vert_{H_2}$$? I've followed this reasoning: from sublinearity of square root I have $$\Vert f\Vert\leq|||f|||$$; for the other direction I observe that $$(\Vert g\Vert_{H_1}+\Vert h\Vert_{H_2})^2\leq 2(\Vert g\Vert_{H_1}^2+\Vert h\Vert_{H_2}^2)=2\Vert f\Vert^2$$ And so $$\Vert f\Vert\leq|||f|||\leq\sqrt{2}\Vert f\Vert$$ Is it right?
Your calculation should be right - it is just the equivalence of the $1$-norm and the $2$-norm on $\mathbb R^2$.
Computing the homology groups of a given surface Let $\triangle^2=\{(x,y)\in\mathbb{R}^2\mid 0\le x,y\wedge x+y\le1\}$ (that is, a right triangle). Define the equivalence relation $(t,0) \sim (0,t)\sim (t,1-t)$. I want to compute the homology groups of $\triangle^2/\sim$. An attempt at doing so was to define $U=\{(x,y)\in\mathbb{R}^2\mid 0< x,y\wedge x+y<1\}$ and $V=\triangle^2 \setminus \{(1/3,1/3)\}$. It is clear that $U\cup V = \triangle^2$ and so Mayer-Vietoris could be useful here. Noting the following facts: * $V$ deformation retracts onto the boundary of the triangle, which, since all edges are identified, is homeomorphic to $S^1$, and so $H_2(V)=0$, $H_1(V)=\mathbb{Z}$ and $\tilde{H}_0(V)=\mathbb{Z}$. * $U$ is contractible and so its positive-dimensional homology groups vanish, and its zero-dimensional homology group is $\mathbb{Z}$ * $U\cap V$ is again homotopy equivalent to $S^1$ At this point it's really easy to see (using M.V.) that $H_n(\triangle^2 / \sim ) = 0$ for $n>2$ and also for $n=0$. For lower values of $n$, taking the M.V. sequence for reduced homologies and plugging in the values I already know, I get $0\to H_2(\triangle^2 / \sim ) \to \mathbb{Z} \to \mathbb{Z}\to H_1(\triangle^2 / \sim )\to 0$. This is the point where I don't know what to do next, and any help would be appreciated.
Just knowing that sequence is exact is not enough since, for example, $H_2(\Delta^2/\sim) = 0 = H_1(\Delta^2/\sim)$ and $H_2(\Delta^2/\sim) = 0, H_1(\Delta^2/\sim) = \mathbb Z/n\mathbb Z$ both work. So you need to look at the actual map $H_1(U \cap V) \to H_1(U) \oplus H_1(V) \simeq 0 \oplus H_1(V)$, which is given by the two inclusions. But $U\cap V$ is a deformation retract of $V$ so that the inclusion $U \cap V \to V$ induces an isomorphism on homology. Thus the map $H_1(U \cap V) \to H_1(U) \oplus H_1(V)$ is an isomorphism so that $H_2(\Delta^2/\sim) = 0 = H_1(\Delta^2/\sim)$.
Contour integration to compute $\int_0^\infty \frac{\sin ax}{e^{2\pi x}-1}\,\mathrm dx$ How to show: $$\int_{0}^{\infty}\frac{\sin ax}{e^{2\pi x}-1}dx=\frac{1}{4}\frac{e^{a}+1}{e^{a}-1}-\frac{1}{2a}$$ integrating $\dfrac{e^{aiz}}{e^{2\pi z}-1}$ round a contour formed by the rectangle whose corners are $0 ,R ,R+i,i$ (the rectangle being indented at $0$ and $i$) and making $R \rightarrow \infty$.
For this particular contour, the integral $$\oint_C dz \frac{e^{i a z}}{e^{2 \pi z}-1}$$ is split into $6$ segments: $$\int_{\epsilon}^R dx \frac{e^{i a x}}{e^{2 \pi x}-1} + i \int_{\epsilon}^{1-\epsilon} dy \frac{e^{i a R} e^{-a y}}{e^{2 \pi R} e^{i 2 \pi y}-1} + \int_R^{\epsilon} dx \frac{e^{-a} e^{i a x}}{e^{2 \pi x}-1} \\+ i \int_{1-\epsilon}^{\epsilon} dy \frac{ e^{-a y}}{e^{i 2 \pi y}-1} + i \epsilon \int_{\pi/2}^0 d\phi \:e^{i \phi} \frac{e^{i a \epsilon e^{i \phi}}}{e^{2 \pi \epsilon e^{i \phi}}-1}+ i \epsilon \int_{2\pi}^{3 \pi/2} d\phi\: e^{-a} e^{i \phi} \frac{e^{i a \epsilon e^{i \phi}}}{e^{2 \pi \epsilon e^{i \phi}}-1}$$ The first integral is on the real axis, away from the indent at the origin. The second integral is along the right vertical segment. The third is on the horizontal upper segment. The fourth is on the left vertical segment. The fifth is around the lower indent (about the origin), and the sixth is around the upper indent, about $z=i$. We will be interested in the limits as $R \rightarrow \infty$ and $\epsilon \rightarrow 0$. The first and third integrals combine to form, in this limit, $$(1-e^{-a}) \int_0^{\infty} dx \frac{e^{i a x}}{e^{2 \pi x}-1}$$ The fifth and sixth integrals combine to form, as $\epsilon \rightarrow 0$: $$\frac{i \epsilon}{2 \pi \epsilon} \left ( -\frac{\pi}{2}\right) + e^{-a} \frac{i \epsilon}{2 \pi \epsilon} \left ( -\frac{\pi}{2}\right) = -\frac{i}{4} (1+e^{-a}) $$ The second integral vanishes as $R \rightarrow \infty$. The fourth integral, however, does not, and must be evaluated, at least partially. We rewrite it, as $\epsilon \rightarrow 0$: $$-\frac{1}{2} \int_0^1 dy \frac{e^{-a y} e^{- i \pi y}}{\sin{\pi y}} = -\frac{1}{2} PV\int_0^1 dy \: e^{-a y} \cot{\pi y} + \frac{i}{2} \frac{1-e^{-a}}{a}$$ By Cauchy's theorem, the contour integral is zero because there are no poles within the contour. Thus, $$(1-e^{-a}) \int_0^{\infty} dx \frac{e^{i a x}}{e^{2 \pi x}-1} -\frac{i}{4} (1+e^{-a}) -\frac{1}{2} PV \int_0^1 dy \: e^{-a y} \cot{\pi y} + \frac{i}{2} \frac{1-e^{-a}}{a}=0$$ where $PV$ represents the Cauchy principal value of that integral. Now take the imaginary part of the above equation and do the algebra - the stated result follows.
Intermediate value-like theorem for $\mathbb{C}$? Is there an intermediate value like theorem for $\mathbb{C}$? I know $\mathbb {C}$ isn't ordered, but if we have a function $f:\mathbb{C}\to\mathbb{C}$ that's continuous, what can we conclude about it? Also, if we have a function, $g:\mathbb{C}\to\mathbb{R}$ ,continuous with $g(x)>0>g(y)$ does that imply a z "between" them satisfying $g(z)=0$. Edit: I apologize if the question is vague and confusing. I really want to ask for which definition of between, (for example maybe the part of the plane dividing the points), and with relaxed definition on intermediateness for the first part, can we prove any such results?
Consider $f(x) = e^{\pi i x }$. We know that $f(0) = 1, f(1) = -1$. But for no real value $r$ between 0 and 1 is $f(r) = 0$, or even real valued. Think about how this is a 'counter-example', and what aspect of $\mathbb{C}$ did we use. It could be useful to trace out this graph.
Geodesics of conformal metrics in complex domains Let $U$ be a non-empty domain in the complex plane $\mathbb C$. Question: what is the differential equation of the geodesics of the metric $$m=\varphi(x,y) (dx^2+dy^2)$$ where $\varphi$ is a positive function on $U$ and where $x,y$ are the usual euclidian coordinates on $\mathbb C\simeq \mathbb R^2$ Certainly, an answer can be found in many classical textbooks. But I'm interested in the (simpler) case when $\varphi=\lvert f(z)\lvert^2$ where $f$ is a holomorphic function of $z=x+iy$. And I didn't find in the classical literature a simple differential equation characterizing geodesics for metric of the form $$m= \lvert f(z) dz\lvert^2.$$ Does anyone know the answer or a good reference? Many thanks in advance.
You should be worried about the zeroes of $f$; the geodesic equation degenerates at the points where the metric vanishes. At the points where $f\ne 0$ the local structure of geodesics is indeed simple. Let $F$ be an antiderivative of $f$, that is $F'=f$. The metric $|F'(z)|^2\,|dz|^2$ is exactly the pullback of the Euclidean metric under the map $F$. Therefore, geodesics are preimages of straight lines under $F$; note that $F$ is locally a diffeomorphism because $F'\ne 0$. Stated another way, geodesics are curves of the form $\mathrm{Re}\,(e^{i\theta}F)=c$ for some $\theta,c\in\mathbb R$. If you want a differential equation for parametrized geodesics $t\mapsto z(t)$, it is $\dfrac{d}{dt}\mathrm{Re}\,(e^{i\theta}F(z(t)))=0$. Example: two orthogonal families of geodesics for $f(z)=z^2$, that is, the metric $|z|^4|dz|^2$
Proving that $4 \mid m - n$ is an equivalence relation on $\mathbb{Z}$ I have been able to figure out the distinct equivalence classes. Now I am having difficulties proving the relation IS an equivalence relation. $F$ is the relation defined on $\Bbb Z$ as follows: For all $(m, n) \in \Bbb Z^2,\ m F n \iff 4 | (m-n)$ equivalence classes: $\{-8,-4,0,4,8\}, \{-7,-3,1,5,9\} \{-6,-2,2,6,10\}$ and $\{-5, -1, 3, 7, 11\}$
1) reflexivity: $mFm $ since $4|0$ 2) symmetry: $mFn \Rightarrow nFm$ since $4|\pm (m-n)$ 3) transitivity: if $4|(m-n)$ and $4|(n-r)$ then $4|\big((m-n)+(n-r)\big)$ or $4|(m-r)$
Factorial primes Factorial primes are primes of the form $n!\pm1$. (In this application I'm interested specifically in $n!+1$ but any answer is likely to apply to both forms.) It seems hard to prove that there are infinitely many primes of this form, though Caldwell & Gallot were courageous enough to conjecture that there are infinitely many and to give a conjectured density ($e^\gamma\log x$ below $x$). I'm looking at the opposite direction: how many composites are there of the form $n!\pm1$? It seems 'obvious' that the fraction of numbers of this form which are composite is 1, but I cannot even prove that there are infinitely many composites of this form. Has this been proved? (Perhaps there's even a proof easy enough to relate here?) Or on the other hand, is it known to be open?
Wilson's Theorem shows there are infinitely many composites. For if $p$ is prime, then $(p-1)!+1$ is divisible by $p$, and apart from the cases $p=2$ and $p=3$, the number $(p-1)!+1$ is greater than $p$. There are related ways to produce a composite. For example, let $p$ be a prime of the form $4k+3$. Then one of $\left(\frac{p-1}{2}\right)!\pm 1$ is divisible by $p$.
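A few lines of code illustrate Wilson's theorem producing composites of this form (a sketch in Python):

```python
from math import factorial

# Wilson's theorem: p | (p-1)! + 1, so for every prime p >= 5 the number
# (p-1)! + 1 is larger than p and divisible by p, hence composite.
for p in [5, 7, 11, 13, 17, 19, 23]:
    n = factorial(p - 1) + 1
    print(p, n % p == 0, n > p)   # True, True  =>  (p-1)! + 1 is composite
```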
Cylindrical coordinates where $z$ axis is not axis of symmetry. I'm a little bit uncertain of how to set up the limits of integration when the axis of symmetry of the region is not centered at $z$ (this is for cylindrical coordinates). The region is bounded by $(x-1)^2+y^2=1$ and $x^2+y^2+z^2=4$. This is my attempt: Let $x=r\cos\theta$ and $y=r\sin\theta$. We are bounded on $z$ by $\pm\sqrt{4-r^2}$. We take $0\leq r\leq 2\cos\theta$ and $-\pi/2\leq \theta \leq \pi/2$ to account for the fact that the cylinder is not centered with the $z$ axis (shift on the $x$ axis). The volume is given by $$ V = \int\limits_{-\pi/2}^{\pi/2} \int\limits_0^{2\cos\theta}\int\limits_{-\sqrt{4-r^2}}^{\sqrt{4-r^2}} dz\,(r\,dr)\,d\theta \ . $$
I prefer to visualize the cross sections in $z$. Draw a picture of various cross-sections in $z$: there is an intersection region for each $z$ You have to find the points where the cross-sections intersect: $$4-z^2=4 \cos^2{\theta} \implies \sin{\theta} = \pm \frac{z}{2} \ .$$ For $\theta \in [-\arcsin{(z/2)},\arcsin{(z/2)}]$, the cross-sectional area integral at $z$ looks like $$\int_{-\arcsin{(z/2)}}^{\arcsin{(z/2)}} d\theta \: \int_0^\sqrt{4-z^2} dr \, r = (4 -z^2)\arcsin\left(\frac{z}{2}\right) \ .$$ For $\theta \in [-\pi/2,-\arcsin{(z/2)}] \cup [\arcsin{(z/2)},\pi/2]$, this area is $$2 \int_{\arcsin{(z/2)}}^{\pi/2} d\theta \: \int_0^{2 \cos{\theta}} dr \, r = \pi - 2 \arcsin\left(\frac{z}{2}\right) -z \sqrt{1-\frac{z^2}{4}} \ . $$ To get the volume, add the above two expressions and integrate over $z \in [-2,2]$; the expression looks odd, but we are really dealing with $|z|$ : $$V = 2 \int_{0}^2 \: \left(\pi + (2-z^2) \arcsin\left( \frac{z}{2}\right) -z \, \sqrt{1-\frac{z^2}{4}}\right)\,dz = \frac{16 \pi}{3} - \frac{64}{9} \ .$$
How many ways are there to add the numbers in set $k$ to equal $n$? How many ways are there to add the numbers in set $k$ to equal $n$? For a specific example, consider the following: I have infinite pennies, nickels, dimes, quarters, and loonies (equivalent to 0.01, 0.05, 0.1, 0.25, and 1, for those who are not Canadian). How many unique ways are there to add these numbers to get a loonie? Some combinations would include: $1 = 1$ $1 = 0.25 + 0.25 + 0.25 + 0.25$ $1 = 0.25 + 0.25 + 0.25 + 0.1 + 0.1 + 0.05$ And so on. Several people have suggested the coin problem, though I have yet to find a source that explains this well enough for me to understand it.
You will have luck googling for this with the phrase "coin problem". I have a few links at this solution which will lead to the general method. There is a Project Euler problem (or maybe several of them) which ask you to compute absurdly large such numbers of ways, but the programs (I found out) can be just a handful of lines.
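For the Canadian-coin example in the question, the standard dynamic-programming count really is only a handful of lines (a sketch; amounts are in cents, and the loonie by itself counts as one of the ways):

```python
# Count the ways to make 100 cents from pennies, nickels, dimes, quarters and loonies.
coins = [1, 5, 10, 25, 100]
target = 100

ways = [1] + [0] * target          # ways[v] = number of multisets of coins summing to v
for c in coins:
    for v in range(c, target + 1):
        ways[v] += ways[v - c]

print(ways[target])
```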
Unification and substitutions in first-order logic I am currently learning about first-order logic and various resolution techniques. When applying a substitution $\theta$ to two sentences $\alpha$ and $\beta$, for unification purposes, aside from SUBST($\theta$, $\alpha$) = SUBST($\theta$, $\beta$), does the resulting substitution have to be unique? What I mean is, when unifying, when we check if SUBST($\theta$, $\alpha$) = SUBST($\theta$, $\beta$), is it OK if SUBST($\theta$, $\beta$) = $\beta$ ? Thank you.
The most general unifier $\theta$ is unique in the sense that given any other unifier $\phi$, $\alpha \phi$ can be got from $\alpha \theta$ by a subtitution, and the same for $\beta$.
Combinatorics Example Problem Ten weight lifters are competing in a team weightlifting contest. Of the lifters, 3 are from the United States, 4 are from Russia, 2 are from China, and 1 is from Canada. Part 1 If the scoring takes account of the countries that the lifters represent, but not their individual identities, how many different outcomes are possible from the point of view of scores? This part I understand: 10! / [ 3! 4! 2! ] = 12,600 Part 2 (don't understand this part) How many different outcomes correspond to results in which the United States has 1 competitor in the top three and 2 in the bottom three? This part I'm confused. Here you have 10 slots. The first three slots must be of some order: US, US, or [Russia, China, Canada]. The last three slots must be of some order US, [Russia, China, Canada], [Russia, China, Canada]. I thought the answer would be this: $\binom{3}{2} \binom{1}{1} * \frac{7!}{4!\ 3!\ 2!} $ My reasoning: In the first 3 slots, you have to pick 2/3 US people. Then you only have one remaining. You have 7! ways to organize the rest but have to take out repeats so you divide by factorials of repeats which is 4,3, and 2. But my middle term is wrong.... My research shows me answers of 2 forms but I can't understand it: Method 1: $\binom{3}{1} \binom{3}{2} * \frac{7!}{4!\ 3!\ 2!}$ This case, I don't understand why there's a 3 in the second binomial, $\binom{3}{2}$. We already selected ONE US person so shouldn't it be $\binom{2}{2}$? Method 2: $ \binom{7}{2} \binom{3}{1} \binom{5}{4} \binom{3}{2} \binom{1}{1} $ ? Sorry for the long post. Thanks again.
For the second, you pick the slots (not the people-we said all the people from one country were interchangeable) for the two US people in the bottom ${3 \choose 2}$ ways, but then have to pick which slot the US person in the top is in, which adds a factor ${3 \choose 1}$. Then of the seven left, you just have $\frac {7!}{4!2!}$ as there are no Americans among them. So Method 1 is off by the $3!$ in the denominator. I don't understand the terms in Method 2. Added: One way to look at the first part, which I think is the way you did, is to say there are $10!$ orders of the people, but if the Americans are interchangeable you divide by $3!$, for the Russians you divide by $4!$ and for the Chinese you divide by $2!$. Another way would be to say we pick three slots for Americans in ${10 \choose 3}$ ways, then pick slots for the Russians in ${7 \choose 4}$ ways, then pick slots for the Chinese in ${3 \choose 2}$ ways, giving ${10 \choose 3}{7 \choose 4}{3 \choose 2}=120\cdot 35 \cdot 3=12600$. Of course the answer is the same, but the approach is better suited to the second part. Now if you think of choosing slots for nationalities you get what I did before.
p-adic modular form example In Serre's paper on $p$-adic modular forms, he gives the example (in the case of $p = 2,3,5$) of $\frac{1}{Q}$ and $\frac{1}{j}$ as $p$-adic modular forms, where $Q = E_4 = 1 + 540\sum \sigma_{3}(n)q^n$ is the normalized Eisenstein series of weight 4 and $j = \frac{\Delta}{Q^3}$ is the $j$-invariant. To see this, Serre remarks that the fact that $\frac{1}{Q}$ is a $p$-adic modular form follows from the observations that $\displaystyle\frac{1}{Q} = \lim_{m\to\infty} Q^{p^m} - 1$ and that $Q = 1 \mod p$. He remarks that similarly $\frac{1}{j}$ can similarly be shown to be $p$-adic modular weight 0 and that the space of weight 0 modular forms is precisely $\mathbb{Q}_p\langle \frac{1}{j} \rangle$. Forgive my ignorance, but could someone explain these facts in detail? Perhaps I am missing something obvious, but I don't understand why the $\displaystyle \lim_{m\to\infty} Q^{p^m} - 1 = \frac{1}{Q}$ is true and how to obtain the other statements he makes.
I'm not sure this can quite be correct. The problem is that $Q^{p^m}$ is going to tend to 1, so $Q^{p^m} - 1$ tends to 0, not $1/Q$. I think you may have misread the paper and what was meant was $1/Q = \lim_{m \to \infty} Q^{(p^m - 1)}$; if you're reading Serre's paper in the Antwerp volumes, then this is an easy mistake to make given that it's all typewritten! It shouldn't be too hard to convince yourself by induction that $Q^{p^m} = 1 \bmod p^{m+1}$. As for $1/j$, we have $1/j = \Delta / Q^3$ (there is a typo in your post, you write $j = \Delta / Q^3$ which is not quite right) so once you know that $1/Q$ is $p$-adic modular it follows immediately that $1/j$ is so.
Let $f$ be a twice differentiable function on $\mathbb{R}$. Given that $f''(x)>0$ for all $x\in \mathbb{R}$.Then which is true? Let $f$ be a twice differentiable function on $\mathbb{R}$. Given that $f''(x)>0$ for all $x\in \mathbb{R}$.Then which of the following is true? 1.$f(x)=0$ has exactly two solutions on $\mathbb{R}$. 2.$f(x)=0$ has a positive solution if $f(0)=0$ and $f'(0)>0$. 3.$f(x)=0$ has no positive solution if $f(0)=0$ and $f'(0)>0$. 4.$f(x)=0$ has no positive solution if $f(0)=0$ and $f'(0)<0$. My thoughts:- (1) is not true as $f(x)=x^2+1$ is a counter example. Now suppose the conditions in (2) holds.then $f'(x)$ is increasing everywhere.so $f'(x)$ is never zero for all positive $x$.so $f(x)$ can not have a positive solution otherwise $f'(x)$ have a zero between $0$ and $x$ by Rolle's Theorem. so (3) is true. Are my arguments right?
The correct answer to the above question is option 3. The second option is not correct because it has a counterexample: $$f(x)=e^x-1$$ This $f$ satisfies the conditions $f(0)=0, f'(0)>0$, but $0$ is the only solution of $f(x)=0$. Hence $f(x)=0$ has no positive solution.
Cardinality of the set of bijective functions on $\mathbb{N}$? I learned that the set of all one-to-one mappings of $\mathbb{N}$ onto $\mathbb{N}$ has cardinality $|\mathbb{R}|$. What about surjective functions and bijective functions?
Choose one natural number. How many are left to choose from? More rigorously, $$\operatorname{Aut}\mathbb{N} \cong \prod_{n \in \mathbb{N}} \mathbb{N} \setminus \{1, \ldots, n\} \cong \prod_{n \in \mathbb{N}} \mathbb{N} \cong \mathbb{N}^\mathbb{N} = \operatorname{End}\mathbb{N},$$ where $\{1, \ldots, 0\} := \varnothing$. The first isomorphism is a generalization of $\#S_n = n!$ Edit: but I haven't thought it through yet, I'll get back to you.
A multiple choice question on a monotone non-decreasing real-valued function on $\mathbb{R}$ Let $f$ be a monotone non-decreasing real-valued function on $\mathbb{R}$. Then $1$. $\lim _ {x \to a}f(x)$ exists at each point $a$. $2$. If $a<b$, then $\lim _ {x \to a+}f(x) \le \lim _ {x \to b-}f(x)$. $3$. $f$ is an unbounded function. $4$. The function $g(x)=e^{-f(x)}$ is a bounded function. $2$ looks obviously true, but I need some counterexamples to disprove the others. Can anyone help me please? Thanks for your help.
1: monotone does not necessarily mean it's continuous 3: never said it was strictly monotone, could be constant 4: $g$ is strictly decreasing and has a lower bound never reached ($0$). It has an upper bound only if $f$ has a lower bound -> many counterexamples
$f: \mathbb{R} \to \mathbb{R}$ satisfies $(x-2)f(x)-(x+1)f(x-1) = 3$. Evaluate $f(2013)$, given that $f(2)=5$ The function $f : \mathbb{R} \to \mathbb{R}$ satisfies $(x-2)f(x)-(x+1)f(x-1) = 3$. Evaluate $f(2013)$, given that $f(2) = 5$.
The conditions allow you to calculate $f(x+1)$ if you know $f(x)$. Try calculating $f(3), f(4), f(5), f(6)$, and looking for a pattern.
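Following the hint, a couple of lines of code can grind out the next few values exactly (a sketch; the relation is solved for $f(x)$, which is valid for $x \ge 3$ since then $x-2 \neq 0$):

```python
from fractions import Fraction

# (x-2) f(x) - (x+1) f(x-1) = 3  =>  f(x) = (3 + (x+1) f(x-1)) / (x-2)  for x >= 3
f = {2: Fraction(5)}
for x in range(3, 8):
    f[x] = (3 + (x + 1) * f[x - 1]) / Fraction(x - 2)
    print(x, f[x])
# look for a pattern in 5, 23, 59, 119, 209, ...
```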
If $A$ is a diagonalizable $n\times n$ matrix for which the eigenvalues are $0$ and $1$, then $A^2=A$. If $A$ is a diagonalizable $n\times n$ matrix for which the eigenvalues are $0$ and $1$, then $A^2=A$. I know how to prove this in the opposite direction, however I can't seem to find a way prove this. Could anyone please help?
Write $A = QDQ^{-1}$, where $D$ is a diagonal matrix with the eigenvalues, $0$s and $1$s, on the diagonal. Then $A^2 = QDQ^{-1}QDQ^{-1} = QD^{2}Q^{-1}$. But $D^2 = D$, because when you square a diagonal matrix you square the entries on the diagonal, and $1^2 = 1$ and $0^2 = 0$. Thus $$A^{2} = QD^{2}Q^{-1} = QDQ^{-1} = A$$
Integrating $\frac {e^{iz}}{z}$ over a semicircle around $0$ of radius $\epsilon$ I am trying to find the value of $\int_{-\infty}^{\infty} \frac{\sin (x)}{x}$ using residue theorem and a contour with a kink around $0$. For this, I need to find $\int_{C_\epsilon} \frac {e^{iz}} {z}$ where $C_\epsilon$ is the semicircle centred at $0$ with radius $\epsilon$ from $-\epsilon$ to $\epsilon$. I guess it is equal to half the residue of $\frac {e^{iz}} {z}$ at $0$. Is this true? Any help is appreciated.
Note that $\dfrac{e^{iz}}{z}=\dfrac1z+O(1)$. Integrating this counter-clockwise around the semicircle of radius $\epsilon$ is $$ \begin{align} \int_\gamma\frac{e^{iz}}{z}\,\mathrm{d}z &=\int_0^\pi\left(\frac1\epsilon e^{-i\theta}+O(1)\right)\,\epsilon\,ie^{i\theta}\,\mathrm{d}\theta\\ &=\int_0^\pi\frac1\epsilon e^{-i\theta}\,\epsilon\,ie^{i\theta}\,\mathrm{d}\theta +\int_0^\pi O(1)\,\epsilon\,ie^{i\theta}\,\mathrm{d}\theta\\[9pt] &=\pi i+O(\epsilon)\\[12pt] \end{align} $$ The residue at $0$ is $1$, so integrating around the full circle would give $2\pi i$.
What is the closed formula for the following summation? Is there any closed formula for the following summation? $$\sum_{k=2}^n \frac{1}{\log_2(k)}$$
There is no closed form as such. However, you can use the Abel summation technique from here to derive the asymptotic. We have \begin{align} S_n & = \sum_{k=2}^n \dfrac1{\log(k)} = \int_{2^-}^{n^+} \dfrac{d \lfloor t \rfloor}{\log(t)} = \dfrac{n}{\log(n)} - \dfrac2{\log(2)} + \int_2^{n} \dfrac{dt}{\log^2(t)}\\ & =\dfrac{n}{\log(n)} - \dfrac2{\log(2)} + \int_2^{3} \dfrac{dt}{\log^2(t)} + \int_3^{n} \dfrac{dt}{\log^2(t)}\\ &\leq \dfrac{n}{\log(n)} \overbrace{- \dfrac2{\log(2)} + \int_2^{3} \dfrac{dt}{\log^2(t)}}^{\text{constant}} + \dfrac1{\log(3)}\int_3^{n} \underbrace{\dfrac{dt}{\log(t)}}_{\leq S_n}\\ \end{align} We get that $$S_n \leq \dfrac{n}{\log(n)} + \text{constant} + \dfrac{S_n}{\log(3)} \implies S_n \leq \dfrac{\log(3)}{\log(3)-1} \left(\dfrac{n}{\log(n)} + \text{constant}\right) \tag{$\star$}$$ With a bit more effort you can show that $$S_n \sim \dfrac{n}{\log n}$$
Find the Sum $1\cdot2+2\cdot3+\cdots + (n-1)\cdot n$ Find the sum $$1\cdot2 + 2\cdot3 + \cdot \cdot \cdot + (n-1)\cdot n.$$ This is related to the binomial theorem. My guess is we use the combination formula . . . $C(n, k) = n!/k!\cdot(n-k)!$ so . . . for the first term $2 = C(2,1) = 2/1!(2-1)! = 2$ but I can't figure out the second term $3 \cdot 2 = 6$ . . . $C(3,2) = 3$ and $C(3,1) = 3$ I can't get it to be 6. Right now i have something like . . . $$ C(2,1) + C(3,2) + \cdot \cdot \cdot + C(n, n-1) $$ The 2nd term doesn't seem to equal 6. What should I do?
As I have been directed to teach how to fish... this is a bit clunky, but works. Define rising factorial powers: $$ x^{\overline{m}} = \prod_{0 \le k < m} (x + k) = x (x + 1) \ldots (x + m - 1) $$ Prove by induction over $n$ that: $$ \sum_{0 \le k \le n} k^{\overline{m}} = \frac{n^{\overline{m + 1}}}{m + 1} $$ When $n = 0$, it reduces to $0 = 0$. Assume the formula is valid for $n$, and: $$ \begin{align*} \sum_{0 \le k \le n + 1} k^{\overline{m}} &= \sum_{0 \le k \le n} k^{\overline{m}} + (n + 1)^{\overline{m}} \\ &= \frac{n^{\overline{m + 1}}}{m + 1} + (n + 1)^{\overline{m}} \\ &= \frac{n \cdot (n + 1)^{\overline{m}} + (m + 1) (n + 1)^{\overline{m}}} {m + 1} \\ &= \frac{(n + m + 1) \cdot (n + 1)^{\overline{m}}}{m + 1} \\ &= \frac{(n + 1)^{\overline{m + 1}}}{m + 1} \end{align*} $$ By induction, it is valid for all $n$. Defining falling factorial powers: $$ x^{\underline{m}} = \prod_{0 \le k < m} (x - k) = x (x - 1) \ldots (x - m + 1) $$ you get a similar formula for the sum: $$ \sum_{0 \le k \le n} k^{\underline{m}} $$ You can see that $x^{\overline{m}}$ (respectively $x^{\underline{m}}$) is a monic polynomial of degree $m$, so any integral power of $x$ can be expressed as a combination of appropriate factorial powers, and so sums of polynomials in $k$ can also be computed with some work. By the way, the binomial coefficient: $$ \binom{\alpha}{k} = \frac{\alpha^{\underline{k}}}{k!} $$
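A quick numerical check of the rising-factorial sum formula, and of the particular sum in the question (which is the case $m=2$ with upper limit $n-1$), might look like this (a sketch in Python):

```python
from math import prod

def rising(x, m):
    """x^(m rising) = x (x+1) ... (x+m-1)."""
    return prod(x + j for j in range(m))

# Check sum_{k=0}^{n} k^(m rising) = n^(m+1 rising) / (m+1) for small n, m.
for m in range(1, 5):
    for n in range(0, 20):
        assert sum(rising(k, m) for k in range(n + 1)) * (m + 1) == rising(n, m + 1)

# The sum in the question, 1*2 + 2*3 + ... + (n-1)*n, is the case m = 2 ending at n-1:
n = 10
print(sum(k * (k + 1) for k in range(1, n)), rising(n - 1, 3) // 3)   # both 330
```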
Evaluating a trigonometric integral using residues Finding the trigonometric integral using the method for residues: $$\int_0^{2\pi} \frac{d\theta}{ a^2\sin^2 \theta + b^2\cos^2 \theta} = \frac{2\pi}{ab}$$ where $a, b > 0$. I can't seem to factor this question I got up to $4/i (z) / ((b^2)(z^2 + 1)^2 - a^2(z^2 - 1)^2 $ I think I should be pulling out $a^2$ and $b^2$ out earlier but not too sure how.
Letting $z = e^{i\theta},$ we get $$\int_0^{2\pi} \frac{1}{a^2\sin^2\theta + b^2\cos^2\theta} d\theta = \int_{|z|=1} \frac{1}{iz} \frac{4}{-a^2(z-1/z)^2+b^2(z+1/z)^2} dz \\= \int_{|z|=1} \frac{1}{iz} \frac{4z^2}{-a^2(z^2-1)^2+b^2(z^2+1)^2} dz = -i\int_{|z|=1} \frac{4z}{-a^2(z^2-1)^2+b^2(z^2+1)^2} dz.$$ Now the location of the four simple poles of the new integrand is given by $$z_{0, 1} = \pm \sqrt{\frac{a+b}{a-b}} \quad \text{and} \quad z_{2, 3} = \pm \sqrt{\frac{a-b}{a+b}}.$$ We now restrict ourselves to the case $a > b > 0,$ so that only $z_{2,3}$ are inside the contour. The residues $w_{2,3}$ are given by $$ w_{2,3} =\lim_{z\to z_{2,3}} \frac{4z}{-2a^2(2z)(z^2-1)+2b^2(2z)(z^2+1)} = \lim_{z\to z_{2,3}} \frac{1}{-a^2(z^2-1)+b^2(z^2+1)} \\ = \lim_{z\to z_{2,3}} \frac{1}{z^2(b^2-a^2)+b^2+a^2}= \frac{1}{-(a-b)^2+b^2+a^2} = \frac{1}{2ab}.$$ It follows that the value of the integral is given by $$-i \times 2\pi i \times 2 \times \frac{1}{2ab} = \frac{2\pi}{ab}.$$ The case $b > a > 0$ is left to the reader. (In fact it is not difficult to see that even though in this case the poles are complex, it is once more the poles $z_{2,3}$ that are inside the unit circle.)
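As an independent sanity check of the value $2\pi/(ab)$ (not part of the residue computation), a crude numerical quadrature agrees:

```python
import math

def integral(a, b, steps=100000):
    """Midpoint rule for the 2*pi-periodic integrand 1/(a^2 sin^2 t + b^2 cos^2 t)."""
    h = 2 * math.pi / steps
    return sum(h / (a**2 * math.sin((i + 0.5) * h)**2
                    + b**2 * math.cos((i + 0.5) * h)**2)
               for i in range(steps))

for a, b in [(2.0, 1.0), (1.0, 3.0), (0.5, 0.7)]:
    print(integral(a, b), 2 * math.pi / (a * b))   # the two columns agree
```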
On a long proof On wikipedia there is a claim that the Abel–Ruffini theorem was nearly proved by Paolo Ruffini, and that his proof spanned over $500$ pages, is this really true? I don't really know much abstract algebra, and I know that the length of a paper will vary due to the size of the font, but what could possibly take $500$ pages to explain? Did he have to introduce a new subject part way through the paper or what? It also says Niels Henrik Abel published a proof that required just six pages, how can someone jump from $500$ pages to $6$?
Not only true, but not unique. The abc conjecture has a recent (2012) proposed proof by Shinichi Mochizuki that spans over 500 pages, over 4 papers. The record is the classification of finite simple groups which consists of tens of thousands of pages, over hundreds of papers. Very few people have read all of them, although the result is important and used frequently.
Difficulties performing Laurent Series expansions to determine Residues The following problems are from Brown and Churchill's Complex Variables, 8ed. From §71 concerning Residues and Poles, problem #1d: Determine the residue at $z = 0$ of the function $$\frac{\cot(z)}{z^4} $$ I really don't know where to start with this. I had previously tried expanding the series using the composition of the series expansions of $\cos(z)$ and $\sin(z)$ but didn't really achieve any favorable outcomes. If anyone has an idea on how I might go about solving this please let me know. For the sake of completion, the book lists the solution as $-1/45$. From the same section, problem #1e Determine the residue at $z = 0$ of the function $$\frac{\sinh(z)}{z^4(1-z^2)} $$ Recognizing the following expressions: $$\sinh(z) = \sum_{n=0}^{\infty} \frac{z^{(2n+1)}}{(2n +1)!}$$ $$\frac{1}{1-z^2} = \sum_{n=0}^{\infty} (z^2)^n $$ I have expanded the series thusly: $$\begin{aligned} \frac{\sinh(z)}{z^4(1-z^2)} &= \frac{1}{z^4} \bigg(\sum_{n=0}^{\infty} \frac{z^{(2n+1)}}{(2n +1)!}\bigg) \bigg(\sum_{n=0}^{\infty} (z^2)^n\bigg) \\ &= \bigg(\sum_{n=0}^{\infty} \frac{z^{2n - 3}}{(2n +1)!}\bigg) \bigg(\sum_{n=0}^{\infty} z^{2n-4} \bigg) \\\end{aligned} $$ I don't really know where to go from here. Any help would be great. Thanks.
A related problem. Lets consider your first problem $$ \frac{\cot(z)}{z^4}=\frac{\cos(z)}{z^4\sin(z)}. $$ First, determine the order of the pole of the function at the point $z=0$, which, in this case, is of order $5$. Once the order of the pole has been determined, we can use the formula $$r = \frac{1}{4!} \lim_{z\to 0} \frac{d^4}{dz^4}\left( z^5\frac{\cos(z)}{z^4\sin(z)} \right)=-\frac{1}{45}. $$ Note that, the general formula for computing the residue of $f(z)$ at a point $z=z_0$ with a pole order $n$ is $$r = \frac{1}{(n-1)!} \lim_{z\to z_0} \frac{d^{n-1}}{dz^{n-1}}\left( (z-z_0)^n f(z) \right) $$ Note: If $z=z_0$ is a pole of order one of $f(z)$, then the residue is $$ r = \lim_{z\to z_0}(z-z_0)f(z). $$
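If you want to double-check such residues, a computer algebra system can expand the Laurent series for you (a sketch using SymPy's `residue`; the values in the comments are what the expansions give):

```python
from sympy import symbols, residue, cot, sinh

z = symbols('z')
print(residue(cot(z) / z**4, z, 0))                    # -1/45
print(residue(sinh(z) / (z**4 * (1 - z**2)), z, 0))    # 7/6, the coefficient of 1/z
```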
Calculate the probability? A random function $rand()$ returns an integer between $1$ and $k$ with probability $\frac{1}{k}$. After $n$ times we obtain a sequence $\{b_i\}_{i=1}^n$, where $1\leq b_i\leq k$. Set $\mathbb{M}=\{b_1\}\cup\{b_2\}\cdots \cup\{b_n\}$. I want to know the probability that $\mathbb{M}\neq \{1, 2, \cdots, k\}$.
Hint: Obviously $n<k$ is trivial. Thereafter, the question becomes equivalent to solving What fraction of $n$-tuples with the digits $1,\ldots,k$ are in fact $n$-tuples formed from a strict subset of these numbers? or a surjection-counting problem. You can find a recursive solution by letting $p_{n,k}$ be the probability that a set of $n$ digits up to $k$ contains all distinct digits, and then considering the last digit of a sequence leading to $(n+1)$-tuples. Edit: I should make clear that a recursive 'solution' essentially is the best you can do, which is why I called it that! The numbers don't have a closed form. (See e.g. https://mathoverflow.net/questions/27071/counting-sequences-a-recurrence-relation for a discussion, once you've worked out the recursive form yourself.)
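For concreteness, here is one way such a recursion can be organized (my own sketch of the kind of recursion meant above, tracking the distribution of the number of distinct values seen so far, with exact rational arithmetic):

```python
from fractions import Fraction

def prob_not_all(n, k):
    """P(M != {1,...,k}) after n draws, each uniform on {1,...,k}."""
    # q[j] = probability that exactly j distinct values have appeared so far
    q = [Fraction(1)] + [Fraction(0)] * k
    for _ in range(n):
        new_q = [Fraction(0)] * (k + 1)
        for j in range(k + 1):
            if q[j] == 0:
                continue
            new_q[j] += q[j] * Fraction(j, k)                  # repeat an old value
            if j < k:
                new_q[j + 1] += q[j] * Fraction(k - j, k)      # see a new value
        q = new_q
    return 1 - q[k]

print(prob_not_all(3, 2))   # 1/4: only HHH-style or TTT-style sequences miss a value
print(prob_not_all(5, 6))   # 1 (cannot cover 6 values with 5 draws)
```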
How to prove every closed interval in R is compact? Let $[a,b]\subseteq \mathbb R$. As we know, it is compact. This is a very important result. However, the proof of the result may not be familiar to us. Here I want to collect the ways to prove $[a,b]$ is compact. Thanks for your help and any link.
Perhaps the shortest, slickest proof I know is by Real Induction. See Theorem 17 in the aforelinked note, and observe that the proof is six lines. More importantly, after you spend an hour or two familiarizing yourself with the formalism of Real Induction, the proof essentially writes itself.
Where does the relation $\nabla^2(1/r)=-4\pi\delta^3({\bf r})$ between Laplacian and Dirac delta function come from? It is often quoted in physics textbooks for finding the electric potential using Green's function that $$\nabla ^2 \left(\frac{1}{r}\right)=-4\pi\delta^3({\bf r}),$$ or more generally $$\nabla ^2 \left(\frac{1}{|| \vec x - \vec x'||}\right)=-4\pi\delta^3(\vec x - \vec x'),$$ where $\delta^3$ is the 3-dimensional Dirac delta distribution. However I don't understand how/where this comes from. Would anyone mind explaining?
We can use the simplest method to display the results, as shown below: $$ \nabla ^2 \left(\frac{1}{r}\right) = \nabla \cdot \nabla \left( \frac 1 r \right) = \nabla \cdot \frac {-1 \mathbf {e_r}} {r^2} $$ Suppose there is a sphere centered on the origin; then the total flux through the surface of the sphere is: $$ \text {Total flux} = 4 \pi r^2 \frac {-1} {r^2} = -4 \pi $$ Suppose the volume of the sphere is $ \mathbf {v(r)}$, so by definition the divergence is: $$ \lim_{\text {volume} \to 0} \frac {\text {Total Flux}} {\text {Volume}} = \lim_{\text {v(r)} \to 0} \left(\frac {-4 \pi} {v(r)}\right) $$ So obviously, $$ \lim_{\text {r} \to 0} \left[ \nabla ^2 \left(\frac{1}{r}\right) \right]= \lim_{\text {r} \to 0} \left[ \nabla \cdot \nabla \left( \frac 1 r \right) \right]= \lim_{\text {r} \to 0} \left(\frac {-4 \pi} {v(r)} \right) = \infty $$ $$ \lim_{\text r\to 0} \int \nabla ^2 \left( \frac 1 r \right) dv(r) = \lim_{\text r\to 0}\int \frac {-4 \pi} {v(r)} dv(r) = -4\pi$$ Since the Laplacian is zero everywhere except as $r \to 0$, it follows that $$ \nabla ^2 \left(\frac{1}{r}\right)=-4\pi\delta^3({\bf r}) $$
Method of Characteristics $au_x+bu_y+u_t=0 $ $au_x+bu_y+u_t=0$ $u(x,y,0)=g(x,y)$ solve $u(x,y,t)$ Our professor talked about solving this using the Method of Characteristics. However, I am confused about this method. Since it's the weekend, I think it might be faster to get a response here. In the lecture, he wrote down the following: Fix a point $(x,y,t)$ in $\mathbb{R}^3$. $h(s)=u(x+as,y+bs,t+s)$, line $φ(s)=(x+as,y+bs,t+s)=(x,y,t)+s(a,b,1)$ $h'(s)=u_xa+u_yb+u_t=0$ for all $s$. $h(-t)=u(x-at,y-bt,0)=g(x-at,y-bt)$ <----- $u$ equals this value for all points on the line $(x+as,y+bs,t+s)$. $h(0)=u(x,y,t)$ $u(x,y,t)=g(x-at,y-bt)$ The first question I have is why we want to parametrize $x,y$ and $t$ this way. In addition, what is the characteristic system of this problem? If we have derived the formula already, why do we still need the characteristic system equations? Thank you!
I think it's easiest just to concisely re-explain the method, so that's what I'll do. The idea: linear, first-order PDEs have preferred lines (generally curved) along which all the action happens. More specifically, because the differential bit takes the form of $\mathbf f \cdot \nabla u$ where in general $\mathbf f$ varies, it is actually always a directional derivative along the vector field $\mathbf f$. Therefore, along a line given by $\dot{\mathbf x}(s) = \mathbf f$, we expect the PDE to reduce to an ODE involving $\mathrm d u(\mathbf x(s))/ \mathrm d s$. In fact, by the chain rule, $\mathrm d u(\mathbf x(s))/ \mathrm d s = \dot{\mathbf x}(s) \cdot\nabla u = \mathbf f \cdot\nabla u$ which is exactly the term we said was in the PDE. So $\mathbf f \cdot\nabla u = h(\mathbf x)$ is equivalent to $$\mathrm d u(\mathbf x(s))/ \mathrm d s = h(\mathbf x(s))$$ Therefore, by finding $\mathbf x(s)$ we can find the ODE $u$ satisfies, and find the initial conditions relevant to each line by saying that at $s=0$ we are on the space where the initial conditions are given. Does this help? If you have a more specific question, ask away!
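If it helps to double-check the lecture's conclusion, here is a small symbolic verification that $u(x,y,t)=g(x-at,y-bt)$ satisfies the PDE (a sketch using SymPy, with $g$ an arbitrary smooth function):

```python
import sympy as sp

x, y, t, a, b = sp.symbols('x y t a b')
g = sp.Function('g')

u = g(x - a * t, y - b * t)            # candidate solution u(x, y, t) = g(x - at, y - bt)
pde = a * sp.diff(u, x) + b * sp.diff(u, y) + sp.diff(u, t)

print(sp.simplify(pde))                # 0, and u(x, y, 0) = g(x, y) by construction
```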
Rank of matrix of order $2 \times 2$ and $3 \times 3$ How can I calculate the rank of a matrix using the echelon method: $(a)\;\; \begin{pmatrix} 1 & -1\\ 2 & 3 \end{pmatrix}$ $(b)\;\; \begin{pmatrix} 2 & 1\\ 7 & 4 \end{pmatrix}$ $(c)\;\; \begin{pmatrix} 2 & 1\\ 4 & 2 \end{pmatrix}$ $(d)\;\; \begin{pmatrix} 2 & -3 & 3\\ 2 & 2 & 3\\ 3 & -2 & 2 \end{pmatrix}$ $(e)\;\; \begin{pmatrix} 1 & 2 & 3\\ 3 & 6 & 9\\ 1 & 2 & 3 \end{pmatrix}$ Although I know how to use the determinant method to calculate the rank of a given matrix, in the exercise it is to be calculated using the echelon form. Please explain in detail. Thanks
Follow this link to find your answer. If you are left with any doubt after reading it, feel free to discuss.
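Since the link itself is not reproduced here, a short sketch of the echelon method on matrix (e) of the question may help (the row operations below are my own worked example, not taken from the linked answer): $$\begin{pmatrix}1&2&3\\3&6&9\\1&2&3\end{pmatrix}\xrightarrow{\;R_2\to R_2-3R_1,\;\;R_3\to R_3-R_1\;}\begin{pmatrix}1&2&3\\0&0&0\\0&0&0\end{pmatrix}$$ One nonzero row remains, so the rank is $1$. The same procedure (reduce to echelon form with row operations, then count the nonzero rows) gives the rank of each of the other matrices.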
Finding the limit of a function - exponential one Find the value of $\displaystyle \lim_{x \rightarrow 0}\left(\frac{1+5x^2}{1+3x^2}\right)^{\frac{1}{\large {x^2}}}$ We can write this limit function as : $$\lim_{x \rightarrow 0}\left(1+ \frac{2x^2}{1+3x^2}\right)^{\frac{1}{\large{x^2}}}$$ Please advise how to proceed with such a limit.
Write it as $$\dfrac{(1+5x^2)^{1/x^2}}{(1+3x^2)^{1/x^2}}$$ and recall that $$\lim_{y \to 0} (1+ay)^{1/y} = e^a$$ to conclude what you want.
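For completeness, here is how the hint finishes (a routine application of the stated limit, spelled out as a check, using $y=x^2$ with $a=5$ and $a=3$ respectively): $$\lim_{x\to0}\frac{(1+5x^2)^{1/x^2}}{(1+3x^2)^{1/x^2}}=\frac{e^5}{e^3}=e^2.$$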
Permutations in the Symmetric Group on 3 Letters Find an example, in the group $S_3$ of permutations of $\{1,2,3\}$, of elements $x,y\in S_3$ for which $x^2 = e = y^2$ but for which $(xy)^4 \neq e$.
This group is isomorphic to the dihedral group with 6 elements. Any element that is a reflection has order two. The product of two different reflections is a nontrivial rotation, necessarily of order 3, so its fourth power is that rotation again and not the identity.
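A concrete choice, written out as an explicit instance of the argument above: take $x=(1\,2)$ and $y=(1\,3)$. Both are transpositions, so $x^2=y^2=e$, while $xy$ is a $3$-cycle and hence has order $3$, giving $$(xy)^4=(xy)^3(xy)=xy\neq e.$$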
Construct matrix Let $B$ be any square matrix. Is it possible to construct an invertible matrix $Q_B$ such that $$\|Q_BBQ_B^{-1}\|_2\ \leq\ \rho(B)?$$ Thanks in advance for the help. Edit: $Q_B$ only needs to be invertible, not orthogonal.
Since $\rho(B)=\rho(Q_BBQ_B^{-1})$, you are essentially asking whether $B$ is similar to some $A$ such that $\|A\|_2=\rho(A)$. Yet $\|A\|_2=\rho(A)$ if and only if $A$ is the scalar multiple of a unitary matrix. So, if $B$ is not similar to the scalar multiple of a unitary matrix (e.g. when $B$ has at least two eigenvalues of different moduli, or when $B$ is not normal), the construction of $Q_B$ is impossible.
How do I write a trig function that includes inverses in terms of another variable? It's been a while since I've used trig and I feel a bit silly asking this question, but here goes: Given: $z = \tan(\arcsin(x))$ Question: How do I write something like that in terms of $x$? Thanks! And sorry for my dumb question.
$$z = \tan[\arcsin(x)]$$ $$\arctan(z) = \arctan[\tan(\arcsin(x))] = \arcsin(x)$$ $$\sin[\arctan(z)] = \sin[\arcsin(x)] = x$$
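Carrying this one step further (just the standard simplifications of $\sin(\arctan z)$ and $\tan(\arcsin x)$, added as a check): $$x=\sin[\arctan(z)]=\frac{z}{\sqrt{1+z^2}}\qquad\Longleftrightarrow\qquad z=\tan[\arcsin(x)]=\frac{x}{\sqrt{1-x^2}},\qquad |x|<1,$$ so the expression for $z$ in terms of $x$ without inverse trig functions is $z=x/\sqrt{1-x^2}$.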
Uniform limit of holomorphic functions Let $\{f_n\}$ be a sequence of holomorphic functions defined in a generic domain $D \subset \mathbb{C}$. Suppose that there exists $f$ such that $f_n \to f$ uniformly. My question is: is it true that $f$ is holomorphic too?
You've already seen an approach using Morera's theorem from the other excellent answers. For a slightly more concrete demonstration of why $f$ is complex differentiable, you can use the fact that every $f_n$ satisfies Cauchy's integral formula, so by uniform convergence $f$ also satisfies Cauchy's integral formula. This allows you to differentiate $f$ by differentiating the integral.
Analytic functions of a real variable which do not extend to an open complex neighborhood Do such functions exist? If not, is it appropriate to think of real analytic functions as "slices" of holomorphic functions?
If $f$ is real analytic on an open interval $(a,b)$, then at every point $x_0\in (a,b)$ there is a power series $P_{x_0}(x)=\sum_{n=0}^\infty a_n(x-x_0)^n$ with radius of convergence $r(x_0)>0$ such that $f(x)=P_{x_0}(x)$ for all $x$ in $(a,b)\cap \{x:|x-x_0|<r(x_0)\}$. Then $f$ can be extended to an open neighborhood $B(x_0,r(x_0))$ of $x_0$ in $\mathbb{C}$ by the power series $P_{x_0}(z)$. Now let $O\subset \mathbb{C}$ be the union of these open balls $B(x_0,r(x_0))$, $x_0\in (a,b)$. Define $F(z)$, $z\in O$, such that $F(z)=P_{x_0}(z)$ if $z\in B(x_0,r(x_0))$ (for some $x_0\in (a,b)$). This is well-defined since any two analytic functions agreeing on a set with accumulation points in a connected open set must be identically equal. So $F$ is an extension of $f$ to an open set in $\mathbb{C}$ containing $(a,b)$. Since every open set in $\mathbb{R}$ is a countable union of disjoint open intervals, $f$ can be so extended if its domain is open in $\mathbb{R}$.
Let A = {a, c, 4, {4, 3}, {1, 4, 3, 3, 2}, d, e, {3}, {4}} Which of the following is true?
Yes, you are correct. None of the options are true. $\{4, \{4\}\}$ is a subset of $A$, since $4 \in A$ and $\{4\}\in A$, but the set with which it is paired is not a subset of $A$, and none of the items listed as "elements of $A$" are, in fact, elements of $A$. For example, $\{1, 4, 3\} \not \subset A$ because $1 \notin A$ (nor is $3 \in A$). But it is also the case that $\{1, 4, 3\} \notin A$ because $A$ does not contain the set $\{1, 4, 3\}$ as an element. Hence, none of the options $A),\, B),\,$ or $\, C)$ are true. That leaves only $(D)$ as being true. Be careful though: $\{1, 4, 3, 3, 2\} = \{1, 2, 3, 4\} \in A$, but $\{1, 4, 3, 3, 2\} = \{1, 2, 3, 4\} \not \subset A$. That is, the set itself is an element of $A$. It is true, however, that $\{\{1, 4, 3, 3, 2\}\} = \{\{1, 2, 3, 4\}\} \subset A$. That is, the subset of $A$ containing the element $\{1, 4, 3, 3, 2\}$ is a subset of $A$. Do you understand the difference?
normed division algebra Can we prove that every division algebra over $\mathbb{R}$ or $\mathbb{C}$ is a normed division algebra? Or is there an example of a division algebra on which it is not possible to define a norm? The definition of a normed division algebra is given here. Thanks!
The Frobenius theorem for (finite-dimensional) associative real division algebras states there are only $\mathbb{R},\mathbb{C},\mathbb{H}$, and the proof is elementary (it is given on Wikipedia in fact). If you don't care about finite-dimensional, then the transcendental field extension $\mathbb{R}(T)/\mathbb{R}$, where here $\mathbb{R}(T)$ is the field of real-coefficient rational functions in the variable $T$, is a division algebra (it is a commutative field) but cannot carry a norm. Indeed, there are no infinite-dimensional normed division algebras. Finally, there are real division algebras that are not $\mathbb{R},\mathbb{C},\mathbb{H}$ or $\mathbb{O}$ (which are the only normed ones), which means there are division algebras that cannot carry a norm. It is still true they all must have dimension $1,2,4$ or $8$ (see section 1.1 of Baez's The Octonions), but there are (for example) uncountably many isomorphism classes of two-dimensional real division algebras (unfortunately I don't have a reference for this handy).
Why $f^{-1}(f(A)) \not= A$ Let $A$ be a subset of the domain of a function $f$. Why can $f^{-1}(f(A)) \not= A$? I was not able to find a function $f$ for which this happens. Can you give an example or a hint? I was asking for an example function, which is not addressed here
Any noninjective function provides a counterexample. To be more specific, let $X$ be any set with at least two elements, $Y$ any nonempty set, $u$ in $X$, $v$ in $Y$, and $f:X\to Y$ defined by $f(x)=v$ for every $x$ in $X$. Then $A=\{u\}\subset X$ is such that $f(A)=\{v\}$ hence $f^{-1}(f(A))=X\ne A$. In general, for $A\subset X$, $A\subset f^{-1}(f(A))$ but the other inclusion may fail except when $f$ is injective. Another example: define $f:\mathbb R\to\mathbb R$ by $f(x)=x^2$ for every $x$. Then, $f^{-1}(f(A))=A\cup(-A)$ for every $A\subset\mathbb R$. For example, $A=[1,2]$ yields $f^{-1}(f(A))=[-2,-1]\cup[1,2]$.
Understanding the Hamiltonian function Based on this function: $$\text{max} \int_0^2(-2tx-u^2) \, dt$$ We know that $$(1) \;-1 \leq u \leq 1, \; \; \; (2) \; \dot{x}=2u, \; \; \; (3) \; x(0)=1, \; \; \; \text{x(2) is free}$$ I can rewrite this as a Hamiltonian function: $$H=-2tx-u^2+p2u$$ where $u(t)$ maximizes $H$, with: \begin{equation} u = \left\{\begin{array}{rc} 1 & p \geq 1 \\ p & -1 < p < 1 \\ -1 & p \leq -1 \end{array}\right. \end{equation} Now, can somebody help me understand how the last part is true, and why? I find it hard to see the bridge between $u$ and $p$.
$$ \frac{\partial H}{\partial u} = -2u + 2p \tag{1} $$ where $u$ is the control variable and $p$ is the costate. Since $H$ is concave in $u$, the unconstrained maximizer is obtained by setting (1) equal to zero, which gives $u=p$. Taking the constraint $-1\le u\le 1$ into account, the maximizer is that value clipped to the interval: $u=p$ when $|p|<1$, and $u=1$ or $u=-1$ when $p\ge1$ or $p\le-1$. This is exactly the piecewise expression for $u$.
Positive Outcome! I have a question on probability. Two players make duels with two dice, each die having six sides; the winner is the one who rolls the highest. The chance of a win is 50%, since it is a duel between two players on equal terms, but the person taking bets keeps a 10% commission. What is the best way to bet in this type of scenario? I'll give you an illustration: I bet 100,000 and I win, so the host gives me 180,000, which is 2x the original bet minus 10% of the outcome. What is the best way to make a long term profit? Currently I am using the martingale system, but I lost in a losing streak by doing the following: I bet 200,000 and lost, then I bet 400,000 and lost, then I bet 900,000 and lost, and finally I bet 2,000,000 and won. But the payout was 3,600,000, which was 200,000 lower than the total of my bets. What system should I use?
To maximize your long-term profit, you should use Kelly gambling, which says you should bet a fraction of your bankroll proportional to the edge you get from the odds. The intuitive form of the formula is $$ f^{*} = \frac{\text{expected net winnings}}{\text{net winnings if you win}} \! $$ If the odds are even (i.e. your scenario, but without the commission), the Kelly criterion says not to bet at all. The addition of a commission for the casino gives a negative result, which means you'll make a loss in the long term, whatever your strategy. So the short answer: playing this game is a losing proposition whatever your strategy. If it weren't, casinos wouldn't stay in business very long. The martingale system only works if there is no ceiling to the amount you can bet (remember that the bets increase exponentially, and there is only so much money in the world). But you should make sure that each bet covers your existing losses, if you win, and makes a small profit.
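As a worked check of the negative edge, using only the payout described in the question (stake $S$, receive $1.8S$ back on a win, lose the stake on a loss, each with probability $\tfrac12$): $$\text{expected net winnings per unit staked}=\tfrac12(0.8)+\tfrac12(-1)=-0.1,\qquad f^{*}=\frac{-0.1}{0.8}=-0.125<0,$$ so the Kelly fraction is negative and the optimal amount to bet on this game is zero.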
Hyperbolic Functions Hey everyone, I need help with questions on hyperbolic functions. I was able to do part (a). I proved for $\sinh(3y)$ by doing this: \begin{align*} \sinh(3y) &= \sinh(2y +y)\\ &= \sinh(2y)\cosh(y) + \cosh(2y)\sinh(y)\\ &= 2\sinh(y)\cosh(y)\cosh(y) + (\cosh^2(y)+\sinh^2(y))\sinh(y)\\ &= 2\sinh(y)(1+\sinh^2(y)) + (1+\sinh^2(y) + \sinh^2(y))\sinh(y)\\ &= 2\sinh(y) + 2\sinh^3(y) + \sinh(y) +2\sinh^3(y)\\ &= 4\sinh^3(y) + 3\sinh(y). \end{align*} Therefore, $0 = 4\sinh^3(y) + 3\sinh(y) - \sinh(3y)$. I have no clue what to do for part (b) and part (c) but I do see similarities between part (a) and part(b) as you can subtitute $x = \sinh(y)$. But yeah, I'm stuck and help would be very much appreciated.
Hint 1: Set $\color{#C00000}{x=\sinh(y)}$. Since $0=4\sinh^3(y)+3\sinh(y)-\sinh(3y)$, we have $$ 4x^3+3x-\sinh(3y)=0 $$ and by hypothesis, $$ 4x^3+3x-2=0 $$ So, if $\color{#C00000}{\sinh(3y)=2}$, both equations match. Solve for $x$. Hint 2: Set $c\,x=\sinh(y)$ for appropriate $c$.
Intuition behind the difference between derived sets and closed sets? I missed the lecture from my Analysis class where my professor talked about derived sets. Furthermore, nothing about derived sets is in my textbook. Upon looking in many topology textbooks, few even have the term "derived set" in their index and many books only say "$A'$ is the set of limit points of $A$". But I am really confused on the difference between $A'$ and $\bar{A}$. For example, Let $A=\{(x,y)∈ \mathbb{R}^{2} ∣x^2+y^2<1\}$, certainly $(1,0) \in A'$, but shouldn't $(0,0)∈A'$ too? In this way it would seem that $A \subseteq A'$. The definition of a limit point is a point $x$ such that every neighborhood of $x$ contains a point of $A$ other than $x$ itself. Then wouldn't $(0,0)$ fit this criterion? If I am wrong, why? And if I am not, can someone please give me some more intuitive examples that clearly illustrate, the subtle difference between $A'$, and $\bar{A}$?
The key to the difference is the notion of an isolated point. If $X$ is a space, $A\subseteq X$, and $x\in A$, $x$ is an isolated point of $A$ if there is an open set $U$ such that $U\cap A=\{x\}$. If $X$ is a metric space with metric $d$, this is equivalent to saying that there is an $\epsilon>0$ such that $B(x,\epsilon)\cap A=\{x\}$, where $B(x,\epsilon)$ is the open ball of radius $\epsilon$ centred at $x$. It’s not hard to see that $x$ is an isolated point of $A$ if and only if $x\in A$ and $x$ is not a limit point of $A$. This means that in principle $\operatorname{cl}A$ contains three kinds of points: * *isolated points of $A$; *points of $A$ that are limit points of $A$; and *points of $X\setminus A$ that are limit points of $A$. If $A_I$ is the set of isolated points of $A$, $A_L$ is the set of limit points of $A$ that are in $A$, and $L$ is the set of limit points of $A$ that are not in $A$, then * *$A=A_I\cup A_L$; *$A'=A_L\cup L$; and *$\operatorname{cl}A=A_I\cup A_L\cup L$. In particular, if $A$ has no isolated points, so that $A_I=\varnothing$, then $A'=\operatorname{cl}A$. If, on the other hand, $A$ consists entirely of isolated points, like the sets $\Bbb Z$ and $\left\{\frac1n:n\in\Bbb Z^+\right\}$ in $\Bbb R$, then $A_L=\varnothing$, $A'=L$, and $\operatorname{cl}A=A\cup L$.
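A small concrete example of the three kinds of points (my own illustration, in $\Bbb R$ with the usual metric): let $A=\{0\}\cup(1,2)$. Then $A_I=\{0\}$, $A_L=(1,2)$, and $L=\{1,2\}$, so $$A'=[1,2]\qquad\text{while}\qquad\operatorname{cl}A=\{0\}\cup[1,2],$$ and the two differ exactly by the isolated point $0$.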
Correct way to calculate a complex integral? I have $$ \int_{[-i,i]} \sin(z)\,dz $$ Parametrizing the segment $[-i,i]$ I have, if $t\in[0,1]$ $$ z(t) = it + (1-t)(-i) = 2it-i, \quad \dot{z}(t) = 2i. $$ So $$ \int_{[-i,i]} \sin(z)\,dz = \int_0^1 \sin(2it-i)2i\, dt = -\cos(2it-i)|_0^1 = 0. $$ Am I correct?
Sine is an analytic function, so you can also use Cauchy's theorem. $$ \int_{[-i, i]} \sin z\ dz = -\left . \cos z \right |_{-i}^i = 0 $$
Representing the statement using Quantifiers I want to represent the statement "Some numbers are not real" using quantifiers. I have been told by my teacher that the correct way to represent this is num(x) : x is a number real(x) : x is real ∃x (num(x) ^ ¬real(x)) This made sense; I can translate this statement into "There exists some x such that x is a number and x is not real." But the answer I came up with myself is this one: ∃x (num(x) => ¬real(x)) In translation: there exists some x such that if x is a number then x is not real. I just can't get my head around why my answer is wrong: for some x, if x is a number then x is not real. Doesn't that sound very similar to the statement "Some numbers are not real"? In one of the video lectures I saw this example, which made me even more confused. "No dog is intelligent" dog(x) : x is a dog intel(x) : x is intelligent The representation was ∀x (dog(x) ==> ¬intel(x)) If this representation is true, how is my representation of "Some numbers are not real" wrong? PS : I am just a beginner at discrete math finding my way; please pardon me if the question doesn't meet the quality standards of the community.
Your version is wrong because "A and B" is not the same as "if A, then B". For instance, there exists a horse H such that if H is 50 feet tall, then I win the lottery. It is sadly not true that there exists a horse H such that H is 50 feet tall and I win the lottery. More pointedly, (A implies B) is true when A is false. Therefore, "there exists X such that if X is a number, then X is not real" is true when X, for instance, is a banana.
Does every non-trivial element of $\mathbb{Z}_p$ generate the group? This just popped up in my head and I just wanted to make sure if I'm right. Every element (except the identity element $0$) of the group $\mathbb{Z}_p$ (under addition and $p$ is prime) is a generator for the group. For example, $\mathbb{Z}_5 = \langle 1 \rangle = \langle 2 \rangle = \langle 3 \rangle = \langle 4 \rangle$. Thanks!
You can show that $\mathbb{Z}_p$ is a field, so if $H= \langle h \rangle$ with $h \neq 0$, $1 \in H$ since $h$ is invertible; you deduce that $H= \mathbb{Z}_p$, ie. $h$ is a generator.
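An alternative quick argument, in case the field structure has not been introduced yet (added as a supplementary note): by Lagrange's theorem the order of $\langle h\rangle$ divides $p$, so $$|\langle h\rangle|\;\big|\;p\quad\text{and}\quad|\langle h\rangle|>1\;\Longrightarrow\;|\langle h\rangle|=p\;\Longrightarrow\;\langle h\rangle=\mathbb{Z}_p.$$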
What are zigzag theories, and why are they called that? I've encountered the term zigzag theory while randomly clicking my way through the internet. It is given here. I haven't been able to find a clear explanation of what constitutes a zigzag theory. Here, it is said that they have to do with non-Cantorian sets, which, as I understand, are sets that fail to satisfy Cantor's theorem. The article also says that New Foundations is a zigzag theory, but I don't see that it says why exactly that is so. I have gone through the Wikipedia article on New Foundations, and there's nothing about zigzags in it. So what makes a zigzag theory? And what's zigzagging about it?
As a footnote to Arthur Fischer, here's an additional quote from Michael Potter's Set Theory and its Philosophy: In 1906, Russell canvassed three forms a solution to the paradoxes might take: the no-class theory, limitation of size, and the zigzag theory. It is striking that a century later all of the theories that have been studied in any detail are recognizably descendants of one or other of these. Russell’s no-class theory became the theory of types, and the idea that the iterative conception is interpretable as a cumulative version of the theory of types was explained with great clarity by Gödel in a lecture he gave in 1933, ... although the view that it is an independently motivated notion rather than a device to make the theory more susceptible to metamathematical investigation is hard to find in print before Gödel 1947. The doctrine of limitation of size ... has received rather less philosophical attention, but the cumulatively detailed analysis in Hallett 1984 can be recommended. The principal modern descendants of Russell’s zigzag theory -- the idea that a property is collectivizing provided that its syntactic expression is not too complex -- are Quine’s two theories NF and ML. Research into their properties has always been a minority sport: for the current state of knowledge consult Forster 1995. What remains elusive is a proof of the consistency of NF relative to ZF or any of its common strengthenings.
Does a symmetric matrix with main diagonal zero get classified into a separate type of its own? And does it have a particular name? I have a symmetric matrix as shown below $$\begin{pmatrix} 0&2&1&4&3 \\ 2&0&1&2&1 \\ 1&1&0&3&2 \\4&2&3&0&1 \\ 3&1&2&1&0\end{pmatrix}$$ Does this matrix belong to a particular type? I am a CS student and not familiar with the types of matrices. I am researching the particular matrix type, since I have a huge collection of matrices similar to this one. By knowing the type of matrix, maybe I can go through its properties and work out easier ways to process the data efficiently. I am working on a research project in Data Mining. Please help. P.S.: Only the diagonal elements are zero. The non-diagonal elements are positive.
This is a hollow matrix. Since its trace is zero, the sum of its eigenvalues equals zero.
What is the probability that a random $n\times n$ bipartite graph has an isolated vertex? By a random $n\times n$ bipartite graph, I mean a random bipartite graph on two vertex classes of size $n$, with the edges added independently, each with probability $p$. I want to find the probability that such a graph contains an isolated vertex. Let $X$ and $Y$ be the vertex classes. I can calculate the probability that $X$ contains an isolated vertex by considering one vertex first and using the fact that vertices in $X$ are independent. But I don't know how to calculate the probability that $X\cup Y$ contains an isolated vertex. Can someone help? Thanks!
This can be done using inclusion/exclusion. We have $n+n$ conditions for the individual vertices being isolated. There are $\binom nk\binom nl$ combinations of these conditions that require $k$ particular vertices in $X$ and $l$ particular vertices in $Y$ to be isolated, and the probability for this is $q^{kn+ln-kl}$, with $q=1-p$. Thus by inclusion/exclusion the desired probability that at least one vertex is isolated is \begin{align} &1-\sum_{k=0}^n\sum_{l=0}^n(-1)^{k+l}\binom nk\binom nlq^{kn+ln-kl}\\ ={}&1-\sum_{k=0}^n(-1)^k\binom nkq^{kn}\sum_{l=0}^n(-1)^l\binom nlq^{ln-kl}\\ ={}&1-\sum_{k=0}^n(-1)^k\binom nkq^{kn}\left(1-q^{n-k}\right)^n\\ ={}&1-\sum_{k=0}^n(-1)^k\binom nk\left(q^k-q^n\right)^n\;. \end{align}
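A quick sanity check of the final formula at $n=1$ (my own verification, not part of the original derivation): $$1-\sum_{k=0}^{1}(-1)^k\binom1k\left(q^k-q\right)^1=1-\big[(1-q)-(q-q)\big]=q,$$ which matches the direct count: with one vertex on each side there is a single potential edge, and some vertex is isolated exactly when that edge is absent, i.e. with probability $q$.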
Prove By Mathematical Induction (factorial-to-the-fourth vs power of two) Prove $(n!)^{4}\le2^{n(n+1)}$ for $n = 0, 1, 2, 3,...$ Base Step: $(0!)^{4} = 1 \le 2^{0(0+1)} = 1$ IH: Assume that $(k!)^{4} \le 2^{k(k+1)}$ for some $k\in\mathbb N$. Induction Step: Show $((k+1)!)^{4} \le 2^{(k+1)((k+1)+1)}$ Proof: $((k+1)!)^{4} = (k+1)^{4}\cdot(k!)^{4}$ by the definition of factorial. $$\begin{align*} (k+1)^{4}\cdot(k!)^{4} &\le (k+1)^{4}\cdot2^{k(k+1)}\\ &\le (k+1)^{4}\cdot2^{(k+1)((k+1)+1)} \end{align*}$$ by the IH. That is as far as I have been able to get at this point... Please help! Any suggestions or comments are greatly appreciated.
You are doing well up to $(k+1)^4*(k!)^4 \le (k+1)^4*2^{k(k+1)}$ That is the proper use of the induction hypothesis. Now you need to argue $(k+1)^4 \le \frac {2^{(k+1)(k+2)}}{2^{k(k+1)}}=2^{(k+1)(k+2)-k(k+1)}=2^{2(k+1)}$
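One way to finish from here (a sketch, including the small-case check it requires): the remaining inequality $(k+1)^4\le2^{2(k+1)}$ is equivalent to $(k+1)^2\le2^{k+1}$, and $n^2\le2^n$ holds for every $n\ge4$ (it fails only at $n=3$). So verify the original statement directly for $n=0,1,2,3$, for instance $$(3!)^4=1296\le4096=2^{3\cdot4},$$ and then run the induction step only for $k\ge3$, where $(k+1)^2\le2^{k+1}$ is available.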
Relation between Galois representation and rational $p$-torsion Let $E$ be an elliptic curve over $\mathbb{Q}$. Does the image of $\operatorname{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})$ under the mod $p$ Galois representation tell us whether or not $E$ has rational $p$-torsion or not?
Yes, it does, and in a rather straightforward way. The $p$-torsion in $E(\mathbb{Q})$ is precisely the fixed vectors under the Galois action. In particular, $E$ has full rational $p$-torsion if and only if the mod $p$ representation is trivial.
Lagrangian subspaces Let $\Lambda_{n}$ be the set of all Lagrangian subspaces of $C^{n}$, and $P\in \Lambda_{n}$. Put $U_{P} = \{Q\in \Lambda_{n} : Q\cap (iP)=0\}$. There is an assertion that the set $U_{P}$ is homeomorphic to the real vector space of all symmetric endomorphisms of $P$. And then in the proof of it there is a fact that the subspaces $Q$ that intersect $iP$ only at $0$ are the graphs of the linear maps $\phi : P\to iP$. This is what I don't understand, any explanation or reference where I can find it would be helpful.
Remember these are Lagrangians and thus half-dimensional. It's easiest to see what is going on if you take $P = \mathbb{R}^n$. This simplifies notation and also is somewhat easier to understand, imo. We are given a Lagrangian subspace $Q$, transverse to $i \mathbb{R}^n$. Then, consider the linear map $Q \to \mathbb{R}^n$ by taking a $z \in Q$ and mapping to the real part (i.e. projecting to $P$ along $iP$). This map is injective (by the transversality assumption) and is thus an isomorphism. The inverse of this map takes a point $x \in \mathbb{R}^n$ and constructs a $y(x) \in \mathbb{R}^n$ so $x + i y(x) \in Q$. The map $y \colon \mathbb{R}^n \to i \mathbb{R}^n$ is what you are looking for.
Why isn't $\lim \limits_{x\to\infty}\left(1+\frac{1}{x}\right)^{x}$ equal to $1$? Given $\lim \limits_{x\to\infty}(1+\frac{1}{x})^{x}$, why can't you reduce it to $\lim \limits_{x\to\infty}(1+0)^{x}$, making the result "$1$"? Obviously, it's wrong, as the true value is $e$. Is it because the $\frac{1}{x}$ is still something even though it's really small? Then why is $$\lim_{x\to\infty}\left(\frac{1}{x}\right) = 0\text{?}$$ What is the proper way of calculating the limit in this case?
In the expression $$\left(1+\frac{1}{x}\right)^x,$$ the $1+1/x$ is always bigger than one. Furthermore, the exponent is going to $\infty$ and (I suppose) that any number larger than one raised to infinity should be infinity. Thus, you could just as easily ask, why isn't the limit infinity? Of course, the reality is that the $1/x$ going to $0$ in the base pushes the limit down towards $1$ (as you observe) and that the $x$ going to $\infty$ in the exponent pulls the limit up to $\infty$. There's a balance between the two and the limit lands in the middle somewhere, namely at the number $e$, a fact explained many times in this forum.
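As for "the proper way of calculating the limit," a standard route (sketched here for completeness) is to take logarithms: $$\ln\left[\left(1+\frac1x\right)^x\right]=\frac{\ln\!\left(1+\frac1x\right)}{1/x}\xrightarrow[x\to\infty]{}1,$$ for instance by l'Hôpital's rule or from $\ln(1+u)=u+O(u^2)$ as $u\to0$, so the original limit equals $e^1=e$.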
Minimal value of a polynomial I do not know whether the following statement is true or not: Given $1<x_0<2$, there exists $\delta>0$ such that for any $n$, defining $A=\{ f(x)=\sum\limits_{i=0}^{n}a_ix^i\}$ where $a_i\in\{0\,,1\}$, for any $f\,,g \in A$ of the same degree we have $\delta\leq|f(x_0)-g(x_0)|$ or $f(x_0)=g(x_0)$
Let $x_0$ be a real root of the polynomial $1-X^2-X^3+X^4$ in the interval $(1,2)$; it exists by the intermediate value theorem because this polynomial has value $0$ and derivative $-1$ at $X=1$, and value $5$ at $X=2$. Then with $f(x)=1+x^4$ and $g(x)=x^2+x^3$ one has $f(x_0)=g(x_0)$, so no positive $\delta\leq|f(x_0)-g(x_0)|=0$ can exist. I might add that the fact that you get answers that are looking at cases where $f(x_0)-g(x_0)=0$, it is because obviously this is precisely the only thing that can prevent $\delta$ from existing, so the whole formulation with $\delta$ seems a bit pointless.
Show $\det \left[T\right]_\beta=-1$, for any basis $\beta$ when $Tx=x-2(x,u)u$, $u$ unit vector Let $u$ be a unit vector in an $n$ dimensional inner product space $V$. Define the orthogonal operator $$ Tx= x - 2 (x,u)u, $$ where $x \in V$. Show $$ \det A = -1, $$ whenever $A$ is a matrix representation of $T$. I can show that $\det A=\pm1$ $$ 1=\det(I)=\det(A^tA)=(\det A)^2. $$ Then I guess I should extend $u$ to a basis $\beta$ for $V$ $$ \beta=\{u,v_1,v_2,\ldots,v_{n-1}\}. $$ I don't see what to do after this.
To complement rschwieb's answer, he is giving you a way to determine a specific basis in which to perform your calculation easily. Then recall that if $A$ and $A'$ are two matrices that represent the same linear transformation in two different bases, then $A' = P^{-1} A P$ where $P$ is a change of basis matrix. In particular, this implies that $\det(A') = \det(A)$ (why?).
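To spell out the computation this sets up (a sketch; the basis below is the standard choice): extend $u$ to an orthonormal basis $\{u,v_1,\dots,v_{n-1}\}$ of $V$ (Gram–Schmidt). Then $$Tu=u-2(u,u)u=-u,\qquad Tv_i=v_i-2(v_i,u)u=v_i,$$ so in this basis the matrix of $T$ is $\operatorname{diag}(-1,1,\dots,1)$, whose determinant is $-1$. Since any other matrix representation of $T$ is similar to this one, it has the same determinant.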
In how many ways can five letters be posted in 4 boxes? Question : In how many ways can 5 letters be posted in 4 boxes? Answer 1: We take a letter. It can be posted in any of the 4 boxes. Similarly, the next letter can be posted in 4 ways, and so on. Total number = $4^5$. Answer 2: Among the total number of positions of 5 letters and 3 dividers (between 4 boxes), the positions of the 3 dividers can be selected in $\binom{8}{3}$ ways. Which one is correct (I think the first one)? Why is the other one wrong, and how does it differ from the right answer from a logical/combinatorial perspective?
The way to understand this problem (and decide which answer is correct) is to ask what you are counting exactly. If all the letters are distinct then the first answer sounds better because it counts arrangements that only differ by where each particular letter goes. The second answer appears to make the letters and the dividers distinct, as well as counting as distinct the cases where two letters appear in a box in a different order. These seem to be counting different things. If you modify the second answer to place 3 identical dividers in a row of 5 identical letters, (choose 3 from 6 ‘gaps’), then it works fine. But those ‘identical’s are not correct (are they?). If we tried to ‘correct’ the second solution by multiplying by rearrangements (to take into account all possible rearrangements of letters, say) we would discover that some rearrangements would be introduced that weren’t supposed to be distinct. No, the second approach is wrong-headed: it counts the wrong things and it gets very complicated when you try to correct it. So the answer is “the first solution is correct”. This applies when the letters are distinct and the boxes are distinct.
Finding the Laplace Transform of sin(t)/t I'm in a Differential Equations class, and I'm having trouble solving a Laplace Transformation problem. This is the problem: Consider the function $$f(t) = \begin{cases}\dfrac{\sin(t)}{t} & t \neq 0\\ 1 & t = 0\end{cases}$$ a) Using the power series (Maclaurin) for $\sin(t)$ - Find the power series representation for $f(t)$ for $t > 0.$ b) Because $f(t)$ is continuous on $[0, \infty)$ and clearly of exponential order, it has a Laplace transform. Using the result from part a) (assuming that linearity applies to an infinite sum) find $\mathfrak{L}\{f(t)\}$. (Note: It can be shown that the series is good for $s > 1$) There's a few more sub-problems, but I'd really like to focus on b). I've been able to find the answer to a): $$ 1 - \frac{t^2}{3!} + \frac{t^4}{5!} - \frac{t^6}{7!} + O(t^8)$$ The problem is that I'm awful at anything involving power series. I have no idea how I'm supposed to continue here. I've tried using the definition of the Laplace Transform and solving the integral $$\int_0^\infty e^{-st}\cdot\frac{\sin(t)}{t} dt$$ However, I just end up with an unsolvable integral. Any ideas/advice?
Just an small hint: Theorem: If $\mathcal{L}\{f(t)\}=F(s)$ and $\frac{f(t)}{t}$ has a laplace transform, then $$\mathcal{L}\left(\frac{f(t)}{t}\right)=\int_s^{\infty}F(u)du$$
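Carrying the hint through (a sketch, valid for $s>0$ and consistent with the series answer for $s>1$): with $F(s)=\mathcal{L}\{\sin t\}=\dfrac{1}{s^2+1}$, $$\mathcal{L}\left\{\frac{\sin t}{t}\right\}=\int_s^{\infty}\frac{du}{u^2+1}=\frac{\pi}{2}-\arctan s=\arctan\frac1s.$$ Expanding $\arctan(1/s)$ in powers of $1/s$ reproduces exactly the term-by-term transform of the Maclaurin series from part (a).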
Characteristic Polynomial of a Linear Map I am hoping for some help with this question from a practice exam I am doing before a linear algebra final. Let $T_1, T_2$ be the linear maps from $C^{\infty}(\mathbb{R})$ to $C^{\infty}(\mathbb{R})$ given by $$T_1(f)=f'' - 3f' + 2f$$ $$T_2(f)=f''-f'-2f$$ (a) Write out the characteristic polynomials for $T_1$ and $T_2$. (b) Compute the composition (c) Find a vector in $\ker(T)$ which is not a linear combination of vectors in $\ker(T_1)$ and $\ker(T_2)$. I know that the characteristic polynomial of an $n \times n$ matrix is the expansion of $$\det(A - I \lambda ).$$ Where I'm stuck here is finding the matrices for $T_1$ and $T_2$. I know it's silly, but I am used to applying these ideas to transformations of the form $T_A: \mathbb{R}^n \to \mathbb{R}^m$, where $A$ is an $m \times n$ matrix and $T_A(\mathbf{x})= A \mathbf{x}$. Then $A$ is given by $$A=[T(\mathbf{e}_1) \vdots T(\mathbf{e}_2) \vdots \cdots \vdots T(\mathbf{e}_n)].$$ Then I would find the characteristic polynomial of $A$. Now, I know that these ideas apply to abstract vector spaces like $C^{\infty}(\mathbb{R})$, but for some reason I cannot bridge the gap intuitively in the case of these transformations. My textbook covers finding the matrix of a transformation for vectors in $\mathbb{R}^n$ but not for abstract vector spaces. I am having the same problem with another question from the same practice exam: Let $P_2$ be the vector space of polynomials of degree $\leq 2$, and define a linear transformation $T: P_2 \to P_2$ by $T(f)=(x+3)f' + 2f$. Write the matrix for $T$ with respect to the basis $\beta = (1,x,x^2)$. Once again I do not know how to write the matrix for $T$. I would really appreciate any help understanding these concepts. Thanks very much.
Here's how to solve the second boxed problem. First, for every $v\in\beta$, write $T(v)$ in the basis $\beta$: $$ \begin{align} T(1) &= 2\cdot 1+0\cdot x+0\cdot x^2 \\ T(x) &= 3\cdot 1+3\cdot x+0\cdot x^2 \\ T(x^2) &= 0\cdot 1+6\cdot x+3\cdot x^2 \end{align} $$ Now, the scalars appearing in these equations become the columns of $[T]_\beta$: $$ [T]_\beta= \begin{bmatrix} 2 & 3 & 0 \\ 0 & 3 & 6 \\ 0 & 0 & 3 \end{bmatrix} $$ Of course, this matrix encodes a lot of information about the linear transformation $T$. For example, we can read off the eigenvalues as $2$ and $3$.
Integration function spherical coordinates, convolution How can I calculate the following integral explicitly: $$\int_{R^3}\frac{f(x)}{|x-y|}dx$$ where $f$ is a function with spherical symmetry that is $f(x)=f(|x|)$? I tried to use polar coordinates at $x=0$ but it didn't help. Any idea on how to do this? Do you think it is doable somehow?
This is a singular integral and so you can expect some weird behavior but if $f$ has spherical symmetry, then I would change to spherical coordinates. Then you'll have $f(x) = f(r)$. $dx$ will become $r^2\sin(\theta)drd\theta d\phi$. The tricky part is then what becomes of $|x-y|$. Recall that $|x-y| = \sqrt{(x-y)\cdot(x-y)} = \sqrt{|x|^2-2x\cdot y+|y|^2}$. In our case, $|x|^2 = r^2$ and $x = (r\sin(\theta)\cos(\phi), r\sin(\theta)\sin(\phi), r\cos(\theta))$. From here, I'm not sure how much simplification there can be. What more are you looking for?
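If it helps, the angular integral can actually be done in closed form (a sketch, assuming $f$ decays fast enough for the integrals to converge): with $\rho=|y|$ and the substitution $w=\cos\theta$, $$\int_0^{\pi}\frac{\sin\theta\,d\theta}{\sqrt{r^2+\rho^2-2r\rho\cos\theta}}=\int_{-1}^{1}\frac{dw}{\sqrt{r^2+\rho^2-2r\rho w}}=\frac{(r+\rho)-|r-\rho|}{r\rho}=\frac{2}{\max(r,\rho)},$$ so that $$\int_{\mathbb{R}^3}\frac{f(|x|)}{|x-y|}\,dx=4\pi\int_0^{\infty}\frac{f(r)\,r^2}{\max(r,\rho)}\,dr=\frac{4\pi}{\rho}\int_0^{\rho}f(r)\,r^2\,dr+4\pi\int_{\rho}^{\infty}f(r)\,r\,dr.$$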
introductory reference for Hopf Fibrations I am looking for a good introductory treatment of Hopf Fibrations and I am wondering whether there is a popular, well regarded, accessible book. ( I should probably say that I am just starting to learn about vector bundles. ) If anyone with more experience could point me in the right direction this would be really helpful.
These notes might also add some motivation to the topic coming in from Physics: http://www.itp.uni-hannover.de/~giulini/papers/DiffGeom/Urbantke_HopfFib_JGP46_2003.pdf
Free modules have no infinitely divisible elements Let $F$ be a free $\mathbb Z$-module. How can we show that $F$ has no non-zero infinitely divisible element? (An element $v$ in $F$ is called infinitely divisible if the equation $nx = v$ has solutions $x$ in $F$ for infinitely many integers $n$.)
By definition, $F$ has a basis $(b_i)_{i \in I}$. Suppose $$ v = a_1 b_1 + \dots + a_k b_k, $$ for $a_i \in \Bbb{Z}$, is divisible by infinitely many $n$. Choose $n$ positive, larger than all the $\lvert a_i \rvert$, so that $v$ is divisible by $n$. If $v = n x$ for $$ x = x_1 b_1 + \dots + x_k b_k $$ then $n x_i = a_i$ for all $i$, as the $b_i$ are a basis, which implies all $a_i = 0$, as $n > \lvert a_i \rvert$. Thus $v = 0$.
Prove that a sequence diverges Let $b > 1$. Prove that the sequence $\frac{b^n}{n}$ diverges to $\infty$ I know that I need to show that $\dfrac{b^n}{n} \geq M $, possibly by solving for $n$, but I am not sure how. If I multiply both sides by $n$, you get $b^n \geq Mn$, but I don't know if that is helpful.
You could use L'Hospital's rule: $$ \lim_{n\rightarrow\infty}\frac{b^n}{n}=\lim_{n\rightarrow\infty}\frac{b^n\log{b}}{1}=\infty $$
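Since the question explicitly asks to exhibit $b^n/n\ge M$, here is a direct argument avoiding l'Hospital (a sketch): write $b=1+c$ with $c>0$. For $n\ge2$ the binomial theorem gives $$b^n=(1+c)^n\ge\binom{n}{2}c^2=\frac{n(n-1)}{2}c^2,\qquad\text{so}\qquad\frac{b^n}{n}\ge\frac{(n-1)c^2}{2},$$ which exceeds any given $M$ as soon as $n>1+\dfrac{2M}{c^2}$.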
Finding the limit of function - irrational function How can I find the following limit: $$ \lim_{x \rightarrow -1 }\left(\frac{1+\sqrt[5]{x}}{1+\sqrt[7]{x}}\right)$$
Let $x=t^{35}$. As $x \to -1$, we have $t \to-1$. Hence, $$\lim_{x \to -1} \dfrac{1+\sqrt[5]{x}}{1+\sqrt[7]{x}} = \lim_{t \to -1} \dfrac{1+t^7}{1+t^5} = \lim_{t \to -1} \dfrac{(1+t)(1-t+t^2-t^3+t^4-t^5+t^6)}{(1+t)(1-t+t^2-t^3+t^4)}$$ I am sure you can take it from here.
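For completeness, evaluating the last expression at $t=-1$ (just the final arithmetic): $$\lim_{t\to-1}\frac{1-t+t^2-t^3+t^4-t^5+t^6}{1-t+t^2-t^3+t^4}=\frac{7}{5}.$$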
Is it always true that $\lim_{x\to\infty} [f(x)+c_1]/[g(x)+c_2]= \lim_{x\to\infty}f(x)/g(x)$? Is it true that $$\lim\limits_{x\to\infty} \frac{f(x)+c_1}{g(x)+c_2}= \lim\limits_{x\to\infty} \frac{f(x)}{g(x)}?$$ If so, can you prove it? Thanks!
Think of it this way: the equality is true only when $f(x), g(x)$ completely 'wash out' the additive constants at infinity. To be more precise, suppose $f(x), g(x) \rightarrow \infty$. Then $$ \frac{f(x) + c_1}{g(x) + c_2} = \frac{f(x)}{g(x)} \frac{1 + c_1/f(x)}{1 + c_2 / g(x)} $$ In the limit as $x \rightarrow \infty$, the right-hand factor goes to 1, and so the left-hand quantity approaches the same limit as $f(x) / g(x)$.
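A small example showing why some hypothesis like $f,g\to\infty$ is needed (my own counterexample): take $f(x)=g(x)=1/x$, $c_1=1$, $c_2=2$. Then $$\lim_{x\to\infty}\frac{f(x)+1}{g(x)+2}=\frac12\qquad\text{but}\qquad\lim_{x\to\infty}\frac{f(x)}{g(x)}=1.$$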
elementary ring proof with composition of functions and addition of functions as operations Consider the set $\mathcal{F}=\lbrace f\mid \mathcal{f}:\Bbb{R}\to\Bbb{R}\rbrace$, in which an additive group is defined by addition of functions and a second operation defined as composition of functions. The question asks to verify that the resulting structure does not satisfy the ring properties. This is the question 24.10 from Modern Algebra, by Durbin, 4th edition. So what I have so far is that all of the properties for the additive group (with addition of functions as the operation here) hold, associativity of the composition of functions hold, and that the failure must come from the distributive laws. My proof of my claim, so far; $$1\,\,\,\,\,[(\mathcal{f}\circ\mathcal{g})+\mathcal{h}](\mathcal{x})=\cdots=\mathcal{f}(\mathcal{g}(\mathcal{x}))+\mathcal{f}(\mathcal{h}(\mathcal{x})).$$ ...but, $$2\,\,[(\mathcal{f}\circ\mathcal{g})+\mathcal{h}](\mathcal{x})=$$ $$3\,\,(\mathcal{f}\circ\mathcal{g})(\mathcal{x})+\mathcal{h}(\mathcal{x})=$$ $$4\,\,\,\mathcal{f}(\mathcal{g}(\mathcal{x}))+\mathcal{h}(\mathcal{x}).$$ ...which brings me to my contradiction of the properties of rings that; $$5\,\,\,[\mathcal{f}\circ(\mathcal{g}+\mathcal{h})](\mathcal{x})\neq[(\mathcal{f}\circ\mathcal{g})+\mathcal{h}](\mathcal{x}).$$ So my question is, is this the correct path to showing the distributive laws are not obeyed when composition is taken as an operation on $\mathcal{F}$? particularly my proceeding from 2 to 3 to 4 is what i feel uncomfortable about, like i may be missing a step in there. thanks
Taking on the suggestion of Sammy Black, consider $g = h = \mathbf{1} = \text{the identity function}$, and $f$ any function. Suppose $f \circ (\mathbf{1} + \mathbf{1}) = f \circ \mathbf{1} + f \circ \mathbf{1} = f + f = 2 f$. So for all $x \in \Bbb{R}$ you should have $f( 2 x) = 2 f(x)$. Now think of a function $f$ which does not satisfy this. (Variation. Take $g(x) = a$ and $h(x) = b$ for two arbitrary constants $a, b \in \mathbf{R}$. If $f \circ (g + h) = f \circ g + f \circ h$ holds, then $f(a + b) = f(a) + f(b)$ for all $a, b \in \Bbb{R}$. Now take a non-additive $f$.) So the distributive property $f \circ (g + h) = f \circ g + f \circ h$ does not generally hold. On the other hand, the other distributive property $(g + h) \circ f = g \circ f + h \circ f$ always holds. In fact for all $x$ one has $$ \begin{align} ((g + h) \circ f) (x) &= (g + h) ( f(x)) \\&= g(f(x)) + h(f(x)) \\&= (g \circ f) (x) + (h \circ f) (x) \\&= (g \circ f + h \circ f) (x). \end{align} $$
An infinite union of closed sets is a closed set? Question: $\{B_n\}$ is a family of closed subsets of $\Bbb R$. Prove that $\cup _{n=1}^\infty B_n$ is not necessarily a closed set. What I thought: Using a counterexample: If I say that each $B_i$ is the set of all numbers in the range $[i,i+1]$, then I can pick a sequence $a_n \in \cup _{n=1}^\infty B_n$ s.t. $a_n \to \infty$ (because eventually the set includes all sufficiently large positive reals), and since $\infty \notin \Bbb R$, then $\cup _{n=1}^\infty B_n$ is not a closed set. Is this proof correct? Thanks
But since $\infty\notin\Bbb R$ we cannot use it as a counterexample. To see that is indeed the case note that your union is the set $[1,\infty)$, which is closed. Why is it closed? Recall that $A\subseteq\Bbb R$ is closed if and only if every convergent sequence $a_n$ whose elements are from $A$, has a limit inside $A$. So we need sequences whose limits are real numbers to begin with, and so sequences converging to $\infty$ are of no use to us. On the other hand, if $a_n\geq 1$ then their limit is $\geq 1$, so $[1,\infty)$ is closed. You need to try this with bounded intervals whose union is bounded. Show that in such case the result is an open interval. Another option is to note that $\{a\}$ is closed, but $\Bbb Q$ is the union of countably many closed sets. Is $\Bbb Q$ closed?
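In case a worked version of that hint is useful (a standard example, added for illustration): $$\bigcup_{n=1}^{\infty}\left[\tfrac1n,\,1\right]=(0,1],$$ which is not closed, since the sequence $a_n=\tfrac1n$ lies in the union but its limit $0$ does not.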
General solution of $yy''+2y'''=0$ How do you derive the general solution of $yy''+2y'''= 0$? Please help me derive the solution. Thanks a lot.
NB: This is a general way to reduce the order of the equation. This doesn't solve your question as is, but rather gives you a starting point. In this equation, the variable $t$ does not appear, hence one can substitute: $$y'=p(y), \hspace{7pt}y''=p'(y)\cdot y'=p'p,\hspace{7pt}y'''=p''\cdot y'\cdot p+p'\cdot p'\cdot y'=p''p^2+(p')^2p$$ (where $y'=\frac{dy}{dt}$, while $p'=\frac{dp}{dy}$). Then your equation turns into $$yp'p+2p''p^2+2(p')^2p=0 \hspace{7pt}\Rightarrow\hspace{7pt} yp'+2p''p+2(p')^2=0$$ Where the derivatives are w.r.t $y$ - this gives you an equation of the second order.
General solution of a differential equation $x''+{a^2}x+b^2x^2=0$ How do you derive the general solution of this equation: $$x''+{a^2}x+b^2x^2=0$$ where $a$ and $b$ are constants? Please help me derive the solution. Thanks a lot.
First make the substitution: $x=-\,{\frac {6y}{{b}^{2}}}-\,{\frac {{a}^{2}}{{2b}^{2}}}$ This will give you the differential equation: $y^{''} =6y^{2}-\frac{a^{4}}{24}$ which is to be compared with the second order differential equation for the Weierstrass elliptic function ${\wp}(t-\tau_{0},g_2,g_3)$: ${\wp}^{''} =6{\wp}^{2}-\frac{g_{2}}{2}$ Where $g_{2}$, $g_3$ are known as elliptic invariants. It then follows that the solution is given by: $y={\wp}(t-\tau_{0},\frac{a^{4}}{12},g_3)$ $x=\,-{\frac {6}{{b}^{2}}}{\wp}(t-\tau_{0},\frac{a^{4}}{12},g_3)-\,{\frac {{a}^{2}}{{2b}^{2}}}$ Where $\tau_0$ and $g_3$ are constants determined by the initial conditions and $t$ is the function variable.
Prove that a tensor field is of type (1,2) Let $J\in\operatorname{End}(TM)=\Gamma(TM\otimes T^*M)$ with $J^2=-\operatorname{id}$ and for vector fields $X,Y$ on $M$, let $$N(X,Y):=[JX,JY]-J\big([JX,Y]+[X,JY]\big)-[X,Y].$$ Prove that $N$ is a tensor field of type (1,2). Since I heard $N$ is the Nijenhuis tensor and saw its component formula, I have tried proving it by starting with the component formula for $N^k_{ij}$, showing that for $X=X^i\frac{\partial}{\partial x^i}$ and $Y=Y^i\frac{\partial}{\partial x^i}$ $$ N(X,Y)=N^k_{ij}\frac{\partial}{\partial x^k}\otimes\mathrm{d}x^i\otimes\mathrm{d}x^j\left(X,Y\right).$$ But I am only able to show \begin{align} N(X,Y) &=\underbrace{\big(J^m_iX^i\frac{\partial}{\partial x^m}(J^k_jY^j)-J^m_jY^j\frac{\partial}{\partial x^m}(J^k_iX^i)\big)\frac{\partial}{\partial x^k}}_{=[JX,JY]}-\underbrace{J^k_m\left(\frac{\partial}{\partial x^i}J^m_j-\frac{\partial}{\partial x^j}J^m_i\right)X^iY^j\frac{\partial}{\partial x^k}}_{=?}. \end{align} Anyway, I assume this way to be pretty dumb (although I would like to know if it is correct so far and how to go on from there), so I would like to know if there is a more elegant (and for general (k,l)-tensor fields better) way to prove the type of $N$ than writing it locally.
Being a tensor field means that you have to show $C^{\infty}(M)$-linearity. (1,2)-tensor just means that it eats two vector fields and spits out another vector field, which is obvious from the definition. So look at $N(X,fY)$ for $f \in C^{\infty}(M)$ and use the properties of the Lie bracket to show $N(X,fY)=fN(X,Y)$. Note: $C^{\infty}$-linearity implies that $N(X,Y)|_p$ just depends on $X_p$ and $Y_p$, in contrast to $\nabla_YX|_p$ in the upper entry.
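For reference, that computation runs as follows (a sketch, written with the usual sign convention $N(X,Y)=[JX,JY]-J[JX,Y]-J[X,JY]-[X,Y]$ and using $[X,fY]=f[X,Y]+(Xf)Y$). Expanding each bracket gives $$[JX,fJY]=f[JX,JY]+\big((JX)f\big)JY,\qquad -J[JX,fY]=-fJ[JX,Y]-\big((JX)f\big)JY,$$ $$-J[X,fJY]=-fJ[X,JY]-(Xf)J^2Y=-fJ[X,JY]+(Xf)Y,\qquad -[X,fY]=-f[X,Y]-(Xf)Y,$$ where $J^2=-\operatorname{id}$ was used in the third identity. Summing, every term containing a derivative of $f$ cancels, leaving $N(X,fY)=fN(X,Y)$. Linearity in the first slot is the same computation, so $N$ is $C^{\infty}(M)$-linear in both arguments and hence a tensor field.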
If f is integrable on $[a,b]$, prove $f^{q}$ is integrable on $[a,b]$ Let $q$ be a rational number. Suppose that $a < b,\ 0 < c < d $, and that $f : [a,b] \to [c,d] $. If $f$ is integrable on $[a,b]$, then prove that $f^{q}$ is integrable on $[a,b]$. I think that the proof involves the binomial theorem. My book has a proof showing that $f^2$ is integrable if $f$ is integrable, which I assume can be easily extended to $f^n$ for any integer $n$. I'm not sure exactly how to go about it from there though.
Hints: * *$f$ is Riemann integrable and $g$ is continuous then the composition $g \circ f$ -- when this makes sense --is Riemann integrable (Theorem 8.18 of these notes). *If $f: [a,b] \rightarrow [c,d]$ is continuous and monotone, then the inverse function $f^{-1}$ exists and is continuous (Theorem 5.39 of loc. cit.). Alternately, as with so many of these kinds of results, this is an immediate corollary of Lebesgue's Criterion for Riemann Integrability (Theorem 8.28 of loc. cit.). In fact this latter route takes care of irrational exponents as well.
Confusion about Banach Matchbox problem While trying to solve the Banach matchbox problem, I am getting a wrong answer. I don't understand what mistake I made. Please help me understand. The problem statement is presented below (Source: Here) Suppose a mathematician carries two matchboxes at all times: one in his left pocket and one in his right. Each time he needs a match, he is equally likely to take it from either pocket. Suppose he reaches into his pocket and discovers that the box picked is empty. If it is assumed that each of the matchboxes originally contained $N$ matches, what is the probability that there are exactly $k$ matches in the other box? My solution goes like this. Let's say pocket $1$ becomes empty. Now, we want to find the probability that pocket $2$ contains $k$ matches (or that $n-k$ matches have been removed from it; I also note that the Wikipedia solution does not consider the $1^{st}$ equality -- maybe that's where I am wrong?). Let $p = P[k\ \text{matches left in pocket}\ 2\ |\ \text{pocket 1 found empty}]$ = $\frac{P[k\ \text{matches left in pocket}\ 2\ \text{and pocket 1 found empty}]}{\sum_{i=0}^{n}P[i\ \text{matches left in pocket}\ 2\ \text{and pocket 1 found empty}]}$ = $\frac{\binom{2n-k}{n} \cdot \frac{1}{2^{2n-k}}} {\sum_{i=0}^{n}\binom{2n-i}{n} \cdot \frac{1}{2^{2n-i}}}$ In my $2^{nd}$ equality, I have written the probability of removing all matches from pocket $1$ and $n-k$ from pocket $2$ using Bernoulli trials with probability $\frac{1}{2}$. The denominator is a running sum over a similar quantity. Now, my answer to the original problem is $2p$ (the role of the pockets could be switched). I am unable to see what's wrong with my approach. Please explain. Thanks
Apart from doubling $p$ at the end, your answer is correct: your denominator is actually equal to $1$. It can be rewritten as $$\frac1{2^{2n}}\sum_{i=0}^n\binom{2n-i}n2^i=\frac1{2^{2n}}\sum_{m=n}^{2n}\binom{m}n2^{2n-m}=\frac1{2^{2n}}\sum_{i=0}^n\binom{n+i}n2^{n-i}\;,$$ and $$\begin{align*} \sum_{i=0}^n\binom{n+i}n2^{n-i}&=\sum_{i=0}^n\binom{n+i}n\sum_{k=0}^{n-i}\binom{n-i}k\\\\ &=\sum_{i=0}^n\sum_{k=0}^{n-i}\binom{n+i}i\binom{n-i}k\\\\ &=\sum_{k=0}^n\sum_{i=0}^{n-k}\binom{n+i}n\binom{n-i}k\\\\ &\overset{*}=\sum_{k=0}^n\binom{2n+1}{n+k+1}\\\\ &=\sum_{k=n+1}^{2n+1}\binom{2n+1}k\\\\ &=\frac12\sum_{k=0}^{2n+1}\binom{2n+1}k\\\\ &=2^{2n}\;, \end{align*}$$ where the starred step invokes identity $(5.26)$ of Graham, Knuth, & Patashnik, Concrete Mathematics. Thus, your result can be simplified to $$p=\binom{2n-k}n\left(\frac12\right)^{2n-k}\;.$$ And you don’t want to multiply this by $2$: no matter which pocket empties first, this is the probability that the other pocket still contains $k$ matches.
Does this $3\times 3$ matrix exist? Does a real $3\times 3$ matrix $A$ that satisfies conditions $\operatorname{tr}(A)=0$ and $A^2+A^T=I$ ($I$ is an identity matrix) exist? Thank you for your help.
[Many thanks to user1551 for this contribution.] First, let us show that the eigenvalues of $A$ must be real. The equation $A^2+A^T=I$ implies that $$A^T=I−A^2 \quad\text{and}\quad (A^T)^2+A=I.$$ Substituting the first equation into the second yields $$I−2A^2+A^4+A=I \quad\Longrightarrow\quad A^4−2A^2+A=A(A−I)(A^2+A−I)=0.$$ The eigenvalues must therefore satisfy $\lambda(\lambda−1)(\lambda^2+\lambda−1)=0$, which has the roots $\{0,1,(-1+\sqrt{5})/2,(-1-\sqrt{5})2\}$, all real. In fact, we will show that only the last two are possible. Let $A=QUQ^T$ be the Schur decomposition of $A$, where $Q$ is unitary and $U$ is upper triangular. Because the eigenvalues of $A$ are real, both $Q$ and $U$ are real as well. Then $$A^2+A^T=I\quad\Longrightarrow QUQ^TQUQ^T+QU^TQ^T=I\quad\Longrightarrow\quad U^2+U^T=I.$$ $U^2$ is upper triangular, and $U^T$ is lower triangular. The only way it is possible to satisfy the equation is if $U$ is diagonal. (Alternatively, $A$ commutes with $A^T$ because $A^T=I-A^2$. Therefore $A$ is a normal matrix. Since we have shown that all eigenvalues of $A$ are real, it follows that $A$ is orthogonally diagonalisable as $QUQ^T$ for some real orthogonal matrix $Q$ and real diagonal matrix $U$.) If $U$ is diagonal, then $A$ was symmetric; and the diagonal elements of $U$ are the eigenvalues of $A$. Each must separately satisfy $\lambda^2+\lambda-1=0$. So $\lambda=\frac{-1\pm\sqrt{5}}{2}$. But there are three eigenvalues, so one of them is repeated. There is no way for the resulting sum to be zero. Since the sum of the eigenvalues is equal to $\mathop{\textrm{Tr}}(A)$, this is a contradiction. EDIT: I'm going to add that it does not matter that the matrix is $3\times 3$. There is no square matrix of any size that satisfies the conditions put forth. There is no way to choose $n>0$ values from the set $\{(-1+\sqrt{5})/2,(-1-\sqrt{5})/2\}$ that have a zero sum.
Optimizing the area of a rectangle A rectangular field is bounded on one side by a river and on the other three sides by a fence. Additional fencing is used to divide the field into three smaller rectangles, each of equal area. 1080 feet of fencing is required. I want to find the dimensions of the large rectangle that will maximize the area. I had the following equations and I was wondering if they are correct: I let $Y$ denote the length of the rectangle. Then $Y=3y$. If $x$ represents the width of the three smaller rectangles, then I get the following: $$ 4x+3y = 1080,~~~\text{Area} = 3xy.$$
@Gorg, you are on the right track. You can either solve this problem using small y or large Y. Your equations are set up correctly with small y, and the answer I get if you want to compare with what you get is $$x=135 \text{ and}\ y=180$$. Good job :)
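For reference, the optimization itself runs as follows (a sketch using only the equations already set up in the question): from $4x+3y=1080$ we get $3y=1080-4x$, so $$A=3xy=x(1080-4x)=1080x-4x^2,\qquad A'(x)=1080-8x=0\;\Longrightarrow\;x=135,$$ hence $3y=540$, i.e. $y=180$. The large rectangle is therefore $x=135$ feet deep and $Y=3y=540$ feet long, with maximal area $135\cdot540=72{,}900$ square feet.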
What should be the intuition when working with compactness? I have a question that may be regarded by many as duplicate since there's a similar one at MathOverflow. * *In $\mathbb{R}^n$ the compact sets are those that are closed and bounded, however the guy who answered this question and had his answer accepted says that compactness is some analogue of finiteness. In my intuitive view of finiteness, only boundedness would suffice to say that a certain subset of $\mathbb{R}^n$ is in some sense "finite". On the other hand there's the other definition of compactness (in terms of covers) which is the one I really need to work with and I cannot see how that definition implies this intuition on finiteness. *To prove a set is compact I know they must show that for every open cover there's a finite subcover;the problem is that I can't see intuitively how one could show this for every cover. Also when trying to disprove compactness the books I've read start presenting strange covers that I would have never thought about. I think my real problem is that I didn't yet get the intuition on compactness. So, what intuition should we have about compact sets in general and how should we really put this definition to use? Can someone provide some reference that shows how to understand the process of proving (and disproving) compactness?
You may read various descriptions and consequences of compactness here. But be aware that compactness is a very subtle finiteness concept. The definitive codification of this concept is a fundamental achievement of $20^{\,\rm th}$ century mathematics. On the intuitive level, a space is a large set $X$ where some notion of nearness or neighborhood is established. A space $X$ is compact, if you cannot slip away within $X$ without being caught. To be a little more precise: Assume that for each point $x\in X$ a guard placed at $x$ could survey a certain, maybe small, neighborhood of $x$. If $X$ is compact then you can do with finitely many (suitably chosen) guards.
boundary map in the (M-V) sequence Let $K\subset S^3$ be a knot, $N(K)$ be a tubular neighborhood of $K$ in $S^3$, $M_K$ to be the exterior of $K$ in $S^3$, i.e., $M_K=S^3-\text{interior of }{N(K)}$. Now, it is clear that $\partial M_K=\partial N(K)=T^2$, the two dimensional torus, and when using the (M-V) sequence to the triad $(S^3, N(K)\cup M_K, \partial M_K)$ to calculate the homology group of $H_i(M_K, \mathbb{Z})$, $$H_3(\partial M_K)\to H_3(M_K)\oplus H_3(N(K))\to H_3(S^3)\overset{\partial_3}{\to}H_2(\partial M_K)\to H_2(M_K)\oplus H_2(N(K))\to H_2(S^3)\overset{\partial_2}{\to}\cdots$$ I have to figure out what the boundary map $\mathbb{Z}=H_3(S^3)\overset{\partial_3}{\to}H_2(\partial M_K)=\mathbb{Z}$ looks like. I think it should be the $\times 1$ map, but I do not know how to deduce it geometrically, so my question is: Q1, Is it true that in the above (M-V) sequence $~ \mathbb{Z}=H_3(S^3)\overset{\partial_3}{\to}H_2(\partial M_K)=\mathbb{Z}$ is $\times 1$ map? Why? Q2, If the above $K$ is a link with two components, then $\partial M_K=T^2\sqcup T^2$, do the same thing as above, what does $\mathbb{Z}=H_3(S^3)\overset{\partial_3}{\to}H_2(\partial M_K)=\mathbb{Z}\oplus \mathbb{Z}$ look like now?
The generator of $H_3(S^3)$ can be given by taking the closures of $M_K$ and $N(K)$ and triangulating them so the triangulations agree on the boundary, then taking the union of all the simplices as your cycle. The boundary map takes this cycle and sends it to the common boundary of its two chunks, which is exactly the torus, so the map is indeed 1. The map in the second question should be the diagonal map $(\times 1,\times 1)$.
disconnected/connected graphs Determine whether the statements below are true or false. If the statement is true, then prove it; and if it is false, give a counterexample. (a) Every disconnected graph has a vertex of degree 0. (b) A graph is connected if and only if some vertex is connected to all other vertices. Please correct me if I'm wrong. (a) is false, as we could have 2 triangles not connected with each other. The graph would be disconnected and all vertices would have degree 2. (b) confuses me a bit. Since this is a double implication, for the statement to hold, it must be: A graph is connected if some vertex is connected to all other vertices. (true) AND Some vertex is connected to all other vertices if the graph is connected. We could have a square. In this case the graph is connected but no vertex is connected to every other vertex. Therefore this part is false. Since this part is false, the whole statement must also be false. Is this correct?
So that this doesn't remain unanswered: Yes, all of your reasoning is correct.
Application of quadratic functions to measurement and graphing Thanks for any help! Q1. Find the equation of the surface area function of a cylindrical grain silo. The input variable is the radius (r). (The equation is to be graphed using a graphics calculator in the following question.) Height (h) = 5 meters Radius (r) - unknown Surface Area (S) - unknown Pi (p) = 3.142 So far I have: S = 2pr^2 + 2prh (surface area formula) S = 2p(r^2+5r) S = 2pr(r+5) S = 6.284r(r+5) I am not sure if this is an equation I can use to answer Q2: Use the graphic calculator emulator to draw the equation obtained at Q1. I have also come up with: 2pr^2 + 2prh + 0 (in the quadratic expression ax^2 + bx + c = 0). When I substitute values for r I get the same surface area for both equations, but am not sure if I am on the right track! Thank you for any help!
(SA = Surface Area) * *SA (silo) = SA (cylinder) + $\frac{1}{2}$ SA (sphere) *SA (cylinder) = $2\pi r h $ *SA (sphere) = $4\pi r^2$ So we have, SA (silo) = SA (cylinder) + $\frac{1}{2}$ SA (sphere) = $2\pi r h + \frac{1}{2}4\pi r^2 = 2\pi r h + 2 \pi r^2 = 2 ~\pi~ r(h + r) = 2 ~\pi~ r(5 + r)$ (Plot omitted.)
inequality involving complex exponential Is it true that $$|e^{ix}-e^{iy}|\leq |x-y|$$ for $x,y\in\mathbb{R}$? I can't figure it out. I tried looking at the series for exponential but it did not help. Could someone offer a hint?
One way is to use $$ |e^{ix} - e^{iy}| = \left|\int_x^ye^{it}\,dt\right|\leq \int_x^y\,dt = y-x, $$ assuming $y > x$.
Integrate $e^{f(x)}$ Just wondering how I can integrate $\displaystyle xe^{ \large {-x^2/(2\sigma^2)}}$. I tried using the substitution $U(x) = x^2$, but I kept getting an $x^2$ in the denominator, which is incorrect. I understand that $\displaystyle \int e^{f(x)}\,dx = \frac{e^{f(x)}}{f'(x)}$ if $f(x)$ is linear; however, how do we handle this situation when $f(x)$ is not linear? A step by step answer will be awesome!!!! thanks :D M
A close relative of your substitution, namely $u=-\dfrac{x^2}{2\sigma^2}$, works. In "differential" notation, we get $du=-\frac{1}{\sigma^2} x\,dx$, so $x\,dx=-\sigma^2\,du$. Remark: Or more informally, let's guess that the answer is $e^{-x^2/(2\sigma^2)}$. Differentiate, using the Chain Rule. We get $-\frac{x}{\sigma^2}e^{-x^2/(2\sigma^2)}$, so wrong guess. Too bad. But we got close, and there is an easy fix by multiplying by a suitable constant to get rid of the $-\frac{1}{\sigma^2}$ in front of the derivative of our wrong guess. The indefinite integral is $-\sigma^2e^{-x^2/(2\sigma^2)}+C$.
What will be the units digit of $7777^{8888}$? What will be the units digit of $7777$ raised to the power of $8888$? Can someone do the math and explain how to find the units digit of $7777$ raised to the power of $8888$?
The units digit of a number is the same as the number mod 10, so we just need to compute $7777^{8888} \pmod {10}$. First, $$7777^{8888} \equiv 7^{8888} \pmod {10},$$ and secondly $$7^{8888} \equiv 7^{0} \equiv 1 \pmod {10}$$ by Euler's totient theorem (since $\varphi(10)=4$ and $4 \mid 8888$). So there's a quick way to see that the last digit is a 1.
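Equivalently, the last digits of powers of $7$ cycle with period $4$ ($7,9,3,1$), and $8888$ is a multiple of $4$, which gives the same answer. If you want a machine check as well, modular exponentiation does it instantly (a throwaway Python snippet, not part of the argument above):

```python
# The units digit of 7777**8888 is 7777**8888 mod 10.
# Three-argument pow performs modular exponentiation efficiently.
print(pow(7777, 8888, 10))  # prints 1
```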
Mean Value Property of Harmonic Function on a Square A friend of mine presented me the following problem a couple days ago: Let $S$ in $\mathbb{R}^2$ be a square and $u$ a continuous harmonic function on the closure of $S$. Show that the average of $u$ over the perimeter of $S$ is equal to the average of $u$ over the union of two diagonals. I recall that the 'standard' mean value property of harmonic functions is proven over a sphere using greens identities. I've given this one some thought but I haven't come up with any ideas of how to proceed. It's driving me crazy! Maybe it has something to do with the triangles resulting from a diagonal? Any ideas?
Consider the isosceles right triangles formed from two sides of the square and a diagonal. Let's consider the first such triangle. Call the sides $L_1, L_2$ and $H_1$ for legs and hypotenuse; for the sake of convenience, our square is the unit square, so the triangle $T_1$ has legs $L_1$ from $(0,0)$ to $(1,0)$ and $L_2$ from $(1,0)$ to $(1,1)$; the hypotenuse $H_1$ clearly runs from $(0,0)$ to $(1,1)$.

Now consider the function $\phi(x,y) = |x-y|$, and we're going to use integration by parts (Green's second identity): $$ \int_{\partial T_1} u \phi_{\nu} = \int_{\partial T_1} \phi u_{\nu} $$ since $u$ and $\phi$ are both harmonic in the interior of the triangle $T_1$. Now, $$ \int_{\partial T_1} u \phi_{\nu} = - \sqrt{2} \int_{H_1} u + \int_{L_1} u + \int_{L_2} u = \int_{L_1} x u_\nu + \int_{L_2} (1-y) u_\nu = \int_{\partial T_1} \phi u_\nu $$

Perform the same construction on the other triangle $T_2$ with hypotenuse $H_1$, but with legs $L_3$ from $(0,0)$ to $(0,1)$ and $L_4$ from $(0,1)$ to $(1,1)$. We get $$ \int_{\partial T_2} u \phi_{\nu} = - \sqrt{2} \int_{H_1} u + \int_{L_3} u + \int_{L_4} u = \int_{L_3} y u_\nu + \int_{L_4} (1-x) u_\nu = \int_{\partial T_2} \phi u_\nu $$

Now consider the function $\psi(x,y) = |x+y-1|$ on the triangle $T_3$ formed by $L_1$ as above, $L_3$ from $(0,0)$ to $(0,1)$, and $H_2$ from $(1,0)$ to $(0,1)$. $$ \int_{\partial T_3} u \psi_{\nu} = - \sqrt{2} \int_{H_2} u + \int_{L_3} u + \int_{L_1} u = \int_{L_3} (1-y) u_\nu + \int_{L_1} (1-x) u_\nu = \int_{\partial T_3} \psi u_\nu $$

Finally, on the triangle $T_4$ formed by $L_4$, $L_2$, and $H_2$ we have $$ \int_{\partial T_4} u \psi_{\nu} = - \sqrt{2} \int_{H_2} u + \int_{L_2} u + \int_{L_4} u = \int_{L_2} y u_\nu + \int_{L_4} x u_\nu = \int_{\partial T_4} \psi u_\nu $$

Summing all these identities together (each leg $L_i$ appears in exactly two of the triangles, and on each leg the weights multiplying $u_\nu$ add up to $1$), we get $$ -2 \sqrt{2} \int_{H_1 \cup H_2} u + 2 \int_{\partial S} u = \int_{\partial S} u_\nu$$ Since $u$ is harmonic, $\int_{\partial S} u_\nu = 0$ by the divergence theorem, so $2 \int_{\partial S} u = 2\sqrt{2} \int_{H_1 \cup H_2} u$, i.e. $\frac{1}{4}\int_{\partial S} u = \frac{1}{2\sqrt{2}}\int_{H_1 \cup H_2} u$. Since the perimeter has length $4$ and the two diagonals have total length $2\sqrt{2}$, this tells us that the average over the diagonals is the average over the perimeter.
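To illustrate (not prove) the identity, here is a quick numerical check with one particular harmonic function, $u(x,y) = e^x\cos y$. The choice of function, the midpoint-rule averaging, and the sample count are all mine, not part of the argument above.

```python
import numpy as np

def u(x, y):
    # Real part of e^z, a harmonic function; any harmonic u should work here
    return np.exp(x) * np.cos(y)

# Midpoint samples of the unit interval, used to approximate line averages
t = (np.arange(100_000) + 0.5) / 100_000

# Average over the perimeter of the unit square (four sides of length 1)
side_means = [u(t, 0 * t).mean(), u(1 + 0 * t, t).mean(),
              u(t, 1 + 0 * t).mean(), u(0 * t, t).mean()]
perimeter_avg = sum(side_means) / 4

# Average over the two diagonals (equal lengths, so just average the two means)
diagonal_avg = (u(t, t).mean() + u(t, 1 - t).mean()) / 2

print(perimeter_avg, diagonal_avg)  # should agree to several decimal places
```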
Continuity of one partial derivative implies differentiability Let $f: \mathbb{R}^2 \rightarrow \mathbb{R}$ be a function such that the partial derivatives with respect to $x$ and $y$ exist and one of them is continuous. Prove that $f$ is differentiable.
In short: the problem reduces to the easy case when $f$ depends solely on one variable. See the displayed formula below for the identity that does the reduction.

It suffices to show that $f$ is differentiable at $(0,0)$ with the additional assumption that $\frac{\partial f}{\partial x}(0,0)=\frac{\partial f}{\partial y}(0,0)=0$. First pass from $(x_0,y_0)$ to $(0,0)$ by considering the function $g(x,y)=f(x+x_0,y+y_0)$. Then work with $h(x,y)=g(x,y)-x\frac{\partial g}{\partial x}(0,0)-y\frac{\partial g}{\partial y}(0,0)$.

So let us assume that $\frac{\partial f}{\partial x}$ exists and is continuous on $\mathbb{R}^2$ (only continuity in an open neighborhood of $(0,0)$ is really needed for the local argument), that $\frac{\partial f}{\partial y}$ exists at $(0,0)$, and that $\frac{\partial f}{\partial x}(0,0)=\frac{\partial f}{\partial y}(0,0)=0$. We need to show that $f$ is differentiable at $(0,0)$. Note that the derivative must be $0$ given our assumptions.

Now observe that for every $x,y$, we have, by the fundamental theorem of calculus: $$ f(x,y)=f(0,y)+\int_0^x \frac{\partial f}{\partial x}(s,y)\,ds. $$ I'll let you check properly that $(x,y)\longmapsto f(0,y)$ is differentiable at $(0,0)$ with zero derivative, using only $\frac{\partial f}{\partial y}(0,0)=0$. For the other term, just note that it is $0$ at $(0,0)$ and that for every $(x,y)$ with $0<\sqrt{x^2+y^2}\leq r$, $$ \frac{1}{\sqrt{x^2+y^2}}\Big|\int_0^x \frac{\partial f}{\partial x}(s,y)\,ds\Big|\leq \frac{|x|}{\sqrt{x^2+y^2}}\sup_{\sqrt{s^2+t^2}\leq r}\Big| \frac{\partial f}{\partial x}(s,t)\Big|\leq \sup_{\sqrt{s^2+t^2}\leq r}\Big| \frac{\partial f}{\partial x}(s,t)\Big|. $$ By continuity of $\frac{\partial f}{\partial x}$ at $(0,0)$, the right-hand side tends to $0$ as $r$ tends to $0$, so the quotient tends to $0$ when $(x,y)$ tends to $(0,0)$. This proves that the function $(x,y)\longmapsto \int_0^x \frac{\partial f}{\partial x}(s,y)\,ds$ is differentiable at $(0,0)$ with zero derivative. And this concludes the proof.
Modular Exponentiation Give numbers $x,y,z$ such that $y \equiv z \pmod{5}$ but $x^y \not\equiv x^z \pmod{5}$. I'm just learning modular arithmetic and this question has me puzzled. Any help with an explanation would be great!
Take $x=2$, $y=3$, $z=8$. Then $x^y \bmod 5 = 3$ but $x^z \bmod 5 = 1$.
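A quick check in Python, using the three-argument pow for modular exponentiation:

```python
x, y, z = 2, 3, 8
print((y - z) % 5 == 0)            # True: y and z are congruent mod 5
print(pow(x, y, 5), pow(x, z, 5))  # 3 1: the two powers differ mod 5
```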
If $x,y$ are positive, then $\frac1x+\frac1y\ge \frac4{x+y}$ For $x$, $y \in \mathbb{R}^+$, prove that $$\frac{1}{x}+\frac{1}{y}\ge\frac{4}{x+y}$$ Could someone please help me with this inequality problem? I have tried to use the AM-GM inequality but I must be doing something wrong. I think it can be solved with AM-GM but I can't solve it. Thanks in advance for your help.
Here is a solution with AM-GM. Applying AM-GM to $\frac1x$ and $\frac1y$ gives $$\frac{1}{x}+\frac{1}{y} \geq \frac{2}{\sqrt{xy}},$$ and applying it to $x$ and $y$ gives $$x+y \geq 2 \sqrt{xy} \Rightarrow \frac{1}{\sqrt{xy}} \geq \frac{2}{x+y}\Rightarrow \frac{2}{\sqrt{xy}} \geq \frac{4}{x+y}.$$ Chaining the two inequalities gives $\frac1x+\frac1y \geq \frac4{x+y}$, with equality exactly when $x=y$. Also you can note that $$(x+y)\left(\frac{1}{x}+\frac{1}{y}\right) \geq 4$$ is just Cauchy-Schwarz; dividing both sides by $x+y>0$ gives the claim directly.
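Not a proof, but a quick numerical spot check of the inequality over random positive pairs:

```python
import random

random.seed(0)
for _ in range(100_000):
    x = random.uniform(1e-6, 100.0)
    y = random.uniform(1e-6, 100.0)
    # Small tolerance to absorb floating-point rounding
    assert 1 / x + 1 / y >= 4 / (x + y) - 1e-9

print("no counterexamples found; equality holds exactly when x == y")
```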