upper bound of exponential function I am looking for a tight upper bound of the exponential function (or of a sum of exponential functions): $e^x<f(x)\;$ when $ \;x<0$ or $\displaystyle\sum_{i=1}^n e^{x_i} < g(x_1,...,x_n)\;$ when $\;x_i<0$ Thanks a lot!
Since you suggest in the comments you would like a polynomial bound, you can use any even Taylor polynomial for $e^x$. Proposition. $\boldsymbol{1 + x + \frac{x^2}{2!} + \cdots + \frac{x^n}{n!}}$ is an upper bound for $\boldsymbol{e^x}$ when $\boldsymbol{n}$ is even and $\boldsymbol{x \le 0}$. Proof. We wish to show $f(x) \ge 0$ for all $x$, where $f: (-\infty, 0] \to \mathbb{R}$ is the function defined by $f(x) = 1 + x + \frac{x^2}{2!} + \cdots + \frac{x^n}{n!} - e^x.$ Since $f(x) \to \infty$ as $x \to - \infty$, $f$ must attain an absolute minimum somewhere on the interval $(-\infty, 0]$.

* If $f$ has an absolute minimum at $0$, then for all $x$, $f(x) \ge f(0) = 1 - e^0 = 0$, so we are done.
* If $f$ has an absolute minimum at $y$ for some $y < 0$, then $f'(y) = 0$. But differentiating, $$ f'(y) = 1 + y + \frac{y^2}{2!} + \cdots + \frac{y^{n-1}}{(n-1)!} - e^y = f(y) - \frac{y^n}{n!}. $$ Therefore, for any $x$, $$ f(x) \ge f(y) = \frac{y^n}{n!} + f'(y) = \frac{y^n}{n!} > 0, $$ since $n$ is even. $\square$

Keep in mind that any polynomial upper bound will only be tight up to a certain point, because the polynomial will blow up to infinity as $x \to -\infty$. Also note, the same proof shows that the Taylor polynomial is a lower bound for $e^x$ when $n$ is odd and $x \le 0$.
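For anyone who wants a quick numerical sanity check of the proposition, here is a sketch in Python (the sampling grid and the choices of $n$ are arbitrary):

```python
import math

def taylor(x, n):
    """Partial sum 1 + x + x^2/2! + ... + x^n/n!."""
    return sum(x**k / math.factorial(k) for k in range(n + 1))

# For even n the partial sum should dominate e^x on x <= 0.
for n in (2, 4, 8):
    assert all(taylor(x, n) >= math.exp(x) for x in [-i / 10 for i in range(200)])
print("even Taylor polynomials dominate e^x on the sampled grid")
```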
What does the value of a probability density function (PDF) at some x indicate? I understand that the probability mass function of a discrete random variable X is $y=g(x)$. This means $P(X=x_0) = g(x_0)$. Now, a probability density function of a continuous random variable X is $y=f(x)$. Wikipedia defines this function $y$ to mean In probability theory, a probability density function (pdf), or density of a continuous random variable, is a function that describes the relative likelihood for this random variable to take on a given value. I am confused about the meaning of 'relative likelihood' because it certainly does not mean probability! The probability $P(X<x_0)$ is given by some integral of the pdf. So what does $f(x_0)$ indicate? It gives a real number, but isn't the probability of any specific value for a CRV always zero?
'Relative likelihood' is indeed misleading. Look at it as a limit instead: $$ f(x)=\lim_{h \to 0}\frac{F(x+h)-F(x)}{h} $$ where $F(x) = P(X \leq x)$
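As a concrete illustration, here is a sketch using the standard normal from scipy (nothing is specific to that distribution; any smooth CDF behaves the same way):

```python
from scipy.stats import norm

# The density should be the limit of the CDF difference quotient.
x = 0.7
for h in (1e-1, 1e-3, 1e-5):
    quotient = (norm.cdf(x + h) - norm.cdf(x)) / h
    print(h, quotient, norm.pdf(x))  # quotient approaches the pdf value
```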
Proof that Quantile Function characterizes Probability Distribution The quantile function is defined as $Q(u)= \inf \{x: F(x) \geq u\}$. It is well known the distribution function characterizes the probability distribution in the following sense Theorem Let $X_{1}$ and $X_{2}$ be two real valued random variables with distribution functions $F_{1}$ and $F_{2}$ respectively. If $F_{1}(x)=F_{2}(x)$, $\forall x\in \mathbb{R}$ then $X_{1}$ and $X_{2}$ have the same probability distribution. I want to prove that the quantile function also characterizes the probability distribution; this fact is stated as a corollary of the above theorem in this book (see Corollary 1.2 on p19): Corollary: Let $X_{1}$ and $X_{2}$ be two real valued random variables with quantile functions $Q_{1}$ and $Q_{2}$ respectively. If $Q_{1}(u)=Q_{2}(u)$, $\forall u\in\left(0,1\right)$ then $X_{1}$ and $X_{2}$ have the same probability distribution. The proof in the book is based on the following facts Fact i: $Q(F(x)) \leq x$. Fact ii: $F( Q(u) ) \geq u$. Fact iii: $Q(u) \leq x$ iff $u \leq F(x)$. Fact iv: $Q(u)$ is nondecreasing. But I think it is wrong. Assuming $F_{1}(x_0) < F_{2}(x_0)$ for some fixed $x_0$, the author sets out to prove that this leads to a contradiction. Using facts (i) and (iv) he shows $Q_{2}(F_1(x_0)) < Q_{2}(F_2(x_0)) \leq x_0$. Then he applies fact (iii) to obtain $F_{1}(x_0) \leq F_{2}(x_0)$. The author claims that this is a contradiction. But clearly it's not, and the argument proves nothing. Am I missing something here? Does anyone know the correct proof of the corollary?
Changed since version discussed in first five comments: The key line in the proof of Corollary 1.2 in Severini's Elements of Distribution Theory book is Hence by part (iii) of Theorem 1.8, $F_2(x_0) \ge F_1(x_0)$ so that $F_1(x_0) \lt F_2(x_0)$ is impossible. As you say, $F_1(x_0) \lt F_2(x_0)$ in fact implies $F_1(x_0) \le F_2(x_0)$, i.e. $F_2(x_0) \ge F_1(x_0)$, so this is not a contradiction.
If Same Rank, Same Null Spaces? "If matrices B and AB have the same rank, prove that they must have the same null spaces." I have absolutely NO idea how to prove this one, been stuck for hours now. Even if you don't know the answer, any help is greatly appreciated.
I would begin by showing that the null space of $B$ is a subspace of the null space of $AB$. Next show that having the same rank implies they have the same nullity. Finally, what can you conclude when a subspace is the same dimension as its containing vector space?
Solve $3\log_{10}(x-15) = \left(\frac{1}{4}\right)^x$ $$3\log_{10}(x-15) = \left(\frac{1}{4}\right)^x$$ I am completely lost on how to proceed. Could someone explain how to find any real solution to the above equation?
Put \begin{equation*} f(x) = 3\log_{10}(x - 15) - \left(\dfrac{1}{4}\right)^x. \end{equation*} We have that $f$ is an increasing function on $(15, +\infty)$. Moreover, $f(16) < 0$ and $f(17) > 0$. Therefore the given equation has exactly one solution, and it belongs to $(16,17)$.
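A numerical sketch of this argument: since $f(16) < 0 < f(17)$ and $f$ is increasing, bisection on $(16,17)$ converges to the unique root, which sits just barely above $16$ (the iteration count below is arbitrary):

```python
import math

def f(x):
    return 3 * math.log10(x - 15) - 0.25 ** x

lo, hi = 16.0, 17.0   # f(lo) < 0 < f(hi)
for _ in range(60):
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid
print(lo)  # approximately 16.0000000002
```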
Two convergent sequences in a metric space. Question: Let {$x_n$} and {$y_n$} be two convergent sequences in a metric space (E,d). For all $n \in \mathbb{N}$, we define $z_{2n}=x_n$ and $z_{2n+1}=y_n$. Show that {$z_n$} converges to some $l \in E$ $\longleftrightarrow$ $ \lim_{n \to \infty}x_n$= $\lim_{n \to \infty}y_n$=$l$. My Work: Since $x_n$ converges, $\exists N_1 \in \mathbb{N}$ s.t. $\forall n \geq N_1$, $|x_n-l_1|<\epsilon$. Likewise, since $y_n$ converges, $\exists N_2 \in \mathbb{N}$ s.t. $\forall n\geq N_2$, $|y_n-l_2|<\epsilon$. Because $z_{2n}=x_n$ and $z_{2n+1}=y_n$, then pick $N=\max\{N_1,N_2\}$. Since eventually $2n+1>2n>N$, if $z_n$ converges to $l$, then $|z_n-l|=|x_{n/2}-l|<\epsilon$ because of how we picked our N. Am I correct in this approach and should continue this way or am I wrong? This is a homework problem so please no solutions!!! Any help is appreciated. My work for the other way: If $\lim_{n \to \infty}x_n=\lim_{n \to \infty}y_n=l$, then we show $\lim_{n \to \infty}z_n=l$. Then for $N \in \mathbb{N}$, where $N=\max\left(\frac{N_1}{2},\frac{N_2-1}{2}\right)$. Since $|x_n-l|=|y_n-l|=|z_{2n}-l|=|z_{2n+1}-l|<\epsilon$ then for $n\geq N$, $|z_n-l|<\epsilon$.
Hint Remember the fact that "every convergent sequence is a Cauchy sequence". $(\Rightarrow)$ Assume $\lim_{n \to \infty} z_n=l$, then notice that $$ |x_n-l|=|z_{2n}-l|=|(z_{2n}-z_n)+(z_n-l)|\leq |z_{2n}-z_n|+|z_n-l|<\dots $$
$E$ is measurable, $m(E)< \infty$, and $f(x)=m[(E+x)\bigcap E]$ for all $x \in \mathbb{R}$ Question: $E$ is measurable, $m(E)< \infty$, and $f(x)=m[(E+x)\bigcap E]$ for all $x \in \mathbb{R}$. Prove $\lim_{x \rightarrow \infty} f(x)=0$. First, since measure is translation invariant, I'm assuming that $(E+x)\bigcap E=E$. But then I had this thought: if $E=\{1,2,3\}$ and $x=1$, then $E+x = \{2,3,4\}$. So the intersection is just a single point. This will have measure zero. My question is, I'm not sure if this is the right train of thinking. And, if it is, I'm not sure how to make this rigorous.
Well, it seems you are a bit confused about which object lives where. By translation invariance, we indeed have $m(E)=m(E+x)$, but not $E=E+x$ as you wrote. Also, $\{1,2,3\}\cap\{2,3,4\}$ has two common elements, not just one :) The hint in one of the comments to consider $E_n:=E\cap[-n,n]$ is a great idea, because in case $x>2n$, $E_n$ and $E_n+x$ will be disjoint, and $$(E+x)\cap E = \bigcup_n \left( (E_n+x)\cap E_n \right) $$ so you can use continuity from below of $m$. Denote $f_n(x):=m((E_n+x)\cap E_n)$. By the above argument, if $x>2n$, then $f_n(x)=0$. We are looking for $$\lim_{x\to\infty} m((E+x)\cap E) = \lim_{x\to\infty}\lim_{n\to\infty} f_n(x)$$ and we want to exchange the limits ($f_n$ is nonnegative and bounded: $f_n(x)\le m(E)<\infty$).
seminorm & Minkowski Functional It is known that if $p$ is a seminorm on a real vector space $X$, then the set $A= \{x\in X: p(x)<1\}$ is convex, balanced, and absorbing. I tried to prove that the Minkowski functional $u_A$ of $A$ coincides with the seminorm $p$. I'm interested in proving that $u_A \le p$ on $X$. My idea is as follows. We let $x \in X$. Then we can choose $s>p(x)$. Then $p(s^{-1}x) = s^{-1}p(x)<1$. This means that $s^{-1}x$ belongs to $A$. Hence $u_A(x) \le s$. From here, how can we conclude that $u_A(x) \le p(x)$? Thanks in advance juniven
What you proved is that $u_A(x)\leq s$ for every $s\in(p(x),\infty)$. In other words, $$ u_A(x)\leq p(x)+\varepsilon $$ for every $\varepsilon>0$. This implies that $u_A(x)\leq p(x)$.
question regarding metric spaces Let $X$ be the surface of the earth. For any two points $a, b$ on the earth's surface, let $d(a,b)$ be the least time needed to travel from $a$ to $b$. Is this a metric on $X$? Kindly explain each step and the logic, especially for these two axioms: $d(a,b)=0$ iff $a=b$, and the triangle inequality.
This will generally not be a metric since the condition of symmetry is not fulfilled: It usually takes a different time to travel from $a$ to $b$ than to travel from $b$ to $a$. (I know that because I live on a hill. :-) The remaining conditions are fulfilled:

* The time required to travel from $a$ to $b$ is non-negative.
* The time required to travel from $a$ to $b$ is zero if and only if $a=b$.
* $d(a,c)\le d(a,b)+d(b,c)$ since you can always travel first from $a$ to $b$ and then from $b$ to $c$ in order to travel from $a$ to $c$.
Finding Tangent line from Parametric I need to find an equation of the tangent line to the curve $x=5+t^2-t$, $y=t^2+5$ at the point $(5,6)$. Setting $x=5$ and $y = 6$ and solving for $t$ gives me $t=0,1,-1$. I know I have to compute the slope $dy/dx = \frac{dy/dt}{dx/dt}$. But how do I know what $t$ value to use?
You have i) $x=5+t^2-t$ and ii) $y=t^2+5$ and $P=(5,6)$. From $P=(5,6)$ you get $x=5$ and $y=6$. From ii) you now get $t^2=1\Rightarrow t=1$ or $t=-1$, and from i) you get (knowing that $t\in\{1,-1\}$): $t=1$. Your curve has the parametric representation $\gamma: I\subseteq\mathbb{R}\rightarrow {\mathbb{R}}^2: t\mapsto (5+t^2-t,t^2+5)$. Therefore, $\frac{d}{dt}\gamma=(2t-1,2t)$. Put $t=1$ into the derivative of $\gamma$ and from this you can get the slope of the tangent in $P$.
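If you want to check this symbolically, here is a sketch using sympy (the names are arbitrary):

```python
import sympy as sp

t = sp.symbols('t')
x = 5 + t**2 - t
y = t**2 + 5

# Only t = 1 satisfies both x = 5 and y = 6.
sol = set(sp.solve(sp.Eq(x, 5), t)) & set(sp.solve(sp.Eq(y, 6), t))
print(sol)  # {1}

# Slope of the tangent: (dy/dt)/(dx/dt) evaluated at t = 1.
slope = (sp.diff(y, t) / sp.diff(x, t)).subs(t, 1)
print(slope)  # 2, so the tangent line is y - 6 = 2(x - 5)
```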
Cross Product of Partial Orders I'm going to have similar questions on my test tomorrow. I am really stuck on this problem. I don't know how to start. Any sort of help will be appreciated. Thank you.

Suppose that $(L_1, \le_1)$ and $(L_2, \le_2)$ are partially ordered sets. We define a partial order $\le$ on the set $L_1 \times L_2$ in the most obvious way: we say $(a,b) \le (c,d)$ if and only if $a \le_1 c$ and $b \le_2 d$.

a) Verify that this is a partial order. Show by example that it may not be a total order.
b) Show that if $(L_1, \le_1)$ and $(L_2, \le_2)$ are both lattices, then so is $(L_1 \times L_2, \le)$.
c) Show that if $(L_1, \le_1)$ and $(L_2, \le_2)$ are both modular lattices, then so is $(L_1 \times L_2, \le)$.
d) Show that if $(L_1, \le_1)$ and $(L_2, \le_2)$ are both distributive lattices, then so is $(L_1 \times L_2, \le)$.
e) Show that if $(L_1, \le_1)$ and $(L_2, \le_2)$ are both Boolean algebras, then so is $(L_1 \times L_2, \le)$.
I will talk about part a, then you should give the other parts a try. They will follow in a similar manner (i.e. breaking $\leq$ into its components $\leq_1$ and $\leq_2$). To show $(L_1 \times L_2, \leq)$ is a partial order, we need to show it is reflexive, anti-symmetric, and transitive.

Reflexivity: Given any $(x,y) \in L_1 \times L_2$ we want to show $(x,y) \leq (x,y)$. Looking at the definition of $\leq$, we are really asking whether both $x \leq_1 x$ and $y \leq_2 y$, which is true since $(L_1, \leq_1)$ and $(L_2, \leq_2)$ are both partial orders.

Anti-symmetry: Suppose $(a,x) \leq (b,y)$ and $(b,y) \leq (a,x)$. This tells us a lot of information:

* From the first coordinates, we see $a \leq_1 b$ and $b \leq_1 a$, so $a = b$ (since $\leq_1$ is anti-symmetric).
* From the second coordinates, we see $x \leq_2 y$ and $y \leq_2 x$, so $x = y$ (since $\leq_2$ is anti-symmetric).

Combining those two observations gives $(a,x) = (b,y)$, so $\leq$ is anti-symmetric.

Transitivity: Suppose $(a,x) \leq (b,y)$ and $(b,y) \leq (c,z)$. We want to show $(a,x) \leq (c,z)$. Our hypotheses give lots of information again:

* From the first coordinates, we see $a \leq_1 b$ and $b \leq_1 c$. Since $\leq_1$ is transitive, we know $a \leq_1 c$.
* From the second coordinates, we see $x \leq_2 y$ and $y \leq_2 z$. Since $\leq_2$ is transitive, we know $x \leq_2 z$.

Combining those two observations gives $(a,x) \leq (c,z)$, so $\leq$ is transitive.
Number of prime divisors of element orders from character table. From wikipedia: It follows, using some results of Richard Brauer from modular representation theory, that the prime divisors of the orders of the elements of each conjugacy class of a finite group can be deduced from its character table (an observation of Graham Higman). Precisely how is this done? I would be happy with a reference. (In particular, I am looking for the location of the elements in a solvable group which have the maximum number of prime divisors in their order. If somebody has any extra information on that particular situation that would also be appreciated.)
I think I probably wrote the quoted passage in Wikipedia. If we let $\pi$ be a prime ideal of $\mathbb{Z}[\omega]$ containing $p,$ where $\omega$ is a primitive complex $|G|$-th root of unity, then it is the case that two elements $x$ and $y$ of $G$ have conjugate $p^{\prime}$-part if and only if we have $\chi(x) \equiv \chi(y)$ (mod $\pi$) for each irreducible character $\chi$ of $G$. This is because the $\mathbb{Z}$-module spanned by the restrictions of irreducible characters to $p$-regular elements is the $\mathbb{Z}$-span of irreducible Brauer characters. The rows of the Brauer character table remain linearly independent (mod $\pi$), as Brauer showed. Furthermore, the value (mod $\pi$) of $\chi(x)$ only depends on the $p^{\prime}$-part of $x,$ since $\eta- 1 \in \pi$ whenever $\eta$ is a $p$-power root of unity. We now work inductively: for any prime $p$, we can recognise elements $g \in G$ which have $p$-power order, since $g$ has $p$-power order if and only if $\chi(g) \equiv \chi(1)$ (mod $\pi$) for all irreducible characters $\chi.$ If we choose a different prime $q,$ we can recognise $q$-elements by the same procedure. But then we can recognise the elements whose orders have the form $p^{a}q^{b}.$ Such an element $h$ must have $p^{\prime}$-part $z$ which has order $q^{b},$ so we have previously identified the possible $p^{\prime}$-parts. Furthermore, $h$ has $p^{\prime}$-part $z$ if and only if $\chi(h) \equiv \chi(z)$ (mod $\pi$) for all irreducible characters $\chi.$ And then, given a different prime $r,$ we can identify all elements whose orders have the form $p^{a}q^{b}r^{c}$ in a similar manner, etc.
Prime, followed by the cube of a prime, followed by the square of a prime. Other examples? The numbers 7, 8, 9, apart from being part of a really lame math joke, also have a unique property. Consecutively, they are a prime number, followed by the cube of a prime, followed by the square of a prime. Firstly, does this occurrence happen with any other triplet of consecutive numbers? More importantly, is there any way to predict or determine when and where these phenomena will occur, or do we just discover them as we go?
If you'll settle for a prime, cube of a prime, square of a prime in arithmetic progression (instead of consecutive), you've got $$5,27=3^3,49=7^2\qquad \rm{(common\ difference\ 22)}$$ and $$157,\ 343=7^3,\ 529=23^2 \qquad \rm{(common\ difference\ 186)}$$ and, no doubt, many more where those came from. A bit more exotic is the arithmetic progression $81,\ 125,\ 169$ with common difference 44: the 4th power of the prime 3, the cube of the prime 5, the square of the prime 13. So $$3^4,5^3,13^2$$ is an arithmetic progression of powers of primes, and the exponents are also in arithmetic progression.
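Here is a rough search script for more progressions of this kind, a sketch whose bounds are arbitrary (sympy's `isprime` and `integer_nthroot` do the primality and exact-root tests):

```python
from sympy import isprime, integer_nthroot

def is_prime_cube(n):
    r, exact = integer_nthroot(n, 3)
    return exact and isprime(r)

def is_prime_square(n):
    r, exact = integer_nthroot(n, 2)
    return exact and isprime(r)

# Arithmetic progressions p, q^3, r^2: prime, prime cube, prime square.
for p in range(2, 2000):
    if not isprime(p):
        continue
    for d in range(1, 1000):
        if is_prime_cube(p + d) and is_prime_square(p + 2 * d):
            print(p, p + d, p + 2 * d)  # finds 5, 27, 49 and 157, 343, 529 among others
```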
Inverse of a Positive Definite Let $K$ be a nonsingular symmetric matrix. Prove that if $K$ is positive definite, so is $K^{-1}$. My attempt: I have that $K = K^T$ so $x^TKx = x^TK^Tx = (xK)^Tx = (xIK)^Tx$ and then I don't know what to do next.
inspired by the answer of kjetil b halvorsen To recap, matrix $A \in \mathbb{R}^{n \times n}$ is HPD (hermitian positive definite), iff $\forall x \in \mathbb{C}^n, x \neq 0 : x^*Ax > 0$. HPD matrices have full rank, therefore are invertible and $A^{-1}$ exists. Also, full rank matrices represent a bijection, therefore $\forall x \in \mathbb{C}^n \enspace \exists y \in \mathbb{C}^n : x = Ay$. We want to know if $A^{-1}$ is also HPD, that is, our goal is $\forall x \in \mathbb{C}^n, x \neq 0 : x^*A^{-1}x > 0$. Let $x \in \mathbb{C}^n, x \neq 0$. Because $A$ is a bijection, there exists $y \in \mathbb{C}^n$ such that $x=Ay$. We can therefore write $$x^*A^{-1}x = (Ay)^*A^{-1}(Ay) = y^*A^*A^{-1}Ay = y^*A^*y = y^*Ay > 0,$$ which is what we wanted to prove.
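A quick numerical sanity check of the statement (a sketch; the matrix is a random SPD matrix built just for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = B @ B.T + 5 * np.eye(5)   # symmetric positive definite by construction

# All eigenvalues of A and of its inverse should be strictly positive.
print(np.linalg.eigvalsh(A).min() > 0)                 # True
print(np.linalg.eigvalsh(np.linalg.inv(A)).min() > 0)  # True
```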
Set of points of continuity are $G_{\delta}$ Let $f: \mathbb{R} \rightarrow \mathbb{R}$ be a function. Show that the points at which $f$ is continuous is a $G_{\delta}$ set. $$A_n = \{ x \in \mathbb{R} \mid \exists r>0 \text{ such that } |f(x'')-f(x')|<\tfrac{1}{n}, \forall x',x'' \in B(x,r)\}$$ I saw that this proof was already on here, but I wanted to confirm and flesh out more details. "$\Rightarrow$" If $f$ is continuous at $x$, then $|f(x'')-f(x')|<\frac{1}{n}$ for $x'',x' \in B(x, r_{n})$. That is, there is a ball of radius $r$ where $r$ depends on $n$. Then $x \in A_n$ and thus $x \in \cap A_n$. "$\Leftarrow$" If $x \in \cap A_n$, then there is an $\epsilon > 0$ and a $\delta > 0$ such that $x' , x'' \in B(x, \delta_n)$ for all $n$ and $$|f(x'')-f(x')|<\epsilon.$$ Take $\epsilon = \frac{1}{n}$.
Here's a slightly different approach. Let $G$ be the set of points where $f$ is continuous, $A_{n,x} = (x-\frac{1}{n}, x+ \frac{1}{n})$ is an open set where $f$ is continuous, and $A_n = \bigcup_{x \in G} A_{n,x}$. Since $A_n$ is union of open sets which is open, $\bigcap_{n \in \mathbb{N}} A_n $ is a $G_\delta$ set. We want to show $\bigcap_{n \in \mathbb{N}} A_n = G$. $\forall n \in \mathbb{N}, A_{n+1, x} \subset A_{n,x} \\ \Rightarrow \bigcup_{x \in G}A_{n+1,x} \subset \bigcup_{x \in G}A_{n,x} \\ \Rightarrow A_{n+1} \subset A_n \\ \Rightarrow \{A_n\} \text{ is a decreasing sequence of nested sets}$ Furthermore, $\bigcap_{n \in \mathbb{N}} A_n = \{x\} $. (i.e. intersection is non-empty by Nested Set Theorem / Cantor Theorem). Therefore, $\begin{align}\bigcap_{n \in \mathbb{N}} A_n &= \lim_{n \to \infty} A_n \\ &= \lim_{n \to \infty} \bigcup_{x \in G} A_{n,x} \\ &= \bigcup_{x \in G} (\lim_{n \to \infty} A_{n,x} ) \\ &= \bigcup_{x \in G} \{x\} \\ &= G\end{align}$ So $G$ is the intersection of open sets and is a $G_\delta$ set. Can someone see why interchanging the limit and union is legal?
Riemann-Stieltjes integral, integration by parts (Rudin) Problem 17 of Chapter 6 of Rudin's Principles of Mathematical Analysis asks us to prove the following: Suppose $\alpha$ increases monotonically on $[a,b]$, $g$ is continuous, and $g(x)=G'(x)$ for $a \leq x \leq b$. Prove that, $$\int_a^b\alpha(x)g(x)\,dx=G(b)\alpha(b)-G(a)\alpha(a)-\int_a^bG\,d\alpha.$$ It seems to me that the continuity of $g$ is not necessary for the result above. It is enough to assume that $g$ is Riemann integrable. Am I right in thinking this? I have thought as follows: $\int_a^bG\,d\alpha$ exists because $G$ is differentiable and hence continuous. $\alpha(x)$ is integrable with respect to $x$ since it is monotonic. If $g(x)$ is also integrable with respect to $x$ then $\int_a^b\alpha(x)g(x)\,dx$ also exists. To prove the given formula, I start from the hint given by Rudin $$\sum_{i=1}^n\alpha(x_i)g(t_i)\Delta x_i=G(b)\alpha(b)-G(a)\alpha(a)-\sum_{i=1}^nG(x_{i-1})\Delta \alpha_i$$ where $g(t_i)\Delta x_i=\Delta G_i$ by the mean value theorem. Now the sum on the right-hand side converges to $\int_a^bG\,d\alpha$. The sum on the left-hand side would have converged to $\int_a^b\alpha(x)g(x)\,dx$ if it had been $$\sum_{i=1}^n \alpha(x_i)g(x_i)\Delta x$$ The absolute difference between this and what we have is bounded above by $$\max(|\alpha(a)|,|\alpha(b)|)\sum_{i=1}^n |g(x_i)-g(t_i)|\Delta x$$ and this can be made arbitrarily small because $g(x)$ is integrable with respect to $x$.
Compare with the following theorem, Theorem: Suppose $f$ and $g$ are bounded functions with no common discontinuities on the interval $[a,b]$, and the Riemann-Stieltjes integral of $f$ with respect to $g$ exists. Then the Riemann-Stieltjes integral of $g$ with respect to $f$ exists, and $$\int_{a}^{b} g(x)df(x) = f(b)g(b)-f(a)g(a)-\int_{a}^{b} f(x)dg(x)\,. $$
What is the number of all possible relations/intersections of n sets? If n defines the number of sets, what is the number of all possible relations between them? For example, when n = 2: 1) A can intersect with B 2) A and B can be disjoint 3) A can be a subset of B 4) B can be a subset of A That leaves us with 4 possible relations. Now with n = 3 it gets trickier (A can intersect with B but not C, or B can be a subset of C and intersect with A, etc.). Wondering if there's any formula that can be made to calculate such possible relations. I have been working on this problem for the last couple of days, and read about Venn diagrams and Karnaugh maps, but still can't figure that one out. Any help is appreciated!
Disclaimer: Not an answer® I'd like to think about this problem not as sets, but as elements in a partial order. Suppose all sets are different. Define $\mathscr{P} = \langle\mathscr{P}(\bigcup_n A_n), \subseteq\rangle$ as the partial order generated by the subset relation on all "interesting" sets. Define the operation $\cap$ on $\mathscr{P}$ as $$ C = A\cap B \iff C = \sup_\subseteq \{D: D\subseteq A \wedge D\subseteq B\} $$ which is well defined because of, well, set theory.... The question could then be stated as: Given the sets $A_n$, let $\mathscr{G}$ be the subgraph of $\mathscr{P}$ generated by the $A_n$ and closed under the $\cap$ operation. How many non-isomorphic graphs can we get?
Why not write $\sqrt{3}2$? Is it just for aesthetic purposes, or is there a deeper reason why we write $2\sqrt{3}$ and not $\sqrt{3}2$?
Certainly one can find old books in which $\sqrt{x}$ was set as $\sqrt{\vphantom{x}}x$, and just as $32$ does not mean $3\cdot2$, so also $\sqrt{\vphantom{32}}32$ would not mean $\sqrt{3}\cdot 2$, but rather $\sqrt{32}$. An overline was once used where round brackets are used today, so that, where we now write $(a+b)^2$, people would write $\overline{a+b}^2$. Probably that's how the overline in $\sqrt{a+b}$ originated. Today, an incessant battle that will never end tries to call students' attention to the fact that $\sqrt{5}z$ is not the same as $\sqrt{5z}$ and $\sqrt{b^2-4ac}$ is not the same as $\sqrt{b^2-4}ac$, the latter being what one sees written by students.
How to find the eigenvalues and eigenvector without computation? The given matrix is $$ \begin{pmatrix} 2 & 2 & 2 \\ 2 & 2 & 2 \\ 2 & 2 & 2 \\ \end{pmatrix} $$ So, how could I find the eigenvalues and eigenvectors without computation? Thank you
For the eigenvalues, you can look at the matrix and extract some quick information. Notice that the matrix has rank one (all columns are the same), hence zero is an eigenvalue with geometric, and in fact algebraic, multiplicity two. For the third eigenvalue, use the fact that the trace of the matrix equals the sum of all its eigenvalues; since $\lambda_1=\lambda_2=0$, you easily get $\lambda_3=6$. For the eigenvectors corresponding to $\lambda=0$ sometimes it's not hard; in this case it's clear that $v_1=[1,-1,0]$ and $v_2=[0,1,-1]$ (to be normalized) are eigenvectors corresponding to $\lambda=0$. For the eigenvector corresponding to $\lambda_3$, it's not as obvious as before, and you might have to actually write down the system, finding $v_3=[1,1,1]$ (to be normalized).
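A numerical confirmation of this quick reading, as a sketch (`eigh` is appropriate since the matrix is symmetric):

```python
import numpy as np

A = 2 * np.ones((3, 3))
vals, vecs = np.linalg.eigh(A)
print(np.round(vals, 10))   # [0, 0, 6] up to rounding

# [1, 1, 1] is an eigenvector for eigenvalue 6: A v = 6 v.
v = np.array([1.0, 1.0, 1.0])
print(A @ v)                # [6. 6. 6.]
```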
Are there any memorization techniques that exist for math students? I just watched this video on Ted.com entitled: Joshua Foer: Feats of memory anyone can do and it got me thinking about memory from a programmers perspective, and since programming and mathematics are so similar I figured I post here as well. There are so many abstract concepts and syntactic nuances that are constantly encountered, and yet we still manage to retain that information. The memory palace may help in remembering someone's name, a sequence of numbers, or a random story, but are there any memorization techniques that can better aid those learning new math concepts?
For propositional logic operations, you can remember their truth tables as follows: Let 0 stand for falsity and 1 for truth. For the conjunction operation use the mnemonic of the minimum of two numbers. For the disjunction operation, use the mnemonic of the maximum of two numbers. For the truth table for the material conditional (x->y), you can use max(1-x, y). For negation you can use 1-x.
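A tiny script that prints the four truth tables from these mnemonics (a sketch; 0 and 1 encode falsity and truth as above):

```python
from itertools import product

print("x y  and or  ->  not x")
for x, y in product((0, 1), repeat=2):
    conj = min(x, y)        # x AND y
    disj = max(x, y)        # x OR y
    cond = max(1 - x, y)    # x -> y (material conditional)
    neg = 1 - x             # NOT x
    print(x, y, conj, disj, cond, neg)
```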
linear algebra problem please help Let $V=\mathbb{R}^4$ and let $W=\langle\begin{bmatrix}1&1&0&0\end{bmatrix}^t,\begin{bmatrix}1&0&1&0\end{bmatrix}^t\rangle$. We need to find subspaces $U$ and $T$ such that $V=W\bigoplus U$ and $V=W \bigoplus T$ but $U\ne T$.
HINT: Look at a simpler problem first. Let $X=\{\langle x,0\rangle:x\in\Bbb R\}$, a subspace of $\Bbb R^2$. Can you find subspaces $V$ and $W$ of $\Bbb R^2$ such that $\Bbb R^2=X\oplus V=X\oplus W$, but $V\ne W$?
Arcsine law for Brownian motion Here is the question: $(B_t,t\ge 0)$ is a standard Brownian motion, starting at $0$. $S_t=\sup_{0\le s\le t} B_s$. $T=\inf\{t\ge 0: B_t=S_1\}$. Show that $T$ follows the arcsine law with density $g(t)=\frac{1}{\pi\sqrt{t(1-t)}}1_{]0,1[}(t)$. I used the Markov property to get the following equality: $P(T<t)=P(\sup_{t<s<1}B_s<S_t)=E(P(\sup_{0<s<1-t}(B_{t+s}-B_t)<S_t-B_t|F_t))=P(\hat{S}_{1-t}<S_t-B_t)$, where $\hat{S}_{1-t}$ is defined for the Brownian motion $\hat{B}_s=B_{t+s}-B_t$, which is independent of $F_t$. However the reflection principle tells us that $S_t-B_t$ has the same law as $S_t$, so we can also write that $P(T<t)=P(\hat{S}_{1-t}<S_t)$. To this point, we can calculate $P(T<t)$ because we know the joint density of $(\hat{S}_{1-t},S_t)$, but this calculation leads to a complicated form of integral and I can not get the density $g$ at the end. Do you know how to get the arcsine law? Thank you.
Let us start from the formula $\mathbb P(T\lt t)=\mathbb P(\hat S_{1-t}\lt S_t)$, where $0\leqslant t\leqslant 1$, and $\hat S_{1-t}$ and $S_t$ are the maxima at times $1-t$ and $t$ of two independent Brownian motions. Let $X$ and $Y$ denote two i.i.d. standard normal random variables, then $(\hat S_{1-t},S_t)$ coincides in distribution with $(\sqrt{1-t}|X|,\sqrt{t}|Y|)$ hence $$ \mathbb P(T\lt t)=\mathbb P(\sqrt{1-t}|X|\lt\sqrt{t}|Y|)=\mathbb P(|Z|\lt\sqrt{t}), $$ where $Z=X/\sqrt{X^2+Y^2}$. Now, $Z=\sin\Theta$, where the random variable $\Theta$ is the argument of the two-dimensional random vector $(X,Y)$ whose density is $\mathrm e^{-(x^2+y^2)/2}/(2\pi)$, which is invariant by the rotations of center $(0,0)$. Hence $\Theta$ is uniformly distributed on $[-\pi,\pi]$ and $$ \mathbb P(T\lt t)=\mathbb P(|\sin\Theta|\lt\sqrt{t})=2\,\mathbb P(|\Theta|\lt\arcsin\sqrt{t})=\tfrac2\pi\,\arcsin\sqrt{t}. $$ The density of the distribution of $T$ follows by differentiation.
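For intuition, the result can also be checked by simulation; here is a sketch (the discretization and sample sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n_steps, n_paths = 1000, 20000

# Simulate Brownian paths on [0, 1] (including B_0 = 0) and record the argmax time.
increments = rng.standard_normal((n_paths, n_steps)) / np.sqrt(n_steps)
paths = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(increments, axis=1)], axis=1)
T = np.argmax(paths, axis=1) / n_steps

# Compare the empirical CDF with (2/pi) * arcsin(sqrt(t)).
for t in (0.1, 0.25, 0.5, 0.75, 0.9):
    print(t, (T < t).mean(), 2 / np.pi * np.arcsin(np.sqrt(t)))
```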
Derivative with respect to the Lie bracket. Let $\mathbf{G}$ be a matrix Lie group, $\frak{g}$ the corresponding Lie algebra, $\widehat{\mathbf{x}} = \sum_i^m x_i G_i$ the corresponding hat-operator ($G_i$ the $i$th basis vector of the tangent space/Lie algebra $\frak{g}$) and $(\cdot)^\vee$ the inverse of $\widehat{\cdot}$: $$(X)^\vee := \text{ that } \mathbf{x} \text{ such that } \widehat{\mathbf{x}} = X.$$ Let us define the Lie bracket over $m$-vectors as: $[\mathbf{a},\mathbf{b}] = \left(\widehat{\mathbf{a}}\cdot\widehat{\mathbf{b}}-\widehat{\mathbf{b}}\cdot\widehat{\mathbf{a}}\right)^\vee$. (Example: For $\frak{so}(3)$, $[\mathbf{a},\mathbf{b}] = \mathbf{a}\times \mathbf{b}$ with $\mathbf{a},\mathbf{b} \in \mathbb{R}^3$.) Is there a common name for the derivative $$\frac{\partial [\mathbf{a},\mathbf{b}]}{\partial \mathbf{a}}\,?$$
I may be misunderstanding your question, but it seems like you are asking for the derivative of the map $$ \mathfrak g \to \mathfrak g, ~~ a \mapsto [a, b] $$ where $b \in \mathfrak g$ is fixed. Since $\mathfrak g$ is a vector space, the derivative at a point can be viewed as a map $\mathfrak g \to \mathfrak g$. But the above map is linear, so it is its own derivative at any point.
A subset of a compact set is compact? Claim: Let $S\subset T\subset X$ where $X$ is a metric space. If $T$ is compact in $X$ then $S$ is also compact in $X$. Proof: Given that $T$ is compact in $X$, for any open cover of $T$ there is a finite open subcover; denote it as $\left \{V_i \right \}_{i=1}^{N}$. Since $S\subset T\subset \bigcup_{i=1}^{N} V_i$, $\left \{V_i \right \}_{i=1}^{N}$ also covers $S$ and hence $S$ is compact in $X$. Edited: I see why this is false, but in general, why is every closed subset of a compact set compact?
According to the definition of a compact set, we need every open cover of the set $K$ to contain a finite subcover. Hence, not every subset of a compact set is compact. Why are closed subsets of compact sets compact? Proof Suppose $F\subset K\subset X$, $F$ is closed in $X$, and $K$ is compact. Let $\{G_{\alpha}\}$ be an open cover of $F$. Because $F$ is closed, $F^{c}$ is open. If we add $F^{c}$ to $\{G_{\alpha}\}$, then we get an open cover $\Omega$ of $K$. Because $K$ is compact, $\Omega$ has a finite subcollection $\omega$ which covers $K$, and hence $F$. If $F^{c}$ is in $\omega$, we can remove it from the subcollection. We have found a finite subcollection of $\{G_{\alpha}\}$ that covers $F$.
What does it mean for something to be true but not provable in peano arithmetic? Specifically, the Paris-Harrington theorem. In what sense is it true? True in Peano arithmetic but not provable in Peano arithmetic, or true in some other sense?
Peano Arithmetic is a particular proof system for reasoning about the natural numbers. As such it does not make sense to speak about something being "true in PA" -- there is only "provable in PA", "disprovable in PA", and "independent of PA". When we speak of "truth" it must be with respect to some particular model. In the case of arithmetic statements, the model we always speak about unless something else is explicitly specified is the actual (Platonic) natural numbers. Virtually all mathematicians expect these numbers to "exist" (in whichever philosophical sense you prefer mathematical objects to exist in) independently of any formal system for reasoning about them, and the great majority expect all statements about them to have objective (but not necessarily knowable) truth values. We're very sure that everything that is "provable in PA" is also "true about the natural numbers", but the converse does not hold: There exist sentences that are "true about the actual natural numbers" but not "provable in PA". This was famously proved by Gödel -- actually he gave a (formalizable, with a few additional technical assumptions) proof that a particular sentence was neither "provable in PA" nor "disprovable in PA", and a convincing (but not strictly formalizable) argument that this sentence is true about the actual natural numbers. Paris-Harrington shows that another particular sentence is of this kind: not provable in PA, yet true about the actual natural numbers.
Proofs with limit superior and limit inferior: $\liminf a_n \leq \limsup a_n$ I am stuck on proofs with subsequences. I do not really have a strategy or starting point with subsequences. NOTE: subsequential limits are limits of subsequences Prove: $a_n$ is bounded $\implies \liminf a_n \leq \limsup a_n$ Proof: Let $a_n$ be a bounded sequence. That is, $\forall n,\ |a_n| \leq A$. If $a_n$ converges then $\liminf a_n = \lim a_n = \limsup a_n$ and we are done. Otherwise $a_n$ has a set of subsequential limits and we need to show $\liminf a_n \leq \limsup a_n$: This is where I am stuck...
Hint: Think about what the definitions mean. We have $$\limsup a_n = \lim_n \sup \{ a_k \textrm{ : } k \geq n\}$$ and $$\liminf a_n = \lim_n \inf \{ a_k \textrm{ : } k \geq n\}$$ What can you say about the individual terms $\sup \{a_k \textrm{ : } k \geq n\}$ and $\inf \{a_k \textrm{ : } k \geq n\}$ ?
Prove an inequality with a $\sin$ function: $\sin(x) > \frac{2}{\pi}x$ for $0 < x < \frac{\pi}{2}$ $$\forall{x\in(0,\tfrac{\pi}{2})}:\ \sin(x) > \frac{2}{\pi}x $$ I suppose that solving $ \sin x = \frac{2}{\pi}x $ is the top difficulty of this exercise, but I don't know how to think out such cases in which there is an argument on the right side of a trigonometric equation.
Here is a simple solution. Let $0 < x < \pi/2$ be fixed. By Mean value theorem there exists $y \in (0, x)$ such that $$\sin(x)-\sin(0)= \cos(y)(x-0).$$ Thus $$\frac{\sin(x)}{x}= \cos(y).$$ Similarly there exists $z \in (x, \pi/2)$ such that $$\sin(\pi/2)-\sin(x)= \cos(z)(\pi/2-x).$$ Thus $$\frac{1-\sin(x)}{\pi/2- x}= \cos(z).$$ As $0 < y < z< \pi/2$ and $\cos$ is a strictly decreasing function in $[0, \pi/2]$ we see that $$ \cos (z) < \cos (y).$$ Thus $$\frac{1-\sin(x)}{\pi/2- x} < \frac{\sin(x)}{x}.$$ $$\Rightarrow \frac{1-\sin(x)}{\sin(x)} < \frac{\pi/2- x}{x}.$$ $$\Rightarrow \frac{1}{\sin(x)}- 1< \frac{\pi}{2x}- 1.$$ $$\Rightarrow \frac{2}{\pi}< \frac{\sin (x)}{x}.$$ Since $y \in (0, \pi/2)$, therefore $\cos(y) <1$. Hence it is also clear that $$\frac{\sin(x)}{x}= \cos(y) <1.$$ Hence for any $0 < x < \pi/2$, we get $$\frac 2 \pi<\frac{\sin(x)}{x}<1.$$
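A quick numerical check of the resulting two-sided bound (a sketch; the grid is arbitrary and excludes the endpoints):

```python
import math

# 2/pi < sin(x)/x < 1 should hold throughout (0, pi/2).
xs = [k * (math.pi / 2) / 1000 for k in range(1, 1000)]
assert all(2 / math.pi < math.sin(x) / x < 1 for x in xs)
print("2/pi < sin(x)/x < 1 on the sampled grid")
```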
Tenenbaum and Pollard, Ordinary Differential Equations, problem 1.4.29, what am I missing? Tenenbaum and Pollard's "Ordinary Differential Equations," chapter 1, section 4, problem 29 asks for a differential equation whose solution is "a family of straight lines that are tangent to the circle $x^2 + y^2 = c^2$, where $c$ is a constant." Since the solutions will be lines, I start with the formula $y = m x + b$, and since the line is determined by a single parameter (the point on the circle to which the line is tangent) I expect the differential equation to be of order one. Differentiating, I get $y' = m$, so $y = y' x + b$. So now, I need an equation for $b$. The solution given in the text is $y = x y' \pm c \sqrt{(y')^2 + 1}$, implying $b = \pm c \sqrt{(y')^2 + 1}$, but try as I might I have been unable to derive this formula for $b$. I'm sure I'm missing something simple.
I'll assume the point $P=(x,y)$ lies on the circle $x^2+y^2=c^2$ in the first quadrant. The slope of the tangent at $P$ is $y'$ as you say. You need to express the $y$ intercept. Extend the tangent line until it meets the $x$ axis at $A$ and the $y$ axis at $B$, and call the origin $O$. Then the two triangles $APO$ and $OPB$ are similar. From this you can get the $y$ intercept, which is the point $(0,OB)$, by use of $$OB=OP\cdot\frac{AB}{OA}=OP\cdot\sqrt{\frac{OA^2+OB^2}{OA^2}}=OP\cdot\sqrt{1+\left(\frac{OB}{OA}\right)^2}.$$ And $y'=-OB/OA$, being the slope of the line joining $A$ to $B$ lying respectively on the $x$ and $y$ axes. Finally the $OP$ here is the constant $c$, the circle's radius.
Solving trigonometric equations of the form $a\sin x + b\cos x = c$ Suppose that there is a trigonometric equation of the form $a\sin x + b\cos x = c$, where $a,b,c$ are real and $0 < x < 2\pi$. An example equation would go the following: $\sqrt{3}\sin x + \cos x = 2$ where $0<x<2\pi$. How do you solve this equation without using the method that moves $b\cos x$ to the right side and squares both sides of the equation? And how is solving $\sqrt{3}\sin x + \cos x = 2$ equivalent to solving $\sin (x+ \frac{\pi}{6}) = 1$?
Riffing on @Yves' "little known" solutions ... The above trigonograph shows a scenario with $a^2 + b^2 = c^2 + d^2$, for $d \geq 0$, and we see that $$\theta = \operatorname{atan}\frac{a}{b} + \operatorname{atan}\frac{d}{c} \tag{1}$$ (If the "$a$" triangle were taller than the "$b$" triangle, the "$+$" would become "$-$". Effectively, we can take $d$ to be negative to get the "other" solution.) Observe that both $c$ and $d$ are expressible in terms of $a$, $b$, $\theta$: $$\begin{align} a \sin\theta + b \cos\theta &= c \\ b \sin\theta - a\cos\theta &= d \quad\text{(could be negative)} \end{align}$$ Solving that system for $\sin\theta$ and $\cos\theta$ gives $$\left.\begin{align} \sin\theta &= \frac{ac+bd}{a^2+b^2} \\[6pt] \cos\theta &= \frac{bc-ad}{a^2+b^2} \end{align}\quad\right\rbrace\quad\to\quad \tan\theta = \frac{ac+bd}{bc-ad} \tag{2}$$ We can arrive at $(2)$ in a slightly-more-geometric manner by noting $$c d = (a\sin\theta + b \cos\theta)d = c( b\sin\theta - a \cos\theta ) \;\to\; ( b c - a d)\sin\theta = \left( a c + b d \right)\cos\theta \;\to\; (2) $$ where each term in the expanded form of the first equation can be viewed as the area of a rectangular region in the trigonograph. (For instance, $b c \sin\theta$ is the area of the entire figure.)
definite and indefinite sums and integrals It just occurred to me that I tend to think of integrals primarily as indefinite integrals and sums primarily as definite sums. That is, when I see a definite integral, my first approach at solving it is to find an antiderivative, and only if that doesn't seem promising I'll consider whether there might be a special way to solve it for these special limits; whereas when I see a sum it's usually not the first thing that occurs to me that the terms might be differences of some function. In other words, telescoping a sum seems to be just one particular way among many of evaluating it, whereas finding antiderivatives is the primary way of evaluating integrals. In fact I learned about telescoping sums much later than about antiderivatives, and I've only relatively recently learned to see these two phenomena as different versions of the same thing. Also it seems to me that empirically the fraction of cases in which this approach is useful is much higher for integrals than for sums. So I'm wondering why that is. Do you see a systematic reason why this method is more productive for integrals? Or is it perhaps just a matter of education and an "objective" view wouldn't make a distinction between sums and integrals in this regard? I'm aware that this is rather a soft question, but I'm hoping it might generate some insight without leading to open-ended discussions.
Also, read about how Feynman learned some non-standard methods of indefinite integration (such as differentiating under the integral sign) and used these to get various integrals that usually needed complex integration.
The intersection of a line with a circle Get the intersections of the line $y=x+2$ with the circle $x^2+y^2=10$ What I did: $y^2=10-x^2$ $y=\sqrt{10-x^2}$ or $y=-\sqrt{10-x^2}$ $ x+ 2 = y=\sqrt{10-x^2}$ If you continue, $x=-3$ or $x=1$, so you get 2 points $(1,3)$, $(-3,-1)$ But then, and here is where the problems come: $x+2=-\sqrt{10-x^2}$ I then, after a while, get the point $(-3\tfrac{1}{2}, -1\tfrac{1}{2})$, but this doesn't seem to be correct. What have I done wrong at the end?
Let the intersection be $(a,b)$, so it must satisfy both of the given equations. So, $b=a+2$ and also $a^2+b^2=10$. Putting $b=a+2$ in the given circle, $a^2+(a+2)^2=10$ $2a^2+4a+4=10\implies a=1$ or $-3$ If $a=1,b=a+2=3$ If $a=-3,b=-3+2=-1$ So, the intersections are $(-3,-1)$ and $(1,3)$
Get the equation of a circle when given 3 points Get the equation of a circle through the points $(1,1), (2,4), (5,3) $. I can solve this by simply drawing it, but is there a way of solving it (easily) without having to draw?
Big hint: Let $A\equiv (1,1)$,$B\equiv (2,4)$ and $C\equiv (5,3)$. We know that the perpendicular bisectors of the three sides of a triangle are concurrent.Join $A$ and $B$ and also $B$ and $C$. The perpendicular bisector of $AB$ must pass through the point $(\frac{1+2}{2},\frac{1+4}{2})$ Now find the equations of the straight lines AB and BC and after that the equation of the perpendicular bisectors of $AB$ and $BC$.Solve for the equations of the perpendicular bisectors of $AB$ and $BC$ to get the centre of your circle.
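If you also want a purely computational route (different from the bisector construction above): the circle $x^2+y^2+Dx+Ey+F=0$ is linear in $(D,E,F)$, so the three points give a $3\times 3$ linear system. A sketch:

```python
import numpy as np

pts = [(1, 1), (2, 4), (5, 3)]

# x^2 + y^2 + D x + E y + F = 0 is linear in (D, E, F).
A = np.array([[x, y, 1] for x, y in pts], dtype=float)
b = np.array([-(x**2 + y**2) for x, y in pts], dtype=float)
D, E, F = np.linalg.solve(A, b)

cx, cy = -D / 2, -E / 2
r = np.sqrt(cx**2 + cy**2 - F)
print((cx, cy), r)  # centre (3, 2), radius sqrt(5)
```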
How to find subfields of a non-Galois field extension? Let $K/F$ be a finite field extension. If $K/F$ is Galois then it is well known that there is a bijection between subgroups of $Gal(K/F)$ and subfields of $K/F$. Since finding subgroups of a finite group is always easy (at least in the meaning that we can find every subgroup by brute-force or otherwise) this gives a nice way of finding subfields and proving they are the only ones. What can we do in the case that $K/F$ is not a Galois extension ? that is: How can I find all subfields of a non-Galois field extension ?
In the inseparable case there is an idea for a substitute Galois correspondence due, I think, to Jacobson: instead of considering subgroups of the Galois group, we consider (restricted) Lie subalgebras of the Lie algebra of derivations. I don't know much about this approach, but "inseparable Galois theory" seems to be a good search term.
Volume of Region R Bounded By $y=x^3$ and $y=x$ Let R be the region bounded by $y=x^3$ and $y=x$ in the first quadrant. Find the volume of the solid generated by revolving R about the line $x=-1$
The region goes from $y=0$ to $y=1$. For an arbitrary $y$-value, say, $y=c$, $0\le c\le1$, what is the cross section of the region at height $c$? That is, what is the intersection of the region with the horizontal line $y=c$? What do you get when you rotate that cross-section around the line $x=-1$? Can you find the area of what you get when you rotate that cross-section? If you can, then you can integrate that area from 0 to 1 to get the volume.
Matching in bipartite graphs I'm studying graph theory and the follow question is driving me crazy. Any hint in any direction would be appreciated. Here is the question: Let $G = G[X, Y]$ be a bipartite graph in which each vertex in $X$ is of odd degree. Suppose that any two distinct vertices of $X$ have an even number of common neighbours. Show that $G$ has matching covering every vertex of $X$.
Hint for one possible solution: Consider the adjacency matrix $M\in\Bbb F_2^{|X|\times|Y|}$ of the bipartite graph, i.e. $$M_{x,y}:=\left\{ \begin{align} 1 & \text{ if }x,y \text{ are adjacent} \\ 0 & \text{ else} \end{align} \right. $$ then try to prove that it has rank $|X|$, and then, I think, using Gaussian elimination (perhaps only on the selected linearly independent columns) would produce a proper matching.
Prove $\lim _{x \to 0} \sin(\frac{1}{x}) \ne 0$ Prove $$\lim _{x \to 0} \sin\left(\frac{1}{x}\right) \ne 0.$$ I am unsure of how to prove this problem. I will ask questions if I have doubt on the proof. Thank you!
HINT Consider the sequences $$x_n = \dfrac1{2n \pi + \pi/2}$$ and $$y_n = \dfrac1{2n \pi + \pi/4}$$ and look at what happens to your function along these two sequences. Note that both sequences tend to $0$ as $n \to \infty$.
Find the limit without l'Hôpital's rule Find the limit $$\lim_{x\to 1}\frac{(x^2-1)\sin(3x-3)}{\cos(x^3-1)\tan^2(x^2-x)}.$$ I'm a little rusty with limits, can somebody please give me some pointers on how to solve this one? Also, l'Hôpital's rule isn't allowed in case you were thinking of using it. Thanks in advance.
$$\dfrac{\sin(3x-3)}{\tan^2(x^2-x)} = \dfrac{\sin(3x-3)}{3x-3} \times \left(\dfrac{x^2-x}{\tan(x^2-x)} \right)^2 \times \dfrac{3(x-1)}{x^2(x-1)^2}$$ Hence, $$\dfrac{(x^2-1)\sin(3x-3)}{\cos(x^3-1)\tan^2(x^2-x)} = \dfrac{\sin(3x-3)}{3x-3} \times \left(\dfrac{x^2-x}{\tan(x^2-x)} \right)^2 \times \dfrac{3(x-1)(x^2-1)}{x^2(x-1)^2 \cos(x^3-1)}\\ = \dfrac{\sin(3x-3)}{3x-3} \times \left(\dfrac{x^2-x}{\tan(x^2-x)} \right)^2 \times \dfrac{3(x+1)}{x^2 \cos(x^3-1)}$$ Now the first and second term on the right has limit $1$ as $x \to 1$. The last term limit can be obtained by plugging $x=1$, to give the limit as $$1 \times 1 \times \dfrac{3 \times (1+1)}{1^2 \times \cos(0)} = 6$$
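A numerical sanity check of the limit (a sketch; we evaluate near, but not at, $x=1$):

```python
import math

def f(x):
    num = (x**2 - 1) * math.sin(3 * x - 3)
    den = math.cos(x**3 - 1) * math.tan(x**2 - x) ** 2
    return num / den

for h in (1e-2, 1e-4, 1e-6):
    print(f(1 + h), f(1 - h))  # both columns approach 6
```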
Probability distribution for sampling an element $k$ times after $x$ independent random samplings with replacement In an earlier question ( probability distribution of coverage of a set after `X` independently, randomly selected members of the set ), Ross Rogers asked for the probability distribution for the coverage of a set of $n$ elements after sampling with replacement $x$ times, with uniform probability for every element in the set. Henry provided a very nice solution. My question is a slight extension of this earlier one: What is the probability distribution (mean and variance) for the number of elements that have been sampled at least $k$ times after sampling a set of $n$ elements with replacement, and with uniform probability, $x$ times?
What is not as apparent as it could be in the solution on the page you refer to is that the probability distributions in this kind of problems are often a mess while the expectations and variances can be much nicer. (By the way, you might be confusing probability distributions on the one hand, and expectations and variances on the other hand.) To see this in the present case, note that the number of elements sampled at least $k$ times is $$ N=\sum\limits_{i=1}^n\mathbf 1_{A_i}, $$ where $A_i$ is the event that element $i$ is sampled at least $k$ times. Now, $p_x=\mathbb P(A_i)$ does not depend on $i$ and is the probability that one gets at least $k$ heads in a game of $x$ heads-and-tails such that probability of heads is $u=\frac1n$. Thus, $$ p_x=\sum\limits_{s=k}^x{x\choose s}u^s(1-u)^{x-s}. $$ This yields $$ \mathbb E(N)=np_x. $$ Likewise, $r_x=\mathbb P(A_i\cap A_j)$ does not depend on $i\ne j$ and is the probability that one gets at least $k$ times result A and $k$ times result B in $x$ games where each game yields the results A, B and C with respective probabilities $u$, $u$ and $1-2u$. Thus, $r_x$ can be written down using multinomial distributions instead of the binomial distributions involved in $p_x$. This yields $\mathbb E(N^2)=np_x+n(n-1)r_x$, hence $\mathrm{var}(N)=\mathbb E(N^2)-(np_x)^2$ is $$ \mathrm{var}(N)=np_x+n(n-1)r_x-n^2p_x^2=np_x(1-p_x)+n(n-1)(r_x-p_x^2). $$
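A Monte Carlo check of the formula $\mathbb E(N)=np_x$ (a sketch; the parameters $n$, $x$, $k$ and the trial count are arbitrary):

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(2)
n, x, k = 20, 60, 3

# Count elements sampled at least k times in x uniform draws with replacement.
trials = 20000
counts = np.array([(np.bincount(rng.integers(0, n, size=x), minlength=n) >= k).sum()
                   for _ in range(trials)])

p_x = binom.sf(k - 1, x, 1 / n)   # P(a fixed element is drawn >= k times)
print(counts.mean(), n * p_x)     # empirical mean vs n * p_x
```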
Find the probability mass function. Abe and Zach live in Springfield. Suppose Abe's friends and Zach's friends are each a random sample of 50 out of the 1000 people who live in Springfield. Find the probability mass function of them having $X$ mutual friends. I figured the expected value is $(1000)(\frac{50}{1000})^2 = \frac{5}{2}$ since each person in Springfield has a $(\frac{50}{1000})^2$ chance of being friends with both Abe and Zach. However, how do I generalize this expected value idea to create a probability mass function that returns the probability of having $X$ mutual friends?
The expected value is right, since: $$E[X]=\frac{1}{\binom{1000}{50}}\sum_{k=1}^{50}k\binom{50}{k}\binom{950}{50-k}=\frac{50}{\binom{1000}{50}}\sum_{k=0}^{49}\binom{49}{k}\binom{950}{49-k}$$ $$\frac{\binom{1000}{50}}{50}\,E[X]=[x^{49}]\left((1+x)^{49} x^{49} (1+x^{-1})^{950}\right)=[x^{49}]\left((1+x)^{999}x^{-901}\right)=[x^{950}](x+1)^{999}=\binom{999}{950}=\binom{999}{49},$$ or, more simply, $\frac{\binom{1000}{50}}{50}\,E[X]$ is the number of ways to choose 49 stones from among 49 black stones and 950 white stones, so it is clearly $\binom{999}{49}$. $$E[X]=\frac{50^2}{1000}=2.5.$$
function from A5 to A6 Possible Duplicate: Homomorphism between $A_5$ and $A_6$ Why is it true that every element of the image of the function $f: A_5\longrightarrow A_6$ (alternating groups) defined by $f(x)=(123)(456) x (654)(321)$ does not leave any element of $\{1,2,3,4,5,6\}$ fixed (except the identity)?
I think you are asking why the following is true: Every element of $A_6$ of the form $(123)(456)x(654)(321)$, with $x$ a nontrivial element of $A_5$, leaves no element of $\{1,2,3,4,5,6\}$ fixed. This is actually false: let $x = (12)(34)$. Then $f(x)$ fixes either $5$ or $6$, depending on how you define composition of permutations.
Bounding the Gamma Function I'm trying to verify a bound for the gamma function $$ \Gamma(z) = \int_0^\infty e^{-t}t^{z - 1}\;dt. $$ In particular, for real $m \geq 1$, I'd like to show that $$ \Gamma(m + 1) \leq 2\left(\frac{3m}{5}\right)^m. $$ Knowing that the bound should be attainable, my first instinct is to split the integral as $$ \Gamma(m + 1) = \int_0^{3m/5} e^{-t}t^{m}\;dt + \int_{3m/5}^\infty e^{-t}t^m\;dt \leq (1 - e^{-3m/5})\left(\frac{3m}{5}\right)^m + \int_{3m/5}^\infty e^{-t}t^m\;dt. $$ Using integration by parts, $$ \int_{3m/5}^\infty e^{-t}t^m\;dt = e^{-3m/5}\left(\frac{3m}{5}\right)^m + m\int_{3m/5}^\infty e^{-t}t^{m-1}\;dt.$$ So the problem has been reduced to showing $$ m\int_{3m/5}^\infty e^{-t}t^{m-1}\;dt \leq \left(\frac{3m}{5}\right)^m. $$ But this doesn't seem to have made the problem any easier. Any help is appreciated, thanks.
I'll prove something that's close enough for my applications; in particular, that $$\Gamma(m + 1) \leq 3\left(\frac{3m}{5}\right)^m.$$ Let $0 < \alpha < 1$ be chosen later. We'll split $e^{-t}t^m$ as $(e^{-\alpha t}t^m)e^{-(1 - \alpha)t}$ and use this to bound the integral. First, take a derivative to find a maximum for $e^{-\alpha t}t^m$. $$\frac{d}{dt}e^{-\alpha t}t^m = -\alpha e^{-\alpha t}t^m + me^{-\alpha t}t^{m-1} = -\alpha e^{-\alpha t}t^{m - 1}\left(t - \frac{m}{\alpha}\right). $$ So $t = m / \alpha$ is a critical point, and in particular a maximum (increasing before and decreasing after, if you like). Then we can bound the integral $$ \Gamma(m + 1) = \int_0^\infty e^{-t}t^m\;dt \leq \left(\frac{m}{\alpha e}\right)^m \int_0^\infty e^{-(1 - \alpha)t}\;dt = \left(\frac{m}{\alpha e}\right)^m \left(\frac{1}{1 - \alpha}\right).$$ Choosing $\alpha = 5/(3e)$ and noting that $\frac{1}{1 - 5/(3e)} \leq 3$, we've proven $$ \Gamma(m + 1) \leq 3\left(\frac{3m}{5}\right)^m. $$
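A quick numerical check of the proven bound (a sketch using scipy's `gamma`; the sample values of $m$ are arbitrary):

```python
from scipy.special import gamma

# Check Gamma(m + 1) <= 3 * (3m/5)^m for a range of real m >= 1.
for m in [1, 1.5, 2, 5, 10, 20, 50]:
    lhs = gamma(m + 1)
    rhs = 3 * (3 * m / 5) ** m
    print(m, lhs <= rhs, lhs / rhs)
```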
Show $m^p+n^p\equiv 0 \mod p$ implies $m^p+n^p\equiv 0 \mod p^2$ Let $p$ be an odd prime. Show that $m^p+n^p\equiv 0 \pmod p$ implies $m^p+n^p\equiv 0 \pmod{p^2}$.
By Fermat's little theorem, $m^p \equiv m \pmod p$ and $n^p \equiv n \pmod p$. Hence, $p$ divides $m+n$, i.e. $m+n = pk$. $$m^p + n^p = (pk-n)^p + n^p = p^2 M + \dbinom{p}{1} (pk) (-n)^{p-1} + (-n)^p + n^p\\ = p^2 M + p^2k (-n)^{p-1} \equiv 0 \pmod {p^2}$$
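A brute-force spot check of the statement (a sketch; the ranges are arbitrary):

```python
from sympy import primerange

# If p | m^p + n^p for an odd prime p, then p^2 | m^p + n^p.
for p in primerange(3, 20):
    for m in range(1, 40):
        for n in range(1, 40):
            s = m**p + n**p
            if s % p == 0:
                assert s % (p * p) == 0, (p, m, n)
print("verified for all sampled (p, m, n)")
```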
Generators and cyclic group concept These statements are false according to my book. I am not sure why though * *In every cyclic group, every element is a generator *A cyclic group has a unique generator. Both statements seem to be opposites. I tried to give a counterexample * *I think it's because $\mathbb{Z}_4$ for example has generators $\langle 1\rangle$ and $\langle3\rangle$, but 2 or 0 isn't a generator. *As shown in (1), we have two different generators, $\langle1\rangle$ and $\langle3\rangle$
Take $\mathbb{Z}_n$. This group is cyclic, and its generators are precisely the $\phi(n)$ residues that are relatively prime to $n$.
Trigonometric bounds Is there a nice way to show: $\sin(x) + \sin(y) + \sin(z) \geq 2$ for all $x,y,z$ such that $0 \leq x,y,z \leq \frac{\pi}{2}$ and $x + y + z = \pi$?
Use the following inequality: $$\sin(x) \geq x\frac{2}{\pi} , \quad x \in [0,\pi/2].$$ Then $$\sin(x)+\sin(y)+\sin(z) \geq \frac{2}{\pi}(x+y+z) = \frac{2}{\pi}\cdot\pi = 2.$$ To prove the inequality, consider the function $ f(x) = \frac{\sin(x)}{x} $ if $x \in (0, \pi/2]$ and $f(x) = 1$ if $x=0$. Now show $f$ decreases on $[0,\pi/2]$. Hint: Use the Mean Value Theorem.
Find pair of polynomials a(x) and b(x) If $a(x) + b(x) = x^6-1$ and $\gcd(a(x),b(x))=x+1$, then find a pair of polynomials $a(x)$, $b(x)$. Prove or disprove that there exists more than one distinct pair of such polynomials.
There is too much freedom. Let $a(x)=x+1$ and $b(x)=(x^6-1)-(x+1)$. Or else use $a(x)=k(x+1)$, $b(x)=x^6-1-k(x+1)$, where $k$ is any non-zero integer.
Sobolev differentiability of composite function I was wondering about the following fact: if $\Omega$ is a bounded subset of $\mathbb{R}^n$ and $u\in W^{1,p}(\Omega)$ and $g\in C^1(\mathbb{R},\mathbb{R})$ such that $|g'(t)t|+|g(t)|\leq M$, is it true that $g\circ u \in W^{1,p}(\Omega)$? If $g'\in L^{\infty}$ this would be true, but here we don't have this kind of estimate...
You get $g' \in L^\infty$ from the assumptions, since $|g'(t)| \le M/|t|$, and $g'$ is continuous at $0$.
Uncountability of basis of $\mathbb R^{\mathbb N}$ Given the vector space $V = \mathbb R^{\mathbb N}$ over $\mathbb R$, whose elements are infinite tuples, how can one show that any basis of it is uncountable?
Take any almost disjoint family $\mathcal A$ of infinite subsets of $\mathbb N$ with cardinality $2^{\aleph_0}$. Construction of such set is given here. I.e. for any two set $A,B\in\mathcal A$ the intersection $A\cap B$ is finite. Notice that $$\{\chi_A; A\in\mathcal A\}$$ is a subset of $\mathbb R^{\mathbb N}$ which has cardinality $2^{\aleph_0}$. We will show that this set is linearly independent. This implies that the base must have cardinality at least $2^{\aleph_0}$. (Since every independent set is contained in a basis - this can be shown using Zorn lemma. You can find the proof in many places, for example these notes on applications of Zorn lemma by Keith Conrad.) Suppose that, on the contrary, $$\chi_A=\sum_{i\in F} c_i\chi_{A_i}$$ for some finite set $F$ and $A,A_i\in\mathcal A$ (where $A_i\ne A$ for $i\in F$). The set $P=A\setminus \bigcup\limits_{i\in F}(A\cap A_i)$ is infinite. For any $n\in P$ we have $\chi_A(n)=1$ and $\sum\limits_{i\in F} c_i\chi_{A_i}(n)=0$. So the above equality cannot hold. You can find a proof about dimension of the space $\mathbb R^{\mathbb R}$ (together with some basic facts about Hamel bases) here: Does $\mathbb{R}^\mathbb{R}$ have a basis? In fact, it can be shown that already smaller spaces must have dimension $2^{\aleph_0}$, see Cardinality of a Hamel basis of $\ell_1(\mathbb{R})$.
Irreducibility of $X^{p-1} + \cdots + X+1$ Can someone give me a hint how to prove the irreducibility of $X^{p-1} + \cdots + X+1$, where $p$ is a prime, in $\mathbb{Z}[X]$? Our professor already gave us one, namely to substitute $X$ with $X+1$, but I couldn't make much of that.
Hint: Let $y=x-1$. Note that our polynomial is $\dfrac{x^p-1}{x-1}$, which is $\dfrac{(y+1)^p-1}{y}$. It is not difficult to show that $\binom{p}{k}$ is divisible by $p$ if $0\lt k\lt p$. Now use the Eisenstein Criterion.
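To see the hint in action computationally, here is a sketch that applies the Eisenstein test at $p$ to $((x+1)^p - 1)/x$ for a few primes (sympy assumed):

```python
import sympy as sp

x = sp.symbols('x')
for p in (3, 5, 7, 11):
    # The polynomial in question after the substitution x -> x + 1.
    poly = sp.Poly(sp.cancel(((x + 1)**p - 1) / x), x)
    coeffs = poly.all_coeffs()
    # Eisenstein at p: leading coefficient not divisible by p, all others
    # divisible by p, constant term not divisible by p^2.
    ok = (coeffs[0] % p != 0
          and all(c % p == 0 for c in coeffs[1:])
          and coeffs[-1] % p**2 != 0)
    print(p, ok)  # True for each p
```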
Continuity of $f \cdot g$ and $f/g$ on standard topology. Let $f, g: X \rightarrow \mathbb{R}$ be continuous functions, where ($X, \tau$) is a topological space and $\mathbb{R}$ is given the standard topology. a)Show that the function $f \cdot g : X \rightarrow \mathbb{R}$,defined by $(f \cdot g)(x) = f(x)g(x)$ is continuous. b)Let $h: X \setminus \{x \in X | g(x) = 0\}\rightarrow \mathbb{R}$ be defined by $h(x) = \frac{f(x)}{g(x)}$ Show that $h$ is continuous.
The central fact is that the operations $$p:\ {\mathbb R}^2\to{\mathbb R},\quad (x,y)\mapsto x\cdot y$$ and similarly $q:\ (x,y)\mapsto {\displaystyle{x\over y}}$ are continuous where defined and that $$h:\ X\to{\mathbb R}^2,\quad x\mapsto\bigl(f(x),g(x)\bigr)$$ is continuous if $f$ and $g$ are continuous. It follows that $f\cdot g=p\circ h$ is continuous, and similarly for ${\displaystyle{f\over g}}$.
Cartesian product of a set Question: What is A $\times$ A, where A = {0, $\pm$1, $\pm$2, ...}? Thinking: Is this a set, say B = {0, 1, 2, ... }? This was in my homework; can you help me?
No, $A\times A$ is not a sequence. Neither is $A$: it’s just a set. By definition $A\times A=\{\langle a_1,a_2\rangle:a_1,a_2\in A\}$. Thus, $A\times A$ contains elements like $\langle 1,0\rangle$, $\langle -2,17\rangle$, and so on. Indeed, since $A$ is just the set of all integers, usually denoted by $\Bbb Z$, $A\times A$ is simply the set of all ordered pairs of integers. Here is a picture of $A\times A$: the lines are the coordinate axes, and the dots are the points of $A\times A$, thought of as points in the plane.
Show that $\liminf \limits _{k\rightarrow \infty} f_k = \lim \limits _{k\rightarrow \infty} f_k$ Is there a way to show that $\liminf \limits _{k\rightarrow \infty} f_k = \lim \limits _{k\rightarrow \infty} f_k$. The only way I can think of is by showing $\liminf \limits _{k\rightarrow \infty} f_k = \limsup \limits _{k\rightarrow \infty} f_k$. Is there another way? Edit: Sorry, I should have mentioned that you can assume that $\{f_k\}_{k=0}^\infty$ where $f_k:E \rightarrow R_e$ is a (Lebesgue) measurable function
The following always holds: $\inf_{k\geq n} f_k \leq f_n \leq \sup_{k\geq n} f_k$. Note that the lower bound is non-decreasing and the upper bound is non-increasing. Suppose $\alpha = \liminf_k f_k = \limsup_k f_k$, and let $\epsilon>0$. Then there exists an $N$ such that for $n>N$, we have $\alpha -\inf_{k\geq n} f_k < \epsilon$ and $\sup_{k\geq n} f_k -\alpha < \epsilon$. Combining this with the above inequality yields $-\epsilon < f_n - \alpha< \epsilon$ for $n>N$, from which it follows that $\lim_k f_k = \alpha$. Now suppose $\alpha = \lim_k f_k$. Let $\epsilon >0$; then there exists an $N$ such that for $k>N$, $-\frac{\epsilon}{2}+\alpha < f_k< \frac{\epsilon}{2}+\alpha$. It follows from this that for $n>N$, $-\epsilon + \alpha \leq \inf_{k\geq n} f_k \leq \sup_{k\geq n} f_k < \epsilon+\alpha$, and hence $\liminf_k f_k = \limsup_k f_k = \alpha$. Hence the limit exists iff the $\liminf$ and $\limsup$ are equal.
Why is $\Gamma\left(\frac{1}{2}\right)=\sqrt{\pi}$? It seems as if no one has asked this here before, unless I don't know how to search. The Gamma function is $$ \Gamma(\alpha)=\int_0^\infty x^{\alpha-1} e^{-x}\,dx. $$ Why is $$ \Gamma\left(\frac{1}{2}\right)=\sqrt{\pi}\text{ ?} $$ (I'll post my own answer, but I know there are many ways to show this, so post your own!)
This is a "proof". We know that the surface area of the $n-1$ dimensional unit sphere is $$ |S^{n-1}| = \frac{2\pi^{\frac{n}2}}{\Gamma(\frac{n}2)}. $$ On the other hand, we know that $|S^2|=4\pi$, which gives $$ 4\pi = \frac{2\pi^{\frac32}}{\Gamma(\frac32)} = \frac{2\pi^{\frac32}}{\frac12\Gamma(\frac12)} = \frac{4\pi^{\frac32}}{\Gamma(\frac12)}. $$ Solving for $\Gamma(\frac12)$ gives $\Gamma(\frac12)=\pi^{\frac12}=\sqrt{\pi}$.
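As a one-line numeric sanity check, using nothing beyond the standard library:

```python
# math.gamma(0.5) should agree with sqrt(pi) to machine precision
import math
print(math.gamma(0.5), math.sqrt(math.pi))  # both ~1.7724538509055159
```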
What gives rise to the normal distribution? I'd like to know if anyone has a generally friendly explanation of why the normal distribution is an attractor of so many observed behaviors in their eventuality. I have a degree in math if you want to get technical, but I'd like to be able to explain to my grandma as well
To my mind the reason for the pre-eminence can best be seen in what must be the most electrifying half page of prose in the scientific literature, where James Clerk Maxwell deduces the distribution law for the velocities of molecules of an ideal gas (now known as the Maxwell-Boltzmann law), thus founding the discipline of statistical physics. This can be found in his collected papers or, more accessibly, in Hawking's anthology "On the Shoulders of Giants". The only assumptions he uses are that the density depends on the magnitude of the velocity (and not on the direction) and that the components parallel to the axes are independent. Mathematically, this means that the only functions in three-dimensional space which have radial symmetry and split as a product of three functions of the individual variables are those which arise in the normal distribution.
Function that sends $1,2,3,4$ to $0,1,1,0$ respectively I already got tired trying to think of a function $f:\{1,2,3,4\}\rightarrow \{0,1,1,0\}$ in other words: $$f(1)=0\\f(2)=1\\f(3)=1\\f(4)=0$$ Don't suggest division in integers; it will not pass for me. Are there ways to implement it with modulo, absolute value, and so on, without conditions?
Look at the example for Lagrange interpolation; then it is easy to construct a function taking any prescribed values on any finite sequence of points. In this case: $$L(x)=\frac{1}{2}(x-1)(x-3)(x-4) + \frac{-1}{2}(x-1)(x-2)(x-4)$$ which simplifies to: $$L(x)=\frac{-1}{2}(x-1)(x-4)$$ which could possibly explain Jasper's answer, but since the method of derivation was not mentioned I cannot say for sure.
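A quick evaluation confirms the simplified polynomial hits the required values:

```python
# L(x) = -(x - 1)(x - 4)/2 should give f(1)=0, f(2)=1, f(3)=1, f(4)=0
L = lambda x: -0.5 * (x - 1) * (x - 4)
print([L(x) for x in (1, 2, 3, 4)])  # [0.0, 1.0, 1.0, -0.0], i.e. 0, 1, 1, 0
```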
Prove that the language $\{ww \mid w \in \{a,b\}^*\}$ is not FA (Finite Automata) recognisable. Hint: Assume that $|xy| \le k$ in the pumping lemma. I have no idea where to begin for this. Any help would be much appreciated.
It's also possible — and perhaps simpler — to prove this directly using the pigeonhole principle without invoking the pumping lemma. Namely, assume that the language $L = \{ww \,|\, w \in \{a,b\}^*\}$ is recognized by a finite state automaton with $n$ states, and consider the set $W = \{a,b\}^k \subset \{a,b\}^*$ of words of length $k$, where $2^k > n$. By the pigeonhole principle, since $|W| = 2^k > n$, there must be two distinct words $w,w' \in W$ such that, after reading the word $w$, the automaton is in the same state as after reading $w'$. But this means that, since the automaton accepts $ww \in L$, it must also accept $w'w \notin L$, which leads to a contradiction.
Differentiability at 0 I am having a problem with this exercise. Please help. Let $\alpha >1$. Show that if $|f(x)| \leq |x|^\alpha$, then $f$ is differentiable at $0$.
Use the definition of the derivative. It is clear that $f(0)=0$. Note that if $h\ne 0$ then $$\left|\frac{f(h)-0}{h}\right| \le |h|^{\alpha-1}.$$ Since $\alpha\gt 1$, $|h|^{\alpha-1}\to 0$ as $h\to 0$.
Solve logarithmic equation I'm getting stuck trying to solve this logarithmic equation: $$ \log( \sqrt{4-x} ) - \log( \sqrt{x+3} ) = \log(x) $$ I understand that the first and second terms can be combined & the logarithms share the same base so one-to-one properties apply and I get to: $$ x = \frac{\sqrt{4-x}}{ \sqrt{x+3} } $$ Now if I square both sides to remove the radicals: $$ x^2 = \frac{4-x}{x+3} $$ Then: $$ x^2(x+3) = 4-x $$ $$ x^3 +3x^2 + x - 4 = 0 $$ Is this correct so far? How do I solve for x from here?
Fine so far. I would just use Wolfram Alpha, which shows there is a root about $0.89329$. The exact value is a real mess. I tried the rational root theorem, which failed. If I didn't have Alpha, I would go for a numeric solution. You can see there is a solution in $(0,1)$ because the left side is $-4$ at $0$ and $+1$ at $1.$
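For reference, here is the kind of numeric solution I had in mind, using the sign change on $(0,1)$ (a sketch with plain bisection):

```python
# f(0) = -4 and f(1) = +1, so bisect the sign change in (0, 1)
def f(x):
    return x**3 + 3*x**2 + x - 4

lo, hi = 0.0, 1.0
for _ in range(60):          # 60 halvings is ample for double precision
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid
print((lo + hi) / 2)         # ~0.89329
```

Remember also to check the root against the original logarithmic equation, since squaring can introduce extraneous solutions.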
How to convert a formula to CNF? I am trying to convert the formula: $((p \wedge \neg n) \vee (n \wedge \neg p)) \vee z$. I understand i need to apply the z to each clause, which gives: $((p \wedge \neg n) \vee z) \vee ((n \wedge \neg p) \vee z)$. I know how to simplify this, but am unsure how it will lead to the answer containing 2 clauses. Thanks.
To convert to conjunctive normal form we use the following rules: Double Negation: * *$P\leftrightarrow \lnot(\lnot P)$ De Morgan's Laws * *$\lnot(P\bigvee Q)\leftrightarrow (\lnot P) \bigwedge (\lnot Q)$ *$\lnot(P\bigwedge Q)\leftrightarrow (\lnot P) \bigvee (\lnot Q)$ Distributive Laws * *$(P \bigvee (Q\bigwedge R))\leftrightarrow (P \bigvee Q) \bigwedge (P\bigvee R)$ *$(P \bigwedge (Q\bigvee R))\leftrightarrow (P \bigwedge Q) \bigvee (P\bigwedge R)$ So let's expand the following * *$((P\bigwedge \lnot N)\bigvee (N\bigwedge\lnot P))\bigvee Z$ *$(((P\bigwedge \lnot N)\bigvee N)\bigwedge ((P\bigwedge \lnot N)\bigvee \lnot P))\bigvee Z$ *$((P\bigvee N)\bigwedge (\lnot N\bigvee N ) \bigwedge (P\bigvee \lnot P)\bigwedge (\lnot N \bigvee \lnot P)) \bigvee Z$ Then noting that $(\lnot N\bigvee N)$ and $(P\bigvee \lnot P)$ are always true we may remove them and get (it's cancelling these terms that gives a 2-clause answer): * *$((P\bigvee N)\bigwedge (\lnot N \bigvee \lnot P)) \bigvee Z$ *$((P\bigvee N\bigvee Z)\bigwedge (\lnot N \bigvee \lnot P \bigvee Z))$ This will in general not happen though, and you may get more terms in your formula in CNF. Just so you know, you can also check these things on Wolfram Alpha
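If you have Python around, sympy's logic module (assuming it is installed) produces the same two-clause form:

```python
# sympy's to_cnf with simplify=True should return the 2-clause CNF
from sympy.abc import P, N, Z
from sympy.logic.boolalg import to_cnf

expr = (P & ~N) | (N & ~P) | Z
print(to_cnf(expr, simplify=True))  # something equivalent to (N | P | Z) & (Z | ~N | ~P)
```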
How do you prove that proof by induction is a proof? Are proofs by induction limited to cases where there is an explicit dependance on a integer, like sums? I cannot grasp the idea of induction being a proof in less explicit cases. What if you have a function that suddenly changes behavior? If a function is positive up to some limit couldn't I prove by induction that it is always positive by starting my proof on the left of the limit? I feel like I am not getting a grasp on something fundamental and I hope you can help me with this. Thank you.
This math.SE question has a lot of great answers as far as induction over the reals goes. And as Austin mentioned, there are many cases in graph theory where you can use induction on the vertices or edges of a graph to prove a result. An example: If every two nodes of $G$ are joined by a unique path, then $G$ is connected and $n = e + 1$ Where $n$ is the number of nodes and $e$ is the number of edges. $G$ is connected since any two nodes are joined by a path. To show $n = e + 1$, we use induction. Assume it’s true for fewer than $n$ points (this means we're using strong induction). Removing any edge from $G$ breaks $G$ into two components, since paths are unique. Suppose the sizes are $n_1$ and $n_2$, with $n_1 + n_2 = n$. By the induction hypothesis, $n_1 = e_1 + 1$ and $n_2 = e_2 + 1$; but then, since $e_1 + e_2 = e - 1$ (one edge was removed),$$n = n_1 + n_2 = (e_1 + 1) + (e_2 + 1) = (e_1 + e_2) + 2 = e − 1 + 2 = e + 1$$ So our statement is true by strong induction
Proving that floor(n/2)=n/2 if n is an even integer and floor(n/2)=(n-1)/2 if n is an odd integer. How would one go about proving the following. Any ideas as to where to start? For any integer n, the floor of n/2 equals n/2 if n is even and (n-1)/2 if n is odd. Summarize: [n/2] = n/2 if n = even [n/2] = (n-1)/2 if n = odd Working through it, I try to initially set n = 2n for the even case but am stuck on how to show its a floor... thanks
You should set $n=2m$ for even numbers, where $m$ is an integer. Then $\frac n2=m$ and the floor of an integer is itself. The odd case is similar.
Solve the Relation $T(n)=T(n/4)+T(3n/4)+n$ Solve the recurrence relation: $T(n)=T(n/4)+T(3n/4)+n$. Also, specify an asymptotic bound. Clearly $T(n)\in \Omega(n)$ because of the constant factor. The recursive nature hints at a possibly logarithmic runtime (because $T(n) = T(n/2) + 1$ is logarithmic, something similar may occur for the problem here). However, I'm not sure how to proceed from here. Even though the recurrence does not specify an initial value (i.e. $T(0)$), if I set $T(0) = 1$ some resulting values are: 0 1 100 831 200 1939 300 3060 400 4291 500 5577 600 6926 700 8257 800 9665 900 10933 The question: Is there a technique that I can use to solve the recurrence in terms of $n$ and $T(0)$? If that proves infeasible, is there a way to determine the asymptotic behavior of the recurrence?
$T(n)=T\left(\dfrac{n}{4}\right)+T\left(\dfrac{3n}{4}\right)+n$ $T(n)-T\left(\dfrac{n}{4}\right)-T\left(\dfrac{3n}{4}\right)=n$ Finding a closed form for the particular solution is not a great problem. Let $T_p(n)=An\ln n$. Then $An\ln n-\dfrac{An}{4}\ln\dfrac{n}{4}-\dfrac{3An}{4}\ln\dfrac{3n}{4}\equiv n$ $An\ln n-\dfrac{An}{4}(\ln n-\ln4)-\dfrac{3An}{4}(\ln n+\ln3-\ln4)\equiv n$ $\dfrac{(4\ln4-3\ln3)An}{4}\equiv n$ $\therefore\dfrac{(4\ln4-3\ln3)A}{4}=1$ $A=\dfrac{4}{4\ln4-3\ln3}$ $\therefore T_p(n)=\dfrac{4n\ln n}{4\ln4-3\ln3}$ But the prospects for a closed-form complementary solution are not as good, since we would have to handle the equation $T_c(n)-T_c\left(\dfrac{n}{4}\right)-T_c\left(\dfrac{3n}{4}\right)=0$.
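To connect this with the asymptotic question: a quick numeric experiment (a sketch, taking $T(0)=1$ and integer-division splits as in the question's table) suggests $T(n)/(n\ln n)$ levels off near the constant $A$ derived above, consistent with $T(n)\in\Theta(n\log n)$:

```python
# compare T(n) (memoized, with floor divisions) against A * n * ln(n)
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    if n <= 0:
        return 1
    return T(n // 4) + T(3 * n // 4) + n

A = 4 / (4 * math.log(4) - 3 * math.log(3))   # ~1.778
for n in (10**3, 10**4, 10**5):
    print(n, T(n) / (n * math.log(n)), A)
```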
$ \lim_{x \to \infty} \frac{xe^{x-1}}{(x-1)e^x} $ $$ \lim_{x \to \infty} \frac{xe^{x-1}}{(x-1)e^x} $$ I don't know what to do. At all. I've read the explanations in my book at least a thousand times, but they're over my head. Oh, and I'm not allowed to use L'Hospital's rule. (I'm guessing it isn't needed for limits of this kind anyway. This one is supposedly simple - a beginners problem.) Most of the answers I've seen on the Internet simply says "use L'Hospital's rule". Any help really appreciated. I'm so frustrated right now...
$$\lim_{x\to\infty}\frac{xe^{x-1}}{(x-1)e^x}=\lim_{x\to\infty}\frac{x}{x-1}\cdot\lim_{x\to\infty}\frac{e^{x-1}}{e^x}=1\cdot\frac{1}{e}=\frac{1}{e}$$ the first equality being justified by the fact that each of the right hand side limits exists finitely.
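Numerically you can watch the product converge (note the second factor is identically $e^{-1}$):

```python
# the ratio should approach 1/e ~ 0.3678794...
import math
for x in (10.0, 100.0, 700.0):   # stop before exp overflows near x = 710
    print(x, x * math.exp(x - 1) / ((x - 1) * math.exp(x)))
print(1 / math.e)
```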
Prove DeMorgan's Theorem for indexed family of sets. Let $\{A_n\}_{n\in\mathbb N}$ be an indexed family of sets. Then: $(i) (\bigcup\limits_{n=1}^\infty A_n)' = \bigcap\limits_{n=1}^\infty (A'_n)$ $(ii) (\bigcap\limits_{n=1}^\infty A_n)' = \bigcup\limits_{n=1}^\infty (A'_n)$ I went from doing simple, straightforward indexed set proofs to this, and I don't even know where to start.
If $a\in (\bigcup_{n=1}^{\infty}A_{n})'$ then $a\notin A_{n}$ for any $n\in \mathbb{N}$, therefore $a\in A_{n}'$ for all $n\in \mathbb{N}$. Thus $a\in \bigcap_{n=1}^{\infty}A_{n}'$. Since $a$ was arbitrary, this shows $(\bigcup_{n=1}^{\infty}A_{n})' \subset \bigcap_{n=1}^{\infty}A_{n}'$. The other containment and the other problem are similar.
Finding angles from some other angles related to incircle Let $ABC$ be a triangle and $O$ the center of its enscribed circle. Let $M = BO \cap AC$ and $N=CO \cap AB$ such that $\measuredangle NMB = 30°, \measuredangle MNC = 50°$. Find $\angle ABC, \angle BCA$ and $\angle CAB$. I also posted this here at the Art of Problem Solving.
At first, I took $O$ to denote the center of the incircle inscribed into the triangle. Only a comment below clarified that you were actually talking about the center of the circumcircle circumscribed around the triangle. Therefore, I have two solutions, one for each interpretation. Circumcircle Angles without a proof I constructed your situation using Cinderella. At first I had one point chosen freely on a line, which I later moved into a position where everything fit the way it should. From the resulting figure, I obtained the following measurements: \begin{align*} \measuredangle ABC &= 60° \\ \measuredangle BCA &= 70° \\ \measuredangle CAB &= 50° \end{align*} Warning: I accidentally swapped $M$ and $N$ as well as $B$ and $C$ in the following image. So take care to look more at the actual angles than the denoted point names. I guess that once you know these angles at the corners, you can choose suitable coordinates and using these prove that the angles specified in your question are indeed as required. Oriented angles In the above, I interpreted the angles in your question as unoriented, and measured one of them clockwise and the other counter-clockwise. I furthermore assumed $M$ and $N$ to lie between the corresponding corners of the triangle. Strictly speaking, your angles are given in an oriented fashion, and as both of them are positive, $B$ and $C$ must lie on different sides of $MN$. This leads to rather ugly triangles. Of the two possible solutions I found, the better one is the following: The angles for this triangle are rather ugly, compared to the nice numbers resulting from the interpretation above. Incircle Angles without a proof This result I obtained in a similar way to the one outlined above, using Cinderella with one free point on a line adjusted till things line up as intended. \begin{align*} \measuredangle ABC &= 40° \\ \measuredangle BCA &= 120° \\ \measuredangle CAB &= 20° \end{align*} Construction Let us call the half corner angles like this: \begin{align*} \alpha &= \angle BAO = \angle OAC \\ \beta &= \angle CBO = \angle OBA \\ \gamma &= \angle ACO = \angle OCB \end{align*} From $\triangle BCO$ you see that $\angle BOC = 180° - \beta - \gamma$. That is the same as $\angle MON$, so from $\triangle MNO$ you can conclude that $\beta+\gamma = 30° + 50°$. From $\triangle ABC$ you know that $2\alpha + 2\beta + 2\gamma = 180°$, so $\alpha = 90° - (\beta + \gamma) = 10°$. Based on this angle, you can construct the triangle even without knowing the corner angles up front: * *Choose $M$ and $N$ arbitrarily *Draw the rays $MO$ and $NO$ using the given angles to obtain $O$ *Draw lines at $90° - \alpha = 80°$ from $MO$ through $M$ and $O$ as indicated in the figure below (green). Around their intersection, draw a circle containing all points that see $M$ and $O$ under an angle of $10°$. *Repeat the previous step for $NO$. The intersection of these two circles is $A$. *Now $C = AM \cap NO$ and $B = AN \cap MO$. Measuring the angles in this figure will give you the desired result up to the precision with which you performed your drawings and measurements.
Stone Weierstrass on noncompact subset I would like to ask whether we can create a function $f:\mathbb{R}\rightarrow\mathbb{R}$ which is continuous on $\mathbb{R}$ but $f$ is not the pointwise limit of any sequence of polynomial function $\{p_n(x)\}_{n\in\mathbb{N}}$. Thank you for all helping.
By Stone-Weierstrass applied to $f$ restricted to the interval $[-n,n]$ there is a polynomial $p_n$ such that $\sup_{x\in [-n,n]} |f(x)-p_n(x)|< \frac{1}{n}$. The sequence $(p_n)$ converges pointwise to $f$, so no such function exists. On the other hand, there is no sequence of polynomials converging uniformly on $\mathbb{R}$ to $f(x)=\sin x$, since polynomials are unbounded or constant.
Closed subsets $A,B\subset\mathbb{R}^2$ so that $A+B$ is not closed I am looking for closed subsets $A,B\subset\mathbb{R}^2$ so that $A+B$ is not closed. I define $A+B=\{a+b:a\in A,b\in B\}$ I thought of this example, but it is only in $\mathbb{R}$. Take: $A=\{\frac{1}{n}:n\in\mathbb{Z^+}\}\cup\{0\}$ and $B=\mathbb{Z}$ both of these are closed (is this correct?). But their sum $A+B=\mathbb{Q}$ which is not closed.
Let $A=\mathbb{Z}$ and $B=p\mathbb{Z}:=\{pn: n\in\mathbb{Z}\}$ where $p$ is any irrational number. So $A$ and $B$ are two closed subsets of $\mathbb{R}$, but $A+B :=\{m+pn: m,n\in\mathbb{Z}\}$ is not closed in $\mathbb{R}$. Indeed, since $p$ is irrational, $A+B$ is a dense subgroup of $\mathbb{R}$; being countable, it is a proper dense subset, hence not closed.
Centroids of triangle On the outside of triangle $ABC$ construct equilateral triangles $ABC_1,BCA_1, CAB_1$, and inside of $ABC$, construct equilateral triangles $ABC_2,BCA_2, CAB_2$. Let $G_1,G_2,G_3$ and $G_4,G_5,G_6$ be respectively the centroids of triangles $ABC_1,BCA_1, CAB_1$ and $ABC_2,BCA_2, CAB_2$. Prove that the centroids of triangle $G_1G_2G_3$ and of triangle $G_4G_5G_6$ coincide.
The easiest way to get at the result is to apply an affine transformation to the original triangle so as to make it equilateral. The constructed "interior" triangles will all then coincide with the original triangle, and the conclusion can be easily drawn using basic geometry.
A question about the definition of fibre bundle The canonical definition of fibre bundle is the following: Let $B,X,F$ be three topological spaces and $\pi:X\rightarrow B$ a continuous surjective map; then $(X,F,B,\pi)$ is a fibre bundle on $B$ if for all $b\in B$ exist an open neighbourhood of $U$ of $b$ and a homeomorphism $\phi_U:\pi^{-1}(U)\rightarrow U\times F$ such that $proj_U\circ \phi_U=\pi_{|U}$ (where $proj_U$ is the canonical projection on $U$). A consequence of the previuous definition is that the set $\pi^{-1}(p)$ is homeomorphic to $F$ for all $p\in U$, but I don't understant why this is true. Lets try to resctrict the function $\phi_U$ to the set $\pi^{-1}(p)$ (under the hypothesis that $p\in U$), so we have: $$proj_U\bigg(\phi_U(\pi^{-1}(p))\bigg)=\pi_{|U}(\pi^{-1}(p)) = p $$ but now we can't multiply both sides for $proj_U^{-1}$ because the function is not injective, and we can't conclude that $\phi_U(\pi^{-1}(p))=proj^{-1}(p)=\{p\}\times F$.
If $\phi_U$ is a homeomorphism, then $\phi_U$ restricted to $\pi^{-1}(p)$ is a homeomorphism onto its image. That image is exactly $\{p\}\times F$: since $proj_U\circ \phi_U=\pi_{|U}$, a point $x$ satisfies $\pi(x)=p$ if and only if $\phi_U(x)$ has first coordinate $p$; and because $\phi_U$ is a bijection onto $U\times F$, every point of $\{p\}\times F$ is hit. This gives a homeomorphism from $\pi^{-1}(p)$ to $\{p\}\times F$, and $\{p\}\times F$ is homeomorphic to $F$.
How do I prove this using the definition of a derivative? If $f$ is a differentiable function and $g(x)=xf(x)$, use the definition of a derivative to show that $g'(x)=xf'(x)+f(x)$.
Just do it: set up the difference quotient and take the limit as $h\to 0$. I’ll get you started: $$\begin{align*} g'(x)&=\lim_{h\to 0}\frac{g(x+h)-g(x)}h\\ &=\lim_{h\to 0}\frac{(x+h)f(x+h)-xf(x)}h\\ &=\lim_{h\to 0}\frac{x\big(f(x+h)-f(x)\big)+hf(x+h)}h\\ &=\lim_{h\to 0}\frac{x\big(f(x+h)-f(x)\big)}h+\lim_{h\to 0}\frac{hf(x+h)}h\;. \end{align*}$$ Now just finish working out what those last two limits are; it shouldn’t be hard, especially when you already know what they must be.
Prove $ax - x\log(x)$ is convex? How do you prove a function like $ax - x\log(x)$ is convex? The definition doesn't seem to work easily due to the non-linearity of the log function. Any ideas?
A function $f(x) \in C^2(\Omega)$ is convex if its second derivative is non-negative. $$f(x) = x \log(x) \implies f'(x) = x \cdot \dfrac1x + \log(x) \implies f''(x) = \dfrac1x > 0 \text{ for } x > 0$$ EDIT If $f(x) = ax - x\log(x)$, then $f''(x) = - \dfrac1x < 0$ on the domain $x>0$, and hence the function is concave (equivalently, $-f$ is convex).
A continuous function with measurable domain. Let $D$ and $E$ be measurable sets and $f$ a function with domain $D\cup E$. We proved that $f$ is measurable on $D \cup E$ if and only if its restrictions to $D$ and $E$ are measurable. Is the same true if measurable is replaced by continuous. I wrote the question straight out of the book this time to make sure I did this correctly. I mechanically replaced measurable with continuous starting with the measurable after $f$. Below here is what I wrote before I changed the question I've proven the case where continuous is switched for measurable. I'm just not sure of a meaningful relationship between measurable sets for domains and continuity.
That's wrong. Let $f\colon[0,1] \to \mathbb R$ be given by $$ x\mapsto \begin{cases} 1 & x = 0\\ 0 & x > 0 \end{cases} $$ Then $f$ is not continuous, but its restrictions to $D := \{0\}$ and $E := (0,1]$ are, and both sets are measurable. Let me add something concerning your edit: As the example above shows, for merely measurable parts $D$ and $E$, continuity of $f|_D$ and $f|_E$ doesn't imply that of $f$ on $D \cup E$. But if both $D$ and $E$ are open in $D \cup E$, or both closed, then your result holds. For open $D$ and $E$ we argue as follows: Let $U \subseteq \mathbb R$ be open; then we have $f^{-1}[U] = f|_D^{-1}[U] \cup f|_E^{-1}[U]$. Now, $f|_D^{-1}[U]$ is open in $D$ by continuity of $f|_D$; since $D$ is open in $D\cup E$, $f|_D^{-1}[U]$ is open in $D\cup E$ as well. By the same argument, replacing $D$ by $E$, $f|_E^{-1}[U]$ is open in $D\cup E$, hence $f^{-1}[U]$ is, proving the continuity of $f$. The case of closed sets can be proved by exactly the same argument, replacing every "open" above by "closed".
Number of songs sung. There were 750 people when the first song was sung. After each song, 50 people are leaving the hall. How many songs are sung to make them zero? The answer is 16, I am unable to understand it. I am getting 15 as the answer. Please explain.
Listen to the question carefully: "There were $750$ people when the first song was sung." So the first song has already been sung ($+1$) while $750$ people still remain. They leave at $50$ per song, so $750/50=15$ more songs are needed. The answer is $15+1=16$.
Write down a proof for $\bot\Rightarrow q$ in proposition calculus I am given the hint in the question that I will need to use the axiom $(((s\Rightarrow \bot)\Rightarrow \bot)\Rightarrow s)$. The axioms I am using are $$(s\Rightarrow (t \Rightarrow s)) \\((s\Rightarrow(t\Rightarrow u))\Rightarrow((s\Rightarrow t)\Rightarrow(s\Rightarrow u)) \\ (((s\Rightarrow \bot)\Rightarrow \bot)\Rightarrow s)$$ In a proof every step is either an axiom or deduced by modus ponens.
Using your first two axioms you can prove the deduction theorem. So to prove $\vdash \bot \Rightarrow q$, it's enough to prove $\bot \vdash q$. The hint suggests using the third axiom. With that, you can show that $((q \Rightarrow \bot)\Rightarrow \bot)\vdash q$. So you're done if you can prove $\bot \vdash ((q \Rightarrow \bot)\Rightarrow \bot)$. Can you see how to do this?
Find the value of f(343, 56)? I have got a problem and I am unable to think how to proceed. $a$ and $b$ are natural numbers. Let $f(a, b)$ be the number of cells that the line joining $(a, b)$ to $(0, 0)$ cuts in the region $0 ≤ x ≤ a$ and $0 ≤ y ≤ b$. For example $f(1, 1)$ is $1$ because the line joining $(1, 1)$ and $(0, 0)$ cuts just one cell. Similarly $f(2, 1)$ is $2$ and $f(3, 2) = 4$. Find $f(343, 56)$. I have tried by making the equation of line joining $(0,0)$ and $(343, 56)$. I got the equation as $8x = 49y$. Now I tried it by randomly putting the values of $x$ and $y$ which are both less that $343$ and $56$ respectively. But I am unable to get it. Is there is any better approach? Thanks in advance.
A start: Note that $343=7\cdot 49$, and $56=7\cdot 8$. First find $f(49,8)$. More: If $a$ and $b$ are relatively prime, draw an $a\times b$ chessboard, and think of the chessboard squares you travel through as you go from the beginning to the end.
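A cross-check of where this leads: the standard lattice-counting fact says a segment from $(0,0)$ to $(a,b)$ passes through $a+b-\gcd(a,b)$ unit cells, which matches every example in the problem statement:

```python
# f(a, b) = a + b - gcd(a, b); matches f(1,1)=1, f(2,1)=2, f(3,2)=4
from math import gcd

def f(a, b):
    return a + b - gcd(a, b)

print(f(1, 1), f(2, 1), f(3, 2))   # 1 2 4
print(f(49, 8), f(343, 56))        # 56 392
```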
Show that $f$ has at most one fixed point Let $f\colon\mathbb{R}\to\mathbb{R}$ be a differentiable function. $x\in\mathbb{R}$ is a fixed point of $f$ if $f(x)=x$. Show that if $f'(t)\neq 1\;\forall\;t\in\mathbb{R}$, then $f$ has at most one fixed point. My biggest problem with this is that it doesn't seem to be true. For example, consider $f(x)=x^2$. Then certainly $f(0)=0$ and $f(1)=1 \Rightarrow 0$ and $1$ are fixed points. But $f'(x)=2x\neq 1 \;\forall\;x\in\mathbb{R}$. Is there some sort of formulation that makes this statement correct? Am I missing something obvious? This is a problem from an old exam, so I'm assuming that maybe there's some sort of typo or missing condition.
This problem is straight out of baby Rudin. Assume by contradiction that $f$ has more than one fixed point. Select any two distinct fixed points, say $x$ and $y$ with $x<y$. Then $f(x) = x$ and $f(y) = y$. By the Mean Value Theorem, there exists some $\alpha \in (x,y)$ such that $f'(\alpha) = \frac{f(x)-f(y)}{x-y} = \frac{x-y}{x-y} = 1$, contradicting the hypothesis. (As for your proposed counterexample: $f(x)=x^2$ has $f'(1/2)=1$, so it does not satisfy the hypothesis after all.)
Reference for topology and fiber bundle I am looking for an introductory book that explains the relations of topology and bundles. I know a basic topology and algebraic topology. But I don't know much about bundles. I want a book that * *explains the definition of bundles carefully and give some intuition on bundles *explains relations between topology and bundles *has physics motivation (if possible) If you know a good books, please let me know. Thank you very much in advance.
Steenrod's "The Topology Of Fibre Bundles" is a classic. It isn't particularly modern but it does the basics very well. Husemoller's "Fibre Bundles" is a bit more modern and has a bit more of a physics-y outlook but still very much a book for mathematicians. I find it not as pleasant to read as Steenrod's book but it's fine. Peter May has some nice notes on bundles and fibrations, available on his webpage. They're quite modern but not written with a physics outlook, very much the outlook of a topologist.
If $\#A$ and $m$ are relatively prime, then $a\mapsto ma$ is automorphism? Is it true if $A$ is a finite, abelian group and $m$ is some integer relatively prime to the order of $A$, then the map $a\mapsto ma$ is an automorphism? It's left as an exercise in some course notes, but I cannot verify it. In particular it is used to prove that Hall complements exist for abelian Hall subgroups of finite groups.
Hint: since $\gcd(m,\#A)=1$, you can find an integer $d$ such that $dm\equiv1\pmod{\#A}$. Show that the map $a \mapsto da$ is the inverse of $a \mapsto ma$.
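A concrete instance of the hint in $A=\mathbb{Z}/10\mathbb{Z}$ with $m=3$:

```python
# d = 7 is the inverse of 3 mod 10, so a -> d*(m*a) is the identity
m, n = 3, 10
d = pow(m, -1, n)                                # modular inverse (Python 3.8+)
print(d, [(d * (m * a)) % n for a in range(n)])  # 7 [0, 1, ..., 9]
```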
How do we draw the number hierarchy from natural to complex in a Venn diagram? I want to make a Venn diagram that shows the complete number hierarchy from the smallest (natural number) to the largest (complex number). It must include natural, integer, rational, irrational, real and complex numbers. How do we draw the number hierarchy from natural to complex in a Venn diagram? Edit 1: I found a diagram as follows, but it does not include the complex numbers. My doubt is: should I add one more rectangle, a little bit larger, to enclose the real rectangle? But I think the gap would then be there only for non-real numbers like $i$, right? Edit 2: Is it correct if I draw as follows?
Emmad's second link is just perfect, IMHO. For something right in front of you, here's this: [diagram omitted]
Jordan Form of a matrix I'm trying to find a matrix $P$ such that $J=P^{-1}AP$, where $J$ is the Jordan Form of the matrix: $$A=\begin{pmatrix} -1&2&2\\ -3&4&3\\ 1&-1&0 \end{pmatrix} $$ The characteristic polynomial is: $p(\lambda)=(\lambda-1)^3$, and a eigenvector for $A-I$ is $\begin{pmatrix} 0 \\ 1 \\-1 \end{pmatrix}$. Now, how can I find other $2$ vectors? Thanks for your help.
Perhaps what you are missing is that there is a second eigenvector; the Jordan normal form is $$\begin{pmatrix}1&0&0\\0&1&1\\0&0&1\end{pmatrix}.$$ You can find a second eigenvector $w$ and then a vector $z$ such that $(A-I)z=w$.
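If you want to double-check the block structure (a sketch, assuming sympy is available):

```python
# jordan_form() returns P and J with A = P*J*P**-1
from sympy import Matrix

A = Matrix([[-1, 2, 2], [-3, 4, 3], [1, -1, 0]])
P, J = A.jordan_form()
print(J)                      # a 1x1 block and a 2x2 block, both for eigenvalue 1
print(P.inv() * A * P == J)   # True
```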
Continuity of function which is Lipschitz with respect to each variables separately Let a function $f: I\times J \rightarrow \mathbb R$, where $I,J$ are intervals in $\mathbb R$, be Lipschitz with respect of each variable separately. Is it then $f$ continuous with respect of both variables? Thanks
It depends on what you mean by Lipschitz with respect to each variable separately. Consider the function $f\colon\mathbb R^2\to \mathbb R$, $(x,y)\mapsto xy$. Then for fixed $y$, the function $x\mapsto f(x,y)$ is Lipschitz (with Lipschitz constant $|y|$ depending on $y$) and similarly $y\mapsto f(x,y)$ for fixed $x$. However $f$ itself is not Lipschitz, as $f(t,t)=t^2$ shows (though it is of course continuous). On the other hand, if there is a single Lipschitz constant $L$ working for all functions of the form $x\mapsto f(x,y)$ and $y\mapsto f(x,y)$, then $f$ is Lipschitz with constant $L\sqrt 2$, hence in particular continuous.
Let $G=(V,E)$ be a connected graph with $|E|=17$ and for all vertices $\deg(v)>3$. What is the maximum value of $|V|$? Let $G=(V,E)$ be a connected graph with $|E|=17$ and for all vertices $\deg(v)>3$. What is the maximum value of $|V|$? (What is the maximum possible number of vertices?)
HINT: Suppose that $V=\{v_1,\dots,v_n\}$. Then $$\sum_{k=1}^n\deg(v_k)=34\;;\tag{1}$$ why? If $\deg(v_k)\ge 4$ for $k=1,\dots,n$, then $$\sum_{k=1}^n\deg(v_k)\ge\sum_{k=1}^n4\;.\tag{2}$$ Now combine $(1)$ and $(2)$.
showing $\nexists\;\beta\in\mathbb N:\alpha<\beta<\alpha+1$ I want to prove that $\nexists\; \beta\in\mathbb N$ such that $\alpha<\beta<\alpha+1$ for all $\alpha\in\mathbb N$. I just want to use the Peano axioms and $+$ and $\cdot$ If $\alpha<\beta$ then there is a $\gamma\in\mathbb N$ such that $\beta=\alpha+\gamma$. If $\beta<\alpha+1$ then there is a $\delta\in\mathbb N$ such that $\alpha+1=\beta+\delta$. Now I tried to equalize the two equations and I got $\gamma\le0$ which is contradictory to $\gamma\in\mathbb N$. But I used $\alpha+1-\delta=\beta$ in which the $-$ is problematic because I am not allowed to use it. Anbody knows a better solution? Thanks a lot!
I'm not sure if the following is allowed, but: Inserting the first equation into the second we get: $\alpha+1=\alpha+\gamma+\delta$. Now we can subtract $\alpha$ on both sides (cancellation of addition is provable from the Peano axioms by induction) to get $1=\gamma+\delta$, which is a contradiction: $\gamma$ and $\delta$ are both at least $1$, so $\gamma+\delta\ge 2$.
How do you find the vertex of a (Bézier) quadratic curve? Before I elaborate, I do not mean a quadratic function! I mean a quadratic curve as seen here. With these curves, you are given 3 points: the starting point, the control point, and the ending point. I need to know how to find the vertex of the curve. Also, I am talking about Bézier curves, but just quadratic Bézier curves-the ones with only 3 points.
A nice question! And it has a nice answer. I arrived at it by a series of hand-drawn sketches and scribbled calculations, so I don't have time right now to present the derivation. But here is the answer: We are given three point $P_0$, $P_1$, and $P_2$ (the start-, control-, and end-points). The Bézier curve for these points is the parabola through $P_0$ and $P_2$ whose tangents at $P_0$ and $P_2$ coincide with the lines $P_0P_1$ and $P_1P_2$ respectively. It has the parametric form $$B(t) = (1-t)Q_0(t) + tQ_1(t)$$ where $$Q_0(t) = (1-t)P_0+tP_1$$ and $$Q_1(t)=(1-t)P_1+tP_2$$ Here is what you have to do to find the vertex of the parabola: Complete the parallelogram $P_0P_1P_2P_3$ by setting $P_3 = P_0 + P_2 - P_1$. Find the parameter $t$ such that $P_0X(t)P_1$ is a right angle, where $X(t)$ is the point on $P_1P_3$ equal to $(1-t)P_1+tP_3$. Then the vertex of the parabola is the point $B(t)$. Note that $t$ is not necessarily in $[0,1]$. Briefly, the idea is that we follow the tangent $Q_0(t)Q_1(t)$ around the curve until the length $|Q_0(t')B(t')|$ is equal to the length $|P_0Q_0(t')|$. Then the symmetry dictates that the vertex of the parabola is reached when $t = t'/2$. You can obtain this value of $t$ by the above procedure.
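If you want to sanity-check the result numerically, here is a small sketch based on a different but equivalent characterization (my addition, not the construction above): the parabola's axis is parallel to the constant second derivative $B''(t)=2(P_0-2P_1+P_2)$, and the vertex is the point where the tangent $B'(t)$ is perpendicular to that axis.

```python
# Vertex of a quadratic Bezier: solve B'(t) . d = 0 where d = P0 - 2*P1 + P2
def bezier_vertex(P0, P1, P2):
    d = tuple(a - 2*b + c for a, b, c in zip(P0, P1, P2))   # axis direction
    u = tuple(b - a for a, b in zip(P0, P1))                # B'(t)/2 = u + t*d
    t = -(u[0]*d[0] + u[1]*d[1]) / (d[0]**2 + d[1]**2)      # assumes d != 0
    s = 1 - t
    return tuple(s*s*a + 2*s*t*b + t*t*c for a, b, c in zip(P0, P1, P2))

# Control points of y = x^2 over [-1, 1]; the vertex should be (0, 0)
print(bezier_vertex((-1.0, 1.0), (0.0, -1.0), (1.0, 1.0)))
```

(If $d=0$ the three points are collinear and the "parabola" degenerates to a line.)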
Proof needed for a function $x^2$ for irrationals and $x$ for rationals How can I prove that the function $f:[0,1] \rightarrow \mathbb{R}$, defined as $$ f(x) = \left\{\begin{array}{l l} x &\text{if }x \in \mathbb{Q} \\ x^2 & \text{if } x \notin \mathbb{Q} \end{array} \right. $$ is continuous on $0$ and $1$, but nowhere else? I really don't know where to start. I know the (equivalent) definitions of continuous functions (epsilon delta, $\lim f(x) = f(c)$, topological definition with epsilon and delta neighbourhoods, and the definition where as $x_n$ goes to $c$, it implies that $f(x_n)$ goes to $f(c)$).
Hint: Choose a sequence of rationals converging to an irrational and vice-versa, and recall that continuity implies sequential continuity, to conclude what you want. A complete solution follows. Consider $a \in [0,1] \backslash \mathbb{Q}$. For this $a$, choose a sequence of rationals converging to $a$, i.e. $\{a_n\}_{n=1}^{\infty}$, where $a_n \in [0,1] \cap\mathbb{Q}$. One such choice for this sequence is $a_n = \dfrac{\lfloor 10^n a\rfloor}{10^n}$. If $f$ were to be continuous (recall that continuity implies sequential continuity), then $$\lim_{n \to \infty} f(a_n) = f\left(\lim_{n \to \infty} a_n \right) = f(a)$$ But this gives us that $a = a^2$, which is not true for any $a \in [0,1] \backslash \mathbb{Q}$. Similarly, argue when $a \in [0,1] \cap\mathbb{Q}$, by picking a sequence of irrationals converging to $a$, i.e. $\{a_n\}_{n=1}^{\infty}$, where $a_n \in [0,1] \backslash \mathbb{Q}$. One such choice is $a_n = \left(1 - \dfrac{\sqrt{2}}{2n} \right)a$ (for $a\ne0$; at $a=0$ there is nothing to disprove). If $f$ were to be continuous (recall that continuity implies sequential continuity), then $$\lim_{n \to \infty} f(a_n) = f \left(\lim_{n \to \infty} a_n \right) = f(a)$$ But this gives us that $a^2 = a$, which holds only for $a =0$ or $a=1$. Hence, the function can be continuous only at $0$ and $1$; and it is indeed continuous there, since $|f(x)-f(a)|\le\max(|x-a|,|x^2-a^2|)\to0$ as $x\to a$ for $a\in\{0,1\}$.
A question about infinite utility streams At the end of Diamond's Evaluation of Infinite Utility Streams he proves a theorem (which he doesn't give a name to, but it's at the very end of the article). There is a step in which he jumps from $(u,0)_{rep}\succ (0,u)_{rep}$ to $(u,0)\succ_t (0,u)$, and I don't understand where that comes from. It seems like it's the opposite direction of axiom A2, and I don't see how that's derived.
Step: $(u,0)_{rep}\succ (0,u)_{rep}\implies (u,0)\succ_2(0,u)$ Proof: Suppose not. Since $\succeq_2$ is complete, we would have $(0,u)\succeq_2 (u,0)$ otherwise, and hence by A2 $(0,u)_{rep}\succeq (u,0)_{rep}$. This cannot be. I don't see how the rest of Diamond's proof works out though. $(u,0,U)\succeq(0,u,U)$ follows now from A1, but I don't see why this should be strict.
All possible combinations of x letters (what is this called in mathematics) Firstly, thank you for looking at my question. I would like to know what this kind of problem is called in mathematics: Given a set of letters, find all possible 'words' you can make with those letters. For example for 'abc' the solution would be: a, b, c, ab, ac, abc, ba, bac, bca, ca, cab, cba Some background, I am writing a computer program to play Scrabble and need to generate all possible words given from a set of letters. I'm researching algorithms for this problem but couldn't quite figure out what the general name is for this type of problem. I'm curious to find out so I thought I would ask. I thought this was a type of permutation problem but reading up on Permutations I see that the length of the result is set, not variable. And it's not a Combination since the order matters.
All possible outcomes of a probability problem are called the "sample space". Is that what you were looking for?
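For the Scrabble use case specifically, what you are enumerating are the $k$-permutations (also called arrangements) of the rack for every length $k$; in Python that is a couple of lines with itertools:

```python
# every ordered selection ("word") of each length from the given letters
from itertools import permutations

letters = "abc"
words = [''.join(p) for r in range(1, len(letters) + 1)
         for p in permutations(letters, r)]
print(words)   # ['a', 'b', 'c', 'ab', 'ac', 'ba', 'bc', 'ca', 'cb', 'abc', ...]
```

(Note this produces a few words the question's sample list missed, e.g. 'bc' and 'cb'.)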
Why is intersection of two independent set probability a multiplication process? Why is the probability of intersection of two independent sets $A$ and $B$, a multiplication of their respective probabilities i.e. Why is $$\mathbb{P}(A \cap B) = \mathbb{P}(A) \cdot \mathbb{P}(B)?$$ this question is about the intuition behind the definition of independence of sets in a probability space
Perhaps one way of looking at it is the fact that intersection means adding conditions. In that sense, for every individual satisfying condition A you have to compute what fraction also satisfies condition B, and that is where the multiplication comes from. Note that satisfying two conditions is scarcer than satisfying just one, so the "weight" of individuals satisfying both conditions with respect to the whole population must be less than the weight of the ones satisfying just one condition. Then, given the fact that probabilities are less than or equal to one, multiplication seems a reasonable way of describing the process. I know it is not the same in a formal sense, but it is like counting cells in a grid: for every row, count the columns, and the total number of cells is the product.
Triple integration in cylindrical coordinates Determine the value of $ \int_{0}^{2} \int_{0}^{\sqrt{2x - x^2}} \int_{0}^{1} z \sqrt{x^2 +y^2} dz\,dy\,dx $ My attempt: So in cylindrical coordinates, the integrand is simply $ \rho$. $\sqrt{2x-x^2} $ is a circle of centre (1,0) in the xy plane. So $ x^2 + y^2 = 2x => \rho^2 = 2\rho\cos\theta => \rho = 2\cos\theta $ Therfore, I arrived at the limit transformations, $ 0 < \rho < 2\cos\theta,\,\, 0 < z < 1, \text{and}\,\,0 < \theta < \frac{\pi}{2} $ Bringing this together gives $ \int_{0}^{\frac{\pi}{2}} \int_{0}^{2\cos\theta} \int_{0}^{1} z\,\,\rho^3\,dz\,d\rho\,d\theta $ in cylindrical coordinates. Is this correct?
As joriki noted, the integrand should be $z\rho^2$ rather than $z\rho^3$ (the factor $\sqrt{x^2+y^2}=\rho$ times the Jacobian $\rho$); your integral would be $$ \int_{0}^{\frac{\pi}{2}} \int_{0}^{2\cos\theta} \int_{0}^{1} z\,\,\rho^2\,dz\,d\rho\,d\theta=\bigg(\int_{0}^{\frac{\pi}{2}} \int_{0}^{2\cos\theta}\rho^2\,d\rho\,d\theta \bigg)\times \int_{0}^{1} z\ dz\\\ =\bigg(\int_{0}^{\frac{\pi}{2}} \frac{\rho^3}{3}\bigg|_0^{2\cos(\theta)}d\theta \bigg)\times \frac{1}{2}=\frac{8}{9}$$
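A crude midpoint-rule check of the original Cartesian integral (with the $z$-integral contributing the factor $\tfrac12$ analytically) lands close to $8/9\approx0.8889$:

```python
# midpoint rule over the half-disk 0 <= y <= sqrt(2x - x^2), 0 <= x <= 2
import math

n, total = 400, 0.0
for i in range(n):
    x = 2 * (i + 0.5) / n
    ymax = math.sqrt(max(2*x - x*x, 0.0))
    for j in range(n):
        y = ymax * (j + 0.5) / n
        total += 0.5 * math.sqrt(x*x + y*y) * (2/n) * (ymax/n)
print(total, 8/9)
```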
Benford's law with random integers I tried testing random integers for compliance with Benford's law, which they are apparently supposed to do. However, when I try doing this with Python (after import random; written as a list comprehension so that .count works in Python 3), [str(random.randint(0, 10000))[0] for _ in range(100000)].count('1') I get approximately equal frequencies for all leading digits. Why is this the case? Might it have something to do with how the pseudorandom number generator, the Mersenne twister, works?
Benford's law applies only to distributions that are scale-invariant and thus applies approximately to many real-life data sources, especially when we measure with arbitrary units: If the leading-digit distribution of a sample is essentially the same whether we measure in inches or centimeters, this is only possible if the logarithm is equidistributed (or approximately so over a range wide enough to cover several orders of magnitudes).
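You can see the contrast in a few lines: a log-uniform sample spanning several orders of magnitude (a scale-invariant setup) does follow Benford, while the uniform sample in the question does not.

```python
# log-uniform sample over [1, 10^5); leading digit d should appear with
# frequency close to log10(1 + 1/d)
import math, random
from collections import Counter

random.seed(0)
counts = Counter(str(10 ** random.uniform(0, 5))[0] for _ in range(100000))
for d in '123456789':
    print(d, counts[d] / 100000, round(math.log10(1 + 1/int(d)), 4))
```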
Constructing Riemann surfaces using the covering spaces In the paper "On the dynamics of polynomial-like mappings" of Adrien Douady and John Hamal Hubbard, there is a way of constructing Riemann surfaces. I recite it as follows: A polynomial-like map of degree $d$ is a triple $(U,U',f)$ where $U$ and $U'$ are open subsets of $\mathbb{C}$ isomorphic to discs, with $U'$ relatively compact in $U$, and $f: U'\rightarrow U $ a $\mathbb{C}$-analytic mapping, proper of degree $d$. Let $L \subset U' $ be a compact connected subset containing $f^{-1}\left(\overline{U'}\right)$ and the critical points of $f$, and such that $X_0=U-L$ is connected. Let $X_n$ be a covering space of $X_0$ of degree $d^n$, $\rho_n:X_{n+1}\rightarrow X_n$ and $\pi_n:X_n\rightarrow X_0$ be the projections and let $X$ be the disjoint union of the $X_n$. For each $n$ choose a lifting $$\widetilde{f}_n\colon \pi_n^{-1}(U'-L)\rightarrow X_{n+1},$$ of $f$. Then $T$ is the quotient of $X$ by the equivalence relation identifying $x$ to $\widetilde{f}_n(x)$ for all $x\in \pi_n^{-1}(U'-L)$ and all $n=0,1,2,\ldots$. The open set $T'$ is the union of the images of the $X_n, n=1,2,\ldots$, and $F:T'\rightarrow T$ is induced by the $\rho_n$. Why is $T$ a Riemann surface and isomorphic to an annulus of finite modulus? Is there anything special about the $\pi_n,\rho_n$? What kind of background do I need?
I will give an informal answer. Your covering space is just a collection of holed spaces right? The equivalence relation just projects the holed space down onto a space isomorphic to $U'-L$. It does this by pasting the image and the preimage of $f$ together. (In an informal sense with each iteration the covering space gets bigger. To see this consider the map $z \mapsto z^2$ on the punctured disk (disk without the origin) with radius 1 from the origin. With 1 iteration you cover the disk twice. Applying it another time on the double covering of the disk you cover it four times etc. Quotienting by all these iterations you get the punctured disk.) Hence you get something isomorphic to a space with an annulus of finite modulus. The latter space is just a classical (Parabolic) Riemann surface (Google it!).
Relations between p norms The $p$-norm on $\mathbb R^n$ is given by $\|x\|_{p}=\big(\sum_{k=1}^n |x_{k}|^p\big)^{1/p}$. For $0 < p < q$ it can be shown that $\|x\|_p\geq\|x\|_q$ (1, 2). It appears that in $\mathbb{R}^n$ a number of opposite inequalities can also be obtained. In fact, since all norms in a finite-dimensional vector space are equivalent, this must be the case. So far, I only found the following: $\|x\|_{1} \leq\sqrt n\,\|x\|_{2}$(3), $\|x\|_{2} \leq \sqrt n\,\|x\|_\infty$ (4). Geometrically, it is easy to see that opposite inequalities must hold in $\mathbb R^n$. For instance, for $n=2$ and $n=3$ one can see that for $0 < p < q$, the spheres with radius $\sqrt n$ with $\|\cdot\|_p$ inscribe spheres with radius $1$ with $\|\cdot\|_q$. It is not hard to prove the inequality (4). According to Wikipedia, inequality (3) follows directly from Cauchy-Schwarz, but I don't see how. For $n=2$ it is easily proven (see below), but not for $n>2$. So my questions are: * *How can relation (3) be proven for arbitrary $n\,$? *Can this be generalized into something of the form $\|x\|_{p} \leq C \|x\|_{q}$ for arbitrary $0<p < q\,$? *Do any of the relations also hold for infinite-dimensional spaces, i.e. in $l^p$ spaces? Notes: $\|x\|_{1}^{2} = |x_{1}|^2 + |x_{2}|^2 + 2|x_{1}||x_{2}| \leq |x_{1}|^2 + |x_{2}|^2 + \big(|x_{1}|^2 + |x_{2}|^2\big) = 2|x_{1}|^2 + 2|x_{2}|^2$, hence $=2\|x\|_{2}^{2}$ $\|x\|_{1} \leq \sqrt 2 \|x\|_{2}$. This works because $|x_{1}|^2 + |x_{2}|^2 \geq 2|x_{1}\|x_{2}|$, but only because $(|x_{1}| - |x_{2}|)^2 \geq 0$, while for more than two terms $\big(|x_{1}| \pm |x_{2}| \pm \dotsb \pm |x_{n}|\big)^2 \geq 0$ gives an inequality that never gives the right signs for the cross terms.
* *Using Cauchy–Schwarz inequality we get for all $x\in\mathbb{R}^n$ $$ \Vert x\Vert_1= \sum\limits_{i=1}^n|x_i|= \sum\limits_{i=1}^n|x_i|\cdot 1\leq \left(\sum\limits_{i=1}^n|x_i|^2\right)^{1/2}\left(\sum\limits_{i=1}^n 1^2\right)^{1/2}= \sqrt{n}\Vert x\Vert_2 $$ *Such a bound does exist. Recall Hölder's inequality $$ \sum\limits_{i=1}^n |a_i||b_i|\leq \left(\sum\limits_{i=1}^n|a_i|^r\right)^{\frac{1}{r}}\left(\sum\limits_{i=1}^n|b_i|^{\frac{r}{r-1}}\right)^{1-\frac{1}{r}} $$ Apply it to the case $|a_i|=|x_i|^p$, $|b_i|=1$ and $r=q/p>1$ $$ \sum\limits_{i=1}^n |x_i|^p= \sum\limits_{i=1}^n |x_i|^p\cdot 1\leq \left(\sum\limits_{i=1}^n (|x_i|^p)^{\frac{q}{p}}\right)^{\frac{p}{q}} \left(\sum\limits_{i=1}^n 1^{\frac{q}{q-p}}\right)^{1-\frac{p}{q}}= \left(\sum\limits_{i=1}^n |x_i|^q\right)^{\frac{p}{q}} n^{1-\frac{p}{q}} $$ Then $$ \Vert x\Vert_p= \left(\sum\limits_{i=1}^n |x_i|^p\right)^{1/p}\leq \left(\left(\sum\limits_{i=1}^n |x_i|^q\right)^{\frac{p}{q}} n^{1-\frac{p}{q}}\right)^{1/p}= \left(\sum\limits_{i=1}^n |x_i|^q\right)^{\frac{1}{q}} n^{\frac{1}{p}-\frac{1}{q}}=\\= n^{1/p-1/q}\Vert x\Vert_q $$ In fact $C=n^{1/p-1/q}$ is the best possible constant. *For infinite dimensional case such inequality doesn't hold. For explanation see this answer.
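A quick numeric spot-check of both directions of the comparison, $\Vert x\Vert_q\le\Vert x\Vert_p\le n^{1/p-1/q}\Vert x\Vert_q$ for $p<q$:

```python
# sample random vectors and verify the sandwich for p=1, q=2, n=10
import random

def pnorm(x, p):
    return sum(abs(t)**p for t in x) ** (1/p)

random.seed(1)
n, p, q = 10, 1, 2
C = n ** (1/p - 1/q)          # sqrt(10)
for _ in range(5):
    x = [random.uniform(-1, 1) for _ in range(n)]
    assert pnorm(x, q) <= pnorm(x, p) <= C * pnorm(x, q) + 1e-12
print("both inequalities hold on the samples; C =", C)
```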
$X_n\overset{\mathcal{D}}{\rightarrow}X$, $Y_n\overset{\mathbb{P}}{\rightarrow}Y \implies X_n\cdot Y_n\overset{\mathcal{D}}{\rightarrow}X\cdot Y\ ?$ The title says it. I know that if limiting variable $Y$ is constant a.s. (so that $\mathbb{P}(Y=c)=1)$ then the convergence in probability is equivalent to the convergence in law, i.e. $$Y_n\overset{\mathbb{P}}{\longrightarrow}c \iff Y_n\overset{\mathcal{D}}{\longrightarrow}c,$$ and then Slutsky's theorem asserts that $X_n\cdot Y_n\overset{\mathcal{D}}{\longrightarrow}X\cdot c$. But what about the case when $Y$ is not constant? Does $X_n\overset{\mathcal{D}}{\longrightarrow}X$, $Y_n\overset{\mathbb{P}}{\longrightarrow}Y$ imply $X_n\cdot Y_n\overset{\mathcal{D}}{\longrightarrow}X\cdot Y$ ? I would appreciate any hints.
Let $Y$ represent a fair coin with sides valued $0$ (zero) and $1$ (one). Set $Y_n = Y$, $X = Y$, $X_n = 1-Y$. The premise is fulfilled, but $X_n\cdot Y_n = 0\overset{\mathcal{D}}{\nrightarrow}Y = X\cdot Y$.
Evaluate congruences with non-prime divisor with Fermat's Little Theorem I can evaluate $ 17^{2012}\bmod13$ with Fermat's little theorem because $13$ is a prime number. (Fermat's Little theorem says $a^{p-1}\bmod p\equiv1$.) But what if when I need to evaluate for example $12^{1729}\bmod 36$? in this case, $36$ is not a prime.
Your example is slightly trivial, because already $12^2\equiv0\bmod36$. If the base and modulus were coprime, you could use Euler's theorem. In cases in between, where the base contains some but not all factors of the modulus, you can reduce by the common factors and then apply Euler's theorem.
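For instance, with a base coprime to $36$ you can use $\varphi(36)=12$:

```python
# Euler: 5^1729 = 5^(1729 mod 12) = 5^1 (mod 36), since gcd(5, 36) = 1
from math import gcd
phi = sum(1 for k in range(1, 36) if gcd(k, 36) == 1)   # phi(36) = 12
print(phi, pow(5, 1729 % phi, 36), pow(5, 1729, 36))    # 12 5 5
```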
Showing $\mathbb{Z}_6$ is an injective module over itself I want to show that $\mathbb{Z_{6}}$ is an injective module over itself. I was thinking in using Baer's criterion but not sure how to apply it. So it suffices to look at non-trivial ideals, the non-trivial ideals of $\mathbb{Z_{6}}$ are: (1) $I=\{0,3\}$ (2) $J=\{0,2,4\}$ So take a $\mathbb{Z_{6}}$-map $f: I \rightarrow \mathbb{Z_{6}}$. Since $f$ is a group homomorphism it must map generators to generators right? so $3 \mapsto 1$ and $0 \rightarrow 0$. Now can we say suppose $f(1)=k$ then define $g: \mathbb{Z_{6}} \rightarrow \mathbb{Z_{6}}$ by sending the remaining elements, (those distinct from 0 and 3), say n, to $nk$?
I found a solution for the 'general' case: Let $I$ be an ideal of $\mathbb{Z}/n\mathbb{Z}$; then we know that $I=\langle \overline{k} \rangle$ for some $k$ such that $k\mid n$. If $f:I\rightarrow \mathbb{Z}/n \mathbb{Z}$ is a $\mathbb{Z}/n \mathbb{Z}$-morphism then $\operatorname{im} f\subset I$. To show this we note that if $\overline{x}\in \operatorname{im} f$ then there exists $\overline{c}=\overline{lk}$ with $\overline{l}\in\mathbb{Z}/n \mathbb{Z}$ s.t. $f(\overline{c})=\overline{x}$; but $n=ks$ for some $s$, so $\overline{0}=f(\overline{ln})=\overline{s}\cdot f(\overline{lk})=\overline{sx}$, then $sx=nt$ for some $t$, but again since $n=ks$, we get $x=tk$. In particular for $\overline{k}\in I$ we have $f(\overline{k})=b\overline{k}$ for some $b$. So for any $x\in I$ we have $f(x)=bx$, and then we just take the extension of $f$ to be the map $\mathbb{Z}/n\mathbb{Z}\rightarrow \mathbb{Z}/n\mathbb{Z}$ where $x\mapsto bx$.
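For the concrete case $n=6$ one can even verify Baer's criterion by brute force (a sketch; it enumerates every well-defined module map on each nontrivial ideal and checks that it is multiplication by some $b$):

```python
# Baer's criterion for Z/6Z over itself, checked exhaustively
n = 6
for g in (2, 3):                      # <2> = {0,2,4} and <3> = {0,3}
    for c in range(n):                # candidate value f(g) = c
        # f(r*g) := r*c is well defined iff r*g == s*g (mod n) forces r*c == s*c
        if all((r*c - s*c) % n == 0
               for r in range(n) for s in range(n)
               if (r*g - s*g) % n == 0):
            # every such hom is x -> b*x for some b, hence extends to Z/6Z
            assert any(all((b*r*g - r*c) % n == 0 for r in range(n))
                       for b in range(n))
print("every module map on <2> and <3> is multiplication, so it extends")
```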
About the homework of differentiation This problem is not solved. $$ \begin{align} f(x) &=\log\ \sqrt{\frac{1+\sqrt{2}x +x^2}{1-\sqrt{2}x +x^2}}+\tan^{-1}\left(\frac{\sqrt{2}x}{1-x^2}\right) \cr \frac{df}{dx}&=\mathord? \end{align} $$
The answer is $$\frac{2\sqrt{2}}{1+x^4}$$ Hints: $$\frac{d\left(\log(1+x^2\pm\sqrt{2}x\right)}{dx}=\frac{2x\pm\sqrt{2}}{1+x^2\pm\sqrt{2}x}$$ and $$\left(1+x^2+\sqrt{2}x\right)\left(1+x^2-\sqrt{2}x\right)=\left(1+x^2\right)^2-\left(\sqrt{2}x\right)^2$$ Similarly, $$\frac{d\left(\tan^{-1}\left(\frac{\sqrt{2}x}{1-x^2}\right)\right)}{dx}=\frac{1}{1+\left(\frac{\sqrt{2}x}{1-x^2}\right)^2}\frac{d\left(\frac{\sqrt{2}x}{1-x^2}\right)}{dx}$$ $$=\frac{1}{1+\left(\frac{\sqrt{2}x}{1-x^2}\right)^2}\frac{\sqrt{2}}{2}\left(\frac{1}{\left(1-x\right)^2}+\frac{1}{\left(1+x\right)^2}\right)$$ The rest is by calculations.
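If you have sympy handy, it will confirm the simplification (a sketch; the arctan branch needs $x\ne1$, so restrict to positive $x$):

```python
# d/dx of f should simplify to 2*sqrt(2)/(x**4 + 1)
from sympy import symbols, log, sqrt, atan, diff, simplify

x = symbols('x', positive=True)
f = log(sqrt((1 + sqrt(2)*x + x**2) / (1 - sqrt(2)*x + x**2))) \
    + atan(sqrt(2)*x / (1 - x**2))
print(simplify(diff(f, x)))
```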
How does $\sqrt{|e^{-y}\cos x + ie^{-y}\sin x|}= e^{-y}$ How does $\sqrt{|e^{-y}\cos x + ie^{-y}\sin x|} = e^{-y}$ which is less than $1$? This is a step from a question I am doing but I am not sure how the square root equaled & $e^{-y}$
I'd start by using properties of the modulus: $$|e^{-y} \cos x + i e^{-y} \sin x|=|e^{-y}||\cos x + i \sin x|=e^{-y}|e^{ix}|=e^{-y}$$ (Note that the square root in your title looks spurious: $\sqrt{e^{-y}}$ would be $e^{-y/2}$, not $e^{-y}$. Also, $e^{-y}<1$ precisely when $y>0$.)
Linear algebra: finding a Tikhonov regularizer matrix A more general soft constraint is the Tikhonov regularization constraint $$ \mathbf{w}^\text{T}\Gamma^\text{T}\Gamma\mathbf{w} \leq C $$ which can capture relationships among the $w_i$ (the matrix $\Gamma$ is the Tikhonov regularizer). (a) What should $\Gamma$ be to obtain a constraint of the form $\sum_{q=0}^Q w_q^2 \leq C$? I think this is just the identity matrix since $\sum_{q=0}^Q w_q^2 = \mathbf{w}^\text{T}\mathbf{w}$ (b) What should $\Gamma$ be to obtain a constraint of the form $\left(\sum_{q=0}^Q w_q\right)^2 \leq C$? To me, this is saying $\mathbf{ww} \leq C$. How is it possible to get $\mathbf{ww} = \mathbf{w}^\text{T}\mathbf{w}$ just by multiplying by some $\Gamma^\text{T}\Gamma$? Where am I going wrong here?
(a) You are right, in order to obtain $\mathbf{w}^T\Gamma^T \Gamma \mathbf{w}=\sum_{q=0}^Q w_q^2$, you should use $\Gamma=I$, where $I$ is the identity matrix. (b) Be careful with your dimensions, $\mathbf{w}\mathbf{w}$ is not defined. You are trying to multiply a $Q\times 1$ matrix by a $Q\times 1$ matrix, which is not possible. To obtain $\mathbf{w}^T\Gamma^T \Gamma \mathbf{w}=\left(\sum_{q=0}^Q w_q\right)^2$, you should use $\Gamma=(1~ 1~\ldots~1)$, i.e. a row of ones. This implies that $\mathbf{w}^T\Gamma^T=\sum_{q=0}^Q w_q$, and thus that $\mathbf{w}^T\Gamma^T \Gamma \mathbf{w}=\left(\sum_{q=0}^Q w_q\right)^2$.
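A short numeric check of both choices (a sketch using numpy):

```python
# Gamma = I gives the sum of squares; Gamma = row of ones gives (sum)^2
import numpy as np

w = np.array([1.0, -2.0, 3.0])
I = np.eye(3)
ones_row = np.ones((1, 3))
print(w @ I.T @ I @ w, np.sum(w**2))                 # 14.0 14.0
print(w @ ones_row.T @ ones_row @ w, np.sum(w)**2)   # 4.0 4.0
```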
Why the spectral theorem is named "spectral theorem"? "If $V$ is a complex inner product space and $T\in \mathcal{L}(V)$. Then $V$ has an orthonormal basis Consisting of eigenvectors of T if and only if $T$ is normal".   I know that the set of orthonormal vectors is called the "spectrum" and I guess that's where the name of the theorem. But what is the reason for naming it?
I think the top voted answer is very helpful: Since the theory is about eigenvalues of linear operators, and Heisenberg and other physicists related the spectral lines seen with prisms It might be worth explaining a bit more precisely how the theory is related to optics, especially for people not familiar with how a prism works (like me) As shown in the figure, a prism automatically decomposes light into a series of monochromatic lights and refracts each of them by a different angle (cf. Cauchy's transmission equation). We could consider the following analogies with the spectral theorem: * *the matrix operator $A$ as a prism *the input vector $v$ as a polychromatic light *each eigenvector of $A$ as monochromatic light, and the associated eigenvalue as the refractive index of the monochromatic light. With the spectral theorem, we know that we can decompose symmetric matrices (I stay in $\mathbb{R}$ for simplicity) into $A = V^{-1}\Lambda V = \sum{\lambda_i \vec{v_i} \cdot \vec{v_i}^t }$. And $A \vec{x}=\sum{\lambda_i \vec{v_i} \cdot \vec{v_i}^t } \vec{x}= \sum{\lambda_i \vec{v_i} x_i }$, where $x_i=\vec{v_i}^t\vec{x}$. This is very similar to what a prism does with light: decompose, refract, and output. So, the spectral theorem tells us we can find a spectrum of eigenvectors of a given symmetric matrix $S$. References: Spectrum of a matrix
harmonic conjugates and cauchy riemann eqns I'm trying to find function $v(x,y)$ such that the pair $(u,v)$ satisfies the Cauchy-Riemann equations for the following functions $u(x,y)$: a) $u = \log(x^2+y^2)$ $$ u_x = v_y \Rightarrow \frac{2x}{x^2+y^2} = v_y \Rightarrow v = \frac{2xy}{x^2+y^2}? $$ b) $u = \sin x \cosh y$ $$ u_x = \cos x \cosh y = v_y \Rightarrow v = \sinh y \cos x + C $$ c) $u = \frac{x}{x^2+y^2}$ $u_x = v_y$, but I am getting a mess with integration. The reason is that is there a way to do this by integration, or is the way I have started this seem correct? Thanks!
Here's how to find the corresponding imaginary parts by educated guessing. Maybe not the most systematic method, but maybe it improves your intuition about how these things behave. For the first, remember that $e^{a + ib} = e^a(\cos b + i\sin b)$ (assuming $a,b \in \mathbb{R}$). In other words, $e^z$ sort of maps from polar coordinates to cartesian coordinates. $\log(z)$ thus does the reverse mapping, i.e. $\log(a+ib) = \log(|z|) + i\arg(a+ib) + i2\pi n$. This motivates the assumption that your first $u$ is the real part of $$ f(z) = \log(z^2) $$ Which it actually is, since $\Re(f(x+iy)) = \log(|(x+iy)^2|) = \log(|x+iy|^2) = \log(x^2+y^2)$. The corresponding $v$ is thus $v(x,y) = \Im(f(x+iy))$, i.e. $$ v(x,y) = \arg(z^2) + 2\pi n= 2\arg(z) + 2\pi n= 2\left(\arctan\left(\frac{y}{x}\right) + \pi \tilde{n}\right) $$ Your integration yields the wrong result because you forgot to take the denominator (which depends on $y$!) into account when finding the antiderivative. For the second, use that $\sin(x+iy)$ = $\sin(x)\cos(iy) + \cos(x)\sin(iy)$ = $\sin(x)\cosh(y) +i\cos(x)\sinh(y)$ . From that, you get that $$ f(z) = \sin(z) $$ and $$ v(x,y) = \cos(x)\sinh(y) $$ which is the same as your integration yields. For the third, observe that $x^2+y^2 = z\bar{z}$ if $z = x+iy$. Thus, one guess for $f(z)$ could be $\frac{z}{z\bar{z}}$. That's no good, however, because $z \to \bar{z}$ is not holomorphic. But since the real parts of $z$ and $\bar{z}$ are identical, you can also try $$ f(z) = \frac{\bar{z}}{z\bar{z}} = \frac{1}{z} $$ And voilà, since $$ \frac{1}{z} = \frac{1}{x+iy} = \frac{x-iy}{(x+iy)(x-iy)} = \frac{x - iy}{x^2 + y^2} $$ you indeed have $\Re(f(x+iy)) = \frac{x}{x^2+y^2}$, and thus $$ v(x,y) = \frac{-y}{x^2+y^2} $$
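To double-check any of these pairs against the Cauchy-Riemann equations (a sketch, assuming sympy), e.g. for part (b):

```python
# u_x - v_y and u_y + v_x should both vanish identically
from sympy import symbols, sin, cos, sinh, cosh, diff, simplify

x, y = symbols('x y', real=True)
u = sin(x) * cosh(y)
v = cos(x) * sinh(y)
print(simplify(diff(u, x) - diff(v, y)))   # 0
print(simplify(diff(u, y) + diff(v, x)))   # 0
```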
Ratio of Boys and Girls In a country where everyone wants a boy, each family continues having babies till they have a boy. After some time, what is the proportion of boys to girls in the country? (Assuming probability of having a boy or a girl is the same)
Obviously the ratio of boys to girls could be any rational number, or infinite. If you either fix the number of families or, more generally, specify a probability distribution over the number of families, then B/G is a random variable with infinite expected value (because there's always some non-zero chance that G=0).
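If you fix the number of families, a simulation shows the realized counts hugging 1:1 (each family contributes exactly one boy, and one girl in expectation), even though $E[B/G]$ is infinite for the reason above:

```python
# each family keeps having children until its first boy
import random

random.seed(0)
boys = girls = 0
for _ in range(100000):              # 100000 families
    while random.random() < 0.5:     # girl with probability 1/2
        girls += 1
    boys += 1                        # stop at the first boy
print(boys, girls, boys / girls)     # roughly equal counts; ratio near 1
```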