Smooth function on $\mathbb R$ whose small increments are not controlled by the first derivative at infinity I need some help in finding a (as simple as possible) smooth function $f:\mathbb R \rightarrow \mathbb R$ which does NOT satisfy the following: There exists a constant $C>0$, a compact $K\subset\mathbb R$ and $h_0>0$ such that for every $|h| \leq h_0$ and every $x\in\mathbb R\setminus K$ $|h|^{-1}|f(x+h) - f (x)| \leq C |f'(x)|$ EDIT: and there exists a $\tilde C>0$ such that $|f'(x)|>\tilde C$ for $x\in\mathbb R\setminus K$. EDIT 2: my intuition is that such an $f$ may look like this: the first derivative stays always positive and oscillates (around $g(x)=|x|$ for example), the oscillations becoming both faster and larger in amplitude as $x$ goes to infinity. Many thanks.
Try e.g. $f(x) = \cos(x)$. All you need is that $f$ is not constant on any interval and $f'$ has arbitrarily large zeros. EDIT: With the new condition, take $$f(x) = \int_0^x (1 + t^2 \cos^2(t^2))\ dt = x+\frac{x \sin \left( 2\,{x}^{2} \right)}{8}-\frac{\sqrt {\pi }}{16}{\rm FresnelS} \left( 2\,{\frac {x}{\sqrt {\pi }}} \right) +\frac{{x}^{3}}{6} $$ Note that for $x > 0$ $$\eqalign{\frac{f(x+h) -f(x)}{h} &= \frac{1}{h} \int_x^{x+h} (1 + t^2 \cos^2(t^2))\ dt\cr &> \frac{x^2}{h} \int_x^{x+h} \cos^2(t^2)\ dt \cr &= \frac{x^2}{h} \left(\frac{h}{2} + \frac{\sqrt{\pi}}{4} \left({\rm FresnelC}\left( \frac{2(x+h)}{\sqrt{\pi}}\right) - {\rm FresnelC}\left( \frac{2x}{\sqrt{\pi}}\right) \right)\right) \cr &= \frac{x^2}{2} + o(x^2)}$$ as $x \to \infty$ for any fixed $h$ (since ${\rm FresnelC}(t) \to 1/2$ as $t \to \infty$), while $f'(\sqrt{(n+1/2)\pi})=1$ for positive integers $n$.
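Not part of the original answer, but the failure is easy to check numerically: at $x_n=\sqrt{(n+1/2)\pi}$ the derivative equals exactly $1$, while the difference quotient there grows like $x_n^2/2$. A small sketch (the difference quotient is computed by midpoint quadrature of $f'$, since $f$ itself only has a closed form via Fresnel integrals):

```python
import math

def fprime(t):
    # f'(t) = 1 + t^2 * cos^2(t^2) for the f constructed above
    return 1 + t**2 * math.cos(t**2)**2

def diff_quotient(x, h, steps=20000):
    # (f(x+h) - f(x)) / h, via midpoint quadrature of f' over [x, x+h]
    dt = h / steps
    return sum(fprime(x + (k + 0.5) * dt) for k in range(steps)) * dt / h

x_n = math.sqrt((100 + 0.5) * math.pi)   # a point where cos(x^2) = 0
assert abs(fprime(x_n) - 1) < 1e-6       # derivative is 1 there...
assert diff_quotient(x_n, 0.1) > 50      # ...but increments are of order x_n^2
```

So no single constant $C$ can bound the quotient by $C\,|f'(x_n)| = C$ for all large $n$.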
Question about exponential functions (easy one) Can someone please explain to me why, when we are looking at the function $f(x)=a^x$, we should remember that $1 \neq a > 0$? (And why don't we instead just say that we can't plug in an $x$ satisfying $0 < x < 1$?) Any understandable explanation will be great! Thanks!
Besides @André's theoretical answer (+1): imagine what would happen if $a<0$. For example, if $a=-4$, what would you do with $(-4)^{\frac{m}{n}}$ where $n$ is even and $m$ is odd? Would we have a real number? How many fractions of this type are there in $\mathbb R$? A similar story can be told when $a=1$.
Continued Fractions Approximation I have come across continued fraction approximation but I am unsure what the steps are. For example, how would you express the following rational function in continued-fraction form: $${x^2+3x+2 \over x^2-x+1}$$
This might be what you are looking for: $$ \begin{align} \frac{x^2+3x+2}{x^2-x+1} &=1+\cfrac{4x+1}{x^2-x+1}\\[4pt] &=1+\cfrac1{\frac14x-\frac5{16}+\cfrac{\frac{21}{16}}{4x+1}}\\[4pt] &=1+\cfrac1{\frac14x-\frac5{16}+\cfrac1{\frac{64}{21}x+\frac{16}{21}}} \end{align} $$ At each stage, we are doing a polynomial division instead of an integer division, but otherwise, the process is the same as with continued fractions with integers. We can get the Bezout polynomials by truncating the continued fraction: $$ 1+\cfrac1{\frac14x-\frac5{16}}=\frac{4x+11}{4x-5} $$ That is, we can write the polynomial GCD (a constant since they are relatively prime) as $$ (4x+11)(x^2-x+1)-(4x-5)(x^2+3x+2)=21 $$
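The Bezout identity at the end can be verified mechanically. A small sketch using exact rational arithmetic (the coefficient-list representation, lowest degree first, is my own convention):

```python
from fractions import Fraction

def polymul(a, b):
    # multiply two polynomials given as coefficient lists, lowest degree first
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += Fraction(ai) * Fraction(bj)
    return out

def polysub(a, b):
    # subtract polynomials and strip trailing zero coefficients
    n = max(len(a), len(b))
    a = list(a) + [Fraction(0)] * (n - len(a))
    b = list(b) + [Fraction(0)] * (n - len(b))
    out = [Fraction(x) - Fraction(y) for x, y in zip(a, b)]
    while len(out) > 1 and out[-1] == 0:
        out.pop()
    return out

# (4x+11)(x^2-x+1) - (4x-5)(x^2+3x+2) should equal the constant 21
lhs = polysub(polymul([11, 4], [1, -1, 1]), polymul([-5, 4], [2, 3, 1]))
assert lhs == [21]
```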
Showing that $f$ is differentiable at $x=0$, using the mean value theorem The following exercise is again about the Mean Value Theorem :) Let $f : [0,1] \rightarrow \mathbb{R}$ be continuous and differentiable on $(0,1)$. Assume that $$ \lim_{x\rightarrow 0^+} f'(x)= \lambda.$$ Show that $f$ is differentiable (from the right) at $0$ and that $f'(0)=\lambda$. Hint: Mean Value Theorem. What exactly is 'differentiable from the right'? Do I have to show that the limit as $x$ approaches $0$ from the right side exists? How can I do that? How can I use the Mean Value Theorem to show that the derivative at $0$ equals $\lambda$? Mean Value Theorem: If $f:[a,b] \rightarrow \mathbb{R}$ is continuous on $ [a,b] $ and differentiable on $ (a,b)$, then there exists a point $ c \in (a,b)$ where $$ f'(c)= \frac{f(b)-f(a)}{b-a}.$$
Being differentiable at $0$ from the right means $\displaystyle{\lim_{x \to 0^+} \frac{f(x)-f(0)}{x-0}}$ exists. Its value is $f'_+(0)$. From the Mean Value Theorem, for each $x>0$ there exists a $y$ with $0<y<x$, such that $$f'(y)=\frac{f(x)-f(0)}{x-0}$$ Since $0<y<x$, we have $y \to 0^+$ as $x \to 0^+$, and therefore $$f'_+(0) = \displaystyle{\lim_{x \to 0^+}}\frac{f(x)-f(0)}{x-0} = \displaystyle{\lim_{x \to 0^+}} f'(y) = \lambda$$ Is this a good proof? Thanks for your help all :-)
functional analysis complementary subspace Let $Y$ and $Z$ be closed subspaces in a Banach space $X$. Show that each $x \in X$ has a unique decomposition $x = y + z$, $y\in Y$, $z\in Z$ iff $Y + Z = X$ and $Y\cap Z = \{0\}$. Show in this case that there is a constant $\alpha>0$ such that $\|y\| + \|z\| \leq \alpha\|x\|$ for every $x \in X$
Hint: Assume that $Y\cap Z=\{0\}$. If $z+y=x=z'+y'$ s.t. $z,z'\in Z$, $y,y'\in Y$ then $z-z'=y'-y$. For the other direction, if $y\in Y\cap Z$ then $0+0=y+(-y)$.
Calculus 1- Find directly the derivative of a function f. The following limit represents the derivative of a function $f$ at a point $a$. Evaluate the limit. $$\lim\limits_{h \to 0 } \frac{\sin^2\left(\frac\pi 4+h \right)-\frac 1 2} h$$
Let $f(x)=\sin^2x$. We have, $f^{\prime}(x)=2\sin x\cos x$. On the other hand \begin{equation} \begin{array}{lll} \lim_{h\rightarrow 0}\frac{\sin^2\left(\frac{\pi}{4}+h\right)-\frac{1}{2}}{h}&=&\lim_{h\rightarrow 0}\frac{f\left(\frac{\pi}{4}+h\right)-f\left(\frac{\pi}{4}\right)}{h}\\ &=&f^{\prime}(\pi/4)\\ &=&2\sin(\pi/4)\cos(\pi/4)\\ &=&2(1/\sqrt{2})(1/\sqrt{2})=1. \end{array} \end{equation}
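As a quick sanity check (not in the original answer), the difference quotient can be evaluated for small $h$ and compared with the claimed limit $1$:

```python
import math

def quotient(h):
    # the difference quotient from the problem statement
    return (math.sin(math.pi / 4 + h) ** 2 - 0.5) / h

# f'(pi/4) = 2 sin(pi/4) cos(pi/4) = sin(pi/2) = 1
assert abs(quotient(1e-4) - 1) < 1e-6
assert abs(quotient(-1e-4) - 1) < 1e-6
```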
Discrete Subgroups of $\mbox{Isom}(X)$ and orbits Let $X$ be a metric space, and let $G$ be a discrete subgroup of $\mbox{Isom}(X)$ in the compact-open topology. Fix $x \in X$. If $X$ is a proper metric space, it's not hard to show using Arzela-Ascoli that $Gx$ is discrete. However, is there an easy example of a metric space that is not proper so that $Gx$ is not discrete for a discrete $G \subset \mbox{Isom}(X)$? I thought about permutations of an orthonormal basis in $l^2(\mathbb{N})$, but no luck there.
Let $X = \mathbb{R}^2$ with the following metric: $$d((x_1,y_1),(x_2,y_2)) = \begin{cases} |y_1-y_2| & \text{if $x_1=x_2$,} \\ |y_1|+|y_2|+|x_1-x_2| & \text{if $x_1\ne x_2$}. \end{cases}$$ I don't know if there is a name for this metric; it is the length of the shortest path if we only allow arbitrary vertical segments and horizontal segments along the $x$-axis. Now let $\mathbb{R}$ act on $X$ by horizontal translation, i.e., $t \circ (x,y) = (x+t,y)$. This is obviously an isometry for every $t$, by definition of the metric. Since the $x$-axis is an orbit, and the metric restricted to the $x$-axis coincides with the usual metric, the orbit of any point $(x,0)$ under this action is non-discrete. In order to see that the group action is discrete, let $s,t \in \mathbb{R}$ with $s\ne t$, and observe that $d(t\circ(0,1),s\circ(0,1)) = d((t,1),(s,1)) = 2+|t-s| >2$, so the orbit of $(0,1)$ has no accumulation point in $X$.
Simple Tensor Product Question about Well-definedness Suppose I want to define a homomorphism $f$ from $A\otimes_R B$ into some $R$-module $M$ by specifying it on simple tensors $a\otimes b$. What conditions do I need to check to make sure this is well defined? Does it suffice to check that $f(r(a\otimes b))=f((ra)\otimes b)=f(a\otimes (rb))$, or is it more complicated than that? Thank you all.
You simply need to define an $R$-bilinear map $\tilde{f} : A \times B \rightarrow M$. The universal property of the tensor product then induces an $R$-module homomorphism $f : A \otimes_R B \rightarrow M$. To check that $\tilde{f}$ is $R$-bilinear, you must show: (1) $\tilde{f}(ra,b) = \tilde{f}(a,rb) = r\tilde{f}(a,b)$ (2) $\tilde{f}(a_1 + a_2,b) = \tilde{f}(a_1,b) + \tilde{f}(a_2,b)$ (3) $\tilde{f}(a,b_1 + b_2) = \tilde{f}(a,b_1) + \tilde{f}(a,b_2)$
If a function is bounded almost everywhere, is it then globally bounded? Let $f: \mathbb{R} \to \mathbb{R}$ be a function that is bounded almost everywhere. Then is it bounded? If so, what is the main idea or method in this proof, and how far can it be generalized?
Let $f$ map the irrationals to $0$ and the rationals to themselves. Since the rational numbers form a countable set, they have measure $0$. Then $f$ is bounded almost everywhere but is not bounded. More generally, on any infinite set one can define a function that is bounded almost everywhere but is not bounded: simply take a countable subset $x_1,x_2,x_3,\ldots$ and send $x_i$ to $i$ and all other elements to $0$.
Show that $(f_n)$ is equicontinuous, given uniform convergence Let $f_n: [a,b] \rightarrow \mathbb{R}$, $n \in \mathbb{N}$, be a sequence of functions converging uniformly to $f: [a,b] \rightarrow \mathbb{R}$ on $[a,b]$. Suppose that each $f_n$ is continuous on $[a,b]$ and differentiable on $(a,b)$, and that the sequence of derivatives $(f'_n)$ is uniformly bounded on $(a,b)$. This means that there exists an $M>0$ such that $|f'_n(x)| \le M$ for all $x \in (a,b)$ and all $n \in \mathbb{N}$. Question: Show that $(f_n)$ is equicontinuous. Known definitions: * *A sequence of functions $(f_n)$ converges uniformly to a limit function $f$ on a set $A$ if, for every $\epsilon >0$, there exists an $N\in \mathbb{N}$ such that $|f_n(x) - f(x)| < \epsilon$ whenever $n \ge N$ and $x \in A$ *Cauchy Criterion for Uniform Convergence: A sequence of functions $(f_n)$ converges uniformly on a set $A$ if and only if, for every $\epsilon >0$, there exists an $N\in \mathbb{N}$ such that $|f_n(x) - f_m(x)| < \epsilon$ for all $n,m \ge N$ and all $x \in A$ *A sequence of functions $(f_n)$ defined on a set $E$ is called equicontinuous if for every $\epsilon >0$ there exists a $\delta>0$ such that $|f_n(x) - f_n(y)| < \epsilon$ for all $n \in \mathbb{N}$ and all $x,y \in E$ with $|x-y| < \delta$ *A sequence of derivatives $(f_n')$ is uniformly bounded on $(a,b)$ if there exists an $M>0$ such that $|f_n'(x)|\le M$ for all $x\in(a,b)$ and all $n\in\mathbb{N}$
Hint $$ |f_n(x)-f_n(y)|=|(f_n(x)-f(x))+(f(x)-f(y))+(f(y)-f_n(y))| $$ $$ \leq |f_n(x)-f(x)|+|f(x)-f(y)|+|f(y)-f_n(y)| \,.$$ Now, use the assumptions you have been given.
Show that f is uniformly continuous and that $f_n$ is equicontinuous $f_n: A \rightarrow \mathbb{R}$, $n \in \mathbb{N}$ is a sequence of functions defined on $A \subseteq \mathbb{R}$. Suppose that $(f_n)$ converges uniformly to $f: A \rightarrow \mathbb{R}$, and that each $f_n$ is uniformly continuous on $A$. 1.) Can you show that $f$ is uniformly continuous on $A$? 2.) Can you show that $(f_n)$ is equicontinuous? * *We are given that $(f_n)$ converges uniformly to $f$. This means that for every $\epsilon>0$ there exists an $N\in \mathbb{N}$, such that $|f_n(x)-f(x)| <\epsilon$ whenever $n \ge N$ and $x \in A$. We have to show that $f$ is uniformly continuous on $A$, which means that for every $\epsilon >0$ there exists a $\delta>0$ such that $|x-y|<\delta$ implies $|f(x)-f(y)|<\epsilon$ *We need to show that for every $\epsilon>0$ there exists a $\delta>0$ such that $|f_n(x)-f_n(y)|< \epsilon$ for all $n\in \mathbb{N}$ and $|x-y|< \delta$ in $A$
For 1, let $\epsilon >0$. Then pick $n$ such that $|f_n(x) - f(x)| < \epsilon/3$ on $A$. By uniform continuity of $f_n$, there exists a $\delta$ such that $|x-y| < \delta \Longrightarrow |f_n(x)-f_n(y)| < \epsilon/3$. Now if $|x-y| < \delta$, $$ |f(x)-f(y)| = |f(x)-f_n(x)+f_n(x)-f_n(y)+f_n(y) -f(y)| \leq $$ $$|f(x)-f_n(x)| + |f_n(x)-f_n(y)| + |f_n(y) -f(y)| < \epsilon$$
Continuous extension of a real function Related: Open set in $\mathbb{R}$ is a union of at most countable collection of disjoint segments This is the theorem I need to prove: "Let $E(\subset \mathbb{R})$ be a closed subset and $f:E\rightarrow \mathbb{R}$ be a continuous function. Then there exists a continuous function $g:\mathbb{R} \rightarrow \mathbb{R}$ such that $g(x)=f(x), \forall x\in E$." I have tried for hours to prove this, but couldn't. I found some solutions, but ridiculously all are wrong. Every solution states that "If $x\in E$ and $x$ is not an interior point of $E$, then $x$ is an endpoint of a segment of an at most countable collection of disjoint segments." However, this is indeed false! (Check Arthur's argument in the link above.) Wrong solution Q4.5: http://www.math.ust.hk/~majhu/Math203/Rudin/Homework15.pdf Just like the argument in this solution, I can see that $g$ is continuous on $E^c$ and $Int(E)$. But how do I show that $g$ is continuous on $E$?
This is a special case of the Tietze extension theorem. This is a standard result whose proof can be found in any decent topology text. A rather different proof can be found here.
prove an analytic function has at least n zeros I'm confused about this problem: Let $G$ be a bounded region in $\mathbb{C}$ whose boundary consists of $n$ circles. Suppose that $f$ is a non-constant function analytic on $G$. Show that if $|f(z)| = 1$ for all $z$ in the boundary of $G$, then $f$ has at least $n$ zeros (counting multiplicities) in $G$. What does it mean that the boundary consists of $n$ circles? How can I start solving the problem? Any help please...
I don't know what tools you have at your disposal, but this follows from some basic topology. The assumptions imply that $f$ is a proper map from the region $G$ to the unit disk $\mathbb{D}$. As such the map has a topological degree, and since the preimage of the boundary $|z|=1$ contains $n$ components, this degree has to be at least $n$, meaning that every $z\in\mathbb{D}$ has to have at least $n$ preimages, counted with multiplicity.
How many ways can the letters of the word TOMORROW be arranged if the Os can't be together? How many ways can the letters of the word TOMORROW be arranged if the Os can't be together? I know TOMORROW can be arranged in $\frac{8!}{3!2!} = 3360$ ways. But how many ways can it be arranged if the Os can't be together? And what is the intuition behind this process?
First, you have to remove permutations like TOMOORRW and TMOOORRW, so treat OO as a single element; then we have $3360-\frac{7!}{2!}$. EDITED Now, notice that you removed words like TOMOORRW and TMOOORRW containing OO and OOO, but you removed a little bit more than you want. Why? Can you see this? How would you fix this problem?
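For readers who want to check their fix to the over-counting, a brute-force enumeration (warning: this reveals the final count):

```python
from itertools import permutations

words = set(permutations("TOMORROW"))   # distinct arrangements as tuples
assert len(words) == 3360               # 8!/(3!2!)

good = [w for w in words if "OO" not in "".join(w)]
# agrees with 60 * 20: arrange T,M,R,R,W (5!/2! = 60 ways), then choose
# 3 of the 6 gaps between/around them for the O's (C(6,3) = 20)
assert len(good) == 1200
```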
Determinant of rank-one perturbations of (invertible) matrices I read something that suggests that if $I$ is the $n$-by-$n$ identity matrix, $v$ is an $n$-dimensional real column vector with $\|v\| = 1$ (standard Euclidean norm), and $t > 0$, then $$\det(I + t v v^T) = 1 + t$$ Can anyone prove this or provide a reference? More generally, is there also an (easy) formula for calculating $\det(A + wv^T)$ for $v,w \in \mathbb{K}^{d \times 1}$ and some (invertible) matrix $A \in \Bbb{K}^{d \times d}$?
I solved it. The determinant of $I+tvv^T$ is the product of its eigenvalues. $v$ is an eigenvector with eigenvalue $1+t$. $I+tvv^T$ is real and symmetric, so it has a basis of real mutually orthogonal eigenvectors, one of which is $v$. If $w$ is another one, then $(I+tvv^T)w=w$, so all the other eigenvalues are $1$. I feel like I should have known this already. Can anyone provide a reference for this and similar facts?
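For the general question at the end: the identity $\det(A+wv^T)=\det(A)\,(1+v^TA^{-1}w)$ for invertible $A$ is known as the matrix determinant lemma. A quick numerical check with NumPy (the particular matrices are arbitrary test data):

```python
import numpy as np

rng = np.random.default_rng(0)

# matrix determinant lemma: det(A + w v^T) = det(A) * (1 + v^T A^{-1} w)
A = rng.normal(size=(5, 5)) + 5 * np.eye(5)   # comfortably invertible
v = rng.normal(size=5)
w = rng.normal(size=5)

lhs = np.linalg.det(A + np.outer(w, v))
rhs = np.linalg.det(A) * (1 + v @ np.linalg.inv(A) @ w)
assert np.isclose(lhs, rhs)

# special case det(I + t v v^T) = 1 + t for a unit vector v
t = 0.7
u = v / np.linalg.norm(v)
assert np.isclose(np.linalg.det(np.eye(5) + t * np.outer(u, u)), 1 + t)
```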
Prove that if X and Y are Normal and independent random variables, X+Y and X−Y are independent Prove that if X and Y are Normal and independent random variables, X+Y and X−Y are independent. Note that X and Y also have the same mean and standard deviation. Note that this is a duplicate of Prove that if $X$ and $Y$ are Normal and independent random variables, $X+Y$ and $X-Y$ are independent, however, there isn't a complete solution to the answer given and I do not understand exactly what the hints are suggesting. My attempt was to check if $f_{x+y,x-y}(u,v) = f_{x+y}(u)f_{x-y}(v)$, however, this does not seem to be working out too nicely.
Define $U = X + Y, V = X - Y$. Then, $X = (U + V)/2, Y = (U - V)/2$. Find the Jacobian $J$ for the transformation. Then, $f_{U,V}(u,v)=f_{X}(x=(u+v)/2)f_{Y}(y=(u-v)/2)|J|$. You will find that $f_{U,V}(u,v)$ factors into a function of $u$ alone and a function of $v$ alone. Thus, by the factorization theorem, $U$ and $V$ are independent.
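A sketch of the Jacobian computation, plus a Monte Carlo sanity check (sample size and seed are arbitrary; zero sample correlation is of course only a necessary condition — the factorization argument is what gives independence):

```python
import math
import random

# x = (u+v)/2, y = (u-v)/2, so the Jacobian matrix is [[1/2, 1/2], [1/2, -1/2]]
J = [[0.5, 0.5], [0.5, -0.5]]
detJ = J[0][0] * J[1][1] - J[0][1] * J[1][0]
assert abs(detJ) == 0.5                 # |J| = 1/2

# sanity check: sample correlation of (X+Y, X-Y) should be near 0
random.seed(1)
n = 100_000
xs = [random.gauss(0, 1) for _ in range(n)]
ys = [random.gauss(0, 1) for _ in range(n)]
us = [x + y for x, y in zip(xs, ys)]
vs = [x - y for x, y in zip(xs, ys)]
mu, mv = sum(us) / n, sum(vs) / n
cov = sum((u - mu) * (v - mv) for u, v in zip(us, vs)) / n
corr = cov / math.sqrt(sum((u - mu) ** 2 for u in us) / n
                       * sum((v - mv) ** 2 for v in vs) / n)
assert abs(corr) < 0.02
```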
A collection of Isomorphic Groups (The answer key referred to below is an image and is not reproduced here.) Questions: 1) How exactly is $\langle\pi\rangle$ isomorphic to the other integer groups? I mean, $\pi$ itself isn't even an integer. 2) What exactly is the key saying for the single element sets? Are they trying to say they are isomorphic to themselves? 3) How exactly are $\mathbb{Z}_6$ and $G$ isomorphic, and likewise $\mathbb{Z}_2$ and $S_2$?
1) Notice that $\langle \pi \rangle=\{\pi^n \mid n \in \mathbb Z\}$ and we also have $\pi^n\pi^m=\pi^{n+m}$; what group does this remind you of? 2) The single element sets are not isomorphic to any of the other groups. As an aside, it's not particularly meaningful to say a group is isomorphic to itself. You generally speak of two groups that are distinct in some meaningful way as being isomorphic.
How to Decompose $\mathbb{N}$ like this? Possible Duplicate: Partitioning an infinite set Partition of N into infinite number of infinite disjoint sets? Is it possible to find a family of sets $X_{i}$, $i\in\mathbb{N}$, such that: * *$\forall i$, $X_i$ is infinite, *$X_i\cap X_j=\emptyset$ for $i\neq j$, *$\mathbb{N}=\bigcup_{i=1}^{\infty}X_i$ Maybe it is an easy question, but I'm curious about the answer and I couldn't figure out any solution. Thanks
Let $p_n$ be the $n$-th prime number. That is $p_1=2; p_2=3; p_3=5; p_4=7$ and so on. For $n>0$ let $X_n=\{(p_n)^k\mid k\in\mathbb N\setminus\{0\}\}$. For $X_0$ take all the rest of the numbers available, namely $k\in X_0$ if and only if $k$ can be divided by two distinct prime numbers, or if $k=1$.
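The partition can be spot-checked with a small helper (my own; `classify(k)` returns the $n$ with $k\in X_n$, with primes indexed from $1$ as in the answer, so $X_1$ holds the powers of $2$):

```python
def is_prime(m):
    return m > 1 and all(m % d for d in range(2, int(m**0.5) + 1))

def classify(k):
    # X_0 = {1} plus numbers with >= 2 distinct prime factors;
    # X_n = positive powers of the n-th prime
    if k == 1:
        return 0
    factors = {p for p in range(2, k + 1) if is_prime(p) and k % p == 0}
    if len(factors) == 1:
        p = factors.pop()
        return sum(1 for q in range(2, p + 1) if is_prime(q))  # index of p
    return 0

# every k gets exactly one class, so the X_n are disjoint and cover N
assert classify(1) == 0
assert classify(8) == 1      # 8 = 2^3 lies in X_1
assert classify(27) == 2     # 27 = 3^3 lies in X_2
assert classify(6) == 0      # two distinct prime factors -> X_0
assert classify(121) == 5    # 121 = 11^2, and 11 is the 5th prime
```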
Weakly convex functions are convex Let us consider a continuous function $f \colon \mathbb{R} \to \mathbb{R}$. Let us call $f$ weakly convex if $$ \int_{-\infty}^{+\infty}f(x)[\varphi(x+h)+\varphi(x-h)-2\varphi(x)]dx\geq 0 \tag{1} $$ for all $h \in \mathbb{R}$ and all $\varphi \in C_0^\infty(\mathbb{R})$ with $\varphi \geq 0$. I was told that $f$ is weakly convex if, and only if, $f$ is convex; although I can imagine that (1) is essentially the statement $f'' \geq 0$ in a weak sense, I cannot find a complete proof. Is this well-known? Is there any reference?
By a change of variables (translation-invariance of Lebesgue measure) the given inequality can be equivalently rewritten as $$ \int [f(x+h)+f(x-h)-2f(x)]\varphi(x)\,dx \geq 0 \qquad \text{for all }0 \leq \varphi \in C^{\infty}_0(\mathbb{R})\text{ and all }h \gt 0. $$ If $f$ were not midpoint convex then there would be $x \in \mathbb{R}$ and $h \gt 0$ such that $f(x+h) + f(x-h) - 2f(x) \lt 0$. By continuity of $f$ this must hold in some small neighborhood $U$ of $x$, so taking any nonzero $\varphi \geq 0$ supported in $U$ would yield a contradiction to the assumed inequality. Thus, $f$ is midpoint convex and hence convex because $f$ is continuous. Edit: The converse direction should be clear: it follows from $f(x+h) + f(x-h) - 2f(x) \geq 0$ for convex $f$ and all $h \gt 0$ so that the integrand in the first paragraph is non-negative. Finally, the argument above works without essential change for continuous $f \colon \mathbb{R}^n \to \mathbb{R}$.
Question about how to implicitly differentiate composite functions I have a question. How in general would one differentiate a composite function like $F(x,y,z)=2x^2-yz+xz^2$ where $x=2\sin t$ , $y=t^2-t+1$ , and $z = 3e^{-1}$ ? I want to find the value of $\frac{dF}{dt}$ evaluated at $t=0$ and I don't know how. Can someone please walk me through this? I tried a couple of things, including chain rules and Jacobians. I know that $\frac{dF}{dt}$ should equal $\frac{\partial F}{\partial x} \frac{dx}{dt} + \frac{\partial F}{\partial y} \frac{dy}{dt} + \frac{\partial F}{\partial z} \frac{dz}{dt}$ but for some reason this doesn't work, or I am doing something wrong. I start out by differentiating to get $\frac{\partial F}{\partial x}=4x+z^2$, $\frac{\partial F}{\partial y}= -z$, $\frac{\partial F}{\partial z} = 2xz-y$, $\frac{dz}{dt}=0$, $\frac{dx}{dt}=2\cos t$, $\frac{dy}{dt}=2t-1$ but this doesn't match the answer, which my book says is $24$. How do they get this, and where is my error? Thanks. Update: What I get is as follows: $F(x,y,z)=2x^2-yz+xz^2$, $\frac{\partial F}{\partial t}=\frac{\partial F}{\partial x} \frac{dx}{dt} + \frac{\partial F}{\partial y} \frac{dy}{dt} + \frac{\partial F}{\partial z} \frac{dz}{dt}$, $\frac{\partial F}{\partial t}=(4x+z^2)(2\cos t)-z(2t-1)$ Which for $t=0$ gives $x=0$ and $\left. \frac{\partial F}{\partial t} \right|_{t=0} = 2z^2+z=18e^{-2}+3e^{-1}$ which clearly isn't $24$ so I must be doing something completely wrong. Edit: I want to rephrase the question. Since everyone else I have talked to thinks there was an error in the book, does everyone here agree?
$$\frac{\partial F}{\partial t}=\frac{\partial F}{\partial x}\frac{dx}{dt}+\frac{\partial F}{\partial y}\frac{dy}{dt}+\frac{\partial F}{\partial z}\frac{dz}{dt}=$$ $$=(4x+z^2)\cdot 2\cos t-z(2t-1)+(2xz-y)\cdot 0$$ You now have to substitute: $$t=0\Longrightarrow\,x=0\,,\,y=1\,,\,z=3\,e^{-1}$$ The final result is, if I'm not wrong, $$18\,e^{-2}+3\,e^{-1}$$ which has nothing to do with $\,24\,$, so either the book (which one, btw?) has a mistake or you miscopied the exercise. P.S. Please do check that, according to what you wrote, $\,z\,$ is independent of $\,t\,$...
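A numerical cross-check of this value (note that $z = 3e^{-1}$ gives $2z^2 = 18e^{-2}$), using a central difference at $t=0$:

```python
import math

def F(t):
    x = 2 * math.sin(t)
    y = t**2 - t + 1
    z = 3 * math.exp(-1)          # constant in t, as in the problem
    return 2 * x**2 - y * z + x * z**2

h = 1e-6
num = (F(h) - F(-h)) / (2 * h)    # central difference at t = 0
expected = 18 * math.exp(-2) + 3 * math.exp(-1)   # 2z^2 + z
assert abs(num - expected) < 1e-5
assert abs(expected - 24) > 10    # nowhere near the book's claimed 24
```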
An inequality for all natural numbers Prove, using the principle of induction, that for all $n \in \mathbb{N}$, we have the following inequality: $$1+\frac{1}{\sqrt 2}+\cdots+\frac{1}{\sqrt n} \leq 2\sqrt n$$
Suppose $1+\frac{1}{\sqrt 2}+\cdots+\frac{1}{\sqrt n} \leq 2\sqrt n$ and $1+\frac{1}{\sqrt 2}+\cdots+\frac{1}{\sqrt {n+1}} > 2\sqrt {n+1}$ (i.e., that the claim holds at $n$ but fails at $n+1$). Subtracting these, $\frac{1}{\sqrt {n+1}} > 2\sqrt {n+1} - 2\sqrt n = 2(\sqrt {n+1} - \sqrt n)\frac{\sqrt {n+1} +\sqrt n}{\sqrt {n+1} +\sqrt n} = \frac{2}{\sqrt {n+1} +\sqrt n} $ or $\sqrt {n+1} +\sqrt n > 2 \sqrt {n+1}$, which is false. So the inductive step goes through, and the claim holds for all $n$ once a base case is established.
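Both the key step $\frac{1}{\sqrt{n+1}} \le 2\sqrt{n+1}-2\sqrt{n}$ and the inequality itself can be spot-checked numerically:

```python
import math

s = 0.0
for n in range(1, 5001):
    # the step used in the induction
    assert 1 / math.sqrt(n + 1) <= 2 * math.sqrt(n + 1) - 2 * math.sqrt(n)
    # the inequality being proved
    s += 1 / math.sqrt(n)
    assert s <= 2 * math.sqrt(n)
```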
Minimum Number of Nodes for Full Binary Tree with Level $\lambda$ If level zero ($\lambda = 0$) of a full binary tree is just the root node, then I know that I can get the maximum possible number of nodes ($N$) for a full binary tree using the following: $N = 2^{\lambda+1} - 1$ Is the minimum possible number of nodes the following? $N = 2\lambda + 1$
The terminology matters here. If "full binary tree" means every level is completely filled (a perfect binary tree), then the number of nodes is fixed: $N = 2^{h+1} - 1$. If it means every node has either $0$ or $2$ children (a strict binary tree), then for height $h$ the maximum number of nodes is $2^{h+1} - 1$ and the minimum is $2h + 1$.
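Under the "strict" reading (every node has $0$ or $2$ children), the min/max formulas can be verified by exhaustively enumerating the possible node counts per height (the helper below is my own):

```python
def node_counts(max_h):
    # counts[h] = set of possible node counts of a strict binary tree
    # (every node has 0 or 2 children) of height exactly h
    counts = {0: {1}}
    for h in range(1, max_h + 1):
        s = set()
        for h1 in range(h):
            for h2 in range(h):
                if max(h1, h2) == h - 1:     # overall height is exactly h
                    for a in counts[h1]:
                        for b in counts[h2]:
                            s.add(1 + a + b)
        counts[h] = s
    return counts

counts = node_counts(4)
for h in range(5):
    assert min(counts[h]) == 2 * h + 1         # skinniest strict tree
    assert max(counts[h]) == 2 ** (h + 1) - 1  # perfect tree
```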
Show $x^2 +xy-y^2 = 0$ is only true when $x$ & $y$ are zero. Show that it is impossible to find non-zero integers $x$ and $y$ satisfying $x^2 +xy-y^2 = 0$.
The quadratic form factors into a product of lines $$0 = x^2 + xy - y^2 = -(y-\tfrac{1-\sqrt{5}}{2}x)(y-\tfrac{1+\sqrt{5}}{2}x),$$ equality holds if either * *$y=\tfrac{1-\sqrt{5}}{2}x$ *$y=\tfrac{1+\sqrt{5}}{2}x$ but this can't happen for $x,y$ integers unless they're both zero.
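Since $\frac{1\pm\sqrt5}{2}$ is irrational, a nonzero integer pair on either line is impossible; a brute-force search over a box (range chosen arbitrarily) is consistent with this:

```python
# search for nonzero integer solutions of x^2 + xy - y^2 = 0 in a box
solutions = [(x, y)
             for x in range(-200, 201)
             for y in range(-200, 201)
             if (x, y) != (0, 0) and x * x + x * y - y * y == 0]
assert solutions == []
```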
Proving the 3-dimensional representation of S3 is reducible The 3-dimensional representation of the group S3 can be constructed by introducing a vector $(a,b,c)$ and permute its component by matrix multiplication. For example, the representation for the operation $(23):(a,b,c)\rightarrow(a,c,b)$ is $ D(23)=\left(\begin{matrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{matrix}\right) $ and so forth. The exercise is to prove this representation is reducible. The hint tells me to find a common eigenvector for all 6 matrices which is just $(1,1,1)$. How do I proceed from here? Any help is appreciated.
Here's another way to prove it's reducible, although it may depend on stuff you haven't learned yet. The order of the group is the sum of the squares of the degrees of the irreducible representations. So a group of order 6 can't have an irreducible representation of degree 3; $3^2\gt6$.
Finding the limit of this function as n tends to infinity... $$\lim_{n\rightarrow\infty}\frac{n}{3}\left[\ln\left(e-\frac{3}{n}\right)-1\right]$$ I'm having a little trouble figuring this out. I did try to differentiate it about 3 times and ended up with something like this $$f'''(n) = \frac{1}{3} \left(\frac{1}{e - \frac{3}{n}}\right) + \left(\frac{1}{3e - \frac{9}{n}}\right) - \left(\frac{9}{3e - \frac{9}{n}}\right)$$ So I wonder if the limit of this would be calculated as $$\lim_{n\rightarrow\infty}\frac{1}{3} \left(\frac{1}{e - \frac{3}{n}}\right) + \left(\frac{1}{3e - \frac{9}{n}}\right) - \left(\frac{9}{3e - \frac{9}{n}}\right) = \frac{-7}{3e}$$ Which feels terribly wrong. I suspect that I did the differentiation wrong. Any pointers would be cool and the provision of a simpler method would be dynamite.
Your limit is equal to $$\lim_{n\to +\infty}\frac{n}{3}\,\log\left(1-\frac{3}{en}\right),$$ but since: $$\lim_{x\to 0}\frac{\log(1-x)}{x}=-1,$$ (by squeezing, by convexity or by De l'Hopital rule) you have: $$\lim_{n\to +\infty}\frac{n}{3}\,\log\left(1-\frac{3}{en}\right)=-\frac{1}{e}.$$
Orders of the Normal Subgroups of $A_4$ Prove that $A_4$ has no normal subgroup of order $3.$ This is how I started: Assume that $A_4$ has a normal subgroup of order $3$, say $K$. I take the quotient group $A_4/K$ with $4$ distinct cosets, each of order $3$. But I want to prove that these distinct cosets cannot contain $(12)(34),(13)(24)$ and $(14)(23)$, giving a contradiction. Please help, I'm really stuck!!
In fact, one can show that the only normal subgroups of $A_4$ are $1$, $K_4$ (the Klein four group) and $A_4$. Note that two permutations of $S_n$ are conjugate iff they have the same cycle type. So we can write down (by some easy calculations) all the conjugacy classes of $A_4$; they are the following $4$ classes: * *type $1^4$: {(1)} *type $2^2$: {(12)(34),(13)(24),(14)(23)} *type $3^1$:{(123),(142),(134),(243)} and {(132),(124),(143),(234)} So all normal subgroups of $A_4$ are $1$, $K_4$ and $A_4$.
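This classification is small enough to verify by exhaustive search; a sketch that enumerates every subset of $A_4$, keeps the subgroups, and tests normality by conjugation:

```python
from itertools import permutations, combinations

def compose(p, q):
    # (p*q)(i) = p(q(i)), permutations as tuples
    return tuple(p[q[i]] for i in range(4))

def parity(p):
    # number of inversions mod 2; even permutations have parity 0
    return sum(1 for i in range(4) for j in range(i + 1, 4) if p[i] > p[j]) % 2

A4 = [p for p in permutations(range(4)) if parity(p) == 0]
identity = (0, 1, 2, 3)
inverse = {p: tuple(p.index(i) for i in range(4)) for p in A4}

normal_orders = set()
for r in range(1, 13):
    for subset in combinations(A4, r):
        H = set(subset)
        if identity not in H:
            continue
        # in a finite group, closure under the operation suffices
        if any(compose(a, b) not in H for a in H for b in H):
            continue
        # normality: g h g^{-1} in H for all g in A4, h in H
        if all(compose(compose(g, h), inverse[g]) in H
               for g in A4 for h in H):
            normal_orders.add(r)

assert normal_orders == {1, 4, 12}   # only 1, K_4 and A_4 are normal
```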
If $ A$ is open and $ B$ is closed, is $B\setminus A$ open or closed or neither? If $ A$ is open and $ B$ is closed, is $B\setminus A$ open or closed or neither? I think it is closed, is that right? How can I prove it?
Yes, if $A$ is open and $B$ is closed, then $B\setminus A$ is closed. To prove it, just note that $X\setminus A$ is closed (where $X$ is the whole space), and $B\setminus A=B\cap(X\setminus A)$, so $B\setminus A$ is the intersection of two closed sets and is therefore closed. Alternatively, you can observe that $X\setminus(B\setminus A)=(X\setminus B)\cup A$ is the union of two open sets and therefore open, so its complement, $B\setminus A$, is closed.
Is every subset of a metric space a metric subspace? Is every subset of a metric space a metric subspace? A simple proof seems to justify that all subsets are subspaces; still, I wanted to know if I missed something.
Let $(X,\rho)$ be a metric space and let $Y$ be a non-empty subset of $X$. Define the function $\sigma$ on $Y\times Y$ by $\sigma(x,y)=\rho(x,y)$ for all $x,y$ in $Y$. Then $(Y,\sigma)$ is also a metric space, called a subspace of the metric space $(X,\rho)$.
Given $f(x+1/x) = x^2 +1/x^2$, find $f(x)$ Given $f(x+1/x) = x^2 +1/x^2$, find $f(x)$. Please show me how you find it. The answer in my textbook is $f(x)=\frac{1+x^2+x^4}{x\cdot \sqrt{1-x^2}}$
$$f\left(x+\frac{1}{x}\right)=x^2+\frac{1}{x^2} = \left(x+\frac{1}{x}\right)^2-2$$ Let $x+\frac{1}{x}=z$. Then we get $$f(z)=z^2-2.$$ Renaming the variable $z$ as $x$, we get $f(x)=x^2-2$. (Strictly speaking, $z = x+\frac{1}{x}$ only takes values with $|z|\ge 2$, so the functional equation determines $f$ only there.)
How do i solve this double integral $$\int_0^1\int_{-\pi}^\pi x\sqrt{1-x^2\sin^2(y)}\mathrm{d}y\mathrm{d}x$$ How do I solve this question here?
Switch the order of integration. Integrating first over $x$, we obtain ($u=1-x^2 \sin^2y$) $$\int_0^1\!dx\, x \sqrt{1-x^2 \sin^2 y} = \frac{1}{2 \sin^2 y} \int_{\cos^2y}^1\!du\,\sqrt{u} = \frac{1}{3\sin^2 y} ( 1 - |\cos^3 y|).$$ What is missing is the integral over $y$. Using one of the standard methods to integrate rational functions of trigonometric functions over a full period, we obtain finally $$\int_0^1\!dx\int_{-\pi}^\pi \!dy\,x\sqrt{1-x^2\sin^2(y)} = \frac{8}{3}.$$
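A crude midpoint-rule evaluation of the double integral (grid size chosen arbitrarily) agrees with $8/3 \approx 2.667$:

```python
import math

def integrand(x, y):
    # max(0, ...) guards against tiny negative values from rounding
    return x * math.sqrt(max(0.0, 1 - (x * math.sin(y)) ** 2))

N = 300
hx = 1.0 / N
hy = 2 * math.pi / N
total = 0.0
for i in range(N):
    x = (i + 0.5) * hx
    for j in range(N):
        y = -math.pi + (j + 0.5) * hy
        total += integrand(x, y)
total *= hx * hy

assert abs(total - 8 / 3) < 0.02
```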
Lie Groups question from Brian Hall's Lie Groups, Lie Algebras and their representations. On page 60 of Hall's textbook, ex. 8 part (c), he asks me to prove that if $A$ is a unipotent matrix then $\exp(\log A)=A$. In the hint he says to show that for $A(t)=I+t(A-I)$ we get $$\exp(\log A(t)) = A(t) , \quad t \ll 1$$ I don't see how I can plug in $t=1$ here; this equality is true only for $t \rightarrow 0$, right?
If $A$ is unipotent, then $A - I$ is nilpotent, meaning that $(A-I)^n = 0$ for all sufficiently large $n$. This will turn your power series into a polynomial.
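Concretely: both the $\log$ and the $\exp$ series terminate, so the identity can be checked with plain matrix arithmetic. A sketch with a $3\times 3$ example (the particular nilpotent $N$ is arbitrary):

```python
import numpy as np

N = np.array([[0., 1., 2.],
              [0., 0., 3.],
              [0., 0., 0.]])   # strictly upper triangular, so N^3 = 0
A = np.eye(3) + N              # unipotent

# log A = N - N^2/2 + N^3/3 - ...  terminates because N^3 = 0
L = N - N @ N / 2

# exp L = I + L + L^2/2 + ...  also terminates (L is nilpotent, L^3 = 0)
E = np.eye(3) + L + L @ L / 2

assert np.allclose(E, A)       # exp(log A) = A
```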
A tough differential calculus problem This is a question I've had a lot of trouble with. I HAVE solved it, but only with a lot of trouble and an extremely ugly calculation. So I want to ask you guys (who are probably more 'mathematically-minded', so to say) how you would solve this. Keep in mind that you shouldn't use anything too advanced: no differential equations or similar things learned in college. Given are the functions $f_p(x) = \dfrac{9\sqrt{x^2+p}}{x^2+2}$. The line $k$ with a slope of $2.5$ touches $f_p$ in $A$ with $x_A = -1$. Find the equation of $k$ algebraically. * *I might have used wrong terminology, because English is not my native language; I will hopefully clear up doubts about what this problem is by showing what I did. First off, I computed $[f_p(x)]'$. This was EXTREMELY troublesome, and is the main reason why I found this problem challenging, because of all the steps. Can you guys show me the easiest and especially quickest way to get this derivative? After that, I plugged $-1$ into the derivative and set the derivative equal to $2\dfrac{1}{2}$; this was also troublesome for me, and I kept getting wrong answers for a while. Again: can you guys show me the easiest and especially quickest way to solve this? After you get $p$ it is pretty straightforward. I know this might sound like a weird question, but it basically boils down to: I need quicker and easier ways to do this. I don't want to make careless mistakes, but because of the length of these types of questions, it ALWAYS happens. Any tips or tricks regarding this topic in general would be much appreciated too. Update: A bounty will go to the person with the most clear and concise way of solving this question!
By "touches" I assume you mean that the line is tangent to the graph of $f_p$. You can try implicit differentiation. Start with $$ y = \frac{9\sqrt{x^2 + p}}{x^2 + 2}. $$ Multiply by $x^2 + 2$ to get $$ y(x^2 + 2) = 9\sqrt{x^2 + p}. $$ Squaring, you get $$ y^2 (x^2 + 2)^2 = 81 (x^2 + p). $$ Differentiate both sides implicitly by $x$: $$ 2yy'(x^2 + 2)^2 + y^2 2 (x^2 + 2) 2x = 162x. $$ Now, plug in all the data ($x = -1$, $y'(-1) = 2.5$) to get a quadratic equation for $y(-1) = y$: $$ -12y^2 + 45y = -162. $$ Solutions are $y = 6$ and $y = \frac{-9}{4}$, but notice your function is always positive, so $y(-1) = 6$ and the line is $2.5x + 8.5$.
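Back-substituting, $f_p(-1)=6$ forces $p=3$ (not computed explicitly in the answer); a numeric check that the line $y=2.5x+8.5$ really is tangent at $x=-1$:

```python
import math

p = 3.0   # from y(-1) = 6: 9*sqrt(1+p)/3 = 6

def f(x):
    return 9 * math.sqrt(x * x + p) / (x * x + 2)

def line(x):
    return 2.5 * x + 8.5

# the line passes through A = (-1, 6) and matches the slope of f there
assert abs(f(-1) - 6) < 1e-12
assert abs(line(-1) - 6) < 1e-12
h = 1e-6
slope = (f(-1 + h) - f(-1 - h)) / (2 * h)   # central difference
assert abs(slope - 2.5) < 1e-5
```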
Do these $\delta-\epsilon$ proofs work? I'm new to $\delta, \epsilon$ proofs and not sure if I've gotten the hang of them quite yet. $$ \lim_{x\to -2} (2x^2+5x+3)= 1 $$ $|2x^2 + 5x + 3 - 1| < \epsilon$ $|(2x + 1)(x + 2)| < \epsilon$ $|(2x + 4 - 3)(x + 2)| < \epsilon$ $|2(x+2)^2 -3(x + 2)| \leq 2|x+2|^2 +3|x + 2| < \epsilon$ (via the triangle inequality) Let $|x+2| < 1$ Then $\delta=\min\left(\dfrac{\epsilon}{5}, 1\right)$ and $$\lim_{x\to -2} (3x^2+4x-2)= 2$$ $|3x^2 + 4x - 2 - 2| < \epsilon$ $|3x^2 - 12 + 4x + 8| < \epsilon$ $|3(x+2)(x-2) + 4(x + 2)| < \epsilon$ $|3(x+2)(x + 2 -4) + 4(x + 2)| < \epsilon$ $|3[(x+2)^2 -4(x+2)] + 4(x + 2)| < \epsilon$ $|3(x+2)^2 - 8(x+2)| \leq 3|x+2|^2 + 8|x+2| < \epsilon$ Let $|x + 2| < 1$ Then $\delta =\min\left(\dfrac{\epsilon}{11}, 1\right)$ and to make sure I'm understanding this properly, when we assert that $|x+2| < 1$, this means that $\delta \leq 1$ as well, because if $\delta \geq 1$, this would allow for $|x+2| \geq 1$, which violates the condition we just imposed? Edit: Apologies for the bad tex
$$ \lim_{x\to -2} (2x^2+5x+3)= 1 $$ Finding $\delta$: $|2x^2 + 5x + 3 - 1| < \epsilon$ $|(2x + 1)(x + 2)| < \epsilon$ $|x - (-2)| < \delta $, pick $\delta = 3$ $|x+2| < 3 \Rightarrow -5 < x < 1 \Rightarrow -9 < 2x + 1 < 3 \Rightarrow |2x+1| < 9$ This implies $|2x + 1||x + 2| < 9 \cdot |x + 2| < \epsilon \Rightarrow |x+2| < \frac{\epsilon}{9} $ $\\[22pt]$ Actual Proof: Let $\epsilon > 0 $. Choose $\delta = \min\{3,\frac{\epsilon}{9}\}$ and assume that $0 < |x + 2| < \delta \Rightarrow |2x + 1||x+2| < 9 \cdot |x + 2| < 9 \cdot \frac{\epsilon}{9} = \epsilon$
Prove that $1-x/3\le\frac{\sin x}x\le1.1-x/4, \forall x\in(0,\pi]$ Prove that $$1- \frac{x}{3} \le \frac{\sin x}x \le 1.1 - \frac{x}{4}, \quad \forall x\in(0,\pi].$$
From the concavity of $f(x)=\cos x$ over $[0,\pi/2]$, we have: $$ \forall x\in[0,\pi/2],\quad \cos x\geq 1-\frac{2}{\pi}x, $$ from which $$ \forall x\in[0,\pi/2],\quad \sin x\geq \frac{1}{\pi}x(\pi-x) = x-\frac{1}{\pi}x^2$$ follows, by integration. Now, both the RHS and the LHS are symmetric wrt $x=\frac{\pi}{2}$, so we can extend the inequality over the whole $[0,\pi]$ interval: $$ \forall x\in[0,\pi],\quad \frac{\sin x}{x}\geq 1-\frac{x}{\pi}\geq 1-\frac{x}{3}, $$ proving the "easier" inequality. Using an analogous technique, it's possible to establish something a little weaker than the other desired inequality. Since: $$\operatorname{argmax}_{[0,\pi]}\left(\cos x+\frac{x}{2}\right)=\frac{\pi}{6},$$ we have: $$ \forall x\in[0,\pi],\quad \cos x+\frac{x}{2}-\left(\frac{\pi}{12}+\cos\frac{\pi}{6}\right)\leq 0,$$ so, by integration, $$\forall x\in[0,\pi],\quad \sin x +\frac{x^2}{4} - \left(\frac{\pi}{12}+\cos\frac{\pi}{6}\right)x \leq 0,$$ holds, that is: $$\forall x\in[0,\pi],\quad \frac{\sin x}{x}\leq 1.1278247\ldots-\frac{x}{4}.$$
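A brute-force numeric check of both bounds over $(0,\pi]$, taking the upper constant to be $\pi/12+\cos(\pi/6)\approx 1.1278$ (the weaker bound actually proved above):

```python
import math

C = math.pi / 12 + math.cos(math.pi / 6)   # ≈ 1.1278, the constant derived above

ok = True
for i in range(1, 100001):
    x = math.pi * i / 100000               # samples of (0, pi]
    r = math.sin(x) / x
    if not (1 - x / 3 <= r <= C - x / 4):
        ok = False

print(ok)   # True
```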
Question about complex numbers (what's wrong with my reasoning)? Can someone point out the flaw here? $$e^{-3\pi i/4} = e^{5\pi i/4}$$ So raising to $\frac{1}{2}$, we should get $$e^{-3\pi i/8} = e^{5\pi i/8}$$ but this is false.
Paraphrase using $e^0=1$ and $e^{\pi i}=-1$. We can write $$ e^{-3\pi i/4}\;1^2=e^{-3\pi i/4}\;(-1)^2 $$ Raising to the $\frac12$ power yields $$ e^{-3\pi i/8}\;1=e^{-3\pi i/8}\;(-1) $$ The problem is that without proper restrictions (e.g. branch cuts), the square root is not well-defined on $\mathbb{C}$.
Does the category framework permit new logics? It appears to me that a topos permits a broader concept of subsets than the yes/no decision of a characteristic function in a set-theory setting, probably because the subobject classifier doesn't have to be $\{0,1\}$. But I wonder: aren't all the multivalued logics also part of, or at least able to be modeled in, set theory? Is there some new logic coming in with topoi which wasn't there before? Did it just help in discovering new ideas? Fuzzy stuff etc. all exists in "conventional set theory mathematics" already, right?
No, you can't model all multivalued logics in set theory. Set theory models classical propositional logic, but it does not model a logic where, say, the principle of contradiction fails and its negation fails also. All formal theorems of any multivalued logic exist within classical logic in the sense that if A is a theorem of the multivalued logic, it is also a theorem of classical logic and thus can get modeled by set theory (the converse does seem to hold for some multivalued logics, but hardly all that many of them). But the domain of truth values for a formula in a multivalued logic differs from that in classical logic. Fuzzy stuff does NOT exist within conventional set theory mathematics. The axiom of extensionality does not hold for fuzzy sets.
Why is every representation of $\textrm{GL}_n(\Bbb{C})$ completely determined by its character? I know that every (Lie group) representation of $\textrm{GL}_n(\Bbb{C})$ is completely reducible; this I believe comes from the fact that every representation of the maximal compact subgroup $\textrm{U}(n)$ is completely reducible. More explicitly, suppose $V$ is a representation of $\textrm{GL}_n(\Bbb{C})$. Then $V$ is also a representation of $\textrm{U}(n)$; by complete reducibility of the unitary group we know that there is a $\textrm{U}(n)$ invariant inner product such that if $U$ is any $\textrm{GL}_n$ - invariant subspace of $V$ (and hence $\textrm{U}(n)$ invariant), there is an orthogonal complement $W$ such that $$V = U \oplus W$$ with $W$ invariant under $\textrm{U}(n)$. Now $W$ is invariant as a representation of the real Lie algebra $\mathfrak{u}(n)$ and hence under the complexified Lie algebra $$\mathfrak{gl}_n = \mathfrak{u}_n \oplus i \hspace{1mm} \mathfrak{u}(n).$$ Since $\textrm{GL}_n(\Bbb{C})$ is connected, $W$ is also invariant under $\textrm{GL}_n$, showing that every representation of it is completely reducible. Now I have read several textbooks on representation theory (e.g. Bump's Lie Groups, Procesi's book of the same name) and they all seem to tacitly assume that every representation of $\textrm{GL}_n$ is completely determined by its character; i.e. if two representations have the same character then they are isomorphic. Now in the finite groups case, we concluded this fact based on 1) Maschke's Theorem and 2) Linear independence of characters. We do not necessarily have 2), so how can we conclude the fact I stated above about $\textrm{GL}_n$? Thanks.
This boils down to facts about the representation theory of compact groups: there every complex representation is determined by its character. Now a representation of $GL(n,\mathbb C)$ is determined by a representation of its Lie algebra. But this is the complexification of $u(n)$, and complex representations of a Lie algebra are in one-to-one correspondence with complex representations of its complexification.
If every proper quotient is finite, then $G\cong\mathbb Z$ Here is my problem: Let $G$ be an infinite abelian group. Prove that if every proper quotient is finite, then $G\cong\mathbb Z$. And here is my incomplete approach: I know that the quotient group $\frac{G}{tG}$, wherein $tG$ is the torsion subgroup of $G$, is always torsion-free. So, if $tG\neq\{0\}$ then we have $\frac{G}{tG}$ torsion-free and finite simultaneously, which is a contradiction. Then $G$ is itself a torsion-free group. Moreover, suppose $G$ were a divisible group; then $$G\cong\sum\mathbb Q\oplus\sum_{p\in P}\mathbb Z(p^{\infty})$$ Since any proper quotient of such a $G$ would be infinite, I concluded that $G$ is not divisible. I confess that I am missing the final part. If my way through this problem up to my last conclusion is valid logically, please help me with the last part of the proof. Thanks
Let $a_0\in G$ be a nonzero element. Then $\langle a_0\rangle\cong \mathbb Z$ as $G$ is torsion-free (which you have shown). Now $Q_0=G/\langle a_0\rangle$ is a finite abelian group. If $Q_0\cong 1$, we are done. Otherwise, select $a_1\in G\setminus\langle a_0\rangle$. Then let $Q_1=G/\langle a_0, a_1\rangle$, etc. The orders of the finite groups $Q_0, Q_1, \ldots$ are strictly decreasing as long as they are $>1$, hence we ultimately find an $a_n$ with $G=\langle a_0, a_1, \ldots,a_n\rangle$. Thus $G$ is a finitely generated abelian group. A quick check with the classification theorem shows that $G\cong \mathbb Z$.
Comparing Two Sums with Binomial Coefficients How do I use pascals identity: $${2n\choose 2k}={2n-1\choose 2k}+{2n-1\choose 2k-1}$$ to prove that $$\displaystyle\sum_{k=0}^{n}{2n\choose 2k}=\displaystyle\sum_{k=0}^{2n-1}{2n-1\choose k}$$ for every positive integer $n$ ?
Other than Pascal's identity, we just notice that the sums on the right are the same because they cover the same binomial coefficients (red=even, green=odd, and blue=both). $$ \begin{align} \sum_{k=0}^n\binom{2n}{2k} &=\sum_{k=0}^n\color{#C00000}{\binom{2n-1}{2k}}+\color{#00A000}{\binom{2n-1}{2k-1}}\\ &=\sum_{k=0}^{2n-1}\color{#0000FF}{\binom{2n-1}{k}} \end{align} $$ Note that $\binom{2n-1}{-1}=\binom{2n-1}{2n}=0$.
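Both sums also equal $2^{2n-1}$ (the even-index entries of row $2n$, and the full row $2n-1$, of Pascal's triangle); a quick check:

```python
from math import comb

for n in range(1, 20):
    lhs = sum(comb(2 * n, 2 * k) for k in range(n + 1))      # even-index entries of row 2n
    rhs = sum(comb(2 * n - 1, k) for k in range(2 * n))      # all of row 2n-1
    assert lhs == rhs == 2 ** (2 * n - 1)

print("identity verified for n = 1..19")
```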
Laplace integral and leading order behavior Consider the integral: $$ \int_0^{\pi/2}\sqrt{\sin t}e^{-x\sin^4 t} \, dt $$ I'm trying to use Laplace's method to find its leading asymptotic behavior as $x\rightarrow\infty$, but I'm running into problems because the maximum of $\phi(t)$ (i.e. $-\sin^{4}t$) is $0$ and occurs at $0$ (call this $c$). In my notes on the Laplace Method, it specifically demands that $f(c)$ (in this case $f(t)=\sqrt{\sin t}$) cannot equal zero--but it does. How do I get around this?
Plotting the integrand for a few values of $x$, it is apparent that the maximum shifts closer to the origin as $x$ grows. Let's rewrite the integrand as follows: $$ \int_0^{\pi/2} \sqrt{\sin(t)} \exp\left(-x \sin^4(t)\right) \mathrm{d}t = \int_0^{\pi/2} \exp\left(\frac{1}{2} \log(\sin(t))-x \sin^4(t)\right) \mathrm{d}t $$ The maximum of the integrand is determined by $$ 0 = \frac{\mathrm{d}}{\mathrm{d}t} \left(\frac{1}{2} \log(\sin(t))-x \sin^4(t)\right) = \cot(t) \left( \frac{1}{2} - 4 x \sin^4(t)\right) $$ that is at $t_\ast = \arcsin\left((8 x)^{-1/4}\right)$. Then using Laplace's method, with $\phi(t) = \frac{1}{2} \log(\sin(t))-x \sin^4(t)$: $$ \int_0^{\pi/2} \sqrt{\sin(t)} \exp\left(-x \sin^4(t)\right) \mathrm{d}t \approx \int_{0}^{\pi/2} \exp\left(\phi(t_\ast) + \frac{1}{2} \phi^{\prime\prime}(t_\ast) (t-t_\ast)^2 \right) \mathrm{d}t = \exp\left(\phi(t_\ast)\right) \sqrt{\frac{2\pi}{-\phi^{\prime\prime}(t_\ast)}} $$ Easy algebra gives $\exp\left(\phi(t_\ast)\right) = (8 \mathrm{e} x)^{-1/8}$, $-\phi^{\prime\prime}(t_\ast) = 4 \sqrt{2 x} - 2$, giving $$ \int_0^{\pi/2} \sqrt{\sin(t)} \exp\left(-x \sin^4(t)\right) \mathrm{d}t \approx (8 \mathrm{e} x)^{-1/8} \sqrt{ \frac{\pi}{2 \sqrt{2 x} -1}} $$
Check my workings: Show that $\lim_{h\to0}\frac{f(x+h)-2f(x)+f(x-h)}{h^2}=f''(x)$ Let $f''$ be continuous on $\mathbb{R}$. Show that $$\lim_{h\to0}\frac{f(x+h)-2f(x)+f(x-h)}{h^2}=f''(x)$$ My workings $$\lim_{h\to0}\frac{f(x+h)-2f(x)+f(x-h)}{h^2}=\lim_{h\to0}\frac{f(x+h)-f(x)-[f(x)-f(x-h)]}{h^2}=\frac{\lim_{h\to0}\frac{f(x+h)-f(x)}{h}-\lim_{h\to0}\frac{f(x)-f(x-h)}{h}}{\lim_{h\to0}h}$$ By the definition of derivative, I move on to the next step. Also, I observe that everything in this question is continuous and differentiable up to $f''(x)$. $$=\frac{f'(x)-f'(x-h)}{\lim_{h\to0}h}$$ I do not know how to justify the next move but, $$=\lim_{h\to0}\frac{f'(x)-f'(x-h)}{h}$$ Then by the definition of derivative again, $$=f''(x-h)$$ Which is so close to the answer. So I shall assume that since $h\to0$ for $x-h$, therefore $x-h=x$? And so, $$=f''(x)$$ I think I made a crapload of generalization and fallacious errors... I also have another way, which was to work from $f''(x)$ to the LHS. But I realised I assumed that the $h$ were the same for $f'(x)$ and $f''(x)$. Is it normal to be unable to solve this question on the first try? Or am I just too weak in mathematics?
Try applying L'Hospital's Rule in the variable $h$ (for fixed $x$), that is, differentiate numerator and denominator with respect to $h$. One application gives $\frac{f'(x+h)-f'(x-h)}{2h}$; a second gives $\frac{f''(x+h)+f''(x-h)}{2}$, which tends to $f''(x)$ by continuity of $f''$.
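Numerically, the symmetric second difference does converge to $f''(x)$, with error of order $h^2$; for instance with $f=\sin$ at $x=1$ (so $f''(1)=-\sin 1$):

```python
import math

x = 1.0
exact = -math.sin(x)                      # f''(1) for f = sin

errors = []
for h in [1e-1, 1e-2, 1e-3]:
    approx = (math.sin(x + h) - 2 * math.sin(x) + math.sin(x - h)) / h ** 2
    errors.append(abs(approx - exact))

print(errors)   # shrinks roughly like h^2
```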
Evaluate $\sum_{k=1}^\infty \frac{k^2}{(k-1)!}$. Evaluate $\sum_{k=1}^\infty \frac{k^2}{(k-1)!}$ I sense the answer has some connection with $e$, but I don't know how it is. Please help. Thank you.
For $\frac{P(n)}{(n-r)!},$ where $P(n)$ is a polynomial. If the degree of $P(n)$ is $m>0,$ we can write $P(n)=A_0+A_1(n-r)+A_2(n-r)(n-r-1)+\cdots+A_m(n-r)(n-r-1)\cdots\{(n-r)-(m-1)\}$ Here $k^2=C+B(k-1)+A(k-1)(k-2)$ Putting $k=1$ in the above identity, $C=1$ $k=2,B+C=4\implies B=3$ $k=0\implies 2A-B+C=0,2A=B-C=3-1\implies A=1$ or comparing the coefficients of $k^2,A=1$ So, $$\frac{k^2}{(k-1)!}=\frac{(k-1)(k-2)+3(k-1)+1}{(k-1)!}=\frac1{(k-3)!}+\frac3{(k-2)!}+\frac1{(k-1)!}$$ $$\sum_{k=1}^\infty \frac{k^2}{(k-1)!}=\sum_{k=1}^\infty \left(\frac{(k-1)(k-2)+3(k-1)+1}{(k-1)!}\right)$$ $$=\sum_{k=3}^\infty \frac1{(k-3)!}+\sum_{k=2}^\infty \frac3{(k-2)!}+\sum_{k=1}^\infty \frac1{(k-1)!}$$ as $\frac1 {(-r)!}=0$ for $r>0$ $=e+3e+e=5e$
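A numeric check of the value $5e$ (the series converges very quickly, so a few dozen terms suffice):

```python
import math

# partial sum of sum_{k>=1} k^2/(k-1)!
s = sum(k * k / math.factorial(k - 1) for k in range(1, 40))
print(s, 5 * math.e)   # both ≈ 13.5914
```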
Galois theory (Showing $G$ is not abelian) Suppose $G$ is the Galois group of an irreducible degree $5$ polynomial $f \in \mathbb{Q}[x]$ such that $|G| = 10$. Then $G$ is non-abelian. Proof: Suppose $G$ is abelian. Let $M$ be the splitting field of $f$. Let $\theta$ be a root of $f$. Consider $\mathbb{Q}(\theta) \subseteq M$. Since $G$ is abelian every subgroup is normal. This means $\mathbb{Q}(\theta) \subseteq M$ is a normal extension. So $f$ splits completely in $\mathbb{Q}(\theta)$. How should I complete the proof from here? How would I get a contradiction?
The only abelian group of order $10$ is cyclic. Since $G$ is a subgroup of $S_5$, it's enough to show that there's no element of order $10$ in $S_5$. If you decompose a permutation in $S_5$ as a product of disjoint cycles, then the order is the LCM of the cycle lengths - and these can be any partition of $5$. Since $5 = 1 + 4 = 1 + 1 + 3 = 2 + 3 = 1 + 1 + 1 + 2 = 1 + 2 + 2 = 1 + 1 + 1 + 1 + 1$ are the only partitions, the only orders that appear are $1,2,3,4,5,6$ and in particular not $10$.
Is this differential equation (and its solution) already known? While studying for an exam, I met the following nonlinear differential equation $a\ddot{x}+b\dot{x}+c\sin x +d\cos x=k$ where $a,b,c,d,k$ are all real constants. My teacher says that this differential equation does not admit a closed-form solution, but I would like to check this with you. Is this equation (and its solution) already known? Thank you very much
$a\ddot{x}+b\dot{x}+c\sin x+d\cos x=k$ $a\dfrac{d^2x}{dt^2}+b\dfrac{dx}{dt}+c\sin x+d\cos x-k=0$ This belongs to an ODE of the form http://eqworld.ipmnet.ru/en/solutions/ode/ode0317.pdf Let $\dfrac{dx}{dt}=u$ , Then $\dfrac{d^2x}{dt^2}=\dfrac{du}{dt}=\dfrac{du}{dx}\dfrac{dx}{dt}=u\dfrac{du}{dx}$ $\therefore au\dfrac{du}{dx}+bu+c\sin x+d\cos x-k=0$ $au\dfrac{du}{dx}=-bu-c\sin x-d\cos x+k$ $u\dfrac{du}{dx}=-\dfrac{bu}{a}-\dfrac{c\sin x+d\cos x-k}{a}$ This belongs to an Abel equation of the second kind. In fact, all Abel equation of the second kind can be transformed into Abel equation of the first kind. Let $u=\dfrac{1}{v}$, Then $\dfrac{du}{dx}=-\dfrac{1}{v^2}\dfrac{dv}{dx}$ $\therefore-\dfrac{1}{v^3}\dfrac{dv}{dx}=-\dfrac{b}{av}-\dfrac{c\sin x+d\cos x-k}{a}$ $\dfrac{dv}{dx}=\dfrac{(c\sin x+d\cos x-k)v^3}{a}+\dfrac{bv^2}{a}$ Please follow the method in http://www.hindawi.com/journals/ijmms/2011/387429/#sec2
Find the volume of the solid under the plane $x + 2y - z = 0$ and above the region bounded by $y=x$ and $y = x^4$. Find the volume of the solid under the plane $x + 2y - z = 0$ and above the region bounded by $y = x$ and $y = x^4$. $$ \int_0^1\int_{x^4}^x{x+2ydydx}\\ \int_0^1{x^2-x^8dx}\\ \frac{1}{3}-\frac{1}{9} = \frac{2}{9} $$ Did I make a misstep? The answer book says I am incorrect.
I think it should be calculated as \begin{eqnarray*} V&=&\int_0^1\int_{x^4}^x\int_0^{x+2y}dzdydx\\ &=&\int_0^1\int_{x^4}^x(x+2y)dydx\\ &=&\int_0^1\left.\left(xy+y^2\right)\right|_{x^4}^xdx\\ &=&\int_0^1(2x^2-x^5-x^8)dx\\ &=&\left.(\frac{2}{3}x^3-\frac{1}{6}x^6-\frac{1}{9}x^9)\right|_0^1\\ &=&\frac{7}{18} \end{eqnarray*}
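A midpoint-rule check of the value $\frac{7}{18}$, using the inner antiderivative $xy+y^2$ evaluated between $y=x^4$ and $y=x$:

```python
N = 2000
total = 0.0
for i in range(N):
    x = (i + 0.5) / N                                 # midpoint of each subinterval
    inner = (x * x + x * x) - (x * x ** 4 + x ** 8)   # [xy + y^2] from y = x^4 to y = x
    total += inner / N

print(total, 7 / 18)   # both ≈ 0.3889
```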
problem on entire functions Let $f$ be an entire function. For which of the following cases is $f$ not necessarily a constant? * *$\operatorname{Im}(f'(z))>0$ for all $z$ *$f'(0)=0$ and $|f'(z)|\leq3$ for all $z$ *$f(n)=3$ for all integers $n$ *$f(z) =i$ when $z=(1+\frac{k}{n})$ for every positive integer $k$ I think 1 is true since $f'(z)=$ constant, so $f(z)=cz$ for some $c$. For 2, $f=0$. For 3, $f$ is not constant since $f(z)=3 \cos2\pi z$. I have no idea for 4. Am I right on the other three options?
You're right for $1$, where the $c$ you mention should have positive imaginary part. For example $f(z)=iz$ does the job. For $2$, $f$ need not be identically $0$. $f(z)=2$ satisfies the requirements, for instance. $f$ does have to be constant, though, because if $f$ is entire, $f'$ is too, and so by Liouville's theorem since $f'$ is bounded it's constant, and the other condition forces it to be $0$ everywhere. You're right on $3$. For $4$, I'm still not sure what $n$ is. My guess is that this will be an identity theorem application, and that $f$ will be $i$ on a set with an accumulation point, which will force it to be $i$ everywhere. But as it's written now, for $n$ is a fixed integer you could get $f$ non-constant in the same way as you did in number $3$: set $$f(z)=i\cos(2n\pi(z-1))$$
convergent series, sequences? I want to construct a sequence of rational numbers whose sum converges to an irrational number and whose sum of absolute values converges to 1. I can find/construct plenty of examples that has one or the other property, but I am having trouble find/construct one that has both these properties. Any hints(not solutions)?
Find two irrational numbers $a > 0$ and $b < 0$ such that $a-b = 1$ but $a+b$ is irrational. Create a series with positive terms that sum to $a$ and another series with negative terms that sum to $b$. Combine the two series.
What is a rational $0$-cycle on an algebraic variety $X$ over $\mathbb{Q}$? I found the following assertion in a paper about Hilbert modular forms that I'm trying to read. Let $X$ be an algebraic variety over $\mathbb{Q}$, and let $\Psi$ be a rational function on $X$ and $C = \sum n_P P$ be a rational $0$-cycle on $X$. Then $\Psi(C) = \prod \Psi(P)^{n_P}$ is a rational number. By searching online I found some definitions of an algebraic cycle, but I haven't found what a rational $0$-cycle is. So the questions I have are: * *What does it mean that $C = \sum n_P P$ is a rational $0$-cycle? Does it mean that the coefficients $n_P \in \mathbb{Q}$ and that $\sum n_P = 0$? And what would be a good reference for these basic definitions? *How do we prove that $\Psi(C) = \prod \Psi(P)^{n_P}$ is a rational number? Thank you very much for any help.
To say that $C = \sum n_P P$ is a rational $0$-cycle means that $n_P\in \mathbb Z$ and that $P\in X$ is a rational point i.e. a closed point with residue field $\kappa (P)=\mathbb Q$. If the rational function $\Psi$ is defined at $P$ its value at $P$ is a rational number $\Psi(P) \in \mathbb Q$ and if $\Psi$ is defined at all $P$'s with $n_P\neq 0$ we have $\Psi(C) = \prod \Psi(P)^{n_P}\in \mathbb Q$. As an illustration of what it means for $P$ to be rational, take the simplest example $X =\mathbb A^1_\mathbb Q=Spec ( \mathbb Q[T])$. Then the point $P$ corresponding to the prime ideal $J_P= \langle T-1/2\rangle\subset \mathbb Q[T]$ is rational since $\mathbb Q[T]/ \langle T-1/2\rangle=\mathbb Q$. However the closed point $Q$ corresponding to the prime ideal $J_Q= \langle T^3-2\rangle\subset \mathbb Q[T]$ is not rational since the canonical morphism $\mathbb Q \to \mathbb Q[T]/ \langle T^3-2 \rangle $ is not an isomorphism.
Relationship between Legendre polynomials and Legendre functions of the second kind I'm taking an ODE course at the moment, and my instructor gave us the following problem: Derive the following formula for Legendre functions $Q_n(x)$ of the second kind: $$Q_n(x) = P_n(x) \int \frac{1}{[P_n(x)]^2 (1-x^2)}dx$$ where $P_n(x)$ is the $n$-th Legendre polynomial. He introduced Legendre functions in the context of second order ODEs, but we haven't really used them for anything - moreover, this is the only problem we were assigned that has anything to do with them. As a result, I'm sort of at a loss of where to start. I've tried a couple of things (like using the actual Legendre ODE $$(1-x^2)y^{\prime \prime} - 2xy^{\prime} + n(n+1)y = 0$$ and plugging in the solution $y(x)=a_1P_n(x)+a_2Q_n(x)$ and proceeding from there) but so far, haven't been able to go anywhere. Any help (preferably as elementary as possible) would be much appreciated. Thanks!
The most general solution of Legendre equation is $$y = A{P_n} + B{Q_n}.$$ Let $y(x) = A(x){P_n}(x)$. Then $y' = AP' + A'P$ and $y'' = AP'' + 2A'P' + A''P$. So $$(1 - {x^2})(AP'' + 2A'P' + A''P) - 2x(AP' + A'P) + n(n + 1)AP = 0.$$ Note that $$(1 - {x^2})(AP'') - 2x(AP') + n(n + 1)AP = 0$$ which means some terms in the above equation vanish. Now let $A' = u$ and reduce the order so that $$2\frac{{dP}}{P} + \frac{{du}}{u} - \frac{{2xdx}}{{1 - {x^2}}} = 0$$ and $$u = \frac{{{\text{const}}}}{{(1 - {x^2}){P^2}}}$$ so $$A = {C_n}\int {\frac{1}{{(1 - {x^2}){P^2}}}dx}.$$ See if you can do the rest.
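For $n=1$ the rest can be carried out explicitly: $P_1(x)=x$ and $\int \frac{dx}{x^2(1-x^2)} = -\frac1x + \frac12\ln\frac{1+x}{1-x}$, giving $Q_1(x)=\frac{x}{2}\ln\frac{1+x}{1-x}-1$. A finite-difference check (a sketch, not a proof) that this $Q_1$ solves the Legendre equation with $n=1$:

```python
import math

def Q1(x):
    return 0.5 * x * math.log((1 + x) / (1 - x)) - 1

h = 1e-4
residuals = []
for x in [0.1, 0.3, 0.5, 0.7]:
    yp  = (Q1(x + h) - Q1(x - h)) / (2 * h)           # central difference for y'
    ypp = (Q1(x + h) - 2 * Q1(x) + Q1(x - h)) / h**2  # second difference for y''
    residuals.append((1 - x * x) * ypp - 2 * x * yp + 2 * Q1(x))

print(residuals)   # all ≈ 0
```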
Compute $\lim_{n\to\infty}\int_0^n \left(1+\frac{x}{2n}\right)^ne^{-x}\,dx$. I'm trying to teach myself some analysis (I'm currently studying algebra), and I'm a bit stuck on this question. It's strange because of the $n$ appearing as a limit of integration; I want to apply something like LDCT (I guess), but it doesn't seem that can be done directly. I have noticed that the change of variables $u=1+\frac{x}{2n}$ helps. With this, the problem becomes $$ \lim_{n\to\infty}\int_1^{3/2}2nu^ne^{-2n(u-1)}\,du. $$ This at least solves the issue of the integration limits. Let's let $f_n(u):=2nu^ne^{-2n(u-1)}$ for brevity. I believe it can be shown that $$ \lim_{n\to\infty}f_n(u)=\cases{\infty,\,u=1\\0,\,1<u\leq 3/2} $$ using L'Hopital's rule and the fact that $u^n$ intersects $e^{2n(u-1)}$ where $u=1$, and so the exponential function is larger than $u^n$ for $n>1$. I think I was also able to show that $\{f_n\}$ is eventually decreasing on $(1,3/2]$, and so Dini's Theorem says that the sequence is uniformly convergent to $0$ on $[u_0,3/2]$ for any $u_0\in (1,3/2]$. Since each $f_n$ is continuous on the closed and bounded interval $[u_0,3/2]$, each is bounded; as the convergence is uniform, the sequence is uniformly bounded. Thus, the Lebesgue Dominated Convergence Theorem says $$ \lim_{n\to\infty}\int_{u_0}^{3/2}2nu^ne^{-2n(u-1)}\,du=\int_{u_0}^{3/2}0\,du=0. $$ So it looks like I'm almost there, I just need to extend the lower limit all the way to $1$. I think this amounts to asking whether we can switch the order of the limits in $$\lim_{n\to\infty}\lim_{u_0\to 1^+}\int_{u_0}^{3/2}2nu^ne^{-2n(u-1)}\,du, $$ and (finally!) this is where I'm stuck. I feel like this step should be easy, and it's quite possible I'm missing something obvious. That happens a lot when I try to do analysis because of my practically nonexistent background.
HINT Note that $$\left(1 + \dfrac{x}{2n} \right)^n < e^{x/2}$$ for all $n$. Hence, $$ \left(1 + \dfrac{x}{2n} \right)^n e^{-x} < e^{-x/2}$$ Your sequence $$f_n(x) = \begin{cases} \left(1 + \dfrac{x}{2n} \right)^n e^{-x} & x \in [0,n]\\ 0 & x > n\end{cases}$$ is dominated by $g(x) = e^{-x/2}$. Now apply LDCT.
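By LDCT the limit is $\int_0^\infty e^{-x/2}\,dx = 2$, since the integrands increase pointwise to $e^{x/2}e^{-x}=e^{-x/2}$. A trapezoid-rule check at $n=2000$:

```python
import math

def integral(n):
    L = min(n, 200)           # the e^{-x/2} tail beyond x = 200 is negligible
    m = 100 * L               # step size 0.01
    h = L / m
    s = 0.0
    for i in range(m + 1):
        x = i * h
        v = (1 + x / (2 * n)) ** n * math.exp(-x)
        s += v / 2 if i in (0, m) else v
    return s * h

print(integral(2000))   # ≈ 2
```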
Studying $ u_{n}=\frac{1}{n!}\int_0^1 (\arcsin x)^n \mathrm dx $ I would like to find a simple equivalent of: $$ u_{n}=\frac{1}{n!}\int_0^1 (\arcsin x)^n \mathrm dx $$ We have: $$ 0\leq u_{n}\leq \frac{1}{n!}\left(\frac{\pi}{2}\right)^n \rightarrow0$$ So $$ u_{n} \rightarrow 0$$ Clearly: $$ u_{n} \sim \frac{1}{n!} \int_{\sin(1)}^1 (\arcsin x)^n \mathrm dx $$ But is there a simpler equivalent for $u_{n}$? Using integration by part: $$ \int_0^1 (\arcsin x)^n \mathrm dx = \left(\frac{\pi}{2}\right)^n - n\int_0^1 \frac{x(\arcsin x)^{n-1}}{\sqrt{1-x^2}} \mathrm dx$$ But the relation $$ u_{n} \sim \frac{1}{n!} \left(\frac{\pi}{2}\right)^n$$ seems to be wrong...
The change of variable $x=\cos\left(\frac{\pi s}{2n}\right)$ yields $$ u_n=\frac1{n!}\left(\frac\pi2\right)^{n+2}\frac1{n^2}v_n, $$ with $$ v_n=\int_0^n\left(1-\frac{s}n\right)^n\,\frac{2n}\pi \sin\left(\frac{\pi s}{2n}\right)\,\mathrm ds. $$ When $n\to\infty$, $\left(1-\frac{s}n\right)^n\mathbf 1_{0\leqslant s\leqslant n}\to\mathrm e^{-s}$ and $\frac{2n}\pi \sin\left(\frac{\pi s}{2n}\right)\mathbf 1_{0\leqslant s\leqslant n}\to s$. Both convergences are monotonic hence $v_n\to\int\limits_0^\infty\mathrm e^{-s}\,s\,\mathrm ds=1$. Finally, $$ u_n\sim\frac1{(n+2)!}\left(\frac\pi2\right)^{n+2}. $$
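A numeric check is possible via the equivalent form $u_n=\frac{1}{n!}\int_0^{\pi/2}t^n\cos t\,dt$ (substituting $x=\sin t$ in the original integral): the ratio of $u_n$ to $(\pi/2)^{n+2}/(n+2)!$ should approach $1$.

```python
import math

def ratio(n, m=100000):
    # trapezoid rule for ∫_0^{π/2} (t/(π/2))^n cos t dt, rescaled to avoid underflow
    h = (math.pi / 2) / m
    s = 0.0
    for i in range(m + 1):
        t = i * h
        v = (t / (math.pi / 2)) ** n * math.cos(t)
        s += v / 2 if i in (0, m) else v
    scaled = s * h
    # u_n / [(π/2)^{n+2}/(n+2)!] = scaled * (n+1)(n+2) / (π/2)^2
    return scaled * (n + 1) * (n + 2) / (math.pi / 2) ** 2

print(ratio(100))   # close to 1
```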
How to show that this function is bijective Possible Duplicate: Proving the Cantor Pairing Function Bijective Assume I define $$ f: \mathbb N \times \mathbb N \to \mathbb N, (a,b) \mapsto a + \frac{(a + b ) ( a + b + 1)}{2} $$ How to show that this function is bijective? For injectivity I tried to show that if $f(a,b) = f(n,m) $ then $(a,b) = (n,m)$ but I end up getting something like $3(n-a) + (n+m)^2 -(a+b)^2 + m - b = 0$ and don't see how to proceed from there. There has to be something cleverer than treating all possible cases of $a \leq n, b \leq m$ etc. For surjectivity I'm just stuck. If I do $f(0,n)$ and $f(n,0)$ it doesn't seem to lead anywhere. Thanks for your help.
The term $(a+b)(a+b+1)/2$ is the sum of the numbers from $1$ to $a+b$. For a fixed value of $s=a+b$, $a$ ranges from $0$ to $s$, so we need $s+1$ different results for these arguments. Now you can prove by induction that the range from $s(s+1)/2$ to $(s+1)(s+2)/2-1$ contains precisely the images of the pairs with sum $s$.
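The block structure can be checked directly: enumerating pairs by increasing sum $s$ (and increasing $a$ within each sum) hits $0,1,2,\dots$ exactly once each.

```python
def f(a, b):
    return a + (a + b) * (a + b + 1) // 2

S = 50   # check all pairs with a + b <= 50
images = [f(a, s - a) for s in range(S + 1) for a in range(s + 1)]

print(images[:6])                                     # [0, 1, 2, 3, 4, 5]
print(images == list(range((S + 1) * (S + 2) // 2)))  # True: a bijection on these blocks
```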
Number of integral solutions for $|x | + | y | + | z | = 10$ How can I find the number of integral solutions to the equation $|x | + | y | + | z | = 10$? I am using the formula: the number of integral solutions of $|x| +|y| +|z| = p$ is $4p^2 +2$, so the answer is $402$. But I want to know how we can find it without using the formula. Any suggestions would be appreciated.
(1) Counting by the value of $z$: for $z=0$ there are $40$ patterns; for $z=\pm1$, $36$ each; for $z=\pm2$, $32$ each; $\dots$; for $z=\pm9$, $4$ each; and for $z=\pm10$, $1$ each. The total is $S=2(4+8+\dots+36)+40+2=402$. (2) Another counting: there are $8$ regions given by the signs of $x,y,z$, $40$ patterns at $x=0$, and $2$ patterns at $y=z=0$; therefore $40+8\binom{10}{2}+2=402$.
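Either count is easy to confirm by brute force:

```python
count = sum(1 for x in range(-10, 11)
              for y in range(-10, 11)
              for z in range(-10, 11)
              if abs(x) + abs(y) + abs(z) == 10)

print(count)   # 402, matching 4*10^2 + 2
```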
Absolute value and sign of an elasticity In my microeconomics book, I read that when we have $1+\dfrac{1}{\eta}$ where $\eta$ is an elasticity coefficient, we can write $1-\dfrac{ 1}{|\eta|}$ "to avoid ambiguities stemming from the negative sign of the elasticity". What does this mean? Is it always legitimate to perform such a transformation?
If the elasticity coefficient $\eta$ is negative, then $|\eta|=-\eta$. The ambiguity arises because some people may suppress the negative sign and write it as a positive number instead. In this case, using $1+\frac{1}{\eta}$ becomes ambiguous. For example, if $\eta=-2$ but people write it as $\eta=2$, then $1+\frac{1}{\eta}$ can mean $1-\frac{1}{2}$ or $1+\frac{1}{2}$.
existence of hyperbolic groups I perfectly understand that the Milnor-Schwarz lemma tells me that cocompact lattice in semisimple Lie groups of higher rank are not hyperbolic (in the sense of Gromov). But do there exist noncocompact lattices in higher rank semisimple Lie groups which are hyperbolic? (wikipedia is saying "no" without any reference, but I do not trust that article since it contains mistakes...)
You asked a related question in another post, and you erased the question while I was posting an answer. So I post it here. The question was complementary to the one above so I think it's relevant to include the answer: why are non-uniform lattices in rank 1 symmetric spaces of noncompact type not hyperbolic except in the case of the hyperbolic plane? Here is my answer. I think it's a result of Garland and Raghunathan (Annals 1970 "Fundamental domains..."). They show that given such a lattice, for some point $\omega$ at infinity, the stabilizer of $\omega$ in the lattice acts cocompactly on each horosphere based at $\omega$. This horosphere is modeled on a $(n-1)$-dimensional simply connected nilpotent Lie group, where $n$ is the real dimension of the rank 1 symmetric space of non-compact type. Thus the nonuniform lattice contains a f.g. nilpotent group of Hirsch length $n-1$. This is possible in a hyperbolic group only if $n\le 2$. (Note: if $n\ge 3$, it follows that the lattice contains a free abelian group of rank 2.) More precise results about the structure of these lattices were formalized into the concept of relatively hyperbolic groups, see Gromov, Farb, etc. They indicate that intuitively, these "peripheral" subgroups are the only obstruction to hyperbolicity.
Simple matrix equation I believe I'm missing an important concept and I need your help. I have the following question: "If $A^2 - A = 0$ then $A = 0$ or $A = I$" I know that the answer is FALSE (only because someone told me) but when I try to find out a concrete matrix which satisfies this equation (which isn't $0$ or $I$) I fail. Can you please give me a direction to find a concrete matrix? What is the idea behind this question? Guy
If for a polynomial $p$ and a matrix $A$ you have $p(A)=0$ then for every invertible matrix $W$ you have $$p(W^{-1}AW)=W^{-1}p(A)W=0 . $$ Here $p=x^2-x$, you can take $A=\begin{pmatrix}1&0\\0&0\end{pmatrix}$, $W$ any invertible matrix to make a lot of examples.
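A concrete instance, checked in plain Python: conjugating the idempotent $A=\begin{pmatrix}1&0\\0&0\end{pmatrix}$ by $W=\begin{pmatrix}1&1\\0&1\end{pmatrix}$ yields another solution of $A^2-A=0$ that is neither $0$ nor $I$.

```python
def mul(X, Y):
    # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A  = [[1, 0], [0, 0]]
W  = [[1, 1], [0, 1]]     # invertible
Wi = [[1, -1], [0, 1]]    # W^{-1}

B = mul(mul(Wi, A), W)    # B = W^{-1} A W
print(B)                  # [[1, 1], [0, 0]]
print(mul(B, B) == B)     # True: B^2 - B = 0, yet B is neither 0 nor I
```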
Understanding what this probability represents Say I want the probability that a five card poker hand contains exactly two kings. This would be $$\frac{{4\choose 2}{48 \choose 3}}{52\choose 5}$$ Now if I drop the $48 \choose 3$, which represents the 3 non king cards, what can the probability $\frac{4\choose 2}{52\choose 5}$ be taken to represent? Is it the number of hands containing at least 2 kings?
It would be the probability of a hand containing exactly two kings and three specified non-kings, e.g. the ace, 2 and 3 of spades.
modular multiplicative inverse I have a homework problem that I've attempted for days in vain... It's asking me to find an $n$ so that there is exactly one element of the complete residue system $\pmod n$ that is its own inverse apart from $1$ and $n-1$. It also asks me to construct an infinite sequence of $n's$ so that the complete residue system $\pmod n$ has elements that are their own inverses apart from $1$ and $n-1$. For the first part, I tried all $n$ from $3$ up to $40$, but none worked... For the second part, I'm really confused... Could someone please help me with this? Thanks!
For the second: What about $3$ and $5 \pmod8$? For the first: if $x^2\equiv 1$ then $(-x)^2\equiv 1$, too, so if there is only one (besides $\pm 1$), then $x\equiv -x \pmod{n}$, that is $2x=n$.
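A quick search (the helper below is just for illustration) lists the self-inverse residues for a few moduli; modulo $8$, and more generally modulo multiples of $8$, there are extra ones besides $\pm1$:

```python
def self_inverses(n):
    # residues x with x^2 ≡ 1 (mod n)
    return [x for x in range(1, n) if (x * x) % n == 1]

print(self_inverses(8))    # [1, 3, 5, 7]
print(self_inverses(24))   # [1, 5, 7, 11, 13, 17, 19, 23]
print(self_inverses(7))    # [1, 6]  -- only the trivial ones for a prime
```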
Find all Laurent series of the form... Find all Laurent series of the form $\sum_{n=-\infty} ^{\infty} a_n z^n$ for the function $f(z)= \frac{z^2}{(1-z)^2(1+z)}$ There are a lot of problems similar to this. What are all the forms? I need to see this example to understand the idea.
Here is a related problem. First, convert $f(z)$ to the form $$f(z) = \frac{1}{4}\, \frac{1}{\left( 1+z \right)}+\frac{3}{4}\, \frac{1}{\left( -1+z \right) }+\frac{1}{2}\,\frac{1}{\left( -1+z \right)^2}$$ using partial fractions. Factoring out $z$ gives $$ f(z)= \frac{1}{4z}\frac{1}{(1+\frac{1}{z})}+ \frac{3}{4z}\frac{1}{(1-\frac{1}{z})}+\frac{1}{2z^2}\frac{1}{(1-\frac{1}{z})^2}\,. $$ Now, using the series expansion of each term yields the Laurent series for $|z|>1$ $$ = \frac{1}{4z}(1-\frac{1}{z}+ \frac{1}{z^2}-\dots) +\frac{3}{4z}(1+\frac{1}{z}+ \frac{1}{z^2}+\dots)+\frac{1}{2z^2}\sum_{k=0}^{\infty}{-2\choose k}\frac{(-1)^k}{z^k} \,. $$ Since $(-1)^k{-2\choose k}=k+1$, collecting the powers of $z$ gives $$ \sum_{k=0}^{\infty}\left( \frac{(-1)^k}{4}+\frac{3}{4}+\frac{k}{2} \right)\frac{1}{z^{k+1}}\,. $$ Note that, $$(1+x)^{-1}=1-x+x^2-\dots \,,$$ $$ (1-x)^{-1}=1+x+x^2+\dots \,,$$ $$ (1-x)^{m}=\sum_{k=0}^{\infty}{m\choose k}(-1)^kx^k $$
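One can sanity-check the $|z|>1$ expansion numerically: the coefficients $\frac{(-1)^k}{4}+\frac34+\frac k2$ are $1,1,2,2,3,3,\dots$, and partial sums should reproduce $f$ at, say, $z=3$:

```python
def f(z):
    return z ** 2 / ((1 - z) ** 2 * (1 + z))

z = 3.0
coeff = lambda k: (-1) ** k / 4 + 3 / 4 + k / 2   # 1, 1, 2, 2, 3, 3, ...
partial = sum(coeff(k) * z ** -(k + 1) for k in range(80))

print(partial, f(z))   # both ≈ 0.5625 = 9/16
```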
Show there is no measure on $\mathbb{N}$ such that $\mu(\{0,k,2k,\ldots\})=\frac{1}{k}$ for all $k\ge 1$ For $k\ge1$, let $A_k=\{0,k,2k,\ldots\}.$ Show that there is no measure $\mu$ on $\mathbb{N}$ satisfying $\mu(A_k)=\frac{1}{k}$ for all $k\ge1$. What I have done so far: I am trying to apply Borel-Cantelli lemma ($\mu(\mathbb{N})=1)$. Let $(p_n)_{n\in \mathbb{N}}=(2,3,5,\ldots)$ be the increasing sequence of all prime numbers. It is the case that for all $k\in\mathbb{N}$ and $i_1\lt i_2 \lt \ldots \lt i_k$ we have $\mu\left(A_{p_{i_1}}\cap\ldots\cap A_{p_{i_k}}\right)=\mu\left(A_{p_{i_1}}\right)\ldots\mu\left(A_{p_{i_k}}\right)$ so all $A_n$ are independent. We know that the sum of the reciprocals of the primes $\sum\limits_{n=1}^\infty \frac{1}{p_n}$diverges and hence, by B-C second lemma, $$\mu\left(\bigcap\limits_{n=1}^\infty\bigcup\limits_{m=n}^\infty A_{p_m}\right)=1$$ holds. However, I cannot figure out how to conclude the reasoning. I would appreciate any help.
You are basically there. You proved in your last line that $\mu$-a.e. number is divisible by infinitely many primes. But $0$ is the only natural number with infinitely many prime divisors, so the displayed set is $\{0\}$; on the other hand, $\mu(\{0\})\le\mu(A_k)=\frac1k$ for every $k$, hence $\mu(\{0\})=0\ne1$, a contradiction.
How Many Ways to Make a Pair Given Five Poker Cards I'm confused at the general method of solving this type of problem. The wikipedia page says that there are: ${13 \choose 1} {4 \choose 2} {12 \choose 3} {4 \choose 1}^{3}$ ways to select a pair when 5 cards are dealt. Can someone outline what each calculation means? For example, is $13 \choose 1$ the process of selecting 1 card in the beginning?
$13\choose 1$ is the number of ways of choosing the denomination of the pair, whether it is a pair of kings, or a pair of threes, or whatever. Then there are $4\choose 2$ ways to choose suits for the two cards of the pair. Then there are $12\choose 3 $ ways to choose the three different denominations of the remaining three cards from the twelve that are different from the denomination of the pair. Each of those other three cards can be of ${4\choose 1}=4$ different suits, for ${4\choose 1}^3$ choices of three suits in all.
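As a sanity check, the count can be verified by brute force over all five-card hands. This sketch uses a reduced deck of 8 ranks and 4 suits to keep the enumeration fast; the same code with `ranks = range(13)` reproduces the full-deck count $1{,}098{,}240$.

```python
from itertools import combinations
from collections import Counter
from math import comb

# Reduced deck: 8 ranks x 4 suits (the full 52-card deck works the same way)
ranks, suits = range(8), range(4)
deck = [(r, s) for r in ranks for s in suits]

def is_one_pair(hand):
    # "Exactly one pair" = rank multiplicities are 2, 1, 1, 1
    return sorted(Counter(r for r, _ in hand).values()) == [1, 1, 1, 2]

brute = sum(1 for hand in combinations(deck, 5) if is_one_pair(hand))
formula = comb(8, 1) * comb(4, 2) * comb(7, 3) * comb(4, 1) ** 3
print(brute, formula)  # 107520 107520
```

The formula mirrors the one above, with $13$ and $12$ replaced by $8$ ranks and $7$ remaining ranks.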
How many distinct functions can be defined from set A to B? In my discrete mathematics class our notes say that between set $A$ (having $6$ elements) and set $B$ (having $8$ elements), there are $8^6$ distinct functions that can be formed, in other words: $|B|^{|A|}$ distinct functions. But no explanation is offered and I can't seem to figure out why this is true. Can anyone elaborate?
Let's say for concreteness that $A$ is the set $\{p,q,r,s,t,u\}$, and $B$ is a set with $8$ elements distinct from those of $A$. Let's try to define a function $f:A\to B$. What is $f(p)$? It could be any element of $B$, so we have 8 choices. What is $f(q)$? It could be any element of $B$, so we have 8 choices. ... What is $f(u)$? It could be any element of $B$, so we have 8 choices. So there are $8\cdot8\cdot8\cdot8\cdot8\cdot8 = 8^6$ ways to choose values for $f$, and each possible set of choices defines a different function $f$. So that's how many functions there are.
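The same argument can be checked by machine for small sets: a function $A\to B$ is exactly a tuple of values, one choice from $B$ per element of $A$. A quick sketch (with a $3$-element domain rather than $6$, for brevity):

```python
from itertools import product

A = ['p', 'q', 'r']           # a 3-element domain
B = ['b1', 'b2', 'b3', 'b4']  # a 4-element codomain

# Each function A -> B is an assignment of some element of B to each
# element of A, i.e. an element of B x B x B.
functions = list(product(B, repeat=len(A)))
print(len(functions), len(B) ** len(A))  # 64 64
```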
Linear Algebra Fields question I have the following statement (which is false) and I'm trying to understand why, I can't find a concrete example. If $a$ belongs to $Z_n$ and $a^2 = 1$ then $a=1$ or $a=-1$ Can someone give me a direction? Guy
The statement does hold when the structure is a field, i.e. when $n$ is prime. The statement is false for some composite $n$. For example, consider any $n$ of the form $x(x+2)$ for some integer $x\geq 2$; then we will have $$(x+1)^2 \equiv x^2 + 2x + 1 \equiv n + 1 \equiv 1 \pmod n$$ so that $\pm(x+1)$ is a solution in addition to $\pm1$ (the restriction $x\geq 2$ ensures $x+1\not\equiv\pm1\pmod n$).
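A quick search makes the failure concrete: for prime moduli only $\pm1$ square to $1$, while composite moduli of the form $n = x(x+2)$ with $x \ge 2$ pick up the extra roots $\pm(x+1)$.

```python
def sqrt_of_one(n):
    """All a in Z_n with a^2 congruent to 1 (mod n)."""
    return [a for a in range(n) if (a * a) % n == 1]

print(sqrt_of_one(7))   # [1, 6]            -- prime: only +-1
print(sqrt_of_one(15))  # [1, 4, 11, 14]    -- 15 = 3*5, x = 3 gives +-4
print(sqrt_of_one(8))   # [1, 3, 5, 7]      -- 8 = 2*4,  x = 2 gives +-3
```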
Convergence of series given a monotonic sequence Let $(x_n)_{n \in \Bbb N}$ be a decreasing sequence such that its series converges; I want to show that $\displaystyle \lim_{n \to \infty} n x_n = 0$. Ok I don't even know where to start. I need a direction please! Thank you!
Just another approach. Since $\{x_n\}_{n\in\mathbb{N}}$ is decreasing and its associated series converges, we have $x_n\geq 0$ for every $n\in\mathbb{N}$ (otherwise, $\lim_{n\to +\infty}x_n < 0$ and the series cannot converge). Assume now the existence of a positive real number $\alpha$ such that $$ n\, x_n \geq \alpha $$ for an infinite number of positive natural numbers $n$; let $A=\{n_1,n_2,\ldots\}$ be the set of such natural numbers. Let now $a_0=0,a_1=n_1$, $a_2$ be the minimum element of $A$ greater than $2a_1$, $a_3$ be the minimum element of $A$ greater than $2a_2$ and so on. We have: $$\sum_{n=1}^{+\infty}x_n \geq \sum_{k=1}^{+\infty}(a_k-a_{k-1})x_{a_k} \geq\sum_{k=1}^{+\infty}\frac{\alpha}{2}=+\infty,$$ which is clearly a contradiction, so $$\lim_{n\to+\infty} n\,x_n = 0$$ must hold.
$a^2-b^2 = x$ where $a,b,x$ are natural numbers Suppose that $a^2-b^2 =x$ where $a,b,x$ are natural numbers. Suppose $x$ is fixed. If there is one $(a,b)$ found, can there be another $(a,b)$? Also, would there be a way to know how many such $(a,b)$ exists?
You want $x = a^2 - b^2 = (a-b)(a+b)$. Let $m = a-b$ and $n = a+b$, then note that $a = (m+n)/2$ and $b = (n-m)/2$. For these to be natural numbers, you want both $m$ and $n$ to be of the same parity (i.e., both odd or both even), and $m \le n$. For any factorization $x = mn$ satisfying these properties, $a = (m+n)/2$ and $b = (n-m)/2$ will be a solution. The answer to your question of how many such $(a,b)$ exist is therefore the same as how many ways there are of writing $x = mn$ with both factors of the same parity and $m \le n$. Let $d(x)$ denote the number of divisors of $x$. (For instance, $d(12) = 6$ as $12$ has $6$ factors $1, 2, 3, 4, 6, 12$.)
* If $x$ is odd, note that for any divisor $m$ of $x$, the factorization $x = mn$ (where $n = x/m$) has both factors odd. Also, out of any two factorizations $(m,n)$ and $(n,m)$, one of them will have $m < n$ and the other will have $m > n$, so the number of "good" factorizations of $x$ is $d(x)/2$. In the case where $x$ is a perfect square, this means either $\lceil d(x)/2 \rceil$ or $\lfloor d(x)/2 \rfloor$ depending on whether or not you want to allow the solution $a = \sqrt{x}$, $b = 0$.
* If $x$ is even, then $m$ and $n$ can't be both odd so they must be both even, so $x$ must be divisible by $4$. Say $x = 2^k \cdot l$, where $k \ge 2$. How many factorizations $x = mn$ are there with both $m$ and $n$ being even? Well, there are $d(x)$ factorisations of $x$ as $x = mn$. Of these, the factorisations in which all the powers of $2$ go on one side can be got by taking any of the $d(l)$ factorisations of $l$, and then putting the powers of two entirely on one of the $2$ sides. So the number of representations $x = mn$ with $m$ and $n$ both being even is $d(x) - 2d(l)$.
Again, the number where $m \le n$ is half that, namely $(d(x) - 2d(l))/2$, where in the case $x$ is a perfect square, you mean either $\lceil (d(x) - 2d(l))/2 \rceil$ or $\lfloor (d(x) - 2d(l))/2 \rfloor$ depending on whether you want to allow the $b = 0$ solution or not.

Here is a program using Sage that can print all solutions $(a,b)$ for any given $x$.

    #!/path/to/sage
    print "Hello"

    def mn(x):
        '''Returns all (m,n) such that x = mn, m <= n, and both odd/even'''
        for m in divisors(x):
            n = x/m
            if m <= n and (m % 2 == n % 2):
                yield (m,n)
            elif m > n:
                break

    def ab(x):
        '''Returns all (a,b) such that a^2 - b^2 = x'''
        return [((m+n)/2, (n-m)/2) for (m,n) in mn(x)]

    def num_ab(x):
        '''The number of (a,b) such that a^2 - b^2 = x'''
        dx = number_of_divisors(x)
        if x % 2:
            return ceil(dx / 2)
        l = odd_part(x)
        dl = number_of_divisors(l)
        return ceil((dx - 2*dl) / 2)

    # Do it in two ways to check that we have things right
    for x in range(1,1000):
        assert num_ab(x) == len(ab(x))

    # Some examples
    print ab(12)
    print ab(100)
    print ab(23)
    print ab(42)
    print ab(999)
    print ab(100000001)

It prints:

    Hello
    [(4, 2)]
    [(26, 24), (10, 0)]
    [(12, 11)]
    []
    [(500, 499), (168, 165), (60, 51), (32, 5)]
    [(50000001, 50000000), (2941185, 2941168)]

and you can verify that, for instance, $168^2 - 165^2 = 999$.
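For readers without Sage, here is a plain-Python rendering of the same idea. The names mirror the Sage program; the divisor search is naive trial division, which is fine for small $x$.

```python
def divisors(x):
    # Naive trial division; adequate for small x
    return [d for d in range(1, x + 1) if x % d == 0]

def mn(x):
    """All (m, n) with x = m*n, m <= n, and m, n of the same parity."""
    for m in divisors(x):
        n = x // m
        if m > n:
            break
        if m % 2 == n % 2:
            yield (m, n)

def ab(x):
    """All (a, b) in N x N with a^2 - b^2 = x (b = 0 allowed)."""
    return [((m + n) // 2, (n - m) // 2) for m, n in mn(x)]

print(ab(999))  # [(500, 499), (168, 165), (60, 51), (32, 5)]
print(ab(42))   # []  -- 42 is 2 mod 4, so no representation exists
```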
sines and cosines law Can we apply the sines and cosines law on the external angles of triangle ?
This answer assumes a triangle with angles $A, B, C$ with sides $a,b,c$. Law of sines states that $$\frac{\sin A}{a} = \frac{\sin B}{b} = \frac{\sin C}{c}$$ Knowing the external angle is $\pi - \gamma$ if the angle is $\gamma$, $\sin(\pi-\gamma) = \sin \gamma$ because in the unit circle, you are merely reflecting the point on the circle over the y-axis and the sine value represents the $y$ value of the point, so the $y$ value will remain the same. (Also, $\sin(\pi-\gamma) = \sin\pi \cos \gamma - \cos \pi \sin \gamma = \sin \gamma$), so $$\frac{\sin A}{a} = \frac{\sin B}{b} = \frac{\sin C}{c} = \frac{\sin (\pi -A)}{a} = \frac{\sin (\pi -B)}{b} = \frac{\sin(\pi - C)}{c}$$ The law of cosines state that $$c^2=a^2+b^2-2ab\cos\gamma \Longrightarrow \cos \gamma = \frac{a^2 + b^2 - c^2}{2ab}$$ But $\cos(\pi-\gamma) = -\cos\gamma$ for pretty much the same reasons I used for the sine above. So if you are using external angles, the law of cosines will not work. However, if $\pi - \gamma$ is the external angle, then $$c^2=a^2+b^2+2ab\cos(\pi-\gamma) \Longrightarrow \cos(\pi- \gamma) = -\frac{a^2 + b^2 - c^2}{2ab}$$. So in short, the answer is yes for law of sines, and no for law of cosines (unless you make the slight modification I made).
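The modified law of cosines can be checked numerically on a concrete triangle (the sides and angle below are just an example):

```python
import math

# A triangle with sides a, b and included angle gamma
a, b, gamma = 3.0, 4.0, math.radians(70)
c = math.sqrt(a*a + b*b - 2*a*b*math.cos(gamma))  # usual law of cosines

# Same c recovered from the external angle pi - gamma with the sign flipped
c_ext = math.sqrt(a*a + b*b + 2*a*b*math.cos(math.pi - gamma))
print(math.isclose(c, c_ext))  # True

# Law of sines is unchanged under gamma -> pi - gamma
print(math.isclose(math.sin(gamma), math.sin(math.pi - gamma)))  # True
```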
What type of singularity is this? $z\cdot e^{1/z}\cdot e^{-1/z^2}$ at $z=0$. My answer is removable singularity. $$ \lim_{z\to0}\left|z\cdot e^{1/z}\cdot e^{-1/z^2}\right|=\lim_{z\to0}\left|z\cdot e^{\frac{z-1}{z^2}}\right|=\lim_{z\to0}\left|z\cdot e^{\frac{-1}{z^2}}\right|=0. $$ But someone says it is an essential singularity. I don't know why.
$$ze^{1/z}e^{-1/z^2}=z\left(1+\frac{1}{z}+\frac{1}{2!z^2}+...\right)\left(1-\frac{1}{z^2}+\frac{1}{2!z^4}-...\right)$$ So this looks like an essential singularity, uh? I really don't understand how you made the following step: $$\lim_{z\to 0}\left|z\cdot e^{\frac{z-1}{z^2}}\right|=\lim_{z\to0}\left|z\cdot e^{\frac{-1}{z^2}}\right|$$ What happened to that $\,z\,$ in the exponential's power?
Math induction ($n^2 \leq n!$) help please I'm having trouble with a math induction problem. I've been doing other proofs (summations of the integers etc) but I just can't seem to get my head around this. Q. Prove using induction that $n^2 \leq n!$ So, assume that $P(k)$ is true: $k^2 \leq k!$ Prove that $P(k+1)$ is true: $(k+1)^2 \leq (k+1)!$ I know that $(k+1)! = (k+1)k!$ so: $(k+1)^2 \leq (k+1)k!$ but where can I go from here? Any help would be much appreciated.
The statement you want to prove is that for all $n\geq 4$ it holds that $n^2\leq n!$ (you called this $P(n)$; note the inequality fails for $n=2$ and $n=3$). So let's first prove $P(4)$, i.e. $4^2\leq 4!$, but since $16\leq 24$ this is clear. So let's assume $P(n)$ and prove $P(n+1)$. First note that for $n\geq 2$ it holds that $$ 0\leq (n-1)^2+(n-2)=n^2-2n+1+n-2=n^2-n-1 $$ which is equivalent to $n+1\leq n^2$ which gives $$ (n+1)^2=(n+1)(n+1)\leq (n+1)n^2 $$ by induction hypothesis (i.e. $P(n)$) the term $n^2$ in the last expression is less than or equal to $n!$ so we can continue: $$ (n+1)n^2\leq (n+1)n! = (n+1)! $$ which is the statement we wanted to prove. My answer is very extensive and explicit. But maybe you now get a better understanding of what you have to do in general, when you want to prove something by induction.
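A quick machine check of the small cases (an illustration, not a proof) also shows why the base case has to start at $n=4$:

```python
from math import factorial

for n in range(10):
    print(n, n**2 <= factorial(n))
# n = 2 and n = 3 fail (4 > 2 and 9 > 6); everything from n = 4 on holds
```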
Are Trace of product of matrices- distributive/associative? Is $\operatorname{Tr}(X^TAX)-\operatorname{Tr}(X^TBX)$ equal to $\operatorname{Tr}(X^TCX)$, where $C=A-B$ and $A$, $B$, $X$ have real entries and also $A$ and $B$ are p.s.d.
Yes, as $$X^t(A-B)X=X^t(AX-BX)=X^tAX-X^tBX,$$ using associativity and distributivity of product with respect to the addition. The fact that the matrices $A$ and $B$ are p.s.d. is not needed here.
Does a tower of Galois extensions in $\mathbb{C}$ give an overall Galois extension? If $L/K$ and $F/L$ are Galois extensions inside $\mathbb{C}$, must $F/K$ be a Galois extension?
Consider the tower $\mathbb Q\subset\mathbb Q(\sqrt{2})\subset \mathbb Q(\sqrt[4]{2})$: each step is a degree-$2$, hence Galois, extension, but $\mathbb Q(\sqrt[4]{2})/\mathbb Q$ is not Galois since it is not normal (it misses the complex roots of $x^4-2$). You have to enlarge $\mathbb Q(\sqrt[4]{2})$ to $\mathbb Q(\sqrt[4]{2},i)$ in order to get a Galois extension of $\mathbb Q$.
generalized MRRW bound on the asymptotic rate of q-ary codes Among the many upper bounds for families of codes in $\mathbb F _2 ^n$, the best known bound is the one by McEliece, Rodemich, Rumsey and Welch (MRRW) which states that the rate $R(\delta)$ corresponding to a relative distance of $\delta$ is such that: \begin{equation*}R(\delta) \leq H_2(\frac{1}{2}-\sqrt{\delta(1-\delta)}) \end{equation*} where H is the (binary) entropy function. (A slight improvement of the above exists in the binary case, but within the same framework) In the case of q-ary codes, i.e. codes over $\mathbb F _q ^n$, the above bound is generalized to: \begin{equation*}R(\delta) \leq H_q(\frac{1}{q}(q-1-(q-2)\delta-2\sqrt{(q-1)\delta(1-\delta)})) \end{equation*} My question is as follows: For larger alphabet size q, the above bound seems to weaken significantly. In fact, observing the growth of the above bound as $q \rightarrow \infty$ (using simple approximations of entropy), we see that: \begin{equation*} R(\delta) \leq 1-\delta+\mathcal{O}(\frac{1}{\log{q}}) \end{equation*} Thus, it seems to get worse than even the Singleton bound $R(\delta) \leq 1-\delta$. So which is the best bound for large alphabet size $q$? Or am I wrong in the above conclusion (most sources claim the MRRW bound stated above is the best known bound, but its not clear if that holds for larger q as well). Also, could someone direct me to references for comparisons of different bounds for larger $q$? I am able to find reliable comparisons only for $q=2$.
The source is formula (3) on page 86 in the article Aaltonen, Matti J.: Linear programming bounds for tree codes. IEEE Transactions on Information Theory 25.1 (1979), 85–90, doi: 10.1109/tit.1979.1056004. According to the article, there is the additional requirement $0 < \delta < 1 - \frac{1}{q}$. Instead of comparing to the asymptotic Singleton bound, one can (more ambitiously) compare to an improvement, the asymptotic Plotkin bound $$R(\delta) \leq 1 - \frac{q}{q-1}\,\delta$$ for $0 \leq \delta \leq 1 - \frac{1}{q}$. In Aaltonen's paper, the formula is followed by some discussion into this direction. There I find "However, for large $q$ the Plotkin bound remains the best upper bound." So apparently this "generalized MRRW bound" is not strong for large $q$, as you say. As a side note, for $q\to \infty$, the function $R(\delta)$ converges to the asymptotic Singleton bound $1 - \delta$.
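A quick numeric comparison (a sketch, with $H_q$ the usual $q$-ary entropy function) illustrates the point: for $q=2$ the MRRW-type bound beats Plotkin, while for a large alphabet such as $q=64$ the Plotkin bound is already the smaller, hence stronger, one.

```python
import math

def Hq(x, q):
    # q-ary entropy function, for 0 < x < 1
    return (x * math.log(q - 1, q)
            - x * math.log(x, q)
            - (1 - x) * math.log(1 - x, q))

def mrrw_q(delta, q):
    # The generalized MRRW expression quoted in the question
    inner = (q - 1 - (q - 2) * delta
             - 2 * math.sqrt((q - 1) * delta * (1 - delta))) / q
    return Hq(inner, q)

def plotkin(delta, q):
    # Asymptotic Plotkin bound
    return 1 - q * delta / (q - 1)

print(mrrw_q(0.3, 2), plotkin(0.3, 2))    # MRRW smaller (stronger) for q = 2
print(mrrw_q(0.5, 64), plotkin(0.5, 64))  # Plotkin smaller (stronger) for q = 64
```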
Interior Set of Rationals. Confused! Can someone explain to me why the interior of rationals is empty? That is $\text{int}(\mathbb{Q}) = \emptyset$? The definition of an interior point is "A point $q$ is an interior point of $E$ if there exists a ball at $q$ such that the ball is contained in $E$" and the interior set is the collection of all interior points. So if I were to take $q = \frac{1}{2}$, then clearly $q$ is an interior point of $\mathbb{Q}$, since I can draw a ball of radius $1$ and it would still be contained in $\mathbb{Q}$. And why can't I just take all the rationals to be the interior? So why can't I have $\text{int}\mathbb{(Q)} = \mathbb{Q}$?
It is easy to show that there are irrational numbers between any two rational numbers. Let $q_1 < q_2$ be rational numbers and choose a positive integer $m$ such that $m(q_2-q_1)>2$. Then the irrational number $m q_1+\sqrt{2}$ belongs to the interval $(mq_1, mq_2)$, and so the irrational number $q_1 + \frac{\sqrt{2}}{m}$ belongs to $(q_1,q_2)$. With this in mind, there are irrational numbers in any neighbourhood of a rational number, which implies that $\textrm{int}(\mathbb{Q}) = \emptyset$.
Is the difference of the natural logarithms of two integers always irrational or 0? If I have two integers $a,b > 1$. Is $\ln(a) - \ln(b)$ always either irrational or $0$. I know both $\ln(a)$ and $\ln(b)$ are irrational.
If $\log(a)-\log(b)$ is rational, then $\log(a)-\log(b)=p/q$ for some integers $p$ and $q$, hence $\mathrm e^p=r$ where $r=(a/b)^q$ is rational. If $p\ne0$, then $\mathrm e=r^{1/p}$ is algebraic since $\mathrm e$ solves $x^p-r=0$. This is absurd hence $p=0$, and $a=b$.
Homology of pair (A,A) Why is the homology of the pair (A,A) zero? $$H_n(A,A)=0, n\geq0$$ To me it looks like the homology of a point so at least for $n=0$ it should not be zero. How do we see this?
Let us consider the identity map $i : A \to A$. This is a homeomorphism and so induces an isomorphism on homology. Now consider the long exact sequence of the pair $(A,A)$: We get $$\ldots \longrightarrow H_n(A)\stackrel{\cong}{\longrightarrow} H_n(A) \stackrel{f}{\longrightarrow} H_n(A,A) \stackrel{g}{\longrightarrow} H_{n-1}(A) \stackrel{\cong}{\longrightarrow} H_{n-1}(A) \longrightarrow \ldots $$ Now this tells you that $f$ must be the zero map because its kernel is the whole of the homology group. So the image of $f$ is zero. However this would mean that the kernel of $g$ is zero. But then because of the isomorphism on the right we have that the image of $g$ is zero. The only way for the kernel of $g$ to be zero at the same time as the image being zero is if $$H_n(A,A) = 0.$$ Another way is probably to go straight from the definition: By definition the relative singular chain groups are $C_n(A)/C_n(A) = 0$. Now when you take the homology of this chain complex it is obvious that you get zero because you are taking the homology of the chain complex $$\rightarrow C_n(A)/C_n(A) \rightarrow C_{n-1}(A)/C_{n-1}(A) \rightarrow \ldots$$
How many ways to reach $1$ from $n$ by doing $/13$ or $-7$? How many ways to reach $1$ from $n$ by doing $/13$ or $-7$ ? (i.e., where $n$ is the starting value (positive integer) and $/13$ means division by $13$ and $-7$ means subtracting 7)? Let the number of ways be $f(n)$. Example $n = 20$ , then $f(n) = 1$ since $13$ is not a divisor of $20$ , we start $20-7=13$ , then $13/13 = 1$. (edit) : Let $g(n)$ be the number of steps needed.(edit) We can show easily that $f(13n) = f(n) + f(13n-7).$ Or if $n$ is not a multiple of $13$ then $f(n) = f(n-7).$ (edit): I believe that $g(13n) = g(n) + g(13n-7) + 2.$ Or if $n$ is not a multiple of $13$ then $g(n) = g(n-7) + 1.$ Although this might require negative values. ( with thanks to Ross for pointing out the error )(edit) Those equations look simple and familiar , somewhat like functional equations for logaritms , partition functions , fibonacci sequence and even collatz like or a functional equation I posted here before. I consider modular aritmetic such as mod $13^2$ and such but with no succes sofar. How to solve this ? Does it have a nice generating function ? Is there a name for the generalization of this problem ? Because it seems very typical number theory. Is this related to q-analogs ?
Some more thoughts to help: As $13 \equiv -1 \pmod 7$, you can only get to $1$ for numbers that are $\equiv \pm 1 \pmod 7$. You can handle $1$ and $13$, so you can handle all natural numbers $k \equiv \pm 1 \pmod 7$ except $6$. Also because $13 \equiv -1 \pmod 7$, for $k \equiv -1 \pmod 7$ you have to divide an odd number of times. For $k \equiv 1 \pmod 7$ you have to divide an even number of times. From your starting number, you have to subtract $7$'s until you get to a multiple of $13$. Then if you subtract $7$, you have to do $13$ of them, subtracting $91$. In a sense, the operations commute, as you can subtract $91$, then divide by $13$ or divide by $13$ and then subtract $7$ to get the same place. Numbers of the form $13+91k$ have $k+1$ routes to $1$ (as long as they are less than $13^3$).
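A brute-force recursion (a sketch, assuming intermediate values must stay positive integers) confirms these observations, e.g. $f(20)=1$ from the example and the $k+1$ routes for numbers of the form $13+91k$:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def f(n):
    """Number of ways to reach 1 from n using n -> n - 7 and n -> n / 13."""
    if n == 1:
        return 1
    ways = 0
    if n % 13 == 0:
        ways += f(n // 13)
    if n - 7 >= 1:
        ways += f(n - 7)
    return ways

print(f(20))                            # 1  (20 -> 13 -> 1, as in the example)
print(f(6))                             # 0  (6 is stuck)
print([f(13 + 91 * k) for k in range(3)])  # [1, 2, 3]: k+1 routes each
```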
Constructing a triangle given three concurrent cevians? Well, I've been taught how to construct triangles given the $3$ sides, the $3$ angles and etc. This question came up and the first thing I wondered was if the three altitudes (medians, concurrent$^\text {any}$ cevians in general) of a triangle are unique for a particular triangle. I took a wild guess and I assumed Yes! Now assuming my guess is correct, I have the following questions How can I construct a triangle given all three altitudes, medians or/and any three concurrent cevians, if it is possible? N.B: If one looks closely at the order in which I wrote the question(the cevians coming last), the altitudes and the medians are special cases of concurrent cevians with properties * *The altitudes form an angle of $90°$ with the sides each of them touch. *The medians bisect the side each of them touch. With these properties it will be easier to construct the equivalent triangles (which I still don't know how) but with just any concurrent cevians, what other unique property can be added to make the construction possible? For example the angle they make with the sides they touch ($90°$ in the case of altitudes) or the ratio in which they divide the sides they touch ($1:1$ in the case of medians) or any other property for that matter. EDIT What André has shown below is a perfect example of three concurrent cevians forming two different triangles, thus given the lengths of three concurrent cevians, these cevians don't necessarily define a unique triangle. But also note that the altitudes of the equilateral triangle he defined are perpendicular to opposite sides while for the isosceles triangle, the altitude is, obviously also perpendicular to the opposite side with the remaining two cevians form approximately an angle of $50°$ with each opposite sides. 
Also note that the altitudes of the equilateral triangle bisects the opposite sides and the altitude of the isosceles triangle bisects its opposite sides while the remaining two cevians divides the opposite sides, each in the ratio $1:8$. Now, given these additional properties (like the ratio of "bisection" or the angle formed with opposite sides) of these cevians, do they form a unique triangle (I'm assuming yes on this one) and if yes, how can one construct that unique triangle with a pair of compasses, a ruler and a protractor?
It is clear that the lengths of concurrent cevians cannot always determine the triangle. Indeed, they probably never can. But if it is clear, we must be able to give an explicit example. Cevians $1$: Draw an equilateral triangle with height $1$. Pick as your cevians the altitudes. Cevians $2$: Draw an isosceles triangle $ABC$ such that $AB=AC$, and $BC=\dfrac{10}{12}$, and the height of the triangle with respect to $A$ is equal to $1$. Then $AB=AC=\sqrt{1+(5/12)^2}=\dfrac{13}{12}$. There are (unique) points $X$ and $Y$ on $AB$ and $AC$ respectively such that $BY=CX=1$. This is because as a point $T$ travels from $B$ to $A$ along $BA$, the length of $CT$ increases steadily from $\dfrac{10}{12}$ to $\dfrac{13}{12}$, so must be equal to $1$ somewhere between $B$ and $A$. Let $X$ be this value of $T$. Let one of our cevians be the altitude from $A$, and let $BY$ and $CX$ be the other two cevians. These three cevians are concurrent because of the symmetry about the altitude from $A$, and they all have length $1$.
Logic for getting number of pages If a page can have 27 items printed on it and number of items can be any positive number then how can I find number of pages if I have number of items, I tried Modulus and division but didn't helped. FYI, I am using C# as programming platform.
If I understand the question correctly, isn't the answer just the number of total items divided by $27$ and then rounded up? If you had $54$ total items, $54/27=2$ pages, which doesn't need to round. If you had $100$ total items, $100/27=3.7$ which rounds up to $4$ pages. If you had 115 total items, $115/27=4.26$ which rounds up to $5$ pages.
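The rounding-up step is just ceiling division. A quick sketch (in Python here; the same idea in C#, which the asker mentions, is integer `(items + 26) / 27`):

```python
def pages(items, per_page=27):
    # Ceiling division without floating point: equivalent to ceil(items / per_page)
    return -(-items // per_page)

print(pages(54), pages(100), pages(115))  # 2 4 5
```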
Squeeze Theorem Problem I'm busy studying for my Calculus A exam tomorrow and I've come across quite a tough question. I know I shouldn't post such localized questions, so if you don't want to answer, you can just push me in the right direction. I had to use the squeeze theorem to determine: $$\lim_{x\to\infty} \dfrac{\sin(x^2)}{x^3}$$ This was easy enough and I got the limit to equal 0. Now the second part of that question was to use that to determine: $$\lim_{x\to\infty} \dfrac{2x^3 + \sin(x^2)}{1 + x^3}$$ Obvously I can see that I'm going to have to sub in the answer I got from the first limit into this equation, but I can't seem to figure how how to do it. Any help would really be appreciated! Thanks in advance!
I assume you meant $$\lim_{x \to \infty} \dfrac{2x^3 + \sin(x^2)}{1+x^3}$$ Note that $-1 \leq \sin(\theta) \leq 1$. Hence, we have that $$\dfrac{2x^3 - 1}{1+x^3} \leq \dfrac{2x^3 + \sin(x^2)}{1+x^3} \leq \dfrac{2x^3 + 1}{1+x^3}$$ Note that $$\dfrac{2x^3 - 1}{1+x^3} = \dfrac{2x^3 +2 -3}{1+x^3} = 2 - \dfrac3{1+x^3}$$ $$\dfrac{2x^3 + 1}{1+x^3} = \dfrac{2x^3 + 2 - 1}{1+x^3} = 2 - \dfrac1{1+x^3}$$ Hence, $$2 - \dfrac3{1+x^3} \leq \dfrac{2x^3 + \sin(x^2)}{1+x^3} \leq 2 - \dfrac1{1+x^3}$$ Can you now find the limit? EDIT If you want to make use of the fact that $\lim_{x \to \infty} \dfrac{\sin(x^2)}{x^3} = 0$, divide the numerator and denominator of $\dfrac{2x^3 + \sin(x^2)}{1+x^3}$ by $x^3$ to get $$\dfrac{2x^3 + \sin(x^2)}{1+x^3} = \dfrac{2 + \dfrac{\sin(x^2)}{x^3}}{1 + \dfrac1{x^3}}$$ Now make use of the fact that $\lim_{x \to \infty} \dfrac{\sin(x^2)}{x^3} = 0$ and $\lim_{x \to \infty} \dfrac1{x^3} = 0$ to get your answer.
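Numerically, the function does sit between the two bounds and all three approach $2$; a quick illustration (not a proof):

```python
import math

def f(x):
    return (2 * x**3 + math.sin(x**2)) / (1 + x**3)

for x in [10.0, 100.0, 1000.0]:
    lower = 2 - 3 / (1 + x**3)
    upper = 2 - 1 / (1 + x**3)
    print(lower <= f(x) <= upper, round(f(x), 6))
# all True, and f(x) -> 2
```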
Derivative of a split function We have the function: $$f(x) = \frac{x^2\sqrt[4]{x^3}}{x^3+2}.$$ I rewrote it as $$f(x) = \frac{x^2{x^{3/4}}}{x^3+2}.$$ After a while of differentiating I get the final answer: $$f(x)= \frac{- {\sqrt[4]{\left(\frac{1}{4}\right)^{19}} + \sqrt[4]{5.5^7}}}{(x^3+2)^2}$$(The minus isn't behind the four) But my answer sheet gives a different answer, but they also show a wrong calculation, so I don't know what is the right answer, can you help me with this?
Let $y=\frac{x^2\cdot x^{3/4}}{x^3+2}$ so $y=\frac{x^{11/4}}{x^3+2}$ and therefore $y=x^{11/4}\times(x^3+2)^{-1}$. Now use the product rule of two functions: $$(f\cdot g)'=f'\cdot g+f\cdot g'$$ Here $f(x)=x^{11/4}$ and $g(x)=(x^3+2)^{-1}$. So $f'(x)=\frac{11}{4}x^{7/4}$ and $g'(x)=(-1)(3x^2)(x^3+2)^{-2}$. But looking at the answer you give in the body of the question, I cannot see where it came from.
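A finite-difference check of the product-rule result (a sketch, valid for $x>0$) confirms the derivative computed above:

```python
def y(x):
    return x ** (11 / 4) / (x ** 3 + 2)

def dy(x):
    # Product rule: f'(x)g(x) + f(x)g'(x) with f = x^{11/4}, g = (x^3+2)^{-1}
    f, g = x ** (11 / 4), 1 / (x ** 3 + 2)
    df = (11 / 4) * x ** (7 / 4)
    dg = -3 * x ** 2 / (x ** 3 + 2) ** 2
    return df * g + f * dg

# Central difference as an independent check
h, x = 1e-6, 2.0
numeric = (y(x + h) - y(x - h)) / (2 * h)
print(abs(numeric - dy(x)) < 1e-6)  # True
```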
Normal subgroups of the Special Linear Group What are some normal subgroups of SL$(2, \mathbb{R})$? I tried to check SO$(2, \mathbb{R})$, UT$(2, \mathbb{R})$, linear algebraic group and some scalar and diagonal matrices, but still couldn't come up with any. So can anyone give me an idea to continue on, please?
${\rm{SL}}_2(\mathbb{R})$ is a simple Lie group, so it has no nontrivial proper connected normal subgroups. Its only nontrivial proper normal subgroup is $\{I,-I\}$, its center.
Almost A Vector Bundle I'm trying to get some intuition for vector bundles. Does anyone have good examples of constructions which are not vector bundles for some nontrivial reason. Ideally I want to test myself by seeing some difficult/pathological spaces where my naive intuition fails me! Apologies if this isn't a particularly well-defined question - hopefully it's clear enough to solicit some useful responses!
Fix $ B = (-1,1) $ to be the base space, and to each point $ b $ of $ B $, attach the vector-space fiber $ \mathcal{F}_{b} \stackrel{\text{def}}{=} \{ b \} \times \mathbb{R} $. We thus obtain a trivial $ 1 $-dimensional vector bundle over $ B $, namely $ B \times \mathbb{R} $. Next, define a fiber-preserving vector-bundle map $ \phi: B \times \mathbb{R} \rightarrow B \times \mathbb{R} $ as follows: $$ \forall (b,r) \in B \times \mathbb{R}: \quad \phi(b,r) \stackrel{\text{def}}{=} (b,br). $$ We now consider the kernel $ \ker(\phi) $ of $ \phi $. For each $ b \in B $, let $ \phi_{b}: \mathcal{F}_{b} \rightarrow \mathcal{F}_{b} $ denote the restriction of $ \phi $ to the fiber $ \mathcal{F}_{b} $. Then $ \ker(\phi_{b}) $ is $ 0 $-dimensional for all $ b \in (-1,1) \setminus \{ 0 \} $ but is $ 1 $-dimensional for $ b = 0 $. Hence, $ \ker(\phi) $ does not have a local trivialization at $ b = 0 $, which means that it is not a vector bundle. In general, if $ f: \xi \rightarrow \eta $ is a map between vector bundles $ \xi $ and $ \eta $, then $ \ker(f) $ is a sub-bundle of $ \xi $ if and only if the dimensions of the fibers of $ \ker(f) $ are locally constant. It is also true that $ \text{im}(f) $ is a sub-bundle of $ \eta $ if and only if the dimensions of the fibers of $ \text{im}(f) $ are locally constant. The moral of the story is that although something may look like a vector bundle by virtue of having a vector space attached to each point of the base space, it may fail to be a vector bundle in the end because the local trivialization property is not satisfied at some point. You want the dimensions of the fibers to stay locally constant; you do not want them to jump. Richard G. Swan has a beautiful paper entitled Vector Bundles and Projective Modules (Transactions of the A.M.S., Vol. 105, No. 2, Nov. 1962) that contains results that might be of interest to you.
for $\nu$ a probability measure on $(\mathbb{R}, \mathcal{B}(\mathbb{R}))$ the set ${x\in \mathbb{R} ; \nu(x) > 0}$ is at most countable Given a probability measure $\nu$ on $(\mathbb{R}, \mathcal{B}(\mathbb{R}))$, how do I show that the set (call it $S$) of all $x\in \mathbb{R}$ where $\nu(x)>0$ holds is at most countable? I thought about utilizing countable additivity of measures and the fact that we have $\nu(A) < 1$ for all countable subsets $A\subset S$. How do I conclude rigorously?
Given $n\in\mathbb N$, consider the set $$A_n=\{x\in\mathbb R:\nu(\{x\})\geq\tfrac{1}{n}\}$$ It must be finite; otherwise, the probability of $A_n$ would be infinite since $\nu$ is additive. Thus, $A=\cup_{n\in\mathbb N}A_n$ is countable as a countable union of finite sets, but it is clear that $$A=\{x\in\mathbb R:\nu(\{x\})>0\}$$ so you are done.
Are these two predicate statements equivalent or not? $\exists x \forall y P(x,y) \equiv \forall y \exists x P(x,y)$ I was told they were not, but I don't see how it can be true.
Take the more concrete statements: $\forall {n \in \mathbb{N}}\ \exists {m \in \mathbb{N}}: m > n$ versus $\exists {m \in \mathbb{N}}\ \forall {n \in \mathbb{N}}: m > n$. Now the first statement reads: For every natural number there's a bigger one. The second statement reads: There's a biggest natural number. Quantifiers of different type don't commute. However, quantifiers of the same type do: $\exists x \ \exists y \equiv \exists y \ \exists x$ and $\forall x \ \forall y \equiv \forall y \ \forall x$.
The dual of a finitely generated module over a noetherian integral domain is reflexive. As posted by user26857 in this question the dual of a finitely generated module over a noetherian integral domain is reflexive. Could you tell me how to prove it?
I think I have a proof. I will use this theorem: Let $A$ be a Noetherian ring and $M$ a finitely generated $A$-module. Then $M^*$ is reflexive if and only if $M^*_P$ is reflexive for every $P$ such that $\mathrm{depth}\;A_P=0$; in short, a dual is reflexive if and only if it is reflexive in depth $0$. If the ring is a domain then the only prime of depth $0$ is the zero ideal. If $M$ is finitely generated, then $M^*_{(0)}$ is a finite dimensional vector space over the ring of fractions and so it is reflexive.
Hatcher: Barycentric Subdivision At the bottom of page 122 of Hatcher, he defines a map $S:C_{n}(X)\rightarrow C_{n}(X)$ by $S\sigma=\sigma_{\#}S\Delta^{n}$. What is the $S$ on the right hand side and how does it act on the simplex $\Delta$? I'm having trouble deconstructing the notation here.
It's the $S$ he defines in (2) of his proof: barycentric subdivision of linear chains. I think you will benefit from reading the following supplementary notes. update: here's the idea: first he deals with linear chains so he knows how to define the map $S: LC_n(\Delta^n) \to LC_n(\Delta^n)$, i.e. he knows how to find the barycentric subdivision of the standard $n$-simplex (thought of as the identity map onto itself). Then he uses the map $\sigma_\#$ composed with the subdivided standard $n$-simplex to define a barycentric subdivision of singular $n$-chains.
Find the smallest positive integer satisfying 3 simultaneous congruences $x \equiv 2 \pmod 3$ $2x \equiv 4 \pmod 7$ $x \equiv 9 \pmod {11}$ What is troubling me is the $2x$. I know of an algorithm (not sure what it's called) that would work if all three equations were $x \equiv$ by using euclidean algorithm back substitution three times but since this one has a $2x$ it won't work. Would it be possible to convert it to $x\equiv$ somehow? Could you divide through by $2$ to get $x \equiv 2 \pmod{\frac{7}{2}}$ Though even if you could I suspect this wouldn't really help.. What would be the best way to do this?
To supplement the other answers, if your equation was $$ 2x \equiv 4 \pmod 8 $$ then the idea you guessed is actually right: this equation is equivalent to $$ x \equiv 2 \pmod 4 $$ More generally, the equations $$ a \equiv b \pmod c $$ and $$ ad \equiv bd \pmod {cd}$$ are equivalent. Thus, if both sides of the equation and the modulus share a common factor, you can cancel it out without losing any solutions or introducing spurious ones. However, this only works with a common factor.
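For the original system, note that $\gcd(2,7)=1$, so $2x\equiv4\pmod 7$ can be reduced to $x\equiv2\pmod 7$ by multiplying by the inverse of $2$ modulo $7$ (the modulus itself stays $7$ — dividing it, as guessed in the question, is not valid). A brute-force sketch over one full period $3\cdot7\cdot11$ confirms the smallest solution:

```python
def smallest():
    # By CRT there is exactly one solution modulo 3*7*11 = 231
    for x in range(1, 3 * 7 * 11 + 1):
        if x % 3 == 2 and (2 * x) % 7 == 4 and x % 11 == 9:
            return x

print(smallest())  # 86
```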
An example for a calculation where imaginary numbers are used but don't occur in the question or the solution. In a presentation I will have to give an account of Hilbert's concept of real and ideal mathematics. Hilbert wrote in his treatise "Über das Unendliche" (page 14, second paragraph. Here is an English version - look for the paragraph starting with "Let us remember that we are mathematicians") that this concept can be compared with (some of) the use(s) of imaginary numbers. He probably thought of a calculation where the setting and the final solution have nothing to do with imaginary numbers but where there is an easy proof using imaginary numbers. I remember once seeing such an example but cannot find one, so: Does anyone know about a good and easily explicable example of this phenomenon? ("Easily" means that engineers and biologists can also understand it well.)
The canonical example seems to be Cardano's solution of the cubic equation, which requires non-real numbers in some cases even when all the roots are real. The mathematics is not as hard as you might think; and as an added benefit, there is a juicy tale to go with it – as the solution was really due to Scipione del Ferro and Tartaglia. Here is a writeup, based on some notes I made a year and a half ago: First, the general cubic equation $x^3+ax^2+bx+c=0$ can be transformed into the form $$ x^3-3px+2q=0 $$ by a simple substitution of $x-a/3$ for $x$. We may as well assume $pq\ne0$, since otherwise the equation is trivial to solve. So we substitute in $$x=u+v$$ and get the equation into the form $$ u^3+v^3+3(uv-p)(u+v)+2q=0. $$ Now we add the extra equation $$ uv=p $$ so that $u^3+v^3+2q=0$. Substituting $v=p/u$ in this equation, then multiplying by $u^3$, we arrive at $$ u^6+2qu^3+p^3=0, $$ which is a quadratic equation in $u^3$. Noticing that interchanging the two roots of this equation corresponds to interchanging $u$ and $v$, which does not change $x$, we pick one of the two solutions, and get: $$ u^3=-q+\sqrt{q^2-p^3}, $$ with the resulting solution $$ x=u+p/u. $$ The three different cube roots $u$ will of course yield the three solutions $x$ of the original equation. Real coefficients In the case when $u^3$ is not real, that is when $q^2<p^3$, we could write instead $$ u^3=-q+i\sqrt{p^3-q^2}, $$ and we note that in this case $\lvert u\rvert=\sqrt{p}$, so that in fact $x=u+\bar u=2\operatorname{Re} u$. In other words, all the roots are real. In fact the two extrema of $x^3-3px+2q$ are at $x=\pm\sqrt{p}$, and the values of the polynomial at these two points are $2(q\mp p^{3/2})$. The product of these two values is $4(q^2-p^3)<0$, which is another way to see that there are indeed three real zeros.
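To make the "real roots via complex numbers" point concrete, here is a numerical sketch of my own (not part of the original answer), taking $p=1$, $q=1/2$, i.e. the cubic $x^3-3x+1=0$. Here $q^2<p^3$, so $u^3$ is non-real, yet all three values $x=u+p/u$ come out real.

```python
import cmath

p, q = 1.0, 0.5                        # the cubic x^3 - 3px + 2q = x^3 - 3x + 1
u3 = complex(-q, (p ** 3 - q * q) ** 0.5)   # u^3 = -q + i*sqrt(p^3 - q^2)

r, theta = abs(u3) ** (1 / 3), cmath.phase(u3)
roots = []
for k in range(3):                     # the three cube roots of u^3
    u = r * cmath.exp(1j * (theta + 2 * cmath.pi * k) / 3)
    roots.append(u + p / u)            # x = u + p/u
```

The imaginary parts of the computed roots are only floating-point noise, and each root satisfies the original cubic.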
Adding sine waves of different phase, sin(π/2) + sin(3π/2)? Adding sine waves of different phase, what is $\sin(\pi/2) + \sin(3\pi/2)$? Please could someone explain this. Thanks.
Here's the plot for $\sin(L)$ where $L$ goes from $(0, \pi/2)$. Here's the plot for $\sin(L) + \sin(3L)$ where $L$ goes from $(0, \pi/2)$. I hope this distinction is useful to you. This was done in Mathematica.
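For the literal sum in the title, the values can be evaluated directly: $\sin(\pi/2)=1$ and $\sin(3\pi/2)=-1$, so the sum is $0$. A one-line check (mine, not from the answer):

```python
import math

# sin(pi/2) = 1 and sin(3*pi/2) = -1, so the two terms cancel.
total = math.sin(math.pi / 2) + math.sin(3 * math.pi / 2)
```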
Showing that $1/x$ is NOT Lebesgue Integrable on $(0,1]$ I aim to show that $\int_{(0,1]} 1/x = \infty$. My original idea was to find a sequence of simple functions $\{ \phi_n \}$ s.t $\lim\limits_{n \rightarrow \infty}\int \phi_n = \infty$. Here is a failed attempt at finding such a sequence of $\phi_n$: (1) Let $A_k = \{x \in (0,1] : 1/x \ge k \}$ for $k \in \mathbb{N}$. (2) Let $\phi_n = n \cdot \chi_{A_n}$ (3) $\int \phi_n = n \cdot m(A_n) = n \cdot 1/n = 1$ Any advice from here on this approach or another?
I think this may be the same as what Davide Giraudo wrote, but this way of saying it seems simpler. Let $\lfloor w\rfloor$ be the greatest integer less than or equal to $w$. Then the function $$x\mapsto \begin{cases} \lfloor 1/x\rfloor & \text{if } \lfloor 1/x\rfloor\le n \\[8pt] n & \text{otherwise} \end{cases}$$ is simple. It is $\le 1/x$ and its integral over $(0,1]$ approaches $\infty$ as $n\to\infty$.
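The integral of this simple function can be computed in closed form: $\lfloor 1/x\rfloor = k$ exactly on $(1/(k+1),\,1/k]$, which has measure $1/k-1/(k+1)$, and the sum telescopes to the $n$-th harmonic number $H_n$, which diverges. A small numerical sketch of mine:

```python
def simple_integral(n):
    """Integral over (0,1] of the simple function from the answer.

    It is n * m({1/x >= n}) plus the sum over k < n of
    k * m({floor(1/x) = k}); the total simplifies to the harmonic number H_n.
    """
    total = n * (1.0 / n)  # the flat part: value n on (0, 1/n]
    for k in range(1, n):
        total += k * (1.0 / k - 1.0 / (k + 1))
    return total
```

Since $H_n \to \infty$ like $\log n$, this gives a sequence of simple functions below $1/x$ with unbounded integrals, which is exactly what the question asks for.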
Using induction to prove $3$ divides $\left \lfloor\left(\frac {7+\sqrt {37}}{2}\right)^n \right\rfloor$ How can I use induction to prove that $$\left \lfloor\left(\cfrac {7+\sqrt {37}}{2}\right)^n \right\rfloor$$ is divisible by $3$ for every natural number $n$?
Consider the recurrence given by $$x_{n+1} = 7x_n - 3 x_{n-1}$$ where $x_0 = 2$, $x_1 = 7$. Note that $$x_{n+1} \equiv (7x_n - 3 x_{n-1}) \pmod{3} \equiv (x_n + 3(2x_n - x_{n-1})) \pmod{3} \equiv x_n \pmod{3}$$ Since $x_1 \equiv 1 \pmod{3}$, we have that $x_n \equiv 1 \pmod{3}$ for all $n \geq 1$. The solution to this recurrence is given by $$x_n = \left( \dfrac{7+\sqrt{37}}{2} \right)^n + \left( \dfrac{7-\sqrt{37}}{2}\right)^n \equiv 1 \pmod{3}$$ Further, $0 < \dfrac{7-\sqrt{37}}{2} < 1$, so $0 < \left(\dfrac{7-\sqrt{37}}{2}\right)^n < 1$. This means $$3M < \left( \dfrac{7+\sqrt{37}}{2} \right)^n < 3M+1$$ where $M = (x_n - 1)/3 \in \mathbb{Z}$. Hence, we have that $$3 \text{ divides }\left \lfloor \left (\dfrac{7+\sqrt{37}}{2}\right)^n \right \rfloor$$
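A quick sanity check of this argument (my own sketch): computing $x_n$ exactly via the integer recurrence $x_{n+1}=7x_n-3x_{n-1}$, the floor of $\left(\frac{7+\sqrt{37}}{2}\right)^n$ equals $x_n-1$, which is divisible by $3$.

```python
import math

alpha = (7 + math.sqrt(37)) / 2       # larger root of t^2 - 7t + 3 = 0

floors = []
x_prev, x = 2, 7                      # x_0 = 2, x_1 = 7
for n in range(1, 9):
    floors.append(x - 1)              # alpha^n = x_n - beta^n with 0 < beta^n < 1
    x_prev, x = x, 7 * x - 3 * x_prev

# Cross-check against floating-point floor for these small n.
matches_float = all(f == math.floor(alpha ** (i + 1))
                    for i, f in enumerate(floors))
```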
Integration with infinity and exponential How is $$\lim_{T\to\infty}\frac{1}T\int_{-T/2}^{T/2}e^{-2at}dt=\infty\;?$$ However, my answer comes out to be zero, because putting the limit into the expression we get: $$\frac1\infty\left(-\frac1{2a}\right) [e^{-\infty} - e^\infty]$$ which results in zero. I think I am doing something wrong. So how can I get the answer equal to $\infty$? Regards
$I=\int_{-T/2}^{T/2}e^{-2at}dt=\left.-\frac{1}{2a}e^{-2at}\right|_{-T/2}^{T/2}=\frac{e^{aT}-e^{-aT}}{2a}=\frac{\sinh(aT)}{a}$ Whether $a$ is positive or negative, one of the two exponentials tends to zero in the limit, so we can rewrite it as: $\lim_{T\rightarrow\infty}\frac{I}{T}=\lim_{T\rightarrow\infty}\frac{\sinh(aT)}{aT}=\lim_{T\rightarrow\infty}\frac{e^{\left|a\right|T}}{2\left|a\right|T}$ But because exponential functions grow much faster than $T$, this limit is always infinite. Thus, for every $a\neq 0$: $\lim_{T\rightarrow\infty}\frac{I}{T}=+\infty$
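A numerical sketch of mine showing why the average blows up for either sign of $a$: the ratio $\sinh(aT)/(aT)$ is even in $a$ and grows without bound as $T\to\infty$.

```python
import math

def average(a, T):
    """(1/T) * integral of exp(-2*a*t) dt over [-T/2, T/2] = sinh(a*T)/(a*T)."""
    return math.sinh(a * T) / (a * T)
```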
The positive integer solutions for $2^a+3^b=5^c$ What are the positive integer solutions to the equation $$2^a + 3^b = 5^c$$ Of course $(1,\space 1, \space 1)$ is a solution.
There are three solutions which can all be found by elementary means. If $b$ is odd $$2^a+3\equiv 1 \bmod 4$$ Therefore $a=1$ and $b$ is odd. If $b>1$, then $2\equiv 5^c \bmod 9$ and $c\equiv 5 \bmod 6$ Therefore $2+3^b\equiv 5^c\equiv3 \bmod 7$ and $b\equiv 0 \bmod 6$, a contradiction. The only solution is $(a,b,c)=(1,1,1)$. If $b$ is even, $c$ is odd $$2^a+1\equiv 5 \bmod 8$$ Therefore $a=2$. If $b\ge 2$, then $4\equiv 5^c \bmod 9$ and $c\equiv 4 \bmod 6$, a contradiction. The only solution is $(a,b,c)=(2,0,1)$. If $b$ and $c$ are even Let $b=2B$ and $c=2C$. Then $$2^a=5^{2C}-3^{2B}=(5^C-3^B)(5^C+3^B)$$ Therefore $5^C-3^B$ is also a (smaller) power of 2. A check of $(B,C)=(0,1)$ and $(1,1)$ yields the third solution $(a,b,c)=(4,2,2)$. $(B,C)=(2,2)$ does not yield a further solution and we are finished.
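The case analysis can be corroborated with a small exhaustive search over nonnegative exponents (a sketch of mine; the search bounds are arbitrary, chosen so the powers stay around $10^9$):

```python
# Find all (a, b, c) with 2^a + 3^b = 5^c in a modest range.
solutions = sorted(
    (a, b, c)
    for a in range(1, 30)
    for b in range(0, 20)
    for c in range(1, 14)
    if 2 ** a + 3 ** b == 5 ** c
)
```

Within these bounds the search finds exactly the three solutions from the proof: $(1,1,1)$, $(2,0,1)$, and $(4,2,2)$.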
Why $0!$ is equal to $1$? Many counting formulas involving factorials can make sense for the case $n= 0$ if we define $0!=1$; e.g., Catalan numbers and the number of trees with a given number of vertices. Now here is my question: If $A$ is an associative and commutative ring, then for every finite subset of the ring we can define its sum and its product, denoted by $+ \left(A\right) $ and $\times \left(A\right)$. While it is intuitive to define $+ \left( \emptyset \right) =0$, why should the product of zero elements be $1$? Does the fact that $0! =1$ have anything to do with $1$ being the multiplicative identity of the integers?
As pointed out in one of the answers to this math.SX question, you can get the Gamma function as an extension of factorials, and then this falls out from it (though this isn't a very combinatorial answer).
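Concretely, the Gamma function satisfies $\Gamma(n+1)=n!$ for nonnegative integers, and $\Gamma(1)=1$ gives $0!=1$. A quick check of mine:

```python
import math

# Gamma extends the factorial: Gamma(n + 1) == n! for n = 0, 1, 2, ...
values = [(math.gamma(n + 1), math.factorial(n)) for n in range(6)]
```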
Signs of rates of change when filling a spherical tank. Water is flowing into a large spherical tank at a constant rate. Let $V\left(t\right)$ be the volume of water in the tank at time $t$, and $h\left(t\right)$ the height of the water level at time $t$. Is $\frac{dV}{dt}$ positive, negative, or zero when the tank is one quarter full? Is $\frac{dh}{dt}$ positive, negative, or zero when the tank is one quarter full? My answer I believe that the rate of change of the volume is increasing till 1/2 full, then decreasing. I believe that the rate of change of the height is decreasing till 1/2 full, then increasing. This is because the widest part of the sphere is where $h = r$ (the radius of the sphere), and if you think of it as many slices of circles upon each other, then the area change would follow the pattern of each slice. Am I correct?
Uncookedfalcon is correct. You're adding water, which means that both the volume and water depth are increasing (that is, $\frac{dV}{dt}$ and $\frac{dh}{dt}$ are positive) until it's full. In fact, $\frac{dV}{dt}$ is constant until it's full.
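To see the signs concretely (my own sketch, with made-up numbers $r=1$, inflow $Q=1$): using the spherical-cap formula $V(h)=\pi h^2(3r-h)/3$, we get $V'(h)=\pi h(2r-h)$, so $dh/dt = Q/\bigl(\pi h(2r-h)\bigr)$, which is positive for all $0<h<2r$ — in particular at the quarter-full depth.

```python
import math

r, Q = 1.0, 1.0                       # radius and constant inflow rate (assumed)

def cap_volume(h):
    # Volume of a spherical cap of depth h in a sphere of radius r.
    return math.pi * h * h * (3 * r - h) / 3

def dh_dt(h):
    # dV/dh = pi * h * (2r - h), so dh/dt = Q / (pi * h * (2r - h)).
    return Q / (math.pi * h * (2 * r - h))

# Find the depth at which the tank is one quarter full, by bisection.
quarter = math.pi * r ** 3 / 3        # (1/4) * (4/3) * pi * r^3
lo, hi = 0.0, 2 * r
for _ in range(60):
    mid = (lo + hi) / 2
    if cap_volume(mid) < quarter:
        lo = mid
    else:
        hi = mid
h_quarter = (lo + hi) / 2
```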
is a non-falling rank of smooth maps an open condition? If $f \colon M \to N$ is a smooth map of smooth manifolds, for any point $p \in M$, is there an open neighbourhood $V$ of $p$ such that $\forall q \in V \colon \mathrm{rnk}_q (f) \geq \mathrm{rnk}_p (f)$?
Yes. Note that if $f$ has rank $k$ at $p$, then $Df(p)$ has rank $k$. Therefore in coordinates, there is a non-vanishing $k \times k$-minor of $Df(p)$. As having a non-vanishing determinant is an open condition, the same minor will not vanish in a whole neighbourhood $V$ of $p$, giving $\operatorname{rank}_q f \ge k$ for all $q \in V$.
Describing the intersection of two planes Consider the plane with general equation 3x+y+z=1 and the plane with vector equation (x, y, z)=s(1, -1, -2) + t(1, -2, -1) where s and t are real numbers. Describe the intersection of these two planes. I started by substituting the parametric equations into the general equation and got 0=9. Does that imply the planes are parallel and hence do not intersect?
As a test to see whether the planes are parallel you can compute the normal vectors $n_1, n_2$ of the two planes. If $$\left|\frac{n_1\cdot n_2}{\left|n_1\right|\,\left|n_2\right|}\right| = 1,$$ the planes are parallel.
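Applied here (my own sketch): a normal to the parametric plane is the cross product of its two direction vectors, and the cosine test then shows the two planes are parallel. Since the parametric plane passes through the origin while $3\cdot 0+0+0\neq 1$, the planes are parallel and distinct, so they do not intersect.

```python
import math

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

n1 = (3, 1, 1)                        # normal of 3x + y + z = 1
n2 = cross((1, -1, -2), (1, -2, -1))  # normal of the parametric plane

cosine = dot(n1, n2) / (math.sqrt(dot(n1, n1)) * math.sqrt(dot(n2, n2)))
```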
Example where $\int |f(x)| dx$ is infinite and $\int |f(x)|^2 dx$ is finite I read in a book that the condition $\int |f(x)|^2 dx <\infty$ is less restrictive than $\int |f(x)| dx <\infty$. That means whenever $\int |f(x)| dx$ is finite, $\int |f(x)|^2 dx$ is also finite, right? My understanding is that $|f(x)|$ may have a thick tail to make the integral blow up, but $|f(x)|^2$ may decay quickly enough to have a finite integral. Can someone give me an example where $\int |f(x)| dx=\infty$ but $\int |f(x)|^2 dx <\infty$. Suppose $f(x)$ is an absolutely continuous function and bounded on $(-\infty, \infty)$.
The most noticeable one I think is the sinc function $$\mathrm{sinc}(x)=\frac{\sin(\pi x)}{\pi x}$$
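A crude numerical check of mine: Riemann sums of $|\mathrm{sinc}|$ over $[1,N]$ keep growing (like $\log N$, since each period contributes roughly $2/(\pi^2 x)$), while those of $\mathrm{sinc}^2$ settle down.

```python
import math

def sinc(x):
    return math.sin(math.pi * x) / (math.pi * x)

def riemann(f, a, b, step=0.01):
    # Midpoint-rule Riemann sum of f over [a, b].
    n = round((b - a) / step)
    return sum(f(a + (i + 0.5) * step) for i in range(n)) * step

l1_100  = riemann(lambda x: abs(sinc(x)), 1, 100)
l1_1000 = riemann(lambda x: abs(sinc(x)), 1, 1000)
l2_100  = riemann(lambda x: sinc(x) ** 2, 1, 100)
l2_1000 = riemann(lambda x: sinc(x) ** 2, 1, 1000)
```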
Which letter is not homeomorphic to the letter $C$? Which letter is not homeomorphic to the letter $C$? I think the letters $O$ and $o$ are not homeomorphic to the letter $C$. Is that correct? Is there any other letter?
There are many others, $E$ or $Q$ for example. The most basic method I know of is to assume a homeomorphism exists; it then restricts to a homeomorphism between the subspaces obtained by removing one (or more) points. The number of connected components of this subspace is a topological invariant, so if the counts differ the letters cannot be homeomorphic.
How do I prove that $S^n$ is homeomorphic to $S^m \Rightarrow m=n$? This is what I have so far: Assume $S^n$ is homeomorphic to $S^m$. Also, assume $m≠n$. So, let $m>n$. From here I am not sure what is implied. Of course in this problem $S^k$ is defined as: $S^k=\lbrace (x_0,x_1,⋯,x_{k+1}):x_0^2+x_1^2+⋯+x_{k+1}^2=1 \rbrace$ with subspace topology.
Hint: look at the topic Invariance of Domain
Trace of $A$ if $A =A^{-1}$ Suppose $I\neq A\neq -I$, where $I$ is the identity matrix and $A$ is a real $2\times2$ matrix. If $A=A^{-1}$ then what is the trace of $A$? I was thinking of writing $A$ as $\begin{pmatrix}a&b\\c&d\end{pmatrix}$ and then using $A=A^{-1}$ to equate the entries, but the equations I get suggest there is an easier way.
From $A=A^{-1}$ you get $A^2=I$, so the minimal polynomial of $A$ divides $x^2-1$, which has distinct roots; hence $A$ is diagonalizable and all the possible eigenvalues are $\pm 1$. If both eigenvalues were $1$ then $A=I$, and if both were $-1$ then $A=-I$; both cases are excluded by hypothesis. So the eigenvalues are $1$ and $-1$, and the trace of $A$ is $0$.
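A concrete example of mine: a real $2\times 2$ matrix with $A=A^{-1}$ and $A\neq\pm I$, whose trace is $0$.

```python
A = [[3, -4],
     [2, -3]]                         # A != I and A != -I

def matmul(X, Y):
    # Plain 2x2 matrix product.
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A_squared = matmul(A, A)              # should be the identity, i.e. A = A^{-1}
trace = A[0][0] + A[1][1]
```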