Calculate $\lim\limits_{x\to a}\frac{a^{a^{x}}-{a^{x^{a}}}}{a^x-x^a}$ Please help me solve $\displaystyle\lim_{x\to a}\frac{a^{a^{x}}-{a^{x^{a}}}}{a^x-x^a}$
The ratio is $R(x)=\dfrac{u(t)-u(s)}{t-s}$ with $u(z)=a^z$, $t=a^x$ and $s=x^a$. When $x\to a$, $t\to a^a$ and $s\to a^a$ hence $R(x)\to u'(a^a)$. Since $u(z)=\exp(z\log a)$, $u'(z)=u(z)\log a$. In particular, $u'(a^a)=u(a^a)\log a$. Since $u(a^a)=a^{a^a}$, $\lim\limits_{x\to a}R(x)=a^{a^a}\log a$.
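As a quick numerical sanity check of this difference-quotient argument (a sketch; the value $a = 1.5$ and the step sizes are arbitrary choices):

```python
import math

# Check lim_{x->a} (a^(a^x) - a^(x^a)) / (a^x - x^a) = a^(a^a) * ln(a)
# by evaluating the ratio at x = a + h for shrinking h.
a = 1.5
expected = a ** (a ** a) * math.log(a)

for h in [1e-3, 1e-5, 1e-7]:
    x = a + h
    ratio = (a ** (a ** x) - a ** (x ** a)) / (a ** x - x ** a)
    print(h, ratio, expected)   # ratio approaches expected as h -> 0
```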
Linearly disjoint vs. free field extensions Consider two field extensions $K$ and $L$ of a common subfield $k$ and suppose $K$ and $L$ are both subfields of a field $\Omega$, algebraically closed. Lang defines $K$ and $L$ to be 'linearly disjoint over $k$' if any finite set of elements of $K$ that are linearly independent over $k$ stays linearly independent over $L$ (it is, in fact, a symmetric condition). Similarly, he defines $K$ and $L$ to be 'free over $k$' if any finite set of elements of $K$ that are algebraically independent over $k$ stays algebraically independent over $L$. He shows right after that if $K$ and $L$ are linearly disjoint over $k$, then they are free over $k$. Anyway, Wikipedia gives a different definition for linear disjointness, namely $K$ and $L$ are linearly disjoint over $k$ iff $K \otimes_k L$ is a field, so I was wondering: do we have a similar description of 'free over $k$' in terms of the tensor product $K \otimes_k L$? It should be a weaker condition than $K \otimes_k L$ being a field; perhaps it needs to be an integral domain?
The condition of being linearly disjoint or free depends very much on the "positions" of $K, L$ inside $\Omega$, while the isomorphism class of the $k$-algebra $K\otimes_k L$ doesn't. For instance, consider $\Omega=\mathbb C(X,Y)$, $K=\mathbb C(X)$, $L_1=\mathbb C(Y)$ and $L_2=K$. Then $$K\otimes_\mathbb C L_1\simeq K\otimes_{\mathbb C} L_2$$ as $\mathbb C$-algebras. But $K, L_1$ are linearly disjoint (so free) in $\Omega$, while $K, L_2$ are not. This example shows that in general, neither linear disjointness nor freeness can be determined by intrinsic properties of $K\otimes_k L$. If $K$ or $L$ is algebraic over $k$, then it is true that linear disjointness is equivalent to $K\otimes_k L$ being a field. But in this situation freeness is automatic, whether or not the tensor product is a field (it can even be non-reduced).
Prove: If $a\mid m$ and $b\mid m$ and $\gcd(a,b)=1$ then $ab\mid m\,$ [LCM = product for coprimes] Prove: If $a\mid m$ and $b\mid m$ and $\gcd(a,b)=1$ then $ab\mid m$ I thought that $m=ab$ but I was given a counterexample in a comment below. So all I really know is $m=ax$ and $m=by$ for some $x,y \in \mathbb Z$. Also, $a$ and $b$ are relatively prime since $\gcd(a,b)=1$. One of the comments suggests to use Bézout's identity, i.e., $aq+br=1$ for some $q,r\in\mathbb{Z}$. Any more hints? New to this divisibility/gcd stuff. Thanks in advance!
Write $ax+by=1$, $m=aa'$, $m=bb'$. Let $t=b'x+a'y$. Then $abt=abb'x+baa'y=m(ax+by)=m$ and so $ab \mid m$. Edit: Perhaps this order is more natural and less magical: $m = m(ax+by) = max+mby = bb'ax+aa'by = ab(b'x+a'y)$.
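A small numerical walk-through of the objects in this proof (a sketch with the arbitrary choice $a=4$, $b=9$, $m=36$; `ext_gcd` is a standard extended-Euclid helper written here just for the check):

```python
from math import gcd

a, b, m = 4, 9, 36
assert gcd(a, b) == 1 and m % a == 0 and m % b == 0

def ext_gcd(p, q):
    # extended Euclid: returns (g, x, y) with p*x + q*y = g = gcd(p, q)
    if q == 0:
        return p, 1, 0
    g, u, v = ext_gcd(q, p % q)
    return g, v, u - (p // q) * v

_, x, y = ext_gcd(a, b)          # Bezout: a*x + b*y = 1
ap, bp = m // a, m // b          # a', b' with m = a*a' = b*b'
t = bp * x + ap * y              # the t from the proof
assert a * b * t == m            # hence ab | m
print(x, y, t)                   # e.g. -2 1 1
```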
Question about Minimization Let $J$ be a convex functional with real values, defined on a Hilbert space $H$. What hypotheses should I assume so that a solution exists for the problem: $J(u) = \inf \left\{{J(v); v \in K}\right\} , u \in K$ for all closed convex $K \subset H$? I begin by using the theorem: a functional $J:E\rightarrow\mathbb{R}$ defined over a normed space $E$ is semi-continuous inferiorly if for every sequence $(u_n)_{n\in \mathbb{N}}$ converging to $u$ we have $\lim_{n\rightarrow \infty}\inf J(u_n)\geq J(u)$. But I don't know how to get only "=".
You get equality by taking $u_n$ such that $J(u_n)\to \inf_K J$. Indeed, the weak limit is also an element of $K$ and therefore cannot have a smaller value of the functional than the infimum. The term is "lower semicontinuous", by the way. What you need from $J$ is being bounded from below, and lower semicontinuous with respect to weak convergence of sequences. And if you allow unbounded $K$, it helps to have $J\to\infty $ at infinity, because this forces the sequence $u_n$ to be bounded.
evaluation of the integral $\int_{0}^{x} \frac{\cos(ut)}{\sqrt{x-t}}dt $ Can the integral $$\int_{0}^{x} \frac{\cos(ut)}{\sqrt{x-t}}dt $$ be expressed in terms of elementary functions or in terms of the sine and cosine integrals? If possible I would need a hint, thanks. From fractional calculus I guess this integral is the half derivative of the sine function (I think so): $ \sqrt \pi \frac{d^{1/2}}{dx^{1/2}}\sin(ux) $ or similar. Of course I could expand the cosine into a power series and then integrate term by term, but I would like, if possible, a closed expression for my integral.
Let $t=x-y^2$. We then have $dt = -2ydy$. Hence, we get that \begin{align} I = \int_0^x \dfrac{\cos(ut)}{\sqrt{x-t}} dt & = \int_0^{\sqrt{x}} \dfrac{\cos(u(x-y^2))}{y} \cdot 2y dy\\ & = 2 \cos(ux) \int_0^{\sqrt{x}}\cos(uy^2)dy + 2 \sin(ux) \int_0^{\sqrt{x}}\sin(uy^2)dy\\ & = \dfrac{\sqrt{2 \pi}}{\sqrt{u}} \left(\cos(ux) C\left(\sqrt{\dfrac{2ux}{\pi}}\right) + \sin(ux) S\left(\sqrt{\dfrac{2ux}{\pi}}\right) \right) \end{align} where $C(z)$ and $S(z)$ are the Fresnel integrals.
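A numerical check of this closed form (a sketch; the values of $u$ and $x$ are arbitrary). The same substitution $t=x-y^2$ removes the endpoint singularity, so plain quadrature is reliable; scipy's `fresnel(z)` returns $(S(z), C(z))$ with the same $\sin/\cos(\pi t^2/2)$ convention used above.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import fresnel

u, x = 1.3, 2.0

# integral after the substitution t = x - y^2 (smooth integrand)
direct, _ = quad(lambda y: 2.0 * np.cos(u * (x - y**2)), 0.0, np.sqrt(x))

S, C = fresnel(np.sqrt(2 * u * x / np.pi))
closed = np.sqrt(2 * np.pi / u) * (np.cos(u * x) * C + np.sin(u * x) * S)
print(direct, closed)   # the two numbers should agree to quad's tolerance
```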
Inequality. $\sum{(a+b)(b+c)\sqrt{a-b+c}} \geq 4(a+b+c)\sqrt{(-a+b+c)(a-b+c)(a+b-c)}.$ Let $a,b,c$ be the side-lengths of a triangle. Prove that: I. $$\sum_{cyc}{(a+b)(b+c)\sqrt{a-b+c}} \geq 4(a+b+c)\sqrt{(-a+b+c)(a-b+c)(a+b-c)}.$$ What I have tried: \begin{eqnarray} a-b+c&=&x\\ b-c+a&=&y\\ c-a+b&=&z. \end{eqnarray} So $a+b+c=x+y+z$ and $2a=x+y$, $2b=y+z$, $2c=x+z$, and our inequality becomes: $$\sum_{cyc}{\frac{\sqrt{x}\cdot(x+2y+z)\cdot(x+y+2z)}{4}} \geq 4\cdot(x+y+z)\cdot\sqrt{xyz}. $$ Or if we make one more notation, $S=x+y+z$, we obtain: $$\sum_{cyc}{\sqrt{x}(S+y)(S+z)} \geq 16S\cdot \sqrt{xyz} \Leftrightarrow$$ $$S^2(\sqrt{x}+\sqrt{y}+\sqrt{z})+S(y\sqrt{x}+z\sqrt{x}+x\sqrt{y}+z\sqrt{y}+x\sqrt{z}+y\sqrt{z})+xy\sqrt{z}+yz\sqrt{x}+xz\sqrt{y} \geq 16S\sqrt{xyz}.$$ To complete the proof we have to prove that: $$y\sqrt{x}+z\sqrt{x}+x\sqrt{y}+z\sqrt{y}+x\sqrt{z}+y\sqrt{z} \geq 16\sqrt{xyz}. $$ Is this last inequality true? II. Knowing that $$p=\frac{a+b+c}{2}$$ we can rewrite the inequality: $$\sum_{cyc}{(2p-c)(2p-a)\sqrt{2(p-b)}} \geq 8p \sqrt{2^3 \cdot (p-a)(p-b)(p-c)} \Leftrightarrow$$ $$\sum_{cyc}{(2p-c)(2p-a)\sqrt{(p-b)}} \geq 16p \sqrt{(p-a)(p-b)(p-c)}$$ Does this help? Thanks :)
Notice that the inequality proposed is proved once we establish $$\frac{(a+b)(b+c)}{\sqrt{a+b-c}\sqrt{-a+b+c}}+\frac{(b+c)(c+a)}{\sqrt{a-b+c}\sqrt{-a+b+c}}+\frac{(c+a)(a+b)}{\sqrt{a-b+c}\sqrt{a+b-c}}\geq 4(a+b+c).$$ Using AM-GM on the denominators in the LHS, we establish that $\sqrt{a+b-c}\sqrt{-a+b+c}\leq b$, $\sqrt{a-b+c}\sqrt{-a+b+c}\leq c$, and $\sqrt{a-b+c}\sqrt{a+b-c}\leq a$. Therefore $$\operatorname{LHS}\geq \frac{(a+b)(b+c)}{b}+\frac{(b+c)(c+a)}{c}+\frac{(c+a)(a+b)}{a}=3(a+b+c)+\frac{ca}{b}+\frac{ab}{c}+\frac{bc}{a}.$$ To finish the proof it suffices then to prove $$\frac{ca}{b}+\frac{ab}{c}+\frac{bc}{a}\geq a+b+c.$$ This follows from AM-GM; indeed $$\frac{\frac{ac}{b}+\frac{ab}{c}}{2}\geq a,\qquad \frac{\frac{bc}{a}+\frac{ab}{c}}{2}\geq b,\qquad \frac{\frac{ac}{b}+\frac{bc}{a}}{2}\geq c;$$ summing up these last inequalities, we get the desired result, hence finishing the proof. Notice that the condition of $a,b,c$ being the sides of a triangle is essential in the first usage of AM-GM.
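A random spot-check of the original inequality over valid triangles (a sketch, not a proof; the parametrization $a=(x+y)/2$, $b=(y+z)/2$, $c=(x+z)/2$ with $x,y,z>0$ guarantees the triangle inequality):

```python
import random, math

def lhs_minus_rhs(a, b, c):
    L = ((a+b)*(b+c)*math.sqrt(a-b+c)
         + (b+c)*(c+a)*math.sqrt(a+b-c)
         + (c+a)*(a+b)*math.sqrt(-a+b+c))
    R = 4*(a+b+c)*math.sqrt((-a+b+c)*(a-b+c)*(a+b-c))
    return L - R

for _ in range(10**5):
    x, y, z = (random.uniform(0.01, 1) for _ in range(3))
    a, b, c = (x+y)/2, (y+z)/2, (x+z)/2   # always a valid triangle
    assert lhs_minus_rhs(a, b, c) >= -1e-9
print("no counterexample found")
```

Note that equality holds for the equilateral case $a=b=c$ (both sides equal $12a^2\sqrt{a}$ when $a=b=c$), which is consistent with the AM-GM steps above becoming equalities.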
Average Distance Between Random Points on a Line Segment Suppose I have a line segment of length $L$. I now select two points at random along the segment. What is the expected value of the distance between the two points, and why?
You can picture this problem from a discrete approach, then extend it to the real number line. For the first $L$ natural numbers, the difference between any two of them may range from $1$ through $L-1$, and exactly $N$ pairs of numbers from our set are $L-N$ units apart. Taking that into account we sum: $$\sum_{0 < x < L} x(L-x) = \frac{(L-1)L(L+1)}{6}.$$ We then divide by the number of ways to choose $2$ of the $L$ numbers and get the average distance $(L+1)/3$, which becomes $L/3$ as we allow for infinitely many more numbers in the $[0,L]$ interval.
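A quick Monte Carlo confirmation of the $L/3$ answer (a sketch; $L$ and the sample count are arbitrary):

```python
import random

# Mean distance between two independent uniform points on [0, L] is L/3.
L, n = 5.0, 10**6
total = sum(abs(random.uniform(0, L) - random.uniform(0, L))
            for _ in range(n))
print(total / n, L / 3)   # ~1.667 vs 1.667
```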
Open cover rationals proper subset of R? If I were to cover each rational number by a non-empty open interval, would their union always be R? It seems correct to me intuitively, but I am quite certain it is wrong. Thanks
If you enumerate the rationals as a sequence $x_1, x_2, \dots$, you can then take a sequence of open intervals $(x_1-\delta, x_1+\delta), (x_2-\delta/2, x_2+\delta/2), (x_3-\delta/4, x_3+\delta/4), \dots$ which gives an open cover for $\mathbb{Q}$ of total length $4\delta$, which can be made as small as you wish, by choosing $\delta$ sufficiently small.
Is there a geometric proof to answers about the 3 classical problems? I know that there is a solution to this topic using algebra (for example, this post). But I would like to know if there is a geometric proof to show this impossibility. Thanks.
No such proof is known. Note that this would in fact be meta-geometric: you do not give a construction of an object from givens, but you make a statement about all possible sequences of operations with your drawing instruments. Therefore it is a good idea to classify all points constructible from standard given points. This set of points has no truly geometric properties (after all, they are dense in the standard Euclidean plane, hence arbitrarily good approximations can be constructed) but nice algebraic properties (algebraic numbers with certain properties).
Trouble with formulation of an analytic geometry question I'm having trouble understanding a certain question, so I am asking for an explanation of it. The question is asked in a different language, so my translation will probably be mistake-ridden, I hope you guys can overlook the mistakes (and of course correct them): Show that for each $ a $ the circle $ (x-a)^2 + (y-a)^2 = a^2 $ touches the axes. This is literally how the question is formulated, I'm sure that it isn't a hard question so if one of you can explain what they mean by this question I would appreciate it!
A slightly different take on your question would be to realize that if your circle touches the $Y$ axis, it must do so at a point $(0, y)$. Substitute $x=0$ in the equation of your circle; can you find a value for $y$? The answer for touching the $X$ axis can be found in a similar way.
Kernel of Linear Functionals Problem: Prove that for all nonzero linear functionals $f:M\to\mathbb{K}$, where $M$ is a vector space over a field $\mathbb{K}$, the subspace $f^{-1}(0)$ is of co-dimension one. Could someone solve this for me?
The following is a proof in the finite dimensional case: The dimension of the image of $f$ is 1 because $\textrm{im} f$ is a subspace of $\Bbb{K}$ that has dimension 1 over itself. Since $\textrm{im} f \neq 0$ it must be the whole of $\Bbb{K}$. By rank nullity, $$\begin{eqnarray*} 1 &=& \dim \textrm{im} f \\ &=& \dim_\Bbb{K} M- \dim \ker f\end{eqnarray*}$$ showing that $\ker f$ has codimension 1.
Bypassing a series of stochastic stoplights In order for me to drive home, I need to sequentially bypass $(S_1, S_2, ..., S_N)$ stoplights that behave stochastically. Each stoplight, $S_i$, has some individual probability $r_i$ of being red, and an associated probability, $g_i$, per minute of time of turning from red to green. What is the probability density function for the number of minutes I spend waiting at the $N$ stoplights on my way home? Update 2: The first update is incorrect since the variable $T$ is a mix of discrete and continuous measures (as Sasha noted); to generate our distribution for $T$, and assuming all lights are the same, we need to compute the weighted sum: Distribution for $x = T = \sum^N_{j=1} Pr[j$ lights are red upon approach$] * Erlang[j, g]$ Here, Pr[$j$ lights are red upon approach] is just the probability of $j$ successes in $N$ trials, where the probability of success is $r$. In the case where all the lights are unique, we perform the same sort of weighted sum with the hypoexponential distribution, where we have to account for all possible subsets of the lights, with unique $g_i$, being red. Update 1 (see update 2 first, this is incorrect!): from Raskolnikov and Sasha's comments, I'm supposing that the following is the case: If we allow all the stoplights, $S_i$, to be the same, following from (http://en.wikipedia.org/wiki/Erlang_distribution), we have an Erlang (or Gamma) distribution where $k = N$ and the rate parameter is $\lambda = \frac{g}{r}$. This gives us a mean waiting time at all the red lights, $x = T$ minutes, of $\frac{k}{\lambda} = \frac{N}{(\frac{g}{r})}$ and the following PDF for $x = T$: $\frac{\lambda^k x^{k-1} e^{-\lambda x}}{(k-1)!}$ = $\frac{(\frac{g}{r})^N T^{N-1} e^{-(\frac{g}{r}) T}}{(N-1)!}$ Now if all of the stoplights are not the same, following from (http://en.wikipedia.org/wiki/Hypoexponential_distribution), we have a hypoexponential distribution where $k = N$ and the rate parameters are $(\lambda_1, \lambda_2, ..., \lambda_N) = ((\frac{g_1}{r_1}), (\frac{g_2}{r_2}), ..., (\frac{g_N}{r_N}))$. This gives us a mean waiting time at all of the red lights, $x = T$ minutes, of $\sum^{k}_{i=1} \frac{1}{\lambda_i} = \sum^{N}_{i=1} \frac{1}{(\frac{g_i}{r_i})}$. I'm having trouble, however, understanding how to correctly calculate the PDF for the hypoexponential distribution. Is the above correct? (answer: no, but the means are correct)
Let $T_i$ denote the time you wait at each stop-light. Because $\mathbb{P}(T_i = 0) = 1-r_i > 0$, $T_i$ is not a continuous random variable, and thus does not have a notion of density. Likewise, the total wait-time $T = T_1+\cdots+T_N$ also has a non-zero probability of being equal to zero, and hence has no density. Incidentally, the sum of exponential random variables with different exponents is known as the hypoexponential distribution.
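A small simulation of the model the question describes (a sketch; the $r_i$, $g_i$ values below are made up, and a red light's remaining wait is taken as exponential with rate $g_i$). It makes the atom at $T=0$ and the mean visible:

```python
import random

r = [0.4, 0.5, 0.6]   # probability each light is red (illustrative)
g = [1.0, 0.5, 2.0]   # red-to-green rates per minute (illustrative)

def one_trip():
    # wait Exp(g_i) at light i if it happens to be red, else 0
    return sum(random.expovariate(gi) if random.random() < ri else 0.0
               for ri, gi in zip(r, g))

n = 10**6
samples = [one_trip() for _ in range(n)]
p_zero = sum(s == 0.0 for s in samples) / n
print("P(T=0):", p_zero, "exact:", (1-r[0])*(1-r[1])*(1-r[2]))
print("E[T]:  ", sum(samples) / n,
      "exact:", sum(ri/gi for ri, gi in zip(r, g)))
```

The positive mass at $T=0$ is exactly why $T$ has no density, while the mean $\sum_i r_i/g_i$ agrees with the question's updates.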
Publishing an article after a book? If I first publish an article, afterward I may publish a book containing materials from the article. What about the reverse: if I first publish a book, does it make sense to publish a fragment of it as an article AFTERWARD?
"If I first publish a book does it make sense to publish its fragment as an article AFTERWARD?" Sure, why not? You might write a book for one audience, and very usefully re-publish a fragment in the form of a journal article for another audience. I have done this with some stuff buried near the end of a long textbook book aimed at beginning grad students, which colleagues are unlikely to read, but which I thought (and the journal editor thought) was interesting enough for stand-alone publication in one of the journals. And I've done things the other way around too, taken something from a book aimed at a professional audience and reworked it as an article in a publication with a more general readership. [This was techie philosophy rather than straight maths, but I'd have thought the same principles would apply.] So I guess you'd need to think about whether you would or would not be reaching significantly different audiences. (Of course, re-publication for the sake of padding out a CV is not a good idea!)
Compute expectation of certain $N$-th largest element of uniform sample A premier B-school has 2009 students. The dean, a math enthusiast, asks each student to submit a randomly chosen number between 0 and 1. She then ranks these numbers in a list in decreasing order and decides to use the 456th largest number as the fraction of students that are going to get an overall pass grade this year. What is the expected fraction of students that get a passing grade? I am not able to think in any direction, as it is really difficult to comprehend.
This is a question on order statistics. Let $U_i$ denote independent random variables, uniformly distributed on the unit interval. The dean picks the $m$-th largest, or $n+1-m$-th smallest, number in the sample $\{U_1, U_2, \ldots, U_n\}$, which is denoted as $U_{n-m+1:n}$. It is easy to evaluate the cumulative distribution function of $U_{n-m+1:n}$: for $0<u<1$ $$\begin{eqnarray} \mathbb{P}\left(U_{n-m+1:n} \leqslant u \right) &=& \sum_{k=n-m+1}^n \mathbb{P}\left( U_{1:n} \leqslant u, \ldots, U_{k:n} \leqslant u, U_{k+1:n} >u, \ldots, U_{n:n} >u \right) \\ &=& \mathbb{P}\left( \sum_{k=1}^n [ U_k \leqslant u] \geqslant n-m+1\right) \end{eqnarray} $$ where $[U_k \leqslant u]$ denotes the Iverson bracket. It equals 1 if the condition holds, and zero otherwise. Since $U_k$ are independent, $[U_k \leqslant u]$ are independent identically distributed $0-1$ random variables: $$ \mathbb{E}\left( [ U_k \leqslant u] \right) = \mathbb{P}\left(U_k \leqslant u\right) = u $$ The sum of $n$ iid Bernoulli random variables is equal in distribution to a binomial random variable with parameters $n$ and $u$. Thus: $$ F(u) = \mathbb{P}\left(U_{n-m+1:n} \leqslant u \right) = \sum_{k=n-m+1}^n \binom{n}{k} u^{k} (1-u)^{n-k} $$ The mean can be computed by integrating the above: $$\begin{eqnarray} \mathbb{E}\left(U_{n-m+1:n}\right) &=& \int_0^1 u F^\prime(u) \mathrm{d}u = \left. u F(u) \right|_{u=0}^{u=1} - \int_0^1 F(u) \mathrm{d} u \\ &=& 1- \sum_{k=n-m+1}^n \binom{n}{k} \int_0^1 u^{k} (1-u)^{n-k} \mathrm{d} u \\ &=& 1 - \sum_{k=n-m+1}^n \binom{n}{k} B(k+1, n-k+1) \\ &=& 1 - \sum_{k=n-m+1}^n \frac{n!}{k! (n-k)!} \cdot \frac{(k)! (n-k)!}{(n+1)!} \\ &=& 1 - \sum_{k=n-m+1}^n \frac{1}{n+1} = 1 - \frac{m}{n+1} \end{eqnarray} $$ Using $n=2009$ and $m=456$ the exact fraction equals: $$ \left.\mathbb{E}\left(U_{n-m+1:n}\right)\right|_{n=2009,m=456} = \frac{259}{335} \approx 0.77313 $$
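A Monte Carlo spot-check of the final number (a sketch; the trial count is arbitrary):

```python
import random

# Check E[U_{n-m+1:n}] = 1 - m/(n+1) for n=2009, m=456 by simulation.
n, m = 2009, 456
trials = 2000
acc = 0.0
for _ in range(trials):
    xs = sorted(random.random() for _ in range(n))
    acc += xs[n - m]                     # the m-th largest value
print(acc / trials, 1 - m / (n + 1))     # both ~0.77313
```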
In Need of Ideas for a Small Fractal Program I am a freshman in high school who needs a math-related project, so I decided on the topic of fractals. Being an avid developer, I thought it would be awesome to write a Ruby program that can calculate a fractal. The only problem is that I am not some programming god, and I have not worked on any huge projects (yet). So I need a basic-ish fractal 'type' to do the project on. I am a very quick learner, and my math skills greatly outdo those of my peers (I was working on derivatives by myself last year). So does anybody have any good ideas? Thanks!!!! :) PS: my school requires a live resource for every project we do, so would anybody be interested in helping? :)
Maybe you want to consider iterated function systems
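To make the suggestion concrete, here is a minimal "chaos game" rendering of one classical iterated function system, the Sierpinski triangle (a sketch in Python; the asker mentioned Ruby, and the same loop ports directly — all constants here are illustrative):

```python
import random

# Chaos game: repeatedly jump halfway toward a randomly chosen vertex.
# The visited points fill out the Sierpinski triangle.
vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]
x, y = 0.25, 0.25
points = []
for i in range(50000):
    vx, vy = random.choice(vertices)
    x, y = (x + vx) / 2, (y + vy) / 2
    if i > 20:                       # skip the first transient points
        points.append((x, y))

# Render as coarse ASCII so no graphics library is needed.
W, H = 60, 30
grid = [[' '] * W for _ in range(H)]
for px, py in points:
    grid[H - 1 - int(py / 0.866 * (H - 1))][int(px * (W - 1))] = '*'
print('\n'.join(''.join(row) for row in grid))
```

Each "jump halfway to a vertex" is one of the three affine contractions of the IFS; swapping in a different set of contractions (e.g. the four maps of the Barnsley fern) gives a different fractal from the same loop.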
Integer solutions to $ x^2-y^2=33$ I'm currently trying to solve a programming question that requires me to calculate all the integer solutions of the following equation: $x^2-y^2 = 33$ I've been looking for a solution on the internet already but I couldn't find anything for this kind of equation. Is there any way to calculate and list the integer solutions to this equation? Thanks in advance!
Hint $\ $ Like sums of squares, there is also a composition law for differences of squares, so $\rm\quad \begin{eqnarray} 3\, &=&\, \color{#0A0}2^2-\color{#C00}1^2\\ 11\, &=&\, \color{blue}6^2-5^2\end{eqnarray}$ $\,\ \Rightarrow\,\ $ $\begin{eqnarray} 3\cdot 11\, &=&\, (\color{#0A0}2\cdot\color{blue}6+\color{#C00}1\cdot 5)^2-(\color{#0A0}2\cdot 5+\color{#C00}1\cdot\color{blue}6)^2\, =\, 17^2 - 16^2\\ &=&\, (\color{#0A0}2\cdot\color{blue}6-\color{#C00}1\cdot 5)^2-(\color{#0A0}2\cdot 5-\color{#C00}1\cdot\color{blue}6)^2\, =\, \ 7^2\ -\ 4^2 \end{eqnarray}$
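For the programming side of the question, a brute-force sketch over factor pairs works too, since $x^2-y^2=(x-y)(x+y)=33$ (the variable names are illustrative):

```python
# Enumerate factor pairs d*e = 33 with d = x - y, e = x + y.
# Both factors of 33 are odd, so x and y come out integral.
solutions = set()
for d in (1, 3, 11, 33, -1, -3, -11, -33):
    e = 33 // d
    if (d + e) % 2 == 0:
        x, y = (d + e) // 2, (e - d) // 2
        solutions.add((x, y))
print(sorted(solutions))   # (7, ±4) and (17, ±16), plus sign variants
```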
What is the point of the Thinning Rule? I am studying predicate calculus on some lecture notes on my own. I have a question concerning a strange rule of inference called the Thinning Rule, which the writer states as the third rule of inference for the formal system K$(L)$ (after Modus Ponens and the Generalisation Rule): TR) $ $ if $\Gamma \vdash \phi$ and $\Gamma \subset \Delta$, then $\Delta \vdash \phi$. Well, it seems to me that TR is not necessary at all since it is easily proven from the very definition of formal proof (without TR, of course). I am not able to see what is the point here. The Notes are here http://www.maths.ox.ac.uk/system/files/coursematerial/2011/2369/4/logic.pdf (page 14-15)
After a lot of research here and there I think I have found the correct answer, thanks to Propositional and Predicate Calculus by Derek Goldrei. So I will try to answer my own question. The fact is that when we are dealing with Predicate Calculus we have the following Generalization Rule: GR) If $x_i$ is not free in any formula in $\Gamma$, then from $\Gamma \vdash \phi$ infer $\Gamma \vdash \forall x_i \phi$. So we easily see that the Thinning Rule TR) $ $ If $\Gamma \vdash \phi$ and $\Gamma \subset \Delta$, then $\Delta \vdash \phi$. is a metatheorem of the Propositional Calculus (where no quantifications and so no Generalization Rule occur) but it is not (in general) derivable in the Predicate Calculus. As a matter of fact, it could happen that $x_i$ has a free occurrence in some formula $\psi$ with $\psi \in \Delta$ but $\psi \notin \Gamma$, where $\Gamma \subset \Delta$. In such a case (without TR), if we have derived $\Gamma \vdash \forall x_i \phi$ from $\Gamma \vdash \phi$, we cannot say that $\Delta \vdash \forall x_i \phi$, because we can no longer apply GR. This is the reason for the Thinning Rule.
Analysis problem with volume I'm looking for a complete answer to this problem. Let $U,V\subset\mathbb{R}^d$ be open sets and $\Phi:U\rightarrow V$ be a homeomorphism. Suppose $\Phi$ is differentiable at $x_0$ and that $\det D\Phi(x_0)=0$. Let $\{C_n\}$ be a sequence of open (or closed) cubes in $U$ such that $x_0$ is inside the cubes and with their sides going to $0$ as $n\rightarrow\infty$. Denoting the $d$-dimensional volume of a set by $\operatorname{Vol}(\cdot)$, show that $$\lim_{n\rightarrow\infty}\frac{\operatorname{Vol}(\Phi(C_n))}{\operatorname{Vol}(C_n)}=0$$ I know that $\Phi$ can't be a diffeomorphism at $x_0$, but I have no idea how to use this, or how to do anything different. Thanks for helping.
Assume $x_0=\Phi(x_0)=0$, and put $d\Phi(0)=:A$. By assumption the matrix $A$ (or $A'$) has rank $\leq d-1$; therefore we can choose an orthonormal basis of ${\mathbb R}^d$ such that the first row of $A$ is $=0$. With respect to this basis $\Phi$ assumes the form $$\Phi:\quad x=(x_1,\ldots, x_d)\mapsto(y_1,\ldots, y_d)\ ,$$ and we know that $$y_i(x)=a_i\cdot x+ o\bigl(|x|\bigr)\qquad(x\to 0)\ .$$ Here the $a_i$ are the row vectors of $A$, whence $a_1=0$. Let an $\epsilon>0$ be given. Then there is a $\delta>0$ with $$\bigl|y_1(x)\bigr|\leq \epsilon|x|\qquad\bigl(|x|\leq\delta\bigr)\ .$$ Furthermore there is a constant $C$ (not depending on $\epsilon$) such that $$\bigl|y(x)\bigr|\leq C|x|\qquad\bigl(|x|\leq\delta\bigr)\ .$$ Consider now a cube $Q$ of side length $r>0$ containing the origin. Its volume is $r^d$. When $r\sqrt{d}\leq\delta$ all points $x\in Q$ satisfy $|x|\leq\delta$. Therefore the image body $Q':=\Phi(Q)$ is contained in a box with center $0$, having side length $2\epsilon r\sqrt{d}$ in $y_1$-direction and side length $2C\sqrt{d}\>r$ in the $d-1$ other directions. It follows that $${{\rm vol}_d(Q')\over{\rm vol}_d(Q)}\leq 2^d\ d^{d/2}\> C^{d-1}\ \epsilon\ .$$ From this the claim easily follows by some juggling of $\epsilon$'s.
how to calculate the exact value of $\tan \frac{\pi}{10}$ I have an extra homework problem: to calculate the exact value of $ \tan \frac{\pi}{10}$. From the WolframAlpha calculator I know that it's $\sqrt{1-\frac{2}{\sqrt{5}}} $, but I have no idea how to calculate that. Thank you in advance, Greg
Your textbook probably has an example, where $\cos(\pi/5)$ (or $\sin(\pi/5)$) has been worked out. I betcha it also has formulas for $\sin(\alpha/2)$ and $\cos(\alpha/2)$ expressed in terms of $\cos\alpha$. Take it from there.
How to prove if function is increasing I need to prove that the function $$P(n,k)=\binom{n}{k}= \frac{n!}{k!(n-k)!}$$ is increasing when $\displaystyle k\leq\frac{n}{2}$. Is this an inductive maths topic?
As $n$ increases, $n!$ increases, and $(n-k)!$ increases. $(n+1)!$ is $(n+1)$ times larger than $n!$, but $(n+1-k)!$ is only $(n+1-k)$ times larger than $(n-k)!$. Therefore the numerator increases more than the denominator as $n$ increases, so $P$ increases as $n$ increases. As $k$ increases, $k!$ increases while $(n-k)!$ decreases. The question is then which changes at a greater rate, $k!$ or $(n-k)!$. Define the functions $A(x) = x!$ and $B(x) = (n-x)!$, so the denominator of $P(n,k)$ is $A(k)B(k)$. Now $A(k+1) = (k+1)! = (k+1)\cdot k!$ and $B(k+1) = (n - k - 1)! = (n-k)!/(n-k)$, therefore $A(k+1)B(k+1) = A(k)B(k)\cdot\frac{k+1}{n-k}$. So the step from $k$ to $k+1$ increases $P$ exactly when $(k+1)/(n-k) \leq 1$, i.e. when $k \leq (n-1)/2$; in particular $P(n,k-1) \leq P(n,k)$ whenever $k \leq n/2$. Therefore $P$ is an increasing function of $n$, and of $k$ up to $n/2$.
Bi-Lipschitzity of maximum function Assume that $f(re^{it})$ is a bi-Lipschitz map of the closed unit disk onto itself with $f(0)=0$. Is the function $h(r)=\max_{0\le t \le 2\pi}|f(re^{it})|$ bi-Lipschitz on $[0,1]$?
It is easy to prove that $h$ is Lipschitz whenever $f$ is. Indeed, we simply take the supremum of the uniformly Lipschitz family of functions $\{f_t\}$, where $f_t(r)=|f(re^{it})|$. Also, $h$ is bi-Lipschitz whenever $f$ is. Let $D_r$ be the closed disk of radius $r$. Let $L$ be the Lipschitz constant of the inverse $f^{-1}$. The preimage under $f$ of the $\epsilon/L$-neighborhood of $f(D_r)$ is contained in $D_{r+\epsilon}$. Therefore, $h(r+\epsilon)\ge h(r)+\epsilon/L$, which means the inverse of $h$ is also Lipschitz. Answer to the original question: is $h$ smooth? No, it's no better than Lipschitz. I don't feel like drawing an elaborate picture, but imagine the concentric circles being mapped onto circles with two smooth "horns" on opposite sides. For some values of $r$ the left horn is longer, for others the right horn is longer. Your function $h(r)$ ends up being the maximum of two smooth functions (lengths of horns). This makes it non-differentiable at the values of $r$ where one horn overtakes the other.
Using the integral definition Possible Duplicate: Natural Logarithm and Integral Properties I was asked to prove that ln(xy) = ln x + ln y using the integral definition. While I'm not asking for any answers on the proof, I was wondering how to interpret and set-up this proof using the "integral definition" (As I am unsure what that means.) EDIT And to prove that ln(x/y) = ln x - ln y Is it right to say this? $$\ln(\frac{x}{y})=\int_1^{\frac{x}{y}} \frac{dt}{t}=\int_1^x \frac{dt}{t}-\int_x^{\frac{x}{y}}\frac{dt}{t}.$$
By definition, $$\ln w=\int_1^w \frac{dt}{t}.$$ Thus $$\ln(xy)=\int_1^{xy} \frac{dt}{t}=\int_1^x \frac{dt}{t}+\int_x^{xy}\frac{dt}{t}.$$ Now make an appropriate change of variable to conclude that the last integral on the right is equal to $\ln y$.
Why does $\tan^{-1}(1)+\tan^{-1}(2)+\tan^{-1}(3)=\pi$? Playing around on wolframalpha shows $\tan^{-1}(1)+\tan^{-1}(2)+\tan^{-1}(3)=\pi$. I know $\tan^{-1}(1)=\pi/4$, but how could you compute that $\tan^{-1}(2)+\tan^{-1}(3)=\frac{3}{4}\pi$ to get this result?
Consider $z_1= \frac{1+2i}{\sqrt{5}}$, $z_2= \frac{1+3i}{\sqrt{10} }$, and $z_3= \frac{1+i}{\sqrt{2} }$; then: $$ z_1 z_2 z_3 = \frac{1}{10} (1+2i)(1+3i)(1+i)=-1 $$ Take arg of both sides and use the property that $\arg(z_1 z_2 z_3) = \arg(z_1) + \arg(z_2) + \arg(z_3)$ (valid here since each $\arg(z_i)\in(0,\pi/2)$, so the sum stays below $2\pi$): $$ \arg(z_1) + \arg(z_2) + \arg(z_3) = \arg(-1) = \pi$$ The LHS we can write as: $$ \tan^{-1} ( \frac{2}{1}) +\tan^{-1} ( \frac{3}{1} ) + \tan^{-1} (1) = \pi$$ Tl;dr: Complex number multiplication corresponds to tangent angle addition
If $\int_0^\infty f\text{d}x$ exists, does $\lim_{x\to\infty}f(x)=0$? Are there examples of functions $f$ such that $\int_0^\infty f\text{d}x$ exists, but $\lim_{x\to\infty}f(x)\neq 0$? I curious because I know for infinite series, if $a_n\not\to 0$, then $\sum a_n$ diverges. I'm wondering if there is something similar for improper integrals.
If $\lim_{x\to+\infty}f(x)=l>0$, then for any $0<\varepsilon<l$ there exists $M>0$ such that $l-\varepsilon<f(x)<l+\varepsilon$ for all $x>M$, and so $$ \int_M^{+\infty}f(x)dx>\int_M^{+\infty}(l-\varepsilon)dx=+\infty. $$
Balanced but not convex? In a topological vector space $X$, a subset $S$ is convex if \begin{equation}tS+(1-t)S\subset S\end{equation} for all $t\in (0,1)$. $S$ is balanced if \begin{equation}\alpha S\subset S\end{equation} for all $|\alpha|\le 1$. So if $S$ is balanced then $0\in S$, $S$ is uniform in all directions and $S$ contains the line segment connecting 0 to another point in $S$. Due to the last condition it seems to me that balanced sets are convex. However I cannot prove this, and there are also evidence suggesting the opposite. I wonder whether there is an example of a set that is balanced but not convex. Thanks!
The interior of a regular pentagram centered at the origin is balanced but not convex.
Entire functions representable in power series How do I prove that an entire function $f$, whose power series expansion about every point has at least one coefficient equal to $0$, is a polynomial?
Define $F_n:=\{z\in \Bbb C, f^{(n)}(z)=0\}$. Since for each $n$, $f^{(n)}$ is holomorphic it's in particular continuous, hence $F_n$ is closed. Since we can write at each $z_0$, $f(z)=\sum_{k=0}^{+\infty}\frac{f^{(k)}(z_0)}{k!}(z-z_0)^k$, the hypothesis implies that $\bigcup_{n\geq 0}F_n=\Bbb C$. As $\Bbb C$ is complete, by Baire's categories theorem, one of the $F_n$ has a non empty interior, say $F_N$. Then $f^{(N)}(z)=0$ for all $z\in B(z_0,r)$, for some $z_0\in \Bbb C$ and some $r>0$. As $B(z_0,r)$ is not discrete and $\Bbb C$ is connected, we have $f^{(N)}\equiv 0$, hence $f$ is a polynomial.
Semi-direct products of different groups making the same group? We can prove both of the following: $S_3=\mathbb Z_3\rtimes\mathbb Z_2$ and $\mathbb Z_6=\mathbb Z_3\rtimes\mathbb Z_2$ So two different groups (not isomorphic, in the examples above) can be described as semi-direct products of a pair of groups ($\mathbb Z_3$ and $\mathbb Z_2$). I hope my question makes sense: Is there any group which can be described as a semi-direct product of two different pairs of groups? Thanks.
Such a group of smallest order is $D_8$, the dihedral group of order 8. Write $D_8=\langle x,y\colon x^4=y^2=1, y^{-1}xy=x^{-1}\rangle=\{1,x,x^2,x^3, y,xy,x^2y,x^3y \}$. (1) With $H=\langle x\rangle$ and $K=\langle y\rangle$, we get $D_8=H\rtimes K\cong C_4\rtimes C_2$. (2) With $H=\langle x^2,y\rangle$ and $K=\langle xy\rangle$, we get $D_8=H\rtimes K\cong (C_2\times C_2)\rtimes C_2$.
$\int\frac{x^3}{\sqrt{4+x^2}}$ I was trying to calculate $$\int\frac{x^3}{\sqrt{4+x^2}}dx$$ Doing $x = 2\tan(\theta)$, $dx = 2\sec^2(\theta)~d\theta$, $-\pi/2 < \theta < \pi/2$, I have: $$\int\frac{\left(2\tan(\theta)\right)^3\cdot2\cdot\sec^2(\theta)~d\theta}{2\sec(\theta)}$$ which is $$8\int\tan(\theta)\cdot\tan^2(\theta)\cdot\sec(\theta)~d\theta$$ Now I got stuck... any clues what's the next substitution to do? I'm sorry for the formatting.
You have not chosen an efficient way to proceed. However, let us continue along that path. Note that $\tan^2\theta=\sec^2\theta-1$. So you want $$\int 8(\sec^2\theta -1)\sec\theta\tan\theta\,d\theta.$$ Let $u=\sec\theta$. Remark: My favourite substitution for this problem and close relatives is a variant of the one used by Ayman Hourieh. Let $x^2+4=u^2$. Then $2x\,dx=2u\,du$, and $x^2=u^2-4$. So $$\int \frac{x^3}{\sqrt{x^2+4}}\,dx=\int \frac{(u^2-4)u}{u}\,du=\int (u^2-4)\,du.$$
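Completing the remark's substitution gives the antiderivative $\frac{(x^2+4)^{3/2}}{3} - 4\sqrt{x^2+4}$ (plus a constant); this last step is my completion, not part of the answer above, and the quick symbolic check below verifies it by differentiation:

```python
import sympy as sp

x = sp.symbols('x')
# Candidate antiderivative u^3/3 - 4u with u = sqrt(x^2 + 4)
F = (x**2 + 4)**sp.Rational(3, 2) / 3 - 4 * sp.sqrt(x**2 + 4)
print(sp.simplify(sp.diff(F, x) - x**3 / sp.sqrt(4 + x**2)))   # -> 0
```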
What is the value of $w+z$ if $1<w<x<y<z$? I am having trouble solving the following problem: If the product of the integers $w,x,y,z$ is 770, and if $1<w<x<y<z$, what is the value of $w+z$? (ans=$13$) Any suggestions on how I could solve this problem?
Find the prime factorization of the number; that is always a great place to start when you have a problem involving a product of integers. Here you are lucky: you find $4$ distinct prime numbers, each to the power of one ($770 = 2\cdot 5\cdot 7\cdot 11$), so you know your answer is unique.
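A brute-force sketch confirming this (variable names are illustrative):

```python
from itertools import combinations
from math import prod

# All divisors of 770 greater than 1, in increasing order.
divs = [d for d in range(2, 770) if 770 % d == 0]
for w, x, y, z in combinations(divs, 4):   # already sorted: w < x < y < z
    if prod((w, x, y, z)) == 770:
        print(w, x, y, z, "w+z =", w + z)  # 2 5 7 11  w+z = 13
```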
Maclaurin expansion of $\arcsin x$ I'm trying to find the first five terms of the Maclaurin expansion of $\arcsin x$, possibly using the fact that $$\arcsin x = \int_0^x \frac{dt}{(1-t^2)^{1/2}}.$$ I can only see that I can interchange differentiation and integration but not sure how to go about this. Thanks!
As has been mentioned in other answers, the series for $\frac1{\sqrt{1-x^2}}$ is most easily found by substituting $x^2$ into the series for $\frac1{\sqrt{1-x}}$. But for fun we can also derive it directly by differentiation. To find $\frac{\mathrm d^n}{\mathrm dx^n}\frac1{\sqrt{1-x^2}}$ at $x=0$, note that any factors of $x$ in the numerator produced by differentiating the denominator must be differentiated by some later differentiation for the term to contribute at $x=0$. Thus the number of contributions is the number of ways to group the $n$ differential operators into pairs, with the first member of each pair being applied to the numerator and the second member being applied to the factor $x$ produced by the first. This number is non-zero only for even $n=2k$, and in that case it is $\frac{(2k)!}{2^kk!}$. Each such pair accumulates another factor $1\cdot3\cdot\cdots\cdot(2k-1)=\frac{(2k)!}{2^kk!}$ from the exponents in the denominator. Thus the value of the $n$-th derivative at $x=0$ is $\frac{(2k)!^2}{4^k(k!)^2}$, so the Maclaurin series is $$ \frac1{\sqrt{1-x^2}}=\sum_{k=0}^\infty\frac1{(2k)!}\cdot\frac{(2k)!^2}{4^k(k!)^2}x^{2k}=\sum_{k=0}^\infty\frac{\binom{2k}k}{4^k}x^{2k}\;. $$ Then integrating yields $$ \arcsin x=\sum_{k=0}^\infty\frac{\binom{2k}k}{4^k(2k+1)}x^{2k+1}\;. $$
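A quick symbolic cross-check of the derived coefficients against sympy's own expansion (a sketch):

```python
import sympy as sp

x = sp.symbols('x')
# First five terms of the derived series binom(2k,k)/(4^k (2k+1)) x^(2k+1)
derived = sum(sp.binomial(2*k, k) * x**(2*k + 1) / (4**k * (2*k + 1))
              for k in range(5))
print(sp.expand(sp.series(sp.asin(x), x, 0, 10).removeO() - derived))  # -> 0
```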
3-dimensional array I apologize if my question is ill-posed, as I am trying to grasp this material, and for a poor choice of tags. At the moment, I am taking an independent-study math class at my school. This is not a homework question, but is meant to help further my understanding in this area. I've been looking around on the internet for some understanding of the topic of higher-dimensional arrays. I'm trying to see what the analogue of transposing a matrix is for a 3-dimensional array. To explicitly state my question: can you transpose a 3-dimensional array? What does it look like? I know for the 2-d version, you just swap the indices. For example, given a matrix $A$, entry $a_{ij}$ is sent to $a_{ji}$, and vice versa. I'm trying to understand this first, to then answer a question on trying to find a basis for the space of $n \times n \times n$ symmetric arrays. Thank you for your time.
There is no single transformation corresponding to taking the transpose. The reason is that while there is only one non-identity permutation of a pair of indices, there are five non-identity permutations of three indices. There are two that leave none of them fixed: one takes $a_{ijk}$ to $a_{jki}$, the other to $a_{kij}$. Exactly what permutations will matter for your underlying question depends on how you're defining symmetric for a three-dimensional array.
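A quick numpy illustration of these axis permutations (a sketch; the symmetrization at the end hints at the "symmetric array" question, assuming "symmetric" means invariant under all six permutations):

```python
import numpy as np
from itertools import permutations

# Axis permutations are the 3-d analogue of the matrix transpose.
a = np.arange(27).reshape(3, 3, 3)
swap = np.transpose(a, (1, 0, 2))   # swap[i, j, k] == a[j, i, k]
cyc = np.transpose(a, (2, 0, 1))    # cyc[i, j, k]  == a[j, k, i]
assert swap[1, 0, 2] == a[0, 1, 2]
assert cyc[2, 0, 1] == a[0, 1, 2]

# Averaging over all six permutations produces a fully symmetric array.
sym = sum(np.transpose(a, p) for p in permutations(range(3))) / 6
assert np.allclose(sym, np.transpose(sym, (1, 0, 2)))
```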
A dubious proof using Weierstrass-M test for $\sum^n_{k=1}\frac{x^k}{k}$ I have been trying to prove the uniform convergence of the series $$f_{n}(x)=\sum^n_{k=1}\frac{x^k}{k}$$ Obviously, the series converges only for $x\in(-1,1)$. Consequently, I decided to split this into two intervals: $(-1,0]$ and $[0,1)$ and see if it converges on both of them using the Weierstrass M-test. For $x\in(-1,0]$, let's take $q\in(-1,x)$. We thus have: $$\left|\frac{x^k}{k}\right|\leq\left|x^k\right|\leq\left|q^k\right|$$ and since $\sum|q^n|$ is convergent, $f_n$ should be uniformly convergent on the given interval. Now let's take $x\in[0,1)$ and $q\in(x,1)$. Now, we have: $$\left|\frac{x^k}{k}\right|=\frac{x^k}{k}\leq\ x^k\leq{q^k}$$ and once again, we obtain the uniform convergence of $f_n$. However, not sure of my result, I decided to cross-check it by checking whether $f_n$ is Cauchy. For $x\in(-1,0]$, I believe it was a positive hit, since for $m>n$ we have: $$\left|f_{m}-f_{n}\right|=\left|f_{n+1}+f_{n+2}+...f_{m}\right|\leq\left|\frac{x^n}{n}\right|\leq\frac{1}{n}$$ which is what we needed. However, I haven't been able to come up with a method to show the same for $x\in[0,1)$. Now, I am not so sure whether $f_n$ is uniformly convergent on $[0,1)$. If it is, then how can we show it otherwise, and if it isn't, then how can we disprove it? Also, what's equally important - what did I do wrong in the Weierstrass-M test?
Weierstrass M-test only gives you uniform convergence on intervals of the form $[-q,q]$, where $0<q<1$. Your proof shows this. You also get uniform convergence on the interval $[-1,0]$, but to see this you need other methods. For example the standard estimate for the cut-off error of a monotonically decreasing alternating series will work here. As David Mitra pointed out, the convergence is not uniform on the interval $[0,1)$. Elaborating on his argument: No matter how large an $n$ we choose, the divergence of the harmonic series tells us that $\sum_{k=n+1}^{n+p}(1/k)>2$ for $p$ large enough. We can then select a number $a\in(0,1)$ such that the powers $a^k, n<k\le n+p$ are all larger than $1/2$. Then it follows that for all $x\in(a,1)$ $$ \sum_{k=n+1}^{n+p}\frac{x^k}k\ge \sum_{k=n+1}^{n+p}\frac{a^k}k>\frac12\sum_{k=n+1}^{n+p}\frac1k>1. $$ Thus the Cauchy condition fails on the subinterval $(a,1)$.
Which axiom shows that a class of all countable ordinals is a set? As stated in the title, which axiom in ZF shows that a class of all countable (or any cardinal number) ordinals is a set? Not sure which axiom, that's all.
A combination of them, actually. The proof I've seen requires power set and replacement, and I think union or pairing, too, but I can't recall off the top of my head. I can post an outline of that proof, if you like.
From set of differential equations to set of transfer functions (MIMO system) I want to know how I can get from a set of differential equations to a set of transfer functions for a multi-input multi-output system. I can do this easily with Matlab or by computing $G(s) = C[sI - A]^{-1}B + D$. I have the following two equations: $$ \ddot{y}_1 + 2\dot{y}_1 + \dot{y}_2 + u_1 = 0 \\ \dot{y}_2 - y_2 + u_2 - \dot{u}_1 = 0 $$ There are 2 inputs, $y_i$, and 2 outputs, $u_i$. At first I thought that when I wanted to retrieve the transfer function from $y_1$ to $u_1$, I had to set $y_2$ and $u_2$ equal to zero. Thus I would have been left with $\ddot{y}_1 + 2\dot{y}_1 + u_1 = 0$ and $\dot{u}_1 = 0$. However this does not lead to the correct answer, $$ y_1 \rightarrow u_1: \frac{-s^2 - s + 1}{s^3 + s^2 - 2 s} $$ I also thought about substituting the two formulas into each other, so expressing $y_2$ and $u_2$ in terms of $y_1$ and $u_1$; however this also led to nothing. Can someone explain to me how to obtain the 4 transfer functions, $y_1 \rightarrow u_1$, $y_1 \rightarrow u_2$, $y_2 \rightarrow u_1$ and $y_2 \rightarrow u_2$?
I am guessing that you are looking for the transfer function from $u$ to $y$; this would be consistent with current nomenclature. Taking Laplace transforms gives $$ (s^2+2s) \hat{y_1} + s\hat{y_2} + \hat{u_1} = 0\\ (s-1)\hat{y_2} + \hat{u_2}-s \hat{u_1} = 0 $$ Solving algebraically gives $$\hat{y_1} = \frac{1-s-s^2}{s(s+2)(s-1)} \hat{u_1} + \frac{1}{(s+2)(s-1)}\hat{u_2} \\ \hat{y_2} = \frac{s}{s-1} \hat{u_1} -\frac{1}{s-1} \hat{u_2} $$ from which all four transfer functions can be read off.
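A quick symbolic cross-check of this algebra (a sketch using sympy; the symbol names are illustrative):

```python
import sympy as sp

s, u1, u2, y1, y2 = sp.symbols('s u1 u2 y1 y2')
# The two Laplace-domain equations above, set to zero.
sol = sp.solve([(s**2 + 2*s)*y1 + s*y2 + u1,
                (s - 1)*y2 + u2 - s*u1], [y1, y2])
print(sp.factor(sol[y1]))   # expect ((1-s-s**2)*u1 + s*u2)/(s*(s-1)*(s+2))
print(sp.factor(sol[y2]))   # expect (s*u1 - u2)/(s - 1)
```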
How do I simplify this limit with function equations? $$\lim_{x \to 5} \frac{f(x^2)-f(25)}{x-5}$$ Assuming that $f$ is differentiable for all $x$, simplify. (It does not say what $f(x)$ is at all) My teacher has not taught us any of this, and I am unclear about how to proceed.
$f$ is differentiable, so $g(x) = f(x^2)$ is also differentiable. Let's find the derivative of $g$ at $x = 5$ using the definition. $$ g'(5) = \lim_{x \to 5} \frac{g(x) - g(5)}{x - 5} = \lim_{x \to 5} \frac{f(x^2) - f(25)}{x - 5} $$ Now write $g'(5)$ in terms of $f$ to get the desired result.
Basic set questions I would really appreciate it if you could explain the set notation here $$\{n ∈ {\bf N} \mid (n > 1) ∧ (∀x,y ∈ {\bf N})[(xy = n) ⇒ (x = 1 ∨ y = 1)]\}$$ 1) What does $∀x$ mean? 2) I understand that $n ∈ {\bf N} \mid (n > 1) ∧ (∀x,y ∈ {\bf N})$ means $n$ is part of set $\bf N$ such that $(n > 1) ∧ (∀x,y ∈ {\bf N})$. What do the $[\;\;]$ and $⇒$ mean? 3) Prove that if $A ⊆ B$ and $B ⊆ C$, then $A ⊆ C$ I could prove it by drawing a Venn diagram but is there a better way?
1) $(\forall x)$ is the universal quantifier. It means "for all $x$". 2) $[ ]$ is the same as a parenthesis. Probably, the author did not want to use too many round parenthesis because it would get too confusing. $\Rightarrow$ is implies. 3) Suppose $x \in A$. Since $A \subset B$, by definition $x \in B$. Since $B \subset C$, then $x \in C$. So $x \in A$ implies $x \in C$. This is precisely the definition of $A \subset C$.
Example where $f\circ g$ is bijective, but neither $f$ nor $g$ is bijective Can anyone come up with an explicit example of two functions $f$ and $g$ such that: $f\circ g$ is bijective, but neither $f$ nor $g$ is bijective? I tried the following: $$f:\mathbb{R}\rightarrow \mathbb{R^{+}} $$ $$f(x)=x^{2}$$ and $$g:\mathbb{R^{+}}\rightarrow \mathbb{R}$$ $$g(x)=\sqrt{x}$$ $f$ is not injective, and $g$ is not surjective, but $f\circ g$ is bijective Any other examples?
If we define $f:\mathbb{R}^2 \to \mathbb{R}$ by $f(x,y) = x$ and $g:\mathbb{R} \to \mathbb{R}^2$ by $g(x) = (x,0)$ then $f \circ g : \mathbb{R} \to \mathbb{R}$ is bijective (it is the identity) but $f$ is not injective and $g$ is not surjective.
Graphing Cubic Functions I'm having a little bit of trouble with cubic functions, especially when I need to graph the turning point, y-intercepts, x-intercepts, etc. My class teacher told us to use the gradient method: let's say $$f(x)=x^3+x^2+x+2$$ We can turn this equation around by using the gradient method: $$f'(x)=3x^2+2x+1$$ so it's a quadratic equation. I would also like to find out more about this method, if anyone knows it. Basically I am not good at sketching graphs, so if anyone has a website that might help me find out more about cubic functions and how to graph them, or if anyone can help me out, I'll be thankful. Thanks.
I used to think that the gradient method is for plotting functions of 2 variables. However, this answer may give you some pointers. You could start by examining the function's domain; in your case, all $x$ values are valid candidates. Next, set $x=0$ and then $y=0$ to get the intercepts. Setting $x=0$ yields $y=2$, so the point $(0,2)$ is on your graph. Setting $y=0$ means that you need to solve the following for $x$: $$x^3+x^2+x+2=0$$ Solving such equations is sometimes obvious, at least for the first root, but in this case it is not. You either follow a numeric method or use a calculator such as Cubic Eqn Solver, or use the formula in Wolfram-Cubic equation formula or Wiki-Cubic Function. To get a plot of the function we'll just use the real root found by the above methods, so the point $(-1.35320,0)$ is on the function graph as well. Note that the other 2 roots are complex, hence the function intersects the x-axis only once. Now we can move on to finding the critical points so that you can determine the concavity of the function. Using the derivative test, you can determine the local minima and maxima of the function. The subject is a bit lengthy to include in detail here; I suggest you read about it in a book such as (page 191 and above of) Google Books - Calculus of single variable. In your case, the first derivative has no real roots. The second derivative $$6x+2=0$$ has a root at $x=-0.333$, which indicates that $(-0.333,1.741)$ is a point of inflection. To further study the shape, take 2 points immediately before and after the point of inflection to determine the shape of the curve around this point. Use the information obtained so far, together with a few other points, to determine the approximate curve shape. There is a free web-based graph plotter at Desmos-Graph Plotter - Calculator that may also be useful for you. [A sample plot showing the function and its first derivative was attached here.]
Is Fractal perimeter always infinite? Looking for information on fractals through google, I have read several times that two characteristics of fractals are: finite area and infinite perimeter. Although I can feel the area is finite (at least in the pictures of fractals I have seen, but maybe it is not necessarily true?), I am wondering if the perimeter of a fractal is always infinite. If you think about series with positive terms, one can find divergent series, for example the harmonic series $\sum_{n=1}^\infty{\frac{1}{n}}$, and convergent series, for example $\sum_{n=0}^\infty{\frac{1}{2^n}}$. So why couldn't we imagine a fractal built the same way we build the Koch snowflake but ensuring that at each iteration the new perimeter has grown by less than $\frac{1}{2^n}$, or by any term that makes the whole series convergent? What in the definition of fractals allows or prevents having an infinite perimeter?
If the boundary is built from infinitely many straight sides, the perimeter is the sum of the side lengths, so it is infinite exactly when that sum diverges; for the classical constructions such as the Koch snowflake the total length grows by a fixed factor at each iteration, which is why those perimeters are infinite.
Exactly one nontrivial proper subgroup Question: Determine all the finite groups that have exactly one nontrivial proper subgroup. My attempt is that the order of the group G has to be a positive nonprime integer n which has only one divisor, since any divisor a of n will form a proper subgroup of order a. Since 4 is the only nonprime number that has only 1 divisor (which is 2), all groups of order 4 have only 1 nontrivial proper subgroup (Z4 and D4).
Let $H$ be the only non-trivial proper subgroup of the finite group $G$. Since $H$ is proper, there must exist an $x \notin H$. Now consider the subgroup $\langle x\rangle$ of $G$. This subgroup cannot be equal to $H$, nor is it trivial, hence $\langle x\rangle = G$, that is $G$ is cyclic, say of order $n$. The number of subgroups of a cyclic group of order $n$ equals the number of divisors of $n$. So $n$ must have three divisors. This can only be the case if $n$ is the square of a prime number. So, $G \cong C_{p^2}$.
Multigrid tutorial/book I was reading Press et al., "Numerical Recipes", which contains a section about the multigrid method for numerically solving boundary value problems. However, the chapter is quite brief and I would like to understand multigrid to the point where I will be able to implement a more advanced and faster version than the one provided in the book. The tutorials I found so far are very elaborate and aimed at graduate students. I have notions of several related topics (relaxation methods, preconditioning), but still the combination of PDEs and multigrid methods is mind-blowing for me... Thanks for any tips for a good explanatory book, website or article.
First, please don't be bluffed by those fancy terms coined by computational scientists, and don't worry about preconditioning or conjugate gradient. The multigrid method for numerical PDE can be viewed as a standalone subject; basically what it does is make use of the "information" on both finer and coarser meshes in order to solve a linear equation system (obtained from the discretization of the PDE on these meshes), and it does this in an iterative fashion. IMHO Vassilevski from Lawrence Livermore National Laboratory puts up a series of very beginner-oriented lecture notes, where he introduces the motivation and preliminaries first: how to get the $Ax = b$ type linear equation system from a boundary value problem of $-\Delta u = f$ with $u = g$ on $\partial \Omega$, what a condition number is, and how it affects our iterative solvers. Then he introduces all the well-established aspects of multigrid: what the basic idea of two-grid is, how we do smoothing on the finer mesh and error correction on the coarser mesh, V-cycle, W-cycle, etc. Algebraic multigrid (the multigrid that uses information from the mesh is often called a geometric method) and adaptive methods are covered too. Some example codes for the Poisson equation can be easily googled. If you have more time, this book has a user-friendly and comprehensive introduction to this topic together with some recent advancements.
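To complement the references, here is a minimal, self-contained V-cycle for the 1D Poisson problem (a sketch under illustrative choices — 3 sweeps of weighted Jacobi, full-weighting restriction, linear interpolation — not code from any of the cited sources):

```python
import numpy as np

# Solve -u'' = f on (0,1), u(0)=u(1)=0, on N = 2^k intervals.

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2*u[1:-1] - u[:-2] - u[2:]) / h**2
    return r

def smooth(u, f, h, sweeps=3, omega=2/3):
    for _ in range(sweeps):                      # weighted Jacobi
        u = u + omega * 0.5 * h**2 * residual(u, f, h)
    return u

def v_cycle(u, f, h):
    n = len(u) - 1
    if n == 2:                                   # coarsest grid: solve exactly
        u[1] = f[1] * h**2 / 2
        return u
    u = smooth(u, f, h)                          # pre-smoothing
    r = residual(u, f, h)
    rc = np.zeros(n//2 + 1)                      # full-weighting restriction
    rc[1:-1] = 0.25*r[1:-2:2] + 0.5*r[2:-1:2] + 0.25*r[3::2]
    ec = v_cycle(np.zeros_like(rc), rc, 2*h)     # coarse-grid correction
    e = np.zeros_like(u)
    e[::2] = ec                                  # interpolate correction back
    e[1::2] = 0.5*(ec[:-1] + ec[1:])
    return smooth(u + e, f, h)                   # post-smoothing

N, h = 64, 1/64
x = np.linspace(0, 1, N + 1)
f = np.pi**2 * np.sin(np.pi*x)                   # exact solution: sin(pi*x)
u = np.zeros(N + 1)
for it in range(8):
    u = v_cycle(u, f, h)
    print(it, np.abs(residual(u, f, h)).max())   # residual drops per cycle
```

The point of the structure: smoothing kills the high-frequency error on the fine mesh, and the recursive coarse-grid solve handles the smooth remainder cheaply, which is what gives multigrid its mesh-independent convergence rate.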
Multiplicative but non-additive function $f : \mathbb{C} \to \mathbb{R}$ I'm trying to find a function $f : \mathbb{C} \to \mathbb{R}$ such that * *$f(az)=af(z)$ for any $a\in\mathbb{R}$, $z\in\mathbb{C}$, but *$f(z_1+z_2) \ne f(z_1)+f(z_2)$ for some $z_1,z_2\in\mathbb{C}$. Any hints or heuristics for finding such a function?
HINT: Look at $z$ in polar form.
Check that a curve is a geodesic. Suppose $M$ is a two-dimensional manifold. Let $\sigma:M \rightarrow M$ be an isometry such that $\sigma^2=1$. Suppose that the fixed point set $\gamma=\{x \in M| \sigma(x)=x\}$ is a connected one-dimensional submanifold of $M$. The question asks to show that $\gamma$ is the image of a geodesic.
Let $N=\{x\in M:\sigma(x)=x\}$ and fix $p\in N$. Exercise 1: Prove that either $1$ or $-1$ is an eigenvalue of $d\sigma_p:T_p(M)\to T_p(M)$. Exercise 2: Prove that if $v\in T_p(M)$ is an eigenvector of $d\sigma_p:T_p(M)\to T_p(M)$ of sufficiently small norm, then the unique geodesic $\gamma:I\to M$ for some open interval $I\subseteq \mathbb{R}$ such that $\gamma(0)=p$ and $\gamma'(0)=v$ has image contained in $N$. (Hint: isometries of Riemannian manifolds take geodesics to geodesics.) I hope this helps!
Find delta with a given epsilon for $\lim_{x\to-2}x^3 = - 8$ Here is the problem. If $$\lim_{x\to-2}x^3 = - 8$$ then find $\delta$ to go with $\varepsilon = 1/5 = 0.2$. Is $\delta = -2$?
Sometimes Calculus students are under the impression that in situations like this there is a unique $\delta$ that works for the given $\epsilon$ and that there is some miracle formula or computation for finding it. This is not the case. In certain situations there are obvious choices for $\delta$, in certain situations there are not. In any case you are asking for some $\delta\gt 0$ (!!!) such that for all $x$ with $|x-(-2)|\lt\delta$ we have $|x^3-(-8)|\lt 0.2$. Once you have found some $\delta\gt 0$ that does it, every smaller $\delta\gt 0$ will work as well. This means that you can guess some $\delta$ and check whether it works. In this case this is not so difficult as $x^3$ increases if $x$ increases. So you only have to check what happens if you plug $x=-2-\delta$ and $x=-2+\delta$ into $x^3$, and then for all $x$ with $|x-(-2)|\lt\delta$ you will get values of $x^3$ that fall between these two extremes. For an educated guess on $\delta$, draw a sketch. This should be enough information to solve this problem.
Diameter of Nested Sequence of Compact Set Possible Duplicate: the diameter of nested compact sequence Let $(E_j)$ be a nested sequence of compact subsets of some metric space; $E_{j+1} \subseteq E_j$ for each $j$. Let $p > 0$, and suppose that each $E_j$ has diameter $\ge p$ . Prove that $$E = \bigcap_{j=1}^{\infty} E_j$$ also has diameter $\ge p$.
For each $j$ pick two points $x_j, y_j \in E_j$ such that $d(x_j,y_j) \ge p$. Since $x_j \in E_1$ for all $j$, and $E_1$ is compact, the sequence $(x_j)$ has a convergent subsequence $(x_{\sigma(j)})$ say, and likewise $(y_{\sigma(j)})$ has a convergent subsequence $(y_{\tau \sigma(j)})$. What can you say about the limits of these subsequences?
Breakdown of solution to inviscid Burgers equation Let $u = f(x-ut)$ where $f$ is differentiable. Show that $u$ (almost always) satisfies $u_t + uu_x = 0$. Under what circumstances is it not necessarily satisfied? This is a question in a tutorial sheet I have been given and I am slightly stuck with the second part. To show that $u$ satisfies the equation I have differentiated it to get: $u_t = -f'(x-ut)u$ $u_x = f'(x-ut)$ Then I have substituted these results into the original equation. The part I am unsure of is where it is not satisfied. If someone could push me in the right direction it would be much appreciated.
We have \[ u_t = f'(x-ut)(x-ut)_t = -f'(x-ut)(u_t t + u) \iff \bigl(1 + tf'(x-ut)\bigr)u_t = -uf'(x-ut) \] and \[ u_x = f'(x-ut)(x-ut)_x = f'(x-ut)(1 - u_xt) \iff \bigl(1 + tf'(x-ut)\bigr)u_x = f'(x-ut) \] Which gives that \[ \bigl(1 + tf'(x-ut)\bigr)(u_t +uu_x) = 0 \] so at each point either $1 + tf'(x-ut) = 0$ or $u_t + uu_x = 0$.
How do we know how many branches the inverse function of an elementary function has? How do we know how many branches the inverse function of an elementary function has? For instance the Lambert W function. How do we know how many branches it has at e.g. $z=-0.5$, $z=0$, $z=0.5$ or $z=2i$?
Suppose your elementary function $f$ is entire and has an essential singularity at $\infty$ (as in the case you mention, with $f(z) = z e^z$). Then Picard's "great" theorem says that $f(z)$ takes on every complex value infinitely often, with at most one exception. Thus for every $w$ with at most one exception, the inverse function will have infinitely many branches at $w$. Determining whether that exception exists (and what it is) may require some work. In this case it is easy: $f(z) = 0$ only for $z=0$ because the exponential function is never $0$, so the exception is $w=0$.
How to find the least path consisting of the segments AP, PQ and QB Let $A = (0, 1)$ and $B = (2, 0)$ in the plane. Let $O$ be the origin and $C = (2, 1)$. Suppose $P$ moves on the segment $OB$ and $Q$ moves on the segment $AC$. Find the coordinates of $P$ and $Q$ for which the length of the path consisting of the segments $AP$, $PQ$ and $QB$ is least.
Hint: Let $A'$ be the point one unit above $A$. Let $B'$ be the point one unit below $B$. Join $A'$ and $B'$ by a straight line. Show that this gives the length of the minimal path.
Child lamp problem A street lamp is 12 feet above the ground. A child 3 feet in height amuses itself by walking in such a way that the shadow of its head moves along lines chalked on the ground. (1) How would the child walk if the chalked line is (a) straight, (b) a circle, (c) a square? (2) What difference would it make if the light came from the sun instead of a lamp? Example: The problem is from Sawyer's "Mathematician's Delight". Note: Since this is my first post here, I would like to note that this is not homework. I am just trying to improve my math/problem solving skills.
Similar triangles do the work: join each point of the chalked line to the lamp. When the child's head casts its shadow at that point, the head lies $\frac 14$ of the way along this segment (since $3 = \frac 14\cdot 12$), so the child stands at $\frac 34$ of the point's distance from the base of the lamp. So the child walks in the same shape: line, circle, or square, scaled to $\frac 34$ of the figure. For the sun, the rays are parallel, so the child's head traces the same figure as the chalk line, and the child walks the chalked figure at full size, merely translated.
What is the inverse function of $\ x^2+x$? I think the title says it all; I'm looking for the inverse function of $\ x^2+x$, and I have no idea how to do it. I thought maybe you could use the quadratic equation or something. I would be interesting to know.
If you want to invert $x^2 + x$ on the interval $x \ge -1/2$, write $y = x^2 + x$, so $x^2 + x -y = 0$. Use the quadratic formula with $a=1$, $b=1$, and $c=-y$ to find $$ x= \frac{-1 + \sqrt{1+4y}}{2}.$$ (The choice of $+\sqrt{1+4y}$ rather than $-\sqrt{1+4y}$ is because we are finding the inverse of the right side of the parabola. If you want to invert the left side, you would use the other sign.)
Find the necessary and sufficient conditions on $a$, $b$ so that $ax^2 + b = 0$ has a real solution. This question is really confusing me, and I'd love some help but not the answer. :D Is it asking: What values of $a$ and $b$ result in a real solution for the equation $ax^2 + b = 0$? $a = b = 0$ would obviously work, but how does $x$ come into play? There'd be infinitely many solutions if $x$ can vary as well ($a = 1$, $x = 1$, $b = -1$, etc.). I understand how necessary and sufficient conditions work in general, but how would it apply here? I know it takes the form of "If $p$ then $q$" but I don't see how I could apply that to the question. Is "If $ax^2 + b = 0$ has a real solution, then $a$ and $b =$ ..." it?
I assume the question is "find conditions that are necessary and sufficient to guarantee solutions" rather than "find necessary conditions and also find sufficient conditions for a solution." If the former is the case, then you're asked for constraints on $a$ and $b$ such that (1) if the conditions are met then $ax^2+b$ is zero for some $x$ and (2) if the conditions are not met then $ax^2+b$ isn't zero for any $x$. So, when will $ax^2+b$ have some value $x$ for which the expression is zero? Well, as André suggested, try to solve $ax^2+b=0$ "mechanically". By subtracting we have the equivalent equation $ax^2 = -b$ and we'd like to divide by $a$ to get $x^2=-b/a$. Unfortunately, we can't do that if $a=0$, so we need to consider two cases: (1) $a=0$; (2) $a\ne 0$. In the first case, our equation becomes $0\cdot x^2+b=0$, namely $b=0$, and if $b=0$ (and, of course, $a=0$) then any $x$ will satisfy this. In other words, there's a solution (actually infinitely many solutions) if $a=b=0$, as you've already noted. Now, in case (2) we can safely divide by $a$ to get $x^2=-b/a$. When does this have a solution? We know that $x^2\ge 0$ no matter what $x$ is, so when will our new equation have a solution? You said you don't want the full answer, so I'll denote the answer you discover by $C$, which will be of the form "some condition on $a\text{ and }b$". Once you've done that, your full answer will be: $ax^2+b=0$ has a solution if and only if (1) $a=b=0$, or (2) your condition $C$. These are sufficient, since either guarantees a solution, and necessary, since if neither is true, then there won't be a solution (since we exhausted all possibilities for solutions).
Homomorphism of free modules $A^m\to A^n$ Let $\varphi:A^m\to A^n$ be a homomorphism of free modules over a commutative (associative and without zero divisors) unital ring $A$. Is it true that $\ker\varphi\subset A^m$ is a free module? Thanks a lot!
Here is a counterexample which is in some sense universal for the case $m = 2, n = 1$. Let $R = k[x, y, z, w]/(xy - zw)$ ($k$ a field). This is an integral domain because $xy - zw$ is irreducible. The homomorphism $R^2 \to R$ given by $$(m, n) \mapsto (xm - zn)$$ has a kernel which contains both $(y, w)$ and $(z, x)$. If the kernel is free then it must be free on these two elements by degree considerations, but $x(y, w) = (zw, xw) = w(z, x)$ is a relation between them.
Is this AM/GM refinement correct or not? In Chap 1.22 of their book Mathematical Inequalities, Cerone and Dragomir prove the following interesting inequality. Let $A_n(p,x)$ and $G_n(p,x)$ denote resp. the weighted arithmetic and the weighted geometric means, where $x_i\in[a,b]$ and $p_i\ge0$. $P_n$ is the sum of all $p_i$. Then the following holds: $$ \exp\left[\frac{1}{b^2P_n^2}\sum\limits_{i<j} p_ip_j(x_i-x_j)^2\right]\le\frac{A_n(p,x)}{G_n(p,x)} \le\exp\left[\frac{1}{a^2P_n^2}\sum\limits_{i<j} p_ip_j(x_i-x_j)^2\right] $$ The relevant two pages of the book may be consulted here. I need help to figure out what is wrong with my next arguments. I will only be interested in the LHS of the inequality. Let $n=3$ and let $p_i=1$ for all $i$ and hence $P_n=3$. Let $x,y,z\in[a,b]$. We can assume that $b=\max\{x,y,z\}$. Our inequality is equivalent to: $$ f(x,y,z)=\frac{x+y+z}{3\sqrt[3]{xyz}}-\exp\left[\frac{(x-y)^2+(x-z)^2+(y-z)^2}{9\max\{x,y,z\}^2}\right]\ge0 $$ According to Mathematica $f(1, 2, 2)=-0.007193536514508<0$ which means that the inequality as stated is incorrect. Moreover, if I plot the values of $f(x,2,2)$ here is what I get: You can download my Mathematica notebook here. As you can see our function is negative for some values of $x$ which means that the inequality does not hold for those values. Obviously it is either me that is wrong or Cerone and Dragomir's derivation. I have read their proofs and I can't find anything wrong so I suspect there is a flaw in my exposition above. Can someone help me find it?
Your modesty in suspecting that the error is yours is commendable, but in fact you found an error in the book. The "simple calculation" on p. $49$ is off by a factor of $2$, as you can easily check using $n=2$ and $p_1=p_2=1$. Including a factor $\frac12$ in the inequality makes it come out right. You can also check this by using $f(x)=x^2$, $n=2$, $p_1=p_2=1$ and $x_1=-1$, $x_2=1$ in inequality $(1.151)$ on p. $48$. Then the difference between the average of the function values and the function value of the average is $1$, and the book's version of the inequality says that it's $2$.
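To make the correction concrete, here is a small Python check (a sketch of mine, applying the factor-of-two fix exactly as described above) using the counterexample $x=1$, $y=z=2$:

```python
import math

x, y, z = 1.0, 2.0, 2.0
A = (x + y + z) / 3                      # arithmetic mean
G = (x * y * z) ** (1 / 3)               # geometric mean
S = (x - y)**2 + (x - z)**2 + (y - z)**2
b = max(x, y, z)

book_lower = math.exp(S / (9 * b**2))        # lower bound as printed
fixed_lower = math.exp(S / (2 * 9 * b**2))   # with the extra factor 1/2

print(A / G)        # ~1.0499
print(book_lower)   # ~1.0571 > A/G, so the printed inequality fails
print(fixed_lower)  # ~1.0282 < A/G, so the corrected inequality holds
```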
Proof: Symmetric and Positive Definite If $A$ is a symmetric and positive definite matrix and matrix $B$ has linearly independent columns, is it true that $B^T A B$ is symmetric and positive definite?
If the matrices are real, yes: take a nonzero $x\in\Bbb R^d$. Since $B$ has linearly independent columns, $Bx\neq 0$, and $A$ being positive definite we have $x^tB^tABx=(Bx)^tA(Bx)>0$; symmetry follows from $(B^tAB)^t=B^tA^tB=B^tAB$. But if the matrices are complex it's not true: take $A=I_2$, $B:=\pmatrix{1&0\\0&i}$, then $B^tAB=\pmatrix{1&0\\0&-1}$, which is not positive definite. It is true again if you replace the transpose by the adjoint (the entrywise conjugate of the transpose).
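A quick numerical illustration of the real case (a Python/NumPy sketch; the particular matrices are random and only meant as an example):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 4))
A = X @ X.T + 4 * np.eye(4)      # symmetric positive definite by construction
B = rng.normal(size=(4, 2))      # columns linearly independent with probability 1

M = B.T @ A @ B
print(np.allclose(M, M.T))                 # True: M is symmetric
print(np.all(np.linalg.eigvalsh(M) > 0))   # True: M is positive definite
```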
Cardinality of $R[x]/\langle f\rangle$ via canonical remainder reps. Suppose $R$ is a field and $f$ is a polynomial of degree $d$ in $R[x]$. How do you show that each coset in $R[x]/\langle f\rangle$ may be represented by a unique polynomial of degree less than $d$? Secondly, if $R$ is finite with $n$ elements, how do you show that $R[x]/\langle f\rangle$ has exactly $n^d$ cosets?
Hint $ $ Recall $\rm\ R[x]/(f)\:$ has a complete system of reps being the least degree elements in the cosets, i.e. the remainders mod $\rm\:f,\:$ which uniquely exist by the Polynomial Division Algorithm. Therefore the cardinality of the quotient ring equals the number of such reps, i.e. the number of polynomials $\rm\in R[x]\:$ with degree smaller than that of $\rm\:f.$ Remark $\ $ This is a generalization of the analogous argument for $\rm\:\Bbb Z/m.\:$ The argument generalizes to any ring with a Division (with Remainder) Algorithm, i.e. any Euclidean domain, as explained in the linked answer.
Solving $(x+y) \exp(x+y) = x \exp(x)$ for $y$. While thinking about the Lambert $W$ function I had to consider Solving $(x+y) \exp(x+y) = x \exp(x)$ for $y$. This is what I arrived at: (for $x$ and $y$ not zero) $(x+y) \exp(x+y) = x \exp(x)$ $x\exp(x+y) + y \exp(x+y) = x \exp(x)$ $\exp(y) + y/x \exp(y) = 1$ $y/x \exp(y) = 1 - \exp(y)$ $y/x = (1-\exp(y))/\exp(y)$ $x/y = \exp(y)/(1-\exp(y))$ $x = y\exp(y)/(1-\exp(y))$ $1/x = 1/y\exp(y) -1/y$ And then I got stuck. Can we solve for $y$ by using Lambert $W$ function? Or how about an expression with integrals?
The solution of $ (x+y) \exp(x+y) = x \exp(x) $ is given in terms of the Lambert W function Let $z=x+y$, then we have $$ z {\rm e}^{z} = x {\rm e}^{x} \Rightarrow z = { W} \left( x{{\rm e}^{x}} \right) \Rightarrow y = -x + { W} \left( x{{\rm e}^{x}} \right) \,. $$ Added: Based on the comment by Robert, here are the graphs of $ y = -x + { W_0} \left( x{{\rm e}^{x}} \right) $ and $ y = -x + { W_{-1}} \left( x{{\rm e}^{x}} \right) $
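If you want to evaluate this numerically, SciPy exposes both real branches of $W$; the sketch below (my own, with an arbitrarily chosen $x$) checks the solution. Note that for the lower branch to give a real $y$ one needs $xe^x\in(-1/e,0)$, i.e. $x\in(-1,0)$:

```python
import numpy as np
from scipy.special import lambertw

x = -0.5                       # chosen so that x*e^x lies in (-1/e, 0)
t = x * np.exp(x)

for k in (0, -1):              # principal branch W_0 and lower branch W_{-1}
    z = lambertw(t, k=k).real  # real-valued on this range
    y = -x + z                 # so that x + y = W_k(x e^x)
    ok = np.isclose((x + y) * np.exp(x + y), x * np.exp(x))
    print(k, y, ok)
# k = 0 returns y = 0 (the trivial solution); k = -1 gives the nontrivial one.
```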
Calculus and Physics Help! If a particle's position is given by $x = 4-12t+3t^2$ (where $t$ is in seconds and $x$ is in meters): a) What is the velocity at $t = 1$ s? Ok, so I have an answer: $v = \frac{dx}{dt} = -12 + 6t$ At $t = 1$, $v = -12 + 6(1) = -6$ m/s But my problem is that I want to see the steps of using the formula $v = \frac{dx}{dt}$ in order to achieve $-12 + 6t$... I am in physics with calc, and calc is only a co-requisite for this class, so I'm taking it while I'm taking physics. As you can see calc is a little behind. We're just now learning limits in calc, and I was hoping someone could help me figure this out.
You see the problem here is that the question is asking for a velocity at $t=1$. This means that they require an instantaneous velocity, which is by definition the derivative of the position function at $t=1$. If you don't want to use derivative rules for some reason and you don't mind a little extra work then you can calculate the velocity from a limit. (In reality this is the same thing as taking the derivative as you will later see. A derivative is just a limit itself.) We start from the average velocity, given by the formula $\overline{v} = \frac{\Delta x}{\Delta t}$. Between time $t$ and $t+\Delta t$ we have $$\overline{v} = \frac{x(t+\Delta t) - x(t)}{\Delta t} = \frac{\left(3(t+\Delta t)^2 - 12(t+\Delta t)+4\right)-\left(3t^2-12t+4\right)}{\Delta t} $$ Simplifying a bit $$\overline{v} = \frac{6t\Delta t + 3(\Delta t)^2 -12\Delta t}{\Delta t}$$ Now comes the calculus. If we take the limit as $\Delta t \rightarrow 0$, that is if we take the time interval to be smaller and smaller so that the average velocity approaches the instantaneous, then the instantaneous velocity $v$ is $$v = \lim_{\Delta t\rightarrow 0}\frac{6t\Delta t + 3(\Delta t)^2 -12\Delta t}{\Delta t}=\lim_{\Delta t\rightarrow 0}\left(6t-12+3\Delta t\right)=6t-12$$ Notice that this is exactly the derivative you had before and the above steps are in fact a calculation of the derivative from first principles.
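You can also watch this limit happen numerically; a tiny Python sketch (the names are mine) evaluating the difference quotient at $t=1$ for shrinking $\Delta t$:

```python
def x_pos(t):
    return 4 - 12*t + 3*t**2

t = 1.0
for dt in (1e-1, 1e-3, 1e-6):
    print(dt, (x_pos(t + dt) - x_pos(t)) / dt)
# prints -5.7, -5.997, -5.999997: the quotients approach v(1) = 6*1 - 12 = -6
```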
Replacing one of the conditions of a norm Consider the definition of a norm on a real vector space X. I want to show that replacing the condition $\|x\| = 0 \Leftrightarrow x = 0\quad$ with $\quad\|x\| = 0 \Rightarrow x = 0$ does not alter the concept of a norm (a norm under the "new axioms" will fulfill the "old axioms" as well). Any hints on how to get started?
All you need to show is that $\|0\|=0$. Let $x$ be any element of the normed space. What is $\|0\cdot x\|$?
Show that the discrete metric cannot be obtained from a norm when $X\neq\{0\}$ If $X \neq \{ 0\}$ is a vector space, how does one go about showing that the discrete metric on $X$ cannot be obtained from any norm on $X$? I know this is because $0$ does not lie in $X$, but I am having problems formalizing a proof of this. This is also my final question for some time; after this I will reread the answers, and not stop until I can finally understand these strange spaces.
You know that the discrete metric only takes values of $1$ and $0$. Now suppose it comes from some norm $||.||$. Then for any $\alpha$ in the underlying field of your vector space and $x,y \in X$, you must have that $$\lVert\alpha(x-y)\rVert = \lvert\alpha\rvert\,\lVert x-y\rVert.$$ But now $||x-y||$ is a fixed number and I can make $\alpha$ arbitrarily large and consequently the discrete metric does not come from any norm on $X$.
Confusion related to the concatenation of two grammars I have this confusion. Let's say I have two languages produced by type 3 grammars such that G1 = <Vn1,Vt,P1,S1> and G2 = <Vn2,Vt,P2,S2>. I need to find a type 3 grammar G3 such that L(G3) = L(G1)L(G2). I can't use $S3 \rightarrow S1S2$ to get the concatenation, because that production is not type 3. So what should I do?
First change one of the grammars, if necessary, to make sure that they have disjoint sets of non-terminal symbols. If you’re allowing only productions of the forms $X\to a$ and $X\to Ya$, make the new grammar $G$ generate a word of $L(G_2)$ first and then a word of $L(G_1)$: replace every production of the form $X\to a$ in $G_2$ by the production $X\to S_1a$. If you’re allowing only productions of the forms $X\to a$ and $X\to aY$, replace every production of the form $X\to a$ in $G_1$, by the production $X\to aS_2$. If you’re allowing productions of both forms, you’re not talking about type $3$ grammars.
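Here is a small Python sketch of the second transformation (right-linear productions of the forms $X\to a$ and $X\to aY$); the data representation is my own, and it assumes the two grammars already have disjoint nonterminals:

```python
# A production maps a nonterminal to right-hand sides that are either a single
# terminal "a" or a pair ("a", "Y") meaning the terminal a followed by Y.
def concat_right_linear(P1, S1, P2, S2):
    """Grammar for L(G1)L(G2), assuming disjoint nonterminal sets."""
    P = {X: list(rhss) for X, rhss in P2.items()}   # keep G2 as is
    for X, rhss in P1.items():
        # X -> a becomes X -> a S2 (hand over to G2); X -> a Y is unchanged
        P[X] = [(rhs, S2) if isinstance(rhs, str) else rhs for rhs in rhss]
    return P, S1                                    # new start symbol is S1

# Toy example: G1 generates {"a"}, G2 generates {"b"}.
P, S = concat_right_linear({"S1": ["a"]}, "S1", {"S2": ["b"]}, "S2")
print(P, S)   # {'S2': ['b'], 'S1': [('a', 'S2')]} 'S1'  -> generates "ab"
```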
A problem involving Laplace operator $\Omega$ is a bounded open set in $\mathbb R^n$, consider the number $ r = \inf \{ \left\| {du} \right\|_{{L^2}(\Omega )}^2:u \in H_0^1(\Omega ),{\left\| u \right\|_{{L^2}(\Omega )}} = 1\}$ If for some $v\in H_0^1(\Omega )$ the infimum is achieved, then is $\Delta v\in L^2(\Omega)$?
Let $$ f, g: H_0^1(\Omega) \to \mathbb{R}, f(u)=\|\nabla u\|_{L^2(\Omega)}^2,\ g(u)=\|u\|_{L^2(\Omega)}^2. $$ Then $$ r=\inf\{f(u):\ u \in H_0^1(\Omega),\ g(u)=1\}. $$ If $$ r=f(v), $$ where $v$ belongs to $H_0^1(\Omega)$ and satisfies $g(v)=1$, then there is a $\lambda \in \mathbb{R}$ such that $$ Df(v)\cdot h=\lambda Dg(v)\cdot h \quad \forall h \in H_0^1(\Omega), $$ i.e. $$ \int_\Omega\nabla v\cdot\nabla h=\lambda\int_\Omega vh \quad \forall h \in H_0^1(\Omega). $$ The latter shows that $v$ is a weak solution of the PDE $$ -\Delta u=\lambda u, \ u \in H_0^1(\Omega). $$ Hence $\Delta v =-\lambda v=-f(v)v \in L^2(\Omega)$.
Is an abstract simplicial complex a quiver? Let $\Delta$ be an abstract simplicial complex. Then for $B\in \Delta$ and $A\subseteq B$ we have that $A\in\Delta$. If we define $V$ to be the set of faces of $\Delta$, construct a directed edge from $B$ to $A$ if $A$ is a face of $B$ (i.e. $A\subseteq B$) and define $E$ to be the set of directed edges, then will $\Gamma=(V,E)$ be a quiver?
Yes: a quiver is just a directed graph, and this one is the digraph of the poset of faces ordered by inclusion, with an arrow $B\to A$ whenever $A\subseteq B$.
Problem with Ring $\mathbb{Z}_p[i]$ and integral domains Let $$\Bbb Z_p[i]:=\{a+bi\;:\; a,b \in \Bbb Z_p\,\,,\,\, i^2 = -1\}$$ -(a)Show that if $p$ is not prime, then $\mathbb{Z}_p[i]$ is not an integral domain. -(b)Assume $p$ is prime. Show that every nonzero element in $\mathbb{Z}_p[i]$ is a unit if and only if $x^2+y^2$ is not equal to $0$ ($\bmod p$) for any pair of elements $x$ and $y$ in $\mathbb{Z}_p$. (a)I think that I can prove the first part of this assignment. Let $p$ be not prime. Then there exist $x,y$ such that $p=xy$, where $1<x<p$ and $1<y<p$. Then $(x+0i)(y+0i)=xy=0$ in $\mathbb{Z}_p[i]$. Thus $(x+0i)(y+0i)=0$ in $\mathbb{Z}_p[i]$. Since none of $x+0i$ and $y+0i$ is equal to $0$ in $\mathbb{Z}_p[i]$, we have $\mathbb{Z}_p[i]$ is not an integral domain. However, I don't know how to continue from here.
Note that $(a+bi)(a-bi)=a^2+b^2$. If $a^2+b^2\equiv0\pmod p$ with $(a,b)\neq(0,0)$, then $a+bi$ is a nonzero zero divisor, hence not a unit. And if $a^2+b^2$ is not zero modulo $p$, then it's invertible modulo $p$, so $a+bi$ is a unit with inverse $(a^2+b^2)^{-1}(a-bi)$.
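For small $p$ one can confirm the equivalence by brute force; a Python sketch (helper names are my own, and the trivial pair $x=y=0$ is excluded from the sum-of-squares condition):

```python
def all_nonzero_are_units(p):
    """True iff every nonzero element of Z_p[i] is a unit."""
    els = [(a, b) for a in range(p) for b in range(p) if (a, b) != (0, 0)]
    mul = lambda u, v: ((u[0]*v[0] - u[1]*v[1]) % p, (u[0]*v[1] + u[1]*v[0]) % p)
    return all(any(mul(u, v) == (1, 0) for v in els) for u in els)

def no_nonzero_sum_of_squares(p):
    return not any((x*x + y*y) % p == 0
                   for x in range(p) for y in range(p) if (x, y) != (0, 0))

for p in (3, 5, 7, 11, 13):
    print(p, all_nonzero_are_units(p), no_nonzero_sum_of_squares(p))
# the two columns agree: True for p = 3, 7, 11 and False for p = 5, 13
```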
What is $dx$ in integration? When I was at school and learning integration in maths class at A Level my teacher wrote things like this on the board. $$\int f(x)\, dx$$ When he came to explain the meaning of the $dx$, he told us "think of it as a full stop". For whatever reason I did not raise my hand and question him about it. But I have always shaken my head at such a poor explanation for putting a $dx$ at the end of integration equations such as these. To this day I do not know the purpose of the $dx$. Can someone explain this to me without resorting to grammatical metaphors?
I once went at some length illustrating the point that for the purpose of evaluating integrals it is useful to look at $d$ as a linear operator.
Extensions of Bertrand's Postulate Two questions came to mind when I was reading the proof for Bertrand's Postulate (there's always a prime between $n$ and $2n$): (1) Can we change the proof somehow to show that: $\forall x > x_{0}$, there exists a prime $p$ $\in [x, ax]$, for some $a \in (1, 2)$? (2) Suppose the (1) is true, what is the smallest value of $x_{0}$? I'm not sure how to prove either of them, any input would be greatly appreciated! And correct me if any of the above statement is wrong. Thank you!
I think you would enjoy the page PRIME GAPS. My own version of the conjecture of Shanks, actually both a little stronger and a little weaker, is $$ p_{n+1} < p_n + 3 \,\left( \log p_n \right)^2, $$ for all primes $p_n \geq 2.$ This is true as high as has been checked. Shanks conjectured that $$ \limsup \frac{p_{n+1} - p_n}{\left( \log p_n \right)^2} = 1, $$ while Granville later corrected the number $1$ on the right hand side to $2 e^{- \gamma} \approx 1.1229,$ see CRAMER GRANVILLE. There is no hope of proving this, but I enjoy knowing what seems likely as well as what little can be proved. Here is a table from the third edition (2004) of Unsolved Problems in Number Theory by Richard K. Guy, in which $p = p_n$ is a prime but $n$ is not calculated, then $g = p_{n+1} - p_n,$ and $p = p(g),$ so $p_{n+1} = p + g.$
Determine the equations needed to solve a problem I am trying to come up with the set of equations that will help solve the following problem, but am stuck without a starting point - I can't classify the question to look up more info. The problem: Divide a set of products among a set of categories such that a product does not belong to more than one category and the total products within each category satisfies a minimum number. Example: I have 6 products that can belong to 3 categories with the required minimums for each category in the final row. For each row, the allowed categories for that product are marked with an X - eg. Product A can only be categorized in CatX, Product B can only be categorized in CatX or CatY. $$ \begin{matrix} Product & CatX & CatY & CatZ \\ A & X & & \\ B & X & X & \\ C & X & & \\ D & X & X & X \\ E & & & X\\ F & & X & \\ Min Required& 3 & 1 & 2\\ \end{matrix} $$ The solution - where * marks how the product was categorized: $$ \begin{matrix} Product & CatX & CatY & CatZ \\ A & * & & \\ B & * & & \\ C & * & & \\ D & & & * \\ E & & & *\\ F & & * & \\ Total & 3 & 1 & 2\\ \end{matrix} $$
Let $x_{ij} = 1$ if you put product $i$ in category $j$, $0$ otherwise. You need $\sum_i x_{ij} \ge m_j$ for each $j$, where $m_j$ is the minimum for category $j$, and $\sum_j x_{ij} = 1$ for each $i$, and each $x_{ij} \in \{0,1\}$. The last requirement takes it out of the realm of linear algebra. However, look up "Transportation problem".
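For an instance this small you can even skip the solver and brute-force the $0$-$1$ program; a Python sketch of the example above (the table data is hard-coded):

```python
from itertools import product

allowed = {                       # allowed categories per product
    "A": ["X"], "B": ["X", "Y"], "C": ["X"],
    "D": ["X", "Y", "Z"], "E": ["Z"], "F": ["Y"],
}
minimum = {"X": 3, "Y": 1, "Z": 2}

names = sorted(allowed)
for choice in product(*(allowed[p] for p in names)):
    if all(choice.count(c) >= m for c, m in minimum.items()):
        print(dict(zip(names, choice)))
        break
# {'A': 'X', 'B': 'X', 'C': 'X', 'D': 'Z', 'E': 'Z', 'F': 'Y'}
```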
proving a inequality about sup Possible Duplicate: How can I prove $\sup(A+B)=\sup A+\sup B$ if $A+B=\{a+b\mid a\in A, b\in B\}$ I want to prove that $\sup\{a+b\}\le\sup{a}+\sup{b}$ and my approach is that I claim $\sup a+ \sup b= \sup\{\sup a + \sup b\}$ and since $\sup a +\sup b \ge a+b$ the inequality is proved. Is my approach correct?
Perhaps this is what you are looking for. Consider $$ \sup_{x\in X}(a(x)+b(x))=\color{#C00000}{\sup_{{x\in X\atop y\in X}\atop x=y}(a(x)+b(y))\le\sup_{x\in X\atop y\in X}(a(x)+b(y))}=\sup_{x\in X}a(x)+\sup_{x\in X}b(x) $$ The red inequality is true because the $\sup$ on the left is taken over a smaller set than the $\sup$ on the right. The equalities are essentially definitions.
How to solve an nth degree polynomial equation The typical approach of solving a quadratic equation is to solve for the roots $$x=\frac{-b\pm\sqrt{b^{2}-4ac}}{2a}$$ Here, the degree of x is given to be 2 However, I was wondering on how to solve an equation if the degree of x is given to be n. For example, consider this equation: $$a_0 x^{n} + a_1 x^{n-1} + \dots + a_n = 0$$
If all of the equation's roots are real and negative and the polynomial is monic (the coefficient of $x^n$ must be $1$), then one root can at least be bracketed: the root of smallest absolute value lies between $-\frac{k}{z}$ and $-n\frac{k}{z}$, where $k$ is the constant term, $z$ is the coefficient of $x$, and $n$ is the highest power of $x$. Indeed, writing the roots as $-s_1,\dots,-s_n$ with $s_i>0$, we have $k=\prod_i s_i$ and $z=k\sum_i 1/s_i$, so $\frac{1}{\min_i s_i}\le\frac{z}{k}\le\frac{n}{\min_i s_i}$.
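A numerical illustration of the bracketing claim (Python sketch with a hand-picked monic cubic whose roots are $-1,-2,-5$):

```python
import numpy as np

coeffs = [1, 8, 17, 10]            # (x+1)(x+2)(x+5) = x^3 + 8x^2 + 17x + 10
n, z, k = 3, 17, 10                # degree, coefficient of x, constant term
r = max(np.roots(coeffs).real)     # root closest to zero, here -1
print(-n * k / z, "<=", r, "<=", -k / z)   # -30/17 <= -1 <= -10/17: True
```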
Question: Find all values of real number a such that $ \lim_{x\to1}\frac{ax^2+a^2x-2}{x^3-3x+2} $ exists. Thanks in advance for looking at my question. I was tackling this limits problem using this method, but I can't seem to find any error with my work. Question: Find all values of real number a such that $$ \lim_{x\to1}\frac{ax^2+a^2x-2}{x^3-3x+2} $$ exists. My Solution: Suppose $\lim_{x\to1}\frac{ax^2+a^2x-2}{x^3-3x+2}$ exists and is equal to $L$. We have $$\lim_{x\to1}(ax^2+a^2x-2)=\lim_{x\to1}\frac{ax^2+a^2x-2}{x^3-3x+2}\cdot\lim_{x\to1}(x^3-3x+2)=L\cdot 0=0$$ Therefore, $\lim_{x\to1}(ax^2+a^2x-2)=0$, implying $a(1)^2+a^2(1)-2=0$. Solving for $a$, we get $a=-2$ or $a=1$. Apparently, the answer is only $a=-2$. I understand where they are coming from, but I can't see anything wrong with my solution either.
Since the denominator's limit is 0, the numerator cannot have a nonzero limit if the limit of the quotient is to be defined. The only hope is that the numerator's limit is also 0, and that after analyzing the indeterminate form, it does have a limit. So, it must be the case that $\lim_{x\to1} ax^2+a^2x-2=0$, and consequently $a^2+a-2=0$. The solutions to that are $a=-2$ and $a=1$, and if you substitute them into the expression, you will find that the numerator now factors into $-2(x-1)^2$ in the first case, and $(x+2)(x-1)$ in the second case. In either one, the $(x-1)$ can be cancelled with the $(x-1)$ factor in the denominator, so that the singularity (a pole of order 2) might disappear. $\lim_{x\to1}\frac{x^2+x-2}{x^3-3x+2}=\lim_{x\to1}\frac{x^2+x-2}{(x-1)(x^2+x-2)}=\lim_{x\to 1}\frac{1}{x-1}$ does not exist, so the $a=1 $ case is a false positive. In the other case: $\lim_{x\to1}\frac{-2(x-1)^2}{(x-1)^2(x+2)}=\lim_{x\to1}\frac{-2}{(x+2)}=\frac{-2}{3}$
Numerical optimization with nonlinear equality constraints A problem that often comes up is minimizing a function $f(x_1,\ldots,x_n)$ under a constraint $g(x_1,\ldots,x_n)=0$. In general this problem is very hard. When $f$ is convex and $g$ is affine, there are well known algorithms to solve this. In many cases however, $g$ is not affine. For general $g$ this problem is hopelessly hard to solve, but what if the constraint is easy to solve on its own? In particular, suppose that if we are given $x_1,\ldots,x_{n-1}$, then Newton's method on the constraint $g(x_1,\ldots,x_n)=0$ can easily find $x_n$. Are there effective algorithms to solve the constrained optimization problem in that case? To solve these kinds of problems I have tried to use Lagrange multipliers, and directly apply Newton's method to solve those (nonlinear) equations, but this does not converge. Something that does work is to add a penalty term for violating the constraint to the objective function, similar to how the barrier method handles inequalities. Unfortunately (but as expected) this is not very fast at getting an accurate answer.
If you have a single equality constraint you might try to rewrite your constraint $g(x_1,...,x_n)=0$ as $x_i = h(x_1,...x_{i-1},x_{i+1},...x_n)$, then substitute for the $i$th variable in your objective function and solve the problem as an unconstrained optimization problem.
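A minimal Python sketch of this elimination idea on a toy problem of my own choosing (minimize $x_1^2+x_2^2$ subject to $x_1^2+x_2-1=0$, whose constraint solves as $x_2=1-x_1^2$):

```python
from scipy.optimize import minimize_scalar

def h(x1):                    # objective after substituting x2 = 1 - x1^2
    x2 = 1 - x1**2
    return x1**2 + x2**2

res = minimize_scalar(h, bounds=(0, 2), method="bounded")
x1 = res.x
print(x1, 1 - x1**2, res.fun)  # x1 = 1/sqrt(2) ~ 0.707, x2 = 0.5, value 0.75
```

Of course this only works when the constraint can be solved for one variable in closed form; when it can only be solved numerically (as in the question), the same idea still applies with the objective calling a Newton solve for $x_n$ internally.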
Statements in Euclidean geometry that appear to be true but aren't I'm teaching a geometry course this semester, involving mainly Euclidean geometry and introducing non-Euclidean geometry. In discussing the importance of deductive proof, I'd like to present some examples of statements that may appear to be true (perhaps based on a common student misconception or over-generalisation), but are not. The aim would be to reinforce one of the reasons given for studying deductive proof: to properly determine the validity of statements claimed to be true. Can anyone offer interesting examples of such statements? An example would be that the circumcentre of a triangle lies inside the triangle. This is true for triangles without any obtuse angles - which seems to be the standard student (mis)conception of a triangle. However, I don't think that this is a particularly good example because the misconception is fairly easily revealed, as would statements that hold only for isosceles or right-angled triangles. I'd really like to have some statements whose lack of general validity is quite hard to tease out, or has some subtlety behind it. Of course the phrase 'may appear to be true' is subjective. The example quoted should be understood as indicating the level of thinking of the relevant students. Thanks.
Here is one example that is quite similar in nature to the statement in the question about the center of the circumcircle lying inside a triangle, but the dubious part ("lie inside") is somewhat better disguised. I report it only because I just found it in Wikipedia, with a literature reference. The incenter (that is, the center for the inscribed circle) of the orthic triangle is the orthocenter of the original triangle. It is easily verified that for a triangle with an obtuse angle the orthocenter lies outside the orthic triangle, so it cannot be the incenter in this case; it is one of the excenters instead.
Inducing a well-defined function on a set What does it mean to say that $f$ induces a well-defined function on the set $X$? I'm confused about what the term induce means here, and what role the set $X$ has.
It means that a function is such that we can define a(nother) well defined function on some set $\,X\,$ that'll depend, in some definite way, on the original function. For example: if $\,f:G\to H\,$ is a group homomorphism and there's some group $\,N\leq \ker f\,$ , with $\,N\triangleleft G\,$ , then $\,f\,$ induces a well-defined group homomorphism $$\overline f:X:=G/N\to H\,\,,\,\,\text{defined by}\,\,\overline f(gN):=f(g)$$ Please do note that the original function's domain is $\,G\,$ whereas the induced function's domain is $\,G/N\,$ . These two domains are both different sets and different groups.
Is it possible to take the absolute value of both sides of an equation? I have a problem that says: Suppose $3x^2+bx+7 > 0$ for every number $x$, Show that $|b|<2\sqrt21$. Since the quadratic is greater than 0, I assume that there are no real solutions since $y = 3x^2+bx+7$, and $3x^2+bx+7 > 0$, $y > 0$ since $y>0$ there are no x-intercepts. I would use the discriminant $b^2-4ac<0$. I now have $b^2-4(3)(7)<0$ $b^2-84<0$ $b^2<84$ $b<\pm\sqrt{84}$ Now how do I change $b$ to $|b|$? Can I take the absolute value of both sides of the equation or is there a proper way to do this?
What you've written is an inequality, not an equation. If you have an equation, say $a=b$, you can conclude that $|a|=|b|$. But notice that $3>-5$, although $|3|\not>|-5|$. If $3x^2+bx+7>0$ for every value of $x$, then the quadratic equation $3x^2+bx+7=0$ has no solutions that are real numbers. That implies that the discriminant $b^2-4ac=b^2-4\cdot3\cdot7$ is negative. If $b^2-84<0$ then $b^2<84$, so $|b|<\sqrt{84}$. Now observe that $\sqrt{84}=\sqrt{4}\sqrt{21}=2\sqrt{21}$.
Error in proof of self-adjointness of 1D Laplacian I have successfully checked the self-adjointness of a simple and classic differential operator, the 1D Laplacian $$D = \frac {d^2}{dx^2}: L_2(0,\infty) \rightarrow L_2(0,\infty)$$ defined on $$\{f(x) | f'' \in L_2(0,\infty), f(0) = 0\},$$ then I open an article and see in its first example that this operator is not self-adjoint but strictly symmetric (Hermitian). Can anybody point out the error in the reasoning below? Find the adjoint operator, i.e. its domain. $$ (Df,g) = \int_0^\infty f''\overline g dx = ... = \left. (f'\overline g - f \overline g') \right|_0^{\infty} + (f, D^*g). $$ To satisfy adjointness we should make the free term vanish for fixed $g$ and all $f \in D_D$, the domain of $D$: $$ \left. (f'\overline g - f \overline g') \right|_0^{\infty} = 0. $$ The second term in it vanishes because of $f(0) = 0$, so $$ \left. (f'\overline g ) \right|_0^{\infty} = 0. $$ Because $f$ is arbitrary in the domain of $D$ and hence can have nonzero first derivative, this equality holds for fixed $g$ if and only if $$g(0) = 0.$$ Thus the domains of the direct and adjoint operators are the same, which means that it is self-adjoint. What I see in the article. Boris Pavlov wrote: Example. Symplectic extension procedure for the differential operator Consider the second order differential operator $$ L_0u = - \frac {d^2u}{dx^2} $$ defined on all square integrable functions, $u \in L_2(0, \infty)$, with square-integrable derivatives of the first and second order and vanishing near the origin. This operator is symmetric and its adjoint $L^+_0$ is defined by the same differential expression on all square integrable functions with square integrable derivatives of the first and second order and no additional boundary condition at the origin. Where is the error? There certainly is one, because the operator $D$ with domain $f'(0)=\alpha f(0)$ is symmetric; its domain is a superset of the domain regarded in my message, hence my operator is not maximally symmetric and hence cannot be self-adjoint.
The operator in Pavlov's article is not the same as yours. His has a domain of functions "vanishing near the origin", i.e. on a neighborhood of 0. For your operator, functions in the domain need only vanish at the origin. So there is no error; your operator is self-adjoint and his is not. Regarding your last paragraph, the operator $D$ with domain $f'(0) = \alpha f(0)$ (Robin boundary conditions) is not an extension of yours. Consider for instance $f(x) = e^{-x} \sin x$; it has $f(0)= 0$ but $f'(0) \ne 0$, so it is in the domain of your original operator but not the latter.
Example about hyperbolicity. $\def\abs#1{\left|#1\right|}$I would like to understand this example: * *Why is the following set a hyperbolic manifold? $X=\{[1:z:w]\in \mathbb{CP}_2\mid0<\abs z< 1, \abs w < \abs{\exp(1/z)}\}$ It's an example given in the book Hyperbolic Manifolds and Holomorphic Mappings: An Introduction by Kobayashi, in order to give a counterexample to an optimistic generalization of the Big Picard Theorem. They claim that it is biholomorphic to $\mathbb{D}\times\mathbb{D}^*$. I don't understand why.
$\mathbb{CP}^2$ is a complex manifold in a natural way, with charts given by the maps: $\begin{array}{lclc} \varphi_i: & U_i:=\{[z_0:z_1:z_2]\in \mathbb{CP}^2 \ | \ z_i\neq 0\} & \longrightarrow & \mathbb{C}^2 \\ & {[z_0:z_1:z_2]} & \longmapsto & (\dfrac{z_j}{z_i},\dfrac{z_k}{z_i}) \end{array}$ where $j,k\neq i$. Now you can write $U_0$ as $\{[1:z:w]\in \mathbb{CP}^2\}$, and by definition of a biholomorphic map between two complex manifolds, $X$ is biholomorphic to $\varphi_0(X)=\{(z,w)\in\mathbb C^2 \mid 0<|z|<1,\ |w|<|e^{1/z}|\}$. The map $(z,w)\in \varphi_0(X)\mapsto (z,we^{-\frac{1}{z}})\in \mathbb D^\star\times \mathbb D$ is clearly a biholomorphism. So $X$ is biholomorphic to $\mathbb D^\star\times \mathbb D$. Since $\mathbb D$ and $\mathbb D^\star$ are hyperbolic manifolds, so is $\varphi_0(X)$, and consequently $X$ is a hyperbolic manifold.
Prove that $\zeta (4)\le 1.1$ Prove the following inequality $$\zeta (4)\le 1.1$$ I saw on the site some proofs for $\zeta(4)$ that use Fourier or Euler's way for computing its precise value, and that's fine and I can use it. Still, I wonder if there is a simpler way around for proving this inequality. Thanks!
$$\zeta(4) < \sum_{n=1}^{6} \frac{1}{n^{4}} + \int_{6}^{\infty} \frac{dx}{x^4} < 1.1$$
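Checking the arithmetic (a one-off Python computation):

```python
partial = sum(1 / n**4 for n in range(1, 7))   # first six terms, ~1.08112
tail = 1 / (3 * 6**3)                          # integral_6^inf x^-4 dx = 1/648
print(partial + tail)                          # ~1.08267 < 1.1
```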
Limit of a complex function How to find the limit of such a complex function? $$ \lim_{z\rightarrow \infty} \frac{z \left| z \right| - 3 \Im z + i}{z \left| z \right|^2 +2z - 3i}. $$
Consider moduli and use the triangle inequality. The modulus of the numerator is at most $|z|^2+3|z|+1$ because $|\Im z|\leqslant|z|$ and $|\mathrm i|=1$. The modulus of the denominator is at least $|z|^3-2|z|-3$ (by the reverse triangle inequality) because $|\mathrm i|=1$. Hence the limit of the ratio is $0$ when $|z|\to\infty$.
Affine Subspace Confusion I'm having some trouble deciphering the wording of a problem. I'm given $V$ a vector space over a field $\mathbb{F}$. Letting $v_1$ and $v_2$ be distinct elements of $V$, define the set $L\subseteq V$: $L=\{rv_1+sv_2 | r,s\in \mathbb{F}, r+s=1\}$. It's the next part where I can't figure out what they mean. "Let $X$ be a non-empty subset of $V$ which contains all lines through two distinct elements of $X$." No idea what this set $X$ is. Once I figure that out, I'm supposed to show that it's a coset of some subspace of $V$. I'm hoping this part will become clearer once I know what $X$ is...
By definition the set $L$ in your question consists of all the points on a line. So you may think of $L$ as a line (the line that passes through the two points $v_1$ and $v_2$). Hence if you are considering two points $v_1$, $v_2$ of $X$ giving you the line $L$, then a subset $X$ containing all lines (here, the one line) through the two points is a subset $X$ containing $L$: $L \subseteq X$. Note: There might be a bit of confusion here, since by saying "$X$ contains all lines..." you might be understood as saying that the elements of $X$ are lines. But that would mean that $X$ is not a subset of $V$, so I assumed that by "$X$ contains all lines..." you mean that $X$ contains all the points on the lines (all the points that make up the lines).
Recommended book on modeling/differential equations I am soon attending an undergrad course named differential equations and modeling. I have dealt with differential equations before, but in that course I just learned a bunch of methods for solving them. Are there any cool books with more of a 'modeling' view of this subject? Like: given a problem A, you have to derive equations for solving it, then solve it. This is often the hard part of math problems in my view.
Note: this list is different if you meant partial differential equations. * *A First Course in Differential Equations, Modeling, and Simulation Carlos A. Smith, Scott W. Campbell *Differential Equations: A Modeling Approach, Frank R. Giordano, Maurice Weir *Differential Equations And Boundary Value Problems: Computing and Modeling by Charles Henry Edwards, David E. Penney, David Calvis *Modeling and Simulation of Dynamic Systems by Robert L. Woods *Simulation and Inference for Stochastic Differential Equations: With R Examples by Stefano M. Iacus
Open Measurable Sets Containing All Rational Numbers So I am trying to figure out a proof for the following statement, but I'm not really sure how to go about it. The statement is: "Show that for every $\epsilon>0$, there exists an open set G in $\mathbb{R}$ which contains all of the rational numbers but $m(G)<\epsilon$." How can it be true that the open set G contains all of the rational numbers but has an arbitrarily small measure?
Hint: if you order the rationals, you can put an interval around each successive one and take the union for your set. If the intervals decrease in length quickly enough....
Find the following limit $\lim_{x\to 0}\frac{\sqrt[3]{1+x}-1}{x}$ and $\lim_{x\to 0}\frac{\cos 3x-\cos x}{x^2}$ Find the following limits $$\lim_{x\to 0}\frac{\sqrt[3]{1+x}-1}{x}$$ Any hints/solutions how to approach this? I tried many ways, rationalization, taking out x, etc. But I still can't rid myself of the singularity. Thanks in advance. Also another question. Find the limit of $$\lim_{x\to 0}\frac{\cos 3x-\cos x}{x^2}$$ I worked up till here, after which I got stuck. I think I need to apply the squeeze theorem, but I am not sure how to. $$\lim_{x\to 0}\frac{\cos 3x-\cos x}{x^2} = \lim_{x\to 0}\frac{-2\sin\frac{1}{2}(3x+x)\sin\frac{1}{2}(3x-x)}{x^2}=\lim_{x\to 0}\frac{-2\sin2x\sin x}{x^2}=\lim_{x\to 0}\frac{-2(2\sin x\cos x)\sin x}{x^2}=\lim_{x\to 0}\frac{-4\sin^2 x\cos x}{x^2}$$ Solutions or hints will be appreciated. Thanks in advance! L'hospital's rule not allowed.
As N.S. said, viewing this limit as a derivative is one way to solve it. You could also do $u=x+1$ to simplify your expression and consider $f(u)=u^{1/3}$. $$u=x+1\rightarrow \lim_{u\rightarrow 1} \frac{u^{1/3}-1}{u-1}=\lim_{u\rightarrow 1} \frac{f(u)-f(1)}{u-1}=f'(1)$$ But $f'(u) = \frac{1}{3}u^{-2/3}$, so $f'(1) = \frac{1}{3}\cdot 1^{-2/3}=\frac{1}{3}$.
Uniform convergence of $f_n\rightarrow f$ and limit of zeroes to $f_n$ I'm having some doubts on a homework question: Let $f_n\rightarrow f$ uniformly on compact subsets of an open connected set $\Omega \subset \mathbb{C}$, where $f_n$ is analytic, and $f$ is not identically equal to zero. (a) Show that if $f(w)=0$ then we can write $w=\lim z_n$, where $f_n(z_n)=0$ for all $n$ sufficiently large. (b) Does this result hold if we only assume $\Omega$ to be open? I'm not too sure how to do (a)-- I think I might be able to do it just by using the definition of uniform convergence and the fact that $f_n$ has a zero at $z_n$, but this doesn't use the assumption that $f_n$ is analytic or that $\Omega$ is connected. I'm also guessing that the result doesn't hold if we only assume $\Omega$ to be open and not connected for obvious topological reasons, but not knowing exactly how to do (a), I'm not sure if I know how to prove this. Could anyone give me some pointers? Thanks in advance.
Take a small circle around $w$. Then by Rouché's theorem $f_n$ has a zero $z_n$ inside the circle for $n$ large enough (and maybe several if $w$ is a multiple zero of $f$). Now shrink the circle and repeat: you will obtain the convergent sequence $(z_n)$. By the way, this sketch of proof shows why we must assume that $f$ is not identically zero near $w$: if $f\equiv 0$ just take $f_n=\frac {1}{n}$ (which has no zero at all!) to get a counter-example.
Pathologies in module theory Linear algebra is a very well-behaved part of mathematics. Soon after you have mastered the basics you got a good feeling for what kind of statements should be true -- even if you are not familiar with all major results and counterexamples. If one replaces the underlying field by a ring, and therefore looks at modules, things become more tricky. Many pathologies occur that one maybe would not expect coming from linear algebra. I am looking for a list of such pathologies where modules behave differently than vector spaces. This list should not only be a list of statements but all phenomena should be illustrated by an example. To start the list I will post an answer below with all pathologies that I know from the top of my head. This should also explain better what kind of list I have in mind.
I am surprised that it is not mentioned here: an example of a free module M which has bases having different cardinalities. Let $V$ be a vector space of countably infinite dimension over a division ring $D$. Let $R=End_D(V)$. We know that $R$ is free over $R$ with basis $\{1\}$. We claim that given a positive integer $n$, there is an $R$-basis $B_n=\{f_1,f_2, \dots f_n\}$ for $R$ having $n$ elements. Let $B=\{e_k\}_{k=1}^{\infty}$ be a basis of $V$ over $D$. Define $f_1, \dots , f_n\in R$ by specifying their values on $B$ as in the following table: \begin{array}{|c| ccccc|} \hline & f_1 & f_2 & f_3 & \dots & f_n \\ \hline e_1& e_1&0 &0 & \dots & 0\\ e_2& 0 & e_1 &0 & \dots & 0\\ \vdots& & & & \ddots\\ e_n & 0 &0 &0 & \dots & e_1 \\ \hline e_{n+1} & e_2 &0 &0 & \dots & 0 \\ e_{n+2} & 0 &e_2 &0 &\dots &0 \\ \vdots & & & &\ddots & \\ e_{2n} & 0 &0 &0 & \dots & e_2 \\ \hline \vdots & \vdots & \vdots & \vdots& \vdots & \vdots \\ \hline e_{kn+1} & e_{k+1} & 0 & 0 & \dots & 0 \\ e_{kn+2} & 0 & e_{k+1}&0& \dots &0 \\ \vdots & &&&\ddots & \\ e_{(k+1)n} &0&0&0& \dots& e_{k+1} \\ \hline \vdots & \vdots & \vdots & \vdots& \vdots & \vdots \\ \end{array} Now we check that $B_n$ is an $R$-basis of $R$. * *Linearly independent: If $\sum_{i=1}^{n} \alpha_i f_i=0$ with $\alpha_i \in R,$ then evaluating on the successive blocks of $n$ vectors, namely $e_{kn+1}, \dots , e_{(k+1)n}, k=0,1,\dots ,$ we get $\alpha_i(e_{k+1})=0\ \forall\ k$ and $1 \le i \le n ;$ i.e. $\alpha_i \equiv 0\ \forall\ i$, showing that $B_n$ is linearly independent over $R$. * *$B_n$ spans $R$: Let $f\in R$; then $f= \sum_{i=1}^{n} \alpha_i f_i,$ where $\alpha_i \in R$ are defined by their values on $B$ as in the following table: \begin{array}{|c| ccccc|} \hline & \alpha_1 & \alpha_2 & \alpha_3 & \dots & \alpha_n \\ \hline e_1& f(e_1)&f(e_2) &f(e_3) & \dots & f(e_n)\\ e_2& f(e_{n+1}) & f(e_{n+2}) &f(e_{n+3}) & \dots & f(e_{2n})\\ \vdots& & & & \ddots\\ e_n & f(e_{(n-1)n+1}) & f(e_{(n-1)n+2}) &f(e_{(n-1)n+3}) & \dots & f(e_{n^2}) \\ \hline e_{n+1} & f(e_{n^2+1}) &f(e_{n^2+2}) &f(e_{n^2+3}) & \dots & f(e_{n^2+n}) \\ e_{n+2} & . & . &. &\dots &f(e_{n^2+2n}) \\ \vdots & & & &\ddots & \\ e_{2n} & f(e_{2n^2-n+1}) &f(e_{2n^2-n+2}) &f(e_{2n^2-n+3}) & \dots & f(e_{2n^2}) \\ \hline \vdots & \vdots & \vdots & \vdots& \vdots & \vdots \\ \hline e_{kn+1} & f(e_{kn^2+1}) & f(e_{kn^2+2}) & f(e_{kn^2+3}) & \dots & f(e_{kn^2+n}) \\ e_{kn+2} & . & .&.& \dots &f(e_{kn^2+2n}) \\ \vdots & &&&\ddots & \\ e_{(k+1)n} &.&.&.& \dots& f(e_{(k+1)n^2}) \\ \hline \vdots & \vdots & \vdots & \vdots& \vdots & \vdots \\ \end{array} This shows that $B_n$ spans $R$. So for each $n > 0$, $B_n=\{f_1,\dots,f_n\}$ is a basis of cardinality $n$.
How to show a quasi-compact, Hausdorff space is totally disconnected? This is from Atiyah-Macdonald. I was asked to show that if every prime ideal of $A$ is maximal, then $A/R$ is absolutely flat and Spec($A$) is a $T_{1}$ space; further, Spec($A$) is Hausdorff. The author then asked me to show Spec($A$) is totally disconnected. I am wondering why, because it is not automatic that a compact Hausdorff space is totally disconnected (consider $\mathbb{S}^{1}$ as the one-point compactification of $\mathbb{R}$, for example). Why can we put a discrete topology on Spec$A$ when we know the sets $\{p\}$ are closed?
Okay, so first, there is a distinction made between quasicompactness and compactness. A topological space $X$ is quasicompact if every open cover of $X$ has a finite subcover. The topological space $X$ is said to be compact if it is quasicompact and Hausdorff. We know that $Spec(A)$ is quasicompact for any ring $A$. If you have managed to show for the above problem that $Spec(A)$ is Hausdorff, then this means that $Spec(A)$ is compact. Claim: For a non-unit $f$, the distinguished open set $D(f) = \{p \in Spec(A): f \notin p \}$ is closed. Proof of claim: Since $f$ is not a unit, it follows that $D(f) \subsetneq Spec(A)$. Our goal will be to show that $Spec(A) - D(f)$ is open. Let $p \in Spec(A) - D(f)$. Then, $f \in p$. Note that $f$ is nilpotent in $A_p$, since the only prime of $A_p$ is $p(A_p)$. Thus, there exists $s_p \in A - p$ such that $s_pf^n = 0$ for some $n \in \mathbb{N}$. Then $p \in D(s_p)$, and $D(s_p) \cap D(f) = \emptyset$. Thus, $D(s_p) \subset Spec(A) - D(f)$. Since $p$ was an arbitrary point of $Spec(A) - D(f)$, this shows that $Spec(A) - D(f) = \bigcup_{p \in Spec(A) - D(f)} D(s_p)$ is open, hence $D(f)$ is closed. Thus, for all $f \in A$, $D(f)$ is a clopen set (simultaneously closed and open); for a unit $f$ this is trivial, since then $D(f) = Spec(A)$. Now let $C$ be a connected component of $Spec(A)$ with more than one element, say $p_1, p_2$. Since $p_1, p_2$ are maximal and distinct, there exists $f \in p_1$ such that $f \notin p_2$. Then $D(f)$ is a clopen set that contains $p_2$ but not $p_1$. This shows that $C \cap D(f)$ is a proper clopen set of $C$, which contradicts the connectedness of $C$.
Probability problem: cars on the road I heard this problem, so I might be missing pieces. Imagine there are two cities separated by a very long road. The road has only one lane, so cars cannot overtake each other. $N$ cars are released from one of the cities, the cars travel at constant speeds $V$ chosen at random and independently from a probability distribution $P(V)$. What is the expected number of groups of cars arriving simultaneously at the other city? P.S.: Supposedly, this was a Princeton physics qualifier problem, if that makes a difference.
There are already two answers that show that under a certain interpretation of the question the answer is the $N$-th harmonic number. This can be seen more directly by noting that the $k$-th car is the "leader" of a group iff it is the slowest of the first $k$ cars, which occurs with probability $1/k$. Thus the expected number of leaders of groups is the $N$-th harmonic number, and of course there are as many groups as there are leaders of groups.
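A Monte Carlo check of this (a Python sketch of mine; speeds are drawn uniformly, though by the symmetry argument any continuous distribution gives the same answer):

```python
import random

def simulated_groups(N, trials=20000):
    total = 0
    for _ in range(trials):
        speeds = [random.random() for _ in range(N)]
        # car k leads a group iff it is slower than every car released before it
        total += sum(1 for k in range(N)
                     if all(speeds[k] < speeds[j] for j in range(k)))
    return total / trials

N = 6
print(simulated_groups(N), sum(1 / k for k in range(1, N + 1)))  # both ~2.45
```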
What is it called when a function is not continuous but still can have a derivative? Consider the following function (I think it has a name, but I don't remember it): $$ f(x) = \cases{-1 & $x < 0$ \\ 0 & $x = 0$ \\ 1 & $x > 0$} $$ $f'(x)$ is zero everywhere except at $x=0$, where $f$ is not continuous. But suppose we ignore the right half of the real line and define $f(0)$ to be $-1$. Then $f$ has a left derivative at $x=0$, and it is zero. We can do the same thing from the right, so in a way it could make a little bit of sense to say that $f'(0) =0$. Of course, I understand that going by the definition $f$ isn't differentiable at $x=0$. But one could imagine an alternative definition of derivative for discontinous functions, in which one calculates lateral derivatives by redefining the function to be continuous, and then we see if the lateral derivatives match. This doesn't always work; for example it's hard to meaningfully assign a derivative to $x \mapsto |x|$ at $x=0$. Are there other functions with this property? Does it have a name?
Interestingly, you can assign a derivative to the function $\operatorname{abs}$ at $0$ by using the following symmetric definition: $$\frac{\mathrm df(x)}{\mathrm dx}=\lim_{h\to0}\frac{f(x+h)-f(x-h)}{2h}.$$ Thus, taking the limit, $$\operatorname{abs}'(x)=\frac{\mathrm d}{\mathrm dx}|x|=\operatorname{sgn}(x)=\cases{-1, & $x < 0;$ \\ \phantom{-}0, & $x = 0;$ \\\phantom{-}1, & $x > 0.$}$$
What is the probability of the box? Your box of cereal may be a contest winner! It's rattling, which 100% of winning boxes do. Of course 1% of all boxes rattle and only one box in a million is a winner. What is the probability that your box is a winner?
The correct solution would be $0.0001$ ($1/10000$), wouldn't it? By Bayes' theorem, $$P(\text{winner}\mid\text{rattles})=\frac{P(\text{rattles}\mid\text{winner})\,P(\text{winner})}{P(\text{rattles})}=\frac{1\cdot 10^{-6}}{10^{-2}}=10^{-4}.$$ It's late, but it seems to me that Drew Christianson miscalculated and dedocu mixed $p(A)$ and $p(B)$ - correct me please, if I'm wrong.
Proof that a perfect set is uncountable There is something I don't understand about the proof that perfect sets are uncountable. The same proof is present in Rudin's Principles of Mathematical Analysis. Do we assume that our construction of $U_n$ must contain all points of $S$? What if we are only collecting evenly-indexed points of $S$ ($x_{2n}$)? We would still get an infinitely countable subset of $S$, and the rest of $S$ can be used to provide points for $V$. What am I missing?
There is an alternative proof, using what is a consequence of Baire's Theorem: THM Let $(M,d)$ be a complete metric space with no isolated points. Then $(M,d)$ is uncountable. PROOF Assume $M$ is countable, and let $\{x_1,x_2,x_3,\dots\}$ be an enumeration of $M$. Since each singleton is closed, each $M_i=M\smallsetminus \{x_i\}$ is open. Moreover, each of them is dense, since each point is an accumulation point of $M$. By Baire's Theorem, $\displaystyle\bigcap_{i\in\Bbb N} M_i$ must be dense, hence nonempty, but it is readily seen to be empty, which is absurd. $\blacktriangle$. COROLLARY Let $(M,d)$ be complete, $P$ a perfect subset of $M$. Then $P$ is uncountable. PROOF $(P,d\mid_P)$ is a complete metric space with no isolated points. ADD It might be interesting to note that one can prove Baire's Theorem using a construction completely analogous to the proof suggested in the post. THM Let $(X,d)$ be complete, and let $\langle G_n\rangle$ be a sequence of open dense sets in $X$. Then $G=\displaystyle \bigcap_{n\in\Bbb N}G_n$ is dense. PROOF We can construct a sequence $\langle F_n\rangle$ of closed sets as follows. Let $x\in X$, and take $\epsilon >0$, set $B=B(x,\epsilon)$. Since $G_1$ is dense, there exists $x_1\in B\cap G_1$. Since both $B$ and $G_1$ are open, there exists a ball $B_1=B(x_1,r_1)$ such that $$\overline{B_1}\subseteq B\cap G_1$$ Since $G_2$ is open and dense, there is $x_2\in B_1\cap G_2$ and again an open ball $B_2=B(x_2,r_2)$ such that $\overline{B_2}\subseteq B_1\cap G_2$, but we ask now that $r_2\leq r_1/2$. We then successively take $r_{n+1}<\frac{r_n}2$. Inductively, we see we can construct a sequence of closed bounded sets $F_n=\overline{B_n}$ such that $$F_{n+1}\subseteq F_n\\ \operatorname{diam}F_n\to 0$$ Since $X$ is complete, there exists $\alpha\in \displaystyle\bigcap_{n\in\Bbb N}F_n$. But, by construction, we see that $\displaystyle\alpha\in \bigcap_{n\in\Bbb N}G_n\cap B(x,\epsilon)$ Thus $G$ is dense in $X$.$\blacktriangle.$
Please explain how this ratio is being calculated A,B and C are partners of a company. A receives $\frac{x}{y}$ of profit. B and C share the remaining profit equally among them. A's income increases by $I_a$ if overall profit increases from P% to Q%. How much A had invested in their company. I know the answer: $\frac{I_a\cdot100}{P-Q}$. This may be a very simple question, but I don't understand how it comes.
Let $A$ be the amount that Alicia has invested in the company. Let $\frac{x}{y}$ be the fraction of the company that she owns. So if $V$ is the total value of the company, then $A=\frac{x}{y}V$. The old percentage profit was $P$. So the old profit was $\frac{P}{100}V$. Alicia got the fraction $\frac{x}{y}$ of this, so Alicia's old profit was $$\frac{x}{y}\frac{P}{100}V=\frac{P}{100}\frac{x}{y}V=\frac{P}{100}A.$$ Similarly, Alicia's new profit is $$\frac{Q}{100}A,$$ so the change in profit is $$\frac{Q}{100}A-\frac{P}{100}A.$$ This is equal to $I_a$. So $$I_a=\frac{Q-P}{100}A,$$ and therefore $$A=\frac{100 I_a}{Q-P}.$$ Note that the fraction $\frac{x}{y}$ turned out to be irrelevant, as of course did the fact that there are other shareholders.
Taylor's Tangent Approximation This is my question: a function of 2 variables is given by $f(x,y) = e^{2x-3y}$. How do I find the tangent approximation to $f(0.244, 1.273)$ near $(0,0)$? I need some guidance for this question. Am I supposed to do the linear approximation or the quadratic approximation? I need some explanation of the formula. Thanks
We obtain a more precise approximation if we represent $f(x,\,y)$ as $$f(x,\,y)=e^{2x-3y}=e^{-3}e^{2x-3y+3}=e^{-3}e^{2x-3(y-1)}.$$ Then apply the formula for the tangent approximation to the function $g(x,\,y)=e^{2x-3(y-1)}$ with $a=0; \,b=1$, since the point $(0.244,\,1.273)$ is much closer to $(0,1)$ than to $(0,0)$.
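Comparing the two tangent planes numerically at the given point (a small Python sketch):

```python
import math

def f(x, y):
    return math.exp(2*x - 3*y)

x, y = 0.244, 1.273
exact = f(x, y)                                # ~0.0357

lin_00 = 1 + 2*x - 3*y                         # tangent plane at (0, 0)
lin_01 = math.exp(-3) * (1 + 2*x - 3*(y - 1))  # tangent plane at (0, 1)

print(exact, lin_00, lin_01)
# ~0.0357 vs -2.331 vs 0.0333: the expansion about (0, 1) is far more accurate
```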
The drying watermelon puzzle I couldn't find an explanation of this problem that I could understand. A watermelon consists of 99% water, and that water measures 2 litres. After a day in the sun the watermelon dries up and now consists of 98% water. How much water is left in the watermelon? I know the answer is ~1 litre, but why is that? I've read a couple of answers but I guess I'm a bit slow because I don't understand why. EDIT I'd like you to assume that I know no maths. Explain it like you would explain it to a 10 year old.
At the beginning the solid material is $1\%$ of the total, which is a trifle (to be neglected) more than $1\%$ of the water volume of $2000\ {\rm cm}^3$. Therefore the solid material has volume $\sim20\ {\rm cm}^3$. After one day in the sun these $20\ {\rm cm}^3$ of solid material are still the same, but now they make up $2\%$ of the total. Therefore the total now will be $1000\ {\rm cm}^3$, or $1$ litre, and $98\%$ of this volume, or almost all of it, will be water.
Direct construction of Lebesgue measure I have seen two books for measure theory, viz, Rudin's, and Lieb and Loss, "Analysis". Both use some kind of Riesz representation theorem machinery to construct Lebesgue measure. Is there a more "direct" construction, and if so, what is a source?
The most popular way is to construct it using the Carathéodory extension theorem, starting from Lebesgue outer measure. This approach is not very intuitive, but it is a very powerful and general way of constructing measures. An even more direct construction, essentially the one developed by Lebesgue himself, defines the Lebesgue measurable sets to be the ones that can be well approximated (in terms of outer measure) by open sets from the outside and by closed sets from the inside, and shows that Lebesgue outer measure applied to these sets is an actual measure. You find this approach in A Radical Approach to Lebesgue's Theory of Integration by Bressoud and Real Analysis: Measure Theory, Integration, and Hilbert Spaces by Stein and Shakarchi.
Zero polynomial Possible Duplicate: Polynomial of degree $-\infty$? Today in Abstract Algebra my instructor briefly mentioned that sometimes the zero polynomial is defined to have degree $-\infty$. What contexts have caused this to become convention?
Persistence. You want formulas to make sense also when abusively applying them to cases involving the zero polynomial. For example, we have $\deg(f\cdot g)=\deg f +\deg g$ and $\deg (f+g)\le \max\{\deg f, \deg g\}$. Therefore we assign a symbolic value, if only for mnemonic purposes, of $-\infty$ as the degree of $0$, because that makes $-\infty =\deg(0\cdot g)=-\infty+\deg g$ and $\deg g = \deg (0+g)=\max\{-\infty,\deg g\}$ work.
Finding error bounds for Hermite interpolation I am unsure how to find the error bounds for Hermite interpolation. I have some kind of idea but I have a feeling that I am going wrong somewhere. $f(x)=3xe^x-e^{2x}$ with my x-values being 1 and 1.05 My Hermite interpolating polynomial is: $H(x)=.7657893864+1.5313578773(x-1)-2.770468386(x-1)^2-4.83859508(x-1)^2(x-1.05)$ Error Bound: $\large{f^{n+1}(\xi)\over (n+1)!}*(x-x_0)(x-x_1)...(x-x_n)$ $$\large{f^3 (\xi) \over 3!}(x-1)^2(x-1.5)$$ $(x-1)^2(x-1.5)=x^3-3.05x^2+3.1x-1.05$ We must find the maximum point of this cubic function which is at $(1.0333,1.8518463*10^{-5})$ $$\large{f^3 (\xi) \over 3!}*1.8518463*10^{-5}$$ Am I on the correct path, and how would I continue from here?
Two things. First, the second node is $1.05$, not $1.5$. Second, since your $H$ is the full Hermite interpolant on the two nodes $1$ and $1.05$ (it matches values and first derivatives, four conditions in all), the error term involves the fourth derivative: $$\frac{f^{(4)}(\xi)}{4!}(x-1)^2(x-1.05)^2,$$ so you should bound $|f^{(4)}|$ on $[1,1.05]$ and maximize $(x-1)^2(x-1.05)^2$ there (its maximum is at the midpoint $x=1.025$).
Why are Darboux integrals called Riemann integrals? As far as I have seen, the majority of modern introductory real analysis texts introduce Darboux integrals, not Riemann integrals. Indeed, many do not even mention Riemann integrals as they are actually defined (with Riemann sums as opposed to Darboux sums). However, they call the Darboux integrals Riemann integrals. Does anyone know the history behind this? I can understand why they use Darboux - I find it much more natural and the convergence is simpler in some sense (and of course the two are equivalent). But why do they call them Riemann integrals? Is this another mathematical misappropriation of credit or was Riemann perhaps more involved with Darboux integrals (which themselves may be misnamed)?
There are other examples, such as "An introduction to Real Analysis" by Wade. I don't know the history of these definitions at all. Once the dust settles over partitions, we have just one concept of integral left. The term "Riemann integral" is entrenched in so much of the literature that not using it isn't an option. One could use the term "Darboux integral" alongside "Riemann integral", but most students taking Intro to Real Analysis are sufficiently confused already. Mentioning names for the sake of mentioning names isn't what a math textbook should be doing. That job is best left to books on history of mathematics. If you feel bad for Darboux, be sure to give him credit for the theorem about intermediate values of the derivative. (Rudin proves the theorem, but attaches no name to it.) On a similar note: If I had my way, there'd be no mention of Maclaurin in calculus textbooks. Input: the total time spent by calculus instructors explaining the Maclaurin-Taylor nomenclatorial conundrum to sleep-deprived engineering freshmen. Output:
Measuring orderedness I've found this a frustrating topic to Google, and might have an entire field dedicated to it that I'm unaware of. Given an permutation of consecutive integers, I would like a "score" (real [0:1]) that evaluates how in-order it is. Clearly I could count the number of misplaced integers wrt the ordered array, or I could do a "merge sort" count of the number of swaps required to achieve order and normalise to the length of the array. Has this problem been considered before (I assume it has), and is there a summary of the advantages of various methods? I also assume there is no "true" answer, but am interested in the possibilities.
One book which treats metrics on permutations (that is, metrics on the symmetric group) is Persi Diaconis:"Group representations in probability and statistics" which it is possible to download from here: Link
Retraction of the Möbius strip to its boundary Prove that there is no retraction (i.e. a continuous map that restricts to the identity on the codomain) $r: M \rightarrow S^1 = \partial M$ where $M$ is the Möbius strip. I've tried to find a contradiction using the $r_*$ homomorphism between the fundamental groups, but they are both $\mathbb{Z}$ and nothing seems to go wrong...
If $\alpha\in\pi_1(\partial M)$ is a generator, its image $i_*(\alpha)\in\pi_1(M)$ under the inclusion $i:\partial M\to M$ is the square of an element of $\pi_1(M)$, so that if $r:M\to\partial M$ is a retraction, $\alpha=r_*i_*(\alpha)$ is also the square of an element of $\pi_1(\partial M)$. This is not so. (For all this to work, one has to pick a basepoint $x_0\in\partial M$ and use it to compute both $\pi_1(M)$ and $\pi_1(\partial M)$)
How to find sum of quadratic I got this quadratic expression from physics, and I need to find the sum of its terms up to an arbitrary point, written thusly: $$ \sum_{n=1}^{t}4.945n^2$$ Is there some way to quickly figure this out? Or links to tutorials?
There is the standard formula $$\sum_{k=1}^n k^2=\frac{n(n+1)(2n+1)}{6},$$ which can be proved by a pretty routine induction. For your sum, just pull the constant out: $$\sum_{n=1}^{t}4.945n^2=4.945\cdot\frac{t(t+1)(2t+1)}{6}.$$
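A quick check of the closed form against the direct sum (Python):

```python
t = 10
print(4.945 * t * (t + 1) * (2 * t + 1) / 6)       # 1903.825
print(sum(4.945 * n**2 for n in range(1, t + 1)))  # 1903.825 as well
```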
Is there a largest "nested" prime number? There are some prime numbers which I will call "nested primes" that have a curious property: if the $n$ digit prime number $p$ is written out in base 10 notation $p=d_1d_2...d_n$, then the nested sequence formed by deleting the last digit one at a time consists entirely of prime numbers. The definition is best illustrated by an example, for which I will choose the number $3733799$: not only is $3733799$ prime, but so are $\{3,37,373,3733,37337,373379\}$. See here and here if you want to check. Question: Does there exist a largest nested prime number, and if so, what is it?
From the comments in OEIS A024770 Primes in which repeatedly deleting the least significant digit gives a prime at every step until a single digit prime remains. The sequence ends at $a(83) = 73939133.$
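One can regenerate the whole list with a short breadth-first search (a Python sketch of mine; since every prime above $5$ ends in $1,3,7,9$, only those digits need to be appended):

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

frontier = [2, 3, 5, 7]       # the single-digit primes
largest = 7
while frontier:
    frontier = [10 * p + d for p in frontier for d in (1, 3, 7, 9)
                if is_prime(10 * p + d)]
    if frontier:
        largest = max(frontier)
print(largest)   # 73939133 -- the search dies out, so the set is finite
```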
Showing a vertical tangent exists at a given function. I want to apologise in advance for not having this in latex or some sort of neat code, I would be more than happy to learn how though. Anyway, for the function $y=4(x-1)^{2/5}$ I see there appears to be a vertical tangent at $x=1$, but how can I know for certain the vertical tangent exists at $x=1$? Would I just solve for $f'(x)$, letting $x=1$? But what would that tell me? Thanks.
Yes, you would check whether $f'(x)$ tends to $+\infty$ or $-\infty$ as $x\to 1$: $$ \frac{d}{dx}4(x-1)^{2/5}=\frac{8}{5}(x-1)^{-3/5} $$ As $x\to 1^+$ this tends to $+\infty$, and as $x\to 1^-$ it tends to $-\infty$ (the fifth root of a negative number is negative). So the slopes of secant lines blow up at $x=1$ and the curve has a vertical tangent there; since the signs disagree on the two sides, the graph in fact has a cusp at $x=1$.