Probability of getting two consecutive 7s without getting a 6 when two dice are rolled Two dice are rolled repeatedly until either A or B wins. A wins if we get two consecutive 7s (as the sum), and B wins if we get a 6 at any time. What is the probability of A winning the game?
Let $p_A$ and $p_B$ be the winning probabilities for $A$ and $B$, and $p_6$ and $p_7$ the probabilities of rolling a 6 and a 7. Now, by considering the different possibilities for the first roll (and if one starts with a 7, also the second roll), we find: $$p_A=p_6 \cdot 0 + p_7 (p_6 \cdot 0 +p_7+(1-p_6-p_7)p_A) + (1-p_6-p_7)p_A.$$ This is a linear equation for $p_A$. We have $p_6=\frac5{36}$, $p_7=\frac{6}{36}$, and therefore: $$p_A=\frac{p_7^2}{p_7p_6+p_7^2+p_6} =\frac{6}{41}.$$
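As a quick sanity check of the $6/41$ (not part of the derivation above; the number of simulated games is arbitrary), here is a short Monte Carlo sketch in Python:

```python
import random

def play():
    """One game: roll two dice repeatedly; B wins on a sum of 6,
    A wins on two sums of 7 in a row (any other sum resets the streak)."""
    prev7 = False
    while True:
        s = random.randint(1, 6) + random.randint(1, 6)
        if s == 6:
            return 'B'
        if s == 7:
            if prev7:
                return 'A'
            prev7 = True
        else:
            prev7 = False

N = 200_000
wins = sum(play() == 'A' for _ in range(N))
print(wins / N, 6 / 41)   # the empirical frequency should be close to 0.1463...
```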
Simple geometric proof for Snell's law of refraction Snell's law of refraction can be derived from Fermat's principle that light travels paths that minimize the time using simple calculus. Since Snell's law only involves sines I wonder whether this minimum problem has a simple geometric solution.
Perhaps this will help, if you are looking for a non-calculus approach. Consider two parallel rays $A$ and $B$ coming through the medium $1$ (say air) to the medium $2$ (say water). Upon arrival at the interface $\mathcal{L}$ between the two media (air and water), they continue their parallel course in the directions $U$ and $V$ respectively. Let us assume that at time $t=0$, light ray $A$ arrives at the interface $\mathcal{L}$ at point $C$, while ray $B$ is still shy of the surface by a distance $PD$. $B$ travels at the speed $v_{1}=\frac{c}{n_{1}}$ and arrives at $D$ in $t$ seconds. During this time interval, ray $A$ continues its journey through the medium $2$ at a speed $v_{2}=\frac{c}{n_{2}}$ and reaches the point $Q$. We can formulate the rest geometrically (looking at the parallel lines) from the figure. Let $x$ denote the distance between $C$ and $D$. \begin{eqnarray*} x \sin\left(\theta_{i}\right) &=& PD \\ &=& v_{1} t \\ &=& \frac{c}{n_{1}} t \\ x \sin\left(\theta_{r}\right) &=& CQ \\ &=& v_{2} t \\ &=& \frac{c}{n_{2}} t \end{eqnarray*} Thus, \begin{eqnarray*} n_{1} \sin\left(\theta_{i}\right) &=& \frac{c}{x} t \\ n_{2} \sin\left(\theta_{r}\right) &=& \frac{c}{x} t \end{eqnarray*} Rearranging this takes us to Snell's law as we know it. \begin{eqnarray*} \frac{n_{2} }{n_{1}} &=& \frac{\sin\left(\theta_{i}\right) }{ \sin\left(\theta_{r}\right)} \end{eqnarray*}
Derivative of a linear transformation. We define derivatives of functions from $R^n \to R^m$ as linear transformations. Now consider the derivative of such a linear transformation $A$: as we know, if $x, h \in R^n$, then $A(x+h)-A(x)=A(h)$ because of the linearity of $A$, which implies that $A'(x)=A$, where $A'$ is the derivative of $A$. What does this mean? I don't think I am getting the point.
The derivative $A'$ of $A$, seen as a linear map, is $A$ itself, now seen as a (constant) matrix: at every point $x$, the best linear approximation to $A$ is $A$ itself.
Prove: The weak closure of the unit sphere is the unit ball. I want to prove that in an infinite dimensional normed space $X$, the weak closure of the unit sphere $S=\{ x\in X : \| x \| = 1 \}$ is the unit ball $B=\{ x\in X : \| x \| \leq 1 \}$. Here is my attempt with what I know: I know that the weak closure of $S$ is a subset of $B$ because $B$ is norm closed and convex, so it is weakly closed, and $B$ contains $S$. But I need to show that $B$ is a subset of the weak closure of $S$. For small $\epsilon > 0$ and some $x^*_1,...,x^*_n \in X^*$, I let $U=\{ x : |\langle x, x^*_i \rangle| < \epsilon , i = 1,...,n \}$; then $U$ is a weak neighbourhood of $0$. What I think I need to show now is that $U$ intersects $S$, but I don't know how.
With the same notation as in your question: notice that if $x_i^*(x) = 0$ for all $i$, then $x \in U$, and therefore the intersection of the kernels $\bigcap_{i=1}^n \mathrm{ker}(x_i^*)$ is contained in $U$. Since the codimension of $\mathrm{ker}(x^*_i)$ is at most $1$, the intersection has codimension at most $n$ (exercise: prove this). But since $X$ is infinite dimensional, this means the intersection is infinite dimensional, and in particular contains a line. Since any line going through $0$ intersects $S$, $U$ intersects $S$. The same argument can be applied to any point in $B$ (any line going through a point in $B$ intersects $S$), and since you've proved the other inclusion, the weak closure of $S$ is $B$.
History of Modern Mathematics Available on the Internet I have been meaning to ask this question for some time, and have been spurred to do so by Georges Elencwajg's fantastic answer to this question and the link contained therein. In my free time I enjoy reading historical accounts of "recent" mathematics (where, to me, recent means within the last 100 years). A few favorites of mine are Alexander Soifer's The Mathematical Coloring Book, Allyn Jackson's two part mini-biography of Alexander Grothendieck (Part I and Part II) and Charles Weibel's History of Homological Algebra. My question is then: What freely available resources (i.e. papers, theses, articles) are there concerning the history of "recent" mathematics on the internet? I would like to treat this question in a manner similar to my question about graph theory resources, namely as a list of links to the relevant materials along with a short description. Perhaps one person (I would do this if necessary) could collect all the suggestions and links into one answer which could be used as a repository for such materials. Any suggestions I receive in the comments will be listed below. Suggestions in the Comments:
* [Gregory H. Moore, The emergence of open sets, closed sets, and limit points in analysis and topology](http://mcs.cankaya.edu.tr/~kenan/Moore2008.pdf)
* Babois's thesis on the birth of the cohomology of groups
* Beaulieu on Bourbaki
* Brechenmacher on the history of matrices
* Demazure's eulogy of Henri Cartan
* Serre's eulogy of Henri Cartan
* Dolgachev on Cremona and algebraic cubic surfaces
* The Hirzebruch-Atiyah correspondence on $K$-theory
* Krömer's thesis on the beginnings of category theory
* Raynaud on Grothendieck and schemes
* Rubin on the solving of Fermat's last theorem
* Schneps's review of the book The Grothendieck-Serre Correspondence
Integral of $\int_0^1 x\sqrt{2- \sqrt{1-x^2}}\,dx$ I have no idea how to do this; it seems so complex I do not know what to do. $$\int_0^1 x\sqrt{2- \sqrt{1-x^2}}\,dx$$ I tried to do a double trig identity substitution but that did not seem to work.
Here is how I would do it, and for simplicity I would simply look at the indefinite integral. First make the substitution $u = x^2$ so that $du = 2xdx$. We get: $$\frac{1}{2} \int \sqrt{2-\sqrt{1-u}} du$$ Then, make the substitution $v = 1-u$ so that $dv = -du$. We get: $$-\frac{1}{2} \int \sqrt{2 - \sqrt{v}} dv$$ Then make the substitution $w = \sqrt{v}$ so that $dw = \frac{1}{2\sqrt{v}} dv$ meaning that $dv = 2w \text{ } dw$. So we get: $$-\int \sqrt{2-w} \text{ } w \text{ } dw$$ Now make the substitution $s = 2-w$ so that $ds = -dw$ to get: $$-\int \sqrt{s}(s-2) ds$$ The rest should be straightforward.
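If it helps to double-check the chain of substitutions numerically (an illustrative addition, not part of the answer above): carrying the limits $x\colon 0\to 1$ through the substitutions gives $s\colon 2\to 1$, and absorbing the minus sign turns the last integral into $\int_1^2 \sqrt{s}\,(2-s)\,ds$.

```python
import numpy as np
from scipy.integrate import quad

orig = quad(lambda x: x * np.sqrt(2 - np.sqrt(1 - x**2)), 0, 1)[0]
# after u = x^2, v = 1 - u, w = sqrt(v), s = 2 - w the definite integral becomes:
subst = quad(lambda s: np.sqrt(s) * (2 - s), 1, 2)[0]
print(orig, subst)   # both should agree, ~0.575
```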
Sum with binomial coefficients: $\sum_{k=1}^m \frac{1}{k}{m \choose k} $ I got this sum, in some work related to another question: $$S_m=\sum_{k=1}^m \frac{1}{k}{m \choose k} $$ Are there any known results about this (bounds, asymptotics)?
Consider a random task as follows. First, one chooses a nonempty subset $X$ of $\{1,2,\ldots,m\}$, each with equal probability. Then, one uniformly randomly selects an element $n$ of $X$. The event of interest is when $n=\max(X)$. Fix $k\in\{1,2,\ldots,m\}$. The probability that $|X|=k$ is $\frac{1}{2^m-1}\,\binom{m}{k}$. The probability that the maximum element of $X$, given $X$ with $|X|=k$, is chosen is $\frac{1}{k}$. Consequently, the probability that the desired event happens is given by $$\sum_{k=1}^m\,\left(\frac{1}{2^m-1}\,\binom{m}{k}\right)\,\left(\frac{1}{k}\right)=\frac{S_m}{2^m-1}\,.$$ Now, consider a fixed element $n\in\{1,2,\ldots,m\}$. Then, there are $\binom{n-1}{j-1}$ possible subsets $X$ of $\{1,2,\ldots,m\}$ such that $n=\max(X)$ and $|X|=j$. The probability of getting such an $X$ is $\frac{\binom{n-1}{j-1}}{2^m-1}$. The probability that $n=\max(X)$, given $X$, is $\frac{1}{j}$. That is, the probability that $n=\max(X)$ is $$\begin{align} \sum_{j=1}^{n}\,\left(\frac{\binom{n-1}{j-1}}{2^m-1}\right)\,\left(\frac{1}{j}\right) &=\frac{1}{2^m-1}\,\sum_{j=1}^n\,\frac1j\,\binom{n-1}{j-1}=\frac{1}{2^m-1}\,\left(\frac{1}{n}\,\sum_{j=1}^n\,\frac{n}{j}\,\binom{n-1}{j-1}\right) \\ &=\frac{1}{2^m-1}\,\left(\frac{1}{n}\,\sum_{j=1}^n\,\binom{n}{j}\right)=\frac{1}{2^m-1}\left(\frac{2^n-1}{n}\right)\,. \end{align}$$ Finally, it follows that $\frac{S_m}{2^m-1}=\sum_{n=1}^m\,\frac{1}{2^m-1}\left(\frac{2^n-1}{n}\right)$. Hence, $$\sum_{k=1}^m\,\frac{1}{k}\,\binom{m}{k}=S_m=\sum_{n=1}^m\,\left(\frac{2^n-1}{n}\right)\,.$$ It can then be shown by induction on $m>3$ that $\frac{2^{m+1}}{m}<S_m<\frac{2^{m+1}}{m}\left(1+\frac{2}{m}\right)$.
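As a quick exact-arithmetic check of the identity and of the stated bounds (an illustrative addition; the range of $m$ is arbitrary):

```python
from math import comb
from fractions import Fraction

def S(m):
    """S_m = sum_{k=1}^m C(m,k)/k, computed exactly."""
    return sum(Fraction(comb(m, k), k) for k in range(1, m + 1))

for m in range(1, 13):
    rhs = sum(Fraction(2**n - 1, n) for n in range(1, m + 1))
    # the bounds were only claimed for m > 3
    ok_bounds = m <= 3 or (Fraction(2**(m + 1), m) < S(m)
                           < Fraction(2**(m + 1), m) * (1 + Fraction(2, m)))
    print(m, S(m) == rhs, ok_bounds, float(S(m)))
```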
probability of a horse winning a race. Let's suppose ten horses are participating in a race and each horse has an equal chance of winning the race. I am required to find the following: (a) the probability that horse A wins the race followed by horse B. (b) the probability that horse C becomes either first or second in the race. I know there are $10 \cdot 9 \cdot 8 $ ways of having first, second or third. Since each horse has an equal chance of winning, each has a probability of 1/10. Would I be right in saying that the probability that A wins followed by B is $\frac{1}{10} \cdot \frac{1}{10} $? Is it okay if I do this for (b)? $\frac{1}{10} +\frac{1}{10} $?
The answer to (a) is 1/90 because of the without-replacement structure: horse A has a 1/10 chance of being 1st, and horse B is then one of 9 horses that could be 2nd, so multiplying 1/10 by 1/9 gives 1/90. For (b) it is an addition problem: C can be first or second. P(C is 1st) = 1/10, and P(C is 2nd) = 9/10 × 1/9 = 1/10, so the answer is 1/10 + 1/10 = 1/5 (adding 1/10 + 1/9 would mix an unconditional probability with a conditional one). This is just a statistical question, not a realistic one, because in practice the percentages change based on post position, horse ability, jockey, trainer, etc.
Subring of polynomials Let $k$ be a field and $A=k[X^3,X^5] \subseteq k[X]$. Prove that: a. $A$ is a Noetherian domain. b. $A$ is not integrally closed. c. $dim(A)=?$ (the Krull dimension). I suppose that the first follows from $A$ being a subring of $k[X]$, but I don't know about the rest. Thank you in advance.
a) Not every subring of a noetherian ring is noetherian (there are plenty of counterexamples), so this doesn't work here. Instead, use Hilbert's Basis Theorem. b) The element $X^2 = \frac{X^5}{X^3}$ is in $\mathrm{Quot}(A)$. Try to show that it is integral over $A$, but not in $A$. c) The dimension is the transcendence degree of $\mathrm{Quot}(A)$ over $k$. But this field is easy to compute.
Piecewise functions: Got an example of a real world piecewise function? Looking for something beyond a contrived textbook problem concerning jelly beans or equations that do not represent anything concrete. Not just a piecewise function for its own sake. Anyone?
As Wim mentions in the comments, piecewise polynomials are used a fair bit in applications. In designing profiles and shapes for cars, airplanes, and other such devices, one usually uses pieces of Bézier or B-spline curves (or surfaces) during the modeling process, for subsequent machining. In fact, the continuity/smoothness conditions for such curves (usually continuity up to the second derivative) are important here, since during machining, an abrupt change in the curvature can cause the material for the modeling, the mill, or both, to crack (remembering that velocity and acceleration are derivatives of position with respect to time might help to understand why you want smooth curves during machining).
Numerical solution of the Laplace equation on a circular domain I was solving the Laplace equation in MATLAB numerically. However, I have problems when the domain is not rectangular. The equation is as follows: $$ \frac{\partial^2 u}{\partial x^2}+\frac{\partial^2 u}{\partial y^2}=0 $$ the domain is circular $$ x^2 + y^2 < 16 $$ and the boundary condition is $$ u(x,y)= x^2y^2 $$ How should I start solving this equation numerically?
Perhaps begin by rewriting the problem in polar coordinates: $$\frac{\partial^{2}u}{\partial r^{2}}+\frac{1}{r}\frac{\partial u}{\partial r}+\frac{1}{r^{2}}\frac{\partial^{2}u}{\partial\theta^{2}}=0$$ $$r^2<16$$ $$u(4,\theta)=4^4\cos^2\theta\sin^2\theta$$
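If a cross-check is useful before setting up finite differences: on a disk, separation of variables gives $u(r,\theta)=a_0+\sum_n (r/4)^n(a_n\cos n\theta+b_n\sin n\theta)$, with $a_n,b_n$ the Fourier coefficients of the boundary data. The sketch below (an illustrative Python addition, not the MATLAB finite-difference route the question asks about) implements that; for this particular boundary data the harmonic extension works out to $32-\tfrac{r^4}{8}\cos 4\theta$, which the sketch reproduces.

```python
import numpy as np

R = 4.0
M = 512
theta = np.linspace(0, 2 * np.pi, M, endpoint=False)
g = (R * np.cos(theta))**2 * (R * np.sin(theta))**2   # boundary values x^2*y^2 on r = R

c = np.fft.rfft(g) / M                                 # Fourier coefficients of g

def u(r, th, nmax=20):
    """Harmonic extension of the boundary data into the disk."""
    val = c[0].real
    for n in range(1, nmax):
        val += 2 * (r / R)**n * (c[n] * np.exp(1j * n * th)).real
    return val

r, th = 2.5, 0.7
print(u(r, th), 32 - r**4 / 8 * np.cos(4 * th))        # the two values should agree
```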
Reflections generating isometry group I was reading an article and it states that every isometry of the upper half plane model of the hyperbolic plane is a composition of reflections in hyperbolic lines, but does not seem to explain why this is true. Could anyone offer any insight? Thanks.
An isometry $\phi:M\to N$ between connected Riemannian manifolds $M$ and $N$ is completely determined by its value at a single point $p$ and its differential $d\phi_p$ at that point. Take any isometry $\phi$ of $\mathbb{H}^2$. Connect $i$ and $\phi(i)$ by a (unique) shortest geodesic and let $C$ be the perpendicular bisector of the connecting geodesic. Then the reflection $r_C$ across $C$ maps $i$ to $\phi(i)$. Now take an orthonormal basis $e_j$ at $i$. It is mapped to $d\phi e_j\in T_{\phi(i)}\mathbb{H}^2$. Linear algebra tells us that in $T_{\phi(i)}\mathbb{H}^2$ we can map $d\phi e_j$ to $dr_C e_j$ by a reflection across a line (if $\phi$ is orientation-preserving) or by a rotation (if $\phi$ is orientation-reversing). A two-dimensional rotation can be written as a composition of two reflections. By the exponential map, the lines across which we're reflecting map to geodesics in $\mathbb{H}^2$ and the reflections extend to reflections of $\mathbb{H}^2$ across those geodesics. We have therefore written $\phi$ as a composition of reflections. In some cases, e.g. in Thurston, Three-Dimensional Geometry and Topology, the isometry group of $\mathbb{H}^2$ is defined as the group generated by reflections across circles (e.g., in the Poincare model). Chapter two of that book has a discussion of hyperbolic geometry and exercises comparing the various perspectives on isometries, e.g., as $$\mathrm{PSL}(2;\mathbb{R}) \cong SO^+(2,1) = \mbox{Möbius transformations with real coefficients} \subset \langle\mbox{refl. across circles}\rangle.$$ You might also check Chapter B (I think) of Benedetti and Petronio, Lectures on Hyperbolic Geometry.
Is the Koch snowflake a continuous curve? For the Koch snowflake, does there exist a continuous map from $[0,1]$ to it? The actual construction of the map may be impossible, but how can we claim the existence of such a continuous map? Or can we consider the limit of a sequence of continuous maps, even though a sequence of continuous maps may not have a continuous limit?
Consider the snowflake curve as the limit of the curves $(\gamma_n)_{n\in \mathbb N}$, in the usual way, starting with $\gamma_0$ which is just an equilateral triangle of side length 1. Then each $\gamma_n$ is piecewise linear, consisting of $3\cdot 4^n$ pieces of length $3^{-n}$ each; for definiteness let us imagine that we parameterize it such that $|\gamma_n'(t)| = 3(\frac 43)^n$ whenever it exists. Now, it always holds that $|\gamma_{n+1}(t)-\gamma_n(t)|\le 3^{-n}$ for every $t$ (because each step of the iteration just changes the curve between two corners in the existing curve, but keeps each corner and its corresponding parameter value unchanged). This means that the $\gamma_n$'s converge uniformly towards their pointwise limit: At every $t$ the distance between $\gamma_n(t)$ and $\lim_{i\to\infty}\gamma_i(t)$ is at most $\sum_{i=n}^\infty (1/3)^i$ which is independent of $t$ and goes to $0$ as $n\to\infty$. Because uniform convergence preserves continuity, the limiting curve is a continuous function from $[0,1]$ to the plane.
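For what it's worth, the bound $|\gamma_{n+1}(t)-\gamma_n(t)|\le 3^{-n}$ can be checked numerically on one side of the snowflake (the three sides behave identically). The following Python sketch is an illustrative addition; it uses the same constant-speed parameterization described above, with each of the four sub-pieces taking an equal quarter of the parameter interval.

```python
import numpy as np

def koch_point(a, b, n, t):
    """Point at parameter t in [0,1] on the level-n Koch curve from a to b
    (complex numbers), each of the 4 sub-arcs taking a quarter of [0,1]."""
    if n == 0:
        return a + (b - a) * t
    p1 = a + (b - a) / 3
    p3 = a + 2 * (b - a) / 3
    p2 = p1 + (p3 - p1) * np.exp(1j * np.pi / 3)   # the outward "peak"
    segs = [(a, p1), (p1, p2), (p2, p3), (p3, b)]
    k = min(int(t * 4), 3)                          # which quarter t falls into
    return koch_point(*segs[k], n - 1, 4 * t - k)

ts = np.linspace(0, 1, 2001)
a, b = 0 + 0j, 1 + 0j                               # one side of the triangle, length 1
for n in range(5):
    d = max(abs(koch_point(a, b, n + 1, t) - koch_point(a, b, n, t)) for t in ts)
    print(n, d, 3.0**(-n), d <= 3.0**(-n))          # the claimed bound holds
```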
"8 Dice arranged as a Cube" Face-Sum Equals 14 Problem I found this here: Sum Problem Given eight dice. Build a $2\times 2\times2$ cube, so that the sum of the points on each side is the same. $\hskip2.7in$ Here is one of 20 736 solutions with the sum 14. You find more at the German magazine "Bild der Wissenschaft 3-1980". Now my question: Is $14$ the only possible face sum? At least, in the example given, it seems to related to the fact, that on every face two dice-pairs show up, having $n$ and $7-n$ pips. Is this necessary? Sufficient it is...
No, 14 is not the only possibility. For example: Arrange the dice, so that you only see 1,2 and 3 pips and all the 2's are on the upper and lower face of the big cube. This gives you face sum 8. Please ask your other questions as separate questions if you are still interested.
Computing the length of a finite group Can someone suggest a GAP or MAGMA command (or code) to obtain the length $l(G)$ of a finite group $G$, i.e. the maximum length of a strictly descending chain of subgroups in $G$? Thanks in advance.
Just to get you started, here is a very short recursive Magma function to compute this. You could do something similar in GAP. Of course, it will only work in reasonable time for small groups. On my computer it took about 10 seconds to do $A_8$. To do better you would need to do something more complicated like working up through the subgroup lattice. It is not an easy function to compute exactly.
Len := function(G)
  if #G eq 1 then return 0; end if;
  return 1 + Max([$$(m`subgroup) : m in MaximalSubgroups(G)]);
end function;
Find the area of overlap of two triangles Suppose we are given two triangles $ABC$ and $DEF$. We can assume nothing about them other than that they are in the same plane. The triangles may or may not overlap. I want to algorithmically determine the area (possibly $0$) of their overlap; call it $T_{common}$. We have a multitude of ways of determining the areas of $ABC$ and $DEF$; among the "nicest" are the Heronian formula, which is in terms of the side lengths alone, and $T = \frac{1}{2} \left| \det\begin{pmatrix}x_A & x_B & x_C \\ y_A & y_B & y_C \\ 1 & 1 & 1\end{pmatrix} \right| = \frac{1}{2} \big| x_A y_B - x_A y_C + x_B y_C - x_B y_A + x_C y_A - x_C y_B \big|$ which is in terms of the coordinates alone. Obviously, there does exist a function from $A,B,C,D,E,F$ to $T_{common}$, but my question is: is there a "nice" (or even not-"nice") expression for $T_{common}$ in terms of the $x$ and $y$ coordinates of $A,B,C,D,E,F$? I've drawn out on paper what I think are the various cases, but my issues with this approach are: identifying the case is a job in itself, which I can't easily see how to algorithmise ("just look at a picture" doesn't work for a computer); even within each case the algebra is fiddly and error-prone; and I have little confidence that I've enumerated all possible cases and got the computations right! In my imagination there is a neat approach using ideas from analysis (treating the triangles as functions from $\mathbb{R}^2$ to $\{0,1\}$ and... multiplying them??) but I have no idea whether that's just a flight of fancy or something workable.
Sorry about the comment -- I hit the return key prematurely. This isn't really an answer (except in the negative sense). The common (overlap) area is a function of the coordinates of the 6 points, so it's a mapping from $R^{12}$ into $R$. Think about one of the points moving around, while the other 5 are fixed. When the moving point passes into (or out of) the other triangle, some of the partial derivatives of the area function will be discontinuous (this seems intuitively clear -- the proof is left to someone who has more skill and patience than I do). Anyway, if you believe me, this means that the area function is not smooth. Therefore there can not be some simple formula like Heron's formula that gives the area (because this simple formula would give a smooth function).
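That said, the question also asks for an algorithmic route, and even without a simple formula the overlap area is straightforward to compute: clip one triangle against the other (Sutherland-Hodgman clipping works here, since a triangle is convex) and apply the shoelace formula to the clipped polygon. The following self-contained Python sketch is an illustrative addition, not part of the answer above; the coordinates are made up, and both triangles are assumed to be listed counter-clockwise.

```python
def clip(subject, clipper):
    """Sutherland-Hodgman: clip polygon `subject` by convex polygon `clipper`.
    Polygons are lists of (x, y) vertices in counter-clockwise order."""
    def inside(p, a, b):          # p on the left of (or on) the directed edge a->b
        return (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0]) >= 0
    def intersect(p, q, a, b):    # intersection of line pq with line ab
        x1, y1, x2, y2 = p[0], p[1], q[0], q[1]
        x3, y3, x4, y4 = a[0], a[1], b[0], b[1]
        den = (x1-x2)*(y3-y4) - (y1-y2)*(x3-x4)
        px = ((x1*y2-y1*x2)*(x3-x4) - (x1-x2)*(x3*y4-y3*x4)) / den
        py = ((x1*y2-y1*x2)*(y3-y4) - (y1-y2)*(x3*y4-y3*x4)) / den
        return (px, py)
    output = subject
    for i in range(len(clipper)):
        a, b = clipper[i], clipper[(i+1) % len(clipper)]
        inp, output = output, []
        for j in range(len(inp)):
            p, q = inp[j], inp[(j+1) % len(inp)]
            if inside(q, a, b):
                if not inside(p, a, b):
                    output.append(intersect(p, q, a, b))
                output.append(q)
            elif inside(p, a, b):
                output.append(intersect(p, q, a, b))
        if not output:
            return []
    return output

def area(poly):                   # shoelace formula
    return 0.5 * abs(sum(poly[i][0]*poly[(i+1) % len(poly)][1]
                         - poly[(i+1) % len(poly)][0]*poly[i][1]
                         for i in range(len(poly))))

T1 = [(0, 0), (4, 0), (0, 4)]     # illustrative triangles
T2 = [(1, 1), (5, 1), (1, 5)]
print(area(clip(T1, T2)))         # overlap area; here the overlap is the triangle (1,1),(3,1),(1,3), area 2
```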
Operators from $\ell^\infty$ into $c_0$ I have the following question related to $\ell^\infty(\mathbb{N}).$ How can I construct a bounded, linear operator from $\ell^\infty(\mathbb{N})$ into $c_0(\mathbb{N})$ which is non-compact? It is clear that $\ell^\infty$ is a Grothendieck space with Dunford-Pettis property, hence any operator from $\ell^\infty$ into a separable Banach space must be strictly singular. But I do not know any example above which is non-compact.
A bounded operator $T:\ell_\infty\rightarrow c_0$ has the form $Tx=(x_n^*(x))$ for some weak$^*$ null sequence $(x_n^*)$ in $\ell_\infty^*$. A set $K\subset c_0$ is relatively compact if and only if there is an $x\in c_0$ such that $|k_n|\le |x_n|$ for all $k\in K$ and all $n\ge1$. From these two facts, it follows that $T(B({\ell_\infty}))$ is relatively compact if and only if the representing sequence $(x_n^*)$ is norm-null. So, you need only find a sequence in $\ell_\infty^*$ that is weak$^*$ null, but not norm null. Such a sequence exists in $\ell_\infty^*$ since: 1) weak$^*$ convergent sequences in $\ell_\infty^*$ are weakly convergent ($\ell_\infty$ has the Grothendieck property), and 2) $\ell_\infty^*$ does not have the Schur property (weakly convergent sequences are norm convergent). (There may be a less roundabout way of showing the result of the preceding paragraph.)
How to "rotate" a polar equation? Take a simple polar equation like r = θ/2 that graphs out to: But, how would I achieve a rotation of the light-grey plot in this image (roughly 135 degrees)? Is there a way to easily shift the plot?
A way to think about this is that you want to shift all $\theta$ to $\theta'=\theta +\delta$, where $\delta$ is the amount by which you want to rotate. This matters if you want to rotate the graph of an equation which is a function of $\theta$. In the case $r=\theta$ that becomes $r=\theta+\delta$. Of course, if the independent variable in our polar equation were a non-identity function of $\theta$, you might be able to use the angle-sum identities to help you out: $$ \sin(\alpha + \beta) = \sin \alpha \cos \beta + \cos \alpha \sin \beta \\ \cos(\alpha + \beta) = \cos \alpha \cos \beta - \sin \alpha \sin \beta $$ In case anyone is trying to program this in a Cartesian setting like I was trying to do (for a music visualizer), where I wanted my spiral's rotation to be a function of time $t$, i.e. $r = \theta(t)$: normally, when solving $r=\theta$, i.e. $\sqrt{x^2+y^2}=\arctan\left(\frac{\sin\theta}{\cos\theta}\right)=\arctan\left(\frac{y}{x}\right)$, you can substitute as follows. $$ \sqrt{x^2+y^2}= \arctan\left(\frac{\sin(\theta+t)}{\cos(\theta+t)}\right) = \arctan\left(\frac{\sin\theta \cos t+\cos \theta \sin t}{\cos \theta \cos t - \sin \theta \sin t}\right) = \arctan\left(\frac{y \cos t +x\sin t }{x\cos t - y \sin t}\right) $$
$\left|\theta-\frac{a}{b}\right|< \frac{1}{b^{1.0000001}}$, question related to the Dirichlet theorem The question is: A certain real number $\theta$ has the following property: There exist infinitely many rational numbers $\frac{a}{b}$ (in reduced form) such that: $$\left|\theta-\frac{a}{b}\right|< \frac{1}{b^{1.0000001}}$$ Prove that $\theta$ is irrational. I just don't know how I could somehow relate $b^{1.0000001}$ to $b^2$ or $2b^2$ so that the Dirichlet theorem can be applied. Or are there other ways to approach the problem? Thank you in advance for your help!
Hint: Let $\theta=\frac{p}{q}$, where $p$ and $q$ are relatively prime. Look at $$\left|\frac{p}{q}-\frac{a}{b}\right|.\tag{$1$}$$ Bring to the common denominator $bq$. Then if the top is non-zero, it is $\ge 1$, and therefore Expression $(1)$ is $\ge \frac{1}{bq}$. But if $b$ is large enough, then $bq<b^{1.0000001}$. Edit: The above shows that if $\theta$ is rational, there cannot be arbitrarily large $b$ such that $$\left|\theta-\frac{a}{b}\right|<\frac{1}{b^{1.0000001}}.\tag{$2$}$$ Of course, if we replace the right-hand side by $b^{1.0000001}$, then there are arbitrarily large such $b$. Indeed if we replace it by any fixed $\epsilon\gt 0$, there are arbitrarily large such $b$, since any real number can be approximated arbitrarily closely by rationals. Thus if in the original problem one has $b^{1.0000001}$, and not its reciprocal, it must be a typo.
Indefinite integral of secant cubed $\int \sec^3 x\>dx$ I need to calculate the following indefinite integral: $$I=\int \frac{1}{\cos^3(x)}dx$$ I know what the result is (from Mathematica): $$I=\tanh^{-1}(\tan(x/2))+(1/2)\sec(x)\tan(x)$$ but I don't know how to integrate it myself. I have been trying some substitutions to no avail. Equivalently, I need to know how to compute: $$I=\int \sqrt{1+z^2}dz$$ which follows after making the change of variables $z=\tan x$.
We have an odd power of cosine. So there is a mechanical procedure for doing the integration. Multiply top and bottom by $\cos x$. The bottom is now $\cos^4 x$, which is $(1-\sin^2 x)^2$. So we want to find $$\int \frac{\cos x\,dx}{(1-\sin^2 x)^2}.$$ After the natural substitution $t=\sin x$, we arrive at $$\int \frac{dt}{(1-t^2)^2}.$$ So we want the integral of a rational function. Use the partial fractions machinery to find numbers $A$, $B$, $C$, $D$ such that $$\frac{1}{(1-t^2)^2}=\frac{A}{1-t}+\frac{B}{(1-t)^2}+ \frac{C}{1+t}+\frac{D}{(1+t)^2}$$ and integrate.
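If you want to check the partial-fraction step (and the resulting antiderivative) mechanically, here is a short sympy sketch, added purely as an illustrative verification:

```python
import sympy as sp

t = sp.symbols('t')
pf = sp.apart(1 / (1 - t**2)**2, t)          # the A, B, C, D decomposition
F = sp.integrate(pf, t)                      # antiderivative of the decomposition
print(pf)
print(sp.simplify(sp.diff(F, t) - 1 / (1 - t**2)**2))   # 0, so F is a valid antiderivative
```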
Can a function with support on a finite interval have a Fourier transform which is zero on a finite interval? If $f$ has support on $[-x_0,x_0]$, can its Fourier transform $\tilde{f}$ be zero on $[-p_0,p_0]$? If so, what is the maximum admissible product $x_0p_0$?
Let's assume that $f$ is not identically zero. In this answer, it is shown that if $f$ has compact support, then $\hat{f}$ is entire. A non-zero entire function cannot be zero on a set with a limit point. Thus, if $f=0$ on $[-p_0,p_0]$, then $p_0=0$, and therefore, the maximum of $x_0p_0=0$.
Need help with the integral $\int \frac{2\tan(x)+3}{5\sin^2(x)+4}\,dx$ I'm having a problem resolving the following integral, spent almost all day trying. Any help would be appreciated. $$\int \frac{2\tan(x)+3}{5\sin^2(x)+4}\,dx$$
Hint:
* Multiply the numerator and denominator by $\sec^{2}(x)$. Then you have $$\int \frac{2 \tan{x} +3}{5\sin^{2}(x)+4} \times \frac{\sec^{2}(x)}{\sec^{2}{x}} \ dx$$
* Now, the denominator becomes $5\tan^{2}(x) + 4\cdot \bigl(1+\tan^{2}(x)\bigr)$, and you can put $t=\tan{x}$.
* So your new integral in terms of $t$ is: $\displaystyle \int \frac{2t+3}{9t^{2}+4} \ dt = \int\frac{2t}{9t^{2}+4} \ dt + \int\frac{3}{9t^{2}+4} \ dt$.
Counting matrices over $\mathbb{Z}/2\mathbb{Z}$ with conditions on rows and columns I want to solve the following seemingly combinatorial problem, but I don't know where to start. How many matrices in $\mathrm{Mat}_{M,N}(\mathbb{Z}_2)$ are there such that the sum of entries in each row and the sum of entries in each column is zero? More precisely find cardinality of the set $$ \left\{A\in\mathrm{Mat}_{M,N}(\mathbb{Z}/2\mathbb{Z}): \forall j\in\{1,\ldots,N\}\quad \sum\limits_{k=1}^M A_{kj}=0,\quad \forall i\in\{1,\ldots,M\}\quad \sum\limits_{l=1}^N A_{il}=0 \right\} $$. Thanks for your help.
If you consider the entries of the matrices as unknowns, you have $N\cdot M$ unknowns and $N+M$ linear equations. If you think a little bit, you find out that these equations are not independent, but you get linearly independent equations if you omit one of them. So the solution space has the dimension $NM-N-M+1$, hence it contains $2^{MN-N-M+1}=2^{(N-1)(M-1)}$ elements.
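The count $2^{(N-1)(M-1)}$ can be confirmed by brute force for small sizes; here is a short illustrative Python sketch:

```python
from itertools import product

def count(M, N):
    """Count M x N matrices over GF(2) whose row sums and column sums are all 0 mod 2."""
    total = 0
    for bits in product((0, 1), repeat=M * N):
        A = [bits[i * N:(i + 1) * N] for i in range(M)]
        if all(sum(row) % 2 == 0 for row in A) and \
           all(sum(A[i][j] for i in range(M)) % 2 == 0 for j in range(N)):
            total += 1
    return total

for M, N in [(2, 2), (2, 3), (3, 3), (3, 4)]:
    print(M, N, count(M, N), 2 ** ((M - 1) * (N - 1)))   # the two counts match
```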
How to prove that the sum and product of two algebraic numbers is algebraic? Suppose $E/F$ is a field extension and $\alpha, \beta \in E$ are algebraic over $F$. Then it is not too hard to see that when $\alpha$ is nonzero, $1/\alpha$ is also algebraic. If $a_0 + a_1\alpha + \cdots + a_n \alpha^n = 0$, then dividing by $\alpha^{n}$ gives $$a_0\frac{1}{\alpha^n} + a_1\frac{1}{\alpha^{n-1}} + \cdots + a_n = 0.$$ Is there a similar elementary way to show that $\alpha + \beta$ and $\alpha \beta$ are also algebraic (i.e. finding an explicit formula for a polynomial that has $\alpha + \beta$ or $\alpha\beta$ as its root)? The only proof I know for this fact is the one where you show that $F(\alpha, \beta) / F$ is a finite field extension and thus an algebraic extension.
Okay, I'm giving a second answer because this one is clearly distinct from the first one. Recall that finding a polynomial $p(x) \in F[x]$ of which $\alpha+\beta$ or $\alpha \beta$ is a root is equivalent to exhibiting that number as an eigenvalue of a square matrix over $F$ (the eigenvalue living in some algebraic extension of $F$), since you can link the polynomial $p(x)$ to the companion matrix $C(p(x))$, which has precisely characteristic polynomial $p(x)$; hence the eigenvalues of the companion matrix are the roots of $p(x)$. If $\alpha$ is an eigenvalue of $A$ with eigenvector $x \in V$ and $\beta$ is an eigenvalue of $B$ with eigenvector $y \in W$, then using the tensor product of $V$ and $W$, namely $V \otimes W$, we can compute $$ (A \otimes I + I \otimes B)(x \otimes y) = (Ax \otimes y) + (x \otimes By) = (\alpha x \otimes y) + (x \otimes \beta y) = (\alpha + \beta) (x \otimes y) $$ so that $\alpha + \beta$ is an eigenvalue of $A \otimes I + I \otimes B$. Also, $$ (A \otimes B)(x \otimes y) = (Ax \otimes By) = (\alpha x \otimes \beta y) = \alpha \beta (x \otimes y) $$ hence $\alpha \beta$ is an eigenvalue of the matrix $A \otimes B$. If you want explicit expressions for the polynomials you are looking for, you can just compute the characteristic polynomial of the tensor products. Hope that helps,
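Here is a small numerical illustration of this construction (an added sketch, not part of the answer above), using the companion matrices of $x^2-2$ and $x^2-3$, so that $\alpha=\sqrt2$ and $\beta=\sqrt3$:

```python
import numpy as np

A = np.array([[0., 2.], [1., 0.]])   # companion matrix of x^2 - 2, eigenvalues +-sqrt(2)
B = np.array([[0., 3.], [1., 0.]])   # companion matrix of x^2 - 3, eigenvalues +-sqrt(3)
I = np.eye(2)

S = np.kron(A, I) + np.kron(I, B)    # eigenvalues are all sums     alpha + beta
P = np.kron(A, B)                    # eigenvalues are all products alpha * beta

print(np.poly(S))                    # ~ [1, 0, -10, 0, 1]:  x^4 - 10x^2 + 1 has root sqrt(2)+sqrt(3)
print(np.poly(P))                    # ~ [1, 0, -12, 0, 36]: x^4 - 12x^2 + 36 has root sqrt(6)
print(sorted(np.linalg.eigvals(S).real))   # +-sqrt(2) +- sqrt(3)
```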
Natural question about weak convergence. Let $u_k, u \in H^{1}(\Omega)$ be such that $u_k \rightharpoonup u$ (weak convergence) in $H^{1}(\Omega)$. Is it true that $u_{k}^{+}\rightharpoonup u^{+}$ in $\{u\geqslant 0\}$? You can make hypotheses on $\Omega$ if you need to.
I got the idea from Richard's answer. Let $\Omega:=(0,2\pi)$ and $u_k(x):=\frac{\cos(kx)}{k+1}$. Then $\{u_k\}$ converges weakly to $0$ in $H^1(\Omega)$ (as it's bounded in $H^1(\Omega)$, and $\int_{\Omega}(u_k\varphi+u'_k\varphi')dx\to 0$ for all test functions $\varphi$). Assume that $\{u_k^+\}$ converges weakly to $0$ in $H^1(\Omega)$. Up to a subsequence, using the fact that $L^2(\Omega)$ is a Hilbert space, we can assume that $(v_k^+)'$ and $v_k^+$ converge weakly in $L^2(\Omega)$, where $v_k:=u_{n_k}$, respectively to $f$ and $g$. This is due to the fact that these sequences are bounded in $L^2$. Then, writing out the definitions, $g'=f$. We also have $f=0$, hence (by connectedness of $\Omega$) $g$ is constant and should be equal to $0$. But $$\int_{\Omega}|\sin(kx)|dx\geqslant \int_{\Omega}\sin^2(kx)dx=\pi,$$ and we should have that $2u_k^+-u_k=2u^+_k-u_k^++u_k^-=|u_k|$ weakly converges to $0$.
How to solve this quartic equation? For the quartic equation: $$x^4 - x^3 + 4x^2 + 3x + 5 = 0$$ I tried Ferrari so far and a few others but I just can't get its complex solutions. I know it has no real solutions.
$$x^4 - x^3 + 4x^2 + \underbrace{3x}_{4x-x} + \overbrace{5}^{4+1} = \\\color{red}{x^4-x^3}+4x^2+4x+4\color{red}{-x+1}\\={x^4-x^3}-x+1+4(x^2+x+1)\\={x^3(x-1)}-(x-1)+4(x^2+x+1)\\=(x-1)(x^3-1)+4(x^2+x+1)\\=(x-1)(x-1)(x^2+x+1)+4(x^2+x+1)\\=(x^2+x+1)\left((x-1)^2+4\right)=(x^2+x+1)(x^2-2x+5)$$ As for the roots, I assume you could solve those two quadratic equations and you could find the results on Wolfram|Alpha.
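The factorization and the resulting roots can also be verified mechanically; a short sympy sketch, added here only as a check:

```python
import sympy as sp

x = sp.symbols('x')
p = x**4 - x**3 + 4*x**2 + 3*x + 5
print(sp.factor(p))     # (x**2 - 2*x + 5)*(x**2 + x + 1)
print(sp.solve(p, x))   # 1 +- 2*I and (-1 +- sqrt(3)*I)/2
```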
Effective cardinality Consider $X,Y \subseteq \mathbb{N}$. We say that $X \equiv Y$ iff there exists a bijection between $X$ and $Y$. We say that $X \equiv_c Y$ iff there exists a bijective computable function between $X$ and $Y$. Can you show me some examples in which the two concepts disagree?
The structure with the computable equivalence (which is defined as $A\leq_T B$ and $B \leq_T A$) is called Turing degrees and has a very rich structure (unlike the usual bijection).
Evaluating $ \int_1^{\infty} \frac{\{t\} (\{t\} - 1)}{t^2} dt$ I am interested in a proof of the following. $$ \int_1^{\infty} \dfrac{\{t\} (\{t\} - 1)}{t^2} dt = \log \left(\dfrac{2 \pi}{e^2}\right)$$ where $\{t\}$ is the fractional part of $t$. I obtained a circuitous proof for the above integral. I'm curious about other ways to prove the above identity. So I thought I will post here and look at others suggestion and answers. I am particularly interested in different ways to go about proving the above. I'll hold off from posting my proof for sometime to see what all different proofs I get for this.
Let's consider the following way, involving some known results on celebrated integrals with fractional parts: $$ \int_1^{\infty} \dfrac{\{t\} (\{t\} - 1)}{t^2} dt = \int_1^{\infty} \dfrac{\{t\}^2}{t^2} dt - \int_1^{\infty} \dfrac{\{t\}}{t^2} dt = \int_0^1 \left\{\frac{1}{t}\right\}^2 dt- \int_0^1 \left\{\frac{1}{t}\right\} dt = (\ln(2\pi) -\gamma-1)-(1-\gamma)=\ln(2\pi)-2=\log \left(\dfrac{2 \pi}{e^2}\right).$$ REMARK: there is a theorem that establishes a way of calculating the value of the integral below for $m\geq1$: $$\int_0^1 \left\{\frac{1}{x}\right\}^m dx$$ The proof is complete.
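A quick numerical check of the stated value (an illustrative addition): integrate $\{t\}(\{t\}-1)/t^2$ over unit intervals, where $\{t\}$ is smooth, with Gaussian quadrature, and compare with $\log(2\pi)-2$. The cutoff $N$ below is arbitrary; the neglected tail is $O(1/N)$.

```python
import numpy as np

# Gauss-Legendre nodes and weights, mapped to [0, 1]
xg, wg = np.polynomial.legendre.leggauss(20)
xg = (xg + 1) / 2
wg = wg / 2

N = 100_000
n = np.arange(1, N)[:, None]          # integrate over [n, n+1] for n = 1 .. N-1
t = n + xg                            # on [n, n+1] the fractional part {t} equals xg
val = np.sum(wg * (xg * (xg - 1)) / t**2)
print(val, np.log(2 * np.pi) - 2)     # agree to ~5-6 decimals; tail beyond N is O(1/N)
```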
$\{1,1\}=\{1\}$, origin of this convention Is there any book that explicitly contains the convention that a representation of a set containing repeated elements is the same as the one without repeated elements? Like $\{1,1,2,3\} = \{1,2,3\}$. I have looked over a few books and none of them mention such a thing. (Wikipedia has it, but it does not cite a source.) In my years learning mathematics in both the US and Hungary, this convention was known and applied. However, recently I noticed some Chinese students claim they have never seen this before, and I don't remember seeing it in any book either. I never found a book that explicitly says what the rules are for how $\{a_1,a_2,a_3,\ldots,a_n\}$ specifies a set. Some people believe it can only specify a set if $a_i\neq a_j \Leftrightarrow i\neq j$. The convention shows that this doesn't have to be satisfied.
I took a quick look through some of the likelier candidates on my shelves. The following introductory discrete math texts all explicitly point out, with at least one example, that neither the order of listing nor the number of times an element is listed makes any difference to the identity of a set:
* Winfried K. Grassman & Jean-Paul Tremblay, Logic and Discrete Mathematics: A Computer Science Perspective
* Ralph P. Grimaldi, Discrete and Combinatorial Mathematics: An Applied Introduction, 4th ed.
* Richard Johnsonbaugh, Discrete Mathematics, 4th ed.
* Bernard Kolman, Robert C. Busby, & Sharon Ross, Discrete Mathematical Structures for Computer Science, 3rd ed.
* Edward Scheinerman, Mathematics: A Discrete Introduction, 2nd ed.
Real life application of Gaussian Elimination I would normally use Gaussian Elimination to solve a linear system. If we have more unknowns than equations we end up with an infinite number of solutions. Are there any real life applications of these infinite solutions? I can think of solving puzzles like Sudoku but are there others?
One important application is this: Given the corner points of a convex hull $\{\mathbf v_1,\cdots,\mathbf v_m \}$ in $n$ dimensions, s.t. $m > n+1$, and a point $\mathbf c$ inside the convex hull, find an enclosing simplex of $\mathbf c$ (of size $r \le n+1$). To solve the problem, one can find a solution to $\alpha_1 \mathbf v_1+\cdots+\alpha_m \mathbf v_m=\mathbf c$ and $\alpha_1+\cdots+\alpha_m=1$. Once the solution is found, one can use Carathéodory's theorem to reduce the number of non-zero $\alpha_i$'s to $r$.
Prove that $4^{2n} + 10n -1$ is a multiple of 25 Prove that if $n$ is a positive integer then $4^{2n} + 10n - 1$ is a multiple of $25$. I see that proof by induction would be the logical thing here, so I start by trying $n=1$ and it is fine. Then I assume the statement is true and substitute $n+1$ for $n$, so I have the following: $4^{2(n+1)} + 10(n+1) - 1$, and I have to prove that the above is a multiple of 25. I tried simplifying it but I can't seem to get it right. Any ideas? Thanks.
$25\mid 10n-(1-4^{2n}) \iff 5\mid 2n - \frac{1-(-4)^{2n}}{5}.$ Now, via $\dfrac{1-x^k}{1-x} = 1+x+\cdots+x^{k-1}$, we easily calculate that, mod $5$, $$\frac{1-(-4)^{2n}}{1-(-4)} = 1+(-4)+\cdots+(-4)^{2n-1} \equiv \underbrace{1+1+\cdots+1}_{2n\ \text{terms}} \equiv 2n,$$ since $-4\equiv 1 \pmod 5$.
Simplify these expressions with radical sign 2 My question is 1) Rationalize the denominator: $$\frac{1}{\sqrt{2}+\sqrt{3}+\sqrt{5}}$$ My answer is: $$\frac{\sqrt{12}+\sqrt{18}-\sqrt{30}}{18}$$ My question is 2) $$\frac{1}{\sqrt{2}+\sqrt{3}-\sqrt{5}}+\frac{1}{\sqrt{2}-\sqrt{3}-\sqrt{5}}$$ My answer is: $$\frac{1}{\sqrt{2}}$$ I would also like to know whether my solutions are right. Thank you,
$\begin{eqnarray*} (\sqrt{2}+\sqrt{3}+\sqrt{5})(\sqrt{12}+\sqrt{18}-\sqrt{30}) & = & (\sqrt{2}+\sqrt{3}+\sqrt{5})(2\sqrt{3}+3\sqrt{2}-\sqrt{2}\sqrt{3}\sqrt{5})\\& = & 12, \end{eqnarray*}$ if you expand out the terms, so your first answer is incorrect. The denominator should be $12$. $\begin{eqnarray*} (\sqrt{2}+\sqrt{3}-\sqrt{5})(\sqrt{2}-\sqrt{3}-\sqrt{5}) & = & (\sqrt{2}-\sqrt{5})^2-\sqrt{3}^2\\& = & 7-2\sqrt{10}-3\\& = & 2\sqrt{2}(\sqrt{2}-\sqrt{5}), \end{eqnarray*}$ and so when your fractions in the second part are given common denominators, you'll have exactly $\cfrac{1}{\sqrt{2}}$ after cancellation, so your second answer is correct. Note: In general, if you want to see if two fractions are the same (as in the first problem), cross-multiplication is often a useful way to see it.
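A quick floating-point check of both parts (added only as an illustration), which also confirms that the first denominator should be $12$ rather than $18$:

```python
from math import sqrt, isclose

lhs1 = 1 / (sqrt(2) + sqrt(3) + sqrt(5))
rhs_18 = (sqrt(12) + sqrt(18) - sqrt(30)) / 18
rhs_12 = (sqrt(12) + sqrt(18) - sqrt(30)) / 12
print(isclose(lhs1, rhs_18), isclose(lhs1, rhs_12))        # False True

lhs2 = 1 / (sqrt(2) + sqrt(3) - sqrt(5)) + 1 / (sqrt(2) - sqrt(3) - sqrt(5))
print(isclose(lhs2, 1 / sqrt(2)))                           # True
```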
Constructing arithmetic progressions It is known that in the sequence of primes there exists arithmetic progressions of primes of arbitrary length. This was proved by Ben Green and Terence Tao in 2006. However the proof given is a nonconstructive one. I know the following theorem from Burton gives some criteria on how large the common difference must be. Let $n > 2$. If all the terms of the arithmetic progression $$ p, p+d, \ldots, p+(n-1)d $$ are prime numbers then the common difference $d$ is divisible by every prime $q <n$. So for instance if you want a sequence of primes in arithmetic progression of length $5$ ie $$ p, p+d, \ldots, p+4d $$ you need that $d \geq 6$. Using this we can get that the prime $p=5$ and $d = 6$ will result in a sequence primes in arithmetic progression of length $5$. So my question is what are the known techniques for constructing a sequence of primes of length $k$? How would one find the "first" prime in the sequence or even the "largest prime" that would satisfy the sequence (assuming there is one)? Also, while the theorem gives a lower bound for $d$, is it known if it is the sharpest lowest bound there is? NOTE: This is not my area of research so this question is mostly out of curiosity.
This may not answer the question, but I would like to point out that more recent work of Green and Tao has proven even stronger results. Specifically, Green and Tao give exact asymptotics for the number of solutions to systems of linear equations in the prime numbers, and their paper Linear Equations in the Primes was published in the Annals in 2010. In particular, this tells us asymptotics for the number of $k$-term arithmetic progressions in the primes up to $N$. For example, as $N\rightarrow \infty$, we can count the asymptotic number of 4-tuples of primes $p_1<p_2<p_3<p_4\leq N$ which lie in arithmetic progression, and it equals $$(1+o(1))\frac{N^2}{\log^4(N)} \frac{3}{4}\prod_{p\geq 5} \left(1-\frac{3p-1}{(p-1)^3}\right).$$ Do note, however, that Green and Tao's paper made two major assumptions. They assumed the Möbius Nilsquence conjecture (MN) and the Gowers Inverse norm conjecture (GI). In a paper published in the Annals in 2012, Green and Tao resolve the MN conjecture, proving that the Möbius Function is strongly orthogonal to nilsequences. Recently, Green, Tao and Ziegler resolved the Gowers inverse conjecture, and their paper is currently on the arxiv. (It has not yet been published.) This means that we unconditionally have asymptotics for the number of primes in a $k$-term arithmetic progression. If you would like to learn more, I suggest reading Julia Wolf's excellent survey article Arithmetic and polynomial progressions in the primes, d'après Gowers, Green, Tao and Ziegler. It is very recent, as it was written for the Bourbaki lectures only 2 months ago.
$\bar{\partial}$-Poincaré lemma This is the $\bar{\partial}$-Poincaré lemma: Given a holomorphic function $f:U\subset \mathbb{C} \to \mathbb{C}$, locally on $U$ there is a holomorphic function $g$ such that: $$\frac{\partial g}{\partial \bar z}=f$$ The author says that this is a local statement, so we may assume $f$ has compact support and is defined on the whole plane $\mathbb{C}$; my question is why she says that... thanks. *Added*: $f,g$ are supposed to be $C^k$, not holomorphic; by definition $$\frac{\partial g}{\partial \bar z}=0$$ if $g$ were holomorphic...
I don't have the book, and thus I can't check the statement. However, I believe that the statement holds for smooth $f$. Basically we want to construct/find $g$ as the following integral: $$g(z) = \frac{1}{2 \pi i}\int_{w\in \mathbb{C}} \frac{f(w)}{z-w} d\overline{w}\wedge dw$$ In order to do this, $f$ must be defined over the whole complex plane.
Show that for all $\lambda \geq 1~$ $\frac{\lambda^n}{e^\lambda} < \frac{C}{\lambda^2}$ Show that for any $n \in \mathbb N$ there exists $C_n > 0$ such that for all $\lambda \geq 1$ $$ \frac{\lambda^n}{e^\lambda} < \frac{C_n}{\lambda^2}$$ I can see that both sides of the inequality have a limit of $0$ as $\lambda \rightarrow \infty$ since, on the LHS, repeated application of L'Hôpital's rule will render the $\lambda^n$ term as a constant eventually, while the $e^{\lambda}$ term will remain, and the RHS is obvious. I can also see that the denominator of the LHS will become large faster than the RHS denominator, but I can't seem to think of anything that will show that the inequality is true for all the smaller intermediate values.
The function $\lambda \mapsto \lambda^{n+2}$ is strictly increasing for positive $\lambda$ and also $e^{\lambda} > \lambda$. Combining this you get $$e^{\lambda} = \left( e^{\frac{\lambda}{n+2}} \right)^{n+2} > \left( \frac{\lambda}{n + 2} \right)^{n+2}$$ for all positive $\lambda$ and therefore $$\frac{\lambda^n}{e^\lambda} < \frac{\lambda^n (n+2)^{n+2}}{\lambda^{n+2}} = \frac{(n+2)^{n+2}}{\lambda^2}.$$
Why are bump functions compactly supported? Smooth and compactly supported functions are called bump functions. They play an important role in mathematics and physics. In $\mathbb{R}^n$ and $\mathbb{C}^n$, a set is compact if and only if it is closed and bounded. It is clear why we like to work with functions that have a bounded support. But what is the advantage of working with functions that have a support that is also closed? Why do we often work with compactly supported functions, and not just functions with bounded support?
* On spaces such as open intervals and (more generally) domains in $\mathbb R^n$, compactness of support tells us much more than its boundedness. Any function $f\colon (0,1)\to\mathbb R$ has bounded support, since the space $(0,1)$ itself is bounded. But if the support is compact, that means that $f$ vanishes near $0$ and near $1$. (Generally, near the boundary of the domain.)
* On the other hand, when bump functions are considered on infinite-dimensional spaces (which does not happen nearly as often), the support is assumed bounded, not compact. A compact subset of an infinite-dimensional Banach space has empty interior, and so cannot support a nonzero continuous function. If you are interested in this subject (which is a subset of the geometry of Banach spaces), see Smooth Bump Functions and the Geometry of Banach Spaces: A Brief Survey by R. Fry and S. McManus, Expo. Math. 20 (2002): 143-183
How do I show that this function is always $> 0$ Show that $$f(x) = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} > 0 ~~~ \forall x \in \mathbb{R}$$ I can show that the first 3 terms are $> 0$ for all $x$: $(x+1)^2 + 1 > 0$ But, I'm having trouble with the last two terms. I tried to show that the following was true: $\frac{x^3}{3!} \leq \frac{x^4}{4!}$ $4x^3 \leq x^4$ $4 \leq x$ which is not true for all $x$. I tried taking the derivative and all that I could ascertain was that the function became more and more increasing as $x \rightarrow \infty$ and became more and more decreasing as $x \rightarrow -\infty$, but I couldn't seem to prove that there were no roots to go with this property.
Hint: $$f(x) = \frac{1}{4} + \frac{(x + 3/2)^2}{3} +\frac{x^2(x+2)^2}{24}$$
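A one-line verification of the hint's identity with sympy (added purely as a check):

```python
import sympy as sp

x = sp.symbols('x')
f = 1 + x + x**2 / 2 + x**3 / 6 + x**4 / 24
hint = sp.Rational(1, 4) + (x + sp.Rational(3, 2))**2 / 3 + x**2 * (x + 2)**2 / 24
print(sp.simplify(f - hint))   # 0; each summand of the hint is >= 0 and the first is 1/4 > 0
```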
Derivative of an implicit function I am asked to take the derivative of the following equation for $y$: $$y = x + xe^y$$ However, I get lost. I thought that it would be $$\begin{align} & y' = 1 + e^y + xy'e^y\\ & y'(1 - xe^y) = 1 + e^y\\ & y' = \frac{1+e^y}{1-xe^y} \end{align}$$ However, the text book gives me a different answer. Can anyone help me with this? Thank you and sorry if I got any terms wrong, my math studies were not done in English... :)
You can simplify things as follows: $$y' = \frac{1+e^y}{1-xe^y} = \frac{x+xe^y}{x(1-xe^y)} = \frac{y}{x(1-y+x)}$$ Here in the last step we used $y=x+xe^y$ and $xe^y=y-x$.
Prove that a metric space which contains a sequence with no convergent subsequence also contains a cover by open sets with no finite subcover. I really need help with this question: Prove that a metric space which contains a sequence with no convergent subsequence also contains a cover by open sets with no finite subcover.
Let $(a_n)$ be a sequence in the metric space $M$ that doesn't have any convergent subsequence. The set $\{a_n\}$ consists of isolated points (that is, it doesn't have any accumulation points; otherwise you could take a convergent subsequence), and it's infinite (because if it wasn't, one of the points would repeat infinitely in the sequence and we could get a constant subsequence). Now, for each $n$ take an open ball $B_{r_n}(a_n)$ around $a_n$ with radius $r_n$ small enough not to contain any other terms in the sequence, constructing an (infinite) family of open sets $\{B_{r_n}(a_n)\}$. Now if you know some facts about compactness you could argue: The set $\{a_n\}$ is closed, and then since it's a subset of the metric space $M$, if $M$ is compact then so is $\{a_n\}$. However, note that $\{B_{r_n}(a_n)\}$ is an open cover of $\{a_n\}$, but contains no finite subcover. Then $\{a_n\}$ isn't compact, and neither is $M$ (that is, not every open cover of $M$ admits a finite subcover). Or otherwise, as suggested by Martin Sleziak (and showing a direct proof): consider the open cover $\{B_{r_n}(a_n)\} \cup (M \setminus \{a_n\})$, which doesn't admit a finite subcover. It's a nice exercise trying to fill in the details. An example is given in the space of bounded sequences of real numbers, with the supremum norm: for $i \in \mathbb{N}$, let $e_i$ be the sequence with a one at the $i$-th position and zeroes elsewhere; then $(e_i)$ is a sequence with no convergent subsequence and $\{e_i\}$ doesn't have accumulation points (even more: it's uniformly discrete).
In center-excenter configuration in a right angled triangle My question is: Given triangle $ABC$, where angle $C=90°$. Prove that the set $\{ s , s-a , s-b , s-c \}$ is identical to $\{ r , r_1 , r_2 , r_3 \}$. $s=$semiperimeter, $r_1,r_2,r_3$ are the ex-radii. Any help to solve this would be greatly appreciated.
This problem has some interesting facts behind it, so I use a pure geometry method to solve it and show these facts. First we draw a picture: $\triangle ABC$, $I$ is the incenter, $A0,B0,C0$ are the tangent points of the incircle, $R1,R2,R3$ are the excircle centers, and their tangent points are $A1,B1,C1,A2,B2,C2,A3,B3,C3$. For the incircle, we have $CA0=CB0=s-c,BA0=BC0=s-b,AB0=AC0=s-a$. Now look at circle $R2$: $BC2$ and $BA1$ are tangent lines, so $BC2=BA1$; for the same reason, we have $CA1=CB2,AC2=AB2$, and $CB2=AC-AB2$. Since $BA1=BC+CA1=BC+CB2=BC+AC-AB2$ and $BC2=AB+AC2=AB+AB2$, we have $BC+AC-AB2=AB+AB2$, i.e. $AB2=\dfrac{BC+AC-AB}{2}=s-c=CB0, CB2=AC-AB2=AC-CB0=AB0=s-b$. For the same reason, we have $AC2=BC0=s-b$, $BC2=AC0=s-a,BA2=CA0=s-c,BA0=CA2=s-a$. The facts above hold for any triangle, as we have not yet used any restriction on the triangle. Now we check $R1-A2-C-B1$: since $R1A2=R1B1=r1$ and the angle at $C$ is right, $R1-A2-C-B1$ is a square! We get $r1=CA2=s-a$, and for the same reason we get $r2=CB2=s-b$. We also know $r3=R3A3=CB3$; since $\angle B3AC2=\angle R3AB3$ ($AR3$ is a bisector), we get $AC2=AB3$, so $r3=CB3=AC+AB3=AC+AC2=b+s-b=s$. Clearly $r=IA0=CB0=s-c$. Now we show the interesting facts: $r+r1+r2+r3=(s-c)+(s-b)+(s-a)+s=2s=a+b+c$ and $r^2+r1^2+r2^2+r3^2=(s-c)^2+(s-b)^2+(s-a)^2+s^2=4s^2+a^2+b^2+c^2-2s(a+b+c)=4s^2+a^2+b^2+c^2-2s\cdot 2s=a^2+b^2+c^2$. We rewrite these again: in a right triangle, $r+r1+r2+r3=a+b+c$ and $r^2+r1^2+r2^2+r3^2=a^2+b^2+c^2$.
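As a sanity check of the final identities (added purely as an illustration), here is a tiny numeric verification with the 3-4-5 right triangle, using the standard area/semiperimeter formulas for the inradius and exradii:

```python
from math import isclose

a, b, c = 3.0, 4.0, 5.0                          # right triangle, angle C = 90 degrees
s = (a + b + c) / 2
K = a * b / 2                                    # area
r = K / s                                        # inradius
r1, r2, r3 = K / (s - a), K / (s - b), K / (s - c)   # exradii
print(sorted([s, s - a, s - b, s - c]), sorted([r, r1, r2, r3]))   # same set: [1, 2, 3, 6]
print(isclose(r + r1 + r2 + r3, a + b + c),
      isclose(r**2 + r1**2 + r2**2 + r3**2, a**2 + b**2 + c**2))   # True True
```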
Prove that 16, 1156, 111556, 11115556, 1111155556… are squares. I'm 16 years old, and I'm studying for my maths exam coming this Monday. In the chapter "sequences and series", there is this exercise: Prove that a positive integer formed by $k$ times digit 1, followed by $(k-1)$ times digit 5 and ending on one 6, is the square of an integer. I'm not a native English speaker, so my translation of the exercise might be a bit crappy. What it says is that 16, 1156, 111556, 11115556, 1111155556, etc. are all squares of integers. I'm supposed to prove that. I think my main problem is that I don't see the link between these numbers and sequences. Of course, we assume we use a decimal numeral system (= base 10). Can anyone point me in the right direction (or simply prove it, if it is difficult to give a hint without giving the whole proof). I think it can't be that difficult, since I'm supposed to solve it. For sure, by using the word "integer", I mean "natural number" ($\in\mathbb{N}$). Thanks in advance. As TMM pointed out, the square roots are 4, 34, 334, 3334, 33334, etc... This sequence is given by one of the following descriptions:
* $t_n = t_{n-1} + 3\cdot 10^{n-1}$
* $t_n = \lfloor\frac{1}{3}\cdot 10^{n}\rfloor + 1$
* $t_n = t_{n-1} \cdot 10 - 6$
But I still don't see any progress in my proof. A human being can see a pattern in these numbers and can tell it will be correct for $k$ going to $\infty$. But this isn't enough for a mathematical proof.
Multiply one of these numbers by $9$, and you get $100...00400...004$, which is $100...002^2$.
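A quick computational check of this (an illustrative addition): multiplying the $k$-th number by $9$ should give $(10^k+2)^2$, and its square root should be $33\ldots34$.

```python
for k in range(1, 8):
    n = int('1' * k + '5' * (k - 1) + '6')
    root = int('3' * (k - 1) + '4')
    print(n, root * root == n, 9 * n == (10**k + 2)**2)   # both checks hold for every k
```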
Formula to estimate the sum nearly correctly: $\sum_{n=1}^\infty\frac{(-1)^n}{n^3}$ Estimate the sum correct to three decimal places: $$\sum_{n=1}^\infty\frac{(-1)^n}{n^3}$$ This problem is in my homework. I find that n = 22 when using Maple to solve this (with some programming). But in my homework, the teacher said to find the formula for this problem. Thanks :)
For alternating sums $\sum(-1)^n a_n$ with $a_n> 0$ strictly decreasing and tending to $0$, there is a simple means to estimate the remainder $\sum^\infty_{k=n} (-1)^k a_k$: its absolute value is at most the first omitted term, $a_n$.
Evaluating $\int \frac{dx}{x^2 - 2x}$ $$\int \frac{dx}{x^2 - 2x}$$ I know that I have to complete the square, so the problem becomes $$\int \frac{dx}{(x - 1)^2 -1}$$ Then I set up my A, B and C stuff $$\frac{A}{x-1} + \frac{B}{(x-1)^2} + \frac{C}{-1}$$ With that I find $A = -1, B = -1$ and $C = 0$, which I know is wrong. I must be setting up the $A, B, C$ thing wrong but I do not know why.
Added: "I know that I have to complete the square" is ambiguous. I interpreted it as meaning that the OP thought that completing the square was necessary to solve the problem. Completing the square is not a universal tool. To find the integral efficiently, you certainly do not need to complete the square. The simplest approach is to use partial fractions. The bottom factors as $x(x-2)$. Find numbers $A$ and $B$ such that $$\frac{1}{x^2-2x}=\frac{A}{x-2}+\frac{B}{x}.$$
Combination of cards From a deck of 52 cards, how many five card poker hands can be formed if there is a pair (two of the cards are the same number, and none of the other cards are the same number)? I believe you can pick out the first card by ${_4}C_2$, as there are 4 cards which would be the same number (as there are 4 suits). I would pick two from here. From here on though, I am unsure. I believe it involves the numbers 48, 44, and 40, as after every picking you cannot have an identical number anywhere, so there would be 4 less to choose from. However, I don't believe I can just do $_{48}C_1 * _{44}C_1,..$ as I am not simply selecting 1 card from 48 and removing 4 random ones. The answer is $1098240$.
We will use the notation $\binom{n}{r}$, which is more common among mathematicians, where you write ${}_nC_r$. The kind of card that we have a pair of can be chosen in $\binom{13}{1}$ ways. For each choice of kind, the actual cards can be chosen in $\binom{4}{2}$ ways. (By kind we mean things like Ace, or $7$, or Queen.) For each choice made so far, we now count the number of ways to pick the rest of the cards. The kinds of cards we have singletons of can be chosen in $\binom{12}{3}$ ways. For each such choice, the actual cards can be chosen in $4^3$ ways. This is because if, for example, we are to have a $7$, a $10$, and a Queen, the $7$ can be picked in $4$ ways, as can the $10$, as can the Queen. The total number of one pair hands is therefore $$(13)\binom{4}{2}\binom{12}{3}4^3.$$ Compute. We get $1098240$.
Index notation clarification Previously, I have seen matrix notation of the form $T_{ij}$ and all the indices have been in the form of subscripts, such that $T_{ij}x_j$ implies contraction over $j$. However, recently I saw something of the form $T_i^j$ which seems to work not entirely differently from what I was used to. What is the difference? and how do they decide which index to write as a superscript and which a subscript? What is the point of writing them this way? Is there a difference? (A link to a good reference explaining how these indices work would also be appreciated!) Thanks.
Mostly it's just a matter of the author's preference. The staggered index notation $T^i{}_j$ works great in conjunction with the Einstein summation convention, where one of the rules is that an index that is summed over must appear once as a subscript and once as a superscript. Usually the index of an ordinary vector's components is written in superscript, so the contraction becomes $T^i{}_j x^j$. This rule can become relevant when one is working with multiple bases, in which case subscript and superscript indices behave differently under basis change. Writing the matrix with staggered indices then serves as a reminder that you're planning to use the matrix to represent a linear transformation, rather than to represent a bilinear form, for which both indices are always on the same level. This agrees with the fact that the matrix of a linear transformation and the matrix of a bilinear form respond differently to basis changes. These considerations are most weighty in contexts where one needs to juggle a lot of basis changes -- or just to be sure that what one is writing does not depend on the particular choice of basis -- such as differential geometry. On the other hand, in introductory texts where this is less of an issue, there's an argument that explaining the rules for different kinds of indices will just confuse the student without really adding to his understanding (as I may have confused you in the above paragraph).
Set theory puzzles - chess players and mathematicians I'm looking at "Basic Set Theory" by A. Shen. The very first 2 problems are: 1) can the oldest mathematician among chess players and the oldest chess player among mathematicians be 2 different people? and 2) can the best mathematician among chess players and the best chess player among mathematicians be 2 different people? I think the answers are no, and yes, because a person can only have one age, but they can have separate aptitudes for chess playing and for math. Is this correct?
(1) Think of it in terms of sets. Let $M$ be the set of mathematicians, $C$ the set of chess players. Both are asking for the oldest person in $C\cap M$. (2) Absolutely fantastic reasoning, though perhaps less simply set-theoretically described.
Solving for $c$ in $a(b + cd) \equiv 0 \mod e$ If I have a modulo operation like this: $$(a ( b+cd ) ) \equiv 0 \pmod{e},$$ how can I derive $c$ in terms of the other variables present here? I.e., what function $f$ can be used such that $$c = f (a,b,d,e)?$$ And what is the implication of a mod operation's result being $0$ in terms of simplifying the equation? Thank you very much for your time.
EDIT My original solution to this problem was, in hindsight, radically overcomplicated. I have it below the modified post. We have $a(b + dx) \equiv 0 \mod e$, or equivalently $ab + adx \equiv 0$, or equivalently $(ad)x \equiv -ab \mod e$. This is a very well understood subset of the problem on solving $ax \equiv b \mod m$. It turns out that there are solutions iff $(ad, e) | (-ab)$, and if there are any solutions then there are exactly $(ad,e)$ of them. This is the Linear Congruence Theorem. In fact, my previous answer is sort of a skim of the ideas behind the proof, in a sense. And although it gives a clear record of my overcomplicating a problem, I think there is a certain amount of illustrative-ness behind it. Original answer So we have $a(b + cd) \equiv 0\mod e$. If we are lucky, and in particular if $a,b,c,d,e \neq 0$ and $(a,e) = (b,e) = (c,e) = (d,e) = 1$, i.e. that everything is coprime with $e$, then we have that $c \equiv d^{-1} (-b) \mod e$. In fact, we don't actually require $b$ or $c$ to be coprime to $e$ for this to work, as we just wanted to 'cancel' off the $a$ and be able to write down $d^{-1}$. Otherwise, the solution may not be well-defined, or rather is a family of solutions. To vastly simplify, suppose we have something like $6(1 + c)\equiv 0 \mod 18$. Then $c \equiv 2, 5, 8, 11, 14, 17$ are all solutions. The general idea for finding the solutions to problems like $6(1+c) \equiv 0 \mod 18$ involves noting that $(6,18) = 6$, and so by dividing out by $6$ we get $(1+c) \equiv 0 \mod 3$. This is not the same equation, but it tells us that $c \equiv 2 \mod 3$, and a quick check shows that $2,5,8,11,14,17$ are all solutions. What we really looked at were the congruences $(1+c) \equiv 0, 3, 6, 9, 12, 15 \mod 18$, as multiplying all of these by $6$ yields the original congruence. In fact, you might see that there are $6$ of them - and this is not a fluke. So in your case, with $a(b+cd) \equiv 0 \mod e$, you might first do a 'reduction' to account for $(a,e) > 1$. Afterwards, you can effectively ignore the $a$, but remember that it incorporates $(a,e)$ distinct solutions. You are then left with $cd \equiv -b \mod e$, and this is a classic problem. I link to the Linear Congruence Theorem below for more on this. Let's do a quick example, illustrating the method and a potential problem. $3(1 + 2c) \equiv 0 \mod 18$. We see that $(3,18) = 3$, so we look at $(1 + 2c) \equiv 0 \mod 6$ or $2c \equiv 4 \mod 6$. But what we really wanted was $1+2c \equiv 0, 1+2c \equiv 6,$ and $1+2c \equiv 12 \mod 18$. So we get that $2c \equiv 5, 11, 17 \mod 18$. We could proceed, but it's easy to see here that none of these have a solution. Working $\mod 18$, the left side is always even and the right sides are always odd. It turns out that the congruence $ax \equiv b \mod m$ has a solution iff $(a,m)|b$. So if we define $e' := e/(a,e)$, then $a(b + xd) \equiv 0 \mod e$ will have solutions iff $(d,e') | (e'-b)$. In the (non)example above, we required $(2,6) = 2 | 1$, which it doesn't. But if you repeat with $3(2 + 2c) \equiv 0 \mod 18$, there will be solutions. (Answer: They are $2, 5, 8, 11, 14, 17$, with a total of $6$ coming from the $3$ from $(3,18) = 3$, each having $2 = (2, 6)$ solutions of their own. Food for thought: is it possible that there are fewer than $(a,e)(d,e')$ solutions due to some overlap?) To save this answer from becoming too long, I will recommend sources of study.
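If a concrete implementation helps, here is a short Python sketch of the Linear Congruence Theorem machinery applied to $a(b+dx)\equiv 0 \pmod e$ (an illustrative addition; the function name and sample values are made up, the sample reproduces the $3(2+2c)\equiv 0 \pmod{18}$ example above, and `pow(a, -1, m)` needs Python 3.8+):

```python
from math import gcd

def solve_linear_congruence(a, b, m):
    """All x in [0, m) with a*x = b (mod m); empty list if there is no solution."""
    g = gcd(a, m)
    if b % g:
        return []                               # no solutions unless (a, m) | b
    a, b, m2 = a // g, b // g, m // g
    x0 = (b * pow(a, -1, m2)) % m2              # a//g is invertible mod m//g
    return [x0 + k * m2 for k in range(g)]      # exactly g = (a, m) solutions mod m

# the congruence a*(b + d*x) = 0 (mod e) is the same as (a*d)*x = -a*b (mod e)
a, b, d, e = 3, 2, 2, 18
print(solve_linear_congruence(a * d, -a * b % e, e))   # [2, 5, 8, 11, 14, 17]
```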
For further reading, I recommend looking into what wikipedia calls the Linear Congruence Theorem, which talks of the general solvability of equations like $ax \equiv b \mod m$, and the Chinese Remainder Theorem. In addition, almost any introductory book on number theory will cover this sort of reasoning. I am particularly fond of recommending Rosen's Elementary Number Theory and Ireland and Rosen's Classical Introduction to Modern Number Theory, which is harder, and by a different Rosen.
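For a concrete feel for the statements above, here is a small Python sketch (not part of the original answer) that brute-forces the worked examples and checks the solution count predicted by the Linear Congruence Theorem; the helper name `solutions` is just for illustration:

```python
# Brute-force check of a*(b + c*d) = 0 (mod e) against the worked examples above.
from math import gcd

def solutions(a, b, d, e):
    """All residues c in [0, e) with a*(b + c*d) congruent to 0 mod e."""
    return [c for c in range(e) if (a * (b + c * d)) % e == 0]

print(solutions(3, 2, 2, 18))   # expected [2, 5, 8, 11, 14, 17]
print(solutions(3, 1, 2, 18))   # expected [] (the non-example discussed above)

# Solvability test from the Linear Congruence Theorem applied to (a*d)x = -a*b (mod e):
a, b, d, e = 3, 2, 2, 18
g = gcd(a * d, e)
print(g, (-a * b) % g == 0)     # when solvable, there are exactly g solutions mod e
```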
On the zeta sum $\sum_{n=1}^\infty[\zeta(5n)-1]$ and others For p = 2, we have, $\begin{align}&\sum_{n=1}^\infty[\zeta(pn)-1] = \frac{3}{4}\end{align}$ It seems there is a general form for odd p. For example, for p = 5, define $z_5 = e^{\pi i/5}$. Then, $\begin{align} &5 \sum_{n=1}^\infty[\zeta(5n)-1] = 6+\gamma+z_5^{-1}\psi(z_5^{-1})+z_5\psi(z_5)+z_5^{-3}\psi(z_5^{-3})+z_5^{3}\psi(z_5^{3}) = 0.18976\dots \end{align}$ with the Euler-Mascheroni constant $\gamma$ and the digamma function $\psi(z)$. * *Anyone knows how to prove/disprove this? *Also, how do we split $\psi(e^{\pi i/p})$ into its real and imaginary parts so as to express the above purely in real terms? More details in my blog.
$$ \begin{align} \sum_{n=1}^\infty\left[\zeta(pn)-1\right] & = \sum_{n=1}^\infty \sum_{k=2}^\infty \frac{1}{k^{pn}} \\ & = \sum_{k=2}^\infty \sum_{n=1}^\infty (k^{-p})^n \\ & = \sum_{k=2}^\infty \frac{1}{k^p-1} \end{align} $$ Let $\omega_p = e^{2\pi i/p} = z_p^2$, then we can decompose $1/(k^p-1)$ into partial fractions $$ \frac{1}{k^p-1} = \frac{1}{p}\sum_{j=0}^{p-1} \frac{\omega_p^j}{k-\omega_p^j} = \frac{1}{p}\sum_{j=0}^{p-1} \omega_p^j \left[\frac{1}{k-\omega_p^j}-\frac{1}{k}\right] $$ where we are able to add the term in the last equality because $\sum_{j=0}^{p-1}\omega_p^j = 0$. So $$ p\sum_{n=1}^\infty\left[\zeta(pn)-1\right] = \sum_{j=0}^{p-1}\omega_p^j\sum_{k=2}^{\infty}\left[\frac{1}{k-\omega_p^j}-\frac{1}{k}\right] $$ Using the identities $$ \psi(1+z) = -\gamma-\sum_{k=1}^\infty\left[\frac{1}{k+z}-\frac{1}{k}\right] = -\gamma+1-\frac{1}{1+z}-\sum_{k=2}^\infty\left[\frac{1}{k+z}-\frac{1}{k}\right]\\ \psi(1+z) = \psi(z)+\frac{1}{z} $$ for $z$ not a negative integer, and $$ \sum_{k=2}^\infty\left[\frac{1}{k-1}-\frac{1}{k}\right]=1 $$ by telescoping, so finally $$ \begin{align} p\sum_{n=1}^\infty\left[\zeta(pn)-1\right] & = 1+\sum_{j=1}^{p-1}\omega_p^j\left[1-\gamma-\frac{1}{1-\omega_p^j}-\psi(1-\omega_p^j)\right] \\ & = \gamma-\sum_{j=1}^{p-1}\omega_p^j\psi(2-\omega_p^j) \end{align} $$ So far this applies for all $p>1$. Your identities will follow by considering that when $p$ is odd $\omega_p^j = -z_p^{2j+p}$, so $$ \begin{align} p\sum_{n=1}^\infty\left[\zeta(pn)-1\right] & = \gamma+\sum_{j=1}^{p-1}z_p^{2j+p}\psi(2+z_p^{2j+p})\\ & = \gamma+\sum_{j=1}^{p-1}z_p^{2j+p}\left[\frac{1}{1+z_p^{2j+p}}+\frac{1}{z_p^{2j+p}}+\psi(z_p^{2j+p})\right] \\ & = \gamma+p-1+S_p+\sum_{j=1}^{p-1}z_p^{2j+p}\psi(z_p^{2j+p}) \end{align} $$ where $$ \begin{align} S_p & = \sum_{j=1}^{p-1}\frac{z_p^{2j+p}}{1+z_p^{2j+p}} \\ & = \sum_{j=1}^{(p-1)/2}\left(\frac{z_p^{2j-1}}{1+z_p^{2j-1}}+\frac{z_p^{1-2j}}{1+z_p^{1-2j}}\right) \\ & = \sum_{j=1}^{(p-1)/2}\frac{2+z_p^{2j-1}+z_p^{1-2j}}{2+z_p^{2j-1}+z_p^{1-2j}} \\ & = \frac{p-1}{2} \end{align} $$ which establishes your general form. I don't have an answer for your second question at this time.
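As a purely numerical sanity check of the final identity (this assumes the mpmath library and is only an illustration, not part of the derivation), one can compare both sides for $p=5$:

```python
# Compare 5*sum_{n>=1}[zeta(5n)-1] with the digamma expression from the question.
from mpmath import mp, zeta, digamma, euler, exp, pi, mpc, nsum, inf

mp.dps = 30
lhs = 5 * nsum(lambda n: zeta(5 * n) - 1, [1, inf])

z = exp(mpc(0, 1) * pi / 5)      # z_5 = e^{i pi / 5}
rhs = 6 + euler + sum(z**k * digamma(z**k) for k in (-1, 1, -3, 3))

print(lhs)        # about 0.18976...
print(rhs.real)   # should agree; the imaginary part is ~0
```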
Why is $\log_{-2}{4}$ complex? With the logarithm being the inverse of the exponential function, it follows that $ \log_{-2}{4}$ should equal $2$, since $(-2)^2=4$. The change of base law, however, implies that $\log_{-2}{4}=\frac{\log{4}}{\log{-2}}$, which is a complex number. Why does this occur when there is a real solution?
The exponential function is not invertible on the complexes. Correspondingly, the complex logarithm is not a function, it is a multi-valued function. For example, $\log(e)$ is not $1$ -- instead it is the set of all values $1 + 2 \pi \mathbf{i} n$ over all integers $n$. How are you defining $\log_a(b)$? If you are defining it by $\log(b) / \log(a)$, then it too is a multi-valued function. The values of $\log(4)/\log(-2)$ ranges over all values $(\ln 4 + 2 \pi \mathbf{i} m)/(\ln 2 + \pi \mathbf{i} + 2 \pi \mathbf{i} n)$, where $m$ and $n$ are integers. Do note that the set of values of this multi-valued function does include $2$; e.g. when $m =1$ and $n=0$.
"Negative" versus "Minus" As a math educator, do you think it is appropriate to insist that students say "negative $0.8$" and not "minus $0.8$" to denote $-0.8$? The so called "textbook answer" regarding this question reads: A number and its opposite are called additive inverses of each other because their sum is zero, the identity element for addition. Thus, the numeral $-5$ can be read "negative five," "the opposite of five," or "the additive inverse of five." This question involves two separate, but related issues; the first is discussed at an elementary level here. While the second, and more advanced, issue is discussed here. I also found this concerning use in elementary education. I recently found an excellent historical/cultural perspective on What's so baffling about negative numbers? written by a Fields medalist.
As a retired teacher, I can say that I tried very hard for many years to get my students to use the term "negative" instead of "minus", but after so many years of trying, I was finally happy if they could understand the concept, and stopped worrying so much about whether they used the correct terminology!
Which of the following are Dense in $\mathbb{R}^2$? Which of the following sets are dense in $\mathbb R^2$ with respect to the usual topology. * *$\{ (x, y)\in\mathbb R^2 : x\in\mathbb N\}$ *$\{ (x, y)\in\mathbb R^2 : x+y\in\mathbb Q\}$ *$\{ (x, y)\in\mathbb R^2 : x^2 + y^2 = 5\}$ *$\{ (x, y)\in\mathbb R^2 : xy\neq 0\}$. Any hint is welcome.
* *No. It's a bunch of parallel lines. These are vertical and they go through the integer points on the $x$-axis, so the set is not dense. *Yes. It is a union of parallel lines with slope $-1$ and $y$-intercept at the various rationals, and these lines come arbitrarily close to every point, so it's dense in the plane. *No. It's just a circle (of radius $\sqrt 5$), which is certainly not dense. *Yes. It's the plane with the $x$ and $y$ axes excised, and that is dense.
Matrix commutator question Here's a nice question I heard on IRC, courtesy of "tmyklebu." Let $A$, $B$, and $C$ be $2\times 2$ complex matrices. Define the commutator $[X,Y]=XY-YX$ for any matrices $X$ and $Y$. Prove $$[[A,B]^2,C]=0.$$
Here's a better argument (not posted at midnight...) which shows that the result holds over any field: we don't need the matrices to be complex. As in the other answer, the trace of $[A,B]$ is $0$. Therefore, the characteristic polynomial of $[A,B]$ is $x^2+\det[A,B]$. By the Cayley-Hamilton Theorem, $$[A,B]^2 = -\det[A,B]I.$$ Therefore, $[A,B]^2$ is a scalar matrix, and therefore lies in the center of $M_{2\times 2}(\mathbf{F})$. We conclude that $[[A,B]^2,C]=0$ for any matrix $C\in M_{2\times 2}(\mathbf{F})$.
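If you want to see the identity in action before believing the algebra, here is a quick random test in Python/numpy; it is only a sanity check, since rounding means the commutator is zero only up to machine precision:

```python
# Numerical check of [[A,B]^2, C] = 0 for random 2x2 complex matrices.
import numpy as np

rng = np.random.default_rng(0)
for _ in range(5):
    A, B, C = (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)) for _ in range(3))
    K = A @ B - B @ A                  # [A, B]
    M = K @ K @ C - C @ K @ K          # [[A, B]^2, C]
    print(np.max(np.abs(M)))           # ~1e-15, i.e. zero up to rounding
```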
Calculating statistic for multiple runs I have a simple, general question regarding calculating statistics for N runs of the same experiment. Suppose I would like to calculate the mean of values returned by some Test. Each run of the test generates $ \langle x_1 ... x_n \rangle$ , possibly of different length. Let's say the statistic is the mean. Which approach would be better and why: * *Sum all values from M runs, and then divide by the number of values *for each run calculate the average, and then average across all averages I believe one of the above might be under/overestimating the mean slightly and I don't know which. Thanks for your answers.
$\def\E{{\rm E}}\def\V{{\rm Var}}$Say you have $M$ runs of lengths $n_1,\dots,n_M$. Denote the $j$th value in the $i$th run by $X^i_j$, and let the $X^i_j$ be independent and identically distributed, with mean $\mu$ and variance $\sigma^2$. In your first approach you calculate $$\mu_1 = \frac{1}{n_1+\cdots n_M} \sum_{i=1}^M \sum_{j=1}^{n_i} X^i_j$$ and in your second approach you calculate $$\mu_2 = \frac{1}{M} \sum_{i=1}^M \left( \frac{1}{n_i} \sum_{j=1}^{n_i} X^i_j\right)$$ You can compute their expectations: $$\E(\mu_1) = \frac{1}{n_1+\cdots n_M} \sum_{i=1}^M \sum_{j=1}^{n_i} \mu = \frac{(n_1+\cdots n_M)\mu}{n_1+\cdots n_M} = \mu$$ vs $$\E(\mu_2) = \frac{1}{M} \sum_{i=1}^M \left( \frac{1}{n_i} \sum_{j=1}^{n_i}\mu \right) = \frac{1}{M} ( M\mu ) = \mu$$ so the estimator is unbiased in both cases. However, if you calculate the variances you will find that $$\V(\mu_1) = \frac{\sigma^2}{n_1+\cdots n_M}$$ and $$\V(\mu_2) = \frac{1}{M} \left( \sum_{i=1}^M \frac{1}{n_i} \right) \sigma^2$$ With a little effort, you can show that $$\V(\mu_1)\leq \V(\mu_2)$$ where the inequality is strict except when $n_1=n_2=\cdots=n_M$, i.e. when all of the runs produce the same amount of output. If you need to be convinced of this, work through the details in the case $M=2$, $n_1=1$ and $n_2=N >1$. Therefore it is better to take your first approach, of summing up the output of all runs and dividing by the total length of the output. The expectation is the same in either case, but the variance is lower with the first approach.
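A small Monte Carlo experiment (illustration only; the run lengths and the distribution are arbitrary choices) shows the two estimators agreeing in mean but differing in variance exactly as computed above:

```python
# Compare the pooled mean (approach 1) with the mean of per-run means (approach 2).
import numpy as np

rng = np.random.default_rng(1)
lengths = [1, 10, 100]          # hypothetical run lengths n_1, ..., n_M
mu1s, mu2s = [], []
for _ in range(20000):
    runs = [rng.normal(loc=5.0, scale=2.0, size=n) for n in lengths]
    mu1s.append(np.concatenate(runs).mean())          # approach 1: pooled mean
    mu2s.append(np.mean([r.mean() for r in runs]))    # approach 2: mean of means
print(np.mean(mu1s), np.var(mu1s))   # mean ~ 5, smaller variance
print(np.mean(mu2s), np.var(mu2s))   # mean ~ 5, larger variance
```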
Limits of Subsequences If $s=\{s_n\}$ and $t=\{t_n\}$ are two nonzero decreasing sequences converging to 0, such that $s_n ≤t_n$ for all $n$. Can we find subsequences $s ′$ of $s$ and $t ′$ of $t$ such that $\lim \frac{s'}{t'}=0$ , i.e., $s ′$ decreases more rapidly than $t ′$ ?
Yes, we can. So we have two positive decreasing sequences with $s_n \leq t_n$, and $s_n \to 0, t_n \to 0$. Then we can let $t' \equiv t$. As $\{t_n\}$ is positive, $t_1 > 0$. As $s_n \to 0$, there is some $k$ s.t. $s_k < t_1/1$. Similarly, there is some $l > k$ s.t. $s_l < t_2/2$. Continuing in this fashion, we see that we can find a sequence $s'$ so that $s'_n < t_n/n$, so that $\dfrac{s'_n}{t_n} < \dfrac{1}{n}$.
Recurrence relation $T_{k+1} = 2T_k + 2$ I have a sequence of numbers written in the binary system as follows: 0, 10, 110, 1110, 11110, 111110, 1111110, 11111110, ... I want to understand: is there a general formula for my sequence? I found that the sequence satisfies the rule (Number * 2) + 2, but I don't know whether this rule is correct, or whether there is a closed form (such as there is for Fibonacci) for my problem.
$T_{k+1} = 2T_k + 2$. Adding $2$ to both sides, we get that $$\left(T_{k+1}+2 \right) = 2 T_k + 4 = 2 \left( T_k + 2\right)$$ Calling $T_k+2 = u_k$, we get that $u_{k+1} = 2u_k$. Hence, $u_{k+1} = 2^{k+1}u_0$. This gives us $$\left(T_{k}+2 \right) = 2^k \left( T_0 + 2\right) \implies T_k = 2^{k+1} - 2 +2^kT_0$$ Since, $T_0 = 0$, we get that $$T_k = 2^{k+1} - 2$$ where my index starts from $0$.
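A short Python check of both the closed form and the original binary pattern (purely illustrative):

```python
# Verify that the terms 0, 10, 110, 1110, ... (binary) are T_k = 2^(k+1) - 2
# and satisfy T_{k+1} = 2*T_k + 2.
T = 0
for k in range(8):
    print(k, T, bin(T)[2:])          # k, decimal value, binary pattern
    assert T == 2 ** (k + 1) - 2     # closed form derived above
    T = 2 * T + 2                    # recurrence
```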
Show that $\frac{n}{\sigma(n)} > (1-\frac{1}{p_1})(1-\frac{1}{p_2})\cdots(1-\frac{1}{p_r})$ If $n=p_1^{k_1}p_2^{k_2}\cdots p_r^{k_r}$ is the prime factorization of $n>1$ then show that : $$1>\frac{n}{ \sigma (n)} > \left(1-\frac{1}{p_1}\right)\left(1-\frac{1}{p_2}\right)\cdots\cdots\left(1-\frac{1}{p_r}\right)$$ I have solved the $1^\text{st}$ inequality($1>\frac{n}{ \sigma (n)}$) and tried some manipulations on the right hand side of the $2^\text{nd}$ inequality but can't get much further.Please help.
Note that the function $\dfrac{n}{\sigma(n)}$ is multiplicative. Hence, if $n = p_1^{k_1}p_2^{k_2} \ldots p_m^{k_m}$, then we have that $$\dfrac{n}{\sigma(n)} = \dfrac{p_1^{k_1}}{\sigma \left(p_1^{k_1} \right)} \dfrac{p_2^{k_2}}{\sigma \left(p_2^{k_2} \right)} \ldots \dfrac{p_m^{k_m}}{\sigma \left(p_m^{k_m} \right)}$$ Hence, it suffices to prove it for $n = p^k$ where $p$ is a prime and $k \in \mathbb{Z}^+$. Let $n=p^k$; then $\sigma(n) = \dfrac{p^{k+1}-1}{p-1}$. This gives us that $$\dfrac{n}{\sigma(n)} = p^k \times \dfrac{p-1}{p^{k+1}-1} = \dfrac{p^{k+1} - p^k}{p^{k+1} - 1} = 1 - \dfrac{p^k-1}{p^{k+1}-1}.$$ Since $p > 1$, we have that $p(p^k-1) < p^{k+1} - 1 \implies \dfrac{p^k-1}{p^{k+1}-1} < \dfrac1p \implies 1 - \dfrac{p^k-1}{p^{k+1}-1} > 1 - \dfrac1p$. Hence, if $n=p^k$, then $$\dfrac{n}{\sigma(n)} > \left( 1 - \dfrac1p\right)$$ Since $\dfrac{n}{\sigma(n)}$ is multiplicative, we have the desired result.
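A quick numerical spot-check of the inequality with sympy (the sample values of $n$ are arbitrary; this is of course no substitute for the proof above):

```python
# Check 1 > n/sigma(n) > prod_{p | n} (1 - 1/p) for a few sample values of n.
from sympy import divisor_sigma, primefactors

for n in [2, 12, 360, 1001, 18144]:
    ratio = n / int(divisor_sigma(n))
    bound = 1.0
    for p in primefactors(n):
        bound *= 1 - 1 / p
    print(n, round(ratio, 6), round(bound, 6), 1 > ratio > bound)   # last entry: True
```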
hausdorff, intersection of all closed sets Can you please help me with this question? Let $X$ be a topological space. Show that the following two conditions are equivalent: * *$X$ is Hausdorff *for all $x\in X$, the intersection of all closed sets that contain a neighbourhood of $x$ is $\{x\}$. Thanks a lot!
HINTS: * *If $x$ and $y$ are distinct points in a Hausdorff space, they have disjoint open nbhds $V_x$ and $V_y$, and $X\setminus V_y$ is a closed set containing $V_x$. *If $F$ is a closed set containing an open nbhd $V$ of $x$, then $V$ and $X\setminus F$ are disjoint open sets.
A space is normal iff every pair of disjoint closed subsets have disjoint closed neighbourhoods. A space is normal iff every pair of disjoint closed subsets have disjoint closed neighbourhoods. Given space $X$ and two disjoint closed subsets $A$ and $B$. I have shown necessity: If X is normal then by Urysohn's lemma there exists continuous map $f:X \to [0,1]$ such that $f(A)=0$ and $f(B)=1$, then $f^{-1}(0)$ and $f^{-1}(1)$ are two disjoint closed neighbourhoods of A and B. But how to show the sufficiency?
If $A$ and $B$ have disjoint closed neighborhoods $U$ and $V$, then by definition of neighborhood we know that $A\subseteq \mathrm{int}(U)$ and $B\subseteq\mathrm{int}(V)$. Now, the interior of $U$ is an open neighborhood of $A$, the interior of $V$ is an open neighborhood of $B$, and so $A$ and $B$ have disjoint open neighborhoods. Thus, if a space satisfies your requirement, then it is normal.
Why do $\mathbb{C}$ and $\mathbb{H}$ generate all of $M_2(\mathbb{C})$? For this question, I'm identifying the quaternions $\mathbb{H}$ as a subring of $M_2(\mathbb{C})$, so I view them as the set of matrices of form $$ \begin{pmatrix} a & b \\ -\bar{b} & \bar{a} \end{pmatrix}. $$ I'm also viewing $\mathbb{C}$ as the subfield of scalar matrices in $M_2(\mathbb{C})$, identifying $z\in\mathbb{C}$ with the diagonal matrix with $z$ along the main diagonal. Since $\mathbb{H}$ contains $j=\begin{pmatrix} 0 & 1 \\ -1 & 0\end{pmatrix}$ and $k=\begin{pmatrix} 0 & i \\ i & 0\end{pmatrix}$, I know that $$ ij+k=\begin{pmatrix} 0 & 2i\\ 0 & 0 \end{pmatrix} $$ and $$ -ij+k= \begin{pmatrix} 0 & 0\\ 2i & 0 \end{pmatrix} $$ are in the generated subring. I'm just trying to find matrices of form $\begin{pmatrix} a & 0 \\ 0 & 0 \end{pmatrix}$ and $\begin{pmatrix} 0 & 0\\ 0 & d \end{pmatrix}$ for $a,d\neq 0$ to conclude the generated subring is the whole ring. How can I get these remaining two pieces? Thanks.
Hint: Use linear combinations of $$ jk=\pmatrix{i&0\cr0&-i\cr}\qquad\text{and the scalar}\qquad \pmatrix{i&0\cr 0&i\cr}. $$
a function that maps half planes Define $H^{+}=\{z:y>0\}$ $H^{-}=\{z:y<0\}$ $L^{+}=\{z:x>0\}$ $L^{-}=\{z:x<0\}$ $f(z)=\frac{z}{3z+1}$ maps which portion onto which from above and vice-versa? I will be glad if any one tell me how to handle this type of problem? by inspection?
HINTS * *Fractional transformations/Möbius transformations take circles and lines to circles and lines, i.e. they are 'circilinear.' They also preserve connected regions. *If you find out what happens to the boundaries, you'll know almost everything (except for in which side of the boundary the image resides); in one of those silly word plays, the image of the boundary is the boundary of the image. *Once you know where the upper half plane, say, goes, you know where the lower half plane goes automatically.
Nearest matrix in doubly stochastic matrix set Suppose $\mathcal{D}_N$ denote an $N\times N$ doubly stochastic matrix, given any element $M\in \mathcal{D}_N$ , the singular value decomposition for $M$ is $$ M=USV'$$ where $U$ and $V$ are two $N\times N$ orthogonal matrix and $S$ is a $N \times N$ diagonal matrix Let $P$ be the 'closest' orthogonal matrix to $M$, i.e. $P=\arg\min_{X\in\mathcal{O}}||X-M||_F^2$,$\mathcal{O}$ represents the $N\times N$ orthogonal matrix set. Note such $P$ may be not unique. In this case, we choose any of it. On conclusion about $P$ is $P=UV'$, where $U$ and $V$ are defined before(although can be not unique, we just choose any of them) $M_1 \in \mathcal{D}_N$, which is 'closest' to $P$. More specifically $$ M_1 = \arg\min_{X\in\mathcal{D}} ||X - P||_F ^2 $$ Similarly, If $M_1$ is not unique, we choose any of it(This should not happen actually. Since we may image it as a 'ball' approaching a 'polygon', should have only one minimum) My question is : The statement: $M_1=M$ if and only if $M$ is a permutation matrix Does this statement always hold true? Actually, if $M$ is a permutation matrix, $M_1=M$, this is obvious, since $S=I$, and $P=M$. However, does another direction always hold true? If so, how to prove this, otherwise, how to give a counter-example? Thanks for any suggestions!
I didn't exactly get your question. But the solution for the optimization problem you are looking at is always a permutation matrix. This follows from Birkhoff's theorem. Birkhoff's theorem states that every doubly stochastic matrix is a convex combination of the permutation matrices. Hence, permutation matrices form the corners of the convex set of all doubly stochastic matrices. The objective function you have here is a convex function. Thus the minimum should be attained at one of the corner points, which are all permutation matrices.
Multiplicative Selfinverse in Fields I assume there are only two multiplicative self inverse in each field with characteristice bigger than $2$ (the field is finite but I think it holds in general). In a field $F$ with $\operatorname{char}(F)>2$ a multiplicative self inverse $a \in F$ is an element such that $$ a \cdot a = 1.$$ I think in each field it is $1$ and $-1$. Any ideas how to proof that?
Hint $\rm\ x^2\! =\! 1\!\iff\! (x\!-\!1)(x\!+\!1) = 0\! \iff\! x = \pm1,\:$ by $\rm\:ab=0\:\Rightarrow\: a=0\:\ or\:\ b=0\:$ in a field. This may fail if the latter property fails, i.e. if nontrivial zero-divisors exist. Consider, for example, $\rm\ x^2 = 1\:$ has $4$ roots $\rm\:x = \pm1, \pm 3\:$ in $\rm\:\mathbb Z/8 = $ integers mod $8,\:$ i.e. $\rm\:odd^2 \equiv 1\pmod 8$. Rings satsifying the latter property (no zero-divisors) are called (integral) domains. They are characterized by a generalization of the above, viz. a ring $\rm\: D\:$ is a domain $\iff$ every nonzero polynomial $\rm\ f(x)\in D[x]\ $ has at most $\rm\ deg\ f\ $ roots in $\rm\:D.\:$ For the simple proof see my post here, where I illustrate it constructively in $\rm\: \mathbb Z/m\: $ by showing that, given any $\rm\:f(x)\:$ with more roots than its degree,$\:$ we can quickly compute a nontrivial factor of $\rm\:m\:$ via a $\rm\:gcd$. The quadratic case of this result is at the heart of many integer factorization algorithms, which try to factor $\rm\:m\:$ by searching for a nontrivial square root in $\rm\: \mathbb Z/m,\:$ e.g. a square root of $1$ that is not $\:\pm 1$.
Evaluation of $\lim\limits_{x\rightarrow0} \frac{\tan(x)-x}{x^3}$ One of the previous posts made me think of the following question: Is it possible to evaluate this limit without L'Hopital and Taylor? $$\lim_{x\rightarrow0} \frac{\tan(x)-x}{x^3}$$
Here is a different approach. Let $$L = \lim_{x \to 0} \dfrac{\tan(x) - x}{x^3}$$ Replacing $x$ by $2y$, we get that \begin{align} L & = \lim_{y \to 0} \dfrac{\tan(2y) - 2y}{(2y)^3} = \lim_{y \to 0} \dfrac{\dfrac{2 \tan(y)}{1 - \tan^2(y)} - 2y}{(2y)^3}\\ & = \lim_{y \to 0} \dfrac{\dfrac{2 \tan(y)}{1 - \tan^2(y)} - 2 \tan(y) + 2 \tan(y) - 2y}{(2y)^3}\\ & = \lim_{y \to 0} \dfrac{\dfrac{2 \tan^3(y)}{1 - \tan^2(y)} + 2 \tan(y) - 2y}{(2y)^3}\\ & = \lim_{y \to 0} \left(\dfrac{2 \tan^3(y)}{8y^3(1 - \tan^2(y))} + \dfrac{2 \tan(y) - 2y}{8y^3} \right)\\ & = \lim_{y \to 0} \left(\dfrac{2 \tan^3(y)}{8y^3(1 - \tan^2(y))} \right) + \lim_{y \to 0} \left(\dfrac{2 \tan(y) - 2y}{8y^3} \right)\\ & = \dfrac14 \lim_{y \to 0} \left(\dfrac{\tan^3(y)}{y^3} \dfrac1{1 - \tan^2(y)} \right) + \dfrac14 \lim_{y \to 0} \left(\dfrac{\tan(y) - y}{y^3} \right)\\ & = \dfrac14 + \dfrac{L}4 \end{align} Hence, $$\dfrac{3L}{4} = \dfrac14 \implies L = \dfrac13$$ EDIT In Hans Lundmark answer, evaluating the desired limit boils down to evaluating $$S=\lim_{x \to 0} \dfrac{\sin(x)-x}{x^3}$$ The same idea as above can be used to evaluate $S$ as well. Replacing $x$ by $2y$, we get that \begin{align} S & = \lim_{y \to 0} \dfrac{\sin(2y) - 2y}{(2y)^3} = \lim_{y \to 0} \dfrac{2 \sin(y) \cos(y) - 2y}{8y^3}\\ & = \lim_{y \to 0} \dfrac{2 \sin(y) \cos(y) - 2 \sin(y) + 2 \sin(y) - 2y}{8y^3}\\ & = \lim_{y \to 0} \dfrac{2 \sin(y) - 2y}{8y^3} + \lim_{y \to 0} \dfrac{2 \sin(y) \cos(y)-2 \sin(y)}{8y^3}\\ & = \dfrac14 \lim_{y \to 0} \dfrac{\sin(y) - y}{y^3} - \dfrac14 \lim_{y \to 0} \dfrac{\sin(y) (1 - \cos(y))}{y^3}\\ & = \dfrac{S}4 - \dfrac14 \lim_{y \to 0} \dfrac{\sin(y) 2 \sin^2(y/2)}{y^3}\\ & = \dfrac{S}4 - \dfrac18 \lim_{y \to 0} \dfrac{\sin(y)}{y} \dfrac{\sin^2(y/2)}{(y/2)^2}\\ & = \dfrac{S}4 - \dfrac18 \lim_{y \to 0} \dfrac{\sin(y)}{y} \lim_{y \to 0} \dfrac{\sin^2(y/2)}{(y/2)^2}\\ & = \dfrac{S}4 - \dfrac18\\ \dfrac{3S}4 & = - \dfrac18\\ S & = - \dfrac16 \end{align}
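Numerically, both limits are easy to confirm to a few digits (a rough check only, since the subtraction loses precision for very small $x$):

```python
# Check (tan x - x)/x^3 -> 1/3 and (sin x - x)/x^3 -> -1/6 as x -> 0.
from math import tan, sin

for x in [0.1, 0.01, 0.001]:
    print(x, (tan(x) - x) / x**3, (sin(x) - x) / x**3)
# The two columns approach 0.333333... and -0.166666... respectively.
```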
Combining a radical and simplifying? How would I combine and simplify the following radical: $$\sqrt {\frac{A^2}{2}} - \sqrt \frac{A^2}{8}$$
$$\sqrt {\frac{A^2}{2}} - \sqrt \frac{A^2}{8}\\=\frac{|A|}{\sqrt 2}-\frac{|A|}{2\sqrt 2}\\=\frac{2|A|-|A|}{2\sqrt 2}\\=\frac{|A|}{2\sqrt 2}\frac{\sqrt 2}{\sqrt 2}\\=\frac{\sqrt 2|A|}{4}$$
Differential equation of $y = e^{rx}$ I am trying to find what values of r in $y = e^{rx}$ satsify $2y'' + y' - y = 0$ I thought I was being clever and knew how to do this so this is how I proceeded. $$y' = re^{rx}$$ $$y'' = r^2 e^{rx}$$ $$2(r^2 e^{rx}) +re^{rx} -e^{rx} = 0 $$ I am not sure how to proceed from here, the biggest thing I am confused on is that I am working with a variable x, with no input conditions at all, and a variable r (the constant) so how do I do this?
Here's the best part: $e^{rx}$ is never zero. Thus, if we factor that out, it is simply a quadratic in $r$ that remains.
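If it helps, here is a short sympy sketch of the remaining step — after factoring out $e^{rx}$ you are left with the quadratic $2r^2+r-1=0$ — together with a direct solve of the ODE as a cross-check (illustrative only):

```python
# Solve the characteristic quadratic and cross-check by solving the ODE itself.
from sympy import symbols, solve, Function, dsolve, Eq

r, x = symbols('r x')
print(solve(2*r**2 + r - 1, r))          # [-1, 1/2]

y = Function('y')
print(dsolve(Eq(2*y(x).diff(x, 2) + y(x).diff(x) - y(x), 0), y(x)))
# general solution: y(x) = C1*exp(-x) + C2*exp(x/2)
```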
Infinite products - reference needed! I am looking for a small treatment of basic theorems about infinite products ; surprisingly enough they are nowhere to be found after googling a little. The reason for this is that I am beginning to read Davenport's Multiplicative Number Theory, and the treatment of L-functions in there requires to understand convergence/absolute convergence of infinite products, which I know little about. Most importantly I'd like to know why $$ \prod (1+|a_n|) \to a < \infty \quad \Longrightarrow \quad \prod (1+ a_n) \to b \neq 0. $$ I believe I'll need more properties of products later on, so just a proof of this would be appreciated but I'd also need the reference. Thanks in advance,
I will answer your question "Most importantly I'd like to know why $$ \prod (1+|a_n|) \to a < \infty \quad \Longrightarrow \quad \prod (1+ a_n) \to b \neq 0. "$$ We will first prove that if $\sum \lvert a_n \rvert < \infty$, then the product $\prod_{n=1}^{\infty} (1+a_n)$ converges. Note that the condition you have $\prod (1+|a_n|) \to a < \infty$ is equivalent to the condition that $\sum \lvert a_n \rvert < \infty$, which can be seen from the inequality below. $$\sum \lvert a_n \rvert \leq \prod (1+|a_n|) \leq \exp \left(\sum \lvert a_n \rvert \right)$$ Further, we will also show that the product converges to $0$ if and only if one of its factors is $0$. If $\sum \lvert a_n \rvert$ converges, then there exists some $M \in \mathbb{N}$ such that for all $n > M$, we have that $\lvert a_n \rvert < \frac12$. Hence, we can write $$\prod (1+a_n) = \prod_{n \leq M} (1+a_n) \prod_{n > M} (1+a_n)$$ Throwing away the finitely many terms till $M$, we are interested in the infinite product $\prod_{n > M} (1+a_n)$. We can define $b_n = a_{n+M}$ and hence we are interested in the infinite product $\prod_{n=1}^{\infty} (1+b_n)$, where $\lvert b_n \rvert < \dfrac12$. The complex logarithm satisfies $1+z = \exp(\log(1+z))$ whenever $\lvert z \rvert < 1$ and hence $$ \prod_{n=1}^{N} (1+b_n) = \prod_{n=1}^{N} e^{\log(1+b_n)} = \exp \left(\sum_{n=1}^N \log(1+b_n)\right)$$ Let $f(N) = \displaystyle \sum_{n=1}^N \log(1+b_n)$. By the Taylor series expansion, we can see that $$\lvert \log(1+z) \rvert \leq 2 \lvert z \rvert$$ whenever $\lvert z \rvert < \frac12$. Hence, $\lvert \log(1+b_n) \rvert \leq 2 \lvert b_n \rvert$. Now since $\sum \lvert a_n \rvert$ converges, so does $\sum \lvert b_n \rvert$ and hence so does $\sum \lvert \log(1+b_n) \rvert$. Hence, $\lim_{N \rightarrow \infty} f(N)$ exists. Call it $F$. Now since the exponential function is continuous, we have that $$\lim_{N \to \infty} \exp(f(N)) = \exp(F)$$ This also shows that why the limit of the infinite product $\prod_{n=1}^{\infty}(1+a_n)$ cannot be $0$, unless one of its factors is $0$. From the above, we see that $\prod_{n=1}^{\infty}(1+b_n)$ cannot be $0$, since $\lvert F \rvert < \infty$. Hence, if the infinite product $\prod_{n=1}^{\infty}(1+a_n)$ is zero, then we have that $\prod_{n=1}^{M}(1+a_n) = 0$. But this is a finite product and it can be $0$ if and only if one of the factors is zero. Most often this is all that is needed when you are interested in the convergence of the product expressions for the $L$ functions.
Matrix with no eigenvalues Here is another problem from Golan. Problem: Let $F$ be a finite field. Show there exists a symmetric $2\times 2$ matrix over $F$ with no eigenvalues in $F$.
The solution is necessarily split into two cases, because the theory of quadratic equations has a different appearance in characteristic two as opposed to odd characteristic. Let $p=\mathrm{char}\, F$. Assume first that $p>2$. Consider the matrix $$ M=\pmatrix{a&b\cr b&c\cr}. $$ Its characteristic equation is $$ \lambda^2-(a+c)\lambda+(ac-b^2)=0.\tag{1} $$ The discriminant of this equation is $$ D=(a+c)^2-4(ac-b^2)=(a-c)^2+(2b)^2. $$ By choosing $a,c,b$ cleverly we see that we can arrange the quantities $a-c$ and $2b$ to have any value that we wish. It is a well-known fact that in a finite field of odd characteristic, any element can be written as a sum of two squares. Therefore we can arrange $D$ to be a non-square, proving the claim in this case. If $p=2$, then equation $(1)$ has roots in $F$ if and only if $tr((ac-b^2)/(a+c)^2)=0.$ By selecting $a$ and $c$ to be any distinct elements of $F$, we can then select $b$ in such a way that this trace condition is not met, and the claim follows in this case also.
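For small fields one can also just search exhaustively; the following Python sketch (not part of the proof) finds such a symmetric matrix for a few primes $p$, using the characteristic polynomial $\lambda^2-(a+c)\lambda+(ac-b^2)$:

```python
# Find a symmetric 2x2 matrix over F_p whose characteristic polynomial has no root in F_p.
def no_eigenvalue_example(p):
    for a in range(p):
        for b in range(p):
            for c in range(p):
                # characteristic polynomial x^2 - (a+c)x + (ac - b^2), evaluated at every x in F_p
                if all((x*x - (a + c)*x + (a*c - b*b)) % p != 0 for x in range(p)):
                    return [[a, b], [b, c]]
    return None

for p in (2, 3, 5, 7):
    print(p, no_eigenvalue_example(p))
```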
Notation for an indecomposable module. If $V$ is a 21-dimensional indecomposable module for a group algebra $kG$ (21-dimensional when considered as a vector space over $k$), which has a single submodule of dimension 1, what is the most acceptable notation for the decomposition of $V$, as I have seen both $1\backslash 20$ and $20/1$ used (or are both equally acceptable)?
My feeling is that this notation is not sufficiently standard for you to use either choice without explanation, hence whichever choice you make, you should signal it carefully in your paper. Given that, either choice looks fine to me.
Numerical Analysis over Finite Fields Notwithstanding that it isn't numerical analysis if it's over finite fields, but what topics that are traditionally considered part of numerical analysis still have some substance to them if the reals are replaced with finite fields or an algebraic closure thereof? Perhaps using Hamming distance as a metric for convergence purposes, with convergence of an iteration in a discrete setting just meaning that the Hamming distance between successive iterations becomes zero i.e. the algorithm has a fixed-point. I ask about still having substance because I suspect that in the ff setting, na topics will mostly either not make sense, or be trivial.
The people who factor large numbers using sieve algorithms (the quadratic sieve, the special and general number field sieves) wind up with enormous (millions by millions) systems of linear equations over the field of two elements, and they need to put a lot of thought into the most efficient ways to solve these systems if they want their algorithms to terminate before the sun does.
In which case $M_1 \times N \cong M_2 \times N \Rightarrow M_1 \cong M_2$ is true? Usually for modules $M_1,M_2,N$ $$M_1 \times N \cong M_2 \times N \Rightarrow M_1 \cong M_2$$ is wrong. I'm just curious, but are there any cases or additional conditions where it gets true? James B.
A standard result in this direction is the Krull-Schmidt Theorem: Theorem (Krull-Schmidt for modules) Let $E$ be a nonzero module that has both ACC and DCC on submodules (that is, $E$ is both artinian and noetherian). Then $E$ is a direct sum of (finitely many) indecomposable modules, and the direct summands are unique up to a permutation and isomorphism. In particular: Corollary. If $M_1$, $M_2$, and $N$ are both noetherian and artinian, and $M_1\times N \cong M_2\times N$, then $M_1\cong M_2$. Proof. Both $M_1\times N$ and $M_2\times N$ satisfy the hypothesis of the Krull-Schmidt Theorem; decompose $M_1$, $M_2$, and $N$ into a direct sum of indecomposable modules. The uniqueness clause of Krull-Schmidt yields the isomorphism of $M_1$ with $M_2$. $\Box$
Why is there no functor $\mathsf{Group}\to\mathsf{AbGroup}$ sending groups to their centers? The category $\mathbf{Set}$ contains as its objects all small sets and arrows all functions between them. A set is "small" if it belongs to a larger set $U$, the universe. Let $\mathbf{Grp}$ be the category of small groups and morphisms between them, and $\mathbf{Abs}$ be the category of small abelian groups and its morphisms. I don't see what it means to say there is no functor $f: \mathbf{Grp} \to \mathbf{Abs}$ that sends each group to its center, when $U$ isn't even specified. Can anybody explain?
This is very similar to Arturo Magidin's answer, but offers another point of view. Consider the dihedral group $D_n=\mathbb Z_n\rtimes \mathbb Z_2$ with $n\ge 3$ odd (so that $Z(D_n)=1$). From the splitting lemma we get a short exact sequence $$1\to\mathbb Z_n\rightarrow D_n\xrightarrow{\pi} \mathbb Z_2\to 1$$ and an arrow $\iota\colon \mathbb Z_2\to D_n$ such that $\pi\circ \iota=1_{\mathbb Z_2}$. Hence the composite morphism $$\mathbb Z_2\xrightarrow{\iota} D_n\xrightarrow{\pi}\mathbb Z_2$$ is an iso and would be mapped by the centre to an iso $$\mathbb Z_2\to 1\to \mathbb Z_2$$ which is impossible. (One can also recognize a split mono and split epi above and analyze how they behave under an arbitrary functor). Therefore the centre can't be functorial.
Perturbation problem This is a mathematica exercise that I have to do, where $y(x) = x - \epsilon \sin(2y)$ and it wants me to express the solution $y$ of the equation as a power series in $ \epsilon$.
We're looking for a perturbative expansion of the solution $y(x;\epsilon)$ to $$ y(x) = x-\epsilon \sin(2y)$$ I don't know if you're asking for an expansion to all orders. If so, I have no closed form to offer. But, for illustration, the first four terms in the series may be found as follows by use of the addition theorem (this procedure may be continued to any desired order). Expand $$y(x;\epsilon)=y_0(x)+\epsilon y_1(x)+\epsilon^2 y_2(x)+\epsilon^3 y_3(x) + o(\epsilon^3).$$ Then clearly $y_0(x)=x$, and $$\epsilon \,y_1(x) +o(\epsilon)=-\epsilon\sin(2x+2\epsilon y_1+o(\epsilon))=-\epsilon\sin(2x)\underbrace{\cos(2\epsilon y_1+o(\epsilon))}_{\sim 1}-\epsilon\cos(2x)\underbrace{\sin(2\epsilon y_1+o(\epsilon))}_{\sim 2\epsilon y_1 = O(\epsilon)}. $$ This implies $y_1(x)= -\sin(2x)$. Similarly, at the next two orders we find $y_2(x)=-2y_1(x)\cos(2x)=2\sin(2x)\cos(2x)=\sin(4x)$ and $y_3(x)=2y_1(x)^2\sin(2x)-2y_2(x)\cos(2x)=2\sin^3(2x)-2\cos(2x)\sin(4x)$. Hence $$y(x;\epsilon) = x - \epsilon \sin(2x) + \epsilon^2 \sin(4x)+\epsilon^3 \left(2\sin^3(2x)-2\cos(2x)\sin(4x)\right) +o(\epsilon^3). $$
Orthonormal basis Consider $\mathbb{R}^3$ together with inner product $\langle (x_1, x_2, x_3), (y_1, y_2, y_3) \rangle = 2x_1 y_1+x_2 y_2+3 x_3 y_3$. Use the Gram-Schmidt procedure to find an orthonormal basis for $W=\text{span} \left\{(-1, 1, 0), (-1, 1, 2) \right\}$. I don't get how the inner product $\langle (x_1, x_2, x_3), (y_1, y_2, y_3) \rangle = 2 x_1 y_1+x_2 y_2+3 x_3 y_3$ would affect the approach to solve this question.. When I did the gram-schmidt, I got $v_1=(-1, 1, 0)$ and $v_2=(0, 0, 2)$ but then realized that you have to do something with the inner product before finding the orthonormal basis. Can someone please help me? Update: So far I got $\{(\frac{-1}{\sqrt{3}}, \frac{1}{\sqrt{3}}, 0), (0, 0, \frac{2}{\sqrt{12}})\}$ as my orthonormal basis but I'm not sure if I am doing it right with the given inner product.
The choice of inner product defines the notion of orthogonality. The usual notion of being "perpendicular" depends on the notion of "angle", which turns out to depend on the notion of "dot product". If you change the way we measure the "dot product" to give a more general inner product then we change what we mean by "angle", and so have a new notion of being "perpendicular", which in general we call orthogonality. So when you apply the Gram-Schmidt procedure to these vectors you will NOT necessarily get vectors that are perpendicular in the usual sense (their dot product might not be $0$). Let's apply the procedure. It says that to get an orthogonal basis we start with one of the vectors, say $u_1 = (-1,1,0)$, as the first element of our new basis. Then we do the following calculation to get the second vector in our new basis: $u_2 = v_2 - \frac{\langle v_2, u_1\rangle}{\langle u_1, u_1\rangle} u_1$ where $v_2 = (-1,1,2)$. Now $\langle v_2, u_1\rangle = 3$ and $\langle u_1, u_1\rangle = 3$, so that we are given: $u_2 = v_2 - u_1 = (0,0,2)$. So your basis is correct. Let's check that these vectors are indeed orthogonal. Remember, this is with respect to our new inner product. We find that: $\langle u_1, u_2\rangle = 2(-1)(0) + (1)(0) + 3(0)(2) = 0$ (here we also happened to get a basis that is perpendicular in the traditional sense; this was lucky). Now is the basis orthonormal? (In other words, are these unit vectors?) No they aren't, so to get an orthonormal basis we must divide each by its length. Now this is not the length in the usual sense of the word, because yet again this is something that depends on the inner product you use. The usual Pythagorean way of finding the length of a vector is $||x||=\sqrt{x_1^2 + ... + x_n^2} = \sqrt{x \cdot x}$: it is just the square root of the dot product with itself. So with more general inner products we can define a "length" via $||x|| = \sqrt{\langle x,x\rangle}$. With this length we see that $||u_1|| = \sqrt{2(-1)(-1) + (1)(1) + 3(0)(0)} = \sqrt{3}$ and $||u_2|| = \sqrt{2(0)(0) + (0)(0) + 3(2)(2)} = 2\sqrt{3}$ (notice how these are different from what you would usually get using the Pythagorean way). Thus an orthonormal basis is given by: $\{\frac{u_1}{||u_1||}, \frac{u_2}{||u_2||}\} = \{(\frac{-1}{\sqrt{3}}, \frac{1}{\sqrt{3}}, 0), (0,0,\frac{1}{\sqrt{3}})\}$
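If you want to double-check the arithmetic, here is a tiny numpy script that redoes the computation with the weighted inner product (numpy is used purely as a calculator here):

```python
# Gram-Schmidt with the inner product <x, y> = 2*x1*y1 + x2*y2 + 3*x3*y3.
import numpy as np

W = np.diag([2.0, 1.0, 3.0])
ip = lambda x, y: x @ W @ y

v1, v2 = np.array([-1.0, 1.0, 0.0]), np.array([-1.0, 1.0, 2.0])
u1 = v1
u2 = v2 - ip(v2, u1) / ip(u1, u1) * u1
e1, e2 = u1 / np.sqrt(ip(u1, u1)), u2 / np.sqrt(ip(u2, u2))

print(u2)                        # [0. 0. 2.]
print(ip(e1, e2))                # 0.0, orthogonal w.r.t. the weighted inner product
print(e1, e2)                    # [-0.577  0.577  0.], [0. 0. 0.577]
print(ip(e1, e1), ip(e2, e2))    # both 1.0
```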
Evaluating $\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\exp(n x-\frac{x^2}{2}) \sin(2 \pi x)\ dx$ I want to evaluate the following integral ($n \in \mathbb{N}\setminus \{0\}$): $$\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\exp\left(n x-\frac{x^2}{2}\right) \sin(2 \pi x)\ dx$$ Maple and WolframAlpha tell me that this is zero and I also hope it is zero, but I don't see how I can argue for it. I thought of rewriting the sine via $\displaystyle \sin(x)=\frac{e^{ix}-e^{-ix}}{2i}$ or using Euler's identity on $\exp(n x-\frac{x^2}{2})$. However, in both ways I am stuck... Thanks for any hint.
$$I = \frac{e^{n^2/2}}{\sqrt{2\pi}} \int_{-\infty}^{+\infty} e^{-\frac{(x-n)^2}{2}} \sin (2\pi x) \, dx \stackrel{x = x-n}{=} \frac{e^{n^2/2}}{\sqrt{2\pi}} \int_{-\infty}^{+\infty} e^{-\frac{x^2}{2}} \sin (2\pi x) \, dx$$ Now divide the integral into two parts: $$\int_{-\infty}^{+\infty} e^{-\frac{x^2}{2}} \sin (2\pi x) \, dx = \int_{-\infty}^{0} e^{-\frac{x^2}{2}} \sin (2\pi x) \, dx + \int_{0}^{+\infty} e^{-\frac{x^2}{2}} \sin (2\pi x) \, dx$$ Take one of them and substitute $t=-x$: $$\int_{-\infty}^{0} e^{-\frac{x^2}{2}} \sin (2\pi x) \, dx = -\int_0^{+\infty}e^{-\frac{t^2}{2}} \sin (2\pi t) \, dt$$ Because these integrals are finite, i.e.: $$\int_0^{+\infty} \left| e^{-\frac{t^2}{2}} \sin (2\pi t) \right| \, dt \le \int_0^{+\infty}e^{-\frac{t^2}{2}} \, dt = \sqrt{\frac{\pi}{2}}$$ We can write $I = 0$ and we are not dealing with any kind of indeterminate form like $\infty - \infty$.
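For what it's worth, a direct numerical evaluation (here for $n=3$; the cutoff $\pm 20$ is safe because the integrand is negligible beyond it) confirms the cancellation. This is only a sanity check, not part of the proof:

```python
# Trapezoidal evaluation of (1/sqrt(2*pi)) * int exp(n*x - x^2/2) * sin(2*pi*x) dx for n = 3.
import numpy as np

n = 3
x = np.linspace(-20, 20, 400001)
f = np.exp(n * x - x**2 / 2) * np.sin(2 * np.pi * x) / np.sqrt(2 * np.pi)
dx = x[1] - x[0]
integral = (f.sum() - 0.5 * (f[0] + f[-1])) * dx   # trapezoidal rule
print(integral)   # something tiny, ~1e-13: zero up to rounding error
```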
What is the result of sum $\sum\limits_{i=0}^n 2^i$ Possible Duplicate: the sum of powers of $2$ between $2^0$ and $2^n$ What is the result of $$2^0 + 2^1 + 2^2 + \cdots + 2^{n-1} + 2^n\ ?$$ Is there a formula on this? and how to prove the formula? (It is actually to compute the time complexity of a Fibonacci recursive method.)
Let us take a particular example that is large enough to illustrate the general situation. Concrete experience should precede the abstract. Let $n=8$. We want to show that $2^0+2^1+2^2+\cdots +2^8=2^9-1$. We could add up on a calculator, and verify that the result holds for $n=8$. However, we would not learn much during the process. We will instead look at the sum written backwards, so at $$2^8+2^7+2^6+2^5+2^4+2^3+2^2+2^1+2^0.$$ A kangaroo is $2^9$ feet from her beloved $B$. She takes a giant leap of $2^8$ feet. Now she is $2^8$ feet from $B$. She takes a leap of $2^7$ feet. Now she is $2^7$ feet from $B$. She takes a leap of $2^6$ feet. And so on. After a while she is $2^1$ feet from $B$, and takes a leap of $2^0$ feet, leaving her $2^0$ feet from $B$. The total distance she has covered is $2^8+2^7+2^6+\cdots+2^0$. It leaves her $2^0$ feet from $B$, and therefore $$2^8+2^7+2^6+\cdots+2^0+2^0=2^9.$$ Since $2^0=1$, we obtain by subtraction that $2^8+2^7+\cdots +2^0=2^9-1$. We can write out the same reasoning without the kangaroo. Note that $2^0+2^0=2^1$, $2^1+2^1=2^2$, $2^2+2^2=2^3$, and so on until $2^8+2^8=2^9$. Therefore $$(2^0+2^0)+2^1+2^2+2^3+2^4+\cdots +2^8=2^9.$$ Subtract the front $2^0$ from the left side, and $2^0$, which is $1$, from the right side, and we get our result.
Integral of $\int 2\,\sin^{2}{x}\cos{x}\,dx$ I am asked as a part of a question to integrate $$\int 2\,\sin^{2}{x}\cos{x}\,dx$$ I managed to integrate it using integration by inspection: $$\begin{align}\text{let } y&=\sin^3 x\\ \frac{dy}{dx}&=3\,\sin^2{x}\cos{x}\\ \text{so }\int 2\,\sin^{2}{x}\cos{x}\,dx&=\frac{2}{3}\sin^3x+c\end{align}$$ However, looking at my notebook the teacher did this: $$\int -\left(\frac{\cos{3x}-\cos{x}}{2}\right)$$ And arrived to this result: $$-\frac{1}{6}\sin{3x}+\frac{1}{2}\sin{x}+c$$ I'm pretty sure my answer is correct as well, but I'm curious to find out what how did do rewrite the question in a form we can integrate.
Another natural approach is the substitution $u=\sin x$. The path your instructor chose is less simple. We can rewrite $\sin^2 x$ as $1-\cos^2x$, so we are integrating $2\cos x-2\cos^3 x$. Now use the identity $\cos 3x=4\cos^3 x-3\cos x$ to conclude that $2\cos^3 x=\frac{1}{2}\left(\cos 3x+3\cos x\right)$. Then $$2\cos x-2\cos^3 x=2\cos x-\tfrac{1}{2}\cos 3x-\tfrac{3}{2}\cos x=\tfrac{1}{2}\cos x-\tfrac{1}{2}\cos 3x=-\left(\frac{\cos 3x-\cos x}{2}\right),$$ which is exactly your teacher's integrand. Remark: The identity $\cos 3x=4\cos^3 x-3\cos x$ comes up occasionally, for example in a formula for solving certain classes of cubic equations. The same identity comes up when we are proving that the $60^\circ$ angle cannot be trisected by straightedge and compass.
A series with prime numbers and fractional parts Considering $p_{n}$ the nth prime number, then compute the limit: $$\lim_{n\to\infty} \left\{ \dfrac{1}{p_{1}} + \frac{1}{p_{2}}+\cdots+\frac{1}{p_{n}} \right\} - \{\log{\log n } \}$$ where $\{ x \}$ denotes the fractional part of $x$.
This is by no means a complete answer but a sketch on how to possibly go about. To get the constant, you need some careful computations. First get an asymptotic for $ \displaystyle \sum_{n \leq x} \dfrac{\Lambda(n)}{n}$ as $\log(x) - 2 + o(1)$. To get this asymptotic, you need Stirling's formula and the fact that $\psi(x) = x + o(x)$ i.e. the PNT. Then relate $ \displaystyle \sum_{\overset{p \leq x}{p- \text{prime}}} \dfrac{\Lambda(p)}{p}$ to $ \displaystyle \sum_{n \leq x} \dfrac{\Lambda(n)}{n}$. Essentially, you can get $$ \displaystyle \sum_{\overset{p \leq x}{p- \text{prime}}} \dfrac{\Lambda(p)}{p} = \displaystyle \sum_{n \leq x} \dfrac{\Lambda(n)}{n} + C + o(1)$$ Getting this constant $C$ is also hard. You can try your luck with partial summation. Then relate $ \displaystyle \sum_{\overset{p \leq x}{p- \text{prime}}} \dfrac1{p}$ to $\displaystyle \sum_{\overset{p \leq x}{p- \text{prime}}} \dfrac{\Lambda(p)}{p}$ i.e. $\displaystyle \sum_{\overset{p \leq x}{p- \text{prime}}} \dfrac{\log(p)}{p}$ by partial summation to get $$\displaystyle \sum_{\overset{p \leq x}{p- \text{prime}}} \dfrac1{p} = \log(\log(x)) + C_1 + o(1)$$ You might need to invoke PNT here as well. The $C_1$ here obviously depends on he constant $C$ and constant $2$ before. EDIT Thinking about it there might be a way to avoid PNT to get the constant.
Is $A^{q+2}=A^2$ in $M_2(\mathbb{Z}/p\mathbb{Z})$? I'm wondering, why is it that for $q=(p^2-1)(p^2-p)$, that $A^{q+2}=A^2$ for any $A\in M_2(\mathbb{Z}/p\mathbb{Z})$? It's not hard to see that $GL_2(\mathbb{Z}/p\mathbb{Z})$ has order $(p^2-1)(p^2-p)$, and so $A^q=1$ if $A\in GL_2(\mathbb{Z}/p\mathbb{Z})$, and so the equation holds in that case. But if $A$ is not invertible, why does the equality still hold?
If $A$ is not invertible, then its characteristic polynomial is either $x^2$ or $x(x-a)$ for some $a\in\mathbb{Z}/p\mathbb{Z}$. In the former case, by the Cayley-Hamilton Theorem we have $A^2 = 0$, hence $A^{q+2}=A^2$. In the latter case, the matrix is similar to a diagonal matrix, with $0$ in one diagonal and $a$ in the other. So, up to conjugation, we have $$A^{q+2}=\left(\begin{array}{cc} 0 & 0\\ 0 & a\end{array}\right)^{q+2} = \left(\begin{array}{cc} 0 & 0\\ 0 & a^{q+2} \end{array}\right).$$ But $a^{p} = a$. Since $q = p^4 -p^3-p^2 + p$, we have $$a^{q} = \frac{a^{p^4}a^p}{a^{p^3}a^{p^2}} = 1$$ so $a^{q+2} = a^2$, hence $A^{q+2}=A^2$.
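Since the matrices are small, the claim can also be verified exhaustively for small primes; the following brute-force Python check (illustration only, with naive modular matrix arithmetic) agrees with the argument above:

```python
# Exhaustively verify A^(q+2) = A^2 for all 2x2 matrices over Z/pZ, q = (p^2-1)(p^2-p).
import itertools

def matmul(X, Y, p):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) % p for j in range(2)] for i in range(2)]

def matpow(A, e, p):
    R = [[1, 0], [0, 1]]
    while e:
        if e & 1:
            R = matmul(R, A, p)
        A = matmul(A, A, p)
        e >>= 1
    return R

def check(p):
    q = (p**2 - 1) * (p**2 - p)
    for a, b, c, d in itertools.product(range(p), repeat=4):
        A = [[a, b], [c, d]]
        if matpow(A, q + 2, p) != matpow(A, 2, p):
            return False
    return True

print(check(2), check(3), check(5))   # True True True
```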
Parametric equation of a cone. I have a cone with vertex (a, b, c) and base circumference with center $(x_0,y_0)$ and radius R. I can't understand what is the parametric representation of three dimensional space inside the cone. Any suggestions please?
The parametric equation of the circle is: $$ \gamma(u) = (x_0 + R\cos u, y_0 + R\sin u, 0) $$ Each point on the cone lies on a line that passes through $p(a, b, c)$ and a point on the circle. Therefore, the direction vector of such a line is: $$ \gamma(u) - p = (x_0 + R\cos u, y_0 + R\sin u, 0) - (a, b, c) = (x_0 - a + R\cos u, y_0 - b + R\sin u, -c) $$ And since the line passes through $p(a, b, c)$, the parametric equation of the line is: $$ p + v\left(\gamma(u) - p\right) = (a, b, c) + v \left((x_0 - a + R\cos u, y_0 - b + R\sin u, -c)\right) $$ Simplify to get the parametric equation of the cone: $$ \sigma(u, v) = \left(a(1-v) + v(x_0 + R\cos u), b(1-v) + v(y_0 + R\sin u), c(1 - v)\right) $$ Here is a plot of the cone for $p(0, 0, 2)$, $(x_0, y_0) = (-1, -1)$ and $R = 1$, when $u$ scans $[0, 2\pi]$ and $v$ scans $[0, 1]$:
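The plot referred to above is not reproduced here; a minimal matplotlib sketch that regenerates it from the parametrization, with the stated values $p=(0,0,2)$, $(x_0,y_0)=(-1,-1)$, $R=1$ (the mesh resolutions are arbitrary), would be:

```python
# Plot the parametrized cone sigma(u, v) for u in [0, 2*pi], v in [0, 1].
import numpy as np
import matplotlib.pyplot as plt

a, b, c = 0.0, 0.0, 2.0          # vertex
x0, y0, R = -1.0, -1.0, 1.0      # base circle

u, v = np.meshgrid(np.linspace(0, 2 * np.pi, 60), np.linspace(0, 1, 30))
X = a * (1 - v) + v * (x0 + R * np.cos(u))
Y = b * (1 - v) + v * (y0 + R * np.sin(u))
Z = c * (1 - v)

ax = plt.figure().add_subplot(projection='3d')
ax.plot_surface(X, Y, Z, alpha=0.7)
plt.show()
```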
Signature of a manifold as an invariant Could you help me to see why signature is a HOMOTOPY invariant? Definition is below (from Stasheff) The \emph{signature (index)} $\sigma(M)$ of a compact and oriented $n$ manifold $M$ is defined as follows. If $n=4k$ for some $k$, we choose a basis $\{a_1,...,a_r\}$ for $H^{2k}(M^{4k}, \mathbb{Q})$ so that the \emph{symmetric} matrix $[<a_i \smile a_j, \mu>]$ is diagonal. Then $\sigma (M^{4k})$ is the number of positive diagonal entries minus the number of negative ones. Otherwise (if $n$ is not a multiple of 4) $\sigma(M)$ is defined to be zero \cite{char}.
You should be using a more invariant definition of the signature. First, cohomology and Poincaré duality are both homotopy invariant. It follows that the abstract vector space $H^{2k}$ equipped with the intersection pairing is a homotopy invariant. Now I further claim that the signature is an invariant of real vector spaces equipped with a nondegenerate bilinear pairing (this is just Sylvester's law of inertia). So after tensoring with $\mathbb{R}$ the conclusion follows.
Positive Semi-Definite matrices and subtraction I have been wondering about this for some time, and I haven't been able to answer the question myself. I also haven't been able to find anything about it on the internet. So I will ask the question here: Question: Assume that $A$ and $B$ both are positive semi-definite. When is $C = (A-B)$ positive semi-definite? I know that I can figure it out for given matrices, but I am looking for a necessary and sufficient condition. It is of importance when trying to find solutions to conic-inequality systems, where the cone is the cone generated by all positive semi-definite matrices. The question I'm actually interested in finding nice result for are: Let $x \in \mathbb{R}^n$, and let $A_1,\ldots,A_n,B$ be positive semi-definite. When is $(\sum^n_{i=1}x_iA_i) - B$ positive semi-definite? I feel the answer to my first question should yield the answer to the latter. I am looking for something simpler than actually calculating the eigenvalues.
There's a form of Sylvester's criterion for positive semi-definiteness, which unfortunately requires a lot more computations than the better known test for positive definiteness. Namely, all principal minors (not just the leading ones) must be nonnegative. Principal minors are obtained by deleting some of the rows and the same-numbered columns. Source The book Matrix Analysis by Horn and Johnson is the best reference for positive (semi)definiteness that I know.
Traces of all positive powers of a matrix are zero implies it is nilpotent Let $A$ be an $n\times n$ complex nilpotent matrix. Then we know that because all eigenvalues of $A$ must be $0$, it follows that $\text{tr}(A^n)=0$ for all positive integers $n$. What I would like to show is the converse, that is, if $\text{tr}(A^n)=0$ for all positive integers $n$, then $A$ is nilpotent. I tried to show that $0$ must be an eigenvalue of $A$, then try to show that all other eigenvalues must be equal to 0. However, I am stuck at the point where I need to show that $\det(A)=0$. May I know of the approach to show that $A$ is nilpotent?
If the eigenvalues of $A$ are $\lambda_1$, $\dots$, $\lambda_n$, then the eigenvalues of $A^k$ are $\lambda_1^k$, $\dots$, $\lambda_n^k$. It follows that if all powers of $A$ have zero trace, then $$\lambda_1^k+\dots+\lambda_n^k=0\qquad\text{for all $k\geq1$.}$$ Using Newton's identities to express the elementary symmetric functions of the $\lambda_i$'s in terms of their power sums, we see that all the coefficients of the characteristic polynomial of $A$ (except that of greatest degree, of course) are zero. This means that $A$ is nilpotent.
What is the correct way to solve $\sin(2x)=\sin(x)$ I've found two different ways to solve this trigonometric equation $\begin{align*} \sin(2x)=\sin(x) \Leftrightarrow \\\\ 2\sin(x)\cos(x)=\sin(x)\Leftrightarrow \\\\ 2\sin(x)\cos(x)-\sin(x)=0 \Leftrightarrow\\\\ \sin(x) \left[2\cos(x)-1 \right]=0 \Leftrightarrow \\\\ \sin(x)=0 \vee \cos(x)=\frac{1}{2} \Leftrightarrow\\\\ x=k\pi \vee x=\frac{\pi}{3}+2k\pi \vee x=\frac{5\pi}{3}+2k\pi \space, \space k \in \mathbb{Z} \end{align*}$ The second way was: $\begin{align*} \sin(2x)=\sin(x)\Leftrightarrow \\\\ 2x=x+2k\pi \vee 2x=\pi-x+2k\pi\Leftrightarrow \\\\ x=2k\pi \vee3x=\pi +2k\pi\Leftrightarrow \\\\x=2k\pi \vee x=\frac{\pi}{3}+\frac{2k\pi}{3} \space ,\space k\in \mathbb{Z} \end{align*}$ What is the correct one? Thanks
These answers are equivalent and both are correct. Placing angle $x$ on a unit circle, your first decomposition gives all angles at the far west and east sides, then all the angles $60$ degrees north of east, then all the angles $60$ degrees south of east. Your second decomposition takes all angles at the far east side first. Then it takes all angles spaced one-third around the circle starting at 60 degrees north of east. You have the same solution set either way.
Motivation for Koszul complex Koszul complex is important for homological theory of commutative rings. However, it's hard to guess where it came from. What was the motivation for Koszul complex?
In this answer I would rather focus on why the Koszul complex is so widely used. In abstract terms, the Koszul complex arises as the easiest way to combine an algebra with a coalgebra in the presence of quadratic data. You can find the modern generalization of the Koszul duality described in Aaron's comment by reading Loday, Valette, Algebraic Operads (mostly chapters 2-3). To my knowledge the Koszul complex is extremely useful because you can use it even with certain $A_\infty$-structures arising from deformation quantization of Poisson structures, and you can relate it to the other "most used resolution in homological algebra", i.e. the bar resolution. For a quick review of this fact, please check my answer in Homotopy equivalent chain complexes. As you can see it is a flexible object which has the property of being extremely "explicit". This has helped its diffusion in the mathematical literature a lot.
Generating function of Lah numbers Let $L(n,k)\!\in\!\mathbb{N}_0$ be the Lah numbers. We know that they satisfy $$L(n,k)=L(n\!-\!1,k\!-\!1)+(n\!+\!k\!-\!1)L(n\!-\!1,k)$$ for all $n,k\!\in\!\mathbb{Z}$. How can I prove $$\sum_nL(n,k)\frac{x^n}{n!}=\frac{1}{k!}\Big(\frac{x}{1-x}\Big)^k$$ without using the explicit formula $L(n,k)\!=\!\frac{n!}{k!}\binom{n-1}{k-1}$? Attempt 1: $\text{LHS}=\sum_nL(n\!-\!1,k\!-\!1)\frac{x^n}{n!}+\sum_n(n\!+\!k\!-\!1)L(n\!-\!1,k)\frac{x^n}{n!}\overset{i.h.}{=}?$ Attempt 2: $\text{RHS}\overset{i.h.}{=}$ $\frac{1}{k}\frac{x}{1-x}\sum_nL(n,k\!-\!1)\frac{x^n}{n!}=$ $\frac{1}{k}\frac{x}{1-x}\sum_nL(n\!-\!1,k\!-\!1)\frac{x^{n-1}}{(n-1)!}=$ $\frac{1}{k(1-x)}\sum_nn\big(L(n,k)-(n\!+\!k\!-\!1)L(n\!-\!1,k)\big)\frac{x^n}{n!}=?$
We have \begin{align} f_k(x)&:=\sum_{n\in\Bbb Z}L(n,k)\frac{x^n}{n!}\\ &=\sum_{n\in \Bbb Z}L(n-1,k-1)\frac{x^n}{n!}+\sum_{n\in \Bbb Z}(n+k-1)L(n-1,k)\frac{x^n}{n!}\\ &=\sum_{j\in \Bbb Z}L(j,k-1)\frac{x^{j+1}}{(j+1)!}+\sum_{j\in \Bbb Z}(j+1+k-1)L(j,k)\frac{x^{j+1}}{(j+1)!}\\ &=\sum_{j\in \Bbb Z}L(j,k-1)\frac{x^{j+1}}{(j+1)!}+\sum_{j\in \Bbb Z}L(j,k)\frac{x^{j+1}}{j!}+(k-1)\sum_{j\in \Bbb Z}L(j,k)\frac{x^{j+1}}{(j+1)!}\\ &=\sum_{j\in \Bbb Z}L(j,k-1)\frac{x^{j+1}}{(j+1)!}+xf_k(x)+(k-1)\sum_{j\in \Bbb Z}L(j,k)\frac{x^{j+1}}{(j+1)!} \end{align} hence $$(1-x)f_k(x)=\sum_{j\in \Bbb Z}L(j,k-1)\frac{x^{j+1}}{(j+1)!}+(k-1)\sum_{j\in \Bbb Z}L(j,k)\frac{x^{j+1}}{(j+1)!}.$$ Now we take the derivatives to get $$-f_k(x)+(1-x)f'_k(x)=f_{k-1}(x)+(k-1)f_k(x)$$ hence $$(1-x)f'_k(x)-kf_k(x)=f_{k-1}(x).$$ Multipliying by $(1-x)^{k-1}$ and using the formula for $f_{k-1}$ we get $$(1-x)^kf'_k(x)-k(1-x)^{k-1}f_k(x)=\frac{x^{k-1}}{(k-1)!}$$ so $$((1-x)^kf_k(x))'=\frac{x^{k-1}}{(k-1)!}.$$ Integrating, we get the wanted result up to another term (namely $C(1-x)^k$) but it should vanish using the value at $0$ and the initial definition of Lah numbers.
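As an independent sanity check of the identity (not needed for the proof), one can generate $L(n,k)$ from the recurrence alone and compare the truncated exponential generating function with the closed form in sympy; the truncation order $N$ and the column $K$ below are arbitrary choices:

```python
# Build L(n, k) from L(n,k) = L(n-1,k-1) + (n+k-1) L(n-1,k), L(0,0) = 1, and
# compare sum_n L(n, K) x^n / n! with (x/(1-x))^K / K! up to order N.
from sympy import symbols, Rational, factorial, simplify

x = symbols('x')
N, K = 10, 3

cache = {(0, 0): 1}
def lah(n, k):
    if k < 0 or n < 0 or k > n:
        return 0
    if (n, k) not in cache:
        cache[(n, k)] = lah(n - 1, k - 1) + (n + k - 1) * lah(n - 1, k)
    return cache[(n, k)]

egf = sum(Rational(lah(n, K), factorial(n)) * x**n for n in range(N + 1))
closed = ((x / (1 - x))**K / factorial(K)).series(x, 0, N + 1).removeO()
print(simplify(egf - closed))   # 0
```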
Prove the convergence/divergence of $\sum \limits_{k=1}^{\infty} \frac{\tan(k)}{k}$ Can be easily proved that the following series onverges/diverges? $$\sum_{k=1}^{\infty} \frac{\tan(k)}{k}$$ I'd really appreciate your support on this problem. I'm looking for some easy proof here. Thanks.
A proof that the sequence $\frac{\tan(n)}{n}$ does not have a limit for $n\to \infty$ is given in this article (Sequential tangents, Sam Coskey). This, of course, implies that the series does not converge. The proof, based on this paper by Rosenholtz (*), uses the continued fraction of $\pi/2$, and, essentially, it shows that it's possible to find a subsequence such that $\tan(n_k)$ is "big enough", by taking numerators of the truncated continued fraction ("convergents"). (*) "Tangent Sequences, World Records, π, and the Meaning of Life: Some Applications of Number Theory to Calculus", Ira Rosenholtz - Mathematics Magazine Vol. 72, No. 5 (Dec., 1999), pp. 367-376
Proving the Möbius formula for cyclotomic polynomials We want to prove that $$ \Phi_n(x) = \prod_{d|n} \left( x^{\frac{n}{d}} - 1 \right)^{\mu(d)} $$ where $\Phi_n(x)$ in the n-th cyclotomic polynomial and $\mu(d)$ is the Möbius function defined on the natural numbers. We were instructed to do it by the following stages: Using induction we assume that the formula is true for $n$ and we want to prove it for $m = n p^k$ where $p$ is a prime number such that $p\not{|}n$. a) Prove that $$\prod_{\xi \in C_{p^k}}\xi = (-1)^{\phi(p^k)} $$ where $C_{p^k}$ is the set of all primitive $p^k$-th roots of unity, and $\phi$ is the Euler function. I proved that. b) Using the induction hypothesis show that $$ \Phi_m(x) = (-1)^{\phi(p^k)} \prod_{d|n} \left[ \prod_{\xi \in C_{p^k}} \left( (\xi^{-1}x)^{\frac{n}{d}} - 1 \right) \right]^{\mu(d)} $$ c) Show that $$ \prod_{\xi \in C_{p^k}} \left( (\xi^{-1}x)^{\frac{n}{d}} - 1 \right) = (-1)^{\phi(p^k)} \frac{x^{\frac{m}{d}}-1}{x^{\frac{m}{pd}} - 1} $$ d) Use these results to prove the formula by substituting c) into b). I am stuck in b) and c). In b) I tried to use the recursion formula $$ x^m - 1 = \prod_{d|m}\Phi_d(x) $$ and $$ \Phi_m(x) = \frac{x^m-1}{ \prod_{\stackrel{d|m}{d<m}} \Phi_d(x)} . $$ In c) I tried expanding the product by Newton's binom using $\phi(p^k) = p^k ( 1 - 1/p)$. I also tried replacing the product by $\xi \mapsto [ \exp(i2\pi / p^k) ]^j$ and let $j$ run on numbers that don't divide $p^k$. In both way I got stuck. I would appreciate help here.
I found a solution that is easy to understand for those who want to know how to solve the problem without following the steps given in it. First, we have the formula $$ {x^{n}-1=\prod_{d|n}\Phi_{d}(x)} $$ Then, by taking the logarithm on both sides, $$ \log(x^{n}-1)=\log\Big(\prod_{d|n}\Phi_{d}(x)\Big)=\sum_{d|n}\log\Phi_{d}(x) $$ Now, we use the Möbius Inversion Formula by taking $f_{n}=\log\Phi_{n}$ and $F_{n}=\sum_{d|n}\log\Phi_{d}$. \begin{align*} \log(\Phi_{n}(x)) & = \sum_{d|n}\mu\left(\frac{n}{d}\right)\log(x^{d}-1)\\ & = \sum_{d|n}\log(x^{d}-1)^{\mu(\frac{n}{d})}\\ & = \log\prod_{d|n}(x^{d}-1)^{\mu(\frac{n}{d})} \end{align*} Hence, we have $$ \Phi_{n}(x)=\prod_{d|n}(x^{d}-1)^{\mu(\frac{n}{d})} $$
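If you want to see the formula confirmed symbolically, here is a short sympy check for small $n$ (this assumes a reasonably recent sympy where `mobius` and `cyclotomic_poly` are available at the top level):

```python
# Verify Phi_n(x) = prod_{d | n} (x^d - 1)^{mu(n/d)} for n = 1, ..., 15.
from sympy import symbols, cyclotomic_poly, divisors, mobius, cancel, expand

x = symbols('x')
for n in range(1, 16):
    rhs = 1
    for d in divisors(n):
        rhs *= (x**d - 1)**mobius(n // d)
    assert expand(cancel(rhs) - cyclotomic_poly(n, x)) == 0
print("checked n = 1 .. 15")
```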
Partial Integration - Where did I go wrong? For a Homework, I need $\int \frac{x}{(x-1)^2} dx$ as an intermediate result. Using partial integration, I derive $x$ and integrate $\frac{1}{(x-1)^2}$, getting: $$ \frac{-x}{x-1} + \int \frac{1}{x-1} dx = \ln(x-1)+\frac{x}{x-1} $$ WolframAlpha tells me this is wrong (it gives me $\frac{1}{1-x}$ where I have $\frac{x}{x-1}$). If I and WA disagree the error is usually somewhere on my side. Unfortunately WA uses partial fractions there instead of partial integration, so I'm not sure which step I screwed up. Supposedly $\int f'g dx = fg - \int fg' dx$ right? (I leave the constant +C out because it's not relevant for the problem I need this for).
The result is: $\ln|x-1| - \frac x{x-1} + C$, where $C$ is some constant. If $C=1$ you get: $\ln |x-1| + \frac 1{1-x}$. The final result can be expressed as: $$F(x) = \ln |x-1| + \frac 1{1-x} + \lambda$$ where $\lambda$ is some constant. Precisely: $$F(x) = \ln (x-1) + \frac 1{1-x} + \lambda$$ on the interval $]1,+\infty[$ and: $$F(x) = \ln (1-x) + \frac 1{1-x} + \lambda$$ on the interval $]-\infty ,1[$
How to calculate all the four solutions to $(p+5)(p-1) \equiv 0 \pmod {16}$? This is a kind of a plain question, but I just can't get something. For the congruence and a prime number $p$: $(p+5)(p-1) \equiv 0\pmod {16}$. How come that the in addition to the solutions $$\begin{align*} p &\equiv 11\pmod{16}\\ p &\equiv 1\pmod {16} \end{align*}$$ we also have $$\begin{align*} p &\equiv 9\pmod {16}\\ p &\equiv 3\pmod {16}\ ? \end{align*}$$ Where do the last two come from? It is always 4 solutions? I can see that they are satisfy the equation, but how can I calculate them? Thanks
First note that $p$ has to be odd. Else, $(p+5)$ and $(p-1)$ are both odd, so their product cannot be divisible by $16$. Let $p = 2k+1$. Then we need $16 \vert (2k+6)(2k)$ i.e. $4 \vert k(k+3)$. Since $k$ and $k+3$ are of opposite parity, we need $4|k$ or $4|(k+3)$. Hence, $k = 4m$ or $k = 4m+1$. This gives us $ p = 2(4m) + 1$ or $p = 2(4m+1)+1$. Hence, we get that $$p = 8m +1 \text{ or }8m+3$$ which is what your claim is as well. EDIT You have obtained the first two solutions i.e. $p = 16m+1$ and $p=16m + 11$ by looking at the cases $16 \vert (p-1)$ (or) $16 \vert (p+5)$ respectively. However, note that you are leaving out the following possibilities. * *$2 \vert (p+5)$ and $8 \vert (p-1)$. This combination also implies $16 \vert (p+5)(p-1)$ *$4 \vert (p+5)$ and $4 \vert (p-1)$. This combination also implies $16 \vert (p+5)(p-1)$ *$8 \vert (p+5)$ and $2 \vert (p-1)$. This combination also implies $16 \vert (p+5)(p-1)$ Out of the above possibilities, the second one can be ruled out: if $4 \vert (p+5)$ and $4 \vert (p-1)$, then $4 \vert ((p+5)-(p-1))$ i.e. $4 \vert 6$, which is not possible. The first possibility gives us $p = 8m+1$, while the third possibility gives us $p = 8m +3$. Combining this with your answer, we get that $$p = 8m +1 \text{ or }8m+3$$ In general, when you want to analyze $a \vert bc$, you need to write $a = d_1 d_2$, where $d_1,d_2 \in \mathbb{Z}$, and then look at the cases $d_1 \vert b$ and $d_2 \vert c$.
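Since everything happens modulo $16$, the whole problem can also be settled by a short brute force (just a check, not a replacement for the argument):

```python
# Which residues r mod 16 satisfy (r+5)(r-1) = 0 (mod 16)?
sols = [r for r in range(16) if ((r + 5) * (r - 1)) % 16 == 0]
print(sols)                          # [1, 3, 9, 11]
print(sorted(r % 8 for r in sols))   # [1, 1, 3, 3]  ->  p = 1 or 3 (mod 8)
```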
constructive proof of the infinitude of primes There are infinitely many prime numbers. Euclid gave a constructive proof as follows. For any set of prime numbers $\{p_1,\ldots,p_n\}$, the prime factors of $p_1\cdot \ldots \cdot p_n +1$ do not belong to the set $\{p_1,\ldots,p_n\}$. I'm wondering if the following can be made into a constructive proof too. Let $p_1 = 2$. Then, for $n\geq 2$, define $p_n$ to be a prime number in the interval $(p_{n-1},p_{n-1} + \delta_n]$, where $\delta_n$ is a real number depending only on $n$. Is such a $\delta_n$ known? Note that this would be a constructive proof once we find such a $\delta_n$, because finding a prime number in $(p_{n-1},p_{n-1}+\delta_n]$ can be done in finite time algorithmically. For some reason I believe such a $\delta_n$ is not known. In this spirit, is it known that we can't take $\delta_n = 10n$, for example?
As noted in the comments, we can take $\delta_n=p_{n-1}$; this is Bertrand's postulate, which guarantees a prime in $(p, 2p]$ for every positive integer $p$. In fact, there are improvements on that in the literature. But if you want something really easy to prove, you can take $\delta_n$ to be the factorial of $p_{n-1}$: the interval $(p_{n-1},\, p_{n-1}+p_{n-1}!\,]$ then contains Euclid's number $p_1\times p_2\times\cdots\times p_{n-1}+1$, and hence also its smallest prime factor, which is a prime larger than $p_{n-1}$.
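To see the construction in action with the easy choice $\delta_n = p_{n-1}$, here is a rough Python sketch (it assumes sympy's `isprime`; any primality test would do, and the helper name is just illustrative):

```python
from sympy import isprime

def next_prime_in_interval(p):
    """Search (p, 2p] for a prime; Bertrand's postulate guarantees one exists."""
    for q in range(p + 1, 2 * p + 1):
        if isprime(q):
            return q
    raise AssertionError("Bertrand's postulate violated?!")

primes = [2]
for _ in range(9):
    primes.append(next_prime_in_interval(primes[-1]))
print(primes)   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```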
Is there any geometric way to characterize $e$? Let me explain it better: after this question, I've been looking for a way to place famous constants on the real line in a geometric way -- just for fun. Placing $\sqrt2$ is really easy: constructing a $45^\circ$-$90^\circ$-$45^\circ$ triangle with unit legs gives a hypotenuse of length $\sqrt2$. Extending this to $\sqrt5$, $\sqrt{13}$, and other algebraic numbers is easy using trigonometry; however, it turned out to be difficult to work with some transcendental constants. Constructing $\pi$ is easy using circumferences; but I couldn't figure out how I should work with $e$. Looking at the area under the hyperbola $y=1/x$ made me realize that $e$ is the point $\omega$ such that $\displaystyle\int_1^{\omega}\frac{1}{x}dx = 1$. However, I don't have any other ideas. And I keep asking myself: Is there any way to "see" $e$ geometrically? And more: is it true that one can build any real number geometrically? Any help will be appreciated. Thanks.
Another approach might be to find a polar curve whose tangent line forms a constant angle with the segment from $(0,0)$ to $(\theta,\rho(\theta))$. In polar coordinates that angle $\alpha$ satisfies $\tan\alpha = \rho/\rho'$, so keeping it constant forces $\rho' = a\rho$ with $a = \cot\alpha$. The solution is the logarithmic spiral, defined by $$\rho =c_0 e^{a\theta}$$
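As a quick numerical illustration (a Python sketch with arbitrary choices of $c_0$ and $a$), one can check that the angle between the radius vector and the tangent really is the same at every $\theta$:

```python
import numpy as np

c0, a = 1.0, 0.3
theta = np.linspace(0.0, 4 * np.pi, 7)
rho = c0 * np.exp(a * theta)

# Angle between the radius vector and the tangent: tan(psi) = rho / (d rho / d theta),
# and d rho / d theta = a * rho for the spiral.
psi = np.arctan2(rho, a * rho)
print(np.degrees(psi))   # the same angle (about 73.3 degrees here) at every theta
```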
Diameter of wheel A wheel travels 1 mile in 1 minute at a rate of 600 revolutions per minute. What is the diameter of the wheel in feet? The answer to this question is 2.8 feet. Could someone please explain how to solve this problem?
The distance travelled by the wheel in one revolution is precisely its circumference. If the diameter of the wheel is $d$, then the circumference, i.e. the distance travelled in one revolution, is $\pi d$. It does $600$ revolutions per minute, i.e. it travels a distance of $600 \times \pi d$ in one minute. We are also given that it travels $1$ mile in a minute. Hence, we have that $$600 \times \pi d = 1 \text{ mile} = 5280 \text{ feet}\implies d = \dfrac{5280}{600 \pi} \text{ feet} \approx 2.801127 \text{ feet}$$
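In case it helps, the same computation as a couple of lines of Python (just the arithmetic above, nothing more):

```python
import math

distance_per_minute = 5280          # one mile, in feet
revolutions_per_minute = 600
circumference = distance_per_minute / revolutions_per_minute   # feet per revolution
diameter = circumference / math.pi
print(diameter)                     # about 2.801 feet
```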
$\gcd(n!+1,(n+1)!)$ The recent post didn't really provide sufficient help; it was too vague, and most of it went over my head. Anyway, I'm trying to find $\gcd(n!+1,(n+1)!)$. First I write $d=ab$ for the GCD, so that $ab\mid(n!+1)$ and $ab\mid(n+1)n!$. From $ab\mid(n+1)n!$ I get $a\mid(n+1)$ and $b\mid n!$. Because $b\mid n!$ and $ab\mid(n!+1)$, $b$ must be 1. Consequently, $a\mid(n!+1)$ and $a\mid(n+1)$. So narrowing down the options for $a$ should get me an answer. At this point I've tried to somehow bring it around and relate it to Wilson's theorem, as this problem is from that section of my textbook, but I seem to be missing something. This is part of independent study, though help of any kind is appreciated.
By Euclid, $(k,\,k+1)=1$ implies $(pk,\,k+1) = (p,\,k+1)$, and this equals $p$ when $p$ is prime and $k=(p-1)!$, by Wilson's theorem. Now take $p = n+1$ and $k = n!$, so that $pk = (n+1)!$ and $k+1 = n!+1$: then $\gcd((n+1)!,\,n!+1) = \gcd(n+1,\,n!+1)$, which equals $n+1$ when $n+1$ is prime. See here for the case $p = n+1$ composite, where the gcd turns out to be $1$.
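A small Python check of the pattern for the first few $n$ (assuming sympy's `isprime` is available):

```python
from math import gcd, factorial
from sympy import isprime

for n in range(2, 15):
    g = gcd(factorial(n) + 1, factorial(n + 1))
    expected = n + 1 if isprime(n + 1) else 1
    print(n, g, g == expected)
```

The gcd is $n+1$ exactly when $n+1$ is prime, and $1$ otherwise.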
Automorphisms of the field of complex numbers Using AC one may prove that there are $2^{\mathfrak{c}}$ field automorphisms of the field $\mathbb{C}$. Certainly, only the identity map among them is $\mathbb{C}$-linear ($\mathbb{C}$-homogeneous), but are all these automorphisms $\mathbb{R}$-linear?
An automorphism of $\mathbb C$ must take $i$ into $i$ or $-i$, since it fixes $-1$ and preserves squares. An automorphism that is $\mathbb R$-linear fixes every real number (it sends $r\cdot 1$ to $r\cdot 1$) and is determined by the image of $i$, so it must be the identity or complex conjugation. In particular, none of the other $2^{\mathfrak c}$ automorphisms is $\mathbb R$-linear.
Why do mathematicians care so much about zeta functions? Why is it that so many people care so much about zeta functions? Why do people write books and books specifically about the theory of Riemann Zeta functions? What is its purpose? Is it just to develop small areas of pure mathematics?
For one thing, the Riemann zeta function has many interesting properties. No one knew a closed form for $\zeta (2)$ until Euler famously found it, along with the values at all the even positive integers: $$\zeta(2n) = (-1)^{n+1}\frac{B_{2n}(2\pi)^{2n}}{2(2n)!}$$ However, to this day, no nice closed form is known for the values $\zeta(2n+1)$. Another major reason to study the zeta function is the Riemann hypothesis. This conjecture is fairly simple to state: it asserts that the nontrivial zeros of the zeta function all have real part $1/2$. This hypothesis, if proven true, has major implications in number theory and for the distribution of primes. The Riemann zeta function also occurs in many fields and appears occasionally when evaluating different equations, just as many other functions do. Lastly, the sum $$\sum_{n=1}^{\infty} \frac{1}{n^s}$$ is a very natural one to study and evaluate, and it is especially interesting because of the properties mentioned above, among others.
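If you want to verify the even-value formula for the first few cases, a short Python sketch (assuming sympy is available) confirms it, e.g. $\zeta(2)=\pi^2/6$ and $\zeta(4)=\pi^4/90$:

```python
from sympy import zeta, bernoulli, pi, factorial, simplify

# Check the even-value formula against sympy's closed forms for the first few n.
for n in range(1, 6):
    closed_form = (-1)**(n + 1) * bernoulli(2 * n) * (2 * pi)**(2 * n) / (2 * factorial(2 * n))
    assert simplify(closed_form - zeta(2 * n)) == 0
print("formula matches zeta(2n) for n = 1..5")
```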
norm for estimating the error of the numerical method In most books on numerical methods and finite difference methods, the error is measured in a discrete $L^2$ norm. I was wondering whether people do this in a Sobolev norm instead. I have never seen that done and I want to know why no one uses it. To be more specific, look at $$Au=f,$$ where $A_h$ is some approximation of $A$ and $U$ is the numerical solution of the discrete system. If we plug the actual solution $u$ into $A_hU=f$ and subtract, we have $$A_h(u-U)=\tau$$ for $\tau$ the local truncation error. Thus I have an error equation $$e=A_h^{-1}\tau.$$ What problems do I face if I use a discrete Sobolev norm?
For one thing, it's a question of what norm measures how "accurate" the solution is. Which of the two error terms would you rather have: $0.1\sin(x)$ or $0.0001\sin(10000x)$? The first is smaller in the Sobolev norm, the second is smaller in the $L^2$ norm.
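To see the comparison concretely, here is a rough numerical sketch in Python/numpy (a crude Riemann-sum quadrature on $[0,2\pi]$, so the numbers are only approximate, and the helper names are just illustrative):

```python
import numpy as np

x = np.linspace(0.0, 2 * np.pi, 200001)
dx = x[1] - x[0]

def l2_norm(f):
    return np.sqrt(np.sum(f**2) * dx)            # crude Riemann-sum quadrature

def h1_norm(f):
    return np.sqrt(l2_norm(f)**2 + l2_norm(np.gradient(f, dx))**2)

e1 = 0.1 * np.sin(x)           # smooth error: larger in L2, small in H1
e2 = 1e-4 * np.sin(1e4 * x)    # tiny but wildly oscillatory: small in L2, large in H1

for name, e in (("0.1 sin(x)", e1), ("1e-4 sin(1e4 x)", e2)):
    print(f"{name:17s}  L2 ~ {l2_norm(e):.4f}   H1 ~ {h1_norm(e):.4f}")
```

The smooth error wins in the $H^1$ norm while the oscillatory one wins in $L^2$, which is exactly the trade-off described above.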
Are There Any Symbols for Contradictions? Perhaps this question has been answered already, but I am not aware of any existing answer. Is there any international icon or symbol for showing a contradiction, or for marking that a contradiction has been reached, in mathematical contexts? A convention like this exists for showing that someone has reached the end of the proof of a theorem (namely the tombstone symbol ∎, Halmos).
The symbols are: $\top$ for truth (example: $100 \in \mathbb{R} \to \top$) and $\bot$ for falsehood, which is also the usual symbol for a contradiction (example: $\sqrt{2} \in \mathbb{Q} \to \bot$). In LaTeX, \top produces $\top$ and \bot produces $\bot$.
Continued fraction question I have been given a continued fraction for a number $x$: $$x = 1+\frac{1}{1+}\frac{1}{1+}\frac{1}{1+}\cdots$$ How can I show that $x = 1 + \frac{1}{x}$? I played around with the first few convergents of this continued fraction, but I don't get anywhere close.
Just look at it: everything after the first $1$ is again the whole continued fraction, i.e. $x$ itself, which is exactly the statement $x = 1 + 1/x$. OK, if you want something more proof-like: if $x_n$ is the $n$-th convergent, then $x_{n+1} = 1 + 1/x_n$. Granting that the convergents converge, take limits on both sides.
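Numerically, the convergents settle down very quickly to the positive root of $x = 1 + 1/x$, i.e. the golden ratio; a tiny Python sketch:

```python
from math import sqrt

# Convergents x_1 = 1, x_{n+1} = 1 + 1/x_n approach the positive root of x = 1 + 1/x.
x_n = 1.0
for n in range(1, 16):
    x_n = 1 + 1 / x_n
    print(n, x_n)
print("golden ratio:", (1 + sqrt(5)) / 2)   # 1.618033988749895
```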
Integration of $\int\frac{1}{x^{4}+1}\mathrm dx$ I don't know how to integrate $\displaystyle \int\frac{1}{x^{4}+1}\mathrm dx$. Do I have to use trigonometric substitution?
I think you can do it this way. \begin{align*} \int \frac{1}{x^4 +1} \, dx & = \frac{1}{2}\int\frac{2}{1+x^{4}} \, dx \\ &= \frac{1}{2}\int\frac{(1-x^{2}) + (1+x^{2})}{1+x^{4}} \, dx \\ &=\frac{1}{2}\int \frac{1-x^2}{1+x^{4}} \, dx + \frac{1}{2} \int \frac{1+x^{2}}{1+x^{4}} \, dx \\ &= -\frac{1}{2}\int \frac{1-\frac{1}{x^2}}{\left(x+\frac{1}{x}\right)^{2} - 2} \, dx + \frac{1}{2}\int \frac{1+\frac{1}{x^2}}{\left(x-\frac{1}{x}\right)^{2} + 2} \, dx, \end{align*} where in the first integral we divided numerator and denominator by $x^2$, and the same trick handles the second. Now substitute $u = x + \frac{1}{x}$ in the first integral and $v = x - \frac{1}{x}$ in the second.
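If you just want the closed form, or a check of your own work, sympy will produce the antiderivative directly; a minimal Python sketch, assuming sympy is available:

```python
from sympy import symbols, integrate, diff, simplify

x = symbols('x')
F = integrate(1 / (x**4 + 1), x)
print(F)                                        # a combination of logs and arctangents
print(simplify(diff(F, x) - 1 / (x**4 + 1)))    # 0, so F really is an antiderivative
```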
Apply Cauchy-Riemann equations on $f(z)=z+|z|$? I am trying to check whether the function $f(z)=z+|z|$ is analytic by using the Cauchy-Riemann equations. I set $z = x +jy$ and therefore $$f(z)= (x + jy) + \sqrt{x^2 + y^2}$$ which, put into the form $f(z) = u+ jv$, is $$f(z)= x + \sqrt{x^2 + y^2} + jy$$ where $u = x + \sqrt{x^2 + y^2}$ and $v = y$. Now I need to apply the Cauchy-Riemann equations, but I don't know how I would go about doing that. Any help would be much appreciated.
In order for your function to be analytic, it must satisfy the Cauchy-Riemann equations (right? it's good to think about why this is true). So, what are the equations? Well, $\partial u/\partial x = \partial v/\partial y$. Does this hold here? You should also consider $\partial u/\partial y = -\partial v/\partial x$. If either of these equations fails to hold on the region in question, then the function is not analytic there.
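If you'd like to check the partial derivatives without doing them by hand, here is a small Python sketch (assuming sympy; the names `u` and `v` mirror the decomposition in the question):

```python
from sympy import symbols, sqrt, diff, simplify

x, y = symbols('x y', real=True)
u = x + sqrt(x**2 + y**2)    # real part of z + |z|
v = y                        # imaginary part

cr1 = simplify(diff(u, x) - diff(v, y))    # 0 iff the first CR equation holds
cr2 = simplify(diff(u, y) + diff(v, x))    # 0 iff the second CR equation holds
print(cr1)   # x/sqrt(x**2 + y**2)  -- nonzero except on the line x = 0
print(cr2)   # y/sqrt(x**2 + y**2)  -- nonzero except on the line y = 0
```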