How to prove $ \int_0^\infty e^{-x^2} \; dx = \frac{\sqrt{\pi}}2$ without changing into polar coordinates? How can one prove $ \int_0^\infty e^{-x^2} \; dx = \frac{\sqrt{\pi}}2$ other than by changing into polar coordinates? Is it possible to prove it using infinite series?
$$ \int_{-\infty}^\infty e^{-x^2} \; dx = I \qquad \int_{-\infty}^\infty e^{-y^2} \; dy = I$$ $$\int_{-\infty}^\infty\int_{-\infty}^\infty e^{-x^2}\, e^{-y^2} \; dx\, dy = I^2$$ $$\int_{-\infty}^\infty\int_{-\infty}^\infty e^{-(x^2+y^2)}\; dx\, dy = I^2$$ Polar coordinates: $x^2+y^2=R^2$, $-\pi\leqslant \theta\leqslant +\pi$: $$\int_{-\pi}^{+\pi} \int_{0}^\infty e^{-R^2}\, R \; dR \; d\theta = I^2$$ $$2\pi \int_{0}^\infty e^{-R^2}\, R \; dR = I^2$$ Substituting $-R^2=P\implies -2R\,dR=dP\implies R\,dR=-dP/2$: $$2\pi \int_{-\infty}^0 e^{P}\,\frac{dP}{2} = I^2$$ $$\pi=I^2\Longrightarrow I=\sqrt{\pi}$$ Since $e^{-x^2}$ is even, the answer is $\sqrt{\pi}/2$.
Chess tournament, graph problem. In a chess tournament each of the $n$ players played one game against every other player. Prove that it is possible to number the players from $1$ to $n$ in such a way that every player has a unique number and no player lost the game against the player whose number is greater by $1$. My first thought was that this tournament can be represented by a directed clique where, for example, the edge $\langle a,b \rangle$ means that player $a$ won the game against player $b$. Then what I need to prove is that the vertices of every directed clique on $n$ vertices can be numbered in such a way that there is no edge $\langle i+1, \ i \rangle$ for any $i=1,2,\ldots,n-1$. But it seems very hard, and I don't know how to approach it. Or maybe it isn't a problem from graph theory; I don't know.
You can prove this by induction. The base case is, according to taste, a game with just one or two players. So let's assume we have found a numbering for $n$ players and along comes the $(n+1)$-st player and plays against every other player. If he loses against the $n$-th player, we give him the number $n+1$. If he wins against the $n$-th player but also against the $1$-st player, we give him the number $1$ and move everybody else one up. So assume he wins against the $n$-th player and loses against the $1$-st player. Then there is a number $k$ such that he wins against player $k$ and loses against player $k-1$. Then we give him the number $k$ and move everybody with a number $\geq k$ one up.
Is any Mersenne number $M_p$ divisible by $p+2$? More precisely, does there exist a natural number $p$ such that $(2^p-1)/(p+2)$ is also a natural number? It seems to me that this is a really simple problem (with the answer "no"), but I couldn't find anything on the web. There are some facts known about division by $p+1$, but nothing useful for $p+2$.
Another proof is as follows. First for $p=2$, it follows because $2^p-1=3<4=p+2$. For odd primes $p$, there is a theorem (see Mersenne Prime for proof) that states that every factor of $2^p-1$ is of the form $2kp+1$ for some integer $k\geq0$. Thus we want a $p$ for which $p+2=2kp+1$. There is no such prime because for $p>2$, we have $1<p+2<2p+1$ thus requiring $0<k<1$, a contradiction to the theorem.
Random sequence generator function. I want to find a function or algorithm, whichever is suitable, which can provide me a random sequence. Like Input: 3, Output: {1,2,3} or {1,3,2} or {2,1,3} or {2,3,1} or {3,1,2} or {3,2,1}. Likewise, if I enter a number N, the output will be a random permutation of the set {1,2,...,N}. How can I write this type of algorithm? Actually I want to understand the logic behind it. Edit: I don't want to use any buffer to save anything.
The first $O(n)$ shuffle or random permutation generator was published by Richard Durstenfeld in 1964. This algorithm came to the notice of programmers because it was included in Knuth's TAOCP, Vol. 2, 1969, page 125, as Algorithm P. A succinct statement of the algorithm is: \begin{equation} \text{for }\ k \leftarrow n,n-1,\ldots,2\ \text{ do } \begin{cases} \text{Choose at random a number}\ \ r \in [1,k] \\ \text{Interchange }\pi[r] \text{ and }\pi[k],\ \end{cases} \end{equation} where $\pi[1\ldots n]$ is the array to be shuffled or randomly permuted. See my notes for more information here: RPGLab
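A minimal sketch of Algorithm P in Python (my own rendering of the statement above, using the standard `random` module; note the 0-based indexing, whereas the displayed algorithm is 1-based). It shuffles in place, so no extra buffer is needed:

```python
import random

def durstenfeld_shuffle(pi):
    """In-place Durstenfeld / Fisher-Yates shuffle (Knuth's Algorithm P)."""
    n = len(pi)
    for k in range(n - 1, 0, -1):     # k walks down from the last index
        r = random.randint(0, k)      # choose r uniformly in [0, k]
        pi[r], pi[k] = pi[k], pi[r]   # interchange pi[r] and pi[k]
    return pi

print(durstenfeld_shuffle(list(range(1, 4))))  # e.g. [2, 3, 1]
```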
How to prove the boundedness of the solutions of a nonlinear differential equation. I have the following differential equation: $$ \ddot{x} = -\log(x) - 1 $$ and I need to prove that every solution of this equation is a bounded function. From the phase plane portrait (not reproduced here), it is obvious that this is true. How can I construct a formal proof for this?
Let $x_1=x$, $x_2=\dot{x}$, so that $\dot{x}_1=x_2$ and $\dot{x}_2=-\log(x_1)-1$. We know that for the solution of the ODE to make sense we need $x_1>0$. Consider the following function: $$V=x_1\log(x_1)+\tfrac{1}{2}x_2^2.$$ The derivative of this function along the system trajectories is $$\dot{V}=x_2\log(x_1)+x_2+x_2(-\log(x_1)-1)=0,$$ so $V$ is constant along every trajectory. Since $x_1\log(x_1)$ is bounded below and tends to $+\infty$ as $x_1\to\infty$, every level set of $V$ is bounded. Therefore, the solutions are bounded!
How can one find the value of the expression $(1^2+2^2+3^2+\cdots+n^2)$? Possible Duplicate: Proof that $\sum\limits_{k=1}^nk^2 = \frac{n(n+1)(2n+1)}{6}$? Summation of the natural number set with power of $m$. How to get to the formula for the sum of squares of the first $n$ numbers? Let $T_{2}(n)=1^2+2^2+3^2+\cdots+n^2$. Then $T_{2}(n)=(1^2+n^2)+(2^2+(n-1)^2)+\cdots$ $=((n+1)^2-2(1)(n))+((n+1)^2-2(2)(n-1))+\cdots$
The claim is that $\sum_{i = 1}^n i^2 = \frac{n(n + 1)(2n + 1)}{6}$. We will verify this by induction. Clearly the case $n = 1$ holds. Suppose the formula holds for $n$. Let's verify it holds for $n + 1$. $$\sum_{i = 1}^{n + 1} i^2 = \sum_{i = 1}^n i^2 + (n + 1)^2 = \frac{n(n + 1)(2n + 1)}{6} + (n + 1)^2 \\ = \frac{n(n + 1)(2n + 1)}{6} + \frac{6(n^2 + 2n + 1)}{6} \\ = \frac{2n^3 + 3n^2 + n}{6} + \frac{6n^2 + 12 n + 6}{6} \\ = \frac{2n^3 + 9n^2 + 13n + 6}{6}$$ If you factor, you get $$= \frac{(n + 1)(n + 2)(2n + 3)}{6} = \frac{(n + 1)((n + 1) + 1)(2(n + 1) + 1)}{6}$$ The result follows for $n + 1$. So by induction the formula holds.
Why is $a^n - b^n$ divisible by $a-b$? I did some mathematical induction problems on divisibility: $9^n - 2^n$ is divisible by $7$; $4^n - 1$ is divisible by $3$; $9^n - 4^n$ is divisible by $5$. Can these be generalized as $a^n - b^n = (a-b)N$, where $N$ is an integer? But why is $a^n - b^n = (a-b)N$? I also see that $6^n - 5n + 4$ is divisible by $5$, which is $6-5+4$, and $7^n + 3n + 8$ is divisible by $9$, which is $7+3+8=18=9\cdot2$. Are these just coincidences or is there a theory behind them? Is it about modular arithmetic?
Another proof: Denote $r=b/a$. We know that the sum of a geometric progression of the type $1+r+r^2+\ldots+r^{n-1}$ is equal to $\frac{1-r^n}{1-r}$. Thus, we have \begin{align} 1-r^n&=(1-r)(1+r+r^2+\ldots+r^{n-1}),\quad\text{substituting $r=b/a$ and multiplying both sides by $a^n$ gives:}\\ a^n-b^n &= (a-b)\color{red}{(a^{n-1}+a^{n-2}b+\ldots+b^{n-1})}\\ a^n-b^n &= (a-b)N \end{align} The last step follows since $a,b$ are integers and a polynomial expression of the type in $\color{red}{red}$ font is also an integer.
Limsups of nets. The limsup of a sequence of extended real numbers is usually taken to be either of these two things, which are equivalent: (1) the sup of all subsequential limits; (2) the limit of the sups of the tail ends of the sequence. For the situation with nets, the same arguments guarantee the existence of the above quantities 1 and 2, as long as you understand that a subnet of a net must be precomposed with an increasing function that is also cofinal. Also, one has that 1 $\leq$ 2, and I just can't see the reverse inequality. Don't forget that one cannot imitate the argument for sequences, because the analogous fact fails (cf. the question "Given a directed set, how do I construct a net that converges to $0$").
As far as I can say, the more usual definition of limit superior of a net is the one using limit of suprema of tails: $$\limsup x_d = \lim_{d\in D} \sup_{e\ge d} x_e = \inf_{d\in D} \sup_{e\ge d} x_e.$$ But you would get an equivalent definition, if you defined $\limsup x_d$ as the largest cluster point of the net. This definition corresponds (in a sense) to the definition with subsequential limits, since a real number is a cluster point of a net if and only if there is a subnet converging to this number. I think that it is relatively easy to see that $\limsup x_d$ is a cluster point of the net $(x_d)_{d\in D}$. To see that for every cluster point $x$ we have $x\le\limsup x_d$ it suffices to notice that, for any given $\varepsilon>0$ and $d\in D$, the interval $(x-\varepsilon,x+\varepsilon)$ must contain some element $x_e$ for $e\ge d$. Hence we get $$ \begin{align*} x-\varepsilon &\le \sup_{e\ge d} x_e\\ x-\varepsilon &\le \lim_{d\in D} \sup_{e\ge d} x_e. \end{align*}$$ and, since $\varepsilon>0$ is arbitrary, we get $$x\le \lim \sup_{e\ge d} x_e.$$ Thus the limit superior is indeed the maximal cluster point. So the only thing missing is to show that cluster points are precisely the limits of subnets - this is a standard result, which you can find in many textbooks. Some references for limit superior of a net are given in the Wikipedia article and in my answer here. Perhaps some details given in my notes here can be useful, too. (The notes are still unfinished.) I should mention, that I pay more attention there to the notion of limit superior along a filter (you can find this in literature defined for filter base, which leads basically to the same thing). The limit superior of a net can be considered a special case, if we use the section filter; which is the filter generated by the base $\mathcal B(D)=\{D_a; a\in D\}$, where $D_a$ is the upper section $D_a=\{d\in D; d\ge a\}$.
Fréchet derivative I have been doing some self study in multivariable calculus. I get the geometric idea behind looking at a derivative as a linear function, but analytically how does one prove this? I mean if $f'(c)$ is the derivative of a function between two Euclidean spaces at a point $c$ in the domain... then is it possible to explicitly prove that $$f'(c)[ah_1+ bh_2] = af'(c)[h_1] + bf'(c)[h_2]\ ?$$ I tried but I guess I am missing something simple. Also how does the expression $f(c+h)-f(c)=f'(c)h + o(\|h\|)$ translate into saying that $f'(c)$ is linear?
It is not true that $f'(c_1+c_2)=f'(c_1)+f'(c_2)$ in general. However, the derivative $f'(c)$ is the matrix of the differential $df_c$ and the expression which you write shows why $df_c(h) = f'(c)h$ is linear in the $h$-variable. It is simply matrix multiplication and all matrix multiplication maps are linear. The subtle question is if $f'(c)$ exists for a given $f$ and $c$. Recall that $f: \mathbb{R} \rightarrow \mathbb{R}$ has a derivative $f'(a)$ at $x=a$ if $$ f'(a) = \lim_{ h \rightarrow 0} \frac{f(a+h)-f(a)}{h} $$ Alternatively, we can express the condition above as $$ \lim_{ h \rightarrow 0} \frac{f(a+h)-f(a)-f'(a)h}{h} =0.$$ This gives an implicit definition for $f'(a)$. To generalize this to higher dimensions we have to replace $h$ with its length $||h||$ since there is no way to divide by a vector in general. Recall that $v \in \mathbb{R}^n$ has $||v|| = \sqrt{v \cdot v}$. Consider $F: U \subseteq \mathbb{R}^m \rightarrow \mathbb{R}^n$; if $dF_{a}: \mathbb{R}^m \rightarrow \mathbb{R}^n$ is a linear transformation such that $$ \lim_{ h \rightarrow 0} \frac{F(a+h)-F(a)-dF_a(h)}{||h||} =0 $$ then we say that $F$ is differentiable at $a$ with differential $dF_a$. The matrix of the linear transformation $dF_{a}: \mathbb{R}^m \rightarrow \mathbb{R}^n$ is called the Jacobian matrix $F'(a) \in \mathbb{R}^{n \times m}$ or simply the derivative of $F$ at $a$. It follows that the components of the Jacobian matrix are partial derivatives of the component functions of $F = (F_1,F_2, \dots , F_n)$ $$ J_F = \left[ \begin{array}{cccc} \partial_1 F_1 & \partial_2 F_1 & \cdots & \partial_m F_1 \\ \partial_1 F_2 & \partial_2 F_2 & \cdots & \partial_m F_2 \\ \vdots & \vdots & \vdots & \vdots \\ \partial_1 F_n & \partial_2 F_n & \cdots & \partial_m F_n \\ \end{array} \right] = \bigl[\partial_1F \ | \ \partial_2F \ | \cdots | \ \partial_mF\ \bigr] = \left[ \begin{array}{c} (\nabla F_1)^T \\ \hline (\nabla F_2)^T \\ \hline \vdots \\ \hline (\nabla F_n)^T \end{array} \right]. $$
Module Theory for the Working Student. Question: What level of familiarity and comfort with modules should someone looking to work through Hatcher's Algebraic Topology possess? Motivation: I am taking my first graduate course in Algebraic Topology this coming October. The general outline of the course is as follows: homotopy, homotopy invariance, mapping cones, mapping cylinders; fibrations, cofibrations, homotopy groups, long exact sequences; classifying spaces of groups; Freudenthal, Hurewicz, and Whitehead theorems; Eilenberg-MacLane spaces and Postnikov towers. I have spent a good deal of this summer trying to fortify and expand the foundations of my mathematical knowledge. In particular, I have been reviewing basic point-set and algebraic topology and a bit of abstract algebra. My knowledge of module theory is a bit lacking, though. I've only covered the basics of the following topics: submodules, algebras, torsion modules, quotient modules, module homomorphisms, finitely generated modules, direct sums, free modules, and a little bit about $\text{Hom}$ and exact sequences, so I have a working familiarity with these ideas. As I do not have much time left before the beginning of the semester, I am trying to make my studying as economical as possible. So perhaps a more targeted question is: What results and topics in module theory should every student in Algebraic Topology know?
Looks like you'll definitely want to know the homological aspects: projective, injective and flat modules. There are lots of different characterizations for these which are useful to know. (Sorry, don't know the insides of D&F very well, perhaps it is covered.) The Fundamental theorem of finitely generated modules over a principal ideal domain is a pretty good one to know too, since it kind of bundles up the entire theory of linear algebra. (How much algebraic geometry do you know?)
Evaluating $\int_0^{\frac\pi2}\frac{\ln{(\sin x)}\ \ln{(\cos x})}{\tan x}\ dx$ I need to solve $$ \int_0^{\Large\frac\pi2}\frac{\ln{(\sin x)}\ \ln{(\cos x})}{\tan x}\ dx $$ I tried to use symmetric properties of the trigonometric functions as is commonly used to compute $$ \int_0^{\Large\frac\pi2}\ln\sin x\ dx = -\frac{\pi}{2}\ln2 $$ but never succeeded. (see this for example)
Rewrite the integral as $$ \int_0^{\Large\frac\pi2}\frac{\ln{(\sin x)}\ \ln{(\cos x})}{\tan x}\ dx=\int_0^{\Large\frac\pi2}\frac{\ln{(\sin x)}\ \ln{\sqrt{1-\sin^2 x}}}{\sin x}\cdot\cos x\ dx. $$ Set $t=\sin x\ \color{red}{\Rightarrow}\ dt=\cos x\ dx$, then we obtain \begin{align} \int_0^{\Large\frac\pi2}\frac{\ln{(\sin x)}\ \ln{(\cos x})}{\tan x}\ dx&=\frac12\int_0^1\frac{\ln t\ \ln(1-t^2)}{t}\ dt\\ &=-\frac12\int_0^1\ln t\sum_{n=1}^\infty\frac{t^{2n}}{nt}\ dt\tag1\\ &=-\frac12\sum_{n=1}^\infty\frac{1}{n}\int_0^1t^{2n-1}\ln t\ dt\\ &=\frac12\sum_{n=1}^\infty\frac{1}{n}\cdot\frac{1}{(2n)^2}\tag2\\ &=\large\color{blue}{\frac{\zeta(3)}{8}}. \end{align} Notes : $[1]\ \ $Use Maclaurin series for natural logarithm: $\displaystyle\ln(1-x)=-\sum_{n=1}^\infty\frac{x^n}{n}\ $ for $|x|<1$. $[2]\ \ $$\displaystyle\int_0^1 x^\alpha \ln^n x\ dx=\frac{(-1)^n n!}{(\alpha+1)^{n+1}}\ $ for $ n=0,1,2,\cdots$
What am I losing if I decide to perform all math by computer? I solve mathematical problems every day, some by hand and some with the computer. I wonder: what will I lose if I start doing mathematical problems only by computer? I've read this text, and the author says that as technological progress happens, we should change our focus to things that are more important. (You can see his suggestions on what is more important in the suggested text.) I'm in search of something a little deeper than the conventional cataclysmic hypothesis "What will you do when an EMP hits us?!", and I am also considering an environment without exams, which excludes the "Computers are not allowed in exams" answer. This is of crucial importance to me because, as I'm building some mathematical knowledge, I want to have a safe house in the future. I'm also open to references on this; I've asked something similar before and got a few suggestions.
Yes, you can solve most of the problems by computer, but you will lose your critical thinking ability; you will eventually become an operator, not a creator!
Which separation axiom? Let $X$ be a topological space. Assume that for all distinct $x_1,x_2 \in X$ there exist open neighbourhoods $U_i$ of $x_i$ such that $U_1 \cap U_2 = \emptyset$. Such a space, as we all know, is called Hausdorff. What would we call a space, and which separation axioms would the space satisfy, if $\overline{U_1} \cap \overline{U_2} = \emptyset$ for all such pairs of points?
Such a space is known as $T_{2\frac{1}{2}}$ or Urysohn according to Wikipedia.
Proving inequality $\frac{a}{b}+\frac{b}{c}+\frac{c}{a}+\frac{3\sqrt[3]{abc}}{a+b+c} \geq 4$. I have started to study inequalities: I try to solve a lot of inequalities and read interesting solutions. I have a good pdf, which you can view here. The inequality which I tried and failed to solve can be found in that pdf, but I will write it here to be more explicit. Exercise 1.3.4 (a) Let $a,b,c$ be positive real numbers. Prove that $$\displaystyle \frac{a}{b}+\frac{b}{c}+\frac{c}{a}+\frac{3\sqrt[3]{abc}}{a+b+c} \geq 4.$$ (b) For real numbers $a,b,c \gt0$ and $n \leq3$ prove that: $$\displaystyle \frac{a}{b}+\frac{b}{c}+\frac{c}{a}+n\left(\frac{3\sqrt[3]{abc}}{a+b+c} \right)\geq 3+n.$$
Write $$\frac ab+\frac ab+\frac bc\geq \frac{3a}{\sqrt[3]{abc}}$$ by AM-GM, and sum the three cyclic versions of this inequality. You get $$\operatorname{LHS} \geq \frac{a+b+c}{\sqrt[3]{abc}}+n\left(\frac{3\sqrt[3]{abc}}{a+b+c}\right).$$ Set $$z:=\frac{a+b+c}{\sqrt[3]{abc}},$$ so the right-hand side is $z+\frac{3n}{z}$, and notice that for $n\leq 3$, $$z+\frac{3n}{z}\geq 3+n.$$ Indeed the unconstrained minimum of $z+\frac{3n}{z}$ is reached for $z=\sqrt{3n}\leq 3$; since $z\geq 3$ by AM-GM, the minimum is in fact reached for $z=3$, where the value is $3+n$.
How to construct a one-to-one correspondence between $\left[0,1\right]\cup\left[2,3\right]\cup\ldots$ and $\left[0,1\right]$. How can I construct a one-to-one correspondence between the set $\left[0,1\right]\cup\left[2,3\right]\cup\left[4,5\right]\cup\ldots$ and the set $\left[0,1\right]$? I know that they have the same cardinality.
For $x\in{(k,k+1)}$ with $k\geq 2$ and $k$ even define $f(x)=\frac{1}{2x-k}$; for $x\in(0,1]$ define $f(x)=\frac{x+1}{2}$; set $f(0)=0$. Now it remains to map $A=\{2,3,4,5,..\}$ bijectively to $\{1/2,1/4,1/6,1/8..,\}$ to do this define $f(x)=\frac{1}{2(x-1)}$ on $A$. This should give you the desired bijection. Please, let me know if I have made any silly error this time.
Closed form representation of an irrational number Can an arbitrary non-terminating and non-repeating decimal be represented in any other way? For example if I construct such a number like 0.1 01 001 0001 ... (which is irrational by definition), can it be represented in a closed form using algebraic operators? Can it have any other representation for that matter?
Since $0.1 = \frac{1}{10}$, $0.001 = \frac{1}{10^3}$, $0.000001 = \frac{1}{10^6}$, making a guess that the $n$-th term is $10^{-n(n+1)/2}$, the sum representing the irrational number becomes $$ 0.1010010001\ldots = \sum_{k=1}^\infty \frac{1}{10^{\frac{k(k+1)}{2}}} = \left.\frac{1}{2 q^{1/4}} \theta_2\left(0, q\right)-1\right|_{q=\frac{1}{\sqrt{10}}} $$ where $\theta_2(u,q) =2 q^{1/4} \sum_{n=0}^\infty q^{n(n+1)} \cos((2n+1)u)$ is the elliptic theta function (the $-1$ removes the $k=0$ term of the theta sum).
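A quick numerical sanity check of this identity (a sketch using Python's mpmath library; `jtheta(2, 0, q)` follows the same convention as the $\theta_2$ series above):

```python
from mpmath import mp, jtheta, nsum, inf, mpf, sqrt

mp.dps = 30  # work to 30 significant digits

q = 1 / sqrt(10)
# Series side: sum_{k>=1} 10^{-k(k+1)/2} = 0.1010010001...
series = nsum(lambda k: mpf(10) ** (-k * (k + 1) / 2), [1, inf])
# Theta side: theta_2(0, q) / (2 q^{1/4}) - 1
theta = jtheta(2, 0, q) / (2 * q ** mpf('0.25')) - 1
print(series)          # 0.101001000100001...
print(theta - series)  # ~ 0, i.e. the two sides agree
```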
Prime elements in $\mathbb{Z}/n\mathbb{Z}$. I'm trying to determine the prime elements in the ring $\mathbb{Z}/n\mathbb{Z}$.
Every ideal of $\mathbb{Z}/n\mathbb{Z}$ is of the form $m\mathbb{Z}/n\mathbb{Z}$, where $m$ is a divisor of $n$, and the quotient of $\mathbb{Z}/n\mathbb{Z}$ by it is isomorphic to $\mathbb{Z}/m\mathbb{Z}$. Hence a prime ideal of $\mathbb{Z}/n\mathbb{Z}$ is of the form $p\mathbb{Z}/n\mathbb{Z}$, where $p$ is a prime divisor of $n$. Hence every prime element of $\mathbb{Z}/n\mathbb{Z}$ is of the form $p$ (mod $n$), where $p$ is a prime divisor of $n$.
Future Lifetime Distribution. Suppose the future lifetime of someone aged $20$, denoted by $T_{20}$, is subject to the force of mortality $\mu_x = \frac{1}{100-x}$ for $x< 100$. What is $\text{Var}[\min(T_{20},50)]$? So we have: $$E[\min(T_{20},50)\mid T_{20} > 50] = 50$$ $$\text{Var}[\min(T_{20},50)\mid T_{20} > 50] = ?$$ $$E[\min(T_{20},50)\mid T_{20} \leq 50] = 25$$ $$\text{Var}[\min(T_{20},50)\mid T_{20} \leq 50] = 50^{2}/12$$ What is the second line? Also I know that we use the "expectation-variance" formula to calculate $\text{Var}[\min(T_{20},50)]$.
Hint: The random variable $\min(T_{20},50)\mid T_{20} > 50$ doesn't vary much! If you then want to use a formula, call the above random variable $Y$. We want $E((Y-50)^2)$. How did you decide earlier that $E(Y)=50$?
What branch of Math can help me with this? I would love to focus on the branches of Math that can help me with: (1) generation of entropy; I suppose that most of the work is based on statistics, since even a big part of the cryptographic world starts from this and is tested with the help of statistics. But I do not want to generate confusion: by entropy I mean faking randomness. (2) Creating structures programmatically and in a procedural way, like the classic Voronoi for example, but in N dimensions and with boundaries; for example generating a structure, a building, a road, an entire city, you get the point. What is the big topic of Math that can help me with this?
1) Dynamical systems, more precisely: Ergodic theory. See Introduction to ergodic theory by Ya. G. Sinai or Ya. B. Pesin's books. 2) Fractal geometry. You can refer to Fractal Geometry by Kenneth Falconer. You can also see Chaos and Fractals: New Frontiers of Science (more elementary) by Heinz-Otto Peitgen, Hartmut Jürgens and Dietmar Saupe. For programming, see here, here or here. Notice that these two branches are strongly linked.
Intuition behind gradient VS curvature. In Newton's method, one computes the gradient of a cost function (the 'slope') as well as its Hessian matrix (i.e., the second derivative of the cost function, or 'curvature'). I understand the intuition: the less 'curved' the cost landscape is at some specific weight, the bigger the step we want to take on the landscape terrain. (This is why it is somewhat superior to simple gradient ascent/descent.) Here is the equation: $$ \mathbf {w_{n+1} = w_n - \alpha\frac{\partial J(w)}{\partial w}\begin{bmatrix}\frac{\partial^2J(w)}{\partial w^2}\end{bmatrix}^{-1}} $$ What I am having a hard time visualizing is: what is the intuition behind 'curvature'? I get that it is the 'curviness of a function', but isn't that the slope? Obviously it is not, that is what the gradient measures, so what is the intuition behind curvature? Thanks!
Intuitively, curvature is how fast the slope is changing: a greater rate of change of the slope means it is more curved. So it is related to the derivative of the slope, i.e. the derivative of the derivative or the second derivative.
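To make this concrete, here is a tiny one-dimensional sketch (my own illustration, not part of the original answer): Newton's method divides the slope by the curvature, so it automatically takes larger steps where the cost function is flatter, while plain gradient descent scales the slope by a fixed learning rate.

```python
def newton_step(grad, hess, w):
    """One Newton update: step = slope / curvature (1-D case)."""
    return w - grad(w) / hess(w)

def gradient_step(grad, w, lr=0.1):
    """One gradient-descent update with a fixed learning rate."""
    return w - lr * grad(w)

# Cost J(w) = (w - 3)^2: slope 2(w - 3), constant curvature 2.
grad = lambda w: 2 * (w - 3)
hess = lambda w: 2.0

print(newton_step(grad, hess, 0.0))  # 3.0 -- one step lands on the minimum
print(gradient_step(grad, 0.0))      # 0.6 -- a small step toward the minimum
```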
Proving that one quantity *eventually* surpasses the other. I want to prove the following: $$\forall t>0\ \ \forall m\in \mathbb{N} \ \ \exists N \in \mathbb{N} \ \ \forall n\geq N: \ (1+t)^n > n^m.$$ For readers who hate quantifiers, here's the version in words: "$(1+t)^n$ eventually surpasses $n^m$, for every $t>0$ and $m\in \mathbb{N}$". Though this sounds simple enough, I couldn't manage to prove it using only very elementary statements about the real numbers, like the Archimedean property etc. (so no derivatives and so on involved). My questions are: 1) Is there a simple (as described above) proof for this? (If there isn't, I would also be happy with a proof using more of the analysis machinery.) 2) Is there a way to express the least $N$ which satisfies the above in closed form? 3) I have somewhere heard of a theorem that two real convex functions can have at most two intersection points (and the above statement seems closely related to this theorem), so I would be very happy if someone could also give me a reference for this theorem.
1) Yes. The inequality is equivalent to $((1+t)^{\frac{1}{m}})^n>n$ (take the $\frac{1}{m}$-th power of both sides), which reduces your original problem to the case $m=1$, since $(1+t)^{\frac{1}{m}}$ is of the form $1+u$ with $u$ a strictly positive real. So we'll assume $m=1$. We can expand $(1+t)^n$ using the binomial theorem as (assuming $n>2$) $$(1+t)^n=1+nt+\frac{n(n-1)}{2}t^2+...$$ So in particular we get $(1+t)^n>1+\frac{n(n-1)}{2}t^2$ if $n>2$, so if we pick $N$ such that $\frac{N-1}{2}t^2>1$ and $N>2$ (which we can do by the Archimedean property), we find that $(1+t)^n>n$ for all $n\geq N$. 2) The Lambert W function should be able to do that, yes.
Does $(\mathbf A+\epsilon \mathbf I)^{-1}$ always exist? Why? Does $(\mathbf A+\epsilon \mathbf I)^{-1}$ always exist, given that $\mathbf A$ is a square and positive (and possibly singular) matrix and $\epsilon$ is a small positive number? I want to use this to regularize a sample covariance matrix ($\mathbf A = \Sigma$) in practice, so that I can compute the inverse, which I need to calculate a Mahalanobis distance between two samples. In practice, my covariance matrix is often singular. I know the term $(\mathbf A+\epsilon \mathbf I)^{-1}$ often appears in the context of least squares problems involving Tikhonov regularization (ridge regression). However, I've never seen a statement, proof, or reference which says that the expression is always invertible. Can any of you help me with a proof or reference?
Yes, if $\mathbf A$ is any $n \times n$ matrix, then $\mathbf A+\epsilon \mathbf I$ is invertible for sufficiently small $\epsilon > 0$. This is because $\det (\mathbf A + \epsilon \mathbf I)$ is a polynomial in $\epsilon$ of degree $n$, and so it has a finite number of zeroes.
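A quick numerical illustration (a NumPy sketch; the singular matrix and the value of $\epsilon$ are my own choices): even when $\mathbf A$ is singular, $\mathbf A+\epsilon\mathbf I$ has nonzero determinant, so the inverse needed for the Mahalanobis distance exists.

```python
import numpy as np

# A rank-1 (hence singular) covariance-like matrix.
A = np.array([[1.0, 1.0],
              [1.0, 1.0]])
print(np.linalg.det(A))            # 0.0 -- A itself is not invertible

eps = 1e-6
A_reg = A + eps * np.eye(2)
print(np.linalg.det(A_reg))        # ~2e-6 > 0, so the inverse exists
A_inv = np.linalg.inv(A_reg)

# Squared Mahalanobis-type distance of x from the origin under A_reg.
x = np.array([1.0, -1.0])
print(x @ A_inv @ x)
```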
Physical meaning of spline interpolation. I remember that when I took my Numerical Analysis class, the professor said that spline interpolation takes its name from a kind of wooden stick used to draw curved lines. Wikipedia also says that the name is due to those elastic rulers: "Elastic rulers that were bent to pass through a number of predefined points (the 'knots') were used for making technical drawings for shipbuilding and construction by hand." I now wonder if splines are only inspired by those rulers, or if they moreover precisely follow the physical laws that govern the bending of particular wooden sticks (and if that is not the case, I ask if anyone knows of any alternative physical interpretation of them).
The equations of cubic splines are derived from the physical laws that govern bending of thin beams. For example, see http://stem2.org/je/cs.pdf. The spline equation is an approximate solution of the minimum energy bending equation, valid when the amount of bending is small. Generally, in computer-aided geometric design, minimising some sort of "energy" function is often used as a way to smooth the shape of a curve or surface.
What's the meaning of $C$-embedded? It is a topological notion. Thanks in advance.
A set $A \subset X$ ($X$ a topological space) is $C$-embedded in $X$ iff every real-valued continuous function $f$ defined on $A$ has a continuous extension $g: X \to \mathbb{R}$ (so $g(x) = f(x)$ for all $x \in A$). A related notion of $C^{\ast}$-embedded exists, where continuous real-valued functions are (in both cases) replaced by bounded real-valued continuous functions. The Tietze theorem basically says that a closed subset $A$ of a normal space $X$ is both $C$- and $C^{\ast}$-embedded in it.
Solving for a coefficient of a factored polynomial. Given: the coefficient of $x^2$ in the expansion of $(1+2x+ax^2)(2-x)^6$ is $48;$ find the value of the constant $a.$ I expanded it and got $64-64\,x-144\,{x}^{2}+320\,{x}^{3}-260\,{x}^{4}+108\,{x}^{5}-23\,{x}^{ 6}+2\,{x}^{7}+64\,a{x}^{2}-192\,a{x}^{3}+240\,a{x}^{4}-160\,a{x}^{5}+ 60\,a{x}^{6}-12\,a{x}^{7}+a{x}^{8} $ Because of the given info, $48x^2=64\,ax^2-144x^2$; solving for $a$ gives $a=3$. Correct? P.S. Is there an easier method other than expanding all the terms? I have tried using the binomial expansion; however, one still needs to multiply the terms, and expanding $(2-x)^6$ is not very fast.
It would be much easier to just compute the coefficient at $x^2$ in the expansion of $(1+2x+ax^2)(2-x)^6$. You can begin by computing: $$ (2-x)^6 = 64 - 6 \cdot 2^5 x + 15 \cdot 2^4 x^2 + x^3 \cdot (...) = 64 - 192 x + 240 x^2 + x^3 \cdot (...) $$ Now, multiply this by $(1+2x+ax^2)$. Again, you're only interested in the term at $x^2$, so you can spare yourself much effort by just computing this coefficient: to get $x^2$ in the product, you need to take $64, \ -192 x, \ 240 x^2$ from the first polynomial, and $ax^2,\ 2x,\ 1$ from the second one (respectively). $$(1+2x+ax^2)(2-x)^6 = (...)\cdot 1 + (...) \cdot x + (64a - 2\cdot 192 + 240 )\cdot x^2 + x^3 \cdot(...) $$ Now, you get the equation: $$ 64a - 2\cdot 192 + 240 = 48 $$ whose solution is indeed $a = 3$. As an afterthought: there is another solution, although it might be an overkill. Use that the term at $x^2$ in a polynomial $p$ is $p''(0)/2$. Your polynomial is: $$ p(x) = (1+2x+ax^2)(2-x)^6$$ so you can compute easily enough: $$ p'(x) = (2+2ax)(2-x)^6 - 6(1+2x+ax^2)(2-x)^5 $$ and then: $$ p''(x) = 2a(2-x)^6 - 12(2+2ax)(2-x)^5 + 30(1+2x+ax^2)(2-x)^4 $$ You can now plug in $x=0$: $$ p''(0) = 2a \cdot 2^6 - 12 \cdot 2 \cdot 2^5 + 30 \cdot 2^4 $$ On the other hand, you have $$p''(0) = 2 \cdot 48$$ These two formulas for $p''(0)$ let you write down an equation for $a$.
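Both routes are easy to check mechanically (a sketch using the sympy library):

```python
import sympy as sp

x, a = sp.symbols('x a')
p = (1 + 2*x + a*x**2) * (2 - x)**6

# Route 1: read off the x^2 coefficient of the expanded product.
coeff = p.expand().coeff(x, 2)
print(sp.solve(sp.Eq(coeff, 48), a))                         # [3]

# Route 2: the x^2 coefficient equals p''(0)/2.
print(sp.solve(sp.Eq(p.diff(x, 2).subs(x, 0) / 2, 48), a))   # [3]
```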
Image of a morphism. According to Wikipedia, the image of a morphism $\phi:X\rightarrow Y$ in a category is a monomorphism $i:I\rightarrow Y$ satisfying the following conditions: (1) There is a morphism $\alpha:X\rightarrow I$ such that $i\circ\alpha=\phi$. (2) If $j:J\rightarrow Y$ is a monomorphism and $\beta:X\rightarrow J$ is a morphism such that $j\circ\beta=\phi$, then there exists a unique morphism $\gamma:I\rightarrow J$ such that $\beta=\gamma\circ\alpha$ and $j\circ\gamma=i$. It is easy to see that $\alpha$ is unique. Intuitively, such an $\alpha$ should be an epimorphism, but I can't seem to show it. Is it true?
The term "image" suggests that this concept is modeled on the image of a map, a morphism in the category of sets. In that case, $I$ can be any set equipotent with the image (in the conventional sense) of $\phi$, and $\alpha$ is generally neither unique, nor an epimorphism (a surjective map). For uniqueness, note that you can compose $\alpha$ with any permutation of $I$ to obtain another suitable $\alpha$ for a given $I$.
How to check if a point is inside a rectangle? There is a point $(x,y)$, and a rectangle $a(x_1,y_1),b(x_2,y_2),c(x_3,y_3),d(x_4,y_4)$; how can one check if the point is inside the rectangle?
Given how much attention this post has gotten and how long ago it was asked, I'm surprised that no one here mentioned the following method. A rectangle is the image of the unit square under an affine map. Simply apply the inverse of this affine map to the point in question, and then check if the result is in the unit square or not. To make things clear, consider the following image, where the vectors in the picture are $\mathbf{u} = c - d$, $\mathbf{v} = a - d$, and $\mathbf{w} = d$. Since the legs of a rectangle are perpendicular, the matrix $\begin{bmatrix}\mathbf{u} & \mathbf{v}\end{bmatrix}$ has orthogonal columns and so we even have a simple formula for the inverse: $$\begin{bmatrix}\mathbf{u} & \mathbf{v}\end{bmatrix}^{-1} = \begin{bmatrix}\mathbf{u}^T/||u||^2 \\ \mathbf{v}^T/||v||^2\end{bmatrix}.$$ If you want to check many points for the same rectangle, this matrix can be easily precomputed and stored, so that you perform the (typically more expensive) division operations only once at the start. Then you only need to do a few multiplications, additions, and subtractions for each point you are testing. This method also applies more generally to checking whether a point is in a parallelogram, though in the parallelogram case the matrix inverse does not take such a simple form.
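A small Python sketch of this test (corner naming follows the $\mathbf u = c-d$, $\mathbf v = a-d$, $\mathbf w = d$ convention above; the concrete corners are my own example):

```python
import numpy as np

def point_in_rectangle(p, a, c, d):
    """True iff p lies in the rectangle with corner d whose legs run to c and a."""
    u, v, w = c - d, a - d, d
    # Inverse affine map: coordinates of p - w in the orthogonal (u, v) basis.
    s = np.dot(p - w, u) / np.dot(u, u)
    t = np.dot(p - w, v) / np.dot(v, v)
    # p is inside iff its preimage lies in the unit square.
    return 0.0 <= s <= 1.0 and 0.0 <= t <= 1.0

# Rectangle with corners d=(0,0), c=(4,0), a=(0,2).
d, c, a = np.array([0., 0.]), np.array([4., 0.]), np.array([0., 2.])
print(point_in_rectangle(np.array([1., 1.]), a, c, d))  # True
print(point_in_rectangle(np.array([5., 1.]), a, c, d))  # False
```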
Determining whether or not spaces are separable I've been going over practice problems, and I ran into this one. I was wondering if anyone could help me out with the following problem. Let $X$ be a metric space of all bounded sequences $(a_n) \subset \mathbb{R}$ with the metric defined by $$d( (a_n), (b_n)) = \sup\{ |a_n - b_n| : n = 1, 2, \ldots \}.$$ Let $Y \subset X$ be the subspace of all sequences converging to zero. Determine whether or not $X$ and $Y$ are separable. Thanks in advance!
Suppose that $A=\{\alpha_n:n\in\Bbb N\}$ is a countable subset of $X$, where $\alpha_n$ is the sequence $\langle a_{n,k}:k\in\Bbb N\rangle$. Note that for any $x\in\Bbb R$ there is always a $y\in[-1,1]$ such that $|x-y|\ge 1$. Thus, we can construct a sequence $\beta=\langle b_k:k\in\Bbb N\rangle$ such that $b_k\in[-1,1]$ and $|b_k-a_{k,k}|\ge 1$ for each $k\in\Bbb N$. Clearly $\beta\in X$, but for each $n\in\Bbb N$ we have $\sup_{k\in\Bbb N}|b_k-a_{n,k}|\ge|b_n-a_{n,n}|\ge 1$, so the distance between $\beta$ and $\alpha_n$ is at least $1$. Thus, $A$ is not dense in $X$, and $X$ is not separable. $Y$, on the other hand, is separable. Let $D$ be the set of sequences of rational numbers that are $0$ from some point on, i.e., that have only finitely many non-zero terms. Show that $D$ is both countable and dense in $Y$. For the latter, start with any $\langle a_k:k\in\Bbb N\rangle\in Y$ and any $\epsilon>0$ and construct an element of $D$ that is less than $\epsilon$ away from $\langle a_k:k\in\Bbb N\rangle\in Y$.
Integer solutions of $p^2 + xp - 6y = \pm1$ Given a prime $p$, how can we find positive integer solutions $(x,y)$ of the equation: $$p^2 + xp - 6y = \pm1$$
If $p=2$ or $p=3$, you can't. Otherwise, the extended Euclid algorithm produces a solution $(x_0,y_0)$ of $x_0 p - 6 y_0 = \pm 1$ (in fact, one for $+1$ and one for $-1$). Then $x=x_0+6k-p$ and $y=y_0+pk$ is a solution of $p^2+xp-6y=\pm 1$. Since both $x$ and $y$ grow with $k$, we find infinitely many positive solutions, one for each $k\ge\max\{\lfloor {p-x_0\over6}\rfloor, \lfloor {-y_0\over p}\rfloor\}+1$.
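For concreteness, here is a sketch in Python of this construction (the helper names and the sample prime are my own; it handles the $+1$ case, and the $-1$ case is analogous):

```python
def ext_gcd(a, b):
    """Return (g, u, v) with u*a + v*b == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, u, v = ext_gcd(b, a % b)
    return g, v, u - (a // b) * v

def solutions(p, count=3):
    """First positive solutions (x, y) of p^2 + x*p - 6*y == 1 for prime p >= 5."""
    g, u, v = ext_gcd(p, 6)   # u*p + v*6 == 1, possible since gcd(p, 6) == 1
    x0, y0 = u, -v            # so x0*p - 6*y0 == 1
    sols, k = [], 0
    while len(sols) < count:
        x, y = x0 + 6*k - p, y0 + p*k
        if x > 0 and y > 0:
            sols.append((x, y))
        k += 1
    return sols

for x, y in solutions(5):
    assert 5**2 + 5*x - 6*y == 1
print(solutions(5))           # [(6, 9), (12, 14), (18, 19)]
```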
Are these two quotient rings of $\Bbb Z[x]$ isomorphic? Are the rings $\mathbb{Z}[x]/(x^2+7)$ and $\mathbb{Z}[x]/(2x^2+7)$ isomorphic? Attempted solution: My guess is that they are not isomorphic, but I am having trouble demonstrating this. Any hints as to how I should approach this?
Suppose they are isomorphic. Then $\left(\mathbb{Z}[x]/(x^2+7)\right)/(2) \cong \left(\mathbb{Z}[x]/(2x^2+7)\right)/(2)$. Ravi helpfully pointed out that considering ideals in either ring in terms of $x$ will often give us different ideals, but we do not suffer from this problem when using ideals generated by integers such as $(2)$. Letting overlines represent the canonical homomorphism from $\mathbb{Z}[x]$ to $\mathbb{F}_2[x]$, applying the correspondence theorem for rings on the left side tells us that $$\left(\mathbb{Z}[x]/(x^2+7)\right)/(2) \cong \left(\mathbb{Z}[x]/(2)\right)/(\overline{x^2 + 7}) \cong \mathbb{F}_2[x]/(x^2 + 1)\,.$$ Similarly, on the right, we get $$\left(\mathbb{Z}[x]/(2x^2+7)\right)/(2) \cong \left(\mathbb{Z}[x]/(2)\right)/(\overline{2x^2 + 7}) \cong \mathbb{F}_2[x]/(1) = \{0\}\,.$$ One can verify the former ring contains four distinct elements. The latter is trivial, which is absurd. EDIT: To further bolster intuition for an approach like this, see Bill's mention of it in his first comment on his own post.
Find $\lim\limits_{n\to+\infty}(u_n\sqrt{n})$. Let $(u_n)$ be the sequence defined by $u_0=a \in [0,2)$ and $u_n=\frac{u_{n-1}^2-1}{n}$ for all $n \in \mathbb N^*$. Find $\lim\limits_{n\to+\infty}{(u_n\sqrt{n})}$. I tried Cesàro: if I could find $\lim\limits_{n\to+\infty}\left(\frac{1}{u_n^2}-\frac{1}{u_{n-1}^2}\right)$, then I would get $\lim\limits_{n\to+\infty}(u_n^2 n)$. But I can't find $\lim\limits_{n\to+\infty}\left(\frac{1}{u_n^2}-\frac{1}{u_{n-1}^2}\right)$.
If ever $u_N\le 0$, then $-1/n\le u_n\le0$ for all $n>N$, hence $u_n\sqrt n\to 0$. Therefore we may assume for the rest of the argument that $u_n>0$ for all $n$. Let $e_n = n+2-u_n$. Then $0<e_0\le2$. Using the recursion formula for $e_n$, show that the assumption that $e_n\le2$ for all $n$ leads to $e_n\ge2^n e_0$. Therefore $e_n>2$ for some $n$, i.e. $u_n<n$ for some $n$. Let $q_n = {u_n\over n}$ for $n\ge 1$. We have seen that $0<q_n<1$ for big $n$. Find the recursion formula for $q_n$ and show that $q_n< q_{n-1}^2$ for big $n$ and therefore $q_n<\frac1n$ for some $n$. But then $u_{n+1}<0$.
How to prove that the Torus and a Ball have the same Cardinality. How can one prove that the torus and a ball have the same cardinality? The proof should use the Cantor-Bernstein theorem. I know that they are subsets of $\mathbb{R}^{3}$, so I can write $\leq \aleph$, but I do not know how to prove $\geqslant \aleph$. Thanks
Hint: Show that a circle is equipotent with $[0,2\pi)$ by fixing a base point, and sending each point on the circle to its angle, where $0$ is the base point. Since both the torus and the ball contain a circle, both have at least $\aleph$ many elements.
If $G$ is a finite group and $g \in G$, then $O(\langle g\rangle)$ is a divisor of $O(G)$. Does this result mean: given any finite group, if we are able to find a cyclic subgroup of it, then the order of the cyclic subgroup will be a divisor of the order of the original group? If I am right in interpreting it, can someone suggest an example highlighting this? And also help me understand the possible practical uses of this result. It surely looks interesting. Thanks, Soham
You have a finite group $G$ and you take any element $g\in G$. Then $\langle g \rangle$ is a subgroup of $G$. Then, as mentioned in the comment by anon, you can apply Lagrange's theorem to get the conclusion that you want. As an example of this, you could consider the symmetric group $S_5$. You pick a random element $\sigma \in S_5$, for example $\sigma = (1, 2, 4)$. Then you get the subgroup $$ \langle \sigma\rangle = \{(1,2,4), (1, 4, 2), (1) \}. $$ Hence the order of $\langle \sigma\rangle$ is $3$, and indeed $3$ is a divisor of $O(S_5) = 5! = 120$. You ask in the comment above about an example with a subgroup of the complex numbers. Consider $z = e^{\frac{2\pi i}{15}}$. Then you have the group $G = \langle z\rangle$ (under multiplication). This group has order $15$. (Can you find/write down the elements?) Now take $w= e^{\frac{2\pi i}{5}}$. Then $\langle w\rangle$ is a subgroup of $G$ of order ... (I will let you think about that). As an application of this someone else might have something helpful to say.
Is $x^y$ - $a^b$ divisible by $z$, where $y$ is large? The exact problem I'm looking at is: Is $4^{1536} - 9^{4824}$ divisible by $35$? But in general, how do you determine divisibility if the exponents are large?
You can use the binomial theorem to break the individual bases up into multiples of the divisor plus a remainder, and then expand binomially to check divisibility. This is one general way of doing such a problem; there may be other methods as well.
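For the concrete question, modular exponentiation settles it immediately (a sketch using Python's built-in three-argument `pow`, which reduces mod 35 at every step; this sidesteps the binomial expansion entirely):

```python
# 4^1536 and 9^4824 reduced mod 35 by fast modular exponentiation.
print(pow(4, 1536, 35))   # 1
print(pow(9, 4824, 35))   # 1
# The difference is 1 - 1 = 0 (mod 35), so 35 divides 4^1536 - 9^4824.
print((pow(4, 1536, 35) - pow(9, 4824, 35)) % 35)   # 0
```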
Every point closed $\stackrel{?}{\Rightarrow}$ space is Hausdorff If a topological space is Hausdorff, then every point is closed. Is the converse true? Edited: Let $G$ be a topological group and $H$ the intersection of all neighborhoods of zero. Since every coset of $H$ is closed, every point of $G/H$ will be closed. Why does that make $G/H$ Hausdorff?
Notice that $H$ is a closed normal subgroup of $G$, for that see e.g. this for proof that it is a closed subgroup (equal to $\operatorname{cl} \{e\}$), and for normality just notice that conjugation preserves the neighbourhoods of identity (as a set), so it does preserve intersection as well. From that we see that $G/H$ is a topological group. It is a known fact that for topological groups, $T_0$ implies completely regular Hausdorff. Every point being closed is equivalent to $T_1$, from which $T_{3\frac {1}{2}}$, so in particular $T_2$, follows. A proof can be found in many places, e.g. Engelking's General Topology iirc. A short one for closed $\{e\}\implies T_2$: notice that Hausdorffness is equivalent to the diagonal being closed. But the diagonal is the preimage of identity by the map $(x,y)\mapsto xy^{-1}$. In general, we do not have the implication, as shown by e.g. the cofinite topology on an infinite space.
Finding disjoint neighborhoods of two points in $\Bbb R$. Let $x$ and $y$ be distinct real numbers. How do you prove that there exists a neighborhood $P$ of $x$ and a neighborhood $Q$ of $y$ such that $P \cap Q = \emptyset$?
Hint: open intervals of length $e$ centered at $x$ and $y$ are such neighborhoods, and if $e$ is small enough, they will not intersect. How small does $e$ have to be for this to happen?
Find the eigenvalues of a linear operator T. Let $A$ be an $m\times m$ and $B$ an $n\times n$ complex matrix, and consider the linear operator $T$ on the space $\mathbb{C}^{m\times n}$ of all $m\times n$ complex matrices defined by $T(M) = AMB$. - Show how to construct an eigenvector for $T$ out of a pair of column vectors $X, Y$, where $X$ is an eigenvector for $A$ and $Y$ is an eigenvector for $B^t$. - Determine the eigenvalues of $T$ in terms of those of $A$ and $B$. - Determine the trace of this operator.
There is a standard way to do this kind of exercise. First assume that $A$ and $B$ are diagonal. Then a short calculation shows that the eigenvalues are as given in previous solutions, i.e., the pairwise products of the eigenvalues of these matrices. The result then holds for diagonalisable matrices by a suitable choice of bases. The easiest way to get the final version is to use the fact that the diagonalisable matrices are dense in all matrices and employ a continuity argument involving the characteristic polynomials.
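A numerical sanity check (a NumPy sketch; it relies on the vectorization identity $\operatorname{vec}(AMB) = (B^T \otimes A)\operatorname{vec}(M)$, which is my own framing of the operator $T$ as an $mn \times mn$ matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 2
A = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# T(M) = A M B acts on column-stacked M as the Kronecker product kron(B.T, A).
T = np.kron(B.T, A)

eig_T = np.sort_complex(np.linalg.eigvals(T))
pairwise = np.sort_complex(np.array([lam * mu
                                     for lam in np.linalg.eigvals(A)
                                     for mu in np.linalg.eigvals(B)]))
print(np.allclose(eig_T, pairwise))                         # True
print(np.isclose(np.trace(T), np.trace(A) * np.trace(B)))   # True
```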
What's the answer of $(10+13) \equiv\ ?$ As to the modulus operation, I have only seen this form: $(x + y) \bmod z \equiv K$. So I can't understand the question. By the way, the answer choices are: a) 8 (mod 12) b) 9 (mod 12) c) 10 (mod 12) d) 11 (mod 12) e) None of the above
The OP may be taking a Computer Science course, in which if $b$ is a positive integer, then $a\bmod{b}$ is the remainder when $a$ is divided by $b$. In that case $\bmod$ is a binary operator. That is different from the $x\equiv y\pmod{m}$ of number theory, which is a ternary relation (or, for fixed $m$, a binary relation). Calculating $(10+13)\bmod{12}$ is straightforward. Find $10+13$, and calculate the remainder on division by $12$. We get $(10+13)\bmod{12}=11$. So $(10+13)\bmod{12} \equiv 11\pmod{12}$. Remark: It seems unusual to use the binary operator and the ternary relation in a single short expression.
Determinant of matrices along a line between two given matrices. The question, with no simplifications or motivation: Let $A$ and $B$ be square matrices of the same size (with real or complex coefficients). What is the most reasonable formula one can find for the determinant $$\det((1-t)A + tB)$$ as a function of $t \in [0,1]$? If no reasonable formula exists, what can we say about these determinants? So we're taking a line between two matrices $A$ and $B$, and computing the determinant along this line. When $A$ and $B$ are diagonal, say $$A = \operatorname{diag}(a_1,\ldots,a_n), B = \operatorname{diag}(b_1,\ldots,b_n),$$ then we can compute this directly: $$\begin{aligned} \det((1-t)A + tB) &= \det \operatorname{diag}((1-t)a_1 + tb_1, \ldots, (1-t)a_n + tb_n) \\ &= \prod_{j=1}^n ((1-t)a_j + tb_j). \end{aligned}$$ I'm not sure if this can be further simplified, but I'm sure someone can push things at least a tiny bit further than I have. I'm most curious about the case where $A = I$ and each $(1-t)A + tB$ is assumed to be invertible. Here's what I know in this case: writing $$D(t) = \det((1-t)I + tB),$$ we can compute that $$ \dot{D}(t) = D(t) c(t)$$ where $$c(t) := \operatorname{trace}(((1-t)I + tB)^{-1}(B-I))$$ (a warning: I am not 100% sure this formula holds). Thus we can write $$D(t) = \exp\left(\int_0^t c(\tau) \; d\tau\right)$$ since $D(0) = 1$. I have no idea how to deal with the function $c(\tau)$, though. Any tips?
I think I have an answer to the last case I mentioned ($A=I$, all $(1-t)I + tB$ invertible). The key is to write $$\begin{aligned} \int_0^t c(\tau) \; d\tau &= \operatorname{trace} \int_0^t ((1-\tau)I + \tau B)^{-1} (B-I) \; d\tau \\ &= \operatorname{trace} \log ((1-t)I + tB) \end{aligned} $$ using that $\frac{d}{dt}((1-t)I + tB) = B-I$ and $\frac{d}{dt}\log(A(t)) = A(t)^{-1} A'(t)$ (I think this is true!). Taking matrix logarithms here should be ok, since everything in sight is invertible. Then $$ D(t) = e^{\operatorname{trace} \log((1-t)I + tB)}. $$ Of course, this leaves one with the problem of computing a matrix logarithm, but in the case where $B$ is diagonal, this reduces to the first formula in my question. This could probably be generalised to the case of general invertible $A$ by computing $\dot{D}(t)$ as before. I'll do this tomorrow and make an edit with the results (or you can do it!)
Proof of the Hardy-Littlewood Tauberian theorem Can someone point me to a proof of the Hardy-Littlewood Tauberian theorem, that is suitable enough to be shown to high school students? (with knowledge of calculus, sequences and series of course)
Have you looked at the presentation in Titchmarsh's Theory of Functions (Section 7.5)? The only non-elementary part of the argument is Weierstrass's approximation theorem, which you can probably assume as a fact. The preliminary material given also include an "easy" special case where the exposition certainly can be understood by someone with knowledge of calculus, sequences, and series.
Evaluating $ \lim\limits_{n\to\infty} \sum_{k=1}^{n^2} \frac{n}{n^2+k^2} $ How would you evaluate the following series? $$\lim_{n\to\infty} \sum_{k=1}^{n^2} \frac{n}{n^2+k^2} $$ Thanks.
We have the following important theorem. Theorem: Let $f$ be monotonic with $f(x)>0$ and $\lim_{x\to\infty}f(x)=0$, and suppose $\int_{0}^\infty f(x)\,dx$ exists. Then $$\lim_{h\to0^+}h\sum_{v=0}^\infty f(vh)=\int_{0}^\infty f(x)\,dx.$$ It is enough to take $h^{-1}=t$ and $f(x)=\frac{2}{1+x^2}$; then we get the desired result. So we showed that $$\lim_{t\to\infty}\left(\frac{2}{t}+\frac{2t}{t^2+1^2}+\frac{2t}{t^2+2^2}+\cdots+\frac{2t}{t^2+n^2}+\cdots\right)=\pi$$ Let me show some additional infinite sums obtainable by this theorem. We show $$\lim_{t\to 1^{-}}(1-t)^2\left(\frac{t}{1-t}+\frac{2t^2}{1-t^2}+\frac{3t^3}{1-t^3}+\cdots+\frac{nt^n}{1-t^n}+\cdots\right)=\sum_{n=1}^\infty\frac{1}{n^2}=\frac{\pi^2}{6}$$ In the previous theorem take $e^{-h}=t$ and $f(x)=\frac{xe^{-x}}{1-e^{-x}}$; then, since $$\int_0^\infty\frac{xe^{-x}}{1-e^{-x}}\,dx=\int_0^\infty x\left(\sum_{n=1}^\infty e^{-nx}\right)dx=\sum_{n=1}^\infty\frac{1}{n^2}=\frac{\pi^2}{6},$$ the claim follows. The theorem also gives an important infinite-sum relation for the Euler constant. We show $$\lim_{t\to 1^{-}}\left[(1-t)\left(\frac{t}{1-t}+\frac{t^2}{1-t^2}+\frac{t^3}{1-t^3}+\cdots+\frac{t^n}{1-t^n}+\cdots\right)-\log\frac{1}{1-t}\right]=C=\text{Euler constant}$$ By the previous theorem we just need to take $f(x)=e^{-x}\left(\frac{1}{1-e^{-x}}-\frac{1}{x}\right)$ and $e^{-h}=t$, since it is known that $$\int_0^\infty e^{-x}\left(\frac{1}{1-e^{-x}}-\frac{1}{x}\right)dx=C=\text{Euler constant}$$ See G. Pólya and G. Szegő, Problems and Theorems in Analysis, Vol. I, Springer, for more examples.
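As a quick numerical check of the original limit (a small Python sketch; the sum tends to $\pi/2\approx1.5708$ slowly, since the tail behaves like $1/n$):

```python
import math

def partial(n):
    """sum_{k=1}^{n^2} n / (n^2 + k^2)"""
    return sum(n / (n*n + k*k) for k in range(1, n*n + 1))

for n in (10, 100, 1000):
    print(n, partial(n))   # roughly 1.42, 1.556, 1.569 -> pi/2
print(math.pi / 2)         # 1.5707963...
```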
Is it true that the infinity norm can be bounded using the $L_2$ norm in the following way? Let $v \in \mathbb{R}^k$, let $A \in \mathbb{R}^{m \times k}$, and let $B \in \mathbb{R}^{m \times n}$ be such that each column $B_i$ of $B$ has $$||B_i||_2 \le 1.$$ Is it true that: (1) $||v A^{\top} B||_{\infty} \le ||v A^{\top}||_2$? (2) If the spectral norm of $A$ satisfies $||A||_{\mathrm{spectral}} \le 1$, is it true that $||v A^{\top}||_2 \le ||v||_2$? Thanks.
1) Cauchy-Schwarz says $$|(v A^T B)_i| = |v A^T B_i| \le \|v A^T\|_2 \|B_i\|_2 \le \|v A^T\|_2$$ 2) Yes because the spectral norm is the operator norm corresponding to the $2$-norm on vectors, and $\|A\|_{\text{spectral}} = \|A^T\|_{\text{spectral}}$.
Evaluate $\int\frac{dx}{\sin(x+a)\sin(x+b)}$ Please help me evaluate: $$ \int\frac{dx}{\sin(x+a)\sin(x+b)} $$
The given integral is: $$\int\frac{dx}{\sin(x+a)\sin(x+b)}$$ We can rewrite it as: $$\int\frac{dx}{\sin(x+a)\sin(x+b)}=\int\frac{\sin(x+a)}{\sin(x+b)}\cdot\frac{dx}{\sin^2(x+a)}$$ We substitute $$t=\frac{\sin(x+b)}{\sin(x+a)},$$ for which $$dt=\frac{\cos(x+b)\sin(x+a)-\sin(x+b)\cos(x+a)}{\sin^2(x+a)}\,dx=\sin(a-b)\,\frac{dx}{\sin^2(x+a)}.$$ Now we have: $$\int\frac{dx}{\sin(x+a)\sin(x+b)}=\int\frac{1}{t}\cdot\frac{dt}{\sin(a-b)}=\frac{1}{\sin(a-b)}\ln|t|+C=\frac{1}{\sin(a-b)}\ln\left|\frac{\sin(x+b)}{\sin(x+a)}\right|+C$$
Calculate $\int_{-\infty}^{+\infty} \cos(at) e^{-bt^2}\, dt$. Could someone please help me calculate the integral $$\int_{-\infty}^{+\infty} \cos (at) e^{-bt^2}\, dt,$$ with $a$ and $b$ both real and $b>0$? I have tried integration by parts, but I can't seem to simplify it to anything useful. Essentially, I would like to arrive at something that looks like 7.4.6 here: textbook result
Hint: Use the fact that $$\int_{-\infty}^\infty e^{iat- bt^2}\,dt = \sqrt{\frac{\pi}{b}} e^{-a^2/4b} $$ which is valid for $b>0$. To derive this formula, complete the square in the exponent and then shift the integration contour a bit.
Representations of integers by a binary quadratic form Let $\mathfrak{F}$ be the set of binary quadratic forms over $\mathbb{Z}$. Let $f(x, y) = ax^2 + bxy + cy^2 \in \mathfrak{F}$. Let $\alpha = \left( \begin{array}{ccc} p & q \\ r & s \end{array} \right)$ be an element of $SL_2(\mathbb{Z})$. We write $f^\alpha(x, y) = f(px + qy, rx + sy)$. Since $(f^\alpha)^\beta$ = $f^{\alpha\beta}$, $SL_2(\mathbb{Z})$ acts on $\mathfrak{F}$ from right. Let $f, g \in \mathfrak{F}$. If $f$ and $g$ belong to the same $SL_2(\mathbb{Z})$-orbit, we say $f$ and $g$ are equivalent. Let $f = ax^2 + bxy + cy^2 \in \mathfrak{F}$. We say $D = b^2 - 4ac$ is the discriminant of $f$. Let $m$ be an integer. If $m = ax^2 + bxy + cy^2$ has a solution in $\mathbb{Z}^2$, we say $m$ is represented by $ax^2 + bxy + cy^2$. If $m = ax^2 + bxy + cy^2$ has a solution $(s, t)$ such that gcd$(s, t) = 1$, we say $m$ is properly represented by $ax^2 + bxy + cy^2$. Is the following proposition true? If yes, how do we prove it? Proposition Let $ax^2 + bxy + cy^2 \in \mathfrak{F}$. Suppose its discriminant is not a square. Let $m$ be an integer. Then $m$ is properly represented by $ax^2 + bxy + cy^2$ if and only if there exist integers $l, k$ such that $ax^2 + bxy + cy^2$ and $mx^2 + lxy + ky^2$ are equivalent.
Lemma 1. Let $f = ax^2 + bxy + cy^2 \in \mathfrak{F}$. Let $\alpha = \left( \begin{array}{ccc} p & q \\ r & s \end{array} \right)$ be an element of $SL_2(\mathbb{Z})$. Then $f^\alpha(x, y) = f(px + qy, rx + sy) = kx^2 + lxy + my^2$, where $k = ap^2 + bpr + cr^2$ $l = 2apq + b(ps + qr) + 2crs$ $m = aq^2 + bqs + cs^2$. Proof: Clear. Proof of the proposition. Let $f(x, y) = ax^2 + bxy + cy^2$. Suppose $m$ is properly represented by $f(x, y)$. There exist integers $p, r$ such that gcd$(p, r) = 1$ and $m = f(p, r)$. Since gcd$(p, r) = 1$, there exist integers $s, q$ such that $ps - rq = 1$. By Lemma 1, $f(px + qy, rx + sy) = mx^2 + lxy + ky^2$, where $m = ap^2 + bpr + cr^2$ $l = 2apq + b(ps + qr) + 2crs$ $k = aq^2 + bqs + cs^2$. Hence, $ax^2 + bxy + cy^2$ and $mx^2 + lxy + ky^2$ are equivalent. Conversely suppose $ax^2 + bxy + cy^2$ and $mx^2 + lxy + ky^2$ are equivalent. There exist integers $p, q, r, s$ such that $ps - rq = 1$ and $f(px + qy, rx + sy) = mx^2 + lxy + ky^2$. Letting $x = 1, y = 0$, we get $f(p, r) = m$. Since $ps - rq = 1$, gcd$(p, r) = 1$. Hence $m$ is properly represented by $ax^2 + bxy + cy^2$.
Derive the formula for the left rectangle sum for $f(x)=x^2+1$ from $0$ to $3$. Simply that: derive the formula for the left rectangle sum for $f(x)=x^2+1$ from $0$ to $3$. This is when you use rectangles and Riemann sums to approximate an integral. I'm not really sure what it means to derive the formula?
Can you follow the very similar example on this web site? http://www2.seminolestate.edu/lvosbury/CalculusI_Folder/RiemannSumDemo.htm It is important for you to learn what is going on here. I would strongly recommend you use all three (left, right and midpoint) to find the integral. Of course, you know what the answer should be by doing the integral. Hint 1: Area = 12 Hint 2: See the left sum here: http://www.wolframalpha.com/input/?i=INTEGRATE%5Bx%5E2%2B1%2C%7Bx%2C0%2C3%7D%5D&t=crmtb01 Please show your work if this is confusing. HTH ~A
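One way to derive the formula explicitly (a sketch; this uses the sum-of-squares identity $\sum_{i=1}^{m} i^2 = \frac{m(m+1)(2m+1)}{6}$): with $\Delta x = 3/n$ and left endpoints $x_i = i\,\Delta x$, $$L_n = \frac{3}{n}\sum_{i=0}^{n-1}\left(\Big(\frac{3i}{n}\Big)^2+1\right) = \frac{27}{n^3}\cdot\frac{(n-1)n(2n-1)}{6} + 3 \;\longrightarrow\; 9 + 3 = 12 \quad (n\to\infty).$$ A small Python check of both forms:

```python
def left_sum(n):
    """Left Riemann sum for f(x) = x^2 + 1 on [0, 3] with n rectangles."""
    dx = 3.0 / n
    return sum(((i * dx)**2 + 1) * dx for i in range(n))

def left_sum_closed(n):
    """Closed form: (27/n^3) * (n-1)n(2n-1)/6 + 3."""
    return 27 * (n - 1) * n * (2*n - 1) / (6 * n**3) + 3

for n in (10, 100, 1000):
    print(n, left_sum(n), left_sum_closed(n))   # both approach 12
```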
Sum of angles in $\mathbb{R}^n$. Given three vectors $v_1,v_2$ and $v_3$ in $\mathbb{R}^n$ with the standard scalar product, the following is true: $$\angle(v_1,v_2)+\angle(v_2,v_3)\geq \angle(v_1,v_3).$$ I tried to substitute $\angle(v_1,v_2) = \cos^{-1}\frac{v_1 \cdot v_2}{\Vert v_1 \Vert \Vert v_2 \Vert}$, but I could not show the resulting inequality. What is the name of this inequality, and do you know a reference that one can cite in an article?
You can reduce the problem to $\mathbb{R}^3$, and there, wlog, $v_2=(0,0,1)$; then one gets an easy-to-prove inequality if one writes everything in polar coordinates.
Example of a sequence with countably many cluster points. Can someone give a concrete example of a sequence of reals that has countably infinitely many cluster points?
First fix some bijection $f : \mathbb{N} \to \mathbb{N} \times \mathbb{N}$. For $n \in \mathbb{N}$ let $g(n)$ denote the first coordinate of $f(n)$ and let $h(n)$ denote the second corrdinate. Then define a sequence $\{ x_n \}_{n=1}^\infty$ by $$x_n = g(n) + 2^{-h(n)}.$$ Then every natural number is a cluster point of this sequence. For a more concrete example, consider the following sequence: $$\begin{array}{c|c} n & x_n \\ \hline 1 & 1 + 2^{-1} \\ 2 & 1 + 2^{-2} \\ 3 & 2 + 2^{-1} \\ 4 & 1 + 2^{-3} \\ 5 & 2 + 2^{-2} \\ 6 & 3 + 2^{-1} \\ 7 & 1 + 2^{-4} \\ 8 & 2 + 2^{-3} \\ 9 & 3 + 2^{-2} \\ 10 & 4 + 2^{-1} \\ \vdots & \vdots \end{array} $$ (I'm too lazy to give an exact formula at the moment, but I think the idea is clear.)
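Here is a short sketch that generates this concrete example's terms in Python (the diagonal enumeration used is one possible choice of the bijection $f$; it reproduces the table above):

```python
def terms(count):
    """Walk the diagonals g + h = 2, 3, 4, ... with g = 1, 2, ... on each,
    emitting x = g + 2^{-h}; every positive integer g is a cluster point."""
    out, s = [], 2
    while True:
        for g in range(1, s):
            out.append(g + 2.0 ** -(s - g))
            if len(out) == count:
                return out
        s += 1

print(terms(10))
# [1.5, 1.25, 2.5, 1.125, 2.25, 3.5, 1.0625, 2.125, 3.25, 4.5]
```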
A formal proof that a sum of infinite series is a series of a sum? I feel confused when dealing with infinities of any kind. E.g. the next equation confuses me: $$\displaystyle\sum^\infty_{n=1} (f_1(n) + f_2(n)) = \displaystyle\sum^\infty_{n_1=1} f_1(n_1) + \displaystyle\sum^\infty_{n_2=1} f_2(n_2)$$ How do people deal with such a special object as infinity with such ease (it makes sense in general, though)? Is there an axiomatic theory for that? I ask because I saw people relying on this when calculating the exact value of a series. And last but not least: does my question sound tautological? Better to delete it, then.
You have not quite stated a result fully. We state a result, and then write down the main elements of a proof. You should at least scan the proof, and then go to the final paragraph. We will show that if $\sum_{i=1}^\infty f(i)$ and $\sum_{j=1}^\infty g(j)$ both exist, then so does $\sum_{k=1}^\infty (f(k)+g(k))$, and $$\sum_{k=1}^\infty (f(k)+g(k))=\sum_{i=1}^\infty f(i)+\sum_{j=1}^\infty g(j).$$ Let $\sum_{i=1}^\infty f(i)=a$ and $\sum_{j=1}^\infty g(j)=b$. We want to show that $\sum_{k=1}^\infty (f(k)+g(k))=a+b$. Let $\epsilon \gt 0$. By the definition of convergence, there is an integer $K$ such that if $n \gt K$ then $$\left|\sum_{i=1}^n f(i)-a\right|\lt \epsilon/2.$$ Similarly, there is an integer $L$ such that if $n \gt L$ then $$\left|\sum_{j=1}^n g(j)-b\right|\lt \epsilon/2.$$ Let $M=\max(K,L)$. Then, from the two inequalities above, and the Triangle Inequality, it follows that if $n \gt M$, we have $$\left|\sum_{k=1}^n (f(k)+g(k))-(a+b)\right|\lt \epsilon.$$ This completes the proof. If you do not have experience with arguments like the one above, it may be difficult to understand. But there is one important thing you should notice. After the initial statement of the result, there is no more mention of "infinity." All subsequent work is with finite sums. The existence and value of an expression like $\sum_{n=1}^\infty h(n)$ is defined purely in terms of finite sums. Remark: It is perfectly possible for the sum on your left-hand side to exist, while the sums on the right do not. A crude example would be $f(i)=1$ for all $i$, and $g(j)=-1$ for all $j$. But one can prove that if any two of the sums exist, then the third one does.
Locally integrable functions. Formulation: Let $v\in L^1_\text{loc}(\mathbb{R}^3)$ and $f \in H^1(\mathbb{R}^3)$ such that \begin{equation} \int f^2 v_+ = \int f^2 v_- = +\infty. \end{equation} Here, $v_- = \max(0,-v)$ and $v_+ = \max(0,v)$, i.e., the negative and positive parts of $v=v_+ - v_-$, respectively. Question: Does $g\in H^1(\mathbb{R}^3)$ exist, such that \begin{equation} \int g^2 v_+ < \infty, \quad \int g^2 v_- = +\infty \quad ? \end{equation} Some thoughts: Let $S_\pm$ be the supports of $v_\pm$, respectively. One can easily find $g\in L^2$ such that the last equation holds: simply multiply $f$ by the characteristic function of $S_-$. The intuitive approach is then some smoothing of this function by a mollifier, or using a bump function to force the support of $g$ away from $S_+$. However, the supports $S_\pm$ can be quite complicated: for example, fat Cantor-like sets. Thus, a bump function technique or a mollifier may "accidentally" fill out any of $S_\pm$. My motivation: The problem comes from my original research on the mathematical foundations of Density Functional Theory (DFT) in physics and chemistry. Here, $f^2$ is proportional to the probability density of finding an electron at a point in space, and $v$ is the potential energy field of the environment. $\int f^2 v$ is the total potential energy for the system's state. The original $f$ gives a meaningless "$\infty-\infty$" result, but for certain reasons, we are out of the woods if there is some $other$ density $g$ with the prescribed property. Edit: Removed claim that $S_\pm$ must be unbounded. This does not follow from the stated assumptions.
When $v \in L^1_{loc}(\mathbb{R}^3)$, then $v$ is the density of an absolutely continuous signed measure $\mu$. Take a Hahn decomposition of $\mathbb{R}^3$ into two measurable sets, so that $\mathbb{R}^3$ is the disjoint union of, say, $P$ and $N$, where $P$ is positive for $\mu$ and $N$ is negative for $\mu$, meaning that $\mu$ is non-negative on measurable subsets of $P$ and non-positive on measurable subsets of $N$. This gives a Jordan decomposition into measures $\mu^+$ and $\mu^-$, concentrated on $P$ and $N$ respectively. Now $\mu^+$ on a measurable set $A$ is $\int_A v_+$ and $\mu^-$ on a measurable set $A$ is $\int_A v_-$. When $f \in H^1(\mathbb{R}^3) \subset L^2(\mathbb{R}^3)$ has $\int f^2 v_+ = \infty $ and $ \int f^2 v_- = \infty$, then $f \,1_N \in L^2(\mathbb{R}^3)$, $\int f^2\,1_N v_- = \infty $ and $\int (f \, 1_N)^2 v_+ = \int f^2 \, 1_N v_+ = 0$. The unwritten measure in the integrals is Lebesgue measure on $\mathbb{R}^3$.
Random walking and the expected value I was asked this question at an interview, and I didn't know how to solve it. I was curious if anyone could help me. Let's say we have a square with vertices 1, 2, 3, 4. I can randomly walk to each neighbouring vertex with equal probability. My goal is to start at '1' and get back to '1'. What is the expected number of steps I will take before I return to 1?
By symmetry, the unique invariant probability measure $\pi$ for this Markov chain is uniform on the four states. The expected return time is therefore $\mathbb{E}_1(T_1)=1/\pi(1)=4.$ This principle is easy to remember and can be used to solve other interesting problems.
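For a sanity check, here is a small Monte Carlo simulation of the walk (a sketch of my own, not part of the original answer; the square is modelled as the cycle $0$–$1$–$2$–$3$–$0$):

```python
import random

def return_time():
    """Steps until the walk on the 4-cycle first returns to vertex 0."""
    pos, steps = 0, 0
    while True:
        pos = (pos + random.choice([-1, 1])) % 4  # hop to a random neighbour
        steps += 1
        if pos == 0:
            return steps

trials = 100_000
print(sum(return_time() for _ in range(trials)) / trials)  # ~ 4.0
```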
Prove that $i^i$ is a real number According to WolframAlpha, $i^i=e^{-\pi/2}$ but I don't know how I can prove it.
This would come right from Euler's formula. Let's derive it first. There are many ways to derive it, the Taylor series method being the most popular; here I'll go through a different proof. Let the polar form of the complex number be equal to $z$. $$\implies z = \cos x + i\sin x$$ Differentiating on both sides we get, $$\implies \dfrac{dz}{dx} = -\sin x + i\cos x$$ Since $-\sin x + i\cos x = i(\cos x + i\sin x) = iz$, this gives $$\implies dz = iz\,dx$$ Integrating on both sides, $$\implies \displaystyle \int \frac{dz}{z} = i \int dx$$ $$\implies \log_e z = ix + K$$ Setting $x = 0$ in the equation gives $z = 1$ and hence $K = 0$, so we have $$\implies z = e^{ix}$$ $$\implies e^{ix} = \cos x + i\sin x$$ The most famous example of a purely real number arising from a real base raised to an imaginary power is $$e^{i\pi} = -1,$$ which is Euler's identity. To find $i$ to the power $i$ we put $x = \frac{\pi}2$ in Euler's formula. We get $$e^{i\frac{\pi}2} = \cos \frac{\pi}2 + i\sin \frac{\pi}2$$ $$e^{i\frac{\pi}2} = i$$ $${(e^{i\frac{\pi}2})}^{i} = i^{i}$$ $$i^{i} = {e^{i^{2}\frac{\pi}2}} = {e^{-\frac{\pi}2}}$$ $$i^{i} = {e^{-\frac{\pi}2}} = 0.20787957635\ldots$$ This value of $i$ to the power $i$ comes from the principal values of $\sin$ and $\cos$ that satisfy the equation. There are infinitely many values through which this can be evaluated, since $\sin$ and $\cos$ are periodic functions.
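As a quick numerical check (my own sketch; Python's complex power uses the principal branch, matching the value above):

```python
import cmath

print((1j) ** 1j)                # (0.20787957635076193+0j)
print(cmath.exp(-cmath.pi / 2))  # 0.20787957635076193
```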
Evaluating $\int_0^\infty\frac{\sin(x)}{x^2+1}\, dx$ I have seen $$\int_0^\infty \frac{\cos(x)}{x^2+1} \, dx=\frac{\pi}{2e}$$ evaluated in various ways. It's rather popular when studying complex analysis. But, what about $$\int_0^\infty \frac{\sin(x)}{x^2+1} \, dx\,\,?$$ This appears to be trickier and more challenging. I found that it has a closed form of $$\cosh(1)\operatorname{Shi}(1)-\sinh(1)\operatorname{Chi}(1)\,\,,\,\operatorname{Shi}(1)=\int_0^1 \frac{\sinh(x)}{x}dx\,\,,\,\, \operatorname{Chi}(1)=\gamma+\int_0^1 \frac{\cosh(x)-1}{x} \, dx$$ which are the hyperbolic sine and cosine integrals, respectively. It's an odd function, so $$\int_{-\infty}^\infty \frac{\sin(x)}{x^2+1} \, dx=0$$ But, does anyone know how the integral over $[0,\infty)$ can be done? Thanks a bunch.
The Mellin transform of sine is, for $-1<\Re(s)<1$: $$ G_1(s) = \mathcal{M}_s(\sin(x)) = \int_0^\infty x^{s-1}\sin(x) \mathrm{d} x =\Im \int_0^\infty x^{s-1}\mathrm{e}^{i x} \mathrm{d} x = \Im \left( i^s\int_0^\infty x^{s-1}\mathrm{e}^{-x} \mathrm{d} x \right)= \Gamma(s) \sin\left(\frac{\pi s}{2}\right) = 2^{s-1} \frac{\Gamma\left(\frac{s+1}{2}\right)}{\Gamma\left(1-\frac{s}{2}\right)} \sqrt{\pi} $$ And the Mellin transform of $(1+x^2)^{-1}$ is, for $0<\Re(s)<2$: $$ G_2(s) = \mathcal{M}_s\left(\frac{1}{1+x^2}\right) = \int_0^\infty \frac{x^{s-1}}{1+x^2}\mathrm{d} x \stackrel{x^2=u/(1-u)}{=} \frac{1}{2} \int_0^1 u^{s/2-1} (1-u)^{-s/2} \mathrm{d}u = \frac{1}{2} \operatorname{B}\left(\frac{s}{2},1-\frac{s}{2}\right) = \frac{1}{2} \Gamma\left(\frac{s}{2}\right) \Gamma\left(1-\frac{s}{2}\right) = \frac{\pi}{2} \frac{1}{\sin\left(\pi s/2\right)} $$ Now to the original integral: inserting the Mellin inversion formula for $(1+x^2)^{-1}$, with $0<\gamma<1$: $$ \int_0^\infty \frac{\sin(x)}{1+x^2}\mathrm{d}x = \int_0^\infty \sin(x) \left( \frac{1}{2 \pi i}\int_{\gamma-i \infty}^{\gamma+ i\infty} G_2(s)\, x^{-s}\, \mathrm{d} s\right) \mathrm{d}x = \frac{1}{2 \pi i} \int_{\gamma-i \infty}^{\gamma+i \infty} G_2(s) G_1(1-s) \mathrm{d}s =\\ \frac{1}{4 i} \int_{\gamma-i \infty}^{\gamma+i \infty} \Gamma(1-s) \cot\left(\frac{\pi s}{2}\right) \mathrm{d} s = \frac{2\pi i}{4 i} \sum_{n=1}^\infty \operatorname{Res}_{s=2n} \Gamma(1-s) \cot\left(\frac{\pi s}{2}\right) = \sum_{n=1}^\infty \frac{\psi(2n)}{\Gamma(2n)} = \sum_{n=1}^\infty \frac{1+(-1)^n}{2} \frac{\psi(n)}{\Gamma(n)} $$ Since $$ \sum_{n=1}^\infty z^n \frac{\psi(n)}{\Gamma(n)} = \mathrm{e}^z z \left(\Gamma(0,z) + \log(z)\right) $$ Combining: $$ \int_0^\infty \frac{\sin(x)}{1+x^2} \mathrm{d}x = \frac{\mathrm{e}}{2} \Gamma(0,1) - \frac{1}{2 \mathrm{e}} \Gamma(0,-1) - \frac{i \pi }{2 \mathrm{e}} = \frac{1}{2e} \operatorname{Ei}(1) - \frac{\mathrm{e}}{2} \operatorname{Ei}(-1) $$
Help evaluating a limit I have the following limit: $$\lim_{n\rightarrow\infty}e^{-\alpha\sqrt{n}}\sum_{k=0}^{n-1}2^{-n-k} {{n-1+k}\choose k}\sum_{m=0}^{n-1-k}\frac{(\alpha\sqrt{n})^m}{m!}$$ where $\alpha>0$. Evaluating this in Mathematica suggests that this converges, but I don't know how to evaluate it. Any help would be appreciated.
I would start even more simple-mindedly by replacing the inner sum with its limiting value $e^{\alpha \sqrt{n}}$ (the upper limit $n-1-k$ tends to infinity). This cancels the outer exponential, so we are left with $\lim_{n\rightarrow\infty}\sum_{k=0}^{n-1}2^{-n-k} {{n-1+k}\choose k}$. Doing some manipulation, $\sum_{k=0}^{n-1}2^{-n-k} {{n-1+k}\choose k} = \sum_{k=0}^{n-1}2^{-n-k} {{n-1+k}\choose {n-1}} = \sum_{k=n-1}^{2n-2}2^{-k-1} {{k}\choose {n-1}} $. As often happens, it is late and I am tired and not sure exactly what to do next, so I'll leave it at this.
Showing $H=\langle a,b|a^2=b^3=1,(ab)^n=(ab^{-1}ab)^k\rangle$. Let $G=\langle a,b|a^2=b^3=1,(ab)^n=(ab^{-1}ab)^k \rangle$. Prove that $G$ can be generated with $ab$ and $ab^{-1}ab$. And from there, $\langle(ab)^n\rangle\subset Z(G)$. Problem wants $H=\langle ab,ab^{-1}ab \rangle$ to be $G$. Clearly, $H\leqslant G$ and after doing some handy calculation which takes time I've got: * *$ab^{-1}=(ab^{-1}ab)(ab)^{-1}\in H$ *$b=b^{-2}=(ab)^{-1}ab^{-1}\in H$ *$a=(ab)b^{-1}\in H$ So $G\leqslant H$ and therefore $G=H=\langle ab,ab^{-1}ab\rangle$. For the second part, I should prove that $N=\langle(ab)^n\rangle\leqslant Z(G)$. Please help me. Thanks.
The question is answered in the comments. However so that this question does not remain listed as unanswered forever, I will provide a solution. I will also give the details for the first part of the question. Part 1 Show $G= \langle ab, ab^{-1}ab \rangle$ Let $H=\langle ab, ab^{-1}ab \rangle $. It is clear that $ H \leq G$. We will show that $G \leq H$ by showing that $a$ and $b$ are in $H$. We make the following observations which we will use in the argument: 1) $ (ab)^{-1}=b^{-1}a^{-1} \in H $ 2) $ a^{-1}=a $ 3) $ b^{-2}=b $ Then $(ab^{-1}ab)(ab)^{-1}=ab^{-1} \in H$. Thus $ab^{-1} \cdot b^{-1}a^{-1} = ab^{-2}a = aba \in H$. Hence $(ab)^{-1} \cdot (aba) = a \in H$. Thus we have that $a \in H$ and then multiplying $ab$ with $a^{-1}$ on the left gives that also $b$ is in $H$. Hence $ G \leq H$ and we now can conclude $G=H$. Part 2 Show $ \langle (ab)^{n} \rangle \leq Z(G)$ Since $G = \langle ab, ab^{-1}ab \rangle$ we will try to show $ab$ and $ab^{-1}ab$ commute with $(ab)^{n}$. Of course it is clear that $(ab)$ does. Now consider $$(ab^{-1}ab)\cdot (ab)^{n} = ab^{-1}ab(ab^{-1}ab)^{k}$$ (using a relation given in the presentation for $G$ in the question). $$=(ab^{-1}ab)^{k+1}=(ab^{-1}ab)^{k}ab^{-1}ab = (ab)^{n} \cdot ab^{-1}ab$$ Hence we see that $(ab)^{n}$ commutes with both of our generators of $G$ and thus $ \langle (ab)^{n} \rangle $ is central.
point outside a non-convex shape I have a non-convex shape (object) in black in the figure at the link. At the beginning, all red points are outside the shape. Next, I apply a random transformation to some points. This creates a new shape (yellow). What I want is to fill the outside of the shape with a specific color, but because of the transformation, some red points are now inside the shape. I would like to know how to find a correct red point, i.e., one outside the shape, to apply the filling algorithm. Here is the link to the image. http://postimage.org/image/sbjmgi1tr/
In that case, use a point-in-polygon algorithm. Then you can test each point to see whether it is inside your new polygon or not.
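A minimal sketch of the standard ray-casting (crossing-number) test, assuming the shape is stored as a list of polygon vertices (the polygon and test points below are made up for illustration):

```python
def point_in_polygon(x, y, poly):
    """Ray casting: count crossings of a rightward ray with the polygon's edges."""
    inside = False
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # this edge straddles the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# an L-shaped (non-convex) polygon
poly = [(0, 0), (4, 0), (4, 2), (2, 2), (2, 4), (0, 4)]
print(point_in_polygon(1, 1, poly))  # True  (inside)
print(point_in_polygon(3, 3, poly))  # False (outside)
```

You would then keep only the red points for which the test returns False.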
Showing pass equivalence of cinquefoil knot According to C.C. Adams, The Knot Book, p. 224, "every knot is either pass equivalent to the trefoil knot or the unknot". A pass move is the following: [figure of the pass move omitted]. Can someone show me how to show that the cinquefoil knot is pass equivalent to the unknot or the trefoil? I've been trying on paper but no luck. I don't see how pass moves apply here. Thanks.
The general method is demonstrated in Kauffman's book On Knots. Put the knot into a "band position" So that the Seifert surface is illustrated as a disk with twisted and intertangled bands attached. Then the orientations match those of your figure. You can pass one band over another. Your knot is the braid closure of $\sigma_1^5$. The Seifert surface is two disks with 5 twisted bands between them. Start by stretching the disks apart.
A question about a closed set Let $X = C([0,1])$. For all $f, g \in X$, we define the metric $d$ by $d(f,g) = \sup_x |f(x) - g(x)|$. Show that $S := \{ f\in X : f(0) = 0 \}$ is closed in $(X,d)$. I am trying to show that $X \setminus S$ is open but I don't know where to start. I want to add something more: I don't have much knowledge of analysis and am just self-taught; what I have learnt so far is just some basic topology and open/closed sets.
I'd try to show that $S$ contains all its limit points. To this end let $f$ be a limit point of $S$. Let $f_n$ be a sequence in $S$ converging to $f$ in the sup norm. Now we show that $f$ is also in $S$: By assumption, for $\varepsilon > 0$ you have that $\sup_{z \in [0,1]}|f_n(z) - f(z)| < \varepsilon$ for $n$ large enough. In particular, $|f_n(0) - f(0)| = | f(0)| < \varepsilon$ for $n$ large enough. Now let $\varepsilon \to 0$.
Graph for which certain induced subgraphs are cycles Let us call a graph G $nice$ if for any vertex $v \in G$, the induced subgraph on the vertices adjacent to $v$ is exactly a cycle. Is there anything that we can conclude about nice graphs? In particular, can we find a different (maybe simpler) but equivalent formulation for niceness?
Wrong Answer Given a finite connected "nice" graph, $G$, you can take all triples $\{a,b,c\}$ of nodes with $\{a,b\}$,$\{b,c\}$, and $\{a,c\}$ edges in the graph. Take these as $2$-simplexes, and stitch them together in the obvious way. The fact that $G$ is nice means that each edge must be on exactly two triangles. The fact that $G$ is nice also means that the interior of the union of the triangles that contain node $a$ will be homeomorphic to an open ball in $\mathbb R^2$. So this all shows that stitching these together will yield $G$ as a triangulation of a compact $2$-manifold. There is at least one "degenerate" case for which this is not true - the single-edge graph with two nodes. Depends on whether you consider a single node graph to be a cycle...
Quadratic equations that are unsolvable in any successive quadratic extensions of a field of characteristic 2 Show that for a field $L$ of characteristic $2$ there exist quadratic equations which cannot be solved by adjoining square roots of elements in the field $L$. In $\mathbb{Z_2}$ adjoining all square roots we obtain again $\mathbb{Z_2}$, but $t^2+t+1$ does not have roots in this field. I don't know how to do the general case.
If $L$ is a finite field of characteristic two, then consider the mapping $$ p:L\rightarrow L, x\mapsto x+x^2. $$ Because $F:x\mapsto x^2$ respects sums: $$F(x+y)=(x+y)^2=x^2+2xy+y^2=x^2+y^2=F(x)+F(y),$$ the mapping $p$ is a homomorphism of additive groups. We see that $x\in \mathrm{Ker}\ p$ if and only if $x=0$ or $x=1$. So $|\mathrm{Ker}\ p|=2$. Therefore (by one of the basic isomorphism theorems) $|\mathrm{Im}\ p|=|L|/2$. In particular, the mapping $p$ is not onto. Let $a\in L$ be such that it is not in the image of $p$. Then the quadratic equation $$ x^2+x+a=0 $$ has no zeros in $L$. Because the mapping $F$ is onto (its kernel is trivial), all the elements of $L$ have a square root in $L$. Thus adjoining square roots of elements of $L$ won't allow us to find roots of the above equation. In general the claim may not hold. By elementary Artin-Schreier theory we can actually show that quadratic polynomials of the prescribed type exist exactly when the above mapping $p$ is not onto (irrespective of whether $L$ is finite or not). This is because, unless the quadratic $r(x)$ is of the form $x^2+a$ (when adjoining square roots, if needed, will help), its splitting field is separable, hence cyclic Galois of degree two. Thus the cited theorem of Artin-Schreier theory says that the splitting field of $r(x)$ can be gotten by adjoining a root of a polynomial of the form $x^2+x+a=0$.
Solve $\sqrt{x-4} + 10 = \sqrt{x+4}$ Solve: $$\sqrt{x-4} + 10 = \sqrt{x+4}$$ Little help here? >.<
Square both sides, and you get $$x - 4 + 20\sqrt{x - 4} + 100 = x + 4$$ This simplifies to $$20\sqrt{x - 4} = -92$$ or just $$\sqrt{x - 4} = -\frac{92}{20}$$ Since square roots of numbers are always nonnegative, this cannot have a solution.
limit at infinity $f(x)=x+ax \sin(x)$ Let $f:\Bbb R\rightarrow \Bbb R$ be defined by $f(x)= x+ ax\sin x$. I would like to show that if $|a| < 1$, then $\lim\limits_{x\rightarrow\pm \infty}f(x)=\pm \infty$. Thanks for your time.
We start by looking at the case when $x$ is (large) positive. The idea is that if $|a|\lt 1$, then since $|\sin x|\lt 1$, the term $ax\sin x$, even if it happens to be negative, can't cancel out the large positiveness of the front term $x$. We now proceed more formally. Note that $|\sin x|\le 1$ for all $x$, so $x|a\sin x| \le x|a|$, and therefore $x+ax\sin x\ge x-|a|x$. So our function is $\ge (1-|a|)x$ when $x$ is positive. Since $|a|\lt 1$, the number $1-|a|$ is a positive constant. But $(1-|a|)x$ can be made arbitrarily large by taking $x$ large enough. Similarly, let $x$ be negative. Then $x+ax\sin x=x(1+a\sin x)$. But $1+a\sin x\ge 1-|a|$, and $(1-|a|)x$ can be made arbitrarily large negative by taking $x$ negative and of large enough absolute value.
The product of all elements in $G$ cannot belong to $H$ Let $G$ be a finite group and $H\leq G$ a subgroup of odd order such that $[G:H]=2$. Then the product of all elements of $G$ cannot belong to $H$. I assume $|H|=m$, so $|G|=2m$. Since $[G:H]=2$, we have $H\trianglelefteq G$, and half of the elements of the group are in $H$. Any hints? Thanks.
Consider the image of the product under the quotient map $G\to G/H\cong C_2$. Exactly $|G|-|H|=m$ of the factors map to the nontrivial element, and $m$ is odd, so the product maps to the nontrivial element of $C_2$ and hence lies outside $H$. (The order of the factors does not matter here, since $G/H$ is abelian.)
Internal Direct Sum Question I'm posed with the following problem. Given a vector space $\,V\,$ over a field (whose characteristic isn't $\,2$), we have a linear transformation from $\,V\,$ to itself. We have subspaces $$V_+=\{v\;:\; Tv=v\}\,\,,\,\, V_-=\{v\;:\; Tv=-v\}$$ I want to show that $\,V\,$ is the internal direct sum of these two subspaces. I have shown that their intersection is trivial, but can't seem to write an arbitrary element of $V$ as the sum of respective elements of the subspaces... Edit: forgot something important! $$\, T^2=I\,$$ Sorry!
Another approach: Lemma: In arbitrary characteristic, if $V$ is a $K$-vector space and $P:V\to V$ is an endomorphism with $P^2=\lambda P$, $\lambda\ne 0$, then $V=\ker P\oplus\ker (\lambda I-P)$. Proof: If $v\in \ker P\cap\ker (\lambda I-P)$, then $\lambda v = (\lambda I-P)v+Pv = 0$, hence $v=0$. Thus $\ker P\cap\ker (\lambda I-P)=0$. For $v\in V$ let $u=\lambda^{-1}Pv$ and $w=v-u$. Then clearly $v=u+w$ and $(\lambda I-P)u =Pv-\lambda^{-1}P^2v = 0$ and $Pw =Pv-Pu = Pv-\lambda^{-1}P^2v=0$, i.e. $u\in\ker(\lambda I-P)$ and $w\in \ker P$. We conclude that $V=\ker P\oplus\ker(\lambda I-P)$. Corollary: If $\operatorname{char} K\ne 2$, $V$ is a $K$-vector space and $T:V\to V$ is an endomorphism with $T^2=I$, then $V=V_+\oplus V_-$ (with $V_\pm$ defined as in the OP). Proof: Let $P=T+I$. Then $P^2 = T^2+2T+I^2=I+2T+I=2P$. Since $\lambda:=2\ne0$, the lemma applies. Here $\ker P=V_-$ and $\ker (2 I-P)=\ker(I-T)=V_+$.
Darboux Theorem Proof http://en.wikipedia.org/wiki/Darboux%27s_theorem_%28analysis%29 I'm having a bit of trouble understanding the proof of Darboux's Theorem on the intermediate value property of derivatives. Why should there be an extremum such that $g'(x) = 0$, given that $g'(a)>0$ and $g'(b)<0$?
Suppose that $g$ attains a local maximum at $a$. Then $$\lim_{x \to a+} \frac{g(x)-g(a)}{x-a} \leq 0.$$ Analogously, if $g$ attains a local maximum at $b$, then $$\lim_{x \to b-} \frac{g(x)-g(b)}{x-b}\geq 0.$$ But both contradict $g'(a)>0$ and $g'(b)<0$. Hence the maximum, which exists since $g$ is continuous on $[a,b]$, must lie in the open interval $(a,b)$, and at an interior maximum the derivative vanishes. Actually, you can visualize the setting: $g'(a)>0$ means that $g$ leaves $a$ in an increasing way, and then reaches $b$ in a decreasing way. Therefore $g$ must attain a maximum somewhere between $a$ and $b$.
Sum of the sequence What is the sum of the following sequence $$\begin{align*} (2^1 - 1) &+ \Big((2^1 - 1) + (2^2 - 1)\Big)\\ &+ \Big((2^1 - 1) + (2^2 - 1) + (2^3 - 1) \Big)+\ldots\\ &+\Big( (2^1 - 1)+(2^2 - 1)+(2^3 - 1)+\ldots+(2^n - 1)\Big) \end{align*}$$ I tried to solve this. I reduced the expression to the following form $$n(2^1) + (n-1)\cdot2^2 + (n-2)\cdot2^3 +\ldots$$ but I'm not able to simplify it further. Can anyone help me work this out? By the way, it's not a homework problem; it arose from a puzzle. Thanks in advance.
Let's note that $$(2^1 - 1) + (2^2 - 1) + \cdots + (2^k - 1) = 2^{k+1} - 2-k$$ where we have used the geometric series. Thus, the desired sum is actually $$\sum_{k=1}^n{2^{k+1}-2-k}$$. As this is a finite sum, we can evaluate each of the terms separately. We get the sum is $$2\left(\frac{2^{n+1}-1}{2-1}-1\right) - 2n- \frac{n(n+1)}{2} = 2^{n+2}-4 - 2n-\frac{n(n+1)}{2} $$
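A quick brute-force check of the closed form (my own sketch, plain Python):

```python
# compare the direct double sum with the closed form for small n
for n in range(1, 10):
    direct = sum(sum(2**i - 1 for i in range(1, k + 1)) for k in range(1, n + 1))
    closed = 2**(n + 2) - 4 - 2*n - n*(n + 1)//2
    assert direct == closed
print("closed form agrees for n = 1..9")
```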
Can you help me understand this definition for the limit of a sequence? I'm reading the textbook "Calculus - Early Transcendentals" by Jon Rogawski for my Calculus III university course. I'm trying for the life of me to understand the wording of this definition, and I wonder if it can be said in simpler terms to get the basic point across. A sequence $a_n$ converges to a limit $L$, and we write $$\lim_{n\to\infty} a_n=L$$ if, for every $\epsilon > 0$, there is a number $M$ such that $|a_n - L| < \epsilon$ for all $n > M$. If no limit exists, we say that ${a_n}$ diverges. It looks like a very straightforward rule, but I just can't make sense of all the variables in the definition.
Let’s call $\{a_n,a_{n+1},a_{n+2},\dots\}$ the $n$-tail of the sequence. Now suppose that I give you a target around the number $L$: I pick some positive leeway $\epsilon$ and want you to hit the interval $(L-\epsilon,L+\epsilon)$. We’ll say that the sequence hits that target if some tail of the sequence lies entirely inside the interval. For instance, if $a_n=\frac1{2^n}$, the $4$-tail of the sequence hits the target $\left(-\frac1{10},\frac1{10}\right)$ with leeway $\frac1{10}$ around $0$: the $4$-tail is $$\left\{\frac1{2^4},\frac1{2^5},\frac1{2^6},\dots\right\}=\left\{\frac1{16},\frac1{32},\frac1{64},\dots\right\}\;,$$ and all of these fractions are between $-\frac1{10}$ and $\frac1{10}$. It’s not hard to see that no matter how small a leeway $\epsilon$ I choose, some tail of that sequence hits the target $(-\epsilon,\epsilon)$: I just have to find an $n$ large enough so that $\frac1{2^n}<\epsilon$, and then the $n$-tail will hit the target. Of course, in my example the $4$-tail of the sequence also hits the target $\left(0,\frac18\right)$ with leeway $\frac1{16}$ around $\frac1{16}$. However, there are smaller targets around $\frac1{16}$ that aren’t hit by any tail of the sequence. For instance, no tail hits the target $\left(\frac1{16}-\frac1{32},\frac1{16}+\frac1{32}\right)=\left(\frac1{32},\frac3{32}\right)$: no matter how big $n$ is, $$\frac1{2^{n+6}}\le\frac1{2^6}=\frac1{64}\;,$$ so $\frac1{2^{n+6}}$ is in the $n$-tail but not in the target. When we say that $\lim\limits_{n\to\infty}a_n=L$, we’re saying that no matter how small you set the leeway $\epsilon$ around $L$, the centre of the target, some tail of the sequence hits that tiny target. Thus, $\lim\limits_{n\to\infty}\frac1{2^n}=0$, and $\lim\limits_{n\to\infty}\frac1{2^n}\ne\frac1{16}$: no matter how tiny a target centred on $0$ you set, there is a tail of the sequence that hits it, but I just showed a target around $\frac1{16}$ that isn’t hit by any tail of the sequence. One way to sum this up: $\lim\limits_{n\to\infty}a_n=L$ means that no matter how small an open interval you choose around the number $L$, there is some tail of the sequence that lies entirely inside that interval. You may have to ignore a huge number of terms of the sequence before that tail, but there is a tail small enough to fit.
Prove with MATLAB whether a set of n points is coplanar I need to find a way to prove if a set of n points are coplanar. I found this elegant way on one of the MATLAB forums but I don't understand the proof. Can someone help me understand the proof please? " The most insightful method of solving your problem is to find the mean square orthogonal distance from the set of points to the best-fitting plane in the least squares sense. If that distance is zero, then the points are necessarily coplanar, and otherwise not. Let x, y , and z be n x 1 column vectors of the three coordinates of the point set. Subtract from each, their respective mean values to get V, and form from it the positive definite matrix A, V = [x-mean(x),y-mean(y),z-mean(z)]; A = (1/n)*V'*V; Then from [U,D] = eig(A); select the smallest eigenvalue in the diagonal matrix D. This is the mean square orthogonal distance of the points from the best fitting plane and the corresponding eigenvector of U gives the coefficients in the equation of that plane, along with the fact that it must contain the mean point (mean(x),mean(y),mean(z))." Here is the link from where I obtained this information. http://www.mathworks.com/matlabcentral/newsreader/view_thread/25094
If you subtract the mean point from all the points and put the results as columns in a matrix, the resulting matrix will have rank at most 2 exactly when the points are coplanar. If such a (mean-centered) matrix is denoted $\mathbf A$, then $\mathbf{AA}^T$ will have one eigenvalue equal to, or close to, 0. Consider that V*U = 0 yields the equation of the plane. Then consider that V'*V*U = V'*0 = 0 can be interpreted as A*U = 0*U, which by definition makes U the eigenvector associated with the eigenvalue 0 of A.
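Here is a rough NumPy transcription of the same eigenvalue test (my own sketch; the names are mine, not from the original thread):

```python
import numpy as np

def coplanarity_residual(points):
    """Return (smallest eigenvalue of A = V'V/n, corresponding eigenvector).
    The eigenvalue is the mean square distance to the best-fitting plane;
    the eigenvector is that plane's normal."""
    V = points - points.mean(axis=0)   # subtract the mean point
    A = V.T @ V / len(points)          # 3x3 positive semidefinite matrix
    evals, evecs = np.linalg.eigh(A)   # eigenvalues in ascending order
    return evals[0], evecs[:, 0]

pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [2, 3, 0]], float)
res, normal = coplanarity_residual(pts)
print(res < 1e-12, normal)             # True, normal ~ (0, 0, +-1)
```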
$p$ is polynomial, set bounded open set with at most $n$ components Assume $p$ is a non-constant polynomial of degree $n$. Prove that the set $\{z:|p(z)| \lt 1\}$ is a bounded open set with at most $n$ connected components. Give an example to show the number of components can be less than $n$. Thanks. EDIT: Thanks, I meant connected components.
Hints: * *For boundedness: Show that $|p(z)| \to \infty$ as $|z|\to \infty$ *For openness: The preimage $p^{-1}(A)$ of an open set $A$ under a continuous function $p$ is again open. Polynomials are continuous, so just write your set as a preimage. *For the connected components: Recall fundamental theorem of algebra. What may these components have to do with polynomial roots? How many of them can a polynomial have? The example for fewer components is straightforward, just think of it geometrically (draw a curve).
How to solve $x^3=-1$? How to solve $x^3=-1$? I got following: $x^3=-1$ $x=(-1)^{\frac{1}{3}}$ $x=\frac{(-1)^{\frac{1}{2}}}{(-1)^{\frac{1}{6}}}=\frac{i}{(-1)^{\frac{1}{6}}}$...
Observe that $(e^{a i})^3 = e^{3 a i}$ and $-1=e^{\pi i}$. So you need $3a \equiv \pi \pmod{2\pi}$, which gives the three cube roots $e^{i\pi/3}$, $e^{i\pi}=-1$ and $e^{5i\pi/3}$.
How to prove that this function is continuous at zero? Assume that $g : [0, \infty) \rightarrow \mathbb R$ is continuous and $\phi :\mathbb R \rightarrow \mathbb R$ is continuous with compact support with $0\leq \phi(x) \leq 1$, $\phi(x)=1$ for $ x \in [0,1]$ and $\phi(x)=0$ for $x\geq 2$. I wish to prove that $$ \lim_{x \rightarrow 0^-} \sum_{n=1}^\infty \frac{1}{2^n} \phi(-nx) g(-nx)=g(0). $$ I try in the following way. Let $f(x)=\sum_{n=1}^\infty \frac{1}{2^n} \phi(-nx) g(-nx)$ for $x \leq 0$. For $\varepsilon>0$ there exists $n_0 \in \mathbb N$ such that $\sum_{n\geq n_0} \frac{1}{2^n} \leq \frac{\varepsilon}{2|g(0)|}$. For $x<0$ there exists $m(x) \in \mathbb N$, $m(x)> n_0$ such that $\phi(-nx)=0$ for $n>m(x)$. Then $$ |f(x)-f(0)|\leq \sum_{n=1}^\infty \frac{1}{2^n} |\phi(-nx)g(-nx)-\phi(0)g(0|= \sum_{n=1}^{m(x)} +\sum_{n\geq m(x)} $$ (because $\phi(0)=1$, $\sum_{n=1}^\infty \frac{1}{2^n}=1$). The second term is majorized by $\frac{\varepsilon}{2}$ , but I don't know what to do with the first one because $m(x)$ depend on $x$.
Let $h(x)=\phi(x)g(x)$. Then $h\colon[0,\infty)\to\mathbb R$ is continuous and bounded by some $M$ and $h(x)=0$ for $x\ge2$. Given $\epsilon>0$, find $\delta$ such that $x<\delta$ implies $|h(x)-h(0)|<\frac\epsilon3$. Then for $m\in \mathbb N$ $$\sum_{n=1}^\infty \frac1{2^n} h(nx)-h(0)=\sum_{n=1}^{m} \frac1{2^n} (h(nx)-h(0))+\sum_{n=m+1}^\infty \frac1{2^n} h(nx)-\frac1{2^m}h(0)$$ If $m<\frac\delta x$, then $$\left |\sum_{n=1}^{m} \frac1{2^n} (h(nx)-h(0))\right |<\sum_{n=1}^\infty \frac 1{2^n}\frac\epsilon3=\frac\epsilon3.$$ For the middle part, $$\left|\sum_{n=m+1}^\infty \frac1{2^n} h(nx)\right|<\frac M{2^m}$$ and finally $\left|\frac1{2^m}h(0)\right|\le \frac M{2^m}$. If $m>\log_2(\frac {3M}\epsilon)$, we find that $$\left|\sum_{n=1}^\infty \frac1{2^n} h(nx)-h(0)\right|<\epsilon$$ for all $x$ with $0<x<\frac{\delta}{m}$.
I need some help solving a Dirichlet problem using a conformal map I'm struggling here, trying to understand how to do this, and after 4 hours of reading I still can't get my head around the concept and how to use it. Basically, I have this problem: $A=\{(x,y) : x\geq 0,\ 0\leq y\leq \pi\}$, with $U(x,0) = B$, $U(x,\pi) = C$, and $U_x(0,y) = 0$. I know that inside $A$, the Laplacian of $U$ is $0$. So I have to find $U$, and $U$ must meet those requirements. I don't have to use any form of differential equation. I'm supposed to find some sort of conformal transformation in order to make the domain a little easier to work with, and then I should just read off a result. The problem is, I think I don't know how to do that. If any of you could help me understand how to solve this one, I might get the main idea and could try to reproduce the solution in similar cases. Thank you very much, and I'm sorry for my English.
The domain is simple enough already. Observe that there is a function of the form $U=\alpha y+\beta$ which satisfies the given conditions: it is harmonic, $U(x,0)=B$ forces $\beta=B$, $U(x,\pi)=C$ forces $\alpha=(C-B)/\pi$, and $U_x\equiv 0$ holds automatically.
Compute: $\sum\limits_{n=1}^{\infty}\frac {1\cdot 3\cdots (2n-1)} {2\cdot 4\cdots (2n)\cdot (2n+1)}$ Compute the sum: $$\sum_{n=1}^{\infty}\dfrac {1\cdot 3\cdots (2n-1)} {2\cdot 4\cdots (2n)\cdot (2n+1)}$$ At the moment, I only know that it's convergent and this is not hard to see if you look at the answers here I received for other problem with a similar series. For the further steps I need some hints if possible. Thanks!
Starting with the power series derived using the binomial theorem, $$ (1-x)^{-1/2}=1+\tfrac12x+\tfrac12\tfrac32x^2/2!+\tfrac12\tfrac32\tfrac52x^3/3!+\dots+\tfrac{1\cdot3\cdot5\cdots(2n-1)}{2\cdot4\cdot6\cdots2n}x^n+\cdots $$ and integrating, we get the series for $$ \sin^{-1}(x)=\int_0^x(1-t^2)^{-1/2}\mathrm{d}t=\sum_{n=0}^\infty\tfrac{1\cdot3\cdot5\cdots(2n-1)}{2\cdot4\cdot6\cdots2n}\frac{x^{2n+1}}{2n+1} $$ Setting $x=1$, we get $$ \sum_{n=1}^\infty\tfrac{1\cdot3\cdot5\cdots(2n-1)}{2\cdot4\cdot6\cdots2n}\frac{1}{2n+1}=\sin^{-1}(1)-1=\frac\pi2-1 $$
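A quick numeric check of the value (my own sketch; the series converges slowly, so don't expect many digits):

```python
from math import pi

total, term = 0.0, 1.0
for n in range(1, 1_000_000):
    term *= (2*n - 1) / (2*n)   # ratio of the odd/even products
    total += term / (2*n + 1)
print(total, pi/2 - 1)           # agree to about 3 decimal places
```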
Probability of throwing multiple dice of at least a given face with a set of dice I know how to calculate the probability of throwing at least one die of a given face with a set of dice, but can someone tell me how to calculate more than one (e.g., at least two)? For example, I know that the probability of throwing at least one 4-or-higher with three 6-sided dice is 189/216, or 1 - (3/6 x 3/6 x 3/6). How do I calculate throwing at least two 4s with four 6-sided dice?
You are asking for the distribution of the number $X_n$ of successes in $n$ independent trials, where each trial is a success with probability $p$. Almost by definition, this distribution is binomial with parameters $(n,p)$, that is, for every $0\leqslant k\leqslant n$, $$ \mathrm P(X_n=k)={n\choose k}\cdot p^k\cdot(1-p)^{n-k}. $$ The probability of throwing at least two 4s with four 6-sided dice is $\mathrm P(X_4\geqslant2)$ with $p=\frac16$. Using the identity $\mathrm P(X_4\geqslant2)=1-\mathrm P(X_4=0)-\mathrm P(X_4=1)$, one gets $$ \mathrm P(X_4\geqslant2)=1-1\cdot\left(\frac16\right)^0\cdot\left(\frac56\right)^4-4\cdot\left(\frac16\right)^1\cdot\left(\frac56\right)^3=\frac{19}{144}. $$
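For completeness, a small computational cross-check (my own sketch, using the binomial formula directly):

```python
from math import comb

def at_least(k, n, p):
    """P(X_n >= k) when X_n is binomial(n, p)."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

print(at_least(2, 4, 1/6))   # 0.13194..., i.e. 19/144
```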
Picking out columns from a matrix using MAGMA How do I form a new matrix from a given one by picking out some of its columns, using MAGMA?
You can actually do this really easily in Magma using the ColumnSubmatrix command, no looping necessary. You can use this in a few ways. For example, if you have a matrix $A$ and you want $B$ to be made up of a selection of columns: 1st, 2nd, $\ldots$, 5th columns: B := ColumnSubmatrix(A, 5); 3rd, 4th, $\ldots$, 7th columns: B := ColumnSubmatrix(A, 3, 4); (since 7=3+4) OR ColumnSubmatrixRange(A, 3, 7); 2nd, 5th, 8th columns: This is trickier. Magma doesn't let you do this cleanly. But you can select rows 2, 5, and 8 of a matrix individually, so transpose first. You can obviously replace [2,5,8] with any arbitrary sequence. Transpose(Matrix(Transpose(A)[[2,5,8]]));
Completeness and Topological Equivalence How can I show that if every metric topologically equivalent to a given metric is complete, then the metric space is compact? Any help will be appreciated.
I encountered this result in Queffélec's book Topologie. The proof is due to A. Ancona. We can assume WLOG that $d\leq 1$; otherwise, replace $d$ by $\frac d{1+d}$. We assume that $(X,d)$ is not compact; then we can find a sequence $\{x_n\}$ without accumulation points. We define $$d'(x,y):=\sup_{f\in B}|f(x)-f(y)|,$$ where $B=\bigcup_{n\geq 1}B_n$ and $$B_n:=\{f\colon X\to \Bbb R,|f(x)-f(y)|\leq \frac 1nd(x,y)\mbox{ and }f(x_j)=0,j>n\}.$$ Since $d'\leq d$, we have to show that $Id\colon (X,d')\to (X,d)$ is continuous. We fix $a\in X$, and by the assumption on $\{x_k\}$, for all $\varepsilon>0$ we can find $n_0$ such that $d(x_k,a)>\varepsilon$ whenever $k\geq n_0$. We define $$f(x):=\max\left(\frac{\varepsilon -d(x,a)}{n_0},0\right).$$ By the inequality $|\max(0,s)-\max(0,t)|\leq |s-t|$, we get that $f\in B_{n_0}$. This gives the equivalence of the two metrics. Now we check that $\{x_n\}$ is Cauchy for $d'$. Fix $\varepsilon>0$, $N\geq\frac 1{\varepsilon}$ and $p,q\geq N$. Let $f\in B$, and $n$ such that $f\in B_n$. * *If $n\geq N$, then $|f(x_p)-f(x_q)|\leq \frac 1nd(x_p,x_q)\leq \frac 1n\leq \varepsilon$; *if $n<N$ then $|f(x_p)-f(x_q)|=0$. In both cases $|f(x_p)-f(x_q)|\leq\varepsilon$, hence $d'(x_p,x_q)\leq\varepsilon$: the sequence is $d'$-Cauchy, yet it has no accumulation point and so cannot converge, contradicting the completeness of $d'$.
Prove that $x^2 + 5xy+7y^2 \ge 0$ for all $x,y \in\mathbb{R}$ This is probably really easy for all of you, but my question is how do I prove that $x^2 + 5xy+7y^2 \ge 0$ for all $x,y\in\mathbb{R}$ Thanks for the help!
$$x^2+5xy+7y^2=\left(x+\frac{5y}2\right)^2 + \frac{3y^2}4\ge 0$$ (not $>0$ as $x=y=0$ leads to 0).
Bijection between the set of classes of positive definite quadratic forms and the set of classes of quadratic numbers in the upper half plane Let $\Gamma = SL_2(\mathbb{Z})$. Let $\mathfrak{F}$ be the set of binary quadratic forms over $\mathbb{Z}$. Let $f(x, y) = ax^2 + bxy + cy^2 \in \mathfrak{F}$. Let $\alpha = \left( \begin{array}{ccc} p & q \\ r & s \end{array} \right)$ be an element of $\Gamma$. We write $f^\alpha(x, y) = f(px + qy, rx + sy)$. Since $(f^\alpha)^\beta$ = $f^{\alpha\beta}$, $\Gamma$ acts on $\mathfrak{F}$. Let $f, g \in \mathfrak{F}$. If $f$ and $g$ belong to the same $\Gamma$-orbit, we say $f$ and $g$ are equivalent. Let $f = ax^2 + bxy + cy^2 \in \mathfrak{F}$. We say $D = b^2 - 4ac$ is the discriminant of $f$. It is easy to see that $D \equiv 0$ (mod $4$) or $D \equiv 1$ (mod $4$). If $D$ is not a square integer and gcd($a, b, c) = 1$, we say $f$ is primitive. If $D < 0$ and $a > 0$, we say $f$ is positive definite. We denote the set of positive definite primitive binary quadratic forms of discriminant $D$ by $\mathfrak{F}^+_0(D)$. By this question, $\mathfrak{F}^+_0(D)$ is $\Gamma$-invariant. We denote the set of $\Gamma$-orbits on $\mathfrak{F}^+_0(D)$ by $\mathfrak{F}^+_0(D)/\Gamma$. Let $\mathcal{H} = \{z \in \mathbb{C}; Im(z) > 0\}$ be the upper half complex plane. Let $\alpha = \left( \begin{array}{ccc} p & q \\ r & s \end{array} \right)$ be an element of $\Gamma$. Let $z \in \mathcal{H}$. We write $$\alpha z = \frac{pz + q}{rz + s}$$ It is easy to see that $\alpha z \in \mathcal{H}$ and $\Gamma$ acts on $\mathcal{H}$ from the left. Let $\alpha \in \mathbb{C}$ be an algebraic number. If the minimal polynomial of $\alpha$ over $\mathbb{Q}$ has degree $2$, we say $\alpha$ is a quadratic number. Let $\alpha$ be a quadratic number. There exists a unique polynomial $ax^2 + bx + c \in \mathbb{Z}[x]$ having $\alpha$ as a root such that $a > 0$ and gcd$(a, b, c) = 1$. $D = b^2 - 4ac$ is called the discriminant of $\alpha$. Let $\alpha \in \mathcal{H}$ be a quadratic number. There exists a unique polynomial $ax^2 + bx + c \in \mathbb{Z}[x]$ having $\alpha$ as a root such that $a > 0$ and gcd$(a, b, c) = 1$. Let $D = b^2 - 4ac$. Clearly $D < 0$ and $D$ is not a square integer. It is easy to see that $D \equiv 0$ (mod $4$) or $D \equiv 1$ (mod $4$). Conversely, suppose $D$ is a negative non-square integer such that $D \equiv 0$ (mod $4$) or $D \equiv 1$ (mod $4$). Then there exists a quadratic number $\alpha \in \mathcal{H}$ whose discriminant is $D$. We denote by $\mathcal{H}(D)$ the set of quadratic numbers of discriminant $D$ in $\mathcal{H}$. It is easy to see that $\mathcal{H}(D)$ is $\Gamma$-invariant. Hence $\Gamma$ acts on $\mathcal{H}(D)$ from the left. We denote the set of $\Gamma$-orbits on $\mathcal{H}(D)$ by $\mathcal{H}(D)/\Gamma$. Let $f = ax^2 + bxy + cy^2 \in \mathfrak{F}^+_0(D)$. We denote $\phi(f) = (-b + \sqrt{D})/2a$, where $\sqrt{D} = i\sqrt{|D|}$. It is clear that $\phi(f) \in \mathcal{H}(D)$. Hence we get a map $\phi\colon \mathfrak{F}^+_0(D) \rightarrow \mathcal{H}(D)$. My question Is the following proposition true? If yes, how do we prove it? Proposition Let $D$ be a negative non-square integer such that $D \equiv 0$ (mod $4$) or $D \equiv 1$ (mod $4$). Then the following assertions hold. (1) $\phi\colon \mathfrak{F}^+_0(D) \rightarrow \mathcal{H}(D)$ is a bijection. (2) $\phi(f^\sigma) = \sigma^{-1}\phi(f)$ for $f \in \mathfrak{F}^+_0(D), \sigma \in \Gamma$. Corollary $\phi$ induces a bijection $\mathfrak{F}^+_0(D)/\Gamma \rightarrow \mathcal{H}(D)/\Gamma$.
Proof of (1) We define a map $\psi\colon \mathcal{H}(D) \rightarrow \mathfrak{F}^+_0(D)$ as follows. Let $\theta \in \mathcal{H}(D)$. $\theta$ is a root of the unique polynomial $ax^2 + bx + c \in \mathbb{Z}[x]$ such that $a > 0$ and gcd$(a, b, c) = 1$. $D = b^2 - 4ac$. We define $\psi(\theta) = ax^2 + bxy + cy^2$. Clearly $\psi$ is the inverse map of $\phi$. Proof of (2) Let $f = ax^2 + bxy + cy^2$. Let $\sigma = \left( \begin{array}{ccc} p & q \\ r & s \end{array} \right) \in \Gamma$. Let $f^\sigma = kx^2 + lxy + my^2$. Let $\theta = \phi(f)$. Let $\gamma = \sigma^{-1}\theta$. Then $\theta = \sigma \gamma$. $a\theta^2 + b\theta + c = 0$. Hence $a(p\gamma + q)^2 + b(p\gamma + q)(r\gamma + s) + c(r\gamma + s)^2 = 0$. The left hand side of this equation is $f(p\gamma + q, r\gamma + s) = k\gamma^2 + l\gamma + m$. Since $f^\sigma$ is positive definite by this question, $k \gt 0$. Since gcd$(k, l, m)$ = 1 by the same question, $\psi(\gamma) = f^\sigma$. Hence $\phi(f^\sigma) = \gamma = \sigma^{-1}\theta$. This proves (2).
When is the set statement: (A⊕B) = (A ∪ B) true? "When is the set statement: (A⊕B) = (A ∪ B) a true statement? Is it true sometimes, never, or always? If it is sometimes, state the cases where it is." How would you go about finding the answer to the question or ones like this one? Thanks for your time!
If I've made the right assumptions in my comment above, a good way to approach this problem is by drawing a Venn diagram. Here's $A\oplus B$: [the two crescents $A\setminus B$ and $B\setminus A$ are shaded, but not the overlap]. Here's $A\cup B$: [both circles are shaded completely]. So, the area that's filled in in $A\cup B$ but not in $A\oplus B$ is $A\cap B$. What do I need to be true about $A\cap B$ to make the two Venn diagrams have the same area filled in?
A matrix is diagonalizable, so what? I mean, you can say it's similar to a diagonal matrix, it has $n$ independent eigenvectors, etc., but what's the big deal of having diagonalizability? Can I solidly perceive the differences between two linear transformations, one of which is diagonalizable and the other not, either by visualization or figurative description? For example, invertibility can be perceived, because a non-invertible transformation must compress the space in one or more directions to $0$. Like crushing the space flat.
Up to change in basis, there are only 2 things a matrix can do. * *It can act like a scaling operator where it takes certain key vectors (eigenvectors) and scales them, or *it can act as a shift operator where it takes a first vector, sends it to a second vector, the second vector to a third vector, and so forth, then sends the last vector in a group to zero. It may be that for some collection of vectors it does scaling whereas for others it does shifting, or it can also do linear combinations of these actions (block scaling and shifting simultaneously). For example, the matrix $$ P \begin{bmatrix} 4 & & & & \\ & 3 & 1 & & \\ & & 3 & 1 &\\ & & & 3 & \\ & & & & 2 \end{bmatrix} P^{-1} = P\left( \begin{bmatrix} 4 & & & & \\ & 3 & & & \\ & & 3 & &\\ & & & 3 & \\ & & & & 2 \end{bmatrix} + \begin{bmatrix} 0& & & & \\ & 0& 1 & & \\ & & 0& 1 &\\ & & & 0& \\ & & & &0 \end{bmatrix}\right)P^{-1} $$ acts as the combination of a scaling operator on all the columns of $P$ * *$p_1 \rightarrow 4 p_1$, $p_2 \rightarrow 3 p_2$, ..., $p_5 \rightarrow 2 p_5$, plus a shifting operator on the 2nd, 3rd and 4th columns of $P$: *$p_4 \rightarrow p_3 \rightarrow p_2 \rightarrow 0$. This idea is the main content behind the Jordan normal form. Being diagonalizable means that it does not do any of the shifting, and only does scaling. For a more thorough explanation, see this excellent blog post by Terry Tao: http://terrytao.wordpress.com/2007/10/12/the-jordan-normal-form-and-the-euclidean-algorithm/
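A tiny NumPy illustration of the shift behaviour (my own sketch; the matrix is a $3\times3$ Jordan block with eigenvalue $3$, i.e. the middle block of the example above, one size smaller):

```python
import numpy as np

J = 3 * np.eye(3) + np.diag([1., 1.], k=1)  # Jordan block: scale by 3 plus a shift
N = J - 3 * np.eye(3)                       # the pure (nilpotent) shift part
e1, e2, e3 = np.eye(3)
print(N @ e3)                     # -> e2
print(N @ e2)                     # -> e1
print(N @ e1)                     # -> 0 : the shift sends the first vector to zero
print(np.allclose(N @ N @ N, 0))  # True: the shift dies out after 3 steps
```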
Isomorphic subgroups, finite index, infinite index Is it possible to have a group $G,$ which has two different, but isomorphic subgroups $H$ and $H',$ such that one is of finite index, and the other one is of infinite index? If not, why is that not possible. If there is a counterexample please give one.
Yes. For $n\in\Bbb N$ let $G_n$ be a copy of $\Bbb Z/2\Bbb Z$, and let $G=\prod_{n\in\Bbb N}G_n$ be the direct product. Then $H_0=\{0\}\times\prod_{n>0}G_n$ is isomorphic to $H_1=\prod_{n\in\Bbb N}A_n$, where $A_n=\{0\}$ if $n$ is odd and $A_n=G_n$ if $n$ is even. Clearly $[G:H_0]=2$ and $[G:H_1]$ is infinite. Of course $H_0$ and $H_1$ are both isomorphic to $G$, so I could have used $G$ instead of $H_0$, but I thought that you might like the subgroups to be proper.
Detect when a point belongs to a bounding box with distances I have a box with known bounding coordinates (latitudes and longitudes): latN, latS, lonW, lonE. I have a mystery point P with unknown coordinates. The only data available is the distance from P to any point p of my choosing: dist(p,P). I need a function that tells me whether this point is inside or outside the box.
The distance measurement from any point gives you a circle around that point as a locus of possible positions of $P$. Make any such measurement from a point $A$. If the question is not settled after this (i.e. if the circle crosses the boundary of the rectangle), make a measurement from any other point $B$. The two intersecting circles leave only two possibilities for the location of $P$. If you are lucky, both options are inside or both outside the rectangle and we are done. Otherwise, a third measurement taken from any point $C$ not collinear with $A$ and $B$ will settle the question of the exact position of $P$ (and after that we easily see if $P$ is inside or not). One may wish to choose the first point $A$ in an "optimal" fashion such that the probability of a definite answer is maximized. While this requires knowledge about some a priori distribution of where $P$ might be, the center of the rectangle seems like a good idea. The result is inconclusive only if the measured distance is between half the smallest side of the rectangle and half the diagonal of the rectangle.
Proving that $\sum_{j=1}^{n}|z_{j}|^{2}\le 1$ when $|\sum_{j=1}^{n}z_{j}w_{j}| \le 1$ for all $\sum_{j=1}^{n}|w_{j}|^{2}\le 1$ The problem is like this: Fix $n$ a positive integer. Suppose that $z_{1},\cdots,z_{n} \in \mathbb C$ are complex numbers satisfying $|\sum_{j=1}^{n}z_{j}w_{j}| \le 1$ for all $w_{1},\cdots,w_{n} \in \mathbb C$ such that $\sum_{j=1}^{n}|w_{j}|^{2}\le 1$. Prove that $\sum_{j=1}^{n}|z_{j}|^{2}\le 1$. For this problem, I so far have that $|z_{i}|^{2}\le 1$ for all $i$ by plugging $(0, \cdots,0,1,0,\cdots,0)$ for $w=(w_{1},\cdots,w_{n} )$ Also, by plugging $(1/\sqrt{n},\cdots,1/\sqrt{n})$ for $w=(w_{1},\cdots,w_{n} )$ we could have $|z_{1}+\cdots+z_{n}|\le \sqrt{n}$ I wish we can conclude that $|z_{i}|\le 1/\sqrt{n}$ for each $i$. Am I in the right direction? Any comment would be grateful!
To have a chance of success, one must choose a family $(w_j)_j$ adapted to the input $(z_j)_j$. If $z_j=0$ for every $j$, the result holds. Otherwise, try $w_j=c^{-1}\bar z_j$ with $c^2=\sum\limits_{k=1}^n|z_k|^2$: then $\sum\limits_{j=1}^n|w_j|^2=1$ and $\sum\limits_{j=1}^n z_jw_j=c$, so the hypothesis gives $c\leq 1$, that is, $\sum\limits_{k=1}^n|z_k|^2\leq 1$.
Prove that $\frac{1}{a}+\frac{1}{b}+\frac{1}{c}\geq \frac{9}{a+b+c} : (a, b, c) > 0$ Please help me for prove this inequality: $$\frac{1}{a}+\frac{1}{b}+\frac{1}{c}\geq \frac{9}{a+b+c} : (a, b, c) > 0$$
The inequality can be written as: $$\left(a+b+c\right) \cdot \left(\frac{1}{a}+\frac{1}{b}+\frac{1}{c}\right) \geq 9 .$$ And now we apply the $AM-GM$ inequality to both parentheses. So: $\displaystyle \frac{a+b+c}{3} \geq \sqrt[3]{abc} \tag{1}$ and $\displaystyle \frac{\frac{1}{a}+\frac{1}{b}+\frac{1}{c}}{3} \geq \frac{1}{\sqrt[3]{abc}} \tag{2}.$ Now multiplying relation $(1)$ with relation $(2)$ we obtain: $$\left(\frac{a+b+c}{3}\right) \cdot \left(\frac{\frac{1}{a}+\frac{1}{b}+\frac{1}{c}}{3}\right) \geq \frac{\sqrt[3]{abc}}{\sqrt[3]{abc}}=1. $$ So we obtain our inequality.
What's the probability that the other side of the coin is gold? 4 coins are in a bucket: 1 is gold on both sides, 1 is silver on both sides, and 2 are gold on one side and silver on the other side. I randomly grab a coin from the bucket and see that the side facing me is gold. What is the probability that the other side of the coin is gold? I had thought that the probability is $\frac{1}{3}$ because there are 3 coins with at least one side of gold, and only 1 of these 3 coins can be gold on the other side. However, I suspect that the sides might be unique, which derails my previous logic.
50%. GIVEN that the first side you see is gold, what is the chance that you have the double-gold coin? Assume you do this experiment a hundred times. In 50% of the cases you pull out a coin and see a gold side; the other 50% you see a silver side. In the latter case we have to discard the experiment and only count the cases where we see gold. There is initially a 25% chance of double-gold, 25% chance double-silver, and 50% chance half-and-half. We discard the 25 cases where you draw the double-silver, and the 25 cases where you draw a half-and-half silver-side up. So of the 50 cases remaining, half are double-gold and half are gold-up silver-down. Hence given that you have drawn a coin and see gold on top, there is a 50% there is gold on the bottom.
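The argument is easy to verify empirically (my own sketch):

```python
import random

coins = [("G", "G"), ("S", "S"), ("G", "S"), ("G", "S")]
gold_up = both_gold = 0
for _ in range(100_000):
    coin = random.choice(coins)
    up, down = random.sample(coin, 2)   # a random side faces up
    if up == "G":
        gold_up += 1
        both_gold += (down == "G")
print(both_gold / gold_up)              # ~ 0.5
```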
How can I get sequence $4,4,2,4,4,2,4,4,2\ldots$ into equation? How can I write an equation that expresses the nth term of the sequence: $$4, 4, 2, 4, 4, 2, 4, 4, 2, 4, 4, 2,\ldots$$
$$ f(n) = \begin{cases} 4 \text{ if } n \equiv 0 \text{ or } 1 \text{ (mod 3)}\\ 2 \text{ if } n \equiv 2 \text{ (mod 3)} \end{cases} $$
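If a single formula without cases is preferred, one possibility (easily checked against $n=0,1,2$) is $$f(n)=\frac{10}{3}+\frac{2}{3}\cos\frac{2\pi n}{3}+\frac{2}{\sqrt3}\sin\frac{2\pi n}{3}.$$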
Example of a bijection between two sets I am trying to come up with a bijective function $f$ between the set : $\left \{ 2\alpha -1:\alpha \in \mathbb{N} \right \}$ and the set $\left \{ \beta\in \mathbb{N} :\beta\geq 9 \right \}$, but I couldn't figure out how to do it. Can anyone come up with such a bijective function? Thanks
Given some element $a$ of $\{ 2\alpha -1 \colon \alpha \in \mathbb{N} \}$, try the function $f(a)=\frac{a+1}{2}+8$, which sends $1\mapsto 9$, $3\mapsto 10$, $5\mapsto 11$, and so on (this assumes $\mathbb{N}$ starts at $1$; if your convention is $0\in\mathbb{N}$, use $f(a)=\frac{a+1}{2}+9$ instead).
(Help with) A simple yet specific argument to prove Q is countable I was asked to prove that $\mathbb{Q}$ is countable. Though there are several proofs of this, I want to prove it through a specific argument. Let $\mathbb{Q} = \{x \mid nx+m=0;\ n,m\in\mathbb{Z}\}$ I would like to go with the following argument: given that we know $\mathbb{Z}$ is countable, there are only countably many $n$ and countably many $m$, therefore there can only be countably many equations AND COUNTABLY MANY SOLUTIONS. The instructor told me that though he liked the argument, it doesn't follow directly that there can only be countably many solutions to those equations. Is there any way of proving that without it being a traditional proof of "$\mathbb{Q}$ is countable"?
Your argument can work, but as presented here there are several gaps in it to be closed: * *Your definition of $\mathbb Q$ does not work unless you exclude the case $m=n=0$ -- otherwise everything is a solution. (Thanks, Brian). *You need to point out explicitly that each choice of $n$ and $m$ has at most one solution. *Just because there are countably many choices for $n$ and countably many choices for $m$, you haven't proved that there are countably many combinations of $n$ and $m$. You need either to explicitly reference an earlier proof that the Cartesian product of countable sets is countable, or prove this explicitly. *Since each rational can be hit by more than one $(m,n)$ pair, what you prove is really that $\mathbb Q$ is at most countable. You will need an argument that it doesn't have some lower cardinality. (Unless your definition of "countable" is "at most the cardinality of $\mathbb N$" rather than "exactly the cardinalty of $\mathbb N$", which happens). An explicit appeal to the Cantor-Bernstein theorem may be in order.
Any concrete example of ''right identity and left inverse do not imply a group''? In the abstract algebra class, we have proved the fact that right identity and right inverse imply a group, while right identity and left inverse do not. My question: Are there any good examples of sets (with operations on) with right identity and left inverse, not being a group? To be specific, suppose $(X,\cdot)$ is a set with a binary operation satisfies the following conditions: (i) $(a\cdot b)\cdot c=a\cdot (b\cdot c)$ for any $a,b,c\in X$; (ii) There exists $e\in X$ such that for every $a\in X$, $a\cdot e=a$; (iii) For any $a\in X$, there exists $b\in X$ such that $b\cdot a=e$. I want an example of $(X,\cdot)$ which is not a group.
$$\begin{matrix}a&a&a\\ b&b&b\\ c&c&c\end{matrix}$$ That is, $xy=x$ for all $x,y$: each row of the Cayley table is constant. Both $(xy)z$ and $x(yz)$ equal $x$, so the operation is associative; every element is a right identity, and once a right identity $e$ is fixed, $e$ itself is a left inverse of every $a$, since $ea=e$. But with more than one element this is not a group, because $e$ is not a left identity.
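A brute-force check of the three axioms (my own sketch):

```python
from itertools import product

X = "abc"
mul = lambda x, y: x        # left-zero multiplication: x * y = x
e = "a"                     # a right identity (in fact every element is one)

print(all(mul(mul(x, y), z) == mul(x, mul(y, z))
          for x, y, z in product(X, repeat=3)))          # associative: True
print(all(mul(x, e) == x for x in X))                    # right identity: True
print(all(any(mul(y, x) == e for y in X) for x in X))    # left inverses: True
print(mul(e, "b") == "b")   # False: e is not a LEFT identity, so X is no group
```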
If $f(n) = \sum_{i = 0}^n X_{i}$, then show by induction that $f(n) = f(n - 1) + X_{n}$ I am trying to solve this problem by induction. The sad part is that I don't have a very strong grasp of proofs by induction. I understand that there is a base case and that I need an inductive step that assumes the statement for $k = n$ and then proves it for $k = n + 1$. Here is the problem I am trying to solve: If $f(n) = \sum_{i = 0}^n X_{i}$, then show by induction that $f(n) = f(n - 1) + X_{n}$. Can someone please point me in the right direction? EDIT: I updated the formula to the correct one. I wasn't sure how to typeset it correctly and had left errors in my math. Thank you to those that helped. I'm still having the problem, but now I have the proper formula posted.
To prove by induction, you need to prove two things. First, you need to prove that your statement is valid for $n=1$. Second, you have to show that the validity of the statement for $n=k$ implies the validity of the statement for $n=k+1$. Putting these two bits of information together, you effectively show that your statement is valid for any value of $n$, since starting from $n=1$, the second bit that you proved above shows that the statement is also valid for $n=2$, and then from the validity of the statement for $n=2$ you know it will also be valid for $n=3$ and so on. As for your problem, we proceed as follows: (Part 1) Given the nature of your problem, it is safe to assume $f(0)=0$ (the trivial sum with no elements equals zero). In this case it is trivial to see that $f(1)=f(0)+X_{1}$. (Part 2) Assume the proposition is true for $n=k$, so $$f(k)=\sum_{i=1}^kX_i=f(k-1)+X_k$$ Adding $X_{k+1}$ to both sides of the equation above we get $$f(k)+X_{k+1}=\sum_{i=1}^{k+1}X_i=f(k+1)$$ So, the truth of the statement for $n=k$ implies the truth of the statement for $n=k+1$, and so the result is proved for all $n$. As Mr. Newstead said above, the induction step is not really necessary because of the nature of the problem, but it's important to have this principle in mind for more complicated proofs.
Show that in a discrete metric space, every subset is both open and closed. I need to prove that in a discrete metric space, every subset is both open and closed. Now, I find it difficult to imagine what this space looks like. I think it consists of all sequences containing ones and zeros. Now in order to prove that every subset is open, my books says that for $A \subset X $, $A$ is open if $\,\forall x \in A,\,\exists\, \epsilon > 0$ such that $B_\epsilon(x) \subset A$. I was thinking that since $A$ will also contain only zeros and ones, it must be open. Could someone help me ironing out the details?
Let $(X,d)$ be a metric space, where $d$ is the discrete metric. Suppose $A \subset X$. Let $x\in A$ be arbitrary. Setting $r = \frac{1}{2}$, if $a \in B(x,r)$ then $d(a,x) < \frac{1}{2}$, which implies $a=x$; hence $B(x,r)=\{x\}\subseteq A$, so $A$ is open. (1) To show that $A$ is closed, it suffices to note that the complement of $A$ is also a subset of $X$, hence open by (1), so $A$ must be closed.
Hamiltonian Cycle Problem At the moment I'm trying to prove the statement: $K_n$ is an edge disjoint union of Hamiltonian cycles when $n$ is odd. ($K_n$ is the complete graph with $n$ vertices.) So far, I think I've come up with a proof. We know the total number of edges in $K_n$ is $n(n-1)/2$ (or $n \choose 2$) and we can split our graph into individual Hamiltonian cycles of degree 2. We also know that for $n$ vertices all having degree 2, there must consequently be $n$ edges. Thus we write $n(n-1)/2 = n + n + ... + n$ (here I'm just splitting $K_n$'s edges into some number of distinct Hamiltonian cycles) and the deduction that $n$ must be odd follows easily. However, the assumption I made - that we can always split $K_n$ into Hamiltonian cycles of degree 2 if $K_n$ can be written as a disjoint union as described above - is one I'm having trouble proving. I've only relied on trying different values of $n$, and it hasn't faltered yet. So, I'm asking: if it is true, how do you prove that if $K_n$ can be split into distinct Hamiltonian cycles, it can be split in such a way that each Hamiltonian cycle is of degree 2?
What you are looking for is a Hamilton cycle decomposition of the complete graph $K_n$, for odd $n$. An example of how this can be done (among many other results in the area) is given in: D. Bryant, Cycle decompositions of complete graphs, in Surveys in Combinatorics, vol. 346, Cambridge University Press, 2007, pp. 67–97. For odd $n$, let $n=2r+1$, take $\mathbb{Z}_{2r} \cup \{\infty\}$ as the vertex set of $K_n$ and let $D$ be the orbit of the $n$−cycle \[(\infty, 0, 1, 2r − 1, 2, 2r − 2, 3, 2r − 3,\ldots , r − 1, r + 1, r)\] under the permutation $\rho_{2r}$ [here $\rho_{2r}=(0,1,\ldots,2r-1)$]. Then $D$ is a decomposition of $K_n$ into $n$-cycles. [The paper includes a picture of the starter cycle for a Hamilton cycle decomposition of $K_{13}$.] If you rotate the starter, you obtain the other Hamilton cycles in the decomposition. The method of using a "starter" cycle under the action of a cyclic automorphism is typical in graph decomposition problems.
Show me some pigeonhole problems I'm preparing myself for a combinatorics test. A part of it will concentrate on the pigeonhole principle. Thus, I need some hard to very hard problems on the subject to solve. I would be thankful if you can send me links, books, or even just a single problem.
This turned up in a routine google search of the phrase "pigeonhole principle exercise" and appears to be a set of training problems for the New Zealand olympiad team. It contains numerous problems and has some solutions in the back.
Character of $S_3$ I am trying to learn about the characters of a group but I think I am missing something. Consider $S_3$. This has three elements which fix one thing, two elements which fix nothing and one element which fixes everything. So its character should be $\chi=(1,1,1,0,0,3)$ since the trace is just equal to the number of fixed elements (using the standard representation of a permutation matrix). Now I think this is an irreducible representation, so $\langle\chi,\chi\rangle$ should be 1. But it's $\frac{1}{6}(1+1+1+9)=2$. So is the permutation matrix representation actually reducible? Or am I misunderstanding something?
The permutation representation is reducible. It has a subrepresentation spanned by the vector $(1,1,1)$. Hence, the permutation representation is the direct sum of the trivial representation and an ($n-1=2$)-dimensional irreducible representation. Concretely, subtracting the trivial character from yours gives $\chi_{\text{std}}=(0,0,0,-1,-1,2)$, and $\langle\chi_{\text{std}},\chi_{\text{std}}\rangle=\frac16(0+0+0+1+1+4)=1$, confirming that this summand is irreducible.
Finding $\pi$ through sine and cosine series expansions I am working on a problem in Partha Mitra's book Observed Brain Dynamics (the problem was originally from Rudin's textbook Real and Complex Analysis, and appears on page 54 of Mitra's book). Unfortunately, the book I have does not contain any solutions... Here is the question: By considering the first few terms of the series expansion of sin(x) and cos(x), show that there is a real number $x_0$ between 0 and 2 for which $\cos(x_0)=0$ and $\sin(x_0)=1$. Then, define $\pi=2x_0$, and show that $e^{i\pi/2}=i$ (and therefore that $e^{i\pi}=-1$ and $e^{2\pi i}=1$. Attempt at a solution: In a previous problem I derived the series expansions for sine and cosine as $$ \sin(x) = \sum_{n=0}^{\infty} \left[ (-1)^n \left( \frac{x^{2n+1}}{(2n+1)!} \right) \right] $$ $$ \cos(x) = \sum_{n=0}^{\infty} \left[ (-1)^n \left( \frac{x^{2n}}{(2n)!} \right) \right] $$ My thought is that you can show that $\cos(0)=1$ (trivially), and that $\cos(2)<0$ (less trivially). This then implies that there is a point $x_0$ between 0 and 2 where $\cos(x_0)=0$, since the cosine function is continuous. However, I do not understand how you could then show that $\sin(x_0)=1$ at this same point. My approach may be completely off here. I believe that the second part of this problem ("Then, define $\pi=2x_0$...") will be easier once I get past this first part. Thanks so much for the help. Also - I swear this is not a homework assignment. I am reading through this book on my own to improve my math.
How to show that $\sin (x_0)=1$ if $\cos (x_0)=0$? Quite simply: $$\sin^2 x+\cos^2 x=1$$ (you may also want to specify that $\sin x$ is positive in the given range; this follows, e.g., by grouping the terms of the sine series in consecutive pairs, since each pair $\frac{x^{4k+1}}{(4k+1)!}-\frac{x^{4k+3}}{(4k+3)!}$ is positive for $0<x<2$).
Exercise on finite intermediate extensions Let $E/K$ be a field extension, and let $L_1$ and $L_2$ be intermediate fields of finite degree over $K$. Prove that $[L_1L_2:K] = [L_1 : K][L_2 : K]$ implies $L_1\cap L_2 = K$. My thinking process so far: I've gotten that $K \subseteq L_1 \cap L_2$ because trivially both are intermediate fields over $K$. I want to show that $L_1 \cap L_2 \subseteq K$, or equivalently that any element of $L_1 \cap L_2$ is also an element of $K$. So I suppose there exists some element $x\in L_1 \cap L_2\setminus K$. Well then I know that this element is algebraic over $K$, implying that $[L_1:K]=[L_1:K(x)][K(x):K]$, and similarly for $L_2$, implying that these multiplied together equal $[L_1L_2:K]$ by hypothesis. And now I'm stuck in the mud... not knowing exactly where the contradiction is.
The assumption implies $[L_1L_2:L_1]=[L_2:K]$. Hence $K$-linearly independent elements $b_1,\ldots ,b_m\in L_2$ are $L_1$-linearly independent, considered as elements of $L_1L_2$: extend them to a $K$-basis of $L_2$; this basis spans $L_1L_2$ over $L_1$, and since $[L_1L_2:L_1]=[L_2:K]$ it is in fact an $L_1$-basis of $L_1L_2$. In particular this holds for the powers $1,x,x^2,\ldots ,x^{m-1}$ of an element $x\in L_2$, where $m$ is the degree of the minimal polynomial of $x$ over $K$. Thus $[K(x):K]= [L_1(x):L_1]$. This shows that the minimal polynomial over $K$ of every $x\in L_1\cap L_2$ has degree $1$ (for such $x$ we have $L_1(x)=L_1$), i.e. $x\in K$.
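As a concrete instance of the statement (not a proof), one can check the degree arithmetic with sympy, assuming it is installed: for $L_1=\mathbb{Q}(\sqrt{2})$ and $L_2=\mathbb{Q}(\sqrt{3})$ the compositum has degree $2\cdot 2=4$, and indeed $L_1\cap L_2=\mathbb{Q}$.

```python
# Sketch: confirm the degrees for L1 = Q(sqrt(2)), L2 = Q(sqrt(3)); the
# compositum Q(sqrt(2), sqrt(3)) equals Q(sqrt(2) + sqrt(3)).
from sympy import sqrt, Symbol, minimal_polynomial, degree

x = Symbol('x')
print(degree(minimal_polynomial(sqrt(2), x), x))            # 2
print(degree(minimal_polynomial(sqrt(3), x), x))            # 2
print(degree(minimal_polynomial(sqrt(2) + sqrt(3), x), x))  # 4 = 2 * 2
```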
The integral relation between Perimeter of ellipse and Quarter of Perimeter Ellipse Equation $$\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$$ $x=a\cos t$, $y=b\sin t$ $$L(\alpha)=\int_0^{\alpha}\sqrt{\left(\frac{dx}{dt}\right)^2+\left(\frac{dy}{dt}\right)^2}\,dt$$ $$L(\alpha)=\int_0^\alpha\sqrt{a^2\sin^2 t+b^2 \cos^2 t}\,dt $$ $$L(2\pi)=\int_0^{2\pi}\sqrt{b^2+(a^2-b^2)\sin^2 t}\,dt \tag{Perimeter of ellipse}$$ $$L(\pi/2)=\int_0^{\pi/2}\sqrt{b^2+(a^2-b^2)\sin^2 t}\,dt \tag{Quarter of Perimeter}$$ Geometrically, we can write $L(2\pi)=4L(\pi/2)$ $$4\int_0^{\pi/2}\sqrt{b^2+(a^2-b^2)\sin^2 t}\,dt=\int_0^{2\pi}\sqrt{b^2+(a^2-b^2)\sin^2 t}\,dt \tag1$$ If I change variables in the integral for $L(2\pi)$ $$L(2\pi)=\int_0^{2\pi}\sqrt{b^2+(a^2-b^2)\sin^2 t}\,dt \tag{Perimeter of ellipse}$$ $t=4u$ $$L(2\pi)=4\int_0^{\pi/2}\sqrt{b^2+(a^2-b^2)\sin^2 4u}\,du$$ According to result (1), $$L(2\pi)=4\int_0^{\pi/2}\sqrt{b^2+(a^2-b^2)\sin^2 4u}\,du=4\int_0^{\pi/2}\sqrt{b^2+(a^2-b^2)\sin^2 t}\,dt$$ $$\int_0^{\pi/2}\sqrt{b^2+(a^2-b^2)\sin^2 4u}\,du=\int_0^{\pi/2}\sqrt{b^2+(a^2-b^2)\sin^2 t}\,dt \tag2$$ How to prove the relation $(2)$ analytically? Thanks a lot for your answers
I proved the relation analytically. I would like to share the solution with you. $$\int_0^{\pi/2}\sqrt{b^2+(a^2-b^2)\sin^2 4u}\,du=K$$ $u=\pi/4-z$ $$K=\int_{-\pi/4}^{\pi/4}\sqrt{b^2+(a^2-b^2)\sin^2 (\pi-4z)}\,dz$$ $\sin (\pi-4z)=\sin \pi \cos 4z-\cos \pi \sin 4z= \sin 4z$ $$\int_{-\pi/4}^{\pi/4}\sqrt{b^2+(a^2-b^2)\sin^2 (\pi-4z)}\,dz=\int_{-\pi/4}^{\pi/4}\sqrt{b^2+(a^2-b^2)\sin^2 (4z)}\,dz$$ $$\int_{-\pi/4}^{\pi/4}\sqrt{b^2+(a^2-b^2)\sin^2 (4z)}\,dz=\int_{-\pi/4}^{0}\sqrt{b^2+(a^2-b^2)\sin^2 (4z)}\,dz+\int_{0}^{\pi/4}\sqrt{b^2+(a^2-b^2)\sin^2 (4z)}\,dz$$ $$\int_{-\pi/4}^{0}\sqrt{b^2+(a^2-b^2)\sin^2 (4z)}\,dz$$ $z=-p$ $$\int_{\pi/4}^{0}\sqrt{b^2+(a^2-b^2)\sin^2 (-4p)}\,(-dp)=\int_{0}^{\pi/4}\sqrt{b^2+(a^2-b^2)\sin^2 (4p)}\,dp$$ $$K=\int_{-\pi/4}^{\pi/4}\sqrt{b^2+(a^2-b^2)\sin^2 (4z)}\,dz=2\int_{0}^{\pi/4}\sqrt{b^2+(a^2-b^2)\sin^2 (4z)}\,dz$$ $$K=2\int_{0}^{\pi/4}\sqrt{b^2+(a^2-b^2)\sin^2 (4z)}\,dz$$ $z=\pi/8-v$ $$K=2\int_{-\pi/8}^{\pi/8}\sqrt{b^2+(a^2-b^2)\cos^2 (4v)}\,dv$$ $$K=2\int_{-\pi/8}^{0}\sqrt{b^2+(a^2-b^2)\cos^2 (4v)}\,dv+2\int_{0}^{\pi/8}\sqrt{b^2+(a^2-b^2)\cos^2 (4v)}\,dv$$ $$\int_{-\pi/8}^{0}\sqrt{b^2+(a^2-b^2)\cos^2 (4v)}\,dv$$ $v=-h$ $$\int_{\pi/8}^{0}\sqrt{b^2+(a^2-b^2)\cos^2 (-4h)}\,(-dh)=\int_{0}^{\pi/8}\sqrt{b^2+(a^2-b^2)\cos^2 (4h)}\,dh$$ $$K=2\int_{-\pi/8}^{0}\sqrt{b^2+(a^2-b^2)\cos^2 (4v)}\,dv+2\int_{0}^{\pi/8}\sqrt{b^2+(a^2-b^2)\cos^2 (4v)}\,dv$$ $$K=4\int_{0}^{\pi/8}\sqrt{b^2+(a^2-b^2)\cos^2 (4v)}\,dv$$ $v=\pi/8-t/4$ $$K=4\int_{\pi/2}^{0}\sqrt{b^2+(a^2-b^2)\cos^2 (4(\pi/8-t/4))}\,(-dt/4)$$ $$K=\int_{0}^{\pi/2}\sqrt{b^2+(a^2-b^2)\cos^2 (\pi/2-t)}\,dt$$ $$K=\int_{0}^{\pi/2}\sqrt{b^2+(a^2-b^2)\sin^2 (t)}\,dt$$
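A quick numerical cross-check of relation (2) is also possible (a sketch assuming numpy and scipy are available; the values $a=3$, $b=2$ are arbitrary sample semi-axes):

```python
# Numerical cross-check of relation (2).
import numpy as np
from scipy.integrate import quad

a, b = 3.0, 2.0  # arbitrary sample semi-axes
f = lambda t: np.sqrt(b**2 + (a**2 - b**2) * np.sin(t) ** 2)
g = lambda u: np.sqrt(b**2 + (a**2 - b**2) * np.sin(4 * u) ** 2)

lhs, _ = quad(g, 0, np.pi / 2)  # integrand with sin^2(4u)
rhs, _ = quad(f, 0, np.pi / 2)  # integrand with sin^2(t)
print(lhs, rhs)  # the two values agree: both equal the quarter perimeter
```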
Show that the group $G$ is of order $12$ I am studying some exercises about semidirect products and facing this solved one: Show that the order of the group $G=\langle a,b| a^6=1,a^3=b^2,aba=b\rangle$ is $12$. Our aim is to show that $|G|\leq 12$ and then $G=\mathbb Z_3 \rtimes\mathbb Z_4=\langle x\rangle\rtimes\langle y\rangle$. So we need a homomorphism from $\mathbb Z_4$ to $\mathrm{Aut}(\mathbb Z_3)\cong\mathbb Z_2=\langle t\rangle$ to construct the semidirect product as we wish: $$\phi:=\begin{cases} 1\longrightarrow \mathrm{id},\\ y^2\longrightarrow \mathrm{id},\\ y\longrightarrow t,\\ y^3\longrightarrow t, \end{cases}$$ Here, I do know how to construct $\mathbb Z_3 \rtimes_{\phi}\mathbb Z_4$ by using $\phi$ and according to the definition. My question starts from this point: the solution suddenly goes another way instead of doing $(a,b)(a',b')=(a\phi_b(a'),bb')$. It takes $$\alpha=(x,y^2), \beta=(1,y)$$ and notes that these elements satisfy the relations of the group $G$. All of this is correct and understandable, but how could I have found such elements $\alpha, \beta$ myself?? Is finding generators like $\alpha, \beta$ really the standard approach for this kind of problem? Thanks for your help.
The subgroup $A$ generated by $a^2$ is normal and of order $3$. The subgroup $B$ generated by $b$ is of order $4$ (note $b^4=a^6=1$). The intersection of these is trivial, so the product $AB$ has order $12$, and hence $G$ has order at least $12$. To show it has order exactly $12$, we need to see that $a\in AB$, so that $G=AB$; but $b^2=a^3=a^2a$, so $$a=a^{-2}b^2\in AB.$$ Thus the group is the semidirect product of $A$ by $B$, where, using $ba=a^{-1}b$ (which follows from $aba=b$), $$ba^2b^{-1}=(ba)(ab^{-1})=a^{-1}(ba)b^{-1}=a^{-1}(a^{-1}b)b^{-1}=a^{-2}$$
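If you want a machine check of the order, sympy's finitely presented groups can run coset enumeration on this presentation (a sketch; assumes sympy is installed):

```python
# Coset enumeration on the presentation of G.
from sympy.combinatorics.free_groups import free_group
from sympy.combinatorics.fp_groups import FpGroup

F, a, b = free_group("a, b")
# relators encode a^6 = 1, a^3 = b^2, and aba = b
G = FpGroup(F, [a**6, a**3 * b**-2, a * b * a * b**-1])
print(G.order())  # 12
```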
Is there a direct, elementary proof of $n = \sum_{k|n} \phi(k)$? If $k$ is a positive natural number then $\phi(k)$ denotes the number of natural numbers less than $k$ which are prime to $k$. I have seen proofs that $n = \sum_{k|n} \phi(k)$ which basically partition $\mathbb{Z}/n\mathbb{Z}$ into subsets of elements of order $k$ (of which there are $\phi(k)$-many) as $k$ ranges over divisors of $n$. But everything we know about $\mathbb{Z}/n\mathbb{Z}$ comes from elementary number theory (division with remainder, Bézout relations, divisibility), so the above relation should be provable without invoking the structure of the group $\mathbb{Z}/n\mathbb{Z}$. Does anyone have a nice, clear proof which avoids $\mathbb{Z}/n\mathbb{Z}$?
Claim: for a given $n$ and a divisor $d$ of $n$, the number of pairs $(a,b)$ with $a\geq 1$, $b\geq 0$, $a+b=n$ and $\gcd(a,b)=d$ is $\phi(n/d)$ (with the convention $\gcd(a,0)=a$; allowing $b=0$ makes the count come out exactly right). Proof: Let $a=xd$ and $b=yd$. We want the number of solutions of $x+y=\frac{n}{d}$ with $\gcd(x,y)=1$. Since $\gcd(x,y)=\gcd(x,x+y)=\gcd(x,n/d)$, the solutions are exactly the $x$ with $1\leq x\leq n/d$ and $\gcd(x,n/d)=1$, and there are $\phi(n/d)$ of these. Now count all pairs $(a,b)$ with $a\geq 1$, $b\geq 0$ and $a+b=n$: taking $a=1,\dots,n$ shows there are exactly $n$ of them. But this can be counted in a different way: every such pair satisfies $\gcd(a,b)=\gcd(a,n)=d$ for some divisor $d$ of $n$, so by the claim $$\sum_{d|n}\phi(n/d)=\sum_{d|n}\phi(d)=\text{number of pairs}=n.$$
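A quick empirical check of the identity with sympy (illustrative, not a proof; assumes sympy is installed):

```python
from sympy import totient, divisors

# check n = sum over d | n of phi(d) for small n
for n in range(1, 200):
    assert sum(totient(d) for d in divisors(n)) == n
print("identity verified for n = 1, ..., 199")
```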
The limit of the sum is the sum of the limits I was wondering why the statement in the title is true only if the functions we are dealing with are continuous. Here's the context (perhaps not required): the upper equation there is just a limit of two sums, and the lower expression is two limits of those two sums. If anyone wonders, that's the original source (a pdf explaining the proof of the product rule). P.S. In the context it's given that $g$ and $f$ are differentiable; anyway, I only provided it to illustrate the question, and my actual question is simply general.
The limit of the sum is not always equal to the sum of the limits, even when the individual limits exist, once the number of terms is allowed to grow. For example: define $h_n(i)=\frac{1}{\sqrt{n^2+i}}$. For each fixed $i$, the limit of $h_n(i)$ as $n$ goes to infinity is zero. But the limit of the sum $h_n(1)+h_n(2)+\cdots+h_n(n)$ as $n$ goes to infinity is not zero. In fact the limit of this sum is $1$, by the sandwich theorem: $$\frac{n}{\sqrt{n^2+n}}\leq\sum_{i=1}^{n}\frac{1}{\sqrt{n^2+i}}\leq\frac{n}{\sqrt{n^2+1}},$$ and both bounds tend to $1$ as $n\to\infty$.
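A short numeric illustration in Python (nothing assumed beyond the standard library): each term goes to $0$, yet the $n$-term sum approaches $1$.

```python
import math

# Each term 1/sqrt(n^2 + i) tends to 0, yet the n-term sum tends to 1.
for n in (10, 100, 1000, 10000):
    s = sum(1 / math.sqrt(n * n + i) for i in range(1, n + 1))
    print(n, s)  # s approaches 1 as n grows
```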