Find a lower bound Let $M$ be an $N\times N$ symmetric real matrix, and let $J$ be a permutation of the integers from 1 to $N$, with the following properties:

* $J:\{1,...,N\}\rightarrow\{1,...,N\}$ is one-to-one.
* $J$ is its own inverse: $J(J(i))=i$.
* $J$ has at most one fixed point, that is, there's at most one value of $i$ such that $J(i)=i$. Explicitly, if $N$ is odd, there is exactly one fixed point, and if $N$ is even, there are none.

A permutation with these properties establishes a pairing between the integers from 1 to $N$, where $i$ is paired with $J(i)$ (except if $N$ is odd, in which case the fixed point is not paired). Therefore we will call $J$ a pairing. (*) Given a matrix $M$, we go through all possible pairings $J$ to find the maximum of $$\frac{\sum_{ij}M_{ij}M_{J(i)J(j)}}{\sum_{ij}M_{ij}^2}.$$ This way we define a function: $$F(M)=\max_J\frac{\sum_{ij}M_{ij}M_{J(i)J(j)}}{\sum_{ij}M_{ij}^2}.$$ The question is: Is there a lower bound to $F(M)$, over all symmetric real $N\times N$ matrices $M$, with the constraint $\sum_{ij}M_{ij}^2 > 0$ to avoid singularities? I suspect that $F(M)\geq 0$ (**), but I don't have a proof. Perhaps there is a tighter lower bound. (*) I am asking here if there is a standard name for this type of permutation. (**) For $N=2$ this is false, see the answer by @O.L. What happens for $N>2$?
Not really an answer, rather a reformulation and an observation. UPD: The answer is in the addendum. Any such permutation $S$ can be considered as an $N\times N$ matrix with one $1$ and $N-1$ zeros in each row and each column. Note that $S^{-1}=S^T$. The number of fixed points coincides with $\mathrm{Tr}\,S$. You are also asking for the permutation to be an involution ($J^{2}=1$), which means that $J=J^T$. Given a real symmetric matrix $M$ and an involutive permutation $J$, the expression $$ E_J(M)=\frac{\sum_{i,j}M_{ij}M_{J(i)J(j)}}{\sum_{i,j}M_{ij}^2}$$ can be rewritten as $$ E_J(M)=\frac{\mathrm{Tr}\,MJMJ}{\mathrm{Tr}\,M^2}=\frac{\mathrm{Tr}\left(MJ\right)^2}{\mathrm{Tr}\,M^2}.$$ Let us now consider the case $N=2$ where there is only one permutation $J=\left(\begin{array}{cc} 0 & 1 \\ 1 & 0\end{array}\right)$ with the necessary properties. Now taking for example $M=\left(\begin{array}{cc} 1 & 0 \\ 0 & -1\end{array}\right)$ we find $\max_J E_J(M)=-1$. This seems to contradict your hypothesis $F(M)\geq 0$. Similar examples can be constructed for any $N$. Important addendum. I think the lower bound for $F(M)$ is precisely $-1$. Let us prove this for even $N=2n$. Since $J^2=1$, the eigenvalues of $J$ can only be $\pm1$. Further, since the corresponding permutation has no fixed points, $J$ can be brought to the form $J=O^T\left(\begin{array}{cc} \mathbf{1}_n & 0 \\ 0 & -\mathbf{1}_n\end{array}\right) O$ by a real orthogonal transformation. Let us now compute the quantity \begin{align} \mathrm{Tr}\,MJMJ=\mathrm{Tr}\left\{MO^T\left(\begin{array}{cc} \mathbf{1}_n & 0 \\ 0 & -\mathbf{1}_n\end{array}\right)OMO^T\left(\begin{array}{cc} \mathbf{1}_n & 0 \\ 0 & -\mathbf{1}_n\end{array}\right)O\right\}=\\= \mathrm{Tr}\left\{\left(\begin{array}{cc} A & B \\ B^T & D\end{array}\right)\left(\begin{array}{cc} \mathbf{1}_n & 0 \\ 0 & -\mathbf{1}_n\end{array}\right)\left(\begin{array}{cc} A & B \\ B^T & D\end{array}\right)\left(\begin{array}{cc} \mathbf{1}_n & 0 \\ 0 & -\mathbf{1}_n\end{array}\right)\right\},\tag{1}\end{align} where $\left(\begin{array}{cc} A & B \\ B^T & D\end{array}\right)$ denotes the real symmetric matrix $OMO^T$ written in block form. Real matrices $A$, $B$, $D$ can be made arbitrary by the appropriate choice of $M$ (of course, under the obvious constraints $A=A^T$, $D=D^T$). Now let us continue the computation in (1): $$\mathrm{Tr}\,MJMJ=\mathrm{Tr}\left(A^2+D^2-BB^T-B^TB\right).$$ Also using that $$\mathrm{Tr}\,M^2=\mathrm{Tr}\left(OMO^T\right)^2=\mathrm{Tr}\left(A^2+D^2+BB^T+B^TB\right),$$ we find that $$\frac{\mathrm{Tr}\,MJMJ}{\mathrm{Tr}\,M^2}+1=\frac{2\mathrm{Tr}\left(A^2+D^2\right)}{\mathrm{Tr}\left(A^2+D^2+BB^T+B^TB\right)}\geq 0.$$ The equality is certainly attained if $A=D=\mathbf{0}_n$. $\blacksquare$
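For readers who want a quick numerical sanity check of the $N=2$ example, here is a minimal sketch (assuming NumPy; the matrices are exactly the ones used in the answer):

```python
import numpy as np

J = np.array([[0., 1.],
              [1., 0.]])   # the only fixed-point-free pairing for N = 2
M = np.array([[1., 0.],
              [0., -1.]])  # symmetric test matrix from the answer

# E_J(M) = Tr(MJMJ) / Tr(M^2)
E = np.trace(M @ J @ M @ J) / np.trace(M @ M)
print(E)  # -1.0, consistent with the claimed lower bound F(M) >= -1
```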
Photograph of Marjorie Rice I'm giving a presentation this weekend about Marjorie Rice's work on tilings. The only photograph I have of her (from her website) is small and pixelated, and I haven't been able to make contact with her to ask her for a better one. I'd be most grateful if you could point me to a better photo of her, either on the web or in print or from your personal archives. Thanks!
See Marjorie Rice, page 2 from a newsletter published by Key Curriculum Press, on tessellations. There's a photo of Marjorie Rice (on the left), at the lower left of page 2 of the newsletter. The link will take you to the pdf. I don't know if this is an improvement over your current picture, but I thought I'd post this, in case it helps.
Is mathematical induction necessary in this situation? I was reading "Number Theory" by George E. Andrews. On p. 17, where he proves that for each pair of positive integers $a,b$, $\gcd(a,b)$ uniquely exists, I came up with a question. The approach he used is probably the most common one, that is, to make use of the Euclidean Algorithm. There exist integers $q_0, r_0$, $0 \leq r_0 <b$, such that $a=q_0 \times b +r_0$. If $r_0 \neq 0$, we can find $q_1,r_1$, $0 \leq r_1 < r_0$ such that $b=q_1 \times r_0 + r_1$. Since $b>r_0>r_1>....\geq 0$, there exists $k$ such that $r_k=0$. Then we can prove that $d=r_{k-1}$ divides $r_{k-2}$. Moreover, $d$ divides every $r_t$. I believe this is proved as follows: Suppose that $d$ divides both $r_t, r_{t-1}$. Since $r_{t-2}=q_t \times r_{t-1} + r_t$ and the right side is clearly divisible by $d$, so is the left side. Then suppose that $d$ divides both $r_{t-1},r_{t-2}$, and keep going until we have that $d$ divides both $a$ and $b$. And the author says that this procedure requires the Principle of Mathematical Induction. This looks like Mathematical Induction, but it is not proving something for infinitely many numbers, so I think it does not need the Principle of Mathematical Induction because $k$ is a finite number. To my understanding, we need to use the Principle of Mathematical Induction only when we want to prove that a statement is true for infinitely many integers, because we cannot write infinitely long proofs. However, in this situation, we could write the proof out in $k$ steps; it would just be tedious. That is why I think it does not need Mathematical Induction. Could you help me figure out why we need to use the Principle of Mathematical Induction in this situation?
Here are the spots where induction is required: "Since $b>r_0>r_1>....≥0$, there exists $k$ such that $r_k=0$." Not true for real numbers, right? "I think this does not need Principle of Mathematical Induction because k is a finite number." But how do we know it's finite? You could descend forever in some rings. Personally though, I'd use the well-ordering principle, it's much cleaner than induction in most cases. Let $S$ be the set of those $r_i$. It's fine if it's infinite, sets can do that. Now, since we know all of them are $\ge 0$, there is a minimum element. Call this element $d$. [continue as you did in your post].
How to find solutions for linear recurrences using eigenvalues Use eigenvalues to solve the system of linear recurrences $$y_{n+1} = 2y_n + 10z_n\\ z_{n+1} = 2y_n + 3z_n$$ where $y_0 = 0$ and $z_0 = 1$. I have absolutely no idea where to begin. I understand linear recurrences, but I'm struggling with eigenvalues.
Set $x_n=[y_n z_n]^T$, and your system becomes $x_{n+1}=\left[\begin{smallmatrix}2&10\\2&3\end{smallmatrix}\right]x_n$. Iterating the recurrence then amounts to taking powers of this matrix: $x_n = A^n x_0$, where $A$ is the matrix above. To use eigenvalues, diagonalise $A=PDP^{-1}$ with the eigenvalues on the diagonal of $D$; then $x_n = PD^nP^{-1}x_0$, which gives a closed form for $y_n$ and $z_n$. If your eigenvalues are less than 1 in absolute value, the matrix powers approach 0. If an eigenvalue is bigger than 1 in absolute value, you get divergence. It's a rich subject, read about it on Wikipedia.
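As a concrete illustration of the eigenvalue approach, here is a minimal NumPy sketch (the variable names are mine, not part of the original problem statement):

```python
import numpy as np

A = np.array([[2., 10.],
              [2., 3.]])           # coefficient matrix of the system
x0 = np.array([0., 1.])            # (y_0, z_0)

evals, P = np.linalg.eig(A)        # eigenvalues are 7 and -2

def x(n):
    # closed form x_n = P diag(evals)^n P^{-1} x_0
    return P @ (evals**n * np.linalg.solve(P, x0))

# compare with direct iteration
xn = x0.copy()
for _ in range(6):
    xn = A @ xn
print(x(6), xn)                    # the two agree up to rounding
```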
Show that the equation $\cos(x) = \ln(x)$ has at least one solution in the real numbers I have the following question: Q: Show that the equation $\cos (x) = \ln (x)$ has at least one solution in the real numbers. To solve this question using the intermediate value theorem we let $f(x)=\cos (x)-\ln (x)$. We want to find $a$ and $b$, but what should I try in order to get $f(a)f(b)<0$, i.e. $f(a)>0$ and $f(b)<0$? Thanks
Hint: $\cos$ is bounded whereas $\ln$ is increasing with $\lim\limits_{x\to 0^+} \ln(x) =- \infty$ and $\lim\limits_{x \to + \infty} \ln(x)=+ \infty$.
4 dimensional numbers I've thought about using split-complex and complex numbers together to build a 3 dimensional space (related to my previous question). I then found out that, using both together, we can have trouble with the product $ij$. So by adding another dimension, I've defined $$k=\begin{pmatrix} 1 & 0\\ 0 & -1 \end{pmatrix}$$ with the property $k^2=1$. So numbers of the form $a+bi+cj+dk$, where $(a,b,c,d) \in \Bbb R^4$, $i$ is the imaginary unit, $j$ is the elementary unit of split-complex numbers and $k$ is the number defined above, could be represented on a 4 dimensional space. I know that these numbers look like the Quaternions. They are not! So far, I came up with the multiplication table below: $$\begin{array}{|l |l l l|}\hline & i&j&k \\ \hline i&-1&k&j \\ j& -k&1&i \\ k& -j&-i&1 \\ \hline \end{array}$$ We can note that, as with the Quaternions, commutativity no longer holds for these numbers. When I showed this work to my math teacher he said basically this:

* It's not coherent to use numbers with different properties as basic elements, since $i^2=-1$ whereas $j^2=k^2=1$.
* 2x2 matrices don't represent anything on a 4 dimensional space.

Can somebody explain these 2 things to me? What's incoherent here?
You have discovered split-quaternions. You can compare the multiplication table there and in your question. This algebra is not commutative and has zero divisors. So, it combines the "negative" traits of both quaternions and tessarines. On the other hand it is notable for being isomorphic to the algebra of $2\times2$ real matrices. Due to this isomorphism, people usually speak about matrices rather than split-quaternions.
Domain of the function $f(z) = \sqrt{z^2 -1}$ What will be the domain of the function $f(z) = \sqrt{z^2 -1}$? My answers are: $(-\infty, -1] \cup [1, \infty)$ OR $\mathbb{R} - \lbrace1>x\rbrace$ OR $\mathbb {R}$, such that $z \nless 1$.
The first part of your answer (before the "or") is correct: The domain of your function, in $\mathbb R$, is indeed $(-\infty, -1]\cup [1, \infty).$ That is, the function is defined for all real numbers $z$ such that $z \leq -1$ or $z \geq 1$. Did you have any particular reason you included this as your answer, along with "or...."? Did you have doubts about the above, that you were questioning whether the domain is not $(-\infty, -1] \cup [1, \infty)$? Why is the domain $\;\;(-\infty, -1] \cup [1, \infty) \subset \mathbb R\;$? Note that the numbers strictly contained in $(-1, 1)$, when squared, are less than $1$, making $\color{blue}{\bf z^2 - 1 < 0}$, in which case we would be trying to take the square root of a negative number - which has no definition in the real numbers. So we exclude those numbers, the $z \in (-1, 1)$, from the domain, giving us what remains. And so we have that our function, and its domain, is given by: $$f(z) = \sqrt{z^2 - 1},\quad z \in (-\infty, -1] \cup [1, \infty) \subset \mathbb R$$
Generalization of metric that can induce ordering I was wondering if there is some generalization of the concept metric to take positive and negative and zero values, such that it can induce an order on the metric space? If there already exists such a concept, what is its name? For example on $\forall x,y \in \mathbb R$, we can use difference $x-y$ as such generalization of metric. Thanks and regards!
You will have to lose some of the other axioms of a metric space as well since the requirement that $d(x,y)\ge 0$ in a metric space is actually a consequence of the other axioms: $0=d(x,x)\le d(x,y)+d(y,x)=2\cdot d(x,y)$, thus $d(x,y)\ge 0$. This proof uses the requirements that $d(x,x)=0$, the triangle inequality, and symmetry. There are notions of generalizations of metric spaces that weaken these axioms. I think the one closest to what you might be thinking about is partial metric spaces (where $d(x,x)=0$ is dropped).
Conformally Map Region between tangent circles to Disk Suppose we are given two circles, one inside the other, that are tangent at a point $z_0$. I'm trying to map the region between these circles to the unit disc, and my thought process is the following: I feel like we can map $z_0$ to $\infty$, but I'm not really sure about this. If it works, then I get a strip in the complex plane, and I know how to handle strips via rotation, translation, logarithm, power, and then I'm in the upper half-plane (if I've thought this through somewhat properly). My problem really lies in what point $z_0$ can go to, because I thought the point symmetric to $z_0$ (which is $z_0$ in this case) had to be mapped to $0$. Is this the right idea, and if what other details should I make sure I have? Thanks!
I just recently solved this myself. Let $z_1$ be the center of the inner tangent circle such that $|z_1 - z_0| =r$ and let $z_2$ be the center of the larger circle with $|z_2 - z_0| = R$ with $R>r$. Rotate and translate your circles so that $z_0$ lies on the real axis (as does $z_1$) and $z_2$ is 0. You can map this to a vertical strip by sending $z_0$ to $\infty$, 0 to 0 and the antipodal point on the inner circle of $z_0$ to 1. To get from the upper half plane to the vertical strip, you'll need a logarithm and a reflection. E.g., consider $B(0,2) \setminus \overline{B(3/2,1/2)}$ as the domain. The map as above (sending the region between the circles to the vertical strip) is $\Phi(z) = \dfrac{-z}{z-2}$ (this is a Möbius transformation). You'll need to invert this to go from the strip back to the original region.
Does one always use augmented matrices to solve systems of linear equations? The homework tag is to express that I am a student with no working knowledge of math. I know how to use elimination to solve systems of linear equations. I set up the matrix, perform row operations until I can get the resulting matrix into row echelon form or reduced row echelon form, and then I back substitute and get the values for each of my variables. Just some random equations: $a + 3b + c = 5 \\ a + 0 + 5c = -2$ My question is, don't you always want to be using an augmented matrix to solve systems of linear equations? By augmenting the matrix, you're performing row operations to both the left and right side of those equations. By not augmenting the matrix, aren't you missing out on performing row operations on the right side of the equations (the $5$ and $-2$)? The sort of problems I'm talking about are homework/test-level questions (as opposed to real world harder data and more complex solving methods?) where they give you $Ax = b$ and ask you to solve it using matrix elimination. Here is what I mean mathematically: $[A] = \begin{bmatrix} 1 & 3 & 1 \\ 1 & 0 & 5 \\ \end{bmatrix} $ $ [A|b] = \left[\begin{array}{ccc|c} 1 & 3 & 1 & 5\\ 1 & 0 & 5 & -2\\ \end{array}\right] $ So, to properly solve the systems of equations above, you want to be using $[A|b]$ to perform Elementary Row Operations and you do not want to only use $[A]$ right? The answer is: yes, if you use this method of matrices to solve systems of linear equations, you must use the augmented form $[A|b]$ in order to perform EROs to both $A$ and $b$.
I certainly wouldn't use an augmented matrix to solve the following: $x = 3$ $y - x = 6$ When you solve a system of equations, if doing so correctly, one does indeed perform "elementary row operations", and in any case, when working with any equations, to solve for $y$ above, for example, I would add $x$ to each side of the second equation to get $y = x + 6 = 3 + 6 = 9$ Note: we do treat systems of equations like "row operations" (in fact, row operations are simply modeled after precisely those operations which are legitimate to perform on systems of equations) $$2x + 2y = 4 \iff x + y = 2\tag{multiply each side by 1/2}$$ $$\qquad\qquad x + y = 1$$ $$\qquad\qquad x - y = 1$$ $$\iff 2x = 2\tag{add equation 1 and 2}$$ $$x+ y + z = 1$$ $$3x + 2y + z = 2$$ $$-x - y + z = 3$$ We can certainly switch "equation 2 and equation 3" to make "adding equation 3 to equation 1" more obvious. I do believe that using an augmented coefficient matrix is very worthwhile, very often, to "get variables out of the way" temporarily, and for dealing with many equations in many unknowns: and even for $3\times 3$ systems when the associated augmented coefficient matrix has mostly non-zero entries. It just makes more explicit (and easier to tackle) the process one can use with the corresponding system of equations it represents.
Finding intersection of 2 planes without cartesian equations? The planes $\pi_1$ and $\pi_2$ have vector equations: $$\pi_1: r=\lambda_1(i+j-k)+\mu_1(2i-j+k)$$ $$\pi_2: r=\lambda_2(i+2j+k)+\mu_2(3i+j-k)$$ $i.$ The line $l$ passes through the point with position vector $4i+5j+6k$ and is parallel to both $\pi_1$ and $\pi_2$. Find a vector equation for $l$. This is what I know: $$l \parallel \pi_1, \pi_2 \implies l \perp n_1, n_2 \implies d = n_1\times n_2;\quad l:r=(4i+5j+6k)+\lambda d$$ However, that method involves 3 cross-products, which according to the examination report was an 'expeditious' solution which was also prone to 'lack of accuracy'. Rightfully so, I also often make small errors with sign changes leading me to the wrong answer, so if there is a more efficient way of working I'd like to know what that is. Q1 How can I find the equation for line $l$ without converting the plane equations to cartesian form? $ii.$ Find also the shortest distance between $l$ and the line of intersection of $\pi_1$ and $\pi_2$. Three methods are described to solve the second part. However, I didn't understand this one: The determination of the shortest distance was perceived by most to be the shortest distance of the given point $4i+5j+6k$ to the line of intersection of the planes. Thus they continued by evaluating the vector product $(4i+5j+6k)\times (3i+j-k)$ to obtain $-11i+22j-11k$. The required minimum distance is then given immediately by $p = \frac{|-11i+22j-11k|}{\sqrt{11}} = \sqrt{66}$ Q2 Why does the cross-product of the given point and the direction vector of the line of intersection of the planes get to the correct answer?
1) One method you can use to find line $L_1$ is to make equations in $x, y, z$ for each plane and solve them, instead of finding the cross product: $$x=\lambda_1+2\mu_1, y=\lambda_1-\mu_1, z=-\lambda_1+\mu_1$$ $$x=\lambda_2+3\mu_2, y=2\lambda_2+\mu_2, z=\lambda_2-\mu_2$$ What we'll do is find the line of intersection; this line is clearly parallel to both planes so we can use its direction for the line through $(4,5,6)$ that we want. Since the line of intersection has to lie on both planes, it will satisfy both sets of equations. So equating them, $$\lambda_1+2\mu_1 = \lambda_2+3\mu_2$$ $$\lambda_1-\mu_1 = 2\lambda_2+\mu_2$$ $$-\lambda_1+\mu_1 = \lambda_2-\mu_2$$ Adding the last two equations, we get, $$\lambda_2 = 0$$ So, $$x=3\mu_2, y=\mu_2, z=-\mu_2$$ This is a parametrisation of the solution, which is the line. So obviously $L_1$ must be: $$L_1: (4,5,6) + \mu_2(3,1,-1)$$ 2) Notice that $(0,0,0)$ lies on the line of intersection $L_2$. So the difference vector $v$ between the point $(4,5,6)$ on the first line and $(0,0,0)$ on the second is $v=(4,5,6)$. The distance is really the length of the projection of $v$ on the perpendicular joining the two lines. This is $|v|\sin \theta$ where $\theta$ is the angle between $v$ and the direction of $L_2$. Expressing $|v|\sin\theta$ as $$|v|\sin\theta=\frac{|v||w|\sin\theta}{|w|} = \frac{|v\times w|}{|w|}$$ (where $w$ is the direction vector of $L_2$) gives you the answer. Here taking the cross product of the position vector of the given point with the direction vector of $L_2$ depended on $(0,0,0)$ lying on $L_2$; otherwise you would need to take the difference vector of the given point and any point on $L_2$.
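A short numerical check of both parts (NumPy; this is purely a verification of the results above, not the intended exam method):

```python
import numpy as np

d1a, d1b = np.array([1, 1, -1]), np.array([2, -1, 1])   # spanning vectors of pi_1
d2a, d2b = np.array([1, 2, 1]), np.array([3, 1, -1])    # spanning vectors of pi_2
w = np.array([3, 1, -1])                                # claimed direction of L_2

# w is parallel to both planes, i.e. orthogonal to both normals
print(w @ np.cross(d1a, d1b), w @ np.cross(d2a, d2b))   # 0 0

# shortest distance from (4,5,6) to the line through the origin with direction w
v = np.array([4, 5, 6])
print(np.linalg.norm(np.cross(v, w)) / np.linalg.norm(w))  # 8.124... = sqrt(66)
```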
Finding the closed form for a sequence My teacher isn't great with explaining his work and the book we have doesn't cover anything like this. He wants us to find a closed form for the sequence defined by: $P_{0} = 0$ $P_{1} = 1$ $\vdots$ $P_{n} = -2P_{n-1} + 15P_{n-2}$ I'm not asking for a straight up solution, I just have no idea where to start with it. The notes he gave us say: We will consider a linear difference equation that gives the Fibonacci sequence. $y(k) + A_1y(k -1) + A_2y(k -2) = \beta$ That's the general form for a difference equation in which each term is formed from the two preceding terms. We specialize this for our Fibonacci sequence by setting $A_1 = 1$, $A_2 = 1$, and $\beta = 0$. With some rearrangement, we get $y(k) = y(k - 1) + y(k - 2)$ which looks more like the general form for the Fibonacci sequence. To solve the difference equation, we try the solution $y(k) = Br^k$. Plugging that in, we obtain $Br^{k-2} (r^2 - r - 1) = 0$ I have no idea where the $Br^k$ is coming from nor what it means, and he won't explain it in any sort of terms we can understand. If someone could help me with the basic principle behind finding a closed form with the given information I would be eternally grateful. EDIT: Using the information given (thank you guys so much) I came up with $y(k) = \frac{1}{8}(3)^k - \frac{1}{8}(-5)^k$ If anyone has run through it, let me know what you found, but I'm in no way asking you guys to do that. It's a lot of work to help some random college student.
A related problem. Here is a start. Just assume a solution of the form $P_n=r^n$ and plug it back into the equation to find $r$: $$ P_{n} = -2P_{n-1} + 15P_{n-2} \implies r^n+2r^{n-1}-15r^{n-2}=0 $$ $$ \implies r^{n-2}(r^2+2r-15)=0 \implies r^2+2r-15=0 $$ Find the roots of the above polynomial, $r_1, r_2$, and construct the general solution $$ P(n)=c_1 r_1^n + c_2 r_2^n \longrightarrow (*) $$ To find $c_1$ and $c_2$, just use $P_0=0$ and $P_1=1$ in $(*)$ to get two equations in $c_1$ and $c_2$. Once you find $c_1$ and $c_2$, plug them back into $(*)$ and this will be the required solution.
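For reference, here is a tiny check of the resulting closed form (plain Python): the roots of $r^2+2r-15=0$ are $3$ and $-5$, and the initial conditions give $c_1=\tfrac18$, $c_2=-\tfrac18$, which matches the edit in the question.

```python
def P_closed(n):
    # closed form obtained from r^2 + 2r - 15 = 0 and P_0 = 0, P_1 = 1
    return (3**n - (-5)**n) / 8

P = [0, 1]
for n in range(2, 10):
    P.append(-2 * P[n - 1] + 15 * P[n - 2])

print(all(P_closed(n) == P[n] for n in range(10)))   # True
```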
Derivatives using the Limit Definition How do I find the derivative of $\sqrt{x^2+3}$? I plugged everything into the formula but now I'm having trouble simplifying. $$\frac{\sqrt{(x+h)^2+3}-\sqrt{x^2+3}}{h}$$
Keaton's comment is very useful. If you multiply the top and bottom of your expression by $\sqrt{(x+h)^2+3}+\sqrt{x^2+3}$, the numerator should simplify to $2xh+h^2$. See if you can finish the problem after that.
Clarification on expected number of coin tosses for specific outcomes. As seen in this question, André Nicolas provides a solution for 5 heads in a row. Basically, for any sort of problem that relies on determining this sort of probability, if the chance of each event is 50/50, then no matter what the composition of values, is the linear equation the same? For example, in the case of flipping coins, and wanting to find out how many flips are needed for 4 consecutive tails followed by a head, is it of the same form as trying to find how many flips are needed for 5 heads? Specifically: $$x=\frac{1}{2}(x+1)+\frac{1}{4}(x+2)+\frac{1}{8}(x+3)+\frac{1}{16}(x+4)+\frac{1}{32}(x+5)+\frac{1}{32}(5),$$ where $\frac{1}{32}(x+5)$ denotes the last flip's chance of landing tails after 4 heads in a row ($0.5\cdot 0.5\cdot 0.5\cdot 0.5\cdot P(\text{tails})$). If I was using an unfair coin in the same example as above (HHHHT) with a 60% chance to land on heads, would the equation be: $$x=\frac{1}{2}(x+1)+\frac{1}{4}(x+2)+\frac{1}{8}(x+3)+\frac{1}{16}(x+4)+\frac{1}{40}(x+5)+\frac{1}{40}(5).$$
No, it is not the same. For this pattern, the argument holds well (if you keep flipping heads) until four tosses. But on the fifth toss, if you flip heads you are done, but if you flip tails you are not back to the start-you have potentially flipped the first of your winning series. You will need to consider states where you have flipped some number of tails so far. For the unfair coin problem, aside from that objection, the $\frac 12$ would become $\frac 25$ for the first one because you have a $40\%$ chance to land tails and be back at start. Again, the first four flips work fine (with $0.4^n$) but you need to worry about states with one or more heads in the bank.
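A quick simulation makes the distinction concrete. This is a rough Monte Carlo sketch (my own, for illustration) for the fair-coin pattern discussed above, four tails followed by a head; the state-based analysis the answer describes gives an expected value of 32 for this pattern, as opposed to 62 for five heads in a row.

```python
import random

def flips_until(pattern, p_heads=0.5):
    # count flips until `pattern` (string of 'H'/'T') first appears
    window, n = "", 0
    while window != pattern:
        window = (window + ("H" if random.random() < p_heads else "T"))[-len(pattern):]
        n += 1
    return n

trials = 100_000
print(sum(flips_until("TTTTH") for _ in range(trials)) / trials)  # close to 32
# the unfair-coin version of the question: flips_until("HHHHT", p_heads=0.6)
```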
Simplex on Linear Program with equations My linear program contains one equation in addition to the inequalities. I do not understand how to handle this: in every tutorial I searched, the procedure is to add slack variables to convert the inequalities to equations. My LP is the following:

Minimize x4
Subject to:
3x1+7x2+8x3 <= x4
9x1+5x2+7x3 <= x4
5x1+6x2+7x3 <= x4
x1+x2+x3 = 1

I tried to add slack variables w1,w2,w3 to convert the inequalities to equations but then I do not understand how to find an initial feasible solution. I am aware of the 2-phase simplex but I do not understand how to use it here. Do I have the right to add a slack variable w4 to cope with the last equation? As far as I understand, if I do that I will change the LP. How should I start to cope with this LP? Can I use as an initial feasible solution the vector (0,0,1,0) for example? This is not a homework question (preparatory question for exams)! I do not ask for a complete solution, just for a hint to get unstuck from the equation problem. Edit: I am not able to solve this. And I am not able to prove that it is infeasible. The fact that I have so many zeros in the $b_i$ creates problems for me!
I have to be honest, my simplex is rusty. But perhaps you could split the equation into two inequalities: $$x_1+x_2+x_3\leq 1$$ $$-x_1-x_2-x_3\leq -1$$ This is exactly what some solvers do that can't handle mixtures of inequalities and equations.
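For what it's worth, here is a sketch of that suggestion in SciPy. Modern solvers accept equality constraints directly, but the equation is split into two inequalities on purpose to mirror the idea above; nonnegativity of $x_1,x_2,x_3$ is my assumption, since the post does not state sign constraints.

```python
from scipy.optimize import linprog

# variables: x1, x2, x3, x4 ; objective: minimise x4
c = [0, 0, 0, 1]
A_ub = [
    [3, 7, 8, -1],     # 3x1 + 7x2 + 8x3 - x4 <= 0
    [9, 5, 7, -1],
    [5, 6, 7, -1],
    [1, 1, 1, 0],      #  x1 + x2 + x3 <= 1
    [-1, -1, -1, 0],   # -x1 - x2 - x3 <= -1  (together: x1 + x2 + x3 = 1)
]
b_ub = [0, 0, 0, 1, -1]
bounds = [(0, None)] * 3 + [(None, None)]   # x1..x3 >= 0, x4 free

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x, res.fun)
```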
Analytically continue a function with Euler product I would like to estimate the main term of the integral $$\frac{1}{2\pi i} \int_{(c)} L(s) \frac{x^s}{s} ds$$ where $c > 0$, $\displaystyle L(s) = \prod_p \left(1 + \frac{2}{p(p^s-1)}\right)$. Question: How to estimate the integral? In other words, is there any way to analytic continue this function? The function as stated converges for $\Re s > 0$, but I'm not sure how to extend it past $y$-axis. Thanks!
Let $\rho(d)$ count the number of solutions $x$ in $\frac{Z}{dZ}$, to $x^2\equiv \text{-1 mod d}$, then we have $$\sum_{n\leq x}d(n^2+1)=2x\sum_{n\leq x}\frac{\rho(n)}{n}+O(\sum_{n\leq x}\rho(n))$$ By multiplicative properties of $\rho(n)$ we have, $$\rho(n)=\chi(n)*|\mu(n)|$$ Where $\chi(n)$ is the non principal character modulo $4$ Which allows us to estimate, $$\sum_{n\leq x}\frac{\rho(n)}{n}=\sum_{n\leq x}\frac{\chi(n)*|\mu(n)|}{n}=\sum_{n\leq x}\frac{|\mu(n)|}{n}\sum_{k\leq \frac{x}{n}}\frac{\chi(k)}{k}=\sum_{n\leq x}\frac{|\mu(n)|}{n}(L(1,\chi)+O(\frac{n}{x}))=L(1,\chi)\sum_{n\leq x}\frac{|\mu(n)|}{n}+O(1)=\frac{\pi}{4}\sum_{n\leq x}\frac{|\mu(n)|}{n}+O(1)=\frac{\pi}{4}(\frac{6}{\pi^2}\ln(x)+O(1))$$ So that, $$\sum_{n\leq x}\frac{\rho(n)}{n}=\frac{3}{2\pi}\ln(x)+O(1)$$ Also note that, $$\sum_{n\leq x}\rho(n)=\sum_{n\leq x}{\chi(n)*|\mu(n)|}=\sum_{n\leq x}|\mu(n)|\sum_{k\leq \frac{x}{n}}\chi(k)\leq\sum_{n\leq x}|\mu(n)|\leq x$$ So we have, $$\sum_{n\leq x}\rho(n)=O(x)$$ Which gives, $$\sum_{n\leq x}d(n^2+1)=\frac{3}{\pi}x\ln(x)+O(x)$$
Is the ideal $(X^2-3)$ proper in $\mathbb{F}[[X]]$? Let $\mathbb{F}$ be a field and $R=\mathbb{F}[[X]]$ be the ring of formal power series over $\mathbb{F}$. Is the ideal $(X^2-3)$ proper in $R$? Does the answer depend upon $\mathbb{F}$? Clearly $X^2-3=(X+\sqrt3)(X-\sqrt3)$ and hence $X^2-3$ is not zero. I have no idea whether the ideal is proper or not. So far, I didn't learn any theorem to prove an ideal is proper. Perhaps I should start with definition of proper ideal? Find one element in $R$ but not in the ideal ?
The element $\sum _{i \geq 0} a_i X^i \in R$ is invertible in $R$ if and only if $a_0\neq 0$. The key to the proof of that relatively easy result is the identity $(1-X)^{-1}=\sum_{i \geq 0} 1\cdot X^i \in R $ In your question $a_0=-3$, so the element $X^2-3$ is invertible in $R=F[[X]]$ (which is equivalent to the ideal $(X^2-3)\subset R$ not being proper) if and only if the characteristic of the field $F$ is not $3$. Equivalently: $$ (X^2-3)\subset R \;\text {proper} \iff \operatorname{char} F = 3.$$
Addition table for a 4 element field Why is this addition table good, \begin{matrix} \mathbf{+} & \mathbf{0} & \mathbf{1} & \mathbf{a} & \mathbf{b}\\ \mathbf{0} & 0 & 1 & a & b\\ \mathbf{1} & 1 & 0 & b & a\\ \mathbf{a} & a & b & 0 & 1\\ \mathbf{b} & b & a & 1 & 0 \end{matrix} and this one isn't, what makes it not work? \begin{matrix} \mathbf{+} & \mathbf{0} & \mathbf{1} & \mathbf{a} & \mathbf{b}\\ \mathbf{0} & 0 & 1 & a & b\\ \mathbf{1} & 1 & a & b & 0\\ \mathbf{a} & a & b & 0 & 1\\ \mathbf{b} & b & 0 & 1 & a \end{matrix} It's clear that $0$ and $a$ change places in the second table but I can't find an example that refutes any of the addition axioms.
If you have only one operation, it is difficult to speak about a field. But it is well known that: 1) there exist exactly two groups (up to isomorphism) with 4 elements: one is ${\mathbb Z}/2{\mathbb Z}\times{\mathbb Z}/2{\mathbb Z}$ (the first table) and the other one is ${\mathbb Z}/4{\mathbb Z}$ (the second table); 2) there exists exactly one field (up to isomorphism) with 4 elements, and its additive group is isomorphic to ${\mathbb Z}/2{\mathbb Z}\times{\mathbb Z}/2{\mathbb Z}$
understanding $\mathbb{R}$/$\mathbb{Z}$ I am having trouble understanding the factor group, $\mathbb{R}$/$\mathbb{Z}$, or maybe i'm not. Here's what I am thinking. Okay, so i have a group $G=(\mathbb{R},+)$, and I have a subgroup $N=(\mathbb{Z},+)$. Then I form $G/N$. So this thing identifies any real number $x$ with the integers that are exactly 1 unit step away. So if $x=\frac{3}{4}$, then $[x]=({...,\frac{-5}{4},\frac{-1}{4},\frac{3}{4},\frac{7}{4},...})$ and i can do this for any real number. So therefore, my cosets are unit intervals $[0,1)+k$, for integers $k$. Herstein calls this thing a circle and I was not sure why, but here's my intuition. The unit interval is essentially closed and since every real number plus an integer identifies with itself, these "circles" keep piling up on top of each other as if its one closed interval. Since it's closed it is a circle. Does that make sense? Now how do I extend this intuition to this? $G'=[(a,b)|a,b\in{\mathbb{R}}], N'=[(a,b)|a,b\in{\mathbb{Z}}].$ What is $G'/N'$? How is this a torus? I can't get an intuitive picture in my head... EDIT: Actually, are the cosets just simply $[x]=[x\in{\mathbb{R}}|x+k,k\in{\mathbb{Z}}]?$
You can also use the following nice facts. I hope you are inspired by them. $$\mathbb R/\mathbb Z\cong T\cong\prod_p\mathbb Z(p^{\infty})\cong\mathbb R\oplus(\mathbb Q/\mathbb Z)\cong\mathbb C^{\times}$$
Formulate optimization problem My research area has "nothing to do with mathematics" but I still find it full of optimization problems. Therefore, I would like to learn to formulate and solve such problems, even though I am not encouraged to do it (at least at the moment; maybe the situation will change after I have proved my point :-). Currently, I have tried to get familiar with gradient methods (gradient descent), and I think I understand some of the basic ideas now. Still I find it difficult to put my problems into mathematical formulas, let alone solve them. The ingredients I have for my optimization problem are: 1) My data: two vectors $x = (x_{0}, ..., x_{N})$ and $y = (y_{0}, ..., y_{N})$, both having $N$ samples. 2) A function $f(a, b)$ which tells me something about the relation of vectors $a$ and $b$. What I want to do is: Find a square matrix $P$ (of size 2 x 2) such that the value of $f(z_{1}, z_{2})$, where $z = P [x, y]^{T}$, becomes minimal. To clarify (sorry, I'm not sure if my notation is completely correct) I mean that $z$ is computed as: $z_{1} = p_{11}x + p_{12}y\\ z_{2} = p_{21}x + p_{22}y$. How would one squeeze all this into a problem to be solved using an optimization method like gradient descent? All help is appreciated. Please note that my mathematical background is not too solid; I know only some very basic calculus and linear algebra.
The notation in the question looks fine. So, you have a function $F$ of four real variables $p_{11},\dots,p_{22}$, defined by $$F(p_{11},p_{12},p_{21},p_{22}) = f(p_{11}x+p_{12}y,\ p_{21}x+p_{22}y) \tag2$$ If $f$ is differentiable, then so is $F$. Therefore, the gradient descent can be used; how successful it will be depends on $f$. From the question it's not clear what kind of function $f$ is. Some natural functions like $f(z_1,z_2)=\|z_1-z_2\|^2$ would make the problem easy, but also uninteresting because the minimum is attained, e.g., at $p_{11}=p_{21}=1$, $p_{12}=p_{22}=0$, because these values make $z_1=z_2$. Using the chain rule, one can express the gradient of $F$ in terms of the gradient of $f$ and the vectors $x,y$. Let's write $f_{ik}$ for the partial derivative of $f(z_1,z_2)$ with respect to the $k$th component of $z_i$. Here the index $i$ takes values $1,2$ only, while $k$ ranges from $0$ to $N$. With this notation, $$\begin{split} \frac{\partial F}{\partial p_{11}}&=\sum_{k=0}^N x_k f_{1k}(p_{11}x+p_{12}y,\ p_{21}x+p_{22}y) \\ \frac{\partial F}{\partial p_{12}}&=\sum_{k=0}^N y_k f_{1k}(p_{11}x+p_{12}y,\ p_{21}x+p_{22}y) \\ \frac{\partial F}{\partial p_{21}}&=\sum_{k=0}^N x_k f_{2k}(p_{11}x+p_{12}y,\ p_{21}x+p_{22}y) \\ \frac{\partial F}{\partial p_{22}}&=\sum_{k=0}^N y_k f_{2k}(p_{11}x+p_{12}y,\ p_{21}x+p_{22}y) \\ \end{split} \tag1$$ The formulas (1) would be more compact if instead of $x,y$ the data vectors were called $x^{(1)}$ and $x^{(2)}$. Then (1) becomes $$ \frac{\partial F}{\partial p_{ij}}=\sum_{k=0}^N x^{(j)}_k f_{ik}(p_{11}x^{(1)}+p_{12}x^{(2)},\ p_{21}x^{(1)}+p_{22}x^{(2)}) \tag{1*}$$ For more concrete advice, it would help to know what kind of function $f$ you have in mind, and whether the matrix $P$ needs to be of any special kind (orthogonal, unit norm, etc).
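To make this concrete, here is a bare-bones gradient-descent sketch in NumPy. The particular $f$, the step size, and the iteration count are placeholders of my own choosing; the gradient of $F$ is assembled from the gradient of $f$ via the chain rule exactly as in (1).

```python
import numpy as np

def f(z1, z2):
    return np.sum((z1 * z2 - 1.0) ** 2)      # an arbitrary smooth stand-in for f

def grad_f(z1, z2):
    r = 2.0 * (z1 * z2 - 1.0)
    return r * z2, r * z1                    # the vectors (f_{1k}) and (f_{2k})

def F_and_grad(P, x, y):
    z1 = P[0, 0] * x + P[0, 1] * y
    z2 = P[1, 0] * x + P[1, 1] * y
    g1, g2 = grad_f(z1, z2)
    grad_P = np.array([[g1 @ x, g1 @ y],     # chain rule, formulas (1)
                       [g2 @ x, g2 @ y]])
    return f(z1, z2), grad_P

rng = np.random.default_rng(0)
x, y = rng.normal(size=50), rng.normal(size=50)   # stand-ins for the data vectors
P = np.eye(2)
val0 = F_and_grad(P, x, y)[0]
for _ in range(2000):                             # plain gradient descent
    val, g = F_and_grad(P, x, y)
    P -= 1e-3 * g
print(val0, val)                                  # the objective decreases
```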
Moment of inertia of a circle A wire has the shape of the circle $x^2+y^2=a^2$. Determine the moment of inertia about a diameter if the density at $(x,y)$ is $|x|+|y|$ Thank you
Consider a small segment of the wire, going from $\theta$ to $\theta +d\theta$. The length of the small segment is $a \,d\theta$. The density varies, but is approximately $a|\cos\theta|+a|\sin \theta|$. Take a particular diameter, say with rectangular equation $y=(\tan\phi) x$, or better, $x\sin \phi -y\cos\phi=0$. The perpendicular distance from $(a\cos\theta,a\sin\theta)$ to this diameter is $|a\cos\theta\sin\phi -a\sin\theta\cos\phi|$. So for the moment of inertia, we need to find $$\int_0^{2\pi} \left(a|\cos\theta|+a|\sin \theta|\right)\left(a\cos\theta\sin\phi -a\sin\theta\cos\phi\right)^2a\,d\theta.$$ The integration is doable, but not easy. Special cases such as $\phi=0$ or $\phi=\pi/4$ will not be too hard. Remark: The perpendicular distance from a point $(p,q)$ to the line with equation $ax+by+c=0$ is $$\frac{|ap+bq+c|}{\sqrt{a^2+b^2}}.$$ There is a reasonably good discussion of the formula in Wikipedia.
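If it helps, the special case $\phi=0$ (the diameter along the $x$-axis) can be checked numerically: the integral then reduces to $a^4\int_0^{2\pi}(|\cos\theta|+|\sin\theta|)\sin^2\theta\,d\theta = 4a^4$. A small SciPy sketch:

```python
import numpy as np
from scipy.integrate import quad

a, phi = 1.0, 0.0
integrand = lambda t: (a*abs(np.cos(t)) + a*abs(np.sin(t))) \
                      * (a*np.cos(t)*np.sin(phi) - a*np.sin(t)*np.cos(phi))**2 * a

I, err = quad(integrand, 0, 2*np.pi, limit=200)
print(I)   # approximately 4.0 (= 4 a^4 with a = 1)
```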
Weather station brain teaser I am living in a world where tomorrow will either rain or not rain. There are two independent weather stations (A,B) that can predict the chance of raining tomorrow with equal probability 3/5. They both say it will rain, what is the probability of it actually rain tomorrow? My intuition is to look at the complementary, i.e. $$1-P(\text{not rain | A = rain and B = rain}) = \frac{21}{25}$$ However, using the same methodology, the chance of it not raining tomorrow is: $$1-P(\text{rain | A = rain and B = rain}) = \frac{16}{25}$$ Clearly they do not add up to 1! Edit: corrected the probability to 3/5 ** Edit 2: There seems to be a problem using this method. Say the probability of getting a right prediction is 1/2, so basically there is no predictability power. The using the original argument, the probability of raining is $1-\frac{1}{2}*\frac{1}{2} = \frac{3}{4}$ which is also non-sensical
After a couple of months of thinking, a friend of mine has pointed out that the question lacks one piece of information: the unconditional probability of rain, $P(rain)$. The logic is that, if one lives in an area where it is certain to rain every day, the probability of rain is always 1. Then Kaz's analysis will give the wrong answer (9/13). In fact, the probability that it actually rains, given that both stations predict rain, should be: $$\frac{P(rain)(3/5)^2}{P(rain)(3/5)^2+(1-P(rain))(2/5)^2}$$ Kaz's answer is correct if the prior probability of rain is $1/2$. Cheers.
Which texts on number theory do you recommend? My close friend intends to study number theory and he asked me if I know a good text on it, so I thought that you guys could help me to help him! He is looking for a text for beginners, suitable for a first course, and he will study it on his own. So what texts do you recommend? Are there any lectures or videos online that can help him? Also, what do you advise him?
My two pennyworth: for introductory books try ... * *John Stillwell, Elements of Number Theory (Springer 2002). This is by a masterly expositor, and is particularly approachable. *G.H. Hardy and E.M. Wright, An Introduction to the Theory of Numbers (OUP 1938, and still going strong with a 6th edition in 2008). Also aimed at beginning undergraduate mathematicians and pleasingly accessible. *Alan Baker, A Comprehensive Course in Number Theory (CUP 2012) is a nice recent textbook (shorter than its title would suggest, too).
Can a ring homomorphism map an integral domain to a non integral domain? I understand that if two rings are isomorphic, and one ring is an integral domain, so must the other be. However, consider two rings, both commutative rings with unity. Is it possible that one ring contains zero divisors and one does not while there exists a ring homomorphism between the two? There could not be an isomorphism between the two rings because there would be no one to one or onto mapping between the two rings. But could there be an operation preserving mapping between an integral domain and a commutative ring with unity and with zero divisors? Clearly if such a mapping did exist it would not seem to be one to one or onto, but does this rule out the potential of a homomorphism existing between the two?
Given any unital ring $R$ (with multiplicative identity $1_R$, say), there is a unique ring homomorphism $\Bbb Z\to R$ (take $1\mapsto 1_R$ and "fill in the blanks" from there). This may be an injective map, even if $R$ has zero divisors. For example, take $$R=\Bbb Z[\epsilon]:=\Bbb Z[x]/\langle x^2\rangle.$$ Surjective examples are easy to come by. You are of course correct that such maps cannot be bijective if $R$ has zero divisors.
Does $\frac{x}{x}=1$ when $x=\infty$? This may be a dumb question: Does $\frac{x}{x}=1$ when $x=\infty$? I understand why $\frac{x}{x}$ is undefined when $x=0$: This can cause errors if an equation is divided by $x$ without restrictions. Also, $\frac{\infty}{\infty}$ is undefined. So when I use $\frac{x}{x}=1$ to simplify an equation, can it also lead to errors because $x$ can equal infinity? Or is $x=\infty$ meaningless?
You cannot really say $x = \infty$ because $\infty \not \in \mathbb{R}$. What you do is take the limit. A limit does not mean that $x=a$, but that $x$ is getting closer and closer to $a$. For example: $$\lim_{x\to 0^+}\frac{1}{x}=\infty$$ because the divisor gets smaller and smaller: $$\frac{1}{2}=0.5 \\\frac{1}{1}=1 \\\frac{1}{0.5}=2\\..$$ So this is growing and growing, but $\frac{1}{0}$ is mathematically nonsense. So it's quite simple: $$\lim_{x\to \infty}\frac{x}{x}=\lim_{x\to\infty}\frac{1}{1}=1$$ In that case you simply cancel the $x$. That way the limit and the $\infty$ vanish.
A Criterion for Surjectivity of Morphisms of Sheaves? Suppose that $f: \mathcal{F} \rightarrow \mathcal{G}$ is a morphism of sheaves on a topological space $X$. Consider the following statements. 1) $f$ is surjective, i.e. $\text{Im } f = \mathcal{G}$. 2) $f_{p}: \mathcal{F}_p \rightarrow \mathcal{G}_p$ is surjective for every $p\in X$. $(1) \Rightarrow (2)$ is always true. I was wondering if $(2) \Rightarrow (1)$ is also true and I found Germ and sheaves problem of injectivity and surjectivity, which claims it positively. I double check all the details of the arguments made in that thread and found no mistake. I just want a confirmation that $(1) \Leftrightarrow (2)$ is right, so that we have a criterion for surjectivity of morphisms of sheaves. In case you asked why this fact, which is already established in a previous thread, is repeated here, followings are my reasons: a) I'm always skeptical, even with myself; b) I have not seen this statement in popular texts. Maybe it's in EGA but I can't read French (if it is, would be nice if someone points it out please!); c) In the proof for the fact that $f$ is isomorphic iff $f_p$ is isomorphic for all $p \in X$ (Prop 1.1, p.g. 63, Hartshorne's), to prove that $\mathcal{F}(U) \rightarrow \mathcal{G}(U)$ is surjective for all $U$ open, the proof requires injectivity of $f_p$ for all $p \in X$. This is not directly related to our situation but it provides one a caution that surjectivity of $f$ is a little bit subtle. Thanks!
This can be found in every complete introduction to sheaves or algebraic geometry and comes down to the fact that the functor $F \mapsto (F_x)_{x \in X}$ is faithful and exact.
Use L'Hopital's rule to evaluate $\lim_{x \to 0} \frac{9x(\cos4x-1)}{\sin8x-8x}$ $$\lim_{x \to 0} \frac{9x(\cos4x-1)}{\sin8x-8x}$$ I have done this problem a couple of times and could not get the correct answer. Here is the work I have done so far http://imgur.com/GDZjX26 . The correct answer was $\frac{27}{32}$, did I differentiate wrong somewhere?
You have to use L'Hôpital's rule 3 times. We have $$\begin{align} \lim_{x\to 0}\frac{9x(\text{cos}(4x)-1)}{\text{sin}(8x)-8x}&=\lim_{x\to 0}\frac{(9 (\text{cos}(4 x)-1)-36 x \text{sin}(4 x))}{(8 \text{cos}(8 x)-8)}\\ &=\lim_{x\to 0}\frac{-1}{64}\frac{(-72 \text{sin}(4 x)-144 x \text{cos}(4 x))}{\text{sin}(8x)}\\&=\lim_{x\to 0}\frac{-1}{512}\frac{(576 x \text{sin}(4 x)-432 \text{cos}(4 x))}{\text{cos}(8x)}\\&=\frac{432}{512}\\&=\frac{27}{32}.\end{align}$$
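A one-line symbolic check of the final value (SymPy):

```python
import sympy as sp

x = sp.symbols('x')
expr = 9*x*(sp.cos(4*x) - 1) / (sp.sin(8*x) - 8*x)
print(sp.limit(expr, x, 0))   # 27/32
```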
Calculating a complex derivative of a polynomial What are the rules for derivatives with respect to $z$ and $\bar{z}$ in polynomials? For instance, is it justified to calculate the partial derivatives of $f(z,\bar{z})=z^3-2z+\bar{z}-(\overline{z-3i})^4$ as if $z$ and $\bar{z}$ were independent? i.e. $f_z=3z^2-2$ and $f_\bar{z}=1-4(\overline{z-3i})^3$
I would first write $$ f(z,\bar z)=z^3−2z+\bar z−(\bar z+3i)^4 $$ and then treat $z$ and $\bar z$ as independent parameters. Then I have $$f_z=3z^2−2$$ $$f_{\bar z}=1−4(\bar z+3i)^3$$ Am I right?
number of terms of a sum required to get a given accuracy How do I find the number of terms of a sum required to get a given accuracy? For example, a text says that to get the sum $\zeta(2)=\sum_{n=1}^{\infty}{\frac{1}{n^2}}$ to 6 d.p. of accuracy, I need to add 1000 terms. How do I find it for a general series?
If you have a sum $S=\sum_{n=1}^{\infty} a(n)$ that you want to estimate with a partial sum, denote by $R$ the residual error $$ R(N) = S-\sum_{n=1}^N a(n) = \sum_{n=N+1}^\infty a(n) $$ If all $a(n)$ are nonnegative then $R(N)\ge a(N+1)$, so to estimate within a given accuracy $\epsilon$ you need $N$ at least large enough that $a(N+1)<\epsilon$. So you can tell that to get $\sum_{n=1}^{\infty}\frac{1}{n^2}$ to six decimal places of accuracy, i.e. within $\frac{1}{1000000}$, you will need at least 1000 terms, since for $n\le 1000$ each new term is at least that size. Unfortunately this is not sufficient. If $a(n)$ shrinks slowly $R(N)$ may be much bigger than $a(N+1)$. For your example 1000 terms is only accurate to about $1/1000$: $$ \zeta(2)=1.64493\cdots ~~,~~ \sum_{n=1}^{1000}\frac{1}{n^2}=1.64393\cdots $$ If you can find a decreasing function $b$ on $\mathbb R$ that is an upper bound $b(n)\ge a(n)$ at the integers, then you can bound $R(N)$ by observing that $$ \int_{N}^{N+1} b(x)dx > \min_{N\le x\le N+1} b(x) = b(N+1)\ge a(N+1) \\ \int_{N+1}^{N+2} b(x)dx > b(N+2)\ge a(N+2) \\ \cdots \\ \int_{N}^\infty b(x)dx > R(N) $$ For $\zeta(2)$ choose $b(x)=x^{-2}$, then $R(N)<N^{-1}$, so for 6 decimal places $10^6$ terms would suffice.
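A small numerical illustration of the last two bounds for $\zeta(2)$ (plain Python; the residual after $N$ terms sits just below the integral bound $1/N$):

```python
import math

S = math.pi**2 / 6                     # exact value of zeta(2)
for N in (10**3, 10**6):
    partial = sum(1.0 / n**2 for n in range(1, N + 1))
    print(N, S - partial, 1.0 / N)     # residual R(N) vs. the bound 1/N
```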
Proof of an Integral inequality Let $f\in C^0(\mathbb R_+,\mathbb R)$ and $a\in\mathbb R_+$, $f^*(x)=\dfrac1x\displaystyle\int_0^xf(t)\,dt$ when $x>0$, and $f^*(0)=f(0)$. Show that $$ \int_0^a(f^*)^2(t)\,dt\le4\int_0^af^2(t)\,dt$$ I tried integration by part without success, and Cauchy-Schwarz is not helping here. Thanks for your help.
Assume without loss of generality that $f\geqslant0$. Writing $(f^*)^2(t)$ as $$ (f^*)^2(t)=\frac1{t^2}\int_0^tf(y)\left(\int_0^tf(x)\mathrm dx\right)\mathrm dy=\frac2{t^2}\int_0^tf(y)\left(\int_0^yf(x)\mathrm dx\right)\mathrm dy, $$ and using Fubini, one sees that $$ A=\int_0^a(f^*)^2(t)\mathrm dt=2\int_0^af(y)\int_0^yf(x)\mathrm dx\int_y^a\frac{\mathrm dt}{t^2}\mathrm dy, $$ hence $$ A\leqslant2\int_0^af(y)\frac1y\int_0^yf(x)\mathrm dx\,\mathrm dy=2\int_0^af(y)f^*(y)\mathrm dy. $$ Cauchy-Schwarz applied to the RHS yields $$ A^2\leqslant4\left(\int_0^af(y)f^*(y)\mathrm dy\right)^2\leqslant4\int_0^af^2(y)\mathrm dy\cdot\int_0^a(f^*)^2(y)\mathrm dy=4A\int_0^af^2(y)\mathrm dy, $$ and the result follows.
Automorphism of Graph $G^n$ I try to define the automorphism of $G^n$ where $G$ is a graph and $G^n = G \Box \ldots \Box G$,( $n$ times, $\Box$ is the graph product). I think that : $\text{Aut}(G^n)$ is $\text{Aut}(G) \wr S_n$ where $S_n$ is the symmetric group of $\{1,\ldots,n\}$ but I have no idea how to prove it because I am a beginner in group theory. Can you help me or suggest me a reference on this subject ? Thanks a lot.
You are right, provided you assume that $G$ is prime relative to the Cartesian product, the automorphism group of the $n$-th Cartesian power of $G$ is the wreath product as you stated. The standard reference for this is Hammack, Richard; Imrich, Wilfried; Klavžar, Sandi: Handbook of product graphs. (There is an older version of this, written by Imrich and Klavžar alone, which would serve just as well.) Unfortunately there does not seem to be much on the internet on this subject.
Evaluation of a limit with integral Is this limit $$\lim_{\varepsilon\to 0}\,\,\varepsilon\int_{\mathbb{R}^3}\frac{e^{-\varepsilon|x|}}{|x|^2(1+|x|^2)^s}$$ with $s>\frac{1}{2}$, zero?. The limit of a product is the product of limit, so I evaluate $$\lim_{\varepsilon\to 0}\,\,\int_{\mathbb{R}^3}\frac{e^{-\varepsilon|x|}}{|x|^2(1+|x|^2)^s}$$. With the theorem of dominated convergence the last limit equals $$\int_{\mathbb{R}^3}\frac{1}{|x|^2(1+|x|^2)^s}=4\pi\int_{0}^{+\infty}\frac{1}{(1+r^2)^s}=C<\infty$$ (I have used the fact that $s>\frac{1}{2}$) Using the product rule I have the result. Have I made some mistake?
What you did is correct. The only thing you have to take care is that in general, dominated convergence theorem applies for sequences. Here there is no problem since the convergence is monotonic.
In what sense is the derivative the "best" linear approximation? I am familiar with the definition of the Frechet derivative and it's uniqueness if it exists. I would however like to know, how the derivative is the "best" linear approximation. What does this mean formally? The "best" on the entire domain is surely wrong, so it must mean the "best" on a small neighborhood of the point we are differentiating at, where this neighborhood becomes arbitrarily small? Why does the definition of the derivative formalize precisely this? Thank you in advance.
Say the graph of $L$ is a straight line and at one point $a$ we have $L(a)=f(a)$. And suppose $L$ is the tangent line to the graph of $f$ at $a$. Let $L_1$ be another function passing through $(a,f(a))$ whose graph is a straight line. Then there is some open interval $(a-\varepsilon,a+\varepsilon)$ such that for every $x$ in that interval, the value of $L(x)$ is closer to the value of $f(x)$ than is the value of $L_1(x)$. Now one might then have another line $L_2$ through that point whose slope is closer to that of the tangent line than is that of $L_1$, such that $L_2(x)$ actually comes closer to $f(x)$ than does $L(x)$, for some $x$ in that interval. But now there is a still smaller interval $(a-\varepsilon_2,a+\varepsilon_2)$, within which $L$ beats $L_2$. For every line except the tangent line, one can make the interval small enough so that the tangent line beats the other line within that interval. In general there's no one interval that works no matter how close the rival line gets. Rather, one must make the interval small enough in each case separately.
how to prove: $A=B$ iff $A\bigtriangleup B \subseteq C$ I am given this: $A=B$ iff $A\bigtriangleup B \subseteq C$. And $A\bigtriangleup B :=(A\setminus B)\cup(B\setminus A)$. I don't know how to prove this and I don't know where to start. Please give me guidance.
Hint: For an arbitrary set $C$, what is the one and only set that is the subset of every set? So given $\,A\triangle\,B \subseteq C$, where $C$ is any arbitrary set, what does this tell you about the set $A\triangle B$? And what does that tell you about the relationship between $A$ and $B$?
How can I show that $(S^{\perp})^{\perp}$ is a finite dimensional vector space? Let $H$ be a Hilbert space and $S\subseteq H$ be a finite subset. How can I show that $(S^{\perp})^{\perp}$ is a finite dimensional vector space?
What you want to prove is that, for any $S\subset H$, $$ S^{\perp\perp}=\overline{\mbox{span}\,S} $$ One inclusion is easy if you notice that $S^{\perp\perp}$ is a closed subspace that contains $S$. The other inclusion follows from $$ H=\overline{\mbox{span}\,S}\oplus S^\perp $$ and the uniqueness of the orthogonal complement.
Contour Integral of $\int \frac{a^z}{z^2}\,dz$. My task is to show $$\int_{c-i\infty}^{c+i\infty}\frac{a^z}{z^2}\,dz=\begin{cases}\log a &:a\geq1\\ 0 &: 0<a<1\end{cases},\qquad c>0.$$So, I formed the contour consisting of a semi-circle of radius $R$ and center $c$ with a vertical line passing through $c$. I am having two problems. I can show that along this outer arc, the integral will go to zero if and only if $\log a\geq0$, or equivalently, $a\geq1$; the problem is that the integral of this contour should be $2\pi i\cdot \text{Res}(f;0)$, so for $a\geq1$, I find $$\int f(z)=2\pi i\log a,\qquad a\geq1.$$My second problem occurs when $0<a<1$, I can no longer get the integral along the arc to go to zero as before. Am I making a mistake in my first calculation, or is the problem asking to show something that is wrong? For the second case, how do I calculate this integral?
For $a>1$, consider the contour $$(c-iT \to c+iT) \cup (c+iT \to -R +iT) \cup (-R+iT \to -R - iT) \cup (-R-iT \to c-iT),$$where $R>0$. For $a<1$, consider the contour $$(c-iT \to c+iT) \cup (c+iT \to R +iT) \cup (R+iT \to R - iT) \cup (R-iT \to c-iT),$$where $R>0$. Then let $R \to \infty$ and then $T \to \infty$. The main reason for choice of these contours is that * *$a^{-R} \to 0$ for $a > 1$, as $R \to \infty$. *$a^{R} \to 0$ for $a < 1$, as $R \to \infty$. For $a>1$, the contour encloses a pole of the integrand at $z=0$ and hence this contribution will be reflected in the integral $\left( \text{recall that }a^z = 1 + z \color{red}{\log(a)} + \dfrac{z^2 \log^2(a)}{2!} + \cdots \right)$, whereas for $a<1$, the integrand is analytic in the region enclosed by the contour.
Primes in $\mathbb{Z}[i]$ I need a bit of help with this problem Let $x \in \mathbb{Z}[i]$ and suppose $x$ is prime, therefore $x$ is not a unit and cannot be written as a product of elements of smaller norm. Prove that $N(x)$ is either prime in $\mathbb{Z}$ or else $N(x) = p^2$ for some prime $p \in \mathbb{Z}$. thanks.
Hint $\ $ Prime $\rm\:w\mid ww' = p_1^{k_1}\!\cdots p_n^{k_n}\:\Rightarrow\:w\mid p_i\:\Rightarrow\:w'\mid p_i' = p_i\:\Rightarrow\:N(w) = ww'\mid p_i^2$ Here $'$ denotes the complex conjugation.
Characterization of short exact sequences The following is the first part of Proposition 2.9 in "Introduction to Commutative Algebra" by Atiyah & Macdonald. Let $A$ be a commutative ring with $1$. Let $$M' \overset{u}{\longrightarrow}M\overset{v}{\longrightarrow}M''\longrightarrow 0\tag{1} $$ be sequence of $A$-modules and homomorphisms. Then the sequence (1) is exact if and only if for all $A$-modules $N$, the sequence $$0\longrightarrow \operatorname{Hom}(M'', N) \overset{\overline{v}}{\longrightarrow}\operatorname{Hom}(M, N)\overset{\overline{u}}{\longrightarrow}\operatorname{Hom}(M', N) \tag{2} $$ is exact. Here, $\overline{v}$ is the map defined by $\overline{v}(f)=f\circ v$ for every $f\in\operatorname{Hom}(M'', N)$ and $\overline{u}$ is defined likewise. The proof one of direction, namely $(2)\Rightarrow (1)$ is given in the book, which I am having some trouble understanding. So assuming (2) is exact sequence, the authors remark that "since $\overline{v}$ is injective for all $N$, it follows that $v$ is surjective". Could someone explain why this follows? Given that $\overline{v}$ is injective, we know that whenever $f(v(x))=g(v(x))$ for all $x\in M$, we have $f=g$. I am not sure how we conclude from this surjectivity of $v$. Thanks!
I think I have found the solution using Zach L's hint. Let $N=\operatorname{coker}(v)=M''/\operatorname{Im}(v)$, and let $p\in\operatorname{Hom}(M'', N)$ be the canonical map $p: M''\to M''/\operatorname{Im(v)}=N$. We observe for every $x\in M$, we have $$p(v(x))=v(x)+\operatorname{Im}(v)=0+\operatorname{Im}(v)=0_{M''/\operatorname{Im(v)}}$$ So $p\circ v=0$. But we know that $\overline{v}$ is injective, that is, $\ker{\overline{v}}=\{0\}$. So $\overline{v}(p)=p\circ v=0$ implies $p=0$ (that is, the identically zero map), from which we get $M''/\operatorname{Im}(v)=0$, that is, $\operatorname{Im}(v)=M''$, proving that $v$ is surjective.
Analytical Solution to a simple l1 norm problem Can we solve this simple optimization problem analytically? $ \min_{w}\dfrac{1}{2}\left(w-c\right)^{2}+\lambda\left|w\right| $ where c is a scalar and w is the scalar optimization variable.
Set $f(w)=\frac{1}{2}(w-c)^2+\lambda |w|$, equal to $\frac{1}{2}(w-c)^2\pm\lambda w$. We find $f'(w)=w-c\pm \lambda$. Setting this to zero gives $c\pm \lambda$ as the only critical values of $f$. As $w$ gets large, $f(w)$ grows without bound, so the minimum is going to be at one of the two critical values. At those values, we have $f(c+\lambda)=\frac{\lambda^2}{2}+\lambda|c+\lambda|$, and $f(c-\lambda)=\frac{\lambda^2}{2}+\lambda|c-\lambda|$. Which one is minimal depends on whether $c,\lambda$ are the same sign or different signs. Also need to compare with $f(0)=\frac{c^2}{2}$, $f$ is nondifferentiable there.
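A quick numerical check of this candidate-point analysis (NumPy; the grid search is only there to confirm that the minimiser is always one of $c-\lambda$, $c+\lambda$, $0$):

```python
import numpy as np

def obj(w, c, lam):
    return 0.5 * (w - c)**2 + lam * np.abs(w)

rng = np.random.default_rng(1)
for _ in range(5):
    c, lam = rng.normal(), abs(rng.normal())
    grid = np.linspace(-10, 10, 200001)
    w_grid = grid[np.argmin(obj(grid, c, lam))]          # brute force
    cands = np.array([c - lam, c + lam, 0.0])            # candidates from the answer
    w_cand = cands[np.argmin(obj(cands, c, lam))]
    print(round(w_grid, 3), round(w_cand, 3))            # they agree
```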
Let $X$ and $Y$ be 2 disjoint connected subsets of $\mathbb{R}^{2}$, can $X \cup Y =\mathbb{R}^{2}$? Let $X$ and $Y$ be 2 disjoint connected subsets of $\mathbb{R}^{2}$. Then can $$X \cup Y =\mathbb{R}^{2}$$ I think this cannot be true, but I don't know of a formal proof. Any help would be nice.
Consider $X:=\{(x,0) : x>0\}$ and $Y:=\mathbb{R}^2-X$. Both are connected and disjoint, but $X\cup Y=\mathbb{R}^2$.
Rank of the difference of matrices Let $A$ and $B$ be to $n \times n$ matrices. My question is: Is $\operatorname{rank}(A-B) \geq \operatorname{rank}(A) - \operatorname{rank}(B)$ true in general? Or maybe under certain assumptions?
Set $X=A-B$ and $Y=B$. You are asking whether $\operatorname{rank}(X) + \operatorname{rank}(Y) \ge \operatorname{rank}(X+Y)$. This is true in general. Let $W=\operatorname{Im}(X)\cap\operatorname{Im}(Y)$. Let $U$ be a complementary subspace of $W$ in $\operatorname{Im}(X)$ and $V$ be a complementary subspace of $W$ in $\operatorname{Im}(Y)$. Then we have $\operatorname{Im}(X)=U+W$ and $\operatorname{Im}(Y)=V+W$ by definition and also $\operatorname{Im}(X+Y)\subseteq U+V+W$. Therefore $$\operatorname{rank}(X) + \operatorname{rank}(Y) = \dim U + \dim V + 2\dim W\ge \dim U+\dim V+\dim W \ge\operatorname{rank}(X+Y).$$
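A randomised sanity check of the inequality (NumPy; of course this is no substitute for the proof above):

```python
import numpy as np

rng = np.random.default_rng(0)
rank = np.linalg.matrix_rank
for _ in range(1000):
    n = 6
    rA, rB = rng.integers(0, n + 1), rng.integers(0, n + 1)
    A = rng.normal(size=(n, rA)) @ rng.normal(size=(rA, n))   # rank rA (generically)
    B = rng.normal(size=(n, rB)) @ rng.normal(size=(rB, n))   # rank rB (generically)
    assert rank(A - B) >= rank(A) - rank(B)
print("no counterexample found")
```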
Error estimate, asymptotic error and the Peano kernel error formula Find the error estimate by approximating $f(x)$ and derive a numerical integration formula for $\int_0^l f(x) \,dx$ based on approximating $f(x)$ by the straight line joining $(x_0, f(x_0))$ and $(x_1, f(x_1))$, where the two points $x_0$ and $x_1 = h - x_0$ are chosen so that $x_0, x_1 \in (0, l)$, $x_0 < x_1$ and $\int_0^l {(x - x_0) (x - x_1)} dx = 0$. Derive the error estimate, asymptotic error and the Peano kernel error formula for the composite rule for $\int_a^b f(x) \,dx$. Use the asymptotic error estimate to improve the integration formula. Find the values of $x_0$, $x_1$. I know the Peano Kernel formula will take the form $E_n(f)=1/2($$\int_a^b K(t)\ f''(t) \,dx$$)$ with $K(t)$ being the Peano kernel but am having a tough time getting started on the question. Any help will be greatly appreciated. Thanks a lot!
For this I think you can use the trapezoidal rule. You can approximate $f(x)$ by the straight line joining $(a,f(a))$ and $(b,f(b))$. Then by integrating the formula for this straight line, we get the approximation $$I_1(f)=\frac{(b-a)}{2}[f(a)+f(b)].$$ To get the error formula, note that $$f(x)-\frac{(b-x)f(a)+(x-a)f(b)}{b-a}=(x-a)(x-b)f[a,b,x]$$ I am not sure if this is absolutely correct, maybe someone can verify my answer?
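For what it's worth, here is a minimal sketch of the single-interval trapezoidal rule described above (assuming NumPy; the test integrand is arbitrary):

```python
import numpy as np

def trap(f, a, b):
    # integrate the chord through (a, f(a)) and (b, f(b)): the trapezoidal rule I_1(f)
    return (b - a) / 2 * (f(a) + f(b))

a, b = 0.0, 1.0
exact = np.sin(b) - np.sin(a)        # integral of cos on [a, b]
approx = trap(np.cos, a, b)
print(approx, exact)                 # exact - approx = -(b-a)^3 f''(xi)/12 for some xi in (a, b)
```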
Showing that if $\lim\limits_{x \to a} f'(x)=A$, then $f'(a)$ exists and equals $A$ Let $f : [a; b] \to \mathbb{R}$ be continuous on $[a, b]$ and differentiable in $(a, b)$. Show that if $\lim\limits_{x \to a} f'(x)=A$, then $f'(a)$ exists and equals $A$. I am completely stuck on it. Can somebody help me please? Thanks for your time.
Let $\epsilon>0$. We want to find a $\delta>0$ such that if $0\lt x-a\lt\delta$ then $\left|\dfrac{f(x)-f(a)}{x-a}-A\right|\lt\epsilon$. If $x\gt a$ then the MVT tells us that $\dfrac{f(x)-f(a)}{x-a}=f'(c)$ for some $c\in(a,x)$. Now use that $\displaystyle\lim_{c\to a^+}f'(c)=A$ to find that $\delta$.
How to find a power series representation for a divergent product? Euler used the identity $$ \frac{ \sin(x) }{x} = \prod_{n=1}^{\infty} \left(1 - \frac{x^2}{n^2 \pi^2 } \right) = \sum_{n=0}^{\infty} \frac{ (-1)^n }{(2n + 1)! } x^{2n} $$ to solve the Basel problem. The product is obtained by noting that the sine function is 'just' an infinite polynomial, which can be rewritten as the product of its zeroes. The sum is found by writing down the taylor series expansion of the sine function and dividing by $x$. Now, I am interested in finding the sum representation of the following product: $$ \prod_{n=1}^{\infty} \left(1 - \frac{x}{n \pi} \right) ,$$ which is divergent (see this article). The infinite sum representation of this product is not as easily found (at least not by me) because it does not have an obvious formal representation like $\frac{\sin(x)}{x}$ above. Questions: what is the infinite sum representation of the second product I mentioned? How does one obtain this sum? And is there any 'formal' represenation for these formulae (like $\frac{\sin(x)}{x}$ above).
I never gave the full answer :) $$\prod_{n=1}^{\infty} (1-x/n)$$ When analysing a product it's often easiest to consider the form $\prod_{n=1}^{\infty} (1+f(n))$ given that $\sum_{n=1}^{\infty}f(n)^m=G(m)$; here $f(n)=-x/n$, so $G(m)=(-x)^m\zeta(m)$. Then $$\prod_{n=1}^{\infty} (1-x/n)=e^{\sum_{m=1}^{\infty}\frac{(-1)^{m+1}G(m)}{m}}$$ Because the $\zeta(1)$ term is the only part of the exponent that diverges (giving $e^{-\infty}$), the regularized product tends to $0$ as $\zeta(1)$ goes to infinity. However this is easily fixable: $$\prod_{n=1}^{\infty} (1-x/n)e^{x/n}=e^{\sum_{m=2}^{\infty}\frac{(-1)^{m+1}G(m)}{m}}=e^{\sum_{m=1}^{\infty}\frac{- x^{m+1}\zeta(m+1)}{m+1}}$$ which does converge. To recover the better-known representation: $$\sum_{n=1}^{\infty}-\frac{x^{n+1}\zeta(n+1)}{n+1}=\int_{0}^{-x}\sum_{n=1}^{\infty}(-1)^nz^n\zeta(n+1)dz=\int_{0}^{-x}-H_zdz=-\ln((-x)!)+x\gamma$$ $$\prod_{n=1}^{\infty} (1-x/n)e^{x/n}=\frac{e^{x\gamma}}{(-x)!}$$ And now with the answer I posted 6 years ago, using the refined Stirling numbers: $$\prod_{n=1}^{\infty} (1-x/n)e^{x/n}=\frac{e^{x\gamma}}{(-x)!}=\bigg(1-x^2\zeta(2)/2-x^3\, 2\zeta(3)/6 +x^4 \big(3\zeta(2)^2-6\zeta(4)\big)/24+\dots \bigg)$$ So we can also say, as $p\to\infty$: $$p^x\prod_{n=1}^{p} (1-x/n)=\frac{1}{(-x)!}$$ We can rewrite our $G(m)$ with $G(1)=-x\gamma$ and use it in the formula given above to write a polynomial representation, which is easier than multiplying our previous polynomial by $e^{-x\gamma}$: $$p^x\prod_{n=1}^{p} (1-x/n)= \bigg(1-x\gamma+x^2 \frac{\gamma^2-\zeta(2)}{2}+x^3 \frac{-\gamma^3+3\gamma\zeta(2)-2\zeta(3)}{3!}+\dots\bigg)=\frac{1}{(-x)!}$$ I find it hard to explain these refined Stirling numbers clearly, but if you put in a bit of time, consider the following ways to represent the product representation found above. It is going to be vague, but I can clarify it if you want. The "nice" form lies in knowing $G(m)$ for all $m$. There are many ways to represent the refined Stirling numbers, especially in this context; much of it is related to partitions and ways to "write" a number. E.g. for the $x^4$ term, $4=1+1+1+1$ gives $G(1)^4\cdot 4!/4!/1^4$, and the next term, from $4=1+1+2$, has coefficient $G(1)^2 G(2)\cdot 4!/1^2/2^1/2$. So for each factor $G(s)^a$ we divide by $a!\,s^a$. Another, more intuitive, way is to see it as the "unique" combination of all outcomes. Another way is to write it as these unique combinations, but using sums of sums (particularly cool, and very easy to picture). You can also obtain it algebraically by writing out the exponentials, but that is a real hassle. And if you want yet another way to represent these refined Stirling numbers, you can construct them from previously found Stirling numbers and binomials, which is the most efficient. I always wondered why there was no Wikipedia page about them.
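As a numerical sanity check of the identity $\prod_{n\ge1}(1-x/n)e^{x/n}=e^{\gamma x}/(-x)!$ (a sketch assuming NumPy and SciPy; the truncation point is arbitrary):

```python
import numpy as np
from scipy.special import gamma

x = 0.3
n = np.arange(1, 200001)
lhs = np.prod((1 - x / n) * np.exp(x / n))        # truncated regularized product
rhs = np.exp(np.euler_gamma * x) / gamma(1 - x)   # e^{gamma x} / (-x)!
print(lhs, rhs)                                   # the two values should agree to several decimals
```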
Definition of minimal and characteristic polynomials I have defined the characteristic and minimal polynomial as follows, but have been told this is not strictly correct since det$(-I)$ is not necessarily 1, so my formulae don't match for $A=0$, how can I correct this? Given an $n$-by-$n$ matrix $A$, the characteristic polynomial is defined to be the determinant $\text{det}(A-Ix)=|A-Ix|$, where $I$ is the identity matrix. The characteristic polynomial will be denoted by \begin{equation} \text{char}(x)=(x-x_1)^{M_1}(x-x_2)^{M_2}...(x-x_s)^{M_s}.\nonumber \end{equation} Also, we will denote the minimal polynomial, the polynomial of least degree such that $\psi(A)=\textbf{0}$, by \begin{equation} \psi(x)=(x-x_1)^{m_1}(x-x_2)^{m_2}...(x-x_s)^{m_s}\nonumber \end{equation} where $m_{1}\le M_{1},m_{2}\le M_{2},...,m_{s}\le M_{s}$ and $\textbf{0}$ is the zero matrix.
There are two (nearly identical) ways to define the characteristic polynomial of a square $n\times n$ matrix $A$. One can use either * *$\det(A-I x)$ or *$\det(Ix-A)$ The two are equal when $n$ is even, and differ by a sign when $n$ is odd, so in all cases, they have the same roots. The roots are the most important attribute of the characteristic polynomial, so it's not that important which definition you choose. The first definition has the advantage that its constant term is always $\det(A)$, while the second is always monic (leading coefficient $1$). With the minimal polynomial however, it is conventional to define it as the monic polynomial of smallest degree which is satisfied by $A$.
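A small SymPy illustration of the two conventions on a sample matrix (the matrix is just an example):

```python
import sympy as sp

x = sp.symbols('x')
A = sp.Matrix([[2, 1], [0, 3]])
p1 = (A - x * sp.eye(2)).det()     # det(A - Ix): constant term equals det(A)
p2 = (x * sp.eye(2) - A).det()     # det(Ix - A): monic
print(sp.expand(p1), sp.expand(p2))        # equal here since n = 2 is even; they differ by a sign for odd n
print(sp.solve(p1, x), sp.solve(p2, x))    # the same roots either way
```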
Does limit means replacing $x$ for a number? I don't understand limit so much. For example I see $\lim_{x \to -3}$. And I always just put $-3$ everywhere I see $x$. I feel like I'm doing something wrong, but it seems correct all the time.
Substitution "works" many times; it works but not always: $$\lim_{x\to a} f(x) = f(a)\quad \text{${\bf only}$ when $f(x)$ is defined and continuous at $a$}$$ and this is why understanding the limit of a function as the limiting value (or lack of one) when $x$ is approaching a particular value: getting very very near that value, is crucial. That is, $$\lim_{x \to a} f(x) \not\equiv f(a) \qquad\qquad\tag{"$\not \equiv$"$\;$ here meaning "not identically"}$$ E.g., your "method" won't work for $\;\;\lim_{x\to -3} \dfrac{x^2 - 9}{x + 3}\;\;$ straight off. Immediate substitution $f(-3)$ evaluates to $\dfrac 00$ which is indeterminate: More work is required. Other examples are given in the comments. When we seek to find the limit of a function $f(x)$ as $x \to a$, we are seeking the "limiting value" of $f(x)$ as the distance between $x$ and $a$ grows increasingly small. That value is not necessarily the value $f(a)$. And understanding the "limit" as the "limiting value" or lack there of, of a function is crucial to understanding, e.g. that $\lim_{x \to +\infty} f(x)$ requires examining the behavior of $f(x)$ as $x$ gets arbitrarily (increasingly) large, where evaluating $f(\infty)$ to find the limit makes no sense and has no meaning.
Why is $PGL_2(5)\cong S_5$? Why is $PGL_2(5)\cong S_5$? And is there a set of 5 elements on which $PGL_2(5)$ acts?
As David Speyer explains, there are 15 involutions of $P^1(\mathbb F_5)$ without fixed points (one might call them «synthemes»). Of these 15 involutions 10 («skew crosses») lie in $PGL_2(\mathbb F_5)$ and 5 («true crosses») don't. The action of $PGL_2(\mathbb F_5)$ on the latter ones gives the isomorphism $PGL_2(\mathbb F_5)\to S_5$.
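If you want to see these counts concretely, here is a brute-force sketch in Python (nothing assumed beyond the standard library): it enumerates the Möbius action of $PGL_2(\mathbb F_5)$ on the six points of $P^1(\mathbb F_5)$ and counts fixed-point-free involutions.

```python
from itertools import product, permutations

INF = 'inf'
P1 = [0, 1, 2, 3, 4, INF]                      # the projective line over F_5

def inv5(u):
    return pow(u, 3, 5)                        # inverse mod 5 by Fermat, u != 0

def moebius(a, b, c, d, z):
    # action of the matrix [[a, b], [c, d]] on P^1(F_5)
    if z == INF:
        return INF if c == 0 else (a * inv5(c)) % 5
    num, den = (a * z + b) % 5, (c * z + d) % 5
    return INF if den == 0 else (num * inv5(den)) % 5

induced = set()                                # all permutations of P^1 coming from PGL_2(F_5)
for a, b, c, d in product(range(5), repeat=4):
    if (a * d - b * c) % 5:
        induced.add(tuple(moebius(a, b, c, d, z) for z in P1))

def is_fpf_involution(perm):
    img = dict(zip(P1, perm))
    return all(img[z] != z for z in P1) and all(img[img[z]] == z for z in P1)

fpf = [p for p in set(permutations(P1)) if is_fpf_involution(p)]
print(len(fpf), sum(p in induced for p in fpf))   # should print 15 and 10, matching the counts above
```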
Monotonic Lattice Paths and Catalan numbers Can someone give me a cleaner and better explained proof that the number of monotonic paths in an $n\times n$ lattice is given by ${2n\choose n} - {2n\choose n+1}$ than Wikipedia I do not understand the how they get ${2n\choose n+1}$ and I do not see how this is the number of monotonic paths that cross the diagonal. Please be explicit about the $n+1$ term. I think this is the hardest part for me to understand. I understand the bijections between proper parenthesizations and so forth... Thanks!
There are $\binom{2n}{n+1}$ monotonic paths from $\langle 0,0\rangle$ to $\langle n-1,n+1\rangle$: such a path must contain exactly $(n-1)+(n+1)=2n$ steps, any $n+1$ of those steps can be vertical, and the path is completely determined once you know which $n+1$ of the $2n$ steps are vertical. Every monotonic path from $\langle 0,0\rangle$ to $\langle n-1,n+1$ necessarily rises above the diagonal, since it starts on the diagonal and finishes above it. At some point, therefore, it must go from a point $\langle m,m\rangle$ on the diagonal to the point $\langle m,m+1\rangle$ just above the diagonal. After the point $\langle m,m+1\rangle$ the path must still take $(n+1)-(m+1)=n-m$ vertical steps and $(n-1)-m=n-m-1$ horizontal steps. If you flip that part of the path across the axis $y=x+1$, each vertical step turns into a horizontal step and vice versa, so you’re now taking $n-m$ horizontal and $n-m-1$ vertical steps. You’re starting at $\langle m,m+1\rangle$, so you end up at $\langle m+(n-m),(m+1)+(n-m-1)\rangle=\langle n,n\rangle$. Thus, each monotonic path from $\langle 0,0\rangle$ to $\langle n-1,n+1\rangle$ can be converted by this flipping procedure into a monotonic path from $\langle 0,0\rangle$ to $\langle n,n\rangle$ that has vertical step from some $\langle m,m\rangle$ on the diagonal to the point $\langle m,m+1\rangle$ immediately above it. Conversely, every monotonic path from $\langle 0,0\rangle$ to $\langle n,n\rangle$ that rises above the diagonal must have such a vertical step in it, and reversing the flip produces a monotonic path from $\langle 0,0\rangle$ to $\langle n-1,n+1\rangle$. Thus, this flipping procedure yields a bijection between all monotonic paths from $\langle 0,0\rangle$ to $\langle n-1,n+1\rangle$, on the one hand, and all monotonic paths from $\langle 0,0\rangle$ to $\langle n,n\rangle$ that rise above the diagonal, on the other. As we saw in the first paragraph, there are $\binom{2n}{n+1}$ of the former, so there are also $\binom{2n}{n+1}$ of the latter. The total number of monotonic paths from $\langle 0,0\rangle$ to $\langle n,n\rangle$, on the other hand, is $\binom{2n}n$: each path has $2n$ steps, any $n$ of them can be vertical, and the path is completely determined once we know which $n$ are vertical. The difference $\binom{2n}n-\binom{2n}{n+1}$ is therefore simply the total number of monotonic paths from $\langle 0,0\rangle$ to $\langle n,n\rangle$ minus the number that rise above the diagonal, i.e., the number that do not rise above the diagonal — which is precisely what we wanted.
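A brute-force check of the counting argument for small $n$ (plain Python, nothing assumed beyond the standard library):

```python
from math import comb

def paths_weakly_below_diagonal(n):
    # monotonic paths from (0,0) to (n,n) that never rise above the diagonal (y <= x throughout)
    def count(x, y):
        if (x, y) == (n, n):
            return 1
        total = 0
        if x < n:
            total += count(x + 1, y)
        if y < x:                     # a vertical step is allowed only while it keeps y <= x
            total += count(x, y + 1)
        return total
    return count(0, 0)

for n in range(1, 8):
    print(n, paths_weakly_below_diagonal(n), comb(2 * n, n) - comb(2 * n, n + 1))   # columns agree
```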
How to find the order of these groups? I don't know why but I just cannot see how to find the orders of these groups: $YXY^{-1}=X^2$ $YXY^{-1}=X^4$ $YXY^{-1}=X^3$ With the property that $X^5 = 1$ and $Y^4 =1$ How would I go about finding the order? The questions asks me to find which of these groups are isomorphic. Thanks.
Hint: You should treat those relations as a rule on how to commute $Y$ past $X$, for example the first can be written: $$YX = X^2Y$$ Then you know that every element can be written in the form $X^nY^m$ for some $n$ and $m$. Use the orders of $X$ and $Y$ to figure out how many elements there are of this form.
Path independence of an integral? I'm studying for a test (that's why I've been asking so much today,) and one of the questions is about saying if an integral is path independent and then solving for it. I was reading online about path independence and it's all about vector fields, and I'm very, very lost. This is the integral $$\int_{0}^{i} \frac{dz}{1-z^2}$$ So should I find another equation that gives the same result with those boundaries? I honestly just don't know how to approach the problem, any links or topics to read on would be appreciated as well. Thank you!
Seeing other answers, the following perhaps doesn't capture the OP's intention, but here it is anyway. Putting $\,z=x+iy\implies\,z^2=x^2-y^2+2xyi\,$ , so along the $\,y$-axis from zero to $\,i\,$ we get: $$x=0\;,\;\;0\le y\le 1\implies \frac1{1-z^2}=\frac1{1+y^2}\;,\;\;dx=0 \;,\;\;dz=i\,dy\;,\;\ \;\text{so}$$ $$\int\limits_0^i\frac{dz}{1-z^2}=\left.\int\limits_0^1\frac{i\,dy}{1+y^2}= i\arctan y\right|_0^1=\frac\pi 4i$$
surface area of a sphere above a cylinder I need to find the surface area of the sphere $x^2+y^2+z^2=4$ above the cone $z = \sqrt{x^2+y^2}$, but I'm not sure how. I know that the surface area of a surface can be calculated with the equation $A=\int{\int_D{\sqrt{f_x^2+f_y^2+1}}}dA$, but I'm not sure how to take into account the constraint that it must lie above the cone. How is this done?
Hint: Use spherical coordinates. On the sphere of radius $r$, $dA = r^2\sin\theta\, d\theta\, d\phi$ with $0<\theta<\pi$ in general; the region above the cone $z=\sqrt{x^2+y^2}$ corresponds to $0\le\theta\le\pi/4$. The surface area becomes $\iint_D dA$.
Find the derivative of y with respect to x,t,or theta, as appropriate $$y=\int_{\sqrt{x}}^{\sqrt{4x}}\ln(t^2)\,dt$$ I'm having trouble getting started with this, thanks for any help.
First Step First, we need to recognize to which variables you are supposed to differentiate with respect. The important thing to realize here is that if you perform a definite integration with respect to one variable, that variable "goes away" after the computation. Symbolically: $$\frac{d}{dt}\int_a^b f(t)\,dt = 0$$ Why? Because the result of a definite integral is a constant, and the derivative of a constant is zero! :) So, it isn't appropriate here to differentiate with respect to $t$. With respect to $\theta$ doesn't make much sense, either--that's not even in the problem! So, we are looking at differentiating with respect to $x$. Second Step We now use a very fun theorem: the fundamental theorem of calculus! (bad pun, sorry) The relevant part states that: $$\frac{d}{dx}\int_a^x f(t)\,dt = f(x)$$ We now make your integral look like this: $$\begin{align} y &= \int_{\sqrt{x}}^{\sqrt{4x}}\ln(t^2)\,dt\\ & = \int_{\sqrt{x}}^a\ln(t^2)\,dt + \int_{a}^{\sqrt{4x}}\ln(t^2)\,dt\\ & = -\int_{a}^{\sqrt{x}}\ln(t^2)\,dt + \int_{a}^{\sqrt{4x}}\ln(t^2)\,dt\\ \end{align}$$ Can you now find $\frac{dy}{dx}$? (Hint: Don't forget the chain rule!) If you still want some more guidance, just leave a comment. EDIT: Note that $y$ is a sum of two integral functions, so you can differentiate both independently. I'll do one, and leave the other for you: $$\begin{align} \frac{d}{dx}\left[\int_{a}^{\sqrt{4x}}\ln(t^2)\,dt\right] &= \left[\ln\left(\sqrt{4x}^2\right)\right]\cdot\frac{d}{dx}\left(\,\sqrt{4x}\right)\\ &=\left[\ln\left(4|x|\right)\right]\left(\tfrac{1}{2}(4x)^{-1/2}\cdot 4\right)\\ &=x^{-1/2}\ln\left(4|x|\right) \end{align}$$
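A quick symbolic check of this computation, assuming SymPy is available (for $x>0$ the absolute value drops out, so $\ln(4|x|)=\ln(4x)$):

```python
import sympy as sp

x, t = sp.symbols('x t', positive=True)
y = sp.integrate(sp.log(t**2), (t, sp.sqrt(x), sp.sqrt(4*x)))
dy = sp.simplify(sp.diff(y, x))
# Leibniz-rule expression: ln(4x) * d/dx(sqrt(4x)) - ln(x) * d/dx(sqrt(x))
leibniz = sp.simplify(sp.log(4*x) / sp.sqrt(x) - sp.log(x) / (2 * sp.sqrt(x)))
print(dy, leibniz, sp.simplify(dy - leibniz))   # the difference should reduce to 0
```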
How do I show that $6(4^n-1)$ is a multiple of $9$ for all $n\in \mathbb{N}$? How do I show that $6(4^n-1)$ is a multiple of $9$ for all $n\in \mathbb{N}$? I'm not so keen on divisibility tricks. Any help is appreciated.
You want it to be a multiple of $9$, it suffices to show you can extract a pair of 3's from this. The $6$ has one of the 3's, and $4^n-1$ is 0 mod 3 so you're done.
Probability of Multiple Choice first attempt and second attempt A multiple choice question has 5 available options, only 1 of which is correct. Students are allowed 2 attempts at the answer. A student who does not know the answer decides to guess at random, as follows: On the first attempt, he guesses at random among the 5 options. If his guess is right, he stops. If his guess is wrong, then on the second attempt he guesses at random from among the 4 remaining options. Find the chance that the student gets the right answer at his first attempt? Then, find the chance the student has to make two attempts and gets the right answer the second time? Find the chance that the student gets the right answer? $P(k)=\binom{n}{k}\times p^k\times(1-p)^{n-k}$. $P($First attempt to get right answer$)=(5C_1)\times \frac{1}{5} \times (\frac{2}{4})^4=?$ $P($Second attempt to get right answer$)=(5C_2)\times \frac{1}{5}\times (\frac{2}{4})^3=?$ $P($The student gets it right$)=(5C_1)\times \frac{1}{5} \times (\frac{2}{4})^4=?$
First try to find the sample space $S$ for the question. There are five equally likely choices, so $S=\{c_1,\cdots, c_5\}$ and the event $E \subset S$ is choosing the correct answer, and there is only one correct answer i.e. $|E|=1.$ Therefore the probability is $\frac{|E|}{|S|}=\frac 15.$ Do the same to determine the sample space for the second question and find the relevant event. What is the answer, then?
What does it mean for a set to exist? Is there a precise meaning of the word 'exist', what does it mean for a set to exist? And what does it mean for a set to 'not exist' ? And what is a set, what is the precise definition of a set?
In mathematics, you do not simply say, for example, that set $S$ exists. You would add some qualifier, e.g. there exists a set $S$ with some property $P$ common to all its elements. Likewise, for the non-existence of a set. You wouldn't simply say that set $S$ does not exist. You would also add a qualifier here, e.g. there does not exist a set $S$ with some property $P$ common to all its elements. How do you establish that a set exists? It depends on your set theory. In ZFC, for example, you are given the only the empty set to start, and rules to construct other sets from it, sets that are also presumed to exist. In other set theories, you are not even given the empty set. Then the existence of every set is provisional on the existence of other set(s). You cannot then actually prove the existence of any set. To prove the non-existence of a set $S$ with property $P$ common to all its elements, you would first postulate its existence, then derive a contradiction, concluding that no such set can exist.
Identity Law - Set Theory I'm trying to wrap my head around the Identity Law, but I'm having some trouble. My lecture slides say: $$ A \cup \varnothing = A $$ I can understand this one. $A$ union nothing is still $A$. In the same way that $1 + 0$ is still $1$. However, it goes on to say: $$ A \cup U = U $$ I don't see how this is possible. How can $A\cup U = U$? http://upload.wikimedia.org/wikipedia/commons/thumb/3/30/Venn0111.svg/150px-Venn0111.svg.png If this image represents the result of $A\cup U$, where $A$ is the left circle, $U$ is the right circle, how can the UNION of both sets EQUAL the RIGHT set? I don't see how that is possible? Can someone please explain to me how this is possible? Thanks. $$ A \cap\varnothing=\varnothing,\\ A \cap U = A $$ Some further examples from the slides. I really don't understand these, either. It must be something simple, but I'm just not seeing it.
Well $U$ is "the universe of discourse" -- it contains everything we'd like to talk about. In particular, all elements of $A$ are also in $U$. In the "circles" representation, you can think of $U$ as the paper on which we draw circles to indicate sets like $A$.
Dissecting a proof of the $\Delta$-system lemma (part II) This is part II of this question I asked yesterday. In the link you can find a proof of the $\Delta$-system lemma. In case 1 it uses the axiom of choice (correct me if I'm wrong). Now one can also prove the $\Delta$-system lemma differently, for example as follows: I have two questions about it: 1) It seems to me that by using the ordinals to index the family of sets we have eliminated the axiom of choice from the proof. Have we or did we just use it a bit earlier in the proof where we index the family $B$? 2) But, more importantly, why is it ok to assume that $b \in B$ are subsets of $\omega_1$? In the theorem there is no such restriction. Can one just "wlog" this? The answer is probably yes since otherwise the proof would be wrong but I don't see how. Thanks for your help!
You need some AC to prove the statement just for a family of pairs of $\omega_1$. If $\omega_1$ is the union of a countable family $\{B_n:n \in \omega \}$ of countable sets (which is consistent with ZF!), then the family $\{\{n, \beta\}: n<\omega, \beta \in B_n-\omega \}$ does not contain an uncountable $\Delta$-system.
Pascal's triangle and combinatorial proofs This recent question got me thinking, if a textbook (or an exam) tells a student to give a combinatorial proof of something involving (sums of) binomial coefficients, would it be enough to show that by Pascal's triangle these things do add up, or would you fail an answer like that? What if we didn't call it Pascal's triangle but "the number of paths that stop at some point at step $i$ during a one-dimensional random walk"?
I would argue that a combinatorial proof is something more substantial than pointing out a pattern in a picture! If we are at the level of "combinatorics" then we are also at the level of proofs and as such, the phrase "combinatorial proof" asks for a proof but in the combinatorial (or counting) sense. A proof by example, i.e. "this pattern holds in the small portion of Pascal's Triangle that I have drawn", is not a proof period, combinatorially or otherwise. The general case of such a property could be verified combinatorially, but simply observing it would not constitute a combinatorial proof in itself. At least that's the way I see it.
Probability of winning a game $A$ and $B$ play a game. * *The probability of $A$ winning is $0.55$. *The probability of $B$ winning is $0.35$. *The probability of a tie is $0.10$. The winner of the game is the person who first wins two rounds. What is the probability that $A$ wins? The answer is $0.66$. I don't know how it's coming $0.66$. Please help. EDIT : The right combinations according to me are {null,T,TT,TTT....}A{null,T,TT,TTT....}A {null,T,TT,TTT....}A{null,T,TT,TTT....}B{null,T,TT,TTT....}A {null,T,TT,TTT....}B{null,T,TT,TTT....}A{null,T,TT,TTT....}A
Ties don't count, don't record them. So in effect we are playing a game in which A has probability $p=\frac{0.55}{0.90}$ of winning a game, and B has probability $1-p$ of winning a game. Now there are several ways to finish. The least thinking one is that A wins with the pattern AA, or the patterns ABA, or BAA.
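The three patterns give $p^2+2p^2(1-p)$ with $p=\frac{0.55}{0.90}=\frac{11}{18}$, which is where the $0.66$ comes from. A short check (plain Python; the Monte Carlo part is only a sanity test):

```python
from fractions import Fraction
import random

p = Fraction(55, 90)                       # P(A wins a decisive round), ties discarded
exact = p**2 + 2 * p**2 * (1 - p)          # patterns AA, ABA, BAA
print(exact, float(exact))                 # ~0.664

def play():                                # Monte Carlo sanity check, ties handled explicitly
    a = b = 0
    while a < 2 and b < 2:
        r = random.random()
        if r < 0.55:
            a += 1
        elif r < 0.90:
            b += 1
    return a == 2

print(sum(play() for _ in range(200000)) / 200000)
```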
Finding the remainder when $2^{100}+3^{100}+4^{100}+5^{100}$ is divided by $7$ Find the remainder when $2^{100}+3^{100}+4^{100}+5^{100}$ is divided by $7$. Please brief about the concept behind this to solve such problems. Thanks.
Using the Euler–Fermat theorem, $\phi(7)=6$: $2^{6} \equiv 1 \pmod 7 \implies 2^4\cdot 2^{96} \equiv 2\pmod 7$ $3^{6} \equiv 1 \pmod 7 \implies 3^4\cdot 3^{96} \equiv 4\pmod 7$ $4^{6} \equiv 1 \pmod 7 \implies 4^4\cdot 4^{96} \equiv 4\pmod 7$ $5^{6} \equiv 1 \pmod 7 \implies 5^4\cdot 5^{96} \equiv 2\pmod 7$ $2^{100}+3^{100}+4^{100}+5^{100} \equiv 5\pmod 7$
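One line of Python confirms the arithmetic (using the built-in three-argument `pow` for modular exponentiation):

```python
print((pow(2, 100, 7) + pow(3, 100, 7) + pow(4, 100, 7) + pow(5, 100, 7)) % 7)   # 5
```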
Differential Equation - $y'=|y|+1, y(0)=0$ The equation is $y'=|y|+1, y(0)=0$. Suppose $y$ is a solution on an interval $I$. Let $x\in I$. If $y(x)\ge 0$ then $$y'(x)=|y(x)|+1\iff y'(x)=y(x)+1\iff \frac{y'(x)}{y(x)+1}=1\\ \iff \ln (y(x)+1)=x+C\iff y(x)+1=e^{x+C}\\ \iff y(x)=e^{x+C}-1$$ Then $y(0)=0\implies C=0$. So $y(x)=e^x-1$ if $y(x)\ge 0$. If $y(x)\leq 0$ then $y(x)=1-e^{-x}$. Now I want to say $y(x)=\begin{cases} e^x-1, \text{if } x\ge 0\\1-e^{-x}, \text{if } x\leq 0\end{cases}$ Is this correct? Is there only one solution?
I think my solution above is correct. there are a few details missing: it is necessary to show that $y(x)\leq 0\iff x\leq0$ and $y(x)\ge 0\iff x\ge 0$ which allows me to define $y$ the way I do. also it is necessary to check that $y$ is differentiable at $x=0$ and it is because: $$\lim _{x\to 0^+}\frac{e^x-1}{x-0}=1=\lim _{x\to 0^-}\frac{1-e^{-x}}{x-0}$$
Determine number of squares in progressively decreasing size that can be carved out of a rectangle How many squares in progressively decreasing size can be created from a rectangle of dimension $a\times b$? For example, consider a rectangle of dimension $3\times 8$. As you can see, the biggest square that you can carve out of it is of dimension $3\times 3$, and there are two of them, ABFE and EFGH. The next biggest square is of dimension $2\times 2$, which is GJIC, followed by two other squares of dimension $1\times 1$, which are JHLK and KLDI. So the answer is 5. Is there any mathematical approach to solving it for a rectangle of arbitrary dimension?
Following the algorithm you seem to be using, cutting the largest possible square off the rectangle, it is a simple recursive procedure. If you start with an $n \times m$ rectangle with $n \ge m$, you will cut off $\lfloor \frac nm \rfloor$ squares of size $m \times m$ and be left with a $(n-\lfloor \frac nm \rfloor m) \times m$ rectangle. Then you remove as many squares of side $n-\lfloor \frac nm \rfloor m$ as you can, and continue. The smallest square will have side equal to the greatest common divisor of $n$ and $m$.
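A minimal sketch of that recursion in Python (the function name is just illustrative):

```python
def count_squares(n, m):
    # greedily cut off the largest possible squares, as described above
    if m == 0:
        return 0
    if n < m:
        n, m = m, n
    return n // m + count_squares(n % m, m)

print(count_squares(8, 3))   # 5, matching the 3 x 8 example
```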
Length of period of decimal expansion of a fraction Each rational number (fraction) can be written as a decimal periodic number. Is there a method or hint to derive the length of the period of an arbitrary fraction? For example $1/3=0.3333...=0.(3)$ has a period of length 1. For example: how to determine the length of a period of $119/13$?
Assuming there are no factors of $2,5$ in the denominator, one way is just to raise $10$ to powers modulo the denominator. If you find $-1$ you are halfway done. Taking your example: $10^2\equiv 9, 10^3\equiv -1, 10^6 \equiv 1 \pmod {13}$ so the repeat of $\frac 1{13}$ is $6$ long. It will always be a factor of Euler's totient function of the denominator. For prime $p$, that is $p-1$.
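In code, this is just the multiplicative order of $10$ modulo the denominator (plain Python sketch; it first strips the factors of $2$ and $5$, which only affect the non-repeating part):

```python
def period_length(denom):
    for p in (2, 5):                 # factors of 2 and 5 only affect the non-repeating part
        while denom % p == 0:
            denom //= p
    if denom == 1:
        return 0                     # terminating decimal
    k, r = 1, 10 % denom
    while r != 1:
        r = (r * 10) % denom
        k += 1
    return k

print(period_length(13))             # 6, so 119/13 repeats with period 6
```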
$11$ divides $10a + b$ $\Leftrightarrow$ $11$ divides $a − b$ Problem So, I am to show that $11$ divides $10a + b$ $\Leftrightarrow$ $11$ divides $a − b$. Attempt This is a useful proposition given by the book: Proposition 12. $11$ divides $a$ $\Leftrightarrow$ $11$ divides the alternating sum of the digits of $a$. Proof. Since $10 \equiv -1 \pmod{11}$, $10^e \equiv (-1)^e \pmod{11}$ for all $e$. Then \begin{eqnarray} a&=&a_n10^n +a_{n-1}10^{n-1} +...+a_210^2 +a_110+a_0\\ &\equiv&a_n(-1)^n +a_{n-1}(-1)^{n-1} +...+a_2(-1)^2 +a_1(-1)+a_0 \pmod{11}. \end{eqnarray} So, I let "$a$" be $10a+b$, which means that $10a+b\equiv 11k\pmod{11}$, or similarly that $\frac{10a+b-11k}{11}=n$ for some $n\in \mathbb{Z}$. Next, I write $10$ as $11-1$, which gives $\frac{11a-a+b-11k}{11}=\frac{11(a-k)+b-a}{11}=\frac{11(a-k)-(a-b)}{11}$. This being so we must then have that $(a-b)=11m$ for some $m\in \mathbb{Z}$ (zero works), but this would mean that $a\equiv b\pmod{11m}$, namely that $11m~|~a-b$. It is therefore quite plain that $11$ divides $10a+b$ $\Leftrightarrow$ $11$ divides $a-b$. Discussion What I'd like to know is how I'm to use this to show that $11$ divides $232595$, another part of the same problem.
Since $11$ divides $10a+b$, then $$ 10a+b=11k $$ or $$ b = 11k-10a $$ so $$ a-b=a-11k+10a=11(a-k) $$ which means that $11$ divides $a-b$ as well, since $a,b,k$ are integers. Update To prove in opposite direction you can do the same $$ a-b=11k\\ b=a-11k\\ 10a+b=10a+a-11k=11(a-k) $$ or in other words, if $11$ divides $a-b$ it also divides $10a+b$. So both directions are proved $$ 10a+b\equiv 0(\text{mod } 11) \Leftrightarrow a-b\equiv 0(\text{mod } 11) $$
Approximating measurable functions on $[0,1]$ by smooth functions. Let $f$ be a measurable function on $[0,1]$. Is there a sequence infinitely differentiable $f_n$ such that one of * *$f_n\rightarrow f$ pointwise *$f_n\rightarrow f$ uniformly *$\int_0^1|f_n-f|\rightarrow 0$ is true?
Uniform convergence is surely too much to ask for. As Wikipedia suggests, uniform convergence theorem assures that the uniform limit of continuous functions is again continuous. Hence, as soon as $f$ is discontinuous, all hope of finding smooth $f_n$ uniformly convergent to $f$ is gone. The statement involving the integral is true (if we additionally assume $\int |f| < \infty$, at least), and follows from a more general fact that the continuous functions are dense in $L^1([0,1])$ (integrable functions with norm given by $||f|| = \int |f|$). A possible way to check this is the following. First, measurable functions can be arbitrarily well approximated by simple functions (the ones of the form $\sum a_i \chi_{A_i}$, with $A_i$ - measurable sets). Thus, if we are able to approximate the function $\chi_A$ arbitrarily well by continuous functions, then we are done. For this, notice that $A$ can be approximated by an open set $U$: for any $\varepsilon > 0$, there is open $U$ with $\lambda(A \triangle U) < \varepsilon$. Now, $U$ is open, so you can express it as a sum of intervals: $U = \bigcup I_n$, $I_n$ - open interval, disjoint from the $I_m$, $m\neq n$. Now, $\chi_I$ can be approximated by smooth functions by using classical bump functions. A lot of details would have to be filled in, but it should be clear that a measurable function can indeed be arbitrarily well approximated by smooth ones, in $L^1$. For pointwise convergence, I think you can use a reasoning as just offered for $L^1$. I also believe you can use mollifiers. Since you only asked if one of the statements can be made true, I shall not go into more detail. Also, I am not quite sure what background to assume, and I am more that sure there are a lot of other users who have much better understanding of these issues.
Definition of $b|a \implies 0|0$? The definition I'm using for $b|a$ (taken from Elementary Numbery Theory by Jones & Jones): If $a,b \in \mathbb{Z}$ then $b$ divides $a$ if for some $q \in \mathbb{Z}$ $a = qb$. However, I have $0 = q\cdot0$ for any $q$ I choose. So this seems to imply that $0$ divides $0$ which I know is always taken to be undefined. Should the definition be for "unique $q$" rather than for "some $q$"? Thank-you.
The statement $0$ divides $0$ and the "quantity" $0/0$ are different things. The first is exactly the statement that there exists some $a$ such that $0a=0$ and the second is not a number
$\frac{1}{ab}=\frac{s}{a}+\frac{r}{b} \overset{?}{\iff}\gcd(a,b)=1$ $$\frac{1}{ab}=\frac{s}{a}+\frac{r}{b} \overset{?}{\iff} \gcd(a,b)=1$$ This seems almost painfully obvious because it is just $ar+bs=1$ in another form. This second form is the definition of coprimality, so what else is my professor looking for?
If $\gcd(a,b)=1$, then since the greatest common divisor is the smallest positive integer that can be represented as a linear combination of $a$ and $b$, there are integers $r$ and $s$ such that $1=ra+sb$. Dividing by $ab$ gives $\frac{1}{ab}=\frac{s}{a}+\frac{r}{b}$. Conversely, if we suppose that $\frac{1}{ab}=\frac{s}{a}+\frac{r}{b}$, then multiplying by $ab$ gives $1=ra+sb$. Since $\gcd(a,b)$ divides both $a$ and $b$, it divides $1$, and since the greatest common divisor is positive, $\gcd(a,b)=1$.
Prove that $X\times Y$, with the product topology is connected I was given this proof but I don't clearly understand it. Would someone be able to dumb it down for me so I can maybe process it better? Since a topological space is connected if and only if every function from it to $\lbrace 0,1\rbrace$ is constant. Let $F:X\times Y\rightarrow\lbrace 0,1\rbrace$ be a continuous function. Let $x \in X$, we get a function $f:Y\rightarrow\lbrace 0,1\rbrace$ defined by $y\mapsto F(x,y)$. F is constant on every set $\lbrace x\rbrace\times Y$. So the function is continuous and constant because $Y$ is connected. In the same way $F$ is constant on the set of the form $X\times \lbrace y\rbrace$. This implies constant on $X\times Y$. $(x,y)$ exist on $X\times Y$. Say there exists $(a,b)$ on $X\times Y{}{}$. $F(x,y)=F(a,b)$ proving $A\times B$ is conected. The main things I don't understand is why is F constant on every set $\{x\}\times Y$ and how is Y connected to make the function continiuous and constant
The basic fact we use is the one you start out with (which I won't prove, as I assume it's already known; it's not hard anyway): (1) $X$ is connected iff every continuous function $f: X \rightarrow \{0,1\}$ (the latter space in the discrete topology) is constant. So, given connected spaces $X$ and $Y$, we start with an arbitrary continuous function $F: X \times Y \rightarrow \{0,1\}$ and we want to show it is constant. This will show that $X \times Y$ is connected by (1). First, for a fixed $x \in X$, we can define $F_x: Y \rightarrow \{0,1\}$ by $F_x(y) = F(x,y)$. This is the composition of the maps that sends $y$ to $(x,y)$ (for this fixed $x$) and $F$, and as both these maps are continuous, so is $F_x$, for every $x$. So, as this is a continuous function from $Y$ to $\{0,1\}$ and $Y$ is connected, every $F_x$ is constant, say all its values are $c_x \in \{0,1\}$. Of course we haven't used the connectedness of $X$ yet, so we do the exact same thing fixing $y$: for every $y \in Y$, we define $F^y: X \rightarrow \{0,1\}$ by $F^y(x) = F(x,y)$. Again, this is a composition of the map sending $x$ to $(x,y)$ (for this fixed $y$) and $F$, so every $F^y$ is continuous from the connected $X$ to $\{0,1\}$ and so $F^y$ is constant with value $c'_y \in \{0,1\}$, say. The claim now is that $F$ is constant: let $(a,b)$ and ($c,d)$ be any two points in $X \times Y$. Then we also consider the point $(a,d)$ and note that: $$F(a,b) = F_a(b) = c_a = F_a(d) = F(a,d) = F^d(a) = c'_d = F^d(c) = F(c,d)\mbox{,}$$ first using that $F_a$ is constant and then that $F^d$ is constant. We basically connect any 2 points via a third using a horizontal and a vertical "line" (here via $(a,d)$) and on every "line" $F$ remains constant, so $F$ is a constant function. This concludes the proof. As a last note: the fact that a function $x \rightarrow (x,y)$, for fixed $y$, is continuous, is easy from the definitions: a basic neighbourhood of $(x,y)$ is of the form $U \times V$, $U$ open in $X$ containing $x$, $V$ open in $Y$ containing $y$, and its inverse image under this function is just $U$.
Big-O notation in division Let $r(x)=\frac{p(x)}{q(x)}$. Expending $p$ and $q$ around 0 gives $$ \frac{p_0+p'_0x+\mathcal{O}(x^2)}{q_0+q'_0x+\mathcal{O}(x^2)}. $$ Now the claim is that the above expression is equal to $$ \frac{p_0+p'_0x}{q_0+q'_0x}+\mathcal{O}(x^2). $$
Try evaluating the difference: $\displaystyle \frac{p_0+p'_0x+\mathcal{O}(x^2)}{q_0+q'_0x+\mathcal{O}(x^2)}-\frac{p_0+p'_0x}{q_0+q'_0x}$ and recall that $p\mathcal{O}(x^2)=\mathcal{O}(x^2)$ and $x\mathcal{O}(x^2)=\mathcal{O}(x^2)$.
Solve $\dfrac{1}{1+\frac{1}{1+\ddots}}$ I'm currently a high school junior enrolling in AP Calculus, I found this website that's full of "math geeks" and I hope you can give me some clues on how to solve this problem. I'm pretty desperate for this since I'm only about $0.4%$ to an A- and I can't really afford a B now... The problem is to simplify: $$\cfrac{1}{1+\cfrac{1}{1+\cfrac{1}{1+\cfrac{1}{\ddots}}}}$$ What I did, was using basic "limits" taught in class and I figured out that the denominator would just keep going like this and approaches $1$, so this whole thing equals $1$, but I think it's not that easy...
This is the reciprocal of the golden ratio $\varphi$ (equivalently $\varphi-1\approx 0.618$), expressed as a continued fraction. The golden ratio itself is the positive solution of the quadratic equation $x^2-x-1=0$, which can be rewritten as $x=1+\frac{1}{x}$; iterating this form produces the continued fraction $1+\cfrac{1}{1+\cfrac{1}{1+\ddots}}=\varphi$, and the expression in your question is its reciprocal, so it is not $1$. See the Wikipedia article on the golden ratio, or the book by Mario Livio, The Golden Ratio, for example. Here you can find more answers.
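To see the convergence numerically, iterate the continued fraction from the outside (plain Python sketch):

```python
y = 1.0
for _ in range(40):
    y = 1 / (1 + y)                  # one more layer of the continued fraction
phi = (1 + 5 ** 0.5) / 2
print(y, 1 / phi, phi - 1)           # all three are ~0.618
```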
whether or not there exist a non-constant entire function $f(z)$ satisfying the following conditions In each of the case below, determine whether or not there exist a non-constant entire function $f(z)$ satisfying the following conditions. ($1$) $f(0)=e^{i\alpha}$ and $|f(z)|=1/2$ for all $z \in Bdr \Delta$. ($2$) $f(e^{i\alpha})=3$ and $|f(z)|=1$ for all $z$ with $|z|=3$. ($3$) $f(0)=1$ , $f(i)=0$ , and $|f(z)| \le 10$ $z \in \mathbb{C}$. ($4$) $f(0)=1 , f(i)=0$ , and $|f(z)| \le 5$ $ \forall z \in \Delta$. ($5$) $f(z) =0 $ for all $z=n \pi$ , $n\in \mathbb{Z}$ My thought:- ($1$)No. by Maximum-modulus Theorem $|f(z)|$ has maximum value at the boundary,Then $1/2$ would be the maximum value but $|f(0)|=1$ which is a contradiction. ($2$)No . by same argument as above. ($3$)No. by liouvilles Theorem $f(z)$ is bounded hence must be constant. ($4$) I think it is true but not sure. ($5$) True. $\sin z$ is the example. please somebody verify my answers.
Looks good to me. Surely you can find an example for (4)? A first degree polynomial should do the job.
construction set of natural number logic I identify the natural number $0$ with the empty set $\emptyset$, $1$ with $S(0)$, $2$ with $S(1)$, etc, etc. The axiom of infinity says $\exists x (\emptyset\in x\wedge \forall z\in x\space z\cup\{z\}\in x)$ and the Axiom schema of specification says $\forall y_0,...,y_n\exists x\forall z (z\in x\leftrightarrow (z\in y_0\wedge \phi(z,y_1,...,y_n)))$. My question now: Why is there now a smallest element $x$ which can be identified with the natural numbers?
Let $y$ be an inductive set whose existence follows from the axiom of infinity, then consider $\{x\subseteq y\mid x\text{ is inductive}\}$. This is a definable collection of members of the power set of $y$, so it is a set, and $y$ is there so it's not an empty set. Now take the intersection of all those sets. This is an inductive set as well (you have to prove this, of course). Call this inductive set $N$. Now prove that if $x$ is any inductive set then $N\subseteq x$, by considering $M=N\cap x$. Show that $M$ is inductive as well, and $M\subseteq y$, now use the property which defined $N$ to conclude $N=M$ and therefore $N\subseteq x$. And now we're done.
Prove that if $A - A^2 = I$ then $A$ has no real eigenvalues Given: $$ A \in M_{n\times n}(\mathbb R) \; , \; A - A^2 = I $$ Then we have to prove that $A$ does not have real eigenvalues. How do we prove such a thing?
By using index notation, $A-A^2=I$ can be written as $A_{ij}-A_{ik}A_{kj}=\delta_{ij}$. By definition, an eigenvector $n$ with eigenvalue $\lambda$ satisfies $A_{ij}n_i=\lambda n_j$. So $A_{ij}n_i-A_{ik}A_{kj}n_i=\delta_{ij}n_i$, hence $\lambda n_j -\lambda n_k A_{kj}=n_j$, whence $\lambda n_j -\lambda^2 n_j=n_j$, or $(\lambda^2-\lambda+1)n_j=0$, and since $n$ is an eigenvector, some $n_j\neq 0$. But $\lambda^2-\lambda+1=0$ has no real roots.
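A quick numerical illustration (assuming NumPy; the test matrix is the companion matrix of $\lambda^2-\lambda+1$, chosen so that $A-A^2=I$ holds exactly):

```python
import numpy as np

A = np.array([[0., -1.], [1., 1.]])        # companion matrix of  x^2 - x + 1
print(np.allclose(A - A @ A, np.eye(2)))   # True: A satisfies A - A^2 = I
print(np.linalg.eigvals(A))                # complex conjugate pair (1 +/- i*sqrt(3))/2, no real eigenvalue
```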
If $S_n = 1+ 2 +3 + \cdots + n$, then prove that the last digit of $S_n$ is not 2,4 7,9. If $S_n = 1 + 2 + 3 + \cdots + n,$ then prove that the last digit of $S_n$ cannot be 2, 4, 7, or 9 for any whole number n. What I have done: *I have determined that it is supposed to be done with mathematical induction. *The formula for an finite sum is $\frac{1}{2}n(n+1)$. *This means that since we know that $n(n+1)$ has a factor of two, it must always end in 4 and 8. *Knowing this, we can assume that $n(n+1)\bmod 10 \neq 4$ or $n(n+1)\bmod 10 \neq 8$.
First show that the last digit of $n^2$ is always in the set $M=\{0,1,4,5,6,9\}$. Then consider all cases for the last digit of $n$ (if the last digit is $1$, I get $2$ as the last digit of $n(n+1)= n^2 + n$, and so on). If you do all of this modulo $5$, you only have 5 easy cases to check: show that $n^2 + n \bmod 5 \notin \{3,4\}$.
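A brute-force check of the claim about last digits (plain Python):

```python
last_digits = {n * (n + 1) // 2 % 10 for n in range(1, 10000)}
print(sorted(last_digits))   # [0, 1, 3, 5, 6, 8] -- never 2, 4, 7 or 9
```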
Is the linear dependence test also valid for matrices? I have the set of matrices $ \begin{pmatrix} 1 & 0 \\ 0 & 0 \\ \end{pmatrix} $ $ \begin{pmatrix} 0 & 1 \\ 0 & 0 \\ \end{pmatrix} $ $ \begin{pmatrix} 0 & 0 \\ 1 & 0 \\ \end{pmatrix} $ $ \begin{pmatrix} 0 & 0 \\ 0 & 1 \\ \end{pmatrix} $ and I'm asked to check if said set is a basis of $ \begin{pmatrix} a & b \\ c & d \\ \end{pmatrix} $, that is, any 2x2 matrix. I know that in this case it is really simple to do it like $$\pmatrix{a&b\cr c&d\cr}=\alpha\pmatrix{1&0\cr0&0\cr}+\beta\pmatrix{0&1\cr0&0\cr}+\gamma\pmatrix{0&0\cr1&0\cr}+\delta\pmatrix{0&0\cr0&1\cr}$$ But not all cases are that simple, so I was wondering if I could do it another way: When I'm asked to check if a set of vectors is a basis of a vector space $R^n$ I just see if said vectors are linearly independent, given that if I have n linearly independent vectors $\{(v1,...,vn),...,(u1,...un)\}$ they'll be a basis for $R^n$. Can I use the same test and say that if $$\pmatrix{0&0\cr 0&0\cr}=\alpha\pmatrix{1&0\cr0&0\cr}+\beta\pmatrix{0&1\cr0&0\cr}+\gamma\pmatrix{0&0\cr1&0\cr}+\delta\pmatrix{0&0\cr0&1\cr}$$ Has only one solution $\alpha=\beta=\gamma=\delta=0$ then they are all linearly independent, and since it's all 2x2 it'll be a basis for all 2x2 matrices? Does that make any sense?
$\mathbb{R}^{N\times N}$ as a linear space (with addition between elements and multiplication by scalars) is no different from $\mathbb{R}^{N^2}$ endowed with these same operations. There is a bijective mapping between elements and operations in these two spaces, so any tricks you know about vectors in $\mathbb{R}^4$ are applicable to $\mathbb{R}^{2\times 2}$.
Stieltjes Integral meaning. Can anybody give a geometrical interpretation of the Stieltjes integral: $$\int_a^bf(\xi)\,d\alpha(\xi)$$ How would we calculate? $$\int_a^b \xi^3\,d\alpha(\xi)$$ for example.
$$\int_a^b \xi^3\,d\alpha(\xi)=\int_a^b\xi^3\alpha'(\xi)\,d\mu(\xi) $$ in the case that $\alpha$ is differentiable. Here $\mu$ denotes Lebesgue measure. To justify the formula above, you can use Radon–Nikodym derivatives (http://en.wikipedia.org/wiki/Radon%E2%80%93Nikodym_theorem). Generally the function $\alpha$ gives you a way to measure elementary sets, such as intervals. This function produces the corresponding Stieltjes measure. In the same way, measuring intervals (or cubes) in the natural way ($\mu([0,1])=1$) produces Lebesgue measure.
Existence of a certain functor $F:\mathrm{Grpd}\rightarrow\mathrm{Grp}$ Let $\mathrm{Grpd}$ denote the category of all groupoids. Let $\mathrm{Grp}$ denote the category of all groups. Are there functors $F\colon\mathrm{Grpd}\rightarrow \mathrm{Grp}, G\colon\mathrm{Grp}\rightarrow \mathrm{Grpd}$ such that $GF=1_{\mathrm{Grpd}}$. Dear all, I know the question is not easy (at least for me). I don't expect you to solve a problem that is possibly unintresting for you and waste your time on it. I was just asking to see if anyone had seen something similar so that s/he would give me a reference to it Thank you
Such a pair of functors does not exist. Reason 1 (if you accept the empty groupoid) In the category of groups every pair of objects has a morphism between them, while in the category of groupoids there is no morphism from the terminal object to the initial object. It follows that the initial object can't be in the image of $G$. Reason 2 (if you don't accept the empty groupoid) Let $A$ and $B$ be the discrete groupoids on the sets $\{0\}$ and $\{0,1\}$ respectively, and let $f,g:A\to B$ be functors defined by $f(0)=0$ and $g(0)=1$. Now suppose such a pair of functors exists and let $z: F(A)\to F(A)$, $z':F(A)\to F(B)$ be the group homomorphisms sending everything to the identity element. Since $A$ is the terminal object in the category of groupoids it follows that $G(z) = 1_A$. We have $f = f 1_A= GF(f) G(z)= G(F(f)z)=G(z')$ and similarly $g= G(z')$. This leads to $f=g$, which is a contradiction.
Meaning and types of geometry I heard that there's several kind of geometries for instance projective geometry and non euclidean geometry besides the euclidean geometry. So the question is what do you mean by a geometry, do you need truly many geometries and if yes what kind of results we can find in one geometry and not in the others. Thanks a lot.
Different geometries denote different sets of axioms, which in turn result in different sets of conclusions. I'll concentrate on the planar cases. * *Projective geometry is pure incidence geometry. The basic relation expresses whether or not a point lies on a line or not. One of its axioms requires that two different lines will always have a point of intersection, which in the case of parallel lines is usually interpreted as being infinitely far away in the direction of those parallel lines. Projective geometry does not usually come with any metric to measure lengths or angles, but using concepts by Cayley and Klein, many different geometries can be embedded into the projective plane by distinguishing a specific conic as the fundamental object of that geometry. This includes Euclidean and hyperbolic geometry as well as pseudo-Euclidean geometry and relativistic space-time geometry, among others. *Non-Euclidean geometries would in the literal sense be any geometry which doesn't exactly follow Euclid's set of axioms. More specifically, though, it is usually used for geometries which satisfy all of his postulates except for the parallel postulate. This will always include hyperbolic geometry and, depending on how you interpret the other axioms, usually includes elliptic geometry as well. One important difference between these and Euclidean geometry is the way lengths and angles are measured. It turns out that hyperbolic geometry describes the geometry on an infinite surface of constant negative curvature, whereas elliptic geometry is the geometry on a positively curved surface and therefore closely related to spherical geometry.
Graph of $\quad\frac{x^3-8}{x^2-4}$. I was using google graphs to find the graph of $$\frac{x^3-8}{x^2-4}$$ and it gave me: Why is $x=2$ defined as $3$? I know that it is supposed to tend to 3. But where is the asymptote???
Because there is a removable singularity at $x = 2$, there will be no asymptote. You're correct that the function is not defined at $x = 2$. Consider the point $(2, 3)$ to be a hole in the graph. Note that in the numerator, $$(x-2)(x^2 + 2x + 4) = x^3 - 8,$$ and in the denominator $$(x-2)(x+ 2) = x^2 - 4$$ When we simplify by canceling (while recognizing $x\neq 2$), we end with the rational function $$\frac{x^2 + 2x + 4}{x+2}$$ We can confirm that the "hole" at $x = 2$ is a removable singularity by confirming that its limit exists: $$\lim_{x \to 2} \frac{x^2 + 2x + 4}{x+2} = 3$$ In contrast, however, we do see, that there is an asymptote at $x = -2$. We can know this without graphing by evaluating the limit of the function as $x$ approaches $-2$ from the left and from the right: $$\lim_{x \to -2^-} \frac{x^2 + 2x + 4}{x+2} \to -\infty$$ $$\lim_{x \to -2^+} \frac{x^2 + 2x + 4}{x+2} \to +\infty$$ Hence, there exists a vertical asymptote at $x = -2$.
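SymPy confirms both the removable singularity and the asymptote (a sketch assuming SymPy is installed):

```python
import sympy as sp

x = sp.symbols('x')
g = (x**3 - 8) / (x**2 - 4)
print(sp.limit(g, x, 2))                                   # 3: a hole at (2, 3), not an asymptote
print(sp.limit(g, x, -2, '-'), sp.limit(g, x, -2, '+'))    # -oo and oo: vertical asymptote at x = -2
```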
Testing for convergence in Infinite series with factorial in numerator I have the following infinite series that I need to test for convergence/divergence: $$\sum_{n=1}^{\infty} \frac{n!}{1 \times 3 \times 5 \times \cdots \times (2n-1)}$$ I can see that the denominator will eventually blow up and surpass the numerator, and so it would seem that the series would converge, but I am not sure how to test this algebraically given the factorial in the numerator and the sequence in denominator. The recursive function for factorial $n! = n \times (n-1)!$ doesn't seem to simplify things in this case, as I cannot eliminate the $(2n-1)$ in the denominator. Is there a way to find a general equation for the denominator such that I could perform convergence tests (e.g. by taking the integral, limit comparison, etc.)
We have $$a_n = \dfrac{n!}{(2n-1)!!} = \dfrac{n!}{(2n)!} \times 2^n n! = \dfrac{2^n}{\dbinom{2n}n}$$ Use ratio test now to get that $$\dfrac{a_{n+1}}{a_n} = \dfrac{2^{n+1}}{\dbinom{2n+2}{n+1}} \cdot \dfrac{\dbinom{2n}n}{2^n} = \dfrac{2(n+1)(n+1)}{(2n+2)(2n+1)} = \dfrac{n+1}{2n+1}$$ We can also use Stirling. From Stirling, we have $$\dbinom{2n}n \sim \dfrac{4^n}{\sqrt{\pi n}}$$ Use this to conclude, about the convergence/divergence of the series. EDIT $$1 \times 3 \times 5 \times \cdots \times(2n-1) = \dfrac{\left( 1 \times 3 \times 5 \times \cdots \times(2n-1) \right) \times \left(2 \times 4 \times \cdots \times (2n)\right)}{ \left(2 \times 4 \times \cdots \times (2n)\right)}$$ Now note that $$\left( 1 \times 3 \times 5 \times \cdots \times(2n-1) \right) \times \left(2 \times 4 \times \cdots \times (2n)\right) = (2n)!$$ and $$\left(2 \times 4 \times \cdots \times (2n)\right) = 2^n \left(1 \times 2 \times \cdots \times n\right) = 2^n n!$$ Hence, $$1 \times 3 \times 5 \times \cdots \times(2n-1) = \dfrac{(2n)!}{2^n \cdot n!}$$
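A quick numerical look at the terms and partial sums (plain Python, Python 3.8+ for `math.comb`):

```python
from math import comb

a = lambda n: 2 ** n / comb(2 * n, n)        # a_n = n!/(1*3*5*...*(2n-1))
print(sum(a(n) for n in range(1, 200)))      # partial sums settle at a finite value, so the series converges
print(a(21) / a(20), 21 / 41)                # ratio a_{n+1}/a_n = (n+1)/(2n+1), tending to 1/2 < 1
```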
Find all matrices $A$ of order $2 \times 2$ that satisfy the equation $A^2-5A+6I = O$ Find all matrices $A$ of order $2 \times 2$ that satisfy the equation $$ A^2-5A+6I = O $$ My Attempt: We can separate the $A$ term of the given equality: $$ \begin{align} A^2-5A+6I &= O\\ A^2-3A-2A+6I^2 &= O \end{align} $$ This implies that $A\in\{3I,2I\} = \left\{\begin{pmatrix} 3 & 0\\ 0 & 3 \end{pmatrix}, \begin{pmatrix} 2 & 0\\ 0 & 2 \end{pmatrix}\right\}$. Are these the only two possible values for $A$, or are there other solutions?If there are other solutions, how can I find them?
The Cayley-Hamilton theorem states that every matrix $A$ satisfies its own characteristic polynomial; that is, the polynomial whose roots are the eigenvalues of the matrix: $p(\lambda)=\det[A-\lambda\mathbb{I}]$. If you view the polynomial $a^2-5a+6=0$ as a characteristic polynomial with roots $a=2,3$, then any diagonalizable matrix whose eigenvalues are each $2$ or $3$ will satisfy the matrix polynomial $A^2-5A+6\mathbb{I}=0$; that is, any matrix similar to $\begin{pmatrix}3 & 0\\ 0 & 3\end{pmatrix}$, $\begin{pmatrix}2 & 0\\ 0 & 2\end{pmatrix}$, or $\begin{pmatrix}2 & 0\\ 0 & 3\end{pmatrix}$. Note: $\begin{pmatrix}3 & 0\\ 0 & 2\end{pmatrix}$ is similar to $\begin{pmatrix}2 & 0\\ 0 & 3\end{pmatrix}$. To see why this is true, imagine $A$ is diagonalized by some matrix $S$ to give a diagonal matrix $D$ containing the eigenvalues $D_{i,i}=e_i$, $i=1..n$, that is: $A=SDS^{-1}$, $SS^{-1}=\mathbb{I}$. This implies: $A^2-5A+6\mathbb{I}=0$, $SDS^{-1}SDS^{-1}-5SDS^{-1}+6\mathbb{I}=0$, $S^{-1}\left(SD^2S^{-1}-5SDS^{-1}+6\mathbb{I}\right)S=0$, $D^2-5D+6\mathbb{I}=0$, and because $D$ is diagonal, for this to hold each diagonal entry of $D$ must satisfy this polynomial: $D_{i,i}^2-5D_{i,i}+6=0$. But the diagonal entries are the eigenvalues of $A$, and thus, for diagonalizable $A$, the polynomial is satisfied by $A$ iff it is satisfied by the eigenvalues of $A$. (Diagonalizability matters here: the Jordan block $\begin{pmatrix}2 & 1\\ 0 & 2\end{pmatrix}$ has eigenvalues $2,2$ but does not satisfy the equation.)
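A small SymPy illustration of both points (the matrices are just examples):

```python
import sympy as sp

A = sp.Matrix([[2, 1], [0, 3]])              # diagonalizable, eigenvalues 2 and 3
print(A**2 - 5*A + 6*sp.eye(2))              # zero matrix: A satisfies the equation
J = sp.Matrix([[2, 1], [0, 2]])              # a Jordan block: eigenvalue 2 twice, not diagonalizable
print(J**2 - 5*J + 6*sp.eye(2))              # not zero, so diagonalizability really is needed
```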
Summing series with factorials in How do you sum this series? $$\sum _{y=1}^m \frac{y}{(m-y)!(m+y)!}$$ My attempt: $$\frac{y}{(m-y)!(m+y)!}=\frac{y}{(2m)!}{2m\choose m+y}$$ My thoughts were, sum this from zero, get a trivial answer, take away the first term. But actually I don't think this will work very well. This question was originally under probability, but the problem is that I can't sum a series and really has nothing to do with probability (reason for the first comment)
For example, one can write \begin{align} \sum_{y=0}^m\frac{y}{(m-y)!(m+y)!} &= \sum_{k=0}^m\frac{m-k}{k!(2m-k)!} \\ &=\frac{m}{(2m)!}\sum_{k=0}^m{2m \choose k}-\frac{1}{(2m-1)!}\sum_{k=1}^{m}{2m-1\choose k-1} \\ &= \frac{m}{2(2m)!}\left[{2m\choose m}+\sum_{k=0}^{2m}{2m \choose k}\right]-\frac{1}{(2m-1)!}\sum_{k=0}^{m-1}{2m-1\choose k} \\ &= \frac{m}{2(2m)!}\left[{2m\choose m}+\left(1+1\right)^{2m}\right]-\frac{1}{2(2m-1)!}\sum_{k=0}^{2m-1}{2m-1\choose k} \\ &= \frac{m}{2(2m)!}{2m\choose m}+\frac{m\cdot 2^{2m}}{2(2m)!}-\frac{2^{2m-1}}{2(2m-1)!} \\ &= \frac{m}{2(2m)!}{2m\choose m}\;. \end{align} All we have used in the way is that $\displaystyle{n\choose k} ={n\choose n-k}$ and that $\displaystyle(1+1)^n=\sum_{k=0}^n{n\choose k}$.
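An exact-arithmetic check of the closed form for small $m$ (plain Python with `fractions`):

```python
from fractions import Fraction
from math import factorial, comb

def lhs(m):
    return sum(Fraction(y, factorial(m - y) * factorial(m + y)) for y in range(1, m + 1))

def rhs(m):
    return Fraction(m * comb(2 * m, m), 2 * factorial(2 * m))

for m in range(1, 9):
    print(m, lhs(m) == rhs(m))     # True for every m tested
```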
closest pair in N dimensions I have to find the closest pair in $n$ dimensions, and I have a problem with the combine step. I use divide and conquer. I first choose the median $x$ and split the points into a left and a right part, and then find the smallest distance in the left and right parts respectively, $d_l$ and $d_r$. Then $d_m=\min(d_r,d_l)$. Then I have to consider pairs across the hyperplane constructed by the median $x$; the closest such pair must lie in the slab of thickness $2d_m$ around that hyperplane, and I don't understand what to do next. (How do I reduce the dimension?) Here is the ppt that I am following; please explain the combine step (from p9), I have read it for a day and still cannot figure out what it is doing: https://docs.google.com/file/d/0ByMlz1Uisc9OWmxBRUk1LW9oMlk/edit?usp=sharing Thx in advance.
The closest pair was either already found, or lies in the slab of thickness $2d_m$ around the splitting hyperplane, which can only contain a small number of points. There is no need to reduce the dimension: just apply the algorithm recursively to the left part, the right part, and the slab (cycling the direction to which the separating hyperplane is perpendicular); optimality is implicit. Here are other slides.
$\frac{d}{dt} \int_{-\infty}^{\infty} e^{-x^2} \cos(2tx) dx$ Prove that: $\frac{d}{dt} \int_{-\infty}^{\infty} e^{-x^2} \cos(2tx) dx=\int_{-\infty}^{\infty} -2x e^{-x^2} \sin(2tx) dx$ This is my proof: $\forall \ t \in \mathbb{R}$ (the improper integral coverge absolutely $\forall \ t \in \mathbb{R}$) I consider: $g(t)=\int_{-\infty}^{\infty} e^{-x^2} \cos(2tx) dx$. Let $h \ne 0$ $\left| \frac{g(t+h)-g(t)}{h}-\int_{-\infty}^{\infty} -2x e^{-x^2} \sin(2tx) dx \right|\le\int_{-\infty}^{\infty} \left|\frac{\cos(2(t+h)x)-\cos(2tx)}{h}-(-2x)\sin(2tx)\right| e^{-x^2}dx$ For the main value theorem and since $\int_{-\infty}^{\infty} e^{-x^2} dx=\sqrt{\pi}$ $\int_{-\infty}^{\infty} \left|\frac{\cos(2(t+h)x)-\cos(2tx)}{h}-(-2x)\sin(2tx)\right| e^{-x^2}dx=\left|\frac{\cos(2(t+h)\bar{x})-\cos(2t\bar{x})}{h}-(-2\bar{x})\sin(2t\bar{x})\right| \sqrt{\pi}$ $\cos(2tx)$ is derivable in $\bar{x}$ then fixed a $\epsilon>0 \ $ if $\ 0<|h|<\delta$: $\left|\frac{\cos(2(t+h)\bar{x})-\cos(2t\bar{x})}{h}-(-2\bar{x})\sin(2t\bar{x})\right| \sqrt{\pi}<\sqrt{\pi} \ \epsilon$ It is correct? There are other ways? UPDATE probably the proof is incorrect, because when I use the mean value theorem $x$ depends also from $h$ and hence the continuity of $x(h)$ is not obvious, then I can't guarantee the derivability of $\cos(2 t x(h))$ in $x$ for $h \rightarrow 0$. Am I right?
You way looks good. Here's an alternate way: evaluate both integrals, and see that the derivative of one equals the other. For example, $$\begin{align}\int_{-\infty}^{\infty} dx \, e^{-x^2} \, \cos{2 t x} &= \Re{\left [\int_{-\infty}^{\infty} dx \, e^{-x^2} e^{i 2 t x} \right ]}\\ &= e^{-t^2}\Re{\left [\int_{-\infty}^{\infty} dx \, e^{-(x-i t)^2} \right ]}\\ &= \sqrt{\pi}\, e^{-t^2}\end{align}$$ The derivative of this with respect to $t$ is $$\frac{d}{dt} \int_{-\infty}^{\infty} dx \, e^{-x^2} \, \cos{2 t x} = -2 \sqrt{\pi} t e^{-t^2}$$ Now try with taking the derivative inside the integral: $$\begin{align}-2 \int_{-\infty}^{\infty} dx \,x \, e^{-x^2} \,\sin{2 t x} &= -2 \Im{\left [\int_{-\infty}^{\infty} dx\,x \, e^{-x^2} e^{i 2 t x} \right ]}\\ &= -2 e^{-t^2}\Im{\left [\int_{-\infty}^{\infty} dx\,x \, e^{-(x-i t)^2} \right ]}\\ &= -2 e^{-t^2}\Im{\left [\int_{-\infty}^{\infty} dx\,(x+i t) e^{-x^2} \right ]} \\ &= -2 t \sqrt{\pi} e^{-t^2} \end{align}$$ QED
Square and reverse reading of an integer For all $n=\overline{a_k a_{k-1}\ldots a_1 a_0} := \sum_{i=0}^k a_i 10^i\in \mathbb{N}$, where $a_i \in \{0,...,9\}$ and $a_k \neq 0$, we define $f(n)=\overline{a_0 a_1 \ldots a_{k-1} a_k}= \sum_{i=0}^k a_{k-i}10^i$. Is it true that, for all $m=\overline{a_k a_{k-1}\ldots a_1 a_0} \in \mathbb{N}$, we have $f(m\times m)=f(m)\times f(m) \implies$$\forall i \in \{0, \ldots, k\}, a_i \in \{0,1,2,3\}$ ? Example: $f(201)\times f(201)=102 \times 102=10404=f(40401)=f(201\times 201)$. It's true for $m \leq 10^8$.
If $m=...4$ (i.e. $m$ ends in $4$), then $m^2=...6$, so $f(m^2)$ begins with $6$; but $f(m)=4...$, so $f(m)^2=1...$ or $2...$ (because $4^2=16$ and $5^2=25$), and hence $f(m^2)\neq f(m)^2$. The same can be checked explicitly for $m$ ending in $5, \ldots, 8$, and only a little differently for $9$. If $m=...9$, then $m^2=...1$, but $f(m)=9...$, so $f(m)^2=8...$ or $9...$, not $1...$ (as $9^2=81$, $10^2=100$ and the inequality is strict).
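A brute-force search below $10^5$ supports the implication (plain Python; `f` reverses the decimal digits, dropping leading zeros, as in the question):

```python
def f(n):
    return int(str(n)[::-1])

bad = [m for m in range(1, 100000)
       if f(m * m) == f(m) * f(m) and any(d not in "0123" for d in str(m))]
print(bad)   # expected to be empty, consistent with the check up to 10^8 mentioned in the question
```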
How to enumerate the solutions of a quadratic equation When we solve a quadratic equation, and let's assume that the solutions are $x=2$, $x=3$, should I say * *$x=2$ and $x=3$ *$x=2$ or $x=3$. What is the correct way to say it?
You should say $$x=2 \color{red}{\textbf{ or }}x=3.$$ $x=2$ and $x=3$ is wrong since $x$ cannot be equal to $2$ and $3$ simultaneously, since $2 \neq 3$.
Solve for $x$: question on logarithms. The question: $$\log_3 x \cdot \log_4 x \cdot \log_5 x = \log_3 x \cdot \log_4 x \cdot \log_5 x \cdot \log_5 x \cdot \log_4 x \cdot \log_3 x$$ My mother who's a math teacher was asked this by one of her students, and she can't quite figure it out. Anyone got any ideas?
Following up on Jaeyong Chung's answer, and working it out: $$ 1 =\log_3x\log_4x\log_5x$$ $$1=\frac{(\ln x)^3}{\ln3\ln4\ln5}$$ $$(\ln x)^3 = \ln3\ln4\ln5$$ $$(\ln x) = \sqrt[3]{\ln3\ln4\ln5}$$ $$x = \exp\left(\sqrt[3]{\ln3\ln4\ln5}\right) \approx 3.85093$$ EDIT: And, of course, the obvious answer that everyone will overlook: $x=1$ makes both sides of the equation zero. :D
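As a quick numerical check (a sketch only): writing $P=\log_3x\,\log_4x\,\log_5x$, the equation reads $P=P^2$, so both $x=1$ (where $P=0$) and the value above (where $P=1$) should give a residual of essentially zero.

```python
# Check both solutions of P = P^2 numerically (illustrative only).
import math

def residual(x):
    p = math.log(x, 3) * math.log(x, 4) * math.log(x, 5)
    return p - p * p  # LHS minus RHS of the equation

x_star = math.exp((math.log(3) * math.log(4) * math.log(5)) ** (1 / 3))
for x in (1.0, x_star):
    print(x, residual(x))  # both residuals should be ~ 0
```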
Uncountability of the equivalence classes of $\mathbb{R}/\mathbb{Q}$ Let $a,b\in[0,1]$ and define the equivalence relation $\sim$ by $a\sim b\iff a-b\in\mathbb{Q}$. This relation partitions $[0,1]$ into equivalence classes where every class consists of a set of numbers which are equivalent under $\sim$. My textbook states (without proof): The set $[0,1]/\sim$ consists of uncountably many of these classes, where each class consists of countably many members. How can I formally prove this statement?
If you know $\Bbb Q$ is countable, that covers the second half. Then use the fact that a countable union of countable sets is again countable to show that there must be uncountably many classes.
Another Birthday Problem (Probability/Combinatorics) What is the smallest number of people in a room to assure that the probability that at least two were born on the same day of the week is at least 40%? I understand when approaching this type of problem, you simplify it so there's only 365 days. Also, I thought you go about the question by finding the probability that no one is born on the same day of the week. Then you subtract by 1 to get the solution: Therefore, if the first person can have a birthday on any of 365 days, and the second is (365-8) because 1 week has to be removed (since question asks at least two born on the same day). I thought the answer is: $1-\cfrac{365(365-8)...(365-r+1)}{365^{r}}\tag{1}$ The solution is 4 people but when I enter r=4, I get 5.6% which is obviously wrong. Any help is appreciated. Thank you.
Here are the first several results for the same day of the week: $$ \begin{align} 1-\frac77&=0&\text{$1$ person}&(0\%)\\ 1-\frac77\frac67&=\frac17&\text{$2$ people}&(14.29\%)\\ 1-\frac77\frac67\frac57&=\frac{19}{49}&\text{$3$ people}&(38.78\%)\\ 1-\frac77\frac67\frac57\frac47&=\frac{223}{343}&\text{$4$ people}&(65.01\%)\\ \end{align} $$
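Here is a short sketch reproducing the table above with exact fractions (plain Python; purely illustrative):

```python
# Probability that at least two of n people share a weekday (7 equally likely days).
from fractions import Fraction

p_all_distinct = Fraction(1)
for people in range(1, 5):
    p_all_distinct *= Fraction(7 - (people - 1), 7)  # new person avoids earlier weekdays
    p_shared = 1 - p_all_distinct
    print(people, p_shared, float(p_shared))
```

The shared-weekday probability first reaches at least $40\%$ at $4$ people, matching the stated solution.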
Learning trigonometry on my own. I have been teaching myself math, beginning at a grade 10 level, for a while now and need to learn trigonometry from near scratch. I am seeking both books and perhaps lectures on trigonometry and possibly geometry, as some overlap does exist. I am not looking for algebra/precalc textbooks, as my algebra knowledge is quite good. The primary goal is to be prepared for both calculus and linear algebra in the future. I have done some research, but haven't really found much that starts trigonometry from the beginning. Though I am aware of and have used Khan Academy, this is not what I seek. I find things missing from Khan Academy, and things are usually taught too generally. I prefer more traditional methods of learning: more doing, less watching. However, I am not opposed to other suggestions. Thanks. Edit: Though I love Paul's online notes, it's unfortunate he doesn't have much in the way of trig notes. :(
The Indian mathematician Ramanujan learned his trigonometry from Sidney Luxton Loney's "Plane Trigonometry". Since it's a free Google book, what have you got to lose? http://books.google.com/books?id=Mtw2AAAAMAAJ&printsec=frontcover&dq=editions:ix4vRrrEehgC&hl=en&sa=X&ei=Qu2CUeznBaO-yQH2tYCwDA&ved=0CDQQ6AEwAQ#v=onepage&q&f=false
Solve a written problem with matrix I have the following problem described here: The government gives an allocation to the children who benefit from child-care services. The children are split into 3 groups: preschool, first cycle and second cycle. The allocation is different for each group: 2$ for the first cycle, and the others are unknown; let's name them x and y. On the other hand, we analysed the following data from 3 different schools: Rainbow School: 43 preschool children, 160 first cycle children and 140 second cycle. Total allocation: 589$ Cumulus School: 50, 170, 160; total: k (unknown) Nimbus School: 100, 88, 80; total: 556$ Now, I must represent this problem with a matrix equation, and I have tried the following: $$ M = \begin{array}{cccc} 43x & 320 & 140y & 589 \\ 50x & 340 & 160y & k \\ 100x & 176 & 80y & 556 \end{array} $$ I'm not sure that it makes sense and I'm not sure how to solve it either. What now?
There's an easier solution, I think. Take the Rainbow and Nimbus schools alone. This yields a system of equations: $$ 43x+320+140y=589\\ 100x+176+80y=556. $$ Two variables and two equations means you can find the solutions for $x$ and $y$. Following that, you can plug those values into your formula for Cumulus and find $k$. EDIT: if you know that $x$ and $y$ are integers, you can plug in some low values and probably find $x,y$ without solving the system properly.
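If you'd rather let a computer do the elimination, here is a minimal sketch (assuming NumPy; the variable names are just for illustration) that solves the $2\times 2$ system and then evaluates the Cumulus total:

```python
# Solve the Rainbow/Nimbus system for x and y, then compute the Cumulus total k.
import numpy as np

# 43x + 140y = 589 - 320   (Rainbow, after moving the known $2-per-child term over)
# 100x + 80y = 556 - 176   (Nimbus)
A = np.array([[43.0, 140.0],
              [100.0, 80.0]])
b = np.array([589.0 - 320.0, 556.0 - 176.0])
x, y = np.linalg.solve(A, b)

k = 50 * x + 170 * 2 + 160 * y  # Cumulus total, using the $2 first-cycle allocation
print(x, y, k)
```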
Is there a correct order to learning maths properly? I am a high school student but I would like to self-learn higher level maths so is there a correct order to do that? I have learnt pre-calculus, calculus, algebra, series and sequences, combinatorics, complex numbers, polynomials and geometry all at high school level. Where should I go from here? Some people recommended that I learn how to prove things properly, is that a good idea? What textbooks do you recommend?
Quite often the transition to higher, pure math is real analysis. Here proofs really become relevant. I would suggest this free set of downloadable notes from a class given at Berkeley by Fields medal winner (the math analog of the Nobel Prize) Vaughan Jones. https://sites.google.com/site/math104sp2011/lecture-notes They are virtually verbatim and complete as a text. They build gradually so you can get a good base. The material is Prof. Jones's own treatment and the proofs are quite accessible and beautiful. You might just give it a try and see if it works for you.
Stokes' Theorem Let $C$ be the curve of intersection of the cylinder $x^2 + y^2 = 1$ and the given surface $z = f(x,y)$, oriented counterclockwise around the cylinder. Use Stokes' theorem to compute the line integral by first converting it to a surface integral. (a) $\int_C (y \, \mathrm{d}x + z \, \mathrm{d}y + x \, \mathrm{d}z),\quad z=x \cdot y$. I'm having a problem setting up the problem. I appreciate any assistance.
Just to get you started, here are the details for (a). Start by finding the curl of the vector field $\mathbf{F}=\langle y,z,x\rangle$. You get $$\nabla\times\mathbf{F}=\det\begin{pmatrix} \mathbf{i} &\mathbf{j}&\mathbf{k}\\\frac{\partial}{\partial x}&\frac{\partial}{\partial y}&\frac{\partial}{\partial z}\\y&z&x\end{pmatrix}=\langle-1,-1,-1 \rangle$$ Stokes's Theorem says the line integral of $\mathbf{F}$ around $C$ is equal to the surface integral of $\nabla\times\mathbf{F}$ over any surface $S$ having $C$ as a boundary (written $C=\partial S$). The easiest such surface would just be the part of the surface $z=xy$ lying inside the cylinder $x^2+y^2=1$. Call this surface $S$, and parametrize it in polar coordinates by $\mathbf{S}(r,\theta)=\langle r\cos\theta, r\sin\theta, r^2\cos\theta\sin\theta\rangle$ for $r\in[0,1]$ and $\theta\in[0,2\pi)$. The $z$ component here comes from the fact that $z=xy$, and in polar coordinates $x=r\cos\theta$ and $y=r\sin\theta$. Then we just need to compute $$\int\int_S\langle -1,-1,-1\rangle\cdot \mathbf{dS}=\int_0^{2\pi}\int_0^1\langle -1,-1,-1\rangle\cdot\left(\frac{\partial\mathbf{S}}{\partial r}\times\frac{\partial\mathbf{S}}{\partial\theta}\right)\,dr\,d\theta$$ But we have $$\frac{\partial\mathbf{S}}{\partial r}=\langle\cos\theta,\sin\theta,2r\cos\theta\sin\theta \rangle=\langle \cos\theta,\sin\theta,r\sin(2\theta)\rangle$$ and $$\frac{\partial\mathbf{S}}{\partial \theta}=\langle -r\sin\theta, r\cos\theta, r^2(\cos^2\theta-\sin^2\theta) \rangle=r\langle-\sin\theta,\cos\theta,r\cos(2\theta)\rangle$$ Then $$\frac{\partial\mathbf{S}}{\partial r}\times\frac{\partial\mathbf{S}}{\partial\theta}=\det\begin{pmatrix}\mathbf{i}&\mathbf{j}&\mathbf{k}\\\cos\theta&\sin\theta&r\sin(2\theta)\\-r\sin\theta&r\cos\theta&r^2\cos(2\theta)\end{pmatrix}$$ The first component is $r^2(\sin\theta\cos(2\theta)-\cos\theta\sin(2\theta))=-r^2\sin\theta$, the second is $-r^2(\cos\theta\cos(2\theta)+\sin\theta\sin(2\theta))=-r^2\cos\theta$ (the sum/difference formulas give these simplifications), and the third is just $r$. You can tell this is the right orientation because the third component is positive and hence pointing up rather than down. Taking the dot product with $\langle-1,-1,-1\rangle$, our integral becomes $$\int_0^{2\pi}\int_0^1\left(r^2\sin\theta+r^2\cos\theta-r\right)\,dr\,d\theta$$ Doing the integration with respect to $r$ we get $$\int_0^{2\pi}\left(\frac{1}{3}(\sin\theta+\cos\theta)-\frac{1}{2}\right)\,d\theta$$ I'll leave it to you to finish -- with the hint that $\sin\theta$ and $\cos\theta$ each integrate to zero over a full period.
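As an optional cross-check of the setup above, here is a small numerical sketch (assuming SciPy is available; purely illustrative) that evaluates the line integral over $C$ directly and the flux integral from the parametrization, and compares the two:

```python
# Optional numerical cross-check of Stokes' theorem for part (a).
import numpy as np
from scipy.integrate import quad, dblquad

# Line integral over C: x = cos(t), y = sin(t), z = cos(t) sin(t)
def line_integrand(t):
    x, y, z = np.cos(t), np.sin(t), np.cos(t) * np.sin(t)
    dx, dy, dz = -np.sin(t), np.cos(t), np.cos(2 * t)
    return y * dx + z * dy + x * dz

line_val = quad(line_integrand, 0, 2 * np.pi)[0]

# Flux of curl F through S, using the integrand derived above
surf_val = dblquad(lambda r, th: r**2 * np.sin(th) + r**2 * np.cos(th) - r,
                   0, 2 * np.pi, lambda th: 0.0, lambda th: 1.0)[0]

print(line_val, surf_val)  # the two values should agree, as Stokes' theorem promises
```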
How long will it take Marie to saw another board into 3 pieces? So this is supposed to be really simple, and it's taken from the following picture: Text-only: It took Marie $10$ minutes to saw a board into $2$ pieces. If she works just as fast, how long will it take for her to saw another board into $3$ pieces? I don't understand what's wrong with this question. I think the student answered the question wrong, yet my friend insists the student got the question right. I feel like I'm missing something critical here. What am I getting wrong here?
The student is absolutely correct (as Twiceler has correctly shown). The time taken to cut a board into $2$ pieces (that is, $1$ cut): $10$ minutes. Therefore, the time taken to cut a board into $3$ pieces (that is, $2$ cuts): $20$ minutes. The question may have different weird interpretations, as I am happy commented: Time taken to cut it into one piece = $0$ minutes. So time taken to cut it into $3$ pieces = $0 \times 3$ minutes = $0$ minutes. So $0$ can be an answer, but it is illogical, just like the teacher's answer. And as Keltari said: Another correct answer would be 10 minutes. One could infer, "If she works just as fast," that "work" is the complete amount of time to do the job. -Keltari This is logical, but you can be sure that this is not what the question meant; the student has chosen the most relevant interpretation. The teacher's interpretation is mathematically incorrect. The teacher may have set the question for the students to get an idea of arithmetic progressions and may have thought that the students would just answer the question without thinking hard. In many schools, at low grades children are taught that the real numbers consist of all the numbers. Only later, in higher grades, do they learn that complex numbers also exist. (I learned just like that.) So the question was posed as a question on A.P., assuming the students might not be capable of answering it the correct way. Or, as Jared rightly commented: This is simultaneously wonderful and sad. Wonderful for the student who was level-headed enough to answer this question correctly, and sad that this teacher's mistake could be representative of the quality of elementary school math education. – Jared Whatever the reason may be, there is no doubt that the student answered the question accurately and that the teacher's answer is illogical.
I roll 6-sided dice until the sum exceeds 50. What is the expected value of the final roll? I roll 6-sided dice until the sum exceeds 50. What is the expected value of the final roll? I am not sure how to set this one up. This one is not homework, by the way, but a question I am making up that is inspired by one. I'm hoping this will help me understand what's going on better.
The purpose of this answer is to convince readers that the distribution of the roll 'that gets me over $50$' is not necessarily that of a standard 6-sided die roll. Recall that a stopping time is a positive integer valued random variable $\tau$ for which $\{\tau \leq n \} \in \mathcal{F}_n$, where $\mathcal{F}_n = \sigma(X_0, X_1, \cdots, X_n)$ is the canonical filtration with respect to the (time-homogeneous) Markov chain $X_n$. The Strong Markov Property asserts (in this case) that, conditioned on the event $\tau < \infty$, the random variables $X_1$ and $X_{\tau + 1} - X_{\tau}$ are equidistributed. Letting $\tau = -1 + \min\{k \mid X_k > 50\}$ ought to prove the equidistribution with a regular die roll, right? Well, no. What happens is that $\tau + 1$ is a stopping time, but $\tau$ is not, because it looks into the future one timestep. This is just enough to throw off the SMP.
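To see this concretely, here is a small Monte Carlo sketch (plain Python; illustrative only) that estimates the distribution of the final roll and its mean:

```python
# Monte Carlo sketch: distribution of the roll that pushes the running sum past 50.
import random
from collections import Counter

counts = Counter()
trials = 200_000
for _ in range(trials):
    total = 0
    while total <= 50:
        roll = random.randint(1, 6)
        total += roll
    counts[roll] += 1  # the last roll is the one that made the total exceed 50

for face in range(1, 7):
    print(face, counts[face] / trials)  # larger faces occur noticeably more often
print("estimated E[final roll]:", sum(f * c for f, c in counts.items()) / trials)
```

The estimated frequencies are visibly far from the uniform $1/6$, which is exactly the point made above.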
Inf and sup for Lebesgue integrable functions Let $D \subset \mathbb{R}$ be a measurable set of finite measure. Suppose that $f : D \to \mathbb{R}$ is a bounded function. Prove that $$\sup\left\{\int_D \varphi \mid \varphi \leq f \text{ and } \varphi \text{ simple}\right\} = \inf\left\{\int_D \psi \mid f \leq \psi \text{ and } \psi \text{ simple}\right\}$$ iff $f$ is measurable. I was wondering if I could get a hint.
Suppose the inf and the sup are equal. Then for any $\epsilon > 0$ there exist simple functions $\varphi \le f \le \psi$ with the property that $\displaystyle \int_D \psi < \int_D \varphi + \epsilon$. Use this fact to construct sequences $\varphi_n$ and $\psi_n$ of simple functions with the property that $\varphi_n \le f \le \psi_n$ and $\displaystyle \int_D \psi_n - \varphi_n < 4^{-n}$. Let $m^*$ denote Lebesgue outer measure and $m$ Lebesgue measure. Then $$ m^*(\{f - \varphi_n > 2^{-n}\}) \le m^*(\{\psi_n - \varphi_n > 2^{-n}\}) = m(\{\psi_n - \varphi_n > 2^{-n}\}) $$ because $f \le \psi_n$. Chebyshev's inequality gives you $$ m(\{\psi_n - \varphi_n > 2^{-n}\}) \le 2^n \int_D \psi_n - \varphi_n < 2^{-n}. $$ Write $E_n = \{f - \varphi_n > 2^{-n}\}$. Then $m^*(E_n) < 2^{-n}$, and by subadditivity $m^*(\cup_{n \ge N} E_n) \le 2^{1-N}$. The monotonicity of $m^*$ implies $$ m^* \left( \bigcap_{N \ge 1} \bigcup_{n \ge N} E_n \right) = 0. $$ On edit: can you use this fact to show $f$ is measurable?
Bounded integrable function Let $f : \mathbb{R} \to \overline{\mathbb{R}}$ be an integrable funtion. Given $\varepsilon > 0$ show that there is a bounded integrable function $g$ such that $\int |f - g| < \varepsilon$. I was wondering if I could get a hint.
First, as $f$ is integrable, it takes infinite values only on a negligible set, so we can assume that $f$ takes its values in $\Bbb R$. Writing $f=\max\{f,0\}-(\max\{f,0\}-f)$, we can write $f$ as the difference of two measurable, integrable, non-negative functions. So we are reduced to the case where $f\geqslant 0$ is integrable and measurable. For this case, go back to the definition of the Lebesgue integral, and recall that a simple function is bounded.
Is $f(x,y)$ continuous? I want to find out if this function is continuous: $$(x,y)\mapsto \begin{cases}\frac{y\sin(x)}{(x-\pi)^2+y^2}&\text{for $(x,y)\not = (\pi, 0)$}\\0&\text{for $(x,y)=(\pi,0)$}\end{cases}$$ My first idea is that $$\lim_{(x,y)\to(\pi,0)} |f(x,y)-f(\pi,0)|=\lim_{(x,y)\to(\pi,0)}\left|\frac{y\sin(x)}{(x-\pi)^2+y^2}\right|\\\le \lim_{(x,y)\to(\pi,0)}|y\sin(x)|\cdot \left|\frac{1}{(x-\pi)^2+y^2}\right| $$ where the first factor tends to $0$ (since $y\to 0$ and $\sin(\pi)=0$), but I'm not sure if this is leading me where I want. By the way, I assume the function is continuous.
Let $u=x-\pi$. Then you're wondering about $$\lim_{(u,y)\to(0,0)}\frac{y\sin(\pi+u)}{u^2+y^2}=\lim_{(u,y)\to(0,0)}\frac{-y\sin u}{u^2+y^2}=\lim_{(u,y)\to(0,0)}\frac{-yu}{u^2+y^2}\frac{\sin u}{u}$$ Can you show it doesn't go to zero? Look at $y=u$ and $y=-u$, for example.
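A tiny numerical sketch (assuming NumPy; purely illustrative) makes the two path limits visible:

```python
# Evaluate f along the paths y = x - pi and y = -(x - pi) near (pi, 0).
import numpy as np

def f(x, y):
    return y * np.sin(x) / ((x - np.pi)**2 + y**2)

for u in [0.1, 0.01, 0.001]:
    print(u, f(np.pi + u, u), f(np.pi + u, -u))  # values approach -1/2 and +1/2
```

Two different path limits mean the limit at $(\pi,0)$ does not exist, so the function is not continuous there.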
Are all vectors straight lines? Is there a math field that deals with quadratic, cubic etc. vectors? Or a non-linear equivalent of a vector? If so, why are they so much less common than linear vectors?
From your response to my comment in your OP, I'll talk about why vectors are always considered "rays" geometrically. The most important interpretation of a vector is as a list of numbers (we'll say from $\Bbb R^n$.) The list of numbers determines a point in $\Bbb R^n$, and if you imagine connecting this to the origin with a straight line and adding an arrowhead at the end with the point, you have a "ray" representing the vector. Actually you can slide this ray (without changing the direction) all over space and still have the same vector. Strictly speaking it is not a segment because it has a fixed direction. If you put the arrowhead on the other end, it is a different vector. Since the vector only carries the information about a single point, it does not determine anything more complex than a line. In order to "curve" the vector, you'd need to say more about the path it follows on its way from start to finish. Another reason is that the points lying on the line determined by a vector through the origin are a subspace of the big space. Subspaces "aren't curved" in some sense. This property causes $\alpha v,\beta v, (\alpha+\beta)v$ to all lie on the same straight line. Multiplying a vector (ray) in a real vector space by $\lambda \in \Bbb R$ just stretches or contracts the vector. If the vector were curved somehow, then scaling it would change the curvature of the shape. It does not seem desirable that a geometric property such as curvature is ruined merely by scaling the vector, so it doesn't seem very likely to be useful.
A totally ordered set with small well-ordered subsets has to be small? While doing something quite different, the following question came to me: 1) If you have a totally ordered set A such that all well-ordered subsets are at most countable, is it true that A has at most the cardinality of the continuum? 2) More generally, is it true that if a totally ordered set A has well-ordered subsets of cardinality at most $|B|$, then A has at most cardinality $2^{|B|}$?
If $\kappa>2^\omega$, then $\kappa^*$ is a counterexample: all of its well-ordered subsets are finite. (The star indicates the reverse order.) However, if you require all well-ordered and reverse well-ordered subsets to be countable, the answer is yes to the more general question. Let $\langle A,\le\rangle$ be a linear order with $|A|>2^\kappa$. Let $\preceq$ be any well-order on $A$. Then $[A]^2$, the set of $2$-element subsets of $A$, can be partitioned into sets $I_0$ and $I_1$, where $$I_0=\left\{\{a,b\}\in[A]^2:\le\text{ and }\preceq\text{ agree on }\{a,b\}\right\}$$ and $$I_1=\left\{\{a,b\}\in[A]^2:\le\text{ and }\preceq\text{ disagree on }\{a,b\}\right\}\;.$$ By the Erdős-Rado theorem there are an $H\subseteq A$ and an $i\in\{0,1\}$ such that $|H|>\kappa$ and $[H]^2\subseteq I_i$. If $i=0$, $H$ is well-ordered by $\le$, and if $i=1$, $H$ is inversely well-ordered by $\le$. (Nice question, by the way.)