Prove about a right triangle How to prove (using vector methods) that the midpoint of the hypotenuse of a right triangle is equidistant from the three vertices. Defining the right triangle as the one formed by $\vec{v}$ and $\vec{w}$ with hypotenuse $\vec{v} - \vec{w}$, this amounts to proving that $||\frac{1}{2}(\vec{v}+\vec{w})|| = ||\frac{1}{2}(\vec{v}-\vec{w})||$. The only thing that came to my mind is to expand like this: \begin{align} & {}\qquad \left\|\frac{1}{2}(\vec{v}+\vec{w})\right\| = \left\|\frac{1}{2}(\vec{v}-\vec{w})\right\| \\[10pt] &\iff \left|\frac{1}{2}\right|\left\|\vec{v}+\vec{w}\right\| = \left|\frac{1}{2}\right|\left\|\vec{v}-\vec{w}\right\| \\[10pt] &\iff \left\|\vec{v}+\vec{w}\right\| = \left\|\vec{v}-\vec{w}\right\| \\[10pt] &\iff \left\|(v_1+w_1, v_2+w_2,\ldots,v_n+w_n)\right\| = \left\|(v_1-w_1, v_2-w_2,\ldots,v_n-w_n)\right\| \\[10pt] &\iff \sqrt{(v_1+w_1)^2+ (v_2+w_2)^2+\cdots+(v_n+w_n)^2} = \sqrt{(v_1-w_1)^2+ (v_2-w_2)^2+\cdots+(v_n-w_n)^2} \end{align} And now I get stuck. Why? Because expanding it more would result in $$v_1^2+2v_1w_1+w_1^2 +\cdots = v_1^2-2v_1w_1+w_1^2+\cdots$$ And there's obviously a minus there. I know I'm making a mistake but I can't get further. Any help?
If $\vec u$ and $\vec v$ are orthogonal, then $\vec u\cdot\vec v=0$. Expanding $\|\vec u\pm\vec v\|^2 = \|\vec u\|^2 \pm 2\,\vec u\cdot\vec v + \|\vec v\|^2$, the cross term vanishes, so $\|\vec u+\vec v\|^2 = \|\vec u-\vec v\|^2 = \|\vec u\|^2+\|\vec v\|^2$. That is exactly where your stray minus sign goes: orthogonality kills the $\pm 2v_iw_i$ terms.
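A quick numeric sketch of the claim (plain numpy, with an arbitrary orthogonal pair $\vec v,\vec w$; the right-angle vertex is placed at the origin):

```python
# Midpoint of the hypotenuse is equidistant from all three vertices.
import numpy as np

v = np.array([3.0, 0.0, 0.0])
w = np.array([0.0, 4.0, 0.0])           # orthogonal to v, so v.w = 0
m = (v + w) / 2                          # midpoint of the hypotenuse v -> w
print(np.linalg.norm(m - v),             # distance to vertex v
      np.linalg.norm(m - w),             # distance to vertex w
      np.linalg.norm(m))                 # distance to the right-angle vertex 0
```

All three printed distances are $2.5$, half the hypotenuse of the 3-4-5 triangle.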
Leibniz Alternating Series Test Can someone help me find a Leibniz Series (alternating sum) that converges to $5$ ? Does such a series even exist? Thanks in advance!!! I've tried looking at a series of the form $ \sum _ 1 ^\infty (-1)^{n} q^n $ which is a geometric series ... But I get $q>1 $ , which is impossible... Does someone have an idea?
Take any Leibniz (alternating) sequence $x_n$ whose series converges to some nonzero value $c$, and then consider the sequence $(\frac5c\cdot x_n)_n$: its series converges to $5$. For example, $\sum_{n=1}^\infty \frac{(-1)^{n+1}}{n} = \ln 2$, so $\sum_{n=1}^\infty \frac{5}{\ln 2}\cdot\frac{(-1)^{n+1}}{n} = 5$.
Calculate $20^{1234567} \mod 251$ I need to calculate the following $$20^{1234567} \mod 251$$ I am struggling with that because $251$ is a prime number, so I can't simplify anything and I don't have a clue how to go on. Moreover how do I figure out the period of $[20]_{251}$? Any suggestions, remarks, nudges in the right direction are very appreciated.
If you do not know Fermat's little theorem, a painful but, I think, still workable method is to observe that $2^{10} = 1024 \equiv 20$ and $10^3 = 1000 \equiv -4 \pmod{251}$. Then we may proceed like this: $20^{1234567} = 2^{1234567}10^{1234567} = 2^{123456\times 10 + 7}10^{411522\times 3 + 1} = 1280\times 1024^{123456}1000^{411522} \equiv 1280\times 20^{123456}4^{411522}$ (the exponent $411522$ is even, so $(-4)^{411522}=4^{411522}$). Observe that after one pass, we are still left with powers of $2$ and powers of $20$ that we can handle with the same congruences. We still have to make some calculations, obviously (divisions etc.), but it is at least not as hopeless as before.
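With Fermat's little theorem the whole thing collapses: $20^{250}\equiv 1 \pmod{251}$, and $1234567 = 4938\cdot 250 + 67$, so $20^{1234567}\equiv 20^{67} \pmod{251}$. A one-line check (Python's built-in three-argument `pow` does modular exponentiation):

```python
# Both reductions agree: the full exponent and the exponent mod 250.
print(pow(20, 1234567, 251), pow(20, 67, 251))
```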
Geometric Distribution $P(X\ge Y)$ I need to show that if $X$ and $Y$ are iid and geometrically distributed, then $P(X\ge Y) = \frac{1}{2-p}$. The joint pmf is $f_{XY}(x,y)=p^2(1-p)^{x+y}$, and I think the only way to do this is to use a double sum: $\sum_{y=0}^{n}\sum_{x=y}^m p^2(1-p)^{x+y}$, which leads to me getting quite stuck. Any suggestions?
It is easier to use symmetry: $$ 1 = \mathbb{P}\left(X<Y\right) +\mathbb{P}\left(X=Y\right) + \mathbb{P}\left(X>Y\right) $$ The first and the last probability are the same, due to the symmetry, since $X$ and $Y$ are iid. Thus: $$ \mathbb{P}\left(X<Y\right) = \frac{1}{2} \left(1 - \mathbb{P}\left(X=Y\right) \right) $$ Thus: $$ \mathbb{P}\left(X\geqslant Y\right) = \mathbb{P}\left(X>Y\right) + \mathbb{P}\left(X=Y\right) = \frac{1}{2} \left(1 + \mathbb{P}\left(X=Y\right) \right) $$ The probability of $X=Y$ is easy: $$ \mathbb{P}\left(X=Y\right) = \sum_{n=0}^\infty \mathbb{P}\left(X=n\right) \mathbb{P}\left(Y=n\right) = \sum_{n=0}^\infty p^2 (1-p)^{2n} = \frac{p^2}{1-(1-p)^2} = \frac{p}{2-p} $$ Putting it together: $$ \mathbb{P}\left(X\geqslant Y\right) = \frac{1}{2}\left(1 + \frac{p}{2-p}\right) = \frac{1}{2}\cdot\frac{2}{2-p} = \frac{1}{2-p} $$
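The closed form is easy to corroborate by simulation (a Monte Carlo sketch; it assumes the convention that the geometric variable counts failures before the first success, matching the pmf $p(1-p)^k$ on $k=0,1,2,\ldots$):

```python
# Monte Carlo sanity check of P(X >= Y) = 1/(2-p) for iid geometrics.
import random

p, trials = 0.3, 200_000

def geom():
    k = 0
    while random.random() > p:   # count failures before the first success
        k += 1
    return k

hits = sum(geom() >= geom() for _ in range(trials))
print(hits / trials, 1 / (2 - p))   # the two numbers should be close
```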
Proof of an equivalence theorem about left invertible matrices I am taking a course in Matrix Theory and we have a theorem that states (among other things) that: The following conditions on the matrix $A$ of size $m \times n$ are equivalent: (1) $A$ has a left inverse (2) The system $Ax=b$ has at most one solution for any column vector $b$. ... The proof that (1) $\implies$ (2) goes like this: If $Ax=b$ and $V$ is a left inverse, then $VAx=Vb \implies x=Vb$, so we have at most one solution (if any). The thing is, left inverses are not unique, right? Take $A = \left( \begin{matrix} 1 \\ 0 \end{matrix} \right)$. That has left inverses $V_1= \left( \begin{matrix} 1 & 0 \end{matrix} \right) $ and $ V_2 = \left( \begin{matrix} 1 & 1 \end{matrix} \right)$. Does this mean that the proof is wrong, or am I missing something?
Existence of a left inverse means $A$ is one-to-one: if $Ax_1 = Ax_2$, then $VAx_1 = VAx_2$, i.e., $x_1=x_2$. So a solution, if it exists, must be unique. Non-uniqueness of $V$ is not a problem here: if $Ax=b$ actually has a solution $x$, then $x = Vb$ for every left inverse $V$, so all left inverses agree on vectors of the form $b=Ax$.
Product of pairwise coprime integers divides $b$ if each integer divides $b$ Let $a_1,\ldots,a_n$ be pairwise coprime. That is, $\gcd(a_i, a_k) = 1$ for distinct $i,k$. I would like to show that if each $a_i$ divides $b$ then so does the product. I can understand intuitively why it's true - just not sure how to formulate the proof exactly. I want to say if we consider the prime factorizations of each $a_i$, then no two prime factorizations share any prime numbers. So the prime factorization of the product $a_1\cdots a_n$ must appear in the prime factorization of $b$. Is this correct? Or, at least if the idea is correct, any way to formulate it more clearly?
Use unique prime factorization to write each $a_i$ over a common list of primes $p_1,\ldots,p_k$: $$a_i = \prod_{j=1}^k p_j^{\alpha_{ij}},$$ where $k$ is chosen large enough to cover all prime factors occurring across the $a_i$, and $\alpha_{ij} = 0$ when $p_j$ is not a factor of $a_i$. Pairwise coprimality says exactly that for each $j$, at most one index $i$ has $\alpha_{ij} > 0$; hence $$\prod_{i=1}^n a_i = \prod_{j=1}^k p_j^{\sum_i \alpha_{ij}} = \prod_{j=1}^k p_j^{\max_i \alpha_{ij}}.$$ Since each $a_i$ divides $b$, the exponent of $p_j$ in $b$ is at least $\alpha_{ij}$ for every $i$, hence at least $\max_i \alpha_{ij}$. So $b = \prod_{j=1}^k p_j^{\beta_j}\cdot d$ for some integer $d$ with $\beta_j \ge \max_i \alpha_{ij}$, and therefore $\prod_{i=1}^n a_i$ divides $b$. This was to be shown.
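A brute-force sanity check of the statement on one number (the choice $b=360$ and triples of divisors are arbitrary):

```python
# Pairwise coprime divisors of b have a product that again divides b.
from itertools import combinations
from math import gcd, prod

b = 360
divisors = [d for d in range(2, b + 1) if b % d == 0]
for trio in combinations(divisors, 3):
    if all(gcd(x, y) == 1 for x, y in combinations(trio, 2)):
        assert b % prod(trio) == 0
print("ok")
```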
Another two hard integrals Evaluate : $$\begin{align} & \int_{0}^{\frac{\pi }{2}}{\frac{{{\ln }^{2}}\left( 2\cos x \right)}{{{\ln }^{2}}\left( 2\cos x \right)+{{x}^{2}}}}\text{d}x \\ & \int_{0}^{1}{\frac{\arctan \left( {{x}^{3+\sqrt{8}}} \right)}{1+{{x}^{2}}}}\text{d}x \\ \end{align}$$
For the second integral, consider the more general form $$\int_0^1 dx \: \frac{\arctan{x^{\alpha}}}{1+x^2}$$ (I do not understand what is special about $3+\sqrt{8}$.) Taylor expand the denominator and get $$\begin{align} &=\int_0^1 dx \: \arctan{x^{\alpha}} \sum_{k=0}^{\infty} (-1)^k x^{2 k} \\ &= \sum_{k=0}^{\infty} (-1)^k \int_0^1 dx \: x^{2 k} \arctan{x^{\alpha}} \end{align}$$ Now we can simply evaluate these integrals in terms of polygamma functions: $$\int_0^1 dx \: x^{2 k} \arctan{x^{\alpha}} = \frac{\psi\left(\frac{\alpha+2 k+1}{4 \alpha}\right)-\psi\left(\frac{3 \alpha+2 k+1}{4 \alpha}\right)+\pi }{8 k+4}$$ where $$\psi(z) = \frac{d}{dz} \log{\Gamma{(z)}}$$ and we get that $$\int_0^1 dx \: \frac{ \arctan{x^{\alpha}}}{1+x^2} = \frac{\pi^2}{16} - \frac{1}{4} \sum_{k=0}^{\infty} (-1)^k \frac{\psi\left(\frac{3 \alpha+2 k+1}{4 \alpha}\right)-\psi\left(\frac{\alpha+2 k+1}{4 \alpha}\right) }{2 k+1} $$ This is about as close as I can get. The sum agrees with the numerical integration out to 6 sig figs at about $10,000$ terms for $\alpha = 3+\sqrt{8}$.
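For what it's worth, the claimed agreement is easy to reproduce numerically (a sketch using mpmath, which provides the digamma function as `psi(0, z)`; the 10,000-term partial sum takes a little while to run):

```python
from mpmath import mp, mpf, sqrt, atan, quad, psi, pi

mp.dps = 10
alpha = 3 + sqrt(8)
direct = quad(lambda x: atan(x**alpha) / (1 + x**2), [0, 1])

s = mpf(0)
for k in range(10000):
    s += (-1)**k * (psi(0, (3*alpha + 2*k + 1) / (4*alpha))
                    - psi(0, (alpha + 2*k + 1) / (4*alpha))) / (2*k + 1)
series = pi**2 / 16 - s / 4
print(direct, series)   # agree to ~6 significant figures
```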
Show that $f = 0$ if $\int_a^b f(x)e^{kx}dx=0$ for all $k$ The problem is to show that $f=0$ whenever $f\in C[a,b]$ and $$\int_a^bf(x)e^{kx}dx =0, \hspace{1cm}\forall k\in\mathbb{N}.$$ Can someone help me? Thank you!
First, letting $u=e^x$ we note that $\int_{e^a}^{e^b}f(\ln u)u^{k-1}du=0$ for all $k\in\mathbb N$. Next, see the following old questions: * Nonzero $f \in C([0, 1])$ for which $\int_0^1 f(x)x^n dx = 0$ for all $n$ * If $f$ is continuous on $[a , b]$ and $\int_a^b f(x) p(x)dx = 0$ then $f = 0$ * problem on definite integral
Proof of a comparison inequality I'm working on a problem that's been giving me some difficulty. I will list it below and show what work I've done so far: If a, b, c, and d are all greater than zero and $\frac{a}{b} < \frac{c}{d}$, prove that $\frac{a}{b} < \frac{a + c}{b + d} < \frac{c}{d}$. Alright, so far I think the first step is to notice that $\frac{a}{b+d} + \frac{c}{b+d} < \frac{a}{b} + \frac{c}{d}$ but after that I'm not sure how to continue. Any assistance would be a great help!
Since $a,b,c,d>0$, $$\frac ab<\frac cd\implies ad<bc.$$ Then $$\frac{a+c}{b+d}-\frac ab=\frac{b(a+c)-a(b+d)}{b(b+d)}=\frac{bc-ad}{b(b+d)}>0 \text{ as } ad<bc \text{ and } a,b,c,d>0 $$ So, $$\frac{a+c}{b+d}>\frac ab$$ Similarly, $$\frac{a+c}{b+d}-\frac cd=\frac{ad-bc}{d(b+d)}<0\implies \frac{a+c}{b+d}<\frac cd$$
Average of function, function of average I'm trying to find all functions $f : \mathbb{R} \to \mathbb{R}$ such that, for all $n > 1$ and all $x_1, x_2, \cdots, x_n \in \mathbb{R}$: $$\frac{1}{n} \sum_{t = 1}^n f(x_t) = f \left ( \frac{1}{n} \sum_{t = 1}^n x_t \right )$$ My intuition is that this is only true if $f$ is a linear function. I started from the relation: $$\sum_{t = 1}^n f(x_t) = f \left ( \sum_{t = 1}^n x_t \right )$$ That is, $f$ is additive. Then, by multiplying each side by $\frac{1}{n}$, we obtain: $$\frac{1}{n} \sum_{t = 1}^n f(x_t) = \frac{1}{n} f \left ( \sum_{t = 1}^n x_t \right )$$ And hence, any $f$ which has the property that $f(ax) = a f(x)$ (a linear function) will work. And since all linear functions are trivially additive, any linear function is a solution to my relation. But all I have done is prove that linear functions are solutions; how should I go about showing that only linear functions are solutions? Is it valid to just "go backwards" in my argument, proving that if $f$ is a solution, then it must be linear? I feel it is not sufficient, since I only have implication and not equivalence. How do I proceed? I think the $\frac{1}{n}$ term was added to confuse me, since without it, this would be straightforward.
What about a function $f$ satisfying $f(x+y)=f(x)+f(y)$ for $x,y\in \mathbb{R}$ and $f(a x)=af(x)$ for $a\in \mathbb{Q}, x\in \mathbb{R}$? Such a function (a solution of Cauchy's functional equation) satisfies your averaging identity, but it does not have to be continuous - it can be so wild that it is not even measurable. And in fact, if such a function is measurable, then it is a linear function in the usual sense. According to Wikipedia, http://en.wikipedia.org/wiki/Harmonic_function#The_mean_value_property all locally integrable continuous functions that satisfy the mean value property (in 1D this means $\frac{1}{2}(f(x)+f(y)) = f(\frac{x+y}{2})$) are infinitely differentiable and harmonic; in one dimension the harmonic functions are exactly the affine ones. So I guess this answers your question.
How to solve this gamma function I know that $\Gamma (\frac {1}{2})=\sqrt \pi$, but I do not understand how to evaluate the expressions $$\Gamma (m+\frac {1}{2})$$ $$\Gamma (-m+\frac {1}{2})$$ Is there any general relation to evaluate them, for example $\Gamma (1+\frac {1}{2})$ and $\Gamma (-2+\frac {1}{2})$?
By the functional equation $$\Gamma(z+1)=z \, \Gamma(z)$$ (which is easily proved from the integral definition of $\Gamma$ by integration by parts, and may be used to analytically continue the function to negative arguments) we find $$\Gamma(1 + \frac{1}{2}) = \frac{1}{2} \Gamma(\frac{1}{2}) = \frac{\sqrt\pi}{2}$$ and, reading the equation as $\Gamma(z)=\Gamma(z+1)/z$, $$\Gamma (-2+\frac {1}{2}) = \frac{1}{-\frac32}\,\Gamma (-1+\frac {1}{2}) = \frac{1}{\left(-\frac32\right)\left(-\frac12\right)}\,\Gamma (\frac{1}{2}) = \frac{4}{3}\sqrt\pi$$
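Iterating the functional equation gives closed forms for both families (standard identities, recorded here for convenience; they reproduce the two values above at $m=1$ and $m=2$):

$$\Gamma\left(m+\tfrac12\right) = \frac{(2m)!}{4^m\, m!}\,\sqrt{\pi}, \qquad \Gamma\left(-m+\tfrac12\right) = \frac{(-4)^m\, m!}{(2m)!}\,\sqrt{\pi} \qquad (m = 0, 1, 2, \ldots)$$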
Ring theory and isomorphism I have a problem as follows: Let $S = \left\{\begin{bmatrix} a & 0 \\ 0 & a \\ \end{bmatrix} \,\middle|\, a \in R\right\}$, where $R$ is the set of real numbers. Then $S$ is a ring under matrix addition and multiplication. Prove that $R$ is isomorphic to $S$. What is the key to proving it? The definition of a ring? I have no idea how to connect the ring structure to the real numbers.
Hint: Identify $a$ with $\begin{bmatrix} a & 0 \\ 0 & a \\ \end{bmatrix}$ for each $a$ in $R$.
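Spelling the hint out (a routine verification, included for completeness): define $\varphi : R \to S$ by $\varphi(a) = \begin{bmatrix} a & 0 \\ 0 & a \end{bmatrix}$. Then

$$\varphi(a+b) = \begin{bmatrix} a+b & 0 \\ 0 & a+b \end{bmatrix} = \varphi(a)+\varphi(b), \qquad \varphi(ab) = \begin{bmatrix} ab & 0 \\ 0 & ab \end{bmatrix} = \varphi(a)\,\varphi(b),$$

and $\varphi$ is surjective by the definition of $S$ and injective since $\varphi(a)=\varphi(b)$ forces $a=b$. So $\varphi$ is a ring isomorphism.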
Cofactor theorem Shilov on page 12 says the determinant $D=a_{1j}A_{1j}+a_{2j}A_{2j}+\cdots+a_{nj}A_{nj}\ \ (I)$ is an identity in the quantities $a_{1j}, a_{2j},\ldots, a_{nj}$. Therefore it remains valid if we replace $a_{ij}\ (i = 1, 2,\ldots, n)$ by any other quantities. The quantities $A_{1j}, A_{2j},\ldots, A_{nj}$ remain unchanged when such a replacement is made, since they do not depend on the elements $a_{ij}$. The quantity $A_{ij}$ is called the cofactor of the element $a_{ij}$ of the determinant $D$. My question: from equation (I) we see that all the $A$s are multiplied with specific $a$s, not just any $a$s, so how did he conclude the above statement?
The identity (I) uses the elements $a_{1j}, a_{2j}, \ldots$ of column $j$, which of course have specific values in the given matrix. The point is that the cofactors $A_{1j},\ldots,A_{nj}$ are computed entirely from the entries outside column $j$, so they do not involve the $a_{ij}$ at all. What the statement means is that if the entries of column $j$, and only those entries, are changed in the matrix, then identity (I), with the same cofactors, still computes the determinant of the new matrix.
socles of semiperfect rings For readers' benefit, a few definitions for a ring $R$. The left (right) socle of $R$ is the sum of all minimal left (right) ideals of $R$. It may happen that it is zero if no minimals exist. A ring is semiperfect if all finitely generated modules have projective covers. Is there a semiperfect ring with zero left socle and nonzero right socle? Someone asked me this recently, and nothing sprang to mind either way. In any case, I'm interested in a construction method that is amenable for creating imbalanced socles like this. If you happen to know the answer when semiperfect is strengthened to be 'some side perfect' or 'semiprimary' or 'some side Artinian', then please include it as a comment. (Of course, a ring will have a nonzero socle on a side on which it is Artinian.)
Here is an example due to Bass of a left perfect ring that is not right perfect. See Lam's First Course, Example 23.22 for an exposition (which is opposite to the one I use below). Let $k$ be a field, and let $S$ be the ring of (say) column-finite $\mathbb{N} \times \mathbb{N}$-matrices over $k$. Let $E_{ij}$ denote the usual "matrix units." Let $J = \operatorname{Span}\{E_{ij} : i > j\}$ be the set of strictly lower-triangular matrices in $S$ that have only finitely many nonzero entries, all below the diagonal. Let $R = k \cdot 1 + J \subseteq S$. Then $R$ is a local ring with radical $J$, and it's shown to be left perfect in the reference above. As I mentioned in my comment above, this means that $R$ is semiperfect and has nonzero right socle. I claim that $R$ has zero left socle. Certainly any minimal left ideal will lie inside the unique maximal left ideal $J$. It suffices to show that every $a \in J \setminus \{0\}$ has left annihilator $ann_l(a) \neq J$. For then $Ra \cong R/ann_l(a) \not \cong R/J = k$. Indeed, if $a \in J$ then $a = \sum c_{ij} E_{ij}$ for some scalars $c_{ij}$ that are almost all zero. Let $r$ be maximal such that there exists $c_{rj} \neq 0$, and let $j_1 < \cdots < j_s$ be the finitely many indices with $c_{r j_p} \neq 0$. It follows that $E_{r+1,r} \in J$ with $$E_{r+1,r}a = E_{r+1,r} \sum c_{ij} E_{ij} = \sum c_{ij} E_{r+1,r} E_{ij} = \sum_p c_{r j_p} E_{r+1,j_p} \neq 0.$$ In particular, $E_{r+1,r} \in J \setminus ann_l(a)$ as desired.
Not following what's happening with the exponents in this proof by mathematical induction. I'm not understanding what's happening in this proof. I understand induction, but not why $2^{k+1}=2*2^{k}$, and how that then relates to $k^{2}+k^{2}$. Actually, I really don't follow any of the induction step - what's going on with that algebra? Thanks! Let $P(n)$ be the statement $2^n > n^2$, if $n$ is an integer greater than 4. Basis step: $~~~~~P(5)$ is true because $2^5 = 32 > 5^2 = 25$ Inductive step: $~~~~~~$Assume that $P(k)$ is true, i.e. $2^k > k^2$. We have to prove that $P(k+1)$ is true. Now $$\begin{aligned} 2^{k+1} & = 2 \cdot 2^k\\ & > 2 \cdot k^2\\ & = k^2 + k^2\\ & > k^2 + 4k\\ & \geq k^2 +2k+1\\ & = (k+1)^2 \end{aligned}$$ $~~~~~$ because $k>4$ $~~~~~$ Therefore $P(k+1)$ is true Hence from the principle of mathematical induction the given statement is true.
I have copied the induction-step table format from your question and will reformat it as I had to do in my high school geometry class. I find it very helpful to get away from the table as you gave it. Let $P(n)$ be the statement that $2^n>n^2$, for $n>4$. The basis step holds: $2^5 = 32 > 25 = 5^2$. $~~~~~~$Assume that $P(k)$ is true, i.e. $2^k > k^2$ with $k>4$. We have to prove that $P(k+1)$ is true. Now \begin{array}{ll} \text{STATEMENT} & \text{REASON} \\ 2^{k+1} = 2 \cdot 2^k & \text{Induction step of the definition of }2^n. \\ 2^k > k^2 & \text{This is the induction hypothesis.} \\ 2 \cdot 2^k > 2 \cdot k^2 = k^2 + k^2 & \text{Multiplying by a positive number preserves inequalities,} \\ & \text{plus the distributive law.} \\ k^2 + k^2 > k^2 + 4k & k>4\text{ by hypothesis, so } k^2 > 4k. \\ k^2 + 4k \geq k^2 + 2k + 1 & 4k \geq 2k+1 \text{ since } 2k-1>0. \\ k^2 + 2k + 1 = (k+1)^2 & \text{Expanding the square.} \\ \end{array} $~~~~~~$ This shows $P(k)\implies P(k+1)$, and the principle of finite induction completes the proof that $P(n)$ is true for all $n>4$.
Find all polynomials $\sum_{k=0}^na_kx^k$, where $a_k=\pm2$ or $a_k=\pm1$, and $0\leq k\leq n,1\leq n<\infty$, such that they have only real zeroes Find all polynomials $\sum_{k=0}^na_kx^k$, where $a_k=\pm2$ or $a_k=\pm1$, and $0\leq k\leq n,1\leq n<\infty$, such that they have only real zeroes. I've been thinking about this question, but I've come to the conclusion that I don't have the requisite math knowledge to actually answer it. An additional, less-important question. I'm not sure where this problem is from. Can someone find a source? edit I'm sorry, I have one more request. If this can be evaluated computationally, can you show me a pen and paper way to do it?
Let $\alpha_1,\alpha_2,\ldots,\alpha_n$ be the real roots. We know: $$\sum \alpha_i^2=\Big( \sum \alpha_i \Big)^2-2\sum_{i<j} \alpha_i\alpha_j= \left(\frac{a_{n-1}}{a_n}\right)^2-2\left(\frac{a_{n-2}}{a_n}\right)\le 4+4=8$$ On the other hand, by the AM-GM inequality: $$\sum \alpha_i^2\ge n \sqrt[n]{\Big|\prod\alpha_i\Big|^2}=n\sqrt[n]{\left|\frac{a_0}{a_n}\right|^2}\ge n\sqrt[n]{\frac{1}{4}}$$ So $8\ge n \sqrt[n]{\frac{1}{4}} \Rightarrow n\le9$. The rest is finite enough.
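The finite remainder can indeed be handled by machine, as the asker suspected (a brute-force sketch with numpy; with the bound $n\le 9$ it enumerates about 1.4 million sign patterns, so it takes a few minutes - restrict the range of $n$ for a quicker run):

```python
# Enumerate coefficient vectors with entries in {±1, ±2} and keep the
# polynomials whose roots are all real (numerically, up to a tolerance).
import itertools
import numpy as np

def all_real(coeffs, tol=1e-8):
    return bool(np.all(np.abs(np.roots(coeffs).imag) < tol))

for n in range(1, 10):
    for coeffs in itertools.product([1, -1, 2, -2], repeat=n + 1):
        if all_real(coeffs):
            print(n, coeffs)
```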
Why doesn't this argument show that every locally compact Hausdorff space is normal? In proving that a locally compact Hausdorff space $X$ is regular, we can consider the one-point compactification $X_\infty$ (this is not necessary, see the answer here, but bear with me). Since $X$ is locally compact Hausdorff, $X_\infty$ is compact Hausdorff. As a result, $X_\infty$ is normal. Imitating the idea in my proof in the link above and taking into consideration the correction pointed out in the answer, let $A,B\subseteq X$ be two disjoint, closed sets in $X$, and consider $X_\infty$. Since $X_\infty$ is normal, there are disjoint open sets $U,V \subseteq X_\infty$ such that $A\subseteq U$ and $B \subseteq V$. Then considering $X$ as a subset of $X_\infty$, we can take the open sets $U \cap X$ and $V \cap X$ as disjoint, open subsets of $X$ that contain $A$ and $B$, respectively...or so I thought. I see from the answers to this question that this does not succeed in proving that a locally compact Hausdorff space is normal, since this is not true. So my question is simply: why does the above proof fail? Thanks.
$A$ and $B$ are closed in $X$. They need not be closed in the compactification $X_\infty$: if $A$ is not compact, its closure in $X_\infty$ picks up the extra point $\infty$. You could try to fix this by replacing them with their closures in $X_\infty$, but then these need not be disjoint - if neither $A$ nor $B$ is compact, both closures contain $\infty$.
Factorization problem Find $m + n$ if $m$ and $n$ are natural numbers such that: $$\frac {m+n} {m^2+mn+n^2} = \frac {4} {49}\;.$$ My reasoning: Say: $$m+n = 4k$$ $$m^2+mn+n^2 = 49k$$ It follows:$$(m+n)^2 = (4k)^2 = 16k^2 \Rightarrow m^2+mn+n^2 + mn = 16k^2 \Rightarrow mn = 16k^2 - 49k$$ Since: $$mn\gt0 \Rightarrow 16k^2 - 49k\gt0 \Rightarrow k\gt3$$ Then no more progress.
Observe that $k$ must be a positive integer: since $\gcd(4,49)=1$, the equation $49(m+n)=4(m^2+mn+n^2)$ forces $4 \mid m+n$, and $m+n>0$. We know that $m, n$ are the roots of the quadratic equation $$X^2 - 4kX + (16k^2 - 49k) = 0$$ The roots, from the quadratic formula, are $$ \frac { 4k \pm \sqrt{(4k)^2 - 4(16k^2 - 49k) }} {2} = 2k \pm \sqrt{ 49k - 12k^2}$$ The expression under the square root must be a perfect square. Try $k = 1$: $49 k - 12k^2 = 37$ is not a perfect square. Try $k = 2$: $49k - 12k^2 = 50$ is not a perfect square. Try $k=3$: $49k-12k^2 = 39$ is not a perfect square. Try $k=4$: $49k-12k^2 = 4$ is a perfect square. This leads to the roots $6$ and $10$, which have sum $16$. For $k\geq 5$, $49k - 12k^2 < 0$, so there is no further solution. (The case $k \leq -1$ cannot occur since $m+n=4k>0$.)
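Checking the solution found at $k=4$:

$$m=6,\; n=10:\qquad \frac{m+n}{m^2+mn+n^2} = \frac{16}{36+60+100} = \frac{16}{196} = \frac{4}{49},\qquad m+n = 16.$$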
Classifying mathematical "coincidences" Doing homework a few years ago, I noticed that the sum of the squares of $88$ and $33$ is $8833$. What would this kind of mathematical "curiosity" be called? Does this or any other similar coincidence have any deeper meaning or structure?
Alf van der Poorten wrote a paper, The Hermite-Serret algorithm and $12^2+33^2$. He notes a number of similar curiosities, such as $$25840^2+43776^2=2584043776$$ and $$1675455088^2+3734621953^2=16754550883734621953$$ and develops the theory behind these things, including a discussion of the way to go from a solution of $z^2\equiv-1\pmod n$ to a solution of $x^2+y^2=n$. The paper also appears in Cryptography and Computational Number Theory (Singapore, 1999), 129–136, Progr. Comput. Sci. Appl. Logic, 20, Birkhäuser, Basel, 2001. The Math Reviews data would be MR1944724 (2003i:11047) See also http://oeis.org/A064942
$9^n \equiv 1 \mod 8$ I would like someone to check this inductive proof (sketch). The base case is clear. For the inductive step, it follows that $8 \mid 9^{n+1} - 9 = 9(9^n - 1)$ by the inductive hyp. So $9^{n+1} \equiv 9 \equiv 1 \mod 8$. Feedback would be appreciated.
I'm assuming you mean what you say when you state your work as a proof "sketch". The base case is clear. For the inductive step, it follows that $8 \mid 9^{n+1} - 9 = 9(9^n - 1)$ by the inductive hypothesis. So $9^{n+1} \equiv 9 \equiv 1 \mod 8$. In your final write-up, I'd suggest you "fill in" a bit of detail: e.g., "walk through" the base case, at least stating that the congruence holds for $n=1$, or perhaps "for $n = 1$, clearly $9\equiv 1 \pmod 8$". Then I suggest you make your inductive hypothesis explicit: "Assume that $9^n \equiv 1 \pmod 8$," and then finish with "for the inductive step... [what you wrote]". If your task was to prove the congruence by induction on $n$, then you've done a fine job of sketching such a proof. If you can use other strategies, then bonsoon's suggestion is worth considering: "Or note that since $9 \equiv 1 \pmod 8$, we have $9^n\equiv 1^n = 1 \pmod 8.$"
Can every diagonalizable matrix be diagonalized into the identity matrix? I'm a chemistry major and I haven't taken much math, but this came up in a discussion of quantum chemistry and my professor said (not very confidently) that if a matrix is diagonalizable, then you should be able to diagonalize it to the identity matrix. I suspect this is true for symmetrical matrices, but not all matrices. Is that correct?
Take the $0$ $n\times n$ matrix. It's already diagonal (and symmetric) but certainly can't be diagonalized to the identity matrix. More generally, diagonalization preserves eigenvalues: the diagonal form of $A$ has the eigenvalues of $A$ on its diagonal. So a diagonalizable matrix diagonalizes to the identity only if all its eigenvalues equal $1$, and the only diagonalizable matrix with that property is the identity itself.
Multinomial Coefficient Power on a number: using the binomial coefficient $_nC_r=\binom nr$, find the coefficient of $(wxyz)^2$ in the expansion of $(2w-x+3y+z-2)^n$. The answer key says it's $n=12$ and $r= 2\times2\times2\times2\times4$ in one of the equations for $_nC_r$. Why is there a $4$ there? Is it because there are $4$ terms?
The coefficient of $a^2b^2c^2d^2e^{n-8}$ in $(a+b+c+d+e)^n$ is the multinomial coefficient $\binom n{2,2,2,2,n-8}$. The $n-8$ is needed because the exponents need to add up to $n$; anything else would make the multinomial coefficient undefined. For $n=12$ you get $n-8=4$, so I suppose that is where your $4$ comes from. Now put $a=2w,b=-x,c=3y,d=z,e=-2$ to get the real answer to your question.
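For instance, with $n=12$ the substitution gives the coefficient of $(wxyz)^2$ explicitly:

$$\binom{12}{2,2,2,2,4}\,(2)^2(-1)^2(3)^2(1)^2(-2)^4 = \frac{12!}{2!\,2!\,2!\,2!\,4!}\cdot 4\cdot 1\cdot 9\cdot 1\cdot 16 = 1247400 \cdot 576 = 718502400.$$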
Books for self-learning math from the ground up I am a CSE graduate currently working as a .NET and Android developer. I feel my poor basics in mathematics are hampering my improvement as a programmer. I want to achieve a sound understanding of the basics of mathematics so that I can pick up a book on 3D graphics programming/algorithms and not be intimidated by all the maths (linear algebra, discrete mathematics, etc.) used in it. So, what path/resource/book should one follow to create a good foundation in mathematics? Thank you P.S. I have read this but I wanted to know other/better options.
Let me propose a different tack since you have a clear goal. Pick up a book on 3D graphics programming or algorithms, and if you come across something that intimidates you too much to get by on your own, ask about it here. We will be able to direct you to exact references to better understand the material in this way. Conceivably, there might be a little bit of recursion, and that's okay.
$n\times n$ board, non-challenging rooks Consider an $n \times n$ board in which the squares are colored black and white in the usual chequered fashion and with at least one white corner square. (i) In how many ways can $n$ non-challenging rooks be placed on the white squares? (ii) In how many ways can $n$ non-challenging rooks be placed on the black squares? I've tried some combinations with shading a certain square and deleting the row and the column in which it was and then using recurrence, but it doesn't work.
We are looking at all permutations of $\{1,\ldots,n\}$ that are (white squares case) parity-preserving, or (black squares case) parity-reversing. If $n$ is even the black squares case is equivalent to the white squares case (by a vertical reflection for instance), and if $n$ is odd the black squares case has no solutions (the odd numbers $1,3,\ldots,n$ would have to map bijectively to the smaller set of even numbers $2,4,\ldots,n-1$). So it remains to do the white squares case. But there we must permute the odd and the even numbers independently among themselves, for $\lfloor\frac n2\rfloor!\times\lceil\frac n2\rceil!$ solutions.
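A brute-force confirmation for small boards (a sketch; it assumes the colouring in which square $(i,j)$ is white exactly when $i+j$ is even, matching a white corner):

```python
# Count non-attacking rook placements on white / black squares only.
from itertools import permutations
from math import factorial

for n in range(1, 8):
    white = sum(all((i + p[i]) % 2 == 0 for i in range(n))
                for p in permutations(range(n)))
    black = sum(all((i + p[i]) % 2 == 1 for i in range(n))
                for p in permutations(range(n)))
    formula = factorial(n // 2) * factorial((n + 1) // 2)
    print(n, white, black, formula)
```

The `white` column matches the formula for every $n$, while `black` matches it for even $n$ and is $0$ for odd $n$.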
Solve the recurrence $y_{n+1} = 2y_n + n$ for $n\ge 0$ So I have been assigned this problem for my discrete math class and am getting nowhere. The book for the class doesn't really have anything on recurrences and the examples given in class are not helpful at all. I seem to be going in circles with the math. Any help with this problem would be GREATLY appreciated. Solve the recurrence $$y_{n+1} = 2y_n + n$$ for non-negative integer $n$ and initial condition $y_0 = 1\;$ for a) Ordinary generating series b) Exponential generating series c) Telescoping d) Lucky guess + mathematical induction e) Any other method of your choice Thanks in advance!
Using ordinary generating functions $$y_{n+1}=2y_n+n$$ gets transformed into $$\sum_{n=0}^\infty y_{n+1}x^n=2\sum_{n=0}^\infty y_nx^n+\sum_{n=0}^\infty nx^n$$ $$\sum_{n=0}^\infty y_{n+1}x^n=2 y(x)+x\sum_{n=1}^\infty nx^{n-1}$$ $$\sum_{n=0}^\infty y_{n+1}x^n=2 y(x)+x\frac{1}{(1-x)^2}$$ $$\sum_{n=0}^\infty y_{n+1}x^{n+1}=2x y(x)+x^2\frac{1}{(1-x)^2}$$ $$\sum_{n=1}^\infty y_{n}x^{n}=2x y(x)+x^2\frac{1}{(1-x)^2}$$ $$\sum_{n=0}^\infty y_{n}x^{n}=2x y(x)+x^2\frac{1}{(1-x)^2}+y_0$$ $$y(x)=2x y(x)+x^2\frac{1}{(1-x)^2}+y_0$$ $$(1-2x) y(x)=x^2\frac{1}{(1-x)^2}+y_0$$ $$ y(x)=x^2\frac{1}{(1-2x)(1-x)^2}+\frac{y_0}{1-2x}$$ Can you match the coefficients now?
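From here, partial fractions give $y(x) = \frac{2}{1-2x} - \frac{1}{(1-x)^2}$ and hence the closed form $y_n = 2^{n+1}-n-1$ (worked out here as a check, since the answer deliberately stops at coefficient matching); a short verification against the recurrence:

```python
# Verify the closed form y_n = 2^(n+1) - n - 1 against y_{n+1} = 2 y_n + n.
y = 1                                  # y_0 = 1
for n in range(20):
    assert y == 2**(n + 1) - n - 1
    y = 2 * y + n
print("ok")
```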
Show that $\frac{z-1}{\mathrm{Log}(z)}$ is holomorphic off $(-\infty,0]$ Let $f(z)=\frac{z-1}{\mathrm{Log}(z)}$ for $z\neq 1$ and $f(1)=1$. Show that $f$ is holomorphic on $\mathbb{C}\setminus(-\infty,0]$. I know it looks like an easy problem, but I got stuck and need some clarification. The way I see it, I need to show that $f$ is complex differentiable at every point in $\Omega=\mathbb{C}\setminus(-\infty,0]$. So, if we take any $z_{0}$ different from $1$, our function is the quotient of two complex differentiable functions on $\Omega$, so it is complex differentiable (since $\mathrm{Log}(z_{0})\neq0$). Now, if we take $z_{0}$ to be $1$, then if $f$ is continuous at that point, we could use the Cauchy-Riemann equations to check whether $u_{x}(1,0)=v_{y}(1,0)$ and $u_{y}(1,0)=-v_{x}(1,0)$, with $f=u+iv$. My question is: Is this the fastest way to show complex differentiability at $1$? I mean, how do I get to differentiating $u$ and $v$ if I don't know them explicitly? I would also appreciate it if someone could give me some hints on how to compute the limit of $f$ at $1$.
If $g$ is holomorphic on an open set $G$, $a\in G$, and $g(a)=0$, then there is a positive integer $n$ and a holomorphic function $h$ on $G$ such that $g(z)=(z-a)^nh(z)$ for all $z\in G$, and $h(a)\neq 0$. (This $n$ is the multiplicity of the zero of $g$ at $a$.) Since $\mathrm{Log}(1)=0$, and $\mathrm{Log}'(1)=1\neq 0$, there exists a holomorphic function $h$ on $\mathbb C\setminus(-\infty,0]$ such that $\mathrm{Log}(z)=(z-1)h(z)$ for all $z$, and $h(1)\neq 0$. Since $\mathrm{Log}$ has no zeros except at $1$, $h$ has no zeros. Therefore $f=\dfrac{1}{h}$ is holomorphic on $\mathbb C\setminus(-\infty,0]$.
Is this set closed under addition or multiplication or both, and why? $\{-1,0,1\}$ Please give an explanation and also tell me what "closed under addition and multiplication" means. Different definitions are given everywhere.
A set $X$ is closed under addition if $x+y\in X$ for any $x,y\in X$. It is closed under multiplication if $x\times y\in X$ for any $x,y\in X$. Note that $x$ and $y$ may or may not be equal. The set $\{-1,0,1\}$ is closed under multiplication but not addition (if we take usual addition and multiplication between real numbers): for instance $1+1=2\notin\{-1,0,1\}$, whereas any product of two elements is again $-1$, $0$, or $1$. Simply verify the definitions by taking elements from the set two at a time, possibly the same.
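The verification is small enough to do by machine (a trivial sketch):

```python
# Brute-force closure check for S = {-1, 0, 1}.
S = {-1, 0, 1}
print(all(x + y in S for x in S for y in S))   # False: 1 + 1 = 2 escapes S
print(all(x * y in S for x in S for y in S))   # True
```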
Does $a!b!$ always divide $(a+b)!$? Hello, the question is as stated above and was given to us in the context of group theory, specifically under the heading of isomorphisms and products. I would write down what I have tried so far, but I have made very little progress in trying to solve this over the last few hours!
The number of ways to choose $a$ objects out of $a+b$ objects if order matters in the selection is $(a+b)\times(a+b-1)\times\cdots\times((a+b)-(a-1))=\frac{(a+b)!}{b!}$, since there are $a+b$ ways to choose the first object, $a+b-1$ ways to choose the second object from the remaining ones, and so on. However, $a!$ permutations actually correspond to a single combination (where order is immaterial), since $a$ objects can be arranged in $a!$ ways. This means that $\frac{(a+b)!}{b!}=ka!$ for some integer $k$, so that $(a+b)!=ka!b!$.
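A quick empirical check of the divisibility (not a proof, just reassurance):

```python
# a! b! divides (a+b)! for all small a, b.
from math import factorial

for a in range(1, 10):
    for b in range(1, 10):
        assert factorial(a + b) % (factorial(a) * factorial(b)) == 0
print("ok")
```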
Inverse of a diagonal matrix plus a Kronecker product? Given two matrices $X$ and $Y$, it's easy to take the inverse of their Kronecker product: $(X\otimes Y)^{-1} = X^{-1}\otimes Y^{-1}$ Now, suppose we have some diagonal matrix $\Lambda$ (or more generally an easily inverted matrix, or one for which we already know the inverse). Is there a closed-form expression or efficient algorithm for computing $(\Lambda + (X\otimes Y))^{-1}$?
Let $C := \left\{ c_{i,j} \right\}_{i,j=1}^N$ and $A := \left\{ a_{i,j} \right\}_{i,j=1}^n$ be symmetric matrices. The spectral decompositions of the two matrices read $A = O^T D_A O$ and $C = U^T D_C U$ where $D_A := Diag(\lambda_k)_{k=1}^n$ and $D_C := Diag(\mu_k)_{k=1}^N$ and $O \cdot O^T = O^T \cdot O = 1_n$ and $U \cdot U^T = U^T \cdot U = 1_N$. We use equation 5 from the cited paper in order to compute the resolvent $G_{A \otimes C}(z) := \left(z 1_{nN} - A \otimes C\right)^{-1}$. We have: \begin{eqnarray} G_{A \otimes C}(z) &=& \left( O^T \otimes U^T\right) \cdot \left( z 1_{nN} - D_A \otimes D_C \right)^{-1} \cdot \left(O \otimes U \right) \\ &=& \left\{ \sum\limits_{k=1}^n O_{k,i} O_{k,j} U^T \cdot \left( z 1_{N} - \lambda_k D_C\right)^{-1} U \right\}_{i,j=1}^n \\ &=& \sum\limits_{p=0}^\infty \frac{1}{z^{1+p}} A^p \otimes C^p \\ &=& \frac{\sum\limits_{p=0}^{d-1} z^{d-1-p} \sum\limits_{l=d-p}^d (-1)^{d-l} {\mathfrak a}_{d-l} \left(A \otimes C\right)^{p-d+l}}{\sum\limits_{p=0}^d z^{d-p} (-1)^p {\mathfrak a}_p} \end{eqnarray} where $d := nN$ and $\det( z 1_{nN} - A \otimes C) := \sum\limits_{l=0}^{d} (-1)^l {\mathfrak a}_l z^{d-l}$. The first two lines from the top are straightforward. In the third line we expanded the inverse matrix in a (Neumann) series, valid for $|z|$ large, and finally in the fourth line we summed up the infinite series using the Cayley-Hamilton theorem.
Comparing the values of two definite integrals ($x^n\ln x$) I need to compare these two integrals: $$ (1) \int_{a}^{b}x^n \ln x\, dx \qquad (2) \int_{a}^{b}x^{n+1} \ln x\, dx $$ for the following values of $[a, b]$: (A) $[1, 2]$ for both integrals, (B) $[0.5, 1]$ for both integrals and (C) $[0.5, 1]$ for (1) and $[0.3, 1]$ for (2). What would be the most efficient way to solve this problem? Should I compare $x^n\ln x$ and $x^{n+1}\ln x$ (and the slope of both functions for a given range) or solve the integrals?
You can give an elegant answer without touching the integrals, using integral properties. It is known that if $f(x)$ and $g(x)$ are Riemann integrable functions over the closed interval $[a,b]$ such that $f(x) \geq g(x)$, then $\int_{a}^{b} f(x) dx \geq \int_{a}^{b} g(x) dx$. The same thing goes for $>$ and the opposites. So you just need to know where your $f(x)$, namely $x^{n}\ln(x)$, is bigger or smaller than $x^{n+1}\ln(x)$. Once you know the regions where some order condition holds ($f$ is bigger/smaller/equal than $g$), you'll know the relation between both integrals directly from the properties. For the (C) part of the question, I'd check whether the functions are monotone in the given intervals and whether one function is always greater than the other on its corresponding interval, and I would play around with some tricky values of $n$. Only if I couldn't get any insight from properties or theorems would I try to compute the integrals.
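A numeric spot-check of the three cases (a sketch; it assumes scipy is available, and $n=3$ is an arbitrary choice):

```python
# Compare the two integrals numerically for the intervals in (A), (B), (C).
from math import log
from scipy.integrate import quad

n = 3
f = lambda x: x**n * log(x)
g = lambda x: x**(n + 1) * log(x)
print(quad(f, 1, 2)[0],   quad(g, 1, 2)[0])    # (A): both on [1, 2]
print(quad(f, 0.5, 1)[0], quad(g, 0.5, 1)[0])  # (B): both on [0.5, 1]
print(quad(f, 0.5, 1)[0], quad(g, 0.3, 1)[0])  # (C): [0.5, 1] vs [0.3, 1]
```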
If the absolute value of a function is Riemann integrable, then is the function itself integrable? I am trying to check the converses of a few theorems. I know that if $g$ is integrable then $|g|$ is integrable. However, if $|g|$ is Riemann integrable, is $g$ Riemann integrable? I know that if $g$ is integrable then $g^2$ is integrable. However, is the converse true? I have a hunch that they aren't true, but am failing to devise the counterexamples.
Let $f(x)=1$ when $x$ is rational, $-1$ when $x$ is irrational, on the interval, say, $[0,1]$. Then $|f|\equiv 1$ and $f^2\equiv 1$ are both constant, hence Riemann integrable; but $f$ itself is a Dirichlet-type function: every upper Riemann sum is $1$ and every lower sum is $-1$, so $f$ is not Riemann integrable. This single example refutes both converses.
Plotting the solution for $y=x^2$ and $x^2 + y^2 = a$ Consider the system $$y=x^2$$ and $$x^2 + y^2 = a $$ for $x>0$, $y>0$, $a>0$. Solving the equations gives me $y+y^2 = a$, and ultimately $$y = \frac {-1 + \sqrt {4a+1}} {2} $$ (I rejected $\frac {-1 - \sqrt {4a+1}} {2} $ since $y>0$). The next part is to plot on the $x$-$y$ plane for different values of $a$. Is plotting the graph of $y = x^2$ insufficient?
Yes, it is insufficient. You should notice that this equation is "special:" $$x^2 + y^2 = a$$ This is the graph of a circle, radius $\sqrt{a}$. So, your graph should contain both the parabola and the part of the circle in the region in question. Here's a link to a graph from Wolfram Alpha which may help give some intuition. The darkest shaded region that is there is the region of interest.
Given an $m$ how to find $k$ such that $k m + 1$ is a perfect square Any way other than trying linearly for all values of $k$.
Suppose $km+1=n^2$ for some $n$, and write $n=am+b$ with $a,b\in\mathbb{Z}$, $0\le b<m$. Then $$k=\frac{n^2-1}{m}=a^2m+2ab+\frac{b^2-1}{m}$$ So $k$ is an integer if and only if $b^2\equiv 1 \pmod{m}$. In other words: solve the congruence $n^2\equiv 1\pmod m$ for the residue $b$, and every $n\equiv b\pmod m$ yields a valid $k=(n^2-1)/m$. For example, if $b=\pm 1$ we get $k=a^2m\pm 2a$ and $km+1=(am\pm1)^2$.
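A small enumeration sketch built on this observation (the choice $m=12$ and the cutoff on $a$ are arbitrary):

```python
# Generate k with k*m + 1 a perfect square: scan residues b with
# b^2 ≡ 1 (mod m), then take n = a*m + b and k = (n^2 - 1) / m.
from math import isqrt

def square_ks(m, a_max=3):
    for b in range(m):
        if (b * b - 1) % m == 0:
            for a in range(a_max + 1):
                n = a * m + b
                if n > 1:                 # skip the trivial n = 0, 1
                    yield (n * n - 1) // m

m = 12
for k in sorted(set(square_ks(m))):
    assert isqrt(k * m + 1) ** 2 == k * m + 1
    print(k, k * m + 1)
```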
Intersection of Sylow subgroups I am in need of an example of a group $G$, a subgroup $A$ which is not normal in $G$, and a Sylow $p$-subgroup $B$ of $G$ such that $A \cap B$ is a Sylow $p$-subgroup of $A$. A detailed solution will be helpful.
If you want to construct an example which is non-trivial under several points of view, that is $A$ and $B$ and $G$ all distinct, and $A$ is not a $p$-group, you may find it as follows. Find a finite group $G$ which has a non-normal subgroup $A$ which is not a $p$-group, but whose order is divisible by $p$. Let $S$ be a Sylow $p$-subgroup of $A$. Choose $B$ to be a Sylow $p$-subgroup of $G$ containing $S$. (Such a $B$ exists by Sylow's theorems.) For instance, take $p = 2$, $G = S_{4}$, $A = \langle (123), (13) \rangle$ of order 6, $S = \langle (13) \rangle$, $B = \langle (1234), (13) \rangle$. PS A simpler, but somewhat trivial example would be $G = B = \langle (1234), (13) \rangle$, $A = \langle (13) \rangle$.
For which $a$ does the equation $f(z) = f(az)$ have a non-constant solution $f$ For which $a \in \mathbb{C} \setminus \{0,1\}$ does the equation $f(z) = f(az)$ have a non-constant solution $f$, with $f$ analytic in a neighborhood of $0$? My attempt: First, we can see that any such solution must satisfy $f(z)=f(a^kz)$ for all $k \in \mathbb{N}$. If $|a|<1$: the sequence $z_{k} = a^k$ converges to $0$, which is an accumulation point, and $f(z_i)=f(z_j)$ for all $i, j\in \mathbb{N}$. Thus $f$ must be constant. If $|a|=1$: for all $a \neq 1$, $f$ must be constant on any circle around $0$, so again $f$ must be constant. My questions are: am I correct in my conclusions? Also, I'm stuck in the case where $|a|>1$. Any ideas? Thanks
Thank you all very much. For completeness, I will write here a sketch of the solution: For $|a|=1$ a non-constant solution exists precisely when $a$ is a root of unity: if $a^k=1$ we can take $f(z)=z^k$. (If $a$ is not a root of unity, the points $a^kz$ are dense in the circle $|w|=|z|$, so $f$ is constant on every circle around $0$ and hence constant.) For $|a|>1$, we can notice that actually $f(z)=f(a^kz)$ for all $k \in \mathbb{Z}$, so applying the argument with $a^{-1}$ in place of $a$ reduces to the case $|a|<1$: $f$ must be constant.
Looking for a simple proof that groups of order $2p$ are up to isomorphism $\Bbb{Z}_{2p}$ and $D_p$ for prime $p>2$. I'm looking for a simple proof that up to isomorphism every group of order $2p$ ($p$ prime greater than two) is either $\mathbb{Z}_{2p}$ or $D_{p}$ (the Dihedral group of order $2p$). I should note that by simple I mean short and elegant and not necessarily elementary. So feel free to use tools like Sylow Theorems, Cauchy Theorem and similar stuff. Thanks a lot!
Use Cauchy's theorem - Let $G$ be a finite group and $p$ be a prime. If $p$ divides the order of $G$, then $G$ has an element of order $p$. Then you have an element $x\in G$ of order $2$ and another element $y\in G$ of order $p$. Since $\langle y\rangle$ has index $2$ in $G$, it is normal, so $xyx^{-1}=y^k$ for some $k$; conjugating twice gives $y=y^{k^2}$, so $k^2\equiv 1\pmod p$ and hence $k\equiv\pm 1\pmod p$. Case 1: $xyx^{-1}=y^{-1}$. Since $x\notin\langle y\rangle$ (its order $2$ does not divide $p$), we get $G=\langle x,y\rangle$ with the dihedral relations $x^2=y^p=e$, $xyx^{-1}=y^{-1}$, so $G\cong D_p$. Case 2: $xyx^{-1}=y$, i.e. $x$ and $y$ commute. Now $xy$ has order $2p$: $(xy)^2 = x^2y^2 = y^2 \neq e$, hence $\mathrm{ord}(xy) \nmid 2$; $(xy)^p = x^py^p = x \neq e$ (using that $p$ is odd, so $x^p=x$), hence $\mathrm{ord}(xy) \nmid p$; and $(xy)^{2p} = e$, so $\mathrm{ord}(xy) \mid 2p$. A divisor of $2p$ dividing neither $2$ nor $p$ must be $2p$ itself, so $\mathrm{ord}(xy)=2p$ and $G=\langle xy\rangle\cong\mathbb{Z}_{2p}$.
Can there be a well-defined set with no membership criteria? Prime numbers are a well-defined set with specific membership criteria. Can the same be said about "numbers"? Aren't numbers (that is, all numbers) a well defined set but without membership criteria? Anybody can say, given a particular object, whether that belongs in the set of numbers or not. But it may not be possible to give any criteria for this inclusion. We may want to say that the set of all numbers has a criterion and that is that only numbers shall get into the set and all non-numbers shall stay out of it. But then, this is merely a definition and not a criterion for inclusion. Therefore my question: Can there be a well-defined set with no membership criteria?
A set must have membership criteria. Cantor defined sets by (taken from Wikipedia): A set is a gathering together into a whole of definite, distinct objects of our perception [Anschauung] and of our thought - which are called elements of the set. As the elements of a set are definite, we can describe them in some way - and such a description is a membership criterion. Funny things can still happen: take a look at the Axiom of Choice and at Russell's Paradox, the latter showing why Cantor's definition isn't enough and ZFC is needed (I sadly can't link to it now). In your example: the set of all numbers is the set containing all objects that we characterize as numbers. This quickly gets a bit meta-mathematical ("What is a number?" - "two" and "2" are just the representatives / "shadows" of the thing we call two); you should take a look at how the natural numbers are defined.
Show that there are no nonzero solutions to the equation $x^2=3y^2+3z^2$ I am asked to show that there are no nonzero integer solutions to the following equation: $x^2=3y^2+3z^2$. I think that maybe infinite descent is the key. So I started by taking the right hand side modulo $3$, which gives me zero. Meaning that $x^2$ must be $0$ modulo $3$ as well, and so I can write $x^2=3k$ for some integer $k$ with $(k,3)=1$. I then divided by $3$ and I am now left with $k=y^2+z^2$. Now I know that an integer can be written as a sum of two squares if and only if each prime of the form $4k+3$ in its prime factorization has an even power. But yet I am stuck. If anyone can help it would be appreciated.
Assume $\,x,y,z\,$ have no common factor. Now let us work modulo $\,4\,$: every square is either $\,0\,$ or $\,1\,$, and $$3y^2+3z^2\equiv-(y^2+z^2)\equiv\begin{cases}0&\text{if }y,z\text{ are both even}\\ -1\equiv 3&\text{if exactly one of }y,z\text{ is odd}\\ -2\equiv 2&\text{if }y,z\text{ are both odd}\end{cases}\pmod 4$$ Since $x^2\equiv 0$ or $1\pmod 4$, the only possibility is the first one, and thus also $\,x^2\equiv 0\,$, so $x$ is even. But then all of $\,x,y,z\,$ are even, contradicting the assumption.
Holomorphic functions on the unit disc Let $f,g$ be holomorphic on $\mathbb{D}:=\lbrace z\in\mathbb{C}:|z|<1\rbrace$, $f\neq0,g\neq0$, such that $$\frac{f^{\prime}}{f}(\frac{1}{n})=\frac{g^{\prime}}{g}(\frac{1}{n}) $$ for all natural $n\geq1$. Does it imply that $f=Cg$, where $C$ is some constant? Let $A:=\lbrace\frac{1}{n}:n\geq1\rbrace$ and $h:=\frac{f^{\prime}}{f}-\frac{g^{\prime}}{g}$. Now, $h$ is holomorphic on $\mathbb{D}$ and vanishes on a subset of $\mathbb{D}$ which has a limit point. Thus $h=0$, so $\frac{f^{\prime}}{f}=\frac{g^{\prime}}{g}$ on $\mathbb{D}$. Could someone help with the next steps? Or maybe $f$ doesn't have to be of the form described above?
Notice that your last statement is equivalent to $f'g-g'f=0$, since $f,g \neq 0$. Now there is no harm in dividing that expression by $g^2$. You get $\frac{f'g-g'f}{g^2}=0$. So $(f/g)'=0$, and your result follows: $f/g$ is some constant $C$, i.e. $f=Cg$.
Is there a general formula for the antiderivative of rational functions? Some antiderivatives of rational functions involve inverse trigonometric functions, and some involve logarithms. But inverse trig functions can be expressed in terms of complex logarithms. So is there a general formula for the antiderivative of any rational function that uses complex logarithms to unite the two concepts?
Write the rational function as $$f(z) = \dfrac{p(z)}{q(z)} = \dfrac{p(z)}{\prod_{j=1}^n (z - r_j)}$$ where $r_j$ are the roots of the denominator, and $p(z)$ is a polynomial. I'll assume $p$ has degree less than $n$ and the roots $r_j$ are all distinct. Then the partial fraction decomposition of $f(z)$ is $$ f(z) = \sum_{j=1}^n \frac{p(r_j)}{q'(r_j)(z - r_j)}$$ where $p(r_j)/q'(r_j)$ is the residue of $f(z)$ at $r_j$. An antiderivative is $$ \int f(z)\ dz = \sum_{j=1}^n \frac{p(r_j)}{q'(r_j)} \log(z -r_j)$$
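A symbolic sanity check of the residue formula (a sketch assuming sympy; the example $p(z)=z+1$, $q(z)=z^3+z$ has the distinct roots $0,\pm i$):

```python
# Differentiating the sum of residue-weighted logs recovers p/q.
import sympy as sp

z = sp.symbols('z')
p = z + 1
q = z**3 + z
F = sum(p.subs(z, r) / q.diff(z).subs(z, r) * sp.log(z - r)
        for r in sp.roots(q))
print(sp.simplify(F.diff(z) - p / q))   # prints 0
```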
Squares of two positive definite matrices are equal, then they are equal I have read that if $P, Q$ are two positive definite matrices such that $P^2=Q^2$, then $P=Q$. I don't know why. Can someone help me? Thanks for any pointers.
It all boils down to this: Proposition. Suppose $A$ is an $n\times n$ positive definite matrix. Then $A^2$ has an eigenbasis. Furthermore, given any eigenbasis $\{v_1, \ldots, v_n\}$ of $A^2$ such that for each $i$, $A^2v_i=\lambda_iv_i$ for some $\lambda_i>0$, we must have $Av_i=\sqrt{\lambda_i}v_i$. I will leave the proof of this proposition to you. Now, suppose $\{v_1,\ldots,v_n\}$ is an eigenbasis for $P^2=Q^2$ with $P^2v_i=Q^2v_i=\lambda_iv_i$. By the above proposition, we must also have $Pv_i=Qv_i=\sqrt{\lambda_i}v_i$. Since the mappings $x\mapsto Px$ and $x\mapsto Qx$ agree on a basis of the underlying vector space, $P$ must be equal to $Q$.
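A numerical illustration of the uniqueness (a sketch with numpy; the random seed and sizes are arbitrary):

```python
# Recover P from P^2 via the spectral decomposition: the positive
# definite square root of P^2 is unique, so it must equal P.
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
P = B @ B.T + 4 * np.eye(4)             # a positive definite matrix
lam, V = np.linalg.eigh(P @ P)          # eigendecomposition of P^2
Q = V @ np.diag(np.sqrt(lam)) @ V.T     # PD square root of P^2
print(np.allclose(P, Q))                # True
```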
$\gcd(m,n) = 1$, $a \equiv 1 \pmod m$ and $a \equiv 1 \pmod n$ implies $a \equiv 1 \pmod{mn}$ I have $m$ and $n$ which are relatively prime to one another, and $a$ is relatively prime to $mn$; after a lot of tinkering with my problem I came to this: $a \equiv 1 \pmod m$ and $a \equiv 1 \pmod n$. Why is it safe to say that $a \equiv 1 \pmod {mn}$?
It looks as if you are asking the following. Suppose that $m$ and $n$ are relatively prime. Show that if $a\equiv 1\pmod{m}$ and $a\equiv 1\pmod{n}$, then $a\equiv 1\pmod{mn}$. So we know that $m$ divides $a-1$, and that $n$ divides $a-1$. We want to show that $mn$ divides $a-1$. Let $a-1=mk$. Since $n$ divides $a-1$, it follows that $n$ divides $mk$. But $m$ and $n$ are relatively prime, and therefore $n$ divides $k$. so $k=nl$ for some $l$, and therefore $a-1=mnl$. Remark: $1.$ There are many ways to show that if $m$ and $n$ are relatively prime, and $n$ divides $mk$, then $n$ divides $k$. One of them is to use Bezout's Theorem: If $m$ and $n$ are relatively prime, there exist integers $x$ and $y$ such that $mx+ny=1$. Multiply through by $k$. We get $mkx+nky=k$. By assumption, $n$ divides $mk$, so $n$ divides $mkx$. Clearly, $n$ divides $nky$. So $n$ divides $mkx+nky$, that is, $n$ divides $k$. $2.$ Note that there was nothing special about $1$. Let $m$ and $n$ be relatively prime. If $a\equiv c\pmod{m}$ and $a\equiv c\pmod{n}$, then $a\equiv c\pmod{mn}$.
Changing $(1-\cos(x))/x$ to avoid cancellation error for $0$ and $\pi$? I have to change this formula: $$\frac{1-\cos(x)}{x}$$ so that I can avoid the cancellation error. I can do this for $0$ but not for $\pi$. So I get: $$\frac{\sin^2(x)}{x(1+\cos(x))}$$ which for $x$ close to $0$ gets rid of the cancellation error. But I don't know how to fix the error for $x$ close to $\pi$. I just want to know if I should be using trigonometric identities again; I've tried some but nothing works. Any suggestions or hints? Edit: For $\pi$ I meant that $\sin(\pi)$ is $0$, so the rewritten form wouldn't give me the correct value $(1-\cos\pi)/\pi=2/\pi$; it degenerates to $0/0$ there. That's the error I meant for $\pi$. Sorry for the confusion.
Another possibility that avoids cancellation error at both places is $$ \frac{2 \sin^2(x/2)}{x} $$ which follows from the identity $1-\cos x = 2\sin^2(x/2)$.
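A numeric sketch of the difference between the three forms (double precision; the sample points are arbitrary):

```python
# Compare the naive form, the form rewritten for 0, and the half-angle form.
import math

def naive(x):   return (1 - math.cos(x)) / x                      # bad near 0
def near0(x):   return math.sin(x)**2 / (x * (1 + math.cos(x)))   # bad near pi
def stable(x):  return 2 * math.sin(x / 2)**2 / x                 # fine at both

for x in (1e-9, math.pi - 1e-7):
    print(x, naive(x), near0(x), stable(x))
```

Near $0$ the naive form returns exactly $0.0$ (the true value is about $5\times 10^{-10}$), and near $\pi$ the middle form loses digits because $1+\cos x$ is itself computed by cancellation; the half-angle form is accurate at both points.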
Why $f(x + x^3, y + y^3) \in L^1(\mathbb{R}^2)$, when $f(x, y) \in L^2(\mathbb{R}^2)$? How can I show that $f(x + x^3, y + y^3) \in L^1(\mathbb{R}^2)$ when $f(x, y) \in L^2(\mathbb{R}^2)$? Can someone help me? Thank you!
The statement that $f(x, y) \in L^2(\mathbb{R}^2)$ is the same as the statement that $(f(x + x^3, y + y^3))^2(1 + 3x^2)(1 + 3y^2)$ is in $L^1(\mathbb{R}^2)$, which can be seen after a change of variables to $(u,v) = (x + x^3, y + y^3)$ for the latter. Inspired thus, we write $$\int_{\mathbb{R}^2}|f(x + x^3, y + y^3)| = $$ $$\int_{\mathbb{R}^2}\bigg(\big|f(x + x^3, y + y^3)\big|\sqrt{(1 + 3x^2)(1 + 3y^2)}\bigg){1 \over \sqrt{(1 + 3x^2)(1 + 3y^2)} }\,dx\,dy$$ By Cauchy-Schwarz this is at most the square root of $$\int_{\mathbb{R}^2}(f(x + x^3, y + y^3))^2(1 + 3x^2)(1 + 3y^2)\,dx\,dy\int_{\mathbb{R}^2}{1 \over (1 + 3x^2)(1 + 3y^2) }\,dx\,dy$$ The first integral is finite as described above, and the second one can directly be computed to something finite. So the original integral is finite.
If $f(x) = x-5$ and $g(x) = x^2 -5$, what is $u(x)$ if $(u \circ f)(x) = g(x)$? Let $f(x) = x-5$, $g(x) = x^2 -5$. Find $u(x)$ if $(u \circ f)(x) = g(x)$. I know how to do this for $(f \circ u)(x)$, because there the outer function $f(x)$ is given. But here the outer function $u(x)$ is not given. Is there any way I can reverse the composition to get $u(x)$ alone?
I think I figured out what my professor did now . . . $(u \circ f)(x) = g(x)$ $(u \circ f)(f^{-1} (x)) = g( f^{-1}(x)) $ $\big((u \circ f) \circ f^{-1}\big)(x) = (g \circ f^{-1})(x) $ $\big(u \circ (f \circ f^{-1})\big)(x) = (g \circ f^{-1})(x) $ $u(x) = g(f^{-1}(x))$ $u(x) = g(x+5) = (x+5)^2 - 5 = x^2 + 10x + 20$ (Check: $u(f(x)) = u(x-5) = ((x-5)+5)^2 - 5 = x^2 - 5 = g(x)$.) I think this is right. Please correct me if I'm wrong.
Definition of left (right) exact functors Let $P,Q$ be abelian categories and $F:P\to Q$ be an additive functor. Wikipedia states two definitions of left exact functors (dually for right exact): 1. $F$ is left exact if $0\to A\to B\to C\to 0$ is exact implies $0\to F(A)\to F(B)\to F(C)$ is exact. 2. $F$ is left exact if $0\to A\to B\to C$ is exact implies $0\to F(A)\to F(B)\to F(C)$ is exact. Moreover, it states that these two are equivalent definitions. I'm quite new at this topic so I'm not sure if this is immediately clear or not. Surely, 2. $\implies$ 1., being the more general case. But I don't see how to even approach the other direction; is this merely tautological?
Assume 1. holds. First observe that $F$ preserves monomorphisms: If $i : A \to B$ is a monomorphism, then $0 \to A \xrightarrow{i} B \to \mathrm{coker}(i) \to 0$ is exact, hence also $0 \to F(A) \to F(B) \to F(\mathrm{coker}(i))$ is exact. In particular $F(i)$ is a monomorphism. Now if $0 \to A \xrightarrow{i} B \xrightarrow{f} C$ is exact, then $0 \to A \xrightarrow{i} B \xrightarrow{f} \mathrm{im}(f) \to 0$ is exact, hence by assumption $0 \to F(A) \to F(B) \to F(\mathrm{im}(f))$ is exact. Since $F(\mathrm{im}(f)) \to F(C)$ is a monomorphism, it follows that also $0 \to F(A) \to F(B) \to F(C)$ is exact.
Finding $n$ such that $\frac{3^n}{n!} \leq 10^{-6}$ This question actually came out of another question: in some other post I saw a reference, and following it I found this problem, for $n>0$. Solve for $n$ explicitly without a calculator: $$\frac{3^n}{n!}\le10^{-6}$$ I would appreciate a hint rather than an explicit solution. Thank you.
I would use Stirling's approximation $n!\approx \frac {n^n}{e^n}\sqrt{2 \pi n}$ to get $\left( \frac {3e}n\right)^n \frac{1}{\sqrt{2 \pi n}} \lt 10^{-6}$. Then for a first cut, ignore the square-root factor and set $3e \approx 8$, so we need $\left( \frac 8n \right)^n \lt 10^{-6}$. Now take the base-$10$ log and get $n(\log 8 - \log n) \lt -6$. Knowing that $\log 2 \approx 0.3$, it looks like $16$ will not quite work, as this becomes $16(0.9 - 1.2) = -4.8 > -6$. Each increment of $n$ lowers the quantity by roughly a factor of $5$ around here, or about $0.7$ in the log, and we need a couple of those, so I would look for $18$. Added: the square-root factor I ignored is worth about a factor of $10$, which is what makes $17$ good enough. Alpha confirms that $17$ works.
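With a machine the threshold can be checked exactly in integer arithmetic (clearing denominators: $3^n/n!\le 10^{-6} \iff 3^n\cdot 10^6 \le n!$):

```python
# Find the first n with 3^n / n! <= 1e-6, using exact integers.
from math import factorial

n = 1
while 3**n * 10**6 > factorial(n):
    n += 1
print(n)   # 17
```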
Combinatorics Statistics Question The problem I am working on is: An academic department with five faculty members - Anderson, Box, Cox, Cramer, and Fisher - must select two of its members to serve on a personnel review committee. Because the work will be time-consuming, no one is anxious to serve, so it is decided that the representatives will be selected by putting the names on identical pieces of paper and then randomly selecting two. a. What is the probability that both Anderson and Box will be selected? [Hint: List the equally likely outcomes.] b. What is the probability that at least one of the two members whose name begins with C is selected? c. If the five faculty members have taught for 3, 6, 7, 10, and 14 years, respectively, at the university, what is the probability that the two chosen representatives have a total of at least 15 years' teaching experience there? For a), I figured that since the probability of Anderson being chosen is $1/5$ and of Box being chosen is $1/5$, the answer would simply be $2/5$. It isn't, though. It is $0.1$. How did they get that answer? I might need help with parts b) and c) as well.
The simplest way to understand this problem (I just did it myself): A, B, Co, Cr, and F exist. Pick $1$ of the $5$ at random: probability $1/5$. Pick another at random; this time around there are only $4$ choices, so probability $1/4$. Multiply the two values: $1/20$. BUT that assumes A is picked first and then B; you also have to count the possibility that B is picked first. So you add $(1/20) + (1/20) = 2/20 = 1/10 = 0.1$.
Limit of $s_n = \int\limits_0^1 \frac{nx^{n-1}}{1+x} dx$ as $n \to \infty$ Let $s_n$ be the sequence defined below for $n \geq 1$. Find $\lim\limits_{n \to \infty} s_n$. \begin{align} s_n = \int\limits_0^1 \frac{nx^{n-1}}{1+x} dx \end{align} I have written a solution of my own, but I would like to know whether it is completely correct, and I would also like it if people posted alternative solutions.
Notice (1) $\frac{s_n}{n} + \frac{s_{n+1}}{n+1} = \int_0^1 x^{n-1} dx = \frac{1}{n} \implies s_n + s_{n+1} = 1 + \frac{s_{n+1}}{n+1}$. (2) $s_n = n\int_0^1 \frac{x^{n-1}}{1+x} dx < n\int_0^1 x^{n-1} dx = 1$, and likewise $s_n > \frac{n}{2}\int_0^1 x^{n-1} dx = \frac12$ since $\frac{1}{1+x}\geq\frac12$ on $[0,1]$. (3) Integrating by parts (the boundary term vanishes), $s_{n+1} - s_n = \int_0^1 \frac{d (x^{n+1}-x^n)}{1+x} = \int_0^1 x^n \frac{x-1}{(1+x)^2} dx < 0$, so the sequence is decreasing. (2+3) $\implies s = \lim_{n\to\infty} s_n$ exists, and (1+2) $\implies s+s = 1 + 0 \implies s = \frac{1}{2}$. In any event, $s_n$ can be evaluated exactly as $n (\psi(n) - \psi(\frac{n}{2}) - \ln{2})$ where $\psi(x)$ is the digamma function. Since $\psi(x) \approx \ln(x) - \frac{1}{2x} - \frac{1}{12x^2} + \frac{1}{120x^4} + \ldots $ as $x \to \infty$, we know: $$s_n \approx \frac{1}{2} + \frac{1}{4 n} - \frac{1}{8 n^3} + \ldots$$
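A quick numeric look at the monotone approach to $\frac12$ (a sketch with mpmath; the values also match the $\frac12+\frac1{4n}$ asymptotic):

```python
# s_n decreases towards 1/2, roughly like 1/2 + 1/(4n).
from mpmath import mp, quad

mp.dps = 15
for n in (1, 2, 5, 10, 100):
    s = quad(lambda x: n * x**(n - 1) / (1 + x), [0, 1])
    print(n, s, 0.5 + 1 / (4 * n))
```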
Fermat's Little Theorem Transformation I am reading a document which states: By Fermat's Little Theorem, $a^{p-1}\bmod p = 1$. Therefore, $a^{b^c}\bmod p = a^{b^c\bmod (p - 1)} \bmod p$ For the life of me, I cannot figure out the logic of that conclusion. Would someone mind explaining it? I will be forever in your debt. Thank you!
The key point is that if $\rm\ a^n = 1\ $ then exponents on $\rm\ a\ $ may be reduced mod $\rm\,n,\,$ viz. Hint $\rm\quad a^n = 1\ \,\Rightarrow\,\ a^i = a^j\ \ { if} \ \ i\equiv j\,\ (mod\ n)\:$ Proof $\rm\ \ i = j\!+\!nk\:$ $\Rightarrow$ $\rm\:a^i = a^{j+nk} = a^j (a^n)^k = a^j 1^k = a^j\ \ $ QED Yours is the special case $\rm\:0\ne a\in \Bbb Z/p,\:$ so $\rm\:a^{p-1}\! = 1,\:$ so exponents may be reduced mod $\rm\:p\!-\!1.$ Remark $\ $ You should check that proof works ok if $\rm\,k < 0\:$ (hint: $\rm\: a^n = 1\:\Rightarrow\: a\,$ is invertible, so negative powers of $\rm\,a\,$ are well-defined). The innate structure will become clearer if you study university algebra, where you will learn about cyclic groups, orders of elements, and order ideals, and modules.
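A concrete check of the exponent-tower reduction (the specific numbers below are just for illustration; three-argument `pow` is modular exponentiation):

```python
# a^(b^c) ≡ a^(b^c mod (p-1)) (mod p) for prime p and gcd(a, p) = 1.
p, a, b, c = 251, 20, 123, 45
reduced = pow(b, c, p - 1)                  # b^c mod (p-1)
assert pow(a, b**c, p) == pow(a, reduced, p)
print("ok")
```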
Proving the sum of the first $n$ natural numbers by induction I am currently studying proof by induction but I am faced with a problem. I need to prove by induction that $$1+2+3+\ldots+n=\frac{1}{2}n(n+1)$$ for all $n \geq 1$. Any help on how to solve this would be appreciated. This is what I have done so far. Show truth for $n = 1$: Left Hand Side = 1, Right Hand Side = $\frac{1}{2} (1) (1+1) = 1$. Suppose truth for $n = k$: $$1 + 2 + 3 + ... + k = \frac{1}{2} k(k+1)$$ Proof that the equation is true for $n = k + 1$: $$1 + 2 + 3 + ... + k + (k + 1)$$ which is equal to $$\frac{1}{2} k (k + 1) + (k + 1)$$ This is where I'm stuck, I don't know what else to do. The answer should be: $$\frac{1}{2} (k+1) (k+1+1)$$ which is equal to: $$\frac{1}{2} (k+1) (k+2)$$ Right? By the way sorry about the formatting, I'm still new.
Think of pairing up the numbers in the series: the 1st and last ($1 + n$), the 2nd and the next-to-last ($2 + (n - 1)$), and so on, and think about what happens in the cases where $n$ is odd and $n$ is even. If it's even you end up with $n/2$ pairs whose sum is $n + 1$ (or $\frac12 n(n+1)$ total). If it's odd you end up with $(n-1)/2$ pairs whose sum is $n + 1$ and one unpaired middle element equal to $\frac{n-1}{2} + 1$, giving $\frac12 (n - 1)(n + 1) + \frac{n - 1}{2} + 1$, which comes out the same with a little algebra.
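For the record, here is the step the question was actually stuck on, done by factoring out $(k+1)$ (this completes the induction directly, independently of the pairing argument above):

$$\frac{1}{2}k(k+1) + (k+1) = (k+1)\left(\frac{k}{2}+1\right) = \frac{1}{2}(k+1)(k+2).$$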
Proving the equality in weak maximum principle of elliptic problems This one is probably simple, but I just can't prove the result. Suppose that $\mathop {\max }\limits_{x \in \overline \Omega } u\left( x \right) \leqslant \mathop {\max }\limits_{x \in \partial \Omega } {u^ + }\left( x \right)$ and $\mathop {\min }\limits_{x \in \overline \Omega } u\left( x \right) \geqslant \mathop {\min }\limits_{x \in \partial \Omega } {u^ - }\left( x \right)$, where $\overline \Omega $ is closure of $\Omega $ and ${\partial \Omega }$ is boundary of $\Omega $, ${u^ + } = \max \left\{ {u,0} \right\}$ and ${u^ - } = \min \left\{ {u,0} \right\}$ (notice that $\left| u \right| = {u^ + } - {u^ - }$). How to show that $\mathop {\max }\limits_{x \in \overline \Omega } \left| {u\left( x \right)} \right| = \mathop {\max }\limits_{x \in \partial \Omega } \left| {u\left( x \right)} \right|$? So far, I've got $\mathop {\max }\limits_{x \in \overline \Omega } \left| {u\left( x \right)} \right| = \mathop {\max }\limits_{x \in \overline \Omega } \left( {{u^ + }\left( x \right) - {u^ - }\left( x \right)} \right) \leqslant \mathop {\max }\limits_{x \in \overline \Omega } {u^ + }\left( x \right) + \mathop {\max }\limits_{x \in \overline \Omega } \left( { - {u^ - }\left( x \right)} \right) \leqslant \mathop {\max }\limits_{x \in \overline \Omega } {u^ + }\left( x \right) - \mathop {\min }\limits_{x \in \overline \Omega } {u^ - }\left( x \right)$ and $\mathop {\max }\limits_{x \in \partial \Omega } \left| {u\left( x \right)} \right| = \mathop {\max }\limits_{x \in \partial \Omega } \left( {{u^ + }\left( x \right) - {u^ - }\left( x \right)} \right) \leqslant \mathop {\max }\limits_{x \in \partial \Omega } {u^ + }\left( x \right) - \mathop {\min }\limits_{x \in \partial \Omega } {u^ - }\left( x \right)$ Edit: $u \in {C^2}\left( \Omega \right) \cap C\left( {\overline \Omega } \right)$, although I don't see how that helps. $Lu=0$, which gives us $\mathop {\max }\limits_{x \in \overline \Omega } u\left( x \right) \leqslant \mathop {\max }\limits_{x \in \partial \Omega } {u^ + }\left( x \right)$ and $\mathop {\min }\limits_{x \in \overline \Omega } u\left( x \right) \geqslant \mathop {\min }\limits_{x \in \partial \Omega } {u^ - }\left( x \right)$. (Renardy, Rogers, An introduction to partial differential equations, p 103) Edit 2: Come on, this should be super easy, the author didn't even comment on how the equality follows from those two.
$\left. \begin{gathered} \mathop {\max }\limits_{x \in \overline \Omega } u\left( x \right) \leqslant \mathop {\max }\limits_{x \in \partial \Omega } {u^ + }\left( x \right)\mathop \leqslant \limits^{{u^ + } \leqslant \left| u \right|} \mathop {\max }\limits_{x \in \partial \Omega } \left| {u\left( x \right)} \right| \\ \mathop {\max }\limits_{x \in \overline \Omega } \left( { - u\left( x \right)} \right) \leqslant \mathop {\max }\limits_{x \in \partial \Omega } - {u^ - }\left( x \right)\mathop \leqslant \limits^{ - {u^ - } \leqslant \left| u \right|} \mathop {\max }\limits_{x \in \partial \Omega } \left| {u\left( x \right)} \right| \\ \end{gathered} \right\} \Rightarrow \mathop {\max }\limits_{x \in \overline \Omega } \left| {u\left( x \right)} \right| \leqslant \mathop {\max }\limits_{x \in \partial \Omega } \left| {u\left( x \right)} \right|$. On the other hand, $\partial \Omega \subseteq \overline \Omega \Rightarrow \mathop {\max }\limits_{x \in \partial \Omega } \left| {u\left( x \right)} \right| \leqslant \mathop {\max }\limits_{x \in \overline \Omega } \left| {u\left( x \right)} \right|$
Is there a good repository for mathematical folklore knowledge? Among mathematicians there is a lot of folklore knowledge for which it is not obvious how to find original sources. This knowledge circulates orally. An example: among math competition folks, a common conversation is the search for a function over the reals that is infinitely differentiable, with it and all its derivatives vanishing only at $0$. I think $$f(x) := e^{-\frac{1}{x^2}} \text{ for } x \neq 0, \qquad f(0) := 0$$ is an answer to this one, and it is not hard to prove. Is there any collection of such mathematical folklore, with proofs? See also my follow-up question here.
One place that contains a lot of mathematics is the nLab. It is largely centered on higher category theory/homotopy theory but also contains a lot of general stuff. It is certainly not aiming to only contain folklore knowledge but it does contain a lot of it. Wikipedia will also certainly contain folklore knowledge embedded somewhere in the millions of pages of information but perhaps its accuracy is more questionable than what the nLab offers. Various maths dedicated blogs will also contain folklore and anecdotes.
$Q=\sum q_i$ and its differentiation by one of its variables Suppose that $Q = q_1 + \cdots + q_n$. Why is $$\frac{dQ}{dq_i} = \sum_{j=1}^{n}\frac{\partial q_j}{\partial q_i}\,?$$ Is it related to each $q_i$ being independent of the other $q_j$'s?
No, it is because differentiation is linear, ie, if $h=f+g$, then $\frac{d h}{d x} = \frac{d f}{d x} + \frac{d g}{d x}$.
Does $f(x) := {1\over x}\cos({1\over x})$, $x>0$, have the intermediate value property on $[0,\infty)$? Prove or disprove: $f:[0,\infty)\to\mathbb{R}$ where $f(x) := {1\over x}\cos({1\over x})$ for $x>0$ has the intermediate value property on $[0,\infty)$. Attempts: in $\mathbb{R}$, if $f$ is continuous then it has the intermediate value property, and ${1\over x}\cos({1\over x})$ is continuous for $x>0$, so the property holds away from $0$; but the behavior at $0$ doesn't seem to be that straightforward.
The function $f\colon [0,+\infty)\to \mathbb R$ $$ f(x) = \begin{cases} \frac {1}{x} \cos \frac 1 x & \text{if $x>0$}\\\\ 0 & \text {if $x=0$} \end{cases} $$ is not continuous but has the intermediate value property. In fact given two points $a,b \in [0,\infty)$ with $a<b$ you have two possibilities: * *$a>0$. In this case notice that the function is continuous on $[a,b]$ hence it has the property *$a=0$. In this case you should notice that it is possible to find $\epsilon<b$ such that $f(\epsilon)=0$ (see where $\cos(1/x)=0$). Since $f$ is continuous on $[\epsilon,b]$ you can apply the intermediate value theorem there.
What's the difference between $|\nabla f|$ and $\nabla f$? what's the difference between $|\nabla f|$ and $\nabla f$ for example in : $$\nabla\cdot{\nabla f\over|\nabla f|}$$
$|\nabla f|$ is the magnitude of the $\nabla f$ vector. The expression $\frac{\nabla f}{|\nabla f|}$ is thus a unit vector. $\nabla$ (gradient) acts on a scalar to give a vector, and $\nabla\cdot$ (divergence) acts on a vector to give a scalar.
Newton: convergence when calculating $x^2-2=0$ Find $x$ for which $x^2-2=0$ using Newton's algorithm and $x_0 = 1.4$. Then you get $x_{k+1} = x_k + \frac{1}{x_k} - \frac{x_k}{2}$. How do I show how many steps are needed for 100 digits of precision? So I need to find for which $N$ we have $|x_N-\sqrt{2}| \leq 10^{-100}$, and to that end I am asked to show $$|x_k - \sqrt{2}|\leq\delta<0.1 \implies |x_{k+1}-\sqrt{2}|\leq \frac{\delta^2}{2}$$ The required number of steps then follows, but I don't manage to show this implication.
Try to show that $|x_{k+1}-\sqrt2|\leqslant\frac12(x_k-\sqrt2)^2$ hence $|x_k-\sqrt2|\leqslant2\delta_k$ implies that $|x_{k+1}-\sqrt2|\leqslant2\delta_{k+1}$ with $\delta_{k+1}=\delta_k^2$. Then note that $\delta_0\lt10^{-2}$ hence $\delta_k\leqslant10^{-2^{k+1}}$ for every $k$, in particular $2\delta_6\lt2\cdot10^{-128}$ ensures (slightly more than) $100$ accurate digits.
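If you want to see the digit-doubling concretely, here is a small sketch using Python's standard decimal module (the iteration and starting value are the ones from the question; the 120-digit precision cap is my choice):

```python
from decimal import Decimal, getcontext

getcontext().prec = 120            # work with 120 significant digits
x = Decimal("1.4")                 # x_0 from the question
target = Decimal(2).sqrt()         # high-precision reference for sqrt(2)

for k in range(1, 8):
    x = x / 2 + 1 / x              # Newton step: x_{k+1} = x_k/2 + 1/x_k
    print(k, abs(x - target))      # the error roughly squares every step
```

The printed errors fall like $10^{-4}, 10^{-9}, 10^{-18}, \dots$, dropping below $10^{-100}$ by the sixth step, in line with the $\delta_{k+1}=\delta_k^2$ estimate.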
Find $\lim_{x\to 0^+} \ln x\cdot \ln(1-x)$ Find $$\lim_{x\to 0^+} \ln x\cdot \ln(1-x)$$ I've been unable to use the arithmetic rules for infinite limits, as $\ln x$ approaches $-\infty$ as $x\to 0^+$, while $\ln(1-x)$approaches $0$ as $x\to 0^+$, and the arithmetic rules for the multiplication of infinite limits only applies when one of the limits is finite and nonzero. Can anyone point me in the right direction for finding this limit? I've been unable to continue.. (Spoiler: I've checked WolframAlpha and the limit is equal to $0$, but this information hasn't helped me to proceed)
Hint: Another approach which is similar to @Mhenni's is: When $x\to 0$ and we know that $\alpha(x)$ is very small then $$\ln(1+\alpha(x))\sim\alpha(x)$$
Euler graph - a question about the proof I have a question about the proof of this theorem. A graph is Eulerian $\iff$ it is connected and all its vertices have even degrees. My question concerns "$\Leftarrow$" Let $T=(v_0, e_1, v_1, ..., e_m, v_m)$ be a trip in Eulerian graph G=(V, E) where vertices can repeat but edges cannot. Let's consider T of the largest possible length. We prove that (i) $v_0 = v_m$, and (ii) $\left\{ e_i : i = 1, 2, . . . , m\right\} = E$ (but I think I understand everything about this part) Ad (i). If $v_0 \neq v_m$ then the vertex $v_0$ is incident to an odd number of edges of the tour $T$. But since the degree $deg_G(v_0)$ is even, there exists an edge $e \in E(G)$ not contained in T. Hence we could extend $T$ by this edge — a contradiction. What I don't understand here is why $v_0$ is incident to an odd number of edges.
You have a typo: it's when $v_0\ne v_m$ that you can conclude that $v_0$ is incident with an odd number of edges of $T$. It is incident with the edge $e_1$ at the start, and any other edges of $T$ at $v_0$ come in pairs, one arriving just before and one leaving just after each interior visit of the tour, so the total count is odd. But then, as you say, since $\deg_G(v_0)$ is even there is an edge at $v_0$ not used by $T$, so $T$ would not be maximal. This is impossible, and we must have $v_0=v_m$.
Linear algebra eigenvalues and diagonalizable matrix Let $A$ be an $n\times n$ matrix over $\mathbb{C}$. First, I don't understand why $AA^*$ is diagonalizable over $\mathbb{C}$. And why can't $i+1$ be an eigenvalue of $AA^*$? I hope the question is clear enough, without spelling mistakes, and that I used the right expressions.
Any Hermitian matrix is diagonalizable. All eigenvalues of a Hermitian matrix are real. These two facts (that you probably learnt) solve the question: Show your matrix is Hermitian and note that $1+i$ is not real.
Compute $\sum_{k=1}^{n} \frac 1 {k(k + 1)} $ More specifically, I'm supposed to compute $\displaystyle\sum_{k=1}^{n} \frac 1 {k(k + 1)} $ by using the equality $\frac 1 {k(k + 1)} = \frac 1 k - \frac 1 {k + 1}$ and the problem before, which just says that $\displaystyle\sum_{j=1}^{n} (a_j - a_{j - 1}) = a_n - a_0$. I can add up the sum for any $n$ but I'm not sure what they mean by "compute". Thanks!
It means: find how much it sums to. In fact, you have already said everything you need to solve the problem. You only have to put 1 and 1 together to obtain 2.
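Spelling it out, in case you want to check your final answer: the telescoping identity from the previous problem, applied with $a_j = -\frac{1}{j+1}$, gives $$\sum_{k=1}^{n} \frac{1}{k(k + 1)} = \sum_{k=1}^{n}\left(\frac 1 k - \frac 1 {k + 1}\right) = 1 - \frac{1}{n+1} = \frac{n}{n+1}.$$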
Dirac's theorem question Give an example to show that the condition $\deg(v) \geq n/2$ in the statement of Dirac's theorem cannot be replaced by $\deg(v) \geq (n-1)/2$ The textbook gives the solution: The complete bipartite graph $K_{n/2 - 1, n/2 + 1}$ if $n$ is even, and $K_{(n-1)/2, (n+1)/2}$ if $n$ is odd. If anyone can explain this more thoroughly it would be greatly appreciated, thanks!
The complete bipartite graph $K_{(n-1)/2, ~ (n+1)/2}$ has $(n-1)/2 + (n+1)/2 = n$ vertices. Each vertex has degree greater than or equal to $(n-1)/2$ but this graph does not contain any Hamiltonian cycles, so the conclusion of Dirac's theorem does not hold. You may wish to consider, for example, $K_{1,2}$ or $K_{2,3}$.
Limits calculus very short question? Can you help me to solve this limit? $\frac{\cos x}{(1-\sin x)^{2/3}}$... as $x \rightarrow \pi/2$, how can I transform this?
Hint: let $y = \pi/2 - x$ and take the limit as $y \rightarrow 0$. In this case, the limit becomes $$\lim_{y \rightarrow 0} \frac{\sin{y}}{(1-\cos{y})^{2/3}}$$ That this limit diverges in magnitude may be shown several ways. One way is to recognize that, in this limit, $\sin{y} \sim y$ and $1-\cos{y} \sim y^2/2$, and the limit becomes $$\lim_{y \rightarrow 0} \frac{2^{2/3} y}{y^{4/3}} = \lim_{y \rightarrow 0} 2^{2/3} y^{-1/3} $$ which diverges (to $+\infty$ as $y \to 0^+$, i.e. $x \to \pi/2^-$, and to $-\infty$ as $y \to 0^-$).
Find $\lim_{(x,y) \to (0,0)}\frac{x^3\sin(x+y)}{x^2+y^2}$ Find the limit $$\lim_{(x,y) \to (0,0)}\frac{x^3\sin(x+y)}{x^2+y^2}.$$ How exactly can I do this? Thanks.
Using $|\sin z|\leq 1$ we find that the absolute value of your function is not greater than $$ \frac{|x^3|}{x^2+y^2}\leq \frac{|x^3|}{x^2}=|x|. $$ This is so far only for $x\neq 0$; then observe that $|f(x,y)|\leq |x|$ also holds when $x=0$. Can you conclude?
Homogeneous Linear Equation General Solution I’m having some difficulty understanding the solution to the following differential equation problem. Find a general solution to the given differential equation $4y’’ – 4y’ + y = 0$ The steps I’ve taken in solving this problem was to first find the auxiliary equation and then factor to find the roots. I listed the steps below: $4r^2 – 4r + 1$ $(2r – 1) \cdot (2r-1)$ $\therefore r = \frac{1}{2} \text{is the root}$ Given this information, I supposed that the general solution to the differential equation would be as follows: $y(t) = c_{1} \cdot e^{\frac{1}{2} t}$ But when I look at the back of my textbook, the correct answer is supposed to be $y(t) = c_{1} \cdot e^{\frac{1}{2} t} + c_{2} \cdot te^{\frac{1}{2} t}$ Now I know that understanding the correct solution has something to do with linear independence, but I’m having a hard time getting a deep understanding of what’s going on. Any help would be appreciated in understanding the solution.
What is going on here is exactly the same thing we always see when searching for a basis of a vector space $V$ over a field $K$: we look for a set of linearly independent vectors which generate the whole space. In the space of all solutions of an ODE, we do the same. For any homogeneous linear ODE with constant coefficients, there is a routine way to find the solutions, and you did it right for this one. When the auxiliary equation has a single root of multiplicity two, one solution is, as you noted, $y_1(t)=\exp(0.5t)$, but we do not yet have a second one. We need another solution, independent of the first, because the number of solutions in a fundamental set equals the order of the ODE, which is two here. That is, we need a solution $y_2$ such that $$\{\exp(0.5t),y_2(t)\}$$ is a fundamental set of solutions. To find it you can use the method of Reduction of Order, which gives $$y_2(t)=t\exp(0.5t)$$
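For the record, here is the reduction-of-order computation for this particular equation: substitute $y = v(t)e^{t/2}$. Then $$y' = \left(v' + \tfrac{v}{2}\right)e^{t/2}, \qquad y'' = \left(v'' + v' + \tfrac{v}{4}\right)e^{t/2},$$ so $$4y'' - 4y' + y = \big(4v'' + 4v' + v - 4v' - 2v + v\big)e^{t/2} = 4v''\,e^{t/2} = 0.$$ Hence $v'' = 0$, i.e. $v(t) = c_1 + c_2 t$, which produces the second, independent solution $y_2(t) = t\,e^{t/2}$.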
When log is written without a base, is the equation normally referring to log base 10 or natural log? For example, this question presents the equation $$\omega(n) < \frac{\log n}{\log \log n} + 1.4573 \frac{\log n}{(\log \log n)^{2}},$$ but I'm not entirely sure if this is referring to log base $10$ or the natural logarithm.
In many programming languages, log is the natural logarithm, with separate variants like log2 and log10 (checked in C, C++, Java, JavaScript, R, Python). The convention in pure mathematics is similar: in analytic number theory in particular, as in the inequality you quote, an unadorned $\log$ almost always means the natural logarithm.
Need examples about injection (1-1) and surjection (onto) of composite functions The task is that I have to come up with examples for the following 2 statements: 1/ If the composite $g \circ f$ is injective (one-to-one), then $f$ is one-to-one, but $g$ doesn't have to be. 2/ If the composite $g \circ f$ is surjective (onto), then $g$ is onto, but $f$ doesn't have to be. I have some difficulties when I try to think of examples, especially for the "one-to-one" statement. ** For #1, I tried letting the function $f$ be something in the nature of $f(x) = kx$, which is clearly 1-1, and $g$ be something in the nature of $g(x) = x^2$, which is not 1-1. But when I try to make $g \circ f$ out of these two by all of $+, -, \times$, and division, it turns out I can't find a 1-1 function $g \circ f$. ** For #2, I first thought about letting $f$ be $e^x$, which only covers the positive values of $y$, and then $g$ being something like $k - e^x$. I thought $g \circ f$ would cover the whole real line, but when I tried that in MATLAB, it only covers the negative values of $y$. So I'm totally wrong >_< Little note: I finished the proofs showing $f$ must be 1-1 (for #1), and $g$ must be onto (for #2). I just need examples for the "doesn't have to be" parts. Would someone please give me some suggestions? I appreciate any help. Thank you ^_^
Consider maps $\{0\}\xrightarrow{f}\{0,1\}\xrightarrow{g}\{0\}$, or $\{0\}\xrightarrow{f}A\xrightarrow{g}\{0\}$ where $A$ is any set with more than one element.
kernel and cokernel of a morphism Let $\phi: A=\mathbb{C}[x,y]/(y^2-x^3) \to \mathbb{C}[t]=B $ be the ring morphism defined by $x\mapsto t^2, y \mapsto t^3.$ Let $f:Y=\text{Spec} B \to X=\text{Spec} A$ be the associated morphism of affine schemes. Since $\phi$ is injective, then $f^{\sharp} : \mathcal{O}_X \to f_{\star}\mathcal{O}_Y$ is injective too. 1.What is the cokernel of $f^{\sharp}?$ I know that it will be isomorphic to the quotient sheaf $f_{\star}\mathcal{O}_Y/\mathcal{O}_X$ but I'm unable to find it explicitly. Here I computed the cotangent sheaf $\Omega_X$ of $X.$ 2.How can I find the kernel and the cokernel of $f^{\star} \Omega_X \to \Omega_Y?$ What I've tried out: The sheaf morphism corresponds to a $B$-module morphism $\Omega_A \otimes_A B \to \Omega_B$ or $Adx \oplus Ady/(2ydy-3x^2dx) \otimes B \to Bdt$ which is in fact, the map $(t)dt \oplus (t^2)dt \to Bdt$ where $(tg(t)dt,t^2h(t)dt)\mapsto (tg(t)+t^2h(t))dt$ for $g,h \in B.$ The kernel is isomorphic to $B$ since $g(t)=-th(t)$ and the cokernel is $B/(t).$ Is this correct?
Hopefully you know that coherent sheaves over $\mathrm{Spec}\, A$ are equivalent to finitely generated $A$-modules. The structure sheaf $\mathcal O_X$ corresponds to $A$ as a module over itself, and $f_\ast\mathcal O_Y$ corresponds to $\mathbb C[t]$ with the $A$-module structure given by restricting through $\phi$. So $f^\#$ corresponds to $\phi$ as a map of $A$-modules; the image of $\phi$ consists of all polynomials with no degree-$1$ term, so the cokernel is $(t)/(t^2)$. The $A$-module action on this is that $x$ and $y$ act as $t^2$ and $t^3$ respectively; hence $x$ and $y$ act as $0$. So the cokernel is a trivial $1$-dimensional $A$-module. Edit: I forgot about $1$; the cokernel should be $1$-dimensional, not $2$.
$\int_0^\infty\frac{\log x dx}{x^2-1}$ with a hint. I have to calculate $$\int_0^\infty\frac{\log x dx}{x^2-1},$$ and the hint is to integrate $\frac{\log z}{z^2-1}$ over the boundary of the domain $$\{z\,:\,r<|z|<R,\,\Re (z)>0,\,\Im (z)>0\}.$$ I don't understand. The boundary of this domain has a pole of the integrand in it, doesn't it? Doesn't it make this method useless?
Note $$I(a)=\int_0^\infty\frac{\ln x}{(x+1)(x+a)}dx\overset{x\to\frac a x} = \frac{1}{2}\int_0^\infty\frac{\ln a}{(x+1)(x+a)}dx= \frac{\ln^2a}{2(a-1)} $$ Then $$\int_0^\infty\frac{\ln x}{x^2-1}dx=I(-1)=-\frac14 [\ln(e^{i\pi})]^2=\frac{\pi^2}4 $$
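If the $\ln(e^{i\pi})$ step feels too slick, a numerical check is reassuring. A sketch using the mpmath library (splitting the integral at $x=1$, where the singularity of the integrand is removable; the working precision is my choice):

```python
from mpmath import mp, quad, log, pi, inf

mp.dps = 30  # 30 digits of working precision

f = lambda x: log(x) / (x**2 - 1)   # integrand; its limit at x = 1 is 1/2

val = quad(f, [0, 1, inf])          # integrate over (0,1) and (1,oo)
print(val)                          # 2.46740110027233965...
print(pi**2 / 4)                    # matches pi^2/4
```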
Normal Distribution Identity I have the following problem. I am reading a paper which uses this identity in a proof, but I can't see why it is true or how to prove it. Can you help me? \begin{align} \int_{x_{0}}^{\infty} e^{tx} n(x;\mu,\nu^2)dx &= e^{\mu t+\nu^2 t^2 /2} N\left(\frac{\mu - x_0 }{\nu} +\nu t \right) \end{align} where $n(\cdot)$ is the normal pdf with mean $\mu$ and variance $\nu^2$, and $N(\cdot)$ is the standard normal cdf. \begin{align} \int_{x_{0}}^{\infty} e^{tx} n(x;\mu,\nu^2)dx &= \int_{x_{0}}^{\infty} e^{tx} \frac{1}{\nu \sqrt{2\pi}} e^{-\frac{(x-\mu)^2}{2 \nu^2} } dx \\ &= \int_{x_{0}}^{\infty} \frac{1}{\nu \sqrt{2\pi}} e^{-\frac{(x-\mu)^2}{2 \nu^2} + tx } dx \\ &= \int_{x_{0}}^{\infty} \frac{1}{\nu \sqrt{2\pi}} e^{-\frac{x^2 -2x\mu +\mu^2 -2\nu^2tx}{2 \nu^2}} dx \end{align} and I'm stuck here. Thank you.
* *Substitute the expression for the Normal pdf. *Gather together the powers of $e$. *Complete squares in the exponent of $e$ to get the square of something plus a constant. *Take the constant powers of $e$ out of the integral. *Change variables to turn the integral into a integral of the standard normal pdf from $-\infty$ to some number $a$.
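Carrying those steps out explicitly (step 3, completing the square, is usually the sticking point): $$-\frac{(x-\mu)^2}{2\nu^2} + tx = -\frac{\big(x-(\mu+\nu^2 t)\big)^2}{2\nu^2} + \mu t + \frac{\nu^2 t^2}{2},$$ so the integral equals $$e^{\mu t + \nu^2 t^2/2}\int_{x_0}^{\infty} n\big(x;\,\mu+\nu^2 t,\,\nu^2\big)\,dx = e^{\mu t + \nu^2 t^2/2}\, N\!\left(\frac{\mu+\nu^2 t - x_0}{\nu}\right) = e^{\mu t + \nu^2 t^2/2}\, N\!\left(\frac{\mu - x_0}{\nu} + \nu t\right).$$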
Permutation and Equivalence Let $X$ be a nonempty set and define the two-place relation $\sim$ on permutations of $X$ by: $\sigma\sim\tau$ if and only if $\rho^{-1}\circ\sigma\circ\rho=\tau$ for some permutation $\rho$. For reflexivity this is what I have: Let $x\in X$ such that $(\rho^{-1}\circ\sigma\circ\rho)(x)=\sigma(x)$. Then $\rho^{-1}(\sigma(\rho(x)))=\sigma(x)$, so $\sigma(\rho(x))=\rho(\sigma(x))$. Therefore $\sigma(x)=\rho^{-1}(\rho(\sigma(x)))$, so $\sigma(x)=\rho^{-1}(\sigma(\rho(x)))$. Finally $\sigma=\rho^{-1}\circ\sigma\circ\rho$. Does that show that the relation is reflexive?
You assumed, I believe, what you were to prove. Look at your expression following "Let" and look at your expression following "Finally"... they say the same thing! How about letting $\rho = \sigma$: all that matters is to show that for each permutation $\sigma$ of $X$ we have $\sigma \sim \sigma$, i.e. that there exists some permutation $\rho$ satisfying $\rho^{-1}\circ\sigma\circ\rho=\sigma$. This is satisfied for any $\sigma$ by choosing $\rho$ to be $\sigma$ itself: $$\sigma^{-1} \circ \sigma \circ \sigma = (\sigma^{-1} \circ \sigma) \circ \sigma = \mathrm{id}\circ\sigma = \sigma.$$ (Taking $\rho$ to be the identity permutation works just as well.)
Can normal chain rule be used for total derivative? Chain rule states that $$\frac{df}{dx} \frac{dx}{dt} = \frac{df}{dt}$$. Suppose that $f$ is function $f(x,y)$. In this case, would normal chain rule still work?
The multivariable chain rule goes like this: $$ \frac{df}{dt} = \frac{\partial f}{\partial x}\cdot \frac{dx}{dt}+ \frac{\partial f}{\partial y}\cdot\frac{dy}{dt} $$ If you can isolate $\dfrac{dy}{dx}$, then you can always just do implicit differentiation instead. Let's do an example: $$ f=f(x,y) = x^2 - y $$ Where $$ x(t) = t, \; \; y(t) = t $$ Treating $y$ as a function of $x$ (here $y = x$), implicit differentiation gives $$ \frac{df}{dt} = \frac{df}{dx} \cdot \frac{dx}{dt} = \left(2x -\frac{dy}{dx}\right)\frac{dx}{dt}= 2t - 1 $$ Also, by the chain rule above, $$ \frac{df}{dt} = \frac{\partial f}{\partial x}\cdot \frac{dx}{dt}+ \frac{\partial f}{\partial y}\frac{dy}{dt} = 2x - 1 = 2t - 1 $$ Now let's do the same example, except this time: $$ f=f(x,y)= x^2 - y\\ x(t)=t\ln(t), \;\; y(t) = t\sin(t) \\ \Rightarrow \frac{df}{dt} = 2t\ln(t)(\ln(t) + 1) -(\sin(t) +t\cos(t)) $$ Would implicit differentiation work here? Yes, but it requires more steps and is the inefficient route: from $x = t\ln(t)$ we get $e^{x/t} = t$, hence $y= e^{x/t}\sin(t)$, and differentiating directly, $$ \frac{dy}{dt} = \left(\frac{1}{t}\frac{dx}{dt}-\frac{x}{t^2}\right)e^{\frac{x}{t}}\sin(t)+e^{\frac{x}{t}}\cos(t) = \\ e^{\frac{x}{t}}\left(\left(\frac{1}{t}\frac{dx}{dt}-\frac{x}{t^2}\right)\sin(t)+\cos(t)\right) =\\ t\left(\frac{\ln(t)+1}{t} - \frac{t\ln(t)}{t^2}\right)\sin(t) + t\cos(t) = \\ \left(\ln(t) +1 - \ln(t)\right)\sin(t) + t\cos(t) = \\ \sin(t)+t \cos(t) $$ But it still works out. You can check out this site: Link For examples.
Factoring Cubic Equations I've been trying to figure out how to factor cubic equations by studying a few worksheets online, such as the one here, and was wondering: is there any generalized way of factoring these types of equations, or do we just need to remember a bunch of different cases? For example, how would you factor the following: $x^3 - 7x^2 + 7x + 15$ I'm having trouble factoring equations such as these without obvious factors.
The general method involves three important rules for polynomials with integer coefficients: * *If $a+\sqrt b$ is a root, so is $a-\sqrt b$. *If $a+ib$ is a root, so is $a-ib$. *If a rational root is of the form $p/q$ in lowest terms, then $p$ is an integer factor of the constant term, and $q$ is an integer factor of the leading coefficient. Since this is a cubic, there is at least one real root; let's look for a rational one using rule 3: $$q = 1,\ p \in \{\pm1,\pm3,\pm5,\pm15\}$$ Brute force to find that $-1$ is a root. So we can now write the polynomial as: $$(x+1)(x-a)(x-b)=x^3 - 7x^2 + 7x +15$$ Comparing coefficients, now use: $$a+b=8$$ $$ab-a-b=7$$ Solve to get: $$a(8-a)-a-(8-a)=7 \to a^2-8a+15 = 0$$ And get the two remaining roots, $x=3$ and $x=5$.
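If you just want to automate rule 3, here is a small Python sketch (it assumes a nonzero constant term; Fraction arithmetic keeps the root test exact):

```python
from fractions import Fraction

def poly(coeffs, x):
    # Evaluate a_n x^n + ... + a_0 by Horner's rule; coeffs = [a_n, ..., a_0]
    out = 0
    for c in coeffs:
        out = out * x + c
    return out

def rational_roots(coeffs):
    # Candidates p/q from the rational root theorem, tested exactly
    lead, const = coeffs[0], coeffs[-1]
    ps = [d for d in range(1, abs(const) + 1) if const % d == 0]
    qs = [d for d in range(1, abs(lead) + 1) if lead % d == 0]
    cands = {s * Fraction(p, q) for p in ps for q in qs for s in (1, -1)}
    return sorted(r for r in cands if poly(coeffs, r) == 0)

print(rational_roots([1, -7, 7, 15]))
# [Fraction(-1, 1), Fraction(3, 1), Fraction(5, 1)]
```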
Three random variables, two are independent Suppose we have 3 random variables $X,Y,Z$ such that $Y\perp Z$ and let $W = X+Y$. How can we infer from this that $$\int f_{WXZ}(x+y,x,z)\mathrm{d}x = \int f_{WX}(x+y,x)f_{Z}(z)\mathrm{d}x$$ Any good reference where I could learn about independence relevant to this question is also welcome.
After integrating out $x$, the left-hand side is the joint density for $Y$ and $Z$ at $(y,z)$, and the right-hand side is the density for $Y$ at $y$ multiplied by the density for $Z$ at $z$. These are the same because $Y$ and $Z$ were assumed to be independent.
How to compute conditional probability for a homogeneous Poisson process? Let $N$ be a homogeneous Poisson process with intensity $\lambda$. How do I compute the following probability: $$P[N(5)=2 \, | \, N(2)=1,N(10)=3]?$$
You know that exactly $2$ events occurred between $t=2$ and $t=10$. These are independently uniformly distributed over the interval $[2,10]$. The probability that one of them occurred before $t=5$ and the other thereafter is therefore $2\cdot\frac38\cdot\frac58=\frac{15}{32}$. The intensity $\lambda$ doesn't enter into it, since you know the numbers of events.
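A quick Monte Carlo check of the $15/32$ (a sketch; the seed and sample count are arbitrary choices, and note that the conditioning means we never simulate the Poisson process itself, only the two uniform points it leaves us with):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10**6

# Given N(2)=1 and N(10)=3, the two arrivals in (2,10] are iid Uniform(2,10).
pts = rng.uniform(2.0, 10.0, size=(n, 2))

# N(5)=2 requires exactly one of those two arrivals to land before t=5.
est = ((pts <= 5.0).sum(axis=1) == 1).mean()
print(est, 15 / 32)   # ~0.469 vs 0.46875
```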
Quantified Statements To English The problem I am working on is: Translate these statements into English, where C(x) is “x is a comedian” and F(x) is “x is funny” and the domain consists of all people. a) $∀x(C(x)→F(x))$ b)$∀x(C(x)∧F(x))$ c) $∃x(C(x)→F(x))$ d)$∃x(C(x)∧F(x))$ ----------------------------------------------------------------------------------------- Here are my answers: For a): For every person, if they are a comedian, then they are funny. For b): For every person, they are both a comedian and funny. For c): There exists a person who, if he is funny, is a comedian For d): There exists a person who is funny and is a comedian. Here are the books answers: a)Every comedian is funny. b)Every person is a funny comedian. c)There exists a person such that if she or he is a comedian, then she or he is funny. d)Some comedians are funny. Does the meaning of my answers seem to be in harmony with the meaning of the answers given in the solution manual? The reason I ask is because part a), for instance, is a implication, and "Every comedian is funny," does not appear to be an implication.
This is because the mathematical language is more precise than everyday language. Your answers convey the same meaning as the book's, with one correction: in c) you reversed the implication; $C(x)\to F(x)$ reads "if he or she is a comedian, then he or she is funny," not the other way around. As for a), "Every comedian is funny" really is the implication in disguise: saying that every comedian is funny is exactly saying that for every person, being a comedian implies being funny.
Show that a monotone sequence is decreasing and bounded by zero? Let $a_n=\displaystyle\sum\limits_{k=1}^n {1\over{k}} - \log n$ for $n\ge1$. Euler's constant is defined as $\gamma=\lim_{n\to\infty} a_n$. Show that $(a_n)^\infty_{n=1}$ is decreasing and bounded below by zero, and so this limit exists. My thought: when I was trying the first few terms $a_1, a_2, a_3$, I get: $$a_1 = 1-0$$ $$a_2={1\over1}+{1\over2}-\log2$$ $$a_3={1\over1}+{1\over2}+{1\over3}-\log3$$ It is NOT decreasing!!! What did I do wrong?? Professor also gave us a hint: prove that $\dfrac{1}{n+1}\le \log(n+1)-\log n\le \dfrac1n$. Do we need to use the squeeze theorem???
Certainly it's decreasing; you forgot to evaluate the logarithms. I get $1-0=1$, then $1 + \frac12 -\log2=0.8068528\ldots$, and $1+\frac12+\frac13-\log3=0.73472\ldots$. For the general statement, use $$ \frac{1}{n+1} = \int_n^{n+1}\frac{dx}{n+1} \le \int_n^{n+1}\frac{dx}{x} = \log(n+1)-\log n, $$ $$ \frac1n = \int_n^{n+1} \frac{dx}{n} \ge \int_n^{n+1} \frac{dx}{x} = \log(n+1)-\log n. $$ The first inequality gives $$ a_{n+1} = \Big(1+\frac12+\frac13+\cdots+\frac1n-\log n\Big) + \Big(\frac{1}{n+1}-\big(\log(n+1)-\log n\big)\Big) \le a_n, $$ so the sequence is decreasing. Summing the second inequality over $k=1,\dots,n$ gives $1+\frac12+\cdots+\frac1n \ge \log(n+1) > \log n$, so $a_n > 0$: the sequence is bounded below by zero, and hence the limit exists.
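A quick numerical look at the sequence, which also shows where the asker went wrong (the sample points are arbitrary):

```python
from math import log

def a(n):
    # a_n = H_n - log(n), the sequence from the question
    return sum(1.0 / k for k in range(1, n + 1)) - log(n)

for n in (1, 2, 3, 10, 100, 10**4, 10**6):
    print(n, a(n))   # strictly decreasing, approaching 0.5772156649...
```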
Distinction between "measure differential equations" and "differential equations in distributions"? Is there a universally recognized term for ODEs considered in the sense of distributions used to describe impulsive/discontinuous processes? I noticed that some authors call such ODEs "measure differential equations" while others use the term "differential equations in distributions". But I don't see a major difference between them. Can anyone please make the point clear? Thank you.
As long as the distributions involved in the equation are (signed) measures, there is no difference and both terms can be used interchangeably. This is the case for impulsive source equations like $y''+y=\delta_{t_0}$. Conceivably, an ODE could also involve distributions that are not measures, such as the derivative of $\delta_{t_0}$. In that case only "differential equation in distributions" would be correct. But I can't think of a natural example of such an ODE at this time.
Field extension of composite degree has a non-trivial sub-extension Let $E/F$ be an extension of fields with $[E:F]$ composite (not prime). Must there be a field $L$ contained between $E$ and $F$ which is not equal to either $E$ or $F$? To prove this is true, it suffices to produce an element $x\in E$ such that $F(x) \not= E$, but I cannot find a way to produce such an element. Any ideas?
Let $L/\mathbb Q$ be a Galois extension with Galois group $A_4$; this is certainly possible, although off the top of my head I can't remember a polynomial that gives you this extension. Now $A_4$ has a subgroup $H$ of index $4$, namely the subgroup generated by a $3$-cycle, and this subgroup is not properly contained in any proper subgroup of $A_4$. In particular, let $K$ be the fixed field of $H$; then $K$ has no non-trivial proper subfields, because such a subfield would correspond to a proper subgroup of $A_4$ properly containing $H$. Of course we also have $[K:\mathbb Q]=4$. In general, to find such an example one can find a group $G$ which has a maximal subgroup of the desired index. Then one can realize $G$ as a Galois group over a field of rational functions and look at the fixed field of the subgroup to get a general counterexample.
Counting primes of the form $S_1(a_n)$ vs primes of the form $S_2(b_n)$ Let $n$ be an integer $>1$. Let $S_1(a_n)$ be a symmetric irreducible integer polynomial in the variables $a_1,a_2,\ldots,a_n$. Let $S_2(b_n)$ be a symmetric irreducible integer polynomial in the variables $b_1,b_2,\ldots,b_n$ of the same degree as $S_1(a_n)$. Let $m$ be an integer $>1$ and let $S^*_1(m)$ be the number of integers of the form $S_1(a_n)$ that are $<m$ (where the $a_n$ are integers $>-1$). Likewise let $S^*_2(m)$ be the number of integers of the form $S_2(b_n)$ that are $<m$ (where the $b_n$ are integers $>-1$). Analogously, let $S^-_1(m)$ be the number of primes of the form $S_1(a_n)$ that are $<m$, and let $S^-_2(m)$ be the number of primes of the form $S_2(b_n)$ that are $<m$. Now I believe it is always true that $$\lim_{m\to\infty}\frac{S^-_1(m)}{S^-_2(m)} = \lim_{m\to\infty}\frac{S^*_1(m)}{S^*_2(m)} = \frac{x}{y}$$ for some integers $x,y$. How to show this? Even with related conjectures assumed to be true I do not know how to prove it, nor whether I need to assume some related conjectures to be true. EDIT: I forgot to mention that both $S_1(a_n)$ and $S_2(b_n)$ (must) have the Bunyakovsky property.
I think the polynomial $2x^4+x^2y+xy^2+2y^4$ is symmetric and irreducible, but its values are all even, hence, it takes on at most $1$ prime value. If your other polynomial takes on infinitely many prime values --- $x^4+y^4$ probably does this, though proving it is out of reach --- then the ratio of primes represented by the two polynomials will go to zero, I doubt the ratio of numbers represented would do that.
what is the precise definition of matrix? Let $F$ be a field. To me, saying 'Matrix is a rectangular array of elements of $F$' seems extremely terse. What kind of set is a rectangular array of elements of $F$? Precisely, $(F^m)^n \neq (F^n)^m \neq F^{n\times m}$. I wonder which is the precise definition for $M_{n\times m}(F)$?
In an engineering class where we don't do things very rigorously at all, we defined an $n\times m$ matrix as a linear function from $\mathbb{R}^m$ to $\mathbb{R}^n$. If you want something set-theoretically precise, the standard choice is to define an $n\times m$ matrix over $F$ as a function from the index set $\{1,\dots,n\}\times\{1,\dots,m\}$ to $F$, which sidesteps the $(F^m)^n$ vs. $(F^n)^m$ issue entirely.
half-angle trig identity clarification I am working on the following trig half angle problem. I seem to be on the the right track except that my book answer shows -1/2 and I didn't get that in my answer. Where did I go wrong? $$\sin{15^{\circ}} = $$ $$\sin \frac { 30^{\circ} }{ 2 } = \pm \sqrt { \frac { 1 - \cos{30^{\circ}} }{ 2 } } $$ $$\sin \frac { 30^{\circ} }{ 2 } = \pm \sqrt { \frac { 1 - \frac {\sqrt 3 }{ 2 } }{ 2 } } $$ $$\sin \frac { 30^{\circ} }{ 2 } = \pm \sqrt { \frac { 1 - \frac {\sqrt 3 }{ 2 } }{ 2 } (\frac {2} {2}) } $$ $$\sin \frac { 30^{\circ} }{ 2 } = \pm \sqrt { 2 - \sqrt {3} } $$ Book Answer $$\sin \frac { 30^{\circ} }{ 2 } = -\frac {1} {2} \sqrt { 2 - \sqrt {3} } $$
$$\sqrt { \dfrac { 1 - \dfrac {\sqrt 3 }{ 2 } }{ 2 } \times \dfrac22} = \sqrt{\dfrac{2-\sqrt3}{\color{red}4}} = \dfrac{\sqrt{2-\sqrt3}}{\color{red}2}$$
Convert $ x^2 - y^2 -2x = 0$ to polar? So far I got $$r^2(\cos^2{\phi} - \sin^2{\phi}) -2 r\cos{\phi} = 0$$ $$r^2 \cos{(2\phi)} -2 r \cos{\phi} = 0$$
You are on the right track. Now divide through by $r \ne 0$ and get $$r \cos{2 \phi} - 2 \cos{\phi} = 0$$ or $$r = 2 \frac{ \cos{\phi}}{\cos{2 \phi}}$$
Is it true that $A,B\subset X$ are completely separated iff their closures are? If $A,B\subset X$ and $\overline{A}, \overline{B}$ are completely separated, so also are $A,B$: since $A\subset \overline{A}$ and $B\subset \overline{B}$, we have $f(A)\subset f(\overline{A})=\{0\}$ and $f(B)\subset f(\overline{B})=\{1\}$ for some continuous function $f:X\to [0,1]$. But is the converse true?
HINT: Suppose that $A$ and $B$ are completely separated, and let $f:X\to[0,1]$ be a continuous function such that $f(x)=0$ for all $x\in A$ and $f(x)=1$ for all $x\in B$. Since $f$ is continuous, $f^{-1}[\{0\}]$ is closed, and certainly $A\subseteq f^{-1}[\{0\}]$, so ... ?
Is there a way to solve $x^2 + 12y - 12x = 0$ for $x$? I'm doing some statistical analysis (random variate generation for simulation models) and I just ran the inverse transform of a CDF: $$ F(x) = \begin{cases} (x-4)/4 & \text{for } x \in [2,3] \\ x - (x^2/12) & \text{for } x \in (3,6] \\ 0 & \text{otherwise} \end{cases} $$ That yields a couple of equations: $$ R=(x-4)/4 ~ \text{ for } ~ 2 \leq x \leq 3$$ $$ R=x - (x^2/12) ~ \text{ for } 3 < x \leq 6 $$ Now, the first one is easy to invert: $$ x = 4(R+1) $$ But the second one is implicit in $x$. Backtracking, I rearrange the equation: \begin{align} R & = x - (x^2/12) \\ 12R & = 12x - x^2 \end{align} Changing $R$ for $y$... $$ x^2 + 12y - 12x= 0 $$ Now, that looks awfully familiar, but I confess I've hit a wall and do not remember what to do from here. How can I get an explicit function solving for $x$?
Try the quadratic equation formula: $$x^2-12x+12y=0\Longrightarrow \Delta:=12^2-4\cdot 1\cdot 12y=144-48y=48(3-y)\Longrightarrow$$ $$x_{1,2}=\frac{12\pm \sqrt{48(3-y)}}{2}=6\pm 2\sqrt{3(3-y)}$$ If you're interested in real roots then it must be that $\,y\le 3\,$ ...
Proving algebraic sets i) Let $Z$ be an algebraic set in $\mathbb{A}^n$. Fix $c\in \mathbb{C}$. Show that $$Y=\{b=(b_1,\dots,b_{n-1})\in \mathbb{A}^{n-1}|(b_1,\dots,b_{n-1},c)\in Z\}$$ is an algebraic set in $\mathbb{A}^{n-1}$. ii) Deduce that if $Z$ is an algebraic set in $\mathbb{A}^2$ and $c\in \mathbb{C}$ then $Y=\{a\in \mathbb{C}|(a,c)\in Z\}$ is either finite or all of $\mathbb{A}^1$. Deduce that $\{(z,w)\in \mathbb{A}^2 :|z|^2 +|w|^2 =1\}$ is not an algebraic set in $\mathbb{A}^2$.
This answer was merged from another question, so it only covers part ii). $Z$ is algebraic, and hence the simultaneous solution set of a collection of polynomials in two variables. If we replace one variable in all the polynomials by the number $c$, we get a set of polynomials in one variable whose zero set is your $Y$. $Y$ is therefore an algebraic set, and closed in $\Bbb A^1$, hence either finite or the whole affine line. Assume for contradiction that $Z = \{ ( z,w) \in \mathbb{A}^2 : |z|^2 + |w|^2 = 1 \}$ is algebraic. Set $w = 0$ (this is our $c$). The $Y$ we get from this is the unit circle in the complex plane. That is an infinite set, but certainly not all of $\Bbb C$. Thus $Y$ is neither finite nor all of $\Bbb A^1 = \Bbb C$, and therefore $Z$ cannot be algebraic to begin with.
Why aren't parametrizations equivalent to their "equation form"? Consider the parametrization $(\lambda,t)\mapsto (\lambda t,\lambda t^2,\lambda t^3)$. This is a union of lines (not sure how to visualize it precisely. I think it's a double cone). It doesn't appear that the $x$ or $z$ axis are in this parametrization (if $y$ and $z$ are both zero, so must $x$ be). However, when you solve the parametrization in terms of $x,y,z$ you obtain the relationship $y^2=xz$. In which the $x$ and $z$ axes are solutions! Why is this? Is this the closure? If so, how can we relate the closure to the original set?
When you say that you "solve" and obtain the relationship $y^2 = xz$, I imagine you just observed that $(\lambda t^2)^2 = (\lambda t)(\lambda t^3)$, correct? In this case what you have shown is that the set for which you have a parametrization is a subset of the set of solutions to $y^2 = xz$. What you have not shown is that every solution of $y^2 = xz$ is covered by the parametrization. Indeed, as you pointed out, this is not true.
With how many zeros does it end? I have to calculate how many zeros $1000!$ ends with. This is what I did, but I am not sure whether it's good: I think I have to count how many times the factor $10$ occurs in $1000!$. I found that the prime $5$ occurs $249$ times in $1000!$, using the fact that $s_p(n!)=\sum_{r=1}^{\infty}\lfloor \frac{n}{p^r}\rfloor$. So I think the answer should be that it ends with $249$ zeros. Is this correct? Are there other ways to do this? If not, how should I do it then?
First, let's note that $10$ can be factored into $2 \times 5$ so the key is to compute the minimum of the number of $5$'s and $2$'s that appear in $1000!$ as factors of the numbers involved. As an example, consider $5! = 120$, which has one $0$ because there is a single $5$ factor and a trio of $2$ factors in the product, one from $2$ and a pair from $4$. Thus, the key is to compute the number of $5$'s, $25$'s, $125$'s, and $625$'s in that product as each is contributing a different number of $5$'s to the overall product as these are the powers of $5$ for you to consider as the $2$'s will be much higher and thus not worth computing. So, while there are 200 times that $5$ will be a factor, there are 40 times for $25$ being a factor, eight for $125$ and one for $625$, which does give the same result as you had of 249, though this is a better explanation.
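The counting in the answer is exactly Legendre's formula for the exponent of $5$ in $1000!$; as a check, a few lines of Python:

```python
def factorial_trailing_zeros(n):
    # Trailing zeros of n!, i.e. the exponent of 5 in n! (2s are plentiful)
    count, power = 0, 5
    while power <= n:
        count += n // power   # multiples of 5, then 25, then 125, ...
        power *= 5
    return count

print(factorial_trailing_zeros(1000))   # 249
```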
For which natural $n\ge2$ is $\phi(n)=n/2$? For which natural numbers $n\ge2$ does $\phi(n)=n/2$ hold?
Hint: $n$ is even, or $n/2$ wouldn't be an integer. Hence $n=2^km$ with $m$ odd and $k\ge1$. You have $\phi(2^km)=2^{k-1}\phi(m)$ which must equal $n/2$.
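Following the hint to its conclusion ($\phi(m)=m$ forces $m=1$, so $n$ must be a power of $2$), a brute-force check agrees:

```python
from math import gcd

def phi(n):
    # naive Euler totient; fine for small n
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

print([n for n in range(2, 300) if 2 * phi(n) == n])
# [2, 4, 8, 16, 32, 64, 128, 256]
```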
Induction proof of $\sum_{k=1}^{n} \binom n k = 2^n -1 $ Prove by induction: $$\sum_{k=1}^{n} \binom n k = 2^n -1 $$ for all $n\in \mathbb{N}$. Today I wrote a calculus exam in which this problem was given. I have the feeling that I will get $0$ points for my solution, because I did this: Base Case: $n=1$ $$\sum_{k=1}^{1} \binom 1 k = 1 = 2^1 -1 .$$ Induction Hypothesis: for all $n \in \mathbb{N}$: $$\sum_{k=1}^{n} \binom n k = 2^n -1 $$ Induction Step: $n \rightarrow n+1$ $$\sum_{k=1}^{n+1} \binom {n+1} {k} = \sum_{k=1}^{n} \binom {n+1} {k} + \binom{n+1}{n+1} = 2^{n+1} -1$$ Please show me my mistake, because next time is my last chance in this class.
suppose that $$\sum_{k=1}^{n} \binom n k = 2^n -1 $$ then $$\sum_{k=1}^{n+1} \binom {n+1}{k} =\sum_{k=1}^{n+1}\Bigg( \binom {n}{ k} +\binom{n}{k-1}\Bigg)=$$ $$=\sum_{k=1}^{n+1} \binom {n}{ k} +\sum_{k=1}^{n+1}\binom{n}{k-1}=$$ $$=\sum_{k=1}^{n} \binom {n}{ k}+\binom{n}{n+1}+\sum_{k=0}^{n} \binom {n}{k}=$$ $$=\sum_{k=1}^{n} \binom {n}{ k}+\binom{n}{n+1}+\binom{n}{0}+\sum_{k=1}^{n} \binom {n}{k}=$$ $$=2\sum_{k=1}^{n} \binom {n}{ k}+\binom{n}{n+1}+\binom{n}{0}=$$ since $\binom{n}{n+1}=0, \binom{n}{0}=1$ $$=2\sum_{k=1}^{n} \binom {n}{ k}+1=2(2^n-1)+1=2^{n+1}-2+1=2^{n+1}-1$$
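A one-liner to double-check the identity for small $n$ (using math.comb, available in Python 3.8+):

```python
from math import comb

for n in range(1, 13):
    assert sum(comb(n, k) for k in range(1, n + 1)) == 2**n - 1
print("identity holds for n = 1..12")
```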
Graphs with $|G| \ge 2$ have at least two vertices which are not cut-vertices Show that every graph $G$ with $|G| \ge 2$ has at least two vertices which are not its cut-vertices.
Let $P$ be a maximal path in $G$. I claim that the endpoints of $P$ are not cut vertices. Suppose that an endpoint $v$ of $P$ were a cut vertex, and let $G - v$ be separated into components $G_1,\ G_2,\ \cdots,\ G_k$. Any path between two different components must pass through $v$ as an interior vertex; since $P$ ends at $v$, it follows that $P$ is contained entirely within some $G_i\cup\{v\}$. But this contradicts the maximality of $P$: since $v$ is a cut vertex, it has a neighbor in some $G_j$ with $j \neq i$, and that neighbor extends $P$. Therefore $v$ is not a cut vertex.
Proving some basic facts about integrals Let $f$, $g$ be Riemann integrable functions on the interval $[a,b]$, that is $f,g \in \mathscr{R}([a,b])$. (i) $\int_{a}^{b} (cf+g)^2\geq 0$ for all $c \in \mathbb{R}$. (ii) $2|\int_{a}^{b}fg|\leq c \int_{a}^{b} f^2+\frac{1}{c}\int_{a}^{b} g^2$ for all $c \in \mathbb{R}^+$ I don't have a complete answer for either of these, but I have some ideas. For (i) $\int_{a}^{b} (cf+g)^2=\int_{a}^{b}c^2f^2+2cfg+g^2=c^2\int_{a}^{b}f^2+c\int_{a}^{b}2fg+\int_{a}^{b}g^2$. The first third is positive because if $c <0$ then $c^2>0$. I'm not sure about the middle third. The final third is positive. I did notice that if I can figure out (i)...(ii) follows from some rearranging.
I will assume that the functions are real-valued. For the first point, I will use that if an R-integrable function $h$ is nonegative on $[a,b]$, then $\int_a^bh(x)dx\geq 0$. For the second point, I will use that if $h(x)\leq k(x)$ on $[a,b]$, then $\int_a^bh(x)dx\leq \int_a^bk(x)dx$. Note that the latter follows readily from the former by linearity of the integral. 1) We have $(cf(x)+g(x))^2\geq 0$ for all $x\in [a,b]$, so $\int_a^b(cf(x)+g(x))^2dx\geq 0$. 2) Recall that $2|ab|\leq a^2+b^2$ for every $a,b\in\mathbb{R}$. With $a=\sqrt{c}f(x)$ and $b=g(x)/\sqrt{c}$ ,this yields $$ 2|f(x)g(x)|\leq cf(x)^2+\frac{1}{c}g(x)^2 $$ on $[a,b]$. Hence $$ 2\int_a^b|f(x)g(x)|dx\leq c\int_a^bf(x)^2dx+\frac{1}{c}\int_a^bg(x)^2dx. $$ Finally, we have $|\int_a^bf(x)g(x)dx|\leq \int_a^b|f(x)g(x)|dx$, hence the second inequality. Note: As you said, you can also deduce 2) from 1) directly by expanding $(cf+g)^2$ and then dividing by $c$. But $2|ab|\leq a^2+b^2$ is so useful I could not resist mentioning it.
When is $\| f \|_\infty$ a norm on the vector space of all continuous functions on a subset $S$? Let $S$ be any subset of $\mathbb{R^n}$. Let $C_b(S)$ denote the vector space of all bounded continuous functions on $S$. For $f \in C_b(S)$, define $\| f \|_\infty = \sup_{x \in S} |f(x)|$. When is this a norm on the vector space of all continuous functions on $S$?
For $X$ a locally compact Hausdorff space (for example an open or closed subset of $\mathbb{R}^n$, or a non-empty intersection of an open set and a closed set), the space of bounded continuous functions on $X$ with the sup norm (also written $\|\cdot\|_\infty$) is complete. The sup norm is always a norm on the space of all bounded continuous functions. On the space of all continuous functions, by contrast, $\sup_{x\in S}|f(x)|$ may be infinite, so it defines a norm exactly when every continuous function on $S$ is bounded; for $S\subseteq\mathbb{R}^n$ this happens precisely when $S$ is compact. The completion of the space of all compactly supported continuous functions on $X$ with respect to the sup norm is the space of all continuous functions on $X$ which vanish at infinity.
Evaluate $\lim_{x\to1^-}\left(\sum_{n=0}^{\infty}\left(x^{(2^n)}\right)-\log_2\frac{1}{1-x}\right)$ Evaluate$$\lim_{x\to1^-}\left(\sum_{n=0}^{\infty}\left(x^{(2^n)}\right)-\log_2\frac{1}{1-x}\right)$$ Difficult problem. Been thinking about it for a few hours now. Pretty sure it's beyond my ability. Very frustrating to show that the limit even exists. Help, please. Either I'm not smart enough to solve this, or I haven't learned enough to solve this. And I want to know which!
This is NOT a solution, but I think that others can benefit from my failed attempt. Recall that $\log_2 a=\frac{\log a}{\log 2}$, and that $\log(1-x)=-\sum_{n=1}^\infty\frac{x^n}n$ for $-1\leq x<1$, so your limit becomes $$\lim_{x\to1^-}x+\sum_{n=1}^\infty\biggl[x^{2^n}-\frac1{\log2}\frac{x^n}n\biggr]\,.$$ The series above can be rewritten as $\frac1{\log2}\sum_{k=1}^\infty a_kx^k$, where $$a_k=\begin{cases} -\frac1k,\ &\text{if}\ k\ \text{is not a power of}\ 2;\\\log2-\frac1k,\ &\text{if}\ k=2^m.\end{cases}$$ We can try to use Abel's theorem, so we consider $\sum_{k=1}^\infty a_k$. Luckily, if this series converges, say to $L$, then the desired limit is equal to $1+\frac L{\log2}\,$. Given $r\geq2$, we have $2^m\leq r<2^{m+1}$ with $m\geq1$. Then the $r$-th partial sum of this series is equal to $$\sum_{k=1}^ra_k=\biggl(\sum_{k=1}^r-\frac1k\biggr)+m\log2=m\log2-H_r\,,$$ where $H_r$ stands for the $r$-th harmonic number. It is well-known that $$\lim_{r\to\infty}H_r-\log r=\gamma\quad\text{(Euler-Mascheroni constant)}\,,$$ so $$\sum_{k=1}^ra_k=\log(2^m)-\log r-(H_r-\log r)=\log\Bigl(\frac{2^m}r\Bigr)-(H_r-\log r)\,.$$ Now the bad news: the second term clearly tends to $-\gamma$ when $r\to\infty$, but unfortunately the first term oscillates between $\log 1=0$ (when $r=2^m$) and $\bigl(\log\frac12\bigr)^+$ (when $r=2^{m+1}-1$), so $\sum_{k=1}^\infty a_k$ diverges.
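Out of curiosity, one can also poke at the original expression numerically. A sketch with mpmath (the truncation threshold and sample points are arbitrary choices of mine; it only shows how the expression behaves as $x\to1^-$, it proves nothing):

```python
from mpmath import mp, mpf, log

mp.dps = 40

def expr(x):
    s, n = mpf(0), 0
    while True:
        term = x ** (2**n)        # terms of the sum x^(2^n)
        if term < mpf(10)**-45:   # truncate once terms are negligible
            return s - log(1 / (1 - x)) / log(2)
        s += term
        n += 1

for k in range(4, 13):
    print(k, expr(1 - mpf(10)**-k))   # values at x = 1 - 10^(-k)
```

The printed values agree to quite a few decimal places, so whatever oscillation is present is tiny, which fits with how delicate the question turns out to be.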
Two problems: find $\max, \min$; number theory: find $x, y$ * *Find $x, y \in \mathbb{N}$ such that $$\left.\frac{x^2+y^2}{x-y}~\right|~ 2010$$ *Find the max and min of $\sqrt{x+1}+\sqrt{5-4x}$ (I know $\max = \frac{3\sqrt{5}}2,\, \min = \frac 3 2$)
Problem 1: I have read a similar problem with a good solution in this forum
Exponentiation when the exponent is irrational I am just curious about what inference we can draw when we calculate something like $$\text{base}^\text{exponent}$$ where base = rational or irrational number and exponent = irrational number
$2^2$ is rational while $2^{1/2}$ is irrational. Similarly, $\sqrt 2^2$ is rational while $\sqrt 2^{\sqrt 2}$ is irrational (though it is not so easily proved), so that pretty much settles all cases. Much more can be said when the base is $e$. The Lindemann-Weierstrass Theorem asserts that $e^a$ where $a$ is a non-zero algebraic number is a transcendental number.
$p$ is an odd prime; show why there are no primitive roots mod $3p$ If $p$ is an odd prime, prove there are no primitive roots mod $3p$. Where I'm at: $a^{2(p-1)}\equiv 1 \pmod{3p}$ where $a$ is a primitive root of $3p$ (arguing by contradiction). $(a/3p)=(a/3)(a/p)$ is the factorization of the Jacobi symbol into Legendre symbols, and I'm stuck here... tried a couple of things, but got nowhere; could use a helping hand :]
Note that when $p=3$ the theorem does not hold! $2$ is a primitive root mod $9$. So suppose $p$ is an odd prime other than $3$... Since $p$ is odd, let $p = 2^r k+1$ with $k$ odd and $r\ge1$. The group of units $\bmod\ 3p$ is $$(\mathbb Z/(3p))^\times \simeq (\mathbb Z/(3))^\times \times (\mathbb Z/(p))^\times \simeq \mathbb Z/(2) \times \mathbb Z/(2^r) \times \mathbb Z/(k)$$ by Sun Zi's theorem (the Chinese remainder theorem) together with the cyclicity of $(\mathbb Z/(p))^\times$. There can be no primitive root, because the product of the two factors of even order, $\mathbb Z/(2) \times \mathbb Z/(2^r)$, is not cyclic.
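One can confirm the group-theoretic conclusion by brute force: compute the largest multiplicative order modulo $3p$ and compare it with $\phi(3p)$ (a sketch, fine for small $p$):

```python
from math import gcd

def has_primitive_root(m):
    units = [a for a in range(1, m) if gcd(a, m) == 1]
    for g in units:
        x, order = g % m, 1
        while x != 1:             # multiplicative order of g mod m
            x = x * g % m
            order += 1
        if order == len(units):   # g generates the whole unit group
            return True
    return False

for p in (3, 5, 7, 11, 13, 17):
    print(p, has_primitive_root(3 * p))   # True only for the exceptional p = 3
```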
topology question about cartesian product I have a question about a proof I'm reading. It says: Suppose $A$ is an open set in the topology of the Cartesian product $X\times Y$, then you can write $A$ as the $\bigcup (U_\alpha\times V_\alpha)$ for $U_\alpha$ open in $X$ and for $V_\alpha$ open in $Y$. Why is this? (I get that the basis of the product topology of $X \times Y$ is the collection of all sets of the form $U \times V$, where $U$ is an open subset of $X$ and $V$ is an open subset of $Y$, I just don't know why we can take this random union and say that it equals $A$.)
It isn't a random union, it is the union for all open $U,V$ such that $U\times V$ is contained in $A$. As a result we immediately have half of the containment, that the union is a subset of $A$. To see why $A$ is contained in the union, consider a point $(x,y)$ in $A$. Since $A$ is open, there must be a basic open set of the product topology that contains the point and is a subset of $A$. But this is precisely a product $U\times V$ of open sets $U,V$ from $X,Y$ respectively. QED
Given a point $x$ and a closed subspace $Y$ of a normed space, must the distance from $x$ to $Y$ be achieved by some $y\in Y$? I think not, and I am looking for examples. I would like a sequence $y_n$ in $Y$ such that $\|y_n-x\|\rightarrow d(x,Y)$ while $y_n$ does not converge. Can anyone give a proof or a counterexample for this question?
This is a slight adaptation of a fairly standard example. Let $\phi: C[0,1]\to \mathbb{R}$ be given by $\phi(f)=\int_0^{\frac{1}{2}} f(t)dt - \int_{\frac{1}{2}}^1 f(t)dt$. Let $Y_\alpha = \phi^{-1}\{\alpha\}$. Since $\phi$ is continuous, $Y_\alpha$ is closed for any $\alpha$. Now let $\hat{f}(t) = 4t$ and notice that $\phi(\hat{f}) = -1$ (in fact, any $\hat{f}$ such that $\phi(\hat{f}) = -1$ will do). Then $$\inf_{f \in Y_0} \|\hat{f}-f\| = \inf \{ \|g\|\, | \, g+\hat{f} \in Y_0 \} = \inf \{ \|g\|\, | \, \phi(g) =1 \} = \inf_{g \in Y_1} \|g\|$$ It is clear that $g_n$ is a minimizing sequence for the latter problem iff $g_n+\hat{f}$ is a minimizing sequence for the initial problem. It is well known that $Y_1$ has no element of minimum norm; consequently there is no $f \in Y_0$ that minimizes $\|f-\hat{f}\|$.
What are relative open sets? I came across the following: Definition 15. Let $X$ be a subset of $\mathbb{R}$. A subset $O \subset X$ is said to be open in $X$ (or relatively open in $X$) if for each $x \in O$, there exists $\epsilon = \epsilon(x) > 0$ such that $N_\epsilon (x) \cap X \subset O$. What is $\epsilon$ and $N_\epsilon (x) $? Or more general, what are relatively open sets?
Forget your definition above. The general notion is: let $X$ be a topological space, $A\subset X$ any subset. A set $U_A$ is relatively open in $A$ if there is an open set $U$ in $X$ such that $U_A=U\cap A$. I think that in your definition $N_\epsilon(x)$ is meant to denote an open neighborhood of radius $\epsilon$ about $x$, i.e. $(x-\epsilon,\ x+\epsilon)$. As you can see, this would agree with the definition I gave you above.