Help with a short paper - cumulative binomial probability estimates I was hoping someone could help me with a brief statement I can't understand in a book.
The problem I have is with the final line of the following section of Lemma 2.2 (on the second page):
Since $|\mathcal{T}_j|$ is bin. distributed with expectation $n(\log{n})^2 2^{-\sqrt{\log{n}}}$, by the standard estimates, we have that the probability that $\mathcal{T}_j$ has more than $2\mu$ elements is at most $e^{-\mu/3} < n^{-2}$. Then, with probability at least $1- \frac{\log{n}}{n^2}$, the sum of the sizes of these sets is at most $n(\log{n})^3 2^{-\sqrt{\log{n}}}$.
Why is this?
| Wikipedia is your friend. In general, when a paper mentions using technique X, if you are not aware of technique X, then look it up. It will be impossible to fill the gap without knowing about X.
In the case at hand, X is the Chernoff bound (also Hoeffding's inequality, and even more names). It's indeed pretty standard, so it's good for you to get to know it. It's a theorem of the "concentration of measure" type, saying that if you average many well-behaved random variables, then you get something which is concentrated roughly like it should according to the central limit theorem. The central limit theorem itself doesn't give you anything about the speed of convergence to a normal distribution, and so to get an actual "large deviation bound" you use something like Chernoff. Sometimes you need more refined results, and then you use Berry-Esséen (q.v.).
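A quick numerical sketch of the bound being invoked (not part of the original answer; the parameters are arbitrary illustrative values), comparing an exact binomial upper tail with the multiplicative Chernoff estimate $e^{-\mu/3}$:

```python
# Sketch: exact binomial tail P[X >= 2*mu] vs. the Chernoff bound e^(-mu/3).
from math import comb, exp

def binom_tail(n, p, k):
    """P[X >= k] for X ~ Binomial(n, p), computed exactly."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n, p = 1000, 0.1              # mu = n*p = 100 (arbitrary illustrative values)
mu = n * p
print(f"exact tail P[X >= 2mu] = {binom_tail(n, p, int(2 * mu)):.3e}")
print(f"Chernoff bound e^(-mu/3) = {exp(-mu / 3):.3e}")
```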
|
A contest problem about multiple roots If $a$ is real, what is the only real number that could be a multiple root of $x^3+ax+1=0$?
No one in my class knows how to do it, so I have to ask it here.
| Let the multiple root be $r$, and let the other root be $s$. If $r$ is to be real, then $s$ must be real also. From Vieta's formulas, we have $2r + s = 0$ and $r^2s = -1$. The first equation gives $s = -2r$, which we plug into the second equation to get $r^2s = -2r^3 = -1$, so $r = \boxed{\left(\frac12\right)^{1/3}}$.
|
Combining Taylor expansions How do you taylor expand the function $F(x)={x\over \ln(x+1)}$ using standard results? (I know that WA offers the answer, but I want to know how to get it myself.) I know that $\ln(x+1)=x-{x^2\over 2}+{x^3\over 3}+…$ But I don't know how to take the reciprocal. In general, given a function $g(x)$ with a known Taylor series, how might I find $(g(x))^n$, for some $n\in \mathbb Q$?
Also, how might I evaluate expressions like $\ln(1+g(x))$ where I know the Taylor expansion of $g(x)$ (and $\ln x$). How do I combine them?
Thank you.
| You have
$$F(x)=\frac{x}{\sum_{n\ge 1}\frac{(-1)^{n+1}}nx^n}=\frac1{\sum_{n\ge 0}{\frac{(-1)^n}{n+1}}x^n}=\frac1{1-\frac{x}2+\frac{x^2}3-+\dots}\;.$$
Suppose that $F(x)=\sum\limits_{n\ge 0}a_nx^n$; then you want
$$1=\left(1-\frac{x}2+\frac{x^2}3-+\dots\right)\left(a_0+a_1x+a_2x^2+\dots\right)\;.$$
Multiply out and equate coefficients:
$$\begin{align*}
1&=a_0\,;\\
0&=a_1-\frac{a_0}2=a_1-\frac12,\text{ so }a_1=\frac12\,;\\
0&=a_2-\frac{a_1}2+\frac{a_0}3=a_2-\frac14+\frac13=a_2+\frac1{12},\text{ so }a_2=-\frac1{12}\,;
\end{align*}$$
and so on. In general $$a_n=\frac{a_{n-1}}2-\frac{a_{n-2}}3+\dots+(-1)^{n+1}\frac{a_0}{n+1}$$ for $n>0$, so
$$\begin{align*}&a_3=-\frac1{24}-\frac16+\frac14=\frac1{24}\;,\\
&a_4=\frac1{48}+\frac1{36}+\frac18-\frac15=-\frac{19}{720}\;,\\
&a_5=-\frac{19}{1440}-\frac1{72}-\frac1{48}-\frac1{10}+\frac16=\frac{3}{160}\;,
\end{align*}$$
and if there’s a pattern, it isn’t an obvious one, but you can get as good an approximation as you want in relatively straightforward fashion;
$$F(x)=1+\frac{x}2-\frac{x^2}{12}+\frac{x^3}{24}-\frac{19x^4}{720}+\frac{3x^5}{160}+\dots$$
already gives two or three decimal places over much of the interval of convergence.
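A minimal sketch of running this recurrence with exact rational arithmetic (the helper name `series_coeffs` is just for illustration), which reproduces the coefficients listed above:

```python
# Compute a_n for F(x) = x/ln(1+x) via
# a_n = a_{n-1}/2 - a_{n-2}/3 + ... + (-1)^(n+1) a_0/(n+1), with a_0 = 1.
from fractions import Fraction

def series_coeffs(N):
    a = [Fraction(1)]
    for n in range(1, N + 1):
        a.append(sum((-1) ** (k + 1) * a[n - k] / (k + 1) for k in range(1, n + 1)))
    return a

print([str(c) for c in series_coeffs(5)])
# ['1', '1/2', '-1/12', '1/24', '-19/720', '3/160']
```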
|
Area of a trapezoid from given the two bases and diagonals Find the area of trapezoid with bases $7$ cm and $20$ cm and diagonals $13$ cm and $5\sqrt{10} $ cm.
My approach:
Assuming that the bases of the trapezoid are the parallel sides, the solution I can think of is a bit ugly,
*
*Find the other two non-parallel sides of the trapezoid by using this formula.
*Find the height using this $$ h= \frac{\sqrt{(-a+b+c+d)(a-b+c+d)(a-b+c-d)(a-b-c+d)}}{2(b-a)}$$
Now, we can use $\frac12 \times$ sum of the parallel sides $\times$ height.
But, this is really messy and I am not sure if this is correct or feasible without electronic aid, so I was just wondering how else we could solve this problem?
| Let's denote $a=20$, $b=7$, $d_1=13$, $d_2=5\sqrt{10}$, and let $x$, $y$ be the horizontal offsets of the two endpoints of the shorter base from the nearer endpoints of the longer base, so that the diagonals have horizontal spans $b+x$ and $b+y$.
You should solve the following system of equations:
$\begin{cases}
d_1^2-(b+x)^2=d_2^2-(b+y)^2 \\
a-b=x+y
\end{cases}$
After you find the values of $x$ and $y$, calculate $h$ from one of the following equations:
$h^2=d_2^2-(b+y)^2$, or
$h^2= d_1^2-(b+x)^2$
Then calculate the area:
$A=\frac{a+b}{2} \cdot h$
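A minimal numeric sketch of this procedure for the given data (variable names follow the answer; the elimination step is spelled out in the comments):

```python
from math import sqrt

a, b, d1, d2 = 20, 7, 13, 5 * sqrt(10)
# d1^2 - (b+x)^2 = d2^2 - (b+y)^2 with x + y = a - b. Subtracting the squares:
# d1^2 - d2^2 = (b+x)^2 - (b+y)^2 = (x - y)(a + b), and x - y = 2x - (a - b).
x = ((d1**2 - d2**2) / (a + b) + (a - b)) / 2
y = (a - b) - x
h = sqrt(d1**2 - (b + x)**2)
print(x, y, h, (a + b) / 2 * h)   # x = 5, y = 8, h = 5, area = 67.5
```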
|
How to get from $a\sqrt{1 + \frac{b^2}{a^2}}$ to $\sqrt{a^2 + b^2}$ I have the following expression: $a\sqrt{1 + \frac{b^2}{a^2}}$. If I plug this into Wolfram Alpha, it tells me that, if $a, b$ are positive, this equals $\sqrt{a^2 + b^2}$.
How do I get that result? I can't see how that could be done. Thanks
| $$a\sqrt{1 + \frac{b^2}{a^2}}$$
$$=a\sqrt{\frac{a^2 + b^2}{a^2}}$$
$$=a\frac{\sqrt{a^2 + b^2}}{|a|}$$
So when $a$ and $b$ are positive, $|a|=a$. Hence:
$$=\sqrt{a^2 + b^2}$$
Without the assumption:
$$\sqrt{a^2} =|a|=\begin{cases} a && a \geq 0\\ -a &&a < 0\\ \end{cases}$$
|
How to deal with multiplication inside of integral? I have an indefinite integral like this:
$$\int x^3 \cdot \sin(4+9x^4)\,dx$$
I have to integrate it and I have no idea where to start. I have basic formulas for integrating, but I need to split this expression into two or do something else.
| Note that $$(4+9x^4)' = 36x^3$$
So that your integral becomes
$$\int x^3 \sin(4+9x^4)dx$$
$$\dfrac{1}{36}\int 36x^3 \sin(4+9x^4)dx$$
$$\dfrac{1}{36}\int \sin u du$$
Which you can easily solve.
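Completing that last step gives $\frac{1}{36}\int\sin u\,du=-\frac{1}{36}\cos(4+9x^4)+C$; a quick sympy check (a sketch):

```python
# Verify d/dx [ -cos(4 + 9x^4)/36 ] == x^3 * sin(4 + 9x^4).
import sympy as sp

x = sp.symbols('x')
antideriv = -sp.cos(4 + 9 * x**4) / 36
print(sp.simplify(sp.diff(antideriv, x) - x**3 * sp.sin(4 + 9 * x**4)))  # 0
```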
|
Prove by induction that $n!>2^n$
Possible Duplicate:
Proof the inequality $n! \geq 2^n$ by induction
Prove by induction that $n!>2^n$ for all integers $n\ge4$.
I know that I have to start from the basic step, which is to confirm the above for $n=4$, being $4!>2^4$, which equals to $24>16$.
How do I continue though. I do not know how to develop the next step.
Thank you.
| Hint: prove inductively that a product is $> 1$ if each factor is $>1$. Apply that to the product $$\frac{n!}{2^n}\: =\: \frac{4!}{2^4} \frac{5}2 \frac{6}2 \frac{7}2\: \cdots\:\frac{n}2$$
This is a prototypical example of a proof employing multiplicative telescopy. Notice how much simpler the proof becomes after transforming into a form where the induction is obvious, namely: $\:$ a product is $>1$ if all factors are $>1$. Many inductive proofs reduce to standard inductions.
|
Linear Algebra: Find a matrix A such that T(x) is Ax for each x I am having difficulty solving this problem in my homework:
(In my notation, $[x;y]$ represents a matrix of 2 rows, 1 column)
Let $\mathbf{x}=[x_1;x_2]$, $v_1$=[−3;5] and $v_2=[7;−2]$ and let $T\colon\mathbb{R}^2\to\mathbb{R}^2$ be a linear transformation that maps $\mathbf{x}$ into $x_1v_1+x_2v_2$. Find a matrix $A$ such that $T(\mathbf{x})$ is $A\mathbf{x}$ for each $\mathbf{x}$.
I am pretty clueless. So I assume that I start off with the following:
$x_1v_1 + x_2v_2 = x_1[−3;5] + x_2[7;−2]$
But I do not know what to do from here, or if this is even the correct start!
| If I understand you correctly, I would say that
$$A = \left(\begin{array}{rr}-3&7\\5&-2\end{array}\right) \ \textrm{and} \ x'=Ax.$$
You can see this if you use
$$x' = \left(\begin{array}{cc}x_1\\x_2\end{array}\right).$$
Then $$x_1'= -3\cdot x_1 + 7\cdot x_2 = x_1 \cdot v_{11} + x_2\cdot v_{21}$$ and $$x_2'= 5\cdot x_1-2\cdot x_2 = x_1\cdot v_{12} + x_2\cdot v_{22}$$ (here $v_{12}$ is the second component of $v_1$).
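A quick numerical confirmation of this (a sketch; the test vector is arbitrary):

```python
import numpy as np

v1, v2 = np.array([-3, 5]), np.array([7, -2])
A = np.column_stack([v1, v2])      # columns of A are v1 and v2
x = np.array([2, 3])               # arbitrary test vector (x1, x2)
print(A @ x, 2 * v1 + 3 * v2)      # both print [15  4]
```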
|
What does $\sin^{2k}\theta+\cos^{2k}\theta=$?
What is the sum $\sin^{2k}\theta+\cos^{2k}\theta$ equal to?
Besides Mathematical Induction,more solutions are desired.
| If you let $z_k=\cos^k(\theta)+i\sin^k(\theta)\in\Bbb C$, it is clear that
$$
\cos^{2k}(\theta)+\sin^{2k}(\theta)=||z_k||^2.
$$
When $k=1$ the complex point $z_1$ describes (under the usual Argand-Gauss identification $\Bbb C=\Bbb R^2$) the circumference of radius $1$ centered in the origin, and your expression gives $1$.
For any other value $k>1$, the point $z_k$ describes a closed curve $\cal C_k\subset\Bbb R^2$ and your expression simply computes the square distance of the generic point from the origin. There's no reason to expect that this expression may take a simpler form than it already has.
|
Density function with absolute value Let $X$ be a random variable distributed with the following density function:
$$f(x)=\frac{1}{2} \exp(-|x-\theta|) \>.$$
Calculate: $$F(t)=\mathbb P[X\leq t], \mathbb E[X] , \mathrm{Var}[X]$$
I have problems calculating $F(t)$ because of the absolute value. I'm doing it by case statements but it just doesn't get me to the right answer.
So it gets to this:
$$
\int_{-\infty}^\infty\frac{1}{2} \exp(-|x-\theta|)\,\mathrm dx $$
| The very best thing you can do in solving problems such as these is to sketch the given density function first. It does not have to be a very accurate sketch: if you drew a peak of $\frac{1}{2}$ at $x=\theta$ and decaying curves on either side, that's good enough!
Finding $F_X(t)$:
*
*Pick a number $t$ that is smaller than $\theta$ (where that peak is) and remember that $F_X(t)$ is just the area under the exponential curve to the left of $t$. You can find this area by integration.
*Think why it must be that $F_X(\theta) = \frac{1}{2}$.
*Pick a $t > \theta$. Once again, you have to find $F_X(t)$ which is
the area under the density to the left of $t$. This is clearly the area to the left of $\theta$ (said area is $\frac{1}{2}$, of course!) plus the area under the curve between $\theta$ and $t$ which you can find by
integration. Or you can be clever about it and say that the area to the right of
$t = \theta + 5$ must, by symmetry, equal the area to the left of $\theta - 5$, which you found previously. Since the total area is $1$, we have $F_X(\theta+5)=1-F_X(\theta-5)$, or more generally,
$$F_X(\theta + \alpha) = 1 - F_X(\theta - \alpha).$$
Finding $E[X]$:
Since the pdf is symmetric about $\theta$, it should work out that $E[X]=\theta$
but we do need to check that the integral does not work out to be of the undefined form $\infty-\infty$.
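For reference, carrying these steps out with the given density leads to
$$F_X(t)=\begin{cases}\frac12 e^{\,t-\theta} & \text{if } t<\theta\\ 1-\frac12 e^{-(t-\theta)} & \text{if } t\ge\theta\end{cases}\qquad E[X]=\theta,\qquad \mathrm{Var}[X]=2.$$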
|
Solution to Locomotive Problem (Mosteller, Fifty Challenging Problems in Probability) My question concerns the solution Professor Mosteller gives for the Locomotive Problem in his book, Fifty Challenging Problems in Probability. The problem is as follows:
A railroad numbers its locomotives in order 1, 2, ..., N. One day you see a locomotive and its number is 60. Guess how many locomotives the company has.
Mosteller's solution uses the "symmetry principle". That is, if you select a point at random on a line, on average the point you select will be halfway between the two ends. Based on this, Mosteller argues that the best guess for the number of locomotives is 119 (locomotive #60, plus an equal number on either "side" of 60, gives 59 + 59 + 1 = 119).
While I feel a bit nervous about challenging the judgment of a mathematician of Mosteller's stature, his answer doesn't seem right to me. I've picked a locomotive at random and it happens to be number 60. Given this datum, what number of locomotives has the maximum likelihood?
It seems to me that the best answer (if you have to choose a single value) is that there are 60 locomotives. If there are 60 locomotives, then the probability of my selecting the 60th locomotive at random is 1/60. Every other total number of locomotives gives a lower probability for selecting #60. For example, if there are 70 locomotives, I have only a 1/70 probability of selecting #60 (and similarly, the probability is 1/n for any n >= 60). Thus, while it's not particularly likely that there are exactly 60 locomotives, this conclusion is more likely than any other.
Have I missed something, or is my analysis correct?
| Choosing $2\times 60 - 1$ gives an unbiased estimate of $N$.
Choosing $60$ gives a maximum likelihood estimate of $N$.
But these two types of estimator are often different, and indeed this example is the one used by Wikipedia to show that the bias of maximum-likelihood estimators can be substantial.
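A small simulation of the two estimators being contrasted (a sketch; the true $N$ and the trial count are arbitrary):

```python
# Compare E[X] (the MLE, biased low) with E[2X - 1] (unbiased) when one
# locomotive number X is drawn uniformly from 1..N.
import random

N, trials = 119, 200_000
mle_sum = unbiased_sum = 0
for _ in range(trials):
    x = random.randint(1, N)
    mle_sum += x                   # maximum-likelihood estimate of N
    unbiased_sum += 2 * x - 1      # symmetry-principle estimate of N
print(mle_sum / trials, unbiased_sum / trials)   # ~60 vs ~119
```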
|
Lemma vs. Theorem I've been using Spivak's book for a while now and I'd like to know what is the formal difference between a Theorem and a Lemma in mathematics, since he uses the names in his book. I'd like to know a little about the etymology but mainly about why we choose Lemma for some findings, and Theorem for others (not personally, but mathematically, i.e. why should one classify a finding as lemma and not as theorem). It seems that Lemmas are rather minor findings that serve as a keystone to proving a Theorem, but that is as far as I can go.
NOTE: This question doesn't address my concern, so please avoid citing it as a duplicate.
| There is no mystery regarding the use of these terms: an author of a piece of mathematics will label auxiliary results that are accumulated in the service of proving a major result lemmas, and will label the major results as propositions or (for the most major results) theorems. (Sometimes people will not use the intermediate term proposition; it depends on the author.)
Exactly how this is decided is a matter of authorial judgement.
There is a separate issue, which is the naming of certain well-known traditional results,
such as Zorn's lemma, Nakayama's lemma, Yoneda's lemma, Fatou's lemma, Gauss's lemma,
and so on. Those names are passed down by tradition, and you don't get to change them, whatever your view is on the importance of the results. As to how they originated, one would have to investigate the literature.
|
Examples of patterns that eventually fail Often, when I try to describe mathematics to the layman, I find myself struggling to convince them of the importance and consequence of "proof". I receive responses like: "surely if Collatz is true up to $20\times 2^{58}$, then it must always be true?"; and "the sequence of number of edges on a complete graph starts $0,1,3,6,10$, so the next term must be 15 etc."
Granted, this second statement is less logically unsound than the first since it's not difficult to see the reason why the sequence must continue as such; nevertheless, the statement was made on a premise that boils down to "interesting patterns must always continue".
I try to counter this logic by creating a ridiculous argument like "the numbers $1,2,3,4,5$ are less than $100$, so surely all numbers are", but this usually fails to be convincing.
So, are there any examples of non-trivial patterns that appear to be true for a large number of small cases, but then fail for some larger case? A good answer to this question should:
*
*be one which could be explained to the layman without having to subject them to a 24 lecture course of background material, and
*have as a minimal counterexample a case which cannot (feasibly) be checked without the use of a computer.
I believe conditions 1. and 2. make my question specific enough to have in some sense a "right" (or at least a "not wrong") answer; but I'd be happy to clarify if this is not the case. I suppose I'm expecting an answer to come from number theory, but can see that areas like graph theory, combinatorics more generally and set theory could potentially offer suitable answers.
| This might be a simple example.
If we inscribe a circle of radius 1 in a square of side 2, the ratio of the area of the circle to the square is $\frac{\pi}{4}$. You can show that any time we put a square number of circles into this square, the ratio of the area of the circles to that of the square is (for the simple symmetric arrangement) again $\frac{\pi}{4} $. So for 1, 4, 9, 16 circles, this packing is the best we can do.
I had mistakenly assumed, based on this "obvious" pattern, that the limit of optimal packings of circles into the square did not converge, but rather continued to drop down to this same ratio every time a square number was reached.
This turns out not to be true, as I learned here.
The pattern breaks down at n=49 circles. At 49 circles the optimal packing, given here, is not the simple square lattice arrangement.
There are many other examples, but this served as a reminder for me.
|
Solving the equation $- y^2 - x^2 - xy = 0$ OK, this is really easy and it's driving me crazy because I forgot nearly everything I knew about maths!
I've been trying to solve this equation and I can't seem to find a way out.
I need to find out when the following equation is valid:
$$\frac{1}{x} - \frac{1}{y} = \frac{1}{x-y}$$
Well, $x \not= 0$, $y \not= 0$, and $x \not= y$ but that's not enough I suppose.
The first thing I did was passing everything to the left side:
$$\frac{x-y}{x} - \frac{x-y}{y} - 1 = 0$$
Removing the fraction:
$$xy - y^2 - x^2 + xy - xy = 0$$
But then I get stuck...
$$- y^2 - x^2 + xy = 0$$
How can I know when the above equation holds?
| $x^2-xy+y^2=(x+jy)(x+j^2y)$, where $j$ is a primitive cube root of unity, so $x=y(1\pm\sqrt{-3})/2$; in particular there are no real solutions with $x,y\neq 0$.
|
For $f$ Riemann integrable prove $\lim_{n\to\infty} \int_0^1x^nf(x)dx=0.$
Suppose $f$ is a Riemann integrable function on $[0,1]$. Prove that $\lim_{n\to\infty} \int_0^1x^nf(x)dx=0.$
This is what I am thinking: Fix $n$. Then by Jensen's Inequality we have $$0\leq\left(\int_0^1x^nf(x)dx\right)^2 \leq \left(\int_0^1x^{2n}dx\right)\left(\int_0^1f^2(x)dx\right)=\left(\frac{1}{2n+1}\right)\left(\int_0^1f^2(x)dx\right).$$Thus, if $n\to\infty$ then $$0\leq \lim_{n\to \infty}\left(\int_0^1x^nf(x)dx\right)^2 \leq 0$$ and hence we get what we want. How correct (or incorrect) is this?
| That looks great. If someone doesn't know Jensen's inequality, this is still seen just with Cauchy-Schwarz. Another quick method is the dominated convergence theorem. Gerry's and Peter's answers are both far simpler, though.
|
Is $3^x \lt 1 + 2^x + 3^x \lt 3 \cdot 3^x$ right? Is $3^x \lt 1 + 2^x + 3^x \lt 3 \cdot 3^x$ right?
This is from my lecture notes which is used to solve:
But when $x = 0$, $(1 + 2^x + 3^x = 3) \gt (3^0 = 1)$? The thing is how do I choose which what expression should go on the left & right side?
| When $x=0$, the left side $3^0=1$, the center is $3$ as you say, and the right side is $3\cdot 3^0=3 \cdot 1=3$ so the center and right sides are equal. But you want this for large $x$, so could restrict the range to $x \gt 1$, say.
|
Notation of the summation of a set of numbers Given a set of numbers $S=\{x_1,\dotsc,x_{|S|}\}$, where $|S|$ is the size of the set, what would be the appropriate notation for the sum of this set of numbers? Is it
$$\sum_{x_i \in S} x_i
\qquad\text{or}\qquad
\sum_{i=1}^{|S|} x_i$$ or something else?
| Say I had a set A, under an operation with the properties of $+$, then $$\sum_{i\in A} x_i$$ is how I write it.
|
If $A,B\in M(2,\mathbb{F})$ and $AB=I$, then $BA=I$ This is Exercise 7, page 21, from Hoffman and Kunze's book.
Let $A$ and $B$ be $2\times 2$ matrices such that $AB=I$. Prove that
$BA=I.$
I wrote $BA=C$ and I tried to prove that $C=I$, but I got stuck on that. I am supposed to use only elementary matrices to solve this question.
I know that there is this question, but in those answers they use more than I am allowed to use here.
I would appreciate your help.
| Since $AB=I$, we have $\det(AB) = \det(A)\det(B) = 1$. Hence $\det(B)\neq 0$, so $B$ is invertible.
Now let $BA=C$. Then $BAB=CB$, which gives $B=CB$; multiplying on the right by $B^{-1}$ gives $BB^{-1}=CBB^{-1}$, that is, $C=I$.
|
Fibonacci numbers modulo $p$ If $p$ is prime, then $F_{p-\left(\frac{p}{5}\right)}\equiv 0\bmod p$, where $F_j$ is the $j$th Fibonacci number, and $\left(\frac{p}{5}\right)$ is the Jacobi symbol.
Who first proved this? Is there a proof simple enough for an undergraduate number theory course? (We will get to quadratic reciprocity by the end of the term.)
| Here's a proof that only uses a little Galois theory of finite fields (and QR). I don't know if it's any of the proofs referenced by Gerry. Recall that
$$F_n = \frac{\phi^n - \varphi^n}{\phi - \varphi}$$
where $\phi, \varphi$ are the two roots of $x^2 = x + 1$. Crucially, this formula remains valid over $\mathbb{F}_{p^2}$ where $p$ is any prime such that $x^2 = x + 1$ has distinct roots, thus any prime not equal to $5$. We distinguish two cases:
*
*$x^2 = x + 1$ is irreducible. This is true for $p = 2$ and for $p > 2, p \neq 5$ it's true if and only if the discriminant $5$ isn't a square $\bmod p$, hence if and only if $\left( \frac{5}{p} \right) = -1$, hence by QR if and only if $\left( \frac{p}{5} \right) = -1$. In this case $x^2 = x + 1$ splits over $\mathbb{F}_{p^2}$ and the Frobenius map $x \mapsto x^p$ generates its Galois group, hence $\phi^p \equiv \varphi \bmod p$. It follows that $\phi^{p+1} \equiv \phi \varphi \equiv -1 \bmod p$ and the same is true for $\varphi$, hence that $F_{p+1} \equiv 0 \bmod p$.
*$x^2 = x + 1$ is reducible. This is false for $p = 2$ and for $p > 2, p \neq 5$ it's true if and only if $\left( \frac{p}{5} \right) = 1$. In this case $x^2 = x + 1$ splits over $\mathbb{F}_p$, hence $\phi^{p-1} \equiv 1 \bmod p$ and the same is true for $\varphi$, hence $F_{p-1} \equiv 0 \bmod p$.
The case $p = 5$ can be handled separately. Maybe this is slightly ugly, though.
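A dependency-free numeric check of the statement for small primes (a sketch; `legendre5` hard-codes that the squares mod $5$ are $\{1,4\}$):

```python
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def legendre5(p):
    # (p/5) for p != 5: 1 if p is a square mod 5, i.e. p % 5 in {1, 4}, else -1
    return 1 if p % 5 in (1, 4) else -1

for p in [2, 3, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]:
    assert fib(p - legendre5(p)) % p == 0, p
print("F_{p - (p/5)} is divisible by p for all listed primes")
```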
|
Resources for learning Elliptic Integrals During a quiz my Calc 3 professor made a typo. He corrected it in class, but he offered a challenge to anyone who could solve the integral.
The (original) question was:
Find the length of the curve as described by the vector valued function $\vec{r} = \frac{1}{3}t^{3}\vec{i} + t^{2}\vec{j} + 4t\vec{k} $ where $0 \le t \le 3$
This give us:
$\int_0^3 \! \sqrt{t^{4}+4t^{2}+16} \, \mathrm{d}t$
Wolfram Alpha says that the solution to this involves Incomplete Elliptic Integrals of the First and Second Kinds. I was wondering if anyone had any level appropriate resources where I can find information about how to attack integrals like this.
Thanks in advance.
| There are plenty of places to look (for example, most any older 2-semester advanced undergraduate "mathematics for physicists" or "mathematics for engineers" text), but given that you're in Calculus III, some of these might be too advanced. If you can find a copy (your college library may have a copy, or might be able to get a copy using interlibrary loan), I strongly recommend the treatment of elliptic integrals at the end of G. M. Fichtenholz's book The Indefinite Integral (translated to English by Richard A. Silverman in 1971). Also, the books below might be useful, but Fichtenholz's book would be much better suited for you, I think. (I happen to have a copy of Fichtenholz's book and Bowman's book, by the way.)
Arthur Latham Baker, Elliptic functions: An Elementary Text-book for Students of Mathematics (1906) http://books.google.com/books?id=EjYaAAAAYAAJ
Alfred Cardew Dixon, The Elementary Properties of the Elliptic Functions With Examples (1894) http://books.google.com/books?id=Gx4SAAAAYAAJ
Frank Bowman, Introduction to Elliptic Functions With Applications (reprinted by Dover Publications in 1961)
|
Why are perpendicular bisectors 'lines'? Given two points $p$ and $q$ their bisector is defined to be $l(p,q)=\{z:d(p,z)=d(q,z)\}$.
Due to the construction in Euclidean geometry, we know that $l(p,q)$ is a line, that is, for $x,y,z\in l(p,q)$ (suitably ordered), we have $d(x,y)+d(y,z)=d(x,z)$, which characterizes lines.
I wonder whether this is true for other geometries. That is, does the bisector always satisfy the above characterization?
I think about this problem when trying to prove bisectors are 'lines' in hyperbolic geometry (upper half plane) where the metric is different from Euclidean, only to notice even the Euclidean case is not so easy.
Any advice would be helpful!
| Let $A$ and $B$ be the two given points and let $M$ be the midpoint of $AB$, i.e., $M\in A\vee B$ and $d(M,A)=d(M,B)$. Let $X\ne M$ be an arbitrary point with $d(X,A)=d(X,B)$. Then the triangles $\Delta(X,A,M)$ and $\Delta(X,B,M)$ are congruent as corresponding sides have equal length. It follows that $\angle(XMA)=\angle(XMB)={\pi\over2}$ which implies that the line $m:=X\vee M$ is the unique normal to $A\vee B$ through $M$. Conversely, if $Y$ is an arbitrary point on this line, then $d(Y,M)=d(Y,M)$, $d(M,A)=d(M,B)$ and $\angle(Y,M,A)=\angle(Y,M,B)={\pi\over2}$. Therefore the triangles $\Delta(Y,M,A)$ and $\Delta(Y,M,B)$ are congruent, and we conclude that $d(Y,A)=d(Y,B)$.
The above argument is valid in euclidean geometry as well as in spherical and hyperbolic geometry. Note that a spherical triangle is completely determined (up to a motion or reflection on $S^2$) by the lengths of its three sides or by the lengths of two sides and the enclosed angle, and the same is true concerning hyperbolic triangles.
|
Can there exist a non-constant continuous function that has a derivative of zero everywhere? Somebody told me that there exists a continuous function with a derivative of zero everywhere that is not constant.
I cannot imagine how that is possible and I am starting to doubt whether it's actually true. If it is true, could you show me an example? If it is not true, how would you go about disproving it?
| Since there are no restrictions on the domain, it is actually possible. Let $f:(0,1)\cup(2,3)\to \mathbb R$ be defined by
$$f(x)=\begin{cases}
0 & \mbox{if } x \in (0,1) \\
1 & \mbox{if } x\in (2,3)
\end{cases}$$
|
Simplest Example of a Poset that is not a Lattice A partially ordered set $(X, \leq)$ is called a lattice if for every pair of elements $x,y \in X$ both the infimum and supremum of the set $\{x,y\}$ exist. I'm trying to get an intuition for how a partially ordered set can fail to be a lattice. In $\mathbb{R}$, for example, once two elements are selected the completeness of the real numbers guarantees the existence of both the infimum and supremum. Now, if we restrict our attention to a nondegenerate interval $(a,b)$ it is clear that no two points in $(a,b)$ have either a supremum or infimum in $(a,b)$.
Is this the right way to think of a poset that is not a lattice? Is there perhaps a more fundamental example that would yield further clarity?
| The set $\{x,y\}$ in which $x$ and $y$ are incomparable is a poset that is not a lattice, since $x$ and $y$ have neither a common lower nor common upper bound. (In fact, this is the simplest such example.)
If you want a slightly less silly example, take the collection $\{\emptyset, \{0\}, \{1\}\}$ ordered by inclusion. This is a poset, but not a lattice since $\{0\}$ and $\{1\}$ have no common upper bound.
|
$C[0,1]$ is not Hilbert space Prove that the space $C[0,1]$ of continuous functions from $[0,1]$ to $\mathbb{R}$ with the inner product $ \langle f,g \rangle =\int_{0}^{1} f(t)g(t)dt \quad $ is not Hilbert space.
I know that I have to find a Cauchy sequence $(f_n)_n$ which converges to a function $f$ which is not continuous, but I can't construct such a sequence $(f_n)_n$.
Any help?
| You are right to claim that in order to prove that the subspace $C[0,1]$ of $L^2[0,1]$ is not complete, it is sufficient to "find [in $C[0,1]$] a Cauchy sequence $(f_n)_n$ [i.e. Cauchy for the $L^2$-norm] which converges [in $L^2[0,1]$] to a function $f$ which is not continuous". It will even be useless to check that $(f_n)_n$ is $L^2$-Cauchy: this will result from the $L^2$-convergence.
The sequence of functions $f_n\in C[0,1]$ defined by
$$f_n(x)=\begin{cases}n\left(x-\frac12\right)&\text{if }\left|x-\frac12\right|\le\frac1n\\+1&\text{if }x-\frac12\ge\frac1n\\-1&\text{if }x-\frac12\le-\frac1n\end{cases}$$
satisfies $|f_n|\le1$ and converges pointwise to the function $f$ defined by
$$f(x)=\begin{cases}0&\text{if }x=\frac12\\+1&\text{if }x>\frac12\\-1&\text{if }x<\frac12.\end{cases}$$
By the dominated convergence theorem, we deduce $\|f_n-f\|_2\to0.$
Because of its jump, the function $f$ is discontinuous, and more precisely: not equal almost everywhere to any continuous function.
|
Solving the system $\sum \sin = \sum \cos = 0$. Can we solve the system of equations:
$$\sin \alpha + \sin \beta + \sin \gamma = 0$$
$$\cos \alpha + \cos \beta + \cos \gamma = 0$$
?
(i.e. find the possible values of $\alpha, \beta, \gamma$)
| Developing on Gerenuk's answer, you could consider the complex numbers
$$ z_1=\cos \alpha+i\sin \alpha,\ z_2=\cos \beta+i\sin\beta,\ z_3=\cos \gamma+i\sin \gamma$$
Then you know that $z_1,z_2,z_3$ are on the unit circle, and the centroid of the triangle formed by the points with affixes $z_i$ has affix $\frac{z_1+z_2+z_3}{3}=0$. From classical geometry, we can see that if the centroid of a triangle is the same as the center of the circumscribed circle, then the triangle is equilateral. This proves that $\alpha,\beta,\gamma$ are of the form $\theta,\theta+\frac{2\pi}{3},\theta+\frac{4\pi}{3}$.
|
Is the domain of an one-to-one function a set if the target is a set? This is probably very naive but suppose I have an injective map from a class into a set, may I conclude that the domain of the map is a set as well?
| If a function $f:A\to B$ is injective, we can assume without loss of generality that $f$ is surjective too (by passing to a subclass of $B$); therefore $f^{-1}:B\to A$ is also a bijection.
If $B$ is a set then every subclass of $B$ is a set, so $f^{-1}:B\to A$ is a bijection from a set, and by the axiom of replacement $A$ is a set.
|
Basis for adjoint representation of $sl(2,F)$ Consider the Lie algebra $sl(2,F)$ with standard basis $x=\begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}$, $y=\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$, $h=\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}$.
I want to find the Casimir element of the adjoint representation of $sl(2,F)$. How can I go about this? Thanks.
| A representation for a Lie algebra $\mathfrak{g}$ is a Lie algebra homomorphism $\varphi:\mathfrak{g} \to \mathfrak{gl}(V)$ for some vector space $V$.
Of course, every representation corresponds to a module action. In the case of this representation the module action would be $g \cdot v = \varphi(g)(v)$.
It is not clear what you mean by "basis for the representation". Do you mean a basis for the linear transformations $\varphi(g)$? That would be a basis for $\varphi(\mathfrak{g})$ (the image of $\mathfrak{g}$ in $\mathfrak{gl}(V)$). Or do you mean a basis for the module $V$?
The adjoint representation is the map $\mathrm{ad}:\mathfrak{g} \to \mathfrak{gl}(\mathfrak{g})$ defined by $\mathrm{ad}(g)=[g,\cdot]$. In the case that $\mathfrak{g}=\mathfrak{sl}_2$, $\mathfrak{g}$ has a trivial center so $\mathrm{ad}$ is injective. Thus a basis for $\mathfrak{g}$ maps directly to a basis for $\mathrm{ad}(\mathfrak{g})$.
Therefore, if by "basis for the representation" you mean a basis for the space of linear transformations $\mathrm{ad}(\mathfrak{sl}_2)$, then "Yes" $\mathrm{ad}_e, \mathrm{ad}_f,$ and $\mathrm{ad}_h$ form a basis for this space.
On the other hand, if you mean "basis for the module" then $e,f,$ and $h$ themselves form a basis for $V=\mathfrak{sl}_2$.
By the way, if you are looking for matrix representations of $\mathrm{ad}_e,\mathrm{ad}_f,\mathrm{ad}_h$ relative to the basis $e,f,h$, simply compute commutators: $[e,e]=0$, $[e,f]=h$, $[e,h]=-2e$. Thus the coordinate matrix of $\mathrm{ad}_e$ is $$\begin{bmatrix} 0 & 0 & -2 \\ 0 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}$$
|
What are the possible values for $\gcd(a^2, b)$ if $\gcd(a, b) = 3$? I was looking back at my notes on number theory and I came across this question.
Let $a$, $b$ be positive integers such that $\gcd(a, b) = 3$. What are the possible values for $\gcd(a^2, b)$?
I know it has to do with their prime factorization decomposition, but where do I go from here?
| If $p$ is a prime, and $p|a^2$, then $p|a$; thus, if $p|a^2$ and $p|b$, then $p|a$ and $p|b$, hence $p|\gcd(a,b) = 3$. So $\gcd(a^2,b)$ must be a power of $3$.
Also, $3|a^2$ and $3|b$, so $3|\gcd(a^2,b)$; so $\gcd(a^2,b)$ is a multiple of $3$.
If $3^{2k}|a^2$, then $3^k|a$ (you can use prime factorization here); so if $3^{2k}|\gcd(a^2,b)$, then $3^k|\gcd(a,b) = 3$. Thus, $k\leq 1$. And $27$ cannot divide $\gcd(a^2,b)$ either: the exponent of $3$ in $a^2$ is even, so $27\mid a^2$ forces $3^2\mid a$, which together with $9\mid b$ would give $9\mid\gcd(a,b)=3$. That is, no power of $3$ greater than $3^2$ can divide $\gcd(a^2,b)$.
In summary: $\gcd(a^2,b)$ must be a power of $3$, must be a multiple of $3$, and cannot be divisible by $3^3=27$. What's left? Now give examples to show all of those possibilities can occur.
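Following that last suggestion, a brute-force sketch of which values actually occur:

```python
from math import gcd

values = {gcd(a * a, b)
          for a in range(1, 200) for b in range(1, 200)
          if gcd(a, b) == 3}
print(sorted(values))   # [3, 9]
```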
|
The inclusion $j:L^{\infty}(0,1)\to L^1(0,1)$ is continuous but not compact. I'm stuck on this problem, namely I cannot find a bounded subset in $L^\infty(0,1)$ such that it is not mapped by the canonical inclusion $$j: L^\infty(0,1)\to L^1(0,1)$$ onto a relatively compact subset in $L^1(0,1)$. Can anybody provide me an example? Really I don't see the point.
My thoughts are wandering around the fact that the ball of $L^\infty(0,1)$ is norm dense in $L^1(0,1)$, so the inclusion cannot be compact; however, as I said, no practical examples come to my mind.
Thank you very much in advance.
| This is actually just a variant of a special case of NKS’s example, but it may be especially easy to visualize with this description.
For $n\in\mathbb{Z}^+$ and $x\in(0,1)$ let $f_n(x)$ be the $n$-th bit in the unique non-terminating binary expansion of $x$. Then $\|f_n\|_\infty=1$, but $\|f_n-f_m\|_1=\frac12$ whenever $n\ne m$.
|
Infinite distinct factorizations into irreducibles for an element Consider the factorization into irreducibles of $6$ in $\mathbb{Z}[\sqrt{-5}]$. We have $6=2 \times 3$ and $6=(1+\sqrt{-5}) \times (1-\sqrt{-5})$, i.e. $2$ distinct factorizations. And,
$$6^2=3 \times 3\times2\times2$$
$$=(1+\sqrt{-5}) \times (1-\sqrt{-5}) \times (1+\sqrt{-5}) \times (1-\sqrt{-5})$$
$$=(1+\sqrt{-5}) \times (1-\sqrt{-5})\times3\times2.$$
More generally, $6^n$ will have $n+1$ distinct factorizations into irreducibles in $\mathbb{Z}[\sqrt{-5}]$ by a simple combinatorial argument. But, can we construct a ring in which there exists an element that has an infinite number of distinct factorizations into irreducibles? To make life harder, can we construct an extension of $\mathbb{Z}$ in which this happens? I have been thinking about this for a while and have managed to find no foothold.. Any help is appreciated.
| If you are only interested in behaviour in the ring of integers of a number field (such as $\mathbb{Z}[\sqrt{-5}]$) then you will never get infinitely many different factorisations of an element.
These different factorisations come from reordering the (finitely many) prime ideals in the unique factorisation of the ideal generated by your element.
|
For $x+y=n$, $y^x < x^y$ if $x<y$ (updated)
I'd like to use this property for my research, but it's somewhat messy to prove.
$$\text{For all natural number $x,y$ such that $x+y=n$ and $1<x<y<n$, then $y^x < x^y$}.$$
For example, let $x=3, y=7$. Then $y^x = y^3 = 343$ and $x^y = 3^7 = 2187$. Any suggestion on how to prove this?
| I proved this in the special case $x = 99, y = 100$, here. As others have pointed out, what you really want to hold is the following:
Statement: Let $x, y \in \mathbb{R}$. Then $y > x > e$ implies $x^y > y^x$.
Proof: Write $y = x + z$, where $z > 0$. Then,
$$\begin{align}
x^y > y^x &\iff x^x x^z > y^x
\\
&\iff x^z > \left(\frac{x+z}{x} \right)^x
\\
&\iff x^z > \left( 1 + \frac{z}{x} \right)^x.
\end{align}$$
The right hand side $\left(1 + \frac{z}{x} \right)^x$ is monotone increasing in $x$ with limit $e^z$. Since the left hand side is strictly greater than $e^z$ (as $x > e$), it follows that the inequality always holds.
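A numeric spot-check of the statement (a sketch; the sampling window is arbitrary, and a small gap between $x$ and $y$ avoids floating-point ties near $x=e$, where the two sides are nearly equal):

```python
import math, random

for _ in range(10_000):
    x = random.uniform(math.e, 50)
    y = random.uniform(x + 0.01, 50.01)
    # x**y > y**x  <=>  y*ln(x) > x*ln(y)
    assert y * math.log(x) > x * math.log(y), (x, y)
print("no counterexample found in 10,000 samples")
```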
|
"Every linear mapping on a finite dimensional space is continuous" From Wiki
Every linear function on a finite-dimensional space is continuous.
I was wondering what the domain and codomain of such linear function are?
Are they any two topological vector spaces (not necessarily the same), as along as the domain is finite-dimensional? Can the codomain be a different normed space (and may not be finite-dimensional)?
I asked this because I saw elsewhere the same statement except the domain is a finite-dimensional normed space, and am also not sure if the codomain can be a different normed space (and may not be finite-dimensional).
Thanks and regards!
| The special case of a linear transformations $A: \mathbb{R}^n \to \mathbb{R}^n$ being continuous leads nicely into the definition and existence of the operator norm of a matrix as proved in these notes.
To summarise that argument, if we identify $M_n(\mathbb{R})$ with $\mathbb{R^{n^2}}$, and suppose that $v \in \mathbb{R}^n$ has co-ordinates $v_j$, then by properties of the Euclidean and sup norm on $\mathbb{R}^n$ we have:
$\begin{align}||Av|| &\leq \sqrt{n} \,||Av||_{\sup} \\&= \sqrt{n}\max_i\bigg|\sum_{j}a_{ij}\,v_j\bigg|\\&\leq \sqrt{n}\max_i \sum_{j}|a_{ij}\,v_j|\\&\leq \sqrt{n} \max_i n\big(\max_j|a_{ij} v_j|\big)\\&\leq n\sqrt{n} \max_i \big(\max_j |a_{ij}| \max_j |v_j|\big)\\&= n\sqrt{n}\max_{i,j}|a_{ij}|||v||_{\sup}\\&\leq n \sqrt{n} \max_{i,j}|a_{ij}|||v||
\end{align}$
$\Rightarrow ||Av|| \leq C ||v||$ where $C = n\sqrt{n}\displaystyle\max_{i,j}|a_{ij}|$ is independent of $v$
So if $\varepsilon>0$ is given, choose $\delta = \dfrac{\varepsilon}{C}$ and for $v, w \in \mathbb{R}^n$ with $||v-w||< \delta$ consider
$||Av - Aw || = ||A(v-w) || \leq C ||v-w || < \delta C= \varepsilon$ from which we conclude that $A$ is uniformly continuous.
|
What is the Jacobian? What is the Jacobian of the function $f(u+iv)={u+iv-a\over u+iv-b}$?
I think the Jacobian should be something of the form $\left(\begin{matrix}
{\partial f_1\over\partial u} & {\partial f_1\over\partial v} \\
{\partial f_2\over\partial u} & {\partial f_2\over\partial v}
\end{matrix}\right)$
but I don't know what $f_1,f_2$ are in this case. Thank you.
| You could just write $(u+iv-(a_1+a_2 i))/(u+iv-(b_1+b_2 i))$ where $u,v,a_1,a_2,b_1,b_2$ are real. Then multiply the numerator and denominator by the complex conjugate of the denominator to find the real and imaginary parts.
Then later, exploit the Cauchy--Riemann equations to conclude that the matrix must have the form $\begin{bmatrix} c & -d \\ d& c\end{bmatrix}$ where $c$ and $d$ are some real numbers and $f\;'(z)=c+id$.
|
Indefinite integral of $\cos^{3}(x) \cdot \ln(\sin(x))$ I need help. I have to integrate $\cos^{3}(x) \cdot \ln(\sin(x))$ and I don't know how to solve it. In our book it says that we have to solve it using the substitution method. If somebody knows it, please help me.
| Substitute :
$\sin x = t \Rightarrow \cos x \,dx = dt$, hence:
$I=\int (1-t^2)\cdot \ln (t) \,dt$
This integral you can solve using the integration by parts method.
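A hedged completion via sympy (integration by parts in $t$, then substituting back):

```python
import sympy as sp

x = sp.symbols('x')
t = sp.symbols('t', positive=True)
inner = sp.integrate((1 - t**2) * sp.log(t), t)   # the answer's integral in t
result = inner.subs(t, sp.sin(x))
print(sp.simplify(sp.diff(result, x) - sp.cos(x)**3 * sp.log(sp.sin(x))))  # 0
```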
|
Proof of $\sum_{0 \le k \le a} {a \choose k} {b \choose k} = {a+b \choose a}$ $$\sum_{0 \le k \le a}{a \choose k}{b \choose k} = {a+b \choose a}$$
Is there any way to prove it directly?
Using that $\displaystyle{a \choose k}=\frac{a!}{k!(a-k)!}$?
| How about this proof? (Actually an extended version of your identity.)
*
*http://en.wikipedia.org/wiki/Chu-Vandermonde_identity#Algebraic_proof
I don't think it is "direct" enough, though...
|
Must a measure on $2^{\mathbb{N}}$ be atomless to be a measure on $[0,1]$? This question comes from section 4.4, page 17, of this paper.
Let $\mu$ be a Borel measure on Cantor space, $2^\mathbb{N}$. The authors say that
If the measure is atomless, via the binary expansion of reals we can view it also as a Borel measure on $[0,1]$.
Is it necessary that $\mu$ be atomless?
| The existence of the measure on $[0,1]$ has nothing to do with atoms, per se.
Let $\varphi: 2^\mathbb{N}\to [0,1]$ be defined by $\varphi(x)=\sum_{n=0}^\infty {x(n)/2^{n+1}}$. This map is Borel measurable, and so for any Borel measure $\mu$ on $2^\mathbb{N}$, the image measure $\mu\circ\varphi^{-1}$ is a Borel measure on $[0,1]$.
The authors mention this condition, I think, so they can go back and forth between the two viewpoints. That is, for atomless measures the map $\mu\mapsto \mu\circ\varphi^{-1}$ is
one-to-one.
|
Disprove uniform convergence of $\sum_{n=1}^{\infty} \frac{x}{(1+x)^n}$ in $[0,\infty)$ How would I show that $\sum_{n=1}^{\infty} \frac{x}{(1+x)^n}$ does not uniformly converge in $[0,\infty)$?
I don't know how to approach this problem.
Thank you.
| This is almost the same as Davide's answer:
let
$$f_n(x)={x\over (1+x)^n},\ n\in\Bbb N^+;\ \ \text{ and }\ \ f(x)= \sum\limits_{n=1}^\infty {x\over(1+x)^n}.$$ Since, for $x>0$, the series $\sum\limits_{n=1}^\infty {1\over(1+x)^n}$ is a geometric series with $r={1\over 1+x}$:
$$
f(x)=x\sum_{n=1}^\infty {1\over(1+x)^n} =x\cdot{ 1/(1+x)\over 1-\bigl(1/(1+x)\bigr)}
=x\cdot{1\over x}=1,
$$
for $x>0$.
As $f(0)=0$, we see that $f(x)$ converges pointwise to a discontinuous function on $[0,\infty)$. Since a uniform limit of continuous functions is continuous, and each partial sum of the series is continuous on $[0,\infty)$, it follows that the series does not converge uniformly on $[0,\infty)$.
|
an open ball in $\mathbb{R^n}$ is connected
Show that an open ball in $\mathbb{R^n}$ is a connected set.
Attempt at a Proof: Let $r>0$ and $x_o\in\mathbb{R^n}$. Suppose $B_r(x_o)$ is not connected. Then, there exist $U,V$ open in $\mathbb{R^n}$ that disconnect $B_r(x_o)$. Without loss of generality, let $a\in B_r(x_o)$: $a\in U$. Since $U$ is open, for some $r_1>0$, $B_{r_1}(a)\subseteq U$. Since $(U\cap B_r(x_o))\cap (V\cap B_r(x_o))=\emptyset$, $a\not\in V$. Thus, $\forall b\in V, d(a,b)>0$. But then for some $b'\in V: b'\in B_r(x_o)$ and some $r>0$, $d(a,b')>r$. Contradiction since both $a$ and $b'$ were in the ball of radius $r$.
Is this the general idea?
| $\mathbb{R}=(-\infty,\infty)$ is connected. Since a finite product of connected spaces is connected, $\mathbb{R}^n$ is connected. An open ball is homeomorphic to $\mathbb{R}^n$, and connectedness is preserved by homeomorphisms, so the result follows.
|
Show that $\tan 3x =\frac{ \sin x + \sin 3x+ \sin 5x }{\cos x + \cos 3x + \cos 5x}$ I was able to prove this but it is too messy and very long. Is there a better way of proving the identity? Thanks.
| More generally, for any arithmetic sequence, denoting $z=\exp(i x)$ and $2\ell=an+2b$, we have
$$\begin{array}{c l}
\blacktriangle & =\frac{\sin(bx)+\sin\big((a+b)x\big)+\cdots+\sin\big((na+b)x\big)}{\cos(bx)+\cos\big((a+b)x\big)+\cdots+\cos\big((na+b)x\big)} \\[2pt]
& \color{Red}{\stackrel{1}=}
\frac{1}{i}\frac{z^b\big(1+z^a+\cdots+z^{na}\big)-z^{-b}\big(1+z^{-a}+\cdots+z^{-na}\big)}{z^b\big(1+z^a+\cdots+z^{na}\big)+z^{-b}\big(1+z^{-a}+\cdots+z^{-na}\big)} \\[2pt]
& \color{LimeGreen}{\stackrel{2}=}\frac{1}{i}\frac{z^b-z^{-b}z^{-na}}{z^b+z^{-b}z^{-na}} \\[2pt]
& \color{Blue}{\stackrel{3}=}\frac{(z^\ell-z^{-\ell})/2i}{(z^\ell+z^{-\ell})/2} \\[2pt]
& \color{Red}{\stackrel{1}{=}}\frac{\sin (\ell x)}{\cos(\ell x)}.
\end{array}$$
Hence $\blacktriangle$ is $\tan(\ell x)$ - observe $\ell$ is the average of the first and last term in the arithmetic sequence.
$\color{Red}{(1)}$: Here we use the formulas $$\sin \theta = \frac{e^{i\theta}-e^{-i\theta}}{2i} \qquad \cos\theta = \frac{e^{i\theta}+e^{-i\theta}}{2}.$$
$\color{LimeGreen}{(2)}$: Here we divide numerator and denominator by $1+z^a+\cdots+z^{na}$.
$\color{Blue}{(3)}$: Multiply numerator and denominator by $z^{na/2}/2$.
Note: there are no restrictions on $a$ or $b$ - they could even be irrational!
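A quick numeric sanity check of the original identity at a few sample points (a sketch):

```python
import math

for x in [0.1, 0.7, 1.1]:
    lhs = math.tan(3 * x)
    rhs = (math.sin(x) + math.sin(3 * x) + math.sin(5 * x)) / \
          (math.cos(x) + math.cos(3 * x) + math.cos(5 * x))
    assert abs(lhs - rhs) < 1e-12, x
print("tan(3x) matches at the sampled points")
```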
|
Finite Rings whose additive structure is isomorphic to $\mathbb{Z}/(n \mathbb{Z})$ I am having trouble proving the following conjecture: If $R$ is a ring with $1_R$ different from $0_R$ s.t. its additive structure is isomorphic to $\mathbb{Z}/(n \mathbb{Z})$ for some $n$, must $R$ always be isomorphic to the ring $\mathbb{Z}/(n \mathbb{Z})$ ? How do we go about defining a ring isomorphism with a proper multiplication on $R$?
| Combine the following general facts:
For any ring $R$, the prime ring (i.e. the subring generated by $1$) is isomorphic to the quotient of $\mathbb Z$ by the annihilator of $R$ in $\mathbb Z$.
Any cyclic group $R$ is isomorphic to the quotient of $\mathbb Z$ by the annihilator of $R$ in $\mathbb Z$.
(This is Mariano's answer with slightly different words.)
|
Normal distribution involving $\Phi(z)$ and standard deviation The random variable X has normal distribution with mean $\mu$ and standard deviation $\sigma$. $\mathbb{P}(X>31)=0.2743$ and $\mathbb{P}(X<39)=0.9192$. Find $\mu$ and $\sigma$.
| Hint:
Write,
$$ \tag{1}\textstyle
P[\,X>31\,] =P\bigl[\,Z>{31-\mu\over\sigma}\,\bigr]=.2743\Rightarrow {31-\mu\over\sigma} = z_1
$$
$$\tag{2}\textstyle
P[\,X<39\,] =P\bigl[\,Z<{39-\mu\over\sigma}\,\bigr]=.9192\Rightarrow {39-\mu\over\sigma} =z_2 ,
$$
where $Z$ is the standard normal random variable.
You can find the two values $z_1$ and $z_2$ from a cdf table for the standard normal distribution. Then you'll have two equations in two unknowns. Solve those for $\mu$ and $\sigma$.
For example, to find $z_1$ and $z_2$, you can use the calculator here. It gives the value $z$ such that $P[Z<z]=a$, where you input $a$.
To use the calculator for the first equation first write
$$\textstyle P\bigl[\,Z<\underbrace{31-\mu\over\sigma}_{z_1}\,\bigr]=1-P\bigl[\,Z>{31-\mu\over\sigma}\,\bigr] =1-.2743=.7257.$$
You input $a=.7257$, and it returns $z_1\approx.59986$.
To use the calculator for the second equation,
$$\textstyle P\bigl[\,Z<\underbrace{39-\mu\over\sigma}_{z_2}\,\bigr]= .9192,$$
input $a=.9192$, the calculator returns $z_2\approx1.3997$.
So, you have to solve the system of equations:
$$
\eqalign{
{31-\mu\over\sigma}&=.59986\cr
{39-\mu\over\sigma}&=1.3997\cr
}
$$
(The solution is $\sigma\approx 10$, $\mu\approx 25$.)
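The same computation with scipy's inverse normal cdf (a sketch; `norm.ppf` is the quantile function, i.e. the inverse of the standard normal cdf):

```python
from scipy.stats import norm

z1 = norm.ppf(1 - 0.2743)       # ~0.59986
z2 = norm.ppf(0.9192)           # ~1.39968
sigma = (39 - 31) / (z2 - z1)
mu = 31 - sigma * z1
print(mu, sigma)                # ~25.0, ~10.0
```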
|
Finding a simple expression for this series expansion without a piecewise definition I am doing some practice Calculus questions and I ran into the following problem which ended up having a reduction formula with a neat expansion that I was wondering how to express in terms of a series. Here it is: consider
$$
I_{n} = \int_{0}^{\pi /2} x^n \sin(x) dx
$$
I obtained the reduction formula
$$
I_{n} = n\left(\frac{\pi}{2}\right)^{n-1} - n I_{n-1}.
$$
I started incorrectly computing up to $I_{6}$ with the reduction formula
$$
I_{n} = n\left(\frac{\pi}{2}\right)^{n-1} - I_{n-1}
$$
by accident which ended up having a way more interesting pattern than the correct reduction formula. So, after computing $I_{0} = 1$, the incorrect reduction expansion was,
$$
I_{1} = 0 \\
I_{2} = \pi \\
I_{3} = \frac{3\pi^2}{2^2} - \pi \\
I_{4} = \frac{4\pi^3}{2^3} - \frac{3\pi^2}{2^2} + \pi \\
I_{5} = \frac{5\pi^4}{2^4} - \frac{4\pi^3}{2^3} + \frac{3\pi^2}{2^2} - \pi \\
I_{6} = \frac{6\pi^5}{2^5} - \frac{5\pi^4}{2^4} + \frac{4\pi^3}{2^3} - \frac{3\pi^2}{2^2} + \pi \\
$$
Note that $\pi = \frac{2\pi}{2^1}$, of course, which stays in the spirit of the pattern. How could I give a general expression for this series without defining a piecewise function for the odd and even cases? I was thinking of having a term in the summand with $(-1)^{2i+1}$ or $(-1)^{2i}$ depending on it was a term with an even or odd power for $n$, but that led to a piecewise defined function. I think that it will look something like the following, where $f(x)$ is some function that handles which term gets a negative or positive sign depending on whether $n$ is an even or odd power in that term: $$\sum\limits_{i=1}^{n} n \left(\frac{\pi}{2} \right)^{n-1} f(x)$$
Any ideas on how to come up with a general expression for this series?
| $$
\color{green}{I_n=\sum\limits_{i=2}^{n} (-1)^{n-i}\cdot i\cdot\left(\frac{\pi}{2} \right)^{i-1}}
$$
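A quick check of this closed form against the recurrence from the question (a sketch):

```python
from math import pi

I = 1.0                                    # I_0 = 1
for n in range(1, 10):
    I = n * (pi / 2) ** (n - 1) - I        # the (accidental) recurrence
    closed = sum((-1) ** (n - i) * i * (pi / 2) ** (i - 1) for i in range(2, n + 1))
    assert abs(I - closed) < 1e-9, n
print("closed form matches the recurrence for n = 1..9")
```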
|
Why do introductory real analysis courses teach bottom up? A big part of introductory real analysis courses is getting intuition for the $\epsilon-\delta\,$ proofs. For example, these types of proofs come up a lot when studying differentiation, continuity, and integration. Only later is the notion of open and closed sets introduced. Why not just introduce continuity in terms of open sets first (E.g. it would be a better visual representation)? It seems that the $\epsilon-\delta$ definition would be more understandable if a student is first exposed to the open set characterization.
| I'm with Alex Becker, I first learned convergence of sequences, using epsilon and deltas, and only later moved on to continuity of functions. It worked out great for me. I don't believe that the abstraction from topology would be useful at this point. The ideas of "$x$ is near $y$", "choosing $\epsilon$ as small as you want", etc, are better expressed by epsilon-delta arguments, because they quantify/translate the words "near" and "small".
Maybe one could talk about "size of intervals", grasping the idea of "open neighborhood" and retaining the epsilons.
|
Extension and Self Injective Ring Let $R$ be a self injective ring. Then $R^n$ is an injective module. Let $M$ be a submodule of $R^n$ and let $f:M\to R^n$ be an $R$-module homomorphism. By injectivity of $R^n$ we know that we can extend $f$ to $\tilde{f}:R^n\to R^n$.
My question is that if $f$ is injective, can we also find an injective extension $\tilde{f}:R^n\to R^n$?
Thank you in advance for your help.
| The question is also true without any commutativity for quasi-Frobenius rings.
Recall that a quasi-Frobenius ring is a ring which is one-sided self injective and one-sided Noetherian. They also happen to be two-sided self-injective and two-sided Artinian.
For every finitely generated projective module $P$ over a quasi-Frobenius ring $R$, a well-known fact is that isomorphisms of submodules of $P$ extend to automorphisms of $P$. (You can find this on page 415 of Lam's Lectures on Modules and Rings.)
Obviously your $P=R^n$ is f.g. projective, and injecting $M$ into $P$ just results in an isomorphism between $M$ and its image, so there you have it!
In fact, this result seems a bit overkill for your original question, so I would not be surprised if a class properly containing the QF rings and satisfying your condition exists.
|
Is the product of symmetric positive semidefinite matrices positive definite? I see on Wikipedia that the product of two commuting symmetric positive definite matrices is also positive definite. Does the same result hold for the product of two positive semidefinite matrices?
My proof of the positive definite case falls apart for the semidefinite case because of the possibility of division by zero...
| Actually, one has to be very careful in the way one interprets the results of Meenakshi and Rajian (referenced in one of the posts above). Symmetry is inherent in their definition of positive definiteness. Thus, their result can be stated very simply as follows: If $A$ and $B$ are symmetric and PSD, then $AB$ is PSD iff $AB$ is symmetric.
A direct proof for this result can be given as follows. If $AB$ is PSD, it is symmetric (by Meenakshi and Rajian's definition of PSD). If it is symmetric, it is PSD since the eigenvalues of $AB$ are non-negative.
To summarize, all the stuff about normality in their paper is not required (since normality of $AB$ is equivalent to the far simpler condition of symmetry of $AB$ when $A$ and $B$ are symmetric PSD). The most important point here is that if one adopts a more general definition for PSD ($x^TAx\ge 0$) and if one now considers cases where the product $AB$ is unsymmetric, then their results do not go through.
|
Why is there no continuous square root function on $\mathbb{C}$? I know that when taking square roots of reals, we can choose the standard square root in such a way that the square root function is continuous, with respect to the metric.
Why is that not the case over $\mathbb{C}$, with respect to the $\mathbb{R}^2$ metric? I suppose what I'm trying to ask is: why is there no continuous function $f$ on $\mathbb{C}$ such that $f(z)^2=z$ for all $z$?
This is what I was reading, but didn't get:
Suppose there exists some $f$, and restrict attention to $S^1$. Given $t\in[0,2\pi)$, we can write
$$
f(\cos t+i\sin t)=\cos(\psi (t))+i\sin(\psi (t))
$$
for unique $\psi(t)\in\{t/2,t/2+\pi\}$. (I don't understand this assertion of why the displayed equality works, and why $\psi$ only takes those two possible values.) If $f$ is continuous, then $\psi:[0,2\pi)\to[0,2\pi)$ is continuous. Then $t\mapsto \psi(t)-t/2$ is continuous, and takes values in $\{0,\pi\}$ and is thus constant. This constant must equal $\psi(0)$, so $\psi(t)=\psi(0)+t/2$. Thus $\lim_{t\to 2\pi}\psi(t)=\psi(0)+\pi$.
Then
$$
\lim_{t\to 2\pi} f(\cos t+i\sin t)=-f(1).
$$
(How is $-f(1)$ found on the RHS?) Since $f$ is continuous, $f(1)=-f(1)$, impossible since $f(1)\neq 0$.
I hope someone can clear up the two problems I have understanding the proof. Thanks.
| Here is a proof for those who know a little complex function theory.
Suppose $(f(z))^2=z$ for some continuous $f$.
By the implicit function theorem, $f(z)$ is complex differentiable (=holomorphic) for all $z\neq0$ in $\mathbb C$.
However since $f$ is continuous at $0$, it is also differentiable there thanks to Riemann's extension theorem.
Differentiating $z=f(z)^2$ at $z=0$ leads to $1=2f(0)f'(0)=2\cdot0\cdot f'(0)=0 \;$. Contradiction.
|
How can I evaluate an expression like $\sin(3\pi/2)$ on a calculator and get an answer in terms of $\pi$? I have an expression like this that I need to evaluate:
$$16\sin(2\pi/3)$$
According to my book the answer is $8\sqrt{3}$. However, when I'm using my calculator to get this I get an answer like $13.86$. What I want to know, is it possible to make a calculator give the answer without evaluating $\pi$, so that $\pi$ is kept separate in the answer? And the same for in this case, $\sqrt{3}$. If the answer involves a square root, I want my calculator to say that, I don't want it to be evaluated.
I am using the TI-83 Plus if that makes a difference.
| Here’s something I used to tell students that might help. Among the angles that you’re typically expected to know the trig. values for ($30,$ $45,$ $60$ degrees and their cousins in the other quadrants), the only irrational values for the sine, cosine, tangent have the following magnitudes:
$$\frac{\sqrt{2}}{2}, \;\; \frac{\sqrt{3}}{2}, \;\; \sqrt{3}, \;\; \frac{\sqrt{3}}{3}$$
Note that if you square each of these, you get:
$$\frac{1}{2}, \;\; \frac{3}{4}, \;\; 3, \;\; \frac{1}{3}$$
Now consider the decimal expansions of these fractions:
$$0.5, \;\; 0.75, \;\; 3, \;\; 0.3333…$$
The important thing to notice is that if you saw any of these decimal expansions, you’d immediately know its fractional equivalent. (O-K, most people would know it!)
Now you can see how to use a relatively basic calculator to determine the exact value of $\sin\left(2\pi / 3 \right).$ First, use your calculator to find a decimal for $\sin\left(2\pi / 3 \right).$ Using a basic calculator (mine is a TI-32), I get $0.866025403.$ Now square the result. Doing this, I get $0.75.$ Therefore, I know that the square of $\sin\left(2\pi / 3 \right)$ is equal to $\frac{3}{4},$ and hence $\sin\left(2\pi / 3 \right)$ is equal to $\sqrt{\frac{3}{4}}.$ The positive square root is chosen because I got a positive value for $\sin\left(2\pi / 3 \right)$ when I used my calculator. Finally, we can rewrite this as $\frac{\sqrt{3}}{\sqrt{4}}=\frac{\sqrt{3}}{2}.$
What follows are some comments I posted in sci.math (22 June 2006) about this method.
By the way, I used to be very concerned in the early days of calculators that students could obtain all the exact values of the $30,$ $45,$ and $60$ degree angles by simply squaring the calculator output, recognizing the equivalent fraction of the resulting decimal [note that the squares of the sine, cosine, tangent of these angles all come out to fractions that any student would recognize (well, they used to be able to recognize) from its decimal expansion], and then taking the square root of the fraction. As the years rolled by, I got to where I didn't worry about this at all, because even when I taught this method in class (to help students on standardized tests and to help them for other teachers who were even more insistent about using exact values than I was), the majority of my students had more trouble doing this than just memorizing the values!
|
Finding an indefinite integral I have worked through and answered correctly the following question:
$$\int x^2\left(8-x^3\right)^5dx=-\frac{1}{3}\int\left(8-x^3\right)^5\left(-3x^2\right)dx$$
$$=-\frac{1}{3}\times\frac{1}{6}\left(8-x^3\right)^6+c$$
$$=-\frac{1}{18}\left(8-x^3\right)^6+c$$
however I do not fully understand all of what I have done or why I have done it (other than I used principles I saw in a similar example question).
Specifically I picked $-\frac{1}{3}$ to multiply the whole of the integral because it is the reciprocal of $-3$ but I do not fully understand why it is necessary to perform this step.
The next part I do not understand is on the second line what causes the $\left(-3x^2\right)$ to disappear?
Here is what I think is happening:
$$-\frac{1}{3}\times-3x^2=x^2$$
therefore
$$\int x^2\left(8-x^3\right)^5dx=-\frac{1}{3}\int\left(8-x^3\right)^5\left(-3x^2\right)dx$$
But I picked as stated before the reciprocal of $-3$ because it was the coefficient of the derivative of the expression $8-x^3$ not because it would leave an expression equivalent to $x^2$. For example if I alter the question slightly to:
$$\int x^3\left(8-x^3\right)^5dx$$
then by picking $-\frac{1}{3}$ the following statement would be false?
$$\int x^3\left(8-x^3\right)^5dx=-\frac{1}{3}\int\left(8-x^3\right)^5\left(-3x^2\right)dx$$
Also
$$\int-3x^2\,dx=-3\left(\frac{1}{3}x^3\right)+c$$
$$=-x^3+c$$
Which is why I am confused as to why when integrating the full question $-3x^2$ seems to disappear.
| You correctly recognised $x^2$ as "almost" the derivative of $8 - x^3$.
So put $u = 8 - x^3$, and find $du/dx = -3x^2$.
Then your integral becomes $-\frac{1}{3}\int(-3x^2)(8-x^3)^5\,dx$
$= -\frac{1}{3}\int u^5\,\frac{du}{dx}\,dx$
$= -\frac{1}{3}\int u^5\,du$ -- which is rather easier to follow. It is the change of variable procedure, which is the reverse of the chain rule for derivatives.
(To verify this procedure, put I = the integral, and compare dI/dx and dI/du, using the chain rule.)
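If you want a machine check of the final antiderivative, here is a quick SymPy sketch (purely illustrative):

```python
import sympy as sp

x = sp.symbols('x')
claimed = -sp.Rational(1, 18) * (8 - x**3)**6        # the answer from the substitution
integrand = x**2 * (8 - x**3)**5
print(sp.simplify(sp.diff(claimed, x) - integrand))  # 0, so the antiderivative is correct
```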
|
What is the proof that covariance matrices are always semi-definite? Suppose that we have two different discrete signal vectors of $N^\text{th}$ dimension, namely $\mathbf{x}[i]$ and $\mathbf{y}[i]$, each one having a total of $M$ samples/vectors.
$\mathbf{x}[m] = [x_{m,1} \,\,\,\,\, x_{m,2} \,\,\,\,\, x_{m,3} \,\,\,\,\, ... \,\,\,\,\, x_{m,N}]^\text{T}; \,\,\,\,\,\,\, 1 \leq m \leq M$
$\mathbf{y}[m] = [y_{m,1} \,\,\,\,\, y_{m,2} \,\,\,\,\, y_{m,3} \,\,\,\,\, ... \,\,\,\,\, y_{m,N}]^\text{T}; \,\,\,\,\,\,\,\,\, 1 \leq m \leq M$
And, I build up a covariance matrix in-between these signals.
$\{C\}_{ij} = E\left\{(\mathbf{x}[i] - \bar{\mathbf{x}}[i])^\text{T}(\mathbf{y}[j] - \bar{\mathbf{y}}[j])\right\}; \,\,\,\,\,\,\,\,\,\,\,\, 1 \leq i,j \leq M $
Where, $E\{\}$ is the "expected value" operator.
What is the proof that, for all arbitrary values of the $\mathbf{x}$ and $\mathbf{y}$ vector sets, the covariance matrix $C$ is always semi-definite ($C \succeq0$) (i.e., not negative definite; all of its eigenvalues are non-negative)?
| A symmetric matrix $C$ of size $n\times n$ is semi-definite if and only if $u^tCu\geqslant0$ for every $n\times1$ (column) vector $u$, where $u^t$ is the $1\times n$ transposed (line) vector. If $C$ is a covariance matrix in the sense that $C=\mathrm E(XX^t)$ for some $n\times 1$ random vector $X$, then the linearity of the expectation yields that $u^tCu=\mathrm E(Z_u^2)$, where $Z_u=u^tX$ is a real valued random variable, in particular $u^tCu\geqslant0$ for every $u$.
If $C=\mathrm E(XY^t)$ for two centered random vectors $X$ and $Y$, then $u^tCu=\mathrm E(Z_uT_u)$ where $Z_u=u^tX$ and $T_u=u^tY$ are two real valued centered random variables. Thus, there is no reason to expect that $u^tCu\geqslant0$ for every $u$ (and, indeed, $Y=-X$ provides a counterexample).
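As a numerical illustration of the first paragraph (a sketch, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 1000))                # 5 variables, 1000 samples
C = np.cov(X)                                 # 5x5 covariance matrix

print(np.linalg.eigvalsh(C).min() >= -1e-12)  # True: eigenvalues are (numerically) >= 0
u = rng.normal(size=5)
print(u @ C @ u >= -1e-12)                    # True: u^T C u >= 0 for every u
```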
|
Find the class of equivalence of a element of a given equivalence relation. Yesterday on my Abstract Algebra course, we were having a problem with equivalence relations. We had a given set: $$A = \{a, b, c\}$$
We found all the partitions of $A$, and one of them was: $$P = \{ \{a\} , \{b, c\} \}$$
Then we built an equivalence relation $S$ from this partition, where two elements are related if and only if they belong to the same cell.
So the relation of equivalence is: $$S = \{ (a,a) , (b,b) , (c,c) , (b,c) , (c,b) \}$$
After this the professor, without explaining anything wrote:
The class of equivalence of $(b,c)$:
$[(b,c)] = \{ (b,c) , (c,b) \}$
So can anyone explain this last line? Because I don't understand it.
| When you have an equivalence relation $R$ on a set $X$, and an element $x\in X$, you can talk about the equivalence class of $x$ (relative to $R$), which is the set $$[x] = \{y\in X\mid (x,y)\in R\} = \{y\in X\mid (y,x)\in R\} = \{y\in X\mid (x,y),(y,x)\in R\}.$$
But I note that your professor did not say "equivalence class", he said "Class of Equivalence". That suggests he may be referring to some other (defined) concept. I would suggest that you talk to your professor directly and ask him what he means by "Class of Equivalence", and whether it is the same thing as "equivalence class"; explain what your understanding of "equivalence class" is, and why you would be confused if he said "The equivalence class of $(b,c)$ is $[(b,c)]=\{(b,c),(c,b)\}$" (at least, I would be confused because in order to talk about "the equivalence class of $(b,c)$", I would need some equivalence relation defined on some set that contains $(b,c)$, and we don't seem to have that on hand).
|
Estimating a probability of head of a biased coin The question is: We assume a uniform (0,1) prior for the (unknown) probability of a head. A coin is tossed 100 times with 65 of the tosses turning out heads. What is the probability that the next toss will be head?
Well, the most obvious answer is of course prob = 0.65, but I am afraid this is too simple. However, I really don't know what is wrong with this answer? I think I need to use the fact that we assume a uniform [0,1] before we begin tossing the coin, but I am not sure how to proceed.
| $0.65$ is the maximum-likelihood estimate, but for the problem you describe, it is too simple. For example, if you toss the coin just once and you get a head, then that same rule would say "prob = 1".
Here's one way to get the answer. The prior density is $f(p) = 1$ for $0\le p\le 1$ (that's the density for the uniform distribution). The likelihood function is $L(p) = \binom{100}{65} p^{65}(1-p)^{35}$. Bayes' theorem says you multiply the prior density by the likelihood and then normalize, to get the posterior density. That tells you the posterior density is
$$
g(p) = \text{constant}\cdot p^{65}(1-p)^{35}.
$$
The "constant" can be found by looking at this. We get
$$
\int_0^1 p^{65} (1-p)^{35} \; dp = \frac{1}{101\binom{100}{65}},
$$
and therefore
$$g(p)=101\binom{100}{65} p^{65}(1-p)^{35}.
$$
The expected value of a random variable with this distribution is the probability that the next outcome is a head. That is
$$
\int_0^1 p\cdot 101\binom{100}{65} p^{65}(1-p)^{35}\;dp.
$$
This can be evaluated by the same method:
$$
101\binom{100}{65} \int_0^1 p\cdot p^{65}(1-p)^{35}\;dp = 101\binom{100}{65} \int_0^1 p^{66}(1-p)^{35}\;dp
$$
$$
= 101\binom{100}{65} \cdot \frac{1}{\binom{101}{66}\cdot 102} = \frac{66}{102} = \frac{11}{17}.
$$
This is an instance of Laplace's rule of succession (Google that term!). Laplace used it to find the probability that the sun will rise tomorrow, given that it's risen every day for the 6000-or-so years the universe has existed.
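For a quick numerical check (illustrative only), note that the posterior above is a Beta$(66,36)$ density, whose mean is the predictive probability of a head:

```python
from scipy.stats import beta
from scipy.integrate import quad
from scipy.special import comb

print(beta(66, 36).mean())                 # 0.6470588... = 66/102 = 11/17

# Direct integration of p * g(p), with g the normalized posterior density:
g = lambda p: 101 * comb(100, 65) * p**65 * (1 - p)**35
print(quad(lambda p: p * g(p), 0, 1)[0])   # same value, ~0.6470588
```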
|
prove that $g\geq f^2$ The problem is this:
Let $(f_n)$ a sequence in $L^2(\mathbb R)$ and let $f\in L^2(\mathbb R)$ and $g\in L^1(\mathbb R)$. Suppose that $$f_n \rightharpoonup f\;\text{ weakly in } L^2(\mathbb R)$$ and $$f_n^2 \rightharpoonup g\;\text{ weakly in } L^1(\mathbb R).$$ Show that $$f^2\leq g$$ almost everywhere on $\mathbb R$.
I admit I'm having problems with this, since it's been quite a long time since I dealt with this kind of problem. Even a hint is welcome. Thank you very much.
| Well, it is a property of weak convergence that every weakly convergent sequence is bounded and
$$||f||\leq \lim\inf ||f_{n}||$$
then for every measurable $\Omega \subseteq \mathbb{R}$ with finite measure we have
$$\left(\int_\Omega f^2\right)^{\frac{1}{2}}\leq \lim\inf\left(\int_\Omega f_n^2\right)^{\frac{1}{2}}$$
That implies
$$\int_\Omega f^2\leq \lim\inf\int_\Omega f_n^2\tag{1}$$
Note that the function $h$ equal to $1$ on $\Omega$ and $0$ otherwise is in $L^{\infty}{(\mathbb{R})}$, which is the dual of $L^{1}{(\mathbb{R})}$, so
$$\int_{\Omega}f_n^2\rightarrow\int_\Omega g\tag{2}$$
Combining (1) and (2) we get
$$\int_{\Omega}f^2\leq\int_\Omega g$$
Since $\Omega$ is arbitrary we have $f^2\leq g$ almost everywhere.$\blacksquare$
|
Gentzen's Consistency Proof confusion I am recently finding some confusion. Some texts say that Gentzen's Consistency Proof shows transfinite induction up to $\varepsilon_0$ holds, while other texts say that consistency can be shown up to the numbers less than $\varepsilon_0$, but not $\varepsilon_0$. Which one is correct?
Thanks.
| Since $\epsilon_0$ is a limit ordinal, when you say induction up to $\epsilon_0$ you mean induction for every ordinal $<\epsilon_0$. In fact the confusion is only in the terminology used, as both mean the same thing.
For example, induction on all the countable ordinals would be just the same as induction up to $\omega_1$.
|
Product of adjacency matrices I was wondering if there was any meaningful interpertation of the product of two $n\times n$ adjacency matrices of two distinct graphs.
| The dot product of the adjacency matrix with itself is a measure of similarity between nodes. For instance take the non-symmetric directed adjacency matrix A =
1, 0, 1, 0
0, 1, 0, 1
1, 0, 0, 0
1, 0, 1, 0
then $A^T A$ (the Gram matrix) gives the un-normalized similarity between column $i$ and column $j$, which is the symmetric matrix:
3, 0, 2, 0
0, 1, 0, 1
2, 0, 2, 0
0, 1, 0, 1
This is much like the Gram matrix of a linear kernel in an SVM. An alternate version of the kernel is the RBF kernel. The RBF kernel is simply a measure of similarity between two datapoints that can be looked up in the $n\times n$ matrix. Likewise, so is the linear kernel.
The Gram matrix of a matrix is simply the product of its transpose with itself.
Now say you have matrix B which is also a non-symmetric directed adjacency matrix. B =
1, 0, 1, 0
1, 0, 0, 0
1, 0, 0, 0
1, 0, 0, 1
So $A^T$B is a non-symmetric matrix:
3, 0, 1, 1
1, 0, 0, 0
2, 0, 1, 1
1, 0, 0, 0
Matrix $A$ col $i$ and matrix $B$ col $j$ are proportionately similar according to the above matrix. Thus the product of the transpose of the first matrix with the second matrix is a measure of similarity of nodes, characterized by their edges.
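Here is the example above in a few lines of NumPy, just to make the computation concrete:

```python
import numpy as np

A = np.array([[1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 0, 0],
              [1, 0, 1, 0]])
B = np.array([[1, 0, 1, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 1]])

print(A.T @ A)   # Gram matrix: entry (i, j) counts common in-neighbours of i and j
print(A.T @ B)   # cross-similarity between column i of A and column j of B
```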
|
What are all pairs $(a,b)$ such that if $Ax+By \equiv 0 \pmod n$ then we can conclude $ax+by = 0 \pmod n$? All these are good pairs:
$$(0, 0), (A, B), (2A, 2B), (3A, 3B), \ldots \pmod{n}$$
But are there any other pairs?
Actually, it was a programming problem with $A,B,n \leq 10000$, but it seems to have a pure solution.
| If $\rm\:c\ |\ A,B,n\:$ cancel $\rm\:c\:$ from $\rm\:Ax + By = nk.\:$ So w.l.o.g. $\rm\:(A,B,n) = 1,\:$ i.e. $\rm\:(A,B)\equiv 1$.
Similarly, restricting to "regular" $\rm\:x,y,\:$ those such that $\rm\:(x,y,n) = 1,\:$ i.e. $\rm\:(x,y)\equiv 1,\:$ yields
Theorem $\rm\:\ If\:\ (A,B)\equiv 1\equiv (x,y)\:\ and\:\ Ax+By\equiv 0,\ then\:\ ax+by\equiv 0\iff aB\equiv bA$
Proof $\ $ One easily verifies
$$\rm\:\ \ B(ax+by)\: =\: (aB-bA)x + b(Ax+By) $$
$$\rm -A(ax+by)\: =\: (aB-bA)y - a(Ax+By)$$
$(\Rightarrow)\ $ Let $\rm\:z = aB-bA.\:$ By above $\rm\:ax+by\equiv 0\ \:\Rightarrow\ xz,\:yz\equiv 0 \ \Rightarrow\ z \equiv (x,y)z\equiv 0$.
$(\Leftarrow)\ $ Let $\rm\:z = ax+by.\:$ By above $\rm\:aB-bA\equiv 0\ \Rightarrow\ Az,Bz\equiv 0\ \Rightarrow\ z \equiv (A,B)z\equiv 0.\ \ $ QED
Note $\rm\ (x,y)\equiv 1\pmod n\:$ means $\rm\:ix+jy = 1 + kn\:$ for some $\rm\:i,j,k\in \mathbb Z$
Thus we infer $\rm\:xz,yz\equiv 0\ \Rightarrow z \equiv (ix+jy)z\equiv i(xz)+j(yz)\equiv 0\pmod n$
i.e. $\rm\ \ ord(z)\ |\ x,y\ \Rightarrow\ ord(z)\ |\ (x,y) = 1\ $ in the additive group $\rm\:(\mathbb Z/n,+)$
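Since the question came from a programming problem, here is a brute-force Python sketch (fine for small $n$; the function name is mine) that enumerates the good pairs directly:

```python
from itertools import product

def good_pairs(A, B, n):
    """All (a, b) with: Ax + By = 0 (mod n) implies ax + by = 0 (mod n)."""
    sols = [(x, y) for x, y in product(range(n), repeat=2) if (A*x + B*y) % n == 0]
    return [(a, b) for a, b in product(range(n), repeat=2)
            if all((a*x + b*y) % n == 0 for x, y in sols)]

print(good_pairs(2, 3, 6))   # here exactly the multiples of (2, 3) mod 6
```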
|
Calculating the partial derivative of the following I think I may be missing something here,
$$f(x,y)=\frac {xy(x^{2}-y^{2})}{x^{2}+y^{2}},\quad(x,y)\neq (0,0)$$
Let $X(s,t)= s\cos(\alpha)+t\sin(\alpha)$ and $Y(s,t)=-s\sin(\alpha)+t\cos(\alpha)$, where $\alpha$ is a constant, and Let $F(s,t)=f(X(s,t), Y(s,t))$. Show that
$$ \left.\frac{\partial^2 F}{\partial s^2}\frac{\partial^2 F}{\partial t^2} - \left( \frac{\partial^2 F}{\partial s\partial t}\right)^2 = \left(\frac{\partial^2 f}{\partial x^2}\frac{\partial^2 f}{\partial y^2} - \left(\frac{\partial^2 f}{\partial x\partial y}\right)^2\right)\right| _{x=X(s,t),y=Y(s,t)} $$
I decided to try and substitute my $X(s,t)$ and $Y(s,t)$ into $f(x,y)$; however, I'm just wondering if there is an alternative approach, as it gives a lot of terms. Many thanks in advance.
I have gone away and had a think about the answer and still not sure where to put my best foot forward with it so:
$\frac{\partial^2 F}{\partial s^2}=\cos^{2}\alpha\,\frac{\partial^2 F}{\partial X^2}$ $\Rightarrow$ $\frac{\partial^2 F}{\partial t^2}=\sin^{2}\alpha\,\frac{\partial^2 F}{\partial X^2}$
Now using the fact that $\frac{\partial^2 F}{\partial X^2}$ is equal to $\frac{\partial^2 f}{\partial x^2}\big|_{x=X(s,t)}$ to calculate our $\frac{\partial^2 F}{\partial X^2}$.
Now
$\frac{\partial^2 f}{\partial x^2}= \frac{-4x^{4}y-20x^{2} y^{3}+8x^{3}y^{3}-4x y^{5}+4x^{5} y+10x^{3} y^{3}+6x y^{5}}{(x^{2}+y^{2})^{3}}$, hence do I make the substitution here? There seem to be far too many terms, and I haven't even got to the RHS. Many thanks in advance.
| Let $f:\ (x,y)\mapsto f(x,y)$ be an arbitrary function and put $$g(u,v):=f(u\cos\alpha + v\sin\alpha, -u\sin\alpha+v \cos\alpha)\ .$$
Using the abbreviations
$$c:=\cos\alpha, \quad s:=\sin\alpha,\quad \partial_x:={\partial\over\partial x}, \quad\ldots$$
we have (note that $c$ and $s$ are constants)
$$\partial_u=c\partial_x-s\partial_y, \quad \partial_v =s\partial_x+ c\partial_y\ .$$
It follows that
$$\eqalign{g_{uu}&\cdot g_{vv}-\bigl(g_{uv}\bigr)^2 \cr
&=(c\partial_x-s\partial_y)^2 f\cdot (s\partial_x+ c\partial_y)^2 f -\bigl((c\partial_x-s\partial_y)(s\partial_x+ c\partial_y)f\bigr)^2 \cr
&=(c^2 f_{xx}-2cs f_{xy}+s^2 f_{yy})(s^2 f_{xx}+2cs f_{xy}+c^2 f_{yy}) -\bigl(cs f_{xx}+(c^2-s^2) f_{xy}-cs f_{yy}\bigr)^2 \cr
&=\ldots =f_{xx}\, f_{yy} -\bigl(f_{xy})^2\ . \cr}$$
This shows that the stated identity is true for any function $f$ and not only for the $f$ considered in the original question.
There should be a way to prove this identity "from a higher standpoint", i.e., without going through this tedious calculation.
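For the particular $f$ of the question, a SymPy verification sketch (it may take a moment to simplify):

```python
import sympy as sp

s, t, a = sp.symbols('s t alpha')
x = s*sp.cos(a) + t*sp.sin(a)
y = -s*sp.sin(a) + t*sp.cos(a)
F = x*y*(x**2 - y**2)/(x**2 + y**2)              # F(s, t) = f(X(s,t), Y(s,t))
lhs = sp.diff(F, s, 2)*sp.diff(F, t, 2) - sp.diff(F, s, t)**2

u, v = sp.symbols('u v')
f = u*v*(u**2 - v**2)/(u**2 + v**2)
rhs = (sp.diff(f, u, 2)*sp.diff(f, v, 2) - sp.diff(f, u, v)**2).subs({u: x, v: y})

print(sp.simplify(lhs - rhs))                    # 0 (may take a little while)
```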
|
How to pronounce "$\;\setminus\;$" (the symbol for set difference) A question for English speakers. When using (or reading) the symbol $\setminus$ to denote set difference —
$$A\setminus B=\{x\in A|x\notin B\}$$
— how do you pronounce it?
If you please, indicate in a comment on your answer what region you're from (what dialect you have).
This is a poll question. Please do not repeat answers! Rather, upvote an answer if you pronounce the symbol the same way the answerer does, and downvote it not at all. Please don't upvote answers for other reasons. Thanks!
| I usually say "A without B," but it depends on my mood that day
|
Coupon Problem generalized, or Birthday problem backward. I want to solve a variation on the Coupon Collector Problem, or (alternately) a slight variant on the standard Birthday Problem.
In the standard Coupon Collector problem, someone is choosing coupons at random (with replacement) from $n$ different possible coupons. Duplicate coupons do us no good; we need a complete set. The standard question is "What is the expected number of coupons (or probability distribution in the number of coupons) needed to collect them all?"
In the standard birthday problem, we choose k items from n options with replacement (such as k people in a room, each with one of 365 possible birthdays) and try to determine the probability distribution for how many unique values there will be (will they have the same birthday?).
In my problem, someone has chosen k items from n options and I know that there were p distinct values, but I don't know what k was. If p=n this is the coupon problem, but I want to allow for values of p that are less than n. I want to determine the probability distribution for k (actually, all I need is the expected value of k, but the distribution would be interesting as well) as a function of p and n.
| This is a statistical problem, not a probabilistic problem: you have observed data (the value $p$) and seek to infer the underlying probabilistic process (the parameter $k$). The process going from $k$ to $p$ is understood, but the reverse is much more difficult. You cannot "solve" this problem of parameter estimation.
The Maximum Likelihood Estimator of $k$ would be $\hat k = p$. Other statistical criteria would lead to different estimates.
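If you want to explore the likelihood numerically all the same, here is a small Monte Carlo sketch in Python (the helper name is mine):

```python
import numpy as np

rng = np.random.default_rng(0)

def distinct_counts(k, n, reps=20000):
    """Empirical #distinct values when drawing k of n options with replacement."""
    draws = rng.integers(0, n, size=(reps, k))
    return np.array([len(np.unique(row)) for row in draws])

# Estimated likelihood of observing p = 5 distinct values, for n = 10 options:
for k in range(5, 12):
    print(k, (distinct_counts(k, 10) == 5).mean())
```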
|
Is there a closed-form solution to this linear algebra problem? $A$ and $B$ are non-negative symmetric matrices, whose entries sum to 1.0.
Each of these matrices has $\frac{N^2-N}{2}+N-1$ degrees of freedom.
$D$ is the diagonal matrix defined as follows (in Matlab code):
$$D=\text{diag}(\text{diag}(A*\text{ones}(N)))^{-1}$$
We are given the matrix $B$. Does this problem have a closed-form solution for $A$ (assuming one exists), such that
$$ADA=B$$
If so, what is it? If not, what's the best method to find an approximate solution?
| The diagonal entries of $D$ are the reciprocals of the row sums of $A$. The row sums of $B$ are those of $A$: writing $\mathbf 1$ for the all-ones vector, the definition of $D$ gives $D(A\mathbf 1)=\mathbf 1$, hence $B\mathbf 1=AD(A\mathbf 1)=A\mathbf 1$. Thus $D$ is known. Then $A$ can be obtained as
$$A=\frac1{\sqrt D}\sqrt{\sqrt DB\sqrt D}\frac1{\sqrt D}\;,$$
or, if you prefer,
$$A=D^{-1/2}\left(D^{1/2}BD^{1/2}\right)^{1/2}D^{-1/2}\;.$$
According to this post, this is the unique symmetric positive-definite solution of $ADA=B$.
The square root of $D$ is straightforward; the remaining square root can be computed by diagonalization or by various other methods.
To see that the solution is consistent in that the $A$ so obtained does indeed have the same row sums as $B$, note that
$$\left(D^{1/2}BD^{1/2}\right)\left(D^{-1/2}\mathbf 1\right)=D^{1/2}B\mathbf 1=D^{1/2}D^{-1}\mathbf 1=D^{-1/2}\mathbf 1\;,$$
where $\mathbf 1$ is the vector consisting entirely of $1$s. Thus $D^{-1/2}\mathbf 1$ is an eigenvector with eigenvalue $1$ of $D^{1/2}BD^{1/2}$, and thus also of
$$D^{1/2}AD^{1/2}=\left(D^{1/2}BD^{1/2}\right)^{1/2},$$
and thus
$$DA\mathbf1=D^{1/2}\left(D^{1/2}AD^{1/2}\right)\left(D^{-1/2}\mathbf1\right)=D^{1/2}D^{-1/2}\mathbf1=\mathbf1$$
as desired.
Perhaps a more concise way of saying all this is that we should apply a transform $x=D^{1/2}x'$ to get
$$x^\top Ax=x'^\top A'x'\quad\text{with}\quad A'=D^{1/2}AD^{1/2}\;,\\
x^\top Bx=x'^\top B'x'\quad\text{with}\quad B'=D^{1/2}BD^{1/2}\;,\\
\mathbf1'=D^{-1/2}\mathbf1\;,$$
and then the equation becomes $A'^2=B'$ and the row sum conditions become $A'\mathbf1'=B'\mathbf1'=\mathbf1'$.
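Here is the closed form as runnable code, a NumPy/SciPy sketch assuming the symmetric positive-definite solution is the one wanted:

```python
import numpy as np
from scipy.linalg import sqrtm

def solve_A(B):
    """A = D^{-1/2} (D^{1/2} B D^{1/2})^{1/2} D^{-1/2}, using row sums of B."""
    d = B.sum(axis=1)                  # row sums of B equal the row sums of A
    Dh = np.diag(1.0 / np.sqrt(d))     # D^{1/2}, since D = diag(row sums)^{-1}
    Dhi = np.diag(np.sqrt(d))          # D^{-1/2}
    return Dhi @ sqrtm(Dh @ B @ Dh) @ Dhi

# Sanity check on an instance built from a known positive-definite A:
rng = np.random.default_rng(0)
M = rng.random((4, 4))
A = M @ M.T + np.eye(4)                # symmetric positive definite, positive entries
A /= A.sum()                           # entries sum to 1, as in the problem
D = np.diag(1.0 / A.sum(axis=1))
B = A @ D @ A
print(np.allclose(solve_A(B), A))      # True
```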
|
Proving Cauchy's Generalized Mean Value Theorem This is an exercise from Stephen Abbott's Understanding Analysis. The hint it gives on how to solve it is not very clear, in my opinion, so I would like for a fresh set of eyes to go over it with me:
pp 143 Exercise 5.3.4. (a) Supply the details for the proof of Cauchy's Generalized Mean Value Theorem (Theorem 5.3.5.).
Theorem 5.3.5. (Generalized Mean Value Theorem). If $f$ and $g$ are continuous on the closed interval $[a,b]$ and differentiable on the open interval $(a,b)$, then there exists a point $c\in(a,b)$ where$$[f(b)-f(a)]g'(c)=[g(b)-g(a)]f'(c).$$If $g'$ is never zero on $(a,b)$, then the conclusion can be stated as$$\frac{f'(c)}{g'(c)}=\frac{f(b)-f(a)}{g(b)-g(a)}.$$
*Hint: This result follows by applying the Mean Value Theorem to the function*$$h(x)=[f(b)-f(a)]g(x)-[g(b)-g(a)]f(x)$$
First of all, I know that the Mean Value Theorem (MVT) states that if $f:[a,b]\to\mathbb{R}$ is continuous on $[a,b]$ and differentiable on $(a,b)$, then there exists a point $c\in(a,b)$ where$$f'(c)=\frac{f(b)-f(a)}{b-a}.$$
If we assume that $h$ has the above properties, then applying the MVT to it, for some $c\in(a,b)$, would yield$$h'(c)=\frac{h(b)-h(a)}{b-a}=$$
$$\frac{[f(b)-f(a)]g(b)-[g(b)-g(a)]f(b) \quad - \quad [f(b)-f(a)]g(a)+[g(b)-g(a)]f(a)}{b-a}=$$
$$[f(b)-f(a)]\left(\frac{g(b)-g(a)}{b-a}\right) \quad - \quad[g(b)-g(a)]\left(\frac{f(b)-f(a)}{b-a}\right)=$$
$$[f(b)-f(a)]g'(c) \quad - \quad [g(b)-g(a)]f'(c).$$This is the best I could achieve; I have no clue on how to reach the second equation in the above theorem.
Do you guys have any ideas? Thanks in advance!
| Note that
$$\begin{eqnarray}h(a)&=&[f(b)-f(a)]g(a)-[g(b)-g(a)]f(a)\\
&=&f(b)g(a)-g(b)f(a)\\
&=&[f(b)-f(a)]g(b)-[g(b)-g(a)]f(b)\\
&=&h(b)\end{eqnarray}$$
and so, by Rolle's theorem, $h'(c)=0$ for some point $c\in (a,b)$. Then differentiate $h$ normally and note that this makes $c$ the desired point.
|
How many combinations of 6 items are possible? I have 6 items and want to know how many combinations are possible in sets of any amount. (no duplicates)
e.g. It's possible to have any of the following:
1,2,3,4,5,6
1,3,5,6,2
1
1,3,4
there cannot be duplicate combinations:
1,2,3,4
4,3,2,1
Edit: for some reason I cannot add more comments. @miracle173 is correct. Also {1,1} is not acceptable
| You are asking for the number of subsets of a set with $n$ elements, $\{1,2,3,\dots,n\}$.
Each subset can be represented by a binary string, e.g for the set {1,2,3,4,5,6} the string 001101 means the subset that
does not contain the element 1 of the set, because the 1st left character of the string is 0
does not contain the element 2 of the set, because the 2nd left character of the string is 0
does contain the element 3 of the set, because the 3rd left character of the string is 1
does contain the element 4 of the set, because the 4th left character of the string is 1
does not contain the element 5 of the set, because the 5th left character of the string is 0
does contain the element 6 of the set, because the 6th left character of the string is 1
so 001101 means the subset $\{3,4,6\}$. Therefore there are as many subsets as strings of length $n$. With $n$ binary digits one can count from $0$ to $2^n-1$, so there are $2^n$ such strings and $2^n$ subsets of $\{1,\dots,n\}$. The string $00\dots0$ corresponds to the empty subset, so if you don't want to count the empty subset then you have only $2^n-1$ subsets.
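A two-line check in Python (illustrative):

```python
from itertools import combinations

items = [1, 2, 3, 4, 5, 6]
subsets = [c for r in range(1, 7) for c in combinations(items, r)]
print(len(subsets))   # 63 = 2**6 - 1 non-empty subsets
```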
|
For $x_1,x_2,x_3\in\mathbb R$ that $x_1+x_2+x_3=0$ show that $\sum_{i=1}^{3}\frac{1}{x^2_i} =({\sum_{i=1}^{3}\frac{1}{x_i}})^2$ Show that if $ x_1,x_2,x_3 \in \mathbb{R}$ , and $x_1+x_2+x_3=0$ , we can say that:
$$\sum_{i=1}^{3}\frac{1}{x^2_i} = \left({\sum_{i=1}^{3}\frac{1}{x_i}}\right)^2.$$
| Hint:
What is the value of $\frac{1}{x_1x_2}+\frac{1}{x_2x_3}+\frac{1}{x_3x_1}$ when $x_1+x_2+x_3=0$?
Once you have that value, proceed by expanding $\left(\sum_{i=1}^3 \frac{1}{x_i}\right)^2$ using the formula $(a+b+c)^2= a^2+b^2+c^2+ 2(ab+bc+ac)$.
|
Proof of a formula involving Euler's totient function: $\varphi (mn) = \varphi (m) \varphi (n) \cdot \frac{d}{\varphi (d)}$ The third formula on the wikipedia page for the Totient function states that $$\varphi (mn) = \varphi (m) \varphi (n) \cdot \dfrac{d}{\varphi (d)} $$
where $d = \gcd(m,n)$.
How is this claim justified?
Would we have to use the Chinese Remainder Theorem, as they suggest for proving that $\varphi$ is multiplicative?
| You can write $\varphi(n)$ as a product $\varphi(n) = n \prod\limits_{p \mid n} \left( 1 - \frac 1p \right)$ over primes.
Using this identity, and noting that the primes dividing $mn$ are those dividing $m$ or $n$ while the primes dividing both are exactly those dividing $d$, we have
$$
\varphi(mn)
= mn \prod_{p \mid mn} \left( 1 - \frac 1p \right)
= mn \frac{\prod_{p \mid m} \left( 1 - \frac 1p \right) \prod_{p \mid n} \left( 1 - \frac 1p \right)}{\prod_{p \mid d} \left( 1 - \frac 1p \right)}
= \varphi(m)\varphi(n) \frac{d}{\varphi(d)}
$$
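A quick numeric check with SymPy (illustrative):

```python
from sympy import totient, gcd

m, n = 12, 18
d = gcd(m, n)
print(totient(m * n))                             # 72
print(totient(m) * totient(n) * d / totient(d))   # 72
```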
|
Rationality test for a rational power of a rational It has been known since Pythagoras that 2^(1/2) is irrational. It is also obvious that 4^(1/2) is rational. There is also a fun proof that even the power of two irrational numbers can be rational.
Can you, in general, compute whether the power of two rational numbers is rational?
The reason I am asking, besides curiosity, is that the Fraction-type in Python always returns a float on exponentiation. If there is a quick way to tell if it could be accurately expressed as a fraction, the power function could conceivably only return floats when it has to.
EDIT:
By popular demand, I changed 0.5 to 1/2 to make it clearer that it is a fraction and not a float.
| We can do this much quicker than using prime factorization. Below I show how to reduce the problem to testing if an integer is a (specific) perfect power - i.e. an integer perfect power test.
Lemma $\ $ If $\rm\,R\,$ and $\,\rm K/N\:$ are rationals, $\rm\:K,N\in\mathbb Z,\ \gcd(K,N)=1,\,$ then $$\rm\:R^{K/N}\in\Bbb Q\iff R^{1/N}\in \mathbb Q\qquad$$
Proof $\ (\Rightarrow)\ $ If $\,\rm\color{#0a0}{R^{K/N}\in\Bbb Q},\,$ then by $\rm\:gcd(N,K) = 1\:$ we have a Bezout equation
$$\rm 1 = JN+I\:\!K\, \overset{\!\div\ N}\Rightarrow\ 1/N = J + IK/N\ \Rightarrow\ R^{1/N} =\ R^J(\color{#0a0}{R^{K/N}})^I \in \mathbb Q$$
$(\Leftarrow)\ \ \rm\:R^{1/N}\in \mathbb Q\ \Rightarrow\ R^{K/N} = (R^{1/N})^K\in \mathbb Q.\ \ \small\bf QED$
So we've reduced the problem to determining if $\rm\:R^{1/N} = A/B \in \mathbb Q.\,$ If so then $\rm\: R = A^N/B^N\:$ and $\rm\:gcd(A,B)=1\:$ $\Rightarrow$ $\rm\:gcd(A^N,B^N) = 1,\:$ by unique factorization or Euclid's Lemma. By uniqueness of reduced fractions, this is true iff the lowest-terms numerator and denominator of $\rm\:R\:$ are both $\rm\:N'th\:$ powers of integers.
So we reduce to the problem of checking if an integer is a perfect power. This can be done very quickly, even in the general case, see D. J. Bernstein, Detecting powers in almost linear time. 1997.
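Putting the lemma and the reduction together, here is a hedged Python sketch (the helpers `iroot` and `rational_pow` are my own names, and I assume $R>0$, $N>0$ and $\gcd(K,N)=1$) that returns an exact `Fraction` when $R^{K/N}$ is rational and `None` otherwise:

```python
from fractions import Fraction

def iroot(x, n):
    """Integer n-th root of x >= 0 if x is a perfect n-th power, else None."""
    if x == 0:
        return 0
    lo, hi = 1, 1 << ((x.bit_length() + n - 1) // n)   # hi >= x**(1/n)
    while lo <= hi:                                    # binary search
        mid = (lo + hi) // 2
        p = mid ** n
        if p == x:
            return mid
        lo, hi = (mid + 1, hi) if p < x else (lo, mid - 1)
    return None

def rational_pow(r, k, n):
    """Exact r**(k/n) as a Fraction, or None if irrational (r > 0, gcd(k, n) = 1)."""
    a = iroot(r.numerator, n)      # the reduced numerator must be an n-th power
    b = iroot(r.denominator, n)    # ... and so must the reduced denominator
    if a is None or b is None:
        return None
    return Fraction(a, b) ** k

print(rational_pow(Fraction(4), 1, 2))       # 2
print(rational_pow(Fraction(2), 1, 2))       # None: sqrt(2) is irrational
print(rational_pow(Fraction(8, 27), 2, 3))   # 4/9
```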
|
How to show that these two random number generating methods are equivalent? Let $U$, $U_1$ and $U_2$ be independent uniform random numbers between 0 and 1. Can we show that generating random number $X$ by $X = \sqrt{U}$ and $X = \max(U_1,U_2)$ are equivalent?
| For every $x$ in $(0,1)$, $\mathrm P(\max\{U_1,U_2\}\leqslant x)=\mathrm P(U_1\leqslant x)\cdot\mathrm P(U_2\leqslant x)=x\cdot x=x^2$ and $\mathrm P(\sqrt{U}\leqslant x)=\mathrm P(U\leqslant x^2)=x^2$ hence $\max\{U_1,U_2\}$ and $\sqrt{U}$ follow the same distribution.
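A quick empirical sanity check (Python, not a proof):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10**6
a = np.sqrt(rng.uniform(size=n))                           # sqrt(U)
b = np.maximum(rng.uniform(size=n), rng.uniform(size=n))   # max(U1, U2)

for x in (0.25, 0.5, 0.75):                                # both CDFs should be x^2
    print(x**2, (a <= x).mean(), (b <= x).mean())
```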
|
Estimate the sample deviation in one pass We've learned this algorithm in class but I'm not sure I've fully understood the correctness of this approach.
It is known as Welford's algorithm for the sum of squares, as described in: Welford, B.P., "Note on a Method for Calculating Corrected Sums of Squares and Products", Technometrics, Vol. 4, No. 3 (Aug., 1962), pp. 419-420
The two-pass algorithm to estimate the sample deviation is simple. We first estimate the mean by one pass and then calculate the standard deviation. In short, it is $s_n^2 = \frac{1}{n-1} \sum_{i=1}^n (Y_i - \bar{Y})^2$, where $\bar{Y}$ is the sample mean.
The one-pass approach has three steps.
1) $v_1 = 0$
2) $v_k = v_{k-1} + \frac{1}{k(k-1)} (\sum_{i=1}^{k-1} Y_i - (k-1) Y_k)^2 (2 \leq k \leq n)$
3) $s_n^2 = \frac{v_n}{n-1}$.
Can someone help me understand how this approach works?
| $v_n$ is going to be $\sum_{i=1}^n (Y_i - \overline{Y}_n)^2$ where $\overline{Y}_n = \frac{1}{n} \sum_{i=1}^n Y_i$. Note that by expanding out the square,
$v_n = \sum_{i=1}^n Y_i^2 - \frac{1}{n} \left(\sum_{i=1}^n Y_i\right)^2$.
In terms of $m_k = \sum_{i=1}^k Y_i$, we have
$$v_n = \sum_{i=1}^n Y_i^2 - \frac{1}{n} m_n^2 = \sum_{i=1}^{n-1} Y_i^2 + Y_n^2 - \frac{1}{n} (m_{n-1} + Y_n)^2$$
On the other hand,
$$v_{n-1} = \sum_{i=1}^{n-1} Y_i^2 - \frac{1}{n-1} m_{n-1}^2$$
Since $(m_{n-1} + Y_n)^2 = m_{n-1}^2 + 2 Y_n m_{n-1} + Y_n^2$, the difference between these is
$$ v_n - v_{n-1} = \left(1 - \frac{1}{n}\right) Y_n^2 + \left(\frac{1}{n-1} - \frac{1}{n}\right) m_{n-1}^2 - \frac{2}{n} Y_n m_{n-1} $$
$$ = \frac{1}{n(n-1)} m_{n-1}^2 - \frac{2}{n} Y_n m_{n-1} +\frac{n-1}{n} Y_n^2 $$
$$ = \frac{1}{n(n-1)} \left(m_{n-1} - (n-1) Y_n\right)^2 $$
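For reference, here are steps 1)–3) as running Python code, a direct transcription of the recurrence (illustrative sketch):

```python
def one_pass_variance(ys):
    """Welford's one-pass s_n^2: v_k = v_{k-1} + (m_{k-1} - (k-1) y_k)^2 / (k(k-1))."""
    v, m = 0.0, 0.0                    # v_k and the running sum m_k = y_1 + ... + y_k
    for k, y in enumerate(ys, start=1):
        if k > 1:
            v += (m - (k - 1) * y) ** 2 / (k * (k - 1))
        m += y
    return v / (len(ys) - 1)           # step 3: s_n^2 = v_n / (n - 1)

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
print(one_pass_variance(data))         # 4.5714..., matching the two-pass formula
```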
|
Is the support of a Borel measure measured the same as the whole space? Wikipedia says
Let (X, T) be a topological space. Let μ be a measure on the Borel σ-algebra on X. Then the support (or spectrum) of μ is defined to be the set of all points x in X for which every open neighbourhood $N_x$ of x has positive measure.
The support of μ is a Borel-measurable subset, because
The support of a measure is closed in X.
I wonder if the support of μ is measured the same as the whole space?
This is equivalent to saying that the complement of the support of μ has measure 0. But the following property seems to say it is not true:
Under certain conditions on X and µ, for instance X being a topological Hausdorff space and µ being a Radon measure, a measurable set A outside the support has measure zero
So when does the support of a measure on a Borel sigma algebra have different measure from the whole space?
Thanks!
| For an example with a probability measure, consider the following standard counterexample: let $X = [0, \omega_1]$ be the uncountable ordinal space (with endpoint), with its order topology. This is a compact Hausdorff space which is not metrizable. Define a probability measure on the Borel sets of $X$ by taking $\mu(B) = 1$ if $B$ contains a closed uncountable set, $\mu(B)=0$ otherwise. It is known that this defines a countably additive probability measure; see Example 7.1.3 of Bogachev's Measure Theory for the details.
If $x \in X$ and $x < \omega_1$, then $[0, x+1)$ is an open neighborhood of $x$ which is countable, hence has measure zero. So $x$ is not in the support of $\mu$. In fact the support of $\mu$ is simply $\{\omega_1\}$. But $\mu(\{\omega_1\}) = 0$.
|
Arc Length Problem I am currently in the middle of the following problem.
Reparametrize the curve $\vec{\gamma } :\Bbb{R} \to \Bbb{R}^{2}$ defined by $\vec{\gamma}(t)=(t^{3}+1,t^{2}-1)$ with respect to arc length measured from $(1,-1)$ in the direction of increasing $t$.
By reparametrizing the curve, does this mean I should write the equation in cartesian form? If so, I carry on as follows.
$x=t^{3}+1$ and $y=t^{2}-1$
Solving for $t$
$$t=\sqrt[3]{x-1}$$
Thus,
$$y=(x-1)^{2/3}-1$$
Letting $y=f(x)$, the arclength can be found using the formula
$$s=\int_{a}^{b}\sqrt{1+[f'(x)]^{2}}\cdot dx$$
Finding the derivative yields
$$f'(x)=\frac{2}{3\sqrt[3]{x-1}}$$
and
$$[f'(x)]^{2}=\frac{4}{9(x-1)^{2/3}}.$$
Putting this into the arclength formula, and using the proper limits of integration (found by using $t=1,-1$ with $x=t^{3}+1$) yields
$$s=\int_{0}^{2}\sqrt{1+\frac{4}{9(x-1)^{2/3}}}\cdot dx$$
I am now unable to continue with the integration as it has me stumped. I cannot factor anything etc. Is there some general way to approach problems of this kind?
| Hint:
Substitute $(x-1)^{1/3}=t$, so that $\rm dx=3t^2\,dt$ and $3t^2\sqrt{1+\frac{4}{9t^2}}=|t|\sqrt{4+9t^2}$. Your integral will boil down to $$\int_{-1}^1|t|\sqrt{4+9t^2}\,{\rm d}t=2\int_0^1 t\sqrt{4+9t^2}\,{\rm d}t$$
Now set $4+9t^2=u$ and note that $\rm du=18t~~\rm dt$, which will complete the computation. (Note that you need to change the limits of integration while integrating over $u$.)
A Longer way:
Now integrate by parts with $u=t$ and $\rm d v=\sqrt{4+9t^2}\,\rm dt$, and to get $v$, you'd like to set $t=\dfrac{2\tan \theta}{3}$
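Using the even symmetry of the integrand, a SymPy check of the value (sketch):

```python
import sympy as sp

t = sp.symbols('t')
s = 2 * sp.integrate(t * sp.sqrt(4 + 9*t**2), (t, 0, 1))
print(sp.simplify(s))   # = 2*(13*sqrt(13) - 8)/27
print(float(s))         # ~2.879, the arc length for t in [-1, 1]
```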
|
The derivative of a complex function.
Question:
Find all points at which the complex valued function $f$ define by $$f(z)=(2+i)z^3-iz^2+4z-(1+7i)$$ has a derivative.
I know that $z^3$,$z^2$, and $z$ are differentiable everywhere in the domain of $f$, but how can I write my answer formally? Please can somebody help?
Note:I want to solve the problem without using Cauchy-Riemann equations.
| I'm not sure where the question is coming from (what you know/can know/etc.).
But here are some things that you might use: you might just compute the derivative directly, if you know how to take it. Perhaps you might verify it with the Cauchy-Riemann equations. Alternatively, differentiation is linear (which you might prove, if you haven't), and a finite linear combination of differentiable functions is also differentiable. Or you know the series expansion: it's finite, and converges with infinite radius of convergence - thus it's holomorphic.
But any of these would lead to a complete solution. Does that make sense?
|
Strengthened finite Ramsey theorem I'm reading wikipedia article about Paris-Harrington theorem, which uses strengthened finite Ramsey theorem, which is stated as "For any positive integers $n, k, m$ we can find $N$ with the following property: if we color each of the $n$-element subsets of $S = \{1, 2, 3,..., N\}$ with one of $k$ colors, then we can find a subset $Y$ of $S$ with at least $m$ elements, such that all $n$ element subsets of $Y$ have the same color, and the number of elements of $Y$ is at least the smallest element of $Y$".
After poking around for a while I interpreted this as follows.
Let $n,k$ and $m$ be positive integers. Let $S(N)=\{1,...,N\}$ and let $S^{(n)}(N)$ be the set of $n$-element subsets of $S(N)$. Let $f:S^{(n)}(N)\to \{1,...,k\}$ be some $k$-colouring of $S^{(n)}(N)$. The theorem states that for any $n, k, m$ there is a number $N$ for which we can find $Y\subseteq S(N)$ such that $|Y|\geq m$, the induced colouring $f':Y^{(n)}\to \{1,...,k\}$ boils down to a constant function (every $n$-subset of $Y$ has the same colour) and $|Y|\geq\min{Y}$.
Is this correct?
I also faced another formulation where it is stated that "for any $m,n,c$ there exists a number $N$ such that for every colouring $f$ of $m$-subsets of $\{0,...,N-1\}$ into $c$ colours there is an $f$-homogeneous $H\subset\{0,..,N-1\}$...".
How is $f$-homogeneous defined?
| Yes, your interpretation of the first formulation is correct.
In the second formulation the statement that $H$ is $f$-homogeneous simply means that every $m$-subset of $H$ is given the same color by $f$: in your notation, the induced coloring $$f\,':H^{(m)}\to\{0,\dots,c-1\}$$ is constant. However, the formulation is missing the important requirement that $|H|\ge\min H$.
|
What on earth does "$r$ is not a root" even mean? Method of Undetermined Coeff Learning ODE now, and using method of Undetermined Coeff
$$y'' +3y' - 7y = t^4 e^t$$
The book said that $r = 1$ is not a root of the characteristic equation. The characteristic eqtn is $r^2 + 3r - 7 = 0$ and the roots are $r = -3 \pm \sqrt{37}$
Where on earth are they getting $r = 1$ from?
| $1$ comes from the $e^t$ on the right side. If it was $e^{kt}$ they would take $r=k$.
|
Complex Roots of Unity and the GCD I'm looking for a proof of this statement. I just don't know how to approach it. I recognize that $z$ has $a$ and $b$ roots of unity, but I can't seem to figure out what that tells me.
If $z \in \mathbb{C}$ satisfies $z^a = 1$ and $z^b = 1$ then
$z^{gcd(a,b)} = 1$.
| Hint $\:$ The set of $\rm\:n\in \mathbb Z$ such that $\rm\:z^n = 1\:$ is closed under subtraction so it is closed under $\rm\:gcd$.
Recall gcds may be computed by repeated subtraction (anthyphairesis, Euclidean algorithm)
|
Determining variance from sum of two random correlated variables I understand that the variance of the sum of two independent normally distributed random variables is the sum of the variances, but how does this change when the two random variables are correlated?
| Let's work this out from the definitions. Let's say we have 2 random variables $x$ and $y$ with means $\mu_x$ and $\mu_y$. Then variances of $x$ and $y$ would be:
$${\sigma_x}^2 = \frac{\sum_i(\mu_x-x_i)(\mu_x-x_i)}{N}$$
$${\sigma_y}^2 = \frac{\sum_i(\mu_y-y_i)(\mu_y-y_i)}{N}$$
Covariance of $x$ and $y$ is:
$${\sigma_{xy}} = \frac{\sum_i(\mu_x-x_i)(\mu_y-y_i)}{N}$$
Now, let us consider the weighted sum $p$ of $x$ and $y$:
$$\mu_p = w_x\mu_x + w_y\mu_y$$
$${\sigma_p}^2 = \frac{\sum_i(\mu_p-p_i)^2}{N} = \frac{\sum_i(w_x\mu_x + w_y\mu_y - w_xx_i - w_yy_i)^2}{N} = \frac{\sum_i(w_x(\mu_x - x_i) + w_y(\mu_y - y_i))^2}{N} = \frac{\sum_i(w^2_x(\mu_x - x_i)^2 + w^2_y(\mu_y - y_i)^2 + 2w_xw_y(\mu_x - x_i)(\mu_y - y_i))}{N} \\ = w^2_x\frac{\sum_i(\mu_x-x_i)^2}{N} + w^2_y\frac{\sum_i(\mu_y-y_i)^2}{N} + 2w_xw_y\frac{\sum_i(\mu_x-x_i)(\mu_y-y_i)}{N} \\ = w^2_x\sigma^2_x + w^2_y\sigma^2_y + 2w_xw_y\sigma_{xy}$$
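A numerical sanity check of the final identity (NumPy, illustrative; note `np.var` and `bias=True` match the $1/N$ definitions used above):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100000)
y = 0.6 * x + rng.normal(size=100000)   # correlated with x
wx, wy = 2.0, 3.0

lhs = np.var(wx * x + wy * y)
rhs = wx**2 * np.var(x) + wy**2 * np.var(y) \
      + 2 * wx * wy * np.cov(x, y, bias=True)[0, 1]
print(lhs, rhs)   # agree up to floating-point error
```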
|
Is every invertible rational function of order 0 on a codim 1 subvariety in the local ring of the subvariety? I have been trying to read Fulton's Intersection Theory, and the following puzzles me.
All schemes below are algebraic over some field $k$ in the sense that they come together with a morphism of finite type to $Spec k$.
Let $X$ be a variety (reduced irreducible scheme), and let $Y$ be a codimension $1$ subvariety, and let $A$ be its local ring (in particular a $1$-dimensional Noetherian local domain). Let $K(X)$ be the ring of rational functions on $X$ (the local ring at the generic point of X). Let $K(X)^*$ be the abelian group (under multiplication) of units of $K(X)$, and $A^*$ -- the group of units of $A$.
On the one hand, for any $r\in K(X)^*$ define the order of vanishing of $r$ at $Y$ to be $ord_Y(r)=l_A(A/(a))-l_A(A/(b))$ where $r=a/b$ with $a,b\in A$, and $l_A(M)$ is the length of an $A$-module $M$. On the other hand, it turns out that $Y$ is in the support of the principal Cartier divisor $div(r)$ if and only if $r\not\in A^*\subset K(X)^*$.
It is obvious that $Y$ not in the support of $div(r)$ implies that $ord_Y(r)=0$, since the former claims that $r\in A^*\subset K(X)^*$, from which it follows that $ord_Y(b)=ord_Y(rb)=ord_Y(a)$ since obviously $ord_Y(u)=0$ for any unit $u$. The contrapositive states that $ord_Y(r)\neq0$ implies $Y$ is in the support of $div(r)$. The latter can be shown to be a proper closed set, and thus contains only finitely many codim. $1$ subvarieties, which shows that the associated Weil divisor $[D]=\sum_Y ord_Y(r)[Y]$ is well-defined.
What is not clear to me is whether or not the converse is true, i.e. whether $Y$ in the support of $div(r)$ implies that $ord_Y(r)\neq0$. I find myself doubting since if I am not mistaken, this is equivalent to the statement $l_A(A/(a))=l_A(A/(b))$ if and only if $(a)=(b)$, where $A$ is any $1$-dimensional local Noetherian domain (maybe even a $k$-algebra) which seems much too strong. Any (geometric) examples to give me an idea of what is going would be much appreciated.
| The support of a Cartier divisor $D$ on $X$ is the union of all closed subvarieties $Z\subset X$ such that the local equation of $D$ at the generic point $z$ of $Z$ is not a unit of the local ring $O_{X,z}$. Note that $Z$ can be of codimension $>1$. However, let $f_Z$ be the local equation of $D$ and let $p\in\mathrm{Spec}(O_{X,z})$ such that $f_ZO_{X,z}\subseteq p$ and $p$ is minimal with that property. Then by the Principal Ideal Theorem $p$ is of height $1$ and $f_Z$ is not a unit in the localization $(O_{X,z})_p$. The latter is the local ring of the codimension $1$ subvariety $Y$ having $p$ as its generic point. This shows that if $Z$ is in the support of $D$, then some codimension $1$ subvariety $Y\subset X$ with $Z\subseteq Y$ is in the support of $D$. Does this solve your problem?
|
confusion on Legendre symbol I know that $\left(\frac{1}{2}\right)=1$ since $1^2\equiv 1 \pmod2$. Now since
$3\equiv 1\pmod2$ we should have $\left(\frac{3}{2}\right)=\left(\frac{1}{2}\right)=1$ but on Maple i get that $\left(\frac{3}{2}\right)=-1$ why?
| The Legendre symbol, the Jacobi symbol and the Kronecker symbol are successive generalizations that all share the same notation. The first two are usually only defined for odd lower arguments (primes in the first case), whereas the Kronecker symbol is also defined for even lower arguments.
Since the distinction is merely historic, I guess it makes sense for math software to treat them all the same; Wolfram|Alpha returns $-1$ for JacobiSymbol(3,2). See the Wikipedia article for the definition for even lower arguments; the interpretation that a value of $-1$ indicates a quadratic non-residue is no longer valid in this case.
|
What's the sum of $\sum_{k=1}^\infty \frac{2^{kx}}{e^{k^2}}$? I already asked a similar question on another post:
What's the sum of $\sum \limits_{k=1}^{\infty}\frac{t^{k}}{k^{k}}$?
There are no problems with establishing a convergence for this power series:
$$\sum_{k=1}^\infty \frac{2^{kx}}{e^{k^2}}$$
but I have problems in determining its sum.
| By the Jacobi triple product with $q=e^{-1}$ and $z=2^x$,
$$\sum_{k=-\infty}^{\infty}\frac{2^{kx}}{e^{k^{2}}} = \prod_{m=1}^{\infty} \left( 1 - \frac{1}{e^{2m}} \right) \left( 1+ \frac{ 2^x }{e^{2m-1} } \right) \left( 1 + \frac{1}{2^x e^{2m-1} }\right ), $$
and hence
$$\sum_{k=1}^{\infty}\frac{2^{kx}+2^{-kx}}{e^{k^{2}}} = -1 + \prod_{m=1}^{\infty} \left( 1 - \frac{1}{e^{2m}} \right) \left( 1+ \frac{ 2^x }{e^{2m-1} } \right) \left( 1 + \frac{1}{2^x e^{2m-1} }\right ). $$
|
Similarity Transformation Let $G$ be a subgroup of $\mathrm{GL}(n,\mathbb{F})$. Denote by $G^T$ the set of transposes of all elements in $G$. Can we always find an $M\in \mathrm{GL}(n,\mathbb{F})$ such that $A\mapsto M^{-1}AM$ is a well-defined map from $G$ to $G^T$?
For example if $G=G^T$ then any $M\in G$ will do the job.
Another example, let $U$ be the set of all invertible upper triangular matrices. Take $M=(e_n\,\cdots\,e_2\,e_1)$ where $e_i$ are the column vectors that make $I=(e_1\,e_2\,\cdots\,e_n)$ into an identity matrix. Then $M$ do the job. For $n=3$, here what the $M$ will do
$$\begin{pmatrix}a&b&c\\ 0&d&e\\ 0&0&f\end{pmatrix}\mapsto M^{-1}\begin{pmatrix}a&b&c\\ 0&d&e\\ 0&0&f\end{pmatrix} M=\begin{pmatrix}f&e&c\\0&d&b\\0&0&a\end{pmatrix}^T$$
Thank you.
| The answer is no, in general. For example, when ${\mathbb F}$ is the field of two elements, let $G$ be the stabilizer of the one-dimensional subspace of ${\mathbb F}^3,$ (viewed as column vectors, with ${\rm GL}(3,\mathbb{F})$ acting by left multiplication) consisting of vectors with $0$ in positions $2$ and $3$. Then $G \cong S_{4},$ but $G$ does not stabilize any $2$-dimensional subspace. However $G^{T}$ is the stabilizer of the $2$-dimensional subspace consisting of vectors with $0$ in position $1$. Hence the subgroups $G$ and $G^{T}$ are not conjugate within ${\rm GL}(3,\mathbb{F}).$
|
Should I ignore $0$ when do inverse transform sampling? Generic method
1. Generate $U \sim \mathrm{Uniform}(0,1)$.
2. Return $F^{-1}(U)$.
So, in step 1, $U$ has domain/support as $[0,1]$, so it is possible that $U=0$ or $U=1$,
but $F^{-1}(0)=-\infty$. Should I reject the value $U=0$ and $U=1$ before applying step 2?
For example, discrete distribution sampling: $X$ takes on values $x_1, x_2, x_3$ with probability $p_1,p_2,p_3$
1. Generate $U \sim \mathrm{Uniform}(0,1)$.
2. Find the smallest $k$ such that $F(x_k)\geq U$ ($F$ is the CDF).
However, if $U=0$, and $p_1=0$, $k$ would be $1$. It could generate $x_1$ though its probability $p_1=0$. Is it acceptable?
| In theory, it doesn't matter: the event $U=0$ occurs with probability $0$, and can thus be ignored. (In probabilistic jargon, it almost never happens.)
In practice, it's possible that your PRNG may return a value that is exactly $0$. For a reasonably good PRNG, this is unlikely, but it may not be quite unlikely enough for you to bet that it never happens. Thus, if your program would crash on $U=0$, a prudent programmer should check for that event and generate a new random number if it occurs.
(Note that many PRNG routines are defined to return values in the range $0 \le U < 1$: for example, the Java default PRNG is defined to return values of the form $m\cdot2^{-53}$ for $m \in \{0,1,\dotsc,2^{53}-1\}$. If you're using such a PRNG, a cheap but effective way to avoid the case $U=0$ is to use the number $1-U$ instead of $U$.)
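To make the practical point concrete, a small Python sketch for a specific $F$ (the exponential distribution; the function name is mine):

```python
import random
import math

def sample_exponential(lam):
    """Inverse-transform sample from Exp(lam): F^{-1}(u) = -ln(1 - u)/lam."""
    u = random.random()              # u lies in [0, 1), so u == 0 can occur
    return -math.log(1.0 - u) / lam  # 1 - u lies in (0, 1], avoiding log(0)

print(sample_exponential(2.0))
```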
|
If we know the GCD and LCM of two integers, can we determine the possible values of these two integers? I know that $\gcd(a,b) \cdot \operatorname{lcm}(a,b) = ab$, so if we know $\gcd(a,b)$ and $\operatorname{lcm}(a,b)$ and we want to find out $a$ and $b$, besides factoring $ab$ and find possible values, can we find these two integers faster?
| If you scale the problem by dividing through by $\rm\:gcd(a,b)\:$ then you are asking how to determine coprime factors of a product. This is equivalent to factoring integers.
Your original question, in the special case $\rm\:gcd(a,b) = lcm(a,b),\:$ is much easier:
Hint $\:$ In any domain $\rm\:gcd(a,b)\ |\ a,b\ |\ lcm(a,b)\ $ so $\rm\:lcm(a,b)\ |\ gcd(a,b)\ \Rightarrow $ all four are associate. Hence all four are equal if they are positive integers. If they are polynomials $\ne 0$ over a field then they are equal up to nonzero constant multiples, i.e. up to unit factors.
|
Multiple function values for a single x-value I'm curious if it's possible to define a function that would have more than two functionvalues for one single x-value.
I know that it's possible to get two y-values by using the root (one positive, one negative: $\sqrt{4} = -2 ; +2$).
Is it possible to get three or more function values for one single x-value?
| Let's consider some multivalued functions (not 'functions', since a function is single-valued by definition):
The equation $y=x^n$ has $n$ different solutions $x=\sqrt[n]{y}\cdot e^{2\pi i \frac kn}$ (no more than two of which will be real)
The inverses of periodic functions will be multivalued (arcsine, arccosine and so on...) with an infinity of branches (the principal branch will usually be considered and the principal value returned).
The logarithm is interesting too (every branch gives an additional $2\pi i$).
$i^i$ has an infinity of real values since $i^i=(e^{\pi i/2+2k \pi i})^i=e^{-\pi/2-2k \pi}$ (replace one of the $i$ by $ix$ to get a multivalued function).
The Lambert W function has two rather different branches $W_0$ and $W_{-1}$
and so on...
|
Find Double of Distance Between 2 Quaternions I want to find the geometric equivalent of vector addition and subtraction in 3d for quaternions. In 3d difference between 2 points(a and b) gives the vector from one point to another. (b-a) gives the vector from b to a and when I add this to b I find the point which is double distance from a in the direction of (b-a). I want to do the same thing for unit quaternions but they lie on 4d sphere so direct addition won't work. I want to find the equivalent equation for a-b and a+b where a and b are unit quaternions. It should be something similar to slerp but it is not intuitive to me how to use it here because addition produces a quaternion outside the arc between 2 quaternions.
| Slerp is exactly what you want, except with the interpolation parameter $t$ set to $2$ instead of lying between $0$ and $1$. Slerp is nothing but a constant-speed parametrization of the great circle between two points $a$ and $b$ on a hypersphere, such that $t = 0$ maps to $a$ and $t = 1$ maps to $b$. Setting $t = 2$ will get you the point on the great circle as far from $b$ as $b$ is from $a$. See my other answer to a related question on scaling figures lying in a hypersphere.
Update: Actually, it just occurred to me that this is overkill, though it gives the right answer. The simpler solution is that the quaternion that maps $a$ to $b$ is simply $ba^{-1}$ (this plays the role of "$b-a$"), and applying that quaternion to $b$ gives you $ba^{-1}b$ (analogous to "$(b - a) + b$") which is what you want.
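In code the simpler formula is one line; here is a NumPy sketch with hand-rolled quaternion helpers (my own, purely illustrative):

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions given as arrays (w, x, y, z)."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def double_from(a, b):
    """The quaternion as far beyond b as b is from a: (b a^{-1}) b."""
    a_inv = np.array([a[0], -a[1], -a[2], -a[3]]) / np.dot(a, a)
    return qmul(qmul(b, a_inv), b)

a = np.array([1.0, 0.0, 0.0, 0.0])                  # identity rotation
b = np.array([np.cos(0.1), np.sin(0.1), 0.0, 0.0])  # rotation about the x-axis
print(double_from(a, b))                            # ~ [cos(0.2), sin(0.2), 0, 0]
```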
|
Show that this limit is equal to $\liminf a_{n}^{1/n}$ for positive terms.
Show that if $a_{n}$ is a sequence of positive terms such that $\lim\limits_{n\to\infty} (a_{n+1}/a_n) $ exists, then this limit is equal to $\liminf\limits_{n\to\infty} a_n^{1/n}$.
I am not even sure where to start; any help would be much appreciated.
| I saw this proof today and thought it's a nice one:
Let $a_n\ge 0$, $\lim\limits_{n \to \infty}a_n=L$. So there are 2 options:
(1) $L>0$:
$$
\lim\limits_{n \to \infty}a_n=L
\iff \lim\limits_{n \to \infty}\frac{1}{a_n}=\frac{1}{L}$$
Using the inequality between the harmonic, geometric and arithmetic means we get:
$$\frac{n}{a_1^{-1}+\dots+a_n^{-1}}\le\sqrt[n]{a_1\cdots a_n}\le \frac{a_1+ \cdots + a_n}{n}$$
Applying Cesàro's theorem to $a_n$, notice that the RHS $\mathop{\to_{n \to \infty}} L$, and by applying Cesàro's theorem to $1/a_n$, the LHS $\mathop{\to_{n \to \infty}} \frac{1}{1/L}=L$. And so from the squeeze theorem:
$$\lim\limits_{n \to \infty}\sqrt[n]{a_1\cdots a_n}=L$$
(2) $L=0$:
$$0\le\sqrt[n]{a_1\cdots a_n}\le \frac{a_1+ \cdots + a_n}{n} $$
$$\Longrightarrow\lim\limits_{n \to \infty}\sqrt[n]{a_1\cdots a_n}=0=L$$
Now, define:
$$b_n = \begin{cases}{a_1} &{n=1}\\\\ {\frac{a_n}{a_{n-1}}} &{n>1}\end{cases}$$
and assume $\lim\limits_{n \to \infty}b_n=B $.
Applying the above result on $b_n$ we get:
$$\frac{n}{b_1^{-1}+\dots+b_n^{-1}}\le\sqrt[n]{b_1\cdots b_n}\le \frac{b_1+ \cdots + b_n}{n}$$
$$\frac{n}{b_1^{-1}+\dots+b_n^{-1}}\le\sqrt[n]{a_1\cdot (a_2/a_1)\cdots (a_n/a_{n-1})}\le \frac{b_1+ \cdots + b_n}{n}$$
$$\frac{n}{b_1^{-1}+\dots+b_n^{-1}}\le\sqrt[n]{a_n}\le \frac{b_1+ \cdots + b_n}{n}$$
$$\Longrightarrow\lim\limits_{n \to \infty}\sqrt[n]{a_n}=B$$
So we can conclude that if $\lim\limits_{n\to\infty} \frac{a_{n+1}}{a_n}$ exists and equal to $B$, then $\lim\limits_{n \to \infty}\sqrt[n]{a_n}=B$.
|
proving by $\epsilon$-$\delta$ approach that $\lim_{(x,y)\rightarrow (0,0)}\frac {x^n-y^n}{|x|+|y|}$ exists for $n\in \mathbb{N}$ and $n>1$ As in the title, how does one prove by the $\epsilon$-$\delta$ approach that $\lim_{(x,y)\rightarrow (0,0)}\frac {x^n-y^n}{|x|+|y|}$ exists for $n\in \mathbb{N}$ and $n>1$?
| You may use that
$$\left|\frac{x^n-y^n}{|x|+|y|}\right|\leq \frac{|x|^n+|y|^n}{|x|+|y|}= \frac{|x|}{|x|+|y|}|x|^{n-1}+\frac{|y|}{|x|+|y|}|y|^{n-1}\leq|x|^{n-1}+|y|^{n-1}.$$
Since you impose $x^2+y^2< \delta \leq 1$ you have $|x|, |y|<1\Rightarrow |x|^{n-1}\leq|x|,\ |y|^{n-1}\leq|y|.$
Then you have
$|x|^{n-1}+|y|^{n-1}\leq|x|+|y|\leq 2\sqrt{x^2+y^2}$.
|
RSA cryptography Algebra
This is a homework problem I am trying to do.
I have done parts 2i) and 2ii), and I know how to do the rest. I am stuck on 2iii) and 2vii).
I truly don't know 2vii) because it could be some special case I am not aware of. As for 2iii), I tried to approach it the same way as I did 2ii): I said you could take the 2 equations and use the substitution method to isolate each variable and plug it in to find your variable values, but for 2iii) that doesn't work, so I don't know how to do it.
| For $s$ sufficiently small, we can go from $b^2=n+s^2$ to $b^2\approx n$. Take the square root and you approximately have the average of $p$ and $q$. Since $s$ is small, so is their difference (relatively), so we can search around $\sqrt{n}$ for $p$ or $q$. Part (iv) means the absolute difference, and should have said so. Take the square root of your number and you will find $p$ and $q$ very close nearby.
For (vii), say cipher and plaintexts are equal, so $m\equiv m^e$ modulo $n$. There are only a certain number of $m$ that allow this: those with $m\equiv0$ or $1$ mod $n$, or $p|m$ and $q|(m-1)$ or vice-versa, by elementary number theory. If the scheme uses padding to avoid these messages, then no collision is possible between plain and cipher, but otherwise if you allow arbitrary numbers as messages it clearly is.
|
What is the value of $\sin(x)$ if $x$ tends to infinity?
What is the value of $\sin(x)$ if $x$ tends to infinity?
As in wikipedia entry for "Sine", the domain of $\sin$ can be from $-\infty$ to $+\infty$. What is the value of $\sin(\infty)$?
| Suppose $\lim_{x \to \infty} \sin(x) = L$. $\frac{1}{2} > 0$, so we may take $\epsilon = \frac{1}{2}$.
Let $N$ be any positive natural number. Then $2\pi (N + \frac{1}{4}) > N$, as is $2\pi (N+\frac{3}{4})$.
But $\sin(2\pi (N + \frac{1}{4})) = \sin(\frac{\pi}{2}) = 1$.
so if $L < 0$, we have a $y > N$ (namely $2\pi (N + \frac{1}{4})$) with:
$|\sin(y) - L| = |1 - L| = |1 + (-L)| = 1 + |L| > 1 > \epsilon = \frac{1}{2}$.
similarly, if $L \geq 0$, we have for $ y = 2\pi (N+\frac{3}{4}) > N$:
$|\sin(y) - L| = |-1 - L| = |(-1)(1 + L)| = |-1||1 + L| = |1 + L| = 1 + L \geq 1 > \epsilon = \frac{1}{2}$.
thus there is NO positive natural number N such that:
$|\sin(y) - L| < \frac{1}{2}$ when $y > N$, no matter how we choose L.
since every real number L fails this test for this particular choice of $\epsilon$, $\lim_{x \to \infty} \sin(x)$ does not exist.
(edit: recall that $\lim_{x \to \infty} f(x) = L$ means that for every $\epsilon > 0$, there is a positive real number M such that $|f(y) - L| < \epsilon$ whenever $y > M$. note that there is no loss of generality by taking M to be a natural number N, since we can simply choose N to be the next integer greater than M.)
|
The position of a particle moving along a line is given by $ 2t^3 -24t^2+90t + 7$ for $t >0$ For what values of $t$ is the speed of the particle increasing?
I tried to find the first derivative and I get
$$6t^2-48t+90 = 0$$
$$ t^2-8t+15 = 0$$
Which is giving me $ t>5$ and $0 < t < 3$, but the book gives a different answer
| Let's be careful. The velocity is $6(t^2-8t+15)$. This is $\ge 0$ when $t \ge 5$ and when $t\le 3$. So on $(5,\infty)$, and also on $(0,3)$, this is the speed. It is not the speed on $(3,5)$. There the speed is $-6(t^2-8t+15)$.
When $t > 5$ and also when $t< 3$, the derivative of speed is $6(2t-8)$, and is positive when $t>4$. So the speed is certainly increasing over the interval $(5,\infty)$.
In the interval $(3,5)$, the derivative of speed is $-6(2t-8)$. This is positive in the interval $(3,4)$.
So the speed is increasing on $(5,\infty)$ and on $(3,4)$.
Note that the derivative of speed does not exist at $3$ and at $5$.
Remark: Occasionally, I have asked questions of this nature, though not quite as sadistic. Here there is a double twist. Even if the student notices that the question doesn't ask where $s(t)$ is increasing, there is the velocity/speed trap as backup. Not a good idea, it only proves one can fool most of the people most of the time.
|
Divide inside a Radical It has been so long since I have done division inside of radicals that I totally forget the "special rule" for doing it. -_-
For example, say I wanted to divide the 4 out of this expression:
$\sqrt{1 - 4x^2}$
Is this the right way to go about it?
$\frac{16}{16} \cdot \sqrt{1 - 4x^2}$
$16 \cdot \frac{\sqrt{1 - 4x^2}}{16}$
$16 \cdot \sqrt{\frac{1 - 4x^2}{4}} \Longleftarrow \text{Took the square root of 16 to get it in the radicand as the divisor}$
I know that this really a simple, question. Can't believe that I forgot how to do it. :(
| The correct way to do this, after fixing the mistake pointed out by Donkey_2009, is:
$\dfrac{2}{2} \cdot \sqrt{1-4x^2}$
$= 2 \cdot \dfrac{\sqrt{1-4x^2}}{2}$
$= 2 \cdot \dfrac{\sqrt{1-4x^2}}{\sqrt{4}} \qquad \Leftarrow$ applied $x = \sqrt{x^2}$
$= 2 \cdot \sqrt{\dfrac{1-4x^2}{4}} \qquad \Leftarrow$ applied $\frac{\sqrt a}{\sqrt b} = \sqrt{\frac a b}$
|
If $f(x)=f'(x)+f''(x)$ then show that $f(x)=0$
A real-valued function $f$ which is infinitely differentiable on $[a,b]$ has the following properties:
* $f(a)=f(b)=0$
* $f(x)=f'(x)+f''(x)$ $\forall x \in [a,b]$
Show that $f(x)=0$ $\forall x\in [a,b]$
I tried using Rolle's Theorem, but it only tells me that there exists a $c \in [a,b]$ for which $f'(c)=0$.
All I get is:
* $f'(a)=-f''(a)$
* $f'(b)=-f''(b)$
* $f(c)=f''(c)$
Somehow none of these direct me to the solution.
| Hint: $f$ can't have a positive maximum at an interior point $c$, since then $f(c)>0, f'(c)=0, f''(c) \le 0$ implies that $f''(c)+f'(c)-f(c) < 0$, contradicting $f=f'+f''$. Similarly $f$ can't have a negative minimum. Since $f(a)=f(b)=0$, it follows that $f = 0$.
|
LTL is a star-free language but it can describe $a^*b^\omega$. Contradiction? Does the statement "LTL is a star-free language"(from wiki) mean that the expressiveness power of LTL is equivalent to that of star-free languages? Then why can you describe in LTL the following language with the star: $a^*b^\omega$?
$$\mathbf{G}(a \implies a\,\mathbf{U}\,b) \land \mathbf{G}(b \implies \mathbf{X}b)$$
So, what does the sentence "LTL is star-free language" mean? Can you give an example of regular language with star which cannot be expressed in LTL? (not an example of LTL < NBA, but an example of LTL < regular language with star)
| Short answer: $a^*b^{\omega}$ describes a star-free language.
Longer answer:
In order to show that, let's consider two definitions of a regular star-free language:
1. The language has a maximum star height of 0.
2. The language is in the class of star-free languages, which is defined as follows: it's the smallest class of languages over $\Sigma$ which contains $\Sigma^*$, all singletons $\{x\}$, $x \in \Sigma$, and which is closed under finite union, concatenation and complementation.
It's possible to see that those two definitions are equivalent. We can also note that all finite languages are star-free.
An $\omega$-regular language is called $\omega$-star-free if it's a finite union of languages of type $XY^{\omega}$, where $X$ and $Y^+$ are star-free.
Now, $L = a^*$ can be described as $\Sigma^* \setminus (\Sigma^* (\Sigma \setminus \{a\}) \Sigma^*)$, so $L$ is a star-free language. Since $L' = a^* b^{\omega}$ can be written as a concatenation of the form $XY^{\omega}$ where $X = L$ and $Y = \{b\}$ (it's easy to show that $Y^+$ is star-free) we can conclude that $L'$ is star-free.
For more information (and equivalent definitions) I can refer you to the following papers: First-order definable languages, On the Expressive Power of Temporal Logic, On the expressive power of temporal logic for infinite words
|
Can a basis for a vector space be made up of matrices instead of vectors? I'm sorry if this is a silly question. I'm new to the notion of bases and all the examples I've dealt with before have involved sets of vectors containing real numbers. This has led me to assume that bases, by definition, are made up of a number of $n$-tuples.
However, now I've been thinking about a basis for all $n\times n$ matrices and I keep coming back to the idea that the simplest basis would be $n^2$ matrices, each with a single $1$ in a unique position.
Is this a valid basis? Or should I be trying to get column vectors on their own somehow?
| Yes, you are right. The vector space of matrices of size $n\times n$ is actually a vector space of dimension $n^2$. In fact, just to spice things up: the vector space of all

* diagonal,
* symmetric, and
* triangular

matrices of dimension $n\times n$ is actually a subspace of the space of matrices of that size.
As with all subspaces, you can take any linear combination and stay within the space. (Also, the null matrix lies in all three of the above.)
Try to calculate the basis for the above three special cases. For the diagonal matrices, the basis is a set of $n$ matrices such that the $i^{\text{th}}$ basis matrix has a $1$ in the $(i,i)$ entry and $0$ everywhere else, as illustrated below.
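To make this concrete, here is the $n=2$ case of that diagonal basis (a small illustration added here, not part of the original answer):
$$
D_1=\begin{pmatrix}1&0\\0&0\end{pmatrix},\qquad
D_2=\begin{pmatrix}0&0\\0&1\end{pmatrix},\qquad
\begin{pmatrix}a&0\\0&b\end{pmatrix}=a\,D_1+b\,D_2 .
$$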
Try to figure out the basis vectors/matrices for symmetric and triangular matrices.
|
Efficiently solving a special integer linear programming with simple structure and known feasible solution Consider an ILP of the following form:
Minimize $\sum_{i=1}^N s_i$ subject to
$\sum_{k=i}^j s_k \ge c_1 (j-i) + c_2 - \sum_{k=i}^j a_k$ for all $1 \le i \le j \le N$, where $c_1, c_2 > 0$ are given constants, $(a_i)$ is a given sequence of non-zero natural numbers, and the $s_i$ are non-zero natural numbers.
Using glpk, it was no problem to solve this system in little time for $N=100$, with various parameter values. Sadly, due to the huge number of constraints, this does not scale well to larger values of $N$: glpk takes forever trying to find a feasible solution for the relaxed problem.
I know that every instance of this problem has a (non-optimal) feasible solution, e.g., $s_i = \max \{ 1, 2r - a_i \}$ for a certain constant $r$, and the matrix belonging to the system is totally unimodular. How can I make use of this information to speed up calculations? Would using a different tool help me?
Edit: I tried using CPlex instead. The program runs much faster now, but the scalability issues remain. Nevertheless, I can now handle the problem I want to address. It may be interesting to note that while it is possible to provide a feasible but non-optimal solution to CPlex (see the setVectors function in the Concert interface), this makes CPlex assume that the given solution is optimal (which is not necessarily the case) and hence give wrong results.
It would still be interesting to know if there is a better solution that does not involve throwing more hardware at the problem.
| I did not find a satisfying solution for this, so I will just re-iterate what I found: Using CPlex, the problem scales somewhat better. Sadly, it does not seem possible to tell CPlex that you have a feasible solution, only that you have a (claimed to be) optimal solution, which wastes effort.
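For readers who want to experiment, here is a minimal modeling sketch in Python using PuLP with its bundled CBC solver (a sketch only, not the setup used above; the constants c1, c2 and the sequence a are made-up sample data):

import pulp

# Hypothetical instance; substitute your own data.
N, c1, c2 = 20, 1.5, 3.0
a = [1 + (3 * i) % 5 for i in range(N)]  # some positive integers a_i

prob = pulp.LpProblem("min_sum_s", pulp.LpMinimize)
s = [pulp.LpVariable(f"s{i}", lowBound=1, cat="Integer") for i in range(N)]
prob += pulp.lpSum(s)  # objective: minimize the sum of the s_i

# One constraint per pair i <= j -- the O(N^2) blow-up that hurts at large N.
for i in range(N):
    for j in range(i, N):
        prob += pulp.lpSum(s[i:j + 1]) >= c1 * (j - i) + c2 - sum(a[i:j + 1])

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.value(prob.objective), [int(v.value()) for v in s])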
|
Probability that, given a set of uniform random variables, the difference between the two smallest values is greater than a certain value Let $\{X_i\}$ be $n$ iid uniform(0, 1) random variables. How do I compute the probability that the difference between the second smallest value and the smallest value is at least $c$?
I've messed around with this numerically and have arrived at the conjecture that the answer is $(1-c)^n$, but I haven't been able to derive this.
I see that $(1-c)^n$ is the probability that all the values would be at least $c$, so perhaps this is related?
| There's probably an elegant conceptual way to see this, but here is a brute-force approach.
Let our variables be $X_1$ through $X_n$, and consider the probability $P_1$ that $X_1$ is smallest and all the other variables are at least $c$ above it. The first condition follows automatically from the second, so we must have
$$P_1 = \int_0^{1-c}(1-c-t)^{n-1} dt$$
where the integration variable $t$ represents the value of $X_1$ (whose density is $1$ on $[0,1]$) and $(1-c-t)$ is the probability that each of $X_2,\dots,X_n$ individually satisfies the condition.
Since the situation is symmetric in the various variables, and two variables cannot be the least one at the same time, the total probability is simply $nP_1$, and we can calculate
$$ n\int_0^{1-c}(1-c-t)^{n-1} dt = n\int_0^{1-c} u^{n-1} du = n\left[\frac1n u^n \right]_0^{1-c} = (1-c)^n $$
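A quick Monte Carlo sanity check of the $(1-c)^n$ formula (a Python sketch added here, not part of the original answer):

import random

def estimate(n, c, trials=200_000):
    # Estimate P(second smallest - smallest >= c) for n iid U(0,1) draws.
    hits = 0
    for _ in range(trials):
        xs = sorted(random.random() for _ in range(n))
        if xs[1] - xs[0] >= c:
            hits += 1
    return hits / trials

n, c = 5, 0.1
print(estimate(n, c), (1 - c) ** n)  # the two values should be close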
|
The set of functions which map convergent series to convergent series Suppose $f$ is some real function with the above property, i.e.
if $\sum\limits_{n = 0}^\infty {x_n}$ converges, then $\sum\limits_{n = 0}^\infty {f(x_n)}$ also converges.
My question is: can anything interesting be said regarding the behavior of such a function close to $0$, other than the fact that $f(0)=0$?
| Answer to the follow-up question (whether such an $f$ must satisfy $f(x)=O(x)$ near $0$): no.
Let $f\colon\mathbb{R}\to\mathbb{R}$ be defined by
$$
f(x)=\begin{cases}
n\,x & \text{if } x=2^{-n}, n\in\mathbb{N},\\
x & \text{otherwise.}
\end{cases}
$$
Then $\lim_{x\to0}f(x)=f(0)=0$, $f$ maps convergent series to convergent series, but $f(x)/x$ is not bounded in any open set containing $0$. In particular $f$ is not differentiable at $x=0$. This example can be modified to make $f$ continuous.
Proof.
Let $\sum_{k=1}^\infty x_k$ be a convergent series. Let $I=\{k\in\mathbb{N}:x_k=2^{-n}\text{ for some }n\in\mathbb{N}\}$. For each $k\in I$ let $n_k\in\mathbb{N}$ be such that $x_k=2^{-n_k}$. Then
$$
\sum_{k=1}^\infty f(x_k)=\sum_{k\in I} n_k\,2^{-n_k}+\sum_{k\not\in I} x_k.
$$
The series $\sum_{k\in I} n_k\,2^{-n_k}$ is convergent. It is enough to show that also $\sum_{k\not\in I} x_k$ is convergent. This follows from the equality
$$
\sum_{k=1}^\infty x_k=\sum_{k\in I} x_k+\sum_{k\not\in I} x_k
$$
and the fact that $\sum_{k=1}^\infty x_k$ is convergent and $\sum_{k\in I} x_k$ absolutely convergent.
The proof is wrong: $\sum_{k\in I} x_k$ may be divergent. Consider the series
$$
\frac12-\frac12+\frac14-\frac14+\frac14-\frac14+\frac18-\frac18+\frac18-\frac18+\frac18-\frac18+\frac18-\frac18+\dots
$$
It is convergent, since its partial sums are
$$
\frac12,0,\frac14,0,\frac14,0,\frac18,0,\frac18,0,\frac18,0,\frac18,0,\dots
$$
The transformed series is
$$
\frac12-\frac12+\frac24-\frac14+\frac24-\frac14+\frac38-\frac18+\frac38-\frac18+\frac38-\frac18+\frac38-\frac18+\dots
$$
whose partial sums are
$$
\frac12,0,\frac12,\frac14,\frac34,\frac12,\frac78,\frac34,\frac98,1,\frac{11}8,\frac54,\dots
$$
which grow without bound.
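A short numeric check of this counterexample (a Python sketch, not in the original answer): block $n$ of the series consists of $2^{n-1}$ pairs $(+2^{-n},-2^{-n})$, and the transform replaces $+2^{-n}$ by $n\,2^{-n}$ while leaving the negative terms alone.

from fractions import Fraction

partial, sums = Fraction(0), []
for n in range(1, 12):
    plus, minus = Fraction(n, 2 ** n), Fraction(1, 2 ** n)
    for _ in range(2 ** (n - 1)):   # block n has 2^(n-1) pairs
        partial += plus             # f(+2^-n) = n * 2^-n
        sums.append(partial)
        partial -= minus            # f(-2^-n) = -2^-n, unchanged
        sums.append(partial)

# Each completed block adds (n-1)/2 overall, so the partial sums are unbounded.
print(float(max(sums)))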
On the other hand, $f(x)=O(x)$, the condition in Antonio Vargas' comment, is not enough when one considers series of arbitrary sign. Let
$$
f(x)=\begin{cases}
x\cos\dfrac{\pi}{x} & \text{if } x\ne0,\\
0 & \text{if } x=0,
\end{cases}
\quad\text{so that }|f(x)|\le|x|.
$$
Let $x_n=\dfrac{(-1)^n}{n}$. Then $\sum_{n=1}^\infty x_n$ converges, but
$$
\sum_{n=1}^\infty f(x_n)=\sum_{n=1}^\infty\frac1n
$$
diverges.
|
Probability that sum of rolling a 6-sided die 10 times is divisible by 10? Here's a question I've been considering: Suppose you roll a usual 6-sided die 10 times and sum up the results of your rolls. What's the probability that it's divisible by 10?
I've managed to solve it in a somewhat ugly fashion using the following generating series:
$(x+x^2+x^3+x^4+x^5+x^6)^{10} = x^{10}(1-x^6)^{10}(1+x+x^2+\cdots)^{10}$, which makes finding the probability somewhat doable if I have a calculator or lots of free time to evaluate binomials.
What's interesting though is that the probability ends up being just short of $\frac{1}{10}$ (in fact, it's about $0.099748$). If instead I roll the die $n$ times and ask whether the sum is divisible by $n$, the probability is well approximated by $\frac{1}{n} - \epsilon$.
Does anyone know how I can find the "error" term $\epsilon$ in terms of $n$?
| The distribution of the sum converges to a normal distribution with speed (if I remember correctly) $n^{-1/2}$, and that error term could dominate the other terms (since you only pick out a constant number, namely six, of values of the resulting distribution). However, there is a small problem: the probability that your sum is divisible by $n$ itself tends to $0$ with speed $n^{-1}$. Still, I typed something into Mathematica and this is what I got:
In[1]:=
X := Sum[
Erfc[(6*Sqrt[2]*(7n/2 - k*n+1))/(35n)]/2
-Erfc[(6*Sqrt[2]*(7n/2 - k*n))/(35n)]/2,
{k, {1,2,3,4,5,6}}
]
In[2]:= Limit[X, n -> Infinity]
Out[2]= 0
In[3]:= N[Limit[X*n, n -> Infinity]]
Out[3]= -0.698703
In[4]:= N[Limit[(X+1/n)*n, n -> Infinity]]
Out[4]= 0.301297
Here $\operatorname{Erfc}/2$ plays the role of the tail probability of the normal distribution, and
the formula inside adjusts for the mean and standard deviation of the sum. $X$ should be an approximation of what you are looking for. What In[3] and In[4] show is that either there is a typo in my formula or this does not converge to what you would think it should (in fact that may be true, I am not sure; i.e. in each sixth part you are always off from the center, or wherever the mean of that part is, by a constant margin). Hope that helps ;-)
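For comparison, the probability can be computed exactly with a small dynamic program over the running total modulo $n$ (a Python sketch, not part of the original answer):

from fractions import Fraction

def prob_divisible(n):
    # Exact P(sum of n fair six-sided dice is divisible by n).
    dist = [Fraction(0)] * n
    dist[0] = Fraction(1)          # running total starts at 0 mod n
    for _ in range(n):             # roll the die n times
        new = [Fraction(0)] * n
        for r, p in enumerate(dist):
            for face in range(1, 7):
                new[(r + face) % n] += p / 6
        dist = new
    return dist[0]

p = prob_divisible(10)
print(p, float(p))  # about 0.099748, just short of 1/10 as observed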
|
Coding theory (existence of codes with given parameters) Explain why each of the following codes can't exist:
*
*A self-complementary code with parameters $(35, 130, 15)$. (I tried using the Grey-Rankin bound, but $130$ falls within the bound.)
*A binary $(15, 2^8, 5)$ code. (I tried the Singleton bound, but no help.)
*A $10$-ary $(11, 100, 10)_{10}$ code. (I tried using the Singleton bound, but again it falls within the bound.)
| Let me elaborate on problem #2. As I said in my comment, that claim is wrong, because there does exist a binary code of length $15$ with $256$ words and minimum Hamming distance $5$.
I shall first give you a binary $(16,256,6)$ code aka the Nordstrom-Robinson code.
Consider the $\mathbf{Z}_4$-submodule $N$ of $\mathbf{Z}_4^8$ generated by the rows of the matrix
$$
G=\left(
\begin{array}{cccccccc}
1&3&1&2&1&0&0&0\\
1&0&3&1&2&1&0&0\\
1&0&0&3&1&2&1&0\\
1&0&0&0&3&1&2&1
\end{array}\right).
$$
Looking at the last four columns tells you immediately that $N$ is a free
$\mathbf{Z}_4$-module with the rows of $G$ as a basis, and therefore it has $256$ elements. It is easy to generate all $256$ of them, e.g. by a fourfold loop. Let us define a function called the Lee weight $w_L$. It is a modification of the Hamming weight. We define it first on elements of $\mathbf{Z}_4$ by declaring $w_L:0\mapsto 0$, $1\mapsto 1$,
$2\mapsto 2$, $3\mapsto 1$, and then extend the definition to vectors $\vec{w}=(w_1,w_2,\ldots,w_8)$ by
$$
w_L(\vec{w})=\sum_{i=1}^8w_L(w_i).
$$
It is now relatively easy to check (e.g. by listing all the 256 elements of $N$, but there are also cleaner ways of doing this) that for any non-zero $\vec{w}\in N$ we have $w_L(\vec{w})\ge 6$.
Then we turn the $\mathbf{Z}_4$-module $N$ into a binary code. We turn each element of $\mathbf{Z}_4$ to a pair of bits with the aid of the Gray mapping
$\varphi:\mathbf{Z}_4\rightarrow \mathbf{Z}_2^2$ defined as follows: $\varphi(0)=00$, $\varphi(1)=01$, $\varphi(2)=11$, $\varphi(3)=10$. We then extend this componentwise to a mapping from $\mathbf{Z}_4^8$ to $\mathbf{Z}_2^{16}$. For example, the first generating vector becomes
$$
\varphi: 13121000 \mapsto 01\ 10\ 01\ 11\ 01\ 00\ 00\ 00.
$$
The mapping $\varphi$ is not a homomorphism of groups, so the image $\varphi(N)$ is not
a subgroup of $\mathbf{Z}_2^{16}$, i.e. $\varphi(N)$ is not a linear code. However, we make the key observation that $\varphi$ is an isometry. Basically it turns the Lee weight into Hamming weight. So if $\varphi(\vec{w})$ and $\varphi(\vec{w}')$ are two distinct elements of
$\varphi(N)$, then
$$
d_{Hamming}(\varphi(\vec{w}),\varphi(\vec{w}'))=w_L(\vec{w}-\vec{w}')\ge6.
$$
It is easy to show this by first checking that this relation holds for all pairs of
elements of $\mathbf{Z}_4$. As the corresponding function on vectors is the componentwise sum, the relation holds for vectors as well.
Therefore $\varphi(N)$ is a (non-linear) binary $(16,256,6)$ code.
Finally, we get a non-linear binary $(15,256,5)$ code by dropping, say, the last bit
from all the vectors of $\varphi(N)$.
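A brute-force check of the whole construction (a Python sketch, not part of the original answer; it enumerates the codewords generated by the matrix $G$ above, applies the Gray map, and reports the minimum distances before and after puncturing):

from itertools import product

G = [[1, 3, 1, 2, 1, 0, 0, 0],
     [1, 0, 3, 1, 2, 1, 0, 0],
     [1, 0, 0, 3, 1, 2, 1, 0],
     [1, 0, 0, 0, 3, 1, 2, 1]]

GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}

# All Z4-linear combinations of the rows of G, Gray-mapped to 16 bits.
words = []
for coeffs in product(range(4), repeat=4):
    w = [sum(c * g for c, g in zip(coeffs, col)) % 4 for col in zip(*G)]
    words.append([bit for x in w for bit in GRAY[x]])

def min_distance(code):
    return min(sum(a != b for a, b in zip(u, v))
               for i, u in enumerate(code) for v in code[i + 1:])

print(len(words), min_distance(words))  # the answer claims 256 and 6
punctured = [w[:-1] for w in words]     # drop the last bit
print(min_distance(punctured))          # should then be at least 5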
|
Probabilistic paradox: Making a scratch in a die changes the probability? For dice that we cannot distinguish we have learned in class that the correct sample space is $\Omega _1 = \{ \{a,b\}\mid a,b\in \{1,\ldots,6\} \}$, whereas for dice that we can distinguish we have $\Omega _2 = \{ (a,b)\mid a,b\in \{1,\ldots,6\} \}$.
Now here's the apparent paradox: Suppose we initially have two identical dice. We want to evaluate the probability that the sum of the faces of the two dice is $4$. Since $4=1+3=2+2$, we have $P_1(\mbox{Faces}=4)=\frac{2}{|\Omega_1|}=\frac{2}{21}$. So far so good. But if we now make a scratch in one die, we can distinguish them, so suddenly the probability changes and we get $P_2(\mbox{Faces}=4)=\frac{3}{|\Omega_2|}=\frac{3}{36}=\frac{1}{12}$ (we get $3$ in the numerator since $(3,1) \neq (1,3)$).
Why does a single scratch change the probability of the sum of the faces being $4$?
(My guess would be that either these mathematical models, $\Omega _1,\Omega _2$, don't describe the reality - meaning rolling two dice - or they do, but in the first case, although the dice are identical, we can still distinguish them if we, say, always distinguish between the left die and the right one. But then what about closing our eyes during the experiment?)
| The correct probability distribution for dice treats them as distinguishable. If you insist on using the sample space for indistinguishable dice, the outcomes are not equally likely.
However, if you are doing quantum mechanics and the "numbers" become individual quantum states, indistinguishable dice must be treated using either Fermi or Bose statistics, depending on whether they have half-integer or integer spin.
|
Applying Euler's Theorem to Prove a Simple Congruence I have been stuck on this exercise for far too long:
Show that if $a$ and $m$ are positive integers with $(a,m)=(a-1,m)=1$, then
$$1+a+a^2+\cdots+a^{\phi(m)-1}\equiv0\pmod m.$$
First of all, I know that
$$1+a+a^2+\cdots+a^{\phi(m)-1}=\frac{a^{\phi(m)}-1}{a-1},$$
and by Euler's theorem,
$$a^{\phi(m)}\equiv1\pmod m.$$
Now, subtracting $1$ gives
$$a^{\phi(m)}-1\equiv0\pmod m,$$
and because $(a-1,m)=1$, I would like to divide by $a-1$ to obtain
$$\frac{a^{\phi(m)}-1}{a-1}\equiv0\pmod m,$$
that is,
$$1+a+a^2+\cdots+a^{\phi(m)-1}\equiv0\pmod m.$$
However, I get stuck here: is dividing the congruence by $a-1$ actually justified modulo $m$? Thanks in advance!
Note: I really do not know if I am tackling this problem correctly to begin with.
| Hint: By Euler's theorem $a^{\phi(m)}-1\equiv 0 \pmod m$, and $a^{\phi(m)}-1=(a-1)\left(1+a+\cdots+a^{\phi(m)-1}\right)$. Since $(a-1,m)=1$, the factor $a-1$ is coprime to $m$, so $m$ must divide the second factor.
|
Given 5 children and 8 adults, how many ways can they be seated so that there are no two children sitting next to each other.
Possible Duplicate:
How many ways are there for 8 men and 5 women to stand in a line so that no two women stand next to each other?
Given 5 children and 8 adults, how many different ways can they be seated so that no two children are sitting next to each other.
My solution:
Writing out all possible seating arrangements, I tried using $\displaystyle \frac{34\cdot5!\cdot8!}{13!}$ to get the solution, because $13!$ is the size of the sample space, $5!$ counts the arrangements of the children, $34$ was my count of ways to place them with no two children next to each other, and $8!$ counts the arrangements of the adults.
| The solution below assumes the seats are in a row:
This is a stars and bars problem. First, order the children ($5!$ ways). Now, suppose the adults are identical. They can go in any of the places on either side of or between the children. Set aside $4$ adults to space out the children, and place the other $4$ in any arrangement with the $5$ children; there are $\binom{9}{4}$ ways to do this. Finally, re-order the adults. So we get $$8!\,5!\binom{9}{4}$$
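A quick brute-force confirmation of this count (a Python sketch, not part of the original answer); note that $\binom{9}{4}=\binom{9}{5}=126$:

from itertools import combinations
from math import comb, factorial

# Choose 5 of 13 seats for the children with no two chosen seats adjacent.
ok = sum(1 for pos in combinations(range(13), 5)
         if all(b - a > 1 for a, b in zip(pos, pos[1:])))

print(ok, comb(9, 4))                    # both are 126
print(ok * factorial(5) * factorial(8))  # 8! * 5! * C(9,4)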
|
Dummit Foote 10.5.1(d) commutative diagram of exact sequences.
I solved other problems, except (d): if $\beta$ is injective, $\alpha$ and $\gamma$ are surjective, then $\gamma$ is injective.
Unlike the others, I don't know where to start.
| As the comments mention, this exercise is false as stated. Here's a counterexample: let $A$ and $B$ be groups, with usual inclusion and projection homomorphisms $$\iota_A(a) = (a,1),$$ $$\iota_B(b) = (1,b)$$ and $$\pi_B(a,b) = b.$$
Then the following diagram meets the stated requirements, except $\pi_B$ is not injective.
$$\require{AMScd}
\begin{CD}
A @>{\iota_A}>> A\times B @>{\iota_B \circ \pi_B}>> A\times B\\
@V{\operatorname{Id}}VV @V{\operatorname{Id}}VV @V{\pi_B}VV\\
A @>{\iota_A}>> A \times B @>{\pi_B}>> B
\end{CD}$$
This indeed exploits the fact that $\varphi$ is not required to be surjective.
|
Ordered partitions of an integer Let $k>0$ and $(l_1,\ldots,l_n)$ be given with $l_i>0$ (the $l_i$'s need not be distinct). How do I count the number of distinct tuples
$$(a_1,\ldots,a_r)$$
where $a_1+\ldots+a_r=k$ and each $a_i$ is some $l_j$. There will typically be a different length $r$ for each such tuple.
If there is not a reasonably simple expression, is there some known asymptotic behavior as a function of $k$?
| The generating function for $P_d(n)$, the number of partitions of $n$ in which no part appears more than $d$ times, is given by
$$
\prod_{k=1}^\infty \frac{1-x^{(d+1)k}}{1-x^k} = \sum_{n=0}^\infty P_d(n)x^n.
$$
In your case $d=1$ and the product simplifies to
$$
\prod_{k=1}^\infty (1+x^k)= 1+x+x^2+2 x^3+2 x^4+3 x^5+4 x^6+5 x^7+6 x^8+8 x^9+10 x^{10}+12 x^{11}+\cdots
$$
See here (eqn. 47) or at OEIS. Interestingly, the number of partitions of $n$ into distinct parts matches the number of partitions of $n$ into odd parts, something that was asked/answered here and is known as Euler's identity:
$$
\prod_{k=1}^\infty (1+x^k) = \prod_{k=1}^\infty (1-x^{2k-1})^{-1}
$$
|