Prove that if $AB$ is invertible then $B$ is invertible. I know this proof is short but a bit tricky. So I suppose that $AB$ is invertible then $(AB)^{-1}$ exists. We also know $(AB)^{-1}=B^{-1}A^{-1}$. If we let $C=(B^{-1}A^{-1}A)$ then by the invertible matrix theorem we see that since $CA=I$(left inverse) then $B$ is invertible. Would this be correct?
Edit
Suppose $AB$ is invertible. Then there exists a matrix, call it $X$, such that $XAB=I$. Let $C=XA$. Then $CB=I$, and it follows that $B$ is invertible by the invertible matrix theorem.
|
$\;AB\;$ invertible $\;\implies \exists\;C\;$ s.t. $\;C(AB)=I\;$, but using associativity of matrix multiplication:
$$I=C(AB)=(CA)B\implies B\;\;\text{is invertible and}\;\;CA=B^{-1}$$
|
$\sum_{k=0}^{n/2} {n\choose{2k}}=\sum_{k=1}^{n/2} {n\choose{2k-1}}$, Combinatorial Proof: How am I supposed to prove combinatorially:
$$\sum_{k=0}^{n/2} {n\choose{2k}}=\sum_{k=1}^{n/2} {n\choose{2k-1}}$$
$${n\choose{0}}+{n\choose{2}}+{n\choose{4}}+\dots={n\choose{1}}+{n\choose{3}}+{n\choose{5}}+\cdots$$
Absolutely clueless.
|
The question as currently posed can be answered by looking at the symmetry of the rows of Pascal's triangle corresponding to odd $n$ (which have an even number of elements). By definition
${n\choose{k}}=\frac{n!}{k!(n-k)!}$.
Therefore ${n\choose{0}}={n\choose{n}}$, ${n\choose{1}}={n\choose{n-1}}$, and in general ${n\choose{k}}={n\choose{n-k}}$. Thus, the set of odd indexed elements and the set of even indexed elements in each row are identical.
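For a quick numerical sanity check, here is a minimal Python sketch (Python is just a convenient choice here; $n=9$ is an arbitrary odd example, and in fact the identity holds for every $n\ge1$ via $(1-1)^n=0$):

from math import comb

n = 9  # arbitrary example; any n >= 1 works
evens = sum(comb(n, k) for k in range(0, n + 1, 2))
odds = sum(comb(n, k) for k in range(1, n + 1, 2))
print(evens, odds)  # prints 256 256, i.e. both equal 2**(n-1)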
|
Topologist's Sine Curve not a regular submanifold of $\mathbb{R^2}$? I am trying to work out the details of following example from page 101 of Tu's An Introduction to Manifolds:
Example 9.3. Let $\Gamma$ be the graph of the function $f(x) = \sin(1/x)$ on the interval $]0, 1[$, and let $S$ be the union of $\Gamma$ and the open interval $I=\{(0,y)\in \mathbb{R}^2 \mid -1<y<1\}$. The subset $S$ of $\mathbb{R}^2$ is not a regular submanifold for the following reason: if $p$ is in the interval $I$, then there is no adapted chart containing $p$, since any sufficiently small neighborhood $U$ of $p$ in $\mathbb{R}^2$ intersects $S$ in infinitely many components.
Using Tu's definitions, I need to show that given a neighborhood $U$ of $p$, there exists no homeomorphism $\phi$ from $U$ onto an open set $V \subset\mathbb{R}^2$ with the property that $U \cap S$ is the pre-image under $\phi$ of the $x$- or $y$-axis intersected with $V$.
I am not sure where the fact that $U \cap S$ has infinitely many components comes into play. I tried to derive a contradiction using this fact; but even if restrict my attention to connected neighborhoods $U$ of $p$, the intersection of the connected set $\phi(U)$ with the $x$ or $y$ axes might have infinitely many components (I think), so there's no contradiction with the fact that homeomorphism preserves components.
I would appreciate any help!
|
One simple way to see that $S$ is not a regular submanifold around $p$ is that it is not locally Euclidean: while there exist open subsets of $\mathbb{R}$ with as many connected components as you like, there are no points like $p$ in $\mathbb{R}$, that is, with every neighborhood not connected.
So, in the sentence "since any sufficiently small neighborhood $U$ of $p$ in $\mathbb{R}^2$ intersects $S$ in infinitely many components" the important part (besides "many components") is "any neighborhood".
|
Computing Brauer characters of a finite group I am studying character theory from the book "Character Theory of Finite Groups" by Martin Isaacs. (I am not too familiar with valuations and algebraic number theory.)
In the last chapter on modular representation theory, Brauer characters, blocks, and defect groups are introduced.
My question is this: How do we find the irreducible Brauer characters and blocks, given a group and a prime?
For instance, let's say we have $p=3$ and the group $G = S_5$, the symmetric group.
An example of the precise calculation or method used to determine the characters would be very helpful. Thanks.
|
This is a difficult question and you would probably need to learn more theory in order to understand the different methods available.
But one method that is often used in practice is to calculate the representations and then just find the Brauer character directly from the matrices of the representations. Of course, you have to express the traces of the matrices as sums of roots of unity over the finite field, and then lift this sum to the complex numbers to get the Brauer character, but that is not particularly difficult. (That is not completely true - since the lifting is not uniquely defined, you may have to work hard if you want to make it consistent.)
With ordinary (complex) character tables, it is generally much easier to calculate the characters than the matrices that define the representations, but that is not always the case with modular representations. There are fast algorithms for computing representations over finite fields, using the so-called MeatAxe algorithm.
I am more familiar with Magma than with GAP, and I expect there are similar commands in GAP, but in Magma I can just type
> G := Sym(5);
> I := AbsolutelyIrreducibleModules(G, GF(3));
and I get the five absolutely irreducible representations in characteristic three as group homomorphisms, and so I can just look at the images of elements from the different conjugacy classes. There is a Magma command that does this for you, giving the Brauer character table:
> [BrauerCharacter(i): i in I];
[
( 1, 1, 1, 0, 1, 1, 0 ),
( 1, -1, 1, 0, -1, 1, 0 ),
( 4, 2, 0, 0, 0, -1, 0 ),
( 4, -2, 0, 0, 0, -1, 0 ),
( 6, 0, -2, 0, 0, 1, 0 )
]
|
contour integral with integration by parts Is there a complex version of integration by parts? I saw someone use it but didn't find it in a textbook. I tested the integrals $\int_{\mathcal{C}}\frac{\log(x+1)}{x-2}\mathrm{d}x$ and $\int_{\mathcal{C}}\frac{\log(x-2)}{x+1}\mathrm{d}x$, where $\mathcal{C}$ encloses both $-1$ and $2$. But the results do not match. Is it because they are not equal in the first place, or did I choose the wrong branch cut?
|
Integration by parts is just the product rule and the Fundamental Theorem of Calculus. But you need well-defined analytic functions on your contour, which you don't have here.
|
An application of fixed point theorem I want to use the fixed point method to solve the equation to find $y$: $$ y = c_1 y^3 - c_2 y$$where $c_1, c_2$ are real valued constants. So I designed $$ y_{k+1} = c_1 y_k^3 - c_2 y_k$$ to approximate $y$. But I don't know what to do next. Also I want to know the convergence for this equation.
|
Do you want to do this on a computer or by hand? Approximating things by hand usually makes little sense, so suppose by computer.
If so, then the thing to do is first to put some value of $y_0$ (probably close to $0$, or else $y_k \to \infty$ as $k \to \infty$, but also not precisely $0$, or else $y_k = 0$ for all $k$). Keep using the recursion to generate $y_k$, until you reach a good solution (a reasonable check is $y_{k+1} \simeq y_k$), or run out of patience ($k$ is very large, say $k \gg 1000$), or you see that the sequence diverges ($y_k$ is very large, say $y_k \gg 1000$). If you found an approximate solution, then the job is done. If not, then you probably should try again with a different choice of $y_0$ (it might be reasonable to put $y_0$ random).
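For concreteness, here is a minimal Python sketch of exactly this loop (the function name and the patience/divergence cutoffs are hypothetical choices, not part of the question):

def fixed_point(c1, c2, y0, tol=1e-12, max_iter=1000, big=1e3):
    # iterate y_{k+1} = c1*y_k^3 - c2*y_k until it settles, blows up,
    # or we run out of patience
    y = y0
    for _ in range(max_iter):
        y_next = c1 * y**3 - c2 * y
        if abs(y_next - y) < tol:
            return y_next   # y_{k+1} is approximately y_k: good enough
        if abs(y_next) > big:
            return None     # diverged; retry with another y0
        y = y_next
    return None             # out of patience

print(fixed_point(1.0, 0.5, 0.1))  # converges to the trivial solution y = 0

(For many choices of $c_1, c_2$ the nonzero fixed points actually repel this particular iteration, since $|g'(y^\ast)| = |2c_2+3|$ there, which is one more reason several tries with different $y_0$ may be needed.)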
Of course, it is usually much simpler to solve the equation by transforming it into the form: $$ y(c_1 y^2 - (c_2+1)) = 0$$
and then computing $y = 0$ or $y = \pm \sqrt{\frac{c_2+1}{c_1}}$. In some circumstances you explicitly don't want to use square roots, however (e.g. you want to generalise the method afterwards, or you just want to learn how the fixed point method works, or you're programming and your language does not have a square root function). I'm assuming it's one of those situations.
|
How to find cubic non-snarks where the $\min(f_k)>6$ on surfaces with $\chi<0$? Henning once told me that,
[i]t follows from the Euler characteristic of the plane that the average face degree of a 3-regular planar graph with $F$ faces is $6-12/F$, which means that every 3-regular planar graph has at least one face with degree $5$ or lower.
I tried to understand and extend this and got the following:
Given a $k$-regular graph. Summing over the face degrees $f_n$ gives twice the number of edges $E$ and this is $k$ times the number of vertices $V$:
$$
\sum f_n = 2E =kV \tag{1}
$$
Further we have Euler's formula saying
$$
V+F = E +\chi,
$$
where $\chi$ is Euler's characteristic of the surface where the graph lives on. Again we insert $E=\frac k2V$ and get:
$$
F=\left( \frac k2 -1 \right)V+\chi\\
V=\frac{F-\chi}{\frac k2 -1}. \tag{2}
$$
Dividing $(1)$ by $F$ and inserting $(2)$ gives:
$$
\frac{\sum f_n}{F}= \frac{k(F-\chi)}{(\frac k2 -1)F}=\frac {2k}{k -2} \left( 1-\frac{\chi}{F}\right) \tag{3}
$$
or
$$
\sum f_n=\frac {2k}{k -2} \big( F-\chi\big). \tag{3$^\ast$}
$$
Plugging in $k=3$ and $\chi=2$ (the characteristic of the plane), we get back Henning's formula; but when e.g. $\chi=-2$, so that the surface we draw on could be a double torus, the average degree becomes:
$$
6 \left( 1+\frac{2}{F}\right)
$$
How to find cubic graphs where the $\min(f_k)>6$ on surfaces with $\chi<0$?
EDIT The graph should not be a snark.
|
For orientable surfaces, here's a representative element of a family of non-snarky cubic graphs on an $n$-torus with $4n-2$ vertices, $6n-3$ edges and a single $(12n-6)$-sided face.
If it is a problem that some pairs of vertices have more than one edge going between them, that can easily be fixed with some local rearrangements (which can be chosen to preserve the green-blue Hamiltonian cycle and thus non-snarkiness).
For non-orientable surfaces, I think it is easiest to start by appealing to the classification theorem and construe the surface as a sphere with $k\ge 2$ cross-caps on it. Now if you start with a planar graph and place a cross-cap straddling one of its edges (so following the edge from one end to another will make you arrive in the opposite orientation), the net effect is to fuse the two faces the edge used to separate.
Therefore, start with an arbitrary planar non-snarky cubic graph with $2k-2$ vertices and $3k-3$ edges. Select an arbitrary spanning tree (comprising $2k-3$ edges), and place cross-caps on the $k$ edges not in the spanning tree. This will fuse all of the original graph's faces, and we're left with a graph with a single $(6k-6)$-sided face.
In each of the above cases, if you want more faces, simply subdivide the single one you've already got with new edges. The face itself, before you glue its edges together to form the final closed surface, is just a plain old $(12n-6)$- or $(6k-6)$-gon, so you can design the subdivision as a plane drawing.
|
How much money should we take? I'm a new user so if my question is inappropriate, please comment (or edit maybe).
We want to define a dice game. We will be the casino and a customer will roll a die. (I will assume the customer is a man.) He can stop whenever he wants, and when he stops he takes money equal to the sum of his rolls so far. But if he rolls a 1 he must stop and he can't take any money. For the casino's sake, what is the minimum amount of money we should charge at the beginning of the game?
For example: if he rolls 2-5-1, he can't get any money, but if he rolls a 4 and stops he gets 4 dollars.
I don't have any good work for this problem but I guess it is 10. Also, maybe this helps:
If the game were free and we could play it once a year, when should we stop? Obviously, if we have collected 1000 dollars we should stop, and if we have gained only 2 dollars we should keep playing.
The answer to this question is 20, because if our total so far is $t$, the expected gain from the next roll is $\frac26 + \frac36 + \frac46 + \frac56 + \frac66 -\frac t6 = \frac{20-t} 6 $.
Please don't hesitate to edit linguistic mistakes in my question. Thanks for any help.
|
Let $f(n)$ be the expected final win for a player already having a balance of $n$ and employing the optimal strategy. Trivially, $f(n)\ge n$ as he might decide to stop right now.
However, if the optimal strategy tells him to play at least once more, we find that $f(n)=\frac16\cdot 0+\sum_{k=2}^6 \frac16f(n+k)$. Thus
$$\tag1 f(n)=\max\left\{n,\frac16\sum_{k=2}^6f(n+k)\right\}.$$
If the player always plays as if his current balance were one more than it actually is, the expected win with starting balance $n$ will be $f(n+1)-1+p$ where $p$ is his probability of ending with rolling a "1". Therefore $f(n)\ge f(n+1)-1$ and by induction $f(n+k)\le f(n)+k$. Then from $(1)$ we get
$$ f(n)\le \max\left\{n,\frac56f(n)+\frac{10}{3}\right\}$$
Especially, $f(n)>n$ implies $f(n)\le \frac56f(n)+\frac{10}3$, i.e. $f(n)\le 20$. Thus $f(n)=n$ if $n\ge 20$ and we can calculate $f(n)$ for $n<20$ backwards from $(1)$. We obtain step by step
$$\begin{align}f(19)&=\frac{115}6\\
f(18)&=\frac{55}3\\
&\vdots\\
f(0)&=\frac{492303203}{60466176}\approx 8.1418\end{align}$$
as the expected value of the game when starting with zero balance, and this is also the fair price.
We have at the same time found the optimal strategy: As $f(n)>n$ iff $n\le 19$, the optimal strategy is to continue until you have collected at least $20$ and then stop (this matches your remarks in your last paragraph).
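To make the backward recursion concrete, here is a minimal Python sketch of the computation (exact rational arithmetic; it should reproduce the value above):

from fractions import Fraction

f = {n: Fraction(n) for n in range(20, 26)}   # f(n) = n once n >= 20
for n in range(19, -1, -1):                   # work backwards using (1)
    f[n] = max(Fraction(n), Fraction(1, 6) * sum(f[n + k] for k in range(2, 7)))

print(f[0], float(f[0]))  # 492303203/60466176, approximately 8.1418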
|
Are quantifiers a primitive notion? Are quantifiers a primitive notion? I know that one can be defined in terms of the other one, so the question can be posed, for example, like this: is the universal quantifier a primitive notion? I know that $\forall x P (x) $ can be viewed as a logical conjunction of a predicate $ P $ being applied to all possible variables in, for example, $\sf ZFC$. But how can one write such a statement down formally? Also, it seems you cannot use the notion of a set to define the domain of discourse, because you are trying to build $\sf ZFC$ from scratch, and sets make sense only inside $\sf ZFC$. Obviously, I'm missing a lot here. Any help is appreciated.
|
Note for example in PA that even if $P(0)$ and $P(1)$ and $P(2)$ and ... are all theorems, it may happen that $\forall n\colon P(n)$ is not a theorem. Thus $\forall n\colon P(n)$ is in fact something different from $P(0)\land P(1)\land P(2)\land \ldots$, even if one were to accept such an infinite string as a wff (which is a Pandora's box that should not be opened, as in the next step such an infinite conjunction would require an infinite proof, and so on).
So this means that quantification does bear a "new" notion and should be considered primitive.
|
$X$ homeomorphic to $f(X)$
Let $X$, and $Y$ be topological spaces, and let $f:X\rightarrow Y$ be a continuous and one-to-one map.
When is $X$ homeomorphic to $f(X)$?
|
Well... when $f$ is open (or closed). A nice criterion is: $X$ compact, $Y$ Hausdorff, then $f$ is a closed map. Indeed let $C\subset X$ be closed, then $C$ is compact. The continuous image of a compact set is compact, so $f(C)\subset Y$ is compact, and thus closed.
Note: I interpreted your question as: "When is $f$ a homeomorphism onto its image?" Obviously, as Daniel Fisher stated in a comment, $X$ and $f(X)$ can be homeomorphic without $f$ being a homeomorphism.
|
Examples of uncountable sets with zero Lebesgue measure I would like examples of uncountable subsets of $\mathbb{R}$ that have zero Lebesgue measure and do not come from the Cantor set construction.
Thanks.
|
Let $(r_n)_{n\in\mathbb{N}}$ be a dense sequence in $\mathbb{R}$ (it could, for example, be an enumeration of the rationals). For $k \in \mathbb{N}$, let
$$U_k = \bigcup_{n\in\mathbb{N}} (r_n - 2^{-(n+k)},\, r_n + 2^{-(n+k)}).$$
$U_k$ is a dense open set with Lebesgue measure $\leqslant 2^{2-k}$, thus
$$N = \bigcap_{k\in\mathbb{N}} U_k$$
is a set of the second category (Baire), hence uncountable, and has Lebesgue measure $0$.
|
Average of all 6 digit numbers that contain only digits $1,2,3,4,5$ How do I find the average of all $6$ digit numbers which consist of only digits $1,2,3,4$ and $5$?
Do I have to list all the possible numbers and then divide the sum by the count? There has to be a more efficient way, right?
Thank you!
|
Don't want to completely give it away, but there are $5^6$ of these numbers as the first through sixth digits can all take on five different values. I'm sure there's something slicker you could do, but it should be easy to then sum them all up by evaluating the sum
$$
\sum_{a=1}^5 \sum_{b=1}^5 \sum_{c=1}^5 \sum_{d=1}^5 \sum_{e=1}^5 \sum_{f=1}^5 (a \cdot 10^5+b \cdot 10^4+ c \cdot 10^3+d \cdot 10^2+ e \cdot 10^1 + f \cdot 10^0)
$$
and dividing by the total number of them.
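If you'd rather check numerically, brute force is cheap at $5^6$ numbers; a minimal Python sketch (running it of course reveals the answer the hint above withholds):

from itertools import product

nums = [sum(d * 10**i for i, d in enumerate(digits))
        for digits in product([1, 2, 3, 4, 5], repeat=6)]
print(sum(nums) / len(nums))  # the average over all 5**6 numbers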
|
How to check a subset of a ring is a subring? To check that a subset of a given ring is a subring,
is it enough to check that the subset is closed under induced operations(multiplication and addition) or
do I also need to show that it contains 0 and additive inverses of each element?
|
You do need to show that it contains an additive inverse for each of its elements. (For example, $\mathbb{N}$ is not a subring of $\mathbb{Z}$ though it is closed under addition and multiplication.) Provided that you know the subset is nonempty, this together with it being closed under addition will then imply that $0$ is in there.
|
$(a^{n},b^{n})=(a,b)^{n}$ and $[a^{n},b^{n}]=[a,b]^{n}$? How to show that $$(a^{n},b^{n})=(a,b)^{n}$$ and $$[a^{n},b^{n}]=[a,b]^{n}$$ without using modular arithmetic? Seems to have very interesting applications.
Try: $(a^{n},b^{n})=d\Longrightarrow d\mid a^{n}$ and $d\mid b^n$
|
Show that the common divisors of $a^n$ and $b^n$ all divide $(a,b)^n$ and that any divisor of $(a,b)^n$ divides $a^n$ and $b^n$ (the proofs are pretty straightforward). It might be useful to consider prime factorization for the second direction.
Similarly, show that $[a,b]^n$ is a multiple of $a^n$ and $b^n$ but that it is also the smallest such multiple (and again there, prime factorization might be useful in the second case). If you need more details just ask.
EDIT :
Let's use prime factorization, I think this way everything makes more sense. To show equality, it suffices to show that the prime powers dividing either side are the same. Let's show $(a^n, b^n) = (a,b)^n$ first.
Since $p$ is prime, $p$ divides $a$ if and only if it divides $a^n$ and similarly for $b$ ; so if $p$ does not divide $a$ or $b$, then $p^0 = 1$ is the greatest power of $p$ that divides both sides. If $p$ divides both $a$ and $b$, let $p^k$ be the greatest power of $p$ dividing $(a,b)$, so that $p^{kn}$ is the greatest power of $p$ dividing $(a,b)^n$. Since $p^k$ divides $a$, $p^{kn}$ divides $a^n$, and similarly for $b$. For obvious reasons the greatest power of $p$ dividing both $a^n$ and $b^n$ must be a power of $p^n$. But if $p^{(k+1)n}$ divided both $a^n$ and $b^n$, then $p^{(k+1)}$ would divide $a$ and $b$, contradicting the fact that $p^k$ is the greatest power of $p$ dividing $(a,b)$. Therefore $p^{kn}$ is the greatest power of $p$ dividing $(a^n,b^n)$ and the greatest power of $p$ dividing $(a,b)^n$, so taking the product over all primes, $(a^n,b^n) = (a,b)^n$.
For $[a^n,b^n] = [a,b]^n$ you can do very similar techniques as with the gcd, except all the 'greatest' are replaced by 'smallest' in the last proof, and 'division' is replaced by 'being a multiple of'.
Hope that helps,
|
Evaluate the integral $\int_{0}^{+\infty}\frac{\arctan \pi x-\arctan x}{x}dx$ Compute improper integral : $\displaystyle I=\int\limits_{0}^{+\infty}\dfrac{\arctan \pi x-\arctan x}{x}dx$.
|
We have
$$\int_a^b \dfrac{\arctan \pi x-\arctan x}{x}dx=\int_{\pi a}^{\pi b}\dfrac{\arctan x}{x}dx-\int_{ a}^{ b}\dfrac{\arctan x}{x}dx\\=\int_{ b}^{\pi b}\dfrac{\arctan x}{x}dx-\int_{ a}^{ \pi a}\dfrac{\arctan x}{x}dx$$
and since the function $\arctan$ is increasing,
$$\arctan( b)\log\pi=\arctan( b)\int_b^{\pi b}\frac{dx}{x}\leq\int_{ b}^{\pi b}\dfrac{\arctan x}{x}dx\\ \leq\arctan(\pi b)\int_b^{\pi b}\frac{dx}{x}=\arctan(\pi b)\log\pi$$
so if $b\to\infty$ we have
$$\int_{ b}^{\pi b}\dfrac{\arctan x}{x}dx\to\frac{1}{2}\pi\log\pi$$
and by a similar method we prove that
$$\int_{ a}^{ \pi a}\dfrac{\arctan x}{x}dx\to 0,\quad a\to0$$
hence we conclude
$$\displaystyle I=\int\limits_{0}^{+\infty}\dfrac{\arctan \pi x-\arctan x}{x}dx=\frac{1}{2}\pi\log\pi$$
|
Distributing persons into cars We want to distribute $10$ persons into $6$ different cars, knowing that each car can take at most three persons. How many ways are there to do it? The order of the persons inside the same car is not important, and cars can be empty.
|
If we put $i$ people in the 1-st car, there's $\binom{10}{i}$ ways to do this. Once this is done, we put $j$ people in the 2-nd car, and there's $\binom{10-i}{j}$ ways to do this. And so on, until we get to the final car, where we attempt to put in all of the unassigned passengers. If there's more than 3, we discard this case.
Hence the number of ways is: $$\scriptsize \sum_{i=0}^3 \binom{10}{i} \sum_{j=0}^3 \binom{10-i}{j} \sum_{k=0}^3 \binom{10-i-j}{k} \sum_{\ell=0}^3 \binom{10-i-j-k}{\ell} \sum_{m=0}^3 \binom{10-i-j-k-\ell}{m} [0 \leq 10-i-j-k-\ell-m \leq 3].$$
Here $[0 \leq 10-i-j-k-\ell-m \leq 3]$ takes the value $1$ if $0 \leq 10-i-j-k-\ell-m \leq 3$ is true and $0$ otherwise.
In GAP this is computed by
WithinBounds:=function(n)
if(n>=0 and n<=3) then return 1; fi;
return 0;
end;;
Sum([0..3],i->Binomial(10,i)*Sum([0..3],j->Binomial(10-i,j)*Sum([0..3],k->Binomial(10-i-j,k)*Sum([0..3],l->Binomial(10-i-j-k,l)*Sum([0..3],m->Binomial(10-i-j-k-l,m)*WithinBounds(10-i-j-k-l-m))))));
which returns $36086400$.
Alternatively, let $\mathcal{G}$ be the set of partitions of $\{1,2,\ldots,10\}$ of size at most $6$ with parts of size at most $3$. Given a partition $P \in \mathcal{G}$, there are $\binom{6}{|P|} |P|!$ ways to distribute the passengers among the cars in such a way to as give rise to the partition $P$ (after discarding empty cars). So, the number is also given by $$\sum_{P \in \mathcal{G}} \binom{6}{|P|} |P|!.$$
This is implemented in GAP via:
S:=Filtered(PartitionsSet([1..10]),P->Size(P)<=6 and Maximum(List(P,p->Size(p)))<=3);;
Sum(S,P->Binomial(6,Size(P))*Factorial(Size(P)));
which also returns $36086400$.
|
Question involving exponential tower of 19 Consider:
$$
y = \underbrace{19^{19^{\cdot^{\cdot^{\cdot^{19}}}}}}_{100 \text{ times}}
$$
with the tower containing a hundred $ 19$s. Take the sum of the digits of the resulting number. Again, add the digits of this new number and get the sum. Keep doing this process till you reach a single-digit number. Find the number.
Here's what I tried so far: every number which is a power of $19$ has to end in either $1$ or $9$. Also, by playing around a bit, taking different powers of $19$, I found that regardless of whether the power of $19$ is odd or even, the single-digit number obtained at the end is always $1$. I've been trying to prove this, but I've no idea how to do it. Can anyone help me out?
|
$$10^0a_0+10^1a_1+10^2a_2+\ldots+10^na_n=(a_0+a_1+\ldots+a_n)+\text{a multiple of }9.$$
Therefore taking the sum of the digits of a number gives you a number that leaves the same remainder upon division by $9$ as the one you had before (and it is also smaller, as long as the original number is not $<10$). Therefore the process ends with a one-digit number that leaves the same remainder upon division by $9$ as your number. Since $19\equiv 1\pmod 9$, every power of $19$, and hence the whole tower, is $\equiv 1\pmod 9$, so the single-digit number you reach is $1$.
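As a sanity check, here is a minimal Python sketch (a full hundred-story tower is far too large to build explicitly, but a two-story tower already illustrates the point, and the congruence $19\equiv1\pmod 9$ settles the rest):

def digit_root(n):
    # repeatedly sum decimal digits until a single digit remains
    while n >= 10:
        n = sum(int(d) for d in str(n))
    return n

n = 19**19                   # two-story tower; taller towers are intractable directly
print(digit_root(n), n % 9)  # prints 1 1, since 19 = 1 (mod 9)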
|
Folliation and non-vanishing vector field. The canonical foliation on $\mathbb{R}^k$ is its decomposition into parallel sheets $\{t\} \times \mathbb{R}^{k-1}$ (as oriented submanifolds). In general, a foliation $\mathcal{F}$ on a compact, oriented manifold $X$ is a decomposition into $1-1$ immersed oriented manifolds $Y_\alpha$ (not necessarily compact) that is locally given (preserving all orientations) by the canonical foliation in a suitable chart at each point. For example, the lines in $\mathbb{R}^2$ of any fixed slope (possibly irrational) descend to a foliation on $T^2 = \mathbb{R}^2/\mathbb{Z}^2$.
(a) If $X$ admits a foliation, prove that $\chi(X) = 0$. (Hint: Partition of unity.)
(b) Prove (with suitable justification) that $S^2 \times S^2$ does not admit a foliation as defined above.
Theorem
A compact, connected, oriented manifold $X$ possesses a nowhere vanishing vector field if and only if its Euler characteristic is zero.
Question: How could $X$ in this problem satisfy the connectedness property in the theorem? Can I just say that if it is not connected, we treat each connected component individually?
|
I assume that your manifold and foliation are smooth and the foliation is of codimension 1; otherwise see Jack Lee's comment. Then pick a Riemannian metric on $X$ and at each point $x\in X$ take the unit vector $u_x$ orthogonal to the leaf $F_x$ through $x$: there are two choices, but since your foliation is transversally orientable, you can make a consistent choice of $u_x$. Then $u$ is a nonvanishing vector field on $X$.
In fact, orientability is irrelevant: Clearly, it suffices to consider the case when $X$ is connected. Then you can pass to a 2-fold cover $\tilde{X}\to X$ so that the foliation ${\mathcal F}$ on $X$ lifts to a transversally oriented foliation on $\tilde{X}$. See Proposition 3.5.1 of
A. Candel, L. Conlon, "Foliations, I", Springer Verlag, 1999.
You should read this book (and, maybe, its sequel, "Foliations, II") if you want to learn more about foliations.
Then $\chi(\tilde{X})=0$. Thus, $\chi(X)=0$ too. Now, recall that a smooth compact connected manifold admits a nonvanishing vector field if and only if it has zero Euler characteristic. Thus, $X$ itself also admits a nonvanishing vector field.
Incidentally, Bill Thurston proved in 1976 (Annals of Mathematics) that the converse is also true: Zero Euler characteristic for a compact connected manifold implies existence of a smooth codimension 1 foliation. This converse is much harder.
|
What would have been our number system if humans had more than 10 fingers? Try to solve this puzzle. Try to solve this puzzle:
The first expedition to Mars found only the ruins of a civilization. From the artifacts and pictures, the explorers deduced that the creatures who produced this civilization were four-legged beings with a tentacle that branched out at the end with a number of grasping "fingers". After much study, the explorers were able to translate Martian mathematics. They found the following equation:
$$5x^2 - 50x + 125 = 0$$
with the indicated solutions $x=5$ and $x=8$. The value $x=5$ seemed legitimate enough, but $x=8$ required some explanation. Then the explorers reflected on the way in which Earth's number system developed, and found evidence that the Martian system had a similar history. How many fingers would you say the Martians had?
$(a)\;10$
$(b)\;13$
$(c)\;40$
$(d)\;25$
P.S. This is not a home work. It's a question asked in an interview.
|
The correct answer is (a) 10.
In base $b$ the roots of $5x^2 - 50x + 125 = 0$ sum to $50_b/5 = b$; since $5+8=13$, the Martian base is $13$, i.e. the Martians had thirteen fingers. There is no comment about which number system the given answers refer to; as all other numbers refer to the Martian number system, we can safely assume the answers do as well, and thirteen is written $10$ in base $13$.
|
difference between expected values of distributions Let us assume two distributions $p$ and $p'$ over the set of naturals $N$.
Is it true the following property?
$\sum_{n \in N} p(n) \cdot n \le \sum_{n \in N} p'(n) \cdot n$
IFF
for all $0 \le u \le 1$
$\sum_{n \in N} p(n) \cdot u^{n} \ge \sum_{n \in N} p'(n) \cdot u^n$
Thanks for your help!
|
Call $X$ a random variable with distribution $p$ and $Y$ a random variable with distribution $p'$, then one considers the assertions:
1. $E[X]\leqslant E[Y]$
2. $E[u^X]\geqslant E[u^Y]$ for every $u$ in $[0,1]$
Assertion 2. implies assertion 1. because $X=\lim\limits_{u\to1}\frac1{1-u}\cdot(1-u^X)$ and the limit is monotone.
Assertion 1. does not imply assertion 2., as witnessed by the case $E[u^X]=\frac23+\frac13u$ and $E[u^Y]=\frac34+\frac14u^2$.
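A quick numerical look at that counterexample (a minimal Python sketch; the generating functions encode $P(X{=}0)=\frac23$, $P(X{=}1)=\frac13$ and $P(Y{=}0)=\frac34$, $P(Y{=}2)=\frac14$):

import numpy as np

u = np.linspace(0.0, 1.0, 11)
EuX = 2/3 + u / 3           # E[u^X]
EuY = 3/4 + u**2 / 4        # E[u^Y]
print(np.all(EuX >= EuY))   # False: e.g. at u = 0, 2/3 < 3/4, so assertion 2. fails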
|
Proof of strong Holder inequality Let $a>1$ and $f,g :\left(0,1\right) \rightarrow \left(0,\infty\right)$ measurable functions, $B$ a measurable subset of $\left(0,1\right)$ such that $$\left(\int_{C} f^2 dt\right)^{1/2} \left(\int_{C} g^2 dt\right)^{1/2} \geq a \int_{C} fg dt$$ for all $C$ measurable subset of $B$. Prove that $B$ has Lebesgue measure zero. Is the same true if we consider a probability measure and Borel subsets of $\left(0,1\right)$ and Borel functions?
|
Yes, it's true in more general situations.
Let $\mu$ be a positive measure on $X$, and $f,\, g \colon X \to (0,\,\infty)$ measurable. Let $a > 1$. Then every measurable $B$ with $\mu(B) > 0$ contains a measurable $C \subset B$ with
$$\left(\int_C f^2\,d\mu\right)^{1/2} \left(\int_C g^2\,d\mu\right)^{1/2} < a\cdot \int_C fg\,d\mu.$$
For $c > 1$, $k\in \mathbb{Z}$, measurable $M$ with $\mu(M) > 0$, and measurable $h\colon X \to (0,\,\infty)$, let
$$S(c,k,M,h) := \{ x \in M : c^k \leqslant h(x) < c^{k+1}\}.$$
Each $S(c,k,M,h)$ is measurable, and
$$M = \bigcup_{k \in \mathbb{Z}} S(c,k,M,h)$$
where the union is disjoint. Since $\mu(M) > 0$, at least one $S(c,k,M,h)$ has positive measure.
Choose $1 < c < \sqrt{a}$ and set $m_f = c^n$ where $n\in\mathbb{Z}$ is such that $A = S(c,n,B,f)$ has positive measure. Let $k \in\mathbb{Z}$ such that $C = S(c,k,A,g)$ has positive measure, and set $m_g = c^k$.
On $C$, we have $m_f \leqslant f(x) < c\cdot m_f$ and $m_g \leqslant g(x) < c\cdot m_g$, hence
$$\begin{align}
\int_C fg\,d\mu &\geqslant \int_C m_f\cdot m_g\, d\mu = m_f m_g \cdot \mu(C),\\
\left(\int_C f^2\,d\mu\right)^{1/2} \left(\int_C g^2\,d\mu\right)^{1/2}
&< \left(\int_C(c\cdot m_f)^2\, d\mu\right)^{1/2} \left(\int_C (c\cdot m_g)^2\, d\mu\right)^{1/2}\\
&= c\cdot m_f \sqrt{\mu(C)} \cdot c\cdot m_g \sqrt{\mu(C)}\\
&= c^2\cdot m_f m_g\cdot \mu(C)\\
&< a\cdot m_f m_g\cdot \mu(C)\\
&\leqslant a \int_C fg\,d\mu.
\end{align}$$
|
How to prove Disjunction Elimination rule of inference I've looked at the tableau proofs of many rules of inference (double-negation, disjunction is commutative, modus tollendo ponens, and others), and they all seem to use the so-called "or-elimination" (Disjunction Elimination) rule:
$$(P\vdash R), (Q\vdash R), (P \lor Q) \vdash R$$
(If $P\implies R$ and $Q\implies R$, and either $P$ or $Q$ (or both) are true, then $R$ must be true.)
It's often called the "proof by cases" rule, it makes sense, and I've seen the principle used in many mathematical proofs.
I'm trying to figure out how to logically prove this rule (using other rules of inference and/or replacement), however the proof offered is self-reliant! Is this an axiom?
(There's also the Constructive Dilemma rule, which looks like a more generalized version of Disjunction Elimination. Maybe the proof of D.E. depends on C.D.? or maybe C.D. is an extension of D.E.?)
|
The rules of Disjunction Elimination and Constructive Dilemma are interchangeable:
you can prove Disjunction Elimination from Constructive Dilemma, and
you can prove Constructive Dilemma from Disjunction Elimination.
So whichever you have, you can prove the other.
Your second question: is Disjunction Elimination an axiom?
In a strict logical sense, axioms are formulas that are treated as self-evidently true.
Rules of inference are not formulas, so in the strict sense they cannot be axioms.
But seen in a more relaxed way, they are treated as self-evidently true, so you could call it a metalogical axiom or something like it.
|
Proving a Set is NOT a vector space Before I begin, I will emphasize that I DO NOT want the full solution. I just want some hints.
Show that the set $S=\{\textbf{x}\in \mathbb{R}^3: x_{1} \leq 0$ and $x_{2}\geq 0 \}$ with the usual rules for addition and multiplication by a scalar in $\mathbb{R}^3$ is NOT a vector space by showing that at least one of the vector space axioms is not satisfied. Give a geometric interpretation of the result.
My solution (so far): To show this, I will provide a counter example, I have selected axiom 6 (closure under multiplication of a scalar).
$\textbf{x} = \begin{pmatrix}x_{1}\\ x_{2}\\ x_{3}\end{pmatrix}$
Let $\lambda = -1, x_{1} = -2, x_{2} = 2, x_{3}=1$
$\lambda \textbf{x} = \lambda \begin{pmatrix}x_{1}\\ x_{2}\\ x_{3}\end{pmatrix}$
$= -1 \begin{pmatrix}-2\\ 2\\ 1\end{pmatrix}$
$= \begin{pmatrix}2\\ -2\\ -1\end{pmatrix}$
Clearly $\begin{pmatrix}2\\ -2\\ -1\end{pmatrix} \notin S$, since $x_{1} \nleqslant 0$ and $x_{2} \ngeqslant 0$, so the axiom (multiplication by a scalar) does not hold. Hence $S$ is not a vector space.
My questions:
1. Is my solution/reasoning correct? How can it be improved? (Please note I am new to Linear Algebra.)
2. Are there more axioms for which it doesn't hold besides the one I listed?
3. It says to give a geometric interpretation of this result. I'm not sure how to go about doing this. Any hints?
|
Absolutely! A single counterexample is all you need. Nice work.
In general elements of $S$ will not have additive inverses in $S$. (Can you determine the exceptions?) Otherwise, the axioms are satisfied.
Geometrically speaking, I recommend that you focus on the lack of additive inverses. Note that if $A$ is a set of vectors such that every element of $A$ has an additive inverse in $A,$ then $A$ will be symmetric about the origin. That, in itself, won't be sufficient to make $A$ a vector subspace, but it will be necessary. Your set $S$ here is an octant of $3$-space. In general, an octant will not be a vector subspace, but a union of octants may be. (When?)
|
Conjugates of $12^{1/5}+54^{1/5}-144^{1/5}+648^{1/5}$ over $\mathbb{Q}$ After much manual computation I found the minimal polynomial to be $x^5+330x-4170$, although I would very much like to know if there's a clever way to see this. I suspect there is from seeing that the prime factorisations of the four integers are as various powers of $2$ and $3$ and because the number can be written as: $$12^{1/5}+54^{1/5}-(12^{1/5})^2+(12^{1/5}\cdot 54^{1/5})$$
But I haven't yet been able to find anything better than manual computation of the fifth power.
However, from here I am lost; I'm not sure how to solve this equation. A quick internet search returns much information on the general case in terms of the "Bring–Jerrard normal form", but the book from which this problem was taken hasn't gone into any detail on general methods for solving polynomials, so I am trying to find a solution that doesn't require any heavy machinery.
|
Let $x$ be your number.
You can immediately see that $x \in \Bbb Q(12^{1/5}, 54^{1/5})$ (or even $x \in \Bbb Q(2^{1/5},3^{1/5})$, from your remark about the prime factors).
It is not too hard to show that its normal closure is $\Bbb Q(\zeta_5,12^{1/5},54^{1/5}) ( = \Bbb Q(\zeta_5,2^{1/5},3^{1/5}))$, and from there you can investigate the relevant Galois group and find the conjugates.
However, you have found that the number is actually of degree $5$, which should be a surprise at first : the situation is simpler than that.
After investigating a bit, we see that $54 = 12^3 / 2^5$, and so $54^{1/5}$ is already in $\Bbb Q(12^{1/5})$, and letting $y = 12^{1/5}$, we have $x = y + y^3/2 - y^2 + y^4/2$.
So now the problem reduces to finding the conjugates of $y = 12^{1/5}$. Its minimal polynomial is clearly $y^5-12=0$, and its $5$ roots are the $\zeta_5^k 12^{1/5}$ where $\zeta_5$ is a primitive $5$th root of $1$. To get the conjugates of $x$ you simply replace $y$ with any of its conjugates in the above formula for $x$.
In simpler terms : your calculations should actually show that "if $y^5 = 12$, then $(y+y^3/2-y^2+y^4/2)$ is a root of $x^5+330x-4170 = 0$". By using $5$ different $y$ such that $y^5 = 12$, you obtain $5$ roots of $x^5+330x-4170 = 0$. However, proving that they are distinct, and that the polynomial is irreducible, doesn't seem easy to do.
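If you want to double-check the minimal polynomial without redoing the fifth power by hand, SymPy can do it (a hedged sketch: I'd expect minimal_polynomial to handle this nested-radical input, though it may take a while):

from sympy import Rational, Symbol, minimal_polynomial

x = Symbol('x')
alpha = (12**Rational(1, 5) + 54**Rational(1, 5)
         - 144**Rational(1, 5) + 648**Rational(1, 5))
print(minimal_polynomial(alpha, x))  # should print x**5 + 330*x - 4170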
|
Proof a $2^n$ by $2^n$ board can be filled using L shaped trominoes and 1 monomino Suppose we have an $2^n\times 2^n$ board. Prove you can use any rotation of L shaped trominoes and a monomino to fill the board completely.
You can mix different rotations in the same tiling.
|
Forgive me if what is below is 'old hat' or easily found on some other site. I only found this site whilst solving the similar problem: for which $n$ can an $n\times n$ square be covered without overlapping by T-shapes, each comprising four small squares? The fact that an $n\times n$ square can be almost covered by L-shapes comprising three small squares, leaving any prechosen square free, is true for all $n\ge6$, $3\nmid n$. A very simple proof can be found in The Mathematical Gazette of November 1999.
Dr. R.B.J.T. Allenby (long retired from Leeds University)
|
Where did this statistics formula come from: $E[X^2] = \mu^2 + \sigma^2$ I am studying statistics and I need some guidance as to where this formula came from. All I know is that $\displaystyle E[X^2] = \sum_{i=0}^n x_i^2\, p(x_i)$ in the discrete case.
|
Edit: Thanks to Did's comment, here is an alternative answer.
You can use the following definition:
If $X$ is any random variable with distribution $F_{X}(x)$, then
$$\mu_{X}=\int \limits_{0}^{+\infty}\left({1-F_{X}(x)}\right)dx-\int\limits_{-\infty}^{0}F_{X}(x)dx,$$
$$\sigma_{X}^{2}=\int \limits_{0}^{+\infty} 2x \left( {1-F_{X}(x)+F_{X}(-x) } \right)dx-\mu_{X}^2$$
and then show that
$$E[X^2]=\int \limits_{0}^{+\infty} 2x \left( {1-F_{X}(x)+F_{X}(-x) } \right)dx$$
to conclude your equality.
|
A basic question on the definition of order In the first chapter of Rudin's analysis book "order" on a set is defined as follows :
Let $S$ be a set. An order on $S$ is a relation, denoted by $<$, with the following two properties :
(i) If $x \in S$ and $y \in S$ then one and only one of the statements
$$ x < y, x=y, y<x $$ is true.
(ii) If $x,y,z \in S$, then $x < y$ and $y < z$ implies $x<z$.
How is this different from the usual partial/total order notions? This looks like a total order. Why is "order" defined like this? Moreover, he has not defined $=$ here.
|
The root of your difficulty seems to be that people use different conventions for defining "order". The first issue arises in the definition of "partial order", by which some people mean a reflexive, transitive, and antisymmetric relation $\leq$, while other people mean an irreflexive, transitive relation $<$. Given a partial order on a set $A$ in either sense, one can easily define the corresponding order in the other sense, by adjoining or removing the pairs $(a,a)$ for all $a\in A$. So people don't usually worry too much about the distinction, but technically they are two different notions of order. I'm accustomed to calling the reflexive version a "partial order" and the irreflexive version a "strict partial order", but there are people who prefer to use the shorter name "partial order" for the irreflexive version.
Total (or linear) orders are then defined in the reflexive version by requiring $a\leq b$ or $b\leq a$ for all $a,b\in A$, and in the irreflexive version by instead requiring $a<b$ or $a=b$ or $b<a$.
Again, one can convert either sort of total order to the other sort, just as with partial orders.
To further increase the confusion, some people use "order" (without any adjective) to mean a partial order, while others use it to mean a total order. So the final result is that "order" can have any of four meanings. You just have to get used to that; you'll get no support for denouncing Rudin just because he chose a convention different from the one you learned first.
|
Proving that pullback objects are unique up to isomorphism In Hungerford's Algebra he defines a pullback of morphisms $f_1 \in \hom(X_1,A)$ and $f_2 \in \hom(X_2,A)$ as a commutative diagram
$$\require{AMScd}
\begin{CD}
P @>{g_1}>>X_1\\
@V{g_2}VV @V{f_1}VV \\
X_2 @>{f_2}>> A
\end{CD}$$
satisfying the universal property that for any commutative diagram
$$\require{AMScd}
\begin{CD}
Q @>{h_1}>>X_2\\
@V{h_2}VV @V{f_1}VV \\
X_2 @>{f_2}>> A
\end{CD}$$
there exists a unique morphism $t: Q \to P$ such that $h_i = g_i \circ t$. He then asks the reader to establish that
For any other pullback diagram with $P'$ in the upper-left corner $P \cong P'$.
How do we obtain this isomorphism?
The obvious choice seems to be considering the two morphisms $t: P \to P'$, $t': P' \to P$ and show that they compose to the identity. To this end,
$$h_1 = g_1 \circ t \implies h_1\circ 1 = h_1 \circ t' \circ t$$
but we cannot cancel unless $h_1$ is monic. Can we claim that necessarily $t \circ t'$ is the identity, since comparing $(P,g_1,g_2)$ with itself there exists a unique morphism $t'': P \to P$?
|
As said by Martin Brandenburg in the comments, you stated the universal property wrong (not relevant anymore since the edit of the original post).
A pullback of $f_1\colon X_1 \to A,f_2\colon X_2 \to A$ is a diagram
$$ \require{AMScd}
\begin{CD}
P @>{g_1}>>X_1\\
@V{g_2}VV @V{f_1}VV \\
X_2 @>{f_2}>> A
\end{CD} $$
satisfying that for any other diagram
$$\require{AMScd}
\begin{CD}
Q @>{h_1}>>X_1\\
@V{h_2}VV @V{f_1}VV \\
X_2 @>{f_2}>> A
\end{CD}$$
there exists a unique $t \colon Q \to P$ such that $h_i = g_i \circ t$ for $i=1,2$ (the evident triangles commute).
So now, if you have two pullbacks $P,P'$, with maps $g_i$ and $g_i'$ respectively, there are $t \colon P \to P'$ and $t' \colon P' \to P$ such that $g_i = g_i' \circ t$ and $g_i' = g_i \circ t'$.
Notably, the arrow $t' \circ t \colon P \to P$ satisfies $g_i = g_i \circ (t' \circ t)$. By the universal property of the pullback $P$, such an arrow is unique: do you see another arrow $P \to P$ satisfying the same property? Then it must equal $t' \circ t$.
Starting from here and elaborating a similar argument with the pullback $P'$, you should be able to prove the uniqueness up to isomorphism.
|
Starting with $\frac{-1}{1}=\frac{1}{-1}$ and taking square root: proves $1=-1$ In this blog post, R. J. Lipton mentions an example of common mathematical traps, in particular that "square root is not a function". He shows the following trap:
Start with:
$\frac{-1}{1}=\frac{1}{-1}$, then take the square root of both sides:
$$
\frac{\sqrt{-1}}{\sqrt{1}}=\frac{\sqrt{1}}{\sqrt{-1}}
$$
hence
$$
\frac{i}{1} = \frac{1}{i} \\
i^2=1 \enspace ,
$$
which contradicts the definition that $i^2=-1$.
Question 1: I know that the square root is not a function because it is multi-valued, but I still cannot wrap my head around this example. Where exactly was the problem? Was it that we
1. cannot convert $\sqrt{1/-1}$ to $\sqrt{1}/\sqrt{-1}$?
2. both the RHS and LHS are unordered sets?
3. both?
Question 2: Also, does this problem only arise in equalities or in general algebraic manipulation? Because it would be a nightmare when manipulating an expression with fractional powers! Are there easy rules to determine what is safe to do with fractional powers? To see what I mean, there is another example of a similar trap:
One might easily think that $\sqrt[4]{16x^2y^7}$ is equivalent to $2x^{1/2}y^{7/4}$, which is not true for $x=-1$ and $y=1$.
|
Without going into complex analysis, I think this is the simplest way I can explain this. Let $f(x) = \sqrt{x}$. Note that the (maximal) domain of $f$ is the set of all non-negative numbers. And how is this defined? $f(x) = \sqrt{x}$ is equal to the non-negative number $y$ such that $y^2=x$. In this sense, square root is a function! It is called the principal square root.
In contrast, the following correspondence is not a function: the relation $g$ takes in a value $x$ and returns a value $y$ such that $y^2=x$. For example, under $g$, 1 corresponds to two values, $1,-1$.
Now, the property of distributing a square root over a product is only proven (at least in precalculus) over the domain of the principal square root, that is, only for non-negative numbers. Given this, there is no basis for the step
$$\frac{\sqrt{1}}{\sqrt{-1}} = \frac{\sqrt{-1}}{\sqrt{1}}.$$
As to why this property is not true, the best explanation for now is that $-1$ is not in the domain of the principal square root. Hence, $\sqrt{-1}$ does not actually make sense, as far as our definition of square root is concerned. In complex analysis, more can be said. As a commenter mentioned, this has something to do with branches of logarithms.
For your second question, I think it is safe that you always keep in mind what the domain of the function is. If you will get a negative number inside an even root, then you can't distribute the even root over products or quotients.
|
Rudin 4.22 Theorem Could you help me understand:
1. why $f(H) = B$,
2. why $\bar A \cap B$ is empty, and
3. why $\bar G \cap H$ is empty?
|
In 2. (@Antoine's solution) the closure is understood in the subspace topology.
In 3. An indirect proof is (technically) simpler.
Assume that $x\in \overline G\cap H$. Then $f(x)\in f(\overline G)\subseteq \overline A$, and $f(x)\in f(H)=B$, which is a contradiction because $\overline A\cap B=\emptyset$. In fact, $f(H)\subseteq B$, which is trivial, is enough for the conclusion.
|
Injective function $f(x) = x + \sin x$ How can I prove that $f(x) = x + \sin x$ is injective function on set $x \in [0,8]$?
I think that I should show that for any $x_1, x_2 \in [0,8]$ such that $x_1 \neq x_2$ we have $f(x_1) \neq f(x_2)$ id est $x_1 + \sin x_1 \neq x_2 + \sin x_2$. But I don't know what can I do next.
Equivalently I can show that for any $x_1, x_2 \in [0,8]$ such that $f(x_1) = f(x_2)$ we have $x_1 = x_2$ but I have again problem.
|
Hint: What is the sign of $f'(x)$. What does it tell you about $f(x)$?
Since $f'(x)\ge0$, the function is non-decreasing. So if $f(a)=f(b)$ for some $a<b$, then $f(x)$ would have to be constant on the interval $[a,b]$. This would imply $f'(x)=0$ for each $x\in[a,b]$. But there is no non-trivial interval such that $f'(x)$ is zero for each point of the interval.
|
How many values of $n<50$ exist that satisfy$ (n-1)! \ne kn$ where n,k are natural numbers? How many natural numbers less than 50 exist that satisfy $ (n-1)! \ne kn$ where n,k are natural numbers and $n \lt 50$ ?
when n=1
$0!=1*1$
when n=2
$1!\ne2*1$
...
...
...
when n=49
$48!=\frac{48!}{49}*49$
Here $k = \frac{48!}{49}$ and $n =49$
|
If you're only interested in the case of up to n = 49 then the answer isn't very enlightening... the answer is 16.
As the hints above suggest, you need to look at what happens when n is a prime, although you also need to take care in this case with n = 4 (can you see why?).
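A brute-force confirmation in Python (a minimal sketch; it also reveals which $n$ are counted, so run it only after thinking about the hint):

from math import factorial

bad = [n for n in range(1, 50) if factorial(n - 1) % n != 0]
print(len(bad), bad)  # 16: the primes below 50 together with n = 4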
|
How to solve the Riccati's differential equation I found this question in a differential equation textbook as a question
The equation
$$
\frac{dy}{dx} =A(x)y^2 + B(x)y +C(x)
$$
is called Riccati's equation
show that if $f$ is any solution of the equation, then the transformation
$$
y = f + \frac{1}{v}
$$
reduces it to a linear equation in $v$.
I do not understand this: what does $f$ mean? How can we find the solution with the help of the solution itself? I hope someone can help me solve this differential equation.
|
The question is wrongly posed, which is probably why you don't comprehend it. Here's the correct one:
Let $y$ and $f$ be solutions to the above diff. equation such that $y=f+1/v$ for some function $v(x)$. Show that $v$ satisfies a linear diff. equation.
The solution is provided by Amzoti.
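For reference, here is the standard computation (presumably what that answer carries out). Substituting $y=f+\frac1v$, so that $y'=f'-\frac{v'}{v^2}$, and expanding:
$$f'-\frac{v'}{v^2}=A\Big(f+\frac1v\Big)^2+B\Big(f+\frac1v\Big)+C=\underbrace{Af^2+Bf+C}_{=\,f'}+\frac{2Af+B}{v}+\frac{A}{v^2}.$$
Cancelling $f'$ (this is where we use that $f$ is a solution) and multiplying through by $-v^2$ gives
$$v'+\big(2A(x)f(x)+B(x)\big)\,v=-A(x),$$
which is linear in $v$.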
|
Is the MLE strongly consistent and asymptotically efficient for exponential families? It is known that the Maximum Likelihood Estimator (MLE) is strongly consistent and asymptotically efficient under certain regularity conditions. By strongly consistent I mean that $\hat{\theta}_{MLE} \rightarrow \theta$ almost surely. By asymptotically efficient I mean that $\sqrt{n}(\hat{\theta}_{MLE}-\theta)\rightarrow N(0,I^{-1}(\theta))$ in distribution.
These regularity conditions are cumbersome to check so I was wondering if there is a general and easy to check case for when the regularity conditions hold. For example, do these regularity conditions always hold for exponential families?
I am not asking anyone to prove this, I am just wondering if someone knows the answer.
Regularity Conditions for Asymptotic Efficiency: http://en.wikipedia.org/wiki/Maximum_likelihood#Asymptotic_normality
Regularity Conditions for Strong Consistency: http://en.wikipedia.org/wiki/Maximum_likelihood#Consistency
|
In a paper I read a while ago, which can be found here, the authors propose a set of conditions on eigenvalues which are weaker than the usual regularity conditions. They show that under this set of conditions, the usual MLEs are strongly consistent for (nonlinear) exponential families.
|
Why does factoring eliminate a hole in the limit? $$\lim _{x\rightarrow 5}\frac{x^2-25}{x-5} = \lim_{x\rightarrow 5} (x+5)$$
I understand that to evaluate a limit that has a zero ("hole") in the denominator we have to factor and cancel terms, and that the original limit is equal to the new and simplified limit. I understand how to do this procedurally, but I'd like to know why this works. I've only been told the methodology of expanding the $x^2-25$ into $(x-5)(x+5)$, but I don't just want to understand the methodology which my teacher tells me to "just memorize", I really want to know what's going on. I've read about factoring in abstract algebra, and about irreducible polynomials (just an example...), and I'd like to get a bigger picture of the abstract algebra in order to see why we factor the limit and why the simplified is equal to the original if it's missing the $(x-5)$, which has been cancelled. I don't want to just memorize things, I would really like to understand, but I've been told that this is "just how we do it" and that I should "practice to just memorize the procedure."
I really want to understand this in abstract algebra terms, please elaborate. Thank you very much.
|
One of the definitions of $\lim_{x \to A} f(x) = B$ is:
$$\forall_{\varepsilon > 0}\exists_{\delta > 0}\forall_{0 < \left|x - A\right| < \delta}\left|f(x) - B\right| < \varepsilon$$
The intuition is that we can achieve arbitrary 'precision' (put in bounds on y axis) provided we get close enough (so we get the bounds on x axis). However the definition does not say anything about the value at the point $f(A)$ which can be undefined or have arbitrary value.
One method of proving the limit is to find $\delta(\varepsilon)$ directly. Hence we want the following (well defined since $x\neq 5$):
$$\forall_{0 < \left|x - 5\right| < \delta}\left|\frac{x^2-25}{x-5} - 10\right| < \varepsilon$$
As $x\neq 5$ (only for $x = 5$ is $\left|x - 5\right| = 0$), we can cancel the common factor:
$$\forall_{0 < \left|x - 5\right| < \delta} \left|x + 5 - 10\right| < \varepsilon $$
$$\forall_{0 < \left|x - 5\right| < \delta} \left|x - 5 \right| < \varepsilon $$
Taking $\delta(\varepsilon) = \varepsilon$ we find that:
$$\forall_{\varepsilon > 0}\exists_{\delta > 0}\forall_{0 < \left|x - 5\right| < \delta}\left|\frac{x^2-25}{x-5} - 10\right| < \varepsilon$$
The key thing is that we don't care about value at the limit.
|
Show that $f:I\to\mathbb{R}^{n^2}$ defined by $f(t)=X(t)^k$ is differentiable Let $I$ be a interval, $\mathbb{R}^{n^2}$ be the set of all $n\times n$ matrices and $X:I \to\mathbb{R}^{n^2}$ be a differentiable function. Given $k\in\mathbb{N}$, define $f:I\to\mathbb{R}^{n^2}$ by $f(t)=X(t)^k$. How to prove that $f$ is differentiable?
Thanks.
|
Hint: The entries of $X(t)$ are obviously differentiable, and the entries of $f(t)$ are polynomials in the entries of $X(t)$.
|
Possible values of difference of 2 primes Is it true that for any even number $2k$, there exists primes $p, q$ such that $p-q = 2k$?
Polignac's conjecture talks about having infinitely many consecutive primes whose difference is $2k$. This has not been proven or disproven.
This is a more general version of my question on the possible value of prime gaps.
Of course, the odd case is easily done.
|
In short, this is an open problem (makes Polignac seem really tough then, doesn't it?).
The sequence A020483 of the OEIS tracks this in terms of $a(n) =$ the least prime $p$ such that $p + 2n$ is also prime. On the page it mentions that the existence of such a $p$ for every $n$ is merely conjectured.
In terms of a positive result, I am almost certain that Chen's theorem stating that every even number can be written as either $p + q$ or $p + q_1q_2$ is attained by sieving methods both powerful enough and loose enough to also give that every even number can be written as either a difference of two primes or a difference of a prime and an almost-prime (or of an almost-prime and a prime). I think I've even seen this derived before.
This would come as a corollary of Polignac's conjecture or of the vastly stronger Schinzel's hypothesis H, which is one of those conjectures that feels really far from reach to me. I suppose that it has been proved on average over function fields (I think), so perhaps that's hopeful. On the other hand, so has the Riemann Hypothesis.
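For small $2n$ the least such $p$ is easy to find by brute force; a minimal Python sketch using SymPy (the helper name and the search cutoff are arbitrary choices):

from sympy import isprime, nextprime

def least_p(n, cutoff=10**6):
    # least prime p with p + 2n also prime (cf. A020483); None if none below cutoff
    p = 2
    while p < cutoff:
        if isprime(p + 2 * n):
            return p
        p = nextprime(p)
    return None

print([least_p(n) for n in range(1, 11)])  # starts 3, 3, 5, 3, 3, ...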
|
What is wrong in saying that the map $F:\mathbb{S}^1 \times I \to \mathbb{S}^1$ defined by $F(z,t)=z^{t+1}$ is a homotopy from $f$ to $g$? Let $\mathbb{S}^1$ be the unit circle of complex plane and $f,g:\mathbb{S}^1 \to \mathbb{S}^1$ be two maps defined by $f(z)=z$ and $g(z)=z^2$. What is wrong in saying that the map $F:\mathbb{S}^1 \times I \to \mathbb{S}^1$ defined by $F(z,t)=z^{t+1}$ is a homotopy from $f$ to $g$?
Can someone please tell me what is wrong? Thanks in advance.
|
Raising a complex number to a non-integer power is more complicated than you're realizing. The "function" $z^{t}$ is really multivalued when $t\notin\mathbb{Z}$, and even after choosing a branch, it won't be continuous.
|
Infinite Limit Problems Can someone help me solve these problems?
1. $$\lim_{n\to\infty}\sum_{k=1}^n\left|e^{2\pi ik/n}-e^{2\pi i(k-1)/n}\right|$$
2. $$ \lim_{x\to\infty}\left(\frac{3x-1}{3x+1}\right)^{4x}$$
3. $$\lim_{n\to\infty}\left(1-\frac1{n^2}\right)^n $$
(Original scan of problems at http://i.stack.imgur.com/4t12K.jpg)
These questions are from the ISI Kolkata Computer Science PhD entrance exam. The link to the original questions: http://www.isical.ac.in/~deanweb/sample/MMA2013.pdf
|
For the first one, consider a geometric interpretation. Recall that when $a$ and $b$ are complex numbers, $|a-b|$ is the distance in the plane between the points $a$ and $b$. Peter Tamaroff says in a comment that the limit is $2\pi$, which I believe is correct.
Addendum: The points $e^{2\pi ik/n}$ are spaced evenly around the unit circle. The distance $\left|e^{2\pi ik/n} - e^{2\pi i(k-1)/n}\right|$ is the distance from one point to the next. So we are calculating the perimeter of a regular $n$-gon which, as $n$ increases, approximates the perimeter of the unit circle arbitrarily closely. The answer is therefore $2\pi$.
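Numerically, the polygon perimeters approach $2\pi$ quickly (a minimal Python sketch; the values of $n$ are arbitrary):

import numpy as np

for n in [10, 100, 1000]:
    k = np.arange(1, n + 1)
    perim = np.abs(np.exp(2j*np.pi*k/n) - np.exp(2j*np.pi*(k-1)/n)).sum()
    print(n, perim)  # tends to 2*pi = 6.28318...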
For the third one, factor $$\left(1-\frac1{n^2}\right)$$ as
$$ \left(1-\frac1{n}\right) \left(1+\frac1{n}\right)$$ and then apply the usual theorem about $\lim_{n\to\infty}\left(1+\frac an\right)^n$.
|
How to approximate unknown two-dimensional function? I have a surface defined by values on a two-dimensional grid. I would like to find an approximate function which would give me a value for any arbitrary point within a certain range of $x$s and $y$s.
My general idea is to construct some sort of polynomial, and then tweak its coefficients using some sort of evolutionary algorithm until the polynomial behaves as I want it to. But how should my polynomial look in general?
|
You might be interested in Lagrange interpolation (link only talks about one-variable version). Just input 100 points from your values and you'll probably get something good. You may need more points for it to be accurate at locations of high variance. If you insist on using an evolutionary algorithm, you might use it to choose near-optimal choices of data points, but in many cases you could probably do almost as well by spreading them around evenly.
Two-variable interpolation isn't on Wikipedia. But it is Google-able, and I found it described in sufficient generality in section 3 of this paper. Or, if you have access to MATLAB, a "static" 2D-interpolater has already been coded in this script (I haven't used this one, so it might be unwieldy or otherwise unsuited to your purposes).
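If Python is an option, here is a minimal sketch using SciPy's grid interpolator (the grid, the stand-in surface, and the query point are all made up for illustration; this does piecewise interpolation rather than a single global Lagrange polynomial):

import numpy as np
from scipy.interpolate import RegularGridInterpolator

x = np.linspace(0.0, 1.0, 20)          # grid coordinates
y = np.linspace(0.0, 1.0, 20)
X, Y = np.meshgrid(x, y, indexing='ij')
values = np.sin(3*X) * np.cos(2*Y)     # stand-in for the measured surface

interp = RegularGridInterpolator((x, y), values, method='linear')
print(interp([[0.35, 0.62]]))          # value at an arbitrary in-range point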
Hope this helps you out!
|
Finding a Hopf Bifucation with eigenvalues I am trying to show that the following 2D system has a Hopf bifurcation at $\lambda=0$:
\begin{align}
x' =& y + \lambda x \\
y' =& -x + \lambda y - x^2y
\end{align}
I know that I could easily plot the system with a CAS, but I wish to use analytical methods. So, I took the Jacobian:
\begin{equation}
J = \begin{pmatrix} \lambda&1\\-1-2xy&\lambda-x^2\end{pmatrix}
\end{equation}
My book says I should look at the eigenvalues of the Jacobian and find where the real part of the eigenvalue switches from $-$ to $+$. This would correspond to where the system changes stability. So I took $\det(J)$:
\begin{align}
\det(J) =& -\lambda x^2 + 2xy + \lambda^2 + 1 = 0
\end{align}
I am stuck here with the algebra and am not quite sure how to find out where the eigenvalues switch from negative real part to positive real part. I would like to use the quadratic formula, but the $2xy$ term throws me off.
How do I proceed? Thanks for all the help!
|
Step 1: As jkn stated, the first step is to find the equilibria, where $\dot x=\dot y=0$. It is easy to see that the origin $(0, 0)$ is an equilibrium.
Step 2: Compute the eigenvalue around the equilibrium $(0, 0)$.
$$J= \left(\begin{array}{cc}
\lambda &1\\
-1& \lambda\\
\end{array}\right)$$
Thus the characteristic equation is $$\det (\tilde\lambda I-J)=0$$
The above equation admits the two eigenvalues $\tilde\lambda=\lambda\pm i$; the imaginary part is $\omega=1$.
At $\lambda=0$ the dynamical system has a pair of purely imaginary eigenvalues $\pm i$; moreover
$$\frac{d\tilde\lambda}{d\lambda}\Big|_{\lambda=0}=\frac{d}{d\lambda}\Big|_{\lambda=0}(\lambda\pm i)=1>0$$
Therefore the dynamical system exhibits a Hopf bifurcation at $\lambda=0$.
Step 3, Compute the Hopf bifurcation. Consider
$$(J-\tilde\lambda I)\left(\begin{array}{cc}
q_1\\
q_2\\
\end{array}\right)=0$$
we get the eigenvector $$\left(\begin{array}{cc}
q_1\\
q_2\\
\end{array}\right)=\left(\begin{array}{cc}
i\\
1\\
\end{array}\right)$$.
On the other hand, consider$$(J^T-\overline{\tilde\lambda} I)\left(\begin{array}{cc}
p_1\\
p_2\\
\end{array}\right)=0$$
we get the eigenvector $$\left(\begin{array}{cc}
p_1\\
p_2\\
\end{array}\right)=\left(\begin{array}{cc}
i\\
1\\
\end{array}\right)$$
It is always possible to normalize $\mathbf p$ with respect to $\mathbf q$:
$$<\mathbf p, \mathbf q>=1, \mbox{ with } <\mathbf p, \mathbf q>=\bar p_1q_1+\bar p_2 q_2 $$
Thus $$\left(\begin{array}{cc}
p_1\\
p_2\\
\end{array}\right)=\frac{1}{2}\left(\begin{array}{cc}
i\\
1\\
\end{array}\right)$$
We denote the nonlinear term $$\mathbf F:=\left(\begin{array}{cc}
F_1\\
F_2\\
\end{array}\right)
=\left(\begin{array}{cc}
0\\
-x^2y\\
\end{array}\right)$$
Introduce the complex variable $$z:=<\left(\begin{array}{cc}
p_1\\
p_2\\
\end{array}\right),
\left(\begin{array}{cc}
x\\
y\\
\end{array}\right)>$$
Then the dynamical system becomes
$$\dot z=(\lambda+i)z+g(z, \bar z,\lambda) \mbox{ with }g(z, \bar z,\lambda) =<\left(\begin{array}{cc}
p_1\\
p_2\\
\end{array}\right),
\left(\begin{array}{cc}
F_1\\
F_2\\
\end{array}\right)>$$
Some direct computations show
$$g(z, \bar z,\lambda)=\frac{1}{2}(z^3-z^2\bar z-z\bar z^2+\bar z^3):=\frac{g_{30}}{6}z^3+\frac{g_{21}}{2}z^2\bar z+\frac{g_{12}}{2}z\bar z^2+\frac{g_{03}}{6}\bar z^3$$
We recall that the first Lyapunov coefficient is
$$l_1=\frac{1}{2\omega^2}\mbox{Re} (ig_{20}g_{11}+\omega g_{21})$$
Since $\omega=1$, $g_{20}=g_{11}=0$, $g_{21}=-1$, we obtain
$$l_1=-\frac{1}{2}<0$$
Therefore $\lambda=0$ gives a supercritical Hopf bifurcation.
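As a sanity check of the conclusion (not needed for the proof), one can integrate the system just past the bifurcation and watch a small perturbation of the origin settle onto a stable limit cycle; a minimal sketch, assuming SciPy is available:

    import numpy as np
    from scipy.integrate import solve_ivp

    lam = 0.1  # just past the bifurcation at lambda = 0

    def rhs(t, u):
        x, y = u
        return [y + lam * x, -x + lam * y - x**2 * y]

    # Start near the (now unstable) origin; integrate long enough to settle.
    sol = solve_ivp(rhs, (0, 200), [0.01, 0.0], dense_output=True, rtol=1e-8)
    tail = sol.sol(np.linspace(150, 200, 2000))
    r = np.hypot(tail[0], tail[1])
    # The radius stays in a bounded band away from 0: a stable limit cycle,
    # as predicted for a supercritical Hopf bifurcation.
    print(r.min(), r.max())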
|
Mathematical Games suitable for undergraduates I am looking for mathematical games for an undergraduate `maths club' for interested students. I am thinking of things like topological tic-tac-toe and singularity chess. I have some funding for this so it would be nice if some of these games were available to purchase, although I am open to making the equipment myself if necessary.
So the question is: what are some more examples of games that should be of interest to undergraduates in maths?
|
Ticket to Ride is a very easy game to learn and can lead to some interesting discussions of graph theory. On a more bitter note, a game of Settlers of Catan never fails to provide a wonderful example of the difference between theoretical and empirical probability.
|
Finding all bifurcations in a 2D system I want to find all bifurcations for the system:
\begin{align}
x' =& -x+y\\
y' =& \frac{x^2}{1+x^2}-\lambda y
\end{align}
So far, I am using the idea that the intersection of the nullclines ($y'=0$ or $x'=0$) gives equilibrium points. So:
\begin{align}
x'=& 0 \implies x=y \\
y'=& 0 \implies y=\frac{x^2}{\lambda(1+x^2)}
\end{align}
Clearly $(0, 0)$ is an equilibrium point but there is also the possibility of two more equilibrium points when the parabola given
by the above equation intersects the line $y=x$. To find the intersection I solved:
\begin{align}
x =& \frac{x^2}{\lambda(1+x^2)}\\
\lambda(1+x^2) =& x\\
\lambda x^2-x+\lambda =& 0 \\
x =& \frac{1\pm \sqrt{1-4\lambda^2}}{2\lambda}
\end{align}
I have these two intersection points. Now I need to vary $\lambda$ to find where the curve passing through the intersection points
becomes tangent to $y=x$ and hence we would expect one equilibrium point instead of these two at this particular value of $\lambda$. Then continuing the variation of $\lambda$ in the same direction we would expect no equilibrium points from these two.
Hence we originally had $2$ equilibrium points and then they coalesced and finally annihilated each other. This is a saddle-node bifucation.
How do I show this for my variation of $\lambda$? Are there any other bifurcations?
EDIT: Consider the discriminant of the equation:
\begin{align}
x = \frac{1\pm\sqrt{1-4\lambda^2}}{2\lambda}
\end{align}
\begin{align}
1-4\lambda^2 =& 0 \\
1 =& 4\lambda^2 \\
1 =& \pm 2\lambda \\
\pm\frac{1}{2} =& \lambda
\end{align}
So, I plotted the system with $\lambda = \frac{1}{2}$: sage: P = plot(x^2/(.5*(1+x^2)), 0, 2) + plot(x, 0, 2)
We no longer have two intersection points and instead have one, where the curve is tangent to the line $y=x$. (At first the plot did not look tangent, but that was because Sage was not dividing by the $1+x^2$ term; after adding an extra set of parentheses everything works as expected!)
|
Notice that $x$ is real only when the discriminant satisfies $1-4\lambda^2 \geq 0$. That is, the nullclines of $x'$ and $y'$ have no nonzero intersection points when $1-4\lambda^2 \geq 0$ fails to hold, i.e. when $|\lambda| > \frac12$.
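To see the saddle-node numerically, count the real roots of $\lambda x^2 - x + \lambda = 0$ on both sides of the critical value; a small Python sketch:

    import numpy as np

    # Roots of lam*x^2 - x + lam = 0, i.e. intersections of y = x with the
    # y-nullcline, for lam below, at, and above the critical value 1/2.
    for lam in (0.4, 0.5, 0.6):
        disc = 1 - 4 * lam**2
        if disc > 0:
            roots = ((1 - np.sqrt(disc)) / (2 * lam), (1 + np.sqrt(disc)) / (2 * lam))
        elif disc == 0:
            roots = (1 / (2 * lam),)     # tangency: the two equilibria collide
        else:
            roots = ()                   # no nonzero equilibria remain
        print(lam, roots)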
|
Using the FTOC to find the derivative of the integral. I'm apologizing ahead of time because I don't know how to format an integral.
If I have the following integral: $$\int_{x^2}^5 (4x+2)\;dx.$$
I need to find the derivative, so could I do the following?
Multiply the integral by $-1$ and swap the limits of integration so that they are from $5$ to $x^2$.
Then, use the FTOC and say that the derivative will be $-(4x^2+2)$.
Is this correct?
|
You have
$$
f(x) = \int_{x^2}^5 4t + 2 \; dt
$$
and you want to find $f'(x)$.
First note that
$$
f(x) = -\int_5^{g(x)} 4t + 2\; dt
$$
where $g(x) = x^2$. So $f(x)$ is the composition of two functions:
$$
f(x) = h(g(x)).
$$
where
$$\begin{align}
h(x) &= -\int_5^x (4t + 2)\; dt \quad \text{and}\\
g(x) &= x^2.
\end{align}
$$
So by the chain rule you have
$$
f'(x) = h'(g(x))\color{red}{g'(x)} = -(4g(x) + 2)\color{red}{g'(x)} = \dots
$$
It looks like you forgot the derivative of the inner function.
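You can confirm the chain-rule bookkeeping with a quick symbolic check (a minimal sketch using SymPy):

    import sympy as sp

    x, t = sp.symbols('x t')
    f = sp.integrate(4*t + 2, (t, x**2, 5))   # the integral as a function of x
    print(sp.expand(sp.diff(f, x)))           # -8*x**3 - 4*x, i.e. -(4*x**2 + 2)*2*x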
|
Does the graph of $y + |y| = x + |x|$ represent a function of $x$? The question is whether or not the graph $y + |y| = x + |x|$ represents a function of $x$. Explain why.
It looks like a weird graph but it would probably be a function because if you say $f(x) = y$ (you get a $y$ value)?
|
It is a function only if for every $x$ there is only one $y$ value satisfying the equation.
Now consider, for example, $x=-1$: both $y=-1$ and $y=0$ satisfy the equation.
|
Show that if $a$ has order $3\bmod p$ then $a+1$ has order $6\bmod p$. Show that if $a$ has order $3\bmod p$ then $a+1$ has order $6\bmod p$. I know I am supposed to use primitive roots but I think this is where I am getting caught up. The definition of primitive root is "if $a$ is a least residue and the order of $a\bmod p$ is $\phi(m)$ then $a$ is a primitive root of $m$. But I really am not sure how to use this to my advantage in solving anything.
Thanks!
|
Note that if $a=1$, $a$ has order $1$. Thus, we can assume $a\ne1$. Furthermore, $p\ne2$ since no element mod $2$ has order $3$. Therefore, $-1\ne1\pmod{p}$.
$$
\begin{align}
(a+1)^3
&=a^3+3a^2+3a+1\\
&=1+3a^2+3a+1\\
&=-1+3(1+a+a^2)\\
&=-1+3\frac{a^3-1}{a-1}\\
&=-1
\end{align}
$$
Therefore, $(a+1)^3=-1$ and $(a+1)^6=1$. (Shortened a la ccorn).
$$
\begin{align}
(a+1)^2
&=a^2+2a+1\\
&=a+(a^2+a+1)\\
&=a+\frac{a^3-1}{a-1}\\
&=a\\
&\ne1
\end{align}
$$
Therefore, $(a+1)^2\ne1$.
Thus, $(a+1)$ has order $6$.
|
Using Van-Kampen in $S^1\times S^1$ as cylinder I was trying to use Van-Kampen theorem to compute $S^1 \times S^1$ using its representation as a cylinder.
$U$= from a little below the middle circle to the upper part of the cylinder.
$V$= from a little above the middle circle to the bottom part of the cylinder.
Then $U$ is homotopic to the upper $S^1$.
Similarly, $V$ is homotopic to the bottom $S^1$
The intersection of $U$ and $V$ is homotopic to the middle circle, $U \cap V = S^1$.
Then $\pi(U)=<\alpha>$, $\pi(V)=<\beta>$, $\pi(U\cap V)=<\gamma>$.
Now $\alpha$ and $\beta$ are homotopic, since the upper circle and the bottom circle are homotopic in the cylinder.
$\pi(X)=<\alpha,\beta \mid \alpha=\beta>=<\alpha>$.
I know this is wrong but I don't understand where. I know that using the rectangular representation of the torus we can use Van-Kampen, just taking a point out and then a circle and we will get $\pi(X)=<\alpha,\beta \mid \alpha*\beta*\alpha^{-1}*\beta^{-1}=1>$.
I am trying to use van Kampen in this example and I can't do it.
Even more generally: can we use van Kampen in $X\times Y$?
|
The product $S^1\times S^1$ is not a cylinder. It's a torus.
So what you've done is a correct computation of the fundamental group, but of a wrong space.
In general if you want to compute the fundamental group of a product of spaces, you don't need van Kampen. The group $\pi_1(A\times B)$ is just going to be the product of the fundamental groups $\pi_1(A)\times\pi_1(B)$. It's easy to see, since any loop in $A\times B$ is just a product of loops in $A$ and in $B$.
|
Inverse of a function $e^x + 2e^{2x}$ The function is $f(x) = e^x+2e^{2x}$
So to find the inverse I went
$$WTS: x = ____$$
$$y = e^x+2e^{2x}$$
$$ \log_3(y)=\log_3(3e^{2x})$$
$$ \log_3(y) = 2x$$
$$ x=\frac{\log_3(y)}{2}$$
Am i correct?
|
No, you are wrong: $\log$ is not a linear function (and $e^x+2e^{2x}\neq 3e^{2x}$). Instead set $g(x)=e^x$, so that $2g(x)^2+g(x)-y=0$.
Solving this quadratic equation, and using $y>0$ to pick the positive root, $g(x)=e^x=-{1\over4}+{1\over 2}\sqrt{{1\over4}+2y}$, hence $x=\ln\left(-{1\over4}+{1\over 2}\sqrt{{1\over4}+2y}\right)$.
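A quick numerical sanity check of this inverse (an illustrative sketch):

    import numpy as np

    f = lambda x: np.exp(x) + 2 * np.exp(2 * x)
    finv = lambda y: np.log(-0.25 + 0.5 * np.sqrt(0.25 + 2 * y))

    x = 0.7
    print(finv(f(x)))   # recovers 0.7 (up to floating-point error)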
|
Prove that $\sum\limits_{i=1}^{n-k} (-1)^{i+1} \cdot \frac{(k-1+i)!}{k! \cdot(i-1)!} \cdot \frac{n!}{(k+i)! \cdot (n-k-i)!}=1$
Prove that, for $n>k$, $$\sum\limits_{i=1}^{n-k} (-1)^{i+1} \cdot \frac{(k-1+i)!}{k! \cdot(i-1)!} \cdot \frac{n!}{(k+i)! \cdot (n-k-i)!}=1$$
I found this problem in a book at the library of my university, but sadly the book doesn't show a solution. I worked on it for quite a long time, but now I have to admit that this is above my (current) capabilities. It would be great if someone could help out here and write up a solution for me.
|
Note first that two factorials nearly cancel out, leaving us with $\frac1{k+i}=\int_0^1t^{k+i-1}\mathrm dt$, hence the sum to be computed is
$$
S=\sum_{i=1}^{n-k}(-1)^{i+1}n{n-1\choose k}{n-k-1\choose i-1}\int_0^1t^{k+i-1}\mathrm dt
$$
Second, note that
$$
\sum_{i=1}^{n-k}(-1)^{i+1}{n-k-1\choose i-1}t^{k+i-1}=t^k\sum_{j=0}^{n-k-1}{n-k-1\choose j}(-t)^{j}=t^k(1-t)^{n-k-1}
$$
hence
$$
S=n{n-1\choose k}\int_0^1t^k(1-t)^{n-k-1}\mathrm dt=n{n-1\choose k}\mathrm{B}(k+1,n-k)
$$
where the letter $\mathrm{B}$ refers to the beta numbers, defined as
$$
\mathrm{B}(i,j)=\frac{(i-1)!(j-1)!}{(i+j-1)!}
$$
All this yields
$$
S=n{n-1\choose k}\frac{k!(n-k-1)!}{n!}=1
$$
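For reassurance, the identity is easy to verify in exact rational arithmetic for small $n$ and $k$; a minimal Python sketch:

    from math import factorial
    from fractions import Fraction

    def S(n, k):
        """Exact value of the sum for given n > k."""
        return sum(Fraction((-1)**(i + 1) * factorial(k - 1 + i) * factorial(n),
                            factorial(k) * factorial(i - 1)
                            * factorial(k + i) * factorial(n - k - i))
                   for i in range(1, n - k + 1))

    for n, k in [(3, 1), (5, 2), (10, 3), (12, 7)]:
        print(n, k, S(n, k))   # always 1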
|
Estimate $\displaystyle\int_0^\infty\frac{t^n}{n!}e^{-e^t}dt$ accurately. How can I obtain good asymptotics for $$\gamma_n=\displaystyle\int_0^\infty\frac{t^n}{n!}e^{-e^t}dt\text{ ? }$$
[This has been already done] In particular, I would like to obtain asymptotics that show $$\sum_{n\geqslant 0}\gamma_nz^n$$
converges for every $z\in\Bbb C$.
N.B.: The above are the coefficients when expanding $$\Gamma \left( z \right) = \sum\limits_{n \geqslant 0} {\frac{{{{\left( { - 1} \right)}^n}}}{{n!}}\frac{1}{{n + z}}} + \sum\limits_{n \geqslant 0} {{\gamma _n}{z^n}} $$
ADD Write $${c_n} = \int\limits_0^\infty {{t^n}{e^{ - {e^t}}}dt} = \int\limits_0^\infty {{e^{n\log t - {e^t}}}dt} $$
We can use something similar to Laplace's method with the expansion $${p_n}\left( x \right) = g\left( {{\rm W}\left( n \right)} \right) + g''\left( {{\rm W}\left( n \right)} \right)\frac{{{{\left( {x - {\rm W}\left( n \right)} \right)}^2}}}{2}$$
where $g(t)=n\log t-e^t$. That is, let $$\begin{cases} w_n={\rm W}(n)\\
{\alpha _n} = n\log {w_n} - {e^{{w_n}}} \\
{\beta _n} = \frac{n}{{w_n^2}} + {e^{{w_n}}} \end{cases} $$
Then we're looking at something asymptotically equal to $${C_n} = \exp {\alpha _n}\int\limits_0^\infty {\exp \left( { - {\beta _n}\frac{{{{\left( {t - {w_n}} \right)}^2}}}{2}} \right)dt} $$
|
$$
\begin{align}
\sum_{n=0}^\infty\gamma_nz^n
&=\sum_{n=0}^\infty\int_0^\infty\frac{t^nz^n}{n!}e^{-e^t}\,\mathrm{d}t\\
&=\int_0^\infty e^{tz}e^{-e^t}\,\mathrm{d}t\\
&=\int_0^\infty e^{t(z-1)}e^{-e^t}\,\mathrm{d}e^t\\
&=\int_{1}^\infty u^{z-1}e^{-u}\,\mathrm{d}u\\
&=\Gamma(z,1)
\end{align}
$$
The Upper Incomplete Gamma Function is an entire function. According to this answer, the power series for an entire function has an infinite radius of convergence.
$\color{#C0C0C0}{\text{idea mentioned in chat}}$
$$
\begin{align}
\int_0^\infty\frac{x^{n-1}}{(n-1)!}e^{-e^x}\,\mathrm{d}x
&=\frac1{(n-1)!}\int_1^\infty\log(t)^{n-1}e^{-t}\frac{\mathrm{d}t}{t}\\
&=\frac1{n!}\int_1^\infty\log(t)^ne^{-t}\,\mathrm{d}t\\
&=\frac1{n!}\int_1^\infty e^{-t+n\log(\log(t))}\,\mathrm{d}t\\
\end{align}
$$
Looking at the function $\phi(t)=-t+n\log(\log(t))$, we see that it reaches its maximum when $t\log(t)=n$; i.e. $t_0=e^{\mathrm{W}(n)}=\frac{n}{\mathrm{W}(n)}$.
Using the estimate
$$
\mathrm{W}(n)\approx\log(n)-\frac{\log(n)\log(\log(n))}{\log(n)+1}
$$
from this answer, at $t_0$,
$$
\begin{align}
\phi(t_0)
&=-n\left(\mathrm{W}(n)+\frac1{\mathrm{W}(n)}-\log(n)\right)\\
&\approx n\log(\log(n))
\end{align}
$$
and
$$
\begin{align}
\phi''(t_0)
&=-\frac{\mathrm{W}(n)+1}{n}\\
&\approx-\frac{\log(n)}{n}
\end{align}
$$
According to the Laplace Method, the integral would be asymptotic to
$$
\begin{align}
\frac1{n!}\sqrt{\frac{-2\pi}{\phi''(t_0)}}e^{\phi(t_0)}
&\approx\frac1{n!}\sqrt{\frac{2\pi n}{\log(n)}}\log(n)^n\\
&\approx\frac1{\sqrt{2\pi n}}\frac{e^n}{n^n}\sqrt{\frac{2\pi n}{\log(n)}}\log(n)^n\\
&=\frac1{\sqrt{\log(n)}}\left(\frac{e\log(n)}{n}\right)^n
\end{align}
$$
which dies away faster than $r^{-n}$ for any $r$.
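As a check of the leading-order Laplace approximation (using the exact Lambert W, before the logarithmic estimate is substituted), one can compare it against direct numerical integration; a sketch assuming SciPy:

    import numpy as np
    from scipy.integrate import quad
    from scipy.special import gammaln, lambertw

    def gamma_n(n):
        """gamma_n = (1/n!) * int_1^oo log(t)^n e^{-t} dt, via a log-space integrand."""
        val, _ = quad(lambda t: np.exp(n * np.log(np.log(t)) - t - gammaln(n + 1)),
                      1 + 1e-9, 300, epsabs=0, epsrel=1e-10, limit=200)
        return val

    def laplace(n):
        """Leading-order Laplace value, using the exact Lambert W."""
        w = np.real(lambertw(n))
        t0 = n / w                      # maximiser of phi(t) = -t + n log(log t)
        phi0 = -t0 + n * np.log(w)      # phi(t0), since log(log t0) = log W(n)
        phi2 = -(w + 1) / n             # phi''(t0)
        return np.exp(phi0 + 0.5 * np.log(2 * np.pi / -phi2) - gammaln(n + 1))

    for n in (10, 20, 40):
        print(n, gamma_n(n), laplace(n), gamma_n(n) / laplace(n))  # ratio drifts toward 1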
Analysis of the Approximation to Lambert W
For $x\ge e$, the approximation
$$
\mathrm{W}(x)\approx\log(x)\left(1-\frac{\log(\log(x))}{\log(x)+1}\right)
$$
attains a maximum error of about $0.0353865$ at $x$ around $67.9411$.
At least that same precision is maintained for $x\ge\frac53$.
For all $x\gt1$, this approximation is an underestimate.
|
Numerical inversion of characteristic functions I have a need to use the FFT in my work and am trying to learn how to use it. I am beginning by attempting to use the FFT to numerically invert the characteristic function of a normal distribution. So I have discretised the integral using the trapezoidal rule (I know, this is a very crude method), converted to a form consistent with the FFT and then run a programme to make the calculations. However when I plot the output, the density function gets thinner and thinner as I increase the number of discretisation steps. I don't know if this is because of my errors or because of the numerical problems associated with the trapezoidal rule. Would someone mind having a look at my working please?
Thanks...
$$
f(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty}e^{-ix\xi}F(\xi)d\xi = \frac{1}{\pi}\int_{0}^{\infty}e^{-ix\xi}F(\xi)d\xi\\
\approx \frac{1}{\pi}\int_0^{\xi_{max}}e^{-ix\xi}F(\xi)d\xi
$$
Let $\xi_j=(j-1)\Delta\xi,\quad j=1,...,N$
Take $\xi_N=\xi_{max}$ and $\Delta\xi=\frac{\xi_{max}}{N-1}$
Set $F(\xi_j) = F_j$
Let $x_k = x_{min} + (k-1)\Delta x,\quad k=1,...,N \quad$ ($\Delta x$ will be determined later)
Set $f(x_k) = f_k$
Discretising integral using trapezoidal rule:
$$
f_k \approx \frac{1}{\pi}\int_0^{\xi_{max}}e^{-ix_k\xi}F(\xi)d\xi \\
= \frac{1}{\pi}\Delta\xi \left(\sum_{j=1}^{N}e^{-ix_k\xi_j}F_j - \frac{1}{2}e^{-ix_k\xi_1}F_1 - \frac{1}{2}e^{-ix_k\xi_N}F_N \right) \\
= \frac{1}{\pi}\Delta\xi \left(\sum_{j=1}^{N}e^{-i\Delta x \Delta\xi (j-1)(k-1)}e^{-i x_{min}(j-1)\Delta \xi}F_j - \frac{1}{2}F_1 - \frac{1}{2}e^{-ix_k\xi_{max}}F_N \right) \\
= \frac{1}{\pi}\Delta\xi \left(\sum_{j=1}^{N}e^{-i\frac{2\pi}{N} (j-1)(k-1)}e^{-i x_{min}(j-1)\Delta \xi}F_j - \frac{1}{2}F_1 - \frac{1}{2}e^{-ix_k\xi_{max}}F_N \right)
$$
where in the last step $\Delta x \Delta{\xi}$ has been set to $\frac{2\pi}{N}$ in order for the sum to be in the form required by FFT. Rearranging gives $\Delta x = \frac{2\pi}{N\Delta \xi} = \frac{2\pi(N-1)}{N\xi_{max}}.$
To centre about the mean $\mu$ set $x_{min} = \mu - \frac{N\Delta x}{2} = \mu - \frac{\pi}{\Delta \xi}.$
|
Please have a look at the section on the numerical inversion of characteristic functions in the dissertation found here:
http://wiredspace.wits.ac.za//handle/10539/9273
It may be helpful.
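For what it's worth, here is a minimal NumPy sketch of exactly the discretisation derived in the question, applied to the standard normal characteristic function $F(\xi)=e^{-\xi^2/2}$; the grid parameters $N$ and $\xi_{max}$ are illustrative choices:

    import numpy as np

    N = 2**12
    xi_max = 40.0
    dxi = xi_max / (N - 1)
    dx = 2 * np.pi / (N * dxi)          # imposed by the FFT pairing
    mu = 0.0
    x_min = mu - np.pi / dxi            # centre the x-grid about the mean

    xi = np.arange(N) * dxi
    F = np.exp(-xi**2 / 2)              # characteristic function of N(0,1)

    w = np.ones(N)                      # trapezoid weights: half at both endpoints
    w[0] = w[-1] = 0.5

    # The e^{-i x_min xi_j} twist; the FFT then supplies e^{-2 pi i j k / N}.
    f = np.real(np.fft.fft(w * F * np.exp(-1j * x_min * xi))) * dxi / np.pi

    x = x_min + np.arange(N) * dx
    exact = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
    print(np.max(np.abs(f - exact)))    # should be small; if the density looks
                                        # thinner as N grows, check that dx and
                                        # x_min are recomputed from dxi as above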
|
What's the difference between theorem, lemma and corollary? Can anybody explain to me the basic difference between a theorem, a lemma and a corollary?
We have been using these terms for a long time but I never paid any attention to the distinction. I am just curious to know.
|
Terence Tao (Analysis I, p. 25, n. 4):
From a logical point of view, there is no difference between a lemma, proposition,
theorem, or corollary - they are all claims waiting to be proved. However, we use
these terms to suggest different levels of importance and difficulty.
A lemma is an easily proved claim which is helpful for proving other propositions and theorems, but is usually not particularly interesting in its own right.
A proposition is a statement which is interesting in its own right, while
a theorem is a more important statement than a proposition which says something definitive on the subject, and often takes more effort to prove than a proposition or lemma.
A corollary is a quick consequence of a proposition or theorem that was proven recently.
|
how did he conclude that? (integral) So the question is: Find all continuous functions such that $\displaystyle \int_{0}^{x} f(t) \, dt= (f(x))^2+C$.
Now in the solution, it starts with this: clearly $f^2$ is differentiable at every point (its derivative is $f$), so $f(x)\ne0$. I have no idea how he concluded that; this is from Spivak's Calculus. If you differentiate, you clearly get $f(x)=2f(x)f'(x)$, but he said that before even giving this formula. Help please?
EDIT: I know that $f(x)=2f(x)f'(x)$. What I don't understand is this: "clearly $f^2$ is differentiable at every point (its derivative is $f$), so $f(x)\ne0$". Why must $f(x)$ not equal $0$?
|
I interpret the problem as follows:
Find all continuous functions $f:\ \Omega\to{\mathbb R}$ defined in some open interval $\Omega$ containing the origin and satisfying the integral equation
$$\int_0^x f(t)\ dt=f^2(x)+ C\qquad(x\in\Omega)$$
for a suitable constant $C$.
Assume $f$ is such a function and that $f(x)\ne 0$ for some $x\in\Omega$. Then for $0<|h|\ll1$ we have
$$\int_x^{x+h} f(t)\ dt=f^2(x+h)-f^2(x)=\bigl(f(x+h)+f(x)\bigr)\bigl(f(x+h)-f(x)\bigr)\ ,$$
and using the mean value theorem for integrals we conclude that
$${f(x+h)-f(x)\over h}={f(x+\tau h)\over f(x+h)+f(x)}$$
for some $\tau\in[0,1]$. Letting $h\to0$ we see that $f$ is differentiable at $x$ and that $f'(x)={1\over2}$. It follows that the graph of $f$ is a line with slope ${1\over2}$ in all open intervals where $f$ is nonzero. When such a line coming from west-south-west arrives at the point $(a,0)$ on the $x$-axis then $f(a)=0$ by continuity, and similarly, when such a line starts at the point $(b,0)$ due east-north-east, then $f(b)=0$.
The above analysis leaves only the following possibility for such an $f$: There are constants $a$, $b$ with $-\infty\leq a\leq b\leq\infty$ such that
$$f(x)=\cases{-{1\over2}(a-x)\quad &$(x\leq a)$ \cr 0&$(a\leq x\leq b)$ \cr {1\over2}(x-b)&$(x\geq b)\ .$\cr}$$
It turns out that these $f$'s are in fact solutions to the original problem.
Proof. Assume for the moment
$$a\leq0\leq b\ .\tag{1}$$
When $0\leq x\leq b$ then $$\int_0^x f(t)\ dt=0=f^2(x)+0\ ,$$
and when $x\geq b$ then
$$\int_0^x f(t)\ dt=\int_b^x f(t)\ dt={1\over4}(t-b)^2\biggr|_b^x={1\over4}(x-b)^2= f^2(x)+0$$
as well. Similarly one argues for $x\leq0$.
In order to get rid of the assumption $(1)$ we note that when $f$ is a solution of the original problem then any translate $g(x):=f(x-c)$, $\>c\in{\mathbb R}$, is a solution as well: For all $x\in{\mathbb R}$ we have
$$\eqalign{\int_0^x g(t)\ dt&=\int_0^x f(t-c)\ dt=\int_{-c}^{x-c} f(t')\ dt'=\int_{-c}^0 f(t')\ dt'+\int_0^{x-c} f(t')\ dt'\cr
&=\int_{-c}^0 f(t')\ dt'+f^2(x-c)+C=g^2(x)+C'\ .\cr}$$
|
Coin chosen is two headed coin in this probability question I have a probability question that reads:
Question:
A box has three coins. One has two heads, another two tails and the last is a fair coin. A coin is chosen at random, and comes up head. What is the probability that the coin chosen is a two headed coin.
My attempt:
$$P(\text{two-headed coin}\mid \text{head}) = \frac{P(\text{two-headed coin and head})}{P(\text{head})} = \frac{1/3}{2/3} = \frac12$$
Not sure whether this is correct?
|
For such a small number of options it's easy to count them.
The possible outcomes are:
*heads or heads, using the double-headed coin
*tails or tails, using the double-tailed coin
*heads or tails, using the fair coin
All these outcomes are equally likely. How many of these are heads, and of those, how many use the double-headed coin?
$$Answer = \frac{2}{3}$$
|
Zeroes about entire function Given an entire function $f(z)$ which satisfies $|f(z)|=1, \forall z \in \mathbb{R}$, the problem asks to show that there exists an entire function $g(z)$ such that $f(z)=\exp(g(z))$.
The only thing we need to show is that $f(z)$ admits no zeros on $\mathbb{C}$, so that we can define $g(z)$ by a standard single-valued-branch argument for the logarithm. But is the assumption $|f(z)|=1, \forall z \in \mathbb{R}$ so strong that entire functions satisfying it cannot have zeroes? Intuitively, take the example $f(z)=\exp(iz)$, for which $\infty$ is an essential singularity; so it is hard to expect that we can turn the real line into the boundary of the unit circle and then use some kind of maximum principle, etc. Basically I do not get the picture of what the assumption is saying.
|
Consider the function $h(z) = \overline{f(\overline{z})}$. That is an entire function too, and hence so is $k(z) = f(z)\cdot h(z)$. On the real line, you have
$$k(x) = f(x)\cdot h(x) = f(x) \overline{f(x)} = \lvert f(x)\rvert^2 = 1,$$
hence $k \equiv 1$. That guarantees that $f$ has no zeros.
|
What is the product of this by telescopic method? $$\prod_{k=0}^{\infty} \biggl(1+ {\frac{1}{2^{2^k}}}\biggr)$$
My teacher gave me this question and said that this is easy only if it strikes you the minute you read it. But I'm still thinking. Help!
P.S. This question is to be attempted by telescopic method.
|
The terms of the product are $(1+1/2)(1+1/4)(1+1/16)(1+1/256)\cdots$ with each denominator being the square of the previous denominator.
Now if you multiply the product with $(1-1/2)$ you see telescoping action:
$(1-1/2)(1+1/2)=1-1/4$
$(1-1/4)(1+1/4)=1-1/16$
$(1-1/16)(1+1/16)=1-1/256$
Do you see the pattern developing? After multiplying in all factors up to $k=n$ you are left with $1-1/2^{2^{n+1}}$, which tends to $1$; since the extra factor was $1-1/2=\frac12$, the infinite product equals $2$.
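A short numerical check of where the telescoping leads (a sketch in exact arithmetic):

    from fractions import Fraction

    p = Fraction(1)
    for k in range(7):
        p *= 1 + Fraction(1, 2**(2**k))
        print(k, float(p))   # partial products: 1.5, 1.875, ..., approaching 2
    # Indeed (1 - 1/2) * p_n = 1 - 1/2**(2**(n+1)), so the infinite product is 2.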
|
Show that in any group of order $23 \cdot 24$, the $23$-Sylow subgroup is normal.
Show that in any group of order $23 \cdot 24$, the $23$-Sylow subgroup is normal.
Let $P_k$ denote the $k$-Sylow subgroup and let $n_k$ denote the number of conjugates of $P_k$.
$n_2 \equiv 1 \mod 2$ and $n_2 | 69 \implies n_2= 1, 3, 23, 69$
$n_3 \equiv 1 \mod 3$ and $n_3 | 184 \implies n_3 = 1, 4, 184$
$n_{23} \equiv 1 \mod 23$ and $n_{23} | 24 \implies n_{23} = 1, 24$
Suppose for contradiction that $n_{23} = 24$. Then $|N(P_{23})|=552/24=23$. So the normalizer of $P_{23}$ only contains the identity and elements of order $23$.
It is impossible for $n_2$ to equal $23$ or $69$, or for $n_3$ to equal $184$, since we would then have more than $552$ elements in G.
Case 1:
Let $n_2 = 1$. Then we have $P_2 \triangleleft G$, and so we have a subgroup $H=P_2P_{23}$ in $G$. We know that $|H|=\frac{|P_2||P_{23}|}{|P_2 \cap P_{23}|} = 184$. We also know that the $23$-Sylow subgroup of $H$ is normal in $H$, so its normalizer in $G$ has order at least $184$. Since a $p$-Sylow subgroup is a largest subgroup of $p$-power order, and since the $23$-Sylow subgroups of $H$ and $G$ both have order $23$, they must coincide. But that means that elements of order not equal to $23$ normalize $P_{23}$, which is a contradiction.
Case 2:
Suppose that $n_3=1$. Then we have $P_3 \triangleleft G$, and so we have a subgroup $K=P_3P_{23}$ in $G$. Since $|K|=\frac{|P_3||P_{23}|}{|P_3 \cap P_{23}|}= 69$, the $23$-Sylow subgroup of $K$ is normal in $K$. Again, as in case 1, we have an element of order not equal to $23$ that normalizes $P_{23}$.
Case 3:
Let $n_3=4$ and $n_2=3$. But then $|N(P_2)|=184$. So this is the same as case 1, since we have a subgroup of $G$ of order $184$.
Do you think my answer is correct?
Thanks in advance
|
As commented back in the day, the OP's solution is correct. Promoting a modified (IMHO simplified) form of the idea from Thomas Browning's comment to an alternative answer.
The group $G$ contains $23\cdot24-24\cdot22=24$ elements outside the Sylow $23$-subgroups. Call the set of those elements $X$. Clearly $X$ consists of full conjugacy classes. If $s\in G$ is an element of order $23$, then consider the conjugation action of $P=\langle s\rangle$ on $X$. The identity element is obviously in an orbit by itself. By Orbit-Stabilizer either $X\setminus\{1_G\}$ is a single orbit, or the action of $P$ has another non-identity fixed point in $X$.
*In the former case all the elements of $X\setminus\{1_G\}$ must share the same order, being conjugates, in violation of Cauchy's theorem, which implies that there must exist elements of orders $2$ and $3$ in $X$ at least.
*In the latter case $P$ is centralized, hence normalized, by a non-identity element $\in X$. But this contradicts the fact that $P$ is known to have $24=[G:P]$ conjugate subgroups, and hence must be self-normalizing, $N_G(P)=P$.
|
Show that $S=\{\frac{p}{2^i}: p\in\Bbb Z, i \in \Bbb N \}$ is dense in $\Bbb R$. Show that $S=\{\frac{p}{2^i}: p\in\Bbb Z, i \in \Bbb N \}$ is dense in $\Bbb R$.
Just found this given as an example of a dense set while reading, and I couldn't convince myself of this claim's truthfulness. It kind of bugs me and I wonder if you guys have any idea why it is true. (I thought of taking two rational numbers that I know exist in any real neighborhood and averaging them in some way, but I didn't get far with that idea..)
Thank you!
|
I like to think of the answer intuitively. Represent $p$ in binary (base 2). Then $\frac{p}{2^i}$ is simply a number with finitely many binary digits. Conversely, any number whose binary representation has finitely many digits can be written as $\frac{p}{2^i}$.
To show the set is dense in $\Bbb R$, we have to show that given any real number $a$, we can find elements of the set arbitrarily close to $a$; equivalently, every open interval, however small, contains a point of the set. In particular any unit interval contains infinitely many points of the set. That's the intuitive meaning of "dense".
If you look at the intuition, it should be clear that the set is dense: You can always find a number that is as close as you want to any other number.
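Concretely, truncating the binary expansion produces the approximating element; a small Python sketch:

    import math

    def dyadic_approx(a, eps):
        """Return (p, i) with |a - p / 2**i| <= eps, by truncating binary digits."""
        i = max(0, math.ceil(-math.log2(eps)))
        p = round(a * 2**i)            # p / 2**i is within 2**-(i+1) of a
        return p, i

    p, i = dyadic_approx(math.pi, 1e-6)
    print(p, i, p / 2**i, abs(math.pi - p / 2**i))   # error below 1e-6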
|
Proof read from "A problem seminar" May you help me judging the correctness of my proof?:
Show that if $a$ and $b$ are positive integers, then
$$\left(a+\frac{1}{2}\right)^n+\left(b+\frac{1}{2}\right)^n$$
is an integer for only finitely many positive integers $n$
We want $n$ so that
$$\left(a+\frac{1}{2}\right)^n+\left(b+\frac{1}{2}\right)^n\equiv0\pmod{1}$$
So we know by the binomial theorem that
$(an+b)^k\equiv b^k\pmod{n}$ for positive $k$
Then,
$$\left(a+\frac{1}{2}\right)^n\equiv(1/2)^n\pmod{1}$$
and similarly with the $b$
So
$$\left(a+\frac{1}{2}\right)^n+\left(b+\frac{1}{2}\right)^n\equiv 2*(1/2)^n\pmod{1}$$
Therefore, we want $2\cdot(1/2)^n$ to be an integer, so that $2^n\mid 2$
clearly, the only positive option is $n=1$
|
Our expression can be written as
$$\frac{(2a+1)^n+(2b+1)^n}{2^n}.$$
If $n$ is even, then $(2a+1)^n$ and $(2b+1)^n$ are both the squares of odd numbers.
Any odd perfect square is congruent to $1$ modulo $8$. So their sum is congruent to $2$ modulo $8$, and therefore cannot be divisible by any $2^n$ with $n\gt 1$.
So we can assume that $n$ is odd. For odd $n$, we have the identity
$$x^n+y^n=(x+y)(x^{n-1}-x^{n-2}y+\cdots +y^{n-1}).$$
Let $x=2a+1$ and $y=2b+1$. Note that $x^{n-1}-x^{n-2}y+\cdots +y^{n-1}$ is a sum of an odd number of terms, each odd, so it is odd.
Thus the highest power of $2$ that divides $(2a+1)^n+(2b+1)^n$ is the highest power of $2$ that divides $(2a+1)+(2b+1)$. Since $(2a+1)+(2b+1)\ne 0$, there is a largest $n$ such that our expression is an integer.
Remark: The largest $n$ such that our expression is an integer can be made quite large. You might want to see for example what happens if we let $2a+1=2049$ and $2b+1=2047$. Your proposed proof suggests, in particular, that $n$ cannot be greater than $1$.
I suggest that when you are trying to write out a number-theoretic argument, you avoid fractions as much as possible and deal with integers only.
|
Quartic Equation having Galois Group as $S_4$ Suppose $f(x)\in \mathbb{Z}[x]$ be an irreducible Quartic polynomial with Galois Group as $S_4$. Let $\theta$ be a root of $f(x)$ and set $K=\mathbb{Q}(\theta)$.Now, the Question is:
Prove that $K$ is an extension of $\mathbb{Q}$ of degree $4$ which has no proper subfields.
Are there any Galois extensions of $\mathbb{Q}$ of degree $4$ with no proper subfields?
As i have adjoined a root of irreducible quartic, I can see that $K$ is of degree $4$ over $\mathbb{Q}$.
But why is there no proper subfield of $K$ containing $\mathbb{Q}$?
Suppose $L$ is a proper subfield of $K$ containing $\mathbb{Q}$; then $L$ has to be of degree $2$ over $\mathbb{Q}$. So $L$ is Galois over $\mathbb{Q}$, i.e., $L$ is normal, so the corresponding subgroup of the Galois group has to be normal.
I tried working in this way but was not able to conclude anything from it.
any help/suggestion would be appreciated.
Thank You
|
As has been remarked, the non-existence of intermediate fields is equivalent to $S_{3}$ being a maximal subgroup of $S_{4}.$ If not, then there is a subgroup $H$ of $S_{4}$ with $[S_{4}:H] = [H:S_{3}] = 2.$ Now $S_{3} \lhd H$ and $S_{3}$ contains all Sylow $3$-subgroups of $H.$ But $S_{3}$ has a unique Sylow $3$-subgroup, which is therefore normal in $H.$ Hence $H$ contains all Sylow $3$-subgroups of $S_{4}$ as $H \lhd S_{4}$ and $[S_{4}:H] = 2.$ Then since $H$ only has one Sylow $3$-subgroup, $S_{4}$ has only one Sylow $3$-subgroup, a contradiction
(for example, $\langle (123) \rangle$ and $\langle (124) \rangle$ are different Sylow $3$-subgroups of $S_{4}$).
|
Finding an orthogonal basis of the subspace spanned by given vectors
Let W be the subspace spanned by the given vectors. Find a basis for $W^\perp$.
$$v_1=(2,1,-2) ;v_2=(4,0,1)$$
Well I did the following to find the basis.
$$(x,y,z)\cdot(2,1,-2)=0$$ $$(x,y,z)\cdot(4,0,1)=0$$
If you simplify this into a linear system:
$$2x + y - 2z = 0$$ $$4x + z = 0$$
Now by placing this in an augmented matrix and performing row reduction I get
$$ w =
\left[ \begin{array}{ccc|c}
1 &\ 0 &\ 1/4 &0\\
0&\ 1 &\ -5/2 &0
\end{array} \right]$$
By solving this I get
$$x=-1/4 t$$
$$y=5/2 t$$
$$z=t$$
By this I get the basis to be.
$$[-1/4, 5/2 ,1]$$
I am not sure the answer is correct, because the same vector space can be obtained in different ways. Please tell me if I used the correct method.
Thank you
|
Since you work in $\mathbb R^3$, simply take $v_3=v_1\wedge v_2=(1,-10,-4)$. (This is $-4$ times your vector $(-1/4, 5/2, 1)$, so both answers span the same line $W^\perp$; your method was correct.)
|
Why is there never a proof that extending the reals to the complex numbers will not cause contradictions? The number $i$ is essentially defined for a property we would like to have - to then lead to taking square roots of negative reals, to solve any polynomial, etc. But there is never a proof this cannot give rise to contradictions, and this bothers me.
For instance, one would probably like to define division by zero, since undefined values are simply annoying. We can introduce the number "$\infty$" if we so choose, but by doing so, we can argue contradictory statements with it, such as $1=2$ and so on that you've doubtlessly seen before.
So since the definition of an extra number to have certain properties that previously did not exist may cause contradictions, why aren't we so rigourous with the definition of $i$?
edit: I claim we aren't, simply because no matter how long and hard I look, I never find anything close to what I'm looking for. Just a definition that we assume is compatible with everything we already know.
|
There are several ways to introduce the complex numbers rigorously, but simply postulating the properties of $i$ isn't one of them. (At least not unless accompanied by some general theory of when such postulations are harmless).
The most elementary way to do it is to look at the set $\mathbb R^2$ of pairs of real numbers and then study the two functions $f,g:\mathbb R^2\times \mathbb R^2\to\mathbb R^2$:
$$ f((a,b),(c,d)) = (a+c, b+d) \qquad g((a,b),(c,d))=(ac-bd,ad+bc) $$
It is then straightforward to check that
*$(\mathbb R^2,f,g)$ satisfies the axioms for a field, with $(0,0)$ being the "zero" of the field and $(1,0)$ being the "one" of the field.
*the subset of pairs with second component being $0$ is a subfield that's isomorphic to $\mathbb R$,
*the square of $(0,1)$ is $(-1,0)$, which we've just identified with the real number $-1$, so let's call $(0,1)$ $i$, and
*every element of $\mathbb R^2$ can be written as $a+bi$ with real $a$ and $b$ in exactly one way, namely $(a,b)=(a,0)+(b,0)(0,1)$.
With this construction in mind, if we ever find a contradiction involving complex number arithmetic, this contradiction can be translated to a contradiction involving plain old (pairs of) real numbers. Since we believe that the real numbers are contradiction-free, so are the complex numbers.
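The construction is so concrete that you can execute it; a toy Python sketch of $f$, $g$ and the check that $(0,1)^2=(-1,0)$:

    def f(a, b):
        """Addition of pairs."""
        return (a[0] + b[0], a[1] + b[1])

    def g(a, b):
        """Multiplication of pairs: (ac - bd, ad + bc)."""
        return (a[0] * b[0] - a[1] * b[1], a[0] * b[1] + a[1] * b[0])

    i = (0, 1)
    print(g(i, i))             # (-1, 0): the pair identified with the real -1
    print(g((2, 3), (4, 5)))   # (-7, 22), matching (2 + 3i)(4 + 5i) = -7 + 22i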
|
Please help on this Probability problem
A bag contains 5 red marbles and 7 green marbles. Two marbles are drawn randomly one at a time, and without replacement. Find the probability of picking a red and a green, without order.
This is how I attempted the question: I first got $P(\text{Red})= 5/12$ and then $P(\text{Green})= 7/11$, and multiplied the two:
Then I got $P(\text{Green})= 7/12$ and $P(\text{Red})= 5/11$ $\implies$
$$\frac{5}{11} × \frac{7}{12}= \frac{35}{132}$$
So I decided that $$P(\text{G and R}) + P(\text{R and G}) =\frac{35}{132} + \frac{35}{132} =\frac{35}{66}.$$ Is this correct?
|
Very nice and successful attempt. You recognized that there are two ways once can draw a red and green marble, given two draws: Red then Green, or Green then Red. You took into account that the marbles are not replaced. And your computations are correct: you multiplied when you needed to multiply and added when you needed to add:
$$\left[P(\text{1. Red}) \times P(\text{2. Green})\right]+ \left[P(\text{1. Green}) \times P(\text{2. Red})\right]$$
Your method and result are correct.
|
Mean value of the rotation angle is 126.5° In the paper
"Applications of Quaternions to Computation with Rotations"
by Eugene Salamin, 1979 (click here),
they get 126.5 degrees as the mean value of the rotation angle of a random rotation (by integrating quaternions over the 3-sphere).
How can I make sense of this result?
If rotation angle around a given axis can be 0..360°, should not the mean be 180? or 0 if it can be −180..180°? or 90° if you take absolute values?
|
First, SO(3) of course has a unique invariant probability measure. Hence, "random rotation" is a well-defined SO(3)-valued random variable. Each rotation (an element of SO(3)) has a uniquely defined rotation angle θ, from 0 to 180° (π), because of the axis–angle representation. (Note that the axis is undefined for θ = 0 and has two possible values for θ = 180°, but θ itself has no ambiguity.) Hence, "angle of a random rotation" is a well-defined random angle.
Why is its average closer to one end (180°) than to another (0)? In short, because there are many 180° rotations, whereas rotation by zero angle is unique (identity map).
Note that I ignore the Spin(3) → SO(3) covering that is important in quaternionic discourse, but it won't change the result: the Haar measure on Spin(3) projected onto SO(3) gives the Haar measure on SO(3), hence there is no difference whether we make computations on S3 of unit quaternions (the same as Spin(3)) or directly on SO(3).
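A Monte Carlo check makes the 126.5° figure tangible: sample uniform unit quaternions (normalized Gaussians), read off the rotation angle θ = 2 arccos|q_w|, and average. A minimal NumPy sketch:

    import numpy as np

    rng = np.random.default_rng(0)
    q = rng.standard_normal((1_000_000, 4))
    q /= np.linalg.norm(q, axis=1, keepdims=True)   # uniform on the 3-sphere
    theta = 2 * np.arccos(np.abs(q[:, 0]))          # rotation angle in [0, pi]
    print(np.degrees(theta.mean()))                 # ~126.5
    print(np.degrees(np.pi / 2 + 2 / np.pi))        # exact mean of the angle
                                                    # density (1 - cos t)/pi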
|
How to find the degrees between 2 vectors when I have $\arccos$ just in radian mode? I'm trying to write in java a function which finds the angles, in degrees, between 2 vectors, according to the follow equation -
$$\cos{\theta} = \frac{\vec{u} \cdot \vec{v}}{\|\mathbf{u}\|\|\mathbf{v}\|}$$
but in Java the Math.acos method returns the angle in radians, so what do I have to do after I calculate $\frac{\vec{u} \cdot \vec{v}}{\|\mathbf{u}\|\|\mathbf{v}\|}$ to get the result in degrees?
|
You can compute the angle in degrees by computing the angle in radians and then multiplying by
$\dfrac {360}{2\pi} = \dfrac {180\; \text{degrees}}{\pi\; \text{radians}}$:
$$\theta = \arccos\left(\frac{\vec{u} \cdot \vec{v}}{\|\mathbf{u}\|\|\mathbf{v}\|}\right)\cdot \frac {180}{\pi}$$
(In Java, Math.toDegrees(Math.acos(c)) performs exactly this conversion.)
|
Expected number of people sitting in the right seats. There was a popular interview question from a while back: there are $n$ people getting seated an airplane, and the first person comes in and sits at a random seat. Everyone else who comes in either sits in his seat, or if his seat has been taken, sits in a random unoccupied seat. What is the probability that the last person sits in his correct seat?
The answer to this question is $1/2$ because everyone looking to sit on a random seat has an equal probability of sitting in the first person's seat as the last person's.
My question is: what is the expected number of people sitting in their correct seat?
My take: this would be $\sum_{i=1}^n p_i$ where $p_i$ is the probability that person $i$ sits in the right seat:
$p_1 = \frac1n$
$p_2 = 1 - \frac1n$
$p_3 = 1 - \left(\frac1n + \frac1{n(n-1)}\right)$
$p_4 = 1 - \left(\frac1n + \frac2{n(n-1)} + \frac1{n(n-1)(n-2)}\right)$
Is this correct? And does it generalize to $p_i$ having a $\max(0, i-1)$ term of $\frac1{n(n-1)}$, a $\max(0, i-2)$ term of $\frac1{n(n-1)(n-2)}$, etc.?
Thanks.
|
I found this question and the answer might be relevant.
Seating of $n$ people with tickets into $n+k$ chairs with 1st person taking a random seat
The answer states that the probability of a person not sitting in his seat is $\frac{1}{k+2}$ where $k$ is the number of seats left after he takes a seat. This makes sense because for person $i$, if anyone sits in chairs $1, i+1, ... n$ then he must sit in his own seat, so the probability of that happening is $\frac{n-i+1}{n-i+2}$. So $k = 0$ for the last person and $k = n-1$ for the second person. The answer then should just be
$1/n + \sum_{i = 2}^{n} \frac{n-i+1}{n-i+2}$
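A quick simulation supports this formula; a sketch of the boarding process in Python:

    import random

    def correct_seats(n):
        """Simulate one boarding; return how many passengers sit in their own seats."""
        free = set(range(n))
        first = random.randrange(n)       # person 0 picks a uniformly random seat
        correct = int(first == 0)
        free.remove(first)
        for p in range(1, n):
            s = p if p in free else random.choice(sorted(free))
            correct += (s == p)
            free.remove(s)
        return correct

    n, trials = 20, 20000
    print(sum(correct_seats(n) for _ in range(trials)) / trials)   # Monte Carlo
    print(1/n + sum((n - i + 1) / (n - i + 2) for i in range(2, n + 1)))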
|
What does it mean "adjoin A to B"? What does it mean that we can obtain $\mathbb{C}$ from $\mathbb{R}$ by adjoining $i$?
Or that we can also adjoin $\sqrt{2}$ to $\mathbb{Q}$ to get $\mathbb{Q}(\sqrt{2})=\{a+b \sqrt{2}\mid a,b \in \mathbb{Q}\}$?
|
It means exactly what you have written there. Let $F$ be a field and $\alpha$ be a root of a polynomial $f(x)$ that is irreducible of degree $d$ over $F[x]$. Then we say we can adjoin $\alpha$ to $F$ by considering all linear combinations of field elements of $F$ with scalar multiples of powers of $\alpha$ up to $d-1$, or rather:
$F(\alpha) = \{a_{1} + a_{2}\alpha + a_{3} \alpha^{2} + \cdots + a_{d}\alpha^{d-1}| a_{1},\ldots,a_{d} \in F\}$.
Note that $\alpha \notin F$, because it is a root of a polynomial irreducible over $F$ of degree $d > 1$. However, $F(\alpha)$ certainly contains $\alpha$. In this light, we can view field extensions as a way of "extending" base fields to include elements they wouldn't otherwise have.
You'll notice this is consistent with the definition of $\mathbb{C}$ from $\mathbb{R}$. $i$ is the root of the irreducible polynomial $x^{2} + 1$ over $\mathbb{R}[x]$. Hence, if we define $\mathbb{C} = \mathbb{R}(i)$, then
$\mathbb{C} = \{a + bi | a, b \in \mathbb{R}\}$.
|
Laplace operator's interpretation (Laplacian) What is your interpretation of Laplace operator? When evaluating Laplacian of some scalar field at a given point one can get a value. What does this value tell us about the field or it's behaviour in the given spot?
I can grasp the meaning of gradient and divergence. But viewing Laplace operator as divergence of gradient gives me interpretation "sources of gradient" which to be honest doesn't make sense to me.
It seems a bit easier to interpret Laplacian in certain physical situations or to interpret Laplace's equation, that might be a good place to start. Or misleading. I seek an interpretation that would be as universal as gradients interpretation seems to me - applicable, correct and understandable on any scalar field.
PS The text of this question is taken from Physics StackExchange. I found it useful for people who search in the Math StackExchange forum.
|
It's enlightening to note that the adjoint of $\nabla$ is $-\text{div}$, so that $-\text{div} \nabla$ has the familiar pattern $A^T A$, which recurs throughout linear algebra. Hence you would expect (or hope) $-\text{div} \nabla$ to have the properties enjoyed by a symmetric positive definite matrix -- namely, the eigenvalues are nonnegative and there is an orthonormal basis of eigenvectors.
$-\text{div}$ is sometimes called the "convergence", and this viewpoint suggests that the Laplacian should have been defined as the convergence of the gradient.
|
Classification of all semisimple rings of a certain order I'd appreciate it if you tell me where to begin in order to solve this question:
Classify (up to ring isomorphism) all semisimple rings of order 720.
Could the Wedderburn-Artin Structural Theorem be applicable?
|
Yes, you should definitely apply Artin-Wedderburn.
The thing you gain from knowing the ring is finite is that the ring will be a product of matrix rings over fields, since finite division rings are fields. Hopefully you know that all finite fields are of prime power order.
Now then, an $n \times n$ matrix ring over a field with $q$ elements clearly has $q^{n^2}$ matrices. Start deducing what the possibilities are :)
|
When polynomial is power $P(x)$ is a polynomial with real coefficients, and $k>1$ is an integer. For any $n\in\Bbb Z$, we have $P(n)=m^k$ for some $m\in\Bbb Z$. Show that there exists a polynomial $H(x)$ with real coefficients such that $P(x)=(H(x))^k$, and $\forall n\in\Bbb Z,$ $H(n)$ is an integer.
This is an old question, but I never saw a complete proof. Thanks a lot!
|
The result is Corollary 3.3 in this paper.
|
$\sum_{k=1}^nH_k = (n+1)H_n-n$. Why? This is motivated by my answer to this question.
The Wikipedia entry on harmonic numbers gives the following identity:
$$
\sum_{k=1}^nH_k=(n+1)H_n-n
$$
Why is this?
Note that I don't just want a proof of this fact (It's very easily done by induction, for example). Instead, I want to know if anyone's got a really nice interpretation of this result: a very simple way to show not just that this relation is true, but why it is true.
Has anyone got a way of showing that this identity is not just true, but obvious?
|
I suck at making pictures, but I try nevertheless. Write $n+1$ rows of the sum $H_n$:
$$\begin{matrix}
1 & \frac12 & \frac13 & \dotsb & \frac1n\\
\overline{1\Big\vert} & \frac12 & \frac13 & \dotsb & \frac1n\\
1 & \overline{\frac12\Big\vert} & \frac13 & \dotsb & \frac1n\\
1 & \frac12 & \overline{\frac13\Big\vert}\\
\vdots & & &\ddots & \vdots\\
1 & \frac12 &\frac13 & \dotsb & \overline{\frac1n\Big\vert}
\end{matrix}$$
The total sum is obviously $(n+1)H_n$. The part below the diagonal is obviously $\sum\limits_{k=1}^n H_k$. The part above (and including) the diagonal is obviously $\sum_{k=1}^n k\cdot\frac1k = n$.
It boils down of course to the same argument as Raymond Manzoni gave, but maybe the picture makes it even more obvious.
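And a three-line check in exact arithmetic, for the skeptical (a sketch):

    from fractions import Fraction

    H = lambda n: sum(Fraction(1, k) for k in range(1, n + 1))
    for n in (1, 5, 10, 50):
        print(n, sum(H(k) for k in range(1, n + 1)) == (n + 1) * H(n) - n)  # True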
|
Spectrum and tower decomposition I'm trying to read "Partitions of Lebesgue space in trajectories defined by ergodic automorphisms" by Belinskaya (1968). In the beginning of the proof of theorem 2.7, the author considers an ergodic automorphism $R$ of a Lebesgue space whose spectrum contains all $2^i$-th roots of unity ($i=1,2,\ldots$), and then he/she claims that for each $i$ there exists a system of sets ${\{D_i^k\}}_{k=1}^{2^i}$ such that $R D_i^k=D_i^{k+1}$ for $k=1, \ldots, 2^i-1$ and $R D_i^{2^i}=D_i^1$. I'm rather a newbie in ergodic theory and I don't know where this claim comes from. I would appreciate any explanation.
|
Let $R$ be an ergodic automorphism of a Lebesgue space. Let $\omega$ be a root of unity in the spectrum of $R$, and $n$ be the smallest positive integer such that $\omega^n = 1$.
Let $f$ be a non-zero eigenfunction corresponding to the eigenvalue $\omega$. Then:
$f^n \circ R = (f \circ R)^n = (\omega f)^n = f^n,$
and $f^n$ is an eigenfunction for the eigenvalue $1$. Without loss of generality, we can assume that $f^n$ is bounded (you just have to take $f/|f|$ instead whenever $f \neq 0$). Then, since the transformation is ergodic, $f^n$ is constant. Up to multiplication by a constant, we can assume that $f^n \equiv 1$.
Thus $f$ takes its values in the group generated by $\omega$. For all $0 \leq k < n$, let $A_k$ be the set $\{f = \omega^k\}$. This is a partition of the whole space. Moreover, if $x \in A_k$, then $f \circ R (x) = \omega f(x) = \omega \cdot \omega^k = \omega^{k+1}$, so $R (x) \in A_{k+1}$ (with the addition taken modulo $n$), and conversely. Hence, $R A_k = A_{k+1}$.
|
Mathematical Analysis advice Claim: Let $\delta>0, n\in N. $ Then $\lim_{n\rightarrow\infty} I_{n} $exists, where $ I_{n}=\int_{0}^{\delta} \frac{\sin\ nx}{x} dx $
Proof: $f(x) =\frac{\sin\ nx}{x}$ has a removable discontinuity at $x=0$ and so we let $f(0) =n$
$x = \frac{t}{n}$ is continuous and monotone on $t\in[0,n\delta]$, hence, $ I_{n}=\int_{0}^{n\delta} \frac{\sin\ t}{t} dt $
For all $p\geq 1$: given any $\epsilon > 0$, there exists $n_{0} > \frac{2}{\epsilon\delta}$ such that for all $n > n_{0}$, $|I_{n+p}-I_{n}| = \left\lvert\int_{n\delta}^{(n+p)\delta} \frac{\sin t}{t}\, dt \right\rvert = \frac{1}{n\delta}\left\lvert\int_{n\delta}^{M} \sin t\, dt\right\rvert \leq \frac{2}{n\delta} < \epsilon$, for some $M \in [n\delta, (n+p)\delta]$, by Bonnet's theorem (the second mean value theorem for integrals)
Is my proof valid? Thank you.
|
I can suggest an alternative path. Prove that $$\lim_{a\to 0^+}\lim_{b \to\infty}\int_a^b\frac{\sin x}xdx$$
exists as follows: integrating by parts
$$\int_a^b \frac{\sin x}xdx=\left.\frac{1-\cos x}x\right|_a^b-\int_a^b\frac{1-\cos x}{x^2}dx$$
Then use $$\frac{1-\cos h}h\stackrel{h\to 0}\to 0$$ $$\frac{1-\cos h}{h^2}\stackrel{h\to 0}\to\frac 1 2$$ $$\int_1^\infty\frac{dt}{t^2}=1<+\infty$$
Then it will follow your limit is indeed that integral, since changing variables and since $\delta >0$ $$\int_0^{n\delta}\frac{\sin x}xdx\to\int_0^\infty\frac{\sin x}xdx$$
Then it remains to find what that improper Riemann integral equals to. One can prove it equals $\dfrac \pi2$. First, one uses that
$$1 +2\sum_{k=1}^n\cos kx=\frac{\sin \left(n+\frac 1 2\right)x}{\sin\frac x 2}$$
from where it follows $$\int_0^\pi\frac{\sin \left(n+\frac 1 2\right)x}{\sin\frac x 2}dx=\pi$$ since all the cosine integrals vanish. Now, the function $$\frac{2}{t}-\frac{1}{\sin\frac t 2}$$ is continuous on $[0,\pi]$. Thus, by the Riemann Lebesgue Lemma, $$\mathop {\lim }\limits_{n \to \infty } \int_0^\pi {\sin \left( {n + \frac{1}{2}} \right)x\left( {\frac{2}{x} - {{\left( {\sin \frac{x}{2}} \right)}^{ - 1}}} \right)dx} = 0$$
It follows that $$\mathop {\lim }\limits_{n \to \infty } \int_0^\pi {\frac{{\sin \left( {n + \frac{1}{2}} \right)x}}{x}dx} = \frac{\pi }{2}$$ so $$\mathop {\lim }\limits_{n \to \infty } \int_0^{\pi \left( {n + \frac{1}{2}} \right)} {\frac{{\sin x}}{x}dx} = \frac{\pi }{2}$$
Since we know the integral already exists, we conclude $$\int_0^\infty {\frac{{\sin x}}{x}dx} = \frac{\pi }{2}$$
|
Parametric equations, eliminating the parameter $\,x = t^2 + t,\,$ $y= 2t-1$ $$x = t^2 + t\qquad y= 2t-1$$
So I solve $y$ for $t$
$$t = \frac{1}{2}(y+1)$$
Then I am supposed to plug it into the equation for $x$, which is where I lose track of the logic.
$$x = \left( \frac{1}{2}(y+1)\right)^2 + \frac{1}{2}(y+1) = \frac{1}{4}y^2 + y+\frac{3}{4}$$
That is now my answer? I am lost. Is this $x(y)$? How is this valid? I don't understand.
|
Let's assume you are walking on the $xy$-plane. Your $x$-position (east-west) at a certain time $t$ is given by $x = t^2 + t$, and your $y$-position by $y = 2t - 1$.
What if you want to know the whole path you traced, without caring about when you stepped where? Eliminate $t$:
$$x = \frac{1}{4} y^2 + y + \frac{3}{4}$$
If a pair $(x, y)$ satisfies that equation, it means that at some time in the past or future you stepped, or will step, on that point.
|
Showing probability no husband next to wife converges to $e^{-1}$ Inspired by these questions:
*Probability of Couples sitting next to each other (Sitting in a Row)
*Probability question about married couples
*Four married couples, eight seats. Probability that husband sits next to his wife?
*In how many ways can n couples (husband and wife) be arranged on a bench so no wife would sit next to her husband?
*No husband can sit next to his wife in this probability question
the more general question of the probability that seating $n$ couples (i.e. $2n$ individuals) in a row at random means that no couples are sat together can be expressed using inclusion-exclusion as
$$\displaystyle\sum_{i=0}^n (-2)^i {n \choose i}\frac{(2n-i)!}{(2n)!}$$
which for small $n$ takes the values:
n    P(no couple together)    decimal
1    0                        0
2    1/3                      0.3333333
3    1/3                      0.3333333
4    12/35                    0.3428571
5    47/135                   0.3481481
6    3655/10395               0.3516114
7    1772/5005                0.3540460
8    20609/57915              0.3558491
This made me wonder whether it converges to $e^{-1} \approx 0.3678794$ as $n$ increases, like other cases such as the secretary/dating problem and $\left(1-\frac1n\right)^n$ do. So I tried the following R code (using logarithms to avoid overflows)
couples <- 1000000
together <- 0:couples  # inclusion-exclusion index i = number of couples forced together
sum( (-1)^together * exp( log(2)*together + lchoose(couples,together) +
lfactorial(2*couples - together) - lfactorial(2*couples) ) )  # terms in log space to avoid overflow
which indeed gave a figure of $0.3678794$.
How might one try to prove this limit?
|
I observe that each term with $i$ fixed approaches a nice limit. We have
$$ 2^i \frac{n(n-1)(n-2)\cdots(n-i+1)}{i!} \frac1{(2n-i+1)(2n-i+2)\cdots(2n)} $$
or
$$ \frac1{i!} \frac{2n}{2n} \frac{2(n-1)}{2n-1} \cdots \frac{2(n-i+1)}{(2n-i+1)} \sim \frac 1{i!} $$
This gives you the series, assuming the limits (defining terms with $i>n$ to be zero) may be safely exchanged,
$$\lim_{n\to\infty} \sum_{i=0}^\infty [\cdots] = \sum_{i=0}^\infty \lim_{n\to\infty} [\cdots] = \sum_{i=0}^\infty (-1)^i \frac1 {i!} \equiv e^{-1}$$
Justifying the limit interchange I haven't thought about but I suspect this can be shown to be fine without too much effort... Edit: You can probably use the Weierstrass M-test.
|
Proof that equality on categorical products is componentwise equality I want to prove that in the categorical product as defined here it holds that
for $x,y \in \prod X_i$ we have
$$
x = y \textrm{ iff } \forall i \in I : \pi_i(x) = \pi_i(y).
$$
The direction from left to right is trivial, but the other direction, that if the components are equal then the elements of the product are equal, I am not able to prove. I tried to substitute the identity morphisms in the universal property, but I always get the wrong "types" in the functions involved. Any hints?
|
In arbitrary categories, there is a notion of "generalized element": A generalized element of an object $A$ is any morphism into $A$ (from any object of the category). A morphism $A\to B$ can be applied to a generalized element $Z\to A$ just by composing them to get a generalized element $Z\to B$. In these terms, the result you want can be proved: A generalized element of $\prod_iX_i$ is determined by its images under all the projections $\pi_i$. But "proved" here is too grandiose a term; this fact is just part of the categorical definition of product.
|
Induced homomorphism between fundamental groups of a retract is surjective I'm trying to understand why the induced map $i_*: \pi_1(A) \rightarrow \pi_1(X)$ is surjective, for $A$ being a retract of $X$ and $i: A \rightarrow X$ being the inclusion map? For homotopy retracts it's obvious, but for retracts it seems I miss something.
|
Any loop in $A$ is also a loop in $X$. What does $f_*$ do to an element of $\pi_1(X)$ that is a loop in $A$?
More categorically, if $i:A\to X$ is the inclusion map (so that $f\circ i=\mathrm{id}_A$), then $f_*\circ i_*=\mathrm{id}_{\pi_1(A)}$ because $\pi_1$ is a functor. Since $\mathrm{id}_{\pi_1(A)}$ is surjective we must have that $f_*$ is surjective.
Regarding your edited question, the map $i_*:\pi_1(A)\to \pi_1(X)$ does not have to be surjective, regardless of whether or not there is a retraction $f:X\to A$.
For example, let $X$ be any space with a non-trivial fundamental group and let $A=\{x\}$ be a point in $X$. There is an obvious retraction $f:X\to A$ (the constant map to $x$). But $\pi_1(A)$ is trivial and hence $i_*:\pi_1(A)\to\pi_1(X)$ cannot be surjective.
|
If $A$ is compact and $B$ is Lindelöf space , will be $A \cup B$ Lindelöf I have 2 different questions:
As we know a space Y is Lindelöf if each open covering contains a countable subcovering.
(1): If $A$ is compact and $B$ is a Lindelöf space, will $A \cup B$ be Lindelöf?
If it is right, how can we prove it?
(2): A topological space is called $KC$ when each compact subset is closed.
Is a Cartesian product of $KC$ spaces also a $KC$ space? Does it matter whether the number of factors is finite or infinite?
|
I am facing a notational problem: what is a $KC$ space? The answer to your first question is the following.
Lindelöf space: a space $X$ is said to be Lindelöf if every open cover of the space has a countable subcover.
Consider an open cover $P = \{P_{\alpha}: \alpha \in J\}$ of $A \cup B$, where each $P_{\alpha}$ is open in $A \cup B$.
Now $P$ gives covers of both $A$ and $B$, say $P_1 = \{P_{\alpha} \cap A\}$ and $P_2 = \{P_{\alpha}\cap B\}$.
Now $A$ is compact, so $P_1$ has a finite subcover, say $P_1^{'}$.
$B$ is Lindelöf, so $P_2$ has a countable subcover, say $P_2^{'}$.
For each element of $P_1^{'}$ and $P_2^{'}$, pick a corresponding set $P_{\alpha}$ from the original cover $P$; together these countably many $P_{\alpha}$ cover $A \cup B$. So $A \cup B$ is Lindelöf.
|
Can a 2D person walking on a Möbius strip prove that it's on a Möbius strip? Or some other non-orientable surface: can a 2D walker on a non-orientable surface prove that the surface is non-orientable, or does it always take an observer from the next dimension up to prove that an entity of a lower dimension is non-orientable? So does it always take an extra dimension to prove that an entity of the current dimension is non-orientable?
|
If he has a friend, then they can both paint their right hands blue and left hands red.
His friend stays where he is; he goes once around the strip. Now his left hand and right hand are switched when he compares them to his friend's hands.
|
Expected value of game involving 100-sided die The following question is from a Jane Street interview.
You are given a 100-sided die. After you roll once, you can choose to either get paid the dollar amount of that roll OR pay one dollar for one more roll. What is the expected value of the game? (There is no limit on the number of rolls.)
P.S. I think the question assumes that we are rational.
|
Let $v$ denote the expected value of the game. If you roll some $x\in\{1,\ldots,100\}$, you have two options:
*Keep the $x$ dollars.
*Pay the \$$1$ continuation fee and roll the die once again. The expected value of the next roll is $v$. Thus, the net expected value of this option turns out to be $v-1$ dollars.
You choose one of these two options based on whichever provides you with the higher gain. Therefore, if you rolled $x$, your payoff is $\max\{x,v-1\}$.
Now, the expected value of the game, $v$, is given as the expected value of these payoffs:
\begin{align*}
v=\frac{1}{100}\sum_{x=1}^{100}\max\{x,v-1\}\tag{$\star$},
\end{align*}
since each $x$ has a probability of $1/100$ and given a roll of $x$, your payoff is exactly $\max\{x,v-1\}$. This equation is not straightforward to solve. The right-hand side sums up those $x$ values for which $x>v-1$, and for all such values of $x$ that $x\leq v-1$, you add $v-1$ to the sum. This pair of summations gives you $v$. The problem is that you don't know where to separate the two summations, since the threshold value based on $v-1$ is exactly what you need to compute. This threshold value can be guessed using a numerical computation, based on which one can confirm the value of $v$ rigorously. This turns out to be $v=87\frac{5}{14}$.
Incidentally, this solution also reveals that you should keep rolling the die for a \$1 fee as long as you roll 86 or less, and accept any amount of 87 or more.
ADDED$\phantom{-}$In response to a comment, let me add further details on the computation. Solving for the equation ($\star$) is complicated by the possibility that the solution may not be an integer (indeed, ultimately it is not). As explained above, however, ($\star$) can be rewritten in the following way:
\begin{align*}
v=\frac{1}{100}\left[\sum_{x=1}^{\lfloor v\rfloor-1}(v-1)+\sum_{x=\lfloor v\rfloor}^{100}x\right],\tag{$\star\star$}
\end{align*}
where $\lfloor\cdot\rfloor$ is the floor function (rounding down to the nearest integer; for example: $\lfloor1\mathord.356\rfloor=1$; $\lfloor23\mathord.999\rfloor=23$; $\lfloor24\rfloor=24$). Now let’s pretend for a moment that $v$ is an integer, so that we can obtain the following equation:
\begin{align*}
v=\frac{1}{100}\left[\sum_{x=1}^{v-1}(v-1)+\sum_{x=v}^{100}x\right].
\end{align*}
It is algebraically tedious yet conceptually not difficult to show that this is a quadratic equation with roots
\begin{align*}
v\in\left\{\frac{203\pm3\sqrt{89}}{2}\right\}.
\end{align*}
The larger root exceeds $100$, so we can disregard it, and the smaller root is approximately $87\mathord.349$. Of course, this is not a solution to ($\star\star$) (remember, we pretended that the solution was an integer, and the result of $87\mathord.349$ does not conform to that assumption), but this should give us a pretty good idea about the approximate value of $v$. In particular, this helps us formulate the conjecture that $\lfloor v\rfloor=87$. Upon substituting this conjectured value of $\lfloor v\rfloor$ back into ($\star\star$), we now have the exact solution $v=87\frac{5}{14}$, which also confirms that our heuristic conjecture that $\lfloor v\rfloor=87$ was correct.
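Not in the original answer, but the fixed point of ($\star$) is easy to confirm numerically; a minimal sketch in plain Python (both the iteration and the final exact solve assume only the equations above):

```python
from fractions import Fraction

# Fixed-point iteration of v <- (1/100) * sum_x max(x, v - 1); the map is a contraction
v = 50.0
for _ in range(200):
    v = sum(max(x, v - 1) for x in range(1, 101)) / 100
print(v)  # 87.35714285714...

# With the threshold floor(v) = 87 known: 100 v = 86 (v - 1) + sum_{x=87}^{100} x
print(Fraction(sum(range(87, 101)) - 86, 14))  # 1223/14 = 87 + 5/14
```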
|
What's 4 times more likely than 80%? There's an 80% probability of a certain outcome, we get some new information that means that outcome is 4 times more likely to occur.
What's the new probability as a percentage and how do you work it out?
As I remember it the question was posed like so:
Suppose there's a student, Tom W, if you were asked to estimate the
probability that Tom is a student of computer science. Without any
other information you would only have the base rate to go by
(percentage of total students enrolled on computer science) suppose
this base rate is 80%.
Then you are given a description of Tom W's personality, suppose from
this description you estimate that Tom W is 4 times more likely to be
enrolled on computer science.
What is the new probability that Tom W is enrolled on computer
science.
The answer given in the book is 94.1% but I couldn't work out how to calculate it!
Another example in the book has a base rate of 3%; 4 times more likely than this is stated as 11%.
|
The only way I see to make sense of this is to divide by $4$ the probability that it does not happen. Here we obtain $20/4=5$, so the new probability is $95\%$. That said, the book's figures come from multiplying the odds instead: $80\%$ corresponds to odds of $4:1$; four times more likely gives odds of $16:1$, i.e. $16/17 \approx 94.1\%$. The same rule turns the $3\%$ base rate (odds $3:97$) into $12:97 \approx 11\%$, matching the second example.
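A quick check that the odds rule reproduces both of the book's figures (plain Python; the helper name `update` is just for illustration):

```python
def update(base_rate, likelihood_ratio):
    """Multiply the prior odds by the likelihood ratio; return the posterior probability."""
    odds = base_rate / (1 - base_rate) * likelihood_ratio
    return odds / (1 + odds)

print(update(0.80, 4))  # 0.9411... -> 94.1%
print(update(0.03, 4))  # 0.1101... -> 11%
```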
|
Evaluating $\int_0^\infty \frac{1}{x+1-u}\cdot \frac{\mathrm{d}x}{\log^2 x+\pi^2}$ using real methods. While reading the German Wikipedia page about integrals (see here), I stumbled upon this entry
$$ \int_0^\infty \frac{1}{x+1-u}\cdot \frac{\mathrm{d}x}{\log^2 x+\pi^2} =\frac{1}{u}+\frac{1}{\log(1-u)}\,, \qquad u \in (0,1) $$
(Click for the source.) There the result was proven using complex analysis. Is there any method to show the equality using real methods? Any help will be appreciated =)
|
I'm not sure about the full solution, but there is a way to find an interesting functional equation for this integral.
First, let's get rid of the silly restriction on $u$. By numerical evaluation, the integral exists for all $u \in (-\infty,1)$.
Now let's introduce the more convenient parameter:
$$v=1-u$$
$$I(v)=\int_0^{\infty} \frac{dx}{(v+x)(\pi^2+\ln^2 x)}$$
Now let's make a change of variable:
$$x=e^t$$
$$I(v)=\int_{-\infty}^{\infty} \frac{e^t dt}{(v+e^t)(\pi^2+t^2)}$$
$$I(v)=\int_{-\infty}^{\infty} \frac{(v+e^t) dt}{(v+e^t)(\pi^2+t^2)}-v \int_{-\infty}^{\infty} \frac{ dt}{(v+e^t)(\pi^2+t^2)}=1-v J(v)$$
Now let's make another change of variable:
$$t=-z$$
$$I(v)=\int_{-\infty}^{\infty} \frac{e^{-z} d(-z)}{(v+e^{-z})(\pi^2+z^2)}=\int_{-\infty}^{\infty} \frac{ dz}{(1+v e^z)(\pi^2+z^2)}=\frac{1}{v} J \left( \frac{1}{v} \right)$$
Now we get:
$$1-v J(v)=\frac{1}{v} J \left( \frac{1}{v} \right)=I(v)$$
$$v J(v)+\frac{1}{v} J \left( \frac{1}{v} \right)=1$$
$$v \in (0,\infty)$$
For example, we immediately get the correct value:
$$J(1)=I(1)=\int_0^{\infty} \frac{dx}{(1+x)(\pi^2+\ln^2 x)}=\frac{1}{2}$$
We can also check that this equation works for the known solution (which is actually valid on the whole interval $v \in (0,\infty)$, except for $v=1$).
$$I(v)=\frac{1}{1-v}+\frac{1}{\ln v}$$
$$J(v)=-\frac{1}{1-v}-\frac{1}{v \ln v}$$
$$J \left( \frac{1}{v} \right)=\frac{v}{1-v}+\frac{v}{\ln v}$$
$$1-v J(v)=\frac{1}{v} J \left( \frac{1}{v} \right)$$
Now this is not a solution of course (except for $I(1)$), but it's a big step made without any complicated integration techniques.
Basically, if we define:
$$f(v)=vJ(v)$$
$$I(v)=1-f(v)$$
We need to solve a simple functional equation:
$$I(v)+I \left( \frac{1}{v} \right)=1$$
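A numerical check of the functional equation and of the known closed form is straightforward with scipy (a sketch; the quadrature is split at $x=1$ only to help the integrator):

```python
import numpy as np
from scipy.integrate import quad

def I(v):
    # I(v) = ∫_0^∞ dx / ((v + x)(π² + ln² x))
    f = lambda x: 1.0 / ((v + x) * (np.pi**2 + np.log(x)**2))
    return quad(f, 0, 1)[0] + quad(f, 1, np.inf)[0]

for v in [0.3, 2.0, 5.0]:
    closed = 1 / (1 - v) + 1 / np.log(v)
    print(I(v) + I(1 / v), I(v), closed)  # first ≈ 1; last two agree
```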
|
Can this difference operator be factorised? If a difference operator is defined as $$LY_i=\left(-\epsilon\dfrac{D^+ -D^-}{h_1}+aD^-\right)Y_i,\quad 1\leq i\leq N$$ Suppose $Y_N$ and $Y_0$ are given and that the difference operators are defined as follows $D^+V_i=(V_{i+1}-V_i)/h_1$, $D^-V_i=(V_i-V_{i-1})/h_1$. How is it possible to write the difference operator as $$LY_i=(Y_N-Y_0)\left(-\epsilon\dfrac{D^+ -D^-}{h_1}+aD^-\right)\psi_i?$$
I am thinking that a telescoping trick is used, but I am failing to see how.
If my question is not clear could someone clarify the second $L_\epsilon^NY_i$ on page 57 of the excerpt which is attached below.
|
$-\frac{\epsilon}{h^2} (Y_{i+1}-2Y_i + Y_{i-1}) + \alpha(Y_i-Y_{i-1}) = 0 \;\;\; (1)$
$-\frac{\epsilon}{h^2} Y_{N+1} + \left ( \frac{2\epsilon}{h^2} + \alpha \right )Y_N - \left ( \frac{\epsilon}{h^2} + \alpha \right ) Y_{N-1} = 0$
$\dots$
$Y_{N} = \frac{\epsilon Y_{N+1}}{2\epsilon + \alpha h^2}+\frac{\epsilon + \alpha h^2}{2\epsilon + \alpha h^2} Y_{N-1} \equiv aY_{N+1} + (1-a)Y_{N-1}$. Unrolling the recursion down to $Y_0$ and summing the geometric series gives $Y_N = aY_{N+1}\left(1+(1-a) + (1-a)^2 + \dots+ (1-a)^N\right)+(1-a)^{N+1}Y_0 = a\,\frac{1-(1-a)^{N+1}}{a}\,Y_{N+1} +(1-a)^{N+1}Y_0 = \left(1-(1-a)^{N+1}\right)Y_{N+1}+(1-a)^{N+1}Y_0$
$Y_{N-1} = (1-(1-a)^{N})Y_{N+1}+(1-a)^{N}Y_0$
Hence, (1) can be rewritten as a function of $Y_0$ and $Y_{N+1}$. Collecting the terms by $Y_0$ and $Y_{N+1}$ and defining your $\psi$ appropriately, you should get the desired result.
Not worth the bounty but should point you in the right direction.
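Since the scheme is linear in the unknowns, every $Y_i$ must be a linear combination of the boundary values $Y_0$ and $Y_{N+1}$, which is what makes such a rewriting possible. A numpy sketch (the values of $\epsilon$, $\alpha$, $h$, $N$ are made up, purely for illustration):

```python
import numpy as np

eps, alpha, h, N = 0.1, 1.0, 0.05, 20  # hypothetical parameters

def solve(Y0, YNp1):
    # Interior equations: -eps/h^2 (Y_{i+1} - 2 Y_i + Y_{i-1}) + alpha (Y_i - Y_{i-1}) = 0
    lo = -(eps / h**2 + alpha)   # coefficient of Y_{i-1}
    di = 2 * eps / h**2 + alpha  # coefficient of Y_i
    up = -eps / h**2             # coefficient of Y_{i+1}
    A = np.zeros((N, N)); rhs = np.zeros(N)  # unknowns Y_1, ..., Y_N
    for i in range(N):
        A[i, i] = di
        if i > 0:
            A[i, i - 1] = lo
        else:
            rhs[i] -= lo * Y0    # boundary value moves to the right-hand side
        if i < N - 1:
            A[i, i + 1] = up
        else:
            rhs[i] -= up * YNp1
    return np.linalg.solve(A, rhs)

# Linearity: the solution for general boundary data is a superposition
c, d = solve(1.0, 0.0), solve(0.0, 1.0)
print(np.allclose(solve(3.0, 7.0), 3.0 * c + 7.0 * d))  # True
```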
|
Is there a simpler way to express the fraction $\frac{x}{x+y}$? Can I simplify this expression, perhaps by splitting it into two expressions, or is $\frac{x}{x+y}$ already simplified as much as possible?
|
The given expression uses two operations (one division and one addition). If we judge simplicity by the number of operations, only an expression with one operation would be simpler, but the expression equals none of $x+y$, $x-y$, $y-x$, $xy$, $\frac xy$, $\frac yx$.
|
Evaluating a 2-variable limit Could you help me evaluate this limit?
$$
\lim_{x\to 0}\frac{1}{x}\cdot\left[\arccos\left(\frac{1}{x\sqrt{x^{2}-
2x\cdot \cos(y)+1}}-\frac{1}{x}\right)-y\right]
$$
|
Notice: I changed what I think is a typo; otherwise the limit is undefined.
By the Taylor series we have (and we denote $a=\cos(y)$)
$$\frac{1}{\sqrt{x^{2}-2xa+1}}=1+xa+x^2(\frac{3}{2}a^2-\frac{1}{2})+O(x^3)$$
so
$$\frac{1}{x\sqrt{x^{2}-2xa+1}}-\frac{1}{x}=a+x(\frac{3}{2}a^2-\frac{1}{2})+O(x^2)$$
Now using
$$\arccos(a+\alpha x)=\arccos(a)-\frac{\alpha}{\sqrt{1-a^2}}x+O(x^2)$$
we have
$$\arccos(\frac{1}{x\sqrt{x^{2}-2xa+1}}-\frac{1}{x})=\arccos(a)-\frac{\frac{3}{2}a^2-\frac{1}{2}}{\sqrt{1-a^2}}x+O(x^2)$$
so if we suppose that $y\in(0,\pi)$ (so that $\arccos(\cos y)=y$), then
$$\lim_{x\to 0}\frac{1}{x}\cdot\left[\arccos\left(\frac{1}{x\sqrt{x^{2}-2x\cdot \cos(y)+1}}-\frac{1}{x}\right)-y\right]=-\frac{\frac{3}{2}a^2-\frac{1}{2}}{\sqrt{1-a^2}}$$
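A numerical sanity check of this expansion with mpmath (the test value $y=1$ is arbitrary; extra precision is used because the bracketed difference suffers cancellation):

```python
import mpmath as mp
mp.mp.dps = 30

y = mp.mpf(1)  # arbitrary test value in (0, pi)
a = mp.cos(y)
g = lambda x: (mp.acos(1 / (x * mp.sqrt(x**2 - 2*a*x + 1)) - 1/x) - y) / x

predicted = -(mp.mpf(3)/2 * a**2 - mp.mpf(1)/2) / mp.sqrt(1 - a**2)
print(g(mp.mpf('1e-6')), predicted)  # agree to roughly 6 digits
```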
|
Does $\det(A + B) = \det(A) + \det(B)$ hold? Well considering two $n \times n$ matrices does the following hold true:
$$\det(A+B) = \det(A) + \det(B)$$
Can there be said anything about $\det(A+B)$?
If $A$ and $B$ are symmetric (or maybe even of the form $\lambda I$), can anything be said then?
|
Although the determinant function is not linear in general, I have a way to construct matrices $A$ and $B$ such that $\det(A + B) = \det(A) + \det(B)$, where neither $A$ nor $B$ contains a zero entry and all three determinants are nonzero:
Suppose $A = [a_{ij}]$ and $B = [b_{ij}]$ are 2 x 2 real matrices. Then $\det(A + B) = (a_{11} + b_{11})(a_{22} + b_{22}) - (a_{12} + b_{12})(a_{21} + b_{21})$ and $\det(A) + \det(B) = (a_{11} a_{22} - a_{12} a_{21}) + (b_{11} b_{22} - b_{12} b_{21})$.
These two determinant expressions are equal if and only if
$$a_{11} b_{22} + b_{11} a_{22} - a_{12} b_{21} - b_{12} a_{21}
= \det \left[\begin{array}{cc} a_{11} & a_{12}\\ b_{21} & b_{22} \end{array}\right]
+ \det \left[\begin{array}{cc} b_{11} & b_{12}\\ a_{21} & a_{22} \end{array}\right] = 0.$$
Therefore, if we choose any nonsingular 2 x 2 matrix $ A = [a_{ij}]$ with nonzero entries and then create $B = [b_{ij}]$ such that $b_{11} = - a_{21}, b_{12} = - a_{22}, b_{21} = a_{11},$ and $b_{22} = a_{12}$, we have solved our problem. For example, if we take
$$A =
\begin{bmatrix}
1 & 2 \\
3 & 4
\end{bmatrix}
\quad
\text{and}\quad
B =
\begin{bmatrix}
-3 & -4 \\
1 & 2\end{bmatrix}
,$$
then $\det(A) = -2, \det(B) = -2, $ and $\det(A + B) = -4$, as required.
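A quick numpy check of this construction (nothing here is specific to the example; any nonsingular $A$ with nonzero entries works):

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[-A[1, 0], -A[1, 1]],
              [ A[0, 0],  A[0, 1]]])  # b11=-a21, b12=-a22, b21=a11, b22=a12

print(np.linalg.det(A), np.linalg.det(B), np.linalg.det(A + B))
# -2.0 -2.0 -4.0, so det(A+B) = det(A) + det(B) for this construction
```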
|
Three quotient-ring isomorphism questions I need some help with the following isomorphisms.
Let $R$ be a commutative ring with ideals $I,J$ such that $I \cap J = \{ 0\}$. Then
*
*$I+J \cong I \times J$
*$(I+J)/J \cong I$
*$(R/I)/\bar{J} \cong R/(I+J) \quad \text{where} \quad \bar{J}=\{x+I \in R/I: x \in J \}$
For the first item $\theta: I \times J \ \rightarrow I+J: \ (x,y) \mapsto x+y$ is clearly a surjective homomorphism. It's injective because $I \cap J = \{ 0\}$.
For the second item, the mapping $\eta \ : \ I+J \rightarrow I \ : \ x+y \mapsto x$ is well-defined by item 1, surjective, and the kernel equals $0+J$. Now we can use the first isomorphism theorem.
For the third item, I tried to define a mapping:
$$\phi: \quad R/I \ \rightarrow \ R/(I+J) \quad : \quad x + I \ \mapsto \ x+I+J $$
And I tried to show that $\bar{J}$ is the kernel, but it didn't totally feel okay because I got confused. Is the following correct?
$$ x \in \ker(\phi) \ \iff \ x+ I \in I+J \iff x+I \in \bar{J} $$
I would appreciate it if you could tell me if I made mistakes. Could you provide me a little information about the third item? It seems like a blur to me.
|
The line
$ x \in \ker(\phi) \ \iff \ x+ I \in I+J \iff x+I \in \bar{J} $
is wrong, because $x+I$ cannot in principle be an element of $I+J$: $I+J$ is an ideal, whose elements are elements of $R$, whereas $x+I$ is a coset of an ideal and hence a subset of $R$. You could write instead
$ x + I \in \ker(\phi) \iff x \in I+J \iff x+I \in (I+J)/I = \overline{J}.$
Are you familiar with the third isomorphism theorem for rings?
If $I \subseteq J$ are ideals of $R$, then $J/I$ is an ideal of $R/I$ and $(R/I)/(J/I) \cong R/J$.
Note, that in your case $\overline{J} = (I+J)/I$.
|
How to find the minimum of $3\sqrt{x^2+y^2}+5\sqrt{(x-1)^2+(y-1)^2}+\sqrt{5}(\sqrt{(x-1)^2+y^2}+\sqrt{x^2+(y-1)^2})$ Find the minimum of the following expression:
$$3\sqrt{x^2+y^2}+5\sqrt{(x-1)^2+(y-1)^2}+\sqrt{5}\left(\sqrt{(x-1)^2+y^2}+\sqrt{x^2+(y-1)^2}\right)$$
I guess this minimum is $6\sqrt{2}$, but I can't prove it. Thank you.
|
If $v_1 = (0,0), v_2 = (1,1), v_3 = (0,1)$, and $v_4 = (1,0)$ and $p = (x,y)$, then you are trying to minimize $$3|p - v_1| + 5|p - v_2| + \sqrt{5}|p - v_3| + \sqrt{5}|p - v_4|$$Note that if $p$ is on the line $y = x$, moving it perpendicularly away from the line will only increase $|p - v_1|$ and $|p - v_2|$, and it is not too hard to show it also increases $|p - v_3| + |p - v_4|$. So the minimum has to occur on the line $y = x$. So letting $p = (t,t)$ your problem becomes to minimize
$$3\sqrt{2}t + 5\sqrt{2}(1 - t) + 2\sqrt{5}\sqrt{2t^2 - 2t + 1}$$
This can be minimized through calculus... maybe there's a slick geometric way too.
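The calculus can be carried out symbolically; a sympy sketch confirming the conjectured value $6\sqrt{2}$:

```python
import sympy as sp

t = sp.symbols('t', real=True)
g = 3*sp.sqrt(2)*t + 5*sp.sqrt(2)*(1 - t) + 2*sp.sqrt(5)*sp.sqrt(2*t**2 - 2*t + 1)

print(sp.solve(sp.diff(g, t), t))                 # critical point(s); t = 3/4 is the minimizer
print(sp.simplify(g.subs(t, sp.Rational(3, 4))))  # 6*sqrt(2)
```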
|
Partition Topology I am trying to prove the following equivalence:
"Let $X$ be a set and $R$ be a partition of $X$, this is:
i) $(\forall A,B \in R, A \neq B) \colon A \cap B = \emptyset$
ii) $ \bigcup_{A \in R} A = X$
We say that a topology $\tau$ on $X$ is a partition topology iff $\tau = \tau(R)$ for some partition $R$ of $X$.
Then a topology $\tau$ is a partition topology iff every open set in $\tau$ is also a closed set."
I am trying to prove $\Leftarrow$. I have tried using Zorn's lemma to prove the existence of a kind of maximal refinement of $\tau$ so as to find the partition that could generate $\tau$, but I am getting nowhere. I would truly appreciate any help.
|
Alternative hint: $R$ consists of the closures of the one-point sets.
|
How can some statements be consistent with intuitionistic logic but not classical logic, when intuitionistic logic proves not not LEM? I've heard that some axioms, such as "all functions are continuous" or "all functions are computable", are compatible with intuitionistic type theories but not their classical equivalents. But if they aren't compatible with LEM, shouldn't that mean they prove not LEM? But not LEM means not (A or not A) which in particular implies not A - but that implies (A or not A). What's gone wrong here?
|
If $A$ is a sentence (ie has no free variables), then your reasoning is correct and in fact $\neg (A \vee \neg A)$ is not consistent with intuitionistic logic.
However, all the instances of excluded middle that are contradicted by the statements "all functions are continuous" and "all functions are computable" are for formulas of the form $A(x)$ where $x$ is a free variable. To give an explicit example, working over Heyting arithmetic (HA), let $A(n)$ be the statement that the $n$th Turing machine halts on input $n$. Then, it is consistent with HA that $\forall n \,(A(n) \vee \neg A(n))$ is false. That is, $\neg \forall n \,(A(n) \vee \neg A(n))$ is consistent with HA, and is in fact implied by $\mathsf{CT}_0$ (essentially the statement that all functions are computable). Note that even in classical logic this doesn't directly imply $\neg A(n)$, which would be equivalent to $\forall n \; \neg A(n)$. What we could do in classical logic is deduce $\exists n \; \neg (A(n) \vee \neg A(n))$ and continue as before, but this does not work in intuitionistic logic.
|
A gamma function identity I am given the impression that the following is true (at least for all positive $\lambda$; it may even be true for any complex $\lambda$)
$$ \left\lvert \frac{\Gamma(i\lambda + 1/2)}{\Gamma(i\lambda)} \right\rvert^2 = \lambda \tanh (\pi \lambda) $$
It would be great if someone can help derive this.
|
Using Euler's reflection formula
$$\Gamma(z)\Gamma(1-z)=\frac{\pi}{\sin\pi z},$$
we get (for real $\lambda$)
\begin{align}
\left|\frac{\Gamma\left(\frac12+i\lambda\right)}{\Gamma(i\lambda)}\right|^2&=
\frac{\Gamma\left(\frac12+i\lambda\right)\Gamma\left(\frac12-i\lambda\right)}{\Gamma(i\lambda)\Gamma(-i\lambda)}=\\
&=(-i\lambda)
\frac{\Gamma\left(\frac12+i\lambda\right)\Gamma\left(\frac12-i\lambda\right)}{\Gamma(i\lambda)\Gamma(1-i\lambda)}=\\
&=(-i\lambda)\frac{\pi/\sin\pi\left(\frac12-i\lambda\right)}{\pi/\sin\pi i\lambda}=\\
&=-i\lambda\frac{\sin \pi i \lambda}{\cos\pi i \lambda}=\\
&=\lambda \tanh\pi \lambda.
\end{align}
This will not hold if $\lambda$ is complex.
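For real $\lambda$ the identity is also easy to verify numerically, e.g. with mpmath (test values chosen arbitrarily):

```python
import mpmath as mp

for lam in [0.5, 1.0, 2.5]:
    lhs = abs(mp.gamma(mp.mpf('0.5') + 1j*lam) / mp.gamma(1j*lam))**2
    rhs = lam * mp.tanh(mp.pi * lam)
    print(lhs, rhs)  # agree to working precision
```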
|
Showing that the function $f(x,y)=x+y-ye^x$ is non-negative in the region $x+y\leq 1$, $x\geq 0$, $y\geq 0$ OK, since it's been so long since I took calculus, I just want to make sure I'm not doing anything wrong here.
Given $f:\mathbb{R}^2\rightarrow \mathbb{R}$ defined as $f(x,y)=x+y-ye^x$. I would like to show that the function is nonnegative in the region $x+y\leq 1, \;\;x\geq 0, \;\;y\geq 0$.
Now my game plan is as follows:
1. Show the function is non-negative on the boundary of the region
2. Show the function takes a positive value in the interior of the region
3. Show that the function has no critical points in the interior of the region
4. By continuity the function is non-negative everywhere in the region.
Is the above sufficient or am I doing something wrong? Would there be a better way to show this?
|
I'll try using Lagrange multipliers:
The function is:
$$f(x,y) = x + y - ye^x$$
and the constraints are:
$$g(x,y) = x+y \leq 1$$
$$h(x) = x \geq 0$$
$$j(y) = y \geq 0$$
So using Lagrange multipliers we now have:
$$F(x,y,\lambda,\lambda_1,\lambda_2) = x + y - ye^x - \lambda(x+y-1) - \lambda_1(x) - \lambda_2(y)$$
Now we take partial derivatives:
$$F_x = 1 - ye^x - \lambda - \lambda_1 = 0$$
$$F_y = 1 - e^x - \lambda - \lambda_2 = 0$$
$$\lambda(x+y-1) = 0$$
$$\lambda_1(x) = 0$$
$$\lambda_2(y) = 0$$
Now we have 8 cases:
1) $\lambda = \lambda_1 = \lambda_2 = 0$
This implies one solution $(x,y) = (0,1)$
2) $\lambda = \lambda_1 = y = 0$
Now in $F_x$ we have $1=0$, which is not possible, so this case doesn't give a solution.
3) $\lambda = x = \lambda_2 = 0$
Now in $F_x$ we have $y + \lambda_1 = 1$; because all $\lambda$ values are nonnegative, we get $y \leq 1$. So the solutions are $(x,y) = (0,y)$, where $0 \leq y \leq 1$.
4) $\lambda = x = y = 0$
Simply this implies one solution $(x,y) = (0,0)$
5) $x + y - 1 = \lambda_1 = \lambda_2 = 0$
This implies a solution that we've already obtained $(x,y) = (0,1)$
6) $x + y - 1 = \lambda_1 = y = 0$
This simply implies one solution $(x,y) = (1,0)$
7) $x + y - 1 = x = \lambda_2 = 0$
This simply implies one solution $(x,y) = (0,1)$
8) $x + y - 1 = x = y = 0$
This case doesn't imply any solution because it's a contradiction.
Now we have 4 distinct solutions we check them all now:
1) $(x,y) = (0,1)$
$$f(x,y) = x + y - ye^x = 1 - 1 = 0$$
2) $(x,y) = (0,y)$
$$f(x,y) = x + y - ye^x = y - y = 0$$
3) $(x,y) = (0,0)$
$$f(x,y) = x + y - ye^x = 0$$
4) $(x,y) = (1,0)$
$$f(x,y) = x + y - ye^x = 1$$
This means that $f(x,y)$ has a minimum of $0$, attained at the points $(0,y)$ with $0 \leq y \leq 1$, and a maximum of $1$ at the point $(1,0)$.
Q.E.D.
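Independently of the multiplier computation, a brute-force numerical check that $f \ge 0$ on the triangle is reassuring (a sketch; the grid resolution is arbitrary):

```python
import numpy as np

worst = np.inf
for x in np.linspace(0, 1, 401):
    y = np.linspace(0, 1 - x, 401)           # points with x + y <= 1, y >= 0
    worst = min(worst, np.min(x + y - y * np.exp(x)))
print(worst)  # 0.0, attained along the edge x = 0; never negative
```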
|
Why is $\pi r^2$ the area of a circle Why is $\pi r^2$ the area of a circle?
I learned this formula ages ago and I'm just using it like most people do, but I don't think I will truly understand how circles work until I understand why this formula works.
So I want to understand why it works and not just how.
Please don't use complicated symbols.
|
The simplest explanation is that the area of any shape has to be in units of area, that is, in units of length squared. In a circle, the only "number" describing it is the radius $r$ (with units of length), so the area must be proportional to $r^2$. So for some constant $b$,
$$A=b r^2$$
Now, to find the constant $b$, I think the easiest way is to look at the well-known Wikipedia diagram in which a disk is cut into many equal wedges and rearranged:
This shows how, when you subdivide the circle into many equal small triangles, the area becomes that of a rectangle with height $r$ and length equal to half the circumference of the circle, which is $\pi r$ by the definition of $\pi$. Hence $A = \pi r \cdot r$, so $b = \pi$.
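The same limiting idea can be checked numerically: the inscribed regular $n$-gon has area $\frac12 n r^2 \sin(2\pi/n)$, which tends to $\pi r^2$ as $n$ grows (a minimal sketch):

```python
import math

r = 1.0
for n in [6, 24, 96, 10**4]:
    print(n, 0.5 * n * r**2 * math.sin(2 * math.pi / n))
# 2.598..., 3.105..., 3.139..., 3.14159... -> converges to pi * r**2
```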
|
Primes between $n$ and $2n$ I know that there exists a prime between $n$ and $2n$ for all $2\leq n \in \mathbb{N}$. Which number is the fourth number that has just one prime in its gap? The first three numbers are $2$, $3$ and $5$. I checked with a computer up to $15000$ and couldn't find the next one. Maybe you can prove that there is no other number with this condition?
Also, when I say a number $n$ has one prime in its gap, it means that the set $X = \{x: x$ is prime and $n<x<2n\}$ has only one element.
Thanks for any help.
|
There is no other such $n$.
For instance,
In 1952, Jitsuro Nagura proved that for $n ≥ 25$, there is always a prime between $n$ and $(1 + 1/5)n$.
This immediately means that for $n \ge 25$, we have one prime between $n$ and $\frac{6}{5}n$, and another prime between $\frac{6}{5}n$ and $\frac65\frac65n = \frac{36}{25}n < 2n$. In fact, $\left(\frac{6}{5}\right)^3 < 2$ as well, so we can be sure that for $n \ge 25$, there are at least three primes between $n$ and $2n$. As you have already checked all $n$ up to $25$ (and more) and found only $2$, $3$, $5$, we can be sure that these are the only ones.
The number of primes between $n$ and $2n$ only gets larger as $n$ increases: it follows from the prime-number theorem that
$$ \lim_{n \to \infty} \frac{\pi(2n) - \pi(n)}{n/\log n} = 2 - 1 = 1,$$ so the number of primes between $n$ and $2n$, which is $\pi(2n) - \pi(n)$, is actually asymptotic to $\frac{n}{\log n}$ which gets arbitrarily large.
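These counts are easy to inspect with sympy (`primepi` is sympy's prime-counting function $\pi(x)$; the sample values of $n$ are arbitrary):

```python
from sympy import primepi

for n in [2, 3, 5, 25, 1000, 10**6]:
    # primes p with n < p <= 2n; note 2n itself is never prime for n >= 2
    print(n, primepi(2 * n) - primepi(n))
# the count is 1 exactly for n = 2, 3, 5, and grows roughly like n / log n
```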
|
prove $\sum\limits_{n\geq 1} (-1)^{n+1}\frac{H_{\lfloor n/2\rfloor}}{n^3} = \zeta^2(2)/2-\frac{7}{4}\zeta(3)\log(2)$ Prove the following
$$\sum\limits_{n\geq 1}(-1)^{n+1}\frac{H_{\lfloor n/2\rfloor}}{n^3} = \frac{1}{2}\zeta(2)^2-\frac{7}{4}\zeta(3)\log(2)$$
I was able to prove the formula above and am interested in what approach you would take.
|
The challenge is interesting, but easy if we know some classical infinite sums with harmonic numbers: http://mathworld.wolfram.com/HarmonicNumber.html
I was sure that the formula for $\sum\frac{H_{k}}{(2k+1)^3}$ was in all the mathematical handbooks, among the lists of sums of the same kind. I just realized that it is missing from the article at Wolfram referenced above. Sorry for that. Then, see: http://www.wolframalpha.com/input/?i=sum+HarmonicNumber%28n%29%2F%282n%2B1%29%5E3+from+n%3D1to+infinity
One can find in the literature some papers dealing with sums of harmonic numbers, and even more with sums of polygamma functions. The harmonic numbers are directly related to particular values of polygamma functions. So, when we are facing a problem about harmonic numbers, it is a good idea to transform it into a problem about polygamma functions.
For example, in the paper “On Some Sums of Digamma and Polygamma Functions” by Michael Milgram, one can find the methods and a lot of formulas with proofs:
http://arxiv.org/ftp/math/papers/0406/0406338.pdf
From this, one could derive a general formula for $\sum\limits_{n\geq 1}\frac{H_n}{(an+b)^p}$ with any $a, b$ and integer $p>2$. Less ambitiously, the case $a=2$, $b=1$, $p=3$ is considered below:
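Independently of that worked case, the identity in the question is easy to test numerically with mpmath. Pairing the terms $n=2k$ and $n=2k+1$ (both involve $H_k$; the $n=1$ term vanishes since $H_0=0$) gives an absolutely convergent series:

```python
import mpmath as mp
mp.mp.dps = 25

# paired terms: (-1)^{n+1} H_{floor(n/2)} / n^3 for n = 2k, 2k+1
s = mp.nsum(lambda k: mp.harmonic(k) * (1/(2*k+1)**3 - 1/(2*k)**3), [1, mp.inf])
rhs = mp.zeta(2)**2 / 2 - mp.mpf(7)/4 * mp.zeta(3) * mp.log(2)
print(s)    # ≈ -0.10520...
print(rhs)  # same value
```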
|
$O(n,\mathbb R)$ of all orthogonal matrices is a closed subset of $M(n,\mathbb R).$
Let $M(n,\mathbb R)$ be endowed with the norm $(a_{ij})_{n\times n}\mapsto\sqrt{\sum_{i,j}|a_{ij}|^2}.$ Then the set $O(n,\mathbb R)$ of all orthogonal matrices is a closed subset of $M(n,\mathbb R).$
My Attempt: Let $f:M(n,\mathbb R)\to M(n,\mathbb R):A\mapsto AA^t.$ Choose a sequence $\{A_k=(a^k_{ij})\}\subset M(n,\mathbb R)$ such that $A_k\to A=(a_{ij})$ for chosen $A\in M(n,\mathbb R).$ Then $\forall~i,j,$ $a_{ij}^k\to a_{ij}$ in $\mathbb R.$
Now $A_kA_k^t=(\sum_{p=1}^n a_{ip}^ka_{jp}^k)~\forall~k\in\mathbb Z^+.$
Choose $i,j\in\{1,2,...,n\}.$ Then for $p=1,2,...,n;~a_{ip}^k\to a_{ip},~a_{jp}^k\to a_{jp}$ in $\mathbb R\implies \sum_{p=1}^n a_{ip}^ka_{jp}^k\to \sum_{p=1}^n a_{ip}a_{jp}$ in $\mathbb R.$
So $(\sum_{p=1}^n a_{ip}^ka_{jp}^k)\to (\sum_{p=1}^n a_{ip}a_{jp})\implies A_kA_k^t\to AA^t.$
So $f$ is continuous on $M(n,\mathbb R).$
Now $O(n,\mathbb R)=f^{-1}(\{I\}).$ The singleton set $\{I\}$ being closed in $M(n,\mathbb R),$ $O(n,\mathbb R)$ is closed in $M(n,\mathbb R).$
I'm not absolutely sure about the steps. Is it a correct attempt?
|
It would be quicker to observe that $f$ is a vector of polynomials in the natural coordinates, and polynomials are continuous, so $f$ is continuous.
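The closedness itself is easy to visualize numerically: a convergent sequence of orthogonal matrices has an orthogonal limit (a sketch with $2\times 2$ rotation matrices, chosen for simplicity):

```python
import numpy as np

def rot(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

A = rot(1.0)                     # limit of the sequence below
for k in [1, 10, 100, 1000]:
    Ak = rot(1.0 + 1.0 / k)      # orthogonal for every k
    print(np.linalg.norm(Ak - A), np.linalg.norm(A @ A.T - np.eye(2)))
# distances -> 0 while the limit satisfies A A^T = I
```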
|
The origin of $\pi$ How was $\pi$ originally found?
Was it originally found using the ratio of the circumference to the diameter of a circle, or was it found using trigonometric functions?
I am trying to find a way to find the area of the circle without using $\pi$ at all but it seems impossible, or is it?
If i integrate the circle I get:
$$4\int_{0}^{1}\sqrt{1-x^{2}}dx=4\left [ \frac{\sin^{-1} x}{2}+\frac{x\sqrt{1-x^{2}}}{2} \right ]_{0}^{1}=\pi $$
But why does $\sin^{-1} 1=\frac{\pi }{2}$?
Is it at all possible to find the exact area of the circle without using $\pi$?
|
To answer "Is it at all possible to find the exact area of the circle without using $\pi$?": yes, in terms of the circumference, since $A=CR/2$.
As for "How was $\pi$ originally found?": perhaps Pythagoras and Euclid, with $a^2+b^2=c^2$, found the areas of squares; then Archimedes proved that $3+\frac{10}{71}<\pi<3+\frac{1}{7}$.
|
Abelian Groups and Number Theory What is the connection between "Finite Abelian Groups" and "Chinese Remainder Theorem"?
(I have not seen the "abstract theory" behind Chinese Remainder Theorem and also its proof. On the other hand, I know abstract group theory and classification of finite abelian groups. Please, give also a motivation to study "Chinese Remainder Theorem from "Group Theory point of view".)
|
Let $m$ and $n$ be coprime, and let $a$ and $b$ be any integers. According to the Chinese remainder theorem, there exists a unique solution modulo $mn$ to the pair of equations
$$x \equiv a \mod{m}$$
$$x \equiv b \mod{n}$$
Now the map $(a,b) \mapsto x$ is an isomorphism of rings from $\mathbb{Z}/m\mathbb{Z} \oplus \mathbb{Z}/n\mathbb{Z}$ to $\mathbb{Z}/mn\mathbb{Z}$.
Conversely, if we are given such a ring isomorphism $\phi: \mathbb{Z}/m\mathbb{Z} \oplus \mathbb{Z}/n\mathbb{Z} \rightarrow \mathbb{Z}/mn\mathbb{Z}$ (with inverse $x \mapsto (x \bmod m, x \bmod n)$), then $x = \phi(a,b)$ is a solution to the pair of equations, since applying the inverse gives $(x \bmod m, x \bmod n) = (a,b)$.
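A tiny sketch illustrating the isomorphism for concrete coprime moduli (here $m=4$, $n=9$, chosen arbitrarily):

```python
m, n = 4, 9  # coprime

# the map x mod mn -> (x mod m, x mod n) is a bijection ...
pairs = {(x % m, x % n) for x in range(m * n)}
print(len(pairs) == m * n)  # True

# ... and a ring homomorphism: it respects + and * componentwise
x, y = 7, 25
assert ((x + y) % m, (x + y) % n) == ((x % m + y % m) % m, (x % n + y % n) % n)
assert ((x * y) % m, (x * y) % n) == ((x % m * (y % m)) % m, (x % n * (y % n)) % n)
print("checks passed")
```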
|
singleton null vector set linearly dependent, but other singletons are linearly independent sets Why is the set $\{\theta_v\}$, where $\theta_v$ is the null vector of a vector space, a dependent set, intuitively (what is the source of the dependence), while singleton sets of non-null vectors are independent? (By the way, I know how to show it mathematically, but I don't understand the intuition behind this.)
Does it really make sense to think of independence of a singleton set?
|
The intuition is the following: the null vector only spans a zero-dimensional space, whereas any other vector spans a one-dimensional space. This is captured by the following thought:
A set of vectors $\{ \bar v_1, \bar v_2, ..., \bar v_n\}$ is linearly independent iff $span(\{ \bar v_1, ..., \bar v_n\})$ is not spanned by a proper subset of $\{ \bar v_1, \bar v_2, ..., \bar v_n\}$. Now, the space spanned by $\{ \bar o\}$ is already spanned by a proper subset of $\{ \bar o\}$ namely $\emptyset$. For a non-zero $\bar v, span(\bar v)$ is not spanned by any proper subset of $\{ \bar v \}$.
Edit: Definition: Let $S=\{ \bar v_i:i\in I\}$ be a subset of a vector space $V$, then $$span(S):= \{ \sum_{i \in I}c_i\bar v_i: \bar v_i \in S, c_i \in \Bbb F, c_i=0 \mbox { for almost all } i \}.$$ Taking $I=\emptyset$, i.e. $S=\emptyset$, we get $$span(\emptyset )= \{ \sum_{i \in \varnothing }c_i\bar v_i \} = \{ \bar o \},$$ because by definition the empty sum equals $\bar o$ (just like in arithmetic, where the empty sum equals $0$ and the empty product equals $1$, or in set theory, where the empty union equals $\emptyset$).
|
Integral $ \lim_{n\rightarrow\infty}\sqrt{n}\int\limits_0^1 \frac {f(x)dx}{1 + nx^2} = \frac{\pi}{2}f(0) $
Show that for $ f(x) $ a continuous function on $ [0,1] $ we have
\begin{equation}
\lim_{n\rightarrow\infty}\sqrt{n}\int\limits_0^1 \frac {f(x)dx}{1 + nx^2} = \frac{\pi}{2}f(0)
\end{equation}
It is obvious that
\begin{equation}
\sqrt{n}\int\limits_0^1 \frac {f(x)dx}{1 + nx^2} = \int\limits_0^1 f(x) d [\arctan(\sqrt{n}x)]
\end{equation}
and for any $ x \in (0, 1] $
\begin{equation}
\lim_{n\rightarrow\infty}{\arctan(\sqrt{n}x)} = \frac{\pi}{2},
\end{equation}
so the initial statement looks very reasonable. But we can't even integrate by parts because $ f(x) $ is in general non-smooth! Can anybody help please?
|
Hint: Make the change of variables $ y=\sqrt{n}x .$
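A numerical illustration of the statement with scipy, using the test function $f(x)=\cos x$ (so $f(0)=1$; the choice of $f$ and of the sample values of $n$ is only illustrative):

```python
import numpy as np
from scipy.integrate import quad

f = np.cos
for n in [10**2, 10**4, 10**6]:
    val, _ = quad(lambda x: f(x) / (1 + n * x**2), 0, 1, limit=200)
    print(n, np.sqrt(n) * val)  # -> pi/2 * f(0) ≈ 1.5707963
```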
|