Expectation and median (Jensen’s inequality) of spatial functions Let $f:S^n \to \mathbb{R}$ be a 1-Lipschitz function, where $S^n$ is equipped with the geodesic distance $d$ and with the uniform measure $\mu$.
How can I show that such an $f$ satisfies Jensen’s inequality:
$(\int_{S^n} f d\mu)^2 \leq {\int_{S^n} f^2 d\mu}$?
In addition, is it true that in such a case we have $\sqrt{\int_{S^n} f^2 d\mu} \leq m$ where $m$ is the unique number satisfying $\mu(f \geq m) \geq 0.5, \space \mu(f \leq m) \geq 0.5$?
| The first inequality is called the Cauchy–Schwarz inequality, rather than Jensen's inequality. Its proof is simple and very general: one considers $g=(f-a)^2$ with $a=\int\limits_{S^n}f\,\mathrm d\mu$. Then $g\geqslant0$ everywhere, hence $\int\limits_{S^n}g\,\mathrm d\mu\geqslant0$. Expanding this by linearity and using the fact that the mass of $\mu$ is $1$ yields the result.
The second inequality you suggest is odd. If $f(x)=x_1-1$, then $m=-1$, which ruins every chance to get a nonnegative quantity $\leqslant m$. More generally, $ \sqrt{ \int\limits_{S^n} f^2\,\mathrm d\mu } \geqslant \int\limits_{S^n} f\,\mathrm d\mu$ and, as soon as $f$ is symmetric around one of its medians $m$, the RHS is $m$. To sum up, no comparison can exist, and if one existed, it would be the opposite of the one you suggest.
|
orientability of Riemann surfaces Could anyone tell me about the orientability of Riemann surfaces?
Holomorphic maps between two open sets of the complex plane preserve the orientation of the plane; I mean, the conformal property of a holomorphic map implies that local angles are preserved, and in particular right angles are preserved, so the local notions of "clockwise" and "anticlockwise" for small circles are preserved. Can we transfer this idea to abstract Riemann surfaces?
Thank you for your comments.
| Holomorphic maps not only preserve nonoriented angles but oriented angles as well. Note that the map $z\mapsto\bar z$ preserves nonoriented angles, in particular nonoriented right angles, but it does not preserve the clockwise or anticlockwise orientation of small circles.
Now a Riemann surface $S$ is covered by local coordinate patches $(U_\alpha, z_\alpha)_{\alpha\in I}\ $, and the local coordinate functions $z_\alpha$ are related in the intersections $U_\alpha\cap U_\beta$ by means of holomorphic transition functions $\phi_{\alpha\beta}$: One has $z_\alpha=\phi_{\alpha\beta}\circ z_\beta$ where the Jacobian $|\phi'_{\alpha\beta}(z)|^2$ is everywhere $>0$. It follows that the positive ("anticlockwise") orientation of infinitesimal circles on $S$ is well defined throughout $S$. In all, a Riemann surface $S$ is oriented to begin with.
For a better understanding consider a Moebius band $M:=\{z\in {\mathbb C}\ | \ -1<{\rm Im}\, z <1\}/\sim\ ,$ where points $x+iy$ and $x+1-iy$ $\ (x\in{\mathbb R})$ are identified. In this case it is not possible to choose coordinate patches $(U_\alpha, z_\alpha)$ covering all of $M$ such that the transition functions are holomorphic. As a consequence this $M$ is not a Riemann surface, even though nonoriented angles make sense on $M$.
|
Decomposition of a prime number in a cyclotomic field Let $l$ be an odd prime number and $\zeta$ be a primitive $l$-th root of unity in $\mathbb{C}$.
Let $A$ be the ring of algebraic integers in $\mathbb{Q}(\zeta)$.
Let $p$ be a prime number such that $p \neq l$.
Let $f$ be the smallest positive integer such that $p^f \equiv 1$ (mod $l$).
Then $pA = P_1\cdots P_r$, where the $P_i$ are distinct prime ideals of $A$, each $P_i$ has residue degree $f$, and $r = (l - 1)/f$.
My question: How would you prove this?
This is a related question.
| Once you know what the ring of integers in $\mathbf{Q}(\zeta)$ is, you know that the factorization of a rational prime $p$ therein is determined by the factorization of the minimal polynomial of $\zeta$ over $\mathbf{Q}$, which is the cyclotomic polynomial $\Phi_\ell$, mod $p$. So you basically just need to determine the degree over $\mathbf{F}_p$ of a splitting field of the image of $\Phi_\ell$ in $\mathbf{F}_p[X]$. The degree is the $f$ in your question. This can be determined using the Galois theory of finite fields, mainly the fact that the Galois group is cyclic with a canonical generator. The details are carried out in many books, e.g., Neukirch's book on algebraic number theory.
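Not part of the proof, but the numbers $f$ and $r$ are easy to explore: $f$ is the multiplicative order of $p$ modulo $l$, and $r=(l-1)/f$. A small pure-Python sketch (the function name `splitting_data` is mine):

```python
def splitting_data(p, l):
    """Return (f, r): f = multiplicative order of p in (Z/lZ)*, r = (l-1)//f."""
    f, power = 1, p % l
    while power != 1:
        power = (power * p) % l
        f += 1
    return f, (l - 1) // f

# Examples for l = 7: 2 has order 3 mod 7, so pA splits into r = 2 primes
# of residue degree 3; 13 = 6 mod 7 has order 2, giving r = 3.
print(splitting_data(2, 7))   # (3, 2)
print(splitting_data(13, 7))  # (2, 3)
```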
|
The number $\frac{1}{\sqrt{5}}\left[\left(\frac{1+\sqrt{5}}{2}\right)^{n}-\left(\frac{1-\sqrt{5}}{2}\right)^n\right]$ is always an integer For each $n$ consider the expression $$\frac{1}{\sqrt{5}}\left[\left(\frac{1+\sqrt{5}}{2}\right)^{n}-\left(\frac{1-\sqrt{5}}{2}\right)^n\right]$$
I am trying to prove by induction that this is an integer for all $n$.
In the base case $n=1$, it ends up being $1$.
I am trying to prove the induction step:
* if $\frac{1}{\sqrt{5}}\left[\left(\frac{1+\sqrt{5}}{2}\right)^{n}-\left(\frac{1-\sqrt{5}}{2}\right)^n\right]$ is an integer, then so is
$\frac{1}{\sqrt{5}}\left[\left(\frac{1+\sqrt{5}}{2}\right)^{n+1}-\left(\frac{1-\sqrt{5}}{2}\right)^{n+1}\right]$.
I have tried expanding it, but didn't get anywhere.
| Hint $\rm\quad \phi^{\:n+1}\!-\:\bar\phi^{\:n+1} =\ (\phi+\bar\phi)\ (\phi^n-\:\bar\phi^n)\ -\ \phi\:\bar\phi\,\ (\phi^{\:n-1}\!-\:\bar\phi^{\:n-1})$
Therefore, upon substituting $\rm\ \phi+\bar\phi\ =\ 1\ =\, -\phi\bar\phi\ $ and dividing by $\:\phi-\bar\phi = \sqrt 5\:$ we deduce that $\rm\:f_{n+1} = f_n + f_{n-1}.\:$ Since $\rm\:f_0,f_1\:$ are integers, all $\rm\,f_n\:$ are integers by induction, using the recurrence.
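The recurrence in the hint can be checked numerically: evaluating Binet's expression in floating point and rounding reproduces the integer Fibonacci sequence. A minimal sketch (the helper names are mine):

```python
# Verify numerically that the Binet-style expression matches the integer
# recurrence f_{n+1} = f_n + f_{n-1} with f_0 = 0, f_1 = 1.
from math import sqrt

def binet(n):
    phi, phibar = (1 + sqrt(5)) / 2, (1 - sqrt(5)) / 2
    return (phi**n - phibar**n) / sqrt(5)

fib = [0, 1]
for _ in range(2, 20):
    fib.append(fib[-1] + fib[-2])

for n in range(20):
    assert round(binet(n)) == fib[n]  # agrees with the integer sequence
print(fib[:8])  # [0, 1, 1, 2, 3, 5, 8, 13]
```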
|
Area of Triangle inside another Triangle I am stumped on the following question:
In the figure below $AD=4$ , $AB=3$ , and $CD=9$. What is the area of Triangle AEC?
I need to solve this without using trigonometric ratios; however, if trig ratios make this problem a lot easier to solve, I would be interested in seeing how that would work.
Currently I am attempting to solve this in the following way
$\bigtriangleup ACD_{Area}=\bigtriangleup ECD_{Area} + \bigtriangleup ACE_{Area}$
The only problem with this approach is finding the length of $AE$ or $ED$. How would I do that? Any suggestions, or is there a better method?
| $AE+ED=AD=4$, and $ECD$ and $EAB$ are similar (right?), so $AE$ is to $AB$ as $ED$ is to $CD$. Can you take it from there?
|
For Maths Major, advice for perfect book to learn Algorithms and Data Structures Purpose: self-learning, NOT for a course or exam
Prerequisite: I have done a course in basic data structures and algorithms, but it was too basic and did not cover much.
Major: Bachelor, Mathematics
My opinion: I prefer a more compact, mathematical, rigorous book on algorithms and data structures. Since it's for long-term self-learning, not for an exam or course, related factors (e.g. learning curve, time required) should not be considered; the choice should be based only on how well the book trains algorithms and data structures.
After searching at Amazon, following three are somehow highly reputed.
(1)Knuth, The Art of Computer Programming, Volume 1-4A
This book-set contains most important things in Algorithms, and very mathematically rigorous. I prefer to use this to learn algorithms all in one step.
(2)Cormen, Introduction to Algorithms
It's more like in-between (1) and (3).
(3)Skiena, The Algorithm Design Manual
More introductory and practical compared with (1). Is it OK to use this as a warm-up, then read (1) and skip (2)?
Desirable answer: Advice or more recommended books
| Knuth is an interesting case: it's certainly fairly rigorous, but I would never recommend it as a text to learn from. It's one of those books that is great once you already know the answer, but fairly terrible if you want to learn something. On top of this, the content is now rather archaically presented and the pseudocode is not the most transparent.
Of course this is my experience of it (background CS & maths), but I have encountered this rough assessment a few times.
|
Methods to solve differential equations We are given the equation
$$\frac{1}{f(x)} \cdot \frac{d\left(f(x)\right)}{dx} = x^3.$$
To solve it, "multiply by $dx$" and integrate:
$\frac{x^4}{4} + C = \ln \left( f(x) \right)$
But $dx$ is not a number, what does it mean when I multiply by $dx$, what am I doing, why does it work, and how can I solve it without multiplying by $dx$?
Second question:
Suppose we have the equation $$\frac{d^2f(x)}{dx^2}=(x^2-1)f(x)$$
Then for large $x$, we have $\frac{d^2f(x)}{dx^2}\approx x^2f(x)$, with the approximate solution $k \exp \left (\frac{x^2}{2} \right)$.
Why is it then reasonable to suspect, or assume, that the solution to the original equation will be of the form $f(x)=e^{x^2/2} \cdot g(x)$, where $g(x)$ has a simpler form than $f(x)$? When does it not work?
Third question:
The method of replacing all occurrences of $f(x)$ and its derivatives by a power series $\sum a_nx^n$: for which equations does this work or lead to a simpler equation?
Do we lose any solutions this way?
| FIRST QUESTION
This is answered by studying the fundamental theorem of calculus, which basically (in this context) says that if on an interval $(a,b)$
\begin{equation}
\frac{dF}{dx} = f
\end{equation}
then,
\begin{equation}
\int^{b}_{a} f\left(x\right) dx = F(b) - F(a)
\end{equation}
where the integral is the limit of the Riemann sum. And then, you can specify anti-derivatives (indefinite integrals) as $F\left(x\right) + c$ by fixing an arbitrary $F\left(a\right) = c$ and considering the function
\begin{equation}
F\left(x\right) = \int^{x}_{a} f\left(x\right) dx
\end{equation}
where we do not specify the limits as they are understood to exist and are arbitrary. It is from here actually, that we get that multiplying by $dx$ rule.
For,
\begin{equation}
F = \int dF
\end{equation}
and hence
\begin{equation}
dF = f(x)dx \implies F = \int f(x) dx
\end{equation}
This is a little hand-waving, and for actual proofs you would have to study Riemann sums.
Now, after this, for specific examples such as yours, I guess Andre Nicolas's answer is better but still I will try to offer something similar.
We can say: let $g\left(x\right) = \int x^{3} dx$ and $h\left(x\right) = \log\left|f\left(x\right)\right|$. Then,
\begin{equation}
\frac{dh\left(x\right)}{dx} = \frac{dg\left(x\right)}{dx}
\end{equation}
for all $x \in \left(a,b\right)$
and hence, we can say that
\begin{equation}
h\left(x\right)= g\left(x\right) + C
\end{equation}
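For the original example, the conclusion is that $f(x)=e^{x^4/4+C}$ solves $f'/f=x^3$. This can be sanity-checked numerically with a finite-difference derivative; a minimal sketch (the constant $C$ below is an arbitrary choice of mine):

```python
# Numerically confirm that f(x) = exp(x^4/4 + C) satisfies f'(x)/f(x) = x^3,
# using a central-difference approximation to the derivative.
from math import exp

def f(x, C=0.3):            # C is an arbitrary illustrative constant
    return exp(x**4 / 4 + C)

h = 1e-6
for x in [0.5, 1.0, 1.5]:
    deriv = (f(x + h) - f(x - h)) / (2 * h)   # central difference
    assert abs(deriv / f(x) - x**3) < 1e-5
print("f'/f matches x^3 at the sampled points")
```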
|
Decomposition of a prime number $p \neq l$ in the quadratic subfield of a cyclotomic number field of an odd prime order $l$ Let $l$ be an odd prime number and $\zeta$ be a primitive $l$-th root of unity in $\mathbb{C}$.
Let $K = \mathbb{Q}(\zeta)$.
Let $A$ be the ring of algebraic integers in $K$.
Let $G$ be the Galois group of $\mathbb{Q}(\zeta)/\mathbb{Q}$.
$G$ is isomorphic to $(\mathbb{Z}/l\mathbb{Z})^*$.
Hence $G$ is a cyclic group of order $l - 1$.
Let $f = (l - 1)/2$.
There exists a unique subgroup $G_f$ of $G$ whose order is $f$.
Let $K_f$ be the fixed subfield of $K$ by $G_f$.
$K_f$ is a unique quadratic subfield of $K$.
Let $A_f$ be the ring of algebraic integers in $K_f$.
Let $p$ be a prime number such that $p \neq l$.
Let $pA_f = P_1\cdots P_r$, where $P_1, \dots, P_r$ are distinct prime ideals of $A_f$.
Since $p^{l - 1} \equiv 1$ (mod $l$), $p^f = p^{(l - 1)/2} \equiv \pm$1 (mod $l$).
My question: Is the following proposition true? If yes, how would you prove this?
Proposition
(1) If $p^{(l - 1)/2} \equiv 1$ (mod $l$), then $r = 2$.
(2) If $p^{(l - 1)/2} \equiv -1$ (mod $l$), then $r = 1$.
This is a related question.
| This is most easily established by decomposition group calculations; it is a special case of the more general result proved here (which is an answer to OP's linked question).
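The dichotomy in the proposition is exactly Euler's criterion: $p^{(l-1)/2} \equiv 1 \pmod l$ if and only if $p$ is a square mod $l$. A small brute-force check of that equivalence (helper names are mine):

```python
# Euler's criterion: p^((l-1)/2) mod l is 1 exactly when p is a
# quadratic residue mod l (for l an odd prime not dividing p).
def euler_criterion(p, l):
    s = pow(p, (l - 1) // 2, l)   # s is 1 or l-1 when l does not divide p
    return 1 if s == 1 else -1

def is_qr(p, l):
    return any(pow(x, 2, l) == p % l for x in range(1, l))

for l in [5, 7, 11, 13]:
    for p in [2, 3, 7, 19, 23]:
        if p % l:
            assert (euler_criterion(p, l) == 1) == is_qr(p, l)
print("Euler's criterion verified on small cases")
```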
|
Does $z ^k+ z^{-k}$ belong to $\Bbb Z[z + z^{-1}]$? Let $z$ be a non-zero element of $\mathbb{C}$.
Does $z^k + z^{-k}$ belong to $\mathbb{Z}[z + z^{-1}]$ for every positive integer $k$?
Motivation:
I came up with this problem from the following question.
Maximal real subfield of $\mathbb{Q}(\zeta )$
| Yes. It holds for $k=1$; it also holds for $k=2$, since
$$z^2+z^{-2} = (z+z^{-1})^2 - 2\in\mathbb{Z}[z+z^{-1}].$$
Assume that $z^k+z^{-k}$ lies in $\mathbb{Z}[z+z^{-1}]$ for $1\leq k\lt n$. Then, if $n$ is odd, we have:
$$\begin{align*}
z^n+z^{-n} &= (z+z^{-1})^n - \sum_{i=1}^{\lfloor n/2\rfloor}\binom{n}{i}(z^{n-i}z^{-i} + z^{i-n}z^{i})\\
&= (z+z^{-1})^n - \sum_{i=1}^{\lfloor n/2\rfloor}\binom{n}{i}(z^{n-2i}+z^{2i-n}).
\end{align*}$$
and if $n$ is even, we have:
$$\begin{align*}
z^n+z^{-n} &= (z+z^{-1})^n - \binom{n}{n/2} - \sum_{i=1}^{(n/2)-1}\binom{n}{i}(z^{n-i}z^{-i} + z^{i-n}z^{i})\\
&= (z+z^{-1})^n - \binom{n}{n/2} - \sum_{i=1}^{(n/2)-1}\binom{n}{i}(z^{n-2i}+z^{2i-n}).
\end{align*}$$
If $1\leq i\leq \lfloor \frac{n}{2}\rfloor$, then $0\leq n-2i \lt n$, so $z^{n-2i}+z^{2i-n}$ lies in $\mathbb{Z}[z+z^{-1}]$ by the induction hypothesis (and for $n-2i=0$ the term is simply $2$). Thus, $z^n+z^{-n}$ is a sum of terms in $\mathbb{Z}[z+z^{-1}]$.
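An equivalent route (not the answer's binomial expansion, but the same induction) is the three-term identity $z^n+z^{-n} = (z+z^{-1})(z^{n-1}+z^{1-n}) - (z^{n-2}+z^{2-n})$, which builds the integer polynomials explicitly. A sketch (function name is mine):

```python
# Build integer-coefficient polynomials p_n with z^n + z^{-n} = p_n(z + 1/z),
# via p_n(t) = t * p_{n-1}(t) - p_{n-2}(t), p_0 = 2, p_1 = t.
def pairwise_poly(n):
    """Coefficients of p_n, lowest degree first."""
    if n == 0:
        return [2]
    if n == 1:
        return [0, 1]
    pm1, pm2 = pairwise_poly(n - 1), pairwise_poly(n - 2)
    shifted = [0] + pm1                       # multiply p_{n-1} by t
    padded = pm2 + [0] * (len(shifted) - len(pm2))
    return [a - b for a, b in zip(shifted, padded)]

# Numerical check at an arbitrary nonzero complex number.
z = 0.7 + 0.2j
t = z + 1 / z
for n in range(8):
    value = sum(c * t**k for k, c in enumerate(pairwise_poly(n)))
    assert abs(value - (z**n + z**-n)) < 1e-9
print(pairwise_poly(3))  # [0, -3, 0, 1], i.e. t^3 - 3t
```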
|
$\mathbb{CP}^1$ is compact? $\mathbb{CP}^1$ is the set of all one-dimensional subspaces of $\mathbb{C}^2$; if $(z,w)\in \mathbb{C}^2$ is nonzero, then its span is a point in $\mathbb{CP}^1$. Let $U_0=\{[z:w]:z\neq 0\}$ and $U_1=\{[z:w]:w\neq 0\}$, where for nonzero $(z,w)\in \mathbb{C}^2$ the point $[z:w]=[\lambda z:\lambda w]$, $\lambda\in\mathbb{C}^{*}$, is a point in $\mathbb{CP}^1$. The map $\phi_0:U_0\rightarrow\mathbb{C}$ is defined by $$\phi_0([z:w])=w/z$$
and the map $\phi_1:U_1\rightarrow\mathbb{C}$ is defined by $$\phi_1([z:w])=z/w$$
Now, could anyone tell me why $\mathbb{CP}^1$ is the union of the two closed sets $\phi_0^{-1}(D)$ and $\phi_1^{-1}(D)$, where $D$ is the closed unit disk in $\mathbb{C}$, and why $\mathbb{CP}^1$ is compact?
| The complex projective line is the union of the two open subsets
$$
D_i=\{[z_0:z_1]\text{ such that }z_i\neq0\},\quad i=0, 1.
$$
Each of them is homeomorphic to $\Bbb C$, hence to the open unit disk.
Compactness follows from the fact that it is homeomorphic to the sphere $S^2$. You can see this as a simple elaboration of the observation that
$$
\Bbb P^1(\Bbb C)=D_0\cup[0:1].
$$
|
Question about $p$-adic numbers and $p$-adic integers I've been trying to understand what $p$-adic numbers and $p$-adic integers are today. Can you tell me if I have it right? Thanks.
Let $p$ be a prime. Then we define the ring of $p$-adic integers to be
$$ \mathbb Z_p = \{ \sum_{k=m}^\infty a_k p^k \mid m \in \mathbb Z, a_k \in \{0, \dots, p-1\} \} $$
That is, the $p$-adic integers are a bit like formal power series with the indeterminate $x$ replaced with $p$ and coefficients in $\mathbb Z / p \mathbb Z$. So for example, a $3$-adic integer could look like this: $1\cdot 1 + 2 \cdot 3 + 1 \cdot 9 = 16$ or $\frac{1}{9} + 1 $ and so on. Basically, we get all natural numbers, fractions of powers of $p$, and sums of those two.
This is a ring (just like formal power series). Now we want to turn it into a field. To this end we take the field of fractions with elements of the form
$$ \frac{\sum_{k=m}^\infty a_k p^k}{\sum_{k=r}^\infty b_k p^k}$$
for $\sum_{k=r}^\infty b_k p^k \neq 0$. We denote this field by $\mathbb Q_p$.
Now as it turns out, $\mathbb Q_p$ is the same as what we get if we take the ring of fractions of $\mathbb Z_p$ for the set $S=\{p^k \mid k \in \mathbb Z \}$. This I don't see. Because then this would mean that every number $$ \frac{\sum_{k=m}^\infty a_k p^k}{\sum_{k=r}^\infty b_k p^k}$$ can also be written as $$ \frac{\sum_{k=m}^\infty a_k p^k}{p^r}$$
and I somehow don't believe that. So where's my mistake? Thanks for your help.
| I want to emphasize that $\mathbb Z_p$ is not just $\mathbb F_p[[X]]$ in disguise, though the two rings share many properties. For example, in the $3$-adics one has
\[
(2 \cdot 1) + (2 \cdot 1) = 1 \cdot 1 + 1 \cdot 3 \neq 1 \cdot 1.
\]
I know three ways of constructing $\mathbb Z_p$ and they're all pretty useful. It sounds like you might enjoy the following description:
\[
\mathbb Z_p = \mathbb Z[[X]]/(X - p).
\]
This makes it clear that you can add and multiply elements of $\mathbb Z_p$ just like power series with coefficients in $\mathbb Z$. The twist is that you can always exchange $pX^n$ for $X^{n + 1}$. This is the “carrying over” that Thomas mentions in his helpful series of comments.
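The carrying can be made concrete: representing a $p$-adic integer as its digit list $(a_0, a_1, \dots)$, addition works like ordinary base-$p$ arithmetic, unlike $\mathbb F_p[[X]]$ where coefficients add mod $p$ with no carry. A minimal sketch (the digit-list representation and function name are my choices):

```python
# p-adic integers as digit lists (a_0, a_1, ...): digit sums >= p carry
# into the next position, unlike coefficient-wise addition in F_p[[X]].
def padic_add(a, b, p, digits=8):
    """Add two p-adic integers given by their first few digits."""
    out, carry = [], 0
    for i in range(digits):
        s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        out.append(s % p)
        carry = s // p
    return out

# In Z_3:  2 + 2 = 4 = 1 + 1*3, so digit lists [2] + [2] give [1, 1, 0, ...].
print(padic_add([2], [2], 3, digits=4))  # [1, 1, 0, 0]
# In F_3[[X]] the constant coefficient would instead be 2 + 2 = 1 (mod 3),
# with no carry into the next coefficient.
```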
|
A Trivial Question - Increasing by doubling a number when its negative
The question is: if $x=y-\frac{50}{y}$, where $x$ and $y$ are both $>0$, and the value of $y$ is doubled in the equation above, then the value of $x$ will a) decrease b) remain the same c) increase four-fold d) double e) increase to more than double. (Answer: e)
Here is how I am solving it: $x = \frac{y^2-50}{y}$. By putting $y=1$, I get $x=-49$, and by putting $y=2$ (doubling) I get $x=-24$. Now how is $-24$ more than double of $-49$? Correct me if I am wrong, but a double increment of $-49$ would be $-49+49 = 0$, which would be greater than $-24$.
| Let $x_1=y-\frac{50}{y}$ and $x_2=2y-\frac{50}{2y}$. Then $x_2-x_1=(2y-y)-(\frac{50}{2y}-\frac{50}{y})=y+\frac{50}{2y}=y-\frac{50}{y}+(\frac{150}{2y})=x_1+(\frac{150}{2y})$
$\implies x_2-2x_1=\frac{150}{2y}\gt 0$ (as $y\gt 0$) $\implies x_2\gt 2x_1$; since $x_1=x\gt 0$, dividing gives $x_2/x_1\gt 2$, which is option (e).
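The algebra above can be spot-checked numerically for a few admissible values of $y$ (admissible meaning $x>0$, i.e. $y>\sqrt{50}$):

```python
# Sanity check: for y with x = y - 50/y > 0, doubling y more than doubles x.
def x_of(y):
    return y - 50 / y

for y in [8, 10, 25, 100]:          # chosen so that x_of(y) > 0
    x1, x2 = x_of(y), x_of(2 * y)
    assert x1 > 0 and x2 > 2 * x1   # option (e): more than double
print("x more than doubles in every case")
```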
|
optimality of 2 as a coefficient in a continued fraction theorem I'm giving some lectures on continued fractions to high school and college students, and I discussed the standard theorem that, for a real number $\alpha$ and integers $p$ and $q$ with $q \not= 0$, if $|\alpha-p/q| < 1/(2q^2)$ then $p/q$ is a convergent in the continued fraction expansion of $\alpha$. Someone in the audience asked if 2 is optimal: is there a positive number $c < 2$ such that, for every $\alpha$ (well, of course the case of real interest is irrational $\alpha$), when $|\alpha - p/q| < 1/(cq^2)$ it is guaranteed that $p/q$ is a convergent to the continued fraction expansion of $\alpha$?
Please note this is not answered by the theorem of Hurwitz, which says that an irrational $\alpha$ has $|\alpha - p_k/q_k| < 1/(\sqrt{5}q_k^2)$ for infinitely many convergents $p_k/q_k$, and that $\sqrt{5}$ is optimal: all $\alpha$ whose cont. frac. expansion ends with an infinite string of repeating 1's fail to satisfy such a property if $\sqrt{5}$ is replaced by any larger number. For the question the student at my lecture is asking, an optimal parameter is at most 2, not at least 2.
| 2 is optimal. Let $\alpha=[a,2,\beta]$, where $\beta$ is a (large) irrational, and let $p/q=[a,1]=(a+1)/1$. Then $p/q$ is not a convergent to $\alpha$, and $${p\over q}-\alpha={1\over2-{1\over \beta+1}}$$ which is ${1\over(2-\epsilon)q^2}$ since $q=1$.
More complicated examples can be constructed. I think this works, though I haven't done all the calculations. Let $\alpha=[a_0,a_1,\dots,a_n,m,2,\beta]$ with $m$ and $\beta$ large, let $p/q=[a_0,a_1,\dots,a_n,m,1]$. Then again $p/q$ is not a convergent to $\alpha$, while $$\left|{p\over q}-\alpha\right|={1\over(2-\epsilon)q^2}$$ for $m$ and $\beta$ sufficiently large.
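The first construction is easy to check with exact rational arithmetic: taking $a=0$, the number $\alpha=[0;2,\beta]$ with $\beta$ large has $|1/1-\alpha|$ just above $1/2$, so $1/1$ beats the bound $1/(cq^2)$ for any fixed $c<2$, yet $1/1$ is not a convergent of $\alpha$. A sketch (a large rational $\beta$ stands in for the irrational):

```python
# With alpha = [0; 2, beta] and p/q = 1/1, the approximation satisfies
# |p/q - alpha| < 1/(c q^2) for any fixed c < 2 once beta is large,
# although 1/1 is not a convergent of alpha (those begin 0/1, 1/2, ...).
from fractions import Fraction

beta = Fraction(100000, 7)          # large rational stand-in for an irrational
alpha = 1 / (2 + 1 / beta)          # alpha = [0; 2, beta]
p, q = 1, 1                         # p/q = [0; 1] = 1

c = Fraction(19, 10)                # any c < 2 works for beta large enough
assert abs(Fraction(p, q) - alpha) < 1 / (c * q**2)

print(float(abs(Fraction(p, q) - alpha)))  # just above 1/2
```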
|
If $p$ is an element of $\overline E$ but not a limit point of $E$, then why is there a $p' \in E$ such that $d(p, p') < \varepsilon$? I don't understand one of the steps of the proof of Theorem 3.10(a) in Baby Rudin. Here's the theorem and the proof up to where I'm stuck:
Relevant Definitions
The closure of the subset $E$ of some metric space is the union of $E$ with the set of all its limit points.
The diameter of the subset $E$ of some metric space is the supremum of the set of all pairwise distances between its elements.
For the points $x$ and $y$ in some metric space, $d(x, y)$ denotes the distance between $x$ and $y$.
Theorem 3.10(a) If $\overline{E}$ is the closure of a set $E$ in a metric space $X$, then
$$
\text{diam} \ \overline{E} = \text{diam} \ E.
$$
Proof. Since $E \subseteq \overline{E}$, we have
$$\begin{equation*}
\text{diam} \ E \leq \text{diam} \ \overline{E}.
\end{equation*}$$
Let $\epsilon > 0$, and pick $p, q \in \overline{E}$. By the definition of $\overline{E}$, there are points $p', q' \in E$ such that $d(p,p') < \epsilon$ and $d(q, q') < \epsilon$...
I see that this works if $p$ and $q$ are limit points of $E$. But how does this work if, say, $p$ isn't a limit point of $E$? What if $E$ is some region in $\mathbb{R}^2$ along with the point $p$ by itself way off somewhere?
| Basically Rudin needs to write a triangle inequality with two additional arbitrary points $p',q'$.
$d(p,q) \leq d(p,p') + d(p',q) $
$ d(p,p') + d(p',q) \leq d(p,p') + d(p',q') + d(q',q)$
He assumes $E$ is non-empty, so we can pick points $p'$ and $q'$ for this inequality. If $p$ is not a limit point of $E$, then $p$ itself belongs to $E$ (that is what $p \in \overline{E}$ means), so we may simply take $p' = p$, which gives $d(p,p') = 0 < \epsilon$.
|
The Star Trek Problem in Williams's Book This problem is from the book Probability with martingales by Williams. It's numbered as Exercise 12.3 on page 236. It can be stated as follows:
The control system on the starship has gone wonky. All that one can do is to set a distance to be traveled. The spaceship will then move that distance in a randomly chosen direction and then stop. The object is to get into the Solar System, a ball of radius $r$. Initially, the starship is at a distance $R>r$ from the sun.
The next hop-length is automatically set to be the current distance to the Sun ("next" and "current" being updated in the obvious way). Let $R_n$ be the distance from the Sun to the starship after $n$ space-hops. Prove that $$\sum_{n=1}^\infty \frac{1}{R^2_n}<\infty$$
holds almost surely.
It has puzzled me a long time. I tried to prove the series has a finite expectation, but in fact it's expectation is infinite. Does anyone has a solution?
| In the book it seems to me that the sum terminates when they get home, so if they get home the sum is finite. Williams also hints: it should be clear what theorem to use here. I am sure he is referring to Lévy's extension of the Borel–Cantelli lemma: $\frac 1 {R_n^2}$ is the conditional probability of getting home at time $n+1$ given that your position at time $n$ is $R_n$; it is the proportion of the area of the sphere of radius etc.
|
Determine invertible and inverses in $(\mathbb Z_8, \ast)$ Let $\ast$ be defined in $\mathbb Z_8$ as follows:
$$\begin{aligned} a \ast b = a +b+2ab\end{aligned}$$
Determine all the invertible elements in $(\mathbb Z_8, \ast)$ and determine, if possibile, the inverse of the class $[4]$ in $(\mathbb Z_8, \ast)$.
Identity element
We shall say that $(\mathbb Z_8, \ast)$ has an identity element if:
$$\begin{aligned} (\forall a \in \mathbb Z_8) \text { } (\exists \varepsilon \in \mathbb Z_8 : a \ast \varepsilon = \varepsilon \ast a = a)\end{aligned}$$
$$\begin{aligned} a+\varepsilon+2a\varepsilon = a \Rightarrow \varepsilon +2a\varepsilon = 0 \Rightarrow \varepsilon(1+2a) = 0 \Rightarrow \varepsilon = 0 \end{aligned}$$
As $\ast$ is commutative, similarly we can prove for $\varepsilon \ast a$.
$$\begin{aligned} a \ast 0 = a+0+2a0 = a \end{aligned}$$
$$\begin{aligned} 0\ast a = 0+a+20a = a\end{aligned}$$
Invertible elements and $[4]$ inverse
We shall state that in $(\mathbb Z_8, \ast)$ there is an inverse element of a fixed $a$ if and only if there exists $\alpha \in \mathbb Z_8$ such that:
$$\begin{aligned} a\ast \alpha = \alpha \ast a = \varepsilon \end{aligned}$$
$$\begin{aligned} a+\alpha +2a\alpha = 0 \end{aligned}$$
$$\begin{aligned} \alpha(2a+1) \equiv_8 -a \Rightarrow \alpha \equiv_8 -\frac{a}{(2a+1)} \end{aligned}$$
In particular looking at $[4]$ class, it follows:
$$\begin{aligned} \alpha \equiv_8 -\frac{4}{(2\cdot 4+1)}=-\frac{4}{9} \end{aligned}$$
therefore:
$$\begin{aligned} 9x \equiv_8 -4 \Leftrightarrow 1x \equiv_8 4 \Rightarrow x=4 \end{aligned}$$
which seems to be the right value as
$$\begin{aligned} 4 \ast \alpha = 4 \ast 4 = 4+4+2\cdot 4\cdot 4 = 8 + 8\cdot 4 = 0+0\cdot 4 = 0 \end{aligned}$$
Does everything hold? Have I done anything wrong, anything I failed to prove?
| From your post:
Determine all the invertible elements in $(\mathbb Z_8, \ast)$
You have shown that the inverse of $a$ is $\alpha \equiv_8 -\frac{a}{(2a+1)}$. I would go a step further and list out each element which has an inverse since you only have eight elements to check.
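That listing takes a few lines of brute force; in fact every class turns out to be invertible, since $1+2a$ is always odd and hence a unit mod $8$. A sketch (names are mine):

```python
# List every invertible element of (Z_8, *) with a * b = a + b + 2ab,
# identity 0, by brute-force search for each inverse.
def star(a, b, n=8):
    return (a + b + 2 * a * b) % n

inverses = {}
for a in range(8):
    for b in range(8):
        if star(a, b) == 0:       # 0 is the identity element
            inverses[a] = b

# Every class is invertible; note inverses[4] == 4, matching the question.
print(inverses)  # {0: 0, 1: 5, 2: 6, 3: 3, 4: 4, 5: 1, 6: 2, 7: 7}
```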
|
The longest sum of consecutive primes that add to a prime less than 1,000,000
In Project Euler problem $50,$ the goal is to find the longest sum of consecutive primes that add to a prime less than $1,000,000. $
I have an efficient algorithm to generate a set of primes between $0$ and $N.$
My first algorithm to try this was to brute force it, but that was impossible.
I tried creating a sliding window, but it took much too long to even begin to cover the problem space.
I got to some primes that were summed by $100$+ consecutive primes, but had only run about $5$-$10$% of the problem space.
I'm self-taught, with very little post-secondary education.
Where can I read about or find about an algorithm for efficiently calculating these consecutive primes?
I'm not looking for an answer, but indeed, more for pointers as to what I should be looking for and learning about in order to solve this myself.
| Small speed-up hint: the prime is clearly going to be odd, so you can rule out about half of the potential sums that way.
If you're looking for the largest such sum, checking things like $2+3,5+7+11,\cdots$ is a waste of time. Structure it differently so that you spend all your time dealing with larger sums. You don't need most of the problem space.
Hopefully this isn't too much of an answer for you.
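One standard way to structure it as the answer suggests is prefix sums: precompute cumulative sums of the primes so any window sum is one subtraction, and scan longer windows first so the first hit is the longest. A sketch at the small limit $1000$, where the Project Euler statement itself says the answer is $953$ with $21$ terms (function names are mine):

```python
# Prefix-sum sketch: any consecutive-prime window sum is one subtraction,
# and scanning longest windows first makes the first hit the answer.
def primes_below(n):
    sieve = [True] * n
    sieve[0] = sieve[1] = False
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(range(i*i, n, i))
    return [i for i, ok in enumerate(sieve) if ok]

def longest_prime_sum(limit):
    ps = primes_below(limit)
    pset = set(ps)
    prefix = [0]
    for p in ps:
        prefix.append(prefix[-1] + p)
    for length in range(len(ps), 1, -1):       # longest windows first
        for start in range(len(ps) - length + 1):
            s = prefix[start + length] - prefix[start]
            if s >= limit:
                break                          # sums only grow with start
            if s in pset:
                return s, length
    return None

print(longest_prime_sum(1000))  # (953, 21) per the problem statement
```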
|
Show matrix $A+5B$ has an inverse with integer entries given the following conditions Let $A$ and $B$ be 2×2 matrices with integer entries such that
each of $A$, $A + B$, $A + 2B$, $A + 3B$, $A + 4B$ has an inverse with integer
entries. Show that the same is true for $A + 5B$.
| First note that a matrix $X$ with integer entries is invertible and its inverse has integer entries if and only if $\det(X)=\pm1$.
Let $P(x)=\det(A+xB)$. Then $P(x)$ is a polynomial of degree at most $4$, with integer coefficients, and $P(0),P(1),P(2),P(3),P(4) \in \{ \pm 1 \}$.
Claim: $P(0)=P(1)=P(3)=P(4)$.
Proof: It is known that $b-a|P(b)-P(a)$ for $a,b$ integers.
Then $3\mid P(4)-P(1)$, $3\mid P(3)-P(0)$ and $4\mid P(4)-P(0)$. Since each of these differences is $0$ or $\pm 2$, each must be zero. This proves the claim.
Now, $P(x)-P(0)$ is a polynomial of degree at most four which has the roots $0,1,3,4$. Thus
$$P(x)-P(0)=ax(x-1)(x-3)(x-4) \,.$$
hence $P(2)=P(0)-4a$. Since $P(2), P(0) \in \{ \pm 1 \}$, it follows that $a=0$, and hence
$P(x)$ is the constant polynomial $1$ or $-1$.
Extra: One can actually deduce further from here that $\det(A)=\pm 1$ and $A^{-1}B$ is nilpotent.
Indeed $A$ is invertible and since $\det(A+xB)=\det(A)\det(I+xA^{-1}B)$ you get from here that $\det(I+xA^{-1}B)=1$. It is trivial to deduce next that the characteristic polynomial of $A^{-1}B$ is $x^2$.
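A concrete instance of the conclusion (my own example, not from the problem): with $A=I$ and $B$ nilpotent, $\det(A+kB)$ is constant, so all of $A, A+B, \dots, A+5B$ are integrally invertible.

```python
# With A = I and B nilpotent (A^{-1}B = B, B*B = 0), det(A + kB) = 1
# for every k, illustrating that det(A + xB) is a constant polynomial.
def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def add_scaled(A, B, k):
    return [[A[i][j] + k * B[i][j] for j in range(2)] for i in range(2)]

A = [[1, 0], [0, 1]]
B = [[0, 1], [0, 0]]          # nilpotent: B*B = 0

for k in range(6):            # k = 0..5 covers A, A+B, ..., A+5B
    assert det2(add_scaled(A, B, k)) == 1
print("det(A + kB) = 1 for k = 0..5")
```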
|
$G$ be a group acting holomorphically on $X$
Could anyone tell me why $g(z)$ has such a power-series expression? In particular, I don't get what $a_n(g)$ is and why it appears; the notation confuses me. Also, why is $g$ an automorphism? Where do the $a_i$ belong, and what is the role of $G$ in that expression of the power series?
Thank you. I am self-studying the subject, and I am extremely grateful to all the users who respond to the trivial doubts I post while learning. Thank you again.
| The element $g$ gives rise to a function $g: X \to X$. Since $g$ is a function, it has a Taylor expansion at the point $p$ in the coordinate $z$. We could write
$$
g(z) = \sum_{n=0}^\infty a_nz^n.
$$
for some complex numbers $a_n$. However, we are going to want to do this for all the elements of the group. If there's another element, $k$, we could write
$$
k(z) = \sum_{n=0}^\infty b_nz^n.
$$
It is terrible notation to pick different letters for each element of the group, and how do we even do it for an arbitrary group? What the author is saying is that, instead, we can pick one efficient notation by labeling the Taylor-series coefficient with the appropriate element of $G$. So what I called $a_n$ is what he calls $a_n(g)$; what I called $b_n$ is what he called $a_n(k)$. Do you now understand his arguments that $a_0(g) = 0$ (for any $g$) and $a_1(g)$ can't be?
But it's not just that his notation is better than mine - it's also that it makes it more obvious that we get a function $G \to \mathbb{C}^\times$ by taking the first Taylor coefficient (in his notation that's $g \mapsto a_1(g)$). It's this function which he is showing is a homomorphism.
|
A Couple of Normal Bundle Questions We are working through old qualifying exams to study. There were two questions concerning normal bundles that have stumped us:
$1$. Let $f:\mathbb{R}^{n+1}\longrightarrow \mathbb{R}$ be smooth and have $0$ as a regular value. Let $M=f^{-1}(0)$.
(a) Show that $M$ has a non-vanishing normal field.
(b) Show that $M\times S^1$ is parallelizable.
$2$. Let $M$ be a submanifold of $N$, both without boundary. If the normal bundle of $M$ in $N$ is orientable and $M$ is nullhomotopic in $N$, show that $M$ is orientable.
More elementary answers are sought. But, any kind of help would be appreciated. Thanks.
| Some hints:
* (a) Consider $\nabla f$. (b) Show that $TM\oplus \epsilon^1\cong T\mathbb{R}^{n+1}|_M$ is a trivial bundle, then analyze $T(M\times S^1)$.
* Try to use the homotopy to construct an orientation of $TN|_M$: let $F:M\times[0,1]\rightarrow N$ be a smooth homotopy such that $F_0$ is the embedding and $F_1$ maps to a point $p$; then pull back (my method is parallel transport) the orientation of $T_p N$ to $TN|_M$, and use $TN|_M\cong TM\oplus T^\perp M$.
|
Question about proof of $A[X] \otimes_A A[Y] \cong A[X, Y] $ As far as I understand universal properties, one can prove $A[X] \otimes_A A[Y] \cong A[X, Y] $ where $A$ is a commutative unital ring in two ways:
(i) by showing that $A[X,Y]$ satisfies the universal property of $A[X] \otimes_A A[Y] $
(ii) by using the universal property of $A[X] \otimes_A A[Y] $ to obtain an isomorphism $\ell: A[X] \otimes_A A[Y] \to A[X,Y]$
Now surely these two must be interchangeable, meaning I can use either of the two to prove it. So I tried to do (i) as follows:
Define $b: A[X] \times A[Y] \to A[X,Y]$ as $(p(X), q(Y)) \mapsto p(X)q(Y)$. Then $b$ is bilinear. Now let $N$ be any $A$-module and $b^\prime: A[X] \times A[Y] \to N$ any bilinear map.
I can't seem to define $\ell: A[X,Y] \to N$ suitably. The "usual" way to define it would've been $\ell: p(x,y) \mapsto b^\prime(1,p(x,y)) $ but that's not allowed in this case.
Question: is it really not possible to prove the claim using (i) in this case?
| Define $l(X^i Y^j) := b'(X^i, Y^j)$ and extend this to an $A$-linear map $A[X,Y] \to N$ by
$$l\left(\sum_{i,j} a_{ij} X^i Y^j\right) = \sum_{i,j} a_{ij} b'(X^i,Y^j).$$
|
Is this statement stronger than the Collatz conjecture?
Let $n$, $k$, $m$, $u$ $\in$ $\Bbb N$.
Consider the following sequence:
$x_0=n$; $x_m=3x_{m-1}+1$.
I am afraid I am a complete noob, but I cannot (dis)prove that the following implies the Collatz conjecture:
$\forall n\exists k,u:x_k=2^u$
Could you help me in this problem? Also, please do not (dis)prove the statement, just (dis)prove it is stronger than the Collatz conjecture.
If it implies and it is true, then LOL.
UPDATE
Okay, let me reconfigure the question: let's consider my statement true. In this case, does it imply the Collatz conjecture?
Please help me properly tagging this question, then delete this line.
| Call your statement S. Then: (1.) S does not hold. (2.) S implies the Collatz conjecture (and S also implies any conjecture you like). (3.) I fail to see how Collatz conjecture should imply S. (4.) If indeed Collatz conjecture implies S, then Collatz conjecture does not hold (and this will make the headlines...).
|
How to prove that if $m = 10t+k $ and $67|t - 20k$ then 67|m? m, t, k are Natural numbers.
How can I prove that if $m = 10t+k $ and $67|t - 20k$ then 67|m ?
| We have
$$10t+k=10t -200k+201k=10(t-20k)+(3)(67)k.$$
The result now follows from the fact that $67\mid(t-20k)$.
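The identity and the resulting divisibility are easy to verify exhaustively over a small range:

```python
# Check the identity 10t + k = 10(t - 20k) + 201k, and that whenever
# 67 | t - 20k we also get 67 | m = 10t + k (since 201 = 3 * 67).
for t in range(1, 300):
    for k in range(1, 300):
        assert 10 * t + k == 10 * (t - 20 * k) + 201 * k  # the identity
        if (t - 20 * k) % 67 == 0:
            assert (10 * t + k) % 67 == 0                 # hence 67 | m
print("identity and divisibility verified for t, k < 300")
```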
|
Distance in the Poincare Disk model of hyperbolic geometry I am trying to understand the Poincare Disk model of a hyperbolic geometry and how to measure distances. I found the equation for the distance between two points on the disk as:
$d^2 = (dx^2 + dy^2) / (1-x^2-y^2)^2$
Given two points on the disk, I am assuming that $dx$ is the difference in the euclidean $x$ coordinate of the points, and similar for $dy$. So, what are $x$ and $y$ in the formula? Also, I see some formulas online as:
$d^2 = 4(dx^2 + dy^2) / (1-x^2-y^2)^2$
I am not sure which one is correct.
Assuming that $X$ and $Y$ are coordinates of one of the points, I have tried a few examples, but can't get the math to work out. For instance, if $A = (0,0)$, $B = (0,.5)$, $C = (0,.75)$, then what are $d(A,B)$, $d(B,C)$, and $d(A,C)$? The value $d(A,B) + d(B,C)$ should equal $d(A,C)$, since they are on the same line, but I can't get this to work out. I get distances of $.666$, $.571$, and $1.714$ respectively.
MathWorld - Hyperbolic Metric
| The point is $dx$ and $dy$ in the formula (the one with the 4 in is right btw) don't represent the difference in the $x$ and $y$ co-ordinates, but rather the 'infinitesimal' distance at the point $(x,y)$, so to actually find the distance between two points we have to do some integration.
So the idea is that at the point $(x,y)$ in Euclidean co-ordinates, the length squared, $ds^2$, of an infinitesimally small line is the sum of the infinitesimally small projections of that line onto the $x$ and $y$ axes ($dx^2$ and $dy^2$) multiplied by a scaling factor which depends on $x$ and $y$ ($\frac{4}{(1-x^2-y^2)^2}$).
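As a concrete check of the asker's example (my own computation, assuming the metric with the factor $4$): along a diameter the distance from $(0,0)$ to $(0,r)$ is $\int_0^r \frac{2\,dt}{1-t^2} = 2\operatorname{artanh}(r) = \ln\frac{1+r}{1-r}$, and with this metric the distances do add up.

```python
import math

def diameter_distance(r1, r2, steps=100_000):
    """Hyperbolic length from (0, r1) to (0, r2) along a diameter, for
    ds = 2*sqrt(dx^2 + dy^2)/(1 - x^2 - y^2): numerically integrate
    2/(1 - t^2) dt with the midpoint rule."""
    h = (r2 - r1) / steps
    return sum(2.0 / (1.0 - (r1 + (i + 0.5) * h) ** 2) for i in range(steps)) * h

d_AB = diameter_distance(0.0, 0.5)    # closed form: 2*atanh(0.5) = ln 3
d_BC = diameter_distance(0.5, 0.75)   # ln(7/3)
d_AC = diameter_distance(0.0, 0.75)   # ln 7
print(d_AB + d_BC, d_AC)              # now d(A,B) + d(B,C) = d(A,C)
```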
|
Formal notation for function equality Is there a short-hand notation for
$$
f(x) = 1 \quad \forall x
$$
?
I've seen $f\equiv 1$ being used before, but found some some might (mis)interpret that as $f:=1$ (in the sense of definition), i.e., $f$ being the number $1$.
| Go ahead and use $\equiv$. To prevent any possibility of misunderstanding, you can use the word "constant" the first time this notation appears. As in "let $f$ be the constant function $f\equiv1$..."
|
Prime-base products: Plot For each $n \in \mathbb{N}$, let $f(n)$ map $n$ to the product of the primes that divide $n$.
So for $n=112$, $n=2^4 \cdot 7^1$, $f(n)= 2 \cdot 7 = 14$.
For $n=1000 = 2^3 \cdot 5^3$, $f(1000)=2 \cdot 5 = 10$.
Continuing in this manner, I arrive at the following plot:
Essentially: I would appreciate an explanation of this plot, "in the large":
I can see why it contains near-lines at slopes $1,2,3,4,\ldots,$ etc., but, still, somehow
the observational regularity was a surprise to me, due, no doubt, to my numerical naivety—especially the way it degenerates near the $x$-axis to such a regular stipulated pattern.
I'd appreciate being educated—Thanks!
| I cannot tell how much you know. If a number $n$ is squarefree then $f(n) = n.$ This is the most frequent case, as $6/\pi^2$ of natural numbers are squarefree. Next most frequent are 4 times an odd squarefree number, in which case $f(n) = n/2,$ a line of slope $1/2,$ as I think you had figured out. These numbers are not as numerous, a count of $f(n) = n/2$ should show frequency below $6/\pi^2.$ All your lines will be slope $1/k$ for some natural $k,$ but it is not just a printer effect that larger $k$ shows a less dense line. Anyway, worth calculating the actual density of the set $f(n) = n/k.$
Meanwhile, note that a computer printer does not actually join up dots in a line into a printed line, that would be nice but is not realistic. There are optical effects in your graph that suggest we are seeing step functions. If so, there are artificial patterns not supported mathematically.
Alright, i am seeing an interesting variation on frequency that I did not expect. Let us define
$$ g(k) = \frac{\pi^2}{6} \cdot \mbox{frequency of} \; \left\{ f(n) = n/k \right\}. $$
Therefore $$ g(1) = 1. $$
What I am finding is, for a prime $p,$
$$ g(p) = \frac{1}{ p \,(p+1)}, $$
$$ g(p^2) = \frac{1}{ p^2 \,(p+1)}, $$
$$ g(p^m) = \frac{1}{ p^m \,(p+1)}. $$
Furthermore $g$ is multiplicative, so when $\gcd(m,n) = 1,$ then $$g(mn) = g(m) g(n). $$
Note that it is necessary that the frequency of all possible events be $1,$ so
$$ \sum_{k=1}^\infty \; g(k) = \frac{\pi^2}{6}.$$ I will need to think about the sum, it ought not to be difficult to recover this from known material on $\zeta(2).$
EDIT: got it. see EULER. For any specific prime, we get
$$ G(p) = g(1) + g(p) + g(p^2) + g(p^3) + \cdots = \left( 1 + \frac{1}{p(p+1)} + \frac{1}{p^2(p+1)} + \frac{1}{p^3(p+1)} + \cdots \right) $$ or
$$ G(p) = 1 + \left( \frac{1}{p+1} \right) \left( \frac{1}{p} + \frac{1}{p^2} + \frac{1}{p^3} + \cdots \right) $$ or
$$ G(p) = \frac{p^2}{p^2 - 1} = \frac{1}{1 - \frac{1}{p^2}}. $$
Euler's Product Formula tells us that
$$ \prod_p \; G(p) = \zeta(2) = \frac{\pi^2 }{6}. $$ The usual bit about unique factorization and multiplicative functions is
$$ \sum_{k=1}^\infty \; g(k) = \prod_p \; G(p) = \zeta(2) = \frac{\pi^2}{6}.$$
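An empirical spot check of the claimed density (a Python sketch of my own): for $k=2$ the formula predicts frequency $(6/\pi^2)\,g(2) = (6/\pi^2)\cdot\frac{1}{2\cdot 3} = 1/\pi^2 \approx 0.1013$ for the event $f(n)=n/2$, and a count up to $10^5$ agrees.

```python
import math

N = 100_000
spf = list(range(N + 1))                 # smallest-prime-factor sieve
for p in range(2, int(N**0.5) + 1):
    if spf[p] == p:                      # p is prime
        for m in range(p * p, N + 1, p):
            if spf[m] == m:
                spf[m] = p

def rad(n):
    """Product of the distinct primes dividing n (the f of the question)."""
    r = 1
    while n > 1:
        p = spf[n]
        r *= p
        while n % p == 0:
            n //= p
    return r

count = sum(1 for n in range(1, N + 1) if n == 2 * rad(n))
print(count / N, 1 / math.pi**2)   # observed vs predicted density of f(n) = n/2
```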
|
Identity related to binomial distribution? While writing a (non-math) paper I came across the following apparent identity:
$N \cdot \sum\limits_{i = 1}^N \frac{1}{i}\binom{N-1}{i-1}\,p^{i-1}\left(1-p\right)^{N-i} = \frac{1-\left(1-p\right)^N}{p}$
where $N$ is a positive integer and $p$ is a nonzero probability. Based on intuition and some manual checks, this looks like it should be true for all such $N$ and $p$. I can't prove this, and being mostly ignorant about math, I don't know how to learn what I need to prove this. I'd really appreciate anything helpful, whether a quick pointer in the right direction or the whole proof (or a proof or example that the two aren't identical).
Note also that $1-\left(1-p\right)^N = \sum\limits_{i=1}^N \binom{N}{i} p^i \left(1-p\right)^{N-i}$
and that ${p = {1 - {\left( {1 - p} \right)}^1}}$
For background, see the current draft with relevant highlightings here.
| $$
\begin{align}
N\sum_{i=1}^N\dfrac1i\binom{N-1}{i-1}p^{i-1}(1-p)^{N-i}
&=\frac{(1-p)^N}{p}\sum_{i=1}^N\binom{N}{i}\left(\frac{p}{1-p}\right)^i\tag{1}\\
&=\frac{(1-p)^N}{p}\left[\left(1+\frac{p}{1-p}\right)^N-1\right]\tag{2}\\
&=\frac{(1-p)^N}{p}\left[\frac1{(1-p)^N}-1\right]\tag{3}\\
&=\frac{1-(1-p)^N}{p}\tag{4}
\end{align}
$$
Explanation of steps:
*
*$\displaystyle\frac{N}{i}\binom{N-1}{i-1}=\binom{N}{i}$
*$\displaystyle\sum_{i=0}^N\binom{N}{i}x^i=(1+x)^N\quad\quad$($i=0$ is missing, so we subtract $1$)
*$1+\dfrac{p}{1-p}=\dfrac1{1-p}$
*distribute multiplication over subtraction
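For readers who want a machine check: the identity can be verified exactly with rational arithmetic, so no floating-point rounding is involved (a small Python sketch of my own).

```python
from fractions import Fraction
from math import comb

def lhs(N, p):
    return N * sum(Fraction(1, i) * comb(N - 1, i - 1) * p**(i - 1) * (1 - p)**(N - i)
                   for i in range(1, N + 1))

def rhs(N, p):
    return (1 - (1 - p)**N) / p

for N in range(1, 9):
    for p in (Fraction(1, 4), Fraction(1, 2), Fraction(9, 10)):
        assert lhs(N, p) == rhs(N, p)   # exact equality, no rounding
print("identity holds exactly for N = 1..8")
```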
|
Convergence of the sequence $f_{1}\left(x\right)=\sqrt{x} $ and $f_{n+1}\left(x\right)=\sqrt{x+f_{n}\left(x\right)} $
Let $\left\{ f_{n}\right\} $ denote the set of functions on
$[0,\infty) $ given by $f_{1}\left(x\right)=\sqrt{x} $ and
$f_{n+1}\left(x\right)=\sqrt{x+f_{n}\left(x\right)} $ for $n\ge1 $.
Prove that this sequence is convergent and find the limit function.
We can easily show that this sequence is nondecreasing. Originally, I was trying to apply the fact that “every bounded monotonic sequence must converge” but then it hit me this is true for $\mathbb{R}^{n} $. Does this fact still apply on $C[0,\infty) $, the set of continuous functions on $[0,\infty) $? If yes, what bound would we use?
| You know that the sequence is increasing. Now notice that if $f(x)$ is the positive solution of $y^2=x+y$, we have $f_n(x)<f(x) \ \forall n \in \mathbb{N}$. In fact, notice that this holds for $n=1$, and assuming it for $n$ we have
$$f^{2}_{n+1}(x) = x +f_n(x) < x+f(x) = f^{2}(x).$$ Then $\lim_n f_n(x)$ exists and, taking the limit in $f^{2}_{n+1}(x) = x + f_n(x)$, we find that $\lim_n f_n(x)= f(x)$.
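Numerically the convergence to the positive root $f(x)=\frac{1+\sqrt{1+4x}}{2}$ of $y^2=x+y$ is easy to watch (a quick check of my own):

```python
import math

def limit(x):
    """Positive root of y^2 = x + y."""
    return (1 + math.sqrt(1 + 4 * x)) / 2

for x in (0.5, 2.0, 10.0):
    y = math.sqrt(x)              # f_1(x)
    for _ in range(60):
        y = math.sqrt(x + y)      # f_{n+1}(x) = sqrt(x + f_n(x))
    print(x, y, limit(x))         # iterates land on the fixed point
```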
|
Proof by contradiction that $n!$ is not $O(2^n)$ I am having issues with this proof: Prove by contradiction that $n! \ne O(2^n)$. From what I understand, we are supposed to use a previous proof (which successfully proved that $2^n = O(n!)$) to find the contradiction.
Here is my working so far:
Assume $n! = O(2^n)$. There must exist $c$, $n_{0}$ such that $n! \le c \cdot 2^n$ for all $n \ge n_0$. From the previous proof, we know that $2^n \le n!$ for $n \ge 4$.
We pick a value, $m$, which is guaranteed to be $\ge n_{0}$ and $\ne 0$. I have chosen $m = n_{0} + 10 + c$.
Since $m > n_0$:
$$m! \le c \cdot 2^m\qquad (m > n \ge n_0)$$
$$\dfrac{m!}{c} \le 2^m$$
$$\dfrac{1}{c} m! \le 2^m$$
$$\dfrac{1}{m} m! \le 2^m\qquad (\text{as }m > c)$$
$$(m - 1)! \le 2^m$$
That's where I get up to.. not sure which direction to head in to draw the contradiction.
| In
Factorial Inequality problem $\left(\frac n2\right)^n > n! > \left(\frac n3\right)^n$,
they have obtained
$$
n!\geq \left(\frac{n}{e}\right)^n.
$$
Hence
$$
\lim_{n\rightarrow\infty}\frac{n!}{2^n}\geq\lim_{n\rightarrow\infty}\left(\frac{n}{2e}\right)^n=\infty.
$$
Suppose that $n!=O(2^n)$. Then there exist $C>0$ and $N_0\in \mathbb{N}$ such that
$$\frac{
n!}{2^n}\leq C
$$
for all $n\geq N_0$. Letting $n\rightarrow\infty$ in the above inequality we obtain
$$
\lim_{n\rightarrow\infty}\frac{n!}{2^n}\leq C,
$$
which is absurd.
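The same conclusion can be watched numerically (a sketch of my own): since $\frac{(n+1)!/2^{n+1}}{n!/2^n} = \frac{n+1}{2} > 1$ for $n \ge 2$, the ratio $n!/2^n$ grows without bound and eventually exceeds any candidate constant $C$.

```python
from math import factorial

C = 1000                      # any candidate constant from n! <= C * 2^n
n = 1
while factorial(n) <= C * 2**n:
    n += 1
print(n)                      # first n where n! > 1000 * 2^n
# the ratio n!/2^n only grows from here on, so no constant C can work
```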
|
Continuous maps between compact manifolds are homotopic to smooth ones
If $M_1$ and $M_2$ are compact connected manifolds of dimension $n$, and $f$ is a continuous map from $M_1$ to $M_2$, f is homotopic to a smooth map from $M_1$ to $M_2$.
Seems to be fairly basic, but I can't find a proof. It might be necessary to assume that the manifolds are Riemannian.
It should be possible to locally solve the problem in Euclidean space by possibly using polynomial approximations and then patching them up, where compactness would tell us that approximating the function in finitely many open sets is enough. I don't see how to use the compactness of the target space though.
| It is proved as Proposition 17.8 on page 213 in Bott, Tu, Differential Forms in Algebraic Topology. For the necessary Whitney embedding theorem, they refer to deRham, Differential Manifolds.
This Whitney Approximation on Manifolds is proved as Proposition 10.21 on page 257 in Lee, Introduction to Smooth Manifolds.
There you can find even the proof of Whitney embedding Theorem.
|
What is a lift? What exactly is a lift? I wanted to prove that for appropriately chosen topological groups $G$ we can show that the completion $\widehat{G}$ of $G$ is isomorphic to the inverse limit $$\lim_{\longleftarrow} G/G_n$$
I wasn't sure how to construct the map so I asked this question to which I got an answer but I don't understand it. Searching for "lift" or "inverse limit lift" has not helped and I was quite sure that if $[x] \in \widehat{G}$ I could just "project" $x_n$ to $$\lim_{\longleftarrow} G/G_n$$ using the map $\prod_{n=1}^\infty \pi_n (x) = \prod_{n=1}^\infty x_n$ in $G/G_n$. Would someone help me finish this construction and explain to me what a lift is? Thank you.
Edit
I don't know any category theory. An explanation understandable by someone not knowing any category theory would be very kind.
| The term lift is merely meant to mean the following. Given a surjective map $G\to G'$ a lift of an element $x\in G'$ is a choice of $y\in G$ such that $y\mapsto x$ under this map.
In the linked answer you have $(x_n)\in \lim G/G_n$. Thus the notation has $x_n\in G/G_n$. The elements $y_n$ are just chosen preimages of $x_n$ under the natural surjection $G\to G/G_n$.
|
Find endomorphism of $\mathbb{R}^3$ such that $\operatorname{Im}(f) \subset \operatorname{Ker}(f)$ I have a problem:
Let's find an endomorphism of $\mathbb{R}^3$ such that $\operatorname{Im}(f) \subset \operatorname{Ker}(f)$.
How would you do it?
The endomorphism must be not null.
| Observe that such operator has the property that $f(f(\vec x))=0$, since $f(\vec x)\in\ker f$. Any matrix $M$ in $M_3(\mathbb R)$ such that $M^2=0$ will give you such operator.
For example, then, $f(x_1,x_2,x_3)=(x_3,0,0)$, for the canonical basis the matrix looks like this:$$\begin{pmatrix} 0&0&1\\0&0&0\\0&0&0\end{pmatrix}$$
See now that $f^2=0$, but $f\neq 0$.
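A direct check of this example (my own snippet): every image vector has the form $(a,0,0)$ and is killed by $f$, so $\operatorname{Im}(f)\subset\operatorname{Ker}(f)$, while $f$ itself is nonzero.

```python
def f(v):
    """f(x1, x2, x3) = (x3, 0, 0), the endomorphism from the answer."""
    x1, x2, x3 = v
    return (x3, 0.0, 0.0)

for v in [(1.0, 2.0, 3.0), (-4.0, 0.5, 7.5), (0.0, 1.0, -2.0)]:
    w = f(v)                              # w lies in Im(f)
    assert f(w) == (0.0, 0.0, 0.0)        # ... and in Ker(f)
assert f((0.0, 0.0, 1.0)) != (0.0, 0.0, 0.0)   # f is not the zero map
print("Im(f) is contained in Ker(f) on sample vectors")
```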
|
Alternating sum of binomial coefficients
Calculate the sum:
$$ \sum_{k=0}^n (-1)^k {n+1\choose k+1} $$
I don't know if I'm so tired or what, but I can't calculate this sum. The result is supposed to be $1$ but I always get something else...
| Another way to see it: prove that
$$\binom{n}{k}+\binom{n}{k+1}=\binom{n+1}{k+1}\,\,\,,\,\,\text{so}$$
$$\sum_{k=0}^n(-1)^k\binom{n+1}{k+1}=\sum_{k=0}^n(-1)^k\cdot 1^{n-k}\binom{n}{k}+\sum_{k=0}^n(-1)^k\cdot 1^{n-k}\binom{n}{k+1}=$$
$$=(1+(-1))^n-\sum_{k=0}^n(-1)^{k+1}\cdot 1^{n-k-1}\binom{n}{k+1}=0-(1-1)^n+1=1$$
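A quick numerical confirmation (my own): the sum equals $1$ for every small $n$.

```python
from math import comb

for n in range(0, 12):
    s = sum((-1)**k * comb(n + 1, k + 1) for k in range(n + 1))
    assert s == 1                 # matches the telescoped value above
print("sum is 1 for n = 0..11")
```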
|
A simple question of finite number of basis Let $V$ be a vector space. Define
" A set $\beta$ is a basis of $V $" as "(1) $\beta$ is linearly independent set, and (2) $\beta$ spans $V$ "
On this definition, I want to show that "if $V$ has a basis (call it $\beta$) then $\beta$ is a finite set."
In my definition, I have no assumption of the finiteness of the set $\beta$. But can I show this statement by using some properties of a vector space?
| I take it $V$ is finite-dimensional? For instance, the (real) vector space of polynomials over $\mathbb{R}$ is infinite-dimensional $-$ in this space, no basis is finite. And for finite-dimensional spaces, it's worth noting that the dimension of a vector space $V$ is defined as being the size of a basis of $V$, so if $V$ is finite-dimensional then any basis for $V$ is automatically finite, by definition!
|
Show $\iint_{D} f(x,y)(1 - x^2 - y^2) ~dx ~dy = \pi/2$ Suppose $f(x,y)$ is a bounded harmonic function in the unit disk
$D = \{z = x + iy : |z| < 1 \} $
and $f(0,0) = 1$. Show that
$$\iint_{D} f(x,y)(1 - x^2 - y^2) ~dx ~dy = \frac{\pi}{2}.$$
I'm studying for a prelim this August and I haven't taken Complex in a long time (two years ago). I don't know how to solve this problem or even where to look unless it's just a game with Green's theorem-any help? I don't need a complete solution, just a helpful hint and I can work the rest out on my own.
| Harmonic functions have the mean value property, that is
$$
\frac1{2\pi}\int_0^{2\pi}f(z+re^{i\phi})\,\mathrm{d}\phi=f(z)
$$
If we write the integral in polar coordinates
$$
\begin{align}
\iint_Df(x,y)(1-x^2-y^2)\,\mathrm{d}x\,\mathrm{d}y
&=\int_0^1\int_0^{2\pi}f(re^{i\phi})(1-r^2)\,r\,\mathrm{d}\phi\,\mathrm{d}r\\
&=2\pi\int_0^1f(0)(1-r^2)\,r\,\mathrm{d}r\\
&=2\pi f(0)\cdot\frac14\\
&=\frac\pi2
\end{align}
$$
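To see the computation in action, here is a numerical check of my own with the (bounded on the disk) harmonic function $f(x,y)=1+3x$, which satisfies $f(0,0)=1$: a midpoint-rule integral in polar coordinates returns $\pi/2$.

```python
import math

def f(x, y):
    return 1 + 3 * x                 # harmonic, bounded on the disk, f(0,0) = 1

nr, nphi = 400, 400                  # midpoint rule in polar coordinates
total = 0.0
for i in range(nr):
    r = (i + 0.5) / nr
    for j in range(nphi):
        phi = (j + 0.5) * 2 * math.pi / nphi
        # integrand f * (1 - r^2), area element r dr dphi
        total += f(r * math.cos(phi), r * math.sin(phi)) * (1 - r * r) * r
total *= (1.0 / nr) * (2 * math.pi / nphi)
print(total, math.pi / 2)            # both approximately 1.5708
```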
|
Why does the $2$'s and $1$'s complement subtraction works? The algorithm for $2$'s complement and $1$'s complement subtraction is tad simple:
$1.$ Find the $1$'s or $2$'s complement of the subtrahend.
$2.$ Add it with minuend.
$3.$ If there is no carry, then the answer is in its complement form: we take the $1$'s or $2$'s complement of the result and assign a negative sign to the final answer.
$4.$ If there is a carry then add it to the result in case of $1$'s complement or neglect in case of $2$'s complement.
This is what I was taught in undergraduate, but I never understood why this method work. More generally why does the radix complement or the diminished radix complement subtraction works? I am more interested in understanding the mathematical reasoning behind this algorithm.
| We consider $k$-bit integers with an extra $k+1$ bit, that is associated with the sign. If the "sign-bit" is 0 then we treat this number as usual as a positive number in binary representation. Otherwise this number is negative (details follow).
Let us start with the 2-complement. To compute the 2-complement of a number $x$ we flip all the bits to the left of the rightmost 1. Here is an example:
0010100 -> 1101100
We define for every positive number $a$, the inverse element $-a$ as the 2-complement of $a$. When adding two numbers, we will forget the overflow bit (after the sign bit), if an overflow occurs. This addition with the set of defined positive and negative numbers forms a finite Abelian group. Notice that we have only one zero, which is 00000000. The representation 10000000 does not encode a number. To check the group properties, observe that
*
*$a + (-a) = 0$,
*$a+0=a$,
*$-(-a)=a$,
*$(a+b)=(b+a)$,
*$(a+b)+c= a+ (b+c)$.
As a consequence, we can compute with the 2-complement as with integers. To subtract $a$ from $b$ we compute $a+(-b)$. If we want to restore the absolute value of a negative number we have to take its 2-complement (this explains step 3 in the question's algorithm).
Now the 1-complement. This one is more tricky, because we have two distinct zeros (000000 and 111111). I think it is easiest to consider the 1-complement of $a$ as its 2-complement minus 1. Then we can first reduce the 1-complement to the 2-complement and then argue over the 2-complements.
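Here is a small Python illustration of my own of the 8-bit 2-complement case: subtraction is addition of the 2-complement with the carry out of the top bit discarded, and a negative result comes out in complement form, exactly as in step 3 of the question's algorithm.

```python
BITS = 8
MASK = (1 << BITS) - 1                # 0xFF: masking = discarding the carry

def twos_complement(a):
    return (-a) & MASK                # same as (~a + 1) & MASK

def decode(a):
    """Read an 8-bit pattern as a signed value."""
    return a - (1 << BITS) if a & (1 << (BITS - 1)) else a

def subtract(minuend, subtrahend):
    """b - a computed as b + twos_complement(a), carry discarded."""
    return (minuend + twos_complement(subtrahend)) & MASK

print(decode(subtract(100, 37)))      # 63
print(decode(subtract(37, 100)))      # -63: the result was in complement form
```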
|
Fourier transform of the derivative - insufficient hypotheses? An exercise in Carlos ISNARD's Introdução à medida e integração:
Show that if $f$ and $f'$ $\in\mathscr{L}^1(\mathbb{R},\lambda,\mathbb{C})$ and $\lim_{x\to\pm\infty}f(x)=0$ then $\hat{(f')}(\zeta)=i\zeta\hat{f}(\zeta)$.
($\lambda$ is the Lebesgue measure on $\mathbb{R}$.)
I'm tempted to apply integration by parts on the integral from $-N$ to $N$ and then take limit as $N\to\infty$. But to obtain the result I seemingly need $f'e^{-i\zeta x}$ to be Riemann-integrable so as to use the fundamental theorem of Calculus.
What am I missing here?
Thank you.
| We can find a sequence of $C^1$ functions with compact support $\{f_j\}$ such that $$\lVert f-f_j\rVert_{L^1}+\lVert f'-f_j'\rVert_{L^1}\leq \frac 1j.$$
This is a standard result about Sobolev spaces. Actually more is true: the smooth functions with compact support are dense in $W^{1,1}(\Bbb R)$, but that's not needed here. It's theorem 13 in Willie Wong's notes about Sobolev spaces.
Convergence in $L^1$ shows that for all $\xi$ we have $\widehat{f_j}(\xi)\to \widehat{f}(\xi)$ and $\widehat{f_j'}(\xi)\to \widehat{f'}(\xi)$. And the result for $C^1$ functions follows as stated in the OP by integration by parts.
|
What is smallest possible integer k such that $1575 \times k$ is perfect square? I wanted to know how to solve this question:
What is smallest possible integer $k$ such that $1575 \times k$ is a perfect square?
a) 7, b) 9, c) 15, d) 25, e) 63. The answer is 7.
Since this was a multiple choice question, I guess I could just plug in and test the values from the given options, but I wanted to know how to do it without guessing and testing. Any suggestions?
| If $1575\times k$ is a perfect square, we can write it as
$\prod_k p_k^2$. Since $1$ is not among the choices given, let's assume that $1575$ is not a square from the beginning.
Therefore $9=3^2$ and $25=5^2$ are ruled out, since multiplying a non square with a square doesn't make it a square. $63=9\cdot 7$ is ruled out, since $7$ would be the smaller possible choice.
We are left with $7$ and $15$, and one quickly notices that $1575$ is divisible by $15$ twice:
$$
1575=1500+75=100\cdot 15+5\cdot 15=105\cdot 15=(75+30)\cdot 15=(5+2)\cdot 15^2.
$$
So what are you left with...? Good luck ;-)
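The hint-free procedure, spelled out: factor $1575$ and multiply together the primes occurring to an odd power. A short Python sketch of my own:

```python
from math import isqrt

def smallest_square_multiplier(n):
    """Smallest k such that n*k is a perfect square: the product of the
    primes that appear to an odd power in n."""
    k, p = 1, 2
    while p * p <= n:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        if e % 2 == 1:
            k *= p
        p += 1
    if n > 1:                         # leftover prime factor (odd power 1)
        k *= n
    return k

k = smallest_square_multiplier(1575)  # 1575 = 3^2 * 5^2 * 7
print(k)                              # 7
```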
|
I want to prove $ \int_0^\infty \frac{e^{-x}}{x} dx = \infty $ How can I prove this integral diverges?
$$ \int_0^\infty \frac{e^{-x}}{x} dx = \infty $$
| $$
\int_{0}^{\infty}\frac{e^{-x}}{x}= \int_{0}^{1}\frac{e^{-x}}{x}+\int_{1}^{\infty}\frac{e^{-x}}{x} \\
> \int_{0}^{1}\frac{e^{-x}}{x} \\
> e^{-1}\int_{0}^{1}\frac{1}{x}
$$
which diverges.
|
What's the meaning of the unit bivector i? I'm reading the Oersted Medal Lecture by David Hestenes to improve my understanding of Geometric Algebra and its applications in Physics. I understand he does not start from a mathematical "clean slate", but I don't care for that. I want to understand what he's saying and what I can do with this geometric algebra.
On page 10 he introduces the unit bivector i. I understand (I think) what unit vectors are: multiply by a scalar and get a scaled directional line. But a bivector is a(n oriented) parallelogram (plane). So if I multiply the unit bivector i with a scalar, do I get a scaled parallelogram?
| The unit bivector represents the 2D subspaces. In 2D Euclidean GA, there are 4 coordinates:
*
*1 scalar coordinate
*2 vector coordinates
*1 bivector coordinate
A "vector" is then (s, e1, e2, e1^e2).
The unit bivector is frequently drawn as a parallelogram, but that's just a visualization aid. It conceptually more closely resembles a directed area where the sign indicates direction.
|
Does "nullity" have a potentially conflicting or confusing usage? In Linear Algebra and Its Applications, David Lay writes, "the dimension of the null space is sometimes called the nullity of A, though we will not use the term." He then goes on to specify "The Rank Theorem" as "rank A + dim Nul A = n" instead of calling it the the rank-nullity theorem and just writing "rank A + nullity A = n".
Naturally, I wonder why he goes out of his way to avoid using the term "nullity." Maybe someone here can shed light....
While choices of terminology are often a matter of taste (I would not know why the author should prefer to say $\operatorname{Nul}A$ instead of $\ker A$), there is at least a mathematical reason why "rank" is more important than "nullity": it is connected to the matrix/linear map in a way where source and destination spaces are treated on an equal footing, while the nullity is uniquely attached to the source space. This is why the rank can be defined in an equivalent manner as the row rank or the column rank, or in a neutral way as the size of the largest non-vanishing minor or the smallest dimension of an intermediate space through which the linear map can be factored (decomposition rank). No such versatility exists for the nullity: it is just the dimension of the kernel inside the source space, and cannot be related in any way to the destination space. A notion analogous to the nullity at the destination side is the codimension of the image in the destination space (that is, the dimension of the cokernel); it measures the failure to be surjective, and it is different from the nullity (which measures the failure to be injective) for rectangular matrices. There is a (rather obvious) analogue to the rank-nullity theorem that says that for linear $f:V\to W$ one has
$$
\operatorname{rk} f+ \dim\operatorname{coker} f = \dim W.
$$
|
Finding $E(N)$ in this question Suppose $X_1,X_2,\ldots$ is a sequence of independent $U(0,1)$ random variables. For fixed $0<\alpha<1$, let
$N=\min\{n>0 : X_{(n:n)}-X_{(1:n)}>\alpha\}$, where $X_{(1:n)}$ is the smallest order statistic and
$X_{(n:n)}$ is the largest order statistic of $X_1,\ldots,X_n$. How can we find $E(N)$?
| Let $m_n=\min\{X_k\,;\,1\leqslant k\leqslant n\}=X_{(1:n)}$ and $M_n=\max\{X_k\,;\,1\leqslant k\leqslant n\}=X_{(n:n)}$. As explained in comments, $(m_n,M_n)$ has density $n(n-1)(y-x)^{n-2}\cdot[0\lt x\lt y\lt1]$ hence $M_n-m_n$ has density $n(n-1)z^{n-2}(1-z)\cdot[0\lt z\lt1]$.
For every $n\geqslant2$, $[N\gt n]=[M_n-m_n\lt\alpha]$ hence
$$
\mathrm P(N\gt n)=\int_0^\alpha n(n-1)z^{n-2}(1-z)\mathrm dz=\alpha^{n}+n(1-\alpha)\alpha^{n-1}.
$$
The same formula holds for $n=0$ and $n=1$ hence
$$
\mathrm E(N)=\sum_{n=0}^{+\infty}\mathrm P(N\gt n)=\sum_{n=0}^{+\infty}\alpha^n+(1-\alpha)\sum_{n=0}^{+\infty}n\alpha^{n-1}=\frac2{1-\alpha}.
$$
Edit: To compute the density of $(m_n,M_n)$, start from the fact that
$$
\mathrm P(x\lt m_n,M_n\lt y)=\mathrm P(x\lt X_1\lt y)^n=(y-x)^n,
$$
for every $0\lt x\lt y\lt 1$. Differentiating this identity twice, once with respect to $x$ and once with respect to $y$, yields the opposite of the density of $(m_n,M_n)$.
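A numerical cross-check of my own of $\mathrm E(N)=\frac{2}{1-\alpha}$, summing the tail probabilities $\mathrm P(N>n)=\alpha^n+n(1-\alpha)\alpha^{n-1}$ directly:

```python
def expected_N(alpha, terms=5000):
    """E(N) = sum_{n>=0} P(N > n), truncated after `terms` terms."""
    return sum(alpha**n + n * (1 - alpha) * alpha**(n - 1)
               for n in range(terms))

for alpha in (0.25, 0.5, 0.9):
    print(alpha, expected_N(alpha), 2 / (1 - alpha))   # columns agree
```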
|
What digit appears in unit place when $2^{320}$ is multiplied out Is there a way to answer the following preferably without a calculator
What digit appears in unit place when $2^{320}$ is multiplied out ? a)$0$ b)$2$ c)$4$ d)$6$ e)$8$
---- Ans(d)
| $\rm mod\ 5\!:\, \color{#0A0}{2^4}\equiv \color{#C00}1\Rightarrow2^{320}\equiv (\color{#0A0}{2^4})^{80}\equiv \color{#C00}1^{80}\!\equiv \color{#C00}1.\,$ The only choice $\:\equiv \color{#C00}1\!\pmod 5\:$ is $6,\: $ in d).
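Equivalently, the last digits of $2^n$ cycle through $2,4,8,6$ with period $4$, and $320$ is a multiple of $4$. Python's three-argument `pow` confirms it (a one-liner of my own):

```python
# 2^320 mod 10 via fast modular exponentiation
print(pow(2, 320, 10))   # 6
```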
|
Prove $\frac{1}{1 \cdot 3} + \frac{1}{3 \cdot 5} + \frac{1}{5 \cdot 7} + \cdots$ converges to $\frac 1 2 $ Show that
$$\frac{1}{1 \cdot 3} + \frac{1}{3 \cdot 5} + \frac{1}{5 \cdot 7} + \cdots = \frac{1}{2}.$$
I'm not exactly sure what to do here, it seems awfully similar to Zeno's paradox.
If the series continues infinitely then each term is just going to get smaller and smaller.
Is this an example where I should be making a Riemann sum and then taking the limit which would end up being $1/2$?
| Solution as per David Mitra's hint in a comment.
Write the given series as a telescoping series and evaluate its sum:
$$\begin{eqnarray*}
S &=&\frac{1}{1\cdot 3}+\frac{1}{3\cdot 5}+\frac{1}{5\cdot 7}+\cdots \\
&=&\sum_{n=1}^{\infty }\frac{1}{\left( 2n-1\right) \left( 2n+1\right) } \\
&=&\sum_{n=1}^{\infty }\left( \frac{1}{2\left( 2n-1\right) }-\frac{1}{
2\left( 2n+1\right) }\right)\quad\text{Partial fractions decomposition} \\
&=&\frac{1}{2}\sum_{n=1}^{\infty }\left( \frac{1}{2n-1}-\frac{1}{2n+1}
\right) \qquad \text{Telescoping series} \\
&=&\frac{1}{2}\sum_{n=1}^{\infty }\left( a_{n}-a_{n+1}\right), \qquad a_{n}=
\frac{1}{2n-1},a_{n+1}=\frac{1}{2\left( n+1\right) -1}=\frac{1}{2n+1} \\
&=&\frac{1}{2}\left( a_{1}-\lim_{n\rightarrow \infty }a_{n}\right) \qquad\text{see below} \\
&=&\frac{1}{2}\left( \frac{1}{2\cdot 1-1}-\lim_{n\rightarrow \infty }\frac{1
}{2n-1}\right) \\
&=&\frac{1}{2}\left( 1-0\right) \\
&=&\frac{1}{2}.
\end{eqnarray*}$$
Added: The sum of the telescoping series $\sum_{n=1}^{\infty }\left( a_{n}-a_{n+1}\right)$ is the limit of the telescoping sum $\sum_{n=1}^{N}\left( a_{n}-a_{n+1}\right) $ as $N$ tends to $\infty$. Since
$$\begin{eqnarray*}
\sum_{n=1}^{N}\left( a_{n}-a_{n+1}\right) &=&\left( a_{1}-a_{2}\right)
+\left( a_{2}-a_{3}\right) +\ldots +\left( a_{N-1}-a_{N}\right) +\left(
a_{N}-a_{N+1}\right) \\
&=&a_{1}-a_{2}+a_{2}-a_{3}+\ldots +a_{N-1}-a_{N}+a_{N}-a_{N+1} \\
&=&a_{1}-a_{N+1},
\end{eqnarray*}$$
we have
$$\begin{eqnarray*}
\sum_{n=1}^{\infty }\left( a_{n}-a_{n+1}\right) &=&\lim_{N\rightarrow
\infty }\sum_{n=1}^{N}\left( a_{n}-a_{n+1}\right) \\
&=&\lim_{N\rightarrow \infty }\left( a_{1}-a_{N+1}\right) \\
&=&a_{1}-\lim_{N\rightarrow \infty }a_{N+1} \\
&=&a_{1}-\lim_{N\rightarrow \infty }a_{N} \\
&=&a_{1}-\lim_{n\rightarrow \infty }a_{n}.\end{eqnarray*}$$
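The telescoping is easy to confirm with exact rational arithmetic (a check of my own): the partial sum is $S_N = \frac{1}{2}\left(1-\frac{1}{2N+1}\right) = \frac{N}{2N+1}\to\frac12$.

```python
from fractions import Fraction

def partial_sum(N):
    return sum(Fraction(1, (2 * n - 1) * (2 * n + 1)) for n in range(1, N + 1))

for N in (1, 10, 100):
    assert partial_sum(N) == Fraction(N, 2 * N + 1)   # exact telescoped value
print(float(partial_sum(100)))   # 0.4975..., approaching 1/2
```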
|
Combinatorics question: Prove 2 people at a party know the same amount of people I recently had an assignment and got this question wrong, was wondering what I left out.
Prove that at a party where there are at least two people, there are two people who know the same number of other people there. Be sure to use the variable "n" when writing your answer.
My answer:
n >= 2
Case 1: another person comes to the party, but doesn't know either of the first two. So the original two still only know the same number of people.
Case 2: another person comes, and knows one of the original two, so therefore the newcomer, and the one that doesn't know the newcomer, both know the same number of people.
Case 3: another person comes and knows both of them, implying that they all know each other, and therefore they all know the same number of people.
So therefore if n >= 2, where n and n-1 know each other, in either case when person n+1 joins, there will be at least two people who know the same number of people.
Many thanks in advance. Have a test coming up.
| First of all, you left out saying that you were proceding by induction, and you left out establishing a base case for the induction.
But let's look at case 2. Suppose that of the original two people, one knows 17 of the 43 people at the party, the other knows 29. Now the newcomer knows the first of the two, but not the other. What makes you think the newcomer knows exactly 29 of the people at the party?
|
What is the $\tau$ symbol in the Bourbaki text? I'm reading the first book by Bourbaki (set theory) and he introduces this logical symbol $\tau$ to later define the quantifiers with it. It is such that if $A$ is an assembly possibly containing $x$ (a term? a variable?), then $\tau_xA$ does not contain it.
Is there a reference to this symbol $\tau$? Is it not useful/necessary? Did it die out of fashion?
How to read it?
At the end of the chapter on logic, they make use of it by... I don't know treating it like $\tau_X(R)$ would represent the solution of an equation $R$, this also confuses me.
| Adrian Mathias offers the following explanation here:
Bourbaki use the Hilbert operator but write it as $\tau$ rather than $\varepsilon$, which latter is visually too close to the sign $\in$ for the membership relation. Bourbaki use the word assemblage, or in their English translation, assembly, to mean a finite sequence of signs or leters, the signs being $\tau$, $\square$, $\lor$, $\lnot$, $=$, $\in$ and $\bullet$.
The substitution of the assembly $A$ for each occurrence of the letter $x$ in the assembly $B$ is denoted by $(A|x) B$.
Bourbaki use the word relation to mean what in English-speaking countries is usually called a well-formed formula.
The rules of formation for $\tau$-terms are these:
Let $R$ be an assembly and $x$ a letter; then the assembly $\tau_x(R)$ is obtained in three steps:
*
*form $\tau R$, of length one more than that of $R$;
*link that first occurrence of $\tau$ to all occurrences of $x$ in $R$
*replace all those occurrences of $x$ by an occurrence of $\square$.
In the result $x$ does not occur. The point of that is that there are no bound variables; as variables become bound (by an occurrence of $\tau$), they are replaced by $\square$, and those occurrences of $\square$ are linked to the occurrence of $\tau$ that binds them.
The intended meaning is that $\tau_x(R)$ is some $x$ of which $R$ is true.
|
When was $\pi$ first suggested to be irrational? When was $\pi$ first suggested to be irrational?
According to Wikipedia, this was proved in the 18th century.
Who first claimed / suggested (but not necessarily proved) that $\pi$ is irrational?
I found a passage in Maimonides's Mishna commentary (written circa 1168, Eiruvin 1:5) in which he seems to claim that $\pi$ is irrational. Is this the first mention?
| From a non-wiki source:
Archimedes [1], in the third century B.C., used regular polygons inscribed in and circumscribed about a circle to approximate $\pi$: the more sides a polygon has, the closer to the circle it becomes, and therefore the ratio between the polygon's area and the square of the radius yields approximations to $\pi$. Using this method he showed that $223/71<\pi<22/7$.
Found on PlanetMath: Pi, with a reference to Archimedes' work...
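Archimedes' bracketing is easy to reproduce (a sketch of my own): for a circle of diameter $1$, an inscribed regular $n$-gon has perimeter $n\sin(\pi/n)$ and a circumscribed one $n\tan(\pi/n)$; with $n=96$, the polygon he used, these land inside his stated bounds.

```python
import math

n = 96
lower = n * math.sin(math.pi / n)    # inscribed 96-gon perimeter, ~3.14103
upper = n * math.tan(math.pi / n)    # circumscribed 96-gon perimeter, ~3.14271
print(223 / 71, lower, math.pi, upper, 22 / 7)
```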
|
analytically calculate value of convolution at certain point I'm a computer science student and I'm trying to analytically find the value of the convolution between an ideal step-edge and either a Gaussian function or a first-order derivative of a Gaussian function.
In other words, given an ideal step edge with amplitude $A$ and offset $B$:
$$
i(t)=\left\{
\begin{array}{l l}
A+B & \quad \text{if $t \ge t_{0}$}\\
B & \quad \text{if $t \lt t_{0}$}\\
\end{array} \right.
$$
and the Gaussian function and its first-order derivative
$$
g(t) = \frac{1}{\sigma \sqrt{2\pi}}e^{- \frac{(t - \mu)^2}{2 \sigma^2}}\\
g'(t) = -\frac{t-\mu}{\sigma^3 \sqrt{2\pi}}e^{- \frac{(t - \mu)^2}{2 \sigma^2}}
$$
i'd like to calculate the value of both
$$
o(t) = i(t) \star g(t)\\
o'(t) = i(t) \star g'(t)
$$
at time $t_{0}$ ( i.e. $o(t_{0})$ and $o'(t_{0}) )$.
I tried to solve the convolution integral but unfortunately I'm not mathematically skilled enough to do it. Can you help me?
Thank you in advance very much.
| We can write $i(t):=B+A\chi_{[t_0,+\infty)}$. Using the properties of convolution, we can see that
\begin{align}
o(t_0)&=B+A\int_{-\infty}^{+\infty}g(t)\chi_{[t_0,+\infty)}(t_0-t)dt\\
&=B+A\int_{-\infty}^0\frac 1{\sigma\sqrt{2\pi}}\exp\left(-\frac{(t-\mu)^2}{2\sigma^2}\right)dt.
\end{align}
The integral can be expressed with the erf function.
For the second one, things are easier since we can compute the integrals:
\begin{align}
o'(t_0)&=A\int_{-\infty}^0g'(t)dt=Ag(0).
\end{align}
|
If a player is 50% as good as I am at a game, how many games will it be before she finally wins one game? This is a real life problem. I play Foosball with my colleague who hasn't beaten me so far. I have won 18 in a row. She is about 50% as good as I am (the average margin of victory is 10-5 for me). Mathematically speaking, how many games should it take before she finally wins a game for the first time?
| A reasonable model for such a game is that each goal goes to you with probability $p$ and to her with probability $1-p$. We can calculate $p$ from the average number of goals scored against $10$, and then calculate the fraction of games won by each player from $p$.
The probability that she scores $k$ goals before you score your $10$th is $\binom{9+k}9p^{10}(1-p)^k$, so her average number of goals is
$$\sum_{k=0}^\infty\binom{9+k}9p^{10}(1-p)^kk=10\frac{1-p}p\;.$$
Since you say that this is $5$, we get $10(1-p)=5p$ and thus $p=\frac23$. The probability that you get $10$ goals before she does is
$$\sum_{k=0}^9\binom{9+k}9\left(\frac23\right)^{10}\left(\frac13\right)^k=\frac{1086986240}{1162261467}\approx0.9352\;,$$
so her chance of winning a game should be about $7\%$, and she should win about one game out of $1/(1-0.9352)\approx15$ games – so you winning $18$ in a row isn't out of the ordinary.
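A quick sanity check of these numbers in Python, using exact rational arithmetic (the variable names are mine):

```python
from fractions import Fraction
from math import comb

p = Fraction(2, 3)   # per-goal probability for the stronger player
# probability of reaching 10 goals before the opponent does
win = sum(comb(9 + k, 9) * p**10 * (1 - p)**k for k in range(10))
assert win == Fraction(1086986240, 1162261467)
print(float(win), 1 / (1 - float(win)))   # ≈ 0.9352, ≈ 15.4 games per loss
```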
|
showing almost equal function are actually equal I am trying to show that if $f$ and $g$ are continuous functions on $[a, b]$ and if $f=g$ a.e. on $[a, b]$, then, in fact, $f=g$ on $[a, b]$. Also would a similar assertion be true if $[a, b]$ was replaced by a general measurable set $E$ ?
Some thoughts towards the proof
*
*Since $f$ and $g$ are continuous functions, for all open sets $O$ and $P$ in $f$'s and $g$'s ranges respectively, the sets $f^{-1}\left(O\right)$ and $g^{-1}\left(P\right)$ are open.
*Also since $f=g$ a.e. on $[a, b]$ I am guessing here implies their domains and ranges are equal almost everywhere(or except in the set with measure zero).
$$m(f^{-1}\left(O\right) - g^{-1}\left(P\right)) = 0$$
I am not so sure if i can think of clear strategy to pursue here. Any help would be much appreciated.
Also I would be grateful if you could point out any other general assertions which, if established, would prove two functions are the same under any domain or range specification conditions.
Cheers.
| The set $\{x\mid f(x)\neq g(x)\}$ is open (it's $(f-g)^{-1}(\Bbb R\setminus\{0\})$, the preimage of an open set under a continuous function), and of measure $0$. It's necessarily empty: otherwise it would contain an open interval, which has positive measure.
|
Prove whether a relation is an equivalence relation Define a relation $R$ on $\mathbb{Z}$ by $R = \{(a,b)|a≤b+2\}$.
(a) Prove or disprove: $R$ is reflexive.
(b) Prove or disprove: $R$ is symmetric.
(c) Prove or disprove: $R$ is transitive.
For (a), I know that $R$ is reflexive because if you substitute $\alpha$ into the $a$ and $b$ of the problem, it is very clear that $\alpha \leq \alpha + 2$ for all integers.
For (b), I used a specific counterexample; for $\alpha,\beta$ in the integers, if you select $\alpha = 1$ and $\beta = 50$, it is clear that although $\alpha ≤ \beta + 2$, $\beta$ is certainly not less than or equal to $\alpha + 2$.
However, for (c), I am not sure whether the following proof is fallacious or not:
Proof: Assume $a R b$ and $b R g$;
Hence $a ≤ b + 2$ and $b ≤ g + 2$
Therefore $a - 2 ≤ b$ and $b ≤ g + 2$
So $a-2 ≤ b ≤ g + 2$
and clearly $a-2 ≤ g+2$
So then $a ≤ g+4$
We can see that although $a$ might be less than $ g+2$,
it is not always true.
Therefore we know that the relation $R$ is not transitive.
QED.
It feels wrong
| You are right, the attempt to prove transitivity doesn't work. But your calculation should point towards a counterexample.
Make $a \le b+2$ in an extreme way, by letting $b=a-2$. Also, make $b\le g+2$ in the same extreme way. Then $a \le g+2$ will fail. Perhaps work with an explicit $a$, like $47$.
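If it helps, the counterexample can be checked mechanically (a tiny Python sketch; the helper `R` is just my name for the relation):

```python
def R(a, b):
    # the relation: a R b  <=>  a <= b + 2
    return a <= b + 2

a, b, g = 47, 45, 43          # the extreme choice b = a - 2, g = b - 2
assert R(a, b) and R(b, g)    # both hypotheses hold
assert not R(a, g)            # ...but transitivity fails
```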
|
Difference between "space" and "mathematical structure"? I am trying to understand the difference between a "space" and a "mathematical structure".
I have found the following definition for mathematical structure:
A mathematical structure is a set (or sometimes several sets) with various associated mathematical objects such as subsets, sets of subsets, operations and relations, all of which must satisfy various requirements (axioms). The collection of associated mathematical objects is called the structure and the set is called the underlying set. http://www.abstractmath.org/MM/MMMathStructure.htm
Wikipedia says the following:
In mathematics, a structure on a set, or more generally a type, consists of additional mathematical objects that in some manner attach (or relate) to the set, making it easier to visualize or work with, or endowing the collection with meaning or significance.
http://en.wikipedia.org/wiki/Mathematical_structure
Regarding a space, Wikipedia says:
In mathematics, a space is a set with some added structure. http://en.wikipedia.org/wiki/Space_(mathematics)
I have also found some related questions, but I do not understand from them what the difference between a space and a mathematical structure is:
difference-between-space-and-algebraic-structure
what-does-a-space-mean
| I'm not sure I must be right. Different people have different ideas. Here I just talk about my idea for your question. In my opinion, they are same: the set with some relation betwen the elements of the set. Calling it a space or calling it just a mathematical structure is just a kind of people's habit.
|
Applications of Bounds of the Sum of Inverse Prime Divisors $\sum_{p \mid n} \frac{1}{p}$ Question: What interesting or notable problems involve the additive function $\sum_{p \mid n} \frac{1}{p}$ and depend on sharp bounds of this function?
I'm aware of at least one: A certain upper bound of the sum implies the ABC conjecture.
By the Arithmetic-Geometric Inequality, one can bound the radical function (squarefree kernel) of an integer,
\begin{align}
\left( \frac{\omega(n)}{\sum_{p \mid n} \frac{1}{p}} \right)^{\omega(n)} \leqslant \text{rad}(n).
\end{align}
If for any $\epsilon > 0$ there exists a finite constant $K_{\epsilon}$ such that for any triple $(a,b,c)$ of coprime positive integers, where $c = a + b$, one has
\begin{align}
\sum_{p \mid abc} \frac{1}{p} < \omega(abc) \left( \frac{K_\epsilon}{c} \right)^{1/((1+\epsilon) \omega(abc))},
\end{align}
then
\begin{align}
c < K_{\epsilon} \left( \frac{\omega(abc)}{\sum_{p \mid abc} \frac{1}{p}} \right)^{(1+ \epsilon)\omega(abc)} \leqslant \text{rad}(abc)^{1+\epsilon},
\end{align}
and the ABC-conjecture is true.
Edit: Now, whether or not any triples satisfy the bound on inverse primes is a separate issue. Greg Martin points out that there are infinitely many triples which indeed violate it. This raises the question of whether there are any further refinements of the arithmetic-geometric inequality which remove such anomalies, but this question is secondary.
| As it turns out, your proposed inequality is false - in fact false for any $\epsilon>0$, even huge $\epsilon$.
The following approximation to the twin primes conjecture was proved by Chen's method: there exist infinitely many primes $b$ such that $c=b+2$ is either prime or the product of two primes. With $a=2$, this gives $\omega(abc)\le4$, and so
$$
\omega(abc) \left( \frac{K_\epsilon}{c} \right)^{1/((1+\epsilon) \omega(abc))} \le 4 \bigg( \frac{K_\epsilon}c \bigg)^{1/(4+4\epsilon)},
$$
which (for any fixed $\epsilon>0$) can be arbitrarily small as $c$ increases. However, the other side $\sum_{p\mid abc} \frac1p$ is at least $\frac12$, and so the inequality has infinitely many counterexamples.
|
Are the two statements concening number theory correct? Statement 1: any integer no less than four can be factorized as a linear combination of two and three.
Statement 2: any integer no less than six can be factorized as a linear combination of three, four and five.
I tried for many numbers, it seems the above two statement are correct. For example,
4=2+2; 5=2+3; 6=3+3+3; ...
6=3+3; 7=3+4; 8=4+4; 9=3+3+3; 10=5+5; ...
Can they be proved?
| The question could be generalized, but there is a trivial solution in the given cases.
*
*Any even number can be written as $2n$. For every odd number $x \ge 5$, the number $y = x-3$ is even and $\ge 2$, so $x = y + 3$.
*Likewise, numbers divisible by $4$ can be written as $4n$. Every other number greater than $2$ is of the form $4n + 5$, $4n + 3$ or $4n + 3 + 3$ for some $n \ge 0$, according to its residue mod $4$.
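Both statements can also be verified by brute force for small values (a Python sketch; `representable` is my own helper, and nonnegative integer multiplicities are assumed):

```python
def representable(n, parts):
    # can n be written as a sum of the given parts (nonnegative multiplicities)?
    ok = [False] * (n + 1)
    ok[0] = True
    for m in range(1, n + 1):
        ok[m] = any(m >= p and ok[m - p] for p in parts)
    return ok[n]

assert all(representable(n, (2, 3)) for n in range(4, 500))     # statement 1
assert all(representable(n, (3, 4, 5)) for n in range(6, 500))  # statement 2
```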
|
Prove that for any $x \in \mathbb N$ such that $x<n!$, $x$ is the sum of at most $n$ distinct divisors of $n!$ Prove that every positive integer $x$ with $x<n!$ is the sum of at most $n$ distinct divisors of $n!$.
| Hint: Note that $x = m (n-1)! + r$ where $0 \le m < n$ and $0 \le r < (n-1)!$. Use induction.
EDIT: Oops, this is wrong: as Steven Stadnicki noted, $m (n-1)!$ doesn't necessarily divide $n!$.
|
Accidents of small $n$ In studying mathematics, I sometimes come across examples of general facts that hold for all $n$ greater than some small number. One that comes to mind is the Abel–Ruffini theorem, which states that there is no general algebraic solution for polynomials of degree $n$ except when $n \leq 4$.
It seems that there are many interesting examples of these special accidents of structure that occur when the objects in question are "small enough", and I'd be interested in seeing more of them.
| $\mathbb{R}^n$ has a unique smooth structure except when $n=4$. Furthermore, the cardinality of [diffeomorphism classes of] smooth manifolds that are homeomorphic but not diffeomorphic to $\mathbb{R}^4$ is $\mathfrak{c}=2^{\aleph_0}$.
|
Euler's theorem for powers of 2 According to Euler's theorem,
$$x^{\varphi({2^k})} \equiv 1 \mod 2^k$$
for each $k>0$ and each odd $x$. Obviously, the number of positive integers less than or equal to $2^k$ that are relatively prime to $2^k$ is
$$\varphi({2^k}) = 2^{k-1}$$
so it follows that
$$x^{{2^{k-1}}} \equiv 1 \mod 2^k$$
This is fine, but it seems like even
$$x^{{2^{k-2}}} \equiv 1 \mod 2^k$$
holds, at least my computer didn't find any counterexample.
Can you prove or disprove it?
| An $x$ such that $\phi(m)$ is the least positive integer $k$ for which $x^k \equiv 1 \mod m$ is called a primitive root mod $m$. The positive integers that have primitive roots are
$2, 4, p^n$ and $2 p^n$ for odd primes $p$ and positive integers $n$. In particular you are correct that there is no primitive root for $2^n$ if $n \ge 3$, and thus $x^{2^{n-2}} \equiv 1 \mod 2^n$ for all odd $x$ and all $n \ge 3$.
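A quick numerical confirmation for small exponents (a Python sketch of my own; note the claim needs $n \ge 3$, since e.g. $3^1 \not\equiv 1 \pmod 4$):

```python
for n in range(3, 13):
    m, e = 2**n, 2**(n - 2)
    # every odd x satisfies x^(2^(n-2)) = 1 mod 2^n ...
    assert all(pow(x, e, m) == 1 for x in range(1, m, 2))
    # ... and the exponent is sharp: 3 has order exactly 2^(n-2) mod 2^n
    assert pow(3, e // 2, m) != 1
print("verified for n = 3 .. 12")
```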
|
$\sigma$-algebra of $\theta$-invariant sets in ergodicity theorem for stationary processes Applying Birkhoff's ergodic theorem to a stationary process (a stochastic process with invariant transformation $\theta$, the shift-operator - $\theta(x_1, x_2, x_3,\dotsc) = (x_2, x_3, \dotsc)$) one has a result of the form
$ \frac{X_0 + \dotsb + X_{n-1}}{n} \to \mathbb{E}[ X_0 \mid J_{\theta}]$ a.s.
where the right hand side is the conditional expectation of $X_0$ with respect to the sub-$\sigma$-algebra of $\theta$-invariant sets... What do these sets in $J_{\theta}$ look like? (I know that $\mathbb{P}(A) \in \{0,1\}$ in the ergodic case, but I don't want to demand ergodicity for now.)
| The $T$-invariant $\sigma$-field is generated by the (generally nonmeasurable) partition into the orbits of $T$. See https://mathoverflow.net/questions/88268/partition-into-the-orbits-of-a-dynamical-system
This gives a somewhat geometric view of the invariant $\sigma$-field. You should also study the related notion of ergodic components of $T$.
|
'Linux' math program with interactive terminal? Are there any open source math programs out there that have an interactive terminal and that work on linux?
So for example you could enter two matrices and specify an operation such as multiply and it would then return the answer or a error message specifying why an answer can't be computed? I am just looking for something that can perform basic matrix operations and modular arithmetic.
| Sage is basically a Python program/interpreter that aims to be an open-source mathematical suite (à la Mathematica, Magma, etc.). Many algorithms are implemented as a direct part of Sage, and it also wraps many other open-source mathematics packages, all behind a single interface (the user never has to say which package or algorithm to use for a given computation: it makes the decision itself). It includes GP/Pari and Maxima, and so it handles symbolic manipulation and number theory at least as well as they do.
It has a command-line mode, as well as a web notebook interface (as an example, a public server run by the main developers).
And, although this might not be relevant since your use-case sounds very simple, the syntax is just Python with a small preprocessing step to facilitate some technical details and allow some extra notation (like [1..4] which expands to [1,2,3,4]), so many people already know it and, if not, learning it is very easy.
As a slight tangent, Sage is actually the origin of the increasingly popular Cython language for writing fast "Python", and even offers an easy and transparent method of using Cython for sections of code in the notebook.
|
Randomly selecting a natural number In the answer to these questions:
*
*Probability of picking a random natural number,
*Given two randomly chosen natural numbers, what is the probability that the second is greater than the first?
it is stated that one cannot pick a natural number randomly.
However, in this question:
*
*What is the probability of randomly selecting $ n $ natural numbers, all pairwise coprime?
it is assumed that we can pick $n$ natural numbers randomly.
A description is given in the last question as to how these numbers are randomly selected, to which there seems to be no objection (although the accepted answer is given by one of the people explaining that one cannot pick a random number in the first question).
I know one can't pick a natural number randomly, so how come there doesn't seem to be a problem with randomly picking a number in the last question?
NB: I am happy with some sort of measure-theoretic answer, hence the probability-theory tag, but I think for accessibility to other people a more basic description would be preferable.
| It really depends on what you mean by the "probability of randomly selecting n natural numbers with property $P$". While you cannot pick a random natural number, you can speak of the uniform distribution on $\{1,\dots,N\}$.
For the last problem, the probability is calculated, and is to be understood, as the limit as $N \to \infty$ of the "probability of randomly selecting n natural numbers from $1$ to $N$, all pairwise coprime".
Note that in this sense, the second problem also has an answer. And some probabilities of this type can be connected, via dynamical systems, to an ergodic measure and an ergodic theorem.
Added: The example provided by James Fennell is good for understanding the last paragraph above.
Consider ${\mathbb Z}_2 = {\mathbb Z}/2{\mathbb Z}$, and the action of ${\mathbb Z}$ on ${\mathbb Z}_2$ defined by
$$m+ ( n \mod 2)=(n+m) \mod 2$$
Then, there exists an unique ergodic measure on ${\mathbb Z}_2$, namely $P(0 \mod 2)= P(1 \mod 2)= \frac{1}{2}$.
This is really what we intuitively understand by "half of the integers are even".
Now, the ergodic theory yields (and is something which can be easily proven directly in this case)
$$\lim_{N} \frac{\text{amount of even natural numbers} \leq N}{N} = P( 0 \mod 2) =\frac{1}{2} \,.$$
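To make the limiting-density interpretation concrete, here is a small Python computation (my own illustration, not from the answers): the density of even numbers up to $N$ tends to $1/2$, and the density of coprime pairs tends to $6/\pi^2$, which is exactly the kind of limit the third linked question computes for $n=2$.

```python
import math

N = 400
# density of even numbers in 1..N -> 1/2
evens = sum(1 for n in range(1, N + 1) if n % 2 == 0) / N
assert abs(evens - 0.5) < 1 / N + 1e-12

# density of coprime ordered pairs in {1..N}^2 -> 6/pi^2 ≈ 0.6079
coprime = sum(1 for a in range(1, N + 1) for b in range(1, N + 1)
              if math.gcd(a, b) == 1) / N**2
assert abs(coprime - 6 / math.pi**2) < 0.01
```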
|
Finding all solutions to $y^3 = x^2 + x + 1$ with $x,y$ integers larger than $1$
I am trying to find all solutions to
(1) $y^3 = x^2 + x + 1$, where $x,y$ are integers $> 1$
I have attempted to do this using...I think they are called 'quadratic integers'. It would be great if someone could verify the steps and suggest simplifications to this approach. I am also wondering whether my use of Mathematica invalidates this approach.
My exploration is based on a proof I read that $x^3 + y^3 = z^3$ has no non-trivial integer solutions. This proof uses the ring Z[W] where $W = \frac{(-1 + \sqrt{-3})}{2}$. I don't understand most of this proof, or what a ring is, but I get the general idea. The questions I have about my attempted approach are
*
*Is it valid?
*How could it be simplified?
Solution:
Let $w = (-1 + \sqrt{-3})/2$. (Somehow, this can be considered an "integer" even though it doesn't look anything like one!)
Now $x^3 - 1 = (x-1)(x-w)(x-w^2)$ so that, $(x^3 - 1)/(x-1) = x^2 + x + 1 = (x-w)(x-w^2)$. Hence
$y^3 = x^2 + x + 1 = (x-w)(x-w^2).$
Since $x-w, x-w^2$ are coprime up to units (so I have read) both are "cubes". Letting $u$ be one of the 6 units in Z[w], we can say
$x-w = u(a+bw)^3 = u(c + dw)$ where
$c = a^3 + b^3 - 3ab^2, d = 3ab(a-b)$
Unfortunately, the wretched units complicate matters. There are 6 units hence 6 cases, as follows:
1) $1(c+dw) = c + dw$
2) $-1(c+dw) = -c + -dw$
3) $w(c+dw) = -d + (c-d)w$
4) $-w(c+dw) = d + (d-c)w$
5) $-w^2(c+dw) = c-d + cw$
6) $w^2(c+dw) = d-c + -cw$
Fortunately, the first two cases can be eliminated. For example, if $u = 1$ then $x-w = c+dw$ so that $d = -1 = 3ab(a-b).$ But this is not possible for integers $a,b$. The same reasoning applies to $u = -1$.
For the rest I rely on a program called Mathematica, which perhaps invalidates my reasoning, as you will see.
We attack case 5. Here
$x = c-d = a^3 + b^3 - 3a^2b$, and $c = a^3 + b^3 - 3ab^2 = -1.$
According to Mathematica the only integer solutions to $c = -1$ are
$(a,b) = (3,2), (1,1), (0,-1), (-1,0), (-1,-3), (-2,1).$
Plugging these into $x = c-d$ we find that no value of x that is greater than 1. So case 5 is eliminated, as is 6 by similar reasoning.
Examining case 4 we see that $d-c = -(a^3 + b^3 - 3a^2*b) = -1$ with solutions
$(-2,-3), (-1,-1), (-1,2), (0,1), (1,0), (3,1).$
Plugging these values into $x = d = 3ab(a-b)$ yields only one significant value, namely $x = 18$ (e.g. $(a,b)=(3,1)$). The same result is given by case 3. Hence the only solution to (1) is $7^3 = 18^2 + 18 + 1$
However, I'm unsure this approach is valid because I don't know how Mathematica found solutions to expressions such as $a^3 + b^3 - 3ab^2=-1$. These seem more difficult than the original question of $y^3 = x^2 + x + 1$, although I note that Mathematica could not solve the latter.
| In this answer to a related thread, I outline a very elementary solution to this problem. There are probably even simpler ones, but I thought you might appreciate seeing that.
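Independently of the algebraic argument, a brute-force search (a Python sketch of my own; `icbrt` is an integer cube root helper) confirms that $x=18$, $y=7$ is the only solution with $1 < x < 10^5$:

```python
def icbrt(n):
    # integer cube root: float guess plus correction
    r = round(n ** (1 / 3))
    while r**3 > n:
        r -= 1
    while (r + 1)**3 <= n:
        r += 1
    return r

solutions = [(x, icbrt(x * x + x + 1))
             for x in range(2, 10**5)
             if icbrt(x * x + x + 1)**3 == x * x + x + 1]
assert solutions == [(18, 7)]   # 7^3 = 343 = 18^2 + 18 + 1
```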
|
Is every monomial over the UNIT OPEN BALL bounded by its $L^2$ norm? Let $m\geq 2$ and $B^{m}\subset \mathbb{R}^{m}$ be the unit OPEN ball. For any fixed multi-index $\alpha\in\mathbb{N}^{m}$ with $|\alpha|=n$ large and $x\in B^{m}$
$$|x^{\alpha}|^{2}\leq \int_{B^{m}}|y^{\alpha}|^{2}dy\,??$$
| No. For a counterexample, take $\alpha=(n,0,\ldots,0)$. Obviously, $\max_{S^m}|x^\alpha|=1$, but an easy calculation shows
$$
\int_{S^m}|y^\alpha|^2{\mathrm{d}}\sigma(y) \to 0,
$$
as $n\to\infty$.
For the updated question, that involves the open unit ball, the answer is the same. With the same counterexample, we have
$$
\int_{B^m}|y^\alpha|^2{\mathrm{d}}y \to 0,
$$
as $n\to\infty$.
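To see how fast the integral shrinks, for $m=2$ one can compute it in closed form: in polar coordinates, using the Wallis value $\int_0^{2\pi}\cos^{2n}\theta\,d\theta = 2\pi\binom{2n}{n}/4^n$, one gets $\int_{B^2} y_1^{2n}\,dy = \pi\binom{2n}{n}/\bigl((n+1)4^n\bigr)$. A quick Python check (my own sketch):

```python
import math

def disk_integral(n):
    # exact value of the integral of y1^(2n) over the unit disk B^2,
    # via polar coordinates and the Wallis formula
    return math.pi * math.comb(2 * n, n) / ((n + 1) * 4**n)

vals = [disk_integral(n) for n in (1, 5, 20, 100)]
assert abs(vals[0] - math.pi / 4) < 1e-12          # n = 1: integral of x^2 over the disk
assert all(a > b for a, b in zip(vals, vals[1:]))  # strictly decreasing
assert vals[-1] < 0.01                             # already tiny at n = 100
```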
|
Invertible elements in the ring $K[x,y,z,t]/(xy+zt-1)$ I would like to know how big is the set of invertible elements in the ring $$R=K[x,y,z,t]/(xy+zt-1),$$ where $K$ is any field. In particular whether any invertible element is a (edit: scalar) multiple of $1$, or there is something else. Any help is greatly appreciated.
| Let $R=K[X,Y,Z,T]/(XY+ZT-1)$. In the following we denote by $x,y,z,t$ the residue classes of $X,Y,Z,T$ modulo the ideal $(XY+ZT-1)$. Let $f\in R$ be invertible. Then its image in $R[x^{-1}]$ is also invertible. But $R[x^{-1}]=K[x,z,t][x^{-1}]$ and $x$, $z$, $t$ are algebraically independent over $K$. Thus $f=cx^n$ with $c\in K$, $c\ne0$, and $n\in\mathbb Z$. Since $R/xR\simeq K[Z,Z^{-1}][Y]$ we get that $x$ is a prime element, and therefore $n=0$. Conclusion: if $f$ is invertible, then $f\in K-\{0\}$.
|
Assuming that $(a, b) = 2$, prove that $(a + b, a − b) = 1$ or $2$ Statement to be proved: Assuming that $(a, b) = 2$, prove that
$(a + b, a − b) = 1$ or $2$.
I was thinking that $(a,b)=\gcd(a,b)$ and tried to prove the statement above, only to realise that it is not true.
$(6,2)=2$
but $(8,4)=4$, seemingly contradicting the statement to be proved?
Is there any other meaning for $(a,b)$, or is there a typo in the question?
Sincere thanks for any help.
| If $d\mid(a+b)$ and $d\mid(a-b)$, then $d\mid(a+b)\pm(a-b)$, i.e. $d\mid 2a$ and $d\mid 2b$, so $d\mid(2a,2b)=2(a,b)$.
This is true for any common divisor $d$ of $a+b$ and $a-b$, in particular for $d=(a+b,a-b)$ itself.
Conversely, since $(a,b)\mid P a+Q b$ for any integers $P$, $Q$, we have $(a,b)\mid(a+b)$ and $(a,b)\mid(a-b)$, so $(a,b)\mid d$.
Therefore $d=(a,b)$ or $d=2(a,b)$.
Here $(a,b)=2$, so $(a+b, a-b)$ must divide $4$, i.e. it equals $2$ or $4$ (it cannot be $1$, since $a$ and $b$ are both even).
Observation:
$a$,$b$ can be of the form
(i) $4n+2$, $4m$ where $(2n+1,m)=1$, then $(a+b, a-b)=2$, ex.$(6,4) = 2 \Rightarrow (6+4, 6-4)=2$
or (ii) $4n+2$, $4m+2$ where $(2n+1,2m+1)=1$, then $(a+b, a-b)=4$, ex.$(6, 10)=2 \Rightarrow (6+10, 6 - 10)=4$
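A quick brute-force check of the conclusion (a Python sketch of my own):

```python
from math import gcd

vals = set()
for a in range(2, 200, 2):
    for b in range(2, 200, 2):
        if a != b and gcd(a, b) == 2:
            vals.add(gcd(a + b, abs(a - b)))
assert vals == {2, 4}   # (a+b, a-b) is always 2 or 4, never 1
```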
|
How does a Class group measure the failure of Unique factorization? I have been stuck with a severe problem from last few days. I have developed some intuition for my-self in understanding the class group, but I lost the track of it in my brain. So I am now facing a hell.
The Class group is given by $\rm{Cl}(F)=$ {Fractional Ideals of F} / {Principle fractional Ideals of F} , ($F$ is a quadratic number field) so that we are actually removing the Principal fractional ideals there (that's what I understood by quotient group). But how can that class group measure the failure of Unique Factorization ?
For example a common example that can be found in any text books is $\mathbb{Z[\sqrt{-5}]}$
in which we can factorize $6=2\cdot3=(1+\sqrt{-5})(1-\sqrt{-5})$. So it fails to have unique factorization. Now can someone kindly clarify these points ?
*
*How can one construct $\rm{Cl}(\sqrt{-5})$ by using the quotient groups ?
*What are the elements of $\rm{Cl}(\sqrt{-5})$ ? What do those elements indicate ? ( I think they must some-how indicate the residues that are preventing the $\mathbb{Z[\sqrt{-5}]}$ from having a unique factorization )
*What does $h(n)$ indicate ? ( Class number ). When $h(n)=1$ it implies that unique factorization exists . But what does the $1$ in $h(n)=1$ indicate. It means that there is one element in the class group , but doesn't that prevent Unique Factorization ?
EDIT:
I am interested in knowing whether are there any polynomial time running algorithms that list out all the numbers that fail to hold the unique factorization with in a number field ?
I am expecting that may be Class group might have something to do with these things. By using the class group of a number field can we extract all such numbers ? For example, if we plug in $\mathbb{Z}[\sqrt{-5}]$ then we need to have $6$ and other numbers that don't admit to a unique factorization.
Please do answer the above points and save me from confusion .
Thank you.
| $h=1$ means that the size of the class group is $1$. That means that the group is the trivial group with only one element, the identity. The identity element of the class group is the equivalence class of principal ideals. Hence $h=1$ is equivalent to "all fractional ideals are principal", or equivalently, "all ideals are principal".
|
What is the chance of an event happening a set number of times or more after a number of trials? Assuming every trial is independent from all the others and the probability of a successful run is the same every trial, how can you determine the chance of a successful trial a set number of times or more?
For example, you run 20 independent trials and the chance of a "successful" independent trial each time is 60%. How would you determine the chance of 3 or more "successful" trials?
| If the probability of success on any trial is $p$, then the probability of exactly $k$ successes in $n$ trials is
$$\binom{n}{k}p^k(1-p)^{n-k}.$$
For details, look for the Binomial Distribution on Wikipedia.
So to calculate the probability of $3$ or more successes in your example, let $p=0.60$ and $n=20$. Then calculate the probabilities that $k=3$, $k=4$, and so on up to $k=20$ using the above formula, and add up.
A lot of work! It is much easier in this case to find the probability of $2$ or fewer successes by using the above formula, and subtracting the answer from $1$. So, with $p=0.60$, the probability of $3$ or more successes is
$$1-\left(\binom{20}{0}p^0(1-p)^{20}+\binom{20}{1}p(1-p)^{19}+\binom{20}{2}p^2(1-p)^{18} \right).$$
For the calculations, note that $\binom{n}{k}=\frac{n!}{k!(n-k)!}$. In particular, $\binom{20}{0}=1$, $\binom{20}{1}=20$ and $\binom{20}{2}=\frac{(20)(19)}{2!}=190$.
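In code (Python; `math.comb` supplies the binomial coefficients, and the variable names are mine):

```python
from math import comb

p, n = 0.60, 20
# P(at most 2 successes), then complement
p_at_most_2 = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(3))
p_at_least_3 = 1 - p_at_most_2
print(p_at_least_3)   # ≈ 0.999995
```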
|
Longest cylinder of specified radius in a given cuboid Find the maximum height (in exact value) of a cylinder of radius $x$ so that it can be completely placed into a $100 cm \times 60 cm \times50 cm$ cuboid.
This question comes from http://hk.knowledge.yahoo.com/question/question?qid=7012072800395.
I know that this question is equivalent to finding two times the maximum height (in exact value) of a right cone of radius $x$ that can be completely placed into a $50 cm \times$ $30 cm \times 25 cm$ cuboid with the apex of the right cone at a corner of the cuboid, but I still have no idea so far.
| A beginning:
Let $a_i>0$ $\>(1\leq i\leq 3)$ be the dimensions of the box. Then we are looking for a unit vector ${\bf u}=(u_1,u_2,u_3)$ in the first octant and a length $\ell>0$ such that
$$\ell u_i+2 x\sqrt{1-u_i^2}=a_i\qquad(1\leq i\leq 3)\ .$$
When $x$ is small compared to the dimensions of the box one might begin with
$$\ell^{(0)}:=d:=\sqrt{a_1^2+a_2^2+a_3^2}\ ,\qquad u_i^{(0)}:={a_i\over d}\quad(1\leq i\leq3)$$
and do a few Newton iterations in order to obtain an approximate solution.
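Here is one way the suggested Newton iteration might look in code (a Python sketch under my own assumptions: the three equations above plus the unit-vector constraint, a hand-rolled $4\times4$ linear solver, and the sample data $100\times60\times50$ with $x=5$; this only solves the stated system and does not by itself prove the resulting configuration is optimal):

```python
import math

def gauss(M, b):
    # solve M y = b by Gaussian elimination with partial pivoting
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(M)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    y = [0.0] * n
    for r in range(n - 1, -1, -1):
        y[r] = (M[r][n] - sum(M[r][k] * y[k] for k in range(r + 1, n))) / M[r][r]
    return y

def solve_cylinder(A, x, iters=60):
    # Newton iteration for  l*u_i + 2x*sqrt(1-u_i^2) = a_i  (i=1..3),  |u| = 1,
    # started from the suggested guess u = A/|A|, l = |A|
    d = math.sqrt(sum(a * a for a in A))
    z = [a / d for a in A] + [d]
    for _ in range(iters):
        u, l = z[:3], z[3]
        s = [math.sqrt(1.0 - ui * ui) for ui in u]
        F = [l * u[i] + 2 * x * s[i] - A[i] for i in range(3)]
        F.append(sum(ui * ui for ui in u) - 1.0)
        J = [[0.0] * 4 for _ in range(4)]
        for i in range(3):
            J[i][i] = l - 2 * x * u[i] / s[i]
            J[i][3] = u[i]
            J[3][i] = 2 * u[i]
        step = gauss(J, F)
        z = [z[i] - step[i] for i in range(4)]
        z[:3] = [min(max(zi, 1e-9), 1 - 1e-9) for zi in z[:3]]  # keep sqrt defined
    return z

A, x = [100.0, 60.0, 50.0], 5.0
u1, u2, u3, l = solve_cylinder(A, x)
res = [l * u + 2 * x * math.sqrt(1 - u * u) - a for u, a in zip((u1, u2, u3), A)]
assert max(abs(r) for r in res) < 1e-7   # the system is solved to high accuracy
```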
|
When can one use logarithms to multiply matrices If $a,b \in \mathbb{Z}_{+}$, then $\exp(\log(a)+\log(b))=ab$. If $A$ and $B$ are square matrices, when can we multiply $A$ and $B$ using logarithms? If $A \neq B^{-1}$, should $A$ and $B$ be symmetric?
| When they commute, i.e. when $AB=BA$ (for instance, when they are simultaneously diagonalizable and hence share a full set of eigenvectors); in that case $\exp(\log(A)+\log(B))=AB$ whenever the logarithms are defined.
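A minimal numerical illustration (a Python sketch of my own) with commuting diagonal matrices having positive eigenvalues, where $\log$ and $\exp$ act entrywise on the diagonal:

```python
import math

def diag_log(D):
    # matrix logarithm of a positive diagonal matrix: entrywise log
    return [math.log(d) for d in D]

def diag_exp(D):
    # matrix exponential of a diagonal matrix: entrywise exp
    return [math.exp(d) for d in D]

A, B = [2.0, 3.0], [4.0, 5.0]            # diag(2,3) and diag(4,5); these commute
prod = [a * b for a, b in zip(A, B)]      # AB = diag(8, 15)
via_logs = diag_exp([p + q for p, q in zip(diag_log(A), diag_log(B))])
assert all(abs(p - q) < 1e-12 for p, q in zip(prod, via_logs))
```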
|
If $a^n-b^n$ is integer for all positive integral value of $n$, then $a$, $b$ must also be integers. If $a^n-b^n$ is integer for all positive integral value of n with a≠b, then a,b must also be integers.
Source: Number Theory for Mathematical Contests, Problem 201, Page 34.
Let $a=A+c$ and $b=B+d$ where A,B are integers and c,d are non-negative fractions<1.
As a-b is integer, c=d.
$a^2-b^2=(A+c)^2-(B+c)^2=A^2-B^2+2(A-B)c=I_2(say),$ where $I_2$ is an integer
So, $c=\frac{I_2-(A^2-B^2)}{2(A-B)}$ i.e., a rational fraction $=\frac{p}{q}$(say) where (p,q)=1.
When I tried to proceed for the higher values of n, things became too complex for calculation.
| Assuming $a \neq b$:
if $a^n - b^n$ is an integer for all $n$, then it is in particular an integer for $n = 1$, $n = 2$ and $n = 3$.
From $n = 1$ and $n = 2$, the quotient $a+b=\frac{a^2-b^2}{a-b}$ is rational, hence $a$ and $b$ are rational; using $n = 3$ as well, you should be able to prove that $a$ is an integer. (The case $n=3$ really is needed: $a=\frac12$, $b=-\frac12$ give integers for $n=1$ and $n=2$ but not for $n=3$.)
|
Is this CRC calculation correct? I am currently studying for an exam and trying to check a message (binary) for errors using a polynomial, I would like if somebody could verify that my results below are (in)valid.
Thanks.
Message: 11110101 11110101
Polynomial: $x^4 + x^2 + 1$
Divisor (Derived from polynomial): 10101
Remainder:111
Result: There is an error in the above message?
Also, I had a link to an online calculator that would do the division but can't relocate it, any links to a calculator would be greatly appreciated.
Thanks.
| 1111010111110101 | 10101
+10101 | 110010111010
10111 |
+10101 |
10011 |
+10101 |
11011 |
+10101 |
11101 |
+10101 |
10000 |
+10101 |
10110 |
+10101 |
111 | <- you are right! there is an error in the message!
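The same division can be checked with a few lines of Python (my own sketch; `crc_remainder` just performs GF(2) long division of the message by the divisor):

```python
def crc_remainder(bits, divisor):
    # GF(2) polynomial remainder of the message bits by the divisor bits
    r, top = 0, 1 << (len(divisor) - 1)
    d = int(divisor, 2)
    for b in bits:
        r = (r << 1) | int(b)
        if r & top:
            r ^= d
    return r

rem = crc_remainder("1111010111110101", "10101")
assert rem == 0b111   # nonzero remainder -> the message fails the CRC check
```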
|
Is concave quadratic + linear a concave function? Basic question about convexity/concavity:
Is the difference of a concave quadratic function $f(X)$ of a matrix $X$ and a linear function $l(X)$ a concave function?
I.e., is $f(X)-l(X)$ concave?
If so (or if not), what conditions need to be checked?
| A linear function is both concave and convex (in particular $-l$ is concave), and the sum of two concave functions is concave; hence $f-l=f+(-l)$ is concave.
|
Function writen as two functions having IVP I heard this problem and I am a bit stuck.
Given a function $f : I \rightarrow \mathbb{R}$ where $I \subset \mathbb{R}$ is an open interval.
Then $f$ can be written $f=g+h$ where $g,h$ are defined on the same interval and have the Intermediate Value Property. I first tried to construct one of the functions arbitrarily at two points and then tried to define it so as to have the IVP, but I cannot manage to control the other function: as I fix one I destroy the other, and I cannot be certain I have enough points to define both in a way that gives them the IVP.
Any help appreciated! Thank you.
| Edit: In fact, all the information I give below (and more) is provided in another question in a much more organized way. I just found it.
My original post: The Intermediate Value Property is also called the Darboux property. Sierpinski first proved this theorem. The problem is treated in a blog of Beni Bogosel, a member of our own community, in much more generality too.
http://mathproblems123.files.wordpress.com/2010/07/strange-functions.pdf
It is also proved in( As I found from Wikipedia)
Bruckner, Andrew M: Differentiation of real functions, 2 ed, page 6, American Mathematical Society, 1994
|
Canonical Isomorphism Between $\mathbf{V}$ and $(\mathbf{V}^*)^*$ For the finite-dimensional case, we have a canonical isomorphism between $\mathbf{V}$, a vector space with the usual addition and scalar multiplication, and $(\mathbf{V}^*)^*$, the "dual of the dual of $\mathbf{V}$." This canonical isomorphism means that the isomorphism is always the same, independent of additional choices.
We can define a map $I : \mathbf{V} \to (\mathbf{V}^*)^*$ by $$x \mapsto I(x) \in (\mathbf{V}^*)^* \ \text{ where } \ I(x)(f) = f(x) \ \text{for any } \ f \in \mathbf{V}^*$$
My Question: what can go wrong in the infinite-dimensional case? The notes I am studying remark that if $\mathbf{V}$ is finite-dimensional, then $I$ is an isomorphism, but that in the infinite-dimensional case things can go wrong. How?
| There are two things that can go wrong in the infinite-dimensional (normed) case.
First you could try to take the algebraic dual of $V$. Here it turns out that $V^{**}$ is much larger than $V$ for simple cardinality reasons as outlined in the Wikipedia article.
On the other hand, if $V$ is a normed linear space and you take the continuous dual $V'$, then $V'$ (and thus also the bidual $V''$) will always be a Banach space. But! While each $I(x)$ is easily seen to be a continuous functional on $V'$ (since $|I(x)(f)| = |f(x)| \le \|f\|\,\|x\|$), it is not at all obvious that $I$ is injective, i.e. that there are enough elements of $V'$ to separate the points of $V$. That is a consequence of the Hahn–Banach theorem, which (roughly) states that there are "enough" continuous linear maps from $V$ to the base field in order for $V'$ and $V''$ to be interesting. Even then, $I$ need not be surjective; spaces for which it is are called reflexive.
If $V$ is not a normed linear space, then things are more complicated and better left to a more advanced course in functional analysis.
|
Enumerate certain special configurations - combinatorics. Consider the vertices of a regular n-gon, numbered 1 through n. (Only the vertices, not the sides).
A "configuration" means some of these vertices are joined by edges.
A "good" configuration is one with the following properties:
1) There is at least one edge.
2) There can be multiple edges from a single vertex.
3) If A and B are joined by an edge, then the degree of A equals the degree of B.
4) No two edges may intersect each other, except possibly at shared endpoints.
5) The degree of each vertex is at most k. (where $0\leq k \leq n$ )
Find f(n, k), the number of good configurations. For example, f(3, 2) = 4 and f(n, 0) = 0.
| We can show that all vertex degrees are at most $2$, i.e. $k \le 2$ in any good configuration. Suppose for contradiction that $n$ is the size of a minimal counterexample, a convex $n$-gon with some degree $k \gt 2$. By minimality (discarding any vertices not connected to the given one) all vertices in the $n$-gon have degree $k$.
But it is known that the maximum number of nonintersecting diagonals of an $n$-gon is $n-3$. Add in the outer edges of the polygon itself and we'd have at most $2n-3$ nonintersecting edges.
But for $n$ vertices to each have degree $k$ requires $\frac{nk}{2}$ edges. Now an elementary inequality argument:
$$ \frac{nk}{2} \le 2n-3 $$
$$ k \le 4 - \frac{6}{n} $$
immediately gives us that $k \le 3$.
To rule out $k = 3$ makes essential use of the convexity of the polygon, in that a nonconvex quadrilateral allows nonintersecting edges with three meeting at each vertex. However in a convex polygon once the maximum number of nonintersecting diagonals are added, the result is a dissection of the polygon into triangles. At least two of these triangles consist of two "external" edges and one diagonal, so that the corresponding vertex opposite the diagonal edge has only degree $2$.
Added: Let's flesh out the reduction of $f(n,k)$ to fewer vertices by consideration of cases: vertex $1$ has (a) no edge, (b) one edge, or (c) two edges (if $k=2$ allows).
If vertex $1$ has no edge, we get a contribution of $f(n-1,k)$ from "good configurations" of the remaining $n-1$ vertices.
If vertex $1$ has just one edge, its other endpoint vertex $p$ must also have only that edge. This induces a contribution summing the "good configurations" of the two sets of vertices on either side of vertex $p$, including possibly empty ones there since we've already satisfied the requirement of at least one edge:
$$ \sum_{p=2}^{n} (1 + f(p-2,k)) (1 + f(n-p,k)) $$
If vertex $1$ has two edges (as noted, only possible if $k=2$), the contributions are similar but need more bookkeeping. Vertex $1$ belongs to a cycle of $r \gt 2$ edges, and the remaining $n-r$ vertices are partitioned thereby into varying consecutive subsets, each of which has its own "good configuration" (or an empty one).
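As a sanity check on the analysis above, here is a brute-force enumeration in Python (a sketch; it assumes at most one edge per vertex pair, and uses the fact that two chords of a convex polygon cross iff their endpoints strictly interleave). It reproduces the example values $f(3,2)=4$ and $f(n,0)=0$ from the question.

```python
from itertools import combinations

def crosses(e1, e2):
    # Two chords of a convex polygon cross in their interiors
    # iff their endpoints strictly interleave around the polygon.
    (a, b), (c, d) = sorted(e1), sorted(e2)
    return a < c < b < d or c < a < d < b

def f(n, k):
    vertices = range(1, n + 1)
    segments = list(combinations(vertices, 2))
    count = 0
    # enumerate every nonempty set of pairwise noncrossing edges
    for r in range(1, len(segments) + 1):
        for edges in combinations(segments, r):
            if any(crosses(e, e2) for e, e2 in combinations(edges, 2)):
                continue
            deg = {v: 0 for v in vertices}
            for a, b in edges:
                deg[a] += 1
                deg[b] += 1
            # degree at most k, and endpoints of each edge have equal degree
            if all(d <= k for d in deg.values()) and \
               all(deg[a] == deg[b] for a, b in edges):
                count += 1
    return count

print(f(3, 2))  # 4, matching the example in the question
```

This only scales to small $n$, of course, but it is handy for testing any proposed recurrence.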
|
Showing the sum of orthogonal projections with orthogonal ranges is also an orthogonal projection
Show that if $P$ and $Q$ are two orthogonal projections with orthogonal ranges, then $P+Q$ is also an orthogonal projection.
First I need to show $(P+Q)^\ast = P+Q$. I am thinking that since
\begin{align*}
((P+Q)^\ast f , g) & = (f,(P+Q)g) \\
& = (f,Pg) + (f,Qg) \\
& = (P^\ast f,g) + (Q^\ast f,g) \\
& = (Pf,g) + (Qf,g) \\
& = ((P+Q)f,g),
\end{align*}
we get $(P+Q)^\ast=P+Q$.
I am not sure if what I am thinking is right since I assumed that $(P+Q)f=Pf+Qf$ is true for any bounded linear operator $P$, $Q$.
For $(P+Q)^2=P+Q$, I use
$$(P+Q)^2= P^2 + Q^2 + PQ +QP,$$
but I cant show $PQ=0$ and $QP=0$.
Anyone can help me? Thanks.
| To complete your proof we need the following observations.
If $\langle f,g\rangle=0$ for all $g\in H$, then $f=0$. Indeed, take $g=f$; then $\langle f,f\rangle=0$, which by the definition of the inner product implies $f=0$.
Since $\mathrm{Im}(P)\perp\mathrm{Im}(Q)$, for all $f,g\in H$ we have $\langle Pf,Qg\rangle=0$, which is equivalent to $\langle Q^*Pf,g\rangle=0$ for all $f,g\in H$. Using the observation from the previous paragraph we see that $Q^*Pf=0$ for all $f\in H$, i.e. $Q^*P=0$. Since $Q^*=Q$ and $P^*=P$ we conclude
$$
QP=Q^*P=0
$$
$$
PQ=P^*Q^*=(QP)^*=0^*=0
$$
In fact $R$ is an orthogonal projection iff $R=R^*=R^2$. In this case we can prove your result almost algebraically
$$
(P+Q)^*=P^*+Q^*=P+Q
$$
$$
(P+Q)^2=P^2+PQ+QP+Q^2=P+0+0+Q=P+Q
$$
I said almost, because the proof of $PQ=QP=0$ still requires some machinery with elements of $H$ and its inner product.
|
Trigonometry proof involving sum difference and product formula How would I solve the following trig problem.
$$\cos^5x = \frac{1}{16} \left( 10 \cos x + 5 \cos 3x + \cos 5x \right)$$
I am not sure what to do, really. I know it involves the sum and difference identities, but I don't know how to proceed.
$$\require{cancel}
\frac1{16} [ 5(\cos 3x+\cos x)+\cos 5x+5\cos x ]\\
=\frac1{16}[10\cos x \cos 2x+ \cos 5x +5 \cos x]\\
=\frac1{16} [5\cos x(2\cos 2x+1)+\cos 5x]\\
=\frac1{16} [5\cos x(2(2\cos^2 x-1)+1)+\cos 5x]\\
=\frac1{16} [5\cos x(4\cos^2 x-1)+\cos 5x]\\
=\frac1{16} [20\cos^3 x-5\cos x+\cos 5x]\\
=\frac1{16} [\cancel{20\cos^3 x}\cancel{-5\cos x}+16\cos^5 x\cancel{-20\cos^3 x}+\cancel{5\cos x}]\\
=\frac1{16} (16\cos^5 x)\\=\cos^5 x$$
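A quick numerical spot check of the identity, using only Python's standard library:

```python
import math

# spot-check the identity at a few sample points
for x in [0.1, 0.7, 1.3, 2.9, -4.2]:
    lhs = math.cos(x) ** 5
    rhs = (10 * math.cos(x) + 5 * math.cos(3 * x) + math.cos(5 * x)) / 16
    assert abs(lhs - rhs) < 1e-12
```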
|
A Tri-Factorable Positive integer Found this problem in my SAT book the other day and wanted to see if anyone could help me out.
A positive integer is said to be "tri-factorable" if it is the product of three consecutive integers. How many positive integers less than 1,000 are tri-factorable?
| HINT:
$1,000= 10*10*10<10*11*12$ so in the product $n*(n+1)*(n+2)$, $n$ must be less than $10.$
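Counting the valid products directly (a minimal sketch; the largest one is $9\cdot 10\cdot 11 = 990$):

```python
# count positive n with n(n+1)(n+2) < 1000
count = 0
n = 1
while n * (n + 1) * (n + 2) < 1000:
    count += 1
    n += 1
print(count)  # 9
```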
|
Find the domain of $f(x)=\frac{3x+1}{\sqrt{x^2+x-2}}$
Find the domain of $f(x)=\dfrac{3x+1}{\sqrt{x^2+x-2}}$
This is my work so far:
$$\dfrac{3x+1}{\sqrt{x^2+x-2}}\cdot \sqrt{\dfrac{x^2+x-2}{x^2+x-2}}$$
$$\dfrac{(3x+1)(\sqrt{x^2+x-2})}{x^2+x-2}$$
$(3x+1)(\sqrt{x^2+x-2})$ = $\alpha$ (Just because it's too much to type)
$$\dfrac{\alpha}{\left[\dfrac{-1\pm \sqrt{1-4(1)(-2)}}{2}\right]}$$
$$\dfrac{\alpha}{\left[\dfrac{-1\pm \sqrt{9}}{2}\right]}$$
$$\dfrac{\alpha}{\left[\left(\dfrac{-1+3}{2}\right)\left(\dfrac{-1-3}{2}\right)\right]}$$
$$\dfrac{\alpha}{(1)(-2)}$$
Now, I checked on WolframAlpha and the domain is $x\in \mathbb R: x\lt -2$ or $x\gt 1$
But my question is, what do I do with the top of the problem? Or does it just not matter at all.
| Note that $x^2+x-2=(x-1)(x+2)$. There is a problem only if $(x-1)(x+2)$ is $0$ or negative. (If it is $0$, we have a division by $0$ issue, and if it is negative we have a square root of a negative issue.)
Can you find where $(x-1)(x+2)$ is $0$? Can you find where it is negative? Together, these are the numbers which are not in the domain of $f(x)$.
Or, to view things more positively, the function $f(x)$ is defined precisely for all $x$ such that $(x-1)(x+2)$ is positive.
Remark: Let $g(x)=x^2+x-2$. We want to know where $g(x)$ is positive. By factoring, or by using the Quadratic Formula, we can see that $g(x)=0$ at $x=-2$ and at $x=1$.
It is a useful fact that a nice continuous function can only change sign by going through $0$. This means that in the interval $(-\infty, -2)$, $g(x)$ has constant sign. It also has constant sign in $(-2,1)$, and also in $(1,\infty)$.
We still don't know which signs. But this can be determined by finding $g(x)$ at a test point in each interval. For example, let $x=-100$. Clearly $g(-100)$ is positive, so $g(x)$ is positive for all $x$ in the interval $(-\infty,-2)$.
For the interval $(-2,1)$, $x=0$ is a convenient test point. Note that $g(0) \lt 0$, so $g(x)$ is negative in the whole interval $(-2,1)$. A similar calculation will settle things for the remaining interval $(1,\infty)$.
There are many other ways to handle the problem. For example, you know that the parabola $y=(x+2)(x-1)$ is upward facing, and crosses the $x$-axis at $x=-2$ and $x=1$. So it is below the $x$-axis (negative) only when $-2 \lt x \lt 1$.
Or we can work with pure inequalities. The product $(x+2)(x-1)$ is positive when $x+2$ and $x-1$ are both positive. This happens when $x \gt 1$. The product is also positive when $x+2$ and $x-1$ are both negative. This happens when $x \lt -2$.
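The conclusion is easy to spot-check numerically (the helper name `in_domain` is just for illustration):

```python
def in_domain(x):
    # f is defined exactly where the radicand x^2 + x - 2 is strictly positive
    return x * x + x - 2 > 0

assert in_domain(-3) and in_domain(2)          # points in (-inf, -2) and (1, inf)
assert not in_domain(-2) and not in_domain(0)  # boundary and the gap (-2, 1)
assert not in_domain(1)
```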
|
Show that if $\kappa$ is an uncountable cardinal, then $\kappa$ is an epsilon number Firstly, I give the definition of the epsilon number:
$\alpha$ is called an epsilon number iff $\omega^\alpha=\alpha$.
Show that if $\kappa$ is an uncountable cardinal, then $\kappa$ is an epsilon number and there are $\kappa$ epsilon numbers below $\kappa$; in particular, the first epsilon number, called $\epsilon_0$, is countable.
I've tried, however I don't have any idea for this. Could anybody help me?
| The following is intended as a half-outline/half-solution.
We will prove by induction that every uncountable cardinal $\kappa$ is an $\epsilon$-number, and that the family $E_\kappa = \{ \alpha < \kappa : \omega^\alpha = \alpha \}$ has cardinality $\kappa$.
Suppose that $\kappa$ is an uncountable cardinal such that the two above facts are known for every uncountable cardinal $\lambda < \kappa$.
*
*If $\kappa$ is a limit cardinal, note that in particular $\kappa$ is a limit of uncountable cardinals. By normality of ordinal exponentiation it follows that $$\omega^\kappa = \lim_{\lambda < \kappa} \omega^\lambda = \lim_{\lambda < \kappa} \lambda = \kappa,$$ where the limit is taken only over the uncountable cardinals $\lambda < \kappa$.
Also, it follows that $E_\kappa = \bigcup_{\lambda < \kappa} E_\lambda$, and so $| E_\kappa | = \lim_{\lambda < \kappa} | E_\lambda | = \kappa$.
*If $\kappa$ is a successor cardinal, note that $\kappa$ is regular. Note, also, that every uncountable cardinal is an indecomposable ordinal. Therefore $\kappa = \omega^\delta$ for some (unique) ordinal $\delta$. As $\omega^\kappa \geq \kappa$, we know that $\delta \leq \kappa$. It suffices to show that $\omega^\beta < \kappa$ for all $\beta < \kappa$. We do this by induction: assume $\beta < \kappa$ is such that $\omega^\gamma < \kappa$ for all $\gamma < \beta$.
*
*If $\beta = \gamma + 1$, note that $\omega^\beta = \omega^\gamma \cdot \omega = \lim_{n < \omega} \omega^\gamma \cdot n$. By indecomposability it follows that $\omega^\gamma \cdot n < \kappa$ for all $n < \omega$, and by regularity of $\kappa$ we have that $\{ \omega^\gamma \cdot n : n < \omega \}$ is bounded in $\kappa$.
*If $\beta$ is a limit ordinal, then $\omega^\beta = \lim_{\gamma < \beta} \omega^\gamma$. Note by regularity of $\kappa$ that $\{ \omega^\gamma : \gamma < \beta \}$ must be bounded in $\kappa$.
To show that $E_\kappa$ has cardinality $\kappa$, note that by starting with any ordinal $\alpha < \kappa$ and defining the sequence $\langle \alpha_n \rangle_{n < \omega}$ by $\alpha_0 = \alpha$ and $\alpha_{n+1} = \omega^{\alpha_n}$ we have that $\alpha_\omega = \lim_{n < \omega} \alpha_n < \kappa$ is an $\epsilon$-number. Use this fact to construct a strictly increasing $\kappa$-sequence of $\epsilon$-numbers less than $\kappa$.
(There must be an easier way, but I cannot think of it.)
|
Two sums with Fibonacci numbers
*
*Find closed form formula for sum: $\displaystyle\sum_{n=0}^{+\infty}\sum_{k=0}^{n} \frac{F_{2k}F_{n-k}}{10^n}$
*Find closed form formula for sum: $\displaystyle\sum_{k=0}^{n}\frac{F_k}{2^k}$ and its limit with $n\to +\infty$.
First association with both problems: generating functions and convolution. But I have been thinking about a solution for over a week and still can't manage. Can you help me?
| For (2) you have $F_k = \dfrac{\varphi^k}{\sqrt 5}-\dfrac{\psi^k}{\sqrt 5}$ where $\varphi = \frac{1 + \sqrt{5}}{2} $ and $\psi = \frac{1 - \sqrt{5}}{2}$ so the problem becomes the difference between two geometric series.
For (1) I think you can turn this into something like $\displaystyle \sum_{n=0}^{\infty} \frac{F_{2n+1}-F_{n+2}}{2\times 10^n}$ and again make it into a sum of geometric series.
There are probably other ways.
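Both sums can be checked numerically against closed forms obtained from generating functions: with $F(x)=\sum_k F_k x^k = x/(1-x-x^2)$ and $G(x)=\sum_k F_{2k}x^k = x/(1-3x+x^2)$, sum (2) tends to $F(1/2)=2$, and sum (1) is the convolution $F(1/10)\,G(1/10)=\frac{10}{89}\cdot\frac{10}{71}=\frac{100}{6319}$. These values are my own evaluation, not part of the answer above; the sketch below verifies them.

```python
# Fibonacci numbers F_0..F_120
F = [0, 1]
while len(F) <= 120:
    F.append(F[-1] + F[-2])

# sum (2): partial sums of F_k / 2^k approach F(1/2) = (1/2)/(1 - 1/2 - 1/4) = 2
S2 = sum(F[k] / 2 ** k for k in range(121))
assert abs(S2 - 2) < 1e-9

# sum (1): the double sum equals F(x) * G(x) at x = 1/10, i.e. 100/6319
S1 = sum(sum(F[2 * k] * F[n - k] for k in range(n + 1)) / 10 ** n
         for n in range(61))
assert abs(S1 - 100 / 6319) < 1e-9
```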
|
Goldbach's conjecture and number of ways in which an even number can be expressed as a sum of two primes Is there a function that counts the number of ways in which an even number can be expressed as a sum of two primes?
| See Goldbach's comet at Wikipedia.
EDIT: To expand on this a little, let $g(n)$ be the number of ways of expressing the even number $n$ as a sum of two primes. Wikipedia gives a heuristic argument for $g(n)$ to be approximately $2n/(\log n)^2$ for large $n$. Then it points out a flaw with the heuristic, and explains how Hardy and Littlewood repaired the flaw to come up with a better conjecture. The better conjecture states that, for large $n$, $g(n)$ is approximately $cn/(\log n)^2$, where $c>0$ depends on the primes dividing $n$. In all cases, $c>1.32$.
I stress that this is all conjectural, as no one has been able to prove even that $g(n)>0$ for all even $n\ge4$.
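For experimentation, a naive count of Goldbach representations is easy to code directly. Here $g(n)$ counts unordered pairs $p \le q$ of primes with $p+q=n$; conventions vary (some sources count ordered pairs), so treat this as a sketch.

```python
def is_prime(m):
    # simple trial division; fine for small inputs
    if m < 2:
        return False
    if m % 2 == 0:
        return m == 2
    d = 3
    while d * d <= m:
        if m % d == 0:
            return False
        d += 2
    return True

def g(n):
    # number of unordered representations n = p + q with p <= q both prime
    return sum(1 for p in range(2, n // 2 + 1) if is_prime(p) and is_prime(n - p))

print([g(n) for n in range(4, 21, 2)])  # [1, 1, 1, 2, 1, 2, 2, 2, 2]
```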
|
What is the name of the logical puzzle, where one always lies and another always tells the truth? So I was solving exercises in propositional logic lately and stumbled upon a puzzle that goes like this:
Each inhabitant of a remote village always tells the truth or always lies. A villager will only give a "Yes" or a "No" response to a question a tourist asks. Suppose you are a tourist visiting this area and come to a fork in the road.
One branch leads to the ruins you want to visit; the other branch leads deep into the jungle. A villager is standing at the fork in the road. What one question can you ask the villager to determine which branch to take?
I intuitively guessed the answer is "If I asked you whether this path leads to the ruins, would you say yes?". So my questions are:
*
*What is the name and/or source of this logical riddle?
*How can i corroborate my answer with mathematical rigor?
| Knights and Knaves?
How: read about it.
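As for rigor: the guessed question can be verified by brute force over the four possible worlds (truth-teller or liar, correct path or not) — a small truth-table check:

```python
def answer(truthful, leads_to_ruins):
    # What the villager would reply to "does this path lead to the ruins?"
    inner = leads_to_ruins if truthful else not leads_to_ruins
    # Reply to "would you say 'yes' if I asked whether this path leads to the ruins?"
    return inner if truthful else not inner

# the reply equals the truth, regardless of whether the villager lies:
# a liar lies twice, and the two lies cancel
for truthful in (True, False):
    for leads in (True, False):
        assert answer(truthful, leads) == leads
```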
|
Surfaces of constant projected area Generalizing the well-known variety of plane curves of constant width, I'm wondering about three-dimensional surfaces of constant projected area.
Question: If $A$ is a (bounded) subset of $\mathbb R^3$, homeomorphic to a closed ball, such that the orthogonal projection of $A$ onto a plane has the same area for all planes, is $A$ necessarily a sphere? If not, what are some other possibilities?
Wikipedia mentions a concept of convex shapes with constant width, but that's different.
(Inspired by the discussion about spherical cows in comments to this answer -- my question is seeking to understand whether there are other shapes of cows that would work just as well).
| These are called bodies of constant brightness. A convex body that has both constant width and constant brightness is a Euclidean ball. But non-spherical convex bodies of constant brightness do exist; the first was found by Blaschke in 1916. See: Google and related MSE thread.
|
Approximating $\pi$ with least digits Do you know a digit-efficient way to approximate $\pi$? I mean representing many digits of $\pi$ using only a few numeric digits and some sort of equation. Maybe mathematical operations should also count as a penalty.
For example the well known $\frac{355}{113}$ is an approximation, but it gives only 7 correct digits by using 6 digits (113355) in the approximation itself. Can you make a better digit ratio?
EDIT: to clarify the "game" let's assume that each mathematical operation (+, sqrt, power, ...) also counts as one digit. Otherwise one could of course make artifical infinitely nested structures of operations only. And preferably let's stick to basic arithmetics and powers/roots only.
EDIT: true, the logarithm of imaginary numbers provides an easy way; let's not use complex numbers, since something you can present to non-mathematicians is what I had in mind :)
| Let me throw in Clive's suggestion to look at the wikipedia site. If we allow for logarithm (while not using complex numbers), we can get 30 digits of $\pi$ with
$\frac{\operatorname{ln}(640320^3+744)}{\sqrt{163}}$
which is 13 digits and 5 operations, giving a ratio of about 18/30=0.6.
EDIT: Here is another one I found on this site:
$\ln(31.8\ln(2)+\ln(3))$
gives 11 digits of $\pi$ with using 5 numbers and 4 operations.
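Double precision can confirm the first approximation at least to machine precision (about 16 digits; verifying all 30 would need arbitrary-precision arithmetic):

```python
import math

approx = math.log(640320 ** 3 + 744) / math.sqrt(163)
# floats carry only ~16 significant digits, so this checks agreement
# to machine precision rather than the full 30 digits
assert abs(approx - math.pi) < 1e-12
print(approx)
```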
|
Alternating sum of squares of binomial coefficients I know that the sum of squares of binomial coefficients is just ${2n}\choose{n}$ but what is the closed expression for the sum ${n\choose 0}^2 - {n\choose 1}^2 + {n\choose 2}^2 + \cdots + (-1)^n {n\choose n}^2$?
| Here's a combinatorial proof.
Since $\binom{n}{k} = \binom{n}{n-k}$, we can rewrite the sum as $\sum_{k=0}^n \binom{n}{k} \binom{n}{n-k} (-1)^k$. Then $\binom{n}{k} \binom{n}{n-k}$ can be thought of as counting ordered pairs $(A,B)$, each of which is a subset of $\{1, 2, \ldots, n\}$, such that $|A| = k$ and $|B| = n-k$. The sum, then, is taken over all such pairs such that $|A| + |B| = n$.
Given $(A,B)$, let $x$ denote the largest element in the symmetric difference $A \oplus B = (A - B) \cup (B - A)$ (assuming that such an element exists). In other words, $x$ is the largest element that is in exactly one of the two sets. Then define $\phi$ to be the mapping that moves $x$ to the other set. The pairs $(A,B)$ and $\phi(A,B)$ have different signs, and $\phi(\phi(A,B)) = (A,B)$, so $(A,B)$ and $\phi(A,B)$ cancel each other out in the sum. (The function $\phi$ is what is known as a sign-reversing involution.)
So the value of the sum is determined by the number of pairs $(A,B)$ that do not cancel out. These are precisely those for which $\phi$ is not defined; in other words, those for which there is no largest $x$. But there can be no largest $x$ only in the case $A=B$. If $n$ is odd, then the requirement $\left|A\right| + \left|B\right| = n$ means that we cannot have $A=B$, so in the odd case the sum is $0$. If $n$ is even, then the number of pairs is just the number of subsets of $\{1, 2, \ldots, n\}$ of size $n/2$; i.e., $\binom{n}{n/2}$, and the sign $(-1)^{|A|}$ is determined by whether $|A| = n/2$ is even or odd.
Thus we get $$\sum_{k=0}^n \binom{n}{k}^2 (-1)^k = \begin{cases} (-1)^{n/2} \binom{n}{n/2}, & n \text{ is even}; \\ 0, & n \text{ is odd}.\end{cases}$$
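The closed form is easy to test for small $n$:

```python
from math import comb

def alternating_sum(n):
    return sum((-1) ** k * comb(n, k) ** 2 for k in range(n + 1))

# check against the closed form for n = 0, ..., 10
for n in range(11):
    expected = 0 if n % 2 else (-1) ** (n // 2) * comb(n, n // 2)
    assert alternating_sum(n) == expected
```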
|
Periodic solution of differential equation let be the ODE $ -y''(x)+f(x)y(x)=0 $
if the function $f$ is periodic, i.e. $f(x+T)=f(x)$, does it mean that the ODE has only periodic solutions?
if all the solutions are periodic, can they all be determined by Fourier series?
| No, it doesn't mean that. For instance, $f(x)=0$ is periodic with any period, but $y''(x)=0$ has non-periodic solutions $y(x)=ax+b$.
|
Solving linear system of equations when one variable cancels I have the following linear system of equations with two unknown variables $x$ and $y$. There are two equations and two unknowns. However, when the second equation is solved for $y$ and substituted into the first equation, the $x$ cancels. Is there a way of re-writing this system or re-writing the problem so that I can solve for $x$ and $y$ using linear algebra or another type of numerical method?
$2.6513 = \frac{3}{2}y + \frac{x}{2}$
$1.7675 = y + \frac{x}{3}$
In the two equations above, $x=3$ and $y=0.7675$, but I want to solve for $x$ and $y$, given the system above.
If I subtract the second equation from the first, then:
$2.6513 - 1.7675 = \frac{3}{2}y - y + \frac{x}{2} - \frac{x}{3}$
Can the equation in this alternate form be useful in solving for $x$ and $y$? Is there another procedure that I can use?
In this alternate form, would it be possible to limit $x$ and $y$ in some way so that a solution for $x$ and $y$ can be found by numerical optimization?
| $$\begin{equation*}
\left\{
\begin{array}{c}
2.6513=\frac{3}{2}y+\frac{x}{2} \\
1.7675=y+\frac{x}{3}
\end{array}
\right.
\end{equation*}$$
If we multiply the first equation by $2$ and the second by $3$ we get
$$\begin{equation*}
\left\{
\begin{array}{c}
5.3026=3y+x \\
5.3025=3y+x
\end{array}
\right.
\end{equation*}$$
This system has no solution because
$$\begin{equation*}
5.3026\neq 5.3025
\end{equation*}$$
However if the number $2.6513$ resulted from rounding $2.65125$, then the
same computation yields
$$\begin{equation*}
\left\{
\begin{array}{c}
5.3025=3y+x \\
5.3025=3y+x
\end{array}
\right.
\end{equation*}$$
which is satisfied by all $x,y$.
A system of the form
$$\begin{equation*}
\begin{pmatrix}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{pmatrix}
\begin{pmatrix}
x \\
y
\end{pmatrix}
=
\begin{pmatrix}
b_{1} \\
b_{2}
\end{pmatrix}
\end{equation*}$$
has the solution (Cramer's rule)
$$\begin{equation*}
\begin{pmatrix}
x \\
y
\end{pmatrix}
=\frac{1}{\det
\begin{pmatrix}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{pmatrix}
}
\begin{pmatrix}
a_{22}b_{1}-a_{12}b_{2} \\
a_{11}b_{2}-a_{21}b_{1}
\end{pmatrix}
=
\begin{pmatrix}
\frac{a_{22}b_{1}-a_{12}b_{2}}{a_{11}a_{22}-a_{21}a_{12}} \\
\frac{a_{11}b_{2}-a_{21}b_{1}}{a_{11}a_{22}-a_{21}a_{12}}
\end{pmatrix}
\end{equation*}$$
if $\det
\begin{pmatrix}
a_{11} & a_{12} \\
a_{21} & a_{22}%
\end{pmatrix}%
\neq 0$.
In the present case, we have
$$\begin{equation*}
\begin{pmatrix}
\frac{1}{2} & \frac{3}{2} \\
\frac{1}{3} & 1
\end{pmatrix}
\begin{pmatrix}
x \\
y
\end{pmatrix}
=
\begin{pmatrix}
2.6513 \\
1.7675
\end{pmatrix}
\end{equation*}$$
and $$\det
\begin{pmatrix}
\frac{1}{2} & \frac{3}{2} \\
\frac{1}{3} & 1
\end{pmatrix}
=0$$
|
Transforming an inhomogeneous Markov chain to a homogeneous one I fail to understand Cinlar's transformation of an inhomogeneous Markov chain to a homogeneous one. It appears to me that $\hat{P}$ is not fully specified. Generally speaking, given a $\sigma$-algebra $\mathcal A$, a measure can be specified either explicitly over the entire $\sigma$-algebra, or implicitly by specifying it over a generating ring and appealing to Caratheodory's extension theorem. However, Cinlar specifies $\hat{P}$ over a proper subset of $\hat{\mathcal{E}}$ that is not a ring.
We are given that $\widehat P$ is a Markov kernel, and we have that
$$\widehat P((n,x),\{n+1\}\times E)=P_{n+1}(x,E)=1,$$
hence the measure $\widehat P((n,x),\cdot)$ is concentrated on $\{n+1\}\times E$. Therefore, we have $\widehat P((n,x),I\times A)=0$ for any $A\subset E$ and $I\subset \Bbb N$ which doesn't contain the integer $n+1$.
|
Calculate Rotation Matrix to align Vector $A$ to Vector $B$ in $3D$? I have one triangle in $3D$ space that I am tracking in a simulation. Between time steps I have the the previous normal of the triangle and the current normal of the triangle along with both the current and previous $3D$ vertex positions of the triangles.
Using the normals of the triangular plane I would like to determine a rotation matrix that would align the normals of the triangles thereby setting the two triangles parallel to each other. I would then like to use a translation matrix to map the previous onto the current, however this is not my main concern right now.
I have found this website http://forums.cgsociety.org/archive/index.php/t-741227.html
that says I must
*
*determine the cross product of these two vectors (to determine a rotation axis)
*determine the dot product ( to find rotation angle)
*build quaternion (not sure what this means)
*the transformation matrix is the quaternion as a $3 \times 3$ (not sure)
Any help on how I can solve this problem would be appreciated.
| General solution for n dimensions in matlab / octave:
%% Build input data
n = 4;
a = randn(n,1);
b = randn(n,1);
%% Compute Q = rotation matrix
A = a*b';
[V,D] = eig(A'+A);
[~,idx] = min(diag(D));
v = V(:,idx);
Q = eye(n) - 2*(v*v');
%% Validate Q is correct
b_hat = Q'*a*norm(b)/norm(a);
disp(['norm of error = ' num2str(norm(b_hat-b))])
disp(['eigenvalues of Q = ' num2str(eig(Q)')])
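For the 3-D case described in the question, steps 1–4 can be carried out with Rodrigues' rotation formula, which is exactly what the axis–angle quaternion construction amounts to. A sketch in Python/NumPy (the example normals are made up, and the formula assumes the two vectors are not anti-parallel):

```python
import numpy as np

def rotation_aligning(a, b):
    """Rotation matrix R with R @ a parallel to b (3-D, a not anti-parallel to b)."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    v = np.cross(a, b)                 # step 1: rotation axis (scaled by sin of angle)
    c = np.dot(a, b)                   # step 2: cosine of the rotation angle
    K = np.array([[0, -v[2], v[1]],
                  [v[2], 0, -v[0]],
                  [-v[1], v[0], 0]])   # cross-product matrix of v
    # Rodrigues: R = I + K + K^2 / (1 + c); equivalent to building the
    # quaternion from axis and angle (steps 3-4) and converting it to a 3x3 matrix
    return np.eye(3) + K + K @ K / (1.0 + c)

n_prev = np.array([0.0, 0.0, 1.0])     # previous triangle normal (example values)
n_curr = np.array([1.0, 1.0, 0.0])     # current triangle normal (example values)
R = rotation_aligning(n_prev, n_curr)
assert np.allclose(R @ (n_prev / np.linalg.norm(n_prev)),
                   n_curr / np.linalg.norm(n_curr))
assert np.allclose(R @ R.T, np.eye(3))  # R is orthogonal
```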
|
Logical question problem A boy is half as old as the girl will be when the boy’s age is twice the sum of their ages when the boy was the girl’s age.
How many times older than the girl is the boy at their present age?
This is a logical problem sum.
| If $x$ is the boy's age and $y$ is the girl's age, then when the boy was the girl's current age, her age was $2y-x$. So "twice the sum of their ages when the boy was the girl's age" is $2(3y-x)=6y-2x$. The boy will reach this age after a further $6y-3x$ years, at which point the girl will be $7y-3x$. We are told that $x$ is half of this; so $2x=7y-3x$, which means that $x=\frac{7}{5}y$.
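A brute-force search over integer ages (a sketch; the bound 100 is arbitrary) confirms that the condition forces the ratio $7:5$:

```python
def consistent(x, y):
    # x = boy's age, y = girl's age, with x > y
    girl_then = 2 * y - x                # girl's age when the boy was the girl's age
    target = 2 * (y + girl_then)         # twice the sum of their ages back then
    girl_when = y + (target - x)         # girl's age when the boy reaches `target`
    return 2 * x == girl_when            # "the boy is half as old as the girl will be"

solutions = [(x, y) for x in range(1, 100) for y in range(1, x) if consistent(x, y)]
assert all(5 * x == 7 * y for x, y in solutions)  # every solution has x : y = 7 : 5
print(solutions[0])  # (7, 5)
```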
|
Easy way to find roots of the form $qi$ of a polynomial Let $p$ be a polynomial over $\mathbb{Z}$; we know that there is an easy way to check whether $p$ has rational roots (using the rational root theorem).
Is there an easy way to check whether $p$ has any roots of the form $qi$ where $q\in\mathbb{Q}$ (or at least $q\in\mathbb{Z}$)? ($i\in\mathbb{C}$)
| Hint $\ f(q\,i) = a_0\! -\! a_2 q^2\! +\! a_4 q^4\! +\cdots + i\,q\,(a_1\! -\! a_3 q^2\! +\! a_5 q^4\! +\! \cdots) = g(q) + i\,q\,h(q)$
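The hint turns into a small checker: $qi$ is a root iff $g(q)=0$ and $q\,h(q)=0$, so for $q\neq 0$ one can apply the rational root theorem to $g$ and $h$ separately. A sketch (the example polynomial is my own):

```python
def parts_at_qi(coeffs, q):
    # coeffs = [a_0, a_1, ..., a_n]; returns (g(q), h(q)) with f(qi) = g(q) + i*q*h(q)
    g = sum((-1) ** (k // 2) * a * q ** k
            for k, a in enumerate(coeffs) if k % 2 == 0)
    h = sum((-1) ** (k // 2) * a * q ** (k - 1)
            for k, a in enumerate(coeffs) if k % 2 == 1)
    return g, h

# example: f(x) = x^3 + x^2 + 4x + 4 = (x^2 + 4)(x + 1) has roots +-2i
coeffs = [4, 4, 1, 1]
g, h = parts_at_qi(coeffs, 2)
assert g == 0 and h == 0  # so 2i is a root
```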
|
Compute $\int \frac{\sin(x)}{\sin(x)+\cos(x)}\mathrm dx$ I'm having trouble computing the integral:
$$\int \frac{\sin(x)}{\sin(x)+\cos(x)}\mathrm dx.$$
I hope that it can be expressed in terms of elementary functions. I've tried simple substitutions such as $u=\sin(x)$ and $u=\cos(x)$, but it was not very effective.
Any suggestions are welcome. Thanks.
| Hint: $\sqrt{2}\sin(x+\pi/4)=\sin x +\cos x$, then substitute $x+\pi/4=z$
|
Comparing speed in stochastic processes generated from simulation? I have an agent-based simulation that generates a time series in its output for my different treatments. I am measuring performance through time, and at each time tick the performance is the mean of 30 runs (30 samples). In all of the treatments the performance starts from near 0 and ends at 100%, but with different speed. I was wondering if there is any stochastic model or probabilistic way to compare the speed or growth of these time series. I want to find out which one "significantly" grows faster.
Thanks.
| Assuming you're using a pre-canned application, then there will be an underlying distribution generating your time series. I would look in the help file of the application to find this distribution.
Once you know the underlying distribution, then "significance" is determined the usual way, namely pick a confidence level, and test for the difference between two random variables.
The wrinkle is to do with the time element. If your "treatments" are the result of an accumulation of positive outcomes, then each time series is effectively a sum of random variables, and each term in that sum is itself the result of a mean of 30 samples from the underlying distribution.
So, although the formulation is different, the treatment of significance is the same.
|
Why is the complex number $z=a+bi$ equivalent to the matrix form $\left(\begin{smallmatrix}a &-b\\b&a\end{smallmatrix}\right)$
Possible Duplicate:
Relation of this antisymmetric matrix $r = \!\left(\begin{smallmatrix}0 &1\\-1 & 0\end{smallmatrix}\right)$ to $i$
On Wikipedia, it says that:
Matrix representation of complex numbers
Complex numbers $z=a+ib$ can also be represented by $2\times2$ matrices that have the following form: $$\pmatrix{a&-b\\b&a}$$
I don't understand why they can be represented by these matrices or where these matrices come from.
| Since you put the tag quaternions, let me say a bit more about performing identifications like that:
Recall that the quaternion group $\mathcal{Q}$ is the group consisting of the elements $\{\pm1, \pm \hat{i}, \pm \hat{j}, \pm \hat{k}\}$ equipped with multiplication that satisfies the rules according to the diagram
$$\hat{i} \rightarrow \hat{j} \rightarrow \hat{k}.$$
Now what is more interesting is that you can let $\mathcal{Q}$ become a four dimensional real vector space with basis $\{1,\hat{i},\hat{j},\hat{k}\}$ equipped with an $\Bbb{R}$ - bilinear multiplication map that satisfies the rules above. You can also define the norm of a quaternion $a + b\hat{i} + c\hat{j} + d\hat{k}$ as
$$||a + b\hat{i} + c\hat{j} + d\hat{k}|| = a^2 + b^2 + c^2 + d^2.$$
Now if you consider $\mathcal{Q}^{\times}$, the set of all unit quaternions you can identify $\mathcal{Q}^{\times}$ with $\textrm{SU}(2)$ as a group and as a topological space. How do we do this identification? Well it's not very hard. Recall that
$$\textrm{SU}(2) = \left\{ \left(\begin{array}{cc} a + bi & -c + di \\ c + di & a-bi \end{array}\right) |\hspace{3mm} a,b,c,d \in \Bbb{R}, \hspace{3mm} a^2 + b^2 + c^2 + d^2 = 1 \right\}.$$
So you now make an Ansatz (German for "educated guess") that the identification we are going to make is via the map $f$ that sends a quaternion $a + b\hat{i} + c\hat{j} + d\hat{k}$ to the matrix $$\left(\begin{array}{cc} a + bi & -c + di \\ c + di & a-bi \end{array}\right).$$
It is easy to see that $f$ is a well-defined group isomorphism by an algebra bash, and it is also clear that $f$ is a homeomorphism. In summary, the point I wish to make is that these identifications give us a useful way to interpret things. For example, instead of interpreting $\textrm{SU}(2)$ as boring old matrices that you say "meh" to, you now have a geometric understanding of what $\textrm{SU}(2)$ is. You can think of each matrix as being a point on the sphere $S^3$ in 4-space! How rad is that?
On the other hand when you say $\Bbb{R}^4$ has now basis elements consisting of $\{1,\hat{i},\hat{j},\hat{k}\}$, you have given $\Bbb{R}^4$ a multiplication structure and it becomes not just an $\Bbb{R}$ - module but a module over itself.
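Coming back to the original question about $2\times 2$ matrices: the reason the representation works is that the map $a+bi \mapsto \left(\begin{smallmatrix}a&-b\\b&a\end{smallmatrix}\right)$ respects addition and multiplication, which can be checked concretely (a sketch using NumPy):

```python
import numpy as np

def M(z):
    # the 2x2 real matrix representing z = a + b i
    a, b = z.real, z.imag
    return np.array([[a, -b],
                     [b, a]])

z, w = 1 + 2j, 3 - 1j
assert np.allclose(M(z) + M(w), M(z + w))  # addition matches
assert np.allclose(M(z) @ M(w), M(z * w))  # multiplication matches
assert np.allclose(M(1j) @ M(1j), M(-1))   # i^2 = -1 becomes a matrix identity
```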
|
Is failing to admit an axiom equivalent to proof when the axiom is false? Often, mathematicians wish to develop proofs without admitting certain axioms (e.g. the axiom of choice).
If a statement can be proven without admitting that axiom, does that mean the statement is also true when the axiom is considered to be false?
I have tried to construct a counter-example, but in every instance I can conceive, the counter-example depends on a definition which necessarily admits an axiom. I feel like the answer to my question is obvious, but maybe I am just out of practice.
| Yes. Let the axiom be P. The proof that didn't make use of P followed all the rules of logic, so it still holds when you adjoin $\neg P$ to the list of axioms. (It could also happen that the other axioms sufficed to prove P, in which case the system that included $\neg P$ would be inconsistent. In an inconsistent theory, every proposition can be proved, so the thing you originally proved is still true, although vacuously. The case where the other axioms prove $\neg P$ is also OK, obviously.)
|
Equivalence of norms on the space of smooth functions Let $E, F$ be Banach spaces, $A$ be an open set in $E$ and $C^2(A,F)$ be the space of all functions $f:A\to F$ which are twice continuously differentiable and bounded together with all derivatives. The question is when the following two norms on $C^2(A,F)$ are equivalent:
$$
\|f\|_{1}=\sup_{x\in A}\sum^2_{k=0}\|f^{(k)}(x)\|, \ \|f\|_{2}=\sup_{x\in A}(\|f(x)\|+\|f^{(2)}(x)\|).
$$
In the case $A=E$ they are equivalent. One can prove it in the following way. To bound $\|f^{(1)}(x)h\|$ consider a line $g(t)=f(x+th)$ through $x$ in the direction $h$ and use inequality
$$
\sup_{t\in \mathbb{R}}\|g^{(1)}(t)\|\leq\sqrt{2\sup_{t\in \mathbb{R}}\|g(t)\|\sup_{t\in \mathbb{R}}\|g^{(2)}(t)\|}.
$$
The case when $A$ is an open ball is unknown to me. Of course, one can try to consider not lines but segments. But the problem is that the length of a segment can't be bounded from below, and the inequalities I know can't be applied.
| We can reduce to the case $F=\mathbb R$ by considering all compositions $\varphi\circ f$ with $\varphi$ ranging over unit-norm functionals on $F$.
Let $A$ be the open unit ball in $E$. Given $x\in A$ and direction $v\in E$ (a unit vector), we would like to estimate the directional derivative $f_v(x)$ in terms of $M=\sup_A(\|f\|, \|f\,''\|)$. Instead of restricting to a line, let us restrict $f$ to a 2-dimensional subspace $P$ that contains the line (and the origin). The advantage is that $D:=A\cap P$ is a 2-dimensional unit disk: the size of section does not depend on $x$ or $v$.
The directional derivative $f_v:D\to \mathbb R$ is itself differentiable, and using the 2nd derivative bound we conclude that the oscillation of $f_v$ on $A$ (namely, $\sup_A f_v-\inf_A f_v$) is bounded by $2M$. Suppose $\sup_A f_v>5M$. Then $f_v>3M$ everywhere in $A$. Integrating this inequality along the line through the origin in direction $v$, we find that the oscillation of $f$ on this line is at least $6M$, a contradiction. Therefore, $\sup_A f_v\le 5M$. Same argument shows $\inf_A f_v\ge -5M$.
|
Worst case analysis of MAX-HEAPIFY procedure. From the CLRS book, for the MAX-HEAPIFY procedure:
The children's subtrees each have size at most 2n/3 - the worst case
occurs when the last row of the tree is exactly half full
I fail to see the intuition for this worst-case scenario. Can someone explain, possibly with a diagram? Thanks.
P.S.: I know Big-O notation and also found this answer here, but I still have doubts.
| Start with a heap $H$ with $n$ levels with all levels full. That's $2^{i - 1}$ nodes for each level $i$ for a total of $$|H| = 2^n - 1$$ nodes in the heap. Let $L$ denote the left sub-heap of the root and $R$ denote the right sub-heap. $L$ has a total of $$|L| = 2^{n - 1} - 1$$ nodes, as does $R$. Since a binary heap is a complete binary tree, then new nodes must be added such that after heapification, nodes fill up the last level from left to right. So, let's add nodes to $L$ so that a new level is filled and let's denote this modified sub-heap as $L'$ and the modified whole heap as $H'$. This addition will require $2^{n - 1}$ nodes, bringing the total number of nodes in $L'$ to $$ |L'| = (2^{n - 1} - 1) + 2^{n - 1} = 2\cdot 2^{n - 1} - 1$$ and the total number of nodes in the entire heap to $$ |H'| = (2^{n} - 1) + 2^{n - 1} = 2^n + 2^{n - 1} - 1 $$
The amount of space $L'$ takes up out of the whole heap $H'$ is given by
$$ \frac{|L'|}{|H'|} = \frac{2\cdot 2^{n-1} - 1}{2^n + 2^{n - 1} - 1} =
\frac{2\cdot 2^{n-1} - 1}{2\cdot 2^{n - 1} + 2^{n - 1} - 1} =
\frac{2\cdot 2^{n-1} - 1}{3\cdot 2^{n - 1} - 1} $$
Taking the limit as $n \to \infty$, we get:
$$ \lim_{n\to\infty} { \frac{|L'|}{|H'|} } = \lim_{n\to\infty} { \frac{2\cdot 2^{n-1} - 1}{3\cdot 2^{n - 1} - 1} } = \frac{2}{3} $$
Long story short, $L'$ and $R$ make up effectively the entire heap. $L'$ has roughly twice as many elements as $R$, so it makes up $\frac{2}{3}$ of the heap while $R$ makes up the other $\frac{1}{3}$.
This $\frac{2}{3}$ of the heap corresponds to having the last level of the heap half full from left to right. This is the most the heap can get imbalanced; adding another node will either begin to rebalance the heap (by filling out the other, right, half of the last level) or break the heap's shape property of being a complete binary tree.
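The counting above can be checked numerically. A short sketch (the helper name `worst_case_ratio` is mine, not from the answer) that builds the worst-case sizes $|L'| = 2\cdot 2^{n-1}-1$ and $|H'| = 2^n + 2^{n-1} - 1$ and evaluates the ratio:

```python
# Numeric check of the 2n/3 bound: the worst-case shape is a heap whose
# last level is exactly half full, i.e. the left subtree L' has one more
# full level than the right subtree R.
def worst_case_ratio(n):
    """n = number of full levels before the extra half-row is added."""
    left = 2 * 2 ** (n - 1) - 1          # |L'|: left subtree, one level deeper
    total = 2 ** n + 2 ** (n - 1) - 1    # |H'| = root level(s) + L' + R
    return left / total

for n in (2, 5, 10, 20):
    print(n, worst_case_ratio(n))
# The ratio increases monotonically toward 2/3 as n grows.
```

So for any finite heap the left subtree holds strictly less than $\frac{2}{3}$ of the nodes, with $\frac{2}{3}$ as the limiting worst case.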
|
Singular-value inequalities This is my question: Is the following statement true ?
Let $H$ be a real or complex Hilbert space and $R,S:H \to H$ compact operators.
For every $n\in\mathbb{N}$ the following inequality holds:
$$\sum_{j=1}^n s_j(RS) \leq \sum_{j=1}^n s_j(R)s_j(S)$$
Note: $s_j(R)$ denotes the $j$-th singular value of the operator $R$.
The sequence of singular values decreases monotonically to zero.
With best regards,
mat
Edit: I found out, that the statement is true for products instead of sums. By that I mean:
Let $H$ be a $\mathbb{K}$-Hilbert space and $R,S: H \to H$ compact operators.
For every $n\in\mathbb{N}$ we have:
$$\Pi_{j=1}^n s_j(RS) \leq \Pi_{j=1}^n s_j(R)s_j(S)$$
Is it possible to derive the statement for sums from this?
| The statement is true. It is a special case of a result by Horn (On the singular values of a product of completely continuous operators, Proc. Nat.Acad. Sci. USA 36 (1950) 374-375).
The result is the following. Let $f:[0,\infty)\rightarrow \mathbb{R}$ with $f(0)=0$. If $f$ becomes convex following the substitution $x=e^t$ with $-\infty\leq t<\infty$, then for any linear completely continuous operators $R$ and $S$, $$\sum_{j=1}^n f(s_j(RS))\leq \sum_{j=1}^n f(s_j(R)s_j(S)).$$
The function $f(x)=x$ falls in the scope of the theorem and the proof follows from the theorem you stated about products.
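For finite matrices the sum inequality is easy to spot-check numerically. A hedged sketch assuming NumPy (`check_horn_sum` is my name for the helper; `np.linalg.svd` returns singular values already sorted in decreasing order):

```python
import numpy as np

# Numeric check, for random square matrices, of
#   sum_{j<=n} s_j(RS) <= sum_{j<=n} s_j(R) s_j(S)   for every n,
# with singular values in decreasing order.
rng = np.random.default_rng(0)

def check_horn_sum(dim):
    R = rng.standard_normal((dim, dim))
    S = rng.standard_normal((dim, dim))
    sR = np.linalg.svd(R, compute_uv=False)   # sorted decreasing
    sS = np.linalg.svd(S, compute_uv=False)
    sRS = np.linalg.svd(R @ S, compute_uv=False)
    # Partial sums for every n at once via cumulative sums.
    return np.all(np.cumsum(sRS) <= np.cumsum(sR * sS) + 1e-9)

assert all(check_horn_sum(d) for d in (2, 3, 5, 10))
```

This is only a consistency check on random instances, not a proof, but it is a quick way to convince oneself of the statement.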
|
Are there diagonalisable endomorphisms which are not unitarily diagonalisable? I know that normal endomorphisms are unitarily diagonalisable. Now I'm wondering, are there any diagonalisable endomorphisms which are not unitarily diagonalisable?
If so, could you provide an example?
| Another way to look at it, though not really different in essence, is to consider the operator norm on ${\rm M}_{n}(\mathbb{C})$ induced by the Euclidean norm on $\mathbb{C}^{n}$ (thought of as column vectors). Hence $\|A \| = {\rm max}_{ v : \|v \| = 1} \|Av \|.$ Since the unitary transformations are precisely the isometries of $\mathbb{C}^{n},$ we see that conjugation by a unitary matrix does not change the norm of a matrix. If $A$ can be diagonalized by a unitary matrix, then it is clear from this discussion that $\| A \| = {\rm max}_{\lambda} |\lambda |,$ as $\lambda$ runs over the eigenvalues of $A$. Hence as soon as we find a diagonalizable matrix $B$ with $\| B \| \neq {\rm max}_{\lambda} |\lambda|,$ we know that $B$ is not diagonalizable by a unitary matrix. For example, the matrix $$B = \left( \begin{array}{clcr} 3 & 5\\0 & 2 \end{array} \right)$$ has largest eigenvalue $3,$ but $\|B \| > 5$ because
$B \left( \begin{array}{cc} 0 \\1 \end{array} \right) = \left( \begin{array}{cc} 5 \\2 \end{array} \right).$ Also, $B$ is diagonalizable, but we now know that can't be achieved via a unitary matrix.
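The norm-versus-spectrum gap is easy to verify numerically. A small sketch assuming NumPy (`np.linalg.norm(..., ord=2)` computes the operator norm, i.e. the largest singular value):

```python
import numpy as np

# For B = [[3, 5], [0, 2]]: the operator norm (largest singular value)
# strictly exceeds the largest eigenvalue modulus, so B cannot be
# diagonalized by a unitary matrix.
B = np.array([[3.0, 5.0], [0.0, 2.0]])

op_norm = np.linalg.norm(B, ord=2)              # largest singular value
spec_rad = np.abs(np.linalg.eigvals(B)).max()   # largest |eigenvalue|

assert np.isclose(spec_rad, 3.0)
assert op_norm > 5.0            # witnessed by B @ [0, 1] = [5, 2]
assert op_norm > spec_rad       # hence B is not unitarily diagonalizable
```

(For a normal matrix the two quantities would agree, which is exactly the criterion used in the answer.)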
|
Is it true that $H^1(X,\mathcal{K}_{x_1,x_2})=0$? - The cohomology of a complex curve with coefficients in the sheaf of meromorphic functions... Let $X$ be a complex curve (a complex manifold with $\dim X=1$).
For $x_1,x_2\in X$ we define the sheaf $\mathcal{K}_{x_1,x_2}$ (in the complex topology) of meromorphic functions that vanish at the points $x_1$ and $x_2$.
Is it true that $H^1(X,\mathcal{K}_{x_1,x_2})=0$?
In general, what are sufficient conditions on $\mathcal{F}$ for $H^1(X,\mathcal{F})=0$ when $X$ is a curve?
| The answer is yes for a non-compact Riemann surface: $H^1(X, \mathcal K_{x_1,x_2})=0$.
The key is the exact sequence of sheaves on $X$:$$0\to \mathcal K_{x_1,x_2} \to \mathcal K \xrightarrow {truncate } \mathcal Q_1\oplus \mathcal Q_2\to 0$$ where $\mathcal Q_i$ is the skyscraper sheaf at $x_i$ with fiber the Laurent tails (locally of the form $\sum_{j=0}^Na_jz^{-j}$).
Taking cohomology we get a long exact sequence $$\cdots \mathcal K(X) \xrightarrow {\text {truncate}} \mathcal Q_1(X) \oplus \mathcal Q_2(X)\to H^1(X, \mathcal K_{x_1,x_2})\to H^1(X, \mathcal K) \to \cdots $$
The vanishing of the cohomology group $H^1(X, \mathcal K_{x_1,x_2})$ then follows from the two facts:
1) $H^1(X, \mathcal K)=0$
2) The morphism $ \mathcal K(X) \xrightarrow {\text {truncate}} \mathcal Q_1(X) \oplus \mathcal Q_2(X)$ is surjective because of the solvability of the Mittag-Leffler problem on a non-compact Riemann surface.
For a compact Riemann surface of genus $\geq1$ the relevant Mittag-Leffler problem is not always solvable, so that we have $H^1(X, \mathcal K_{x_1,x_2})\neq 0$ (however, for the Riemann sphere $H^1(\mathbb P^1, \mathcal K_{x_1,x_2})=0$).
|