title | upvoted_answer
---|---|
How can I solve this second order ODE using Laplace transform? | $$Y = \frac{1}{(s+3)(s+5)}\left[\frac{35\left(1-e^{4-2s}\right)}{s-2} +...... \right]$$
For the first term you can use the formula:
$$\mathcal {L^{-1}}\{ e^{-cs}F(s)\}=H(t-c)f(t-c)$$
You have that:
$$e^{4-2s}=e^4e^{-2s} \implies c=2$$
And:
$$F(s) =- \frac{35e^4}{(s+3)(s+5)(s-2)}$$ |
Compositions preserving measurability | The composition of measurable functions where we use the same $\sigma$-algebras throughout, is measurable.
However, when speaking of Lebesgue measurable functions $\Bbb R\to\Bbb R$, the definition says that the inverse image of a set belonging to the Borel $\sigma$-algebra is a set that belongs to the Lebesgue $\sigma$-algebra. Hence for Lebesgue measurable $g$ it may happen that for some Borel set $E$, the set $g^{-1}(E)$ is Lebesgue, but not Borel. In that situation we are also no longer guaranteed that the set $f^{-1}(g^{-1}(E))$ is Lebesgue. |
Another social network matrix expression assistance please. | Note that $k$ is a common friend of $i$ and $j$ iff $F_{i,k}=F_{j,k}=1$, equivalently $F_{i,k}F_{j,k}=1$. So the number of common friends of $i$ and $j$ is
$$C_{i,j}=\sum_{k=1}^n F_{i,k}F_{j,k}=\sum_{k=1}^n F_{i,k}F_{k,j}.$$
Now what matrix operation does that look like? |
How could I calculate this limit without using L'Hopital's Rule $\lim_{x\rightarrow0} \frac{e^x-1}{\sin(2x)}$? | Using the fact that $$\lim _{ x\rightarrow 0 }{ \frac { { e }^{ x }-1 }{ x } =1 } \\ \lim _{ x\rightarrow 0 }{ \frac { \sin { x } }{ x } =1 } $$ we can conclude that $$\\ \lim _{ x\rightarrow 0 }{ \frac { { e }^{ x }-1 }{ \sin { 2x } } } =\frac { 1 }{ 2 } \lim _{ x\rightarrow 0 }{ \frac { { e }^{ x }-1 }{ x } \frac { 2x }{ \sin { 2x } } } =\frac { 1 }{ 2 } $$ |
Homology connected sum of tori | $\partial_1 =0$ or equivalently $\phi$ is injective. Also $\partial_0=0$ equivalently $\psi$ is onto. Hence you have a splitting short exact sequence. So (as very often with long exact sequences) the problem boils down to filling in zeros (zero maps) (or appropriately modifying the sequence).
If you know that the surface is orientable and admits a cellular decomposition with a single $0$-cell and a single $2$-cell, then you have a chain complex
$$
0 \to \mathbb Z \stackrel 0\to \mathbb Z^{2g} \stackrel 0\to \mathbb Z \to 0.
$$ |
Converge? $\sum_{k=1}^{\infty}\frac{ \sin \left(\frac{1}{k}\right) }{k} $ | Hint
Since you want a direct comparison use this inequality
$$\sin(x)\le x,\; \forall x\ge0$$ |
$\mathbb{R}$ $\rightarrow$ Hom(V,V) | You have things a little mixed up, here is what the exercise is asking spelled out:
You are given a map $\pi: \mathbb{R} \rightarrow \text{Hom}(V,V)$, defined by
$\pi(\lambda) = \lambda I$
where $\lambda I$ is the function that takes a vector and multiplies it by the scalar $\lambda$. First of all, if you haven't proven this in class or other exercises, you should prove that this map is indeed a member of Hom($V,V$).
Next, you need to show that the map $\pi$ is linear. Let us start by showing that it respects scalar multiplication. For this, let $a \in \mathbb{R}$ and $\lambda \in \mathbb{R}$ (here we treat the $a$ as the scalar, and $\lambda$ as the vector). Now you need to show that
$\pi(a\lambda) = a\pi(\lambda)$
that is, that these two functions return the same value for any $v \in V$. Here you need to use what the function $a\pi(\lambda)$ is defined as.
Next you need to show that it respects vector addition. So let $\lambda, \mu \in \mathbb{R}$. Now you need to show that
$\pi(\mu + \lambda) = \pi(\mu) + \pi(\lambda)$
the same way as above, and using the definition of $\pi(\mu) + \pi(\lambda)$. Hope this makes sense. |
projectors in a tensor product of number fields | Hint: Look at the images of $(\sqrt{-d}\otimes \sqrt{-d}) \pm (1 \otimes d)$. |
Find the point on OY | First, note that the distances $NP$ and $N'P$ are equal (check why).
Now, you can draw a triangle with sides $N'P$, $MP$ and $MN'$. You always have
$$MN' + N'P \ge MP,$$
where the equality is true only when the three sides lie on the same line (see triangle inequality). That's only possible in the case you have described. And, thus,
$$MN' \ge MP - N'P.$$ |
Spanning trees of ladder graphs... | You can verify that there are $6 + 9 = 15$ spanning trees of the $3$-ladder graph.
To see this, note that you must remove $2$ edges.
If the center edge is removed, we can remove any one of the remaining $6$ edges.
If the center edge is not removed, we remove one edge from each side - there are $3$ choices per side, so a total of $9$ possibilities.
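If you want to double-check the count of $15$ by machine, here is a quick sketch (my own, not part of the original argument) using Kirchhoff's matrix-tree theorem; the vertex labelling of the $3$-ladder is an assumption of the sketch.

```python
import numpy as np

# 3-ladder: vertices 0-2 on the top rail, 3-5 on the bottom rail
edges = [(0, 1), (1, 2), (3, 4), (4, 5),   # rails
         (0, 3), (1, 4), (2, 5)]           # rungs

n = 6
L = np.zeros((n, n))
for u, v in edges:
    L[u, u] += 1
    L[v, v] += 1
    L[u, v] -= 1
    L[v, u] -= 1

# Matrix-tree theorem: any cofactor of the Laplacian counts spanning trees
print(round(np.linalg.det(L[1:, 1:])))  # expected: 15
```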
For part b, which techniques have you seen for solving recurrence relations?
You can find the characteristic polynomial to find the general solution. In this case we get $r^2 - 4r + 1 = 0$, so $r = \frac{4 \pm \sqrt{12}}{2} = 2 \pm \sqrt{3}$. Then, $T(n) = c_1 (2 + \sqrt{3})^n + c_2(2 - \sqrt{3})^n$. Solve for the constants using the initial values $T(1) = 1$ and $T(2) = 4$. |
Countable sequence of sets | I assume that similar means "having the same cardinality".
Your argument should work, assuming that you have proven that $ \mathbb{R} \sim [i-1,i) $. Then, this gives you a surjective function from $ [0, +\infty) $ to the union of the sequence, which means $ \mathbb{R} $ is at least as big as the union. And since any term of the sequence is at least as big as $ \mathbb{R} $, then the union is at least as big as $ \mathbb{R} $.
So we conclude that the union has the same size as $ \mathbb{R} $. |
question on identity theorem | If $f(z)=z^k$ on $(0,1)$, then $f(z)=z^k$ on $ \mathbb C$.
A similar argument gives $f(z)=-z^k$ on $ \mathbb C$.
Conclusion: no such function exists. |
Does Lagrangian always exist for any equation? | Every classical physics equation can be traced back to a Lagrangian. And every quantum problem of finding the expectation value of an operator $\theta$ can be characterized as a problem of doing a path integral over $\theta \exp(i \int d^dx \mathcal{L})$. In fact, the Standard Model, from which every modern theory of physics (that doesn't involve gravity) can be derived is a Lagrangian. And the classical theory of gravity GR can be derived by functionally differentiating the Hilbert action ("action" is the spacetime integral of the Lagrangian). So yeah... every established theory of physics thus far can be derived from a Lagrangian... but that doesn't necessarily mean that it's the right way to go. If we ever get a proper unifying theory, it may well be that the whole notion of describing the theory with a Lagrangian makes no sense. However the most famous candidate (String Theory) happens to have a Lagrangian! (The Nambu-Goto plus Polyakov actions)
Not every equation is a direct result of varying an action, though; some follow only indirectly. For example, the electric/magnetic field wave equations can be derived from Maxwell's equations, and Maxwell's equations can be directly derived from the vector field portion of the QED Lagrangian ($-\frac{1}{4} F_{\mu \nu} F^{\mu \nu}$). So while just about every physics equation you see ultimately came from a Lagrangian, that doesn't mean it's the direct result of varying a Lagrangian specifically for it. You can always make one up that's mathematically consistent with the equation you're looking at, though I don't think that would always get you very far.
BTW maybe the physics section is more suited for this. |
recurrence relation and find sum | $a(n+1)=\frac{n}{3} a(n)=\frac{n(n-1)}{3^{2}}a(n-1)=\ldots=\frac{n!}{3^{n}}a(1)$.
But $a(1)=\frac{0}{3} a(0)=0$, then $a(n)=0$ for $n\in \mathbb{N}$?? |
integer part of the number | Drawing the graphical representations of functions $f_1$ (blue curve) and $f_2$ (magenta "staircase') given by equations
$$f_1(x):=x-\dfrac{1}{x} \ \ \text{and} \ \ f_2(x)=[2x]$$
gives
1) The conviction that there are no solutions to equation $f_1(x)=f_2(x)$.
2) A way to prove it in three steps
as the "staircase pattern" is situated between lines with equations $y=2x-1$ and $y=2x$ (in green), show in a rigorous way that
$$2x-1 \leq [2x] \leq 2x \tag{1}$$
then show that (1) can be extended, for $x>0$ into
$$\underbrace{x-\dfrac{1}{x} < 2x-1}_{(I)} \leq [2x] \leq 2x$$ by showing that the quadratic inequation resulting from (I) is verified for all $x>0$.
do an equivalent reasoning in the case $x<0$ for
$$ [2x] \leq \underbrace{2x < x-\dfrac{1}{x}}_{(J)}$$ |
proving of combinational argument $\sum_{r=0}^{k}(-1)^r2^{k-r}\binom{n}{r}\binom{n-r}{k-r}= \binom{n}{k}$ | Start with $n$ numbered white balls. How many ways are there to choose $k$ of the balls, put them in a box, throw away some subset of the balls in the box, and end up with a box containing $k$ balls?
There are $2^k\binom{n}k$ ways to pick $k$ of the balls, put them in the box, and then choose a subset of them to throw away. Now we’ll subtract the arrangements that throw away at least one ball. There are $\binom{n}1$ ways to pick the ball that’s to be thrown away, $\binom{n-1}{k-1}$ ways to pick the other $k-1$ balls to be put into the box initially, and $2^{k-1}$ ways to pick the rest of the balls that will be thrown away. This leaves
$$2^k\binom{n}k-2^{k-1}\binom{n}1\binom{n-1}{k-1}\tag{1}$$
outcomes.
Of course this overcounts, since any result that ends up with two balls being thrown away gets subtracted twice in $(1)$ and should be added back in. There are $\binom{n}2$ pairs of balls that could be the two definitely thrown away, $\binom{n-2}{k-2}$ ways to choose the other $k-2$ balls to be put into the box initially, and $2^{k-2}$ to choose from those any other balls to be thrown away. Adding all of this back in gives the approximation
$$2^k\binom{n}k-2^{k-1}\binom{n}1\binom{n-1}{k-1}+2^{k-2}\binom{n}2\binom{n-2}{k-2}\;,\tag{2}$$
and at this point, if not before, it should be clear that we’re dealing with a standard inclusion-exclusion argument, and that the number of outcomes that have all $k$ chosen balls in the box with none thrown away is
$$\sum_{r=0}^k(-1)^r2^{k-r}\binom{n}r\binom{n-r}{k-r}\;.$$
Of course this is simply the number of sets of $k$ balls, which is $\dbinom{n}k$. |
Find all harmonic functions $f:\mathbb{C}\backslash \{0\}\to \mathbb{R}$ that are constant on every circle centered at 0. | Here's a hint, which I think will allow you to solve it:
You can represent $f(z)$ in polar form
$$f(z)=f(R,\phi)$$
with $R\in\mathbb{R}$ and $z=Re^{i\phi}$ ($\phi$ is only defined up to $2\pi$, of course). Then write the Cauchy-Riemann equations for $R$ and $\phi$ (they're available here). In this form, you know that $\partial f/\partial \phi=0$, for every $R$. |
Conjecture about the Jordan-Polya numbers | This is false. Using the data in your list...
$$ 4320 - 4096 = 224 \text{.} $$
The next few violations of the conjecture are:
\begin{align*}
8192 - 7776 &= 416 \text{,} \\
8640 - 8192 &= 448 \text{,} \\
16384 - 15552 &= 832 \text{,} \\
17280 - 16384 &= 896 \text{, and} \\
25920 - 24576 &= 1344 \text{.}
\end{align*}
So, while it is common in the first few instances that violations include a power of $2$, this is not true for all violations.
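For readers who want to reproduce such data, here is a small sketch of mine (not the computation used for the counts below) that generates the Jordan-Polya numbers up to a bound, so the consecutive gaps can be examined directly; the bound `10**5` is an arbitrary choice.

```python
from math import factorial

def jordan_polya(limit):
    """All products of factorials (each factor >= 2!) up to `limit`, plus 1."""
    facts = []
    k = 2
    while factorial(k) <= limit:
        facts.append(factorial(k))
        k += 1
    nums = {1}
    stack = [1]
    while stack:
        x = stack.pop()
        for f in facts:
            y = x * f
            if y <= limit and y not in nums:
                nums.add(y)
                stack.append(y)
    return sorted(nums)

jp = jordan_polya(10**5)
gaps = [b - a for a, b in zip(jp, jp[1:])]
print(jp[:12])    # 1, 2, 4, 6, 8, 12, 16, 24, ...
print(gaps[:12])  # the consecutive gaps, ready to be tested
```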
There are $6853$ Jordan-Polya numbers up to $20!$. Computing the gaps and splitting the first $6800$ gaps into blocks of $100$ gaps ("centuries"), we plot the number of gaps in each block conforming with the conjecture.
After initial high levels of conformance, by the tenth century (thousandth gap), the likelihood of a Jordan-Polya number gap being a Jordan-Polya number seems to be around $50\%$ and is, at least for the range covered, fairly stably so.
Repeating up to $30!$, there are $91\,802$ Jordan-Polya numbers and $53\,065$ of the gaps violate the conjecture. The plot by centuries suggests that conforming instances become rarer as one proceeds. |
Preservation of negativity of eigenvalues of the Jacobian matrix under coordinate change. | Yes. Write down the product rule and chain rule and use two facts: $f(x_0)=0$ gets rid of the horrible term; and $D\phi(\psi(x)) = (D\psi(x))^{-1}$. Then you get the fact that $Dg(\phi(x_0)) = P^{-1}Df(x_0)P$ for $P=D\psi(x_0)$. Thus, the two matrices are similar (i.e., represent the same linear map) and, in particular, have the same eigenvalues. |
How to alter the interval of a composite function | You want to intersect the intervals.
For $f$ you have $(-\infty ,0)$ and $[0,\infty)$
For $g$ you have $(-\infty,3)$ and $[3,\infty)$
So, for the compositions, you want to look at each of the intervals $(-\infty,0)$,$[0,3)$, and $[3,\infty)$. |
When $x^2+6xy+y^2$ a square number? | Stereographic projection from $(-1,0),$ I get
$$ x = m^2 - n^2, \; \; y = 2 mn + 6 n^2, $$
with $$ \gcd(m,n) = 1, $$
and either $m > n > 0$ or $n < 0$ and $|n| < m < 3 |n|. $
Also $m,n$ not both odd.
Put the inequalities together, we get
$$ m > n > 0 \; \; \mbox{OR} \; \; \frac{-m}{3} >n > -m. $$
In the latter case we are originally in the third quadrant as the rational point on the hyperbola $x^2 + 6xy+y^2 = 1$ is
$$ x= \frac{m^2 - n^2}{m^2 + 6 mn+n^2}, \; \; y= \frac{2mn+6n^2}{m^2 + 6 mn+n^2}, $$
while $-m/3 > n > -m$ implies $m^2 + 6 mn + n^2 < 0.$
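As a quick sanity check of the parametrization (my own addition, not part of the original derivation): with $x=m^2-n^2$ and $y=2mn+6n^2$ one has $x^2+6xy+y^2=(m^2+6mn+n^2)^2$ identically, which the short loop below confirms over a small range of $m,n$.

```python
for m in range(-20, 21):
    for n in range(-20, 21):
        x = m * m - n * n
        y = 2 * m * n + 6 * n * n
        w = m * m + 6 * m * n + n * n
        assert x * x + 6 * x * y + y * y == w * w
print("identity holds for all m, n in [-20, 20]")
```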
Improvement: If $m,n$ both odd, take
$$ x = \frac{m^2 - n^2}{4}, \; \; y = \frac{2 mn + 6 n^2}{4}, $$
final requirement here is that $m \not\equiv n \pmod 4.$ Put another way: when both are odd, we require that $m+n$ be divisible by $4.$
The nonsense with the possible negative signs comes from the fact that we are not projecting onto an ellipse, we are projecting onto a hyperbola. So, the denominator may come up negative and solutions disappear. Need to allow $n$ negative. Let me know if I have missed any solutions; fairly easy to just splat a formula on the page, harder to figure out whether one has all solutions.
Note that, given some pair $(x,y)$ such that $x^2 + 6 xy+ y^2 = w^2$ for some $w,$ we get new pairs ( same $w$) with
$$ (-y,x+6y), $$
$$ (-x-6y,6x+35y), $$
$$ (-6x-35y,35x+204y), $$
$$ (-35x-204y,204x+1189y) $$
and so on, which pushes along the relevant hyperbola in the second or fourth quadrant.
As you will see, there is some repetition below. First ordered by $m,n$ then a short thing ordered by just $x,y.$
EDIT: this worked out very well. I just restricted to those solutions with $x>y$ and put them in order, well, backwards...Being in order, I was able to compare with a list of all solutions with $x \leq 306,$ we have a winner.
x y m n
306 19 35 1
300 13 37 -13
297 80 19 -8
290 171 39 -19
286 279 35 9
285 92 17 2
275 42 18 -7
273 232 17 4
270 119 37 -17
261 220 19 -10
260 189 33 7
255 38 16 1
253 12 17 -6
247 150 16 3
240 17 31 1
234 115 31 5
230 39 33 -13
225 112 17 -8
221 84 15 2
216 209 35 -19
210 11 31 -11
208 57 29 3
207 70 16 -7
200 153 33 -17
198 175 29 7
195 34 14 1
187 138 14 3
184 105 31 -15
182 15 27 1
176 105 27 5
171 10 14 -5
168 65 29 -13
165 76 13 2
161 144 15 -8
154 51 25 3
152 33 27 -11
143 30 12 1
136 9 25 -9
133 60 13 -6
132 13 23 1
126 95 23 5
119 30 12 -5
117 68 11 2
114 91 25 -13
105 8 11 -4
102 55 23 -11
99 26 10 1
90 11 19 1
85 84 11 -6
78 7 19 -7
77 60 9 2
70 39 17 3
65 24 9 -4
63 22 8 1
56 9 15 1
55 6 8 -3
52 45 17 -9
44 21 15 -7
40 33 13 3
36 5 13 -5
35 18 6 1
30 7 11 1
21 4 5 -2
15 14 4 1
12 5 7 1
10 3 7 -3
3 2 2 -1
x y m n
3 2 2 -1
3 10 2 1
5 12 3 -2
2 3 3 1
5 36 3 2
7 30 4 -3
15 14 4 1
7 78 4 3
9 56 5 -4
21 4 5 -2
21 44 5 2
4 21 5 3
9 136 5 4
11 90 6 -5
35 18 6 1
11 210 6 5
13 132 7 -6
33 40 7 -4
10 3 7 -3
12 5 7 1
45 52 7 2
33 152 7 4
6 55 7 5
13 300 7 6
15 182 8 -7
39 70 8 -5
55 6 8 -3
63 22 8 1
55 102 8 3
39 230 8 5
15 406 8 7
17 240 9 -8
14 15 9 -5
65 24 9 -4
77 60 9 2
65 168 9 4
8 105 9 7
17 528 9 8
19 306 10 -9
51 154 10 -7
99 26 10 1
91 114 10 3
51 434 10 7
19 666 10 9
x y m n
x y
1 0
3 2
10 3
12 5
15 14
21 4
30 7
35 18
36 5
40 33
44 21
52 45
55 6
56 9
63 22
65 24
70 39
77 60
78 7
85 84
90 11
99 26
x y
jagy@phobeusjunior:
I was curious about repetition, still with positive $x,y,$ to $x^2 + 6 xy+ y^2 = z^2.$ Plenty, and predictable, squarefree products of primes $p \equiv \pm 1 \pmod 8.$
x y m n z
65 24 9 -4 -119 7 . 17
90 11 19 1 119
99 26 10 1 161 7 . 23
136 9 25 -9 -161
143 30 12 1 217 7 . 31
102 55 23 -11 -217
351 14 20 -7 -391 17 . 23
176 105 27 5 391
2628 37 109 -37 -2737 7 . 17 . 23
1927 318 44 3 2737
1333 660 37 6 2737
1025 912 33 8 2737
81807 1030 304 -103 -84847 7 . 17 . 23 . 31
62625 8528 271 -104 -84847
59998 9735 491 33 84847
55040 12177 471 41 84847
49773 15052 247 -106 -84847
45140 17877 429 59 84847
39195 22018 226 -109 -84847
35032 25353 383 81 84847
x y m n z |
Why is $C(\beta \mathbb{R})/C_0(\mathbb{R})\cong C(\beta \mathbb{R}\setminus \mathbb{R})$ as $C^*$-algebras? | You have $$C_0(\mathbb R)=\{f\in C(\beta\mathbb R):\ f|_{\beta\mathbb R\setminus\mathbb R}=0\}.$$ |
Do Optional and Progressive Processes Have Counterparts in Discrete Time? | "Progressive" and "optional" are refinements of "adapted" involving some degree of joint measurability of $(\omega,t)\mapsto X_t(\omega)$. The three notions coalesce in discrete time. Consider, for example, the notion "optional" in a discrete-time setting. So let $(\Omega,(\mathcal F_n)_{n\ge 0},\mathcal F,\Bbb P)$ be a filtered probability space. The optional $\sigma$-field $\mathcal O$ is the $\sigma$-field on $\Omega\times\{0,1,2,\ldots\}$ generated by the sets of the form $[T,\infty):=\{(\omega,n)\in\Omega\times\{0,1,\ldots\}:n\ge T(\omega)\}$, as $T$ varies over the $(\mathcal F_n)$ stopping times. A process $X=(X_n)_{n\ge 0}$ is optional provided $(\omega,n)\mapsto X_n(\omega)$ is $\mathcal O$-measurable.
If $X$ is optional then $X_T1_{\{T<\infty\}}$ is $\mathcal F_T$-measurable for each stopping time $T$; in particular, $X_n$ is $\mathcal F_n$-measurable for each $n\in\{0,1,\ldots\}$. Therefore $X$ is adapted.
Suppose, conversely, that $X$ is adapted and $B$ is a Borel subset of the real line. Define stopping times $T_m$, $m=0,1,\ldots$ by
$$
T_m=\begin{cases}m,&\text{if $X_m\in B$;}\\ \infty,&\text{otherwise.}\end{cases}
$$
(These are stopping times because $X$ is adapted.)
The graph $[T_m]:=\{(\omega,n): T_m(\omega)=n\}$ of $T_m$ is optional (i.e., an element of $\mathcal O$---just notice that $[T_m]=[T_m,\infty)\setminus[T_m+1,\infty)$) and so
$$
X^{-1}(B)=\cup_{m=0}^\infty[T_m]
$$
is an element of $\mathcal O$. It follows that $X$ is optional.
"Progressive" can be handled similarly. |
Eigenvectors for $A-\lambda I = \left[\begin{smallmatrix} \pm i\sin\theta & -\sin\theta\\ \sin\theta & \pm i\sin\theta\end{smallmatrix}\right]$ | As you found the eigenvalues, there are eigenvectors.
Let us find one eigenvector:
$$
\begin{cases}\cos\theta X - \sin\theta Y = (\cos\theta + i\sin\theta)X\\
\sin\theta X + \cos\theta Y = (\cos\theta + i\sin\theta)Y\end{cases}
\iff
- \sin\theta Y = i\sin\theta X\\
\Leftarrow iY = X
$$
Here, check that both equations are equivalent: hence your eigenvalue is good!
Taking $Y=1$ (you only need one eigenvector) gives $X= i$.
Now do the same with the other eigenvalue, you find the relation
$$
-iY = X .
$$ |
$T:V \to V$ if $B = \{v_1,...,v_n\}$ is a basis for $V$ what about $\{Tv_1,...,Tv_n\}$ | If $\{v_{1} , v_{2},..., v_{n}\}$ is a basis of $V$ and $T:\ V \rightarrow V$ is a linear transformation, then obviously $\{T(v_{1}) , T(v_{2}),..., T(v_{n})\}$ is a generating set for $Im(T)$, i.e. $Im(T)=\operatorname{span}(T(v_{1}) , T(v_{2}),..., T(v_{n}))$. Now we know that:
$$\dim(Ker(T))+dim(Im(T))=dim(V)$$
thus having $Ker(T)=\{ 0_{\ V} \}\implies dim(Im(T))=dim(V)\implies V=Im(T)$ |
Why do we need $x \neq c$ in $(\epsilon, \delta)$ definition of limits? | Consider
$$
f(x) = \begin{cases}x^2, & x \ne 0\\ 1, & x=0 \end{cases}
$$
Do you want $\displaystyle \lim_{x\to 0} f(x)$ to exist? If you want it to exist and be zero, you need to rule out $x=c$ in the definition. |
Proof that if $Z$ is standard normal, then $Z^2$ is distributed Chi-Square (1). | $$\int_{-\infty}^{\infty}\frac{1}{\sqrt{2\pi}}e^{z^2(t-\frac{1}{2})}dz=\int_{-\infty}^{\infty}\frac{1}{\sqrt{2\pi}}e^{-z^2(\frac{1}{2}-t)}dz$$
The general PDF for a normal distribution is given by:
$$
f(x)=\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(x-\mu)^2}{2\sigma^2}}
$$
You should attempt to solve the integral by fitting a normal distribution and cancelling it out by realising that it integrates to 1. Currently:
$$
\mu=0
$$
$$
\frac{1}{2\sigma^2}=\frac{1}{2}-t
$$
So, solve for $\sigma$ and multiply accordingly to make the integral the pdf of a normal distribution (which integrates to 1); whatever is left over should give you the result you're looking for.
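For completeness (not part of the original hint), carrying this out gives
$$
\frac{1}{2\sigma^2}=\frac{1}{2}-t \;\Rightarrow\; \sigma=\frac{1}{\sqrt{1-2t}},\qquad
\int_{-\infty}^{\infty}\frac{1}{\sqrt{2\pi}}e^{-z^2\left(\frac{1}{2}-t\right)}dz
=\sigma\int_{-\infty}^{\infty}\frac{1}{\sqrt{2\pi}\,\sigma}e^{-\frac{z^2}{2\sigma^2}}dz
=\frac{1}{\sqrt{1-2t}},
$$
which is exactly the moment generating function of a $\chi^2(1)$ distribution (valid for $t<\tfrac{1}{2}$).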
Hope this helps |
Cardinality and Bijections- Proof Problem | The deficulty is understanding concepts and using language. If you can do that this is trivial (almost literally).
Let me clearify with a few examples.
Ex: 1) Let $A=\{dog, cat, mouse\}$
Ex: 2) Let $A =\mathbb R$.
Then $B = \{f:\{1\}\to A\}= \{K\subset \{1\}\times A|$ for each $x \in \{1\}$ there is exactly one $(x,y) \in K\}=$
$\{K\subset \{1\}\times A|$ there is exactly one $(1,y) \in K\}=$
$\{K\subset \{1\}\times A| K = \{(1,y)\}$ for some $y \in A\}=$
$\color{blue}{\big\{}\{(1,y)\}| y\in A\color{blue}{\big\}}$.
In example 1: then $B = \color{blue}{\big\{}\{(1,dog)\}, \{(1,cat)\}, \{(1,mouse)\}\color{blue}{\big\}}$.
In example 2: then $B = \color{blue}{\big\{}\small\{(1, y)\small\}|y\in \mathbb R\color{blue}{\big\}}$
... Now try to do this on your own without reading further ...
Now it should be intuitively obvious that for every $a \in A$ there is exactly one function $f: \{1\}\to A$ so that $f(1) = a$.
And that's it. That's your bijection:
...... try to formally define the bijection, $j: B \to A$, before reading further ......
Let $j: B \to A$ via for any $f \in B$ we set $j(f) = f(1)$.
... Now try to prove that that is an injection without reading further...
To formally prove $j$ is a bijection.
Surjective: For each $a\in A$ then if we define $f:\{1\} \to A$ as $f(1) =a$ then $f \in B$ and $j(f) = a$. So $j$ is surjective.
Injective: If $j(f) = j(g)= y$ for some $y \in A$ then $f(1) = y$ and $g(1) = y$. but then (as $1$ is the only element of $\{1\}$) for all $x \in \{1\}$ then $f(x) = g(x)$. So $f = g$. So $j$ is one to one. |
Geometry Right triangles in a rectangle, find the area. | We have $[ABCD]=1200$, therefore the area of $\Delta{ABD}=\dfrac{1}{2}[ABCD]=600$. Now, calculate length $AD$ and $BD$.
$$
\begin{align}
[ABD]&=600\\
\dfrac{1}{2}AB\cdot AD&=600\\
\dfrac{1}{2}\cdot40\cdot AD&=600\\
20\cdot AD&=600\\
AD&=30
\end{align}
$$
Using Pythagoras' theorem, we get
$$
BD^2=AB^2+AD^2\quad\Rightarrow\quad BD=\sqrt{40^2+30^2}=50.
$$
Now, calculate length $AE$.
$$
\begin{align}
[ABD]&=600\\
\dfrac{1}{2}BD\cdot AE&=600\\
\dfrac{1}{2}\cdot50\cdot AE&=600\\
25\cdot AE&=600\\
AE&=24.
\end{align}
$$
Again we use Pythagoras' theorem to obtain $BE$.
$$
AB^2=AE^2+BE^2\quad\Rightarrow\quad BE=\sqrt{AB^2-AE^2}=32.
$$
Thus, the area of $\Delta{ABE}$ is
$$
\begin{align}
[ABE]&=\dfrac{1}{2}AE\cdot BE\\
&=\dfrac{1}{2}\cdot24\cdot 32\\
&=384.
\end{align}
$$ |
Curve from curvature | If you have some initial value like the starting Point $c_0$ of the curve you can reconstruct the curve. If $c(s)$ is the curve parametrized by $s$ (for simplicity, it can be assumed that the curve is parametrized by arclength) then it holds the following equation (if $n(s)$ is the unit normal vector):
$c''(s) = k(s) n(s)$. (' is derivative by $s$)
For arclength-parametrized curves you can write $c'(s) = (\cos\theta(s),\sin\theta(s))$ for a function $\theta$. Using the above equation you will get the differential equation:
$\frac{d}{ds} \theta(s) = k(s)$. By integration you will obtain $\theta$ and hence $c'$; integrate once again with an initial condition $c(0) = c_0$ and you have the curve. |
Finding conformal mapping from $S=\{z=x+iy: xy>2, x>0\}$ to unit disc $\mathbb D$ | Simply $w=z^2$ maps $S$ onto $\mathbb{H}^\prime=\{w\mid \operatorname{Im}w>4\}$
, so $$w=z^2-4i$$ maps $S$ onto $\mathbb{H}$.
Of course there is an approach to obtain the mapping $w=z^2-4i$ using mapping properties of $z+\frac{1}{z}$, however, it is very complicated.
To use mapping properties of $z+\frac{1}{z}$ is an essential way if you want to obtain a mapping function of $\{z=x+iy\mid xy<2\}$ to $\mathbb{H}.$ |
Clarifications on Row Echelon Form and Reduced Row Echelon Form | Some definitions are in order. Please refer to the references or any linear programming text for more details.
Matrix diagonal entries are the entries $a_{ij}$ such that $i=j$. The matrix does not have to be a square matrix.
The definitions below work even in the case of augmented matrix (just ignore the vertical line).
A matrix is Echelon Form if:
A - All zero rows (if any) have moved to the bottom.
B - The leading nonzero element of each row is farther to the right than the leading nonzero element in the row just above it.
C - In each column containing a leading nonzero element, the entries below that leading nonzero element are zero.
Note: The Row Echelon form is NOT unique.
A matrix is in reduced row echelon form (also called row canonical form) if it satisfies the following conditions:
A - Matrix is in row echelon form.
B - The leading entry in each nonzero row is a 1 (called a leading 1).
C - Each column containing a leading 1 has zeros everywhere else (above and below).
Note: The Reduced Row Echelon form is unique.
The reduced row echelon form of a matrix may be computed by Gauss–Jordan elimination.
Unlike the row echelon form, the reduced row echelon form of a matrix is unique and does not depend on the algorithm used to compute it. For a given matrix, despite the row echelon form not being unique, all row echelon forms and the reduced row echelon form have the same number of zero rows (if any) and the pivots are located in the same positions.
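As a small illustration (my own example, not taken from the references): the left matrix is in row echelon form; reducing it further gives the reduced row echelon form on the right.
$$
\begin{pmatrix} 1 & 2 & 3 \\ 0 & 1 & 4 \\ 0 & 0 & 0 \end{pmatrix}
\;\longrightarrow\;
\begin{pmatrix} 1 & 0 & -5 \\ 0 & 1 & 4 \\ 0 & 0 & 0 \end{pmatrix}
$$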
do I HAVE to take it to REF first before breaking it down further to RREF?
It is natural that you do REF and proceed to RREF - See the examples. It does make sense and keep the work organized at least. In some cases, you could move immediately to RREF. I guess we can claim that every matrix in RREF is also an REF but the opposite is not always true.
Is a system in REF if the number above or below the leading one isn't 0? E.g. is this in REF? Why and why not?
In REF definition, there is no mention of the value of the entries above the leading 1s. The definition references the entries below the leading 1s only. In the example you have provided, the matrix satisfies the rules of REF only.
Some References:
1-Google Books-Linear Algebra: Theory and Applications
2-Wiki-https://en.wikipedia.org/wiki/Row_echelon_form
See above notes for useful links. |
Integration, computation of integral in $[0,1]$. | By the substitution $p = e^{-u}$,
\begin{align*}
\int_{0}^{1} \log p \log(1-p) \, dp
&= - \int_{0}^{\infty} u e^{-u} \log(1-e^{-u}) \, du.
\end{align*}
Then by the Taylor expansion of the logarithm,
\begin{align*}
\int_{0}^{1} \log p \log(1-p) \, dp
&= \int_{0}^{\infty} u e^{-u} \left\{ \sum_{n=1}^{\infty} \frac{e^{-nu}}{n} \right\} \, du \\
&= \sum_{n=1}^{\infty} \frac{1}{n} \int_{0}^{\infty} u e^{-(n+1)u} \, du \\
&= \sum_{n=1}^{\infty} \frac{1}{n(n+1)^{2}} \\
&= \sum_{n=1}^{\infty} \left( \frac{1}{n} - \frac{1}{n+1} - \frac{1}{(n+1)^{2}} \right) \\
&= 2 - \zeta(2)
= 2-\frac{\pi^{2}}{6} \\
&\approx 0.355065933151773563527584833354\cdots.
\end{align*}
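A quick numerical sanity check of this value (my own sketch, standard library only):

```python
from math import pi

# Partial sum of sum_{n>=1} 1/(n(n+1)^2), which the derivation above
# identifies with the integral of log(p)log(1-p) over [0,1]
partial = sum(1.0 / (n * (n + 1) ** 2) for n in range(1, 200000))
print(partial)            # partial sum, very close to the exact value
print(2 - pi ** 2 / 6)    # exact value, about 0.3550659...
```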
Or we can just make use of the beta function
$$ \beta(z, w) = \frac{\Gamma(z)\Gamma(w)}{\Gamma(z+w)} = \int_{0}^{1} x^{z-1}(1-x)^{w-1} \, dx $$
to obtain
\begin{align*}
\int_{0}^{1} \log p \log(1-p) \, dp
&= \frac{\partial^{2} \beta}{\partial z \partial w}(1, 1) \\
&= \beta(1, 1) \left\{ \left( \psi^{(0)}(1)-\psi ^{(0)}(2) \right)^{2} - \psi^{(1)}(2) \right\} \\
&= 1 \cdot \{ (-1)^{2} - (\zeta(2) - 1) \} \\
&= 2 - \zeta(2).
\end{align*} |
Determinants of matrices with a special property | Following your observation that we have either $a_{ij}=1$ or $\delta_{ij}=0$ for all $i,j$, we note that for any fixed $i$ or $j$, it must hold that
$$
\sum_{i=1}^n \delta_{ij} = \sum_{j=1}^n \delta_{ij} = \det(A).
$$
As I note in my comment, the matrix determinant lemma allows us to conclude that since $B = A + uv^T$, we have
$$
\det(B) = \det(A) + v^T \operatorname{cof}(A)^T u = \det(A) + u^T \operatorname{cof}(A) v \\ = \det(A) + \sum_{i,j = 1}^n (-1)^i \delta_{ij}.
$$
However, applying the first identity allows us to rewrite
$$
\det(B) = \det(A) + \sum_{i = 1}^n (-1)^i \sum_{j=1}^n\delta_{ij} =\\
\det(A) + \sum_{i = 1}^n (-1)^i \det(A) =
\begin{cases}
0 & n \text{ is odd}\\
\det(A) & n \text{ is even}.
\end{cases}
$$
In either case, we find that $\det(B)(\det(A) - \det(B)) = 0$. |
Cube in space with water around it | This research paper will help you imagine
https://thescipub.com/PDF/pisp.2012.50.57.pdf |
Is the maximum of the eigenvalues of any symmetric positive? | If one wants to pick the maximum of its eigenvalues, will the value be positive?
Of course not. Consider
$$ \begin{pmatrix}
-1 & 0\\
0 & -2
\end{pmatrix}
$$
For an adjacency matrix, it's true that the largest eigenvalue is non-negative, but that's not trivial.
Update: as A.G. notes in a comment, for a symmetric adjacency matrix (more generally: if all eigenvalues are real) it's actually trivial that the largest eigenvalue is non-negative: otherwise the trace would be negative, which is impossible. By the same observation, restricting to nonzero symmetric adjacency matrices, we can assert that the largest eigenvalue is strictly positive. |
Equivalence involving expectation | Hint: $e^x$ is a strictly increasing function so it preserves order among variables if you have multiple variables. Thus $\max_i(e^{X_i}) = e^{\max_i X_i}$ |
The $n$th term of a certain sequence | One answer is
$$
(-1)^{\lceil n/3\rceil}n.
$$
If that answer isn't satisfying to you, please describe what sort of answer you're looking for. |
Bohr Radius of an atom | You don't plug in $a_o$ until after having taken the derivative. To find the minimum value of a function you take the derivative and set it equal to zero. So you need to solve the equation $E'(r) = 0$ for $r$. Alternatively, you can take the derivative and then show that $E'(\frac{4\pi\epsilon_0h^2}{me^2}) = 0$. |
Proving that a sequence is monotone | I've spent too much time
on this already,
so I will just dump
everything new
into another answer.
First,
$\begin{array}\\
(-1)^jc_j
&=\binom{-1/2}{j}\\
&=\dfrac1{j!}\prod_{i=0}^{j-1} (-\frac12-i)\\
&=\dfrac{(-1)^j}{j!}\prod_{i=0}^{j-1} (\frac12+i),\\
\text{so that}\\
c_j
&=\dfrac1{j!}\prod_{i=0}^{j-1} (\frac12+i)\\
&=\dfrac1{2^jj!}\prod_{i=0}^{j-1} (1+2i)\\
&=\dfrac{\prod_{i=0}^{j-1} (2i+1)}{2^jj!}\\
&=\dfrac{\prod_{i=0}^{j-1} (2i+1)\prod_{i=1}^{j} (2i)}{2^jj!\prod_{i=1}^{j} (2i)}\\
&=\dfrac{(2j)!}{2^jj!2^j j!}\\
&=\dfrac{(2j)!}{4^jj!^2}\\
&=\dfrac{1}{4^j}\binom{2j}{j}\\
&\approx \dfrac{1}{4^j}\dfrac{4^j}{\sqrt{\pi j}}\\
&= \dfrac{1}{\sqrt{\pi j}}\\
\end{array}
$
Note that
$\dfrac{c_{j+1}}{c_j}
=\dfrac{\dfrac1{(j+1)!}\prod_{i=0}^{j} (\frac12+i)}{\dfrac1{j!}\prod_{i=0}^{j-1} (\frac12+i)}
=\dfrac{j+\frac12}{j+1}
=1-\dfrac{1}{2(j+1)}
$
so the
$c_j$ are decreasing.
Now we can get
an infinite converging series
for $s_m$.
$\begin{array}\\
s_m
&= \sum_{k=1}^{m} \frac{1}{\sqrt{m^2 + k}}\\
&= \frac1{m}\sum_{k=1}^{m} \frac{1}{\sqrt{1 + k/m^2}}\\
&= \frac1{m}\sum_{k=1}^{m} (1 + k/m^2)^{-1/2}\\
&= \frac1{m}\sum_{k=1}^{m} \sum_{j=0}^{\infty} \binom{-1/2}{j}(k/m^2)^j\\
&= \frac1{m}\sum_{k=1}^{m} \sum_{j=0}^{\infty} (-1)^jc_j(k/m^2)^j\\
&= \frac1{m}\sum_{k=1}^{m} \left(1+\sum_{j=1}^{\infty} (-1)^jc_j(k/m^2)^j\right)\\
&= 1+\frac1{m}\sum_{k=1}^{m} \sum_{j=1}^{\infty} (-1)^jc_j(k/m^2)^j\\
&= 1+\frac1{m} \sum_{j=1}^{\infty}\sum_{k=1}^{m} (-1)^jc_j(k/m^2)^j\\
&= 1+\sum_{j=1}^{\infty}(-1)^jc_jm^{-2j-1}\sum_{k=1}^{m} k^j\\
&= 1+\sum_{j=1}^{\infty}(-1)^jc_jp_j(m)/m^{2j+1}
\qquad\text{where } p_j(m)=\sum_{k=1}^{m} k^j\\
&= 1+ \sum_{j=1}^{\infty}(-1)^jc_jp_j(m)/m^{2j+1}\\
\end{array}
$
Since,
for $j=1, 2, 3, ...$
we have
$c_j
=\frac12, \frac38, \frac{5}{16}, ...
$
and
$p_j(m)
=\frac12 m(m+1),
\frac16 m(m+1)(2m+1),
\frac14 m^2(m+1)^2
$,
we have
$\begin{array}\\
s_m
&=1
-\frac{m(m+1)}{4m^3}
+\frac{3m(m+1)(2m+1)}{8\cdot 6m^5}
-\frac{5m^2(m+1)^2}{16\cdot 4m^7}
+...\\
&=1
-\frac{m+1}{4m^2}
+\frac{(m+1)(2m+1)}{16m^4}
-\frac{5(m+1)^2}{64m^5}
+...\\
\end{array}
$
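(A quick numerical check of this truncation, which I add only as a sanity test; it is not needed for the argument.)

```python
from math import sqrt

def s(m):
    return sum(1 / sqrt(m * m + k) for k in range(1, m + 1))

def s_approx(m):
    return (1 - (m + 1) / (4 * m**2)
              + (m + 1) * (2 * m + 1) / (16 * m**4)
              - 5 * (m + 1)**2 / (64 * m**5))

for m in (5, 10, 50):
    print(m, s(m), s_approx(m))   # the two columns agree quite closely
```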
Note that since
$c_j
\approx \frac{1}{\sqrt{\pi j}}
$
and
$p_j(m)
\approx
\frac{m^{j+1}}{j+1}
$
we have
$c_jm^{-2j-1}p_j(m)
\approx \dfrac{1}{m^{j}(j+1)\sqrt{\pi j}}
$,
so the series converges
absolutely.
From this,
we can get an exact equation for
$s_{m+1}-s_m
$.
We first will use the
expression with the
first 4 terms.
$\begin{array}\\
s_{m+1}-s_m
&=(1
-\frac{m+2}{4(m+1)^2}
+\frac{(m+2)(2m+3)}{16(m+1)^4}
-\frac{5(m+2)^2}{64(m+1)^5}
+...)\\
&\qquad
-(1
-\frac{m+1}{4m^2}
+\frac{(m+1)(2m+1)}{16m^4}
-\frac{5(m+1)^2}{64m^5}
+...)\\
&=-(\frac{m+2}{4(m+1)^2}-\frac{m+1}{4m^2})
+(\frac{(m+2)(2m+3)}{16(m+1)^4}-\frac{(m+1)(2m+1)}{16m^4})\\
&\qquad
-(\frac{5(m+2)^2}{64(m+1)^5}-\frac{5(m+1)^2}{64m^5})
+...\\
&=\frac{m^2 + 3 m + 1}{4 m^2 (m + 1)^2}
-\frac{4 m^5 + 19 m^4 + 30 m^3 + 20 m^2 + 7 m + 1}{16 m^4 (m + 1)^4}\\
&\qquad +\frac{5 (3 m^6 + 17 m^5 + 35 m^4 + 35 m^3 + 21 m^2 + 7 m + 1)}{64 m^5 (m + 1)^5}\\
&=\frac{16m^3(m+1)^3(m^2 + 3 m + 1)
-4m(m+1)(4 m^5 + 19 m^4 + 30 m^3 + 20 m^2 + 7 m + 1)+5 (3 m^6 + 17 m^5 + 35 m^4 + 35 m^3 + 21 m^2 + 7 m + 1)}{64 m^5 (m + 1)^5}\\
&=\frac{4 m (m + 1) (4 m^6 + 16 m^5 + 13 m^4 - 10 m^3 - 16 m^2 - 7 m - 1)+5 (3 m^6 + 17 m^5 + 35 m^4 + 35 m^3 + 21 m^2 + 7 m + 1)}{64 m^5 (m + 1)^5}\\
&=\frac{m^5 (16 m^3 + 80 m^2 + 131 m + 52)+O(m^4)}{64 m^5 (m + 1)^5}\\
&=\frac{16 m^3 + 80 m^2 + 131 m + 52)+O(1/m)}{64 (m + 1)^5}\\
&\gt\frac{16 (m+1)^3+O(1/m)}{64 (m + 1)^5}\\
&\gt\frac{1+O(1/m^4)}{4(m + 1)^2}\\
\end{array}
$
Next,
look at all the terms.
$\begin{array}\\
s_{m+1}-s_m
&= \sum_{j=1}^{\infty}(-1)^jc_j\left(\dfrac{p_j(m+1)}{(m+1)^{2j+1}}-\dfrac{p_j(m)}{m^{2j+1}}\right)\\
&= \sum_{j=1}^{\infty}(-1)^jc_j\left(\dfrac{p_j(m+1)m^{2j+1}-p_j(m)(m+1)^{2j+1}}{(m+1)^{2j+1}m^{2j+1}}\right)\\
&= \sum_{j=1}^{\infty}(-1)^jc_j\left(\dfrac{d_j(m)}{(m+1)^{2j+1}m^{2j+1}}\right)\\
\end{array}
$
where
$d_j(m)=p_j(m+1)m^{2j+1}-p_j(m)(m+1)^{2j+1}
$.
$\begin{array}\\
d_j(m)
&=p_j(m+1)m^{2j+1}-p_j(m)(m+1)^{2j+1}
\\
&=(p_j(m)+(m+1)^j)m^{2j+1}-p_j(m)(m+1)^{2j+1}\\
&=(p_j(m)+(m+1)^j)m^{2j+1}-p_j(m)(m+1)^{2j+1}
\\
&=p_j(m)(m^{2j+1}-(m+1)^{2j+1})+(m+1)^jm^{2j+1}\\
\\
&=p_j(m)m^{2j+1}(1-(1+1/m)^{2j+1})+(m+1)^jm^{2j+1}\\
\\
&=m^{2j+1}\left(p_j(m)\left(1-(1+1/m)^{2j+1}\right)+(m+1)^j\right)\\
&\lt m^{2j+1}\left(p_j(m)\left(1-(1+(2j+1)/m)\right)+(m+1)^j\right)\\
&= m^{2j+1}\left(p_j(m)\left(-\frac{2j+1}{m}\right)+(m+1)^j\right)\\
&\approx m^{2j+1}\left(-m^{j+1}\frac{2j+1}{m(j+1)}+(m+1)^j\right)
\qquad\text{since }p_j(m)>m^{j+1}/(j+1)\\
&= m^{2j+1}(-m^{j}\frac{2j+1}{(j+1)}+(m+1)^j)\\
&= m^{3j+1}(-\frac{2j+1}{j+1}+(1+1/m)^j)\\
&\approx m^{3j+1}(-2+\frac1{j+1}+(1+1/m)^j)\\
\end{array}
$
so if
$(1+1/m)^j
\lt 2-\frac1{j+1}
$
then
$d_j(m) < 0$.
This is the same as
$1+1/m
\lt (2-\frac1{j+1})^{1/j}
$
or
$m
\gt \frac1{(2-\frac1{j+1})^{1/j}-1}
$.
Numerically,
according to Wolfy,
this looks like about
$3j/2$.
To check:
$\begin{array}\\
(2-\frac1{j+1})^{1/j}
&=2^{1/j}(1-\frac1{2j+2})^{1/j}\\
&=2^{1/j}e^{\ln(1-1/(2j+2))/j}\\
&\approx 2^{1/j}e^{-1/(j(2j+2))}\\
&=e^{\ln 2/j-1/(j(2j+2))}\\
&=e^{\ln 2/j-1/(j(2j+2))}\\
&\approx 1+\ln 2/j-1/(j(2j+2))\\
\text{so}\\
\frac1{(2-\frac1{j+1})^{1/j}-1}
&\approx \frac1{\ln 2/j-1/(j(2j+2))}\\
&= \frac{j}{\ln 2-1/(2j+2)}\\
\text{and}\\
\frac1{\ln 2}
&\approx 1.44\\
\end{array}
$
Experiments with Wolfy
suggest that
a more accurate approximation
is
$\frac{j}{\ln 2-1/(3.85j)}
$.
Estimating the terms.
Each term in the sum
is about
$\begin{array}\\
\dfrac{d_j(m)}{(m+1)^{2j+1}m^{2j+1}}
&\approx \dfrac{m^{3j+1}(-1+\frac1{j+1}+\frac{j}{m})}{(m+1)^{2j+1}m^{2j+1}}\\
&\approx -\dfrac{m^j(1-\frac1{j+1})}{(m+1)^{2j+1}}\\
&\approx -\dfrac{1-\frac1{j+1}}{m^{j+1}}\\
\end{array}
$
Taking $c_j$ into account,
this is about
$-\dfrac{c_j(1-\frac1{j+1})}{m^{j+1}}
$
This is certainly
decreasing in absolute value,
so the sum will be
between the last two sums. |
Describe an equation of a parabola lying between (1,1) and the x-axis | The way the problem is described, the point $(1, 1)$ is the focus, and the $x$-axis is the directrix.
The easiest way to determine the equation for the resulting parabola, intuitively (in my opinion), is to identify three points: The apex of the parabola, which is exactly halfway in between the focus and the directrix, and the two points on either side of the focus but with the same $y$-coordinate.
Since the focus is at "height" $1$ above the directrix, the apex is at $(1, 1/2)$, and the two points on either side must be $1$ away in the $x$-coordinate: that is, at $(0, 1)$ and $(2, 1)$. That way, they are equidistant from both the focus and the directrix. We can therefore write
$$
y-\frac{1}{2} = k(x-1)^2
$$
This automatically places the apex at $(1, 1/2)$, and now we solve for the parabola that includes one of the other points, $(0, 1)$:
$$
\frac{1}{2} = k(1-0)^2 = k
$$
(The other point yields the identical result, as you can verify.) Thus the equation for the parabola is
$$
y-\frac{1}{2} = \frac{(x-1)^2}{2}
$$
or, if you prefer,
$$
y = \frac{x^2}{2}-x+1
$$ |
Is there a reason why the number of non-isomorphic graphs with $v=4$ is odd? | Definition:
Let $g(n)$ denote the number of unlabeled graphs on $n$ vertices, and let $e(n)$ denote the exponent of the largest power of $2$ which divides $g(n)$.
Lemma: If $n\geq5$ is odd then $e(n) = (n+1)/2-\lfloor \log_2(n) \rfloor$. If $n \geq 4$ is even then $e(n) \geq n/2 - \lfloor \log_2(n) \rfloor$ with equality iff $n$ is a power of $2$.
Corollary: The number of unlabeled graphs, $g(n)$, is even for $n > 4$.
Some values $e(\{4,5,\ldots,15\})=\{0,1,1,2,1,2,2,3,3,4,4,5\}$ (for even numbers it is the lower bound).
The theorem is due to Steven C. Cater and Robert W. Robinson and can be found, including a proof, in this publication.
They mention that $g(n)$ is not only even but contains a large number of $2$'s in its prime factorisation for large $n$ (this also follows from the formula). In fact they even show that there are asymptotically $n/2$ factors of $2$ in $g(n)$. |
Logic about systems? | Try Gödel Without Tears freely downloadable at http://www.logicmatters.net/resources/pdfs/gwt/GWT.pdf
Or get out of the library a copy of my Introduction to Gödel's Theorems. |
If $n>k$, $n \mid km$, and $m \nmid n$, does this imply that $k \mid n$? | No. For a counterexample, you can take $k = 6, m = 10$, and $n = 15$.
It is possible you are misremembering the following similar statement.
If $a \mid bc$ and $\gcd(a,b) = 1$, then $a \mid c$. This is stated and proved on MSE elsewhere. |
Euler's summation formula for $x \geq 2$ | Let's check with the simplest possible case: suppose that $f(n) = 1$, and take $x = 10$.
Then the LHS is
$$ \sum_{n = 2}^{10} 1 = 9.$$
The RHS is
$$ 1 - \sum_{n = 3}^9 1 + (20 - 1)\cdot 1 - 1\cdot(10 - 10) = 1 - 7 + 19 - 0 = 13.$$
So you are correct, there is an error somewhere. I didn't track the error down, but one way to do so would be to insert the $f(n) = 1$ function into your attempted proof and see where an inequality pops up. |
Does every elliptic cohomology theory represent a complex-orientable $E_\infty$-ring spectra and vice-versa? | Neither $K(ku)$ nor tmf are complex orientable, so neither $K(ku)$ nor tmf are elliptic cohomology theories in the strict sense. $K(ku)$ "is" an elliptic cohomology theory in the looser sense that it "detects $v_2$-periodic phenomena" (although I can't elaborate too much on what this means), and tmf "is" an elliptic cohomology theory in the looser sense that it is built out of all elliptic cohomology theories somehow. |
Counting diagonalizable matrices in $\mathcal{M}_{n}(\mathbb{Z}/p\mathbb{Z})$ | Your formula assumes that diagonal matrices can be represented as $PDP^{-1}$ for some diagonal matrix $D$ and invertible matrix $P$ uniquely. This is not true: $D$ is unique, but $P$ isn't.
Without loss of generality we can work over arbitrary $\Bbb F_q$; this adds no difficult.
We need to compute the size of the orbit of a given diagonal matrix under the action of ${\rm GL}_n(\Bbb F_q)$ and then sum over all possible diagonal matrices modulo permutation. By the orbit-stabilizer theorem, the size of an orbit is equal to $|{\rm GL}_n(\Bbb F_q)|$ divided by the stabilizer of a given matrix.
Given a diagonal matrix ${\rm diag}(\lambda_1,\cdots,\lambda_n)$, which $P\in{\rm GL}_n(\Bbb F_q)$ act trivially on it by conjugation?
First thoughts: this is equivalent to $P\in {\rm GL}_n(\Bbb F_q)$ preserving the decomposition of $\Bbb F_q^n$ into eigenspaces, which means such $P$ are direct sums of arbitrary invertible maps on them.
Using this approach I get
$$\sum_{r=1}^n \binom{q}{r}\sum_{\substack{m\vdash n \\ {\rm len}(m)=r}}\langle m\rangle^r\frac{|{\rm GL}_n(\Bbb F_q)|}{|{\rm GL}_{m_1}(\Bbb F_q)|\cdots|{\rm GL}_{m_r}(\Bbb F_q)|}.$$
where $\langle m\rangle$ for my purposes denotes the number of distinct entries in the $r$-tuple $m$.
(Think I finally have it right.) Not sure how to simplify it though. |
Uniformly integrable sequence has norm convergent subsequence? | First note that if the statement was true, then it would also be true for complex valued sequences of functions (if $(f_n)_n$ are uniformly integrable, then so are the real and imaginary part. Now take a subsequence along which the real part converges in $L^1$, and take a further subsequence on which the imaginary part converges).
Thus, it is enough to provide a complex-valued counterexample. Since the sequence $(f_n)_n = (e^{2\pi i n x})_n$ is bounded in $L^\infty$, it is uniformly integrable. Now assume that $f_{n_k} \to f$ with convergence in $L^1$. This easily implies $\int_0^1 f \cdot e^{2 \pi i l x} d x = \lim_k \int_0^1 e^{2 \pi i (l + n_k) x} d x = 0$ for all $l \in \Bbb{Z}$. By basic Fourier analysis, this implies $f =0$ almost everywhere. Because of $\|f_n\|_{L^1} = 1$, we thus cannot have $L^1$ convergence $f_n \to f$. |
Limit of $x(\sqrt{x^2+1}-\sqrt[3]{x^3+1})$ | $$\lim_{x\rightarrow +\infty} x(\sqrt{x^2+1}-\sqrt[3]{x^3+1})=\lim_{t\rightarrow0^+}\frac{1}{t}\left(\sqrt{\frac{1}{t^2}+1}-\sqrt[3]{\frac{1}{t^3}+1}\right)=$$
$$=\lim_{t\rightarrow0^+}\frac{\sqrt{1+t^2}-1-\left(\sqrt[3]{1+t^3}-1\right)}{t^2}=$$
$$=\lim_{t\rightarrow0^+}\frac{\frac{t^2}{\sqrt{1+t^2}+1}-\frac{t^3}{\sqrt[3]{(1+t^3)^2}+\sqrt[3]{t^3+1}+1}}{t^2}=\frac{1}{2}$$ |
The way to divide a group into two equal halves according to their preferences | A group of 6 people can be divided into two groups of equal size in 20 different ways. Depening on your definition of similarity all of those 20 ways could be a good division. In order to answer your question, first you have to define what exaclty do you mean by similarity.
For example imagine three different preferences vectors:
$p1: A>B>C>D>E$
$p2: E>A>B>C>D$
$p3: A>E>D>C>B$
And now tell whether $p1$ is more similar to $p2$ or to $p3$? As you see the answer is not so obvious. |
What does $dQ = LdP$ mean? | This is the Radon Nikodym derivative which is used to change the probability measure. The measure change is a well known process used in finance for several purposes and the most "known" one is the measure change in the B&S framework which is used to get the martingale property of the actualized asset price.
If you want more details on that, I suggest you take a look at the Girsanov theorem.
When you write $dP=LdQ$, this means that under your measure $P$ the probability of an event $A$ is $P(A)=E^P[1_A]=\int_{A}dP(x)=\int_{A}LdQ(x)=E^Q[1_AL]$. This can be useful when some processes do not have a property that we need under the $P$ measure but they do under the $Q$ measure.
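As a toy illustration (my own, on a finite sample space): expectations under $P$ are just $Q$-expectations weighted by $L$.

```python
import numpy as np

omega = np.arange(1, 7)                        # outcomes of a die
Q = np.full(6, 1 / 6)                          # fair die under Q
P = np.array([0.1, 0.1, 0.1, 0.1, 0.1, 0.5])   # loaded die under P
L = P / Q                                      # Radon-Nikodym derivative dP/dQ

f = omega.astype(float)                        # any payoff f(omega)
print(np.sum(f * P))                           # E^P[f]
print(np.sum(f * L * Q))                       # E^Q[f * L], the same number
```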
The $L$ process must fulfill some conditions so it can represent a measure change; for more details I suggest you take a look at the Radon-Nikodym derivative and at the Novikov criterion for applying the Girsanov theorem. |
Fourier Series of Sawtooth Wave from IFT | There are some mistakes in what you wrote.
(Assuming everything converges)
If $$f_p(t) = \sum_{k=-\infty}^\infty f(t-kT)$$ Then we have the Fourier series $$f_p(t) = \sum_{n=-\infty}^\infty c_n e^{2i \pi n t/T}$$ where $$c_n = \frac{1}{T}\int_0^T f_p(t) e^{-2i \pi n t / T}dt = \frac{1}{T}\int_0^T \sum_{k=-\infty}^\infty f(t-kT) e^{-2i \pi n t / T}dt \\= \frac{1}{T}\sum_{k=-\infty}^\infty \int_0^T f(t-kT) e^{-2i \pi n t / T}dt =\frac{1}{T}\sum_{k=-\infty}^\infty \int_{-kT}^{-kT+T} f(t) e^{-2i \pi n t / T}dt \\=\frac{1}{T}\int_{-\infty}^\infty f(t) e^{-2 i\pi n t/T}dt =\frac{1}{T} F(n/T)$$
In other words: the Fourier transform (in the sense of distributions) of $f_p(t)$ is $F_p(\nu) = \sum_{n=-\infty}^\infty c_n \delta(\nu-n/T)= F(\nu)\frac{1}{T}\sum_{n=-\infty}^\infty \delta(\nu-n/T)$
All this means that the Fourier series theorem is equivalent to the statement that the Dirac comb $\displaystyle\sum_{k=-\infty}^\infty \delta(t-k)$ is its own Fourier transform, and that periodizing in the time domain amounts to sampling in the Fourier domain. |
Differential equation note | $$y''-y'=0$$
$$\frac{y''}{y'}=1$$
$$\log|y'|=x+C_1$$
$$y'=e^{x+C_1}=ke^x$$
$$y=ke^x+C_2$$ |
Proof of recurrence relation of first kind Stirling Numbers | I just went to wikipedia.
The Stirling numbers of the first kind,
$s(n, k)$,
are defined by
$(x)_n
=\sum_{k=0}^n s(n, k) x^k
$
where
$(x)_n
=\prod_{i=0}^{n-1} (x-i)
$.
Then
$(x)_{n+1}
=\sum_{k=0}^{n+1} s(n+1, k) x^k
$.
But
$\begin{array}\\
(x)_{n+1}
&=(x)_n(x-n)\\
&=(x-n)\sum_{k=0}^n s(n, k) x^k\\
&=x\sum_{k=0}^n s(n, k) x^k-n\sum_{k=0}^n s(n, k) x^k\\
&=\sum_{k=0}^n s(n, k) x^{k+1}-n\sum_{k=0}^n s(n, k) x^k\\
&=\sum_{k=1}^{n+1} s(n, k-1) x^{k}-n\sum_{k=0}^n s(n, k) x^k\\
&=\sum_{k=1}^{n} s(n, k-1) x^{k}+s(n, n)x^{n+1}-n(s(n, 0)+\sum_{k=1}^n s(n, k) x^k)\\
&=s(n, n)x^{n+1}-ns(n, 0)+\sum_{k=1}^{n} (s(n, k-1) -n s(n, k)) x^k\\
\end{array}
$
Equating coefficients gives
$s(n+1, 0) = -ns(n, 0),
s(n+1, n+1) = s(n, n),$
and,
for $1 \le k \le n$,
$s(n+1,k)
=s(n, k-1) -n s(n, k)
$. |
Can I consider shooting% as an independent variable | What you are calling dependent and independent variables are better referred to as your target and predictor variables. This makes clear what the relationship is supposed to be between them - you use the predictors to predict the target. The words dependent and independent have particular meanings in mathematics (as alluded to in Henning's comment) and it introduces unnecessary confusion to overload them too much.
Now, if you want to make forecasts then clearly you can't use the percentage of shots on target as an indicator, because you don't know the percentage of shots on target until the game is over! You might consider including a particular team's past history of shots on target as a regressor (e.g. their percentage of shots on target in the last twenty games they played) but you can't use the value from the match whose score you are trying to predict.
If you just want an explanatory model rather than a predictive one, then you could use the percentage of shots on target. However, this is a bit dubious, because of the exact relationship that exists between three of the variables:
$$\textrm{Goals Scored} = \textrm{Attempts on Goal} \times \textrm{Percentage of Shots on Target}$$
In some sense, you already understand why there is a relationship between Goals Scored and Percentage of Shots on Target - it's given by this formula! Indeed, if you included an interaction term between Attempts on Goal and Percentage of Shots on Target, then your regression would pick this out as the only significant predictor.
For these reasons I recommend that you don't include the percentage of shots on target in your model. |
Use any dice to calculate the outcome of any probability | Yes, given a single random number $X$ which generates elements $\{1,\dots,k\}$, with the $0<P(X=1)=p<1$, and any real number $q\in[0,1]$ you can use repeated rolls of the die to simulate an event with probability $q$.
Basically, we're going to pick a real number in $[0,1]$ by successively reducing our range. To pick the real exactly, we'd have to roll the die infinitely many times, but luckily, with probability $1$, after finitely many rolls the current interval will no longer contain $q$, and we can stop, because then we know whether $q$ is less than or greater than that real.
We start with the entire interval $[a_0,b_0]=[0,1]$.
At step $n$, we have interval $[a_n,b_n]$. If $b_n<q$, then we halt the process and return "success." If $a_n>q$ then we halt with failure.
Otherwise, we roll the die.
If it comes up $1$, we take the next interval as $[a_n,(1-p)a_n+pb_n]$.
If it comes up something other than $1$, we take the next interval to be $[(1-p)a_n+pb_n,b_n]$.
The interval at step $n$ is at most length $\left(\max(p,1-p)\right)^n$. There is no guarantee that the process will stop in a known number of die rolls, but it will stop with probability $1$ in a finite number of rolls.
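Here is a small simulation of the procedure just described (my own sketch; `p` stands for the probability that the die shows $1$, and `random.random() < p` stands in for that roll):

```python
import random

def biased_event(q, p):
    """Return True with probability q, using only a primitive event of
    probability p (e.g. 'the die shows 1')."""
    a, b = 0.0, 1.0                  # current interval [a, b]
    while True:
        if b < q:
            return True              # interval entirely below q: success
        if a > q:
            return False             # interval entirely above q: failure
        if random.random() < p:      # the die shows 1
            b = (1 - p) * a + p * b  # keep the lower p-fraction
        else:
            a = (1 - p) * a + p * b  # keep the upper (1-p)-fraction

trials = 200_000
hits = sum(biased_event(q=0.3, p=1/6) for _ in range(trials))
print(hits / trials)  # should be close to 0.3
```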
Edit: Ian asked for the expected number of rolls to know where you are.
This is rather complex, and depends on $q$ and $p$ as follows. Given an infinite sequence $\{a_i\}_1^{\infty}$ each in $\{0,1\}$, we can define $R_n=\sum_{i=1}^n a_i$ and $L_n=n-R_n$. We treat the $a_i$ as "left-right" choices in a binary tree.
Then for almost all[*] $q\in[0,1]$ there exists exactly one sequence $\{a_i\}$ such that:
$$q=\sum a_ip^{L_n}(1-p)^{R_n}$$
This has the advantage that if $\{a_i\}$ corresponds to $q_1$ and $\{b_i\}$ corresponds to $q_2$, then if $q_1<q_2$, we have that for some $n$, $a_i=b_i$ for $i<n$ and $a_n<b_n$. That is, ordering is lexicographical ordering.
The expected number of rolls is going to depend on the $a_i$ corresponding to $q$.
That said, let $e_p(q)$ be the expected number of rolls. We can define the expected number recursively as follows:
$$e_p(q)=\begin{cases}
1 + (1-p)e_p\left(\frac{q-p}{1-p}\right)&p<q\\
1+pe_p\left(\frac{q}{p}\right)&p>q
\end{cases}$$
But whether $p<q$ is determined by $a_1$. Assuming $q\neq p$, if $a_1=0$ then $q<p$ while if $a_1=1$, then $q>p$ almost certainly.
Finally, if $a_1=0$, then $\frac{q}{p}$ corresponds to the sequence $\{a_2,a_3,\dots\}$ and if $a_1=1$ then $\frac{q-p}{1-p}$ corresponds to $\{a_2,a_3,\dots\}$.
So we really see this expected value is related to the sequence $\{a_i\}$, but it is a mess to compute it.
The value is:
$$\sum_{i=0}^\infty p^{L_i}(1-p)^{R_i}$$
which we can also see because $p^{L_i}(1-p)^{R_i}$ is the odds that $q$ is still in our interval after trial $i$.
This is no more than $$\frac{1}{1-\max(p,1-p)},$$ for any $q$.
If you want more efficient use of the die (I'm using it as a coin toss here) and it has $N$ sides with equal probabilities, then the expected number of rolls is:
$$\sum_{k=0}^\infty \frac{1}{N^k}=\frac{N}{N-1}$$
and is independent of $q$. (That's true if $p=\frac{1}{2}$ in the original approach.) |
$\forall A\subset \mathbb{N}$ the sum of the reciprocals of $A$ diverges iff $A$ is $(\tau, \mathbb{N})$-dense | Hint:
A nonempty subset $U$ of $\Bbb N$ is open iff
$$\sum_{n\notin U}\frac1n<\infty$$ |
Integral with modulus in the denominator. | In the unit circle, $\bar{z} = z^{-1}$, and thus $\lvert z - a\rvert^2 = (z - a)(\bar{z}-\bar{a}) = (z - a)(z^{-1} - \bar{a})$ in the unit circle. Hence
$$\int_{\lvert z\rvert = 1} \frac{f(z)}{\lvert z - a\rvert^2}\, dz = \int_{\lvert z\rvert = 1} \frac{f(z)}{(z - a)(z^{-1}-\bar{a})}\, dz = \int_{\lvert z\rvert = 1} \frac{zf(z)}{(z - a)(1 - \bar{a}z)}\, dz$$
We can decompose
$$\frac{z}{(z - a)(1 - \bar{a}z)} = \frac{1}{1 - \lvert a\rvert^2}\left(\frac{a}{z-a} + \frac{1}{1 - \bar{a}z}\right)$$
This allows us to write
$$\int_{\lvert z \rvert = 1} \frac{zf(z)}{(z-a)(1-\bar{a}z)}\, dz = \frac{1}{1-\lvert a\rvert^2}\left(a\int_{\lvert z\rvert = 1} \frac{f(z)}{z-a}\, dz + \int_{\lvert z \rvert = 1} \frac{f(z)}{1 - \bar{a}z}\, dz\right)$$
You'll be able to take it from here. |
Square absolute value of covariance $X$ and $Y$ | Are you looking for something like this?
Let $$\varrho(X,Y) := \frac{\text{cov}(X,Y)}{\sqrt{\text{var} X} \cdot \sqrt{\text{var} Y}}$$
the correlation coefficient. Then one can easily show (using Cauchy-Schwarz inequality) that $\varrho(X,Y) \in [-1,1]$. Thus
$$1 \geq |\varrho(X,Y)|^2 = \frac{\text{cov}(X,Y)^2}{\text{var} X \cdot \text{var} Y}$$
i.e. $$\text{cov}(X,Y)^2 \leq \text{var} X \cdot \text{var} Y$$ where $\text{var} X := \mathbb{E}((X-\mathbb{E}X)^2)$ denotes the variance of $X$. |
Why is it so good to know that $(1+x)^n \approx 1+nx$ for $nx \ll 1$? | Many of us know $\sqrt 2 \approx 1.414$. This allows you to find square roots of numbers near $2$, so $\sqrt {2.05}=\sqrt{2(1.025)}=\sqrt 2 \cdot \sqrt{1+0.025}\approx 1.414(1+\frac {0.025}2)= 1.414+\frac {1.414}{80}\approx 1.414+.018=1.432$. Maybe you know $9^3=729$ and want $9.1^3=9^3(1+\frac 1{90})^3\approx 9^3(1+\frac 1{30})\approx 729+24.3\approx 753$
It allows you to make small corrections for many facts you know.
I would do the first by saying to myself $2.05$ is $2.5\%$ bigger than $2$, so the square root is $1.25\%$ bigger, which is $\frac 1{80}$ to get to the final calculation. |
Show that a group of order $180$ is not simple. | Assume $G$ is simple and that $n_5=6$, so index$[G:N_G(P)]=6$, $P \in Syl_5(G)$. But then $G$ embeds homomorphically into $A_6$ (note that core$_G(N_G(P))=1$, since $G$ is simple!). But index$[A_6:G]=360/180=2$, implying $G$ is normal in $A_6$, which contradicts the simplicity of $A_6$. |
Projection onto a subspace | By definition (or by what can be proved from what I think the standard definitions are), we get:
$$S:=\text{Span}\,\{v_1,v_2\}\implies \text{Proj}_S\,v_3:=\frac{\langle v_3,v_1\rangle}{\left\|v_1\right\|^2}\,v_1+\frac{\langle v_3,v_2\rangle}{\left\|v_2\right\|^2}\,v_2$$
and in your case
$$\text{Proj}_S\,v_3:=\frac22\begin{pmatrix}1\\1\\0\end{pmatrix}+\frac22\begin{pmatrix}0\\1\\1\end{pmatrix}=\ldots$$ |
Relationship between $\mathcal{Spec}(R)$ and $\mathcal{Spec}(R_{\text{red}})$ | $Spec(R)$ and $Spec(R_{red})$ are homeomorphic via the map induced by the projection $R \to R_{red}$. |
English statement to logical expression | No, although close.
$\exists x \Biggl(\forall y \biggl(L(y,x) \land \forall z(L(y,z) \rightarrow (z=x)) \biggr) \Biggr)$
"There is some-x whom every-y loves and any-z that is loved by that y is that x."
"There is exactly one person everybody loves and noone loves anyone else."
Uniqueness is $\exists x~\Big(P(x)\land\forall z~\big(P(z)\to z=x\big)\Big)$ -- "there is someone who satisfies the predicate, and anyone who satisfies the predicate is that someone."
Here, your predicate is that the person is loved by everyone. $~P(x):=\forall y~L(y,x)$
You want to say "There is someone loved by everyone, and anyone who is loved by everyone is that (first) person."
$$\exists x~\biggl(\forall y~\Bigl(L(y,x)\Bigr)\land\forall z~\Bigl(\forall y~\bigl(L(y,z)\bigr)\to z=x\Bigr)\biggr)$$ |
Can I check whether integral solutions exist if I know a rational solution? | Another quadratic Diophantine equation: How do I proceed?
How to find solutions of $x^2-3y^2=-2$?
Generate solutions of Quadratic Diophantine Equation
Why can't the Alpertron solve this Pell-like equation?
Finding all solutions of the Pell-type equation $x^2-5y^2 = -4$
If $(m,n)\in\mathbb Z_+^2$ satisfies $3m^2+m = 4n^2+n$ then $(m-n)$ is a perfect square.
how to solve binary form $ax^2+bxy+cy^2=m$, for integer and rational $ (x,y)$ :::: 69 55
Find all integer solutions for the equation $|5x^2 - y^2| = 4$
Positive integer $n$ such that $2n+1$ , $3n+1$ are both perfect squares
Maps of primitive vectors and Conway's river, has anyone built this in SAGE?
Infinitely many systems of $23$ consecutive integers
Solve the following equation for x and y: <1,-1,-1>
Finding integers of the form $3x^2 + xy - 5y^2$ where $x$ and $y$ are integers, using diagram via arithmetic progression
Small integral representation as $x^2-2y^2$ in Pell's equation
Solving the equation $ x^2-7y^2=-3 $ over integers
Solutions to Diophantine Equations
How to prove that the roots of this equation are integers?
Does the Pell-like equation $X^2-dY^2=k$ have a simple recursion like $X^2-dY^2=1$?
If $d>1$ is a squarefree integer, show that $x^2 - dy^2 = c$ gives some bounds in terms of a fundamental solution. "seeds"
Find all natural numbers $n$ such that $21n^2-20$ is a perfect square.
http://www.maa.org/press/maa-reviews/the-sensual-quadratic-form (Conway)
http://www.springer.com/us/book/9780387955872
There is a difference between rational solutions and integral solutions, reflected in the form class numbers.
With discriminant $101,$ we have just one class of forms up to $SL_2 \mathbb Z$ equivalence. The Gauss-Lagrange reduced form is $x^2 + 9 xy - 5 y^2.$ The numbers primitively represented are, up to $300,$ these. Note that, since $10^2 - 101 = -1,$ a number $n$ is represented if and only if $-n$ is represented.
Primitively represented positive integers up to 300
1 = 1
5 = 5
13 = 13
17 = 17
19 = 19
23 = 23
25 = 5^2
31 = 31
37 = 37
43 = 43
47 = 47
65 = 5 * 13
71 = 71
79 = 79
85 = 5 * 17
95 = 5 * 19
97 = 97
101 = 101
107 = 107
115 = 5 * 23
125 = 5^3
131 = 131
137 = 137
155 = 5 * 31
157 = 157
169 = 13^2
179 = 179
181 = 181
185 = 5 * 37
193 = 193
197 = 197
211 = 211
215 = 5 * 43
221 = 13 * 17
223 = 223
227 = 227
233 = 233
235 = 5 * 47
239 = 239
247 = 13 * 19
251 = 251
281 = 281
283 = 283
289 = 17^2
299 = 13 * 23
Primitively represented positive integers up to 300
1 9 -5 original form
========================================================
For discriminant $404,$ there are three classes.
404 factored 2^2 * 101
1. 1 20 -1 cycle length 2
2. 4 18 -5 cycle length 6
3. 5 18 -4 cycle length 6
form class number is 3
============================================================
The first one, your $x^2 - 101 y^2,$ does not represent $\pm 71$
Primitively represented positive integers up to 300
1 = 1
20 = 2^2 * 5
37 = 37
43 = 43
52 = 2^2 * 13
65 = 5 * 13
68 = 2^2 * 17
76 = 2^2 * 19
85 = 5 * 17
92 = 2^2 * 23
95 = 5 * 19
97 = 97
100 = 2^2 * 5^2
101 = 101
115 = 5 * 23
124 = 2^2 * 31
125 = 5^3
155 = 5 * 31
179 = 179
188 = 2^2 * 47
221 = 13 * 17
223 = 223
233 = 233
235 = 5 * 47
247 = 13 * 19
260 = 2^2 * 5 * 13
283 = 283
284 = 2^2 * 71
299 = 13 * 23
Primitively represented positive integers up to 300
1 20 -1 original form
==========================================================
However, $4 x^2 + 18 xy - 5 y^2$ does
Primitively represented positive integers up to 300
4 = 2^2
5 = 5
13 = 13
17 = 17
19 = 19
20 = 2^2 * 5
23 = 23
25 = 5^2
31 = 31
47 = 47
52 = 2^2 * 13
65 = 5 * 13
68 = 2^2 * 17
71 = 71
76 = 2^2 * 19
79 = 79
85 = 5 * 17
92 = 2^2 * 23
95 = 5 * 19
100 = 2^2 * 5^2
107 = 107
115 = 5 * 23
124 = 2^2 * 31
131 = 131
137 = 137
148 = 2^2 * 37
155 = 5 * 31
157 = 157
169 = 13^2
172 = 2^2 * 43
181 = 181
185 = 5 * 37
188 = 2^2 * 47
193 = 193
197 = 197
211 = 211
215 = 5 * 43
221 = 13 * 17
227 = 227
235 = 5 * 47
239 = 239
247 = 13 * 19
251 = 251
260 = 2^2 * 5 * 13
281 = 281
284 = 2^2 * 71
289 = 17^2
299 = 13 * 23
Primitively represented positive integers up to 300
4 18 -5 original form
================================== |
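Lists like the ones above can be sanity-checked with a brute-force search. Below is a Python sketch; the search window is an assumption, and for an indefinite form a bounded window is only a heuristic, so a number missed by the search is not thereby proved non-representable.

```python
from math import gcd

def primitively_represented(q, limit=300, window=300):
    """Which n in 1..limit arise as q(x, y) with gcd(x, y) = 1 and |x|, |y| <= window?
    The window is a heuristic cutoff, not a proof of non-representation."""
    found = set()
    for x in range(-window, window + 1):
        for y in range(-window, window + 1):
            if gcd(x, y) != 1:
                continue
            n = q(x, y)
            if 1 <= n <= limit:
                found.add(n)
    return sorted(found)

q1 = lambda x, y: x * x + 9 * x * y - 5 * y * y   # reduced form of discriminant 101
print(primitively_represented(q1)[:12])           # compare with the first list above

q2 = lambda x, y: x * x - 101 * y * y             # principal form of discriminant 404
print(71 in primitively_represented(q2))          # False, matching the discussion above
```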
Why can a matrix without a full rank not be invertible? | Suppose that the columns of $M$ are $v_1, \ldots, v_n$, and that they're linearly dependent. Then there are constants $c_1, \ldots, c_n$, not all $0$, with
$$
c_1 v_1 + \ldots + c_n v_n = 0.
$$
If you form a vector $w$ with entries $c_1, \ldots, c_n$, then (1) $w$ is nonzero, and (2) it'll turn out that
$$
Mw = c_1 v_1 + \ldots + c_n v_n = 0. (*)
$$
(You should write out an example to see why this first equality is true).
Now we also know that
$$
M0 = 0. (**)
$$
So if $M^{-1}$ existed, we could say two things:
$$
0 = M^{-1}0 \ (**)\\
w = M^{-1} 0\ (*)
$$
But since $w \ne 0$, these two are clearly incompatible. So $M^{-1}$ cannot exist.
Intuitively: a nontrivial linear combination of the columns is a nonzero vector that's sent to $0$, making the map noninvertible.
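A quick numerical illustration of the argument, with a hypothetical $3\times 3$ matrix whose third column is the sum of the first two:

```python
import numpy as np

# Columns v1, v2, v3 with v3 = v1 + v2, so w = (1, 1, -1) is a dependency.
M = np.array([[1.0, 0.0, 1.0],
              [2.0, 1.0, 3.0],
              [0.0, 4.0, 4.0]])
w = np.array([1.0, 1.0, -1.0])

print(M @ w)                      # [0. 0. 0.]  -- a nonzero w sent to 0, as in (*)
print(np.linalg.matrix_rank(M))   # 2, not full rank
try:
    np.linalg.inv(M)
except np.linalg.LinAlgError as e:
    print("not invertible:", e)   # singular matrix
```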
But when you really get right down to it: proving this, and things like it, help you develop your understanding, so that statements like this become intuitive. Think about something like "the set of integers that have integer square roots". I say that it's intuitively obvious that $19283173$ is not one of these.
Why is that "obvious"? Because I've squared a lot of numbers, and all the squares have a last digit that's either $0, 1, 4, 5, 6,$ or $9$ (because those are the last digits of squares of single-digit numbers). Now that I've told you that, my statement about "intuitively obvious" is obvious to you, too. But until you'd at least learned a little about integer squares by investigating them, your intuition wasn't as good as mine. Sometimes "intuition" is just another name for "applied experience." |
How to calculate an expected result? | You can define a random variable $X$ - the number of successes (success = winning the price) in 100 draws. Each draw has a chance to win of $1/100 = 0.01$. Because the draws are independent (Bernoulli trials), $X$ is binomially distributed with $n=100$ and $p=0.01$, i.e. $X \sim Bin(n, p)$, or $ X \sim Bin(100, 0.01)$. The expected value of a binomially distributed variable is $E(X) = n \cdot p$, that's why $E(X) = 100 \cdot 0.01 = 1$. (The expected value for the binomial distribution is very intuitive). |
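A short simulation sketch: the draw count and win probability are the ones above, while the number of repeated trials is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, trials = 100, 0.01, 50_000
wins = (rng.random((trials, n)) < p).sum(axis=1)   # 100 draws per trial, each wins w.p. 1%
print(wins.mean())                                  # close to E(X) = n * p = 1
```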
what is the behaviour of moving dot with 50% chance to go left or right? | This is the simple symmetric random walk on the integers. It is well known that this walk is recurrent, and so visits every point infinitely often with probability one. In particular, it has probability zero of converging to either $+\infty$ or $-\infty$. Proofs can be found here, or in most introductory textbooks on random processes. |
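A small simulation sketch (one sample path, arbitrary step count) that illustrates, rather than proves, the recurrence:

```python
import numpy as np

rng = np.random.default_rng(0)
steps = rng.choice([-1, 1], size=1_000_000)   # each step left or right with probability 1/2
path = np.cumsum(steps)
print((path == 0).sum())                      # number of returns to 0 so far; this keeps growing
print(path[-1], np.abs(path).max())           # the path wanders; a finite sample cannot settle the limit
```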
Independence and conditional independence between random variables | I like to interpret these two concepts as follows:
Events $A,B$ are independent if knowing that $A$ happened would not tell you anything about whether $B$ happened (or vice versa). For instance, suppose you were considering betting some money on event $B$. Some insider comes along and offers to pass you information (for a fee) about whether or not $A$ happened. Saying $A,B$ are independent is to say that this inside information would be utterly irrelevant, and you wouldn't pay any amount of money for it.
Events $A,B$ are conditionally independent given a third event $C$ means the following: Suppose you already know that $C$ has happened. Then knowing whether $A$ happened would not convey any further information about whether $B$ happened - any relevant information that might be conveyed by $A$ is already known to you, because you know that $C$ happened.
To see independence does not imply conditional independence, one of my favorite simple counterexamples works. Flip two fair coins. Let $A$ be the event that the first coin is heads, $B$ the event that the second coin is heads, $C$ the event that the two coins are the same (both heads or both tails). Clearly $A$ and $B$ are independent, but they are not conditionally independent given $C$ - if you know that $C$ has happened, then knowing $A$ tells you a lot about $B$ (indeed, it would tell you that $B$ is guaranteed). If you want an example with random variables, consider the indicators $1_A, 1_B, 1_C$.
Of interest here is that $A,B,C$ are pairwise independent but not mutually independent (since any two determine the third).
A nice counterexample in the other direction is the following. We have a bag containing two identical-looking coins. One of them (coin #1) is biased so that it comes up heads 99% of the time, and coin #2 comes up tails 99% of the time. We will draw a coin from the bag at random, and then flip it twice. Let $A$ be the event that the first flip is heads, $B$ the event that the second flip is heads. These are clearly not independent: you can do a calculation if you like, but the idea is that if the first flip was heads, it is strong evidence that you drew coin #1, and therefore the second flip is far more likely to be heads.
But let $C$ be the event that coin #1 was drawn (where $P(C)=1/2$). Now $A$ and $B$ are conditionally independent given $C$: if you know that $C$ happened, then you are just doing an experiment where you take a 99%-heads coin and flip it twice. Whether the first flip is heads or tails, the coin has no memory so the probability of the second flip being heads is still 99%. So if you already know which coin you have, then knowing how the first flip came out is of no further help in predicting the second flip. |
Example of "sequence $μ_n$ is not weakly convergent, because $f$ is not a density" | You should take $f_n(x)=1/n$ if $x\in [0,n]$ and $0$ otherwise (otherwise its integral over the real line is not $1$). The sequence $(f_n)_{n\geqslant 1}$ indeed converges pointwise to $0$. Observe that the cumulative distribution function of $\mu$ is
$$\mu_n\left(\left(-\infty,t\right]\right)=\begin{cases}0&\mbox{if }t\leqslant 0\\
t/n&\mbox{if }0<t<n\\
1&\mbox{if }n\leqslant t\end{cases},$$
hence for each $t$, $\lim_{n\to +\infty}\mu_n\left(\left(-\infty,t\right]\right)=0$, which proves that we cannot have the convergence in distribution. |
Method of Moments - What's the logic? | You don't even have to use moments. Any function $g(X)$ whose expectation with respect to $P_{\theta}$ uniquely determines $\theta$ will do. By the Law of Large Numbers, the average of $g(X_i)$ will converge to that expectation, and you estimate $\theta$ to be the $\theta$ giving this expectation (this only works if the inverse of $\mathbb{E}_{\theta} g(X)$ is continuous in $\theta$). One often uses $g(X)=X^k$, in which case this gives estimates of the $k$-th moment of the underlying distribution (from which one deduces the parameters), hence the name.
This answer is shamelessly lifted from http://ocw.mit.edu/courses/mathematics/18-443-statistics-for-applications-fall-2003/lecture-notes/lec3.pdf where you can find more details. |
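As a concrete illustration of the idea (not taken from the linked notes), here is a moment-matching estimate of the rate of an exponential distribution, using $g(X)=X$; the true rate and sample size are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
lam_true = 2.5
x = rng.exponential(scale=1.0 / lam_true, size=100_000)

# Match the first moment: E_lambda[X] = 1/lambda, so invert the sample mean.
lam_hat = 1.0 / x.mean()
print(lam_hat)   # close to 2.5
```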
Maximal Consistent Sets of wff | Every maximal consistent set (m.c.s.) can be imagined as "one half" of the set of all formulas: for each formula $\alpha$, it contains either $\alpha$, or ${\sim}\alpha$, but not both, of course. Therefore, every m.c.s. must be infinite (since the set of all formulas is infinite).
Hence, if $\Delta$ is a finite consistent set of formulas, then $\Delta$ is not maximal. Of course, there are also infinite consistent sets that are still not maximal – e.g. any consistent normal modal logic; say, $K$. |
Given starting point and length of line, find end point which lies on a given line | Given the coordinates of $P_1, P_2$ and $P_3$, you can find the lengths of the sides of the triangle $P_1P_2P_3$. (Build the difference vectors $P_i-P_j$, then use Pythagoras to compute the lengths $|P_i-P_j|$).
Using trigonometry, you can compute the angle $\alpha$ at $P_2$, which belongs to both the big triangle $P_1P_2P_3$ and the small triangle $P_1P_2P_x$. Now that you have two sides ($P_1P_2$ and $k$) and one angle ($\alpha$) for the small triangle, you can compute the length of the side $P_2P_x$.
You can now compute $P_x$ by going into the direction of $P_3$ starting from $P_2$:
$$P_x = P_2 + \frac{|P_2-P_x|}{|P_3-P_2|} \cdot (P_3-P_2)$$ |
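A small numerical sketch of this recipe; the sample points are hypothetical, and taking the larger root of the law-of-cosines quadratic is an assumption made for illustration.

```python
import numpy as np

def endpoint_on_line(P1, P2, P3, k):
    """Sketch: find Px on the ray from P2 towards P3 with |P1 - Px| = k.
    Assumes such a point exists (k large enough)."""
    P1, P2, P3 = map(np.asarray, (P1, P2, P3))
    d12 = np.linalg.norm(P1 - P2)
    u = (P3 - P2) / np.linalg.norm(P3 - P2)          # unit vector along P2 -> P3
    cos_a = np.dot(P1 - P2, u) / d12                 # cos of the angle alpha at P2
    # Law of cosines in triangle P1 P2 Px:  k^2 = d12^2 + t^2 - 2 d12 t cos(alpha)
    t = d12 * cos_a + np.sqrt(k**2 - d12**2 * (1.0 - cos_a**2))   # |P2 - Px|
    return P2 + t * u

Px = endpoint_on_line((0.0, 0.0), (4.0, 0.0), (4.0, 3.0), 5.0)
print(Px, np.linalg.norm(Px - np.array([0.0, 0.0])))   # [4. 3.]  5.0
```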
Is $X^{***}$ useful in banach space theory? | I know a theoretical application of $X^{***}$, when you show the reflexivity of a dual space.
We denote the canonical map from a normed space $X$ to its bidual space $X^{**}$ by $i_{X}: X \to X^{**}, x \mapsto i_{X}[x]$ where $i_{X}[x](\varphi) = \varphi(x)$ for all $\varphi \in X^*$. Further, for an operator $T:X \to Y$ we denote its dual operator by $T^*: Y^* \to X^*$.
Consider the following lemma:
Let $X$ be a normed space, and consider $i_{X^*}: X^* \to X^{***}$ and $(i_X)^*: X^{***} \to X^*$. Then it holds that $$(i_X)^* \circ i_{X^*} = \operatorname{id}_{X^*}.$$
Proof. For all $\varphi \in X^*$ and $x \in X$ we have
$$\Big((i_X)^*(i_{X^*}(\varphi))\Big)(x) = \Big((i_{X^*}(\varphi)) \circ i_X\Big)(x) = ((i_{X^*}(\varphi))(i_X(x)) = (i_X(x))(\varphi) = \varphi(x).$$
So we have $(i_X)^* \circ i_{X^*} = \operatorname{id}_{X^*}$.
Now the application of this lemma:
Let $X$ be a Banach space. Then if $X$ is reflexive, $X^*$ is reflexive too. (This is actually an equivalence, but we need the lemma only for this direction.)
Proof. Since $i_X$ is bijective, $(i_X)^*$ is bijective too. Our lemma gives us $(i_X)^* \circ i_{X^*} = \operatorname{id}_{X^*}$. Hence $i_{X^*} = [(i_X)^*]^{-1}$ is surjective, and so we have that $X^*$ is reflexive.
As you see it's a pretty special case and very complicated. I don't think one should use that space since there are often better ways to show things. |
Implicit function theorem. | In order not to mix everything up, let us denote the new functions by $X_i$. More precisely, since ${\partial f \over \partial x_i}(a) \not =0$, there exists near $(a_1,\ldots,a_{i-1},a_{i+1},\ldots,a_n)$ a function $X_i (x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_n)$ such that $f(x_1,\ldots,x_{i-1},X_i, x_{i+1},\ldots,x_n)=0$ and $X_i (a_1,\ldots,a_{i-1},a_{i+1},\ldots,a_n)=a_i$.
Differentiating the identity $f(x_1,\ldots,x_{i-1},X_i, x_{i+1},\ldots,x_n)=0$ with respect to $x_{i+1}$, we get:
$${\partial f \over \partial x_i}(a)\,{\partial X_{i} \over \partial x_{i+1}}(a) +{\partial f \over \partial x_{i+1}}(a)=0 .$$
Then $${\partial X_{i} \over \partial x_{i+1}}(a)= -{ {\partial f \over \partial x_{i+1}} (a)\over {\partial f \over \partial x_{i}} (a)}.$$
Taking the product over $i$ (with the indices read cyclically modulo $n$), the partial derivatives of $f$ cancel and we deduce:
$$\prod_{i=1}^n{\partial X_{i} \over \partial x_{i+1}}(a)= (-1)^n$$
Principle of superposition after using seperation of variables | By looking at $h$ you have found what the eigenvalues are:
$$
\lambda_0=0
,\qquad
\lambda_n=-(n\pi/L)^2 \quad (n \ge 1)
.
$$
To each eigenvalue you have a corresponding function $h_n(x)$.
Then for each eigenvalue you solve
$$
g_n''(y)=-\lambda_n g_n(y)
,\qquad
g_n(0)=0
,
$$
to find $g_n(y)$.
If you combine $h_n$ and $g_n$, you get a separated solution to the PDE
for each $n \ge 0$:
$$
u_n(x,y)=h_n(x) \, g_n(y)
.
$$
These separated solutions are then put together in a linear combination
$$
u(x,y) = \sum_{n \ge 0} c_n \, u_n(x,y)
,
$$
where the constants $c_n$ are chosen such that $u(x,y)$ satisfies the last boundary condition $u(x,H)=f(x)$.
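For concreteness, here is a numerical sketch of that last step. It assumes Neumann conditions in $x$, so $h_0(x)=1$ and $h_n(x)=\cos(n\pi x/L)$ (consistent with the eigenvalue $\lambda_0=0$ above), and a hypothetical boundary datum $f$; with other boundary conditions the eigenfunctions, and hence the coefficient formulas, change.

```python
import numpy as np

# Assumed eigenfunctions: h_0(x) = 1, h_n(x) = cos(n*pi*x/L); then g_0(y) = y and
# g_n(y) = sinh(n*pi*y/L) solve g'' = -lambda_n g with g(0) = 0.
L, H, N = 1.0, 2.0, 40
f = lambda x: x * (L - x)                   # hypothetical boundary data f(x) = u(x, H)

x = np.linspace(0.0, L, 4001)
def integral(vals):                         # simple trapezoid rule on the grid x
    return float(np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(x)))

c = np.zeros(N + 1)
c[0] = integral(f(x)) / (L * H)             # the n = 0 term contributes c_0 * 1 * H at y = H
for n in range(1, N + 1):
    c[n] = (2.0 / L) * integral(f(x) * np.cos(n * np.pi * x / L)) / np.sinh(n * np.pi * H / L)

def u(xv, yv):
    s = c[0] * yv
    for n in range(1, N + 1):
        s = s + c[n] * np.cos(n * np.pi * xv / L) * np.sinh(n * np.pi * yv / L)
    return s

print(np.max(np.abs(u(x, H) - f(x))))       # small: the truncated series reproduces f on y = H
```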
Fractional Linear Transformation | Hint: you have to prove two things
First: if $\Im(z)>0$ then $|w|\leq 1$
(and if you try $z=-2+i$ you will see this is not true, so your problem is incorrect).
Second: if $|w|\leq 1$ then $\Im(z)>0$ |
Profit,Loss and Discount | The cost of the article is $c$, and the marked price of the article (before any discounts) is $p$.
Then,
A shopkeeper sells an article at 12% discount on the marked price and makes a profit of 10%
tells you that $0.88\cdot p = 1.1\cdot c$
and
If he gives a discount of 4% he will earn a profit of Rupees 40
Tells you that $0.96 \cdot p = c + 40$
This should be all you need. |
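Solving the two equations, for instance with a computer algebra system (a sketch; the symbols are the ones above):

```python
from sympy import symbols, solve, Rational

p, c = symbols('p c', positive=True)
sol = solve([Rational(88, 100) * p - Rational(110, 100) * c,    # 0.88 p = 1.10 c
             Rational(96, 100) * p - (c + 40)],                 # 0.96 p = c + 40
            [p, c])
print(sol)   # {p: 250, c: 200}: marked price 250, cost 200, so the 4% discount leaves profit 40
```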
proof $\lim \inf x_n + \lim \inf y_n \le \lim \inf (x_n+y_n)$ (clarification) | For the first question: For $k\geq n$, $x_{k}+y_{k}\in X_{n}+Y_{n}$ because each $x_{k}\in X_{n}=\{x_{l}: l\geq n\}$ and $y_{k}\in Y_{n}=\{y_{l}: l\geq n\}$. So we conclude that $XY_{n}:=\{x_{l}+y_{l}: l\geq n\}\subseteq X_{n}+Y_{n}$.
For the second question: Let $\liminf(x_{n}+y_{n})=\lim_{k}(x_{n_{k}}+y_{n_{k}})$. Since $(x_{n_{k}})$ is bounded, there is a further subsequence $(x_{n_{k_{i}}})$ of $(x_{n_{k}})$ such that $\lim_{i}x_{n_{k_{i}}}$ exists. Since $\liminf x_{n}$ is the smallest limit point of its convergent subsequences, $\liminf x_{n}\leq\lim_{i}x_{n_{k_{i}}}$. Now the corresponding subsequence $(y_{n_{k_{i}}})$ of $(y_{n_{k}})$ need not converge, but there is a further subsequence $(y_{n_{k_{i_{l}}}})$ of $(y_{n_{k_{i}}})$ such that $\lim_{l}y_{n_{k_{i_{l}}}}$ exists. Once again we have $\liminf y_{n}\leq\lim_{l}y_{n_{k_{i_{l}}}}$. The corresponding subsequence $(x_{n_{k_{i_{l}}}}+y_{n_{k_{i_{l}}}})$ of $(x_{n_{k}}+y_{n_{k}})$ is convergent and $\lim_{l}(x_{n_{k_{i_{l}}}}+y_{n_{k_{i_{l}}}})=\lim_{k}(x_{n_{k}}+y_{n_{k}})$. But $\lim_{l}(x_{n_{k_{i_{l}}}}+y_{n_{k_{i_{l}}}})=\lim_{l}x_{n_{k_{i_{l}}}}+\lim_{l}y_{n_{k_{i_{l}}}}=\lim_{i}x_{n_{k_{i}}}+\lim_{l}y_{n_{k_{i_{l}}}}$, and the result follows.
What is the word instead of "valley" | There is nothing wrong with using valley in this context. It is a visual metaphor which will be readily understood.
An alternative word might be trough, which is a bit more common in this abstract sense; but is still a visual metaphor. |
Small notation question about union of chains (Set Theory) | Note that, by definition, $x \in \bigcup C$ iff there is some $y \in C$ such that $x \in y$.
In your specific example, $\bigcup C$ consists precisely of those elements contained in some $X_a$, where $a \in I$.
So, if for example $C = \{X_1,X_2,X_3\}$, where $X_1 = \{1\}$, $X_2 = \{1,2\}$, $X_3 = \{3,4\}$, then $\bigcup C = \{1,2,3,4\}$. |
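The same example, phrased with Python sets:

```python
C = [{1}, {1, 2}, {3, 4}]          # X_1, X_2, X_3 from the example
print(set().union(*C))             # {1, 2, 3, 4}, the union of C
```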
Looking for a reference that builds homological algebra from the simplest setting upwards | The number of theorems in homological algebra that don't involve results for specific rings is small. Those theorems wouldn't be any clearer if you assumed yourself to be in the category of abelian groups versus a general ring. You should just read Weibel's book in detail. There he does what you say: he develops some fundamental machinery, and in Chapters 3 and 4 he proceeds through various rings, indeed starting with abelian groups. Those chapters basically apply the machinery of homological algebra to see what can be said about specific rings and modules.
In addition, you should check out Beilinson's notes. Lubkin's book "Cohomology of Completions" does homological algebra in a general abelian category and for most people this approach is unreadable, but curious and of academic interest.
Two variable limits via paths - are there pathalogical examples? | An observation to address your second question: If there exists two paths that approach $0$ which give different values for $f$, then there exist infinitely many paths approaching $0$ for which $f$ has no limit.
Suppose $\gamma_1(t)$ and $\gamma_2(t)$ give $f$ different limits as $t$ approaches $0$ from the left. Then for any real $c\ne0$, consider the path
$$\gamma_c(t)=\sin\left(\frac{c}{t}\right)\gamma_1(t)+\cos\left(\frac{c}{t}\right)\gamma_2(t)$$
To see that $\lim_{t\to 0^-}f(\gamma_c(t))$ does not exist, consider the subsequence $t_n=-\frac{2c}{\pi n}$.
However, if we only consider paths whose limits exist then we can construct a function in which there is only one deviant path. Let
$$f(x,y)=\begin{cases}1&y\ne 0\text{ or }x\ge0\\0&y=0\text{ and }x<0\end{cases}$$
Notice that any path whose limit exists will give a limit of $1$, except if there is a neighborhood around the origin in which the path approaches along the $x$-axis from the left. So, ignoring reparametrizations, and considering two paths equal if their images agree in some neighborhood of the origin, we have defined a function with a unique deviant path. |
Polyhedral symmetry in the Riemann sphere | I finally found an answer online. It turns out that Felix Klein was also aware of these functions in 1878: he used them to construct his famous solution to the quintic equation!
I'll post here the parts of the derivation that are relevant to my question. It is clear from the form of the functions $F_{a,b}$ that each one is defined in terms of two monic polynomials, which we can call $G(z)$ and $H(z)$ (the references call it $f$ and $H$, but I will keep the notation consistent with my question to avoid any confusion). As I said above, in the Riemann sphere $G$ has zeros at the vertices of the corresponding regular polyhedron, and $H$ has zeros at the face centers. We have
$$F_{a,b}(z) \propto \frac{G(z)^a}{H(z)^b}$$
There is another monic polynomial that can be defined, called $T(z)$, which has zeros at the edge midpoints of the polyhedron. It can be constructed, for example, as the Jacobian of $G$ and $H$, expressed in symmetric form by substituting $z=x/y$ and multiplying by $y$ until there are no factors of $y^{-1}$ left:
$$T(x,y) \propto \begin{vmatrix}
G_{,x} & G_{,y} \\
H_{,x} & H_{,y}
\end{vmatrix}$$
It can be demonstrated that there exists a relation or syzygy:
$$H^b + T^2 = f_{a,b} \: G^a$$
where the $f_{a,b}$ are the constants from my question. Thus knowing $G, H$ and $T$, using the relation above and comparing the two sides, one can directly calculate the value of the constants.
For references, see this MathOverflow post. |
How to build the Mathematics for outage probability. | I think that what’s going on is as follows:
First (and check the paper for notation): I think $f_{|h|^2}(y)$ designates the probability density function for a Rayleigh distribution of scale parameter $|h|$.
So the inner integral with respect to $dx$ is accumulating the portion of $P_0$ for a given value of $h$. The lower bound of this integral is the smallest valid value of $h$.
And then the outer integral with respect to $dy$ is accumulating over all values of $h$.
In more detail:
$$\begin{align*}
P_0 &= Pr\!\left(
|h|^2 \geq \alpha|f|^2|g|^2\gamma_{th} + \frac{\sigma^2\gamma_{th}}{P_t}
\right)
\\&=
\lim_{\delta y \to 0}
\sum_{k=0}^{\infty}
Pr\!\left(
|h|^2 \geq
\alpha|f|^2|g|^2\gamma_{th} + \frac{\sigma^2\gamma_{th}}{P_t}
\,\Bigg{|}\,
h = y_k
\right)
Pr\Bigl(h \in [y_k,y_{k+1})\Bigr)
&\text{where $y_k = k\delta y$}
\end{align*}$$
by the law of total probability. Intuitively, we tile the positive number line into intervals of width $\delta y$. Those are the possible values for $h$. Now $Pr\Bigl(h \in [y_k,y_{k+1})\Bigr) \approx f_{|h|^2}(y) \delta y$ and the approximation becomes perfect as $\delta y \to 0$ so the sum becomes an integral
$$
P_0 = \int_{0}^{\infty}
Pr\!\left(
|h|^2 \geq
\alpha|f|^2|g|^2\gamma_{th} + \frac{\sigma^2\gamma_{th}}{P_t}
\,\Bigg{|}\,
h = y
\right)
f_{|h|^2}(y) \, dy
$$
I think that something similar is going on with the inner integral with respect to $x$, so that
$$\begin{align*}
Pr\!\left(
|h|^2 \geq
\alpha|f|^2|g|^2\gamma_{th} + \frac{\sigma^2\gamma_{th}}{P_t}
\,\Bigg{|}\,
h = y
\right)
&=
\lim_{\delta x \to 0}
\sum_{\ell=0}^{\infty}
Pr\!\left(
|h|^2 \geq
\alpha|f|^2|g|^2\gamma_{th} + \frac{\sigma^2\gamma_{th}}{P_t}
\,\Bigg{|}\,
h = y, |f||g| = x_{\ell}
\right)
Pr\Bigl( |f||g| \in [x_{\ell}, x_{\ell+1}) \Bigr)
& \text{where $x_{\ell} = \ell \delta x$}
\\&=
\int_{\alpha x \gamma_{th} + \frac{\sigma^2\gamma_{th}}{P_t}}^{\infty}
f_{|f|^2|g|^2}(x) \, dx
\end{align*}$$
Hence
$$\begin{align*}
P_0 &= \int_{0}^{\infty}
\left(
\int_{\frac{\gamma_{th}(\alpha P_t x + \sigma^2)}{P_t}}^{\infty}
f_{|f|^2|g|^2}(x) \, dx
\right)
f_{|h|^2}(y) \, dy
\\&=
\int_{0}^{\infty}
\int_{\frac{\gamma_{th}(\alpha P_t x + \sigma^2)}{P_t}}^{\infty}
f_{|h|^2}(y)
f_{|f|^2|g|^2}(x)
\, dy dx
\end{align*}$$
where the last step is from rearranging the order of the integration and bringing the factor $f_{|h|^2}(y)$ inside the inner integral. |
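To see the double integral in action numerically, here is a sketch under assumed fading statistics (the paper's actual distributions may differ): $|h|^2\sim\mathrm{Exp}(1)$, and $|f|^2|g|^2$ distributed as the product of two independent $\mathrm{Exp}(1)$ variables, whose density is $2K_0(2\sqrt{x})$. The direct Monte Carlo estimate of $P_0$ and the double integral should then agree.

```python
import numpy as np
from scipy.integrate import dblquad
from scipy.special import k0

# Illustrative parameters; all values here are assumptions, not taken from the paper.
alpha, P_t, sigma2, gamma_th = 0.5, 2.0, 1.0, 0.8

# The double integral from the last display above.
P0_integral, _ = dblquad(
    lambda y, x: np.exp(-y) * 2.0 * k0(2.0 * np.sqrt(x)),       # f_{|h|^2}(y) * f_{|f|^2|g|^2}(x)
    0.0, np.inf,                                                 # outer variable x
    lambda x: gamma_th * (alpha * P_t * x + sigma2) / P_t,       # inner lower limit in y
    lambda x: np.inf)                                            # inner upper limit in y

# Monte Carlo estimate of the same probability, directly from the definition of P_0.
rng = np.random.default_rng(0)
h2 = rng.exponential(size=500_000)
fg2 = rng.exponential(size=500_000) * rng.exponential(size=500_000)
P0_mc = np.mean(h2 >= alpha * fg2 * gamma_th + sigma2 * gamma_th / P_t)

print(P0_integral, P0_mc)   # the two estimates should agree to a few decimals
```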
Can there be any representation of a finite group which is reducible but indecomposable? | There are analogues in finite characteristic -- the same as your example for $n=1$ if the group is cyclic of order $m$, and $m \bmod p=0$.
In characteristic zero, irreducibility for finite group representations is equivalent to indecomposability. This is due to Maschke's theorem, see for example
Understanding Maschke's Theorem |
Let $n,m \in \mathbb{N}$, $n>m,$ $\varepsilon>0$; prove that there exists some $N\in \mathbb{N}$ such that if $n>m>N$, then $\dfrac{n-m}{(m+1)^2}< \varepsilon$ | Hint:
Your argument is not valid since the $ N $ you find depends on $ n $.
You could observe that
$$\forall k\in\{m+1,m+2,...,n\}$$
$$\frac{1}{k^2}\le \frac{1}{k-1}-\frac 1k$$
and then telescope.
What's the difference between the main types of logic? | To give a very good answer would require probably to read a good book in mathematical logic.
Nevertheless here is a very short, and possibly rough, description of the terms you asked for. Be careful, what follows is a very imprecise description of the various kind of logic, but again I think this is the best one can get in the limit of a post (or at least, that is the best I can think of).
Mathematical logic denote that branch of mathematics that applies the technique of mathematics to the study of logics. It basically provides formal ways to define what a logical system is and how to prove stuff about these systems.
Propositional, first-order and second order logics are just few examples of these formal logical systems.
The differences in these systems are in the expressiveness of the said systems.
For instance in propositional logic one deals only with true-or-false statements while in first-order logic one can use formulas which states that certain relations hold for some objects or for all the objects.
In second-order logic we can start to state that relations between relations hold for certain relations or for all the possible relations, something which cannot be done in first-order logic.
The term Boolean logic is usually used a synonym for propositional logic.
Boolean algebra is a term used for denote a certain class of algebraic structure which captures the algebraic properties of propositional logic.
Edit: by the way if you look at wikipedia's article you can find lots of informations on the subjects.
Edit 2: the OP asked in the comments below for an example of second order sentence, here is an informal example.
In what follow I will use the following notation:
$P x$ will mean $x$ satisfies the property (unary relation) $P$ and $x R y$ will mean $x$ is in the relation $R$ with $y$.
For every binary relation $R$ such that
for all $x$ we have that $x R x$
for all $x$ and $y$, if $x R y$ and $y R x$ then $x=y$
for all $x$, $y$ and $z$, if $x R y$ and $y R z$ then $x R z$
for all properties $P$ such that there exists at least one $x$ for which $P x$ holds, there is an $m$ such that $P m$ holds and for every other $x$ for which $Px$ holds we have $m R x$
we have that for all $x$ and $y$ either $x R y$ or $y R x$.
This apparently complex statement is basically saying that every order $R$ which is a well-order, i.e. an order such that any property $P$ satisfied by at least one element has a minimal element satisfying it, is a total order.
If $N \trianglelefteq G$ then $(gN)^\alpha = g^\alpha N$ for $\alpha \in \mathbb{Z}$. | It's the definition of the product on the quotient group: by definition
$$
(xN)(yN)=xyN
$$
so there's really nothing to prove.
The neutral element in the quotient group is $1N=N$. So, by definition, $(gN)^0=N$, as $x^0$ is defined, in every group, to be the neutral element.
It is actually the case that if you consider that as the product set, under the convention that $X\cdot Y=\{xy:x\in X,y\in Y\}$, the equality
$$
xN\cdot yN=(xy)N
$$
holds. Indeed, since $N$ is normal, $Ny=yN$ and moreover $N\cdot N=N$ as $N$ is a subgroup. However, this is not really relevant for the definition of a group structure on $G/N$. |
Number of rectangles on a checkerboard with at least 4 black squares | To quickly count the number of rectangles of any size: Note that any rectangle is uniquely defined by a left and a right horizontal grid line, and a left and right vertical gridline. There are 9 horizontal gridlines, so you need to choose 2 different ones out of those: $9 \choose 2$. Likewise, there are $9 \choose 2$ possible pairs of vertical gridlines. So, you have ${9 \choose 2} \times {9 \choose 2}$ possible rectangles total: $36 \times 36 = 1296$ |
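A quick check of the count, once directly and once by explicit enumeration of the grid-line pairs:

```python
from math import comb
from itertools import combinations

print(comb(9, 2) ** 2)   # 36 * 36 = 1296

# The same count by enumerating pairs of horizontal and pairs of vertical grid lines.
rects = [(t, b, l, r)
         for t, b in combinations(range(9), 2)
         for l, r in combinations(range(9), 2)]
print(len(rects))        # 1296
```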
Show that a certain group is normal | $x g^2 x^{-1} = (x g x^{-1}) (x g x^{-1}) = (x g x^{-1})^{2} \in N$. |
Complex vector space identity | If one takes for $u$ the vector $\begin{pmatrix}x_1&\ldots& x_n \end{pmatrix}$ then the proposed equation becomes a quadratic form that is zero for all the values of the indeterminates $x_1, \ldots,x_n$ so its coefficients must all be zero. |
Exponential Sum Approximation | Assuming that the two exponents $(a,b)$ are not "too different", what you could do, for the range $\alpha \leq x \leq \beta$, is to minimize
$$\Phi(C,c)=\int_\alpha^\beta \Big[C e^{-c x}-A e^{-a x}-B e^{-b x} \Big]^2\, dx$$ which is equivalent to an exponential regression based on an infinite number of data points.
The antiderivative is
$$-\frac{A^2 e^{-2 a x}}{a}-\frac{4 A B e^{-(a+b)x}}{a+b}+\frac{4 A C e^{-(a+c)x}}{a+c}-\frac{B^2 e^{-2 b x}}{b}+\frac{4 B C e^{-(b+c)x}}{b+c}-\frac{C^2 e^{-2 c x}}{c}$$
Apply the bounds to get $\Phi(C,c)$, compute the partial derivatives and set them equal to $0$.
$$\frac{\partial \Phi(C,c)}{\partial C}=0 \implies C=f(c)\qquad \text{(which is an explicit function)}$$ and you are left with
$$\frac{\partial \Phi(C,c)}{\partial c}=\frac{\partial \Phi(f(c),c)}{\partial c}=0$$ which will require some numerical method (a quite nasty nonlinear equation in $c$ but not difficult to solve using Newton method with $c_0=\frac{a+b}2$).
Probably, generating data points and using nonlinear regression could be easier since the exponential fitting is quite trivial. Generate $n$ data points $(x_i,y_i)$ with $y_i=A e^{-a x_i}+B e^{-b x_i}$ to face the model
$$y=C e^{-c x}$$ In a first step, take logarithms and a linear regression will give estimates of $\log(C)$ and $c$ which will be good starting values for the nonlinear regression.
For illustration, I used $A=123$, $a=0.8$, $B=234$, $b=1.1$, $\alpha=3$, $\beta=5$ and generated $100$ data points. The nonlinear regression gives $(R^2 > 0.9999)$
$$\begin{array}{clclclclc}
\text{} & \text{Estimate} & \text{Standard Error} & \text{Confidence Interval} \\
C & 306.804 & 0.60903 & \{305.596,308.013\} \\
c & 0.91475 & 0.00057 & \{0.91363,0.91587\} \\
\end{array}$$ showing a maximum absolute error of $0.06$ while the sum of the two exponentials vary between $20$ and $3$. |
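The data-point route can be sketched as follows (the grid of $100$ points on $[3,5]$ is an assumption matching the description above): a log-linear fit supplies starting values and a nonlinear fit refines them.

```python
import numpy as np
from scipy.optimize import curve_fit

A, a, B, b = 123.0, 0.8, 234.0, 1.1
x = np.linspace(3.0, 5.0, 100)
y = A * np.exp(-a * x) + B * np.exp(-b * x)

# Step 1: log-linear fit for starting values; Step 2: nonlinear refinement.
slope, intercept = np.polyfit(x, np.log(y), 1)
popt, _ = curve_fit(lambda x, C, c: C * np.exp(-c * x), x, y,
                    p0=[np.exp(intercept), -slope])
print(popt)   # roughly C = 307 and c = 0.915, in line with the table above
```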
Let $A,B,C$ be square matrices. Calculate $(A+B+C)^3$ | You cannot simplify the full expansion, which has $27$ terms, because matrix multiplication is non-commutative ($AB\ne BA$ in general). |
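To see where the $27$ comes from, note that each term of the expansion is an ordered word of length three in $A,B,C$, and because of non-commutativity no two distinct words can be merged. A tiny enumeration sketch:

```python
from itertools import product

# Each term of (A+B+C)^3 corresponds to one ordered word of length 3 in {A, B, C};
# since AB != BA in general, words such as "AAB" and "ABA" stay distinct.
words = [''.join(w) for w in product("ABC", repeat=3)]
print(len(words), words[:6])   # 27, ['AAA', 'AAB', 'AAC', 'ABA', 'ABB', 'ABC']
```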
set of all holomorphic functions is an integral domain | The idea of the proof is right, but you should mention why the implication $f(a) \neq 0 \implies f(z)\neq 0 \;\forall z \in B(a,R)$ for some $R$ holds.
Since we are dealing with holomorphic functions, you don't need the assumption $f(a)\neq 0$; it is enough to assume $f\not \equiv 0$, since the zeroes of a holomorphic function that is not identically zero cannot have an accumulation point.
You should also explain why $g^n(a)=0$ implies $g(z)=0$ on $G$.
And you should mention somewhere that $\mathbb{C}$ is a field, and hence $f(a)\cdot g(a)=0\implies f(a)=0 \vee g(a)=0$.
Reference for proof of Hochschild-Kostant-Rosenberg for Hochschild cohomology | These notes might help, still the case considered by Kontsevich:
http://arxiv.org/abs/1107.0487 |
Convolution of indicator function with itself | You are doing nothing wrong. Since one considers the whole family of integer shifted B-splines of the given order, one chooses the "generating" B-spline with center of mass closest to 0, thus the choice of 0 or 1/2. This is what the "is a translation" part of the definition is for. |
Proof about "metric measure on a compact metric space" | In general, if we have nested measurable sets $\{ U_{n} \}_{n \in \mathbb{N}}$ such that $U_{n} \supset U_{n+1} \supset U_{n+2} \supset \dots$, and there is an $m \in \mathbb{N}$ such that $\mathcal{M}(U_{m}) < \infty$, then it follows that $\lim_{n \rightarrow \infty} \mathcal{M}(U_{n}) = \mathcal{M}(\cap_{n \in \mathbb{N}} U_{n})$. Since the $U_{n}$ as defined satisfies that $\mathcal{M}(U_{1}) \leq \mathcal{M}(S) < \infty$, and $\cap_{n \in \mathbb{N}} U_{n} = F$, it follows that $\lim_{n \rightarrow \infty} \mathcal{M}(U_{n}) = \mathcal{M}(\cap_{n \in \mathbb{N}} U_{n}) = \mathcal{M}(F)$.
The proof of the first sentence is as follows: Define $G_{n} = U_{n} \setminus U_{n+1}$. Then $U_{1} = (\cap_{n \in \mathbb{N}} U_{n}) \cup (\cup_{n \in \mathbb{N}}G_{n})$. Let $U = \cap_{n \in \mathbb{N}} U_{n}$. Since the $G_{n}$'s are disjoint, we obtain that $\mu(U_{1}) = \mu(U) + \sum_{n \in \mathbb{N}} \mu(U_{n}) - \mu(U_{n+1}) = \mu(U) + \mu(U_{1}) - \lim_{n \rightarrow \infty} \mu(U_{n})$. Since $\mu(U_{1}) < \infty$, we may subtract on both sides to obtain that $\mu(\cap_{n \in \mathbb{N}} U_{n} ) = \lim_{n \rightarrow \infty} \mu(U_{n})$.
Edit:
The proof I have given also applies to this example because $U_{n}$ is an open set and Borel Sets are measurable in the Caratheodory sense with respect to the metric outer measure. It will be sufficient to show that $\forall F \subset S$, $F$ closed, $\forall A \subset X$, $\mathcal{M}(A) = \mathcal{M}(A \cap F) + \mathcal{M}(A \setminus F)$, where $\mathcal{M}$ is the metric outer measure. Define $A_{n} = \{ x \in A : d(x,F) \geq \frac{1}{n} \}$. Observe that $A \setminus F = \cup_{n \in \mathbb{N}} A_{n} $. Note that $\mathcal{M}(A) \leq \mathcal{M}(A \cap F) + \mathcal{M}(A \setminus F)$ by monotonicity. Observe that for fixed $n \in \mathbb{N}$, since $A_{n}$ and $F$ are positively separated, $\mathcal{M}(A \cap F) + \mathcal{M}(A_{n}) = \mathcal{M}((A \cap F) \cup A_{n}) \leq \mathcal{M}(A)$. We would like to show that $\mathcal{M}(A \setminus F) \leq \lim_{n \rightarrow \infty} \mathcal{M}(A_{n})$. Assuming this, we then would have that $\mathcal{M}(A \cap F) + \mathcal{M}(A \setminus F) \leq \lim_{n \rightarrow \infty} (\mathcal{M}(A \cap F) + \mathcal{M}(A_{n})) = \lim_{n \rightarrow \infty} \mathcal{M}((A \cap F) \cup A_{n}) \leq \mathcal{M}(A)$, which would imply that $F$ is caratheodory measurable. Define $B_{n} = A_{n} \setminus A_{n-1}$, and $B_{1} = A_{1}$. Then $\mathcal{M}(\cup_{n=k}^{\infty} B_{2n-1}) = \sum_{n=k}^{\infty} \mathcal{M}(B_{2n-1})$, and $\mathcal{M}(\cup_{n=k}^{\infty} B_{2n}) = \sum_{n=k}^{\infty} \mathcal{M}(B_{2n})$, since they are positively separated. Note that $\mathcal{M}(\cup_{n=k}^{\infty} B_{2n}) , \mathcal{M}(\cup_{n=k}^{\infty} B_{2n-1}) \leq \mathcal{M}(A_{2k-1})$ by monotonicity. Assume $\lim_{n \rightarrow \infty} \mathcal{M}(A_{n}) < \infty$, since if it were infinite then $\mathcal{M}(A) = \infty$ and there is nothing to prove. Then $\mathcal{M}(\cup_{n=k}^{\infty} B_{2n}) , \mathcal{M}(\cup_{n=k}^{\infty} B_{2n-1}) < \infty$. Observe that $(A \setminus F) = \cup_{n \in \mathbb{N}} A_{n} \implies \mathcal{M}(A \setminus F) = \mathcal{M}(\cup_{n \in \mathbb{N}} A_{n}) \leq \mathcal{M}(A_{k} \cup (\cup_{n=k+1}^{\infty} B_{n}))$ for any fixed $k \in \mathbb{N}$. Then $\mathcal{M}(A \setminus F) \leq \lim_{k \rightarrow \infty} \mathcal{M}(A_{k}) + \sum_{n=k+1}^{\infty} \mathcal{M}(B_{n})$. Since $\sum_{n=1}^{\infty} \mathcal{M}(B_{n}) < \infty \implies \lim_{k \rightarrow \infty} \sum_{n=k}^{\infty} \mathcal{M}(B_{n}) = 0$. Hence, $\mathcal{M}(A \setminus F) \leq \lim_{n \rightarrow \infty} \mathcal{M}(A_{n}) \implies \mathcal{M}(A \cap F) + \mathcal{M}(A \setminus F) \leq \mathcal{M}(A \cap F) + \lim_{n \rightarrow \infty} \mathcal{M}(A_{n}) \leq \lim_{n \rightarrow \infty}\mathcal{M}((A \cap F) \cup A_{n}) \leq \mathcal{M}(A)$. Hence, $F$ is caratheodory measurable. This implies that the Borel sets of a metric space are measurable with respect to the metric outer measure. By the Caratheodory extension theorem, $\mathcal{M}$ is countably additive on the borel sets. The proof given earlier now applies to the metric outer measure. |
Need a quadratic function that is larger than zero when variables are unequal | Consider $$p(x)=\sum_{i<j}(x_i-x_j)^2$$
if any pair of $x_i$ and $x_j$ are not equal, that will contribute to a positive term. |
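A quick check on hypothetical inputs:

```python
from itertools import combinations

def p(x):
    return sum((x[i] - x[j]) ** 2 for i, j in combinations(range(len(x)), 2))

print(p([2, 2, 2]))   # 0 when all entries are equal
print(p([2, 2, 3]))   # 2 > 0 as soon as some pair of entries differs
```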
Area of a Surface of Revolution? | The formula for the area of a surface of revolution about the $y$-axis formed by $(x(t),y(t))$ on $a\le t\le b$ is $$2\pi \int_a^b x(t)\sqrt{(x'(t))^2+(y'(t))^2} dt.$$ In your case, let $x(t)=t$ so that $y(t)=\frac{1}{4}t^2-\frac{1}{2}t\ln t$. |
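A numerical sketch of this formula for the given curve; the integration interval $[1,3]$ is only an assumed placeholder, since the original bounds are not stated here.

```python
import numpy as np
from scipy.integrate import quad

# x(t) = t, y(t) = t^2/4 - (t ln t)/2, so x'(t) = 1 and y'(t) = t/2 - (ln t + 1)/2.
yp = lambda t: t / 2.0 - (np.log(t) + 1.0) / 2.0
integrand = lambda t: 2.0 * np.pi * t * np.sqrt(1.0 + yp(t) ** 2)
area, _ = quad(integrand, 1.0, 3.0)   # the interval [1, 3] is an assumption
print(area)
```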