How to get the exact position of the element based on 2 dynamic angles? | The problem can be rephrased in terms of vector addition.
We have two vectors we need to add:
The vector from centre of the green square to the base of the blue rectangle
The vector from the base of the blue rectangle to the tip of it
Let's call the angle of rotation of the green square $\alpha$ and that of the blue rectangle $\beta$. Moreover, let the distance from the centre of the square to the base of the rectangle be $a$, and the length of the rectangle be $b$.
Since we're measuring against the y-axis and not the x-axis, the second vector has a horizontal component ($x$) of $b\sin{\beta}$ and a vertical component ($y$) of $b\cos{\beta}$.
Likewise, the horizontal component of the first vector is $a\sin{\alpha}$ and the vertical component is $a\cos{\alpha}$.
All we need to do to find the sum of these is add the horizontal components, and add the vertical components:
So the resultant horizontal component is $b\sin{\beta} + a\sin{\alpha}$ and the vertical component is $b\cos{\beta} + a\cos{\alpha}$.
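A minimal sketch of the computation (the function name and screen-coordinate convention are my own; flip the sign of the $y$ terms if your canvas's $y$-axis points down):

```python
import math

def tip_position(cx, cy, alpha, beta, a, b):
    """Tip of the rectangle: centre -> base (angle alpha, length a),
    then base -> tip (angle beta, length b); angles measured from the y-axis."""
    x = cx + a * math.sin(alpha) + b * math.sin(beta)
    y = cy + a * math.cos(alpha) + b * math.cos(beta)
    return x, y

# With both angles zero, the tip sits straight above the centre:
print(tip_position(0.0, 0.0, 0.0, 0.0, 2.0, 1.0))  # (0.0, 3.0)
```
|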
Is every finite, nondiscrete $T_0$ space connected? | Let $X=\{1,2,3\}$ be endowed with topology $\tau=\{\varnothing,\{2\},\{3\},\{1,2\},\{2,3\},\{1,2,3\}\}$.
Then $X$ is $T_0$ and e.g. $\{3\}$ is a non-trivial clopen set. |
Baire Category Theorem: What should we really prove there? | Enumerate the rational numbers $q_1, q_2, q_3, \dots$. Then place an open interval with length $1/2^n$ centered at $q_n$. The union of these intervals is open and dense, yet has finite total measure (length) and thus cannot be all of $\mathbb{R}$. |
Algorithm to approximate roots using blackbox root counting function | An obvious algorithm is to look at $n(a)$ and $n(b)$. If the difference is positive then go ahead as the interval is "in play".
Look at $n\left(\frac{a+b}{2}\right)$. If this is equal to either $n(a)$ or $n(b)$ then half the interval is no longer in play; otherwise both halves are in play.
Continue subdividing the subintervals which are in play until your approximations are close enough.
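A minimal sketch of this subdivision scheme; here the blackbox $n(t)$ is assumed to count roots $\le t$, and is faked from a known root set for demonstration:

```python
def approximate_roots(n, a, b, tol=1e-9):
    """Subdivide [a, b], discarding subintervals no longer in play
    (n(hi) - n(lo) = number of roots in the subinterval)."""
    roots, stack = [], [(a, b)]
    while stack:
        lo, hi = stack.pop()
        k = n(hi) - n(lo)
        if k == 0:
            continue                      # not in play: discard
        if hi - lo < tol:
            roots += [(lo + hi) / 2] * k  # close enough: report midpoint
            continue
        mid = (lo + hi) / 2
        stack += [(lo, mid), (mid, hi)]
    return sorted(roots)

n = lambda t: sum(r <= t for r in (1.0, 2.0, 5.0))  # stand-in blackbox
print(approximate_roots(n, 0.0, 10.0))              # ~[1.0, 2.0, 5.0]
```
|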
Find the inverse of the following function. | The answer is:
$ j = (k-1)\pmod{N} + 1$ and $i = \lfloor(k-1)/N\rfloor+1.$
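A quick round-trip check, assuming the forward map being inverted is $k=(i-1)N+j$ with $1\le i,j\le N$:

```python
N = 4
for k in range(1, N * N + 1):
    j = (k - 1) % N + 1
    i = (k - 1) // N + 1
    assert (i - 1) * N + j == k  # the two formulas invert the forward map
print("round trip ok")
```
|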
$\ell_{p}$ spaces with pointwise multiplication | Hints:
Prove that $\Vert a\Vert_{\ell_\infty}\leq\Vert a\Vert_{\ell_p}$ and $a\in\ell_p\implies a\in\ell_\infty$.
Show that $\Vert ab\Vert_{\ell_p}\leq\Vert a\Vert_{\ell_\infty}\Vert b\Vert_{\ell_p}$ |
Q about mathematical induction | The image of your solution suggests that you were in the right track, but as you reached the 3rd step, you made a mistake.
For proof by induction, we need to prove the following things:
Check if $P(n_1)$ is true.
Suppose that $P(k)$ is true.
Prove that $P(k+1)$ is true.
Now, $$1*2*3*4 = 24 > 16 = 2^4$$
Thus, $P(4)$ is true.
Suppose $$P(k):\quad 1*2*3...*k > 2^k$$
Now consider $$1*2*3...*k*(k+1)$$
Since, $k≥4$,
therefore, $k+1>4$.
Now, multiply $k+1$ in $P(k)$.
Thus, $$1*2*3...*k*(k+1) > 2^k *(k+1)$$
Now, using the fact that if,
$$a>b$$
Then, $$K*a>K*b \quad \text{(for } K>0\text{)}$$
We see that,
$$1*2*3...*k*(k+1) > 2^k*4 = 2^{k+2}$$
Since $2^{k+2} > 2^{k+1}$, this gives
$$1*2*3...*k*(k+1) > 2^{k+1},$$
which is exactly the statement $P(k+1)$.
Hence, Proved! |
Determine the area of a shaded region between two curves | Hint
As you noticed, the two curves intersect at $x=-1$ and $x=2$. So the integrals have to be taken between these two bounds. You must have noticed that $y=x$ is always greater than $y=x^2-2$ over the considered range. So the area is given by $$S=\int_{-1}^2 (x- (x^2-2)) \, dx$$
I am sure that you remember that, in general, the antiderivative of $x^n$ is $\frac{x^{n+1}}{n+1}$.
I am sure that you can take it from here and finish.
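If you want to check the value you obtain at the end, a quick sympy computation (sketch):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.integrate(x - (x**2 - 2), (x, -1, 2)))  # 9/2
```
|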
Arfken's Fourier series problem | First of all be sure you understand what the problem means; Fourier series minimizes the squared error (or L2 norm) in a trigonometric expansion! Or, it is the optimal expansion in "least squares."
Now, when we take the partial derivative with respect to the coefficient $b_n$, every term in the series $\sum_m b_m\sin(mx)$ vanishes except the one with $m=n$. That means
$\displaystyle\partial_{b_n}\Delta_p = \partial_{b_n}\int_0^{2\pi}(f(x) - F(x))^2dx = -2\int_0^{2\pi}(f(x) - F(x))\partial_{b_n}F(x)dx$
where $F(x)$ is the Fourier series. And next,
$\displaystyle\partial_{b_n}F(x) = \partial_{b_n}\Big(\frac{a_0}{2} + \sum_{m=1}^\infty a_m\cos(mx) + b_m\sin(mx)\Big) = \sin(nx)$
and similarly for the $\cos(nx)$ term. Next we integrate this against the Fourier series. As the trig functions are orthogonal on $[0,2\pi]$ every term is zero besides $\int_0^{2\pi}\sin^2(nx)dx = \pi$.
That should be enough for you to finish the problem. :) |
Range of a sum of sine waves | I'll assume that all variables and parameters range over the reals, with $A,C\neq0$. Let's see how we can get a certain combination of phases $\alpha$, $\gamma$:
$$Ax+B=2\pi m+\alpha\;,$$
$$Cx+D=2\pi n+\gamma\;.$$
Eliminating $x$ yields
$$2\pi(nA-mC)=AD-BC+\alpha C-\gamma A\;.$$
If $A$ and $C$ are incommensurate (i.e. their ratio is irrational), given $\alpha$ we can get arbitrarily close to any value of $\gamma$, so the range in this case is at least $(-2,2)$. If $AD-BC$ happens to be an integer linear combination of $2\pi A$ and $2\pi C$, then we can reach $2$, and the range is $(-2,2]$, whereas if $AD-BC$ happens to be a half-integer linear combination of $2\pi A$ and $2\pi C$ (i.e. an odd-integer linear combination of $\pi A$ and $\pi C$), then we can reach $-2$, and the range is $[-2,2)$. (These cannot both occur if $A$ and $C$ are incommensurate.)
On the other hand, if $A$ and $C$ are commensurate (i.e. their ratio is rational), you can transform $f$ to the form
$$f(u)=\sin mu+ \sin (nu+\phi)$$
by a suitable linear transformation of the variable, so $f$ is periodic. In this case, there are periodically recurring minima and maxima, and in general you'll need to use numerical methods to find them.
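If you do resort to numerical methods, a brute-force scan is often enough as a first sketch (the parameters here are arbitrary, with incommensurate $A$ and $C$):

```python
import numpy as np

A, B, C, D = 1.0, 0.0, np.sqrt(2.0), 0.5
x = np.linspace(0.0, 2000.0, 2_000_001)
f = np.sin(A * x + B) + np.sin(C * x + D)
print(f.min(), f.max())  # creeps toward -2 and 2 as the scan lengthens
```
|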
Parameter values for non-zero real limit to exist | We start by
$$x\ln(1-x)=$$
$$-x^2-\frac{x^3}{2}-\frac{x^4}{3}+x^4\epsilon(x)$$
and
$$e^X=1+X+\frac{X^2}{2}+X^2\epsilon(X).$$
Thus
$$e^{x\ln(1-x)}=1-x^2-\frac{x^3}{2}-\frac{x^4}{3}+\frac{x^4}{2}+x^4\epsilon(x)$$
and the numerator becomes
$$(a-\frac 12)x^3+x^4(\frac 16+\epsilon(x))$$
the limit will exist (and be a non-zero real) if
$$a=\frac 12$$
and the limit is
$$\frac 16$$ |
Minimax theorem with discrete sets | No, take $g(x) = x^2$,
$f_1(x) = g(x-1)$, $f_2(x) = g(x+1)$.
Then $\min_x \max (f_1(x),f_2(x)) = 1$ and this occurs at $x=0$, whereas
$\max_k \min_x f_k(x) = 0$ and this occurs at $x=\pm 1$. |
Maximum coefficient in the expansion of $(5+3x)^{10}$ | Write $$(5+3x)^{10}=\sum_{i=0}^{10}a_ix^i.$$ So that
$$
a_i=\binom{10}{i}3^i5^{10-i}.
$$ Then $$f(i):=a_i/a_{i+1}=\frac{5}{3}\cdot \frac{i+1}{10-i}.$$ This is an increasing function of $i$ (for $0\leq i<10$). The maximum of the $a_i$ is attained at the smallest $i$ for which $f(i)>1$. Solving $f(i)=1$ yields $i=25/8$, which is larger than $3$ but smaller than $4$. Therefore $a_4$ is the largest coefficient.
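A two-line check of this conclusion:

```python
from math import comb

a = [comb(10, i) * 3**i * 5**(10 - i) for i in range(11)]
print(max(range(11), key=lambda i: a[i]), a[4])  # 4 265781250
```
|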
The set of all functions from $\mathbb{N} \to \{0, 1\}$ is uncountable? | Hint: Show that $\{0,1\}^\mathbb N$ is equinumerous with $\mathcal P(\mathbb N)$ and use Cantor's theorem to conclude there is no bijection between $\mathbb N$ and $\mathcal P(\mathbb N)$. |
Conditional entropy - example (clipping function) | If we are given $Y\neq 0$, we have $X=Y$, otherwise if $Y=0$, we have $X\sim U([-c,c])$.
So that
\begin{align*}
\mathbb E[d(X|Y=Y)] &= \mathbb P[Y=0]\mathbb E[d(X|Y=Y)|Y=0] + \mathbb P[Y\neq0]\mathbb E[d(X|Y=Y)|Y\neq0]\\
&= \mathbb P[Y=0] \cdot 1 + \mathbb P[Y\neq0]\cdot 0\\
&= \frac{c}{b}
\end{align*} |
Find a trig polynomial, $(P_N)_N$, s.t. $(P_N)_N\rightarrow |x|$ on $\Big[-1/2,1/2\Big]$ | Though you have been asked to find polynomials explicitly, you are not required to prove uniform convergence directly. Since $|x|$ is a continuous periodic function of bounded variation, its Fourier series converges uniformly by a well-known theorem. |
Limit of $\frac1{c^n}\iint_{[0,c]^n}\frac{f(x_1) +f(x_2) +\cdots+f(x_n)}{g(x_1) +g(x_2) +\cdots+g(x_n)}\,dx_1dx_2\cdots dx_n$ when $n\to\infty$ | Let $S_n$ denote the sum of $n$ i.i.d. random variables uniformly distributed on $(0,c)$. Then $a_n=\mathrm E\left(\dfrac{S_n}{n+S_n}\right)$. By the strong law of large numbers, $\dfrac{S_n}n\to\dfrac{c}2$ almost surely, hence
$$\lim\limits_{n\to\infty} a_n=\dfrac{c/2}{1+c/2}=\dfrac{c}{2+c}.$$
Edit In case B (added afterwards to the question) one should consider the sums $S_n$ and $T_n$ of $n$ random variables $f(X_i)$ and $g(X_i)$, for some i.i.d. random variables $(X_n)_{n\geqslant1}$ uniformly distributed on $(0,c)$. Then $a_n=\mathrm E(R_n)$ with $R_n=S_n/T_n$ and, by the strong law of large numbers, $S_n/n\to\mathrm E(f(X_1))$ and $T_n/n\to\mathrm E(g(X_1))$ hence $R_n\to\mathrm E(f(X_1))/\mathrm E(g(X_1))$ almost surely.
If, for example, $g\gt0$ almost everywhere on $(0,c)$, $R_n$ is well defined. If, for example, $|f|\leqslant Ag$ almost everywhere for some finite $A$, $(R_n)_{n\geqslant1}$ is uniformly integrable hence
$$\lim\limits_{n\to\infty} a_n=\lim\limits_{n\to\infty}\mathrm E(R_n)=\mathrm E(\lim\limits_{n\to\infty} R_n)=\dfrac{\mathrm E(f(X_1))}{\mathrm E(g(X_1))}=\dfrac{\displaystyle\int_0^cf(x)\mathrm dx}{\displaystyle\int_0^cg(x)\mathrm dx}.$$
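A quick Monte Carlo sanity check of case A (sketch; the parameters are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
c, n, trials = 3.0, 2000, 200
S = rng.uniform(0.0, c, size=(trials, n)).sum(axis=1)
print(np.mean(S / (n + S)), c / (2.0 + c))  # both ~0.6
```
|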
Solve for transform parameters given original and transformed vectors | Assuming P is well conditioned, you can start with the pseudo-inverse, and compute $T=P'/P$. This should have a form approximately $T=\begin{bmatrix} a_{1,1}&a_{1,2}&a_{1,3}&d_x\\a_{2,1}&a_{2,2}&a_{2,3}&d_y\\a_{3,1}&a_{3,2}&a_{3,3}&d_z\\0&0&0&1\end{bmatrix}$. To compute an exact analytical answer, the bottom row of T must be exactly $\begin{matrix}0&0&0&1\end{matrix}$, but if there are only small errors, it may still be possible to calculate an estimated solution that is "good enough" for your purposes using analytical methods.
The values of $d_x$, $d_y$, and $d_z$, are clearly visible in T, so use these to compute $$R_zR_yR_xS=D^{-1}T=D^{-1}P'/P$$.
Next, let $Q=R_zR_yR_x$. Then $$(QS)^TQS=(D^{-1}P'/P)^T(D^{-1}P'/P)$$
$$(QS)^TQS=S^TQ^TQS$$
Note that $Q$ is a rotation matrix, so $Q^TQ=I$, which yields $$(QS)^TQS=S^TIS=S^TS=(D^{-1}P'/P)^T(D^{-1}P'/P)$$
S is a diagonal matrix, so its diagonal elements will equal the square root of the diagonal elements of $(D^{-1}P'/P)^T(D^{-1}P'/P)$.
Now it is possible to compute $Q=D^{-1}TS^{-1}=D^{-1}(P'/P)S^{-1}$. Here Wikipedia provides the results of the matrix multiplication, $$Q=R_zR_yR_x=\begin{bmatrix} cos(b)cos(c)&cos(a)sin(c)+sin(a)sin(b)cos(c)&sin(a)sin(c)-cos(a)sin(b)cos(c)&0\\-cos(b)sin(c)&cos(a)cos(c)+sin(a)sin(b)sin(c)&sin(a)cos(c)-cos(a)sin(b)sin(c)&0\\sin(b)&-sin(a)cos(b)&cos(a)cos(b)&0\\0&0&0&1\end{bmatrix}$$
Using this array, $b=\arcsin(q_{3,1})$, $a=\arccos(q_{3,3}/\cos(b))$, and $c=\arccos(q_{1,1}/\cos(b))$.
If you resort to numerical methods, you may also find it useful (depending on the solver) to note that $\det(D)=\det(R_x)=\det(R_y)=\det(R_z)=1$ implies $\det(S)=\det(P'/P)$, and use this as an additional constraint.
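A sketch of the whole pipeline in numpy, assuming `P` and `P_prime` are $4\times N$ arrays of homogeneous points and the model $T=DR_zR_yR_xS$ described above (the function name is mine):

```python
import numpy as np

def decompose(P, P_prime):
    T = P_prime @ np.linalg.pinv(P)      # least-squares T = P'/P
    d = T[:3, 3]                         # translation (d_x, d_y, d_z)
    M = T[:3, :3]                        # rotation-times-scale block
    s = np.sqrt(np.diag(M.T @ M))        # S from (QS)^T (QS) = S^T S
    Q = M / s                            # Q = (D^{-1} T) S^{-1}
    b = np.arcsin(Q[2, 0])               # b = arcsin(q_{3,1})
    a = np.arccos(Q[2, 2] / np.cos(b))   # a = arccos(q_{3,3} / cos b)
    c = np.arccos(Q[0, 0] / np.cos(b))   # c = arccos(q_{1,1} / cos b)
    return d, s, (a, b, c)
```
|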
Approximation of Semicontinuous Functions | I came across this looking for a wrong theorem. Sorry if it is too late.
I would avoid taking sups because they may not preserve smoothness.
But if you know an increasing sequence of continuous functions that converge
to your lsc function, you may obtain smooth ones as follows: subtract $2^{-n}$ from the
current function, then approximate the result within $2^{-n-2}$ by a smooth function
(but in the whole space $R^d$ a brutal convolution will not work; I don't
know a better way than using partitions of unity before you convolve).
Anyway, your new sequence is smooth, still increasing, and converges to the same
limit.
Agreed, this will not be nonnegative if your initial function $f$ was zero
somewhere. For that case I am afraid I see no way to avoid doing this by hand,
working on the open set where $f > 2^{-n}$, doing the same sort of thing as above there, and gluing by hand in the remaining region. (Sorry, did not spend too much time). |
Prime, non-maximal, non-principal Ideal | $\;\;\;\;\;\;$Yes. Since $\Bbb Z$ is not a field then the kernel $(x,y)$ of the ring epimorphism $\Psi:\Bbb Z[x,y]\to\Bbb Z$ defined by $\Psi:f(x,y)\mapsto f(0,0)$ is not maximal.
$\;\;\;\;\;\;x,y$ are coprime polynomials in $\Bbb Z[x,y]$. $(x,y)$ is obviously proper, i.e. there are no units in $(x,y)$, but not principal, or else its generator would be a common divisor of $x,y$ and thus a unit.
$\;\;\;\;\;\;$Also, consider the fact that $(x,y)$ is properly contained in the proper $\Bbb Z[x,y]$ ideal $$\mathcal A:=\{f(x,y)\in\Bbb Z[x,y]:2|f(0,0)\}$$ made of those polynomials with even constant term. |
Counting vertices of a tree | HINT: Suppose that there are $n$ leaves. Then the sum of the degrees of the vertices is $33+n$, and there are $n+5$ vertices altogether. Use the handshaking lemma and the fact that a tree has one more vertex than it has edges to solve for $n$. |
Explanation for derivative of $x*e^x$ | We have the product rule, stating that for a function consisting of a product, the following applies:
$$f(x) = u(x) \cdot v(x)$$
$$f'(x) = u'(x)\cdot v(x) + u(x) \cdot v'(x)$$
For your function, we can set $$u = x$$ $$v = e^x$$
Giving us $$u' = 1$$ $$v' = e^x$$
Note: I'm using shorthand, meaning $u = u(x)$ and $v = v(x)$. Just for simplicity.
Anyway, plugging this into the formula;
$$f'(x) = u'v + uv' = 1e^x + xe^x = e^x(1+x)$$
Extra notes: During this, we're using a few rules of differentiation that I assume you're familiar with, namely that the derivative of $x$ with respect to $x$ is 1. Also, that $e^x$ is its own derivative. If you are unfamiliar with this, my advice is to jump back a few steps, because you've missed out, or forgotten a few things along the way. |
Let $f(x)=\sqrt{x^2+3x+4}$ be a rational-valued function of the rational variable $x$. Find the domain and range. | Let $s=\sqrt{x^2+3x+4}$. Then,
$$s^2=x^2+3x+4\Rightarrow x^2+3x+4-s^2=0.$$
Since the discriminant has to be the square of a rational number, we have
$$3^2-4\cdot 1\cdot (4-s^2)=t^2,\quad\text{i.e.}\quad 4s^2-7=t^2$$
for some $t\in\mathbb Q$.
So, we want $s,t\in\mathbb Q$ where $s\gt 0$ such that
$$(2s)^2-t^2=7.$$
Now let us consider a hyperbola $x^2-y^2=7$. We want every rational $x\gt 0$.
[figure: the hyperbola $x^2-y^2=7$ and a line through $(-4,3)$ meeting it again at a rational point]
First of all, $(x,y)=(-4,3)$ is on the hyperbola. So, for $u\in\mathbb Q$, by Vieta's formulas, the $x$ coordinate $\alpha$ of the intersection point other than $(-4,3)$ of the hyperbola with a line $y-3=u(x-(-4))$ is also rational :
$$x^2-(ux+4u+3)^2=7$$$$(1-u^2)x^2+(-8u^2-6u)x-16u^2-24u-16=0$$
$$-4+\alpha=-\frac{-8u^2-6u}{1-u^2}\Rightarrow \alpha=\frac{4u^2+6u+4}{1-u^2}$$
In order for this to be positive, we need to have $-1\lt u\lt 1$.
On the other hand, if the intersection point of the hyperbola with the line passing through $(-4,3)$ is a rational point, then the slope of the line is also rational.
Thus, we know that every rational $x\gt 0$ such that $x^2-y^2=7$ where $y\in\mathbb Q$ can be written as
$$\left\{x\mid x=\frac{4u^2+6u+4}{1-u^2},-1\lt u\lt 1,u\in\mathbb Q\right\}.$$
It follows from $s=\frac x2$ that the range of $f$ is
$$\left\{y\mid y=\frac{2u^2+3u+2}{1-u^2},-1\lt u\lt 1,u\in\mathbb Q\right\},$$
i.e.
$$\left\{y\mid y=\frac{p^2-3p+4}{2p-3},\color{red}{p\gt\frac 32},p\in\mathbb Q\right\}.$$
Here, I set $u=\frac{2-p}{p-1}=-1+\frac{1}{p-1}$. Note here that $p$ has to be larger than $\frac 32$ because $y$ has to be positive.
Then, from $x^2+3x+4-y^2=0$,
$$x=\frac{-3\pm\sqrt{9-4\left(4-\left(\frac{p^2-3p+4}{2p-3}\right)^2\right)}}{2}=\frac{-3\pm\frac{2p^2-6p+1}{2p-3}}{2}=\frac{p^2-6p+5}{2p-3},\frac{4-p^2}{2p-3}$$
So, the domain of $f$ is
$$\left\{x\mid x=\frac{p^2-6p+5}{2p-3},p\gt\frac 32,p\in\mathbb Q\right\}\cup\left\{x\mid x=\frac{4-p^2}{2p-3},p\gt\frac 32,p\in\mathbb Q\right\}.$$
By the way, we have
$$\left\{x\mid x=\frac{p^2-6p+5}{2p-3},p\gt\frac 32,p\in\mathbb Q\right\}=\left\{x\mid x=\frac{4-p^2}{2p-3},p\color{red}{\lt}\frac 32,p\in\mathbb Q\right\}$$
because $\frac{\color{blue}{(3-p)}^2-6\color{blue}{(3-p)}+5}{2\color{blue}{(3-p)}-3}=\frac{4-p^2}{2p-3}$.
It follows from this that the domain of $f$ is
$$\left\{x\mid x=\frac{p^2-6p+5}{2p-3},p\gt\frac 32,p\in\mathbb Q\right\}\cup\left\{x\mid x=\frac{4-p^2}{2p-3},p\gt\frac 32,p\in\mathbb Q\right\},$$i.e.
$$\left\{x\mid x=\frac{4-p^2}{2p-3},p\lt\frac 32,p\in\mathbb Q\right\}\cup\left\{x\mid x=\frac{4-p^2}{2p-3},p\gt\frac 32,p\in\mathbb Q\right\},$$
i.e.$$\left\{x\mid x=\frac{4-p^2}{2p-3},p\not=\frac 32,p\in\mathbb Q\right\}.$$ |
Trigonometric inequality | Just to see a different approach, we can write the inequality as:
$$
5\sin^2 x-5\sin x \cos x+2 \sin x \cos x-2 \cos^2 x >0
$$
$$
(\sin x- \cos x)(5 \sin x+2 \cos x)>0
$$
now let $\delta$ be such that:
$$
\cos \delta= \frac {5}{\sqrt{5^2+2^2}}\qquad \sin \delta= \frac {2}{\sqrt{5^2+2^2}}
$$
the inequality becomes:
$$
\sqrt{29}(\sin x - \cos x)(\cos \delta \sin x+\sin \delta \cos x)>0
$$
$$
(\sin x - \cos x)\sin (\delta+x)>0
$$
that you can solve with simple trigonometry. |
How to calculate the expectation value (example) | For $|x|<1$, the geometric series gives $\sum_{j=0}^{\infty}x^{j}=\frac{1}{1-x}$. Hence, $\sum_{j=1}^{\infty}x^{j}=\frac{1}{1-x}-1=\frac{x}{1-x}$. We can differentiate this series term-by-term to obtain
$$
\sum_{j=1}^{\infty}jx^{j-1}=\frac{1}{(1-x)^{2}}.
$$
Hence,
$$
\sum_{j=1}^{\infty}jx^{j-1}(1-x)=\frac{1}{1-x}.
$$
Now, take $x=PER$ in your example. |
Write your answers in rational form. Write down a polynomial $f(x)$ which has a root $\sqrt{5}$ | Hint:
$$x=\sqrt{5}\implies x^2=5\implies x^2-5=0$$ |
Method of dominant balance | The general idea of the method of dominant balance is fairly simple. Suppose you want to solve some equation $F(y)=0$ with the unknown $y$ and known $F$. In your case, $y$ is a function, $F$ is a differential operator, but the trick applies more or less regardless what we are talking about. Suppose also that $F$ is complicated and no easy explicit solution is in sight but you can split $F$ into two pieces $F=E+G$ such that the equation $E(y)=g$ is easily solvable with any right hand side and $G$ is in some sense much smaller than $E$ for all $y$ you are interested in. After solving $E(y)=0$, you get a solution $y_1$. Then you just say that you got a good approximation and all you need now is to solve $E(y)=-G(y_1)$ to compensate for the extra term. Then you get $y_2$ that solves this equation and switch to $E(y)=-G(y_2)$, etc. If you've ever seen the Picard iteration process, this should sound very familiar.
The whole art is in the splitting. In your case, you can clearly see that $1$ in $(1+3x)$ can be certainly put in $G$ but the rest may not be immediate. Unfortunately, as it turns out, you cannot chop anything off $x^2y''+3xy'+y$. Indeed, suppose you decide that the highest derivative term is in $E$ (something should be there after all). Then you get $x^2y_1''=0$. Let's consider $y_1=1$, say. Then you end up with $G(y)=(3x+1)y'+y$, so $-G(y_1)=-1$. So far, so good. Now let us find the solution to $x^2y''=-1$. We have $y'=\frac 1x+C$ and $y_2=\log x+Cx+C'$. Oops, we seem to get $y_2$ much larger than $y_1$, not a small correction as it should be! This is a clear indicator that our splitting into the dominant and the small part is totally wrong. Playing with this a bit more, we can convince ourselves that we have no choice but to use $E(y)=x^2y''+3xy'+y$. The only part that can go to $G$ is $y'$. Fortunately, solving $E(y)=0$ is not hard. You get $y=\frac 1x$ or $y=\frac{\log x}{x}$ (up to an irrelevant free constant factor). I'll do the Picard iteration game with $y_1=\frac 1x$. Since $E$ is a linear operator, it is enough to solve for the correction term rather than for the whole thing. We need $E(y)=\frac 1{x^2}$. This produces $y_2=\frac 1x+\frac c{x^2}$ where $6c-6c+c=1$, so $c=1$. Then $G(y_2)=-\frac 2{x^3}$, so $y_3=\frac 1x+\frac 1{x^2}+\frac c{x^3}$ with $12c-9c+c=2$, whence $c=\frac 12$. And so on. Just plug'n'play until bored.
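A quick sympy check that the terms found so far do satisfy the equation up to the expected order (assuming the full operator is $x^2y''+(1+3x)y'+y$, as described above):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = 1/x + 1/x**2 + sp.Rational(1, 2)/x**3
residual = x**2 * y.diff(x, 2) + (1 + 3*x) * y.diff(x) + y
print(sp.simplify(residual))  # -3/(2*x**4): the error is O(x**-4)
```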
This was the cookbook part of the story. The most interesting part is to justify the game. If the sequence you have obtained converges (in the appropriate space), then you can at least say that you found a solution (the limit). Let's see if here it is the case. You have, probably, realized by now that we are getting errors of the kind $c_kx^{-k}$. Each such error gives rise to the correction term $C_kx^{-k}$ with $C_k$ of size about $k^{-2}c_k$, which, in turn, produces a new error $c_{k+1}x^{-k-1}$ with $c_{k+1}$ of size about $k^{-1}c_k$. Thus we are lucky: the errors and correction terms decay quickly and uniformly near $\infty$. You may now say that the whole thing was pure misnomination and we've just done the usual power series representation. Well, that is, indeed, the case here. However, in the truly interesting situations, you get a divergent asymptotic series and the cookbook approach not followed by a careful justification part may give a totally meaningless result. |
Evaluate $\int \frac{1}{z^2+1}$ | No, it is not a sum. Usually, $\int_\Gamma$ with $\Gamma=\{z\in\mathbb{C}\,:\,|z|=r\}$ means that we are integrating along the loop $\Gamma\colon[0,2\pi]\longrightarrow\mathbb C$ defined by $\Gamma(t)=re^{it}$.
In this specific case (assuming $r>1$, so that both poles lie inside $\Gamma$), the value of the integral will be $2\pi i$ times the sum of the residues of $\frac1{z^2+1}$ at $i$ and $-i$. The first residue is equal to $-\frac i2$ and the second one is equal to $\frac i2$. Therefore, the integral is equal to $0$. |
Confusion about meaning of this question. High school Algebra level. | Your expression is alright
Let the two numbers be $a$ and $b$ then $a*b=10$ or $b=\frac{10}a$
The sum of the two numbers would be $a+b=a+\frac{10}a$. |
graphing the solution of $y'=x^2-3$ | First of all, let me remark that your ODE means "Find all primitives of the function $x \mapsto x^2 -3$." As you noticed, there are infinitely many primitives, and hence infinitely many solutions to your ODE. These solutions are parametrized by the real parameter $K$.
As a consequence, you cannot draw a graph of the solution: you should draw infinitely many graphs. However, it is immediate to see that if you draw the graph of $y=\frac{x^3}{3}-3x$, all other graphs are obtained by translating this particular graph upwards or downwards. I guess you can draw the graph of a polynomial.
In general there are many ways to single out a particular solution from the family of all solutions. The most elementary is to prescribe an initial condition like $y(x_0)=y_0$. In your case this would be equivalent to
$$
y_0=\frac{x_0^3}{3}-3x_0+K,
$$
or
$$
K=y_0-\frac{x_0^3}{3}+3x_0.
$$ |
Help with understanding this proof (I think it's Hensels Lifting?) | $\mathbb Z/p\mathbb Z$ is a field, the element $2x$ of this field is not zero, hence it has a multiplicative inverse.
Let me elaborate: Let $r$ be an integer that is not a multiple of $p$. We consider the remainder of $nr$ after division by $p$ for all $0\le n<p$. These are all different, because if $nr\equiv n'r\pmod p$, then $(n-n')r$ is divisible by $p$, and since $p$ is prime and does not divide $r$, $n-n'$ has to be divisible by $p$, and the only possibility for that is $n=n'$. But now if the remainders of $nr$ are all distinct numbers from $0$ to $p-1$, then actually each of these numbers has to occur exactly once. This means that for $r\not\equiv0\pmod p$ and $m$ arbitrary there is always an $n$ such that $nr\equiv m\pmod p$.
Now let $r=2x$, $m=-b$, $n=y$. |
Homogeneous space of SL(3, R) | Sorry if answering one's own question is frowned upon, but I want feedback on it and I think I got it.
Let $G = SL(3, \mathbb{R})$ and suppose $Spin(3) = \mathbb{S}^3$ is a homogeneous space of it. Then, since $\mathbb{S}^3$ is simply connected, there exists a compact subgroup $C$ of $G$ that also acts transitively on $\mathbb{S}^3$. If $X$ is the isotropy subgroup of some point in the sphere then $C/X = \mathbb{S}^3$. Moreover, since the sphere is simply connected, $X$ is connected.
On the other hand, the maximal compact subgroups of $G$ are isomorphic to $SO(3)$ and $C$ is contained in one such group, say $K$. Then $\mbox{dim}\; C\le \mbox{dim}\; K = 3 = \dim \mathbb{S}^3$. Since the sphere is a quotient of $C$, we conclude that $\mbox{dim}\; C = 3$, so $C$ is open in $K$; being also closed (it is compact), $C = K$. And since $X$ is connected and of dimension $0$, it is a point (it must be the identity). We deduce $SO(3) = \mathbb{S}^3$, which is false!
Hence, $Spin(3)$ is not a homogeneous space of $SL(3, \mathbb{R})$. |
finding recursive formula and show it converges to a limit | What you did is not wrong, but it's not complete.
What you did prove:
If the sequence $x_n$ has a limit, then the limit is equal to $200$.
What you did not prove:
The sequence $x_n$ has a limit.
Also, that's not what the question is asking you. The question says you need to find a formula for $x_n$, not the limit of $x_n$. |
Show matrix representation for $L$ with respect to the standard basis is $I−\frac 2 {\|v\|^2}vv^T$? | Simply:
$$\forall k=1,\ldots,m-1,\quad L(v_k)=v_k-\frac 2 {||v||^2}vv^Tv_k=v_k-\frac 2 {||v||^2}v\langle v,v_k\rangle=v_k$$
and
$$L(v)=v-\frac 2 {||v||^2}vv^Tv=v-\frac 2 {||v||^2}v\langle v,v\rangle=-v$$
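A small numerical check of the two identities with a random $v$ (sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
v = rng.standard_normal(4)
H = np.eye(4) - 2.0 * np.outer(v, v) / (v @ v)  # I - 2 v v^T / ||v||^2

u = rng.standard_normal(4)
u -= (u @ v) / (v @ v) * v                      # project u orthogonal to v
print(np.allclose(H @ v, -v), np.allclose(H @ u, u))  # True True
```
|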
Does Planck length contradict math? | The explanation is very simple: math is not reality.
Even if the amazing mathematical models of the reality we can measure are very detailed and have surprising predictive power, they are nothing but mathematical models.
The canonical reference for this is Wigner's The Unreasonable Effectiveness of Mathematics in the Natural Sciences. |
How can I prove that the determinant of a matrix formed of polynomials of degree n-2 or smaller is zero? | Hints: The space of polynomials of degree $n-2$ or less is a vector space of dimension $n-1$. Therefore, any set of $n$ polynomials from this space is linearly dependent. Hence there exist scalars $c_{1}, \ldots, c_{n}$, not all zero, such that $c_{1}p_{1} + \cdots + c_{n}p_{n} = 0$ (the zero polynomial).
So $c_{1}p_{1}(x) + \cdots + c_{n}p_{n}(x) = 0$ for all $x\in \Bbb{R}$. Thus $$c_{1} p_{1}(a_i) + \cdots + c_{n}p_{n}(a_{i}) = 0$$
for all $i = 1,\ldots,n$.
Can you take it from here? (Try writing the above $n$ equations in matrix form.) |
How to solve $y'+xy = y^4$? | Considering the equation
$$y' + xy = y^4$$ let $$y=\frac{1}{{z^{1/3}}}\implies y'=-\frac{z'}{3 z^{4/3}}$$ which makes
$$z'-3x z+3=0$$ which is first-order linear, and the general solution is $$z=c_1 e^{\frac{3 x^2}{2}}-\sqrt{\frac{3 \pi }{2}} e^{\frac{3 x^2}{2}}
\text{erf}\left(\sqrt{\frac{3}{2}} x\right)$$ Using the initial condition $z(0)=1$, you get $c_1=1$ making $$z=\frac{1}{2} e^{\frac{3 x^2}{2}} \left(2-\sqrt{6 \pi }
\text{erf}\left(\sqrt{\frac{3}{2}} x\right)\right)$$ Back to $y$
$$y=\frac{e^{-\frac{x^2}{2}}}{\sqrt[3]{1-\sqrt{\frac{3 \pi }{2}}
\text{erf}\left(\sqrt{\frac{3}{2}} x\right)}}$$
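A sympy verification of the equation for $z$ (sketch):

```python
import sympy as sp

x = sp.symbols('x')
z = sp.exp(3*x**2/2) * (1 - sp.sqrt(3*sp.pi/2) * sp.erf(sp.sqrt(sp.Rational(3, 2))*x))
print(sp.simplify(z.diff(x) - 3*x*z + 3))  # 0
```
|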
Does the existence of cokernels imply the existence of coequalizers? | No, not necessarily. For instance, consider the following subcategory of the category of pointed sets. The objects are the sets $0=\{*\}$ and $A=\{*,a,b\}$. The morphisms are the identity maps, the constant maps with value $*$, and all maps $f:A\to A$ such that $f^{-1}(\{*\})=\{*\}$.
This category has kernels and cokernels. The only nontrivial case to check here is that the maps $f:A\to A$ and $g:A\to A$ such that $f(a)=f(b)=a$ and $g(a)=g(b)=b$ have cokernels. But the map $A\to 0$ is a cokernel of both $f$ and $g$ since any map on $A$ in this category which sends either $a$ or $b$ to $*$ must send every point to $*$.
However, this category does not have a coequalizer of the identity $1_A:A\to A$ and the map $f:A\to A$ described in the previous paragraph. Indeed, suppose a map $h$ were a coequalizer of $1_A$ and $f$. Then $hf=h1_A$, so $h(a)=h(b)$. Since $f^2=f1_A$, there is a unique map $i$ from the codomain of $h$ to $A$ such that $ih=f$. This means $h(a)\neq h(*)$, so the codomain of $h$ is $A$ and $h$ can only be $f$ or $g$. But if $h=f$, then the map $i$ is not unique, since both $i=f$ and $i=1_A$ work. Similarly, if $h=g$, then $i$ is not unique, since $i$ could be either $f$ or the map which swaps $a$ and $b$. |
Complex analysis (basic, no solutions) | First one looks good. In the second one your choice of $\delta $ is dependent on the variable $z$, which is incorrect. Your $\delta $ must solely depend on the choice of $\epsilon$. Try to carefully understand the definition of limit of a function.
For the second one you can proceed as follows.
$$ |(z^2 +c ) -(z_{0}^{2} +c)| = |z^2 -z_{0}^{2}|=|(z +z_{0})(z -z_{0})| = |z +z_{0}||z -z_{0}| \le(|z|+|z_0|)|z-z_0|$$
We now make use of the following inequality which is a consequence of the Triangle Inequality, $$||a|-|b||\le|a-b|$$
For $|z-z_0|<1$, we have $$||z|-|z_0||\le|z-z_0|<1\implies|z|-|z_0|<1\implies|z|<1+|z_0|$$
and hence,
$$|(z^2 +c ) -(z_{0}^{2} +c)| \le(|z|+|z_0|)|z-z_0|<(1+2|z_0|)|z-z_0|\tag{1}$$
Now choose $\delta=\min\left(1,\frac{\displaystyle\epsilon}{\displaystyle1+2|z_0|}\right)$, which does not depend on the variable $z$.
Since $0<|z-z_0|<\delta\le 1$, inequality $(1)$ holds: $$|(z^2 +c ) -(z_{0}^{2} +c)| <(1+2|z_0|)|z-z_0|$$
Again, as $\delta \le \frac{\displaystyle\epsilon}{\displaystyle(1+2|z_0|)}$
So for $0<|z-z_0|<\delta$, $$|(z^2 +c ) -(z_{0}^{2} +c)| < (1+2|z_0|)|z-z_0|<(1+2|z_0|)\frac{\displaystyle\epsilon}{\displaystyle(1+2|z_0|)}=\epsilon$$
Q.E.D |
Every local minimum of $f(x) = \frac{1}{2}x^tAx + b^tx +c$ is also a global minimum | Once you know that $A$ is symmetric and positive semidefinite, there is an orthonormal basis transformation that makes it a diagonal matrix $D$ with non-negative real values in the diagonal.
Another simplification is that you can forget about $c$.
By rewriting everything in the new basis, we have a very simple form of the function $f(x)= x^tDx+b^tx$ (with new $b$ and new variable $x$...), which is
$f(x)= (d_1x_1^2 + b_1x_1)+ \ldots + (d_nx_n^2 + b_nx_n) $.
The condition that $x_0$ be a local minimum can be translated to all sections of the function: so for all $i$, $d_ix_i^2 + b_ix_i$ has a local minimum at the $i$-th coordinate of $x_0$.
If $d_i>0$, then we have a proper quadratic function with exactly one local minimum, and it is also a global minimum.
If $d_i=0$, then the only way $b_ix_i$ has a minimum is $b_i=0$. But then again, local minima and global minima coincide, as every real number is a local and global minimum. |
Integral of an infinite power tower. | This makes sense for $x>0$ (the integral is just a red herring):
$$y=\sqrt{x+{\sqrt{x+{\sqrt{x+\sqrt{x+\dots}}}}}}$$
$$y=\sqrt{x+y}$$
$$y^2-y-x=0$$
$$y=\frac{1\pm\sqrt{1+4x}}{2}$$
Discard the negative solution and then solve:
$$\int \frac{1+\sqrt{1+4x}}{2}dx$$
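For completeness, substituting $u=1+4x$ gives the antiderivative
$$\int \frac{1+\sqrt{1+4x}}{2}\,dx=\frac{x}{2}+\frac{(1+4x)^{3/2}}{12}+C.$$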
Fun fact:
$$2=\sqrt{2+{\sqrt{2+{\sqrt{2+\sqrt{2+\dots}}}}}}$$ |
Long inverse trignometry | HINT:
$$\tan y=\dfrac{\sqrt{1+x^2}-\sqrt{1-x^2}}{\sqrt{1+x^2}+\sqrt{1-x^2}}$$
Use Weierstrass substitution |
non-homogeneous boundary value problem - help! | When considering linear boundary value problems, you should apply the superposition principle in order to forget about non-homogeneous boundary conditions. I guess your problem is something like:
$$u_t = u_{xx}, \quad 0 < x < 1, \quad t > 0,$$
with boundary and initial conditions:
$$u(0,t) = 0, \quad u(1,t) = 100, \quad t > 0; \quad u(x,0) = f(x), \quad x \in [0,1]$$
for some known function $f(x)$. Make now the substitution $u = v+w$, such that $v$ satisfies homogeneous boundary conditions and $w$ "absorbs" them, so the sum of the two problems always leads to the original one. Assume $w(x,t) = A(t) x + B(t)$ and note that, for example, $w(x,t) = 100 \, x$ is a very good solution which satisfies the non-homogeneous boundary conditions. Then, the problem for $v$ becomes:
$$v_t = v_{xx} - w_t + w_{xx} = v_{xx}, \quad 0 < x < 1, \quad t > 0,$$
with boundary and initial conditions:
$$v(0,t) = 0, \quad v(1,t) = 0, \quad t > 0; \quad v(x,0) = f(x)-w(x,0), \quad x \in [0,1],$$ and note that everything containing information from $w$ is known. Now the problem is to determine the function $v$, which satisfies a homogeneous second order PDE (heat equation) subject to homogeneous boundary conditions. We can now apply separation of variables (or Sturm-Liouville theory if you prefer so) to the PDE for $v$, assuming:
$$v(x,t) = X(x)T(t), \quad T \neq 0 \neq X, $$
which yields to the set of equations that you have already come up to before, i.e.:
$$\begin{align}
X'' - \lambda X & = 0,\\
T' - \lambda T & = 0,
\end{align}
$$ where $\lambda$ is a constant, which can be negative, zero or positive. Apply the boundary conditions for $X$, which result to be: $X(0)=X(1)=0$ to solve the problem for $X$ (you don't need to solve after for $T$). This only makes sense for negative values of $\lambda$, so $\lambda = - |\lambda| = - k^2$ and it yields:
$$X(x) = A \cos k x + B \sin kx, $$
applying the aforementioned boundary conditions, we have: $A = 0$ and:
$$B \sin k = 0,$$
which is true if $B = 0$ (not valid) or $\sin k = 0$, which happens to be if $k$ is a natural multiple of $\pi$, so:
$$k = n \pi, \quad n = 1,2,3,\ldots \in \mathbb{N},$$
so the solutions for every $n$ are:
$$X_n(x) = B_n \sin k_n x = \sin n\pi x,$$
where I have set $B_n \to 1$ due to the homogeneity of the equation for $X$. These functions are called eigenfunctions of the problem and you can obtain the solution $v$ by expanding it as follows:
$$v(x,t) = \sum_{n=1}^\infty X_n(x) C_n(t),$$
and solving for the Fourier coefficients, $C_n(t)$. I'm sure you can take it from here.
I hope this might help you.
Cheers! |
Motivation for contractions/extensions of ideals | One way of getting insights about commutative algebra results for me is looking for their implications in Algebraic Geometry.
These notes from Andreas Gathmann (https://www.mathematik.uni-kl.de/~gathmann/en/commalg.php) have a whole chapter (9) dedicated to ring extensions, and the results Lying Over and Going Up are translated to problems about when a projection of a variety is well behaved or not. In these cases, you can really see how the contraction of a prime ideal is related with the problem of surjectivity of this projection, for example. |
Confusing notation: Presentation theory of symmetric groups (Coxeter presentations) | Presumably:
$$
G \cong \langle \ s_i \mid \forall i \ (1 \le i \le n \to s^2_i = 1) \text { and }$$
$$\forall i \ (1 \le i \le (n - 1) \to (s_i s_{i+1})^3 = 1) \text { and }$$
$$\forall i \ \forall j \ ((1 \le i < j \le n \text { and } |i - j| > 1) \to (s_i s_j)^2 = 1) \ \rangle$$ |
what prime numbers can be written as the sum of two consecutive squares? | Basically, we are looking for primes $p=n^2 + (n+1)^2$ according to Wojowu. But primes, except the first two, are either of the form $p=6k+1$ or $p=6k-1$.
So we are looking for solutions of $2n^2 + 2n + 1 = 6k-1$ and $2n^2 +2n +1 = 6k+1$. So the problem is now reduced to solving a quadratic equation in $n$ with a parameter $k$. We demand that the discriminant $d$ of these quadratic equations be a square.
The case of $p=6k-1$ has a $d=12k-3$. $d$ is a square for $k=1,7,19,...$ and the corresponding primes are $p=5=1^2 + 2^2$, $p=41=4^2 + 5^2$ and $p=113=7^2 +8^2$. Most probably, not all square values of d will lead to a prime.
The case of $p=6k+1$ has a $d=12k+1$. $d$ is a square for $k=2,10,30,52...$ and the primes are $p=13= 2^2 + 3^2$, $p=61=5^2 + 6^2$, $p=181=9^2 + 10^2$, $p=313=12^2 +13^2$.
It would be nice if someone could write a program (I cannot) to look for more $k$ values that lead to primes. By the way, the first two primes ($2$ and $3$) cannot be written as a sum of two consecutive squares.
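Since the answer asks for a program, here is a short search sketch (using sympy's `isprime`):

```python
from sympy import isprime

# Primes of the form n^2 + (n+1)^2 = 2n^2 + 2n + 1, with the k of each case:
for n in range(1, 40):
    p = 2*n*n + 2*n + 1
    if isprime(p):
        form, k = ('6k-1', (p + 1) // 6) if p % 6 == 5 else ('6k+1', (p - 1) // 6)
        print(f"n={n:2d}  p={p:5d}  {form}, k={k}")
```
|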
How to find $\int \sqrt{x^8 + 2 + x^{-8}} \,\mathrm{d}x$? | Note that$$x^8+2+x^{-8}=(x^4+x^{-4})^2.$$Can you take it from here? |
Probability of search results showing up for two different merchants with different products in a top 25 list. | I'd do it in a better way:
$$P=1-\dfrac{\binom{28}0\binom{78}0\binom{11500}{25}}{\binom{11500+28+78}{25}}$$ I don't know if this probability is the same as yours. |
How to find all complex polynomial $f$ such that $1+f(z^n+1)=(f(z))^n$ | The same idea as the linked solution appears to work:
We claim that the solutions are exactly those in the sequence $x, x^n+1,(x^n+1)^n+1,\ldots$
First, let $\omega$ be an $n$-th root of unity. Substituting $x\mapsto \omega x$, we find that $P(\omega x) = \omega^k P(x)$ for some $k<n$, so $P$ is $x^k$ times a polynomial in $x^n$. In other words, there is some $Q$ with $P(x)=x^k Q(x^n+1)$.
We have $(x^n+1)^k Q((x^n+1)^n+1) = (x^k Q(x^n+1))^n+1$. Substituting $y=x^n+1$ gives us $y^k Q(y^n+1) = (y-1)^k (Q(y))^n + 1$. Reading off the coefficient of $y$ on both sides, we see that $k\in\{0,1\}$.
We have $Q(2)=1$. If $k=1$ and $Q(a)=1$ for $a\neq 0$, then $aQ(a^n+1)=(a-1)+1=a$, so $Q(a^n+1)=1$. This gives us an infinite sequence of reals satisfying $Q(a)=1$, so $Q\equiv 1$ and $P(x)=x$.
If $k=0$, then $Q$ satisfies the same equation as $P$, and we can proceed by induction. |
$\mathbb{Q}[x,y]/(x^2+y^2)\cong \mathbb{Q}[y,yi]$? | The standard monomials of ${\Bbb Q}[x,y]/\langle x^2+y^2\rangle$ are $1,y,y^2,y^3,\ldots$ and $x,xy,xy^2,\ldots$, with $x^2=-y^2$ and so $x=\pm iy$. This certainly gives an isomorphism of vector spaces with ${\Bbb Q}[y,iy]$. That it is also an isomorphism of rings requires a proof: $y\mapsto y$ and $x\mapsto iy$.
More insight into standard monomials is provided by Cox, Little, O'Shea ''Using Algebraic Geometry'' and the more elementary text of Cox, Little, O'Shea ''Ideals, Varieties, and Algorithms''. |
Symmetry Group of a Colored Cube | For me it is easier to think of a blue tetrahedron and a yellow tetrahedron glued together along one face. The symmetry group then is just the symmetry group of the tetrahedron with one face mapped to itself: there is a rotation of period 3 (on an axis perpendicular to the fixed face), and a flip (about a plane perpendicular to the fixed face). This is the dihedral group of the triangle. |
Binomial and Geometric Variables- Finding expected values and variance | a) You may already know that the variance of $X$ is $np(1-p)$, that is, $(100)(1/3)(2/3)$.
To find the expectation of $(50+X)^2$, expand the square, and use the linearity of expectation. We get $E(2500)+100E(X)+E(X^2)$.
To find $E(X^2)$, use the fact that $\text{Var}(X)=E(X^2)-(E(X))^2$.
b) You may already know the variance of a geometric. If you don't, you can derive it or look it up, please see Wikipedia, geometric distribution.
In the notation of the OP, it is $\frac{1-p}{p^2}$.
To finish, use the fact that in general the variance of the random variable $a+bY$ is $b^2$ times the variance of $Y$. |
Covariance of Function of Antithetic Random Variables | This can be tackled in the one-dimensional example using integration by parts as follows...
Suppose $f(x)$ is an increasing and $h(x)$ a decreasing function, both of which are bounded, and of zero expectation value $E[f(G)]=E[h(G)]=0$. Consider the function
$$ I_h(x)=\int_{-\infty}^{x}h(t)e^{-t^2/2}dt$$
We note that $I_{h}(\infty)=I_h(-\infty)=0$. Also $I_h'(x)=h(x)e^{-x^2/2}$.
Since $\int_{-\infty}^{\infty} h(x)e^{-x^2/2}dx=0$ we conclude that $h$ has to change sign. Since it is decreasing, there's only one point at which it changes sign, and suppose that happens at $x=a$, where $h(a)=0$. Then $h(x)>0, x<a$ and $h(x)<0, x>a$. Given this observation we see that
$$I_h(a)=\int_{-\infty}^a h(t)e^{-t^2/2}dt>0$$
but since $I_h'(a)=0$, we conclude that $I_h$ has a maximum at that point. Since $I_h$ is increasing for $x<a$ and decreasing otherwise, we find that the range of this function is $(0,I_h(a)]$ and thus we finally infer that $I_h(x)>0 ~\forall x$.
Now perform integration by parts on the following quantity
$$E[f(G)h(G)]=\int_{-\infty}^{\infty}f(t)h(t)e^{-t^2/2}dt=-\int_{-\infty}^{\infty}f'(t)I_h(t)dt<0$$
whence the desired result follows, since $f'>0$ and the boundary terms are zero for $f$ bounded.
It is trivial to generalize to variables of non-zero expectation values by noting that the RV's $A=f(G)-E[f(G)], B=h(G)-E[h(G)]$ satisfy the conditions of the lemma proven above and thus
$$E[AB]=E[(f(G)-E[f(G)])(h(G)-E[h(G)])]=\text{Cov}[f(G),h(G)]<0$$
The 1-dimensional result quoted above follows by noting that if $g(x)$ is increasing and bounded then $h(x)=g(-x)$ is decreasing and bounded and thus satisfies the inequality. |
Spectrum of the sum of matrices | Note that $\dim\ker J_n=n-1$ hence $0$ is an eigenvalue with multiplicity $n-1$ and the last eigenvalue is $\mathrm{tr}(J_n)=n$ where $\mathrm{tr}(A)$ denote the trace of the matrix $A$ so
$$\mathrm{spectrum}(J_n)=\{0,\ldots,0,n\}$$
and obviously we have
$$\mathrm{spectrum}((k-1)I_n)=\{k-1,\ldots,k-1\}$$
so we can conclude that
$$\mathrm{spectrum}(J_n+(k-1)I_n)=\{k-1,\ldots,k-1,k+n-1\}$$
Remark If $A$ and $B$ are two matrices which commute, i.e. $AB=BA$, then they are simultaneously triangularizable over $\mathbb C$ (or, when possible, simultaneously diagonalizable) in the same basis, and in this case we have
$$\mathrm{spectrum}(A+B)=\{\lambda_i+\mu_i,\quad i=1,\ldots,n\}$$
where $\lambda_i$ and $\mu_i$ are the eigenvalues of $A$ and $B$ respectively.
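A quick numerical check with $n=5$, $k=3$ (sketch):

```python
import numpy as np

n, k = 5, 3
M = np.ones((n, n)) + (k - 1) * np.eye(n)  # J_n + (k-1) I_n
print(np.linalg.eigvalsh(M))               # [2. 2. 2. 2. 7.] = {k-1, ..., k+n-1}
```
|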
function from A5 to A6 | I think you are asking why the following is true:
Every element of $A_6$ of the form $(123)(456)x(654)(321)$, with $x$ a nontrivial element of $A_5$, leaves no element of $\{1,2,3,4,5,6\}$ fixed.
This is actually false: let $x = (12)(34)$. Then $f(x)$ fixes either $5$ or $6$, depending on how you define composition of permutations. |
An optimization problem in Euclidean Geometry - finding "the smallest" inscribed triangle | Suppose we have a triangle $\Delta ABC$ with $|AB| = c$, $|BC| = a$ and $|AC| = b$. Let $E$ be the foot lying on $AB$, so that $AB \perp EC$. Without loss of generality, assume $b \ge a$. Then
\begin{align*}
|AE|^2 + |EC|^2 &= |AC|^2 = b^2 \\
|BE|^2 + |EC|^2 &= |BC|^2 = a^2 \\
|AE| + |BE| &= |AB| = c
\end{align*}
Subtracting the first two equations, we obtain
$$b^2 - a^2 = |AE|^2 - |BE|^2 = (|AE| - |BE|)(|AE| + |BE|)$$
Using this with the third equation gives us
$$b^2 - a^2 = c(|AE| - |BE|) \implies |AE| - |BE| = \frac{b^2 - a^2}{c}.$$
Using the third equation again allows us to solve for $|AE|$, in particular,
$$2|AE| = c + \frac{b^2 - a^2}{c} \implies |AE| = \frac{b^2 + c^2 - a^2}{2c}.$$
(Note the not-so-subtle or coincidental resemblance to the law of cosines!) So, for example, if we wish to find the position of the foot on the length $5$ edge, we have $c = 5$, $b = 10$, and $a = 9$ (so chosen to preserve $b \ge a$). Thus, we have
$$|AE| = \frac{10^2 + 5^2 - 9^2}{2 \cdot 5} = \frac{22}{5}.$$
The point $A$ is incident on the edges of length $5$ and length $10$. The foot along the length $5$ edge is $\frac{22}{5}$ distance from this point (and $\frac{3}{5}$ distance from the other point). |
How to Solve Disjunction Elimination Proof | Premise: A v B
Case 1: A
Then, we can just get A v (B ∧ C) and it works.
Case 2: B
Premise: A v C — argue by cases on this premise too:
if A, then A v (B ∧ C) follows at once;
if C, then together with B we get B ∧ C, so again A v (B ∧ C) |
Proving $B_n/[B_n,B_n]$ is infinite (cyclic) group. | Remember the function $\ell$ from last time? It is defined by taking the number of positive exponents and subtracting the number of negative exponents. To show that the abelianization is infinite, it suffices to prove that $\sigma_1^n$ is not a product of commutators whenever $n \geq 1$. Note that every product of commutators $\beta \in [B_n, B_n]$ satisfies $\ell(\beta) = 0$. But $\ell(\sigma_1^n) = n$, which is non-zero. So $\sigma_1^n$ is not a product of commutators, whence the quotient has infinitely many elements.
Edit: As noted in the comments it is important to check that $\ell$ is well-defined. This can be done by observing that applying the braid relations or commutativity relations does not change the value of $\ell$. From there it is also easy to see that the abelianization is infinite: Just check that $\ell$ is a homomorphism into an infinite abelian group. |
Prove partial derivatives of uniformly convergent harmonic functions converge to the partial derivative of the limit of the sequence. | Your approach is correct, I would use the mean value property to try to derive an analogue of the Cauchy estimates. Let $\Omega$ be the domain, and let $u$ be harmonic. See if you can show that if $B(z,r) \subset \Omega$, then
$$ |\partial_i u(z)| \le Cr^{-1} \|u \|_{L^{\infty}(\partial B(z,r))}$$
You will be using both the mean value property (since $\partial_i u$ is itself harmonic) and the divergence theorem. If you have further questions I can elaborate more.
Elaboration: The $L^{\infty}$, in the context of continuous functions (such as the harmonic functions in this problem), is just the norm corresponding to uniform convergence. So the sequence of harmonic functions $u_n$ converging uniformly on a compact set $K$ can be rewritten as $\|u - u_n\|_{L^{\infty}(K)} \rightarrow 0$ as $n \rightarrow \infty$.
To obtain the estimate in question, first use the mean value property to obtain:
$$ \partial_i u(z) = \frac{1}{\pi r^2} \int_{B(z,r)} \partial_i u(x,y) \, dx \, dy, $$
which follows from $\partial_i u$ itself being harmonic. By the divergence theorem, then this is equal to
$$ \frac{1}{\pi r^2} \int_{\partial B(z,r)} u \nu^i \, dS, $$
where $\nu^i$ is the $i$-th component of the unit vector normal to the surface $\partial B(z,r)$. Thus
$$ |\partial_i u(z)| \le \frac{1}{\pi r^2} \int_{\partial B(z,r)} |u| \, dS = \frac{2}{r} \|u\|_{L^{\infty}(\partial B(z,r))}$$
This allows you to control the pointwise convergence of the partial derivatives of $u_n$ by the uniform convergence of $u_n$. However, this in turn allows you to control the uniform convergence of the derivatives on a compact set, since then you can just use larger balls. |
Proving $f(f^{-1}(D)) \subset D$ | By definition of $f^{-1}(D)$, $x\in f^{-1}(D)$ if and only if $f(x)\in D$.
This shows $f(f^{-1}(D))=\{f(x)\mid x\in f^{-1}(D)\} \subset D$. |
Sum of a decreasing geometric series of integers | See OEIS sequence A005187 and references there.
Depending on what language you're using, the simplest way to compute it may be
as $2n - (\text{sum of binary digits of }n)$.
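A minimal sketch of that computation:

```python
def a(n):
    return 2 * n - bin(n).count("1")  # 2n minus the binary digit sum of n

print([a(n) for n in range(10)])  # [0, 1, 3, 4, 7, 8, 10, 11, 15, 16]
```
|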
Why this set is of the second category? | Every set $G_m=\bigcup_{n=1}^\infty \left(r_n - \frac1{2^{n+m}}, r_n + \frac1{2^{n+m}}\right)$ is a dense open set, because it is a union of open sets and contains the rational numbers. So its complement is nowhere dense, and hence $B=\bigcup_{m=1}^\infty G_m'$ is of first category, where $G_m'$ is the complement of $G_m$.
Now observe that $A=B'=\bigcap_{m=1}^\infty \bigcup_{n=1}^\infty \left(r_n - \frac1{2^{n+m}}, r_n + \frac1{2^{n+m}}\right)$ must be of second category.
Note. We used De Morgan's laws in the last line. |
Series expansion of $\exp(-x)$ without alternating terms? | Here's a series of strictly positive terms:
$$\exp(-x)=\sum_{n=1}^\infty{\exp(-x-n\log2)}$$
Edit: Generally, suppose that $\exp(-x)=\sum f_n(x)$, where each $f_n$ is a strictly positive function. Then $f_n$ can't be a polynomial, since polynomials blow up as $x\to\infty$, whereas $0<f_n(x)<\exp(-x)$. So any series that we find is going to have to involve more interesting functions, like the shifted exponentials in my series above. |
Parametric equation of tangent question involving a circle | Guide:
The radius is perpedicular to the tangent line.
Can you find a solution to $$3x-4y=0?$$ |
Riemann Sum to integral form | Hint
The limit $\lim _{n\to\infty} \Delta x\sum_{k=1}^n f({k\over n})$ can be converted to $\int_0^1f(x)dx$ as long as $f(x)$ is integrable over $(0,1)$ and $\Delta x={1\over n}$, so first find $f(x)$ and then substitute $x={k\over n}$.
Update
In the question, it is mentioned that $\Delta x={c\over an}$, hence the integral must be something like $${c\over a}\int_0^1 f(x)dx$$ |
covering of a connected genus? | If $X$ is an $n$-fold cover of $Y$ then (under mild hypotheses which are satisfied here, say, for finite CW complexes) we have $\chi(X) = n \chi(Y)$ where $\chi$ is the Euler characteristic. The Euler characteristic of $\Sigma_g$ is $2 - 2g$, so it follows that if $\Sigma_g$ covers $\Sigma_h$ (any such covering map is necessarily finite, because $\Sigma_g$ is compact) then $1 - h$ must divide $1 - g$.
Setting $g = 0$ gives that $1 - h$ must divide $1$ (and the quotient must be positive, since it must be the degree of the cover) which gives $h = 0$. In other words, the only closed orientable surface that $S^2$ covers is itself.
An alternate argument is thinking about universal covers. $S^2 = \Sigma_0$ is simply connected, $T^2 \cong \Sigma_1$ has universal cover $\mathbb{R}^2$, and for $g \ge 2$, $\Sigma_g$ has universal cover the upper half plane $\mathbb{H} \cong \mathbb{R}^2$ (e.g. by the uniformization theorem although one can also write down a covering map explicitly). In particular the universal cover of $\Sigma_g$ is noncompact for $g \ge 1$ (this is equivalent to the fundamental group being infinite) and so cannot be $S^2$. |
How to compute this Z-Transform? | $$x(n) = (4^n) u(n)$$
$$X(z) = \sum_{n=-\infty}^{\infty} x(n) z^{-n}$$
$$X(z) = \sum_{n=0}^{\infty} 4^{n} z^{-n}$$
$$X(z) = 1 + \frac{4}{z} + \frac{4^2}{z^2} + \cdots$$
This is a geometric series, convergent for $\left|\frac{4}{z}\right|<1$, i.e. $|z|>4$:
$$X(z) = \frac{1}{(1-\frac{4}{z})}$$ |
Problem with showing that group is isomorphic, cyclic and its subgroup is normal | As for the second claim, since the Sylow subgroups are normal, and conjugate, there is only one of each. Then say for starters there are only two, $P$ and $Q$. Their intersection is trivial by Lagrange, since their orders are relatively prime. And $G=PQ$, by counting, and by Lagrange again. Now proceed with the induction.
For $3$, it follows from the fact that $(m,n)=1\implies\Bbb Z_m\times\Bbb Z_n\cong\Bbb Z_{mn}$, a well-known fact, following from the Chinese remainder theorem.
Next see https://www.researchgate.net/publication/279181950_A_Characterization_of_the_Cyclic_Groups_by_Subgroup_Indices. This result shows that the Sylow subgroups, under our assumption, will be cyclic. Thus we can apply the previous result. |
On finding the complex number satisfying the given conditions | From the following figure, where $\circ=\pi/4$,
[figure: the circle through $z_1=4+3i$ and $z_2=2+i$ on which the segment $z_1z_2$ subtends an angle of $\pi/4$]
the circle $$(x-4)^2+(y-1)^2=4$$
you've found is correct, but note that we have only the part below the line $y=x-1$ passing through $(2,1)$ and $(4,3)$.
This is because the angle (as measured in counter-clock wise direction) from $\vec{zz_1}$ to $\vec{zz_2}$ is $\pi/4$ where $z_1=4+3i,z_2=2+i$.
[figure: the arc of the circle lying below the line $y=x-1$]
Thus, solving
$$(x-4)^2+(y-1)^2=4$$
$$y\lt x-1$$
$$(x-3)^2+(y+1)^2=3^2$$
gives only one solution $$x=4+\frac{4}{\sqrt 5},\quad y=1-\frac{2}{\sqrt 5}$$
(and this satisfies $(1)$.) |
Prove that $\lim_{(x,y)\to (0,0)}\frac{x^{2}+xy+y^{2}}{\sqrt{x^{2}+y^{2}}}=0$. | Hint: To finish ...
$$0 \leqslant\frac{(\sqrt{x^{2}+y^{2}})|x^{2}+xy+y^{2}|}{|x^{2}+y^{2}|}\leqslant \sqrt{x^{2}+y^{2}}\left(\frac{|x^{2}+y^{2}|}{|x^{2}+y^{2}|}+\frac{|xy|}{|x^{2}+y^{2}|}\right) \leqslant \sqrt{x^{2}+y^{2}}(\ldots)$$ |
Given a $2\times 2$ invertible matrix $A$ with real entries, does $L_A$ map open set to open set? | Since every linear map $L:\mathbb R^2\to \mathbb R^2$ is continuous, obviously, if $A$ is invertible, then $L_A$ is open. Indeed, the inverse of an invertible linear map $\mathbb R^2\to \mathbb R^2$ is also a linear map $\mathbb R^2\to \mathbb R^2$, and thus continuous. |
Eigenvalues of Operator on $L^{2}[0, 1]$ | Old, Naive, Too Complicated Solution.
Suppose that $x$ is an eigenfunction of $A$ w/ eigenvalue $\lambda$. That is,
$$\lambda x(t)=Ax(t)=\int_0^1 ts(1-ts)x(s)\ ds.$$
See a remark in the comment section below that proves differentiability of $x$. Note that
$$\lambda x'(t)=\int_0^1 s(1-ts)x(s)\ ds-t\int_0^1s^2x(s)\ ds.$$
That is,
$$\lambda t x'(t)=\lambda x(t)-t^2\gamma(x)\tag{1}$$
where $\gamma:L^2[0,1]\to \Bbb C$ is given by $\gamma(x)=\int_0^1 s^2 x(s)\ ds$.
Observe that the eigenspace with the eigenvalue $0$ is given by
$$\ker A=\left\{x\in L^2[0,1]:\int_0^1sx(s)\ ds=0\wedge \int_0^1s^2x(s)\ ds=0\right\}.$$
From now on suppose that $\lambda\ne 0$.
From (1), we get
$$\frac{d}{dt}\frac{x(t)}{t}=-\frac{1}{\lambda}\gamma(x).$$
Therefore,
$$\frac{x(t)}{t}=C-\frac{t}{\lambda}\gamma(x)$$
or
$$x(t)=Ct-\frac{t^2}{\lambda}\gamma(x).$$
Since $\gamma(x)=\int_0^1s^2x(s)\ ds$, we need
$$\gamma(x)=\int_0^1s^2\left(Cs-\frac{s^2}{\lambda}\gamma(x)\right)\ ds.$$
So
$$\gamma(x)=\frac{C}{4}-\frac{1}{5\lambda}\gamma(x)\implies C=\left(4+\frac4{5\lambda}\right)\gamma(x).$$
Hence,
$$x(t)=\gamma(x)\Biggl(\left(4+\frac4{5\lambda}\right){t}-\frac{t^2}{\lambda}\Biggr).\tag{2}$$
Therefore, the eigenspace of $A$ w/ eigenvalue $\lambda \ne 0$ is a subspace of the span of
$$x_\lambda(t)=\left(4+\frac4{5\lambda}\right){t}-\frac{t^2}{\lambda}.$$
Plugging this into $\lambda x_\lambda (t)=Ax_\lambda (t)$, it turns out that $\lambda$ must satisfy
$$\lambda\left(4+\frac{4}{5\lambda}\right)=\frac{\frac{1}{\lambda}+80}{60}.$$ That is, $\lambda=\frac{4\pm\sqrt{31}}{60}$.
Simpler Solution.
Suppose that $x$ is an eigenfunction of $A$ w/ eigenvalue $\lambda\neq 0$. Then,
$$x(t)=\frac{Ax(t)}{\lambda}=\left(\frac{\int_0^1 sx(s)\ ds}{\lambda}\right)t+\left(-\frac{\int_0^1s^2x(s)\ ds}{\lambda}\right)t^2.$$
That is, $x(t)=at+bt^2$ for some constants $a,b$. Plugging this into $\lambda x=Ax$, we get
$$\lambda(at+bt^2)=\left(\frac{a}{3}+\frac{b}{4}\right) t +\left(-\frac{a}{4}-\frac{b}{5}\right)t^2.$$
Hence,
$$\frac{a}{3}+\frac{b}{4}=\lambda a\wedge -\frac{a}{4}-\frac{b}{5}=\lambda b.$$
So $\lambda$ is an eigenvalue of the matrix
$$\begin{pmatrix}\frac13 & \frac14\\ -\frac14 &-\frac15\end{pmatrix},$$ which means $\lambda=\frac{4\pm\sqrt{31}}{60}$.
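A numerical cross-check, discretizing the kernel $ts(1-ts)$ with the midpoint rule (sketch):

```python
import numpy as np

N = 1000
t = (np.arange(N) + 0.5) / N
K = np.outer(t, t) * (1 - np.outer(t, t)) / N  # kernel times quadrature weight
ev = np.linalg.eigvalsh(K)
print(ev[0], ev[-1])                            # ~ -0.026130 and 0.159463
print((4 - np.sqrt(31)) / 60, (4 + np.sqrt(31)) / 60)
```
|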
Finding the basis vectors of a null space of a matrix in GF2 | You find the null space just like you would with any other matrix, just doing your operations in $\mathbb{F}_{2}$. However, the author is using the slightly unusual convention of multiplying (row) vectors from the left side of the matrix. This makes it a little awkward to work things by hand, because you would have to do column operations, etc. The easiest way is to work with the transpose of the matrix, so you can do row operations like usual. So you would look at
$$A^{T} = \begin{bmatrix} 0 & 0 & 1 & 1\\0 & 1 & 1 & 1\\0 & 1 & 0 & 0\\0 & 0 & 0 & 0 \end{bmatrix}$$
which row reduces to
$$\begin{bmatrix} 0 & 1 & 0 & 0\\0&0&1&1\\0&0&0&0\\0&0&0&0\end{bmatrix}$$
so you see the null space has basis given by $(1,0,0,0)$, $(0,0,1,1)$.
(you can check that these two vectors, if you use them as column vectors, are in the null space of $A^{T}$; if you use them as row vectors, they are in the null space of your original matrix when multiplied from the left, that is, they satisfy $vA = 0$).
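That check, as a small numpy sketch working mod 2:

```python
import numpy as np

# A is the transpose of A^T shown above; v A = 0 over GF(2).
A = np.array([[0, 0, 0, 0],
              [0, 1, 1, 0],
              [1, 1, 0, 0],
              [1, 1, 0, 0]])
for v in ([1, 0, 0, 0], [0, 0, 1, 1]):
    print(v, (np.array(v) @ A) % 2)  # both give [0 0 0 0]
```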
Multiplying vectors from the left is very common in areas of computational mathematics; since row vectors are more natural to type, many computational packages use this convention. Also, matrices aren't reduced by hand in these settings so we don't have to care about awkwardness of column operations or having to transpose the matrix to do row operations. |
Is there a procedural way of finding a Möbius transformation given prescribed conditions? | Offhand, this seems like an over-determined problem. But there's some symmetry. Do you know what Möbius transformations map the unit disk to itself? Then can you generalize to the disk of radius $2$? Then think about how to get $4$ to map to $0$. |
How many 2-hop neighbors in ER network? | There's no reason why the number of 2-hop neighbors can't be much larger than the number of edges. For example, in a star graph ($1$ node connected to $k$ others), the number of edges is $k$, and the number of 2-hop neighbor pairs is $\binom k2$.
However, the answer of $n^3 p^2$ is only valid when $p$ is not too large. Specifically, we will want $np^2 \ll 1$, or $p \ll \frac1{\sqrt n}$. If $np^2 \gg 1$, then $n^3 p^2 \gg n^2$, so there would be more than $n^2$ 2-hop neighbor pairs, which is nonsense. The intermediate case where $p \sim \frac{c}{\sqrt n}$ also has different behavior: here, a constant fraction of the pairs of vertices are 2-hop neighbors.
Your final approach where we pick one of the $\binom n2$ pairs and estimate the probability that they form a $2$-hop neighbor pair is, I think, the easiest one conceptually, even if the asymptotics are tricky.
To understand the probability $p^* = (1-p)(1 - (1 - p^2)^{n-2})$, let's:
First, drop the factor of $1-p$. Since $p \ll 1$, $1-p \sim 1$, so $p^* \sim 1 - (1-p^2)^{n-2}$ as $n \to \infty$.
Similarly, replace the $n-2$ by $n$. This is only multiplying part of the expression by $(1-p^2)^2$, which is negligible for the same reason as multiplying by $1-p$ is negligible. Now we have $$p^* \sim 1- (1-p^2)^n.$$
For $p \ll \frac1{\sqrt n}$, we now want to use the inequality $1 - \binom n1 p^2 \le (1 - p^2)^n \le 1 - \binom n1 p^2 + \binom n2 p^4$. Where does this come from? It's taking the first two and first three terms of the binomial expansion of $(1-x)^n$ as lower and upper bounds, which is valid by inclusion-exclusion. Therefore
$$
np^2 - \frac12 n^2 p^4 \lesssim p^* \lesssim np^2.
$$
However, $np^2 - \frac12 n^2p^4 = np^2 \left(1 - \frac12 np^2\right)$. We are assuming $np^2 \ll 1$, so $1 - \frac12 np^2 \sim 1$, and we have $p^* \sim np^2$.
There are $\binom n2 \sim \frac12 n^2$ pairs of vertices which can be 2-hop neighbors, so the expected number of 2-hop neighbors is $\binom n2 p^* \sim \frac12 n^3p^2$. This doubles, becoming $n^3 p^2$, if you want to count the pair $(v,w)$ and the pair $(w,v)$ as different.
For $p = \frac{c}{\sqrt n}$, $(1 - p^2)^n = (1 - \frac{c^2}{n})^n \sim e^{-c^2}$, so $p^* = 1 - e^{-c^2}$ and there are $\sim \binom n2 (1 - e^{-c^2})$ $2$-hop neighbors. By monotonicity, this is also the estimate when $p \sim \frac{c}{\sqrt n}$.
Finally, when $p \gg \frac1{\sqrt n}$ but still $p \ll 1$, we also have $p \gg \frac{c}{\sqrt n}$ for all $c$, so almost all pairs of vertices are 2-hop neighbors (since $1 - e^{-c^2} \to 1$ as $c \to \infty$).
You're right that you're multi-counting central nodes in your approaches. This is the reason why they always yield an estimate of $n^3p^2$, even though this estimate is false for $np^2 \gg 1$.
There's another thing you're not being careful about, which is multiplying expectations: in general, for random variables $X$ and $Y$, $\mathbb E[X Y] \ne \mathbb E[X] \mathbb E[Y]$.
You make this mistake in both approaches; it's easiest to spot in the first. There, if $X$ is the number of neighbors of a node, you compute $\mathbb E[X] \sim np$. Then, you switch to talking about $\binom X2$, the number of pairs of neighbors. You claim that its average value is $\mathbb E \left[ \binom X2\right] \sim \binom {np}2$; however, the only thing we get for free is $\binom{\mathbb E[X]}{2} \sim \binom{np}{2}$, which is different.
For example, if a node is equally likely to have $0$ and $100$ neighbors, then $\mathbb E[X] = 50$, so $\binom{\mathbb E[X]}{2} = 1225$. However, $\binom X2$ is either $0$ or $4950$, so $\mathbb E \left[ \binom X2\right] = 2475$; more than twice as large.
You either need to compute $\mathbb E[X^2]$ directly, or you need to show that $X$ is tightly concentrated around its mean. Both of these take more work. |
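If you want to sanity-check the $n^3p^2$ asymptotic numerically, here is a small Monte Carlo sketch (my own addition; all parameter choices are arbitrary) that samples $G(n,p)$ with $np^2 \ll 1$ and counts ordered 2-hop pairs directly.
```python
import random

def two_hop_pairs(n, p, rng):
    """Count ordered pairs (v, w) at graph distance exactly 2 in G(n, p)."""
    adj = [set() for _ in range(n)]
    for v in range(n):
        for w in range(v + 1, n):
            if rng.random() < p:
                adj[v].add(w)
                adj[w].add(v)
    count = 0
    for v in range(n):
        for w in range(n):
            # not adjacent, but sharing at least one common neighbor
            if v != w and w not in adj[v] and adj[v] & adj[w]:
                count += 1
    return count

rng = random.Random(0)
n, p, trials = 400, 0.01, 10          # here n*p^2 = 0.04 << 1
avg = sum(two_hop_pairs(n, p, rng) for _ in range(trials)) / trials
print(avg, n**3 * p**2)               # both should be in the low thousands
```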
Linearly ordered sets "somewhat similar" to $\mathbb{Q}$ | This is a great question!
There are uncountably many of these orders — in fact, $2^{\aleph_1}$ many.
Consider as a background order the long rational line, the order
$\mathcal{Q}=([0,1)\cap\mathbb{Q})\cdot\omega_1$. For each
$A\subset\omega_1$, let $\mathcal{Q}_A$ be the suborder obtained
by keeping the first point from the $\alpha^{\rm th}$ interval
whenever $\alpha\in A$, and omitting it if $\alpha\notin A$. That
is,
$$\mathcal{Q}_A=((0,1)\cap\mathbb{Q})\cdot\omega_1\ \cup\ \{(0,\alpha)\mid\alpha\in
A\}.$$
This corresponds to adding $\omega_1$ many copies of $\eta$ or
$1+\eta$, according to the pattern specified by $A$ as a subset of $\omega_1$.
I claim that the isomorphism types of these orders, except for the question of a least element, correspond precisely to
agreement-on-a-club for $A\subset\omega_1$.
Theorem. $\mathcal{Q}_A$ is isomorphic to $\mathcal{Q}_B$ if
and only if $A$ and $B$ agree on having $0$ and also agree
modulo the club filter, meaning that there is a closed unbounded set $C\subset\omega_1$
such that $A\cap C=B\cap C$. In other words, this is if and only
if $A$ and $B$ agree on $0$ and are equivalent to $B$ in $P(\omega_1)/\text{NS}$, as subsets
modulo the nonstationary ideal.
Proof. If $A$ and $B$ agree on $0$ and agree on a club $C$, then we may build an
isomorphism between $\mathcal{Q}_A$ and $\mathcal{Q}_B$ by
transfinite induction. Namely, for each $\alpha\in C$, we will
ensure that $f$ restricted to the cut below $(0,\alpha)$ is an
isomorphism of the segment in $\mathcal{Q}_A$ to that in
$\mathcal{Q}_B$. The point is that $(0,\alpha)$ is actually a
point in $\mathcal{Q}_A$ if and only if it is point in
$\mathcal{Q}_B$, and so these points provide a common frame on
which to carry out the transfinite recursion. If we have an
isomorphism up to such a point, we can continue it to the next
point, since this is just adding a copy of $\eta$ or of $1+\eta$
on top of each (the same for each), and at limits we take the
union of what we have built so far, which still fulfills the
property because $C$ is closed. So $\mathcal{Q}_A\cong\mathcal{Q}_B$.
Conversely, if $f:\mathcal{Q}_A\cong\mathcal{Q}_B$, then $A$ and $B$ must agree on $0$. Let $C$ be
the set of closure ordinals of $f$, that is, the set of $\alpha$
such that $f$ respects the cut determined by the point
$(0,\alpha)$. This set $C$ is closed and unbounded in $\omega_1$.
Furthermore, it is now easy to see that $(0,\alpha)\in
\mathcal{Q}_A$ if and only if $(0,\alpha)\in \mathcal{Q}_B$ for
$\alpha\in C$, since this point is the supremum of that cut, and
it would have to be mapped to itself. Thus, $A\cap C=B\cap C$ and
so $A$ and $B$ agree modulo the club filter. QED
Corollary. There are $2^{\aleph_1}$ many distinct q-like linear orders up to isomorphism.
Proof. The theorem shows that there are as many different q-like linear
orders as there are equivalence classes of subsets of $\omega_1$
modulo the non-stationary ideal. So the number of such orders is
$|P(\omega_1)/\text{NS}|$. This cardinality is $2^{\aleph_1}$
because we may split $\omega_1$ into $\omega_1$ many disjoint
stationary sets, by a theorem of Solovay and Ulam, and the union
of any two distinct subfamilies of these differ on a stationary
set and hence do not agree on a club.
So there are at least $2^{\aleph_1}$ many distinct q-like orders
up to isomorphism, and there cannot be more than this, since every
such order has cardinality at most $\omega_1$. QED
Finally, let me point out, as Joriki mentions in the comments, that every uncountable q-like linear order is isomorphic to $\mathcal{Q}_A$ for some $A$. If $L$ is any such order, then select an unbounded $\omega_1$-sequence in $L$ containing none of its limits, chop $L$ into the corresponding intervals determined by these elements, and define $A$ according to whether these intervals have a least element or not. Thus, we have a complete characterization of the q-like linear orders: the four countable orders, and then the orders $\mathcal{Q}_A$, with two $A$ from each class modulo the nonstationary ideal, one with $0$ and one without.
Central limit theorem and asympototic normality | It's wrong, the central limit theorem states:
$$ \sqrt n (\overline X_n - \mu) = \frac 1 {\sqrt n} \sum_{i=1}^n(X_i - \mu) \overset{d}{\longrightarrow} \mathcal N(0,\sigma^2). $$
For the asymptotic normality, it is wrong too as you cannot use $\hat \theta$ in the expression of the limit, you should say
$$ \frac{\hat \theta - \theta}{\sqrt{\text{var}(\hat \theta)}} \overset d \longrightarrow \mathcal N(0, 1). $$ |
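For a concrete instance of the second display (my example, not part of the original answer): if $X_1,\dots,X_n$ are i.i.d. Bernoulli$(\theta)$ and $\hat\theta=\overline X_n$, then $\operatorname{var}(\hat\theta)=\theta(1-\theta)/n$ and the statement reads
$$\frac{\overline X_n-\theta}{\sqrt{\theta(1-\theta)/n}}\overset d \longrightarrow \mathcal N(0,1),$$
which is exactly the CLT above after dividing through by $\sigma=\sqrt{\theta(1-\theta)}$.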
Are we unable to find the extrema of the function $f(x,y) = \frac{y}{x^2+y^2}$ using the second partial test? | First, you’ve some errors in your third line of calculations. The correct partials are:
$$\begin{align*}
&f_x(x,y)=-\frac{2xy}{(x^2+y^2)^2}\\
&f_y(x,y)=\frac{x^2-y^2}{(x^2+y^2)^2}\\
&f_{xx}(x,y)=\frac{2y(3x^2-y^2)}{(x^2+y^2)^3}\\
&f_{xy}(x,y)=\frac{2x(3y^2-x^2)}{(x^2+y^2)^3}\\
&f_{yy}(x,y)=\frac{2y(y^2-3x^2)}{(x^2+y^2)^3}
\end{align*}$$
Clearly $f_x(x,y)=0$ only if $xy=0$, i.e., at least one of $x$ and $y$ is $0$. Similarly, $f_y(x,y)=0$ only if $0=x^2-y^2=(x-y)(x+y)$, i.e., only if $y=x$ or $y=-x$. The only way to satisfy both of these conditions is to have $x=y=0$, and the function and its partial derivatives aren’t defined at $(0,0)$. Thus, you’re quite right: there is no point at which the second derivative test applies.
Rewriting the function in polar coordinates as
$$f(r,\theta)=\frac{r\sin\theta}{r^2}=\frac{\sin\theta}r$$
may help to explain what’s going on. As we travel around the circle $C_r$ of radius $r$ centred at the origin, the function value is $0$ where $C_r$ crosses the $x$-axis, reaches a maximum of $\frac1r$ where $C_r$ crosses the positive $y$-axis, and reaches a minimum of $-\frac1r$ where $C_r$ crosses the negative $y$-axis. But that high point with value $\frac1r$ on $C_r$ can’t be a local maximum of the function, because the value of the function gets larger as you move down the $y$-axis towards the origin: $\frac1r$ increases as $r$ decreases. Similarly, the low point on $C_r$ can’t be a local minimum of $f$, because $-\frac1r$ gets smaller (more negative) as you move up the negative $y$-axis and $r$ decreases.
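To spell out the unboundedness along the $y$-axis itself:
$$f(0,y)=\frac{y}{y^2}=\frac1y\to+\infty\ \text{ as } y\to0^+, \qquad f(0,y)=\frac1y\to-\infty\ \text{ as } y\to0^-,$$
so arbitrarily close to the origin $f$ takes values beating any would-be local maximum or minimum on a circle $C_r$.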
Mathematical arithmetic rounding $1$ | Imagine multiplying everything by 10: 2.5632 rounds to 2.563. It's the same if we didn't multiply by 10: 0.25632 rounds to 0.2563. Similarly,0.025632 rounds to 0.02563. The rounding process is invariant under multiplying or dividing by powers of 10. |
How to determine the existence of this function | No. If $v = \langle a,b \rangle$ is a unit vector and $w = \langle -a,-b \rangle$ then
\begin{align*}
\frac{\partial f}{\partial v}(x_0,y_0) &= \lim_{h \to 0} \frac{f(x_0 + ah,y_0+bh)-f(x_0,y_0)}{h} \\
&= \lim_{h \to 0^+}\frac{f(x_0 + ah,y_0+bh)-f(x_0,y_0)}{h} \\
&= \lim_{h \to 0^-}\frac{f(x_0 - ah,y_0-bh)-f(x_0,y_0)}{-h} \\
&= - \lim_{h \to 0^-}\frac{f(x_0 - ah,y_0-bh)-f(x_0,y_0)}{h} \\
&= - \lim_{h \to 0}\frac{f(x_0 - ah,y_0-bh)-f(x_0,y_0)}{h} \\
&= - \frac{\partial f}{\partial w}(x_0,y_0)
\end{align*}
because both directional derivatives are assumed to exist. More specifically if $f$ is differentiable at $(x_0,y_0)$ then
$$\frac{\partial f}{\partial w}(x_0,y_0) = \nabla f(x_0,y_0) \cdot w = - \nabla f(x_0,y_0) \cdot v = - \frac{\partial f}{\partial v}(x_0,y_0).$$
In any event, $\dfrac{\partial f}{\partial v}(x_0,y_0)$ and $\dfrac{\partial f}{\partial w}(x_0,y_0)$ can't both be positive. |
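For a concrete instance of the sign flip: take $f(x,y)=x$, $v=\langle1,0\rangle$, $w=\langle-1,0\rangle$. Then at any point
$$\frac{\partial f}{\partial v}(x_0,y_0)=\lim_{h\to0}\frac{(x_0+h)-x_0}{h}=1, \qquad \frac{\partial f}{\partial w}(x_0,y_0)=\lim_{h\to0}\frac{(x_0-h)-x_0}{h}=-1,$$
so the two directional derivatives indeed have opposite signs.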
Possible generalizations of $\sum_{k=1}^n \cos k$ being bounded | I claim that the sequence $s_n=\sum_{k=0}^{n-1} f(kq)$ is bounded, for a given $q$, if the following hypotheses are satisfied:
$f$ is $p$-periodic, and its integral over an entire period vanishes,
$q$ is an irrational multiple of $p$,
the irrationality measure of $q/p$ is finite (say, $q/p$ has irrationality measure $\mu$),
$f$ is $C^k$ for some $k>\mu$.
In particular, if $f$ is $C^3$, the sequence is bounded for almost all $q$, including all $q$ where $q/p$ is algebraic but irrational (by the Thue-Siegel-Roth theorem).
First of all, we may suppose without loss of generality that $f$ has period $2\pi$. As $\mu \geq 2$ for every irrational number, we need only consider those $f$ which are at least $C^3$. In particular, this means $f$ has an absolutely convergent Fourier series $f(x)=\sum_{-\infty}^\infty a_m e^{imx}$. Since $\int_0^{2\pi} f = 0$, $a_0=0$. Now,
$$
s_n=\sum_{k=0}^{n-1} f(kq)= \sum_{k=0}^{n-1} \sum_{m=-\infty}^\infty a_m e^{imkq}=\sum_{m=-\infty}^\infty a_m \left(\sum_{k=0}^{n-1}e^{imkq}\right)
$$
(where we've used absolute convergence to swap the order of summation).
As $a_0=0$ and $q/2\pi$ is irrational, the common ratio $e^{imq}$ in this last sum is never $1$, and so we can apply the general form of the geometric series formula to obtain
$$
s_n=\sum_{m=-\infty}^\infty a_m \left(\frac{1-e^{imnq}}{1-e^{imq}}\right)=\sum_{m=-\infty}^\infty a_m e^{im(n-1)q/2} \frac{\sin \frac{mnq}{2}}{\sin \frac{mq}{2}} \, .
$$
Now, let $\Phi_n(x)=\frac{\sin (nx/2)}{\sin (x/2)}$. It suffices to show that $\sum_{m=-\infty}^\infty |a_m \Phi_n(mq)|$ is bounded independently of $n$. Note that, for any $x$ with $0 < |x| < \pi$, we have $|\Phi_n(x)| \leq \left|\frac{1}{\sin (x/2)}\right| \leq \frac{\pi}{|x|}$.
Choose $\alpha$ with $\mu < \alpha < k$. Since $\alpha > \mu$, the inequality
\begin{equation}
\left| \frac{q}{2\pi} - \frac{\ell}{m}\right| > \frac{1}{|m|^\alpha}
\end{equation}
holds for all but finitely many choices of $(\ell, m) \in \Bbb{Z}^2$.
We will now divide and conquer: first we will show that the sum of all the terms in which the inequality holds for $m$ is bounded independent of $n$; then we will show that the sum of all the terms in which it does not hold is also so bounded.
Suppose the inequality holds for some $m$; choose $\ell \in \Bbb{Z}$ so that $|mq-2\pi\ell|<\pi$. Then by periodicity we have
$
\Phi_n(mq)=\Phi_n(mq-2\pi\ell)
$;
by the bound we just derived on $\Phi_n$ it follows that
$$
\left|\Phi_n(mq)\right|<\frac{\pi}{|mq-2\pi \ell|} = \frac{1}{2|m||q/2\pi - \ell/m|} < \frac{1}{2}|m|^{\alpha-1}
$$
by our irrationality measure inequality. Since $f$ is $C^k$, its Fourier coefficients $a_m$ are all bounded by $\frac{2C}{|m|^k}$ for some uniform constant $C$. So we can bound the sum over all the $m$ such that the irrationality measure inequality holds by
$$
\sum_{m=-\infty}^\infty \frac{C}{|m|^k} |m|^{\alpha-1}=\sum_{m=-\infty}^\infty C |m|^{\alpha-k-1} \, ;
$$
since $\alpha < k$, this is a convergent sum (and is independent of $n$, since $C$ depends only on $f$).
On the other hand, there are only finitely many $m$ such that the inequality does not hold. Let $\{m_1,\dots,m_j\}$ be those $m$; then it follows immediately from our inequality for $\Phi_n$ that the sum of these terms is bounded by $\sum_i |a_{m_i}| \frac{\pi}{|m_i q - 2\pi \ell_i|}$, where the $\ell_i$ are chosen so that $|m_i q - 2\pi \ell_i|\leq \pi$. Since this is a finite sum independent of $n$, we are done.
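As a quick numerical illustration (my own addition) of the simplest case $f=\cos$, $p=2\pi$, $q=1$: the partial sums never leave $[-1/\sin\frac12,\,1/\sin\frac12]\approx[-2.09,\,2.09]$, matching the bound $\bigl|\sum_{k=0}^{n-1}e^{ik}\bigr|\le 1/\sin\frac12$.
```python
import math

# Track the running extremes of s_n = sum_{k=0}^{n-1} cos(k).
s, lo, hi = 0.0, 0.0, 0.0
for k in range(10**6):
    s += math.cos(k)
    lo, hi = min(lo, s), max(hi, s)
print(lo, hi, 1 / math.sin(0.5))  # lo and hi stay within +-(1/sin(1/2)) ~ 2.0858
```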
Why does the sequence ${1/n}$ not converge in the positive reals? | Because $0$ is not a positive real! |
Let $S$ be a subset of $\mathbb{R}$ such that $2018$ is an interior point of $S$ . Which of the followinf is(are) true? | Your proofs for (a) and (c) look fine. For (b), a single example does not constitute a proof. But note that there is no requirement that the sequence in $S$ have distinct terms, so you can just choose some $s\in S$ with $s\neq 2018$ (which exists since $2018$ is an interior point) and set $x_n=s$ for all $n$.
For (d), consider for instance the set $S=(2018-10^{-6},2018+10^{-6})$. |
Lower semicontinuous non-negative function on a locally compact Hausdroff space with a countable base | It seems the following.
The assertion is true. I have the following idea of the proof. Let $F$ be a monotonic homeomorphism between the extended real line and the segment $[-1,1]$ such that $F(0)=0$ (for instance, we can put $F^{-1}(x)|_{(-1,1)}=\tan\frac{\pi x}2 $).
An extended real valued function $f$ on $X$ is lower semicontinuous iff the composition $F\circ f$ is lower semicontinuous.
Let $f\ge 0$ be a lower semicontinuous function on $X$. Then $F\circ f\ge 0$ is a lower semicontinuous function on $X$ too. Since $X$ is a locally compact Hausdorff space, by [Eng, Theorem 3.3.1], $X$ is a regular (even a completely regular) space. Since the space $X$ has a countable base, it is metrizable (by [Eng, 4.4.7] (the Nagata–Smirnov metrization theorem), by [Eng, 4.4.8] (the Bing metrization theorem), or by [Eng, Lemma 4.4.5]) and hence, by [Eng, Corollary 4.1.3], perfectly normal. By [Eng, Example 1.7.15.c], there exists a non-decreasing sequence $\{f_n''\}$ of continuous real valued functions on the space $X$ which converges pointwise to the function $F\circ f$. Since the space $X$ is a locally compact space with a countable base, it is a union $X=\bigcup K_n$ of a non-decreasing sequence $\{K_n\}$ of its compact subsets. Now for each point $x\in X$ and each number $n$ put $f_n'(x)=\max\{0, f_n''(x)\}$ if $x\in K_n$ and $f_n'(x)=0$ if $x\in X\setminus K_n$. Since the function $F\circ f$ is non-negative, the sequence $\{f_n'\}$ of non-negative continuous real valued functions with compact support on the space $X$ converges pointwise to the function $F\circ f$. For each $n$ put $f_n=F^{-1}\circ f_n'$. Then $\{f_n\}$ is a non-decreasing sequence of non-negative continuous real valued functions with compact support on the space $X$, which converges pointwise to the function $f$.
[Eng] R. Engelking, General Topology, Heldermann Verlag, Berlin, 1989. |
What could be the derivative of the below | Another way could be logarithmic differentiation
$$y=\frac{x \cos^{-1}(x)} {\sqrt{1-x^2}}\implies \log(y)=\log(x)+\log(\cos^{-1}(x))-\frac 12\log({1-x^2})$$ Differentiate both sides
$$\frac{y'}y=\frac 1x +\frac{ \left(\cos^{-1}(x)\right)'} {\cos^{-1}(x) }-\frac 12\frac{\left({1-x^2}\right)' }{{1-x^2} }$$ When done $$y'=y \times \frac{y'}y$$ and simplify. |
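For reference, carrying the simplification through (for $0<|x|<1$, $x\neq0$; this is my own computation, so worth double-checking) gives
$$y'=\frac{\cos^{-1}(x)}{\sqrt{1-x^2}}-\frac{x}{1-x^2}+\frac{x^2\cos^{-1}(x)}{(1-x^2)^{3/2}}.$$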
Solution of Ordinary Differential equation | Hint: Substitute $y/x=v$
$y'=xv'+v$ |
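Assuming the equation is homogeneous, i.e. it can be put in the form $y'=F(y/x)$ (the usual situation where this substitution is used — the original equation is not shown here), the hint turns it into a separable equation:
$$v+xv'=F(v)\quad\Longrightarrow\quad\frac{dv}{F(v)-v}=\frac{dx}{x}.$$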
Nullity and an Isomorphism | Hint: Use linearity of $T$.
Suppose $\ker T = \{0\}$, then consider $T(x) = T(y)$ for any $x,y \in R$. Then $$T(x) = T(y) \implies T(x) - T(y) = 0 \implies T(x-y) = 0 \implies x-y = 0 \implies x= y$$
Thus $T$ is injective.
On the other hand, if $T$ is injective and $x$ is any element in $\ker T$ then $$T (x) = 0 = T(0) \implies x = 0$$
Thus $\ker T$ has only $0$. |
Charecterization of Exponential distribution | Not a full answer but is too long for comments. The challenge for me in the second part arises because I have made no assumptions about the support of $X$. Does the problem statement make any mention about $\operatorname{supp}(X)$?
We have $X_1,X_2\overset{\text{i.i.d.}}{\sim}F_X$ with $\operatorname{supp}(X)=(a,b)$. Using the standard derivation one has for the joint distribution of the order statistics
$$
f_{X_{(1)}X_{(2)}}(x_1,x_2)=2f_X(x_1)f_X(x_2),\quad a<x_1<x_2<b.
$$
Let $(M,R)=(X_{(1)},X_{(2)}-X_{(1)})$. It follows that the inverse transformation and Jacobian are $(X_{(1)},X_{(2)})=(M,M+R)$ and $|J|=1$, respectively. We then have
$$
f_{MR}(m,r)=2f_X(m)f_X(r+m),\quad (r,m)\in D,
$$
where $D=\{(m,r) : a<m<b,\ 0<r<b-m\}$ is the image of the region $a<x_1<x_2<b$ under this transformation.
Part 1: $X_1,X_2\overset{\text{i.i.d.}}{\sim}\operatorname{Exp}(\lambda)\implies M\perp R$:
If $X_1,X_2\overset{\text{i.i.d.}}{\sim}\operatorname{Exp}(\lambda)$ then $a=0$, $b=\infty$, and $D=(0,\infty)\times (0,\infty)$ so that
$$
f_{MR}(m,r)=2\lambda e^{-\lambda m}\lambda e^{-\lambda(r+m)}=\underbrace{2\lambda e^{-2\lambda m}}_{f_M(m)}\underbrace{\lambda e^{-\lambda r}}_{f_R(r)},\quad (m,r)\in(0,\infty)^2,
$$
which shows that $M\sim\operatorname{Exp}(2\lambda)$, $R\sim\operatorname{Exp}(\lambda)$, and $M\perp R$.
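A quick simulation sketch consistent with Part 1 (my own addition; needs Python 3.10+ for `statistics.correlation`, and $\lambda=2$ is an arbitrary choice): empirically $M$ has mean $\approx 1/(2\lambda)$, $R$ has mean $\approx 1/\lambda$, and the sample correlation is near $0$, as independence requires.
```python
import random
import statistics

rng = random.Random(1)
lam, N = 2.0, 200_000
M, R = [], []
for _ in range(N):
    x1, x2 = rng.expovariate(lam), rng.expovariate(lam)
    M.append(min(x1, x2))          # M = X_(1)
    R.append(abs(x1 - x2))         # R = X_(2) - X_(1)
print(statistics.mean(M))             # ~ 1/(2*lam) = 0.25
print(statistics.mean(R))             # ~ 1/lam     = 0.5
print(statistics.correlation(M, R))   # ~ 0
```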
Part 2: $M\perp R\implies X_1,X_2\overset{\text{i.i.d.}}{\sim}\operatorname{Exp}(\lambda)$: |
Solving a linear system of booleans | This is an integer program, which is typically NP-hard. In some cases (when the integrality gap is zero), then we can relax the $x_j$ to be real valued, solve a linear program, and round the result to get the exact result. In general solving the LP-relaxation will not give you the correct solution. |
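To make the problem concrete, here is a hypothetical toy instance (my own addition) solved by brute force over $\{0,1\}^n$ — exponential in the number of variables, which is exactly why LP relaxations are attractive for larger instances:
```python
from itertools import product

# Solve A x = b with x in {0,1}^n by exhaustive search.
A = [[1, 1, 0],
     [0, 1, 1]]
b = [1, 2]
n = len(A[0])
solutions = [x for x in product((0, 1), repeat=n)
             if all(sum(a * xi for a, xi in zip(row, x)) == bi
                    for row, bi in zip(A, b))]
print(solutions)  # [(0, 1, 1)]
```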
Showing that $Det(A^T A)\ge 0$ | Use these results:
$$\det(A^T)=\det(A)$$
and
$$\det(AB)=\det(A)\det (B)$$ |
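Combining the two (for a square matrix $A$):
$$\det(A^TA)=\det(A^T)\det(A)=\det(A)^2\ge0.$$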
Laurent series and residues $f(z^2)$ | You should check your definition of residue: it is not the $a_{-n}$ coefficient in the Laurent expansion but the $a_{-1}$ one.
If $f$ has a pole of order $n$ at $z=0$ then
$$
f(z)=a_{-n}z^{-n} + \cdots + a_{-1}z^{-1} + \sum_{k=0}^{\infty}a_kz^k
$$
with $a_{-n} \neq 0$. Of course $Res(f,0)=a_{-1}$. Then around $0$, we have that if $g(z)=f(z^2)$ then
\begin{align}
g(z)=f(z^2) & =a_{-n}z^{-2n} + \cdots + a_{-1}z^{-2} + \sum_{k=0}^{\infty}a_kz^{2k}\\
& = b_{-2n}z^{-2n} + \cdots + b_{-1}z^{-1} + \sum_{k=0}^{\infty}b_kz^{k}\\
\end{align}
where
$$
b_j=\left\{
\begin{array}{cc}
a_{j/2} & \text{ if } j \in 2 \mathbb{Z} \\
0 & \text{in other case}
\end{array} \right.
$$
Since $b_{-2n}=a_{-n} \neq 0$ then $g$ has a pole of order $2n$ at $z=0$ and $Res(g,0)=b_{-1}=0$.
Now it is clear that this still holds even if $f$ does not have a pole of finite order, since the residue of $g$ at $z=0$ will still be $b_{-1}=0$. Supposing that $f$ has a pole of order $n$ at $0$ only serves to show that $g$ has a pole of order $2n$ at $0$.
Then we can conclude that $f(z^2)$ has residue $0$ at $z=0$.
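For a concrete instance:
$$f(z)=\frac1z \text{ has } \operatorname{Res}(f,0)=1, \qquad f(z^2)=\frac1{z^2} \text{ has } \operatorname{Res}\bigl(f(z^2),0\bigr)=0.$$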
What is a heuristic proof? | First, I object to the term "heuristic proof." That phrase is a contradiction in terms. Heuristics are not (generally speaking) proof, and proofs generally require more detail and nuance than heuristics. It would be better to use the phrase "heuristic argument."
With that bit of pedantry addressed, it is often (though not always) helpful to look at what lexicographers have determined that word means. In this case, Merriam-Webster suggests that a heuristic is something
involving or serving as an aid to learning, discovery, or problem-solving by experimental and especially trial-and-error methods.
So heuristics are learning or problem-solving tools. In mathematics, heuristics are generally informal arguments that are meant to convince someone that a result is true without necessarily getting into the nitty-gritty of a proof, which might be rather involved. For example, in an elementary calculus class, you will often see an argument for the chain rule that looks something like
The derivative of $f$ with respect to $g$ is denoted by $\frac{\mathrm{d}f}{\mathrm{d}g}$, and the derivative of $g$ with respect to $x$ is denoted by $\frac{\mathrm{d}g}{\mathrm{d}x}$. Thus we have
$$\require{cancel}
\frac{\mathrm{d}f}{\cancel{\mathrm{d}g}} \cdot \frac{\cancel{\mathrm{d}g}}{\mathrm{d}x}
=\frac{\mathrm{d}f}{\mathrm{d}x}. $$
Therefore $\frac{\mathrm{d}}{\mathrm{d}x} (f\circ g)(x) = f'(g(x)) g'(x)$.
This argument is not rigorous, and cannot really be made rigorous without a great deal of explanation (either by way of non-standard analysis, or a very careful $\varepsilon$-$\delta$ argument). However, the notation is suggestive, and the basic result is correct, so it is a useful aid to learning.
Another example is the heuristic argument for the Prime Number Theorem, which (roughly) states that the number of prime numbers less than $N$ is on the order of $N/\log(N)$. An actual proof of the Prime Number Theorem is rather involved, but the following heuristic argument is generally enough to convince someone that it is true:
Suppose that the primes are uniformly randomly distributed among all of the positive integers. Let $P(N)$ denote the probability that $N$ is prime, where $N$ is some sufficiently large number. If $N$ is prime and $M$ is larger than $N$, then $N$ divides $M$ with probability $1/N$ (2 divides every other number, 3 divides every third number, 5 divides every fifth number, and so on). In particular, if $N$ is prime, then $N$ divides $N+1$ with probability $1/N$. By the Law of Total Probability
$$ P(N+1) = \underbrace{P(N)\left[P(N)\left( 1- \frac{1}{N}\right)\right]}_{(1)} + \underbrace{(1-P(N))P(N)}_{(2)}$$
where (1) if $N$ is prime, then (assuming independence) $N+1$ is prime with probability $P(N+1) \approx P(N)$ and also $N \nmid N+1$, which happens with probability $1-\frac{1}{N}$, and (2) $N$ is not prime with probability $1-P(N)$, but for $N$ large enough, both $N$ and $N+1$ have about the same chance of being prime. Thus $P(N+1) \approx P(N)$. By some basic algebra
$$ \frac{P(N+1)}{P(N)}
= P(N) \left(1-\frac{1}{N}\right) + (1-P(N))
= 1 - \frac{P(N)}{N}.$$
But we can approximate $P(N+1)$ by $P(N) + P'(N)$ (i.e. $P(x+\Delta x) \approx P(x) + P'(x)$), so this becomes
$$ 1 + \frac{P'(N)}{P(N)} = 1 - \frac{P(N)}{N}
\implies \frac{P'(N)}{P(N)} = - \frac{P(N)}{N}. $$
This is a first order ODE with general solution
$$ P(N) = \frac{1}{C + \log(N)}. $$
In other words, the probability that a randomly chosen integer $N$ is prime is approximately $\frac{1}{\log(N)}$. If $N$ is any positive integer, then there are $N$ positive integers less than or equal to $N$, and so
$$ (\text{number of primes up to $N$}) \approx N \cdot \frac{1}{\log(N)}. $$
There are a lot of holes in this argument (there are infinitely many positive integers, yet we have assumed that the probability that a given number is prime is uniform, which is a problem; we have not been very careful with the errors in a number of approximations; etc). In this case, lulu's comment that a heuristic argument is "an informal argument which assumes plausible (but unproven) results" is quite apt---there is a long way to go before we have a real proof.
Indeed, the usual proofs of the Prime Number Theorem are not probabilistic proofs, and attack the problem along other lines. However, most of the authentic proofs are pretty technical, and don't really give any insight into why we might even suspect that the Prime Number Theorem is true in the first place. This heuristic argument should convince us that the theorem might be true, and give us reason to pursue a better proof. |
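As a quick numeric sanity check of the final formula (my own addition): a simple sieve shows $\pi(N)$ and $N/\log(N)$ agree to within roughly $10$–$15\%$ already for moderate $N$.
```python
import math

def prime_count(n):
    """pi(n) via a plain sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return sum(sieve)

for N in (10**4, 10**5, 10**6):
    print(N, prime_count(N), round(N / math.log(N)))
# 10000   1229   1086
# 100000  9592   8686
# 1000000 78498  72382
```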
Representation of the fundamental class of a closed orientable $n-$manifold | Write down the simplicial complex. There are no $(n+1)$-cells, hence in dimension $n+1$ it is zero:
$$ \cdots \to 0 \to Z^\text{number of n-cells} \to Z^\text{something} \to \cdots$$
Now let there be $k$ $n$-cells. Then $H_n(X)$ is the kernel of the boundary map $\partial\colon Z^k \to Z^{\text{something}}$, and it will be $Z^r$ for $r = \#\pi_0(X)$. We can assume that $r=1$, and then you see that, since $M$ is orientable, the image of an $n$-simplex (i.e. its boundary) is precisely the negatively oriented boundary of its complement, i.e. $\partial \sigma_i = -\partial \sum_{j\neq i} \sigma_j$; hence, $\partial$ being a homomorphism, $\partial \sum_i \sigma_i =0$.
Another way to see this (and this is by definition, which will be important in every setting where you need the fundamental class) is to take the characterising property of the fundamental class and check it directly; this works in many different settings. It means checking that the homology class restricts to the local orientation. In the situation of a simplicial complex this is obvious at points in the interior of $n$-cells. Now think about how to fix it on the boundary...
Depending on your definition of orientation you can go with the first (which is basically the simplicial definition) or with the second (the local homology orientation definition). |
Prove the direct sum: $V = im(T)⊕im(S)$ | It is clear that $im(T)+im(S)\subseteq V$. To show that $V \subseteq im(T)+im(S)$, pick $v \in V$. Now, by (1), $T(v)+S(v)=I(v)=v$. So, $v \in im(T)+im(S)$ and hence $V \subseteq im(T)+im(S)$.
To show that the sum is direct, pick $v \in im(T) \cap im(S)$. Then, by definition, there exists $a \in V$ such that $T(a)=v$. Now, by (1), $T(a)+S(a)=I(a)=a$ and so, by (2) and (3), $0=0+0=T(T(a))+T(S(a))=T(T(a)+S(a))=T(a)=v$, thus, $v=0$ and we are done. |
Transforming differentiable functions that are not compactly supported | Let $n = m = 1$ and consider $f(x) = x$. Suppose $\phi$ is a bijection $\phi : \mathbb{R} \to \mathbb{R}$ s.t. $\phi \circ f$ is compactly supported. Then in particular, there are at least 2 points $x,y$ s.t. $x \neq y$ and $\phi\circ f(x) = \phi \circ f (y) = 0$. Then $f(x) = \phi^{-1}(0) = f(y)$. But $f$ is an injection, so $x =y$, a contradiction.
So no, this doesn't always hold (note every diffeomorphism is a bijection, as well as every homeomorphism). |
Calculating $\lim_{x\to 0}\left(\frac{1}{\sqrt x}-\frac{1}{\sqrt{\log(x+1)}}\right)$ | Hint
$$\frac{\log(1+x)}{x}\to 1$$
$$\frac{\log(1+x)-x}{x^2}\to -\frac{1}{2}$$
Further Hint
$$\displaylines{
\frac{1}{{\sqrt x }} - \frac{1}{{\sqrt {\log (x + 1)} }} = \frac{{\sqrt {\log (x + 1)} - \sqrt x }}{{\sqrt {x\log \left( {1 + x} \right)} }}\left( {\frac{{\sqrt {\log (x + 1)} + \sqrt x }}{{\sqrt {\log (x + 1)} + \sqrt x }}} \right) \cr
= \frac{{\log (x + 1) - x}}{{\sqrt {x\log \left( {1 + x} \right)} }}{\left( {\sqrt {\log (x + 1)} + \sqrt x } \right)^{ - 1}} \cr
= \frac{{\log (x + 1) - x}}{{\sqrt {x\log \left( {1 + x} \right)} }}\frac{1}{{\sqrt x }}{\left( {\sqrt {\frac{{\log (x + 1)}}{x}} + 1} \right)^{ - 1}} \cr
= \frac{1}{{\sqrt x }}\frac{{\log (x + 1) - x}}{x}{\left( {\sqrt {\frac{{\log \left( {1 + x} \right)}}{x}} } \right)^{ - 1}}{\left( {\sqrt {\frac{{\log (x + 1)}}{x}} + 1} \right)^{ - 1}} \cr
= \sqrt x \frac{{\log (x + 1) - x}}{{{x^2}}}{\left( {\sqrt {\frac{{\log \left( {1 + x} \right)}}{x}} } \right)^{ - 1}}{\left( {\sqrt {\frac{{\log (x + 1)}}{x}} + 1} \right)^{ - 1}} \cr} $$ |
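Putting the hints together: as $x\to0^+$ we have $\sqrt x\to0$, $\frac{\log(x+1)-x}{x^2}\to-\frac12$, $\left(\sqrt{\frac{\log(1+x)}{x}}\right)^{-1}\to1$ and $\left(\sqrt{\frac{\log(x+1)}{x}}+1\right)^{-1}\to\frac12$, so
$$\lim_{x\to0^+}\left(\frac{1}{\sqrt x}-\frac{1}{\sqrt{\log(x+1)}}\right)=0\cdot\left(-\frac12\right)\cdot1\cdot\frac12=0.$$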
Proper definition of complex random variable. | The second definition is missing something after the "such that" - there is no condition stated?
EDIT: I see you fixed this. Yes, the two conditions are equivalent.
In general, a random variable is none other than a measurable function between two measurable spaces (a measurable space is a set equipped with a $\sigma$-algebra). Your second definition is a special case of this more general definition. To see why it is equivalent to the first definition, we need to understand the relationship between the Borel $\sigma$-algebras on $\mathbb R$ and $\mathbb C$.
We can use a nice property of the Borel $\sigma$-algebra on any topological space $X$: a function $f\colon (\Omega,\mathcal F)\to (X,\mathcal B(X))$ is measurable if and only if $f^{-1}(U)$ is measurable for all open sets $U\subseteq X$. Now, $\mathbb C$ carries the product topology of $\mathbb R\times \mathbb R$, which has a countable base of open rectangles $V\times W$ with $V,W\subseteq\mathbb R$ open, so every open $U\subseteq\mathbb C$ is a countable union of such rectangles. Since $f^{-1}(V\times W)=(\operatorname{Re} f)^{-1}(V)\cap(\operatorname{Im} f)^{-1}(W)$, measurability of $\operatorname{Re} f$ and $\operatorname{Im} f$ gives measurability of $f$; conversely, the projections $\pi_1,\pi_2$ are continuous, hence Borel measurable, and $\operatorname{Re} f = \pi_1\circ f$, $\operatorname{Im} f = \pi_2\circ f$. Thus the first condition is equivalent to the second, as a result of the fact that the topology of $\mathbb C$ is the product topology of two copies of $\mathbb R$.
Confusion about notation in this proof | It seems your point is that the intersection and union of a family of sets should be written as $\cup U_\alpha$ instead of $\cup \{U_\alpha\}$.
The next paragraph is from the ZFC axioms page on wikipedia:
The axiom of union states that for any set of sets $\mathcal{F}$ there is a set $A$ containing every element that is a member of some member of $\mathcal{F}$:
$\forall \mathcal{F} \,\exists A \, \forall Y\, \forall x [(x \in Y \land Y \in \mathcal{F}) \Rightarrow x \in A]$.
That is, you can think of the union as a 'formula' that assigns to a family of sets the union of all the sets in that family. Seen this way, the notation $\cup \{\tau_\alpha;\alpha\in A\}$ makes sense. As in the question they just called the family $\{\tau_\alpha\}$, they also called it this way in the answer, although it would be better to write the index set.
The notation you use is also accepted and more widely used. |
Integration - Partial Fraction Decomposition | Procedure of decomposition
So:
$$\frac{x^2+3x+1}{(x^2+4)(x^2+1)} = \frac{ax+b}{x^2+4}+\frac{cx+d}{x^2+1} \Rightarrow$$
$$ \Rightarrow x^2+3x+1 =(x^2+1)(ax+b)+(x^2+4)(cx+d) \Rightarrow$$
$$\Rightarrow x^2+3x+1 =(a+c)x^3+(b+d)x^2+(a+4c)x+(b+4d)$$
Therefore , you have to solve following system of equations :
$\begin{cases}
a+c=0 \\
b+d=1 \\
a+4c=3 \\
b+4d=1
\end{cases}$
After you find coefficients : $a,b,c,d$ use sum of integrals :
$$\int\frac{ax+b}{x^2+4} \,dx =\int\frac{ax}{x^2+4} \,dx+\int\frac{b}{x^2+4} \,dx$$
$$\int\frac{cx+d}{x^2+1} \,dx =\int\frac{cx}{x^2+1} \,dx+\int\frac{d}{x^2+1} \,dx$$ |
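For reference, the system gives $a=-1$, $b=1$, $c=1$, $d=0$, so
$$\frac{x^2+3x+1}{(x^2+4)(x^2+1)}=\frac{1-x}{x^2+4}+\frac{x}{x^2+1},$$
and the pieces integrate to
$$\int\frac{1-x}{x^2+4}\,dx=\frac12\arctan\frac x2-\frac12\log(x^2+4)+C_1,\qquad \int\frac{x}{x^2+1}\,dx=\frac12\log(x^2+1)+C_2.$$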
Can (x(t), y(t)) generate a surface? If so, can the surface be continuous? | You have to make sure that it is well-defined, which means you need to decide which representation to use for $t=0.499999....=0.5000...$. You'd also have to prove the function is continuous. The tricky point is at the same values of $t$ where it wasn't well-defined.
Even then, the function is not $1-1$. If you take $t_0=0.409090909090...$ and $t_1=0.500000....$ they get sent to the same value.
What you have here is called a "space-filling curve."
That said, there is no meaning to "continuous surface" in this sense. A "curve" is not just the image of the map $(x(t),y(t))$, but also the path through that image. Under that view, this is still a continuous curve, not a continuous surface. |