title | upvoted_answer
---|---
Do we really need Second Derivative Test? | Here is a nastier example:
$$f(x)=\sin\left(\frac{1}{x}\right),\quad x\neq 0.$$
The derivative is
$$f'(x)=-\frac{1}{x^2}\cos\frac{1}{x}.$$
Hence if $f'(x_e)=0$, you have $\cos\frac{1}{x_e}=0$.
The second derivative is
$$f''(x)=\frac{2}{x^3}\cos\frac{1}{x} - \frac{1}{x^4}\sin\frac{1}{x}.$$
Inserting $x_e$ now gives you $f''(x_e)=-\frac{1}{x_e^4}\sin\frac{1}{x_e}.$
The second derivative test works far more reliably in this case, because there are infinitely many extremal points, which cluster together as $x$ tends to zero.
This means finding points in a neighbourhood of a possible extremum to test for a sign change is more difficult here (it can be done, but is tedious).
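As a sanity check of this sign pattern, one can evaluate $f''$ at the first few critical points numerically (a sketch; the helper `f2` and the sample range are my own, not part of the argument):

```python
import math

# Second derivative of f(x) = sin(1/x), as derived above.
def f2(x):
    return (2 / x**3) * math.cos(1 / x) - (1 / x**4) * math.sin(1 / x)

# Critical points satisfy cos(1/x) = 0, i.e. 1/x = pi/2 + k*pi.
crit = [1 / (math.pi / 2 + k * math.pi) for k in range(6)]
signs = [1 if f2(x) > 0 else -1 for x in crit]
print(signs)  # maxima and minima alternate: [-1, 1, -1, 1, -1, 1]
```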
Another, more advanced, reason for using the second derivative is that this method can be generalised to higher dimensions, while the sign-change method cannot (see e.g. https://en.wikipedia.org/wiki/Second_partial_derivative_test). |
Fourier transforms and Dirac delta function | In this answer we normalize the Fourier transform as
$$\tag{1} \hat{\varphi}(\omega)~:=~(2\pi)^{-\frac{n}{2}} \int_{\mathbb{R}^n} \!d^nt~e^{-i\omega\cdot t}\varphi(t). $$
Here the dimension will be $n=2$.
The relevant version of the Dirac delta distribution $\delta(t_1-t_2)$ (with two running arguments, so to speak) is here
$$\tag{2} \delta[\varphi]~:=~\int_{\mathbb{R}^2} \!d^2t~\varphi(t)~\delta(t_1-t_2)~:=~\int_{\mathbb{R}} \!dt~ \varphi(t,t)$$
for a Schwartz test function $\varphi:\mathbb{R}^2\to \mathbb{C}$.
There is a notion of Fourier transform for tempered distributions. The Fourier transformed Dirac delta distribution is
$$\hat{\delta}[\varphi]~:=~\delta[\hat{\varphi}]~\stackrel{(2)}{=}~\int_{\mathbb{R}} \!dt~ \hat{\varphi}(t,t)
~\stackrel{(1)}{=}~\int_{\mathbb{R}}\!\frac{dt}{2\pi}\int_{\mathbb{R}^2} \!d^2\omega~e^{-it(\omega_1+\omega_2)}~\varphi(\omega) $$
$$\tag{3} ~=~\ldots ~=~\int_{\mathbb{R}} \!d\omega~ \varphi(\omega,-\omega)~=:~\int_{\mathbb{R}^2} \!d^2\omega~\varphi(\omega)~\delta(\omega_1+\omega_2).$$ |
the reciprocal of an exterior conformal mapping | The answer is no. Begin with
$$f(z)=\frac{z}{1+z^{-1}}$$
which is univalent in some neighborhood of infinity, denoted $N$. For sufficiently large $R$ we have $\{z:|z|\ge R\}\subset f(N)$. Let $L$ be the preimage of the circle $\{z:|z|=R\}$. Let $G$ be the exterior of $L$. Then the map $\Phi(z)=R^{-1}f(z)$ satisfies all conditions of your problem.
The constant factor $R^{-1}$ does not affect the vanishing of derivatives. Dropping it, we work with $1/f(z)=z^{-1}+z^{-2}$ and find that $$(1/f)^{(n)}(z) = (-1)^n n! z^{-n-2}\left(z+n+1\right)$$
The $n$-th derivative vanishes at $z=-n-1$, which lies in $G$ when $n$ is large enough. |
In Pollard p-1 how is the bound B chosen? | The $p-1$ method works if there is a prime factor $p$ of $N$ such that $p-1$ splits into prime factors smaller than the chosen bound $B$. Whether a given bound $B$ will work cannot be predicted.
Of course, increasing $B$ would eventually find a non-trivial factor, but at some point this method would be no more efficient (perhaps even less efficient) than trial division.
If we are lucky, we can find large factors that could not be found with trial division in reasonable time.
The $p-1$ method rarely works if there is no prime factor with, let's say, $25$ digits or less. In this case the ECM (elliptic curve method), which performs better, is usually used. |
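A minimal sketch of the method described above (the function name and the example $N=1403=23\cdot 61$ are my own; $61-1=60=2^2\cdot 3\cdot 5$ is $5$-smooth, so $B=5$ already suffices, while $23-1=2\cdot 11$ is not captured):

```python
import math

def pollard_p_minus_1(n, bound):
    # Compute a = 2^(bound!) mod n; any prime p | n for which p - 1
    # divides bound! then divides gcd(a - 1, n).
    a = 2
    for m in range(2, bound + 1):
        a = pow(a, m, n)
    d = math.gcd(a - 1, n)
    return d if 1 < d < n else None

print(pollard_p_minus_1(1403, 5))  # finds 61, since 60 divides 5! = 120
```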
Reciprocal sum with 2017 | You can also start with
$$ \frac{2017}{1000}=\frac{1}{1}+\frac{1}{2}+\frac{1}{4}+\frac{1}{5}+\frac{1}{15}+\frac{1}{3000} $$
or
$$ \frac{2017}{1000}=\frac{1}{1}+\frac{1}{2}+\frac{1}{3}+\frac{1}{6}+\frac{1}{60}+\frac{1}{3000} $$
then expand $\frac{1}{3000}$ into a sum of $2012$ Egyptian fractions by exploiting $\frac{1}{n}=\frac{1}{n+1}+\frac{1}{n(n+1)}$ multiple times, or
$$\frac{1}{3000}=\frac{1}{3000}\underbrace{\left(\sum_{k=1}^{2010}\frac{1}{2^k}+\frac{1}{3\cdot 2^{2009}}+\frac{1}{6\cdot 2^{2009}}\right)}_{2012\text{ terms}}$$
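Both the seed identity and the splitting trick are easy to confirm with exact arithmetic (a sketch; the helper `split_last` is mine):

```python
from fractions import Fraction as F

seed = [F(1, d) for d in (1, 2, 4, 5, 15, 3000)]
assert sum(seed) == F(2017, 1000)

def split_last(terms):
    # Apply 1/n = 1/(n+1) + 1/(n(n+1)) to the smallest (last) term.
    n = terms[-1].denominator
    return terms[:-1] + [F(1, n + 1), F(1, n * (n + 1))]

terms = seed
for _ in range(5):            # each split adds one term: 6 -> 11
    terms = split_last(terms)
print(len(terms), sum(terms) == F(2017, 1000))  # 11 True
```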
The number of possible solutions for this problem is probably gargantuan.
You know I've always liked that word gargantuan? I so rarely have an opportunity to use it in a sentence. |
Prove that $\int^{\pi}_0 \ln \left(\frac{b-\cos x}{a- \cos x}\right)dx=\pi \ln (\frac{b+\sqrt{b^2-1}}{a+\sqrt{a^2-1}})$ | First, notice the integrand can be written as
$$
\begin{align}
\ln (\frac{b-\cos x}{a- \cos x}) &=\ln(b-\cos x)-\ln(a-\cos x)\\
& =\ln(y-\cos x)|^{y=b}_{y=a}\\
& =\int^b_a\frac{1}{y-\cos x}dy
\end{align}
$$
So the integration equals
$$
\begin{align}
\int^{\pi}_0 \ln (\frac{b-\cos x}{a- \cos x})dx &=\int^{\pi}_0 [ \int^b_a\frac{1}{y-\cos x}dy]dx \\
&=\int^b_a [\int^{\pi}_0 \frac{1}{y-\cos x}dx ] dy\\
& =\int^b_a(\frac{\pi}{\sqrt{y^2-1}})dy
\end{align}
$$
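The closed form used in the last step, $\int_0^\pi \frac{dx}{y-\cos x}=\frac{\pi}{\sqrt{y^2-1}}$ for $y>1$, can be spot-checked numerically (a sketch; the midpoint rule and the sample value $y=2$ are my choices):

```python
import math

def inner(y, n=100000):
    # Midpoint rule for the integral of 1/(y - cos x) over [0, pi].
    h = math.pi / n
    return h * sum(1 / (y - math.cos((i + 0.5) * h)) for i in range(n))

y = 2.0
print(abs(inner(y) - math.pi / math.sqrt(y * y - 1)) < 1e-8)  # True
```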
Let $$y=\sec t$$
so
$$dy=\sec t\tan t dt$$
So the integration becomes
$$
\begin{align}
\pi \int^{\operatorname{arcsec} b}_{\operatorname{arcsec} a}\frac{\sec t\tan t}{\tan t}dt &=\pi \int^{\operatorname{arcsec} b}_{\operatorname{arcsec} a}\sec t\, dt\\ & = \pi \ln |\sec t+\tan t|\Big|^{\operatorname{arcsec} b}_{\operatorname{arcsec} a}\\ & = \pi [\ln(b+\sqrt{b^2-1})-\ln(a+\sqrt{a^2-1})] \\ & =\pi \ln \left(\frac{b+\sqrt{b^2-1}}{a+\sqrt{a^2-1}}\right)
\end{align}
$$ |
For which parameters are the solution stable for an ODE? | First, write this as a first order vector equation by letting $y = x'$, giving the equation
$$ \mathrm x' = \begin{pmatrix} x \\ y \end{pmatrix}' = \begin{pmatrix} y \\ -k^2\sin x - y \end{pmatrix} $$
Obviously, the presence of $\sin x$ indicates that this is a nonlinear equation, so we'll need to linearize about our equilibrium points to get any information about them. To do that, we need to find the equilibria, so we set the right hand side to $0$.
$$ y^* = 0 $$
$$ x^* = n\pi $$
Then, we use the Jacobian as the coefficient matrix to get a linear equation which approximates our nonlinear near the equilibria.
$$ \mathrm x' = \mathrm J \mathrm x $$
$$ \mathrm J = \begin{pmatrix} 0 & 1 \\ -k^2\cos n\pi & -b \end{pmatrix} $$
Because $\cos n\pi$ alternates between $1$ and $-1$, let's split into two cases, even $n = 2m$ and odd $n = 2m+1$ (using $m$ to avoid a clash with the parameter $k$). Correspondingly, we have $x^*_0 = 2m\pi$ and $x^*_1 = (2m+1)\pi$, and the Jacobian matrices
$$ \mathrm J_0 = \begin{pmatrix} 0 & 1 \\ -k^2 & -b \end{pmatrix} $$
and
$$ \mathrm J_1 = \begin{pmatrix} 0 & 1 \\ k^2 & -b \end{pmatrix} $$
Now, the only information we really need to speak of stability for this system is the sign of the real part of the eigenvalues, so let's get those eigenvalues.
$ \mathrm J_0 $ has eigenvalues $\lambda_{1,2}$ that are roots of
$$ \lambda^2 + b\lambda + k^2 $$
which come in the form
$$ \lambda_{1,2} = \frac{-b\pm\sqrt{b^2-4k^2}}{2} $$
If $b^2 \leq 4k^2$, the square root is purely imaginary and doesn't contribute to the real part, so both real parts equal $-b/2$. If $b^2 > 4k^2$ (with $k \neq 0$), then $\sqrt{b^2 - 4k^2} \lt |b|$, so again both real parts have the sign opposite to that of $b$. This means that when $b > 0$ these equilibria are stable for all $k$, and for $b < 0$ they are unstable for all $k$. When $b = 0$, we have a pair of purely imaginary roots, so we get concentric periodic orbits around the equilibrium, at least locally; unless also $k = 0$, in which case we get unstable parallel line orbits (this is $x'' = 0$).
$ \mathrm J_1 $ has eigenvalues $\lambda_{1,2}$ that are roots of
$$ \lambda^2 + b\lambda - k^2 $$
which come in the form
$$ \lambda_{1,2} = \frac{-b\pm\sqrt{b^2+4k^2}}{2} $$
These roots are always real, and since $\sqrt{b^2+4k^2} \geq |b|$, the two roots will have opposite sign. This will give us an unstable saddle when $b$ and $k$ are not both $0$, and when they are it will be the same as above.
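A quick numeric confirmation of these sign patterns (a sketch; the parameter values are arbitrary samples):

```python
import cmath

def real_parts(b, c):
    # Real parts of the roots of lambda^2 + b*lambda + c = 0.
    disc = cmath.sqrt(b * b - 4 * c)
    return ((-b + disc) / 2).real, ((-b - disc) / 2).real

b, k = 0.5, 2.0
j0 = real_parts(b, k * k)    # characteristic polynomial of J0
j1 = real_parts(b, -k * k)   # characteristic polynomial of J1
print(all(r < 0 for r in j0), j1[0] > 0 > j1[1])  # True True
```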
So, to answer the question, there are always unstable solutions, and there are only stable solutions when $b$ is positive, regardless of $k$. |
Function $f(t) - kf(1-t) = t$ . Rearrange and find the formula for f(t) | To prove ii)a
$f({1\over2})-f({1\over2})={1\over2}$ contradiction. You should be able to figure out b) based on this.
For c)
Let $f(t)=at^3+bt^2+ct+d$
Then $f(1-t)=-at^3+(b+3a)t^2+(-3a-2b-c)t+(a+b+c+d)$
So $2a=8,-3a=-12,2c+2b+3a=6,-a-b-c=-1$
So $a=4$, any $b,c$ with $b+c=-3$, and any $d$ will work. |
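A quick check of the coefficient equations (a sketch; from the displayed system the right-hand side being matched is $8t^3-12t^2+6t-1=(2t-1)^3$, and $b=1$, $c=-4$, $d=5$ is one admissible choice of mine):

```python
a, b, c, d = 4, 1, -4, 5          # any b, c with b + c = -3, any d

def f(t):
    return a * t**3 + b * t**2 + c * t + d

ok = all(f(t) - f(1 - t) == (2 * t - 1)**3 for t in range(-5, 6))
print(ok)  # True
```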
Simple Golden Ratio Construction with Three Lines, and Interesting Implied Circle? | The radius of the circle you've constructed is close to $3/4$ of the length of the starting segments, but not exactly. For ease of calculation let's assume the starting segments are $2$ units long. Apply Pythagoras' theorem to find that the yellow segment has length $\sqrt 3$ and the blue segment has length $b:=\frac12(\sqrt{15}-\sqrt 3)$. The ratio of these is $2/(\sqrt5-1)=(\sqrt 5+1)/2$, which indeed equals the golden ratio.
The triangle that encloses your construction has side lengths $\sqrt 3+b$, $\sqrt 7$, and $\sqrt{b^2+4}$ (the last two values obtained by applying Pythagoras), and its area is $\frac12(\text{base})(\text{height})=\sqrt3+b$. The circle you've constructed is called the circumcircle of the triangle. Its radius is called the circumradius and can be calculated from the formula:
$$R={\text{product of side lengths of triangle}\over\text{4 $\times$ area of triangle}}.$$
Plugging in the values we've obtained, the circumradius for your triangle has value
$${(\sqrt3+b)\sqrt 7(\sqrt{b^2+4})\over 4(\sqrt 3+b)}=\frac{\sqrt7}4\sqrt{b^2+4}\approx 1.5004434,$$
which is slightly more than $3/4$ the length of each starting segment. |
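The arithmetic above is easy to confirm numerically (a sketch; starting segments of length $2$ as in the answer):

```python
import math

b = (math.sqrt(15) - math.sqrt(3)) / 2        # blue segment
ratio = math.sqrt(3) / b                      # yellow : blue
golden = (1 + math.sqrt(5)) / 2
R = math.sqrt(7) / 4 * math.sqrt(b * b + 4)   # circumradius
print(abs(ratio - golden) < 1e-12, abs(R - 1.5004434) < 1e-6)  # True True
```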
Is there a primality test based on the sum of squares of the first $n$ natural numbers $\sum_{x = 1}^{n} x^2$? | $$
f(n):=\sum_1^{n-2}x^2=\frac{2n^3 - 9n^2 + 13n - 6}{6}
$$
and so your test can be evaluated as such:
$$\begin{align}
f(6m) &= 72m^3 - 54m^2 + 13m - 1 \equiv& m-1 &\pmod{6m}\\
f(6m+1) &= 72m^3 - 18m^2 + m \equiv& -1 &\pmod{6m+1}\\
f(6m+2) &= 72m^3 + 18m^2 + m \equiv& 3m &\pmod{6m+2}\\
f(6m+3) &= 72m^3 + 54m^2 + 13m + 1 \equiv& 4m+1 &\pmod{6m+3}\\
f(6m+4) &= 72m^3 + 90m^2 + 37m + 5 \equiv& 3m+1 &\pmod{6m+4}\\
f(6m+5) &= 72m^3 + 126m^2 + 73m + 14 \equiv& -1 &\pmod{6m+5}
\end{align}$$
and so your test boils down to requiring that the number is 1 or 5 mod 6. Indeed, this is a requirement for primes greater than 3, but it's not useful to test it this way. |
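One can confirm the congruence table directly (a sketch; within this range, $f(n)\equiv -1 \pmod n$ holds exactly for $n\equiv 1,5 \pmod 6$, prime or not):

```python
def f(n):
    # f(n) = 1^2 + 2^2 + ... + (n-2)^2, reduced mod n
    return sum(x * x for x in range(1, n - 1)) % n

hits = [n for n in range(5, 200) if f(n) == n - 1]
print(hits == [n for n in range(5, 200) if n % 6 in (1, 5)])  # True
```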
Why does $1$ divided by $p$ have $p-1$ repeating decimals? | If $p=3$ you get $0.333333\ldots$, and you could say it has $p-1=2$ repeating digits and the part that repeats is $33$.
If $p=11$ you get $0.0909090909\ldots$, and you could say it has $p-1=10$ repeating digits, and the part that repeats is $0909090909$.
If $p=37$, you get $0.027027027\ldots$, and you could say it has $p-1=36$ repeating digits, and the part that repeats is $027\ldots027$ (with $12$ iterations of $027)$.
If $p=101$, you get $0.0099009900990099\ldots$, and you could say it has $p-1=100$ repeating digits, and the part that repeats is $0099$.
If $p=41$, you get $0.\overline{02439}$, and you could say it has $p-1=40$ repeating digits, and the part that repeats is eight iterations of that sequence of five digits.
If $p=13$, you get $0.\overline{076923}$ with a $6$-digit repetend, and you could say it has $p-1=12$ repeating digits, and the part that repeats is two iterations of $076923$.
$3$ is a divisor of $10^1 - 1$.
$11$ is a divisor of $10^2 - 1$.
$37$ is a divisor of $10^3-1$.
$101$ is a divisor of $10^4-1$.
$41$ is a divisor of $10^5-1$.
$13$ is a divisor of $10^6 - 1$.
The number of repeating digits in the shortest repetend in $1/p$ is the smallest exponent $k$ such that $p$ divides $10^k-1$.
If $41$ is a divisor of $10^5-1$, then $41$ is a divisor of $10^{40}-1$ since
$$
10^{40} - 1 = (10^5 - 1) \Big( (10^5)^7 + (10^5)^6 + (10^5)^5 + (10^5)^4 + (10^5)^3+(10^5)^2+(10^5)^1+1\Big).
$$ |
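In code, the shortest period is the multiplicative order of $10$ modulo $p$ (a sketch; valid for $p$ coprime to $10$):

```python
def period(p):
    # Smallest k with 10^k = 1 (mod p), i.e. p | 10^k - 1.
    k, r = 1, 10 % p
    while r != 1:
        r = (r * 10) % p
        k += 1
    return k

ps = [3, 11, 37, 101, 41, 13]
print([period(p) for p in ps])                     # [1, 2, 3, 4, 5, 6]
print(all((p - 1) % period(p) == 0 for p in ps))   # True
```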
Find min,max, inf, sup (if they exist) of the sequence $x_n=(-1)^n \cdot \frac{\sqrt{n}}{n+1}+\sin \frac{n \pi}{2}$. | $$\mathrm{{d\over dx}{x^{0.5}\over(1+x)}=-{x^{0.5}\over(1+x)^2}+{0.5\over x^{0.5}(1+x)}={0.5(1-x)\over x^{0.5}(1+x)^2}<0\;\forall\;x>1}$$
So $\mathrm{\sqrt{n}\over n+1}$ is strictly decreasing.$\quad\cdots(1)$
Also
$$\mathrm{\lim_{n\to\infty}{\sqrt{n}\over n+1}=0}\quad\cdots(2)$$
$\mathrm{\sin\left({k\pi\over2}\right)=0,1,0,-1}$ at $\mathrm{k=4n,4n+1,4n+2,4n+3}$ respectively.$\quad\cdots(3)$
Using $(1),(2),(3)$ we see
$$
\mathrm{\sup_{n\ge0}x_{4n}={2\over5}},\quad\mathrm{\sup_{n\ge0}x_{4n+2}={\sqrt{2}\over3}},\quad
\mathrm{\sup_{n\ge0}x_{4n+1}=1},\quad\mathrm{\sup_{n\ge0}x_{4n+3}=-1}\\
\mathrm{\inf_{n\ge0}x_{4n}=0},\quad\mathrm{\inf_{n\ge0}x_{4n+2}=0},\quad
\mathrm{\inf_{n\ge0}x_{4n+1}={1\over2}},\quad\mathrm{\inf_{n\ge0}x_{4n+3}=-{\sqrt{3}\over4}-1}
$$
So $$\mathrm{\sup_{n\ge0}x_n=1,\quad\inf_{n\ge0}x_n=-{\sqrt{3}\over4}-1}$$
and using $(2),(3)$ the accumulation points are $0,\pm1$. |
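An empirical check over a large range of $n$ (a sketch; the supremum $1$ is approached but never attained, while the infimum is attained at $n=3$):

```python
import math

def x(n):
    return (-1)**n * math.sqrt(n) / (n + 1) + math.sin(n * math.pi / 2)

vals = [x(n) for n in range(100000)]
print(min(vals))                 # -sqrt(3)/4 - 1, attained at n = 3
print(0.99 < max(vals) < 1)      # True: values approach 1 from below
```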
Markov chain - a notation I don't understand | Each occurrence of $X_1→X_2→…→X_n$ can be replaced by the more usual $(X_k)_{1⩽k⩽n}$. |
Confused about notation: stochastic integral over $[0,\tau]$, $\tau$ a stopping time | For a (nice) process $X$ denote by
$$(X \bullet M)(t,\omega) := \left( \int_0^t X(s) \, dM_s \right)(\omega), \qquad t \in [0,\infty] \tag{1}$$
the stochastic integral of $X$ with respect to $M$. Identity (2.17) is saying that
$$(X \bullet M)(\tau(\omega),\omega) = ((X 1_{[0,\tau]}) \bullet M)(\infty,\omega) \quad \text{a.s.}$$
The left-hand side is defined as the stochastic integral $(1)$ evaluated at $t=\tau(\omega)$. In contrast, the right-hand side is the stochastic integral of the "truncated" process $X \cdot 1_{[0,\tau]}$. |
$\binom{n}{r}$ versus $^n\mathrm{C}_r$ : which notation is more used? | I've seen ${}^n C_r$, but almost vanishingly rarely. The notation $n \choose r$ is easily the most common in my experience, but one sees $C(n, r)$ often too. |
Why does a subset of an unbounded real set have no limit points? | If $S$ were finite, $S=\{\mathbf{p}_1,\ldots,\mathbf{p}_k\}$, then letting $M=\max\{|\mathbf{p}_1|,\ldots,|\mathbf{p}_k|\}+1$ you would have that no element of $S$ satisfies $|\mathbf{s}|\gt M$, a contradiction. Thus, $S$ must be infinite.
Likewise, if $\mathbf{p}$ is any point in $\mathbb{R}^k$, then we have that
$$|\mathbf{x}_n| = |\mathbf{x}_n-\mathbf{p}+\mathbf{p}| \leq |\mathbf{x}_n-\mathbf{p}|+|\mathbf{p}|.$$
In particular, $|\mathbf{x}_n-\mathbf{p}|\geq n-|\mathbf{p}|$ for all $n$.
Let $N$ be the smallest positive integer such that $N\gt|\mathbf{p}|$. Then for $n\geq N+1$ we have that $|\mathbf{x}_n-\mathbf{p}|\geq n-|\mathbf{p}|\geq N-|\mathbf{p}|+1\gt 1$.
Now let $$r = \frac{1}{2}\min\Bigm\{|\mathbf{x}_1-\mathbf{p}|,\ldots,|\mathbf{x}_N-\mathbf{p}|, 1\Bigr\}.$$
If $r\gt 0$ (that is, $\mathbf{p}\neq \mathbf{x}_n$ for all $n$), then for every $n\in\mathbb{N}$ we have $|\mathbf{x}_n-\mathbf{p}|\gt r$ (if $n\leq N$, then by construction we have $r\leq \frac{1}{2}|\mathbf{x}_n-\mathbf{p}|\lt|\mathbf{x}_n-\mathbf{p}|$, and if $n\geq N+1$, then $|\mathbf{x}_n-\mathbf{p}|\gt 1 \gt r$).
If $r=0$, then $\mathbf{p}=\mathbf{x}_i$ for some $i\leq N$. Replace $r$ with
$$\frac{1}{2}\min\Bigl(\Bigl\{ |\mathbf{x}_m-\mathbf{p}|\;\Bigm|\; \mathbf{x}_m\neq\mathbf{p}, m\leq N\Bigr\}\cup\{1\}\Bigr)$$
and proceed as above. |
Could someone explain me this "ownership" of the arctangent | If you draw a right triangle with sides $a,b,c$ and angle $A$ opposite leg $a$, you have $\arctan \frac ab=A$ and $\arctan \frac ba=\frac \pi 2-A$. The general relation is then $\arctan \frac 1x=\frac \pi 2 -\arctan x$ (for $x>0$). The integral signs don't matter. |
The probability of data loss | Assuming that the distribution is done uniformly at random (i.e., for each different file, we distribute its copies to the hard drives in the $\binom{10}{3} = 120$ different ways with equal probability), the probability that no file was lost is the probability that no file was stored on exactly those three hard drives. For each file, this probability is $\left(1 - \frac{1}{120}\right)$, so the probability that none of your 300 files are lost is
$$\left(1 - \frac{1}{120}\right)^{300}$$
and the probability that at least 1 file was lost is therefore
$$1 -\left(1 - \frac{1}{120}\right)^{300} \approx 0.919$$ |
How does one interpret statements like: "The traveling salesman problem is NP-complete?" | The set of $NP-complete$ problems is specific to decision problems. When people say that the Travelling Salesman Problem is NP-complete, they are referring to the decision variant of it:
Does there exist a tour of less than length $k$?
The above is different from the variant of the TSP where the goal is to produce the minimal weight tour (the one you are asking about).
This variant is a function problem, not a decision problem. There are function complexity classes analogous to the decision ones, but they allow the output to be more than just ACCEPT or REJECT. For instance, the output could be the binary encoding of a TSP tour, or just an integer. The function analogue of the decision complexity class $P$ (polynomial time) is known as $FP$: the set of functions computable in polynomial time on a deterministic Turing machine with an output tape. Likewise, there are other function classes like $FNP$ and $FEXP$.
The travelling salesman problem you are considering is complete for the function complexity class $FP^{NP}$, which is functional polynomial time with access to an $NP$ oracle (black box). |
How to compute a marginal probability | Supposing that "compute" here means express in terms of determinants of Laplacian matrices:
Let $Z_\beta$ be as you've defined it for a given graph $G$ and let $Z_\beta'$ be this quantity evaluated for the graph $G/e$, which is $G$ but with the edge $e=\{i,j\}$ contracted. Note that both of these quantities can be expressed via Kirchhoff's Matrix-Tree theorem as determinants of Laplacian matrices.
The key fact that you need is that the spanning trees of $G$ that include a particular edge are in bijection with spanning trees of $G/e$ (via contracting $e$ or reversing that contraction).
Thus the probability you want is equal to the ratio $Z_\beta'/Z_\beta$.
As an aside, Kirchhoff also showed that this ratio equals the effective resistance between $i$ and $j$ when every edge has resistance $e^{-\beta W(i,j)}$. |
Between $6$ and $8$ pm, the minute hand the hour hand interchange positions | I don't get any of the given options as an answer. Instead, I find the man entered his house at $37{109\over143}$ minutes after $6$.
Here's my thinking. Let $t$ denote the time in minutes starting at $6$ o'clock, let $M(t)$ denote the minute number (between $0$ and $60$) that the minute hand is pointing at, and let $H(t)$ denote the minute number that the hour hand is pointing at. For the time range of interest ($0\lt t\lt 120$), the key formula is
$$H(t)=30+{t\over12}$$
Now if $t_1$ denotes the time (after $6$) that the man enters his house and $t_2$ denotes the time (between $7$ and $8$) when he exits, we want
$$M(t_2)=H(t_1)\quad\text{and}\quad H(t_2)=M(t_1)$$
But since $0\lt t_1\lt 60$, we have $M(t_1)=t_1$, while $60\lt t_2\lt 120$ implies $M(t_2)=t_2-60$. The two equations are thus
$$t_2-60=30+{t_1\over12}\quad\text{and}\quad 30+{t_2\over12}=t_1$$
When I eliminate $t_2$ and solve for $t_1$, I get $t_1=5400/143=37{109\over143}$.
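The elimination is easily checked with exact rational arithmetic (a sketch):

```python
from fractions import Fraction as F

# Substituting t2 = 90 + t1/12 into 30 + t2/12 = t1 gives
# t1 = 37.5 + t1/144, a single linear equation in t1.
t1 = (F(30) + F(90, 12)) / (1 - F(1, 144))
t2 = 90 + t1 / 12
assert t2 - 60 == 30 + t1 / 12 and 30 + t2 / 12 == t1
print(t1)  # 5400/143
```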
A final remark: The OP posted an earlier clock-type question, presumably from the same source, for which the book's solution was wrong. I hope someone will check my work as well. |
Represent non-integer values on the factorial base | My other answer was answering a very different question.
Looking at Wikipedia's description of fractional values in a factorial number system, it says that $e$ can be represented as $10.1111111\ldots_F$, i.e. as $1\times2! +0\times 1! +1\times\frac1{2!}+1\times\frac1{3!}+1\times\frac1{4!}+1\times\frac1{5!} + \cdots$. Note that Wikipedia omits any digits which must be zero, so no multiples of $0!$, $\frac1{0!}$ or $\frac1{1!}$ are shown, and $71_{10}$ would be represented by $2321_F$.
On a similar basis $\pi$ can be represented by $11.0031565\ldots_F$, i.e. as $1\times2! +1\times 1! +0\times\frac1{2!}+0\times\frac1{3!}+3\times\frac1{4!}+1\times\frac1{5!} +5\times\frac1{6!}+6\times\frac1{7!}+5\times\frac1{8!} + \cdots$
The way to calculate this is to have a remainder $r_{n-1}$ after finding the digit which multiplies $\frac{1}{(n-1)!}$, take $d_n=\lfloor nr_{n-1}\rfloor$ and $r_n = nr_{n-1} -d_n$. So with $\pi$, we have $r_1=\pi-3 \approx 0.1415927$ and the calculation of the fractional digits $d_n$ looks like
n d_n r_n
1 0.1415927
2 0 0.2831853
3 0 0.8495559
4 3 0.3982237
5 1 0.9911184
6 5 0.9467106
7 6 0.6269741
8 5 0.0157927
On a similar basis $\phi$ can be represented by $1.1024067\ldots_F$ |
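The digit recurrence above is straightforward to implement (a sketch; the function name is mine):

```python
import math

def factorial_base_fraction(x, count):
    # d_n = floor(n * r_{n-1}), r_n = n * r_{n-1} - d_n, starting at n = 2.
    r = x - math.floor(x)
    digits = []
    for n in range(2, count + 2):
        d = math.floor(n * r)
        digits.append(d)
        r = n * r - d
    return digits

print(factorial_base_fraction(math.pi, 7))                 # [0, 0, 3, 1, 5, 6, 5]
print(factorial_base_fraction((1 + math.sqrt(5)) / 2, 7))  # [1, 0, 2, 4, 0, 6, 7]
```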
Is there a commutative ring with a "generalized determinant"? | For $R=\mathbb Z/4\mathbb Z$ the permanent has all the properties you list, and it differs from the determinant when $n\ge 2$.
(For any matrix, the difference between its permanent and its determinant is a multiple of $2$, so in $\mathbb Z/4\mathbb Z$ the permanent is a unit iff the determinant is).
In general, we can set $R=\mathbb Z/m\mathbb Z$ whenever $m$ is not square-free, and then let $\bar m$ be the product of $m$'s prime factors (without multiplicity) and consider
$$ f((a_{ij})) = \sum_{\sigma \in S_n} (h(\sigma) + \operatorname{sgn}(\sigma))\cdot \prod_i a_{i,\sigma(i)} $$
for any function $h: S_n \to \bar mR $ such that $h({\rm id})=0$. If $h$ is not identically zero, then $f$ will differ from the determinant, but satisfy all your conditions. |
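A brute-force comparison on a small matrix illustrates the parity claim (a sketch; `perm_det` is my helper, using the Leibniz expansions):

```python
from itertools import permutations

def perm_det(M):
    # Permanent and determinant via the Leibniz expansions.
    n = len(M)
    per = det = 0
    for sigma in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= M[i][sigma[i]]
        inv = sum(sigma[i] > sigma[j] for i in range(n) for j in range(i + 1, n))
        per += prod
        det += (-1)**inv * prod
    return per, det

p, d = perm_det([[1, 2], [3, 1]])
print(p, d, (p - d) % 2 == 0)  # 7 -5 True
```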
Compare the growth rate of given functions. | You need to parenthesize your towers. $n^{\log n}=(2^m)^m,$ not $2^{(m^m)}$ which is the correct way to read the stack without parentheses. We are therefore comparing
$$(2^m)^m \text { and } m^{(2^m)}$$
Now $(2^m)^m=2^{(m^2)}$. For large $m$ we have
$$m \gt 2 \implies 2^m \gt m^2 \implies m^{(2^m)} \gt m^{(m^2)} = (m^m)^m \gt (2^m)^m$$ |
Ratio of sines given, find ratio of cosines? | Here's a picture of the reverse problem.
And here's an overly-long discussion of that picture:
Given a point $P$ at some distance (here, greater than $1$) from center $O$ of the unit circle, we connect $P$ to some point $A$ on the circle. Drop a perpendicular from $O$ to $B$ on $\overline{AP}$, and let point $X$ on that perpendicular be such that $\overleftrightarrow{AX}\parallel\overleftrightarrow{OP}$.
Defining $\alpha := \angle AOB$ and $\beta = \angle BOP = \angle BXA$, we have
$$\cos\alpha = \frac{|\overline{OB}|}{|\overline{OA}|} \qquad \cos\beta = \frac{|\overline{OB}|}{|\overline{OP}|}\qquad\to\qquad |\overline{OP}| = \frac{\cos\alpha}{\cos\beta}$$
Thus, $|\overline{OP}|$ "gives" a ratio of cosines.
However, we also have
$$\sin\alpha = \frac{|\overline{AB}|}{|\overline{OA}|} \qquad \sin\beta = \frac{|\overline{AB}|}{|\overline{AX}|} \qquad\to\qquad |\overline{AX}| = \frac{\sin\alpha}{\sin\beta}$$
Thus, $|\overline{AX}|$ is the corresponding ratio of sines; or, rather, a corresponding ratio of sines. If we choose to connect $P$ to an alternative point $A$ ---say, $A^\prime$--- an identical construction can give an entirely different ratio of sines $|\overline{A^\prime X^\prime}| = \frac{\sin\alpha^\prime}{\sin\beta^\prime} \neq \frac{\sin\alpha}{\sin\beta}$ for the same ratio of cosines $\frac{\cos\alpha^\prime}{\cos\beta^\prime} = |\overline{OP}| = \frac{\cos\alpha}{\cos\beta}$. |
Eigenvalue of a linear transformation substituting $t+1$ for $t$ in polynomials. | Hint: if you write the matrix of the transformation in the standard basis $\left(1, x, \ldots , x^{n} \right)$, you will see you get an upper triangular $\left( n+1\right) \times \left( n+1 \right)$ matrix with the main diagonal with $A_{ii}=1$ for any $i$ and such that $A - I$ has rank $n$.
Hence the eigenspace has dimension $1$ and you have already shown a generator. |
Stochastic integrals | Counterexample:
Let $X_t \sim \mathcal{U}(\ [-\sqrt{3(T-t)},\sqrt{3(T-t)}]\ )$
$\forall t < T:$
$Var[X_t]=T-t $
$\mathbb{E}[X_t]=0$
The process is continuous, and for every $t < T$, $X_t$ is a continuous random variable, but it is not normal (it has a uniform distribution!).
The only way for a random variable to be normal is to have a normal law, independently of its mean and variance. (Quite tautological, but that's it!) |
Check membership in a matrix group | There is one key ingredient that makes your problem algorithmically solvable:
Theorem. 1. Every normal subgroup of $\Gamma=SL(d,Z)$, $d\ge 3$, is either central or has finite index in $\Gamma$. 2. Every finitely generated normal subgroup of $\Gamma=SL(2,Z)$ is either central or has finite index in $\Gamma$.
Part 1 is due to Margulis and is very deep; part 2 was probably known to Poincaré and is not deep. (I can find references if you like.)
I will leave you to work out an algorithm covering the central case (it is easy) and consider the "generic" situation.
You run in parallel two Turing Machines:
T1 simply enumerates reduced words in the alphabet $M_i^{\pm 1}, i=1,...,n$ according to their length and checks if the corresponding product of matrices equals $A$.
T2 enumerates maps from elementary matrices (generators of $\Gamma$) to permutation groups $S_k$ (for each $k=2, 3, ...$) and checks if they define a homomorphism $f$ that sends matrices $M_i$ to the identity permutation. At the same time, it checks if $f(A)=1$.
Eventually, one of these Turing machines stops and confirms either that $A$ is a product of the matrices $M_i$ (if T1 wins) or that $A$ is not in the subgroup generated by the $M_i$'s (if T2 wins). If $d\ge 3$ then T2 can be replaced by the following:
T3: Enumerate finite groups $SL(d, Z/k)$ where $k$'s are natural numbers. For each $k$ compute reductions of $A$ and $M_i$'s modulo $k$ and check if the reduction of $A$ is a product of reductions of $M_i$'s. If $A$ is not in the subgroup generated by $M_i$'s then T3 will eventually find $k$ such that the same is true mod $k$. (Validity of this algorithm depends on the Congruence Subgroup Property that $SL(d, Z)$ has if and only if $d\ge 3$.)
All these algorithms are terribly inefficient, but I am not a programmer. |
Correct mathematical expression | You probably look for
$$
\prod_{\substack{1\leq j\leq n \\x_j>0}} x_j,
$$
where I assume that $x$ above $0$ means $x_j>0$ (you did not define $x$). |
Basic theorem from vector analysis - Condition that 4 points $A,B,C,D$ no three of which are collinear, will lie in a plane | Let $x_0, x_1, x_2, x_3$ be the points. Then all four points lie in the same plane if the matrix whose rows are the coordinates of $x_1-x_0$, $x_2-x_0$, $x_3-x_0$ has rank two. |
$\sum\nolimits_{i=0}^{\infty }{\sum\nolimits_{j=0}^{\infty }{\sum\nolimits_{k=0}^{\infty }{{{3}^{-\left( i+j+k \right)}}}}}$ | You can avoid inclusion-exclusion by considering $S$ as a nested sum:
$$S=6\sum_{i=0}^\infty 3^{-i}\sum_{j=i+1}^\infty 3^{-j}\sum_{k=j+1}^\infty 3^{-k}\ ,$$
whereby the factor $6$ compensates for the assumption $i<j<k$. The innermost sum has the value $$3^{-j}\sum_{k'=1}^\infty 3^{-k'}={1\over2} 3^{-j}\ .$$
The next-inner sum then becomes
$${1\over2}\sum_{j=i+1}^\infty3^{-2j}={1\over2}3^{-2i}\sum_{j'=1}^\infty 3^{-2j'}={1\over2}\cdot{1\over8}\>3^{-2i}\ .$$
In this way we finally obtain
$$S=6\cdot{1\over2}\cdot{1\over8}\sum_{i=0}^\infty 3^{-3i}=6\cdot{1\over2}\cdot{1\over8}\cdot{27\over26}={81\over208}\ .$$ |
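A truncated numerical check of $S={81\over208}$, summing over distinct indices as in the $i<j<k$ decomposition above (a sketch):

```python
# Truncate each index at 30; the omitted tail is far smaller than 1e-9.
S = sum(3.0 ** -(i + j + k)
        for i in range(30) for j in range(30) for k in range(30)
        if i != j and j != k and i != k)
print(abs(S - 81 / 208) < 1e-9)  # True
```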
clarification on inequalities | The conclusion cannot possibly be valid the way you have put it, because you have
$$Var(x)\le\hbox{something}\ ,\quad E^2(x)\le\hbox{something}\ ,$$
and you are asked to conclude
$$Var(x)\le E^2(x)\ .$$
This is in effect the same as asking "$a\le10$, $b\le20$, which is bigger, $a$ or $b$?" There is no way to know.
I suggest you look carefully at earlier parts of the paper, perhaps there is more information which will help. |
For what $x \in \mathbb{R}$ does the series $\sum_{n=0}^{\infty} n!x^{n}$ converge? | Yes, the ratio test applies and yields that it converges for $x=0$ only.
Let me know if you need more details. |
then what is $f(-5)=?$ | \begin{eqnarray}
f'(x) &=&{1\over \sqrt{1-{4x^2\over (x^2+1)^2}}}\cdot {-2x^2+2\over (x^2+1)^2}-{2\over 1+x^2}\\
&=&{x^2+1\over \sqrt{(x^2-1)^2}}\cdot {-2(x^2-1)\over (x^2+1)^2}-{2\over 1+x^2}\\
&=&{1\over |x^2-1|}\cdot {-2(x^2-1)\over x^2+1}-{2\over 1+x^2}\\
&=&0
\end{eqnarray}
So $f$ is constant function on $(-\infty, -1)$ and on $(1,\infty)$ and thus $$f(-5)= f(-\sqrt{3})=\arcsin {-\sqrt{3}\over 2}+2\arctan (-\sqrt{3}) = -\pi$$ |
Applying reduction formula to integrals | Take $v'=(8-x)^{1/3},$ integrating you get, as implicit in your comment,$$v=-\frac{3}{4}(8-x)^{1/3+1}=-\frac{3}{4}(8-x)^{1/3}(8-x)=\frac{-24}{4}(8-x)^{1/3}+\frac{3x}{4}(8-x)^{1/3}.$$
Split the integral in the addition sign. |
Determine lambda parameter of exponential distribution from covariance | Suppose that $X_1,X_2,\ldots$ are exponentially distributed independent random variables with the parameter $\lambda$. If $n>m$, then
$$
\operatorname{Cov}(S_m,S_n)=\operatorname{Cov}\biggl(\sum_{i=1}^mX_i,\sum_{i=1}^mX_i\biggr)+\operatorname{Cov}\biggl(\sum_{i=1}^mX_i,\sum_{i=m+1}^nX_i\biggr)=\operatorname{Var}S_m
$$
and
$$
\operatorname{Var}S_m=\sum_{i=1}^m\operatorname{Var}X_i=m\operatorname{Var}X_1
$$
using the independence and identical distributions. Since $\operatorname{Var}X_1=\lambda^{-2}$,
$$
\operatorname{Cov}(S_m,S_n)=\frac m{\lambda^2}.
$$
In this particular example $m=31$, and the given value is $\operatorname{Cov}(S_{31},S_{57})=31+57=88$. Hence,
$$
\lambda=\sqrt{\frac{31}{88}}.
$$ |
Show the following Inner product problem | You have $\langle y, x_n \rangle = 0$ for all $n$ and the function $z \mapsto \langle y, z \rangle$ is continuous, hence $\langle y, x_n \rangle \to \langle y, x \rangle$, hence $\langle y, x \rangle = 0$.
Another (essentially equivalent) way of looking at it is to let $S = \{ z | \langle y, z \rangle = 0 \}$
and notice that $S$ is closed because it is the inverse image of the closed
set $\{0\}$ under the continuous map $z \mapsto \langle y, z \rangle$.
Then since $x_n \in S$, $x_n \to x$ and $S$ is closed, we have $x \in S$. |
The probability of being paired with someone? | We solve a small part of the problem, the expected number of pairs from the bus. Call the people on the bus $P_1$ to $P_8$. For any $i$ from $1$ to $8$, define random variable $X_i$ by $X_i=1$ if $P_i$ is paired with someone from the bus, and by $X_i=0$ otherwise. Then the number $Y$ of paired people from the bus is given by $Y=\frac{X_1+\cdots+X_8}{2}$.
By the linearity of expectation we have $E(Y)=\frac{1}{2}\left(E(X_1)+\cdots+E(X_8)\right)$.
We have $E(X_i)=\Pr(X_i=1)=\frac{7}{49}$, and therefore $E(Y)=\frac{28}{49}=\frac47$. |
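A seeded simulation agrees with the exact value $\frac{28}{49}=\frac47$ (a sketch; I assume $50$ people in total paired uniformly at random, $8$ of them from the bus, which is what $\Pr(X_i=1)=\frac7{49}$ suggests):

```python
import random

random.seed(1)
trials = 20000
total = 0
for _ in range(trials):
    people = list(range(50))
    random.shuffle(people)           # induces a uniform random perfect matching
    pairs = zip(people[::2], people[1::2])
    total += sum(1 for a, b in pairs if a < 8 and b < 8)
print(total / trials)  # close to 4/7 = 0.5714...
```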
How can we know that the decimal expansion of an irrational number will never repeat? | If a period appears in the decimal expansion of a number, then after subtracting a rational term you have an expression of the form
$$\sum_{n=k}^{\infty}(d_1...d_r)10^{-rn}$$
where $d_j\in\{0,\ldots,9\}$ are the digits in the period, $r$ is its length, and $(d_1\ldots d_r)$ denotes the integer with these digits. Clearly this expression is rational if and only if $\sum_{n=0}^{\infty}(d_1\ldots d_r)10^{-rn}$ is rational, and
$$\sum_{n=0}^{\infty}(d_1\ldots d_r)10^{-rn}=(d_1\ldots d_r)\sum_{n=0}^{\infty}\left(\frac{1}{10^r}\right)^n=\frac{d_1\ldots d_r}{1-10^{-r}}$$
is rational as a geometric series; note $10^{-r}<1$. Since there are (quite sophisticated) proofs that $\pi$ is irrational (even transcendental, i.e. not a root of any polynomial with rational coefficients), one concludes that its decimal expansion cannot contain any period. |
Checking Riemann integrability | Here is an idea for showing that the upper integral is also zero.
Fix a large integer $n>0$.
Use a partition such that:
- $[0,1/n]$ is the first subinterval,
- the numbers $1/1,1/2,\ldots,1/(n-1)$ are all isolated from the rest of $[0,1]$ by tiny subintervals of width $<1/n^2$. This forces several subintervals containing none of the numbers $1/k$, but that's the point really.
Then show that the upper sum related to the above partition is less than $2/n^2$. With an appropriate choice of $n$ this will be as small as you wish. |
Value of $2^{-1-i}$ in the complex | $$2^{-1-i}=2^{-1}\cdot2^{-i}=\frac12\cdot(e^{\text{Log}2})^{-i}=\frac{e^{i(-\text{Log}2)}}2$$
where $\text{Log}$ denotes the multivalued logarithm, $\displaystyle\text{Log}\,z=\log z+2n\pi i,\ n\in\mathbb Z$ (above we used the principal value $n=0$)
Now, use Euler Formula |
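A quick numerical check of the principal value (Python's complex `**` also uses the principal branch of the logarithm):

```python
import cmath
import math

val = 2 ** (-1 - 1j)                         # principal value of 2^(-1-i)
formula = cmath.exp(-1j * math.log(2)) / 2   # e^{i(-Log 2)} / 2
euler = complex(math.cos(math.log(2)), -math.sin(math.log(2))) / 2
print(val, formula, euler)  # all three agree
```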
Cardinal numbers: if $a>2^{\omega}$, then $a^{\omega}=a$? | The claim is false in the first place. A consequence of König's theorem is that for every infinite cardinal $\kappa$ it holds that $$\kappa<\kappa^{\operatorname{cof}\kappa}$$ and there are plenty of cardinals $\kappa$ such that $\kappa>2^{\aleph_0}$ and $\operatorname{cof}\kappa=\aleph_0$. Notably, $\beth_\omega$ (see here for the definition of the beth-sequence).
Total Percentage Change - Why does this work? | It may be easier to understand why this works if you consider the percentage increases as decimal multiplications.
Example: starting with $100$, a $A=10\%$ increase is equivalent to $100*1.10$, where the percentage $10\%$ is divided by $100$ and added to $1$.
We add $1$ to include the initial amount.
When multiple interests are compounded (applied sequentially), you are simply multiplying the respective decimal values together.
Your final algebraic formula works as a result of multiplication's distributive property, but it can be written more generally as $$\left(\prod_{i=1}^n (A_i+1)\right) - 1$$
where $A_i$ represents the $i$-th interest rate, and $\prod$ means multiply all of the following together, for $i$ starting at $1$ and incrementing by $1$ until it reaches $n$.
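A small sketch of the product form against step-by-step compounding:

```python
def total_change(rates):
    """Total fractional change after compounding the given rates."""
    total = 1.0
    for a in rates:
        total *= 1 + a
    return total - 1

amount = 100 * 1.10 * 1.20              # +10% then +20%, applied in sequence
print(amount)                           # ~132.0
print(total_change([0.10, 0.20]))       # ~0.32, i.e. a 32% total increase
```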
A question about integers. | If they are not all equal, there must be some prime $p$ such $a = p^i A$, $b = p^j B$, $c = p^k C$ where $A,B,C$ are each coprime to $p$ and $i,j,k$ are nonnegative integers that are not all equal. Without loss of generality $i \le j \le k$ (otherwise permute $a$, $b$, $c$ appropriately) and $i < k$.
Then $ b/a + c/b + a/c$ has $p^{k-i}$ in its denominator, and can't be an integer. |
Can convex hulls contain duplicate points? | The convex hull (call this $H$) of points $P_1,\cdots P_n$ is the set of combinations
$$\sum_{k=1}^n c_kP_k$$
where each $c_k \ge 0$ and the sum of the $c_k$ is 1. If a copy of one of the points, say $P_n$ is adjoined to the list $P_1,\cdots P_n$, the now possibly different hull, call it $H'$ is by the above definition the set of combinations
$$c_{n+1}P_n+\sum_{k=1}^n c_kP_k$$
where now all the $c_k,\ k=1,2,...,n+1$ are nonnegative and their sum is $1$.
Now the claim is that in fact $H'=H$ as sets, i.e. the hulls are the same. It's clear that $H \subset H'$ by taking $c_{n+1}=0$.
For the reverse direction, define a new coefficient name $e_n=c_n+c_{n+1}$ and note that now $H'$ is given by the set of combinations
$$\sum_{k=1}^{n-1} c_kP_k+e_nP_n.$$
The constraints on these $n$ coefficients are again that they be each nonnegative and sum to $1$. This then shows that $H' \subset H$, and we arrive at the claimed statement, that adding a copy of a point in describing a convex hull has no effect on the hull generated. |
Show that every linear transformation $\mathbb R^{n}$ ->$\mathbb R^{m}$ transforms linear dependent vectors into linear dependent vectors | As mentioned in the comments, you need to prove it generally, not merely provide an example showing that it may be true. Let me outline how you might approach it.
Suppose that $\vec v_1,...,\vec v_k\in\Bbb R^n$ (where $k$ is some integer greater than $1$) such that $\{\vec v_j:1\le j\le k\}$ is a linearly dependent subset of $\Bbb R^n.$
Suppose that $T:\Bbb R^n\to\Bbb R^m$ is any linear transformation.
Let $\vec w_j=T(\vec v_j)$ for each integer $j$ such that $1\le j\le k.$
Now, unpack the definition of linear dependence to rewrite our first assumption, and use our second assumption to conclude that $\{\vec w_j:1\le j\le k\}$ is a linearly dependent subset of $\Bbb R^m.$ |
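As an illustration (not a proof), here is the mechanism at work for an assumed concrete $T\colon\Bbb R^3\to\Bbb R^2$ and a dependent triple with $\vec v_3=\vec v_1+\vec v_2$:

```python
# A linear map sends the dependency v3 = v1 + v2 to the same dependency
# among the images.  The matrix T below is an arbitrary example.
T = [[1, 2, 0],
     [0, 1, 3]]
apply_T = lambda v: [sum(T[i][j] * v[j] for j in range(3)) for i in range(2)]

v1, v2 = [1, 0, 2], [0, 1, -1]
v3 = [v1[j] + v2[j] for j in range(3)]

w1, w2, w3 = apply_T(v1), apply_T(v2), apply_T(v3)
print(w3 == [w1[i] + w2[i] for i in range(2)])  # True
```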
Finding average speed using Calc | As stated, problem 2 has nothing to do with derivatives: you just want the average rate of change of $h$ on $[a,b]$ which is $\frac{h(b)-h(a)}{b-a}$. Part of the point of calculus is that if $b$ is very close to $a$ then this is very nearly $h'(a)$. This problem is a concrete way of investigating what "very close" means for a particular function, namely $h$, near a particular point, namely $2.5$. |
show that ${ x^4+x+1 \in GF(2^2)[x] }$ | In this case, it isn't too hard to do this by direct computation. Since $x^4+x+1$ is monic, suppose that $a,b,c,d\in\mathrm{GF}(4)$ with
$$x^4+x+1=(x^2+ax+b)(x^2+cx+d).$$
Expand the RHS, compare coefficients and solve for $a,b,c,d$. It will be helpful to remind yourself of what the elements of $\mathrm{GF}(4)$ look like. |
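Since there are only $4^4=256$ candidate coefficient tuples $(a,b,c,d)$, the comparison of coefficients can even be done by brute force. Here $\mathrm{GF}(4)=\{0,1,\omega,\omega+1\}$ is encoded as the integers $0,1,2,3$ (bit $1$ is the $\omega$-part), with addition given by XOR:

```python
def gmul(x, y):
    # Multiply in GF(4) = GF(2)[w]/(w^2 + w + 1); elements are 2-bit ints.
    r = 0
    for i in range(2):
        if (y >> i) & 1:
            r ^= x << i
    if r & 0b100:      # reduce using w^2 = w + 1
        r ^= 0b111
    return r

# (x^2+ax+b)(x^2+cx+d) = x^4 + (a+c)x^3 + (b+d+ac)x^2 + (ad+bc)x + bd,
# so match coefficients with x^4 + x + 1, i.e. the tuple (0, 0, 1, 1).
sols = [(a, b, c, d)
        for a in range(4) for b in range(4)
        for c in range(4) for d in range(4)
        if (a ^ c, b ^ d ^ gmul(a, c),
            gmul(a, d) ^ gmul(b, c), gmul(b, d)) == (0, 0, 1, 1)]
print(sols)  # [(1, 2, 1, 3), (1, 3, 1, 2)]
```

So $x^4+x+1=(x^2+x+\omega)(x^2+x+\omega+1)$ over $\mathrm{GF}(4)$; the two tuples are the two orderings of the factors.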
Is Jensen's Inequality strict for the maximum function? | I don't think you can prove this using strict convexity. If you use the following additional fact about the process $(Y_t)$ you can prove this from first principles:
$$0<P(Y_t <K) <1$$
Assuming this, note that $\int Y_t dP=\int_{Y_t \geq K} Y_t dP+\int_{Y_t < K} Y_t dP<\int_{Y_t \geq K} Y_t dP+KP(Y_t<K)$: strict inequality holds because $\int_{Y_t < K} Y_t dP < KP(Y_t <K)$, by the above-mentioned property.
Keeping this in mind consider $Eg(Y_t)=\int (Y_t-K)^{+}dP =\int_{Y_t \geq K} (Y_t-K) dP>\int Y_t dP-K$ if $EY_t >K$ (by the inequality just established); also $\int_{Y_t \geq K} (Y_t-K) dP>0$ (by the fact that $P(Y_t \geq K) >0$). Combining these two inequalities we get $Eg(Y_t) > \max \{0, \int Y_t dP-K\}=g(EY_t)$.
How do you solve x^2 = log^2(x) | I assume you're asking how you can see that the solution to $\frac{\log x}{x}=\frac{x}{\log x}$ is $W(1)$.
As you've seen, this simplifies to $x^2=(\log x)^2$, which is the case if either $x=\log x$ or $x=-\log x$. The former of these is impossible, so we must have $$x+\log x = 0$$
Taking the exponential on both sides of this gives
$$ e^x x = 1 $$
whose solution for $x$ is the definition of $W(1)$.
Note, however, that this intersection point does not really relate to your initial paragraph -- the behavior at $x=0.5671$ sheds essentially no light on the behavior for $x\to \infty$, since $\frac{x}{\log x}$ has a singularity at $x=1$, which lies between $0.5671$ and $\infty$...
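Numerically, $W(1)$ (the omega constant) can be computed with a few Newton steps on $f(x)=xe^x-1$, and the result indeed satisfies $x+\log x=0$:

```python
import math

def omega():
    # Newton's method for x e^x = 1; the root is W(1)
    x = 0.5
    for _ in range(50):
        x -= (x * math.exp(x) - 1) / ((x + 1) * math.exp(x))
    return x

x = omega()
print(x)                # ~0.5671432904097838
print(x + math.log(x))  # ~0
```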
Does there exist a real differentiable function $f:\mathbb{R}\rightarrow\mathbb{R}$ with the following properties? | Yes, the function $f(x) = 0$ fulfills all of those properties. |
$\varphi : R → S$ is a epimorphism from $R$ to ring $S$, let $I$ be an ideal of $R$. Prove $\varphi (I) = S$ if and only if $R = I +Ker(\varphi)$ | Note that $\ker(\phi) \subset R$ is an ideal, and $I +\ker(\phi)\subset$ is also an ideal in $R$.
Since $\phi(\ker(\phi)) = 0\in S$, we see that $\phi(I +\ker(\phi)) = \phi(I)$.
The question wants you to see that $\phi(I) = S$, i.e. the image of $I$ is the whole codomain, iff $I +\ker(\phi) =R$.
Finding the Projection without using projection matrix always but to use some symmetry | Since you have $y'=y+x'-x$ you also have
$$t_2'=Ty'=T(y+x'-x)=Ty+Tx'-Tx=t_2+t_1'-t_1$$ |
The estimate of the sum of $\frac{\log p_i}{p_i}$ | Evidently, we have
$$
\sum_{i=1}^n{\log p_i\over p_i}
=\sum_{p\le p_n}{\log p\over p}
=\log p_n+\mathcal O(1)
$$
Chebyshev's theorem guarantees the existence of positive constants $A$ and $B$ such that
$$
An\log n<p_n<Bn\log n
$$
Taking logarithms gives
$$
\log n+\log\log n+\log A<\log p_n<\log n+\log\log n+\log B
$$
This implies $\log p_n\sim\log n$. Consequently, we get
$$
\sum_{i=1}^n{\log p_i\over p_i}=\mathcal O(\log n)
$$ |
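A quick numerical illustration: the difference $\log p_n-\sum_{i\le n}\frac{\log p_i}{p_i}$ stays bounded (it hovers around Mertens' constant, roughly $1.33$):

```python
import math

def primes_up_to(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(sieve[p * p::p]))
    return [p for p in range(2, n + 1) if sieve[p]]

ps = primes_up_to(100_000)
s = sum(math.log(p) / p for p in ps)
print(math.log(ps[-1]) - s)  # the O(1) term, roughly 1.33 here
```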
Lagrange theorem help | I doubt that a general solution for multi-objective optimization exists, since the goals might simply conflict with each other. But I assume you must have observed that the function $$x^{2}+y^{2}+4$$ has an absolute minimum at $x=0,y=0$. On the other hand, if you let $x=0$, then $f(0,y)=y^{2}+4$ has no maximum at all. So there is essentially only one choice.
Uniqueness of probability given marginals | No. Consider $X=Y=Z=\{0,1\}$ and the following two probability distributions:
$p_1(x,y,z) = \frac18$
$p_2(x,y,z) = \frac18\left(1 + (-1)^{x+y+z}\right)$
It is easily checked that the two distributions have exactly the same one- and two-dimensional marginals (each marginal is uniform), yet $p_1\neq p_2$.
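The check is a small enumeration (and $p_2$ is a genuine distribution: its values are $0$ and $1/4$):

```python
from itertools import product
from fractions import Fraction

p1 = {t: Fraction(1, 8) for t in product((0, 1), repeat=3)}
p2 = {(x, y, z): Fraction(1, 8) * (1 + (-1) ** (x + y + z))
      for x, y, z in product((0, 1), repeat=3)}

def marginal(p, keep):
    """Marginal distribution over the coordinates listed in `keep`."""
    m = {}
    for t, pr in p.items():
        key = tuple(t[i] for i in keep)
        m[key] = m.get(key, 0) + pr
    return m

same = all(marginal(p1, keep) == marginal(p2, keep)
           for keep in [(0, 1), (0, 2), (1, 2), (0,), (1,), (2,)])
print(same, p1 == p2)  # True False: equal marginals, different joints
```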
What values of $\delta$ make this statement true? | Since $\alpha\geq 1-\delta$, and $|\beta|\leq \delta$, you get the desired inequality if
$$
\frac{\delta}{1-\delta} < \epsilon
$$
which happens if $\delta < \frac{\epsilon}{1+\epsilon}$. |
Independence between random vector and event | An event and a random variable are independent if for all supported values of the random variable, the conditional expectation of the event given the variable equals the margial probability of the event.
In short you need to establish whether: $${\forall u\in(0;1)~\forall x\in(0;1):\\\quad\mathsf P(\{U_1{>}U_2{>}U_3\})=\mathsf P(\{U_1{>}U_2{>}U_3\}\mid u{=}\max\{U_i\}_{i=1}^3,x{=}\min\{X_j\}_{j=1}^n)}$$ |
Turn set equality into predicate formula | here’s how we can express equality of $x$ and $y$ with a formula of set theory:
$$\def\iff{\leftrightarrow} (x = y) ::= \forall z (z \in x \iff z \in y) $$
Here comes the question. How to write a formula for $p = \{a, b\}$.
Don't reinvent the roundmover™. Just use substitution. $$\begin{align}(p = \{a, b\})~::=~&\forall z~(z\in p\iff z\in\{a,b\})\\=~&\forall z~(z\in p\iff (z=a\lor z=b))\\=~&\forall z~(z\in p\iff(\forall y~(y\in z\iff y\in a))\lor( \forall y~(y\in z\iff y\in b)))\end{align}$$ |
Show that a matrix is not positive definite | Following Dan's hints . . .
Let $B$ be an $n{\times}n$ matrix.
Then for any $n{\times}1$ column matrix $x$, we have
\begin{align*}
x^T(B+B^T)x
&=x^TBx+x^TB^Tx\\[4pt]
&=x^TBx+(x^TBx)^T\\[4pt]
&=x^TBx+x^TBx\qquad\text{[since $x^TBx$ is a $1{\times}1$ matrix]}\\[4pt]
&=2(x^TBx)\\[4pt]
\end{align*}
It follows that $B+B^T$ is positive definite if and only if $B$ is positive definite.
Let $m,n$ be positive integers, with $m < n$.
Suppose $X,Y$ are $m{\times}n$ matrices, and $D$ is an $m{\times}m$ symmetric matrix ($D$ need not be diagonal; symmetric will suffice).
Since $Y^TDX = (X^TDY)^T$, to prove $X^TDY+Y^TDX$ is not positive definite, it suffices to prove $X^TDY$ is not positive definite.
Since $m < n$, there exists a nonzero $n{\times}1$ column matrix $v$ such that $Yv=0$.
But then $v^T(X^TDY)v=0$, so $X^TDY$ is not positive definite.
It follows that $X^TDY+Y^TDX$ is not positive definite. |
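A tiny numeric instance of the last step, with $m=1<n=2$ and arbitrary assumed values for $X,Y,D$:

```python
# m = 1, n = 2: X and Y are 1x2, D is 1x1 (trivially symmetric).
X, Y, D = [3.0, -1.0], [2.0, 5.0], 7.0

# M = X^T D Y + Y^T D X, a 2x2 matrix.
M = [[D * (X[i] * Y[j] + Y[i] * X[j]) for j in range(2)] for i in range(2)]

# v spans the null space of Y; it exists because m < n.
v = [-Y[1], Y[0]]
quad = sum(v[i] * M[i][j] * v[j] for i in range(2) for j in range(2))
print(quad)  # 0.0, so M is not positive definite
```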
Unit vector rotated by 30° in clockwise direction with respect to specific axis of rotation | There are two steps:
Define the rotation: Your choices are (a) the Rodrigues' rotation formula to get a rotation matrix or (b) the quaternions to get a unit quaternion representing the rotation. In either case your axis would be $[1, 1, 1]/\sqrt 3$ and your angle would be $-30^{\circ}$ (Note for quaternions you would use half the angle but that should show up in the formula). In either case you would get a rotation matrix $\mathbf R$ or a unit quaternion $\mathbf q$.
Rotate the stated vector using the chosen method. For the rotation matrix case your vector would be $\mathbf v = [0, 1, 0]$ which would give you $\mathbf{Rv}$. For the quaternion case the vector would be $\mathbf u= [0, 0, 1, 0] = [0, \mathbf v]$ (note that 0 was prepended to $\mathbf v$ to make it a quaternion. The convention I am using is that the real part is the first element). Then the rotation would be $\mathbf{qu}\bar{\mathbf q}$ where $\bar{\mathbf q}$ is the conjugate of $\mathbf{q}$. Your resulting vector would be the last three elements of the result.
This is an overview of the process allowing you to fill in the blanks. Is there a particular aspect that you are struggling with? |
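For concreteness, here is a sketch of both routes in plain Python (axis $[1,1,1]/\sqrt3$, angle $-30^\circ$, vector $[0,1,0]$, real-first quaternions as above); the two answers agree:

```python
import math

axis = [1 / math.sqrt(3)] * 3
theta = math.radians(-30)
v = [0.0, 1.0, 0.0]

# (a) Rodrigues: v' = v cos t + (k x v) sin t + k (k.v)(1 - cos t)
def rodrigues(k, t, v):
    d = sum(k[i] * v[i] for i in range(3))
    c = [k[1]*v[2] - k[2]*v[1], k[2]*v[0] - k[0]*v[2], k[0]*v[1] - k[1]*v[0]]
    return [v[i]*math.cos(t) + c[i]*math.sin(t) + k[i]*d*(1 - math.cos(t))
            for i in range(3)]

# (b) Quaternions: q = (cos(t/2), sin(t/2) k); rotate u = (0, v) as q u q*
def qmul(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

q = (math.cos(theta / 2),) + tuple(math.sin(theta / 2) * k for k in axis)
q_conj = (q[0], -q[1], -q[2], -q[3])
via_quat = qmul(qmul(q, (0.0,) + tuple(v)), q_conj)[1:]

via_rod = rodrigues(axis, theta, v)
print(via_rod)
print(list(via_quat))  # the same unit vector
```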
polar coordinates for integral bounds with parallelogram as region | HINT
We can set
$u=x+y \implies 0\le u\le 6$
$v=2y-x\implies 0\le v\le 3$
then
$$\iint_D (x^2+y^2) \,dx\, dy=\iint_R \frac13 \left[\left(\frac{2u-v}3\right)^2+\left(\frac{u+v}3\right)^2\right] \,du\, dv$$ |
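As a numerical sanity check of the substitution (the Jacobian is $\left|\partial(x,y)/\partial(u,v)\right|=\frac13$, hence the factor above), midpoint Riemann sums for the two sides agree, both approaching the exact value $38$:

```python
def f(x, y):
    return x * x + y * y

h = 0.01

# Transformed integral over the (u, v)-rectangle [0,6] x [0,3].
I_uv = 0.0
for i in range(600):
    for j in range(300):
        u, v = (i + 0.5) * h, (j + 0.5) * h
        I_uv += f((2 * u - v) / 3, (u + v) / 3) / 3 * h * h

# Original integral over the parallelogram, via an indicator function.
I_xy = 0.0
for i in range(500):        # x in [-1, 4]
    for j in range(300):    # y in [0, 3]
        x, y = -1 + (i + 0.5) * h, (j + 0.5) * h
        if 0 <= x + y <= 6 and 0 <= 2 * y - x <= 3:
            I_xy += f(x, y) * h * h

print(I_uv, I_xy)  # both close to 38
```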
$ \log|x-4| -\log|3x-10| = \log\left|\frac{1}{x} \right| $ one formal way to say we won't choose all solutions in degree 2 equation | No, $x=2$ is not out of domain. If $x=2$, you get
$$\ln 2-\ln 4=\ln \left(\frac 24\right)=\ln \left(\frac 12\right)$$ which is correct. |
Characterization of a collection of sets containing identity in a group so that it generates a topology? | A requirement for B to be a base is
for all $x,y \in G$ and $U,V \in B_e$:
($a \in xU \cap yV$ implies $xU \cap yV \in L_a$).
"we want to generate ..." Please speak for yourself only, I have no desire to do that.
After expressing the desire "I want to ask ..." you still have not asked your question. Do not beat around the bush, be brave and directly ask your question "What conditions are needed for S to be a base of some topology for G"? |
Prove ths sum of $\small\sqrt{x^2-2x+16}+\sqrt{y^2-14y+64}+\sqrt{x^2-16x+y^2-14y+\frac{7}{4}xy+64}\ge 11$ | For convenience, we make the translation $x=2+a$ and $y=6+b$, so that the equality case is $a=b=0$. Then the expression to bound is:
$$\sqrt{(a+1)^2+15}+\sqrt{(b-1)^2+15}+\sqrt{\frac{7}{8}(a+b)^2+\frac{1}{8}(a-6)^2+\frac{1}{8}(b+6)^2} $$
Now recall the following form of Cauchy-Schwarz for $n$ nonnegative variables $x_1, \cdots, x_n$:
$$\sqrt{n(x_1+\cdots+x_n)}=\sqrt{(1+\cdots+1)(x_1+\cdots+x_n)}\ge \sqrt{x_1}+\cdots+\sqrt{x_n}$$
with equality iff $x_1=\cdots=x_n$. We use this three times, keeping in mind the equality case $a=b=0$:
$$\sqrt{(a+1)^2+15}=\frac{1}{4}\sqrt{16((a+1)^2+15)}\ge \frac{1}{4}(|a+1|+15)$$
$$\sqrt{(b-1)^2+15}=\frac{1}{4}\sqrt{16((b-1)^2+15)}\ge \frac{1}{4}(|b-1|+15)$$
$$\sqrt{\frac{7}{8}(a+b)^2+\frac{1}{8}(a-6)^2+\frac{1}{8}(b+6)^2}\ge \frac{1}{4}\sqrt{2((a-6)^2+(b+6)^2)}\ge \frac{1}{4}(|a-6|+|b+6|)$$
Now since $|a-6|+|a+1|\ge 7$ and $|b+6|+|b-1|\ge 7$ by the triangle inequality, the expression must be at least $\frac{15}{2}+\frac{7}{2}=11$, as required. |
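A quick numerical check of the equality case and of the bound (the grid is a spot check, not a proof):

```python
import math

def f(x, y):
    return (math.sqrt(x * x - 2 * x + 16)
            + math.sqrt(y * y - 14 * y + 64)
            + math.sqrt(x * x - 16 * x + y * y - 14 * y + 7 / 4 * x * y + 64))

print(f(2, 6))  # 11.0: the equality case x = 2, y = 6
best = min(f(-4 + 0.05 * i, 0.05 * j) for i in range(241) for j in range(241))
print(best)     # no grid point in [-4,8] x [0,12] goes below 11
```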
Closed form for the inverse of $f(x) = x \log x$ over $[1, \infty]$? | Maybe there is no closed form?
Yes. At least not in terms of elementary functions. However, you can express it in terms of certain special functions, such as Lambert's W function, in a way similar to that described in ‘Example $2$’.
An invertible map from $[1,\infty)$ to $[0,\infty)$ exists.
Just because something exists, doesn't mean that it can also be expressed in terms of other things. Fundamental colors also exist, but they cannot be expressed as a combination of other colors.
Can you give me some heuristics behind the intuition that this doesn't have a closed form ?
As far as “heuristics” and “intuition” are concerned, try comparing this situation with that of anti-derivatives; i.e., just because a function can be expressed as a combination of elementary functions does not mean that its primitive has the same property. Take, for instance, $e^{-x^2}$, whose integral is the non-elementary error function. But in order to actually be able to prove these facts, knowledge of abstract algebra is required. |
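For what it's worth, here is the closed form in terms of the (non-elementary) Lambert $W$: if $y=x\log x$, write $\log x\,e^{\log x}=y$, so $\log x=W(y)$ and $x=e^{W(y)}=y/W(y)$. A sketch, with a homemade Newton iteration standing in for a library implementation of $W$:

```python
import math

def lambert_w(y):
    # Newton's method for w e^w = y (principal branch, y >= 0)
    w = math.log1p(y)
    for _ in range(50):
        e = math.exp(w)
        w -= (w * e - y) / (e * (w + 1))
    return w

def inverse_xlogx(y):
    """Inverse of f(x) = x log x on [1, oo); note f(1) = 0."""
    return 1.0 if y == 0 else y / lambert_w(y)

x = inverse_xlogx(5.0)
print(x, x * math.log(x))  # x log x recovers 5.0
```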
Find $2\times 2$ symmetric matrix $A$ given two eigenvalues and one eigenvector | To offer a slightly different perspective, due to the Spectral Theorem you have
$$
A=P_1+4P_2,$$
where $P_1$ is the orthogonal projection onto the span of $(1,1)^t$, and $P_2$ is orthogonal to $P_1$; that is, $P_2=I-P_1$. Thus
$$
P_1=\tfrac12\,\begin{bmatrix} 1\\1\end{bmatrix}\begin{bmatrix} 1&1\end{bmatrix}=\begin{bmatrix}1/2&1/2\\1/2&1/2\end{bmatrix}.
$$
And then
$$
A=\begin{bmatrix}1/2&1/2\\1/2&1/2\end{bmatrix}+4\begin{bmatrix}1/2&-1/2\\-1/2&1/2\end{bmatrix}=\begin{bmatrix}5/2&-3/2\\-3/2&5/2 \end{bmatrix}
$$ |
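A two-line check that this $A$ has the required spectral data:

```python
A = [[2.5, -1.5], [-1.5, 2.5]]
mv = lambda M, u: [sum(M[i][j] * u[j] for j in range(2)) for i in range(2)]

print(mv(A, [1, 1]))   # [1.0, 1.0]:  eigenvalue 1, eigenvector (1, 1)
print(mv(A, [1, -1]))  # [4.0, -4.0]: eigenvalue 4, eigenvector (1, -1)
```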
Show that the following series is less than $4 \pi^2 / 3$. | Note that
$$
\frac{1}{i+1}+\frac{1}{k-i+1}=\frac{k+2}{(i+1)(k-i+1)}
$$
So, (because $(a+b)^2\leq 2a^2+2b^2$):
$$
\frac{(k+2)^2}{(i+1)^2(k-i+1)^2}\leq\frac{2}{(i+1)^2}+\frac{2}{(k-i+1)^2}
$$
Adding we get
$$
\sum_{i=0}^{k}\frac{(k+2)^2}{(i+1)^2(k-i+1)^2}
\leq 4\sum_{i=0}^{k}\frac{1}{(1+i)^2}\leq 4\zeta(2)=\frac{2\pi^2}{3}
$$
So
$$
\sum_{i=0}^{k}\frac{(k+1)^2}{(i+1)^2(k-i+1)^2}
\leq \frac{(k+1)^2}{(k+2)^2}\frac{2\pi^2}{3}<\frac{2\pi^2}{3}< \frac{4\pi^2}{3}
$$ |
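An empirical spot check of the intermediate bound $\sum_{i=0}^{k}\frac{(k+1)^2}{(i+1)^2(k-i+1)^2}<\frac{2\pi^2}{3}$:

```python
import math

def s(k):
    return sum((k + 1) ** 2 / ((i + 1) ** 2 * (k - i + 1) ** 2)
               for i in range(k + 1))

for k in (1, 5, 50, 500):
    print(k, s(k), s(k) < 2 * math.pi ** 2 / 3)  # the bound always holds
```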
Radial differential form | In polar coordinates you have
$$
\omega=r\cos\phi\sin\phi \,d(r\cos\phi)+r(1+\sin^2\phi)\,d(r\sin\phi)=\\
r\cos^2\phi\sin\phi\, dr- r^2\cos\phi\sin^2\phi\, d\phi+r(\sin\phi+\sin^3\phi)\,dr+r^2(\cos\phi+\cos\phi\sin^2\phi)\,d\phi=\\
2r\sin\phi\, dr +r^2\cos\phi\, d\phi=d(r^2\sin\phi).
$$ Therefore,
$$
\omega=d\left(y\sqrt{x^2+y^2}\right).
$$ |
Limit through a figure | We have $\angle OAC = \angle OBC = \frac\pi2$, so $\angle ACB = \pi - x$. We also have $|AC| = |BC| = \tan(x/2)$. Now we may use the sine formula for the area:
$$
T(x) = \frac{|AC|\cdot |BC| \cdot \sin(\angle ACB)}{2} = \frac{\tan^2(x/2)\sin( x)}{2}
$$
where I've used that $\sin(\pi - x) = \sin(x)$. To calculate $\lim_{x \to 0}\frac{T(x)}{x^3}$, I would suggest using L'Hôpital's rule, or equivalently, power series. If neither of those are something you've heard about, then I probably need to use a different approach. |
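For reference, the expansions $\tan(x/2)\approx x/2$ and $\sin x\approx x$ give $T(x)\approx x^3/8$, so the limit is $\frac18$; a quick numerical confirmation:

```python
import math

def T(x):
    return math.tan(x / 2) ** 2 * math.sin(x) / 2

for x in (0.1, 0.01, 0.001):
    print(x, T(x) / x ** 3)  # tends to 1/8 = 0.125
```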
Is there an isomorphism of fields between $\mathbb{F}_{3^{2}}$ and $\mathbb{F}=\{a+bi; a,b \in \mathbb{F}_{3}\}$? | The polynomial $p(x)=x^2+1$ is irreducible in $\mathbb F_3[x]$. Hence $\mathbb F \cong \mathbb F_3[x]/(p)$ is a field with 9 elements. Therefore isomorphic to $\mathbb F_{3^2}$ (as finite fields are unique up to isomophism). |
Clarification about this step in matrix proof? | The matrix $A$ is split into two blocks of rows: $[A_{11}, A_{12}]$ (the top $r$ rows) and $[A_{21}, A_{22}]$ (the bottom $n-r$ rows). Since the total rank is $r$ and the top block already has rank $r$, each of the bottom rows is a linear combination of the first $r$ rows. Denoting by $A_j$ the $j$th row of $A$, this means for each $i > r$ there are coefficients $b_{ij}$ such that
$$A_i = \sum_{1\leq j \leq r}b_{ij}A_j$$
These coefficients populate a $(n-r)\times r$ matrix $B = (b_{ij})$, with $BA_{11} = A_{21}$ and $BA_{12} = A_{22}$. |
Cannot arrive at a conclusion using rules of inference | In fact it's not possible to prove the second conclusion using what is given. The premises are consistent with the possibility that there do not exist any film stars, nor any playback singers, nor any film directors at all. And if that's the case then the statement "some film stars are film directors" is false. |
Finding level curves given that $-y∂f/∂x+x∂f/∂y =0$ | For part 1, notice that this equation can also be written as
$$[-y,x] \cdot [f_x,f_y]=0$$
We know that the level curves are perpendicular to the gradient, and we now know that the vector $[-y,x]$ is also perpendicular to the gradient. So, the slope of the level curve at a point $(x,y)$ is $-\frac{x}{y}$
We know that the gradient is never $0$, so there will never be an ambiguous case; now we just solve the differential equation $\frac{dy}{dx} = -\frac{x}{y}$, which turns out to give us circles. You can probably solve that on your own, so I'll leave you to it.
I don't really understand what is being asked in 2, as you defined f as a function of two variables, but then you write $f(x)$. If you can clarify I might be able to help you further. |
Finding $\sin(x/2)$ given a $\tan$ using half-angle identities | Your first goal is to find $\sin x$ and $\cos x$. You know the following things:
$\sin^2 x + \cos^2 x = 1$.
$\frac{\sin x}{\cos x} = -5.099$.
Because $x$ is in Quadrant IV, $\cos x > 0$ and $\sin x < 0$.
We will only need $\cos x$ for now, but finding $\sin x$ and finding $\cos x$ goes hand in hand.
Once you know that, you want to solve for $$\sin \frac x2 = \pm \sqrt{\frac{1 - \cos x}{2}}.$$
The sign of $\sin \frac x2$, once again, cannot be determined from the value of $\cos x$. You have to ask yourself: if $x$ is in Quadrant IV, what is the range of possible values of $x$ (as an angle)? What, then, is the range of possible values of $\frac x2$? Is $\sin x$ positive or negative for those values? |
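A numeric sketch of these steps, assuming the given value is $\tan x=-5.099$ and taking the Quadrant IV representative $x\in(3\pi/2,2\pi)$, so that $x/2$ lies in Quadrant II where sine is positive:

```python
import math

t = -5.099                              # tan x, with x in Quadrant IV
cos_x = 1 / math.sqrt(1 + t * t)        # cos x > 0 in Quadrant IV
sin_half = math.sqrt((1 - cos_x) / 2)   # + sign: x/2 is in Quadrant II

x = 2 * math.pi + math.atan(t)          # a concrete Quadrant IV angle
print(sin_half, math.sin(x / 2))        # the two values agree
```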
Are there non-periodic continuous functions with this property? | No; a counterexample is $x\mapsto \sin |x|$ with $A=[-1,1]$ and $L=4\pi$. |
Meaning of $ E(u|v) = p\,v$ | It means that $p$ is the (linear) regression coefficient of $u$ on $v$: unless $p=0$, the variables $u$ and $v$ aren't independent, and $\operatorname{Cov}(u,v)=p\operatorname{Var}(v)$. For instance, if you know from the context that $u = -v$ always holds, then $E(u \mid v) = -v$, so $p=-1$, which should be quite apparent.
Subbase of the Product Topology | In general if $f:X\to Y$ and $\mathcal B$ denotes a subbase of the topology on $Y$ then: $$f\text{ is continuous iff }f^{-1}(U)\text{ is open for every }U\in\mathcal B$$
The product topology on set $X:=\prod _{\alpha \in A} X_\alpha$ can be defined as the coarsest topology such that all projections $\pi_{\alpha}:X\to X_{\alpha}$ are continuous.
So if $\mathcal B_{\alpha}$ denotes a subbase for $X_{\alpha}$ for every $\alpha\in A$, then in the collection:
$$T:=\{\pi_{\alpha}^{-1}(U_{\alpha})\mid \alpha\in A,\ U_{\alpha}\in\mathcal B_{\alpha}\}\subseteq S$$we can recognize a subbase of the product topology.
For the $\mathcal B_{\alpha}$ there are several choices and one of them is the collection of all open sets in $X_{\alpha}$. In that case $T=S$.
Every subbase of a topology induces a base containing exactly the finite intersections of sets that belong to the subbase. |
Have to use pythagoras theorem | The area of a parallelogram is the product of base and height. In your case, the base is $AB$ and the height is $AC$, so the area is just $12\cdot13=156$, so there is no need for Pythagoras' theorem. You can use it, however, to calculate $BC=\sqrt{AB^2+AC^2}$.
Linear transformation, transformation in base1 | You don't need to use the inverse matrix (you need it if you change basis). Let $f\colon V\to V$ be a linear transformation ($V$ for vector space). If $B=\{e_1, \ldots, e_n\}$ is a basis of $V$, and
$$
f(e_1) = a_{11}e_1 + a_{21}e_2 + \ldots + a_{n1}e_n\\
\cdots\\
f(e_n) = a_{1n}e_1 + a_{2n}e_2 + \ldots + a_{nn}e_n\\
$$
then matrix
$$
\begin{pmatrix}
a_{11} & \cdots & a_{1n}\\
\vdots & \ddots & \vdots\\
a_{n1} & \cdots & a_{nn}
\end{pmatrix}
$$
is called the transformation matrix in the basis $B$ (see wiki, for example). Just look:
$$
f(b_1) = b_4\\
f(b_2) = b_2\\
f(b_3) = b_2\\
f(b_4) = b_4\\
$$
And the matrix is ...?
How to evaluate $\int_{0}^{\infty}\frac{(x^2-1)\ln{x}}{1+x^4}dx$? | We have a well-known formula below
$$J(a,b)=\int_0^\infty\frac{x^{\large a-1}}{1+x^b}\mathrm dx=\frac{\pi}{b}\csc\left(\frac{a\pi}{b}\right)\tag{1}$$
Differentiating $(1)$ with respect to $a$ once, we have
$$J'(a,b)=\int_0^\infty\dfrac{x^{\large a-1}\ln x}{1+x^b}\mathrm dx=-\frac{\pi^2}{b^2}\csc\left(\frac{a\pi}{b}\right)\cot\left(\frac{a\pi}{b}\right)\tag{2}$$
then, by using $(2)$, we can obtain the result of our integral as follows
\begin{align}
I&=\int_{0}^{\infty}\frac{(x^2-1)\ln{x}}{1+x^4}\mathrm dx\\[10pt]
&=\int_{0}^{\infty}\frac{x^2\;\ln{x}}{1+x^4}\mathrm dx-\int_{0}^{\infty}\frac{\ln{x}}{1+x^4}\mathrm dx\\[10pt]
&=J'(3,4)-J'(1,4)\\[10pt]
&=\frac{\pi^2}{8\sqrt{2}}+\frac{\pi^2}{8\sqrt{2}}\\[10pt]
&=\bbox[3pt,border:3px #FF69B4 solid]{\color{red}{\large\frac{\pi^2}{4\sqrt{2}}}}
\end{align} |
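The closed form is easy to confirm numerically: substituting $x=e^t$ turns the integrand into the even function $t(e^{3t}-e^{t})/(1+e^{4t})$, which decays exponentially, so a Simpson rule on a finite window suffices (a sketch):

```python
import math

def g(t):
    # Integrand after x = e^t; even in t and ~ |t| e^{-|t|} for large |t|.
    return t * (math.exp(3 * t) - math.exp(t)) / (1 + math.exp(4 * t))

a, b, n = -30.0, 30.0, 12000    # composite Simpson's rule, n even
h = (b - a) / n
total = g(a) + g(b) + sum((4 if i % 2 else 2) * g(a + i * h)
                          for i in range(1, n))
approx = total * h / 3

print(approx, math.pi ** 2 / (4 * math.sqrt(2)))  # both ~1.7447
```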
Fiber bundles of $G$-spaces | For 1, of course if the bundle is trivial it has a global section. But the converse need not hold.
Consider the chain of subgroups $SO(2n)\subseteq SO(2n+1)\subseteq SO(2n+2)$. Then the bundle in 1 is the unit tangent bundle $T^1S^{2n+1}\rightarrow S^{2n+1}$. A section of this bundle is essentially a non-vanishing vector field on $S^{2n+1}$, so exists for all $n$. On the other hand, Adams showed this bundle is trivial only for $n=0,1,3$.
For 2, I'm not sure what you're asking. How does $N$ act on $J/H$? |
Understanding the proof $\mu$ is invariant then $\mu$ is a linear transformation of Lebesgue measure | I think the proof will be easier to push through if we break it up a bit. If this answer is not what you are looking for, I will be glad to delete it. But all you really need to know here is that every open set is a countable disjoint union of half-open intervals with rational endpoints (even dyadic rational endpoints). Then, the main idea is that Lebesgue measure is the only translation-invariant measure on $\mathscr B(\mathbb R)$ that assigns to each half-open interval with rational endpoints, its length. From there, it's a little trick to finish the proof.
Cohn does it like this:
Suppose that $\mu$ is another measure that does so. Then, if $U$ is an open subset of $\mathbb R$, it is a disjoint countable union of half-open intervals with rational endpoints $I_n$. Then,
$\mu (U)=\sum \mu (I_n)=\sum \lambda (I_n)=\lambda (U).$
So, $\mu$ and $\lambda$ agree on the open sets. Regularity of $\lambda$ now implies that $\mu(E)\le \lambda(E)$ for all Borel sets.
For the reverse inequality, suppose that $A$ is a bounded Borel set and take a bounded open set $V$ containing $A$. Applying the previous inequality to $A$ and to $V-A$,
$\mu(V)=\mu (A)+\mu (V-A)\le \lambda (A)+\lambda (V-A)=\lambda (V)$
and since $\mu(V)=\lambda(V)$ is finite, equality must hold termwise, so $\mu(A)=\lambda (A).$
For the unbounded case, note that $A=\cup_n (-n,n]\cap A$ and use the countable additivity of $\mu$ and $\lambda$.
So, $\mu=\lambda$ on the Borel sets, which proves the main claim.
To finish, define a new measure $\nu$ on the Borel sets of $\mathbb R$ by $\nu(E)=\frac{1}{c}\mu(E)$, where $c:=\mu((0,1])$. Then, $\nu$ is translation invariant, and $\nu((0,1])=1=\lambda((0,1]).$
Take an interval $I=(r,r+2^{-k}];\ r\in \mathbb Q.$ Then, $I$ is an interval with rational endpoints, and now, using the translation invariance the measures, and the result we just proved, we have
$2^k\cdot \nu(I)=\nu((0,1])=\lambda((0,1])=2^k\lambda (I)\Rightarrow \nu(I)=\lambda(I)$,
so $\nu=\lambda$ on the Borel sets, and thus $\mu=c\lambda.$ |
$A$ is $k$-algebra, $A$-bimodule-$A$ structure on $k$ | No, if $A$ is a $k$-algebra, then $k$ usually doesn't even have an $A$-module structure. For instance, if $A$ is a proper field extension of $k$, then there is no homomorphism of $k$-algebras $A\to k$ (and if $A$ has greater cardinality than $k$, there is not even any ring-homomorphism $A\to k$).
In the answer you linked, though, $A$ is assumed to be much more than just a $k$-algebra: it is assumed to be a graded connected $k$-algebra. This means $A=\bigoplus_{n\in\mathbb{N}}A_n$ is a graded $k$-algebra such that the algebra structure map $k\to A$ is an isomorphism between $k$ and $A_0$. There is then a natural quotient map of algebras $A\to A_0\cong k$ which sends $A_n$ to $0$ for all $n>0$. Via this map, $k$ can be considered as an $A$-bimodule. Explicitly, given $a\in A$, it acts on $k$ on either side by multiplication by the degree $0$ part of $a$ (which is just an element of $A_0\cong k$). |
A special Artinian module is also Noetherian | The main theorem in this context is that a right artinian ring is also right noetherian (Hopkins-Levitzki). The proof can be found in any book on ring theory.
Let $J$ be the Jacobson radical of $R$. If $J=\{0\}$, then $R$ is semisimple artinian, so any module is a direct sum of simple modules and the result follows.
Since $J$ is nilpotent, we can do induction on its nilpotency index $n$ (the minimal integer such that $J^n=0$), and we have just proved the base step of the induction.
Consider the exact sequence $0\to MJ^{n-1}\to M\to M/MJ^{n-1}\to0$. By the induction hypothesis, $M/MJ^{n-1}$ is noetherian, being an artinian module over $R/J^{n-1}$, whose radical has nilpotency index $n-1$. So we are left to prove that $L=MJ^{n-1}$ is noetherian.
Note that $LJ=0$, so $L$ is a module over the semisimple artinian ring $R/J$; since it is artinian, it is also noetherian, being semisimple.
Remark: if $I$ is an ideal of $R$, the structure of submodules of $M/MI$ is the same when we consider it either as a module over $R$ or over $R/I$; similarly, if $MI=0$, the structure of submodules of $M$ is the same when we consider it either as a module over $R$ or over $R/I$. |
Integral $\int \sqrt{\frac{x}{2-x}}dx$ | $$\int \sqrt{\frac{x}{2-x}}dx$$
Set $t=\frac {x} {2-x}$ and $dt=\left(\frac{x}{(2-x)^2}+\frac{1}{2-x}\right)dx$
$$=2\int\frac{\sqrt t}{(t+1)^2}dt$$
Set $\nu=\sqrt t$ and $d\nu=\frac{dt}{2\sqrt t}$
$$=4\int\frac{\nu^2}{(\nu^2+1)^2}d\nu\overset{\text{partial fractions}}{=}4\int\frac{d\nu}{\nu^2+1}-4\int\frac{d\nu}{(\nu^2+1)^2}$$
$$=4\arctan \nu-4\int\frac{d\nu}{(\nu^2+1)^2}$$
Set $\nu=\tan p$ and $d\nu=\sec^2 p dp.$ Then $(\nu^2+1)^2=(\tan^2 p+1)^2=\sec^4 p$ and $p=\arctan \nu$
$$=4\arctan \nu-4\int \cos^2 p dp$$
$$=4\arctan \nu-2\int \cos(2p)dp-2\int 1dp$$
$$=4\arctan \nu-\sin(2p)-2p+\mathcal C$$
Set back $p$ and $\nu$:
$$=\color{red}{\sqrt{-\frac{x}{x-2}}(x-2)+2\arctan\left(\sqrt{-\frac{x}{x-2}}\right)+\mathcal C}$$ |
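The result can be sanity-checked by numerical differentiation on $(0,2)$:

```python
import math

def F(x):
    # The antiderivative found above (constant of integration omitted).
    s = math.sqrt(-x / (x - 2))
    return s * (x - 2) + 2 * math.atan(s)

def f(x):
    # The original integrand sqrt(x / (2 - x)).
    return math.sqrt(x / (2 - x))

h = 1e-5
for x in (0.5, 1.0, 1.5):
    print((F(x + h) - F(x - h)) / (2 * h), f(x))  # F' matches f
```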
find symmetric line of given two line | When we say something in a plane is symmetric about a line, we mean its reflection over that line is unchanged, so the phrasing is misleading. How do you define the distance between two lines? The angle? If so, you are looking for an angular bisector of the two lines. But there are two: the vertical line and the horizontal line passing through the intersection point of those two lines. The coordinates of this intersection point are calculated as follows:
$$2x+3-y=0, -2x+11-y=0$$
$$\implies 0=(2x+3-y)-(-2x+11-y)=4x-8$$
$$\implies x=2, \implies 2x+3-y=7-y=0 \implies y=7.$$
Thus, the vertical line is the graph of $x=2$, and the horizontal line is the graph of $y=7$.
In general, you would take this intersection point and find the lines passing through it whose angle at the intersection is one of the two which are halfway between those of the two lines you started from. Since the slopes $m, s$ of your starting lines are the tangents of these two angles, one of your angles is given by the average of the arctangents of those two slopes, $\frac{\arctan(m)+\arctan(s)}{2}$, and the other is that angle plus $\frac{\pi}{2}$. Hence the slopes of your solution lines are $\tan\left(\frac{\arctan(m)+\arctan(s)}{2}\right)$ and $\tan\left(\frac{\arctan(m)+\arctan(s)+\pi}{2}\right)$.
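A quick check of these formulas on the concrete slopes $m=2$, $s=-2$, recovering the horizontal and (numerically, nearly) vertical bisectors through $(2,7)$:

```python
import math

m, s = 2, -2    # slopes of y = 2x + 3 and y = -2x + 11
half = (math.atan(m) + math.atan(s)) / 2

print(math.tan(half))                 # ~0: one bisector is horizontal, y = 7
print(math.tan(half + math.pi / 2))   # huge: the other is vertical, x = 2
```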
Sum of digits, sequence (no theory) | One has
$$\eqalign{N&=\sum_{k=1}^{2011}{10^k-1\over 9}=\sum_{j=0}^{222}10^{9j+4}{\sum_{l=1}^9 10^l\over 9}+{\sum_{k=1}^4 10^k -2011\over 9}\cr
&=123456790\sum_{j=0}^{222}10^{9j+4}+1011\ .\cr}$$
It follows that the decimal representation of $N$ consists of $223$ times the sequence $123456790$, followed by $1011$. The sum of the digits therefore is $223\cdot 37+3=8254$. |
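The whole computation is easy to confirm directly with exact integer arithmetic:

```python
N = sum((10 ** k - 1) // 9 for k in range(1, 2012))  # 1 + 11 + ... (2011 terms)
digits = str(N)

print(digits[:9], digits[-4:], len(digits))  # 123456790 1011 2011
print(sum(int(d) for d in digits))           # 8254
```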
Linear recurrences relation | Notice that
$$\frac{1-r^n}{1-r} = \sum_{k=0}^{n-1} r^k$$
Then it's easy to see that :
$$T(1) = rT(0)+a$$
$$T(2) = r(rT(0)+a)+a = r^2T(0) + a(1 + r)$$
$$T(3) = r(r^2T(0) + a(1 + r)) + a = r^3T(0) + a(1 + r + r^2)$$
So you get by induction
$$T(n) = r^nT(0) + a(1 + r + r^2 + \cdots + r^{n-1})$$
$$T(n) = r^nT(0) + a\frac{1-r^n}{1-r}$$ |
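A quick cross-check of the closed form against direct iteration, with sample values $r=3$, $a=2$, $T(0)=5$ chosen arbitrarily:

```python
def T_iter(n, r, a, t0):
    t = t0
    for _ in range(n):
        t = r * t + a
    return t

def T_closed(n, r, a, t0):
    # r^n T(0) + a (1 - r^n) / (1 - r), in exact integer arithmetic (r != 1)
    return r ** n * t0 + a * (1 - r ** n) // (1 - r)

print(all(T_iter(n, 3, 2, 5) == T_closed(n, 3, 2, 5) for n in range(10)))  # True
```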
Formula for pyramid-like sum | Here is a classic problem in the basic arithmetic of sums. It is said that Gauss solved it when he was about nine years old.
$$
N + 2N + 3N + \cdots + kN =
N(1 + 2 + \cdots + k) =
N\dfrac{(k+1)k}{2}
$$
You can see the sum involving $k$ by noting that, when $k$ is even, there are $k/2$ pairs of the form $1+k$, $2 + k-1$, $3+k-2$, etc., each summing to $k+1$. That was what Gauss saw as a child.
Convergence of Infinite Series - Complex numbers | The absolute value of the term within the summation does not tend to zero, so the summation diverges. The value of $i^n$ is either $i,-1,-i$ or $1$ and hence the value of the summation is:
$$\frac{1}{2+i}+\frac{1}{2-1}+\frac{1}{2-i}+\frac{1}{2+1}+...$$
$$=\frac{2-i}{5}+1+\frac{2+i}{5}+\frac{1}{3}+...$$
...which clearly diverges. |
Proof for any smaller common divisor than g.c.d. being not a linear combination | Let $g$ be the greatest common divisor of $a$ and $b$.
If $d=ax+by$ for some integers $x$ and $y$, then $g$ divides $d=ax+by$, and hence (since $d>0$) $g\le d$.
Hope this helps. |
How did john Napier make log table | Humorous log jokes aside, what he did, if I recall correctly, was to work with the base $1-10^{-7}$ and compute its various powers, obtaining values between $0$ and $1$. He then used the identity
$$\log_a x = \frac{\log_b x}{\log_b a}$$
combined with other logarithmic identities to ease his computation. I think he chose that value for the base for simplicity reasons, which I cannot recall.
A quadrilateral with only one diagonal bisected and one pair of opposite congruent sides. | This is possible as long as $\overline{BD}$ is more than twice as long as $\overline{AB}$.
Concrete example: $A=(0,-10)$, $B=(4,-13)$, $C=(0,16)$, $D=(-4,13)$, where $\overline{AB}$ and $\overline{CD}$ are both $5$ units long and the diagonals intersect in the origin.
We make use of the fact that two sides and an angle do not determine a triangle if the given angle is opposite the shorter side. |
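A quick sanity check of the concrete example:

```python
from math import dist

A, B, C, D = (0, -10), (4, -13), (0, 16), (-4, 13)

# The opposite sides AB and CD are congruent (both of length 5).
assert dist(A, B) == 5.0 and dist(C, D) == 5.0

# Diagonal BD is bisected by the intersection point (the origin,
# since AC lies on the line x = 0 and B = -D) ...
assert ((B[0] + D[0]) / 2, (B[1] + D[1]) / 2) == (0.0, 0.0)

# ... while diagonal AC is not: its midpoint is (0, 3), not the origin.
assert ((A[0] + C[0]) / 2, (A[1] + C[1]) / 2) == (0.0, 3.0)
```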
Need help to understand the following number theory proof | If $a^2 +1$ is not prime, then it can be written as a product:
$$a^2 + 1 = rs,$$ where $r \geq s \gt 1$.
$r$ and $s$ are simply these factors here. |
Is there the following generalization of Goldbach's weak conjecture? | Define a Goldbach number to be an even integer that is the sum of two primes. Note that your statement is equivalent to the following: the gap between consecutive Goldbach numbers is at most $k$ (perhaps even $k-3$). This is still an open problem, as mentioned in this Terry Tao blog post. It is known (as you can read about here) that you can replace the constant $k$ with a power of $\log n$ and the statement becomes true, which is at least close. |
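To get a feel for the gaps, here is a brute-force sketch (the bound $10^4$ is an arbitrary choice). Up to this bound every even number from $4$ on is a Goldbach number, so all gaps are $2$; of course this proves nothing about the general case.

```python
def goldbach_numbers(limit):
    """Even numbers up to `limit` that are a sum of two primes (sieve-based)."""
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(limit**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = [False] * len(sieve[p*p::p])
    primes = [p for p in range(2, limit + 1) if sieve[p]]
    prime_set = set(primes)
    return [n for n in range(4, limit + 1, 2)
            if any(n - p in prime_set for p in primes if p <= n // 2)]

g = goldbach_numbers(10_000)
gaps = [b - a for a, b in zip(g, g[1:])]
print(g[0], max(gaps))  # 4 2
```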
Integration using trapezoid rule | Wolfram Alpha gives $2\ln^2\left(2\right)-2\ln\left(2\right)+\dfrac{3}{4}\approx 0.3246116667165122$ for the value of the integral. |
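For what it's worth, a quick numeric check that the closed form matches the quoted decimal value:

```python
import math

value = 2 * math.log(2)**2 - 2 * math.log(2) + 3 / 4
print(value)  # 0.3246116667165122
```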
Looking for help in regard to Series solutions with ordinary points (ODE) | We know that $a_2 = -1/4 $ if we set $a_0 = 1$ for convenience. Hence, given your original recurrence:
$$a_4 = \frac{1}{32} = \frac{1}{4^2 \times 2!}, \quad a_6 = -\frac{1}{384} = - \frac{1}{4^3 \times 3!}, \quad a_8 = \frac{1}{6144} = \frac{1}{4^4 \times 4!} \ \ , \ldots , \ \ a_{2k} = \frac{(-1)^k}{4^k k!} $$
Can you see that the Taylor expansion is giving you the coefficients of $\exp(-x^2/4)$ for the solution? For the other part of the solution I'm afraid I cannot provide any help.
Hope this helps! |
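A quick check (in Python) that the formula for $a_{2k}$ agrees with the Taylor coefficients of $\exp(-x^2/4)$:

```python
from math import factorial, isclose

def a2k(k):
    """The coefficient a_{2k} = (-1)^k / (4^k k!)."""
    return (-1)**k / (4**k * factorial(k))

# The coefficient of x^{2k} in exp(-x^2/4) = sum_m (-x^2/4)^m / m!
# is (-1/4)^k / k! -- the same number.
for k in range(8):
    assert isclose(a2k(k), (-0.25)**k / factorial(k))

print(a2k(1), a2k(2), a2k(3))  # -0.25, 1/32, -1/384
```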
Does convergence in $L_p$ imply convergence of quotients in $L_p$ | From $f_n - f\to 0$ in $L^p$ it follows that there exists a subsequence of $f_n$ which converges almost everywhere to $f$. Thus, $f\ge 1$ almost everywhere.
Now, notice that
$$ \left| \frac 1 {f_n} - \frac1f \right | = \frac{|f_n - f|}{f_n f} \le |f_n - f|. $$
Integrate (take the essential supremum) on both sides to obtain the statement. |
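A pointwise numeric sketch of the key inequality, with random values $u,v\ge 1$ standing in for $f_n(x)$ and $f(x)$:

```python
import random

# Check |1/u - 1/v| = |u - v|/(u*v) <= |u - v| whenever u, v >= 1.
for _ in range(1000):
    u = 1 + 10 * random.random()
    v = 1 + 10 * random.random()
    assert abs(1/u - 1/v) <= abs(u - v) + 1e-12
```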
Formal Group for the Elliptic Curve $Y^2=X^3+AX$ | My copy of Silverman is in my office, so I cannot recall/check all the key bits. I just have this hunch that this is related to the existence of an order four automorphism of elliptic curves of this type. This is too long to fit into a comment, so an answer it is.
To move the identity element to the origin $(z,w)=(0,0)$, we do the usual change of coordinates $z=-x/y$, $w=-1/y$. The equation of this curve then becomes
$$w=z^3+Azw^2\quad(*)$$ making the automorphism $\sigma:z\mapsto iz, w\mapsto -iw$ stand out. The formal group law $F(z_1,z_2)$ basically works with that $z$-coordinate. Given that $\sigma$ is an automorphism of the elliptic curve, the formal group law must satisfy
$$
F(\sigma(z_1),\sigma(z_2))=\sigma(F(z_1,z_2)).
$$
In other words, we must have
$$
F(iz_1,iz_2)=iF(z_1,z_2)\qquad (**)
$$
for all $z_1,z_2\in\Bbb{C}$.
Your claim follows immediately from $(**)$, because a non-zero homogeneous term $F_n(z_1,z_2)$ of degree $n$ must satisfy both $F_n(iz_1,iz_2)=iF_n(z_1,z_2)$ and
$F_n(iz_1,iz_2)=i^nF_n(z_1,z_2)$. Therefore $i^n=i$ whenever $F_n\neq0$.
The automorphism $\sigma$ also forces the power series solution $w=w(z)\in\Bbb{C}[[z]]$ of $(*)$ to only have terms of degrees $\equiv 3\pmod 4$. Being rusty, I first thought that we need to use that somehow, but those calculations scare me. |
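Out of curiosity, one can watch that last claim happen numerically. The sketch below iterates $w \mapsto z^3 + Azw^2$ as a truncated power series (with the arbitrary choice $A=1$) and checks that only exponents $\equiv 3 \pmod 4$ ever appear:

```python
MAXDEG = 20
A = 1  # arbitrary choice of the coefficient A for illustration

def mul(p, q):
    """Multiply two series stored as {degree: coefficient}, truncated at MAXDEG."""
    r = {}
    for i, ci in p.items():
        for j, cj in q.items():
            if i + j <= MAXDEG:
                r[i + j] = r.get(i + j, 0) + ci * cj
    return r

w = {3: 1}                 # start the fixed-point iteration from w = z^3
for _ in range(4):         # each pass fixes four more degrees mod z^MAXDEG
    w2 = mul(w, w)
    w = {3: 1}
    for d, c in mul({1: A}, w2).items():
        w[d] = w.get(d, 0) + c

print(sorted(w))           # [3, 7, 11, 15, 19]
assert all(d % 4 == 3 for d in w)
```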