title | upvoted_answer
---|---
Catching the right bus | 0% probability.
Assuming the timetable is set to make this as unlikely as possible, consider having the 1-minute bus arrive every minute on the minute, i.e. 12:00, 12:01, etc. Then make the 15-minute bus arrive at 12:00 + epsilon, 12:15 + epsilon, etc. |
A distance preserving operator that's not linear? | http://en.wikipedia.org/wiki/Mazur-Ulam_theorem |
Can you find this limit in a "nicer" way? | Note that $x^{1/x}=e^{(\ln x)/x}$. By looking at the power series expansion of $e^t$, we get that if $x\gt 1$ then $e^{(\ln x)/x}\gt 1+\frac{\ln x}{x}$. Now we are finished.
We can prove $e^t\gt 1+t$ if $t\gt 0$ in other ways, for example by looking at the derivative of $e^t$.
Remark: The advantage of the above approach is not so much quickness, though it is quick. What is useful is that it gives a quite precise idea of the size of $\sqrt[n]{n}-1$ for large $n$. |
Some interesting integrals with dilogarithm | $ \large \text{ Hooray!!!}$ The closed-form of the integral $a)$ is impressive. According to my calculations,
$$ \int_0^{2\pi} \frac{(\operatorname{Li}_2(e^{-i x}))^2-(\operatorname{Li}_2(e^{i x}))^2}{e^{-i x}-e^{i x}}\textrm{d}x=\frac{\pi^5}{48}.$$
Including also the trivial case, $n=1$,
$$ \int_0^{2\pi} \frac{\operatorname{Li}_2(e^{-i x})-\operatorname{Li}_2(e^{i x})}{e^{-i x}-e^{i x}}\textrm{d}x=\frac{\pi^3}{4}.$$
$ \large \text{ Second Hooray!!!}$
$$ \int_0^{2\pi} \frac{(\operatorname{Li}_2(e^{-i x}))^3-(\operatorname{Li}_2(e^{i x}))^3}{e^{-i x}-e^{i x}}\textrm{d}x=\frac{\pi^7}{192}.$$
$ \large \text{Third Hooray!!!}$
I think I have found a first generalization!
$$ I(n)=\int_0^{2\pi} \frac{(\operatorname{Li}_2(e^{-i x}))^n-(\operatorname{Li}_2(e^{i x}))^n}{e^{-i x}-e^{i x}}\textrm{d}x=\frac{\pi^{2n+1}}{6^n}\left(1-\left(-\frac{1}{2}\right)^n\right).$$
$ \large \text{Fourth Hooray!!!}$
Guess what?! I'm also done with the generalization $J(n,m)$
$$\ J(n,m)=\int_0^{2\pi} \frac{(\operatorname{Li}_m(e^{-i x}))^n-(\operatorname{Li}_m(e^{i x}))^n}{e^{-i x}-e^{i x}}\textrm{d}x=\pi(\zeta(m)^n-((2^{1-m}-1)\zeta(m))^n).$$
$ \large \text{Fifth Hooray!!!}$
I have computed $2$ cases of the generalization $K(n)$ and I am approaching its full solution. So,
$$ \int_0^{2\pi} \frac{\operatorname{Li}_2(e^{-i x})\operatorname{Li}_3(e^{-i x})-\operatorname{Li}_2(e^{i x})\operatorname{Li}_3(e^{i x})}{e^{-i x}-e^{i x}}\textrm{d}x=\frac{5}{48}\pi^3\zeta(3);$$
$$ \int_0^{2\pi} \frac{\operatorname{Li}_2(e^{-i x})\operatorname{Li}_3(e^{-i x})\operatorname{Li}_4(e^{-i x})-\operatorname{Li}_2(e^{i x})\operatorname{Li}_3(e^{i x})\operatorname{Li}_4(e^{i x})}{e^{-i x}-e^{i x}}\textrm{d}x=\frac{17}{6912}\pi^7 \zeta(3).$$
$ \large \text{Sixth Hooray!!!}$
Looks like I have been lucky today! Let me put the last generalization I just proved in a nice form
$$K(n)=\int_0^{2\pi} \frac{\operatorname{Li}_2(e^{-i x})\operatorname{Li}_3(e^{-i x})\cdots \operatorname{Li}_n(e^{-i x})-\operatorname{Li}_2(e^{i x})\operatorname{Li}_3(e^{i x})\cdots \operatorname{Li}_n(e^{i x})}{e^{-i x}-e^{i x}}\textrm{d}x$$
$$=\pi \left(\zeta(2)\zeta(3)\cdots \zeta(n)+(-1)^{n-1} \eta(2)\eta(3)\cdots\eta(n)\right).$$
Extra information:
https://en.wikipedia.org/wiki/Riemann_zeta_function
https://en.wikipedia.org/wiki/Dirichlet_eta_function
https://en.wikipedia.org/wiki/Polylogarithm |
Evaluate the sum of the following Legendre symbols: $\sum\limits_{a\in\mathbb{F}_p \ \text{and} \ a \neq 0,1} \left(\frac{a-a^2}{p} \right)$ | Use the claim below and your problem is trivial.
Claim: For an odd prime natural number $p$ and $a,b\in\mathbb{F}_p$ with $b\neq 0$, we have $$\sum_{x\in\mathbb{F}_p}\,\left(\frac{x(ax+b)}{p}\right)=-\left(\frac{a}{p}\right)\,.$$
Proof: Let $x^{-1}$ be the inverse of $x$ modulo $p$. We have $$\begin{align}
\sum_{x\in\mathbb{F}_p}\left(\frac{x(ax+b)}{p}\right)&= \sum_{x\in\mathbb{F}_p^\times} \left(\frac{x}{p}\right)\left(\frac{ax+b}{p}\right) = \sum_{x\in\mathbb{F}_p^{\times}} \left(\frac{x^{-1}}{p}\right)\left(\frac{ax+b}{p}\right)
\\&= \sum_{x\in\mathbb{F}_p^{\times}}\left(\frac{x^{-1}(ax+b)}{p}\right)=\sum_{x\in\mathbb{F}_p^{\times}}\left(\frac{a+bx^{-1}}{p}\right)\,,
\end{align}$$
where $\mathbb{F}_p^\times:=\mathbb{F}_p\setminus\{0\}$. Therefore,
$$ \sum_{x\in\mathbb{F}_p}\left(\frac{x(ax+b)}{p}\right)= \sum_{y\in\mathbb{F}_p^{\times}}\left(\frac{a+by}{p}\right) = \sum_{y\in\mathbb{F}_p}\left(\frac{a+by}{p}\right)-\left(\frac{a}{p}\right)\,.$$ Because $p\nmid b$, we have $\left\{a+by \,\big|\, y\in\mathbb{F}_p\right\}$ is a complete residue system modulo $p$, and thus, $\sum_{y\in\mathbb{F}_p}\left(\frac{a+by}{p}\right) = 0$. This proves the desired equality. |
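As a quick numeric sanity check of the claim, here is a Python sketch using Euler's criterion for the Legendre symbol (the helper name and the sample values of $p$, $a$, $b$ are my own choices):

```python
# Sanity check of the claim: for odd prime p and b != 0 (mod p),
#   sum over x in F_p of ((x(ax+b))/p) == -(a/p).
def legendre(n, p):
    """Legendre symbol (n/p) via Euler's criterion; returns -1, 0 or 1."""
    n %= p
    if n == 0:
        return 0
    return 1 if pow(n, (p - 1) // 2, p) == 1 else -1

p = 101  # a sample odd prime
for a in (0, 3, 5, p - 1):
    for b in (1, 7, 42):
        s = sum(legendre(x * (a * x + b), p) for x in range(p))
        assert s == -legendre(a, p), (a, b, s)
```

For the question itself, write $a-a^2=a(1-a)$ and apply the claim with the roles $a\mapsto -1$, $b\mapsto 1$.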
Proof: $\nabla^2$ is invariant under rotation. | Let $u = x\cos \theta - y \sin \theta$ and $v = x\sin \theta + y \cos \theta$. Then $$f_x = f_u u_x + f_v v_x = f_u \cos \theta + f_v \sin \theta$$
$$f_y = f_u u_y + f_v v_y = -f_u\sin \theta + f_v \cos \theta$$ Therefore \begin{align}f_{xx} &= (f_x)_u u_x + (f_x)_v v_x = (f_u \cos \theta + f_v \sin \theta)_u\cos \theta + (f_u \cos \theta + f_v \sin \theta)_v \sin \theta \\
&= f_{uu} \cos^2 \theta + f_{vu}\sin \theta \cos \theta + f_{uv}\cos \theta \sin \theta + f_{vv}\sin^2 \theta\\
&= f_{uu}\cos^2 \theta + 2f_{uv}\sin\theta \cos \theta + f_{vv}\sin^2\theta
\end{align}
and by a similar argument, $$f_{yy} = f_{uu}\sin^2 \theta - 2f_{uv}\sin \theta \cos \theta + f_{vv} \cos^2 \theta$$ Hence $$f_{xx} + f_{yy} = f_{uu}(\cos^2 \theta + \sin^2 \theta) + f_{vv}(\sin^2\theta + \cos^2 \theta) = f_{uu} + f_{vv}$$ |
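The computation above can be spot-checked numerically with finite differences (a sketch; the test function, angle and sample point are arbitrary choices of mine):

```python
import math

# Finite-difference spot-check that h_xx + h_yy at (x0, y0) equals
# F_uu + F_vv at the rotated point, for an arbitrary test function F.
def F(u, v):
    return u**3 + u * v**2 + math.sin(u * v)   # deliberately non-harmonic

def laplacian(f, x, y, h=1e-4):
    # second-order central differences for f_xx + f_yy
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h)
            - 4 * f(x, y)) / h**2

theta = 0.7
c, s = math.cos(theta), math.sin(theta)

def hrot(x, y):
    # h(x, y) = F(u, v) with u = x cos(theta) - y sin(theta), v = x sin(theta) + y cos(theta)
    return F(c * x - s * y, s * x + c * y)

x0, y0 = 0.3, -1.1
u0, v0 = c * x0 - s * y0, s * x0 + c * y0
assert abs(laplacian(hrot, x0, y0) - laplacian(F, u0, v0)) < 1e-4
```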
Evaluate $\lim_{n\to\infty} \frac{1}{n} \sum_{k=1}^n \sqrt{\frac{k^3}{n}}$ | Your remark is correct, now split the limit as a product of two terms: as $n\to+\infty$,
$$\frac{1}{n} \sum_{k=1}^n \sqrt{\frac{k^3}{n}}=\underbrace{n}_{\to+\infty}\cdot \underbrace{\frac{1}{n} \sum_{k=1}^n \left(\frac{k}{n}\right)^{3/2}}_{\to \int_0^1 x^{3/2}\,dx=2/5}\to +\infty.$$ |
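A quick numeric illustration of the bracketed Riemann sum (illustrative only):

```python
# The bracketed factor is a right-endpoint Riemann sum for
# the integral of x^(3/2) on [0, 1], which equals 2/5.
def riemann(n):
    return sum((k / n) ** 1.5 for k in range(1, n + 1)) / n

for n in (10**3, 10**5):
    assert abs(riemann(n) - 0.4) < 2 / n   # O(1/n) error for a monotone integrand
# hence n * riemann(n) grows without bound:
assert 10**5 * riemann(10**5) > 10**4
```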
Show that F(X) and Y-E[Y|X] uncorrelated | Hint: Push $f(X)$ inside the conditional expectation as it is $\sigma(X)$-measurable and re-use the law of iterated expectations. |
Solving a system of equations using the inverse of the coefficient matrix | Hint: What happens if you multiply both sides by $A^{-1}$? Your inverse is incorrect though. |
How does the identity element change when a set is multiplied by a scalar? | I guess you are denoting by $1=a_1, a_2,\dots,a_{\phi(n)}$ the integers in $[0,n-1]$ that are coprime to $n$. If you consider
$$
B'=\{ca_1\bmod n,ca_2\bmod n,\dots,ca_{\phi(n)}\bmod n\}
$$
then $B'=A$ (just the order is different), and the identity of this group is obviously $1$: the corresponding element in $B$ is $ca_i$, where
$$
ca_i\equiv 1\pmod{n}
$$
(the inverse of $c$ modulo $n$), which can be determined with the extended Euclidean algorithm. |
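A minimal sketch of the extended Euclidean algorithm mentioned above (the helper names are my own):

```python
# Sketch of the extended Euclidean algorithm used to invert c modulo n
# (assumes gcd(c, n) = 1 for invertibility).
def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def inverse_mod(c, n):
    g, x, _ = ext_gcd(c % n, n)
    if g != 1:
        raise ValueError("c is not invertible modulo n")
    return x % n

assert inverse_mod(7, 26) == 15   # 7 * 15 = 105 = 4*26 + 1
```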
laplace transform multiplication by power t | For non-integer powers of $t$ it might be easier to solve without differentiating:
It holds that (if $\text{Re}\,a>-1$ and $\text{Re}\,s>0$)
$$
\int_0^{+\infty}t^a e^{-st}\,dt = \frac{\Gamma(a+1)}{s^{a+1}},
$$
so
$$
\mathcal{L}(t^{5/2})=\frac{\Gamma(7/2)}{s^{7/2}}=\frac{15\sqrt{\pi}}{8s^{7/2}}.
$$
The function $e^{4t}$ just shifts this, so (for $\text{Re}\,s>4$)
$$
\mathcal{L}(t^{5/2}e^{4t})=\frac{15\sqrt{\pi}}{8(s-4)^{7/2}}.
$$ |
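As a sanity check, one can compare the closed form against numeric quadrature at a sample point $s$ (a sketch; the sample $s$, truncation point and step count are arbitrary):

```python
import math

# Spot-check of L{t^(5/2) e^(4t)} at the sample point s = 6 (> 4),
# comparing composite Simpson quadrature on a truncated interval
# against the closed form 15*sqrt(pi) / (8 (s-4)^(7/2)).
def simpson(f, a, b, n=20000):   # n must be even
    h = (b - a) / n
    total = f(a) + f(b)
    total += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    total += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return total * h / 3

s_val = 6.0
integral = simpson(lambda t: t**2.5 * math.exp((4 - s_val) * t), 0.0, 60.0)
closed_form = 15 * math.sqrt(math.pi) / (8 * (s_val - 4) ** 3.5)
assert abs(integral - closed_form) < 1e-6
```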
The value of $[L:K]$ | Let $\alpha \in L$ be a root of $f$. It suffices to show that $K(\alpha)=L$. Suppose that $K(\alpha) \neq L$; then $\mathrm{Gal}(L/K(\alpha))$ is a subgroup of $\mathrm{Gal}(L/K)$. Since $K(\alpha)$ is not a normal extension of $K$, what can you say about its corresponding subgroup in the Galois group? Why is this a problem? |
When we say, "ZFC can found most of mathematics," what do we really mean? | We mean that we can formalize the needed languages, and theories, and we can prove the existence of sufficient models for "regular mathematics" from ZFC.
This means that within any model of ZFC we can show that there are sets which we can interpret as the sets for "regular mathematics". Sets like the real numbers with their order, addition, and so on. And all the statements that you have learned in calculus about the real numbers and continuous functions, and so on -- all these can be made into sets which represent them and we can write proofs (which are other sets) and prove that these proofs are valid, and so on. All this within sets, using nothing but $\in$.
The key issue is that all this happens internally, so it happens within every model of ZFC, or rather in every universe (even if it is not assumed to be a set in a larger universe on its own). |
Why is $3$ the multiplicative coefficient in the Collatz conjecture? | As explained in Isaac Solomon's answer, just adding $1$ would make the problem simple. Multiplying by $2$ (or another even number) and adding $1$ would not work either, so one arrives at $3$.
One could ask the same question for $5$ instead of $3$, or there are other variants, but note the following:
If you assume a probabilistic model then in the multiplying by $3$ case you are in the following situation: in "half" the cases (even $n$) you divide by $2$, in "half" the cases (odd $n$) you take $3n+1$, but then you are guaranteed to divide by $2$ so effectively the situation is:
"half" the time multiply by $\frac{1}{2}$
"half" the time multiply by (roughly) $\frac{3}{2}$ (as the combination of $3n+1$ and the then forced $\frac{(3n+1)}{2}$).
The product of these two $\frac{1}{2}$ and $\frac{3}{2}$ is $\frac{3}{4}$ and this is less than 1, so in the long run you'd expect a decrease.
So the heuristic suggests a decrease in the long run. (This argument is rough but one could make it a bit more precise. However, I think to get a rough idea this might do.)
If you do the same with $5$ (or something still larger) instead of $3$, you'd have $\frac{1}{2}$ and $\frac{5}{2}$ instead. So you'd get $\frac{5}{4}$, which is greater than $1$. So in the long run you'd expect an increase.
Indeed, one can also study this problem with $5$ instead of $3$, but then the situation is different: it is believed that there are some starting values whose sequences go to infinity, together with several small loops. So the question remains interesting, but it changes character once there are values that escape to infinity.
More generally, there are numerous Collatz-like problems that are considered. But for some it can even be shown that they are formally undecidable. |
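The heuristic above can be illustrated by a rough Monte-Carlo sketch (the constants and random starts are arbitrary choices of mine, and this is of course no proof):

```python
import math
import random

# Rough illustration of the heuristic: the average log-growth per step is
# negative for the 3n+1 map, positive for the 5n+1 variant.
def avg_log_step(mult, trials=2000, steps=60, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        n = rng.randrange(3, 10**6) | 1          # a random odd start
        for _ in range(steps):
            m = mult * n + 1 if n % 2 else n // 2
            total += math.log(m / n)
            n = m
    return total / (trials * steps)

assert avg_log_step(3) < 0   # drift towards decrease, as argued above
assert avg_log_step(5) > 0   # drift towards increase
```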
Exact number of compositions. | The generating function for such compositions is
$$g(x)=\prod_{i=1}^k\sum_{j\ge i}x^j=\prod_{i=1}^k\frac{x^i}{1-x}=\frac1{(1-x)^k}\prod_{i=1}^kx^i=\frac{x^{k(k+1)/2}}{(1-x)^k}\;,$$
so the coefficient of $x^n$ in this function is the number that you want. A standard and very useful generating function is
$$\frac1{(1-x)^k}=\sum_{n\ge 0}\binom{k+n-1}nx^n\;;$$
using this, you should be able to find the coefficient of $x^n$ in $g(x)$. |
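A brute-force check of the resulting coefficient, as a sketch (here $k=3$, so the coefficient of $x^n$ in $g(x)$ should be $\binom{n-k(k+1)/2+k-1}{k-1}$):

```python
from itertools import product
from math import comb

# Brute-force check (k = 3): compositions of n into k parts with the
# i-th part at least i, against the coefficient C(n - k(k+1)/2 + k - 1, k - 1)
# read off from g(x) = x^(k(k+1)/2) / (1-x)^k.
def count_compositions(n, k):
    return sum(1 for parts in product(range(1, n + 1), repeat=k)
               if sum(parts) == n and all(p >= i for i, p in enumerate(parts, 1)))

k = 3
for n in range(6, 13):   # the minimal possible sum is 1 + 2 + 3 = 6
    predicted = comb(n - k * (k + 1) // 2 + k - 1, k - 1)
    assert count_compositions(n, k) == predicted
```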
How to determine significance of a binomial test on a sample | what you are testing is:
$H_0$ : the probability of answering I prefer smart phone 1 is $p = 0.5$
The conclusion of your test is that you reject the null hypothesis (well, it depends on the type I error rate you defined a priori). Notice that nowhere is the total size of the group mentioned. Your conclusion is valid regardless of the size of the group. |
Sigma algebra generated by indicator functions on unit interval | Informally, the sigma-algebra generated by a random variable is the smallest sigma-algebra such that this given random variable is measurable with respect to it.
Let me be more precise. Take a random variable $X: \Omega \rightarrow \mathbb{R}$ on a probability space $(\Omega, \mathcal{F}, P)$. The simplest case is when the measurable space we are mapping $\Omega$ into is $(\mathbb{R}, \mathcal{B}(\mathbb{R}))$, the real line equipped with the Borel sigma-algebra.
Then, we can define the sigma-algebra generated by $X$ writing:
$\mathcal{F}_{X} = \{A \subseteq \Omega : X^{-1}(B) = A \text{ for some Borel set } B \in \mathcal B(\mathbb{R})\}$.
First, it's easy to see that this collection is indeed a sigma-algebra, because taking inverse images commutes with countable unions, countable intersections and complementation.
Now, can you see that, by definition, $X$ is $\mathcal{F}_{X}$-measurable? Take some time to convince yourself of this claim. Also, if we take a set out of this collection, it will fail to be a sigma-algebra or $X$ won't be $\mathcal{F}_{X}$-measurable anymore. This is our motivation to call it the sigma-algebra generated by the random variable $X$. This is the smallest collection of sets you need to have in order to say that $X$ is a measurable function.
In your example, $X_{t}(\omega) = 1$ for all $\omega \in \Omega$. So, if you have a given Borel set $B$ and $1\in B$, the inverse image is $\Omega$. If not, it's just the empty set, and the smallest sigma-algebra needed for "measuring" $X_{t}$ is really just the trivial one; you are right. |
Show that the set of all infinite subsets of $\mathbb{N}$ has cardinality greater than $\mathbb{N}$ | Suppose that the set of all infinite subsets of $\mathbb N$ (each of which is countable) were countable. Suppose thus that $\{A_n\}_{n\in \mathbb N}$ is a list of all the infinite subsets of $\mathbb N$, and write $A_n=\{a_{n,1},\cdots, a_{n,k},\cdots \}$. Now diagonalize by defining $b_n$ to be some integer different from each of $\{b_j\}_{1\le j < n}$ and also $b_n\ne a_{n,n}$, and show that $\{b_n\}_{n\in \mathbb N}$ is not on your list, but it is an infinite subset of $\mathbb N$. |
probability (waiting time = infinity) for a poisson process | Clearly we may assume that $N(t)$ is right-continuous with monotone increasing sample path.
Then $t \geq W_n$ if and only if $N(t) \geq n$ and we have
$$\Bbb{P}(W_n \leq t) = \Bbb{P}(N(t) \geq n) = \sum_{k=n}^{\infty} \frac{(\lambda t)^{k}}{k!} e^{-\lambda t} = 1 - \sum_{k=0}^{n-1} \frac{(\lambda t)^{k}}{k!} e^{-\lambda t}. $$
Now taking $t \to \infty$, the monotone convergence theorem yields the desired result $\Bbb{P}(W_n < \infty) = 1$. |
optimization of normalized quadratic function | This is a box-constrained optimization problem. First solve the unconstrained problem, then check whether the constraints are satisfied; if not, solve the problem on each face of the constraint box ($\alpha_i = C$, $\alpha_i = 0$) and choose the minimum.
Solving the unconstrained problem:
\begin{equation}
f(\alpha)=\frac{\alpha^T B\alpha}{2\sum_{i=0}^{n}{\alpha_i}}+b^T\alpha
\end{equation}
This looks very similar to a quadratic problem, but it is a kind of rational function. Therefore I would just use an iterative method.
\begin{equation}
\alpha_{n+1}=\alpha_n+\gamma d\alpha
\end{equation}
You can use gradient descent; it is the simplest to implement but the slowest. I would suggest Newton's method, finding $d\alpha$ from:
\begin{equation}
Hf(\alpha)d\alpha=\nabla f(\alpha)
\end{equation}
I didn't check this, but I think that in this case the Hessian matrix will be positive definite, which allows you to find $d\alpha$ using the Cholesky decomposition. |
Find the bottom right coordinate of rectangle? | It is true that in some applications of computer science, the upper left corner is considered as the origin of coordinates. For example, digital images assume that system.
In those cases, the x coordinate of the bottom right corner is equal to the width of the image:
$$ x_{br} = w $$
And the y coordinate is equal to the minus height:
$$ y_{br} = -h $$
Therefore, if the top left corner is not at the origin but at the given coordinates $ \left(x_{tl};y_{tl}\right) $, the coordinates of the bottom right corner are:
$$ x_{br} = x_{tl} + w \\ y_{br} = y_{tl} - h $$ |
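The formulas above can be wrapped in a tiny helper (a sketch, following the answer's sign convention that the $y$ coordinate decreases downward):

```python
# Direct implementation of the formulas above (top-left origin, with the
# answer's convention that the y coordinate decreases downward).
def bottom_right(x_tl, y_tl, w, h):
    return x_tl + w, y_tl - h

assert bottom_right(0, 0, 640, 480) == (640, -480)   # top-left at the origin
assert bottom_right(10, -5, 100, 50) == (110, -55)
```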
Taking the expectation out of the inner product | Too long for a comment, but not a complete answer by any means.
Additionally, I don't know the answer to this question, but I have some suggestions that you could perhaps consider.
The covariance is well-defined on the Hilbert space, so this may help with applications of Fubini/Tonelli.
You have a separable Hilbert space, so taking some (countable) basis $f_1, f_2, ...$ might not be a bad shout. Maybe point 1. can be used to take this infinite sum out in some way, cancelling cross-terms. This may help as you need only consider certain $f$.
Moreover, you may consider $X = X^+ - X^-$ in the usual way, and consider just non-negative $f$ similarly. Of course, while $\Bbb{E}(X) = 0$, you don't necessarily have $\Bbb{E}(X^{\pm}) = 0$.
Anyway, I hope that these points may be of some help to you. Sorry that I can't be of more help! |
The probability of choosing husband and wife of two couples | For part d) there are two possibilities. Either in one couple both partners are retired, or in one couple only the husband is retired, and in the other only the wife is retired.
You have shown in parts a) and c) that the probability that both are retired is $.32$ and the probability that neither is retired is $.22$. Also, in part b) you showed that the probability that only the wife is retired is $.08$ and the probability that only the husband is retired is $.38$.
Since the statuses of the two couples are independent, we compute the probability that both events occur by multiplication. Thus the probability that both partners in one marriage are retired and that neither partner in the other marriage is retired is $$2\cdot.32\cdot.22=.1408,$$ where the $2$ comes from the fact that we have $2$ ways to choose which of the two couples is retired.
Can you see how to finish the problem now? |
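One possible way to finish, written out as a sketch (it assumes the four per-couple probabilities quoted above and the independence of the two couples):

```python
# Sketch: combine the two disjoint scenarios described above.
p_both, p_neither = 0.32, 0.22
p_wife_only, p_husband_only = 0.08, 0.38

case1 = 2 * p_both * p_neither              # both retired in one couple, neither in the other
case2 = 2 * p_husband_only * p_wife_only    # husband-only in one couple, wife-only in the other
total = case1 + case2

assert abs(case1 - 0.1408) < 1e-9           # matches the value computed above
assert abs(total - 0.2016) < 1e-9
```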
Probability of modifying $\alpha$ and $\beta$ in $ \beta = a \alpha + b $ such that the equality still holds | Answer was figured out in the comments by discussion with @Raskolnikov.
(Edit: It's better if a moderator can close the question.) |
what are the poles of $\cot z$? | It has a simple pole at $0$ because $\lim_{z\to0}z\cot z$ exists (in $\mathbb C$) and it is different from $0$. But it also has a simple pole at every integer multiple of $\pi$. |
Question about partitions in intervals of the real numbers. | Notice that $\mathbb{Q} \subseteq \mathbb{R}$ and $\left| \mathbb{Q} \right| = \left| \mathbb{N} \right|.$ Now, every interval, other than a single element interval, contains a rational number, so you can map every interval in your partition to a rational number contained in that particular interval. Since the intervals in your partition are disjoint you don't have to worry about mapping two intervals to the same rational, so the map is injective. Clearly the map is also surjective, so you have a bijection between your partition to $\mathbb{Q},$ and hence to $\mathbb{N}$ as desired.
You should obviously write this argument up a little more nicely ;)
UPDATE: I think that in this proof we are using the Axiom of Choice when we say
You can map every interval in your partition to a rational number
contained in that particular interval.
However I don't think this is a serious problem.
UPDATE: Oops! This map is not surjective, but is injective, so you have one inequality. The other inequality is the one you obtained. |
What is the name for this product? | It's close to the outer product, but because $\vec p$ isn't arranged as a $2\times 2$ matrix, I think it's more correct to say it's an instance of the Kronecker product, which is defined for arbitrary-size matrices. In that setting it is also common to write
$$
\vec p = [a_1\vec b, a_2\vec b]
$$
It is a special case of a more general construction known as the tensor product.
All three of them, confusingly enough, are written as $\vec a \otimes \vec b$, except that the tensor product sometimes has a subscript like $\otimes _{\Bbb R}$ or $\otimes_{\Bbb C}$, depending on 1) whether there is any cause for confusion, and 2) what kind of elements the $a_i$ and $b_i$ are. |
Legendre polynomials and recurrence formula or Rodrigues method | Use the Legendre polynomials' property
$$P_{m+1}' - P_{m-1}' = (2m+1)P_m$$
Summing, we get
$$\sum_{m=1}^{n} (2m+1)P_m = \sum_{m=1}^{n} (P_{m+1}' - P_{m-1}') = P_{n+1}'+P_{n}'-P_{1}'-P_{0}'= P_{n+1}'+P_{n}'-1$$
As $P_0=1$, this can be rewritten as
$$P_{n+1}'+P_{n}' = \sum_{m=0}^{n} (2m+1)P_m$$ |
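The identity can be verified with exact rational arithmetic, building the polynomials from Bonnet's recursion $(m+1)P_{m+1}=(2m+1)xP_m-mP_{m-1}$ (a sketch; polynomials are plain coefficient lists, lowest degree first):

```python
from fractions import Fraction

# Exact check of P'_{n+1} + P'_n = sum_{m=0}^n (2m+1) P_m for a sample n.
def legendre_polys(nmax):
    P = [[Fraction(1)], [Fraction(0), Fraction(1)]]            # P_0 = 1, P_1 = x
    for m in range(1, nmax):
        a = [Fraction(0)] + [(2 * m + 1) * c for c in P[m]]    # (2m+1) x P_m
        b = [m * c for c in P[m - 1]] + [Fraction(0)] * 2      # m P_{m-1}, padded
        P.append([(u - v) / (m + 1) for u, v in zip(a, b)])
    return P

def deriv(p):
    return [i * c for i, c in enumerate(p)][1:] or [Fraction(0)]

def add(p, q):
    n = max(len(p), len(q))
    p = p + [Fraction(0)] * (n - len(p))
    q = q + [Fraction(0)] * (n - len(q))
    return [u + v for u, v in zip(p, q)]

n = 6
P = legendre_polys(n + 1)
lhs = add(deriv(P[n + 1]), deriv(P[n]))
rhs = [Fraction(0)]
for m in range(n + 1):
    rhs = add(rhs, [(2 * m + 1) * c for c in P[m]])
assert lhs == rhs
```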
How to Find joint Probability mass function of $P_{AB}(a,b)$ given $P_A(a)$ and $P_B(b)$? | Regarding the first point, you have the probabilities already in your table. The calculations you sketch out are not needed.
Assuming that a short order has 1 page and a long order has $B$ pages.
First, you can calculate the kilometers-times-pages table:
$$
\begin{array}{lccc}
& cheese (20 km) & chocolate (100km) & fruits (300 km) \\
short (1 page) & 20 & 100 & 300\\
long (B pages) & 20 B & 100 B & 300 B\\
\end{array}
$$
Now, you can multiply component by component this table with your table and you obtain
$$
\begin{array}{lccc}
& cheese (20 km) & chocolate (100km) & fruits (300 km) \\
short (1 page) & 4 & 20 & 60\\
long (B pages) & 2 B & 20 B & 30 B\\
\end{array}
$$
Now, you can sum the elements of this table and you obtain
$$
E[K]=84+52B
$$ |
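The computation can be sketched in code. Note that the original probability table is not reproduced in the answer, so the probabilities below are inferred from the products shown (e.g. $20\cdot 0.2=4$) and should be treated as an assumption:

```python
# Sketch of the computation; the probabilities are inferred, not given.
km = {"cheese": 20, "chocolate": 100, "fruits": 300}
probs = {
    ("short", "cheese"): 0.2, ("short", "chocolate"): 0.2, ("short", "fruits"): 0.2,
    ("long", "cheese"): 0.1, ("long", "chocolate"): 0.2, ("long", "fruits"): 0.1,
}

const_part = sum(p * km[item] for (length, item), p in probs.items() if length == "short")
b_part = sum(p * km[item] for (length, item), p in probs.items() if length == "long")
# E[K] = const_part + b_part * B
assert abs(const_part - 84) < 1e-9
assert abs(b_part - 52) < 1e-9
```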
A difficult circle theorem | First of all, I think that it should be $AF \perp BD$ instead of $AE \perp BD$. Let $P$ be the orthocenter, the intersection of $BH, DC, AF$. Then we have that $DGBP$ is a parallelogram, as the two diagonals intersect at their midpoints.
Now we have that: $\angle DAB + \angle DGB = \angle DAB + \angle DPB = \angle DAB + 180^{\circ} - \angle BDC - \angle DBH$
$ = \angle DAB + 180^{\circ} - 90^{\circ} + \angle ABD - 90^{\circ} + \angle BDA = \angle DAB + \angle ABD + \angle BDA = 180^{\circ}$. Therefore $G$ lies on the circumcircle of $\triangle ABD$. But now we have that $GB \parallel DC$ from $DGBP$, so $GB \perp AB$. As $G$ is on the circumcircle we must have that $AG$ is the diameter. |
Euler's Beta function for positive integers derivation | Ok, I've found a solution:
let $a_i=\int_{0}^{1}{\left(m+i-1\right)x^{m+i-2}\left(1-x\right)^{n-i}dx}$
let $u_i=\left(1-x\right)^{n-i},du_i=\left(i-n\right)\left(1-x\right)^{n-i-1}dx$
let $v_i=x^{m+i-1},dv_i=\left(m+i-1\right)x^{m+i-2}dx$
\begin{align*}
a_i&=\int_{0}^{1}{\left(m+i-1\right)x^{m+i-2}\left(1-x\right)^{n-i}dx}\\
&=\int_{0}^{1}{u_idv_i}\\
&=\left[u_iv_i\right]_0^1+\int_{0}^{1}{v_idu_i}\\
&=\left[\left(1-v_i^{\frac{1}{m+i-1}}\right)^{n-i}v_i\right]_0^1-\int_{0}^{1}{x^{m+i-1}\left(i-n\right)\left(1-x\right)^{n-i-1}dx}\\
&=0-\left(i-n\right)\int_{0}^{1}{x^{m+i-1}\left(1-x\right)^{n-i-1}dx}\\
&=\frac{n-i}{m+i}\int_{0}^{1}{\left(m+i\right)x^{m+i-1}\left(1-x\right)^{n-i-1}dx}\\
&=\frac{n-i}{m+i}a_{i+1}\\
&=a_{n-1}\prod_{j=1}^{n-2}{\frac{n-j}{m+j}}\quad\text{(taking $i=1$ and iterating)}\\
&=\frac{m!\left(n-1\right)!}{\left(m+n-2\right)!}a_{n-1}
\end{align*}
\begin{align*}
B\left(m,n\right)&=\int_{0}^{1}{x^{m-1}\left(1-x\right)^{n-1}dx}\\
&=\frac{1}{m}\int_{0}^{1}{mx^{m-1}\left(1-x\right)^{n-1}dx}\\
&=\frac{1}{m}a_1\\
&=\frac{1}{m}\frac{m!\left(n-1\right)!}{\left(m+n-2\right)!}a_{n-1}\\
&=\frac{\left(m-1\right)!\left(n-1\right)!}{\left(m+n-2\right)!}\int_{0}^{1}{\left(m+n-2\right)x^{m+n-3}\left(1-x\right)dx}\\
&=\frac{\left(m-1\right)!\left(n-1\right)!}{\left(m+n-2\right)!}\left[\int_{0}^{1}{\left(m+n-2\right)x^{m+n-3}dx}-\int_{0}^{1}{\left(m+n-2\right)x^{m+n-2}dx}\right]\\
&=\frac{\left(m-1\right)!\left(n-1\right)!}{\left(m+n-2\right)!}\left(\left[x^{m+n-2}\right]_0^1-\left[\frac{m+n-2}{m+n-1}x^{m+n-1}\right]_0^1\right)\\
&=\frac{\left(m-1\right)!\left(n-1\right)!}{\left(m+n-2\right)!}\left(1-\frac{m+n-2}{m+n-1}\right)\\
&=\frac{\left(m-1\right)!\left(n-1\right)!}{\left(m+n-2\right)!}-\frac{\left(m-1\right)!\left(n-1\right)!}{\left(m+n-2\right)!}\frac{m+n-2}{m+n-1}\\
&=\frac{\left(m-1\right)!\left(n-1\right)!\left(m+n-1\right)}{\left(m+n-1\right)!}-\frac{\left(m-1\right)!\left(n-1\right)!\left(m+n-2\right)}{\left(m+n-1\right)!}\\
&=\frac{\left(m-1\right)!\left(n-1\right)!\left(m+n-1-m-n+2\right)}{\left(m+n-1\right)!}\\
&=\frac{\left(m-1\right)!\left(n-1\right)!}{\left(m+n-1\right)!}
\end{align*} |
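A numeric spot-check of the final formula against the defining integral (a sketch using composite Simpson quadrature; the sample pairs $(m,n)$ are arbitrary):

```python
import math

# Check B(m, n) = (m-1)! (n-1)! / (m+n-1)! against the defining integral.
def simpson(f, a, b, n=2000):   # n must be even
    h = (b - a) / n
    total = f(a) + f(b)
    total += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    total += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return total * h / 3

for m, n in [(2, 3), (4, 5), (3, 7)]:
    integral = simpson(lambda x: x**(m - 1) * (1 - x)**(n - 1), 0.0, 1.0)
    exact = math.factorial(m - 1) * math.factorial(n - 1) / math.factorial(m + n - 1)
    assert abs(integral - exact) < 1e-10
```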
Computing the ideal of a finite set of points | In theory, this is simple.
Each point can be described by $n$ linear equations, and so defines an ideal $I_P$.
To get the ideal of $S$, you just compute $\bigcap I_P$, the intersection of the ideals of each point.
However, computing intersection is not easy to do by hand - to do that one needs Gröbner bases. |
Does there exist an infinite $\sigma$-algebra that contains no nonempty member which has no nonempty proper measurable subset? | Put the topology of pointwise convergence of nets on the set of all functions $f:\mathbb R\to\{0,1\}$. In other words, the product topology on the product of continuum-many copies of $\{0,1\}$, where the latter has the discrete topology. Consider the $\sigma$-algebra of all Borel sets in this space. A basic open set is a set of all functions whose values at finitely many points are specified. An intersection of countably many such open sets is a set of all functions whose values at countably many points are specified.
In other words, an intersection of countably many open sets is a set of the form
$\{f : \forall x\in A\ f(x)=g(x)\}$ for some fixed function $g$ and some countable set $A\subseteq\mathbb R$. This will always have a proper subset that is a non-empty Borel set: just add some more members to $A$, but not uncountably many.
Does every Borel set have a subset that is the intersection of countably many Borel sets? On that point I'm rusty; maybe this won't work if that's not true.
If one can show that a set containing only one function $f:\mathbb R\to \{0,1\}$ is never a Borel set in this space, I think that's enough. |
About banded Toeplitz matrices | Take $A = \left[\begin{array}{ccc}1 & 1 & -1\\ 1 & 1 & 1 \\ -1 & 1 & 1\end{array}\right]$ and $B = \left[\begin{array}{ccc}1 & 1 & 0\\ 1 & 1 & 1 \\ 0 & 1 & 1\end{array}\right]$. We have $\|A\|_2 = 2$ and $\|B\|_2 = 1 + \sqrt{2} > 2$.
Thus, the norm of a matrix can be smaller than the norm of its banded part, even for symmetric Toeplitz matrices. I suppose that the same holds in the infinite-dimensional case. |
Computing the class-preserving automorphism group of finite $p$-groups | Your current test is
if not (Representative(cc[i])^a in cc[i]) then
which eliminates elements as not lying in the subgroup you want. We could phrase this alternatively (this is in fact what the in test does) as
if RepresentativeAction(G,Representative(cc[i])^a,Representative(cc[i]))=fail then
that is, we are testing whether there is an element of G that conjugates the class representative in the same way as the automorphism a does.
This is now easily generalized. Add
Gd:=DerivedSubgroup(G);
at the start and change the test to
if RepresentativeAction(Gd,Representative(cc[i])^a,Representative(cc[i]))=fail then
Finally -- it is not clear from your code whether you want to factor out the inner automorphisms -- you would factor out not all inner automorphisms, but only those induced by $G'$. |
Inductive Proof With Regular Expression | Here is a possible way to solve your question. Given a word $u$, let $|u|_0$ (respectively $|u|_1$) denote the number of occurrences of $0$ (respectively $1$) in $u$. Now consider the map $f: \{0,1\}^* \to \mathbb{Z}$ defined by
$$
f(u) = |u|_0 - |u|_1
$$
Observe that $u$ has an equal number of $0$'s and $1$'s if and only if $f(u) = 0$.
Now here are the steps to conclude:
Verify that $f(01) = f(10) = 0$
Verify that $f(uv) = f(u) + f(v)$ and more generally, that $f(u_1 \cdots u_n) = f(u_1) + \dotsm + f(u_n)$
Use (2) to show that for every word $u \in \{01, 10\}^+$, $f(u) = 0$.
Conclude. |
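The steps above can be sketched directly (illustrative only; the function name mirrors the map $f$ defined above):

```python
import random

# The steps above, checked directly: f counts zeros minus ones.
def f(u):
    return u.count("0") - u.count("1")

assert f("01") == 0 and f("10") == 0          # step 1
u, v = "0110", "10"
assert f(u + v) == f(u) + f(v)                # step 2, one instance

rng = random.Random(0)
for _ in range(100):                          # step 3 on random words of {01, 10}^+
    w = "".join(rng.choice(["01", "10"]) for _ in range(rng.randint(1, 20)))
    assert f(w) == 0
```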
How many non-isomorphic abelian groups of order $\kappa$ are there for $\kappa$ infinite? | To your edit: this may depend on the value of the continuum. Let us fix an infinite cardinal $\kappa$, and let $\mathbb P$ be the set of primes in $\mathbb N$.
For $p$ prime, denote the group $A_p=\bigoplus\limits_{i<\kappa}\mathbb Z/p\mathbb Z$.
Now for every $I\subseteq\mathbb P$ denote the group $Z_I=\bigoplus\limits_{p\in I}A_p$. Its cardinality is $\kappa$. If $I\neq J$ then without loss of generality there is some prime $p\in I\setminus J$; therefore $Z_I$ has an element of order $p$, while $Z_J$ has none of order $p$.
This shows that there are at least continuum many non-isomorphic groups. And I haven't begun meddling with free abelian groups and other strange animals. Moreover, the larger $\kappa$ gets, the more "smaller" groups we can use. I'd conjecture that the number either ends up $2^{\kappa}$, or its representation is independent of ZFC (it has no representation as a function of $\kappa$ in terms of $\aleph$ cardinals and the continuum function). |
Pseudo-pythagorean theorem | If $\cos\frac {2m\pi}n$ is rational with $\gcd(m,n)=1$, then the primitive $n$th root of unity $\zeta=\cos \frac{2m\pi}{n}+i\sin\frac{2m\pi}n$ is a root of the rational polynomial $X^2-2\cos \frac{2m\pi}{n}X+1$. On the other hand, we know that $\zeta$ is a root of $X^n-1$ and any factorization of this over the rationals can be rescaled to a factorization over the integers. Since $X^2-2\cos \frac{2m\pi}{n}X+1$ is monic, this implies that $2\cos\frac{2m\pi }{n}$ is already an integer of absolute value $\le 2$.
This leads to the cases
$c^2=a^2+b^2+2ab$ for $\gamma = \pi$
$c^2=a^2+b^2+ab$ for $\gamma = \frac 23\pi$ or $\gamma = \frac 43\pi$
$c^2=a^2+b^2$ for $\gamma =\frac 12\pi$ or $\gamma = \frac 32\pi$
$c^2=a^2+b^2-ab$ for $\gamma=\frac13\pi$ or $\gamma =\frac 53\pi$
$c^2=a^2+b^2-2ab$ for $\gamma =0$
and that's all with $\gamma \in[0,2\pi)\cap \pi\mathbb Q$.
Regarding your second question: Pythagorean triples are best viewed as numbers $z=a+bi\in\mathbb Z[i]$ with norm $z\bar z$ a perfect square, which leads to a partitioning of the set of primes into those with $p\equiv -1\pmod 4$ (which can only occur as factor of $c$ if they are also factors of $a$ and $b$) and those with $p\equiv 1\pmod 4$ (which can be written as sums of squares and lead to primitive Pythagorean triangles; for example $5=2^2+1^2=(2+i)(2-i)$ gives us $\color{red}5^2=(2+i)^2(2-i)^2=(\color{red}3+\color{red}4i)(3-4i)$ and hence the most famous Pythagorean triangle) and the special prime $2$. Similarly, the numbers you are after should be viewed as elements of $\mathbb Z[\omega]$ where $\omega=-\frac12+\frac i2\sqrt 3$. It turns out that one gets a similar partitioning of the primes, this time based on the remainders modulo $3$. Those are very interesting number theoretic questions indeed. |
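The Gaussian-integer construction can be illustrated with a small sketch (the helper name is my own):

```python
# Squaring z = m + n*i with m > n > 0 produces the Pythagorean triple
# (m^2 - n^2, 2mn, m^2 + n^2), whose hypotenuse is the norm of m + n*i.
def triple_from(m, n):
    z = complex(m, n) ** 2
    a, b = int(round(z.real)), int(round(z.imag))
    c = m * m + n * n               # the norm of m + n*i
    assert a * a + b * b == c * c
    return a, b, c

assert triple_from(2, 1) == (3, 4, 5)     # from 5 = (2+i)(2-i), as above
assert triple_from(3, 2) == (5, 12, 13)
```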
Derive the dual function $g(\lambda, \nu)$ for the least-norm problem | You're almost there...
Recall the definition of convex conjugates. Recall that the convex conjugate of a norm $\|.\|$ is the indicator function of the unit ball of the dual norm $\|.\|_*$. Now,
\begin{equation}
\begin{split}
g(\nu) &:= \underset{x}{\inf }L(x,\nu) = \underset{x}{\inf }\|x\| + \nu^T Ax - \nu^T b = \nu^Tb -\underset{x}{\sup }x^T(-A^T\nu) - \|x\|\\
&= \nu^Tb -\|.\|^*(-A^T\nu) = \begin{cases}\nu^Tb, &\mbox{ if }\|A^T\nu\|_* \le 1,\\-\infty, &\mbox{ else.}\end{cases}
\end{split}
\end{equation}
Thus the dual problem is:
\begin{equation}
\text{Maximize } \nu^Tb\text{ subject to }\|A^T\nu\|_* \le 1.
\end{equation}
Example:
For example, if $\|.\|$ is the $\ell_1$-norm, then your original problem is the well-known Basis Pursuit problem, and the dual we've obtained is a linear program (you're maximizing a linear function on a polytope).
Notes: Using the Fenchel-Rockafellar duality Theorem, you can obtain the sought-for dual formulation in exactly one line! |
Sum of series with conditional convergence | There's a standard way to do these things. Note that
$${x \over 1 + x^3} = x(\sum_{n=0}^{\infty} (-x^3)^n)$$
$$=\sum_{n=0}^{\infty} (-1)^n x^{3n + 1}$$
As a result (I'll leave the rigorousness for you to ponder on),
$$\int_0^1 {x \over 1 + x^3}\,dx = \sum_{n=0}^{\infty} \int_0^1 (-1)^n x^{3n + 1}\,dx$$
$$= \sum_{n=0}^{\infty} {(-1)^{n} \over 3n + 2}$$
$$= {1 \over 2} - { 1 \over 5}+ { 1\over 8} -...$$
This gives you one third of the series, and you can do similar integrals for the other two portions of it. If you evaluate these integrals and add the results you'll get the answer.
Your work can be reduced if you recognize one of the other two thirds of the series as a multiple of the alternating series for $\ln 2$. The sum of the integrals for the remaining two thirds will then simplify a bit. |
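As a sanity check, one can compare the partial sums of $\frac12-\frac15+\frac18-\cdots$ against a numerical value of $\int_0^1 \frac{x}{1+x^3}\,dx$, here via a composite Simpson rule in plain Python:

```python
# Partial sum of 1/2 - 1/5 + 1/8 - ...  (alternating, error <= next term)
terms = 100_000
s = sum((-1) ** n / (3 * n + 2) for n in range(terms))

# Composite Simpson's rule for the integral of x / (1 + x^3) over [0, 1]
m = 10_000                                   # number of subintervals (even)
h = 1.0 / m
f = lambda x: x / (1 + x ** 3)
simpson = (h / 3) * sum(
    (1 if k in (0, m) else 4 if k % 2 else 2) * f(k * h) for k in range(m + 1)
)

assert abs(s - simpson) < 1e-5
```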
Recursive proof by induction with pseudocode | Since $bar(n)$ calls all of $bar(0)$ through $bar(n-1)$, you'll need strong induction.
OK, aside from checking the claim for $n=0$, let $k$ be some number greater than $0$, and use the inductive assumption that for all $n<k$ we have that $bar(n)$ prints out $2^n$ stars.
Well, then $bar(k)$ prints out
$$1+\sum_{n=0}^{k-1} {2^n} = 1+(2^k-1)=2^k$$
stars. |
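If it helps to see the count concretely: assuming (as the argument above suggests) that $bar(n)$ prints one star and then calls $bar(0),\dots,bar(n-1)$, a quick Python reconstruction that returns the star count matches $2^n$:

```python
def bar(n):
    # Hypothetical reconstruction of the pseudocode: one star, then bar(0..n-1).
    # Returns the number of stars that would be printed.
    return 1 + sum(bar(k) for k in range(n))

assert all(bar(n) == 2 ** n for n in range(12))
```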
Curve with unit speed and zero curvature lies on a straight line | As $\gamma$ is unit speed, the curvature of $\gamma$ is $\|\ddot \gamma\|$. Thus, $\ddot \gamma = 0$, so $\dot \gamma(s) = a \in \mathbb R^3$. Hence $\gamma(s) = as + b$. Therefore $\gamma$ lies on a straight line. |
Let $G$ be a compact group. If $\{a^n\}_{n \in \mathbb{Z}}$ is dense in $G$, then $G$ is abelian. | You need some other separation axiom like Hausdorff, otherwise there are simple counterexamples. If $G$ is Hausdorff, then $D = \{(x,x):x \in G\}$ is closed in $G \times G$. Hence $C_y = \{x \in G: xy = yx\}$ is closed, because it is the preimage of $D$ under the map $x \mapsto (xy,yx)$. Hence $C_{a^n}$ is closed, and contains the dense subset $\{a^n:n\in \mathbb Z\}$. Hence $C_{a^n} = G$. Hence for any $x \in G$, we have that $x \in C_{a^n}$, which implies that $a^n \in C_x$. Hence $C_x$ is closed and contains the dense subset $\{a^n:n\in \mathbb Z\}$. Hence $C_x = G$. |
Prove or disprove that $I= \{a\in R\mid \alpha(a)>\alpha(1_R)\}$ is an ideal. | Sorry,
this is meant to be a comment, but I cannot post a comment. I think
$\alpha$ refers to Euclidean norm. As a hint, what if you let $R$ to be
some obvious ring and there should be an easy counterexample (if I am
right). I can give more details if you wish.
EDIT-1: Are you proving for the general case or just searching for a counterexample? |
Combination Question (Red and Blue team) | It's okay to say pick a blue guy, then a red guy, then anybody who's left. The only mistake is in how you adjusted for double counting. You've only counted each team twice, not six times. Say Alex and Bob are blue and Charlie is red. In you scheme, you count Alex, Charlie, Bob and Bob, Charlie, Alex, but no other permutation.
You should have gotten $$br(b+r-2)=50$$ |
Improper Integral: $\int_{-\infty}^\infty\frac{e^{-t}}{1+e^{-2t}}\ dt$ | $$\begin{align}
\int_{-\infty}^{\infty}\frac{e^{-t}}{1+e^{-2t}}dt&=\lim_{L\to \infty}\left(\left .-\arctan (e^{-t})\right|_{-L}^{L}\right)\\\\
&=-\arctan(0)+\lim_{L\to \infty}\arctan(e^{L})\\\\
&=\frac{\pi}{2}
\end{align}$$ |
induction proof with a constant | $$\sum_{k=1}^n(k^2 - k) = \sum_{k=1}^n k^2 - \sum_{k=1}^nk$$
$$= \frac{1}{6}n(n+1)(2n+1) - \frac{1}{2}n(n+1)$$
(you can prove the formulas above using induction)
Now, you can simplify and see whether things work out or not. |
Can we have a Metric manifold without a connection/covariant derivative? | A differentiable metric induces a connection in a certain sense.This connection is called the Levi Civita connection. The Levi Civita connection defined by the metric $g$ is thw connection (unique) without torsion such that $g$ is parallel for the connectiion, i.e the covariant derivative of $g$ relatively to the connection is zero.
https://en.wikipedia.org/wiki/Levi-Civita_connection |
Verify $f(x)=\sin(x)-15$ satisfies the conditions of Rolle's Theorem | $f(x) = \sin(x) - 15$
The function $\sin(x)$ is a differentiable function and $-15$ is differentiable as a constant. So, $f(x) = \sin(x) - 15$ is differentiable in $(0,\pi)$. It is also continuous as a sum of continuous functions, in $[0,\pi]$.
After that, $f(0)=f(\pi)=-15$.
So, the function $f(x)$ satisfies the conditions of Rolle's Theorem in the interval $[0,\pi]$.
Now, $f'(x) = \cos(x)$. And $f'(x) = 0$ for $x= \frac{\pi}{2} \in (0,\pi)$.
Showing two sequences converge and have the same limit | For $k \ge 2,k! > 2^{k-1}$ so
$$\frac{1}{k!} \le \frac{1}{2^{k-1}}$$
i.e. from $k=2$ on, the terms of the first sum are bounded by the terms of the convergent geometric series $1/2 + 1/4 + 1/8 + \cdots = 1$.
Reweighting preserves positive average | Since $\omega$ is non-increasing it is measurable and for $c\geq 0$ we can approximate $\omega$ on $[0,c]$ from below by simple functions $\omega_n$ of the form $\omega_n = \sum_{i=1}^n a_i \Bbb 1_{[t_{i-1} , t_i)}$, where $0 = t_0 < \ldots < t_n = c$ and $a_i \geq a_{i+1}$.
Fix $n$ and suppose that for all $c\geq0$ and all simple functions of the form above with $n$ coefficients $a_i$ and lattice points $t_i$ holds that
$$\sum_{i=1}^n a_i \int_{t_{i-1}}^{t_i} h(y) \text d y \geq a_n \int_0^{t_n} h(y) \text dy$$
Then, for a simple function of the above form with $n+1$ coefficients $b_i$ and lattice points $0 = t_0 < \ldots < t_{n+1}$, we have
$$\sum_{i=1}^{n+1} b_i \int_{t_{i-1}}^{t_i} h(y) \text d y = \sum_{i=1}^n b_i \int_{t_{i-1}}^{t_i} h(y) \text dy +b_{n+1}\int_{t_n}^{t_{n+1}} h(y)\text dy\\
\geq b_n \underbrace{\int_0^{t_n} h(y) \text d y}_{\geq 0} + b_{n+1}\int_{t_n}^{t_{n+1}} h(y)\text dy\\
\geq b_{n+1} \int_0^{t_{n+1}} h(y) \text d y$$
Since the assumption above holds for $n=1$ by the fact that $\int_0^c h (y) \text d y \geq 0$, we have that for all $\omega_n$ holds
$$\int_0^c h(y) \omega_n (y) \text d y \geq a_n \int_0^c h(y) \text dy \geq 0$$
Note that $\vert h \omega_n \vert \leq \vert h \vert \omega \leq \vert h \vert$, and $\vert h \vert$ is integrable by assumption. Thus
$$\int_0^c h(y) \omega (y )\,\text dy = \lim_{n\to\infty} \int_0^c h(y) \omega_n (y) \text dy \geq 0$$
Theory problem in complex integration. | One problem here is that your substitution also changes the contour of integration. You have also not accounted correctly for the change of variables in $x^k\,\mathrm{d}x$.
Here is how I would do it:
$$
\begin{align}
\int_0^\infty x^ke^{-x(1-i)}\,\mathrm{d}x
&=\lim_{R\to\infty}\int_0^Rx^ke^{-x(1-i)}\,\mathrm{d}x\tag1\\
&=\left(\frac{1+i}2\right)^{k+1}\lim_{R\to\infty}\int_0^{R(1-i)}z^ke^{-z}\,\mathrm{d}z\tag2\\
&=\left(\frac{1+i}2\right)^{k+1}\lim_{R\to\infty}\int_0^Rz^ke^{-z}\,\mathrm{d}z\tag3\\
&=\left(\frac{1+i}2\right)^{k+1}\int_0^\infty z^ke^{-z}\,\mathrm{d}z\tag4\\
&=\left(\frac{1+i}2\right)^{k+1}k!\tag5
\end{align}
$$
In step $(2)$, we have used $z=x(1-i)$, which also means $x=\frac{1+i}2z$; i.e. $x^k\,\mathrm{d}x=\color{#C00}{\left(\frac{1+i}2\right)^{k+1}}z^k\,\mathrm{d}z$. Note that $\left(\frac{1+i}2\right)^{k+1}=2^{-(k+1)/2}e^{-\pi i(k+1)/4}$.
In step $(3)$, we use Cauchy's Integral Theorem and the family of contours $\gamma_R$
$$
\gamma_R=\underbrace{\ \ [0,R]\ \ }_{\substack{\text{the contour}\\\text{in step (3)}}}\cup\underbrace{[R,R-Ri]}_{\substack{\text{this integral}\\\text{vanishes as}\\\text{$R\to\infty$}}}\cup\underbrace{[R-Ri,0]}_{\substack{\text{the reverse of}\\\text{the contour in}\\\text{step (2)}}}
$$
So now we need to show that the integral along $[R,R-Ri]$ vanishes. |
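One can also confirm the value numerically before chasing the contour estimate. The sketch below (plain Python, Simpson's rule on a truncated interval, my own helper name `lhs`) checks $\int_0^\infty x^k e^{-x(1-i)}\,dx = \left(\frac{1+i}2\right)^{k+1}k!$ for small $k$:

```python
import cmath
import math

def lhs(k, R=60.0, m=20_000):
    """Composite Simpson approximation of the truncated integral
    of x^k * exp(-x(1-i)) over [0, R]; the tail beyond R is negligible (~e^{-R})."""
    h = R / m
    f = lambda x: x ** k * cmath.exp(-x * (1 - 1j))
    return (h / 3) * sum(
        (1 if j in (0, m) else 4 if j % 2 else 2) * f(j * h) for j in range(m + 1)
    )

for k in range(5):
    rhs = ((1 + 1j) / 2) ** (k + 1) * math.factorial(k)
    assert abs(lhs(k) - rhs) < 1e-6
```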
Possible values of prime gaps | In fact it is expected that every even number occurs as a prime gap infinitely often. See Polignac's conjecture. |
Differentiability of $z \rightarrow \cos(\bar{z})$ | No, it is not differentiable. If $z=x+yi$, with $x,y\in\mathbb R$, then\begin{align}\cos(\overline z)&=\cos(x-yi)\\&=\cos(x)\cos(yi)+\sin(x)\sin(yi)\\&=\cos(x)\cosh(y)+\sin(x)\sinh(y)i.\end{align}Now, use the Cauchy-Riemann equations. |
Struggling to find a counterexample to a weakening of MCT/DCT | Hints: i) If $\int f <\infty,$ try DCT.
ii) If $\int f=\infty,$ try Fatou's Lemma |
Complex exponential reduction | $$ \frac{e^A-e^{-B}}{1-e^{-C}} = e^{\frac{A-B+C}{2}}\cdot\frac{sinh(\frac{A+B}{2})}{sinh(\frac{C}{2})} $$
Then:
$$\frac{e^{jwM_1}-e^{-jw(M_2+1)}}{1-e^{-jw}} =$$
$$ e^{\frac{jw}{2}\cdot(M_1-M_2)}\cdot\frac{sinh(\frac{jw}{2}\cdot(M_1+M_2+1))}{sinh(\frac{jw}{2})}$$ |
Which is a better definition of a parabola? | Definition 1 is not sufficient to characterize all parabolas in a coordinate plane. It characterizes some--namely, those whose axes of symmetry are coincident with one of the coordinate axes, but for parabolas whose axis of symmetry is not parallel to either coordinate axis, definition 1 is not sufficient.
Consequently, Definition 2 is preferable. There is no restriction on the directrix or focus so long as the latter is not a point on the former. |
Is there a graphic visualisation of $f^{\frac{1}{2}}$? | For a limited class of functions, at least, the idea of "$f^\frac12$" does make sense. Thus, let $f(x)=|x|^a$, where $a>0$. Then you could take $f^\frac12(x)=|x|^\sqrt a$. However, there is no simple way to relate the graph of $f^\frac12$ to that of $f$. For a start, any relationship would vary with the value of $a$. |
at least half of $u_0, u_1, ..., u_{n-1}$ are superior to $2u_n$ , show that $u_n$ converges to 0. | I think that your inequality on $u_2$ is not proved, as ${\rm max}\{u_0,\frac{u_0}{2}\}=u_0=1$.
I try a solution:
For $n\geq 2$, I put $\displaystyle v_n={\rm max}\{u_k,\frac{n-1}{2}\leq k\leq n-1\}$. By the hypothesis, there exist $k$, $\displaystyle \frac{n-1}{2}\leq k\leq n-1$, such that $2u_n\leq u_k$. Hence we get $2u_n\leq v_n$.
Now
$$v_{n+1}={\rm max}\{u_k,\frac{n}{2}\leq k\leq n\}= {\rm max}\{ {\rm max}\{u_k,\frac{n}{2}\leq k\leq n-1\}, u_n\}\leq {\rm max}\{v_n, \frac{v_n}{2}\}=v_n$$
Hence $v_n$ is decreasing and positive, so $v_n\to L\geq 0$. Suppose that $L>0$. Then, there exists $N$ such that for $n\geq N$, we have $\displaystyle v_n\leq \frac{3L}{2}$. Then for $n\geq N$, we have $\displaystyle u_n\leq \frac{3L}{4}$. For $\displaystyle \frac{n-1}{2}\geq N$, this gives $\displaystyle v_n\leq \frac{3L}{4}$, and $\displaystyle L\leq \frac{3L}{4}$, a contradiction. Hence $L=0$, and of course, $u_n\to 0$. |
Are countably infinite, compact, Hausdorff spaces necessarily second countable? | In fact, thanks to Sierpinski-Mazurkiewicz theorem, one precisely knows all Hausdorff countable compact topological spaces; namely, any such space is homeomorphic to $\omega^{\alpha} \cdot n +1$ for some countable ordinal $\alpha$ and some integer $n \geq 1$. See for example my answer here. Now, it is not difficult to verify that $\omega^{\alpha} \cdot n +1$ is second-countable. |
Number of different tournaments. | The number of nonisomorphic tournaments on $n$ vertices is given by OEIS sequence A000518.
The first few terms (from $n=0$ to $n=17$) are
$$1, 1, 1, 2, 4, 12, 56, 456, 6880, 191536, 9733056, 903753248, 154108311168, 48542114686912, 28401423719122304, 31021002160355166848, 63530415842308265100288, 244912778438520759443245824.$$ |
Discounted price process in Black-Scholes model is a martingale with respect to Q. | I'll rewrite the proof in terms of the process $\tilde{S}_t$, rather than using stochastic integrals. The idea behind Girsanov's theorem is to 'eliminate' the drift term of $\tilde{S}_t$ by changing the probability measure (to $\mathbb{Q}$). We have,
$$d\tilde{S}_t = \tilde{S}_t(\mu-r)dt + \tilde{S}_t \sigma dB_t.$$
Since $\tilde{B}_t = B_t - \frac{r-\mu}{\sigma}t$, differentiating gives
$$d\tilde{B}_t = \left(\frac{\mu-r}{\sigma}\right)dt+dB_t.$$
Hence, we obtain
\begin{align}
d\tilde{S}_t & = \tilde{S}_t(\mu-r)dt + \tilde{S}_t \sigma dB_t \\
&=\tilde{S}_t(\mu-r)dt + \tilde{S}_t\sigma\left[d\tilde{B}_t - \left(\frac{\mu-r}{\sigma}\right)dt\right] \\
&= \tilde{S}_t \sigma d\tilde{B}_t.
\end{align}
We have shown that $\tilde{S}_t$ is a martingale w.r.t $\tilde{B}_t$, hence a $\mathbb{Q}$-martingale (since $\tilde{B}_t$ is a Brownian motion under $\mathbb{Q}$ by Girsanov, and a Brownian motion is a martingale).
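A quick Monte Carlo sketch of the conclusion (parameter values are my own choice): under $\mathbb{Q}$, the SDE $d\tilde S_t = \sigma \tilde S_t\,d\tilde B_t$ has the explicit driftless geometric-Brownian-motion solution, whose mean stays at $\tilde S_0$.

```python
import numpy as np

rng = np.random.default_rng(42)
s0, sigma, t, n = 1.0, 0.2, 1.0, 1_000_000

# Exact solution of d(S~) = sigma * S~ * dB~:  S~_t = S~_0 exp(sigma B~_t - sigma^2 t / 2)
z = rng.standard_normal(n)                   # B~_t has the law sqrt(t) * N(0, 1)
s_tilde = s0 * np.exp(sigma * np.sqrt(t) * z - 0.5 * sigma**2 * t)

# Martingale property at time t: E_Q[S~_t] = S~_0 (up to Monte Carlo error)
assert abs(s_tilde.mean() - s0) < 2e-3
```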
Two inequalities for the difference $H_n - H_m$ of harmonic numbers | Since $0 < \gamma < 1$, that is certainly true.
EDIT: For the new question, I don't see what your problem is.
It's certainly true that for $m < n$,
$$ H_n - H_m < \ln(n) - \ln(m) + \dfrac{1}{n} - \dfrac{1}{m} + O(1/n^2) + O(1/m^2)$$
It's also true that for $m < n$,
$$H_n - H_m = \sum_{k=m+1}^n \dfrac{1}{k} < \sum_{k=m+1}^n \int_{k-1}^k \dfrac{dt}{t} = \int_{m}^n \dfrac{dt}{t} = \ln(n) - \ln(m)$$
And this implies the weaker inequality
$$H_n - H_m < \ln(n) + 1 - \ln(m)$$
So what? |
X,Y are independent RVs with known characteristic functions. Find P(X+Y=2). | You can read off the distributions. For $X$, it is $1$ with probability $1/4$ and $2$ with probability $3/4$. Now $\Pr(X+Y=2)$ is an elementary calculation. |
Trapezoid rule error | Estimating the error with the Taylor expansion is quite messy.
For an alternative, consider the local error on a subinterval $[x_n, x_{n+1}]$ with $h = x_{n+1} - x_n$.
The error in approximating the integral with the trapezoidal formula is
$$E_n = \frac{h}{2}[f(x_n) + f(x_{n+1})] - \int_{x_n}^{x_{n+1}} f(x) \, dx.$$
Integration by parts shows this to be
$$E_n = \int_{x_n}^{x_{n+1}}(x-c)f'(x) \, dx,$$
where $c = (x_{n+1}+x_n)/2$ is the midpoint.
To see this, note that
$$x_{n+1} - c = c - x_n = \frac{x_{n+1} - x_n}{2} = \frac{h}{2},$$
and with $u = (x-c)$ and $dv = f'(x)dx$, integration by parts yields
$$\int_{x_n}^{x_{n+1}}(x-c)f'(x) \, dx = (x-c)f(x)|_{x_n}^{x_{n+1}} - \int_{x_n}^{x_{n+1}} f(x) \, dx = E_n.$$
If the derivative is bounded as $|f'(x)| \leqslant M$, we can bound the error as
$$|E_n| \leqslant \int_{x_n}^{x_{n+1}}|x-c||f'(x)| \, dx \\ \leqslant M\int_{x_n}^{x_{n+1}}|x-c| \, dx \\ = \frac{M}{4}(x_{n+1} - x_n)^2 \\ = \frac{M}{4}h^2.$$
Approximating the integral over $[a,b]$ with $m$ subintervals of length $h = (b-a)/m$, has a global error bound of
$$|GE| \leqslant \sum_{n=1}^{m} |E_n| = m\frac{M}{4}h^2 = \frac{M(b-a)^2}{4m}$$
If the second derivative is bounded as $|f''(x)| \leqslant M$, then we can demonstrate $O(h^3)$ local accuracy. An integration by parts of the previous integral for $E_n$ yields
$$E_n = \frac1{2} \int_{x_n}^{x_{n+1}} [(h/2)^2 - (x-c)^2]f''(x) \, dx.$$
Using the bound for $f''$ and integrating we obtain the local and global error bounds
$$|E_n| \leqslant \frac{M}{12}h^3, \\ |GE| \leqslant \frac{M(b-a)^3}{12m^2}.$$ |
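For instance, for $f(x)=\sin x$ on $[0,\pi]$ (so $M=1$ bounds the second derivative and the exact integral is $2$), the composite trapezoidal error indeed sits below $\frac{M(b-a)^3}{12m^2}$; a minimal Python check:

```python
import math

a, b, m, M = 0.0, math.pi, 100, 1.0          # f = sin, |f''| <= M = 1
h = (b - a) / m
# Composite trapezoid: h * (f_0/2 + f_1 + ... + f_{m-1} + f_m/2)
trap = h * (sum(math.sin(a + k * h) for k in range(m + 1))
            - 0.5 * (math.sin(a) + math.sin(b)))

err = abs(trap - 2.0)                        # exact integral of sin on [0, pi] is 2
bound = M * (b - a) ** 3 / (12 * m ** 2)
assert 0 < err <= bound
```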
Finite groups in group theory | Hint: Define a relation $g \sim h$ if $g = h$ or $g = h^{-1}$. Show that this is an equivalence relation, and that an element is of order $1$ or $2$ iff it's alone in its equivalence class. Then count. |
Need help in proving a result rellated to small o notation in number theory | Expanding the second in a power series in $a$ centered at $\infty$ (or equivalently, replacing $a$ with $1/b$ and expanding in a power series in $b$ centered at $0$),
$$ 1 + \ln 2 + \frac{2r+1}{a+1}\ln(r+1) = 1 + \ln 2 + \frac{(2r+1)\ln(r+1)}{a} + \frac{-(2r+1)\ln(r+1)}{a^2} + \cdots $$
(This is literally an application of the geometric series formula
$$ \frac{c}{1-r} = c + cr + cr^2 + \cdots, \quad |r|<1 \text{,} $$
applied to $\frac{c}{a+1} = \frac{c/a}{1-(-1/a)}$, i.e. with $r = -1/a$; there is no constant term because $\frac{c}{a+1} \rightarrow 0$ as $a \rightarrow \infty$.)
Then every term on the right is $O(1/a)$, so the non-constant part is bounded on any interval $[N,\infty)$ and tends to $0$ as $a\to\infty$. Therefore, the right hand side is $1 + \ln 2 + o(1)$.
How to express the basis $\beta = \{1, (x-1), (x-1)^2, (x-1)^3\}$ in matrix form | You expand the expressions and check the coefficients and put them in the right place in the matrix.
First column:
$$(1)\cdot 1$$
Second column:
$$1\cdot x + (-1)\cdot 1$$
Third column:
$$(x-1)^2 = 1\cdot x^2 + (-2)\cdot x + 1\cdot 1$$
Fourth column:
$$(x-1)^3 = 1\cdot x^3 + (-3)\cdot x^2 + 3\cdot x + (-1) \cdot 1$$
$$\begin{bmatrix}1&-1&1&-1\\0&1&-2&3\\0&0&1&-3\\0&0&0&1\end{bmatrix}$$ |
Suppose that $\chi$ is a non-negative real number for all $g \in G$. Prove that $\chi$ is reducible | I presume you are working over $\mathbb C$.
$\chi(1)$ must be positive, so if $\chi(g)$ is real and nonnegative for all $g$, then $d=\frac{1}{|G|}\sum \chi(g)$ is a positive real. Now if a representation $V$ affords $\chi$, then $d$ is exactly the $\mathbb C$-dimension of $V^G$. But $V^G$ is a $G$-submodule of $V$ on which $G$ acts trivially. Hence $V$ has a positive-dimensional submodule on which $G$ acts trivially. The conclusion is that $V$ is reducible or trivial.
If you permit yourself to use orthogonality, the argument is easier: $\langle \chi, \chi_1\rangle$ is positive, so $\chi$ is reducible or $\chi=\chi_1$ is trivial. |
Calculating joint probability correctly | [1] Yes, (2) is correct.
[2] (3) is incorrect in general. In the special case where $A,B,C$ are independent it is correct.
[3] Point out to your colleague the possibility $A=B=C$ (or, if you dislike equalities in this matter, a case with a high level of dependence). In that case (2) gives $P(A)$ as the solution (which is correct) and (3) gives $P(A)^3$, which is incorrect if $P(A)\notin\{0,1\}$.
Subcomplexes of relative I-cell complexes | At least this is true if we slightly change the definition: In 2. we require not only $\overline h^\beta: C_i\rightarrow \overline X_\beta$ is a factorization of the map $h^\beta_i:C_i\rightarrow X_\beta$ through the map $\overline X_\beta\rightarrow X_\beta$, but also the obvious map $D_i\rightarrow \overline X_{\beta+1}$ is a factorization of the map $\overline X_{\beta+1}\rightarrow X_{\beta+1}$. Then for each $\beta$, the map $\overline X_\beta\rightarrow X_\beta$ is induced by the pushout and is unique. Now just observe that $\overline X_\beta\rightarrow X_\beta$ is the pushout of the coproduct of maps in $T^{\beta}- \overline T^{\beta}$.
I am not sure if this remains true when the definition is not changed.
Apologies for using notation not introduced here.
$\int_{0}^{1} \int_{0}^{1} \sqrt{x^2+y^2} dxdy$ | \begin{align}
\int_0^1 \int_0^1 \sqrt{x^2+y^2 } \, dy \, dx &= 2 \int_0^{\frac{\pi}4} \int_0^{\sec \theta} r^2 \, dr d \theta \\
&= \frac23 \int_0^{\frac{\pi}4} \sec^3 \theta \,d\theta \\
&= \frac23 \int_0^{\frac{\pi}4} \sec\theta\,(\sec^2 \theta) \,d\theta
\end{align}
Try to complete the task. |
$\cos(\theta) = \cos(-\theta)$ which means that the cosine function is (blank)? | The cosine function is even and the sine function is odd.
If you were just looking for those words, then that's your answer. If not, you need to clarify the question further. |
Definitions of well ordered set, maximal element and upper bound | You appear to believe that $\le$ has some specific meaning - perhaps you're thinking that $a \le b$ if and only if $b - a$ is nonnegative, for example.
On the contrary, $\le$ is used as a symbol to indicate any relation that is intended to be interpreted as some kind of ordering. The meaning of $\le$ will be defined before it is used. It need not be a relation on $\mathbb{N}$ or $\mathbb{R}$; I could define the relation-class $\le$ on the class of all sets by $X \le Y$ if and only if $X$ injects into $Y$, for example. Or I could define $\le$ on the genuine set $\{\text{the ordinals less than $\alpha$}\}$ by $\beta \le \gamma$ iff $\beta$ is isomorphic to an initial segment of $\gamma$. |
Proving that if $ab=e$ then $ba=e$ | I assume that you mean if for SOME $a,b$ (not every) $ab = e$ then $ba = e$.
Your proof is valid, but you could write it much more simply without playing with $k$. $$ab = e \Rightarrow bab = b.$$ If you already know the cancellation law, then we are done. Otherwise you may continue by writing $baba = ba$, so that $(ba)^2 =ba$. Just note that the only idempotent in a group is $e$.
if $f(x)$ is summable square function, then... | It isn't true, even if we restrict the function to be continuous. For example, take the function that forms the top part of a triangle of width $1/n^2$ at each integer and goes up to its peak, $1$, at the integer. Then it is easy to see the integral converges to a value less than $\pi^2/3$ but $$\lim_{x\to\pm\infty}f(x)=\text{DNE}.$$ |
Given $f : \Bbb{R} \to \Bbb{R}$ is a continuous function. To prove that $\int_{0}^{1}f(x)x^2 dx = \frac{f(c)}{3}$ for some $c \in [0,1]$ | \begin{align*}
\dfrac{1}{3}m=m\int_{0}^{1}x^{2}dx\leq\int_{0}^{1}f(x)x^{2}dx\leq M\int_{0}^{1}x^{2}dx=\dfrac{1}{3}M,
\end{align*}
where $m=\min_{x\in[0,1]}f(x)$ and $M=\max_{x\in[0,1]}f(x)$, so by Intermediate Value Theorem we have some $c\in[0,1]$ such that
\begin{align*}
f(c)=3\int_{0}^{1}f(x)x^{2}dx.
\end{align*} |
A linear form can not be too small on integer points | I'm afraid the converse is true. For any $\zeta_1, \ldots, \zeta_k \in \mathbb{R}$ and any integer $n > 0$, there exist integers $n_0, \ldots, n_k$ with absolute values at most $n$, not all zero, such that $|n_0 + n_1\zeta_1 + \ldots + n_k\zeta_k| \leq n^{-k}$ (this is plainly the pigeonhole principle applied to the set of fractional parts $\{m_1\zeta_1 + \ldots + m_k\zeta_k\}$ for all positive integers $m_1, \ldots, m_k$ with values at most $n$). In your case, $k = 5$. |
$f(z)=\frac{(iz+2)}{(4z+i)}$ maps the real axis in the $\mathbb{C}$-plane into a circle | Let $f(z)=w=x+iy$ and show that $(x,y)$ is on a circle, provided $z \in \mathbb{R}$. First, solve for $z$ so we can get a look at its real and imaginary parts: $$w=\frac{(iz+2)}{(4z+i)} \\
(4z+i)w= iz+2 \\
z(4w-i)=2-iw \\
z=\frac{2-iw}{4w-i} = \frac{2-iw}{4w-i}\frac{4\overline{w}+i}{4\overline{w}+i}=\frac{8\overline{w}+2i-4i|w|^2+w}{|4w-i|^2}$$
Since $z$ is a real number, the imaginary part of the numerator of that last expression must be zero. So
$$-8y+2-4(x^2+y^2)+y=0 \\
x^2 + y^2 +\frac{7}{4}y=\frac{1}{2} \\
x^2 + y^2 +\frac{7}{4}y + \left(\frac{7}{8}\right)^2=\frac{1}{2}+ \left(\frac{7}{8}\right)^2 = \frac{81}{64} \\
x^2 +\left(y+ \frac{7}{8}\right)^2=\left(\frac{9}{8}\right)^2 $$
This shows that the point $f(z)=(x,y)$ does lie on a circle: the circle with center $(0,-7/8)$ and radius $\frac{9}{8}$.
Verified by plotting some points.
To find the point $z_c$ that $f$ maps to the center of the circle $w_c$, we can use our simplest expression for $z$ in terms of $w$ from above.
$$z_c=\frac{2-iw_c}{4w_c-i}=\frac{2-i\left( \frac{-7}{8}i \right)}{4\left( \frac{-7}{8}i \right)-i}=\frac{2-\frac{7}{8}}{-\frac{28}{8}i-i}=\frac{i}{4}$$ |
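The whole picture is easy to confirm numerically with Python's complex arithmetic:

```python
def f(z):
    return (1j * z + 2) / (4 * z + 1j)

center, radius = -7j / 8, 9 / 8

# Real-axis points land on the circle |w - center| = radius ...
for t in (-10.0, -1.0, 0.0, 0.5, 3.0, 100.0):
    assert abs(abs(f(t) - center) - radius) < 1e-9

# ... and z_c = i/4 is sent to the center.
assert abs(f(0.25j) - center) < 1e-12
```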
Recursive definition of Wallis product. | It seems like:
$$
\frac \pi 2 = \frac 43 \frac{16}{15} \frac{36}{35}\frac{64}{63} ... = \frac{2^2}{2^2-1} \frac{4^2}{4^2-1}\frac{6^2}{6^2 - 1} ...
$$
The key point here is to note that the $n$-th factor is $a_n = \frac{4n^2}{4n^2 - 1}$, with $a_1 = \frac 43$.
So, all we need to do, is:
$$
\frac{a_n}{a_{n-1}} = \frac{\frac{4n^2}{(4n^2 - 1)}}{\frac{4(n-1)^2}{4(n-1)^2 - 1}} = \frac{4n^2 (4(n-1)^2 - 1)}{4(n-1)^2 (4n^2 - 1)} = \frac{n^2(2n-3)}{1 + n^2 (2n-3)}
$$
(The last formula came after simplification)
Hence, we can write $a_1 = \frac 43$ and $a_n = a_{n-1} \frac{n^2(2n-3)}{1+n^2(2n-3)}$.
To check: If $n=2$, then $a_2 = \frac 43 \frac{4(1)}{1+4} = \frac{16}{15}$.
$n=3 \implies a_3 = \frac{16}{15} \frac{9(3)}{1+9(3)} = \frac{432}{420} = \frac{36}{35}$.
$n=4 \implies a_4 = \frac{36}{35} \frac{16(5)}{1+16(5)} = \frac{2880}{2835} = \frac{64}{63}$.
You can prove that this works by induction. I leave you to do that. |
Differentiating milk from water | The seller needs to turn every litre of pure milk into $\frac{150}{80}$ litres of diluted milk.
A litre bought for $100$ RS would give back $\frac{150}{80}\cdot80=150$ RS, hence $50\%$ profit.
Ratio of $7:8$ means that you take $1$ litre of pure milk, and add $\frac78$ litres of water.
The result is $1+\frac78=\frac{15}{8}$ litres of diluted milk. |
Verifier and Certificate for coNP SUBSET-SUM | For such a set to be a certificate for the complement of SUBSET SUM the verifier not only has to verify that each subset in the certificate sums to a non-zero value. The verifier also has to verify that every subset in the certificate is derivable from the original set and verify that no additional subsets beyond those in the certificate can be derived. If your verifier does all that, then you have a verifier and a certificate. But since this is equivalent to brute-force solving the problem twice (once to generate the certificate, once to verify the certificate), it's kind of pointless. Brute-forcing the problem once is infeasible enough. |
If 9 people play against each other twice for one point each, what is the minimum number of points you would need to be guaranteed in the top 6? | A score of $10\;$or more guarantees placement in the top $6$ (allowing ties for $6$-th place).
To see this, suppose player $A$ has a score of at least $10.\;$If player $A$ was not in the top $6$, there would have to be $6$ other players, each having a score of $11$ or more.$\;$But then the total of the scores would be at least $(6{\,\times\,}11) + 10 = 76$, which is impossible, since the number of games played is $72$.
However, if player $A$ has a score of $9$, it's possible for $6$ other players to have scores of $10$, in which case, $A$ will not place in the top $6$.
To see this, label the players as $1,2,3,4,5,6,A,B,C$.
We will show a tournament for which the scores are:
$10\;$for players $1,..,6$.$\\[5pt]$
$9\;$for player $A$.$\\[5pt]$
$3\;$for player $B$.$\\[5pt]$
$0\;$for player $C$.$\\[5pt]$
The above scores can be realized as follows . . .
Player $C$ loses all games played, so player $C$ gets a score of$\;0$, and each of the other players gets $2$ points from player $C$.$\\[8pt]$
Let players $1,...,6$ tie against each other, tie with player $A$, and beat players $B$ and $C$ on both tries, so they each get scores of $5 + 1 + 2 + 2 = 10$.$\\[8pt]$
Let player $A$ tie against all other players except player $C$.$\;$Then the score for player $A$ is $7 + 2 = 9$.$\\[8pt]$
It follows that player $B\;$has a score of $0 + 1 + 2 = 3$.$\\[8pt]$ |
Find the radius of the third circle given three circles | I assume the following interpretations:
"Circles touching internally" $\implies$ "Circles intersect" (that is, they overlap),
"Circle touching the line" $\implies$ "Circle is tangent to the line".
Hints:
If you haven't yet, draw the circles. The third one will also need to overlap with the previous two for it to touch the line $AB$
The radius of a circle is always perpendicular to tangent lines
Join the centers of all circles with straight lines and, together with the radius of the third circle, you'll see why it's a trigonometry question.
Edit: And yes, I'd ask for more clarification for those expressions. Maths usually has quite rigorous definitions to avoid those kind of misinterpretations. |
Simple geometry/analysis question | Let $f(x)=\tan x-x$. Then $f(0)=0$ and $f'(x)=\sec^2 x-1$. Note that the derivative is positive for $0\lt x\lt \frac{\pi}{2}$, since $\sec^2 x$ is defined and $\gt 1$ in this interval.
So $f(x)$ is increasing in the interval $[0,\frac{\pi}{2})$. It follows that $\tan x\gt x$ in the interval $(0,\frac{\pi}{2})$.
Remark: For a "calculus-free" (geometric) proof, the pictures you drew are sufficient. |
Suppose that a ≠ 0R is not a zerodivisor in a ring R. Show that ab = ac, with b and c in R, implies that b = c. | Hint: $ab=ac$ is equivalent to $a(b-c)=0$. Since $a$ is not a zero divisor, $a(b-c)=0$ implies... |
Error in uniform convergence question? | If we are more precise, we may see where the error is. I will use $|y-x|$ to mean $d(y,x)$ since I think it makes it clearer than working with $d$ and $\rho$. It won't change any of the important details.
For each $n$, $f_n$ is continuous at $x$. Let $\epsilon>0$. Then, for each $n\in\Bbb N$, there is a $\delta = \delta(n)$, which depends on $n$, such that $|f_n(y)-f_n(x)|<\epsilon/2$ whenever $|y-x|<\delta(n)$.
Since $x_m\to x$, there is an $N_1 = N_1(n)\in\Bbb N$ (note that $N_1$ also depends on $n$ here!) such that $|x_m - x| < \delta(n)$ for each $m\ge N_1$.
Now you claim that for each $k,m\ge N_1(n)$, whenever $|x_m-x|<\delta(n)$, you have $|f_k(x_m)-f_k(x)| < \epsilon/2$.
This is the first point where you seriously erred. The reason this is an incorrect deduction is that for different $n$, you don't know that $N_1 = N_1(n)$ is large enough to guarantee that $|f_k(x_m)-f_k(x)|<\epsilon/2$. Only for the specific $n$ for which $\delta = \delta(n)$ is this true.
Edit: RRL summed it up nicely in the comment. You are assuming equicontinuity of the sequence $\{f_n\}$ when you are neglecting the $n$ that the $\delta = \delta(n)$ depends on. |
Please help me with the logic behind this probabilty question. Three numbers are chosen at random without replacement from (1, 2, 3 ..., 10). | There are ${7 \choose 2}=21$ sets with $3$ the minimum and ${6 \choose 2}=15$ sets with $7$ the maximum out of ${10 \choose 3}=120$ total. We have counted three sets $(347,357,367)$ twice, so the total probability is $$\frac {21+15-3}{120}=\frac {11}{40}$$ |
check the convergence of this sequence $S_{n}=\frac{1}{\ln(2n+1)}\sum_{i=0}^{n}\frac{((2i)!)^2}{2^{4i} (i!)^4}.$ | Since $b_n=\ln(2n+1)$ is a strictly increasing sequence that diverges to $\infty$, you can use Stolz-Cesaro theorem:
$$\begin{align}
\lim_{n\to\infty}S_n&=\lim_{n\to\infty}\frac{\sum_{i=0}^{n+1}\frac{((2i)!)^2}{(2^i i!)^4}-\sum_{i=0}^{n}\frac{((2i)!)^2}{(2^i i!)^4}}{\ln(2n+3)-\ln(2n+1)}\\
&=\lim_{n\to\infty}\frac{((2n+2)!)^2}{(2^{n+1}(n+1)!)^4}\frac{1}{\ln\frac{2n+3}{2n+1}} \end{align}$$
Then use Stirling's approximation and $\lim_{t\to 1}\frac{\ln t}{t-1}=1$:
$$
\lim_{n\to\infty}S_n=\lim_{n\to\infty}\frac{2\pi(2n+2)\left(\frac{2n+2}{e}\right)^{4n+4}}{2^{4n+4}4\pi^2(n+1)^2\left(\frac{n+1}{e}\right)^{4n+4}}\frac{1}{\frac{2n+3}{2n+1}-1}
$$
Notice that
$$\frac{\left(\frac{2n+2}{e}\right)^{4n+4}}{2^{4n+4}\left(\frac{n+1}{e}\right)^{4n+4}}=1 $$
So
$$
\lim_{n\to\infty}S_n=\lim_{n\to\infty}\frac{(2n+1)(2n+2)}{4\pi(n+1)^2}=\frac{1}{\pi}
$$ |
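The convergence is only logarithmic (the denominator is $\ln(2n+1)$), but a direct computation in Python, using the ratio $c_i/c_{i-1} = \left(\frac{2i-1}{2i}\right)^2$ for the summand, does drift toward $1/\pi \approx 0.318$:

```python
import math

n = 1_000_000
c = s = 1.0                                  # c_0 = 1
for i in range(1, n + 1):
    c *= ((2 * i - 1) / (2 * i)) ** 2        # c_i = ((2i)!)^2 / (2^{4i} (i!)^4)
    s += c
S_n = s / math.log(2 * n + 1)

# Logarithmically slow convergence, so only a loose check is possible here
assert abs(S_n - 1 / math.pi) < 0.1
```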
Pattern for all the binary chains divisible by 5 | We can use the happy face of five-divisibility!
Start in state $0$. Follow the appropriate arrows as you read digits from your binary number from left-to-right. If you end up in state $0$ again your number is divisible by $5$ (and if not, the state number gives you the remainder).
How does it work? Well if we're in state $k$ it means the digits we have read so far form the number $n$ with remainder $k \equiv n \mod 5$. If we then read another digit $b$, we effectively move to the new number $n' = 2n + b$. Thus we need to move to state $(2k + b) \bmod 5$, which is exactly what we do in the above graph. Thus if we end up in state $0$ in the end we know there is no remainder, and the number that we read is divisible by 5.
The state diagram above is just this logic graphically displayed. You could have it as a table instead as well:
\begin{array}{ccc}
k & b & 2k + b & (2k + b) \bmod 5\\
\hline
0 & 0 & 0 & 0 \\
0 & 1 & 1 & 1 \\
1 & 0 & 2 & 2 \\
1 & 1 & 3 & 3 \\
2 & 0 & 4 & 4 \\
2 & 1 & 5 & 0 \\
3 & 0 & 6 & 1 \\
3 & 1 & 7 & 2 \\
4 & 0 & 8 & 3 \\
4 & 1 & 9 & 4 \\
\hline
\end{array}
This also makes for a nice mental rule. You start with the number $0$ in your head and look at the digits from left-to-right. For each digit you multiply the number in your head by 2 and add the digit you just read. If the number goes to five or above you subtract five. If you end up with $0$ the number is divisible by 5.
As an example for the binary number $1111101000_2 = 1000$, you go:
0 is our starting value
1111101000
^ 2*0 + 1 = 1
1111101000
^ 2*1 + 1 = 3
1111101000
^ 2*3 + 1 = 7, is >= 5 so we subtract 5
7 - 5 = 2
1111101000
^ 2*2 + 1 = 5, is >= 5 so we subtract 5
5 - 5 = 0
1111101000
^ 2*0 + 1 = 1
1111101000
^ 2*1 + 0 = 2
1111101000
^ 2*2 + 1 = 5, is >= 5 so we subtract 5
5 - 5 = 0
1111101000
^ 2*0 + 0 = 0
1111101000
^ 2*0 + 0 = 0
1111101000
^ 2*0 + 0 = 0
Our remainder is $0$, thus we are divisible by five! |
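The same mental rule is a three-line function; checking it against ordinary arithmetic for the first few hundred integers:

```python
def remainder_mod5(bits: str) -> int:
    k = 0
    for b in bits:                 # state k, digit b  ->  (2k + b) mod 5
        k = (2 * k + int(b)) % 5
    return k

for n in range(500):
    assert remainder_mod5(format(n, "b")) == n % 5

assert remainder_mod5("1111101000") == 0     # 1000 is divisible by 5
```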
$\lim_{n \to \infty} \frac{1}{n}\left((m+1)(m+2) \cdots (m+n)\right)^{\frac{1}{n}}$ | $$A=\frac{1}{n}\big((m+1)(m+2) \cdots (m+n)\big)^{\frac{1}{n}}=\frac{1}{n}\left(\frac{\Gamma (m+n+1)}{\Gamma (m+1)}\right)^{\frac{1}{n}}$$ Take logarithms
$$\log(nA)=\frac{1}{n}\big(\log (\Gamma (m+n+1))- \log (\Gamma (m+1)) \big)$$ Now, using the very first term of Stirling approximation for large $n$ you should get
$$\log(nA)=-1+\log(n)+O\left(\frac{\log n}{n}\right)$$ Continuing with Taylor
$$nA=e^{\log(nA)}=\frac n e +\cdots$$
Edit
If you want to know the impact of $m$ on the result, you need to add the next term in Stirling's approximation and, using the same process, you should get
$$nA=\frac n e +\frac 1 e \left(\left(m+\frac{1}{2}\right) \log (n)-\log (\Gamma (m+1))+\frac{1}{2} \log (2 \pi ) \right)+O\left(\frac{\log ^2 n}{n}\right) $$
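A quick numerical sanity check (a sketch using only the standard library's `lgamma`) confirms that $nA=\big(\Gamma(m+n+1)/\Gamma(m+1)\big)^{1/n}$ behaves like $n/e$:

```python
import math

def nA(m, n):
    # (Gamma(m+n+1)/Gamma(m+1))^(1/n), computed in log-space to avoid overflow
    return math.exp((math.lgamma(m + n + 1) - math.lgamma(m + 1)) / n)

for n in (10**3, 10**6):
    print(n, nA(3, n) / n)  # ratio should approach 1/e ~ 0.367879
```

Note the ratio approaches $1/e$ only slowly, at rate $O(\log n / n)$, consistent with the expansion above.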
Compute $5!25! \mod 31$ | I think your answer is ok, but I would rewrite it as follows
$$
5! = 5 \cdot 4 \cdot 3 \cdot 2 \cdot 1
\equiv
(-26) \cdot (-27) \cdot (-28) \cdot (-29) \cdot (-30)
\pmod{31}.
$$
Thus
$$
5! \cdot 25! \equiv (-1)^{5} \, 30! \equiv (-1) \cdot (-1) = 1 \pmod{31},
$$
using as you did Wilson's Theorem. |
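A direct computation in Python (standard library only) confirms the result:

```python
from math import factorial

# 5! * 25! mod 31 should be 1, per the Wilson's theorem argument above
print((factorial(5) * factorial(25)) % 31)  # -> 1
```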
Finding a complex orthonormal basis | If the inner product in question is Hermitian, then you can easily build such a basis. Observe that
$$1+w+w^2=0$$
and
$$w^3=1$$
Using this, it is easy to prove that the basis
$$
\begin{pmatrix}1 \\ 1 \\ 1\end{pmatrix}
\begin{pmatrix}1 \\ w \\ w^2\end{pmatrix}
\begin{pmatrix}1 \\ w^2 \\ w\end{pmatrix}
$$
is orthogonal. Then you can simply normalize them (divide by $\sqrt{3}$, as suggested by KittyL) to get an orthonormal one.
In fact, these vectors form the columns of the 3-dimensional unitary discrete Fourier transform. |
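Using only `cmath`, one can verify numerically that these columns are orthonormal under the Hermitian inner product $\langle u,v\rangle=\sum_i \overline{u_i}\,v_i$ after dividing by $\sqrt{3}$ (a small sketch; the helper name is mine):

```python
import cmath, math

w = cmath.exp(2j * math.pi / 3)            # primitive cube root of unity
cols = [[1, 1, 1], [1, w, w**2], [1, w**2, w]]
cols = [[x / math.sqrt(3) for x in c] for c in cols]  # normalize

def herm(u, v):
    """Hermitian inner product: conjugate the first argument."""
    return sum(x.conjugate() * y for x, y in zip(u, v))

for i in range(3):
    for j in range(3):
        print(i, j, abs(herm(cols[i], cols[j])))  # ~1 if i == j else ~0
```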
Is the set $\{ (x,\sin(\frac{1}{x}) ) | x \in (0,1)\}$ a manifold with and without the origin | If $A$ is a manifold then any neighborhood of $(0,0)$ is homeomorphic of open subset of $\mathbb{R}^n$. But open subset of $\mathbb{R}^n$ are path connected while every neighborhood of $(0,0)$ in $A$ are not path-connected. Hence $A$ can not be manifold if $(0,0)$ is included because path-connectedness property is preserved by homeomorphism. |
Distributing n balls into k boxes so that every box has an even number of balls | Of course $n$ must be even. Then distribute $\frac{n}{2}$ balls over $k$ boxes (no conditions) and double the amounts in all boxes. This gives all such even distibutions and so the problem is equivalent to the $\frac{n}{2}$ over $k$ boxes problem for even $n$. For $n$ odd there are no solutions. |
Geometry: Construct a rectangle with area equal to a given triangle and with one side equal to a given segment. | Take $F$ on the $BC$-line such that $BF=DE$. Take $G$ on the $AB$-line such that $CG\parallel AF$ and let $M$ be the midpoint of $CG$. In the following figure, the depicted rectangle and the triangle $ABC$ have the same area:
This happens because the area of the rectangle equals the area of $BGF$, and since $\frac{BG}{BA}=\frac{BC}{BF}$,
$$[BGF]=[ABC].$$ |
Compute $\int_{\gamma}\frac{\log(1+z)}{ (z-\frac{1}{2})^3}dz$ | log(1+z) is analytic if we remove the ray (-inf,-1] and the given curve is in the resulting region. Thus from Cauchy formula f''(a)=2/2.pi.i.integral round C of f(z)/(z-a)^3 we get that the integral is (log(1+z))''.pi.i at z=1/2. It is i.pi.1/(9/4) |
Why the limit of a series of increasing/decreasing sets in a Monotonous class is usually described as their union or intersection? | They are exactly the same thing for monotone increasing series of sets.
For any sequence of sets $\{A_n\}_{n=0}^{\infty}$, the lim sup and lim inf can be defined by
$$\liminf_n A_n = \{ a \mid \exists N : \forall n > N, a \in A_n\}$$
and
$$\limsup_n A_n = \{ a \mid \forall N, \exists n > N : a \in A_n\}$$
That is, the lim inf is the set of all objects that are eventually in every $A_n$, while the lim sup is the set of all objects that are in infinitely many $A_n$. The sequence converges if the lim inf and lim sup are the same. That is, every object that is in infinitely many of the $A_n$ is eventually in all of them. And of course the limit is the common value.
Now suppose that $\{A_n\}$ is increasing: $A_0 \subset A_1 \subset A_2 \subset ...$. If $a \in A_N$ for some $N$, then $a \in A_n$ for all $n \ge N$. Therefore $a \in \liminf A_n$ and $a \in \limsup A_n$. Hence
$$\liminf A_n = \limsup A_n = \bigcup_{n=0}^\infty A_n$$
A similar argument shows that if $\{A_n\}$ is decreasing, $A_0 \supset A_1 \supset A_2 \supset ...$, then $$\liminf A_n = \limsup A_n = \bigcap_{n=0}^\infty A_n$$ |
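For sequences whose tail behavior stabilizes, "eventually" can be read off from any sufficiently late tail, which makes the definitions easy to experiment with in code (a finite-truncation sketch; the function names are mine):

```python
from functools import reduce

def tail_liminf(sets, N):
    """Intersection of the tail A_N, A_{N+1}, ...: elements in every late set."""
    return reduce(set.intersection, sets[N:])

def tail_limsup(sets, N):
    """Union of the tail: elements that still appear at or after index N."""
    return set().union(*sets[N:])

# alternating example: liminf and limsup differ
A = [{1, 2} if n % 2 == 0 else {2, 3} for n in range(12)]
print(tail_liminf(A, 2))   # {2}: only 2 is eventually in every set
print(tail_limsup(A, 2))   # {1, 2, 3}: 1 and 3 keep recurring

# increasing example: the limit is the union, as argued above
inc = [set(range(n + 1)) for n in range(12)]
print(set.union(*inc) == set(range(12)))  # -> True
```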
Express this matrix as the product of elementary matrices | To do this sort of problem, consider the steps you would be taking for row eleimination to get to the identity matrix. Each of these steps involves left multiplication by an elementary matrix, and those elementary matrices are easy to invert. Thus:
$$
\left( \begin{array}{ccc} 1&0&0 \\ 0&1&0 \\ -2&0&1 \end{array} \right)
\left( \begin{array}{ccc} 1&0&1 \\ 0&2&0 \\ 2&2&4 \end{array} \right) = \left( \begin{array}{ccc} 1&0&1 \\ 0&2&0 \\ 0&2&2 \end{array} \right) \\
\left( \begin{array}{ccc} 1&0&0 \\ 0&1/2&0 \\ 0&0&1 \end{array} \right)
\left( \begin{array}{ccc} 1&0&1 \\ 0&2&0 \\ 0&2&2 \end{array} \right) = \left( \begin{array}{ccc} 1&0&1 \\ 0&1&0 \\ 0&2&2 \end{array} \right) \\
\left( \begin{array}{ccc} 1&0&0 \\ 0&1&0 \\ 0&-2&1 \end{array} \right)
\left( \begin{array}{ccc} 1&0&1 \\ 0&1&0 \\ 0&2&2 \end{array} \right) = \left( \begin{array}{ccc} 1&0&1 \\ 0&1&0 \\ 0&0&2 \end{array} \right) \\
\left( \begin{array}{ccc} 1&0&0 \\ 0&1&0 \\ 0&0&1/2 \end{array} \right)
\left( \begin{array}{ccc} 1&0&1 \\ 0&1&0 \\ 0&0&2 \end{array} \right) = \left( \begin{array}{ccc} 1&0&1 \\ 0&1&0 \\ 0&0&1 \end{array} \right) \\
\left( \begin{array}{ccc} 1&0&-1 \\ 0&1&0 \\ 0&0&1 \end{array} \right)
\left( \begin{array}{ccc} 1&0&1 \\ 0&1&0 \\ 0&0&1 \end{array} \right) = \left( \begin{array}{ccc} 1&0&0 \\ 0&1&0 \\ 0&0&1 \end{array} \right) \\
$$
So
$$
\left( \begin{array}{ccc} 1&0&1 \\ 0&2&0 \\ 2&2&4 \end{array} \right) =
\left( \begin{array}{ccc} 1&0&0 \\ 0&1&0 \\ 2&0&1 \end{array} \right)
\left( \begin{array}{ccc} 1&0&0 \\ 0&2&0 \\ 0&0&1 \end{array} \right)
\left( \begin{array}{ccc} 1&0&0 \\ 0&1&0 \\ 0&2&1 \end{array} \right)
\left( \begin{array}{ccc} 1&0&0 \\ 0&1&0 \\ 0&0&2 \end{array} \right)
\left( \begin{array}{ccc} 1&0&1 \\ 0&1&0 \\ 0&0&1 \end{array} \right)
$$ |
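The factorization can be verified mechanically. The sketch below multiplies the five inverse elementary matrices in order (pure Python; the helper name is mine) and checks the product against the original matrix:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

E_invs = [
    [[1, 0, 0], [0, 1, 0], [2, 0, 1]],   # undo R3 -= 2*R1
    [[1, 0, 0], [0, 2, 0], [0, 0, 1]],   # undo R2 *= 1/2
    [[1, 0, 0], [0, 1, 0], [0, 2, 1]],   # undo R3 -= 2*R2
    [[1, 0, 0], [0, 1, 0], [0, 0, 2]],   # undo R3 *= 1/2
    [[1, 0, 1], [0, 1, 0], [0, 0, 1]],   # undo R1 -= R3
]

product = E_invs[0]
for E in E_invs[1:]:
    product = matmul(product, E)

print(product)  # -> [[1, 0, 1], [0, 2, 0], [2, 2, 4]]
```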
How can I find the price of house and the garden? | Try writing two equations in two variables, and then solving them together. Perhaps let $h$ stand for the cost of the house, and $g$ for the cost of the garden. Since they were bought together for \$1000, this gives us
$$g + h = 1000$$
Likewise, the second piece of information given is that the house costs $5$ times as much as the garden, so $h = 5g$.
Now what happens when you substitute the second equation into the first? Can you use that to find one of the variables, and then the second? |
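If you want to check your hand computation afterwards, the substitution can be carried out with exact rational arithmetic (a small sketch):

```python
from fractions import Fraction

# substitute h = 5g into g + h = 1000, giving 6g = 1000
g = Fraction(1000, 6)
h = 5 * g
print(g, h)  # 500/3 and 2500/3, i.e. about $166.67 and $833.33
```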