title | upvoted_answer |
---|---|
A conditional probability of picking a colored ball from box | To answer the first question (the others are handled in a similar way):
Let $P_n$ be the probability of drawing $WWWW$ if there are exactly $n$ white balls in the urn. It is an easy matter to compute that $$P_n=\left( \frac {n}{10}\right)^4$$
Now, what is the probability that we started with exactly $n$ white balls? From the binomial distribution we see that $$P(\#White = n)=\binom {10}n\frac 1{2^{10}}$$
It follows that the total probability of observing $WWWW$ is $$\sum_{n=0}^{10}\left( \frac {n}{10}\right)^4\times \binom {10}n\frac 1{2^{10}}=0.10175$$
As the contribution of the "all white" scenario to that sum is $\frac 1{2^{10}}$ we see that the desired answer is $$\frac 1{\sum_{n=0}^{10}\left( \frac {n}{10}\right)^4\times \binom {10}n}=\boxed{0.009597666}$$
Remark: as our prior, we believed that the probability we were in the all-white scenario was $\frac 1{2^{10}}=0.000976563$ so our estimate of that probability has gone up by, roughly, a factor of $10$. |
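Both figures in the answer are easy to reproduce exactly with standard-library Python (exact rational arithmetic via `fractions`):

```python
from fractions import Fraction
from math import comb

# Total probability of observing WWWW: sum over n of (n/10)^4 * C(10,n) / 2^10
total = sum(Fraction(n, 10) ** 4 * comb(10, n) for n in range(11)) / 2 ** 10

# Posterior probability of the all-white urn: its term (1/2^10) divided by the total
posterior = Fraction(1, 2 ** 10) / total

print(float(total))      # 0.10175
print(float(posterior))  # ≈ 0.009597666
```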
When a system of rational linear equations has complex solutions, does it have rational solutions? | Let $\operatorname{rank}(A)=r$; this does not depend on the base field ($\mathbb{Q}$ or $\mathbb{C}$). Let (for instance) $c_1,\dots,c_r$ (the first $r$ columns of $A$) be a basis of $\operatorname{Im}(A)$. If $Ax=b$ admits a complex solution, then $b\in \operatorname{Im}(A)$ and $b$ is a linear combination of $c_1,\dots,c_r$. The vectors $c_1,\dots, c_r,b$ are in $\mathbb{Q}^n$; thus, by the Cramer formulas, $b=\sum_{i=1}^rs_ic_i$ where the $(s_i)_{1\leq i\leq r}$ are rational numbers. Finally, a rational solution is $[s_1,\dots,s_r,0,\dots,0]^T$. |
Show that if $N$ is a rng, and $x \in N$ with $x \neq 0 $, then there is a largest ideal of $N$ that does not contain $x$. | The poset you are considering is the set of all ideals not containing $x$ ordered by inclusion. It contains $\{0\}$ but not $N$.
If you have already checked that the union of a chain of elements of this poset lies in the poset, then Zorn's lemma kicks in and tells you there are maximal elements not containing $x$.
This does not a priori mean that such an ideal is a maximal ideal: it just means that any strictly larger ideals must pick up $x$ (but we don't know yet if they have to be $N$.)
There is no reason to expect such an ideal to be unique either. Let $F$ be a field and $N$ be the rng $\oplus _\mathbb N F$ of finitely nonzero tuples. Let $x$ be nonzero on $5$ places and zero elsewhere. Every ideal of the form $M_j=\{y\in N\mid y_j=0\}$ is a maximal ideal of $N$, and $x$ is not in five of these.
So it is not really correct to say "there is a largest" like that (meaning a maximum member), but it is right to say there are maximal elements like that. |
When is $\sum_{k=1}^{n} \sin(a k + f(k))$ bounded for $a \in \mathbb{R}$, $f(k) = o(k)$? | Not a very good bound
$$\sum \sin(ak+f(k))=\sum \sin(ak)\cos(f(k))+\sum \cos(ak)\sin(f(k))$$
If $f(k)=C+o(1)$, then
$$\cos f(k)=\cos C\cos o(1)-\sin C\sin o(1)=\cos C+o(1)$$
$$\sin f(k)=\sin C\cos o(1)+\cos C\sin o(1)=\sin C+o(1)$$
Since the partial sums $\sum^n\sin ak$ and $\sum^n\cos ak$ are bounded (which holds provided $a\not\equiv 0\pmod{2\pi}$), the original sum is bounded as well. |
CC for finite sets and equivalent condition | Not quite a complete answer to your question, but too long for a comment: http://projecteuclid.org/download/pdf_1/euclid.pjm/1102983625 shows that e.g. the axiom of choice for all families of sets with exactly two elements does not prove the axiom of choice for all families of sets with exactly 3 elements (see page 235). This strongly suggests that the answer is "no" (the issue, of course, being that you ask if choice for all families of 2-element sets implies countable choice for finite sets, so it doesn't quite apply).
It's worth noting, though, that there are nontrivial implications here. For example, the axiom of choice for families of sets with either 2 or 3 elements does imply the full (not just countable) axiom of choice for families of sets with exactly 4 elements: given a family of disjoint sets $\{X_i: i\in I\}$ with $\vert X_i\vert=4$ for all $i\in I$, let $S_i$ be the set of all partitions of $X_i$ into two two-element sets. Note that $\vert S_i\vert=3$. Now we apply 3-choice to get a set $\{p_i: i\in I\}$, where each $p_i$ is a partition of $X_i$ into two two-element sets. Applying 2-choice to this family of partitions yields a family of sets $\{A_i: i\in I\}$ where $A_i$ is a two-element subset of $X_i$. Apply 2-choice once more, and we're done. |
Harmonic Conjugates -- proving if $v$ is a harmonic conjugate of $u$, then $-u$ is harmonic conjugate of $v$ | We have that $f=u+iv$ is holomorphic and have to show that $g:=v-iu$ is holomorphic.
But this is easy, since $g=-if$. |
For an integral domain $R$, the rings $R\times R $ and $R\times R\times R$ are not isomorphic | Your argument is not valid: the quotients $(R\times R)/(0\times R)$ and $(R\times R\times R)/(0\times 0\times R)$ not being isomorphic does not rule out the existence of an isomorphism $\varphi\colon R\times R\to R\times R\times R$. It just rules out that such an isomorphism can satisfy $\varphi(0\times R)=0\times 0\times R$.
Let me clarify that for two isomorphic rings $R$ and $R'$ with ideals $I\subset R$ and $I'\subset R'$ that are also isomorphic as rings, you can not conclude that $R/I$ and $R'/I'$ are isomorphic rings:
Take any non-zero ring $S$ and let $R=\bigoplus_{i=1}^\infty S$ with component-wise addition and multiplication. Let $R'=R$, $I=R$ and
$$
I' = \left\{ \, (0,s_1,s_2,\dots)\in R \,\middle|\, s_i\in S\,\right\}.
$$
Note that the shift map
$$
(s_1,s_2,\dots) \mapsto (0,s_1,s_2,\dots)
$$
is a ring isomorphism from $I$ to $I'$.
However, $R/I=0$ while $R/I'\cong S$.
Also note that the shift map is no longer surjective when extended to a map $R\to R$. |
Limit $\lim_{x\to\infty} (1+\frac{1}{x})^x$ | $$L = \lim_{x\to\infty} \left(1+\frac 1x\right)^x$$
$$\log L = \lim_{x\to\infty} x\log\left(1 + \frac 1x\right) $$$$= \lim_{x\to\infty}\frac{\log\left(1 + \frac 1x\right)}{\frac 1x} = \lim_{x\to\infty} \frac{\frac{1}{1 + \frac 1x}\cdot \left(-\frac 1{x^2}\right)}{-\frac 1{x^2}} =\lim_{x\to\infty} \frac 1{1 + \frac 1x} = 1$$
So $$\log L = 1 \implies L = e$$ |
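A quick numerical sanity check of both the limit and the intermediate step $\log L = 1$:

```python
import math

x = 1e6
print((1 + 1/x) ** x)         # close to e = 2.71828...
print(x * math.log(1 + 1/x))  # close to 1, matching log L = 1
```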
Diffeomorphism between Euclidean spaces | Restrict the map to an open ball $B(x,r)$ within $U$, to get a local diffeomorphism. Then the Jacobian $Jf|_{B(x,r)}$ must be invertible. But the Jacobian is (represented by) an $n \times m$ matrix, and a matrix can only be invertible if it is square, so $m = n$. |
Need someone to check if I am reading nested quantifiers correctly | The sentence
$$\exists x:(S(x)\implies P(x))$$
reads in English as
There exists a person such that, if he is in our class, then he works at the mall.
This statement is true for me, because I am not a student in your class, which means that the statement "If 5xum is a student in DOe's class, then he works at the mall" is true. (remember, if $A$ is not true, then $A\implies B$ is always true).
So no, your first solution isn't what you want. I suggest you try again.
Your second attempt is also wrong. Let's say Joe is the only faculty member, and he ate steak, but never ate salad. Then the statement
Some faculty member at Blue College has not eaten some menu items at the Blue College cafeteria.
is clearly true, since Joe never ate salad.
However:
The statement $F(Joe)\implies E(Joe, steak)$ is true,
Therefore $\exists y:(F(x)\implies E(x,y))$ is true (because it is true for $y=steak$),
Therefore the statement $\neg \exists y:(F(x)\implies E(x,y))$ is false (because it is the negation of a true statement).
So, the entire statement must also be false (because it's clearly false if $F(x)$ is not true, and $F(x)$ is true only if $x=Joe$).
Your third answer is correct. |
Show that $End_A(A)$ = {$r_a$ | $a ∈ A$} | If $f\in\operatorname{End}_A(A)$ then by definition $f(ax)=af(x)$ for all $a\in A$ (and $f$ is also additive). Now let $b=f(1)$, then for all $a\in A$ one has $f(a)=f(a1)=af(1)=ab=r_b(a)$, so $f=r_b$, identically. |
$f(x,y)$ is continuous in $x$, Lipschitz in $y$. Is it bounded? | The answer is yes if $f$ is uniformly Lipschitz in $y$ (i.e. the Lipschitz constant $L$ does not depend on $x$).
Since $f$ is continuous in $x$, there is $K$ such that $|f(x,y_0)| \le K$ for all $x$. Then
$$
|f(x,y)| \le |f(x,y) - f(x,y_0)| + |f(x,y_0)| \le K + Lb \, .
$$ |
Terminology in random walk on a graph | I'm not sure there's a special word for the experiment, but:
The probability of eventually reaching a target state (or set of states) from the initial state is called a hitting probability.
The random variable that gives the number of steps is called the hitting time. However, by convention, we say that the hitting time is $\infty$ rather than $0$ if the target state is never reached.
I don't know a specific word for the case where the hitting probability is $1$, but here is some related terminology:
When it's possible to get from $i$ to $j$, we say that $j$ is accessible from $i$.
When $i$ is accessible from $j$ and vice versa, we say that $i$ and $j$ communicate. This is an equivalence relation, and its equivalence classes are called communicating classes.
When any two states communicate, the Markov chain is irreducible.
When it's not possible to leave a state, we call that state absorbing. Similarly, a communicating class is called closed if it's not possible to leave it.
If we're looking for the hitting probability, it's common to transform the Markov chain as follows:
Make state 1 (the state we're trying to get to) an absorbing state.
Make all states from which 1 is not accessible absorbing states. (We can even combine them into one absorbing state.)
Then we can find out which absorbing state we get to first. (A Markov chain in which every state can reach an absorbing state is called an absorbing Markov chain.) |
Construct random variables with given correlation | Take i.i.d. random variables $U,V$ with standard normal distribution and set $X=U, Y=aU+bV$. Can you find $a,b$ such that $\rho_{X,Y} =\frac 1 3$? |
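Working out the hint: $\operatorname{Cov}(X,Y)=a$ and $\operatorname{Var}(Y)=a^2+b^2$, so $\rho = a/\sqrt{a^2+b^2}$. A minimal sketch solving for $b$ with $a=1$:

```python
import math

rho = 1 / 3
a = 1.0
# rho = a / sqrt(a^2 + b^2)  =>  b = a * sqrt(1/rho^2 - 1)
b = a * math.sqrt(1 / rho ** 2 - 1)    # = 2*sqrt(2) when rho = 1/3
print(a / math.sqrt(a ** 2 + b ** 2))  # 0.3333...
```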
Show that if $(b,c)=1$, then $(a,bc)=(a,b)(a,c)$. | The first proof can be done more efficiently as below. More generally see this answer.
$(a,b)(a,c) = (aa,ab,ac,bc) = (a\color{#c00}{(a,b,c)},bc) = (a,bc),\ $ by $\ (b,c)=1\,\Rightarrow\,\color{#c00}{(a,b,c)= 1}$
where we used GCD polynomial arithmetic, i.e. associative, commutative, distributive laws.
To prove it by the hint, reduce to the case $\,(\bar a,\bar b)=1\,$ by cancelling $\,\color{#0a0}{d = (a,b)}\,$ from both sides to get
$$ (\bar a,\bar bc) = (a,c),\ \ \ \bar a = a/d,\, \bar b = b/d$$
$d\mid \bar a,\bar bc\,\Rightarrow\, d\mid \bar a\mid a\ $ & $\ d\mid \bar ac,\bar bc\,\Rightarrow\,d\mid (\bar ac,\bar bc)=(\bar a,\bar b)c = c\,$ so $\,d\mid a,c$.
$d\mid a,c\,\Rightarrow\, d\mid c\mid \bar bc\ $ & $\ d\mid\!\underbrace{\!\!\bar ad,\!\!\!}_{\large a}\,\bar ac\,\Rightarrow\, d\mid (\bar ad,\bar ac)=\bar a(\color{#0a0}d,c)=\bar a(\color{#0a0}{a,b},c)=\bar a$ so $\,d\mid \bar a,\bar bc$ |
Finding $\lim_{n\to\infty} \frac{1}{\sqrt n} ( 1 - e^{ i\varphi} + e^{2i\varphi} - \cdots + (-1)^n e^{ni\varphi}\bigr)$ | Hint:
$$1-e^{ia}+e^{i2a}+...+(-1)^ne^{ina}=\frac{1+(-1)^{n}e^{i(n+1)a}}{1+e^{ia}}$$ |
How to prove that $\lim_{n\to\infty}\frac{n^k}{3^{\sqrt{n}}}=0$ | the $\sqrt n$ throws a monkey wrench in taking derivatives of the denominator. The chain rule will lead to products and the derivatives get longer and more complex. But we replace $m = \sqrt n$ then
$\lim\limits_{n\to \infty} \frac {n^k}{3^{\sqrt n}} = \lim\limits_{\sqrt n\to \infty} \frac {m^{2k}}{3^m}=\lim\limits_{m\to \infty} \frac {m^{2k}}{3^m}$.
That stops the denominator's derivative from "spreading". $(3^m)' = (\ln 3) 3^m$ and so the $2k$ derivative of $3^m$ is $(\ln 3)^{2k} 3^m$.
And the $2k$th derivative of $m^{2k}$ is $(2k)!$, a constant. (Clear? Yes? The first derivative is $2k m^{2k-1}$, the second is $2k(2k-1)m^{2k-2}$, and induction....?)
So
$\lim\limits_{n\to \infty} \frac {n^k}{3^{\sqrt n}} = \lim\limits_{m\to \infty} \frac {m^{2k}}{3^m}=\lim\limits_{m\to \infty} \frac {(2k)!}{(\ln 3)^{2k} 3^m} = 0$
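Numerically, working with logarithms avoids overflow and makes the decay visible; a quick sketch (with $k=3$):

```python
import math

def log_ratio(n, k):
    # log(n^k / 3^sqrt(n)) = k*log(n) - sqrt(n)*log(3)
    return k * math.log(n) - math.sqrt(n) * math.log(3)

for n in (10 ** 2, 10 ** 4, 10 ** 6):
    print(n, log_ratio(n, 3))   # heads rapidly toward -infinity
```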
......
I suppose it is mildly interesting that if $\lim \frac {f(x)}{g(x)}$ is in indeterminate form, l'Hôpital's rule not only states $\lim \frac {f(x)}{g(x)} = \lim\frac {\frac {df(x)}{dx}}{\frac {dg(x)}{dx}}$ but also $\lim \frac {f(x)}{g(x)} = \lim\frac {\frac {df(x)}{dy}}{\frac {dg(x)}{dy}}$ for any $y$ dependent on $x$..... which is really obvious and unsurprising if we care to think about it... which I'm not sure I ever had..... |
Operator norm of a normal operator. | Really you should view the hypothesis of this theorem as "$T$ is unitarily diagonalizable", not "$T$ is self-adjoint", because this is the property that enters the proof. Once you unitarily diagonalize it is straightforward, because $\langle T\sum_{i=1}^n c_i v_i,\sum_{i=1}^n c_i v_i \rangle = \sum_{i=1}^n \lambda_i |c_i|^2$, which is a convex combination of the eigenvalues if $\sum_{i=1}^n c_i v_i$ was a unit vector. You need the unitarity of the eigenvector matrix to remove the cross-terms, whose indefinite sign would spoil this convex structure.
It is a standard theorem that an operator on a finite-dimensional vector space over $\mathbb{C}$ is unitarily diagonalizable if and only if it is normal. The proof is slightly harder than in the self-adjoint case, but not that hard. |
Mobius band is differentiable manifold | The idea is to use separating neighbourhoods for representatives of classes $[x] \neq [y]$ in $M$. We can split up the proof in three case: a) $x,y$ both lie in the interior of $X$; b) One of $x$,$y$ lies in the interior of $X$ and the other lies on the boundary; c) Both $x$ and $y$ lie on the boundary of $X$.
a) In this case, pick separating neighbourhoods $U,V$ for $x,y$ respectively that are small enough that they don't touch the boundary. Then since the equivalence relation leaves the interior of $X$ untouched, $\pi(U) \cap \pi(V) = \pi(U \cap V) = \emptyset$.
b) Without loss of generality, we may assume that $x \in \partial X$ and $y \not\in \partial X$. We may find an $\epsilon > 0$ small enough so that $y$ is not contained in the fattened boundary $U = [0,\epsilon]\times \mathbb R \cup [1-\epsilon,1] \times \mathbb R$. Since the complement of $U$ is open in $X$, we may then find a neighborhood $V$ of $y$ that also doesn't intersect $U$. The equivalence relation will leave $V$ intact and perturb the part of $U$ that lies on the boundary, and overall we will have $$\begin{align}\pi(U) \cap \pi(V) &= (\pi(\partial X) \cup \pi(U \setminus \partial X)) \cap \pi(V) \\&= (\pi(\partial X) \cap \pi(V)) \cup (\pi(U \setminus \partial X) \cap \pi(V))\\ &= \emptyset \cup \pi((U \setminus \partial X) \cap V)\\&= \emptyset.\end{align} $$
c) If $x,y \in \partial X$, we have to make sure that the separating neighbourhoods in $X$ are small enough so that they don't get squished together in $M$. For example, a tall and skinny open neighbourhood of $(0,1)$ that contains $(0,-1)$ will, down in $M$, intersect a tall and skinny open neighbourhood of $(1,1)$ that contains $(1,-1)$, even though they don't intersect in $X$. The idea, then, is to make sure that the neighbourhoods are not too tall. To this end, choose the representatives $x,y$ so that their first coordinate is $0$, i.e., they both lie on the left boundary of $X$. Then any $U,V$ that separate $x,y$ and don't intersect the right boundary of $X$ will do the job, because the parts of $U,V$ that don't intersect the left boundary will be unchanged by the projection, and the parts that intersect the left boundary won't get mixed up by the projection (as $\pi$ is injective when restricted to $X$ minus the right boundary). I'll leave it to you to write this down as formally as you want to. |
Question about statement of Fubini's theorem | Your interpretation in 1 and 2 is correct. Regarding the point
I really dislike talking about the integral of a non-measurable function, even though the function equals a measurable one almost everywhere.
I share your dislike. This is why I think that spaces $X$ and $Y$ should themselves be assumed complete. On a complete measure space, a function that is a.e. equal to a measurable function is itself measurable.
It's true that $\nu(E_x)$ is only defined almost everywhere: for some values of $x$, the set $E_x$ may be nonmeasurable. So, at some point (preferably, before Fubini's theorem) it should be said that on a complete measure space $(X,\mu)$, we can interpret $\int_X f\,d\mu$ even if $f$ is only defined on some set $A\subset X$ such that $\mu(X\setminus A)=0$. Namely, we interpret $\int_X f\,d\mu$ as $\int_X f\chi_A\,d\mu$, setting the function to be $0$ on the complement of $A$. Or any other number, does not matter.
In a complete measure space, negligible sets can be safely neglected. |
If $a^4 + 4^b$ is prime, then $a$ is odd and $b$ is even. | It's clear that $a$ must be odd, as otherwise the number would be divisible by $2$.
Suppose $b$ is odd. Then we can write our number as $a^4 + 4\cdot 4^{2n} = a^4 + 4 \cdot 2^{4n}$ for some $n$. You were given the key factorization,
$$ (a^4 + 4 b^4) = (a^2 + 2ab + 2b^2)(a^2 - 2ab + 2b^2).$$
here, that means
$$ a^4 + 4 \cdot 2^{4n} = (a^2 + 2 a 2^n + 2\cdot 2^{2n})(a^2 - 2 a 2^n + 2\cdot 2^{2n}),$$
which is a nontrivial factorization (the smaller factor equals $(a-2^n)^2 + 4^n$, which exceeds $1$ except in the degenerate case $a=1$, $n=0$). Thus $b$ cannot be odd.
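The factorization identity is easy to confirm by brute force:

```python
# Sophie Germain's identity: a^4 + 4 b^4 = (a^2 + 2ab + 2b^2)(a^2 - 2ab + 2b^2)
for a in range(1, 40):
    for b in range(1, 40):
        lhs = a ** 4 + 4 * b ** 4
        rhs = (a * a + 2 * a * b + 2 * b * b) * (a * a - 2 * a * b + 2 * b * b)
        assert lhs == rhs
# The only case where the smaller factor is trivial (= 1) is a = b = 1:
assert (1 - 2 + 2) == 1 and 1 ** 4 + 4 * 1 ** 4 == 5
print("identity verified")
```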
Aside:
This factorization identity is called Sophie Germain's identity, after the French mathematician and general polymath who noticed it while exploring number theory. She corresponded with Gauss and Legendre, and attended lectures at the newly founded École Polytechnique. To avoid ridicule, as it was extremely uncommon for women to pursue math (or indeed, to be allowed to pursue a great many professions), she used a male pen name for her initial correspondence. |
A proof about period length | First of all: the statement is not true for $k=0$ and $k=1$. But for $k>1$ there is a proof which however needs some advanced number theoretical tools.
The period length of $1/N$ for some positive integer $N$ coprime to $10$ is equal to $$ord_{10}(N)$$ which is equal to the smallest positive integer $u$ such that $$10^u\equiv 1\mod N$$ holds.
A Fermat number (let us call it $P$) larger than $5$ is coprime to $10$. What we have to show is that we have
$ord_{10}(P)=P-1$ if and only if $P$ is prime.
First suppose that $P$ is prime. Then, we have $$10^{P-1}\equiv 1\mod P$$ because of Fermat's little theorem and because of Euler's criterion we have
$$10^{(P-1)/2}\equiv \left(\frac{10}{P}\right)\mod P$$
where () denotes the Legendre-symbol.
Since $k>1$, $P$ is of the form $8m+1$, hence $2$ is a quadratic residue mod $P$.
$P$ must also be of the form $5m+2$, hence $5$ is not a quadratic residue mod $P$.
Hence $10$ is not a quadratic residue mod $P$, so $10^{(P-1)/2}\equiv -1\mod P$. Since $P-1$ is a power of $2$, the order of $10$ divides $P-1$ but not $(P-1)/2$, and we get $ord_{10}(P)=P-1$.
If we have $ord_{10}(P)=P-1$, we can immediately conclude that $P$ is prime (this is not only true for Fermat numbers): the order divides $\varphi(P)$, so $\varphi(P)=P-1$, which forces $P$ to be prime. This finishes the desired proof. |
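For the Fermat primes $F_2=17$ and $F_3=257$ (both with $k>1$), the period $ord_{10}(P)=P-1$ can be confirmed directly:

```python
def order_of_10(p):
    """Smallest u > 0 with 10^u = 1 (mod p); p must be coprime to 10."""
    u, x = 1, 10 % p
    while x != 1:
        x = (x * 10) % p
        u += 1
    return u

# Fermat primes F_2 = 17 and F_3 = 257: period length is P - 1
print(order_of_10(17), order_of_10(257))   # 16 256
```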
Prove $\arctan(x)$ can be estimated by $\frac{\pi}{2}-\frac{1}{x}$ when $x$ is large | Using the hint $\frac{\pi}{2}- \arctan(x) = \arctan\left( \frac{1}{x} \right)$ combined with $\arctan(t) \approx t$ when $t$ is close to $0$:
$$
\arctan(x)
= \frac{\pi}{2} - \arctan\left( \frac{1}{x} \right)
\approx \frac{\pi}{2} - \frac{1}{x}
.
$$
when $x$ is big. |
How to compute the Krull dimension of $ \mathbb{C} [X_1 , \dots , X_n ] / ( f_1 , \dots , f_m ) $? | To expand a little on Mariano's comment: the trick is to compute Gröbner bases, and this is best done on a computer.
Basically, a Gröbner basis is a nice generating set for your ideal. The really good thing about Gröbner bases is that they tell you which term of each generator is "largest", and that by forgetting all other terms you get a monomial ideal (generated by the largest terms of the generators) with many of the same properties as your original ideal.
Specifically, it has the same dimension (and Hilbert polynomial) as your original ideal - and it is much easier to compute the dimension of a monomial ideal than to compute the dimension of an arbitrary ideal. This is how computer algebra systems such as Macaulay2 compute dimension.
For example, in user26857's answer, the leading terms generate the ideal $(y^2,x^2y,x^3)$ (under some ordering), which has radical $(y,x)$, which is of dimension $1$! |
An undergraduate level example where the set of commutators is proper in the derived subgroup. | The publication On Commutator Groups, by Kappe and Morse, is great to read. Among many results, it contains the following theorem: If $G$ is a finite group with $|G : Z(G)|^2 < |G'|$, then there are elements in $G'$ that are not commutators. With the help of this one can construct a large family of groups of nilpotency class 2 with the property that the set of commutators is not equal to the commutator subgroup. |
Convergence of Lebesgue Integral | First assume that $f$ is continuous and has compact support.
Now pass to a sequence. Let $t_n \to 0$. Then we want to prove that
$$\lim_{n \to \infty} \int |g(x)[f(x) - f(x + t_n)]| \, \textrm{d}x = 0.$$
In this case the result follows from the Lebesgue Dominated Convergence Theorem (LDCT) by using the continuity of $f$. Now an approximation argument is in order; this is very similar to my answer in this post (actually you can mimic the complete argument since $g$ is bounded anyway). |
Can you find a plain aneloid? | $$\begin{array}{c|cccc}
\diamond & 0 & a & b & 1\\
\hline
0 & 0 & 1 & a & b \\
a & b & a & 1 & 0\\
b & 1 & 0 & b & a \\
1 & a & b & 0 & 1\\
\end{array}$$
$\diamond$ is not commutative: $a\diamond b =1\neq 0 = b\diamond a$
$\diamond$ is not associative: $(a\diamond a)\diamond b= a \diamond b = 1\neq 0 = a\diamond 1 = a\diamond(a\diamond b)$
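Both failures are mechanical to verify; a small sketch encoding the Cayley table above (rows indexed first):

```python
# The Cayley table for the operation "diamond" above
T = {
    '0': {'0': '0', 'a': '1', 'b': 'a', '1': 'b'},
    'a': {'0': 'b', 'a': 'a', 'b': '1', '1': '0'},
    'b': {'0': '1', 'a': '0', 'b': 'b', '1': 'a'},
    '1': {'0': 'a', 'a': 'b', 'b': '0', '1': '1'},
}

def d(x, y):
    return T[x][y]

assert d('a', 'b') == '1' and d('b', 'a') == '0'   # not commutative
assert d(d('a', 'a'), 'b') != d('a', d('a', 'b'))  # not associative
print("both counterexamples confirmed")
```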
Take any finite field $F$ of order $p^n\geq 3$ and a generator $a\in F$ such that $a\neq0,1$. Define an operation $\diamond$ on $F$ by $x\diamond y=a\cdot x + (1-a)\cdot y$. According to Theorem 2.6 in Jezek and Kepka's paper "Atoms in the lattice of varieties of distributive groupoids," $\diamond$ distributes over itself.
Therefore if $A=\{0,1,a,b\}$, then $\mathbf{A}=(A,\diamond,\diamond)$ is a 'plain aneloid.'
Also, the word plain has already been used for algebraic structures: see here. |
Rewriting $x-2\mp\sqrt{2x-5}$ as $a^2\mp2ab+b^2$ | It's $$\sqrt{2x-4-2\sqrt{2x-5}}+\sqrt{2x+4-6\sqrt{2x-5}}=2$$ or
$$\sqrt{\left(\sqrt{2x-5}-1\right)^2}+\sqrt{\left(\sqrt{2x-5}-3\right)^2}=2$$ or
$$|\sqrt{2x-5}-1|+|\sqrt{2x-5}-3|=2.$$
But by the triangle inequality $$2=|\sqrt{2x-5}-1|+|\sqrt{2x-5}-3|\geq|\sqrt{2x-5}-1+3-\sqrt{2x-5}|=2.$$
Thus, $$1\leq\sqrt{2x-5}\leq3$$ or
$$3\leq x\leq7.$$ |
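The conclusion checks out numerically: the left-hand side is identically $2$ on $[3,7]$ and strictly larger outside:

```python
import math

def lhs(x):
    s = math.sqrt(2 * x - 5)
    return math.sqrt(2 * x - 4 - 2 * s) + math.sqrt(2 * x + 4 - 6 * s)

for x in (3.0, 4.2, 5.0, 6.7, 7.0):
    assert abs(lhs(x) - 2) < 1e-9   # identically 2 on [3, 7]
assert lhs(8.0) > 2.5               # strictly larger outside the interval
print("solution set [3, 7] confirmed")
```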
Show that $HK$ is not a subgroup of $G$. | Use the product formula: $|HK|=|H||K|/|H\cap K|$. Then $|HK|$ is a power $p^n$ of $p$ and is bigger than $|H|=p^k$. If $HK$ were a subgroup, then by Lagrange $p^n$ would divide $|G|=p^km$, so $p^{n-k}$ would divide $m$, which is not possible. QED |
Exclusion and inclusion principle in rolling a die | Let $A_i$ be the event “we got a 6 on the $i$-th roll”.
$P(\bigcup_{i = 1}^n A_i) = \sum_{k = 1}^n (-1)^{k - 1} \sum_{I ⊆ \{1,\dots,n\}, |I| = k} P(\bigcap_{i ∈ I} A_i) = \sum_{k = 1}^n (-1)^{k - 1} {n \choose k} p^k = 1 - (1 - p)^n$.
The first equality is inclusion and exclusion, the second uses $P(\bigcap_{i ∈ I} A_i) = p^{|I|}$ (with $p=1/6$), and the last equality is the binomial theorem. |
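A quick check that the inclusion-exclusion sum agrees with the complement-counting formula, e.g. for $n=4$ rolls:

```python
from math import comb

n, p = 4, 1 / 6
incl_excl = sum((-1) ** (k - 1) * comb(n, k) * p ** k for k in range(1, n + 1))
direct = 1 - (1 - p) ** n
print(incl_excl, direct)   # both close to 0.517746...
```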
Non uniformly integrable sequence | Consider $$X_n := n 1_{(0,1/n)}-n 1_{(1/n,2/n)}$$ |
Necessary condition for pairwise sufficient statistic | I'm pretty sure the author of the paper I mentioned in the comments was R. R. Bahadur. Here's his CV: http://www.stat.uchicago.edu/faculty/InMemoriam/bahadur/BahadurCV.pdf
Possibly this (?): http://projecteuclid.org/DPubS/Repository/1.0/Disseminate?view=body&id=pdf_1&handle=euclid.aoms/1177728715 |
Using Axiom of Choice To Find Decreasing Sequences in Linear Orders | For each $x\in X$, let $L_x = \{y\in X \mid y\lt x\}$. Take the family $\{L_x\}_{x\in X}$. Let $f$ be an element of $\prod_{x\in X}L_x$.
Now pick $x_1\in X$. Use the recursion theorem to define a function $g\colon\mathbb{N}\to X$ by letting $g(1) = x_1$, and letting $g(n+1) = f(g(n))\in L_{g(n)}$. The sequence you want is $x_i=g(i)$.
In fact, you don't need all of AC to do the above, it suffices to use the Axiom of Dependent Choice which is implied by, but is not equivalent to, the full Axiom of Choice. Showing that you actually need ADC for the general result (as opposed to showing that it suffices to use it) requires arguments along the lines of Asaf's answer. |
False proof using Nullstellensatz : Let $J = (S)$ an ideal of polynomials then $\sqrt{S} = \sqrt{J}.$ What's wrong? | As per request, I will write my comment as an answer. The problem here is that you have (mistakenly) taken the radical of the singleton set $\{1\}$. The singleton set $\{1\}$ is not an ideal. We would rather take radicals of ideals because then it can be shown quite easily that the radical of an ideal is also an ideal. Thus, we should take the radical of the ideal generated by $\textbf{1}$ which is denoted as $\sqrt{(1)}$.
So the correct formulation of your last line would be: $$\sqrt{(1)} = \{g \vert \exists n : g^n \in (1) = R \} = R.$$ |
C*- Algebra representation and Matrices | You are on track, although I don't really see how you got your multiplicities.
If all the blocks have size 1, you can make your $A$ as $\mathbb C^4\oplus \mathbb C I_3$. Or $\mathbb C^3\oplus \mathbb C I_2\oplus\mathbb C I_2$.
If you allow a block of size 2, that already gives you dimension 4, so there cannot be another one. That gives you $M_2(\mathbb C)\oplus \mathbb C I_5$. Other possibilities if you play with the multiplicities are $\big[M_2(\mathbb C)\otimes I_2\big]\oplus \mathbb C I_3$ and $\big[M_2(\mathbb C)\otimes I_3\big]\oplus \mathbb C I_1$. Here the tensor notation denotes the multiplicity, i.e.
$$
M_2(\mathbb C)\otimes I_2=\Big\{\begin{bmatrix} A&0\\0&A\end{bmatrix}:\ A\in M_2(\mathbb C)\Big\}.
$$
The three $M_2(\mathbb C)\oplus\mathbb C$ cases have multiplicities $\{1,5\}$, $\{2,3\}$, and $\{3,1\}$ respectively. |
Relation between stirling numbers | The Stirling numbers of the second kind satisfy a somewhat Pascal-like recurrence relation:
$${{n+1}\brace k}=k{n\brace k}+{n\brace{k-1}}\;.$$
In particular, then,
$${n\brace{n-2}}=(n-2){{n-1}\brace{n-2}}+{{n-1}\brace{n-3}}\;.\tag{1}$$
It’s easy to check that
$${{n-1}\brace{n-2}}=\binom{n-1}2\;,$$
so $(1)$ becomes
$${n\brace{n-2}}=(n-2)\binom{n-1}2+{{n-1}\brace{n-3}}\;.$$ |
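The recurrence and the closed form ${n-1\brace n-2}=\binom{n-1}2$ are straightforward to verify by computing Stirling numbers directly:

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def S(n, k):
    """Stirling number of the second kind, via the recurrence above."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * S(n - 1, k) + S(n - 1, k - 1)

# Check {n-1 brace n-2} = C(n-1, 2)
for n in range(3, 15):
    assert S(n - 1, n - 2) == comb(n - 1, 2)
print(S(5, 3))   # 25
```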
Multiplication over field extension - nonsingular linear transformation? | A field has no zero-divisors, so the kernel of this linear transformation is trivial iff that fixed element is non-zero. |
Given $n\in \mathbb{N}$ find $x\in (0,1)$ such that $p(n,x)<10^{-12}$ | Consider the function $$f(x)=\frac{x^{n-1}}{1+x}+\frac{(1-x)^{n-1}}{2-x}$$ and notice the symmetry around $x=\frac 12$.
So, let $x=y+\frac 12$ and consider the function
$$g(y)=\frac{\left(\frac{1}{2}-y\right)^{n-1}}{\frac{3}{2}-y}+\frac{\left(y+\frac{1}{2}\right)^{n-1}}{y+\frac{3}{2}}$$ where $0 \leq y \leq \frac 12$. This function is still not very well conditioned but, for given $n$, its logarithm is much better, looking like a $\large V$ shape curve.
Since I do not see any way to avoid numerical methods, we need an estimate of the variable and, instead of solving $f(x)=10^{-k}$, we shall try to find the zero of the function
$$h(y)=\log\left(\frac{\left(\frac{1}{2}-y\right)^{n-1}}{\frac{3}{2}-y}+\frac{\left(y+\frac{1}{2}\right)^{n-1}}{y+\frac{3}{2}} \right)+k\log(10)$$
Expanding $\log(g(y))$ as an infinite series built at $y=\frac 12$, we have
$$\log(g(y))=-\log (2)+\sum_{k=1}^\infty (-1)^k\frac{ \left(2^{-k}-n+1\right) }{k}\left(y-\frac{1}{2}\right)^k$$
Now, using series reversion
$$y=\frac{1}{2}+t+\frac{(4 n-5) t^2}{4 (2 n-3)}+\frac{\left(16 n^2-36 n+21\right) t^3}{24 (2 n-3)^2}+\frac{\left(64 n^3-200 n^2+228 n-93\right) t^4}{192 (2 n-3)^3}+\frac{\left(256 n^4-1056 n^3+1896 n^2-1596 n+501\right) t^5}{1920 (2 n-3)^4}+\frac{\left(1024 n^5-5504 n^4+13344 n^3-15264 n^2+7884 n-1485\right) t^6}{23040 (2 n-3)^5}+O\left(t^{7}\right)$$ where $t=\frac{2 (\log (2)-k \log (10))}{2 n-3}$
Trying $k=12$ as requested, here are some results:
$$\left(
\begin{array}{ccc}
n & \text{approximation} & \text{solution} \\
50 & 0.074283757193 & 0.074280567685 \\
60 & 0.131266692251 & 0.131265832666 \\
70 & 0.175046836738 & 0.175046551370 \\
80 & 0.209658736187 & 0.209658626086 \\
90 & 0.237674620683 & 0.237674573066 \\
100 & 0.260798780945 & 0.260798758418 \\
110 & 0.280200353757 & 0.280200342301 \\
120 & 0.296706076798 & 0.296706070614 \\
130 & 0.310916165380 & 0.310916161870 \\
140 & 0.323276469382 & 0.323276467305 \\
150 & 0.334124864087 & 0.334124862811 \\
160 & 0.343721940253 & 0.343721939444 \\
170 & 0.352271831379 & 0.352271830852 \\
180 & 0.359936672989 & 0.359936672637 \\
190 & 0.366846847576 & 0.366846847336 \\
200 & 0.373108377667 & 0.373108377500 \\
210 & 0.378808349784 & 0.378808349665 \\
220 & 0.384018953814 & 0.384018953728 \\
230 & 0.388800532501 & 0.388800532438 \\
240 & 0.393203912475 & 0.393203912428 \\
250 & 0.397272206517 & 0.397272206483
\end{array}
\right)$$
If you want a more compact form to replace the series expansion, you could use
$$y=\frac{1}{2}+t+t^2 \left(\frac {a_0+a_1 t}{1+a_2 t}\right)$$
$$a_0=\frac{4 n-5}{4 (2 n-3)}\qquad a_1=\frac{256 n^4-1248 n^3+2136 n^2-1512 n+369}{96 (2 n-3)^2 \left(16 n^2-36 n+21\right)}$$
$$a_2=\frac{-64 n^3+200 n^2-228 n+93}{8 (2 n-3) \left(16 n^2-36 n+21\right)}$$
Warning
You must take care that the solution exists only if
$$k \log (10)+\log \left(\frac{2^{3-n}}{3}\right) < 0$$ |
Sum of weighted multiplicative function | We have a double sum, which (if implemented naively) would require $O(N^2)$ integer operations, in addition to the cost of computing the $f$-values. A standard technique in such cases is to change the order of summation. The sum is over all pairs $(n,l)$ satisfying $1\le l\le n\le N$, so we can sum over $n$ first:
$$\sum_{n=1}^N \sum_{l=1}^{n} \lfloor n/l \rfloor f(l)=\sum^N_{l=1}f(l)\sum^N_{n=l}\lfloor n/l \rfloor.$$
Now $\lfloor n/l \rfloor$ is constant between consecutive multiples of $l$, so we have some blocks of $l$ equal values plus the remainder of a period where $\lfloor n/l \rfloor=\lfloor N/l \rfloor$, meaning
$$s(N,l)=\sum^N_{n=l}\lfloor n/l \rfloor=l\,\frac{\lfloor N/l \rfloor(\lfloor N/l \rfloor-1)}2+(N \bmod l+1)\,\lfloor N/l \rfloor.$$ Using $N \bmod l=N-l\,\lfloor N/l \rfloor$, we can write this as $$s(N,l)=(N+1)\,\lfloor N/l \rfloor-l\,\frac{\lfloor N/l \rfloor(\lfloor N/l \rfloor+1)}2.$$ Depending on the shape of $f$, the resulting sum $$\sum_{n=1}^N \sum_{l=1}^{n} \lfloor n/l \rfloor f(l)=\sum^N_{l=1}f(l)\,s(N,l)$$ can often be computed still more efficiently, using the fact that $\lfloor N/l \rfloor$ takes only $O(N^{1/2})$ different values. |
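The closed form for $s(N,l)$ is easy to validate against the brute-force sum:

```python
def s_closed(N, l):
    # s(N,l) = (N+1)*floor(N/l) - l*floor(N/l)*(floor(N/l)+1)/2
    q = N // l
    return (N + 1) * q - l * q * (q + 1) // 2

def s_brute(N, l):
    return sum(n // l for n in range(l, N + 1))

for N in (10, 37, 100):
    for l in range(1, N + 1):
        assert s_closed(N, l) == s_brute(N, l)
print("closed form verified")
```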
Converting from parts of a circle to polar coordinates | $\displaystyle y^2 = \frac{x}{4}-\frac{x^2}{4}$
so
$\displaystyle 4r^2 \sin^2\theta = r\cos\theta-r^2\cos^2\theta$
and, dividing by $r$ and using $\sin^2\theta = 1-\cos^2\theta$,
$\displaystyle 4r = \cos\theta+3r\cos^2\theta$
You could try solving this for $r$: $\displaystyle r=\frac{\cos\theta}{4-3\cos^2\theta}$, then let $r$ be your inner integration variable and integrate over $r$ from zero to that limit. You might also try to solve for $\cos\theta$ and let your inner integral be over $\theta$ from 0 to that limit, but that is looking kind of hairy to me. Unless $f(x,y)$ has some particularly nice form, I don't see any clean way to do this.
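As a sanity check, points generated by $r(\theta)=\cos\theta/(4-3\cos^2\theta)$ should land back on the original curve $4y^2 = x - x^2$:

```python
import math

# r(theta) = cos(theta) / (4 - 3 cos^2(theta)) must satisfy 4 y^2 = x - x^2
for theta in (0.1, 0.5, 1.0, 1.3):
    c = math.cos(theta)
    r = c / (4 - 3 * c * c)
    x, y = r * c, r * math.sin(theta)
    assert abs(4 * y * y - (x - x * x)) < 1e-12
print("polar form confirmed")
```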
EDIT: Just saw the comments. Matthew Pereira has the right idea. |
Finding an upper bound error of a Maclaurin polynomial. | The calculation is basically right. However, the range is supposed to be $|x|\le 0.1$. The "worst case" bound for the fourth derivative should be obtained by setting $c=-0.1$, and not $c=0$. |
What's the generalisation of the quotient rule for higher derivatives? | The answer is:
$\frac{d^n}{dx^n} \left (\frac{f(x)}{g(x)} \right ) = \sum_{k=0}^n {(-1)^k \tbinom{n}{k} \frac{d^{n-k}\left(f(x)\right)}{dx^{n-k}}}\frac{A_k}{g(x)^{k+1}} $
where:
$A_0=1$
$A_n=n\frac{d\left(g(x)\right)}{dx}\ A_{n-1}-g(x)\frac{d\left(A_{n-1}\right)}{dx}$
for example let $n=3$:
$\frac{d^3}{dx^3} \left (\frac{f(x)}{g(x)} \right ) =\frac{1}{g(x)} \frac{d^3\left(f(x)\right)}{dx^3}-\frac{3}{g^2(x)}\frac{d^2\left(f(x)\right)}{dx^2}\left[\frac{d\left(g(x)\right)}{d{x}}\right] + \frac{3}{g^3(x)}\frac{d\left(f(x)\right)}{d{x}}\left[2\left(\frac{d\left(g(x)\right)}{d{x}}\right)^2-g(x)\frac{d^2\left(g(x)\right)}{dx^2}\right]-\frac{f(x)}{g^4(x)}\left[6\left(\frac{d\left(g(x)\right)}{d{x}}\right)^3-6g(x)\frac{d\left(g(x)\right)}{d{x}}\frac{d^2\left(g(x)\right)}{dx^2}+g^2(x)\frac{d^3\left(g(x)\right)}{dx^3}\right]$
Relation with Faà di Bruno coefficients:
The $A_n$ also have a combinatorial form, similar to the Faà di Bruno coefficients (ref http://en.wikipedia.org/wiki/Fa%C3%A0_di_Bruno).
An explanation via an example (where for brevity
$g'=\frac{d\left(g(x)\right)}{dx}$, $g''=\frac{d^2\left(g(x)\right)}{dx^2}$, etc.):
Suppose we want to find $A_4$.
The partitions of 4 are: $1+1+1+1, 1+1+2, 1+3, 4, 2+2$.
Now for each partition we can use the following pattern:
$1+1+1+1 \leftrightarrow C_1g'g'g'g'=C_1\left(g'\right)^4$
$1+1+2+0 \leftrightarrow C_2g'g'g''g=C_2g\left(g'\right)^2g''$
$1+3+0+0 \leftrightarrow C_3g'g'''gg=C_3\left(g\right)^2g'g'''$
$4+0+0+0 \leftrightarrow C_4g''''ggg=C_4\left(g\right)^3g''''$
$2+2+0+0 \leftrightarrow C_5g''g''gg=C_5\left(g\right)^2\left(g''\right)^2$
with $C_i=(-1)^{(4-t)}\frac{4!t!}{m_1!\,m_2!\,m_3!\,\cdots 1!^{m_1}\,2!^{m_2}\,3!^{m_3}\,\cdots}$ (ref. closed form of the Faà di Bruno coefficients)
where $t$ is the number of parts different from $0$, and $m_i$ is the number of parts equal to $i$.
We have $C_1=24$ (with $m_1=4, t=4$), $C_2=-36$ (with $m_1=2, m_2=1, t=3$), $C_3=8$ (with $m_1=1, m_3=1, t=2$), $C_4=-1$ (with $m_4=1, t=1$), $C_5=6$ (with $m_2=2,t=2$).
Finally $A_4$ is the sum of the formula found for each partition, i.e.
$A_4=24\left(g'\right)^4-36g\left(g'\right)^2g''+8\left(g\right)^2g'g'''-\left(g\right)^3g''''+6\left(g\right)^2\left(g''\right)^2$ |
Can $f_n\to f$ uniformly, $f'_n\to g$ uniformly, but $f$ not being differentiable? | Actually the following stronger result holds:
If $f_n$ are differentiable functions on some open set $U$, $f_n\to f$ pointwise on $U$, and $f'_n\to g$ uniformly on $U$, then $f$ is differentiable on $U$ and $f'=g$ on $U$.
So there is no counterexample. |
How to count the $r$-tuples of subsets of $\{1,\dots,n\}$ that are cyclically disjoint? | The pattern is obvious! $A_{n,4}=7^n$.
Well, maybe not totally obvious, but here is a proof by induction.
First, $A_{1,4}=7$ is verified by listing out the possibilities: $(\emptyset,\emptyset,\emptyset,\emptyset),(1,\emptyset,\emptyset,\emptyset),(\emptyset,1,\emptyset,\emptyset),(\emptyset,\emptyset,1,\emptyset),(\emptyset,\emptyset,\emptyset,1),(1,\emptyset,1,\emptyset),(\emptyset,1,\emptyset,1)$
Let $$\mathcal{A}_{n,4}=\{S_1,S_2,S_3,S_4\subseteq[n]\;|\; S_i\cap S_{i+1}=\emptyset \text{ for } 1\leq i\leq 3 \mbox{ and } S_1\cap S_4=\emptyset\},$$ and assume that $A_{n-1,4}=7^{n-1}$.
Let $(S_1,\ldots,S_4)\in \mathcal{A}_{n,4}$. Then deleting $n$ from each $S_i$ gives an element of $\mathcal{A}_{n-1,4}$. Conversely, from a fixed $(S_1,\ldots,S_4)\in \mathcal{A}_{n-1,4}$ we can form an element of $\mathcal{A}_{n,4}$ by inserting $n$ into the $S_i$'s. We can:
Insert zero $n$'s in one way.
Insert one $n$ in $4$ ways.
Insert two $n$'s in $2$ ways.
Insert three or four $n$'s in $0$ ways.
So for each $(S_1,\ldots,S_4)\in\mathcal{A}_{n-1,4}$ we can form $7$ elements of $\mathcal{A}_{n,4}$, and each element of $\mathcal{A}_{n,4}$ arises exactly once with this construction. Thus $A_{n,4}=7A_{n-1,4}=7(7^{n-1})=7^n$. |
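A brute-force check of $A_{n,4}=7^n$ for small $n$, encoding each subset of $[n]$ as a bitmask so that disjointness is a bitwise AND:

```python
from itertools import product

def count_cyclic_disjoint(n, r=4):
    # count r-tuples of subsets of [n] whose cyclically adjacent members are disjoint
    count = 0
    for subs in product(range(1 << n), repeat=r):
        if all(subs[i] & subs[(i + 1) % r] == 0 for i in range(r)):
            count += 1
    return count

for n in range(1, 4):
    assert count_cyclic_disjoint(n) == 7 ** n
print([count_cyclic_disjoint(n) for n in range(1, 4)])
```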
Graph-theory question regarding degrees and cliques | For each vertex $v$, denote by $\Gamma(v)$ the set of neighbors of $v$. For each vertex $v$, set $A_1:=\Gamma(v)$ and $C_1:=\{v\}$. In the following recursion, $A_i$ denotes the set of possible candidates to enter the clique and $C_i$ is the largest clique containing $v$ which we have already found:
For each $w\in A_i$, check whether $C_i\subseteq\Gamma(w)$. If so, then we have found a larger clique $C_{i+1}:=C_i\cup\{w\}$. Also, we can set $A_{i+1}:=\Gamma(w)\cap A_i$ since any vertex in an even larger clique will have to be a neighbor of all vertices in $C_{i+1}$. For performance reasons, we would stop as soon as $|A_i\cup C_i|<k$, of course.
If at some point you reach $i=k$, then $C_k$ is a clique of size $k$. On the other hand, if there is a clique of size $k$, then this procedure will find it by starting at any of its vertices. The running time is $nd^k$. |
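A sketch of this candidate-pruning search in code (the graph representation — a dict mapping each vertex to its set of neighbors — and the recursive bookkeeping are my own choices; the pruning idea is the one described above):

```python
def find_clique(adj, k):
    """Search for a clique of size k in a graph given as adj: vertex -> set of neighbors."""
    def extend(clique, candidates):
        if len(clique) == k:
            return clique
        if len(clique) + len(candidates) < k:   # prune: not enough candidates left
            return None
        for w in list(candidates):
            # every candidate is adjacent to all of `clique`, so w can join;
            # the new candidate set shrinks to w's neighbors
            found = extend(clique | {w}, candidates & adj[w])
            if found:
                return found
        return None

    for v in adj:
        result = extend({v}, set(adj[v]))
        if result:
            return result
    return None

# example: a triangle on {0,1,2} with a pendant vertex 3 attached to 0
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
print(find_clique(adj, 3))
assert find_clique(adj, 4) is None
```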
Restriction of sheaf of modules | Suppose $\mathrm{res}^U_V: \mathscr{O}(U)\to \mathscr{O}(V)$ is the restriction map on the structure sheaf and $\rho^U_V: \mathscr{F}(U)\to \mathscr{F}(V)$ the restriction map on $\mathscr{F}$ as a sheaf of abelian groups. If $\mathscr{F}(U)$ is an $\mathscr{O}(U)$ module it means that there is a collection of maps $\alpha(U): \mathscr{O}(U)\times \mathscr{F}(U)\to \mathscr{F}(U)$ representing the action of $\mathscr{O}(U)$ on $\mathscr{F}(U)$. Then the compatibility is the commutativity of the following diagram: |
Completion of $C([0,1])$ with respect to a norm vanishing at $0$ | When you complete $C([0,1])$ with that norm you pick up "all the functions that blow up at zero more slowly than $1/\psi$."
To make this more rigorous we will do the most basic thing possible, which is to embed our space in another complete space and figure out the closure there. We define the following:
$X =$ your space $C([0,1])$ with the $\psi$-norm, $||f||_\psi = \sup_{x \in [0,1]} |\psi(x)f(x)|$
$Y =$ the space $C([0,1])$ with the standard sup norm, $||f||_0 = \sup_{x \in [0,1]} |f(x)|$
$j : X \to Y, f \mapsto \psi f$
and
$C_0 \subset Y, C_0 = \{f\in Y | f(0) = 0\}$
(I'm just going to treat $\psi$ as a function defined on all of $[0,1]$ - it won't make any difference.)
By construction, $j$ is an (isometric) embedding of $X$ into $Y$. As $Y$ has the standard $\sup$ norm, $C_0$ is a closed subspace, and $j$ clearly maps $X$ into $C_0$. So we easily get that $\overline{j(X)} \subset C_0$, and I claim that $\overline{j(X)}$ is all of $C_0$.
So let's take an arbitrary $f \in C_0$ and $n \in \mathbb{N}$. We are going to find an $\epsilon > 0$ such that for all $x \in [0,\epsilon]$ we have both
$$
|f(x)| < 1/2n
$$
and
$$
\psi(x) \le \psi(\epsilon)
$$
We do this by first finding $\epsilon_0$ so that the condition on $f$ holds, and then set
$$
\epsilon = \operatorname*{arg\,max}_{x \in [0,\epsilon_0]} \psi(x)
$$
(For your simple example where $\psi(x) = x$, $\psi$ is strictly increasing near zero, we could find $\epsilon$ more easily, but we need this more complicated version to handle an arbitrary, possibly less well-behaved $\psi$ that could, for example, oscillate infinitely often between $x^2$ and $x^3$ as $x \to 0^+$)
Now define $f_n \in X$ as
$$
f_n(x) = \begin{cases}
f(\epsilon)/\psi(\epsilon) & x \in [0,\epsilon]\\
f(x)/\psi(x) & x \in [\epsilon,1]\\
\end{cases}
$$
Let's look at $j(f_n)$, i.e. $f_n$ mapped into $Y$. On most of $[0,1]$, away from zero, $j(f_n)$ will be identical to $f$, while near zero $j(f_n)$ looks like a scaled version of $\psi$, but isn't too far from $f$. Explicitly, for $x \in [0,\epsilon]$,
$$
\begin{split}
|j(f_n)(x) - f(x)| & = |\psi(x)f(\epsilon)/\psi(\epsilon) - f(x)| \\
& < |\psi(x)/\psi(\epsilon)| \cdot |f(\epsilon)| + |f(x)| \\
& < 1 \cdot \tfrac{1}{2n} + \tfrac{1}{2n} = \tfrac{1}{n}
\end{split}
$$
So $||j(f_n) - f||_0 < 1/n$, which gives us $j(f_n) \to f$ as $n \to \infty$ and thus $\overline{j(X)} = C_0$. QED
While this gives us a precise answer it'd be nice to see what this means for our original functions, without looking at them multiplied by $\psi$. If we take @Jochen's idea of looking at them as living in $C( (0,1] )$ you can see that we only get the functions where $\lim\limits_{x\to 0} \psi(x)f(x)$ equals $0$, not the whole space of functions where the limit exists. In particular, the function $ f = 1/\psi$ is not in the completion. It's hard for me to see what's going on with the original functions in $C( (0,1] )$, but if you use $j$ to map $1/\psi$ to my space $Y$ you get $\textbf{1}_{[0,1]}$, which can't be approximated (in the $\sup$ norm) by functions that vanish at zero.
(Thanks for posting this question. It was fun to think about.) |
If independent r.v. converge in probability to a constant, do they converge almost surely? | Let $A_n$ be independent events with $\mathbb{P}(A_n)=1/n$, and define
$X_n=1_{A_n}$. Then $X_n\to0$ in probability, but $X_n$ does not converge almost
everywhere.
Apply the second Borel-Cantelli lemma twice; once to the sequence $A_n$
and also to the sequence $A_n^c$, to conclude that
$$P([X_n=1\mbox{ infinitely often}] \cap [X_n=0\mbox{ infinitely often}])=1.$$ |
How to evaluate $\lim_{n\to\infty} a_n$, where $a_{n+1} = \sqrt{1+\frac12 a_n}$? | Yes, it's just an upper bound, not a supremum. Indeed, you can prove that the sequence is no terms exceed $\frac{3}{2}$ by induction. We have $a_2 = \sqrt{1+ \frac{1}{2}}<\sqrt{1+ \frac{5}{4}}=\frac{3}{2} $. Assume that $a_n < \frac{3}{2}$, it still holds for $a_{n+1}$:
$$a_{n+1} = \sqrt{1+\frac{a_n}{2}} < \sqrt{1+\frac{3}{4}} < \sqrt{1+\frac{5}{4}}=\frac{3}{2}$$
Moreover, as various Calculus 1 textbooks note, if $\{a_n\}_{n\geq 1}$ is an increasing sequence such that $\lim_{n\to \infty} a_n$ exists, then the supremum of $\{a_n\}_{n\geq 1}$ equals its limit.
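Iterating the recursion numerically (assuming the starting value $a_1=1$, consistent with $a_2=\sqrt{1+\frac12}$ above) confirms both the bound $\frac32$ and the limit $L=\frac{1+\sqrt{17}}{4}$, the positive root of $L^2 = 1 + \frac{L}{2}$:

```python
import math

a = 1.0                 # assumed starting value, so that a_2 = sqrt(1 + 1/2)
terms = [a]
for _ in range(60):
    a = math.sqrt(1 + a / 2)
    terms.append(a)

L = (1 + math.sqrt(17)) / 4     # positive root of L^2 = 1 + L/2
assert all(t < 1.5 for t in terms)      # the upper bound proved above
assert abs(terms[-1] - L) < 1e-12       # the iteration converges to L
print(L)
```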
Why is "$Ax=b$ is solvable $\iff AA^\mathsf{T}y=b$ is solvable" true? | Indeed, this is not true as stated. Take $m = 2$, $n=1$, $A = b = \begin{pmatrix}1 \\ 0 \end{pmatrix}$.
Then
$$Ax = b$$
is the same as
$$\begin{pmatrix}1 \\ 0 \end{pmatrix} x = \begin{pmatrix}1 \\ 0 \end{pmatrix}$$
which has the unique solution $x=1$.
On the other hand, $AA^T y = b$ is the same as
$$\begin{pmatrix}1 & 0 \\ 0 & 0 \\ \end{pmatrix} y = \begin{pmatrix}1 \\ 0 \end{pmatrix}$$
which has multiple solutions, e.g. $y = \begin{pmatrix}1 \\ 0\end{pmatrix}$ or $y = \begin{pmatrix}1 \\ 1\end{pmatrix}$. |
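The counterexample can be checked directly in plain Python:

```python
def matvec(M, v):
    # multiply a matrix (list of rows) by a vector
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

A = [[1], [0]]            # the 2x1 matrix from the counterexample
AAt = [[1, 0], [0, 0]]    # A A^T
b = [1, 0]

assert matvec(A, [1]) == b          # Ax = b has the solution x = 1
assert matvec(AAt, [1, 0]) == b     # y = (1, 0) solves AA^T y = b
assert matvec(AAt, [1, 1]) == b     # ... and so does y = (1, 1)
print("counterexample verified")
```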
Inequality involving inner product. $|\langle u,v\rangle+\overline{\langle u,v\rangle}|\le 2|\langle u,v\rangle|$ | $$\langle u,v\rangle=x+\mathrm iy\implies|\langle u,v\rangle+\overline{\langle u,v\rangle}|=2|x|=2\sqrt{x^2}\leqslant 2\sqrt{x^2+y^2}=|\langle u,v\rangle|$$ |
Are there $k$-rational points that are not closed point? | Reducing to the affine case, your question comes down to the following: if $A$ is an algebra over a field $k$, not necessarily finitely generated, and $\mathfrak p$ is a prime of $A$ such that $k \rightarrow A_{\mathfrak p}/\mathfrak p A_{\mathfrak p}$ is an isomorphism, is it possible for $\mathfrak p$ to be a nonmaximal ideal? The answer is no.
By hypothesis, the composition $k \xrightarrow{i} A \xrightarrow{\pi} A/\mathfrak p \xrightarrow{h} \operatorname{Quot}(A/\mathfrak p) = A_{\mathfrak p}/\mathfrak p A_{\mathfrak p}$ is an isomorphism. Since
$$h \circ (\pi \circ i)$$
is a bijection, it is in particular surjective, which implies that $h$ must be surjective. The surjectivity of
$$A/\mathfrak p \rightarrow \operatorname{Quot}(A/\mathfrak p)$$
implies that $A/\mathfrak p$ is a field, i.e. $\mathfrak p$ is maximal. |
In a 2-connected simple graph, is there always a simple cycle containing any given path P and disjoint edge e? | No.
It is also not true of a path and a vertex. Pick the path specified above and one end of the edge specified above. |
Integrating a function using residues theorem | Yes it is right. To see this explicitly,
$$\frac{1}{e^z - 1} = \frac{1}{z(1 + z/2 +z^2/3! + \dots)} = \frac{1}{z}(a_0 + a_1z + a_2z^2 + \dots)$$
By comparison, $a_0 = 1, a_1 = -1/2, \text{and} \;a_2 = 1/12$. The analytic part of the series will of course go to $0$, leaving you to integrate $$\int_\gamma \frac{1}{z} dz = 2\pi i.$$ |
Finding the norm of this operator | Hint 1: You made a mistake, the following equality is not true:
$$
\sup_{x\in X}\frac{||\sum_{j=1}^n c_j x(t_j)||}{||x||}\neq \sup_{x\in X} \sum_{j=1}^n \gamma_j \frac{||x(t_j)||}{||x||}$$
Hint 2: For all $c_j \neq 0$ denote $\omega_j:= \frac{c_j}{|c_j|}$.
Then
$$\|\sum_{j=1}^n c_j x(t_j)\| \leq \sum_{j=1}^n| c_j| | x(t_j)| \leq \sum_{j=1}^n| c_j| \| x\| $$
The first inequality is an equality exactly when
$$\overline{\omega_j}\, x(t_j)=|x(t_j)|$$
while the second is an equality exactly when
$$|x(t_j)|=\|x\|$$
From Goodfellow book: why can one rescale argmax of conditional probability into an expectation? | It just takes a bit of time to find out where the symbols are defined. There's no computations involved in this question.
\begin{align}
\boldsymbol{\theta}_{\rm ML} &= \mathop{\rm argmax}_\boldsymbol{\theta} \sum_{i=1}^{m} \log p_{\rm model}(\boldsymbol{x}^{(i)}; \boldsymbol{\theta}) \tag{5.58}\label{558} \\
&= \mathop{\rm argmax}_\boldsymbol{\theta} \underbrace{\frac1m}_\text{const.} \sum_{i=1}^{m} \log p_{\rm model}(\boldsymbol{x}^{(i)}; \boldsymbol{\theta}) \tag{divide by $m$}\label{frac1m}
\end{align}
In \eqref{559}, $\boldsymbol{x}^{(i)}$'s are replaced by $\boldsymbol{x}$, and a new symbol $\hat{p}_{\rm data}$ is introduced, so it's better to scroll up the page to see where they are defined.
Consider a set of $m$ examples $\mathbb{X} = \{\boldsymbol{x}^{(1)}, \dots, \boldsymbol{x}^{(m)}\}$ drawn independently from the true but unknown data-generating distribution $\hat{p}_{\rm data}(\mathbf{x})$.
Observe the difference in boldface styles and their corresponding meaning.
\begin{array}{|c|c|c|}
\hline
\text{$\rm \LaTeX$ code} & \texttt{\textbackslash boldsymbol\{x\}} & \texttt{\textbackslash mathbf\{x\}} \\ \hline
\text{output} & \boldsymbol{x} & \mathbf{x} \\ \hline
\text{meaning} & \text{realized} & \text{theoretical} \\ \hline
\text{usage} & \boldsymbol{x}^{(i)} & \hat{p}_{\rm data}(\mathbf{x}) \\ \hline
\end{array}
Let's take a closer look at the ${\rm \small{\bf smaller}}$ part of \eqref{559}
\begin{equation}
\boldsymbol{\theta}_{\rm ML} = \mathop{\rm argmax}_\boldsymbol{\theta} \mathbb{E}_{\mathbf{x} \sim \hat{p}_{\rm data}} \log p_{\rm model}(\boldsymbol{x}; \boldsymbol{\theta}). \tag{5.59}\label{559}
\end{equation}
It reads
$${\huge \mathbf{x} \sim \hat{p}_{\rm data}}. \tag{subscript} \label{sub}$$
From the previous quoted text, it's clear that $\boldsymbol{x}^{(i)}$'s are i.i.d. with (unknown) distribution $\hat{p}_{\rm data}$. In \eqref{558}, we calculate $\log p_{\rm model}(\boldsymbol{x}^{(i)}; \boldsymbol{\theta})$ from these $m$ realisations $\boldsymbol{x}^{(i)}$'s with $i = 1,\dots, m$, then we take the simple average in \eqref{frac1m}. The symbol $\mathbb{E}$ captures the idea of "average" and the \eqref{sub} indicates the underlying probability distribution. |
Is average of two random directions also a random direction? | No - average directions away from the edge are more likely than average directions near the edge of the hemisphere |
Simple Harmonic estimate | Here, the overline means averaging the function over the indicated domain, namely $B_4$. The Poincaré inequality allows us to estimate the $L^p$ norm of a function with mean zero in terms of the $L^p$ norm of its gradient. In particular, we can estimate the $L^2$ norm of $\nabla u-\overline{\nabla u}_{B_4}$ in terms of the $L^2$ norm of $D^2u$. From what you wrote I cannot tell what role the estimate with $\delta$ has in all of this. A reference to where you encountered these estimates might be helpful. |
Is $\mathcal{B}(H)$ complemented in $\ell_\infty(I, H)$ | The answer to your question is no. No copy of $B(H)$ is complemented in $\ell_\infty(I, H)$. This is because the latter space is a Banach lattice and $B(H)$ lacks the local unconditional structure.
Y. Gordon and D. R. Lewis, Absolutely summing operators and local unconditional structures, Acta Mathematica, 133 (1974), 27–48.
See also
Y. Gordon and D. R. Lewis, Banach ideals on Hilbert spaces, Studia Mathematica, 54 (1975), 161–172.
A Banach space $X$ has the local unconditional structure if and only if $X^{**}$ is complemented in a Banach lattice. (Consult Theorem 17.5 in
J. Diestel, H. Jarchow and A. Tonge, Absolutely Summing Operators, Cambridge University Press, 1995.
for more details.) |
Radius of Convergence of Infinite Series | Use the ratio test:
$$ \frac{\frac{(n-1+1)^{n-1}}{(n-1)!}}{\frac{(n+1)^{n}}{n!}}=\frac{n^n}{(n+1)^n}=\frac1{\left(1+\frac1n\right)^n}$$
and
$$ \left(1+\frac1n\right)^n\to e$$ |
Surface Integral directly | This looks like a standard "applying the divergence theorem" problem. Notice
$$
\iint_S \mathbf{F} \cdot d\mathbf{A} = \iiint_V \nabla\cdot \mathbf{F}\, dV = 3\iiint_V\,dV
$$
The region
$$
x^{2}+y^{2} \leq z \leq (2-x^{2}-y^{2})^{1/2}
$$
which is the intersection of the paraboloid and a sphere looking like the following figure:
Notice in this figure the region is artificially divided into two parts using a cone.
First part is below the sphere $x^2+y^2+z^2 = 2$, above the cone $z = \sqrt{x^2+y^2}$, integral within the first part is easier to be integrated in spherical coordinates:
$$
\begin{aligned}
&r = \sqrt{x^2+y^2+z^2} \in (0,\sqrt{2})
\\
&\phi = \arccos\frac{z}{r} \in (0,\frac{\pi}{4})
\\
&\theta = \arctan\frac{y}{x} \in (0,2\pi)
\end{aligned}
$$
this is a cone with a spherical top. The other part is below the cone $z = \sqrt{x^2+y^2}$, above the paraboloid $z = x^2+y^2$. Integral within the second part is easier to be integrated in cylindrical coordinates:
$$
\begin{aligned}
&\rho = \sqrt{x^2+y^2} \in (0,1)
\\
&z \in (\rho^2,\rho)
\\
&\theta = \arctan\frac{y}{x} \in (0,2\pi)
\end{aligned}
$$
The integral becomes:
$$
\iint_S \mathbf{F} \cdot d\mathbf{A} = 3\int^{2\pi}_0\int^{\frac{\pi}{4}}_0 \int_0^{\sqrt{2}} r^2\sin\phi \,dr\, d\phi\, d\theta + 3\int^{2\pi}_0\int^{1}_0 \int_{\rho^2}^\rho \rho \,dz\, d\rho\, d\theta= \left(4\sqrt{2}-4\right)\pi + \frac{\pi}{2} = 4\sqrt{2}\pi - \frac{7\pi}{2}
$$ |
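As a numerical sanity check, the same flux equals $3$ times the volume of the region, which in cylindrical coordinates is the single integral $2\pi\int_0^1 \rho\left(\sqrt{2-\rho^2}-\rho^2\right)d\rho$; Simpson's rule confirms the value $4\sqrt{2}\pi-\frac{7\pi}{2}$:

```python
import math

def integrand(r):
    # rho * (height between the paraboloid z = rho^2 and the sphere z = sqrt(2 - rho^2))
    return r * (math.sqrt(2 - r * r) - r * r)

# composite Simpson's rule on [0, 1]
n, h = 1000, 1.0 / 1000
s = integrand(0) + integrand(1)
for i in range(1, n):
    s += (4 if i % 2 else 2) * integrand(i * h)
volume = 2 * math.pi * s * h / 3
flux = 3 * volume

assert abs(flux - (4 * math.sqrt(2) * math.pi - 3.5 * math.pi)) < 1e-8
print(flux)
```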
Derivation of distance formula to isosurface of a scalar field. | With some help of the author of the article himself, I finally fully understand the derivation. Here is the full story:
Having a scalar field $f:\mathbb{R}^n \rightarrow \mathbb{R}$, we want to find the distance from a given point $\vec x$ to the isosurface $f = 0$.
Let's assume that $f(\vec x + \vec e) = 0$ for some vector $\vec e$; then what we are searching for is $|\vec e|$ - the length of the vector $\vec e$.
We know that if $\vec x + \vec e$ is close to point $\vec x$, then we can approximate $f(\vec x + \vec e)$ by its 1-st order Taylor series approximation:
$f(\vec x + \vec e) \approx f(\vec x) + \vec\nabla f \cdot \vec e$
then (by assumption $f(\vec x + \vec e) = 0$):
$0 = f(\vec x + \vec e) \approx f(\vec x) + \vec\nabla f \cdot \vec e$
Let's assume that the approximation is exact:
If $0=f(\vec x) + \vec\nabla f \cdot \vec e$ , then equivalently:
$|0|=|f(\vec x) + \vec\nabla f \cdot \vec e|$ , which is the same as: $0=|f(\vec x) + \vec\nabla f \cdot \vec e|$
Based on triangle inequality $|a + b| \le |a| + |b|$, for any real $a$ and $b$, and letting:
$a = p + q$
$b = -q$,
for any real $p$ and $q$, we can derive the following inequality:
$|p + q - q| \le |p + q| + |-q|$
$|p| \le |p + q| + |q|$
$|p| - |q| \le |p + q|$ which is the same as $|p + q| \ge |p| - |q|$
Case A) Consider:
$p = f(\vec x)$
$q = \vec\nabla f \cdot \vec e$
so we have:
$0=|f(\vec x) + \vec\nabla f \cdot \vec e| \ge |f(\vec x)| - |\vec\nabla f \cdot \vec e| \implies 0 \ge |f(\vec x)| - |\vec\nabla f \cdot \vec e| \implies |\vec\nabla f \cdot \vec e| \ge |f(\vec x)| $
Case B) Swap $p$ an $q$ from case A:
$p = \vec\nabla f \cdot \vec e$
$q = f(\vec x)$
so we have:
$0=|\vec\nabla f \cdot \vec e + f(\vec x)| \ge |\vec\nabla f \cdot \vec e| - |f(\vec x)| \implies 0 \ge |\vec\nabla f \cdot \vec e| - |f(\vec x)| \implies |f(\vec x)| \ge |\vec\nabla f \cdot \vec e| $
From case A and case B we can conclude that:
$|f(\vec x)| = |\vec\nabla f \cdot \vec e| $
From the definition of the dot product ($\vec a \cdot \vec b = |\vec a| |\vec b| \cos \theta$):
$|\vec\nabla f \cdot \vec e| \le |\vec\nabla f| |\vec e|$, so we could write:
$|f(\vec x)| = |\vec\nabla f \cdot \vec e| \le |\vec\nabla f| |\vec e|$
$|f(\vec x)| \le |\vec\nabla f| |\vec e|$
$\frac {|f(\vec x)|}{|\vec\nabla f|} \le |\vec e|$
Which is exactly the result that Inigo Quilez came with. |
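The resulting estimate $|f(\vec x)|/|\vec\nabla f|$ in code, in a 2-D example where the true distance is known (the field $f(x,y)=x^2+y^2-1$, whose zero set is the unit circle, is my own test case):

```python
import math

def distance_estimate(f, grad, x, y):
    # first-order distance bound: |f| / |grad f|
    gx, gy = grad(x, y)
    return abs(f(x, y)) / math.hypot(gx, gy)

f    = lambda x, y: x * x + y * y - 1      # isosurface f = 0 is the unit circle
grad = lambda x, y: (2 * x, 2 * y)

d = distance_estimate(f, grad, 1.1, 0.0)   # true distance to the circle is 0.1
assert d <= 0.1 + 1e-12                     # for this field the estimate is a lower bound
assert abs(d - 0.1) < 0.01                  # ... and close for points near the surface
print(d)
```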
Closed form for $\sum_{k=0}^{n-p}\binom{n}{k}\binom{n}{p+k}$ | For this kind of problems it is best to have the summation variable running in opposite directions; also note that you can remove the distracting upper bound on $k$ since terms for $k>n-p$ will be zero anyway due to the second binomial coefficient. So apply symmetry either in the first or the second binomial coefficient, giving respectively
$$
\sum_{k\geq0}\binom{n}{n-k}\binom{n}{p+k}
\qquad\text{or}\qquad
\sum_{k\geq0}\binom{n}{k}\binom{n}{n-p-k}.
$$
The first summation can be interpreted as counting the ways to choose $n+p$ elements out of a set of$~2n$, with $n-k$ coming from the first half, and the remaining $p+k$ from the second half. The second summation can be interpreted as counting the ways to choose $n-p$ elements out of a set of$~2n$, with $k$ coming from the first half, and the remaining $n-p-k$ from the second half. The results are the same, since
$$
\binom{2n}{n+p} = \binom{2n}{n-p}
$$
by symmetry. This summation is a specialisation of the Vandermonde identity. |
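The identity is easy to verify exhaustively for small parameters:

```python
from math import comb

def lhs(n, p):
    # the original sum, with the harmless upper bound n - p kept explicit
    return sum(comb(n, k) * comb(n, p + k) for k in range(n - p + 1))

for n in range(12):
    for p in range(n + 1):
        assert lhs(n, p) == comb(2 * n, n - p)
print(lhs(10, 3), comb(20, 7))
```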
Counting theorem problem. | To color just the triangles, I see one coloring with no black triangles, one coloring
(up to symmetry) with one black triangle, and two colorings with two black triangles
(one with triangles on adjacent sides of the pentagon, one with triangles not on
adjacent sides). That's four so far.
The other colorings of the triangles have either no blue triangles, one blue triangle,
or two blue triangles. So four of them.
For each of these $8$ colorings I can choose either blue or black for the pentagon.
So by (relatively) brute-force counting, there are $16$ possible colorings.
This is just a confirmation of the result. Your approach seems better. |
Solve recursion $p[n,m] = p[n-1,m-1] + p[n+1,m-1] + p[n,m-1]$ | See OEIS A027907 and references on trinomial coefficients. If you just start making the triangle in a spreadsheet and type the numbers into OEIS you will find it. |
Matrix endomorphism rank | Hints.
Let $\{Ax_1, Ax_2, \ldots, Ax_r\}$ be a basis of the column space of $A$, and $\{x_{r+1},\ldots,x_n\}$ be a basis of the null space of $A$.
Similarly, let $\{B^Ty_1, B^Ty_2, \ldots, B^Ty_k\}$ be a basis of the column space of $B^T$, and $\{y_{k+1},\ldots,y_n\}$ be a basis of the null space of $B^T$.
Show that the image of $u$ is spanned by $\{Ax_iy_j^TB: 1\le i\le r,\ 1\le j\le k\}$. Explain why this is a linearly independent set of matrices.
Now what is the rank of $u$? |
Divide N items into M groups with as near equal size as possible | If you are given $N$ and $M,$ you put $\lfloor \frac NM \rfloor+1$ in $N \pmod M$ of the groups and $\lfloor \frac NM \rfloor$ in the rest. |
Find two matrices $A$ and $B$ such that matrix $AB$ that is invertible but $BA$ is not. | Hint: Nobody said that $A,B$ be square matrices. |
How can we define scalar product so that those three vectors will form orthonormal basis? | Let $A$ be the $3 \times 3$ matrix whose columns are your vectors. If indeed your vectors are linearly independent, the matrix $A \cdot A^T$ is invertible (because $A$ is) and then you can define an inner product $g(\vec{u}, \vec{v})$ on $\mathbb{R}^3$ by the formula
$$ g(\vec{u}, \vec{v}) := \vec{u}^T (A \cdot A^T)^{-1} \vec{v}. $$
For this inner product, the columns $A\vec{e_i}$ of $A$ will form an orthonormal basis because
$$ g(A\vec{e_i}, A\vec{e_j}) = \vec{e_i}^T A^T (A \cdot A^T)^{-1} A \vec{e_j} = \vec{e_i}^T A^T (A^T)^{-1} A^{-1} A \vec{e_j} = \vec{e_i}^T \vec{e_j} = \delta_{ij}. $$
This is indeed an inner product because the matrix $(A \cdot A^T)^{-1}$ is positive definite (being the inverse of a positive definite matrix). |
Mental techniques for large arithmetic and decimals | should be possible without thinking, since this is pretty basic:
$20\cdot 45=900$
A method here could be to split it into more multiplications, if you struggle:
It is easier when you calculate $45\cdot 2\cdot 10=90\cdot 10=900$
Another example: $14\cdot 15$ might be trickier. But calculating $15\cdot 2\cdot 7=30\cdot 7=210$ is pretty easy.
If you know that $\frac16=0.1\bar{6}$ this is trivial, since $\frac{7}{6}=1+\frac16=1.1\bar{6}$
$28\cdot 0.28$: the trick here is to transform $0.28$ into a fraction, $0.28=\frac{28}{100}$. This should also be obvious.
Then we are left with $\frac{28\cdot 28}{100}=\frac{14\cdot 28}{50}=\frac{14\cdot 14}{25}=\frac{196}{25}$
Most people know the squares of the numbers 1 to 25, because they had to learn them in school. If you do not know them, you should learn them.
If you want to give a decimal instead of a fraction, it is often a good idea, to add and subtract something.
I would consider this more advanced:
$\frac{196}{25}=\frac{200}{25}-\frac{4}{25}=8-\frac{16}{100}=8-0.16=7.84$
If you learned your squares, you would also know that $28\cdot 28=784$, which makes the solution trivial.
Of course there are many ways leading to an answer; I am just trying to give some examples of how it could be done, by showing some common principles.
$\frac{42}{168}=\frac{1}{4}$
this should be seen immediately, since $4\times 42=168$. If not, it is not hard to cancel common factors:
$\frac{42}{168}=\frac{21}{84}=\frac{3}{12}=\frac14$
$\frac{14}{4.6\bar{6}}$ is in my opinion the hardest to calculate here. Especially without pen and paper.
I would try this:
$4.\bar{6}=4+\frac23=\frac{14}{3}$
Then $\frac{14}{\frac{14}{3}}=\frac13$
So tips you should take away:
a) It is almost always easier to calculate with fractions.
b) Know some basic multiplications like the squares $1^2, 2^2, \dotso, 24^2, 25^2$ or even more. This is also useful if you want to find a root in your head! If you know the root of a five-digit number is an integer, then you can 'guess' the roots of numbers in the range of $100^2$ to $250^2$ pretty easily.
Example: $\sqrt{20736}$ You only have to focus on the first three digits $207$ and the last one $6$.
The first three digits help as follows. If you know the quadratics by heart you just have to bound 207 between two quadratics which are the 'closest' to it:
$14^2=196<207<225=15^2$ So $\sqrt{20736}$ is between $140$ and $150$
Now you need the last digit. Note that every (besides 0 and 5) 'last digit' can only be created by two multiplications:
$1^2$ and $9^2$ give last digit $1$
$2^2$ and $8^2$ give last digit $4$
$3^2$ and $7^2$ give last digit $9$
$4^2$ and $6^2$ give last digit $6$
$5^2$ gives digit 5 and $0^2$ of course $0$.
Since 20736 ends with 6 we know that the root is either $144$ or $146$. Since $20736$ is closer to $19600$ than to $22500$, we can guess it is $144$.
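The whole mental method can be written out as a small program (a sketch; it assumes its input really is a perfect square):

```python
import math

def trick_sqrt(n):
    """Square root of a perfect square via the mental method above:
    bound the tens part using n // 100, then pick the units digit
    whose square ends in the right last digit."""
    tens = math.isqrt(n // 100)      # e.g. 14 for 20736, since 196 <= 207 < 225
    candidates = [d for d in range(10) if (d * d) % 10 == n % 10]
    for d in candidates:
        if (10 * tens + d) ** 2 == n:
            return 10 * tens + d

assert trick_sqrt(20736) == 144
assert all(trick_sqrt(k * k) == k for k in range(10, 250))
print(trick_sqrt(20736))
```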
c) Adding numbers to get higher (or subtracting for lower) easier to factor numbers and use binomial formulas.
Example:
$199^2$ might be hard to calculate by itself, but $(200-1)^2=200^2-400+1$ is trivial.
These methods also help a lot.
Practice makes perfect and there are nearly unlimited methods to calculate. |
(Geometry) Circle, angles and tangents problem | First of all, note that $\angle PAB + \angle PBA = 140^\circ$. That means that $\angle MAB + \angle NBA = 220^\circ$.
Then we see that $AO$ bisects $\angle MAB$, and $BO$ bisects $\angle NBA$, so $\angle OAB + \angle OBA = 110^\circ$.
Lastly, looking at the quadrilateral $AOBP$, we see that $x = 360^\circ - 40^\circ - 140^\circ - 110^\circ = 70^\circ$.
There is no reason to believe $\triangle PAB$ to be isosceles. In fact, from just the given information it might not be. If we move $A$ closer to $M$, we see that $AB$ touching the circle will force $B$ closer to $P$. It's just that you've happened to draw the figure symmetrically. |
Intersection between circle and segment that starts in middle of circle | As the segment starts in the centre of the circle, let $$d=\sqrt{(x-c_x)^2+(y-c_y)^2}>r$$
Your point is $$P=(c_x,c_y)+\frac{r}{d}(x-c_x,y-c_y)$$
You can see this is the point on the segment at distance $r$ from the centre. |
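In code, the formula is a one-liner (assuming the endpoint $(x,y)$ lies outside the circle, so the segment actually crosses it):

```python
import math

def segment_circle_point(cx, cy, r, x, y):
    # intersection of the circle with the segment from the centre (cx, cy)
    # to an outside point (x, y)
    d = math.hypot(x - cx, y - cy)
    assert d > r, "segment endpoint must lie outside the circle"
    t = r / d
    return (cx + t * (x - cx), cy + t * (y - cy))

px, py = segment_circle_point(0, 0, 2, 6, 8)    # d = 10, so P = (1.2, 1.6)
assert math.isclose(px, 1.2) and math.isclose(py, 1.6)
assert math.isclose(math.hypot(px, py), 2)       # P lies on the circle
print(px, py)
```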
Independence of arrival time and interarrival time | Recall the following elementary lemma:
Let $(X_1,\ldots,X_{n+1})$ be (jointly) independent random variables and $g: \mathbb{R}^n \to \mathbb{R}$ be a measurable function. Then $g(X_1,\ldots,X_n)$ and $X_{n+1}$ are independent.
By definition, we have $S_n = \sum_{i=1}^n X_i$; therefore, setting $$g(x_1,\ldots,x_n) := \sum_{i=1}^n x_i$$ we obtain from the previous lemma that $S_n = g(X_1,\ldots,X_n)$ is independent of $X_{n+1}$.
Concerning your second question: No, this is in general not true since pairwise independence does not imply joint independence. (If $X,Y,Z$ are jointly independent, then $X$ and $Y+Z$ are independent.) |
An urn with three kinds of balls... and a weird constraint! | Let's suppose there are $a,b,c$ balls of each type.
We have $P(A)-P(\overline B)=1-P(\overline A)-P(\overline B)$. Also $P(\overline A)=\big(\frac{b+c}{a+b+c}\big)^n$ and $P(\overline B)=\big(\frac{a+c}{a+b+c}\big)^n.$ Multiplying $1-P(\overline A)-P(\overline B)=0$ by $(a+b+c)^n$ we need $(a+c)^n+(b+c)^n=(a+b+c)^n$, which has no solutions for $n>2$ by a result of Wiles. It is easy to see there are no solutions for $n=1$ either, since the two sides differ by $c>0$.
If $n=2$ we have a Pythagorean triple $x^2+y^2=z^2$. Any such triple gives a solution, setting $c=x+y-z$, $a=z-y$, $b=z-x$. (e.g. for $3,4,5$ we get $a=1,b=2,c=2$.) |
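The $n=2$ construction is easy to verify: any Pythagorean triple $(x,y,z)$ gives positive counts $a=z-y$, $b=z-x$, $c=x+y-z$ satisfying $(a+c)^2+(b+c)^2=(a+b+c)^2$:

```python
triples = [(3, 4, 5), (5, 12, 13), (8, 15, 17), (20, 21, 29)]
for x, y, z in triples:
    a, b, c = z - y, z - x, x + y - z
    assert a > 0 and b > 0 and c > 0
    # (a + c, b + c, a + b + c) recovers the triple (x, y, z)
    assert (a + c, b + c) == (x, y) and a + b + c == z
    assert (a + c) ** 2 + (b + c) ** 2 == (a + b + c) ** 2
print("all triples give valid urn contents")
```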
Permutations: A cycle is conjugate to its own inverse | To answer the d) question. Fix $1\leq i\leq r$ such an integer then you can write :
$$\sigma:=(1,2,...,r)=(i,i+1,...,r,1,...,i-1)$$
This is just another way of writing your permutation.
Now you know that :
$$\sigma^{-1}=(1,r,r-1,...,2)=(i,i-1,...,1,r,...,i+1) $$
And you want $\rho$ such that :
$$\rho\sigma\rho^{-1}=\sigma^{-1} \text{ and } \rho(i)=i$$
From a) you get :
$$\rho\sigma\rho^{-1}=(\rho(i),\rho(i+1),\ldots,\rho(r),\rho(1),\ldots,\rho(i-1))=(i,\rho(i+1),\ldots,\rho(r),\rho(1),\ldots,\rho(i-1))$$
Finally you just want :
$$(i,\rho(i+1),...,\rho(r),\rho(1),...,\rho(i-1))=(i,i-1,...,1,r,...,i+1) $$
You can now see that such a $\rho$ always exists. You just want to send the $r-1$ tuple:
$$(i+1,...,r,1,...,i-1)\text{ on } (i-1,...,1,r,...,i+1)$$
with a permutation of the set $\{1,...,i-1,i+1,...,r\}$, and this can be done because the symmetric group on this set acts transitively on $(r-1)$-tuples of distinct elements.
Bordered minor and rank of a matrix | I don't know what a "bordering minor" is.
This does not seem to be a word with a standard meaning. However, here is a
theorem that is true:
Theorem 1. Let $\mathbf{k}$ be a field. Let $A\in\mathbf{k}^{u\times v}$
be a matrix. In the following, whenever $i_{1},i_{2},\ldots,i_{k}\in\left\{
1,2,\ldots,u\right\} $ and $j_{1},j_{2},\ldots,j_{\ell}\in\left\{
1,2,\ldots,v\right\} $, we shall denote by $A\left[ \dfrac{j_{1}
,j_{2},\ldots,j_{\ell}}{i_{1},i_{2},\ldots,i_{k}}\right] $ the $k\times\ell
$-matrix whose $\left( p,q\right) $-th entry is the $\left( i_{p}
,j_{q}\right) $-th entry of $A$ for every $\left( p,q\right) \in\left\{
1,2,\ldots,k\right\} \times\left\{ 1,2,\ldots,\ell\right\} $.
Let $k\in\mathbb{N}$. Let $i_{1},i_{2},\ldots,i_{k}\in\left\{ 1,2,\ldots
,u\right\} $ and $j_{1},j_{2},\ldots,j_{k}\in\left\{ 1,2,\ldots,v\right\} $
be such that $\det\left( A\left[ \dfrac{j_{1},j_{2},\ldots,j_{k}}
{i_{1},i_{2},\ldots,i_{k}}\right] \right) \neq0$. Assume that every
$i^{\prime}\in\left\{ 1,2,\ldots,u\right\} \setminus\left\{ i_{1}
,i_{2},\ldots,i_{k}\right\} $ and $j^{\prime}\in\left\{ 1,2,\ldots
,v\right\} \setminus\left\{ j_{1},j_{2},\ldots,j_{k}\right\} $ satisfy
(1) $\det\left( A\left[ \dfrac{j_{1},j_{2},\ldots,j_{k},j^{\prime}
}{i_{1},i_{2},\ldots,i_{k},i^{\prime}}\right] \right) =0$.
Then, $\operatorname*{rank}A=k$.
Proof. Assume the contrary; i.e., assume that $\operatorname*{rank}A\neq k$.
First, we observe that the numbers $i_{1},i_{2},\ldots,i_{k}$ are distinct
(because otherwise, the matrix $A\left[ \dfrac{j_{1},j_{2},\ldots,j_{k}
}{i_{1},i_{2},\ldots,i_{k}}\right] $ would have two equal rows, which would
yield $\det\left( A\left[ \dfrac{j_{1},j_{2},\ldots,j_{k}}{i_{1}
,i_{2},\ldots,i_{k}}\right] \right) =0$, contradicting $\det\left( A\left[
\dfrac{j_{1},j_{2},\ldots,j_{k}}{i_{1},i_{2},\ldots,i_{k}}\right] \right)
\neq0$). Similarly, the numbers $j_{1},j_{2},\ldots,j_{k}$ are distinct.
The rows of the matrix $A\left[ \dfrac{j_{1},j_{2},\ldots,j_{k}}{i_{1}
,i_{2},\ldots,i_{k}}\right] $ are linearly independent (since $\det\left(
A\left[ \dfrac{j_{1},j_{2},\ldots,j_{k}}{i_{1},i_{2},\ldots,i_{k}}\right]
\right) \neq0$). Hence, the rows of the matrix $A\left[ \dfrac{1,2,\ldots
,v}{i_{1},i_{2},\ldots,i_{k}}\right] $ are also linearly independent (since
the rows of the matrix $A\left[ \dfrac{j_{1},j_{2},\ldots,j_{k}}{i_{1}
,i_{2},\ldots,i_{k}}\right] $ are fragments of the rows of the matrix
$A\left[ \dfrac{1,2,\ldots,v}{i_{1},i_{2},\ldots,i_{k}}\right] $, and
therefore any linear dependence relation between the latter would yield a
linear dependence relation between the former). In other
words, the $i_{1}$-th, the $i_{2}$-th, etc., the $i_{k}$-th rows of the matrix
$A$ are linearly independent. Hence, the matrix $A$ has $k$ linearly
independent rows; thus, $\operatorname*{rank}A\geq k$. Combined with
$\operatorname*{rank}A\neq k$, this yields $\operatorname*{rank}A>k$. Thus,
the row space of $A$ cannot be spanned by just $k$ of its rows (because if it
could, then its dimension would be $\leq k$, which would contradict the fact
that its dimension is $\operatorname*{rank}A>k$). Therefore, there exists at
least one $i^{\prime}\in\left\{ 1,2,\ldots,u\right\} \setminus\left\{
i_{1},i_{2},\ldots,i_{k}\right\} $ such that the $i^{\prime}$-th row of $A$
does not belong to the span of the $i_{1}$-th, the $i_{2}$-th, etc., the
$i_{k}$-th rows of $A$. Fix such an $i^{\prime}$. We know that:
The $i^{\prime}$-th row of $A$ does not belong to the span of the $i_{1}
$-th, the $i_{2}$-th, etc., the $i_{k}$-th rows of $A$.
The $i_{1}$-th, the $i_{2}$-th, etc., the $i_{k}$-th rows of $A$ are
linearly independent.
Combining these two facts, we conclude that the $i_{1}$-th, the $i_{2}$-th, etc.,
the $i_{k}$-th, and the $i^{\prime}$-th rows of $A$ are linearly independent.
In other words, the rows of the matrix $A\left[ \dfrac{1,2,\ldots,v}
{i_{1},i_{2},\ldots,i_{k},i^{\prime}}\right] $ are linearly independent.
Since this matrix has $k+1$ rows, we thus conclude that $\operatorname*{rank}
\left( A\left[ \dfrac{1,2,\ldots,v}{i_{1},i_{2},\ldots,i_{k},i^{\prime}
}\right] \right) =k+1>k$. Therefore, the column space of $A\left[
\dfrac{1,2,\ldots,v}{i_{1},i_{2},\ldots,i_{k},i^{\prime}}\right] $ cannot be
spanned by just $k$ of its columns (since if it could, then its dimension
would be $\leq k$, which would contradict the observation that its dimension
is $\operatorname*{rank}\left( A\left[ \dfrac{1,2,\ldots,v}{i_{1}
,i_{2},\ldots,i_{k},i^{\prime}}\right] \right) >k$). Thus, there exists at
least one $j^{\prime}\in\left\{ 1,2,\ldots,v\right\} \setminus\left\{
j_{1},j_{2},\ldots,j_{k}\right\} $ such that the $j^{\prime}$-th column of
$A\left[ \dfrac{1,2,\ldots,v}{i_{1},i_{2},\ldots,i_{k},i^{\prime}}\right] $
does not belong to the span of the $j_{1}$-th, the $j_{2}$-th, etc., the
$j_{k}$-th columns of $A\left[ \dfrac{1,2,\ldots,v}{i_{1},i_{2},\ldots
,i_{k},i^{\prime}}\right] $. Fix such a $j^{\prime}$.
On the other hand, the columns of the matrix $A\left[ \dfrac{j_{1}
,j_{2},\ldots,j_{k}}{i_{1},i_{2},\ldots,i_{k}}\right] $ are linearly
independent (since $\det\left( A\left[ \dfrac{j_{1},j_{2},\ldots,j_{k}
}{i_{1},i_{2},\ldots,i_{k}}\right] \right) \neq0$). In other words, the
$j_{1}$-th, the $j_{2}$-th, etc., the $j_{k}$-th columns of the matrix
$A\left[ \dfrac{1,2,\ldots,v}{i_{1},i_{2},\ldots,i_{k}}\right] $ are
linearly independent. Therefore, the $j_{1}$-th, the $j_{2}$-th, etc., the
$j_{k}$-th columns of $A\left[ \dfrac{1,2,\ldots,v}{i_{1},i_{2},\ldots
,i_{k},i^{\prime}}\right] $ are linearly independent (because the $j_{1}$-th,
the $j_{2}$-th, etc., the $j_{k}$-th columns of the matrix $A\left[
\dfrac{1,2,\ldots,v}{i_{1},i_{2},\ldots,i_{k}}\right] $ are fragments of the
$j_{1}$-th, the $j_{2}$-th, etc., the $j_{k}$-th columns of $A\left[
\dfrac{1,2,\ldots,v}{i_{1},i_{2},\ldots,i_{k},i^{\prime}}\right] $, and
therefore any linear dependence relation between the latter would yield a
linear dependence relation between the former). Now we
know that:
The $j^{\prime}$-th column of $A\left[ \dfrac{1,2,\ldots,v}{i_{1}
,i_{2},\ldots,i_{k},i^{\prime}}\right] $ does not belong to the span of the
$j_{1}$-th, the $j_{2}$-th, etc., the $j_{k}$-th columns of $A\left[
\dfrac{1,2,\ldots,v}{i_{1},i_{2},\ldots,i_{k},i^{\prime}}\right] $.
The $j_{1}$-th, the $j_{2}$-th, etc., the $j_{k}$-th columns of $A\left[
\dfrac{1,2,\ldots,v}{i_{1},i_{2},\ldots,i_{k},i^{\prime}}\right] $ are
linearly independent.
Combining these two facts, we conclude that the $j_{1}$-th, the $j_{2}$-th,
etc., the $j_{k}$-th, and the $j^{\prime}$-th columns of $A\left[
\dfrac{1,2,\ldots,v}{i_{1},i_{2},\ldots,i_{k},i^{\prime}}\right] $ are
linearly independent. In other words, the columns of the matrix $A\left[
\dfrac{j_{1},j_{2},\ldots,j_{k},j^{\prime}}{i_{1},i_{2},\ldots,i_{k}
,i^{\prime}}\right] $ are linearly independent. Hence, $\det\left( A\left[
\dfrac{j_{1},j_{2},\ldots,j_{k},j^{\prime}}{i_{1},i_{2},\ldots,i_{k}
,i^{\prime}}\right] \right) \neq0$. But this contradicts (1). Thus, we
have found a contradiction, and Theorem 1 is proven. |
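As a sanity check (not part of the proof), Theorem 1 can be verified numerically on a small example; the rank-2 matrix below and the choice of rows and columns are my own.

```python
import numpy as np

# A 3x4 matrix of rank 2: the third row is the sum of the first two.
A = np.array([[1., 2., 3., 4.],
              [2., 0., 1., 1.],
              [3., 2., 4., 5.]])

# The 2x2 minor on rows (0, 1) and columns (0, 1) is det [[1,2],[2,0]] = -4 != 0.
rows, cols = [0, 1], [0, 1]
minor = np.linalg.det(A[np.ix_(rows, cols)])

# Every "bordered" 3x3 minor (one extra row i' and one extra column j') vanishes...
borders_vanish = all(
    abs(np.linalg.det(A[np.ix_(rows + [i], cols + [j])])) < 1e-9
    for i in range(A.shape[0]) if i not in rows
    for j in range(A.shape[1]) if j not in cols
)

# ...and, as Theorem 1 predicts, the rank is exactly k = 2.
rank = np.linalg.matrix_rank(A)
```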
When To Use Only Principal Root | Dividing top and bottom by $x^2 = |x^2| = \sqrt {x^4}$ is perfectly valid here - the square root always refers to taking the positive (or $0$) value. Dividing by $\sqrt{x^4}$ implies you're taking the positive square root, and your working is valid. |
Is Integration Greater than the Function We are Integrating? | It is true in general that:
$$
\int_a^b f(x)dx \ge \inf_{x\in(a,b)}(f(x))\cdot (b-a)
$$
$\inf_{x\in(a,b)}(f(x))$ is basically the minimum value of $f(x)$ for $x\in(a,b)$, if you are unfamiliar with infimum.
For example, if $f$ is decreasing, then $f(x)\ge f(b)$ for all $x<b$. Then:
$$
\int_a^b f(x)dx \ge \int_a^b f(b)dx = f(b)\cdot(b-a)
$$
If f is increasing, then $f(x)\le f(b)$ for all $x<b$. Then:
$$
\int_a^b f(x)dx \le \int_a^b f(b)dx = f(b)\cdot(b-a)
$$
Think about the integral as the area under $f$. Changing $f(x)$ to the constant $f(b)$ means that we find the area of the rectangle with base $(b-a)$ and height $f(b)$. Depending on whether $f$ is increasing or decreasing, that rectangle will have greater or smaller area than the region under the graph of $f$. |
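For concreteness, here is a quick numerical check of the decreasing case, using $f(x)=1/x$ on $[1,2]$ (my own choice of example) and a midpoint Riemann sum:

```python
import math

def riemann(f, a, b, n=100000):
    """Midpoint Riemann-sum approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

a, b = 1.0, 2.0
f = lambda x: 1.0 / x            # decreasing on [1, 2]

integral = riemann(f, a, b)      # exact value is ln 2, about 0.693
lower = f(b) * (b - a)           # rectangle of height f(b): here 0.5
```

Indeed $0.693 \ge 0.5$, as the inequality predicts.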
High school proofs in plane geometry | Since $BE$ is a diameter of $C_1$, we obtain $\measuredangle EDB=90^{\circ}$, which gives $\measuredangle BDC=90^{\circ}$.
Since $FB||DC$, we get $\measuredangle FDB=\measuredangle FBA=\measuredangle C$
and since $BE\perp FD$ and $BD\perp FC$, we obtain $\measuredangle E=\measuredangle FDB.$
Thus, $\measuredangle E=\measuredangle C$, which gives $BE=BC$ and $\frac{OE}{BC}=\frac{OE}{BE}=\frac{1}{2}$.
Since $BD$ is an altitude to hypotenuse of $\Delta EBC$, we obtain $BC^2=CD\cdot CE$.
Done! |
A lemma (possibly of Atiyah) in a proof of the fundamental theorems of invariant theory | This is in fact suggested in "On the heat equation and the Index Theorem" by Atiyah, Bott, and Patodi (1973) in Inventiones Mathematicae, Vol. 19 pp 279-230.
The result in particular is the lemma in Appendix I.
Although they only prove it for the orthogonal group, very similar arguments can be used to prove it for the symplectic group. |
Joint Convergence implication | If your first convergence holds for any sequence $(t_i)_{i\geq 0}$, then the second holds (by definition)
If your first convergence holds for a particular $t_i$, then the second does not hold in general. It may even not have a meaning. |
Is there a "natural" way to define a group operation on the set of size-$n$ subsets of a finite set? | No there is not.
Let $S$ be the set of $k$-element subsets of an $n$-element set $A$, $1\le k<n$.
If there were a group law $\circ$ on $S$ that deserved being called natural, then it would be invariant under the permutations of $S$ that are induced by the group $\operatorname{Sym}(A)$ of permutations of $A$. Especially, this action of $\operatorname{Sym}(A)$ must leave the neutral element of $(S,\circ)$ fixed. Since $\operatorname{Sym}(A)$ acts transitively on $S$ and $|S|>1$, this is not possible. |
Computing the standard error for an estimated probability | $\def\dto{\xrightarrow{\mathrm{d}}}\def\vec{\boldsymbol}$First, because$$
L(a; \vec{x}) = f_{\vec{X}}(\vec{x}; a) = \prod_{k = 1}^n \frac{x_k}{a} \exp\left( -\frac{x_k^2}{2a} \right) = \frac{1}{a^n} \left( \prod_{k = 1}^n x_k \right) \exp\left( -\frac{1}{2a} \sum_{k = 1}^n x_k^2 \right),\\
l(a; \vec{x}) = \ln(L(a; \vec{x})) = -\frac{1}{2a} \sum_{k = 1}^n x_k^2 - n \ln a + \sum_{k = 1}^n \ln x_k,
$$
then$$
\frac{\partial l}{\partial a}(a; \vec{x}) = \frac{1}{2a^2} \sum_{k = 1}^n x_k^2 - \frac{n}{a} \Longrightarrow \widehat{a}_n = \frac{1}{2n} \sum_{k = 1}^n X_k^2.
$$
Next, since $X_1, \cdots, X_n$ are i.i.d, then $X_1^2, \cdots, X_n^2$ are also i.i.d., and$$
f_{X_1}(x; a) = \frac{x}{a} \exp\left( -\frac{x^2}{2a} \right)\ (x > 0) \Longrightarrow f_{X_1^2}(y; a) = \frac{1}{2a} \exp\left( -\frac{y}{2a} \right)\ (y > 0).
$$
Thus,$$
D(\widehat{a}_n) = \frac{1}{4n^2} D\left( \sum_{k = 1}^n X_k^2 \right) = \frac{1}{4n} D(X_1^2) = \frac{a^2}{n}.
$$
Now, note that$$
p = P_a(X > 6) = \exp\left( -\frac{18}{a} \right) \Longrightarrow \widehat{p}_n = \exp\left( -\frac{18}{\widehat{a}_n} \right).
$$
Since $E(X_1^2) = 2a$, $D(X_1^2) = 4a^2$, by the central limit theorem,$$
\sqrt{n} · \frac{2 \widehat{a}_n - E(X_1^2)}{\sqrt{\smash[b]{D(X_1^2)}}} \dto N(0, 1) \Longrightarrow \sqrt{n} (\widehat{a}_n - a) \dto N(0, a^2).
$$
Define $g(x) = \exp\left( -\dfrac{18}{x} \right)$, so that $g'(a) = \dfrac{18}{a^2} \exp\left( -\dfrac{18}{a} \right)$. By the delta method,$$
\sqrt{n} (\widehat{p}_n - p) = \sqrt{n} (g(\widehat{a}_n) - g(a)) \dto N(0, (g'(a))^2 · a^2) = N\left( 0, \frac{324}{a^2} \exp\left( -\frac{36}{a} \right) \right).
$$
Thus approximately, $\widehat{p}_n - p \sim N\left( 0, \dfrac{324}{na^2} \exp\left( -\dfrac{36}{a} \right) \right)$, and$$
D(\widehat{p}_n) ≈ \frac{324}{na^2} \exp\left( -\frac{36}{a} \right) ≈ \frac{324}{n\widehat{a}_n^2} \exp\left( -\frac{36}{\widehat{a}_n} \right).
$$ |
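As a sanity check (not part of the derivation), the delta-method approximation $D(\widehat{p}_n)\approx (g'(a))^2\,D(\widehat{a}_n)$ can be tested by simulation; the true value $a=10$, the sample size, and the number of replications below are arbitrary choices.

```python
import math, random

random.seed(0)
a_true, n, reps = 10.0, 400, 3000

def p_hat(a, n):
    """Simulate n draws of X^2 ~ Exp(mean 2a) by inverse transform, return p-hat."""
    a_hat = sum(-2.0 * a * math.log(1.0 - random.random()) for _ in range(n)) / (2 * n)
    return math.exp(-18.0 / a_hat)

estimates = [p_hat(a_true, n) for _ in range(reps)]
mean = sum(estimates) / reps
empirical_var = sum((p - mean) ** 2 for p in estimates) / (reps - 1)

# Delta method: Var(p-hat) is approximately g'(a)^2 * Var(a-hat), Var(a-hat) = a^2/n.
g_prime = (18.0 / a_true**2) * math.exp(-18.0 / a_true)
delta_var = g_prime**2 * a_true**2 / n
```

The ratio `empirical_var / delta_var` should be close to 1 for large $n$.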
Why is a finite integral domain always field? | Remember that cancellation holds in domains. That is, if $c \neq 0$, then $ac = bc$ implies $a=b$. So, given $x$, consider $x, x^2, x^3,......$. Out of finiteness there would be a repetition sometime: $x^n = x^m$ for some $n >m$. Then, by cancellation, $x^{n-m} =1$, and $x$ has an inverse. |
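The repetition-of-powers argument can be watched directly in a small example, the field $\mathbb Z/7$ (any prime modulus works):

```python
p = 7  # Z/p is a finite integral domain for prime p

inverses = {}
for x in range(1, p):
    powers, y = [], x
    while y not in powers:        # finiteness forces a repetition of powers
        powers.append(y)
        y = (y * x) % p
    # The repetition x^n = x^m gives, after cancellation, x^k = 1 for some k,
    # so 1 appears among the powers and x^(k-1) is the inverse of x.
    inverses[x] = powers[powers.index(1) - 1]
```

For $p=7$ this yields, e.g., $2^{-1}=4$ and $3^{-1}=5$.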
If $G$ is a discrete topological group, is $(G,X,\theta)$ a $G$-space? | $G \times X$ is covered by the open sets $O_g = \{g\} \times X$ ($g \in G)$, and $\theta$ is continuous iff $\theta |_{O_g}$ is continuous for all $g$. Then we can say that the restriction of $\theta$ to $O_g$ is a bijection on $X$ (from the axioms of a group action). But we need this to be a continuous bijection. And this need not be true:
let $X$ be the Sorgenfrey line ($\mathbb{R}$ in the topology generated by all sets of the form $[a,b), a < b$)). Let $G = \{1,-1\}$ in the discrete topology (as a multiplicative group), and define $\theta(g,x) = gx$. Then for $g=-1$ we have that $x \rightarrow -x$ is not continuous on $X$, so $\theta$ is not a continuous action, although it is a valid group action. |
Common ratio of a GP - $a_p, a_q, a_r$ given $a_1, a_2 ...$ form an AP | Since
\begin{align*}
a_q^2 & = a_pa_r\\
a_q^2 -a_ra_q& = a_pa_r-a_ra_q\\
a_q(a_q-a_r)&=a_r(a_p-a_q)\\
\frac{a_q}{a_r}& =\frac{a_p-a_q}{a_q-a_r}\\
\frac{a_q}{a_r}& =\frac{d(p-q)}{d(q-r)}\\
\frac{a_q}{a_r}& =\frac{p-q}{q-r}\\
\end{align*}
My answer and yours doesn't match (they are close in form). Perhaps check your answer again. If you want the common ratio then it will be $\dfrac{a_r}{a_q}=\frac{q-r}{p-q}$. |
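A quick numeric check with my own example, the AP $a_n=n$, in which $a_1, a_2, a_4 = 1, 2, 4$ form a GP:

```python
a = lambda n: n                      # the AP a_n = n (common difference d = 1)
p, q, r = 1, 2, 4                    # indices with a_q^2 = a_p * a_r

assert a(q) ** 2 == a(p) * a(r)      # 4 == 4: the three terms are geometric

ratio_formula = (q - r) / (p - q)    # the claimed common ratio a_r / a_q
ratio_direct = a(r) / a(q)
```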
Is there a non-decreasing sequence $(a_n)$ such that $\sum 1/a_n=\infty$ and $\sum1/(n+a_n)<\infty$? | We use Abel's (or Pringsheim's) theorem:
If $b_n$ is decreasing and positive, and if $\sum b_n$ converges, then $\lim\limits_{n\rightarrow\infty} nb_n=0$.
Now, since $(a_n)$ is non-decreasing, $b_n={1\over n+a_n}$ is decreasing and positive, so the theorem applies:
if $\sum {1\over n+a_n}$ converged, we would have the string of implications:
$${n\over n+a_n}\rightarrow 0\quad \Rightarrow \quad{a_n\over n} \rightarrow\infty
\quad\Rightarrow\quad {n\over a_n}\rightarrow0\quad\Rightarrow \quad C:=\sup_n{n\over a_n}<\infty.$$
But then $${1\over a_n}={n+a_n\over a_n}\cdot{1\over n+a_n}\le (C+1)\cdot{1\over n+a_n},$$ so $\sum{1\over a_n}$ would converge by comparison, contradicting $\sum{1\over a_n}=\infty$. Hence no such sequence exists. |
Evaluating $\arccos(\cos\frac{15\pi }{4})$ | Here's one way to do it
$$\arccos\left(\cos\left(\frac{15}{4}\pi\right)\right)$$
$$=\arccos\left(\cos\left(\pi+2\pi+\frac{3}{4}\pi\right)\right)$$
$$=\arccos\left(\cos\left(\pi+\frac{3}{4}\pi\right)\right)$$
$$=\arccos\left(-\cos\left(\frac{3}{4}\pi\right)\right)$$
$$=\pi-\arccos\left(\cos\left(\frac{3}{4}\pi\right)\right)$$
$$=\pi-\frac{3}{4}\pi=\frac{\pi}{4}$$ |
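The same value drops out numerically with Python's math module:

```python
import math

# cos(15*pi/4) = cos(7*pi/4) = sqrt(2)/2, and acos returns values in [0, pi],
# so the result agrees with pi/4 up to floating-point error.
x = math.acos(math.cos(15 * math.pi / 4))
```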
Area of a quadrilateral inside a square | If we move the figure to a $xy$ axis, we can solve this problem by finding the line equations and intersection points. Consider this image:
Since we know the point coordinates in the image, we can easily find the line equation of lines 1, 2 and 3 (we can substitute the point values in the slope-intercept form equation $y = ax + b$ and find the equation of each line).
For line 1, the line equation is $y = 2x$.
For line 2, the line equation is $y = \frac{2 - x}{2}$
For line 3, the line equation is $y = \frac{1 - x}{2}$
To find point A, we have to find the intersection point of lines 1 and 3:
$$
2x = \cfrac{1 - x}{2} \Rightarrow x = \cfrac{1}{5} \;and \; y = \cfrac{2}{5} \Rightarrow A = \left(\cfrac{1}{5}, \cfrac{2}{5}\right)
$$
We can do the same with lines 1 and 2 in order to find point B:
$$
2x = \cfrac{2 - x}{2} \Rightarrow x = \cfrac{2}{5} \; and \; y = \cfrac{4}{5} \Rightarrow B = \left(\cfrac{2}{5}, \cfrac{4}{5}\right)
$$
The side of the square (we can call it $L$) will then be the distance $\overline{AB}$:
$$
L = \overline{AB} = \sqrt{(x_B - x_A)^2 + (y_B - y_A)^2 } = \sqrt{\cfrac{1}{25} + \cfrac{4}{25}} = \sqrt{\cfrac{1}{5}}
$$
Finally, the area of the square will be:
$$
Area = L^2 = \cfrac{1}{5}
$$ |
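The whole computation is short enough to replay in code:

```python
def intersect(m1, c1, m2, c2):
    """Intersection point of y = m1*x + c1 and y = m2*x + c2."""
    x = (c2 - c1) / (m1 - m2)
    return x, m1 * x + c1

# Line 1: y = 2x;  line 2: y = (2 - x)/2;  line 3: y = (1 - x)/2
A = intersect(2, 0, -0.5, 0.5)   # lines 1 and 3  ->  (1/5, 2/5)
B = intersect(2, 0, -0.5, 1.0)   # lines 1 and 2  ->  (2/5, 4/5)

area = (B[0] - A[0]) ** 2 + (B[1] - A[1]) ** 2   # L^2 = |AB|^2 = 1/5
```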
Derivative of convolution of non differentiable functions | If also the weak derivatives exist then the derivative is understood in the weak sense.
$f'$ is the unique weak derivative of $f$ if for all $C_c^\infty(\mathbb R)$ functions $\phi$ we have
$$\int f(x) \phi'(x) \, dx = - \int f'(x) \phi(x) \, dx$$
So plug in and use Fubini. |
How the red circle marked equations came ? | Multiply the first equation by $\frac{-a_2}{a_1}$ and add it to the second to get,
$$\Big(\frac{-a_2b_1}{a_1} + b_2 \Big)\frac{y}{z} + \frac{-a_2c_1}{a_1} + c_2 = 0$$
$$\Big(\frac{-a_2b_1 + a_1b_2}{a_1}\Big)\frac{y}{z} + \frac{-a_2c_1 + a_1c_2}{a_1}= 0$$
$$\Big(-a_2b_1 + a_1b_2\Big)\frac{y}{z} -a_2c_1 + a_1c_2= 0$$
$$\Big( a_1b_2 - a_2b_1\Big)\frac{y}{z} = a_2c_1 - a_1c_2$$
$$\frac{y}{z} = \frac{c_1a_2 - c_2a_1}{ a_1b_2 - a_2b_1}$$
$\frac{x}{z}$ can be found in a similar way.
$$\frac{x}{z} = \frac{b_1c_2 - b_2c_1}{a_1b_2 - a_2b_1}$$
\begin{equation}
\frac{x}{b_1c_2 - b_2c_1} = \frac{z}{a_1b_2 - a_2b_1}
\end{equation}
$$\frac{y}{z} = \frac{c_1a_2 - c_2a_1}{a_1b_2 - a_2b_1}$$
\begin{equation}
\frac{y}{c_1a_2 - c_2a_1} = \frac{z}{a_1b_2 - a_2b_1}
\end{equation}
From the above equations we have,
\begin{equation}
\frac{x}{b_1c_2 - b_2c_1} = \frac{y}{c_1a_2 - c_2a_1} = \frac{z}{a_1b_2 - a_2b_1}
\end{equation} |
Help with this proof (Index Sifting) | Simply change the index so let $j=k-r\iff k=j+r$ so since $j\in\{a,\ldots,b\}$ then $k\in\{a+r,\ldots,b+r\}$ hence
$$\sum_{j=a}^b x_j=\sum_{k=a+r}^{b+r}x_{k-r}$$
and remember that the variables are dummy so replace in the last sum $k$ by $j$ to find your result. |
Ellipse representation | you should exclude the case of $e=0$, which is a circle but not a ellipse. So the answer is A.
Additional notes
I think the $3/2$ in all your options should be $7/2$. Otherwise $(2,3/2)$ does not make any sense. |
If the columns of AB are linearly independent, how can I prove the columns of B must be linearly independent? | If $Bx=0$ has a nonzero solution $x$, then it also solves $ABx=0$. |
Squaring in a multivariable equation | Your calctulation and book answer are correct. A square root admits two solutions.
If for example you plug in $ y= \pm \dfrac 34$ you get $ x= \pm \dfrac 35$ and in total there are four correct cases possible as shown below; we can choose the sign in each case,i.e., from each quadrant depending on quadrant context.
All algebraic operations here are reversible. |
$n$-vertex $3$-edge-colored graphs with exactly $6$ automorphisms which preserve edge color classes, but permute the edge colors distinctly? | Proffering the following construction even though it feels "too simple".
Assume first that $n\equiv1\pmod3$. Consider a star shaped graph with one central vertex, and three "monocolor rays" of $(n-1)/3$ vertices emanating from it. Any graph automorphism must map the central vertex to itself, because it is the only vertex of degree three. It easily follows that a graph automorphism must permute the three rays. By construction all those 6 permutations preserve the edge-color classes.
Adding one isolated vertex covers the case $n\equiv2\pmod3$.
If $3\mid n$ replace the central vertex with a cycle of three so that one ray begins from each of the three vertices of the cycle. If $P_1,P_2,P_3$ are the three vertices of that cycle, connect $P_1$ and $P_2$ with an edge sharing the color of the ray starting from $P_3$, and permute that cyclically. |
For any integer $n\ge 1$, which of the following is/are true? | For option $\bf(1)$,
Let $~n=37^2>1000~$
Clearly, here $~w(n)=\text{number of prime divisors of $~n~$ counted with multiplicities}=2~$
and $~d(n)=\text{number of positive divisors of $~n~$}=3~$ and also $~\log(n)=\log(37^2)=2\log(37)>3=d(n)~.$
Therefore option $(1)$ is incorrect.
For option $\bf(2)$,
Let $~n=p_1^{r_1}p_2^{r_2}\cdots p_k^{r_k}~,$ then $~d(n)=(r_1+1)(r_2+1)\cdots (r_k+1)~$
Since the divisors of $~n~$ pair up as $~\left(t,\dfrac nt\right)~$ with $~\min\left(t,\dfrac nt\right)\le\sqrt n~$, we have $~d(n)\le 2\sqrt n<3\sqrt n~,~~~\forall ~n\in\mathbb N~.$
Hence $~\nexists~n\in\mathbb N~$ such that $~d(n)>3\sqrt n~,$ and therefore option $(2)$ is again incorrect.
For option $\bf(3)$,
Let $~n=p_1^{r_1}p_2^{r_2}\cdots p_k^{r_k}~,$ then
$~d(n)=(r_1+1)(r_2+1)\cdots (r_k+1)~$
$~w(n)=r_1+r_2+\cdots r_k~$
$~v(n)=\text{number of distinct prime divisors of $~n~$}=k~$
So $~2^{w(n)}=2^{r_1+r_2+\cdots +r_k}~$ and $~2^{v(n)}=2^k~.$
Clearly, $~2^k\le (r_1+1)(r_2+1)\cdots (r_k+1)\le 2^{r_1+r_2+\cdots +r_k}~,$
which implies that for all $~n~$, $~2^{v(n)}\le d(n) \le 2^{w(n)}~,$ and therefore option $(3)$ is correct.
For option $\bf(4)$,
Let $~n=9=3^2~$ and $~m=3\times 7=21~,$ then $~w(n)=w(m)=2~$ and $~d(n)=3~, ~~d(m)=4$
So there exist $~n~$ and $~m~$ with $~w(n)=w(m)~$ but $~d(n)\ne d(m)~,$ and therefore option $(4)$ is also incorrect. |
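Option (3)'s chain of inequalities can also be spot-checked by brute force (the factorization helper below is my own):

```python
def factorize(n):
    """Prime factorization of n as a dict {prime: exponent}."""
    factors, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

ok = True
for n in range(2, 2000):
    f = factorize(n)
    v = len(f)                  # v(n): number of distinct prime divisors
    w = sum(f.values())         # w(n): prime divisors counted with multiplicity
    d = 1
    for r in f.values():
        d *= r + 1              # d(n) = (r_1 + 1)(r_2 + 1)...(r_k + 1)
    if not (2 ** v <= d <= 2 ** w):
        ok = False
```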
Average mileage problem | The mean cost of a gallon, $c$, is the total amount of dollars spent divided by the total amount of gallons:
$$
c=
{
x\times 3.50 + y\times 1.78 + z \times 2.78
\over
x + y + z
}.
$$
In the specific example,
$$
c=
{
9000 \times 3.50 + 4000\times 1.78 + 100 \times 2.78
\over
9000 + 4000 + 100
} = $2.97/{\rm gallon}.
$$ |
Understanding the proof of Excercise 10b from chapter 5 of Spivak’s Calculus. | $$\lim_{x\to 0}f(x)=L\iff$$ $$ \forall e>0\,\exists d>0\, \forall x\,(0<|x|<d\implies |f(x)-L|<e)\iff$$ $$ \forall e>0\, \exists d>0\,\forall x'\,(0<|x'-a|<d\implies |f((x'-a))-L|<e) \iff$$ $$ \lim_{x'\to a}f(x'-a)=L \iff$$ $$ \lim_{x\to a}f(x-a)=L.$$ Regardless of whether or not $a$ is a $0$ of $f.$ |
Can the Sieve of Eratosthenes have a range? | No, since for the sieve to function is has to strike out all numbers in the range which are multiples of primes, including multiples of primes smaller than the range in question. If you started at 1000, where would these primes come from? |