title | upvoted_answer
---|---|
How many ways can a television producer air commercials... | Let $X,Y$, and $Z$ stand for the commercials $B,C$, and $D$ in any order. Counting the first and last positions, there are four possible slots for $A$, indicated by underscores: $\_X\_Y\_Z\_$. We can put at most one $A$ commercial in each of those slots, so to schedule the $A$ commercials, we need only decide which $3$ of the $4$ possible slots to use; we can do that in $\binom43$ ways. Then we have to decide on the order of $B,C$, and $D$ in the $X,Y$, and $Z$ slots. How many permutations of $B,C$, and $D$ are possible? Can you finish it from there?
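For reference (a completion of mine, not part of the original hint): the two counts multiply, giving $$\binom43\cdot 3! = 4\cdot 6 = 24$$ possible schedules. |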
Factoring out the trace of a matrix | Let $k_{i,j}$ be the element in row $i$ and column $j$ of matrix $K$ of size $N\times N$. Then $\mathbf{1}^TK\mathbf{1}$ is the sum of all the elements of $K$, so $$\eta\mathbf{1}^TK\mathbf{1}=\eta\sum_{i=0}^{N-1}\sum_{j=0}^{N-1}k_{i,j}$$
Now the $i^{th}$ diagonal element of matrix product $K(\eta\mathbf{1}\mathbf{1}^T)$ will be the sum of the elements in row $i$ of matrix $K$
$$\eta\sum_{j=0}^{N-1}k_{i,j}$$
leading to the trace (sum of all $N$ diagonal elements) being
$$\mathbf{Tr}K(\eta\mathbf{1}\mathbf{1}^T)=\eta\sum_{i=0}^{N-1}\sum_{j=0}^{N-1}k_{i,j}=\eta\mathbf{1}^TK\mathbf{1}$$
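A quick numerical sanity check in R (my own snippet, not from the original answer):
set.seed(1)
N <- 5; eta <- 0.7
K <- matrix(rnorm(N^2), N, N)
ones <- rep(1, N)
lhs <- eta * as.numeric(t(ones) %*% K %*% ones)   ## eta * 1^T K 1
rhs <- sum(diag(K %*% (eta * ones %*% t(ones))))  ## Tr( K (eta 1 1^T) )
all.equal(lhs, rhs)                               ## TRUE
Both sides agree, as the identity predicts. |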
Find surface area when this function is rotated around the y-axis. $y = \frac{1}{3} x^{\frac{3}{2}}$ | $$\text{Let }\begin{bmatrix}x \\ \mathrm dx\end{bmatrix}=\begin{bmatrix}1/4\cdot\cot^2\theta\\ -1/2\cdot \cot\theta\csc^2\theta\mathrm d\theta\end{bmatrix}$$
$$\begin{aligned}\int x\sqrt{1+\dfrac{1}{4x}}\mathrm dx&=-\dfrac{1}{8}\int\cot^2\theta \sec\theta \cot\theta\csc^2\theta\mathrm d\theta\\&=-\dfrac{1}{8}\int\dfrac{\cos^2\theta}{\sin^2\theta}\cdot\dfrac{1}{\cos\theta}\cdot\dfrac{\cos\theta}{\sin\theta}\cdot\dfrac{1}{\sin^2\theta}\mathrm d\theta\\ &=-\dfrac{1}{8}\int\cot^2\theta\csc^3\theta\mathrm d\theta\end{aligned}$$
Now write $\cot^2\theta\csc^3\theta=(\csc^2\theta-1)\csc^3\theta=\csc^5\theta-\csc^3\theta$ and use the standard reduction formula for $\int\csc^n\theta\,\mathrm d\theta$. This is doable but messy. Can you proceed?
Post OP's edit
$$\begin{aligned}\int x\sqrt{1+\dfrac{1}{4}x}\mathrm dx&=\dfrac{1}{2}\int x\sqrt{x+4}\mathrm dx\end{aligned}$$
Let $u=x+4\iff x=u-4\implies \mathrm du=\mathrm dx$
$$\begin{aligned}\dfrac{1}{2}\int x\sqrt{x+4}\mathrm dx&=\dfrac{1}{2}\int \sqrt{u}(u-4)\mathrm du\\&=\dfrac{1}{2}\int \left(u^{3/2}-4\sqrt{u}\right)\mathrm du\end{aligned}$$
Can you proceed? |
Prove linear operator is a reflection | If you find non-$0$ vectors $x,y\in\Bbb R^2$ such that $r(x)=x$ and $r(y)=-y,$ then to verify that they are orthogonal, you need only take their dot product.
As for actually finding them, proceed by cases. If $\sin\phi=0,$ then you shouldn't have any trouble, so suppose not. Solving your first equation for $x_2$ gives $$x_2=\frac{1-\cos\phi}{\sin\phi}x_1,$$ whence substitution into your second equation yields $$x_1\sin\phi-\frac{1-\cos^2\phi}{\sin\phi}x_1=0\\x_1\sin\phi-\frac{\sin^2\phi}{\sin\phi}x_1=0\\0=0.$$ Uninformative, at first glance. What that means, though, is that you have a free variable (in fact, you have to, since the eigenspace associated to $1$ is one-dimensional). Thus, you can take any $x_1$ you like, then put $x_2=\frac{1-\cos\phi}{\sin\phi}x_1,$ and you're set!
Addendum: Personally, I'd let $x_1=\sin\phi,$ for simplicity, for then $x_2=1-\cos\phi.$ In fact, $x=\begin{pmatrix}\sin\phi\\1-\cos\phi\end{pmatrix}$ works even in the case that $\sin\phi=0,$ so that's a nice general solution that lets you out of having to proceed by cases! |
Do All Subsets of $\Bbb N$ Have Predicates? | The proof is correct, assuming that you mean the language of arithmetic, or any other countable language.
Consider, on the other hand, the uncountable language where for every $S\subseteq\Bbb N$ we have a predicate symbol $R_S$ which is interpreted as $S$ itself in $\Bbb N$. There, indeed, every subset of the natural numbers is definable by a predicate. But since the language is uncountable, this is not a problem. |
Prove that $ ax^2+bx+c=0 $ has at least one root in $(0,1)$ if $10a+12b+15c=0$ | Consider $x = \frac{10}{12}$:
$$\begin{align*}
f\left(\frac{10}{12}\right)
&= a\left(\frac{10}{12}\right)^2+b\left(\frac{10}{12}\right)+c\\
&= \frac{10}{144}\left(10a+12b+\frac{144}{10}c\right)\\
&= \frac{10}{144}\left(10a+12b+15c-\frac{6}{10}c\right)\\
&= -\frac{6}{144}c
\end{align*}$$
And consider $x=0$:
$$f(0) = 0^2a + 0b + c = c$$
If $c\ne 0$, by intermediate value theorem, there must be an $x\in\left(0, \frac{10}{12}\right)\subset(0,1)$ which satisfies $f(x)= 0$.
If $c=0$, $\frac{10}{12}\in(0,1)$ is a root.
How I obtained $x = \frac{10}{12}$:
I assume $x = 12k$ and $x^2 = 10k$ for some real number $k$. By solving $10k = (12k)^2$, $k$ is chosen to be the non-zero root $k = \frac{10}{144}$, which appears as the multiplier above. And $x = 12k = \frac{10}{12}$.
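A quick numerical illustration in R (my own snippet, with coefficients chosen arbitrarily subject to the constraint):
a <- 3; b <- 1
c <- (-10*a - 12*b) / 15           ## enforce 10a + 12b + 15c = 0
f <- function(x) a*x^2 + b*x + c
uniroot(f, c(0, 1))$root           ## a root strictly inside (0, 1)
Here uniroot succeeds because $f$ changes sign on $(0,1)$, exactly as argued above. |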
What is the intuition behind Gordan's theorem? | Here's one way to look at it.
The first condition can be written as $ A^T y > 0$. Gordan's theorem says that either the range of $ A^T $ intersects the positive orthant, or the null space of $ A $ intersects the nonnegative orthant (at a point other than the origin).
Because the null space of $ A $ and the range of $ A^T$ are orthogonal complements of each other, this result seems geometrically plausible. |
Is there any one-to-many notion of convolution? | I've figured out the answer! The given integral can be expressed in terms of an $n$-dimensional convolution.
We may write
$$G(x_1, x_2, \ldots, x_n)=\int f(t) g(t-x_1, t-x_2, \ldots, t-x_n) dt \\
=\iiiint f(t)\delta(t-t_2)\delta(t-t_3)\ldots\delta(t-t_n) g(t-x_1, t_2-x_2, \ldots, t_n-x_n) dtdt_2dt_3\ldots dt_n$$
Where $\delta(t)$ is the Dirac delta function. Hence, we may write:
$$G(x_1, x_2, \ldots, x_n) = \hat{f}(x_1,\ldots, x_n) * g(x_1, x_2, \ldots, x_n)$$
Where $\hat{f}(x_1,\ldots, x_n) = f(x_1)\delta(x_1-x_2)\delta(x_1-x_3)\ldots\delta(x_1-x_n)$. |
Is there a linear transformation on $\mathbb{R^3}$ whose image and kernel are the same? | No.
Suppose Image $=$ Kernel $\implies$ $r=\text{dim (Image)}=\text{dim (Kernel)}$. Then by the Rank–Nullity Theorem, $r+r=3$, which has no integer solution, hence a contradiction. |
Killing form for a non-abelian Lie Algebra of dimension $2$ | That is not true. For example, the Killing form of any nilpotent Lie algebra is zero.
In some books (for example in my version of Kirillov Jr.'s book) it is stated that the converse is true, but that is wrong. |
Finding minimum value of $\mu$ in cubic $x^3-\lambda x^2+\mu x-6=0$ | Hint
Assume the roots are $a,b,c.$ Then
$$x^3-\lambda x^2+\mu x-6=(x-a)(x-b)(x-c),$$ from where
$$abc=6, ab+ac+bc=\mu.$$ Since $a,b,c>0$ we have that (AM-GM inequality)
$$\mu=ab+ac+bc\ge 3\sqrt[3]{a^2b^2c^2}=3\sqrt[3]{36}.$$
Is it possible to achieve the minimum value?
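For reference, the minimum is attained (my own completion of the hint): equality in AM–GM forces $ab=bc=ca$, i.e. $a=b=c=\sqrt[3]{6}$, which is consistent with $abc=6$ and gives
$$\mu_{\min}=3\cdot 6^{2/3}=3\sqrt[3]{36},\qquad \lambda=a+b+c=3\sqrt[3]{6}.$$
So yes, the minimum value is achieved. |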
How to solve Cauchy problem if equation is not linear and there is no $t$ argument? | Assuming you mean $x'-3x^{2/3} = 0 \iff x' = 3x^{2/3}$: in that case you separate variables and integrate both sides:
$$
\frac{dx}{dt} = 3x^{2/3} \iff \int dt = \int \frac{dx}{3x^{2/3}}
$$
Can you finish?
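For reference, one way the computation finishes (my own completion): the right-hand side integrates to $x^{1/3}$, so
$$t+C=x^{1/3}\implies x(t)=(t+C)^{3}.$$
If the Cauchy datum is $x(t_0)=0$, note that $x\equiv 0$ is also a solution, so uniqueness can fail here. |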
Differentiating an integral with the Fundamental Theorem of Calculus | The main hypothesis for applying the Leibniz integral rule is that the derivative of $e^{xy}/y$ in $x$ must exist and be continuous on a rectangle whose base contains the domain of integration. Clearly $e^{xy}/y$ cannot be extended continuously to any point at which $y=0$, due to the division by $0$, so it certainly cannot be differentiable there. Thus it fails the main hypothesis.
If you wanted to calculate $f'(x)$ in this case you would have to do it by hand at $x=0$, but the Leibniz formula you derived should work fine for $x\neq 0$, as $\int_0^{x+h}-\int_0^{x} = \int_x^{x+h}$ stays away from the bad point when $x\neq 0$ (so long as you first prove these integrals exist). |
The sum of two numbers a and b is $\sqrt{18}$ and their difference is $\sqrt{14}$. How do I find ${\log_ba}$? | Check out $$ab = \frac{(a+b)^2-(a-b)^2}{4}=\frac{18-14}{4}=1.$$
So $a = \frac{1}{b}$. |
Exercise 1.11 Harris Algebraic Geometry: A First Course | I'll look at problem (a). On $U_0 = \{Z_0 \ne 0\}$ you are intersecting the affine curves $$z_2 = z_1^2, \; \; z_3 = z_1 z_2.$$
From these equations it already follows that $$z_2^2 = z_1^2 z_2 = z_1 z_3,$$ so it looks like the twisted cubic.
When $Z_0 = 0$, $V(F_1,F_2)$ reduces to $Z_1 = 0;$ i.e. the projective line $$\{[0 : 0 : z_2 : z_3]\}.$$
So $V(F_1,F_2)$ is the union of the twisted cubic and the line $\{[0:0:z_2:z_3]\}$. This line does not connect two distinct points of the cubic but that is not actually claimed in the exercise as written in Harris' book. |
Poisson process and interarrival times | Your argument seems on the right track, yet it admits further improvement. In fact, should there be more than two disturbances within the first ten minutes, the computer would definitely crash, since some two of those disturbances must occur within five minutes of each other. Therefore, a correct formula should predict
$$
\mathbb{P}\left(\left\{X_{j+1}\le\frac{1}{12}\text{ for some }j\in\left\{1,2,\cdots,n-1\right\}\right\}\right)=1
$$
for all $n>2$.
Here is a possible alternative.
Let $N_t$ be a Poisson process with $N_1\sim\text{Poiss}(\lambda)$. Take "hour" as the unit of time, and $\lambda=3$.
Let $t=0$ be the time at present, with $N_0=0$. Let $t_1=1/12$ be the largest inter-arrival time that will trigger a crash (i.e., $5$ minutes). Let $t_2=1/6$ be the length of the crash-or-not window (i.e., $10$ minutes).
Define
$$
\tau=\inf\left\{t>0:N_t>0\right\},
$$
the moment when the first disturbance occurs. Note that for all $t>0$, the first disturbance occurs before the moment $t$ if and only if at least one disturbance has occurred by the moment $t$, for which
$$
\mathbb{P}\left(\tau\le t\right)=\mathbb{P}\left(N_t>0\right).
$$
This implies that
$$
\tau\sim\text{Exp}\left(\lambda\right).
$$
Let $f_{\tau}(t)=\lambda e^{-\lambda t}$ be the probability density of $\tau$.
The target is to figure out $\mathbb{P}\left(\left\{\text{crash before }t_2\right\}\right)$. Note that the computer would crash if and only if one of the following three mutually exclusive cases takes place.
$\tau\le t_1$ and $N_{\tau+t_1}-N_{\tau}>0$ (the first disturbance occurs within the first five minutes, and upon this occurrence, there is at least one disturbance in the next five minutes).
$\tau\le t_1$ and $N_{\tau+t_1}-N_{\tau}=0$ and $N_{t_2}-N_{\tau+t_1}>1$ (the first disturbance occurs within the first five minutes, but it does not contribute to the disturbances that cause a crash within the first ten minutes).
$\tau>t_1$ and $N_{t_2}>1$ (no disturbance occurs in the first five minutes, and more than one disturbance occurs in the last five minutes).
It suffices to calculate the probability of each of the above three cases.
Note that
\begin{align}
\mathbb{P}\left(\tau\le t_1,N_{\tau+t_1}-N_{\tau}>0\right)&=\mathbb{P}\left(N_{\tau+t_1}-N_{\tau}>0|\tau\le t_1\right)\mathbb{P}\left(\tau\le t_1\right)\\
&=\mathbb{P}\left(N_{\tau+t_1-\tau}>0|\tau\le t_1\right)\mathbb{P}\left(\tau\le t_1\right)\\
&=\mathbb{P}\left(N_{t_1}>0\right)\mathbb{P}\left(\tau\le t_1\right)\\
&=\mathbb{P}\left(N_{t_1}>0\right)\mathbb{P}\left(N_{t_1}>0\right),
\end{align}
and that
\begin{align}
&\mathbb{P}\left(\tau\le t_1,N_{\tau+t_1}-N_{\tau}=0,N_{t_2}-N_{\tau+t_1}>1\right)\\
&=\mathbb{P}\left(N_{\tau+t_1}-N_{\tau}=0,N_{t_2}-N_{\tau+t_1}>1|\tau\le t_1\right)\mathbb{P}\left(\tau\le t_1\right)\\
&=\mathbb{P}\left(N_{\tau+t_1}-N_{\tau}=0|\tau\le t_1\right)\mathbb{P}\left(N_{t_2}-N_{\tau+t_1}>1|\tau\le t_1\right)\mathbb{P}\left(\tau\le t_1\right)\\
&=\mathbb{P}\left(N_{\tau+t_1}-N_{\tau}=0|\tau\le t_1\right)\mathbb{P}\left(N_{t_2}-N_{\tau+t_1}>1,\tau\le t_1\right)\\
&=\mathbb{P}\left(N_{\tau+t_1-\tau}=0|\tau\le t_1\right)\mathbb{P}\left(N_{t_2}-N_{\tau+t_1}>1,\tau\le t_1\right)\\
&=\mathbb{P}\left(N_{t_1}=0\right)\mathbb{P}\left(N_{t_2}-N_{\tau+t_1}>1,\tau\le t_1\right)\\
&=\mathbb{P}\left(N_{t_1}=0\right)\int_0^{t_1}\mathbb{P}\left(N_{t_2}-N_{\tau+t_1}>1|\tau=t\right)f_{\tau}(t){\rm d}t\\
&=\mathbb{P}\left(N_{t_1}=0\right)\int_0^{t_1}\mathbb{P}\left(N_{t_2-\tau-t_1}>1|\tau=t\right)f_{\tau}(t){\rm d}t\\
&=\mathbb{P}\left(N_{t_1}=0\right)\int_0^{t_1}\mathbb{P}\left(N_{t_1-t}>1\right)f_{\tau}(t){\rm d}t,
\end{align}
and that
\begin{align}
\mathbb{P}\left(\tau>t_1,N_{t_2}>1\right)&=\mathbb{P}\left(N_{t_1}=0,N_{t_2}>1\right)\\
&=\mathbb{P}\left(N_{t_1}=0,N_{t_2}-N_{t_1}>1\right)\\
&=\mathbb{P}\left(N_{t_1}=0\right)\mathbb{P}\left(N_{t_2}-N_{t_1}>1\right)\\
&=\mathbb{P}\left(N_{t_1}=0\right)\mathbb{P}\left(N_{t_2-t_1}>1\right)\\
&=\mathbb{P}\left(N_{t_1}=0\right)\mathbb{P}\left(N_{t_1}>1\right).
\end{align}
All three of these terms can now be computed using $N_t\sim\text{Poiss}(\lambda t)$. Thanks to the mutual exclusivity, their sum gives the desired probability. The result reads
$$
\mathbb{P}\left(\left\{\text{crash before }t_2\right\}\right)=1-\frac{49}{32\sqrt{e}}\approx 7.125\%.
$$
This result can be verified by numerical simulation. For example, generate a partition of $t\in\left[0,1/6\right]$ with $10^3$ equi-spaced grid points, and generate $10^6$ trajectories of $N_t$ for $t\in\left[0,1/6\right]$ with $N_1\sim\text{Poiss}(3)$. With these settings, one simulation gives $71359$ trajectories in which at least two disturbances occur within $1/12$ unit of time of each other, meaning that the probability of a crash within the first ten minutes would be around $7.1359\%$. This matches the above theoretical result very well.
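A compact version of such a simulation in R (my own sketch, sampling the arrival times directly rather than on a grid):
set.seed(42)
lambda <- 3; t1 <- 1/12; t2 <- 1/6
crash <- replicate(1e5, {
  n <- rpois(1, lambda * t2)        ## number of disturbances in [0, t2]
  if (n < 2) FALSE
  else {
    arr <- sort(runif(n, 0, t2))    ## given the count, arrivals are iid uniform
    any(diff(arr) <= t1)            ## crash iff two arrivals within t1 of each other
  }
})
mean(crash)                         ## should land near 1 - 49/(32*sqrt(exp(1)))
This should come out near $7.1\%$, consistent with both computations above. |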
Cut a convex polytope in $2^n$ parts | Making an answer out of Tal-Botvinnik's comment.
In $\mathbb{R}^3$, consider an elongated cylinder of radius $r > 0$ and height $h$, with $h$ much bigger than $r$. If $d = (1,1,1)$ is the direction of the height and $V$ the volume of the cylinder intersected with the corresponding octant, you can find the simple bound
$$
V \geq \pi r^2 (h/2 - \sqrt{3}r).
$$
For a fixed volume of the cylinder, as $r\rightarrow 0$, $V$ tends to half the volume of the cylinder.
The same argument works in $\mathbb{R}^n$, replacing $\pi r^2$ by the area of a disk of radius $r$ and $\sqrt{3}$ by $\sqrt{n}$. |
If $X^n+Y^n+1$ is reducible, is the degree divisible by the characteristic? | I will show that if the characteristic, say $p$, of the field does not divide $n$ then $X^n + Y^n + 1$ is irreducible in $F[X,Y]$. In $F[Y]$ the polynomial $Y^n + 1$ is separable and of course not constant. Let $\pi(Y)$ be an irreducible factor of $Y^n+1$ in $F[Y]$. Then in the ring $F[X,Y] = F[Y][X]$, we consider the polynomial $X^n + Y^n + 1$ as a polynomial in $X$ with coefficients in $F[Y]$ and see that it is Eisenstein with respect to $\pi(Y)$. Therefore by the Eisenstein irreducibility criterion (applied to the PID $F[Y]$, not ${\mathbf Z}$) our polynomial is irreducible in $F[Y][X] = F[X,Y]$.
I like to use this kind of example in some courses for $X^n + Y^n - 1$ instead, since then $Y-1$ is a visible linear factor of $Y^n - 1$ and one can say directly (without the abstract choice of $\pi(Y)$) that $X^n + Y^n - 1$ is Eisenstein with respect to $Y-1$ when $p$ doesn't divide $n$ so it is irreducible in $F[Y][X] = F[X,Y]$. This makes (some) students get a new appreciation for the scope of the Eisenstein irreducibility criterion. |
If a CW Complex captures the topology of manifold, what captures its geometry? | If you build a mesh, then the different line segments also each have a length. A CW complex wouldn't know about that because that is part of the geometry. The CW complex only describes which line segments are attached to which nodes. You could make some line segments very short and others much longer. The object will look very different because the geometry is very different but the topology didn't change. |
Summing over components of a basis: Coalgebra | Regarding your first question: no, we are not summing elements of any basis. This is actually Sweedler's notation for the comultiplication $\Delta: H\to H\otimes H$. What it does is suppress the explicit summation indices via the use of the abstract summation indices $_{(1)}$ and $_{(2)}$. So, instead of writing $\Delta(h) = \sum_{i,j=1}^{k} h_{i}\otimes h_{j}$ we use $\Delta(h) = \sum h_{(1)}\otimes h_{(2)}$. This is more abstract, but concrete enough to handle situations in which the conventional sigma notation would produce rather cumbersome expressions. For example, the comultiplication's defining condition is coassociativity, i.e.
$$(\Delta\otimes Id)\circ\Delta=(Id\otimes\Delta)\circ\Delta$$
If we apply Sweedler's notation, we get the following equivalent expression of coassociativity at the element's level:
$$(\Delta\otimes Id)\circ\Delta(h)=(\Delta\otimes Id)(\sum h_{(1)}\otimes h_{(2)})=\sum \Delta(h_{(1)})\otimes h_{(2)}$$
and
$$(Id\otimes\Delta)\circ\Delta(h)=(Id\otimes\Delta)(\sum h_{(1)}\otimes h_{(2)})=\sum h_{(1)}\otimes \Delta(h_{(2)})$$
thus, coassociativity means that for any element $h\in H$:
$$\sum \Delta(h_{(1)})\otimes h_{(2)}=\sum h_{(1)}\otimes \Delta(h_{(2)})=\sum h_{(1)}\otimes h_{(2)}\otimes h_{(3)}$$
(try expressing this using the usual tensor-like summation indices).
Now, regarding your second question, I guess you are actually talking about the relation:
$$\Delta\circ m=(m\otimes m)\circ(Id\otimes \tau \otimes Id)\circ(\Delta \otimes \Delta)$$
which is the part of "compatibility" conditions between the algebraic and coalgebraic structure of $H$, in order for $H$ to be a bialgebra.
Notice that both of the above maps (LHS and RHS) are of the form: $H\otimes H\rightarrow H\otimes H$. (see also the diagram below for a more detailed analysis of the maps). Let's compute the LHS:
$$\Delta\circ m(g\otimes h)=\Delta(gh)=\sum (gh)_{(1)}\otimes (gh)_{(2)}$$
and the RHS:
$$(m\otimes m)\circ(Id\otimes \tau \otimes Id)\circ(\Delta \otimes \Delta)(g\otimes h)= \\ (m\otimes m)\circ(Id\otimes \tau \otimes Id)\Big(\sum g_{(1)}\otimes g_{(2)}\otimes h_{(1)}\otimes h_{(2)}\Big)=(m\otimes m)\Big(\sum g_{(1)}\otimes h_{(1)}\otimes g_{(2)}\otimes h_{(2)}\Big)=\sum g_{(1)}h_{(1)}\otimes g_{(2)}h_{(2)}$$
so your condition (part of the definition of a bialgebra and a Hopf algebra) can now be equivalently written as:
$$\Delta(gh)=\sum (gh)_{(1)}\otimes (gh)_{(2)}=\sum g_{(1)}h_{(1)}\otimes g_{(2)}h_{(2)}=\Delta(g)\Delta(h)$$
and actually says that: the comultiplication is an algebra homomorphism, between the algebras $H$ and $H\otimes H$ (equipped with its usual tensor product algebra structure).
An equivalent formulation of the above is the commutativity of the corresponding diagram (not reproduced here). |
Prove that if $x \neq 0$, then if $ y = \frac{3x^2+2y}{x^2+2}$ then $y=3$ | Starting from $y = \frac{3x^2+2y}{x^2+2}$, multiplying both sides of the equation by $x^2+2$ results in an equivalent equation, because that factor is never $0$ (in the reals at least).
You end up with $yx^2 + 2y = 3x^2+2y$.
Subtract $2y$ from both sides (always legitimate):
$yx^2=3x^2$
Since $x\neq 0$ we can divide both sides by $x^2$ and get $y=3$ |
Distance between two circles with known size and intersection area | Given $r_1,r_2$ and
\begin{align}
k&=\frac{\mathrm{intersection\ area}}{\pi(r_1^2+r_2^2)}
\end{align}
you can try
Halley's method
for approximation, starting with some reasonable initial guess for $d$, for example, $d_0=\tfrac12(r_1+r_2)$:
\begin{align}
d_{{n+1}}
&=d_{n}-{\frac {2f(d_{n},r_1,r_2,k)f'(d_{n},r_1,r_2)}{2{[f'(d_{n},r_1,r_2)]}^{2}-f(d_{n},r_1,r_2,k)f''(d_{n},r_1,r_2)}}
,
\end{align}
where
\begin{align}
f(d,r_1,r_2,k)&=
-\pi(r_1^2+r_2^2)\,k
+r_1^2\arccos\left(\frac{d^2+r_1^2-r_2^2}{2d r_1}\right)
\\
&+r_2^2\arccos\left(\frac{d^2+r_2^2-r_1^2}{2d r_2}\right)
\\
&-\tfrac12\sqrt{(d+r_1+r_2)(-d+r_1+r_2)(d-r_1+r_2)(d+r_1-r_2)}
,\\
f'(d,r_1,r_2)&=
-\frac{\sqrt{2d^2 r_1^2-d^4+2d^2r_2^2-r_1^4+2r_1^2 r_2^2-r_2^4}}d
,\\
f''(d,r_1,r_2)&=
\frac{(d^2+r_2^2-r_1^2)(d^2+r_1^2-r_2^2)}{d^2\sqrt{(d+r_1+r_2)(-d+r_1+r_2)(d-r_1+r_2)(d+r_1-r_2)}}
.
\end{align}
For example, this R-code
##
r1 <- 7
r2 <- 9
k <- 0.0702529004662957 ## fraction (intersection area)/(pi*(r1^2+r2^2))
f <- function(d,r1,r2,k){
-pi*(r1^2+r2^2)*k+r1^2*acos(1/2*(d^2+r1^2-r2^2)/d/r1)+r2^2*acos(1/2*(d^2+r2^2-r1^2)/d/r2)-1/2*sqrt((d+r1+r2)*(-d+r1+r2)*(d-r1+r2)*(d+r1-r2))
}
df <- function(d,r1,r2){
-(2*d^2*r1^2-d^4+2*d^2*r2^2-r1^4+2*r1^2*r2^2-r2^4)^(1/2)/d
}
ddf <- function(d,r1,r2){
(d^2+r2^2-r1^2)*(d^2+r1^2-r2^2)/(d+r1+r2)^(1/2)/(-(d-r1+r2)*(-r1+d-r2)*(d+r1-r2))^(1/2)/d^2
}
F <- function(x,r1,r2,k){
x-2*f(x,r1,r2,k)*df(x,r1,r2)/(2*df(x,r1,r2)^2-f(x,r1,r2,k)*ddf(x,r1,r2))
}
d <- (r1+r2)/2 ## initial approximation
d <- F(d,r1,r2,k)
d <- F(d,r1,r2,k)
d <- F(d,r1,r2,k)
d <- F(d,r1,r2,k)
results in successive approximations:
8
11.85015
11.99998
12
12
Edit
You are correct that the total area is not the same as the sum of the areas of the two circles, but this can be easily corrected.
Let $S_x$ and $S_t$ be the intersection area and the total area, respectively,
then
\begin{align}
S_t&=\pi(r_1^2+r_2^2)-S_x
.
\end{align}
Consider two fractions, $k$ and $q$,
\begin{align}
k&=\frac{S_x}{\pi(r_1^2+r_2^2)}
,\\
q&=\frac{S_x}{S_t}
,
\end{align}
so, given $q$ you can find the corresponding $k$ as
\begin{align}
k&=1-\frac1{q+1}
\end{align}
and use the procedure above as before.
Just add a function
Fq <- function(x,r1,r2,q){
vf <- f(x,r1,r2,1-1/(q+1))
x-2*vf*df(x,r1,r2)/(2*df(x,r1,r2)^2-vf*ddf(x,r1,r2))
}
and use it instead of $F$.
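For instance, a possible continuation of the snippet above (mine), with $q$ derived from the earlier $k$ via $q=k/(1-k)$:
q <- k / (1 - k)        ## fraction (intersection area)/(total area)
d <- (r1 + r2) / 2      ## initial approximation
for (i in 1:4) d <- Fq(d, r1, r2, q)
d
This reproduces the value $d=12$ from the example above. |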
General solution of a second order PDE | Hint: look for a change of variables $t = T^p$. For a suitable $p$, you'll get the wave equation in $x$ and $T$. |
How to cure "I don't appreciate Lebesgue integration because I was taught Riemannian integration throughout university" | I think your point 4 is the most important one to start the discussion with. Every mathematical tool has a domain of application which must be considered. The world of deterministic integrals appearing in applied mathematics, physics, engineering, chemistry and the rest of the sciences does not require the ability to have a well-defined integral for pathological functions like the question mark function. Keep in mind the historical context in which the Lebesgue integral was introduced: mathematicians had started realizing that many results which were claimed to have been proven for all functions actually could be contradicted by constructing abstract functions with paradoxical quantities - which had no analogue in the world of science. Thus the purpose of the Lebesgue integral was not to perform new integrals of interest to the sciences, but rather to place the mathematical formalism on a solid foundation. The Lebesgue integral is an antidote to a crisis in the foundations of mathematics, a crisis which was not felt in any of the sciences even as it was upending mathematics at the turn of the 20th century. "If it isn't broken, don't fix it" would be a natural response applied mathematicians and scientists could apply to this situation.
To anyone learning the Lebesgue integral for the purposes of expanding the scope of scientific integration they can perform, I would caution them with the previous paragraph.
However, the power of the Lebesgue integral (and the apparatus of measure theory) lies in its ability to make rigorous mathematical statements that apply to very badly behaved functions - functions that are so badly behaved, they often need to be specifically constructed for this purpose, and have no analogue in "real life". These are functions that are so delicate, an arbitrarily small "tweak" will destroy all these paradoxical properties. (This can be made rigorous in many ways, one of which is the fact that bounded continuous functions are dense in $L^p[0,1]$ - so for any terrible function $f\in L^p[0,1]$ and any $\epsilon>0$ you can find a very nice function $g$ with $|f(x)-g(x)|<\epsilon$ for all $x\in[0,1]$.) In "real life", all measurements carry errors and as a corollary any property that is destroyed by arbitrarily small modifications is not one that can actually be measured!
Despite all this, there is one major application of Lebesgue integration and the measure-theoretic apparatus: stochastic calculus, where one attempts to integrate against stochastic processes like Brownian motion. Even though the sample paths of Brownian motion are continuous, they represent the most badly behaved class of continuous paths possible and require special treatment. While the theory is very well developed in the Brownian case (and many of the top hedge funds on Wall Street have made lots of money exploiting this) there are other stochastic processes whose analysis is much more difficult. (How difficult? Well, the core of the Yang-Mills million dollar problem boils down to finding a way to rigorously define a certain class of very complicated stochastic integrals and show that they have the "obvious" required properties.) |
Is a Frechet space separable, if its dual is? | I think so (no books at hand for a reference). All we need for the normed proof to go through is some sort of Hahn-Banach theorem that gives us enough functionals to work with. And Fréchet spaces have enough functionals. |
is there a specific name for this? | I am not sure how widespread this name is, but I have found at least some occurrences of crown square fractal. (Google Books, Google Images, Google Scholar, Google)
A brief look at the hits in those searches suggests that this terminology appears mostly in connection with fractal antennas. |
Know that $\tan\left(\alpha-\frac{\pi}{4}\right)=\frac{1}{3}$ calculate $\sin\alpha$ | $\tan (\alpha -\frac {\pi} 4)=\frac {\tan (\alpha)-1} {1+\tan \alpha}=\frac 1 3$ and this gives $\tan \alpha =2$. Can you find $\sin \alpha$ from this? |
Let $ABCD$ be a cyclic convex quadrilateral such that $AD + BC = AB$. Prove that the bisectors of the angles $ADC$ and $BCD$ meet on the line $AB$. | Let the angle bisector of $\angle BCD$ meet $AB$ at $F$ (so we have to prove that $DF$ is the angle bisector of $\angle ADC$), then $$\angle BCF = \angle FCD = \alpha\;\;\;\;{\rm and }\;\;\;\;\;\angle BAD = 180^{\circ} -2\alpha$$ and let $E$ on $AB$ be such that $BE = BC$, then $AE = AD$ and $$\angle FED = \angle AED = \angle ADE = \alpha$$ so $\angle FED = \angle FCD = \alpha $ and thus $CDFE$ is cyclic!
Now, if $\angle FDC = \beta$ then $\angle BEC = \angle ECB = \beta$, so $$\angle ABC = 180^{\circ} -2\beta \implies \angle ADC = 2\beta$$
and thus $DF$ is also angle bisector but for $\angle ADC$ and we are done. |
epsilon-N proof of divergent sequence with another divergence assumed | Note that $x_n > y_n \implies \liminf x_n \geqslant \liminf y_n$ and for all $\epsilon > 0$
$$\limsup_{n \to \infty} b_n \geqslant \liminf_{n \to \infty} b_n \geqslant \lim_{n \to \infty} \frac1{n}\sum_{k=1}^Na_k + \lim_{n \to \infty}(1 - N/n)\epsilon = \epsilon.$$
Hence, $\lim b_n = \limsup b_n = \liminf b_n = + \infty.$
Alternatively, with $N$ fixed,
$$\lim_{n \to \infty} \frac1{n}\sum_{k=1}^Na_k = 0 , \\ \lim_{n \to \infty} (1 - N/n) = 1,$$
and, for $n$ sufficiently large
$$\frac1{n}\sum_{k=1}^Na_k > -\epsilon/4, \\ (1 - N/n) > 1/2,$$ and
$$b_n > \frac1{n}\sum_{k=1}^Na_k + (1 - N/n)\epsilon > -\epsilon/4 + \epsilon/2 = \epsilon/4$$ |
Linear regression coefficient | $\sum \left (X_{i}-\overline{X} \right )\left (Y_{i}-\overline{Y} \right )=\sum X_i Y_i-(\sum X_i) \overline{Y}-\overline{X}(\sum Y_i)+n\overline{X}\,\overline{Y}$. Thus
$\sum \left (X_{i}-\overline{X} \right )\left (Y_{i}-\overline{Y} \right)=\sum X_i Y_i-n\overline{X}\,\overline{Y}$. Using this also for $X=Y$, the second expression reads as
$\displaystyle\frac{\sum X_i Y_i-n\overline{X}\,\overline{Y}}{\sum X_i^2-n\overline{X}^2}$. Multiplying both numerator and denominator by $n$ gives the first expression for $b_1$.
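A quick numerical check in R (my own snippet) that the two expressions for $b_1$ agree:
set.seed(7)
X <- rnorm(10); Y <- rnorm(10); n <- length(X)
b1.cov <- sum((X - mean(X)) * (Y - mean(Y))) / sum((X - mean(X))^2)
b1.raw <- (sum(X*Y) - n*mean(X)*mean(Y)) / (sum(X^2) - n*mean(X)^2)
all.equal(b1.cov, b1.raw)   ## TRUE
Both forms coincide, as the algebra above shows they must. |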
Distribution Function of ln(x) | If a continuous random variable $X$ has support contained in $(0,\infty)$, with CDF $F_X$ and PDF $f_X=F_X^\prime$, then $Y=-\ln X$ is a real-valued continuous random variable with CDF$$F_Y(y)=P(Y\le y) =P(X\ge e^{-y})=1-F_X(e^{-y})$$and PDF$$f_Y(y)=F_Y^\prime(y)=e^{-y}f_X(e^{-y}).$$I leave as an exercise working out the support of $Y$ in terms of that of $X$.
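A quick example to see the formulas in action (an illustration of mine): if $X\sim\mathrm{Unif}(0,1)$, then $F_X(x)=x$ on $[0,1]$, so for $y\ge 0$
$$F_Y(y)=1-F_X(e^{-y})=1-e^{-y},\qquad f_Y(y)=e^{-y}f_X(e^{-y})=e^{-y},$$
i.e. $Y=-\ln X\sim\mathrm{Exp}(1)$, with support $[0,\infty)$. |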
Rewriting definite integral as a Riemann sum | If $f$ is Riemann integrable on $[a,b]$, then the Riemann sums go to the integral of $f$ over $[a,b]$, which means, in particular, that
$$\lim_{n\to+\infty}\frac{b-a}{n}\sum_{k=1}^nf(a+k\frac{b-a}{n})=\int_a^bf$$
In your case,
$$\lim_{n\to+\infty}\frac hn\sum_{k=1}^n(0+\frac hn k)^2=\int_0^hx^2dx=\frac{h^3}{3}$$ |
Infinite Differentiability and Generalisation of Mean Value Theorem | ad 1: There are many "Lagrange theorems". It is unclear to me what is meant by "consequence of Lagrange theorem" mentioned in that link; so much the more as the link "Lagrange theorem" leads to the MVT without any reference to Lagrange.
ad 2 and 3: In many calculus texts the assumptions for the MVT are "$f$ continuous on $[a,b]$ and differentiable on $(a,b)$". This makes the theorem applicable, e.g., to $f(x):=\sqrt{1-x^2}$ on $[{-1},1]$. If these assumptions seem somewhat contrived to you, you can replace them by Spivak's assumptions in your highlighted text. Spivak's version follows easily when you consider the extended function $g:\ [a,b]\to{\mathbb R} $ obtained from $f$ by defining $g(a)$ and $g(b)$ in the obvious way. The extension $g$ is continuous on all of $[a,b]$.
ad 4: That $f^{(n)}(0)=0$ for
$$f(x) :=\cases{e^{-1/x}\quad&$(x>0)$\cr 0&$(x\leq0)$\cr}$$ is most easily proven by induction. Given that for some polynomial $p_n$,
$$f^{(n)}(x) = \cases{p_n(1/x)\>e^{-1/x}\quad&$(x>0)$\cr 0&$(x\leq0)$\cr}$$
it follows that $f^{(n+1)}(x)=p_{n+1}(1/x)e^{-1/x}$ for $x>0$, and
$$\lim_{x\to 0+}{f^{(n)}(x)-f^{(n)}(0)\over x}=0\ ,$$
etcetera. |
Need help with continuity of integral in real analysis | What's wrong is that $\|f_n - f\|_\infty$ need not exist, and if it does exist need not converge to $0$.
Hint: Try $f_n(x) = (n+1) x^n$. |
Manipulating Compound Inequality | If $0<1-ab<1$, then we have: $$0<ab<1.$$ This gives you 1 and 3, since if $ab>0$, then $a$ and $b$ must have the same sign. This therefore also implies that ${a}/{b}>0$.
To see that 2 is not always true, take $a=1, b=1/2$. Then: $$0<1-1/2=1/2<1,$$ but $\dfrac{a}{b}=2>1$. |
Finding the distribution of $X^2 +Y^2 + Z^2$ | If $X_1, ..., X_n$ are i.i.d. standard Gaussians then the distribution of $X_1^2 + ... + X_n^2$ is known as the $\chi_n^2$-distribution. Calculating the cdf of the $\chi^2$ distributions is quite involved and includes special functions. See the wiki page for more information.
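In practice one evaluates this numerically, e.g. in R:
pchisq(2.5, df = 3)   ## P(X^2 + Y^2 + Z^2 <= 2.5), chi-squared CDF with 3 df
which avoids handling the special functions directly. |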
Will The Flea Catch The Kangaroo | The answer is yes.
After the first jump of the flea, the fraction of the rope that lies behind her is $\frac{1}{100,000}$.
Before the second jump this fraction is unchanged by the stretching. After the jump, however, it is $$\frac{1}{100,000} + \frac{1}{200,000}$$
and after $n$ jumps it is $$\frac{1}{100,000} + \frac{1}{200,000} + \cdots + \frac{1}{n\cdot 100,000} = \frac{1}{100,000}\sum_{k=1}^{n}\frac{1}{k}.$$ Since the harmonic series diverges, this eventually exceeds $1$; that is, the flea does catch the kangaroo.
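For a sense of scale (an estimate of mine, using $\sum_{k\le n}1/k\approx\ln n$): the fraction first exceeds $1$ only around $n\approx e^{100000}$ jumps, finite but astronomically large. |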
Maximization of a function defined with $\max$ | It appears that $f$ is a combination of convexity-preserving operations (namely "max" and linear maps), and therefore $f$ is convex. It follows that one of the end points will always be maximal. |
Bloch vector time evolution in magnetic field | I can suggest a simple way which is not without equations, but the equations involved are very simple. It goes as follows:
You can rotate your system of axes by $\omega t$ around the axis $z$, then by an angle $\theta = \arctan(B_1/B_2)$, so as to lay your vector $\vec B(t)$ along a new axis $z'$. You will get a vector constant in time, of length $B = \sqrt {B_1^2 + B_2^2}$.
In the new axes $x', y', z'$ the system of equations will become
$\frac {d\ n_{x'}}{dt} = \gamma B \ n_{y'}$
$\frac {d\ n_{y'}}{dt} = -\gamma B \ n_{x'}$
and the solution is immediate,
(1) $n_{x'} = \cos(\gamma B t)$, $\ \ \ n_{y'} = -\sin(\gamma B t)$.
All that is left after this is to reverse the two rotations, i.e. first rotate back around the axis $y'$ by $-\theta$, then, as the axis $z'$ is laid back on $z$, rotate by $-\omega t$ around the axis $z$, for which you have the rotation matrices. |
Compute the Cardinality of the set $\{z \in \Bbb{Z}|z>-10, z^3<0\}$ | The integers satisfying $z>-10$ form the set $\{z \in \mathbb{Z} | z>-10\}=\{-9,-8,-7,...\}$.
The cube of a negative integer is negative, so the numbers satisfying $z^3<0$ are all the negative integers.
In the set $\{z \in \mathbb{Z}|z>-10, z^3<0\}$, you want both of these conditions to be true, so $$\{z \in \mathbb{Z}|z>-10, z^3<0\}=\{-9,-8,...,-2,-1\}$$ which has $9$ elements. That is, the cardinality is $9$. |
Proof that the interval and the plane is not homeomorphic | Well, you can show that the plane with a point removed is still path-connected (there is always a path connecting two points that avoids the removed one), whereas the interval minus an interior point is not. |
Showing that a function is strictly increasing | For $x > 0,$ the function equals
$$
\frac{1}{1+1/x}
$$
$x$ is increasing so $1/x$ is decreasing so $1+1/x$ is decreasing so
$$
\frac{1}{1+1/x}
$$
is increasing.
For $x<0,$ a similar argument. Joining the two cases together is easy. |
Nash Equilibrium of cheating a test($N$-player game) | There are ${N\choose k}$ pure Nash equilibria, corresponding to the different possible subsets of $k$ students that cheat. When $k$ (the maximum) students cheat, nobody wants to change their strategy; the cheaters and non-cheaters alike would do worse if they switched.
There may also be mixed Nash equilibria. |
$R_1$ and $R_2$ are partial orders. What about $R_1 \cap R_2$? | $R_1$ and $R_2$ are by definition subsets of $S \times S$ which are reflexive, antisymmetric, and transitive. Now we need to check that $R_1 \cap R_2$ is also reflexive, antisymmetric, and transitive.
Reflexive: Since $(a,a)$ must be in both $R_1$ and $R_2$ for any $a \in S$, $(a,a)$ will also be in $R_1 \cap R_2$, so it is reflexive.
Antisymmetric: Now, this is a conditional property. If $(a,b) \in R_1 \cap R_2$ and $(b,a) \in R_1 \cap R_2$, it must be the case that $a = b$. Since we know that this property is satisfied for both $R_1$ and $R_2$, it must also hold for $R_1 \cap R_2$.
Transitivity: Likewise, since we know that the transitive property holds for both $R_1$ and $R_2$, there can be no two elements $(a,b) \in R_1 \cap R_2$ and $(b,c) \in R_1 \cap R_2$ without it also being the case that $(a,c) \in R_1 \cap R_2$. |
Is $p(x)=(5/12)^x + (12/13)^x - 1$ strictly increasing function? | No, it is in fact strictly decreasing. Note that $\frac{5}{12}$ and $\frac{12}{13}$ are both less than $1$, and so $\left(\frac{5}{12}\right)^x$ and $\left(\frac{12}{13}\right)^x$ are both strictly decreasing. This implies that their sum is strictly decreasing; the constant term in $p(x)$ is irrelevant. |
Equation of Circle touching a straight line and passing through the centroid of a triangle and a particular point | The centroid $G$ belongs to the line
$\mathcal{L}:$ $y=x+1$.
Since the circle $\mathcal{C}$ must pass through
$G(2,3)$ and $D(1,1)$, it must also pass through
the point $K(4,4)$,
which is a reflection of $D$ wrt the perpendicular through $G$.
Hence, the sought circle is the circumcircle of $\triangle GDK$
with the side lengths
\begin{align}
|GD|=|GK|&=\sqrt5
,\\
|DK|&=3\sqrt2
,
\end{align}
the area
\begin{align}
S_{GDK}&=
\tfrac12\,|DK|\,\sqrt{|GD|^2-\tfrac14\,|DK|^2}
=\tfrac32
\end{align}
and the circumradius
\begin{align}
R&=\frac{|GD|^2|DK|}{4S_{GDK}}
=\tfrac52\,\sqrt2
.
\end{align}
Since in this special simple case
$GO$ is the diagonal
of the $\tfrac52\times\tfrac52$ square,
the coordinates of the center of the circle $\mathcal C$
can be found as
\begin{align}
O&=G+(\tfrac52,\, -\tfrac52)
=(\tfrac92,\, \tfrac12)
.
\end{align}
In case of general $\triangle ABC$
with the side lengths $a,b,c$,
we could find
the center $O$
using a known expression
\begin{align}
O&=
\frac{a^2\,(b^2+c^2-a^2)\,A+b^2\,(a^2+c^2-b^2)\,B+c^2\,(b^2+a^2-c^2)\,C}
{a^2\,(b^2+c^2-a^2)+b^2\,(a^2+c^2-b^2)+c^2\,(b^2+a^2-c^2)}
\\
&=
\frac{a^2\,(b^2+c^2-a^2)\,A+b^2\,(a^2+c^2-b^2)\,B+c^2\,(b^2+a^2-c^2)\,C}
{16S_{ABC}^2}
,
\end{align}
which, of course, works in this special case as well
and provides the same result. |
To show that $\exp(\psi(2n+1))\int_0^1x^n(1-x)^n dx$ is a positive integer, where $\psi(x)$ is the Chebyshev function | If you have a copy of Ram Murty's $\textit{Problems in Analytic Number Theory}$, 2nd Edition, it's exercise 3.1.4.
Show that $e^{\psi(n)}=\text{lcm}\{1,2,\cdots,n\}$.
Show that $I_n=\int_0^1x^n(1-x)^n\ dx\leq 2^{-2n}$.
Show that $I_n=\sum_{k=0}^{n}\int_0^1(-1)^k{n\choose k}x^{n+k}\ dx$ and that $I_n\times\text{lcm}\{1,2,\cdots,2n+1\}$ is a positive integer.
Think you can take it from there? |
Double integral involving zeta function: $\int_0^\infty \frac{1-12y^2}{(1+4y^2)^3}\int_{1/2}^{\infty}\log|\zeta(x+iy)|~dx ~dy.$ | I suspect that this is a troll (especially considering the comment), but regardless;
In 1995, Volchkov proved that the integral you're interested in is equal to $\frac{\pi(3-\gamma)}{32}$ if and only if the Riemann Hypothesis is true. |
Calculate angle of the next point on a circle | If the $n$th point $P_n$ has polar angle $a_n$, the $n+1$st point $P_{n+1}$ has polar angle:
$$a_{n+1}=a_n+\alpha \ \ \ \text{where} \ \ \ \alpha=2\sin^{-1}\left(\tfrac{d}{2r}\right) \iff \sin \tfrac{\alpha}{2} = \tfrac{d}{2r}$$
this last relationship coming from elementary trigonometry on the isosceles triangle $OP_nP_{n+1}$ whose legs are $r$ and whose base is $d$: dropping the perpendicular from $O$ to the midpoint of the base gives $\sin\tfrac{\alpha}{2}=\tfrac{d/2}{r}$ (remember: sine = opp/hyp).
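A small R sketch of mine, generating such points and checking the chord lengths:
r <- 1; d <- 0.3
alpha <- 2 * asin(d / (2 * r))   ## central angle between consecutive points
a <- alpha * (0:10)              ## polar angles a_n
pts <- cbind(r * cos(a), r * sin(a))
sqrt(rowSums(diff(pts)^2))       ## consecutive distances: all equal to d
Every consecutive distance comes out as $d$, confirming the angle formula. |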
Anomaly Coefficient of $SU(N)$ [Srednicki] | I think I found a solution. Since
$$
\begin{align}
A(R) d_R^{abc} &= \tfrac{1}{2}\text{Tr} \left( t^a_R \{ t^b_R,t^c_R \} \right)\\
&= \tfrac{1}{2}\text{Tr} \left( t^a_R \frac{1}{N}\delta^{bc}1\!\!\!\,1 \right) + \tfrac{1}{2}\text{Tr} \left( t^a_R d_R^{bcd}t_R^d \right)\\
&= \tfrac{1}{2N}\underbrace{\text{Tr} \left( t^a_R \right)}_{0}\delta^{bc} + \tfrac{1}{2}d_R^{bcd}\text{Tr} \left( t^a_R t_R^d \right)\\
&= \tfrac{1}{2}d_R^{bcd}\ T(R)\delta^{ad}\\
&= \tfrac{1}{2}d_R^{bca}\ T(R)\\
&= \tfrac{1}{2}d_R^{abc}\ T(R)\\
&\quad\leftrightarrow A(R) = \frac{T(R)}{2} \quad\text{ if }d^{abc}\neq 0.
\end{align}
$$
I suppose Srednicki used the Gell-Mann matrices $\lambda$ instead of $t=\frac{1}{2}\lambda$ as generators, for which the usual normalization is $T(F)_\text{Gell-Mann}=2$ and therefore $A(F)_\text{Gell-Mann}=1$.
[Which is strange, since he wrote $T(F)=\tfrac{1}{2}$ a few pages earlier...] |
Does integration over one complete cycle equal 4 times the integration over a quarter-cycle? | It really was my failure to notice that the "cycle" was the motion of the pendulum itself.
So what Wikipedia gives is the integration from where the pendulum starts to where it reaches the bottom, which equals $1/4$ of its trajectory. Thus, the equation can be written in that form. |
Closed subset of the real line without isolated points. | Define "closed interval". Typically the definition goes like this: for $a,b\in\mathbb{R}$
$$[a,b]:=\{x\in\mathbb{R}\ |\ a\leq x\leq b\}$$
and therefore singletons $\{c\}$ are closed intervals as well, i.e. $[c,c]=\{c\}$. Thus any subset of $\mathbb{R}$ is a union of closed intervals, even a disjoint union.
However if by "closed interval" you mean "closed interval of non-zero length" (equivalently "closed intervals which are not singletons") then the answer is "no" because the Cantor set has no isolated points. And the Cantor set does not contain any open interval (and thus closed of non-zero length as well) because it is totally disconnected. |
Describe a $3$-dimensional solid whose symmetry group is isomorphic to $D_5$ | Yes, a (regular) pentagonal frustum would satisfy this. Since the two pentagonal faces of the frustum do not have the same size, any element of the symmetry group must preserve these faces. Hence the symmetry group is, "at most", the symmetry group of a pentagon, which is $D_5$. Since all five rotations and five reflections preserve the frustum, we are done.
Another choice of solid with symmetry group $D_5$ would be a regular pentagonal pyramid. |
The cardinality of $\mathbb{R}/\mathbb Q$ | I don't see what I would have expected to be the "standard answer" to this question, so let me leave it in the hopes it will be helpful to someone.
Proposition: Let $G$ be an infinite group, and let $H$ be a subgroup with $\# H < \# G$. Then $\# G/H = \# G$.
Proof: Let $\{g_i\}_{i \in G/H}$ be a system of coset representatives for $H$ in $G$: then every element $x$ in $G$ can be written as $x = g_{i_x} h_x$ for unique $h_x \in H$ and $i_x \in G/H$. (Note that there is no canonical system of coset representatives: getting one is an archetypical use of the Axiom of Choice.) Thus we have defined a bijection from $G$ to $G/H \times H$, so $\# G = \# G/H \cdot \# H$. Since $\# G$ is infinite, so must be at least one of $\# G/H$, $\# H$, and then standard cardinal arithmetic (again AC gets used...) gives that
$\# G = \# G/H \cdot \# H = \max(\#G/H, \# H)$.
Since we've assumed $\# H < \# G$, we conclude $\# G = \#G/H$.
This applies in particular with $G = \mathbb{R}$, $H = \mathbb{Q}$ to give $\# \mathbb{R}/\mathbb{Q} = \# \mathbb{R} = 2^{\aleph_0}$. |
Finding Trigonometric Fourier Series of a piecewise function | \begin{align}
a_0 &= \frac{1}{2\pi}\int_{0}^{2\pi} \sin x \cdot \mathbf{I}_{[0,\pi]}\, dx = \frac{1}{2\pi}\int_{0}^{\pi} \sin x \, dx
= \frac{1}{\pi}\\
a_n &= \frac{2}{2\pi}\int_{0}^{2\pi} \sin x \cdot \mathbf{I}_{[0,\pi]} \cdot \cos\tfrac{2\pi n x}{2\pi}\, dx
= \frac{1}{\pi} \int_{0}^{\pi} \sin x \cdot \cos nx \, dx
= \frac{1}{\pi} \frac{\cos (\pi n) + 1}{1-n^2}\\
&= \frac{1}{\pi}\frac{1 + (-1)^n}{1-n^2}
= \begin{cases} \frac{2}{\pi}\frac{1}{1-n^2} & \text{if $n$ even}\\ 0 & \text{if $n$ odd}\end{cases}\\
b_n &= \frac{2}{2\pi}\int_{0}^{2\pi} \sin x \cdot \mathbf{I}_{[0,\pi]} \cdot \sin\tfrac{2\pi n x}{2\pi}\, dx
= \frac{1}{\pi} \int_{0}^{\pi}\sin x \cdot \sin (nx)\,dx\\
&= \frac{1}{\pi}\cdot\begin{cases}\tfrac{\pi}{2} & \text{if $n=1$}\\0 & \text{if $n>1$}\\\end{cases}
= \begin{cases}\tfrac{1}{2} & \text{if $n=1$}\\0 & \text{if $n>1$}\\\end{cases}
\end{align}
Hence the Fourier series of $f(x)$ over $(0,2\pi)$ is given by
\begin{align}
f(x) &\sim
a_0+\sum_{n=1}^{\infty}\left [a_n \cos (nx) + b_n\sin(nx)\right]\\
&= \frac{1}{\pi} + \sum_{n=1}^{\infty}\frac{2}{\pi}\frac{1}{1-(2n)^2}\cos((2n)x) + \frac{1}{2}\sin((1)x)\\
&= \frac{1}{\pi} + \frac{2}{\pi}\sum_{n=1}^{\infty}\frac{\cos(2nx)}{1-4n^2} + \frac{1}{2}\sin(x)
\end{align}
Plotted with 20 terms using Wolfram Alpha (plot not reproduced here).
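A numeric cross-check of the coefficients in R (my own sketch):
x  <- seq(0, 2*pi, length.out = 400)
fs <- 1/pi + 0.5*sin(x)                               ## a_0 and b_1 terms
for (n in 1:20) fs <- fs + (2/pi)*cos(2*n*x)/(1 - 4*n^2)
f  <- ifelse(x <= pi, sin(x), 0)
max(abs(fs - f))
The maximal deviation is small (about $10^{-2}$ with 20 terms), reflecting the $1/n^2$ decay of the coefficients.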
Note
I used Wolfram Alpha to compute these integrals, but it skipped the special case of $b_1$. This likely has to do with a symbolic division that implicitly assumed $n\neq 1$. Either way, this special case follows near-directly from the principle of orthogonality, i.e.,
$$
\frac{1}{\pi}\int_0^{2\pi} \sin(nx)\sin(mx)\,dx = \begin{cases}1 &\text{if $n=m$}\\
0 & \text{if $n\neq m$}\end{cases}
$$
So in our case, $$\frac{1}{\pi} \int_0^{2\pi} \sin(x)\mathbf{I}_{(0,\pi)}\sin(nx)\,dx
= \frac{1}{\pi} \begin{cases} \int_0^{\pi} \sin(x)^2\,dx & \text{for $n=1$}\\
\int_0^{\pi} \sin(x)\sin(nx)\,dx & \text{for $n>1$}\end{cases}$$
This latter point highlights why I knew that Wolfram Alpha hadn't given me everything,
$$\int_{0}^a \sin(x)^2 \,dx > 0,\quad\text{for $a>0$}$$
so we wouldn't expect the corresponding coefficient to be zero. If, on the other hand, the function were something like $f(x)=\sin(5x)\cdot \mathbf{I}_{(0,\pi)}$ or $f(x)=\cos(2x)\cdot \mathbf{I}_{(0,\pi)}$, then we'd know to check $b_5$ or $a_2$ respectively.
If you have learned about vector spaces, then you can think of the integral as a dot (inner) product over vectors $1, \sin(nx),\cos(nx)$ for $n=1,\dotsc,\infty$. The inner product measures similarity. In this case we are asking about the amount of "$\sin(x)$"-ness left if we take a $\sin(x)$ function and lop off the latter half. The answer, perhaps unsurprisingly, is a half. |
Counting Permutations | HINT: You need to choose $3$ of the $n$ elements for the cycle of length $3$, and then you need to choose a cyclic order for them; in how many ways can that be done?
Then you need to choose $4$ elements for the two $2$-cycles and divide them into two disjoint cycles; in how many ways can that be done?
Once all of that has been done, the permutation is completely determined (why?), so you need only combine those partial results correctly.
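For reference (a completion of mine, not part of the hint): there are $2\binom{n}{3}$ ways to choose and cyclically order the $3$-cycle, and $\frac12\binom{n-3}{2}\binom{n-5}{2}$ ways to form the two unordered $2$-cycles, giving
$$2\binom{n}{3}\cdot\frac12\binom{n-3}{2}\binom{n-5}{2}=\binom{n}{3}\binom{n-3}{2}\binom{n-5}{2}$$ such permutations in total. |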
Knots with infinitely many crossings | As answered in the topic you linked, there are knots with infinitely many crossings in a diagram, for example wild knots, and some people do consider them. The reason the finite case is much more popular is that any smooth knot has a diagram with a finite number of crossings, and any diagram carries all the information about the knot. |
For a sub-space $U$, find $u\in U$ such that $\text{card}(u)$ is minimized and $Au\neq 0$ | Since everything is finite, you could just try all the vectors $\bf u$ of Hamming weight 1 (I've never seen the word cardinality used the way you use it; I think Hamming weight means what you mean by cardinality), then all the vectors of Hamming weight 2, etc., until you find one that works. Were you maybe after something more efficient? Do you have some reason to believe there is a more efficient method? |
Linearity of expectation for infinite sums? | The condition provided is sufficient. See the Fubini theorem, and scroll down to the Fubini–Tonelli theorem, which says:
If $\mathbb{E}\Big[\sum\limits_{i=1}^{\infty}\big|X_i\big|\Big] < \infty$ or $\sum\limits_{i=1}^{\infty} \mathbb{E}[|X_i|] < \infty$, then we may apply the Fubini theorem and compute the double integral using iterated integrals.
As a note, we may use Fubini-Tonelli because probability measures are $\sigma-$finite and the counting measure on $\mathbb{N}$ is $\sigma-$finite.
Consider what happens if,
$$X_n = \left\{ \begin{array}{ll}
(-1)^{n}\frac{1}{n} & \text{with probability} \; \frac{1}{2}\\
0 & \text{with probability} \; \frac{1}{2}
\end{array} \right.$$
Both $\mathbb{E}\Big[\sum\limits_{n=1}^{\infty}\big|X_n\big|\Big] = \infty$ and $\sum\limits_{n=1}^{\infty} \mathbb{E}[|X_n|] =\infty$: the second is an easy series result, while the first can be shown using the Borel–Cantelli lemma. However, the conclusion of the Fubini theorem still holds for this example. |
Multiplying elements of $SU(2)$. | To get$$\left(\cos|a|+a_1\operatorname{sinc}|a|i+a_2\operatorname{sinc}|a|j+a_3\operatorname{sinc}|a|k\right)\left(\cos|b|+b_1\operatorname{sinc}|b|i+b_2\operatorname{sinc}|b|j+b_3\operatorname{sinc}|b|k\right)\\=\cos|a|\cos|b|-\vec{a}\cdot\vec{b}\operatorname{sinc}|a|\operatorname{sinc}|b|\\+[a_1\operatorname{sinc}|a|\cos|b|+b_1\operatorname{sinc}|b|\cos|a|+(a_2b_3-a_3b_2)\operatorname{sinc}|a|\operatorname{sinc}|b|]i\\+\cdots\\=\cos|c|+\frac{c_1i+c_2j+c_3k}{|c|}\sin|c|$$we need$$\cos|c|=\cos|a|\cos|b|-\vec{a}\cdot\vec{b}\operatorname{sinc}|a|\operatorname{sinc}|b|,\,\\\vec{c}\operatorname{sinc}|c|=\vec{a}\operatorname{sinc}|a|\cos|b|+\vec{b}\operatorname{sinc}|b|\cos|a|+\operatorname{sinc}|a|\operatorname{sinc}|b|\vec{a}\times \vec{b}.$$Or if we denote unit vectors with hats,$$\cos|c|=\cos|a|\cos|b|-\hat{a}\cdot\hat{b}\sin|a|\sin|b|,\,\\\hat{c}\sin|c|=\hat{a}\sin|a|\cos|b|+\hat{b}\sin|b|\cos|a|+\sin|a|\sin|b|\hat{a}\times\hat{b}.$$ |
Proving something is uniformly continuous | As $-x \le x \sin (x^{-1}) \le x$, your function has a limit at 0, namely $0$, so we may extend it to a continuous function $f\colon [0,\infty) \to \mathbb R$, which is uniformly continuous on $[0,2]$ (as this is a compact interval) and on $[1,\infty)$ (as $f$ has bounded derivative
\[
f'(x) = \sin\left(\frac 1x\right) - \frac{\cos\left(\frac 1x\right)}{x}
\]
here and is therefore Lipschitz). Hence $f$ is uniformly continuous on $[0,\infty)$, hence on $(0,\infty)$. |
Span, multiplicity and dimensions | I suppose you must know that the multiplicity of every linear factor $\;(x-\lambda)\;$ in the minimal polynomial equals the maximal size a Jordan block corresponding to $\;\lambda\;$ has in the Jordan form of the matrix. Since this is $\;1\;$ in our case this means the Jordan form of the matrix is diagonal, i.e. our matrix is diagonalizable. |
Squeeze Theorem when the sin parameter is a fraction | It's not; you should just have written that $|\sin (\frac{1}{x+1})|$ is bounded by $1$ (which you did) and then said that $(x+1)^2 \rightarrow 0$, hence the product goes to zero as well, kinda like $$ - (x+1)^2 \leq (x+1)^2 \sin (\frac{1}{x+1}) \leq (x+1)^2.$$ It isn't true that $\sin(\frac{1}{x+1}) = (x+1)\sin(1)$, which seems to be what you wrote. |
Double integral bounds of integration polar change of coordinate | This question is a bit tricky because $R$ is not a circle quadrant, it's an ellipse quadrant. Therefore, the underlying ellipse needs to be transformed into a circle by a substitution – $u=\frac23x$ works:
$$\iint_Rxy\,dA=\frac94\iint _Suy\,dA$$
$S$ is now a quarter-disc of radius $4$ around the origin, so polar coordinates can now be used:
$$=\frac94\int_0^4\int_0^{\pi/2}r^3\cos\theta\sin\theta\,d\theta\,dr$$
$$=\frac94\int_0^4\frac12r^3\,dr$$
$$=\frac98\times\frac14\times4^4=72$$ |
Proof $x \in L \leftrightarrow \det(\ldots) = 0$. | As you have already mentioned: $x\in L \Leftrightarrow x_i = v_i + \lambda (w_i-v_i) = (1-\lambda) v_i + \lambda w_i$ for some $\lambda \in K$.
Now, for such a $\lambda$ you can see that $(1-\lambda)\pmatrix{1 & v_1 & v_2} + \lambda \pmatrix{1 & w_1 & w_2} = \pmatrix{ 1 & x_1 & x_2}$. Thus the row vectors of your matrix are linearly dependent, so the rank of your matrix is at most $2$ and thus the determinant is $0$. Conversely, if the determinant is $0$ the row vectors are linearly dependent, and because $v \neq w$ you can find a $\lambda$ so that $x=(1-\lambda)v+\lambda w$. |
L'Hôpital's rule does not apply?! | You have misstated L'Hopital's Rule. It does not say $\lim_{x\to c}{f(x)\over g(x)}={f'(c)\over g'(c)}$ (with the usual assumptions on $\lim_{x\to c}f(x)$ and $\lim_{x\to c}g(x)$). It says
$$\lim_{x\to c}{f(x)\over g(x)}=\lim_{x\to c}{f'(x)\over g'(x)}$$
provided the latter limit exists. In this case
$${f'(x)\over g'(x)}={2x\sin(1/x)-\cos(1/x)\over\cos x}$$
for $x\not=0$. So even though $f'(0)=\lim_{x\to0}{f(x)-f(0)\over x}=\lim_{x\to0}x\sin(1/x)=0$ (assuming we let $f(0)=\lim_{x\to0}f(x)=0$), the hypotheses of L'Hopital's Rule are not fulfilled because $\lim_{x\to0}(f'(x)/g'(x))$ does not exist. In particular $\cos(1/x)$ has no limit as $x\to0$. |
Which of the following statements about f is true? | User ModCon had a (now deleted) solution to some parts of the problem. There was a small mistake in it that made a few, but not all conclusions incorrect. I'll try to repeat his contributions, and add a few of my own.
We have $$\left|\frac{\sin(x/n)}n\right| \le \frac{|x/n|}n = \frac{|x|}{n^2}.$$
That means for each $x$ the series $\sum_{n=1}^{\infty}\frac{\sin(x/n)}{n}$ has a convergent majorant
$$\sum_{n=1}^{\infty}\frac{|x|}{n^2} = |x| \sum_{n=1}^{\infty}\frac{1}{n^2} = \frac{\pi^2}6|x|,$$
which means it converges pointwise. This also means that we can use the Weierstrass M-Test on an interval $[-a,a]$ for some $a>0$ and choose $M_n=\frac{a}{n^2}$. That means the series converges absolutely and uniformly on $[-a,a]$, which means $f(x)$ is continuous on that interval. Since we can choose any real $a>0$, this means $f(x)$ is continuous on the whole $\mathbb R$.
If we differentiate the series term-wise, we get another function:
$$g(x)=\sum_{n=1}^{\infty}\frac{\cos(x/n)}{n^2}$$
Using $|\cos(x/n)| \le 1$ we can directly apply the M-test on the whole real line for $g(x)$, setting $M_n=\frac1{n^2}$. This shows that $g(x)$ is actually well defined and the defining series converges uniformly on $\mathbb R$ to $g(x)$.
So is now $f'(x)=g(x)$? By Theorem 1 on page 2 in this university script, the answer is yes. The term-wise derivative series (for $g(x)$) converges uniformly on the whole $\mathbb R$, the original series (for $f(x)$) converges at a point (we already know it converges everywhere), so we now know that
$$f'(x)=g(x),\; \forall x \in \mathbb R$$
This means $f(x)$ is differentiable, which answers (c) in the affirmative.
We also know that
$$|g(x)| \le \sum_{n=1}^{\infty}\frac{|\cos(x/n)|}{n^2} \le \sum_{n=1}^{\infty}\frac{1}{n^2} = \frac{\pi^2}6,$$
so the derivative of $f(x)$ is bounded. That means $f(x)$ is uniformly continuous: given any $\epsilon > 0$, we can choose $\delta=\frac{6\epsilon}{\pi^2}$, and if we assume $x_0 < x_1 < x_0+\delta$, we have $f(x_1)=f(x_0)+(x_1-x_0)f'(\xi)$ with $\xi \in [x_0,x_1]$ (Mean Value Theorem), so
$$|f(x_1)-f(x_0)| = |f(x_0)+(x_1-x_0)f'(\xi) - f(x_0)| = |x_1-x_0||f'(\xi)| < \delta\frac{\pi^2}6 = \epsilon.$$
This answers (b) in the affirmative and (a) in the negative.
For (d), because we know $f(x)$ is differentiable, it is enough to find a point where $f'(x)=g(x)$ is negative, to prove that (d) is not true.
Consider $x=\pi$. Then the first element of the series is $\cos(\pi)=-1$. We know that all the other elements of the series can sum up to at most $\sum_{n=\color{red}{2}}^{\infty}\frac{1}{n^2} = \frac{\pi^2}6 - 1 < 1$, so $g(\pi) < 0$ and (d) is not true.
If you don't want to use the non-elementary exact sum of that series, you can can also just say
$$\sum_{n=2}^{\infty}\frac{1}{n^2} < \sum_{n=2}^{\infty}\frac{1}{n(n-1)} = \sum_{n=2}^{\infty}\left(\frac{1}{n-1}-\frac{1}{n}\right) = (1-\frac12)+(\frac12-\frac13)+(\frac13-\frac14)\ldots=1.$$
Finally, a plot of the function (suitably truncated) by Wolfram Alpha (not reproduced here). |
Show that $\{n^2F(\frac{1}{n})\}$ is bounded. | Indeed $G(0)=0$: $|\frac{1}{n}G(\frac{1}{n})|=|F(\frac{1}{n})|\le\frac{1}{n^{3/2}}$. Thus $|G(\frac{1}{n})|\le\frac{n}{n^{3/2}}=\frac{1}{n^{1/2}}$, by continuity of $G$ we get $G(0)=0$.
Now, since $G$ is analytic and $G(0)=0$, we have $|G(z)|\le|z|$ on the unit disc by the Schwarz lemma (using the bound on $G$ from the hypotheses). Therefore $|F(z)|=|zG(z)|\le|z|^2$ on the unit disc. Taking $z=\frac{1}{n}$, done. |
Application of Euler's theorem apart from finding last digits of huge numbers | I can think of one possible example. As $\phi(m)$ is the number of natural numbers less than $m$ which are coprime to $m$, Euler's theorem provides a good upper bound to a lot of problems which involve coprime numbers.
For example, consider the problem of quadratic congruences: solving the equation $x^{2} \equiv a\ (mod\ p)$, where $p$ is a prime. Solving this problem using a number of algorithms requires finding a quadratic non-residue $z$ of $p$ first. For an odd prime $p$, exactly $(p-1)/2$ of the nonzero residues are quadratic residues, hence the probability of finding a non-residue for a given $p$ by random trials is quite high in practice. Even for a composite $m$ (one possessing a primitive root), checking whether $a$ with $\gcd(a,m)=1$ is a quadratic residue of $m$ uses the following criterion, which is based on Euler's theorem:
Euler's Criterion: The equation $x^{2} \equiv a \ (mod\ m)$ has a solution if and only if
$a^{\frac{\phi(m)}{2}} \equiv 1 \ (mod\ m)$
As another example, I'd like to say that finding a primitive root modulo $m$ also makes good use of Euler's theorem; the test for being a primitive root modulo $m$ exploits Euler's theorem and Lagrange's theorem equally. |
Depending on variable to calculate joint distribution | If you choose a ball at random from amongst all the balls (rather than choosing a row first and then choosing a ball), then the probability that row $i$ was chosen is $P[X=i]={i\over N}$, where $N={n(n+1)\over2}$ is the total number of balls.
So if $j\le i$
$$
P[X=i,Y=j]= P[X=i]\cdot P[Y=j\,|\,X=i]={i\over N}\cdot {1\over i}={1\over N}.
$$
If $j>i$, then $P[X=i,Y=j]$ is of course 0.
Note this is (maybe) intuitively clear, since $P[X=i,Y=j]$ is the probability of picking a particular ball from the total of $N$ balls.
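As a sanity check (mine): there are exactly $N$ pairs $(i,j)$ with $1\le j\le i\le n$, so these probabilities sum to
$$\sum_{i=1}^{n}\sum_{j=1}^{i}\frac{1}{N}=\frac{1}{N}\cdot\frac{n(n+1)}{2}=1,$$
as they must. |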
Preimage of homologous points are homologous | This is much simpler than I believed. Roughly speaking:
Recall that if $A\subset X$, then $x\in \delta A$ implies $f(x) \in\delta f(A)$.
Taking $A=f^{-1}(Y\setminus \{y_1,y_2\})$, the result follows upon showing $A$ to be a chain. |
throwing a dice repeatedly so that each side appear once. | This is a special case of the Coupon Collector's Problem.
Throw the die once. Of course we get a "new" number. Now throw the die again, and again, until we have obtained each of $1$ to $6$ at least once.
Let random variable $X_2$ be the "waiting time" (number of additional throws) until we get a number different from the result on the first throw. Let $X_3$ be the waiting time (number of additional throws) between the time we get the second new number and the time we get the third. Let $X_4$ be the waiting time between the third new number and the fourth. Define $X_5$ and $X_6$ analogously. Then $X=1+X_2+X_3+X_4+X_5+X_6$. By the linearity of expectation we have $E(X)=1+E(X_2)+\cdots+E(X_6)$.
The various $X_i$ have geometric distribution. Consider for example $X_2$. After the first throw, the probability that a result is "new" is $\frac{5}{6}$, so $E(X_2)=\frac{6}{5}$. Similarly, $E(X_3)=\frac{6}{4}$, and so on. Now put the pieces together.
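For reference, the pieces assemble (my own computation) to
$$E(X)=1+\tfrac{6}{5}+\tfrac{6}{4}+\tfrac{6}{3}+\tfrac{6}{2}+\tfrac{6}{1}=\tfrac{147}{10}=14.7,$$
so about $14.7$ throws are needed on average. |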
Estimating a dynamical system's behavior without using Liapunov theorem | Let $A(u,v)=(\epsilon u+2v,-u+\epsilon v)$, then $(x',y')=A(x,y)\cdot(1+z)$ and $z'=-z^3$, hence $(x,y)$ stays on the trajectories of the system $(u',v')=A(u,v)$, only with a time change and possibly turning backwards. The (trivial) phase diagram of $z'=-z^3$ shows that, for every starting point, $z\to0$; hence, after a time, $\frac12\leqslant z+1\leqslant2$ and there are no more turning points.
The eigenvalues of the $(u,v)$ linear system are $\epsilon\pm\mathrm i\sqrt2$ hence:
If $\epsilon\lt0$, then $(x,y,z)\to(0,0,0)$.
If $\epsilon=0$, then $z\to0$ and $(x,y)$ circles clockwise around $(0,0)$ on the ellipse $x^2+2y^2=x_0^2+2y_0^2$.
If $\epsilon\gt0$ and $(x_0,y_0)=(0,0)$, then $(x,y,z)\to(0,0,0)$, since $(x,y)=(0,0)$ identically and $z\to0$.
If $\epsilon\gt0$ and $(x_0,y_0)\ne(0,0)$, then $z\to0$ and $(x,y)$ explodes, in the sense that $x^2+y^2\to\infty$.
Can the 0 element of the Fourier Algebra be represented as a coefficient function of two non-zero vectors? | If $0\ne x\in l^2(G)$ is not cyclic for $\lambda$, i.e., the closed linear span of vectors
$\lambda(t)x$ is a proper subspace in $l^2(G)$, then one can find $0\ne y\in l^2(G)$ for which
$\langle \lambda(t)x,y\rangle=0$ $(\forall t\in G)$ although $x\ne 0\ne y$.
For a counterexample one can look in two dimensional case. Let $G=({\mathbb Z}_2, +)$. Then
$l^2(G)={\mathbb C}^2$ and
$$ \lambda(0)=\left(\begin{array}{cc} 1 & 0\\ 0 & 1\end{array}\right),\qquad
\lambda(1)=\left(\begin{array}{cc} 0 & 1\\ 1 & 0\end{array}\right). $$
It is obvious that $x=\left(\begin{array}{c} 1 \\ 1\end{array}\right)$ is an eigenvector for
$\lambda(t)$ $(t\in G)$, which means that $\lambda(0)x$ and $\lambda(1)x$ span a one-dimensional subspace. Let $y=\left(\begin{array}{c} 1 \\ -1\end{array}\right)$. Then $\langle \lambda(t)x,y\rangle=0$ for any $t\in G$.
EDIT. Oops! I have overlooked that $G$ has to be infinite, so the counterexample is not relevant. But the first part of the answer is correct, I guess.
EDIT 2.
If $G$ is commutative, say $G=({\mathbb Z}, +)$, then $\{ \lambda(t); t\in G\}$ generates a commutative subalgebra of operators in $B(l^2(G))$. Such an algebra cannot be transitive, which means that it has a proper non-trivial invariant subspace. Hence, there exists a non-zero vector which is not cyclic for $\{ \lambda(t); t\in G\}$. |
Using gradient to find an equation of a plane tangent to the graph | What is $z = f(P)$? Now, consider your solution at $P$: $\frac{3}{25}(1-1)-\frac{4}{25}(2-2) = z \implies z = 0$.
This is not an answer, but I cannot comment with my low rep.
Perhaps, to answer your question, the $-\frac{1}{5}$ comes about as a result of shifting your plane along the $z$-axis. Your tangent plane was just parallel, but not coincident with the solution. |
Does this simple proof use the axiom of choice? | The proof you've given is completely choice-free: it just boils down to existential elimination. This can be checked by writing out the fully-formal proof corresponding to that rigorous natural-language argument, and if you want you can even do this in such a way that a computer can verify the final result (although this takes a bit more work).
It's worth noting that an elaboration on this idea yields a stronger result:
$\mathsf{ZF}$ proves "Every finite collection of nonempty sets has a choice function."
This is not as trivial as it may seem, since in a model of $\mathsf{ZF}$ there may be "internally-finite" sets which are not in fact finite. Instead, we have to carefully check that the relevant induction argument can be run internally: reasoning in $\mathsf{ZF}$, what can you say about the least finite ordinal $n$ such that there is some $n$-sized collection of nonempty sets without a choice function? |
Number of prime order elements of two non-isomorphic groups of same order | Like I said in my comment, I do not see any reason for this result to hold, so I tried to find a counter-example.
I didn't find an explicit one, but I did find what would not work, so I am sharing it here to save someone else from losing time in this direction.
I tried to find two groups in which there would be no elements of order $>3$. Then we would have two non-isomorphic groups with different numbers of elements of order $2$ and of order $3$ respectively, and we would have won.
But such a group is of one of the following two types (both of which are acceptable groups with respect to our conditions):
Type $T$. The group is isomorphic to the set
$$(\mathbb Z/3\mathbb Z)^\Gamma\times \{\pm 1\}$$
with the product:
$$(h,a)\cdot (k,b)=(hk^a,ab)$$
where $(\mathbb Z/3\mathbb Z)^\Gamma$ is the set of maps from a given set $\Gamma$ (of cardinality $n$) to $\mathbb Z/3\mathbb Z$.
We can note that if $n=1$, then a group of type $T$ is isomorphic to $\mathfrak S_3$.
Type $S$. The group is isomorphic to the set
$$\left((\mathbb Z/2\mathbb Z)^2\right)^\Gamma\times \mathbb Z/3\mathbb Z$$
with the product:
$$(h,a)\cdot (k,b)=(h\cdot\alpha^a(k),a+b)$$
where we think of $\mathbb Z/3\mathbb Z$ as $\{0,1,2\}$ under addition modulo $3$, and $\alpha$ is a cyclic permutation of three nonidentity elements of $(\mathbb Z/2\mathbb Z)^2$.
We can note that if $n=1$, then a group of type $S$ is isomorphic to $\mathfrak A_4$.
We can notice that a group of type $S$ and a group of type $T$ cannot have the same order, except possibly when $n=1$, and that case does not work either.
Conclusion: there is no counter-example among groups whose non-identity elements all have order $2$ or $3$.
I hope someone finds a better result.
Sources : one article, another article and a final article. |
Prove any DFA with $k<2^n$ states does not accept the strings with an odd number of some character. | Let $A = \{a_1, \ldots, a_n\}$ be the alphabet. You can use the fact that the states of the minimal DFA of $L$ can be identified with the left quotients
$$
u^{-1}L = \{v \in A^* \mid uv \in L\}
$$
where $u$ ranges over $A^*$.
Now, for $k_1, \ldots, k_n \in \{0,1\}$, consider the languages
$$
L(k_1, \ldots, k_n) = \{ u \in A^* \mid |u|_{a_1} \equiv k_1 \bmod 2,\ \dotsm\ , |u|_{a_n} \equiv k_n \bmod 2\}
$$
These $2^n$ languages are clearly distinct (and even pairwise disjoint) and I let you verify that all of them are of the form $u^{-1}L$ for some $u$. Therefore, the minimal DFA of $L$ has at least $2^n$ states (and actually exactly $2^n$ states, as you have shown).
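For a concrete check, here is a small sketch (my own code; for concreteness I take $L$ to be the words in which every letter occurs an odd number of times, with $n=3$). It builds the parity-vector automaton explicitly, and the standard table-filling argument confirms that all $2^n$ states are pairwise distinguishable:

from itertools import product

n = 3
states = list(product((0, 1), repeat=n))   # one parity bit per letter
accepting = {(1,) * n}                     # every letter occurs an odd number of times

def step(state, letter):
    s = list(state)
    s[letter] ^= 1                         # reading letter a_i flips parity i
    return tuple(s)

# Table-filling: mark pairs distinguished by the empty word, then propagate.
marked = {(p, q) for p in states for q in states
          if (p in accepting) != (q in accepting)}
changed = True
while changed:
    changed = False
    for p, q in product(states, repeat=2):
        if (p, q) not in marked and any((step(p, a), step(q, a)) in marked for a in range(n)):
            marked.add((p, q))
            changed = True

pairs = [(p, q) for p, q in product(states, repeat=2) if p != q]
print(all(pq in marked for pq in pairs))   # True: the minimal DFA needs all 2**n states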
What is the best rest position for two elevators in a 10-story building? | In general, elevator scheduling is a seriously difficult problem (see, e.g., this presentation and its list of references for some idea of its complexity). Your example situation starts to get at why it's hard: in order to know how often this happens, you need to know how quickly the elevator travels, relative to how often people arrive. And once it happens, maybe it would be better to dispatch the 10th-floor elevator, but maybe the 1st-floor elevator will do its thing fairly quickly and you should just wait for it to be done. In order to answer these kinds of questions, you need lots of data about your apartment's specific situation; a theoretical answer based on a few assumptions isn't going to get you anywhere useful.
But the hard part is determining what to do when the elevators are busy. You're asking about which floors you want the elevators to rest on, which is something that only matters when they are not particularly busy. And in that case, along with your assumptions, we can come up with something tractable.
So, in addition to the assumptions you made, I will also assume that:
Only one person wants to use the elevator at a time, so the elevators are always on their resting floors when someone wants to use them.
Elevators move from floor to floor at constant speed, so a passenger's wait time is proportional to the distance to the closest elevator. (This assumption could be removed and it wouldn't make the problem much more difficult, but it's hard to know what to replace it with.)
Additionally, unless we are in a certain classic mathematician joke, your assumptions imply that a passenger will want to go up or down with equal probability.
This is enough to solve the problem. We take a representative population consisting of one person on each higher floor wanting to go down, and 9 people on the ground floor wanting to go up. Over this population, we minimize the total waiting time; i.e., the total distance to the closest elevator. The following simple python script does this for each possible elevator configuration, and then tells you which one is best:
least_wait_time = float('inf')
# 9 passengers on the ground floor (floor 1) plus one on each of floors 2-10.
passengers = [1] * 9 + list(range(2, 11))

def wait_time(passenger, elevator1, elevator2):
    return min(abs(passenger - elevator1), abs(passenger - elevator2))

for high_elevator in range(2, 11):
    for low_elevator in range(1, high_elevator):
        total_wait_time = sum(wait_time(passenger, low_elevator, high_elevator)
                              for passenger in passengers)
        print('Elevator positions: ' + str((low_elevator, high_elevator)))
        print('Total wait time: ' + str(total_wait_time))
        if total_wait_time < least_wait_time:
            best_elevators = (low_elevator, high_elevator)
            least_wait_time = total_wait_time
        print('')

print('Optimal elevator position: ' + str(best_elevators))
print('Optimal wait time: ' + str(least_wait_time))
It turns out that the optimal thing to do, given all these assumptions, is to put one elevator on floor 1 and the other one on floor 7. This gives a total wait time of 15 (i.e., over the population of 18 people, the nearest elevator will on average start 15/18 floors away).
Why is this different from your result? Because we're not assuming the elevator on floor 1 is used solely to go up. If someone wants to come down from floor 2 or 3, the 1st-floor elevator is already closer to them than the higher-up elevator even when the higher elevator is at floor 6, so it's not useful to keep the higher elevator close to them. So we might as well move the higher elevator up a bit to keep the people on really high floors happy. |
True random number set | By definition, to have randomness, you need to have a set of possible outcomes, and a probability measure that describes how likely each element of the set is. In this way, all random variables have some structure. The outcome of each individual experiment will be random, but if you make infinitely many experiments, the frequency of different occurrences will always converge to the given structure.
You can try to get rid of such structure, but you'll quickly run into problems. For example, even having each positive number happen with equal probability without an upper limit is problematic: for any big number $M$, the length of the segment $[0, M]$ is finite, while $[M, \infty)$ is infinite, so you're guaranteed to get numbers bigger than $M$. Regardless of how big $M$ is.
That said, there are many, varied distributions. Consider the Cauchy distribution, whose density in its simplest form (up to the normalizing constant $\frac{1}{\pi}$) is $f(x)=\frac{1}{1 + x^2}$.
When drawing samples from this distribution, your mode is at 0, and if you draw lots and lots of samples, you'll have twice as many samples near 0 as near 1. The histogram will follow a bell-like shape centered around 0 (though declining much more slowly than a normal).
However, the Cauchy distribution doesn't have a mean. If you try to compute it with an integral, the integral will diverge. And if you draw $n$ samples and average them, you'll get a significantly different number each time; the sample mean is itself a Cauchy random variable.
This partially satisfies your inquiry. The histogram will converge to the distribution curve. But the mean after 100 samples can be much different than the mean after 1000, and after 10000. |
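To see this in action, here is a quick simulation (my own sketch):

import numpy as np

rng = np.random.default_rng(0)
for n in (100, 1000, 10000, 100000):
    print(n, rng.standard_cauchy(n).mean())

# The printed sample means jump around instead of settling down as n grows,
# unlike for a distribution that has a mean.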
Intuition from Boundary Point Lemma (Hopf Lemma) | The Hopf lemma tells you that (under the required assumptions) if the maximum of the solution $u$ is attained at a point $x_0$ on the boundary $\partial\Omega$, then the derivative of the function $u$ in the outward normal direction at $x_0$ has to be positive, i.e.
$$ \frac{\partial u}{\partial \mathbf{n}}(x_0):= \langle\nabla u(x_0),\mathbf{n}\rangle > 0. $$
The intuition one gets from here is that as you approach $x_0\in\partial\Omega$ from the interior of $\Omega$, the derivative of $u$ has to be positive, since otherwise it would be impossible for the maximum of $u$ to be on $\partial\Omega$. Either $\nabla u(x_0)=0$ and then $u$ is constant in the whole domain (this is one case of the Maximum Principle), or $u$ has its maximum at $x_0\in\partial\Omega$, and hence if you move from $x_0$ towards the interior of $\Omega$ (i.e. in the direction $-\mathbf{n}$), $u$ has to decrease, and thus $\langle\nabla u(x_0),-\mathbf{n}\rangle < 0$.
So, if I understood your question correctly, the answer is yes.
Hope it helps! |
First Order Logic, help in Unification and substitution process | When unifying you don't 'derive' the one from the other ... rather, you make substitutions for each of the terms so they match up.
So, use:
$X \leftarrow c$, $M \leftarrow c$, and $V \leftarrow e(c)$ |
Proving $\frac{1}{\cos^2\frac{\pi}{7}}+ \frac {1}{\cos^2\frac {2\pi}{7}}+\frac {1}{\cos^2\frac {3\pi}{7}} = 24$ | The roots of $x^6+x^5+\ldots+x+1$ over $\mathbb{C}$ are $x=\exp\left(\frac{2k\pi\text{i}}{7}\right)$ for $k=1,2,\ldots,6$. Let $y:=x+\frac{1}{x}$. Then, $$\frac{x^6+x^5+\ldots+x+1}{x^3}=\left(y^3-3y\right)+\left(y^2-2\right)+y+1=y^3+y^2-2y-1\,.$$
Hence, the roots of $y^3+y^2-2y-1$ are $y=y_k:=2\,\cos\left(\frac{2k\pi}{7}\right)$ for $k=1,2,3$. Observe that $$S:=\sum_{k=1}^3\,\frac{1}{\cos^{2}\left(\frac{k\pi}{7}\right)}=\sum_{k=1}^3\,\frac{2}{1+\cos\left(\frac{2k\pi}{7}\right)}=4\,\sum_{k=1}^3\,\frac{1}{2+y_k}\,.$$
Since $y_k^3+y_k^2-2y_k-1=0$, we have $$y_k^2-y_k=\frac{1}{2+y_k}$$ for all $k=1,2,3$. Consequently,
$$S=4\,\sum_{k=1}^3\,\left(y_k^2-y_k\right)\,.$$
The rest is easy: by Vieta's formulas for $y^3+y^2-2y-1$, we have $y_1+y_2+y_3=-1$ and $y_1y_2+y_2y_3+y_3y_1=-2$, so $\sum_{k=1}^3 y_k^2=(-1)^2-2(-2)=5$, and therefore $S=4\left(5-(-1)\right)=24$.
In general, let $n$ be a nonnegative integer and suppose we are evaluating the sums $\displaystyle \sum_{k=0}^{2n}\,\frac{1}{\cos^{2}\left(\frac{k\pi}{2n+1}\right)}$ and $\displaystyle \sum_{k=1}^{n}\,\frac{1}{\cos^{2}\left(\frac{k\pi}{2n+1}\right)}$. The roots of $x^{2n+1}-1$ over $\mathbb{C}$ are $x=x_k:=\exp\left(\frac{2k\pi\text{i}}{2n+1}\right)$, for $k=0,1,2,\ldots,2n$. Observe that
$$\frac{1}{1+x_k}=\frac{1}{2}\,\left(\frac{1+x_k^{2n+1}}{1+x_k}\right)=\frac{1}{2}\,\sum_{j=0}^{2n}\,(-1)^j\,x_k^j=\frac{2n+1}{2}-\frac{1}{2}\,\sum_{j=1}^{2n}\,\left(1-\left(-x_k\right)^j\right)\,.$$
That is,
$$\frac{1}{\left(1+x_k\right)^2}=\frac{2n+1}{2}\left(\frac{1}{1+x_k}\right)-\frac{1}{2}\,\sum_{j=1}^{2n}\,\sum_{i=0}^{j-1}\,(-1)^i\,x_k^i\,,$$
or equivalently,
$$\frac{1}{\left(1+x_k\right)^2}=\frac{2n+1}{4}\,\sum_{j=0}^{2n}\,(-1)^j\,x_k^j-\frac{1}{2}\,\sum_{j=1}^{2n}\,\sum_{i=0}^{j-1}\,(-1)^i\,x_k^i\,.$$
Consequently,
$$\frac{x_k}{\left(1+x_k\right)^2}=\frac{2n+1}{4}\,\sum_{j=0}^{2n}\,(-1)^j\,x_k^{j+1}-\frac{1}{2}\,\sum_{j=1}^{2n}\,\sum_{i=0}^{j-1}\,(-1)^i\,x_k^{i+1}=\frac{2n+1}{4}+f\left(x_k\right)$$ for some polynomial $f(x)$ of degree at most $2n$ without the constant term. Then, $$\sum_{k=0}^{2n}\,\frac{x_k}{\left(1+x_k\right)^2}=\frac{(2n+1)^2}{4}+\sum_{k=0}^{2n}\,f\left(x_k\right)\,.$$
It is evident that $\displaystyle\sum_{k=0}^{2n}\,f\left(x_k\right)=0$. Furthermore, $$\frac{x_k}{\left(1+x_k\right)^2}=\frac{1}{2}\,\left(\frac{1}{1+\cos\left(\frac{2k\pi}{2n+1}\right)}\right)=\frac{1}{4}\,\left(\frac{1}{\cos^2\left(\frac{k\pi}{2n+1}\right)}\right)\,.$$ Ergo,
$$\frac{1}{4}\,\sum_{k=0}^{2n}\,\frac{1}{\cos^2\left(\frac{k\pi}{2n+1}\right)}=\sum_{k=0}^{2n}\,\frac{x_k}{\left(1+x_k\right)^2}=\frac{(2n+1)^2}{4}\,.$$ This shows that $$\sum_{k=0}^{2n}\,\frac{1}{\cos^2\left(\frac{k\pi}{2n+1}\right)}=(2n+1)^2\,.$$ Furthermore, we have
$$\sum_{k=1}^n\,\frac{1}{\cos^2\left(\frac{k\pi}{2n+1}\right)}=\frac{(2n+1)^2-1}{2}=2n(n+1)\,.$$ |
Infinite number of Proofs in Propositional Calculus? | A proof is a sequence of steps that leads to desired conclusion at the end. Nobody says that all these steps have to be relevant. For instance, just before concluding the proof you can put lots of logical tautologies. For a human reader, they will obviously be irrelevant to the proof, but nevertheless, the new proof, the bigger part of which is pointless, will still be a correct proof.
For instance, here's a proof that all numbers divisible by 4 are also divisible by 2:
Let $n$ be a number divisible by $4$. It means there's some $k$ such that $n = 4k$. But $4k = 2\cdot 2k$. Putting $l = 2k$, we have $n = 2l$. Thus $n$ is divisible by 2.
Here's another proof.
Let $n$ be a number divisible by $4$. It means there's some $k$ such that $n = 4k$. But $4k = 2\cdot 2k$. Putting $l = 2k$, we have $n = 2l$. We see that $n = 2l$. We also have $4k = 2 \cdot 2k$. We know that $n$ is divisible by $4$. We see that $n = 2l$. Thus $n$ is divisible by 2.
We can obviously make even longer (indeed, arbitrarily long) proofs.
Prove that $b_n$ is a Cauchy's sequence | You can check that $$b_{n+1}=1+\sum_{k=1}^n \left(1+\frac 1 k\right)^{-k^2}$$
and since
$$\begin{split}\left(1+\frac 1 k\right)^{-k^2}&=\exp\left(-k^2\ln\left(1+\frac 1 k\right)\right)\\
&=\exp\left(-k^2\left(\frac 1 k+\mathcal O\left(\frac 1 {k^2}\right)\right)\right)\\
&=\exp\left(-k+\mathcal O\left(1\right)\right)\\
&=\mathcal O(e^{-k})
\end{split}$$
the series converges, and so does the $\{b_n\}$ sequence. |
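Numerically, the partial sums $b_{n+1}$ stabilize very quickly, consistent with the $\mathcal O(e^{-k})$ bound on the terms (a quick sketch of my own):

total = 1.0
for k in range(1, 31):
    total += (1 + 1 / k) ** (-k ** 2)   # the k-th term of the series
    if k in (1, 2, 5, 10, 30):
        print(k, total)
# Successive printed values agree in more and more decimal places.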
Prove that for any real numbers $x$ and $y$ if $x \neq 0$, then if $y=\frac{3x^2+2y}{x^2+2}$ then $y=3$. | Do some algebraic manipulation: $$y(x^2+2)=3x^2+2y \iff 2y+x^2y-3x^2-2y=0 \iff x^2y-3x^2=0 \iff x^2(y-3)=0,$$ so either $x=0$ (discarded because of the hypothesis) or $y=3$.
The denominator $x^2+2$ cannot be zero, because $x \in \mathbb{R}$ implies $x^2+2 \geq 2 > 0$.
Partial Integration $ \int \frac{x\cos x}{\sin^3x}dx $ | HINT:
$$\int x\cdot\frac{\cos x}{\sin^3x}dx=x\int\frac{\cos x}{\sin^3x}dx-\int\left(\frac{d}{dx}(x)\int\frac{\cos x}{\sin^3x}dx\right)dx$$
For $\int\dfrac{\cos x}{\sin^3x}dx$ write $\sin x=u$ |
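In case you want to check the end result (my own working of the hint): with $u=\sin x$, $du=\cos x\,dx$,
$$\int\frac{\cos x}{\sin^3x}dx=\int u^{-3}\,du=-\frac{1}{2\sin^2 x}+C,$$
so the integration by parts above gives
$$\int \frac{x\cos x}{\sin^3 x}dx = -\frac{x}{2\sin^2 x}+\frac{1}{2}\int\frac{dx}{\sin^2 x} = -\frac{x}{2\sin^2 x}-\frac{\cot x}{2}+C.$$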
Subgroup order of $\mathtt{SmallGroup}(576,8661)$ | This GAP-output :
gap> Collected(List(AllSubgroups(SmallGroup(576,8661)),Order));
[ [ 1, 1 ], [ 2, 63 ], [ 3, 64 ], [ 4, 651 ], [ 8, 1395 ], [ 9, 64 ],
[ 12, 336 ], [ 16, 651 ], [ 32, 63 ], [ 48, 84 ], [ 64, 1 ], [ 192, 1 ],
[ 576, 1 ] ]
gap> DivisorsInt(576);
[ 1, 2, 3, 4, 6, 8, 9, 12, 16, 18, 24, 32, 36, 48, 64, 72, 96, 144, 192, 288,
576 ]
gap>
shows, for example, that there are no subgroups of order $6$. Comparing the two lists, the missing subgroup orders are exactly $6$, $18$, $24$, $36$, $72$, $96$, $144$ and $288$.
How should I proceed on proving that vectors pointing to the vertices of an n-simplex all have negative dot products with one another? | [Note: Some of my $n$s are off by one because the simplexes I was thinking about weren't the standard simplexes. I don't think that changes the core idea though, and this is only a sketch of an idea.]
I haven't worked this all the way out, but it's got an $n$ in it, so why not try induction?
It feels like you should be able to express the vectors for an $n+1$ simplex as {the vectors for an $n$ simplex} plus (the vector from the $n$ center to the $n+1$ center), along with one new vector for the new vertex. Then you have to show 1) you still have a negative dot product for the old vectors, even after adding on the center-to-center vector, and 2) the vector for the new vertex has a negative dot product with all the previous vectors.
Edit - Adding details on the induction: I think of an n-simplex as lying in $\mathbb{R}^n$, and you build an $(n+1)$-simplex by mapping $\mathbb{R}^n$ into $\mathbb{R}^{n+1}$ as
$$(x_1, ... x_n) \mapsto (x_1, ... x_n, 0)$$
and then you add a new vertex that becomes the "top of the pyramid" and the embedded $n$-simplex becomes the base of the $n+1$-simplex. BTW, as I'm explaining this I'm realizing you could either track the vertices or track the vectors from your center to the vertices when doing your induction. I'm not sure which would be cleaner/easier.
I'll confess that I'm still not sure which class of simplexes you want to prove this for, so I'm not even sure that this construction always applies to all the cases you're interested in, but it works for the standard simplexes we used in algebraic topology, and it might let you figure out how to construct counter-examples if you find that you can't carry the induction forward. |
Find all positive integers $n$ such that $n+\varphi{(n)}=2\tau{(n)}$ | If $n+\phi(n)=2\tau(n)$, then we have that
$$n<2\tau(n)$$
Let $n=p_1^{a_1}\cdots p_s^{a_s}$ be the prime factorization of $n$. Then
$$\prod_{i=1}^s p_i^{a_i} < 2\prod_{i=1}^s (a_i+1) \implies \prod_{i=1}^s \left(\frac{p_i^{a_i}}{a_i+1}\right) < 2.$$
We note that
$$p_i^{a_i} = ((p_i-1)+1)^{a_i} \geq 1+(p_i-1)a_i \geq 1+a_i,$$
(where we have obtained the first inequality from the binomial theorem), so each of the terms in the product is $\geq 1$, which implies that each of them must be less than $2$. We see that, if $p_i>3$,
$$p_i^{a_i} \geq 1+(p_i-1)a_i \geq 1+4a_i > 2a_i+2,$$
which means the terms are $>2$ if $p_i>3$, a contradiction. Thus, $p_i$ is $2$ or $3$. If $p_i=2$, it can easily be seen by inspection that the terms remain $<2$ only for $a_i=1$ or $a_i=2$, and if $p_i=3$, it can be seen that the terms are $<2$ iff $a_i=1$. Thus, the only possible answers are
$$2^x 3^y$$
where $0\leq x\leq 2$ and $0\leq y \leq 1$. By simple case-checking, we see that the only ones of these which work are $1$, $4$, and $6$, finishing the proof.
Remark: This problem is substantially easier than the question of $n+\tau(n)=2\phi(n)$, as that problem relies on the bound that $n/2>\phi(n)$ for most $n$, while this one relies on the bound that $n/2>\tau(n)$ for most $n$. While the second only has finitely many exceptions, the first is false infinitely often. |
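A brute-force confirmation (my own check) that $1$, $4$ and $6$ are the only solutions up to $10^4$:

def phi(n):
    # Euler's totient by trial-division factorization.
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def tau(n):
    # Number of divisors by trial-division factorization.
    count, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            e = 0
            while m % p == 0:
                m //= p
                e += 1
            count *= e + 1
        p += 1
    if m > 1:
        count *= 2
    return count

print([n for n in range(1, 10001) if n + phi(n) == 2 * tau(n)])  # [1, 4, 6]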
Show that $\sqrt{x^TQx+1}$ is convex for positive definite $Q$ | Here is a easy way to show the convexity by using convex function composition rules.
$\Vert x \Vert_2$ is obviously convex.
$\sqrt{x^2+1}$ is a composition of $\Vert x \Vert_2$ of dimension 2 and an affine transformation
$$ g(x) = \begin{bmatrix}x_1 \\ 1\end{bmatrix} = \begin{pmatrix}1 & 0 \\ 0 & 0\end{pmatrix}\begin{bmatrix}x_1 \\ x_2\end{bmatrix} + \begin{bmatrix}0 \\ 1\end{bmatrix}.$$ Hence $\sqrt{x^2+1}$ is convex. (If you want, you can use the second derivative to verify its convexity as well; I am trying to minimize mathematical manipulations here.)
$\sqrt{x^TQx} \equiv \Vert x \Vert_Q$ is a norm, since $Q$ is positive definite. Hence it is convex.
Further note $\sqrt{x^2+1}$ is an increasing function of $x$ for $x\ge0$. By the composition rule $\sqrt{{(\sqrt{x^TQx})}^2+1} = \sqrt{x^TQx+1}$ is convex.
Note that if you are not convinced that $\sqrt{x^TQx} \equiv \Vert x \Vert_Q$ is convex, you can convince yourself by noting that $\Vert x\Vert_2$ is convex, and its composition with the linear transformation $g(x)=\sqrt{\Lambda}\,Ux$ is also convex, where $Q=U^T\Lambda U$ and $\Lambda$ is a diagonal matrix with positive entries, since $Q$ is positive definite.
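A quick numerical sanity check of midpoint convexity, with a random positive definite $Q$ (my own sketch):

import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
Q = A @ A.T + 4 * np.eye(4)              # positive definite by construction

def f(x):
    return np.sqrt(x @ Q @ x + 1)

violations = 0
for _ in range(10000):
    x, y = rng.normal(size=4), rng.normal(size=4)
    if f((x + y) / 2) > (f(x) + f(y)) / 2 + 1e-12:
        violations += 1
print(violations)  # 0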
Can Fractional linear transformation map collinear points to 'elliptically distributed points'? | For the purposes of a linear fractional transformation a line is a type of circle (like a circle with infinite radius). So, lines are mapped to circles (or lines). So, one way to put this is that linear fractional transformations map generalized circles (lines or circles) to generalized circles.
Edit: To answer the comment, no, I don't believe so. In that example, the real axis (a generalized circle) would be mapped to an ellipse (not a generalized circle). |
For an even and $2\pi$ periodic function, why does $\int_{0}^{2\pi}f(x)dx = 2\int_{0}^{\pi}f(x)dx $ | Hint:
We have,
$$\int_{0}^{2\pi}\frac{1}{a+b\cos x}dx$$
$$=\int_{0}^{\pi} \frac{1}{a+b \cos x} dx+\int_{\pi}^{2\pi} \frac{1}{a+b \cos (x-2\pi)} dx$$
Can you see why? What happens if you let $x-2\pi=u$ on the second part?
\begin{align} \int_{0}^{2\pi} \frac{1}{a+b\cos x}dx &=\int_{0}^{\pi} \frac{1}{a+b\cos x}dx+\int_{\pi}^{2\pi} \frac{1}{a+b\cos x}dx \\ &=\int_{0}^{\pi} \frac{1}{a+b\cos x}dx+\int_{\pi}^{2\pi} \frac{1}{a+b\cos (x-2\pi)}dx \\ &=\int_{0}^{\pi} \frac{1}{a+b\cos x}dx+\int_{-\pi}^{0} \frac{1}{a+b\cos x} dx \\ &=\int_{-\pi}^{\pi} \frac{1}{a+b\cos x}dx \\ &=2 \int_{0}^{\pi} \frac{1}{a+b\cos x}dx \end{align}
Automorphism of $\mathbb{Q}^*$ | Hint: $$\mathbb{Q}^\times \cong (\mathbb{Z}/ 2\mathbb{Z}) \oplus \bigoplus_{p}\mathbb{Z}$$
where the direct sum is indexed over the primes. |
Greatest Common Divisor Summation Simplification | Your sum could be written
$$\sum_{d|ka}\gcd(a,d)$$
To simplify things, let's start with $k=p$ a prime that does not divide $a$.
$$\begin{align}
\sum_{d|pa}\gcd(a,d)&=\sum_{d|a}\gcd(a,d)+\sum_{d|a}\gcd(a,dp)\\
&=2\sum_{d|a}\gcd(a,d)\\
&=2\sum_{d|a}d\\
&=2\sigma(a)
\end{align}
$$
since $\gcd(a,d)=\gcd(a,dp)$. The function $\sigma$ is the sum of the divisors.
Now, let's take $k$ any integer coprime with $a$:
$$\begin{align}
\sum_{d|ka}\gcd(a,d)&=\sum_{d'|k}\sum_{d|a}\gcd(a,dd')\\
&=d(k)\sum_{d|a}\gcd(a,d)\\
&=d(k)\sigma(a)
\end{align}
$$
where $d(k)$ is the number of divisors of $k$.
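A brute-force check of this identity for small coprime $a$ and $k$ (my own sketch):

from math import gcd

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

for a in range(1, 30):
    for k in range(1, 30):
        if gcd(a, k) == 1:
            lhs = sum(gcd(a, d) for d in divisors(k * a))
            rhs = len(divisors(k)) * sum(divisors(a))
            assert lhs == rhs
print("sum_{d | ka} gcd(a, d) = d(k) * sigma(a) verified for coprime a, k < 30")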
The case when $k$ is not coprime with $a$ seems more difficult. If I find something, I'll edit this answer. |
Countability of set of positive reals with bounded sum for all finite subsets | Hint: Let $B_0$ be the set of elements of $B$ that are greater than $1$. For every positive integer $n$, let $B_n$ be the set of elements of $B$ that are in the interval $\left[\frac{1}{n+1},\frac{1}{n}\right)$.
The set $B$ has been decomposed into a countable union of sets, each of which must be finite: if some $B_n$ were infinite, a large enough finite subset of it would have a sum exceeding the given bound.
Given a polynomial, prove the set $A$, the pre-image of $\{0\}$, is a closed subset of $\mathbb{R}^2$ | Polynomials are continuous functions, so in particular, the inverse image of a closed set is closed. We know $A=p^{-1}(\{0\})$, and since $\{0\}$ is a closed set in $\mathbb{R}$, $A$ must be closed in $\mathbb{R}^2$.
replacing numbers to get final answer | Clearly after the first step you get a number with three or fewer digits, and after the second step you get a number with three or fewer digits with the last digit being $3$ or less. There are now only a very small number of possibilities for this number, and each leads to the number $123$ in at most four more steps.
Closed form for $\int_0^1 \frac{x^a dx}{(1+x^b)^c}$. | As @OL points out the integral is a hypergeometric function:
$$\frac{\, _2F_1(p+1,q;p+2;-1)}{p+1}$$
You can get this by expanding the denominator in a series (by the binomial theorem), integrating term by term, and observing that you get a hypergeometric series. If there is a relationship of some sort between $p$ and $q,$ this might be simplifiable. |
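To flesh that sketch out (my own elaboration; I write $s=\frac{a+1}{b}$ in the question's notation, so the displayed $p,q$ correspond to the case $b=1$ with $p=a$, $q=c$): expanding by the binomial series and integrating term by term,
$$\int_0^1 \frac{x^a\,dx}{(1+x^b)^c}=\sum_{n\ge 0}\binom{-c}{n}\int_0^1 x^{a+bn}\,dx=\sum_{n\ge 0}\binom{-c}{n}\frac{1}{a+bn+1}.$$
The ratio of consecutive terms is $-\frac{(n+c)(n+s)}{(n+s+1)(n+1)}$, which identifies the sum as the hypergeometric value
$$\frac{1}{a+1}\,{}_2F_1\left(c,\,s;\,s+1;\,-1\right).$$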
Probability of edges removal decreasing Chromatic number | I'm still not sure what an answer to this question looks like, since the probability depends a lot on the structure of $G$, but here are some deeper observations.
We could ask about the behavior of a typical graph with chromatic number $k$ (e.g., the behavior of the Erdős–Rényi graph $\mathcal G_{n,1/2}$, conditioned on $\chi(\mathcal G_{n,1/2}) = k$). For $k$ not too large, this is approximately the same as choosing a uniformly random $k$-partite graph, with the caveat that actually the size of each part will not be $n/k$, but maybe $n/k \pm O(\sqrt{n})$ or something like that (I don't think that this consideration changes the behavior significantly).
If $G$ is a uniformly random $k$-partite graph, then removing each edge of $G$ with probability $\frac12$ leaves a $k$-partite graph where each edge is present with probability $\frac14$. Then the expected number of $k$-cliques in the resulting graph is $$\left(\frac{n}{k}\right)^k \cdot 4^{-\binom k2} = \left(\frac{n}{k\, 2^{k-1}}\right)^k$$ so there is some threshold for $k$, asymptotically $O(\log n)$, below which we are nearly guaranteed to see one of these cliques. So for these graphs, the chromatic number almost never changes.
But in another sense, these graphs are very atypical graphs with chromatic number $k$: they are far too dense for this chromatic number to arise naturally. So instead, we could look at $\mathcal G_{n,p}$ for a range of $p$ and just deal with the chromatic number we end up getting.
In that case, randomly removing edges gives us a $\mathcal G_{n,p/2}$ coupled with our original random graph, and this almost certainly has a different chromatic number. For $p \gg \frac1n$, we have $\chi(\mathcal G_{n,p}) \sim \frac{np}{2\log np}$, with concentration on an interval of length $O(\sqrt n)$ or so, and cutting $p$ in half guarantees that these intervals don't overlap.
For $p = \frac dn$, we have two-point concentration of the chromatic number on a value of $k$ satisfying $k \sim \frac{d}{\log d}$. Somewhere between $k=3$ and $k=5$, these values can overlap, but for larger $k$ (and larger $d$), we once again are guaranteed with high probability (in $n$) that the thinned-out graph has smaller chromatic number.
However, I think we can use this fact about sparse random graphs to create a sparse graph $G$ with chromatic number $k$ for any constant $k$ that will keep its chromatic number when thinned out. Starting with a $\mathcal G_{n,d/n}$ that has chromatic number $k$ with high probability, replace each vertex by $kr$ vertices (for a value of $r$ I'll choose later) and each edge by a copy of $K_{kr,kr}$, only inflating the vertex count by $kr$ and the edge count by $(kr)^2$.
Each group of vertices in this blow-up has a majority color present on at least $r$ of the vertices. After the thinning-out happens, I'll say that an edge survives if, in the thinned out $K_{kr,kr}$, there is no group of $r$ vertices on one side and $r$ vertices on the other with no edges between them. An edge survives with probability
$$ s \ge 1 - \binom{kr}{r}^2 2^{-r^2} \ge 1 - \left(\frac{(kr)^2}{2^r}\right)^r.$$
For $r = C \log_2 k$ with $C \ge 3$ or so, this survival probability is better than $1 - 2^{-C}$ (we might have to take larger $C$ when $k$ is small). But then, the graph we get by going back to the $n$-vertex graph and keeping edges for which the corresponding $K_{kr,kr}$ survives is a $\mathcal G_{n,sd/n}$ with $ps$ very close to $p$. For each $k$, there is some $C$ we can take such that $\chi(\mathcal G_{n,sd/n}) = \chi(\mathcal G_{n,d/n})$ with high probability, and therefore the blow-up keeps its chromatic number after being thinned out. |