How to calculate the lower and upper bound of the error in estimating a ratio with low sample size? | If you are trying to get a confidence interval for the 'population' conversion rate based on $1$ sale among $21$ clicks, you can view this as a confidence interval for
a binomial proportion. The relevant Wikipedia page, linked in @tomi's comment,
discusses several styles of confidence intervals.
In R, the procedure prop.test (with correct=FALSE, so no continuity correction) gives the Wilson score 95% CI $(0.0085, 0.2267)$ as shown below. [The suffix $conf.int shows just the CI.]
prop.test(1, 21, correct=FALSE)$conf.int
[1] 0.008456 0.226693
attr(,"conf.level")
[1] 0.95
A Bayesian interval estimate, based on the Jeffreys noninformative prior $\mathsf{Beta}(.5,.5),$ is often used as a frequentist CI.
For your data, this 95% CI $(0.0052,0.2018)$
uses quantiles $.025$ and $.975$ of the Bayesian
posterior distribution $\mathsf{Beta}(.5+x, .5+n-x)
= \mathsf{Beta}(1.5, 20.5).$
qbeta(c(.025, .975), 1.5, 20.5)
[1] 0.005187043 0.201755968
The Wald CI has the form $\hat p \pm 1.96\sqrt{\frac{\hat p(1-\hat p)}{n}},$ where $\hat p = x/n.$ This is an asymptotic interval meant for use with large $n$ and so
one does not expect reliable results with $n = 21.$
For your data the Wald CI is taken as $(0, 0.1387),$
suppressing the impossible negative lower limit.
n = 21; x = 1; p.h = x/n
CI.wald = p.h + qnorm(c(.025,.975))*sqrt(p.h*(1-p.h)/n)
CI.wald
[1] -0.04346329 0.13870138
Agresti and Coull (1998) proposed a modification of Wald's CI,
using point estimate $\tilde p = (x+2)/(n+4)$ to obtain the interval
$\tilde p \pm 1.96\sqrt{\frac{\tilde p(1-\tilde p)}{n+4}},$ which has a more reliable 95% coverage probability for small $n$ than the Wald CI.
The Agresti
interval is currently used in many elementary and intermediate level statistics texts because it gives reasonably good results and can be computed without specialized software (e.g., using just a mobile phone calculator). For your data, this 95% CI amounts to
$(0, 0.2474).$
n = 21; x = 1; p.e = (x+2)/(n+4)
CI.ac = p.e + qnorm(c(.025,.975))*sqrt(p.e*(1-p.e)/(n+4))
CI.ac
[1] -0.007382581 0.247382581
Note: I have shown endpoints of 95% CIs to four
decimal places to make clear the differences among
types of intervals. In practice, with small $n,$ it might be appropriate to show two-place accuracy. To one place, the Wilson, Jeffreys, and Agresti intervals are all
$(0.0, 0.2).$ |
Surjective, Bijective, Injective Examples | As you say, the easiest way to do it is to draw up a table of the values that the function $f$ takes in each case.
If all of the values 0 to 8 appear in your table, then $f$ is surjective.
If no value is repeated, then $f$ is injective.
If both, then $f$ is bijective.
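For a small concrete check, here is a quick sketch in Python (the function $f(x) = 2x \bmod 9$ on $\{0,\dots,8\}$ is just a hypothetical example):
# Tabulate f and apply the "table" tests described above.
domain = range(9)
table = [(2 * x) % 9 for x in domain]
surjective = set(table) == set(domain)     # every value 0..8 appears
injective = len(set(table)) == len(table)  # no value is repeated
print(table, surjective, injective, surjective and injective)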
Of course, the "table" method only works for (small!) finite sets, and for these sets $f$ will either be bijective, or not injective and not surjective. |
transition maps of a principal bundle are smooth | Let $F$ denote the fiber of the bundle, and fix some $f\in F$. Let $$\varphi_\alpha:E|_{U_\alpha}\to U_\alpha\times F,\:\varphi_\beta:E|_{U_\beta}\to U_\beta\times F$$ denote the relevant local trivializations. By definition, we have for every $p\in U_\alpha\cap U_\beta$ $$\varphi_\alpha\circ\varphi_\beta^{-1}(p,f)=(p, g_{\alpha\beta}(p)(f)),$$ and now, as said in the question, smoothness follows from the implicit function theorem. |
Question concerning rank of a matrix | There is a theorem of linear algebra: For any $m \times n$ matrix $A$,
$$ \operatorname{rank}(A) + \operatorname{nullity}(A) = n,$$
or in terms of the system of equations $A\mathbf{x} = \mathbf{0}$,
$$ \textrm{(number of independent equations)} + \textrm{(number of independent parameters in the solution)} = \textrm{(number of columns of $A$)} $$
At any rate, if $A\mathbf{x} = \mathbf{b}$ has more than one solution, then
so will the system $A\mathbf{x} = \mathbf{0}$, meaning that $\operatorname{nullity}(A) > 0$. But this only shows that $\operatorname{rank}(A) < n$. As @Jyrki Lahtonen stated in his answer, there are no restrictions on $\operatorname{rank}(A)$ with respect to $m$, the number of rows.
[added after question was edited]
For the case $m < n$, we certainly know $\operatorname{rank}(A) \leq m$. You are asking under what conditions the rows must be dependent. That condition is exactly equivalent to $\operatorname{rank}(A) < m$. And we can say more, in terms of the original system. Using the rank formula above, $\operatorname{rank}(A) = n - \operatorname{nullity}(A)$. So $\operatorname{rank}(A) < m$ forces $\operatorname{nullity}(A) > n - m$. In practical terms, the condition that $A$ has dependent rows is equivalent to the condition that there are more independent parameters in the solution than the difference between the number of columns and the number of rows.
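If a numerical illustration helps, here is a small Python sketch (the matrix is a hypothetical example with dependent rows):
import numpy as np
A = np.array([[1., 2., 3.],
              [2., 4., 6.]])          # 2 x 3, second row = 2 * first
rank = np.linalg.matrix_rank(A)       # 1
nullity = A.shape[1] - rank           # n - rank = 2
print(rank, nullity, rank + nullity)  # 1 2 3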
Hope this helps! |
Limits of integration for a joint PDF | There are two ways. One is: $$ \int_{0}^{+\infty} \int_{0}^{y} \lambda^2\exp(-\lambda\cdot y)\,dx\,dy$$ and the other is:
$$ \int_{0}^{+\infty} \int_{x}^{+\infty} \lambda^2\exp(-\lambda\cdot y)\,dy\,dx.$$
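As a sanity check, a short sympy sketch in Python confirms that both orders of integration give total mass $1$:
import sympy as sp
x, y = sp.symbols('x y', positive=True)
lam = sp.Symbol('lambda', positive=True)
f = lam**2 * sp.exp(-lam * y)
order1 = sp.integrate(f, (x, 0, y), (y, 0, sp.oo))      # dx then dy
order2 = sp.integrate(f, (y, x, sp.oo), (x, 0, sp.oo))  # dy then dx
print(sp.simplify(order1), sp.simplify(order2))         # 1 1
Both print $1$, as a joint density must. |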
Finding the area of a region defined by a polar curve that is outside another polar curve region? | The first step that you need to do is identify the measure of the angles that define the intersection point(s) of your curves. In this case, the intersections of $r=5\sin(\theta)$ and $r=4$. After equating equations and comparing the possible solutions that would make sense graphically, we see that $\theta=\arcsin(\frac{4}{5})$ and $\theta=\pi-\arcsin(\frac{4}{5})$ fit what is required.
Next, we see that the curve $r=5\sin(\theta)$ is above the curve $r=4$, so we will subtract $r=4$ from $r=5\sin(\theta)$ in our integral. This integral will be of the form $\frac{1}{2}\int_\alpha^\beta (5\sin(\theta))^2-4^2 d\theta$. By applying the angles identified above and evaluating, the integral is $$\frac{1}{2}\int_{\arcsin(\frac{4}{5})}^{\pi-\arcsin(\frac{4}{5})} 25\sin^2(\theta)-16\;d\theta=6-\frac{7\pi}{4}+\frac{7}{2}\arcsin\left(\frac{4}{5}\right)\approx3.7477$$
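A quick numerical cross-check in Python (a sketch using scipy):
from math import asin, pi, sin
from scipy.integrate import quad
alpha = asin(4 / 5)
val, _ = quad(lambda t: 0.5 * (25 * sin(t)**2 - 16), alpha, pi - alpha)
print(val, 6 - 7 * pi / 4 + 3.5 * asin(4 / 5))  # both about 3.7477
Both values agree with the closed form above. |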
Checking if a matrix defines a bounded operator | Such a matrix is called a Toeplitz matrix. There is a fascinating result on the boundedness of such matrices.
A Toeplitz matrix $$\begin{pmatrix} a_0 & a_{-1} & a_{-2} & a_{-3} & \cdots \\ a_1 & a_0 & a_{-1} & a_{-2} & \cdots \\ a_2 & a_1 & a_0 & a_{-1} & \cdots\\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}$$ is a bounded operator $\ell^2\to\ell^2$ if and only if there exists a bounded function $f\colon S^1\to\mathbb C$ such that $$a_n=\int_{S^1}\overline{z^n}f(z)~dz=i\int_0^{2\pi}e^{i(1-n)t}f(e^{it})~dt.$$ That is to say, the $a_n$'s are (up to normalization) the Fourier coefficients of $f$. |
Has the 3x3 magic square of all squares entries been solved? | The existence or not of a non-trivial integer 3x3 magic square of squares is STILL a unsolved problem.
The quoted reference to Kevin Brown's web pages only discusses an extremely special configuration of numbers, which does not exist. The page does NOT claim to prove non-existence for all possible magic squares.
If you are interested in this topic you should consult the website
http://www.multimagie.com/
which gives lots of details and references. |
de Rham cohomology as locally constant integrals modulo the trivially constant ones | In his motivation, what you are calling "locally constant path-integrals" are intended to mean locally constant with respect to ("proper") variations of the paths (he clarifies this).
That said, let $\gamma_s$ be a variation of $\gamma$ such that $\gamma_0=\gamma$. Visualizing this as a map $H: I \times I \to M$ with $H(\cdot,0)=\gamma_0(\cdot)$ and $H(\cdot,1)=\gamma_1(\cdot)$, we know that
$$\int_{I \times I} d(H^*\theta)=\int_{\partial (I \times I)}H^*\theta$$
by Stokes. The right side is $$\int_{\gamma_0} \theta-\int_{\gamma_1} \theta,$$
whereas the left side is $\int\limits_{I \times I}H^*(d\theta)=0.$ |
Angle vector in polar system represented by Cartesian vector | The unit vector $\hat \theta$ is defined as the unit vector that is normal to the position vector $\vec r$ and points in the direction of increasing $\theta$.
Write $\hat \theta = \cos(\alpha)\hat x+\sin(\alpha)\hat y$ for some angle $\alpha$. Then, we have
$$\hat \theta \cdot \hat r=\cos(\theta-\alpha)=0\implies \alpha = \theta\pm \pi/2\tag 1$$
The ambiguity of the sign in $(1)$ is resolved when enforcing that $\hat \theta$ points in the direction of increasing $\theta$, in which case we have $\alpha = \theta +\pi/2$ and therefore
$$\hat \theta = -\sin(\theta)\hat x+\cos(\theta)\hat y$$
Alternatively, we note that
$$\begin{align}
0&=\frac{d(1)}{d\theta}\\\\
&=\frac{d(\hat r\cdot \hat r)}{d\theta}\\\\
&=2\hat r\cdot \frac{d\hat r }{d\theta}\\\\
&=2\hat r\cdot (-\sin(\theta)\hat x+\cos(\theta)\hat y)
\end{align}$$
So, $\hat r$ is orthogonal to the unit vector $-\sin(\theta)\hat x+\cos(\theta)\hat y$, which points in the direction of increasing $\theta$. |
Prove or disprove that Q[√2] is a field | $$\frac{a}{a^2-2b^2}-\frac{b}{a^2-2b^2}\sqrt{2}=(a+b\sqrt{2})^{-1}$$ whenever $a$ and $b$ are not simultaneously $0$ |
Is there a non-regular depth 1 noetherian local ring with this property? | Let $k$ be a field and set $R := k[[X^{3},X^{5},X^{7}]]$ and $x := X^{3}$ and $y := X^{5}$. Then $y X^{3} = x X^{5}$ and $y X^{5} = x X^{7}$ and $y X^{7} = x (X^{3})^{3}$, hence $((x) : (y)) = \mathfrak{m}$. Moreover $t := y/x = X^{2}$ satisfies the polynomial $T^{3}-X^{6} \in R[T]$. Suppose $p \in R[T]$ is a monic polynomial of degree $2$ such that $p(X^{2}) = 0$. Then we can write $p(T) = (T-a)(T-X^{2})$ for some $a \in k((X))$ (ring of formal Laurent series). Since $a+X^{2} \in R$, the coefficient of $X^{2}$ in $a$ must be $-1$; then the coefficient of $X^{4}$ in $aX^{2}$ is $-1$ (in particular nonzero), which contradicts $X^{4} \not\in R$. |
Uniform integrability and convergence in mean question | As the question is tagged (probability-theory), I assume the underlying space of finite measure. Let $Y_n:=|X-X_n|^r$; we assume that $Y_n\to 0$ in probability. Then
$$E[Y_n]\leqslant E[Y_n\chi_{\{Y_n\leqslant R\}}]+\sup_{k\in\Bbb N}E[Y_k\chi_{\{Y_k\geqslant R\}}].$$
A consequence of the dominated convergence theorem is that $\lim_{n\to +\infty} E[Y_n\chi_{\{Y_n\leqslant R\}}]=0$ for each $R$, which gives, for each $R>0$,
$$\limsup_{n\to +\infty}E[Y_n]\leqslant \sup_{k\in\Bbb N}E[Y_k\chi_{\{Y_k\geqslant R\}}].$$
We conclude using uniform integrability assumption. |
Confused by how to derive the derivative of $f(\boldsymbol{x})=g(\boldsymbol{y})$ | From your other question I can say that it's a misunderstanding of differentiation and what is happening here. One simple way to understand it is as follows:
Take differentiable functions $x,y$ with open domain $D$ such that $\ln(y(t)) = a + b \ln(x(t))$ for any $t \in D$.
Then differentiating gives $\frac{y'(t)}{y(t)} = b \frac{x'(t)}{x(t)}$ for any $t \in D$.
If you rewrite this in Leibniz's suggestive notation you get:
$\frac{dy(t)}{y(t)\ dt} = b \frac{dx(t)}{x(t)\ dt}$
And if you treat $dt$ as a sufficiently small non-zero quantity and multiply both sides by it, you get:
$\frac{dy(t)}{y(t)} = b \frac{dx(t)}{x(t)}$
Note that this equation must be used with the understanding that the differentials are taken in the context of a small change in $t$, since we have now omitted $dt$.
Note that this is not equivalent to a small change in $x(t),y(t)$! For example if $x(t) = \sin(t)$ and $y(t) = \sin(2t)$ for any $t \in \mathbb{R}$, the curve $(x,y)$ intersects itself at $(0,0)$ and has two gradients there, one when $t = 0$ and the other when $t = \pi$.
And historically we have used bare variables to represent changing quantities so if we drop the parameter $t$ we get:
$\frac{dy}{y} = b \frac{dx}{x}$.
Note that this now makes no sense unless both differentials are taken in the same context, which now means that not only are they taken with respect to a small change in $t$ (which is now missing from the equation), we have to use this equation with the understanding that the values of $x,y$ are tied to each other, represented earlier on explicitly by the parameter $t$. In many cases in physics, however, $t$ is $x$ itself, which is expressed by the mathematically not quite right "$y$ is a function of $x$". |
Problem with unit conversion (temperature) | In your derivation, I don't see where there could be an error. However, I think the problem is clarified by thinking about the specific heat capacity in degrees Fahrenheit.
Specific heat capacity can be seen in the following way: "Informally, it is the amount of energy that must be added, in the form of heat, to one unit of mass of the substance in order to cause an increase of one unit in its temperature." [Wikipedia]
A change in degrees Celsius equals $\frac{5}{9}$ times that change in degrees Fahrenheit (as you said, $\Delta ^{\circ} F = \frac{9}{5} \Delta^{\circ}C$). For a $1^{\circ}F$ (i.e. $\frac{5}{9}{}^{\circ}C$) increase in temperature, $\frac{5}{9}$ of the amount of energy must be added. Therefore, to express the specific heat capacity in Fahrenheit, it is multiplied by $\frac{5}{9}$ (instead of divided). This results in the desired $0.454\cdot \frac{5}{9} \cdot 4.185\ [kJ]$. |
Fixed point Fourier transform (and similar transforms) | My Functional Analysis Fu has gotten bit weak lately, but I think the following should work:
The Schauder fixed point theorem says, that a continuous function on a compact convex set in a topological vector space has a fixed point. Because of isometry, the Fourier transform maps the unit ball in $L^2$ to itself. Owing to the Banach Alaoglu theorem, the unit ball in $L^2$ is compact with respect to the weak topology. The Fourier transform is continuous in the weak topology, because if $( f_n, \phi ) \to (f, \phi)$ for all $\phi \in L^2$, then
$$
(\hat{f}_n, \phi) = (f_n, \hat{\phi}) \to (f, \hat{\phi}) = (\hat{f}, \phi).
$$
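As a finite-dimensional illustration (a Python sketch, not part of the argument above): the unitary DFT $F$ satisfies $F^4 = I$, so averaging any vector over the powers of $F$ produces a fixed point.
import numpy as np
rng = np.random.default_rng(0)
v = rng.standard_normal(8).astype(complex)
def F(u):
    return np.fft.fft(u, norm="ortho")  # unitary DFT
w = v + F(v) + F(F(v)) + F(F(F(v)))     # F w = w because F^4 = I
print(np.allclose(F(w), w))             # True
The same averaging trick works for the continuous Fourier transform, which also satisfies $\mathcal F^4 = I$. |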
Trouble expanding a del operator expression | Okay, first you've done a lot of voodoo here. Let's try to simplify this from first principles.
First, consider the $u \cdot \nabla u$ part--take as convention that $u \cdot \nabla u = (u \cdot \nabla) u$ for the rest of this post.
We can expand this by an identity to
$$u \cdot \nabla u = \frac{1}{2} \nabla u^2 - u \times (\nabla \times u)$$
Now we can take a divergence:
$$\nabla \cdot (u \cdot \nabla u) = \frac{1}{2} \nabla^2 u^2 - \nabla \cdot [u \times (\nabla \times u)]$$
That second term is going to be fun. Let's expand it:
$$\nabla \cdot [u \times (\nabla \times u)] = (\nabla \times u)^2 - u \cdot [\nabla \times (\nabla \times u)]$$
That should get you an identity involving just derivatives of $u$ or $u^2$.
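If a symbolic spot-check is useful, here is a Python (sympy) sketch verifying the divergence identity above on a sample field (the field itself is an arbitrary, hypothetical choice):
from sympy import simplify
from sympy.vector import CoordSys3D, curl, divergence
N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z
u = x*y**2*N.i + z*x*N.j + (y**3 - x*z)*N.k
lhs = divergence(u.cross(curl(u)))
rhs = curl(u).dot(curl(u)) - u.dot(curl(curl(u)))
print(simplify(lhs - rhs))  # 0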
Now, regarding your two questions: for (Q1), I don't even see how that question can be answered. If you don't have any definition of what a gradient of a vector field is, then how can you justify the manipulation that got you to write it down?
For (Q2), when you directly expand $\nabla \cdot (u \cdot \nabla u)$ according to the product rule, you get at the least
$$\dot \nabla \cdot (\dot u \cdot \nabla u) + (u \cdot \nabla)(\nabla \cdot u)$$
where the dots denote that $\dot \nabla$ differentiates only $\dot u$. So the term you're concerned about is definitely there; it's possible it cancels in some way, but I suspect that the $\nabla \cdot (\psi A)$ identity you tried to use doesn't hold when $\psi$ is a scalar differential operator instead of a scalar field. |
Is the degree function well behaved over power series? | $$(1+x+x^2+x^3+x^4+\cdots)(1-x)=1.$$ |
Regarding a Conditional Distribution (Please Check for Me) | I think the conditional distribution of X, if $0<y<\frac{1}{2}\text{ should be something like below:}$
$$f_{X|Y}(x|y)=\frac{f(x,y)}{f_Y(y)}=\frac{\frac{12}{5}\times x\times (2-x-y)}{\int_{y=0}^{y=\frac{1}{2}}\int_{x=0}^{{x=1}}\frac{12}{5}\times x\times (2-x-y)dxdy}$$ |
Find the matrix $A$ representing rotation around the direction $(1, 1, 1)$ by $\pi/2$ radians counter-clockwise | Rodrigues' rotation formula states if $\;\vec v\;$ is rotated about unit vector $\;\vec k\;$ through an angle $\;\theta\;$ anticlockwise the rotated vector becomes
$$
\vec v_{\text{rot}}=\vec v\cos\theta+(\vec k\times\vec v)\sin\theta+\vec k\,(\vec k\cdot\vec v)(1-\cos\theta)
$$
In our case $$\vec k={1\over\sqrt 3}(1,1,1)^\top,\quad\theta={\pi\over2}$$
If $R$ is the coveted matrix,
$$R\vec v=\vec v_{\text{rot}}$$
Taking $\;\vec v=(1,0,0)^\top$ we get
$$
R\vec v=\text{1st column of }R=(\vec k\times\vec v)+\vec k\,(\vec k\cdot\vec v)=\frac{1}{3}(1,1+\sqrt3,1-\sqrt3)^\top
$$
Similarly other columns of $R$ can be computed. In fact,
$$
R=\frac{1}{3}\begin{bmatrix}
1 & 1-\sqrt3 & 1+\sqrt3\\
1+\sqrt3 & 1 & 1-\sqrt3\\
1-\sqrt3 & 1+\sqrt3 & 1\\
\end{bmatrix}
$$
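A quick machine check in Python (a sketch; scipy's axis-angle rotation uses the same right-hand, anticlockwise convention):
import numpy as np
from scipy.spatial.transform import Rotation
k = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)
R_ref = Rotation.from_rotvec((np.pi / 2) * k).as_matrix()
s = np.sqrt(3)
R = np.array([[1, 1 - s, 1 + s],
              [1 + s, 1, 1 - s],
              [1 - s, 1 + s, 1]]) / 3
print(np.allclose(R, R_ref))  # True
This confirms the matrix above. |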
Determine rotational ellipsoid from main orientation and Eigenvalues | You wrote: "...the main axis is given by the vector and its semiaxes by the orientation errors".
Suppose for a moment that the vector is $(3, 0, 0)$, and the orientation errors are $2$ and $1$. What are the semiaxes going to be? $(0,2,0)$ and $(0, 0, 1)$? Maybe $(0,0,2)$ and $(0, 1, 0)$? Maybe
$$
2(0, s, c)\\
1(0, -c, s)
$$
where $s$ and $c$ are the sine and cosine of some arbitrary angle?
What I'm getting at is that the problem here is that the thing you're trying to do is under-specified. Now you might say "just choose the vector for the larger error to always be orthogonal to, say, $(0, 1, 0)$". That's not a bad idea if your vector $v$ is never $(0, 1, 0)$, but if it IS... then the solution is underspecified.
So you really need, for any nonzero vector $v$, a perpendicular vector $w$ that varies continuously as a function of $v$, so that when your rotational ellipsoid axis is $v$, you can put the larger (or the first, or whatever) error-value in the $w$ direction.
Such a choice, restricted to the case of $v$ lying in the unit sphere, gives you a function
$$
w : S^2 \to S^2 : v \mapsto w(v)
$$
with the property that for every $v$, the dot product $v \cdot w(v) = 0$. You can visualize $w(v)$, which is orthogonal to $v$, as being placed as a vector with its base at the tip of $v$, in which case it looks like a tangent-vector to $S^2$. And then the function $w$ gives you an everywhere nonzero vector field on the unit sphere in 3-space...which is impossible. In short: there doesn't seem to be any continuous way of doing what you're asking.
Gratuitous observation: one reason people use ellipsoid visualization of (symmetric) tensors is that a tensor contains a lot of information in a not-very-easily disentangled form. You, on the other hand, have data in a very nice form: a vector, with the error components along each axis already given. I don't see a good reason for transforming these error-values into measurements in some alternative coordinate system in which the direction in which the error is displayed is no longer correlated with the direction in which the error is measured.
In short, this sounds as if it's not only impossible, because of the vector fields on spheres theorem, but also a not very good way to look at the data. Rather than trying to cast your data into a form that it doesn't really fit (but for which there are already viz tools), I'd suggest that you spend a bit more time thinking about what you'd like to be able to understand from looking at your visualization, and make sure that the process you follow leads to that result. |
Prove $0 \lt \int_0^1 f(x)\sin(rx)\mathrm{d} x \le \frac{1}{n!}, r\in(0, \pi] $, $f(x) = \frac{x^n(1-x)^n}{n!}, x\in [0, 1]$ | Note that $f(x)$ is symmetric about $x = 1/2$, where $x(1-x)$ attains its maximum value $1/4$; hence $\displaystyle \sup_{x \in [0,1]} \vert f(x) \vert = \frac{(1/4)^n}{n!} \leq \frac{1}{4\,n!}$ for $n \geq 1$. You also noted that $\displaystyle \sup_{x \in [0,1]} \vert \sin(rx) \vert \leq 1$. Then,
\begin{align*}
\int_0^1 f(x)\sin(rx)dx &\leq \left\vert \int_0^1 f(x)\sin(rx)dx \right\vert \leq \int_0^1 \vert f(x) \sin(rx)\vert dx \leq \int_0^1 \vert f(x) \vert \cdot \vert\sin(rx)\vert dx\\
&\leq \int_0^1 \frac{1}{4\,n!} \cdot 1\,dx = \frac{1}{4\,n!}
\end{align*}
The integral is clearly positive since both $f(x)$ and $\sin(rx)$ are positive on $(0,1)$ for $r \in (0,\pi]$. |
A differentiation and double differential function proof | Suppose $f(x)$ meets the conditions of the problem. Note that since $|f(0)| \le 1$, $2\sqrt 2 \le |f'(0)| \le 3$. Now $-f(x), f(-x),$ and $-f(-x)$ also meet all the conditions. At least one of the four functions will have both derivative and second derivative $\ge 0$ at $x = 0$. Let $h(x)$ be the one with both $h'(0) \ge 0, h''(0) \ge 0$.
So $2\sqrt 2 \le h'(0) \le 3$ and $h''(0) \ge 0$. If $h'''(0) < 0$, we are done. Otherwise, let $a = \sup \{b \ge 0 \mid h''' \ge 0 \text{ on } [0,b]\}$, so $h''' \ge 0$ on $[0,a)$.
$h''$ is increasing on this interval, and so $h''(x) \ge h''(0) \ge 0$ everywhere. This in turn means that $h'$ is increasing on the interval as well. So $h'(x) \ge h'(0) \ge 2\sqrt 2$. Since $h'$ is continuous, it is integrable and $$h(a) - h(0) = \int_0^a h'(x)\,dx \ge 2\sqrt 2 a$$
But $|h(a) - h(0)| \le |h(a)| + |h(0)| \le 2$. So
$$2 \ge 2\sqrt 2a\\\frac 1{\sqrt 2} \ge a$$
Because $h'(a) \ge 2\sqrt 2$ and continuous, there is some neighborhood of $a$ where $h' > 0$. But by the choice of $a$, there are points $c$ in that neighborhood where $h'''(c) < 0$, and therefore $h'(c)h'''(c) < 0$.
From this it follows that either $f'(c)f'''(c) < 0$ or $f'(-c)f'''(-c) < 0$, and further the point $c$ or $-c$ is in $(-b, b)$ for any $b > \frac 1{\sqrt 2}$, for which $b = 3$ certainly qualifies. |
Showing $\left(\sqrt{1-p}-\sqrt{1-q}\right)^2+\left(\sqrt{p}-\sqrt{q}\right)^2\leq \left(\frac{1}{p}+\frac{1}{1-p}\right) (p-q)^2 $ for $0<p<1, 0<q<1$ | $(\sqrt {1-p} -\sqrt {1-q})^{2}=\frac {(p-q)^{2}} {(\sqrt {(1-p)}+\sqrt {1-q})^{2}}\leq \frac {(p-q)^{2}} {1-p}$. Similarly the second term does not exceed $\frac {(p-q)^{2}} p$. Just add these two inequalities. |
Asymptotics of $\max\limits_{1\leqslant k\leqslant n}X_k/n$ | Let $M_n=n^{-1}\cdot\max\limits_{1\leqslant k\leqslant n}X_k$ for some nonnegative identically distributed integrable sequence $(X_n)_n$, then $M_n\to0$ almost surely. (The independence of the sequence $(X_n)_n$ is not needed.)
To prove this, let $x\gt0$. If $X_n\leqslant nx$ for every $n$ large enough, say for every $n\geqslant N$, then $nM_n\leqslant NM_N+nx$ for every $n\geqslant N$ and in particular $\limsup\limits_{n\to\infty}M_n\leqslant x$. Thus,
$$
[\limsup\limits_{n\to\infty}M_n\geqslant2x]\subseteq A_x,\qquad A_x=\limsup\limits_{n\to\infty}A^n_x,\qquad A^n_x=[X_n\geqslant nx].
$$
The sequence $(X_n)_n$ is identically distributed, hence
$$
\sum_{n\geqslant1}P[A_x^n]=\sum_{n\geqslant1}P[X_1\geqslant nx]\leqslant x^{-1}E[X_1],
$$
which is finite. By Borel-Cantelli lemma (easy part), $P[A_x]=0$. This holds for every $x\gt0$ hence $\limsup\limits_{n\to\infty}M_n=0$ almost surely. Since every $M_n\geqslant0$ almost surely, the conclusion follows.
Edit: To show the upper bound of the series, note that, for every nonnegative $\xi$,
$$
\sum_{n\geqslant1}\mathbf 1_{\xi\geqslant nx}=\lfloor x^{-1}\xi\rfloor\leqslant x^{-1}\xi,
$$
and integrate this pointwise inequality over $\xi$ with respect to the distribution of $X_1$. |
Making sense of notation of probability distributions under integral | I'm a bit confused why you are complaining about the "ds" in $$g_S(ds) = \int_{x \in ds} \prod_{i \in S} g_i(dx_i).$$ It is playing the role of a dummy variable here. Also, although I do agree that the notation they use is horrendous, the words they use to describe what the notion reflects is pretty clear, in my opinion at least. Anyways, as the authors say, $g_S$ is a measure. Let's define it clearly, i.e., explain what its value is on Borel sets (I assume the sigma-algebra on $[0,1]^n$ is the Borel one). By basic measure theory, it is sufficient to explain what its value is on rectangles: $[a_1,b_1]\times\dots\times[a_n,b_n]$. Letting $S = \{s_1 < \dots < s_k\}$, the definition is $$g_S([a_1,b_1]\times\dots\times[a_n,b_n]) = \int_{a_{s_1}}^{b_{s_1}}\dots\int_{a_{s_k}}^{b_{s_k}} \prod_{i \in S} g_i(x_i) dx_{s_1}\dots dx_{s_k}$$ if $0 \in [a_i,b_i]$ for each $i \not \in S$, and $g_S([a_1,b_1]\times\dots\times[a_n,b_n]) = 0$ otherwise.
That's (hopefully) a completely clear definition. Let me now explain (1) how that coincides with their (confusing) notation and (2) how that definition makes sense in the context of their paper.
I'll do (2) first. We choose a set $S \subseteq [n]$. What we are told are the values of $X_s$, for $s \in S$; more precisely, we are given a vector $\langle x_1,\dots, x_n \rangle$ with $x_s$ being the value of $X_s$ for $s \in S$, and $x_s$ being $0$ for $s \not \in S$. The measure $g_S$ is representing the probability of seeing a given vector. Clearly $g_S([a_1,b_1]\times\dots\times[a_n,b_n])$, i.e., the probability of the vector being in $[a_1,b_1]\times\dots\times[a_n,b_n]$, is $0$ if there is some $i \not \in S$ with $0 \not \in [a_i,b_i]$, since we'll obviously see $x_i = 0$. Conversely, provided that $0 \in [a_i,b_i]$ for each $i \not \in S$, the probability of our vector $\langle x_1,\dots,x_n\rangle$ lying in $[a_1,b_1]\times\dots\times[a_n,b_n]$ is simply the probability that $x_i$ is in $[a_i,b_i]$ for each $i \in S$, which, due to independence, is $(\int_{a_{s_1}}^{b_{s_1}} g_{s_1}(x_{s_1})dx_{s_1})\dots(\int_{a_{s_k}}^{b_{s_k}} g_{s_k}(x_{s_k})dx_{s_k})$, which is the same as $\int_{a_{s_1}}^{b_{s_1}}\dots\int_{a_{s_k}}^{b_{s_k}} \prod_{i \in S} g_i(x_i) dx_{s_1}\dots dx_{s_k}$.
Now (1). The reason for $\int_{x \in ds}$ is as follows. We could, for example, write $\int_0^1 x^2dx$, or we could write $\int_{x \in (0,1)} x^2$. The authors are using the analogue of the latter. The reason for naming the dummy variable $ds$ is that they want to, nonrigorously/intuitively, explain how $g_S$ is defined on an infinitesimal piece of $[0,1]^n$.
I'm sure once you read and reflect on this answer, you'll be able to figure out the other ambiguous/confusing expressions in the paper. |
How to find all roots of the quintic using the Bring radical | The answer is contained in the paper by Eagle (1939)
http://www.jstor.org/stable/2303036?seq=1#page_scan_tab_contents
The Bring-Jerrard normal form is a particular example of a trinomial, and Eagle gives the general solution. |
What should be the conditions for $\frac{d^2x}{dx^2} = 0$? | $\frac {d^{2} x}{dx^{2}}=\frac d{dx} ({\frac d {dx} (x)})=\frac d {dx} (1)=0$. This is quite valid. |
Question on the construction of symmetric stable distributions | $f_\mu(t) = e^{-c|t|^\alpha} = 1 - c|t|^\alpha + o(|t|^\alpha) \Rightarrow 1- f_\mu(t) \sim c|t|^\alpha$. |
Prove that in any set of 1009 positive integers there exist two numbers $a_i$ and $a_j$ such that $a_i-a_j$ or $a_i+a_j$ is divisible by 2014 | We will show that if the condition is violated then there cannot be more than 1008 numbers. Say $x$ is the residue of some $a_i$ modulo $2014$; then no other number can have $x$ or $-x$ as its residue. Each such residue rules out $2$ of the $2014$ possibilities, unless $x=-x \pmod{2014}$, which holds only for $x=1007$ and $x=0$. So a set which violates the above condition can have at most $2012/2+2=1008$ elements. |
Is the topology that has the same sequential convergence with a metrizable topology equivalent as that topology? | No, this is not true in general. For instance, let $F$ be a nonprincipal ultrafilter on $X=\mathbb{N}$. Let $\mathscr{T}_1$ be the discrete topology, and let $\mathscr{T}_2$ consist of all sets that either are in $F$ or do not contain $0$. In both these topologies, a sequence converges iff it is eventually constant, and $\mathscr{T}_1$ is metrizable.
In the proof of Proposition IV.2.1, $\mathscr{T}_2$ is additionally known to be first-countable, since it is generated by countably many seminorms. |
Total no of closed loop paths in 3-by-3 grid | There are $48$ such paths for $n=3$. For $n=4$ the count is $1344$ and the relevant sequence does not seem to appear in OEIS.
I counted these by depth-first search. Here are the $96$ paths you get if you count a path and its reverse as different:
ABCFDEHIG, ABCFEHIGD, ABCFIGHED, ABCFIHEDG, ABCIFDEHG, ABCIFEHGD, ABCIGHEFD,
ABCIHEFDG, ABEDFCIHG, ABEDGHIFC, ABEFCIHGD, ABEFDGHIC, ABEHGDFIC, ABEHGICFD,
ABEHICFDG, ABEHIGDFC, ABHEDFCIG, ABHEDGIFC, ABHEFCIGD, ABHEFDGIC, ABHGDEFIC,
ABHGICFED, ABHICFEDG, ABHIGDEFC, ACBEDFIHG, ACBEFIHGD, ACBEHGIFD, ACBEHIFDG,
ACBHEDFIG, ACBHEFIGD, ACBHGIFED, ACBHIFEDG, ACFDEBHIG, ACFDGIHEB, ACFEBHIGD,
ACFEDGIHB, ACFIGDEHB, ACFIGHBED, ACFIHBEDG, ACFIHGDEB, ACIFDEBHG, ACIFDGHEB,
ACIFEBHGD, ACIFEDGHB, ACIGDFEHB, ACIGHBEFD, ACIHBEFDG, ACIHGDFEB, ADEBCFIHG,
ADEBHGIFC, ADEFCBHIG, ADEFCIGHB, ADEFICBHG, ADEFIGHBC, ADEHBCFIG, ADEHGIFCB,
ADFCBEHIG, ADFCIGHEB, ADFEBCIHG, ADFEBHGIC, ADFEHBCIG, ADFEHGICB, ADFICBEHG,
ADFIGHEBC, ADGHBEFIC, ADGHEFICB, ADGHICFEB, ADGHIFEBC, ADGICFEHB, ADGIFEHBC,
ADGIHBEFC, ADGIHEFCB, AGDEBHIFC, AGDEFCIHB, AGDEFIHBC, AGDEHIFCB, AGDFCIHEB,
AGDFEBHIC, AGDFEHICB, AGDFIHEBC, AGHBCIFED, AGHBEDFIC, AGHEBCIFD, AGHEDFICB,
AGHICBEFD, AGHICFDEB, AGHIFCBED, AGHIFDEBC, AGICBHEFD, AGICFDEHB, AGIFCBHED,
AGIFDEHBC, AGIHBCFED, AGIHBEDFC, AGIHEBCFD, AGIHEDFCB
Mathematica code:
n = 3;
pts = Tuples[Range[0, n - 1], 2]; (* all points of the n-by-n grid *)
adj[x_, y_] :=
adj[x, y] = (Count[x - y, 0] == 1) && MemberQ[{1, n - 1}, Max[Abs[x - y]]]; (* memoized adjacency test *)
extensions[p_] := Select[Complement[pts, p], adj[Last[p], #] &]; (* unvisited points adjacent to the end of p *)
DFS[p_] := (
If[Length[p] == n^2 && adj[First[p], Last[p]], Sow[p]; Return]; (* sow each completed closed path *)
Scan[DFS[Append[p, #]] &, extensions[p]]);
paths = Reap[DFS[{{0, 0}}]][[2, 1]]; |
If a relation is euclidean, is it necessarily asymmetric? | Let R be an Euclidean relation on A. and let $(x,y) \in R $
$xRy \land xRy \Rightarrow yRy $ which means Euclidean relation cant be asymmetric if there exists an $(x,y) \in R$ in case of Empty Relation we know that it doesnt have any elements
so this proposition doesnt contain it |
Spectrum perturbation | It might very well be that I misunderstand your question, but if "self-adjoint bounded" means "self-adjoint and bounded", then I think you need more assumptions on $A$ and $B$. If we consider the Hilbert space $\mathbb{R}^2$ and the operators
\begin{align}A=
\begin{pmatrix}
0 &1\\
1&0
\end{pmatrix}, \qquad
B=\begin{pmatrix}
1 &0\\
0&0
\end{pmatrix}
\end{align}
then it's easy to check that $\sigma(A)=\{\pm1\}$ and $\sigma(B)=\{1,0\}$, whereas $\sigma(A+B)=\{\frac{1\pm \sqrt{5}}{2}\}$.
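A quick numerical confirmation in Python (a sketch):
import numpy as np
A = np.array([[0., 1.], [1., 0.]])
B = np.array([[1., 0.], [0., 0.]])
for M in (A, B, A + B):
    print(np.linalg.eigvalsh(M))
The printed spectra are $\{-1,1\}$, $\{0,1\}$, and approximately $\{-0.618, 1.618\}$, matching the claim. |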
A tight lower bound for the entropy of the XOR of two random variables | Case $\epsilon = 0$ can be proved in a way that is revealing for the general case $\epsilon >0$.
Lemma. (Applying an independent random injective function to a random variable does not decrease entropy) If $X, Y$ are independent discrete random variables and $f_y$ is an injective function on the range of $X$ for each $y$ in the range of $Y$, then $H(f_Y(X)) \ge H(X)$.
Proof: $H(f_Y(X)) \ge H(f_Y(X)|Y) = H(X|Y) = H(X)$. |
Application of Liouville's theorem exercise | The Moebius transformation
$$T:\quad w\mapsto{w-i\over w+i}$$
maps the upper half plane onto the unit disk $D$. Therefore the function
$$g(z):=T\bigl(f(z)\bigr)$$
is entire and bounded. By Liouville's theorem $g$ has to be constant, and so is $f$. |
Find the probability that you have three of a kind after you pick 5 cards from a regular 52-card deck. | There are $\binom{52}{5}$ five-card hands. They are all equally likely.
We now count the favourables, the three of a kind hands. The kind we have $3$ of can be chosen in $\binom{13}{1}$ ways. For each of these choices, there are $\binom{4}{3}$ ways to choose the actual $3$ cards.
Now we must count the number of ways to choose the "useless" cards. The two different kinds that we have one each of can be chosen in $\binom{12}{2}$ ways. For each of these ways, the actual cards of the chosen kinds can be chosen in $\binom{4}{1}\binom{4}{1}$ ways. Thus the number of favourables is $\binom{13}{1}\binom{4}{3}\binom{12}{2}\binom{4}{1}\binom{4}{1}$.
For the probability, divide by $\binom{52}{5}$.
Remark: The following is an alternate approach to count the number of ways to choose the useless cards. There are $\binom{48}{2}$ ways to choose $2$ cards from the $48$ cards that are not of the kind we have $3$ of. But some of these choices give us $2$ of the same kind, giving us a full house, a very good hand. So we must subtract the number of ways in which we get $2$ of a kind. The kind can be chosen in $\binom{12}{1}$ ways, and the actual cards in $\binom{4}{2}$ ways. So the number of ways to choose the useless cards is $\binom{48}{2}-\binom{12}{1}\binom{4}{2}$.
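Both counts can be checked in a couple of lines of Python (a sketch):
from math import comb
direct = comb(12, 2) * comb(4, 1) * comb(4, 1)
complement = comb(48, 2) - comb(12, 1) * comb(4, 2)
print(direct == complement)  # True
print(comb(13, 1) * comb(4, 3) * direct / comb(52, 5))  # about 0.0211
The two ways of counting the useless cards agree, and the probability of three of a kind comes out to about 2.11%. |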
Approximate $f(t) = 1-|2t-5|$ in $[2,3]$ by $p\in P_2$ by using the least squares method | If you are due to use Legendre polynomials, you have to
change the working interval:
Instead of $t \in [2,3]$, take variable $T \in [-1,1]$
using coordinates change
$$T=2t-5 \tag{1}$$
Meanwhile, expression $f(t)$ is changed into expression $F(T)=1-|T|.$
and compute the coefficients
$$a_k=\dfrac{2k+1}{2}\int_{-1}^{1}F(T)L_k(T)dT \ \ k=0,1,2$$
Then, the sought quadratic $Q(T)$ is the beginning of the infinite expansion, limited to its first 3 terms: $$F(T)=\underbrace{a_0L_0(T)+a_1L_1(T)+a_2L_2(T)}_{Q(T)}+\dots$$
(see https://en.wikipedia.org/wiki/Legendre_polynomials).
You should find $a_0=\tfrac12, a_1=0, a_2=-\tfrac58$, giving
$$Q(T)=\tfrac12-\tfrac{5}{16}(3T^2-1)$$
which is indeed very satisfactory (see Fig. 1).
It remains to do the "return path", i.e., express the result as a quadratic $q(t)$ with respect to initial variable using (1) :
$$q(t)=\tfrac12-\tfrac{5}{16}(3(2t-5)^2-1)$$
(best quadratic approximation).
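If you want to check the coefficients by machine, here is a quick Python (sympy) sketch, splitting the integral at $0$ to avoid the absolute value:
import sympy as sp
T = sp.symbols('T')
for k in range(3):
    Lk = sp.legendre(k, T)
    ak = sp.Rational(2*k + 1, 2) * (sp.integrate((1 + T)*Lk, (T, -1, 0))
                                    + sp.integrate((1 - T)*Lk, (T, 0, 1)))
    print(k, ak)  # 0 1/2, then 1 0, then 2 -5/8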
Fig. 1 : In red (resp. blue), curve with equation $Y=F(T)$ (resp. $Y=Q(T)$, its best quadratic approximation) on reference interval $[-1,1]$. |
About the Erdős-Borwein Constant | Wolfram Alpha evaluates $1-\dfrac{\psi_{1/2}(1)}{\ln 2} =$ 1 - QPolyGamma(1,1/2)/ln(2) as
$1.606695152415291763783301523190924580480579671505756435778079...$
For more digits, go to the link and click on "More digits". |
Consider a discrete metric space $(X, \delta)$. Prove a sequence converges w.r.t $\delta$ if and only if it is eventually constant. | Suppose $x_n \to x$. Then, for any neighborhood $U$ of $x$, there is some positive integer $N$ so $n\ge N \implies x_n \in U$.
Well in this case, $\{x\}$ is a neighborhood of $x$. Done. |
Find the eigenvalues of... | Factor!
$$
0 = \lambda^3 - 3\lambda^2 k + 3\lambda k^2 - k^3 = (\lambda - k)^3
$$
so
$\lambda = k$ is the only root (eigenvalue). |
Dissertation on Integrals | My opinion is that this is overly ambitious, but if you decide to go through with it, here are two books that will be very useful:
A Garden of Integrals by Frank Burk
Varieties of Integration by C. Ray Rosentrater
One drawback to these two books is that they make no mention of the large number of other integrals that have been studied. Obviously, the authors have to draw a line somewhere in what they discuss, but a short two or three page appendix or afterword mentioning the integrals of Bochner, Burkill, Denjoy, Jeffery, Khintchine, Kolmogorov, Kubota, Perron, Ridder, Saks, Ward (and others I've probably overlooked) would have been a very useful addition. Indeed, both books treat mostly the same integrals, so their union doesn't tell you about the existence of much more than either of them.
As a partial remedy, there is Gordon's book:
The Integrals of Lebesgue, Denjoy, Perron, and Henstock by Russell A. Gordon
For a more thorough remedy, I recommend these two very extensive survey papers:
Peter Bullen, Non-absolute integrals in the twentieth century,
AMS Special Session on Nonabsolute Integration, 23-24 September 2000, 27 pages. (195 references)
Ralph Henstock, A short history of integration theory, Southeast Asian Bulletin of Mathematics 12 #2 (1988), 75-95. (262 references)
However, rather than attempt a survey of integration methods, I recommend focusing on a specific integration topic, such as is discussed in my 7 November 2007 sci.math post
and in the math overflow question Cauchy's left endpoint integral (1823).
Another topic is the investigation of what can be the set of all possible Riemann sums for a certain function or for functions having certain specified properties. I know of quite a few papers on this topic, but they're at home now and I don't remember enough about their titles or authors to list any of them now. (One of the authors might be I. J. Maddox.) |
does uncorrelation extend to product of complex random variables? | It depends on the distributions:
$$ \operatorname{Cov}(|X|^2 , |Y|^2) = E[|X|^2 |Y|^2] - E[|X|^2]E[ |Y|^2]$$
$$ E[|X|^2 |Y|^2] = \int_{x,y}|x|^2 |y|^2\ f_{X,Y}(x,y)\, dx\, dy $$
For independent random variables we always have
$$ f_{X,Y}(x,y) = f_X(x)f_Y(y), \ \ \ (1)$$
and then the covariance is zero. For merely uncorrelated $X$ and $Y$, $(1)$ need not hold; in the jointly Gaussian case, however, uncorrelatedness does imply $(1)$, and therefore zero covariance.
So, in general, the answer depends on the distribution of $X$ and $Y$ |
Calculus problem ε-δ definition | You want to show that $\lim_{x\rightarrow -3}(2x+2) = -4$. It's a test: Someone gives you a small tolerance $\epsilon > 0$, and you have to figure out how to make sure $2x+2$ is within the desired tolerance $\epsilon$ of the $-4$ by restricting $0 < |x-(-3)| < \delta$. So, for every $\epsilon > 0$, you have to come up with some $\delta > 0$. Of course, $\delta$ is going to depend on the $\epsilon$ they give you. If they can give you a small enough $\epsilon > 0$ for which no $\delta$ exists, then the limit equality doesn't hold.
So, someone gives you $\epsilon > 0$ and you have to figure out a way to make sure
$$
|(2x+2)-(-4)| = |2x+6|=2|x-(-3)| < \epsilon
$$
In this case, choose $\delta = \epsilon/2$. Then, whenever $0 < |x-(-3)| < \delta$, you are guaranteed that $|(2x+2) - (-4)| < \epsilon$. That works regardless of what $\epsilon > 0$ the person gives you. So, $\lim_{x\rightarrow -3}(2x+2)=-4$ as desired.
For $\epsilon = 0.1$, you can guarantee that $|(2x+2)-(-4)|< 0.1$ whenever $0 < |x-(-3)| < 0.05$. They give you $\epsilon=0.1$, and you came up with $\delta=0.05$. Check it on your calculator if you want. Try plugging in numbers strictly between $-3-.05=-3.05$ and $-3+.05=-2.95$ into $2x+2$, and you'll find that the numbers which come out are strictly between $-4-.1=-4.1$ and $-4+.1=-3.9$.
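A numerical spot-check in Python (a sketch):
xs = [-3.05 + k * 0.001 for k in range(1, 100)]  # points inside the delta-window (|x + 3| < 0.05)
print(all(abs((2 * x + 2) - (-4)) < 0.1 for x in xs))  # True
Every sampled $x$ within $\delta=0.05$ of $-3$ lands within $\epsilon=0.1$ of $-4$. |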
Bisection Method - Number of steps required proof | After $k$ steps the size of the interval is $\frac{b-a}{2^k}$, and we want to set this as a tolerance. $$tol=\frac{b-a}{2^k}$$ so $$k=\log_2\frac{b-a}{tol}$$
Since the solution of the above equation can be a non-integer, it does not make sense to have something like 5.3 steps. To guarantee that we get within tolerance, we must choose any integer greater than the value above. And the smallest one is $$\left\lceil\log_2\frac{b-a}{tol}\right\rceil$$
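As a small Python sketch of this formula:
from math import ceil, log2
def bisection_steps(a, b, tol):
    return ceil(log2((b - a) / tol))
print(bisection_steps(0, 1, 1e-6))  # 20, since 2**20 = 1048576 > 10**6
For example, halving $[0,1]$ twenty times brings the interval length below $10^{-6}$. |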
how to prove that the ceiling(x) = floor(x) + 1? | What you wrote is only true if $x$ is not an integer:
The ceiling is defined as the smallest integer that is larger or equal to $x$
The floor is defined as the largest integer that is smaller or equal to $x$.
This means that if $x\in\mathbb Z$, then $\lceil x\rceil = \lfloor x \rfloor = x$
On the other hand, if $x\notin \mathbb Z$, then it is simple to see that since $y=\lfloor x \rfloor$ is smaller or equal to $x$, then
$y\neq x$, therefore $y<x$, meaning that $y$ cannot be $\lceil x \rceil$.
$y+1$ must be larger than $x$, because if $y+1\leq x$, then $y\neq \lfloor x \rfloor$
Therefore, $y+1$ is the smallest integer that is larger than $x$, so $y+1=\lceil x \rceil.$ |
SVD: proof of existence | Let $S = \{x \in \mathbb C^m \mid \|x \|_2 = 1 \}$.
Then
$\|A\|_2 = \sup_{x \in S} \|Ax\|_2$.
But the function $f:S \to \mathbb R$ defined by $f(x) = \|Ax\|_2$ is continuous, and $S$ is compact, so (by the extreme value theorem) $f$ attains a maximum value at some point $v_1 \in S$. |
tricky surface integral | Simplifying, using the change of variables $ x = \frac{1}{\sqrt{2}} \tan(t) $ and the identity $1+\tan^2(x)=\sec^2(x)$, the integral falls apart
$$\int_0^1 \sqrt{4x^2+2} dx = 2\int_0^1 \sqrt{x^2+\frac{1}{2}} dx$$
$$ = \int_{0}^{\tan^{-1}\sqrt{2}} \sec^3(t)\,dt = \frac{\sqrt {2}\sqrt {3}}{2}+\frac{1}{4}\,\ln \left( 5+2\,\sqrt {6} \right). $$
For techniques of evaluating the last integral see here.
Added: It is easier to evaluate the integral using the suggestion by Paul. We use the substitution $x= \frac{1}{\sqrt{2}}\sinh(t)$ and the identity $1+\sinh(t)^2=\cosh^2(t)$
$$ 2\int_0^1 \sqrt{x^2+\frac{1}{2}} dx = \int_{0}^{\ln(\sqrt{2}+\sqrt{3})} \cosh^2(t)dt =\dots\,. $$
You can use the identity
$$ \cosh(x) = \frac{e^{x}+e^{-x}}{2} $$
to evaluate the above integral. |
Discrete Mathematics -Propositional Logic | HINT (first 3 lines):
Suppose $B\land C$
Suppose $B$
$C$ (from 1) |
Complex numbers $p$ without $q$, $r$ such that $p+q+r=1$ and $|p|=|q|=|r|$ | Quick intuition: $|q+r|$ must be big enough to "reach" from $p$ to $1$. If $|p|$ is very small (e.g. if $p$ is a positive real $<\frac 1 3$), then there's no way $p+q+r$ could add up to 1.
More formally:
Because of the triangle inequality, if $p + q + r = 1$ the distance from $1$ to $p$ must be less than or equal to $|q| + |r|$:
$$|1-p| = |q+r| \leq |q| + |r| = 2|p|$$
Substitute $p = a + bi$, and square both sides:
$$(1-a)^2+b^2 \leq 4(a^2 + b^2)$$
$$1 \leq 3a^2 + 2a + 3b^2 = 3(a^2 + \frac{2}{3}a) + 3b^2$$
Completing the square:
$$ 1 + \frac{1}{3} \leq 3(a + \frac{1}{3})^2 + 3b^2$$
$$ \frac{4}{9} \leq (a + \frac{1}{3})^2 + b^2$$
So, if $p$ lies on the circumference or outside the circle defined by the above equation (radius $\frac{2}{3}$, centered around $-\frac{1}{3}$), then $q$, $r$ s.t. $p + q + r = 1$ and $|p|=|q|=|r|$ can be found.
If $p$ lies inside the circle of radius $\frac{2}{3}$, centered around $-\frac{1}{3}$, then no such $q,r$ exist. |
How many bitstrings of length 8 contain 5 consecutive 0's? | Here's another way to work this type of problem.
The earliest string of five consecutive $0$s either comes at the very beginning or else is preceded by a $1$. If it comes at the very beginning, there are clearly $8$ ways to fill in the final three bits. If it's preceded by a $1$, there are $4$ choices for the other two bits and the $6$-bit string $100000$ comes either before, between, or after them, for another $4\times3=12$ possibilities, which gives a total of $8+12=20$.
Note, this approach works to count the number of bitstrings of length $n$ with $k$ consecutive $0$s whenever $k\le n\le2k$. It begins to falter (which is to say, you need to start doing some inclusion-exclusion) when the string is long enough to both start with $k$ consecutive $0$s and have a string preceded by a $1$.
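A brute-force confirmation in Python (a sketch) for $n=8$, $k=5$:
from itertools import product
print(sum('00000' in ''.join(bits) for bits in product('01', repeat=8)))
This reproduces the count of $20$. |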
Finding $\binom{n}{0} + \binom{n}{3} + \binom{n}{6} + \ldots $ | This technique is known as the Roots of Unity Filter. See this related question.
Note that $(1+x)^{n} = \displaystyle\sum_{k = 0}^{n}\dbinom{n}{k}x^k$. Let $\omega = e^{i2\pi/3}$. Then, we have:
$(1+1)^{n} = \displaystyle\sum_{k = 0}^{n}\dbinom{n}{k}1^k$
$(1+\omega)^{n} = \displaystyle\sum_{k = 0}^{n}\dbinom{n}{k}\omega^k$
$(1+\omega^2)^{n} = \displaystyle\sum_{k = 0}^{n}\dbinom{n}{k}\omega^{2k}$
Add these three equations together to get $\displaystyle\sum_{k = 0}^{n}\dbinom{n}{k}(1+\omega^k+\omega^{2k}) = 2^n+(1+\omega)^n+(1+\omega^2)^n$
You can see that $1+\omega^k+\omega^{2k} = 3$ if $k$ is a multiple of $3$ and $0$ otherwise.
Thus, $\displaystyle\sum_{m = 0}^{\lfloor n/3 \rfloor}\dbinom{n}{3m} = \dfrac{1}{3}\left[2^n+(1+\omega)^n+(1+\omega^2)^n\right]$
Now, simplify this.
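A numerical check of the filter in Python (a sketch, with $n=10$ as a hypothetical example):
from math import comb
import cmath
n = 10
lhs = sum(comb(n, k) for k in range(0, n + 1, 3))
w = cmath.exp(2j * cmath.pi / 3)
rhs = (2**n + (1 + w)**n + (1 + w**2)**n) / 3
print(lhs, rhs.real)  # 341 and 341.0 (up to rounding)
Both sides agree, as expected. |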
curl of what yields $(0,s^{-1},0)$ in cylindrical coordinates? | Then you can use a Green's function to solve this PDE.
The relevant Green's function is
$$\mathbf G(\mathbf r) = \frac{\mathbf r}{4\pi r^3}$$
The function $\mathbf A$ can then be found by convolution with the Green's function. Let $\mathbf B = s^{-1} \hat \theta$. The solution is then
$$\mathbf A(\mathbf r) = \int_{M} \mathbf B(\mathbf r') \times \mathbf G(\mathbf r - \mathbf r') \, dV'$$
Edit: there was a surface integral here, but I realized I had it written down wrong. I'll have to find a source for this, for while the volume integral is pretty simple, the surface integral looks, to me, to be a bit complicated, and you probably don't want to do that. I'll try to verify the form of that integral. Choosing all space as your integration region safely ignores the surface integral, but you may not be able to do this easily with $1/s$ in your vector field. |
Proof of a theorem in Hilbert's system | This is just a re-labelling of $3$. It's 3s $\beta = \neg\alpha$ in theorem and $\alpha$ in 3. is $\beta$ in your theorem.
$$(\neg(\neg\alpha) \Rightarrow \neg\beta) \Rightarrow ((\neg(\neg\alpha) \Rightarrow \beta) \Rightarrow \neg\alpha)$$
For a proof of $\neg\neg\alpha \leftrightarrow\alpha$, you can refer to Thm. 6 on this site, to reduce to the proof of $(\neg\beta\Rightarrow\neg\alpha)\Rightarrow(\alpha\Rightarrow\beta)$ (which is their axiom 3) |
epsilon delta limit proof verification of $\lim_{x\to2}{\frac{x-1}{x^2-1}} = \frac{1}{3} $. | Your proof is very good. You have explained every step very clearly.
Note the step $$\left|\frac{(x-2)}{3(x+1)}\right|<|x-2|<\epsilon$$
You could have done a little bit better by considering $3(x+1)>7.5$ (since $|x-2|<0.5$ gives $x+1>2.5$).
Thus we could have said $$\left|\frac{(x-2)}{3(x+1)}\right|<\frac{|x-2|}{7.5}<\epsilon,$$ which makes your $\delta = \min(0.5,7.5\epsilon)$ |
Why does $\sum_{n = 0}^\infty \frac{n}{2^n}$ converge to 2? | Besides the differentiation trick mentioned by others, here's another trick:
$$S = \sum_{n=0}^{\infty} \frac{n}{2^n} = \frac{1}{2} \sum_{n=0}^{\infty} \frac{n}{2^{n-1}} = \frac{1}{2} \left(\sum_{n=0}^{\infty} \frac{n - 1}{2^{n-1}} + \sum_{n=0}^{\infty} \frac{1}{2^{n-1}}\right) = \frac{1}{2} \left(S + \frac{-1}{2^{-1}} + 4\right) = \frac{1}{2}(S + 2).$$
Solving $S = \frac{1}{2}(S + 2)$ gives $S = 2$.
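A one-line numeric check in Python (a sketch):
print(sum(n / 2**n for n in range(60)))  # 2.0 to double precision
The partial sums converge to $2$. |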
Solving Diophantine equation $1/x^2+1/y^2=1/z^2$ | Multiply both sides by $x^2y^2z^2$.
Then you get
$$y^2z^2+x^2z^2=x^2y^2$$
Now use that each of $x^2, y^2,z^2$ divide two of the terms hence the third.
Added: Here is the rest of the solution. Let $a=\gcd(x,y)$, $b=\gcd(x,z)$, $c=\gcd(y,z)$.
Then $\gcd(a,b)=1$ and hence $ab\mid x$. We claim $ab=x$.
Indeed write $x=abd$. Assume by contradiction that $d \neq 1$ and let $p|d$, $p$ prime.
As $x | yz$ we have $abd | yz \Rightarrow d | \frac{y}{a}\frac{z}{b}$.
Then $p$ divides either $\frac{y}{a}$ or $\frac{z}{b}$.
But then, in the first case $pa |x,y$ while in the second $pb | x,z$ contradicting the $gcd$.
Therefore $x=ab$. The same way you can prove that $y=ac, z=bc$.
Replacing in the above equation you get
$$a^2b^2c^4+a^2b^4c^2=a^4b^2c^2$$
or
$$c^2+b^2=a^2$$
this shows that $(c,b,a)$ is a primitive Pythagorean triple and
$$x=ab \\
y=ac \\
z=bc$$ |
Square root of 6 proof rationality | How about a proof by descent?
First show that $2^2<6<3^2$. Then if $\sqrt{6}$ is to be rational it must have a form $p/q$ where $p,q\in \mathbb{Z}, q>0, 2q<p<3q$. By simple algebra the square root is also equal to $6q/p$, thus
$p/q=6q/p\text{.....Eq. 1}$.
Now if $a/b=c/d$ then also
$a/b=(ma+nc)/(mb+nd)$
for any coefficients $m,n$ where the denominator is nonzero. In particular, Eq. 1 implies
$p/q=(3p-6q)/(3q-p)\text{.....Eq. 2}$
where we already have $2q<p<3q$ and thus $0<3q-p<q$. So the proposed rational fraction $p/q$ must be equal to an alternative rational fraction with a smaller positive denominator. This causes an infinite descent contradiction forcing the assumption of a rational value to be false.
We can form a similar proof for the square root of any natural number that is not a squared integer. "Not a squared integer" is needed because the square root must be strictly between two adjacent integers to obtain a descent of positive denominators. |
zeroes of the orthogonal polynomials are simple | The proof starts off by assuming that $p_{n+1}$ has a pair of complex roots $\alpha \pm i\beta$ and reaches the conclusion that
$$ \int_a^b \left( (x-\alpha)^2 + \beta^2 \right) \left| \frac{p_{n+1}}{(x-\alpha)^2 + \beta^2} \right|^2 \omega(x) \, dx = 0$$
but all three factors are greater than or equal to $0$ for all $x \in (a,b)$ with a finite number of zeros. That implies the integral must be greater than 0, so we have a contradiction and the assumption that $p_{n+1}$ has a pair of complex conjugate zeros must be false. |
How to start an eigenvalue problem | I guess DEQs of this type show up when expressing problems in terms of parabolic cylindrical coordinates.
So you might be looking for Weber DEQs or more generally, parabolic cylinder functions as a basis.
Fitting your problem into this framework, the solution could be
$$
\phi(\lambda, x) = c_1 D_{(-1/2)}((1+i) x^{1/4} \sqrt{x+2} \; \lambda)+c_2
D_{-1/2}((-1+i) x^{1/4} \sqrt{x+2} \; \lambda)
$$
where $D_n (z)$ are parabolic cylinder functions and the constants should be adapted to the boundary conditions. Wolfram Alpha helps as well.
Hope this goes in the right direction... |
Sum of exactly n perfect square divisors | In order to have exactly $6$ square divisors, a number $n$ is either of the form $n=p^{10}m$ with $p$ a prime and $m$ square-free, or of the form $n=p^4q^2m$ with $p$ and $q$ distinct primes and $m$ square-free (and coprime to $p$, $q$). In the first case the sum of the square divisors is $1+p^2+p^4+p^6+p^8+p^{10}$ and the smallest possibility is $n=2^{10}=1024$. In the second case the sum of the square divisors is $1+p^2+q^2+p^4+p^2q^2+p^4q^2$ and the smallest possibility is $n=2^4\cdot3^2=144$. So $n=144$ is the smallest possibility, and the sum of its square divisors is $1+4+9+16+36+144=210$.
Can we prove that $AD||PQ$ in the figure? | Since $BC=CD$, we have
$$\angle BAC=\angle CAD= x$$
Using angles in the same segment,
$$\angle BAC=\angle BDC=x$$
You know that $\angle ADB=\angle BDC$, thus we have $\angle ADB=x$.
Since $\angle APB=\angle DPC=\angle CAD+\angle ADB$ (exterior angle of triangle), we now obtain $\angle APB=x+x=2x$.
Since $PQ$ bisects $\angle APB$, $\angle APQ=\frac{2x}{2}=x$.
Combining the results of $\angle APQ=\angle CAD= x$, we have proven that $AD$ is parallel to $PQ$ (alternate angles equal).
The following picture is for reference. |
Using Zorn's lemma show that $\mathbb R^+$ is the disjoint union of two sets closed under addition. | First let us recall Zorn's lemma.
Zorn's lemma. Suppose that $(P,\leq)$ is a non-empty partially ordered set such that whenever $C\subseteq P$ is a chain, there is $p\in P$ such that for every $c\in C$, $c\leq p$. Then $(P,\leq)$ has a maximal element.
To use Zorn's lemma, then, one has to find a partial order with the above property (every chain has an upper bound) and utilize the maximality to prove what is needed.
We shall use the partial order whose members are $(A,B)$ where $A,B$ are disjoint subsets of $\mathbb R^+$ each is closed under addition. We will say that $(A,B)\leq (A',B')$ if $A\subseteq A'$ and $B\subseteq B'$.
This is obviously a partial order. It is non-empty because we can take $A=\mathbb N\setminus\{0\}$ and $B=\{n\cdot\pi\mid n\in\mathbb N\setminus\{0\}\}$, both are clearly closed under addition and disjoint.
Suppose that $C=\{(A_i,B_i)\mid i\in I\}$ is a chain, and let $A=\bigcup_{i\in I}A_i$ and $B=\bigcup_{i\in I} B_i$. To see that these sets are disjoint, suppose $x\in A\cap B$; then for some $A_i$ and $B_j$ we have $x\in A_i\cap B_j$. Without loss of generality $(A_i,B_i)\leq(A_j,B_j)$, so $x\in A_j\cap B_j$, contradicting the assumption that $(A_j,B_j)\in P$. Therefore these are disjoint sets. The proof that $A$ and $B$ are closed under addition is similar.
Then $(A,B)\in P$ and therefore is an upper bound of $C$. So every chain has an upper bound and Zorn's lemma says that there is some $(X,Y)$ which is a maximal element.
Now all that is left is to show that $X\cup Y=\mathbb R^+$. Suppose it weren't; then there would be some $r\in\mathbb R^+$ which is neither in $X$ nor in $Y$, and we can take $X'$ to be the closure of $X\cup\{r\}$ under addition. If $X'\cap Y=\varnothing$ then $(X',Y)\in P$ and it is strictly above $(X,Y)$, which is a contradiction to the maximality. Therefore $X'\cap Y$ is non-empty, but then taking $Y'$ to be the closure of $Y\cup\{r\}$ under addition has to be disjoint from $X$, and the maximality argument holds again. (Indeed, if both intersections were non-empty we would have $x+kr\in Y$ and $y+mr\in X$ for some $x\in X\cup\{0\}$, $y\in Y\cup\{0\}$ and $k,m\geq1$, and then $mx+ky+kmr$ would lie in both $X$ and $Y$, which is impossible.)
In either case we have that $X\cup Y=\mathbb R^+$. |
Given $\frac {a\cdot y}{b\cdot x} = \frac CD$, find $y$. | Note that $$\frac{\dfrac{1}{x}}{y} = \frac{1}{x}\frac{1}{y} = \frac{1}{xy}.$$ This process is often called 'invert and multiply'. If you apply this rule to your final expression, you will find it agrees with the given solution.
More generally $$\frac{\dfrac{a}{b}}{\dfrac{c}{d}} = \frac{a}{b}\frac{d}{c} = \frac{ad}{bc}.$$ |
Poker cards combinatorics question | For second,
We either choose $3$ denominations or $2$. For pairs from $3$ denominations, it is the same as the first, and for pairs from $2$ denominations, we first choose the denominations and then choose which one we will have $4$ cards of and which $2$ ($2$ ways to do so).
So, probability $\displaystyle = \frac{\binom{13}{3} \binom{4}{2}^3 \cdot 40 + \binom{13}{2} \cdot 2 \cdot \binom{4}{2} \cdot 44}{\binom{52}{7}}$
Also for first, instead of $46$ choices for the $7$th card, you should have only $40$ choices (as $12$ cards cannot be considered - no three cards can have the same denomination). |
How to proceed on Induction problem involving permutations | Let's simplify things. Note that $$Q(\ell) = \Big(\sum_i a_i^2+b_{\ell_i}^2\Big)-2\sum_{i}a_ib_{\ell_i},$$ and the first summation is the same for every $\ell$. Therefore, we may show that the quantity
$$
\tilde Q(\ell)=\sum_{i}a_ib_{\ell_i}
$$
is maximized when $\ell$ is the identity permutation.
To prove this, suppose that $\ell$ has an inversion, meaning a pair $(i,j)$ where $i<j$ but $\ell_i>\ell_j$. Note that this implies $(a_j-a_i)(b_{\ell_i}-b_{\ell_j})\ge 0$, so that
$$
a_ib_{\ell_j}+a_jb_{\ell_i}\ge a_ib_{\ell_i}+a_jb_{\ell_j}\tag{1}
$$
Now, consider what happens when you make a new permutation $\ell'$, which is just like $\ell$ except that $\ell_i$ and $\ell_j$ are switched. According to $(1)$, this means that $\tilde Q(\ell')\ge \tilde Q(\ell)$, because the sum of the two affected products has increased. Therefore, any permutation with an inversion is weakly dominated by one with fewer inversions,${}^*$ so the identity permutation (the only permutation without any inversions) is maximal.
${}^*$I am skipping over the detail of proving that $\ell'$ has fewer inversions than $\ell$. This takes a little thought to prove. Alternatively, to any permutation $\ell$ you can associate the quantity $E(\ell)=\sum_i i\ell_i$, and show that $E(\ell')>E(\ell)$ using the same method. Since this quantity cannot increase forever, performing enough "un-inversions" will eventually create a permutation with no inversions. |
Induced isomorphism on elliptic curves | Suppose $F$ is an isomorphism. From $(i)$ we have that $A\Lambda_1=\Lambda_2.$
So, you got $$A\tau_1=a\tau_2+b$$ $$A=c\tau_2+d,$$ for some $a,b,c,d \in\mathbb{Z}.$
That can also be written differently, as
$$
A\begin{bmatrix}
\tau_1 \\ 1
\end{bmatrix} =
\begin{bmatrix}
a&b \\ c&d
\end{bmatrix}
\begin{bmatrix}
\tau_2 \\ 1
\end{bmatrix}
$$
All we have to do now is to see that $ad-bc=\pm1.$
But also, the following is true: $ \Lambda_1=\frac{1}{A}\Lambda_2, $ so similarly we conclude $$\frac{1}{A}\tau_2=a'\tau_1+b'$$ $$\frac{1}{A}=c'\tau_1+d',$$ for some $a',b',c',d' \in \mathbb{Z},$ i.e.
$$
\dfrac{1}{A}\begin{bmatrix}
\tau_2 \\ 1
\end{bmatrix} =
\begin{bmatrix}
a'&b' \\ c'&d'
\end{bmatrix}
\begin{bmatrix}
\tau_1 \\ 1
\end{bmatrix}.
$$
By combining the two matrix equations that we have, we get
$$
\begin{bmatrix}
\tau_1 \\ 1
\end{bmatrix} =
\begin{bmatrix}
a&b \\ c&d
\end{bmatrix}
\begin{bmatrix}
a'&b' \\ c'&d'
\end{bmatrix}
\begin{bmatrix}
\tau_1 \\ 1
\end{bmatrix}.
$$
Since $\{1,\tau_1\}$ is a basis for $\mathbb{C}$ over $\mathbb{R}$ (otherwise it wouldn't define a lattice), it's obvious that $$\begin{bmatrix}
a&b \\ c&d
\end{bmatrix}
\begin{bmatrix}
a'&b' \\ c'&d'
\end{bmatrix}=I.$$
Now $$ \det\begin{bmatrix}
a&b \\ c&d
\end{bmatrix}
\begin{bmatrix}
a'&b' \\ c'&d'
\end{bmatrix}=\det I=1. $$ Since all matrix entries are integers, the determinants are also integer numbers and we conclude $$\det\begin{bmatrix}
a&b \\ c&d
\end{bmatrix}=\pm 1,$$ so $ad-bc=\pm1.$
Conversely, suppose that $\dfrac{\tau_1}{1}=\dfrac{a\tau_2+b}{c\tau_2+d},$ where $ad-bc=\pm1.$
Equivalently, we have $$A\tau_1=a\tau_2+b,$$ $$A=c\tau_2+d,$$ for some $A\in\mathbb{C}$ (because the numerators and the denominators only have to be proportional, not equal). Now, obviously, $A\Lambda_1\subseteq\Lambda_2.$
Written differently, the above two equalities become$$
A\begin{bmatrix}
\tau_1 \\ 1
\end{bmatrix} =
\begin{bmatrix}
a&b \\ c&d
\end{bmatrix}
\begin{bmatrix}
\tau_2 \\ 1
\end{bmatrix},
$$ i.e.
$$
A
\begin{bmatrix}
d&-b \\ -c&a
\end{bmatrix}
\begin{bmatrix}
\tau_1 \\ 1
\end{bmatrix} =
\begin{bmatrix}
\tau_2 \\ 1
\end{bmatrix}.$$
Similarly, we get $\Lambda_2\subseteq A\Lambda_1.$ |
Solve for z(t) from the simultaneous equation using Laplace transform | First note that if $u(t)$ is the Heaveside step function, then
$$\int_0^t z(\tau)\,d\tau = u(t)\ast z(t)$$
where $\ast$ denotes convolution. So we can rewrite the system as
$$\left\{\begin{aligned} y^{\prime}+2y + 6u(t)\ast z(t) &= -2u(t)\\ y^{\prime} + z^{\prime} + z &= 0\end{aligned}\right.$$
Noting that $y(0) = -5$ and $z(0) = 6$, we take Laplace transforms of both equations to see that
$$\left\{\begin{aligned} (sY(s)+5) + 2Y(s) + 6\frac{Z(s)}{s} &= - \frac{2}{s}\\ (sY(s)+5) + (sZ(s)-6) + Z(s) &= 0\end{aligned}\right.$$
which simplifies to
$$\left\{\begin{aligned}(s^2+2s)Y(s)+6Z(s) &= -5s-2\\ sY(s) + (s+1)Z(s) &= 1\end{aligned}\right.$$
We now solve for $Z(s)$ by means of elimination; multiplying the second equation by $-(s+2)$ and then adding to the first leaves us with
$$(6-(s^2+3s+2))Z(s) = -4-6s\implies Z(s) = \frac{4+6s}{s^2+3s-4}$$
Note that $\dfrac{1}{s^2+3s-4} = \dfrac{1}{(s+4)(s-1)} = \dfrac{1}{5(s-1)} - \dfrac{1}{5(s+4)}$.
Therefore,
$$Z(s) = \frac{4}{5(s-1)} -\frac{4}{5(s+4)} +\frac{6s}{5(s-1)} - \frac{6s}{5(s+4)}$$
Can you take things from here with finding the inverse Laplace Transform? |
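If you'd like to verify your final answer, here is a quick SymPy check of the partial fractions and the inverse transform (a sketch; it takes the algebra above as given):

import sympy as sp

s, t = sp.symbols('s t', positive=True)
Z = (6*s + 4) / (s**2 + 3*s - 4)
print(sp.apart(Z, s))                         # -> 2/(s - 1) + 4/(s + 4)
print(sp.inverse_laplace_transform(Z, s, t))  # -> 2*exp(t) + 4*exp(-4*t), possibly times Heaviside(t)

Reassuringly, the resulting $z(t)$ satisfies $z(0)=2+4=6$, matching the initial condition.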
Combinations of red and black balls | $f(n,k) = $ number of sequences with $n$ balls and exactly $k$ repeats, which is exactly this:
$f(n,k) = 2 \binom{n-1}{k}$
Essentially, there are exactly two sequences of length $n-k$ with no repeats (the two alternating colourings). Given such a string, we choose how many extra balls to insert at each of its $n-k$ positions (each inserted ball duplicates its neighbour and creates one repeat), which multiplies the count by the multiset coefficient $\left(\!\binom{n-k}{k}\!\right) = \binom{n-1}{k}$.
I guess you'd want to sum this function over all $k$ less than $K$, but that's the nicest idea I could come up with. Apologies for my sloppy phrasing; I can clarify where needed.
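If it's useful, here is a brute-force check of the count (a sketch; I am reading "repeats" as adjacent equal pairs, which is what the construction above suggests):

from itertools import product
from math import comb

def repeats(seq):
    return sum(x == y for x, y in zip(seq, seq[1:]))

for n in range(2, 9):
    for k in range(n):
        count = sum(repeats(s) == k for s in product('RB', repeat=n))
        assert count == 2 * comb(n - 1, k)
print("formula matches brute force for n up to 8")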
How do I solve this differential equation to get expression with hyperbolic tangent? | What you have is some real constants, I will call them $A,B,C,$ and
$$ R' = A R^2 + B R + C. $$
IF $$ B^2 - 4 AC > 0, $$
then there are two constant solutions, and the (bounded) solution in between them can be written in terms of $\tanh.$
EDIT, 2:30 pm Pacific. Given $B^2 - 4 AC > 0,$ define $\delta > 0$ by
$$ \color{magenta}{ \delta = \frac{1}{2} \;\sqrt {B^2 - 4AC} \; \; .} $$
Next, define
$$ \color{magenta}{ w = A R + \frac{B}{2} \; \; .} $$
which gives us
$$ \color{magenta}{ w' = w^2 - \delta^2 \; \; .} $$
For the solutions with $- \delta < w < \delta,$ we get $w' < 0$ and
$$ \color{magenta}{ w = - \delta \tanh \delta t \; \; ,} $$
finally
$$ \color{magenta}{ R = -\frac{1}{A} \left( \delta \tanh \delta t + \frac{B}{2} \right) \; \; .} $$
Note that the system is autonomous, there is no explicit dependence on the variable $t,$ which means that every bounded solution is a pure translate of the one above, found by replacing $t$ by $\color{magenta}{(t - t_0)}$ for some constant $t_0.$
There are also constant solutions with $|w| = \delta,$ also some with $|w| > \delta$ that are all unbounded, those involve $\delta \coth \delta t.$
Please check the solution; the methodology is correct, but I may have made arithmetic errors.
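For instance, here is a quick symbolic check (a sketch, not part of the solution) that $w = -\delta\tanh\delta t$ really satisfies $w' = w^2 - \delta^2$:

import sympy as sp

t, delta = sp.symbols('t delta', positive=True)
w = -delta * sp.tanh(delta * t)
print(sp.simplify(sp.diff(w, t) - (w**2 - delta**2)))  # -> 0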
ORIGINAL: Everything depends on the signs, and size, of the constants. The similar ODE
$$ y' = 1 - y^2 $$
has constant solutions with $y=1$ and $y=-1;$ in between it is $\tanh$.
We have $$ \frac{d}{dx} \cosh x = \sinh x,$$
$$ \frac{d}{dx} \sinh x = \cosh x,$$ and
$$ \cosh^2 x - \sinh^2 x = 1. $$
$$ \frac{d}{dx} \tanh x = \, \mbox{sech}^2 \, x,$$
$$ \frac{d}{dx} \, \mbox{sech} \, x = - \, \mbox{sech} \, x \; \tanh x,$$
$$ \tanh^2 x + \, \mbox{sech}^2 \, x = 1. $$
as a result, $$ 1 - \tanh^2 x = \mbox{sech}^2 \, x. $$
So $$ y = \tanh x $$
is a solution of $$ y' = 1 - y^2 $$
that obeys $$ -1 < y < 1. $$
The solutions with either $y > 1$ or $y < -1$ are unbounded...
Indeed,
$$ \frac{d}{dx} \coth x = - \, \mbox{csch}^2 \, x,$$
$$ \frac{d}{dx} \, \mbox{csch} \, x = - \, \mbox{csch} \, x \; \coth x,$$
$$ \coth^2 x - \, \mbox{csch}^2 \, x = 1, $$
or
$$ 1 - \coth^2 x = - \, \mbox{csch}^2\, x. $$
So, another solution to $y' = 1-y^2$ is $y = \coth x,$ which jumps across from $y < -1$ to $y > 1.$
If we switch to $$ y' = y^2 -1, $$ all that happens is we multiply the solutions by $-1$: the bounded solution becomes $y = -\tanh x$ and the unbounded ones become $y = -\coth x$.
Is $|x+y|/\sqrt{x^2+y^2}$ continuous? | HINT
Note that
$$x^2+y^2\geq 2|x+y|$$
is not true for $x=y=\frac12$.
Check the path with $x=-y=t\to 0$. |
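If a numeric illustration helps (my own sketch, not part of the hint), the two paths give different limits near the origin:

import numpy as np

f = lambda x, y: abs(x + y) / np.sqrt(x**2 + y**2)
for t in (0.1, 0.01, 0.001):
    print(f(t, t), f(t, -t))   # -> sqrt(2) along x = y, 0 along x = -y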
What do I miss? $\ln(x^2 -4) = \ln(1-4x)$, $x \neq 1$ | No, because if $A$ and $B$ are equations and $A\Rightarrow B$, then $B$ might have some solutions that $A$ doesn't have. Here, exponentiating gives $x^2-4=1-4x$, i.e. $x^2+4x-5=0$, with roots $x=1$ and $x=-5$; at $x=1$ both $x^2-4$ and $1-4x$ are negative, so the logarithms are undefined, and only $x=-5$ solves the original equation.
Prove that $G$ has a subgroup isomorphic to $G/H$. | I think the self-duality mentioned in the question makes this (otherwise rather difficult) question easy. In $\widehat G=\def\Hom{\operatorname{Hom}}\Hom(G,\mu)$ consider $\{f\in\widehat G\,\mid H\subseteq\ker(f)\,\}$, a subgroup that is canonically isomorphic to $\Hom(G/H,\mu)=\widehat{G/H}$, and therefore (non-canonically) to $G/H$. Under the (non-canonical) isomorphism $\widehat G\to G$, it maps to a subgroup of $G$ that is isomorphic to $G/H$. |
Slope Intercept Form Word Problem | A dependent variable is one that depends on the independent variable, usually $y$ and $x$ respectively because $x$ is the input and $y$ is the output.
That said, look at the problem's first sentence:
The speed at which you drive a car can affect the car's fuel economy.
Here, the car's fuel economy (in mpg) is affected by the speed at which you drive (mph). Thus, mpg is the dependent variable and mph is the independent variable.
Now look at the next sentence of the word problem:
The July 2008 Consumer Reports magazine reported the Toyota Camry has a fuel economy of 40 miles per gallon (mpg) at 55 miles per hour (mph), and 30 mpg at 75 mph.
Here, you're given two mph and mpg pairs for the Toyota Camry, which you can treat as ordered pairs $(55, 40)$ and $(75, 30)$. From this, you can create a linear equation by finding the slope:
$$\dfrac{30-40}{75-55} = -\dfrac{1}{2}$$
And because you know at least one point, you can rewrite into point-slope form:
$$y - 40 = -\dfrac{1}{2}(x - 55)$$
Then rearrange into slope-intercept:
$$y = -\dfrac{1}{2}x + 67.5$$
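If you want to double-check the slope and intercept numerically, a quick sketch:

import numpy as np

slope, intercept = np.polyfit([55, 75], [40, 30], 1)
print(slope, intercept)   # -> -0.5 67.5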
Now about a valid domain and range, think about the following questions:
Can a car go negative mph?
How high can modern cars' mpgs be?
Can a car have negative mpg? |
Numerical Analysis using MATLAB. Find the condition number $\mu$ | Seems like this is a hand-written question and either you or your instructor has made some minor mistakes. It's very likely that the original question should read:
Find the condition number $\mu=\|A\|\|A^{-1}\|$ for the Hilbert Matrix $A$ using the uniform norm.
Here $\|X\|$ denotes the uniform norm of a matrix $X=(x_{ij})_{i,j\in\{1,2,\ldots,n\}}$. The uniform norm, a.k.a. maximum row sum norm, is the matrix norm induced by the vector norm $\|x\|_\infty$. It is defined by
$$
\|X\|=\max_{1\le i\le n}\,\sum_{j=1}^n|x_{ij}|.
$$
In MATLAB, $\|X\|$ can be calculated as $\mathtt{max(sum(abs(X),2))}$.
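If it helps, here is the same computation sketched in Python with NumPy/SciPy ($n=5$ is a hypothetical size, since the question's $n$ isn't shown):

import numpy as np
from scipy.linalg import hilbert

n = 5
A = hilbert(n)
mu = np.linalg.norm(A, np.inf) * np.linalg.norm(np.linalg.inv(A), np.inf)
print(mu, np.linalg.cond(A, np.inf))   # the two values agree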
Solve finite differences linear equation | It depends. Do A and B depend on the index $i$? If A and B are constant, then you can split the solution up into a homogeneous solution $u_{i}^{(H)}$ and an inhomogeoneous solution $u_{i}^{(I)}$. The homogeneous solution satisfies
$$A u_{i+1}^{(H)} + 2 u_{i}^{(H)} + B u_{i-1}^{(H)} = 0 $$
with initial conditions such as $u_{0}^{(H)} = u_0$ and $u_{1}^{(H)} = u_1$. In this case, with constant coefficients, you can assume that $u_{i}^{(H)}$ takes the form $\lambda^{i}$; substituting and dividing by $\lambda^{i-1}$ gives the characteristic equation
$$A \lambda^2 + 2 \lambda + B = 0,$$
which I leave to you; the solutions are then $\lambda_1$ and $\lambda_2$, and (when they are distinct) the general solution to the homogeneous equation is $u_{i}^{(H)} = C_1 \lambda_1^{i} + C_2 \lambda_2^{i}$. $C_1$ and $C_2$ are found by using the initial conditions.
The inhomogeneous solution satisfies
$$A u_{i+1}^{(I)} + 2 u_{i}^{(I)} + B u_{i-1}^{(I)} = C $$
with initial conditions such as $u_{0}^{(I)} = 0$ and $u_{1}^{(I)} = 0$. For a constant $C$, $u_{i}^{(I)}$ can itself be taken constant, namely $u^{(I)} = C/(A+2+B)$ (provided $A+2+B \neq 0$), with that constant being offset in the homogeneous initial conditions.
For the case where $A$ and/or $B$ is not constant, then methods specific to the forms of their variation should be employed. |
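To illustrate the constant-coefficient case, here is a small numerical sketch (the constants $A$, $B$, $C$ and the initial values are hypothetical choices of mine) comparing the closed form against direct iteration of the recurrence:

import numpy as np

A, B, C = 1.0, 0.25, 3.0
u0, u1 = 0.0, 1.0

lam = np.roots([A, 2.0, B])    # roots of A*lam^2 + 2*lam + B = 0
u_part = C / (A + 2.0 + B)     # constant particular solution

# fit C1, C2 of the homogeneous part to the offset initial conditions
c1, c2 = np.linalg.solve(np.array([[1.0, 1.0], lam]),
                         np.array([u0 - u_part, u1 - u_part]))

def closed_form(i):
    return c1 * lam[0]**i + c2 * lam[1]**i + u_part

u = [u0, u1]
for i in range(1, 12):
    u.append((C - 2.0*u[i] - B*u[i-1]) / A)   # A*u_{i+1} + 2*u_i + B*u_{i-1} = C

print(max(abs(u[i] - closed_form(i)) for i in range(len(u))))  # negligible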
Proving a definite integral with a parameter is positive | Write the integrand as $f(x) g(x)^q$, where $f(x) = (5-3x)(x-1)$ and
$g(x) = (x-1)^{-3}(7-3x)^{-1}$. Note that $f(x) = F'(x)$ where $F(x) = -x^3 + 4 x^2 - 5 x + 2 = (2-x)(x-1)^2$; $F(x) > 0 $ on the interval $(1,2)$, with $F(1) = F(2) = 0$. Now integrate by parts, being careful with the endpoint $1$ because $g(x)$ is singular there (you have to look at the limit of $F(x) g(x)^q$ as $x \to 1+$):
$$ \eqalign{\int_1^2 f(x) g(x)^q\; dx &= \left. F(x) g(x)^q\right|_1^2 - \int_1^2 F(x) (g^q)'(x)\; dx\cr
&= -q \int_1^2 F(x) g(x)^{q-1} g'(x)\; dx}$$
and note that the integrand of the last integral is negative on $(1,2)$: there $F, g > 0$ while $g' < 0$, so for $q > 0$ the displayed expression, and hence the original integral, is positive.
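A numerical spot-check (a sketch; the sample values of $q$ are my own, kept below $2/3$ so that the singularity at $x=1$ remains integrable):

from scipy.integrate import quad

def integrand(x, q):
    return (5 - 3*x) * (x - 1) * ((x - 1)**(-3) * (7 - 3*x)**(-1))**q

for q in (0.1, 0.25, 0.5):
    val, _ = quad(integrand, 1, 2, args=(q,))
    print(q, val)   # positive in each case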
Integral part of $\sqrt{2018+\sqrt{2018+\sqrt{...+2018}}}$ | Begin by considering the fact that $45^2= 2025$. Then notice that your number is at least
$$\sqrt{2018 + \sqrt{2018}}.$$
Using the estimate that $\sqrt{2018} > 40$, which follows from $40^2 = 1600 < 2018$, your number is at least $\sqrt{2058} > \sqrt{2025} = 45$. For an upper bound, note that if the expression under a radical is at most $2018+46$, then that radical is at most $\sqrt{2064} < \sqrt{2116} = 46$; by induction the whole expression is less than $46$. Hence, the integral part is $45$.
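A quick numeric illustration (a sketch, iterating the radical from the inside out):

import math

v = 0.0
for _ in range(50):
    v = math.sqrt(2018 + v)
print(v, math.floor(v))   # -> 45.42..., 45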
Let $f$ be a closed path in $S^1$ at $1$. Prove that if $f$ is not surjective, then $\deg f = 0$ | You could do it that way. The key is that there is a number $a\in(0,1)$
such that the lift of $f$ never takes the values $a$ or $a-1$. Thus
the lift starting at $0$ can never reach another integer (by the
Intermediate Value Theorem).
I'd do it a different way. The map $f$ maps into $S^1-\{\text{point}\}$
and this space is contractible: we can homotope $f$ to a constant map, so $\deg f = 0$.
Real Analysis - Supremums | To understand how to use the supremum you need to see (and prove) exercises. In your case
Proposition 1. Let $A := (a_n)_{n=1}^\infty$ and $B := (b_n)_{n=1}^\infty$ be two bounded sequences of real numbers. If $C := (a_n + b_n)_{n=1}^\infty$, show that $\sup C \le \sup A + \sup B$.
Proof. Let $n \ge 1$ be a positive integer. Since $a_n \leq \sup A$ and $b_n \leq \sup B$, we have $a_n+b_n \leq \sup A + \sup B$. Thus $\sup A + \sup B$ is an upper bound for $C$, and therefore $\sup C \le \sup A + \sup B$.
Note that the inequality can be strict, because the indices at which $a_n$ and $b_n$ come close to their respective suprema need not coincide. For example, if $a_n = (-1)^n$ and $b_n = (-1)^{n+1}$, then $\sup A = \sup B = 1$, yet $c_n = 0$ for all $n$ and so $\sup C = 0 < 2$. (The flawed step in trying to prove equality would be picking one index for $A$ and another for $B$ and treating them as the same $n$.)
Also you can try to prove:
Proposition 2. Let $h > 0$ be a positive real number and let $E$ be a subset of real numbers. Show that if $E$ has a supremum, there exists $x \in E$ such that $x > \sup E - h$. (Hint: prove by contradiction.)
Proposition 3. There exists a positive real number $x$ such that $x^2 = 2$. (Hint: define the set $E := \{x \in \mathbf R : x \ge 0 \;\;\text{and}\;\; x^2 < 2\}$ and prove by contradiction.) |
Show that $\omega * \mu$ is either 0 or 1. | Put
$$F(n)=\left\{ \begin{align}1, \quad &n \> \text{is prime} \\ 0, \quad & n\> \text{is not prime}\end{align}\right.$$
Notice that $\omega(n)=\sum_{d\mid n}F(d)$, since each distinct prime divisor of $n$ contributes exactly $1$ to the sum. By the Möbius inversion formula, $\omega * \mu = F$, which takes only the values $0$ and $1$ (with value $1$ exactly at the primes), proving the assertion.
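A quick computational check of the conclusion (a sketch; I implement $\mu$ and $\omega$ by hand from the factorization):

from sympy import divisors, factorint

def omega(n):   # number of distinct prime factors
    return len(factorint(n))

def mu(n):      # Moebius function
    f = factorint(n)
    return 0 if any(e > 1 for e in f.values()) else (-1)**len(f)

def conv(n):    # the Dirichlet convolution (omega * mu)(n)
    return sum(omega(d) * mu(n // d) for d in divisors(n))

print([conv(n) for n in range(1, 21)])
# -> 1 exactly when n is prime, 0 otherwise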
Co-variance of two random variables | Using law of total expectation,
\begin{align}
E[XY] &= E[E[XY|X]] \\
&= E[XE[Y|X]] \\
&= E[X(nX)] \\
&= n E[X^2] \\
&= n \int_0^1 x^2 dx \\
&=\frac{n}{3}
\end{align}
If the covariance itself is needed, note that $E[X]=\frac12$ and $E[Y]=E[E[Y\mid X]]=nE[X]=\frac n2$, so $\operatorname{Cov}(X,Y)=E[XY]-E[X]\,E[Y]=\frac n3-\frac n4=\frac n{12}$.
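If you'd like a numerical sanity check, here is a small Monte Carlo sketch; the specific model $Y\mid X \sim \mathrm{Binomial}(n,X)$ with $X\sim\mathrm{Uniform}(0,1)$ is my own assumption, chosen only because it has $E[Y\mid X]=nX$:

import numpy as np

rng = np.random.default_rng(0)
n, N = 10, 10**6
X = rng.random(N)
Y = rng.binomial(n, X)      # one model with E[Y | X] = n*X
print(X.dot(Y) / N, n / 3)  # both close to 3.33 for n = 10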
A recursion similar to the one for Bernoulli numbers | You have for $m\geq 1$ $$\sum_{j=0}^m {m \choose j} A_j=-A_m$$
This gives that with $\displaystyle f(x)=\sum_{j=0}^{+\infty}\frac{A_j}{j!} x^j$ and $A_0=1$, if we multiply by $\frac{x^m}{m!}$ and sum for $m\geq 1$, we have
$$f(x)\exp(x)-1=-(f(x)-1)$$
hence $\displaystyle f(x)=\frac{2}{\exp(x)+1}$. Now we find easily that $f(x)+f(-x)=2$, and we are done. |
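One can check the first several coefficients symbolically (a sketch):

import sympy as sp

x = sp.symbols('x')
f = 2 / (sp.exp(x) + 1)
ser = sp.series(f, x, 0, 8).removeO()
A = [sp.factorial(j) * ser.coeff(x, j) for j in range(8)]

for m in range(1, 8):
    lhs = sum(sp.binomial(m, j) * A[j] for j in range(m + 1))
    print(m, sp.simplify(lhs + A[m]))   # -> 0 each time, i.e. the recursion holds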
Transformation of two non parallel lines | Since the lines $l$ and $l'$ define $4$ regions in the Euclidean plane, we can consider the bisectors of these regions, say $b_1$ and $b_2$. Notice that $b_1\cap b_2=\{x\}$. For $i=1,2$, define $R_i$ as the reflection with respect to the bisector $b_i$. Both reflections satisfy the properties $R_i(x)=x$ and $R_i(l)=l'$. Moreover, each combination of them satisfies those conditions. Now it remains to see that the group generated by $R_1$ and $R_2$ has exactly $4$ elements.
Solving the cubic-exponential Diophantine equation $x^3+3=2^n$ | Two cases:
$1)$ $n=2k$. Then $x^3+3=\left(2^k\right)^2$. But $a^3+3=b^2$ has $2$ integral solutions (http://oeis.org/A081119) $(a,b)=(1,\pm 2)$, so $(x,n)=(1,2)$.
$2)$ $n=2k+1$. Then $(2x)^3+24=\left(2^{k+2}\right)^2$. But $a^3+24=b^2$ has $8$ integral solutions (http://oeis.org/A081119, in particular http://oeis.org/A081119/b081119.txt; I found them with a program, or you can consult the linked tables)
$$(a,b)=(-2,\pm 4),(1,\pm 5),(10,\pm32),(8158,\pm736844),$$
so $(x,n)=(-1,1),(5,7)$.
Mordell Equations $x^3+k=y^2$ for $k\in\Bbb Z_{\neq 0}$ are fully solved when $|k|<10^4$ (this was done in $1998$), so we can always use this method when the numbers aren't very large.
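For a quick (non-proof) sanity check of both cases, one can simply search $2^n-3$ for perfect cubes (a sketch):

def icbrt(m):
    # integer cube root, valid for negative m as well
    if m < 0:
        return -icbrt(-m)
    x = round(m ** (1 / 3))
    while x ** 3 > m:
        x -= 1
    while (x + 1) ** 3 <= m:
        x += 1
    return x

for n in range(1, 200):
    t = 2 ** n - 3
    x = icbrt(t)
    if x ** 3 == t:
        print((x, n))   # -> (-1, 1), (1, 2), (5, 7)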
How to determine the values of $x$ for which $f(x)=x^4-8x^3+22x^2-24x+21$ is increasing and for which it is decreasing. | Since $f^\prime=4(x^3-6x^2+11x-6)=4(x-1)(x-2)(x-3)$, $f$ is increasing (decreasing) when $(x-1)(x-2)(x-3)$ is positive (negative). Note that $x>3\implies f^\prime>0$, and $f^\prime$ changes sign at each of $1$, $2$, $3$. So $f$ is increasing for $1<x<2$ and for $x>3$, and decreasing for $x<1$ and for $2<x<3$.
Note: if you're wondering how I factorized $f^\prime$, the honest answer is I've seen that monic cubic before in certain combinatorial contexts, but the helpful answer is the rational root theorem implies its only possible rational roots are factors of the constant term $-6$, so it makes sense to evaluate it at $\pm1,\,\pm2,\,\pm3$. Once you find a root, you can take out a linear factor, leaving a quadratic factor, which is easier to handle. No need to break out Cardano's formula. |
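A quick symbolic check of the factorization (a sketch):

import sympy as sp

x = sp.symbols('x')
f = x**4 - 8*x**3 + 22*x**2 - 24*x + 21
print(sp.factor(sp.diff(f, x)))   # -> 4*(x - 1)*(x - 2)*(x - 3)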
Why does drawing $\square$ mean the end of a proof? | It just means the same thing as q.e.d. Its introduction is usually attributed to Paul Halmos:
"The symbol is definitely not my invention — it appeared in popular magazines (not mathematical ones) before I adopted it, but, once again, I seem to have introduced it into mathematics. It is the symbol that sometimes looks like ▯, and is used to indicate an end, usually the end of a proof. It is most frequently called the 'tombstone', but at least one generous author referred to it as the 'halmos'.", Paul R. Halmos, I Want to Be a Mathematician: An Automathography, 1985, p. 403.
(This is quoted in Wikipedia) |
Derivative Calculation | Each step should follow one of the derivative rules that you know about. The notation "$\frac{d}{dx}$" in what follows means "take the derivative of".
$$
\begin{align*}
f'(x) &= \frac{d}{dx}\left[ \frac{3x^2 + 1}{2} \right]\\
&= \frac{d}{dx}\left[ \frac{1}{2}(3x^2 + 1) \right], \quad\textrm{(algebra)}\\
&= \frac{1}{2}\frac{d}{dx}\left[3x^2 + 1 \right], \quad\textrm{(constant multiple rule)}\\
&= \frac{1}{2}\left( \frac{d}{dx}[3x^2] + \frac{d}{dx}[1] \right),
\quad \textrm{(sum/difference rule)}\\
&= \frac{1}{2}\left( 3\frac{d}{dx}[x^2] + \frac{d}{dx}[1] \right),
\quad \textrm{(constant mult. rule again)}\\
&= \frac{1}{2}\left( 3(2x) + \frac{d}{dx}[1] \right),
\quad \textrm{(power rule)}\\
&= \frac{1}{2}\left( 3(2x) + 0 \right),
\quad \textrm{(derivative of a constant is 0 -- really just power rule)}\\
&= 3x, \quad \textrm{(algebra to simplify answer)}
\end{align*}
$$
Now as you do more and more of these problems, you'll find which steps you can do in your head, until you get to the point where it becomes a one-line problem!
Hope this helps! |
does one point sets are closed in a non hausdorff space? | Saying that one-point sets are closed means that for any $x\in X$, the complement $X\setminus\{x\}$ is in the topology, i.e. open.
This is of course not true in general. Take $X=\{0,1\}$ and the trivial topology $\{\emptyset, X\}$. |
Is it possible $\sin\alpha=\cos\alpha=1?$ | $\sin ^2 x + \cos ^2 x = 1$ for all $x$, so if you had $\cos \alpha = \sin \alpha = 1$, then $\sin ^2 \alpha + \cos ^2 \alpha$ would be equal to $2$. This is absurd. |
Leibniz integral rule for solving integrals | $$I_1(y)=\int_0^1 \frac{\arctan(yx)}{x\sqrt{1-x^2}}dx\Rightarrow I_1(0)=0$$
$$I_2(y)=\int_0^{\pi/2}\frac{\arctan(y\tan x)}{\tan x}dx\Rightarrow I_2(0)=0$$
First for $I_1$, take $\frac{d}{dy}$ on both sides (and use the Leibniz integral rule)
$$I_1'(y)=\int_0^1\frac1{x\sqrt{1-x^2}}\frac{\partial}{\partial y}\arctan(yx)dx$$
$$I_1'(y)=\int_0^1\frac1{x\sqrt{1-x^2}}\frac{x}{1+y^2x^2}dx$$
$$I_1'(y)=\int_0^1\frac{dx}{(1+y^2x^2)\sqrt{1-x^2}}$$
Then set $x=\sin(u)$:
$$I_1'(y)=\int_0^{\pi/2}\frac{du}{1+y^2\sin^2u}$$
Then set $u=t/2$:
$$I_1'(y)=\int_0^\pi\frac{dt}{2+y^2-y^2\cos t}$$
Then let $x=\tan(t/2)$:
$$I_1'(y)=2\int_0^\infty \frac{1}{2+y^2+y^2\frac{x^2-1}{x^2+1}}\frac{dx}{x^2+1}$$
$$I_1'(y)=\int_0^\infty \frac{dx}{(1+y^2)x^2+1}$$
And since it is easily shown that
$$\int_0^\infty \frac{dx}{ax^2+1}=\int_0^\infty \frac{dx}{x^2+a}=\frac\pi{2\sqrt{a}}$$
We have that
$$I_1'(y)=\frac\pi{2\sqrt{1+y^2}}$$
And since $I_1(0)=0$ we have that
$$I_1(y)=\frac\pi2\int_0^y \frac{da}{\sqrt{1+a^2}}=\frac\pi2\sinh^{-1}(y)$$
Like with $I_1$, we compute $I_2$ by taking $d/dy$ on both sides which gives
$$I_2'(y)=\int_0^{\pi/2} \frac1{\tan x}\frac{\partial}{\partial y}\arctan(y\tan x)dx$$
$$I_2'(y)=\int_0^{\pi/2} \frac1{\tan x}\frac{\tan x}{1+y^2\tan^2x}dx$$
$$I_2'(y)=\int_0^{\pi/2}\frac{dx}{1+y^2\tan^2x}$$
We can re write this as
$$I_2'(y)=\int_0^{\pi/2}\frac{\sec(x)^2dx}{(1+\tan(x)^2)(1+y^2\tan(x)^2)}$$
We then use $u=\tan(x)$ to get
$$I_2'(y)=\int_0^\infty \frac{du}{(1+u^2)(1+y^2u^2)}$$
Then we can use partial fractions and a trig sub to get
$$I_2'(y)=\frac\pi{2y+2}$$
And since $I_2(0)=0$, we have that
$$I_2(y)=\frac\pi2\int_0^y \frac{dt}{t+1}=\frac\pi2\ln(y+1)$$
So we plug in $y=1$ to get that
$$\int_0^{\pi/2}\frac{x}{\tan x}\,dx=\frac\pi2\ln2$$
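Both closed forms are easy to spot-check numerically (a sketch; $y = 0.7$ is an arbitrary sample value of mine):

import numpy as np
from scipy.integrate import quad

y = 0.7
I1, _ = quad(lambda x: np.arctan(y*x) / (x*np.sqrt(1 - x**2)), 0, 1)
print(I1, np.pi/2 * np.arcsinh(y))    # the two agree

I2, _ = quad(lambda x: x / np.tan(x), 0, np.pi/2)
print(I2, np.pi/2 * np.log(2))        # both ~1.0888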
Dimension of space of meromorphic functions on torus with only one pole | Without loss of generality assume that the pole is at $z=0$. A function
$f \in V$ has a Laurent series
$$
f(z) = \sum_{j=-n}^\infty a_j z^j
$$
at $z=0$.
Now consider the Weierstrass $\wp$ function for the same lattice, and
its derivatives $\wp', \ldots, \wp^{(n-2)}$.
$\wp, \wp', \ldots, \wp^{(n-2)}$ have poles at $z=0$
of order $2, 3, \ldots, n$, respectively. It follows that for suitable constants
$c_0, \ldots, c_{n-2}$,
$$
g(z) = f(z) - c_0 \wp - c_1 \wp' - \ldots - c_{n-2} \wp^{(n-2)}
$$
has at most a simple pole at $z=0$ (and no other poles in the
fundamental parallelogram).
And since the sum of all residues of an elliptic function in the fundamental parallelogram is always zero, $g$ has no pole at all
and therefore is constant.
It follows that functions in $V$ are exactly the linear combinations
$$
c + c_0 \wp + c_1 \wp' + \ldots + c_{n-2} \wp^{(n-2)}
$$
with $c, c_0, \ldots, c_{n-2} \in \Bbb C$. Since these $n$ functions have pairwise distinct pole orders at $z=0$ (or no pole at all, for the constant), they are linearly independent, and therefore $\dim V = n$.
Logic and Conditional Law | What if $A = \{2,~ 3,~ 4\}$ and $B = \{10,~11,~12\}$ and $a = 6$ ?
It might be that your issue is that you are assuming $A \setminus B$ is only defined when $B \subseteq A$. That may be true in common language, but in conventional mathematics there is no subset requirement.
Properties of any structures | Unfortunately, in mathematics, there is no general encyclopedia or database for the known mathematical structures and terms. You could look for numbers, for algebraic structures and for combinatorial structures.
Search for example in:
Google
Google Books
Online Encyclopedia of Integer Sequences (https://oeis.org)
Encyclopedia of Combinatorial Structures (http://ecs.inria.fr)
Wolfram mathworld (http://mathworld.wolfram.com)
Wikipedia
Wikipedia: list of mathematical |
Inversion Formula for Asymptotic Behaviour | In general the asymptotic behavior of $f$ is not determined by the asymptotic behavior of its partial sums. Loosely speaking the problem is that the summation averages out oscillatory behavior.
A natural source of examples of this behavior is functions coming from number theory; for example, the totient function $\varphi(n)$ has the property that
$$\sum_{k=1}^n \varphi(k) \sim \frac{3n^2}{\pi^2}$$
which is also true of the function $\frac{6n}{\pi^2}$, but it is not true that $\varphi(n) \sim \frac{6n}{\pi^2}$; in fact $\lim_{n \to \infty} \frac{\varphi(n)}{n}$ does not exist, and we have $\limsup_{n \to \infty} \frac{\varphi(n)}{n} = 1$ (by taking $n$ to be prime) but $\liminf_{n \to \infty} \frac{\varphi(n)}{n} = 0$ (by taking $n$ to be the product of the first $k$ primes and letting $k \to \infty$). |
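A numeric illustration of these statements (a sketch):

import math
from sympy import totient

n = 10**4
s = sum(int(totient(k)) for k in range(1, n + 1))
print(s / (3 * n**2 / math.pi**2))   # -> close to 1

print(int(totient(9973)) / 9973)     # 9973 is prime, so the ratio is near 1
m = 2*3*5*7*11*13
print(int(totient(m)) / m)           # a primorial: the ratio is small (~0.19)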
How to subtract an absolute value from both sides of an inequality? ($y+|x|<3$) | $$y+|x| < 3$$
Subtracting $|x|$ from both sides, we have
$$y+|x|-|x| < 3-|x|$$
Since $|x|-|x|=0$,
$$y < 3-|x|$$
That is for $x \geq 0$, plot $y <3-x$.
For $x<0$, plot $y < 3+x$.
@Daniel_W._Farlow's approach based on transformation is awesome. |
Nilpotent iff every maximal subgroup is normal | To be fair I don't see how $G \ge H \ge \{1\}$ helps you prove that $G$ is nilpotent.
First of all assume that $G$ isn't the trivial group, nor a $p$-group, as it's trivial that the claim holds for those groups.
Anyway, here's a way to prove the fact above. Assume that each maximal subgroup of $G$ is normal. Now let $P$ be a $p$-Sylow subgroup of $G$. Obviously $P \not = G$. Now we know that if each Sylow subgroup is normal in $G$, then it's nilpotent. So assume that $N[P] \not = G$. Then we have that $N[P] \le M < G$, where $M$ is some maximal subgroup of $G$. By hypothesis $M \lhd G$, and $P$ is a Sylow $p$-subgroup of $M$ (since $P \le N[P] \le M$), so Frattini's Argument together with $N_G[P] \le M$ gives:
$$G = MN_G[P] = M$$
which is a contradiction and hence $N[P] = G$ and so $P \lhd G$ and so $G$ is nilpotent.
For the other way, first prove that nilpotent groups satisfy the normalizer condition: every proper subgroup is properly contained in its normalizer. Then let $M$ be any maximal subgroup of $G$; we have $M < N[M]$, but by the maximality of $M$ we must have $N[M] = G$, hence $M \lhd G$, which completes the proof.
Rank of a linear operator | The rank of the operator $T: \mathbb{R}^{2\times 2} \to \mathbb{R}^{2\times 2}$, defined by $T(X) = AX$, agrees with the rank of its matrix representation with respect to any pair of (ordered) bases for the domain and codomain.
For simplicity let's use the same basis for both:
$$ \mathscr{B} = \left\{ \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} ,
\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} ,
\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} ,
\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} \right\} $$
With respect to this (ordered) basis, the representation of the linear operator:
$$ A \begin{pmatrix} a & b \\ c & d \end{pmatrix} =
\begin{pmatrix} a+c & b+d \\ 0 & 0 \end{pmatrix} $$
is $M = \begin{pmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0
\end{pmatrix}$, whose $j$-th column holds the coordinates of the image of the $j$-th basis matrix. The rank of $M$ is two, as is the rank of the linear operator $T$.
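A numeric cross-check (a sketch): with the column-stacking $\operatorname{vec}$, left multiplication by $A$ corresponds to the Kronecker product $I\otimes A$, whose rank is $2\operatorname{rank}(A) = 2$:

import numpy as np

A = np.array([[1., 1.],
              [0., 0.]])
M = np.kron(np.eye(2), A)          # vec(A X) = (I kron A) vec(X)
print(np.linalg.matrix_rank(M))    # -> 2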
Evaluating $\int e^{ax} x^b (1-x)^c \mathrm{dx}$ | You can expand out the $(1-x)^c$ to get terms of the form $\int e^{ax}x^n dx$. Wolfram Alpha then gives a solution in terms of the incomplete Gamma function. This is a form that can be integrated by parts-set $dv=e^{ax}dx, u=x^n$ and step down the exponents, giving $\int e^{ax}x^n dx=\frac {x^n e^{ax}}a -\frac na \int x^{n-1}e^{ax}dx$ |
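A quick symbolic check of the reduction formula for one sample exponent ($n = 3$ is my choice), verifying the claimed antiderivative by differentiation:

import sympy as sp

x, a = sp.symbols('x a', positive=True)
n = 3
F = x**n * sp.exp(a*x) / a - (sp.Integer(n)/a) * sp.integrate(sp.exp(a*x) * x**(n-1), x)
print(sp.simplify(sp.diff(F, x) - sp.exp(a*x) * x**n))   # -> 0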
Numbers which are sums of all smaller primes | Let $\Psi(n)$ denote the sum of the primes less than $n$. We contend that $n>5$ implies $\Psi(n)>n$. That is clearly stronger than the desired result.
It is easy to confirm that this is true for the first few $n$. We then proceed by induction.
Suppose $N$ is the least counterexample. Then, in particular, the claim must be true for $\Big \lfloor \frac N2\Big \rfloor$ (we may assume that we have checked far enough so that $\frac N2>6$). But then $$\Psi\left(\Big \lfloor \frac N2\Big \rfloor\right)>\frac N2$$
By Bertrand there is a prime $p$ between $\frac N2$ and $N$ so that we have $$\Psi(N)≥\Psi\left(\Big \lfloor \frac N2\Big \rfloor\right)+p>N$$ and we are done. |
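The base cases of the induction are easy to verify by machine (a sketch):

from sympy import primerange

def Psi(n):   # sum of the primes less than n
    return sum(primerange(2, n))

print(all(Psi(n) > n for n in range(6, 2000)))   # -> True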