What is the underlying function for the "shotgun" polar pattern? | I started off with $\cos(2\theta)$ to make a 4 petal pattern.
Then, I subtracted $0.5$, a small, arbitrary constant, to reduce the size of the horizontal petals.
Finally, I added $\sin(\theta)$ to increase the size of the top petal.
Indeed it's quite rough, but $\cos(2\theta)-0.5+\sin(\theta)$ makes the approximate shape. Play with the coefficients, I suppose.
The only discord between my function and yours is that it looks horizontally squished. |
Rank of a Matrix that Lies in Left Null Space of Another Matrix | I assume, that the columns of $D$ form a basis of the left null space of $B$. Note that the left null space of $B$ equals the null space of $B^T$. Hence,
$$
\operatorname{rank}(D) = \dim\ker B^T = n - \operatorname{rank}(B^T) = n - \operatorname{rank}(B) = n-m.
$$
So, yes. |
How to get a vector field from its rotation field? | Formula valid when the domain is star-shaped with respect to the origin (${\rm div}\ {\bf R}=0$ is required):
$${\bf V}(x)=\int_0^1 {\bf R}(tx)\times tx\,dt.$$ |
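A quick symbolic sanity check of the formula, using the (my choice of) divergence-free sample field $\mathbf R=(y,z,x)$:

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')
# Sample field R = (y, z, x); note div R = 0 as required.
R = sp.Matrix([y, z, x])
X = sp.Matrix([x, y, z])
Rt = R.subs([(x, t*x), (y, t*y), (z, t*z)], simultaneous=True)
# V(x) = integral_0^1 R(tx) x (tx) dt
integrand = Rt.cross(t * X)
V = integrand.applyfunc(lambda e: sp.integrate(e, (t, 0, 1)))
curlV = sp.Matrix([
    sp.diff(V[2], y) - sp.diff(V[1], z),
    sp.diff(V[0], z) - sp.diff(V[2], x),
    sp.diff(V[1], x) - sp.diff(V[0], y),
])
print(sp.simplify(curlV - R))  # zero vector: curl V recovers R
```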
Arithmetic modulo $n$ when $n>a$ | Note that $a = 0\cdot n+a$, so $q=0$, $r=a$ works if $0\le a<n$. |
A doubt on the Sobolev space $W_0^{1,p}(\Omega)$ | Another situation in which equality fails is $\Omega = (-1,1)^d \setminus (\{0\}\times (-1,1)^{d-1})$. Then,
$$W_\Omega = W_0^{1,p}( (-1,1)^d )$$
since the a.e.-equality does not see the hyperplane $\{x | x_1 = 0\}$.
However, you have (without any further assumptions on $\Omega$)
$$W_0^{1,p}(\Omega) = \{ v \in W^{1,p}(\mathbb R^d) \;|\; v = 0 \text{ q.e. on } \mathbb R^d \setminus \Omega \}.$$
Here, "q.e." means quasi-everywhere (i.e., up to subsets of capacity zero) and we use the quasi-continuous representative of $v \in W^{1,p}(\mathbb R^d)$.
This can be found in Theorem 4.5 in the book
"Nonlinear potential theory of degenerate elliptic equations"
(1993)
by
Heinonen, Kilpeläinen, Martio. |
Factoring $9788111$ via Gaussian elimination over $\mathbb F_2$ | $$
\left(
\begin{array}{ccc}
1&0 &1 \\
1 &1 &0 \\
1& 1& 1\\
\end{array}
\right)
$$
Got it, give me a minute; easy, as Henno says
$$
\left(
\begin{array}{ccc}
1&0 &1 \\
0 &1 &1 \\
0&1 &0 \\
\end{array}
\right)
$$
$$
\left(
\begin{array}{ccc}
1&0 &1 \\
0&1 &1 \\
0&0 &1 \\
\end{array}
\right)
$$ |
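For larger matrices the elimination is easy to script; here is a minimal sketch of row reduction over $\mathbb F_2$ (the helper name is mine, not from the answer), reproducing the echelon form above:

```python
# Row-reduce a 0/1 matrix over GF(2) using XOR row operations.
def rref_gf2(rows):
    rows = [r[:] for r in rows]
    m, n = len(rows), len(rows[0])
    pivot_row = 0
    for col in range(n):
        # find a row with a 1 in this column at or below pivot_row
        for r in range(pivot_row, m):
            if rows[r][col]:
                rows[pivot_row], rows[r] = rows[r], rows[pivot_row]
                break
        else:
            continue
        # XOR the pivot row into every lower row with a 1 in this column
        for r in range(pivot_row + 1, m):
            if rows[r][col]:
                rows[r] = [a ^ b for a, b in zip(rows[r], rows[pivot_row])]
        pivot_row += 1
    return rows

M = [[1, 0, 1], [1, 1, 0], [1, 1, 1]]
print(rref_gf2(M))  # [[1, 0, 1], [0, 1, 1], [0, 0, 1]]
```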
What is $\frac{2x}{1-x^2}$ when $x=\sqrt{\frac{1-\cos\theta}{1+\cos\theta}}$? | $$x=\sqrt{\frac{1-\cos\theta}{1+\cos\theta}}$$
But $\cos2a=2\cos^2a-1=1-2\sin^2a$, so
$$1-\cos\theta=2\sin^2 \frac{\theta}{2}$$
$$1+\cos\theta=2\cos^2 \frac{\theta}{2}$$
Hence
$$x=\left|\tan \frac{\theta}2\right|$$
Therefore
$$\frac{2x}{1-x^2}=\frac{2\left|\tan\frac{\theta}2\right|}{1-\tan^2\frac{\theta}2}=\frac{\left|2\sin\frac{\theta}2\cos\frac{\theta}2\right|}{\cos^2\frac{\theta}2-\sin^2\frac{\theta}2}=\frac{|\sin\theta|}{\cos\theta}$$
Notice that the answer is not $\tan\theta$ when $\sin\theta<0$, that is, for
$$\theta \in \bigcup_{k\in\Bbb Z} ](2k-1)\pi,2k\pi[$$
Or, if we remove also the values of $\theta$ for which $\cos\theta=0$,
$$\theta \in \bigcup_{k\in\Bbb Z} ]-\pi+2k\pi,-\pi/2+2k\pi[\;\cup\;]-\pi/2+2k\pi,2k\pi[$$ |
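A quick numerical spot-check of the identity at a point where $\sin\theta<0$ (the value $\theta=3.5$ is just an illustrative choice):

```python
import math

theta = 3.5  # sin(theta) < 0 here
x = math.sqrt((1 - math.cos(theta)) / (1 + math.cos(theta)))
lhs = 2 * x / (1 - x * x)
rhs = abs(math.sin(theta)) / math.cos(theta)
# lhs agrees with |sin|/cos, and differs in sign from tan(theta)
print(lhs, rhs, math.tan(theta))
```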
A geometry problem from high school Olympiad selection round | You are looking for Euler's formula for a triangle.
Here is the image to give you an idea of how to get the relation.
By using the Pythagorean theorem on different triangles you get
$$\begin{align}
y &= R - r - h,\\
x &= \sqrt{\rho^{2}-(h+r)^{2}},\\
\rho^{2} &= h^{2}+(a/2)^{2},\\
d^{2} &= x^{2}+y^{2}\\
&= \rho^{2}-(h+r)^{2}+(R-r-h)^{2}\\
&= h^{2}+(a/2)^{2}-(h+r)^{2}+(R-r-h)^{2}\\
&= (a/2)^{2}-2Rr+(R-h)^{2},\\
R^{2}&=(R-h)^{2}+(a/2)^{2}
\end{align}
$$
By solving this we get
$$d^{2}=R^{2}-2Rr$$ |
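A quick numerical check of $d^2=R^2-2Rr$ on a concrete example (the 3-4-5 right triangle, chosen for convenience):

```python
# 3-4-5 right triangle with the right angle at the origin, legs on the axes.
a, b, c = 3.0, 4.0, 5.0
R = c / 2                      # circumradius of a right triangle
r = (a + b - c) / 2            # inradius of a right triangle
incenter = (r, r)
circumcenter = (a / 2, b / 2)  # midpoint of the hypotenuse
d2 = (incenter[0] - circumcenter[0]) ** 2 + (incenter[1] - circumcenter[1]) ** 2
print(d2, R * R - 2 * R * r)   # both equal 1.25
```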
constant speed of curve, regular curve, and reparametrization by an arc length | You have indeed completed the necessary computations, but you could definitely present your solution in a nicer manner!
E.g., what is the arc-length parametrization? You have found $T(s)$, so you can define $A(s) = \alpha(T(s))$.
It would probably help (and be educationally reinforcing) to write things like "By definition, the speed of the parametrization $\alpha(t)$ is defined to be __ which is readily seen to be a constant since (insert computation)." |
Bounded operator and dense sets | The answer is "no" to your first question:
In $\ell_2$, extend the unit vectors, $(e_i)_{i=1}^\infty$, to a Hamel basis $(f_\alpha)_{\alpha\in I}$ of $\ell_2$. For $i\in\Bbb N$, define $Ae_i=e_i$. Define $A$ on the other elements of $(f_\alpha)_{\alpha\in I}$ so that it is unbounded (for instance, take $(h_i)_{i=1}^\infty$ a sequence from $(f_\alpha)_{\alpha\in I}$ disjoint from $(e_i)_{i=1}^\infty$ and map $h_i$ to $i\cdot h_i $). Note $I$ must be uncountable; hence this can be done.
The answer to your second question is "no" as well, as Daniel Fischer's comment in the original post shows. |
Determine if bounded solutions of ODE system are stable | Step 1: Apply the Laplace transform to both equations and solve the system in the Laplace domain. The system becomes linear in the Laplace domain and it is, therefore, easy to solve.
Step 2: If the solution's poles lie strictly in the left half-plane (the second or third quadrant), the system is stable, and therefore so are your bounded solutions.
I would apply the Routh–Hurwitz stability criterion:
https://en.wikipedia.org/wiki/Routh%E2%80%93Hurwitz_stability_criterion
http://pages.mtu.edu/~tbco/cm416/routh.html
The method above is normally used by Control Engineers to check the stability of LTI systems. |
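As a complement to step 2, here is a direct numerical check on a hypothetical characteristic polynomial $(s+1)^3$, using the fact that stability means every pole has a negative real part:

```python
import numpy as np

# Poles of the (assumed) characteristic polynomial s^3 + 3s^2 + 3s + 1 = (s+1)^3.
poles = np.roots([1, 3, 3, 1])
stable = all(p.real < 0 for p in poles)
print(poles, stable)  # all poles at -1, so the system is stable
```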
Confirming the solution of an ODE | $$y'=\frac{2x-y}{x-2y} \implies (y-2x)dx+(x-2y)dy=0=Mdx+Ndy~~~~(1)$$
As $\frac{\partial M}{\partial y}=1= \frac{\partial N}{\partial x}$, it is an exact ODE, so its solution is
$$\int(y-2x) dx ~\text{(treat $y$ as constant)}+ \int (-2y ) dy =C$$
$$\implies xy-x^2-y^2=C$$
Equation (1) can also be seen as a homogeneous ODE, and it can be solved using $y=vx$. |
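A symbolic check of the exactness condition and of the implicit solution (sympy is my choice of tool here):

```python
import sympy as sp

x, y = sp.symbols('x y')
F = x*y - x**2 - y**2          # candidate implicit solution F(x, y) = C
M = y - 2*x                    # from (y - 2x)dx + (x - 2y)dy = 0
N = x - 2*y
# exactness test, and recovery of M and N from the potential F
print(sp.simplify(sp.diff(M, y) - sp.diff(N, x)),
      sp.simplify(sp.diff(F, x) - M),
      sp.simplify(sp.diff(F, y) - N))  # 0 0 0
```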
Number of partitions into $k$ parts: recursion has characteristic polynomial $(X-1)(X^2-1) \cdots (X^k-1)$? | What you call $p_k$ I will call $r_k$, and use $p_k$ to count partitions of $n$ into at most $k$ parts. This gives the obvious recurrence $p_k=r_k+p_{k-1}$ (as functions of $n$). We may regard $X$ as the shift operator, satisfying $(Xf)(n):=f(n+1)$, in which case $(X^k-1)p_k$ equals $X^kp_{k-1}$, i.e.
$$ p_k(n+k)=p_k(n)+p_{k-1}(n+k). \tag{$\circ$}$$
This is because every partition of $n+k$ into at most $k$ parts either has $k$ parts, in which case we may subtract $1$ from each part to obtain a partition of $n$, or else it has at most $k-1$ parts.
Thus $(X^k-1)\cdots(X-1)p_k=0$. Substituting $p_k=r_k+p_{k-1}$ in $(X^k-1)p_k=X^kp_{k-1}$ and then rewriting $X^kp_{k-2}$ as $X(X^{k-1}-1)p_{k-1}$ and cancelling like terms yields
$$ (X^k-1)r_k=(1-X)p_{k-1}+X^kr_{k-1}. \tag{$\ast$}$$
If we start with the induction hypothesis that $(X^{k-1}-1)\cdots(X-1)$ annihilates $r_{k-1}$, then it is clear that $(X^{k-1}-1)\cdots(X-1)$ annihilates both sides of $(\ast)$, or in other words we have concluded that $(X^k-1)(X^{k-1}-1)\cdots(X-1)r_k=0$. The base case $(X-1)r_1=0$ is clear. |
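The annihilation claim $(X^k-1)\cdots(X-1)p_k=0$ is also easy to test numerically; a small sketch for $k=3$ (helper names are mine):

```python
from functools import lru_cache

@lru_cache(None)
def p(n, k):
    # number of partitions of n into at most k parts
    if n == 0:
        return 1
    if k == 0 or n < 0:
        return 0
    return p(n, k - 1) + p(n - k, k)

def shift_minus_one(f, j):
    # the operator X^j - 1 acting on a sequence f
    return lambda n: f(n + j) - f(n)

k = 3
f = lambda n: p(n, k)
for j in range(1, k + 1):
    f = shift_minus_one(f, j)
print([f(n) for n in range(10)])  # all zeros
```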
topology made of all sets on R ("closed", "open", "clopen") | The topology you're looking for consists of arbitrary unions of finite intersections of sets in the family you're starting with.
Let $c\in\mathbb{R}$; then $(c-1,c]$ and $[c,c+1)$ belong to the family, so their intersection is open in the topology.
Since $c$ is arbitrary, you can conclude the topology is… |
Horizontal asymptote definition | Actually, it means that a function never reaches its horizontal asymptote when $x\to \infty$. It imposes no restriction for bounded values of $x$. |
Fit a plane in data set which passes through maximum number of points in this data set and disregards noise | Suppose that the equation of the plane is $$Ax+By+Cz+D=0$$ The distance of a point $(x_i,y_i,z_i)$ to the plane is given by $$d_i=\pm \frac{Ax_i+By_i+Cz_i+D}{\sqrt{A^2+B^2+C^2}}$$ So, to me, by analogy with least-squares fitting, the idea is to minimize $$\Phi(A,B,C,D)=\sum_{i=1}^n d_i^2$$ with respect to the parameters $A,B,C,D$ (under a normalization such as $A^2+B^2+C^2=1$). I should say that this looks like a standard minimization problem.
When the problem is solved, you know all the $d_i$'s. If you want to discard those for which $|d_i| \gt \Delta$, just remove the points from the data set and repeat the optimization.
I suppose that you could be interested by this |
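One standard way to carry out the least-squares step (my sketch, not necessarily what the linked reference does) is to take the plane through the centroid whose normal is the right-singular vector for the smallest singular value:

```python
import numpy as np

# Synthetic data lying exactly on the plane z = 2x - 3y + 5.
rng = np.random.default_rng(0)
xy = rng.normal(size=(100, 2))
z = 2 * xy[:, 0] - 3 * xy[:, 1] + 5
P = np.column_stack([xy, z])

centroid = P.mean(axis=0)
_, _, Vt = np.linalg.svd(P - centroid)
normal = Vt[-1]                 # unit normal (A, B, C)
D = -normal @ centroid
dists = P @ normal + D          # signed distances d_i
print(np.max(np.abs(dists)))    # ~0 for noiseless data
```

After solving, points with $|d_i|>\Delta$ can be dropped and the fit repeated, as described above.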
Intuition behind quaternion multiplication with zero scalar | If both vectors $x$ and $y$ are of unit length, the resulting quaternion is actually a rotation around the axis formed by the cross product of the two vectors $x \times y$. Since quaterions are applied in "sandwitch" product, the rotation of any vector around that axis is twice the angle formed by the two vectors. The rotation angle of the quaternion can be calculated as:
$\theta = \tan^{-1}(\frac{\| x \times y\|}{x \cdot y})$.
The quaternion product of $x$ and $y$ is equal to:
$x y = -\cos(\theta) + \sin(\theta) b$
Where $b = \frac{x \times y}{\|x \times y\|}$ is the axis of rotation.
That is analogous to a 2D rotation in complex number theory (Euler's formula)
$e^{-\theta i} = \cos(\theta) - \sin(\theta) i$
You can also define a quaternion in terms of exponential function:
$e^{-b \theta/2} = \cos(\theta/2) - \sin(\theta/2) b$
The convention here is to use half of the angle; since quaternions are applied in a "sandwich" product, the total rotation is twice the angle of the quaternion.
Using the exponential map defined above, one can parameterize quaternions using angle and axis parameters.
If vectors $x$ and $y$ are not of unit length, the quaternion $q = x y$ still represents a rotation: since the inverse is $q^{-1} = \frac{q^*}{\|q\|^2}$, the scalar factor $\frac{1}{\|q\|^2}$ accounts for the normalization in the rotated vector $v' = q v q^{-1}$, so the norm of $v$ is preserved after rotation.
The quaternion product is a form of the fundamental "geometric product" of vectors in the 3D Euclidean Geometric Algebra. That product conveys a lot of geometric meaning. In the language of Geometric Algebra you can define projections, rejections, intersections and more, in terms of the geometric product. |
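A tiny numerical illustration of the product of two pure unit quaternions, $xy=-x\cdot y+x\times y$ (the helper is mine):

```python
def qmul(p, q):
    # Hamilton product of quaternions stored as (w, x, y, z)
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

x = (0.0, 1.0, 0.0, 0.0)   # pure quaternion for the unit vector x
y = (0.0, 0.0, 1.0, 0.0)   # unit vector y, at angle pi/2 from x
print(qmul(x, y))          # (-x.y, x cross y) = (0, 0, 0, 1)
```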
A problem about symmetric random walk | For every integer $-b\leqslant x\leqslant a$, let $t_x=\mathrm E_x(T_a:T_a\lt T_{-b})$. Then $t_{-b}=t_a=0$ and the Markov property after one step yields
$$
t_x=\mathrm P_x(T_a\lt T_{-b})+\frac12(t_{x-1}+t_{x+1}),
$$
for every integer $-b\lt x\lt a$. Write this as
$$
(\Delta t)_x=-\mathrm P_x(T_a\lt T_{-b})=-\frac{x+b}{a+b},
$$
where $\Delta$ is the discrete Laplacian operator, defined by
$$
(\Delta u)_x=\frac12(u_{x-1}+u_{x+1})-u_x.
$$
Let us check the effect of $\Delta$ on some simple sequences:
If $u_x=1$ for every $x$, then $(\Delta u)_x=0$.
If $u_x=x$ for every $x$, then $(\Delta u)_x=0$.
If $u_x=x^2$ for every $x$, then $(\Delta u)_x=1$.
If $u_x=x^3$ for every $x$, then $(\Delta u)_x=3x$.
One sees that $\Delta t=\Delta t^{c,d}$, where, for every $(c,d)$, $t^{c,d}$ is defined by
$$
t_x^{c,d}=\frac{c+dx-3bx^2-x^3}{3(a+b)}.
$$
If $t_{-b}^{c,d}=t_a^{c,d}=0$, then $t=t^{c,d}$ on $\{-b,a\}$ and $\Delta t=\Delta t^{c,d}$ on $(-b,a)$, thus $\Delta (t-t^{c,d})=0$. The maximum principle shows that $t=t^{c,d}$ everywhere.
In particular, $t_0=\frac{c}{3(a+b)}$ if $(c,d)$ solves the system $t^{c,d}_{-b}=t^{c,d}_a=0$. This yields $c=ab(a+2b)$. QED. |
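The result is easy to confirm by solving the linear system for $t_x$ directly (a small sketch for the illustrative values $a=3$, $b=2$):

```python
import numpy as np

a, b = 3, 2
xs = list(range(-b + 1, a))            # interior states -b < x < a
m = len(xs)
A = np.zeros((m, m))
rhs = np.zeros(m)
for i, x in enumerate(xs):
    # t_x - (t_{x-1} + t_{x+1})/2 = P_x(T_a < T_{-b}), with t_{-b}=t_a=0
    A[i, i] = 1.0
    if i > 0:
        A[i, i - 1] = -0.5
    if i < m - 1:
        A[i, i + 1] = -0.5
    rhs[i] = (x + b) / (a + b)
t = np.linalg.solve(A, rhs)
t0 = t[xs.index(0)]
print(t0, a * b * (a + 2 * b) / (3 * (a + b)))   # both 2.8
```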
Two separable 1-st order ODEs | $$y' = x^2/(2y+1),\quad y(0) = -1$$
$$y^2+y=\frac13x^3+c$$
$y(0)=-1\quad\implies (-1)^2+(-1)=0=c\quad\implies c=0$
$$y^2+y-\frac13x^3=0$$
$$y(x)=\frac12\left(-1-\sqrt{1+\frac43 x^3} \right)$$
$y(x)=\frac12\left(-1+\sqrt{1+\frac43 x^3} \right)$ is rejected since it doesn't satisfy the condition $y(0)=-1$.
$$y'=-\frac{y}{x}+1$$
$$xy'+y=x$$
$$(xy)'=x$$
$$xy=\frac12 x^2+c$$
$$y(x)=\frac{x}{2}+\frac{c}{x}$$
Note: in solving the homogeneous part of the ODE you obtained $y=\frac{C}{|x|}$. Since $C$ is an arbitrary constant, you can replace $C$ by $-C$ when $x<0$; thus
$y=\frac{C}{|x|}$ is equivalent to $y=\frac{C'}{x}$ with arbitrary $C'$, so you can drop the absolute value in the equation. |
Calculate area between 2 curves when you know only their data points | You find the numerical approximation to your integral.
For each $x$, find $|y_1 -y_2|$ and use Trapezoidal or Simpson rule to approximate the area. |
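A minimal sketch with synthetic data (sampled from $y_1=x^2$ and $y_2=x$ on $[0,1]$, where the exact area is $1/6$):

```python
import numpy as np

# Area between two curves known only at sample points, via the trapezoidal rule.
x = np.linspace(0.0, 1.0, 1001)
y1, y2 = x**2, x
d = np.abs(y1 - y2)
area = float(np.sum((d[1:] + d[:-1]) / 2 * np.diff(x)))
print(area)  # close to 1/6 = 0.1667
```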
Expansion question involving Taylor or Binomial series? | Using Taylor (which reduces to the binomial series for integer powers of a binomial and to the generalized binomial series for real powers), and limiting ourselves to the third order,
$$\left(1+\frac x2\right)^3-(1+3x)^{1/2}=1+\frac{3x}{2}+\frac{3x^2}{4}+\frac{x^3}{8}-\left(1+\frac{3x}2-\frac{9x^2}8+\frac{27x^3}{16}-\cdots\right)\\
=\frac{15x^2}8-\frac{25x^3}{16}+\cdots=\frac{15x^2}8\left(1-\frac{5x}6+\cdots\right)$$
and you are done. |
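The expansion can be confirmed symbolically (sympy is my choice of tool):

```python
import sympy as sp

x = sp.symbols('x')
expr = (1 + x / 2)**3 - sp.sqrt(1 + 3 * x)
print(sp.series(expr, x, 0, 4))  # 15*x**2/8 - 25*x**3/16 + O(x**4)
```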
$x^2 + y^5 = 2015^{17}$ | Hint: try it mod $11$......... |
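Carrying the hint out: a brute-force scan of the residues $x^2+y^5$ can take mod $11$ shows the required residue is never attained:

```python
# Which residues mod 11 are of the form x^2 + y^5?
squares = {x * x % 11 for x in range(11)}
fifths = {pow(y, 5, 11) for y in range(11)}
reachable = {(s + f) % 11 for s in squares for f in fifths}
target = pow(2015, 17, 11)
print(target, target in reachable)   # 7 False -> no integer solutions
```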
Manipulating/Simplifying Conjugate Convex Functions | The optimization variable is $x$. We have
\begin{align}
F^*(\zeta)=&\sup_x\left(\underbrace{\int_0^1\frac{1}{4}\zeta^2\,dt}_{\text{does not depend on }x}-\int_0^1(x-\frac{1}{2}\zeta)^2\,dt\right)=\int_0^1\frac{1}{4}\zeta^2\,dt+\sup_x\left(-\int_0^1(x-\frac{1}{2}\zeta)^2\,dt\right)=\\
=&\int_0^1\frac{1}{4}\zeta^2\,dt-\inf_x\left(\underbrace{\int_0^1(x-\frac{1}{2}\zeta)^2\,dt}_{\ge 0,\forall x\text{ and }=0,x=\zeta/2}\right)=
\int_0^1\frac{1}{4}\zeta^2\,dt-0.
\end{align}
Similarly in the second case. It is always convenient to complete the square for optimization. By doing that we get a minimization of a non-negative function with the trivial minimum being zero. |
Smallest $n >1$ such that $a^n = a$ in $\mathbb{Z}/m\mathbb{Z}$ for all $a$ | If $m = p_1p_2 \cdots p_k$, then the smallest such $n$ will be
$$
1 + \lambda(m) = 1 + \operatorname{lcm}(p_1 -1,\dots,p_k-1).
$$
To see that this is the case, it suffices to note that, by the Chinese Remainder theorem, we have
$$
a^n \equiv a \pmod {m} \iff a^n \equiv a \pmod {p_i} \text{ for } i = 1,\dots,k.
$$ |
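A quick numerical confirmation for the squarefree example $m=30=2\cdot3\cdot5$ (the function name is mine):

```python
from math import lcm

def smallest_exponent(primes):
    # smallest n > 1 with a^n = a (mod p1*...*pk) for all a,
    # namely 1 + lcm(p1 - 1, ..., pk - 1)
    return 1 + lcm(*(p - 1 for p in primes))

m, primes = 30, [2, 3, 5]
n = smallest_exponent(primes)
ok = all(pow(a, n, m) == a % m for a in range(m))
print(n, ok)   # 5 True
```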
What is the definition of a regular operator? | One definition I saw was: $T$ is invertible and its inverse is also bounded. The reference is this paper
http://www.liusb.com/pdf/amm.pdf |
How to construct a specific isomorphism between two finite fields with same order? | Since the polynomial $x^{121}-x$ has $121$ roots in both fields, and $x^2+1$ is a factor of that polynomial in $\mathbb Z_{11}[x]$, we know that $x^2+1$ splits in $\mathbb F_{11}[\sqrt{-3}]$. Just find a root of it in the second field, and you can send $\sqrt{-1}$ to that element. (Okay, finding the root is possibly non-trivial in general, but we know it can be done.)
In your particular case, though, $\sqrt{-3}=\pm 5\sqrt{-1}$, because $5^2=3\pmod {11}$. So $(2\sqrt{-3})^2=-1$. |
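A brute-force confirmation inside $\mathbb F_{11}[u]$ with $u^2=-3$: the elements squaring to $-1\equiv 10$ are exactly $\pm 2u$, as predicted:

```python
# Elements of F_121 written as a + b*u with u^2 = -3 (mod 11).
def sq(a, b):
    # (a + b u)^2 = (a^2 - 3 b^2) + (2 a b) u
    return ((a * a - 3 * b * b) % 11, (2 * a * b) % 11)

roots = [(a, b) for a in range(11) for b in range(11) if sq(a, b) == (10, 0)]
print(roots)  # [(0, 2), (0, 9)], i.e. +-2u
```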
Prove a topological basis | You attempt the impossible.
Counterexample:
$f: \mathbb R \longrightarrow \mathbb R$, $x \mapsto 0$.
$\{ U : f(U) \in B \} = \{ U : \{0\} \in B \} = \emptyset$.
Revise the theorem you are trying to prove.
Do not put corrections in remarks.
Edit your reply so it reads as a complete theorem. |
A counterexample to "a associates b", then "a strongly associates b" | Your proposed definition of $c$ is not continuous, and so is not an element of $C[0,3]$. The point is that $c$ needs to be $1$ on $[0,1]$ and $-1$ on $[2,3]$, so in order to be continuous it would need to pass through $0$ in between, and so it cannot be a unit. |
Rouche's Theorem of functions other than polynomial | Sure. Suppose, for instance, that you wish to prove that the function $f(z)=e^z+5z$ has one and only one zero when $|z|<1$. Apply Rouché's theorem: if $|z|=1$, then$$|e^z|=e^{\operatorname{Re} z}\leqslant e<5=|5z|.$$Therefore, $f(z)$ has as many zeros in the region $|z|<1$ as the function $z\mapsto5z$. That is, it has one and only one. |
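A numerical confirmation via the argument principle, $\frac{1}{2\pi i}\oint_{|z|=1} f'(z)/f(z)\,dz$, which counts the zeros inside (the quadrature scheme is my choice):

```python
import cmath

# Count zeros of f(z) = e^z + 5z inside |z| = 1 by discretizing the contour.
N = 20000
total = 0.0 + 0.0j
for k in range(N):
    t0, t1 = 2 * cmath.pi * k / N, 2 * cmath.pi * (k + 1) / N
    z0, z1 = cmath.exp(1j * t0), cmath.exp(1j * t1)
    zm = cmath.exp(1j * (t0 + t1) / 2)   # midpoint of the arc
    f = cmath.exp(zm) + 5 * zm
    fp = cmath.exp(zm) + 5
    total += fp / f * (z1 - z0)
count = total / (2j * cmath.pi)
print(round(count.real))   # 1
```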
derivative function of area function $1-(1-x)^s$ | Integrate within limits $0$ to $x$;
$ \int_0^x f(z)dz=\int_0^x s(1-t)^{s-1}dt$
$=\left(-(1-t)^s\right )_0^x$
$=-(1-x)^s+1$ |
If one periodic orbit is attracting then $T$ is not topologically transitive | Well, let's see. Let $U$ be a non-empty open set with the property that every point in $U$ converges to the attracting orbit. If the transformation is topologically transitive, then
$$\bigcup_{k=1}^{\infty} T^k(U)$$
is dense in the space. There must be a contradiction right about there, seeing as how one of the $T^k(U)$s must meet any neighborhood of a point in the other orbit.
Substitution and Codomain | Note that $$x=-1 \implies y=x+2=1>0$$ so the codomain of non-negative values does not cause any problem. |
How to solve $2^x+e^x=400$ | $$e^x(1+(\frac{2}{e})^x)=400$$
take the log
$$x+\log(1+(\frac{2}{e})^x)=\log 400$$
$$x=\log \frac{400}{1+(\frac{2}{e})^x}$$
then use the fixed-point iteration method, starting from $x=1$, to get the new value $x=5.4400198...$
Repeat this many times to get
$$x=5.837229692...$$ |
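The iteration is a few lines of code (the iteration count is my choice):

```python
import math

# Fixed-point iteration x <- log(400 / (1 + (2/e)**x)) from the answer.
x = 1.0
for _ in range(50):
    x = math.log(400 / (1 + (2 / math.e) ** x))
print(x, 2 ** x + math.e ** x)   # x ~ 5.8372..., and 2^x + e^x ~ 400
```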
Standalone proof of a conditional part of Lagrange’s Four-Square Theorem? | The problem is equivalent to determining whether the Fourier coefficient of
$$
\Theta(q)=\sum_{
\begin{array}{cc}
n_1,n_2,n_3,n_4\in\textbf{Z}\\
n_1+n_2+n_3+n_4=1
\end{array}
}q^{n_1^2+n_2^2+n_3^2+n_4^2}
$$
is always greater than or equal to $1$.
We can equivalently write $n_4=1-n_1-n_2-n_3$; replacing each $n_i$ by $-n_i$ (the sums run over all of $\textbf{Z}$, so nothing changes), we may instead take $n_4=1+n_1+n_2+n_3$.
Hence
$$
\sum^{4}_{j=1}n_j^2=
$$
$$
=1+2n_1+2n_2+2n_3+2n_1n_2+2n_1n_3+2n_2n_3+2n_1^2+2n_2^2+2n_3^2=
$$
$$
=(n_1+n_2+n_3)^2+2(n_1+n_2+n_3)+n_1^2+n_2^2+n_3^2+1
$$
Hence
$$
\Theta(q)=\sum_{
\begin{array}{cc}
n_1,n_2,n_3,t\in\textbf{Z}\\
n_1+n_2+n_3=t
\end{array}
}q^{(t+1)^2+n_1^2+n_2^2+n_3^2}=
$$
$$
=\sum^{\infty}_{t=-\infty}q^{(t+1)^2}\sum_{
\begin{array}{cc}
n_1,n_2,n_3\in\textbf{Z}\\
n_1+n_2+n_3=t
\end{array}
}q^{n_1^2+n_2^2+n_3^2}=
$$
$$
=\sum^{\infty}_{t=-\infty}q^{(t+1)^2}\sum^{\infty}_{s=0}q^s\sum_{
\begin{array}{cc}
n_1,n_2,n_3\in\textbf{Z}\\
n_1+n_2+n_3=t\\
n_1^2+n_2^2+n_3^2=s\\
\end{array}
}1.
$$
But the equations
$$
n_1+n_2+n_3=t\textrm{, }n_1^2+n_2^2+n_3^2=s
$$
have solutions
$$
n_1=\frac{1}{2}\left(-n_3+t-\sqrt{-3n_3^2+2s+2n_3t-t^2}\right)
$$
$$
n_2=\frac{1}{2}\left(-n_3+t+\sqrt{-3n_3^2+2s+2n_3t-t^2}\right)
$$
and
$$
n_1=\frac{1}{2}\left(-n_3+t+\sqrt{-3n_3^2+2s+2n_3t-t^2}\right)
$$
$$
n_2=\frac{1}{2}\left(-n_3+t-\sqrt{-3n_3^2+2s+2n_3t-t^2}\right)
$$
Hence if we set
$$
A(t,s)=\sum_{
\begin{array}{cc}
n_1+n_2+n_3=t\\
n_1^2+n_2^2+n_3^2=s\\
\end{array}
}1,
$$
then, if the discriminant $-3n_3^2+2s+2n_3t-t^2$ of the quadratic satisfied by $n_1$ and $n_2$ is zero, we get one double root ($n_1=n_2$); otherwise, if $-3n_3^2+2s+2n_3t-t^2=m^2$, $m\in\textbf{Z}^{*}$, two different roots. Hence if $s'=\left[\sqrt{s}\right]$ is the floor of the square root of $s\geq 0$, we get
$$
A(t,s)=\sum_{
\begin{array}{cc}
-s'\leq n_3\leq s'\\
-3n_3^2+2s+2n_3t-t^2=0
\end{array}
}1+
$$
$$
+2\sum_{
\begin{array}{cc}
-s'\leq n_3\leq s'\\
-3n_3^2+2s+2n_3t-t^2\neq 0
\end{array}
}X_{\textbf{Z}}\left(\frac{1}{2}\left(-n_3+t+\sqrt{-3n_3^2+2s+2n_3t-t^2}\right)\right),
$$
where $X_{\textbf{Z}}(n)$ is the characteristic function on integers.
By this way we get if $r_0(t,2s)$ is the number of representations of $2s$ in the form
$$
3x^2-2xt+t^2
$$
and $r_1(t,2s)$ the number of representations of $2s$ in the form
$$
3x^2-2xt+m^2+t^2,
$$
then $r(t,2s)=r_0(t,2s)+2r_1(t,2s)=A(t,s)$ is the Fourier coefficient of
$$
\Theta(q)=\sum^{\infty}_{t=-\infty}\sum^{\infty}_{s=0}A(t,s)q^{(t+1)^2+s}\textrm{, }|q|<1
$$
If $s-t=p$, then
$$
2s=3x^2-2xt+m^2+t^2\Leftrightarrow 3x^2-2tx+t^2-2t+m^2=2p.\tag 1
$$
We want to show that the above last equation always has integer solutions for every even non-negative integer $p\geq0$, and none for odd positive $p$. This will enable us to conclude that the corresponding Fourier coefficients of $\Theta(q)$ are non-zero iff $p$ is even. For $p$ odd we have no representations. For this we define:
$$
\phi(q)=\sum^{\infty}_{n,t=-\infty}q^{3n^2-2tn+t^2-2t}
$$
and assume the transformation $n\rightarrow an+bt$, $t\rightarrow c n+d t$, where $a=-1$, $b=1$, $c=1$, $d=-2$. Then $ad-bc=1$ and
$$
\phi(q)=2\sum^{\infty}_{n,t=-\infty}q^{n^2-2n+8 t^2+4t}.
$$
Hence
$$
\phi(q)=2q^{-1}\left(\sum^{\infty}_{n=-\infty}q^{n^2-2n+1}\right)\left(\sum^{\infty}_{t=-\infty}q^{8t^2+4t}\right)=
$$
$$
=2q^{-1}\theta_3(q)\sum^{\infty}_{n=-\infty}q^{8n^2+4n},
$$
where $\theta_3(q)=\sum^{\infty}_{n=-\infty}q^{n^2}$, $|q|<1$. Hence (1) have analog:
$$
\phi(q)\theta_3(q)=\sum_{n,k,m\in\textbf{Z}}q^{3n^2-2nk+k^2-k+m^2}=2q^{-1}\theta_3(q)^2\sum^{\infty}_{n=-\infty}q^{8n^2+4n}=
$$
$$
=2q^{-1}\theta_3(q)^2\phi_e(q^2),
$$
where $\phi_e(q)=\sum^{\infty}_{n=-\infty}q^{4n^2+2n}$. But
$$
\theta_2(q)=q^{1/4}\sum^{\infty}_{n=-\infty}q^{n^2+n}=q^{1/4}\sum^{\infty}_{n=-\infty}q^{4n^2+2n}+\sum^{\infty}_{n=-\infty}q^{(2n+1+1/2)^2}
$$
and
$$
\sum^{\infty}_{n=-\infty}q^{(2n+1+1/2)^2}=\sum^{\infty}_{n=-\infty}q^{(2n+3/2-2)^2}=
$$
$$
\sum^{\infty}_{n=-\infty}q^{(2n-1/2)^2}=q^{1/4}\sum^{\infty}_{n=-\infty}q^{4n^2-2n}.
$$
Hence
$$
\theta_2(q)=2q^{1/4}\sum^{\infty}_{n=-\infty}q^{4n^2+2n}
$$
and therefore
$$
\phi(q)\theta_3(q)=q^{-3/2}\theta_3(q)^2\theta_2(q^2).\tag 2
$$
Hence is equivalent to show that
$$
x^2+y^2+2z^2+2z=2p+1,\tag 3
$$
always has solutions when $p$ is an even non-negative integer and none when $p$ is odd and positive. Indeed: $2z^2+2z\equiv 0\pmod 4$, and when $p$ is odd we have no solutions since no integer of the form $4n+3$ has a representation as a sum of two squares. Hence we are left with the case $p$ even.
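The parity claim for equation (3) is easy to check by brute force over a small range (the search bounds are my choices):

```python
# Does x^2 + y^2 + 2z^2 + 2z = 2p + 1 have an integer solution?
def has_rep(p, bound=60):
    target = 2 * p + 1
    for z in range(-bound, bound):
        r = target - 2 * z * z - 2 * z
        if r < 0:
            continue
        for x in range(int(r ** 0.5) + 1):
            y2 = r - x * x
            s = int(y2 ** 0.5)
            if s * s == y2:
                return True
    return False

print([p for p in range(20) if has_rep(p)])  # only the even p
```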
The function $f(z)=\theta_3(q)^2\theta_2(q^2)$, $q=e(z)=e^{2\pi i z}$, $Im(z)>0$ is the associated theta function of (3) and is a modular form of weight $3/2$ in $\Gamma(8)$, i.e. it holds
$$
f\left(\frac{az+b}{cz+d}\right)=\epsilon_{c,d}(cz+d)^{3/2}f(z),
$$
when $Im(z)>0$ and $a,b,c,d$ integers with $ad-bc=1$, $a,d\equiv 1(8)$, $c,b\equiv 0(8)$. The function $\epsilon_{c,d}$ is such that $\epsilon_{c,d}^4=1$. But I don't have much experience to go further. |
What is the Mobius sum $\sum_{n=1}^\infty \frac{(-1)^{n+1}|\mu(n)|}{n^s}$? | Yes, it is true. Let $\mathbb{P}$ be the set of all positive primes. For $T\subseteq\mathbb{P}$, write $\prod(T)$ for the product of elements in $T$. Let $s\in\mathbb{C}$ with $\text{Re}(s)>1$. We have
$$D(s)=\sum_{T\subseteq \mathbb{P}}\,\frac{(-1)^{\prod(T)+1}}{\big(\prod(T)\big)^s}=\sum_{T\subseteq \mathbb{P}\setminus\{2\}}\left(\frac{1}{\big(\prod(T)\big)^s}-\frac{1}{\big(2\,\prod(T)\big)^s}\right)\,.$$
Hence,
$$D(s)=\left(1-\frac{1}{2^s}\right)\,\sum_{T\subseteq \mathbb{P}\setminus\{2\}}\,\frac{1}{\big(\prod(T)\big)^s}=\frac{2^s-1}{2^s+1}\,\sum_{T\subseteq \mathbb{P}\setminus\{2\}}\left(\frac{1}{\big(\prod(T)\big)^s}+\frac{1}{\big(2\,\prod(T)\big)^s}\right)\,.$$
Therefore,
$$D(s)=\frac{2^s-1}{2^s+1}\,\sum_{T\subseteq \mathbb{P}}\,\frac{1}{\big(\prod(T)\big)^s}=\frac{2^s-1}{2^s+1}\,B(s)\,.$$ |
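A numerical spot-check at $s=2$, where $B(2)=\zeta(2)/\zeta(4)$ and hence $D(2)=\frac{3}{5}\cdot\frac{\zeta(2)}{\zeta(4)}=\frac{9}{\pi^2}$ (the squarefree test below is a naive helper of mine):

```python
import math

def mobius_abs(n):
    # |mu(n)|: 1 if n is squarefree, else 0 (naive trial division)
    i, m = 2, n
    while i * i <= m:
        if m % (i * i) == 0:
            return 0
        if m % i == 0:
            m //= i
        i += 1
    return 1

s, N = 2, 20000
partial = sum((-1) ** (n + 1) * mobius_abs(n) / n ** s for n in range(1, N))
closed = (2 ** s - 1) / (2 ** s + 1) * (math.pi ** 2 / 6) / (math.pi ** 4 / 90)
print(partial, closed)  # both close to 9/pi^2 = 0.9119
```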
Is the solution to a driftless SDE with Lipschitz variation a martingale? | Unfortunately, this didn't work out in general (not saying that your claim is false, though). Here is what I was trying to do. Maybe it can help you come up with ideas; otherwise just never mind it.
Let $L^2(X)$ be the set of all predictable processes $H$ such that the process $(\int_0^t H^2_s d\langle X\rangle_s)_{t\geq 0}$ is integrable, where $(\langle X \rangle_t)_{t\geq 0}$ denotes the predictable quadratic variation process.
In the following $\mathcal{H}^2$ (resp. $\mathcal{H}_{\text{loc}}^2$) denotes the set of all square integrable martingales (resp. locally square integrable martingales). Then we have the following theorem from Jacod & Shiryaev:
Theorem 4.40(b). Let $(X_t)_{t\geq 0}\in \mathcal{H}_{\text{loc}}^2$. Then $(\int_0^t H_s dX_s)_{t\geq 0} \in \mathcal{H}^2$ if and only if $H\in L^2(X)$.
Obviously we are in the scope of this theorem as $(B_t)_{t\geq 0}\in\mathcal{H}^2$ with predictable quadratic variation $\langle B_t\rangle = t$, $t\geq 0$. Furthermore if $(X_t)_{t\geq 0}$ is the solution to the SDE of the original post, i.e.
$$
X_t=\int_0^t \sigma(X_s) dB_s,\quad t\geq 0,
$$
then $(X_t)$ is adapted and continuous and hence it is a predictable process. The theorem now yields that $(X_t)_{t\geq 0}\in \mathcal{H}^2$ if and only if
$$
E\left[\int_0^t \sigma(X_s)^2 d \langle B\rangle_s\right]=E\left[\int_0^t \sigma(X_s)^2 ds\right]=\int_0^t E\left[\sigma(X_s)^2\right] ds<\infty
$$
holds for all $t\geq 0$. Using the Lipschitz assumption we get a sufficient condition for $(X_t)$ being a square integrable martingale:
$$
\int_0^t E[|X_s|]ds<\infty, \quad t\geq 0.
$$ |
A result in the solution of wave equation | According to d'Alembert's formula we have
$$\lim_{x\to\pm\infty}u_t(x,t) = \frac{1}{2}\left( g'(\pm \infty)+g'(\pm\infty)\right) + \frac{1}{2}\left(h(\pm\infty)+h(\pm\infty)\right) = 0$$
by the compact support of $g$ and $h$. Using this, we can pick up where you left off and show that
$$E'(t) = \int_{-\infty}^\infty u_x u_{tx}\:dx + \int_{-\infty}^\infty u_t u_{xx}\:dx = u_xu_t\Bigr|_{-\infty}^\infty -\int_{-\infty}^\infty u_t u_{xx}\:dx+\int_{-\infty}^\infty u_t u_{xx}\:dx = 0 $$
hence $E(t)$ is constant. For the second part, use D'Alembert's formula to get equations for the first partial derivatives:
$$u_t(x,t) = \frac{1}{2}\left( g'(x+t)-g'(x-t)\right) + \frac{1}{2}\left(h(x+t)+h(x-t)\right)$$
$$u_x(x,t) = \frac{1}{2}\left( g'(x+t)+g'(x-t)\right) + \frac{1}{2}\left(h(x+t)-h(x-t)\right)$$
$$\frac{1}{2}\int_{-\infty}^\infty u_t^2 - u_x^2 \:dx = \frac{1}{2}\int_{-\infty}^\infty(g'(x+t)+h(x+t))\cdot(h(x-t)-g'(x-t))\:dx$$
The question, however, does not ask for a limiting behavior of $t$. It implies a discrete switching behavior that happens for some finite $t$.
Taking a look at the integral, notice that the first factor is supported where $x+t$ lies in the (compact) supports of $g'$ and $h$, and the second where $x-t$ does; so once $t$ is large enough the two factors have disjoint supports, and for any point $x$ in the domain of integration one of the terms in the integrand's product will be $0$.
Thus there exists a finite $T$ (the length of an interval containing the supports of $g'$ and $h$ will do) such that $\forall t > T$:
$$k(t) - p(t) = \frac{1}{2}\int_{-\infty}^\infty u_t^2 - u_x^2 \:dx = 0$$
Physically, the first part demonstrates the conservation of energy, taking $k(t)$ to be kinetic energy and $p(t)$ to be potential energy. The second part demonstrates the principle of least action since the quantity $k(t) - p(t)$ is called the Lagrangian. |
Derivative of a matrix: Outer product chain rule | Let $Y=1^T\Phi$, then the problem is to find the derivative of the function $\,L=\|Y\|_F^2$
Better yet, using the Frobenius product, the function can be written as $\,L=Y:Y$
Start by taking the differential
$$\eqalign{
dL &= 2\,Y:dY \cr
&= 2\,1^T\Phi:1^Td\Phi \cr
&= 2\,11^T\Phi:d\Phi \cr
&= 2\,11^T\Phi:\Phi'\circ d(XV) \cr
&= 2\,(11^T\Phi)\circ\Phi':d(XV) \cr
&= 2\,(11^T\Phi)\circ\Phi':X\,dV \cr
&= 2\,X^T[(11^T\Phi)\circ\Phi']:dV \cr
}$$
Since $dL = \big(\frac {\partial L} {\partial V}\big):dV\,\,$ the derivative must be
$$\eqalign{
\frac {\partial L} {\partial V} &= 2\,X^T[(11^T\Phi)\circ\Phi'] \cr
}$$
This is the same result as @legomygrego, but with the step-by-step details. The only property which might be new to some readers is the mutual commutativity of the Frobenius and Hadamard products
$$\eqalign{
A:B &= B:A \cr
A\circ B &= B\circ A \cr
A\circ B:C &= A:B\circ C \cr
}$$ |
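A finite-difference check of the result, taking $\phi=\tanh$ as an illustrative elementwise nonlinearity (so $\Phi'=1-\Phi\circ\Phi$):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(4, 3))
V = rng.normal(size=(3, 2))

def L(V):
    Phi = np.tanh(X @ V)
    y = np.ones((1, 4)) @ Phi            # Y = 1^T Phi
    return float(np.sum(y * y))          # L = ||Y||_F^2

Phi = np.tanh(X @ V)
Phip = 1 - Phi**2                        # phi' for tanh
ones = np.ones((4, 1))
grad = 2 * X.T @ ((ones @ (ones.T @ Phi)) * Phip)   # 2 X^T[(11^T Phi) o Phi']

# compare against central differences
num = np.zeros_like(V)
eps = 1e-6
for i in range(V.shape[0]):
    for j in range(V.shape[1]):
        E = np.zeros_like(V)
        E[i, j] = eps
        num[i, j] = (L(V + E) - L(V - E)) / (2 * eps)
print(np.max(np.abs(grad - num)))        # tiny
```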
Partial Derivatives by Level Curves | HINT
Observe that $P$ is on the level curve $f(x,y)=6$ and $f(x,y)$ decreases as $x$ increases, thus
$f_x=\frac{\partial f}{\partial x}<0$
Note also that, from $P$, the level curves become less dense as $x$ increases, thus
$f_{xx}=\frac{\partial^2 f}{\partial x^2}>0$ |
Confidence intervals and different methods of sampling | If both samples were random samples, then you would expect the means to be close and the confidence intervals to overlap. While the means need not be equal due to the small sample size, the confidence intervals---if computed correctly---account for the small sample size and should overlap even for small samples. Only in less than 5% of the cases would you have nonoverlapping confidence intervals if the sampling was random and the population means were the same.
Hence, yes, you can reject the hypothesis of random sampling at a lower than 5% level. Whether that means the sampling methods are flawed---I don't know, depends on what they are supposed to achieve. If they are supposed to give you a random sample, then you can indeed view at least one of them as flawed. |
If a joint cdf is increasing in each argument, then the pdf is strictly positive a.s.? | No, this is not true. We construct a counterexample in the plane. To make computations easy, we restrict ourselves to the unit square $U=[0,1]\times[0,1]$, but you will see that the idea extends to the entire plane easily. Define the set $N=[1/3,2/3]\times[1/3,2/3]$.
Consider the probability density function
$f(x,y)=\begin{cases} 0&(x,y)\not\in U,\\
0&(x,y)\in N,\\
9/8&(x,y)\in U \setminus N.
\end{cases}$
The associated marginal distribution functions are strictly increasing for arguments inside the unit interval. The idea easily extends to the whole space. For example, take a standard normally distributed vector in $\mathbb{R}^d$, set its density to zero for arguments in the unit ball and renormalize to get a density function again. |
Properties of ellipses and hyperbola related to matrix operation | You can start by verifying that the matrix
$$\mathbf A=\begin{pmatrix}5&4\\4&5\end{pmatrix}$$
corresponding to your conic $\mathbf x^\top\mathbf A\mathbf x=5x^2+8xy+5y^2=1$, where $\mathbf x=(x\quad y)^\top$ is the vector of variables, possesses the following eigendecomposition:
$$\mathbf A=\begin{pmatrix}5&4\\4&5\end{pmatrix}=\begin{pmatrix}\tfrac1{\sqrt 2}&-\tfrac1{\sqrt 2}\\\tfrac1{\sqrt 2}&\tfrac1{\sqrt 2}\end{pmatrix}\cdot\begin{pmatrix}9&\\&1\end{pmatrix}\cdot\begin{pmatrix}\tfrac1{\sqrt 2}&-\tfrac1{\sqrt 2}\\\tfrac1{\sqrt 2}&\tfrac1{\sqrt 2}\end{pmatrix}^\top=\mathbf S\mathbf D\mathbf S^\top$$
Note that the orthogonal matrix $\mathbf S$ is in fact the $45^\circ$ rotation matrix, and that in fact tells you how you obtain the coordinate system $(c_1,c_2)$ from the starting coordinate system $(x,y)$. Since it's easier to analyze an ellipse for, e.g., its principal axes, when the ellipse's two axes coincide with coordinate axes, you can think of it this way: you start with a sketch of a tilted ellipse, and to ease the task of "looking" at the ellipse, you tilt the paper it's drawn on by $45^\circ$, since it's easier to "look" at when the relevant axes are horizontal and vertical. |
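The decomposition is quick to verify numerically:

```python
import numpy as np

A = np.array([[5.0, 4.0], [4.0, 5.0]])
w, S = np.linalg.eigh(A)                      # eigenvalues ascending: 1, 9
print(w)                                      # [1. 9.]
print(np.allclose(S @ np.diag(w) @ S.T, A))   # True: A = S D S^T
```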
Unwarranted Factor of t in Particular Solution | Problems of the form $p(D) y = q(t) e^{at}$ where $p,q$ are polynomials, $D$ is the derivative operator, and $a$ is a complex number, have a rather algorithmic way to construct the particular solution. Namely:
Determine the order $k$ of the zero of $p$ at $t=a$.
Take a particular solution of the form $t^k r(t) e^{at}$ where $r$ is a polynomial of the same degree as $q$.
In your problem, $p(t)=t^2+3t+2$ has a zero of order $1$ at $t=-2$, so you take a solution of the form $t(\alpha t^2+\beta t+\gamma) e^{-2t}$.
Why does this extra $t^k$ come into play? A reason to start with is that if you tried to use a polynomial of the same degree as $q$, the lowest $k$ coefficients would be killed by $p(D)$, so you would have fewer free coefficients than there are coefficients in $q$. So you have to do something different.
But a followup question is, why does this not break entirely, i.e. why isn't $p(D) t^3 e^{at}$ a cubic times $e^{at}$ (as it would be if $p(a)$ were not zero)? You can see that by thinking of the product rule: the cubic part of $p(D) t^3 e^{at}$ is given by applying all the derivatives to the $e^{at}$, which results in a factor of $p(a)$, so that term disappears. Once any derivatives hit $t^3$, you're getting a quadratic.
A different answer for intuition is to look back at first order equations: consider that $y'-ay=e^{at}$ easily gives the factor of $t$ by using the method of integrating factors. This intuition can then be used for actual calculation by operator-factoring $p(D)$. In your example, $p(D)=(D+2)(D+1)$, so replacing $u=(D+1)y=y'+y$ results in $(D+2)u=t^2 e^{-2t}$ which can be solved by the method of integrating factors, and doing so gives the extra factor of $t$ "automatically". |
What is the period of this signal? | I presume that "j" here is the "imaginary unit", that I would call "i", the principal square root of -1 in the complex number system. By definition then $j^2= -1$. Then $j^3= j(j^2)= j(-1)= -j$ and $j^4= (j^2)(j^2)= (-1)(-1)= 1$. If we continue the same way we get $j^5= j$, $j^6= -1$, $j^7= -j$, $j^8= 1$, etc. Every power of j that is a multiple of 4 gets us back to 1 so the sequence of powers repeats every 4 places. It is in that sense that multiplication by j is "periodic with period 4". |
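The four-step cycle is easy to confirm numerically; here is a quick check in Python (where the imaginary unit is written `1j`):

```python
# Compute successive powers of j by repeated multiplication,
# which keeps the values exact (no floating-point log/exp involved).
j = 1j
powers = []
p = 1 + 0j
for n in range(8):
    powers.append(p)
    p = p * j

# The sequence 1, j, -1, -j repeats with period 4.
print(powers)
```

Comparing `powers[n]` with `powers[n+4]` shows the period-4 behavior directly.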
show that u(x, t) = $e^{At}$ + B$e^{-3t}$cos(x) + C$e^{Dt}$ cos(2x) | One way, likely the simplest one, is to substitute directly into the PDE and note that if
$$
u(x, t) = e^{At} + Be^{-3t}\cos(x) + Ce^{Dt}\cos(2x)
$$
then
$$
u_t = Ae^{At} -3Be^{-3t}\cos x + DCe^{Dt}\cos(2x)
$$
and
$$
u_{xx} = -Be^{-3t}\cos x-4Ce^{Dt}\cos(2x)
$$
so you can solve for $A,B,C,D$ which make $u_t=3u_{xx}$ hold.
UPDATE
So $u_t=3u_{xx}$ implies we have
$$
\require{cancel}
Ae^{At} -\cancel{3Be^{-3t}\cos x} + DCe^{Dt}\cos(2x)
= - \cancel{3Be^{-3t}\cos x} - 12Ce^{Dt}\cos(2x)
$$
and we must have $A=0,D=-12$ with any values for $B,C$. |
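A quick symbolic check of this conclusion (using SymPy, with $B,C$ left as free parameters and $A=0$, $D=-12$ substituted in):

```python
import sympy as sp

x, t, B, C = sp.symbols('x t B C')

# Candidate solution with A = 0 and D = -12.
u = 1 + B*sp.exp(-3*t)*sp.cos(x) + C*sp.exp(-12*t)*sp.cos(2*x)

# Residual of the equation u_t = 3 u_xx; it should vanish identically.
residual = sp.diff(u, t) - 3*sp.diff(u, x, 2)
print(sp.simplify(residual))  # 0
```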
Why is the "finitely many" quantifier not definable in First Order Logic? | We can define a formula $P_i$ that says "there are at most $i$ elements satisfying $P$"; note that $P_i$ implies $P_j$ whenever $i\le j$. Now, if the infinite disjunction of the $P_i$ were definable in FO, it would (by compactness) imply a disjunction of some finite subset of the $P_i$, hence it would imply $P_i$ for some $i$ (take $i$ to be the largest index occurring). That is not true, if $P$ can have (say) $i+1$ elements satisfying it.
Can one come to prove Cantor's theorem (existence of higher degree of infinities) FROM Russell's paradox? | If we assume Russell's paradox holds then we effectively assume that there is a contradiction in the system.
From a contradiction everything is provable. In particular Cantor's theorem.
(See also: http://xkcd.com/704/) |
Given a circle $\{(x,y) \in \mathbb{R^2} : x^2 + y^2 = 1\}$, show that taking away points from the circumference gives us a disconnected space. | The circle minus a single point is homeomorphic to an open interval $(0,1)$ in $\Bbb R$ (or equivalently, $\Bbb R$ itself). So removing $n\ge 2$ points from the circle is the same topologically as removing $n-1$ points from $\Bbb R$, which gives us a space with $n$ components (open intervals and open segments), thus disconnected.
To see the first homeomorphism, note that e.g. $C\setminus\{(1,0)\}$ is homeomorphic to $(0,1)\subseteq \Bbb R$ via $t \mapsto (\cos 2\pi t,\sin 2\pi t) \in C$ etc.
Find $ y''$ and $ y'''$ if $b^2x^2+a^2y^2=a^2b^2$ | The question is not very clear but it seems that you are writing $y_1$ for the derivative of $y$ and $y_2$ for the second derivative.
If I am correct in this, then
$$\eqalign{b^2x^2+a^2y^2=a^2b^2\quad
&\Rightarrow\quad 2b^2x+2a^2yy'=0\cr
&\Rightarrow\quad 2b^2+2a^2(yy''+(y')^2)=0\qquad(*)\cr
&\Rightarrow\quad y''=-\frac{b^2}{a^2y}-\frac{(a^2yy')^2}{a^4y^3}\cr
&\Rightarrow\quad y''=-\frac{a^2b^2y^2+b^4x^2}{a^4y^3}\cr
&\Rightarrow\quad y''=-\frac{b^2(a^2b^2)}{a^4y^3}=-\frac{b^4}{a^2y^3}\ .\cr}$$
You can use similar ideas to find $y'''$: it will probably be easiest to begin by differentiating $(*)$. |
Finding specific sets | $A \cap B = \{a,e,h,k\}$
$A \cap B \cap C = \{a,e\}$
$A \cup B \cup C = \{a,b,c,d,e,h,i,k,l,m\}$
$A \setminus B = \{c\}$
$A \setminus (B\setminus C) = A\setminus \{b,d,h,k,l\} = \{a,c,e\}$
Which of these collections of elements span $P_2(\mathbb{R})$ | You can simply study the linear systems that need to be solved in order to get the coefficients on each base.
(1) obviously works and (4) obviously doesn't. For instance, for (2) what you want to know is whether every polynomial $a_0+a_1 X + a_2 X^2$ can be written in the form $b_0+ b_1 X+ b_2(X^2-1)$, which amounts to solving the linear system
$$
\begin{cases} b_0 - b_2 = a_0\\ b_1 = a_1 \\b_2 = a_2\end{cases}\Leftrightarrow \begin{cases} b_0 = a_0 + a_2\\ b_1 = a_1 \\b_2 = a_2\end{cases}.
$$
Since the system has a single solution, you conclude that every second degree polynomial can be uniquely written as a linear combination of $1, X, X^2 -1$, which means that it forms a basis. |
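The same check can be run numerically: writing polynomials in coordinates with respect to $1, X, X^2$, the candidate set becomes the columns of a matrix, and spanning is equivalent to that matrix being invertible. A small sketch (NumPy):

```python
import numpy as np

# Columns: coordinates of 1, X, X^2 - 1 with respect to the basis 1, X, X^2.
M = np.array([[1, 0, -1],
              [0, 1,  0],
              [0, 0,  1]], dtype=float)

# Nonzero determinant means the three polynomials form a basis of P_2.
print(np.linalg.det(M))  # 1.0

# Coordinates of an arbitrary polynomial, say 2 + 3X + 5X^2:
a = np.array([2.0, 3.0, 5.0])
b = np.linalg.solve(M, a)
print(b)  # b0 = a0 + a2 = 7, b1 = a1 = 3, b2 = a2 = 5
```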
Practical solving of a least squares problem - How to find the best fit? | Here is the answer. Much of the mathematics written about linear regression is difficult and poorly explained; this answer is the easiest one and does the job!
The estimated line of best fit
The line is about
$$y = 0.25x + 0.4$$
The cost function
The code:
%% Random values
R = 0.5:0.5:50;
Y = R.*rand(1, length(R));
figure(1)
plot(Y);
% Set up vectors
V = zeros(1, length(R)*length(R)); % Cost function vector
x = linspace(0, 100, length(R)); % x-axis vector
N = linspace(0, 0.6, length(R)); % Values for k and m
L = 0; % Initial
Vlast = 1e+6; % Initial
for i = 1:length(R)
for j = 1:length(R)
% Compute the estimated Y
Yh = N(i).*x + N(j); % N(i) = k and N(j) = m
% Count
L = L + 1;
% Compute the cost function
V(L) = sum((Y - Yh).^2);
% Check if the last cost value was higher
if V(L) < Vlast
Vlast = V(L); % Overwrite
k = N(i); % Remember
m = N(j); % Remember
end
end
end
% Plot the cost function
figure(2)
plot(1:length(V), V)
% Plot the estimated line
figure(1)
hold on
plot(x, k.*x + m) |
Independent Poisson processes: Race to 5 | You must assume that the two Poisson processes are independent. Goal-scoring (without regard to which team scores) is then a Poisson process $N(t) = N_1(t) + N_2(t)$ with rate $1+2=3$, and each individual goal that is scored has probability $1/3$ of coming from team 1. |
Quotient topology, topological spaces | Presumably you want $\bar{f}: (X/\sim) \to Y$ to be your homeomorphism, correct? One way to define $\bar{f}$ is to let $\bar{f}([p]) = f(p)$ where $p \in X$ but $[p] \in X/\sim$. Now if $U \subseteq Y$ is open, then to prove $\bar{f}$ continuous we want $\bar{f}^{-1}(U)$ to be open in $X/\sim$. By definition of the quotient topology this holds iff $q^{-1}(\bar{f}^{-1}(U))$ is open in $X$; but $q^{-1}(\bar{f}^{-1}(U)) = f^{-1}(U)$, which is open since $f$ is continuous.
Properties of analytic function in a disc | The first line is the Schwarz integral formula, and it follows directly from the Poisson integral formula: The real part is the Poisson integral of the real part of $f$ on the circle, and the integral is analytic as a function of $z$, so the difference of $f$ and the integral is an imaginary constant. Evaluating both sides for $z=0$ you find that it has to be the imaginary part of $f(0)$.
In your line of inequalities the term in the middle should have argument $z$, not $0$. After that, the inequalities follow easily from $$ \frac{1-|z|}{1+|z|} \le \left|\frac{e^{it}+z}{e^{it}-z}\right| \le \frac{1+|z|}{1-|z|}$$ and $$u(0) = \frac1{2\pi} \int_0^{2\pi} u(e^{it}) \, dt,$$
where $u$ is the real part of $f$. |
A dilation is a Mobius transformations | Apologies for my previous answer - it was incorrect because I misunderstood what you meant by "inversion".
Identify a point $(x,y) \in \mathbb R^2$ with the point $z = x+iy \in \mathbb C$. An inversion with respect to a circle of radius $r$ sends
$$ z \mapsto \frac {r^2} {\bar z}.$$
If you compose an inversion with radius $r_1$ with an inversion with radius $r_2$, you get the map
$$ z \mapsto \frac {r_2^2}{r_1^2} z. $$
To get a dilation by factor $a \in \mathbb R_{>0}$, choose $r_1$ and $r_2$ so that
$$ a = \frac {r_2^2}{r_1^2}.$$ |
Confused with regard to converting English sentences to If-Then Conditional Logic | Please know that 'if' expresses a sufficient condition, while 'only if' expresses a necessary condition.
For example, we can say 'you are male if you are a bachelor': knowing that you are a bachelor is sufficient for us to tell that you are male. This translates as $B \rightarrow M$
We can also say: 'you are a bachelor only if you are male': being male is necessary for one to be a bachelor. So this one does not translate to $M \rightarrow B$ (since that would be saying that one is a bachelor as soon as one is male ... which is clearly not true), but we can express this as $\neg M \rightarrow \neg B$: if one is not a male, then one is not a bachelor .... which by contraposition is equivalent to $B \rightarrow M$.
In sum:
'$P$ if $Q$' translates as $Q \rightarrow P$
'$P$ only if $Q$' translates as $P \rightarrow Q$
Of course, to add to the confusion, in English we can switch the order in which we mention the antecedent and consequent. That is, instead of 'you are male if you are a bachelor', we can also say 'if you are a bachelor, then you are male'. Likewise, to day that 'you are a bachelor only if you are male' is saying the same thing as 'only if you are male, can you be a bachelor'. So, we also have:
'If $P$ then $Q$' translates as $P \rightarrow Q$
'Only if $P$, $Q$' translates as $Q \rightarrow P$
So, just remembering these four patterns will be of help.
However, I would also strongly recommend to always do the following: once you have written down the symbolic expression that you suspect is correct, translate that expression back into English, and see if that indeed captures the idea of the original English expression. Translation from English to logic is not a one-way street!
Series and characteristic function | This is a plot of the function
$$\sum_{n=1}^{+\infty}\chi_{[n,+\infty)}(x)$$ |
Derivative of distributions inner product | The derivative $\frac{\partial f}{\partial s}$ of $f(s) = u(s)^\top v(s)$ can be presented either as a row vector or a column vector. If it is presented as a row vector, then we have
$$
\frac{\partial f}{\partial s} = \pmatrix{\left[\frac{\partial u}{\partial s_1}\right]^Tv + u^T\left[\frac{\partial v}{\partial s_1}\right] &
\cdots &
\left[\frac{\partial u}{\partial s_d}\right]^Tv + u^T\left[\frac{\partial v}{\partial s_d}\right]}.
$$
We note that
$$
\left[\frac{\partial u}{\partial s_k}\right]^Tv + u^T\left[\frac{\partial v}{\partial s_k}\right] = \sum_{i=1}^d \left(\frac{\partial u_i}{\partial s_k}v_i + u_i\frac{\partial v_i}{\partial s_k} \right).
$$
This can also be written as
$$
\frac{\partial f}{\partial s} = \left[\frac{\partial u}{\partial s}\right]^Tv + u^T\left[\frac{\partial v}{\partial s}\right],
$$
where $\frac{\partial u}{\partial s}$ denotes the matrix
$$
\frac{\partial u}{\partial s} = \pmatrix{\frac{\partial u}{\partial s_1} & \cdots & \frac{\partial u}{\partial s_d}}.
$$ |
Query description into mathematical notation | Check this. Look for the best (most efficient and/or clearest) way to construct a list of the 10 biggest values from the set of frequencies and name it, for example, $F_{10}$. The concepts of max and min may be essential, as well as order relations between elements through their indices.
For example, you can define the set of frequencies in some way; it depends on your problem and context.
Then you define a subset of it and, for example, require that the maximum of the complement be less than or equal to the minimum of your subset.
A similar idea is something like this:
$$F=\{f_i\}\\
F_{10}\subset F: (|F_{10}|=10) \land (\max(F-F_{10}) \le \min F_{10})$$
Because $F$ is a variable set or a family of sets, you can add some kind of index to indicate the moment you are referring to, if you need it, as in a sequence.
For your query 1 you only need to change $F$ to $R$ and $F_{10}$ to whatever you want to denote the set of top 10 frequencies.
Gelfand transform on disk algebra | First things first: Your proof is correct.
Depending on the target audience it may be necessary to elaborate a few points. What it means that the disk algebra is generated by $1$ and $z$, and how that together with $\Gamma(1) = 1$ and $\Gamma(z) = z$ implies that $\Gamma$ is the inclusion map might not be evident to everybody. Or it may not be evident to an examiner/grader that you know that, if you are tasked with proving it in an exam or homework.
On the other hand, things like the unitality of characters need not be mentioned if it is clear that they are well-known to the audience (and the presenter of the proof).
But if no special circumstances demand extending or shortening it, it's a good proof. |
Prove that a set is not strictly convex | A set $S$ is called strictly convex if it is convex and furthermore all points $\lambda f + (1 - \lambda)g $ $\lambda \in (0,1)$ are in the interior of the set, for all $f,g \in S$.
The set can't be strictly convex because it is not even convex. As an example, pick $f(x)=x$, $g(x)=1-x$ on $[0,1]$ and $\lambda=1/2$. Clearly $f$ and $g$ belong to the set. But $||\lambda g +(1-\lambda)f||_{\infty}= ||(1-x)/2+x/2||_{\infty}=1/2$, which shows that $\lambda g +(1-\lambda)f$ is not in the set, so the set can't be convex, by definition.
Using an appropriate form of the chain rule, find all (partial) derivatives of the 1st order of ... | We are dealing with the composite function $f(g(x,y))=\ln(1+\sqrt{x^2+y^2})$. Thus,
$$\dfrac{\partial f(g(x,y))}{\partial x}=\dfrac{\partial \ln(1+\sqrt{x^2+y^2})}{\partial x}=\\
\dfrac{\partial \ln(1+\sqrt{x^2+y^2})}{\partial (1+\sqrt{x^2+y^2})}\dfrac{\partial (1+\sqrt{x^2+y^2})}{\partial x}=\dfrac{x}{(1+\sqrt{x^2+y^2})\sqrt{x^2+y^2}}$$
Also,
$$\dfrac{\partial f(g(x,y))}{\partial y}=\dfrac{\partial \ln(1+\sqrt{x^2+y^2})}{\partial (1+\sqrt{x^2+y^2})}\dfrac{\partial (1+\sqrt{x^2+y^2})}{\partial y}=\dfrac{y}{(1+\sqrt{x^2+y^2})\sqrt{x^2+y^2}}$$ |
How to find the number of perfect matchings in complete graphs? | It's just the number of ways of partitioning the six vertices into three sets of two vertices each, right? So that's 15; vertex 1 can go with any of the 5 others, then choose one of the 4 remaining, it can go with any of three others, then there are no more choices to make. $5\times3=15$. |
how to find a polynomial equation with roots and canonical form | Plug the roots into the canonical form:
$$a(0-2)^2+3=a(4-2)^2+3=0$$
and solve for $a$.
Even though there are two conditions and only a single parameter, both conditions give the same equation, so there is a solution.
$(a^{n},b^{n})=(a,b)^{n}$ and $[a^{n},b^{n}]=[a,b]^{n}$? | If $p$ is a prime and $p^t$ is the highest power of $p$ dividing $a$, then $p^{tn}$ is the highest power dividing $a^n$.
Therefore $\text{gcd}(a^n,b^n)=\text{gcd}(a,b)^n$.
For the other one, start with
$$
\text{lcm}(a,b)=\frac{ab}{\text{gcd}(a,b)}
$$
and take $n$-th powers of both sides.
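A quick numerical sanity check of both identities (here $(a,b)$ denotes the gcd and $[a,b]$ the lcm), sketched in Python:

```python
from math import gcd

def lcm(a, b):
    # lcm via the standard identity lcm(a, b) * gcd(a, b) = a * b
    return a * b // gcd(a, b)

for a, b, n in [(12, 18, 3), (8, 20, 2), (35, 49, 4)]:
    assert gcd(a**n, b**n) == gcd(a, b)**n
    assert lcm(a**n, b**n) == lcm(a, b)**n
print("both identities hold on the samples")
```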
Integer solutions using stars and bars with bounds on integers | The term $c$ has been introduced to the expression as a dummy variable, which saves you a lot of work in the long run.
Via the introduction of this new dummy variable '$c$', the below problem has been transformed from
$$\sum_{i=0}^6 a_i \leq 100$$
to
$$\sum_{i=0}^6 a_i + c = 100$$
Why this works out so well is that each non-negative value taken up by this dummy variable $c$ corresponds to a particular case of the inequality. For example, when $c=10$, $\sum_{i=0}^6 a_i = 90$. To reiterate, taking this extra variable saves you the effort of having to calculate so many different cases.
The number of non-negative integer solutions of the above equation can be calculated easily via the well-known stars and bars formula. I think you can take it from here!
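The bookkeeping can be verified by brute force on a smaller instance (the bound 10 and three variables below are just illustrative): the number of non-negative solutions of $a_1+a_2+a_3 \le 10$ equals the number of solutions of $a_1+a_2+a_3+c=10$, which stars and bars counts as $\binom{13}{3}$.

```python
from math import comb

N, k = 10, 3  # bound and number of a_i variables (kept small for brute force)

# Direct count of non-negative solutions of a1 + a2 + a3 <= N.
brute = sum(1
            for a1 in range(N + 1)
            for a2 in range(N + 1 - a1)
            for a3 in range(N + 1 - a1 - a2))

# Stars and bars on a1 + a2 + a3 + c = N  (k + 1 variables).
formula = comb(N + k, k)
print(brute, formula)  # 286 286
```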
L'Hospital's Rule Question. | The ‘$=x$’ is getting ahead of yourself a bit. Let $$L=\lim_{n\to\infty}\left(1+\frac{x}n\right)^n\;,$$ and take the logarithm to get
$$\begin{align*}
\ln L&=\ln\lim_{n\to\infty}\left(1+\frac{x}n\right)^n\\
&=\lim_{n\to\infty}\ln\left(1+\frac{x}n\right)^n\\
&=\lim_{n\to\infty}n\ln\left(1+\frac{x}n\right)\;,
\end{align*}$$
where the interchange of the log and the limit is justified by the fact that the logarithm function is continuous.
This limit is now a so-called $\infty\cdot 0$ indeterminate form, and there is a standard approach to those: move one of the factors into the denominator. In this case we have
$$\ln L=\lim_{n\to\infty}\frac{\ln\left(1+\frac{x}n\right)}{1/n}\;,$$
a limit in which both numerator and denominator approach $0$ as $n\to\infty$. Now you can apply l’Hospital’s rule.
Don’t forget that at this point you’re actually finding $\ln L$, not $L$, so you’ll have to exponentiate to get $L$. |
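SymPy can confirm the whole computation in one line (shown here with $x$ assumed positive for simplicity):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
n = sp.symbols('n', positive=True)

# ln L = lim n*ln(1 + x/n); l'Hospital evaluates this to x, so L = e^x.
ln_L = sp.limit(n * sp.log(1 + x/n), n, sp.oo)
print(ln_L)          # x
print(sp.exp(ln_L))  # exp(x)
```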
Inverse Fourier Transforms | It will be simpler if you express $\sin(2 \pi \nu T) \cos (10 \pi \nu T) $ as $$\sin(2 \pi \nu T) \cos (10 \pi \nu T) =\frac{1}{2}\left(\sin(12\pi\nu T)-\sin(8\pi\nu T)\right)$$
so that you can write the function as a difference between two sinc functions.
As for the scaling factor, just multiply numerator and denominator by $2\pi$, so that you have
$$\DeclareMathOperator{\sinc}{sinc}\frac{\sin(2\pi\nu T)}{\nu T}=\frac{2\pi\sin(2\pi\nu T)}{2\pi\nu T}=2\pi \sinc(2\pi\nu T)$$ |
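Both identities are easy to spot-check numerically:

```python
import math

def sinc(u):
    """Unnormalized sinc: sin(u)/u, with the removable singularity filled in."""
    return math.sin(u) / u if u != 0 else 1.0

T = 0.7
for nu in (0.11, 0.37, 1.9):
    # product-to-sum identity
    lhs = math.sin(2*math.pi*nu*T) * math.cos(10*math.pi*nu*T)
    rhs = 0.5 * (math.sin(12*math.pi*nu*T) - math.sin(8*math.pi*nu*T))
    assert math.isclose(lhs, rhs, abs_tol=1e-12)

    # scaling identity: sin(2*pi*nu*T)/(nu*T) == 2*pi*sinc(2*pi*nu*T)
    assert math.isclose(math.sin(2*math.pi*nu*T) / (nu*T),
                        2*math.pi*sinc(2*math.pi*nu*T), abs_tol=1e-12)
print("identities check out")
```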
Sum that is 0 for Eulerian graphs | if G is Eulerian - as in you have one cyclic route that goes over all the edges, then the degree of each vertex is even so $\prod _{ij\in E} x_i x_j = \prod_{i\in [n]} x_i ^{deg(i)} = 1$ and so the total sum is $2^n$.
if by eulerian you refer to a route that is not a circle then you have exactly two vertices with odd degree so $\prod _{ij\in E} x_i x_j = x_s x_t$ and it is easy to see that $\sum_x x_s x_t = 0$ |
Find tangent line of $f(x)=e^x+2$ that passes through origin | The tangent at $t$ can be expressed as
\begin{equation}
y = f'(t)(x-t) + f(t) = e^t(x-t) + e^t + 2.
\end{equation}
Now this should pass through the origin, hence we have
\begin{equation}
-t e^t + e^t + 2 = 0.
\end{equation}
In general, this does not have analytic solution, hence we need to solve this using some iterative numerical solution. I used a numerical method, which gives the answer: $1.4630555$.
There are hundreds of ways to solve this. Here's one method:
the following Python code will give the answer.
import scipy.optimize
from numpy import exp

def foo(x):
    return -x * exp(x) + exp(x) + 2

print(scipy.optimize.anderson(foo, 0.0))
You can solve the same problem using other methods in Python, or you can solve this using Matlab, too. (And there are still other methods!) |
Alternative way (quicker) of looking into a probability problem | Here is how I would do it.
a) Any chip is equally likely to be selected, so the probability of choosing a defective one is $${20\over100}$$ the number of defective chips divided by the total number of chips.
b) Imagine that a defective chip has been chosen. Then there are $19$ defective chips and $99$ total chips remaining, so the conditional probability is $${19\over99}$$ for the same reason as in part a).
c) The probability that both are defective is the product of the probabilities in parts a) and b), or $${20\cdot19\over100\cdot99}$$ As a check, this is the same answer we would get if we divided the ${20\choose2}$ ways to choose two defective chips by the ${100\choose2}$ ways to choose two chips, which is the way I would do part c) if parts a) and b) hadn't been asked.
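The check mentioned in part c) can be carried out exactly with rational arithmetic:

```python
from fractions import Fraction
from math import comb

sequential = Fraction(20, 100) * Fraction(19, 99)    # parts a) and b) multiplied
combinatorial = Fraction(comb(20, 2), comb(100, 2))  # defective pairs / all pairs
print(sequential, combinatorial)  # 19/495 19/495
```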
Is the space C[0,1] connected? | Let $f$ be a continuous function defined on $[0,1]$. Then $t\mapsto tf$, $t\in [0,1]$, defines a path between the $0$ function and $f$, so $C([0,1])$ is path-connected, hence connected.
Probability distribution of the product of two dependent variables | $P(Z=XY)$ is just a look-up. No multiplication involved. You will have to deal with multiple $X,Y$ pairs which give the same $Z$ in which case, the probabilities are additive.
$$P(Z=63)=0.09$$
A good way of testing whether ANYTHING is a valid probability distribution is to see if it integrates/sums to 1.
Is my translation of the statement into notation correct? | The statement is this:
$$\forall x\in{\Bbb Z}\;\forall y\in{\Bbb Z}\;\exists n\in{\Bbb Z} \; [x>0\Rightarrow nx>y]$$ |
Describing two-step iteration in terms of complete Boolean algebras. | This is simply definition by cases of names.
$\{b,-b\}$ is a maximal antichain (or a partition of $1$ in terms of Boolean algebras). So we can define a name whose values are determined by that antichain. |
How can I calculate sum of multichoose multiplied by its argument? | Suppose that you want to pick a team of $n+1$ people from a group of $2n$ candidates, and you want to designate one member of the team as the captain. No two candidates are the same height, and the tallest member of the team is not allowed to be the captain. You can choose the team in $\binom{2n}{n+1}$ ways, and you then have $n$ choices for the captain, so you can complete the selection in $n\binom{2n}{n+1}$ ways.
Let the candidates be $c_1,c_2,\ldots,c_{2n}$, arranged in order of increasing height. Clearly a team of $n+1$ people will include at least one person from the set $U=\{c_{n+1},c_{n+2},\ldots,c_{2n}\}$; in particular, the tallest member of the team will be from $U$. For $k=1,\ldots,n$, how many ways are there to select the team so that $c_{n+k}$ is its tallest member?
We can do it by selecting $n-1$ team members from $\{c_1,c_2,\ldots,c_{n-1+k}\}$, which can be done in $$\binom{n-1+k}{n-1}=\binom{n-1+k}k$$ ways, and then selecting one of the remaining $k$ members of the set $\{c_1,c_2,\ldots,c_{n-1+k}\}$ to be the captain. This last step can be done in $k$ ways, so there are $k\binom{n-1+k}k$ ways to select a team whose tallest member is $c_{n+k}$. Since $k$ can be any element of $\{1,2,\ldots,n\}$, it follows that
$$\sum_{k=1}^nk\binom{n-1+k}k=n\binom{2n}{n+1}\;.$$ |
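The identity is easy to verify for small $n$:

```python
from math import comb

# Check sum_{k=1}^{n} k * C(n-1+k, k) == n * C(2n, n+1) for small n.
for n in range(1, 12):
    lhs = sum(k * comb(n - 1 + k, k) for k in range(1, n + 1))
    rhs = n * comb(2 * n, n + 1)
    assert lhs == rhs
print("identity verified for n = 1..11")
```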
Show that any closed ball in $E$ is entirely contained in at least one set $U_i$ | $A$ is not necessarily closed, but that does not matter: any sequence has a cluster point in the compact metric space $E$ (which is all we need).
Using your $r$ we indeed get infinitely many $p_i$ that are in $B(p, \frac{r}{2})$ so for some large enough index $n$ we indeed have that $B(p, \frac{1}{n}) \subseteq B(p, \frac{r}{2})$. But we need (for a contradiction) to see that $\overline{B(p_n, \frac{1}{n})}$ (with centre $p_n$, not $p$!) is a subset of $U_j$. For this it suffices that $B(p_n, \frac{1}{n}) \subseteq B(p, \frac{r}{2})$ because then the closure is a subset of the $\{x: d(p,x) \le \frac{r}{2}\}$ (the closed ball), which is a subset of $B(p, r) \subseteq U_j$, which would give the contradiction.
So you need to argue that there exists $n$ such that $B(p_n, \frac{1}{n}) \subset B(p, \frac{r}{2})$. Can you do that?
Extra hint: Choose $n$ so large that both $d(p_n, p) < \frac{r}{4}$ and $\frac{1}{n} < \frac{r}{4}$. The triangle inequality then shows that $B(p_n, \frac{1}{n}) \subseteq B(p, \frac{r}{2})$.
How many ways are there for $n$ people to sit on a bench of length $n$, assuming John must see Eric on his left | We can regard the bench as a sequence of chairs. First, we fix which two seats John and Eric will occupy; choosing two of the $n$ seats can be done in ${n\choose 2}$ ways. Given those two seats, there is only one way to place John and Eric in them, since one must sit to the left of the other, hence the factor $1$. Finally, we seat the remaining $(n-2)$ people in the remaining $(n-2)$ seats, which can be done in $(n-2)!$ different ways. Therefore, the total number of ways is, as you remarked,
$$
{n\choose 2 }\times 1\times (n-2)!
$$ |
What are the restrictions on infinite sums of measures? | In probability you are allowed to have distinct members of a countable set carry positive probability, but it becomes a little more tricky for uncountable sets, such as when defining a probability measure on the interval $(0,1)$. As you may know, mathematics takes the perspective of thinking of the probability of a set as a volume or density of that set. So the way I like to think of why $P(X=a)=0$ for $a\in(0,1)$, when $(0,1)$ is the support of $X$, is this: if you look at the area of a square in the 2D plane, a single point has no area, but regions do.
Also, you can define a probability measure on all the natural numbers in which each number has a positive probability; the Poisson distribution is an example.
I don't know if this exactly what you are looking for. |
Change of variables PDE | Some pointers
$$
\partial_t = \frac{dz}{dt}\partial_z = \frac{1}{B}\partial_z\\
\partial_x = \frac{dy}{dx}\partial_y = \frac{1}{A}\partial_y\\
\partial_{xx} = \frac{1}{A}\partial_y\frac{1}{A}\partial_y = \frac{1}{A^2}\partial_{yy}
$$
therefore your equation becomes
$$
C\frac{1}{B}\partial_z v(x,t) + g(v(x,t)-E) = \frac{a}{2R}\frac{1}{A^2}\partial_{yy} v(x,t)
$$
using $v = (w+1)E$ we find
$$
C\frac{1}{B}\partial_z v(x,t) + g((w+1)E-E) = \frac{CE}{B}\partial_z w + gEw = \frac{a}{2R}\frac{1}{A^2}\partial_{yy} v = \frac{a}{2A^2R}E\partial_{yy}w
$$
thus we get
$$
\frac{CE}{B}\partial_z w + gEw =\frac{a}{2A^2R}E\partial_{yy}w
$$
we can factor out $E$ and divide through by $C/B$ we find
$$
\partial_z w + \frac{gB}{C}w = \frac{aB}{2A^2RC}\partial_{yy}w
$$
so to achieve your desired result you need
$$
\frac{gB}{C} = 1\implies B = \frac{C}{g}\\
\frac{aB}{2A^2RC} = \frac{aC}{2gA^2RC} = \frac{a}{2gA^2R} = 1\implies A = \pm \sqrt{\frac{a}{2gR}}
$$ |
Relationship between primes, right triangles and homogeneous polynomials | For a primitive Pythagorean triangle, there are integers $m,n > 0$ with $\gcd(m,n) = 1$ and $m,n$ not both odd, so that $x+y$ equals
$$ m^2 + 2mn - n^2 $$
This is an indefinite binary quadratic form of discriminant $8,$ which has class number one. Thus the form is equivalent to $$ u^2 - 2 v^2, $$
and any prime $$ q \equiv 3,5 \pmod 8 $$
is not able to divide such a form without dividing both variables. The special case of prime $2$ is taken care of by the condition that $m+n$ be odd. |
Does the axes of $\mathbb R^n$ have the fixed-point property? | Let $f : A \to A$, $f(x_1,x_2) = (x_1+1,0)$; the image lies on the first axis, so $f$ indeed maps $A$ into itself, and it is continuous. This map does not have a fixed point.
$r^{th}$ factorial of Hypergeometric Distribution | Let us change notation. Let $a-r=b$ and $(N-r)-(a-r)=g$. Let $n-r=p$. Then the identity you are trying to prove can be rewritten as
$$\sum_{k=0}^p \binom{b}{k}\binom{g}{p-k}\overset{?}{=} \binom{b+g}{p}.\tag{$1$}$$
There are $b$ boys and $g$ girls in a class. In how many ways can we form a committee of $p$ people? Obviously $\binom{b+g}{p}$.
Let us count the number of committees another way. We could choose $k=0$ boys and $p-0$ girls; or we could choose $k=1$ boys and $p-1$ girls. Or we could choose $2$ boys and $p-2$ girls, and so on. The sum on the left of $(1)$ represents that way of counting the committees. The two ways of counting must give the same answer, and hence the desired identity holds.
Remark: One might worry about what happens if, for example, there is a total of $5$ boys in the class, and $30$ girls, and we want a committee of say $8$ people. Then we have two choices. We can restrict $k$ to run from $0$ to $5$. That is essentially the notational approach taken in the material quoted. Or else we can define $\binom{x}{y}$ to be $0$ if $x$ and $y$ are non-negative integers such that $x\lt y$. With either approach, the identity of $(1)$ is correct for all possible $b$, $g$, and $p$. |
Cauchyness vs. Double Limits | If $(x_\alpha)_{\alpha\in A}$ is a Cauchy net, for every $\varepsilon > 0$, there is an $\alpha(\varepsilon)$ such that $\beta,\gamma \geqslant \alpha(\varepsilon) \Rightarrow d(x_\beta, x_\gamma) < \varepsilon$. Then for each $\beta$, the net $\gamma \mapsto d(x_\beta,x_\gamma)$ is a Cauchy net in $\mathbb{R}$, hence its limit exists. Also the net $\beta \mapsto \lim\limits_{\gamma} d(x_\beta,x_\gamma)$ is a Cauchy net in $\mathbb{R}$ hence its limit exists as well: For $\beta \geqslant \alpha(\varepsilon)$, we have $0 \leqslant \lim\limits_\gamma d(x_\beta,x_\gamma) \leqslant \varepsilon$, hence
$$\lim_\beta\lim_\gamma d(x_\beta,x_\gamma) = 0.$$
Conversely, if $(x_\alpha)_{\alpha\in A}$ is a net such that for every (large enough) $\beta\in A$ the limit $\lim\limits_\gamma d(x_\beta,x_\gamma)$ exists, and $\lim\limits_\beta \lim\limits_\gamma d(x_\beta, x_\gamma) = 0$, then given $\varepsilon > 0$, there is a $\beta(\varepsilon)$ such for all $\beta \geqslant \beta(\varepsilon)$ we have $\lim\limits_\gamma d(x_\beta,x_\gamma) < \varepsilon/2$. In particular, $\lim\limits_\gamma d(x_{\beta(\varepsilon)},x_\gamma) < \varepsilon/2$, so there is a $\gamma(\varepsilon,\beta(\varepsilon))$ such that $d(x_{\beta(\varepsilon)},x_\gamma) < \varepsilon/2$ for all $\gamma \geqslant \gamma(\varepsilon,\beta(\varepsilon))$. Choose $\alpha(\varepsilon) = \gamma(\varepsilon,\beta(\varepsilon))$. Then, for $\beta,\gamma \geqslant \alpha(\varepsilon)$ we have
$$d(x_\beta,x_\gamma) \leqslant d(x_{\beta(\varepsilon)},x_\beta) + d(x_{\beta(\varepsilon)},x_\gamma) < \frac{\varepsilon}{2} + \frac{\varepsilon}{2} = \varepsilon,$$
so $(x_\alpha)_{\alpha\in A}$ is a Cauchy net.
To complete everything, we need to see that $\lim\limits_{(\beta,\gamma)} d(x_\beta,x_\gamma) = 0$ is equivalent to $(x_\alpha)$ being a Cauchy net.
If $(x_\alpha)_{\alpha\in A}$ is a Cauchy net, then for $(\beta,\gamma) \geqslant (\alpha(\varepsilon),\alpha(\varepsilon))$ we have $d(x_\beta,x_\gamma) < \varepsilon$, therefore $\lim\limits_{(\beta,\gamma)} d(x_\beta,x_\gamma) = 0$. Conversely, if $\lim\limits_{(\beta,\gamma)} d(x_\beta,x_\gamma) = 0$, then for every $\varepsilon > 0$ there is a pair $(\beta(\varepsilon),\gamma(\varepsilon))$ such that $(\beta,\gamma) \geqslant (\beta(\varepsilon),\gamma(\varepsilon)) \Rightarrow d(x_\beta,x_\gamma) < \varepsilon$. Then we can choose (by directedness) an $\alpha(\varepsilon)$ with $\alpha(\varepsilon) \geqslant \beta(\varepsilon)$ and $\alpha(\varepsilon) \geqslant \gamma(\varepsilon)$, and for $\beta,\gamma \geqslant \alpha(\varepsilon)$ it follows that $d(x_\beta,x_\gamma) < \varepsilon$, hence $(x_\alpha)$ is a Cauchy net. |
Solving simultaneous congruences with the Chinese remainder theorem | The first congruence takes the form
$$2x \equiv 3 \pmod 7$$
so we want to find the multiplicative inverse of $2$, modulo $7$.
You can use Euclid's algorithm for computing GCDs, or just think about it, and see that $4$ is the multiplicative inverse of $2$, as
$$2 \times 4 \equiv 1 \pmod{7}$$
So now we multiply the congruence through by $4$, and we get
$$x \equiv 3 \times 4 \equiv 5 \pmod 7$$
which is the form we want for the CRT.
The second congruence is already in the form we want.
The third congruence is
$$5x \equiv 50 \pmod {55}$$
Since the coefficient of $x$, the remainder and the modulus all have common factor $5$, we can divide through by this to get the congruence
$$x \equiv 10 \pmod {11}$$
So now this congruence is in a suitable form to apply CRT.
(For proof that this works, observe that
$$55 \mid (5x - 50)
\iff 5 \times 11 \mid 5\,(x - 10)
\iff 11 \mid x - 10.)
$$
Now you have the three congruences
$$
\begin{align*}
x &\equiv 5 \pmod 7 \\
x &\equiv 4 \pmod 6 \\
x &\equiv 10 \pmod {11}
\end{align*}
$$
the standard form, which I assume you already know how to solve. |
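With all three congruences in standard form, a brute-force search over one full period $7\cdot6\cdot11 = 462$ (the moduli are pairwise coprime, as CRT requires) confirms there is a unique solution class:

```python
# Find all residues modulo 462 satisfying the three standard-form congruences.
solutions = [x for x in range(7 * 6 * 11)
             if x % 7 == 5 and x % 6 == 4 and x % 11 == 10]
print(solutions)  # [208], i.e. x = 208 (mod 462)
```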
Finding the area of a Parallelotope | Let $\vec{a},\,\vec{b}$ denote the two vectors you gave. If $\theta$ is the angle between these vectors, the parallelogram they span has area$$2\cdot\frac12 ab\sin\theta=|\vec{a}\times\vec{b}|=|-3\vec{i}+6\vec{j}-3\vec{k}|=3\sqrt{6}.$$
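The original vectors are not quoted above, but $\vec a=(1,2,3)$ and $\vec b=(4,5,6)$ reproduce the stated cross product $-3\vec i+6\vec j-3\vec k$; taking them as an assumed example:

```python
import math
import numpy as np

a = np.array([1.0, 2.0, 3.0])  # assumed, chosen to match the cross product above
b = np.array([4.0, 5.0, 6.0])

cross = np.cross(a, b)
area = np.linalg.norm(cross)   # area of the spanned parallelogram
print(cross, area)  # [-3.  6. -3.] and 3*sqrt(6) ~ 7.348
```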
Convergence of $\sum_{n=2}^\infty \frac{1}{n^p - n^q}$, where $0 < q < p$ | $\frac {n^{p}} {n^{p}-n^{q}} \to 1$ so the given series converges iff $\sum \frac 1 {n^{p}}$ converges iff $p >1$. As long as $0<q<p$ we don't have to worry about whether $q>1$ or not. |
The $3\times3$ matrix $M$ with $M_{ij}=a_ia_j+\mathbf 1_{i=j}$ has determinant $a_1^2+a_2^2+a_3^2+1$ | If $A$ is that matrix, and we let $v=\begin{pmatrix}a\\b\\c\end{pmatrix}$, then we notice that $Ax = (v\cdot x)v+x$ for all vectors $x$. In particular, $Av=(|v|^2+1)v$ whereas $Aw=w$ for $w\perp v$. Thus, we can express $A$ with respect to a suitable basis $v,w_1,w_2$ as
$$\begin{pmatrix}|v|^2+1&0&0\\0&1&0\\0&0&1\end{pmatrix} $$
which obviously has determinant $|v|^2+1=a^2+b^2+c^2+1$. |
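A numerical spot-check of the determinant formula $|v|^2+1$, for, say, $a=1$, $b=2$, $c=3$:

```python
import numpy as np

a, b, c = 1.0, 2.0, 3.0
v = np.array([a, b, c])

# A x = (v . x) v + x, i.e. A = v v^T + I.
A = np.outer(v, v) + np.eye(3)

print(np.linalg.det(A))      # 15.0 (up to rounding)
print(a*a + b*b + c*c + 1)   # 15.0
```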
Finding the solution of a complex number in polar form? | Multiplying by a $3$rd root of unity will give another cube root. As mentioned in the comments there are $2$ more.
Multiplying by $e^{\frac{2\pi i}3}$ gives (b)... |
lemma 4.4 Stein Shakarchi pg88 - confusion | What you think is correct. $g(\xi)$, $f(y)$ and $e^{-2\pi i \xi y}$ are all measurable on $\mathbb{R}^{2d}$, and so is their product.
In general, if $f,g$ are measurable, then so is $fg$, even if they are infinite at some points.
What means smallest relation and what difference from simple relation | As you say, a relation is a set of ordered pairs taken from the Cartesian product of a set with itself. We can partially order relations by inclusion. The smallest one will be the one that is a subset of all the others. Among reflexive relations, the one you give is the smallest because those three pairs must be in every reflexive relation. You can add any other pairs you wish without spoiling the fact that the relation is reflexive.
As another example, the empty relation is the smallest of all relations. |
Partition of a rectangle into squares problem | The solver wants to draw horizontal lines through the rectangle, and wants the number of horizontal lines to be $\tilde b$, which is the integer nearest the height of the rectangle. You can do this by drawing the lines at the half-integral heights $\pm1/2,\pm3/2,\pm5/2,\dots$. But things get messy if one or more of those horizontal lines coincides with an edge of a square, as the solver wants to break each horizontal line up into segments, and attribute each segment to exactly one of the squares in the tiling. So, you have to make sure that there are no edges at a half-integral height. That's where Dirichlet's Theorem On Diophantine Approximation comes in; it ensures that given any finite collection of numbers, there is a positive integer $q$ such that you can multiply each of the numbers by $q$ and the resulting numbers won't be half-integers (will in fact differ from the nearest integer by at most one-fifth).
Now the solver is also going to draw vertical lines, and these are also going to be at half-integers (so the number of vertical lines will be $\tilde a$), and these also have to miss the sides of the squares, so the finite collection of numbers to be multiplied by $q$ has to include all the horizontal coordinates, but that's still a finite collection of numbers, so Dirichlet applies.
But why one-fifth, when one-third would be good enough to avoid all the edges of squares? Well, you want the number of line segments in any given square to be $\tilde s_i$, so you want a square to have side $s_i$ at least one-half if it has a line segment running through it. With coordinates as much as one-third away from the nearest integer, you could have a square of side one-third with a line segment through it; but with coordinates no more than one-fifth from an integer, a square must have side at least three-fifths to have a segment through it, and $3/5>1/2$.
I hope this helps. |
Finding max and min from polynomial equation | If $b=0$ and $a\geq 0$ then $x^4+ax^2+b=0$ has just one real root, i.e. $x=0$.
If $b=0$ and $a< 0$ then $x^4+ax^2+b=0$ has three real roots, i.e. $x=0, \pm\sqrt{-a}$
If $b>0$ then $x^4+ax^2+b=0$ has two distinct real roots iff $z^2+az+b=0$ (where $z=x^2$) has either one positive and one negative root, which is impossible because $b>0$, or two coincident positive roots, that is, when $\Delta=a^2-4b=0$ and $a<0$.
Hence for $b=a^2/4$ with $a<0$ we have that $|a-b|=|a-a^2/4|$ has supremum $+\infty$, whereas $a+2b=a+a^2/2=\frac{(a+1)^2-1}{2}$ has minimum value $-1/2$, attained for $a=-1$. |
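A quick numeric check of the boundary case $b=a^2/4$ with $a=-1$ (my own sketch, not part of the original answer): the substitution $z=x^2$ gives coincident positive roots, so the quartic has exactly two distinct real roots, and $a+2b=-1/2$.

```python
import math

a, b = -1.0, 0.25          # b = a^2/4 with a < 0
# Substituting z = x^2 turns x^4 + a x^2 + b = 0 into z^2 + a z + b = 0.
disc = a*a - 4*b           # 0: two coincident positive roots z = -a/2
z = -a / 2                 # 0.5
roots = [math.sqrt(z), -math.sqrt(z)]   # two distinct real roots of the quartic
value = a + 2*b            # -0.5, the claimed minimum of a + 2b
```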
Find the sum of the series $\sum_{r =1}^{\infty} \frac{1}{(2r-1) 4^{r-1}}$ | hint...write $4^{r-1}$ as a power of $2$ |
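Going a little beyond the hint: with $4^{r-1}=2^{2r-2}$ the series rearranges into twice the $\operatorname{artanh}$ series $\sum_{r\ge1}\frac{x^{2r-1}}{2r-1}$ at $x=\frac12$, so the sum is $2\operatorname{artanh}\frac12=\ln 3$. A numeric check of the partial sums:

```python
import math

# Partial sums of sum_{r>=1} 1 / ((2r - 1) * 4^(r - 1)); terms decay like 4^-r.
s = sum(1 / ((2*r - 1) * 4**(r - 1)) for r in range(1, 40))
print(s, math.log(3))  # both ~1.0986
```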
Does the expression $\lim_{n\to\infty} (1/n)(\sin(\pi/n)+\sin(2\pi/n)+\cdots+\sin(n\pi/n))$ converge or diverge? | Because of the symmetry of $\sin(x)$ around $x=\frac{\pi}{2}$, the sum can be split: it is at most $\frac{2}{n}\sum\limits_{k=1}^{n/2}\sin\left(\frac{k\pi}{n}\right)\lt \frac{2\pi}{n^2}\sum\limits_{k=1}^{n/2}k\to \frac{\pi}{4}$, using $\sin x<x$; in particular the expression stays bounded.
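For what it's worth (my addition, not part of the original answer): the actual limit is the Riemann sum of $\sin(\pi t)$ on $[0,1]$, namely $\int_0^1\sin(\pi t)\,dt=\frac{2}{\pi}\approx0.6366$, which is indeed below the $\pi/4$ bound. A quick numeric check:

```python
import math

def riemann(n):
    """(1/n) * sum_{k=1}^{n} sin(k*pi/n), the expression in the question."""
    return sum(math.sin(k * math.pi / n) for k in range(1, n + 1)) / n

approx = riemann(10_000)
limit = 2 / math.pi
print(approx, limit)  # both ~0.6366
```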
$2+\sqrt{12-2x}=x$ solution is 4, cannot arrive at this solution | Writing your equation in the form
$$\sqrt{12-2x}=x-2$$ and squaring
$$12-2x=x^2-4x+4$$
collecting like terms
$$x^2-2x-8=0$$
Can you solve this equation? |
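Squaring can introduce extraneous roots, so each candidate must be checked in the original equation. A quick pure-Python sketch (my addition):

```python
import math

# x^2 - 2x - 8 = 0  =>  x = (2 ± sqrt(4 + 32)) / 2
disc = (-2)**2 - 4 * 1 * (-8)                                   # 36
roots = [(2 + math.sqrt(disc)) / 2, (2 - math.sqrt(disc)) / 2]  # [4.0, -2.0]

# Only roots satisfying the *original* equation 2 + sqrt(12 - 2x) = x count.
valid = [x for x in roots if abs(2 + math.sqrt(12 - 2*x) - x) < 1e-12]
print(valid)  # [4.0] -- x = -2 is extraneous, since sqrt is nonnegative
```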
Any idea how to find $\lim_{x\to 0} \frac{\sqrt{1-\cos(x^2)}}{1-\cos(x)}$? | if you multiply and divide by both conjugates you get:
$$\lim_{x \to 0} \frac{(1+\cos{x})\sin{(x^2)}}{\sqrt{1+\cos{(x^2)}} \space \sin^2{x}}$$
Rearranging to get limits of the form $\frac{\sin{x}}{x}$:
$$=\lim_{x \to 0} \frac{1+\cos{x}}{\sqrt{1+\cos{(x^2)}}} \cdot \frac{\sin{(x^2)}}{x^2} \cdot \frac{x^2}{\sin^2{x}}=\frac{2}{\sqrt{2}}\cdot 1\cdot 1^2=\sqrt{2}$$
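A numeric spot check at a small $x$ (my addition; $x$ is kept not-too-small to avoid catastrophic cancellation in $1-\cos$):

```python
import math

def f(x):
    """The expression sqrt(1 - cos(x^2)) / (1 - cos(x))."""
    return math.sqrt(1 - math.cos(x * x)) / (1 - math.cos(x))

approx = f(0.01)
print(approx, math.sqrt(2))  # both ~1.4142
```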
Finding argmin$_{n \in \mathbb{N}} |2^{n/12} - 5|$ non-computationally | I would try to solve the following equation:
$$ 2^{\frac{n}{12}}=5$$
by raising both sides to the third power, since the right-hand side (RHS) then becomes quite close to an integer power of $2$ (a fairly coarse approximation, though):
$$ (2^\frac{n}{12})^3=5^3 \Rightarrow 2^{\frac{n}{4}}=125\approx128=2^{7} \Rightarrow \frac{n}{4}=7 \Rightarrow n= 28$$
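A brute-force check over a range of $n$ confirms the estimate (my addition, against the spirit of "non-computationally", but reassuring):

```python
# Minimize |2^(n/12) - 5| over a range of natural numbers n.
best = min(range(1, 100), key=lambda n: abs(2 ** (n / 12) - 5))
print(best, 2 ** (best / 12))  # 28, with 2^(28/12) ≈ 5.04
```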
Find how much alcohol in grams is in this margarita recipe | First, let us determine the volume of alcohol in the mix (where I'm going to be lazy and use "oz" for fluid ounces):
The Don Julio is 80 proof, which means that 40% of the 2.5 oz of tequila are alcohol. Multiplying these, we get
$$ 2.5\text{ oz} \cdot 0.40 = 1 \text{ oz}, $$
thus we get 1 oz of alcohol from the tequila.
Similarly, we get
$$ 0.5 \text{ oz} \cdot 0.30 = 0.15\text{ oz}$$
of alcohol from the orange liqueur (though why you would use Triple Sec rather than something nice like Cointreau is beyond me; I'm not judging, though). At any rate, the drink then contains a total of 1.15 oz of alcohol.
Now that we know the volume of alcohol in the drink, we can use the conversion formulae you give to get a mass. Applying the conversions, we get
$$ 1.15 \text{ oz} \cdot \frac{29.6 \text{ mL}}{\text{oz}} \cdot \frac{0.789\text{ g}}{\text{mL}} \approx 26.9 \text{ g}$$
of alcohol. Note how the units "upstairs" cancel the units "downstairs", leaving us with only grams.
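The same computation as a short script (conversion constants taken from the answer above):

```python
OZ_TO_ML = 29.6          # mL per fluid ounce
ETHANOL_DENSITY = 0.789  # g per mL

# Volume of alcohol: 2.5 oz at 40% ABV (80 proof) + 0.5 oz at 30% ABV.
alcohol_oz = 2.5 * 0.40 + 0.5 * 0.30        # 1.15 oz
alcohol_g = alcohol_oz * OZ_TO_ML * ETHANOL_DENSITY
print(round(alcohol_g, 1))  # 26.9
```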