title | upvoted_answer
---|---
cluster points of sub-sequences of sequence $\frac{n}{e}-[\frac{n}{e}]$ | $\alpha:=1/e$ is irrational. Hence the sequence of fractional parts
$$\left(\{n\alpha\}: n\ge 1\right)$$
is equidistributed in $[0,1]$. So, the set of clusters (ordinary accumulation points) is still the whole interval $[0,1]$.
We can prove something stronger. Let $\Gamma_x$ be the set of statistical cluster points, i.e., the set of all $y$ such that
$$
S_\varepsilon:=\{n: |x_n-y|<\varepsilon\}
$$
does not have asymptotic density zero for every $\varepsilon>0$, which means $\frac{|S_\varepsilon \cap [1,n]|}{n} \not\to 0$ as $n\to \infty$. (Clearly, this is a subset of the set of ordinary accumulation points.) However, in this case, it is still true that $\Gamma_x=[0,1]$. |
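A quick numerical illustration of the equidistribution claim above (a sketch in Python; an illustration, not a proof):

```python
# Fractional parts {n/e} should fill [0,1] roughly uniformly.
import math

N = 100_000
parts = [math.fmod(n / math.e, 1.0) for n in range(1, N + 1)]

# Count how many points fall in each of 10 equal bins; equidistribution
# says each bin should receive about N/10 of the points.
bins = [0] * 10
for x in parts:
    bins[min(int(10 * x), 9)] += 1

print(bins)  # all counts close to 10000
```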
Find all angles in a triangle, given 2 internal 90deg angles and segment equality. | $BD$ is a median in the right triangle $ABE$, hence $BD = AD = DE$; $BE$ is a median in the right triangle $DBC$, hence $BE = DE = EC$. Therefore $BDE$ is an equilateral triangle.
Can you take it from here? |
Linearly growing sets of numbers which allow for a unique decomposition | It turns out that these sets have been studied, under the name "$B_n$ sets" (usually people have used $h$ in place of $n$) or "$B_h[1]$ sets".
Bose and Chowla (see Theorem 1) gave an explicit algebraic construction of a $B_h$ set with $m$ elements, all of which are integers between $0$ and $m^h$. Up to the constant in front of $m^h$, this order of growth is best possible (by the pigeonhole principle).
See O'Bryant's survey for more constructions and references on this topic. |
heterogeneous recurrence with f(n) as constant | $s_{n+1}=4s_{n-1}-3s_n+5$
$q^2+3q-4=0$
$\Delta=3^2-4*1*(-4)=25$
$\sqrt\Delta=5$
$q_1=\frac{-3-5}2=-4$
$q_2=\frac{-3+5}2=1$
homo general $s_n=c_1*1^n+c_2*(-4)^n$
$k=1$ where $k$ is the multiplicity of the root $q=1$
hetero particular $s_n=Q(n)*q^n*n^k$
$s_n=A*1^n*n^1=An$, since $1^n=1$ for all $n$
$A(n+1)=4(A(n-1))-3An+5$
$An+A=4An-4A-3An+5$
$An+A=An-4A+5$
$5A=5 \implies A=1$
hetero particular $s_n=n$
hetero general $s_n=c_1*1^n+c_2*(-4)^n+n$
$\begin{cases} -3=s_0=c_1\cdot 1^0+c_2\cdot(-4)^0+0=c_1+c_2 \implies c_1=-3-c_2=-3+1=-2\\
3=s_1=c_1\cdot 1^1+c_2\cdot(-4)^1+1=-3-c_2-4c_2+1 \implies -5c_2=5 \implies c_2=-1
\end{cases}$
(Note: the particular solution $n$ contributes $0$ at $n=0$ but $1$ at $n=1$; forgetting it there is a common slip.)
hetero general $s_n=-2\cdot 1^n-1\cdot(-4)^n+n$ |
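A quick check of the corrected constants (a sketch; the assert compares the closed form with the recurrence directly):

```python
# Iterate the recurrence and compare with the closed form
# s_n = -2 - (-4)**n + n derived above.
s = [-3, 3]  # s_0, s_1
for n in range(1, 20):
    s.append(4 * s[n - 1] - 3 * s[n] + 5)  # s_{n+1} = 4 s_{n-1} - 3 s_n + 5

for n, value in enumerate(s):
    assert value == -2 - (-4) ** n + n, (n, value)
print("closed form matches the recurrence for n = 0..20")
```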
Homework - Resolve the recurrence relation | A generating function approach would make this straightforward:
$$G(x) = \sum_{n=0}^\infty a_n x^n = 1+2x+\sum_{n=2}^\infty a_n x^n$$
$$=1+2x+\sum_{n=2}^\infty (a_{n-1} + 2a_{n-2} + 2^n)x^n$$
$$=1+2x+\sum_{n=1}^\infty a_n x^{n+1} + 2 \sum_{n=0}^\infty a_n x^{n+2} + \sum_{n=2}^\infty 2^n x^n$$
$$=1+2x+x(G(x)-1)+2x^2G(x)+\frac{2^2x^2}{1-2x}$$
Thus $$G(x)(1-x-2x^2) = 1+x+\frac{2^2x^2}{1-2x}$$ and $$G(x) = \frac{1+x}{1-x-2x^2} + \frac{2^2x^2}{(1-2x)(1-x-2x^2)} = \frac{1+x}{(1+x)(1-2x)} + \frac{2^2x^2}{(1-2x)(1+x)(1-2x)}$$
$$= \frac{1}{(1-2x)} + \frac{2^2x^2}{(1-2x)^2(1+x)}$$
Now you can use partial fractions, and then expand in terms of geometric series to find the Taylor coefficients of $G(x)$, which are the terms $a_n$. |
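Carrying out the suggested partial fractions gives (worth re-deriving yourself) $a_n = \frac{(6n+5)\,2^n + 4(-1)^n}{9}$; a sketch verifying this against the recurrence:

```python
# a_0 = 1, a_1 = 2, a_n = a_{n-1} + 2 a_{n-2} + 2**n
a = [1, 2]
for n in range(2, 30):
    a.append(a[n - 1] + 2 * a[n - 2] + 2 ** n)

for n, value in enumerate(a):
    # multiply through by 9 to stay in exact integer arithmetic
    assert 9 * value == (6 * n + 5) * 2 ** n + 4 * (-1) ** n, (n, value)
print("coefficients match")
```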
do Carmo Riemannian Geometry Exercise 2.3: definition of $\nabla$ for an immersion - Part II | First, your check that $\nabla_XY$ is well defined independently of $\overline X, \overline Y$ is unclear: by way of analogy, even if two functions $f_1, f_2$ agree at a point $p$, it does not follow that $f_1' = f_2'$ at $p$.
To check that $\nabla$ is well-defined, we split into two steps:
If $\overline X, \widetilde X$ are both extensions of $df(X)$, then for any local vector field $Z$ on $V\subset \overline M$ and for all $p\in U$,
$$ \overline \nabla_{\overline X} Z = \overline \nabla_{\widetilde X} Z\ \ \ \ \ \ \text{ at } f(p).$$
Proof: This follows from the fact that $\overline \nabla$ is $C^\infty$-linear in that component, thus the value $\overline \nabla_{\overline X} Z(f(p))$ depends only on $\overline X(f(p))$.
Let $\overline Y, \widetilde Y$ both be extensions of $df(Y)$ and suppose $\overline X$ is tangential to $f(U)$; then
$$\tag{2} \overline \nabla _{\overline X} \overline Y = \overline \nabla _{\overline X} \widetilde Y\ \ \ \ \ \text{ at }f(p).$$
Proof: this follows from the fact that covariant differentiation can be computed using parallel transport (here): In particular, since $\overline X$ is tangential to $f(U)$, one can find an integral curve of $\overline X$ which lies inside $f(U)$ (for example, let $\gamma : (-\epsilon, \epsilon)\to M$ be an integral curve of $X$; then $f\circ \gamma$ is an integral curve of $\overline X$ lying inside $f(U)$). Since $\overline Y, \widetilde Y$ agree on $f(U)$, (2) is shown.
Second, we show that $\nabla$ is indeed a connection. To begin with, we show
(1) For any local vector fields $X, Y$ on $U$ and local smooth functions $\varphi:U \to \mathbb R$, we have
$$\nabla_{\varphi X} Y (p) = \varphi(p) \nabla_X Y(p), \ \ \ \forall p\in U.$$
Proof: let $\overline \varphi$ be a smooth function on $V\subset \overline M$ which extends $\varphi\circ f^{-1} : f(U) \to \mathbb R$. That is, for all $f(p) \in f(U)$ we have
$$ \varphi (p) = \overline \varphi (f(p)).$$
Then $\overline \varphi \overline X$ is an extension of $df (\varphi X)$. So
\begin{align*}
\nabla _{\varphi X} Y(p) &= df^{-1} \bigg(\text{tangential component of } \overline \nabla_{\overline\varphi \overline X} \overline Y(f(p))\bigg) \\
&= df^{-1} \bigg(\text{tangential component of }\ \overline\varphi (f(p)) \overline \nabla_{\overline X} \overline Y(f(p))\bigg) \\
&= \varphi (p) df^{-1} \bigg(\text{tangential component of }\overline \nabla_{\overline X} \overline Y(f(p))\bigg) \\
&= \varphi (p) \nabla_X Y (p).
\end{align*}
(2) We also show that $\nabla$ is compatible with the pullback metric $g = f^*\bar g$. Let $X, Y, Z$ be vector fields. Then by definition,
$$ X g(Y, Z)(p) = \frac{d}{dt}\bigg|_{t=0} g_{\gamma(t)}( Y(\gamma(t)), Z(\gamma(t))),$$
where $\gamma : (-\epsilon, \epsilon) \to M$ is any curve with $\gamma(0) = p$, $\gamma'(0) = X(p)$. Using the definition of pullback metric,
$$ g_{\gamma(t)}( Y(\gamma(t)), Z(\gamma(t))) = \bar g_{f(\gamma(t))} (df_{\gamma(t)} Y(\gamma(t)), df_{\gamma(t)} Z(\gamma(t))).$$
Since $f\circ \gamma$ is a curve in $\overline M$ with $f\circ \gamma (0) = f(p)$, $(f\circ \gamma)'(t) = df_{\gamma(t)} X(\gamma(t))$, we have
\begin{align*}
\frac{d}{dt}\bigg|_{t=0} g_{\gamma(t)}( Y(\gamma(t)), Z(\gamma(t)))&= \overline X \bar g (\overline Y, \overline Z)\,(f(p))\\
&= \bar g(\overline \nabla _{\overline X} \overline Y , \overline Z ) + \bar g(\overline Y , \overline \nabla _{\overline X} \overline Z) \ \ \ \ \ \text{ at } f(p)\\
&= \bar g(df (\nabla _{X} Y) , df ( Z) ) + \bar g(df(Y) , df(\nabla _{X} Z)) \\
&= g(\nabla_X Y, Z) + g(Y, \nabla_XZ)
\end{align*}
at $p$. Note we used that $\overline Y, \overline Z$ are tangential to $f(U)$, so that we have
$$ \bar g (\overline \nabla_{\overline X} \overline Y, \overline Z) = \bar g ((\overline \nabla_{\overline X} \overline Y)^\top, \overline Z),$$
where $(\cdot)^\top$ denotes the tangential part of a vector.
Finally, in your checking of the symmetry of $\nabla$ you used $\Gamma_{ij}^k = \Gamma_{ji}^k$, which a priori you do not know yet. Indeed the symmetry of $\nabla$ is equivalent to the symmetry of $\Gamma$.
To give a correct proof we, just as for all the other properties we proved, push everything forward to $\overline M$, prove the property there, and then pull back: by definition,
\begin{align*}
\nabla_X Y- \nabla_Y X &= df^{-1} \left( \overline\nabla_{\overline X} \overline Y - \overline\nabla _{\overline Y} \overline X\right)^\top \\
&= df^{-1} ([\overline X, \overline Y]^\top).
\end{align*}
Since $f(U)$ is a submanifold and $\overline X, \overline Y$ are tangential to $f(U)$,
$$ [\overline X, \overline Y]^\top = [\overline X, \overline Y] = [df (X), df(Y)]$$
(this can be checked directly, assuming that $f(U)$ is a plane $\mathbb R^n \subset \mathbb R^{n+k}$; the Riemannian structure is not used here). Then by this, we have
$$\nabla_X Y- \nabla_Y X = [X, Y].$$ |
Proving $\prod _{k=j}^n \frac{p_{k+1}}{p_k} = \frac{p_{n+1}}{p_j}\!\!,\;\;1\le j\!<\!n$ | Expand out the product and cancel common factors:
$$\prod_{k=j}^n\frac{p_{k+1}}{p_k}=\frac{p_{j+1}p_{j+2}\cdots p_{n}p_{n+1}}{p_{j}p_{j+1}\cdots p_{n-1}p_{n}}=\frac{\color{red}{p_{j+1}}p_{j+2}\color{red}{\cdots p_{n}}p_{n+1}}{p_{j}\color{red}{p_{j+1}}\cdots p_{n-1}\color{red}{p_{n}}}=\frac{p_{n+1}}{p_j}$$
Note that this has nothing to do with $p_k$ being prime. |
How to make sense of the Green's function of the 4D wave equation? | For ease of reference in this post equations are numbered as in ref. 1.
The expression given is surprisingly useless for actual calculations. But it seems to be the best we can do with the usual functional notation to express the actual, quite well-defined, distribution. Below I'll try to make it more understandable.
Let's start from the way $(36)$ was derived. The authors in ref. 1 derived it by integrating the Green's function for (5+1)-dimensional wave equation,
$$G_5=\frac1{8\pi^2c^2}\left(\frac{\delta(\tau)}{r^3}+\frac{\delta'(\tau)}{cr^2}\right),\tag{32}$$
where $\tau=t-r/c$, along the line of uniformly distributed sources in 5-dimensional space, using the integral
$$G_{n-1}(r,t)=2\int_r^\infty s(s^2-r^2)^{-1/2}G_n(s,t)ds,\tag{25}$$
where $r=r_{n-1}$ is the radial coordinate in $(n-1)$-dimensional space.
Remember that a Green's function for a wave equation is the impulse response of the equation, i.e. the wave that appears after the action of the unit impulse of infinitesimal size and duration, $f(r,t)=\delta(r)\delta(t)$. Let's replace this impulse with one that is finite at least in one variable, e.g. time. This means that our force function will now be $f(r,t)=\delta(r)F(t)$, where $F$ can be defined as
$$F(t)=\frac{(\eta(t+w)-\eta(t))(w+t)+(\eta(t)-\eta(t-w))(w-t)}{w^2},$$
which is a triangular bump of unit area, with width (duration) $2w$. The choice of triangular shape, rather than a rectangular one, is to make sure we don't get Dirac deltas when differentiating it once.
Then, following equation $(34)$, we'll have the displacement response of the (5+1)-dimensional equation, given by
$$\phi_5(r,t)=\frac1{8\pi^2c^2}\left(\frac{F(\tau)}{r^3}+\frac{F'(\tau)}{cr^2}\right).\tag{34}$$
Now, to find the displacement response $\phi_4(r,t)$ of the (4+1)-dimensional equation, we can use $\phi_5$ instead of $G_5$ in $(25)$. We'll get
$$\phi_4(r,t)=
\frac1{4c^3\pi^2r^2w^2}
\begin{cases}
\sqrt{c^2(t+w)^2-r^2} & \text{if }\,ct\le r<c(t+w),\\
\sqrt{c^2(t+w)^2-r^2}-2\sqrt{c^2t^2-r^2} & \text{if }\,c(t-w)<r<ct,\\
\sqrt{c^2(t+w)^2-r^2}-2\sqrt{c^2t^2-r^2}+\sqrt{c^2(t-w)^2-r^2} & \text{if }\,r\le c(t-w),\\
0 & \text{otherwise.}
\end{cases}$$
Here's a sample of $\phi_4(r,t)$ for $c=1,$ $t=10,$ $w=0.011$ (plot not reproduced here).
What happens in the limit of $w\to0$? By cases in the above expression:
The first case (blue line in the figure above) corresponds to the leading edge of the force function bump; it's located outside of the light cone of the Green's function $G_4$. As $w\to0$, the area under its curve grows unboundedly, tending to $+\infty$.
The second case (orange) corresponds to the trailing edge of the bump. A zero inside the domain of this case splits the function into a positive and a negative part. The integral of this function times $r^3$ diverges to $-\infty$.
The third case (green) corresponds to the wake after the force function bump ends. It's negative on the whole of its domain, and its integral times $r^3$ diverges to $-\infty$. In the limit $w\to0$ this term becomes, for $r<ct$, exactly the second term of $(36)$.
Together, however, the integral $\int_0^\infty r^3\phi_4(r,t)\,\mathrm{d}r$ for $t>w$ remains finite, equal to $\frac t{2\pi^2},$ regardless of the value of $w.$
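Before the conclusions, a numerical sanity check of that last claim (a sketch assuming SciPy; quadrature error aside, both printed values should agree):

```python
# Check that the integral of r^3 * phi_4(r, t) over [0, inf) equals
# t / (2 pi^2), independently of w.
import numpy as np
from scipy.integrate import quad

c, t, w = 1.0, 10.0, 0.5  # any w with 0 < w < t should do

def phi4(r):
    pre = 1.0 / (4 * c**3 * np.pi**2 * r**2 * w**2)
    s = 0.0
    if r < c * (t + w):
        s += np.sqrt(c**2 * (t + w)**2 - r**2)
    if r < c * t:
        s -= 2 * np.sqrt(c**2 * t**2 - r**2)
    if r <= c * (t - w):
        s += np.sqrt(c**2 * (t - w)**2 - r**2)
    return pre * s

# integrate piecewise so the square-root kinks sit at interval endpoints
pieces = [0.0, c * (t - w), c * t, c * (t + w)]
total = sum(quad(lambda r: r**3 * phi4(r), lo, hi)[0]
            for lo, hi in zip(pieces, pieces[1:]))
print(total, t / (2 * np.pi**2))  # both should be ≈ 0.5066
```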
Conclusions:
The Green's function does exist and is a well-defined distribution
The equation $(36)$ formally does make sense
We can do calculations using $\phi_4$ instead of the $G_4$ from $(36)$, taking the limit $w\to0$ at appropriate times.
References:
1: H. Soodak, M. S. Tiersten, Wakes and waves in N dimensions, Am. J. Phys. 61, 395 (1993) |
Slopes of curves from complex derivative | Let's prove the result for $u(x,y)$, the proof for $v(x,y)$ being similar.
On one hand, for a level curve $u(x,y)=k$, the slope at a point $(x,y(x))$ is given by $$y^\prime(x)=-\frac{\frac{\partial u}{\partial x}(x,y(x))}{\frac{\partial u}{\partial y}(x,y(x))} \tag 1$$
On the other hand, for a complex number $z$, we have $$\cot (\arg (z))=\frac{\Re(z)}{\Im(z)}=\frac{i(z+ \overline z)}{z - \overline z} \tag 2$$
Now as $f$ is supposed to be analytic we have $f^\prime(z)=\frac{\partial u}{\partial x}+i \frac{\partial v}{\partial x}$. Applying (2) to the complex number $f^\prime(z)$ we get $$\cot (\arg (f^\prime(z)))=\frac{\frac{\partial u}{\partial x}}{\frac{\partial v}{\partial x}}$$ Finally we obtain the desired result using the following Cauchy-Riemann equation (as $f$ is analytic): $$\frac{\partial v}{\partial x}=-\frac{\partial u}{\partial y}$$ |
What is the integral of $\frac{1}{k!}$? | This is a discrete distribution, so instead of integrating, try summing over $k$:
$$\sum_{k=0}^\infty P(X=k) = 1$$ |
$A$ has $\Bbb Z_m$-module structure | Let $\mathrm{End}(A)$ denote the ring of additive group endomorphisms of $A$ and $\mu:\Bbb Z\to\mathrm{End}(A)$ be the unique ring homomorphism (which maps $1\in\Bbb Z$ to the identity on $A$).
Then for each $n\in\Bbb Z$ we have $\mu(n):x\in A\mapsto nx$.
A $\Bbb Z/m\Bbb Z$-module structure on $A$ is given by a ring homomorphism $\Bbb Z/m\Bbb Z\to\mathrm{End}(A)$ and it makes the following diagram commutative:
(commutative diagram relating $\Bbb Z$, $\Bbb Z/m\Bbb Z$ and $\mathrm{End}(A)$ not reproduced here)
Such a ring homomorphism exists if and only if $\mathrm{Ker}(\mu)\supseteq m\Bbb Z$, and this is equivalent to $m\in\mathrm{Ker}(\mu)$, that is, $mA=0$. |
Definition of Differentiation and partial derivatives | When we define differentiability at a certain point for functions $f : \mathbb{R} \to \mathbb{R}$, one of our goals is to ensure that the function has a tangent line to the graph at the point. Indeed, the tangent line is the best linear approximation to the function near the point of tangency.
When we try to extend this notion to functions $F: \mathbb{R}^n \to \mathbb{R}^m$, we use this notion of the best linear approximation to the values of the function near a point. The usual definition is (as given by Spivak in Calculus on Manifolds):
Definition: Let $F: \mathbb{R}^n \to \mathbb{R}^m$, we say that $F$ is differentiable at $a\in\mathbb{R}^n$ if there's a linear transformation $\lambda:\mathbb{R}^n \to \mathbb{R}^m$ such that
$$\lim_{h\to0}{\dfrac{\left\|F(a+h)-F(a)-\lambda(h)\right\|}{\left\|h\right\|}} = 0$$
If that happens we denote $\lambda = DF(a)$ and call this linear transformation the Total Derivative of $F$ at $a$. This means that the error when approximating $F$ by $\lambda$ near $a$ goes to zero faster than the distance from $a$ to $a+h$: in other words, if you go to a neighbouring point that's close enough the function will be well approximated by the values of $\lambda$ which is a linear function by hypothesis.
The point here is: the basic idea of differentiability is to approximate some function by a linear function in the neighborhood of a point. Note that this definition extends the one-dimensional case, where we approximate the graph by the tangent line, well. Although sometimes the way we write the definitions seems strange and complicated, the ideas behind them are very clear and simple.
You've presented a particular case, and it's easy to show that the definition you gave is the same as the one I've presented if $n = 2$ and $m = 1$ (a good exercise is to show this). This is because you can prove such a thing from the definition above and from the definition of partial derivative. The partial derivative is defined when we try to understand how a function changes if we move in a particular direction. I assume that you already know the definition of partial derivative.
Although your question is: "why it's obvious that the total derivative can be expressed in terms of the partials" I think that with this information you can try to get to the answer yourself. In my opinion one of the most important parts of learning math is trying to get to the results yourself, sometimes only when you give it a try you can see what's behind.
Try it! Use this definition and the definition of partial derivatives and try to show that in the case you've presented it's true that $a = D_1f(x,y)$ and $b=D_2f(x,y)$ where $D_1$ and $D_2$ denotes the partials with respect to $x$ and $y$ respectively. If you do it try to extend to the general case.
If you can't, feel free to ask for more information. Also, look at Spivak's Calculus on Manifolds; it's a very good book to learn these things.
Good luck! |
Question about calculating the top left and top right coordinates of a point on N*N matrix | The classic solution to this programming problem uses two auxiliary vectors that track occupancy of the diagonals. For one set of diagonals, $i+j$ is constant, for the other, it’s $i-j$. Offset these values by an appropriate constant amount so that you can use them as one-dimensional array indices. Checking that a diagonal is occupied before placing a queen becomes a simple array lookup.
You can do the same for the rows and columns, of course. Representing the board as these four vectors instead of the “obvious” two-dimensional array makes the entire occupancy check a matter of four array lookups.
However, to answer your specific question, I’m going to assume zero-based indexing. Since $i-j$ is constant for one of the diagonals, the corresponding element in row zero is $i-j$. If this value is negative, that means that the diagonal hits the side of the board before reaching the first row. The square at which this happens can be found by symmetry, and it’s $(j-i,0)$. For the other diagonal, you would do a similar calculation with $i+j$, checking to see if this is $\ge N$, in which case you’ve hit the edge at $(i+j-(N-1), N-1)$. |
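A minimal sketch of the four-vector occupancy trick described above (all names are mine):

```python
# Board is N×N, zero-based; a queen on (i, j) occupies row i, column j,
# diagonal i+j, and diagonal i-j (offset by N-1 so the index is >= 0).
def solve_queens(N):
    cols = [False] * N
    diag_sum = [False] * (2 * N - 1)   # indexed by i + j
    diag_diff = [False] * (2 * N - 1)  # indexed by i - j + N - 1
    placement = []

    def place(i):
        if i == N:
            return True
        for j in range(N):
            if not (cols[j] or diag_sum[i + j] or diag_diff[i - j + N - 1]):
                cols[j] = diag_sum[i + j] = diag_diff[i - j + N - 1] = True
                placement.append(j)
                if place(i + 1):
                    return True
                placement.pop()
                cols[j] = diag_sum[i + j] = diag_diff[i - j + N - 1] = False
        return False

    return placement if place(0) else None

print(solve_queens(8))  # e.g. [0, 4, 7, 5, 2, 6, 1, 3]
```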
Models of a theory in an elementary topos. | The answer is no: FinSet has no models of the Peano axioms. |
Determining parameters of $y=ab^x+c$ given 3 points | The equations being $$a \,b^{x_1}+c=y_1\tag 1$$ $$a \,b^{x_2}+c=y_2\tag 2$$ $$a \,b^{x_3}+c=y_3\tag 3$$ So, as André Nicolas commented, writing differences $$a(b^{x_2}-b^{x_1})=y_2-y_1\tag 4$$ $$a(b^{x_3}-b^{x_2})=y_3-y_2\tag 5$$ Making ratios as André Nicolas commented $$\frac{b^{x_2}-b^{x_1}}{b^{x_3}-b^{x_2}}=\frac{y_2-y_1}{y_3-y_2}\tag 6$$ So, you are left with one nonlinear equation in $b$. When you have $b$, $(4)$ or $(5)$ will give $a$ and then $(1)$, $(2)$ or $(3)$ will give $c$.
Except for very specific cases (for example, $x_2=2x_1$, $x_3=3x_1$ would reduce equation $(6)$ to a polynomial in $b^{x_1}$; equally spaced values for the $x$'s also do a nice job - see the end of this answer), solving equation $(6)$ (which is nonlinear) will in the most general case require numerical methods (Newton would probably be the simplest).
For illustration purposes, let us consider three data points $(1.5,9.2)$, $(3.1,16.7)$, $(4.7,32.9)$. So, equation $(6)$ writes $$\frac{b^{3.1}-b^{1.5}}{b^{4.7}-b^{3.1}}=\frac{75}{162}$$ The plot of the function shows a root close to $b=1.5$; Newton's method converges to $b=1.61821$; now, using this result, $a=3.14089$ and then $c=2.73448$.
In the particular case where the $x$ values are equally spaced $(x_2=x_1+\Delta$, $x_3=x_2+\Delta)$, equation $(6)$ greatly simplifies, leading to $$b^{-\Delta}=\frac{y_2-y_1}{y_3-y_2}$$ and $b$ is immediately obtained (the rest staying the same).
Edit
Eliminating $a$ and $c$ from equations $(1)$ and $(2)$ and replacing in equation $(3)$ leads to a nicer form for the equation to solve for $b$. It is
$$(y_2-y_3)b^{x_1}+(y_3-y_1)b^{x_2}+(y_1-y_2)b^{x_3}=0$$ |
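A sketch of the numerical step (the bisection bracket is my choice); it reproduces the worked example:

```python
# Find b as a root of the equation above. Note b = 1 is always a trivial
# root (the coefficients sum to zero), so we search away from it,
# then back-substitute for a and c.
x1, y1 = 1.5, 9.2
x2, y2 = 3.1, 16.7
x3, y3 = 4.7, 32.9

def g(b):
    return (y2 - y3) * b**x1 + (y3 - y1) * b**x2 + (y1 - y2) * b**x3

lo, hi = 1.2, 2.0  # g(lo) > 0 > g(hi) for these data
for _ in range(60):
    mid = (lo + hi) / 2
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid

b = (lo + hi) / 2
a = (y2 - y1) / (b**x2 - b**x1)
c = y1 - a * b**x1
print(b, a, c)  # ≈ 1.61821, 3.14089, 2.73448
```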
How do I know the number of primitive elements, which are them and the degree of extensions of Galois Groups | $\newcommand{\GF}{\textrm{GF}}$$\renewcommand{\phi}{\varphi}$You mean Galois fields, right?
As correctly noted by Jyrki Lahtonen in his comments (thanks a bunch!) there are two notions of primitive element here. I had first written an answer for the first meaning, an element whose powers account for the whole multiplicative group.
In this case the answer to point 2 is
The number of primitive elements of the field $\GF(p^{n})$ is $\phi(p^{n} -1)$.
Then I thought that maybe it is the other definition we are talking about here, that is, an element $\alpha$ such that $\GF(p)[\alpha] = \GF(p^{n})$, and thus replaced the previous answer by
The number of primitive elements of the field $\GF(p^{n})$ is the degree of the polynomial $\prod_{d \mid n} (x^{p^{d}} - x)^{\mu(n/d)}$. Here $\mu$ is the Moebius function. Thus the number is $\sum_{d \mid n} p^{d} \mu(n/d)$.
If $r \mid n$, then $\lvert \GF(p^{n}) : \GF(p^{r}) \rvert = n/r$. |
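A small sketch comparing the two counts for a sample field (hand-rolled $\varphi$ and $\mu$):

```python
def phi(n):  # Euler's totient, by trial division
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            result -= result // p
            while m % p == 0:
                m //= p
        p += 1
    if m > 1:
        result -= result // m
    return result

def mobius(n):  # Moebius function, by trial division
    result, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0  # squared prime factor
            result = -result
        p += 1
    return -result if m > 1 else result

p, n = 2, 4
gens = phi(p**n - 1)  # generators of the multiplicative group
degree_n = sum(p**d * mobius(n // d) for d in range(1, n + 1) if n % d == 0)
print(gens, degree_n)  # 8 12 for GF(16)
```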
Importance of $2$-groups in Finite Simple Groups | It seems to me that the reference there is to some classical theorems on the matter. See, for example, the Walter theorem or the Alperin–Brauer–Gorenstein theorem. |
Example of a function between boolean lattices that preserves $(\top,\bot,\wedge,\vee)$ but not complements. | There is no such function. Given a boolean algebra, observe that $\lnot a$ is the unique element satisfying $a \land \lnot a = \bot$ and $a \lor \lnot a = \top$; thus, any function that preserves $\bot$, $\top$, $\lor$, and $\land$ must also preserve $\lnot$. |
Find p such that P(X>2)=$\frac{1}{2}$ where X is a Geometric Distribution, Geometric(p) | Reply to OP's query: (too long for a comment)
You're probably substituting into the wrong geometric distribution (or, equivalently, evaluating $\ \mathrm{CDF}(3)\ $ rather than $\ \mathrm{CDF}(2)\ $).
The equation $\ P(X>k)=q^k\ $ implies that you're using the version of the geometric distribution in which $\ P(X=0)=0\ $ and $\ P(X=k)=pq^{k-1}\ $ for $\ k\ge 1\ $. For this distribution, $\ \mathrm{CDF}(2) = p+pq\ $, and when you substitute $\ p=1-\frac{1}{\sqrt 2}\approx 0.293\ $ into $\ 1-p-pq=(1-p)^2\ $, you get $\ \frac{1}{2}\ $, as required. If you substitute it into $\ 1-p-pq-pq^2\ $, on the other hand, you get $\ 0.35\ $ (approximately), so I suspect this is what you have done. |
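A three-line check of the two computations above (a sketch):

```python
# Geometric distribution supported on {1, 2, ...} with P(X = k) = p q^(k-1)
p = 1 - 2 ** -0.5
q = 1 - p
print(q ** 2)                       # P(X > 2) = 0.5, as required
print(1 - p - p * q)                # same thing via 1 - CDF(2)
print(1 - p - p * q - p * q ** 2)   # 1 - CDF(3) ≈ 0.35, the likely mistake
```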
Probability of getting 2 Aces, 2 Kings and 1 Queen in a five card poker hand | The probability that your hand has exactly two Kings is $\frac{\binom{4}{2}\binom{48}{3}}{\binom{52}{5}}$ (two of the four Kings, and three other cards from the $52-4=48$ non-Kings).
The probability that exactly two of the three remaining cards are Aces, given that you have exactly two Kings, is $\frac{\binom{4}{2}\binom{44}{1}}{\binom{48}{3}}$ (two of the four Aces, and one card from the $44$ cards that are neither Kings nor Aces). |
How many triangles, can be formed? | You should just be selecting three vertices out of six, and ${6 \choose 3}=\frac {6!}{3!(6-3)!}=\frac {720}{6\cdot 6}=20$ |
What is a good introduction to quantities such as the norm of a lattice and of short vectors in the context of lattice reduction? | A very nice introductory survey to lattice reduction is
F. Eisenbrand,
Integer Programming and algorithmic geometry of numbers,
Chapter 14 in:
50 years of integer programming 1958-2008 (M. Juenger et al., eds.),
Springer, Berlin 2010. |
Selecting zero or more objects from $n$ identical objects | The original question (which had "different" rather than "identical") is incorrect. The number of ways of selecting zero or more objects from $n$ identical objects is $n+1$, as follows: $$0, 1,2, \ldots, n$$
There are $n+1$ numbers in the above list. |
Tangent series representation | $\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
You can use the Mittag-Leffler Expansion:
$\ds{\tan\pars{z}}$ has simple poles at
$\ds{p_{n} = \pars{2n + 1}{\pi \over 2}}$, with residues
$\ds{r_{n} = -1}$, where $\ds{n \in \mathbb{Z}}$ .
$$
\bbx{\mbox{Note that}\quad p_{-n} = -p_{n - 1}}
$$
Then,
\begin{align}
\tan\pars{z} & =
\sum_{n = -\infty}^{\infty}\pars{-1}\pars{{1 \over z - p_{n}} + {1 \over p_{n}}} =
\sum_{n = 1}^{\infty}\bracks{%
\pars{{1 \over p_{n} - z} - {1 \over p_{n}}} +
\pars{{1 \over p_{-n} - z} - {1 \over p_{-n}}}}
\\[2mm] & +
\pars{{1 \over p_{0} - z} - {1 \over p_{0}}}
\\[5mm] & =
\lim_{N \to \infty}\sum_{n = 1}^{N}\bracks{%
\pars{{1 \over p_{n} - z} - {1 \over p_{n}}} +
\pars{{1 \over -p_{n - 1} - z} + {1 \over p_{n - 1}}}} +
\pars{{1 \over p_{0} - z} - {1 \over p_{0}}}
\\[5mm] & =
\lim_{N \to \infty}\bracks{\pars{{1 \over p_{0} - z} - {1 \over p_{0}}} +
\sum_{n = 1}^{N}\pars{{1 \over p_{n} - z} - {1 \over p_{n}}} +
\sum_{n = 1}^{N}\pars{{1 \over -p_{n - 1} - z} + {1 \over p_{n - 1}}}}
\\[5mm] & =
\lim_{N \to \infty}\bracks{\sum_{n = 0}^{N - 1}
\pars{{1 \over p_{n} - z} - {1 \over p_{n}}} +
\pars{{1 \over p_{N} - z} - {1 \over p_{N}}} +
\sum_{n = 0}^{N - 1}\pars{-\,{1 \over p_{n} - z} + {1 \over p_{n}}}}
\\[5mm] & =
\sum_{n = 0}^{\infty}\pars{{1 \over p_{n} - z} - {1 \over p_{n} + z}} =
\sum_{n = 0}^{\infty}{2z \over p_{n}^{2} - z^{2}} =
\sum_{n = 0}^{\infty}{8z \over \pars{2p_{n}}^{2} - 4z^{2}}
\\[5mm] & =
\bbx{\sum_{n = 0}^{\infty}{8z \over \pars{2n + 1}^{2}\pi^{2} - 4z^{2}}}
\end{align} |
can the empty set be an element of a group? | If by group you mean "set", sure. It's a legitimate set. Note that if $A$ is a set, then the empty set is an element of the power set of $A$.
And yes, you have computed the power set correctly. |
Is variant of $\arccos$ analytic? | $$
y=\arccos(1-x^2)\\
\cos(y)=1-x^2\\
x^2=1-\cos(y)=2\sin^2(y/2)\\
y=2\arcsin(|x|/\sqrt2)
$$
In this more simplified form you can see where the kink comes from and how to continue the part over the positive half axis into a smooth function around $x=0$. |
How to show that $z=0$ is an essential singularity of $\sin(\frac{1}{z})$ | If it was a pole, then this would be valid: $\lim_{z\to 0}\sin\left(\frac{1}{z}\right)=\infty$.
Then, for every sequence $z_n\to0$ ($n\to\infty$) we would have $\lim_{n\to\infty}\sin\left(\frac{1}{z_n}\right)=\infty$.
However - compare that with $z_n=\frac{1}{\frac{\pi}{2}+n\pi}\to 0$ and $\sin\left(\frac{1}{z_n}\right)=(-1)^n$. |
Random sum of random variables | If $S=\sum_{i=1}^{Y}X_{i}$, then the cumulant generating function of $S$ satisfies $$K_S(t)=K_Y(K_X(t)).$$ Any property of $S$ can be extracted from $K_S$. |
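Two consequences of $K_S(t)=K_Y(K_X(t))$, obtained by differentiating at $t=0$, are $E[S]=E[Y]\,E[X]$ and $\operatorname{Var}(S)=E[Y]\operatorname{Var}(X)+\operatorname{Var}(Y)\,E[X]^2$. A simulation sketch (the distributions are my choice):

```python
# Y ~ Poisson(3), X_i ~ Exponential(mean 2), so
# E[S] = 3 * 2 = 6 and Var(S) = 3 * 4 + 3 * 4 = 24.
import numpy as np

rng = np.random.default_rng(0)
trials = 100_000
counts = rng.poisson(3, size=trials)
sums = np.array([rng.exponential(2, size=k).sum() for k in counts])
print(sums.mean(), sums.var())  # ≈ 6, ≈ 24
```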
Indices - factorising | You have
\begin{align*}
\frac {x^5y^2x^3 + x^4y^5 - y^5x^7y^4}{x^4y^3}
&=\frac {x^8y^2 + x^4y^5 - x^7y^9}{x^4y^3} \\
&=\frac {x^4y^2\big(x^4 + y^3 - x^3y^7\big)}{x^4y^3} \\
&=\frac {x^4 + y^3 - x^3y^7}{y},\quad x\not=0.
\end{align*}
That's about as far as you can go. |
What is the best reference for real number objects? | There are two sections about this in Sheaves in Geometry and Logic by Mac Lane and Moerdijk:
VI.8, which is about the construction of the real numbers object in a topos;
VI.9, which is about Brouwer's theorem that all functions are continuous (which can hold in certain intuitionistic toposes). |
A conjecture inspired by the abc-conjecture | This is not an answer, but a community wiki to avoid a lot of comments.
What I have tested without failure, so far:
$a=2$ and $b=1,\dots, 1,000,000$
$0<a,b<10,000$
Also tested without failure:
$3 \le p \le 10^6 $, $p$ prime. $a = 1$, $b = \frac{p^2-1}{2}$
$ p \neq q, p,q \le 10^4$, $p,q$ primes, $a=p$, $b=q$
$ 3 \le p \le 10^6$, $p$ prime, $a=\frac{p-1}{2},b=\frac{p+1}{2}$ |
Problem with solving non-linear differential equation. | The second equation can be rewritten as $r\ddot\varphi+\dot r\dot\varphi=\frac{d}{dt}\left(r\dot\varphi \right)=0$ and therefore $r\dot\varphi$ is a constant. The rest is clear sailing! By the way, you should try to clarify the context where the problem came from. Is this related to the geodesic equation?
Next, you would substitute $\dot{\varphi}=\frac{c}{r}$ in the first equation, and then multiply by $r$ to get an exact differential again.
You might want to point out that what you are doing with springs and equations of motion is equivalent to solving the geodesic equation (in differential-geometric terms). |
Finding values that make $xy=x+y+z$ true. | Rewrite the equation as $(x-1)(y-1)=z+1$. Now it is easy to find infinitely many solutions. One can give a general description of the solutions in terms of the prime power factorization of $z+1$. |
How do I get to the coordinates (e,1/e) in this question | Here, $ y = \frac{\ln x}{x} $
For finding stationary points, you need to put $ \frac{dy}{dx} = 0 $
$$ \frac{dy}{dx} = \frac{x\cdot\frac{1}{x} - \ln x}{x^2} = \frac{1 - \ln x}{x^2} $$
We have $ \frac{1 - \ln x}{x^2} = 0 \implies 1 - \ln x = 0 \implies x = e $
Hence the stationary point is $ x = e $, where $ y = \frac{1}{e} $.
For finding maxima/minima you need to check the sign of $ \frac{d^2y}{dx^2} $.
If it's positive, it's a minimum; if it's negative, it's a maximum.
Here, you can find $ \frac{d^2y}{dx^2} $ to be negative. That means $ (e, \frac{1}{e}) $ is a maximum. |
find pairs of real numbers $x, y$ to satisfy this equation | Hint:
By expanding we have
\begin{align*}
(x+y)^2&=(x+3)(y-3)\\
\iff x^2+2xy+y^2&=xy-3x+3y-9\\
\iff x^2+xy+y^2+3x-3y+9&=0\\
\iff \tfrac12(x+y)^2+\tfrac12(x+3)^2+\tfrac12(y-3)^2&=0
\end{align*}
So we must have $$x+y=x+3=y-3=0$$ |
Prove that for any sets $A$ and $B$, if $\mathscr P(A)\cup\mathscr P(B)=\mathscr P(A\cup B)$ then either $A\subseteq B$ or $B\subseteq A$. | It’s correct, but it’s much too wordy and far more complicated than necessary. To prove a theorem of the form $X\implies Y\text{ or }Z$, it suffices to show that if $X$ holds and $Y$ does not, then $Z$ must hold. Here that means that we need only show that if $\wp(A\cup B)=\wp(A)\cup\wp(B)$ and $A\nsubseteq B$, then $B\subseteq A$. This can be done in five lines, even writing it up in fairly wordy fashion:
Suppose that $\wp(A\cup B)=\wp(A)\cup\wp(B)$, but $A\nsubseteq B$. $A\cup B\in\wp(A\cup B)$, so $A\cup B\in\wp(A)\cup\wp(B)$, and therefore $A\cup B\in\wp(A)$, or $A\cup B\in\wp(B)$. $A\subseteq A\cup B$, so if $A\cup B\in\wp(B)$, then $A\in\wp(B)$, and therefore $A\subseteq B$, contradicting our assumption that $A\nsubseteq B$; thus, we must instead have $A\cup B\in\wp(A)$. And $B\subseteq A\cup B$, so this implies that $B\in\wp(A)$ and hence that $B\subseteq A$. |
True/false: Let $v,w \in \mathbb{R}^2$. If $v \perp w$, then we have that $\left \| v+w \right \|= \left \| v \right \|+\left \| w \right \|$ | $$\left \| v+w \right \|^2=(v+w)\cdot(v+w)=\left \| v\right \|^2+\left \| w \right \|^2+2v\cdot w$$
Once $v\perp w$ then $v\cdot w=0$ and then we have
$$\left \| v+w \right \|^2=\left \| v\right \|^2+\left \| w \right \|^2\le (\left \| v\right \|+\left \| w \right \|)^2$$ |
Can find the angles of the triangle created by 3 points if I have each points compass bearing? | If you have the compass heading from each point to each of the other two, you have measured the angles: just take the difference of the headings. Even if it is a large triangle, so that the compass deviation varies between the points and the angles don't add to $180^\circ$, the measurement of the two angles is local and applies. |
Complex Differential Equation: $f'(z)=bf(z) \iff f(z)=ae^{bz}$ | $1 \Rightarrow 2$: If $f(z) = a e^{bz}$, then $f'(z) = bae^{bz} = bf(z)$.
$2 \Rightarrow 1$: Suppose $f'(z) = bf(z)$ and consider $g(z) = e^{-bz}f(z)$. We have:
$$
g'(z) = -be^{-bz}f(z) + e^{-bz}f'(z) = 0
$$
Hence $g$ is constant. We have $g(0) = f(0)$, so let $a = f(0)$. It follows that $g(z) = e^{-bz}f(z) = a$. Thus, $f(z) = ae^{bz}$. |
Compact set in the weak topology | Since $A$ is compact, it follows that, for each $f \in E'$, $\{f(x) \, \mid \, x \in A\}$ is bounded. (Why?)
Let $J : E \to (E')'$ be the canonical map so that $[J(x)](f) = f(x)$. Recall that $J$ is an isometry, which is a fancy way of saying $\|J(x)\| = \|x\|$.
By the first paragraph, $\{J(x) \, \mid \, x \in A\}$ is pointwise bounded. The Banach-Steinhaus Theorem (or Uniform Boundedness Principle) implies $\{J(x) \, \mid \, x \in A\}$ is actually bounded in $(E')'$. In view of the second paragraph, that means $A$ is bounded. |
Work out $\int_{0}^{1}t^2\cos(2t\pi)\tan(t\pi)\ln[\sin(t\pi)]dt$ | $$\color{blue}{I = \frac{{\ln 2}}{2\pi}(1 - \ln 2)} $$
We have
$$\begin{aligned}
I &= \frac{1}{{{\pi ^3}}}\int_0^\pi {{x^2}\cos (2x)\tan x\ln (\sin x)dx} \\
&= \frac{1}{{{\pi ^3}}}\int_{ - \pi /2}^{\pi /2} {{{\left( {\frac{\pi }{2} + x} \right)}^2}\cos (2x)\cot x\ln (\cos x)dx} \\
&= \frac{1}{{{\pi ^2}}}\int_{ - \pi /2}^{\pi /2} {x\cos (2x)\cot x\ln (\cos x)dx}
\end{aligned}$$
Now invoke the formula:
When $a-b-c>0$, we have $$\int_{ - \pi /2}^{\pi /2} {{e^{iax}}{{( - 2i\sin x)}^b}{{(2\cos x)}^c}dx} = \sin \left( {\frac{\pi }{2}(a - b - c)} \right)\int_0^1 {{x^{(a - b - c)/2 - 1}}{{(1 - x)}^c}{{(1 + x)}^b}dx} $$
For a proof, see here.
Differentiate both sides with respect to $a$ and $c$, then set $a=2,b=-1,c=1$ gives
$$i\int_{ - \pi /2}^{\pi /2} {\frac{{x{e^{2ix}}}}{{ - 2i\sin x}}(2\cos x)\ln (2\cos x)dx} = \frac{\pi }{2}\int_0^1 {\left[ {\frac{{(1 - x)\ln x}}{{1 + x}} - \frac{{(1 - x)\ln (1 - x)}}{{1 + x}}} \right]dx} $$
You should have no difficulty in evaluating RHS, giving
$$\int_{ - \pi /2}^{\pi /2} {x\cot x\cos 2x\ln (2\cos x)dx} = \frac{{\pi {{\ln }^2}2}}{2}$$
I left you as an exercise to show that
$$\int_{ - \pi /2}^{\pi /2} x\cot x \cos 2x dx = \pi (\ln 2 - \frac{1}{2})$$
This completes the proof. |
Understanding part of proof of the Fundamental Theorem of Galois Theory | Just evaluate, let $m\in M$, $\phi(\gamma\circ\alpha)(m)=(\gamma\circ\alpha)\mid_M(m)=(\gamma\circ\alpha)(m)=\gamma(\alpha(m))$, and $(\phi(\gamma)\circ\phi(\alpha))(m)=((\gamma\mid_M)\circ(\alpha\mid_M))(m)=\gamma\mid_M(\alpha\mid_M(m))=\gamma\mid_M(\alpha(m))=\gamma(\alpha(m))$, here you use that $\alpha(m)\in M $, which means that $\phi(\gamma\circ\alpha)=\phi(\gamma)\circ\phi(\alpha)$. |
Let $m$ be the largest real root of the equation $\frac3{x-3} + \frac5{x-5}+\frac{17}{x-17}+\frac{19}{x-19} =x^2 - 11x -4$ find $m$ | HINT:
Write the LHS as $$\left(\frac x{x-3}-1\right)+\left(\frac x{x-5}-1\right)+\left(\frac x{x-17}-1\right)+\left(\frac x{x-19}-1\right)$$ and this gives a constant term of $-4$. Equating this with the RHS, we have$$x\left(\frac1{x-3}+\frac1{x-5}+\frac1{x-17}+\frac1{x-19}\right)=x(x-11)$$ and note that $3,5$ are 'symmetrical' around $11$; that is, $17-11=11-5$ and $19-11=11-3$. |
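Following the hint through (a sketch of where it leads, worth re-deriving): with $t=x-11$ the equation becomes $\frac{2t}{t^2-64}+\frac{2t}{t^2-36}=t$, so $t=0$ or $t^4-104t^2+2504=0$, i.e. $t^2=52\pm10\sqrt2$, suggesting $m=11+\sqrt{52+10\sqrt2}$. A numerical confirmation:

```python
# Check the candidate largest root against the original equation.
from math import sqrt, isclose

m = 11 + sqrt(52 + 10 * sqrt(2))
lhs = 3/(m-3) + 5/(m-5) + 17/(m-17) + 19/(m-19)
rhs = m**2 - 11*m - 4
print(m)                  # ≈ 19.1328
print(isclose(lhs, rhs))  # True
```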
Algorithms to compute the class number | Yes, there are algorithms that are much better than brute force. For example, see section 5.4 in Henri Cohen's book "A course in computational algebraic number theory" for Shanks's baby-step giant-step algorithm $O(|D|^{1/4+\epsilon})$ - which is practical for negative discriminants $D$ up to 25 digits or more and, further, McCurley's sub-exponential algorithm (including Atkin's variant) which is $O(L(|D|)^\alpha)$ for $\alpha = \sqrt 2 \;$ or perhaps even $\alpha = \sqrt{9/8},$ where $\; L(x) = e^{\sqrt {\ln x \ln\ln x}}$. This can handle $D$ up to 50 digits or more (nowadays, with various improvements, probably around 80 digits or more - the prior numbers are quoted from the 1993 edition of Cohen's book - currently the bible for computational algebraic number theory). |
Using Stirling's formula to uniformly bound Bernoulli success probabilities | Hint: For every $0\leqslant k\leqslant i\leqslant n$, $${n\choose i}={n\choose k}\cdot\prod_{j=k}^{i-1}\frac{n-j}{j+1}\leqslant{n\choose k}\cdot\left(\frac{n-k}{k+1}\right)^{i-k},$$ hence, if $2k\geqslant n$, summing the geometric series with ratio $$r=\frac{n-k}{k+1}\lt1,$$ one gets $$\sum_{i=k}^n{n\choose i}\lt{n\choose k}\cdot\sum_{i\geqslant k}r^{i-k}={n\choose k}\cdot\frac{k+1}{2k+1-n}.$$ Now, for every $\gamma\gt\frac12$, use this for $$k=\lceil\gamma n\rceil,$$ with the guarantee that $2k\geqslant n$ since $\gamma\geqslant\frac12$, and apply Stirling's formula to the binomial prefactor $${n\choose \lceil\gamma n\rceil}.$$ |
Explain why there must be a value $r$ for $2<r<5$ such that $h(r)=0$?(using IVT) | $h(2)$ is $3$, and $h(5)$ is $-3$. Since $x(t)$ is continuous, $h(t) = x(t)-t$ is also. Therefore, by the Intermediate Value Theorem, $h(t)$ must take the value $0$ somewhere in $(2,5)$. |
Sketch of proof concerning a conjecture of Vasile Cirtoaje : $(1-x)^{(2x)^k}+x^{(2(1-x))^k}\leq 1$ with $k\geq1$ and $0<x\leq \frac{1}{2}$ | I have a second sketch/partial proof (tell me if I'm wrong):
We want to show:
Let $0.65\leq x<1$ and let $1\leq k\leq n$ be two natural numbers with $n\geq 10^{10}$; then we have:
$$P(k)=(1-x)^{(2x)^{1+\frac{k}{n}}}+x^{(2(1-x))^{1+\frac{k}{n}}}\leq 1\quad (I)$$
We use a form of Young's inequality, or the weighted AM-GM inequality:
Let $a,b>0$ and $0<v<1$; then we have:
$$av+b(1-v)\geq a^vb^{1-v}$$
Applying this theorem with:
$a=(x)^{(2(1-x))^{1+\frac{k}{n}}}$$\quad$$b=1$$\quad$$v=(2(1-x))^{\frac{1}{n}}$ we get :
$$(x)^{(2(1-x))^{1+\frac{k}{n}}}\leq (x)^{(2(1-x))^{1+\frac{k-1}{n}}}(2(1-x))^{\frac{1}{n}}+1-(2(1-x))^{\frac{1}{n}}$$
Now the idea is to show:
$$(1-x)^{(2x)^{1+\frac{k}{n}}}\leq 1-\Big((x)^{(2(1-x))^{1+\frac{k-1}{n}}}(2(1-x))^{\frac{1}{n}}+1-(2(1-x))^{\frac{1}{n}}\Big)$$
Or:
$$(1-x)^{(2x)^{1+\frac{k}{n}}-\frac{1}{n}}+2^{\frac{1}{n}}(x)^{(2(1-x))^{1+\frac{k-1}{n}}}\leq 2^{\frac{1}{n}}\quad (0)$$
Now, for $x\in[0.65,1)$ we have:
$$2^{\frac{1}{n}}(1-x)^{(2(1-x))^{1+\frac{k-1}{n}}}\geq (1-x)^{(2(x))^{1+\frac{k}{n}}-\frac{1}{n}}\quad(1)$$
Putting $(1)$ into $(0)$ we can use a proof by induction and conclude with Refinements of the inequality $f(x)=x^{2(1-x)}+(1-x)^{2x}\leq 1$ for $0<x<0.5$
Hope it inspires someone to achieve this ! |
Use continuity to prove inequality in $\textbf{R}^n$ | By continuity, for each $\varepsilon>0$ there exists $\delta>0$ such that $|a-x|<\delta$ implies $|f(a)-f(x)|<\varepsilon$. If you take $r$ as the $\delta$ corresponding to $\varepsilon=d/2$, the triangle inequality will finish it for you. |
Prove that $\log_{10}(1-x^{-m}) \geq -2 \cdot x^{-m}$ for $x>2^{1/m}$. | PRIMER: ELEMENTARY INEQUALITY
In THIS ANSWER, I showed using only the limit definition of the exponential function along with Bernoulli's Inequality that the logarithm function satisfies the inequalities
$$\bbox[5px,border:2px solid #C0A000]{\frac{x-1}{x}\le \log(x)\le x}\tag 1$$
for $x>0$.
For $x>0$ and $m>0$, the domain of the function $f(x)=\log_{10}(1-x^{-m})$ is $x>1$. If in addition, we have $x>2^{1/m}$, then we can write
$$\begin{align}
\log_{10}(1-x^{-m})&=\frac{\log_e (1-x^{-m})}{\log_e(10)} \tag 2\\\\
&\ge \frac{-x^{-m}}{\log_e(10) (1-x^{-m})} \tag 3\\\\
&\ge \frac{-x^{-m}}{\log_e(10) (1-\left(2^{1/m}\right)^{-m})}\\\\
&=\frac{-2x^{-m}}{\log_e(10)}\\\\
&\ge -2x^{-m}
\end{align}$$
as was to be shown!
In going from $(2)$ to $(3)$, we made use of the left-hand side inequality in $(1)$. |
What is the number of ring isomorphisms from $\mathbb{Z}^n$ to $\mathbb{Z}^n$. | One of my friends gave me a suggestion to find the number of $n×n$ invertible matrices with components $1$ or $0$.
This is OEIS A055165; there appears to be no simple formula for it. |
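A brute-force sketch reproducing the first terms of A055165:

```python
# Enumerate all n×n 0/1 matrices and count the invertible ones.
from itertools import product
import numpy as np

for n in range(1, 4):
    count = sum(
        round(np.linalg.det(np.array(bits).reshape(n, n))) != 0
        for bits in product((0, 1), repeat=n * n)
    )
    print(n, count)  # 1 1, 2 6, 3 174
```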
Solve the integral equation | $$y(x) = 2 + \int_8^x (t-t y(t)) dt \,\, (\clubsuit)$$
Differentiating with respect to $x$, we get that
$$y'(x) = x - x y(x) \implies y'(x) + x \cdot y(x) =x \implies y'(x) \cdot e^{x^2/2} + x \cdot e^{x^2/2} \cdot y(x) = x \cdot e^{x^2/2}$$
Hence, we get that
$$\dfrac{d \left(y(x) \cdot e^{x^2/2}\right)}{dx} = \dfrac{d (e^{x^2/2})}{dx}$$
Hence,
$$y(x) = 1 + c \cdot e^{-x^2/2} \,\, (\spadesuit)$$
To find $c$, plug in $(\spadesuit)$ into $(\clubsuit)$. |
Why does $arg(z^{2})\neq 2arg(z)$? | They are distinguishing between $arg$ (lower case) and $Arg$ (uppercase).
$arg (z) = Arg(z) + 2k\pi$ for any $k$.
So $arg(z) + arg(z) = Arg(z) + 2k \pi + Arg(z) + 2m\pi = 2Arg(z) + 2n\pi$ for any possible integer value of $n=k+m$.
But $2arg(z) = 2(Arg(z) + 2k\pi) = 2Arg(z) + 4k\pi$ for any integer value of $k$.
So $arg(z) + arg(z) \ne 2arg(z)$.
Which is probably what they should have said at the start.
....
To be honest.... I'm not loving the text.
I prefer to think of arguments as modulo classes, and when we say $arg(z) = blah$ we mean $arg(z) \equiv blah \mod 2\pi$. (I hope you know what that notation means. Formally it means $arg(z) = \{blah + 2k\pi|k \in \mathbb Z\}$.)
Hence I'd say $2arg(z)= 2blah$ to mean $2arg(z) \equiv 2blah \mod 2\pi$.
i.e. $2\arg(z) = \{2blah + 2k\pi|k \in \mathbb Z\}$. And $2\arg(z) \ne \{2\theta |\theta \in arg(z)\}=\{2blah + 4k\pi|k \in \mathbb Z\}$.
So in MY book, I would say that $arg(z^2)$ DOES equal $2arg(z)$.
I honestly don't see the point of confusing students this way.
...
On the other hand it will be very important that $arg (z^{\frac 1n}) \ne \frac 1n arg(z)$. It is $\frac 1n arg(z) + \frac {2k}n\pi$. |
Isomorphic Mapping Between $\mathbb{Z}_3 [x] / \langle x^2 + x + 2 \rangle$ and $\mathbb{Z}_3 [x] / \langle x^2 + 1\rangle$ | The function sending each element to itself in the other field isn't an homomorphism, because the product isn't the same, for example, let $A=\mathbb{Z}_3 [x] / \langle x^2 + x + 2\rangle$ and $B=\mathbb{Z}_3 [x] / \langle x^2 + 1\rangle$. In $A$, $$x(x+1)=x^2+x=-2=1$$
but in $B$, you have that
$$x(x+1)=x^2+x=x+1$$
So the property that homomorphism preserves the product is not correct.
To build a correct homomorphism, see that $f\colon B\longrightarrow A$ sends $f(a+bx)=f(a)+f(b)f(x)$, and since $f(1_B)=1_A$, it must happen that $f(a)=f(1\cdot a)=a$ and $f(b)=b$, so you only need to see where goes $f(x)$
Since your $x_B\in B$ has the property that $x_B^2=-1=2$, you must send it to another element in $A$ that has that property, and that is
$$(x_A^2)^2=(-x_A-2)^2=x_A^2+4x_A+4=(-x_A-2)+x_A+4=2,$$ using $4x_A=x_A$ in characteristic $3$.
So the homomorphism that sends $x_B\mapsto x_A^2$ and $1_B\mapsto 1_A$ will be your answer. Let's check it:
$$f((a+bx)+(c+dx))=f(a+c+bx+dx)=a+bf(x)+c+df(x)=f(a+bx)+f(c+dx)$$
In the other part,
$$f((a+bx)(c+dx))=f(ac+adx+bcx+bdx^2)=f(ac+adx+bcx+2bd)$$
$$=ac+adf(x)+bcf(x)+2bd=ac+adx_A^2+bcx_A^2+x_A^4bd$$
$$=(a+bx_A^2)(c+dx_A^2)=f(a+bx)f(c+dx)$$ |
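A brute-force sketch verifying the isomorphism (the pair encoding is mine):

```python
# Elements are pairs (a, b) meaning a + b·x, with coefficients mod 3.
def mul_A(u, v):  # in A: x² = -x - 2 = 2x + 1 (mod 3)
    a, b = u; c, d = v
    return ((a*c + b*d) % 3, (a*d + b*c + 2*b*d) % 3)

def mul_B(u, v):  # in B: x² = -1 = 2 (mod 3)
    a, b = u; c, d = v
    return ((a*c + 2*b*d) % 3, (a*d + b*c) % 3)

def f(u):  # x_B ↦ x_A² = 2x + 1, extended linearly
    a, b = u
    return ((a + b) % 3, (2*b) % 3)

elems = [(a, b) for a in range(3) for b in range(3)]
assert all(f(mul_B(u, v)) == mul_A(f(u), f(v)) for u in elems for v in elems)
assert all(f(((u[0]+v[0]) % 3, (u[1]+v[1]) % 3)) ==
           ((f(u)[0]+f(v)[0]) % 3, (f(u)[1]+f(v)[1]) % 3)
           for u in elems for v in elems)
assert len({f(u) for u in elems}) == 9  # bijective
print("f is a ring isomorphism")
```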
Determine the multiplicity of a root in determinant | Adding or subtracting one row to another does not change the value of the determinant. So $$\det\begin{bmatrix}1-x&1&1&1\\1&1+x&1&1\\1&1&1-z&1\\1&1&1&1+z\end{bmatrix}=\det\begin{bmatrix}1&1&1&1+z\\0&x&0&-z\\0&0&z&z\\0&0&x&x+xz\end{bmatrix}=x^2z^2$$
NB It is $x=0$ and $z=0$ that are the roots of the polynomial.
EDIT: Let's expand $\det(A(x))$ as a series in $x$:
$\det(A(0))=0$ since it contains a repeated column of $1$s. This shows that $x=0$ is a root of $p(x,z)$.
To get the first order terms in $x$, recall that determinants are linear in each column separately, hence expanding the first and second columns gives \begin{align}\det(A(x))&=\det\begin{bmatrix}1&1&1&1\\1&1+x&1&1\\1&1&1-z&1\\1&1&1&1+z\end{bmatrix}-\det\begin{bmatrix}x&1&1&1\\0&1+x&1&1\\0&1&1-z&1\\0&1&1&1+z\end{bmatrix}\\
&=\det\begin{bmatrix}1&0&1&1\\1&x&1&1\\1&0&1-z&1\\1&0&1&1+z\end{bmatrix}-\det\begin{bmatrix}x&1&1&1\\0&1&1&1\\0&1&1-z&1\\0&1&1&1+z\end{bmatrix}-\det\begin{bmatrix}x&0&1&1\\0&x&1&1\\0&0&1-z&1\\0&0&1&1+z\end{bmatrix}\\
&=-x^2\det\begin{bmatrix}1-z&1\\1&1+z\end{bmatrix}\end{align} In the second equation, the first two determinants cancel out by noticing that the second can be obtained from the first by swapping the first two columns and rows. It is this cancellation that annihilates the $x$ term and produces the double root $x^2$. |
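A symbolic check of the determinant (a sketch assuming SymPy):

```python
import sympy as sp

x, z = sp.symbols('x z')
A = sp.Matrix([[1 - x, 1, 1, 1],
               [1, 1 + x, 1, 1],
               [1, 1, 1 - z, 1],
               [1, 1, 1, 1 + z]])
print(sp.factor(A.det()))  # x**2 * z**2
```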
Computing a singular integral | $$I_s=\int_1^\infty\left(\frac{r}{(r^2-1)^s}-\frac{1}{r^{2s-1}}\right)dr=\frac{1}{2-2s}\Big((r^2-1)^{1-s}-r^{2-2s}\Big)\Bigg|_{r\to 1}^{r\to\infty}=\frac{1}{2-2s}.$$ |
Projective modules and lifting property | Welcome to MSE!
Let $f : N \to P$ and assume $P$ projective. We want to find a section $g : P \to N$.
Now, since $P$ is projective, we have $R^m = M \oplus P$. Then we can view $f$ as a map $N \to R^m$ by composing with the inclusion $ \iota : P \hookrightarrow M \oplus P$.
Now we use the fact that $R^m$ is free: If $f : N \to R^m$, then let $\{e_i\}$ be the basis elements in $\text{Im}(f) = P$. Since they are in the image, we can find $\{n_i\} \subseteq N$ so that $f(n_i) = e_i$ for each $i$. Now we can define $g : R^m \to N$ by sending the relevant $e_i$ to the $n_i$ and sending the remaining elements of the basis to $0$ (or anywhere really). Now it is clear that $\iota \circ f \circ g = \text{id}$. Since $\iota$ is an injection, we thus have $f \circ g = \text{id}$ too.
As an aside, this is how theorems regarding projective modules usually go when we unpack the category-theoretic language. The only thing we really know is that $P$ is a summand of a free module, so we will use the inclusion $\iota : P \to R^m$ and the projection $\pi : R^m \to P$ as necessary to transfer our problem to the free-module setting (which is very easy to work with) and then transfer back.
Edit:
As for why this theorem doesn't work for every module: Consider the surjection
$\pi : \mathbb{Z} \to \mathbb{Z}/2$ viewed as $\mathbb{Z}$-modules. There can be no section $g : \mathbb{Z}/2 \to \mathbb{Z}$ since there are no elements of order $2$ in $\mathbb{Z}$!
I hope this helps ^_^ |
The domain of $f(x) = a^x + a^{-x}, a>0$. | Yes, the domain is $\mathbb R$. But the fact that the exponential function never attains $0$ is not relevant here. All you need to know is that, for any real number $x$, both $a^x$ and $a^{-x}$ are defined.
And, yes, it is even: $f(-x)=a^{-x}+a^x=a^x+a^{-x}=f(x)$. |
In signal processing, every where you see infinity. Why? | Fourier analysis requires a function to be defined on a (locally compact, abelian) group. The set of real numbers is a group under addition; this is the setting of Fourier transform. So is the circle $\mathbb R/(a\mathbb Z)$ for some $a>0$; this is the setting of Fourier series. So is the cyclic group $\mathbb Z/(n\mathbb Z)$, which is the setting of the discrete Fourier transform.
An interval $[a,b]$ is not a group. So, even though we may only have a function defined on such an interval, from the mathematical point of view the Fourier transform is taken on the real line. The function can be set to $0$ outside of $[a,b]$. |
how to prove boolean identities | For problem 2:
(¬p¬q)∨(q¬r)∨(¬p¬r)
= ¬p¬q ∨ q¬r ∨ ¬p¬r // simplified notation
= ¬p¬q ∨ q¬r ∨ (1)¬p¬r
= ¬p¬q ∨ q¬r ∨ (q ∨ ¬q)¬p¬r
= ¬p¬q ∨ q¬r ∨ q¬p¬r ∨ ¬q¬p¬r
|
`--------------.
|
= q¬r ∨ q¬p¬r ∨ ¬p¬q ∨ ¬q¬p¬r
= q¬r(1 ∨ ¬p) ∨ ¬p¬q(1 ∨ ¬r)
= q¬r(1) ∨ ¬p¬q(1)
= q¬r ∨ ¬p¬q
q.e.d |
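A truth-table check of the identity (a sketch; the dropped term ¬p¬r is the consensus of the other two):

```python
from itertools import product

for p, q, r in product((False, True), repeat=3):
    lhs = (not p and not q) or (q and not r) or (not p and not r)
    rhs = (q and not r) or (not p and not q)
    assert lhs == rhs
print("identity holds for all 8 assignments")
```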
Simplifying $\nabla[ \phi( \parallel \mathbf{x} - \mathbf{\xi}_i \parallel ) ]$ | For the function $\rho({\bf x}):=\|{\bf x}\|$ one has ${\partial \rho\over\partial x_i}={x_i\over \|{\bf x}\|}$ $\ (1\leq i\leq n)$, or $$\nabla \rho({\bf x})={{\bf x}\over \|{\bf x}\|}\ .\qquad(1)$$ Consider now the function
$$f({\bf x}):=\phi(\|{\bf x}-\xi\|)=\phi\bigl(\rho({\bf x}-\xi)\bigr)\qquad(2)$$
where $\xi$ is fixed and $\phi:\ {\mathbb R_{\geq0}}\to{\mathbb R}$ is some scalar function of a real variable $r$. Then by the one-variable chain rule and (1) one has
$${\partial f\over\partial x_i}=\phi'\bigl(\rho({\bf x}-\xi)\bigr)\ {\partial \rho({\bf x}-\xi)\over\partial x_i} =\phi'(\|{\bf x}-\xi\|){x_i-\xi_i \over \|{\bf x}-\xi\|}\qquad(1\leq i\leq n)\ .$$
These $n$ scalar equations can be summarized to
$$\nabla f({\bf x})=\phi'(\|{\bf x}-\xi\|){{\bf x}-\xi \over \|{\bf x}-\xi\|}\ .\qquad(3)$$
If one is sufficiently fluent with multidimensional calculus one can of course omit the use of coordinates altogether and pass directly from (2) to (3), using (1). |
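A finite-difference check of formula (3) (a sketch; the test profile $\phi$ and the sample points are my choice):

```python
# Test with φ(r) = exp(-r²) in dimension 3.
import numpy as np

phi = lambda r: np.exp(-r**2)
dphi = lambda r: -2 * r * np.exp(-r**2)

xi = np.array([0.3, -1.0, 2.0])
x = np.array([1.2, 0.5, -0.7])
r = np.linalg.norm(x - xi)

analytic = dphi(r) * (x - xi) / r  # formula (3)
h = 1e-6
numeric = np.array([                # central differences, coordinate-wise
    (phi(np.linalg.norm(x + h*e - xi)) - phi(np.linalg.norm(x - h*e - xi))) / (2*h)
    for e in np.eye(3)
])
print(np.allclose(analytic, numeric, atol=1e-6))  # True
```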
How do I show that $W_{x_0}$ is a maximal subspace of $\mathscr{C}(X, \mathbb{F})$? | There are two ways to solve the question.
The first one is to use the general result that says that the kernel of a nonzero linear form is a maximal subspace.
The second one is to prove it in our particular case. To do so, take $g \notin W_{x_0}$. By hypothesis, $g(x_0) \neq 0$. Now for any $f$
$$f=g \frac{f(x_0)}{g(x_0)}+ (f-g \frac{f(x_0)}{g(x_0)} )$$
Notice that $(f-g \frac{f(x_0)}{g(x_0)} )$ vanishes at $x_0$. |
Compute the integral $\int\limits_0^1 \frac{3x}{\sqrt{4-3x^2}} dx $? | Hint:
If you let $u=4-3x^2,$ then all will be fine and dandy. |
Conditional expected value of mutlitple draws from uniform distribution | Based on the comments above, this is what I've come to:
There are $m$ i.i.d. draws of X from $U(0,1)$, with $x_{(1)}\leq\ldots\leq x_{(n)}\leq\ldots\leq x_{(m)}$ being the order statistics. Given some $x_{(i)}=x_i, i\in N=\{1,\ldots,n\}$, I want to compute the expected average of the $(n-1)$ remaining lowest draws - conditional on knowing $x_i, i\in N$, but not knowing the rank $(i)$ itself! That is, I know that the draw $x_i$ is among the $n$ lowest, but not its precise order.
With the help of the comments, the solution seems to be that this expectation equals
$$
\underbrace{\frac{1}{(n-1)}}_{1.}\cdot\underbrace{\frac{1}{\sum_{j=1}^n \binom{m-1}{j-1}\left(1-x_i\right)^{m-j}x_i^{j-1}}}_{2.}\cdot\sum_{j=1}^n\underbrace{\binom{m-1}{j-1}(1-x_i)^{m-j}x_i^{j-1}}_{3.}\underbrace{\left((j-1)\frac{x_i}{2}+\sum_{k=1}^{n-j}(x_i+\frac{k(1-x_i)}{m-j+1})\right)}_{4.}
$$
where the terms represent:
1. the average ($n-1$ items),
2. normalisation,
3. the probability of $x_i$ being at order position $j$,
4. the conditional expectation of the remaining $(n-1)$ samples if $x_i$ is at order position $j$.
If anyone can confirm this (I posted it as a comment previously), I'd be very grateful. Also, if one of the commenters who helped me with the solution would like to post this as an answer themselves, I am more than happy to mark it as accepted! |
Is there a "computable" countable model of ZFC? | Great question! This was already considered at Mathoverflow; the answer is no. See https://mathoverflow.net/questions/12426/is-there-a-computable-model-of-zfc.
I've made this community wiki so I don't get a reputation bonus from others' hard work; I would vote to close as a duplicate, but the system won't let me, since the duplicate question isn't on math.stackexchange.
Note that, despite this, we can ask: how complicated must a countable model of ZFC be? Since ZFC is a computable first-order theory, any PA degree computes such a model, and there are many such degrees; in fact, there are PA degrees which are low, that is, whose jump is Turing reducible to the Halting problem (= as small as possible). Conversely, the arguments at the mathoverflow question cited above show that any countable model of ZFC is of PA degree, so this is an exact characterization.
Note that very little specifically about ZFC is used here! Specifically, and addressing your general question: Henkin's construction shows that any PA degree computes a model of any computable first-order theory, and that conversely any degree which is not PA fails to compute a model of some computable first-order theory; though of course, for specific theories, the set of degrees of models of that theory might be more complicated. This set, by the way, has been studied a little; it is called the spectrum of the theory. See http://www.math.wisc.edu/~andrews/spectra.pdf. |
Surjective/injective linear function | Proof of Surjectivity.
The image of $f$ is a subspace. But $\Bbb R^3$ has no proper 3-dimensional subspaces. So the image is all of $\Bbb R^3$. $\square$
Proof of Injectivity.
If $f(x)=f(y)$ for $x\not=y$ then $f(e_1)=0$ for $e_1:=x-y\not=0$. If $\{e_1,e_2,e_3\}$ is a basis of $\Bbb R^3$, then a vector $v= v_1e_1+v_2e_2+v_3e_3$ gets mapped to
$$f(v)=v_1f(e_1)+v_2f(e_2)+v_3f(e_3)=0+v_2f(e_2)+v_3f(e_3).$$
So the image would be at most 2-dimensional, contradicting the surjectivity proved above. Therefore $f$ must be injective. $\square$ |
If $X \subseteq A \cup B \implies X \subseteq A$ or $X \subseteq B$ then $A \subseteq B$ or $B \subseteq A$ | Prove the contrapositive. Assume that neither $A$ nor $B$ is contained in the other. Then $\exists x \in A \setminus B \text{ and } \exists y \in B \setminus A, \text{ so }X= \{x, y \} \subseteq A \cup B \text{ but } X \not\subseteq A \text{ and } X \not\subseteq B$ and the implication on the left fails. |
Find all values in polar form. | Using that $-3125 = -5^5$, we can determine that the radii of our polar points are all 5.
Then, by the property of our roots all being equally spaced we can see that there must be an angle of $72^\circ$ between each root.
Thus the polar coordinates are $(5,180^\circ), (5, 108^\circ), (5, 36^\circ), (5, -36^\circ), (5, -108^\circ)$
Convert to radians if required. |
Matrix transformation and boundedness | The result is true in finite dimensional spaces but false in infinite dimensional Hilbert spaces. It is well known that there exist discontinuous linear functionals in the latter case (hence also discontinuous linear maps from the space into itself). So consider a finite dimensional space now. If $\{e_1,e_2,..,e_n\}$ is an orthonormal basis then any vector $v$ can be written as $\sum a_ie_i$. So we get $Av=\sum a_i Ae_i$. Let $C$ be the maximum of the numbers $\|Ae_i\|, 1 \leq i \leq n$. Then $\|Av\| \leq C\sum |a_i|\leq C\sqrt n(\sum |a_i|^{2})^{1/2} =C\sqrt n \|v\|$. |
Continuous $\sum_1^\infty u_n(x)$ converge uniformly to F(x). $\lim_{x\to\infty}u_n(x)=b_n<\infty$. Then $\lim_{x\to\infty}F(x)=\sum_1^\infty b_n$ | Claim 1: $\sum_{n=1}^{\infty}b_{n}$ converges.
Let $S_{n}(x)=\sum_{k=1}^{n}u_{k}(x)$. Let $s_{n}=\sum_{k=1}^{n}b_{k}$.
We show that $(s_{n})$ is a Cauchy sequence. Fix $c>0$.
Let $\varepsilon>0$. Since $\sum_{n}u_{n}(x)$ converges uniformly
on $[c,\infty)$, there exists $N$ such that $|S_{n}(x)-S_{m}(x)|<\varepsilon$
whenever $N\leq m<n$ and $x\in[c,\infty)$. That is, $\left|\sum_{k=m+1}^{n}u_{k}(x)\right|<\varepsilon.$
Letting $x\rightarrow\infty$, we have $|s_{n}-s_{m}|=|\sum_{k=m+1}^{n}b_{k}|\leq\varepsilon$.
Therefore, $(s_{n})$ is a Cauchy sequence and hence $\sum_{n=1}^{\infty}b_{n}$
converges.
Claim 2: $\lim_{x\rightarrow\infty}F(x)=\sum_{n=1}^{\infty}b_{n}$.
We continue to adopt notations we have used in Claim 1. Denote $s=\sum_{n=1}^{\infty}b_{n}$.
Fix $m\geq N$ such that $|s_{m}-s|<\varepsilon$. Observe that $S_{m}(x)\rightarrow s_{m}$
as $x\rightarrow\infty$, so there exists $x_{0}>c$ such that $|S_{m}(x)-s_{m}|<\varepsilon$
whenever $x\in[x_{0},\infty)$. Now, let $x\in[x_{0},\infty)$ be
arbitrary. Recall that for any $n>m$, we have $|S_{n}(x)-S_{m}(x)|<\varepsilon$.
Letting $n\rightarrow\infty$, we obtain $|F(x)-S_{m}(x)|\leq\varepsilon$.
Finally, we have estimation:
\begin{eqnarray*}
& & |F(x)-s|\\
& \leq & |F(x)-S_{m}(x)|+|S_{m}(x)-s_{m}|+|s_{m}-s|\\
& < & 3\varepsilon.
\end{eqnarray*}
Therefore $\lim_{x\rightarrow\infty}F(x)=\sum_{n=1}^{\infty}b_{n}$.
Remark: Continuity of $u_{k}$ or $F$ is irrelevant. |
The multiplication of rank for finite projective modules | Let me use a slightly different notation: We have $R' \to R \to M$, where $R$ is $R'$-projective of rank $s$ and $M$ is $R$-projective of rank $r$.
The previous exercise states that
if $M$ is $R$-projective of rank $r$, then for any maximal ideal $m$ of $R$, $\dim_{R/m} M/mM = r$.
Since you already showed that $M$ is $R'$-projective, it suffices to show that for a maximal ideal $m$ of $R'$, $\dim _{R'/m} M/mM = sr$.
The extension $R'/m \to R/ mR$ is integral and $R'/m$ is a field.
Hence $R / mR$ is a finite product of Artinian local rings, and $M /mM$ is a module over this product. Write
$$R/ mR \cong R_1 \times R_2 \times \cdots \times R_q.$$
Here, each $R_i$ is a localization of $R / mR$ at a maximal ideal, and $R_i$ is finite over $R'$.
We conclude that
$\dim_{R'/m} R/mR = \sum \dim_{R'/m} R_i = s$.
Now, we apply the exercise to $R$ and $M$. As $M$ is $R$-projective, for each maximal ideal $n$ of $R$, $\dim_{R/n} M/ nM = r$.
Let $n_i$ be the maximal ideal which is the kernel of the morphism
$$
R \to R/ mR \cong R_1 \times R_2 \times \cdots \times R_q \to R_i.$$
Then $(R/mR)/ n_i(R/mR) \cong R/n_i R$, and $\dim_{R/{n_i}} M/{n_i}M = r$.
Lastly, we put these numbers together. We have that
$M/mM \cong \oplus R_i^{\,r}$, and
$$\dim_{R'/m} M/mM
= \sum r \dim_{R'/m} R_i = r \sum \dim_{R'/m} R_i = r \dim_{R'/m} R/mR = rs.$$ |
Probability of complex number being real. [AMC 12 2015] | Define $f(a,b) = (\cos a\pi + i \sin b\pi)^4$. If $\sin b\pi = 0$ or $\cos a\pi = 0$, then $f(a,b)$ is real for obvious reasons. If neither is zero, then $f(a,b)$ is real only if $$\cos^2 a \pi - \sin^2 b \pi = 0$$ which would imply $\cos a \pi = \pm \sin b \pi$, or equivalently, $$a \pm b - \frac{1}{2} \in \mathbb Z.$$ From here, enumeration is straightforward. |
Find the altitude of a tetrahedron whose faces are congruent triangles | Since "tetrahedron" is usually meant to designate the regular polyhedron,
we had better say that we are constructing a triangular pyramid.
Intuitively speaking, we are starting with two identical sets of three "poles", having different
lengths but respecting the triangle inequality.
With one set we construct the base, $\triangle{ABC}$, and we designate, as usual, with $a$ the
length of the edge opposite $A$, with $\alpha$ the angle at $A$, etc.
Thereafter, with the second set of poles we are going to construct a "tent" over the base.
To do that, we order the poles in non-decreasing order, say $a \le b \le c$.
We place the shortest $a$ in $A$, the longest $c$ in $C$, and $b$ in $B$, and join at the top $V$.
We have obtained four exact copies of the triangle $ABC$, arranged congruently.
And there is no other way to arrange them.
The sum of the angles in $V$ is $\pi < 2\pi$, so everything looks to be in place.
Now we place a reference system with the origin in one vertex, and one edge of the base along
the $x$ axis, as in the sketch.
To obtain the position of $V$, we impose to the vector $\bf b = \vec{BV}$ to make
an angle $\gamma$ with $\bf a = \vec{BC}$ and an angle $\alpha$ with $\bf c = \vec{BA}$,
and to have a length of $b$.
That means
$$
\left\{ \matrix{
\left( {\matrix{ a & 0 & 0 \cr {c\cos \beta } & {c\sin \beta } & 0 \cr } } \right)
\left( {\matrix{ {v_x } \cr {v_y } \cr {v_z } \cr } } \right)
= \left( {\matrix{ {ab\cos \gamma } \cr
{cb\cos \alpha } \cr } } \right) \hfill \cr
\left( {\matrix{ {v_x } & {v_y } & {v_z } \cr } } \right)
\left( {\matrix{ {v_x } \cr {v_y } \cr {v_z } \cr } } \right)
= b^{\,2} \hfill \cr} \right.
$$
which is readily solved as
$$ \bbox[lightyellow] {
\left\{ \matrix{
v_x = b\cos \gamma \hfill \cr
v_y = b\left( {\cos \alpha - \cos \beta \cos \gamma } \right)/\sin \beta \hfill \cr
v_z = b\sqrt {1 - \cos ^{\,2} \gamma - \left( {\cos \alpha - \cos \beta \cos \gamma } \right)^{\,2} /\sin ^{\,2} \beta } \hfill \cr} \right.
}\tag{1}$$
$v_z$ is the height needed to compute the volume, which thus will be
$$ \bbox[lightyellow] {
\eqalign{
& V = {1 \over 6}abc\sqrt {\left( {1 - \cos ^{\,2} \gamma } \right)\sin ^{\,2} \beta
- \left( {\cos \alpha - \cos \beta \cos \gamma } \right)^{\,2} } = \cr
& = {1 \over 6}abc\sqrt {1 + 2\cos \alpha \cos \beta \cos \gamma
- \left( {\cos ^{\,2} \gamma + \cos ^{\,2} \beta + \cos ^{\,2} \alpha } \right)} \cr}
}\tag{2}$$
which can be then reformulated in various ways by using the sine or cosine laws.
But... whoever has practically built any such tent, or origami, or the like, might suspect that
it can't always go so... straight.
And in fact, if one of the angles, e.g. $\gamma$, is a right angle, in the above we get $v_x=0$.
Some investigation shows that the tent has become flat, with $AVBC$ becoming a rectangle.
And some more investigation shows that the construction (always dealing with four equal triangles) is:
- possible when all the angles are less than $\pi/2$ (acute triangles),
- becomes flat if one angle is $\pi/2$,
- and is impossible if the triangles are obtuse.
Note that, by replacing $\alpha$ with $\pi - \beta -\gamma$ and then putting it back, we can rewrite the result above as
$$ \bbox[lightyellow] {
\eqalign{
& V^{\,2} = {1 \over {36}}a^{\,2} b^{\,2} c^{\,2} \left( {\left( {1 - \cos ^{\,2} \gamma } \right)\sin ^{\,2} \beta - \left( {\cos \alpha - \cos \beta \cos \gamma } \right)^{\,2} } \right) = \cr
& = {1 \over {36}}a^{\,2} b^{\,2} c^{\,2} \left( {\sin ^{\,2} \gamma \sin ^{\,2} \beta - \left( {\cos \left( {\pi - \left( {\beta + \gamma } \right)} \right) - \cos \beta \cos \gamma } \right)^{\,2} } \right) = \cr
& = {1 \over {36}}a^{\,2} b^{\,2} c^{\,2} \left( {\sin ^{\,2} \gamma \sin ^{\,2} \beta - \left( {2\cos \beta \cos \gamma - \sin \beta \sin \gamma } \right)^{\,2} } \right) = \cr
& = {1 \over {36}}a^{\,2} b^{\,2} c^{\,2} \left( { - 4\cos ^{\,2} \beta \cos ^{\,2} \gamma + 4\cos \beta \cos \gamma \sin \beta \sin \gamma } \right) = \cr
& = {1 \over 9}a^{\,2} b^{\,2} c^{\,2} \cos \alpha \cos \beta \cos \gamma \cr}
}\tag{2.a}$$
which coincides with the more mathematically elegant method provided by Blue.
And we can likewise rewrite (1) more compactly.
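As a numerical sanity check of (1) and (2.a), here is a short sketch (the acute triangle with sides $6,7,8$ is an arbitrary choice; the reference volume comes from the scalar triple product of the edge vectors at $B$):

```python
import numpy as np

a, b, c = 6.0, 7.0, 8.0  # sides of an acute triangle
alpha = np.arccos((b**2 + c**2 - a**2) / (2 * b * c))
beta  = np.arccos((a**2 + c**2 - b**2) / (2 * a * c))
gamma = np.arccos((a**2 + b**2 - c**2) / (2 * a * b))

# (1): coordinates of V in the frame with B at the origin and BC on the x axis
vx = b * np.cos(gamma)
vy = b * (np.cos(alpha) - np.cos(beta) * np.cos(gamma)) / np.sin(beta)
vz = b * np.sqrt(1 - np.cos(gamma)**2
                 - (np.cos(alpha) - np.cos(beta) * np.cos(gamma))**2 / np.sin(beta)**2)

B = np.zeros(3)
C = np.array([a, 0, 0])
A = np.array([c * np.cos(beta), c * np.sin(beta), 0])
V = np.array([vx, vy, vz])

vol_triple = abs(np.dot(np.cross(C - B, A - B), V - B)) / 6  # reference volume
vol_2a = np.sqrt(a*a * b*b * c*c * np.cos(alpha) * np.cos(beta) * np.cos(gamma)) / 3
print(vol_triple, vol_2a)  # the two values should agree
```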
The boundary of a set is subset of the boundary of the closure of the set. | Tip: It's not true.
Counterexample: Notice that $\mathbb{R}$ is a normed vector space under the Euclidean norm. Let $A$ be the set $[0,1] \smallsetminus \{1/2\}$, the closed unit interval without its midpoint.
The boundary of $A$ is $\{0,1/2,1\}$. The closure of $A$ is $[0,1]$, the closed unit interval. The boundary of the closure is $\{0,1\}$. Since $1/2 \not \in \{0,1\}$, the boundary of $A$ is not a subset of the boundary of the closure of $A$. |
Estimating Cable Length on a Reel | 1. Weigh the reel.
2. Weigh or estimate the weight of an empty reel (may be negligible).
3. Weigh a metre of cable.
$$\text{Cable length} = \frac{[1] - [2]}{[3]}$$
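For instance, a toy calculation with made-up figures:

```python
# Made-up figures: a 9.4 kg full reel, an estimated 1.2 kg empty reel,
# and 0.082 kg per metre of cable.
reel_kg, empty_reel_kg, kg_per_metre = 9.4, 1.2, 0.082
print((reel_kg - empty_reel_kg) / kg_per_metre)  # 100.0 metres
```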
Not mathematics. |
Prove that a closed ball $B$ is closed and bounded in $(C[0,1], d^*)$ | If you know you're in a metric space $(X,d)$, any set of the form $D = \{x: d(p,x) \le r \}$, where $r > 0$ and $p \in X$, is closed and bounded. Nothing special about $C([0,1])$. You can show the complement is open: if $d(q,p) > r$, then the triangle inequality shows that $B(q, d(q,p) - r)$ is disjoint from $D$.
How to solve $\int_{0}^{\infty} e^{-ax} \frac{x}{\sqrt{1+x^2}}dx$ or $\int_{0}^{\infty} e^{-ax} \sqrt{1+x^2}dx$ | For $\int_0^\infty e^{-ax}\dfrac{x}{\sqrt{1+x^2}}~dx$ ,
$\int_0^\infty e^{-ax}\dfrac{x}{\sqrt{1+x^2}}~dx$
$=\int_0^\infty e^{-a\sinh t}\dfrac{\sinh t}{\sqrt{1+\sinh^2t}}~d(\sinh t)$
$=\int_0^\infty e^{-a\sinh t}\sinh t~dt$
$=-\dfrac{d}{da}\int_0^\infty e^{-a\sinh t}~dt$
$=-\dfrac{\pi}{2}\dfrac{d}{da}\mathbf{K}_0(a)$ (according to https://dlmf.nist.gov/11.5)
For $\int_0^\infty e^{-ax}\sqrt{1+x^2}~dx$ ,
$\int_0^\infty e^{-ax}\sqrt{1+x^2}~dx=\dfrac{\pi\mathbf{K}_1(a)}{2a}$ (according to https://dlmf.nist.gov/11.5) |
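A quick numerical check of the second identity (a sketch; here $\mathbf{K}_\nu=\mathbf{H}_\nu-Y_\nu$ is the Struve function of DLMF Chapter 11, not the modified Bessel function):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import struve, yv

a = 2.0
lhs, _ = quad(lambda x: np.exp(-a * x) * np.sqrt(1 + x**2), 0, np.inf)
rhs = np.pi * (struve(1, a) - yv(1, a)) / (2 * a)  # pi K_1(a) / (2a)
print(lhs, rhs)  # should agree to quadrature accuracy
```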
How is math used in computer graphics? | Like many questions of the form "How much math do you need to know to do X?", it depends greatly on how far you want to take things. For simple 2D or 3D computer graphics, say just a 2D Tetris-like game for your phone or what have you, elementary geometry can be all you need. But for 3D computer graphics, games like Call Of Duty or CAD computer programs, where the developers are using advanced libraries like OpenGL and DirectX, linear algebra and calculus come into play. Cutting-edge/active areas of research in computer graphics use the most advanced algorithms and math available: signal processing, optimization, differential equations, etc. You name it, basically every field of math in some way.
For a look at what kind of math is introduced and used in more simple/not terribly advanced applications of computer graphics, look at these two resources:
Taste of math used in basic computer graphics:
Eric Haines (a prominent figure in computer graphics) has a free
course at Udacity that is an introduction to interactive 3D computer
graphics: Interactive 3D
Graphics. Really high-quality
material he produced for this course. Notice how even though
it's 3D, the math isn't too crazy, since it goes back to how far you want
to take things.
Another free online resource, Jason L. McKesson's Learning Modern
3D Graphics Programming
And lastly but certainly not least, you have Processing, a
programming language partly created to introduce the concepts of
2D/3D computer graphics to an audience with no programming/math
experience: Processing
Taste of math used in advanced computer graphics:
This book, Mathematics for 3D Game Programming and Computer
Graphics
The SIGGRAPH (short for Special Interest Group on GRAPHics and
Interactive Techniques) website. This association is home to many
engineers and scientists who are active researchers in the field of
computer graphics. All sorts of helpful links on this site.
SIGGRAPH
And why not have a look at some of the newest research papers about
computer graphics: try to read a paper and use this site and other
resources to try to understand it; this will give you an impression of the very advanced mathematical techniques utilized by active researchers. This is the "farthest" you can take things: arXiv Computer Graphics
Papers
I mentioned video games mostly as examples of computer graphics
applications, since they are usually the most relatable
or tangible applications of computer graphics. But scientists use
computer graphics in many other ways, to visualize things like
nuclear weapons, stellar evolution, climate and weather, etc. Take a
look at some of the areas of research using visualization at
Sandia National Laboratories. The physics simulations there use every kind of math possible in some way. Whether that is helpful to you writing your paper or not, depends :).
Enderton Computability Theory - Calculation of Semi-characteristic Function - Implicit Use of Infinity? | Henning Makholm's response provides the answer: indeed, infinity is used in computability theory:
Computability theory is about which functions can be computed in a finite number of steps. In order to speak about this we also sometimes need to define some functions or properties that are not computable; otherwise we wouldn't even be able to state that they're not computable. The property of "not terminating" is one of these.
$x_1 + x_2 + x_3 + x_4 + x_5=5$ . Determine the maximum value of $x_1x_2+x_2x_3+x_3x_4+x_4x_5$. | Hint: Reduce the number of variables, $$x_1x_2+x_2x_3+x_3x_4+x_4x_5\leq x_2(x_1+x_3)+(x_1+x_3)x_4+x_4x_5\leq\dots$$ |
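A crude grid search supports the hint (a sketch assuming the intended constraint $x_i\ge0$, under which the maximum is $\frac{25}{4}$, attained e.g. at $x_2=x_3=\frac52$ and the rest $0$):

```python
from itertools import product

best = 0.0
steps = 21  # multiples of 0.25 in [0, 5]
for t1, t2, t3, t4 in product(range(steps), repeat=4):
    x1, x2, x3, x4 = 0.25 * t1, 0.25 * t2, 0.25 * t3, 0.25 * t4
    x5 = 5 - (x1 + x2 + x3 + x4)
    if x5 < 0:
        continue
    best = max(best, x1*x2 + x2*x3 + x3*x4 + x4*x5)
print(best)  # 6.25 = 25/4
```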
Compute the measure of a constant function | A function being measurable has its own definition (which you have given), but does not require any concept of "the measure of a function", and indeed this is not defined.
To show the constant function $f=c$ is measurable, observe that $\{x:f(x)>a\}$ is either the empty set or the full space, depending on the particular value of $a$. |
Am I right in thinking that differential of $f(x)$ is $dy = {\Delta}y$ when ${\Delta}x$ is infinitesimally small, otherwise $dy$ $\ne$ ${\Delta}y$? | Let's take a function $f(x)=x^2$ and a point $x_0$. Calculate the difference:
$$\Delta y = f(x_0+\Delta x)-f(x_0) = (x_0+\Delta x)^2-x_0^2 = 2x_0\cdot \Delta x + (\Delta x)^2$$
Hmmm, it looks like $\Delta y = a\cdot \Delta x + (\Delta x)^2$, where $a=2x_0$ is a number. Assuming it is not zero (i.e. $x_0\neq 0$), we see that when $|\Delta x| \ll |a|$, the second term is quite negligible, so $\Delta y \approx a\cdot\Delta x$ (and the quality of this approximation depends on the inequality $|\Delta x| \ll |a|$).
Now let's think about infinitesimals. What do we mean when we say that $\Delta x$ is infinitesimally small? We could try to define:
$\Delta x$ is infinitesimal if for all real $a\neq 0$ we have $|\Delta x| < |a|$
But... there is no such (non-zero) "infinitesimal": for every $\Delta x$ we can choose $a=\Delta x /2$. We either need to invent a new kind of number or accept "infinitesimal" as a loosely defined concept, meaning "for very small $\Delta x$ we have $\Delta y\approx a\cdot\Delta x$".
As you will see, in mathematics usually we don't speak about "infinitesimal" $dy$ and $dx$ – we usually speak about the ratio $dy/dx$ that often happens to be a real number. (Such a ratio doesn't need to exist, for example if your function "jumps", i.e. is not continuous).
I'd suggest thinking about them in a loose way ("$dy/dx \approx \Delta y/\Delta x$ and the approximation becomes better and better if we take $\Delta x$ to be smaller and smaller"), but learning rigorous analysis in which you consider derivatives, not infinitesimal quantities. Eventually you will get used to this way of thinking and to proving rigorous theorems – good luck!
(A small disclaimer: there is also a way to make sense of "infinitesimal" equations like $dy = f'(x)\,dx$ that uses the language of differential forms, but then both sides are not numbers – they are linear functions. You can read about this stuff, for example, in the books mentioned here.)
Induction on Real Numbers | Okay, I can't resist: here is a quick answer.
I am construing the question in the following way: "Is there some criterion for a subset of $[0,\infty)$ to be all of $[0,\infty)$ which is (a) analogous to the principle of mathematical induction on $\mathbb{N}$ and (b) useful for something?"
The answer is yes, at least to (a).
Let me work a little more generally: let $(X,\leq)$ be a totally ordered set which has
$\bullet $a least element, called $0$, and no greatest element.
$\bullet$ The greatest lower bound property: any nonempty subset $Y$ of $X$ has a greatest lower bound.
Principle of Induction on $(X,\leq)$: Let $S \subset X$ satisfy the following properties:
(i) $0 \in S$.
(ii) For all $x \in S$, there exists $y > x$ such that $[x,y] \subset S$.
(iii) For any $y \in X$, if the interval $[0,y) \subset S$, then also $y \in S$.
Then $S = X$.
Indeed, if $S \neq X$, then the complement $S' = X \setminus S$ is nonempty, so it has a greatest lower bound, say $y$; since $y$ is a lower bound of $S'$, we have $[0,y) \subset S$. By (ii), we cannot have $y \in S$: we would get $z > y$ with $[y,z] \subset S$, hence $[0,z] \subset S$, so every element of $S'$ would exceed $z$, contradicting $y = \inf S'$. And by (iii), we cannot have $y \in S'$, since $[0,y) \subset S$ forces $y \in S$. Done!
Note that in case $(X,\leq)$ is a well-ordered set, this is equivalent to the usual statement of transfinite induction.
It also applies to an interval in $\mathbb{R}$ of the form $[a,\infty)$. It is not hard to adapt it to versions applicable to any interval in $\mathbb{R}$.
Note that I believe that some sort of converse should be true: i.e., an ordered set with a principle of induction should have the GLB / LUB property. [Added: yes, this is true. A totally ordered set with minimum element $0$ satisfies the principle of ordered induction as stated above iff every nonempty subset has an infimum.]
Added: as for usefulness, one can use "real induction" to prove the three basic Interval Theorems of honors calculus / basic real analysis. These three theorems assert three fundamental properties of any continuous function $f: [a,b] \rightarrow \mathbb{R}$.
First Interval Theorem: $f$ is bounded.
Inductive Proof: Let $S = \{x \in [a,b] \ | \ f|_{[a,x]} \text{ is bounded} \}$. It suffices to show that $S = [a,b]$, and we prove this by induction.
(i) Of course $f$ is bounded on $[a,a]$.
(ii) Suppose that $f$ is bounded on $[a,x]$. Then, since $f$ is continuous at $x$, $f$ is bounded near $x$, i.e., there exists some $\epsilon > 0$ such that $f$ is bounded on
$(x-\epsilon,x+\epsilon)$, so overall $f$ is bounded on $[a,x+\epsilon)$.
(iii) If $f$ is bounded on $[a,y)$, of course it is bounded on $[a,y]$. Done!
Corollary: $f$ assumes its maximum and minimum values.
Proof: Let $M$ be the least upper bound of $f$ on $[a,b]$. If $M$ is not a value of $f$,
then $f(x)-M$ is never zero but takes values arbitrarily close to $0$, so $g(x) = \frac{1}{f(x)-M}$ is continuous and unbounded on $[a,b]$, contradiction.
(Unfortunately in the proof I said "least upper bound", and I suppose the point of proofs by induction is to remove explicit appeals to LUBs. Perhaps someone can help me out here.)
Second Interval Theorem (Intemediate Value Theorem): Suppose that $f(a) < 0$ and $f(b) > 0$. Then there exists $c \in (a,b)$ such that $f(c) = 0$.
Proof: Define $S = \{x \in [a,b] \ | \ f(x) \leq 0\}$. In this case we are given that $S \neq [a,b]$, so at least one of the hypotheses of real induction must fail. But which?
(i) Certainly $a \in S$.
(iii) If $f(x) \leq 0$ on $[a,y)$ and $f$ is continuous at $y$, then we must have $f(y) \leq 0$ as well: otherwise, there is a small interval about $y$ on which $f$ is positive.
So it must be that (ii) fails: there exists some $x \in (a,b)$ such that $f \leq 0$ on
$[a,x]$ but there is no $y > x$ such that $f \leq 0$ on $[a,y]$. As above, since $f$ is continuous at $x$, we must have $f(x) = 0$!
Third Interval Theorem: $f$ is uniformly continuous.
(Proof left to the interested reader.)
Moreover, one can give even easier inductive proofs of the following basic theorems of analysis: that the interval $[a,b]$ is connected, that the interval $[a,b]$ is compact (Heine-Borel), that every infinite subset of $[a,b]$ has an accumulation point (Bolzano-Weierstrass).
Acknowledgement: My route to thinking about real induction was the paper by I. Kalantari "Induction over the Continuum".
His setup is slightly different from mine -- instead of (ii) and (iii), he has the single axiom that $[0,x) \subset S$ implies there exists $y > x$ such that $[0,y) \subset S$ -- and I am sorry to say that I was initially confused by it and didn't think it was correct. But I corresponded with Prof. Kalantari this morning and he kindly set me straight on the matter.
For that matter, several other papers exist in the literature doing real induction (mostly in the same setup as Kalantari's, but also with some other variants such as assuming (ii) and that the subset $S$ be closed). The result goes back at least to a 1922 paper of Khinchin, who in fact later used real induction as a basis for an axiomatic treatment of real analysis. It is remarkable to me that this appealing concept is not more widely known -- there seems to be a serious PR problem here.
Added Later: I wrote an expository article on real induction soon after giving this answer: it's very similar to what I wrote above, but longer and more polished. It is available here if you want to see it. |
Infinite sum with primes | Notice that for $s>1$ the following inequality holds:
$$\frac{1}{p^s-1}<\frac{2}{p^s}\iff 2<p^s$$
Now, what do you know of $\sum_{n\geq1}\tfrac{1}{n^s}$? |
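A quick numerical comparison for $s=2$ (a sketch; the point is that $\sum_p \frac{2}{p^s} < 2\zeta(s) < \infty$ dominates the sum):

```python
from sympy import primerange

s = 2
primes = list(primerange(2, 10**5))
print(sum(1 / (p**s - 1) for p in primes),
      2 * sum(1 / p**s for p in primes))  # first sum stays below the second
```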
Minimize SOP and POS algebraically? | You can use the distributive law as usual:
$$ (a+b)c = ac + bc $$
or the other way
$$ (ab)+c = (a+c)(b+c) $$
(it might help to temporarily swap $+$ and $\cdot$ if you have trouble "seeing" the above distribution)
Doing it "algebraically" is unlikely to be any better than Karnaugh maps, or more generally the Quine-McCluskey algorithm. In fact, it will probably be much more work. |
Is $7^{8}+8^{9}+9^{7}+1$ a prime? (no computer usage allowed) | Trial division mod $47$, as per the hint.
Firstly, for positive integer $a$, observe that $$a \equiv \overbrace{a \mod 50}^{\text{reduced residue}}+3\lfloor a/50\rfloor \pmod {47}.$$ This makes taking mod $47$s much easier.
We compute $7^2=49 \equiv 2 \pmod {47}$. So $7^8 \equiv 2^4 = 16 \pmod {47}$.
We compute $8^2=64 \equiv 14+3 = 17 \pmod {47}$. So $8^4 \equiv 17^2 = 289 \equiv 39+3 \times 5=54 \equiv 4+3=7 \pmod {47}.$ So $8^8 \equiv 7^2 = 49 \equiv 2 \pmod {47}.$ So $8^9 \equiv 2 \times 8 = 16 \pmod {47}$.
I happen to have memorized that $9^3=729$ (I used to set my alarm to 7:29am because $729=3^6$). So $9^3 \equiv 729 = 29+3 \times 14=71 \equiv 21+3=24 \pmod {47}$. So $9^6 \equiv 24^2=576 \equiv 26+3 \times 11=59 \equiv 9+3=12 \pmod {47}$. So $9^7\equiv 12 \times 9=108\equiv 8+3 \times 2=14 \pmod {47}$.
Finally $16+16+14+1=47 \equiv 0 \pmod {47}$. |
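A one-line sanity check (Python's three-argument pow does modular exponentiation):

```python
print((pow(7, 8, 47) + pow(8, 9, 47) + pow(9, 7, 47) + 1) % 47)  # 0
```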
Boundary of boundary of closed set equals boundary of this set | The boundary $\partial A$ of a closed subset $A \subseteq \mathbb{R}^n$ has empty interior, because any open neighborhood of any point $x \in \partial A$ contains points that are not in $A$ (by definition of boundary), therefore not in $\partial A$ (since $A$ is closed.)
This is certainly not true for arbitrary subsets. The boundary of $\mathbb{Q}^4$ is all of $\mathbb{R}^4$. But then $\mathbb{Q}^4$ is not closed. |
Difference between "Show" and "Prove" | Sometimes students misinterpret show to mean give an example. I now avoid using show in exams; I always use prove when a proof is required.
In the context of examples or calculations, it might be OK to use show. For instance, "Show that $2$ is a root of $x^2-4$" or "Show that $\sin x$ is a solution of $y''= -y$".
Verify understanding of contractable spaces? | $\DeclareMathOperator{id}{id}$Note that there is exactly one continuous map $f : W \to \{e\}$ for all spaces $W$. Also, note that continuous maps $g : \{e\} \to W$ correspond exactly to points $w = g(e) \in W$.
Thus, one can rephrase the definition of "contractible" as: $X$ is contractible iff there is some $x \in X$ such that $\id_X \sim g$, where $g(y) = x$ for all $y \in X$.
In other words, $X$ is contractible iff $\id_X$ is null-homotopic, as you claim. |
Prove $\mathbb{D}$ and $\mathbb{D} \cup \{(0,1)\}$ aren't homeomorphic. | $\mathbb D$ is a $2$-manifold; $\mathbb D \cup\{(0,1)\}$ is not, as no neighbourhood of $(0,1)$ in it is homeomorphic to $\mathbb R^2$. For example, no neighbourhood of $(0,1)$ is compact: any neighbourhood has limit points on the unit circle that lie outside the space, whereas every point of a manifold has a compact neighbourhood.
Matrix Ring over Semisimple Ring | Use the following two simple observations together with Wedderburn's theorem.
A ring of matrices over a ring of matrices over a ring $R$ is simply a ring of (larger) matrices over $R$: $M_n(M_m(R)) \cong M_{nm}(R)$.
A ring of matrices over a direct product of rings is just the direct product of the matrix rings of the factors: $M_n(R_1 \times \cdots \times R_k) \cong M_n(R_1) \times \cdots \times M_n(R_k)$.
Is this a well defined set in ZFC? | Your set essentially satisfies the relation $M = \mathcal P(M)^2$. This would contradict Cantor's theorem:
Let $S = \{(A,B) \in M \mid (A,B) \notin A \}$ and let $m = (S,\emptyset)$. Then $m \in S \iff m \notin S$, which is a contradiction.
In general, $\bigcup \mathcal P(M_k) \subset \mathcal P(\bigcup M_k)$ and the inclusion is strict.
If you define $M$ as the union of the sets $M_n$, the $M$ obtained doesn't satisfy the equation, but you still have that $M = \bigcup \mathcal P(M_n)^2$. There is still an inclusion $M \to \mathcal P(M)^2$. Of course the inclusion is strict, since if $m \in M_n$ then its image is actually in $\mathcal P(M_{n-1})^2$. So this never reaches subsets of $M$ containing elements of arbitrarily high level.
Is this form of proof circular reasoning? | You are right, but it's not circular reasoning. It's another type of fallacious argument called affirming the consequent. Just because something implies a true statement doesn't mean it is true. For instance, the statement "for all $a$ and $b,$ $a+b = a\cdot b$" implies the true statement $2+2=2\cdot 2,$ but it's obviously false.
However, any statement that implies a false statement must be false, so if you'd derived $1=0$ you'd be justified in concluding the original equation was false. |
What is $\text{Res}_{\mathbb{F}_p(t)/\mathbb{F}_p(t^p)}(\text{Spec}\,\mathbb{F}_p(t)[x]/(x^p - t))$? | The answer is the empty scheme. We use the basis $\{1, t, \ldots, t^{p - 1}\}$ for $L/k$. To find the restriction of scalars, we substitute$$x = y_0 + ty_1 + \ldots + t^{p - 1}y_{p - 1}$$into the equation $x^p - t = 0$ defining $L$, and rewrite the result as$$F_0 + F_1t + \ldots + F_{p - 1}t^{p - 1} = 0$$with $F_i \in k[y_0, \ldots, y_{p - 1}]$. Since taking $p$-th powers is additive in characteristic $p$, we get\begin{align*}
F_0 & = y_0^p + t^p y_1^p + \ldots + t^{p(p - 1)}y_{p - 1}^p,\\
F_1 & = -1, \\
F_i & = 0 \text{ for }i \ge 2.
\end{align*}
Then $\text{Res}_{L/k}X$ is $\text{Spec}\,k[y_0, \ldots, y_{p - 1}]/(F_0, \ldots, F_{p - 1})$. This is the empty scheme, since $F_1 = -1$ generates the unit ideal. |
Iteration for solving $x=g(x)$. | Writing $g(x)=x\cdot \frac{x}{3}$, we see that if $x>3$ then $g(x)>x$, and if $0\leq x<3$ then $g(x)<x$. So what we see is that, unless $p_0=3$, the iterates $g(p_0),g(g(p_0)),\ldots$ are going to get steadily further away from $3$.
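A few iterations make this visible (a sketch; $g(x)=x^2/3$ has fixed points $0$ and $3$, and iterates flee from $3$):

```python
g = lambda x: x**2 / 3

for p0 in (2.9, 3.0, 3.1):
    x = p0
    for _ in range(6):
        x = g(x)
    print(p0, x)  # drifts toward 0, stays at 3, blows up, respectively
```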
integrating to find a mean | For one dimension, the average $<f>$ of a function $f$ on the interval $(a,b)$ is
$$<f>=\frac{\int_a^{b} f(x)dx}{\int_a^{b} 1dx}=\frac{1}{b-a}\int_a^{b} f(x)dx$$
For $n$-dimensions, the average $<f>$ of a function $f$ defined on a finite domain $\mathscr{D}$ is given
$$\frac{\int_\mathscr{D} f d^nx}{\int_\mathscr{D} 1 d^nx}$$
In the $2$-D case presented, $f=e^{-2r}$, where $r=\sqrt{x^2+y^2}$, and the domain is $\mathbb{R}^2$. Thus,
$$\begin{align}
<e^{-2r}>&=\lim_{R \to \infty} \frac{\int_0^{2\pi} \int_0^R e^{-2r} rdrd\phi}{\int_0^{2\pi} \int_0^R 1 rdrd\phi}\\\\
&=\lim_{R \to \infty} \frac{2\pi\left(\frac14-\frac14(2R+1)e^{-2R}\right)}{\pi R^2}\\\\
&=0
\end{align}$$
NOTE:
If one is finding the average of $r$ under the measure $e^{-2r}$, then
$$<r>=\lim_{R \to \infty} \frac{\int_0^{2\pi} \int_0^R e^{-2r} rdrd\phi}{\int_0^{2\pi} \int_0^R e^{-2r} drd\phi}=\lim_{R \to \infty} \frac{2\pi\left(\frac14-\frac14(2R+1)e^{-2R}\right)}{\pi (1-e^{-2R})}=\frac12$$ |
Sequence of functions converging to a function | The function $f(x) = \frac 1{x^2}$ does not belong to $L^1([0,1])$.
On the other hand, the function $\phi(t) = \frac{t}{1 + t^2}$, $t \ge 0$, attains its maximum value $\frac 12$ at $t = 1$.
Thus if $x \in [0,1]$ and $n \ge 0$ then $$|f_n(x)| = f_n(x) = \frac{nx}{1 + (nx)^2} \le \frac 12$$ so that $g(x) = \frac 12$ is a suitable majorant for the sequence. You can apply LDCT to conclude $$\lim_{n \to \infty} \int_0^1 f_n = \int_0^1 \lim_{n \to \infty} f_n = 0.$$ |
A.P. problem involving sum and product | Let $a_1a_5=x$. Then
$$5 (\frac{a_1+a_5}{2})=30 \implies a_1+a_5=12$$
$$a_3=\frac{a_1+a_5}{2}=6$$
$$a_2a_4=(\frac{a_1+a_3}{2})(\frac{a_3+a_5}{2})=\frac14(a_1+6)(6+a_5)=\frac14 (a_1a_5+6[a_1+a_5]+36)=\frac14(x+108)$$
$$a_1a_2a_3a_4a_5=(a_3)(a_1a_5)(a_2a_4)=\frac64 x(x+108)=3840$$
$$x^2+108x-2560=(x+128)(x-20)=0$$ $$x=+20, -128$$
Since $a_1+a_5=2a_1+4d=12$, the numbers $a_1$ and $a_5$ are the roots of $t^2-12t+x=0$. The root $x=20$ would give $\{a_1,a_5\}=\{2,10\}$, hence $d=2$, which the hypothesis $d\gt 2$ excludes. Therefore $$a_1a_5=x=-128$$
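For a double-check, sympy recovers the progression directly (a sketch; it confirms that only $x=-128$ is compatible with $d>2$):

```python
import sympy as sp

a1, d = sp.symbols('a1 d', real=True)
terms = [a1 + k * d for k in range(5)]
sols = sp.solve([sum(terms) - 30, sp.prod(terms) - 3840], [a1, d])
good = [s for s in sols if s[1].is_real and s[1] > 2]
print(good)                      # d = sqrt(41), a1 = 6 - 2*sqrt(41)
print([sp.simplify(s[0] * (s[0] + 4 * s[1])) for s in good])  # a1*a5 = -128
```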
Is there a series approximation in terms of $n$ for the sum of the harmonic progression : $\sum_{k=0}^{n}\frac{1}{1+ak}$? | $$\sum_{k=1}^{n}\frac{1}{1+ak} = \frac{1}{a}\sum_{k=1}^{n}\frac{1}{k+\frac{1}{a}}=\frac{H_n}{a}-\frac{1}{a^2}\sum_{k=1}^{n}\frac{1}{k\left(k+\frac{1}{a}\right)}=\frac{1}{a}H_n+O\left(\frac{1}{a^2}\right). $$ |
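A quick numerical check of the leading term (a sketch):

```python
a, n = 10.0, 10**5
s = sum(1 / (1 + a * k) for k in range(1, n + 1))
H = sum(1 / k for k in range(1, n + 1))
print(s, H / a, abs(s - H / a))  # the gap is O(1/a^2), uniformly in n
```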
Invertible , bounded linear operator on a Hilbert space | Let $H=\ell^2(\mathbb{N})$ be the space of square-summable sequences, i.e. $$
\big\langle (a_i)_{i\in\mathbb{N}},\;(b_i)_{i\in\mathbb{N}}\big\rangle = \sum_{i=1}^\infty a_ib_i
$$ and therefore $$
\|(a_i)_{i\in\mathbb{N}}\|^2 = \big\langle(a_i)_{i\in\mathbb{N}},\;(a_i)_{i\in\mathbb{N}}\big\rangle =\sum_{i=1}^\infty a_i^2.
$$
Now look at the operator $$
K \;:\; H\to H\;:\; (a_n)_{n\in\mathbb{N}} \mapsto (\tfrac{1}{n}a_n)_{n\in\mathbb{N}}.
$$
and in particular at the images of the vectors $$
\mathbf{u}_i = (\underbrace{0,\ldots,0}_{i-1\text{ zeros}},1,0,\ldots).
$$
Edit: Note, however, that $K$ isn't surjective, only injective. So $K$ isn't bijective, although it does have a left-inverse (but no right-inverse). |
Are invertible linear operators of bounded linear operators also bounded? | If an inverse of any kind exists, $T$ is a bijection. As a consequence of the open mapping theorem, a bijective operator is bounded from below, meaning that there is $c>0$ such that $\|Tx\|\ge c\|x\|$ for all $x$. This and the property $TS=I$ imply that $S$ is bounded.