title | upvoted_answer
---|---|
Convergence Range of $\sum\limits_{n=1}^{\infty} \frac{\sin(2n-1)x}{(2n-1)^2}$ | I support George Coote's comment: it should be $\sin[(2n-1)x]$ rather than $[\sin(2n-1)]x$. However, the crucial estimation works regardless of this issue. In fact
$$\sum_{n\ge1}\left|\frac{\sin[(2n-1)x]}{(2n-1)^2}\right|=\sum_{n\ge1}\frac{|\sin[(2n-1)x]|}{(2n-1)^2}\leqslant\sum_{n\ge1}\frac1{(2n-1)^2}$$
Thus, the series converges absolutely, independently of the actual value of $x$, and therefore the range of convergence (assuming $x$ is real) is the entire real line. |
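As a quick numerical sanity check (a sketch added here, not part of the argument above), the partial sums can be compared with the dominating bound $\sum_{n\ge1}1/(2n-1)^2=\pi^2/8$ for a few real values of $x$:

```python
import math

def partial_sum(x, N):
    """Partial sum of sum_{n=1}^N sin((2n-1)x) / (2n-1)^2."""
    return sum(math.sin((2 * n - 1) * x) / (2 * n - 1) ** 2 for n in range(1, N + 1))

bound = math.pi ** 2 / 8  # sum_{n>=1} 1/(2n-1)^2
for x in (0.0, 1.0, 10.0, -3.7):
    s = partial_sum(x, 100_000)
    print(f"x = {x:6.2f}:  partial sum = {s: .6f}   (bound on |sum| = {bound:.6f})")
```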
Does every Krull ring have a height 1 prime ideal? | Let $R=\cap_{\lambda\in\Lambda}R_{\lambda}$ with $R_{\lambda}$ DVRs (as in Matsumura's definition of Krull domains). Assume that the intersection is irredundant, that is, if $\Lambda'\subsetneq\Lambda$ then $\cap_{\lambda\in\Lambda}R_{\lambda}\subsetneq\cap_{\lambda'\in\Lambda'}R_{\lambda'}$.
Let's prove that $m_{\lambda}\cap R$ is a prime ideal of height one, for all $\lambda\in\Lambda$. First note that $m_{\lambda}\cap R\neq (0)$. If the height of some $m_{\alpha}\cap R$ is at least $2$, then there exists a nonzero prime $p\subsetneq m_{\alpha}\cap R$. From Kaplansky, Commutative Rings, Theorem 110, there exists $m_{\alpha'}\cap R\subseteq p$ (obviously $\alpha'\neq\alpha$). Let $x\in\cap_{\lambda\ne\alpha}R_{\lambda}$, $x\notin R$ (so $x\notin R_{\alpha}$), and $y\in m_{\alpha'}\cap R$, $y\neq 0$. One can choose $m,n$ positive integers such that $z=x^my^n$ is a unit in $R_{\alpha}$. Since $x\in R_{\alpha'}$ and $y\in m_{\alpha'}\cap R$ we get $z\in m_{\alpha'}$. Obviously $z\in R_{\lambda}$ for all $\lambda\ne \alpha, \alpha'$, so $z\in R$, $z$ is invertible in $R_{\alpha}$ and not invertible in $R_{\alpha'}$, a contradiction with $m_{\alpha'}\cap R\subseteq m_{\alpha}\cap R$.
(This argument is adapted from Kaplansky's proof of Theorem 114. Furthermore, using again Theorem 110 one can see that $m_{\lambda}\cap R$ are the only height one prime ideals of $R$.) |
Conditional Probability Question for a Continuous Random Variable | Let $p_{\small X}(x)$ be the probability density for the continuously distributed random variable $X$. Then $p_{\small X}(X)$ is a random variable, and you seem to want to talk about the conditional measure $p_{\small X\mid p_{\small X}(X)=3}(x)$.
The nature of the conditional measure, and whether it even exists, entirely depends on the distribution for the random variable.
If $X$ has a probability density of $3$ on an interval of uncountably many points (with positive length), say $A$, then the conditional probability measure $p_{X\mid X\in A}(x)$ will be a probability density function. $$p_{\small X\mid X\in A}(x)=\dfrac{p_{\small X}(x)\mathbf 1_{x\in A}}{\int_A p_{\small X}(s)\mathsf d s}=\dfrac {\mathbf 1_{x\in A}}{\int_A \mathsf d s}$$
If $X$ has a probability density of $3$ on a set of finitely many points, say $B$, then the conditional probability measure $p_{X\mid X\in B}(x)$ will be a probability mass function.
$$p_{\small X\mid X\in B}(x)=\dfrac{p_{\small X}(x)\mathbf 1_{x\in B}}{\sum_{s\in B} p_X(s)}=\dfrac {\mathbf 1_{x\in B}}{\lvert B\rvert}$$
For instance, if $X$ has density of $3$ at only one point, say at $x_3$, then the conditional probability that $X=x_3$ given that $X=x_3$ will of course be $1$.
In other cases it may not even be sensible. |
Triangularization of a matrix | Choosing $v_2 = (1,0)$ works perfectly well. All you need here is something that is not a multiple of $v_1$ so that $P$ is invertible.
If you want, you can choose $v_1,v_2$ so that $P$ is orthogonal (unitary). In that case, you would be looking for the Schur decomposition of the matrix. |
Proving that a relation R is an equivalence relation | You want to try to prove as much as you can for arbitrary elements $(A,B)$, working from the definitions.
For example, let's say you want to prove symmetry. Symmetry means the following: if you assume $(A,B) \in R$, you want to prove $(B,A) \in R$. So assume $(A,B) \in R$. That means, according to the definition of $R$, either $A=B$ or $A=S\setminus B$. So we break down into two cases:
Case 1: $A=B$. Then $B=A$, so by definition $(B,A) \in R$. (This basically boils down to reflexivity, which you have already proven.)
Case 2: $A = S \setminus B$. Then, by the properties of set subtraction, $B = S \setminus A$ (do you see why?). Thus $(B,A) \in R$, again by definition of $R$.
In either case $(B,A) \in R$, so we have proven symmetry.
Transitivity can be done similarly (though you might need to break up into more cases and subcases); I'll leave it for you to tackle. |
Is $x^{1/3}$ continuous? | There is no point of discontinuity in these functions, so the domain of both $$x^{\frac13}$$ and $$(2x-1)^{\frac13}$$ is $\mathbb R$.
Whenever we have a root with an odd index, we only need to check the function inside the root, which here is defined and continuous everywhere. Otherwise you would check for denominators, $\tan$, $\log$, and so on. |
Existence of a complete metric on $(0,1)$ | For an explicit example of the sort that Alex S mentioned in the comments, consider $d(x,y)=|\tan(r(x))-\tan(r(y))|$, where $r(x)=\pi x - \pi/2$. This metric is complete because $\mathbb{R}$ is complete in its usual metric, and it induces the standard topology because $\tan \circ r$ is a homeomorphism. |
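A small sketch (my own illustration, under the stated construction) of this metric in Python; note that a sequence tending to $0$ in the usual metric is not Cauchy for $d$, which is why completeness is not contradicted:

```python
import math

def d(x, y):
    """Complete metric on (0, 1): pull back |.| along the homeomorphism tan(pi*x - pi/2)."""
    r = lambda t: math.pi * t - math.pi / 2
    return abs(math.tan(r(x)) - math.tan(r(y)))

# Consecutive distances of x_k = 1/2^k blow up, so (x_k) is not Cauchy in d.
xs = [1 / 2 ** k for k in range(1, 8)]
for a, b in zip(xs, xs[1:]):
    print(f"d({a:.5f}, {b:.5f}) = {d(a, b):.3f}")
```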
Independence of random variables measure theory | Let $\mathcal{B}$ be the Borel $\sigma$-algebra of $\mathbb{R}$.
Show that $\sigma(X) = \{X^{-1}(B) : B \in \mathcal{B}\}$. ($\supset$ is clear. For $\subset$, show the right side is a $\sigma$-algebra.)
Fix $y \in \mathbb{R}$. Let $\mathcal{L} = \{ B \in \mathcal{B} : P(X \in B, Y \le y) = P(X \in B) P(Y \le y)\}$. Show that $\mathcal{L}$ is a $\lambda$-system. Let $\mathcal{P} = \{ (-\infty, x] : x \in \mathbb{R} \}$. Show that $\mathcal{P}$ is a $\pi$-system, $\mathcal{P} \subset \mathcal{L}$ and that $\sigma(\mathcal{P}) = \mathcal{B}$. By the $\pi$-$\lambda$ theorem, $\mathcal{B} \subset \mathcal{L}$. Conclude that for every $B \in \mathcal{B}$ and $y \in \mathbb{R}$, $P(X \in B, Y \le y) = P(X \in B) P(Y \le y)$.
Fix $B \in \mathcal{B}$. Let $\mathcal{L}' = \{C \in \mathcal{B} : P(X \in B, Y \in C) = P(X \in B) P(Y \in C)\}$. Show that $\mathcal{L}'$ is a $\lambda$-system, and proceed as before to show $\mathcal{B} \subset \mathcal{L}'$.
We have now shown that $P(X \in B, Y \in C) = P(X \in B) P(Y \in C)$ for all $B, C \in \mathcal{B}$. By (1), this says that $\sigma(X)$ and $\sigma(Y)$ are independent. |
What is the value of $w+z$ if $1<w<x<y<z$ | Hint: $770$ is the product of four distinct primes. Why must $w$, $x$, $y$, $z$ each be one of those primes? |
Explanation of OEIS:A000046 | In this case, primitive means it's not the concatenation of two or more copies of the same necklace.
For example, with $n=4$, we count only $0111$ and $0011$. The former counts all necklaces with one zero (anywhere), and (by taking complements) three zeroes. The latter counts all necklaces with two adjacent zeroes, and two adjacent ones.
What we don't count are $1111$ and $1010$. The former is four copies of the primitive necklace $1$, while the latter is two copies of the primitive necklace $10$.
With $n=5$, we count $01111$, $00111$, and $10101$. By considering complements, we may look at just zero, one, or two $0$'s. If zero, we have the nonprimitive $11111$. If one, we have $01111$. If two, the zeroes could either be adjacent or nonadjacent. By rotation and reflection this covers all the cases.
With $n=6$, we count $011111$, $001111$, $010111$, $000111$, $010011$. Note that $011011$ and $010101$ are not primitive. |
Complex equation solving for a special case | We have:
$$
a \cdot \bar d=e^{ik} \; \iff \; a \cdot \bar d \cdot d=e^{ik}\cdot d \; \iff \; a|d|^2=e^{ik}\cdot d \; \iff \;a=\frac{e^{ik}\cdot d }{|d|^2}
$$
The two equations are of the same form, and you can solve the second one for $d$ in the same way. |
Solving A Certain Diophantine Equation | (Old answer revised with new information.)
Given,
$$d(2^{k+1}-1)-b^2(2^{k+1}-2)=1$$
There are solutions for all $k$. It is best to simplify with $k=p-1$ to get $\color{brown}{M_p = 2^p-1}$. Hence,
$$d(M_p)-b^2(M_p-1)=1\tag1$$
I. Family 1: The two general solutions in the old answer can be combined into one. For any $p$, there are infinitely many positive integers $b,d$ that solve $(1)$. For integer $n$:
$$b = M_p n \pm 1$$
$$d = (M_p)(M_p-1)n^2\pm2(M_p-1)n+1$$
II. Family 2: But there are additional parametric solutions for even $p$,
$$b = M_p n \pm 2\,^{p/2}$$
$$d = (M_p)(M_p-1)n^2\pm 2\cdot2\,^{p/2}(M_p-1)n+M_p$$
One may ask, "Using integer $n$, does this give all the solutions?". If $M_p$ is composite, the answer is no. For example, for $p=23$, the smallest non-family solution is,
$$b = M_{23}n+1 = 3034178,\quad\text{where}\;n=17/47 $$
However, if $M_p>3$ is a Mersenne prime, for $p = 3,5,7,13,17,19,\dots$ then it seems $b = M_p n \pm 1$ is the complete solution. (I tested $b<10^6$ and there were no non-family solutions for those $p$.)
P.S. These families were found using Mathematica and some intuitive guessing. |
Relationship between equicontinuity and total boundedness | Yes, you can "cancel" closedness. In fact that's often exactly how the Arzela-Ascoli theorem is stated. The statement of Arzela-Ascoli that is taught at my course is as follows:
Let $K$ be a compact space and $C(K)$ be the space of continuous real valued functions on $K$. Then $S \subset C(K)$ is totally bounded if and only if it is bounded and equicontinuous.
In a similar vein rather than the result in your first sentence a slightly different result that talks about non-closed $S$ is: a subset $S$ of a complete metric space is pre-compact (meaning $\bar{S}$ is compact) if and only if $S$ is totally bounded. It's equivalent to the result you've stated since $S$ is totally bounded if and only if $\bar{S}$ is totally bounded.
Finally your second bullet point is correct, a totally bounded subset of $C(K)$ is equicontinuous since it's an if and only if statement.
Proof of Arzela-Ascoli:
Firstly let $S$ be totally bounded. Then certainly $S$ is bounded. To show equicontinuity, fix a point $x \in K$ and $\epsilon > 0$. Then there exists an $\epsilon$-net $\{f_1, \ldots, f_n\} \subset S$. Then since each $f_i$ is continuous there exists an open neighbourhood of $x$, $U_i$, such that for all $y \in U_i$ we have $\lvert f_i(y) - f_i(x) \rvert < \epsilon$.
Now take $U = \cap_{i=1}^n U_i$ which is still an open nbhd of $x$. Then for any $y \in U$ and $f \in S$, there exists $f_i$ such that $\lVert f - f_i \rVert < \epsilon$. Then
$$ \lvert f(y) - f(x) \rvert \leq \lvert f(y) - f_i(y) \rvert + \lvert f_i(y) - f_i(x) \rvert + \lvert f_i(x) - f(x) \rvert < 3 \epsilon $$
So $S$ is equicontinuous.
Conversely let $S$ be bounded and equicontinuous, and fix $\epsilon > 0$. Then by equicontinuity, for all $x \in K$ there exists an open nbhd $U_x$ of $x$ such that for all $y \in U_x$, $f \in S$ we have $\lvert f(x) - f(y) \rvert < \epsilon$. Then $K = \cup_{x \in K} U_x$ so by compactness $K = \cup_{i = 1}^n U_{x_i}$ for some $x_1, \ldots, x_n$. Now consider the map
$$ \varphi: C(K) \to \mathbb R^n, \quad \varphi(f) = (f(x_1), \ldots, f(x_n))$$
It is easy to check that $\varphi$ preserves boundedness of $S$, thus $\varphi(S)$ is a bounded subset of $\mathbb R^n$. Then by Heine-Borel $\overline{\varphi(S)}$ is compact, thus $\overline{\varphi(S)}$ and hence $\varphi(S)$ is totally bounded in $\mathbb R^n$. Now pick an $\epsilon$-net $\{\varphi(f_1), \ldots, \varphi(f_m)\}$ of $\varphi(S)$ in $\mathbb R^n$; we show $\{f_1, \ldots, f_m\}$ is a $3\epsilon$-net of $S$ in $C(K)$.
Fix any $y \in K$ and $f \in S$. Then there exists $f_j$ such that $\lvert f(x_i) - f_j(x_i) \rvert < \epsilon$ for all $i = 1, \ldots, n$. Since $K = \cup_{i=1}^n U_{x_i}$, we have $y \in U_{x_i}$ for some $i$. Then
$$ \lvert f(y) - f_j(y) \rvert \leq \lvert f(y) - f(x_i) \rvert + \lvert f(x_i) - f_j(x_i) \rvert + \lvert f_j(x_i) - f_j(y) \rvert < 3\epsilon$$
Thus $\lVert f - f_j \rVert < 3\epsilon$ so as required $\{f_1, \ldots, f_m\}$ is a $3\epsilon$-net of $S$. So $S$ is totally bounded. |
Trouble doing polynomial interpolation | Since the system is overdetermined, in general there is no exact solution. However, things like a least-squares fit are still possible, and this amounts to fitting a polynomial to the given data. An easy way is to treat it as a linear system (take $1, x, x^2$ as columns of a matrix $X$) and then solve $Y = Xb$ in the least-squares sense. $X$ won't be invertible, but any matrix algebra package can find the least-squares solution. |
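Here is a minimal sketch of that approach in Python/NumPy (the data points are made up purely for illustration):

```python
import numpy as np

# Hypothetical overdetermined data: 6 points, quadratic model (3 coefficients).
x = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])
y = np.array([1.1, 1.4, 2.1, 3.2, 4.9, 7.2])

# Design matrix with columns 1, x, x^2.
X = np.vander(x, N=3, increasing=True)

# Least-squares solution of Y = X b.
b, residuals, rank, _ = np.linalg.lstsq(X, y, rcond=None)
print("coefficients of b0 + b1*x + b2*x^2:", b)
```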
Log-Sum-Exp as an approximation of min function | We may rearrange the $x_i$ so that they are sorted $x_1 \leq x_2 \leq \cdots \leq x_N$. Then
\begin{align*}
f(\tau, x_1, \dots, x_N) &= -\tau \ln \left( \frac{1}{N}\sum_{i=1}^N \mathrm{e}^{-x_i/\tau} \right) \\
&= -\tau \left( \ln\left( \frac{1}{N}\mathrm{e}^{-x_1/\tau} \right) + \ln \left( 1 + \sum_{i=2}^N \mathrm{e}^{(x_1-x_i)/\tau} \right) \right) \\
&= -\tau \left( -\ln(N) - \frac{x_1}{\tau} + \ln \left( 1 + \sum_{i=2}^N \mathrm{e}^{(x_1-x_i)/\tau} \right) \right) \\
&= \tau \ln(N) + x_1 - \tau \ln \left( 1 + \sum_{i=2}^N \mathrm{e}^{(x_1-x_i)/\tau} \right) \text{.}
\end{align*}
Therefore,
$$ |f(\tau, x_1, \dots, x_N) - x_1| = \left| \tau \ln(N) - \tau \ln \left( 1 + \sum_{i=2}^N \mathrm{e}^{(x_1-x_i)/\tau} \right) \right| \text{.} $$
For $\tau$ sufficiently small, the sum expression in the the parentheses is $\varepsilon \ll 1$, so \begin{align*}
|f(\tau, x_1, \dots, x_N) - x_1| &= \left| \tau \ln(N) - \tau \ln \left( 1 + \varepsilon \right) \right| \\
&= \tau \left| \ln(N) - \left(\varepsilon + O(\varepsilon^2) \right) \right| \\
&\approx \tau \ln N \text{.}
\end{align*}
Visually inspecting graphs of $f$ versus $\tau$ for several choices of $N$ and $x_i$, this does capture the near zero behaviour. |
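A small numerical sketch (added here as an illustration, not from the original answer) confirming that the error behaves like $\tau\ln N$ for small $\tau$:

```python
import numpy as np

def soft_min(x, tau):
    """-tau * log( (1/N) * sum_i exp(-x_i / tau) ), a smooth approximation of min(x)."""
    x = np.asarray(x, dtype=float)
    z = -x / tau
    m = z.max()                       # log-sum-exp shift for numerical stability
    return -tau * (m + np.log(np.exp(z - m).mean()))

x = np.array([0.3, 1.0, 2.5, 4.0])
N = len(x)
for tau in (1.0, 0.1, 0.01, 0.001):
    err = abs(soft_min(x, tau) - x.min())
    print(f"tau = {tau:6.3f}:  |f - min| = {err:.6f},  tau*ln(N) = {tau * np.log(N):.6f}")
```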
Norms on general integral domains. | No. For instance, on any integral domain you can define a norm by $N(0)=0$ and $N(x)=1$ for all $x\neq 0$. This will not satisfy the property you mention unless the integral domain is actually a field. |
Analytic continuation of several complex variables | The answer is yes and it is a consequence of the classical Hartogs's theorem on separately holomorphic functions. Precisely, let $u(a_1,\dots,a_n,z)$ be a holomorphic function on a domain $D_u\subset\mathbb{C}^{n+1}$ which is analytically continuable to a larger domain with respect to the variable $z$ as a function $v(a_1,\dots,a_n,z)$: then
$v$ is separately holomorphic with respect to $a_1,\dots,a_n$ for every $z\notin D_u$. To see this, choose $z_0\in D_u\cap\{z\in\mathbb{C}\}$ such that the Taylor series expansion of $v$ in $z_0$ has a convergence disk not entirely contained in $D_u\cap\{z\in\mathbb{C}\}$ and whose radius $R_{z_0}\geq c>0$ does not depend on $a_1,\dots,a_n$. Such a $z_0$ exists, since assuming the contrary would deny the possibility of analytically continuing $u$ with respect to $z$ outside $D_u\cap\{z\in\mathbb{C}\}$.
Then, evaluating this Taylor series at fixed
point $z_1\notin D_u\cap\{z\in\mathbb{C}\} $ inside its radius of convergence, we get
$$
\begin{split}
v(a_1,\dots,a_n, z_1)&=\sum_{k=0}^\infty\frac{1}{k!} \frac{\partial^k v}{\partial z^k}(a_1,\dots,a_n, z_0)(z_1-z_0)^k\\
&=\sum_{k=0}^\infty\frac{1}{k!} \frac{\partial^k u}{\partial z^k}(a_1,\dots,a_n, z_0)(z_1-z_0)^k
\end{split}\tag{1}\label{1}
$$
since $v=u$ on $D_u$. Now \eqref{1} implies that the $N$-th order Taylor polynomial
$$
v_N(\dots,a_j,\dots,z_1)=\sum_{k=0}^N\frac{1}{k!} \frac{\partial^k u}{\partial z^k}(\dots,a_j,\dots, z_0)(z_1-z_0)^k\tag{2}\label{2}
$$
can be considered as a sequence of holomorphic functions in each single variable $a_j$, converging uniformly (thanks to the holomorphicity of $u$ and to the uniform convergence of \eqref{1}) to $v$. This suffices to prove the separate analyticity of $v$, the analytic continuation of $u$, with respect to all its variables in the domain
$$
D_u\cup\{(a_1,\dots,a_n,z)|0<|z-z_0|\leq |z_1-z_0|\}.
$$
The same argument can be repeated to cover all of the domain $D_v$:
$v$ is separately holomorphic with respect to each of the variables $a_1,\dots,a_n, z\in D_v\varsupsetneq D_u$; thus, by Hartogs's theorem, it is jointly holomorphic with respect to all its variables on $D_v$. |
Prove that triangle $\triangle ABC \cong \triangle G H I$ . Explain each step. | First you should prove that $\triangle DEF \cong \triangle GHI$ by applying the ASA (Angle-Side-Angle) criterion:
angle $EDF \cong$ angle $HGI$;
$DF \cong GI$;
angle $DFE \cong$ angle $GIH$.
Finally you can prove that $\triangle ABC \cong \triangle GHI$ by applying the transitive property of congruence. |
Singletons in metric space | You have to show $X \setminus \{x\}$ is open. So if $p \neq x$, we know that $r = d(p,x) > 0$. Now show that $B(p,r) \subseteq X \setminus \{x\}$. |
Introductory book for homological algebra | Rotman's An Introduction to Homological Algebra, 2nd edition is a pretty solid intro to general homological algebra and is very accessible, although it doesn't deal with Koszul complexes.
Weibel's An Introduction to Homological Algebra is also accessible, but is more advanced in its delivery and topics than Rotman's book. It includes topics like Koszul complexes and local cohomology that are relevant to Cohen-Macaulay modules.
Relative Homological Algebra by Enochs and Jenda has a good look at topics in homological algebra, especially Gorenstein homological algebra and approximation theory. It is well contained, reasonable in length and has lots of good exercises. It includes regular sequences, Koszul complexes, local cohomology and has some discussions about CM modules.
There are two books on local cohomology that are accessible and contain information about CM modules, namely Twenty-four hours of local cohomology by Iyengar et al, and Local Cohomology: an algebraic introduction with geometric applications by Brodmann and Sharp. Both contain more general information about related homological algebra and commutative algebra.
The first three chapters of Cohen-Macaulay rings by Bruns and Herzog has a lot of information on local rings, depth and related homological algebra, including local cohomology and Koszul complexes. It is quite good as a reference, but you could learn from it as well. I would also put the homological algebra sections of Eisenbud's Commutative Algebra with a view to Algebraic Geometry in with this.
I think you could build a good foundation from any of these books, and there are many more books on homological algebra that I haven't mentioned that are probably just as good.
More technical and specialist books about Cohen-Macaulay modules are Cohen-Macaulay Representations by Leuschke and Wiegand, and Cohen-Macaulay modules over Cohen-Macaulay rings by Yoshino. These are monographs rather than textbooks, but provide a good motivation and illustration of the representation theory of CM modules. The former is more heavy on commutative algebra, while the latter is more concerned with the categorical properties of the CM modules. |
How can I show that the integral $\int_0^1 (1 - t^2)^{-3/4} dt$ is bounded? | Note here that the integrand $(1-t^2)^{-3/4}$ can be factored as $(1-t)^{-3/4}(1+t)^{-3/4}$, the second part of which is bounded in the defining interval $(0,1)$.
Thus it suffices to show that the integral $\int_0^1(1-t)^{-3/4}dt$ is bounded.
But then after linear change of variables $t \to 1-x$, $\int_0^1(1-t)^{-3/4}dt=\int_0^1x^{-3/4}dx$ is bounded as the primitive function of $x^{-3/4}$ is $4x^{1/4}$, bounded on $(0,1)$. |
The spectrum norm of difference of two matrices converging to 0 | Matrix perturbation theory might explain it. The intuition is that two close matrices also have close eigenvalues.
And since all matrix norms are equivalent up to a constant multiplier, this conclusion should also hold under other norms. |
Integral of $\frac{1}{x^2+4}$ Different approach | For this kind of integral, whatever other ideas one might have, the simplest way to proceed is to first factor out the constant in the denominator:
$$\frac{1}{x^2 + 4} = \frac{1}{4\cdot \left(\frac{x^2}{4} + 1\right)}$$
Keep in mind the $\frac{1}{4}$ factor, which you bring out of the integral.
Now use the substitution $y = \frac{x}{2}$ so $\text{d}y = \frac{1}{2}\text{d}x$
The resulting integral is $\int \frac{1}{y^2 + 1}\text{d}y = \arctan(y)$.
The final result will be then
$$\frac{1}{2}\arctan\left(\frac{x}{2}\right)$$
Moreover, your expression for the complex logarithm is wrong. The right one is:
$$\boxed{\log(a + ib) = \frac{1}{2}\log(a^2 + b^2) + i\ \text{Arg}(a + ib)}$$ |
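As a quick check (a sketch added here, not part of the original answer), the antiderivative can be compared against a numerical integral:

```python
import math
from scipy.integrate import quad

f = lambda x: 1.0 / (x ** 2 + 4)           # integrand
F = lambda x: 0.5 * math.atan(x / 2)       # proposed antiderivative

a, b = -1.0, 3.0
numeric, _ = quad(f, a, b)
print(numeric, F(b) - F(a))                # the two values should agree closely
```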
Calculating statistic for multiple runs | Let the random variable $W_1$ denote the result obtained from the first procedure, and let $W_2$ denote result obtained from the second procedure.
If $\mu$ is the true mean, then by the linearity of expectation we have $E(W_1)=E(W_2)=\mu$. Both $W_1$ and $W_2$ are unbiased estimators of the mean.
The difference is that the variance of $W_2$ may (and in general will) be greater than the variance of $W_1$. (It cannot be less.)
So though both are unbiased, $W_1$ is a better estimator than $W_2$.
The intuition: To see intuitively that $W_2$ has in general larger variance, imagine that we have exactly two runs, one with $1000$ performances of the experiment, and a second short one of $2$ performances of the experiment.
Look at $W_1$. It pools the results of the $1002$ experiments. The resulting sample mean is probably quite close to $\mu$.
Now look at $W_2$. The average we get by averaging the first $1000$ performances is probably quite close to $\mu$. But the average of the $2$ performances is likely to be some distance from $\mu$. So when this average is averaged in with the average of the first $1000$, it is likely to "contaminate" that average, making it more likely that $W_2$ is a significant distance from the true mean. In essence, we are giving equal weight to the reliable first run average and the quite unreliable second run average. |
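A small simulation sketch of the two estimators (the run sizes, the normal distribution, and its parameters are made up purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 5.0, 2.0
run_sizes = [1000, 2]            # one long run and one very short run, as in the example
trials = 20_000

w1, w2 = [], []
for _ in range(trials):
    runs = [rng.normal(mu, sigma, n) for n in run_sizes]
    w1.append(np.concatenate(runs).mean())        # W1: pool all observations
    w2.append(np.mean([r.mean() for r in runs]))  # W2: average the per-run means

print("Var(W1) =", np.var(w1))
print("Var(W2) =", np.var(w2))   # typically noticeably larger
```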
Finding length and width from depth using factors of a cubic equation? | Presumably your three factors represent length, width, and depth in some order. Each dimension has to be greater than zero, so the minimum $x$ is $5$. For that, the dimensions are $1 \times 4 \times 8$. As the length is conventionally greater than the width, the minimum width is $4$ and the minimum length is $8$. |
What is the minimum number of edges in this special graph? | Consider the graph $\overline G$, the graph with the same vertex set as $G$ and edge set defined by: $u$ and $v$ are connected in $\overline G$ if and only if they are not adjacent in $G$. Then the property
$$\text{For any $n$ vertices $v_1,\ldots,v_n$ in $G$, one vertex $v_1$ is adjacent to all of $v_2,\ldots,v_n$}\tag 1$$
is equivalent to
$$\text{For any $n$ vertices $v_1,\ldots,v_n$ in $\overline G$, one vertex $v_1$ is not adjacent to any of $v_2,\ldots,v_n$.}\tag 2$$
Claim 1. When $n$ is even, $(2)$ holds if and only if there are at least $n+1$ vertices of degree $0$.
$(\Rightarrow)$ Suppose, for a contradiction that there are at most $n$ vertices of degree $0$, then there are at least $n$ vertices of degree $1$ or more. Consider the subgraph $H$ of $\overline G$ induced by the $k\ge n$ vertices of positive degree. If $k=n$ then we are done since property $(2)$ cannot hold for these $n$ vertices. So suppose $k>n$; now I remove vertices from $H$ as described below, at each stage we do not have any vertices of $H$ with degree $0$ in the induced subgraph. If there is a connected component of $H$ of odd size, remove a vertex of degree $1$, if it can be found; otherwise remove an arbitrary vertex—if removing a vertex $u_1$ would have $\mathrm{deg}\, u_2=0$, then $u_2$ had degree $1$ before removal. If the smallest connected component of $H$ has size $2$ and all connected components have even size, then remove the smallest one (since $n$ is even, $k\ge n+2$ if there are no odd components). If the smallest connected component of $H$ is even and has size larger than $2$, remove a vertex of degree $1$, if it can be found; otherwise remove an arbitrary vertex—if removing a vertex $u_1$ would have $\mathrm{deg}\, u_2=0$, then $u_2$ had degree $1$ before removal. By repeating this, we can reach a point where $H$ has size $n$ and no isolated vertex, i.e. $(2)$ cannot be satisfied.
$(\Leftarrow)$ Any choice of $n$ vertices from $\overline G$ must contain one of the vertices of degree $0$, of which there are $n+1$ (pigeonhole principle).
Claim 2. When $n$ is odd, $(2)$ holds if and only if there are at least $n+1$ vertices of degree $0$, or no vertex of degree $2$ or more.
$(\Rightarrow)$ Reason as before, by contradiction, to get the subgraph $H$ of $\overline G$ induced by the $k\ge n$ vertices of positive degree. If $k=n$ we are done, so suppose $k>n$. Consider the connected component $C$ of $H$ which contains the vertex $v$ of degree at least $2$. If $C$ has even size $|C|\ge 4$, then remove a vertex of degree $1$, if it can be found; otherwise remove an arbitrary vertex. Hence, we may assume that $C$ has odd size, say $|C|=m\ge 3$, and no isolated vertices. Call the remainder $H^\prime=\overline G-C$ minus the aforementioned removed vertex—if applicable. In the same way as before, trim $H^\prime$ down until it has size $k^\prime=n-m$, at each stage prioritising odd sized connected components, then components of size $2$, then larger even sized components. (Note that if there are no odd components of $H^\prime$, the smallest component is of size $2$, and $|H^\prime|>n-m$, then since $|H^\prime|$ and $n-m$ are both even, the smallest component can be safely removed while preserving the inequality $|H^\prime|\ge n-m$.) After this is done, $H^\prime+C$ has size $n$ and no isolated vertices, so property $(2)$ cannot hold.
$(\Leftarrow)$ If $\overline G$ has no vertex of degree $2$ or more, then it is a collection of $K_2$s and $K_1$s. Since $n$ is odd, any choice of $n$ vertices will have a vertex that is in its own connected component. If there is a vertex of degree $2$ or more then, as in claim one, any choice of $n$ vertices from $\overline G$ must contain one of the vertices of degree $0$.
Bounds for the properties. A minimum bound $M$ on the number of edges to satisfy $(1)$ is an upper bound on the number of edges that can satisfy $(2)$.
When $n$ is even, at least $n+1$ vertices need to be degree $0$; including all possible edges in the remaining $n-1$ vertices gives $\tfrac 12(n-1)(n-2)$ edges. When $n$ is odd, we can do better when $n=1$, $n=3$ or $n=5$: a graph with no vertex of degree $2$ or more is (at best) a collection of $K_2$s, and has $n$ edges. Then $M$ is the difference between $\tfrac 12(2n)(2n-1)$ and these numbers. Hence
$$M=\begin{cases}
2n^2-2n&\text{$n=1$, $n=3$ or $n=5$},\\
\tfrac 12(3n^2+n-2)&\text{otherwise}.
\end{cases}$$
Remark. It is also possible to work directly with property $(1)$. Notice that (excluding the special cases $n=1$, $3$, $5$) graphs that satisfy the minimum bound are split graphs of $2n$ vertices with clique of size $n+1$ and independent set of size $n-1$. The number of edges in such graphs is $$(2n-1)+(2n-2)+\cdots+(n-1).$$
The crux here being that any choice of $n$ vertices will intersect with the clique, hence contains a vertex adjacent to the other $n-1$ vertices. We cannot improve on this bound, because removing an edge $u_1u_2$ from the clique will produce a set $\{u_1,u_2,v_3,\ldots,v_n\}$ (where $v_3,\ldots,v_n$ are from the independent set) which does not satisfy $(1)$. Since $v_3,\ldots,v_n$ form an independent set, they cannot be the 'special' vertex which is adjacent to the others. Nor can $u_1$ and $u_2$ have the property—they are not adjacent to each other!
It would take a bit of work to show that this is an optimal formulation for all $n$ except $1$, $3$ and $5$ (I am not sure how to do that). |
If a series is conditionally convergent, then the series of positive and negative terms are both divergent | Note that
$$b_n=\dfrac{a_n+|a_n|}{2} \;\;(\geqslant 0), \\
c_n=\dfrac{a_n-|a_n|}{2} \;\;(\leqslant 0)
$$
and
$$c_n=a_n-b_n.$$
If $\sum a_n$ converges (conditionally) and $\sum b_n$ is convergent (absolutely, since $b_n\geqslant 0$), then $\sum {c_n}=\sum(a_n-b_n)$ is convergent (absolutely, since $c_n\leqslant 0$). Because $|a_n|=b_n-c_n$, the series $\sum |a_n|$ then converges, i.e. $\sum a_n$ is absolutely convergent, which contradicts its conditional convergence. |
Modified $L^2$ space is a Hilbert space | If you set $\mu (A) =\int_A h(x)\, dx $ for all Lebesgue measurable sets $A\subset [a,b],$ then $\mu$ is a positive measure and $$\int_a^b f(x)g^* (x) h(x)\, dx =\int_{[a,b]} f(t)g^*(t)\, \mu (dt), $$ hence the space that you consider becomes the space $L^2 (\mu )$, and all such spaces are complete. A proof of this fact can be found in almost any book on functional analysis. |
Conjecture: When does $n=ab$, with $a\leq b\leq 2a$? | The numbers for which such a factorization exists have been studied by Maier and Tenenbaum, On the set of divisors of an integer, Invent. Math. 76:1 (1984), pp. 121-128. Erdos conjectured that almost all positive integers $n$ have a factorization $n=ab$ with $a\le b\le2a$ ("almost all" means all but a set of density zero), and Maier and Tenenbaum proved a strong form of this conjecture. There are more recent, more general results by Kevin Ford, The distribution of integers with a divisor in a given interval, Annals of Mathematics, 168 (2008), 367–433.
Both papers are freely available online. |
Elementary Geometry, proof:Isosceles triangle base \overline AB halves a length. | Reflect $D$ in $A$. Denote the new point by $X$. Can you prove that $XE \parallel AB$? Can you see how it follows that $F$ is the midpoint of $DE$? |
prove that set is polyhedral | For $u \in \mathbb R^n$ and $\eta \in \mathbb R$ let $H(u,\eta) = \{ h \in \mathbb R^n \mid (h,u) \le \eta \}$.
Let us first consider $\eta < 0$. Then $C = \emptyset$, which is a polyhedron (in fact, the intersection of $H(e_1,-1)$ and $H(-e_1,-1)$ for $e_1 = (1,0,0,\ldots)$).
For $\eta \ge 0$ it is rather technical.
For $h = (h_1,\ldots,h_n)$ let $w(h) = (w_1(h),\ldots,w_{n-1}(h))$ where $w_i(h) = 1$ if $h_i \ge h_{i+1}$ and $w_i(h) = 0$ else. Then
$$\phi(h) = \sum_{i=1}^{n-1} \lvert h_i - h_{i+1} \rvert_+ = \sum_{i=1}^{n-1} (h_i - h_{i+1})w_i(h) .$$
The set
$$W = \{w(h) \mid h \in \mathbb R^{n} \} = \{(w_1,\ldots,w_{n-1}) \mid w_i \in \{0,1\} \} $$
has $2^{n-1}$ elements. For each $w = (w_1,\ldots,w_{n-1}) \in W$ define $u(w) = (u_1(w),\ldots,u_n(w))$ by
$$u_1(w) = w_1 ,\\
u_i(w) = w_i - w_{i-1} \text{ for } 1 < i < n ,\\
u_n(w) = -w_{n-1} .$$
Let
$$S = \bigcap_{w \in W} H(u(w),\eta) .$$
We claim that $S = C$.
$\left( h, u(w) \right) = \sum_{i=1}^{n-1} (h_i - h_{i+1})w_i$: In fact, $$\left( h, u(w) \right) = \sum_{i=1}^n h_iu_i(w) = h_1w_1 + \sum_{i=2}^{n-1} h_i(w_i - w_{i-1}) - h_nw_{n-1} \\= \sum_{i=1}^{n-1} h_iw_i - \sum_{i=1}^{n-1}h_{i+1}w_i = \sum_{i=1}^{n-1} (h_i - h_{i+1})w_i .$$
$\phi(h) = \left( h, u(w(h)) \right)$: In fact, $$\left( h, u(w(h)) \right) = \sum_{i=1}^{n-1} (h_i - h_{i+1})w_i(h) = \phi(h) .$$
$S \subset C$: If $h \in S$, then $h \in H(u(w(h)),\eta)$, thus $\phi(h) \le \eta$, i.e. $h \in C$.
$\phi(h) \ge \left( h, u(w) \right)$ for all $w \in W$: This is equivalent to $\sum_{i=1}^{n-1} (h_i - h_{i+1})w_i(h) \ge \sum_{i=1}^{n-1} (h_i - h_{i+1})w_i$, i.e. to $\sum = \sum_{i=1}^{n-1} (h_i - h_{i+1})(w_i(h) - w_i) \ge 0$. But if $w_i(h) = 1$, then $h_i - h_{i+1} \ge 0$ and $w_i(h) - w_i \ge 0$, and if $w_i(h) = 0$, then $h_i - h_{i+1} < 0$ and $w_i(h) - w_i \le 0$, hence all summands in $\sum$ are nonnegative.
$C \subset S$: If $h \in C$, then for all $w \in W$ we get $(h,u(w)) \le \phi(h) \le \eta$, i.e. $h \in H(u(w),\eta)$.
For $\ell^2$ this argument is no longer valid because (1) there are uncountably many sequences $w = (w_1,w_2,\ldots)$ with $w_i \in \{0,1\}$ and (2) $u(w)$ is in general no element of $\ell^2$. However, this is no strict proof that $C$ is not polyhedral in that case. |
$\mathfrak{sp}_4$ is a subspace of the vector space of all $4\times 4$ matrices | To check that a subset $S$ of a vector space (over, say, the field $\mathbb{F}$) is a vector subspace, we need only check:
That the subset is nonempty ($S \neq \varnothing$).
That the subset is closed under scalar multiplication (for all $f \in \mathbb{F}$, $s \in S$, we have $fs \in S$).
That the subset is closed under addition (for all $s_1, s_2 \in S$, we have $s_1 + s_2 \in S$). |
every closed ball is compact | Choose $r > 0$ large enough that $B(x,r) \cap A \ne \varnothing$, where $B(x,r)$ is the open ball of radius $r$ centered at $x$. Then $\overline{B(x,r)} \cap A$ is a closed subset of a compact set, and therefore is compact.
Let $(a_n)_{n=1}^\infty \subset A \cap \overline{B(x,r)}$ be a sequence with $\rho(x,a_n) \to \inf_{a' \in A} \rho(x,a')$ (such a sequence exists: see next paragraph). By taking a subsequence, we can assume $a_n \to a$ for some $a \in A$. It follows that $\rho(x,a) = \inf_{a' \in A} \rho(x,a')$ (triangle inequality).
Let $(a'_n)_{n=1}^\infty \subset A$ have $\rho(x,a'_n) \to \inf_{a' \in A} \rho(x,a')$. Note that there is some $a \in B(x,r) \cap A$ by assumption, so $\inf_{a' \in A} \rho(x,a') < r$, and so we can find $N$ such that $n \geq N$ implies $\rho(x,a'_n) < r$. |
Why closed points of a variety are Zariski dense? | It's not true that "every single point in a variety is closed in the Zariski topology". Look at the variety $\mathbb{A}^1 = \operatorname{Spec} \Bbbk[x]$. The generic point corresponding to the zero ideal $(0) \subset \Bbbk[x]$ is not closed. However the set of all closed points of $\mathbb{A}^1$ is dense for the Zariski topology (meaning that its closure is all of $\mathbb{A}^1$). See e.g. this question for a reference (an algebraic variety is in particular of finite type). |
For $H \le G$ and $N\unlhd G$, prove that $HN$ is the smallest subgroup containing $H$ and $N$. | I would argue you've already done the hard part! I suspect you're overthinking this.
Here's a hint: Recall elements of $HN$ look like elements $hn$ with $h \in H$ and $n \in N$. Now let $K \leq G$ be any subgroup containing $H$ and $N$. We want to to show that $HN \leq K$.
Since $H \leq K$, we know that every $h \in H$ lies in $K$. Since $N \leq K$, we know that every $n \in N$ lies in $K$ too. Then, since $K$ is a subgroup...
I hope this helps ^_^ |
Implicit differentiation question | Expanding your final expression, we have
\begin{align*}
y'' &= \frac{-n(n-1)x^{n-2} -n(n-1)y^{n-2}\dfrac{x^{2n-2}}{y^{2n-2}}}{ny^{n-1}}\\
&= \frac{-n(n-1)x^{n-2} -n(n-1)y^{-n}x^{2n-2}}{ny^{n-1}}\\
&= \frac{-n(n-1)x^{n-2}(1 + x^{n}y^{-n})}{ny^{n-1}}\\
&= \frac{-(n-1)x^{n-2}(1 + x^{n}y^{-n})}{y^{n-1}}.
\end{align*}
Now note that $x^n + y^n = 1$, so $x^ny^{-n} + 1 = y^{-n}$. Therefore
\begin{align*}
y'' &= \frac{-(n-1)x^{n-2}(1 + x^{n}y^{-n})}{y^{n-1}}\\
&= \frac{-(n-1)x^{n-2}y^{-n}}{y^{n-1}}\\
&= \frac{-(n-1)x^{n-2}}{y^{2n-1}}.
\end{align*} |
I don't understand the motion of this particle | HINT. An object whose position is given by $f(t)$ is accelerating if $f''(t)>0$ because then its velocity ($f'(t)$) would be increasing.
As for the last part, I assume what they mean by 'direction' here is whether the particle is moving 'up' or 'down'. To see in which direction the particle is moving, note that it moves in the direction of its tangent line. The slope of the tangent line is given by $f'(t)$. So if the particle is moving downward its tangent line has negative slope, i.e. $f'(t)<0$, and if it is moving upward its tangent line has positive slope, i.e. $f'(t)>0$.
It can also help to compare what you are doing to the actual plot of the particles movement. See this graph here. |
Order statistics of scaled beta distributions (Project Euler 573) | The problem in your calculations is that you assume that the startings positions $D_k$ are independent. However, since they all have support $[0, 1]$ and $D_k < D_{k +1}$ almost surely, they are not independent. Consequently, the running times $T_k$ are also dependent, which is why your second simulation yields incorrect values. |
Solution of this ODE comprising Airy functions | The solution to the equation
$$
\frac{{\rm d}^2 f(y)}{{\rm d}y^2} - y f(y) = 0 \tag{1}
$$
is
$$
f(y) = \alpha {\rm Ai}(y) + \beta {\rm Bi}(y) \tag{2}
$$
where $\rm Ai$ is the Airy function and $\rm Bi$ is the associated Airy function of the second kind. The solution to the problem
$$
\frac{{\rm d}^2 f(y)}{{\rm d}y^2} - y f(y) = \color{red}{-m} \tag{3}
$$
is therefore
$$
f(y) = \alpha {\rm Ai}(y) + \beta {\rm Bi}(y) + \color{red}{g_m(y)} \tag{4}
$$
where $g_m(y)$ is a particular solution to the non-homogeneous problem, which in this case is not trivial at all. This is what Wolfram Alpha gives as a result
\begin{eqnarray}
f(y) &=& \alpha \,\text{Ai}(y)+\beta \,\text{Bi}(y) \\
&&{}+\Big[ 3\cdot 3^{5/6}\, \pi m y \,\Gamma\!\left(\tfrac{1}{3}\right) \Gamma\!\left(\tfrac{5}{3}\right) \text{Ai}(y)\, {}_1F_2\!\left(\tfrac{1}{3};\tfrac{2}{3},\tfrac{4}{3};\tfrac{y^3}{9}\right)
-3 \sqrt[3]{3}\, \pi m y \,\Gamma\!\left(\tfrac{1}{3}\right) \Gamma\!\left(\tfrac{5}{3}\right) \text{Bi}(y)\, {}_1F_2\!\left(\tfrac{1}{3};\tfrac{2}{3},\tfrac{4}{3};\tfrac{y^3}{9}\right) \\
&&{}\quad+3 \sqrt[6]{3}\, \pi m y^2\, \Gamma\!\left(\tfrac{2}{3}\right)^{2} \text{Ai}(y)\, {}_1F_2\!\left(\tfrac{2}{3};\tfrac{4}{3},\tfrac{5}{3};\tfrac{y^3}{9}\right)
+3^{2/3}\, \pi m y^2\, \Gamma\!\left(\tfrac{2}{3}\right)^{2} \text{Bi}(y)\, {}_1F_2\!\left(\tfrac{2}{3};\tfrac{4}{3},\tfrac{5}{3};\tfrac{y^3}{9}\right) \Big]
\Big/ \left[27\, \Gamma\!\left(\tfrac{2}{3}\right) \Gamma\!\left(\tfrac{4}{3}\right) \Gamma\!\left(\tfrac{5}{3}\right)\right]
\end{eqnarray} |
Calculating probability of broken machine using Bayes Theorem | Here is my take on the problem interpretation: the machine is either broken or it is not. If it is broken, all the parts it produces are defective, else they are all perfect.
Let's say $B$ is the event the machine is broken, and $F$ is the event that the first part fails the test. Then for (a),
$$\begin{align}
P(B|F) &= \frac{P(B \cap F)}{P(F)} \\
&= \frac{P(F|B) \; P(B)}{P(F|B) \; P(B) + P(F|B^c) \; P(B^c)} \\
&= \frac{0.99 \times 0.005}{0.99 \times 0.005 + 0.01 \times 0.995} \\
&= 0.3322
\end{align}$$ |
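A quick numerical sketch of this computation (the probabilities are the ones used in the answer above):

```python
p_broken = 0.005                 # prior P(B)
p_fail_given_broken = 0.99       # P(F | B)
p_fail_given_ok = 0.01           # P(F | B^c)

p_fail = p_fail_given_broken * p_broken + p_fail_given_ok * (1 - p_broken)
p_broken_given_fail = p_fail_given_broken * p_broken / p_fail
print(round(p_broken_given_fail, 4))   # 0.3322
```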
Given $g(m) = \sqrt{m-4}$, solve for $g(m) = 2$. | $\sqrt{m-4}=2$, so squaring, we have $m-4=4$. |
Find local maximum or minimum in 2 variable function | Here is a solution that avoids using the Hessian.
The function is always concave (parabolic) in $s_1$. Holding $s_2$ fixed, the function reaches its maximum value at $s_1=1+bs_2$. Since the function is concave in $s_1$ it has no local minimum.
Does it have a local maximum? If so, it must occur where $s_1=1+bs_2$. Plugging in this to the objective yields a function that is convex and parabolic in $s_2$. Because it is convex in $s_2$ it cannot have a local maximum.
Since the function cannot have a local maximum or minimum, any stationary point must be a saddle. |
Show this summation diverges | HINT: Suppose that you can find a positive number $c$ and a positive integer $n_0$ such that $n^{1/n}<c$ for all $n\ge n_0$; then you'd have $$\frac1{nn^{1/n}}\ge\frac1{cn}$$ for all $n\ge n_0$. What do you know about $$\lim_{n\to\infty}n^{1/n}\;?$$ |
Does $\int _{\mathbb{R}}f_{j} dm \rightarrow 0$ imply $f_{j} \rightarrow 0$? | Consider the sequence
\begin{align}
f_{1,2}(x)&=\chi_{[0,1/2]}(x) \\
f_{2,2}(x)&=\chi_{[1/2,1]}(x) \\
f_{1,3}(x)&=\chi_{[0,1/3]}(x) \\
f_{2,3}(x)&=\chi_{[1/3,2/3]}(x) \\
f_{3,3}(x)&=\chi_{[2/3,1]}(x)\\
\vdots
\end{align}
The sequence of integrals goes to $0$ but the sequence does not converge to $0$ pointwise anywhere in $[0,1]$. |
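A sketch (added here for illustration) generating this "typewriter" sequence of indicators and showing both facts numerically:

```python
def typewriter(max_level):
    """Yield ((a, b), indicator) for each block [k/m, (k+1)/m], m = 2..max_level, k = 0..m-1."""
    for m in range(2, max_level + 1):
        for k in range(m):
            a, b = k / m, (k + 1) / m
            yield (a, b), (lambda t, a=a, b=b: float(a <= t <= b))

# The integrals (interval lengths) tend to 0 ...
print([round(b - a, 3) for (a, b), _ in typewriter(5)])

# ... but at any fixed point, e.g. t = 0.3, the values keep returning to 1:
print([f(0.3) for _, f in typewriter(5)])
```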
When do we need parentheses to change order of operations? | In the strictest sense, an expression such as $1+2+3$ isn't even defined, only $(1+2)+3$ and $1+(2+3)$ are. It is the law of associativity that allows us to interchange the latter two expressions and motivates the usual convention of dropping parentheses altogether in such sums. In view of this, the "right" to evaluate a sum in arbitrary order is already implicit in the legitimacy of leaving out parentheses. Similarly for products.
The fact that we write $1+2\cdot 3$ without parentheses, although $(1+2)\cdot 3$ and $1+(2\cdot 3)$ differ, is not based on an arithmetic law, but rather on the convention that multiplication and division precede addition and subtraction. That is, $1+2\cdot 3$ is really a shorthand for $1+(2\cdot 3)$ whereas there is no shorthand for $(1+2)\cdot 3$.
Finally, there is a similar convention, namely that the non-associative operations of subtraction (as well as division) are to be done left to right. That is, of the two expressions $(1-2)-3$ and $1-(2-3)$, only the first has the shorthand notation $1-2-3$.
Note however that exponentiation is not associative, e.g. $(2^3)^4\ne 2^{(3^4)}$. Since the former can be written simply as $2^{3\cdot4}$, we have the convention that the expression $2^{3^4}$ is a shorthand for $2^{(3^4)}$.
I think the best way to view this is in the form of a tree where each node is either a leaf labelled with a number or (if the node has a left and a right subtree) an operator $+,-,\cdot,/$ or exponentiation. You may additionally introduce negative signs and functions as unary operators (nodes with one subtree). For associative operations such as addition and multiplication, you may loosen these rules and allow more than two subtrees. This tree determines the order of operation (note that there are no parentheses needed to build the tree): In order to perform an addition, subtraction etc. you need to first compute the two subtrees and then combine these two results accordingly. Note that the overall sequence of operation is only very loosely defined/restricted by this: You can first evaluate the left tree, then the right tree, or vice versa, or intertwined. Only the "top" operation must be last. This is all there is behind the rules of precedence and parentheses: They clarify which of several possible trees is intended. |
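To make the tree picture concrete, here is a small sketch (my own illustration) that evaluates such expression trees; note that no precedence rules are needed once the tree is fixed:

```python
import operator

OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul,
       '/': operator.truediv, '^': operator.pow}

def evaluate(e):
    """Evaluate an expression tree given as (op, left, right), or a bare number.

    The tree itself fixes the order of operations."""
    if isinstance(e, tuple):
        op, left, right = e
        return OPS[op](evaluate(left), evaluate(right))
    return e

# 1 + 2 * 3 corresponds to ('+', 1, ('*', 2, 3)); (1 + 2) * 3 is a different tree.
print(evaluate(('+', 1, ('*', 2, 3))))   # 7
print(evaluate(('*', ('+', 1, 2), 3)))   # 9
```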
Proof of time translation invariance of Brownian Motion. Missing assumption? | I think you should just rename $t_0$ to be $\tau$ and see if it doesn't fix things. You seem to be mixing the translating in time with the increments in time themselves.
Proof: Let us take properties $1.$, $3.$ and $4.$ in here for granted as to stochastic process $\widetilde{W}(t)$ and focus on the proof of property $2.$ for $\widetilde{W}(t)$. First, consider that for any $s<t$:
$$\widetilde{W}(t)-\widetilde{W}(s)=W(t+\tau)-W(s+\tau)\tag{1}$$
To check property $2.$, we may assume that $\tau>0$. Then, for any $0\leq t_0<t_1<\ldots<t_n$, we have $0<\tau\leq t_0+\tau<t_1 + \tau<\ldots<t_n+\tau$. By property $2.$ for $W(t)$, $W(t_k+\tau)-W(t_{k-1}+\tau)$, $k=1,2,\ldots,n$ are independent random variables. Thus, by $(1)$, the random variables $\widetilde{W}(t_k)-\widetilde{W}(t_{k-1})$, $k=1,2,\ldots,n$ are independent and so $\widetilde{W}(t)$ satisfies property $2.$. |
What is the meaning of a unique expression? | Each polynomial of the form $c(x+4)^2$, with $c \ne 0$, has a double root at $x=-4$.
If $c(x+4)^2$ is monic, then $c=?$ |
Find coordinates of a point on a square | It is easy to find the line $AB$, since it is perpendicular to $CB$; you know the length of $AB$, so you can find $A$. Then the line parallel to $CB$ cuts the $x$-axis in $S$.
Next question: if you use vectors to describe the lines, with the direction vector taken as a unit vector,
then in your first question you could also find $A$ by adding the vector $\vec{BA}$ to $B$. |
Generators of $\mathbb{Z} \times \mathbb{Z}$ | Just think about the group that is generated by $(1,1)$. Okay, so you certainly have $(1,1)$. You must also have the identity $(0,0)$. Also you have $2\cdot (1,1) = (2,2)$, and $3\cdot (1,1) = (3,3)$,...see the pattern?
The group generated by $(1,1)$ is the subgroup of $\mathbb{Z}\times\mathbb{Z}$ consisting of elements of the form $(n,n)$. This is clearly not the entirety of $\mathbb{Z}^2$.
Now what about the group generated by $(1,0)$ and $(0,1)$? Every element generated by this group has the form $a\cdot (1,0) + b\cdot (0,1) = (a,b)$, which clearly is all of $\mathbb{Z}^2$. |
What is monomial basis for polynomial? | If you're studying vector spaces, then the space $V$ of polynomials spanned by $\{1, X, X^{2}, \ldots, X^{n-1}\}$ is an example of a vector space of dimension $n$, with each 'vector' in $V$ being a linear combination of the basis 'vectors' $\{1, X, X^{2}, \ldots, X^{n-1}\}$; that is, a polynomial of the form
$$
a_{0} + a_{1}X + \ldots + a_{n-1}X^{n-1}.
$$
This basis is monomial since each 'vector' is polynomial with one term, and can be viewed as the polynomial analogue of the standard basis $\{e_{1}, e_{2}, \ldots, e_{n}\}$ for $\mathbb{R}^{n}$. Of course, there are other bases for $V$ that are not monomial; for example, the space spanned by $\{1, X, X^{2}\}$ also has the basis $\{1, 1 + X, 1 + X + X^{2}\}$ which is clearly not monomial.
In general, what is this monomial basis for? Just for describing the space of polynomials, but this basis is the 'easiest' to work with when you first start learning about polynomial vector spaces, much like the standard basis $\{e_{1}, e_{2}, e_{3}\}$ is the 'easiest' basis to work with when you begin learning about $\mathbb{R}^{3}$, but there are plenty of other bases for it.
As I mentioned in the comments, despite $X$ being a variable, there is really no ambiguity in the basis since two polynomials $f(X)$ and $g(X)$ are linearly independent if and only if they are not scalar multiples of each other for all $X$ in your domain; equivalently, if $X \in [a, b]$, then $f$ and $g$ are linearly independent if and only if
$$
\alpha_{1}\cdot f(X) + \alpha_{2}\cdot g(X) = 0
$$
has solution $\alpha_{1} = \alpha_{2} = 0$ as $X$ ranges across the whole of $[a, b]$. For example, if $f(X) = \sin(X)$ and $g(X) = \cos(X)$ with $X \in [0, \pi/4]$, then sure $\sin(\pi/4) = \cos(\pi/4)$, but for all other values in $[0, \pi/4]$ they are different and not related by a constant scalar, so
$$
\alpha_{1}\sin(X) + \alpha_{2}\cos(X) = 0 \hspace{20pt} \forall X \in [0, \pi/4]
$$
only has the solution $\alpha_{1} = \alpha_{2} = 0$, since such a solution must work for all values of $X$ in the domain. Hence $\sin(X)$ and $\cos(X)$ are linearly independent over $[0, \pi/4]$. |
Finding $(x-1)^{-2}$ in Quotient Ring | Well, $(x-1)^{-2}=((x-1)^{-1})^2$, no? As for the rest, you know that $$(x-1)(x^2+1)\equiv -3\pmod{x^3-x^2+x+2}$$ In other words, that $(x-1)\left(-\frac13x^2-\frac13\right)\equiv 1\pmod{x^3-x^2+x+2}$. |
What are the coefficients in the expansion of the (integer) power of a sum | By multinomial
$$ \left(\sum_{i=1}^n a_i \right)^k =\sum_{k_1+\dots +k_n=k}{k\choose k_1,\dots, k_n}a_1^{k_1}\cdots a_n^{k_n} $$
where $${k\choose k_1,\dots, k_n}=\frac{k!}{k_1!\cdots k_n!}$$ |
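A quick symbolic sanity check of the identity (a sketch using SymPy, for three variables and $k=4$):

```python
from math import factorial
from itertools import product
from sympy import symbols, expand

a1, a2, a3 = symbols('a1 a2 a3')
k = 4

lhs = expand((a1 + a2 + a3) ** k)

# Multinomial expansion built term by term.
rhs = 0
for k1, k2, k3 in product(range(k + 1), repeat=3):
    if k1 + k2 + k3 == k:
        coeff = factorial(k) // (factorial(k1) * factorial(k2) * factorial(k3))
        rhs += coeff * a1**k1 * a2**k2 * a3**k3

print(expand(lhs - rhs) == 0)   # prints True
```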
Asymptotic expansion as $N \rightarrow \infty$ of $\sum_{k=1}^{\left\lfloor{N/2}\right\rfloor} k \sum_{e \mid 2k}\frac{\Lambda \left({e}\right)}{e}$ | $$\sum_n (2n)^{-s} 2n \sum_{2d+1|2n} \frac{\Lambda(d)}d= (\frac{-\zeta'(s)}{\zeta(s)}-\frac{2^{-s}\log 2}{1-2^{-s}}) 2^{1-s}\zeta(s-1)$$
Following the same Tauberian theorem as in the proof of the PNT we get
$$\sum_{2n\le x}2n \sum_{2d+1|2n}\frac{\Lambda(d)}d\sim Res((\frac{-\zeta'(s)}{\zeta(s)}-\frac{2^{-s}\log 2}{1-2^{-s}}) 2^{1-s}\zeta(s-1)\frac{x^s}{s},2)$$ $$\sim \frac{x^2}{4}(\frac{-\zeta'(2)}{\zeta(2)}-\frac{2^{-2}\log 2}{1-2^{-2}})$$
The error term is $O(\frac{x^2}{\log^k x})$, under the RH it can be improved to $O(x^{3/2+\epsilon})$. |
Convergence or divergence of $\sum_{n=1}^\infty \frac{2+\cos n}{1-\sqrt n}$ | Assuming you mean that you start from $n=2$,
HINT
$$\sum_{n=2}^{\infty}\frac{2+\cos n}{\sqrt{n}-1}\ge \sum_{n=2}^{\infty}\frac{1}{\sqrt{n}-1}$$
Also, we have that $$\frac{{1}}{n^{\frac{2}{3}}}<\frac{1}{\sqrt{n}-1}$$
For sufficiently large $n$.
Using the $p$-series with $p=\tfrac23$ and the comparison above, note that $\sum_{n=2}^{\infty}\frac{1}{\sqrt{n}-1}$ diverges. |
Let S be a Set and let A be a subset of S. how many options there are to choose 2 subsets from S that their intersection is exactly A? | As you say, both subsets have to include $A$ so start there. Then each of the $n-k$ other elements of $S$ can go in one subset, the other, or neither, for three choices. That would give $3^{n-k}$ ordered ways to choose the subsets. They are ordered because we started by saying one was the first subset.
If you care about unordered pairs of subsets you have to note that the one choice of not adding any elements to either subset is symmetric but all the others can be interchanged to produce another ordered pair, so we divide the nonsymmetric pairs by $2$ and add back in the symmetric pair, giving $$\frac 12\left(3^{n-k}-1\right)+1=\frac 12\left(3^{n-k}+1\right)$$ |
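A brute-force sketch verifying both counts for a few small (hypothetical) values of $n$ and $k$:

```python
from itertools import combinations

def count_pairs(n, k):
    S = set(range(n))
    A = set(range(k))                     # a fixed subset of size k
    supersets = [A | set(extra)
                 for r in range(n - k + 1)
                 for extra in combinations(S - A, r)]
    ordered = sum(1 for B in supersets for C in supersets if B & C == A)
    unordered = sum(1 for i, B in enumerate(supersets)
                    for C in supersets[i:] if B & C == A)
    return ordered, unordered

for n, k in [(4, 1), (5, 2), (6, 3)]:
    o, u = count_pairs(n, k)
    print(n, k, o == 3 ** (n - k), u == (3 ** (n - k) + 1) // 2)   # True True
```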
SVD decomposition representation | I assume that $m \le n$ holds? Then note:
$$AA^t = U \Sigma V^t V \Sigma^t U^t = U \Sigma \Sigma^t U^t$$
Since $A$ has full rank, $\Sigma \Sigma^t$ also has full rank. From $U \Sigma \Sigma^t U^t \, U (\Sigma\Sigma^t)^{-1}U^t = I$, we can infer $(AA^t)^{-1} = U(\Sigma\Sigma^t)^{-1}U^t$. This yields:
$$A^t(AA^t)^{-1} = V \Sigma^t U^t U (\Sigma\Sigma^t)^{-1} U^t = V \Sigma^t (\Sigma \Sigma^t)^{-1} U^t.$$
We have $\Sigma = \begin{pmatrix}\Sigma_{m \times m} & 0\end{pmatrix}$ and therefore $\Sigma \Sigma^t = \Sigma_{m \times m} \Sigma_{m \times m}^t = \Sigma_{m \times m}^2$. This implies $$\Sigma^t(\Sigma \Sigma^t)^{-1} = \begin{pmatrix}\Sigma_{m \times m}^{-1} \\ 0\end{pmatrix}.$$
Edit: If $u_i$ resp. $v_i$ are the rows of $U$ resp. $V$, you can simplify this further.
$$\begin{pmatrix}v_1 \\ v_2 \\ \vdots \\v_n\end{pmatrix} \begin{pmatrix}\Sigma_{m \times m}^{-1} \\ 0 \end{pmatrix} \begin{pmatrix}u_1^t & u_2^t & \cdots & u_m^t\end{pmatrix} =\begin{pmatrix}v_1 \\ v_2 \\ \vdots \\v_n\end{pmatrix} \begin{pmatrix}\Sigma_{m \times m}^{-1} u_1^t & \Sigma_{m \times m}^{-1} u_2^t & \cdots & \Sigma_{m \times m}^{-1}u_m^t \\ 0 & 0 & \cdots & 0\end{pmatrix}$$
Now if $\sigma_k$ are the diagonal elements of $\Sigma_{m \times m}$, then the $(i, j)$-th coordinate of this matrix is given by $\sum \limits_{k = 1}^m v_{i, k}\, \sigma_{k}^{-1}\, u_{j, k}$. |
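A numerical sketch (with a random full-rank matrix, purely for illustration) checking the identity against NumPy's pseudo-inverse:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 5
A = rng.standard_normal((m, n))          # full row rank with probability 1

U, s, Vt = np.linalg.svd(A)              # A = U @ diag(s) @ Vt
Sigma_plus = np.zeros((n, m))
Sigma_plus[:m, :m] = np.diag(1.0 / s)    # Sigma^t (Sigma Sigma^t)^{-1}

lhs = A.T @ np.linalg.inv(A @ A.T)       # A^t (A A^t)^{-1}
rhs = Vt.T @ Sigma_plus @ U.T            # V Sigma^+ U^t

print(np.allclose(lhs, rhs), np.allclose(lhs, np.linalg.pinv(A)))   # True True
```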
Please help to show a function is unbounded and thus not Riemann Integrable | In every nbhd of $0$ there is a strictly decreasing positive sequence $(x_n)_{n\in N}$ with $\sin 1/x_{2 n}^2=1$ and $\sin 1/x_{2 n-1}^2=-1.$ Since $2 x_n \cos (1/x_n^2)=0$ we have $$g(x_{2 n})=2/x_{2 n}\; \text { and }\; g(x_{2 n-1})=-2/x_{2 n-1}.$$ The variation of $g$ in any nbhd of $0$ is therefore at least $$\sup_{m\in N} \sum_{n=1}^m |g(x_{2 n-1})-g(x_{2 n})|=\sup_{m\in N}\sum_{n=1}^m(2/x_{2 n-1}+2/x_{2 n})\geq \sup_{m\in N}2/x_{2 m}=\infty.$$ Remark: The term $2 x \cos 1/x^2$ is bounded in absolute value by $2$ for $x\ne 0$, so if it were replaced by any term bounded (in absolute value) by some $k>0$ we would have $|g(x_{2 m-1})-g(x_{2 m})|\geq 2/ x_{2 m-1}+2/x_{2 m}-2 k >2/x_{2m}-2 k,$ which goes to $\infty$ as $m\to \infty.$ |
Prove that $C^1([a,b])$ with the $C^1$- norm is a Banach Space | When $(f_n)_{n\geq1}$ is a Cauchy sequence with respect to the $C^1$-norm, then given an $\epsilon>0$ there is an $n_0$ with
$$\eqalign{|f_m(x)-f_n(x)|+|f_m'(x)-f_n'(x)|&\leq \sup_t|f_m(t)-f_n(t)|+\sup_t|f_m'(t)-f_n'(t)|\cr &=\|f_m-f_n\|_{C^1}<\epsilon\cr}$$
for all $x\in[a,b]$ and all $m$, $n>n_0$. It follows that both $(f_n)_{n\geq1}$ and $(f_n')_{n\geq1}$ are Cauchy sequences with respect to the $\sup$-norm and so converge uniformly to functions $f$ and $g\in C\bigl([a,b]\bigr)$. Furthermore we know that under the given circumstances the limit function $f$ is differentiable and that $f'=g$.
It remains to prove that the given sequence $(f_n)_{n\geq1}$ converges to $f$ with respect to the $C^1$-norm. To this end let an $\epsilon>0$ be given. Since the $f_n$ and the $f_n'$ converge uniformly to $f$ and $f'$ there is an $n_0$ with
$$\|f_n-f\|_\sup<{\epsilon\over2},\qquad\|f_n'-f'\|_\sup<{\epsilon\over2}\qquad \forall\ n>n_0\ ,$$
and this implies
$$\|f_n-f\|_{C^1}<\epsilon \qquad \forall\ n>n_0\ .$$ |
Which of $[0]_3, [1]_3, [2]_3$ is $[5^k]_3$ equal to? | See that $[5]_3=[-1]_3$ and therefore $[5^k]_3=[(-1)^k]_3$.
Now it's quite straightforward: we have $(-1)^k=1$ for $k$ even and $(-1)^k=-1$ for $k$ odd.
Hence the solution $[5^k]_3=[1]_3$ for $k$ even and $[5^k]_3=[-1]_3=[2]_3$ for $k$ odd. |
Understanding the denominator for a multivariate Gaussian | In a univariate gaussian, you have the denominator:
$$ \sqrt{2\pi}\sigma $$
What this does is correct for the amount of "stretching" that is done by the $\sigma$ term in the exponential function $e^{\frac{-(x-\mu)^2}{2\sigma^2}}$. Why is this? Because the above denominator is a simple example of a Jacobian Determinant, which is used to ensure that an integral does not change when you go to a new set of (transformed) variables. We can see this as follows. If $\phi(z)$ is the density of the standard gaussian, then your gaussian is obtained from it by the shift and scale transformation $z=\frac{x-\mu}{\sigma}$, which naively suggests the density
$$\phi\left(\frac{x-\mu}{\sigma}\right) $$
However, the above has a problem: it does not integrate to 1! It is too wide. We need to uniformly reduce the height of the function by some factor to get it to work out. This factor is the Jacobian $\frac{\partial x}{\partial z}=\sigma$, where $z=\frac{x-\mu}{\sigma}$. Dividing by this factor, we can now write down (correctly) our transformed standard gaussian:
$$ f_{\mu,\sigma}(x)=\frac{1}{\sigma}\,\phi\left(\frac{x-\mu}{\sigma}\right)=\frac{1}{\sqrt{2\pi}\sigma}\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$$
The above will now integrate to 1, since we have removed the effect of stretching the standard gaussian by $\sigma$. Taking that intuition to the multivariate case, you will be forming the Jacobian matrix of your transformation, which holds the first derivative of each of the original variables wrt each transformed variable (an $n \times n$ matrix). The determinant of this will be the degree of "stretching" done by your transformation, just like in the univariate case.
However, the notation in your post is not in terms of the Jacobian, but of the covariance matrix, whose determinant is the square of the stretching factor. Hence, you take the square root.
I'll leave it as an exercise for you to use the definition of the Jacobian in the wiki link to derive the details; it's a bit of tedious algebra, but it will give you the equation in your post (once you verify that $|\Sigma|=|J|^2$). |
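A numerical sketch (random positive-definite covariance, purely for illustration) checking that this normalisation makes the density agree with a standard implementation:

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
n = 3
mu = rng.standard_normal(n)
A = rng.standard_normal((n, n))
Sigma = A @ A.T + n * np.eye(n)          # a symmetric positive definite covariance

def density(x, mu, Sigma):
    d = x - mu
    norm = np.sqrt((2 * np.pi) ** n * np.linalg.det(Sigma))
    return np.exp(-0.5 * d @ np.linalg.solve(Sigma, d)) / norm

x = rng.standard_normal(n)
print(density(x, mu, Sigma), multivariate_normal(mu, Sigma).pdf(x))  # should agree
```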
Difference between a number and a set with one number | Sets are collections of objects. The objects that can be collected in the sets can be sets themselves.
This is a conceptual block for many students. If a set is a collection and it has collections within it, then, they ask, aren't the items within the subcollections also in the collection? And the answer to that is a resounding and absolute !!!!!NO!!!!!!. And the reason is simple and basic: no concepts, no interpretation, no computation. A set is a list of objects, with NO interpretation. If a set has: a mermaid, a bag of dog food, and the English alphabet, then the set has three things in it. 1) a mermaid, 2) a bag of dog food, and 3) the English alphabet. It does not have several pieces of kibble; it does not have A, K, Q, and P. It does not have "slightly venomous saline saliva" and "a respiratory system that functions under water as well as in the air"-- even though those are things contained in the objects, they are not objects in the set itself.
Perhaps it is easier to think of a set as a "list". If you have a list of three items: Babar the Elephant, Roosevelt Franklin Elementary School, and the starting lineup of the 1942 New York Yankees; then the list has three things on it. The components of the items in the list are not themselves on the list. But then... "list" implies things that aren't necessary: order doesn't matter, definition doesn't matter. A set is basically dumping things in a bag and carrying them.
But I think you get all that.
To answer your questions:
"wouldn't {3} be a set of cardinality one?"
Indeed, yes.
Is this just a matter of notation (if we include the curly braces then we consider it a set and if not then it's just a natural number)?
Pretty much.
....
I think Tao's main point is:
The more accurate statement is that natural numbers can be the cardinalities of sets, rather than necessarily being sets themselves
There is an issue what is a natural number. Mathematics is bootstrapping from ... as little as we can get away with and what are we doing when we count?
Well, It may be that he is going to get into the set constructionist idea of constructing the natural numbers from sets.
$0$ is the empty set.
$1$ is the set containing the empty set.
If you have defined $n$ as a certain set, then $n+1$ will be the set that is the union of that set and the set containing that set.
So $0 = \emptyset$
$1 = \emptyset \cup \{\emptyset\}=\{\emptyset\}=\{0\}$
$2 = 1 \cup \{1\} = \{\emptyset\} \cup \{\{\emptyset\}\} = \{\emptyset, \{\emptyset\}\}=\{0,1\}$.
$3 = 2\cup \{2\} = \{\emptyset, \{\emptyset\}\}\cup \{\{\emptyset, \{\emptyset\}\}\} = \{\emptyset, \{\emptyset\}, \{\emptyset, \{\emptyset\}\}\}=\{0,1,2\}$
and so on.
And much as I've come to value and rely on this concept of natural numbers, I sincerely hope your reaction is "What the #@&! are you talking about!"
Natural numbers aren't sets. Sets are sets. Right?
But what are natural numbers? Or for that matter, what is ... anything? The set-theorist view is that everything is a construct of sets. We start with the idea of a set with nothing in it, and the idea of putting things into sets, and we can construct a succession of objects (and "succession" is equivalent to counting things one by one) by putting the previous object into itself to make the next object.
Whew!......
But as Tao points out... the natural numbers aren't actually the sets that we make this way, but rather the cardinalities of the sets we create: how many elements they have.
Our first set is $K_0 = \emptyset = \{\}$ a set with nothing in it. And our first natural number is $0 = |K_0|$ the number of elements in it. $K_0$ has no elements in it.
Our next set is $K_1 = \{K_0\}$ a set with one set in it. And our next natural number is $1 = |K_1|$ the number of elements in it.
Our next set is $K_2 = K_1 \cup \{K_1\} = \{K_0, K_1\}$ and $2 = |K_2|$ which has two elements in it.
And $K_3 = \{K_0,K_1,K_2\}$ and $3=|K_3|$.
And so on.
So I suspect that Tao is trying to hit two birds:
1) A set can have sets as elements, but the elements within those sets are not elements of the "upper" set.
2) Natural numbers can be elements of sets; they need not be sets themselves, but they can represent the cardinalities of sets.
So $|\{3\}| = 1$. But $|\{0,1,2\}| = 3$ and $|\{0,1,\{0,1\}\}|= 3$ and $|\{\emptyset, \{\emptyset\},\{\emptyset, \{\emptyset\}\}\}| = 3$.
This leads to the initially apparently circular, but actually valid, definition: the natural number $n$ is defined as the cardinality of the set $\{0,1,2,3,.....,n-1\}$.
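If it helps to play with this concretely, here is a tiny Python sketch of the construction (not part of Tao's text, of course; I use `frozenset` just so that sets can contain sets, and the names follow the $K_i$ notation above):

```python
# Von Neumann-style construction: each "number" is the set of all previous ones.
def successor(n):
    return n | frozenset([n])          # n + 1  :=  n  union  {n}

K0 = frozenset()                        # 0 = {}
K1 = successor(K0)                      # 1 = {0}
K2 = successor(K1)                      # 2 = {0, 1}
K3 = successor(K2)                      # 3 = {0, 1, 2}

print(len(K3))                          # 3  -- the cardinality of {0, 1, 2}
print(len(frozenset([K3])))             # 1  -- the cardinality of {3}
print(K0 in K3, K1 in K3, K2 in K3)     # True True True
print(K0 in frozenset([K3]))            # False: elements of 3 are not elements of {3}
```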
(Sometimes it's a good idea for a mathematician to take a day off and go to the zoo and look at the elephants and then start fresh the next day....) |
Show that $\lim_{t \rightarrow \infty} tF(t) = 0.$ | Make the change of variable $y=tx$ to see that
$$
tF(t) = \int_0^1 f(y) \cos(ty) dy.
$$
Let $\epsilon >0$. Since $f$ is continuous on $[0,1]$, which is compact, we can use the Weierstrass approximation theorem to pick a polynomial $p$ such that
$$
\sup_{y \in [0,1]} |p(y) - f(y)| < \epsilon.
$$
Then
$$
\left\vert tF(t) - \int_0^1 p(y) \cos(ty) dy \right \vert \le \int_0^1 \vert [f(y)-p(y) ]\cos(ty) \vert dy \le \int_0^1 \epsilon = \epsilon.
$$
Now we integrate by parts:
$$
\int_0^1 p(y) \cos(ty) dy = \int_0^1 p(y) \frac{d}{dy} \frac{\sin(ty)}{t} dy
= p(1) \frac{\sin(t)}{t} - \frac{1}{t} \int_0^1 p'(y) \sin(ty) dy.
$$
Choose $T$ large enough so that $|p(1)|/T < \epsilon$ and so that
$$
\frac{1}{T} \int_0^1 \vert p'(y) \vert dy < \epsilon.
$$
Then for $t > T$ we have that
$$
\left\vert \int_0^1 p(y) \cos(ty) dy \right\vert < 2 \epsilon.
$$
Hence $t > T$ implies that
$$
\vert tF(t) \vert < 3 \epsilon,
$$
and so $tF(t) \to 0$ as $t \to \infty$. |
Prove that addition is associative with respect to this ring | The assignment is to prove that $\vee$ is associative (calling it “addition” is not really being gentle to the poor student); it's actually quite easy:
\begin{align}
x\vee(y\vee z)
&=x\vee(y+z+yz)\\
&=x+(y+z+yz)+x(y+z+yz)\\
&=x+y+z+yz+xy+xz+xyz
\end{align}
and
\begin{align}
(x\vee y)\vee z
&=(x+y+xy)\vee z\\
&=x+y+xy+z+(x+y+xy)z\\
&=x+y+xy+z+xz+yz+xyz
\end{align}
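(If you want to double-check the two expansions mechanically, here is a minimal sympy sketch; `join` is just my name for the operation $x\vee y = x+y+xy$, computed in an ordinary commutative polynomial ring.)

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
join = lambda a, b: a + b + a*b            # the operation x ∨ y = x + y + x y

lhs = sp.expand(join(x, join(y, z)))
rhs = sp.expand(join(join(x, y), z))
print(lhs)                                 # x*y*z + x*y + x*z + x + y*z + y + z
print(sp.simplify(lhs - rhs) == 0)         # True: the two expansions coincide
```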
The two final expressions are equal, so $\vee$ is associative. |
Reference Request: Characters of Finite General Linear Groups | I hope that the following references help you:
1. M. R. Darafsheh, On some characters of $GL_n(\mathbb F)$, J. Pure and Applied Algebra, 1985, 247-252.
2. R. Gow, Some characters of the affine subgroup of a classical linear group, J. London Math. Soc., 1976, 231-238.
3. R. Gow, Properties of the characters of the finite general linear groups related to the transpose-inverse involution, Proc. London Math. Soc., 1983, 493-506.
4. A. O. Morris, The characters of the group $GL_n(q)$, Math. Zeitschrift, 1963, 112-123.
5. Pham Huu Tiep, A. E. Zalesski, Minimal characters of the finite classical groups, Comm. Algebra, 1995, 2093-2167. |
Perspective Equations for x,y in image plane? | Perhaps this diagram will make it a bit easier to work out the proportions.
Notice that from similar triangles we obtain the proportions:
\begin{equation}
\frac{x}{d}=\frac{X}{d-Z}
\end{equation}
and
\begin{equation}
\frac{y}{d}=\frac{Y}{d-Z}
\end{equation}
Thus
\begin{equation}
x=\frac{d}{d-Z}X
\end{equation}
and
\begin{equation}
y=\frac{d}{d-Z}Y
\end{equation}
As a matrix equation this may be re-written
\begin{equation}
\begin{pmatrix}
x\\y\\0
\end{pmatrix}
=
\begin{pmatrix}
\frac{d}{d-Z}&0&0\\
0&\frac{d}{d-Z}&0\\
0&0&0
\end{pmatrix}
\cdot
\begin{pmatrix}
X\\Y\\Z
\end{pmatrix}
\end{equation} |
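In code this is a one-liner per coordinate; a small numpy sketch (function and variable names are mine, and I'm assuming the eye sits at distance $d$ along the $z$-axis exactly as in the derivation above):

```python
import numpy as np

def project(point, d):
    """Project the 3D point (X, Y, Z) onto the image plane, eye at distance d."""
    X, Y, Z = point
    s = d / (d - Z)                    # perspective scale factor
    return np.array([s * X, s * Y, 0.0])

print(project((2.0, 3.0, 1.0), d=5.0))   # [2.5  3.75 0.  ]
```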
What is the largest integer value of $n$ for which $8^n$ evenly divides $(100!)$? | I think the easiest way to answer this question is to factorize $100!$. Actually, a partial factorization will be sufficient. Thus, we see that $$100! = 2^{97} \times 3^{48} \times 5^{24} \times \ldots \times 83 \times 89 \times 97.$$
Since $8 = 2^3$, we need to divide $97$ by $3$ and discard the remainder. That is, rewrite $2^{96}$ as $8^n$ and there's your answer. |
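If you want to check the arithmetic mechanically, here is a short Python sketch using Legendre's formula (the helper name `legendre` is mine):

```python
from math import factorial

def legendre(n, p):
    """Exponent of the prime p in n! (Legendre's formula)."""
    e, q = 0, p
    while q <= n:
        e += n // q
        q *= p
    return e

e2 = legendre(100, 2)
print(e2)                 # 97
print(e2 // 3)            # the largest n with 8**n dividing 100!

# direct verification
n = e2 // 3
print(factorial(100) % 8**n == 0, factorial(100) % 8**(n + 1) == 0)   # True False
```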
Showing $\frac12 3^{2-p}(x+y+z)^{p-1}\le\frac{x^p}{y+z}+\frac{y^p}{x+z}+\frac{z^p}{y+x}$ | Indeed, Jensen works.
Let $x+y+z=3$.
Hence, we need to prove that $\sum\limits_{cyc}f(x)\geq0$, where $f(x)=\frac{x^p}{3-x}-\frac{1}{2}$.
$f''(x)=\frac{x^{p-2}\left((p-2)(p-1)x^2-6p(p-2)x+9p(p-1)\right)}{(3-x)^3}$.
If $p=2$ then $f''(x)>0$.
If $p>2$ then, since $f''(0)>0$, $\lim\limits_{x\rightarrow3^-}f''(x)>0$ and $\frac{3p}{p-1}>3$, we see that $f''(x)>0$.
If $1<p<2$ then, since $f''(0)>0$ and $\lim\limits_{x\rightarrow3^-}f''(x)>0$, we see that $f''(x)>0$.
Thus, your inequality follows from Jensen.
Done! |
Why is "If ψ ∈ Γ then the sequent (Γ ⊢ ψ) is correct" true? | The authors are introducing the basic elements of the proof system.
As you said, the definition of correct sequent $(\Gamma \vdash \psi)$ is :
There is a proof [according to the rules of the system to be specified] whose conclusion is $\psi$ and whose undischarged assumptions [premises] are all in the set $Γ$.
When the semantics of the language is defined [see para 3.5] the authors will introduce the concept of semantic sequent : $\Gamma \vDash \psi$, defined as :
for every $σ$-structure $A$, if $A$ is a model of $Γ$ then $A$ is a model of $ψ$.
The definition formalizes the informal concept of valid argument.
Then, they will prove the basic result [see page 87 : the Soundness Theorem of Natural Deduction for Propositional Logic] :
$\Gamma \vdash \psi \text { iff } \Gamma \vDash \psi$.
Having said that, the rules of the proof system are the "rules of the game" that allows us to derive conclusion from premises.
It is obvious that if $\psi \in \Gamma$, we can derive it from $\Gamma$ and this is formalized with the (Axiom Rule) above.
What if $\psi$ is false ? No problem: the move is "formally" correct but the argument is still valid because the case $\psi$ false does not contradict the definition of valid argument :
the conclusion must be true whenever all the premises are true.
In general, the reasoning applies if some elements of $\Gamma$ is false; the (Axiom Rule) applies (because a premise can always be derived as conclusion) without contradiction. |
Given $x+y=uv$ and $xy=u-v$ s.t. $x = X(u,y), v = V(u,y),$ find the partials of $X$ and $V$ w.r.t $u$ and $y$. | The equations are
\begin{equation}
x+y = uv, \quad xy = u-v.
\end{equation}
They can be solved for $v$ and $x$ by multiplying the first equation with $y$: $xy + y^2 = u v y$, and by using the second equation to eliminate $x$:
\begin{equation}
u-v + y^2 = xy + y^2 = uvy \quad \Leftrightarrow \quad v = \frac{u+y^2}{1+uy} =: V(u,y).
\end{equation}
We also obtain
\begin{equation}
x = uv - y = u \frac{u+y^2}{1+uy} - y = \frac{u^2-y}{1+uy} =: X(u,y).
\end{equation}
We can now compute the partial derivatives of $V, X$ by the quotient rule:
\begin{equation}
\frac{\partial V}{\partial u} = \frac{1-y^3}{(1+uy)^2}, \quad \frac{\partial V}{\partial y} = \frac{2y+uy^2-u^2}{(1+uy)^2}, \quad \frac{\partial X}{\partial u} = \frac{2u+u^2y+y^2}{(1+uy)^2}, \quad \frac{\partial X}{\partial y} = \frac{-1-u^3}{(1+uy)^2}.
\end{equation}
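These closed forms are easy to verify with a computer algebra system; a short sympy sketch (purely a check, not part of the derivation):

```python
import sympy as sp

u, y = sp.symbols('u y')
V = (u + y**2) / (1 + u*y)
X = (u**2 - y) / (1 + u*y)

# the original equations x + y = u v and x y = u - v are satisfied
print(sp.simplify(X + y - u*V), sp.simplify(X*y - (u - V)))             # 0 0

# the four partial derivatives match the closed forms above
print(sp.simplify(sp.diff(V, u) - (1 - y**3)/(1 + u*y)**2))             # 0
print(sp.simplify(sp.diff(V, y) - (2*y + u*y**2 - u**2)/(1 + u*y)**2))  # 0
print(sp.simplify(sp.diff(X, u) - (2*u + u**2*y + y**2)/(1 + u*y)**2))  # 0
print(sp.simplify(sp.diff(X, y) - (-1 - u**3)/(1 + u*y)**2))            # 0
```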
Attempting implicit differentiation, on the other hand, we do not start by solving the equations. Instead we take the partial derivatives with respect to $u$ and $y$ of the equations
\begin{equation}
X+y = uV, \quad Xy = u-V,
\end{equation}
to obtain
\begin{equation}
\frac{\partial X}{\partial u} = V + u \frac{\partial V}{\partial u}, \quad \frac{\partial X}{\partial y} + 1 = u \frac{\partial V}{\partial y}, \quad \frac{\partial X}{\partial u} y = 1 - \frac{\partial V}{\partial u}, \quad \frac{\partial X}{\partial y} y + X = - \frac{\partial V}{\partial y}.
\end{equation}
This is a linear system of PDEs, but it still contains $V$ and $X$, so going this way seems to be more complicated. |
probability of a dart hitting a radius $\leq 2$ from the center given the joint distribution of $(x,y)$ . | Use polar coordinates. $r^2=(x-x_0)^2+(y-y_0)^2$.
Then the probability of one dart ending up inside the circle is
$$q=\frac{1}{2\pi}\int_0^2e^{\frac{-r^2}{2}}2\pi r dr=(1-\frac{1}{e^2})$$
The probability of no hits in any of the 10 throws is $(1-q)^{10}$. So the probability of at least 1 hit is $$1-(1-q)^{10}$$ |
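A quick Monte Carlo check of the value of $q$ above (assuming, as the density suggests, that the offsets $x-x_0$ and $y-y_0$ are independent standard normals):

```python
import random, math

random.seed(0)
N = 10**6

# one dart: standard normal offsets around the target; "hit" means r <= 2
hits = sum(1 for _ in range(N)
           if random.gauss(0, 1)**2 + random.gauss(0, 1)**2 <= 4)
q_sim = hits / N
q_exact = 1 - math.exp(-2)

print(q_sim, q_exact)              # agree to about 3 decimal places (~0.8647)
print(1 - (1 - q_exact)**10)       # probability of at least one hit in 10 throws
```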
When does a matrix game and the sign flipped matrix game have the same nash equilibria? | (After edit, thanks @Robert Israel) If the payoff is
$$
\begin{pmatrix}
(1,1) & (0,0) \\
(0,0) & (-1,-1)
\end{pmatrix}
$$
then the only Nash equilibrium is to play strategy 1 for both players, which changes when you change the signs. Maximizing profit may be equivalent to minimizing cost, but it is definitely not equivalent to maximizing cost.
However, for zero-sum games represented by symmetric matrices, something like that may be true. |
Find 2 orthonormal vectors that parametrically describe a plane from a normal and a point? | Find any vector orthogonal to $n$ and divide it by its norm. This will be $v_1$. Then $v_2=n\times v_1$. |
Is the zero ideal of a graded ring considered homogeneous? | Yes, the zero ideal is homogeneous. The element $0\in S_\bullet$ is homogeneous, but its degree is not uniquely defined--it is "homogeneous of degree $n$" simultaneously for all $n$. After all, it is an element of $S_n$ for all $n$. Of course, the degree of any nonzero homogeneous element is unique, since $S_n\cap S_m=\{0\}$ if $m\neq n$.
The context in which Wikipedia says $0$ is homogeneous of degree zero is in relation to the fact that $S_0$ is a subring of $S_\bullet$: in order to be a subring, it must contain $0$. This does not imply $0$ can't be homogeneous of other degrees as well though!
Note moreover that you don't even need to say $0$ is homogeneous in order for (1) to hold, since the ideal $\{0\}$ is generated by the empty set. Every element of the empty set is homogeneous.
(In fact, it would be not entirely unreasonable to define a homogeneous element of $S_\bullet$ to be a nonzero element of some $S_n$. The idea behind this definition is that a homogeneous element is an element which has exactly one nonzero component. So you would not consider $0$ to be homogeneous, similar to how $1$ is not considered to be a prime number. I can imagine there might be circumstances where this definition is the more natural one. But as far as I know, the standard definition is that $0$ is homogeneous.) |
Trace of a Matrix is positive | As stated, with $B\succeq 0$ there are some problems that break the OP's goal, e.g. dividing by zero when $\mathbf x^T B \mathbf x=0$, which happens when $\mathbf x$ is in the nullspace of $B$. Also, it technically isn't true that $tr(B^+)\gt 0$, only that $tr(B^+)\geq 0$.
(For example, consider the cases when $B$ has rank one or rank zero.)
I prove the result assuming that $B\succ 0$
by application of the matrix determinant lemma (for rank-one updates):
$\det\big(B^+\big) = \det\big(B\big)\left(1- \frac{1}{\mathbf x^TB \mathbf x}(\mathbf x^TB)B^{-1}(B\mathbf x)\right) = \det\big(B\big)\left(1- \frac{\mathbf x^TB\mathbf x}{\mathbf x^TB \mathbf x}\right) = \det\big(B\big)\big(1-1\big)=0$
so $B^+$ is necessarily singular
To clean up notation, let
$\mathbf y: = \frac{1}{\sqrt{\mathbf x^TB \mathbf x}}B\mathbf x$
$B^+ = B - \mathbf{yy}^T$
when we look at a quadratic form, for nonzero $\mathbf z \perp \mathbf y$
$\mathbf z^T B^+\mathbf z = \mathbf z^T B\mathbf z - \mathbf z^T\mathbf{yy}^T\mathbf z = \mathbf z^T B\mathbf z +0\gt 0$
by positive definiteness of $B$.
$\mathbf z$ lives in a $n-1$ dimensional subspace, and we know $B^+$ is singular, so it follows that $\mathbf y$ is in the kernel of $B^{+}$ and $B^+$ has signature of $(n-1,0,1)$, i.e. $B^+$ is a non-zero positive semi-definite matrix which implies $tr\big(B^+\big) \gt 0$
addendum
here's an analytic proof
consider the path
$B(\tau) = (1-\tau)B+ \tau B^{+} = B- \tau \mathbf{yy}^T$
for $\tau \in [0,1]$
now re-visit the matrix determinant lemma for rank one updates to get
$\det\big(B(\tau)\big)= \det\big(B\big)\left(1- \tau\,\frac{\mathbf x^TB\mathbf x}{\mathbf x^TB \mathbf x}\right)= \det\big(B\big) \cdot (1-\tau)$
which is nonzero for $\tau\in[0,1)$
By topological continuity of eigenvalues this means that the eigenvalues of $B(\tau)$ are all positive for $\tau \in [0,1)$: $B=B(0)$ has all positive eigenvalues because it is real symmetric positive definite, and, essentially by the intermediate value theorem, no eigenvalue can have 'crossed over' to negative without making $B(\tau)$ singular for some $\tau\in[0,1)$. (A more careful way to do this is with winding numbers, but I digress.)
Further applying continuity, this means at $\tau =1$, $B(\tau) = B^{+}$ cannot have negative eigenvalues and since it is real symmetric this means it is real symmetric positive semidefinite.
We merely need to show it isn't the zero matrix since for real symmetric positive semidefinite $C$, we know $tr\big(C\big)\geq 0$ with equality iff $C=\mathbf 0$
(easy proof: $tr\big(C\big) = tr\big(C^\frac{1}{2}C^\frac{1}{2}\big) = tr\big((C^\frac{1}{2})^TC^\frac{1}{2}\big) =\big\Vert C^\frac{1}{2}\big \Vert_F^2$ and the Frobenius norm is positive definite. )
I'd suggest finishing by noting that if we select any non-zero $\mathbf z$ where $\mathbf z^T \mathbf y =0$ then
$\mathbf z^T B^{+}\mathbf z=\mathbf z^T B\mathbf z - \mathbf z^T\mathbf{yy}^T\mathbf z = \mathbf z^T B\mathbf z +0= \mathbf z^T B\mathbf z \gt 0$
so $B^{+} \neq \mathbf 0$ and hence has positive trace. |
When are extensions of the polynomials with coefficients in the rationals isomorphic? | The isomorphism class is determined by the discriminant of $f(x)$, in case of degree 2. If $f(x)=x^{2} + ax + b$ and $g(x) = x^{2} + cx + d$ are both irreducible, then $\mathbb{Q}[x]/(f(x)) \simeq \mathbb{Q}(\sqrt{a^{2}- 4b})$, and two fields are isomorphic if and only if $(a^{2} - 4b)/(c^{2} - 4d)$ is a square of some rational number.
The "if" part is not hard to show, and the "only if" part is basically the same as the fact that $\mathbb{Q}(\sqrt{2})$ and $\mathbb{Q}(\sqrt{3})$ are not isomorphic as fields. |
Why do customers in the following queue have a residual service time of 5? | \begin{equation}
\mathbb{E}[R_i] = \frac{\mathbb{E}[B_i^2]}{2\mathbb{E}[B_i]} = \frac{100}{2 \cdot 10} = 5.
\end{equation}
Each service time is exactly $10$ minutes. You arrive at any random moment in time and therefore the average time that the customer still needs to be in service is $5$ minutes.
Think of it like this: let's say you want to take a bus and you know that the bus leaves every $10$ minutes. You arrive at a random time to the bus stop. What is the expected time that you need to wait before the next bus leaves from your bus stop? Clearly this is $5$ minutes. |
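A tiny simulation of the bus picture, just to make the number concrete (the 10-minute cycle is the deterministic service time from the question):

```python
import random

random.seed(0)
CYCLE = 10.0                                        # a departure every 10 minutes
arrivals = [random.uniform(0, CYCLE) for _ in range(10**6)]
residuals = [CYCLE - a for a in arrivals]           # wait until the next departure
print(sum(residuals) / len(residuals))              # ~5.0 = E[B^2] / (2 E[B])
```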
Finding orthogonal of a set in functional analysis | So $M^{\perp}$ is the set of all $y=(f_1,f_2)\in \mathbb{R}^2$ such that $\langle x,y\rangle=0$. Now, the inner product on $\mathbb{R}^2$ is given by
$$
\langle x,y\rangle=e_1f_1+e_2f_2,
$$
which is $0$ when? This amounts to solving an equation. |
Continuous Function with Metric Measure | You've started off in the right direction, but you seem to get a bit lost. The question requires you to do three things:
Show that $f_N$ converges to a function $f_\infty$
Show that this convergence is uniform
Show that $f_\infty \in C([0,1])$
HINT for step 1: the functions $g_n$ clearly shrink to $0$, but you still need to prove that they shrink fast enough for their sum to converge (if $g_n =1/n$ they would go to zero but their sum would slowly approach $\infty$ so you do need to prove this).
HINT for step 2: Once you have the function $f_\infty$ it's a very standard argument to show that $f_n \stackrel{u}{\rightarrow} f_\infty$
HINT for step 3: Once you know about $f_\infty$ it's a very standard $\varepsilon - \delta$ argument to show that it is continuous on $[0,1]$
To help you get started, let's look at step 1. For any $x\in {\mathbb R}$ we can find an interval of width $2^{-n}$ containing it, i.e. $\exists z\in {\mathbb Z}$ s.t. $x\in [2^{-n}z, 2^{-n}(z+1))$. So $g_n(x) \leq 2^{-n} \ \forall x \forall n$. Since $\sum_{i=1}^\infty 2^{-n} =1$ we have $f_N(x) \leq 1 \ \forall x \forall N$. So the $f_N$ converge to some function $f_\infty(x) := \sum_{i=1}^\infty g_n(x)$ |
Do we count only distinct roots in Descartes' rule of signs? | If $X$ is a root with multiplicity $m$, then $f(x)= (x-X)^m$ divides $p(x)$, so $p(x)=f(x) q(x)$ for some polynomial $q(x)$ with $q(X)\ne0$. Note that because $q$ is a (continuous) polynomial with $q(X)\ne0$, there is some small interval $[X_-,X_+]$ with $X_-<X<X_+$ on which $q(x)\ne0$ and does not change sign.
If $m$ is even, $f(X_-)$ and $f(X_+)$ are both positive (they are both even powers of a nonzero number), and therefore $p(X_-)$ and $p(X_+)$ have the same sign as $q(X)$. If $m$ is odd, $f(X_-)<0$ and $f(X_+)>0$, so $p(X_-)$ and $p(X_+)$ have opposite signs, as they are odd powers of a negative number ($X_--X$) and a positive number ($X_+-X$), respectively.
Therefore when $X$ is a root of $p(x)$ with odd multiplicity, the graph of $p(x)$ crosses over the $x$-axis at $x=X$. When $X$ is a root of $p(x)$ with even multiplicity, the graph of $p(x)$ “bounces” on the $x$-axis at $x=X$. |
Show non-singularity of orthogonal matrix | Suppose $A$ is an orthogonal matrix (as by your definition). Then
\begin{eqnarray}
(AA^T)_{i,j} &=& \sum_{k=1}^n A_{ik}A^T_{kj}\\
&=& \sum_{k=1}^n A_{ik}A_{jk}.\\
\end{eqnarray}
Notice that this expression is simply the inner product between the $i$-th and $j$-th rows of $A$. These are orthonormal by definition (well, actually the columns are, but you can simply consider the transpose to get orthonormal rows, or consider $A^TA$).
It follows that $AA^T=Id$ and thus $A$ is invertible and both statements follow. Notice that a one-sided inverse is enough when dealing with square matrices to conclude that this is a two-sided inverse. |
Statistical Physics-Taylor expanding | So this is not really rigorous; $s$ actually ranges all the way from $-N$ to $N$ so that $s/N$ really ranges all the way from $-1$ to $1$. Near $s/N=0$, one can Taylor expand. Traditionally we expand the logarithm and then exponentiate, since the logarithm is (quantitatively) smoother. This shows that the profile looks like a Gaussian near there.
For certain purposes that's plenty. To be rigorous you would want to ask: how near is "near"? And if there are values of $s$ which are not "near", what happens there?
There are different ways to handle that. One way would be just brute force estimation (try to take more derivatives of $\ln(P(d))$ and use the Lagrange error). Another way would be a quantitative central limit theorem, such as the Berry-Esseen theorem. Indeed $d$ is a sum of $N$ independent identically distributed variables which are uniformly distributed on $\{ a,-a \}$. The mean of this is zero; the variance is $a^2$; the third absolute moment is $a^3$. So the ratio $\frac{\rho}{\sigma^3}$ (using Wikipedia's notation) in the Berry-Esseen theorem is $1$. So the uniform error in the CLT approximation is less than $\frac{1}{2\sqrt{N}}$...but probably not by a whole lot, maybe an order of magnitude or so. An interesting question for an analyst or probabilist (not so much for a physicist) is where the error is maximized. |
Some functions $\phi:\mathbb R\rightarrow \mathbb C$ that satisfy $\phi(x)\bar\phi(y)=\phi(x-y)$ | Assume $\overline{\phi(0)}\ne 0$, then $$\phi(0)=|\phi(0)|^2$$ so $\phi(0)>0$ therefore $\phi(0)=1$. From this we have $\phi(-x)=\overline{\phi(x)}$. As such, $1=\phi(x-x)=|\phi(x)|^2$, so really
$$\phi:\Bbb R\to S^1$$
and $\phi$ is a homomorphism by the given property. If you are willing to assume continuity, we know the continuous homomorphisms (i.e. group characters) from $\Bbb R\to S^1$ are just the maps
$$\{x\mapsto e^{i\alpha x}:\alpha\in\Bbb R\}\qquad (*)$$
So this is a complete characterization. In particular, since I added the little $\alpha$ in there, you have yourself uncountably infinitely many more examples!
You only asked about examples, but in case you're interested in how the classification of $(*)$ goes, it's not terribly hard if you know a little about basic topology (local compactness mostly) and it is presented very well in Tate's thesis or André Weil's Basic Number Theory (among other places). |
How to evaluate $\int_0^x\vartheta_3(0,t)\ dt$? | About the last point, we have:
$$ \mathcal{L}^{-1}\left(\frac{1}{(x^2+1)^2}\right)=\frac{\sin s-s\cos s}{2} $$
hence:
$$\sum_{n\geq 0}\frac{1}{(n^2+1)^2} = 1+\int_{0}^{+\infty}\frac{\sin s-s\cos s}{2(e^s-1)}\,ds$$
and the last integral can be computed through the residue theorem. As an alternative, starting from
$$ \frac{\sinh(\pi x)}{\pi x}=\prod_{n\geq 1}\left(1+\frac{x^2}{n^2}\right)$$
and considering $\frac{d}{dx}\log(\cdot)$ of both sides, we get:
$$ -\frac{1}{x}+\pi\coth(\pi x) = \sum_{n\geq 1}\frac{2x}{n^2+x^2} $$
and by differentiating again:
$$ \frac{1}{x^2}-\frac{\pi^2}{\sinh^2(\pi x)}=2\sum_{n\geq 1}\frac{n^2-x^2}{(n^2+x^2)^2}$$
so that:
$$\boxed{ \sum_{n\geq 1}\frac{1}{(n^2+x^2)^2} = \color{red}{\frac{1}{4x^4}\left(-2+\pi x\coth(\pi x)+\left(\frac{\pi x}{\sinh(\pi x)}\right)^2\right)}}$$
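A quick numerical check of the boxed identity (just a sanity check in Python, truncating the series):

```python
import math

def lhs(x, terms=200000):
    return sum(1.0 / (n*n + x*x)**2 for n in range(1, terms + 1))

def rhs(x):
    px = math.pi * x
    return (-2 + px / math.tanh(px) + (px / math.sinh(px))**2) / (4 * x**4)

for x in (0.5, 1.0, 2.0):
    print(x, lhs(x), rhs(x))       # the last two columns agree to many digits
```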
We may also exploit the Poisson summation formula to get the same, since the Fourier transform of $\frac{1}{(1+x^2)^2}$ is a multiple of $(1+|x|)\,e^{-|x|}$. |
Prove the sequence $(x_n = \sin(\frac{n\pi}{100}))$ diverges. | Since the subsequence $x_{50(2n+1)} = \sin((2n+1)\pi/2) = (-1)^{n}$ is divergent, $x_n$ is divergent. |
What is the runtime complexity for this recurrence relation $T\left(n\right)\ =\ 2T\left(\frac{n}{2}\right)\ +\ n\ lg\ n$? | For simplicity take n to be a power of 2, ie $n=2^k$, and we can also take log to be base 2.
Then by iterating the same step:
$T(n)=2T(\frac{n}{2})+n\log(n)=4T(\frac{n}{4})+n\log(\frac{n}{2})+n\log(n)=nT(1)+n\log(\frac{n}{2^{k-1}})+n\log(\frac{n}{2^{k-2}})+...+n\log(\frac{n}{2})+n\log(n)$
Put $n=2^{k}$, to get $T(n)=2^{k}T(1)+2^{k}(1+2+...+k)=2^{k}T(1)+2^{k}\frac{(k)(k+1)}{2}$
Now use that $k=\log(n)$ to conclude that $T(n)=\Theta(n\log(n)^2)$ |
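One can also check the asymptotics numerically; a small Python sketch (taking $T(1)=1$, which doesn't affect the order of growth):

```python
from functools import lru_cache
import math

@lru_cache(maxsize=None)
def T(n):
    if n == 1:
        return 1.0
    return 2 * T(n // 2) + n * math.log2(n)

for k in (10, 15, 20, 25):
    n = 2 ** k
    print(k, T(n) / (n * math.log2(n) ** 2))   # ratio tends to 1/2, so T(n) = Θ(n log² n)
```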
When do we use log when calculating computational complexity | If you take the squareroot $k$ times, you are really just taking $x^{1/2^k}$. So the problem is how large does $k$ need to be for $x^{1/2^k}<1+w$. Taking $\log_2$ of both sides, this equivalent to $\log_2(x)/2^k<\log_2(1+w)$, or $\log_2(x)/\log_2(1+w)<2^k$. Taking $\log_2$ again, you see that you need $k>\log_2(\log_2(x)/\log_2(1+w))=\log_2\log_2(x)-\log_2\log_2(1+w)$. Note that for small $w$, $\log_2(1+w)=\Theta(w)$, so the right side becomes $\Theta(\log\log x+\log(1/w))$. |
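You can see the $\log\log$ behaviour directly by counting the square roots; a rough Python sketch (the exact count can differ from the bound by one because of the strict inequality at the boundary):

```python
import math

def iterations(x, w):
    """Number of square roots until x^(1/2^k) < 1 + w."""
    k = 0
    while x >= 1 + w:
        x = math.sqrt(x)
        k += 1
    return k

for x, w in [(10**6, 1e-3), (10**12, 1e-6), (2.0, 1e-9)]:
    bound = math.log2(math.log2(x) / math.log2(1 + w))
    print(iterations(x, w), math.ceil(bound))
```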
X={1,2,3}. Give a list of topologies on X such that every topology on X is homeomorphic to exactly one on your list. | You’re missing the ones homeomorphic to $\big\{\varnothing,X,\{1\},\{2,3\}\big\}$; there are $3$ of those. Also, your (E) group lists one topology twice: (E1) and (E3) are the same. The question wants you to list one topology from each of the $9$ groups (including the group that I just added). |
Showing $\xi$ exists with certain restrictions | Given any $N \in \mathbb{N}$ and any subinterval $I$, you have noted that the number of rationals with denominator $\leq N$ is finite. But since the rationals are dense, in particular, there are infinitely many in $I$, and so there must be one (call it $\xi$) with denominator $>N$. This is the $\xi$ that you are looking for. |
continuity on the common edge of the two tensor-product Bezier surfaces | Calculate the partial derivatives of $\mathbf{p}$ and $\mathbf{q}$ with respect to $s$ at parameter values $(1,t)$. You will find that they are both simple functions of the control points. Set the partial derivative expressions equal to each other, and use this equation to derive a relationship between control points. |
Proving a generalisation of the Second Mean Value Theorem for definite integrals | Without loss of generality assume that $h$ is monotonically increasing. Let $\tilde h(x)=h(x)-h(a)$. Note that $\tilde h(x)\ge 0$.
You want to prove that there exists a $\xi$ such that $\int_a^b \tilde h(x)f(x) dx = \tilde h(b) \int_\xi^b f(x) dx$. See here for the proof of this statement. |
Struggling to solve differential equation once integrated | Let's see... you started with
$$\frac{dv}{dt}=g-cv$$
Separated variables and integrated to get
$$-\frac{\ln|g-cv|}c=t+C_1$$
Then you want to apply initial conditions
$$-\frac{\ln|g-cv_0|}c=C_1$$
So now you should have
$$-\frac{\ln|g-cv|}c+\frac{\ln|g-cv_0|}c=\frac1c\ln\left(\frac{g-cv_0}{g-cv}\right)=t$$
$$\frac{g-cv_0}{g-cv}=e^{ct}$$
And we can solve for $v$ to get
$$v=\frac{cv_0+g\left(e^{ct}-1\right)}{ce^{ct}}=\frac gc+\left(v_0-\frac gc\right)e^{-ct}$$ |
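A quick symbolic check that this really solves the initial value problem (a sympy sketch, nothing more):

```python
import sympy as sp

t, g, c, v0 = sp.symbols('t g c v0')
v = g/c + (v0 - g/c) * sp.exp(-c*t)

print(sp.simplify(sp.diff(v, t) - (g - c*v)))   # 0: satisfies dv/dt = g - c v
print(sp.simplify(v.subs(t, 0) - v0))           # 0: satisfies v(0) = v0
```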
Ideal sheaf on a surface | Yes, $\mathcal O_S(k)$ is precisely $\mathcal O_S(kH)$. |
Derivative definition vs its requirements for existence | You can define the derivative of $f$ on any cluster point of its domain. This is not a new concept. For example, I phrased it like that because that is how Bartle describes it in The Elements of Real Analysis, though only briefly before restricting his attention to the usual sets.
Wherever a function $f$ is differentiable in its domain $D$, it is always continuous with respect to that domain: If $f'(x_0) = L$, then $$\lim_{D \ni x\to x_0}\frac{f(x) -f(x_0)}{x - x_0} = L.$$ Thus, letting $\epsilon = 1$, there is a $\delta > 0$ such that if $0 < |x - x_0| < \delta$ and $x \in D$, then $$\left | \frac{f(x) - f(x_0)}{x - x_0} - L \right | < 1$$
$$ -|x - x_0| < f(x) - f(x_0) - L(x - x_0) <|x - x_0|$$
$$|f(x) - f(x_0)| < (|L| + 1)|x - x_0|$$
From which it follows that $f(x) \to f(x_0)$ as $x \to x_0$ in $D$.
In this case, consider 3 possible domains for $f(x) = (-2)^x$ as you have defined it:
$$D_0 = \left\{ {2n\over 2m+1}\ |\ n, m \in \Bbb Z\right\}$$
$$D_1 = \left\{ {2n+1\over 2m+1}\ |\ n, m \in \Bbb Z\right\}$$
$$D_2 = \left\{ {n\over 2m+1}\ |\ n, m \in \Bbb Z\right\} = D_0 \cup D_1$$
The differentiability of $f$ on $D_0$ follows from that of $2^x$ on $\Bbb R$, and the differentiability of $f$ on $D_1$ similarly follows from that of $-2^x$. But $f$ is obviously not continuous on $D_2$.
One final note: an additional loosening of the definition of derivative at $x_0$ would be the double limit:$$\lim_{x_1,x_2 \to x_0}\frac{f(x_1) - f(x_2)}{x_1 - x_2}$$
Under this definition $f$ no longer needs to be defined at $x_0$, so continuity fails. But by the same argument above, it must still satisfy a Cauchy condition there, so the discontinuity is removable. |
For which parameters $a,b \in \mathbb{R}$ is the function $F(t)$ a distribution function of a random variable? | You are making a mistake when letting $t\to-\infty$ in the first equation. From $(1)$ we can say $b=1$. For $t\to-\infty$, the function is zero for small enough $t$. Now note that ${1\over (1+t)^2}$ is a decreasing function of $t$; therefore for $a$ being non-negative we have a monotonically increasing function and we must have $$0\le b-4a\le 1\implies 0\le a\le {1\over 4}$$ If the continuity of the CDF matters to us, then $$b-4a=0\implies a={1\over 4}$$ |
Determine if the set $\{(x, \alpha, \beta) \mid x \ge 0 \land 0 \le \alpha \le \pi \land 0 \le \beta \le 2 \pi\}$ is compact | Yes, everything correct. |
Proving compactness of sets of sequences | Hint: The continuous image of a compact set is compact, and the map $f\colon X\to\mathbb R$, $\{x_n\}\mapsto x_1$ is continuous. |
Complex contour integral $\int_C \dfrac{cosh(z)}{z^4}dz$ where C is the square centered at the origin with side length 4 | $f(z)=\frac{\cosh z}{z^4}$ is a meromorphic function with a single pole of order four at the origin. By the residue theorem, if $\gamma$ is any closed simple curve enclosing the origin, counter-clockwise oriented,
$$ \oint_{\gamma} f(z)\,dz = 2\pi i\cdot\text{Res}\left(f(z),z=0\right). $$
However, $\cosh(z)$ is an entire and even function, so a holomorphic function of $z^2$, and the Laurent expansion at the origin of $\frac{\cosh z}{z^4}$ contains no term of the form $\frac{1}{z}$. In particular,
$$\text{Res}\left(f(z),z=0\right)=\color{red}{\large 0}.$$ |
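For what it's worth, a computer algebra system confirms this immediately (a sympy sketch):

```python
import sympy as sp

z = sp.symbols('z')
f = sp.cosh(z) / z**4

print(sp.residue(f, z, 0))          # 0
print(sp.series(f, z, 0, 3))        # Laurent expansion: only even powers, no 1/z term
```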
Convergent fraction for constant $e$? | You’re using a generalized continued fraction; the convergents that you normally see listed are those for the standard continued fraction expansion of $e$, i.e., the one with $1$ for each numerator:
$$e=[2;1,2,1,1,4,1,1,6,1,1,8,\dots]\;.$$
This can also be written
$$[1;0,1,1,2,1,1,4,1,1,6,1,1,8,\dots]$$
to emphasize the pattern even more strongly. |
Is it allowed to use "equal to" and "approximately equal to" in the same sentence? | Your example claims that two things are equal (on the left), two things are equal (on the right), and that the left pair are approximately equal to the right pair.
One should be careful with too much use of the ill-defined $\approx$ symbol, or you can get $$1\approx 1.01\approx 1.02\approx \cdots \approx 1.99\approx 2$$ |
If $u_n \rightharpoonup u$ in $H_0^1(\Omega)$, what can we say about $\{\nabla u_n\}$? | Suppose $$x_n \to x $$ weakly.
Then
$$(x_n , x) \to (x,x)=\|x\|^2.$$
However, $|(x_n,x)|\le \|x_n\|\|x\|$. Therefore, $\liminf \|x_n\|\ge \|x\|$.
Going back to your problem, taking $x_n$ to be the weakly convergent sequence $(u_{\sigma(n)})$ in $H_0^1$, with limit $u$, we have
$$\liminf\left ( \int |\nabla u_{\sigma(n)}|^2 + \int u_{\sigma(n)} ^2 \right)\ge \int |\nabla u|^2 + \int u^2 .$$
But the second integral on LHS converges to the second integral on RHS, so we obtain
$$ \liminf \int |\nabla u_{\sigma(n)}|^2 \ge \int |\nabla u|^2.$$ |