title | upvoted_answer
---|---
Finding a differentiable function from (0,1) to (0,1) dominating (pointwise) a given continuous function from (0,1) to (0,1) | Pick monotonic sequences $a_n\to 0$ and $b_n\to 1$.
Since $f$ is bounded away from $1$ on $[a_{n},b_{n}]$, you can readily find a smooth (in fact constant) $\psi_n\colon[a_n,b_n]\to (0,1)$ with $\psi_n\ge f$ on this interval.
Now you can recursively glue together $\phi_n$ on $[a_{n},b_{n}]$ such that $\phi_n(x)=\phi_{n-1}(x)$ for $x\in[a_{n-2},b_{n-2}]$, $\psi_n(x)\ge\phi_n(x)\ge\phi_{n-1}(x)$ for $x\in[a_{n-1},b_{n-1}]\setminus[a_{n-2},b_{n-2}]$, and $\phi_n(x)=\psi_n(x)$ for $x\in[a_{n},b_{n}]\setminus [a_{n-1},b_{n-1}]$.
Finally let $\phi(x)=\phi_{n+1}(x)$ where $n$ is such that $x\in[a_{n},b_{n}]$. |
Finding $\mathrm{Cov}(X,Y)$ and $E(X\mid Y=y)$ given the joint density of $(X,Y)$ | The densities are usually denoted with subscripts in capital letters for the random variables.
For part (a), it is an algebraic error on your part. For part (b), you are not paying attention to the supports of the random variables in your calculations. Hence the absurd answers.
Define the indicator $$1_A(x)=\begin{cases}1&,\text{ if }x\in A\\0&,\text{ otherwise }\end{cases}$$
Then joint density of $(X,Y)$ is $$f_{X,Y}(x,y)=1_{(0,1)}(x)1_{(2x,2)}(y)$$
So density of $Y$ is
\begin{align}
f_{Y}(y)&=\int f_{X,Y}(x,y)\,dx\,1_{(0,2)}(y)
\\&=\int_0^{y/2}\,dx\,1_{(0,2)}(y)
\\&=\frac{y}{2}\,1_{(0,2)}(y)
\end{align}
Thus giving the density of $X$ conditioned on $Y=y$ for each $y\in(0,2)$:
$$f_{X\mid Y=y}(x)=\frac{2}{y}\,1_{(0,y/2)}(x)$$
Hence $$\mathbb E\left[X\mid Y=y\right]=\int x f_{X\mid Y=y}(x)\,dx=\frac{y}4$$ |
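Sanity check: the marginal $f_Y(y)=y/2$ and the conditional mean $y/4$ can be verified numerically from the joint density (a Python sketch; the step count of the midpoint rule is arbitrary):

```python
# Joint density f(x,y) = 1{0<x<1} * 1{2x<y<2}; check f_Y(y) = y/2 and E[X|Y=y] = y/4.
def joint(x, y):
    return 1.0 if (0 < x < 1 and 2 * x < y < 2) else 0.0

def integrate(g, steps=100000):          # midpoint rule on (0,1)
    h = 1.0 / steps
    return sum(g((i + 0.5) * h) for i in range(steps)) * h

y = 1.2
f_Y = integrate(lambda x: joint(x, y))                  # ≈ y/2 = 0.6
cond_mean = integrate(lambda x: x * joint(x, y)) / f_Y  # ≈ y/4 = 0.3
```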
chi-square distribution and p-value | The only mistake I find in your posting is your assertion that $63.691<62.5.$
Since the value of the test statistic is less than the critical value, the null hypothesis is not rejected.
Likewise, since the p-value is more than $0.01$ the null hypothesis is not rejected. |
Divergence of the sum and product of divergent sequences | For (a), a "trick:" get rid of $(v_n)_{n\geq 0}$ as soon as possible. Namely, since $(v_n)_{n\geq 0}$ diverges to $\infty$, by definition there exists $N\geq 0$ such that $v_n \geq 1$ for all $n\geq N$.
Now, for any $n\geq N$, we have $u_n + v_n \geq u_n + 1 > u_n$. You should be able to conclude. Fix $K> 0$: by divergence of $(u_n)_n$, there exists $N'$ such that for all $n\geq N'$, $u_n > K$. For any $n\geq \max(N,N')$, we then have
$$
u_n + v_n \geq u_n + 1 > K + 1 > K
$$
and you are done.
For the second part, it's similar, with one small additional point to mention. Fix $K> 0$: by divergence of $(u_n)_n$, there exists $N'$ such that for all $n\geq N'$, $u_n > K$. For any $n\geq \max(N,N')$, we then have
$$
u_n v_n \geq u_n > K
$$
using first that $u_n > 0$ (as $K>0$) and $v_n \geq 1$, then that $u_n > K$.
For (b), act similarly: since $(v_n)_{n\geq 0}$ is bounded, there exists $M>0$ such that $-M \leq v_n \leq M$ for every $n\geq 0$. Then, you have that for all $n$, $u_n + v_n \geq u_n - M$.
Fix any $K>0$, and set $K'\stackrel{\rm def}{=} K+M$. Since $(u_n)_{n\geq 0}$ diverges to $\infty$, there exists $N\geq 0$ such that for all $n\geq N$, $u_n > K'$. But then... |
Is $\omega = dU = \sin(x+y)\,dx+\cos(x+y)\,dy$ an exact form? | $\large\mbox{It's not !!!}$
$$
{\partial U \over \partial x} = \sin\left(x + y\right)\,,
\quad
U = -\cos\left(x + y\right) + \phi\left(y\right)
$$
$$
{\partial U \over \partial y} = \sin\left(x + y\right) + \phi'\left(y\right)
\color{#ff0000}{\LARGE\not=}
\cos\left(x + y\right)
$$ |
Rings isomorphic to $\mathbb{Z}_6\times\mathbb{Z}_{10}$ | $\mathbb Z_6\times\mathbb Z_{10}\cong \mathbb Z_2\times\mathbb Z_{30}$. It might be easier to characterize that group.
One characterization is in terms of an idempotent[*]:
The ring is commutative and there is an idempotent element $e\neq 0$ such that $e+e=0$, with $1-e$ having additive order $30$, and $e$ and $1$ generate the entire ring additively. In $\mathbb Z_6\times\mathbb Z_{10}$, you get $e=(3,0)$. Then $(1,0)=(3,0)+10(1,1)$ and $(0,1)=(3,0)+21(1,1)$.
Or:
The ring is commutative with identity and $60$ elements, with an idempotent $e\neq 0$ such that $e+e=0$ and $e,1$ generate the entire ring, additively. Same $e,1$ for this case.
Or:
The ring is commutative with identity and there is an idempotent $e$ of additive order $6$ with $1-e$ having additive order $10$ and $e$ and $1$ generate the entire group additively. Here, $e=(1,0)$.
[*] An idempotent element of a ring is an element $e$ such that $e\cdot e=e$. |
Finding the probability of the difference of two discrete random variables | Since $X$ and $Y$ are integers, $Z=|X-Y|$ is an integer too. From the definition of absolute value we know that $Z\ge 0$. Thus, $Z$ can take values $0,1,2,\ldots$ However, from $Z<1$, we can conclude that $Z$ can only be $0$. |
Whether there is always an intersection between Column Space and Nullspace | A vector may or may not lie in the nullspace of a given matrix. Multiplying a matrix with a vector yields a linear combination of the columns of the matrix, so the result of the multiplication lies in the column space.
$c-d$ is a vector in the nullspace of $A$. $c$ and $d$ may or may not be in the nullspace, depending on whether $b = 0$. The vectors $Ac, Ad$ and $A(c-d)$ are all linear combinations of the columns of $A$ and lie in the column space of $A$. In this case, the entries of $c-d$, $c$ and $d$ are the coefficients of these linear combinations.
If $A$ is $m\times n$ with $m\neq n$, then the nullspace and the column space do not have anything to do with one another (the nullspace consists of $n\times 1$ vectors, while the column space consists of $m\times 1$ vectors). It would be nonsensical to ask about their intersection.
If $A$ is square, then it makes sense to ask about the intersection of the two spaces. The zero vector will definitely be there. Maybe other vectors will too. For instance, for the matrix
$$
A = \begin{bmatrix}0&1\\0&0\end{bmatrix}
$$
both the column space and the nullspace are spanned by $\left[\begin{smallmatrix}1\\0\end{smallmatrix}\right]$, and the two spaces are equal. |
How do I take the contraction of an ideal which is not in the image of the given morphism? | Let $\phi:A\to B$ be any function between sets and $I$ any subset of $B$. Then $\phi^{-1}(I)$ is defined as the set of all $a\in A$ such that $\phi (a) \in I$.
In your case you just need to show that if $A$ and $B$ are rings, $I$ an ideal of $B$, and $\phi$ a ring homomorphism, that $\phi^{-1}(I)$ is an ideal of $A$. Then the definition makes sense. |
Counting the sum of cycles of the elements in $S_n$. | Let $b_n=\dfrac{A_n}{n!}$; then your recurrence can be rewritten
$$b_n=1+\frac1n\sum_{k=1}^{n-1}b_k\;,\tag{1}$$
with $b_1=1$. Calculate a few values:
$$\begin{align*}
&b_1=1\\
&b_2=1+\frac12\cdot1=\frac32\\
&b_3=1+\frac13\left(1+\frac32\right)=\frac{11}6\\
&b_4=1+\frac14\left(1+\frac32+\frac{11}6\right)=\frac{25}{12}
\end{align*}$$
I recognize those as the first four harmonic numbers: $$b_n=H_n=\sum_{k=1}^n\frac1k\;.$$
And sure enough, the harmonic numbers do satisfy $(1)$:
$$\begin{align*}
1+\frac1n\sum_{k=1}^{n-1}H_k&=1+\frac1n\sum_{k=1}^{n-1}\sum_{i=1}^k\frac1i\\
&=1+\frac1n\sum_{i=1}^{n-1}\sum_{k=i}^{n-1}\frac1i\\
&=1+\frac1n\sum_{i=1}^{n-1}\frac{n-i}i\\
&=1+\frac1n\sum_{i=1}^{n-1}\left(\frac{n}i-1\right)\\
&=1+\sum_{i=1}^{n-1}\frac1i-\frac{n-1}n\\
&=1+H_{n-1}-1+\frac1n\\
&=H_n\;.
\end{align*}$$
Thus, $A_n=n!H_n$.
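The identification $b_n=H_n$ can be checked exactly with rational arithmetic (a quick Python sketch of the recurrence):

```python
from fractions import Fraction

# b_n = 1 + (1/n) * sum_{k=1}^{n-1} b_k, with b_1 = 1
def b(n, memo={1: Fraction(1)}):
    if n not in memo:
        memo[n] = 1 + Fraction(1, n) * sum(b(k) for k in range(1, n))
    return memo[n]

def H(n):  # n-th harmonic number, exactly
    return sum(Fraction(1, k) for k in range(1, n + 1))

for n in range(1, 21):
    assert b(n) == H(n)
print(b(4))  # 25/12, matching the value computed above
```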
Added: Here’s another approach to the problem. It’s not too hard to prove that the number of $\pi\in S_n$ with $k$ cycles is $\left[n\atop k\right]$, a Stirling number of the first kind. These have the generating function $$\sum_{k\ge 0}\left[n\atop k\right]x^k=x^{\overline{n}}=x(x+1)(x+2)\dots(x+n-1)\;.$$ Differentiating yields
$$\begin{align*}
\sum_{k\ge 1}k\left[n\atop k\right]x^{k-1}&=\frac{d}{dx}\Big(x(x+1)(x+2)\dots(x+n-1)\Big)\\
&=\sum_{k=0}^{n-1}\frac{\prod_{i=0}^{n-1}(x+i)}{x+k}\;,
\end{align*}$$
and evaluating at $x=1$ yields $$\sum_{k\ge 1}k\left[n\atop k\right]=\sum_{k=0}^{n-1}\frac{n!}{k+1}=n!H_n\;.$$ |
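For small $n$, the conclusion $A_n=n!\,H_n$ can also be verified by brute force, summing the number of cycles over all of $S_n$ (a Python sketch):

```python
from itertools import permutations
from fractions import Fraction
from math import factorial

def cycle_count(perm):
    # number of cycles of a permutation of {0,...,n-1}, given as a tuple
    seen, cycles = set(), 0
    for i in range(len(perm)):
        if i not in seen:
            cycles += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = perm[j]
    return cycles

for n in range(1, 7):
    total = sum(cycle_count(p) for p in permutations(range(n)))
    Hn = sum(Fraction(1, k) for k in range(1, n + 1))
    assert total == factorial(n) * Hn   # A_n = n! * H_n
```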
A rectangular photograph is twice as tall as it is wide. | You have a photograph that measures $w$ by $2w$, thus the area of the photograph is $2w^2$. The frame is 4cm wide which adds 4 cm on the left, top, bottom, and right. So the total frame measures $w+8$ by $2w+8$, with an area of $(w+8)(2w+8)$. We know the following equality:
$$(w+8)(2w+8)=3(2w^2)$$
$$2w^2+24w+64=6w^2$$
$$4w^2-24w-64=0$$
$$w^2-6w-16=0$$
$$(w-8)(w+2)=0$$
And the only positive solution is $w=8$ so the answer is c.
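A quick check that $w=8$ indeed makes the framed area three times the photograph's area:

```python
w = 8
photo_area = 2 * w * w               # w by 2w photograph: 128
framed_area = (w + 8) * (2 * w + 8)  # frame adds 4 cm on each side: 384
assert framed_area == 3 * photo_area
```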
The answer is very similar to Martin's answer, however, he forgets to include the 4cm on both sides of the frame:
_________
| _____ |
| | | |
| |_____| |
|____w____|
4cm 4cm |
Number of one-to-one functions $f: \{1, 2, 3, 4, 5\} \to \{0, 1, 2, 3, 4, 5\}$ such that $f(1) \neq 0, 1$ and $f(i) \neq i$ for $i = 2, 3, 4, 5$ | Introduce $0$ into the domain as well, sending it to the one value of $\{0,1,\ldots,5\}$ not used by the other five elements; this turns each injection into a permutation of $\{0,1,\ldots,5\}$, and the number of cases remains the same.
If $0$ maps to $0$, the restriction to $\{1,\ldots,5\}$ is a derangement, so this case contributes $d_5$.
If $0$ does not map to $0$, the count would be $d_6$ if $1$ were also allowed to map to $0$; but among such permutations, $1$ is equally likely to map to each of $0,2,3,4,5$, so only $4$ out of every $5$ of the $d_6$ cases should be counted.
The answer is simply $d_5 + \frac{4d_6}{5} = 44 + \frac{4\cdot 265}{5} = 44 + 212=256$ |
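The count $256$ can be confirmed by brute force over all injections (a Python sketch):

```python
from itertools import permutations

# f: {1,...,5} -> {0,...,5} injective, f(1) not in {0,1}, f(i) != i for i = 2..5
count = 0
for img in permutations(range(6), 5):   # img[i-1] = f(i)
    if img[0] in (0, 1):
        continue
    if any(img[i - 1] == i for i in range(2, 6)):
        continue
    count += 1
print(count)  # 256
```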
if function is continuous and differntiability at an interior point $a$ | Given a differentiable function $F: \mathbb{R}^n \to \mathbb{R}^m$, we have that the differential of $F$ is the linear map:
$$d_pF = \left[\frac{\partial F^i}{\partial x^j}(p)\right]_{1 \leq i \leq m,\ 1 \leq j \leq n} \ \ \textbf{where}\ \ p \in U \subset \mathbb{R}^n$$
Example: Let $f: \mathbb{R}^2 \to \mathbb{R}^3$ be defined by $f(x,y) =(x,\,y,\,x+y)$, i.e. $F^1=x$, $F^2=y$, $F^3=x+y$. Then the differential is given by:
$$d_pf = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 1 \end{pmatrix}$$ |
Does $\sigma(T)=\overline{\sigma_p(T)}$ imply $\sigma_r(T)=\emptyset$? | No, take any operator of the form $T=T_1\oplus T_2$, on $X=X_1\oplus X_2$, with $\overline{\sigma_p(T_1)}=\sigma_r(T_2)$. |
Is not enough for a sequence being monotonic and bounded for $n \geq n_0$? | Yes, the theorem still holds: discarding finitely many terms does not affect convergence. An eventually increasing bounded sequence converges to the supremum of its tail (and an eventually decreasing one to the infimum). |
If $\lim\limits_{x\to a}f(x)$ exists, and $\lim\limits_{x\to a}[f(x) + g(x)]$ exists, does it follow that $\lim\limits_{x\to a}g(x)$ exists? | So it can be marked off as answered...
Do you already know that if $\lim\limits_{x\to c}F(x)$ exists and $\lim\limits_{x\to c} G(x)$ exists, then $\lim\limits_{x\to c}\bigl(F(x)-G(x)\bigr)$ exists? If so, set $F(x) = f(x)+g(x)$ and $G(x) = f(x)$ to get the desired result.
The proof is (essentially) valid; it can be streamlined a bit if you simply start from $|g(x)-L_3|$ and use the triangle inequality:
\begin{align*}|g(x)-L_3| &= |g(x)-L_3 + f(x)-f(x)+L_2-L_2|\\
&= \left|\Bigl(g(x)+f(x)-L_2\Bigr)-\Bigl(f(x)-(L_2-L_3)\Bigr)\right|\\
&\leq \left|\Bigl(g(x)+f(x)\Bigr)-L_2\right|+\Bigl|f(x)-L_1\Bigr|\\
&\lt 2\epsilon.
\end{align*} |
Prove the inconsistency of this definition of limit | For example, $f(x) = x^2$ doesn't satisfy your definition.
On the other hand $f(x) = 1 - e^{-x}$ satisfies your definition but $\lim_{x \to \infty} f(x) \ne +\infty$. |
How do we know that model existence implies consistency? | Suppose we have a structure $W$ such that $W ⊨ φ$ for each axiom $φ$ of $\mathsf {ZFP}$.
Then for any sentence $ψ$, if $\mathsf {ZFP} ⊢ ψ$, then $W ⊨ ψ$ (why?)
This follows by soundness of FOL : if $\Gamma \vdash \psi$, then $\Gamma \vDash \psi$.
In your case, $\Gamma$ is the collection of axioms of $\mathsf {ZFP}$.
Why does model existence imply consistency?
The Model Existence Theorem asserts that a consistent theory has a model; what we need here is the converse direction.
Let $\mathfrak A$ a structure and $\varphi$ a sentence (in general : a collection $\Gamma$ of sentences).
We define the satisfaction relation $\mathfrak A \vDash \varphi$ and we say "$\varphi$ is true in $\mathfrak A$".
We call $\mathfrak A$ a model of $\varphi$ (in general : a model of $\Gamma$).
We say that $\varphi$ is a logical consequence of $\Gamma$ (in symbols : $\Gamma \vDash \varphi$) when $\varphi$ holds in each model of $Γ$ (i.e. if $\mathfrak A \vDash \Gamma$, then $\mathfrak A \vDash \varphi$).
By the definition itself of the satisfaction relation : $\mathfrak A \vDash \lnot \varphi \text { iff } \mathfrak A \nvDash \varphi$, we have that in a model we cannot have that two contradictory sentences both hold.
This means that every theory having a model is consistent. |
Prove that $R$ has no zero divisors and $R$ has an identity. | Servaes's answer has shown the non-existence of zero divisors. Now let $a \not =0, a \in R$, so $\exists b \in R$ with $aba=a$. Let $ab=t, ba=u$, where $t, u \in R$. Now $ta=a$ so $xta=xa \, \forall x \in R$ so $(xt-x)a=0$ so ($a \not =0$) $xt=x \, \forall x\in R$. Similarly $ux=x \, \forall x \in R$. Finally $u=ut=t$ so $u=t$ is the (multiplicative) identity. |
complete and inherited metric question | Suppose $S$ is complete. Assume $S$ is not closed; then $Clsr(S)\setminus S\neq\emptyset$. Choose $x\in Clsr(S)\setminus S$, then for every $n\in\mathbb{N}$, we have $(B(x,\frac{1}{n})\setminus\{x\})\cap S\neq \emptyset$, so we can choose $x_n\in (B(x,\frac{1}{n})\setminus\{x\})\cap S$. Obviously $(x_n)$ is a Cauchy sequence in $S$ with $x_n\to x$, but $x\notin S$, which means $S$ is not complete, a contradiction.
Conversely suppose $S$ is closed. Assume $S$ is not complete. Let $(x_n)$ be a Cauchy sequence in $S$ which does not converge in $S$. Then since $X$ is complete, there exists $x\in X\setminus S$ such that $x_n\to x$. This means for every $\epsilon>0$, $B(x,\epsilon)$ contains a tail of the sequence, and thus has non-empty intersection with $S$. This implies $x\in Clsr(S)$, and therefore $S\neq Clsr(S)$ and this means $S$ is not closed, contradiction. |
Given three positive numbers $a,b,c$ so that $a\geqq b\geqq c$. Prove that $\sum\limits_{cyc}\frac{a+bW}{aW+b}\geqq 3$ . | Let $a=x^2$, $b=y^2$ and $c=z^2$, where $x$, $y$ and $z$ are positives.
Thus, we need to prove that
$$\sum_{cyc}\frac{a+\sqrt{\frac{b}{c}}b}{\sqrt{\frac{b}{c}}a+b}\geq3$$ or
$$\sum_{cyc}\frac{x^2z+y^3}{x^2y+y^2z}\geq3.$$
Now, by AM-GM
$$\sum_{cyc}\frac{x^2z+y^3}{x^2y+y^2z}\geq3\sqrt[3]{\frac{\prod\limits_{cyc}(y^3+x^2z)}{\prod\limits_{cyc}(x^2y+y^2z)}}$$ and it's enough to prove that
$$\prod\limits_{cyc}(x^3+z^2y)\geq\prod\limits_{cyc}(x^2y+y^2z)$$ or
$$\sum_{cyc}(x^5z^4+x^6y^2z)\geq xyz\sum_{cyc}(x^3y^3+x^4yz),$$ which is true by Rearrangement twice:
$$\sum_{cyc}x^5z^4=x^4y^4z^4\sum_{cyc}\frac{x}{y^4}\geq x^4y^4z^4\sum_{cyc}\frac{x}{x^4}=xyz\sum_{cyc}x^3y^3$$ and
$$\sum_{cyc}x^6y^2z=x^2y^2z^2\sum_{cyc}\frac{x^4}{z}\geq x^2y^2z^2\sum_{cyc}\frac{x^4}{x}=xyz\sum_{cyc}x^4yz.$$
Done! |
Combinations of lego bricks figures in an array of random bricks | The problem of minimizing the number of unused blocks is an integer linear programming problem, equivalent to maximizing the number of blocks that you do use. Integer programming problems are in general hard, and I don’t know much about them or the methods used to solve them. In case it turns out to be at all useful to you, though, here’s a more formal description of the problem of minimizing the number of unused blocks.
You have seven colors of blocks, say colors $1$ through $7$; the input consists of $c_k$ blocks of color $k$ for some constants $c_k\ge 0$. You have five types of output (Simpsons), which I will number $1$ through $5$ in the order in which they appear in this image. If the colors are numbered yellow $(1)$, white $(2)$, light blue $(3)$, dark blue $(4)$, green $(5)$, orange $(6)$, and red $(7)$, the five output types require colors $(1,2,3),(4,1,5),(1,6,4),(1,7,1)$, and $(1,3)$. To make $x_k$ Simpsons of type $k$ for $k=1,\dots,5$ requires $$\begin{align*}
x_1+x_2+x_3+2x_4+x_5&\text{blocks of color }1,\\
x_1&\text{blocks of color }2,\\
x_1+x_5&\text{blocks of color }3,\\
x_2+x_3&\text{blocks of color }4,\\
x_2&\text{blocks of color }5,\\
x_3&\text{blocks of color }6,\text{ and}\\
x_4&\text{blocks of color }7\;.
\end{align*}$$
This yields the following system of inequalities:
$$\left\{\begin{align*}
x_1+x_2+x_3+2x_4+x_5&\le c_1\\
x_1&\le c_2\\
x_1+x_5&\le c_3\\
x_2+x_3&\le c_4\\
x_2&\le c_5\\
x_3&\le c_6\\
x_4&\le c_7\;.
\end{align*}\right.\tag{1}$$
Let $b=c_1+c_2+\dots+c_7$, the total number of blocks, and let $$f(x_1,\dots,x_5)=3x_1+3x_2+3x_3+3x_4+2x_5\;,$$ the number of blocks used (each of the first four output types uses three blocks, and type $5$ uses two). You want to maximize $f(x_1,\dots,x_5)$ subject to the constraints in $(1)$ and the requirement that the $x_k$ be non-negative integers. |
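For modest block counts, the integer program can simply be solved by exhaustive search rather than an ILP solver (a Python sketch; the block counts passed at the end are made-up example data):

```python
from itertools import product

# colors needed by each of the five output types (colors numbered 1..7, as above)
NEEDS = [(1, 2, 3), (4, 1, 5), (1, 6, 4), (1, 7, 1), (1, 3)]

def best_usage(c):
    """Maximize the number of blocks used; c[k-1] = available blocks of color k."""
    # crude per-type upper bounds from the single-color constraints
    bounds = [min(c[k - 1] // need.count(k) for k in set(need)) for need in NEEDS]
    best = 0
    for x in product(*(range(b + 1) for b in bounds)):
        used = [0] * 7
        for xk, need in zip(x, NEEDS):
            for k in need:
                used[k - 1] += xk
        if all(u <= avail for u, avail in zip(used, c)):
            best = max(best, sum(used))
    return best

print(best_usage([10, 2, 3, 3, 2, 1, 2]))  # 23: every block gets used here
```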
Defining a norm in the quotient space $E/M$ | For positive definiteness you should use that $M$ is assumed closed. So if $\|x+m_k\|\rightarrow 0$, $m_k\in M$ then show (1) that the sequence $(m_k)_k$ is Cauchy, (2) that it therefore converges to some $m\in M$ and (3) finally $x=-m\in M$.
In homogeneity you should distinguish the case $\lambda=0$ and $\lambda\neq 0$.
In the triangle inequality perhaps better to say for the last part:
$$\inf\{\|x+y+m\| : m\in M\}=\inf\{\|x+m+y+n\| : m,n\in M\}
\leq \inf\{\|x+m\|+\|y+n\| : m,n\in M\}
= \inf\{\|x+m\| : m\in M\}+\inf\{\|y+n\| : n\in M\}$$ |
What does a transitive set exactly imply? | A set $X$ is transitive means that $Y \in X$ implies $Y \subset X$. In other words, a set $X$ is transitive whenever $Y \in X$ and $Z \in Y$ implies $Z \in X$. Thus, the $\in$ relation is transitive in this case. |
volume of solid of revolution. Can someone please check my work? | The setup is correct.
I did the numerical integral on Wolfram Alpha (see here), and got $\approx 0.421573 \pi \approx 1.3244111$ instead. |
Combinatorial identity: summation of stars and bars | Consider the set $S=\{1,2,\ldots,d,d+1,\ldots,d+k\}$. The right hand side gives us the number of ways selecting a subset of $S$ of $d$ elements.
If we choose such a subset $X$ of $S$ of $d$ elements, the highest element of $X$ can be $d,d+1,\ldots,d+k$. If the highest element of $X$ is $d+i$ then we can find $d-1$ more elements of $X$ from $\{1,2,\ldots,d+i-1\}$. The number of ways that we can do that is ${d+i-1\choose d-1}$. By running $i$ from $0$ to $k$ we have the left hand side of the identity. |
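In symbols, this is the identity $\sum_{i=0}^{k}\binom{d+i-1}{d-1}=\binom{d+k}{d}$, which is easy to check directly (a Python sketch):

```python
from math import comb

# hockey-stick / stars-and-bars identity: sum_{i=0}^{k} C(d+i-1, d-1) = C(d+k, d)
for d in range(1, 10):
    for k in range(10):
        lhs = sum(comb(d + i - 1, d - 1) for i in range(k + 1))
        assert lhs == comb(d + k, d)
```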
What does it mean that random variables of mean zero form a Hilbert space with the inner product being the covariance? | The mean zero requirement is to ensure the positive definiteness, i.e. if $X$ is in this space and $\operatorname{Cov}(X,X)=0$ then $X$ is the zero random variable. (To have a unique "zero random variable" in this sense, you must first take the quotient by $X \sim Y$ if $X=Y$ a.s., which is the usual procedure used in defining the $L^p$ spaces.) If you bundle in all the random variables with finite variance, you don't have this property anymore because all constants will have $\operatorname{Cov}(c,c)=0$. |
Is "applying similar operations from left to right" a convention or a rule that forces us to mark one answer wrong? | There is no contradiction here. The first calculator interprets it as
$$\frac{6}{2(1+2)} = 1$$
The second interprets it as,
$$\frac{6}{2}(1+2) = 9$$
It is a matter of how the calculators are designed. In general, it is bad practice to write ambiguous expressions like that, since your intention is not always clear. |
Proof of the existence of a closest vector | Using the hint: Fix $z\in \mathbb{R}^L\setminus C, x_0 \in C$ and consider the set
$$
S = \{y\in C : \|z-y\| \leq \|z-x_0\|\}
$$
Then $S$ is a closed (since $C$ is closed), bounded set (Check!) and hence compact. Now define $f:S\to \mathbb{R}$ by
$$
f(y) := \|z-y\|
$$
Then $f$ is continuous (by the hint), and hence attains a minimum at a point $x^{\ast} \in C$. This point $x^{\ast}$ clearly satisfies the required condition. |
If $V$ is a quasi-affine variety, the algebra $k[V]$ is isomorphic to a subalgebra of a finitely generated $k$-algebra? | It's not obvious that $k[V]$ is finitely generated. If $V$ is closed then it's just $k[\mathbb{A}^n]/I(V)$, but otherwise we define $k[V]$ as the functions given locally by well-defined elements of $k(\mathbb{A}^n)$, which is not finitely generated. In fact there are examples of quasi-projective varieties with non-finitely generated rings of functions, for instance, the total space of the sum of a degree $0$ nontorsion and a positive-degree line bundle on an elliptic curve. You can find this example described in notes of Ravi Vakil, who also varies it to give a quasi-affine example.
It's much easier to show that a quasi-affine algebra is contained in a finitely generated algebra than to construct an example as discussed above. Take a cover of a quasi-affine variety $X$ by finitely many open affine subsets $U_i$, yielding an epimorphism $\sqcup U_i\to X$ and thus a monomorphism $k[X]\to k[\sqcup U_i]\cong \prod k[U_i]$. |
Eigenvalues of the Laplacian on a compact manifold without boundary | If $c>0$, then $\lambda_1=nc$ implies that $M$ is isometric to the standard round sphere of radius ${1}/\sqrt{c}$. This is known as the Lichnerowicz-Obata theorem. See Theorem 5.1 in Peter Li's "Lecture notes on geometric analysis". |
Differentiable function has measurable derivative? | Here's a key observation: Assume that $f$ is differentiable and define
$$f_n(x) = \left\{
\begin{array}{cl}
\frac{f(x+1/n)-f(x)}{1/n} & \text{if } 0 \leq x+1/n \leq T \\
0 & \text{else.}
\end{array}\right.$$
Then, each $f_n$ is measurable and $f_n \rightarrow f'$. |
Natural definitions of families of subgraphs | I guess somehow we should rule out spurious ways to depend on parameter, such as
"a graph is $k$-cliqueish if either $k=1$ and the graph is connected,
or $k\ge 2$ and the graph has a clique of size $k$".
Here is an attempt (not constructive and not really canonical). Fix a first-order language $L$ containing some basic arithmetic so we can use parameters from $\mathbb N$.
Definition 1. For an $L$-definable function $f$ with domain $\mathbb N$, define the complexity $C(f)$ to be the length of the shortest definition of $f$ in $L$.
Definition 2. For functions $f^*$ and $f$ with domain $\mathbb N$, we say that $f$ is an improvement of $f^*$ if $C(f)\le C(f^*)$, and for all but finitely many $n$, $f(n)=f^*(n)$.
Definition 3. A set $S$ depends on a parameter if there exists an injective $f$ with domain $\mathbb N$, such that for all improvements $g$ of $f$, there is an $n$ with $f(n)=g(n)=S$. In this case we say the value of the parameter can be $n$.
Theorem(?) The collection $S$ of graphs having a clique of size 5 does depend on a parameter (which can have the value 5), since a natural definition of "clique of size $k$" cannot be shortened and modified in such a way as to still be correct for almost all $k$, and at the same time remove $S$ from the range. |
Typical problem on P with conditions | If there were such a $y$, then the congruence $z^2\equiv -1\pmod{p}$ would have a solution. (Multiply $y$ by the inverse of $2$ modulo $p$.)
But it is a standard fact that if $4$ divides $p-3$, then $-1$ cannot be a quadratic residue of $p$. |
Ring-Homomorphism from $\mathbb{Z}_{2}$ to $\mathbb{Z}_{2n}$ | If $n$ is even then, in $\mathbb{Z}_{2n}$,
$$n^2=2^2(n/2)^2=4(n/2)(n/2)=(2n)(n/2)=0,$$
but we must also have $n^2=n$ (as $n$ is the image of the idempotent $1$), forcing $n=0$; so $n$ is not even (that is, $n$ is odd.) |
If $\{f(0)\}^{2} + \{f'(0)\}^{2} = 4$ then there is a $c$ with $f(c) + f''(c) = 0$ | The solution presented here is not mine and has been reproduced from Alexanderson et al. The solution is a gem as it uses several key concepts in analysis.
Let $G(x) = [f(x)]^2 + [f'(x)]^2$ and $H(x) = f(x) + f''(x)$. Since $H$ is continuous, it suffices to show that it changes sign. So, let's assume $H(x) > 0$ for all $x$ or $H(x) < 0$ for all $x$ and obtain a contradiction.
Since $|f(0)| \leq 1$ and $G(0) = 4$, either $f'(0) \geq \sqrt{3}$ or $f'(0) \leq -\sqrt{3}$. We will show for the case in which $H(x) > 0$ and $f'(0) \geq \sqrt{3}$; the other cases are similar.
Assume that the set $S$ of positive $x$ with $f'(x) < 1$ is nonempty and let $g$ be the greatest lower bound of $S$. Then $f'(0) \geq \sqrt{3}$ and continuity of $f'(x)$ imply $g > 0$. Now, $f'(x) \geq 0$ and $H(x) \geq 0$ for $0 \leq x \leq g$ lead to
$G(g) = 4 + 2 \int_{0}^{g} f'(x)[f(x) + f''(x)] dx \geq 4$
Since $|f(g)| \leq 1$, this implies $f'(g) \geq \sqrt{3}$. Then continuity of $f'(x)$ tells us that there is an $a > 0$ such that $f'(x) \geq 1$ for $0 \leq x < g+a$. This contradicts the definition of $g$ and hence $S$ is empty. Now $f'(x) \geq 1$ for all $x$ and this implies that $f(x)$ is unbounded, contradicting $|f(x)| \leq 1$. |
What is the Z-transform of the sequence $-b^nu(1-n)$ and what is its ROC? | $$x(n) = -(b^n) u(1-n)$$
$$X(z) = \sum_{n=-\infty}^{\infty} x(n) z^{-n}$$
$$X(z) = -\sum_{n=-\infty}^{1}\frac{b^n}{z^n} $$
$$X(z) = - ( \frac{b}{z}+1+\frac{z}{b}+\frac{z^2}{b^2}+......)$$
Now apply the formula for the sum of an infinite GP (first term $\frac{b}{z}$, common ratio $\frac{z}{b}$):
$$X(z) = -\frac{b/z}{1-z/b} = -\frac{b^2}{z(b-z)}$$
$$X(z) = \frac{b^2}{z(z-b)}$$
The geometric series converges when
$$\left|\frac{z}{b}\right|<1,$$
and the $n=1$ term $\frac{b}{z}$ requires $z\neq0$, so
$$ROC : 0<|z|<|b|$$ |
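The closed form $X(z)=-b^2/(z(b-z))$ can be sanity-checked by truncating the two-sided sum numerically (a Python sketch; $b$ and $z$ are arbitrary test values with $0<|z|<|b|$):

```python
# Compare a truncated version of X(z) = sum_{n <= 1} -(b**n) * z**(-n)
# with the closed form -b**2 / (z * (b - z)).
def X_partial(b, z, N=200):
    return sum(-(b ** n) * z ** (-n) for n in range(-N, 2))

def X_closed(b, z):
    return -b ** 2 / (z * (b - z))

b, z = 2.0, 0.7 + 0.4j   # |z| ≈ 0.81 < |b| = 2
print(abs(X_partial(b, z) - X_closed(b, z)))  # tiny truncation error
```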
Is the sequence uniformly convergent over [0,1]? | You can use more simple techniques.
For all $n \in \mathbb N$ and $x \in [0,1]$, you have
$$\vert f_n(x) \vert = \left\vert \frac{\ln(1+x^2n^2)}{n^2} \right\vert \le \frac{\ln(1+n^2)}{n^2},$$
as the function $x \mapsto \ln(1+n^2x^2)$ is increasing on $[0,1]$. The sequence $(\frac{\ln(1+n^2)}{n^2})$ converges to zero. Therefore the sequence $(f_n)$ converges uniformly to the zero function.
Moreover the series $\sum \frac{\ln(1+n^2)}{n^2}$ converges. Therefore, $\sum f_n$ converges uniformly according to the Weierstrass M-test. |
Cauchy-Riemann equation doubts | The function $f(x+iy) = 2ixy$ is differentiable as a function from $\mathbb{R}^2$ to $\mathbb{C}$, but is not complex differentiable.
One way to see this is to write $f$ in terms of $z$ and $\bar z$ rather than $x$ and $y$:
$$
2ixy = 2i \left(\frac{z+\bar z}{2}\right)\left(\frac{z-\bar z}{2i}\right)
= \frac{1}{2}\left(z^2 - \bar z^2\right)
$$
In these coordinates, the Cauchy-Riemann equations take the form $\frac{\partial f}{\partial \bar z} = 0$. Here $\frac{\partial f}{\partial \bar z} = -\bar z$, which is not identically zero, so $f$ is not complex differentiable. |
Perpendicular Chord parabola | Let $O=(0,0)$ be the vertex of the parabola, and $P_1=(x_1,y_1)$, $P_2=(x_2,y_2)$, the other endpoints of the chords. The perpendicularity condition yields
$$
y_1y_2=-x_1x_2=-{1\over16a^2}y_1^2y_2^2,
\quad\hbox{that is}\quad
y_1y_2=-16a^2
\quad\hbox{and}\quad
x_1x_2=16a^2.
$$
We have then:
$$
r_2^2=x_2^2+y_2^2=x_2^2+4ax_2={256a^4\over x_1^2}+{64a^3\over x_1}=
{64a^3\over x_1^3}(4ax_1+x_1^2)={64a^3\over x_1^3}(y_1^2+x_1^2)
=\left({4a\over x_1}\right)^3r_1^2
$$
that is
$$
r_2^{2/3}={4a\over x_1}r_1^{2/3}.
$$
It follows that
$$
r_1^{4/3}r_2^{4/3}={16a^2\over x_1^2}r_1^{8/3}
$$
and
$$
r_1^{2/3}+r_2^{2/3}=\left(1+{4a\over x_1}\right)r_1^{2/3}
={x_1^2+4ax_1\over x_1^2}r_1^{2/3}
={r_1^2\over x_1^2}r_1^{2/3}={1\over x_1^2}r_1^{8/3}.
$$
In conclusion:
$$
{r_1^{4/3}r_2^{4/3}\over r_1^{2/3}+r_2^{2/3}}=16a^2.
$$ |
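A numeric spot check of the final relation on $y^2=4ax$, with $a=1$ and one (arbitrarily chosen) pair of perpendicular chords through the vertex:

```python
# Parabola y^2 = 4 a x; chords from the vertex O = (0,0) to P1, P2 with OP1 ⊥ OP2.
a = 1.0
y1 = 2.0                  # arbitrary choice of P1 on the parabola
x1 = y1 ** 2 / (4 * a)
y2 = -16 * a ** 2 / y1    # from y1 * y2 = -16 a^2
x2 = y2 ** 2 / (4 * a)

assert abs(x1 * x2 + y1 * y2) < 1e-9   # perpendicularity of OP1 and OP2

r1sq = x1 ** 2 + y1 ** 2  # r_1^2
r2sq = x2 ** 2 + y2 ** 2  # r_2^2
lhs = (r1sq * r2sq) ** (2 / 3) / (r1sq ** (1 / 3) + r2sq ** (1 / 3))
print(lhs)                # ≈ 16 a^2
```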
Minimal injective resolution of a module | It is well known that we can define some chain maps $u^{\bullet}:E^{\bullet}\to I^{\bullet}$ and $v^{\bullet}:I^{\bullet}\to E^{\bullet}$ which are identity on $M$. Thus we get a chain map $w^{\bullet}:E^{\bullet}\to E^{\bullet}$ which is identity on $M$, where $w^{\bullet}=v^{\bullet}u^{\bullet}$. Now using that $E^{\bullet}$ is minimal it's easy to prove (inductively) that $w^{\bullet}$ is a chain isomorphism and therefore $u^{\bullet}$ is injective. |
equivalent description of "respectively" | I'd say it does. For instance, a set $S$ is strongly (resp. weakly) [property] if $S$ satisfies [something] (resp. [something else]).
Or you can drop "resp." entirely, e.g. $f_n$ is said to converge to $f$ strongly (weakly) if $f_n \rightarrow f$ uniformly (pointwise). You can replace the adverb with an adjective as well, e.g. $g_n$ has the strong (weak) boundedness property on $X$ if $g_n$ is uniformly (pointwise) bounded.
I only write this if the strong property 'obviously' implies the weak property, but usually I interpret a sentence like this as saying if I were to replace the word preceding each bracket with the bracketed word then I have another assertion. |
Prove a matrix function is continuous | You do not tell us which norm you use on $\mathbb R^2$. But since all all norms on $\mathbb R^2$ are equivalent, we may take $\lVert \mathbf{x} \rVert = \lVert \mathbf{x} \rVert_\infty = \max(\lvert x_1 \rvert, \lvert x_2 \rvert)$.
Since the operator norm is submultiplicative (see e.g. show operator norm submultiplicative), we get
$$\lVert f(g) - f(h) \rVert_{op} = \lVert U^{-1}g(D) U - U^{-1}h(D) U \rVert_{op} = \lVert U^{-1}(g(D) - h(D)) U \rVert_{op} \\ \le \lVert U^{-1} \rVert_{op} \cdot \lVert g(D) - h(D) \rVert_{op} \cdot \lVert U \rVert_{op} = c \lVert g(D) - h(D) \rVert_{op} = c \lVert (g - h)(D) \rVert_{op} .$$
With $u = g - h$ we get
$$\lVert u(D) \rVert_{op} = \sup_{\lVert \mathbf{x}\rVert=1}\lVert u(D)\mathbf{x} \rVert = \sup_{\lVert \mathbf{x}\rVert=1}\lVert \begin{pmatrix}
u(d_{11})x_1\\
u(d_{22})x_2
\end{pmatrix} \rVert = \sup_{\lVert \mathbf{x}\rVert=1}\max( \lvert u(d_{11})x_1 \rvert, \lvert u(d_{22})x_2 \rvert) \\ = \max( \lvert u(d_{11}) \rvert, \lvert u(d_{22}) \rvert) \le \sup_{t \in [0,2]} \lvert u(t) \rvert = \lVert u \rVert_\infty$$
which finishes the proof. |
Find $\alpha\in\mathbb{R}$ such that inequality is true | Hint: Try just explicitly computing the integral by making a substitution $u=\frac{x^2}{2}+1$. |
Does any measure preserving system have an invertible extension? | Let me offer a different answer: I thought about it just now, and indeed I think checking that the Hahn-Kolmogorov consistency theorem applies is actually non-trivial, so let me give a reference to a nice exposition (which addresses you question entirely):
See the first three sections of chapter 5 of Parthasarathy, K. R. Probability measures on metric spaces. In particular Theorem 3.2 is what you want.
It constructs what is known as an inverse limit of a sequence $$X_0 \leftarrow X_1 \leftarrow \ldots $$ of surjective measure preserving maps between Borel probability spaces. In particular the invertible extension is the case where each $X_i=X$ and the mappings are $T$.
I'm not sure how much we can relax the surjectivity requirement. |
Convergence or divergence of $\sum_{n=1}^{\infty} \frac{\ln(n)}{n^2}$ | First proof" Cauchy's Condensation Test (why is it possible to use it?):
$$2^na_{2^n}=\frac{2^n\log(2^n)}{2^{2n}}=\frac{n}{2^n}\log 2$$
And now it's easy to check the rightmost term's series convergence, say by the ratio test:
$$\frac{n+1}{2^{n+1}}\frac{2^n}n=\frac12\frac{n+1}n\xrightarrow[n\to\infty]{}\frac12<1$$
Second proof: It's easy to check (for example, using l'Hospital with the corresponding function) that
$$\lim_{n\to\infty}\frac{\log n}{\sqrt n}=0\implies \log n\le\sqrt n\;\;\text{for almost every}\;\;n\implies$$
$$\frac{\log n}{n^2}\le\frac{\sqrt n}{n^2}=\frac1{n^{3/2}}$$
and the rightmost element's series converges ($\,p-$series with $\,p>1\,$) , so the comparison test gives us that our series also converges. |
Finding the number of solutions to an equation under bounds of $x$ | What you have to find is just:
$$\begin{eqnarray*}&&[x^{20}](x+x^2+x^3+x^4)(x^2+\ldots+x^{10})(x^3+\ldots+x^{12})\\&=&[x^{14}](1+\ldots+x^3)(1+\ldots+x^8)(1+\ldots+x^9)\\&=&[x^{14}]\frac{(1-x^4)(1-x^9)(1-x^{10})}{(1-x)^3}\\&=&[x^{14}](1-x^4-x^9-x^{10}+x^{13}+x^{14})\sum_{j=0}^{+\infty}\binom{j+2}{2}x^j\\&=&\binom{16}{2}-\binom{12}{2}-\binom{7}{2}-\binom{6}{2}+\binom{3}{2}+1\\&=&\color{red}{22}.\end{eqnarray*}$$
As an alternative approach, you can check that we have $4$ solutions with $x_1=1$, $5$ solutions with $x_1=2$, $6$ solutions with $x_1=3$ and $7$ solutions with $x_1=4$, and:
$$ 4+5+6+7 = \color{red}{22}.$$ |
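Both computations agree with a direct brute-force count (a Python sketch):

```python
# x1 in [1,4], x2 in [2,10], x3 in [3,12], x1 + x2 + x3 = 20
count = sum(1 for x1 in range(1, 5)
              for x2 in range(2, 11)
              for x3 in range(3, 13)
            if x1 + x2 + x3 == 20)
print(count)  # 22
```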
Zariski topology over $\mathbb R$ | Taking $\mathbb{R}$ as the affine line over the reals, the Zariski topology of $\mathbb{R}$ consists of all its subsets whose complement is either finite or all of $\mathbb{R}$ (i.e. the finite complement topology.) |
How do you work out the base of an isosceles triangle with all other sides and angles given? | In an isosceles triangle the altitude to the base is also the median of the base and the bisector of the vertex angle. So draw the altitude and you will get a right triangle with hypotenuse $2613$ and angle opposite half of the base equal to $87°$. Thus half of the base is $2613 \cdot \sin 87° \approx 2609.4$, so the full base is approximately $5218.8$. |
topology basis, euclidean topology | For part (a.) indeed $\mathcal B_0$ is a basis, but the topology is not the euclidean. Every integer has only one neighborhood, namely $\mathbb R$.
For part (b.) ($k\ge1$) note that if $x$ is not an integer then $\mathcal B_k$ contains all sets $(x-\varepsilon,x+\varepsilon)$ for every $\varepsilon>0$ with $\varepsilon<d:=\min\{x-\lfloor x\rfloor,\lfloor x\rfloor +1-x\}$. (Here $\lfloor x\rfloor$ is the floor function, the largest integer not exceeding $x$, so $d$ is the distance from $x$ to the nearest integer.)
On the other hand, if $n$ is an integer, then $\mathcal B_k$ contains all sets $(n-\varepsilon,n+\varepsilon)$ for every $\varepsilon>0$ with $\varepsilon<1$.
This shows that (for $k\ge1$), the topology induced by the $\mathcal B_k$ is at least as strong as the Euclidean topology. Since every interval $(a,b)$ is open in the Euclidean topology, we also trivially have that the topology induced by $\mathcal B_k$ is weaker than the Euclidean topology, hence the two topologies coincide.
Finally, concerning $B_1,B_2\in \mathcal{B}_k$, note that if $B_1,B_2$ each contains at most $k$ integers, then so does their intersection
$B_1\cap B_2$. So you could let $B=B_1\cap B_2$ (as indicated already in the comments). |
Can any meromorphic function be represented as a product of zeroes and poles? | Short answer: Not all, but...
Turns out this is more complicated than user170431's answer suggested, though it hinted at the correct direction.
Given a meromorphic function $f(z)$ with poles at $\{p_i\}$ of multiplicites $\{n_{p_i}\}$, one can indeed use Mittag-Leffler's theorem to obtain a holomorphic function
$$h(z) = f(z) - \sum_i \pi_{p_i}(1/(z-p_i))$$
where $\pi_{p_i}$ denotes a polynomial of degree $n_{p_i}$, and due to their finite order (meromorphic functions don't have essential singularities) one can expand $f(z)$ to the common denominator.
What the other answer and my question's assumption did however neglect was the full extent of the Weierstrass factorization theorem, according to which $h(z)$ actually factorizes into
$$h(z) = e^{g(z)}\prod_j(z-z_j)^{n_{z_j}}$$
where $n_{z_j}$ denotes the multiplicity of the zero $z_j$ and $g(z)$ is an entire function. Note that I simplified matters here, for infinitely many zeros (or of infinite multiplicity) parts of the exponential are inside the product as "elementary factors" in order to have the infinite product converge. In my sloppy notation, they are part of $\exp g(z)$ though.
Therefore, the actual representation of a meromorphic function is
$$f(z) = e^{g(z)}\prod_i(z-z_i)^{n_i}$$
(now the $z_i$ also include the poles again, as in the question's notation). Only if there are finitely many zeros of finite multiplicities (i.e. a quotient of two polynomials), the representation without an exponential in front (i.e. a constant $g(z)$) is valid. |
Burgers equation with sinusoidal bump initial data | Indeed, the characteristic curves $x = \phi(x_0) t + x_0$ along which $u = \phi(x_0)$ is constant are shown below:
Until characteristics intersect, the solution of the PDE is given by the method of characteristics, i.e. by solving $u = \phi(x-ut)$ numerically (I don't know any closed form expression). The breaking time $t_b$ where the solution becomes multi-valued can be computed as described in this post:
$$
t_b = \frac{-1}{\inf \phi'} = \frac{1}{\pi} \approx 0.32 .
$$
The speed of shock $\dot x_s$ satisfies the Rankine-Hugoniot condition
$$
\dot x_s = \frac{1}{2}\big(\tilde u(x_s,t) + 0\big), \qquad x_s(1/\pi) = 1
$$
where $\tilde u(x_s,t)$ solves $\tilde u = \phi(x_s-\tilde ut)$. Hence, it is not easy to compute the shock trajectory $x_s(t)$ analytically in the present case. However, it is possible to derive some analytical expressions if the sinusoidal bump is replaced by a polynomial bump, e.g. the parabola $x\mapsto 4x(1-x)$ or the triangular function $x\mapsto 1-|2x-1|$
displayed below |
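A numerical sketch of the pre-shock solution (assuming the bump is $\phi(x)=\sin(\pi x)$ on $[0,1]$ and $0$ elsewhere, which is consistent with $\inf\phi'=-\pi$): for $t<t_b$ the implicit equation $u=\phi(x-ut)$ can be solved by bisection, since $g(u)=u-\phi(x-ut)$ is strictly increasing when $t<1/\pi$.

```python
import math

def phi(x):
    """Sinusoidal bump: sin(pi x) on [0, 1], zero elsewhere (an assumption)."""
    return math.sin(math.pi * x) if 0.0 <= x <= 1.0 else 0.0

def solve_u(x, t):
    """Solve u = phi(x - u t) by bisection; valid for t below t_b = 1/pi."""
    lo, hi = 0.0, 1.0  # g(0) <= 0 and g(1) >= 0 since 0 <= phi <= 1
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if mid - phi(x - mid * t) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

t_b = 1.0 / math.pi            # = -1 / inf(phi'), with inf(phi') = -pi
assert abs(t_b - 0.3183098861) < 1e-6

u = solve_u(0.5, 0.2)          # a point with t = 0.2 < t_b
assert abs(u - phi(0.5 - u * 0.2)) < 1e-9   # satisfies the implicit equation
```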
Future Value of Ordinary Annuity equation | The expression you are querying has simply used $$(1+i) \frac{1}{1+i} = 1$$
The reason your numerical example fails is because you've got a bit confused over the decimal representation of 2% which is 0.02 not 0.2. This means near the bottom where you have 8.5 you should have 10. |
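For completeness, a hedged numeric sketch of the ordinary-annuity future value (payment $P$, rate $i$ per period, $n$ periods; the example figures here are mine, not from the question):

```python
def fv_ordinary_annuity(payment, i, n):
    """Future value of an ordinary annuity: P * ((1+i)^n - 1) / i."""
    return payment * ((1 + i) ** n - 1) / i

# The identity used in the derivation: (1+i) * 1/(1+i) = 1
i = 0.02                       # 2% is 0.02, not 0.2
assert abs((1 + i) * (1 / (1 + i)) - 1) < 1e-15

fv = fv_ordinary_annuity(100, 0.02, 10)
assert round(fv, 2) == 1094.97
# Using 0.2 by mistake gives a wildly different (wrong) answer:
assert fv_ordinary_annuity(100, 0.2, 10) > 2000
```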
Any hint for this measure theory problem from Halmos? | Suppose both $A$ and $A^c$ have positive measure. Choose $x, y$ and $r > 0$ such that $A$ and $A^c$ have $99$% measure in $(x - r, x + r)$ and $(y - r, y + r)$ respectively. Choose $d \in D$ such that $|x + d - y| < r/100$. Now it is easily seen that $(A + d) \backslash A$ has positive measure. |
A proof regarding the Fibonacci Sequence. | If I'm right (others will correct me if I'm not)
$$\left|\frac{\overline{\phi}^{i}}{\sqrt{5}}\right| < \frac{1}{2}$$
Hence
$$
-\frac{1}{2} <\frac{\overline{\phi}^{i}}{\sqrt{5}} < \frac{1}{2}
$$
So
$$
\frac{\phi^{i}}{\sqrt{5}}-\frac{1}{2} < F_i < \frac{\phi^{i}}{\sqrt{5}}+\frac{1}{2}
$$
Then $F_i$ is an integer lying in $\displaystyle \left]a-\frac{1}{2},a+\frac{1}{2}\right[$, where $a=\frac{\phi^{i}}{\sqrt{5}}$, hence in $\displaystyle \left]\lfloor a-\frac{1}{2}\rfloor,\lfloor a+\frac{1}{2}\rfloor\right]$
We can conclude that
$$
F_i=\lfloor\frac{\phi^{i}}{\sqrt{5}}+\frac{1}{2}\rfloor
$$ |
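A quick check of the rounding formula in Python (kept to moderate $i$ so floating-point error stays well below $\tfrac12$):

```python
import math

phi = (1 + math.sqrt(5)) / 2

# Fibonacci numbers by iteration, with F_1 = F_2 = 1
fib = [0, 1]
for _ in range(40):
    fib.append(fib[-1] + fib[-2])

# F_i = floor(phi^i / sqrt(5) + 1/2) for every i in range
for i in range(1, 41):
    assert fib[i] == math.floor(phi**i / math.sqrt(5) + 0.5)
```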
Are Differential equations without "$x$" considered separable? | It would be separable. Check if $f(y) = 0$ is a solution. Then ignore this case to get
$$\frac{y'}{f(y)} = 1 $$
so your solution looks like
$$\int\frac{dy}{f(y)} = x + C $$
Intuition behind joint pdf with transformations with partitioned support | Let's take a relatively simple example, say $(X,Y)$ with a uniform distribution on $[-3,7]^2$ so with density $f_{X,Y}(x,y)=\frac1{100}$ on that support, and look for the distribution of $(U,V)$ where $U=|X|$ and $V=|Y|$. This simple example avoids worrying about transforming the density with the Jacobian and concentrates on the addition issue.
Clearly the support for $(U,V)$ is $[0,7]^2$. You might intuitively expect $(1,1)$ to be more likely to occur than $(6,6)$ as there are more ways the former might happen. If so, you would be correct, and the calculation is as follows.
Looking for inverses, $0 \lt U \le 3 \implies X=U$ or $X=-U$ while $3 \lt U \le 7 \implies X=U$, and similarly for $V$ and $Y$. So finding a joint density function for $(U,V)$ may involve adding up four, two or one densities.
Thus
for $0 \lt u \le 3$ and $0 \lt v \le 3$, we have four densities to add up, covering the inverses $(x,y)=(u,v)$, $(x,y)=(-u,v)$, $(x,y)=(u,-v)$ and $(x,y)=(-u,-v)$, and making $f_{U,V}(u,v)=\frac{4}{100}$
for $0 \lt u \le 3$ and $3 \lt v \le 7$, we have two densities to add up, covering the inverses $(x,y)=(u,v)$ and $(x,y)=(-u,v)$, and making $f_{U,V}(u,v)=\frac{2}{100}$
for $3 \lt u \le 7$ and $0 \lt v \le 3$, we have two densities to add up, covering the inverses $(x,y)=(u,v)$ and $(x,y)=(u,-v)$, and making $f_{U,V}(u,v)=\frac{2}{100}$
for $3 \lt u \le 7$ and $3 \lt v \le 7$, we have one density to add up, covering the inverse $(x,y)=(u,v)$, and making $f_{U,V}(u,v)=\frac{1}{100}$
Just as a check that nothing is missing, the total probability described by this density on this support would be $\int_u\int_v f_{U,V}(u,v) \,dv \, du= 9\times\frac{4}{100} + 12\times\frac{2}{100} + 12\times\frac{2}{100} + 16\times\frac{1}{100} =1$ as you would hope
Intuitively, where there are multiple inverses, each one can contribute to the final density for $(U,V)$, with each adding to the final result |
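A seeded Monte Carlo sketch of this example: the four region probabilities $\frac{36}{100},\frac{24}{100},\frac{24}{100},\frac{16}{100}$ (density times rectangle area) show up directly in the sample frequencies.

```python
import random

random.seed(0)                      # seeded so the check is reproducible
N = 200_000
counts = {"ll": 0, "lh": 0, "hl": 0, "hh": 0}
for _ in range(N):
    u = abs(random.uniform(-3, 7))  # U = |X|, X uniform on [-3, 7]
    v = abs(random.uniform(-3, 7))  # V = |Y|
    key = ("l" if u <= 3 else "h") + ("l" if v <= 3 else "h")
    counts[key] += 1

# region probabilities = density * area: 9*4/100, 12*2/100, 12*2/100, 16*1/100
expected = {"ll": 0.36, "lh": 0.24, "hl": 0.24, "hh": 0.16}
for key, p in expected.items():
    assert abs(counts[key] / N - p) < 0.01
```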
Integral $ \int_{-\infty}^{+\infty} e^{-t^2}\ln(1+e^t) \, dt $ | For the convergence, just find a function bigger than the given function whose integral converges: observe there exists $C>0$ such that $0<\ln(1+e^t)\leq C(1+t^2)$ for all $t\in\mathbb{R}$ (check this claim by yourself; for instance $\ln(1+e^t)\le \ln 2+\max(t,0)$). Then $$0\leq\bigg|\int_{-\infty}^\infty e^{-t^2}\ln(1+e^t)dt\bigg|=\int_{-\infty}^\infty e^{-t^2}\ln(1+e^t)dt\leq C\int_{-\infty}^\infty e^{-t^2}(1+t^2)\,dt<\infty$$ by the Gaussian integral.
I believe calculating the exact result is more difficult, so if I can, I will add later. |
How to find the P-Value for this test of hypothesis? | Thank you for including some preliminary computations. You have $\hat p = 587/755 = 0.7775.$ You want to test $H_0: p = .75,$
perhaps against the 2-sided alternative $H_a: p \ne .75.$
Two-sided test. The test statistic is $$Z = \frac{\hat p - .75}{\sqrt{(.75)(.25)/755}} = 1.744,$$
which should be approximately standard normal if $H_0$ is true.
You'd reject $H_0$ at the 5% level, if $|Z| > 1.96,$ which is not
true, so you don't reject.
The P-value is the probability of a more extreme result than that observed.
For a two-sided hypothesis, that means 'more extreme in either direction',
so the P-value is
$$P(|Z| > 1.744) = 1 - P(|Z| \le 1.744) = 1 - P(-1.744 \le Z \le 1.744).$$
Using software, I got about $0.08,$ but you should compute it using
normal tables or whatever software you're expected to use in your class.
Right-sided test. If you were supposed to test against the one-sided alternative $H_a: p > .75,$
then you'd reject if $Z > 1.645,$ which is true. So you would reject $H_0.$
(You know that the estimate $\hat p > .75$; the question is whether it is
significantly larger, or whether .7775 could just have happened
to exceed .75 due to random sampling error.)
For the right-tailed test, the P-value is
$P(Z > 1.744),$ the probability of a more extreme value in the direction of
the alternative. That P-value is half as big as for the two-sided test.
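The test statistic and both P-values can be reproduced with the standard normal CDF (here via `math.erfc`; a sketch, not necessarily the tool your class requires):

```python
import math

def norm_sf(z):
    """Upper-tail probability P(Z > z) for a standard normal variable."""
    return 0.5 * math.erfc(z / math.sqrt(2))

p_hat = 587 / 755
z = (p_hat - 0.75) / math.sqrt(0.75 * 0.25 / 755)
assert abs(z - 1.744) < 0.001

p_two_sided = 2 * norm_sf(z)      # P(|Z| > 1.744), about 0.08
p_one_sided = norm_sf(z)          # P(Z > 1.744), half as big
assert abs(p_two_sided - 0.081) < 0.002
assert abs(p_one_sided - p_two_sided / 2) < 1e-15
```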
Note: You say you are doing a two-sided test. Not knowing the context of the problem, I can't say for sure that is right. So I'll let you
think about that as required.
(Suppose the purpose of the poll were to assess
the chances of passage of Proposition A in an upcoming election, and it
takes 75% Yes votes for passage. Then I'd suppose you'd want a one-sided
test. If lore from previous years is that 75% of the population thinks
it's morally wrong to cheat on income tax, and the purpose of the poll
is to assess whether that opinion has changed recently (one way or the other), then I'd suppose
you want a two-sided alternative.)
Sketches. In problems like this, it is always a good idea to draw
sketches so you can visualize the areas that match various probabilities.
In the left panel below (two-sided alternative), the heavy black line is the
observed value of $Z,$ the dotted black line is just as far from 0 as the
heavy black line, and the P-value is the sum of the two areas beyond the
black lines. The rejection region is values of $Z$ between the two red bars
(at 'critical values'); the total area under the density curve and outside the two red bars is 5%.
In the right panel (right-sided alternative), the heavy black line is again
the observed value of $Z.$ The red line is at the critical value, and 5% of
the area under the density curve to the right of the red bar. The rejection
region is to the right of the red line, and the observed value of $Z$ is
in the rejection region. The P-value (smaller than 5%) is the area to the
right of the observed value of $Z$. |
3 Gents, 3 Ladies, and one round table | There are $\binom{3}{2}$ ways to select the two women who sit together and $2!$ ways to arrange them within that block. That leaves four open seats, two of which are adjacent to the two women who sit together, so there are $2$ ways to sit the third woman in a seat that is not adjacent to the other two women and $3!$ ways to arrange the men in the remaining seats. Thus, there are
$$\binom{3}{2} \cdot 2! \cdot 2 \cdot 3! = 3 \cdot 2 \cdot 2 \cdot 6 = 72$$
ways to seat three men and three women at a round table if exactly two of the women sit together.
Seating the men first is more difficult. You have to choose which two men are to sit three seats apart (to accommodate the pair of women who sit together), arrange those two men in those seats, then choose which of the two remaining seats will be filled by the third man. This can be done in $\binom{3}{2} \cdot 2! \cdot 2$ ways. You can then choose two of the three women to sit together, arrange them in two ways, and seat the third woman in the remaining seat, which can be done in $\binom{3}{2} \cdot 2! \cdot 1$ ways. Thus, the number of possible arrangements is
$$\binom{3}{2} \cdot 2! \cdot 2 \cdot \binom{3}{2} \cdot 2! \cdot 1 = 3 \cdot 2 \cdot 2 \cdot 3 \cdot 2 \cdot 1 = 72$$ |
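Both counts can be verified by brute force: fix one man's seat to account for rotations of the round table, then count seatings with exactly one circularly adjacent pair of women (a block of three women would give two adjacent pairs, and no pair gives zero).

```python
from itertools import permutations

people = ["M1", "M2", "M3", "W1", "W2", "W3"]
count = 0
# Fix M1 in seat 0 so rotations of the table are not double-counted.
for rest in permutations(people[1:]):
    seating = ("M1",) + rest
    # number of circularly adjacent woman-woman pairs
    pairs = sum(
        seating[i].startswith("W") and seating[(i + 1) % 6].startswith("W")
        for i in range(6)
    )
    # "exactly two women sit together" <=> exactly one adjacent W-W pair
    count += (pairs == 1)

assert count == 72
```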
CFG for $n_b = n_a + n_c$ | How about:
\begin{align}
S &\gets XSbS \mid bSXS \mid \varepsilon\\
X &\gets a \mid c
\end{align} |
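A mechanical sanity check of this grammar (Python sketch): enumerate every derivable terminal string up to a small length by breadth-first expansion, verify each satisfies $n_b = n_a + n_c$, and spot-check that all short strings with the property are generated.

```python
from collections import deque
from itertools import product

# Productions: S -> XSbS | bSXS | epsilon,   X -> a | c
S_RULES = [("X", "S", "b", "S"), ("b", "S", "X", "S"), ()]
X_RULES = [("a",), ("c",)]

def derivable(max_len=6):
    """All terminal strings of length <= max_len derivable from S (BFS)."""
    def weight(form):   # terminals plus X's; each X yields exactly one terminal
        return sum(1 for s in form if s != "S")
    seen, out = {("S",)}, set()
    queue = deque([("S",)])
    while queue:
        form = queue.popleft()
        idx = next((i for i, s in enumerate(form) if s in ("S", "X")), None)
        if idx is None:
            out.add("".join(form))
            continue
        for rhs in (S_RULES if form[idx] == "S" else X_RULES):
            new = form[:idx] + rhs + form[idx + 1:]
            if weight(new) <= max_len and new not in seen:
                seen.add(new)
                queue.append(new)
    return out

strings = derivable(6)
# soundness: every generated string has n_b = n_a + n_c
assert all(s.count("b") == s.count("a") + s.count("c") for s in strings)
# completeness spot-check up to length 4
for L in (0, 2, 4):
    for w in product("abc", repeat=L):
        s = "".join(w)
        if s.count("b") == s.count("a") + s.count("c"):
            assert s in strings
```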
Conditional Expectation (N coin flips) | $E(X_i|S_N=K)$ is the same for all $i$ and the sum of these expectations is
$$ \sum_{i=1}^N E(X_i|S_N=K)=E\left(\sum_{i=1}^NX_i\,\Big| S_N=K\right)=E(S_N|S_N=K)=K.$$
We get therefore $E(X_i|S_N=K)=\frac KN$. |
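Exact enumeration confirms this for small $N$ (for i.i.d. flips, each of the $\binom{N}{K}$ outcomes with $S_N=K$ is equally likely):

```python
from itertools import product
from fractions import Fraction

def cond_exp_first(N, K):
    """E[X_1 | S_N = K] by exact enumeration over all {0,1}^N outcomes."""
    outcomes = [w for w in product((0, 1), repeat=N) if sum(w) == K]
    return Fraction(sum(w[0] for w in outcomes), len(outcomes))

assert cond_exp_first(5, 2) == Fraction(2, 5)   # K/N = 2/5
assert cond_exp_first(6, 4) == Fraction(2, 3)   # K/N = 4/6
assert cond_exp_first(4, 0) == 0                # K/N = 0
```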
Is there any abstract theory of electrical networks? | The theory of electrical circuits consists of several sub-theories. Predominant mathematical disciplines that arise in the study of electrical circuits are linear algebra, differential equations, functional analysis (Fourier Transform, Laplace Transform) and graph theory. A circuit is a physical system which implements a mathematical function. Thus a circuit can be abstracted from its physics into its mathematical behavior and one can choose to study the latter.

The mathematical behavior is then captured by what is known as a "system", which can be defined mathematically in various ways. Some authors begin by defining the "input space" and the "output space" and then a system is a particular kind of "morphism" between those two spaces. Another abstract approach is to define input and output spaces and then take a system to be a subset of the cartesian product of the input and output space, i.e. the set of all input-output signal pairs that can occur in the system. This is known as the behavioral approach. Another approach is the algebraic approach. As an example, Rudolf Kalman, the giant of mathematical systems theory, about 50 years ago, wrote a paper saying that a linear system is actually a module over a principal ideal domain. This opened the road for algebraic systems theory and if you like categories, you will find many interesting things there.

But if you want to make a start, on the textbook level, I recommend any good book on Signals and Systems (e.g. Oppenheim's) or on Control Theory, e.g. William Brogan's "Modern Control Theory". Warning: the further you go on this road, the less relevant your study will be to actual circuit design and analysis practices used in industry. This is because there is a huge distance to be covered from having a meaningful and useful theory to actually using this theory and modifying it locally in order to obtain something that actually works. 
Let me give you a simple example: there is only one system model of the BJT transistor (and a couple more equivalent) but there are hundreds of various types of BJT transistors, very different from each other. These differences are not captured in the system theory level, but they are crucial for implementation. |
What is the steps of solution for limit of division of trigonometric functions? | Use L'Hopital's rule (L.H.):
$$\lim_{x\to \pi} \dfrac{\sin(3x)}{\sin(5x)} \quad \overset{L.H.}{=} \quad \lim_{x \to \pi}\dfrac{\Big(\sin(3x)\Big)'}{\Big(\sin(5x)\Big)'} = \lim_{x\to \pi} \dfrac {3\cos 3x}{5 \cos 5x} = \dfrac {3\cdot(-1)}{5 \cdot(-1)} = \dfrac 35$$ |
$x^2+y^2+z^2=5(xy+yz+zx)$ -- Is this all solutions? | Yes, all primitive solutions come from this, with the symmetries and negating all three entries at once. To get absolutely all, multiply these triples by any nonzero integer.
The matrix equation being solved by the computer, with a bound (9) on the matrix entries supplied by me, was $R^T G R = 196 H,$ where
$$
G =
\left(
\begin{array}{rrr}
2 & -5 & -5 \\
-5 & 2 & -5 \\
-5 & -5 & 2
\end{array}
\right)
$$
and
$$
H =
\left(
\begin{array}{rrr}
0 & 0 & -1 \\
0 & 2 & 0 \\
-1 & 0 & 0
\end{array}
\right)
$$
The matrices are to be applied to the column vector
$$
\left(
\begin{array}{c}
u^2 \\
uv \\
v^2
\end{array}
\right)
$$
which gives all primitive solution vectors $(x,y,z)$ (up to $\pm$) to
$$ y^2 - zx = 0. $$
Put another way, all solutions come from these by multiplying by a nonzero integer.
Existence of an integer matrix of this type is guaranteed by Theorem I.9 on page 15 of Plesken; that is, Automorphs of Ternary Quadratic Forms, by William Plesken, pages 5-30 in Ternary Quadratic Forms and Norms (1982), edited by Olga Taussky. This is originally in pages 507-508 of Fricke and Klein (1897), which can be read online
I typed some extra characters at your matrix to make it easier to locate.
./homothety_indef 1 1 1 -5 -5 -5 0 196 0 0 -196 0 9
note that the final 9 is the bound on absolute values of matrix entries
Mon Mar 23 12:01:09 PDT 2015
-5 -9 -3
-3 3 1
1 -1 -5
-5 -1 1
-3 -9 -5
1 3 -3
-5 1 1
-3 9 -5
1 -3 -3
-5 9 -3
-3 -3 1
1 1 -5
-5 -9 -3
1 -1 -5
-3 3 1
-5 -1 1
1 3 -3
-3 -9 -5
-5 1 1
1 -3 -3
-3 9 -5
-5 9 -3
1 1 -5
-3 -3 1
-3 -9 -5
-5 -1 1
1 3 -3
-3 -3 1
-5 9 -3
1 1 -5
-3 3 1
-5 -9 -3
1 -1 -5
-3 9 -5
-5 1 1
1 -3 -3
-3 -9 -5
1 3 -3
-5 -1 1
-3 -3 1
1 1 -5
-5 9 -3
-3 3 1
1 -1 -5
-5 -9 -3
-3 9 -5
1 -3 -3
-5 1 1
-1 -3 3
3 9 5
5 1 -1
-1 -1 5
3 3 -1
5 -9 3
-1 1 5
3 -3 -1
5 9 3
-1 3 3
3 -9 5
5 -1 -1
-1 -3 3
5 1 -1
3 9 5
-1 -1 5
5 -9 3
3 3 -1
-1 1 5
5 9 3
3 -3 -1
-1 3 3
5 -1 -1
3 -9 5
1 -3 -3
-5 1 1
-3 9 -5
1 -1 -5
-5 -9 -3
-3 3 1
1 1 -5 =-=-=-=-=-=-=-=-=-=-=-=
-5 9 -3
-3 -3 1
1 3 -3
-5 -1 1
-3 -9 -5
1 -3 -3
-3 9 -5
-5 1 1
1 -1 -5
-3 3 1
-5 -9 -3
1 1 -5
-3 -3 1
-5 9 -3
1 3 -3
-3 -9 -5
-5 -1 1
3 -9 5
-1 3 3
5 -1 -1
3 -3 -1
-1 1 5
5 9 3
3 3 -1
-1 -1 5
5 -9 3
3 9 5
-1 -3 3
5 1 -1
3 -9 5
5 -1 -1
-1 3 3
3 -3 -1
5 9 3
-1 1 5
3 3 -1
5 -9 3
-1 -1 5
3 9 5
5 1 -1
-1 -3 3
5 -9 3
-1 -1 5
3 3 -1
5 -1 -1
-1 3 3
3 -9 5
5 1 -1
-1 -3 3
3 9 5
5 9 3
-1 1 5
3 -3 -1
5 -9 3
3 3 -1
-1 -1 5
5 -1 -1
3 -9 5
-1 3 3
5 1 -1
3 9 5
-1 -3 3
5 9 3
3 -3 -1
-1 1 5
-196 : 1 1 1 -5 -5 -5
-7529536 : 0 196 0 0 -196 0
Mon Mar 23 12:01:09 PDT 2015
Note that there is an annoying complication here about the prime $7.$ It is possible, precisely when my $v \equiv 5 u \pmod 7,$ to have all three of my $x,y,z$ divisible by $7.$ However, and I am still hand-waving here, when that happens, we can divide through by $7$ and produce the result from a different $(u,v)$ pair. The point, really, is that all the binary quadratic forms used are equivalent to $u^2 + 3 uv - 3 v^2$ of discriminant $21.$ When you divide through by $7,$ you get right back to that.
Tuesday, 24 March: pleased I was able to fill in the blanks as far as the prime $7.$ If we have the triple divisible by $7,$ it means we can write (using the original $(m,n)$ letters)
$$ n = 5m + 7 t, $$
because we have $n \equiv 5m \pmod 7.$ All three formulas in the original question become divisible by $7,$ and we divide that out to get
$$ \frac{m^2 +mn-5n^2}{7} = -17 m^2 - 49 mt - 35 t^2, $$
$$ \frac{-5m^2 +9mn-3n^2}{7} = -5 m^2 - 21 mt - 21 t^2, $$
$$ \frac{-3m^2 -3mn +n^2}{7} = m^2 + 7 mt + 7 t^2. $$
Proceeding by hand from here would be a mess, but I did a computer search to find a simultaneous substitution that does the desired thing; the (integer invertible!!) change of variables
$$ m = 3r-2s , \; \; \; \; t = -2r + s. $$
The results are gratifying: we get
$$ \frac{m^2 +mn-5n^2}{7} = r^2 + rs - 5 s^2, $$
$$ \frac{-5m^2 +9mn-3n^2}{7} = -3 r^2 - 3rs + s^2, $$
$$ \frac{-3m^2 -3mn +n^2}{7} = -5r^2 + 9rs - 3 s^2. $$
Combine this with a permutation and we have it.
Final comment: the matrices I called $R$ provided by the first computer run have determinant $\pm 196.$ That is, they are singular in the field of $7$ elements, but also singular in the field of $2$ elements. This seems bad, but is not. The eigenvectors with eigenvalue $0$ in $\mathbb Z / 2 \mathbb Z$ are $(0,1,1), \; \; $ $(1,0,1), \; \; $ $(1,1,0). \; \; $ However, we are applying such an $R$ only to $(u^2, uv,v^2)$ with integers $u,v$ relatively prime (not both even). The possible such vectors are $(1,1,1), \; \; $ $(1,0,0), \; \; $ $(0,1,0), \; \; $ $(0,0,1). \; \; $ So, this is never a problem. |
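A check in Python that the parametrization from the question — $x=m^2+mn-5n^2$, $y=-5m^2+9mn-3n^2$, $z=-3m^2-3mn+n^2$ — satisfies the equation identically, together with the divisibility-by-$7$ claim for primitive $(m,n)$:

```python
from math import gcd

def triple(m, n):
    x = m*m + m*n - 5*n*n
    y = -5*m*m + 9*m*n - 3*n*n
    z = -3*m*m - 3*m*n + n*n
    return x, y, z

for m in range(-10, 11):
    for n in range(-10, 11):
        x, y, z = triple(m, n)
        # the identity x^2 + y^2 + z^2 = 5(xy + yz + zx)
        assert x*x + y*y + z*z == 5 * (x*y + y*z + z*x)
        # for coprime (m, n): all three divisible by 7 iff n = 5m (mod 7)
        if gcd(m, n) == 1:
            all_div_7 = (x % 7 == 0 and y % 7 == 0 and z % 7 == 0)
            assert all_div_7 == ((n - 5 * m) % 7 == 0)
```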
Given linear function, describe kernel and image | I'll be assuming your matrices are real, but nothing much would change if they were complex.
The first thing to note is that the kernel is a subset of the domain and the image is a subset of the codomain - this trips a lot of people up. To check if a matrix is in the kernel, you apply $F$ to it (like you correctly suggest for matrix $b$). However to check if a matrix $A$ is in the image, you need to see if there exists a matrix $M$ such that $F(M)=A$. So matrix $a$ is not in the image, because every matrix in the image of $F$ has zero in the upper right and lower left. Matrix $c$ is in the image, since
$$
F\left(\begin{array}{cc}3 & 0\\ 0& -3\end{array}\right)=\left(\begin{array}{cc}3 & 0\\ 0& -3\end{array}\right)=c
$$
To find bases for the kernel and image, you can start with a base for $M(2\times 2)$. The standard base is the four matrices:
$$
e_1=\left(\begin{array}{cc}1 & 0\\ 0& 0\end{array}\right)\quad e_2=\left(\begin{array}{cc}0 & 1\\ 0& 0\end{array}\right)\quad e_3=\left(\begin{array}{cc}0 & 0\\ 1& 0\end{array}\right)\quad e_4=\left(\begin{array}{cc}0 & 0\\ 0& 1\end{array}\right)
$$
Then, you know that any matrix $A\in M(2\times 2)$ can be written as $$
A=ae_1+be_2+ce_3+de_4. $$ Now apply $F$ to your basis elements:
$$
F(e_1)=e_1\quad F(e_2)=0\quad F(e_3)=0\quad F(e_4)=e_4
$$
Since $F$ is linear, we can describe its range by taking an arbitrary element of $M(2\times 2)$, writing in terms of our base $\{e_1,e_2,e_3,e_4\}$, and applying $F$ to it:
$$
F(ae_1+be_2+ce_3+de_4)=aF(e_1)+bF(e_2)+cF(e_3)+dF(e_4)=ae_1+de_4
$$
Thus the image can be described using the two basis matrices $e_1$ and $e_4$:
$$
\text{Im}F=\{\alpha e_1+\beta e_4:\alpha,\beta\in\Bbb{R}\}
$$
Like you suggest, the kernel can be described as any matrix with zero diagonal. Now that we have a basis for $M(2\times 2)$, we can write this succinctly as
$$
\text{Ker}F=\{\alpha e_2+\beta e_3:\alpha,\beta\in\Bbb{R}\}
$$
You should check to make sure these are subspaces.
Bonus question: show that any matrix $A\in M(2\times 2)$ can be written as the sum of a matrix in the kernel of $F$ and a matrix in the image of $F$. |
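In code (a sketch assuming, consistently with the basis computations above, that $F$ keeps the diagonal of a $2\times2$ matrix and zeroes the off-diagonal entries):

```python
def F(A):
    """Assumed form of F: keep the diagonal, zero the off-diagonal entries."""
    return [[A[0][0], 0], [0, A[1][1]]]

# basis images, matching the computation above
e1 = [[1, 0], [0, 0]]; e2 = [[0, 1], [0, 0]]
e3 = [[0, 0], [1, 0]]; e4 = [[0, 0], [0, 1]]
assert F(e1) == e1 and F(e4) == e4
assert F(e2) == [[0, 0], [0, 0]] and F(e3) == [[0, 0], [0, 0]]

# bonus question: any A splits as (kernel part) + (image part)
A = [[3, -1], [7, 4]]
im_part = F(A)                            # diagonal part, in Im F
ker_part = [[0, A[0][1]], [A[1][0], 0]]   # off-diagonal part, in Ker F
assert F(ker_part) == [[0, 0], [0, 0]]
assert all(A[i][j] == ker_part[i][j] + im_part[i][j]
           for i in range(2) for j in range(2))
```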
calculate the limit for $\sqrt{1 + \sqrt{2 + \sqrt{1 + \sqrt{2 + ......}}}}$ | The sequence is bounded above by $2$: certainly $a_0=\sqrt{1+\sqrt{2}}<2$ since $\sqrt{2}<3$, i.e. $1+\sqrt{2}<4$
Once you know $a_{n-1}<2$, $a_n<\sqrt{1+\sqrt{2+2}}=\sqrt{3}<2$
increasing: $a_1>a_0$ clearly, and use induction again
So bounded above and increasing $\implies$ convergent. |
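Iterating the recursion numerically (a Python sketch with $a_0=\sqrt{1+\sqrt2}$ and $a_n=\sqrt{1+\sqrt{2+a_{n-1}}}$):

```python
import math

a0 = math.sqrt(1 + math.sqrt(2))      # a_0 < 2 since 1 + sqrt(2) < 4
seq = [a0]
for _ in range(40):
    seq.append(math.sqrt(1 + math.sqrt(2 + seq[-1])))

assert all(x < 2 for x in seq)                           # bounded above by 2
assert all(x < y for x, y in zip(seq[:5], seq[1:6]))     # increasing (early terms)

L = seq[-1]
# the limit satisfies L = sqrt(1 + sqrt(2 + L))
assert abs(L - math.sqrt(1 + math.sqrt(2 + L))) < 1e-12
print(f"limit ≈ {L:.10f}")
```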
Laurent series coefficient calculation by Cauchy’s residue theorem. | The problem is that although $f$ has a double pole at $z=0$, $f(z)/z^{n+1}$ has a pole of order $n+3$, and it's the residue of this that is relevant, not the residue of $f$ itself. |
Use the method of finite differences to determine a formula | What you did looks basically correct. FYI, one small thing the question is implicitly stating, and you're implicitly using, is that the $D_3$ row of $6$'s will continue indefinitely. There are formulas (although they would be somewhat convoluted) giving sequences where the first $7$ values match what you're given, but where not all of the later values of $D_3$ would be $6$.
Another issue is with the terminology. As stated in the first sentence of Wikipedia's Finite difference page
A finite difference is a mathematical expression of the form $f(x + b) − f(x + a)$.
This does apply to what you're doing. However, note the Wikipedia page says at the start of the third paragraph that
Today, the term "finite difference" is often taken as synonymous with finite difference approximations of derivatives, especially in the context of numerical methods.
Also, Wolfram's MathWorld Finite difference page starts with
The finite difference is the discrete analog of the derivative.
I believe the term "finite difference" likely comes from the idea that you're using one specific, even if very small, finite difference, as opposed to differentiation, which uses the limiting case of intervals becoming infinitesimally small to calculate the derivative function exactly.
Instead, as explained in Mathonline's Difference Sequences page, the $p$'th row you've determined is called the "$p^{\text{th}}$ order difference sequence". The entire set of these rows, as explained in Mathonline's Difference Tables of Sequences, is called a "difference table". This phrase is also in other places, such as in the Example section of Wikipedia's "Binomial transform" page.
Regarding the solution method, there's an alternate method, as suggested by the Wikipedia article mentioned above, and explained quite well in the answer by Jean-Claude Arbaut to Baffled by resolving number list. Using this, assuming the index of the polynomial values starts at $0$, you get using the binomial transform with the left-most values of $S$ and $D_i$ that
$$\begin{equation}\begin{aligned}
p_0(n) & = 5{n \choose 0} + 14{n \choose 1} + 16{n \choose 2} + 6{n \choose 3} \\
& = 5 + 14n + 16\left(\frac{n(n-1)}{2}\right) + 6\left(\frac{n(n-1)(n-2)}{6}\right) \\
& = 5 + 14n + 8(n^2 - n) + (n^3 - 3n^2 + 2n) \\
& = n^3 + 5n^2 + 8n + 5
\end{aligned}\end{equation}\tag{1}\label{eq1A}$$
Converting to using a starting index of $1$ instead gives
$$\begin{equation}\begin{aligned}
p_0(n) & = p_1(n-1) \\
& = (n-1)^3 + 5(n-1)^2 + 8(n-1) + 5 \\
& = (n^3 - 3n^2 + 3n - 1) + 5(n^2 - 2n + 1) + (8n - 8) + 5 \\
& = n^3 + 2n^2 + n + 1
\end{aligned}\end{equation}\tag{2}\label{eq2A}$$
This matches what you got. Alternatively, you can use the pattern to determine the previous values in the sequences. Starting from the $6$'s in $D_3$, you get the previous value in $D_2$ would be $16 - 6 = 10$, the previous value in $D_1$ would be $14 - 10 = 4$ and the previous value in $S$ would be $5 - 4 = 1$. Thus, using the binomial transform with these values gives
$$\begin{equation}\begin{aligned}
p_1(n) & = 1{n \choose 0} + 4{n \choose 1} + 10{n \choose 2} + 6{n \choose 3} \\
& = 1 + 4n + 10\left(\frac{n(n-1)}{2}\right) + 6\left(\frac{n(n-1)(n-2)}{6}\right) \\
& = 1 + 4n + 5(n^2 - n) + (n^3 - 3n^2 + 2n) \\
& = n^3 + 2n^2 + n + 1
\end{aligned}\end{equation}\tag{3}\label{eq3A}$$
If you're solving it using a set of linear equations, as you did, then there's a small short-cut you can use. If the degree of the polynomial is $d$ and the coefficient of the highest-order term is $a_d$, then the $D_d$ terms are all $a_d(d!)$. You can determine this by noting the coefficient of the highest order term in each $D_i$ level is $d - i + 1$ times that of the previous level, so these factors combined form $d!$ times the $S$'s coefficient, i.e., $a_d$. You can also see this from the binomial transform as the highest order term comes from the $D_d$ values times ${n \choose d}$, so it'll be $a_d = \frac{D_d}{d!} \implies D_d = a_d(d!)$. Using this with your set of equations, you can see that $d = 3$, so $3! = 6$, gives $6 = 6A \implies A = 1$. Thus, you only need to use $3$ equations to get
$$5 = 1 + B + C + D \implies 4 = B + C + D \tag{4}\label{eq4A}$$
$$19 = 8 + 4B + 2C + D \implies 11 = 4B + 2C + D \tag{5}\label{eq5A}$$
$$49 = 27 + 9B + 3C + D \implies 22 = 9B + 3C + D \tag{6}\label{eq6A}$$
You can solve \eqref{eq4A}, \eqref{eq5A} and \eqref{eq6A} to get $B = 2$, $C = 1$ and $D = 1$, as you did in your question text. |
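The whole procedure can be scripted; the sketch below (Python, with the sequence values reconstructed from the derived polynomial $p(n)=n^3+2n^2+n+1$, $n\ge1$) rebuilds the difference table and recovers the polynomial from the leading entries $5, 14, 16, 6$:

```python
from math import comb

seq = [5, 19, 49, 101, 181, 295, 449]          # p(1), ..., p(7)

# build the difference table: row 0 is S, row k is D_k
table = [seq]
while len(table[-1]) > 1:
    table.append([b - a for a, b in zip(table[-1], table[-1][1:])])

assert table[3] == [6] * 4                     # constant third differences

# binomial transform from the leading entry of each row: [5, 14, 16, 6]
lead = [row[0] for row in table[:4]]
def p0(n):
    return sum(lead[k] * comb(n, k) for k in range(4))

assert [p0(n) for n in range(7)] == seq        # index starting at 0
assert all(p0(n - 1) == n**3 + 2*n**2 + n + 1 for n in range(1, 8))
```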
Differentiate an integral function | This is a particular case of Leibniz integral rule for differentiating an integral.
$${\frac {\mathrm {d} }{\mathrm {d} t}}\left(\int _{a(t)}^{b(t)}f(x,t)\,\mathrm {d} x\right)=\int _{a(t)}^{b(t)}{\frac {\partial f}{\partial t}}\,\mathrm {d} x\,+\,f{\big (}b(t),t{\big )}\cdot b'(t)\,-\,f{\big (}a(t),t{\big )}\cdot a'(t)$$
One might recognize this to be a combination of the multivariate chain rule and the fundamental theorem of calculus. |
Simultaneous diagonalizability and matrix polynomial | As pointed out, there are many simultaneously diagonalizable matrices such that neither is a polynomial of the other. One example that comes to mind is
$$
A = \begin{bmatrix}1 & 0 & 0\\0 &1 & 0 \\0 & 0 & 2 \end{bmatrix},\quad B = \begin{bmatrix}1 & 0 & 0\\0 &2 & 0 \\0 & 0 & 2\end{bmatrix}
$$
since no polynomial in $A$ can make the topmost two diagonal entries unequal, and no polynomial in $B$ can make the bottommost two diagonal entries unequal.
However, if one matrix is a polynomial of the other, say
$$
B = a_nA^n + \cdots + a_1A + a_0I
$$
and $A$ is diagonalisable (say $A = PDP^{-1}$, with $D$ diagonal), then
$$
B = a_n(PDP^{-1})^n + \cdots + a_1PDP^{-1} + a_0I \\
= a_nPD^nP^{-1} + \cdots + a_1PDP^{-1} + a_0PIP^{-1}\\
= P(a_nD^n + \cdots + a_1D + a_0I)P^{-1} = PEP^{-1}
$$
where $E = a_nD^n + \cdots + a_1D + a_0I$ is diagonal. So $A$ and $B$ are simultaneously diagonalisable. |
Write a linear equation that represents this scenario. | You need to define your terms for people to have a good chance of understanding the equation you've come up with. I infer that $d$ must be the number of days worked, so $90d$ is her money earned. Similarly, $p$ is her days travelled so that $150p$ is her money spent on travel. Is $c$ supposed to be money spent on the laptop? In that case is $c=700$?
Try plugging in some numbers that should work and see if the equation is doing what it is supposed to. For example, if she buys the laptop and travels 10 days she will have spent \$700 + 10(\$150) = \$2200. She would have to work \$2200/\$90 = 24.444... days to pay for this. If you plug in d = 24.444... and p = 10 does your equation come out right?
The comment from @RecklessReckoner is a good one. |
Do the solutions to the unit equation lie dense in the complex numbers | This question has been asked and answered on MathOverflow. I have replicated the accepted answer by user2035 below.
If $f\in\mathbf Z[X]$ is any monic polynomial, the solutions of $x(1-x)\cdot f(x)=1$ are solutions of the unit equation. Take some $y\in U\setminus\mathbf R$. Since the substitution $z\mapsto1/(1-z)$ leaves $S$ invariant, we may assume $|y|>1$. For $n$ given, choose $u,v\in\mathbf R$ such that $y(1-y)\cdot(y^n+uy+v)=1$. Now if $n$ is sufficiently large, Rouché's theorem shows that the number of solutions in a suitable neighbourhood of $y$ in $U$ does not change if we replace $u$ and $v$ by the nearest integers. Hence, $S\cap U$ is nonempty. Since $U$ was arbitrary, this implies that $S\cap U$ is infinite. |
Geometric interpretation of an elliptic point on a Riemann surface / hyperbolic surface | They are sometimes called cone points, because they look like taking a piece of paper with a corner of angle $2\pi/m_i$ and folding it over to identify opposite sides of the vertex. This produces something which looks like a cone at the singular point. They are also sometimes called pillowcase points as they look like the corners of a pillow.
So, for example, if $m_i=2$ then take a piece of paper, pick a point on one side, and identify the sides opposite the point to make a cone. If $m_i=3$ take a piece with a $120^\circ$ angle and identify the opposite sides. The cone gets "sharper" as the angle decreases, i.e. as $m_i$ increases. |
If $f(x), \phi(x)$ are continuous functions on $[a,b]$, how can I show that $\int_a^b f(x)\phi(x) dx = 0$? | Hint
The Weierstrass approximation theorem tells us that $f(x)$ can be uniformly approximated by polynomials. |
Suppose that $z\in\mathbb C$ with $|z^2+1|\le 1$. How to prove $|z+1|\geq\frac12$. | Let $z=a+ib$, then $$|1+z^2|^2=1+2a^2+a^4+b^2(2a^2-2)+b^4\le1.$$ We want to minimize $|1+z|^2=\color{orange}{(1+a)^2+b^2}$. Let me complete the square in the constraint: $$b^4+b^2(2a^2-2)\color{blue}{+(a^2-1)^2}\le\color{blue}{(a^2-1)^2}-a^4-2a^2=1-4a^2$$ which is the same as $$(b^2+(a^2-1))^2\le1-4a^2.$$ In particular, it follows that $- \frac12<a<\frac12$ and $$b^2\in\left[1-a^2-\sqrt{1-4a^2},1-a^2+\sqrt{1-4a^2}\right].$$
Thus, $$2+2a-\sqrt{1-4a^2}\le\color{orange}{(1+a)^2+b^2}\le2+2a+\sqrt{1-4a^2}.$$ We will thus want to minimize the function $2a-\sqrt{1-4a^2}$. This can be done for example with calculus, but we can also proceed as follows:
Now we have $$8a^2-4 \sqrt 2 a+1=8 \left(a - \frac{1}{2 \sqrt 2}\right)^2\geq0$$ which implies $$4a^2-4\sqrt 2a+2=(\sqrt 2-2a)^2\geq1-4a^2$$ and thus $$\sqrt 2-2a\geq\sqrt{1-4a^2}.$$ By replacing $a$ by $-a$ we also get $$\sqrt{1-4a^2}\le\sqrt2+2a.$$
It follows from the previous result that $$2-\sqrt 2\le\color{orange}{(1+a)^2+b^2}\le2+\sqrt 2$$ and thus in particular $$\bbox[15px,border:1px groove navy]{|1+z|=\sqrt{\color{orange}{(1+a)^2+b^2}}\geq\sqrt{2-\sqrt 2}>\frac12.}$$ |
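A quick numerical sanity check of this bound (mine, not part of the proof): a Python grid search over the constraint region; the grid ranges and step count are arbitrary choices, and the proof's restriction $-\frac12\le a\le\frac12$ is used to keep the grid small.

```python
import math

# Grid search (illustrative only): sample z = a + ib, keep the points with
# |z^2 + 1| <= 1, and track the smallest |1 + z|.  The proof above shows
# the minimum is sqrt(2 - sqrt(2)) ~ 0.7654 > 1/2.
bound = math.sqrt(2 - math.sqrt(2))
best = float("inf")
steps = 400
for i in range(steps + 1):
    a = -0.6 + 1.2 * i / steps          # the proof gives -1/2 <= a <= 1/2
    for j in range(steps + 1):
        b = -1.5 + 3.0 * j / steps
        z = complex(a, b)
        if abs(z * z + 1) <= 1:
            best = min(best, abs(1 + z))

assert best >= bound - 1e-6             # the bound is never violated
assert best <= bound + 1e-2             # and it is (nearly) attained
print(best, bound)
```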
Verify that convolution with $\exp(-|z|)$ gives a solution of $y''-y=f$ | First, observe that
$$
\frac{\partial^2 f(x-z)}{\partial x^2} = \frac{\partial^2 f(x-z)}{\partial z^2}
$$
Then integrate by parts twice, throwing the derivative onto the exponential term. The latter is not differentiable at $z=0$, so the integral should be split:
$$-2y''(x) = \int_{-\infty}^0 \frac{\partial^2 f(x-z)}{\partial z^2} e^{z} dz +
\int_{0}^\infty \frac{\partial^2 f(x-z)}{\partial z^2} e^{-z} dz \\ =
\frac{\partial f(x-z)}{\partial z} e^{z} \bigg|_{-\infty}^0
- \int_{-\infty}^0 \frac{\partial f(x-z)}{\partial z} e^{z} dz +
\frac{\partial f(x-z)}{\partial z} e^{-z} \bigg|_{0}^\infty
+ \int_{0}^\infty \frac{\partial f(x-z)}{\partial z} e^{-z} dz
$$
The boundary values at $\infty$ are zero, and at $0$ they cancel out. Integrate by parts again:
$$
- f(x-z)e^z\bigg|_{-\infty}^0
+ \int_{-\infty}^0 f(x-z) e^{z} dz
+ f(x-z)e^{-z}\bigg|_0^{\infty}
+ \int_{0}^\infty f(x-z) e^{-z} dz
$$
This simplifies to
$$
-2f(x) + \int_{-\infty}^\infty f(x-z) e^{-|z|} dz = -2f(x) - 2y(x)
$$
as expected. |
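A numerical sanity check of the result (mine, not part of the original answer): assuming, as the derivation implies, that $y(x)=-\tfrac12\int_{-\infty}^{\infty}e^{-|z|}f(x-z)\,dz$, the sketch below verifies $y''-y=f$ for the test function $f(x)=e^{-x^2}$; the quadrature window, node count, and difference step are arbitrary choices.

```python
import math

f = lambda x: math.exp(-x * x)          # smooth, rapidly decaying test function

def y(x, a=-30.0, b=30.0, n=6000):
    """Trapezoid approximation of y(x) = -(1/2) * integral e^{-|z|} f(x-z) dz."""
    h = (b - a) / n
    s = 0.5 * (math.exp(-abs(a)) * f(x - a) + math.exp(-abs(b)) * f(x - b))
    for k in range(1, n):
        z = a + k * h
        s += math.exp(-abs(z)) * f(x - z)
    return -0.5 * h * s

eps = 1e-2
for x0 in (-1.0, 0.0, 0.7, 2.0):
    ypp = (y(x0 + eps) - 2 * y(x0) + y(x0 - eps)) / eps**2   # central 2nd difference
    assert abs(ypp - y(x0) - f(x0)) < 1e-3                   # y'' - y = f
print("y'' - y = f holds at the sample points")
```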
Deriving convexity from Taylor series expansion | First you require that $f(x)$ represents a convergent series for it to be defined at a real number $x$. This gives $f$ its domain: $D = \{x: |3x| < 1\}=\left(-\dfrac{1}{3},\dfrac{1}{3}\right)$, then you can differentiate term-by-term since the series that represents $f$ is now absolutely convergent. Thus $f''(x) = \displaystyle \sum_{k=1}^\infty 3^{2k}(2k)(2k-1)(x^{2})^{k-1}\geq 0$ for all $x \in D$, and hence $f$ is convex on $D$. |
Finding the parameter that makes two matrices similar | Why not try first the easy things? The trace and determinant must be equal. The former is, but for the latter we have:
$$\det B=3\;,\;\;\det A=3-2h\implies h=0$$
is a necessary condition for similarity...but perhaps not sufficient. Substitute $\;h=0\;$ and now go with eigenvalues or whatever.
Added as further help (because the comments make it clear this is necessary): The characteristic polynomial of the first matrix is (assuming already $\;h=0\;$)
$$\det(xI-A)=\begin{vmatrix}x-3&\!-2&0\\0&x-1&\!-5\\0&0&x-1\end{vmatrix}=(x-1)^2(x-3)$$
As $\;A\;$ must be diagonalizable for the matrices to be similar, it is enough to check whether the eigenspace of $\;\lambda=1\;$ is of dimension two:
$$\lambda=1:\;\;\begin{cases}-2x-2y=0\\-5z=0\\\end{cases}\;\;\implies x=-y,\,z=0$$
and we thus get $\;\dim V_1=1\implies A\;$ isn't diagonalizable and thus $\;A\;$ cannot be similar to $\;B\;$ no matter what. |
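The non-diagonalizability claim can be double-checked symbolically (an illustrative SymPy sketch, not part of the answer; the entries of $A$, with $h=0$, are read off the determinant display above):

```python
import sympy as sp

x = sp.symbols('x')
A = sp.Matrix([[3, 2, 0],
               [0, 1, 5],
               [0, 0, 1]])               # the matrix A with h = 0

# characteristic polynomial (x-1)^2 (x-3), as computed above
assert sp.expand((x * sp.eye(3) - A).det()) == sp.expand((x - 1)**2 * (x - 3))

# eigenvalue 1 has geometric multiplicity 3 - rank(A - I) = 1, not 2
assert (A - sp.eye(3)).rank() == 2
assert not A.is_diagonalizable()
```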
Solving second order linear non homogoneus differential equation with operators | A simple googling of "solving second order differential equations" reveals a multitude of links. Reading the first two pages of, for example,
http://www.stewartcalculus.com/data/CALCULUS%20Concepts%20and%20Contexts/upfiles/3c3-2ndOrderLinearEqns_Stu.pdf
will answer your question. In short, you always try to find a solution $x = C_1 e^{r_1 t} + C_2 e^{r_2 t}$, where $C_1$ and $C_2$ are constants, and $r_1$ and $r_2$ are the roots of the associated characteristic (2nd order) polynomial. After having found the solution to the homogeneous equation this way, you will have to guess the solution to the inhomogeneous equation. I think you will succeed by trying out an arbitrary 2nd order polynomial as an inhomogeneous solution
$x_{inhomog}(t) = at^2 +bt+c$
The constants can be figured out by substituting it into the original equation. The final solution will be the sum of homogeneous and inhomogeneous one.
I don't feel like solving the equation explicitly here. There is really a lot of material on this type of problem online |
Ambiguity in the definition of "locally finiteness of a collection of subsets of a topological space". | If any $x\in M$ belonged to every member of $Y,$ where $Y$ is an infinite subset of $X,$ then every nbhd $U$ of $X$ would have non-empty intersection with every member of $Y$ (because $x\in U\cap Z$ for all $Z\in Y$), implying that $X$ is not locally-finite.
A family $F$ of subsets of $M$ is called point-finite iff every $x\in M$ belongs to only finitely many members of $F.$ So a locally-finite family is point-finite. Point-finite is not equivalent to locally-finite. For example if $M=\mathbb R$ then $F=\{\{x\}:x\in M\}$ is point-finite but not locally-finite. |
Is it possible to have $\mathbb E S_n \to \infty$ but $\inf S_n = -\infty$ a.s.? | As pointed out in the comments, the law of large numbers entails that the following convergence holds almost surely:
$$ \frac{S_n}n\to\mathbb E\left[X_1\right]=p-(1-p)=2p-1\gt 0.$$
Therefore, for almost every $\omega$, there exists $N=N\left(\omega\right)$ such that if $n\geqslant N$, then $S_n\left(\omega\right)\geqslant n\left(p-1/2\right)$, hence $\lim_{n\to +\infty}S_n\left(\omega\right)=+\infty$. This was the only missing thing in the opening post. |
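A Monte Carlo illustration (mine, not the answer's) of the law-of-large-numbers step, with the arbitrary choice $p=0.7$; `random.seed(0)` makes the run reproducible, and the tolerances are generous enough that the assertions hold with overwhelming probability for any seed.

```python
import random

# Steps are +1 with probability p and -1 with probability 1-p,
# so S_n / n should approach E[X_1] = 2p - 1 = 0.4.
random.seed(0)
p, n = 0.7, 100_000
S = 0
for _ in range(n):
    S += 1 if random.random() < p else -1

assert abs(S / n - 0.4) < 0.05     # S_n / n is close to 2p - 1
assert S > n * (p - 0.5)           # S_n >= n(p - 1/2), as used above
print(S / n)
```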
Topology proof: dense sets and no trivial intersection | Why does $G\cap A'\ne \emptyset$ leads to a contradiction? We know that
$$G\cap A = \emptyset \implies G\subseteq A' - A$$
but you have to say something more in order to reach a contradiction (unless you're using some propositions you proved earlier)
The rest is Ok. |
Complex integral of the function $f(z)=\dfrac{1}{z^4+1}$ | Your contour is the circle $$(x-1)^2+y^2=1$$
Your integrand has poles at $z=\pm\sqrt i, \pm i\sqrt i $.
Note that $\sqrt i=\exp(i{\pi\over4})={1\over\sqrt 2}(1+i)$. Interior of your circle only intersects $1$st and $4$th quadrant. $-\sqrt i, i\sqrt i$ are in $3$rd and $2$nd quadrant respectively. So we don't need to worry about them. The other poles are inside your circle.
$$
\text{Res}_\sqrt i\left({1\over z^4+1}\right)=\text{Res}_\sqrt i\left({1\over (z^2+i)(z^2-i)}\right)={1\over2i\cdot2\sqrt i}=-{i\over 4\sqrt i}\\
\text{Res}_{-i\sqrt i}\left({1\over z^4+1}\right)=\text{Res}_{-i\sqrt i}\left({1\over (z^2+i)(z^2-i)}\right)={1\over(-2i\sqrt i)\cdot(-2 i)}=-{1\over 4\sqrt i}
$$
So your integral becomes
$$
-{2\pi i\over4\sqrt i}(1+i)=-{2\pi i\over{4\over\sqrt 2}(1+i)}(1+i)=-{i\pi \over\sqrt2}
$$ |
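A numerical check of the residue computation (mine, not in the original answer), assuming the standard counterclockwise orientation of the contour; the periodic trapezoid rule is spectrally accurate for this smooth periodic integrand.

```python
import cmath
import math

# Integrate 1/(z^4 + 1) counterclockwise around (x-1)^2 + y^2 = 1,
# parametrized z(t) = 1 + e^{it}, t in [0, 2*pi).
n = 4096
total = 0j
for k in range(n):
    t = 2 * math.pi * k / n
    z = 1 + cmath.exp(1j * t)
    dz = 1j * cmath.exp(1j * t)          # z'(t)
    total += dz / (z**4 + 1)
total *= 2 * math.pi / n                 # trapezoid rule, periodic integrand

expected = -1j * math.pi / math.sqrt(2)  # the value -i*pi/sqrt(2) derived above
assert abs(total - expected) < 1e-10
print(total)
```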
Implicit Function Theorem for $C^1$ function $r:\mathbb{R}^2\to\mathbb{R}^3.$ | Ok, no solution, but some hints, as requested...
$r_u\times r_v\neq 0 $ means $r_u$ and $r_v$ are nonzero and linearly independent. By continuity, this is true in a neighbourhood of $(u_0,v_0)$. In other words, $r$ defines locally a two dimensional manifold which is known to admit a local representation as a graph.
If this is unknown to you, choose a basis vector $e_i$ such that $r_u(u_0,v_0), r_v(u_0,v_0), e_i$ are linearly independent. Wlog $e_i= e_3$ corresponding to the $z$ coordinate. Now look at $$F:\mathbb{R}^2\rightarrow \mathbb{R}^2$$
defined by $$(u,v)\mapsto \big(\langle r,e_1\rangle, \langle r,e_2\rangle\big)$$
This (is just the projection of the image of $r$ to the $(x,y)$ plane and) has to be a map with a differential of maximal rank from what has been discussed above, so you can apply the implicit function theorem. So $F$ is a local diffeomorphism near $(u_0,v_0)$ and so the projection $(x,y,z)\mapsto (x,y) $, when restricted to the image of $r$ near $(u_0,v_0)$ is one-to-one and onto. |
Closed subset of $L^1([0,1])$ | It's enough to show that $G$ is sequentially closed: let $(f_k,k\geqslant 1)$ be a sequence of elements of $G$, converging in $L^1$ to some $f$. Extracting a subsequence which converges almost everywhere, and using Fatou's lemma, we are done. |
Limit of Laplacian of the distance at the origin | If you use normal coordinate, the computation along this line will be easier: here we have $d=\sqrt{x_1^2+...+x_n^2}$, $\Gamma_{ij}^k(p)=0$ so $\Gamma_{ij}^k=O(d)$; and $\frac{\partial g_{ij}}{\partial x_k}(p)=0$, so $g_{ij}=\delta_{ij}+O(d^2)$. Take inverse we get also $g^{ij}=\delta_{ij}+O(d^2)$. Now from
$\Delta d=g^{ij}(d_{ij}-\Gamma_{ij}^kd_k)$, by direct calculation we note $d_k=O(1)$, so $\Gamma_{ij}^kd_k\to 0$. Also by direct computation we see $d_{ij}=O(d^{-1})$; so $g^{ij}d_{ij}=\delta_{ij}d_{ij}+O(d^2)O(d^{-1})$, with $ O(d^2)O(d^{-1})\to 0$. Thus at distance $d$, $\Delta d$ differs from the Euclidean version by $O(d)\to 0$, the result follows from the Euclidean case. |
a representation of $f''$ only using terms of $f$ and constants. | For $1 \le i \le 3$ we have
\[ f(x+ih) = f(x) + ihf'(x) + \frac 12 {i^2}{h^2}f''(x) + \frac 16{i^3}{h^3}f'''(x) + O(h^4)
\]
So we have for your right hand side
\begin{align*}
f(x)\bigl(A + B + C + D)\\
{} + f'(x)\bigl(B + 2C + 3D\bigr)h\\
{} + f''(x)\frac 12\bigl(B + 4C + 9D\bigr)h^2\\
{} + f'''(x)\frac 16 \bigl(B + 8C + 27D)h^3\\
{} + O(h^4)
\end{align*}
So we want to have
\begin{align*}
A + B + C + D &= 0\\
B + 2C + 3D &= 0\\
B + 4C + 9D &= 0
\end{align*}
Wlog we can let $A = 1$. Then our system gives
\begin{align*}
B + C + D &= -1\\
C + 2D &= 1\\
3C + 8D &= 1
\end{align*}
hence
\begin{align*}
B + C + D &= -1\\
C + 2D &= 1\\
2D &= -2
\end{align*}
so $D = -1$, $C = 3$, $B = -3$. Then $B + 8C + 27D = -3+24-27 = -6$. That gives
\[
f'''(x) = \frac{-f(x) + 3f(x+h) - 3f(x+2h) + f(x+3h)}{h^3} + O(h).
\] |
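A quick numeric check of the resulting formula (illustrative, mine): with $f=\sin$, whose third derivative is $-\cos$, the error should shrink roughly like $O(h)$; the evaluation point and step sizes are arbitrary.

```python
import math

def d3(f, x, h):
    """Forward-difference approximation of f'''(x) derived above."""
    return (-f(x) + 3 * f(x + h) - 3 * f(x + 2 * h) + f(x + 3 * h)) / h**3

x0 = 0.5
exact = -math.cos(x0)                    # (sin)''' = -cos
err1 = abs(d3(math.sin, x0, 1e-2) - exact)
err2 = abs(d3(math.sin, x0, 5e-3) - exact)

assert err1 < 1e-1
assert err2 < 0.7 * err1                 # halving h roughly halves the error: O(h)
print(err1, err2)
```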
Proving that $G/N$ is an abelian group | One way is using the first isomorphism theorem.
To do this you should find a group homomorphism $\varphi$ such that $\operatorname{Ker} \varphi=N$.
Let us try $\varphi: G\to \mathbb R^*\times \mathbb R^*$ given by
$$\begin{pmatrix} a & b \\ 0 & d\end{pmatrix} \mapsto (a,d).$$
(By $\mathbb R^*$ I denote the group $\mathbb R^*=\mathbb R\setminus\{0\}$ with multiplication. By $G\times H$ I denote the direct product of two groups, maybe your book uses notation $G\oplus H$ for this.)
It is relatively easy to verify that $\varphi$ is a surjective homomorphism. It is clear that $\operatorname{Ker} \varphi=N$. Hence, by the first isomorphism theorem,
$$G/N \cong \mathbb R^*\times\mathbb R^*$$
This is a commutative group.
If you prefer, for any reason, not using the first isomorphism theorem, you could also try to verify one of equivalent definitions of normal subgroup and then describe the cosets and their multiplication.
In this case you have
$$\begin{pmatrix} a & b \\ 0 & d \end{pmatrix}
\begin{pmatrix} 1 & b' \\ 0 & 1 \end{pmatrix}
\frac1{ad}
\begin{pmatrix} d & -b \\ 0 & a \end{pmatrix}=
\begin{pmatrix} 1 & \frac{ab'}d \\ 0 & 1 \end{pmatrix}$$
(I have omitted the computations), which shows that $xNx^{-1}\subseteq N$ for any $x\in G$.
You can find out easily that cosets are the sets of the form
$$\{\begin{pmatrix} x & y \\ 0 & z \end{pmatrix}; y\in\mathbb R\}$$
for $x,z\in\mathbb R\setminus\{0\}$ and that the multiplication of cosets representatives $\begin{pmatrix} x & 0 \\ 0 & z \end{pmatrix}$ is coordinate-wise. |
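The conjugation computation that was omitted above can be confirmed symbolically (an illustrative SymPy sketch, mine; the symbol `bp` stands in for $b'$):

```python
import sympy as sp

a, b, d, bp = sp.symbols('a b d bp')     # bp plays the role of b'
g = sp.Matrix([[a, b], [0, d]])          # generic element of G (a, d nonzero)
n = sp.Matrix([[1, bp], [0, 1]])         # generic element of N

conj = g * n * g.inv()
assert sp.simplify(conj - sp.Matrix([[1, a * bp / d], [0, 1]])) == sp.zeros(2, 2)
```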
Fourier Transformation | Write $x(t) = e^{-3t+5}u(t-1) = e^{-3t+3+2}u(t-1) = e^2\cdot e^{-3(t-1)}u(t-1)$.
Now notice that $x(t)$ is a shifted version of $y(t) = e^2 \cdot e^{-3t}u(t)$, so apply the shift theorem to $y(t)$ and you should have your result. |
Bayes Theorem with multiple variables | Using Sum Rule:
$P(D|Dem1) = P(D|T,Dem1)P(T|Dem1) + P(D|T',Dem1)P(T'|Dem1)$
$= (0.95)(0.08) + (0.05)(1-0.08) = 0.122$ |
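The arithmetic can be reproduced in a few lines (an illustrative sketch; the variable names are mine, the probabilities are the ones quoted above):

```python
p_T = 0.08                # P(T | Dem1)
p_D_given_T = 0.95        # P(D | T, Dem1)
p_D_given_notT = 0.05     # P(D | T', Dem1)

# law of total probability ("sum rule") over T and T'
p_D = p_D_given_T * p_T + p_D_given_notT * (1 - p_T)
assert abs(p_D - 0.122) < 1e-9
print(p_D)
```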
Lebesgue point - an example | Indeed, the function is not defined at $x=0$ or on the unit sphere. But at all other points, the function is defined. But we can definitely define it at $x=0$. Let's show that no matter what value $a$ we assign to $f$ at $x=0$, the expressions
$$ A_r = \frac{1}{r^n}\int_{B_r (0)} |f(y) - a| dy$$
do not converge to $0$ as $r\to 0+$. WLOG we will assume $r < 1/2$. By radial symmetry, $A_r$ is equal, up to a multiplicative constant, to
$$ r^{-n} \int_0^r |\sin (\ln |\ln s|)-a| s^{n-1} ds.$$
Change variables $s \to u = |\ln s|$ gives $du = -\frac{1}{s} ds$, and so
$$(*)\quad r^{-n} \int_{|\ln r|}^\infty |\sin (\ln u)-a| e^{-n u} du$$
Using the periodicity of $\sin$, there exists a sequence of disjoint intervals $I_k = [\alpha,\beta] + 2\pi k~,k=1,2,\dots$ such that $|\sin (x) -a|>c_0>0$ for all $x$ in the union of the intervals (if $a=0$ choose $c_0=\frac 12$ and if $a\ne 0$ choose $c_0= |a|/2$). Now $x \in I_k$ if and only if $\ln (\alpha +2\pi k) \le \ln x \le \ln (\beta + 2\pi k)$. Therefore the integral is bounded below by the sum
$$ (**) \quad c_0 \sum_{k=k(r)}^\infty (\beta+ 2\pi k)^{-n} (\ln ( \beta +2\pi k) - \ln (\alpha +2\pi k)),$$
where $k(r)$ is the smallest $k$ satisfying $\alpha + 2\pi k \ge |\ln r|$. That is
$$k(r) \sim |\ln r|/2\pi.$$
Below, $c,c',c''$ are constants not depending on $r$.
By a simple comparison argument, the general term of the series in $(**)$ is bounded below by $c k^{-n-1}$ for all $k\ge k(r)$, and therefore the summation is bounded below by $c' k(r)^{-n}=c''|\ln r|^{-n}$. Now $|\ln r|r \to 0$ as $r\to0$, and therefore, $\liminf_{r\to 0+} A_r = \infty$. |
Functions $ \cos(2x)$, $\sin(2ax)$, $1$ independent and dependent | Hint: set $\alpha\cos(2x)+\beta\sin(2ax)+\gamma=0$ (constant zero function) and try some values of $x$.
Further hint.
For $x=0$ we get $\alpha+\gamma=0$
For $x=\pi/4$ we get $\beta\sin(a\pi/2)+\gamma=0$
For $x=\pi/2$ we get $-\alpha+\beta\sin(a\pi)+\gamma=0$
This creates a linear system with matrix
$$
\begin{bmatrix}
1 & 0 & 1 \\
0 & \sin(a\pi/2) & 1 \\
-1 & \sin(a\pi) & 1
\end{bmatrix}
$$
whose determinant is
$$
2\sin(a\pi/2)-\sin(a\pi)=2\sin(a\pi/2)-2\sin(a\pi/2)\cos(a\pi/2)=
2\sin(a\pi/2)(1-\cos(a\pi/2))
$$
What can you say when this determinant is $\ne0$?
What can you do when it equals $0$? |
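The determinant computation can be double-checked with a computer algebra system (an illustrative SymPy sketch, mine, not part of the hint):

```python
import sympy as sp

a = sp.symbols('a')
M = sp.Matrix([[1, 0, 1],
               [0, sp.sin(a * sp.pi / 2), 1],
               [-1, sp.sin(a * sp.pi), 1]])

det = M.det()
claimed = 2 * sp.sin(a * sp.pi / 2) * (1 - sp.cos(a * sp.pi / 2))

# symbolic identity (uses sin(a*pi) = 2 sin(a*pi/2) cos(a*pi/2))
assert sp.simplify(det - claimed) == 0
# numeric spot checks at a few values of a
for val in (0.3, 1.0, 1.7):
    assert abs(float((det - claimed).subs(a, val))) < 1e-12
```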
Complex differentiability of a function from first principles. | We wish to prove $\lim_{w\to0}\frac{\overline{w}^2}{w}\cos w$ exists and evaluate it. (@mrsamy suggested this approach, but I've changed their $h$ to $w$ to emphasize the limit is in a complex variable.) The $\cos w$ factor is asymptotic to $1$, so we just want $\lim_{w\to0}\frac{\overline{w}^2}{w}$. Since $\left|\frac{\overline{w}^2}{w}\right|=|w|$, the limit is $0$ by the squeeze theorem. |
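A small numeric illustration (mine, not the answer's) of the key identity $\left|\overline{w}^2/w\right| = |w|$, evaluated at arbitrary sample moduli and directions:

```python
import cmath

# |conj(w)^2 / w| = |w|, so the quotient tends to 0 as w -> 0 in any direction
for r in (1e-1, 1e-3, 1e-6):
    for theta in (0.0, 1.0, 2.5, 4.0):
        w = cmath.rect(r, theta)                 # w = r * e^{i*theta}
        q = (w.conjugate() ** 2) / w
        assert abs(abs(q) - r) < 1e-15
print("quotient modulus equals |w| at all sample points")
```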
Solving $ T ( n ) = 8 T \left( \left\lfloor \frac n 2 \right\rfloor \right) + 1 $ using Akra-Bazzi | $$h(n) = \left\lfloor \frac{n}{2}\right\rfloor - \frac{n}{8}\sim\frac{3n}8\notin O\left(\frac{n}{\log^2n}\right)$$ |
SImple first order population growth problem can solve for c but dont know how to get K | We need to find P(t) and be able to use it to find P(12).
From given information we have:
$$\frac{dP}{dt}=k\sqrt{P} \tag1$$
The initial condition could be used to determine the value of $k$ as follows:
$$\frac{dP}{dt}=20=k\sqrt{100}$$
$$k=2$$
From (1) we could separate the variables and integrate as follows:
$$\int{\frac{dP}{\sqrt{P}}}=\int {2dt}$$
$$2 \sqrt{P}=2t+C$$
$$ P=(\frac{2t+C}{2})^2 \tag 2$$
We are given that P(0)=100, so:
$$P(0)=100=(\frac{C}{2})^2$$
Solving for C, we get $$C=20$$
Using (2), we can now write $$P(t)=(t+10)^2$$
After 12 months from the start, we calculate $P(12)$ to be:
$$P(12)=484$$ |
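The closed form can be verified symbolically (an illustrative SymPy check, mine, of the ODE, the initial data, and the final value; the ODE is checked in the squared form $(dP/dt)^2=4P$, valid here since $dP/dt>0$):

```python
import sympy as sp

t = sp.symbols('t', positive=True)
P = (t + 10)**2
dP = sp.diff(P, t)                       # = 2*(t + 10) = 2*sqrt(P) for t >= 0

assert sp.expand(dP**2 - 4 * P) == 0     # squared form of dP/dt = 2*sqrt(P)
assert P.subs(t, 0) == 100 and dP.subs(t, 0) == 20
assert P.subs(t, 12) == 484
```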
Can the covariance of X and Y be larger than the variance of X? | This is a consequence of the famous Cauchy–Schwarz inequality, which is used to prove that correlations are bounded by 1. This implies that $$ |Cov(X,Y)| \leq \sqrt{Var(X)Var(Y)} $$ |
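A concrete instance (mine): with $Y=10X$ one has $Cov(X,Y)=10\,Var(X)>Var(X)$, while the Cauchy–Schwarz bound is respected; the sample values are arbitrary.

```python
# Population covariance/variance of a small sample with Y = 10*X
xs = [1.0, 2.0, 4.0, 7.0, 11.0]
ys = [10 * v for v in xs]

mx = sum(xs) / len(xs)
my = sum(ys) / len(ys)
var_x = sum((v - mx) ** 2 for v in xs) / len(xs)
var_y = sum((v - my) ** 2 for v in ys) / len(ys)
cov = sum((u - mx) * (v - my) for u, v in zip(xs, ys)) / len(xs)

assert cov > var_x                            # covariance exceeds Var(X)
assert cov <= (var_x * var_y) ** 0.5 + 1e-9   # Cauchy–Schwarz bound holds
```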
Does there exist a function with no bound and $\int_{-\infty}^{\infty} |f(x)|dx < \infty$ | There is. Take $$f(x)=\frac{1}{\sqrt{x}}\mathbf{1}_{(0,1]}$$ This function is unbounded, but we have:
\begin{align}
\int_\mathbb{R}|f(x)|\,dx=\int^1_0 \frac{1}{\sqrt{x}}\,dx= 2<\infty
\end{align} |
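Both claims about the example can be confirmed symbolically (an illustrative SymPy check, mine):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
f = 1 / sp.sqrt(x)

# unbounded near 0 ...
assert sp.limit(f, x, 0, '+') == sp.oo
# ... yet the improper integral over (0, 1] is finite
assert sp.integrate(f, (x, 0, 1)) == 2
```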
Why/How does this sqrt term work? The inverse of a fraction in a sqrt | There are two things happening here. One is that $$\sqrt{\frac{a}{b}} = \frac{\sqrt{a}}{\sqrt{b}}$$ for all nonnegative real numbers where $b \neq 0$. Second is that $$\frac{\frac{a}{b}}{\frac{c}{d}} = \frac{a}{b} \cdot \frac{d}{c}$$ for all real numbers where $b,c,d \neq 0$. For your problem, let $a=b=1$, and $c=\sqrt{g},d = \sqrt{l}$. The equality follows. |
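A spot check (mine) of the combined identity $\sqrt{\tfrac{1}{g/l}}=\tfrac{\sqrt l}{\sqrt g}$, with arbitrary stand-in values for $g$ and $l$:

```python
import math

g, l = 9.8, 2.0                       # arbitrary positive values
lhs = math.sqrt(1 / (g / l))          # sqrt of the inverted fraction
rhs = math.sqrt(l) / math.sqrt(g)     # quotient of square roots
assert abs(lhs - rhs) < 1e-12
```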
If $\sum_{k=1}^{\infty}x_k$ is conditionally convergent, prove that $\sum_{k=1}^{\infty}x_k^+ = \infty = \sum_{k=1}^{\infty}x_k^-$. | Notice $x_k^{+} = x_k$ if $x_k \geq 0$, and $x_k^{+}=0$ otherwise. Notice $x_k^{-} = -x_k$ if $x_k \leq 0$, and $x_k^{-}=0$ otherwise.
Put
$$s_n = \sum_{k=1}^{n} |x_k|, \quad \sigma_n = \sum_{k=1}^{n} x_k, \quad \sigma_n^{+} = \sum_{k=1}^{n} x_k^{+}, \quad \sigma_n^{-} = \sum_{k=1}^{n} x_k^{-}.
$$
Note $$\sigma_n^{+} = \frac{1}{2}s_n + \frac{1}{2}\sigma_n.$$ Since $s_n$ diverges and $\sigma_n$ converges, we must have that $\sigma_n^{+}$ diverges. Moreover, as $\sigma_n^{+}$ is increasing, $\sigma_n^{+} \rightarrow \infty$.
Likewise, $\sigma_n^{-} = \frac{1}{2}s_n - \frac{1}{2}\sigma_n$ leads to $\sigma_n^{-} \rightarrow \infty$. |
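An illustration (mine, not part of the proof) with the alternating harmonic series $x_k=(-1)^{k+1}/k$, which converges conditionally to $\ln 2$ while $\sigma_n^{+}$ and $\sigma_n^{-}$ grow without bound; the cutoff $N$ is an arbitrary choice.

```python
import math

N = 200_000
sigma = sigma_plus = sigma_minus = 0.0
for k in range(1, N + 1):
    x = 1.0 / k if k % 2 == 1 else -1.0 / k     # x_k = (-1)^{k+1} / k
    sigma += x
    sigma_plus += max(x, 0.0)                   # x_k^+
    sigma_minus += max(-x, 0.0)                 # x_k^-

assert abs(sigma - math.log(2)) < 1e-4          # conditionally convergent
assert sigma_plus > 5 and sigma_minus > 5       # both parts grow like (1/2) ln N
# sigma_n^+ = (1/2) s_n + (1/2) sigma_n, as in the identity above
assert abs((sigma_plus + sigma_minus) / 2 + sigma / 2 - sigma_plus) < 1e-9
```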