title | upvoted_answer
---|---
Showing an operator is compact | To show that $A$ is bounded, Cauchy-Schwarz works well
$$
\|Af\|^2 = \int_0^1\left|\int_0^x f(t)dt\right|^2dx \\
\le \int_0^1 \int_0^x|f(t)|^2dt\int_0^xdt\,dx \\
= \int_0^1 \int_0^x|f(t)|^2dt xdx \\
\le \int_0^1 \int_0^1 |f(t)|^2dt \, 1 dx = \|f\|^2.
$$
Then apply your theorem using the orthonormal basis $\{ e_n(t)= e^{2\pi int} \}_{n=-\infty}^{\infty}$:
$$
Ae_n = \int_0^x e_n(t)dt = \frac{1}{2\pi in}(e_n(x)-1), \; n\ne 0, \\
\|Ae_n\|\le \frac{1}{\pi|n|},\;\; n\ne 0.
$$ |
Example of Separable Product Space with cardinality greater than continuum? | If you don't assume Hausdorff-ness, you can pretty much do whatever you want. You can take $X_\alpha$ to be all spaces with the trivial topology, and let $A$ be of as great a cardinality as you want - the product will have the trivial topology, and in particular will be separable.
By the way, for the theorem, you have to assume $X_\alpha$ are moreover not singletons (or rather the theorem says that only $\mathfrak{c}$ of them can have more than one point).
EDIT. Here's an example of a product of $T_1$ spaces. For any cardinality $\kappa$, the product of $\kappa$-many infinite countable spaces with the cofinite topology is separable. As bof pointed out in the comments, the set of constant functions (which is countable) is dense. |
Blocks in sequences from {1,...,k} | $00$ is a subsequence of $010$. You seem to be thinking of substring/subword. |
Exclusive disjunction of rectilinear polygons | After asking an expect, this question seem to be common knowledge for people in the comp. geometry field. The bound is quadratic at worst(for n and m edges of the 2 polygons we can have O(n*m) segments, and there is a construction that makes the bound tight) and the example follows:
It is 2 "hair combs" with n and m teeth , and the one is rotated by 90 degrees. |
Example of where the sum of a subspace and its orthogonal complement is not the original vector space? | Presumably, the intended bilinear form on $\mathbb{F}^n$ with respect to which $W^\perp$ is defined is the map $B:\mathbb{F}^n\times\mathbb{F}^n\to\mathbb{F}$ defined by $B(x,y)=\sum_{i=1}^nx_iy_i$, so that
$$W^\perp=\{x\in\mathbb{F}^n:B(x,w)=0\text{ for all }w\in W\}$$
(See the relevant Wikipedia page.)
For a simple counterexample, let $\mathbb{F}$ be the field with two elements, and let $n=2$. I leave it to you to find the subspace $W\subset \mathbb{F}^n$ with the property that $W+W^\perp\neq\mathbb{F}^n$. |
Prove that $U-f(U)$ is an open set. | Let $S=\{x\in X:d(x,x_0)= 1\}$. Note that $f(S)\cap U=\emptyset$ by the assumption on $f$. Therefore
$$
U\setminus f(U)=U\setminus f(U\cup S).
$$
Now $U\cup S$ is closed, so by the closed map lemma, its image is also closed. Can you finish? |
(Updated) How to compute expectation and variance of an argument of a complex random variable? | This seems to be a simple application of the law of the unconscious statistician. That is, suppose $\Xi$ is a random variable with pdf $f(\xi)$. Then we have
$$
E(\arg(\Xi)) = \iint_\mathbb{C} \arg(\xi)f(\xi)\,d\xi
$$
In order to put that in terms that you might be more familiar with: let $X = \operatorname{Re}\{\Xi\}$ and $Y = \operatorname{Im}\{\Xi\}$ so that $\Xi = X + iY$. Similarly, let $x$ and $y$ be real variables and define $\xi = x+iy$. Then, define $f_R(x,y)=f(x+iy)$. Note that $\arg(x+iy)$ may be computed as described here. From there, we may calculate the expectation as a two-dimensional real integral. Namely,
$$
E(\arg(\Xi)) =
E(\arg(X + iY)) =
\iint_{\mathbb{R}^2} \arg(x+iy)f_{R}(x,y)\,dx\,dy\\
= \int_{-\infty}^\infty\int_{-\infty}^\infty
\arg(x+iy)f_{R}(x,y)\,dx\,dy
$$
I hope that's the answer you were looking for.
As for variance, the process is similar. Simply calculate $E(\arg^2(\Xi))-E(\arg(\Xi))^2$, using the above calculation of $E(\arg(\Xi))$ and the same process to calculate $E(\arg^2(\Xi))$.
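As an illustration (not part of the original answer), here is a Monte Carlo sketch for an assumed example distribution, $\Xi = (1+X)+iY$ with $X,Y$ independent standard normals; by the symmetry $Y\mapsto -Y$, the expectation of the argument should be near $0$:

```python
import cmath
import random

# assumed example distribution: Xi = (1 + X) + iY, X, Y iid standard normal
random.seed(0)
reps = 100_000
args = [cmath.phase(complex(1 + random.gauss(0, 1), random.gauss(0, 1)))
        for _ in range(reps)]

mean = sum(args) / reps
var = sum(t * t for t in args) / reps - mean ** 2  # E[arg^2] - E[arg]^2
print(mean, var)  # mean should be near 0 by the Y -> -Y symmetry
assert abs(mean) < 0.05
```

The same two lines estimate both the expectation and the variance described above; any other density $f$ can be substituted by changing how the samples are drawn.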
As for the newly added specifics of your question, I will attempt a solution if I can. |
How do you find the optimal value for this function? | You can use the method of Lagrange multipliers; but the problem is much more simple here.
You have an equation that defines a plane, and you want the vector on the plane that is closest to $\mathbf{0}$. The normal vector to the plane is $(1,1,\ldots,1)$, so you want to find the vector $(x_1,\ldots,x_n)$ in the plane such that $(x_1,\ldots,x_n)+t(1,1,\ldots,1) = (0,0,\ldots,0)$ for some $t$.
Hence we must have $x_1=\cdots=x_n$, and to lie on the plane you must have $nx_1=1$, so the answer is the vector
$(\frac{1}{n},\frac{1}{n},\ldots,\frac{1}{n})$.
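As a numerical sanity check (a sketch added here, not part of the original answer, using $n=5$ as an assumed example), the symmetric point should have the smallest Euclidean norm among points on the plane $x_1+\cdots+x_n=1$:

```python
import random

random.seed(1)
n = 5
center = [1.0 / n] * n  # the claimed minimizer (1/n, ..., 1/n)

def random_feasible():
    # random point shifted along the normal (1,...,1) back onto sum(x) = 1
    x = [random.uniform(-1, 1) for _ in range(n)]
    shift = (1.0 - sum(x)) / n
    return [xi + shift for xi in x]

def norm2(v):
    return sum(t * t for t in v)

# every random feasible point has norm at least that of the symmetric point
assert all(norm2(random_feasible()) >= norm2(center) for _ in range(1000))
print("min norm^2 =", norm2(center))  # mathematically equal to 1/n
```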
(This can also be seen by symmetry, since whatever the answer is, it should be invariant under swapping variables, since they all play symmetric roles). |
Definition of the Limit point compactness. | Ad 1) It refers to limit point in $X$. They do differ. Consider space
$$X=\left\{0\right\}\cup\left\{\frac{1}{n},\ \ n\in\mathbb{N}\right\}$$
and the subset $Y=\left\{\frac{1}{n},\ \ n\in\mathbb{N}\right\}$. Obviously $Y$ has a limit point, but it is not in $Y$. Furthermore, every infinite subset of $X$ has $0$ as a limit point. So the space is limit point compact in Munkres' sense but it is not limit point compact in your hypothetical sense.
Ad 2) Yes, at least under the Axiom of Choice. Let $E$ be an infinite subset of $X$. Since it is infinite, you can pick a sequence $(a_n)\subseteq E$ that is injective as a function (this actually uses the Axiom of Choice).
If $X$ is sequentially compact then $(a_n)$ has a convergent subsequence $(a_{n_k})$. Pick one of the limits (note that the limit is unique if $X$ is $T_2$; we don't assume that here). Since the sequence is injective, the limit must differ from all but at most one element of $(a_{n_k})$: it might equal one element of $(a_{n_k})$, but no more than one, by injectivity. If you remove that problematic element from $(a_{n_k})$, you obtain a sequence whose limit is not a term of the sequence. By definition that limit is a limit point of $E$.
Ad 3) No. Consider the uncountable product of intervals $X=[0, 1]^{\mathcal{C}}$. It is well known that this space is compact but not sequentially compact. In particular there exists a sequence $(a_n)\subseteq X$ with no convergent subsequence. In particular $(a_n)$ treated as a subset of $X$ does not have a limit point in $X$. Otherwise we would be able to pick elements from $(a_n)$ that form a convergent sequence (again: under the Axiom of Choice).
It looks to me like this limit point compactness is really similar to sequential compactness (at least under the Axiom of Choice). Maybe they are even equivalent? |
Prove the identity $\frac{(n-1)!}{n^m}\sum_{q=1}^{\min\{m,n\}}\frac{q}{(n-q)!}\sum_{\sum_{j=1}^qa_j=m-q}\prod_{k=1}^qk^{a_k}=1-(1-\frac{1}{n})^m$ | We seek to verify the identity
$$\frac{(n-1)!}{n^m}
\sum_{q=1}^{\min\{m,n\}} \frac{q}{(n-q)!}
\sum_{\sum_{j=1}^q a_j = m-q}
\prod_{k=1}^q k^{a_k}
= 1 - \left(1-\frac{1}{n}\right)^m$$
where the $a_j$ are non-negative integers. The inner sum is
$$\sum_{\sum_{j=1}^q a_j = m-q}
\prod_{k=1}^q k^{a_k}
= [z^{m-q}]
\prod_{k=1}^q (1+kz+k^2z^2+\cdots)
\\ = [z^{m-q}] \prod_{k=1}^q \frac{1}{1-kz}
= [z^m] \prod_{k=1}^q \frac{z}{1-kz}
= {m\brace q}.$$
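This evaluation of the inner sum, and indeed the whole identity, can be checked by brute force (an illustrative sketch added here; `stirling2` implements the standard recurrence for Stirling numbers of the second kind):

```python
from itertools import product as iproduct
from fractions import Fraction
from math import factorial

def stirling2(m, q):
    # Stirling number of the second kind via S(m,q) = q*S(m-1,q) + S(m-1,q-1)
    if m == q:
        return 1
    if q == 0 or q > m:
        return 0
    return q * stirling2(m - 1, q) + stirling2(m - 1, q - 1)

def inner_sum(m, q):
    # sum over non-negative (a_1,...,a_q) with a_1+...+a_q = m-q of prod k^{a_k}
    total = 0
    for a in iproduct(range(m - q + 1), repeat=q):
        if sum(a) == m - q:
            p = 1
            for k, ak in enumerate(a, start=1):
                p *= k ** ak
            total += p
    return total

# the inner sum is the Stirling number {m brace q}
for m in range(1, 8):
    for q in range(1, m + 1):
        assert inner_sum(m, q) == stirling2(m, q)

# and the whole identity holds exactly (checked in rational arithmetic)
def lhs(n, m):
    s = sum(Fraction(q, factorial(n - q)) * inner_sum(m, q)
            for q in range(1, min(m, n) + 1))
    return Fraction(factorial(n - 1), n ** m) * s

for n in range(1, 5):
    for m in range(1, 6):
        assert lhs(n, m) == 1 - (1 - Fraction(1, n)) ** m
print("identity verified for small n, m")
```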
We then get for the outer sum
$$\sum_{q=1}^{\min\{m,n\}} \frac{q}{(n-q)!} {m\brace q}
\\ = m! [z^m]
\sum_{q=1}^{\min\{m,n\}} \frac{1}{(n-q)! \times (q-1)!}
(\exp(z)-1)^q
\\ = \frac{m!}{(n-1)!} [z^m]
\sum_{q=1}^{\min\{m,n\}} {n-1\choose q-1} (\exp(z)-1)^q.$$
Now the binomial coefficient enforces $q\le n$ through the falling
factorial $(n-1)^{\underline{q-1}}$ and the coefficient extractor
enforces $q\le m$ because $\exp(z)-1 = z +\cdots$, taken together they
enforce $q\le\min\{m,n\}$ and we may write
$$\frac{m!}{(n-1)!} [z^m]
\sum_{q\ge 1} {n-1\choose q-1} (\exp(z)-1)^q
\\ = \frac{m!}{(n-1)!} [z^m] (\exp(z)-1)
\sum_{q\ge 0} {n-1\choose q} (\exp(z)-1)^q
\\ = \frac{m!}{(n-1)!} [z^m] (\exp(z)-1) \exp((n-1)z)
\\ = \frac{m!}{(n-1)!} [z^m] (\exp(nz) - \exp((n-1)z)).$$
Restoring the factor in front of the outer sum we obtain
$$\frac{(n-1)!}{n^m} \times
\frac{1}{(n-1)!} (n^m - (n-1)^m).$$
This is
$$\bbox[5px,border:2px solid #00A000]{
1- \left(1-\frac{1}{n}\right)^m.}$$
as claimed. |
Homeomorphism from the real numbers to the real numbers with restriction to the Cantor set. | Let $U=\mathbb{R}\setminus C$ and let $S$ be the set of connected components of $U$. Then $S$ is a set of disjoint open intervals (in particular, it is necessarily countable) and we can totally order $S$ by saying $s< t$ if every element of $s$ is less than every element of $t$ for $s,t\in S$. Since $C$ is compact and thus bounded, $S$ has a greatest and least element. Moreover, for any $s,t\in S$ with $s<t$, there is $u\in S$ with $s<u<t$. Indeed, if no such $u$ existed, then $C$ would contain the entire (possibly degenerate) closed interval $I$ of numbers that are greater than every element of $s$ and less than every element of $t$. If $I$ is degenerate this would mean $C$ has an isolated point, and if $I$ is nondegenerate this would mean $C$ has nonempty interior.
So, $S$ is a countable densely ordered set with greatest and least elements. By a back-and-forth argument, any two such ordered sets are isomorphic. In particular, if we let $V=\mathbb{R}\setminus K$ and $T$ be the set of connected components of $V$, then there is an order-isomorphism $g:T\to S$. Choosing an order-isomorphism between $t$ and $g(t)$ for each $t\in T$ (since $t$ and $g(t)$ are both just open intervals), we can then obtain an order-isomorphism $h:V\to U$. This order-isomorphism $h$ then extends to the Dedekind completions of $V$ and $U$, which are naturally identified with $\mathbb{R}$ since $V$ and $U$ are each dense in $\mathbb{R}$. That is, $h$ extends to an order-isomorphism $f:\mathbb{R}\to\mathbb{R}$. This $f$ is then a homeomorphism that maps $K$ to $C$, as desired. |
Set theory: A = ⟨1,→⟩, is '→' equivalent to '∞'? | Yes, $\langle 1,\to\rangle$ is another way of writing $\langle 1,\infty\rangle$ (it is what students learn in high school here in Norway, for instance). Also, note that using angle brackets to denote open intervals is somewhat uncommon. It is much more common to see $(1,\infty)$, and to a certain extent $]1,\infty[$. |
Show that this homothety takes $K$ to $M$ | I do not know how rigorous this proof is, but here goes.
Suppose that we start by creating circle $C_1$ and its chord $AB$. Now we find the midpoint of arc $ACB$ and label it $M$. Now we draw an arbitrary line that goes through $M$, $AB$ and circle $C_1$.
The intersections will be $T$, $K$ and $M$, and thus by construction they are collinear.
Now we draw a line through $C_1$ and $T$. Then we draw a line perpendicular to line $C_1T$; by definition, the perpendicular line will be tangent to $\bigcirc \ C_1$ at $T$. Since $\bigcirc C_2$ should be internally tangent to $\bigcirc C_1$ at $T$, the center $C_2$ must necessarily lie on $C_1T$.
Now since $\bigcirc C_2$ must pass through $T$ and $K$, $TK$ will be a chord of $\bigcirc C_2$, and the perpendicular bisector of $TK$ (through its midpoint $C$) will pass through the center of $\bigcirc C_2$. Since we know the center of $\bigcirc C_2$ must be on line $C_1T$, the intersection of the perpendicular bisector of $TK$ and line $C_1T$ will be the center of $\bigcirc C_2$. This raises the question of why $\bigcirc C_2$ should be tangent to $AB$ at $K$.
Notice that a kite is formed by the points $T,C_2,K,G$. Notice too that $C_2T \cong C_2K$, since they are radii of $\bigcirc C_2$, and that $TC\cong KC$, because $C$ is the midpoint of $TK$. Because of these, $\angle C_2KG$ must be a reflection of $\angle C_2TG$. This proves that $K$ is indeed the point of tangency between $\bigcirc C_2$ and $AB$, thus completing the proof that $T, K$ and $M$ are collinear.
Additionally, we can look at an extreme case:
Notice that when the lines of tangency are parallel to the $x$-axis, the center of $\bigcirc C_2$ sits at the origin, and $TK$ is a diameter of $\bigcirc C_2$; then $T,K,M$ are collinear: all the centers and $T,K,M$ are aligned. |
The exact sequence $0 \rightarrow L(-1) \rightarrow L(0)^2 \rightarrow L(1) \rightarrow 0$ | Well, $L(1)$ is generated by the sections $x$ and $y$, so the degree-$1$ map $L(0)^2 \to L(1)$ given by $(a,b)\mapsto ax+by$ is surjective. The kernel is the set of all $(a,b)$ such that $a=fy, b=-fx$ for some $f$; so the degree-$1$ map from $L(-1)$ to $L(0)$ given by $c\mapsto (cy,-cx)$ maps isomorphically to the kernel. I hope that helped... |
Complex Analysis, find all analytic functions | Hint: Note that $|2f(z)| = |f(z)+1 + f(z) - 1|\le |f(z)+1|+|f(z)-1|=4$, so that $|f(z)|\le 2$ for all $z\in \mathbb C$. What can you say about entire and bounded functions? |
Computing the integral $\int \frac{u}{b - au - u^2}\mathrm{d}u$ | You can write $b-au-u^2=\frac{a^2+4b}{4}-(u+\frac a2)^2$
There are thus three cases to consider
A. $a^2+4b=0$
Then your integral is
$$\int\frac{\mathrm{d}u}{b-au-u^2}=-\int\frac{\mathrm{d}u}{(u+\frac a2)^2}=\frac{1}{u+\frac a2}+C$$
B. $a^2+4b>0$
The trinomial $b-au-u^2$ has two real roots $\alpha,\beta$
$$\alpha=\frac{-a+\sqrt{a^2+4b}}{2}$$
$$\beta=\frac{-a-\sqrt{a^2+4b}}{2}$$
Partial fraction decomposition yields
$$\frac{1}{b-au-u^2}=\frac{1}{\beta-\alpha}\frac{1}{u-\alpha}-\frac{1}{\beta-\alpha}\frac{1}{u-\beta}$$
Hence
$$\int\frac{\mathrm{d}u}{b-au-u^2}=\frac{1}{\beta-\alpha}\int \left(\frac{1}{u-\alpha}-\frac{1}{u-\beta}\right)\,\mathrm{d}u=\frac{1}{\beta-\alpha}\ln \left|\frac{u-\alpha}{u-\beta}\right|+C$$
C. $a^2+4b<0$
The trinomial $b-au-u^2$ has two complex roots. Let's apply the change of variable $u=\lambda t$ with $\lambda=\frac12\sqrt{|a^2+4b|}$:
$$\int\frac{\mathrm{d}u}{b-au-u^2}=\int\frac{\mathrm{d}u}{\frac{a^2+4b}{4}-(u+\frac a2)^2}=\int \frac{\lambda\,\mathrm{d}t}{-\lambda^2-(\lambda t+\frac a2)^2}\\=-\frac1\lambda\int \frac{\mathrm{d}t}{1+(t+\frac a{2\lambda})^2}=-\frac1\lambda\mathrm{Arctan}\left(t+\frac{a}{2\lambda}\right)+C\\=-\frac{2}{\sqrt{|a^2+4b|}}\mathrm{Arctan}\left(\frac{2u+a}{\sqrt{|a^2+4b|}}\right)+C$$
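A quick numerical check of this case (an added sketch with assumed sample values $a=1$, $b=-2$, so $a^2+4b=-7<0$): the derivative of the claimed antiderivative should reproduce the integrand.

```python
import math

# assumed sample values: a = 1, b = -2, so a^2 + 4b = -7 < 0 (case C)
a, b = 1.0, -2.0
disc = abs(a * a + 4 * b)  # |a^2 + 4b| = 7

def F(u):
    # claimed antiderivative in case C
    return -(2 / math.sqrt(disc)) * math.atan((2 * u + a) / math.sqrt(disc))

def integrand(u):
    return 1.0 / (b - a * u - u * u)

# central finite differences: F'(u) should match the integrand
h = 1e-6
for u in [-3.0, -1.0, 0.0, 0.5, 2.0]:
    deriv = (F(u + h) - F(u - h)) / (2 * h)
    assert abs(deriv - integrand(u)) < 1e-6
print("F'(u) matches 1/(b - a*u - u^2) at all sample points")
```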
There is another solution in the case $a^2+4b>0$
Let's apply the change of variable $u=\lambda t$ with $\lambda=\frac12\sqrt{a^2+4b}$:
$$\int\frac{\mathrm{d}u}{b-au-u^2}=\int\frac{\mathrm{d}u}{\frac{a^2+4b}{4}-(u+\frac a2)^2}=\int \frac{\lambda\,\mathrm{d}t}{\lambda^2-(\lambda t+\frac a2)^2}\\=\frac1\lambda\int \frac{\mathrm{d}t}{1-(t+\frac a{2\lambda})^2}=\frac1\lambda\mathrm{Argth}\left(t+\frac{a}{2\lambda}\right)+C\\=\frac{2}{\sqrt{a^2+4b}}\mathrm{Argth}\left(\frac{2u+a}{\sqrt{a^2+4b}}\right)+C$$
It's valid if $\left|\frac{2u+a}{\sqrt{a^2+4b}}\right|<1$. Outside of this interval, the integral is instead
$$\frac{2}{\sqrt{a^2+4b}}\mathrm{Argcoth}\left(\frac{2u+a}{\sqrt{a^2+4b}}\right)+C$$
Notice that $\mathrm{Argth} x=\frac12\ln\frac{1+x}{1-x}$, defined for $|x|<1$ whereas $\mathrm{Argcoth} x=\frac12\ln\frac{x+1}{x-1}$, defined for $|x|>1$. Their derivative is the same, $\frac{1}{1-x^2}$. |
How to find the generator of a $\Sigma$ Algebra | As mentioned above you should recap expressions like "Borel set", "$\sigma$-Algebra" and "generator".
If you did, consider the function $f(x) = \frac{1}{a}x$, which is continuous, hence measurable, and satisfies $$aA = f^{-1}(A),$$ so $aA$ is a Borel set because $A$ is. |
Solve to find the unknown | Sofia,Sorry for the delayed response.I was busy with other posts.
you have two choices.One is to use pascals triangle and the other one is to expand using the binimial theorem.
You can compare the expression $$\left ( \frac{1}{x^2} + ax \right )^6$$ with $$(a+x)^6$$ where a = 1/x^2 and x = ax,n=6.Here'e the pascal triangle way of expanding the given expression.All you need to do is to substitute the values of a and x respectively.
$$(a + x)^0 = 1$$
$$(a + x)^1 = a + x$$
$$(a + x)^2 = (a + x)(a + x) = a^2 + 2ax + x^2$$
$$(a + x)^3 = (a + x)^2(a + x) = a^3 + 3a^2x + 3ax^2 + x^3$$
$$(a + x)^4 = (a + x)^3(a + x) = a^4 + 4a^3x + 6a^2x^2 + 4ax^3 + x^4$$
$$(a + x)^5 = (a + x)^4(a + x) = a^5 + 5a^4x + 10a^3x^2 + 10a^2x^3 + 5ax^4 + x^5$$
$$(a + x)^6 = (a + x)^5(a + x) = a^6 + 6a^5x + 15a^4x^2 + 20a^3x^3 + 15a^2x^4 + 6ax^5 + x^6$$
Here's the binomial theorem way of expanding it out.
$$(1+x)^n = 1 + nx + \frac{n(n-1)}{2!}x^2 + \frac{n(n-1)(n-2)}{3!}x^3 + ...$$
using the above theorem you should get
$$a^6x^6 + 6a^5x^3 + 15a^4 + \frac{20a^3}{x^3} + \frac{15a^2}{x^6}+\frac{6a}{x^9}+\frac{1}{x^{12}}$$
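A numerical spot-check of this expansion (an added sketch, evaluating both sides at arbitrary test values):

```python
# evaluate (1/x^2 + a*x)^6 directly and via the claimed expansion
def direct(a, x):
    return (1 / x ** 2 + a * x) ** 6

def expanded(a, x):
    return (a**6 * x**6 + 6 * a**5 * x**3 + 15 * a**4
            + 20 * a**3 / x**3 + 15 * a**2 / x**6
            + 6 * a / x**9 + 1 / x**12)

for a, x in [(2.0, 1.5), (0.5, 3.0), (-1.0, 2.0), (1.0, 1.0)]:
    assert abs(direct(a, x) - expanded(a, x)) <= 1e-9 * max(1.0, abs(direct(a, x)))
print("expansion verified; the term independent of x is 15*a**4")
```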
You can now pick out the constant term and get the desired answer |
The order of the cyclic subgroups of $GL(3,q)$ | Here is a counterexample with $q=11$. The group ${\rm GL}(3,11)$ has a unique conjugacy class of cyclic subgroups of order $133 = 7 \times 19$.
Let $Q$ be elementary abelian of order $11^3$, let $P \cong C_p$ with $p=133$, and let $G = P \ltimes Q$ with the action corresponding to this cyclic subgroup of ${\rm GL}(3,q)$. The action of $P$ on $Q$ is irreducible, so $P$ is maximal in $G$ and hence $P=C_G(P)=N_G(P)$. Clearly $G'=Q$.
So we have to check the condition that all non-normal non-cyclic subgroups are conjugate in $G$. Let $H$ be a non-cyclic subgroup of $G$. Since $G/Q$ is cyclic, $H \cap Q \ne 1$.
But all nontrivial subgroups of $P$ (i.e. $P$ itself and its subgroups of orders $7$ and $19$) act irreducibly on $Q$, so either $Q \le H$ or $H \le Q$. If $Q \le H$ then $H \unlhd G$, so we can assume that $H < Q$. Then $H$ non-cyclic implies that $|H|=q^2$.
Now $Q$ has exactly $q^2+q+1=133$ subgroups of order $q^2$. Since $P$ and its nontrivial subgroups act irreducibly on $Q$, they cannot normalize $H$, so these $133$ subgroups are all conjugate under the action of $P$.
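The count $q^2+q+1$ can be confirmed computationally (an illustrative sketch: the order-$q^2$ subgroups of $Q$ are the $2$-dimensional subspaces of $\mathbb{F}_q^3$, i.e. kernels of nonzero linear functionals up to scalar):

```python
from itertools import product

# formula count: 2-dim subspaces of F_q^3 = (q^3 - 1)/(q - 1) = q^2 + q + 1
def num_planes(q):
    return (q ** 3 - 1) // (q - 1)

assert num_planes(11) == 11 ** 2 + 11 + 1 == 133
assert 133 == 7 * 19  # so a cyclic P of order 133 can act as claimed

# brute-force confirmation for the small case q = 3: enumerate kernels of
# nonzero linear functionals on F_3^3 and count the distinct ones
q = 3
vectors = list(product(range(q), repeat=3))
planes = set()
for f in product(range(q), repeat=3):
    if f == (0, 0, 0):
        continue
    planes.add(frozenset(v for v in vectors
                         if sum(fi * vi for fi, vi in zip(f, v)) % q == 0))
assert len(planes) == num_planes(q)  # 13 = 3^2 + 3 + 1
print(len(planes))
```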
Note that the fact that $Q$ has $q^2+q+1$ subgroups of order $q^2$, which must all be conjugate under the action of $P$, means that $|P|$ must be divisible by $q^2+q+1$.
If $|P| > q^2+q+1$ then by the Orbit-Stabilizer Theorem, some nontrivial element $g$ of $P$ must stabilize (i.e. normalize) some subgroup $R$ of order $q^2$. Now $\langle g,R \rangle$ is a non-normal subgroup of $G$ of order greater than $q^2$, so it cannot be conjugate to the subgroups of order $q^2$, contradicting the hypotheses. So $|P|=q^2+q+1$. |
Question about structure of ideals of a sub-ideal | No, it is not true. The intersection of ideals of $R$ will be an ideal of $R$ (an easy exercise for you) but you may cook up examples where $I'$ is an ideal of $I$ but not an ideal of $R$. It seems that you already have one, since you know taking $J = I'$ does not work in general. That is, actually, precisely when the statement fails! |
Examples of quasigroups with no identity elements | A finite quasigroup is essentially a Latin square used as a "multiplication" table. Consider for $n \gt 2$ a Latin square, and label the rows (resp. columns) with a permutation of the symbols not appearing in any row (resp. in any column). This determines a quasigroup without identity, if the entries of the Latin square are considered the result of the binary operation on the row symbol and column symbol assigned to that entry.
The rows and columns may then be permuted to any common order you please, and for symbol set $\{1,2,3\}$ a specific example of the Latin square with rows and columns in the canonical order would be:
$$ \begin{bmatrix} 1 & 3 & 2 \\ 3 & 2 & 1 \\ 2 & 1 & 3 \end{bmatrix} $$
In this fashion $2 * 3 = 1$, but no element acts as a left (resp. a right) identity. |
If $x,y>0,$ prove: $\quad xΓ(y)+yΓ(x)\geq (x+y)Γ(\frac{x+y}{2})$ | One approach, even if I was not able to finish all the
calculations at the moment.
If we put $x+y=a$, then we have to find
$$\min_{0<x<a}\ x\Gamma(a-x)+(a-x)\Gamma(x),$$
where $a$ is now a constant.
Call $f(x)=x\Gamma(a-x)+(a-x)\Gamma(x), 0<x<a$.
Then it is easy to verify that $f'(a/2)=0$.
Therefore $x=a/2$ is a good candidate for a minimum. From some trials it looks that $f(x)$ is convex even if at the moment I have not been able to prove it. This would imply that the local minimum is unique and global.
Note that this is equivalent to showing that
$$g(x)=x\Gamma(a(1-x))+(1-x)\Gamma(xa), \qquad 0<x<1,$$
is convex, i.e. that $g''(x) \ge 0$ for $0<x<1$, which seems to be the case from some experiments. |
Other ways to evaluate the integral $\int_{-\infty}^{\infty} \frac{1}{1+x^{2}} \, dx$? | Here is a solution using trigonometry. Consider the following situation:
*(figure omitted: a diagram of the points $P_i$, $Q_i$, $R_i$ and the vertex $C$ described below)*
Since triangles $\triangle CP_1P_2$, $\triangle CQ_1Q_2$ and $\triangle CR_1R_2$ are similar with ratio
$$1 \ : \ \frac{1}{\sqrt{1+t^2}} \ : \ \frac{1}{\sqrt{1+(t+\Delta t)^2}},$$
it follows that the area of the wedge $CQ_1R_2$, which equals $\frac{1}{2}\angle Q_1 C R_2$, is bounded between
$$ \frac{\Delta t}{2(1+(t+\Delta t)^2)}
= \mathrm{Area}(\triangle CR_1R_2)
\leq \frac{1}{2}\angle Q_1 C R_2
\leq \mathrm{Area}(\triangle CQ_1Q_2)
= \frac{\Delta t}{2(1+t^2)}. $$
Hence for any $\theta \in (0,\frac{\pi}{2})$ and for any partition $\Pi = \{0 = t_0 < t_1 < \cdots < t_n = \tan\theta \}$ we have
$$ \sum_{i=1}^{n} \frac{\Delta t_i}{1+t_i^2} \leq \theta \leq \sum_{i=1}^{n} \frac{\Delta t_i}{1 + t_{i-1}^2}, \qquad (\Delta t_i = t_i - t_{i-1}). $$
Taking the limit $\|\Pi\|\to 0$, the squeeze lemma gives
$$ \int_{0}^{\tan\theta} \frac{dt}{1+t^2} = \theta. $$
Then taking $\theta \uparrow \frac{\pi}{2}$ proves the desired identity through symmetry. |
Subgroup of $Z_{108}$ of order 9 | I would compute the number $a=\frac{108}9$ and consider the subgroup$$\{0,a,2a,\ldots,8a\}.$$ |
$k$-group endomorphisms of the multiplicative group scheme for $k$ a connected ring. | QiL'8 and Martin Brandenburg have passed judgement on the proof, so in the interest of answering the question, I'm posting this as an answer. It seems the proof is correct. |
Measurability of integral | The function $f$ is a Caratheodory function and must therefore be jointly measurable, so $A=f^{-1}\big((-\infty,0]\big)$ is a measurable subset of $\mathbb{R}^n\times\mathbb{R}^m$. Let $\nu$ be some finite measure on $\mathbb{R}^n$. The Borel-$\sigma$-algebras have the property that $\mathcal{B}(\mathbb{R}^n\times\mathbb{R}^m)=\mathcal{B}(\mathbb{R}^n)\otimes\mathcal{B}(\mathbb{R}^m)$. Let $1_A$ be the indicator function of $A$. By Fubini's theorem, $$\nu\otimes m(A)=\int_{\mathbb{R}^n}\int_{\mathbb{R}^m} 1_A(x,y)~dm(y)~d\nu(x) $$ and the function $x\mapsto\int_{\mathbb{R}^m} 1_A(x,y)~dm(y)$ is measurable. But $\int_{\mathbb{R}^m} 1_A(x,y)~dm(y)=F(x)$.
Of course, a more direct argument could be made using the essential parts of the proof of Fubini's theorem. |
Computing the persistence homology of the sublevel sets of a function | Sometimes the function $f$ is naturally already defined on a simplicial complex on which it is "monotone". This bypasses the need for fitting a grid. More generally, if $f$ has some nice properties such as piecewise-linearity, then you may be able to find a more efficient grid that scales better with dimension. Or, if you could somehow find the critical points of $f$, then there are simpler filtered complexes available. However, if you don't know anything about $f$ a priori, then it's hard to do better than finding a discrete approximation since a function - even if continuous - can be arbitrarily complex.
It's probably too much to ask for equivalent filtered complexes. Instead, if we look at homology and allow for small errors in approximation (measured in the bottleneck distance for persistence diagrams, say), then your second question can be answered by the well-known stability theorem of Cohen-Steiner, Edelsbrunner, and Harer. More precisely, given some bound on the local oscillations of $f$, a fine mesh size translates to a bound on the difference between $f$ and its discrete approximation, which implies that the resulting bottleneck distance between the corresponding diagrams is small. |
Angle bisector comparison | Geometry of a triangle is studied extensively. Let the sides be $a,b,c$ respectively and $l_a$ be the length of angle bisector from vertex $A$ onto the side whose length is $a.$ The Angle Bisector property tells us that $l_a$ bisects the side $BC$ in the proportion $AC:AB =b:c.$ This gives us two equations:
$$\begin{cases}
x+y = a\\
\dfrac{x}{y} = \dfrac{b}{c}
\end{cases}$$
Solving this is easy and it yields:
$$x = \dfrac{ab}{b+c}\quad , y = \dfrac{ac}{b+c}.$$
Finally, Stewart's theorem gives us:
$$a\cdot l_a^2 = \dfrac{b^2ac+c^2ab}{b+c} - \dfrac{a^3bc}{(b+c)^2}$$
or
$$l_a = \sqrt{bc - \dfrac{a^2bc}{(b+c)^2}} = \dfrac{2\sqrt{bcp(p-a)}}{b+c}$$
where $p = \dfrac{a+b+c}{2}$ is the semiperimeter.
With this formula in hand and in the setup of your problem, one can immediately see that:
$$CE^2 - BD^2 = l_c^2 - l_b^2 = ab-\dfrac{c^2ab}{(a+b)^2} - ac+\dfrac{b^2ac}{(a+c)^2} = $$
$$=a(b-c)\left(1 + \dfrac{bc(a^2+b^2+c^2+bc+2ab+2ac)}{(a+b)^2(a+c)^2}\right)\leq 0.$$
Thus, $l_b\geq l_c.$ |
soft question- Kenneth's Brown notation at "Cohomology of finite Groups" | In Proposition 10.4 there's an extra hypothesis that $[G:H]$ is invertible in $M$, which implies that $H^n(G,M)_{(p)}=H^n(G,M)$ for $n>0$ if $H$ is a Sylow $p$-subgroup. |
How to eliminate the repeated case from polynomial counting? | Instead of subtracting out the repeated cases, it's easier to only generate the non-repeated cases in the first place:
$$(1+zx)(1+zx^2)(1+zx^3)(1+zx^4)(1+zx^5)(1+zx^6)\;.$$
Then the coefficient of $z^3x^n$ counts the number of partitions of $n$ into $3$ distinct parts from $1$ to $6$, and the number of ways of getting that sum from the dice is $3!$ times that number, which is the coefficient of $x^n$ in
$$\left.\frac{\partial^3}{\partial z^3}\left((1+zx)(1+zx^2)(1+zx^3)(1+zx^4)(1+zx^5)(1+zx^6)\right)\right|_{z=0}\;.$$ |
Consequence of $a+b=a/b+b/a$? | Hint: If $n,m$ are positive integers, then $\frac n1\ge \frac nm$, with equality exactly if $m=1$.
Thus, if either of $a$ and $b$ is larger than $1$, then we have $$ \frac a1+\frac b1 > \frac ab + \frac ba $$ |
Intuition behind Strassen's theorem | Well, certainly it is a useful result for coupling. Let us define $M_\epsilon:=\mu+\nu$, then $M_\epsilon$ is a coupling of $\Bbb P$ and $\Bbb Q$ by $(1)$, and
$$
M_\epsilon\left(d(x,y) \geq \alpha+\epsilon\right)\leq\beta+\epsilon.
$$
The latter fact is very useful: say, you have two random variables $\xi\sim \Bbb P$ and $\eta\sim \Bbb Q$, and you would like to use $\eta$ to approximate certain properties of $\xi$. For example, $\xi$ is a nasty stochastic process and $\eta$ is its discretization. Suppose that you know that $\Bbb Q(\eta_t \in A \;\forall t\geq T) = 1$ for some attractor set $A$; how can you use this information to argue about the limiting behavior of $\xi$? So far you are only given their distributions, so you want to connect them somehow. In case $d(x,y) = \sup_t |x_t - y_t|$, the result in the OP tells you that
$$
\Bbb P(\xi_t\in A_{\alpha+\epsilon} \; \forall t\geq T)\geq 1 - \beta-\epsilon
$$
where $A_{\alpha+\epsilon}$ is a ball of an obvious radius around $A$, which may be quite useful in applications. Quite similar estimates can be derived using the Wasserstein metric. See the discussion on p.3 here. |
Find all eigenvalues of $L:\mathbb{R}^{3} \rightarrow \mathbb{R}^{3}: x \mapsto x-\langle x, a \rangle b$ | the eigen-value equation is this:
$$
L[x]=x-\langle {x,a} \rangle b=\lambda x
$$
which can be written as:
$$
(1-\lambda)x=\langle {x,a} \rangle b
$$
which has two solutions: either $x\perp a$ and $\lambda=1$,
or $x\propto b$. In the latter case:
$$
x=\alpha b \implies\alpha(1-\lambda)b=\alpha\langle {b,a} \rangle b\implies \lambda=1-\langle {b,a} \rangle=-1
$$ |
Mean squared error of 1/sample mean | No.
If $\overline{X}_n$ is unbiased, say $\mathbb{E}[\overline{X}_n]=g(\theta)$, it is well known that $\frac{1}{\overline{X}_n}$ is biased (this is Jensen's inequality).
To calculate the mean, variance, and MSE of $\frac{1}{\overline{X}_n}=\frac{n}{\sum_i X_i}$ you have to know the distribution of $\sum_i X_i$.
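Not part of the original answer: a quick Monte Carlo sketch for an assumed example, $X_i\sim$ Exponential$(1)$, where $\sum_i X_i$ is Gamma$(n,1)$ and the exact value $E[n/\sum_i X_i]=n/(n-1)$ is known, exhibiting the bias predicted by Jensen's inequality:

```python
import random

# assumed example: X_i ~ Exponential(1), so E[Xbar_n] = 1, while
# sum X_i ~ Gamma(n, 1) gives the exact value E[n / sum X_i] = n/(n - 1)
random.seed(2)
n, reps = 5, 100_000
total = 0.0
for _ in range(reps):
    xbar = sum(random.expovariate(1.0) for _ in range(n)) / n
    total += 1.0 / xbar
est = total / reps
print(est)  # near n/(n-1) = 1.25: strictly above 1/E[Xbar_n] = 1 (Jensen)
assert 1.2 < est < 1.3
```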
Sometimes the mean and variance of $\frac{1}{\overline{X}_n}=\frac{n}{\sum_i X_i}$ can be derived with some work on the score of the model (for example using Bartlett's identities) |
Aside from the obvious stuff, do the partial functions that solve the quadratic equation have any interesting properties? | With Viete's Formulas and some simple algebra, we see that $cx^2 + bx + a = 0$ has roots that are inverses of the roots of $ax^2 + bx + c = 0$, so
$$\left\{f_-(c, b, a), f_+(c, b, a)\right\} = \left\{\frac{1}{f_-(a, b, c)}, \frac{1}{f_+(a, b, c)}\right\}$$
It should be possible to generate a large amount of such relations using similar processes (I'm sure textbooks pertaining to quadratic equations have many of such root-transforming problems).
Another example I've worked out:
By finding an equation whose roots are the squares of the roots of $ax^2 + bx + c = 0$,
$$\left\{f_{\{-,+\}}(a^2, 2ac - b^2, c^2)\right\} = \left\{\left(f_{\{-, +\}}(a, b, c)\right)^2\right\}$$ |
I have been asked this particular number theory question in an interview. | In units of $\frac14$ kg the weights are $32,16,8,4,2$, and $1$. Each combination of weights corresponds to the representation in base two of some number. For example, the combination of $2,4$, and $32$ corresponds to the binary number $100110$. The largest such number is $111111_{\text{two}}$, or $63$ in ordinary decimal notation. Thus, I can weigh any whole number of quarter kilograms from $0$ through $63$. The total is therefore
$$0+1+2+3+\cdots+63=\sum_{k=0}^{63}k=\frac{63(63+1)}2=2016\;,$$
but that’s in units of $\frac14$ kg, so the total in kilograms is $2016\cdot\dfrac14=504$ kg. |
Isomorphism of $S[t^2]$-modules | Your description of the $S[t^2]$-module structure on $S[t^2]\otimes_S It$ is correct. More generally, is $B$ is an $A$-algebra and $M$ is an $A$-module, $B\otimes_A M$ is a $B$-module in the same way.
As a hint for the isomorphism, $S[It,t^2]$ is just those polynomials whose odd-degree coefficients are all in $I$. This is the direct sum of two submodules: one consisting of polynomials with only even-degree terms, and one consisting of polynomials with only odd-degree terms. Identify these submodules with the summands of $S[t^2]\oplus S[t^2]\otimes_S It$. |
Find a value for $c$ such that $f(x)$ is continuous. Am I correct? | In order for the ordinary limit to exist as $x\to2$, the left-side and right-side limits must be equal. Remember that $x\to2^-\implies x\lt2$ and $x\to2^+\implies x\gt2$.
$$\lim\limits_{x\to2^-}(f(x))=\lim\limits_{x\to2^+}(f(x))\implies cx^2-3=cx+2$$
We can apply direct substitution.
$$c(2)^2-3=c(2)+2\implies c=\frac{5}{2}$$
We can verify our solution with the definition of continuity at $x=2$. Let us first determine the value of $f(2)$.
$$f(2)=\frac{5}{2}(2)^2-3=5(2)-3=7$$
Now, following user N. F. Taussig's suggestion in the comments, let us find the left-side and right-side limits as $x\to2$ again, but this time we won't be solving for $c$. Instead, we will be solving for the actual limits.
\begin{align}
\lim\limits_{x\to2^-}(f(x))&=\frac{5}{2}(2)^2-3=5(2)-3=7\\
\lim\limits_{x\to2^+}(f(x))&=\frac{5}{2}(2)+2=5+2=7
\end{align}
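A quick numeric check (an added sketch) that with $c=\frac52$ both branch expressions agree at $x=2$:

```python
# with c = 5/2 both branch formulas agree at x = 2
c = 5 / 2
left = c * 2 ** 2 - 3   # cx^2 - 3 as x -> 2^-
right = c * 2 + 2       # cx + 2 as x -> 2^+
assert left == right == 7.0
print(left, right)  # 7.0 7.0
```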
Everything is in order here; you solved the problem correctly.
$$\lim\limits_{x\to2}(f(x))=f(2)=7$$ |
clarification of notation for partial derivatives | Both the left and the right are commonly used notations for the directional derivative of $f$ in the direction of the vector $\mathbf a$. When $f$ is differentiable, this is, of course, given by dotting the gradient of $f$ with $\mathbf a$. |
Find the Supremum of $B$ | We know that$$\large z_{2n}=1+\frac{1}{2n}\\z_{2n-1}=-1+\frac{1}{2n-1}$$therefore$$\large\sup_{n}\{z_n\}=\max\{\sup_{2n}\{z_{2n}\},\sup_{2n-1}\{z_{2n-1}\}\}=\max\{z_2,-\frac{1}{2}\}=z_2=\frac{3}{2}\\\large\inf_{n}\{z_n\}=\min\{\inf_{2n}\{z_{2n}\},\inf_{2n-1}\{z_{2n-1}\}\}=\min\{1,-1\}=-1$$therefore we have that:$$\forall a\in A,n\to -1<z_n\le \frac{3}{2}\quad ,\quad -2\le a\le1\to {a.z_n<2\\a.z_n\le \frac{3}{2}}$$which leads us to $$\forall b\in B\quad,\quad b<2$$which implies that $l=2$ is an upper bound for $B$. To prove that is least such bound it suffices to prove that for any $\epsilon>0$ there exists a $b\in B$ such that $b>2-\epsilon$. Now take $b=az_n$ for some $n$ and $a\in A$. Since $\inf A=-2$ we can get arbitrarily close to it i.e. $a=-2+\epsilon_1$ when $\epsilon_1>0$ and arbitrary. Therefore:$$b=a.z_n=(-2+\epsilon_1)(-1+\frac{1}{2n-1})=2-\epsilon_1-\frac{2}{2n-1}+\frac{\epsilon_1}{2n-1}$$ By choosing $n>\frac{2}{\epsilon}+\frac{1}{2}$ and $\epsilon_1=\frac{\epsilon}{2}$ we get:$$2-\epsilon_1-\frac{2}{2n-1}+\frac{\epsilon_1}{2n-1}>2-\epsilon_1-\frac{2}{2n-1}>2-\frac{\epsilon}{2}-\frac{\epsilon}{2}=2-{\epsilon}$$which is what we wanted to show. Then:$$\sup B=2$$ |
Show that $\{x\in X\mid \forall$ open set $U, x\in U \implies A\cap U\neq\emptyset\} \subseteq A\cup A'$. | Let $x\in X$ such that $x\in U$ implies $A\cap U\ne\emptyset$, for every open subset $U$ of $X$. To show $x\in A\cup A'$, we assume $x\not\in A$ and show $x\in A'$. Since $x\not\in A$, we know
$$x\in A' \iff x\in\overline{A\setminus\{x\}}=\overline{A}.$$
Thus, in order to conclude $x\in\overline{A}$, we just need to prove that every closed set containing $A$ also contains $x$. To this end, suppose by contradiction that $F$ is a closed set such that $A\subseteq F$ and $x\not\in F$. Then there is an open set $U$ such that $x\in U \subseteq F^c$. But this implies $U\cap A\subseteq U\cap F=\emptyset$, which contradicts our assumption on $x$. Therefore
$$ x\in \bigcap\{F\subseteq X\ \text{closed} \mid A\subseteq F\} = \overline{A}$$
and consequently $x\in A'$ as desired. |
About the relation of rank(AB), rank(A), rank(B) and the zero matrix | As pointed out in the comments your matrix $A$ is the wrong way round... As, for our purposes, is the inequality! (It's right, just not helpful since what we want is $\mathrm{rank}(AB)\ge1$.)
Here's an equivalent way of saying $AB=0$.
The image of $B$ is entirely contained within the kernel of $A$.
Can you see why this can't happen? |
Riemann sum for unbounded functions | Consider a Lebesgue integrable function $f:[0,1]\to\Bbb R$, and modify it on $\Bbb Q$ so that
$$
x\in\Bbb Q\implies f(x) = 0
$$
As $\lambda(\Bbb Q) = 0$, the representation of $f$ in $L^1([0,1])$ does not change.
But the Riemann sums
$$
S_n = \frac1n \sum_{k=1}^nf\left(\frac kn\right)
$$are all null.
A more complex question is:
if for almost every choice of
$$
\frac {k-1}n\le x_{k,n}\le \frac kn
$$the sums
$$
\frac1n \sum_{k=1}^n f(x_{k,n})
$$converge to $I$, then $I$ is the integral of $f$. |
The sum of $n$ consecutive numbers is divisible by the greatest prime factor of $n$. | This is an excellent conjecture. It is not quite true, as it fails for $n=2$. The sum of two consecutive numbers is odd. We can say more. The sum of $n$ consecutive numbers is divisible by $n$ if $n$ is odd and by $\frac n2$ if $n$ is even. This implies the student's conjecture for $n \gt 2$.
To see this, reduce all the numbers $\bmod n$. We will then have one each congruent to $0,1,2,\ldots n-1 \bmod n$. The sum of the numbers from $0$ to $n-1$ is $\frac 12(n-1)n$, which is divisible by $n$ or $\frac n2$ as required. |
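The divisibility claims above are easy to check empirically. Here is a quick brute-force sketch (the helper names `run_sum` and `greatest_prime_factor` are ours), assuming the run of $n$ consecutive integers may start at an arbitrary integer $a$:

```python
def run_sum(a, n):
    """Sum of the n consecutive integers a, a+1, ..., a+n-1."""
    return sum(range(a, a + n))

def greatest_prime_factor(n):
    """Largest prime factor of n >= 2, by trial division."""
    best, p = 1, 2
    while p * p <= n:
        while n % p == 0:
            best, n = p, n // p
        p += 1
    return max(best, n) if n > 1 else best

ok = True
for n in range(2, 60):
    # sum is divisible by n if n is odd, by n/2 if n is even
    divisor = n if n % 2 == 1 else n // 2
    for a in range(-20, 21):
        s = run_sum(a, n)
        ok = ok and s % divisor == 0
        if n > 2:  # the student's conjecture, which fails only at n = 2
            ok = ok and s % greatest_prime_factor(n) == 0
```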
Using calculus to show that $f_n(x)=x^n$ is not Cauchy in $C^0[0,1]$ | Here's one possible solution, which I leave you to complete. What is the maximum value of $f_n-f_{2n}$? |
Conditions for continuity of \min function | For any continuous $\phi$, $\psi$ is continuous. Pick any point $y \in \mathbb{R}$ and $\epsilon < 0$. Consider $\psi(y) = \min_{\xi \in [x_0, y]} \phi(\xi)$. By continuity of $\phi$ at $y$, there's $\delta > 0$ such that if $|z - y| < \delta$, then $|\phi(z) - \phi(y)| < \frac{\epsilon}{2}$. Assume that $z < y$ (the case $z > y$ is similar) Now $|\psi(z) - \psi(y)| = |\min_{\xi \in [x_0, z]} \phi(\xi) -\min_{\xi \in [x_0, y]} \phi(\xi)|$. The minimum value of $\phi$ on $[x_0, y]$ is taken either somewhere in $[x_0, z]$ or in $[z, y]$. In the first case, $\psi(z) = \psi(y)$, in the second case the minimum value of $\phi$ on $[x_0, z]$ is different from the minimum value of $\phi$ on $[x_0, y]$ by not more than $\epsilon$, essentially because $|\phi(u) - \phi(v)| < \epsilon$ for $u, v \in [z, y]$ (I'll leave careful analysis of the possible cases for you). |
Proving that locally integrable functions are embedded in the space of distributions | Let $K$ be a compact in $\Bbb R^n$ and $\psi$ be a test function such that $\psi = 1$ on $K$, $0 \le \psi \le 1$ everywhere and $\psi = 0$ outside a neighborhood of $K$ (such a $\psi$ exists - it's often called a cut-off function. Let me know if you need help in constructing it). Then $f \psi \in L^1$. Let $(\phi_{\epsilon})$ be a mollifier. Then: $\phi_{\epsilon} \star (f\psi) (x) = \int\phi_{\epsilon}(x-y) f(y)\psi(y)dy = 0$ for all $x \in \Bbb R^n$ since $y \mapsto \psi(y)\phi_{\epsilon}(x-y)$ is a test function. Recall that, since $f\psi \in L^1$, $\phi_{\epsilon} \star f\psi \to f\psi$ in $L^1$. It follows that $\int |f\psi| = 0$. So $f\psi = 0$ a.e. on $K$, and since $\psi =1$ on $K$, $f = 0$ a.e. on $K$. Since $\Bbb R^n$ can be exhausted by countably many compacts, it follows that $f = 0$ a.e. on $\Bbb R^n$. |
How to "find" this Lie algebra: proof that $\mathfrak{sl}$ is trace zero matrices | Here's one way to think of things:
For each element of the tangent space at $I \in G$, we can construct a flow, and thus a corresponding one-parameter subgroup $A(t)$.
We can further show that for any Lie group $G \subset GL(n,\Bbb C)$, every one-parameter subgroup has the form $A(t) = e^{tX}$ for some matrix $X = A'(0)$.
Thus, we may conclude that $X \in \mathfrak g \iff e^{tX} \in G$ for all $t \in \Bbb R$, when $G \subset GL(n, \Bbb C)$.
In this particular case, we may state that
$$
X \in \mathfrak{sl}_n \iff e^{tX} \in SL_n \quad \forall t \in \Bbb R
\iff \det(e^{tX}) = 1 \quad \forall t \in \Bbb R
\\ \iff e^{t \cdot \text{tr}(X)} = 1 \quad \forall t \in \Bbb R
\iff \text{tr}(X) = 0
$$
Thus, $\mathfrak{sl}_n$ consists precisely of the traceless matrices.
Note in particular the importance of considering all $t \in \Bbb R$. For example, if we take
$$
X = \pmatrix{2 \pi i &0 \\0&2 \pi i}
$$
we find that $X \notin \mathfrak{sl}_2$, even though $e^X = I \in SL_2$. |
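The identity $\det(e^{X}) = e^{\operatorname{tr}(X)}$ used above is easy to illustrate numerically. Below is a minimal sketch for $2\times2$ matrices using a plain Taylor-series matrix exponential (all helper names and the sample matrices are ours):

```python
import math

def mat_mul(A, B):
    # 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def expm(A, terms=60):
    """Matrix exponential of a 2x2 matrix via the Taylor series sum A^k / k!."""
    result = [[1.0, 0.0], [0.0, 1.0]]  # identity
    term = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        term = mat_mul(term, A)
        term = [[term[i][j] / k for j in range(2)] for i in range(2)]
        result = [[result[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return result

def det(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

X = [[1.0, 2.0], [3.0, -1.0]]   # traceless, so det(e^X) should be 1
Y = [[1.0, 0.0], [0.0, 2.0]]    # tr Y = 3, so det(e^Y) should be e^3
det_eX = det(expm(X))
det_eY = det(expm(Y))
```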
How to evaluate $\int\frac{1}{x^2\ln(x)} dx$ | Let $x=e^t$ to make
$$I=\int \frac {dx}{x^2 \log(x)}=\int \frac{e^{-t}}{t}\,dt=\text{Ei}(-t)+C=\text{Ei}(-\log(x))+C$$ where appears the exponential integral function (and we substituted back $t=\log(x)$).
Floating Numbers in Combinations | The answer could use the gamma function:
$$\binom {2.5} 2=\dfrac{2.5!}{0.5!\times2!}=\dfrac{\Gamma(3.5)}{\Gamma(1.5)2}=\dfrac{2.5\times1.5}2=1.875.$$ |
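This evaluation can be double-checked with Python's `math.gamma` (the helper name `gbinom` is ours):

```python
import math

def gbinom(x, k):
    """Generalized binomial coefficient x!/(k!*(x-k)!) via the gamma function."""
    return math.gamma(x + 1) / (math.gamma(k + 1) * math.gamma(x - k + 1))

# Gamma(3.5) / (Gamma(3) * Gamma(1.5)) = (2.5 * 1.5) / 2 = 1.875
value = gbinom(2.5, 2)
```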
Rule of calculation of residue | The quotient$$\frac{b_0+b_1(z-z_0)+b_2(z-z_0)^2+\cdots}{c_1+c_2(z-z_0)+c_3(z-z_0)^2+\cdots}$$is the quotient of two holomorphic functions and therefore it is holomorphic; besides, it maps $z_0$ into $\frac{b_0}{c_1}$. It can be written, near $z_0$, as $d_0+d_1(z-z_0)+d_2(z-z_0)^2+\cdots$, with $d_0=\frac{b_0}{c_1}$. So, near $z_0$,$$\frac{f(z)}{g(z)}=\frac{d_0}{z-z_0}+d_1+d_2(z-z_0)+\cdots$$Therefore, by the definition of residue,$$\operatorname{Res}\left(\frac fg,z_0\right)=d_0=\frac{b_0}{c_1}.$$ |
Reference request: Category Theory for Theoretical Physics | I would maybe recommend reading this paper, IF, you already have some background information in algebra, or topology, and things like that. https://math.berkeley.edu/~erabin/The%20Categorical%20Language%20of%20Physics.pdf
There is also a book;https://link.springer.com/chapter/10.1007/3-540-53763-5_52.
There is also a post on mathoverflow;https://mathoverflow.net/questions/34861/how-is-category-theory-actually-useful-in-actual-physics
And a whole wikipedia page; https://en.wikipedia.org/wiki/Categorical_quantum_mechanics |
What does $\oplus $ mean in set theory? | From the context, it is most likely symmetric difference. The $\oplus$ symbol is normally used to denote "exclusive or" in (boolean) logic. More usually the $\triangle$ symbol is used to distinguish between set-operation and logic-operation.
$$\begin{align}A\oplus B ~ = ~ A\triangle B = & ~ \Big\{x: (x\in A)\oplus (x\in B)\Big\} \\[1ex] = (A\cap B^\complement)\cup(A^\complement\cap B) = & ~ \Big\{x: \big((x\in A)\wedge(x\notin B)\big)\vee\big((x\notin A)\wedge (x\in B)\big)\Big\}\\[1ex] = ~ (A\cup B)\cap(A\cap B)^\complement = & ~ \Big\{x: \big((x\in A)\vee(x\in B)\big)\wedge\neg\big((x\in A)\wedge (x\in B)\big)\Big\}\end{align}$$ |
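The three equivalent descriptions above can be checked on a small example; Python's `^` operator on sets is exactly the symmetric difference (the sample sets are our choice):

```python
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}

xor = A ^ B                     # A (+) B, symmetric difference
diff_form = (A - B) | (B - A)   # (A ∩ B^c) ∪ (A^c ∩ B)
union_form = (A | B) - (A & B)  # (A ∪ B) minus (A ∩ B)
```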
Area surrounded by a curve | You can consider that area as the sum of two integrals:
$$\int_{C}r ~ dr ~d\theta = \int_{C_1}r ~ dr ~d\theta + \int_{C_2} r ~ dr ~d\theta = 2 \int_{C_1} r ~dr ~d\theta$$
Where $C_1$ is the part of the curve which has a positive $x$ coordinate. For this part of the curve $\theta$ varies in the interval $[-\frac{\pi}{4},\frac{\pi}{4}]$. So the integral that has to be solved is:
$$\int_{-\frac{\pi}{4}}^{\frac{\pi}{4}} d \theta \int_0^{2 \sqrt{\cos(2 \theta)}} r ~dr = \int_{-\frac{\pi}{4}}^{\frac{\pi}{4}} 2\cos(2 \theta) ~ d \theta $$
The result of this integral is:
$$\left. \sin(2\theta) \right|_{- \frac{\pi}{4}}^{\frac{\pi}{4}} = 2$$
So the total area enclosed by the curve is equal to $4$.
For the hyperbola $9x^2 - 4y^2 =36$, identify the vertices, foci, and asymptotes, then graph | Rewrite the given equation as :
$$\frac{x^2}{4}-\frac{y^2}{9}=1$$
Now compare the given equation with standard hyperbola : $$\frac{x^2}{a^2}-\frac{y^2}{b^2}=1$$ (Notice $a<b$)
Now,
Foci : $(\pm ae,0)$ (here $e$ is the eccentricity of the hyperbola, given by $e^2=1+\dfrac{b^2}{a^2}$)
Vertices : $(\pm a,0)$
And the Asymptotes equation is given by : $$y = \pm \frac{b}{a} x$$
You can proceed now. |
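Plugging $a=2$, $b=3$ (from $\frac{x^2}{4}-\frac{y^2}{9}=1$) into the formulas above gives the concrete answer; a small numerical sketch (variable names are ours):

```python
import math

a, b = 2.0, 3.0
e = math.sqrt(1 + b**2 / a**2)        # e^2 = 1 + b^2/a^2 = 13/4
foci = [(a * e, 0.0), (-a * e, 0.0)]  # (±sqrt(13), 0)
vertices = [(a, 0.0), (-a, 0.0)]      # (±2, 0)
slope = b / a                         # asymptotes y = ±(3/2) x
```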
The area of circles tangent inside another circle | Consider the image below:
Let $R$ be the radius of the big circle and $r$ the radius of the small circles.
If we connect the center of the small circles we get an equilateral triangle with side = $2r$. The center of the triangle is the center of the big circle as well.
The distance from the center of the triangle to the midpoint of one of its sides (the apothem) is $h/3$, where $h$ is the height of the triangle. In our case, $h = r\sqrt{3}$ (we can easily calculate this with the Pythagorean theorem).
As we can see in the image, we can now say that $R = r + \frac{2}{3}r\sqrt{3} = r \left(1 + \frac{2\sqrt{3}}{3}\right) \approx 2.1547r$.
$Area_{shaded} = 3\pi r^2 $
$Area_{bigcircle} = \pi R^2 = \pi (2.1547r)^2 \approx 4.6427\pi r^2$
$Area_{unshaded} = Area_{bigcircle} - Area_{shaded} \approx (4.6427 - 3) \pi r^2 \approx 1.6427 \pi r^2 $
Finally, we can check that $2 (Area_{unshaded}) \approx 3.2854 \pi r^2$ is bigger than $Area_{shaded} = 3 \pi r^2$ |
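The numbers above can be reproduced with $r=1$, since every area scales with $r^2$ (variable names are ours):

```python
import math

r = 1.0
R = r * (1 + 2 * math.sqrt(3) / 3)  # ≈ 2.1547 r
shaded = 3 * math.pi * r**2         # three small circles
big = math.pi * R**2                # ≈ 4.6427 π r²
unshaded = big - shaded             # ≈ 1.6427 π r²
```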
What is the meaning of Rank[A | b]? (Linear Algebra) | If the system of equations is written in matrix form, $A\mathbf{x}=\mathbf{b}$, then $A$ is the coefficient matrix and $[A|\mathbf{b}]$ is the so called augmentend matrix: the matrix formed by adding a column to $A$, consisting of the constants $\mathbf{b}$ from the system of equations (the right-hand side). You can check the Wikipedia page for some examples. |
Proof that if the limit exists, then the function is bounded in some neighbourhood | Your proof is correct.
Your generalization is not. In the proof you claim correctly that
we are guaranteed the existence of a deleted neighbourhood
in which the inequalities you state involving $\epsilon$ are true.
That argument applies only to this particular neighborhood, not to all neighborhoods.
If your argument were correct you could use the whole space as a neighborhood and conclude that the function was a bounded function. |
About a Corollary of Yoneda's Lemma | As Qiaochu says in his comment, the existence of the bijections you state is just Yoneda's Lemma (Theorem 6.1 in the book you refer to), regardless of what $F$ is.
But the content of the corollary you refer to (which you haven't stated in full) is that if $F$ is a subfunctor of $\operatorname{Hom}_C(-,X)$ then the bijection sends $f\in F(X)$ to the natural transformation given by composing with $f$. Of course, you're right that $\operatorname{Hom}_C(-,Y)$ need not be a subfunctor of $\operatorname{Hom}_C(-,X)$, and so the second statement isn't strictly a consequence of the first. However, the first statement is true, with exactly the same proof, for $F$ a subfunctor of $\operatorname{Hom}_C(-,Y)$ for any object $Y$, which is probably what the authors of the book meant to say. |
Missing angle problem | Build equilateral triangle $BEC$. Then $\angle EBA = 80^\circ - 60^\circ = 20^\circ = \angle BAD$ and $BE = BC = AD$. It follows by SAS that $\triangle EBA$ is congruent to $\triangle DAB$. Hence $\angle DBA = \angle BAE = 10^\circ$ and finally $\angle BDC = \angle DBA + \angle BAD = 10^\circ + 20^\circ = 30^\circ$. |
How to solve this trigonometric equation $\sin (x)-\cos (x)=0$? | Recall that $\sin(x) = \cos(\frac \pi 2 - x)$. Therefore your equation is:
$$\cos(x) = \cos(\frac \pi 2 - x)$$
And from here we get: $x = \frac \pi 2 - x + 2\pi k \Rightarrow 2x = \frac \pi 2 + 2 \pi k \Rightarrow x = \frac \pi 4 + \pi k$ (of course $k \in \mathbb{Z}$). The other branch of $\cos A = \cos B$, namely $x = -(\frac \pi 2 - x) + 2\pi k$, gives $0 = -\frac \pi 2 + 2\pi k$, which has no integer solution, so no solutions are missed.
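A quick numerical spot check that the family $x = \frac{\pi}{4} + k\pi$ satisfies $\sin x - \cos x = 0$:

```python
import math

# largest |sin x - cos x| over several members of the solution family
worst = max(
    abs(math.sin(math.pi / 4 + k * math.pi) - math.cos(math.pi / 4 + k * math.pi))
    for k in range(-5, 6)
)
```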
The group of roots of unity in an algebraic number field | The degree of $e^{2\pi i/n}$ goes to infinity with $n$. If $K$ had an infinity of roots of unity, it would have elements of arbitrarily high degree, and thus would not be of finite degree over the rationals, and thus would not, in fact, be an algebraic number field. |
Nonnegative partial derivalives a.e. and monotonicity | Let $N$ be the set of nondifferentiability points of $f$. For almost every line segment $L$ parallel to a coordinate axes, the intersection $L\cap N$ has zero 1-dimensional measure. Hence, the derivative of the restriction of $f$ to this segment at a.e. point comes from the derivative of $f$. Hence, the derivative of restriction is nonnegative.
Also, the restriction of $f$ to every line segment is Lipschitz, hence absolutely continuous. An absolutely continuous function $f:[a,b]\to \mathbb R$ with nonnegative derivative satisfies
$$f(x) = f(a)+\int_a^x f'(t)\,dt \ge f(a)$$
(Fundamental Theorem of Calculus applies to absolutely continuous functions.) |
Is there concise notation for a combination of sets? | If I understand correctly, you are given several sorted lists of numbers $X_1,\ldots, X_n$. The lists are all of the same size $m$, the lists come in a specific enumerated order, and the elements within each list are also ordered (numerically).
I don't know of any existing notation for zipping sets like this, but you could potentially define a notation such as:
$$\bigoplus_{\{X_1\ldots X_n\}} \equiv \{\, \langle X_i[k], X_j[k]\rangle\, : 1\leq i < j \leq n,\; 1\leq k\leq m\} $$
where here $X_i$ refers to the $i$th set in your specified order, and $X[k]$ refers to the $k$th element of an ordered list.
Then if you had a family $\mathcal{F}$ of unspecified sets, their zipper would be the set $\bigoplus_{\mathcal{F}}$. |
How many ways to divide a n-element set? | You want the Bell numbers --- link to Wikipedia entry. |
Let $f: [0,10) \to [0,10] $ be a continuous function then which is correct | Let $f(x)=10$ for $x\in [0, 10)$. |
Is the dynamics of spacetime coordinate-dependent? | In your definition of dynamical you require only the existence of a (let's call it) a dynamical-chart. The existence of this chart doesn't rule out the possibility of a non-dynamical-chart but still it is a well defined property to possess a dynamical chart.
For example consider the space-time $(\mathbb{R}^2, g)$ with $g = -x_2 dx_2\otimes dx_2 + dx_1\otimes dx_1 $, this is a non-dynamical chart but the spacetime is dynamical since it admits a dynamical chart, namely the chart induced by $\tilde{x} = x_1 + x_2$ and $\tilde{x_2} = x_2$, indeed $\frac {\partial}{\partial \tilde{x}_1 } = \frac {\partial}{\partial x_1 } - \frac {\partial}{\partial x_2 } $ and
$$g = -\tilde{x_2} d\tilde{x_2}\otimes d\tilde{x_2} + (1-\tilde{x_2})d\tilde{x_1}\otimes d\tilde{x_1} + \tilde{x_2} d\tilde{x_1}\otimes d\tilde{x_2} + \tilde{x_2} d\tilde{x_2}\otimes d\tilde{x_1}.$$
The existence of an asymptotically timelike Killing vector guarantees the existence of a non-dynamical chart.
How to find the normal on a surface of revolution? | Let me change the notation a little. You are given a parametrized arc-length curve
$$\alpha(v)=(f(v),g(v))$$
The surface generated by this curve $\alpha(v)$ when rotating along the $z$-axis can be parametrized as
$$\Phi(u,v)=(f(v)\cos u,f(v)\sin u,g(v))$$
The unit normal vector to the surface at a point $q\in \Phi(U)$, where $U$ is an open subset of $\Bbb{R^2}$ is
$$N(q)=\frac{\Phi_u\times \Phi_v}{|\Phi_u\times \Phi_v|}$$
where $\Phi_u$ and $\Phi_v$ are the partial derivatives of $\Phi(u,v)$.
On the other hand, the coefficients of the first fundamental form are given as $E=\langle\Phi_u,\Phi_u\rangle, \ F=\langle\Phi_u,\Phi_v\rangle,\ G=\langle\Phi_v,\Phi_v\rangle$. Computing these, we obtain
$E=(f(v))^2$, $F=0$ and $G=(f'(v))^2+(g'(v))^2=1$, since $\alpha(v)$ is parametrized by arc-length. Also, it is easy to show that $$|\Phi_u\times \Phi_v|=\sqrt{EG-F^2}$$
which finishes the calculation. |
Finding z in the form $a + bi$ when $z^2 = 1 + i + {58\over{9(3-7i)}}$ | Hint:
$$z^2 =\frac{12}{9}+\frac{16i}{9}=\frac{16}{9}+\frac{16\cdot i}{9}+\frac{4i^2}{9}$$ |
A question on "unlabeled Cayley graphs" | According to Lauri, Josef; Scapellato, Raffaele (2003), Topics in graph automorphisms and reconstruction, the Petersen graph is not a Cayley graph of any group $G$ and generating set $S$. There are only two groups of order $10$, $\mathbb Z/10\mathbb Z$ and $D_5$. For each, you can look at all generating sets for which $|S\cup S^{-1}|=3$ (necessary because the Petersen graph is $3$-regular), and note that all of them would result in a graph with diameter more than $2$, whereas the Petersen graph has diameter $2$. |
Finding the minimal polynomial of $e^{2πi/5}$ over $\mathbb Q$ | Hint:
It is not, because $\mathrm e^{\tfrac{2\pi i}5}\ne 1$. Hence the minimal polynomial is a divisor of
$$\frac{X^5-1}{X-1}=X^4+X^3+X^2+X+1.$$
Can you prove this polynomial is irreducible? |
Translations calculated using matrices | No, translations can't be computed via the same matrix-multiplication process.
The reason is that matrix multiplication will always map the vector $\vec0$ to itself, whereas a translation doesn't (except when the translation vector is $\vec0$ itself).
For all $a,b,c,d$:
$$\pmatrix{a & b \\c & d\\}\pmatrix{0 \\0 \\}=\pmatrix{0 \\0 \\}$$
But $$\pmatrix{0 \\0 \\}+\vec v=\vec v$$ |
If $\int_a^b f(x)\;dx=(b-a)\sup\;f([a,b])$ where $f$ is continuous on $[a,b]$ then $f$ is constant in $[a,b]$ | To begin with, $(b-a) \sup f([a,b])$ is an upper bound on the integral, $\int_a^b f(x)\, dx$. Now suppose that $f$ is not a constant function. Then there is a point $x_0$ for which $$f(x_0) < \sup(f) - \epsilon$$ for some $\epsilon > 0$. Let $\delta > 0$ be such that $f(x) < \sup(f) - \epsilon/2$ for all $|x-x_0| < \delta$. Then $$\int_{x_0 - \delta}^{x_0+\delta} f(x)\, dx < 2\delta \cdot \sup(f) - \delta \epsilon.$$ |
$b \mid ac\Rightarrow b \mid (a,b)(c,b)\,$ for integers $\,a,b,c$ | Lemma $\rm\,\ a\mid bc\Rightarrow a\mid (a,b)(a,c)\ \ $ [my notation swaps $\,\rm a,b\,$ vs yours]
Proof $\ \ \rm \color{#c00}{ad = bc} \ \Rightarrow\ (a,b)\,(a,c)\, =\ (aa,ab,ac,\color{#c00}{bc})\, =\, \color{#c00}a\,(a,b,c,\color{#c00}d)\ \ $ $\small\bf QED$
The OP has $\rm\,(a,b)=1\,\Rightarrow\,(a,b,c,d)=1\,$ so the above is $\rm\, (a,c) = a,\,$ so $\rm\ a\mid c$
The proof used only basic GCD arithmetic (distributive, commutative, associative laws).
Alternatively $\rm\,(a,bc) = (a\,(1,c),bc) = (a,ac,bc) = (a,(a,b)c)\ [\,= (a,c)\ \ if\ \ (a,b) = 1]$
See here for much more on this proof, esp. on how to view it in analogy with integer arithmetic.
Alternatively, if you know the LCM $\cdot$ GCD law $\rm\ [a,b]\, (a,b)\, =\, ab\ $ then, employing this law,
we have $\rm\ \ a,b\mid bc \,\Rightarrow \, [a,b]\mid bc\, \Rightarrow\, ab\mid (a,b)\,bc\, \Rightarrow\, a\mid (a,b)\,c,\ $ so $\rm\,a\,|\,c\ $ if $\rm\ (a,b)= 1.$
This appears to be the proof that Sierpinski has in mind since his prior proof is merely the special case where $\rm\ \ (a,b)= 1,\, $ and it employs the consequent specialization of the above $\ $ LCM $\cdot$ GCD $\ $ law, explicitly that $\rm\ (a,b) = 1\ \Rightarrow\ [a,b] = ab\,$.
For a proof of the LCM $\cdot$ GCD law simpler than Sierpinski's see the one line universal proof of the Theorem here. Not only is this proof simpler but it is also more general - it works in any domain.
Note also that the result that you seek is a special case of the powerful Euler four number theorem (Vierzahlensatz), or Riesz interpolation, or Schreier refinement. For another example of the simplicity of proofs founded upon the fundamental GCD laws (associative, commutative, distributive, and absorptive laws), see this post on the Freshman's Dream $\rm\, (A+B)^n =\, A^n + B^n\ $ for GCDs / Ideals, $\,$ if $\rm\, A+B\ $ is cancellative. It's advantageous to present gcd proofs using these basic laws (vs. the Bezout linear form) since such proofs will generalize better (e.g. to ideal arithmetic) and, moreover, since these laws are so similar to integer arithmetic, we can reuse are well-honed expertise manipulating expressions obeying said well-known arithmetic laws. For examples see said Freshman's Dream post.
See also below (merged for preservation from a deleted question).
Note $\rm\ \ (n,ab)\ =\ (n,nb,ab)\ =\ (n,(n,a)\:b)\ =\ (n,b)\ =\ 1\ $ using prior said GCD laws.
Such exercises are easy using the basic GCD laws that I mentioned in your prior questions, viz. the associative, commutative, distributive and modular law $\rm\:(a,b+c\:a) = (a,b).\,$ In fact, to make such proofs more intuitive one can write $\rm\:gcd(a,b)\:$ as $\rm\:a\dot+ b\:$ and then use familar arithmetic laws, e.g. see this proof of the GCD Freshman's Dream $\rm\:(a\:\dot+\: b)^n =\: a^n\: \dot+\: b^n\:.$
Note $\ $ Also worth emphasis is that not only are proofs using GCD laws more general, they are also more efficient notationally, hence more easily comprehensible. As an example, below is a proof using the GCD laws, followed by a proof using the Bezout identity (from Gerry's answer).
$\begin{eqnarray}
\qquad 1&=& &\rm(a\:,\ \ n)\ &\rm (b\:,\ \ n)&=&\rm\:(ab,\ &\rm n\:(a\:,\ &\rm b\:,\ &\rm n))\ \ =\ \ (ab,n) \\
1&=&\rm &\rm (ar\!\!+\!\!ns)\:&\rm(bt\!\!+\!\!nu)&=&\rm\ \ ab\:(rt)\!\!+\!\!&\rm n\:(aru\!\!+\!\!&\rm bst\!\!+\!\!&\rm nsu)\ \ so\ \ (ab,n)=1
\end{eqnarray}$
Notice how the first proof using GCD laws avoids all the extraneous Bezout variables $\rm\:r,s,t,u\:,\:$ which play no conceptual role but, rather, only serve to obfuscate the true essence of the matter. Further, without such noise obscuring our view, we can immediately see a natural generalization of the GCD-law based proof, namely
$$\rm\ (a,\ b,\ n)\ =\ 1\ \ \Rightarrow\ \ (ab,\:n)\ =\ (a,\ n)\:(b,\ n) $$
This quickly leads to various refinement-based views of unique factorizations, e.g. the Euclid-Euler Four Number Theorem (Vierzahlensatz) or, more generally, Schreier refinement and Riesz interpolation. See also Paul Cohn's excellent 1973 Monthly survey Unique Factorization Domains. |
Proving $\{x \cdot f(x)\} \rightarrow 0$ given $\{x\} \rightarrow 0$ | Since $f$ is bounded (say, $|f(x)| \leq M$) we have the following estimate for all $x$:
$$
|xf(x)| \leq |x|M.
$$
Now since $M$ is a constant, what happens as $x \to 0$? |
If $z^2+z+2=0$, then $z^2 + \frac{4}{z^2} = -3$ | Since $z \neq 0$, you have
$$\begin{equation}\begin{aligned}
& z^2 + z + 2 = 0 \\
& z + 1 + \frac{2}{z} = 0 \\
& z + \frac{2}{z} = -1 \\
& \left(z + \frac{2}{z}\right)^2 = (-1)^2 \\
& z^2 + 4 + \frac{4}{z^2} = 1 \\
& z^2 + \frac{4}{z^2} = -3
\end{aligned}\end{equation}\tag{1}\label{eq1A}$$ |
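Identity \eqref{eq1A} can also be confirmed numerically with the actual roots $z = \frac{-1 \pm i\sqrt 7}{2}$ of $z^2+z+2=0$:

```python
import cmath

# the two complex roots of z^2 + z + 2 = 0
roots = [(-1 + cmath.sqrt(-7)) / 2, (-1 - cmath.sqrt(-7)) / 2]
# z^2 + 4/z^2 should equal -3 at each root
values = [z**2 + 4 / z**2 for z in roots]
```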
Question about computing the homology of an $n$-disk | By definition, $C_k(X)$ is a vector space whose basis is the set of $k$-simplices of $X$. When $X=D^0$, $X$ has a single $0$-simplex and no $k$-simplices for $k\neq 0$. This means that $C_0(D^0)$ is one-dimensional (i.e., isomorphic to $\mathbb{Z}_2$) and $C_k(D^0)$ is trivial if $k\neq 0$.
Now we want to compute $H_k(D^0)$. That means we want to look at the maps $\partial_k:C_k(D^0)\to C_{k-1}(D^0)$ and $\partial_{k+1}:C_{k+1}(D^0)\to C_k(D^0)$ and compute the kernel of $\partial_k$ mod the image of $\partial_{k+1}$. Now the definitions of these maps are pretty complicated, but fortunately we don't have to worry about the definitions in this case because our vector spaces are so trivial. If $k>0$, then $C_k(D^0)$ is trivial. Since $\ker\partial_k$ is a subspace of $C_k(D^0)$, $\ker\partial_k$ is also trivial, and since $H_k(D^0)$ is a quotient of $\ker\partial_k$, $H_k(D^0)$ is also trivial. This proves $H_k(D^0)=0$ for $k\neq 0$.
Now let's consider $k=0$. The map $\partial_0:C_0(D^0)\to C_{-1}(D^0)$ has domain $\mathbb{Z}_2$ and codomain $0$ (we usually don't talk about $C_{-1}(D^0)$; it is always just $0$ because there is no such thing as a $(-1)$-simplex). So $\partial_0$ must send everything to $0$, so its kernel is all of $\mathbb{Z}_2$. On the other hand, the map $\partial_1:C_1(D^0)\to C_0(D^0)$ must be $0$, since its domain is $0$. So the image of $\partial_1$ is trivial. Thus $H_0(D^0)$ is $\mathbb{Z}_2$ mod the trivial subspace, or just $\mathbb{Z}_2$. |
How to prove and what are the necessary hypothesis to prove that $\frac{f(x+te_i)-f(x)}{t}\to\frac{\partial f}{\partial x_i}(x)$ uniformly? | Here there is the answer: the key fact is that $\frac{\partial f}{\partial x_i}$ must be uniformly continuous, which it is if $f\in C^1_c(U)$.
Actually, none of the three posts that you link in your question suggests that $f\in C^2_c(U)$ is necessary, instead they all say what I have reported here. |
Recurrence relation, find general term | Let $b_j(n)$ be the $j$'th digit from the left in the binary expansion of $n$, and
$s(n)$ the sum of the binary digits of $n$. Thus since $6 = 110_2$, $b_1(6) = b_2(6) = 1$, $b_3(6)=0$, and $s(6) = 2$. Then
$A(n) = 2 n - s(n) + 1 - 3 c + (A(1) - 1 + 2 c) b_2(n)$. |
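The binary-digit helpers $b_j(n)$ and $s(n)$ defined above are straightforward to implement (the function names `b` and `s` follow the notation in the answer):

```python
def b(n, j):
    """j-th binary digit of n counted from the left, 1-indexed."""
    return int(bin(n)[2:][j - 1])

def s(n):
    """Sum of the binary digits of n."""
    return bin(n).count("1")
```

For example, since $6 = 110_2$, we get $b_1(6)=b_2(6)=1$, $b_3(6)=0$ and $s(6)=2$, matching the worked example.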
Showing an Isometry between normed space. | I don't understand the last two sentences of the proof. It seems that you made a leap at that point. But you are already half there:
As you noticed, $A$ is an isometry if and only if $\|Ax-Ay\|=\|x-y\|$, for every $x,y\in V$. By the linearity of $A$, this is the case if and only if $\|A(x- y)\|=\|x-y\|$, for every $x,y\in V$. Now you need to set $w=x-y$ and you are done. |
Maximal subgroups of order $p^3$ in finite simple groups | By a theorem of Thompson, if a finite group has a nilpotent maximal subgroup of odd order, it is solvable.
Janko and Deskins have shown that if a finite group $G$ has a maximal Sylow $2$-subgroup of class $\leq 2$, then $G$ is solvable. Proofs can also be found in Endliche Gruppen I by Huppert: Satz 7.4, p. 445.
It is immediate from these results that if a finite simple group has a maximal Sylow $p$-subgroup $P$, then $p = 2$ and $|P| \geq 16$. (Note that $\operatorname{PSL}(2,17)$ has a maximal Sylow $2$-subgroup of order $16$.)
References:
[1] J. Thompson, Finite groups with fixed-point-free automorphisms of prime order.
Proc. Nat. Acad. Sci. U.S.A. 45 (1959) 578–581.
[2] Z. Janko, Finite groups with a nilpotent maximal subgroup. J. Austral. Math. Soc. 4 (1964) 449–451.
[3] W. E. Deskins, A condition for the solvability of a finite group. Illinois J. Math. 5 (1961) 306–313.
[4] J. S. Rose, On finite insoluble groups with nilpotent maximal subgroups. J. Algebra 48 (1977) no. 1, 182–196. |
Let A and B be n*n matrices such that trace(A)<0<trace(B). | The exponent of a matrix is defined by a convergent infinite sum.
Use $\det e^A=e^{\text{Tr }A}$.
http://en.wikipedia.org/wiki/Matrix_exponential |
How to find the antiderivative of this function? | This has no elementary antiderivative. This involves polylogarithms. See here. As TylerHG also pointed out, you may want to check this.
If so inclined, then you could write the denominator $e^x - 1 = -1 + \sum_{n=0}^{\infty} \frac{x^n}{n!} = \sum_{n=1}^{\infty} \frac{x^n}{n!}$ |
eigenvalues and eigenvectors of 2x2 block matrix | Since $A$ and $B$ are symmetrical, we can diagonalize them in the way such that $A=UD_AU^T$ and $B=VD_BV^T$.
Hence,
$\left(\begin{array}{cc} A & 0 \\ 0 & B \end{array} \right)=\left(\begin{array}{cc} UD_AU^T & 0 \\ 0 & VD_BV^T \end{array} \right)=\left(\begin{array}{cc} U & 0 \\ 0 & V \end{array} \right)\left(\begin{array}{cc} D_A & 0 \\ 0 & D_B \end{array} \right)\left(\begin{array}{cc} U & 0 \\ 0 & V \end{array} \right)^T.$ |
Does the Fourier transform of $e^{-\epsilon x}$ have a limit in distributional sense? | Let $1/y$ be the p.v. functional and let all the limits be for $\epsilon \to 0^+$. The distributional limit exists and can be found in any of these ways:
1) using the Sokhotski-Plemelj formula:
$$\frac 1 {\epsilon + i y} =
\frac i {-y + i \epsilon} \xrightarrow {\mathcal D}
-\frac i y + \pi \delta(y);$$
2) using the fact that the limit and the Fourier transform are interchangeable:
$$e^{-\epsilon x} H(x) \xrightarrow {\mathcal D} H(x), \\
(H(x), e^{-i y x}) =
-\frac i y + \pi \delta(y);$$
3) writing the functionals as distributional derivatives of ordinary functions, as you suggest. The logarithm is an integrable function, therefore it just corresponds to a regular functional. It can be proved that the limit and the derivative of $\ln(\epsilon + i y)$ are the same in the ordinary sense and in the distributional sense, giving
$$\left( \frac 1 {\epsilon + i y}, \phi \right) =
(i \ln(\epsilon + i y), \phi') \to \\
(i \ln(i y), \phi') =
\left( i \ln |y| - \frac \pi 2 \operatorname{sgn} y, \phi' \right) = \\
\left( -\frac i y + \pi \delta(y), \phi \right).$$ |
How to find the basis of vector spaces, and their dimensions. | $C^1$ does not mean "differentiable function only once". It means a function has continuous first derivative. See for instance: https://en.wikipedia.org/wiki/Function_space#Functional_analysis.
For the first one, consider the polynomials $p_n(x)=x^n$.
For the second one, do you know how to solve the differential equation?
Consider $p_m(x)=x^{2m}$. |
What does "copies" of a Ring/Module etc. mean | "Copies" does not really stand for a mathematical thing. "Direct sum of $n$ copies of $M$" is just the best way English and several indo-european languages have at their disposal to state concisely that a construction such as $X_1\oplus X_2\oplus\cdots \oplus X_n$ (i.e. "direct sum of $n$ objects") is applied to the case $M=X_1=X_2=\cdots=X_n$. |
Building a partial injective relation | There are only injections from the smaller set to the larger, namely
$$
f: A \to B
$$
There are $4$ choices for the value $f(a)$, $3$ choices for $f(b)$, and $2$ choices for $f(c)$ for a total of $4 \cdot 3 \cdot 2 = 24$ possible injective functions. Each of these gives a distinct maximal injective relation, consisting of the pairs
$$
\big\{ (a, f(a)), (b, f(b)), (c, f(c)) \big\}.
$$ |
Using the Induction Hypothesis in Inequality Proofs | No, your proof is invalid. The problem is that you’re trying to go from $a<b$ and $a+c<b+d$ to $c<d$ by subtracting $a$ from one side and $b$ from the other. This is invalid, as can be seen from the example of $10+2<12+1$. |
Let $\phi$ be Euler's totient function, find all $n$ such that $\phi(n) = \frac{1}{3} n$. | We need $n\prod_{p|n}(1 - \frac 1p) = \frac 13 n$ so
$\prod (1-\frac 1p) = \frac 13$
Now to get $3$ in the denominator we must have $3|n$.
So $(1 - \frac 13)\prod_{p|n; p\ne3}(1-\frac 1p)= \frac 13$ so
$\prod_{p\mid n;\,p\ne 3} (1 - \frac 1p) = \frac 12$.
To have $2$ in the denominator we must have $2|n$ so
$(1 - \frac 13)(1-\frac 12)\prod_{p\mid n;\,p\ne 2;\,p\ne 3} (1- \frac 1p) = \frac 13$ so $\prod_{p\mid n;\,p\ne 2;\,p\ne 3} (1- \frac 1p) =1$ and the only prime factors of $n$ are $2$ and $3$.
So $n$ may be any number of the form $2^a3^b; a\ge 1; b\ge 1$. |
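A brute-force sketch confirming that $\phi(n)=\frac13 n$ exactly for $n = 2^a 3^b$ with $a,b\ge 1$ (i.e. multiples of $6$ with no other prime factors); the helper names are ours:

```python
import math

def phi(n):
    """Euler's totient, computed naively by counting coprime residues."""
    return sum(1 for k in range(1, n + 1) if math.gcd(k, n) == 1)

def only_2_and_3(n):
    """True if n has no prime factor other than 2 and 3."""
    while n % 2 == 0:
        n //= 2
    while n % 3 == 0:
        n //= 3
    return n == 1

solutions = [n for n in range(1, 1000) if 3 * phi(n) == n]
expected = [n for n in range(1, 1000) if n % 6 == 0 and only_2_and_3(n)]
```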
Expressing a function in terms of sinc(t) | Suggestion: since they're different functions, give them different names. Say
$$
U(t) = \sin(t/\Delta) / t \\
S(t) = \sin(t)/t
$$
Now, using substitution, try writing out and simplifying
$$
U(\Delta s)
$$
and comparing that to
$$
S(s).
$$ |
Show that the following matrix is nonsingular | For any subspace $V$ of $\Bbb R^n$, you have a decomposition
$$\Bbb R^n=V\oplus V^\perp$$
In the present case, consider $V$ to be the row space of $A(z)$. Then by definition $V^\perp=\operatorname{Ker}A(z)$.
By construction, the rows of $A(z)$ are a basis of $V$ and the rows of $Z^T$ are a basis of $V^\perp$. Since $\Bbb R^n=V\oplus V^\perp$, it follows that the rows of the full matrix form a basis of $\Bbb R^n$, so that the matrix is non-singular. |
given OGF of some sequence, provide a closed form expression for the sequence. | For someone who may need this.
\begin{equation*}
\begin{split}
d(x) &= (4-2x+x^2)^{-1} \\
&= \frac{1}{(\alpha -x)(\beta -x)}\\
&= \frac{A}{\alpha -x} + \frac{B}{\beta -x}
\end{split}
\end{equation*}
We can determine that:
\begin{equation*}
A(\beta -x) + B(\alpha -x) = 1
\end{equation*}
So, we have:
\begin{equation*}
A = \frac{1}{\beta - \alpha},\\
B = \frac{1}{\alpha - \beta}
\end{equation*}
Thus,
\begin{equation*}
\begin{split}
d(x) &= \frac{1}{\beta - \alpha} (\frac{1}{\alpha - x} - \frac{1}{\beta - x})\\
&= \frac{1}{\beta - \alpha} (\frac{1}{\alpha} \frac{1}{1 - \frac{x}{\alpha}} - \frac{1}{\beta} \frac{1}{1 - \frac{x}{\beta}})
\end{split}
\end{equation*}
Since we know:
\begin{equation*}
\frac{1}{1-cx} = \sum_{n \geq 0} c^n x^n
\end{equation*}
So,
\begin{equation}
[x^n]d(x) = \frac{1}{\beta - \alpha} (\frac{1}{\alpha^{n+1}}- \frac{1}{\beta^{n+1}})
\end{equation}
and we can substitute $\alpha$ and $\beta$ as we know:
\begin{equation*}
\alpha = 1 - \sqrt{3} i , \beta = 1 + \sqrt{3} i
\end{equation*}
And the coefficient formula above can be simplified as follows:
\begin{equation*}
\begin{split}
[x^n]d(x) &= \frac{1}{\beta - \alpha} (\frac{1}{\alpha^{n+1}}- \frac{1}{\beta^{n+1}})\\
&= \frac{1}{2\sqrt{3}i}(\frac{1}{ (1 - \sqrt{3} i)^{n+1}} - \frac{1}{ (1 + \sqrt{3} i)^{n+1}})\\
&= \frac{1}{2\sqrt{3}i} \frac{ (1 + \sqrt{3} i)^{n+1} - (1 - \sqrt{3} i)^{n+1}}{4^{n+1}} \\
&= \frac{1}{2^{n+2} \sqrt{3}i} [(\frac{1}{2} + \frac{\sqrt{3}}{2}i)^{n+1} - (\frac{1}{2} - \frac{\sqrt{3}}{2}i)^{n+1}] \\
&= \frac{1}{2^{n+2} \sqrt{3}i} [(\cos\frac{\pi}{3} + i\sin \frac{\pi}{3} )^{n+1} - (\cos\frac{-\pi}{3} + i\sin \frac{-\pi}{3} )^{n+1}] \\
&= \frac{1}{2^{n+2} \sqrt{3}i} \left\{\exp\left[\frac{(n+1)\pi i}{3}\right] - \exp\left[-\frac{(n+1)\pi i}{3}\right]\right\} \\
&= \frac{1}{2^{n+2} \sqrt{3}i} [(\cos\frac{(n+1)\pi}{3} + i\sin \frac{(n+1)\pi}{3} ) - (\cos\frac{-(n+1)\pi}{3} + i\sin \frac{-(n+1)\pi}{3} )]\\
&= \frac{1}{2^{n+2} \sqrt{3}i} [2i\sin\frac{(n+1)\pi}{3}] \\
&= \frac{\sqrt{3}}{3 \cdot 2^{n+1}} \sin[\frac{(n+1)}{3} \pi]
\end{split}
\end{equation*} |
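The closed form can be checked against the coefficients computed directly from $(4-2x+x^2)d(x)=1$, which forces $4d_0=1$, $4d_1-2d_0=0$, and $4d_n-2d_{n-1}+d_{n-2}=0$ for $n\ge2$ (a sketch):

```python
import math
from fractions import Fraction

# Exact coefficients of d(x) = 1/(4 - 2x + x^2) via the recurrence
d = [Fraction(1, 4), Fraction(1, 8)]
for n in range(2, 20):
    d.append((2 * d[n - 1] - d[n - 2]) / 4)

def closed_form(n):
    # sqrt(3)/(3 * 2^(n+1)) * sin((n+1) * pi / 3)
    return math.sqrt(3) / (3 * 2 ** (n + 1)) * math.sin((n + 1) * math.pi / 3)

for n in range(20):
    assert math.isclose(float(d[n]), closed_form(n), abs_tol=1e-12)
```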
Basic Summation Rules | In the first problem, $\sum_{t=0}^{100-X}X=(100-X+1)X$: there are $100-X+1$ terms because of the $t=0$ term. Fix this, and you’ll get Yuval’s answer.
In the second, notice first that $|i-j|=0$ when $i=j$, so we can remove those terms from the sum. Next, $|i-j|=|j-i|$, so we need only sum the terms in which $i>j$ and then double the total to account for those in which $i<j$. The sum of the terms in which $i>j$ is
$$\frac1{n^2}\sum_{i=2}^n\sum_{j=1}^{i-1}(i-j)\;:\tag{1}$$
the $1/n^2$ is constant and can be pulled out, $i$ must be at least $2$ to leave room for a smaller $j$, and $j$ runs only up to $i-1$, so as to remain less than $i$. (Robert didn’t pull out the constant factor, and he left in the terms in which $i=j$; since they’re $0$, there’s no actual difference.)
Now rewrite the inner sum: as $j$ runs from $1$ to $i-1$, $i-j$ runs from $i-1$ down to $1$, so $$\sum_{j=1}^{i-1}(i-j)=\sum_{k=1}^{i-1}k=\frac12i(i-1)\;.$$ Substitute this into $(1)$ to get $$\frac1{n^2}\sum_{i=2}^n\frac12i(i-1)=\frac1{2n^2}\sum_{i=2}^ni(i-1)=\frac1{2n^2}\sum_{i=1}^{n-1}i(i+1)\;,$$ where the last step is just an index shift. Now break up the sum to get
$$\begin{align*}
\frac1{2n^2}\sum_{i=1}^{n-1}i(i+1)&=\frac1{2n^2}\left(\sum_{i=1}^{n-1}i^2+\sum_{i=1}^{n-1}i\right)\\
&=\frac1{2n^2}\left(\frac16(n-1)n(2n-1)+\frac12n(n-1)\right)\\
&=\frac{n-1}{12n}(2n-1+3)\\
&=\frac{(n-1)(n+1)}{6n}\\
&=\frac{n^2-1}{6n}\;.
\end{align*}$$
Recall that we have to double this to account for the terms with $i<j$, so the actual sum in question is $$\sum_{i=1}^n\sum_{j=1}^n\frac{|i-j|}{n^2}=\frac{n^2-1}{3n}\;.$$ |
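A brute-force check of the final identity in exact arithmetic (a sketch):

```python
from fractions import Fraction

def brute(n):
    # direct evaluation of sum_{i,j} |i - j| / n^2
    return sum(Fraction(abs(i - j), n * n)
               for i in range(1, n + 1) for j in range(1, n + 1))

for n in range(1, 30):
    assert brute(n) == Fraction(n * n - 1, 3 * n)
```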
Solve: $\frac{dy}{dx} = y^2 - \frac{y}{x} - \frac{1}{x^2}$ | Make the substitution $$y(x)=-\frac{v'(x)}{v(x)}$$ the we get
$$x^2v''+xv'-v=0.$$ Trying $$v(x)=x^{\lambda}$$ gives $\lambda(\lambda-1)+\lambda-1=\lambda^2-1=0$, so $\lambda=\pm1$ and
$$v=\frac{C_1}{x}+C_2x$$ and our solution is
$$y(x)=\frac{C_1-C_2x^2}{C_1x+C_2x^3}$$ |
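A numeric spot-check that this family satisfies the original Riccati equation, using a central-difference derivative and arbitrary sample constants (a sketch):

```python
# Check y(x) = (C1 - C2 x^2)/(C1 x + C2 x^3) against y' = y^2 - y/x - 1/x^2
C1, C2 = 2.0, 0.5   # arbitrary sample constants (chosen so y has no pole on the test points)

def y(x):
    return (C1 - C2 * x**2) / (C1 * x + C2 * x**3)

h = 1e-6
for x in (0.5, 1.0, 1.3, 3.7):
    dy = (y(x + h) - y(x - h)) / (2 * h)      # central-difference derivative
    rhs = y(x)**2 - y(x) / x - 1 / x**2
    assert abs(dy - rhs) < 1e-5
```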
Negative functions and their anti-derivatives | No, this does not hold true.
Let $\mathrm{f}(x)=0$ for all $x$. Then $\mathrm{F}(x)=k$, for some constant $k$, are all anti-derivatives of $\mathrm{f}$, i.e. $\mathrm{F}'(x)\equiv \mathrm{f}(x)$. In particular, $\mathrm{F}(x)=1$ is an anti-derivative of $\mathrm{f}$.
We have $\mathrm{f}(x) \le 0$ for all $x$ and yet $\mathrm{F}(x) > 0$ for all $x$. |
Determine principal value | We can write
\begin{align}
i&=0+1i\\
&=\cos\frac{\pi}{2}+i\sin\frac{\pi}{2}\\
&=e^{i\frac{\pi}{2}}\tag{1}
\end{align}
From equation $(1)$,
\begin{align}
(ie^\pi)^i&=(e^\frac{i\pi}{2}e^\pi)^i\\
&=(e^\frac{i^2\pi}{2})e^{i\pi}\\
&=e^\frac{-\pi}{2}(-1)\\
&=\frac{-1}{e^\frac{\pi}{2}}
\end{align} |
bisector theorem , cosine rule or other method? | Let $B=(-1,t)$, $C=(x,y)$.
Then
$$\frac{y}{x-a}=\frac{t}{-1-a}$$
The cosine of angle $AOB$ is
$$\cos AOB=\frac{-a}{a\sqrt{1+t^2}}=\frac{-1}{\sqrt{1+t^2}}$$
Using
$$\cos^2 AOC = \frac{1+\cos AOB}{2}$$
we have
$$\frac{y^2}{x^2}=\tan^2 AOC = \sec^2 AOC - 1=\frac{2}{1-\frac{1}{\sqrt{1+t^2}}}-1=\frac{2\sqrt{1+t^2}}{\sqrt{1+t^2}-1}-1=\frac{\sqrt{1+t^2}+1}{\sqrt{1+t^2}-1}=\frac{\left(\sqrt{1+t^2}+1\right)^2}{t^2}$$
$$\frac{y}{x}=\frac{\sqrt{1+t^2}+1}{t}$$
$$\left(t\frac{y}{x}-1\right)^2=1+t^2$$
$$\left(ty-x\right)^2=\left(1+t^2\right)x^2$$
$$\left(\frac{(1+a)y^2}{x-a}+x\right)^2=\left(1+\frac{(1+a)^2y^2}{(x-a)^2}\right)x^2$$
$$((1+a)y^2+x(x-a))^2=((x-a)^2+(1+a)^2y^2)x^2$$
$$(1+a)^2y^4+2(1+a)x(x-a)y^2=(1+a)^2x^2y^2$$
$$y^2+\frac{2}{1+a}x(x-a)-x^2=0$$
$$\left(\frac{2}{1+a}-1\right)x^2-\frac{2a}{1+a}x+y^2=0$$
$$(1-a)x^2-2ax+(1+a)y^2=0$$
If $0<a<1$, then it is an ellipse.
If $a=1$, then it is a parabola.
If $a>1$, then it is a hyperbola. |
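A numeric check of the final equation, assuming (as the formulas above suggest) that $O$ is the origin, $A=(a,0)$, and $C$ is the point where the bisector of angle $AOB$ meets line $AB$ (a sketch):

```python
import math

def bisector_point(a, t):
    # C = intersection of the bisector of angle AOB with line AB,
    # for O = (0,0), A = (a,0), B = (-1,t)
    B = (-1.0, t)
    nb = math.hypot(*B)
    d = (1.0 + B[0] / nb, B[1] / nb)   # sum of unit vectors along OA, OB
    # solve s*d = A + r*(B - A) for r (Cramer's rule), return the point
    det = d[0] * (-t) + d[1] * (B[0] - a)
    r = -d[1] * a / det
    return (a + r * (B[0] - a), r * t)

for a in (0.5, 1.0, 2.0):              # ellipse, parabola, hyperbola cases
    for t in (0.5, 1.0, 3.0):
        x, y = bisector_point(a, t)
        assert abs((1 - a) * x**2 - 2 * a * x + (1 + a) * y**2) < 1e-9
```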
mapping is bijective if $\lambda=? $ | Since $1+i\lambda\neq 0$, $f$ is bijective iff $g(z)=z+i\lambda\bar z$ is bijective. Let $z=x+iy$. Then,
$$g(x+iy)=(x+\lambda y)+i(y+\lambda x)$$
To see if $g$ is bijective we can work in the vector space $\Bbb R^2$ instead of $\Bbb C$. Now $g$ is a linear application and its matrix is
$$\begin{pmatrix}1&\lambda\\\lambda&1\end{pmatrix}$$
Therefore, $g$ is bijective iff $1-\lambda^2\neq0$. Since $\lambda>0$, $g$ is bijective iff $\lambda\neq 1$. |
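At $\lambda=1$ the failure of injectivity is easy to exhibit concretely (a quick check):

```python
# At λ = 1 the determinant 1 - λ² vanishes and g(z) = z + iλ·conj(z)
# collapses distinct points:
lam = 1.0

def g(z):
    return z + 1j * lam * z.conjugate()

assert g(1 + 0j) == g(0 + 1j) == 1 + 1j   # g(1) = g(i), so g is not injective
```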
Can the material implication ever be used as the main connective within the scope of an existential quantifier? | There is an answer to this question, such that if the question is asking for a specific example, then the answer is providing it. |