title | upvoted_answer
---|---
I want to solve $\cos^3(x)=\frac12$ | $\cos^3x=\dfrac12$
$\cos^3x=\dfrac48$
$\cos x=\sqrt[3]{\dfrac48}$
$\cos x=\dfrac{\sqrt[3]4}2$
$\cos x=\dfrac12\sqrt[3]4$
The solutions are the following :
$x=\pm\arccos\left(\dfrac12\sqrt[3]4\right)+2k\pi\quad$ for all $\;k\in\mathbb{Z}\;.$ |
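A quick numeric check of this value (a small sketch of mine, not part of the original answer):

import math
x = math.acos(0.5 * 4 ** (1 / 3))        # arccos((1/2)*4^(1/3)) ~ 0.65 rad
print(math.cos(x) ** 3)                   # ~ 0.5
print(math.cos(-x + 2 * math.pi) ** 3)    # the other branch gives the same value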
Union of metric spaces | Daniel Wainfleet's definition of $d_3$ needs the assumption that $2$ is an upper bound of $d_1$ and $d_2$, otherwise the function $d_3$ will not be a metric (except for the case when some of the sets $X_1$ and $X_2$ is empty). Indeed, suppose for instance that $d_1(x_1,x_1')>2$ for some $x_1,x_1'\in X_1$, and take some $x_2\in X_2$. Then $d_3(x_1,x_1')>d_3(x_1,x_2)+d_3(x_2,x_1')$.
The assumptions $X_1\cap X_2=\varnothing$, $X_1\ne\varnothing$, $X_2\ne\varnothing$ on the metric spaces $(X_1,d_1)$ and $(X_2,d_2)$ are sufficient for proceeding in the following way. We take some $a_1$ in $X_1$ and some $a_2$ in $X_2$, and we define $d:(X_1\cup X_2)^2\to\mathbb{R}$ by setting
$$d(x_1,x_2)=d(x_2,x_1)=\max(d_1(x_1,a_1),d_2(x_2,a_2),1)$$
when $x_1\in X_1$, $x_2\in X_2$, and $d(x,y)=d_i(x,y)$ when $x,y\in X_i$ ($i=1,2$). Then $d$ is always a metric. To show that $d(x,y)\le d(x,z)+d(z,y)$ for all $x,y,z\in X_1\cup X_2$, we consider separately two cases:
(i) $x$ and $y$ belong to one and same of the sets $X_1$ and $X_2$;
(ii) $x\in X_1$, $y\in X_2$ or $x\in X_2$, $y\in X_1$.
Suppose we have the case (i), and let for instance $x,y\in X_1$. The case $z\in X_1$ is trivial, therefore suppose $z\in X_2$. We have then to show that
$$d_1(x,y)\le\max(d_1(x,a_1),d_2(z,a_2),1)+\max(d_1(y,a_1),d_2(z,a_2),1).$$
and, of course, this inequality follows from the fact that
$$d_1(x,y)\le d_1(x,a_1)+d_1(y,a_1).$$
Suppose now we have the case (ii), and let for instance $x\in X_1$, $y\in X_2$, $z\in X_2$. We have then to show that
$$\max(d_1(x,a_1),d_2(y,a_2),1)\le\max(d_1(x,a_1),d_2(z,a_2),1)+d_2(z,y).$$
Clearly this inequality follows from the inequality
$$d_2(y,a_2)\le d_2(z,a_2)+d_2(z,y)$$
because $d_1(x,a_1)$ and $1$ obviously do not exceed the right-hand side. |
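As a sanity check on this construction, here is a small Python sketch (the two sample spaces, the base points and the helper name are my own choices): it realizes $d$ for two disjoint copies of the real line with the usual absolute-value metric and spot-checks the triangle inequality on random triples.

import itertools, random

# Two disjoint copies of the real line: points tagged (1, x) and (2, x),
# metric |x - y| inside each copy, base points a_1 = (1, 0) and a_2 = (2, 0).
def d(p, q):
    (i, x), (j, y) = p, q
    if i == j:
        return abs(x - y)                 # d_i within X_i
    x1 = x if i == 1 else y               # the point lying in X_1
    x2 = y if j == 2 else x               # the point lying in X_2
    return max(abs(x1), abs(x2), 1)       # max(d_1(x_1, a_1), d_2(x_2, a_2), 1)

random.seed(0)
pts = [(random.choice((1, 2)), random.uniform(-5, 5)) for _ in range(40)]
print(all(d(p, r) <= d(p, q) + d(q, r) + 1e-12
          for p, q, r in itertools.product(pts, repeat=3)))   # True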
Find the order of $A$? | There are $2$ possible choices for $a$ and $3$ for $b$, so the order of $A$ is $2\cdot 3=6$. Recall that the order of the group is just the number of elements. |
Why the formula given to a cyclic R-module C changes to Ra, when R has identity and C is unitary. | By definition, $Ra$ is contained in $C$, so it suffices to show $C\subset Ra$ if $R$ has unity and if $C$ is unitary. But now
$$
ra+na=ra+(a+a+\dots+a);
$$
can you go from here? By definition of $Ra$, you only need to show that the above element is of the form $sa$, for some $s\in R$. |
Derivative of power series : | Note that $m\in (0,1)$ and the geometric series is convergent:
$$f'(1)=\sum_{k\geq 1}{m^k1^{k-1}}=\lim_{n\to \infty}\sum_{k=1}^{n}{m^k}=\lim_{n\to \infty}\frac{m(1-m^n)}{1-m}=\frac{m}{1-m}.$$ |
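A quick numeric check of the closing equality (a sketch; the value of $m$ is an arbitrary choice of mine):

m = 0.3
partial = sum(m ** k for k in range(1, 60))
print(partial, m / (1 - m))    # both ~ 0.428571...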
Peter Winkler's Mathematical Puzzle "Intervals and Distances" | Let $k=4$, then your $X=\frac1{15}([0,1]\cup[2,3]\cup[6,7]\cup[14,15])$ (to abuse notation a bit). What are two points with distance $10/15$? |
identifying the measure $\lambda f^{-1}$ on the interval $[0,1]$ | If you can prove the statement for all open dyadic intervals it would be already very useful.
(I assume that by dyadic interval you mean an interval whose endpoints are of the type $k2^{-n}$
for suitable integers $k,n$.)
If you have
$\lambda f^{-1}(E)=m(E)$
for open dyadic intervals $E$, then
one can show that
$\lambda f^{-1}(E)=m(E)$
also holds for all (non-dyadic or dyadic) open intervals $E\subset [0,1]$.
This can be done by approximating the open intervals by dyadic intervals from inside:
If you have real numbers $a,b\in [0,1]$ with $a<b$, then there exist sequences
$k_n,l_n\in\Bbb N$ such that $x_n:= k_n2^{-n}$ converges from above to $a$
and $y_n:=l_n2^{-n}$ converges from below to $b$.
For large $n$, the sequences $k_n,l_n$ can be chosen such that
$a\leq x_n\leq a+2^{-n} < b-2^{-n} \leq y_n \leq b$ is satisfied.
Since the interval $(x_n,y_n)$ is a dyadic interval, we have
$\lambda f^{-1}((x_n,y_n))=m((x_n,y_n))=y_n-x_n$.
Using the properties of a measure (like continuity from below) it follows that
$\lambda f^{-1}((a,b))=m((a,b))=b-a$
holds for all real numbers $a,b\in [0,1]$.
If two measures are equal on all open intervals, then it is known that
these measures agree on all Borel measurable sets,
see for example this question and its comments and answers
(the fact that you use $[0,1]$ while the question uses $\mathbb R$ does not make a significant difference, the arguments work the same in both cases).
Thus we can conclude that $\lambda f^{-1}$ is just the standard Borel measure
on $[0,1]$. |
Why does $\left\{ \left( \frac{1}{n},\frac{1}{m} \right) : n,m \in Z^+ \right\}$ have Jordan measure $0$? | First: let $K= \{ 1/n : n \in \Bbb{Z}^+\}$. Obviously $A \subset K \times (0,1]$, so it is enough to prove that $K \times (0,1]$ has zero area. This is equivalent to saying that $\forall \varepsilon > 0$ there exist finitely many rectangles $R_i$ covering $K \times (0,1]$ and whose total area is smaller than $\varepsilon$.
Now, fix any $0< \varepsilon <1$, and consider $R_0 = [0, \varepsilon^2] \times [0,1]$ whose area is $\varepsilon^2$. For all integers $1 \le n < \varepsilon^{-2}$ (there are finitely many: precisely $[\varepsilon^{-2}]$ many) consider the rectangle $$R_n=[1/n - \varepsilon^3 , 1/n + \varepsilon^3] \times [0,1]$$
whose area is $2 \varepsilon^3$. The total area of these rectangles is
$$ \varepsilon^2+ [\varepsilon^{-2}]2\varepsilon^3 < \varepsilon^2+ \varepsilon^{-2}2\varepsilon^3 = 2 \varepsilon + \varepsilon^2$$
which is arbitrarily small.
If you note that $A \subset K \times (0,1] \subset \bigcup_{i=0}^{[\varepsilon^{-2}]} R_i$, then it is clear that they have zero Jordan measure. |
$P_n=\{X\subseteq \mathbb{N}:|X|=n\}$ is countable. | Let $(p_i)_{i \ge 0}$ be the sequence of prime numbers with $p_0 = 2$. To each $X = \{x_1, \dots, x_n \} \subseteq \Bbb N$ you may associate a natural number by the formula $X \mapsto p_{x_1} \dots p_{x_n}$. Note that if $p_{x_1} \dots p_{x_n} = p_{y_1} \dots p_{y_s}$ then necessarily $s = n$ and $x_i = y_i$ (modulo a possible reordering). Therefore, the mapping $P_n \to \Bbb N$ described above is injective, so $P_n$ injects into $\Bbb N$ and is thus at most countable.
Let us now see that $P_n$ is not finite. Well, this is easy, because it must contain all the subsets of the form $\{k+1, \dots, k+n\}$ for all $k \in \Bbb N$, and these are infinitely many. Therefore, $P_n$ is infinite.
Since $P_n$ is infinite and at most countable, it follows that it is countable. |
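A small Python sketch of the prime-product encoding (the prime generator and names are mine); it confirms injectivity on all 3-element subsets of $\{0,\dots,9\}$:

import math
from itertools import combinations

def first_primes(n):
    primes, cand = [], 2
    while len(primes) < n:
        if all(cand % p for p in primes):   # trial division
            primes.append(cand)
        cand += 1
    return primes

p = first_primes(10)                         # p[0] = 2, matching p_0 = 2 above
encode = lambda X: math.prod(p[x] for x in X)
codes = [encode(X) for X in combinations(range(10), 3)]
print(len(codes) == len(set(codes)))         # True: no two subsets share a code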
How does the Doomsday argument make any sense? | None of your critiques look to me like problems for the argument.
Yes, the 100th person could have reasoned in that way. This reasoning would, in that scenario, have led to a bad estimate. So? For a probabilistic argument to be a good one, we don't demand that it lead to good estimates regardless of what evidence you happen to have. If a die which I know may or may not have come from a prank novelty company comes up six purely by chance on the first fifty rolls, I will guess that the die is not fair. My reasoning is good even though my conclusion is false.
Yes, every person has different evidence -- especially when we're constraining the evidence in the narrow way that this argument requires. Yes, different evidence leads to different estimates. So?
You ask "How could it possibly tell you how many more are left in $N$, regardless of what you do with the numbers?" The answer is easy if we add the following assumption: $P(n\leq 0.05\cdot N)\leq 0.05$. The Doomsday Argument includes this assumption. Now, we can disagree about whether, in the context of the Doomsday Argument, that assumption is reasonable. (FWIW, it seems reasonable to me.) But what is clear is that if you grant that assumption, then your question ("How could $n$ tell you...") is answered.
The Doomsday Argument need not commit itself to this at all. It would suffice to have only the assumptions that: (a) There exists some number $N$ which is (an upper bound on) the total number of humans who will live; and (b) my own birth number is assigned to me on a uniform distribution (i.e., I could equally likely have had any birth order number $1\leq n \leq N$). Quite possibly you could salvage a version of the DA with even weaker versions of (b).
It seems to me that the crux of your disagreement with the Argument is point 3. You do not accept the crucial assumption that $P(n\leq 0.05 \cdot N)\leq 0.05$. I'm not sure how we should adjudicate whether that statement is true. Maybe, in the end, this is the kind of issue on which the kind of frequentist intuitions that drive all of us (even us committed Bayesians) cannot thrive. But if you do accept that assumption, then it seems difficult to escape the conclusion of the Doomsday Argument. |
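A hedged Monte Carlo illustration of assumption (b) and the resulting 5% figure (the value of $N$ and the number of trials are arbitrary choices of mine):

import random
random.seed(1)
N, trials = 10_000, 200_000                   # hypothetical total number of humans
hits = sum(random.randint(1, N) <= 0.05 * N for _ in range(trials))
print(hits / trials)                          # ~ 0.05: P(n <= 0.05 N) under a uniform birth rank
# Equivalently, the inference "N <= 20 n" fails only about 5% of the time.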
Function Composition Thinking Problem | Let $x$ be the number of people. The cost of renting a room can be represented as a function of the number of people: $f(x) = 975 + 39.95x$.
Let $y$ be the amount of a bill. The cost after a 20% discount can be represented as a function of the amount of a bill: $g(y) = .8y$.
A 20% discount on the cost of renting a room can be represented by a composition of the two functions: $g(f(x)) = .8f(x) = .8(975 + 39.95x) = 780 + 31.96x$, which expresses the discounted bill as a function of the number of people, $x$. |
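A quick numeric check that the two forms of the discounted bill agree (a sketch of mine):

for x in (10, 50, 123):
    assert abs(0.8 * (975 + 39.95 * x) - (780 + 31.96 * x)) < 1e-9
print("g(f(x)) = 780 + 31.96x for the sampled values of x")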
The soundness of modus ponens (?) | Most likely it means the same as what you're used to calling "valid".
The word "sound" is most often applied to entire logics:
A logic is "unsound" (with respect to a particular semantics) iff there exists statements $A, B_1, B_2, \ldots, B_n$ such that the logic can derive $A$ as a consequence of the $B_i$s, and yet the semantics allows a situation where all the $B_i$s hold but $A$ doesn't.
A logic is "sound" if it is not unsound.
The most likely interpretation of "modus ponens is sound" would be that if we take a logic where the only way to conclude anything is by modus ponens (that is, no axioms, no other rules of inference, no nothing), then that logic is sound.
And that is basically the same as what is also commonly expressed as "modus ponens is a valid inference rule".
In philosophy (and the early pages of some textbooks of mathematical logic) one can also call an argument "sound" if it is a valid deduction from premises that happen to be true. But there doesn't seem to be any reasonable way to extend that meaning to apply to a rule of inference in isolation. |
Calculating Distributional derivative. | $ -\int_0^1 \frac{1}{\sqrt{x}} \phi' dx = - \bigg(\frac{1}{\sqrt{x}}\bigg) (\phi - \phi(0))\bigg|_0^1 + \int_0^1 \frac{-1}{2x^{3/2}}(\phi(x) - \phi(0)) dx =\\
=-(\phi(1) - \phi(0)) + \int_0^1 \frac{-1}{2x^{3/2}}(\phi(x) - \phi(0)) dx $ |
How many coin flips to get $k$ heads with probability $1-\delta$? | A foolish arithmetic mistake in one of the steps.
The correct bound is $$N=\frac{{{2\ln\delta^{-1}}}+k+\sqrt{2k\ln\delta^{-1}}}{p}$$
Going back to the analysis, we have:
$$x=\frac{\sqrt{{2\ln\delta^{-1}}}+\sqrt{2\ln\delta^{-1}+4k}}{2}
=\frac{\sqrt{{\ln\delta^{-1}}}+\sqrt{\ln\delta^{-1}+\color{red}{2}k}}{\sqrt2}.$$
Therefore:
$$
x^2/p = \frac{\left(\sqrt{{\ln\delta^{-1}}}+\sqrt{\ln\delta^{-1}+2k}\right)^2}{2p}
\le \frac{{{\ln\delta^{-1}}}+2(\ln\delta^{-1}+\sqrt{2k\ln\delta^{-1}})+{(\ln\delta^{-1}+2k)}}{2p}
= \frac{{{2\ln\delta^{-1}}}+k+\sqrt{2k\ln\delta^{-1}}}{p}.
$$
(The middle step uses $\sqrt{\ln\delta^{-1}(\ln\delta^{-1}+2k)}\le\ln\delta^{-1}+\sqrt{2k\ln\delta^{-1}}$, so the formula quoted at the top is an upper bound for $x^2/p$; taking $N$ equal to it therefore suffices.)
I'm still wondering if there is a way to simplify this analysis. |
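A quick numeric check that the displayed formula really is an upper bound for $x^2/p$ (a sketch of mine; the sample values of $\delta$, $k$, $p$ are arbitrary):

import math

def x_squared_over_p(delta, k, p):
    L = math.log(1 / delta)
    x = (math.sqrt(2 * L) + math.sqrt(2 * L + 4 * k)) / 2
    return x * x / p

def claimed_bound(delta, k, p):
    L = math.log(1 / delta)
    return (2 * L + k + math.sqrt(2 * k * L)) / p

for delta, k, p in [(0.05, 10, 0.5), (0.01, 3, 0.3), (0.1, 100, 0.9)]:
    print(x_squared_over_p(delta, k, p) <= claimed_bound(delta, k, p) + 1e-12)   # True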
How to show $B^2=B$ when $rank(B) + rank(I_d - B) = d$? | $$
\newcommand{\abs}[1]{\left\vert #1 \right\vert}
\newcommand\rme{\mathrm e}
\newcommand\imu{\mathrm i}
\newcommand\diff{\,\mathrm d}
\DeclareMathOperator\sgn{sgn}
\renewcommand \epsilon \varepsilon
\newcommand\trans{^{\mathsf T}}
\newcommand\F {\mathbb F}
\newcommand\Z{\mathbb Z}
\newcommand\R{\Bbb R}
\newcommand \N {\Bbb N}
$$
A proof without the hint.
Suppose we are working on the field $\F$, and we prove this for linear operators $B$ on $V = \F^d$. The assumption suggests
$$
\dim (\ker B) + \dim (\ker (I -B)) = d.
$$
If $v \in \ker B \cap \ker (I-B)$, then $0=Bv = v - Bv$ hence $v = 2Bv = 0$. Therefore $\ker B + \ker(I-B)$ is a direct sum. Combined with the dimensional equation,
$$
V = \ker B \oplus \ker (I -B).
$$
Then $B^2 - B =- B( I -B)$, since for every $v \in V$, $v = w + u$ where $w \in \ker B, u \in \ker (I - B)$, thus
$$
(B^2 - B)v = -(I-B)(Bw) - B ((I-B)u) = 0 + 0 = 0,
$$
and hence $B^2 - B = O$ since $v$ is arbitrary. |
Identify and internal direct sum | Can we define $U'$ and $W'$ in other ways?
Maybe? If you want to do it a harder, less obvious way, I think the burden is on you to explain why one should spend time doing that, though.
If we can why is it smart to define $U'$ and $W'$ like this?
The reason for choosing this way is because it makes verification of the internal direct sum trivial.
Which claims do $U'$ and $W'$ exactly have to claim?
The usual definition of “$V=U’\oplus W’$” is “$U’$ and $W’$ are subspaces such that $U’+W’=V$ and $U’\cap W’=\{0\}$.”
As I understand identify $U'$ with $U$ means that $U$ and $U'$ are isomorphic, right?
Yes.
Do we claim that $U$ and $U'$ are from the same field and that's why we can make a map from $U'$ to $U$?
If you mean that they are vector spaces over the same field, then yes, that is true.
By defining a map how do show that we can identify $U'$ with $U$?
$x\mapsto ( x,0)$ is an isomorphism of $U$ with $U’$.
We have to show that $V$ is internal direct sum of $U'$ and $W'$ then why does we show that $(u,0)+(0,w) \in U +W$ and $x \in U\cap W$?
I guess you mean $U’$ and $W’$ everywhere because $U$ and $W$ aren’t, a priori, subsets of anything that you could add or intersect. But after that correction, you would have to verify these things because that is the usual definition of direct sum.
Some of this explanation could change of course if you mention any background you have left out. I am only answering using the most common interpretation of the limited information given . |
Linearly dependent or independent over $\Bbb{C}$ | Yes, you can use the method. Consider the matrix $\begin{bmatrix}1+i & 1\\ 2i & 1+i\end{bmatrix}$, by writing the given vectors as columns of the matrix. Now, in this case, for finding the rank of the matrix, I can multiply by scalars from $\mathbb{C}$, since we need linear (in)dependence over $\mathbb{C}$.
So, multiplying the first row by $1+i$ and subtracting from the second row gives the matrix $\begin{bmatrix}1+i & 1\\ 0 & 0\end{bmatrix}$ and hence the rank is $1$.
Thus the given vectors are linearly dependent. |
Prove that $\lim_{x\rightarrow a}(1+u(x))^{v(x)}=e^{\lim_{x\rightarrow a}u(x)\cdot v(x)}$ if $u(x)\rightarrow 0$ and $v(x)\rightarrow \infty$ | Write
$$
(1+u(x))^{v(x)}=\left[(1+u(x))^{1/u(x)}\right]^{u(x)v(x)}.
$$
Then,
\begin{gather}
\lim_{x\to a}(1+u(x))^{1/u(x)}=\lim_{z\to 0}(1+z)^{1/z}=e,\\\lim_{x\to a}u(x)v(x)\text{ exists (the question assumes this implicitly)}
\end{gather}
and the result you seek follows. |
Collatz sequences meet at a point | This is what I'd do. In case of many repeated queries, preparing a table "first term of Collatz sequence below starting value $n$" for $n=2,3,\ldots, 10^6$ may speed up things.
Assume $b>a$. Compute the Collatz sequence starting with $b$ until the first time the sequence reaches some $b'<b$.
If $b'>a$, scratch the $b$-sequence, let $b\leftarrow b'$, and create the $b$-sequence again up to a new $b'<b$.
If $b'=a$, the answer is $a$, and we are done.
Otherwise, compute the sequence starting at $a$ until you reach $a'<a$.
If $b'=a'$, follow both sequences back step by step until you reach the point they meet, and we are done.
Otherwise, let $b=\max\{a',b'\}$, $a=\min\{a',b'\}$ and restart.
Then again, the maximal sequence length with starting values $\le 10^6$ is 524, so simply computing the two sequences to the end and looking for the meeting point from their ends cannot take too long ... |
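A minimal Python sketch of the simpler fallback mentioned in the last paragraph (compute both sequences completely and take the first common value); the function names are mine:

def collatz(n):
    seq = [n]
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        seq.append(n)
    return seq

def meeting_point(a, b):
    seen = set(collatz(b))
    return next(x for x in collatz(a) if x in seen)   # first value of a's sequence also hit by b's

print(meeting_point(7, 9))   # 7: the sequence 9, 28, 14, 7, ... reaches 7 and then follows it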
Finding the model matrix, $P$, knowing eigenvectors and eigenvalues | The simpler way is to note that the eigenvectors of a matrix $A$ are the basis in which the transformation is represented by a diagonal matrix $D$ with the eigenvalues as diagonal elements. And if
$$
P^{-1}AP=D
$$
then
$$
A=PDP^{-1}
$$
For the diagonal form
$$D=\begin{pmatrix}
\sqrt{2} & 0\\
0 & 3
\end{pmatrix}$$
the ''basis'' matrix (with, as columns, the corresponding eigenvectors) is
$$
P=\begin{pmatrix}
-1 & 3/2\\
1 & -1
\end{pmatrix}$$
so we can easily find $P^{-1}$ and calculate $A=PDP^{-1}$ |
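A quick numpy check of this recipe (a sketch of mine): build $A=PDP^{-1}$ from the matrices above and confirm that its eigenvalues are $\sqrt2$ and $3$ and that the first column of $P$ is an eigenvector.

import numpy as np

P = np.array([[-1.0, 1.5], [1.0, -1.0]])
D = np.diag([np.sqrt(2), 3.0])
A = P @ D @ np.linalg.inv(P)
print(np.linalg.eigvals(A))        # ~ [1.4142..., 3.0] (in some order)
print(A @ P[:, 0] / P[:, 0])       # both components ~ sqrt(2)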
Is the factor group abelian if subgroup is normal? | The first one is false. Take $G=S_3$ and $N=1$.
The second one is true because $(G:H)=2$ implies that $H$ is normal and that $G/H$ has order $2$. Every group of order $2$ is cyclic and so abelian. |
A General GRE Quantitative Problem: triangle inside circle | Hint: Note that the radius is $r = OA = OB$, so $\angle A = \angle B$. But then since angles in a triangle must add up to $180^\circ$, it follows that $\triangle OAB$ is equilateral. So we can easily get the radius from the perimeter, which can be used to get the circumference. |
Exponential polynomials, Stirling and Bernoulli numbers | Using the exponential generating function for these polynomials:
$$\sum_{n=0}^{\infty}P_n(x)\frac{t^n}{n!}=\exp\big(x(e^t-1)\big),$$
we obtain (for $|y|$, $|z|$ small enough)
$$\sum_{n,m=1}^{\infty}\frac{y^n}{n!}\frac{z^m}{m!}\int_{-\infty}^0 P_n(x)P_m(x)\frac{e^{2x}}{x}\,dx=\int_{-\infty}^0\frac{(e^{xe^y}-e^x)(e^{xe^z}-e^x)}{x}\,dx\\=\ln\frac{2(e^y+e^z)}{(e^y+1)(e^z+1)}=-\ln\frac{\cosh(y/2)\cosh(z/2)}{\cosh\big((z-y)/2\big)}$$
(this is a Frullani-type integral). The power series for $\color{DarkBlue}{\ln\cosh}$ is known from the one of $\color{DarkBlue}{\tanh}$:
$$\ln\cosh(w/2)=\sum_{k=2}^{\infty}L_k\frac{w^k}{k!},\qquad L_k=(2^k-1)B_k/k$$
(we keep in mind that $L_k=0$ for odd $k$). Hence,
$$-\ln\frac{\cosh(y/2)\cosh(z/2)}{\cosh\big((z-y)/2\big)}=\sum_{k=2}^{\infty}\frac{L_k}{k!}\sum_{n=1}^{k-1}(-1)^n\binom{k}{n}y^n z^{k-n}\\=\sum_{k=2}^{\infty}\sum_{n=1}^{k-1}(-1)^{n}L_k\frac{y^n}{n!}\frac{z^{k-n}}{(k-n)!}=\sum_{n,m=1}^{\infty}(-1)^{n}L_{n+m}\frac{y^n}{n!}\frac{z^m}{m!}.$$
(As a check, for $n=m=1$: $P_1(x)=x$, so the integral is $\int_{-\infty}^0 xe^{2x}\,dx=-\tfrac14$, which matches $(-1)^1L_2=-\tfrac14$.)
The result follows by comparison. |
Convergence of maximum of iid random variables in distribution | Note that
$$ \begin{align}
P\left(\frac{\max_{1\leq i\leq n}|X_i|}{\sqrt n} > \epsilon \right)&=
1-P\left(\max_{1\leq i\leq n}|X_i| \leq \epsilon \sqrt n \right)\\
&= 1-P\left(|X_1| \leq \epsilon \sqrt n \right)^n \\
&= 1-P\left(|X_1|^2 \leq \epsilon ^2 n \right)^n
\end{align}$$
For any integrable $X$, $\lim_{x\to \infty} xP(X> x )=0$. Indeed $$\begin{align} xP(X> x ) = x\int \mathbb 1_{X> x}(w) dP(w) &\leq x\int \mathbb 1_{X> x}(w)\frac{X(w)}{x} dP(w)\\
&=\int \mathbb 1_{X> x}(w) X(w) dP(w) \end{align}$$
and $\displaystyle \lim_{x\to \infty} \int \mathbb 1_{X> x}(w) X(w) dP(w)=0$ by dominated convergence.
Here, this implies $P\left(|X_1|^2 > \epsilon ^2 n \right)=o\left( \frac{1}{n}\right)$, hence
$$ \begin{align}
P\left(\frac{\max_{1\leq i\leq n}|X_i|}{\sqrt n} > \epsilon \right)&=
1-\left(1+o\left( \frac{1}{n}\right) \right)^n\\
&= 1-\exp\left(n\ln \left(1+o\left( \frac{1}{n}\right) \right) \right)\\
&= 1-\exp\left(o\left( 1\right) \right)\\
&= o\left(1\right)
\end{align}$$
This proves convergence in probability to $0$, hence convergence in distribution to $0$. |
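A small Monte Carlo illustration (my own sketch; it takes square-integrable $X_i$, here standard normals, which is what the argument above uses):

import numpy as np

rng = np.random.default_rng(0)
for n in (10**2, 10**4, 10**6):
    X = rng.standard_normal(n)
    print(n, np.abs(X).max() / np.sqrt(n))   # shrinks towards 0 as n grows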
What words do you use to describe a bad proof? | Here are several adjectives that I have heard used to describe proofs:
Wrong, false, flawed: there are serious problems with the logic used.
Incomplete, weak, partial: fails to completely address the statement being proved.
Hand wavy, outline, sketch, reductive, imprecise: the proof is heuristic, giving ideas and motivations without being rigorous.
Clunky, confusing: the proof is correct and rigorous, but poorly structured.
Terse: the proof is so short so as to be confusing.
Wordy, long winded: the proof is longer or more complicated than it needs to be.
Heavy-handed, overpowered: the proof uses techniques or results which are far more powerful or advanced than need be used.
Inelegant: the proof uses simple but time consuming techniques, when better ways exist.
Technical: the proof uses long, possibly boring arguments for which there is no reasonable alternative.
There are positive ones too.
Clear: the proof is very easy to understand.
Concise: the proof is short by eliminating unnecessary phraseology.
Elementary: the proof does not use advanced techniques, especially when more advanced proofs exist.
Clever, interesting: the proof uses ideas that are not obvious, or may even be novel.
Beautiful: a proof that is very clever, clear, and well written so that it elicits a positive emotional response.
Cute: a simple, clever proof that is not quite beautiful.
Elegant: the highest praise a proof can receive, such a proof is beautiful, and relies on a simple but not obvious idea. |
Proof that $\max(x_1,x_2)$ is continuous | If $x_2>x_1$ then $\max(x_1,x_2)=x_2$ and then it is continuous in that region. Similar analysis in the region $x_1>x_2$.
If $x_2=x_1$ then note that $|\max(y_1,y_2)-\max(x_1,x_2)|\le \max(|y_1-x_1|,|y_2-x_2|)$.
Given $\varepsilon>0$, choose $\delta=\varepsilon$. If $||(y_1,y_2)-(x_1,x_2) ||<\delta$ then we have
$$
|\max(y_1,y_2)-\max(x_1,x_2)|\le \max(|y_1-x_1|,|y_2-x_2|)\le \sqrt{|y_1-x_1|^2+|y_2-x_2|^2}<\delta =\varepsilon.
$$ |
Calculus: Prove function is increasing | Your reasoning is correct. In both cases, you manage to find an interval away from $c$ on which to apply the mean value theorem (MVT).
That said, there isn't a reason to avoid $c$ being an endpoint of the interval on which MVT is applied. The statement of MVT involves derivative at some interior point of the interval. As long as you have control of $f'$ at the interior points, you can go ahead with the MVT. |
A key doubt about the resultant of two polynomials and their common zero locus | Let's write down the claims:
A) The line $l=\overline{pq}$ meets $X$.
B) Every pair of homogeneous $F,G\in I(X)$ has a common zero on $l$
C) $Res_{x_n}(F,G)$ vanishes at $q$ for all homogeneous pairs $F,G\in I(X)$.
You state that you understand that C) is equivalent to an intermediate result but aren't sure about why C) should be equivalent to A). We'll go through the steps carefully.
First, we'll establish that A) and B) are equivalent (you didn't mention an issue with this, but I want to make sure we cover it anyways, plus it's short).
A) clearly implies B): any $F,G\in I(X)$ will have a common zero at every point in $l\cap X$ which is assumed nonempty.
For the other direction of the equivalence, we prove the contrapositive: if $X\cap l=\emptyset$, then there exist homogeneous $F,G\in I(X)$ so that $F,G$ have no common zero on $l$. Assume $X\cap l=\emptyset$. Up to a change of coordinates, we may assume $l=V(x_2,\cdots,x_n)\subset \Bbb P^n$. Now on $U_0=D(x_0)$, we have $X_0:= X\cap U_0$ and $l_0:= l\cap U_0$ are affine varieties which do not meet. Thus $I_{U_0}(X_0)+I_{U_0}(l_0)=(1)$, so we can find elements $a\in I_{U_0}(X_0)$ and $b\in I_{U_0}(l_0)$ so that $a+b=1$. Then $a$ vanishes on $X_0$ but not $l_0$, and after homogenizing it to $\widetilde{a}$ and multiplying by some power of $x_0$, we get that $x_0^p\widetilde{a}$ is a homogeneous element of $I(X)$ which vanishes only at $[0:1:0:\cdots:0]\in l$. Repeating this construction on $U_1=D(x_1)$, we get a homogeneous element of $I(X)$ which vanishes only at $[1:0:\cdots:0]\in l$. These two elements are our $F,G$ which do not share a common zero on $l$, so B implies A by contrapositive.
Before we start tackling the equivalence of B) and C), let's recall some facts about the resultant:
1) The resultant of two polynomials with coefficients from an integral domain is zero iff they have a common divisor of positive degree.
2) If $A,B$ are two polynomials in $R[x]$ and $\varphi: R\to S$ is a ring homomorphism which extends to a ring homomorphism $\varphi:R[x]\to S[x]$ in the natural way, then:
$Res_x(\varphi(A),\varphi(B))=\varphi(Res_x(A,B))$ if $\deg_x A = \deg_x \varphi(A)$ and $\deg_x B = \deg_x \varphi(B)$
$\varphi(Res_x(A,B))=0$ if $\deg_x A > \deg_x \varphi(A)$ and $\deg_x B > \deg_x \varphi(B)$
$\varphi(Res_x(A,B))=\varphi(a)^{\deg B-\deg \varphi(B)}Res_x(\varphi(A),\varphi(B))$ if $\deg A =\deg \varphi(A)$ and $\deg B > \deg \varphi(B)$ where $a$ is the top coefficient of $A$.
$\varphi(Res_x(A,B))=\pm \varphi(b)^{\deg A-\deg \varphi(A)}Res_x(\varphi(A),\varphi(B))$ if $\deg B =\deg \varphi(B)$ and $\deg A > \deg \varphi(A)$ where $b$ is the top coefficient of $B$.
Each of these parts of 2) can be proven by noticing that $\varphi$ commutes with $\det$ since it's a polynomial.
(See the Wikipedia page if you care about when the $\pm$ is a $+$ versus a $-$.)
Now let's look at the equivalence of B) and C). We'll quantify it as follows: for any pair $F,G\in I(X)$, their having a common zero on $l$ is equivalent to $Res_{x_n}(F,G)(q)=0$.
Suppose either $F$ or $G$ satisfies the condition that its leading coefficient as a polynomial in $x_n$ doesn't vanish upon evaluation at $q$ (aka restriction to $l$). We apply the fact 2) about the resultant with $\varphi$ being the evaluation at $q$ map: the first, third, or fourth part of this fact applies, and we have that $Res_{x_n}(F,G)(q)=0$ iff $Res_{x_n}(F[q],G[q])=0$. But $Res_{x_n}(F[q],G[q]) = 0$ iff $F[q]$ and $G[q]$ have a common factor of positive degree by fact 1) about resultants, and this common factor is exactly equivalent to a common zero on $l$, so we see that B) and C) are equivalent in this case.
In the case where $F$ and $G$ both have leading coefficients as polynomials in $x_n$ which vanish upon plugging in $q$, we show that conditions B) and C) are automatically true. As $p\Rightarrow q$ is equivalent to $\neg p \vee q$, this will show that B) and C) are equivalent in this case.
If $F,G$ both have leading coefficients as polynomials in $x_n$ which vanish upon plugging in $q$, we are in the situation of the second part of fact 2), so $Res_{x_n}(F,G)(q)=0$. Similarly, the vanishing of the leading coefficient implies that $F,G$ both have a zero at $p$ because $\deg_{x_n} F < \deg F$. (To prove this last bit, it may be instructive to note that up to a change of coordinates leaving $p$ fixed, we may take $q=[1:0:\ldots:0]$, so that $F[q],G[q]$ are either zero or divisible by $x_0$ and therefore must have a zero at $p$.)
I have to admit that I personally got a little turned around a few times attempting to write the last part of this answer - the key thing to note is that there's a case where the equivalence of B) and C) is automatic because they're both just true from the assumptions in this special case. Hope this helps! |
Can absolute scalability be 'relaxed' to an equivalent condition in the properties of a norm? | The relaxed condition also implies $$ \left\|\frac1\alpha \alpha x\right\|\le\left|\frac1\alpha\right|\|\alpha x\|$$
and hence
$$ \|\alpha x\|\le |\alpha|\|x\|\le |\alpha|\left|\frac1\alpha\right|\|\alpha x\|=\|\alpha x\|,$$
which implies equality throughout. |
Find $\lim_\limits{x\to 0} \frac{\sin x}{e^x -1 -\sin x}$ | Intuitively, you can use the approximations $\sin x\approx x$ and $e^x\approx1+x+x^2/2$, so that the given expression behaves like $2/x$, which has no limit. |
Name of theorem in plane, Euclidean Geometry regarding a circle circumscribing a triangle | Here is the code for a diagram illustrating the theorem in my post.
\documentclass{amsart}
\usepackage{tikz}
\usetikzlibrary{calc,intersections}
\begin{document}
\noindent \hspace*{\fill}
\begin{tikzpicture}
%Triangle ABC has sides of lengths 3, 2 + 6/7, and 1 + 6/7. A = (-1,0), B = (2,0), and C = (9/7,12/7).
%(The figure is magnified by 3/2.)
\coordinate (A) at ({(3/2)*(-1)},0);
\coordinate (B) at ({(3/2)*2},0);
\coordinate (C) at ({(3/2)*9/7},{(3/2)*12/7});
%
\draw (A) -- (B) -- (C) -- cycle;
%
\node[anchor=north, inner sep=0, font=\footnotesize] at ($(A) +(0,-0.15)$){\textit{A}};
\node[anchor=north, inner sep=0, font=\footnotesize] at ($(B) +(0,-0.15)$){\textit{B}};
%The center of the circle circumscribing the triangle is located. The radius of the circumscribing circle
%is 65/42.
\path[name path=a_path_to_locate_center_of_circle] ({(3/2)*1/2},0) -- ({(3/2)*1/2},2);
\path[name path=another_path_to_locate_center_of_circle] ($(A)!0.5!(C)$) -- ($($(A)!0.5!(C)$)!1!90:(A)$);
\coordinate[name intersections={of=a_path_to_locate_center_of_circle and another_path_to_locate_center_of_circle, by={center_of_circle}}];
%
\draw[blue] let \p1=($(A)-(center_of_circle)$), \n1={atan(\y1/\x1)}, \p2=($(B)-(center_of_circle)$), \n2={atan(\y2/\x2)} in (A) arc ({\n1+180}:\n2:{(3/2)*65/42});
%Vertex C is labeled.
\path let \p1=($(C)-(center_of_circle)$), \n1={atan(\y1/\x1)} in node[anchor={\n1+180}, inner sep=0, font=\footnotesize] at ($(C) +({\n1}:0.15)$){\textit{C}};
%The length of side BC is labeled "a."
\path let \p1=($(B)-(C)$), \n1={atan(\y1/\x1)} in node[anchor={\n1-90}, inner sep=0, font=\footnotesize] at ($($(B)!0.5!(C)$) +({\n1+90}:0.1)$){\textit{a}};
\end{tikzpicture}
\hspace{\fill}
\end{document} |
Plotting the inverse of a function | The answer is $\log_2(x)$, by definition. The graph is a reflection of the graph of the original function (across the line $y=x$), as WolframAlpha will show you. |
Do there exist non-trivial integer coefficients that break linear independence of the roots of unity? | I am not sure I understood your definition correctly but I believe the fact that the $n$th cyclotomic field extension of $Q$ has degree $\phi(n)$ says that there are no non trivial combinations. |
Are there any interactions between abstract harmonic analysis and harmonic analysis of real variables? | In my opinion, the difference between the fields is less the domain you are working on, but more on the types of techniques you use. In the 'real-variable harmonic analysis' you discuss, the primary techniques are quantitative, i.e. you are often trying to bound some quantity in terms of some other quantity, most often to show some operator is bounded. On the other hand, in 'abstract harmonic analysis', the techniques are much more qualitative, relying more on topological techniques, weak compactness, etc. Both approaches are necessary to the general development of the field. Of course, the topology of $\mathbf{R}^n$ is very necessary to developing Littlewood-Paley / Calderon-Zygmund type techniques, so you shouldn't expect these approaches to generalize to all locally compact groups. But there are certainly groups 'like' $\mathbf{R}^n$, i.e. Lie groups, that do have similar theories. When it comes to applications of soft-type analysis in PDEs, I'm not too knowledgeable about those that could be directly considered as 'abstract harmonic analysis', but results like the Atiyah-Singer index theorem could be considered as soft-analytical results that have major importance in PDEs. |
Determining groups G | You would need to show that $(g,g^{-1})(h,h^{-1}) = (gh, (gh)^{-1})$ in order for it to be a subgroup. However $(g,g^{-1})(h,h^{-1}) = (gh, g^{-1}h^{-1}) = (gh, (hg)^{-1})$. So to show that it is a subgroup, it is equivalent to showing that $(gh)^{-1} = (hg)^{-1}$.
What happens if you take the inverse of both sides? Do you see what this implies about $G$? (Can you find a counter example from there?) |
Characteristic function of Cantor distribution | Suppose $X$ is Cantor distributed. We know that it is symmetric around $\frac{1}{2}$, so we can rewrite it as
$$X - \frac{1}{2} = \sum_{k=1}^\infty \frac{X_k}{3^k}$$
where $\Pr[X_k = -1] = \Pr[X_k = 1] = 1/2$ and $X_k$ are iid for all $k$.
Remark: This is the same as what John Dawkins writes, noting that $\frac{1}{2} = \sum_{k=1}^\infty \frac{1}{3^k}$.
For each $X_k$, $$\begin{align}\varphi_{X_k}(t) &= \sum\Pr[X_k=l]e^{itl}\\
&=\frac{1}{2}e^{-it} + \frac{1}{2}e^{it} = \cos(t)\\
\therefore \varphi_{X_k/3^k}(t)&=\varphi_{X_k}(t/3^k)\\
&=\cos\left(\frac{t}{3^k}\right)\end{align} $$
By independence, the sum of $X_k/3^k$ will be the product in the characteristic function,
$$\begin{align} \varphi_{X-1/2} (t) &= \prod_{k=1}^\infty \cos\left(\frac{t}{3^k}\right) \\
\varphi_{X} (t) &= e^{it/2} \prod_{k=1}^\infty \cos\left(\frac{t}{3^k}\right) \end{align}$$
For most mathematical beauty, a little bit of location-scale transformation is always needed.
References: Lukacs, Eugene (1970). Characteristic Functions, 2nd edition, Griffin, London. |
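A Monte Carlo sanity check of the product formula (a sketch of mine, truncating the series at $K$ terms):

import numpy as np

rng = np.random.default_rng(0)
K, n, t = 40, 100_000, 5.0
Xk = rng.choice([-1.0, 1.0], size=(n, K))             # the iid signs X_k
X = 0.5 + (Xk / 3.0 ** np.arange(1, K + 1)).sum(axis=1)
empirical = np.exp(1j * t * X).mean()
formula = np.exp(1j * t / 2) * np.prod(np.cos(t / 3.0 ** np.arange(1, K + 1)))
print(empirical, formula)      # agree up to Monte Carlo error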
Which of the following is bigger (logarithms) | $\log_3 4 = \dfrac{\log_2 4}{\log_2 3} = \dfrac{2}{\log_2 3}$
$A := {\log_2 3}+ \log_3 4 = {\log_2 3} + \dfrac{2}{\log_2 3}$
Dividing $A$ by $\sqrt 2$, observe that $ \dfrac{\log_2 3}{\sqrt 2} + \dfrac{\sqrt 2}{\log_2 3} > 2$ by the AM-GM inequality (since ${\log_2 3 \ne \sqrt 2}$)
Thus $A>2\sqrt 2$ |
Plot of a domain in the complex plane | Let be $x=u+iv:$
$$
\sqrt{(u^2-v^2-1)^2+(2uv)^2}=|u^2-v^2-1+2uvi|=|x^2-1|<r.
$$ |
Financial Investment timeline | The quarterly rate is $0.01$ and the number of quarters are $16$ for the $2000$ and $8$ for $1500$
For the first $2000$ the investor gets $$2000(1+.01)^{16} = 2345.157290$$
For the $1500$ he/she gets $$1500(1+.01)^8 = 1624.285058$$
The total accumulated value is $$3969.442348 \approx 3969$$ |
If $S$ is real skew-symmetric matrix, then prove that $\det(I+S) \ge 1+\det(S)$,The equality holds if and only if $n\le 2$ or $n\ge3$ and $S=O$ | I hope this helps :
Since a skew-symmetric matrix is normal, i.e. it commutes with its adjoint, we have a canonical form for such operators (obtained by complexifying the vector space). For an $f$ with $f = -f^{*}$ (as for skew-symmetric matrices) there exists $P$ with $P^{-1}= P^t$ such that $$A = P^{-1}S P = \begin{pmatrix}\textbf{0} & 0 \\ 0 & \Box\end{pmatrix}$$
where the first block is a $k\times k$ zero block corresponding to the eigenvectors with eigenvalue $0$ of $f$, and the remaining part consists of $j$ (the indices are not very important here) blocks $\Box_\mu = \begin{pmatrix}0 & \mu \\ -\mu & 0 \end{pmatrix}$ corresponding to the purely imaginary eigenvalues of $f$ (here $\mu \in \mathbb{R}$, and this is very important).
That being said, since $\det(P)^2 = 1$ ($P$ being orthogonal), playing with Binet:
$$\text{det}(Id+S) = 1\cdot \text{det}(Id+S) = \text{det}(P)^2 \text{det}(Id+S) = \text{det}(P^{-1})\text{det}(Id+S)\text{det}(P) = \text{det}(P^{-1}(Id+S)P) = det(P^{-1}Id P + P^{-1}SP) = \text{det}(Id+ A)$$
It should clear that $I_d+A = \begin{pmatrix}Id_k & 0 \\ 0 & \Box\end{pmatrix}$ where the blocks now turned into $B_\mu = \begin{pmatrix}1 & \mu \\ -\mu &1 \end{pmatrix}$
And since the matrix is block diagonal we have that $\text{det}(Id+A) = 1^k \prod\limits_{i=1}^j B_{\mu_i}$.
Note that having $\text{det}( B_{\mu_i}) = 1 + \mu_i^2$, we end up with $\text{det}(I_d+A) = \prod\limits_{i=1}^j (1+\mu_i^2)$.
It should be clear now that, when $k=0$ (no zero eigenvalues),
$$\prod\limits_{i=1}^j (1+\mu_i^2) \geq 1+ \prod\limits_{i=1}^j \mu_i^2 = 1 + \prod\limits_{i=1}^j \text{det}(\Box_{\mu_i}) = 1 + \text{det}(P^{-1}SP) = 1 + \text{det}(P)^2\text{det}(S) = 1+\text{det}(S),$$
while if $k\geq 1$ the zero block forces $\text{det}(S)=0$, and then $\prod\limits_{i=1}^j (1+\mu_i^2) \geq 1 = 1+\text{det}(S)$ holds trivially.
Edit : I think if this is right you can easily find the special cases for $n$ by yourself.
$\textbf{Addendum :}$ Rewatching this "proof" it seems to me that the equality holds if and only if holds $\prod\limits_{i=1}^j (1+\mu_i^2) = 1+ \prod\limits_{i=1}^j \mu_i^2$. If you are familiar with the first product, you should know or you could prove that $\prod\limits_{i=1}^j (1+\mu_i^2) = \sum\limits_{J \subset[j]}\prod\limits_{i \in J} \mu_i^2$. From this equality you see that holds if and only if $\mu_i^2 = 0$ for every $i$ since we're dealing with real numbers, i.e $\mu_i = 0$ for all $i$. But if we remember who $\mu_i$ was, we get $S = 0$ since $0$ is the only matrix similar to the $0$ matrix. |
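A quick numerical check of the inequality with random real skew-symmetric matrices (a sketch of mine):

import numpy as np

rng = np.random.default_rng(0)
for n in (2, 3, 5, 6):
    M = rng.standard_normal((n, n))
    S = M - M.T                                   # real skew-symmetric
    lhs = np.linalg.det(np.eye(n) + S)
    rhs = 1 + np.linalg.det(S)
    print(n, lhs >= rhs - 1e-9)                   # True every time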
Bessel's inequality with condition on the derivative | $\widehat{f'}(n)= 2\pi i n \hat f(n)$. So for $n\neq 0$, ${\vert \hat f(n)\vert }^2 \leq {1\over 4 \pi^2} {\vert \widehat{f'}(n)\vert }^2$ (with equality iff $|n|=1$), and the result follows by Bessel.
Choose $c$ so that $\hat f(0)=c$, and apply Parseval to the function $f-c$ to get the result. |
An application of the Inclusion Principle to Chemistry? (Proof Verification) | I arrived at the same conclusion using another approach. My approach is to let $G(X,E)$ be a graph whose vertices are the atoms $X_i$, and the edges are the bonds $b_{ij}=(X_i,X_j)$. And using your definition of $H_i$ and $N_i$ , we observe that for all vertices:$$H_i +\deg(X_i) = N_i$$ Since degree of an atom corresponds to the number of bonds that atom made, number of electrons needed in the final configuration is just the number of electrons already existing in the atom and the ones that are gained by bonds and since each bond brings another electron this statement holds. Then, using Handshaking lemma:$$ \sum_{X_i\in{X}}\deg(X_i)=2|E|=2B \\ \sum_{X_i\in{X}}\deg(X_i)=\sum_{X_i\in{X}}(N_i-H_i)=N-H \\ B=\frac{N-H}{2}$$ |
Does the zeroth root exist? | The $n^\text{th}$ root of a real number $x$ is $$x^{1/n}$$
If $n=0$ then $1/0$ is undefined, so there is no such thing as the $0^{\text{th}}$ root. |
Proof of convolution theorem for Laplace transform | \begin{align*}\mathcal{L}\{f\}({s})\cdot\mathcal{L}\{g\}({s})&=\left(\int_0^\infty e^{-{s}t}{f}(t)\,dt\right)\left(\int_0^\infty e^{-{s}u}{g}(u)\,du\right)\\
&=\lim_{{{L}}\to\infty}\left(\int_0^{{L}}e^{-{s}t}{f}(t)\,dt\right)\left(\int_0^{{L}}e^{-{s}u}{g}(u)\,du\right)
\end{align*}
By Fubini's theorem,
\begin{align*}\left(\int_0^{{L}}e^{-{s}t}{f}(t)\,dt\right)\left(\int_0^{{L}}e^{-{s}u}{g}(u)\,du\right)&=\int_0^{{L}}\int_0^{{L}}e^{-{s}(t+u)}{f}(t){g}(u)\,dt\,du\\
&=\iint_{R_{{L}}}e^{-{s}(t+u)}{f}(t){g}(u)\,dt\,du,
\end{align*}
where $R_{{L}}$ is the square region $$0\leq t\leq {{L}},\qquad 0\leq u\leq {{L}}.$$
Let $T_{{L}}$ be the triangular region
$$0\leq t\leq {{L}},\qquad 0\leq u\leq {{L}},\qquad t+u\leq {{L}}.$$
Provided that ${f}$ and ${g}$ are bounded by exponential functions,
$$\lim_{{{L}}\to\infty}\iint_{R_{{L}}}e^{-{s}(t+u)}{f}(t){g}(u)\,dt\,du=\lim_{{{L}}\to\infty}\iint_{T_{{L}}}e^{-{s}(t+u)}{f}(t){g}(u)\,dt\,du.$$
Now, the function
$$\varphi(v,u)=(v-u,u)$$
maps $D_{{L}}$ bijectively onto $T_{{L}}$, where $D_{{L}}$ is the triangular region
$$0\leq v\leq {{L}},\qquad 0\leq u\leq {{L}},\qquad v\geq u.$$
The component functions of $\varphi$ are
$$t(v,u)=v-u\qquad\mbox{and}\qquad u(v,u)=u,$$
so the Jacobian of $\varphi$ is
$$J\varphi=\det\begin{bmatrix}t_v&t_u\\u_v&u_u\end{bmatrix}=\det\begin{bmatrix}1&-1\\0&1\end{bmatrix}=1.$$
Hence,
$$\iint_{T_{{L}}}e^{-{s}(t+u)}{f}(t){g}(u)\,dt\,du=\iint_{D_{{L}}}e^{-{s}v}{f}(v-u){g}(u)\,dv\,du.$$
By Fubini's theorem,
\begin{align*}&\iint_{D_{{L}}}e^{-{s}v}{f}(v-u){g}(u)\,dv\,du=\int_0^{{L}}\int_0^ve^{-{s}v}{f}(v-u){g}(u)\,du\,dv\\
\\
&=\int_0^{{L}}e^{-{s}v}\int_0^v{f}(v-u){g}(u)\,du\,dv=\int_0^{{L}}e^{-{s}v}({f}\ast{g})(v)\,dv.
\end{align*}
Hence,
\begin{align*}\lim_{{{L}}\to\infty}\iint_{D_{{L}}}e^{-{s}v}{f}(v-u){g}(u)\,dv\,du&=\lim_{{{L}}\to\infty}\int_0^{{L}}e^{-{s}v}({f}\ast{g})(v)\,dv\\
\\
&=\int_0^\infty e^{-{s}v}({f}\ast{g})(v)\,dv\\
\\
&=\mathcal{L}\{{f}\ast{g}\}({s}).
\end{align*} |
Two bodies under mutual gravitational attraction as system of first order ODEs | This has been asked before.
The two-body problem can be solved almost completely in closed form. The equations of motion can be integrated to obtain the equations of the orbits, which are always conic sections (Kepler's first law generalised to include the unbounded cases of the parabola and hyperbola). However, to express the motions of the bodies as function of time, it's necessary to solve Kepler's equation, for which there is no known simple solution in closed form. |
Find the trace of the matrix $A$ | Hint:
Let $\lambda_i, i=1\dots n$ be the eigenvalues. Then $\rm{trace}(A^m)=\Sigma\lambda_i^m$.
Now try to find two things to apply the Cauchy–Schwarz inequality to so that the LHS and the RHS are both in terms of $\rm{trace}(A^2)$, $\rm{trace}(A^3)$ and $\rm{trace}(A^4)$. Note the equality conditions for CS.
Solution:
Apply Cauchy-Schwarz to $(\lambda_1,\dots,\lambda_n)$ and $(\lambda_1^2,\dots,\lambda_n^2)$ to get $\rm{trace}(A^3)^2\leq\rm{trace}(A^2)\rm{trace}(A^4)$. Since we in fact have equality, we must have $\lambda_i^2=\alpha\lambda_i$ for each $i$ and some fixed $\alpha$. Hence each $\lambda_i$ is zero or $\alpha$. Looking for solutions of this form we find that it works iff $\alpha$ is $1$. |
For which values of $a$, equation $\sqrt{x-3}+ax=2a+3$ has one solution for $x$? | Hint:
take the square to get
$x-3=(2a+3)^2+a^2x^2-2ax(2a+3)$
which becomes
$a^2x^2-( 2a(2a+3)+1)x+(2a+3)^2+3=0$
it has one solution if the discriminant is zero. |
Equation for a line through a plane in homogeneous coordinates. | Definition
You are right, 3D points and planes are described with 4 homogeneous coordinates. A point at $\vec{r}$ is $P=(\vec{r};1)$ and a plane $W=(\vec{n};-d)=(\vec{n};-\vec{r}\cdot\vec{n})$ with normal $\vec{n}$ through point $\vec{r}$, or with minimum distance to origin $d$.
A line needs 6 coordinates (Plücker coordinates) describing the direction and moment about the axis. A line along $\vec{e}$ through a point $\vec{r}$ has coordinates $L=[\vec{e};\vec{r}\times\vec{e}]$. Given a line $L=[\vec{l};\vec{m}]$, the direction is recovered by $\vec{e}=\frac{\vec{l}}{|\vec{l}|}$ and the position by $\vec{r} = \frac{\vec{l}\times\vec{m}}{|\vec{l}|^2}$
Now derive the point $P=(\vec{r};1)$ where line $L=[\vec{l};\vec{m}]$ meets plane $W=(\vec{w};\epsilon)$ as follows:
See that for the point to be on the plane you must have $\epsilon = - \vec{r}\cdot \vec{w}$
For the point to be on the line you must have $\vec{m} = \vec{r} \times \vec{l}$
Use the vector triple product to get
$$
\vec{w} \times \vec{m} = \vec{w} \times \left( \vec{r} \times \vec{l} \right) = \vec{r} (\vec{w}\cdot\vec{l})-\vec{l}(\vec{w}\cdot\vec{r}) $$
$$ \vec{w} \times \vec{m} = \vec{r} (\vec{w}\cdot\vec{l}) - \vec{l}(-\epsilon) $$
$$ \vec{r} = \frac{\vec{w}\times\vec{m}-\epsilon \vec{l}}{\vec{w}\cdot\vec{l}} $$
Define the line-plane meet operator as
$$ \begin{aligned}
P & = [W\times] L \\
\begin{pmatrix} \vec{p} \\ \delta \end{pmatrix} & = \begin{bmatrix}
-\epsilon {\bf 1} & \vec{w}\times \\ \vec{w}^\top & 0
\end{bmatrix} \begin{pmatrix} \vec{l} \\ \vec{m} \end{pmatrix}
\end{aligned}$$
where $\vec{w}\times = \begin{pmatrix}x\\y\\z\end{pmatrix} \times = \begin{bmatrix} 0 & -z & y \\ z & 0 & -x \\ -y & x & 0 \end{bmatrix}$ is the cross product matrix operator in 3×3 form.
The meet operator $[W\times]$ has dimensions 4×6 to work between lines and points.
Example
A plane normal to the $x$ axis located at $x=3$ has coordinates $W=(1,0,0;-3)$
A line through $y=2$ directed towards $\hat{i}+\hat{k}$ has coordinates $L=[1,0,1;2,0,-2]$
The meet operator is $$ [W\times] = \left[ \begin{array}{ccc|ccc}
3 & 0 & 0 & 0 & 0 & 0 \\
0 & 3 & 0 & 0 & 0 & -1 \\
0 & 0 & 3 & 0 & 1 & 0 \\
\hline 1 & 0 & 0 & 0 & 0 & 0
\end{array}\right]$$
The point where the line meets the plane is $P=[W\times]L$ $$P=\left[ \begin{array}{ccc|ccc}
3 & 0 & 0 & 0 & 0 & 0 \\
0 & 3 & 0 & 0 & 0 & -1 \\
0 & 0 & 3 & 0 & 1 & 0 \\
\hline 1 & 0 & 0 & 0 & 0 & 0
\end{array}\right] \begin{bmatrix}1\\0\\1\\ \hline 2 \\ 0 \\ -2 \end{bmatrix} =\begin{pmatrix}3\\2\\3\\ \hline 1 \end{pmatrix}$$
The point is located at $\vec{r} = (3,2,3)$ |
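A small numpy sketch (the function names are mine) implementing the $4\times 6$ meet operator and reproducing the example:

import numpy as np

def cross_mat(w):
    x, y, z = w
    return np.array([[0, -z, y], [z, 0, -x], [-y, x, 0]], dtype=float)

def meet(W, L):
    w, eps = np.array(W[:3], dtype=float), float(W[3])
    M = np.zeros((4, 6))
    M[:3, :3] = -eps * np.eye(3)     # -epsilon * identity
    M[:3, 3:] = cross_mat(w)         # [w x]
    M[3, :3] = w                     # w^T
    return M @ np.array(L, dtype=float)

W = [1, 0, 0, -3]                    # plane x = 3
L = [1, 0, 1, 2, 0, -2]              # line through (0,2,0) along (1,0,1)
print(meet(W, L))                    # [3. 2. 3. 1.] -> the point (3, 2, 3)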
Stuck with finding the domain of a function with a logarithm | What is in the argument of $\log_3$? It is $x^2 - 1$, right? So you need the values of $x$ such that: $$x^2 - 1 > 0$$
We have: $$x^2 - 1 > 0 \implies x^2 > 1 \implies |x| > 1$$
You got it wrong there because $\sqrt{x^2} = |x|$, so you lost some of the possibilities. Now, drawing the real line, it is easy to see that $|x| > 1$ happens if and only if $x > 1$ or $x < -1$. Ok?
We have that $\log_a b = x \iff a^x = b$, by definition. If $a > 0$ and $b < 0$, you get something positive equaling something negative. |
There are $5$ apples $10$ mangoes and $15$ oranges in a basket. | You can give any combo of $0-5$ apples, $0-10$ mangoes to person $A$
Thus $6\times11 = 66$ ways (the balance needed to make $15$ will be oranges) |
Function many to one | A function $f\colon X \to Y$ can be thought of as a subset $F \subseteq X \times Y$ such that
For every $x \in X$ there is a $y \in Y$ such that $(x, y) \in F$.
If $(x, y), (x, y') \in F$ then $y = y'$.
If you have a function $f\colon X \to Y$ then it corresponds to the ordered pairs $F = \{(x, f(x)) \mid x \in X\}$. Conversely if you have a set of pairs $F \subseteq X \times Y$ with the above properties then the associated function $f\colon X \to Y$ is defined by letting $f(x)$ be the unique element of $Y$ such that $(x, f(x)) \in F$.
It is indeed true that this means functions cannot be one to many. In set theory this is part of the definition of a function. A function $f\colon X \to Y$ is an assignment, to every $x \in X$, of a unique element $f(x) \in Y$. |
Solving the congruence $7x + 3 = 1 \mod 31$? | We have that
$$7x + 3 \equiv 1 \mod 31 \implies 7x\equiv -2\mod 31$$
Then we need to evaluate by Euclidean algorithm the inverse of $7 \mod 31$, that is
$31=4\cdot \color{red}7 +\color{blue}3$
$\color{red}7=2\cdot \color{blue}3 +1$
then
$1=7-2\cdot 3=7-2\cdot (31-4\cdot 7)=-2\cdot 31+9\cdot 7$
that is $9\cdot 7\equiv 1 \mod 31$ and then
$$9\cdot 7x\equiv 9\cdot -2\mod 31 \implies x\equiv 13 \mod 31$$ |
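A quick check in Python (pow with exponent $-1$ computes modular inverses in Python 3.8+):

inv7 = pow(7, -1, 31)
print(inv7)                 # 9
print((-2 * inv7) % 31)     # 13
print((7 * 13 + 3) % 31)    # 1, so x = 13 indeed solves the congruence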
Equivalent notions of a smooth map using different definitions of manifold | Your question is essentially covered by my answer to Equivalent definition of a tangent space? However, my answer did not focus on the concept of a smooth map, so let me do it properly here.
First note that property "Tu-smooth at $x \in X$" does not depend on the choice of charts around $x$ and $f(x)$. This is a basic theorem.
G&P define a manifold as a subset $X \subset \mathbb R^N$ if for each $x \in X$ there exist an open neighborhood $U$ in $X$ and a diffeomorphism $\psi : U \to V$ onto an open $V \subset \mathbb R^k$. Here "diffeomorphism" means that $\psi$ is a bijection such that both $\psi, \psi^{-1}$ are "G&M-smooth" in the sense defined in your question.
The first thing we have to do is to endow a G&M-manifold with a differentiable structure in the sense of Tu. As an atlas take the family of all local diffeomorphisms $\psi : U \to V$ as above. Now have a look at the above link. You will see that for each $z \in U$ there exist an open neigborhood $W$ of $z$ in $\mathbb R^N$, $W \cap X \subset U$, and a smooth extension $F : W \to V$ of $\psi \mid_{W \cap X}$.
Now the transition map between charts $\psi : U \to V$ and $\psi' : U' \to V'$ is given by
$$\psi' \circ \psi^{-1} : \psi(U \cap U') \to \psi'(U \cap U') .$$
But this map is smooth: We know that $\overline{\psi^{-1}} : V \to \mathbb R^N, \overline{\psi^{-1}}(\zeta) = \psi^{-1}(\zeta)$, is smooth, and that around each $z = \psi^{-1}(\zeta)$ with $\zeta \in \psi(U \cap U')$ we have a smooth extension $F' : W' \to V'$ of $\psi' \mid_{W' \cap X}$ for some open neighborhood $W'$ of $z$ in $\mathbb R^N$. We may assume that $W' \cap X \subset U \cap U'$. This shows that $\zeta$ has an open neighborhood contained in $\psi(U \cap U')$ on which $\psi' \circ \psi^{-1}$ agrees with $F' \circ \overline{\psi^{-1}}$, which is smooth.
Thus our atlas is smooth and generates a differentiable structure on $X$.
It is then easy to see that the inclusion $j : X \to \mathbb R^N$ is Tu-smooth. In fact, let $\psi : U \to V$ be a chart in our atlas. On $\mathbb R^N$ take the identity as a chart. Then $id \circ j \circ \psi^{-1} = \overline{\psi^{-1}}$ which is smooth.
Now let us verify that Tu-smooth maps are G&P-smooth. Let $f : X \to Y$ be Tu-smooth at $x \in X$. Then also $\bar f = j \circ f : X \to \mathbb R^N$ is Tu-smooth. This means that for any chart $\psi : U \to V$ around $x$ the map $\bar f \circ \psi^{-1} : V \to \mathbb R^N$ is smooth. There is a smooth extension $F : W \to V$ of $\psi \mid_{W\cap X}$, where $W$ is some open neighborhood of $x$ in $\mathbb R^N$. Thus also $ \bar f \circ \psi^{-1} \circ F : W \to \mathbb R^N$ is smooth. Clearly it is an extension of $\bar f \mid_{W\cap X}$. |
Hilbert spaces and subspaces | The answer is no. Take $H = \Bbb R^2$, and
$
M = \{(t,0): t \in \Bbb R\}$, and
$u(x_1,x_2) = (0,x_1)$. |
Generalized Vandermonde-Matrix | It is not necessarily invertible. Consider $z_1 = 1, z_2 = -1$, with $\lambda_1 = 2, \lambda_2 = 4$. Then the resulting matrix is the $2 \times 2$ matrix of ones, which is clearly not invertible.
To see why this in general is not necessarily invertible, think of the Vandermonde matrix as polynomial interpolation. The normal Vandermonde matrix, for example, gives the $(n - 1)$th degree polynomial going through $n$ distinct points. What you are asking here is to find a polynomial whose coefficients are all zero except the coefficients to the terms $x^{\lambda_1}, x^{\lambda_2}, \cdots, x^{\lambda_n}$. To construct a counterexample, we just need to note that such a polynomial has $\lambda_n - \lambda_1$ nonzero roots, and if $n$ of these $\lambda_n - \lambda_1$ roots are distinct, we could find $n$ distinct values to give to $z_1, \cdots, z_n$ and have the matrix be singular (why?). |
Triple integral convergence | Transforming to spherical coordinates, we have
$$
r^4 = (x^2 + y^2 + z^2)^2\\
= x^4+y^4+z^4 + 2x^2y^2 + 2y^2z^2 + 2x^2z^2\\
\leq 3(x^4 + y^4 + z^4)
$$
which gives us $\frac13r^4\leq x^4 + y^4 + z^4 $, meaning we have
$$
\frac{1}{1 + x^4 + y^4 + z^4}\leq \frac 3{3+r^4}
$$
We integrate the right-hand expression over all of $\Bbb R^3$ to get
$$
\int_0^{2\pi}\int_0^{\pi}\int_0^\infty\frac{3}{3+r^4}r^2\sin\varphi\,dr\,d\varphi\,d\theta\\
$$
I don't know what $\int_0^{\infty}\frac{3r^2}{3+r^4}dr$ is, but a suitable comparison using $\frac1{r^2}$ shows that it converges. The other two are proper, bounded integrals and therefore also converge, meaning the whole triple integral also converges. |
Check that one can find a sequence | HINT Consider the decimal expansion of $\delta$, and if $\delta$ has a finite decimal expansion, replace, e.g. $1.23$ by $1.2299\ldots$ |
Find limit of $(a_n)$ defined by $a_1 = 1$ and $a_{n+1} = \frac{1}{3+a_n}$ | First we note that $a_n\in (0,1/3)$ for $n>1$.
Since
$$a_{n+1}-a_{n}=\frac{1}{3+a_n}-\frac{1}{3+a_{n-1}}=\frac{a_{n-1}-a_n}{(3+a_n)(3+a_{n-1})},$$
and $(3+a_n)(3+a_{n-1})>9$, we see that
$$|a_{n+1}-a_n|<\frac19 |a_n-a_{n-1}|.$$
This shows that $\{a_n\}$ is Cauchy, so it converges to some number $a\in[0,1/3]$. To find $a$, take limits on both sides of
$$a_{n+1}=\frac1{3+a_n}.$$ |
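Solving $a=\frac{1}{3+a}$ gives $a^2+3a-1=0$, so the limit is $a=\frac{\sqrt{13}-3}{2}\approx 0.3028$. A quick numeric check (sketch):

import math

a = 1.0
for _ in range(30):
    a = 1 / (3 + a)
print(a, (math.sqrt(13) - 3) / 2)    # both ~ 0.302775...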
automorphism of fields | For example, let $a$ have a minimal polynomial of degree $2$ over $F=GF(p)$. Then $E=F(a)=\{ k \cdot 1 + l \cdot a | k,l \in F\}$. For any field automorphism $\alpha$ of $E$, $\alpha(1)=1$. But $1 \rightarrow a, a\rightarrow 1$ gives us an invertible linear transformation, which therefore cannot be a field automorphism, since it does not fix $1$. |
Show that $\mathbb{N}$ is infinite | Yes. Your proof is correct.
Indeed, if $f\colon\Bbb N\to\{0,\ldots,n-1\}$ is injective, then restricting the function to $\{0,\ldots,n\}$ would be an injection from a finite set into a proper subset, which is a contradiction to the pigeonhole principle.
One can argue in several other ways, e.g. using Peano axioms, or other axioms from which you can obtain (in a second-order) the natural numbers in a relatively natural way. These ways lend themselves to slightly different proofs. But your suggested proof is correct, and is a fairly standard proof in elementary set theory courses. |
Exercise $2$ from chapter $5$ of Eisenbud's Geometry of Syzygies book | Let $f \in S=k[x_1,x_2,x_3,x_4]$ be a quadratic form. Then we have an isomorphism $(f) \cong S(-2)$ of graded $S$-modules, and so
the regularity of $(f)$ is $2$ (irrespective of whether the corresponding conic lies in a plane or not).
Now let $\ell_1,\ell_2$ be two disjoint lines in $\mathbb{P}^3$, and so the ideal
of their union is $I_{\ell_1} \cap I_{\ell_2}$. Now $I_{\ell_1} = (p_1,p_2)$, where $p_1,p_2$ are linear forms, and similarly $I_{\ell_2} = (q_1,q_2)$. Thinking of the two lines as $2$-dimensional subspaces in $k^4$, since they are disjoint, their sum must be $k^4$. This implies that their orthogonal complements are disjoint. This in turn implies that the linear forms $p_1,p_2,q_1,q_2$ are linearly independent and so by a change of coordinates we can assume that
$I_{\ell_1} = (x_1,x_2)$ and $I_{\ell_2} = (x_3,x_4)$, where $x_1,x_2,x_3,x_4$ are the indeterminates of the underlying polynomial ring. But then $I_{\ell_1} \cap I_{\ell_2} = I_{\ell_1} I_{\ell_2}$. Now, the regularity of the product of $d$ linear forms is $d$, by a result of Conca and Herzog, 2003. Consequently, the regularity of $I_{\ell_1} \cap I_{\ell_2}$ is $2$.
Regarding the inequality $\mathrm{reg}(I_X) > \deg(X)-\mathrm{codim}(X)+1$, note that the degree and the codimension of the two lines are each equal to $2$, while for the conic in a plane its degree is zero and its codimension is $1$. |
What is the difference between a parametric equation and a mathematic law? | Given the definition of mathematical law and parametric equation listed by @user10389, the result of an integration is a parametric equation. The definite integration operator essentially gives you the measure bounded by the function you're integrating, so it is redefining the measure in terms of the function.
As far as the integration constant, you may find it helpful to think of the indefinite integration operator as a halfway step- it tells you something like what the potential energy of the thing you're integrating is. The integration constant just says that the only thing that's important is the difference between potentials, not their actual value. |
Prove that the set of $n \times n$ matrices with determinant $1$ is unbounded closed with empty interior in $\mathbb{R}^{n^{2}}$. | The unboundedness can be seen by taking a diagonal matrix $D$ with $D_{11} = 1/N$, $D_{22} = N$, and $D_{jj} = 1$ for $j = 3, \dots, n$, for any $N \in \mathbb{R} \setminus \{0\}$; letting $N \to \infty$ shows the set is unbounded. Closedness is immediate: the set is $\det^{-1}(\{1\})$ and $\det$ is continuous, so it is the preimage of a closed set.
For the empty interior, suppose $A$ were an interior point of the set
$$SL(n) = \{A \in \mathcal{n \times n}: \text{det}(A) = 1\}.$$
Then we would have: there exists some $\delta > 0$, for all $\varepsilon \in (-\delta, \delta)$
\begin{align}
\tag{$\star$}
\label{eq:1}
A + \varepsilon E_n \in SL(n),
\end{align}
where $E_n$ is the $n\times n$ identity matrix. Let $p(\varepsilon) = \det(A + \varepsilon E_n)$. Then $p(\varepsilon)$ is a polynomial in $\varepsilon$ of degree exactly $n$ with leading coefficient $1$, so it is not constant. But \eqref{eq:1} implies $p(\varepsilon) = 1$ has infinitely many solutions (every open interval has the same cardinality as $\mathbb R$), whereas a nonconstant polynomial of degree $n$ can take the value $1$ at most $n$ times. This is a contradiction. |
Proof in number-theory | As @dxiv notes in the comments, a stronger inequality is true.
First observe that $f(t) = t^x$ is a convex function for $x > 1$, $t>0$ which you can check with calculus. Then, $f(\frac{a+b}{2}) \leq \frac{1}{2} (f(a) + f(b))$, giving the required result. |
What are the fields in which $-1$ is not a square and over which every finite-dimensional vector space has a nice inner product? | Suppose $g$ is a symmetric bilinear form on a finite-dimensional vector space $V$ such that $g(v,v)$ is a nonzero square for any nonzero $v\in V$. Then $V$ has an orthonormal basis with respect to $g$. Indeed, you can start with any basis for $V$ and use the Gram-Schmidt process to orthonormalize it: the assumption that $g(v,v)$ is always a nonzero square is exactly what you need for Gram-Schmidt orthonormalization to work.
This means that if property (2) holds, then it actually holds whenver $g$ is just the usual inner product on $k^n$. So, property (2) is equivalent to saying that any sum of squares in $k$ which are not all zero is a nonzero square. Note that this implies that $-1$ is not a sum of squares in $k$ (and in particular implies (1)), since otherwise you could add $1$ to write $0$ as a sum of squares that are not all $0$. So, every such field $k$ can be made into an ordered field, and your $\mathcal{C}$ is the class of fields that can be made into ordered fields in which any sum of squares is a square. Such fields are known as formally real Pythagorean fields. |
Find an element $\theta \in \mathbb{R}$ such that $\mathbb{Q}(\sqrt{2},\sqrt[3]{5}) = \mathbb{Q}(\theta)$ | Take $\theta=\sqrt{2}{\sqrt[3]{5}}$
We have that $(\sqrt{2}\sqrt[3]{5})^2=2\sqrt[3]{5^2} \Rightarrow \sqrt[3]{5^2} \in \mathbb{Q}(\theta)$
Therefore $\sqrt{2}=\frac{(\sqrt{2}\sqrt[3]{5})\sqrt[3]{5^2}}{5} \in \mathbb{Q}(\theta) \Rightarrow \sqrt{2}\in \mathbb{Q}(\theta)$
Now with the same way we can prove that $\sqrt[3]{5} \in \mathbb{Q}(\theta)$ by noticing that $\sqrt[3]{5}=\frac{\sqrt{2}(\sqrt{2}\sqrt[3]{5})}{2}$
Thus $\mathbb{Q}(\sqrt{2},\sqrt[3]{5}) \subseteq \mathbb{Q}(\theta)$
Also it is obvious that $\mathbb{Q}(\theta) \subseteq \mathbb{Q}(\sqrt{2},\sqrt[3]{5})$
So you have the result. |
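If you want to double-check that $\theta$ really generates the whole degree-$6$ extension, here is a small sympy sketch (my own; it assumes sympy is available, and the expected output $x^6-200$ matches $\theta^6 = 8\cdot 25 = 200$):

from sympy import sqrt, cbrt, Symbol, minimal_polynomial

x = Symbol('x')
theta = sqrt(2) * cbrt(5)
# a degree-6 minimal polynomial confirms [Q(theta):Q] = 6 = [Q(sqrt2, cbrt5):Q]
print(minimal_polynomial(theta, x))   # expected: x**6 - 200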
Write power series as rational function | In general $\sum_{n=1}^\infty\frac{1}{u^{2n-1}}=\frac{u}{u^2-1}$. Your expression becomes $\frac{x-3}{(x-3)^2-1}-\frac{x-2}{(x-2)^2-1}$$=\frac{(x-3)((x-2)^2-1)-(x-2)((x-3)^2-1)}{((x-3)^2-1)((x-2)^2-1)}$
You could simplify it. Note that $|x-3|\gt 1$ and $|x-2| \gt 1$ are required.
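A quick numerical sanity check of the identity $\sum_{n\ge1} u^{-(2n-1)}=\frac{u}{u^2-1}$ (my own sketch; the value $u=4$ is arbitrary, and the series only converges for $|u|>1$):

u = 4.0
partial = sum(1 / u ** (2 * n - 1) for n in range(1, 200))   # truncated series
print(partial, u / (u ** 2 - 1))                             # the two values should agree closely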
Finding submodules of a module over a ring of matrices | Try to prove the following statement:
Let $v, w$ be any elements of $E$ with $v \neq 0$. Then there exists an $A \in M_n(k)$ such that $Av = w$.
Now what can you say about the $M_n(k)$-submodules of $E$? |
Why does the elimination method work? | why do you get the point $(x,y)$ at which both lines intersect?
You are solving two equations, so two curves are involved (it can be more, of course, but for simplicity let's stick with two curves).
You are looking for points which are on both curves (lines in your example). The only way a point can be on both curves is if the curves intersect at those points.
Any other points on either line will not be on both lines (will not satisfy both equations at the same time).
So each equation represents one line/curve (or in more complex problems it can be curves or more complex structures in multidimensional spaces) and the set of values that satisfy all equations are, by definition, points on each line/curve. |
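For a concrete illustration (my own toy example): take the system $x+y=3$ and $x-y=1$. Adding the two equations eliminates $y$ and gives $2x=4$, so $x=2$; substituting back gives $y=1$. The pair $(2,1)$ satisfies both equations simultaneously, so it is exactly the point where the two lines meet, and no other point lies on both of them.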
Sum of the first $n$ palindromes | Partial answer: (Would be great if someone can prove this observation)
I wrote a simple Python script to sum palindromes (Python has no integer size limit):
Update: Decided to listen to DanielV's comment, and optimized my script (credit to this answer):
#generates all palindromes in range
list = palindromes(1, 10**12)
count = 0
pastCount = 0
palindrome = 0
#sums them and check each sum for being a palindrome
for x in list:
    count += x
    palindrome += 1
    # print out only the newest palindromic sum
    if (pastCount != count and str(count) == str(count)[::-1]):
        print(count, palindrome)
        pastCount = count
    # This prints sum of the entire range (last sum)
    if (x == list[-1]):
        print("progress:", count, palindrome)
Where the palindromes() function is in the linked answer.
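For readers who do not want to chase the link, here is a minimal stand-in I sketched myself (it is not the linked answer's implementation, and its edge-case conventions may differ slightly, e.g. whether $0$ is counted); it builds each palindrome from its left half, so the script above runs as-is:

def palindromes(lo, hi):
    # hypothetical stand-in: all palindromes p with lo <= p < hi, in increasing order
    out = []
    length = 1
    while 10 ** (length - 1) < hi:
        half_len = (length + 1) // 2
        for half in range(10 ** (half_len - 1), 10 ** half_len):
            s = str(half)
            # mirror the left half (dropping the middle digit for odd lengths)
            p = int(s + s[-2::-1]) if length % 2 else int(s + s[::-1])
            if lo <= p < hi:
                out.append(p)
        length += 1
    return out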
The palindromic sums $X$ it found: $1, 3, 6, 111, 353, 7557, 2376732$
(reached at the first $N$ palindromes, for $N = 1, 2, 3, 12, 16, 47, 314$.)
This sequence is also in the OEIS, and it looks like those are all the terms there are.
By the look of it, I would bet that the next sum of palindromes
which is also palindromic does not exist; that is, there is no solution
for $N$ if $X$ is over three million.
But I'm not sure how to prove this.
Here is the output: (took a few seconds)
1 1
3 2
6 3
111 12
353 16
7557 47
2376732 314
progress: 545046045045045041 1999999
Here you can see that if such an $N$ existed, it would surely be greater than two million ($N\ge2000000$), and its $X$ would surely be greater than $5\times10^{17}$, which is way beyond your three million lower bound.
Find the remainder when the 300-digit number 112222333333........ is divided by 8? | Hint. What is the smallest integer $n$ such that
$$2\sum_{k=1}^9 k+4\sum_{k=10}^n k=90+2n(n+1)-180\geq 300?$$
P.S. The smallest such integer is $n=14$: for $n=13$ the left-hand side is $274$, so the remaining $26$ digits consist of $13$ copies of $14$. Therefore the last three digits are $414$ and the remainder is $414 \bmod 8 = 6$.
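A quick brute-force check of this (my own sketch, assuming the number is built by writing each $k$ exactly $2k$ times and truncating to $300$ digits):

s = "".join(str(k) * (2 * k) for k in range(1, 15))[:300]
print(len(s), s[-3:], int(s) % 8)   # expected: 300 414 6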
Misunderstanding about Laplace operator | Basically, the problem is the norm of the spaces where you define the operators.
The Laplace operator $\Delta$ has domain $H^2_0 (\Omega)$. If one endows this Sobolev space
with its norm, then $\Delta:H^2_0 (\Omega) \to L^2(\Omega)$ is bounded. So there is no
contradiction with the open mapping theorem.
On the other hand, note that we cannot apply
the open mapping theorem to $A: L^2(\Omega) \to H^2_0(\Omega)$, if we consider the $L^2$ norm on $H^2_0(\Omega)$ (since with that norm it is not a Banach space).
A strengthened implication of the ergodicity | Your intuition is right, but for a simple reason. Note that, since $\mu$ is invariant under $T$, we have
\begin{equation}
\int f\circ T\, \mathrm{d}\mu = \int f\, \mathrm{d}(T\mu) = \int f\, \mathrm{d}\mu \;,
\end{equation}
hence,
\begin{equation}
\int (f\circ T - f)\, \mathrm{d}\mu = 0 \;.
\tag{*}
\end{equation}
Now, if the integrand in (*) is almost surely non-negative, it cannot be non-zero on a set of positive measure. |
Show if $S_1 \subseteq \text{span} S_2$ then span$(S_1) \subseteq $ span$(S_2)$ | Overall, I think the proof looks good. If you don't assume that $S_1$ and $S_2$ are finite, then your proof can be adapted.
One small suggestion: when you say that you can write $\textbf{y} = \alpha_1 \textbf{v}_1 + ... + \alpha_n \textbf{v}_n$, say that there exist $\alpha_1,\dots,\alpha_n\in\Bbb{F}$ such that you can do this. Similarly for the $\lambda_{i,j}$ (here, you put that they are in $\Bbb{F}$, but say that they exist).
Convergence of a sequence $(b_n)$ with $b_n \in \{a_1, a_2, \cdots, a_n, \cdots\}$ for all $n$. | Hint : If $(b_n)$ takes a finite number of values, show that $(b_n)$ must be stationary.
If $(b_n)$ takes an infinite number of values, show that there exists a subsequence of $(b_n)$ which is also a subsequence of $(a_n)$, and conclude. |
Non linear system - control problem | You can use the integrating factor (IF) method as follows
$$ \dot x_2+ x_2(2-3x_1) = 1 $$
hence
$$ \big( x_2(t)e^{\int (2-3x_1(t))\, dt} \big)' = e^{\int (2-3x_1(t))\, dt} $$
from which
$$ x_2(t) = x_2(0)e^{-\int_0^t (2-3x_1(s))\,ds} + \int_0^t e^{\int_0^s (2-3x_1(u))\,du -\int_0^t (2-3x_1(v))\,dv}\, ds$$
follows.
The first equation can be written as
$$ \big(x_1(t)e^{t}\big)' = 5\sin(6t)e^{t} $$
so
$$ x_1(t) = x_1(0)e^{-t} + \int_0^te^{s-t} 5\sin(6s)ds $$ |
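If you want to sanity-check the closed form for $x_1$, here is a small numerical sketch (my own; it assumes the underlying system is $\dot x_1=-x_1+5\sin(6t)$ and $\dot x_2=1-(2-3x_1)x_2$, as read off from the equations above, and the initial values are arbitrary):

import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, x):
    x1, x2 = x
    return [-x1 + 5 * np.sin(6 * t), 1 - (2 - 3 * x1) * x2]

sol = solve_ivp(rhs, (0, 5), [1.0, 0.5], dense_output=True, rtol=1e-9, atol=1e-12)

# closed form: x1(t) = x1(0) e^{-t} + 5 e^{-t} * int_0^t e^s sin(6s) ds
t = np.linspace(0, 5, 200)
x1_closed = 1.0 * np.exp(-t) + 5 * np.exp(-t) * (
    np.exp(t) * (np.sin(6 * t) - 6 * np.cos(6 * t)) + 6) / 37
print(np.max(np.abs(sol.sol(t)[0] - x1_closed)))   # should be close to zero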
Distributive modulo? | Your notation with $\pmod M$ is not quite appropriate. You normally use it in sentences with the equivalence operator
$$8\equiv 5\pmod3.$$
When used as an operator, the usual syntax holds
$$8\bmod3=5\bmod3=2.$$
This said, your "distributivity rule" doesn't hold because
$$(A+B)\bmod M\ne A\bmod M+B\bmod M$$ in the cases where the RHS exceeds $M-1$. The rule that holds is
$$(A+B)\bmod M=(A\bmod M+B\bmod M)\bmod M$$ which you can also write
$$(A+B)\bmod M\equiv A\bmod M+B\bmod M\pmod M.$$
So, yes, the distributivity law holds "modulo $M$". |
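A small Python illustration of the failure and of the corrected rule (my own example values):

A, B, M = 8, 9, 5
print((A + B) % M)            # 2
print(A % M + B % M)          # 7 -- exceeds M-1, so the naive rule fails here
print((A % M + B % M) % M)    # 2 -- reducing once more restores equality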
Hamiltonian path in a complement of a tree | Actually you can use induction. I'll give you a partial solution, but it is not hard to fill in the blanks.
Check for $n=4$ (convince yourself that the other cases can't happen).
Let us look at some leaf. If its parent has degree $\leq n-2$, then by removing this leaf we still have max degree $< n-2$, but now we have a tree on $n-1$ vertices, so apply the induction hypothesis to get a hamiltonian cycle $C$. You can get a hamiltonian cycle for the original graph by connecting the removed vertex to 2 vertices that follow each other on the cycle (convince yourself).
Otherwise, since the maximum degree is $< n-1$ we get $\Delta = n-2$, meaning that we have a vertex with $n-3$ leaves and one neighbour of degree 2. Now try to convince yourself (by drawing) that this graph's complement also contains a hamiltonian cycle.
SVD with non invertible matrices | Regarding question 1: you should write $x = V \bar \alpha$ rather than $x = \bar \alpha V$; note that $x$ must be a column vector. Also, we have no reason to assume that $U$ and $V$ have zero-rows and zero-columns.
Regarding the problem itself, here is a much simpler proof. If $x = \sum_{i=1}^r \alpha_i v_i$, then we can write
$$
\begin{align}
Ax & =
\left(\sum_{i=1}^r \sigma_i u_iv_i^T \right)\left(\sum_{j=1}^r \alpha_j v_j \right)
\\ & = \sum_{i=1}^r \sum_{j=1}^r \alpha_j\sigma_i \cdot u_i(v_i^Tv_j)
\\ & = \sum_{i=1}^r \alpha_i\sigma_i \cdot u_i,
\end{align}
$$
and similarly
$$
\begin{align}
BAx & = B(Ax) = \left(\sum_{i=1}^r \frac 1{\sigma_i} v_iu_i^T \right)\left(\sum_{j=1}^r \alpha_j \sigma_j u_j \right)
\\ & =
\sum_{i=1}^r \sum_{j=1}^r \frac{\alpha_j \sigma_j}{\sigma_i} v_i(u_i^Tu_j)
\\ & = \sum_{i=1}^r \frac{\alpha_i \sigma_i}{\sigma_i} v_i = x,
\end{align}
$$
which was what we wanted.
If you prefer to work with matrices, you could also have used $x = V \bar \alpha$ and $B = V \Sigma^+ U^T$ to find that
$$
\begin{align}
BAx & = (V \Sigma^+ U^T)(U\Sigma V^T)(V\bar \alpha)
\\ & = V \Sigma^+ (U^TU)\Sigma (V^TV)\bar \alpha
\\ & = V (\Sigma^+ \Sigma) \bar \alpha
\\ & = V \bar \alpha = x.
\end{align}
$$
However, the fact that $U^TU, V^TV$ are identity matrices and that $(\Sigma^+ \Sigma) \bar \alpha = \bar \alpha$ should be justified. |
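A quick numpy sanity check of the conclusion $BAx=x$ for $x$ in the span of $v_1,\dots,v_r$ (my own sketch; the random $5\times 4$ rank-$2$ matrix is just an example, and the $B$ built below is exactly the Moore-Penrose pseudoinverse of $A$):

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))   # rank r = 2

U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))
B = Vt[:r].T @ np.diag(1.0 / s[:r]) @ U[:, :r].T                # B = V Sigma^+ U^T

x = Vt[:r].T @ rng.standard_normal(r)    # x in the row space, i.e. span of v_1..v_r
print(np.allclose(B @ A @ x, x))         # expected: True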
Sangaku - Find diameter of congruent circles in a $9$-$12$-$15$ right triangle | From the figure below, $\triangle AJI \cong \triangle KJC$. So $9+4z = 12-3z$, and $z = \frac 37$.
So one of the circles is the inscribed circle of the triangle which have side lengths $\frac{75}{7}, \frac{60}{7}$ and $\frac{45}{7}$, so the radius is
$$
\frac 12 \left(\frac{45}{7} + \frac{60}{7} - \frac{75}{7}\right) = \frac{15}{7}.
$$ |
Explain this notation: $\overline{g(x)}$ | It is complex conjugate.
If $z = a+ib$, then $\overline{z} = a-ib.$ |
Probability that certain drawn pieces of paper having the same colour are in the same group | Let's say Arnold and Berhard were the two people who put their names on a green piece of paper. Arnold is a natural leader, and will take control of whatever team he's selected into. It will therefore be called the "A-team". Now, Arnold has two teammates, and there are $9$ people who are not on the A-team. So there are $11$ places for Bernhard, and $2$ of them are on the A-team. Thus, the probability that Bernhard joins the A-team along with Arnold is $\frac{2}{11}$. |
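If you want to convince yourself numerically, here is a small Monte Carlo sketch (my own; it assumes the setup is $12$ people split uniformly into $4$ teams of $3$, with persons $0$ and $1$ playing the roles of Arnold and Bernhard):

import random

trials = 200_000
same = 0
for _ in range(trials):
    people = list(range(12))
    random.shuffle(people)
    teams = [set(people[i:i + 3]) for i in range(0, 12, 3)]
    same += any({0, 1} <= team for team in teams)
print(same / trials, 2 / 11)   # both should be about 0.1818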
$f:(1,+\infty) \times \mathbb{R} \to \mathbb{R}:f(x,y)=(\ln(x))^{y}$ is Borel measurable. | My first thought was correct: because the function is continuous, that is already enough to see that it is a Borel measurable function.
Can Spectra be described as abelian group objects in the category of Spaces? (in some appropriate $\infty$-sense) | Here is a general fact (due to a bunch of people in the late 60s, early 70s):
A connected space is (weakly equivalent to) an infinite loop space if
and only if it admits an action of an $E_\infty$-operad.
Let me try to unpack this statement. An $E_\infty$-operad $\mathcal{O}$, first of all, is an operad all of whose spaces $\mathcal{O}(n)$ are contractible.
(Sometimes one also wants to require that the action of the symmetric group on $\mathcal{O}(n)$ be free; this is important if you want the theory of $E_\infty$-algebras to be homotopy invariant. For instance, if you take your $E_\infty$-operad to consist of a point in each dimension, then sure, an algebra over that operad -- that is, a topological abelian group -- is an infinite loop space -- but those algebras tend not to be interesting, and certainly don't model all infinite loop spaces. Note that topological abelian groups are always weakly equivalent to infinite products of Eilenberg-MacLane spaces.)
Now being an algebra over an $E_\infty$-operad is a way of saying that your space is as close to a commutative monoid as possible. That is, there's a multiplication law (pick any point in $\mathcal{O}(2)$ to get a map $m: X^2 \to X$), it's homotopy associative (that's because the two $m(m(\cdot, \cdot), \cdot), m(\cdot, m(\cdot, \cdot))$ are both 3-ary operations coming from $\mathcal{O}(3)$, which is contractible). Moreover, there are higher coherence homotopies (infinitely many) which are conveniently packaged in the operad: that's one of the things operads do efficiently!
One example of a coherence condition is the following. So we know that $m(x, m(y, m(z, w)))$ and $m(m(m(x, y), z), w)$ are both canonically homotopic (as maps from $X^4 \to X$). But there are two different ways we could make the homotopy go. We want a coherence homotopy between those two homotopies. This is the analog of the MacLane coherence axioms on a monoidal category; you want the various iterated identifications one can make between iterated multiplication laws to be all homotopic.
I'll also mention a weaker (and easier) result:
A connected space is (weakly equivalent to) a loop space if and only if it admits an action of an $A_\infty$-operad.
An algebra over $A_\infty$-operad is something which is supposed to be as close to associative as possible. The standard example is the little intervals operad. If you take $\mathcal{O}(n)$ to consist of the space of imbeddings of $n$ intervals in the interval, then that acts on $\Omega X$ for any $X$. (How? Use these embeddings to compose a bunch of loops.) The point is that being an $A_\infty$-space, rather than simply a homotopy associative H space, is the data you need to construct a classifying space. (The whole story essentially begins with the theory of classifying spaces; it's what shows you that any topological group $G$ is the loop space on $BG$.) You can do this either by first strictifying your $A_\infty$-space into an actual topological monoid (yes, you can do this, essentially because the associatve operad is still "reasonable" insofar as it is acted on freely by the symmetric group; Berger and Moerdijk's paper on homotopy theory for operads), and then take the usual classifying space. Or you can do it directly, e.g. as Segal does it in the paper I mention below.
The result for $E_\infty$-operads and infinite loop spaces is supposed to be more or less the following: take iterated classifying spaces. I don't think that's how May does it (though I don't understand May's construction that well at the moment), but apparently Boardman and Vogt prove it that way.
A very intuitive and fun paper on this sort of thing is Segal's "Categories and cohomology theories." Segal introduces his own form of delooping machinery, which in fact implies the result about $A_\infty$-spaces that I described above.
Finally, let me say something about exactly how
$E_\infty$ spaces are like abelian groups. There is a general theory of algebras and commutative algebras in a monoidal $\infty$-category, due to Lurie (developed in DAG II and III, now in "Higher Algebra"). The definition works out so that "commutative algebra" really means "homotopy commutative algebra up to infinitely many higher homotopies" (as it always does in $\infty$-land). So an associative algebra object in spaces is an $A_\infty$-space, and a commutative algebra object is an $E_\infty$-space. I think one of the motivations of higher category theory is to say efficiently what "up to coherent homotopy" means. For instance, you can think of an $\infty$-category as a "topological category where multiplication is associative up to coherent homotopy." |
Contradiction in function of $e^{\frac {-1}{x^2}}$ | No contradiction. The function is indeed decreasing for $x < 0$.
Why solve polynomial equations? | One important consequence of being able to explicitly solve polynomial equations is that it permits great simplifications by linearizing what would otherwise be much more complicated nonlinear phenomena. The ability to factor polynomials completely into linear factors over $\mathbb C$ enables widespread linearization simplifications of diverse problems. An example familiar to any calculus student is the fact that integration of rational functions is much simpler over $\mathbb C$ (vs. $\mathbb R$) since partial fraction decompositions involve at most linear (vs. quadratic) polynomials in the denominator. Analogously, one may reduce higher-order constant coefficient differential and difference equations (i.e. recurrences) to linear (first-order) equations by factoring them as linear operators over $\mathbb C$ (i.e. "operator algebra").
More generally, such simplification by linearization was at the heart of the development of abstract algebra. Namely, Dedekind, by abstracting out the essential linear structures (ideals and modules) in number theory, greatly simplified the prior nonlinear Gaussian theory (based on quadratic forms). This enabled him to exploit to the hilt the power of linear algebra. Examples abound of the revolutionary breakthroughs that this brought to number theory and algebra - e.g. it provided the methods needed to generalize the law of quadratic reciprocity to higher-order reciprocity laws - a longstanding problem that motivated much of the early work in number theory and algebra. |
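To make the linearization point concrete, here is a tiny numerical sketch (my own example): over $\mathbb R$ the polynomial $x^4+1$ has no linear factors at all, but over $\mathbb C$ it splits completely into linear factors, which is what lets partial fractions get away with linear denominators.

import numpy as np

roots = np.roots([1, 0, 0, 0, 1])      # the four complex roots of x^4 + 1
print(roots)
# reassembling the monic polynomial from its roots recovers the original coefficients
print(np.poly(roots).real.round(10))   # expected: [1. 0. 0. 0. 1.]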
A bound on the number of zeroes of a polynomial in a given set. | Induction: Consider $P_i$ for $i=t_n$. It can be zero on at most $(D-t_n)N^{n-2}$ $(n-1)$-tuples, which give at most $(D-t_n)N^{n-1}$ roots for $P$; if it is non-zero, the number of roots of $P$ is at most $t_nN^{n-1}$.
Proving $K$ is a group | None of them. Remember the group operation is composition of functions, not pointwise addition.
You need to check $(L_{a,b}\circ L_{c,d})\circ L_{e,f}=L_{a,b}\circ (L_{c,d}\circ L_{e,f})$ as functions of $x$. |
Is $VV^T + D$ a submanifold? | What a coincidence. Someone asked a closely related (but different) question just two days ago.
Let $k\le n$ be a positive integer. Note that the five sets
\begin{align*}
\mathcal{P}&=\{P\in M_n(\mathbb{R}): P \text{ is positive definite}\},\\
\mathcal{S}_1&=\{D+VV^T: D\in\mathcal{P},\, V\in M_n(\mathbb{R}),\, \operatorname{rank} V\le k\},\\
\mathcal{S}_2&=\{D+VV^T: D\in\mathcal{P},\, V\in M_n(\mathbb{R}),\, \operatorname{rank} V=k\},\\
\mathcal{S}_3&=\{D+VV^T: D\in\mathcal{P},\, V\in M_{n,k}(\mathbb{R}),\, \operatorname{rank} V\le k\},\\
\mathcal{S}_4&=\{D+VV^T: D\in\mathcal{P},\, V\in M_{n,k}(\mathbb{R}),\, \operatorname{rank} V=k\},\\
\end{align*}
are identical to each other. So, whatever properties does $\mathcal{P}$ possess, those $\mathcal{S}_j$s possess the same properties too. |
Suppose $\theta \neq \frac pq * \pi$. Show $\{e^{in\theta} : n \in \mathbb N\}$ is dense in $S^1=\{x + iy: x^2 + y^2 = 1\} \subseteq \mathbb C$ | It's so simple: $$\exp(jx)=\cos(x)+j\sin(x).$$ Since $$\sin^2(x)+\cos^2(x)=1,$$ this complex number always lies on the circumference of a circle with radius $1$.
The language that contains no proper prefixes of all words of a regular language is regular | The condition that no proper prefix is in $L$ means that the input should be rejected if you encounter an accepting state before the word is completely read. So you could use a FSM for $L$ with the modification that from an accepting state all transitions are redirected to a non-accepting error state.
Edit: Of course, one has to assume w.l.o.g. that the FSM that recognizes $L$ is deterministic and has no $\lambda$-transitions. |
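Here is a small sketch of that construction in Python (my own illustration, not from the answer; the representation of the DFA as a state set, an alphabet, and a transition dict keyed by (state, symbol) is an arbitrary choice, and the DFA is assumed complete):

def no_proper_prefix_dfa(states, alphabet, delta, start, accepting):
    # Redirect every transition leaving an accepting state to a fresh rejecting
    # sink, so any word having a proper prefix in L is rejected from then on.
    sink = "SINK"                      # assumed not to clash with existing state names
    new_delta = {}
    for q in states:
        for a in alphabet:
            new_delta[(q, a)] = sink if q in accepting else delta[(q, a)]
    for a in alphabet:
        new_delta[(sink, a)] = sink
    return states | {sink}, alphabet, new_delta, start, accepting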
Determinant of adjoint representation | My question is answered in Knapp's book Lie Groups Beyond an Introduction on page 472 (Integration, Application to Reductive Lie Groups). The answer is $2\rho\log(a)$, where $\rho$ is the half sum of positive roots (counted with multiplicities).
How to solve linear equation with variable constraints? | Each coordinate will give a constraint on $t$. So we have
$$x_{min} \leq x + at \leq x_{max}$$
so this requires
$$ \frac{x_{min}-x}{a} \leq t \leq \frac{x_{max}-x}{a}$$
and similarly
$$ \frac{y_{min}-y}{b} \leq t \leq \frac{y_{max}-y}{b}$$
$$ \frac{z_{min}-z}{c} \leq t \leq \frac{z_{max}-z}{c}$$
and there's a solution if all three intervals have a non-empty intersection. (Note that the displayed bounds assume $a,b,c>0$; if a component is negative the corresponding bounds swap, and if it is zero the constraint holds either for every $t$ or for no $t$.)
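A small Python sketch of this computation (my own; the function and argument names are made up, a negative direction component swaps its two bounds, and a zero component degenerates into a plain membership test):

def t_interval(p, d, lo, hi):
    # intersect { t : lo_i <= p_i + t*d_i <= hi_i } over all coordinates
    tmin, tmax = float("-inf"), float("inf")
    for pi, di, lo_i, hi_i in zip(p, d, lo, hi):
        if di == 0:
            if not (lo_i <= pi <= hi_i):
                return None              # this coordinate can never satisfy its bounds
            continue
        t0, t1 = (lo_i - pi) / di, (hi_i - pi) / di
        if t0 > t1:                      # negative component: the inequality flips
            t0, t1 = t1, t0
        tmin, tmax = max(tmin, t0), min(tmax, t1)
    return (tmin, tmax) if tmin <= tmax else None

print(t_interval((0, 0, 0), (1, 2, -1), (-1, -1, -1), (1, 1, 1)))   # (-0.5, 0.5)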
If $f’(x)$ is not $0$ for all $x$, then either $f’(x)>0$ for all $x$ or $f’(x)<0$ for all $x$. | The derivative function has the Intermediate Value Property due to Darboux's theorem. Therefore, if it takes positive and negative values it takes all values in between, in particular zero. |
Prove $\frac{1}{a^3+b^3+abc}+\frac{1}{a^3+c^3+abc}+\frac{1}{b^3+c^3+abc} \leq \frac{1}{abc}$ | We have $$\sum \frac{1}{a^3+b^3+abc} \le \sum \frac{1}{a^2b+ab^2+abc} = \frac{1}{a+b+c} \sum \frac{1}{ab} = \frac{1}{abc}$$
Note that $$a^3+b^3 \ge a^2b+ab^2 \iff (a-b)^2(a+b) \ge 0$$ |
Chess board Problems | Let us calculate the number of ways in which a person can go from A to X.
In going from A to X a person takes 3 horizontal steps (H denotes a horizontal step) and 3 vertical steps (V denotes a vertical step), i.e. he moves in some arrangement of
VVVHHH.
We know that the number of ways of arranging a word with 3 V's and 3 H's is 6!/(3! × 3!) = 20 ways.
Now, as per the given question, some of the in-between roads are under construction and cannot be taken. So there are only 2 ways to go from X to Y, i.e. X -> P -> Y or X -> Q -> Y.
In going from Y to B a person takes 2 horizontal steps (HH) and 2 vertical
steps (VV), i.e. he moves in some arrangement of VVHH.
We know that the number of ways of arranging a word with 2 V's and 2 H's is 4!/(2! × 2!) = 6 ways.
Thus, the total number of ways in going from A to B = 20 × 2 × 6 = 240. |
Does the category $\mathsf{DL}$ of bounded distributive lattices have (filtered) colimits? | The below assumes that your morphisms of bounded distributive lattices are maps preserving meets, joins, top, and bottom.
Your adjunction, like any free-forgetful adjunction between categories of models of an algebraic theory, is monadic. In particular, it creates reflexive coequalizers. This tells us, essentially, that we can construct unique bounded distributive lattice (bdl) structures on the quotient set of a bdl by a congruence: an equivalence relation closed under the lattice operations. Now, the usual construction of colimits from coproducts and coequalizers actually uses only reflexive coequalizers. Thus to give all colimits of bdls it suffices to give their coproducts. And this is easy, in terms of generators and relations: a presentation of the coproduct of a family of bdls is just the disjoint union of presentations of each bdl in the family. Of course, as with free products of groups, the actual elements of such a coproduct can be tricky to get a handle on. But it certainly exists!
Filtered colimits are even easier: the forgetful functor, again, creates them. This isn't a property of general monads, but of those corresponding to algebraic theories in particular. The point is that the filtered colimits of the underlying sets of some filtered family $L_i$ of bdls has, again, a unique bdl structure making the inclusion maps into homomorphisms. It's given simply by $[a_i]\vee [b_i]=[a_i\vee b_i]$, $0=[0_i]$ etc. The filteredness serves to show that these formulae define all possible meets and joins in a well defined way and that the resulting operations are distributive.
All of the above holds for the category of models of any finitary algebraic theory, which covers most familiar categories of algebra with exceptions like fields (axioms can only be equations), posets (there are only operations, no relations), and complete lattices (operations must be of finite arity.) |
How do you show that sum of $1$ to $(n-1)$ is divisible by $n$? | You might know that for any $ n\ge 1$,
$$S_n=0+1+2+3+...+(n-1)=\frac{(n-1)n}{2}$$
hence $$ n|S_n \iff \frac{n-1}{2}\in \Bbb N$$
$$\iff n \text{ is odd}$$
this is not the case for $ n=4$. |
Complex Line Integral(Meromorphic) | We can use the Cauchy integral formula and residue calculus.
Partial fractions can be used, but will probably not be necessary. I will give you an example that will help you quickly solve this.
Ex: $\frac{1}{2 \pi i}\int _{|z|=3} \frac{1}{(z-1)(z-4)} dz $
Only one of the singularities fall within our contour so we can thus integrate instead:
$\frac{1}{2 \pi i}\int_{|z|=3} \frac{\frac{1}{z-4}}{z-1} dz = f(1) = \frac{-1}{3}$ by Cauchy Integral Formula. Here we take $f(z)=\frac{1}{z-4}$.
$\textbf{Question for OP}$: Does this make sense? Here we have a simple pole, order 1.
$\textbf{Additional}$: Now in the case where we are integrating over a contour that encloses both of the singularities, or if the function has more, we can use partial fractions to make our integral simpler and apply the Cauchy integral formula however many times are needed, evaluating $f(z_0)$ where $z_0$ runs over the singularities of the function. This can be very tedious, but once one starts to learn about residue calculus, the idea of having a function with a good number of singularities becomes very much like child's play. Check out residue calculus.
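A quick numerical check of the example above (my own sketch, discretising the circle $|z|=3$):

import numpy as np

theta = np.linspace(0, 2 * np.pi, 100_000, endpoint=False)
z = 3 * np.exp(1j * theta)
dz = 3j * np.exp(1j * theta)                 # z'(theta)
integral = np.mean(1 / ((z - 1) * (z - 4)) * dz) * 2 * np.pi
print(integral / (2j * np.pi))               # expected: about -1/3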
Define the linear transformation T: P2 -> R2 by T(p) = [p(0) p(0)] Find a basis for the kernel of T. | Your guess is that the kernel is $\left[\begin{matrix}a\\a\end{matrix}\right]$, but that can't be right, because it is not an element of $P_2$.
The kernel is all the polynomials $p(x)$ of degree $\leq 2$ such that $p(0)=0$, that is, polynomials of the form $bx + ax^2$, for any $a,b$ in the real numbers. So we need to find a set of polynomials such that multiplying by scalars and adding can get us the whole set. We can take the basis $\{x, x^2\}$.
Note that there are two elements in the basis, as there should be, since the $$\dim(\ker)=\dim(\text{domain})-\dim(\text{image})=3-1=2,$$
since the image $\left\{\left[\begin{matrix}a\\a\end{matrix}\right] : a\in \mathbb{R}\right\}$ is of dimension 1. |