title | upvoted_answer
---|---
Halving a tiny angle by doubling a side of a triangle. | According to the small angle approximation, the tangent of an angle is approximately equal to the angle itself.
The tangent of the old angle ABC is the quotient of AC and AB. Denote it x.
The tangent of the new angle ABC is the quotient of AC and twice AB. Denote it y.
y = x/2
Both angles ABC are very small, so x approximately equals the old angle ABC and y approximately equals the new angle ABC.
So, the new angle ABC is half the old one. |
How to find the greatest amount of savings from two people when it is related with a division? | Note that the calculations which agree with the answer to this question are
$$105=3\times45-30$$
$$45=30+15$$
$$30=2\times 15$$
We can work this out as follows, where $a$ is the larger of the two amounts $a,b$.
$$a=3\times b-R$$
$$b=R+S$$
$$R=2\times S$$
Solving in terms of $S$ gives $$R=2S,b=3S,a=7S.$$
The LCM of $7S$ and $3S$ is $315$ and so $$21S=315.$$
Therefore $S=15$ and $a=105$. |
Limit of a sequence - the sign of infinity | Why not use long division to get $$\frac{n^7-2n^4-1}{n^4-3n^6+7}= \frac{n^7-2n^4-1}{-3n^6+n^4+7}=-\frac{n^7-2n^4-1}{3n^6-n^4-7}=-\frac{n}{3}-\frac{1}{9 n}+\frac{2}{3 n^2}+\cdots$$ |
How would I found the compound for the following on each account with compound interest? | For shorter compounding periods, you compute each period with the applicable interest. For monthly compounding you get $\frac {2.5\%}{12}$ interest each month, so at the end you would have $4700(1+\frac {2.5\%}{12})^{12}$. Compounding means you earn interest each period on the total in the account including interest earned previously. |
Characterizations of real concave functions | One very important characterization is the dual characterization of concave (and convex) functions: every concave function $f$ is the pointwise infimum of all affine functions $\varphi$ such that $\varphi\geq f$.
You might also want to look up the Fenchel conjugate. |
How to find the limit of $a_n = (3^n - 2^n)$ | HINT
Note that
$$3^n-2^n = 3^n\left(1-\left(\frac{2}{3}\right)^n\right)$$
If you want to proceed by induction, it suffices to prove that
$$a_n =3^n-2^n \ge n$$ |
The product metric space of two compact metric spaces is compact | Here is the sketch
Let $z_n=(x_n,y_n)\in X\times Y$ be any sequence, with $x_n\in X$ and $y_n\in Y$.
Since $X$ is compact, $(x_n)$ has a convergent subsequence $x_{n_k}\to x\in X$; since $Y$ is compact, the sequence $(y_{n_k})$ has a further subsequence $y_{n_{k_j}}\to y\in Y$.
Then $z_{n_{k_j}}=(x_{n_{k_j}},y_{n_{k_j}})\to (x,y)\in X\times Y$. |
Switching order of supremum for doubly indexed sequence? | Yes, it means exactly what you wrote.
The reason that
$$\sup_i\,\sup_j\ \alpha_{ij}=\sup_j\,\sup_i\ \alpha_{ij}$$
is that they're both equal to
$$\sup_{i,j}\,\alpha_{ij}\;.$$
For assume that
$$\sup_i\,\sup_j\ \alpha_{ij}\lt\sup_{i,j}\,\alpha_{ij}\;.$$
Then $\sup_i\,\sup_j\ \alpha_{ij}$ is not an upper bound for the $\alpha_{ij}$ (since there is no upper bound less than the supremum). Thus there is some $\alpha_{kl}$ greater than $\sup_i\,\sup_j\ \alpha_{ij}$. But this $\alpha_{kl}$ would make $\sup_j\alpha_{kj}$ be at least $\alpha_{kl}$, and thus $\sup_i\,\sup_j\alpha_{ij}$ would also be at least $\alpha_{kl}$, a contradiction.
Similarly, if
$$\sup_i\,\sup_j\ \alpha_{ij}\gt\sup_{i,j}\,\alpha_{ij}\;,$$
then some $\alpha_{kl}$ would have to be greater than $\sup_{i,j}\alpha_{ij}$, which is impossible.
Note that this only works because both operations are suprema. If you take, say, the infimum with respect to $i$ and the supremum with respect to $j$, then it does matter in which order you perform those operations. |
Integrating a 0-form | You are correct that this integral is a special case. In Spivak's Calculus on Manifolds, he writes:
A special definition must be made for $k$ = $0$. A $0$-form $\omega$ is a function; if $c : \{ 0 \} \to A$ is a singular $0$-cube$^\ast$ in $A$ we define $$\int_c \omega = \omega(c(0))$$
Since the boundary of the interval $[a,b]$ (oriented from $a$ to $b$) is $b$ (positively oriented) and $a$ (negatively oriented), Stokes' theorem tells us that:
$$ \int_{[a,b]} d\omega = \int_b \omega - \int_a \omega = \omega(b) - \omega(a) $$
as is expected from the Fundamental Theorem of Calculus.
$^\ast$ by $k$-cube, Spivak means "a continuous function $c : [0,1]^k \to A \subseteq \mathbb{R}^n$." |
to prove that $F(c)=3c^2$ | Otherwise, by the IVT, either $F(x)\gt3x^2$ for every $x$ in $(0,1]$, or $F(x)\lt3x^2$ for every $x$ in $(0,1]$. Since $\int\limits_0^13x^2\mathrm dx=1$, in both cases, $\int\limits_0^1F(x)\mathrm dx\ne1$, which is absurd. |
Why Dirac's Delta is not an ordinary function? | Intuitively, because there are bump functions $\varphi$ with $\varphi(0) = 1$ which are zero except on an arbitrarily small interval $[-\epsilon,\epsilon]$ around $0$. That, however, is incompatible with the requirement that for some fixed $u$, $$
\int_{\mathbb{R}^k} u(x)\varphi(x) \,dx = \varphi(0) = 1
$$
essentially because you can make that integral arbitrarily small by picking a suitably narrow $\varphi$.
For a formal proof, let $(\varphi_n)_{n\in\mathbb{N}}$ be a sequence of bump functions with $\varphi_n(0) = 1$, $\|\varphi_n\|_\infty = 1$ and $\textrm{supp}\, \varphi_n = [-\frac{1}{n},\frac{1}{n}]^k$ (i.e., $\varphi_n$ is zero outside the cube $[-\frac{1}{n},\frac{1}{n}]^k$).
You then on the one hand get $$
\lim_{n\to\infty} \int_{\mathbb{R}^k} u \varphi_n
= \lim_{n\to\infty} \langle I_u, \varphi_n \rangle
= \lim_{n\to\infty} \varphi_n(0) = 1 \text{.}
$$
But on the other hand, since obviously $u \varphi_n \to 0 $ pointwise as $n \to \infty$, you get by dominated convergence (note that $|u(x)\varphi_n(x)| \leq |u(x)|\cdot\|\varphi_n\|_\infty \leq |u(x)|$ for all $x \in [-1,1]^k$ and that $|u|\in L^1_{[-1,1]^k} \subset L^1_{\textrm{loc}}$) that $$
\lim_{n\to\infty} \int_{\mathbb{R}^k} u \varphi_n
= \lim_{n\to\infty} \int_{[-1,1]^k} u \varphi_n
= 0 \text{.}
$$
This obviously is a contradiction. |
Does $\sum\limits_{k=1}^n a_k^2$ imply $\sum\limits_{l=1}^k a_k \in o(\sqrt{n})$? | You seem to have in mind sequences $a_n$ determined by smooth functions $f(x)$ evaluated at integers $n$, which I agree is a good mental picture; unfortunately, series don't have to be all that well behaved in general. Take for example
$$
a_n = \begin{cases}
n^{-1/4}, &\text{if $n$ is a power of $2$}, \\
0, &\text{otherwise.}
\end{cases}
$$
Then $\sum_{n=1}^\infty a_n^2$ converges (the nonzero terms form a geometric series), but $a_n^2$ is not $o(1/n)$.
A general moral might be: if the only information you have is about an aggregate statistic of a sequence (like an $O$-estimate for its sum), then you're not going to be able to deduce much information about the individual terms (like an $O$-estimate for each $a_n$) without additional information about the structure of the sequence. |
Find the inverse Laplace transform of the function | $$\frac{1}{4}e^{t}\sinh{(t)} = \frac{1}{4}e^{t}\left(\frac{e^t - e^{-t}}{2}\right)= \frac{e^{2t}}{8} -\frac{1}{8} \\$$ So both the book and you are correct. |
Point A is picked randomly in a circle with a radius of 1, and center O. What is the variance of length OA? | The probability to pick a point inside the smaller circle $\|z\|=r$ is clearly $\frac{r^2}{R^2}$, hence the probability density function of $X=\overline{OA}$ is supported on $[0,1]$ and simply given by $f(x)=2x$. It follows that:
$$ \mu=\mathbb{E}[X]=\int_{0}^{1}2x^2\,dx = \frac{2}{3}\tag{1} $$
and:
$$ \text{Var}[X]=\mathbb{E}[(X-\mu)^2] = \int_{0}^{1}2x\left(x-\frac{2}{3}\right)^2\,dx = \color{red}{\frac{1}{18}}.\tag{2}$$ |
Iteration of cubic : could factorization be not unique? | No. The nine roots of the equation
$$
\big(x^3-1\big)^3=x^3
$$
are the three roots $\ r_1, r_2, r_3\ $ of the equation $\ x^3=x+1\ $ (only one of which is real) multiplied by the cube roots of unity, $\ 1, \frac{-1+i\sqrt{3}}{2},\frac{-1-i\sqrt{3}}{2}\ $. Of those nine roots, $\ r_1,r_2,r_3\ $ are the only ones that satisfy the original equation, $\ x^3=x+1\ $. |
Given k % y, how can I adjust the dividend (k) to preserve the modulo when the divisor (y) is incremented by one? | You have $k=my+r$ for some $m,r$ and $k\%y=r=k-my$. Now if you want $k'\%(y+1)=r$, you need $k'=m'(y+1)+r$. One way to assure this is to have $m'=m, k'=k+m$. Unfortunately, just having $k\%y$ you don't know $m$. Another way to assure this is to let $k=r$ but it seems you want $k'$ to depend upon $k$ somehow. Yet a third way is to set $k'=k(y+1)+r$ Do any of these meet your needs? |
Proving every tree has at most one perfect matching | First, why the claim is true. Take any vertex. If it is matched to the same vertex in both perfect matchings, then it has degree zero in the symmetric difference. Otherwise it has degree two.
Second, after removing all isolated vertices of the symmetric difference, every remaining vertex has degree two. Take any such vertex and follow its two edges. What you get is a growing path that eventually closes into a cycle, since the graph is finite.
Since trees have no cycles, this implies that any two perfect matchings are equal, by considering their symmetric difference.
A different proof is by induction. The idea is that every leaf must be matched to its unique neighbor. |
I am not getting roots of $x^4-4x^3+3x^2+2x-30=0$ | $$
y^4-3y^2-28 = (y^2-7)(y^2 + 4)
$$
And $x=y+1$, so roots are $1\pm\sqrt7$, $1\pm 2i$. |
123 persons in a cafe, and pigeons and boxes | Is it possible to find 23 people whose total age is at most 713? Suppose the 23 youngest people have total age greater than 713. Then the total age of all 123 people is greater than $\frac{123\cdot 713}{23}=3813$, a contradiction. |
Why is there's a unique circle passing through a point? | The fact that the circle passes through $A=(1,0)$ and $B=(-1,0)$ is equivalent to the fact that its center $C$ (which is necessarily on the perpendicular bisector of $AB$) has coordinates $(0,a)$.
Therefore, the circle being the locus of points $M$ such that
$$CM^2=CA^2,$$
if we take $M=(x,y)$, this relationship is converted into the equation
$$x^2+(y-a)^2=1+a^2$$
or, after cancelling $a^2$ on both sides,
$$x^2+y^2-2ay-1=0\tag{1}$$
Saying that this circle passes through $(x_0,y_0)$ is equivalent to saying that:
$$x_0^2+y_0^2-2ay_0-1=0\tag{2}$$
One obtains a first degree equation in $a$, explaining that there exists a unique solution,
$$a=\dfrac{x_0^2+y_0^2-1}{2y_0}$$
(the equation you mention) with an exceptional case: no solution exists when the denominator $y_0=0$, i.e., when $M_0$ belongs to the $x$-axis, which is natural because no circle passes through three collinear points. |
Prove for all p > 1, $x * y = (p + 1)^{-2}$ | A naive approach.
From the first,
$y = 1-px$.
Substituting in the second,
$$1 = x+py = x+p(1-px) = x+p-p^2x = x(1-p^2)+p$$
or
$$x = \dfrac{1-p}{1-p^2}.$$
If $p \ne 1$,
$$x = \dfrac1{1+p}.$$
Then
$$y = 1-px = 1-\dfrac{p}{1+p} = \dfrac{1+p-p}{1+p} = \dfrac1{1+p} = x,$$
so
$$xy = \dfrac1{(1+p)^2}.$$ |
What is the integral of $\int\frac{dx}{\sqrt{x^3+a^3}}$? | As the other users said, this integral is unlikely to have an elementary form, however, it can be expressed in terms of the well known Gauss hypergeometric function, which can be easily evaluated by most CAS or even Wolfram Alpha.
First, let's consider the case $|x|<|a|$, then we can substitute:
$$x=at, \qquad |t|<1$$
$$\int\frac{dx}{\sqrt{x^3+a^3}}=a^{-1/2}\int\frac{dt}{\sqrt{1+t^3}}=a^{-1/2} \sum_{k=0}^\infty \binom{-1/2}{k} \int t^{3k} dt=$$
$$=a^{-1/2} \sum_{k=0}^\infty \binom{-1/2}{k} \frac{t^{3k+1}}{3k+1}=a^{-1/2} \Gamma \left( \frac{1}{2} \right) \sum_{k=0}^\infty \frac{1}{\Gamma \left(1/2-k \right) k!} \frac{t^{3k+1}}{3k+1}=$$
$$=\frac{1}{3}\sqrt{\frac{\pi}{a}}~t~\sum_{k=0}^\infty \frac{1}{\Gamma \left(1/2-k \right) k!} \frac{t^{3k}}{k+1/3}$$
To find the hypergeometric form of the series above, we consider the ratio of the successive terms:
$$\frac{c_{k+1}}{c_k}=\frac{\left(-1/2-k \right)(k+1/3)}{ (k+4/3)} \frac{t^3}{k+1}=\frac{\left(k+1/2 \right)(k+1/3)}{ (k+4/3)} \frac{-t^3}{k+1}$$
$$c_0=\frac{3}{\sqrt{\pi}}$$
Which, by definition makes the series:
$$\sum_{k=0}^\infty \frac{1}{\Gamma \left(1/2-k \right) k!} \frac{t^{3k}}{k+1/3}=\frac{3}{\sqrt{\pi}} {_2F_1} \left( \frac{1}{2}, \frac{1}{3}; \frac{4}{3}; -t^3 \right)$$
Which makes the integral:
$$\int\frac{dx}{\sqrt{x^3+a^3}}=\frac{t}{\sqrt{a}} {_2F_1} \left( \frac{1}{2}, \frac{1}{3}; \frac{4}{3}; -t^3 \right)=\frac{x}{\sqrt{a^3}} {_2F_1} \left( \frac{1}{2}, \frac{1}{3}; \frac{4}{3}; -\frac{x^3}{a^3} \right)$$
This is a correct answer for $|x|<|a|$, as can be checked by numerical experiments.
Mathematica, or other advanced software, can directly evaluate and plot the hypergeometric function, which makes this form more useful than the original integral.
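A quick numerical cross-check of the $|x|<|a|$ formula is also easy with Python's mpmath; in the sketch below the values $a=2$, $x=1$ are arbitrary illustrative choices.

```python
from mpmath import mp, mpf, hyp2f1, quad, sqrt

mp.dps = 30                      # working precision
a, x = mpf(2), mpf(1)            # arbitrary test values with |x| < |a|

# Definite integral from 0 to x of ds / sqrt(s^3 + a^3)
lhs = quad(lambda s: 1/sqrt(s**3 + a**3), [0, x])

# Hypergeometric closed form, which vanishes at x = 0
rhs = x/sqrt(a**3) * hyp2f1(mpf(1)/2, mpf(1)/3, mpf(4)/3, -(x/a)**3)

print(lhs, rhs)                  # the two values agree to working precision
```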
For $|x|>|a|$ we can use the same method of binomial expansion to get the hypergeometric form.
$$x=at, \qquad |t|>1$$
$$\int\frac{dx}{\sqrt{x^3+a^3}}=a^{-1/2}\int\frac{t^{-3/2} dt}{\sqrt{1+1/t^3}}=a^{-1/2} \sum_{k=0}^\infty \binom{-1/2}{k} \int t^{-3k-3/2} dt=$$
$$=-a^{-1/2} \sum_{k=0}^\infty \binom{-1/2}{k} \frac{t^{-3k-1/2}}{3k+1/2}$$
It's straightforward to continue in the same way and obtain another hypergeometric function. |
Algebraic manipulation with $\neq$ instead of $=$. | It's OK to do this if the action you take on both sides of the equation is invertible. In your example, adding $1$ is an invertible operation.
Adding a fixed number, subtracting a fixed number, multiplying by a nonzero number, and dividing by a nonzero number are all invertible actions you can take. So is raising to a power when that is an invertible action. |
How to integrate this function over this surface? | Integrating $y$ first and using the trig substitution $x = a\sin\theta$ leads to
$$
\int_{0}^{a} \frac{2\sqrt{a^{2} - x^{2}}\, dx}{2a + x}
= \int_{0}^{\pi/2} \frac{2a\cos^{2}\theta\, d\theta}{2 + \sin\theta}.
\tag{1}
$$
The tangent half-angle substitution $t = \tan\frac{\theta}{2}$ gives
$$
\cos\theta = \frac{1 - t^{2}}{1 + t^{2}},\qquad
\sin\theta = \frac{2t}{1 + t^{2}},\qquad
d\theta = \frac{2\, dt}{1 + t^{2}},
$$
upon which (1) becomes
$$
2a\int_{0}^{1} \frac{\left(\dfrac{1 - t^{2}}{1 + t^{2}}\right)^{2}}{2 + \dfrac{2t}{1 + t^{2}}} \cdot \frac{2\, dt}{1 + t^{2}}
= 2a\int \frac{(1 - t^{2})^{2}}{(1 + t^{2})^{2}(1 + t + t^{2})}\, dt.
\tag{2}
$$
Partial fractions is a bit laborious (six equations, six unknowns); for the record, the decomposition is
$$
\frac{(1 - t^{2})^{2}}{(1 + t^{2})^{2}(1 + t + t^{2})}
= \frac{A_{1}t + B_{1}}{(t^{2} + 1)^{2}} + \frac{A_{2}t + B_{2}}{t^{2} + 1} + \frac{A_{3}t + B_{3}}{t^{2} + t + 1},
$$
and the augmented matrix of the resulting linear system for $(A_{1}, B_{1}, A_{2}, B_{2}, A_{3}, B_{3})$ is
$$
\left[\begin{array}{@{}rrrrrr|r@{}}
0 & 0 & 1 & 0 & 1 & 0 & 0 \\
0 & 0 & 1 & 1 & 0 & 1 & 1 \\
0 & 1 & 0 & 1 & 0 & 1 & 1 \\
1 & 0 & 2 & 1 & 2 & 0 & 0 \\
1 & 1 & 1 & 2 & 0 & 2 & -2 \\
1 & 1 & 1 & 1 & 1 & 0 & 0 \\
\end{array}\right].
$$
The end result is easily checked to be
$$
\frac{(1 - t^{2})^{2}}{(1 + t^{2})^{2}(1 + t + t^{2})}
= -\frac{4t}{(t^{2} + 1)^{2}} + \frac{4}{t^{2} + 1} - \frac{3}{(t + \frac{1}{2})^{2} + \frac{3}{4}}.
$$
Consequently, (2) becomes
$$
2a\left[\frac{2}{t^{2} + 1} + 4\arctan t - 2\sqrt{3} \arctan\bigl(\tfrac{2}{\sqrt{3}}(t + \tfrac{1}{2})\bigr)\right]\bigg|_{0}^{1}
= 2a\left[-1 + \pi - \frac{\pi}{\sqrt{3}}\right].
$$ |
How to get the coordinates to create a triangle? | I think that there are infinitely many different combinations of points that could satisfy your criteria, but let's assume for the sake of this problem that we have a side length $s$ for our equilateral triangle and that we have our usual $xy$ coordinate system. We will also assume that the base of the triangle is parallel to the $x$-axis.
What we can do is try to find the height of our triangle, and then we will have enough information to solve this problem. If we drop a height from the top vertex to the base, we will split the base in half, so the bottom part of the triangle will have length $\frac{s}{2}$ and the hypotenuse will be $s$. Since this a right triangle, we can apply the Pythagorean Theorem, which yields:$$(\frac{s}{2})^2+h^2=s^2$$ Solving for $h$ yields the following: $$h=\frac{s\sqrt{3}}{2}$$
So, our points are $(x\pm\frac{s}{2},y-\frac{s\sqrt{3}}{2}).$
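For instance, a small Python helper in this spirit (a sketch which assumes, as above, that the given point $(x,y)$ is the top vertex and the base is parallel to the $x$-axis):

```python
import math

def equilateral_vertices(x, y, s):
    # Height of an equilateral triangle with side s, from the Pythagorean theorem above
    h = s * math.sqrt(3) / 2
    # Top vertex (x, y); the base vertices sit h below it, s/2 to either side
    return [(x, y), (x - s / 2, y - h), (x + s / 2, y - h)]

print(equilateral_vertices(0.0, 0.0, 2.0))   # [(0.0, 0.0), (-1.0, -1.732...), (1.0, -1.732...)]
```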
A visual solution for what is described above can be seen
here. |
Combinatoric Identities | With the hint I provided, the first identity goes this way :
\begin{align}
\binom{-x}{k} &= \frac{(-x)(-x-1)\dots(-x-k+1)}{k!} \\
&= \frac{(-1)^k(x)(x+1)\dots(x+k-1)}{k!} \\
&= (-1)^k\binom{x+k-1}{k}
\end{align}
Try to prove the second one by yourself. |
Definition of Total Derivative | Let $f\colon \mathbb R^n\to\mathbb R$ be a scalar field, for example the pressure assigned to each point of the Euclidean space.
Let $\gamma\colon \mathbb R\to \mathbb R^n$ be a curve – think about $\gamma(t)$ as the position of an airplane at a given time $t$.
We can compose these functions to get
$$f\circ \gamma\colon \mathbb R\to \mathbb R,$$
which assigns to a given moment $t$, the pressure $f(\gamma(t))$ measured around the airplane.
Then the "total derivative" at $t$ is nothing else as the ordinary derivative $(f\circ \gamma)'(t)$. The chain rule says that
$$(f\circ \gamma)'(t) = f'(\gamma(t)) \gamma'(t),$$
which is sometimes spelled as
$$\frac{df}{dt} = \sum_k \frac{\partial f}{\partial x^k} \frac{dx^k}{dt},$$
although I personally find this notation more confusing than the first one. |
Prove that $x^3 -2$ is irreducible over $\mathbb{Q}(\sqrt{5})$. | Hint
If
$$x^3 -2=m(x)q(x)
$$
with non-constant $m,q$,
then one of $m,q$ must be linear and hence must have a root in $\mathbb{Q}(\sqrt{5})$. This root is a root of $x^3-2$. |
Bounded non-constant holomorphic function on the complex plane minus the negative real axis | Hint Since $D$ is a simply connected, nonempty, proper, open subset of $\Bbb C$, it is biholomorphic to the unit disk. |
Find the prob. to win third prize in lottery game | To win third prize, you need to get two wrong numbers. There are ${45\choose 2}$ ways to pick two (unordered) wrong numbers. You also need to get three right numbers. There are ${5\choose 3}$ ways to get three right numbers. Hence the answer you seek is $$\frac{{45\choose 2}{5\choose 3}}{{50\choose 5}}$$ |
Good book about finite element methods in a multiphysics context for self study | So, while I started studying FEM, one of the best books I found was:
Numerical Solutions of PDE by Finite Element Method- Claes Johnson, Cambridge University Press.
It's a basic book and develops the theory pretty easily. You do not need to know Functional Analysis to understand this work.
For advanced FEM, I suggest
Mathematical Theory of Finite Element Methods- Brenner and Scott
This is a little involved and would expect the reader to know a little about Functional Analysis.
For implementation using C++, I suggest
Finite Elements: Theory and Algorithm- Sashikumar Ganesan and Lutz Tobiska, IISc-Cambridge press
This has some basics on how to apply FEM to the C++ environment.
There is a new book, I came to know about which has MATLAB implementation but I personally have not studied this book:
MATLAB Codes for Finite Element Analysis- Antonio Ferreira, Nicholas Fantuzzi, Springer.
For multiphysics problems, I would suggest finding books that are relevant to your field |
If we are given that $Y=3$, find the probability that $Y$ is the sum of 2 dice. | Let $A$ be the event that the first toss resulted in a $2$, and let $B$ be the event the second sum was $3$. We want $\Pr(A|B)$. Note that
$$\Pr(A|B)=\frac{\Pr(A\cap B)}{\Pr(B)}.$$
We need the two probabilities on the right, and start with the harder one, $\Pr(B)$.
The result on tossing the die or dice the second time can be $3$ in three ways: (i) We got a $1$ on the first die, and then a "sum" of $3$ the next time; (ii) We got a $2$ on the first toss, and a sum of $3$ using $2$ dice the next time; (iii) We got a $3$ on the first die, and then a sum of $3$ from $3$ dice.
We calculate the probabilities of these and add up. (i) The probability of this is the probability of a $1$ followed by a $1$. This is $\frac{1}{6}\cdot\frac{1}{6}$; (ii) With probability $\frac{1}{6}$ we got an initial toss of $2$. The probability that this is followed by a sum of $3$ is $\frac{2}{36}$, for a probability of $\frac{1}{6}\cdot\frac{2}{36}$; (iii) This is also straightforward.
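If you want to cross-check your final numbers afterwards, a brute-force enumeration with exact fractions is easy; this sketch assumes, as above, that the first die shows $k$ and then $k$ dice are rolled for the second sum.

```python
from fractions import Fraction
from itertools import product

p_B = Fraction(0)    # P(second sum equals 3)
p_AB = Fraction(0)   # P(first die shows 2 AND second sum equals 3)
for k in range(1, 7):
    for rolls in product(range(1, 7), repeat=k):
        p = Fraction(1, 6) * Fraction(1, 6) ** k
        if sum(rolls) == 3:
            p_B += p
            if k == 2:
                p_AB += p

print(p_B, p_AB, p_AB / p_B)   # 49/1296, 1/108, and the conditional probability 12/49
```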
Now find $\Pr(A\cap B)$ and finish the calculation. |
Solving $\sin (100^\circ-x) \sin 20^\circ =\sin (80^\circ-x)\sin 80^\circ$ | We need to solve
$$\sin100^{\circ}\sin20^{\circ}\cos{x}-\cos100^{\circ}\sin20^{\circ}\sin{x}=\sin^280^{\circ}\cos{x}-\cos80^{\circ}\sin80^{\circ}\sin{x}$$ or since
$$\cos80^{\circ}\sin80^{\circ}-\cos100^{\circ}\sin20^{\circ}\neq0,$$
$$\tan{x}=\frac{\sin^280^{\circ}-\sin100^{\circ}\sin20^{\circ}}{\cos80^{\circ}\sin80^{\circ}-\cos100^{\circ}\sin20^{\circ}}.$$
But $$\frac{\sin^280^{\circ}-\sin100^{\circ}\sin20^{\circ}}{\cos80^{\circ}\sin80^{\circ}-\cos100^{\circ}\sin20^{\circ}}=\frac{1-\cos160^{\circ}-\cos80^{\circ}+\cos120^{\circ}}{\sin160^{\circ}-\sin120^{\circ}+\sin80^{\circ}}=$$
$$=\frac{\sin30^{\circ}+\cos20^{\circ}-\cos80^{\circ}}{\sin80^{\circ}-2\sin20^{\circ}\cos40^{\circ}}=\frac{\sin30^{\circ}+\sin50^{\circ}}{4\sin20^{\circ}\cos20^{\circ}\cos40^{\circ}-2\sin20^{\circ}\cos40^{\circ}}=$$
$$=\frac{2\sin40^{\circ}\cos10^{\circ}}{4\sin20^{\circ}\cos20^{\circ}\cos40^{\circ}-2\sin20^{\circ}\cos40^{\circ}}=\frac{4\sin20^{\circ}\cos20^{\circ}\cos10^{\circ}}{4\sin20^{\circ}\cos20^{\circ}\cos40^{\circ}-2\sin20^{\circ}\cos40^{\circ}}=$$
$$=\frac{2\cos20^{\circ}\cos10^{\circ}}{2\cos20^{\circ}\cos40^{\circ}-\cos40^{\circ}}=\frac{2\cos20^{\circ}\cos10^{\circ}}{\cos60^{\circ}+\cos20^{\circ}-\cos40^{\circ}}=\frac{2\cos20^{\circ}\cos10^{\circ}}{\sin30^{\circ}+\sin10^{\circ}}=$$
$$=\frac{2\cos20^{\circ}\cos10^{\circ}}{2\sin20^{\circ}\cos10^{\circ}}=\cot20^{\circ}=\tan70^{\circ}.$$
Id est, $$x=70^{\circ}+180^{\circ}k,$$ where $k\in\mathbb Z$. |
Find probability that $X\sim Geom(p)$ is even | Your computation shows $P(even):P(odd)=1-p$. Use $P(odd)=1-P(even)$ to solve. |
Assuming $a_0 = 0$, $a_1 = 1$, and $ a_{n+2} = 4a_{n+1}+a_{n}$ for $n \geq 0$, prove that $\gcd(a_m,a_{m+1}) = \gcd(a_m,a_{m-1})$ | Rewrite the recurrence as
$$
a_{m+1} = 4a_m +a_{m-1}
$$
Now suppose that $d$ divides both terms on the right. Can you explain why it divides the left? |
How to Calculate $x$ in $5x=12+x$ | result = (3) how can i calculate with simple formula?
Rather than relying on a specific formula, it's safer to be able to do this step by step by gathering $x$ to the same side and dividing by its coefficient:
$$\begin{align}
5x&=12+x &&\text{(subtract $x$ from both sides)}\\
5x-x&=12 &&\text{(simplify the left-hand side)}\\
4x&=12 &&\text{(divide both sides by $4$)}\\
x&=\frac{12}{4}
\end{align}$$ |
Calculate $ \frac{\partial f}{\partial x} (x,y) $ | Let us call $E=E(t)$ a fixed antiderivative of $e^{-t^2}$. Then $$f(x,y)=E(y^2)-E(x^2).$$ Now, $$\frac{\partial f}{\partial x} = \frac{\partial}{\partial x} \left( E(y^2)-E(x^2) \right) = -2x E'(x^2)=-2x e^{-x^4}.$$ |
I want to show that this set is open in $\mathbb{R}^3$ | You have
$$f_r=(\cos\theta,\sin\theta,0),\qquad f_\theta=(-r\sin\theta,r\cos\theta,1)\ ,\qquad
f_r\times f_\theta=(\sin\theta,-\cos\theta, r)$$
(note the $r$ at the end of the line!). The unit normal is therefore given by
$$n(r,\theta)={1\over\sqrt{1+r^2}}(\sin\theta,-\cos\theta, r)\ .$$
A helicoidal shell of thickness $2\epsilon$ is then produced by
$$\Psi:\quad(r,\theta,t)\mapsto f(r,\theta)+ t\,n(r,\theta)\qquad\bigl(r\in{\mathbb R},\ \theta\in{\mathbb R},\ -\epsilon<t<\epsilon\bigr)\ .$$
In order to show that this shell is an open set in ${\mathbb R}^3$ you have to verify that the Jacobian $J_\Psi(r,\theta, t)$ is $\ne0$ at all parameter points $(r,\theta,t)$ with $|t|$ sufficiently small. |
There are 10 seats numbered 1 through 10. There are 5 boys and 4 girls. The girls must sit in the even numbered seats. | The number of ways to seat the girls is $4! \binom{5}{4}$ (5 available seats, select 4, permute 4); the number of ways of seating the 5 boys is $5! \binom{6}{5}$ (after seating the girls, there are 6 seats free; select 5 for the boys, permute). In all:
$\begin{align*}
4! \binom{5}{4} \cdot 5! \binom{6}{5}
&= 4! \frac{5!}{4! 1!} \cdot 5! \frac{6!}{5! 1!} \\
&= 5! \cdot 6! \\
&= 86\,400
\end{align*}$ |
What will be the value of least significant byte? | This is another way of asking: what is the remainder when $N$ is divided by $2^8=256$?
Here $$N = \frac{1023\times 1024}{2} - (2+3+11+17+31)$$ $$= 1023\times 512 - 64$$ $$= 1022\times 512 + (512-64)$$ $$ = 1022\times 512 + 448$$
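A quick Python check of this arithmetic:

```python
N = 1023 * 1024 // 2 - (2 + 3 + 11 + 17 + 31)
print(N, N % 256, format(N % 256, '08b'))   # 523712 192 11000000
```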
Now $448\pmod{256} =192 = 11000000_{2}.$ |
Showing that if six people stand in a ring, then the probability of exactly $t$ people standing between Q and R in the clockwise direction is $1/5$ | There are 6! ways to place those people in a circle. Let’s see in how many ways you can place them so that Q is exactly $t, t\in\{0,1,2,3,4\}$ places away from $P$, in clockwise direction. For that:
Place P first, you can do it in 6 ways.
Place Q next, $t$ places away clockwise. This place is uniquely determined, i.e. there is 1 way of doing that.
Place the other four people to the remaining 4 places, the number of ways is 4!
So the total number of ways to place those people and get the desired number of places between P and Q, clockwise, is $6\times 4!$.
Thus, assuming all placements are equally likely, the probability is $\frac{6\times 4!}{6!}=\frac{1}{5}$ and it is the same for every $t$.
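An exhaustive check of this count is easy; the Python sketch below labels the seats $0$ to $5$ clockwise and counts the people strictly between P and Q in the clockwise direction.

```python
from itertools import permutations

people = ["P", "Q", "A", "B", "C", "D"]
counts = {t: 0 for t in range(5)}
for seating in permutations(people):            # seat 0, seat 1, ..., seat 5 clockwise
    i, j = seating.index("P"), seating.index("Q")
    t = (j - i - 1) % 6                         # people strictly between P and Q, clockwise
    counts[t] += 1

total = sum(counts.values())                    # 6! = 720
print({t: counts[t] / total for t in counts})   # each value is 144/720 = 0.2
```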
Now, obviously, if you drop the “clockwise” requirement, then:
$t$ can only be 0,1 or 2
$t=0$ is now the same as $t=0\lor t=4$ before, so the probabilities add up to $\frac{2}{5}$.
Similarly, $t=1$ is now the same as $t=1\lor t=3$ before, so the probabilities add up to $\frac{2}{5}$.
Finally $t=2$ is the same as $t=2$ before (2 people in between, in a circle of 6, means “opposite” - in either direction), so the probability stays $\frac{1}{5}$. |
Generating different operators from an eigenfunction and a lowest eigenvalue | You know that $(T_1 - T_2)f(x) = 0$, so first find an operator $S$ which sends $f(x)$ to zero. Once you have this, find an operator $T_1$ which has an eigenfunction $f(x)$ with lowest eigenvalue $\lambda$. Then you can derive $T_2 = T_1 - S$. |
Convergent series whose Cauchy product diverges | We will use as base series the slowly convergent series
$$\frac{1}{1^{1/4}} -\frac{1}{2^{1/4}}+\frac{1}{3^{1/4}}-\frac{1}{4^{1/4}}+\frac{1}{5^{1/4}}-\frac{1}{6^{1/4}}+\frac{1}{7^{1/4}}-\frac{1}{8^{1/4}}+\frac{1}{9^{1/4}}-\frac{1}{10^{1/4}}+\cdots.$$
So we are using the series $\sum_{i=1}^\infty (-1)^{i+1} \frac{1}{i^{1/4}}$.
(It is convenient to start the indexing at $i=1$. But if you really want to start at $0$, prepend an additional $0$-th term equal to $0$.) Consider the Cauchy product of the above series with itself.
In particular, consider the term $c_{n+1}$ of the Cauchy product, where for convenience $n$ is odd. For a concrete example, let $n=9$. Then $c_{n+1}$ is a sum of $n$ terms. Each term is $\ge \frac{1}{\left(\frac{n+1}{2}\right)^{1/2}}$. This is because for any positive integer $a$, the product $x(a-x)$ attains its maximum at $x=\frac{a}{2}$.
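You can also watch this happen numerically; the Python sketch below computes the Cauchy-product coefficients directly (with the indexing above, $c_m$ collects the products $a_ia_j$ with $i+j=m$).

```python
def c(m):
    # m-th Cauchy product coefficient of sum_{i>=1} (-1)^(i+1) i^(-1/4) with itself
    return sum((-1) ** (i + 1) * i ** -0.25 * (-1) ** (m - i + 1) * (m - i) ** -0.25
               for i in range(1, m))

for m in [10, 100, 1000, 10000]:
    print(m, abs(c(m)))   # grows roughly like sqrt(m); the terms certainly do not tend to 0
```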
It follows that the sequence $(c_{n})$ does not have limit $0$, so $\sum c_n$ does not converge. |
Loss of direction in Gauß' theorem? | The correct formula you want is
$$\int_V \vec{\nabla\phi}\,dV = \int_{\partial V} \phi \vec n\,dS\,,$$
where $\vec n$ is the unit outward normal to $\partial V$. |
Zero and inverse element in abelian group of extensions of modules | For two extensions
$$A\to E \to B$$
and
$$A\to F \to B$$
we define the sum of them to be be the quotient of the pullback $X$ on the morphisms
$$\pi_e: E\to B$$
$$\pi_f: F\to B$$
with the module
$$N=\{\imath_e(a)\oplus-\imath_f(a):a\in A\}$$
Then the Baer sum is $X/N$. Let $A\oplus B$ be given and let $E$ be an extension; then we have:
$$X=\{(a\oplus b)\oplus e:\pi(a\oplus b)=\pi_e(e)\}$$
$$=\{a\oplus b\oplus e:b=\pi_e(e)\}$$
$$=\{(a\oplus b)\oplus e:b=\pi_e(e)\}$$
$$\cong\{a \oplus e\}$$
$$=A\oplus E$$
and for the submodule we have through a similar reasoning that $N\cong A\cong A\oplus 0$
and taking their quotient you get the rest. I recommend you look into Mac Lane's book on it. |
Which Hecke algebra is used in representation theory? | Both are used and they are basically equivalent. What I'm going to say is technically incorrect in some details but the general idea is right.
To make life easy, assume $f$ is a modular cusp form for $\operatorname{SL}_2(\mathbb Z)$ which is a normalized eigenfunction of all Hecke operators $T_p$. There is a way (actually several, slightly inequivalent ways) to associate $f$ to an automorphic form $\phi: \operatorname{GL}_2(\mathbb Q) \backslash \operatorname{GL}_2(\mathbb A) \rightarrow \mathbb C$. Let $G = \operatorname{GL}_2$. The function $\phi$ lives inside the space $L^2(G(\mathbb Q)Z_G(\mathbb A) \backslash G(\mathbb A))$ and generates an irreducible representation $\Pi$ inside there.
Let $G_p = G(\mathbb Q_p)$ and $K_p = G(\mathbb Z_p)$. There are unique irreducible, admissible representations $\pi_p$ of $G_p$ and a representation $\pi_{\infty}$ of $G_{\infty} = G(\mathbb R)$ such that $\Pi$ contains the "infinite tensor product" representation $\otimes_{p \leq \infty} \pi_p$ as a dense subspace (some work needs to be done to make sense out of an infinite tensor product). Assume each representation $\pi_p$ of $G_p$ has a nonzero vector fixed by $K_p$.
Let $H_p = \mathscr C_c^{\infty}(K_p \backslash G_p/K_p)$ be the convolution ring of locally constant and left and right bi-$K_p$ invariant complex valued functions on $G_p$. This is one of the kinds of Hecke algebras you were considering. These particular Hecke algebras turn out to be commutative rings with unity. Let $H_{\operatorname{fin}}$ be the infinite tensor product of the rings $H_p : p < \infty$.
The function $\phi$ lies in $\otimes_{p \leq \infty} \pi_p$ and in fact is itself equal to an infinite tensor product $\phi = \otimes_{p \leq \infty} \phi_p$ with $\phi_p \in \pi_p$. The Hecke operators $T_{p^n} : n \in \mathbb N$ scale the cusp form $f$, but if we identify $f$ with the automorphic form $\phi$, then the $T_{p^n}$ affect only the component $\phi_p$. In fact, $T_{p^n}$ identifies with a certain element in $H_p$, and $H_p$ is generated as an algebra by the $T_{p^n}$.
In this way, the tensor product of the "local Hecke algebras" $H_p$ form the "global finite Hecke algebra" $H_{\operatorname{fin}}$, which can also be thought of as being generated by the operators $T_{p^n}$, for $p$ prime and $n\in \mathbb N$. |
Why are two functions not identical? | Because they have different natural domains.
On the other hand, in $(0,\infty)$, $f(x)=g(x)$.
A similar example:
$$
g(x)=\sqrt{x},\quad f(x)=\sqrt{|x|}
$$ |
Example of Cut in Natural Deduction and how to remove it | As you can see in Sara Negri & Jan von Plato, Structural Proof Theory (2001), page 18, in Natural Deduction we have the substitution rule, which is a "derived rule" :
In natural deduction, if two derivations $\Gamma \vdash A$ and $A, \Delta \vdash C$ are given, we can join them together into a derivation $\Gamma, \Delta \vdash C$, through a substitution :
$${ \Gamma \vdash A \quad A, \Delta \vdash C \over \Gamma, \Delta \vdash C } \, \text{(Subst)}$$
The sequent calculus rule corresponding to this is cut :
$${ \Gamma \implies A \quad A, \Delta \implies C \over \Gamma, \Delta \implies C } \, \text{(Cut)}$$
Often cut is explained as follows: We break down the derivation of $C$ from some assumptions into "lemmas," intermediate steps that are easier to prove and that are chained together in the way shown by the cut rule.
But see also [page 172] :
[Substitution] rule resembles cut, but is different in nature: Closure under substitution just states that substitution through the putting together of derivations produces a correct derivation. This is seen clearly from the proof of admissibility of substitution.
In natural deduction in sequent calculus style, there are no principal formulas in the antecedent, and therefore the substitution formula in the right premiss also appears in at least some premiss of the rule concluding the right premiss. Substitution is permuted up until the right premiss is an assumption.
Elimination of substitution is very different from the elimination of cut. |
Relationship between $\operatorname{supp} w$ and $\operatorname{supp} dw$ | This is actually easier. Let $C = \text{supp}(\omega)$ and $U = \Sigma \setminus C$. Then $\omega |_U = 0$ and so $d(\omega |_U) = 0$. But as $(d\omega)|_U = d(\omega|_U)$, we have $d\omega = 0$ on $U$ and so $\text{supp}(d\omega) \subset C$. |
Prove that $f(x,y)$ is totally differentiable in $ (0,0)$ | For this function there is an easier way to prove that it is totally differentiable. The only problem is at $(0,0)$. At any other point the partial derivatives are continuous, so the function is differentiable there.
So at $(0,0)$:
You can evaluate $\frac{\partial f}{\partial x}(0,0)=\lim_{x \to 0}\frac{f(x,0)-f(0,0)}{x}=0$ and $\frac{\partial f}{\partial y}(0,0)=0$.
Then you evaluate the limit $\lim_{(x,y) \to (0,0)}\frac{f(x,y)-f(0,0)-\frac{\partial f}{\partial x}(0,0)x-\frac{\partial f}{\partial y}(0,0)y}{\sqrt {x^2+y^2}}=0$ .Therefore f is differentiable at $(0,0)$ and $df(0,0)(x_1,x_2)=0x_1+0x_2$ |
Calculating the number of pairs in a series of combinations | Hint:
The number of ways of selecting $3$ questions out of $5$ questions is $\binom{5}{3}$.
And since these questions are asked to $1000$ people, just multiply by the number of ways of asking $3$ questions out of $5$. |
Is the exponential map for $\text{Sp}(2n,{\mathbb R})$ surjective? | In his comment, user148177 already explained that the exponential function of $\text{Sp}_{2n}({\mathbb R})$ is not surjective.
Two factors, however, suffice:
Theorem (Polar decomposition) Any $g\in\text{Sp}_{2n}({\mathbb R})$ can uniquely be written as a product $g = h\cdot\text{exp}(X)$ with $h\in SO_{2n}({\mathbb R})\cap\text{Sp}_{2n}({\mathbb R})$ and $X\in\text{Sym}_{2n}({\mathbb R})\cap{{\mathfrak s}{\mathfrak p}}_{2n}({\mathbb R})$.
Reference: Hilgert-Neeb: The structure and geometry of Lie groups, Proposition 4.3.3.
As a compact Lie group $SO_{2n}({\mathbb R})\cap\text{Sp}_{2n}({\mathbb R})$ has surjective exponential, so $\text{Sp}_{2n}({\mathbb R})=\text{exp}({\mathfrak s}{\mathfrak p}_{2n}({\mathbb R}))^2$. |
Does $\sum \frac{1}{n^x}$ converge uniformly? | Let $\sum f_n$ denote the series. Note that this series converges uniformly on $[A, \infty)$ for all $A > 1$: we have for all $x \ge A$ and all $n$, $n^x \ge n^A$, hence $1/n^x \le 1/n^A$. Thus,
$$\sup_{x \ge A} f_n(x) \le \frac1{n^A}$$
So you can use Weierstrass' M-test.
Note
The series does not converge uniformly on, say, $(1, \infty)$:
$$R_n(x) = \sum_{k=0}^{\infty} f_k (x) - \sum_{k=0}^n f_k(x) = \sum_{k=n+1}^{\infty} \frac1{k^x} > \sum_{k=n+1}^{2n} \frac1{k^x} > \frac{n}{(2n)^x} = \frac1{2 (2n)^{x-1}}$$
Putting $x_n = 1 + 1/n$, we get:
$$R_n(x_n) \ge \frac1{2(2n)^{1/n}} \to 1/2$$
Hence,
$$\sup_{x > 1} R_n(x) \not \to 0$$ |
If operators $A, B, A+B$ are all closable, show that $\overline{A + B} \supseteq \overline{A} + \overline{B}$. | I think that exercise is false.
Let $A=\frac{d^2}{dx^2}$ on all finite linear combinations of $\{ 1,x,x^2,x^3,\cdots \}$ and let $B=\frac{d^2}{dx^2}$ on all finite linear combinations of $\{ \sin(\pi x),\sin(2\pi x),\sin(3\pi x),\cdots\}$. Both of these operators are densely-defined; and they're both closable because they're both restrictions of the same closed operator $C=\frac{d^2}{dx^2}$ on $\mathcal{D}(C)=H^2[0,1]$.
$\mathcal{D}(A+B)=\{0\}$, which means that $A+B$ is trivially closed on its domain $\{0\}$. This is because the linear space of polynomials has nothing in common with finite linear combinations of $\sin(n\pi x)$ terms, other than the $0$ function.
The closure of $B$ has domain
$$
\mathcal{D}(\overline{B})=\{ f \in H^2[0,1] : f(0)=f(1)=0 \}.
$$
This is because $\{ \sin(n\pi x) \}_{n=1}^{\infty}$ is an orthonormal basis of $\{ f \in H^2[0,1] : f(0)=f(1)=0 \}$.
The domain of $A$ contains polynomials that vanish at $0$ and $1$, which means $\mathcal{D}(A)\cap\mathcal{D}(\overline{B}) \ne \{0\}$. So $\mathcal{D}(\overline{A})\cap\mathcal{D}(\overline{B}) \ne \{0\}$, which proves that
$$
\overline{A}+\overline{B} \not \subseteq \overline{A+B}.
$$ |
How to find the number of options to arrange similiar objects? | You can find the number of arrangements for your problem by using this method.
To find the number of possible arrangements of $n$ unique objects we use $n!$. However, if there is repetition, as in this case, $n!$ overcounts the arrangements.
Here $4! = 24$.
We need to cancel the effect of rearranging the identical characters among themselves, by dividing by the factorial of each character's multiplicity.
Here we divide $4!$ by $3!$ and by $1!$, which cancels the effect of the three B's: $4!/3! = 4$ arrangements.
The probability $= 4 \times (0.5)(0.5)(0.5)(0.3)$.
Consider another example: BBRR, BRBR, RRBB, RBBR, BRRB, RBRB.
This is verified by the formula $4!/(2!\,2!) = 6$ arrangements, since now there are two types of characters.
Also, check out this link:
Similar Question |
Matlab, finding the variance given a probability distribution | Use $f(x)$ to obtain a large sample of randomly generated variables $x_i$ which follow this distribution. In case of the normal distribution one could use s = normrnd(mu,sigma,n,m) to create such a sample. Given the vector $s$ containing the sample, you can calculate the variance by var(s). If $f(x)$ is difficult to sample from, you could use a rejection sampler.
Example: s = normrnd(0,1,1,10000) creates a large sample (n=10000) for $X\sim N(0,1)$. This gives var(s)=1.0053. |
Why is it so hard to translate some proves into machine-readable form? | Another example is Gonthier's recent proof of Feit–Thompson theorem. It would be too long to answer your question in detail, but this link (in French)
COQ ET CARACTÈRES, Preuve formelle du théorème de Feit et Thompson, contains several relevant references.
In particular, A Special Issue on Formal Proof of the Notices of the AMS gathers four articles:
Formal Proof by Thomas Hales
Formal Proof--The Four-Color Theorem by Georges Gonthier
Formal Proof--Theory and Practice by John Harrison
Formal Proof--Getting Started by Freek Wiedijk |
On factorizationa of a cube $\,m^3 = ab$ | As lulu noted, the order of any prime dividing $ab$ must be divisible by 3, because $ab=m^3$. Now the product above just results from the fact that we can break up the primes dividing $m$ into those which divide only $a$, those which divide both $a$ and $b$, and thus divide $d$, and those which divide only $b$. Only the first two categories contribute to our expression for $a$, obviously, then you need simply observe that if a prime occurs exclusively in $a$, then the order of the prime in $a$ must be the same as the order of the prime dividing $ab$, whence it must be divisible by 3. |
subspace verificaion | You have the right idea. It's just that you have to formalize it properly and state things in the proper order.
Say you have vectors $x_1, ..., x_n$ in $W$ and scalars $a_1, ..., a_n$ in $\mathbb{R}$.
Since a scalar multiple of any vector in $W$ is also in $W$, you have for any $x_i$ that $a_ix_i$ is in $W$.
Since $a_ix_i$ is in $W$ for all $i=1,...,n$, and the sum of any vectors in $W$ is also in $W$, then $a_1x_1 + ... + a_nx_n$ is also in $W$. |
Eigenvector proof for repeated eigenvalues | Proof sketch:
Since $\lambda= \lambda_1=\lambda_2$ and $S$ is symmetric, the geometric multiplicity of $\lambda$ is $2$. Now, if we denote $E_{\mu}$ the eigenspace of $S$ associated to the eigenvalue $\mu$, we have $V=E_{\lambda}\oplus E_{\lambda_3}$ where $S:V\to V$ and $\oplus$ denotes the direct sum. Finally, we know that $V= E_{\lambda_3}\oplus E_{\lambda_3}^{\perp}$ and by identification we get $E_{\lambda_3}^{\perp}=E_{\lambda}$. |
If $z_0$ is a root of the equation $z^n\cos\theta_0+z^{n-1}\cos\theta_1+\cdots+\cos\theta_n=2$ | You know that
$$2< |z|^n+|z|^{n-1}+\cdots+1 =\frac{1-|z|^{n+1}}{1-|z|}$$
Now, if $|z| <1$ we have
$$1-|z|^{n+1} > 2-2|z| \Rightarrow 2|z|>1+|z|^{n+1}>1$$
If $|z|\geq 1$ we have
$$|z|>1 \geq \frac{1}{2}$$
P.S. Alternately, you can observe that if $|z| \leq \frac{1}{2}$ then
$$1+|z|+|z|^2+..+|z|^n \leq 1+\frac{1}{2}+\frac{1}{2^2}+...+\frac{1}{2^n} <2$$ |
Number of permutations of $n$ objects with order 3 or 4 | We count the permutations of order that divides $3$, count the permutations of order that divides $4$, add, and subtract $1$ for the identity permutation that has been counted twice.
Permutations whose order divides $3$: Let $a_n$ be the number of such permutations of the $n$-element set $1,2,\dots,n$. Now look at $a_{n+1}$. Maybe $n+1$ is sent to itself. There are $a_n$ such permutations of order that divides $3$. Maybe $n+1$ is part of a $3$-cycle. Then there are $n$ choices for $\sigma(n+1)$, and then $n-1$ choices for $\sigma(\sigma(n+1))$. That yields the recurrence
$$a_{n+1}=a_n+n(n-1)a_{n-2}.$$
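A few values from this recurrence, as a sanity check (a Python sketch; the base cases $a_0=a_1=a_2=1$ count just the identity permutation):

```python
def order_divides_3(n_max):
    # a[n] = number of permutations of an n-element set whose order divides 3
    a = [1, 1, 1]                                # a_0 = a_1 = a_2 = 1
    for n in range(2, n_max):
        a.append(a[n] + n * (n - 1) * a[n - 2])  # a_{n+1} = a_n + n(n-1) a_{n-2}
    return a[:n_max + 1]

print(order_divides_3(5))   # [1, 1, 1, 3, 9, 21]
```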
Let $b_n$ be the number of permutations whose order divides $4$. One uses the same idea to get a recurrence for $b_n$. Either $n+1$ is sent to itself, or it is part of a $2$-cycle, or it is part of a $4$-cycle. |
find the angle in a triangle with angles $ 20^{\circ}, 70^\circ, 90^\circ $ | Here is a solution to part (b).
Let $\Omega$ be $BCE$'s circumcircle and $O$ its centre. Then $\angle COE=2\angle CBE=20^\circ$. $\angle EOB=2\angle ECB=140^\circ$ so $\angle COB=\angle COE+\angle EOB=160^\circ$ so $\angle OBC=\angle BCO=10^\circ$.
Erect an equilateral $\triangle BOJ$ on base $BO$.
$OJ=OB$ so $J$ is on $\Omega$. $\angle JCB=\frac{1}2 \angle JOB=30^\circ=\angle FCB$ so $F$ is on $CJ$.
Let $OJ$ cross $BF$ at $P$. $\angle OBJ=60^\circ=2\angle OBF$ so $BF$ is an axis of symmetry of $\triangle BOJ$ and is thus $OJ$'s $\perp$ bisector. Thus $\angle FOJ=\angle OJF$.
$OJ=OB=OC$ so $\triangle CJO$ is isosceles on base $CJ$. $\angle JCO=\angle FCO=\angle FCB+\angle BCO=40^\circ$, so $\angle OJC=\angle JCO=40^\circ$ so $\angle FOJ=\angle OJF=\angle OJC=40^\circ$.
Therefore $\angle EOF=\angle EOB-\angle FOJ-\angle JOB=140^\circ-40^\circ-60^\circ=40^\circ=\angle ECF$, so $OCEF$ is cyclic, so $\angle CFE=\angle COE=20^\circ$, which solves part b.
Hints for part (a): it might be useful to know that $OCEF_aF_b$ are all on a circle whose centre lies on $BC$. |
Why Benders cuts are valid? | Actually, for a minimization problem the subproblem generates a cut that is a lower bound on the portion of the original objective function coming from the subproblem variables.
The key to understanding this is duality theory. The values of the integer variables coming from the master problem modify the right hand side limits of the subproblem constraints. That's equivalent to modifying the objective coefficients of the dual to the subproblem. So the dual solution you get is feasible (in the dual) for all possible integer variable values (since they are not messing with the dual constraints) and optimal for that particular set of variable values. You construct both kinds of cuts, optimality (lower bound) and feasibility (when the primal subproblem is infeasible/dual is unbounded), from the dual solution, so the fact that the dual solution does not depend on the master problem values to be feasible makes the cuts globally valid.
As far as other (non-Benders) cuts go, it is entirely possible for some of them to be provably valid and cut off a given solution. I'm just not aware of any that are uniformly provably valid (valid for all decomposable MIP models). The proof that some other flavor of cut is globally valid will likely depend on some particular aspect of the problem, something like having a known type of IP model (knapsack, set covering, TSP, ...) embedded within the given problem. |
Prove that $\langle{a,b\,\vert\,aba^{-1}b^{2}}\rangle$ is not isomorphic to $\mathbb{Z}/3\mathbb{Z}$. | Consider the homomorphism from the given group to $(\mathbb Z,+)$ given by the assignment $$a\mapsto 1 \text{ and } b\mapsto 0 .$$
This takes $aba^{-1}b^2$ to $1+0-1+0+0=0$ so is well defined and is surjective because for any integer $n$, the element $a^n$ will map to $n$.
Unfortunately, $\mathbb Z/3\mathbb Z$ cannot surject onto an infinite group. |
Convergence Of The Following Alternating series. | The series in fact converges absolutely and thus it converges (alternatingly), since:
$$\sum_{k=1}^n\sin k\theta=\frac{\cos\frac\theta2-\cos\left(n+\frac12\right)\theta}{2\sin\frac\theta2}\implies\left|\sum_{k=1}^n\sin k\theta\right|\le\frac1{\left|\sin\frac\theta2\right|}$$
and $\;\cfrac1n\rightarrow0\;$ monotonically, so we can use Dirichlet's Test to deduce the result. |
2016 Spain Math Olympiad final stage, problem 2 | For $p=2$, or for any prime modulo which $3 \equiv 25$ (i.e., $p \mid 22$), the two conditions are identical.
For odd primes $p$,
$x^2 - x \equiv a \pmod p$ is solvable if and only if $4a+1$ is a square mod $p$.
The statement is now that $-11$ and $-99$ are both squares or both nonsquares mod $p$. Since $-99 = 9\cdot(-11)$, for $p \neq 3$ that is clear, and for $3$ one can calculate.
So there is some special verification at $p=2$ (where completing the square does not work) and $p=3$ (where multiplication by 9 can change a nonsquare into a square) and the argument using quadratic equations or completing the square handles all other values of $p$. |
Simple Random Walk with equal probability of +1 and -1. | Hint:
Take a look at Gambler's ruin. |
Understanding a coin throwing game | Your answers are almost correct. The only thing you are missing are factors that account for the "given that" parts of both questions. In the first part, you are not accounting for the fact that the probability that A did not obtain tails on her first two trials is 1/8, which you need to divide by as you are conditioning on this event (it is now your sample space). You made the same error in the second problem, you need to divide by the probability that A loses, (1/3), and now your answers will match your answer key. |
How do I determine the domain and range of the following relations using set builder notation? | You're on the right track, but you're having trouble with the endpoints.
The points $(-3, -2)$ and $(4, -2)$ are on the graph in a), which means that your domain should actually be $x \in [-3, 4]$ in interval notation to indicate that $x = -3$ and $x = 4$ are included. This would be reflected in the set-builder notation by using inequalities with "or equal to", so that you'd have $\{x \in \Bbb R \mid -3 \le x \le 4\}$.
For part c), you're again almost right, except for the endpoints. Their $x$- and $y$-coordinates need to be included, so you'd need brackets for interval notation, and all $\le$'s for the set-builder notation. So, for the range, I would write $\{y \in \Bbb R \mid -3 \le y \le 3\}$, using $y$ rather than $x$.
Now, in addition to filled-in circles, some graphs have arrows. These indicate that the graph "keeps going" in (roughly) whatever direction the arrows point. In b), this would be reflected as an interval $x \in (-\infty, 3]$ for the domain.
Notice for b) that $x$ needs to only be "at most $3$" (not "at least" anything), and thus you'll only need a single inequality, rather than the compound ones you would use on graphs that have a definite starting and ending point.
I'll let you give the rest a shot; most of what you had was spot-on. |
what is the probability of landing 10 heads in a row given a certain amount of time. | Let's call "Trying for 10 minutes" a "large attempt". A large attempt has some probability of succeeding (exactly what that probability is depends on how fast you're throwing). I would think it rather obvious that if you do 60 large attempts (i.e. keep going for 10 hours), then you have an increased probability of experiencing a success than if you do only one. |
Limit I don't know how to start solving. | Some intuition for how to come up with this solution:
As $n$ gets very large, $4^n$ is much much bigger than $2^n$. A common trick to "prove" this, or tease this fact out, is to divide the numerator and denominator by the highest order term. This is legal since it amounts to multiplying by 1, and $4^n$ is never 0 for any $n$.
Doing this makes the result easier to see, as in your example. Once you divide by $4^n$ you end up with
$$
\frac{(\text{a bunch of stuff that goes to 0})-1}{(\text{a bunch of stuff that goes to 0})+4^{2}}\rightarrow-1/16
$$
as $n\rightarrow \infty$. |
Reasoning behind the trigonometric substitution for $\sqrt{\frac{x-\alpha}{\beta-x}}$ and $\sqrt{(x-\alpha)(\beta-x)}$ | In the first place, one notices that the expression can be "normalized" by means of a linear transform that maps $\alpha,\beta$ to $-1,1$, giving the expressions
$$\sqrt{1-x^2}\text{ and }\sqrt{\frac{1+x}{1-x}}=\frac{1+x}{\sqrt{1-x^2}}.$$
Then the substitution $x=\cos\theta$ comes naturally. We could stop here.
Coming back to the unscaled originals, we have
$$x=\frac{\alpha+\beta+(\beta-\alpha)\cos\theta}2$$
which is also
$$x=\frac{(\alpha+\beta)(\cos^2\frac\theta2+\sin^2\frac\theta2)+(\beta-\alpha)(\cos^2\frac\theta2-\sin^2\frac\theta2)}2=\alpha\sin^2\frac\theta2+\beta\cos^2\frac\theta2.$$ |
Can't understand one chance in R of winning where R is some result of factorials. | You're looking at the binomial coefficient, written as
$\dbinom{51}{6}$, read "$51$ choose $6$", and compactly written as:
$$\binom{51}{6} = \frac{51!}{6!(51-6)!} \\[5ex]
= \frac{51\times 50\times 49\times 48\times 47\times 46}{6\times 5\times 4\times 3\times 2\times 1} \\[5ex]
= 17\times 10\times 49\times 47\times 46 =18009460
$$
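A one-line check in Python, if you want to verify the arithmetic:

```python
from math import comb
print(comb(51, 6))   # 18009460
```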
This is the total number of ways of selecting a smaller set ($6$) from a larger ($51$). Thus to get a particular desired selection, there is one way to do that in this large number of possibilities. |
A fair coin is tossed $5$ times. It is known that there are more than $2$ heads in the $5$ tosses. What is the probability... | $$P(H=3|H> 2)= \frac{P([H=3] \cap [H>2])}{P(H>2)}=\frac{P(H=3)}{P(H>2)}$$
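A short Python check using exact binomial counts:

```python
from fractions import Fraction
from math import comb

def p(k):                                      # P(H = k) for 5 fair coin tosses
    return Fraction(comb(5, k), 2**5)

print(p(3) / sum(p(k) for k in range(3, 6)))   # P(H = 3 | H > 2) = 5/8
```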
This is a straightforward application of the definition of conditional probability. What changes the probability here is the fact that we know the number of heads was more than $2$. This information reduces the sample space for any further questions asked. For example, if we ask for the probability that the number of heads is $1$ when it is known that the number of heads was greater than $2$, the probability is $0$. This is an example of reduction of the sample space. |
Combinations question | Represent each equivalence class by the number which has its digits sorted.
Now this is uniquely determined by the number of 0s, number of 1s, ..., number of 9s.
Thus you need to find the total number of tuples $(n_0, n_1, \dots, n_9)$ such that
$\displaystyle \sum_{i=0}^{9} n_i = 4$ with $n_i \ge 0$.
This is given by $\displaystyle {13 \choose 4}$
Since this also counts $0000$, the answer for your problem is $\displaystyle {13 \choose 4}-1$.
In general, the number of distinct $n$-tuples of non-negative integers which sum to $k$ is given by $\displaystyle {n+k-1 \choose k}$.
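A quick enumeration confirming these counts (Python):

```python
from itertools import combinations_with_replacement
from math import comb

multisets = list(combinations_with_replacement(range(10), 4))  # digit multisets of size 4
print(len(multisets), comb(13, 4))   # 715 715
print(len(multisets) - 1)            # 714, after excluding 0000
```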
See here: http://en.wikipedia.org/wiki/Stars_and_bars_%28probability%29 |
$n^{th}$ root of a matrix. | If your matrix is invertible and you can get it into Jordan normal form, you can take the logarithm of the matrix using the techniques described here, and then exponentiate, taking advantage of the power rule for exponentials in the usual way. (If $A$ is your matrix, then $\exp(\frac{1}{n}\log(A)) = A^\frac{1}{n}$.) |
Proof verification of incompleteness of metric $\sigma(x,y) = |\arctan(x) - \arctan(y)|$ | The ideas all seem to be present, although an important step is out of order.
You should start by defining the sequence $x_n = n$, before you ever "fix $\epsilon > 0$".
Next you should state your intention to prove $x_n$ is Cauchy.
Only then, by applying the definition of a Cauchy sequence, are you justified to "fix $\epsilon > 0$", and to proceed with the proof that $x_n$ is Cauchy.
Otherwise, your proof is fine. |
Find the maxima of the given polynomial function | Starting from @user772784's answer we need to solve for $x$ the equation
$$\left(3 x^2-x-3\right) \left(4 x^5-12 x^4+2 x^3+3 x^2-5 x-3\right)=0$$ and this is quite discouraging. So, there must be a trick somewhere and I suppose that, at least, one of the required roots comes from the quadratic, that is to say $x_\pm=\frac{1\pm\sqrt{37}}{6} $
Now, if you plug these value in $f(x)$, the result is $\sqrt{10}$ for the maximum $(x=x_-)$.
Unfortunately, the minimum corresponds to the only real root of the quintic and we cannot get its analytical expression. Using calculus, the minimum is located around $x=2.795$ and its value looks like $-1.547$. |
Why can't the nth triangular number be expressed as the area of an equilateral triangle? | What you mean by the area of a triangle is the area in terms of squares of unit side length. But balls arranged in an equilateral triangle do not pack in square cells; rather, they are more closely related to a hexagonal tiling.
There are two directions: either arrange triangle numbers in the form of right-angled isosceles triangles, or use hexagons as base unit.
First, consider the $n$th triangle number arranged as a right-angled triangle, using $1\times 1$ square cells. Overlay a right-angled triangle of base and height $n$.
The $n$th triangle number is $\frac{n(n+1)}{2} = \frac{n^2 + n}{2}$, but the area of the triangle is only $\frac {n^2}2$. The difference comes from the series of $n$ little triangles lying outside the hypotenuse, each with base and height $1$.
Alternatively, consider the $n$th triangle number arranged as an equilateral triangle, using regular hexagons. The hexagons have sides $\frac1{\sqrt3}$, which makes the distance between any two parallel edges $1$. The area of such a hexagon is
$$A_h = 6\cdot\frac12\cdot\frac12\cdot\frac1{\sqrt3} = \frac{\sqrt3}{2}$$
Overlay an equilateral triangle of sides $n$ on the hexagonal arrangement. The area of this triangle is $\frac12\cdot n^2\cdot \frac{\sqrt3}{2}$, which is $\frac{n^2}2$ times that of the basic hexagon. But this time, outside each of the $3$ sides of the triangle, there are $n$ isosceles triangles in excess, each little triangle of area $\frac16$ of the basic hexagon. These add to the area of the triangle to obtain the area of the hexagonal arrangement,
$$\frac{n^2}{2}A_h + 3n\frac{A_h}6 = \frac{n(n+1)}{2}A_h$$ |
biconditionals and tautologies | A biconditional, say, $K \longleftrightarrow N\equiv (K \rightarrow N) \land (N\rightarrow K)$ is true only when both $K$ and $N$ are true, or when both $K$ and $N$ are false.
However if one is true and the other false, the biconditional is not true. Hence, it is not true for all truth-value assignments for $K, N$. Hence, it is not a tautology.
We do have that $K \iff N \equiv (K\; \text{ XNOR } N)$, and the two share the same truth table: true when $(K,N)$ is $(T,T)$ or $(F,F)$, and false when it is $(T,F)$ or $(F,T)$. |
What is a reper (reper bundle)? | I believe a reper bundle is a frame bundle. There are several sources defining a "reper", e.g. here:
A local reper or frame of $E$ over $U \subseteq M$ is a
collection of sections $e_1, . . . ,e_k$ such that for all $p\in U$ the vectors
$e_1(p),\ldots ,e_k(p)$ form a basis of $E_p$. A reper is called global if this
property extends to all $p\in M$. |
First-order logic: nested quantifiers for same variables | This is really about how you evaluate the truth value.
$\exists x\varphi(x)$ is true if and only if there exists some $x$ for which $\varphi(x)$ is true. Conversely, it is false if and only if $\varphi(x)$ is false for all $x$ (in a given model, of course).
The inner quantification is mostly there to "confuse" your intuition; since the truth value of $\forall x\forall y(P(x,y))$ does not depend on the truth value of the outer quantification, it is easier to rename the variables, even informally, before writing the actual proof.
The claim itself just says that there is a pair $(x,y)$ such that if $P(x,y)$ holds, then $P$ holds of all ordered pairs of the universe.
We can prove the validity of this formula from an external point of view, and we do that semantically (that is, we do not try to write a proof, but rather show that it holds in every model). For brevity, denote our formula by $\varphi$.
Let $M$ be an arbitrary model of our language (where $P$ is a binary relation).
If $M\models\forall x\forall y(P(x,y))$ then $M\models\varphi$ (can you see why?)
If $M\models\lnot\forall x\forall y(P(x,y))$ then for some $a,b\in|M|$ we have $M\models\lnot P(a,b)$. In particular, for this pair we have $M\models P(a,b)\rightarrow\forall x\forall y(P(x,y))$ (the implication holds vacuously), so $M\models\varphi$.
If needed, this should be made rigorous using the $\operatorname{val}$ function. I strongly recommend working out the details yourself, following closely the definitions of $\operatorname{val}_M(\exists x\varphi,g)$ (and similarly $\forall x\varphi$). |
Prove that an identity element does not exist with the definition | Say there exists such $e$. So for all $a$ we would have $$a\circ e = a$$ i.e. $$ \log(|e|+1) = {a\over a^2+1}$$
Since the left-hand side of the equation is constant and the right-hand side is not, there is no such element (right identity) $e$.
Say there exists a left identity element $e$; then for all $b$ we have $$e^2+1 = {b\over \log (|b|+1)}$$ and the right-hand side is again clearly nonconstant.
Uniqueness proof of the left-inverse of a function | You're assuming that whenever you have a $b\in B$ there will be some $a$ such that $b=f(a)$. This is not necessarily the case!
However, if you explicitly add an assumption that $f$ is surjective, then a left inverse, if it exists, will be unique.
For your comment: There are two different things you can conclude from the additional assumption that $f$ is surjective:
There is at least one right inverse.
There is at most one left inverse (and if there is one, it is actually two-sided).
Conversely, if you assume that $f$ is injective, you will know that
There is at most one right inverse (and if there is one, it is actually two-sided).
There is at least one left inverse (except in the case drhab points out below). |
Explain step by step | $$PB=(I-(A+B))B=B-AB-B^2=-AB\\AP=A(I-(A+B))=A-A^2-AB=-AB$$ (the cancellations use $B^2=B$ and $A^2=A$, i.e. that $A$ and $B$ are idempotent).
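A minimal numerical check, assuming (as the cancellations $B-B^2=0$ and $A-A^2=0$ require) that $A$ and $B$ are idempotent; the matrices below are my own illustrative choice, not taken from the question.

```python
import numpy as np

# Idempotent examples: A @ A == A and B @ B == B.
A = np.array([[1, 0], [0, 0]])
B = np.array([[1, 1], [0, 0]])
I = np.eye(2, dtype=int)
P = I - (A + B)

print(P @ B)     # [[-1, -1], [0, 0]]
print(A @ P)     # same matrix
print(-(A @ B))  # same matrix, i.e. PB = AP = -AB
```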
Solve the differential equation : $0.5 \frac{dy}{dx}=4.9-0.1y^2$ | Continuing your calculations
$$ \int \frac{1}{49 - y^2} dy = \int \frac{1}{5} dx $$
$$ \int \frac{1}{7^2 - y^2} dy = \int \frac{1}{5} dx $$
$$ \int \frac{1}{7 - y} + \frac{1}{7 + y} dy = \int \frac{14}{5} dx $$
$$ \log(\frac{7+y}{7-y}) = \frac{14}{5} x + C$$ |
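Solving the last line for $y$ gives the explicit form $y = 7\tanh\!\left(\tfrac{7}{5}x + C\right)$ (with the constant relabelled). A quick sympy check of this form against the original equation (my own addition):

```python
import sympy as sp

x, C = sp.symbols('x C')
y = 7 * sp.tanh(sp.Rational(7, 5) * x + C)  # explicit form of the solution above

# residual of 0.5*y' = 4.9 - 0.1*y^2; it should simplify to zero
residual = sp.Rational(1, 2) * sp.diff(y, x) - (sp.Rational(49, 10) - sp.Rational(1, 10) * y**2)
print(sp.simplify(residual))  # 0
```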
Entropy of sum of two (potentially) dependent R.Vs | Using $ H(Z)=H(X)+H(Z \mid X) - H(X\mid Z)$ with $Z=X+Y$ we get:
$$\begin{align}
H(X+Y)&=H(X)+H(Y\mid X)- H(X \mid X+Y)\\
&=H(Y)+H(X\mid Y)- H(Y \mid X+Y)\\
&=H(X,Y)- H(X \mid X+Y)\\
\end{align}$$
Now, $H(X,Y) \le H(X)+H(Y)$ (equality iff $X,Y$ are independent) and $H(X \mid X+Y)\ge 0$ (equality iff knowing the sum determines the summands; i.e., iff $x_1+y_1 \ne x_2+y_2$ for any two distinct pairs $(x_1,y_1)\ne(x_2,y_2)$ with positive probability).
Then $$H(X+Y)\le H(X)+H(Y)$$
is a valid general bound. If the variables are dependent and we are given $H(X,Y)$ , then
$$H(X+Y)\le H(X,Y)$$
would be a tighter bound. |
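A small numerical illustration (my own, with a made-up joint pmf) of the two bounds $H(X+Y)\le H(X,Y)\le H(X)+H(Y)$ for a pair of dependent binary variables:

```python
import numpy as np

def H(p):
    """Shannon entropy in bits of a probability vector (zero entries ignored)."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Hypothetical joint pmf of (X, Y) on {0,1} x {0,1}; rows are x, columns are y.
joint = np.array([[0.4, 0.1],
                  [0.1, 0.4]])

pX = joint.sum(axis=1)
pY = joint.sum(axis=0)
pZ = np.zeros(3)                      # pmf of Z = X + Y, values 0, 1, 2
for xv in range(2):
    for yv in range(2):
        pZ[xv + yv] += joint[xv, yv]

print("H(X) + H(Y) =", H(pX) + H(pY))  # 2.000
print("H(X,Y)      =", H(joint))       # ~1.722
print("H(X+Y)      =", H(pZ))          # ~1.522, below both bounds above
```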
Decrypting RSA when given N and E but not d | Using the code in my previous answer
I get as the decimal number equivalent of the ciphertext: $3473726822818613085692498216956767492477$
Wolfram alpha factorises your $N$ as
$$123457 \times 123456789123456789123456789123456829$$
This gives $$d=1297147349619189902810679264512807332559$$
and thus $$m= 42856312891220705415$$
which becomes Llanymynech (Welsh course?)
Remark:
If you know the answer is (again) a Welsh place name, take a list of them and encrypt each of them until you find a match for your ciphertext. There are only a few hundred places to try at most, peanuts for computers nowadays. This already shows that the scheme is quite flawed in this form, even with larger $N$ (this one is tiny by realistic standards); with such public-key schemes, random padding is added precisely to eliminate such attack scenarios.
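For completeness, here is a sketch of the factoring attack in Python, using the $p$, $q$ and ciphertext from above. The public exponent is not quoted in this answer, so `E` below is only a placeholder for the actual value; the modular-inverse call `pow(E, -1, phi)` needs Python 3.8+.

```python
# Sketch of the attack: factor N, rebuild the private exponent, decrypt.
p = 123457
q = 123456789123456789123456789123456829
N = p * q
c = 3473726822818613085692498216956767492477  # ciphertext as a decimal number
E = 65537                                      # placeholder; substitute the real public exponent

phi = (p - 1) * (q - 1)
d = pow(E, -1, phi)   # private exponent
m = pow(c, d, N)      # recovered plaintext as an integer
print(d, m)           # turning m back into text depends on the encoding used
```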
how to calculate these intersections without having to count all combinations | I am sorry, but you will have to write down all possible combinations.
It's not too bad though.
For the first one, $a$ and $b$ are both less than $c$, which is less than $d$. So $c$ is at least $3$, and at most $4$. If $c=3$, there are two ways to choose $a$ and $b$; for each of those there are two choices for $d$. So that makes $2\times 2$ ways if $c=3$. If $c=4$, then $d$ must be $5$. There are three choices for $a$, and for each of those, there are two choices for $b$. So you can work out how many ways there are with $c=4$.
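If you do want the computer to do the listing for you, here is a brute-force sketch of the first count. It assumes my reading of the (elided) question, namely distinct values $a,b,c,d$ drawn from $1$ to $5$ with $a<c$, $b<c$ and $c<d$.

```python
from itertools import permutations

# Enumerate distinct (a, b, c, d) from {1,...,5} with a < c, b < c and c < d.
count = sum(
    1
    for a, b, c, d in permutations(range(1, 6), 4)
    if a < c and b < c and c < d
)
print(count)  # 10 = 2*2 (c = 3) + 3*2 (c = 4), matching the casework above
```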
$a+n^2$ is the sum of two perfect squares | The general solution of the equation
$$x^2+y^2=z^2+w^2$$ is given by the identity with four arbitrary parameters
$$(tX+sY)^2+(tY-sX)^2=(tX-sY)^2+(tY+sX)^2$$
Let $n$ be any integer, so we want $$a+n^2=z^2+w^2$$ Setting
$$n=tY-sX\\z=tX-sY\\w=tY+sX$$ we get three equations in four unknowns, which in general have infinitely many solutions. In any case we have
$$n^2=t^2Y^2+s^2X^2-2stXY\\z^2=t^2X^2+s^2Y^2-2stXY\\w^2=t^2Y^2+s^2X^2+2stXY$$ which implies $$z^2+w^2-n^2=t^2X^2+s^2Y^2+2stXY=(tX+sY)^2$$ Since $$a=z^2+w^2-n^2$$ we have
$$a=(tX+sY)^2$$ We are done. |
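A one-line symbolic check of the identity used above (my own addition):

```python
import sympy as sp

t, s, X, Y = sp.symbols('t s X Y')
lhs = (t*X + s*Y)**2 + (t*Y - s*X)**2
rhs = (t*X - s*Y)**2 + (t*Y + s*X)**2
print(sp.expand(lhs - rhs))  # 0, so the identity holds for all t, s, X, Y
```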
Two time derivatives of kinetic energy of fluid | This really depends on the nature of the domain $D$. If the domain $D(t)$ is time-dependent where $\mathbf{u}^{(b)}(\mathbf{x})$ is the velocity of an area element at the boundary point $\mathbf{x} \in \partial D(t)$, then according to Reynolds transport theorem (proved here) and an application of the divergence theorem, we have
$$\tag{1}\frac{dK}{dt} = \int_{D(t)} \frac{\partial K}{\partial t}\, dV + \int_{ D(t)}\nabla \cdot \left(K\mathbf{u}^{(b)}\right) \, dV \\= \int_{D(t)} \frac{\partial K}{\partial t}\, dV + \int_{\partial D(t)}K\left(\mathbf{u}^{(b)}\cdot \mathbf{n}\right) \, dA$$
There are three possibilities.
(i) We have an arbitrary moving domain with some prescribed boundary velocity $\mathbf{u}^{(b)}$ and (1) cannot be simplified further without more information.
(ii) We have a solid stationary boundary, where $\mathbf{u}^{(b)} \cdot \mathbf{n} = 0$ holds at every boundary point, either as a consequence of the no-slip condition for a viscous fluid or of the impermeability condition for an inviscid fluid. In this case (1) reduces to
$$\tag{2}\frac{dK}{dt} = \int_{D(t)} \frac{\partial K}{\partial t}\, dV $$
(iii) The domain is a material element where $\mathbf{u}^{(b)}= \mathbf{u}$, that is
the velocity at the boundary is the fluid velocity. In this case,
$$\tag{3}\frac{dK}{dt} = \int_{D(t)} \frac{\partial K}{\partial t}\, dV + \int_{\partial D(t)}K\left(\mathbf{u}\cdot \mathbf{n}\right) \, dA$$
Again, the ambiguity of $\mathbf{u} \cdot \mathbf{n}$ remains.
In the above discussion nothing was said about the incompressibility of the fluid and the condition $\nabla \cdot \mathbf{u} = 0$, because it was not necessary to derive either form of the Reynolds transport theorem shown in (1).
What you show in your first equation
$$\frac{d}{dt} K(t) = \int_D \rho \dot{u} \cdot u \, dV,$$
is a consequence of the Reynolds transport theorem given both incompressibility and the specific form $K = \frac{1}{2} \rho |\mathbf{u}|^2$. This follows from
$$ \frac{\partial K}{\partial t} +\nabla \cdot (K\mathbf{u}) = \frac{\partial K}{\partial t} + \underbrace{K \nabla \cdot \mathbf{u}}_{= 0} + (\mathbf{u} \cdot\nabla)K \\ = \frac{\rho}{2} \frac{\partial }{\partial t} (\mathbf{u} \cdot \mathbf{u}) + \frac{\rho}{2}(\mathbf{u} \cdot \nabla) (\mathbf{u} \cdot \mathbf{u}) = \rho \frac{\partial \mathbf{u}}{\partial t} \cdot \mathbf{u} + \rho \left[(\mathbf{u} \cdot \nabla) \mathbf{u} \right] \cdot \mathbf{u} \\ := \rho \dot{u}\cdot u$$
However, this special form does not make it any clearer why the two forms you present should be equal in all circumstances. |
Calculate $\left \| v_1 \right \|, \left \langle v_1,v_2 \right \rangle, \left \| v_1+v_2 \right \|$ | It all seems fine besides the last one, though the answers coincide.
$$\left \| v_1+v_2 \right \|= \sqrt{\langle \begin{bmatrix} 2 \\ 1 \\ 1\end{bmatrix}, \begin{bmatrix} 2 \\ 1 \\ 1\end{bmatrix}\rangle}=\sqrt{2^2+2(1)(1)+1^2}$$ |
$L^1 \subseteq L^2$? | Unfortunately, the above reasoning is indeed flawed; you cannot conclude
$$
\operatorname{ess sup}|X| < \infty,
$$
because even though $|X|$ can attain $\infty$ only on a null set, it can still become arbitrarily large on non-null sets. For instance, $X = 1/\sqrt{U}$ with $U$ uniform on $(0,1)$ satisfies $E|X| = 2 < \infty$ but $\operatorname{ess sup}|X| = \infty$ (and indeed $E|X|^2 = \infty$).
Trigonometric Functions: sine equal at three points. | Obviously $\omega=0$ is a trivial case. If $\omega\ne 0$, it must hold for some integer $k$ that either $$\omega t_i+\phi=\omega t_j+\phi+2k\pi\quad,\quad 1\le i\ne j\le 3$$ (which doesn't hold) or $$\omega t_i+\phi=2k\pi +\pi-\omega t_j-\phi\quad,\quad 1\le i\ne j\le 3$$ (which also doesn't hold). Therefore $\omega=0$ is the only possible case.
Topology defined by a fundamental system of neighbourhoods of zero in a topological group | Basically, because to define a topology we need to define a local base at every point in a consistent way. In a topological group $G$, for every $x$ and $y$ we have a homeomorphism of $G$ that maps $x$ to $y$, just use $h(z) = y*x^{-1}*z$. So we can transport a neighbourhood base of $e$ (the identity of $G$) to any other point of $G$, using such $h$. One then checks this is a consistent assignment and so determines the topology on $G$. |
Proving equivalent statements regarding two sets | Let $x\in U$, i.e. $x$ is any arbitrary element. We need to show that $x\in A' \cup B'$.
There are two cases: $x\in A$ and $x\notin A$.
If $x\in A$, then since $A\subseteq B'$, $x\in B'$ and so $x\in A'\cup B'$.
If $x\notin A$, then $x\in A'$ and so $x\in A'\cup B'$.
In each case, we conclude $x\in A'\cup B'$. Therefore, $$U\subseteq A'\cup B'$$
We already know,
$$A'\cup B'\subseteq U$$
Combining the two we get,
$$A'\cup B'= U$$
as desired.
Hope this helps :) |
linear spaces - simple sum - reasoning | By definition of $U \oplus V= X$, b) holds, as direct summands must have trivial intersection.
a) is false: for example, take $X = \mathbb{R}^2$, $U = \langle(1,0)\rangle$ and $V= \langle(0,1)\rangle$.
Thus c) is also false, as there are elements which lie in neither $U$ nor $V$, but are linear combinations of elements of $U$ and of $V$.
Show that the set {0} with multiplication is a group. | Closure:
Is it true that in the set $\{0\}$, for any two elements $a,b\in\{0\}$, the product $a\times b$ is also an element of $\{0\}$?
Answer:
Yes! Proof:
If $a\in\{0\}$, then $a=0$
If $b\in\{0\}$, then $b=0$.
Therefore, $a\cdot b=0\cdot 0=0$.
Therefore, because $0\in\{0\}$, we conclude $a\cdot b\in\{0\}$.
For associativity, a very similar argument can be made.
Identity:
Is $0$ the identity of $\{0\}$? That is, is it true that for any element $a\in\{0\}$, the element $a\cdot 0=a$?
Answer:
Yes! Proof:
If $a\in\{0\}$, then $a=0$.
Therefore, $a\cdot 0 = 0\cdot 0=0$.
Since $a=0$ and $a\cdot 0=0$, we conclude $a\cdot 0=a$. |
Prove that any maximal interval of existence is open | This is a simple consequence of the definition of a maximal interval, which always has to do with some initial condition. And the union of any family of open intervals containing some time $t_0$ is always an open interval.
EDITED: What is more delicate to prove is that the maximal interval only depends on the initial condition, even if the solutions are not unique. |