title | upvoted_answer |
---|---|
What kind of function is represented by this graph? | $y(x) = x + \lfloor x/4 \rfloor$ for $x \in \mathbb{Z}$. |
Different behaviour of Newton's method for finding minimum of a function | $f(0,1)=-1$.
We'll prove that this is the minimum value.
Indeed, we need to prove that
$$2x^2+y^2-2xy+2x-2y\geq-1$$ or
$$2x^2-2(y-1)x+y^2-2y+1\geq0$$ or
$$x^2+(x-y+1)^2\geq0.$$
Done! |
If a die is thrown $5$ times, what are the odds of getting a $5$ AT LEAST $3$ times | If $X$ is the number of 5s, then $X\sim B\left(5,\frac{1}{6}\right)$. The probability of getting at least 3 fives is $$\mathbb P(X=3)+\mathbb P(X=4)+\mathbb P(X=5)$$
meaning that you have to sum the probabilities for getting 3, 4 and 5 fives.
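As a sanity check, here is a minimal Python sketch (not from the original answer) evaluating the sum with the standard library:

    # P(X >= 3) for X ~ B(5, 1/6): sum the binomial probabilities.
    from math import comb

    p = 1 / 6
    prob = sum(comb(5, k) * p**k * (1 - p) ** (5 - k) for k in range(3, 6))
    print(prob)  # ~0.03549

This agrees with the exact value $\frac{276}{7776}$. |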
Trigonometric Integrals times exponential | For positive integer $a$, you can write $\cos(x) = (\exp(ix) + \exp(-ix))/2$ and expand the $a$'th power using the binomial theorem. You get
$$ \int_0^{\pi/2} 2^{-a} \sum_{j=0}^a {a \choose j} \exp((a + i(a-2j)) x) \; dx
= 2^{-a} \sum_{j=0}^a {a \choose j} \left.\dfrac{\exp((a+i(a-2j))x)}{a + i(a-2j)} \right|_{x=0}^{\pi/2}$$
which after some simplification should give you the answer.
EDIT:
In the case $a=2$, you should get
$$ \dfrac{1}{4} \dfrac{e^{(1+i)\pi} - 1}{2+2i} + \dfrac{1}{2} \dfrac{e^\pi - 1}{2} + \dfrac{1}{4} \dfrac{e^{(1-i)\pi} - 1}{2-2i} = \dfrac{e^\pi-3}{8}$$
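For the $a=2$ case, here is a quick numerical sketch (assuming the integrand is $e^{2x}\cos^2 x$ on $[0,\pi/2]$, as in the expansion above, and that SciPy is available):

    # Numerical check of the a = 2 case against (e^pi - 3)/8.
    from math import cos, exp, pi
    from scipy.integrate import quad

    numeric, _ = quad(lambda x: exp(2 * x) * cos(x) ** 2, 0, pi / 2)
    print(numeric, (exp(pi) - 3) / 8)  # both ~2.5176

Both values agree to quadrature precision. |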
Calculate value of $ f'(0)$ of function $f(x) = \sum_{n=1}^{\infty} \frac {\sin(nx)}{n^3}, f: {\bf R}\to{\bf R}.$ | First you should check whether it is possible to differentiate the series term by term. In this case it is, because $\sum_{n=1}^{\infty} \frac{\sin nx}{n^3}$ and $\sum_{n=1}^{\infty} (\frac{\sin nx}{n^3})'$ converge uniformly on $\mathbb{R}$ (because $|\sin nx| \leq 1$ and $|\cos nx| \leq 1$; now use the Weierstrass M-test), so:
$f'(x)=\sum_{n=1}^{\infty} (\frac{\sin nx}{n^3})'=\sum_{n=1}^{\infty} \frac{ \cos nx}{n^2}$
For $x=0$ we have:
$f'(0)=\sum_{n=1}^{\infty} \frac{1}{n^2}=\frac{\pi^2}{6}$
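A partial-sum sketch of this value (the cutoff is my choice):

    # Partial sums of sum(1/n^2) approach pi^2/6.
    from math import pi

    print(sum(1 / n**2 for n in range(1, 10**6)), pi**2 / 6)
    # 1.6449330..., 1.6449340...

The partial sum sits within about $10^{-6}$ of $\pi^2/6$, matching the tail bound $1/N$. |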
Computing $\sum_{j\neq i}\sum_{k\neq i,j}\sum_{l\neq j,k}(x_l-1)$ | The term $x_i - 1$ appears once for each of $(n - 1)(n - 2)$ choices of the ordered pair $(j, k)$.
For $l \ne i$, the term $x_l - 1$ appears once for each of $(n - 2)(n - 3)$ choices of $(j, k)$.
So the sum is:
\begin{align*}
& \phantom{=}
(n - 1)(n - 2)(x_i - 1) + (n - 2)(n - 3)\sum_{l \ne i}(x_l - 1)
\\ & =
(n - 1)(n - 2)x_i - (n - 1)(n - 2) + (n - 2)(n - 3)\sum_{l \ne i}x_l - (n - 1)(n - 2)(n - 3)
\\ & =
(n - 1)(n - 2)x_i + (n - 2)(n - 3)\sum_{l \ne i}x_l - (n - 1)(n - 2)^2
\\ & = 2(n - 2)x_i + (n - 2)(n - 3)\sum_{l=1}^nx_l - (n - 1)(n - 2)^2.
\end{align*}
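The closed form is easy to check by brute force; here is a sketch with arbitrary test values $n=6$, $i=2$ (the $0$-based indexing is mine):

    # Compare the raw triple sum with the closed form for random x.
    import random

    n, i = 6, 2
    x = [random.random() for _ in range(n)]

    raw = sum(x[l] - 1
              for j in range(n) if j != i
              for k in range(n) if k not in (i, j)
              for l in range(n) if l not in (j, k))
    closed = (2 * (n - 2) * x[i] + (n - 2) * (n - 3) * sum(x)
              - (n - 1) * (n - 2) ** 2)
    print(abs(raw - closed))  # ~0, up to floating-point error

The two values agree up to floating-point error. |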
Updates of Serge Lang — Differential manifolds | It sounds like you have the version of this book from 1971. It was updated and expanded in the 1990's and the most recent version is entitled "Fundamentals of Differential Geometry."
Lang first put out "Introduction to Differentiable Manifolds" in 1962, which was a very useful reference on the basics of Banach and Hilbert manifolds. He has since updated and expanded the text multiple times, giving it a different name each time instead of calling later versions the $n$th edition for $n \geq 2$. Your 1971 version went by "Differential Manifolds," a later version was called "Differential and Riemannian Manifolds," and the latest edition is called "Fundamentals of Differential Geometry." There is actually an "Introduction to Differentiable Manifolds, 2nd Ed." by Lang, which is not any of the above, but rather a version of the original 1962 text that was adapted to be more introductory by only addressing finite-dimensional topics. |
Summation in constraint | Yes, it does not make sense to have $\sum_t$ and $\forall t$ in the same constraint. Also, is the first sum instead supposed to be $\sum_i$? |
Let $A$ be a square matrix of order $n$ over $\mathbb{R}$ such that $A^2-3A+I_n=0_n$. Then $A^{-1}=3I_n-A$ | For square matrices, a one-sided inverse is automatically the (unique) inverse. Let's exhibit a one-sided inverse... $3\,{\rm Id}_n-A$, say? We check: $$A(3\,{\rm Id}_n - A) = 3A - A^2 \stackrel{(\ast)}{=} {\rm Id}_n,$$where in $(\ast)$ we use the hypothesis. |
Probability/Uniform Distribution | If the student takes the third test then his expected mark is $50$, as this is the mean of the uniform distribution on $[0,100]$.
Now suppose the student takes the second test. There is a $\frac{1}{2}$ chance that he will score $50$ or more. In this case he will not take the third test (because if he did, his expected mark would be lower, or at best the same). So his final mark will be what he got for test $2$. As this case is a uniform distribution on $[50,100]$ the expectation is $75$.
On the other hand, there is a $\frac{1}{2}$ chance that he will get less than $50$ on the second test. In this case he will take the third test and his expected mark is $50$.
Putting it all together, if the student takes the second test, his expected score is
$$\frac{1}{2}75+\frac{1}{2}50=62.5\ .$$
So, if he receives the first test mark and it is this much or better, then he should stop and not take any more tests.
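A Monte Carlo sketch of the stage-two strategy (the function name and sample size are mine):

    # Expected final mark when the student enters test 2 and retakes
    # only on a sub-50 score.
    import random

    def final_mark():
        t2 = random.uniform(0, 100)
        return t2 if t2 >= 50 else random.uniform(0, 100)

    n = 10**6
    print(sum(final_mark() for _ in range(n)) / n)  # ~62.5

The simulated mean is close to $62.5$, matching the computation above. |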
1-1 correspondence between $Hom_K(U,V)$ and $K^{m,n}$ | $\textbf{HINT:}$ If $A \in K^{n,m}$ then the map you are looking is left-multiplication by $A$, namely $L_A: U \rightarrow V, x \mapsto xA$. |
Why do $M^\dagger M$ and $MM^{\dagger}$ have the same first $k$ eigenvalues? | Let's say we have an eigenvector $u_k$ for matrix $MM^T$ with eigenvalue $\lambda_k\neq 0$:
$$
\lambda_k u_k = MM^Tu_k.
$$
Then consider how matrix $M^TM$ acts on vector $v_k = M^Tu_k$:
$$
M^TMv_k = (M^TM)M^T u_k = M^T (MM^Tu_k)=M^T\lambda_ku_k=\lambda_k(M^Tu_k) = \lambda_kv_k.
$$
So $v_k$ is an eigenvector for $M^TM$ with the same eigenvalue $\lambda_k$. (Note that $v_k \neq 0$: if $M^Tu_k$ were zero, then $\lambda_k u_k = MM^Tu_k$ would be zero as well, contradicting $\lambda_k \neq 0$.) Finally, since we have $\mathrm{rank}\ M$ non-zero eigenvalues, this works for every one of them.
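A minimal NumPy sketch of this fact (the test matrix is mine):

    # The nonzero eigenvalues of M M^T and M^T M coincide.
    import numpy as np

    rng = np.random.default_rng(0)
    M = rng.standard_normal((3, 5))
    print(np.sort(np.linalg.eigvalsh(M @ M.T))[::-1])      # 3 eigenvalues
    print(np.sort(np.linalg.eigvalsh(M.T @ M))[::-1][:3])  # the same 3; rest ~0

The three nonzero eigenvalues match; the remaining eigenvalues of $M^TM$ are numerically zero. |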
Finding the area of a $15-75-90$ triangle with the length of the hypotenuse included without using trigonometric functions. | Here is your triangle with just one extra segment inscribed in it:
Now you have a $30$-$60$-$90$ triangle, whose ratios you presumably know.
That is, you know the ratios of $AC$ and $AD$ to $CD.$
But also $BD=CD$ and $AB = AD + BD,$ so you have the ratio $AB : CD,$
and now you can use the Pythagorean Theorem to get the ratio $BC: CD.$
But $BC = 12,$ and using the ratios you have found you can assign lengths to all the other segments, in particular $AB$ and $AC.$
Then you can find the area. |
Rudin: Union of countable sets | It follows from Rudin's theorem, in several ways. You can modify the proof a bit and do another diagonalisation argument; or, if $A,B$ are countable, you put $A_1:= A$, $A_2:= B$ and take arbitrary countable sets $A_3, A_4, \dots$ (say, all equal to $B$). Then we have
$$A \cup B \subseteq A_1 \cup A_2 \cup A_3 \cup \dots$$
and by Rudin's theorem, the latter is countable, so $A\cup B$ as well.
Alternatively, suppose $A$ and $B$ are countable, write the elements of $A$ in a sequence $\{a_n\}_{n=1}^\infty$ and the elements of $B$ in a sequence $\{b_n\}_{n=1}^\infty$, and consider the sequence
$$a_1, b_1, a_2, b_2, a_3, b_3, \dots$$
and this gives a surjection $\mathbb{N} \to A \cup B$. Thus $|A \cup B| \leq |\mathbb{N}|$. |
Quotient of Confluent Hypergeometric Functions of the 1st Kind | If $a\neq -1,-2,-3,\dots,$ and $a-b\neq 0,1,2,\dots,$ then the ratio of confluent hypergeometric functions can be expressed as the continued fraction:
\begin{equation}
\frac{{_{1}}F_{1}(a,b,z)}{{_{1}}F_{1}(a+1,b+1,z)}=1+\cfrac{u_{1}z}{1+\cfrac{u_{2}z}{1+\cdots}}
\end{equation}
where $u_{2n+1}=\frac{a-b-n}{(b+2n)(b+2n+1)}$ and $u_{2n}=\frac{a+n}{(b+2n-1)(b+2n)}$.
This continued fraction can be truncated at any $u_{n}$, which yields a ratio of polynomials in $z$. Taking the quotient of the polynomials and solving for $z$ will give an approximation to the root in question. Truncating at larger $n$ will yield a closer approximation.
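Here is a sketch of such a truncation, checked against mpmath's hyp1f1 (the function cf_ratio and the test parameters are mine; the coefficients are transcribed from the formula above):

    # Evaluate the truncated continued fraction bottom-up and compare
    # with the direct ratio of confluent hypergeometric functions.
    from mpmath import hyp1f1, mpf

    def cf_ratio(a, b, z, depth=20):
        def u(m):
            n = m // 2
            if m % 2:  # u_{2n+1}
                return mpf(a - b - n) / ((b + 2 * n) * (b + 2 * n + 1))
            return mpf(a + n) / ((b + 2 * n - 1) * (b + 2 * n))  # u_{2n}
        frac = mpf(0)
        for m in range(depth, 0, -1):
            frac = u(m) * z / (1 + frac)
        return 1 + frac

    a, b, z = mpf(1) / 3, mpf(3) / 2, mpf(1) / 4
    print(cf_ratio(a, b, z), hyp1f1(a, b, z) / hyp1f1(a + 1, b + 1, z))

At moderate depth the two values should agree to many digits. |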
What is a explanation in math to High School Students that why prove/disprove riemann hypothesis is important? | You might look at Dan Rockmore's book, "Stalking the Riemann Hypothesis", http://books.google.ca/books?id=cTVn9f9oKAgC
There are lots of connections to very cool things, but I don't know what would strike a high school student as important. |
I am looking for a general solution for $xy+x+y=n^2$ anyone able to help? | A hint/suggestion: add $1$ to both sides, getting $xy+x+y+1=n^2+1$. You should be able to factor the LHS now and see how to get values of $x$ and $y$ from any $n$ such that $n^2+1$ is composite. |
How would I get an intuative idea of what this transformation does to the unit square? | Let us consider that the graphics you have provided is in the $(X,Y)$ image plane (uppercase X,Y versus lowercase $x,y$ for the initial space $[0,1] \times [0,1]$).
The relationships describing the transformation
$$\tag{1} \cases{X=ye^x & (a) \\ Y=2x & (b)}$$
can be interpreted thus: the image consists of those $(X,Y)$ having a preimage $(x,y)$, i.e., such that (1) holds for some $x,y \in [0,1]$.
Let us (partially) invert these relationships.
Taking the logarithm of relationship (1a) and solving (1b) for $x$, system (1) is equivalent to:
$$\tag{2}\cases{\ln(X)=\ln(y)+\frac{Y}{2} & (a) \\ x=\frac{Y}{2} & (b)}$$
But the condition $y \in (0,1]$ is equivalent to the condition $\ln(y) \leq 0$.
Thus equation (2a) is equivalent to saying that $\ln(X)\leq\frac{Y}{2}$.
Otherwise said, it means that the whole region described by the inequality
$$Y\geq 2 \ln(X)$$
works.
But we must take into account the constraint brought by (2b): $0 \leq Y \leq 2$.
We have thus proved that the region you have found is the correct locus (see the picture below with the images of $2000$ random points in $[0,1] \times [0,1]$).
Remark: A completely different approach would have been to consider that the different constituents of the boundary of the image are the images of the four sides of the initial square.
For example, the image of the vertical side along the $y$ axis, that can be parameterized as the set of points $(x,y)=(0,t), \ \ t \in [0,1]$ is the set of points $(te^0,2 \times 0)=(t,0)$, i.e, is the horizontal side. Whereas the upper horizontal side which is the set of points $(x,y)=(t,1), \ \ t \in [0,1]$ is mapped onto the set of points $(1 \times e^t,2t)$ which is a parametric representation of the arc of logarithm curve.
A drawback of this method is that the region enclosed by the boundary curve is not necessarily mapped onto the region delimited by the image of the boundary. That issue does not arise here, but for the sake of rigor it is better to work with inequalities and equivalent inequalities, as we did in the first part. |
Probability ( Deck of Cards ). | Usually, in English, "three of a kind" means three aces, not three spades. That is, "kind" means the same denomination (e.g. ace, king, queen,...) not suit (spades, hearts, diamonds, clubs.) Your calculation, using $\binom{13}{3}$ computes picking three cards from a single suit, not three cards of the same denomination.
There are $\binom{13}{1}\binom{4}{3}$ ways to pick $3$ cards of the same denomination. There are $\binom{12}{2}\cdot 4\cdot 4$ ways to pick the remaining two cards of differing denominations.
So the probability is:
$$\frac{\binom{13}{1}\binom{4}{3}\cdot \binom{12}{2}\cdot 4\cdot 4}{\binom{52}{5}}$$
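As a numerical check with the standard library:

    # Exact three-of-a-kind probability for a 5-card hand.
    from math import comb

    p = comb(13, 1) * comb(4, 3) * comb(12, 2) * 4 * 4 / comb(52, 5)
    print(p)  # ~0.02113

This is the familiar $\approx 2.1\%$ figure for three of a kind. |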
is this solution of $AX=0$ in space VA? | OK, it is! We are working modulo $11$, and $X_4=\begin{bmatrix}0 \\ 2 \\ 1 \\ 1\end{bmatrix}=2\begin{bmatrix}9 \\ 1 \\ 0 \\ 0\end{bmatrix}+1\begin{bmatrix}8 \\ 0 \\ 1 \\ 0\end{bmatrix}+1\begin{bmatrix}7 \\ 0 \\ 0 \\ 1\end{bmatrix} \pmod{11}$ |
what is a pure periodic function? | The usual term in mathematics is "periodic"; "pure" is just there for emphasis, I suspect. A function $f$ on $\mathbb R$ is periodic with period $p$ if $f(x+p) = f(x)$ for all $x$. |
Number of ways to label a die | First, notice that there are $24$ ways of turning the die. This is because if you consider one particular face of the die, there are exactly $4$ ways of turning it so that this face is always in the same place. Then do the same for the other faces, and get $4 \times 6 = 24$.
$6!$ is the number of ways to number the die but this also counts the $24$ possible turns of the die. So the answer is $\frac{6!}{24}=30$. |
Proving Theorem regarding LHRCC (Linear Homogenous Recurrence Relations) with two distinct roots | Guide:
For backward direction:
If we already know that $a_n = \alpha_1r_1^n + \alpha_2r_2^n$ where $\alpha_1$ and $\alpha_2$ are constant,
$$c_1a_{n-1}+c_2a_{n-2}=c_1(\alpha_1r_1^{n-1}+\alpha_2r_2^{n-1})+c_2(\alpha_1r_1^{n-2}+\alpha_2r_2^{n-2})$$
Try to simplify this expression to $a_n$. You want to use the property that $r_1$ and $r_2$ are roots of a particular quadratic equation.
For forward direction:
If $a_n = c_1a_{n-1} + c_2a_{n-2}$,
we have $$\begin{bmatrix} a_{n-1} \\ a_n\end{bmatrix} = \begin{bmatrix} 0 & 1 \\ c_2 & c_1 \end{bmatrix}\begin{bmatrix} a_{n-2} \\ a_{n-1}\end{bmatrix}=\begin{bmatrix} 0 & 1 \\ c_2 & c_1 \end{bmatrix}^{n-1}\begin{bmatrix} a_{0} \\ a_{1}\end{bmatrix}$$
You might want to diagonalize the matrix $\begin{bmatrix} 0 & 1 \\ c_2 & c_1 \end{bmatrix}$.
You might want to check what the characteristic polynomial corresponding to that matrix is.
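For a concrete sketch, here is the matrix iteration on the Fibonacci case $c_1=c_2=1$ (my choice of test values):

    # Iterating the companion matrix reproduces the recurrence
    # a_n = c1*a_{n-1} + c2*a_{n-2} (Fibonacci when c1 = c2 = 1).
    import numpy as np

    c1, c2 = 1, 1
    T = np.array([[0, 1], [c2, c1]])
    v = np.array([0, 1])  # (a_0, a_1)
    for n in range(2, 9):
        v = T @ v
        print(n, v[1])  # 1, 2, 3, 5, 8, 13, 21

Each step maps $(a_{n-2},a_{n-1})$ to $(a_{n-1},a_n)$, exactly as the matrix form asserts. |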
Question about open set of finite measure and Fubini's theorem. | Note that
$$
I(x) = \int_{F^c} \frac{\delta(y)}{|x-y|^2}dy
$$
and, with a view to use Fubini-Tonelli, we have to prove that
$$
\int_{F^c} \left (\int_F \frac{\delta(y)}{|x-y|^2}dx\right ) dy < \infty
$$
However, for any $y \in F^c$, one has
$$
\int_F \frac{1}{|x-y|^2}dx \leq \int_{|x-y|\geq \delta(y)} \frac{1}{|x-y|^2}dx = 2\int_{\delta(y)}^{\infty} \frac{dt}{t^2} = \frac{2}{\delta(y)}
$$
and so
$$
\int_{F^c} \left (\int_F \frac{\delta(y)}{|x-y|^2}dx\right ) dy \leq \int_{F^c} \delta(y)\frac{2}{\delta(y)}dy = 2m(F^c) < \infty
$$
Now the conclusion follows from what you have already observed. |
Is there an example of an algebraic surface with a (-2)-curve and... | Let $S$ be the blowup of $\mathbf{P}^2$ at eleven smooth points on a cubic curve $C'$. Then, we have an effective curve
$$C \in \lvert 3\ell - E_1 - E_2 - \cdots - E_{10} - E_{11}\rvert$$
corresponding to the strict transform of the cubic $C'$, where $\ell$ denotes the strict transform of a line in $\mathbf{P}^2$ and $E_i$ the exceptional divisors. Then, $C$ is a $(-2)$-curve:
$$C^2 = \left(3\ell - \sum E_i\right)^2 = 9 - 11 = -2,$$
but we have
$$K_S = -3\ell + \sum E_i,$$
hence
$$K_S + C \sim 0$$
as required. |
Equality of two series. | For given $n\in{\mathbb N}_{\geq1}$ the function
$$f:\>{\mathbb Z}\to{\mathbb R},\quad k\mapsto\left\lfloor{k\over n}\right\rfloor -{k\over n}$$
is periodic with period $n$. Now do a discrete Fourier transform on $f$, and you will obtain a finite trigonometric sum representing $f$.
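As a numerical sketch of what the transform produces (the choice $n=5$ is mine):

    # DFT of one period of f(k) = floor(k/n) - k/n, for n = 5.
    import numpy as np

    n = 5
    f = np.array([np.floor(k / n) - k / n for k in range(n)])
    print(np.fft.fft(f) / n)  # coefficients of the finite trigonometric sum

The inverse transform then writes $f$ as a finite sum of exponentials $e^{2\pi i jk/n}$. |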
Evaluating $\int xy \ d(xy)$ | One of the wonderful things about differentials (as opposed to, say, partial derivatives) is that they don't care about independent/dependent variables, interact extremely well with algebraic manipulations, and so forth.
For example, you know that $\mathrm{d}\left( \frac{1}{2} t^2 \right) = t \, \mathrm{d}t$, and if we have $t = xy$ then it immediately follows that $\mathrm{d}\left( \frac{1}{2} (xy)^2 \right) = xy \, \mathrm{d} xy$. Thus, $\frac{1}{2}(xy)^2$ is an antiderivative of $xy \, \mathrm{d}xy$.
And this fact is true always; when $x$ and $y$ are: independent, dependent on each other, dependent on additional variables, or even constant!
Now, an important point that you've overlooked is, just as there is a difference between the differential $\mathrm{d}$ and partial derivatives, there is a difference between the 'antidifferential' and the partial antiderivatives.
When you calculated $\int x^2 y\,\mathrm{d}y = \frac{1}{2} x^2y^2$, you computed a partial antiderivative. That is, you antidifferentiated subject to the constraint that $x$ is held constant. This is no good, since you were trying to invert the differential, not a partial derivative!
The same happens if you consider definite integration instead of indefinite integration — the appropriate integral would be a path integral $\int_\gamma x^2 y \, \mathrm{d} y $, where $\gamma$ is a path restricted to the one-dimensional space of allowed values of $(x,y)$. Arbitrary paths in the plane are disallowed — such as the vertical paths that would be analogous to the partial antiderivative in $y$.
Now, partial antidifferentiation can be used to compute an antidifferential. If $x$ and $y$ are independent variables and $z$ a scalar depending on them, then
$$ \mathrm{d} z = \frac{\partial z}{\partial x} \mathrm{d} x +\frac{\partial z}{\partial y} \mathrm{d} y $$
(where, as usual, $\partial/\partial x$ means to hold $y$ constant and vice versa)
So, if an antidifferential exists, we can get information from partial antiderivatives; e.g.
$$ \int \frac{\partial z}{\partial x} \mathrm{d} x = z + c(y) $$
(where, as usual, $\int \ldots \mathrm{d}x$ means the partial antiderivative while holding $y$ constant). Then, if the solution is not obvious, take the differential again (or just take the partial derivative in $y$) to get a differential equation you can solve for $c(y)$.
For example, in the given problem, if we define $z$ be such that
$$ \mathrm{d}z = x^2 y \, \mathrm{d}y + x y^2 \, \mathrm{d}x $$
then in the generic domain where $x$ and $y$ are independent, the partial antiderivatives in $y$ and $x$ you computed imply, respectively,
$$ z = \frac{x^2 y^2}{2} + C_1(x) $$
$$ z = \frac{x^2 y^2}{2} + C_2(y) $$
at which point the general solution is clear:
$$ z = \frac{x^2 y^2}{2} + C $$
And as before, since the equation
$$ \mathrm{d}\left(\frac{x^2 y^2}{2} + C\right) = x^2 y \, \mathrm{d}y + x y^2 \, \mathrm{d}x $$
holds when $x$ and $y$ are independent, it holds always. |
Clash of Clans. What's the total time to train? | To get a lower bound on the answer, let's oversimplify the problem at first.
(We can later correct the error that this introduces.)
Assume the first two barracks produce a continuous "flow" of barbarian-stuff
at the rate of $2$ barbarian-units every $20$ seconds,
which is to say, $\frac{1}{10}$ barbarian-unit per second.
Since one barbarian-unit fills one housing-unit, the rate at which
the first two barracks fill the camp is
$\frac{1}{10}$ housing-unit per second.
Similarly, the next two barracks produce archer-stuff at a rate of
$\frac{2}{25}$ archer-units per second,
filling $\frac{2}{25}$ housing-units per second.
The last two barracks produce minion-stuff at the rate
of $\frac{2}{45}$ minion-units per second; since each minion occupies two
housing-units, this fills $\frac{4}{45}$ housing-units per second.
The total flow of "stuff" into the camp is the sum of all three flows,
$$ \frac{1}{10} + \frac{2}{25} + \frac{4}{45}
= \frac{45}{450} + \frac{36}{450} + \frac{40}{450}
= \frac{121}{450} $$
measured in housing-units per second.
The number of seconds this would take to fill $200$ housing units is
$$ \frac{200}{\left(\frac{121}{450}\right)} = 200 \times \frac{450}{121}
\approx 743.8. $$
Clearly this is an oversimplification and has produced an error.
The camp cannot fill at some time like this; since barbarians, archers,
and minions are produced only at times that are multiples of $5$, the camp
must fill at a time that is a multiple of $5$.
In fact, in the first $743.8$ seconds the first two barracks will have
produced barbarians $37$ times (because $37 \times 20 = 740$),
the next two will have produced archers $29$ times
(because $29 \times 25 = 725$)
and the last two will have produced minions $16$ times
(because $16 \times 45 = 720$).
Taking into account the number of housing units occupied by each
barbarian/archer/minion produced, the number of housing units
that the barracks will have filled in this time is
$$ 37 \times 2 + 29 \times 2 + 16 \times 4 = 74 + 58 + 64 = 196. $$
So we just need to fill four more housing units.
The next barbarians will be produced at $760$ seconds,
the next archers at $750$ seconds, and
the next minions at $765$ seconds.
So the next event is at $750$ seconds; we get two archers, raising
the number of occupied housing units to $198$, and the next archers are
due at $775$ seconds. The next event after this is at $760$ seconds;
we get two barbarians, and the number of occupied housing units is now $200$,
broken down as follows:
\begin{array}{lll}
76 \text{ barbarians} & \times 1 \text{ unit/barbarian} & = 76 \text{ units}\\
60 \text{ archers} & \times 1 \text{ unit/archer} & = 60 \text{ units}\\
32 \text{ minions} & \times 2 \text{ unit/minion} & = 64 \text{ units}\\
\end{array}
and the process completes in $760$ seconds.
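This bookkeeping is easy to double-check with a tiny event loop; here is a sketch under the same assumptions (no instant first batch, production only at multiples of $5$ seconds):

    # Fill 200 housing units: 2 barbarians (1 unit each) every 20 s,
    # 2 archers (1 unit each) every 25 s, 2 minions (2 units each) every 45 s.
    units, t = 0, 0
    while units < 200:
        t += 5
        if t % 20 == 0:
            units += 2  # barbarians
        if t % 25 == 0:
            units += 2  # archers
        if t % 45 == 0:
            units += 4  # minions
    print(t, units)  # 760 200

The loop stops at $760$ seconds with exactly $200$ units filled, as found above.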
The numbers in your linked image ($76, 56, 34$)
would take (respectively) $760$ seconds, $700$ seconds, and $765$ seconds
to be produced at the rates stated in your question;
what happened to the archers that should have been produced
at $725$ seconds and $750$ seconds after the start?
Do the barracks start production at different times?
Or do they each produce their first batch of barbarians/archers/minions
instantly?
If the answer is that the first batch is produced instantly,
then at the very start we already have two barbarians, two archers,
and two minions, occupying $8$ housing units.
We therefore need to fill just $192$ more units;
in the oversimplified model, the number of seconds this would take is
$$ \frac{192}{\left(\frac{121}{450}\right)} = 192 \times \frac{450}{121}
\approx 714.05. $$
At this time we would have $72$ barbarians (the two we started with,
plus $70$ more produced in the first $700$ seconds),
with the next ones due at $720$ seconds,
$58$ archers, with the next ones due at $725$ seconds,
and $32$ minions, with the next ones due at $720$ seconds.
So far, the number of housing units occupied is
$$ 72 + 58 + 32 \times 2 = 194. $$
The next event is at $720$ seconds: we get two barbarians and two minions,
occupying a total of $6$ housing units, so now the units occupied are
$$ 74 + 58 + 34 \times 2 = 200, $$
which (almost) agrees with the numbers in your image,
but now we have more archers and fewer barbarians.
So I suspect that after all there is some other discrepancy in "starting times"
among the six barracks. |
Solving Recurrence $T(n) = T(n − 3) + 1/2$; | The crucial observation is that the sequence occurs in blocks of 3, so for each $n$ we need to find out "which block of 3 is $n$ in". So using $\lfloor n/3\rfloor$ or $\lceil n/3\rceil$ would be good.
Observe the pattern:
$$\begin{array}{c}
n & T(n) & \lfloor n/3\rfloor+1\\\hline
0 & 2/2 & 1\\\hline
1 & 2/2 & 1\\\hline
2 & 2/2 & 1\\\hline
3 & 3/2 & 2\\\hline
4 & 3/2 & 2\\\hline
5 & 3/2 & 2\\\hline
6 & 4/2 & 3\\\hline
7 & 4/2 & 3\\\hline
8 & 4/2 & 3\\\hline
\end{array}$$
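The table suggests the closed form $T(n)=\frac{\lfloor n/3\rfloor+2}{2}$; here is a sketch of the check, assuming base values $T(0)=T(1)=T(2)=1$:

    # The recurrence T(n) = T(n-3) + 1/2 next to the conjectured closed form.
    def T(n):
        return 1 if n < 3 else T(n - 3) + 0.5

    for n in range(12):
        assert T(n) == (n // 3 + 2) / 2
    print("closed form matches")

The assertions pass, so the pattern holds at least this far. |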
Use the Laplace transform to solve the initial value problem. | You do not need to convert $e^{-t}$ to a Dirac delta function; it is a Dirac delta that gets converted to an $e^{at}$ form to reduce a function, not the other way around.
Secondly, the Laplace transform works with conditions at $0$, but the conditions here are given at $t=2$. So shift the variable: consider $n=t-2$, so that $t=n+2$ and the conditions sit at $n=0$.
The given equation can then be written as $y''(n+2)-3y'(n+2)+2y(n+2)=e^{-(n+2)}$.
Take $u(n)=y(t)=y(n+2)$, so that $u'(n)=y'(n+2)$ and $u''(n)=y''(n+2)$;
also, $u(0)=y(0+2)=1$ and $u'(0)=y'(0+2)=0$.
Hence the IVP under the given conditions becomes $u''-3u'+2u=e^{-(n+2)}$.
You can now apply the Laplace transform to both sides to get your answer. |
Using an Euler diagram determine if the argument is valid or invalid? | You are probably not looking at an Euler diagram, but a Venn diagram. A Venn diagram shows all possible overlaps, but just because circles overlap does not mean that they have something in common. Indeed, the information embodied in the premises can force certain of those overlapping regions to be empty. And even if some region is not forced to be empty, that still does not mean that there has to be something in that region. |
"Foldable" functions | Any associative operator gives rise to (a family of) such functions, like $\sum_{1 \le i \le n} x_i$, $\prod_{1 \le i \le n} x_i$, $\bigcap_{1 \le i \le n} x_i$. Even the maximal common divisor and the minimal common multiple qualify. The cases $\min$ and $\max$ are just the natural extensions of those binary operations to several arguments. |
Would the following sum give me the following numbers? | Yes. That is what the sum would mean.
$\sum_{k=1}^{9}\sum_{j=0}^{9}k*100+10*j+k = $
$\sum_{k=1}^9([100k + 0 + k]+ [100k + 10 + k] + [100k + 20 + k] + .....+[100k + 90 + k]) =$
$(101 + 111 + 121 + 131 + ...... 191) + $
$(202 + 212 + 222 + 232 + .......292) + $
$......$
$(909+919+929+939+ .....999)$
Alternatively it could also be expressed:
$\sum_{k=1}^9(\sum_{j=0}^9 (k\cdot 100 + 10\cdot j + k)) =$
$\sum_{k=1}^9(\sum_{j=0}^9 k\cdot 100 + \sum_{j=0}^9 10\cdot j + \sum_{j=0}^9 k)=$
$\sum_{k=1}^9 (10\cdot k\cdot 100 + (\sum_{j=0}^9 10\cdot j)+10\cdot k) =$
$\sum_{k=1}^9 (1010k + \sum_{j=0}^9 10\cdot j)$
Can you finish that up to figure out what the sum is?
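(A brute-force spoiler check, as a short sketch:)

    # Brute force vs. the simplified form above (spoiler: both are 49500).
    print(sum(k * 100 + 10 * j + k for k in range(1, 10) for j in range(10)),
          sum(1010 * k + 450 for k in range(1, 10)))

Both expressions give the same total, so the remaining simplification has a concrete target. |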
Find upper triangular matrix which is similar to diagonal matrix possible? | No, you tweak the eigenvectors of $T$ but still preserve $0<\langle e_1\rangle<\langle e_1,e_2\rangle<\dots<\mathbb{F}^n$.
For example: $M=\begin{bmatrix}-1\\&1\end{bmatrix}$ and you want to introduce a nonzero coefficient in the upper-right in $T$. You try something like
$$
\begin{bmatrix}1&1\\0&1\end{bmatrix}
\begin{bmatrix}-1&0\\0&1\end{bmatrix}
\begin{bmatrix}1&1\\0&1\end{bmatrix}^{-1}
$$ |
Eigenvalues of $A$ vs eigenvalues of $2A$ | Assume $v$ is an eigenvector of $A$ corresponding to the eigenvalue $\lambda$.
Then $(2A)v = 2(Av) = 2\lambda v$, so $v$ is an eigenvector of $2A$ with eigenvalue $2\lambda$. |
How do I find the radius of convergence of this powerseries? | For your first question,
If a general power series is of the form $\sum_{n=0}^{\infty}a_nz^n$ then for your power series:$$a_0=a_{2n+1}=0\text{ for all }n\geq0\\a_{2n}=\frac{(-1)^n}{(2n-1)!}\text{ for all } n\geq1$$
For your second question,
$$\lim \sup |a_n|^{\frac{1}{n}}=\lim \sup |a_{2n}|^{\frac1{2n}}=0$$
Thus, $R =\infty.$ |
The sum of three primes is 100, one of them exceeds the other by 36, find the largest one | The sum of three odd numbers cannot be even, so one of the primes must be even, i.e., equal to $2$. Let one of the other primes be $p_1$; then the third prime is $p_1+36$. Now, according to the question, $$2+p_1+p_1+36=100$$ $$2p_1=62$$ $$p_1=31$$ Hence, the primes are
$$2,31,67$$ |
Discrete Mathematics - POSETs | Your solutions to $(2)$ and $(3)$ are correct.
$(1)$ is a bit tricky. You might think that the Hasse diagram below would do the trick, with $\{b,c\}$ as a subset with no infimum:
a
/ \
b c
However, the supremum of the empty set is the infimum of the whole partial order, so if every subset has a supremum, the partial order must have a least element, which the one shown clearly does not. In fact the situation described in $(1)$ is impossible. Suppose that $\langle P,\le\rangle$ is a partial order in which every subset has a supremum, and let $S\subseteq P$. Let $L$ be the set of all lower bounds for $S$. By hypothesis $L$ has a supremum $x$. If $s\in S$, then $s$ is an upper bound for $L$, so $x\le s$. Thus, $x$ is actually the greatest element of $L$ and as such is clearly the infimum of $S$.
You are correct that $(4)$ is impossible for a finite partial order. If $\langle P,\le\rangle$ is a finite partial order, and $x$ is minimal in $P$ but not the least element of $P$, let $Q=\{p\in P:p\not\le x\}$. Then $Q$ is finite and non-empty, so it has a minimal element, say $q$, and I’ll leave it to you to verify that $q$ is a minimal element of $P$ distinct from $x$.
It is possible, however, if $P$ is infinite, as you can see from the following Hasse diagram:
*
/ \
* x
|
*
|
*
|
·
·
·
Here $x$ is minimal, but the partial order clearly has no least element. |
Hoffman & Kunze exercise | Hint: $U$ is orthogonally diagonalizable as $U$ is self-adjoint. Every eigenvalue of $U$ must be real (by self-adjointness) and of modulus one (by unitarity). So the eigenspaces $W_+ = \ker (U-1)$ and $W_- = \ker (U +1)$ are orthogonal with sum $V$. |
Find a basis for $W$, where $W=\{(x_1,...,x_5)\in \mathbb{R}^5 : x_1=3x_2+x_3, x_2=x_5, \text{ and }x_4=2x_3\}$ | Hint: Note that an arbitrary vector of $W$ has the form $$v=\begin{pmatrix}3x_{2}+x_{3}\\ x_{2} \\ x_{3}\\ 2x_{3} \\x_{2}\\ \end{pmatrix}\in W$$where $x_{2},x_{3}\in \mathbb{R}$. |
Proof: $(a^2+h)^{1/2} \approx a+\frac{h}{2a}$ for $0<h<a^2$ | You could then try Bernoulli's inequality
$$(a^2+h)^{1/2}=a(1+h/a^2)^{1/2}\le a\left(1+\frac {h}{2a^2}\right)$$
So we could state it as $$(a^2+h)^{1/2}\approx a+\frac {h}{2a}$$
By the way, in calculus the method you stated is known as linearisation; it works because the higher powers of the binomial terms become negligible. |
By inspection, find the inverse of the given one to one matrix operator | Hints:
a) The transformations here are
reflection
dilation, contraction (scaling)
rotation
These are linear transformations, so you can restrict yourself to $\mathbb{R}^{2\times 2}$ matrices for the 2D vectors and $\mathbb{R}^{3\times 3}$ for the 3D vectors.
(The more general case would involve affine transformations, and thus homogeneous coordinates needing an extra coordinate to use convenient matrices for the transformations)
I would as well interpret "inspection" as "try it out". So you could think how a matrix $A$ of the above kind would act on an input vector $x = (x_i)$ to produce the wanted output vector $y = A x$, with $y_i = \sum_j a_{ij} x_j$.
b) Let us look at the easy one, 2.: We need to find a matrix $A$ such that
$$
A x = 5 x \iff \\
\begin{pmatrix}
5 x_1 \\
5 x_2
\end{pmatrix}
=
\begin{pmatrix}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{pmatrix}
\begin{pmatrix}
x_1 \\
x_2
\end{pmatrix}
=
\begin{pmatrix}
a_{11} x_1 + a_{12} x_2 \\
a_{21} x_1 + a_{22} x_2
\end{pmatrix}
$$
you could now figure out the coefficients $a_{ij}$ from component-wise comparison. Then you could derive the inverse from it.
Alternative: What is the inverse of increasing by a factor 5? Using the operation and then its inverse must give the original vector. You could go directly for the matrix of the inverse operation.
c) Now try 1.:
Here a vector has three coordinates $x = (x_1, x_2, x_3)^t$. A reflection at the $x$-$y$-plane affects the $z$-coordinate. So the problem is to find an $A$ such that
$$
A (x_1, x_2, x_3)^t = (x_1, x_2, -x_3)^t
$$
The plane itself consists of the vectors $(x_1, x_2, 0)^t$ which will not be changed by the reflection.
d) Regarding 3.: If we mirror a vector $x$ at some axis $a$, we get an image vector $y$, and we expect the axis $a$ to perpendicularly bisect the segment connecting $x$ and $y$. A vector from the origin to a point on the axis should not be changed by the transformation.
e) Regarding 4.: If you solved 2. this should be a piece of cake. The additional dimension should be easy to handle.
f) Regarding 5.: @EmilioNovati pointed already out in the comments below, that this problem lacks information to nail it down to a unique solution. You need to know around which axis you rotate.
Rotations themselves have the property that they preserve the length of the vectors. This they share with reflections. Also there is a restriction on how the coordinates are permuted, the determinant of the matrix must be $+1$.
The easy case is to figure out how a 2D vector gets changed by a rotation in the $x$-$y$-plane (around the $z$-axis). This idea extends to the 3D case, and has to involve extra transformations if the axis of rotation is not parallel to one of the coordinate axes or does not pass through the origin. |
How many ways are there for $2$ teams to win a best of $7$ series? | We count the ways in which Team A can win the series, and double the result. To count the ways A can win the series, we make a list like yours.
A wins in $4$: There is $1$ way this can happen.
A wins in $5$: A has to win $3$ of the first $4$, and then win. There are $\binom{4}{3}$ ways this can happen.
A wins in $6$: A has to win $3$ of the first $5$, then win. There are $\binom{5}{3}$ ways this can happen.
A wins in $7$: A has to win $3$ of the first $6$, then win. There are $\binom{6}{3}$ ways this can happen.
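These counts give $1+4+10+20=35$, which we then double; as a sketch:

    # Ways for Team A to win, then double for either team.
    from math import comb

    ways_A = 1 + comb(4, 3) + comb(5, 3) + comb(6, 3)
    print(ways_A, 2 * ways_A)  # 35 70

So there are $70$ possible win/loss sequences in total. |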
Maximal abelian subalgebras of different dimensions | Q.1 Any 1-dimensional Lie subalgebra is abelian, so it is contained in at least one maximal abelian subalgebra. So taking any non-semisimple element we find an abelian subalgebra with non-semisimple elements, and thus there exist maximal abelian subalgebras with non-semisimple elements.
Q.2 I think this is best illustrated with some examples:
$$ D = \left\{ \begin{pmatrix}
a_1 & \dots & 0 \\
\vdots & \ddots & \vdots \\
0 & \dots & a_{2n} \\
\end{pmatrix} : a_i \in \mathbb{C} \right\} $$
$$ A = \left\{ \begin{pmatrix}
0 & 0 \\
B & 0 \\
\end{pmatrix}: B \in \mathfrak{gl}(n,\mathbb{C})\right\} $$
These are both maximal abelian, and $A$ has dimension $n^2$ whereas $D$ has dimension $2n$. In terms of semisimplicity of $L$, these examples fit into the solvable Lie algebra of lower triangular matrices. But I suspect that it is possible in semisimple Lie algebras as well. |
Finding possible determinant values of 3x3 matrix using an equation | If $4A=A^7$, then $\det(4A)=\det(A^7)=[\det(A)]^7=4^3\det(A)$
Since $\det(cA)=c^n\det(A)$
Let $\det(A)=x$; then we must solve $64x=x^7$.
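A numerical sketch of the resulting real solutions (assuming NumPy):

    # Real roots of x^7 - 64x = x (x^3 - 8)(x^3 + 8).
    import numpy as np

    roots = np.roots([1, 0, 0, 0, 0, 0, -64, 0])
    print(sorted(r.real for r in roots if abs(r.imag) < 1e-9))  # [-2.0, 0.0, 2.0]

So the real possibilities are $\det(A)\in\{-2,0,2\}$. |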
What is the expected number of turns for two dots to meet | Moving pin $1$ clockwise at a rate of $1$, and pin $2$ clockwise at a rate of $2$ is the same as keeping pin $1$ still, and moving pin $2$ clockwise at a rate of $2-1=1$. Does that help? Do you know how to do the expected value calculation from there? Just think of pin $1$ as fixed, and think about the number of steps to reach it from all of the different places pin $2$ could land. |
Can the level set of the resolvent norm be constant on a set of positive measure? | It may be worth noting that the reciprocal of the resolvent norm is subharmonic. Although I am not sure how to convert this to an answer. |
The integral $\int_{\varepsilon}^1 r^n(1-r)^{k-n}\,dr $ | Using a CAS, the following result was obtained $$\int_{a}^1 r^n(1-r)^{k-n}\,dr=\frac{\Gamma (n+1) \Gamma (k-n+1)}{\Gamma (k+2)}-\frac{a^{n+1} \,_2F_1(n+1,n-k;n+2;a)}{n+1}$$ You can expand the hypergeometric function as a Taylor series and then get as an approximation $$\frac{a^n \left(-\frac{\Gamma (k+2)\, a}{n+1}+\frac{(k-n)\, \Gamma (k+2)\, a^2}{n+2}+O\left(a^3\right)\right)+\Gamma (n+1) \Gamma (k-n+1)}{\Gamma (k+2)}$$
Added later
$$\int_{a}^1 r^n(1-r)^{k-n}\,dr=\frac{1}{(k+1)\dbinom k n}-B_a(n+1,k-n+1)$$ Expanded as a series built at $a=0$, $$B_a(n+1,k-n+1)=a^{n+1} \left(\frac{1}{n+1}+\frac{(n-k) a}{n+2}+\frac{(n-k) (n-k+1) a^2}{2(n+3)}+O\left(a^3\right)\right)$$ |
What do we need to define a category? | First, one needs to adopt a foundation of mathematics to define a set, category, and other mathematical objects!
Different foundations give different category theories. Some 'categories', which are called big in one foundation, do not exist in another, e.g. the functor category between two large categories and the localisation of a category with respect to a proper class (large set) of its morphisms. The meanings of the terms (small) set and (proper) class, and the operations you can perform on them, depend on the adopted foundation. Shulman's Set theory for category theory and Mac Lane's One universe as a foundation for category theory discuss the effect of the foundation on the resulting category theory, although the latter is more focused on the advantages of a specific foundation. |
Show positive real axis and open unit disc are surjectively invariant under $p(z) = z^2, z \in \mathbb C$ | Let $\mathbb D$ be the unit disk. Asserting that $P(\mathbb D)=\mathbb D$ means two things:
$P(\mathbb D)\subset\mathbb D$;
$P(\mathbb D)\supset\mathbb D$.
The first one is easy: $\lvert z\rvert<1\implies\lvert z^2\rvert=\lvert z\rvert^2<1$.
You did not even try to prove the second assertion. But it is easy from what you did: if $z\in\mathbb D$, write it as $re^{i\theta}$ ($r\in[0,1)$). But then$$z=\left(\sqrt re^{i\theta/2}\right)^2=P\left(\sqrt re^{i\theta/2}\right).$$
Now, concerning the set $[0,\infty)$, this simply follows from two facts:
every square of an element of $[0,\infty)$ belongs to $[0,\infty)$;
if $x\in[0,\infty)$, then $\sqrt x\in[0,\infty)$ and $x=P\left(\sqrt x\right)$. |
Applications of derivatives Analytical geometry | Let $E$ be an ellipse
$$
\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1.
$$
As you know (I suppose), tangent equation at point $P(x_0,\, y_0)$ is
$$
\frac{xx_0}{a^2} + \frac{yy_0}{b^2} = 1.
$$
Points of intersection with $x$-axis and $y$-axis are
$$
A(a^2/x_0,\, 0),\quad B(0,\, b^2/y_0)
$$
Area of triangle $\triangle OAB$ is
$$
S = \frac12\times\frac{a^2}{x_0}\times\frac{b^2}{y_0}.
$$
We need to find the minimum of this. But $(x_0,\,y_0)\in E$; so,
$$
x_0 = a\cos t_0,\enspace y_0 = b\sin t_0,
$$
and
$$
S = \frac{ba}{\sin(2t_0)}.
$$
The minimum of $S$ is attained at $$\sin(2t_0)=1\Longrightarrow t_0 = \frac{\pi}{4}$$
(we work in the first quadrant by symmetry). So,
$$
x_0 = \frac{a}{\sqrt2},\enspace y_0 = \frac{b}{\sqrt2}.
$$
In your case and notation
$$
m=2,\enspace n = 3.
$$ |
Why does $\lim_{x \rightarrow 0} B(x,y)$ exist and how is it calculated? | [Initial apology: this is just a bunch of considerations too long to fit in a comment] Now we have a proof.
$$ I = -\int_{0}^{+\infty} x\cdot\log^2(1-e^{-x})\,dx = \int_{0}^{+\infty}\frac{x^2}{e^{x}-1}\log(1-e^{-x})\,dx,$$
$$ I = -\int_{0}^{+\infty}\frac{x^2}{e^{x}-1}\sum_{k=1}^{+\infty}\frac{e^{-kx}}{k}dx = -\int_{0}^{+\infty}x^2 e^{-x}\sum_{k=1}^{+\infty}H_k e^{-kx}dx,$$
$$ I = -\sum_{k=1}^{+\infty}H_k \int_{0}^{+\infty}x^2 e^{-(k+1)x}dx = -\sum_{k=1}^{+\infty}\frac{2H_k}{(k+1)^3}.$$
Interestingly, integrating by parts in another way we get:
$$ I = \int_{0}^{1}\operatorname{Li}_2(x)\left(\frac{\log(1-x)}{x}-\frac{\log x}{1-x}\right)dx = -\frac{1}{2}\operatorname{Li}_2^2(1)-\int_{0}^{1}\frac{\operatorname{Li}_2(x)\log x}{1-x}\,dx,$$
[This part is merely optional]
so the identity $I=-\frac{\zeta(4)}{2}$ can probably be proved through dilogarithm identities. By using the Taylor series of $\operatorname{Li}_2(x)$ and $\frac{\log(1-x)}{x}$ and the well-known identity $\int_{0}^{1}(1-x)^m x^n dx=\frac{m! n!}{(m+n+1)!}$ it is quite easy to notice that:
$$\int_{0}^{1}\frac{\operatorname{Li}_2(1-x)\log(1-x)}{x}\,dx = -\sum_{m,n=1}^{+\infty}\frac{m! n!}{m^2 n^2 (m+n)!},$$
and I would not be surprised if someone managed to put the RHS in $c\cdot\zeta(2)^2$ form through "reverse" creative telescoping or simple fractions decomposition.
[End of the optional part]
For instance:
$$\int_{0}^{1}\frac{\operatorname{Li}_2(x)\log x}{1-x}dx = -\sum_{n=1}^{+\infty}\frac{1}{(n+1)^2}\sum_{m=1}^{n}\frac{1}{m^2}.\tag{1}$$
Finally, here is the trick:
$$\frac{\pi^4}{120}=\frac{\zeta(2)^2-\zeta(4)}{2} = \sum_{m>n}\frac{1}{m^2 n^2}$$
is exactly the opposite of the RHS in $(1)$. By collecting pieces we get:
$$ I = -\frac{\zeta(2)^2}{2}+\frac{\zeta(2)^2-\zeta(4)}{2} = -\frac{\zeta(4)}{2}$$
QED. |
Poisson random variables converging to zero | Hint
$$\mathbb P\left\{|X_n|>\varepsilon \right\}\leq \frac{1}{\varepsilon^2 }\mathbb E[X_n^2].$$ |
Zariski Topology: $V(S)=V(\langle S\rangle)$ | It is clear that, if $S\subseteq T$, then $V(T)\subseteq V(S)$. Therefore
$$
V(\langle S\rangle)\subseteq V(S)
$$
Suppose now $x\in V(S)$ and let $p\in\langle S\rangle$. By definition of $\langle S\rangle$,
$$
p=f_1g_1h_1+\dots+f_kg_kh_k
$$
where $f_1,\dots,f_k,h_1,\dots,h_k$ are suitable polynomials and $g_1,\dots,g_k\in S$. Then
$$
p(x)=f_1(x)g_1(x)h_1(x)+f_2(x)g_2(x)h_2(x)+\dots+f_k(x)g_k(x)h_k(x)=0
$$
and so $x\in V(\langle S\rangle)$.
The definition of $\langle S\rangle$ can actually be simplified, because the set you describe is the same as
$$
\{f_1g_1+\dots+f_kg_k: k\in \mathbb{N}, f_i\in \mathbb{P}^n, g_i\in S\}
$$
as multiplication in $\mathbb{P}^n$ is commutative. |
Does lcm of multiple numbers also divide any common multiple of these numbers? | Hint Consider the prime factor decomposition and note that the exponent $p(x)$ of a prime $p$ dividing the lcm $x$ of numbers $x_1,...,x_k$ is given by $\max \{p(x_1), ... , p(x_k)\}$. What can you say about $p(y)$ for a common multiple $y$ of $x_1,...,x_k$? |
Krull dimension of the ring $\mathbb{Q}[\pi,\sqrt{11},x,y]/I$ | What stands out to me here is what does a transcendence basis of $A/\mathbf{Q}$ look like? Clearly $\pi$ should be in there and if we add $x$ then we are done because now the ring $R = \mathbf{Q}[\pi, x]$ is a 2-dimensional polynomial ring and both $\sqrt{11}$ and $y$ are integral over $R$. Hence $A/R$ is finite which means that $A$ has dimension $2$. |
Proof by induction: $x_1 , x_2, \cdots ,x_n \in (0,1), \ i=1,2,\cdots ,n \implies (1-x_1)(1-x_2)...(1-x_n)>1-(x_1 + x_2 + ... + x_n)$ | $P(n)$ would be the statement: For any $n$ elements $x_1, ....,x_n\in (0,1)$ then $(1-x_1)......(1-x_n) > 1-(x_1 + ... + x_n)$.
To prove this induction step:
If we assume $\color{blue}{(1-x_1)......(1-x_n) > 1-(x_1 + ... + x_n)}$ for all possible $x_1,.....,x_n$ then if $0 < x_{n+1} < 1$ then $1-x_{n+1} > 0$ so
$\color{blue}{(1-x_1)......(1-x_n)\color{red}{(1-x_{n+1})} > (1-(x_1 + ... + x_n))\color{red}{(1-x_{n+1})}}$ and now it is a matter of proving that
$\color{blue}{(1-(x_1 + ... + x_n))\color{red}{(1-x_{n+1})}} \ge 1-(x_1 + ...... + x_n + x_{n+1})$
Can you do that?
I'd do it by noting
$\color{blue}{(1-(x_1 + ... + x_n))\color{red}{(1-x_{n+1})}}= $
$\color{blue}{(1-(x_1 + ... + x_n))}\cdot \color{red}1 - \color{blue}{(1-(x_1 + ... + x_n))}\cdot \color{red}{x_{n+1}} = $
$[\color{red}1 - \color{blue}{(x_1 + ... + x_n)}] - [\color{red}{x_{n+1}} - \color{red}{x_{n+1}}\color{blue}{(x_1 + ... + x_n)}]=$
$[1 - \color{orange}{(x_1 + ... + x_n)}] - [\color{orange}{x_{n+1}} - x_{n+1}(x_1 + ... + x_n)]=$
$1 - \color{orange}{\underbrace{(x_1 + ... + x_n) - x_{n+1}}} + x_{n+1}(x_1 + ... + x_n)=$
$1 - \color{orange}{(x_1 + ....... + x_n + x_{n+1})} + x_{n+1}(x_1 + ... + x_n)=$
$1 - (x_1 + ....... + x_n + x_{n+1}) + \color{green}{x_{n+1}(x_1 + ... + x_n)} >$
$1 - (x_1 + ....... + x_n + x_{n+1}) + \color{green}{0} =$
$1 - (x_1 + ....... + x_n + x_{n+1})$
And that's it. We're done. |
If $A$ and $B$ are linear operators such that $A(x)=B(x)$ then $x=0$? | Consider the transformations defined by the matrices
$$\begin{pmatrix}1&2\\3&4\end{pmatrix},\begin{pmatrix}1&2\\3&5\end{pmatrix}$$ and apply them to the vector
$$\begin{pmatrix}1\\0\end{pmatrix}.$$ |
Image of morphism between curves | As in your previous question, the trick is to work in affine charts. This is an idea that will keep coming up over and over $\cdots$ and over again in your study of algebraic geometry. Let us compute the image of the affinization of this curve (an Elliptic curve) in the chart given by $z=1$. Denote this $C_1$.
Here the curve is $y^2=x(x+2)(x-1)$ and your function $f:C_1\to \mathbb A^1$ is given by $f(x,y)=y$. You can see that whatever value $a$ in the image you want, you can get it by solving the equation $x(x+2)(x-1)-a^2=0$, which, assuming we are in an algebraically closed field, always has a solution.
At this point, it only remains for us to check if the point $(1:0)$ is in the image of $f$. And indeed, it is, because if $y=1$ and $z=0$, then the point $(0:1:0)\in C$ satisfies this requirement. So our map $f$ in fact surjects onto $\mathbb P^1$. |
Proof of asymptotics of a recursion or functional equation | A quick computation. We show that $a_n = \log_2 n + \mathcal{O}(1)$. Notice that $x \mapsto 1 - (1-2^{-x})^n$ is decreasing in $x$ for each $n \geq 1$. So
$$ a_n = \sum_{j=0}^{\infty} (1 - (1 - 2^{-j})^n) \leq 1 + \int_{0}^{\infty}(1 - (1 - 2^{-x})^n) \, dx = 1 + \frac{H_n}{\log 2}, $$
where $H_n = \sum_{k=1}^{n} \frac{1}{k}$ is the harmonic numbers and the integral is computed using the substitution $u = 1-2^{-x}$. A lower bound is obtained in a similar way:
$$ a_n \geq \int_{0}^{\infty}(1 - (1 - 2^{-x})^n) \, dx = \frac{H_n}{\log 2}. $$
More detailed result. Write $F_{n}(x) = (1 - 2^{-x})^n$. Using Euler-MacLaurin formula, we find that
$$ a_n = \frac{H_n}{\log 2} + \frac{1}{2} - \int_{0}^{\infty} \tilde{B}_1(x) \, dF_n(x), $$
where $\tilde{B}_1(x)$ is the periodic Bernoulli polynomial of degree 1. Notice that the bound $|\tilde{B}_1(x)| \leq \frac{1}{2}$ replicates the previous inequalities.
Now let $X_i \sim \mathrm{Exp}(\log 2)$ be i.i.d. and $M_n = \max\{X_1, \cdots, X_n\}$. Then $\mathbb{P}[M_n \leq x] = F_n(x)$ and $M_n - \log_2 n \Rightarrow G$, where $G$ satisfies $\mathbb{P}[G \leq x] = \exp(-2^{-x})$. From this, we may expect that
\begin{align*}
a_{n}
&= \frac{H_{n}}{\log 2} + \frac{1}{2} - \mathbb{E}[\tilde{B}_1(M_{n})] \\
&= \frac{H_{n}}{\log 2} + \frac{1}{2} - \mathbb{E}[\tilde{B}_1(G + \log_2 n)] + o(1)
\end{align*}
holds. Indeed this is not terribly hard to establish using the continuous mapping theorem. Finally, numerical computations suggest that $\alpha \mapsto \mathbb{E}[\tilde{B}_1(G + \alpha)]$ is quite small (of order $10^{-6}$) but not identically zero, hence $a_n$ should exhibit an oscillation which slows down at logarithmic speed. |
prove that $ E = \{(x,y) \in E_2 | x \in E_1 \} =E_2 \cap(E_1 \times \Bbb{R^n}) \subseteq \Bbb{R^{m+n}}$ has Jordan measure | As I understand it, you ask whether $E$ is Jordan measurable provided both $E_1$ and $E_2$ are Jordan measurable. Since the set $E_2$ is Jordan measurable, it is bounded. Therefore there exists a number $N$ such that $E_2\subset [-N,N]^{m+n}$. Then $E_2\cap (E_1\times\Bbb {R}^n)= E_2\cap (E_1\times[-N,N]^n)$. The measurability of the set $E_1\times [-N,N]^n$ follows almost directly from the definition of the Jordan measure $m$. The intersection $E=E_2\cap (E_1\times [-N,N]^n)$ is Jordan measurable as an intersection of two Jordan measurable sets. |
Finding CDF for this (simple?) PDF, and is it differentiable/continuous? | The choice of the integration variable doesn't really matter, but $X$ (capitalized) is a bit of a weird choice. It's more common to use $x$ if the random variable is denoted by $X$.
Other than that it looks OK, though I didn't check the details. The only thing is that technically $F$ should be defined on all of $\mathbb{R}$, so you should have two more cases (one for $s<0$ and one for $s>\alpha$). These cases are trivial to write down but still important. |
$A$ be a $2\times2$ matrix such that $\mathrm{trace}(A)=0$ and $\mathrm{det}(A)=-1$. Is there a basis of $\mathbb R^2$ containing the eigenvectors? | Hints:
The characteristic polynomial of $\;A\;$ is
$$p_A(x)=x^2-(\text{tr.}\,A)\,x+\det A\stackrel{\text{given}}=x^2-1$$
Thus (why?) the minimal polynomial of $\;A\;$ is also $\;x^2-1=(x-1)(x+1)\;$ ... |
Finite fields and isomorphism | It says that, if you take a finite field $F$, it contains a prime field (isomorphic to $\mathbf Z/p\mathbf Z$, also denoted $\mathbf F_p$ in the context of finite fields) for some $p$. You simply have to map $\mathbf Z$ to $F$ by sending $n$ to $n\cdot 1_F$ and consider the kernel of this homomorphism. |
Finding the number of eigenvector(s) of non-zero linear transformation $T$ to $R$ | Hint: Any linear transformation on $\mathbb{R}$ is of the form $T(x)=cx$ for some fixed $c\in\mathbb{R}$, where $x$ is any real number.
Try to prove the following things; they might help you to understand! These are standard results; proofs can be found in any standard text on Linear Algebra, Functional Analysis, and Real Analysis respectively.
$1.$ $\mathbb{R}$ is a one dimensional vector space over the field $\mathbb{R}$ itself!
$2.$ Any Linear Map on a finite dimensional vector space is continuous!
$3.$ A continuous function on $\mathbb{R}$ which satisfies $T(x+y)=T(x)+T(y)\forall x,y\in\mathbb{R}$ must be of the form $T(x)=cx$ for some fixed $c\in\mathbb{R}$ |
Presentation of $A_4$ | Different approach.
Here is a picture, of the different multiplications.
We move along the black lines if we multiply the right side by $a$. I have used $a'$ to represent $a^2 = a^{-1}$. And we move along the red lines when we multiply by $b$ (on the right).
And playing with this a little bit you should come to the conclusion that it is indeed closed, and that every element has an inverse. Hence this is a group.
original approach
If you map $a = (123)$
and $b = (12)(34)$ these two elements meet the criteria of your group and they indeed generate $A_4$
Is this the "largest group"?
There can really be only one group generated by a given set of generators. Once you have generated a group, it is closed under any further operations. If you multiply the elements by one of the generators, you get an element you have already found. |
Harmonic measure function and the surjectivity of the diagonal map | It's clear that $\phi$ is not onto when $n\ge 2$. Since $$\sum_{j=1}^{n+1}u_j(z)=1,$$ it follows that $$\sum_{j=1}^n u_j(z)<1.$$
I doubt that a suitable characterization of the range of $\phi$ exists. |
Ellipse Section Area that does not include the center | We can integrate the curve along the $y$-axis to get the area of P (shown as the wavy shaded area in the image below), and then subtract the area of the triangle (shown in blue), as in the equation below.
$$\int_{h_{1}}^{h_{2}}\sqrt{\Big(1-\frac{y^{2}}{b^{2}}\Big)a^{2}}\,dy-\frac{1}{2}\,h_{2}\,a\sqrt{1-\frac{h_{2}^{2}}{b^{2}}}$$
Now to calculate the integral:
$$\int_{h_{1}}^{h_{2}}\sqrt{\Big(1-\frac{y^{2}}{b^{2}}\Big)a^{2}}dy = \frac{a}{b}\int_{h_{1}}^{h_{2}}\sqrt{b^{2}-y^{2}}dy=\frac{a}{2b} \left[y\sqrt{b^{2}-y^{2}}]+b^{2}sin^{-1}\Big(\frac{y}{b}\Big)\right]_{h_{1}}^{h2}$$
$$= \frac{a}{2b}\left[h_{2}\sqrt{b^{2}-h_{2}^{2}}-h_{1}\sqrt{b^{2}-h_{1}^{2}}+b^{2}\left(sin^{-1}\Big(\frac{h_{2}}{b}\Big)-sin^{-1}\Big(\frac{h_{1}}{b}\Big)\right)\right] $$
This can be verified by putting $h_{1}=0\;h_{2}=b$ we get area of P $=\frac{1}{4}\pi a b$. And $4P = \pi ab$ which is area of the ellipse.
So the area of the sector within $h_{1}$ and $h_{2}$ can be calculated as:
$$\frac{a}{2b}\left[h_{2}\sqrt{b^{2}-h_{2}^{2}}-h_{1}\sqrt{b^{2}-h_{1}^{2}}+b^{2}\left(sin^{-1}\Big(\frac{h_{2}}{b}\Big)-sin^{-1}\Big(\frac{h_{1}}{b}\Big)\right)\right]-\frac{1}{2}\,h_{2}\,a\sqrt{1-\frac{h_{2}^{2}}{b^{2}}}$$
Now you need to calculate $h_{1}$ in terms of b and your h (the h in the question) and $h_{2}$ in terms of $\theta$ and substitute. |
The Power of Taylor Series | Here is an interesting application of power series; unfortunately one would need to bother with the remainder to make it really interesting.
$$\arctan(x)= \sum_{n=0}^\infty \frac{(-1)^nx^{2n+1}}{2n+1} \,.$$
Plug in $x= \frac{1}{\sqrt 3}$ ($1$ would also work but one would need to explain why this formula also holds at the end point of the interval). We get:
$$\frac{\pi}{6} = \frac{1}{\sqrt{3}}\sum_{n=0}^\infty \frac{(-1)^n}{3^n(2n+1)} \,.$$
The right side is an alternating series which converges very fast, thus you can use it to calculate $\pi$ with 5-6 digits. And it is alternating, which means you could use the Alternating Series error estimate.
You can also do the same for the Taylor series of $e^x$.
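To see how fast it converges, a minimal sketch:

    # pi from 6 * arctan(1/sqrt(3)), using only 10 terms of the series.
    from math import sqrt

    s = sum((-1) ** n / (3**n * (2 * n + 1)) for n in range(10))
    print(6 * s / sqrt(3))  # 3.141592... (error below 1e-5)

Ten terms already match $\pi$ to within the alternating-series bound $\frac{6}{\sqrt 3}\cdot\frac{1}{3^{10}\cdot 21}\approx 3\cdot 10^{-6}$. |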
Double radical proof | Well, assume that $\sqrt{a+\sqrt{b}}$ can be written as a sum of two square roots
$$\sqrt{a+\sqrt{b}}=\sqrt{x}+\sqrt{y}\\a+\sqrt{b}=x+y+\sqrt{4xy}\\a=x+y\\b=4xy\\x=a-y\\b=4(a-y)y\\b=4ay-4y^2\\4y^2-4ay+b=0\\y_{1,2}=\frac{4a\pm\sqrt{16a^2-16b}}{8}\\y_{1,2}=\frac{a\pm\sqrt{a^2-b}}{2}\\x_{1,2}=\frac{a\mp\sqrt{a^2-b}}{2}$$
Now $x_1=y_2$ and $x_2=y_1$, so the labeling doesn't matter at all; set $C=\sqrt{a^2-b}$ and you get your formula |
explicit formula for coefficients of Laurent series | You can at first determine the Laurent series for
$$ e^{z} = \sum_{m=0}^\infty \frac{z^m}{m!}$$
and
$$ \frac1{z-1} = \sum_{n=0}^\infty z^{-(n+1)} , \qquad |z|>1$$
independently.
To multiply two Laurent series, we can use this formula or simply calculate
$$\begin{align}\frac{e^{z}}{z-1}= &\left(\sum_{m=0}^\infty \frac{z^m}{m!} \right)
\left(\sum_{n=0}^\infty z^{-(n+1)}\right)\\
&=\sum_{m,n\in\mathbb{Z}}[0\leq m][0\leq n] \frac{z^{m-n-1}}{m!}\\
&=\sum_{k,m} [0\leq m][0\leq m-k-1] \frac{z^{k}}{m!}\\
&= \sum_{k=-\infty}^{-1} z^k \underbrace{\sum_{m=0}^\infty \frac1{m!}}_{e}
+ \sum_{k=0}^\infty z^k \sum_{m=1+k}^\infty \frac1{m!}.
\end{align}$$
with $k=m-n-1$ and where I used Iverson's bracket.
Edit: As Robert Israel pointed out, we can still express the last sum using the incomplete $\gamma$-function
$$\gamma(n,x) = \int_0^x t^{n-1} e^{-t} dt = x^n (n-1)! e^{-x} \sum_{k=0}^\infty \frac{x^k}{(n+k)!} .$$
Setting $x=1$, we have with $n\geq 1$
$$\gamma(n,1) = \frac{(n-1)!}{e} \sum_{i=n}^\infty \frac{1}{i!}. $$
Thus,
$$\frac{e^z}{z-1} = e \sum_{k=-\infty}^{-1} z^k+ \sum_{k=0}^\infty\frac{e\,\gamma(k+1,1)}{k!} z^k.$$
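A numerical sketch of the $k\ge 0$ coefficients (assuming mpmath's gammainc(s, 0, 1) computes the lower incomplete gamma $\gamma(s,1)$):

    # Tail sums of 1/m! versus e * gamma(k+1, 1) / k!.
    from mpmath import e, factorial, gammainc, inf, nsum

    for k in range(4):
        tail = nsum(lambda m: 1 / factorial(m), [k + 1, inf])
        print(k, tail, e * gammainc(k + 1, 0, 1) / factorial(k))

The two columns agree, confirming the factor of $e$ in the formula above. |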
For a measurable cardinal $κ$, show that $cf(γ)≠κ$ implies $j_U(γ)=\sup\{j_U(δ):δ<γ\}$ ($U$ $κ$-complete ultrafilter, $j_U$ associated embedding) | Let us divide the cases. Let $\langle \gamma_\zeta\mid \zeta<\mu\rangle$ be a cofinal sequence converging to $\gamma$. (Here $\mu=\operatorname{cf}\gamma$.)
$\operatorname{cf}\gamma<\kappa$: Let $[f]_U<j_U(\gamma)$. Without loss of generality we may assume that $f(\xi)<\gamma$ for all $\xi<\kappa$. (Why?) For each $\xi$, choose $\eta_\xi<\mu$ such that $f(\xi)<\gamma_{\eta_\xi}$. Then there is $\delta<\mu$ such that $\{\xi<\kappa\mid \eta_\xi=\delta\}\in U$. Can you see how to proceed from there?
$\operatorname{cf}\gamma>\kappa$: Let $[f]_U<j_U(\gamma)$ again. Still, we may assume that $f(\xi)<\gamma$ for all $\xi<\kappa$. Since $\kappa<\mu$, $\delta:=\sup_{\xi<\kappa}f(\xi)<\gamma$. Then completing the proof is easy, so I will leave it to you. |
Cofficient of x in a product | Write
$$\prod_{i=1}^{n-1}{(1-p_i+p_ix)}=\prod_{i=1}^{n-1}{(1-p_i)}\prod_{i=1}^{n-1}{\left(1+\frac{p_ix}{1-p_i}\right)}$$
The coefficient of $x^m$ is
$$\left(\prod_{i=1}^{n-1}{(1-p_i)}\right)e_m\left(\frac{p_1}{1-p_1},\frac{p_2}{1-p_2},\ldots,\frac{p_{n-1}}{1-p_{n-1}}\right)$$
where $e_m$ is the $m$th elementary symmetric polynomial in $n-1$ variables.
The elementary symmetric polynomial can be computed efficiently using the recurrence
$$e_k(x_1,x_2,\ldots,x_p)=x_pe_{k-1}(x_1,x_2,\ldots,x_{p-1})+e_k(x_1,x_2,\ldots,x_{p-1})$$
with the starting conditions $e_0=1$ and $e_k(x_1,x_2,\ldots,x_p)=0$ if $p<k$.
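Here is a direct sketch of that recurrence (the function name and test values are mine):

    # Coefficient of x^m in prod_i (1 - p_i + p_i*x) via the e_k recurrence.
    def coeff(m, probs):
        e = [1.0] + [0.0] * m  # e[k] = e_k of the ratios seen so far
        for p in probs:
            r = p / (1 - p)
            for k in range(m, 0, -1):  # descending, so each item is used once
                e[k] += r * e[k - 1]
        scale = 1.0
        for p in probs:
            scale *= 1 - p
        return scale * e[m]

    # (0.8 + 0.2x)(0.5 + 0.5x)(0.7 + 0.3x): the x^1 coefficient is 0.47.
    print(coeff(1, [0.2, 0.5, 0.3]))  # ~0.47

Each update costs $O(m)$, so the whole computation is $O(nm)$. |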
How is $3$ not a primitive root mod 8? | $\phi(8) = 4$ because there are $4$ numbers less than $8$ and coprime to it - the totatives of $8$ - $\{1,3,5,7\}$.
In order for a number to be a primitive root $\bmod n$, its powers $\bmod n$ must cycle through all the totatives of $n$ with of course $1$ being the last because that restarts the cycle.
However, for $8$,
$3^2 =9 \equiv 1 \bmod 8, \\
5^2 =25 \equiv 1 \bmod 8, \\
7^2 =49 \equiv 1 \bmod 8,$ so there is no primitive root among the totatives.
Any time the Carmichael function of a number is less than Euler's totient, there are no primitive roots.
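The failure is easy to see by listing powers, as a two-line sketch:

    # Orders of the totatives of 8: every element squares to 1 mod 8.
    for g in (3, 5, 7):
        print(g, [g**k % 8 for k in (1, 2, 3, 4)])

No power sequence runs through all of $\{1,3,5,7\}$, so no primitive root exists. |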
The intersection of two different paths joining two points can be infinite? | You can start with your two paths being the same, and then perturb just the first half of one of them. This way, you get two different paths that coincide on the interval $[\frac{1}{2},1]$.
Edit: And if you are looking for two curves with infinitely many intersection points, but not as many as in the previous paragraph, you can take the two following curves:$$\gamma_1(t)=(t,0),\quad\gamma_2(t)=\left(t,t\cos\left(\frac{\pi}{2t}\right)\right).$$ |
$\lim_{n \to \infty} a_n = \infty$ iff $\{a_n\}$ is not bounded above | Take the sequence defined by
$$a_{2n}=n\text{ and } a_{2n+1}=0$$
$(a_n) $ is not bounded above because
$$(\forall M\in \Bbb R)\;(\exists p\in \Bbb N)\;:\; a_{2p}>M$$
where $ p=\lfloor|M|\rfloor+1$. But
$$\lim_{n\to +\infty}a_n \ne \infty$$
since
$$\lim_{n\to+\infty}a_{2n+1}=0$$ |
What is the value of k if equation $x^3-3x^2+2=k$ has three real roots and if one real root? | Consider $g(x)=x^{3}-3x^{2}=x^{2}(x-3)$, so that $f(x)=g(x)+(2-k)$. The plot of $g(x)$ is as follows;
So we see that $g(x)$ has $2$ roots, one at $x=0$ and the other at $x=3$. It also has a local minimum at $x=2$, with value $g(2)=-4$, and a local maximum of $0$ at $x=0$.
Now, $f(x)$ is just a vertical translation of $g(x)$ by $(2-k)$. So for $(2-k)<0$ we have one root, as $g(x)$ is being translated down so its local maximum is below the $x$ axis. For $(2-k)=0$ we have two roots, as $g(x)=f(x)$. For $0<(2-k)<4$ we have three roots, as $g(x)$ has been translated up so that the $x$ axis is between its local minimum and local maximum. For $(2-k)=4$ we have two roots, as $g(x)$ has been translated up until its local minimum is sitting right on the $x$ axis, and finally for $(2-k)>4$ we have one root, as $g(x)$ has been translated until it's local minimum is above the $x$ axis.
As we have $3$ roots for $0<(2-k)<4$, we have by solving the inequality that there are three real roots for $-2<k<2$, i.e. for $k\in(-2,2)$.
As we have a single real root for $(2-k)<0$ and for $(2-k)>4$, we have upon solving the inequalities that there is one real root for $k>2$ or $k<-2$, i.e. for $k\in (-\infty,-2)\cup(2,\infty)$ |
Is my proof that the set of all functions from $X' \subseteq X$ to $Y'\subseteq Y$ exist valid | What you're doing works, but it is arguably simpler to note that each of the functions you're interested in is a subset of $X\times Y$, so you can get a set of all of them by using Separation on the power set of $X\times Y$.
a polynomial of degree $4$ such that $P(n) = \frac{120}{n}$ for $n=1,2,3,4,5$ | So noticing that $2P(2) = P(1) = 3P(3)$ (which is also equal to $4P(4), 5P(5)...$)
You are on the right track. $$x P(x) - 120$$ is a polynomial of degree (at most) 5, and has zeros at $x= 1, 2, 3, 4, 5$, therefore
$$
x P(x) - 120 = c(x-1)\cdots (x-5)
$$
for some constant $c$, which can be determined by substituting $x = 0$. |
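Substituting $x=0$ gives $-120 = c\,(-1)(-2)(-3)(-4)(-5) = -120c$, so $c=1$. A quick sympy verification of the resulting polynomial (my own check, not part of the original answer):

```python
import sympy as sp

x = sp.symbols("x")
c = 1  # from x = 0: -120 = -120*c
P = sp.expand((c * sp.prod([x - i for i in range(1, 6)]) + 120) / x)
print(P)                                    # x**4 - 15*x**3 + 85*x**2 - 225*x + 274
print([P.subs(x, n) for n in range(1, 6)])  # [120, 60, 40, 30, 24] = 120/n
```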
Proving that this is true for all x in the sequence. | For any $t \in \mathbb{N}_+$ such that$$
t = u^2 - v^2 + uv
$$
for some $u, v \in \mathbb{N}$, note that $u$ and $v$ cannot both be $0$ (since $t > 0$); then take$$
(x, y) = (2u + 2v, 2u),
$$
and$$
\frac{x^2 + xy + y^2}{xy - t} = \frac{12u^2 + 12uv + 4v^2}{4u^2 + 4uv - (u^2 - v^2 + uv)} = \frac{12u^2 + 12uv + 4v^2}{3u^2 + 3uv + v^2} = 4.
$$ |
Projection of a vector onto a plane | You're almost there:
what you have found is parallel to $[1,1,1]^T$, the normal vector of the plane, so it is the orthogonal component $b_\perp$ in the decomposition $b = b_\parallel + b_\perp$; what you are looking for is the other component, $b_\parallel = b - b_\perp$, which lies in the plane.
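In code, with illustrative numbers (the vector $b$ below is my own choice, not from the question):

```python
import numpy as np

n = np.array([1.0, 1.0, 1.0])    # normal of the plane x + y + z = 0
b = np.array([1.0, 2.0, 3.0])

b_perp = (b @ n) / (n @ n) * n   # component along the normal
b_par = b - b_perp               # component lying in the plane
print(b_par, b_par @ n)          # [-1. 0. 1.] 0.0 -- b_par is in the plane
```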
Formula that takes on all integers | The statement is false, since $z$ is necessarily divisible by $\gcd(a,b,m)$, which may not be $1$. Thus the equivalent statement in the comment is likewise false. For instance, the set of products of two numbers with remainder $2$ modulo $4$ only contains odd multiples of $4$. |
Taking module sheaf commutes with tensor product | I would say that your proof is not particularly common, certainly I've never seen it before (admittedly this doesn't mean much). The proof in Hartshorne is brief because it is one of those scenarios where doing the "obvious" thing works. Here is the proof that I think Hartshorne had in mind (which is entirely constructive):
Let $T$ be the tensor pre-sheaf of $\tilde{M}\otimes_{\mathcal{O}_X}\tilde{N}$ and $\theta : T \rightarrow \tilde{M}\otimes_{\mathcal{O}_X}\tilde{N}$ be the canonical morphism. We may define the "obvious" morphism $\varphi: T \rightarrow \tilde{(M\otimes_A N)}$ that over each open set sends $f\otimes g$ to $f\cdot g$.
Explicitly, for each $P \in \operatorname{Spec}(A)$, $(f\cdot g)(P) :=f(P)\cdot g(P) \in (M\otimes_A N)_P$ where we are implicitly using the canonical isomorphism from $M_P\otimes_{A_P} N_P$ to $(M\otimes_A N)_P$. This is where we are using that localisation commutes with tensor products.
You should check that on the stalk at $P$, $\varphi_P$ gives the canonical isomorphism $M_P\otimes_{A_P} N_P \rightarrow (M\otimes_A N)_P$.
Finally, by the universal property of sheafification, $\varphi$ factors through some morphism $\bar\varphi:\tilde{M}\otimes_{\mathcal{O}_X}\tilde{N}\rightarrow \tilde{(M\otimes_A N)}$, i.e. $\varphi = \bar\varphi\circ\theta$ and since $\varphi$ is an isomorphism on stalks, $\bar\varphi$ is an isomorphism.
As frequently happens in Hartshorne, a lot of algebraic detail is left out, and you can see why. If he included all of the details like this every time this kind of problem arose, the book would be $2-3$ times as long!
For the construction Vakil/Wedhorn and Görtz use, this is simply using the fact that you know what the sections of $\mathcal{O}_X, \tilde{M}$ and $\tilde{N}$ look like over the distinguished open affine pieces $D(f)$ of $X$. Since these form a basis for the topology on $X$ you can define the morphism $\varphi$ explicitly on these open pieces as the isomorphism $M_f\otimes_{A_f} N_f\rightarrow (M\otimes_A N)_f$ (this is again using the fact that tensor product commutes with localisations) and then "glue" this to get a morphism defined on any open set, which is an isomorphism on stalks since it is an isomorphism on the sections of a basis for the topology. You then still need to produce $\bar\varphi$ in the same way.
Which proof you prefer is entirely a matter of choice, they're essentially the same. The second one is probably cleaner, in some sense. The actual morphism looks nicer on each affine piece, but the price you pay is having to work slightly harder to show that the morphisms commute with restriction, depending on how happy you are to say it is "obvious". |
Barnes' double gamma function versus q-gamma function | q-Pochhammer symbol has the following representation:
$$(q^w;q)_{\infty}=\frac{\Gamma_2(w|1,\tau')\Gamma_2(w|1,-\tau')}{\Gamma_1(w|1)}.$$ |
Can you give me a hint on this proof of a subspace of vectors? | Let $X$ be some countable set, so $X\cong \mathbb{N}$. Then $l^2(\mathbb{N})=\left\{(x_n)_{n\in \mathbb{N}}\mid x_n\in \mathbb{C}, \sum_{n\in \mathbb{N}}|x_n|^2<\infty\right\}$ is the space of square summable sequences. Your set is the subset of finitely supported sequences (all but finitely many terms are zero). Obviously, your subset is a subspace. When you say it's not closed, it means that you have a topology on $l^2(\mathbb{N})$. The standard topology is the topology coming from the norm $\left\|(x_n)_n\right\|_2:=\sqrt{\sum_{n\in \mathbb{N}}|x_n|^2}$. Now use the other answer to show that this subspace is not closed.
solving the Laplace equation | You can find more information here: https://en.wikipedia.org/wiki/Fundamental_solution but to give you an intuition for how it is solved: the Laplace operator is translation invariant, so you seek a radial solution $f(x)=g(r)$ where $r=|x|$; plugging this into the PDE gives a second-order ODE, and solving that ODE gives the fundamental solution.
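Sketching the standard radial computation (a routine check, not in the original answer): writing $f(x)=g(r)$ with $r=|x|$ in $\mathbb{R}^n$, the Laplacian becomes
$$\Delta f = g''(r) + \frac{n-1}{r}\,g'(r) = 0 \quad (r>0),$$
so $g'(r) = C r^{1-n}$, giving $g(r) = a\log r + b$ for $n=2$ and $g(r) = a\,r^{2-n} + b$ for $n \ge 3$; the constants are then fixed by requiring $\Delta f = \delta_0$ in the sense of distributions.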
Finitely generated ideals | An ideal with zero generators would necessarily be the zero ideal (because of the empty sum convention), but by convention (and, I guess, convenience, since it makes it more explicit which ring it is an ideal of; the empty set generates all zero ideals in all rings simultaneously), we say the zero ideal is generated by $\{0\}$. |
Rotation of a catenary in $\mathbb{R}^5$ | First, you have to be more precise in your description of the catenoid.
Definition A catenoid is the surface of revolution in $\mathbb{R}^3$ formed by revolving a catenary about its directrix.
Note that it is extremely important to specify the fact that you are rotating about the directrix! If you translate a catenary you will get another catenary. But the surface of revolution in cylindrical coordinates given by
$$ r(z) = \cosh z + 1 $$
is no longer what we would consider a catenoid: for starters, it is not a minimal surface! It is, however, a rotation of a catenary. You can get even stranger examples if you rotate the catenary about lines which are not even parallel to the directrix of the catenary.
Now that we have fixed the correct definition of the catenoid, the generalisation to higher dimension is immediate. What you want to consider is the hypersurface of revolution formed by rotating a catenary about its directrix. Here rotation means the following:
Let $V$ be a vector subspace with $k$ dimensions inside $\mathbb{R}^n$. Then the subgroup of $SO(n)$ which acts on $V$ as the identity is isomorphic to $SO(n-k)$. By rotating a point $p\in \mathbb{R}^n$ about $V$ we mean the orbit of $p$ under the action of this $SO(n-k)$.
In the case of hypersurfaces of revolution, there is a simpler description: let $\gamma$ be a curve in the upper half plane of $\mathbb{R}^2$, we can write it as $\gamma = \{f(x,y) = 0\}$. Its corresponding hypersurface of revolution formed by revolving $\gamma$ about the $x$ axis in $\mathbb{R}^n$ is the set
$$ \Sigma := \{ (x_1,\ldots,x_n)\in\mathbb{R}^n | f(x_n, r_n) = 0, r_n = \sqrt{x_1^2 + \cdots + x_{n-1}^2} \} $$
In cylindrical coordinates $(r,\omega,z)\in \mathbb{R}_+ \times\mathbb{S}^{n-2} \times \mathbb{R}$, we can describe $\Sigma$ more simply as
$$ \Sigma := \{ f(z,r) = 0 \} $$
In your case, $\gamma$ is given by $y = \cosh x$, the catenary as a graph over its directrix. Hence the corresponding surface of revolution is
$$ \Sigma := \{ r = \cosh z \} $$
or
$$ \Sigma := \{ x_1^2 + x_2^2 + \cdots + x_{n-1}^2 = \left( \cosh x_n\right)^2 \} $$ |
Uniform convergence of $f_n(x)=nx^n(1-x)$ for $x \in [0,1]$? | Yes you're right! Notice that this example shows that the uniform convergence is a sufficient condition to interchange limit and the integral sign $\int$ but not a necessary condition. |
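Spelling out the computation behind this remark (a routine check, not in the original answer):
$$\int_0^1 n x^n (1-x)\,dx = n\left(\frac{1}{n+1} - \frac{1}{n+2}\right) = \frac{n}{(n+1)(n+2)} \xrightarrow[n\to\infty]{} 0 = \int_0^1 \lim_{n\to\infty} f_n(x)\,dx,$$
yet the convergence is not uniform: $f_n$ attains its maximum at $x = \frac{n}{n+1}$, where $f_n\left(\frac{n}{n+1}\right) = \left(\frac{n}{n+1}\right)^{n+1} \to e^{-1} \ne 0$.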
$A \xrightarrow{\alpha, \beta} E \xrightarrow{p} B$, where $p$ is a covering map and $A$ is connected. | This is called the unique lifting property of covering space.
Now you need to prove that the set on which $\alpha$ and $\beta$ agree is both open and closed.
First assume $\alpha(t)=\beta(t)$ and then assume $\alpha(t)\ne \beta(t)$. |
Find $\lim_{n\to\infty} \cos(\frac{\pi}{4}) \cos(\frac{\pi}{8})\ldots \cos(\frac{\pi}{2^n}) $ | If $x_n=\cos(\frac{\pi}{4}) \cos(\frac{\pi}{8})\ldots \cos(\frac{\pi}{2^n}) $ then $\ $
$$x_n\sin (\frac{\pi}{2^n})= \cos(\frac{\pi}{4}) \cos(\frac{\pi}{8}) \ldots
\cos(\frac{\pi}{2^n}) \sin (\frac{\pi}{2^n}) $$
$$=\frac{1}{2^1} \cos(\frac{\pi}{4}) \cos(\frac{\pi}{8}) \ldots
\cos(\frac{\pi}{2^{n-1}}) \sin (\frac{\pi}{2^{n-1}}) $$
$$ =\ldots= \frac{1}{2^{n-1}} $$
So $$x_n=\frac{1}{2^{n-1}\sin (\frac{\pi}{2^n})} $$
So $\lim_{n\to \infty }x_n=\frac{2}{\pi}$, since $2^{n}\sin\left(\frac{\pi}{2^{n}}\right)\to\pi$ as $n\to\infty$.
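A quick numerical confirmation (throwaway script, mine):

```python
import math

x = 1.0
for n in range(2, 30):
    x *= math.cos(math.pi / 2**n)
print(x, 2 / math.pi)   # both ~0.63662
```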
Given a list of $2n$ elements, which is the best approach to find the $n$'th largest element? | Use the median of medians algorithm to find the median in linear time. This is the best possible since all elements must be examined. It is interesting for a number of reasons: it violates the intuition that sorting is the fastest way to accomplish this, it is an optimal algorithm for the median problem and problems with optimal algorithms are rare, and also it calls into question the zero-one-infinity rule by dividing the list into fifths or smaller parts (thirds yields a log-linear algorithm, like sorting). Recently I mentioned this algorithm as an anecdote about the number $5$ itself. |
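For reference, a compact Python sketch of quickselect with the median-of-medians pivot rule (pedagogical, not tuned for performance; the function name is mine):

```python
def select(xs, k):
    """k-th smallest element of xs (0-indexed), worst-case linear time."""
    if len(xs) <= 5:
        return sorted(xs)[k]
    # medians of the groups of (at most) five
    groups = [sorted(xs[i:i + 5]) for i in range(0, len(xs), 5)]
    medians = [g[len(g) // 2] for g in groups]
    pivot = select(medians, len(medians) // 2)
    lo = [x for x in xs if x < pivot]
    eq = [x for x in xs if x == pivot]
    if k < len(lo):
        return select(lo, k)
    if k < len(lo) + len(eq):
        return pivot
    hi = [x for x in xs if x > pivot]
    return select(hi, k - len(lo) - len(eq))

xs = [7, 1, 5, 3, 9, 2, 8, 6]      # 2n = 8 elements
print(select(xs, len(xs) - 4))     # n-th (= 4th) largest: 6
```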
Determine which of the following relation is a function? | A relation is a set of ordered pairs, which maps from one set, called the domain, to another set, called the co-domain. Here $A$ is the domain, and $B$ is the co-domain. All the sets given in the OP are indeed relations of $A\to B$; but are they functions?
A function is a relation where any element of the domain is mapped to at most one element in the co-domain. That is: no two distinct pairs of the relation will share the same left-member (or "x"-value; or rather "A"-value in this case).
( PS: "at most one" means either one or none, but never two or more. )
That is it! That's the only property you have to test. There is no restriction on sharing the right members ("y"-values; or "B" values here). |
$\lim_{x\rightarrow\infty}\frac{f(x)}{e^x}$ for analytic functions | Hint: This is a perfect example of why you can't always switch limits! |
Trying to convert a logical expression into CNF | It might help to think by analogy with algebraic expressions formed using multiplication and addition. Multiplication distributes through addition, i.e., you have $x \times (y + z) = (x \times y) + (x \times z)$. Hence you can write any expression as a sum of products of atoms, by pushing the multiplications in through additions. But note that addition does not distribute through multiplication, so you can't write every expression as a product of sums of atoms.
Boolean algebra, however is much more symmetric: not only does conjunction distribute through disjunction, i.e., $x \land (y \lor z) = (x \land y) \lor (x \land z)$ but also disjunction distributes through conjunction, i.e., $ x \lor (y \land z) = (x \lor y) \land (x \lor z)$. Hence (also using De Morgan's laws to push in negations), you get to choose whether to push the conjunctions in, which will lead you to DNF: a disjunction of conjunctions of literals, or to push the disjunctions in, which will lead you to a CNF: a conjunction of disjunctions of literals. (Here "literal" means an atom or negated atom.) For example, your formula can be transformed into a CNF like this:
$$
\begin{array}{rcl}
[(p \land q) \lor \lnot p] \lor \lnot q &=& [(p \lor \lnot p) \land (q \lor \lnot p)] \lor \lnot q\\
&=& (p \lor \lnot p \lor \lnot q) \land (q \lor \lnot p \lor \lnot q)
\end{array}
$$
You can now do further simplifications if you wish, e.g., to arrive at $p \lor \lnot p$ (which is both a DNF and a CNF). |
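If you want to check such conversions mechanically, sympy can do it (standard sympy calls as far as I know; treat the printed forms as indicative, since sympy may simplify along the way):

```python
from sympy.abc import p, q, r
from sympy.logic.boolalg import to_cnf, to_dnf

print(to_cnf(p | (q & r)))   # (p | q) & (p | r): disjunction pushed in
print(to_dnf(p & (q | r)))   # (p & q) | (p & r): conjunction pushed in
print(to_cnf(((p & q) | ~p) | ~q, simplify=True))   # True -- a tautology
```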
Picard approximation. | Let $P$ denote the Picard operator.
If $g_{k+1} = Pg_k = g_k$, then $g_k$ is a fixed point of the Picard operator, hence it is a solution of the related Cauchy problem.
If moreover $g_1 = Pg_0 = g_0$, then by induction $g_k = g_0$ for every $k$, so the sequence of iterates is constant and converges to $g_0$.
Find the product of all quadratic nonresidues in $\mathbb{Z}_{103}$ | Recall Wilson's theorem for finite Abelian groups:
The product of all elements in a finite Abelian group is either $1$ or the element of order $2$ if there is only one such element.
$\mathbb{Z}_{103}^\times$ is a cyclic group of order $102$. It has exactly one element of order $2$, namely $-1$, and so the product of all its elements is $-1$.
The set of quadratic residues is a subgroup of order $51$. Since $51$ is odd, it has no element of order $2$, and so the product of all its elements is $1$.
Therefore, the product of all quadratic nonresidues is $-1 \bmod 103$. |
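A direct computation confirming this (throwaway script, mine):

```python
p = 103
residues = {pow(a, 2, p) for a in range(1, p)}
nonresidues = [a for a in range(1, p) if a not in residues]

prod = 1
for a in nonresidues:
    prod = prod * a % p
print(len(residues), len(nonresidues), prod)   # 51 51 102, and 102 = -1 mod 103
```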
What is the derivative of the modulus of a complex function? | The reference that suggests that $|z|$ is differentiable at $z=0$ is incorrect. Note that if $f(z)$ is differentiable, then $$f'(z)=\lim_{\Delta z\to 0}\frac{f(z+\Delta z)-f(z)}{\Delta z}$$
Taking $f(z)=|z|$ and $z=0$, we see that $\displaystyle \lim_{\Delta z\to 0}\frac{|\Delta z|-0}{\Delta z}$ fails to exist.
Similarly, if $f(z)=u(x,y)+iv(x,y)$, and $g(z)=|f(z)|^2=u^2(x,y)+v^2(x,y)$, then $g(z)$ is purely real. The only purely real function that is complex differentiable in an open neighborhood of a point is a function that is constant. So, $g$ is differentiable in a neighborhood of $z$ only if $f$ is constant there.
To show this, we appeal to the Cauchy-Riemann equations. Note that if $h(z)=\phi(x,y)$, where $\phi$ is a purely real-valued function, then the Cauchy-Riemann equations reveal
$$\frac{\partial \phi(x,y)}{\partial x}=\frac{\partial \phi(x,y)}{\partial y}=0\implies \phi(x,y)\,\,\text{is a constant}$$
Now, it is possible that $|f|^2$ might be differentiable at an isolated point $z$, but not in an open neighborhood of that point. Note that if $f$ is analytic in the open domain $O$, and $z\in O$ and $z+\Delta z\in O$, then
$$\begin{align}
\frac{|f(z+\Delta z)|^2-|f(z)|^2}{\Delta z}&=\frac{|f(z)+f'(z)\Delta z+O\left((\Delta z)^2\right)|^2-|f(z)|^2}{\Delta z}\\\\
&=\frac{2\text{Re}\left(\overline{f(z)}f'(z)\Delta z\right)}{\Delta z}+O\left(\Delta z\right)
\end{align}$$
Hence, if the limit $\lim_{\Delta z\to 0}\frac{2\text{Re}\left(\overline{f(z)}f'(z)\Delta z\right)}{\Delta z}$ exists, then $|f|^2$ is differentiable at $z$. Again, $|f|^2$ cannot be analytic at any point unless $f$ is a constant. |
Computing the volume of a region on the unit $n$-sphere | You can write your area as a probability times the total area of the sphere, by writing $X_i = Z_i/\sqrt{\sum_{j=1}^n Z_j^2}$ for i.i.d. $N(0,1)$ gaussian random variables $Z_1,\ldots,Z_n$; the point $X$ then lies on the unit sphere in $\mathbb{R}^n$, whose area is $2\pi^{n/2}/\Gamma(n/2)$. Then your desired area is $2\pi^{n/2}/\Gamma(n/2)$ times the probability that $\sum_j a_j Z_j^2 \le a_2\sum_j Z_j^2$, which might be estimated (say) by a form of the central limit theorem.
Show that an entire function $f$ s.t. $|f(z)|>1$ for $|z|>1$ is a polynomial | If the Taylor series about 0 does not terminate, $f(1/z)$ has an essential singularity at $0$ (why?)
Then from the Casorati–Weierstrass theorem (have you learnt this?) you know $f(1/z)$ cannot be bounded from below near 0. Contradiction! |