Analysis Series Question
$$\frac1{3n+1}+\frac1{3n+2}-\frac2{3n+3}=\frac2{(3n+1)(3n+3)}+\frac1{(3n+2)(3n+3)}\sim\frac1{3n^2}$$
A question about the proof of an obvious result
It's not clear how you could write a simpler proof. Basically, you are using each feature of the "local homeomorphism" exactly once: $f(U)$ being open in $Y$ means $f(U)\cap V$ is an open subset of $f(U)$. $f$ being a homeomorphism $U\to f(U)$ means that $f^{-1}$ of an open subset of $f(U)$ is an open subset of $U$, and hence an open subset of $X$.
RMM 2015/P1: Does there exist an infinite sequence of positive integers $a_1, a_2, a_3, \ldots$
By using the canonical indexing of primes, it is sufficient to show that there exists a sequence $\{A_n\}_{n\geq 1}$ of finite subsets of $\mathbb{N}\setminus\{0\}$ such that $A_{n+1}$ is disjoint from $A_n$ but has a non-trivial intersection with every member of the family $A_1,A_2,\ldots,A_{n-1}$. Your sequence is associated to $$ \{1,2\},\{3,4\},\{1,5\},\{2,3,6\},\{1,4,7\},\{2,3,5,8\},\{1,4,6,9\},\{2,3,5,7,10\},\ldots$$ and I can see a pattern here: starting with $A_5=\{1,4,7\}$, $A_n$ is given by $$ (A_{n-2}\setminus\{\max A_{n-2}\})\cup\{\max A_{n-2}-1\}\cup\{n+2\}; $$ that is, decrease by one the maximum element of $A_{n-2}$, then append $n+2$. By induction it should not be difficult to prove that this actually works. I'll start the proof.

$A_n\cap A_{n+1}=\emptyset$: this is blatantly true for any $n\leq 6$, hence we may assume $n>6$. Since $\max A_{n+1}=n+3>n+2=\max A_n$, $\max A_{n+1}$ is not an element of $A_n$. The set $A_{n+1}\setminus\{\max A_{n+1}\}$ equals $A_{n-1}$ with the maximum element ($n+1$) replaced by $n$. $A_n\cap A_{n-1}=\emptyset$ by the inductive hypothesis, hence the proof of $A_n\cap A_{n+1}=\emptyset$ boils down to the proof of $n\not\in A_n$, which follows from $\max(A_n\setminus\{\max A_n\})=n-1$.

$A_n$ has a non-trivial intersection with $A_1,A_2,\ldots,A_{n-2}$: by direct inspection we may assume $n>6$ as well. By definition $A_n$ has non-trivial intersections with $A_{n-2},A_{n-4},\ldots,A_2$, so it is sufficient to prove that $A_n$ has non-trivial intersections with $A_{n-3},A_{n-5},\ldots,A_1$. In the previous point we have shown $\max(A_n\setminus\{\max A_n\})=n-1=\max A_{n-3}$, so $A_n\cap A_{n-3}\neq\emptyset$. In a similar way we may show that if we remove the two greatest elements from $A_n$, the maximum becomes the maximum of $A_{n-5}$, so $A_n\cap A_{n-5}\neq \emptyset$, et cetera.

This is basically the reverse approach of the one taken by Eigen von Eitzen here (his sets end with $2n-1,2n$, our sets start with $1,4$ or $2,3$). We gain a pleasant bit of regularity if we pick $A_3$ as $\{2,5\}$ instead of $\{1,5\}$: $$ \{1,2\},\{3,4\},\{2,5\},\{1,3,6\},\{2,4,7\},\{1,3,5,8\},\{2,4,6,9\},\{1,3,5,7,10\},\ldots$$
Number theory (trace)
Set $\alpha=\sqrt[4]{2}$ and write $T=T^{\mathbb{Q}[\alpha]}_{\mathbb{Q}}$ for the trace. Suppose $\sqrt{3}\in \mathbb{Q}[\alpha]$; then $\sqrt{3}=a+b\alpha+c\alpha^2+d\alpha^3,$ where $a,b,c$ and $d\in \mathbb{Q}.$ Then $T(\sqrt{3})=T(a+b\alpha+c\alpha^2+d\alpha^3)=T(a)+bT(\alpha)+cT(\alpha^2)+dT(\alpha^3).$ We have $Irr(\alpha,\mathbb{Q})=X^4-2$, $Irr(\alpha^2,\mathbb{Q})=X^2-2$, $Irr(\alpha^3,\mathbb{Q})=X^4-8$, $Irr(\sqrt{3},\mathbb{Q})=X^2-3$ and $Irr(\sqrt{3},\mathbb{Q}[\sqrt{3}])=X-\sqrt{3}.$ Since $T(a)=4a,$ $T(\alpha)=\alpha+(-\alpha)+(-i\alpha)+i\alpha=0,$ $T(\alpha^2)=\sqrt{2}+(-\sqrt{2})=0$ and $T(\alpha^3)=\alpha^3+(-\alpha^3)+(-i\alpha^3)+i\alpha^3=0,$ we get $T(\sqrt{3})=4a.$ On the other hand, as $\sqrt{3} \in \mathbb{Q}[\alpha],$ we have $\mathbb{Q}\subset\mathbb{Q}[\sqrt{3}]\subset\mathbb{Q}[\alpha],$ so by transitivity of the trace $T(\sqrt{3})=T_{\mathbb{Q}}^{\mathbb{Q}[\sqrt{3}]}\left(T_{\mathbb{Q}[\sqrt{3}]}^{\mathbb{Q}[\alpha]}(\sqrt{3})\right).$ Since $[\mathbb{Q}[\alpha]:\mathbb{Q}[\sqrt{3}]]=2$ and $\sqrt{3}$ lies in the base field, $T_{\mathbb{Q}[\sqrt{3}]}^{\mathbb{Q}[\alpha]}(\sqrt{3})=2\sqrt{3},$ hence $T(\sqrt{3})=T_{\mathbb{Q}}^{\mathbb{Q}[\sqrt{3}]}(2\sqrt{3})=2\left(\sqrt{3}+(-\sqrt{3})\right)=0.$ Therefore $a=0,$ and thus $\sqrt{3}=b\alpha+c\alpha^2+d\alpha^3.$

Let $\beta=\frac{\sqrt{3}}{\alpha}$; then $\beta=b+c\alpha+d\alpha^2.$ As $\sqrt{3}$ and $\alpha$ belong to $\mathbb{Q}[\alpha]$, so does $\beta$, and by linearity as before $T(\beta)=4b.$ On the other hand $Irr(\beta,\mathbb{Q})=X^4-\frac{9}{2},$ so $\mathbb{Q}[\beta]=\mathbb{Q}[\alpha]$ and $T_{\mathbb{Q}[\beta]}^{\mathbb{Q}[\alpha]}(\beta)=\beta,$ because $Irr(\beta,\mathbb{Q}[\beta])=X-\beta.$ Therefore $T(\beta)=T_{\mathbb{Q}}^{\mathbb{Q}[\beta]}(\beta)=0,$ being the sum of the roots of $X^4-\frac{9}{2}.$ Then $b=0,$ and thus $\beta=c\alpha+d\alpha^2.$

Let $\gamma=\frac{\beta}{\alpha};$ then $\gamma=c+d\alpha.$ As $\beta$ and $\alpha$ belong to $\mathbb{Q}[\alpha],$ so does $\gamma$, and by linearity $T(\gamma)=4c.$ On the other hand $Irr(\gamma,\mathbb{Q})=X^2-\frac{3}{2},$ so $[\mathbb{Q}[\alpha]:\mathbb{Q}[\gamma]]=2$ and $T_{\mathbb{Q}[\gamma]}^{\mathbb{Q}[\alpha]}(\gamma)=2\gamma.$ Therefore $T(\gamma)=T_{\mathbb{Q}}^{\mathbb{Q}[\gamma]}(2\gamma)=2\left(\gamma+(-\gamma)\right)=0.$ Then $c=0,$ thus $\gamma=d\alpha,$ i.e. $\sqrt{3}=d\alpha^3.$ If $d\ne0,$ then $3=d^2\alpha^6=2\sqrt{2}\,d^2,$ so $\sqrt{2}=\frac{3}{2d^2}\in \mathbb{Q}.$ A contradiction. Hence $\sqrt{3}\notin \mathbb{Q}[\alpha].$
Affine coordinate ring, $k$-vector space
Well, the coordinate ring is $k[x,y]/J$, where $J$ is the ideal $$J=\langle x,y\rangle\cap\langle x-1,y\rangle \cap\langle x,y-1\rangle\cap \langle x-1,y-1\rangle,$$ which is equal to $\langle x^2-x,y^2-y\rangle$. See "Showing that two ideals are equivalent" for more details. Moreover, the standard monomial basis of $k[x,y]/J$ consists of $1,x,y,xy$, and so the coordinate ring is 4-dimensional over $k$.
Constructing piecewise quadratic polynomial
From my comments above, you require the spline to satisfy $p(1)=q(1)$, $q(2)=r(2)$ for function continuity and $p'(1)=q'(1)$ and $q'(2)=r'(2)$ for derivative continuity. The result of this is an overdetermined linear system, which has a unique solution.
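For concreteness, here is a minimal Python sketch of setting up and solving such a linear system; the knot locations, data values, and the extra slope condition $p'(0)$ are hypothetical placeholders (the actual numbers live in the comments referenced above), and the extra condition is what makes this particular system square.

```python
# A minimal sketch with hypothetical data: quadratics p, q, r on [0,1],
# [1,2], [2,3], glued by p(1)=q(1), q(2)=r(2), p'(1)=q'(1), q'(2)=r'(2).
import numpy as np

def val(x):  # row of coefficients for s(x), where s(x) = a + b*x + c*x**2
    return [1.0, x, x * x]

def der(x):  # row of coefficients for s'(x)
    return [0.0, 1.0, 2 * x]

Z = [0.0] * 3
ys = [1.0, 2.0, 0.0, 1.0]      # hypothetical values at x = 0, 1, 2, 3
slope0 = 0.0                   # hypothetical extra condition p'(0) = 0

rows = [val(0) + Z + Z,                        # p(0) = y0
        val(1) + Z + Z,                        # p(1) = y1
        Z + val(2) + Z,                        # q(2) = y2
        Z + Z + val(3),                        # r(3) = y3
        val(1) + [-t for t in val(1)] + Z,     # p(1) - q(1)  = 0
        Z + val(2) + [-t for t in val(2)],     # q(2) - r(2)  = 0
        der(1) + [-t for t in der(1)] + Z,     # p'(1) - q'(1) = 0
        Z + der(2) + [-t for t in der(2)],     # q'(2) - r'(2) = 0
        der(0) + Z + Z]                        # p'(0) = slope0
rhs = ys + [0.0, 0.0, 0.0, 0.0, slope0]

coeffs = np.linalg.solve(np.array(rows), np.array(rhs))
print(coeffs.reshape(3, 3))    # rows: coefficients of p, q, r
```

If the question's full set of conditions instead makes the system rectangular, `np.linalg.lstsq` will recover the unique solution of a consistent overdetermined system.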
Order of an element in $U(13)$ and $U(7)$.
The order of an element $a$ in $U(n)$ is the smallest positive integer $k$ such that $a^k \equiv 1 \pmod{n}$. With this, note that $12 \not\equiv 1 \pmod{13}$ but $12^2 \equiv 1 \pmod{13}$, so, the order of $12$ is in fact $2$.
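If you want to experiment, here is a small Python helper (a sketch, with names of my own choosing) that computes the order of any element of $U(n)$ by brute force:

```python
from math import gcd

def order(a, n):
    """Order of a in U(n): least k >= 1 with a**k congruent to 1 (mod n)."""
    assert gcd(a, n) == 1          # a must be a unit mod n
    k, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        k += 1
    return k

print(order(12, 13))   # 2, since 12 = -1 (mod 13)
```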
Possible worlds/beliefs/Probability Matrix/Example 3
$\begin{pmatrix} 1/2 & 1/2 & 0 \\ 1/2 & 1/2 & 0 \\ 1/2 & 1/2 & 0\end{pmatrix}$ cannot be reordered as a block diagonal matrix and is a suitable belief matrix. But if a matrix can be reordered as a block matrix with only column operations, then such a reordering is unique: let $v_1, \ldots, v_n$ be the basis for the domain and $w_1, \ldots, w_k$ the basis of the codomain that make $A$ a block matrix. Then you will have $$\text{span}(Av_1, \ldots,Av_{i_1}) \ \subseteq\text{span}(w_1, \ldots, w_{j_1}), \ \text{span}(Av_{i_{1}+1}, \ldots,Av_{i_2}) \ \subseteq\text{span}(w_{j_1+1}, \ldots, w_{j_2}), \ldots$$ and so on. By reordering the columns you are only changing the order of the vectors in the basis $v_1,\ldots,v_n$, and you can easily see that this identifies the blocks (besides permutations inside the blocks).
Scheduling profit deadlines $NP$-completeness
The problem, as you state it in the post, is to find the maximum profit. This is not a decision problem and so it can't be NP-complete by definition. So let's consider a closely related decision problem: let $A$ be a set of tasks, each task $a$ having an associated execution time $t(a)$, profit $p(a)$, and deadline $d(a)$, and let $B$ be a positive integer. Is there a sequence of distinct tasks $a_1, a_2, \ldots, a_n$ in $A$ such that $\sum_{1 \le i\le k}{t(a_i)} \le d(a_k)$ (that is, each task completes by its deadline) and $\sum_{1 \le i \le n} p(a_i) \ge B$ (that is, the total profit of the tasks is at least $B$)? This problem is clearly in NP (it's easy to check in polynomial time that a given schedule meets the conditions) and it can be shown to be NP-hard by a reduction from SUBSET SUM: suppose we have an instance of SUBSET SUM, in the form of a finite set $A$, a size $s(a) \in \mathbb{Z}^+$ for each $a \in A$, and a positive integer $B$, the problem being to determine if there is a subset $A' \subseteq A$ such that $\sum_{a\in A'}s(a) = B$. Then the corresponding sequencing problem has $t(a) = p(a) = s(a)$ and $d(a) = B$. A standard argument for reducing an optimization problem to the corresponding decision problem shows that your original problem (to find the maximum profit) is in the class $\mathrm{FP}^{\mathrm{NP}}$. The decision version of your problem is [SS3] SEQUENCING TO MINIMIZE TARDY TASK WEIGHT in Garey & Johnson, Computers and Intractability, where the authors add: "Can be solved in pseudo-polynomial time (time polynomial in $\left|A\right|$, $\sum{t(a)}$, and $\log \sum {p(a)}$)" [Lawler & Moore (1969), "A functional equation and its applications to resource allocation and sequencing problems," Management Science 16 pp. 77-84].
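For what it's worth, here is a Python sketch of a pseudo-polynomial dynamic program in the spirit of the Lawler & Moore reference: sort tasks by deadline, then do a knapsack-style sweep over total processing time. The function name and input format are my own; treat it as an illustration, not the paper's algorithm verbatim.

```python
def max_profit(tasks):
    """tasks: list of (t, p, d) triples.  DP over total processing time,
    considering tasks in deadline order (a sketch, not an optimized solver)."""
    tasks = sorted(tasks, key=lambda a: a[2])          # earliest deadline first
    T = sum(t for t, _, _ in tasks)
    best = [0] + [-1] * T          # best[load] = max profit with total time load
    for t, p, d in tasks:
        for load in range(min(d, T), t - 1, -1):       # task must finish by d
            if best[load - t] >= 0:
                best[load] = max(best[load], best[load - t] + p)
    return max(best)

# Example: with these two tasks, both cannot meet their deadlines.
print(max_profit([(2, 4, 3), (3, 5, 4)]))   # 5
```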
Find $A^n , n \ge 1$
Write $A = u^tv$. Then $$A^2 = u^tv\,u^tv= u^t (vu^t)v, \qquad A^n = u^t (vu^t)^{n-1}v.$$ Here $vu^t = 3$ is a scalar, so $$A^n = 3^{n-1} A.$$
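A quick numerical check, with a hypothetical pair $u$, $v$ chosen so that $vu^t=3$:

```python
import numpy as np

# Hypothetical vectors with v u^t = 3, e.g. u = v = (1, 1, 1) as row vectors.
u = np.ones((1, 3))
v = np.ones((1, 3))
assert (v @ u.T).item() == 3
A = u.T @ v                        # the rank-one matrix u^t v
n = 4
print(np.allclose(np.linalg.matrix_power(A, n), 3 ** (n - 1) * A))  # True
```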
Homeomorphism of coarser topologies
Yes. If $X$ is a set and $\tau_1$ and $\tau_2$ are topologies on $X$ such that $(X,\tau_1)$ and $(X,\tau_2)$ are homeomorphic, let $f\colon X\longrightarrow X$ be such a homeomorphism. Then, if $\tau_1'$ is a topology on $X$ which is coarser than $\tau_1$, then, if $\tau_2'=\{f(A)\mid A\in\tau_1'\}$, $\tau_2'$ is a topology on $X$ which is coarser than $\tau_2$ and $(X,\tau_1')$ and $(X,\tau_2')$ are homeomorphic (again, take $f$).
$x^6+x^3+1$ is irreducible over $\mathbb{Q}$
HINT: Let $y=x-1$, and apply Eisenstein's criterion for $p=3$.
sum of X and Y and Z if they are iid random variables
Let $X, Y, Z$ be the results of tossing three coloured, but equally-biased, dice. Clearly $X, Y, Z$ have the same distribution, but are not the same variable.   They may independently have three different values in any instance.
What can be learned for number theory from geometrical constructions (and vice versa)?
Addressing only the title of your Q. Farey sequences are a useful tool in elementary Number Theory. Let $a,b,c,d\in \Bbb N$ such that $a/b$ and $c/d$ are in lowest terms and $a/b<c/d.$ We say $a/b,c/d$ are Farey-adjacent iff for all $e,f\in \Bbb N$ such that $a/b<e/f<c/d,$ we have $f>\max (b,d).$ An important property is that if $a/b,c/d$ are Farey-adjacent then $|ad-bc|=1.$ This can be proven algebraically. In "Introduction To Geometry" by Coxeter, it is shown how to prove it geometrically: If $a/b,c/d$ are Farey-adjacent then we can easily show there is a finite sequence $T_1,\ldots, T_n$ of affine area-preserving maps from $\Bbb R^2$ to $\Bbb R^2$ such that $T=\prod_{i=1}^nT_i$ maps the triangle with vertices $(0,0),(a,b),(c,d)$ onto the triangle with vertices $(0,0),(1,0),(0,1).$ The area of the $\triangle$ with vertices $(0,0),(a,b),(c,d)$ is $\frac {1}{2}|ad-bc|$ while the area of the $\triangle$ with vertices $(0,0),(1,0),(0,1)$ is $\frac {1}{2}.$
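The adjacency property is easy to test numerically; this Python sketch checks $|ad-bc|=1$ for consecutive members of a Farey sequence $F_N$ (consecutive members of $F_N$ are in particular Farey-adjacent, since anything strictly between them has denominator $> N \ge \max(b,d)$):

```python
from fractions import Fraction

def farey(N):
    """All reduced fractions in [0, 1] with denominator <= N, in order."""
    return sorted({Fraction(a, b) for b in range(1, N + 1)
                                  for a in range(b + 1)})

N = 7
F = farey(N)
assert all(abs(x.numerator * y.denominator - x.denominator * y.numerator) == 1
           for x, y in zip(F, F[1:]))
print(len(F), "fractions checked")
```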
Probability of $\mathcal{O}_{\mathbb{Q}(\alpha)}=\mathbb{Z}[\alpha]$
You are in luck. This exact problem has been considered in this paper: https://arxiv.org/abs/1611.09806 . They prove that $ p_n = 1/\zeta(2)$, in particular it is independent of $n$! Read the introduction to the paper, it is very clear. (Do note that they work with a different height function.)
Do maximal proper subfields of the real numbers exist?
Such a field $F$ does not exist. Assume contrariwise that $F$ is a field with the properties $\sqrt2\notin F$, $F(\sqrt2)=\mathbb{R}$. In that case there exists a non-trivial $F$-automorphism $\sigma$ of $F(\sqrt2)$ with the property $\sigma(z)=z$ for all $z\in F$ and $\sigma(\sqrt2)=-\sqrt2$. This contradicts the known fact that the field $\mathbb{R}$ has no non-trivial automorphisms. I outline the steps in the argument in case you have not seen them before. Below $\tau$ is an arbitrary automorphism of $\mathbb{R}$.
1. We have $\tau(q)=q$ for all the rational numbers $q$.
2. The automorphism $\tau$ maps any square in the field $\mathbb{R}$ to a (possibly different) square.
3. A real number is a square iff it is non-negative, so by Step 2 $\tau$ maps any positive real number to a positive real number.
4. In the field $\mathbb{R}$ we have $x\le y\Leftrightarrow y-x\ge0$. Therefore Step 3 implies that the automorphism $\tau$ is strictly increasing as a real function.
5. Step 4 implies that $\tau$ is continuous on all of $\mathbb{R}$. Therefore its fixed points form a closed set.
6. The (topological) closure of $\mathbb{Q}$ is all of $\mathbb{R}$, so combining Steps 1 and 5 shows that $\tau(x)=x$ for all real numbers $x$.
The rigid additive tensor category freely generated by an object
Not really an answer, but I wanted to post this here in case anyone else ends up thinking about this thing. It might help set you on the right track. Anyway, after browsing through this collection of slides (see "Ingredients of Construction" slide) I now know that the morphisms in this category can be described using walled Brauer diagrams (see e.g. pages 6-7 here). I haven't completely plugged all the holes in my understanding, but it has helped quite a lot. The rule for going from an element $\sigma \in S_{a+d}$ to a map $\epsilon(\sigma):T^{a,b}\to T^{c,d}$ is as follows: draw the element $\sigma$ as in my diagram below as a collection of strands joining the $a+d = b+c$ points along two lines from top to bottom. Additionally, there is a green "wall" between the $a$th and $(a+1)$st points on the top, connected to the point between the $c$th and $(c+1)$st on the bottom (this wall represents the "split" between covariant and contravariant factors in the map). Then "flip" the right (contravariant) part of the diagram along the green wall to get a pictorial description of a map $T^{a,b}\to T^{c,d}$. One obtains a similar diagram, but with different numbers of nodes on the top and bottom (the top has $a+b$ nodes, the bottom has $c+d$ nodes). Lines that crossed the green wall then become "semi-loops" attached to one edge, which I have taken to mean "evaluation" and "coevaluation". The other lines show which factors are sent to which other factors in the corresponding map. The universal formula mentioned above feels within reach when you follow the formal rule $$\text{closed loop in a Brauer diagram} = \text{factor of } \text{rank}(V) \text{ multiplying the diagram without the loop.}$$ Assuming this rule works in this vague interpretation, I managed to demonstrate the formula in the image below for a particular element $(2,3)$ of the symmetric group $S_5$. Why one must follow this rule is still unclear to me, but I guess it is something like this: in a composition $$T^{a,b}\xrightarrow{\epsilon(\sigma)} T^{c,d}\xrightarrow{\epsilon(\tau)}T^{e,f}$$ where $a+d = b+c$ and $c+f = e+d$, there is some number $N$ of evaluations and coevaluations occurring between elements of $V$ and $V^\vee$, which can be seen as closed loops in the centre of the Brauer diagram. When each of these pairings between coevaluations and evaluations occurs, a sum of basis vectors of the form $\delta_i(e_i)$ (notation in the accepted answer to this question) multiplies the morphism by a factor of the rank of $V$. Therefore, every time a loop occurs in the diagram, we pick up another factor of the rank. When you calculate what the corresponding map $\epsilon(\tau\sigma): T^{a,b}\to T^{e,f}$ does, some "double-overlapping" of the wall kills off certain strands in the diagrams, which would get mapped to closed loops after "flipping" the diagram. Multiplying by the number of strands that get killed by this gives you the same thing: hence $$\epsilon(\tau)\circ\epsilon(\sigma) = \text{rank} (V)^N \epsilon(\tau\sigma).$$ This is quite a hard procedure to describe, and after reading about it there seem to be lots of extremely deep maths related to this construction, including links to supersymmetry in physics. I hope someone else may find this to be a helpful and/or interesting note. Of course, if anyone would like to correct me or add to what I have written in this answer, I would be very happy!
How to represent the following set operation.
Note that $A$ is really just $C\times D$, in which case the set you are looking for is $E\times D$. If $A$ is really just an arbitrary subset of $C\times D$, e.g. a function $f\colon C\to D$, then you can write $A\restriction E$ to mean $\{(a,b)\in A\mid a\in E\}$. Just be sure to explain precisely what you mean when you write $A\restriction E$, to avoid any possible confusion for the readers.
Fermat's little theorem question: $135^{135} \bmod 17$
Since $135=8\times 17 -1$, we have $135\equiv -1\pmod{17}$, hence $135^{135}\equiv(-1)^{135}=-1\equiv 16\pmod{17}$: the remainder is $17-1=16$. In fact, we didn't really need FLT.
How to find the first five terms of a sequence?
The recurrence is given by $$u_n = 5 u_{n-1} - 6 u_{n-2},$$ with $u_1 = 1$ and $u_2 = 5$. Then:
$u_3 = 5 u_2 - 6u_1 = 5 \times 5 - 6 \times 1 = 19$
$u_4 = 5 u_3 - 6 u_2 = 5 \times 19 - 6 \times 5 = 65$
$u_5 = 5 u_4 - 6 u_3 = 5 \times 65 - 6 \times 19 = 211$
Right cone, you are at A and need to complete a revolution before reaching the bottom B. What is the shortest distance AB?
Henry has answered most of your questions, except how to calculate the distance of the path. To find the distance you need to find the angle $\theta$ of the circular sector. We know the circular arc of the sector has the same length as the circumference of the cone's base circle, so $\theta s = 2\pi r$ giving: $$\theta = \frac{2 \pi r}{s}$$ If the point $A$ is located a distance $d$ from the cone vertex, we can now use the Law of Cosines to find the path distance x: $$x^2 = s^2 + d^2 - 2sd \cos(\frac{2 \pi r}{s})$$ EDIT To find the uphill distance $x_u$ we use the method given by Henry, i.e. drop a perpendicular from $O$ to $AB$, meeting at $C$. The height $h$ of $OC$ can be found by the area formulas for a triangle: $$\text{Area} = \frac{1}{2}\cdot s \cdot d \cdot \sin(\theta) = \frac{1}{2}\cdot x \cdot h$$ giving $$h = \frac{s \cdot d \cdot \sin(\theta)}{x}$$ We then see that $x_u^2+h^2=d^2$ or $$x_u = \sqrt{d^2-h^2}$$
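Putting the formulas together, a small Python sketch (the sample values for $r$, $s$, $d$ are made up):

```python
import math

def path_lengths(r, s, d):
    """Unrolled-cone path: base radius r, slant length s, start at distance d
    from the apex.  Returns (total path length x, uphill part x_u)."""
    theta = 2 * math.pi * r / s          # angle of the unrolled sector
    x = math.sqrt(s * s + d * d - 2 * s * d * math.cos(theta))
    h = s * d * math.sin(theta) / x      # apex-to-path perpendicular OC
    x_u = math.sqrt(d * d - h * h)       # the uphill portion from A to C
    return x, x_u

print(path_lengths(r=1.0, s=4.0, d=3.0))
```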
Integration problem yields two solutions. But are they the same?
Following your work, $\cos\theta = \frac{1}{x^6} \Rightarrow \theta = \cos^{-1}\left(\frac1{x^6}\right)$, and $\cos^{-1}x = \frac\pi2-\sin^{-1}x$. So, $\boxed{\theta = \frac\pi2 - \sin^{-1}\left(\frac{1}{x^6}\right)}$. Now, $\frac{1}{6}\cdot\frac\pi2$ is just a constant, so it can be absorbed into the constant of integration, and you have the required result: $\boxed{- \frac16\sin^{-1}\left(\frac{1}{x^6}\right) +k}$
Abstract algebra ring problem related to number theory
First let's show that $S$ is closed under subtraction and multiplication. Choose $[ur], [vr] \in S$ where $0 \leq u,v \leq s-1$, so $0 \leq |u-v| \leq s-1$. By the division algorithm there are unique $q,R \in \mathbb{Z}$ s.t. $(u-v)r=rsq+R$ with $0 \leq R < rs$. If $u-v \geq 0$ then $(u-v)r <rs$, so $q=0$ and $R=(u-v)r$. Thus $[(u-v)r] \in S$. If $u-v <0$ then $q=-1$, since $-(s-1) \leq u-v <0$, so $R = (u-v+s)r$, since $-(s-1) + s \leq u-v+s < s$. Thus $[(u-v)r] \in S$. Therefore $S$ is closed under subtraction. Moreover, by the division algorithm there are unique $Q, \rho$ s.t. $urvr=rsQ+\rho$ where $0 \leq \rho < rs$. Applying the division algorithm again, there are unique $Q', \rho'$ s.t. $urv = sQ' +\rho'$ where $0 \leq \rho' <s$. Thus $urvr =(sQ' + \rho')r = sQ'r + \rho' r$. Note that $0 \leq \rho' <s$, so $0 \leq \rho' r <rs$. Thus $\rho' r$ is the remainder of $urvr$ divided by $rs$, and therefore $[urvr] = [\rho' r] \in S$. So $S$ is closed under multiplication. Hence $S$ is a subring of $\mathbb{Z}_{rs}$. Furthermore, $[ur][ks+1]=[ur(ks+1)]=[urks+ur]=[urks]+[ur]=[0]+[ur]=[ur]$. Thus $[ks+1]$ is the multiplicative identity of $S$. $\square$
Show that $2$ is a primitive root modulo $13$.
1. This is Lagrange's theorem. If $G$ is the group $(\mathbb{Z}/13\mathbb{Z})^{\ast}$ (the group of units modulo $13$), then the order of an element $a$ (that is, the smallest number $t$ such that $a^t \equiv 1 \pmod{13}$) must divide the order of the group, which is $\varphi(13) = 12$. So we only check the divisors of $12$.
2. Yes, that is a square mod $13$. To say that $a$ is a primitive root mod $13$ means that $a^{12} \equiv 1 \pmod{13}$, but all lower powers $a, a^2, \ldots , a^{11}$ are not congruent to $1$. Again use Lagrange's theorem: supposing $a^2$ were a primitive root, then $12$ would be the smallest power of $a^2$ such that $(a^2)^{12} \equiv 1$. But note that $b^{12} \equiv 1$ for ANY integer $b$ not divisible by $13$. So $(a^2)^{6} = a^{12} \equiv 1$, and $6 < 12$, contradiction.
3. It's a general result about finite cyclic groups. A cyclic group of order $m$ is a group of the form $H = \{ 1, g, g^2, \ldots , g^{m-1}\}$. It is basically the same thing as the group $\mathbb{Z}/m\mathbb{Z}$ with respect to addition. In general, if $d \geq 1$, there exist elements in $H$ with order $d$ (that is, their $d$th power is $1$, all lower powers are not $1$) if and only if $d$ is a divisor of $m$, and there are exactly $\varphi(d)$ such elements. In particular, if $p$ is an odd prime number, the result is that $(\mathbb{Z}/p\mathbb{Z})^{\ast}$ is a cyclic group of order $\varphi(p) = p-1$, and the number of primitive roots (that is, the number of elements with order $p-1$) is exactly $\varphi(p-1) = \varphi(\varphi(p))$.
4. If you have found a primitive root modulo $p$ (where $p$ is an odd prime), then you can easily find the rest of them: if $a$ is a primitive root mod $p$, then the other primitive roots are $a^k$, where $k$ runs through those numbers which don't have any prime factors in common with $p-1$. It's a good exercise to prove this. So $2^9$ wouldn't work; $9$ has prime factors in common with $12$.
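A quick computational check of points 1 and 4 (a sketch):

```python
from math import gcd

n, phi = 13, 12
order = lambda a: next(k for k in range(1, n) if pow(a, k, n) == 1)
assert order(2) == phi                     # 2 is a primitive root mod 13
roots = sorted(pow(2, k, n) for k in range(1, phi + 1) if gcd(k, phi) == 1)
print(roots)                               # [2, 6, 7, 11]: phi(12) = 4 of them
```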
Prove that the integral of an even function is odd
We have $$ F(x) = \int_{0}^{x}f(t)dt, $$ then $$F(-x) =\int_{0}^{-x}f(t)dt=\overbrace{ \int_{0}^{x}f(-u)(-du)}^{\large u\:=-t}=\int_{0}^{x}f(u)(-du)=-F(x).$$
Convergence $\sum_{k=1}^N \frac{a_k}{ \sum_{i=0}^k a_{i}^2} $
Suppose that $\sum\limits_{k=1}^{\infty}a_k^2=S>0$. Denote $$ b_k:=\frac{a_k}{\sum\limits_{i=1}^{k}a_i^2}. $$ Then $$ b_k\sim\frac{a_k}{S}~\text{as}~k\to\infty. $$ Hence, since $a_k, b_k>0$, the series $\sum\limits_{k=1}^{\infty}b_k$ is convergent if and only if the series $\sum\limits_{k=1}^{\infty}{a_k}$ is convergent. Moreover, note that if $\sum\limits_{k=1}^{\infty}a_k$ is convergent, then so is $\sum\limits_{k=1}^{\infty}a_k^2$.
A remark about the Rellich-Kondrachov Compactness Theorem in Evans's PDE book
Remember $U$ is bounded, so $W^{1,n}(U) \subset W^{1,p}(U)$ for all $1 \leq p < n$ by Hölder. Since $p^*\to \infty$ as $p\to n$, we can choose a fixed $p<n$ close enough to $n$ so that $p^*>n$. Then by the Rellich-Kondrachov compactness theorem $$W^{1,n}(U) \subset W^{1,p}(U) \subset\subset L^n(U).$$ Arguing along the same lines, we actually have $W^{1,n}(U) \subset\subset L^q(U)$ for all $1 \leq q < \infty$.
Angle between two vectors using transpose
The equation in your book just shows the dot product in matrix notation. A row vector times a column vector gives you a scalar (the dot product of the row and column vector), as long as the number of columns of the row vector equals the number of rows of the column vector. In other words, the row vector must contain the same number of elements as the column vector. That's a mouthful... so here's an example that can easily be generalized to any number of elements for $x$ and $y$ (as long as $x$ and $y$ have the same number of elements and are both column vectors). If $x$ is a column vector, $x^T$ is a row vector: $$x = \left[ {\begin{array}{*{20}{c}} {{x_1}}\\ {{x_2}}\\ {{x_3}} \end{array}} \right]$$ $$y = \left[ {\begin{array}{*{20}{c}} {{y_1}}\\ {{y_2}}\\ {{y_3}} \end{array}} \right]$$ $${x^T} = \left[ {\begin{array}{*{20}{c}} {{x_1}}&{{x_2}}&{{x_3}} \end{array}} \right]$$ $${x^T}y = \left[ {\begin{array}{*{20}{c}} {{x_1}}&{{x_2}}&{{x_3}} \end{array}} \right]\left[ {\begin{array}{*{20}{c}} {{y_1}}\\ {{y_2}}\\ {{y_3}} \end{array}} \right] = {x_1}{y_1} + {x_2}{y_2} + {x_3}{y_3}$$ In general, if $x$ and $y$ are both $n \times 1$ column vectors, $$x^Ty = \sum_{i=1}^n x_iy_i$$ with $i$ representing the $i$th components of the vectors. A row times a column is often called the inner product. The outer product is a column times a row; in that case you get a matrix, not a scalar, as your product. If the column vector is $n \times 1$ and the row vector is $1\times n$, you get an $n \times n$ matrix.
Find a conformal map from the region between two circles, tangent to each other from inside, to the upper half plane.
The idea of the construction is to use a Möbius Transformation to map the origin to infinity, so that we straighten the two circles into a strip. Define the Möbius Transformation $$f:\hat{\mathbb{C}}\longrightarrow\hat{\mathbb{C}},\ \text{by}\ z\mapsto f(z):=\dfrac{z-1}{z},$$ which is always a bijective holomorphic function. Now, let's figure out where $f(z)$ maps the inner and outer circle to. Set the inner circle to be $$S_{1}:=\Big\{z\in\mathbb{C}:\ z=\dfrac{2}{3}e^{i\theta}+\dfrac{2}{3},\ \theta\in[0,2\pi]\Big\}, $$ and similarly set the outer circle to be $$S_{2}:=\Big\{z\in\mathbb{C}:\ z=e^{i\theta}+1,\ \theta\in[0,2\pi]\Big\}.$$ For $z\in S_{1}$, we have \begin{align*} f(z)&=\dfrac{z-1}{z} \\ &=\dfrac{2 e^{i\theta}-1}{2e^{i\theta}+2}\\ &=\dfrac{2+2\cos\theta}{8+8\cos\theta}+\dfrac{6 i\sin\theta}{8+8\cos\theta}\\ &=\dfrac{1}{4}+i\dfrac{3}{4}\tan\Big(\dfrac{\theta}{2}\Big)\\ &:=u+iv. \end{align*} Thus, $f(z)$ maps $S_{1}$ to $\Big\{w=u+iv:\ u=\dfrac{1}{4},\ v\in\mathbb{R}\ \Big\},$ which is a vertical straight line parallel to the imaginary axis passing through $\dfrac{1}{4}$. For $z\in S_{2}$, we have \begin{align*} f(z)&=\dfrac{z-1}{z}\\ &=\dfrac{e^{i\theta}}{e^{i\theta}+1}\\ &=\dfrac{1}{2}+i\dfrac{1}{2}\tan\Big(\dfrac{\theta}{2}\Big) \end{align*} Thus, $f(z)$ also maps $S_{2}$ to $\Big\{w=u+iv:\ u=\dfrac{1}{2},\ v\in\mathbb{R}\ \Big\},$ which is another vertical straight line parallel to the imaginary axis passing through $\dfrac{1}{2}$. Therefore, totally, $$f:D\longrightarrow D_{1}:=\Big\{w=u+iv:\ u\in\Big[\dfrac{1}{4},\dfrac{1}{2}\Big], v\in\mathbb{R}\Big\}.$$ Now, we want to move $D_{1}$, so that one side of the strip is the imaginary line. Thus, we use this map $$g(z):D_{1}\longrightarrow D_{2}:=\Big\{w=u+iv:\ u\in\Big[0,\dfrac{1}{4}\Big], v\in\mathbb{R}\Big\},\ \text{by}\ z\mapsto g(z):=z-\dfrac{1}{4},$$ which is a commonly-used conformal map. Then, we rotate $D_{2}$ to a strip that lives in the upper-half plane. We use another known conformal map $$h(z):D_{2}\longrightarrow D_{3}:=\Big\{w=u+iv:\ u\in\mathbb{R}, v\in\Big[0,\dfrac{1}{4}\Big]\Big\},\ \text{by}\ z\mapsto h(z):=iz.$$ We then dilate $D_{3}$ to be a strip between $0$ and $\pi i$, i.e. use the conformal map $$d(z):D_{3}\longrightarrow D_{4}:=\Big\{w=u+iv:\ u\in\mathbb{R}, v\in[0,\pi]\Big\},\ \text{by}\ z\mapsto d(z):=4\pi z.$$ Now, everything is set so that we can use our final and also well-known conformal map $$\ell (z):D_{4}\longrightarrow\mathbb{H},\ \text{by}\ z\mapsto \ell(z):=e^{z}.$$ Therefore, our desired conformal mapping is $$F:=\ell\circ d\circ h\circ g\circ f:D\longrightarrow\mathbb{H}, $$ and a quick calculation gives us $$F(z)=\exp\Big(\dfrac{(3z-4)\pi i}{z}\Big)=-\exp\Big(-\dfrac{4\pi i}{z}\Big).$$
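As a sanity check, one can verify numerically (a sketch, using the simplified form $-e^{-4\pi i/z}$ above) that both boundary circles land on the real axis and an interior point lands in the upper half plane:

```python
import cmath, math

F = lambda z: -cmath.exp(-4j * math.pi / z)   # the map computed above

for t in (0.3, 1.0, 2.5):
    # Outer circle |z - 1| = 1 and inner circle |z - 2/3| = 2/3 should map
    # into the real axis (imaginary part ~ 0 up to rounding).
    print(abs(F(cmath.exp(1j * t) + 1).imag) < 1e-9)
    print(abs(F(2 / 3 * cmath.exp(1j * t) + 2 / 3).imag) < 1e-9)

print(F(1.8).imag > 0)   # a point between the circles lands in Im > 0
```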
Prove that $I^2/\partial I^2$ is homeomorphic to $\mathbb{S}^2$
One can define a map $I^n \to S^n$ by requiring that the interior maps to $S^n \setminus \{pt\}$. You can make this formal by taking the inverse of the stereographic projection $s:S^n \setminus \{pt\} \to \mathbb R^n$ and using your favorite homeomorphism $f:\mathrm{Int}(I^n) \to \mathbb R^n$; I recommend a $\tan$ in each coordinate. The composition $s^{-1} \circ f$ gives the desired map. The key now is to see that this map can be extended to the boundary in a continuous fashion. We can be crude about this by defining $\partial I^n \to pt$ and defining the total map piecewise. To check continuity, take any neighborhood of $\{pt\}$ and note that the preimage is just an open ring in $I^n$. Doing this explicitly might be a good exercise, but I don't think it's so interesting. Anyhow, since this makes the appropriate identifications, we know by the universal property of the quotient topology/map that this total map factors through the quotient, and the induced map is bijective. Since the domain is compact and the codomain Hausdorff, it is a homeomorphism.
Probability question: show that $P(A)>P(B)$
$P(A|C) > P(B|C) \implies P(A\cap C) = P(A|C)P(C) > P(B|C)P(C) = P(B\cap C)$. $P(A|C^c) > P(B|C^c)\implies P(A\cap C^c)= P(A|C^c)P(C^c) > P(B|C^c)P(C^c) = P(B\cap C^c)$ So, $P(A \cap C) > P(B \cap C)$ and $P(A \cap C^c) > P(B \cap C^c)$ $ \implies P(A) = P(A \cap C) + P(A \cap C^c)>P(B \cap C) + P(B \cap C^c) = P(B)$
Why is the directional derivative linearly dependent on the direction vector?
He may mean that $D_vf$ is linear in $v$, so that $D_{v+u}f = D_vf + D_uf$ and $D_{cv}f=cD_vf$ (which are both true, by the way). In other words, $D_vf$ "depends linearly" on $v$, which isn't the same thing as being linearly dependent in the sense of linear algebra/vector space theory.
Laplace Transform of Dirac Delta function
The Laplace transform is defined as $$L[f(t)] = \int_0^\infty f(t) e^{-st}\,{\rm d} t.$$ If $a<0$ then $f(t) = \delta(t-a) = 0$ for all $t\in[0,\infty)$, so we simply have $L[\delta(t-a)] = 0$.
Finding a closed formula for a summation with two binomial coefficients
I don't believe that there is a closed form for this sum, but we can get a generating function as follows: $$ \begin{align} \sum_{k=0}^m\binom{m}{k}\binom{n+k}{m} &=\sum_{k=0}^m\binom{m}{k}\binom{n+k}{n-m+k}\\ &=\sum_{k=0}^m(-1)^{n-m+k}\binom{m}{m-k}\binom{-m-1}{n-m+k}\\ &=\sum_{k=0}^m(-1)^{n-k}\binom{m}{k}\binom{-m-1}{n-k}\\ \end{align} $$ This is the coefficient of $x^n$ in $(1+x)^m(1-x)^{-m-1}$. Examples: $$ \begin{align} m=1:&&(1+x)^1(1-x)^{-2}&=1+\color{#C00000}{3}x+\color{#C00000}{5}x^2+7x^3+9x^4+11x^5+\dots\\ m=2:&&(1+x)^2(1-x)^{-3}&=1+5x+13x^2+\color{#C00000}{25}x^3+41x^4+\dots \end{align} $$
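A quick sympy check (a sketch) that the sum matches the coefficients of $(1+x)^m(1-x)^{-m-1}$:

```python
from sympy import binomial, series, symbols

x = symbols('x')
m = 2
# Taylor coefficients of (1+x)^m (1-x)^(-m-1) up to x^6 ...
expr = series((1 + x)**m * (1 - x)**(-m - 1), x, 0, 7).removeO()
coeffs = [expr.coeff(x, n) for n in range(7)]
# ... versus the original binomial sum for each n.
sums = [sum(binomial(m, k) * binomial(n + k, m) for k in range(m + 1))
        for n in range(7)]
print(coeffs == sums)   # True
```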
Line triangle intersection
Let's take the image from the answer: you can think of the red line as the two-dimensional analogue of a plane, where $4x - 3y + 2 = 0$ is its coordinate equation with some normal vector $$\vec{n} = \begin{pmatrix} 4 \\\ -3 \end{pmatrix}$$ that is orthogonal to the line. Now, by substituting a point into the equation, you get the distance of the point from the line along $\vec{n}$ (see Hesse's normal form). While we don't actually care for the distance itself, we do care for its sign. A positive sign indicates that the point lies on the side $\vec{n}$ points to, a negative one indicates the opposite side. Now, if all the values have the same sign, all triangle points lie on the same side of the line, i.e. there are no intersections. With different signs, the triangle must hit the line.
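In code, the whole test is a few lines; this sketch (names are my own) treats a point lying exactly on the line as a hit:

```python
def line_hits_triangle(a, b, c, line):
    """Sign test for the line A*x + B*y + C = 0 against triangle a, b, c."""
    A, B, C = line
    signs = [A * x + B * y + C for x, y in (a, b, c)]
    # No intersection exactly when all three signed values are strictly
    # on the same side of the line.
    return not (all(s > 0 for s in signs) or all(s < 0 for s in signs))

# Example with the line 4x - 3y + 2 = 0 from the answer:
print(line_hits_triangle((0, 0), (1, 0), (0, 1), (4, -3, 2)))   # True
```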
Smallness/ Rigidity of $\kappa(\mathcal{H})$ without using minimal projections?
I just saw your question and I think I can introduce a reference for one of the above statements. The fact that $K(H)$ is simple is proved in Corollary 5.7.6 of my lecture notes on $C^*$-algebras available at arXiv:1211.3404. The proof rests on the facts that $K(H)$ is the closure of the ideal of finite rank operators and this latter ideal is contained in every non-zero ideal of $B(H)$.
Let $U = \min\{X,Y\}$ and $V = \max\{X,Y\}$. Find $\textbf{E}(U)$, and hence calculate $\textbf{Cov}(U,V)$.
Here is a direct way: \begin{align} \mathsf{E}U &=\mathsf{E}U1\{X\le Y\}+\mathsf{E}U1\{X> Y\} \\ &=\int_0^1\int_x^1 x\, dydx+\int_0^1\int_0^x y\, dydx=\frac{1}{3}. \end{align} To calculate the expectation of $UV$, notice that $UV=XY$. Therefore, $\mathsf{E}UV=\mathsf{E}X\,\mathsf{E}Y=1/4.$ Finally, since $U+V=X+Y$, we get $\mathsf{E}V=1-\mathsf{E}U=2/3$, and hence $$\mathsf{Cov}(U,V)=\mathsf{E}UV-\mathsf{E}U\,\mathsf{E}V=\frac14-\frac13\cdot\frac23=\frac1{36}.$$
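A Monte Carlo sanity check (assuming, as the integrals above implicitly do, that $X$ and $Y$ are i.i.d. uniform on $(0,1)$):

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.random((2, 10**6))                 # X, Y ~ iid Uniform(0, 1)
u, v = np.minimum(x, y), np.maximum(x, y)
print(u.mean())                               # ~ 1/3
print((u * v).mean() - u.mean() * v.mean())   # Cov(U, V) ~ 1/36 = 0.0278
```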
Product of terms involving eigenvalues and an eigenvector.
Assume $n\ge 3$ so that the eigenvalue $\lambda_3$ exists. $(A-\lambda_3 I)x = 0$ by definition of eigenvalue and eigenvector. $A$ and $I$ commute, so we can rearrange the terms: $$\left[\prod_{j\ne 3}(A-\lambda_j I)\right](A-\lambda_3 I) x = 0.$$ Okay - after posting this, I looked at the comments. @JMoravitz already gave the answer there!
What's the probability that the last coin toss out of 50 will be tails, given 30 of the 50 are heads?
Intuitively you can think of an urn with $30$ heads and $20$ tails in it. You draw the coins one by one. The chance the last one is a tail is the same as the chance the first one is a tail and is $\frac 25$. In your calculation $49 \choose 29$ is the number of ways to have the last one heads, not tails. That is why you get $\frac 35$, not $\frac 25$. You should have done $\frac {{49 \choose 19}}{{50 \choose 20}}=\frac 25$
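Both ratios are one-liners to check:

```python
from math import comb

print(comb(49, 19) / comb(50, 20))   # 0.4 = 2/5: last toss is tails
print(comb(49, 29) / comb(50, 20))   # 0.6 = 3/5: last toss is heads
```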
How to find the MLE for $P(X>a)$ for $n$ iid normal random variables
After reading this related question on Cross Validated, I used the following reasoning. (Any comments as to the validity of this reasoning would be appreciated.) Recall that the MLE of $\theta$ is $\bar X$. Notice $p=\mathbb P(\frac {X-\theta}\sigma>\frac{a-\theta}\sigma)=1-\Phi(\frac{a-\theta}\sigma)$. Thus, by the invariance property of MLEs, where $\delta$ is the MLE of $p$ we have $$\delta=1-\Phi\left(\frac{a-\bar X}\sigma\right).$$
Finding a counterexample for limits
For the first one, take $f(x)=\tan x$ and $$g(x)= \frac \pi 2+2\pi\lfloor x\rfloor -\frac1x \to \infty;$$ then $f(g(x)) \to \infty$ but $\lim_{x\to \infty} f(x)$ doesn't exist. For the second one, consider $f(x)=0$ for $x\in \mathbb N$ and $f(x)=\lfloor x+\frac12\rfloor$ otherwise; then $\lim_{x\to 5} f(x)=5$ but $\lim_{x\to 5} f(f(x))=0$.
Linear algebra - proof that two subspaces are equal
$$W=\text{Span}\,\{(1,1)\}\;,\;\;P=\text{Span}\,\{(1,0)\}\;,\;\;U=\text{Span}\,\{(0,1)\}$$ Then $$W\cap P=W\cap U=\{0\}\,,\,\,\text{and also}\;\;W+P=W+U=\Bbb R^2$$ yet $\;P\neq U\;$ . The claim is false.
Show that a pair of equations has a unique solution
Alright. There is an evident solution at about $(9,2)$. The curves have asymptotes near $y=x$ in the first quadrant. It is fairly likely that these asymptotes do not intersect, and this can be investigated more carefully. We usually think in terms of slope and $y-x,$ so let $x = -u+v$ and $y = u+v,$ so that $y-x = 2u$ and $y+x = 2v.$ Both curves have arcs with large positive $v$ and small $u,$ I think slightly negative. So: more work is needed, but it can be done.
If $|A|=30$ and $|B|=20$, find the number of surjective functions $f:A \to B$.
To construct a surjective function from $A$ to $B$, we want to distribute the elements of $A$ into $m$ bins (each representing an element of $B$) so that each bin contains at least one element; here $n=|A|=30$ and $m=|B|=20$. In other words, we want to partition the set $A$ consisting of $n$ elements into $m$ non-empty subsets, and assign an element of $B$ to each partition class. The number of ways to partition a set of $n$ elements into $m$ non-empty subsets is called the Stirling number of the second kind and usually denoted by $$ \left\lbrace{n\atop m}\right\rbrace. $$ There is no simple closed form for this number, but the wiki page contains a number of identities to calculate it, as well as a table of some values. In particular, we have $$ \left\lbrace{30 \atop 20}\right\rbrace = 581535955088511150. $$ For each partitioning of $A$, we can associate $m!$ surjective functions. Thus, the total number of surjective functions from $A$ to $B$ is $$ \left\lbrace{30 \atop 20}\right\rbrace 20! = 1414819992961759105672223809536000000. $$
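If you prefer computing to table lookup, inclusion-exclusion counts the surjections directly, and dividing by $m!$ recovers the Stirling number; a sketch:

```python
from math import comb, factorial

def surjections(n, m):
    """Inclusion-exclusion count of surjections from an n-set onto an m-set;
    this equals S(n, m) * m! with S the Stirling number of the second kind."""
    return sum((-1)**k * comb(m, k) * (m - k)**n for k in range(m + 1))

n, m = 30, 20
total = surjections(n, m)
print(total)                    # total number of surjections
print(total // factorial(m))    # the Stirling number S(30, 20)
```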
Bloch sphere - qubit representation with application of Pauli Matrices
For example, we compute $$ X |\psi\rangle = \pmatrix{0&1\\1&0}[\cos(\theta/2)|0\rangle+e^{i\phi}\sin(\theta/2)|1\rangle]\\ = \cos(\theta/2)|1\rangle+e^{i\phi}\sin(\theta/2)|0\rangle \\ = e^{i\phi}\sin(\theta/2)|0\rangle + \cos(\theta/2)|1\rangle $$ However, this state is not in the required canonical form since the coefficient of $|0\rangle$ has a non-zero complex phase. So, we multiply the entire vector by an appropriate complex phase (which is to say we divide the whole vector by $e^{i\phi}$) to get $$ \sin(\theta/2)|0\rangle + e^{-i\phi}\cos(\theta/2)|1\rangle $$ Now, in order to convert this vector into Bloch-coordinates, we need to write this in the canonical form of $$ \cos(\hat \theta/2) |0\rangle + e^{i\hat \phi} \sin(\hat \theta/2) | 1 \rangle $$ for angles $\hat \theta, \hat \phi$. We associate this with the point $\big(\sin(\hat \theta)\cos(\hat\phi),\sin(\hat \theta)\sin(\hat \phi),\cos(\hat \theta)\big)$. To that end, we note that $$ \sin (\theta/2) = \cos((\pi - \theta)/2),\\ \cos (\theta/2) = \sin((\pi - \theta)/2),\\ e^{-i\phi} = e^{i(2\pi - \phi)} $$ And with this association, we can write $$ e^{-i\phi} X | \psi \rangle = \cos(\hat \theta/2) |0\rangle + e^{i\hat \phi} \sin(\hat \theta/2) | 1 \rangle $$ where $\hat \theta = \pi - \theta$ and $\hat\phi = (2 \pi - \phi)$.
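A short numerical check (a sketch that reads off the Bloch vector from the Pauli expectation values): conjugating by $X$ should send $(x,y,z)\mapsto(x,-y,-z)$, which is exactly what $\hat\theta=\pi-\theta$, $\hat\phi=2\pi-\phi$ says.

```python
import numpy as np

def bloch(psi):
    """Bloch vector (<X>, <Y>, <Z>) of a normalized qubit state."""
    X = np.array([[0, 1], [1, 0]])
    Y = np.array([[0, -1j], [1j, 0]])
    Z = np.array([[1, 0], [0, -1]])
    return np.real([psi.conj() @ P @ psi for P in (X, Y, Z)])

theta, phi = 0.7, 1.9            # arbitrary test angles
psi = np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])
X = np.array([[0, 1], [1, 0]])
print(bloch(psi))                # ( sin t cos p,  sin t sin p,  cos t)
print(bloch(X @ psi))            # ( sin t cos p, -sin t sin p, -cos t)
```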
How to solve log decimal
What do you mean by "solve"? If you mean to find the value of $x$ that makes that true, then the problem is easy: that value is $\log_{10} 5$. If you mean to find a decimal expression that is approximately equal to the value of $x$ that makes that true, then there isn't really anything to do more than use your calculator. There are ways to compute logarithms by hand, but there is little practical use for such, and doing so is surely beyond the scope of your course.
Open closed sets intersection
No, it is not true in general. Take $A = [0,1]$ and $B = (1,2)$. Then we have that $A\cap\overline{B} = \{1\}$ and $\overline{A\cap B} = \varnothing$. Hopefully this helps!
Types of elliptic curves
Have a look at Silverman and Tate's "Rational Points on Elliptic Curves". There, on page 22, they tell you how to transform any non-singular cubic into a Weierstrass form. The reason why you don't see much work on curves of the form $y^3=x^3+\cdots$ is that we first bring them to a Weierstrass form and then work there.
Apostol - Analytic Number Theory, Chapter 3 problem 4a
You definitely seem to be on the right path! As it happens, $\sum_{n\le x} \mu(n)$ gets as large as $\sqrt x$ in size infinitely often, so your proposed claim isn't valid. I suspect the place you'll find extra leverage is by writing $([\frac xn]+O(1))^2$ as $[\frac xn]^2 + O(\frac xn)$ rather than as $[\frac xn]^2+O(x)$.
Can the concept of divisibility in a ring be defined on non-commutative rings?
You can, but you'd have to specify sides. What I mean is that "$a$ divides $b$ on the right" may not mean the same thing as "$a$ divides $b$ on the left." You could say $a|_r b$ if $a=cb$ for some $c$, and $a|_\ell b$ if $a=bc$ for some $c$. This would correspond to containment between principal left ideals and containment between principal right ideals. A good example is trying to factor polynomials in $\mathbb H[x]$. I don't have an example at hand, but I'm pretty sure I've seen an example of such a polynomial that was divisible on the left by a linear factor $x-\alpha$ but not divisible on the right by $x-\alpha$. Another way to get an example of interesting things happening is to take the free algebra $\mathbb Q\langle x,y,z\rangle$ modulo the ideal generated by $xy-z$, so that $x$ divides $z$ on the left, but not on the right.
Why are these two quotients equal?
Hint: the principal ideal generated by a polynomial $p(x)$ in $\mathbb C[x]$ is the set of its multiples, $$(p) = \{ f\in \mathbb C[x] : p \mid f\}.$$ Can you find a polynomial $f$ such that $f\in (x^2) \setminus (x^2-x^3)$? (Look at the factorizations.) If $(p)\ne(q)$ then $\mathbb C[x]/(p) \ne \mathbb C[x]/(q)$.
What is the maximum total displacement needed for sorting a list?
I think (one) most messed-up configuration is given by $n,n-1,n-2,\ldots, 1$. Here $1$ and $n$ have to travel $n-1$ positions, $2$ and $n-1$ have to travel $n-3$ positions, and so on. So I think if $n$ is even the maximum is $2\cdot((n-1)+(n-3)+\cdots +1)=2\cdot(n/2)^2$. Please correct me if I am wrong.
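A brute-force experiment for small $n$ (a sketch) is consistent with this; the odd case appears to fit the same expression $2\lfloor n/2\rfloor\lceil n/2\rceil=\lfloor n^2/2\rfloor$:

```python
from itertools import permutations

def max_total_displacement(n):
    """Exhaustive maximum of sum |i - pi(i)| over all permutations of n items."""
    return max(sum(abs(i - p[i]) for i in range(n))
               for p in permutations(range(n)))

for n in range(2, 8):
    print(n, max_total_displacement(n), 2 * (n // 2) * ((n + 1) // 2))
```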
Is this a characterization of Uniformly Convex spaces?
For a normed space, the following are equivalent: (i) the only geodesic between any pair of two points is the affine segment; (ii) the space is strictly convex. See Prop. 7.2.1 on p. 180 in Metric Spaces, Convexity and Nonpositive Curvature by A. Papadopoulos for this equivalence and quite a few others. On the other hand, Day showed that there are (reflexive) strictly convex spaces which are not even isomorphic to a uniformly convex space, so the converse you ask about is not true (unless the space is finite-dimensional).
Compactness of $(a,b)$ in $\Bbb{R}$.
The $(a_i,b_i)$ are indeed not compact. That does not prevent their intersection from being compact. Also, there is no "initial assertion that each of $(a_k,b_k)$ cannot be covered by a finite number of open sets" anywhere in your argument.
Solve the equation $\sqrt{\sin (x) - \sqrt{\sin(x) +\cos(x) }}=\cos(x)$
From $\sqrt{\sin x - \sqrt{\sin x + \cos x}} = \cos x$, as the LHS is non-negative, we must have $\cos x \geqslant 0$. It is also easy to see that we must have $\sin x \geqslant 0$ for the LHS to exist. Consider then the case $\cos x > 0$. This gives $\sin x + \cos x > \sin x$ and hence $\sqrt{\sin x + \cos x }> \sqrt {\sin x} > \sin x \implies $ the LHS is the root of a negative number. Hence this cannot hold true. The only remaining case is $\cos x = 0$, which in fact satisfies the equation with $\sin x = 1$, so we must have $x = \dfrac{\pi}2 + 2n \pi $ with $n \in \mathbb Z$.
Hausdorff or weakly Hausdorff may apply
For $n\in\mathbb N$ define $$V_n=\bigcap\{U_i:i\le n\text{ and }x\in U_i\}.$$ Then $V_n$ is an open neighborhood of $x$, so we can choose $a_n\in A\cap V_n$. I claim that the sequence $a_1,a_2,a_3,\dots,a_n,\dots$ converges to $x$. Let a neighborhood $W$ of $x$ be given. Choose $i$ so that $x\in U_i\subseteq W$. Then, for all $n\ge i$, we have $$a_n\in V_n\subseteq U_i\subseteq W.$$
Calculate the limit: $\lim_{n\rightarrow\infty}\left(\frac{1^{2}+2^{2}+...+n^{2}}{n^{3}}\right)$
For variety, $$\begin{align} \lim_{n \to \infty} \frac{1^2 + 2^2 + \ldots + n^2}{n^3} &= \lim_{n \to \infty} \frac{1}{n} \left( \left(\frac{1}{n}\right)^2 + \left(\frac{2}{n}\right)^2 + \ldots + \left(\frac{n}{n}\right)^2 \right) \\&= \int_0^1 x^2 \mathrm{d}x = \frac{1}{3} \end{align} $$
Calculating an integral using complex analysis
Presumably $n$ is an integer. The integral is the real part of $$\int_0^{2\pi}e^{\cos(t)}e^{int-i\sin(t)}\,dt=\int_0^{2\pi}e^{int}e^{e^{-it}}\,dt=\int_0^{2\pi}e^{int}\sum_k\frac1{k!}e^{-ikt}\,dt= \begin{cases}\frac{2\pi}{n!},&(n\ge0), \\0,&(n<0).\end{cases}$$
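A numerical spot check (sketch) of $\int_0^{2\pi}e^{\cos t}\cos(nt-\sin t)\,dt=2\pi/n!$ for $n\ge0$ and $0$ for $n<0$:

```python
import math
from scipy.integrate import quad

for n in range(-2, 5):
    # Real part of the contour integrand, integrated over one period.
    val, _ = quad(lambda t: math.exp(math.cos(t))
                             * math.cos(n * t - math.sin(t)), 0, 2 * math.pi)
    expect = 2 * math.pi / math.factorial(n) if n >= 0 else 0.0
    print(n, round(val, 10), round(expect, 10))
```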
Integral involving exponentials of cosh functions
By Fubini's theorem: $$I_p = \int\limits_0^{+\infty}\,\int\limits_{-\infty}^{+\infty}\,e^{-p\,[c+(c^2+1)\cosh x]}\,\mathrm{d}x\,\mathrm{d}c =\int\limits_{-\infty}^{+\infty}\,\int\limits_{0}^{+\infty}\,e^{-p\,[c+(c^2+1)\cosh x]}\,\mathrm{d}c\,\mathrm{d}x.$$ We can solve the integral with respect to $c$ analytically: $$\int\limits_{0}^{+\infty}\,e^{-p\,[c+(c^2+1)\cosh x]}\,\mathrm{d}c = -\dfrac{\sqrt{{\pi}}\mathrm{e}^{\frac{p}{4\cosh\left(x\right)}-p\cosh\left(x\right)}\left(\operatorname{erf}\left(\frac{\sqrt{p}}{2\sqrt{\cosh\left(x\right)}}\right)-1\right)}{2\sqrt{p\cosh\left(x\right)}}$$ I am afraid that we can't solve the second integral analytically.
Linearized Pitot system
We have $$ P -P_s e^{\frac{h g M}{RT}} = 0,\\ M - \sqrt{\frac{2}{\gamma-1}\biggr(\left(\frac{P_0}{P(h)}\right)^\frac{\gamma}{\gamma-1}-1\biggr)}= 0, $$ i.e. $$ f_1(P,M,h)=0,\\ f_2(P,M,h)=0. $$ Then $$ \frac{df_1}{dh}=\frac{\partial f_1}{\partial P}P'+\frac{\partial f_1}{\partial M}M'+\frac{\partial f_1}{\partial h}=0,\\ \frac{df_2}{dh}=\frac{\partial f_2}{\partial P}P'+\frac{\partial f_2}{\partial M}M'+\frac{\partial f_2}{\partial h}=0, $$ and solving for $P', M'$ we have $$ P' = \frac{2 g (\gamma -1)^2 M(h) P(h) \left(T(h)-h T'(h)\right) \sqrt{\frac{\left(\frac{P_0}{P(h)}\right)^{\frac{\gamma }{\gamma -1}}-1}{\gamma -1}}}{T(h) \left(\sqrt{2} g \gamma h \left(\frac{P_0}{P(h)}\right)^{\frac{\gamma }{\gamma -1}}+2 (\gamma -1)^2 R T(h) \sqrt{\frac{\left(\frac{P_0}{P(h)}\right)^{\frac{\gamma }{\gamma -1}}-1}{\gamma -1}}\right)},\\ M' = -\frac{\sqrt{2} g \gamma M(h) \left(T(h)-h T'(h)\right) \left(\frac{P_0}{P(h)}\right)^{\frac{\gamma }{\gamma -1}}}{T(h) \left(\sqrt{2} g \gamma h \left(\frac{P_0}{P(h)}\right)^{\frac{\gamma }{\gamma -1}}+2 (\gamma -1)^2 R T(h) \sqrt{\frac{\left(\frac{P_0}{P(h)}\right)^{\frac{\gamma }{\gamma -1}}-1}{\gamma -1}}\right)}. $$
Determining similar matrices
Given a 3 by 3 matrix with 3 distinct eigenvalues, you know you will have 3 eigenvectors. You find these eigenvectors by computing $\operatorname{null}(A-\lambda I)$ for $\lambda=3,-3,0$ in your case, giving you three linearly independent vectors. You glue these eigenvectors together to form a matrix $M$ with $D=M^{-1}AM$, where $D$ is diagonal. So in your case, $M=[v_{3}\;v_{-3}\;v_{0}]$, where each $v$ is an eigenvector.
Finding unit vector (with sum of components also zero) with smallest cosine distance with another given vector
I suppose that $\beta_i$ is the given vector. There are (at least) two solutions. The first, easiest one is geometrical and is based on the fact that $\alpha_i$ is restricted to a hyperplane through the origin with normal vector $n = (1,1,\ldots, 1)$; note that the scalar product $\alpha \cdot n = 0$. First, starting at the endpoint of the vector $\beta$ we descend a perpendicular until we reach the hyperplane, i.e. we determine a point $\beta - rn$ such that $\sum_i{(\beta - rn)_i} = 0$. This gives us a value for $r$, namely $r = \sum_i{\beta_i}/d$, where $d$ is the dimension of the vector space. Now $\beta - n\sum_i{\beta_i}/d$ is potentially the vector $\alpha$, except that its length is not $1$, so all we have to do is divide this vector by its norm $\left\| \beta - n\sum_i{\beta_i}/d\right\| = \sqrt{\sum_j{\left(\beta_j - \sum_i{\beta_i}/d\right)^2}}$, so finally: $$\alpha_i = \frac{\beta_i-\sum_j{\beta_j}/d}{\sqrt{\sum_j{\left(\beta_j - \sum_k{\beta_k}/d\right)^2}}}.$$
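In code the recipe is just "subtract the mean, then normalize"; a numpy sketch with a made-up $\beta$:

```python
import numpy as np

def closest_unit_zero_sum(beta):
    """Project beta onto the hyperplane sum(alpha) = 0, then normalize."""
    a = beta - beta.mean()          # subtract (sum beta_i)/d from each entry
    return a / np.linalg.norm(a)

beta = np.array([3.0, 1.0, -2.0, 5.0])
alpha = closest_unit_zero_sum(beta)
print(alpha.sum(), np.linalg.norm(alpha))   # ~0 and 1
```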
Entire function with limited growth
One version of the Lindelöf theorem claims that if $|g(iy)| \le M$ and $|g(z)| \le C_ae^{aR}$ for $\Re z \ge 0, |z| \le R$, where $|a| < \frac{\pi}{2}$, then $|g(z)| \le M^{c(z)}, c(z)=\frac{a \Re z}{\cos a}, \Re z \ge 0$. (This is classic and follows by first proving a boundedness result for an angle less than $\pi$ when we are given bounds on the boundary of the angle, and then applying this to $e^{-bz/\cos a}g(z)$ in the first and fourth quadrants respectively with $b >a, b \to a$.) In particular, if the result above holds with $a \to 0, a>0$, it follows that $|g(z)| \le M$ in the right half plane $\Re z \ge 0$.

Now assuming $f$ as in the OP (that is, $f$ has order at most $(\frac{1}{2},0)$, or in other words $f$ has order strictly less than $1/2$, or it has order $1/2$ but is of minimal type) is bounded on the real line (note that it is enough to assume boundedness on either the positive or the negative half-line), we will show that $f$ is constant. Assume boundedness on, say, the negative real half-line. We apply the above with $g(z)=f(z^2)$, which is even and satisfies $|g(iy)| \le M$ (while if $f$ is bounded on the positive reals we just take $g(z)=f(-z^2)$), and the hypothesis in the OP implies that $|g(z)| \le C_ae^{aR}$, $\Re z \ge 0, |z| \le R$ for arbitrary $a>0$, because $\log M_f(R) = o(\sqrt{R})$ is clearly equivalent to $\log M_g(R) = o(R)$. By the Lindelöf theorem quoted, it follows that $g$ is bounded in the right half-plane, and since $g$ is even it follows that $g$ is bounded in the plane, hence it is constant and so is $f$. Done!

A non-trivial example of such an $f$ is $\Pi_{n \ne 0}(1-z/n^3)$, which has order $1/3$ by simple results in the theory of entire functions (the number of zeroes in a disc of radius $R$ is ~$R^{1/3}$, and that being a non-integral power immediately implies that the function has order $1/3$).

Edit later - to clarify a bit what happens (per comments): the idea is that the lower the order of growth of an entire nonconstant function, the more constrained its boundedness-at-infinity properties are. This may seem a bit unintuitive (as lower order of growth means the function grows slower, after all), but it should become intuitive if we think of the order of growth in terms of its non-exceptional values in the Picard sense (so all complex numbers but at most one): the lower the order, the "fewer" (density per area wise, as there are infinitely many otherwise, except for polynomials) solutions $f(z)=a$ has, so $f$ must spread out faster.

Polynomials go to infinity; functions of order up to $(1/2,0)$ cannot be bounded on any ray (though $\infty$ is still an essential singularity, so all but at most one values are $f(z_n), |z_n| \to \infty$); functions of order up to $(1,0)$ cannot be bounded on any line; up to order $(2,0)$ they cannot be bounded on two perpendicular lines at the same time, etc.; in infinite order, there are entire nonconstant functions that are bounded on every line through $0$ (though of course non-uniformly, by Liouville). Examples like $\cos \sqrt z$, of order $1/2$ and finite non-zero type, which is bounded on the positive reals; $\cos z$, positive type order $1$, bounded on the reals; and $\cos z^2$, positive type order $2$, bounded on both the real and imaginary axes, show that these types of results (that come under Phragmén-Lindelöf or just Lindelöf theorems) are sharp.
Adapting the Simplex method to use the distance function as the target.
See for instance: Philip Wolfe, The Simplex Method for Quadratic Programming, Econometrica, Vol. 27, No. 3, (Jul., 1959), pp. 382-398 This is an extension to the Simplex method for a standard Quadratic Programming (QP) problem, so a slightly more general problem than you are stating.
Chebyshev Inequality - How is the following inferred ??
The way to see that is to remember he wants to know the fraction of periods incurred in losses, i.e. $x_i \le 0$. If the mean return is 8%, then in order to have a loss, the deviation $a$ has to be equal to or greater than 8. So $a = 8$ here. That is also the reason he says the fraction of periods is no more than 14.1% for either a loss, $x_i \le 0$, or for very good returns, $x_i \ge 16$.
Problem 2.36 in Folland. Is my solution correct?
I don't understand the sentence So, we know convergence in $L^1$ is equivalent to: $\mu(E_n) < \int |f| < \infty.$ Anyway the main idea is correct. Convergence in $L^1$ implies that a subsequence $\{ \chi_{E_{n_k}} \}$ converges to $f$ pointwise a.e. The second part of the proof is a bit confusing. You cannot define $f=\chi_E$ and anyway your set $E$ is not correct. Since $\{ \chi_{E_{n_k}} \}$ converges to $f$ pointwise a.e., take a point $x$ such that $\chi_{E_{n_k}}(x)\to f(x)$. Since $\chi_{E_{n_k}}(x)$ only takes values $0$ and $1$ and the limit exists, necessarily $\chi_{E_{n_k}}(x)=0$ for all $k$ large, in which case $\chi_{E_{n_k}}(x)=0\to 0$, or $\chi_{E_{n_k}}(x)=1$ for all $k$ large in which case $\chi_{E_{n_k}}(x)=1\to 1$. Hence, $f(x)$ can only be $0$ or $1$, which shows that $f$ is the characteristic function of a set.
Group of order $p^2+p $ is not simple
Since the proof in the duplicate question is partly buried in the comments, there is no harm in repeating it. Assuming that $p$ is prime, if $G$ is simple, then $G$ has $p+1$ Sylow $p$-subgroups, which must intersect trivially, so we have a total of $p^2-1$ elements of order $p$, the identity, and $p$ other elements. Let $g$ be one of these other elements. Then $g$ cannot be centralized by an element of order $p$, since the Sylow $p$-subgroups are self-normalizing, so $|C_G(g)| \le p+1$. On the other hand, $g$ has at most $p$ conjugates, so it must have exactly $p$ conjugates, and $|C_G(g)|=p+1$. But then $C_G(g)$ must consist exactly of the identity and the elements not of order $p$, so it is a normal subgroup of $G$, contradicting simplicity.
Why is $\langle S\mid R\cup R'\rangle $ a presentation for $G/N(R')$, where $G$ is a group with presentation $\langle S\mid R\rangle?$
Notice that $F(S)$ is a group, $N(R) \trianglelefteq F(S)$, $N(R \cup R') \trianglelefteq F(S)$, and $N(R) \subseteq N(R \cup R')$. By the third isomorphism theorem, $$ \frac{F(S)}{N(R \cup R')} \cong \frac{F(S)/N(R)}{N(R \cup R')/N(R)} $$ or, as presentations, $$ \langle S \mid R \cup R' \rangle \cong \frac{\langle S \mid R \rangle}{N(R \cup R')/N(R)} \text{.} $$ Are you able to show that the only part of $\langle S \mid R \rangle$ that is actually sent to the identity by the quotient on the right is $N(R')$? (In particular, the implicit quotient in $\langle S \mid R \rangle$ has already sent all of $N(R)$ to the identity, so only a subset of $N(R')$ remains to be sent there.)
Compact subset in two different topologies
Compactness depends on the topology, so if $X\subset Y$ as a set, but not as a subspace, you cannot guarantee that $K$ will be compact with both topologies. For example (the one suhogrozdje gave in the comments), take $X=\mathbb{R}$ with the Euclidean topology and $Y=\mathbb{R}$ with the discrete topology. Then $K=[0,1]$ is compact in $X$, but not compact in $Y$ (can you see why?). Conversely, if $X$ is a subspace of $Y$, then $K$ is compact in $Y$: for any open cover of $K$ in $Y$, you can restrict it to $X$, where you have a finite subcover by sets of the form $X\cap U$ with $U\in\tau_2$; then just use the corresponding sets $U$ to finitely cover $K$ in $Y$. Compactness is independent of the ambient space, i.e., requiring $X$ and $Y$ to be compact doesn't actually add anything, unless you demand some extra properties on $K$ like being closed in $Y$ (a closed subset of a compact space is always compact).
Clarification on step in proof of IBP
Absolutely continuous functions are, in particular, continuous, so $$ \int_a^b|f'g|\leq\max_{x\in [a,b]}|g(x)|\int_a^b|f'|<\infty $$
Is "basis times square matrix" a new basis?
Your question is open to several interpretations (at least the current revision), and we could write quite a lot about each of them. You are using the notation $BS$, which is (as far as I can say) most frequently used to denote the matrix product. But you also write that $B$ is a basis. A basis is not a matrix. So we could dismiss your question as nonsensical, or we could try to look at whether we can interpret it in a way in which it makes sense.

Matrix interpretation: Let us assume that you are working with the vector space $V=F^n$ over a field $F$. Then the vectors of this vector space are $n$-tuples. If we have some basis $\vec b_1,\dots,\vec b_n$, then we can put the vectors of this basis in a matrix $B$. Now it makes sense to take the product $BS$ of the two $n\times n$ matrices, and your question can be interpreted as asking whether the columns of the matrix $BS$ again form a basis of $F^n$. (I chose column vectors rather than row vectors, since the wording of your question seems to indicate that you are used to working with column vectors.) The notion of invertible matrix is useful for this. If $B$ is an $n\times n$ matrix over a field $F$, then the following conditions are equivalent:
- The matrix $B$ is invertible, i.e., there exists a matrix $A$ such that $AB=BA=I$.
- The determinant of $B$ is non-zero, i.e., $\det B\ne0$.
- The matrix $B$ has full rank, i.e., $\operatorname{rank}(B)=n$.
- The rows of the matrix $B$ form a basis of $F^n$.
- The columns of the matrix $B$ form a basis of $F^n$.
You can find a few more equivalent conditions in the Wikipedia article I linked above. With this in mind, your question can be understood as: when is the product of an invertible matrix $B$ and a square matrix $S$ again an invertible matrix? The answer is that this is true if and only if $S$ is invertible. Indeed, we have $$S=B^{-1}(BS),$$ so $S$, as a product of two invertible matrices, is invertible. You can find several posts on this site showing that a product of invertible matrices is invertible (or some equivalent statement), for example:
- Does the product of two invertible matrix remain invertible?
- ${\rm rank}(BA)={\rm rank}(B)$ if $A \in \mathbb{R}^{n \times n}$ is invertible?
- Prove that if a product $AB$ of $n\times n$ matrices is invertible, so are the factors $A$ and $B$.

Linear combination of columns: From the formulation of your question it seems that you have noticed a useful observation about multiplication of matrices: columns of the matrix $BS$ are linear combinations of the columns of $B$, and the coefficients of these linear combinations are given by the entries of $S$. (A similar observation for rows is described here.) With this in mind we can make your question into a question which makes sense for every finite-dimensional vector space $V$, not only $V=F^n$. Suppose $\vec b_1,\dots,\vec b_n$ is a basis of $V$ and let $S$ be an $n\times n$ matrix. Let us define vectors $\vec c_1,\dots,\vec c_n$ by $$\vec c_k = \sum_{i=1}^n s_{ik} \vec b_i.$$ For which matrices $S$ is $\vec c_1,\dots,\vec c_n$ a basis? If $V=F^n$, then this is precisely the question about matrices, since the vectors defined here are precisely the columns of the matrix $BS$. However, this formulation makes sense for any basis. The answer is again that this is true if and only if $S$ is invertible. For more about this, you can have a look at the matrix representing a change of basis.
Linear combination of rows: We could ask a similar question, putting the vectors of the basis into the rows of a matrix. This does not change the answer, but in this case we can make a simple argument like this: we are basically asking whether the linear transformation $\vec x\mapsto \vec x S$ transforms a basis into a basis. (If we denote by $\vec b_1,\dots,\vec b_n$ the rows of $B$, then the rows of $BS$ are $\vec b_1S,\dots,\vec b_nS$.) The linear map $\vec x\mapsto \vec xS$ maps a basis to a basis if and only if it is a linear isomorphism, and an equivalent condition is that $S$ is an invertible matrix.
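As a sanity check of the claim (an illustration of my own, not part of the original answer), here is a small Python sketch using sympy; the matrices `B`, `S` and `S2` are arbitrary examples chosen to show both cases.

```python
from sympy import Matrix

# Columns of B form a basis of Q^3 (B is invertible, upper triangular with 1s).
B = Matrix([[1, 0, 1],
            [0, 1, 1],
            [0, 0, 1]])

# An invertible S: the columns of B*S should again form a basis.
S = Matrix([[2, 1, 0],
            [0, 1, 0],
            [0, 0, 3]])

# A singular S2 (first two rows proportional): columns of B*S2 should not.
S2 = Matrix([[1, 2, 0],
             [2, 4, 0],
             [0, 0, 1]])

def columns_form_basis(M):
    """Columns of an n x n matrix form a basis iff the matrix has full rank."""
    return M.rank() == M.rows

print(columns_form_basis(B * S))   # True,  since S is invertible
print(columns_form_basis(B * S2))  # False, since det(S2) = 0
```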
Why does $\arccos(2)$ have a value in the complex numbers even though its domain is $[-1,1]$?
If you are working within $\mathbb R$ then $\arccos(2)$ does not exist. But in the complex plane there are infinitely many solutions of the equation $\cos(z)=2$.
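To make this concrete (a standard computation, added here for illustration): writing $\cos z=\frac{e^{iz}+e^{-iz}}{2}$ and setting $w=e^{iz}$, the equation $\cos z=2$ becomes $w^2-4w+1=0$, so $w=2\pm\sqrt3$, and therefore $$z=2k\pi\pm i\ln\left(2+\sqrt3\right),\qquad k\in\mathbb Z,$$ using $\ln(2-\sqrt3)=-\ln(2+\sqrt3)$. Note that none of these solutions is real.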
Question on proof of Euler Totient function property.
The proof is simply counting the number of elements of $C_n$ in two different ways. We know there are $n$ elements of $C_n$, and we know that the sets $A_d=\{g \in C_n \mid o(g)=d\}$ partition $C_n$, i.e. each $g\in C_n$ is a member of one and only one $A_d$. And we know that $|A_d|=0$ unless $d$ is a factor of $n$. So $\displaystyle \sum_{d|n}|A_d| = n$. To see why $|A_d| = \varphi(d)$, think of the elements of $C_n$ as powers of a generator $a$, so $C_n=\{a, a^2, a^3, \dots, a^n\}$. If $e=\frac{n}{d}$ then $a^e$ has order $d$, and each of the $d$ powers $a^{ke}$ with $1 \le k \le d$ also has order $d$ unless $\gcd(k,d) > 1$ (in which case $a^{ke}$ has the smaller order $d/\gcd(k,d)$). So $|A_d| = |\{k \mid 1\le k \le d,\ \gcd(k,d)=1\}| = \varphi(d)$.
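For a quick numerical check of this counting argument (my own illustration, not part of the answer), one can model $C_n$ additively as $\mathbb Z/n\mathbb Z$:

```python
from math import gcd

def phi(d):
    """Euler's totient: the number of 1 <= k <= d with gcd(k, d) == 1."""
    return sum(1 for k in range(1, d + 1) if gcd(k, d) == 1)

n = 36
counts = {}                      # counts[d] = |A_d|, number of elements of order d
for g in range(1, n + 1):        # model C_n additively as Z/nZ (g = n is the identity)
    d = n // gcd(g, n)           # additive order of g
    counts[d] = counts.get(d, 0) + 1

assert all(counts[d] == phi(d) for d in counts)  # |A_d| = phi(d)
assert sum(counts.values()) == n                 # sum over d | n of phi(d) equals n
print(counts)
```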
Initial velocity of a ball which is thrown
You've ignored the multiplication of the initial velocity by the time of flight. The total distance above ground is given by $$3+14t-5t^2,$$ which must then be set equal to zero and solved, keeping only the physically sensible root. Gravitational acceleration contributes a change in distance of $$-5t^2$$ if we use 1 significant figure. The only thing affecting the gravitational term is gravity itself, which has no dependence on velocity (at least at this scale). The initial velocity changes the distance by a separate term, $$v_it,$$ which does not depend on gravity; hence these terms are separate in the equation. The constant term is simply the height at time zero.
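For completeness (my own arithmetic, using only the numbers already in the stated equation): the quadratic formula applied to $5t^2-14t-3=0$ gives $$t=\frac{14\pm\sqrt{14^2+4\cdot 5\cdot 3}}{2\cdot 5}=\frac{14\pm 16}{10},$$ so $t=3$ or $t=-0.2$; common sense discards the negative root, leaving a time of flight of $3$ seconds.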
Prove Differentiator is Linear and Time-Invariant
For sufficiently smooth functions $$f:\quad {\mathbb R}\to {\mathbb C},\qquad t\mapsto f(t)$$ ("time signals") we have the operations $D:\>f\mapsto f'$ and $T_a:\> f\mapsto f_a$, whereby $f_a$ is defined by $f_a(t):=f(t-a)$. Both these operations are obviously linear: $D(\lambda f+\mu g)=\lambda\> Df+\mu\> Dg$, and similarly for $T_a$. It is claimed that $D$ is translation invariant, which is the same thing as stating that $T_a$ and $D$ commute, whatever $a\in{\mathbb R}$: $$D\circ T_a=T_a\circ D\ .$$ Proof. Let $g:=Df$. Then $g(t)=f'(t)$ for all $t$, hence $$g_a(t)=f'(t-a)=\lim_{h\to0}{f(t-a+h)-f(t-a)\over h}=\lim_{h\to0}{f_a(t+h)-f_a(t)\over h}=f_a'(t)$$ for all $t\in{\mathbb R}$. But this is saying that $$T_aD\>f=T_a\>g= D\>f_a=DT_a\>f\ .$$ Since this is valid for all $f$ the claim follows.
Is $1992! - 1$ prime?
No. The smallest prime factors are $3449$ and $8627$ (found with Mathematica). For what it's worth: $$ \{n\in\mathbb{N}:2\le n\le2000\text{ and }n!-1\text{ is prime }\}=\\\{3,4,6,7,12,14,30,32,33,38,94,166,324,379,469,546,974,1963\} $$ Should have thought of checking OEIS. This is sequence A002982
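For what it's worth, here is a sketch (my own, not from the answer) of how such factors can be found by plain trial division: for a candidate prime $p$, compute $1992!\bmod p$ iteratively, so the huge number $1992!-1$ never has to be handled directly.

```python
from sympy import primerange

def divides_factorial_minus_one(p, n=1992):
    """True iff the prime p divides n! - 1 (computes n! mod p iteratively)."""
    f = 1
    for i in range(2, n + 1):
        f = f * i % p
    return f == 1

# Every prime p <= 1992 divides 1992!, so 1992! - 1 = -1 (mod p) for those p;
# the scan therefore only ever reports factors above 1992.
for p in primerange(2, 10**4):
    if divides_factorial_minus_one(p):
        print(p)  # per the answer above, this prints 3449
        break
```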
If $h(x) = \dfrac{f(x)}{g(x)}$, then find the local minimum value of $h(x)$
Your calculation is fine, except for the interpretation. Note that $$h(-\sqrt{2-\sqrt3}) = h(\sqrt{2+\sqrt3}) = 2\sqrt2.$$ As seen from the plot, $2\sqrt2$, attained at $-\sqrt{2-\sqrt3}$ and $\sqrt{2+\sqrt3}$, is the value at the two local minima, while $-2\sqrt2$ is the value at the two local maxima.
Is the ring $R$ a commutative ring with identity in the Freyd–Mitchell embedding theorem?
No, the ring $R$ in question looks something like $\operatorname{End}(P)$ for some projective generator $P$, and has no reason to be commutative in general (although it does have a unit!).
Gradient of norm of embedding
I assume that $(g, \nabla)$ are the metric and connection induced from the embedding $\varphi$. Note that $\nabla \| \varphi\|^2 = 2 \varphi ^\top$, the tangential part of $\varphi$. Thus $$\langle \varphi , \nabla \|\varphi\|^2 \rangle = 2\|\varphi^\top\|^2 \geq 0.$$ (This is true for any immersion $\varphi$.) Remark: To show that $\nabla \| \varphi\|^2 = 2 \varphi ^\top$, let $x\in M$ and let $e_1, \dots, e_{n-1}$ be an orthonormal basis of $T_xM$. Then in general, for any smooth function $f:M \to \mathbb R$, $$\nabla f = \sum_{i=1}^{n-1} (\nabla_{e_i} f) \ e_i.$$ Hence $$\nabla \|\varphi\|^2 = \sum (\nabla_{e_i} \|\varphi\|^2) \ e_i = 2 \sum \langle \varphi, e_i\rangle \ e_i = 2\varphi^\top. $$
How to prove $\bigl(\frac{\sin x}{ x}\bigr)^{2} + \frac{\tan x }{ x} >2$ for $0 < x < \frac{\pi}{2}$
$$\left(\frac{\sin x}{x}\right)^2 + \frac{\tan x}{x} = \left(\frac{x - \frac{x^3}{6} + \frac{x^5}{120} - \ldots}{x}\right)^2 + \frac{x + \frac{x^3}{3} + \frac{2x^5}{15} + \ldots}{x} = \\ \left(1 - \frac{x^2}{6} + \frac{x^4}{120} - \ldots\right)^2 + \left(1 + \frac{x^2}{3} + \frac{2x^4}{15} + \ldots\right) \geq \\ \left(1 + 2 \left(-\frac{x^2}{6} + \frac{x^4}{120} - \ldots \right)\right) + 1 + \frac{x^2}{3} + \frac{2x^4}{15} + \ldots = \\ 2 + x^2 \left(-\frac{2}{6} + \frac{1}{3}\right) + x^4 \left(\frac{2}{120} + \frac{2}{15} \right) + \ldots = 2 + \frac{9}{60}\,x^4 + \ldots > 2,$$ where the inequality in the middle uses $(1+u)^2 \geq 1+2u$.
When a ray of a horocycle passing through the origin intersects the $y$ axis.
I have been thinking about your problem (which I think is more difficult than you should be required to solve). I will solve it using the Poincaré half-plane model ( https://en.wikipedia.org/wiki/Poincar%C3%A9_half-plane_model ). If your level is as high as it should be to solve this problem, then you should also be able to transfer the argument to the model of the hyperbolic plane you are using. The main points of the Poincaré half-plane model are: the model represents the complete hyperbolic plane by the upper half-plane; hyperbolic lines are represented by rays and half-circles orthogonal to the x axis; horocycles are represented by circles tangent to the x axis, their hyperbolic centre being the point where they touch the x axis; hypercycles ( https://en.wikipedia.org/wiki/Hypercycle_%28hyperbolic_geometry%29 ) are represented by lines and circle arcs that are not orthogonal to the x axis; the model is conformal, so angles in the model have the same size as angles in the hyperbolic plane.

The problem splits into a few sub-proofs: proof that the line $\Omega S$ is limiting parallel to line $y$; proof that if $P_xP < S_xS$ then the line $\Omega P$ will intersect line $y$; proof that if $P_xP > S_xS$ then the line $\Omega P$ will not intersect line $y$.

Proof that the line $\Omega S$ is limiting parallel to line $y$. In the Poincaré half-plane: assume $\Omega$ to be the point $(z,0)$ and $O$ to be the point $(z,2a)$. Then: the line $\Omega O$ is the vertical ray $x=z,\ y > 0$; the hyperbolic line $y$ is the half-circle through $(z,2a)$ centered at $(z,0)$, which meets the x axis at $(z \pm 2a,0)$; the horocycle $h$ through $O$ centered around $\Omega$ is the circle $h$ with centre $(z,a)$. Then we need to find the point $S$. For $S_xS$ to be orthogonal to $\Omega O$, the half-circle representing it must be centered at $(z,0)$. To be at an angle $\pi/4$ to the horocycle $h$, it must cut the horocycle $h$ at an angle $\pi/4$ (the line $\Omega S$ is orthogonal to the circle, and the model is conformal). Therefore the hyperbolic line $S_xS$ must cut the horocycle $h$ where the Euclidean rays $y= t,\ x= z \pm t,\ t > 0$ cut the circle $h$.
So the point $S$ is one of the points $(z \pm a, a)$ (let's call the point $(z + a, a)$ $S^+$ and the point $(z - a, a)$ $S^-$; one of these points is $S$, and as in your sketch there is also a point $S$ below $O\Omega$). Then the hyperbolic line $\Omega S$ is one of the half-circles centered at $(z \pm a, 0)$ going through $\Omega = (z, 0)$. The half-circle $\Omega S$ meets the half-circle $y$ at $(z \pm 2a, 0)$, which is on the x axis, so the hyperbolic line $\Omega S$ is limiting parallel to the hyperbolic line $y$. This completes the first part of the proof.

Proof that if $P_xP < S_xS$ then the line $\Omega P$ will intersect line $y$. The locus of points at the same distance from $S_xS$ as $\Omega$ is a hyperbolic hypercycle $e$, represented in the Poincaré half-plane by two Euclidean rays starting at $\Omega$: one through $S^-$, the other through $S^+$. Hyperbolic lines $\Omega P$ with $P_xP < S_xS$ will cut the horocycle $h$ on the arc $S^- O S^+$. They are represented in the Poincaré half-plane model by circles centered on the x axis (at the point equidistant from $\Omega$ and $P$), and these circles will cut the half-circle $y$, so these hyperbolic lines $\Omega P$ will cut the hyperbolic line $y$. This completes the second part of the proof.

Proof that if $P_xP > S_xS$ then the line $\Omega P$ will not intersect line $y$. Hyperbolic lines $\Omega P$ with $P_xP > S_xS$ will cut the circle $h$ on the arc $S^- \Omega S^+$. They are represented in the Poincaré half-plane model by circles centered on the x axis (at the point equidistant from $\Omega$ and $P$), and these circles will not cut the half-circle $y$, so these hyperbolic lines $\Omega P$ will be ultraparallel to the hyperbolic line $y$. This completes the third part of the proof.

Now, how to translate this to the model of the hyperbolic plane you use, I have no idea; maybe add a standard proof that the Poincaré half-plane model is a representative of the hyperbolic plane. Can you tell me which book you are using for your study? I only know about two books that use "free flow" hyperbolic planes, but neither of them discusses horocycles. GOOD LUCK
What can be a function where $x \neq 2, y \neq 1$ for all $x,y$?
Hint: It looks like the plot of $xy=1$, but translated. Also, this looks like homework, so please give it a try first.
Combinatorial Proof of Binomial Coefficient Identity, summing over the upper indices
Take $n$ objects $1,2,3,\dots, n$, and choose $k$ of them in $\binom{n}{k}$ ways. Alternatively, suppose the $r$-th smallest selection is in position $j$. Then there are $\binom{j-1}{r-1}$ ways to choose the $r-1$ smaller objects and $\binom{n-j}{k-r}$ ways to choose the $k-r$ larger objects. Now, to sum over all possible positions of the $r$-th selection: we have to fit the first $r$ selections in positions $j$ and under, so $j \geq r$. Similarly, we have to fit the remaining $k-r$ selections in positions $j+1$ through $n$, which gives $k - r \leq n - (j+1) + 1$, or $j \leq n + r - k$.
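A brute-force check of the resulting identity $\sum_{j=r}^{n+r-k}\binom{j-1}{r-1}\binom{n-j}{k-r}=\binom{n}{k}$ (my own sanity check, not part of the answer):

```python
from math import comb

def double_count(n, k, r):
    """Sum over the position j of the r-th smallest selected object."""
    return sum(comb(j - 1, r - 1) * comb(n - j, k - r)
               for j in range(r, n + r - k + 1))

# The identity should hold for every valid choice of r.
for n in range(1, 12):
    for k in range(1, n + 1):
        for r in range(1, k + 1):
            assert double_count(n, k, r) == comb(n, k)
print("identity verified")
```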
How do I solve $\int\frac{\cos^2(x)}{\sin(x)}\ dx$ without using the Weierstrass substitution?
Here's one approach. First rewrite the integral as $$ \int \frac{\cos^2(x)}{\sin(x)} dx = \int \frac{\cos^2(x)}{\sin^2(x)} \sin(x) dx = \int \frac{\cos^2(x)}{1-\cos^2(x)} \sin(x) dx . $$ Using the substitution $u = \cos(x)$, this becomes $$ - \int \frac{u^2}{1-u^2} du , $$ which can now be computed by using partial fractions. I'll leave the rest of the details to you.
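For the record, here is how the remaining details might go (my own computation, since the answer leaves them as an exercise): $$-\int \frac{u^2}{1-u^2}\,du = \int\left(1-\frac{1}{1-u^2}\right)du = u - \frac12\ln\left|\frac{1+u}{1-u}\right| + C,$$ so substituting back $u=\cos(x)$ gives $$\int \frac{\cos^2(x)}{\sin(x)}\,dx = \cos(x) - \frac12\ln\left|\frac{1+\cos(x)}{1-\cos(x)}\right| + C = \cos(x) + \ln\left|\tan\frac x2\right| + C.$$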
Why is this true for any x and y where x and y are whole numbers?
Your proposition is equivalent to: for all $(x,y)\in \mathbb Z^2$ there exists $z\in \mathbb Z$ such that $x\geq z$ or $x\geq y$. It is always true, since we may take $z=x-1$.
Do IID random variables take same number of values?
Identically distributed means $P(X_i \leq x)$ does not vary with $i$. In the case of random variables taking only finitely many values, it is true that they assume not only the same number of values but also exactly the same values with the same probabilities. If $X$ takes the values $0$ and $1$ with probabilities $\frac 1 2$ each, and $Y$ takes the values $0$ and $1$ with probabilities $\frac 1 3$ and $\frac 2 3$, then they are not identically distributed.
Find limit of $\frac{|x|^3 y^2+|x|y^4}{(x^2+y^2)^2}$
Hint. We have that $$\frac{|x|^3 y^2+|x|y^4}{(x^2+y^2)^2}=|y|\cdot\frac{|x||y|}{x^2+y^2}. $$
$p$-torsion elements and exact sequence
Let us consider the diagram: $\require{AMScd}$ \begin{CD} 0 @>>> U @>>> V @>>> V/U @>>> 0\\ @. @Vp^jVV @Vp^jVV @Vp^jVV \\ 0 @>>> U @>>> V @>>> V/U @>>> 0 \end{CD} There is a six-term exact sequence coming from the long exact sequence in homology. The homology of the "left (vertical) complex" $U\overset{p^j}\to U$ (expanded with zero objects in the other degrees) at the "first $U$", the upper one in the diagram, is the kernel of $p^j$, so it is $U[p^j]$. The cokernel is $U/p^j$, the homology taken in the position of the "lower $U$". The same holds for the other vertical complexes. We thus get the "long" exact sequence: $$ 0 \to U[p^j] \to V[p^j] \to (V/U)[p^j]\ {\color{red}{\overset\delta\to}} \ U/p^j \to V/p^j \to (V/U)/p^j \to 0\ . $$ The above delta morphism captures the information needed to answer the OP. More cannot be said in this generality. (A split extension, or a zero target for $\delta$, would be fine...)
Derivatives of composite functions
You use the chain rule: if $F(x) = f(g(x))$, then $F'(x) = g'(x)\,f'(g(x))$.
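For instance (an illustration of my own): with $f(u)=\sin u$ and $g(x)=x^2$, we get $F(x)=\sin(x^2)$ and $F'(x)=2x\cos(x^2)$.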
Prove or disprove: complex numbers
Multiply by $z^3$ to get $$z^6=z^3\bar z^3=(|z|^2)^3=1$$ so that the solutions are the sixth roots of unity. 1) Yes. 2) No, $2$. 3) Yes, if the modulus isn't constrained. 4) No, $3$.
Is it possible to "customize" the multinomial distribution to your specifications?
The problem is how to distribute the missing $0$'s. One way is to use the ordinary multinomial $(X_1,X_2,\dots,X_k)$ for sample size $n-k$, and let $$\Pr(Y_1=y_1,Y_2=y_2, \dots, Y_k=y_k)=\Pr(X_1=y_1-1, X_2=y_2-1,\dots,X_k=y_k-1).$$ This does not look particularly interesting, since it is a simple shift of an ordinary multinomial.

Added: Another way of "customizing" is to divide the probabilities $\Pr(X_1=x_1,X_2=x_2,\dots, X_k=x_k)$, where none of the $x_i$ is $0$, by a number $Q$, where $Q$ is the probability that none of the $X_i$ is $0$. Then we face the issue of calculating $Q$. This is $1$ minus the probability that at least one of the $X_i$ takes on the value $0$. One can express the probability that none of the $X_i$ is $0$ as a complicated sum, and it is not clear that there is a pleasant closed form for it. But we give an approach using Inclusion/Exclusion that is feasible for small $k$, and that with suitable truncation might give useful approximations for larger $k$. The probability that $X_i=0$ is $(1-p_i)^n$; adding up over all $i$, we get our first estimate $\sum_i (1-p_i)^n$. However, this sum double counts all the instances where $X_i=0$ and $X_j=0$ for distinct $i$ and $j$; this probability is $(1-p_i-p_j)^n$. So we form the sum $\sum_{i,j}(1-p_i-p_j)^n$ and subtract it from the first estimate to get the second estimate. But now we have taken away too much: all instances where $X_i=0$, $X_j=0$, and $X_l=0$ for distinct $i,j,l$. So we must add back $\sum_{i,j,l} (1-p_i-p_j-p_l)^n$. Continuing this way, we find $1-Q$, and therefore $Q$.
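A direct implementation of this Inclusion/Exclusion computation (a sketch of my own, feasible for small $k$ as noted above):

```python
from itertools import combinations

def prob_no_zero_counts(n, p):
    """P(every category appears at least once) for a multinomial(n, p),
    computed by inclusion-exclusion over the sets of categories forced empty."""
    k = len(p)
    at_least_one_empty = 0.0
    for size in range(1, k + 1):
        for subset in combinations(range(k), size):
            term = (1 - sum(p[i] for i in subset)) ** n
            at_least_one_empty += (-1) ** (size + 1) * term
    return 1 - at_least_one_empty  # this is Q

print(prob_no_zero_counts(10, [0.2, 0.3, 0.5]))
```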
Generalized Catalan Numbers
For two different types of parentheses, this sequence is listed in the OEIS here. Words with balanced parentheses of $k$ types are known as $\text{Dyck}(k)$ words. Maybe this helps for further investigations.
How to solve a circle-dividing equation (complex numbers)
$$z^3=8i=8e^{\dfrac{i\pi}2}=8e^{\left(2k\pi+\dfrac\pi2\right)i}$$ where $k$ is any integer, so $$z=2e^{\dfrac{(4k+1)\pi i}6}$$ where $k\equiv0,1,2\pmod3$ (see this). Now use Euler's formula.
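Explicitly (spelling out the three values, which the answer leaves to the reader): $k=0,1,2$ give $$z=2e^{i\pi/6}=\sqrt3+i,\qquad z=2e^{i5\pi/6}=-\sqrt3+i,\qquad z=2e^{i3\pi/2}=-2i,$$ and one can check directly that, e.g., $(-2i)^3=-8i^3=8i$.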
Evaluating $\int_{T}^{\infty} \exp\left[\beta - \phi t \right]t^{-\frac{3}{2}} \log ^{\kappa}(t) \mbox{d}t$
Your integral $I\left( \phi ,\beta ,\kappa \right) $ can be evaluated for $\kappa =n\in \mathbb{N}_{>0}$: $$I\left( \phi ,\beta ,n\right) =\exp \left( \beta \right) \int_{T}^{\infty }\exp \left( -\phi \,t\right) t^{-\frac{3}{2}}\log \left( t\right) ^{n}dt$$ with the trick of [Jack D'Aurizio] (Integrating $\int_0^{\frac{\pi}{2}} x (\log\tan x)^{2n+1}\;dx$): $$I\left( \phi ,\beta ,n\right) =\exp \left( \beta \right) \underset{\alpha \rightarrow 0}{\lim }\frac{d^{n}}{d\alpha ^{n}}\int_{T}^{\infty }t^{\alpha -\frac{3}{2}}\exp \left( -\phi \,t\right) dt$$ Performing the integration: $$I\left( \phi ,\beta ,n\right) =\exp \left( \beta \right) \sqrt{\phi }\,\underset{\alpha \rightarrow 0}{\lim }\frac{d^{n}}{d\alpha ^{n}}\,\phi ^{-\alpha }\Gamma \left( \alpha -\frac{1}{2},T\,\phi \right)$$ Carrying out the differentiation and taking the limit as $\alpha \rightarrow 0$ leads to: $$I\left( \phi ,\beta ,n\right) =\exp \left( \beta \right) \sqrt{\phi }\sum_{k=0}^{n}\binom{n}{k}\left( -1\right) ^{n-k}\left( \log \phi \right) ^{n-k}\times $$ $$\times \left[ \Gamma \left( -\frac{1}{2},T\,\phi \right) \left( \log (T\,\phi) \right) ^{k}+k!\sum_{m=1}^{k}\frac{\left( \log \left( T\,\phi \right) \right) ^{k-m}}{\left( k-m\right) !}G_{m+1,m+2}^{m+2,0}\left( T\,\phi \left\vert \begin{array}{c} 1,1,...,1 \\ 0,0,...,0,-\frac{1}{2} \end{array} \right. \right) \right]$$ where the general Leibniz rule (https://en.wikipedia.org/wiki/Product_rule) and the Wolfram Functions site were used.
Definition of sphere without using a metric
This is not so commonly used. The definitions I mostly see are either the one-point compactification of the $n$-dimensional vector space $\mathbb{R}^n$, or the CW-complex consisting of one $n$-cell and one $0$-cell, glued together in the obvious way. If you know a little topology, you can see that the two are actually equivalent. Especially the second is used quite often, since it is immediately a CW-complex, a structure used very often in topology.
How many integers between 2001 and 3000 inclusive are not divisible by any of the three prime numbers 3, 7 and 13?
This is correct, and is the most efficient way to calculate it for the numbers $3,7,13$. The final answer is correct as well. Basically, inclusion-exclusion is the way to go when the range you're checking (in this case $2001$ to $3000$) is "large" and the number of primes whose multiples are excluded (in this case, the $3$ primes $3,7,13$) is "small".
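A quick cross-check of both approaches (my own sketch; the brute-force loop and the inclusion-exclusion formula should agree):

```python
from itertools import combinations
from math import prod

def count_brute(lo, hi, primes):
    """Count n in [lo, hi] divisible by none of the given primes."""
    return sum(1 for n in range(lo, hi + 1) if all(n % p for p in primes))

def count_incl_excl(lo, hi, primes):
    """Same count via inclusion-exclusion over subsets of the primes."""
    def multiples(d):  # how many multiples of d lie in [lo, hi]
        return hi // d - (lo - 1) // d
    total = hi - lo + 1
    for size in range(1, len(primes) + 1):
        for subset in combinations(primes, size):
            total += (-1) ** size * multiples(prod(subset))
    return total

print(count_brute(2001, 3000, (3, 7, 13)))      # brute force
print(count_incl_excl(2001, 3000, (3, 7, 13)))  # should print the same value
```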
Comparing different topologies
A partial answer that should help you a little, comparing $\mathcal{T}$ and $\mathcal{M}$: a) Consider $V:= U(0,\{1\}, 1/2) \in \mathcal{T}$; I claim that $V \notin \mathcal{M}$. Define $f \in V$ to be the constant function $f \equiv 1/4$. Now I claim that for any $\delta > 0$, there is a $g \in C[0,1]$ such that $$ d_1(f,g) < \delta \text{ and } g \notin V $$ For this you can think of a picture of a function $g$ such that $$ g(1) = 1, \text{ and } g(x) = 1/4\quad\forall x < 1-\delta $$ and $g$ describes a thin triangle between $x = 1-\delta$ and $x=1$. Then, $$ d_1(f,g) = \int_{1-\delta}^1 |g(x) - 1/4|dx \leq \int_{1-\delta}^1 (1-1/4)dx = \frac{3\delta}{4} < \delta $$ Hence, $d_1(f,g) < \delta$. However, $$ |g(1) - 0| = 1 > 1/2 \Rightarrow g \notin V $$ b) Consider $W := \{g \in C[0,1] : \int |g(x)|dx < 1\} \in \mathcal{M}$; I claim that $W \notin \mathcal{T}$. Choose $f \equiv 1/2$. For any finite subset $A \subset [0,1]$ and any $\delta > 0$, we can construct a function $g \in C[0,1]$ such that $$ g(x) = 1/2 \quad\forall x \in A, \text{ but } \int |g(x)|dx > 1 $$ In fact, just take the smallest $x_0 \in A$, and build a really large triangle from $(0,c)$ to $(x_0,1/2)$, with $c$ large enough (e.g. $c = 4/x_0$, so that the area under the hypotenuse already exceeds $1$). Now let $g$ be the hypotenuse of that triangle for $x \leq x_0$ and $g(x) = 1/2$ for all $x > x_0$. Hence, $\mathcal{T}$ and $\mathcal{M}$ are not related by inclusion.
How do I find a Jordan basis?
Here is the way to go: consider the sequence of kernels: $$\{\,0\,\}\varsubsetneq\ker(A-2I)\varsubsetneq\ker(A-2I)^2\subset\dots$$ The sequence stops after step $2$ since $$A-2I=\begin{bmatrix}-2&1&0\\-4&2&0\\-2&1&0\end{bmatrix}\qquad (A-2I)^2=\begin{bmatrix} 0&0&0\\0&0&0\\0&0&0\end{bmatrix}$$ $A-2I$ has rank $1$, hence its kernel (the eigenspace) has codimension $1$, i.e. has dimension $2$. $(A-2I)^2$ is the null matrix, hence its kernel has dimension $3$. Take any vector in $\ker(A-2I)^2\smallsetminus\ker(A-2I)$, i.e. any vector of $\mathbf R^3$ which is not an eigenvector. As the eigenspace is defined by the equation $\; y=2x$, we'll take, say, $$e'_3=(0,1,0). $$ Note that $e'_2=(A-2I)e'_3=(1,2,1)$ is an eigenvector by construction. We complete this set of two vectors to a basis by choosing another eigenvector, linearly independent from $e'_2$, say $$e'_1=(1,2,0).$$ The definition of $e'_2$ from $e'_3$ can be written as $\; Ae'_3=2e'_3+e'_2$, so the matrix of the linear map in the basis $(e'_1,e'_2,e'_3)$ is the Jordan form: $$J=\begin{bmatrix}2&0&0\\0&2&1\\0&0&2\end{bmatrix}.$$
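One can verify this basis mechanically (my own check, with the matrix $A$ recovered from the $A-2I$ shown above):

```python
from sympy import Matrix

# A recovered by adding 2I to the A - 2I displayed in the answer.
A = Matrix([[ 0, 1, 0],
            [-4, 4, 0],
            [-2, 1, 2]])

# Jordan basis vectors as columns: e1' = (1,2,0), e2' = (1,2,1), e3' = (0,1,0).
P = Matrix([[1, 1, 0],
            [2, 2, 1],
            [0, 1, 0]])

print(P.inv() * A * P)  # the Jordan form J, blocks [2] and [[2,1],[0,2]]
```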
Is the category of chain complexes over an ring $R$ a locally presentable category?
Yes it is. It's one of the main examples other than simplicial sets. Check T.Beke's paper " Sheafifiable homotopy model categories" Proposition 3.10
Isomorphism of localization rings
Note that $\mathbb{Z}_{(2)}[1+\sqrt{-3}]=\mathbb{Z}_{(2)}[\sqrt{-3}]$ is a localization of $R$, namely the localization with respect to the set $\mathbb{Z}-(2)$. Since $\mathbb{Z}-(2)\subseteq R-M$ (proof: the quotient $R/M$ is a field of characteristic $2$ so every odd integer is nonzero in it), it suffices to show that inverting each element of $\mathbb{Z}-(2)$ will invert every element of $R-M$, so that these two localizations have equivalent universal properties. Now if $a+b\sqrt{-3}\in R-M$, note that $a$ and $b$ must have different parity (if they were both even then $a+b\sqrt{-3}$ would be a multiple of $2$, and if they were both odd then $a+b\sqrt{-3}$ would be a multiple of $2$ plus $1+\sqrt{-3}$). It follows that $(a+b\sqrt{-3})(a-b\sqrt{-3})=a^2+3b^2$ is an odd integer. That is, $a^2+3b^2$ is inverted in the localization with respect to $\mathbb{Z}-(2)$, and thus so is $a+b\sqrt{-3}$ since any factor of a unit is a unit.