Calculate Factor that's 1 if value is not 0, and 0 if it is
You can set a new variable equal to the floor of $x/(x-1)$ (as long as $x$ can't be equal to $1$). Alternatively, the ceiling of $x/(x+1)$ (as long as $x$ isn't negative).
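A quick numeric check of the ceiling variant in Python (just an illustration, for nonnegative inputs):

import math

def indicator(x):
    # ceil(x/(x+1)) is 0 at x = 0 and 1 for any x > 0
    return math.ceil(x / (x + 1))

for x in [0, 0.5, 1, 2, 100]:
    assert indicator(x) == (0 if x == 0 else 1)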
Distribution of Conditional Probability
I assume $C$ is the ciphertext, $K$ is the key, $P$ is the plaintext, and $e_k(x)$ is the encrypted text corresponding to plaintext $x$ and key $k$. $\text{Pr}(C=y|P=x)$ is not a random variable, so it doesn't quite make sense to say that it is uniform. If you mean that, for a given plaintext $x$, the distribution of $C$ is uniform (over the set of all possible ciphertexts), that's usually not true, because the space of possible keys is usually much smaller than the space of possible ciphertexts. I'm not sure what you mean by your second question. If you're talking about different ciphertexts with different plaintexts, of course they can use the same key. If you're talking about the same plaintext, then since the mapping from (plaintext, key) to ciphertext is deterministic, different ciphertexts will have to result from different keys.
What do "$\pm_s$" and "$\mp_t$" mean in a formula (in particular, one for the roots of a quartic polynomial)?
It says: Note: The subscript $s$ of $\pm_s$ and $\mp_s$ is to note that they are dependent. It also says: Remember: The two $\pm_s$ come from the same place in equation (7'), and should both have the same sign, while the sign of $\pm_t$ is independent. In other words, these symbols give you a convenient way of writing down all four solutions in a single formula. Those four solutions are characterised by the choice of signs in the formula; there are $\pm_s$ and $\pm_t$, which are independent, and hence together give you $4$ choices: $(\pm_s, \pm_t) =(+,+), (+,-), (-,+)$ and $(-,-)$.
Finding the limit of the sequence $x(n) = (1+2/n)^n$
The limit that defines the constant $e$ is $$ e=\lim_{n\to\infty}\left(1+\frac1n\right)^n $$ Since $x^2$ is a continuous function, we get $$ e^2=\lim_{n\to\infty}\left(1+\frac2n+\frac1{n^2}\right)^n $$ It is simple to see that $$ \left(1+\frac2n\right)^n\le\left(1+\frac2n+\frac1{n^2}\right)^n\le\left(1+\frac2n\right)^n\left(1+\frac1{n^2}\right)^n $$ which is the same as $$ \left(1+\frac2n+\frac1{n^2}\right)^n\left(1-\frac1{n^2+1}\right)^n\le\left(1+\frac2n\right)^n\le\left(1+\frac2n+\frac1{n^2}\right)^n $$ which implies by Bernoulli's Inequality $$ \left(1+\frac2n+\frac1{n^2}\right)^n\left(1-\frac{n}{n^2+1}\right)\le\left(1+\frac2n\right)^n\le\left(1+\frac2n+\frac1{n^2}\right)^n $$ The Squeeze Theorem says $$ \lim_{n\to\infty}\left(1+\frac2n\right)^n=e^2 $$
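A quick numerical illustration in Python, if you want to see the convergence:

import math

for n in (10, 1_000, 100_000):
    print(n, (1 + 2 / n) ** n)   # approaches e^2 ~ 7.389056
print(math.e ** 2)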
Is the line created by the minor axis of an ellipse concurrent to the lines running to the opposite vanishing point?
Minor/major axes, foci, etc. are "metric objects" that have a meaning in the framework of metric geometry; they are therefore outside the scope of projective geometry, which (in particular) preserves neither lengths nor ratios of lengths (only the cross-ratio). As a consequence, it is not astonishing that you find such discrepancies... Besides, a figure like your first one can easily be generated by a short program in a convenient language. For example, this figure: Fig. 1. Figure generated by a program written in Matlab (see bottom of this answer). The vanishing points are materialized by red dots. We see on this figure that we cannot infer anything about the orientations of the major/minor axes of the ellipses... Here is another figure, similar to the first one you have given, with trapezoids instead of general quadrilaterals: Fig. 2: Figure obtained by changing $A$ into $[5+0i,1+2i,0+0i]$ and changing the denominator $(x+y+t)$ into $(y+t)$ in function $Z$. Only a single vanishing point at finite distance (the other one being at infinity). Explanation: everything is based on the common (and simple) expression of all projective transformations (in particular those giving a perspective): $$X=\dfrac{ax+by+c}{gx+hy+k}, \ \ Y=\dfrac{dx+ey+f}{gx+hy+k}, \tag{1}$$ (please note the common denominator) or equivalently: $$Z=\dfrac{Ax+By+Ct}{gx+hy+kt}\tag{2}$$ $$\text{with} \ Z=X+iY, A=a+id, B=b+ie, C=c+if,$$ with $t=1$ for ordinary points and $t=0$ for points at infinity. (Using complex numbers for plane geometry is very handy.) Remark: here, in the plane with coordinates $(x,y)$, I have first described a chain of 6 tangent circles (+ circumscribed squares), to which I have applied a certain projective transform of the type given by (2). Program:

clear all;close all;hold on
A=[2+i,1+1i,0+0i]; % could be (almost) any set of complex numbers
Z=@(x,y,t)((A(1)*x+A(2)*y+A(3)*t)./(x+y+t)); % see formula (2)
s=5; % shift
a=0:0.01:2*pi; % parameter
for k=1:4;
  % ellipses as images of circles by the "Z transform":
  plot(Z(cos(a)+2*k,sin(a)+s,1));
  % quadrilaterals as images of squares:
  plot(Z([-1,1,1,-1,-1]+2*k,[-1,-1,1,1,-1]+s,1));
end;
% vanishing points in the 2 directions, with t=0:
plot(Z(1,0,0),'or','markerfacecolor','r');
plot(Z(0,1,0),'or','markerfacecolor','r');
properly infinite $C^*$-algebra $B(H)$
Let $\{e_i:i\in I\}$ be an orthonormal basis of $H$. Since $I$ is infinite, we can find two disjoint subsets $A,B\subset I$ and bijections $f_A:I\to A$, $f_B:I\to B$. Then let $e$ be the projection onto $\overline{\operatorname{span}}\{e_i:i\in A\}$, and let $f$ be the projection onto $\overline{\operatorname{span}}\{e_i:i\in B\}$. Then $e\perp f$. To show that $e\sim_{\operatorname{MvN}} 1$, define $u\in B(H)$ by linear extension of $u(e_i)=e_{f_A(i)}$. Then $u^*u=1$, and $uu^*=e$. Showing that $f\sim_{\operatorname{MvN}} 1$ is similar.
Why does factoring out $\pm$ from an expression introduce a minus sign?
The $\pm$ sign stands for $+$ or $-$. (In my opinion, it should be avoided as much as possible, since it is likely to cause confusions like this one.) So you can answer your question by considering each time the two cases: $$\begin{cases} \text{when $\pm$ stands for $+$} \\ \text{when $\pm$ stands for $-$}.\end{cases}$$ So you indeed have $$\frac {\hbar }{\sqrt 2} (ig_z+g_y)=+\frac {\hbar }{\sqrt 2} (g_y+ig_z)$$ and also $$-\frac {\hbar }{\sqrt 2} (ig_z+g_y)=-\frac {\hbar }{\sqrt 2} (g_y+ig_z)$$ so your equality is true for $\pm$. It is the same for all the other examples you gave, and you can see that $$\pm(\pm 5)=5$$ and $$\pm(\pm(\pm 8))=\pm 8.$$
Proving the existence of a unique planar embedding
The problem comes from a textbook that just got done classifying the platonic graphs. You can simply argue that the graph in question is platonic and therefore it falls into the classification!
Prove that $\Bbb P $ is not a $F_{\delta}$ set by Baire category theorem.
Note that $\Bbb R = \left(\bigcup_{n=1}^{\infty} F_n\right) \cup \left(\bigcup _{q \in \Bbb Q} \{q\}\right)$. As you have argued, one of these closed sets on the RHS has nonempty interior. Suppose it is $F_m$. But $F_m\subseteq\mathbb P$, which implies $\mathbb P$ has nonempty interior. This is absurd.
2-variable limit problem
If you have access to Mathematica, I find it very useful to look at the graph, although this problem is very solvable on paper as well. First problem: direct substitution won't work, because it's indeterminate. One method is to set $x=t$, $y=t$ and find the limit in the single variable $t$. This makes the limit $t/(5t) = 1/5$. This is effectively approaching the limit along the line $y=x$. To try a different line, set $x=-t$, $y=t$ (the line $y=-x$). From this approach, the limit is $-5t/t = -5$. Since these values are different, the function approaches a different value from different directions, and the limit does not exist. This method will only work to prove a limit's non-existence: you cannot prove that a limit exists with this method, only disprove it. Second problem: you can basically use direct substitution, just keep in mind that $\sin(\text{anything})$ must lie within $[-1,1]$. As the values approach zero, the limit's upper and lower bounds also go to zero, so you can effectively use the squeeze/sandwich theorem: $(0)(-1) \le \text{answer} \le (0)(1)$, i.e. $0 \le \text{answer} \le 0$.
Sylow $p$-subgroup of a direct product is product of Sylow $p$-subgroups of factors
However, the following is true. Let $G=HK$, with $H$, $K$ subgroups, and let $p$ be a prime dividing the order of $G$. Then there exists a $P \in Syl_p(G)$ such that $P=(P\cap H)(P \cap K)$, with $P \cap H \in Syl_p(H)$ and $P \cap K \in Syl_p(K)$. Proof. Let us first find a Sylow $p$-subgroup $P$ of $G$ such that $P\cap H$ is a Sylow $p$-subgroup of $H$ and $P\cap K$ is a Sylow $p$-subgroup of $K$. Let $Q$ be a Sylow $p$-subgroup of $H$ and let $R$ be a Sylow $p$-subgroup of $K$. Choose a Sylow $p$-subgroup $S$ of $G$ such that $Q\subseteq S$. By Sylow theory, there is a $g\in G$ such that $R\subseteq S^g$. In particular, $S\cap H=Q$ and $S^g\cap K=R$. But $g=hk$ for some $h\in H$ and $k\in K$. Then $S^g\cap K=R=S^{hk} \cap K=(S^h \cap K)^k$, hence $R^{k^{-1}}=S^h \cap K$ and this is a Sylow $p$-subgroup of $K$, being a conjugate of $R$. On the other hand, $S^h \cap H=(S \cap H)^h=Q^h \in Syl_p(H)$, since it is a conjugate of $Q$. So $P=S^h$ is the Sylow $p$-subgroup we were looking for. Finally we use a counting argument to show that indeed $(P \cap H)(P \cap K)=P$. Observe that $$|(P \cap H)(P \cap K)|=\frac{|P \cap H| \cdot |P \cap K|}{|P \cap H \cap K|}=\frac{|H|_p \cdot |K|_p}{|P \cap H \cap K|}$$ where the $p$-subscript denotes the largest $p$-power dividing a positive integer (which is understood to be $1$ if the integer in question is not divisible by $p$). Since $P \cap H \cap K$ is a $p$-subgroup of $H \cap K$, note that $|P \cap H \cap K| \leq |H \cap K|_p$. Combining this: $$|(P \cap H)(P \cap K)| \geq \frac{|H|_p \cdot |K|_p}{|H \cap K|_p}=\left[\frac{|H| \cdot |K|}{|H \cap K|}\right]_p=|G|_p=|P|$$ since $G=HK$ and $P \in Syl_p(G)$. As a set, $(P \cap H)(P \cap K) \subseteq P$, so we conclude $P=(P \cap H)(P \cap K)$.$\square$
Finding double coset representatives in finite groups of Lie type
Many such questions yield to using Bruhat decompositions, and often succeed over arbitrary fields (which shows how non-computational it may be). Let $P$ be the parabolic with Levi component $\mathrm{GL}(2)\times \mathrm{SL}(2)$. Your second group misses being the "other" maximal proper parabolic $Q$ only insofar as it misses the $\mathrm{GL}(1)$ part of the Levi component. Your double coset space fibers over $P\backslash G/Q$. It is not trivial, but it is true, that $P\backslash G/Q$ is in bijection with $W_P\backslash W/W_Q$, with $W$ the Weyl group and the subscripted versions the intersections with the two parabolics. This is perhaps the chief miracle here. Since the missing $\mathrm{GL}(1)$ is normalized by the Weyl group, the fibering is trivial. Then some small bit of care is needed to identify the Weyl group double coset elements correctly (since double coset spaces do not behave as uniformly as "single" coset spaces). In this case, the two smaller Weyl groups happen to be generated by the reflections attached to the two simple roots, and the Weyl group has a reasonable description as words in these two generators.
Understanding matrix multiply analogously
It's more of a function composition: if you compose $f:X\to Y$ and $g:Y\to Z$ you get $g\circ f:X\to Z$. In linear algebra, a matrix $A$ of size $m\times n$ represents a linear map from $\Bbb R^n$ to $\Bbb R^m$, and an $n\times p$ matrix $B$ represents a linear map from $\Bbb R^p$ to $\Bbb R^n$. You can view $C=AB$ as the composed linear map $C:\Bbb R^p\to \Bbb R^m$. Note that the choice of basis is also very important.
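As a quick illustration in Python (using NumPy, purely as a sanity check):

import numpy as np

A = np.arange(6).reshape(2, 3)    # linear map R^3 -> R^2
B = np.arange(12).reshape(3, 4)   # linear map R^4 -> R^3
x = np.ones(4)

# applying B then A is the same as applying the product AB
assert np.allclose(A @ (B @ x), (A @ B) @ x)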
Find the number of ways of choosing $r$ non-overlapping consecutive pairs of integers from the set $S=\{1,2, ..., n\}$.
I think this is essentially the same as your solution. We want to re-conceptualize a pair of consecutive integers as a single object (as you did). The book solution conceptualizes a pair $(a_i, a_i +1)$ as a single integer $b_i$ in the set $T'$, given by the formula $b_i = a_i - (i-1)$. We must verify that this transformation is a bijection: that it never produces duplicate $b_i$s, and that its inverse never produces any overlapping pairs. Intuitively, this is because the $T \to T'$ transformation deletes each $a_i + 1$ from the list and shifts everything else left, and the inverse transformation reinserts it. So $b_1$ can never be equal to $b_2$, etc., and conversely, if $b_1 \neq b_2$, the pairs $(a_1, a_1+1)$ and $(a_2, a_2+1)$ won't overlap. (This can be done more carefully.) Once we know this, choosing the set $T$ of pairs is equivalent to choosing the set $T'$ of integers. A quick check shows that the minimum possible value for $b_1$ is $1$ (since that's the minimum value for $a_1$), and the maximum value for $b_r$ is $n-r$ (since the maximum for $a_r$ is $n-1$). Thus we have reduced the problem to choosing a set $T'$ of $r$ integers from the set $\{1, 2, \dots, n-r\}$, which can be done in $\binom{n-r}{r}$ ways.
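If you want to convince yourself numerically, here is a quick brute-force check in Python (an illustration, not part of the book's argument):

from itertools import combinations
from math import comb

def count_choices(n, r):
    # choose r starting points a_1 < ... < a_r in {1,...,n-1}
    # such that the pairs (a_i, a_i + 1) are pairwise disjoint
    return sum(
        all(s[i + 1] - s[i] >= 2 for i in range(r - 1))
        for s in combinations(range(1, n), r)
    )

for n in range(2, 12):
    for r in range(1, n // 2 + 1):
        assert count_choices(n, r) == comb(n - r, r)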
Prime integer p such that -1 is a square mod p
$2$ works, so suppose $p$ is odd. If $a^2 \equiv -1 \pmod{p}$, then $a^4 \equiv 1 \pmod{p}$, so the order of $a$ mod $p$ divides $4$. Since $a \not \equiv 1$ and $a^2 \not \equiv 1$ (since $-1 \not \equiv 1$, since $p > 2$), we must have $ord_p(a) = 4$. Since $a^{p-1} \equiv 1 \pmod{p}$ (Fermat's little theorem), we must have $4 \mid p-1$, i.e., $p \equiv 1 \pmod{4}$. Now suppose $p \equiv 1 \pmod{4}$. Then $x^4-1$ divides the polynomial $x^{p-1}-1$. Since $x^{p-1}-1$ has exactly $p-1$ roots in $\mathbb{Z}_p$, and since $\frac{x^{p-1}-1}{x^4-1}$ has at most $p-5$ roots (since $\frac{x^{p-1}-1}{x^4-1}$ has degree $p-5$), it must be that $x^4-1$ has exactly $4$ roots. Since $x^2-1$ has exactly $2$ roots, it must be that $x^2+1$ has exactly $2$ roots. So it in particular has a root, which is what we want.
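A quick brute-force check of the equivalence for small primes, in Python (illustration only):

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

for p in (q for q in range(3, 200) if is_prime(q)):
    has_root = any(a * a % p == p - 1 for a in range(1, p))
    assert has_root == (p % 4 == 1)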
Does independence of $g^2 (X)$ and $h(X)$ imply independence of $g(X)$ and $h(X)$?
Consider $X$ a random variable with Rademacher distribution. Then $X^2=1$ is a constant, thus $X^2 \perp X$. However, $X$ is not independent of itself. Besides, if $g^2(X)\perp h(X)$, then $\sqrt{g^2(X)} = |g(X)|\perp h(X)$. So if $g\geq 0$, then indeed $g(X)\perp h(X)$.
Estimate proportion of polynomial
For the first, there are $q^{n+1}$ polynomials of degree $n$ because each coefficient has $q$ choices. You are supposed to estimate the number of these that are reducible, then divide by $q^{n+1}$ to get the proportion. The degree of one factor has to be less than or equal to $n/2\ \ldots$ For the second, you need to estimate the number of products of first degree polynomials. You need to think about rearrangements of factors producing the same ultimate polynomial.
Symbol to denote the angle between two points
Two points do not determine an angle; you implicitly took the $x$-axis as a standard reference line. Label or name the three points, say $A,B,C$. Then $\measuredangle ABC$ is the angle at the middle point $B$.
Wright Omega function: how to interpret the solution?
The first solution, is as you correctly note: $$a=\frac{1}{b}$$ The second equation can be solved in terms of the Lambert $W$ function, as: $$ \begin{align} \ln(ab)-ab+c&=0\Rightarrow\\ \ln(ab)&=ab-c\Rightarrow\text{ (pass through $\exp$)}\\ ab&=e^{ab}\cdot e^{-c}\Rightarrow\\ ab\cdot e^{-ab}&=e^{-c}\Rightarrow\\ -ab\cdot e^{-ab}&=-e^{-c}\Rightarrow\\ -ab&=W_k(-e^{-c})\Rightarrow\\ a&=\frac{-W_k(-e^{-c})}{b}\text{, $k\in\mathbb{Z}$}\\ \end{align} $$ Now, the Wright-Omega function $\omega$ satisfies: $$W_k(z)=\omega(\ln(z)+2k\pi i)\text{, $k\in\mathbb{Z}$}$$ Therefore you can express the solutions in terms of this function as: $$a=\frac{-\omega(\ln(-e^{-c})+2k\pi i)}{b}\text{, $k\in\mathbb{Z}$}$$ Choosing the principal branch of $\ln$ and for the particular case where $c\ge 1$ the branch of $W$ which gives real solutions, is the $k=-1$ branch, so: $$ \begin{align} a&=\frac{-W_{-1}(-e^{-c})}{b}\Rightarrow\\ a&=\frac{-\omega(\ln(-e^{-c})+2(-1)\pi i)}{b}\Rightarrow\\ a&=\frac{-\omega(\pi i -c - 2\pi i)}{b}\Rightarrow\\ a&=\frac{-\omega(-c -\pi i)}{b} \end{align} $$
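As a numerical sanity check, SciPy exposes the Lambert $W$ function; a minimal sketch, with sample values $b$, $c$ of my choosing:

import numpy as np
from scipy.special import lambertw

b, c = 2.0, 3.0                            # sample values with c >= 1
ab = (-lambertw(-np.exp(-c), k=-1)).real   # the k = -1 branch is real here
a = ab / b
assert abs(np.log(a * b) - a * b + c) < 1e-10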
From fourier series to continuous fourier transform
Yes, you can make the transformation to the continuous transform case rigorous under restricted conditions, but its rigorous development is more trouble than it's worth. However, the intuition in this argument is worth mentioning when trying to motivate the introduction of the Fourier transform. The reason the argument has hung around so long is that Fourier came up with his transform by using this argument. It's about the only natural and compelling derivation leading to the Fourier transform; it motivated Fourier, and it's Fourier's argument. There are reasons to consider the discrete series, but Fourier's reasoning remains the only natural motivation for considering "continuous" integral versions of Fourier expansions, at least at any reasonable elementary level.
Problem with $AM>GM>HM$ inequality
Note that $$5>1,~~5>2,~~5>3,~~5>4,~~5=5$$ Multiplying these we prove that $$5^5 > 5!$$ Next note that $$4 >3,~~5>4,~~5>3,~~5>4,~~5=5$$ Multiplying these gives $4\cdot 5^4 > 3\cdot 4\cdot 3\cdot 4\cdot 5 = 6\cdot 5!$, and since $5^5 > 4\cdot 5^4$ we get $$5^5 > 6\cdot 5!$$
find $\lim_n\frac{a^\frac1n}{n+1}+\frac{a^\frac 2n}{n+\frac 12}+...+\frac{a^\frac nn}{n+\frac 1n}$ using Riemann integral
The sum $$S_n = \sum_{i=1}^n \frac{a^{{i\over n}}}{n + \frac{1}{i}} = \sum_{i=1}^n \frac{1}{n}\frac{a^{{i\over n}}}{1 + \frac{1}{n^2}\frac{n}{i}}$$ is not quite a Riemann sum, since the summand is not of the form $\frac{1}{n}f\left({i\over n}\right)$ for some ($n$-independent) function $f$. However, it is very close to the Riemann sum $$\tilde{S}_n = \sum_{i=1}^n \frac{1}{n}a^{{i\over n}}$$ which converges to the integral $\int_0^1a^{x}{\rm d}x$. The difference between the two sums $\tilde{S}_n$ and $S_n$ satisfies $$0 \leq \tilde{S}_n - S_n = \sum_{i=1}^n \frac{a^{{i\over n}}}{n}\frac{1}{1 + ni} \leq \frac{1}{n}\tilde{S}_n$$ and since $\tilde{S}_n$ converges we have $\lim_\limits{n\to\infty} S_n - \tilde{S}_n = 0$ and it follows that $$\lim\limits_{n\to\infty}\sum_{i=1}^n \frac{a^{{i\over n}}}{n + \frac{1}{i}} = \int_0^1 a^x {\rm d}x = \frac{a-1}{\log(a)}$$
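A quick numerical check in Python (sample value $a=2$ chosen for illustration):

import math

a, n = 2.0, 10 ** 6
S_n = sum(a ** (i / n) / (n + 1 / i) for i in range(1, n + 1))
print(S_n, (a - 1) / math.log(a))   # both ~1.442695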
Uniqueness of the parameters of an Ito process, given initial and terminal conditions
The answer is no. Take two bridges of your liking, such as a Brownian bridge (with a $\sigma$ on the Brownian part of the diffusion equation) which starts at $0$ and ends at $0$ at time $t=1$; then take any other bridge with the same start and end point and with the same Brownian parameter in the diffusion. To construct the alternative bridge you can use the $h$-transform (you seem to be a reader of the Rogers and Williams textbook; you can have a look there, where this subject is treated, unless I am mistaken), where you can see that such a transform does not alter the diffusion part, only the drift part, and you are done. Then you have, in the simplest case regarding $f_0$ and $f_1$ (i.e. Dirac masses), a systematic way to produce counterexamples. Regards
Least moving-overlapped subset of [1..n] that has the biggest natural density as possible.
It looks like you are searching for a minimally symmetric subset of a discrete segment, with the group of symmetries consisting of the translations. Such problems, and approaches to solving them, are known (truth be told, they were dealt with by my professor :-) ); see, for instance, the references ([BVV, §1-2, 5], [BP, Sect. 8], and [B, Sect. 6]). References [B] T.Banakh, Symmetry and Colorings: Some Results and Open Problems, II. [BVV] T.Banakh, O.Verbitski, Ya.Vorobets, Ramsey treatment of symmetry // Electronic J. of Combinatorics. 7:1 (2000) R52. – 25p. [BP] T.Banakh, I.Protasov. Symmetry and colorings: some results and open problems // Izv. Gomel Univ. Voprosy Algebry. 2001. Issue 4(17). P.5–16.
What is the value of $i^{i^{i^{\cdots}}}$
Every level of exponentiation is multi-valued. Suppose we always pick the branch $i=e^{i\pi/2}$ in every level. Then we have $$(e^{i\pi/2})^{\alpha+i\beta}=e^{-\pi\beta/2}\left(\cos\frac{\pi\alpha}{2}+i\sin\frac{\pi\alpha}{2}\right)=\alpha+i\beta.$$ Two simultaneous equations for real $\,\alpha\,$ and $\,\beta\,$ are established by comparing the real and imaginary parts. We have $$\left\{\begin{array}{l} e^{-\pi\beta/2}\cos\displaystyle\frac{\pi\alpha}{2}=\alpha,\\ \ \\ e^{-\pi\beta/2}\sin\displaystyle\frac{\pi\alpha}{2}=\beta.\\ \end{array}\right.$$ Therefore $\,\alpha^2+\beta^2=e^{-\pi\beta}\,$ and $\,\frac{\beta}{\alpha}=\tan\frac{\pi\alpha}{2}$ can be found without numerically solving for $\,\alpha\,$ and $\,\beta$. Their numerical values are given by $\,\alpha=0.4383\,$ and $\,\beta=0.3606$.
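Numerically, iterating the principal branch in Python converges to exactly these values (illustration only):

z = 1j
for _ in range(200):
    z = 1j ** z    # Python uses the principal branch, log(i) = i*pi/2
print(z)           # ~ (0.4383 + 0.3606j)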
Doubt in the method of finding the positive integral solutions of a linear equation in two variables.
Yes, as long as your fraction is simplified, such a number exists, and is called a modular inverse. The method that they've used here isn't actually a very good one for solving general Diophantine equations of the form $ax+by=c$; what you want is the Extended Euclidean Algorithm.
find $g$ such that $g\circ f=h$
It is wrong. After you have written the expression for $g(y)$, we should have $$ g(y) = \frac{6(3+y)^{2} + 8(3+y)(y-2) + 11(y-2)^{2}}{25} $$ instead of your expression. On simplifying we get $g(y) = y^{2}+2$, i.e., $$ g(x) = x^{2} + 2$$
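A quick symbolic check with SymPy, if you want to verify the simplification (illustration only):

from sympy import symbols, simplify

y = symbols('y')
g = (6 * (3 + y) ** 2 + 8 * (3 + y) * (y - 2) + 11 * (y - 2) ** 2) / 25
assert simplify(g - (y ** 2 + 2)) == 0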
Stability of equilibrium in diff EQ symbiotic growth model
Since you mentioned looking for an online reference, I suggest the Scholarpedia article. Scholarpedia covers few topics compared to Wikipedia, but it seems to treat ODEs well. In your case, let $(x^*,y^*)$ be the coexistence equilibrium. The Jacobian matrix at the equilibrium is easy to compute: using the product rule, we see that only the vanishing factor needs to be differentiated. Result: $$\begin{pmatrix} -\alpha_1 x^* & \beta_1 x^* \\ \beta_2 y^* & -\alpha_2 y^* \end{pmatrix}$$ You don't actually need to find the eigenvalues of this matrix. In the 2-dimensional case, the trace and determinant tell the story, as on the diagram below (credit: Douglas Hundley). The diagram is for linear systems. For nonlinear systems, such as yours, the upper part of the vertical axis ("center") is the difficult case. In this case the nonlinear term can swing the behavior toward asymptotic stability or instability, or keep it neutrally stable, as in the classical Lotka-Volterra example.
Finding the equations of surfaces of revolution
For revolution of the curve $$\text{i) }x^2 - y^2 + 1 = 0 \text{ about the y-axis}$$ you may express $$x(y)=\sqrt{y^2-1}$$ then $$X(y,\phi)=\sqrt{y^2-1}\cos\phi$$ $$Y(y,\phi)=\sqrt{y^2-1}\sin\phi$$ will be the equations for the surface formed by revolution. Notice that $X(y,0)=x(y)$. For revolution of the curve $$\text{ii) }x^2 - 2y^2 + 2a^2 = 0 \text{ about the x-axis}$$ you may express $$y(x)=\sqrt{a^2+x^2/2}$$ then $$X(x,\phi)=\sqrt{a^2+x^2/2}\sin\phi$$ $$Y(x,\phi)=\sqrt{a^2+x^2/2}\cos\phi$$ will be the equations for the surface formed by revolution. Notice that $Y(x,0)=y(x)$.
Re-write $\log n^{\log n}$ as $n^{\log(\log n)}$
Note that, by definition, $a^b=e^{b\ln a}$, so you get $$\left(\ln x\right)^{\ln x}=e^{\ln x \ln\ln x}=\left(e^{\ln x}\right)^{\ln\ln x}=x^{\ln\ln x}.$$
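A quick numeric spot check in Python (illustration, with an arbitrary sample value):

import math

x = 50.0
lhs = math.log(x) ** math.log(x)
rhs = x ** math.log(math.log(x))
assert math.isclose(lhs, rhs)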
Show that $A$ is closed w.r.t. the upper limit topology
I don't quite agree with your $A$. Recall that $\sin(x)$ is periodic... But as $f(x) = 4\sin^2(x)$ is continuous from the reals in $\tau_1$ to itself, and $C = (-\infty,1]$ is closed in the reals, $f^{-1}[C] = \{x \in \mathbb{R}: 4\sin^2(x) \le 1 \}$ is closed, and singletons are closed as well, so $A$, as a union of these, is closed in $\tau_1$. This follows without even computing exactly what $A$ is. And as $\tau_1 \subset \tau_2$ (since $(x,y) = \bigcup_{z \in (x,y)} [z,y)$), we know that any set open in $\tau_1$ will be open in $\tau_2$, and the same for closed sets as well.
Finding conditions for an inequality, positive semi definite
Ok, actually I had a brilliant idea. Instead of looking at the sequence, I looked at the derived matrix: proving the sequence is positive definite is equivalent to proving the associated Toeplitz matrix is positive definite. In other words, I want to prove that $M$ is positive semidefinite where: $$ M = \begin{bmatrix} 2+ \psi^2 & -\psi \\ -\psi & 2+\psi^2 \end{bmatrix} $$ I computed the eigenvalues with the characteristic polynomial, and in fact, there are no real roots, thus the matrix is positive definite. Then, this results in my original sequence being positive definite $\forall \psi$! EDIT Of course there are real roots, because the matrix is symmetric (spectral theorem). I made a small mistake computing the determinant (the formula is $ad-bc$) and then I found these two roots: $$ r_1 = \psi^2 + \psi + 2, \qquad r_2 = \psi^2 - \psi + 2 $$ which, as polynomials in $\psi$, have no real roots. So the eigenvalues never become negative, and the matrix has $2$ positive eigenvalues.
How to solve for $x$ in the equation $x\log(x)=A$?
Not in terms of elementary functions. The result is $$x = e^{W(A)}$$ where $W$ is the product logarithm function.
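SciPy has the product logarithm; a minimal numeric check, assuming natural log and a sample value $A$:

import numpy as np
from scipy.special import lambertw

A = 5.0
x = np.exp(lambertw(A)).real   # x = e^{W(A)}; log taken as natural log
assert abs(x * np.log(x) - A) < 1e-9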
How to find the vertices angle after rotation
When a point $(x,y)$ is rotated about the origin $(0,0)$ counterclockwise by an angle $\theta$, the coordinates of the new point $(x',y')$ are $$\begin{align} x'&=x\cos(\theta)-y\sin(\theta), \\ y'&=x\sin(\theta)+y\cos(\theta).\end{align}$$ Thus, when we rotate a point $(x,y)$ about another point $(p,q)$ counterclockwise by an angle $\theta$, we can compute the new point's coordinates by translating the entire plane so that $(p,q)$ goes to the origin, performing the rotation, and then translating the entire plane back. To translate $(p,q)$ to the origin, we subtract $p$ from $x$-coordinates and $q$ from $y$-coordinates, and to undo the operation we add instead of subtract. Thus, for example, after translating $(p,q)$ to the origin, the coordinates $(x,y)$ of our point have become $(x-p,y-q)$. Therefore, the new point's coordinates are $$\begin{align} x'&=(x-p)\cos(\theta)-(y-q)\sin(\theta)+p, \\ y'&=(x-p)\sin(\theta)+(y-q)\cos(\theta)+q.\end{align}$$ In your particular case, we can now see that the coordinates of the points $a$, $b$, $c$, and $d$ after rotation are $$\begin{align} a'&=((1-3)\cos(15)-(5-3)\sin(15)+3,(1-3)\sin(15)+(5-3)\cos(15)+3)\\\\ &=\left((-2)\left(\tfrac{1+\sqrt{3}}{2\sqrt{2}}\right)-(2)\left(\tfrac{-1+\sqrt{3}}{2\sqrt{2}}\right)+3,(-2)\left(\tfrac{-1+\sqrt{3}}{2\sqrt{2}}\right)+(2)\left(\tfrac{1+\sqrt{3}}{2\sqrt{2}}\right)+3\right)\\\\ &=(3-\sqrt{6},3+\sqrt{2})\\\\ &\approx(0.55051,4.41421)\\\\\\ b'&=((5-3)\cos(15)-(5-3)\sin(15)+3,(5-3)\sin(15)+(5-3)\cos(15)+3)\\\\ &=\left((2)\left(\tfrac{1+\sqrt{3}}{2\sqrt{2}}\right)-(2)\left(\tfrac{-1+\sqrt{3}}{2\sqrt{2}}\right)+3,(2)\left(\tfrac{-1+\sqrt{3}}{2\sqrt{2}}\right)+(2)\left(\tfrac{1+\sqrt{3}}{2\sqrt{2}}\right)+3\right)\\\\ &=(3+\sqrt{2},3+\sqrt{6})\\\\ &\approx(4.41421,5.44949)\\\\\\ c'&=((1-3)\cos(15)-(1-3)\sin(15)+3,(1-3)\sin(15)+(1-3)\cos(15)+3)\\\\ &=\left((-2)\left(\tfrac{1+\sqrt{3}}{2\sqrt{2}}\right)-(-2)\left(\tfrac{-1+\sqrt{3}}{2\sqrt{2}}\right)+3,(-2)\left(\tfrac{-1+\sqrt{3}}{2\sqrt{2}}\right)+(-2)\left(\tfrac{1+\sqrt{3}}{2\sqrt{2}}\right)+3\right)\\\\ &=(3-\sqrt{2},3-\sqrt{6})\\\\ &\approx(1.58579,0.55051)\\\\\\ d'&=((5-3)\cos(15)-(1-3)\sin(15)+3,(5-3)\sin(15)+(1-3)\cos(15)+3)\\\\ &=\left((2)\left(\tfrac{1+\sqrt{3}}{2\sqrt{2}}\right)-(-2)\left(\tfrac{-1+\sqrt{3}}{2\sqrt{2}}\right)+3,(2)\left(\tfrac{-1+\sqrt{3}}{2\sqrt{2}}\right)+(-2)\left(\tfrac{1+\sqrt{3}}{2\sqrt{2}}\right)+3\right)\\\\ &=(3+\sqrt{6},3-\sqrt{2})\\\\ &\approx(5.44949,1.58579)\\\\\\ \end{align}$$ Plotting these points in Mathematica demonstrates visually that our calculations were correct: ListPlot[{{3 - Sqrt[6], 3 + Sqrt[2]}, {3 + Sqrt[2], 3 + Sqrt[6]}, {3 - Sqrt[2], 3 - Sqrt[6]}, {3 + Sqrt[6], 3 - Sqrt[2]}}, AspectRatio -> 1, AxesOrigin -> {0, 0}, PlotMarkers -> {Automatic, Medium}, PlotStyle -> Blue]
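The same check can also be scripted in Python; here is a small sketch (the helper function rotate is mine, not from the original answer):

import math

def rotate(point, center, degrees):
    # rotate `point` counterclockwise about `center` by `degrees`
    t = math.radians(degrees)
    x, y = point[0] - center[0], point[1] - center[1]
    return (x * math.cos(t) - y * math.sin(t) + center[0],
            x * math.sin(t) + y * math.cos(t) + center[1])

print(rotate((1, 5), (3, 3), 15))   # ~(0.55051, 4.41421), matching a'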
Induced map from a complex of modules
We have a sequence $$0\to\ker(g)\hookrightarrow B\xrightarrow{g}\text{im}(g)\to 0$$ which is clearly exact. Furthermore, $\text{im}(f)\subseteq\ker(g)\subseteq B$ and $\text{im}(f)$ is mapped to $0$ by $g$ since $gf=0$, so we have an induced surjective map $\text{coker}(f)=B/\text{im}(f)\xrightarrow{\bar{g}}\text{im}(g)$. The map $\ker(g)\to\text{coker}(f)$ (which is the composition of the inclusion map $\ker(g)\hookrightarrow B$ and the projection map $B\to B/\text{im}(f)$) has kernel $\text{im}(f)$, so we have the induced injective map $H(B)=\ker(g)/\text{im}(f)\to\text{coker}(f)$. Then, by the exactness of the first sequence, we conclude that $$0\to H(B)\to\text{coker}(f)\xrightarrow{\bar{g}} \text{im}(g)\to 0$$ is exact.
Funny graph of $x!$ by a graphing program
You are right that, by the traditional definition of $x!$, it is only defined at the integers. However, we can analytically continue the factorial function to all real numbers (except, as Peter notes in the comments, for all negative integers and $0$). This continuation (along with a shift) is known as the Gamma Function. However, note that there are other continuations of the factorial function. The one pictured here is the Gamma Function. Here are some other continuations which interpolate $x!$.
Condition for line intersecting cuboid
This is a picking problem: try to intersect $AB$ with every face of your cuboid. (But how can two points define a cuboid? Nevertheless, let's say $C$, $D$ and $E$ span one parallelogram face of your cuboid.) Then try to solve the equation $$A+\lambda (B-A)=C+\alpha (D-C)+\beta (E-C)$$ where $A$, $B$, $C$, $D$ and $E$ are vectors. As far as I know, if this equation has a solution and the Greek parameters are between $0$ and $1$, then $AB$ intersects this face.
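A minimal numeric sketch in Python (the corner coordinates are hypothetical sample data):

import numpy as np

# hypothetical data: segment AB and a face spanned by C, D, E
A, B = np.array([0., 0., -1.]), np.array([.5, .5, 2.])
C, D, E = np.array([0., 0., 0.]), np.array([1., 0., 0.]), np.array([0., 1., 0.])

# A + lambda (B - A) = C + alpha (D - C) + beta (E - C), a 3x3 linear system
M = np.column_stack((B - A, -(D - C), -(E - C)))
lam, alpha, beta = np.linalg.solve(M, C - A)
print(lam, alpha, beta)   # 1/3, 1/6, 1/6 -- all in [0, 1], so AB hits this face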
Intuition behind the Idealization Axiom of Internal Set Theory
I think it's a bit worse than that. The idealization axiom is equivalent to the existence of a finite set containing all standard sets AND plenty of standard sets are infinite, for example, $\mathbb N$, $\mathbb Q$, $\mathbb R$, etc. All the classically constructed sets that are infinite are still infinite in IST. This axiom is something I have also been thinking about. It seems to receive only brief discussion in the literature that I have read; it's the sort of thing that, if you started a presentation with that sentence, folks would certainly want more information. The finite set (call it $A$) containing all standard sets cannot itself be internal, because it uses the predicate standard. So it must be an external set. I know that IST isn't able to access external sets (though I can't make that precise) very well. Given that we measure whether a set is finite or infinite with functions, I wonder if the issue is whether the function exists. That would require looking at subsets of $A\times A$, but that product isn't necessarily guaranteed to exist, because transfer only gives us products of standard sets. So can we even construct an injection from $A$ to $A$ in order to determine that it is necessarily surjective? EDIT: I'm working on these ideas too and so have been thinking about this some more. Within IST the 'usual' natural numbers or 'usual reals' aren't sets, and so we might relax our surprise that there is a finite set containing them. Cardinality, I read somewhere, is necessarily an external concept because it depends on the sets available in the model. Additionally, I wonder if there is some equivocation in the use of "finite", which in our everyday language doesn't necessarily match up with the mathematical definition ("not infinite"), which allows for considerable quantities to be finite (the number of books in the Library of Babel by Jorge Borges, or Graham's number), even though these numbers are, I'm not sure how to say it, larger than anything we can reasonably conceptualize.
Unsure about the relationships created by $LCM(1, 2,..., n)$
There is something strange about your graphs. Namely, $LCM(1, \dotsc, n)$ is no smaller than the product of all primes between $2$ and $n,$ which, the prime number theorem tells us, behaves like $e^n.$ Now, because of higher powers of primes, the LCM behaves roughly like $e^n e^{n^{(1/2)}} e^{n^{(1/3)}} \dots \ll \exp((2+ \epsilon) n),$ for any $\epsilon.$ So, what ARE the graphs of? If, as suggested by Aaron, you are dividing by $n!,$ that makes slightly more sense, but then the decay should be much faster than $1/n$.
A cofibration induces a cofibration
You know what a pushout is, and that $X \stackrel{f^*}{\rightarrow} X \cup_f Y \stackrel{j}{\leftarrow}Y$ is a pushout of $X \hookleftarrow A \stackrel{f}{\rightarrow} Y$. Note that I avoided considering $Y$ as a genuine subspace of $X \cup_f Y$. Of course $j$ is an embedding, but we do not need a separate proof, since we shall show that it is a cofibration, and all cofibrations are known to be embeddings. Let us more generally start with any cofibration $i : A \to X$. Let $X \stackrel{f^*}{\rightarrow} Z \stackrel{j}{\leftarrow} Y$ be the pushout of $X \stackrel{i}{\leftarrow} A \stackrel{f}{\rightarrow} Y$. Here are some extended hints about what you have to do. 1) Show that $X \times I \stackrel{f^* \times id_I}{\rightarrow} Z \times I \stackrel{j \times id_I}{\leftarrow} Y \times I$ is the pushout of $X \times I \stackrel{i \times id_I}{\leftarrow} A \times I \stackrel{f \times id_I}{\rightarrow} Y \times I$. You will need the exponential law for function spaces endowed with the compact-open topology. This allows us to identify any continuous map $u : V \times I \to W$ with the continuous map $u^* : V \to W^I, u^*(v)(t) = u(v,t)$. 2) Given a map $\phi : Z \to W$ and a homotopy $h : Y \times I \to W$ such that $h_0 = \phi j$, we have to find a homotopy $H : Z \times I \to W$ such that $H_0 = \phi$ and $H (j \times id_I) = h$. Consider the map $\psi = \phi f^* : X \to W$ and the homotopy $k = h (f \times id_I) : A \times I \to W$. We have $k_0 = h_0 f = \phi j f = \phi f^* i = \psi i$, and we use the fact that $i$ is a cofibration to find a homotopy $K : X \times I \to W$ such that $K_0 = \psi$ and $K (i \times id_I) = k$. Therefore $h (f \times id_I) = K (i \times id_I)$ and we use 1) to find $H : Z \times I \to W$ such that $H (f^* \times id_I) = K$ and $H (j \times id_I) = h$. Now check that $H$ satisfies $H_0 = \phi$. To do so, use the universal property of the pushout diagram based on $X \stackrel{i}{\leftarrow} A \stackrel{f}{\rightarrow} Y$: We have $\psi i = \phi f^* i = \phi j f$, hence there exists a unique $\chi : Z \to W$ such that $\chi f^* = \psi$ and $\chi j = \phi j$. But both $H_0, \phi$ have this property.
Transformations of the form $f(ax+b)$
We have $$ f(ax+b)=f(a(x+\tfrac ba)) $$ corresponding to a stretch by a factor $\frac 1a$ followed by a horizontal translation by $-\frac ba$. You simply invert the operations, so multiplication by $a$ becomes a stretch by $\frac 1a$, and adding $\frac ba$ becomes a shift by $-\frac ba$. So in steps: Given $f(ax+b)$ write down $a,\frac ba$. Invert both in turn to get $\frac 1a,-\frac ba$. Conclude that the graph of $f$ has been stretched by a factor $\frac 1a$ followed by a horizontal shift by $-\frac ba$.
How to simplify following binomial expansion?
Hint: Use $$\binom{2n+1}{0}+\binom{2n+1}{1}+\binom{2n+1}{2}+\binom{2n+1}{3}+...+\binom{2n+1}{2n+1}=2^{2n+1},$$ which follows from the formula for $(a+b)^{2n+1}$ by setting $a=b=1$, and the symmetry property $$\binom{2n+1}{k}=\binom{2n+1}{2n+1-k}$$ for $k=0,...,2n+1$.
Probability of picking an item x from a pool within n tries, where we remove non-x items from the pool on each try?
Here is another way to look at it: Instead of tossing out the red balls and instead of stopping when you reach the blue ball, suppose as you pulled the balls out, you simply put them in a row. You will wind up with a row of balls where 99 of them are red and 1 is blue. Your problem is equivalent to the blue ball being among the first ten balls. There is a $\dfrac{1}{100}$ chance that a particular slot will contain a blue ball. It is impossible for two slots to contain the blue ball, so the probabilities are disjoint and therefore additive. Thus, as @DanielMathias said, the probability that the blue ball resides among the first ten balls drawn is $\dfrac{10}{100}$.
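A quick Monte Carlo check in Python, using the row-of-balls picture (illustration only):

import random

trials = 100_000
hits = sum(random.randrange(100) < 10 for _ in range(trials))
print(hits / trials)   # ~0.10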
Sum of terms in a geometric progression
The two formulae are the same with sign reversed above and below. Maybe they used the second formula when $|r|<1$ because then it is easier to see $$\lim_{n\to\infty}r^n = 0 \implies \lim_{n\to\infty}\frac{a(1-r^n)}{1-r}=\frac a{1-r}$$
I need to determine the values of $a$ and $b$ such that $a-b=33333$ with using $9$ cards ($i$-th card represents $i$).
The sum of the digits of a number is equivalent to that number modulo $9$ (you can prove this by writing a number generally as $a_n \cdot 10^n + a_{n-1}\cdot 10^{n-1}+\dots + a_0$, where $a_i\in \{0,1,\dots, 9\}$, and noting that $10^j\equiv 1\pmod 9$). Hence $p\equiv a\pmod 9\wedge q\equiv b\pmod 9\Rightarrow p-q\equiv a-b\pmod 9.$ But $a-b \equiv 3\times 5\equiv 6\pmod 9$, as $3\times 5=15$ is the sum of the digits of $33333$.
Is there a factorial of the form $p m^2$ greater than 720?
There is another solution: $$10!=7\times 720^2$$ This is almost surely the largest one. This PARI/GP program

? for(n=0,2000,if(isprime(core(n!),2)==1,print(n)))
2
6
10

shows that the only solutions up to $n=2000$ are $2,6,10$. To prove the conjecture, it would be enough to show that for every even number $n\ge 2000$, there are at least two primes $p,q$ with $\frac{n}{2}<p<q<n$.
definition of open sets in metric spaces
The set of points in the open ball $B_\epsilon(u)$ depends on the ambient space $X$. That is, if we considered $Y \subset X$ with $U \subset Y$, then it is not necessary that $B_{\epsilon,X}(u) = B_{\epsilon,Y}(u)$. For example, if $Y = U$, then $B_{\epsilon,Y}(u)$ is by definition contained in $U$.
A,B nonempty finite sets f is a bijection from A to B. Prove if $g:A\rightarrow B$ is injective then it's surjective. Is this true for infinite sets?
Your reasoning that if $g$ is injective then it's surjective is intuitively fine, but formally it's a bit lacking, since the (essential) assumption of finiteness isn't used anywhere in the proof. This is a red flag, because you need $A$ and $B$ to be finite for this to be true! You can piece together a more rigorous proof using the following two lemmas: Lemma 1. Let $A$ and $B$ be sets and let $g : A \to B$ be an injection. Then $|A| = |g[A]|$, where $g[A]$ is the image of $g$. Lemma 2. Let $B$ be a finite set and let $V \subseteq B$. If $|V| = |B|$, then $V=B$. These lemmas together imply that $g$ is a bijection: Lemma 1 implies that $|g[A]| = |B|$, since $|A| = |g[A]|$ and there is a bijection $A \to B$. Lemma 2 is where the assumption of finiteness is used: setting $V = g[A]$ tells you that $g[A]=B$, which is equivalent to saying that $g$ is surjective. Now, of course, you need to worry about whether you can use Lemmas 1 and 2 in your proof. Lemma 1 has an easy proof, but Lemma 2 is more fiddly: you can prove it by induction on $|B|$. P.S. As drhab mentioned in the comments, this does indeed fail when $A$ and $B$ are not assumed to be (Dedekind-)finite.
Polarity of lines in $\mathbb{RP}^3$
Let us set aside the fact that we are in projective space $\mathbb{RP}^3$, because the issue can be set in ordinary Euclidean space $\mathbb{R}^3$. The point where I do not agree with you is this: the polar line of a given line is not, in general, orthogonal to the initial line. Let us look at an example. Take for our quadric the ellipsoid with equation: $$x^2+4y^2+z^2=1$$ Besides, let us take $\ell$ with parametric equations: $$\begin{cases}x=2t+1\\y=t\\z=3t-1\end{cases}$$ The polar line $\ell^{\circ}$ of $\ell$ is defined as the set of points $(x,y,z)$ such that: $$\forall t \in \mathbb{R}, \ \ \ \ (2t+1)x+4ty+(3t-1)z-1=0.$$ Collecting constant terms and coefficients of $t$, we get the equivalent system: $$\begin{cases}x-z-1=0\\ 2x+4y+3z=0\end{cases}$$ whose solution is the straight line $\ell^{\circ}$ with equations: $$\begin{cases}x=z+1\\y=\dfrac{-5z-2}{4}\end{cases}$$ (with $z$ as the parameter). In particular, a directing vector of this straight line $\ell^{\circ}$ is $(1,-5/4,1)$, which is not orthogonal in the Euclidean sense to the directing vector $(2,1,3)$ of $\ell.$ Nevertheless, these two vectors can be considered as orthogonal with respect to the quadratic form induced by the ellipsoid: $$\begin{pmatrix}1 & -5/4 & 1\end{pmatrix}\begin{pmatrix}1 & 0 & 0 \\ 0 & 4 & 0\\0& 0& 1\end{pmatrix}\begin{pmatrix}2\\1\\3\end{pmatrix}=0.$$
About convergence of $(T_nR_n)$ when $(T_n),(R_n) \subset B(X)$
Two hints for (a): Write $T_n R_n x - TRx = T_n R_n x - T_n R x + T_n R x - TRx$. Think about the triangle inequality. (This is a useful technique whenever you have a situation where two different things are converging: try to split it up so you can handle one at a time.) By the uniform boundedness principle, $\sup_n \|T_n\| < \infty$. Question (b) can be solved with similar tricks.
Integration by parts on manifolds with Hessian
For those who encounter the same problem, here is the solution: Proposition 6.2 (Integration by parts formula), page 27 of https://arxiv.org/abs/1505.04817v1
Persistent Markov Chain with infinite mean recurrence time
While I think that studying the simple symmetric random walk is the standard example here, I'll explicitly construct a different one. Consider $P:=\begin{pmatrix}p_{1,1} & p_{2,2} & p_{3,3} & p_{4,4} & p_{5,5} & p_{6,6} & \dots \\ 1 & 0 & 0 & 0 & 0 & 0 & \dots\\ 0 & 1 & 0 & 0 & 0 & 0 & \dots\\ 0 & 0 & 1 & 0 & 0 & 0 & \dots\\ 0 & 0 & 0 & 1 & 0 & 0 & \dots\\ \vdots & \vdots & \vdots & \vdots & \ddots & \ddots & \ddots\\ \end{pmatrix}$ This chain describes the Residual Life of an integer-valued Renewal Process and sometimes goes by names like the Residual Waiting Time chain. (The original post uses the term 'Persistent' -- the standard term, I think, is Recurrent, but it reminds me that Feller uses the term Persistent instead of Recurrent, so in case the OP uses Feller as a reference, this chain shows up in Feller vol. 1, 3rd edition, page 381, and elsewhere subsequently in that chapter.) Given its simple structure, we can easily construct a case where this chain is null recurrent. 1.) For simplicity we'll require each $p_{i,i}\gt 0$; by inspection, there is a single communicating class -- i.e. state 1 may reach any state with positive probability, and state $j\geq 2$ may reach state 1 with positive probability (in fact it reaches state 1 deterministically in exactly $j-1$ iterations). 2.) To verify recurrence, we need to ensure that, given a start in state one, we return with probability 1. 3.) To be null recurrent, we need to ensure that, given a start in state one, the expected time until revisiting state one is $\infty$. The finish: set $p_{i,i}:=\frac{1}{i(i+1)}$, noting that $\sum_{i=1}^{n-1}\frac{1}{i(i+1)}=1-\frac{1}{n}$ (if you don't know this result, it's a simple and worthwhile telescoping exercise). Thus $\sum_{i=1}^{\infty}\frac{1}{i(i+1)} = 1$, so the chain is recurrent, but $\sum_{i=1}^{\infty}i\cdot\frac{1}{i(i+1)} = \sum_{i=1}^{\infty}\frac{1}{i+1} = \infty$ because the harmonic series is divergent. Thus this is an example of a null recurrent chain.
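If you want to see this numerically, here is a small simulation sketch in Python (the sampler is mine, written directly from the jump probabilities above):

import random

def return_time():
    # from state 1 the chain jumps to j with probability 1/(j(j+1)),
    # then walks down deterministically, so the return time to 1 equals j
    u, j, acc = random.random(), 1, 0.5
    while u > acc:
        j += 1
        acc += 1.0 / (j * (j + 1))
    return j

samples = [return_time() for _ in range(100_000)]
print(sum(samples) / len(samples))   # finite for any run, but grows as you sample more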
Geometric and algebraic definitions of the dot product , proof of equivalence?
The proof assumes that the dot product is linear, which is not trivial to prove without the standard algebraic definition. The more straightforward proof would be as follows: Create a triangle with the two vectors $a$ and $b$ so that the third side is $a-b$. Define the dot product as $a\cdot b=a_1b_1+a_2b_2$. Then note that $$||x||^2=x_1^2+x_2^2=(x_1,x_2)\cdot (x_1,x_2)$$ so the magnitude squared of a vector equals the vector dotted with itself. Then by the law of cosines, letting $\theta$ denote the angle between $a$ and $b$ and recalling that $a-b$ is the side opposite $\theta$ we get $$||a-b||^2=||a||^2+||b||^2-2||a||\,||b||\cos\theta$$ Using the magnitude squared/dot product relationship above gives $$(a-b)\cdot (a-b)=a\cdot a+b\cdot b-2||a||\,||b||\cos\theta$$ Clearly the dot product is linear and symmetric by our algebraic definition, so the left side can be re-written as $$a\cdot a+b\cdot b-2a\cdot b=a\cdot a+b\cdot b-2||a||\,||b||\cos\theta$$ from which it follows that $$a\cdot b=||a||\,||b||\cos\theta$$
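A quick numeric spot check of the final identity in Python (sample vectors chosen arbitrarily):

import math

a, b = (3.0, 4.0), (1.0, 2.0)
dot = a[0] * b[0] + a[1] * b[1]
theta = math.atan2(a[1], a[0]) - math.atan2(b[1], b[0])
assert math.isclose(dot, math.hypot(*a) * math.hypot(*b) * math.cos(theta))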
How many fruit are there after a night on an alien planet
It's simply like the monkeys and the coconuts problem. Here's a link to how to solve a problem like this: https://www.youtube.com/watch?v=U9qU20VmvaU I think it might help you (:
Prove that there is insufficient information, with each side length known, to calculate the area of an irregular polygon that has more than three sides
The easiest way is probably proof by counterexample: find two irregular polygons (quadrilaterals, say) with an equal number of sides of equal lengths, and show that they do not have the same area. For instance, a $1\times 2$ rectangle has area $2$, while a parallelogram with the same side lengths and an interior angle $\theta$ has area $2\sin\theta$, which can be made as small as you like.
Roots of Unity - $x^3 = -i$
$x^3 + i = 0 \to x^3 - i^3 = 0 \to (x-i)(x^2 + xi - 1) = 0$. Can you use the quadratic formula to complete the answer?
Solving equation with $\cos$ and $\sin$
$f(x)=0$ means $3 \cos x = 9 \sin x$. Now if $\cos x=0$, then $\sin x=1$ or $-1$, so values of $x$ where $\cos x=0$ are not solutions to the equation. Dividing by $9 \cos x$ on both sides gives $\tan x = 1/3$. Now you can compute $\arctan(1/3)$ with a calculator, and $n\pi + \arctan(1/3)$, where $n$ is an integer, is the complete set of solutions.
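A one-line check in Python (illustration only):

import math

x = math.atan(1 / 3)
assert abs(3 * math.cos(x) - 9 * math.sin(x)) < 1e-12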
Proving $\mathrm{Aut}(S_{n}) = S_{n}$ for $n > 6$
The centralizer of an element of order $2$ that consists of $k$ transpositions can only permute the $n-2k$ other points arbitrarily, permute the transpositions, and swap the elements within a transposition. We find a short exact sequence $$ 1\to S_2^k\to C\to S_{n-2k}\times S_k\to 1$$ and in particular $|C|=(n-2k)!\,k!\,2^k$. The size of the centralizer is invariant under automorphism, hence a transposition must be mapped to an element of order $2$ with $k$ transpositions where $$ 2\cdot(n-2)!=2^k\,k!\,(n-2k)!,$$ or $$ 2^{k-1}=\frac{(n-2)!}{k!\,(n-2k)!}={n-k\choose k}\frac{(n-2)!}{(n-k)!}.$$ If $k\ge 4$, the right hand side is a multiple of $(n-2)(n-3)$, hence has a nontrivial odd factor, contradiction. If $k=3$, the right hand side has a factor $n-2$, which must be a power of $2$; in particular $n$ is even. Also, ${n-k\choose k}=\frac{(n-k)(n-k-1)(n-k-2)}{6}$ must be a power of $2$. But $n-k$ and $n-k-2$ are odd (and $>6-5=1$) and at most one can cancel against the $3$ in the denominator, contradiction. If $k=2$, we arrive at $2={n-2\choose 2}$, or $4=(n-2)(n-3)>4\cdot 3$, contradiction. We conclude that $k=1$ as desired. By the above, there exist $u_i,v_i$ with $(1\,i)\mapsto (u_i\,v_i)$. As $(1\,i)(1\,j)\ne(1\,j)(1\,i)$ for $i\ne j$, we conclude that $(u_i\,v_i)$ and $(u_j\,v_j)$ cannot be disjoint. Let $a$ be the unique element of $\{u_2,v_2\}\cap \{u_3,v_3\}$ and $b_2,b_3$ the other elements. Now at least $(1\,2)\mapsto (a\,b_2)$ and $(1\,3)\mapsto (a\,b_3)$. By the same argument for $i>3$, $\{u_i,v_i\}\cap\{a,b_2\}$ and $\{u_i,v_i\}\cap\{a,b_3\}$ must be non-empty. Either $a\in\{u_i,v_i\}$ and we are done, or $(1\,i)\mapsto (b_2\,b_3)$. The latter can be the case only once, hence $(1\,i)\mapsto (a\,b_i)$ holds for at least $n-2$ values of $i$. The one possibly remaining $\{u_i,v_i\}$ must now intersect all the other $\{a,b_j\}$ -- and this is only possible if $a\in\{u_i,v_i\}$, as desired. An automorphism of $S_n$ is uniquely determined by the images of the $(1\,i)$. As just seen, these again are given by the choice of $a$ (the unique element common to all image transpositions) and $b_2,\ldots, b_n$, all different and different from $a$ -- so ultimately there are at most $n!$ choices for the images of the $(1\,i)$. On the other hand, for any permutation $a,b_2,b_3,\ldots, b_n$ of $1,2,\ldots, n$, the inner automorphism given by this permutation maps $(1\,i)\mapsto (a\,b_i)$. This gives us at least $n!$ automorphisms, hence there are exactly $n!$ automorphisms and $S_n\to\operatorname{Inn}(S_n)\to \operatorname{Aut}(S_n)$ is an isomorphism.
Expressing $e^z$ where $z=a+bi$ in polar form.
Let's do it piece by piece. In general, we know that the following is a property of the exponential function: $$ e^{x + y} = e^x \cdot e^y $$ Replace $x$ with $a$ and $y$ with $b\,i$ to get: $$ e^{a + b i} = e^a \cdot e^{b i} $$ Now for $e^{b\,i}$, use Euler's formula which states that: $$ e^{b i} = \cos b + i\sin b $$ Apply this formula to get the result: $$ e^{a + b i} = e^a (\cos b + i\sin b) $$
Simple trig question.
EDIT: Base angle. The base angle of your triangle is $\theta$, and what you know is that $\cos(\theta)=\frac{4}{6}=\frac{2}{3}$. So, to get $\theta$, use an inverse function: $\theta=\cos^{-1}(\frac{2}{3})$. Be careful to make sure your calculator is in degrees, not radians. Other angles: the same procedure can be used on the other angles. In the example you gave, you had $\cos(\theta)=\frac{\sqrt{20}}{6}$, so $\theta=\cos^{-1}(\frac{\sqrt{20}}{6})$. Note, this angle can also be obtained by noting that the sum of the angles in any triangle is $180$ degrees; you already have a base angle (count it twice, since the two base angles are the same), so you can deduce the third angle (in your isosceles triangle). The angle in the right triangle is half this angle.
Is the sum of a bounded unimodal function and a bounded concave increasing function still unimodal or concave increasing?
Try $f(x) = \frac{1}{\sqrt{2\pi}} \mathrm{e}^{-(x-1)^2/2}$, the PDF of the normal distribution with mean $1$ and variance $1$, which is unimodal, infinitely differentiable, and bounded and $g(x) = \arctan(x)$, which is infinitely differentiable, bounded, concave (on $[0,\infty)$, which seems to be the intention, based on your graph) and increasing. Their sum is bounded and infinitely differentiable, but is not increasing or concave. Here $c_1 = 0$ and $c_2 = \pi/2$. (Put the peak of the unimodal function where the bounded function is still small.) With more effort than I'm willing to expend, we can find scalars $a,b$ such that $af(x) + bg(x)$ has a local maximum near $x = 3/2$ whose height is the limit $\lim_{x \rightarrow \infty} af(x) + bg(x)$, defeating unimodality. The above has $a = b =1$. The below has $a = 2$, $b=1$, so there is an $a$ in $(1,2)$ that produces this outcome with $b = 1$. ($c_1$ and $c_2$ are unchanged.)
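A quick numerical check of the two cases in Python (illustration; the grid and cutoff are arbitrary):

import numpy as np

x = np.linspace(0, 60, 600_001)
f = np.exp(-(x - 1) ** 2 / 2) / np.sqrt(2 * np.pi)
g = np.arctan(x)

for a in (1.0, 2.0):
    h = a * f + g
    # a=1: the maximum stays below the limit pi/2; a=2: the interior
    # peak near x=1 exceeds it, so some a in (1,2) matches the two
    print(a, h.max(), np.pi / 2)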
Why is the conditional probability of events following a Poisson process normally distributed?
As pointed out in the comments, the conditional distribution is not normal, but binomial with $n=100$ and $p=\frac 4{10}$. Maybe this is less surprising if you know the Poisson limit theorem: https://en.wikipedia.org/wiki/Poisson_limit_theorem For a moment, think of the Poisson distribution as a binomial distribution: then each of the 100 particles (which arrived by time 10) has an equal chance of arriving at any of the times $1,2,\dots,10$. Then the expression for the conditional probability is less surprising. Hope that this was helpful.
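A quick simulation sketch in Python, using the standard fact that, conditionally on $N(10)=100$, the arrival times are i.i.d. uniform on $[0,10]$ (illustration only):

import random

counts = [sum(random.uniform(0, 10) <= 4 for _ in range(100))
          for _ in range(200_000)]
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
print(mean, var)   # ~40 and ~24, the mean and variance of Binomial(100, 0.4)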
Optimization benchmarks?
The CUTEr collection (https://magi-trac-svn.mathappl.polymtl.ca/Trac/cuter/wiki) bundles over 1000 optimization problems, including the Hock and Schittkowski collection, the Maros and Meszaros collection, and more. Problems are modeled in the (a bit outdated) SIF modeling language. The main reference is http://dl.acm.org/citation.cfm?id=962439

EDIT: CUTEst, the updated version of CUTEr, is available from https://ccpforge.cse.rl.ac.uk/gf/project/cutest/wiki. The main reference is https://doi.org/10.1007/s10589-014-9687-3.

The collection is also available in the AMPL modeling language (along with lots of other problems): http://www.orfe.princeton.edu/~rvdb/ampl/nlmodels (see also https://github.com/mpf/Optimization-Test-Problems).

The COPS collection has discretized control problems: http://www.mcs.anl.gov/~more/cops/ On the same page, you will read about performance profiles, which are a standard tool to compare several algorithms on a given problem collection.

You may also enjoy Performance World: http://www.gamsworld.org/performance

Hans Mittelmann maintains benchmark results for all sorts of optimization problems and solvers: http://plato.asu.edu/bench.html

Jorge Moré has a website on benchmarking derivative-free optimization codes: http://www.mcs.anl.gov/~more/dfo/
Stochastic integral for local martingale
Since $$\mathbb{P} \left( \int_0^T X_t^2 \, d\langle M \rangle_t < \infty \right)=1$$ for all $T \geq 0$, we have $$\mathbb{P} \left( \forall n: \int_0^n X_t^2 \, d\langle M \rangle_t < \infty \right)=1.$$ Consequently, it suffices to prove $\lim_{n \to \infty} R_n(\omega)=\infty$ for all $$\omega \in \Omega_0 :=\left\{\forall n: \int_0^n X_t^2 \, d\langle M \rangle_t < \infty\right\}.$$ Pick $\omega \in \Omega_0$ and $n \in \mathbb{N}$. Then we can choose $m=m(n,\omega) \in \mathbb{N}$, $m \geq n$, such that $$\int_0^n X_t^2(\omega) \, d\langle M \rangle_t(\omega) \leq m.$$ It follows from the definition of the stopping time and the fact that $m \geq n$ that $R_m(\omega) \geq n$. Since the sequence is increasing, we have shown that for any $n \in \mathbb{N}$ there exists $m_0 \in \mathbb{N}$ such that for all $m \geq m_0$: $$R_m(\omega) \geq n.$$ Hence, $\lim_{m \to \infty} R_m(\omega)=\infty$.
Show that the curve $2Y^2 = X^4-17$ has points in every $\mathbb{Q}_p$
Hints: Prove that the curve has genus 1. Find the primes of bad reduction. If $p$ is a prime of good reduction, and large enough, then the Hasse-Weil bounds tell you that there is an affine point (not just at infinity) over $\mathbb{F}_p$. Now use Hensel's lemma to lift it up to $\mathbb{Q}_p$. If $p$ is a prime of bad reduction, or not large enough, you will have to find $\mathbb{Q}_p$ solutions by hand (with the help of Hensel's lemma, again).
Abelian groups of order n
The problem considered in the linked question is not asking for the number of abelian groups of order $10^6$. It's asking you to find $n$ such that the number of abelian groups of order $n$ is exactly $10^6$. To rephrase, the question asks if there is a positive integer $n=p_1^{k_1}\cdots p_m^{k_m}$ such that $$P(k_1)\cdots P(k_m)=10^6,$$ where $P$ is the number of partitions. Because $10^6=2^65^6=P(2)^6P(4)^6$, the required $n$ must have $6+6$ distinct prime factors, 6 of them to the power 2, and the remaining 6 to the power 4. I'm obviously quite far off as the solution is apparently: $n=4,965,978,981,753,783,895,734,117,534,249,000$ Note that this is not the solution; it is a solution (in fact, the minimal one).
Can functions within a matrix adjust its size?
That is mostly a question of notation and convention. You can certainly choose to define that, in your work, the notation $$ A = \begin{bmatrix}X & Y & Z\end{bmatrix} $$ where $X$, $Y$ and $Z$ are column matrices of the same height $n$, will mean that $A$ is the $n\times 3$ matrix that has those three columns. As a matter of terminology, that would mean you're defining $A$ as a block matrix. However, it seems unlikely that this is actually what your problem depends on. There's essentially no mathematical content in asking (effectively) "is it possible to form block matrices" -- the answer to that will only tell you how your notation works, but not anything about the underlying mathematical structure that your notation speaks about. So when you say that your problem "comes down to" whether you can employ that notation or not, it sounds very likely that you have some conceptual confusion or mistake in your work before you reach that point. And you should probably ask another question where you give details of that work and ask whether your approach is legitimate.
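For instance, in NumPy (purely as an illustration of the block-column notation):

import numpy as np

X = np.array([[1.0], [2.0], [3.0]])
Y = np.array([[4.0], [5.0], [6.0]])
Z = np.array([[7.0], [8.0], [9.0]])

A = np.hstack((X, Y, Z))   # block matrix whose columns are X, Y, Z
print(A.shape)             # (3, 3)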
How to call a mathematical space "$(\mathcal S, f)$" consisting of set $\mathcal S$ and function $f : \mathcal{S \times S} \rightarrow \mathbb R$?
I don't think this type of space has its own field of study. It's too general. With just that condition on the function $f$, one could think of a thousand different structures without common features. Just choose any set $S$, define any function $\tilde{f}\colon S\times S\to\mathbb{R}$, and define $f\colon S\times S\to\mathbb{R}$ by $f(x,y)=\tilde{f}(x,y)$ if $x\neq y$ and $f(x,x)=0$. How would you distinguish interesting behaviour for such a space when so many ridiculous candidates could fit the bill? Do you have any other conditions you can impose?
$\frac{\sum^n_{i=0} x_i} {\sum^n_{i=0} x_i^2 } $
It is $$\frac {x_0+x_1+x_2+\dots+x_n}{x_0^2+x_1^2+x_2^2+\dots+x_n^2} $$ The $\sum$ starts at $i=0$.
Sum of Consecutive Integers
Let $N = 2^{\alpha_2} 3^{\alpha_3} 5^{\alpha_5} \ldots q^{\alpha_q}$, where $q$ is the largest prime dividing $N$. We want $N = a + (a+1) + (a+2) + (a+3) + \cdots + b = \frac{(b-a+1)(a+b)}{2}$, where $a,b \in \mathbb{N}$. So we need to find $a$ and $b$ such that $(b-a+1)(a+b) = 2^{(\alpha_2 + 1)} 3^{\alpha_3} 5^{\alpha_5} \ldots q^{\alpha_q}$. Now note that $(a+b)$ and $(b-a+1)$ are of opposite parity (their sum $2b+1$ is odd). So one of them has to be odd. Further, $(a+b)>(b-a+1)$ since $a \in \mathbb{N}$. Assume that $(a+b)$ is even. So $(b-a+1)$ is odd. Hence $$(a+b) = 2^{(\alpha_2 + 1)} 3^{\beta_3} 5^{\beta_5} \ldots q^{\beta_q}$$ $$(b-a+1) = 3^{\alpha_3-\beta_3} 5^{\alpha_5-\beta_5} \ldots q^{\alpha_q-\beta_q}$$ where $0 \leq \beta_p \leq \alpha_p$ and $(a+b) > (b-a+1)$. Now assume that $(a+b)$ is odd. So $(b-a+1)$ is even. Hence $$(a+b) = 3^{\beta_3} 5^{\beta_5} \ldots q^{\beta_q}$$ $$(b-a+1) = 2^{(\alpha_2 + 1)} 3^{\alpha_3-\beta_3} 5^{\alpha_5-\beta_5} \ldots q^{\alpha_q-\beta_q}$$ where $0 \leq \beta_p \leq \alpha_p$ and $(a+b) > (b-a+1)$. If we relax the requirement that it be written as a sum of consecutive natural numbers, and assume that the consecutive numbers range over the integers, then we get $$2 \times (1+\alpha_3) \times (1+\alpha_5) \times (1+\alpha_7) \cdots \times (1+\alpha_q)$$ Note that the above also acts as a trivial upper bound if it has to be written as a sum of consecutive natural numbers. This upper bound is obtained by dropping the constraint $(a+b) > (b-a+1)$.
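A small brute-force enumeration in Python, if you want to experiment (illustration only):

def representations(N):
    # all (a, b) with 1 <= a <= b and a + (a+1) + ... + b == N
    reps = []
    for a in range(1, N + 1):
        total, b = 0, a - 1
        while total < N:
            b += 1
            total += b
        if total == N:
            reps.append((a, b))
    return reps

print(representations(15))   # [(1, 5), (4, 6), (7, 8), (15, 15)]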
Does a discrete semilattice of finite width have a finite number of points between any pair of elements?
I think the simplest example is given by the ordinal $\omega+1=\{0<1<2<\cdots<\omega\}$. As a well order it contains no antichains (of more than one element), and it is well founded (hence nowhere dense), but of course $\{x\in\omega+1\mid 0<x<\omega\}$ is infinite. Every infinite well order would do. You will find plenty of examples in the larger class of well quasi-orders too.
what is the precise definition of matrix?
As a sequence can be viewed as a function, a matrix can be viewed as a function $A:\{1,\ldots,m\}\times \{1,\ldots,n\}\to F$.
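If it helps to see this "matrix = function on index pairs" point of view concretely, here is a tiny Python sketch (my own illustration, not part of the original answer):

    # a 2x3 matrix over the rationals, stored literally as a function
    # on the index set {1, 2} x {1, 2, 3}
    entries = {(1, 1): 1, (1, 2): 0, (1, 3): 2,
               (2, 1): 5, (2, 2): 3, (2, 3): 1}

    def A(i, j):
        return entries[(i, j)]

    print(A(2, 3))  # the entry in row 2, column 3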
Does restricting the range of a collection of nonempty sets to one dominated by the index set require the Axiom of Choice?
This is a long comment rather than an answer. I can't answer either of your questions; all I'm going to do is restate your theorem in an equivalent but slightly simpler form. Maybe this will make it easier to find it in the literature. Consider the following statements.

(1) If $A_i\ne\emptyset$ for each $i\in I$, then there is a set $X$ such that $|X|\le|I|$ and $A_i\cap X\ne\emptyset$ for each $i\in I$. [Your theorem.]

(2) For any surjective map $f:A\to B$, there is an injective map $g:B\to A$ such that the composite map $fg:B\to B$ is surjective.

(2A) For any surjective map $f:A\to B$, there is a map $g:B\to A$ such that the composite map $fg:B\to B$ is surjective.

(2B) Whenever there is a surjection from $A$ to $B$, there is an injection from $B$ to $A$. [The Partition Principle.]

Without assuming AC, I claim that $\text{(1)}\Leftrightarrow\text{(2)}\Leftrightarrow\text{(2A)}\wedge\text{(2B)}$.

$\underline{\text{(1)}\Rightarrow\text{(2A)}}$: Suppose $f:A\to I$ is surjective. For $i\in I$ let $A_i=\{x\in A:f(x)=i\}$. By (1) there is a set $X$ such that $|X|\le|I|$ and $A_i\cap X\ne\emptyset$ for each $i\in I$. Moreover, we may assume that $X\subseteq A$ (replace $X$ by $X\cap A$); then $f\restriction X$ is a surjection from $X$ to $I$. Since $|X|\le|I|$, there is a surjection $g:I\to X$. Then $fg:I\to I$ is surjective.

$\underline{\text{(1)}\Rightarrow\text{(2B)}}$: Suppose $f:I\to B$ is surjective. For $i\in I$ let $A_i=\{f(i)\}$. By (1) there is a set $X$ such that $|X|\le|I|$ and $A_i\cap X\ne\emptyset$ for each $i\in I$. It follows that $B\subseteq X$ and so $|B|\le|X|\le|I|$, i.e., there is an injection from $B$ to $I$.

$\underline{\text{(2A)}\wedge\text{(2B)}\Rightarrow\text{(2)}}$: Suppose $f:A\to B$ is surjective. By (2A) there is a map $h:B\to A$ such that $fh:B\to B$ is surjective. Let $C=h[B]$. Since we have surjections from $B$ to $C$ and from $C$ to $B$, it follows by (2B) and the Cantor-Bernstein Theorem that there is a bijection $g:B\to C$. Then $g:B\to A$ is injective, and $fg:B\to B$ is surjective.

$\underline{\text{(2)}\Rightarrow\text{(2A)}\wedge\text{(2B)}}$: Trivial.

$\underline{\text{(2A)}\wedge\text{(2B)}\Rightarrow\text{(1)}}$: Suppose $A_i\ne\emptyset$ for each $i\in I$. Let $\hat{A}_i=\{i\}\times A_i$, and let $$A=\bigcup_{i\in I}\hat{A}_i=\{\langle i,x\rangle:i\in I,x\in A_i\}.$$ For $\langle i,x\rangle\in A$, define $f(\langle i,x\rangle)=i$; then $f:A\to I$ is a surjection. By (2A) there is a map $g:I\to A$ such that $fg:I\to I$ is surjective. Let $C=g[I]\subseteq A$ and let $$X=\{x:\langle i,x\rangle\in C\text{ for some }i\in I\}.$$ Since we have surjections from $I$ to $C$ and from $C$ to $X$, it follows by (2B) that there is an injection from $X$ to $I$, i.e., $|X|\le|I|$. Now it is easy to see that $A_i\cap X\ne\emptyset$ for each $i\in I$.

P.S. I don't know much about set theory, but I've been told that the Axiom of Determinacy (AD) is consistent with ZF+DC. If the Partition Principle (2B) were provable from ZF+DC, then it would be consistent with AD. However, AD implies that there is no injection from $\omega_1$ to $\mathbb R$. On the other hand, inasmuch as there is a surjection from $\mathbb R$ to $\omega_1$, the Partition Principle implies that there is an injection from $\omega_1$ to $\mathbb R$. Since the Partition Principle is incompatible with AD, it is not a theorem of ZF+DC, and a fortiori neither is your theorem.
Can a group G generated by $k$ elements act properly discontinuously freely and cocompactly on $\mathbb{R}^n$, for $n > k$
Many closed hyperbolic 3-manifolds have fundamental group generated by 2 elements. Indeed, if you glue two genus 2 handlebodies by a random map of their boundaries, the resulting 3-manifold is highly likely to have a hyperbolic structure.
$\lim_{z \to a} f(z) = L$ iff for all $z_n$ such that $z_n \to a$ as $n \to \infty$, $f(z_n) \to L$
In fact this is true for any metric space. $\implies$ is quite immediate. For the reverse implication you can proceed by contradiction. If $\lim\limits_{z \to a} f(z) = L$ doesn't hold, you can find $\epsilon_0 >0$ such that for all $\delta > 0$ there exists $z_\delta$ such that $0 < \vert z_\delta - a\vert < \delta$ and $\vert f(z_\delta) - L \vert \ge \epsilon_0$. As this is valid for all $\delta >0$, take $\delta_n = 1/n$ with $n \in \mathbb N$: the sequence $z_n := z_{\delta_n}$ converges to $a$ while $f(z_n) \not\to L$, contradicting the sequence convergence assumption.
Similar problem to Taylor's theorem proof
$\textbf{Hint:}$ Check that $F(t)=0$ for $t=x_1,x_2,\ldots,x_{n+1},x$. Use Rolle's theorem $n+1$ times.
A confusion with the Definition of 'Skyscraper Sheaf' from 'Stack Project'
Yes, all definitions are equivalent. Here is why: The closed subset $F=\overline{\{x\}}\stackrel {i}{\hookrightarrow} X$ is irreducible (as the closure of an irreducible subset), and so the constant sheaf $i_F(A)$ on $F$ with fibre $A$ (which Hartshorne ambiguously denotes by just $A$) has value $\Gamma(V, i_F(A))=A$ on every non-empty subset $V\subset F$ open in $F$. The key point now is that, from the definition of "closure", an open subset $U\subset X$ meets $F$ if and only if it contains $P$, so that the value of both sheaves on $U$ in that case is: $$\Gamma(U,i_P(A))=\Gamma(U,i_*(i_F(A)))=A\quad (P\in U)$$ and the value of both sheaves on $U$ in the other case, namely when $U$ is disjoint from $F$, or equivalently when $U$ doesn't contain $P$, is: $$\Gamma(U,i_P(A))=\Gamma(U,i_*(i_F(A)))=\{0\} \quad (P\notin U)$$ Conclusion: The definitions are indeed equivalent, but the introduction of a constant sheaf on the closure of the singleton set $\{x\}$ looks like a useless and confusing complication.
If $E = F \bigoplus G$ and $F$ and $G$ are orthogonal, then why is there $x_F+x_G=x \in F^{\perp}$?
There seems to be some error in the statement. If $F$ and $G$ are orthogonal and $E=F\oplus G$, then any $x\in E$ can be written as $x=x_F + x_G$ with $x_F\in F$ and $x_G\in G$; in the particular case $x\in F^\perp$, we get $x_F=0$ (the zero vector) and $x_G=x$. Note that if $x\notin E$, then $x$ can be written as $(x_F+x_G)+x_H$ with $x_H\in F^\perp\cap G^\perp$ and $x_F$, $x_G$ as above.
Newton's method with Gaussian elimination
Let's take a step back and look at the big picture. Newton's method says: $x_{n+1}$ is the zero of the linearization of $f$ at $x_n$. The familiar one-dimensional formula, which you implemented, is $$ x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$$ and is obtained by solving the equation $0 = f'(x_n)(x_{n+1} - x_n) + f(x_n)$. In the multivariable case, $x_n$ is a sequence of vectors, obtained by fixing an initial guess $x_0$ and then solving the linear system $$ df(x_n)(x_{n+1} - x_n) + f(x_n) = 0 $$ This is why you need an implementation of Gaussian elimination: instead of solving by hand, as in the one-dimensional case, we're letting a computer solve the linear system for us. So the pseudocode for the implementation is something like:

    def newtonMethod(f, df, x0, num_iterations=1000):
        # gaussElimination(A, b) is your own routine solving the linear system A x = b
        x_curr = x0
        x_next = gaussElimination(df(x_curr), -f(x_curr)) + x_curr
        for _ in range(num_iterations):
            x_curr = x_next
            x_next = gaussElimination(df(x_curr), -f(x_curr)) + x_curr
        return x_next

P.S. If you want to check your code, you might find scipy's implementation useful: https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.newton.html
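As a concrete usage sketch (my own addition, with numpy.linalg.solve standing in for your gaussElimination routine), here is the method applied to a small $2\times 2$ system:

    import numpy as np

    def f(x):   # F(x, y) = (x^2 + y^2 - 1, x - y)
        return np.array([x[0]**2 + x[1]**2 - 1, x[0] - x[1]])

    def df(x):  # Jacobian of F
        return np.array([[2 * x[0], 2 * x[1]],
                         [1.0, -1.0]])

    x = np.array([1.0, 0.5])  # initial guess
    for _ in range(20):
        # one Newton step: solve df(x) * (x_next - x) = -f(x)
        x = x + np.linalg.solve(df(x), -f(x))
    print(x)  # converges to (1/sqrt(2), 1/sqrt(2))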
Why PCA is suboptimal for maximizing mutual information
An easy-to-explain example: suppose you have two sets of functions heavily corrupted by high-energy noise, and you want to find which parts / subsets / linear combinations of them correspond to each other the most. If we just go for PCA, it will optimize subspaces by looking for directions of highest $L^2$ norm in various senses; but if our noise has a higher $L^2$ norm than the functions of interest, it will select the noise rather than the functions of interest! And we know that independently sampled uncorrelated noise has very low mutual information with just about any deterministic quantity of interest. Therefore we do better with a method that does not focus so much on the norm of the actual signal/function but on some statistical correspondence, for example cross-correlation or covariance.
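Here is a tiny NumPy demonstration of this failure mode (my own sketch, not from the original answer): two coordinates carry a shared low-power signal, a third carries independent high-energy noise, and the top principal component latches onto the noise axis:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 5000
    s = rng.normal(size=n)              # shared signal, variance 1
    noise = 5 * rng.normal(size=n)      # independent noise, variance 25
    X = np.column_stack([s, s + 0.1 * rng.normal(size=n), noise])
    X -= X.mean(axis=0)

    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    print(np.round(Vt[0], 2))  # top PC is ~ (0, 0, ±1): the pure-noise direction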
How do I evaluate this logarithm?
Assuming that you mean a base-$10$ logarithm and that the exponent is $3\log{2}$, then $$10^{3\log{2}} = \left(10^{\log{2}}\right)^3 = 2^3 = 8$$
$u_t = t\Delta u$, solve with stretching
Three ways:

1) Equate dimensions. Dividing by $t$ gives $\frac{1}{t}\frac{\partial u}{\partial t}=\sum_i\frac{\partial^2u}{\partial x_i^2}$, or in terms of dimensions, $\frac{U}{T^2}\sim \frac{U}{X^2}$. This means $t$ and $x$ must have the same dimensions, so they scale with the same power, i.e. $(x,t)\to(\lambda x,\lambda t)$.

2) Find the scaling of the independent variables the direct way: let $u(x,t)=v(\lambda x,\lambda^k t)=:v(y,s)$. Then $u_t=\lambda^kv_s$ and $\Delta_xu=\lambda^2\Delta_yv$, so the equation becomes $\lambda^kv_s=\lambda^2t\Delta_yv$. Since $t=s/\lambda^k$, it reduces to $v_s=\lambda^{2-2k}s\Delta_yv$. Choosing $k=1$ gives the desired scaling invariance.

3) Reparametrize $t$. Let $u(x,t)=v(x,f(t))$. Then $u_t=v_s\cdot f'(t)$, so if you choose $f(t)=t^2/2$, the initial value problem becomes $v_{s}=\Delta v$, subject to $v(x,0)=\delta(x)$.
Find the smallest subring of $\mathbb{Z}$ containing $8$.
Note that the second set after the cup symbol is already contained in the first set. A slick way to see that the set you chose (namely $8\mathbb{Z}$) is the smallest subring containing $8$ is to note that, additively, it is the smallest subgroup containing $8$. Since a subring must be an additive subgroup, and this subgroup happens to be a ring, it must be the smallest subring.
Probability that coin tossing game terminates
The expected value of the game must be $0$, because the expected value of each coin toss is $0$. Suppose that there is a second condition: the game must end after $n$ coin tosses if $+1$ is never achieved. Then there is some small chance that the coin tosser loses a lot of money. As $n \to \infty$, the chance of losing goes to $0$, and if a loss is sustained, there is some chance that the loss is considerable. What is happening is that there is a smaller and smaller chance of losing a considerable amount of money. Further, as $n \to \infty$, the chance of ending up a $+1$ winner approaches but never equals certainty. It's important to remember that infinity is not a number, but rather a symbol for unbounded growth; therefore, you can never actually perform an infinite number of coin tosses.
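A quick Monte Carlo sketch (my own addition) makes the point concrete: with a cap of $n$ tosses, almost every run ends at $+1$, yet the rare capped runs lose enough that the average payoff stays near $0$:

    import random

    def play(max_tosses):
        total = 0
        for _ in range(max_tosses):
            total += random.choice((1, -1))
            if total == 1:
                return total       # stop as soon as we are +1 up
        return total               # forced stop: possibly a large loss

    runs = [play(10_000) for _ in range(20_000)]
    print(sum(r == 1 for r in runs) / len(runs))  # close to 1
    print(sum(runs) / len(runs))                  # close to 0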
Can we assume the property is true for some n in proof by induction?
No. Those are not the same at all. Suppose you have a property $P$ that is true of $7, 13, 52, 53$ and $79$ but nothing else. To do a proof by induction (which you can't, because the statement "$P(n)$ for all $n$" is NOT true), you need to prove that if it is true for any $n$ then it must be true for $n+1$. You can't do this, because it is true for $n =7$ but it is not true for $n+1 = 8$ (nor from $n=13$ to $n+1 = 14$, or from $n=79$ to $n+1 = 80$). So $\forall n \in \mathbb{N}\, ( P(n) \implies P(n+1))$ is false. But for $n = 52$ we have that $P(n)$ is true and that $P(n+1) = P(53)$ is true. So $\exists n \in \mathbb{N}\, (P(n) \implies P(n+1))$ is true.
How to prove that 3 points are on the same line by the distance between the points
Hint: The triangle inequality says that if the three sides of a triangle have lengths $x\le y \le z$, then $x+y\ge z$, and equality holds only if the three vertices are aligned and the triangle is degenerate.
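In code form the hint becomes a one-line test (my own sketch; the tolerance parameter is an assumption for floating-point input):

    def collinear_by_distances(d1, d2, d3, tol=1e-9):
        """True if three points with pairwise distances d1, d2, d3 are aligned."""
        x, y, z = sorted((d1, d2, d3))
        return abs((x + y) - z) <= tol

    # the points 0, 1, 3 on a line have pairwise distances 1, 2, 3
    print(collinear_by_distances(1, 2, 3))  # True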
Is this proof about sequence correct?
I don't know how you got your last inequality. Why don't you just write your sequence as $$u_n=\frac{4^{3n}}{3^{4n}}=\left(\frac{4^3}{3^4}\right)^n=\left(\frac{64}{81}\right)^n$$ Since $0<64/81<1$, the sequence tends to $0$.
Why is this calculation true? (solution added)
Why is this not so? Because the request is exactly $P(Y>X)$, that is, the integral over the region of the plane above the line $y=x$. (A picture illustrating this region accompanied the original answer.)
Normalization of a hypersurface in $\mathbb{P}^3$
Every variety has a normalization. In the particular case of $D$ a normalization can be described as follows. Consider the space $D'$ of pairs $(f,x)$, where $f \in \mathbb{P}^3$ is a cubic polynomial (up to a scalar) and $x \in \mathbb{P}^1$ is a point, such that $x$ is a root of $f$ of multiplicity at least 2. Clearly, $D'$ is isomorphic to $\mathbb{P}^1 \times \mathbb{P}^1$ (one can associate to $(f,x)$ the point $x$ and the linear polynomial $f/l_x^2$, where $l_x$ is a linear polynomial with root at $x$). On the other hand, the projection $D' \to \mathbb{P}^3$, $(f,x) \mapsto f$, gives a finite birational map $D' \to D$. Since $D'$ is smooth, it is a normalization of $D$.
compact operator and bounded operator
1) Since $S,T\geq 0$, we have in particular that $S,T$ are normal, and hence $r(S)=\|S\|$ as well as $r(T)=\|T\|$. Further, it holds in general in a C*-algebra that $0\leq x \leq y$ implies $\|x\| \leq \|y\|$; hence $r(S)=\|S\| \leq \|S+T\| = r(S+T)$, and similarly for $r(T)$. 2) Do you mean self-adjoint instead of normal? For normal operators this is clearly wrong: of course there do exist operators which are compact and normal but not self-adjoint, and in particular not positive.
Does this type of function exist?
You need to use Baire first to get uniform boundedness in an angle, after which you can just blow up, use analyticity to establish uniform continuity in a smaller angle away from the origin, and pass to the limit to conclude that $e^{i\theta}$ is analytic on an open set, which is absurd. Note that the word every is crucial here. If you just ask for "almost every" in the sense of the Lebesgue measure, such crazy behavior becomes possible.
Does $R$ have to be a PID in order for all f.g. torsion-free $R$-modules to be free?
A simple counterexample to Theorem A for non-PIDs is the following: Let $R=K[x,y]$ with $K$ any field, and consider the non-principal ideal $$ I=(x,y)=\{\text{$xP(x,y)+yQ(x,y)$ where $P$, $Q\in R$}\} $$ Then $I$ is torsion-free, being a submodule of a free $R$-module ($R$ itself!). However, $I$ is not free over $R$: since, for instance, $xR\cap yR\neq(0)$, the generators $x$ and $y$ satisfy a nontrivial $R$-linear relation, so a basis of $I$ could contain at most one element; but then $I$ would be principal.
Difference of two metrics on the same space.
For a metric space $X$ with metric $d$, $2d$ is also a metric on $X$. Let $d_1=2d$ and $d_2=d$. Then $d_1-d_2=d$, which is a metric.
how to integrate a potential function?
Recall the formula for the unit circle: $$x^2+y^2=1$$ Solving for the upper half of the circle in terms of $x$, $$y=\sqrt{1-x^2}$$ Now recall the formula for arc length: $$\operatorname{arc length}_{a,b}(f)=\int_a^b\sqrt{1+[f'(x)]^2}~\mathrm dx$$ Thus, the arc length of a circle is given by $$\begin{align}\operatorname{arc length}_{a,b}(\text{circle})&=\int_a^b\sqrt{1+[y'(x)]^2}~\mathrm dx\\&=\int_a^b\sqrt{1+\frac{x^2}{1-x^2}}~\mathrm dx\\&=\int_a^b\sqrt{\frac{1-x^2+x^2}{1-x^2}}~\mathrm dx\\&=\int_a^b\sqrt{\frac1{1-x^2}}~\mathrm dx\\&=\int_a^b\frac1{\sqrt{1-x^2}}~\mathrm dx\end{align}$$ Thus, it happens to be the case that $f(x)$ equals the arc length of a circle plus a constant of integration. We may set $a=0$ for simplicity and let $b$ be our variable: $$f(x)=\operatorname{arc length}_{0,x}(\text{circle})+C$$ [Figure: the unit circle, with the arc from $(0,1)$ to $\left(x,\sqrt{1-x^2}\right)$ drawn in red; this arc is what we want to find.] Via some trigonometry, we find the (purple) central angle subtending this arc is $\theta_x=\frac\pi2-\arctan\left(\frac{\sqrt{1-x^2}}x\right)$, and since the circumference of the full circle is $2\pi$, we derive that $$f(x)=\theta_x+C=\frac\pi2-\arctan\left(\frac{\sqrt{1-x^2}}x\right)+C$$ $$f(x)=C^\star-\arctan\left(\frac{\sqrt{1-x^2}}x\right)$$ which holds for $x\in[0,1]$ and is equivalent to $$f(x)=C^\star+\arcsin(x)$$
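A quick numerical check of the conclusion (my own addition), comparing the integral against $\arcsin$ with SciPy:

    from math import asin
    from scipy.integrate import quad

    # integral of 1/sqrt(1 - t^2) from 0 to 0.5 should equal arcsin(0.5)
    val, _ = quad(lambda t: 1.0 / (1.0 - t * t) ** 0.5, 0.0, 0.5)
    print(val, asin(0.5))  # both ~ 0.5235987756 (= pi/6)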
Find the dimension of the intersection of subspaces
As I understand it, $U$ is given by polynomials of the form $(a_3x^3 + a_2x^2 + a_1x + a_0)(x-3)$, so $\{x^i(x-3)\mid i=0,\ldots,3\}$ should be a basis, i.e. $U$ has dimension $4$. Similarly for $V$. $U \cap V$ is a subspace of both $U$ and $V$, so its dimension cannot possibly be larger than the minimum of the two dimensions, i.e. it has to be $\leq 4$. In fact, using the same trick as before, elements of $U \cap V$ have the form $(c_2x^2 + c_1x + c_0)(x-2)(x-3)$, so $U \cap V$ should have dimension $3$.
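One can confirm the count numerically (my own sketch, assuming the ambient space is the polynomials of degree at most $4$, with $U$ those vanishing at $3$ and $V$ those vanishing at $2$), via $\dim(U\cap V)=\dim U+\dim V-\dim(U+V)$:

    import numpy as np

    def basis_vector(i, root, n=5):
        """Coefficient vector of x^i * (x - root) in the basis 1, x, ..., x^4."""
        v = np.zeros(n)
        v[i] -= root   # contribution of -root * x^i
        v[i + 1] += 1  # contribution of x^(i+1)
        return v

    U = np.array([basis_vector(i, 3) for i in range(4)])
    V = np.array([basis_vector(i, 2) for i in range(4)])
    dim_sum = np.linalg.matrix_rank(np.vstack([U, V]))  # dim(U + V) = 5
    print(4 + 4 - dim_sum)                              # dim(U ∩ V) = 3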
$f(r)=0$ for $r\in R$ implies that $x-r$ divides $f(x)$
Just realize that $f(x)-f(r)$ is a linear combination of the $x^k-r^k$'s, and $x-r$ divides every $x^k-r^k$, thanks to the identity $x^k-r^k=(x-r)(x^{k-1}+x^{k-2}r+\cdots+r^{k-1})$.
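If you want to see the mechanism symbolically, here is a tiny SymPy check (my own illustration, with an arbitrarily chosen sample polynomial):

    import sympy as sp

    x, r = sp.symbols('x r')
    f = x**3 - 2*x + 1   # a sample polynomial

    # f(x) - f(r) should be divisible by x - r with zero remainder
    q, rem = sp.div(f - f.subs(x, r), x - r, x)
    print(sp.expand(rem))  # 0
    print(sp.expand(q))    # the cofactor x^2 + x*r + r^2 - 2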
Probability of coin streak at start of sequence
Suppose $P(H) = p$ and $P(T) = 1-p$. You are looking for a streak of length $k$. We can classify all streaks of length $k$ into two categories of outcomes: either the first $k$ flips are heads followed by a tail, or the first $k$ flips are tails followed by a head. These outcomes have respective disjoint probabilities $$p^k (1-p)\qquad \text{and} \qquad p(1-p)^k$$ So the total probability that you start with a streak of length exactly $k$ is given by $$p(1-p)\big( p^{k-1}+(1-p)^{k-1} \big)$$
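A short simulation (my own addition) agrees with the closed form:

    import random

    def starts_with_streak(k, p):
        flips = [random.random() < p for _ in range(k + 1)]  # True = heads
        return (all(flips[:k]) and not flips[k]) or \
               (not any(flips[:k]) and flips[k])

    p, k, trials = 0.6, 3, 200_000
    est = sum(starts_with_streak(k, p) for _ in range(trials)) / trials
    exact = p * (1 - p) * (p**(k - 1) + (1 - p)**(k - 1))
    print(est, exact)  # both ~ 0.125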
Size of maximal tori in finite simple groups of Lie type
Let me know if you have a more specific question, and I can try to answer it. Here are some attempts:

Existence: For every positive integer $n\geq 2$, there is a finite simple group of Lie type with a maximal torus of order $2^n$: If $n$ is odd, then $\operatorname{PSL}(n,3)$ has a maximal torus of type $\Phi_1^{n-2} \Phi_2$ and order $(3-1)^{n-2} (3+1)=2^n$. If $n$ is even, then $\operatorname{PSL}(n+2,3)$ has a maximal torus of type $\Phi_1^{n+1}/(n+2,q-1)$ and order $(3-1)^{n+1}/\gcd(n+2,3-1)=2^n$.

A-type case: Maximal tori for GL, SL, PGL, and PSL can be described fairly explicitly. Things are easiest for GL, where a maximal torus is the group of units of an $n$-dimensional semisimple commutative $k$-algebra, that is, the group of units of a direct product of field extensions of the underlying field. If the underlying field is finite, then field extensions are uniquely parameterized by their dimension. Hence we find a partition $d_1 \leq d_2 \leq \ldots \leq d_m$ of $n$, so that $n=d_1 + \ldots + d_m$. The maximal torus is then the group of block diagonal matrices where the $i$th block is chosen from $\newcommand{\GL}{\operatorname{GL}}\GL(1,q^{d_i}) \leq \GL(d_i,q)$, where nonzero elements of the field with $q^{d_i}$ elements are written as $k$-linear transformations of that field, $k$ being the subfield with $q$ elements. Such a torus has order $(q^{d_1}-1)(q^{d_2}-1)\cdots(q^{d_m}-1)$. In SL and PGL one takes the subgroup or the quotient group, and in each case one removes a factor of $q-1$. In PSL one additionally removes a factor of $\gcd(n,q-1)$.

In particular, choosing the partition $d_1=d_2=\ldots=d_{n-2}=1, d_{n-1}=2$, that is, $1+1+\ldots+1+2=n$, we get a maximal torus in GL of order $(q-1)^{n-2} (q^2-1)$, a maximal torus in SL or PGL of order $(q-1)^{n-2}(q+1)$, and a maximal torus in PSL of order $(q-1)^{n-2}(q+1)/\gcd(n,q-1)$. Specializing this to $q=3$ and $n$ odd gives $2^{n-2}\cdot 4 / 1 = 2^n$. Similarly, choosing the partition $d_1=d_2=\ldots=d_{n+2}=1$, that is, $1+1+\ldots+1=n+2$, we get a maximal torus in GL of order $(q-1)^{n+2}$, of SL and PGL of order $(q-1)^{n+1}$, and of PSL of order $(q-1)^{n+1}/\gcd(n+2,q-1)$. Choosing $q=3$ and $n$ even we get $2^{n+1}/2 = 2^n$. Since $\operatorname{PSL}(2,3)$ is not simple, we run into trouble trying to get very tiny tori in a simple group.

Table: Here is a table of the orders of maximal tori in simply connected finite groups of untwisted Lie type of rank up to 4, where $\Phi_1=q-1$, $\Phi_2=q+1$, $\Phi_3 = q^2+q+1$, $\Phi_4 =q^2+1$, $\Phi_5=q^4+q^3+q^2+q+1$, $\Phi_6=q^2-q+1$, $\Phi_8 = q^4+1$, $\Phi_{12}=q^4-q^2+1$.
$$\begin{array}{rcc|c|c|c|c|c|}
r & G & k & T_1 & T_2 & T_3 & T_4 & T_5 \\ \hline
A_1 & SL_2 & 2 & \Phi_1 & \Phi_2 \\ \hline
A_2 & SL_3 & 3 & \Phi_1^2 & \Phi_1 \Phi_2 & \Phi_3 \\ \hline
B_2 & Sp_4 & 5 & \Phi_1^2 & \Phi_2^2 & \Phi_1 \Phi_2 & \Phi_1 \Phi_2 & \Phi_4 \\ \hline
G_2 & & 6 & \Phi_1^2 & \Phi_2^2 & \Phi_1 \Phi_2 & \Phi_1 \Phi_2 & \Phi_3 \\
&&& \Phi_6 \\ \hline
A_3 & SL_4 & 5 & \Phi_1^3 & \Phi_1 \Phi_2^2 & \Phi_1^2 \Phi_2 & \Phi_1 \Phi_3 & \Phi_2 \Phi_4 \\ \hline
B_3 & O_7 & 10 & \Phi_1^3 & \Phi_2^3 & \Phi_1\Phi_2^2 & \Phi_1^2 \Phi_2 & \Phi_1\Phi_2^2 \\
&&& \Phi_1^2\Phi_2 & \Phi_1\Phi_3 & \Phi_2\Phi_4 & \Phi_1\Phi_4 & \Phi_2\Phi_6 \\ \hline
C_3 & Sp_6 & 10 & \Phi_1^3 & \Phi_2^3 & \Phi_1\Phi_2^2 & \Phi_1^2 \Phi_2 & \Phi_1\Phi_2^2 \\
&&& \Phi_1^2\Phi_2 & \Phi_1\Phi_3 & \Phi_2\Phi_4 & \Phi_1\Phi_4 & \Phi_2\Phi_6 \\ \hline
A_4 & SL_5 & 7 & \Phi_1^4 & \Phi_1^3\Phi_2 & \Phi_1^2\Phi_2^2 & \Phi_1^2 \Phi_3 & \Phi_1\Phi_2\Phi_4 \\
&&& \Phi_5 & \Phi_1\Phi_2\Phi_3 \\ \hline
B_4 & O_9 & 20 & \Phi_1^4 & \Phi_2^4 & \Phi_1\Phi_2^3 & \Phi_1^3\Phi_2 & \Phi_1^2\Phi_2^2 \\
&&& \Phi_1\Phi_2^3 & \Phi_1^2\Phi_2^2 & \Phi_1^3\Phi_2 & \Phi_1^2\Phi_2^2 & \Phi_1^2\Phi_3 \\
C_4 & Sp_8 & \text{(same)} & \Phi_4^2 & \Phi_2^2\Phi_4 & \Phi_1^2\Phi_4 & \Phi_1\Phi_2\Phi_4 & \Phi_1\Phi_2 \Phi_4 \\
&&& \Phi_1\Phi_2\Phi_4 & \Phi_2^2\Phi_6 & \Phi_1\Phi_2\Phi_6 & \Phi_1\Phi_2\Phi_3 & \Phi_8 \\ \hline
D_4 & O^+_8 & 13 & \Phi_1^4 & \Phi_2^4 & \Phi_1^2\Phi_2^2 & \Phi_1^2\Phi_2^2 & \Phi_1^2\Phi_2^2 \\
&&& \Phi_1\Phi_2^3 & \Phi_1^3\Phi_2 & \Phi_1^2\Phi_3 & \Phi_4^2 & \Phi_1\Phi_2\Phi_4 \\
&&& \Phi_1\Phi_2\Phi_4 & \Phi_1\Phi_2\Phi_4 & \Phi_2^2\Phi_6 & \\ \hline
F_4 & & 25 & \Phi_1^4 & \Phi_2^4 & \Phi_1^3\Phi_2 & \Phi_1^3\Phi_2 & \Phi_1\Phi_2^3 \\
&&& \Phi_1\Phi_2^3 & \Phi_1^2\Phi_2^2 & \Phi_1^2\Phi_2^2 & \Phi_3^2 & \Phi_1^2\Phi_3 \\
&&& \Phi_1^2\Phi_3 & \Phi_4^2 & \Phi_1^2\Phi_4 & \Phi_2^2\Phi_4 & \Phi_1\Phi_2\Phi_4 \\
&&& \Phi_1\Phi_2\Phi_4 & \Phi_6^2 & \Phi_2^2\Phi_6 & \Phi_2^2\Phi_6 & \Phi_1\Phi_2\Phi_6 \\
&&& \Phi_1\Phi_2\Phi_3 & \Phi_1\Phi_2\Phi_3 & \Phi_1\Phi_2\Phi_6 & \Phi_8 & \Phi_{12} \\ \hline
\end{array}$$
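To experiment with the $A$-type formulas above, here is a small Python helper (my own sketch, not part of the original answer) that computes the torus orders attached to a partition and reproduces the $2^n$ examples:

    from math import gcd, prod

    def gl_torus_order(q, partition):
        """Order of the maximal torus of GL(n,q) attached to a partition of n."""
        return prod(q**d - 1 for d in partition)

    def psl_torus_order(q, partition):
        n = sum(partition)
        return gl_torus_order(q, partition) // (q - 1) // gcd(n, q - 1)

    # n odd: partition 1 + 1 + ... + 1 + 2 of n, with q = 3, gives 2^n
    n = 7
    print(psl_torus_order(3, [1] * (n - 2) + [2]))  # 128 = 2^7
    # n even: partition of n + 2 into all 1's, with q = 3, gives 2^n
    n = 6
    print(psl_torus_order(3, [1] * (n + 2)))        # 64 = 2^6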
Infinite field extensions and linear transformations
Assume that $\alpha\in P\setminus\Phi$, that is, $\alpha$ is linearly independent from $1$ in the $\Phi$-vector space $P$. Then (e.g. by extending $(1,\alpha)$ to a basis) we can define a $\Phi$-linear map $f:P\to P$ that satisfies $f(1)=1,\ f(\alpha)=0$. This will violate the condition, as $$\alpha_R(f(1))=\alpha\ \ne\ 0=f(\alpha_R(1))\,.$$
Very basic formula question
If you want a linear relationship, you can rescale the position arithmetically. $x$ ranges from $0.004$ to $0.1$, and $s$ ranges from $0.1$ to $1.0$, so if $x$ lies within that range, $s = 0.1 + (1.0 - 0.1) * (x - 0.004) / (0.1 - 0.004)$. For example, at $x = 0.05$ this gives $s = 0.1 + 0.9 * 0.046 / 0.096 = 0.53125$.
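As a sketch in code (my own addition; the endpoint values are taken from the answer above):

    def rescale(x, x_lo=0.004, x_hi=0.1, s_lo=0.1, s_hi=1.0):
        """Map x in [x_lo, x_hi] linearly onto s in [s_lo, s_hi]."""
        return s_lo + (s_hi - s_lo) * (x - x_lo) / (x_hi - x_lo)

    print(rescale(0.05))  # 0.53125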