Solving an equation over the reals: $ x^3 + 1 = 2\sqrt[3]{{2x - 1}}$
We have $$x^3+1=2(2x-1)^{1/3}\iff x^3=2(2x-1)^{1/3} -1.$$ Here, setting $y=(2x-1)^{1/3}$ gives us $$y^3=2x-1 \ \ \text{and}\ \ x^3=2y-1.$$ Hence, we have $$\begin{align}y^3-x^3=(2x-1)-(2y-1)&\Rightarrow (y-x)(y^2+yx+x^2)=2(x-y)\\&\Rightarrow (y-x)(y^2+yx+x^2+2)=0\\&\Rightarrow (y-x)\{(x+(y/2))^2 + (3/4)y^2+2\}=0\\&\Rightarrow y=x.\end{align}$$ Hence, we have $$x^3=2x-1\iff (x-1)(x^2+x-1)=0\iff x=1,\frac{-1\pm\sqrt 5}{2}.$$
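A quick numeric verification of the three roots in R (a real cube root helper is needed, since ^(1/3) returns NaN for negative bases):
cbrt <- function(y) sign(y) * abs(y)^(1/3)   # real cube root
roots <- c(1, (-1 + sqrt(5))/2, (-1 - sqrt(5))/2)
roots^3 + 1 - 2 * cbrt(2 * roots - 1)        # all numerically zero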
Why should we expect the divergence operator to be invariant under transformations?
The chain rule says if $\vec{y} = R \vec{x}$, $\dfrac{\partial}{\partial x_i} = \sum_j R_{ji} \dfrac{\partial}{\partial y_j}$. Then $$\nabla_x \cdot \vec{E} = \sum_i \dfrac{\partial}{\partial x_i} E_i = \sum_i \sum_j R_{ji} \dfrac{\partial}{\partial y_j} E_i = \sum_j \dfrac{\partial}{\partial y_j} \sum_i R_{ji} E_i = \nabla_y \cdot (R \vec{E}) $$
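For a quick numerical spot check of this invariance, here is a sketch in R; the field E, the rotation angle, and the test point are arbitrary choices, not from the question:
# central-difference divergence of a vector field F at point x
div <- function(F, x, h = 1e-6) {
  sum(sapply(1:3, function(i) {
    e <- replace(numeric(3), i, h)
    (F(x + e)[i] - F(x - e)[i]) / (2*h)
  }))
}
E <- function(x) c(x[1]^2, x[1]*x[2], sin(x[3]))     # arbitrary smooth field
th <- 0.7
R <- rbind(c(cos(th), -sin(th), 0), c(sin(th), cos(th), 0), c(0, 0, 1))
x0 <- c(0.3, -0.5, 1.2)
RE <- function(y) as.vector(R %*% E(t(R) %*% y))      # the rotated field R E in y-coordinates
c(div(E, x0), div(RE, as.vector(R %*% x0)))           # the two agree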
Quartiles of an exponentially distributed function
The usual definition of the first quartile is the place $q_1$ such that $\Pr(X\le q_1)=0.25$. In our case, $F_X(x)=1-e^{-\lambda x}$ and therefore we want $1-e^{-\lambda q_1}=0.25$. This manipulates to $e^{-\lambda q_1}=1-0.25=0.75$. Taking logarithms, we get the book's answer of $-\frac{\ln (0.75)}{\lambda}$. For the second quartile, $\ln(0.75)$ is replaced by $\ln(0.5)$, and for the third by $\ln(0.25)$. Remark: Note that $-\ln(0.75)\lt -\ln(0.5)\lt -\ln(0.25)$. This feels as if it is going the wrong way. It isn't, for the logarithms are all negative. I think the book's answer is (though correct) not presented in a good way. Better would be the equivalent $q_1=\ln(1/0.75)\cdot \frac{1}{\lambda}$. Then everything is positive. Shouldn't we all be positive?
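As a sanity check against R's built-in quantile function (with an arbitrary $\lambda=2$):
lam <- 2
-log(c(0.75, 0.5, 0.25)) / lam        # q1, q2 (median), q3 from the formula
qexp(c(0.25, 0.5, 0.75), rate = lam)  # built-in quantile function agrees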
How to draw the function $f(x)=(x^2(e^x-1))^{\frac{1}{5}}$?
Write your $f(x)$ in the form $$f(x)=x^{2/5}(e^x-1)^{1/5}$$
How would one analyse the solutions to this non linear ODE system?
The first two equations say this is a motion on the cylinder with speed $1$. One family of trivial solutions is where the motion is in the $f_3$ direction only: $$ f_1(t) = \cos(\alpha),\ f_2(t) = \sin(\alpha), \ f_3(t) = t + c$$ EDIT: Following WalterJ's suggestion, if $f_1(t) = \cos(g(t))$ and $f_2(t)=\sin(g(t))$, the first equation is satisfied automatically and the last two become $$ \eqalign{\dot{g}^2 + \dot{f_3}^2 &= 1 \cr \ddot{g} = \dot{g}^4 + \ddot{g}^2 + \ddot{f_3}^2\cr}$$ We can then eliminate $f_3$ from this: note that $$ \dot{f_3} \ddot{f_3} = - \dot{g} \ddot{g} $$ so $$\ddot{f_3}^2 = \dfrac{\dot{g}^2 \ddot{g}^2}{1-\dot{g}^2} $$ and thus we get one equation for $g$: $$ \ddot{g} = \dot{g}^4 + \ddot{g}^2 + \dfrac{\dot{g}^2 \ddot{g}^2}{1-\dot{g}^2} $$ Note this is a separable (but rather nasty) first-order equation in $\dot{g}$.
Solving : $2n^{\frac{1}{2}}-1.5n^{\frac{1}{3}}+n^{\frac{1}{6}}=y$ for $n$ in terms of $y$
Let $x=n^{1/6}$ (i.e. $n=x^6$): $$2x^3-\frac 3 2x^2+x=y$$ Subtract $y$ and divide both sides by $2$: $$x^3-\frac 3 4x^2+\frac 1 2x-\frac 1 2y=0$$ In order to reduce this to a depressed cubic, let $x=z+\frac 1 4$: $$z^3+\frac{5z}{16}+\frac{3}{32}-\frac 1 2y=0$$ In order to turn this into a quadratic, let $z=w-\frac{5}{48w}$: $$w^3+\frac{3}{32}-\frac 1 2y-\frac{125}{110592w^3}=0$$ Multiply by $w^3$: $$w^6+\left(\frac{3}{32}-\frac 1 2y\right)w^3-\frac{125}{110592}=0$$ Time to apply the quadratic formula: $$w^3=\frac{\frac 1 2 y-\frac{3}{32}\pm\sqrt{\left(\frac{3}{32}-\frac 1 2 y\right)^2+\frac{125}{27648}}}{2}$$ Now, there are three complex solutions to this equation. To represent this, I will use $\omega$ to represent a cube root of unity. Thus, the following equation represents three solutions for $w$: One for $\omega=1$, one for $\omega=\frac{-1+\sqrt{-3}}{2}$, and one for $\omega=\frac{-1-\sqrt{-3}}{2}$: $$w=\omega\sqrt[3]{\frac{\frac 1 2 y-\frac{3}{32}\pm\sqrt{\left(\frac{3}{32}-\frac 1 2 y\right)^2+\frac{125}{27648}}}{2}}$$ From here, I leave the rest to you: Solve for $z$ using $z=w-\frac{5}{48w}$, solve for $x$ using $x=z+\frac 1 4$, and, finally, solve for $n$ using $n=x^6$. Good luck finishing the problem!
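A numeric sanity check in R, picking $y=3$ arbitrarily and using polyroot() instead of the radicals:
y <- 3
x <- polyroot(c(-y, 1, -1.5, 2))       # coefficients in ascending order
x <- Re(x[abs(Im(x)) < 1e-9])          # keep the real root
n <- x^6
2*sqrt(n) - 1.5*n^(1/3) + n^(1/6)      # returns y = 3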
Binary concatenation
At the risk of stating the obvious, we have the following. If $b$ is the function that converts from decimal to binary, $d$ is the function that converts from binary to decimal, $u+v$ is the concatenation of $u$ and $v$ in decimal, and $x \diamond y$ is the operation of interest, then $$x \diamond y = b(d(x) + d(y))$$ for all binary strings $x$ and $y$. I'm not sure if that's what you're looking for, though.
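A small R sketch of the composite map; the helper b() for integer-to-binary is hand-rolled (base R has no direct inverse of strtoi):
d <- function(s) strtoi(s, base = 2L)              # binary string -> integer
b <- function(n) {                                 # integer -> binary string
  if (n == 0) return("0")
  bits <- character(0)
  while (n > 0) { bits <- c(n %% 2, bits); n <- n %/% 2 }
  paste(bits, collapse = "")
}
diamond <- function(x, y) b(as.integer(paste0(d(x), d(y))))
diamond("101", "11")   # d gives 5 and 3, "53" in decimal, "110101" in binary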
Bernoulli random variables - 2 equations with 2 variables?
Nice approach. I give you some confirmations and further hints so that you can finish by yourself. Does it mean that the probability of $A=0$ and $B=1$ is $1/4$? That's true. $\Pr[A + B \le 1] = 3/4$ - Does it mean that the probability of "not both of them equal 1" is $3/4$? Yes. Here you can use the complementary probability. The probability that not both of them are equal to $1$ is $$1- P("\textrm{Both of them are equal to 1}")$$ How should I approach the solution? 2 variables and 2 equations? Yes, with the variables $p_a$ and $p_b$.
What does $\sum_{n=0}^{\infty} (-1)^n \cdot\frac{x^{2n+1}}{2n+1}$ converge to at $x = 1$ and $x = -1$
At $x=1$, the sum converges to $\pi/4$. At $x=-1$, since the series is odd in $x$, it converges to $-\pi/4$. $$\frac{d}{dx}\sum_{n=0}^{\infty} \frac{(-1)^n x^{2 n+1}}{2 n+1} = \sum_{n=0}^{\infty} (-1)^n x^{2 n} = \frac{1}{1+x^2}$$ So the sum at $x=1$ is $$\int_0^1 \frac{dx}{1+x^2} = \frac{\pi}{4}$$ For $x=-1$, you get $$\int_0^{-1} \frac{dx}{1+x^2} = -\frac{\pi}{4}$$ (Convergence at the endpoints themselves follows from the alternating series test together with Abel's theorem, since termwise differentiation is only justified inside the interval of convergence.)
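Partial sums bear this out numerically in R:
n <- 0:10000
c(sum((-1)^n / (2*n + 1)), pi/4)   # x = 1: converges to pi/4
# at x = -1 every term flips sign, giving -pi/4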
If a function $f:X\to Y$ is injective, and $|X| = |Y|$, can it not be bijective?
In the finite case, this is true. If $X$ and $Y$ are infinite it is not true. For example: $f:\mathbb{N}\to\mathbb{N}, n\mapsto n+1$. This function is injective, but not surjective, as $0$ has no preimage. Edit: So you are correct. If the sets $X,Y$ have a finite and equal cardinality, it is enough to prove that $f$ is either injective, or surjective. There are some common examples for this method. When you want to prove that a finite integral domain is a field, for example, this method can be used to great success. In that case you have to prove that every element has an inverse, and you can conclude that from looking at a certain bijective map.
Explanation of strategies in infinite horizon dynamic programming problem
This is the policy iteration solution algorithm, in which we iterate on the policy (strategy) function $\sigma^{(j)}$. The initial guess is $\sigma^{(0)}(s)=0$. The next iteration policy $\sigma^{(1)}$ is found by solving the optimization problem in the Bellman equation using the continuation value resulting from $\sigma^{(0)}$, that is $W(\sigma^{(0)})(s)=0$. Similarly, on the next iteration, the $\sigma^{(2)}$ policy is found by solving the Bellman equation with the continuation value $W(\sigma^{(1)})(s)=\sqrt{s}$. Note however that in the Bellman equation $W(\sigma^{(1)})$ is calculated at $s-a$.
Cardinality of a discrete subset
For 1, spaces in which the discrete subsets are at most countable are called spaces with "countable spread" in topology. Here, the spread $s(X)$ of a space $X$ is defined as the supremum of the cardinalities of all discrete subspaces of $X$, where by convention a finite supremum is rounded up to $\aleph_0$ (only infinite cardinals are used), also because every infinite Hausdorff space has a countable discrete subset (so spaces with a finite spread would be "pathological" non-Hausdorff spaces, or finite to begin with). If a space is second countable, then every subspace is second countable too, and a discrete second countable space is at most countable, so a second countable space has countable spread. But this argument can be repeated for other classes of spaces: if every subspace of $X$ is separable ($X$ is then called hereditarily separable) or every subspace of $X$ is Lindelöf ($X$ is then called hereditarily Lindelöf) then $X$ has countable spread too (as a Lindelöf discrete space or separable discrete space both must be countable). For metrizable spaces, countable spread is equivalent to being separable, or Lindelöf, or second countable (see my post on Topology Atlas), but in general this need not be the case. The Wolfram quote maybe comes from the fact that a lot of mathematics is done in separable metrizable spaces, like the Euclidean spaces. An example of a separable compact space that does not have countable spread is $\beta(\omega)$ or $[0,1]^{\omega_1}$. As to 2, the property that all finite subsets are discrete is equivalent to being $T_1$ (defined either as "all singleton sets are closed", or "for every $x \neq y$ in $X$, there are open sets $U$ and $V$ such that $x \in U, y \notin U$ and $y \in V, x \notin V$"). This already follows from considering subsets of 2 points. As to 3, adding a discrete topology to a set doesn't make it any more topological, as all functions on it are continuous, there are no non-trivial convergent sequences or nets, etc. So a discrete topology adds no information. It's true, for example, that any group can always be given the discrete topology and then it's a topological group (the group operations are continuous), but if we apply theorems from the general theory of topological groups, we cannot prove anything new that we couldn't prove by just plain algebra/group theory. The same holds for other types of (finite or not) structures in discrete mathematics: "discrete" here is the opposite of "continuous", one could say: we do not consider topological or analytical structure, but just the structure as a set. The discrete topology is as informative as no topology in this case.
Covering chessboard with L-tetrominoes
Hint: By counting the number of squares, $n$ is even. Hint: By using the standard coloring of $i+j \pmod{4}$ for square $(i,j)$, show that $n \neq 4k$. Hence $n= 4k+2$. I'm not certain about the converse.
Profit Maximization
For part b, the question is: for what $x$ is $R(x)=(110x+1000)(480-11x)$ maximized? First, differentiate using the product rule and then set the derivative equal to zero: $$R'(x)=110(480-11x)-11(110x+1000)=-2420x+41800$$ $$R'(x)=0 \Leftrightarrow -2420x+41800=0 \Leftrightarrow x=\frac{41800}{2420}\approx 17.27$$ The rebate offered must therefore be $$\$17.27\cdot11\approx\$190$$
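Confirming the maximizer numerically with base R:
revenue <- function(x) (110*x + 1000) * (480 - 11*x)
optimize(revenue, c(0, 40), maximum = TRUE)   # maximum near x = 17.27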
Is the inverse image of a sheaf along an embedding always a sheaf?
In general, it is not true. It is certainly true in the following special cases: $X \subset Y$ is open. $X$ is a point. A counterexample in the general case is quite easy: Take $Y$ to be any irreducible space, $X$ to be two isolated points and $\mathcal F$ a constant presheaf on $Y$. Since $Y$ is irreducible, the constant presheaf is a sheaf on $Y$. But it is not a sheaf on $X$.
Clarifications on the Incompleteness Theorems
Re: (1), you're correct (and I agree with your side note). The one quibble I have is with the order of things: you seem to view the incompleteness theorem as a result about models which then gets turned into a result about proofs via Completeness, but it's actually the other way around. Re: (2), this old question is relevant. Re: (3), such a theory is called (fully) categorical. In first-order logic, as a consequence of the compactness theorem (which can be proved as a corollary of the completeness theorem), there are no fully categorical theories at all (barring those theories of finite structures). We can however talk about categoricity in a given cardinality. For $\kappa$ an infinite cardinal, a (complete, consistent, and with no finite models) theory $T$ is $\kappa$-categorical iff $T$ has exactly one model, up to isomorphism, of cardinality $\kappa$. There are indeed interesting things we can say about this more restrictive kind of categoricity: If $T$ is countable, then there are only two kinds of categoricity: $T$ is categorical in some uncountable cardinal iff $T$ is categorical in every uncountable cardinal. So we just have $\aleph_1$-categoricity and $\aleph_0$-categoricity. This is Morley's theorem. Once we look at uncountable theories things get more complicated; see e.g. this exposition of Harrington and Makkai. Within the context of countable theories, we can identify relevant model-theoretic properties. On the $\aleph_0$-categoricity side, see here; for some information about uncountably categorical theories, see here. Finally, I have to mention the hilarious Vaught's Never-Two Theorem: no complete countable theory has exactly two countable models up to isomorphism. Yes, we can get every natural number except two.
Prove that $H \cap K = H$ where $H$ and $K$ are subgroups of a cyclic group
Hint: show that a cyclic group can only have a single element (and therefore only a single subgroup) of order $2$. More generally, for any finite, cyclic group $G$, for any order $d\mid |G|$, there is exactly one subgroup of $G$ with order $d$.
Given the adjoint $\mathrm{adj}(A)$, how do you find $\det(A)$ and $A^{-1}$?
Hint $$ \left( \begin{array}{ccc} \text{det}(A) & & 0\\ & \ddots & \\ 0 & & \text{det}(A) \end{array} \right)=\text{det}(A) \cdot \text{I}_n = A \cdot \text{adj}(A) $$ and $$ A^{-1}= \frac{1}{\text{det}(A)}\text{adj}(A) $$
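A quick numeric illustration in R; the matrix is an arbitrary invertible example, and for such a matrix $\text{adj}(A)=\det(A)\,A^{-1}$:
A <- matrix(c(2, 0, 1, 1, 3, 0, 0, 1, 4), 3)
adjA <- det(A) * solve(A)     # the adjugate, via the second identity
A %*% adjA                    # det(A) times the identity, as in the first display
det(A)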
When is $|f(x)|$ equivalent to $f(|x|)$
When the function is nonnegative on the nonnegative real axis, and the magnitude is constant on circles centered at the origin. And only then.
Why can't we use rational numbers as a root degree?
Roots are usually introduced a little after integer powers, as solutions of $$y=\sqrt[n]x\iff y^n=x.$$ This is the first encounter with irrational numbers and paves the way to non-integer powers. But in the initial setting, only natural $n$ are "accessible". After more theory, rational powers will come and the generalization $$\sqrt[p/q]x=\sqrt[p]{x^q}=x^{q/p}$$ where $p,q$ are integers can be introduced. After even more theory, you will get to real powers with $$\sqrt[r]x=x^{1/r}$$ holding.
Why can the null hypothesis in t-tests be a number other than zero?
We know that $\frac{\bar x-\mu}{s/\sqrt n}$ follows a t-distribution with $n-1$ degrees of freedom. Under the null hypothesis, we specify a particular value of $\mu$ that we think is the true population mean. Call it $\mu_0$. By extension, since this is assumed to be the true population mean, therefore $\frac{\bar x-\mu_0}{s/\sqrt n}$ follows a t distribution. If the sample mean $\bar x$ is far from $\mu_0$, it is unlikely that we would have observed such an extreme sample mean under this t distribution, because the shape of the density is bell-shaped, meaning there is less density on the tails. It is not the fact that it is far from $\mu_0$ that means that it is incorrect. The mean is only the expected value; it is possible that under another distribution, the farther you are away from $\mu_0$, the more density there is, and so the closer to $\mu_0$, the more unlikely you are to observe that value - take, for example, a bimodal distribution with its modes away from $\mu_0$. Anyway, since there is less density on the tails, we don't think that this distribution is correct. But what were we specifying for this distribution? Everything is fixed except for the mean. Therefore it must be the mean that is incorrect, and we reject the null mean. There is no particular reason why you must test the hypothesis that $\mu_0=0$. It can be any value that you believe is the true population mean (see the very first sentence).
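In R's t.test, for instance, the null mean is just a parameter; zero is only the default:
set.seed(1)
x <- rnorm(30, mean = 5, sd = 2)
t.test(x, mu = 5)    # tests H0: mu = 5 rather than mu = 0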
Question about $R[X]^*$, with $R=\mathbb{Z}/4\mathbb{Z}$
Set $R = \mathbb{Z}/4 \mathbb{Z}$. We want to prove: if $r \in R[X]$, then $q_r := 1 + 2r \in R[X]^*$, i.e. there exists a $p \in R[X]$ such that $p \cdot q_r = 1$. (comment by user26857) For a fixed $r \in R[X]$, observe that $(1+2r)^2 = 1 + 2r + 2r + 4 r^2 = 1 + 4 (r+r^2)$. This reduces to $(1+2r)^2 = 1$, since $ 4 = 0$ in $R$, and we find that $p = q_r$. If $u \in R[X]^*$, does there exist an $r \in R[X]$ such that $u = 1 + 2r$? The answer is yes, which we will prove by induction on the degree of $u$. If $\deg(u) = 0$, then $u \in \{1,3\}$ and we can write $3 = 1 + 2 \cdot 1$. Assume that if $\deg(u) \leq n-1$, then $u = 1 + 2 r$ for some $r \in R[X]$. Let $n > 0$ and $u = \sum_0^n a_i x^i$, $v = \sum_0^m b_j x^j$, $u \cdot v = 1$ and $a_n \neq 0 \neq b_m$. Since $u \cdot v = 1$, we must have $a_n \cdot b_m = 0$, which implies $a_n = b_m = 2$. If we can prove that $ u - 2x^n$ is a unit, we are done. Indeed: By the induction hypothesis $u - 2x^n = 1 + 2r$, thus $u = 1 + 2(r+x^n)$. To see that $U := u - 2x^n$ is a unit, observe that $\alpha := (u - 2x^n) \cdot (v - 2x^m) = uv - 2ux^m - 2vx^n + 4 x^{n+m} = 1 - 2(ux^m - vx^n) = 1 + 2 r$, where we used $2 = -2$ and set $r := (ux^m - vx^n)$. Thus, the product $\alpha$ is a unit by (1.), i.e. $\alpha^2 = (u - 2x^n)^2 \cdot (v - 2x^m)^2 = 1$. Defining $V = (u - 2x^n) \cdot (v - 2x^m)^2$, we get $U \cdot V = 1$. Hope this helps.
Questions about $d(x,A)$ and $\operatorname{diam}(A)$ in a metric space
For counterexamples for the general case, consider $A=\{(x_1,x_2)\in\mathbb{Q}^2\,|\,x_1^2+x_2^2=1\}$ and $x=(1,1)$ in $X=\mathbb{Q}^2$ for the first case and $A=(-\sqrt{2},\sqrt{2})\cap\mathbb{Q}$ and $x=2$ in $X=\mathbb{Q}$ for the second case. This means that completeness is something to be assumed. For counterexamples in the complete case, consider the complete metric space $\ell^2(\mathbb{N})$ whose elements are all infinite sequences $(x_n)$ of real numbers such that $\sum_n\,x_n^2<\infty$ and whose metric is given by $$d((x_n),(y_n))=\sqrt{\sum_{n=1}^\infty\,(x_n-y_n)^2}\text{.}$$ It is a classic result and/or an interesting exercise that this is indeed a complete metric space. In this space: Taking $A=\{(1+1/n)e_n\,|\,n\geq 1\}$, where $e_n$ is the sequence with a $1$ in the $n$-th position and $0$ in the rest, and $x=0$, we have that $A$ is closed, $d(x,A)=1$ and $d(x,y)>1$ for all $y\in A$. Thus this is a counterexample for the first statement. Taking $A=\{(1-1/n)e_n\,|\,n\geq 1\}$, we have that $A$ is closed, $\mathrm{diam}(A)=\sqrt 2$ and $d(x,y)<\sqrt 2$ for all $x,y\in A$. Therefore this is a counterexample for the second statement. In conclusion, even in the complete case there are counterexamples for your two statements. This means (up to what I can think) that a compactness assumption (e.g., that every closed ball is compact) is essential, as it has happened to you in the case of $\mathbb{R}^n$.
When does the next bus come?
This is a nice question, probably not as simple as it sounds; I'll give it a try. It is a Bayesian problem: when you arrive at the bus stop, you have no idea about when the last bus passed, so your prior on the distribution of the elapsed time $X$ since the bus last passed is uniform over $[0,n]$. As you observe the number of people in the queue, you update your distribution of $X$ to form a posterior distribution. Let $f_X$ be the pdf of $X$ and $N$ the random variable describing the number of people in the queue. Bayes' theorem states that: \begin{equation} f_X(t|N=k)=\frac{P(N=k|X=t)}{P(N=k)}f_X(t) \end{equation} $f_X(t)$ corresponds to your prior and thus reads $f_X(t)=\frac{1}{n}1_{[0,n]}$; and since people arrive according to a Poisson process with rate $\lambda$, the count $N$ given $X=t$ is Poisson with mean $\lambda t$, so $P(N=k|X=t)=e^{-\lambda t}\frac{(\lambda t)^k}{k!}$. $P(N=k)$ can be obtained equivalently by using the law of total probability or by observing that it must be so that $\int_{0}^n f_X(t|N=k)dt=1$. Either way, we obtain: \begin{align} I_k:=P(N=k)&=\frac{1}{n}\int_0^n e^{-\lambda t}\frac{(\lambda t)^k}{k!}dt\\ &=\frac{1}{n}\left(\left[-\frac{e^{-\lambda t}}{\lambda}\frac{(\lambda t)^k}{k!}\right]_0^n+\int_0^n e^{-\lambda t}\frac{(\lambda t)^{k-1}}{(k-1)!}dt\right)\\ &=-\frac{e^{-\lambda n}(\lambda n)^{k-1}}{k!}+I_{k-1}\\ &=I_0-\frac{e^{-\lambda n}}{\lambda n}\sum_{i=1}^{k}\frac{(\lambda n)^{i}}{i!} \end{align} Since $I_0=\frac{1}{\lambda n}(1-e^{-\lambda n})$, we get: \begin{equation} I_k=\frac{1}{\lambda n}\left(1-e^{-\lambda n}\sum_{i=0}^{k}\frac{(\lambda n)^{i}}{i!}\right)=\frac{1}{\lambda n}P(N(n)>k), \end{equation} where $N(n)$ denotes a Poisson variable with mean $\lambda n$. We can now write $f_X(t|N=k)$ as: \begin{equation} f_X(t|N=k)=\frac{\lambda}{P(N(n)>k)}e^{-\lambda t}\frac{(\lambda t)^k}{k!}1_{[0,n]} \end{equation} We may now compute the expected elapsed time since the bus last passed: \begin{align} E[X|N=k]&=\int_0^n t f_X(t|N=k)dt\\ &=\frac{1}{P(N(n)>k)} \int_0^n e^{-\lambda t}\frac{(\lambda t)^{k+1}}{k!}dt\\ &=\frac{(k+1)}{P(N(n)>k)} \int_0^n e^{-\lambda t}\frac{(\lambda t)^{k+1}}{(k+1)!}dt \end{align} We observe that the integral in the last equality is equal to $nI_{k+1}=\frac{1}{\lambda}P(N(n)>k+1)$, whence: \begin{equation} E[X|N=k]=\frac{k+1}{\lambda}\frac{P(N(n)>k+1)}{P(N(n)>k)} \end{equation} The expected waiting time is then given by $n-E[X|N=k]$. As an illustration, I have let $n=20$ minutes and $\lambda=3$ people/min and obtained the following graph, which seems to make a lot of sense
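The closed form is easy to evaluate in R with ppois(), using the answer's own numbers ($n=20$ minutes, $\lambda=3$ people/min):
n <- 20; lam <- 3
expected_wait <- function(k)
  n - (k + 1)/lam * ppois(k + 1, lam * n, lower.tail = FALSE) /
                    ppois(k,     lam * n, lower.tail = FALSE)
round(expected_wait(0:5), 3)   # expected wait as the queue length k grows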
Is $\sqrt{|xy|}$ equal to $\sqrt[4]{x^2y^2}$?
They are indeed equal. This follows since $$\begin{align} \sqrt[n]{x^n} = \left\{ \begin{array}{ccc} |x| & & n \text{ is even} \\ x & & n \text{ is odd} \end{array} \right. \end{align}$$ The derivatives are indeed equal (although neither function is differentiable at $0$).
Conformal map from half plane with a slit to a punctured disk
There does not exist any conformal map between those two sets. In general, if $S$ is a Riemann surface that is homeomorphic to the open annulus $A = \{z \mid 1 < |z| < 2\}$, then $S$ is conformally equivalent to one of the following Riemann surfaces: 1) The punctured plane $\mathbb C - \{0\}$ 2) The punctured disc $D(0,1)-\{0\} = \{z \mid 0 < |z| < 1\}$ 3) An annulus $A_r = \{z \mid 1 < |z| < r\}$ where $r > 1$ Furthermore, the one of these to which $S$ is conformally equivalent is unique, meaning that two Riemann surfaces in two different ones of these three categories are conformally inequivalent, and two Riemann surfaces $A_r$, $A_s$ in category 3) such that $r \ne s$ are conformally inequivalent. It turns out that $U_1$ falls into category 3), it is conformally equivalent to $A_r$ for some $r > 1$ (this is a good exercise). It follows that $U_1$ is not conformally equivalent to $U_2$ which is in category 2).
Baire category theorem in use on a plane
Suppose that the assertion is not true. This means that for every $(a,b)\in\{(x,y)\in\mathbb{R}^2 :x^2 +y^2 =1\}=S^1$ there exist a rational number $q\in\mathbb{Q}$ and $c\in F$ such that $b=qa+c$. For a given rational number $q\in \mathbb{Q}$, denote $A_q =\{(u,v)\in S^1 :v-qu\in F\}.$ Since $S^1 = \bigcup_{q\in\mathbb{Q}} A_q$ and $S^1$ is a complete metric space (with the Euclidean metric induced from $\mathbb{R}^2$), by Baire's theorem there exists $q_0 \in \mathbb{Q}$ such that the set $A_{q_0}$ is dense in some ball on the circle $S^1 .$ But this means that the set $\overline{A_{q_0}}$ contains some open arc of this circle. Let us now define a map $f:S^1 \rightarrow \mathbb{R}$ by $f(x,y) =y-q_0 \cdot x .$ Since $f$ is continuous, $f(\overline{A_{q_0}} ) \subset \overline{f(A_{q_0} )} \subset \overline{F} =F$, but the last shows that the set $F$ contains a connected subset with two or more elements, so $F$ must contain an open interval. Contradiction.
Cosine related to Bitwise XOR through Repeated Roots
For any $\theta \in [0,2)$, let $(b_0.b_1b_2\cdots)_2$ be the binary representation of $\theta$. $$\theta = \sum_{n=0}^\infty \frac{b_n}{2^n} \quad\text{ with }\quad b_n = {\rm mod}(\lfloor 2^n \theta \rfloor, 2) \in \{ 0, 1 \}$$ Let $c_n = b_n \veebar b_{n+1}$ $\theta_0 = \theta$, $\theta_1 = (b_1.b_2b_3\cdots)_2$, $\theta_2 = (b_2.b_3b_4\cdots)_2, \ldots$ $\phi_0 = b_0$, $\phi_1 = (b_0.b_1)_2, \ldots, \phi_n = (b_0.b_1b_2\cdots b_n)_2\ldots$ We have $$ \begin{align}2\cos(\pi\theta) = 2\cos(\pi\theta_0) &= \sqrt{2 + 2\cos(2\pi\theta_0)} \times \begin{cases} -1, & \theta_0 \in [\frac12,\frac32)\\ 1, & \text{ otherwise } \end{cases}\\ &= (-1)^{b_0+b_1} \sqrt{2+2\cos(\pi(2b_0 + \theta_1))}\\ &= (-1)^{c_0}\sqrt{2+2\cos(\pi\theta_1)} \end{align} $$ Repeating this process, we get $$\begin{align}2\cos(\pi\theta) = & (-1)^{c_0}\sqrt{2 + (-1)^{c_1}\sqrt{2 + 2\cos(\pi \theta_2)}}\\ \vdots\; &\\ = & (-1)^{c_0}\sqrt{2 + (-1)^{c_1}\sqrt{2 + (-1)^{c_2}\sqrt{\cdots (-1)^{c_m} \sqrt{2 + 2\cos(\pi\theta_{m+1})}}}}\tag{*1}\\ \vdots\;& \end{align} $$ In this process, if we terminate at step $(*1)$ by replacing $\theta_{m+1}$ with $0$, the value of the RHS of $(*1)$ becomes $2\cos(\pi\phi_m)$. Since $2\cos(\pi\phi_m) \to 2\cos(\pi\theta)$ as $m \to \infty$, we obtain the following nested radical expansion: $$2\cos(\pi\theta) = (-1)^{c_0}\sqrt{2 + (-1)^{c_1}\sqrt{2 + (-1)^{c_2}\sqrt{ 2 + \cdots }}}$$ Now take $x_2 = \theta$, then $$\begin{align} x_3 &= x_2 \veebar 2x_2 = (b_0c_0.c_1c_2\ldots)_2\\ \text{and}\quad x_4 &= 2\cos(\pi\theta)\\ &\,\Downarrow\\ x_5 &= \frac{x_4}{2} = \cos(\pi x_2) = \cos(x_1) = \cos(|x|) = \cos(x) \end{align}\\ $$ Aside from the extra rule of ignoring the leading $b_0$ in the binary representation of $x_2 \veebar 2x_2$, this is your recipe for computing $\cos(x)$.
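A minimal R sketch of the recipe: extract the bits $c_n$ of $\theta$ and evaluate the truncated nested radical from the inside out (m is the truncation depth, an arbitrary choice here):
nested_cos <- function(theta, m = 40) {
  b  <- floor(2^(0:(m + 1)) * theta) %% 2   # binary digits b_0, b_1, ...
  cc <- xor(b[-length(b)], b[-1])           # c_n = b_n XOR b_{n+1}
  val <- 2                                  # 2*cos(pi*0) for the truncated tail
  for (n in (m + 1):1) val <- (-1)^cc[n] * sqrt(2 + val)
  val / 2                                   # = cos(pi * theta)
}
c(nested_cos(0.3), cos(pi * 0.3))           # the two agree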
Can a $B$ with orthonormal rows/columns always be found so that $BP=0$?
This is not possible (if $B$ is square). Suppose $BP = O_n$ where $O_n$ is the $n\times n$ zero matrix. Let $\mathbf{p}_1$ be the first column of $P$. Then the first column of $BP$ is $B\mathbf{p}_1$, which must equal the zero vector $\mathbf{0}$ (first column of $O_n$). Since $P$ is column stochastic, $\mathbf{p}_1$ is a non-zero vector. Hence $B$ has a non-zero vector in its kernel, so $B$ is singular and thus cannot have orthonormal columns (equivalently, orthonormal rows). EDIT: I missed the part where you said you did not need $B$ to be square. The above is assuming $B$ is square. Some discussion for general (not necessarily square $B$): it is still impossible for $B$ to have orthonormal columns. This is because by same reasoning as above, $B$ must have non-trivial kernel, which is impossible for a $B$ with orthonormal columns (as orthonormal columns would necessarily be linearly independent). It can be possible to make $B$ have orthonormal rows though. Basically, if $\mathbf{r}^T$ is a row of $B$ (so $\mathbf{r}\in \mathbb{R}^n$ is a column vector), having $BP = O$ is equivalent to requiring $\mathbf{r}^T \mathbf{p}_j = 0$ for every column $\mathbf{p}_j$ of $P$. In other words, the rows of $B$ should all be orthogonal to every column of $P$, or equivalently all come from $\mathrm{im}(P)^{\perp}\subseteq \mathbb{R}^n$ (orthogonal complement of the image of $P$). So we can take $B$ to have as rows an orthonormal basis for $\mathrm{im}(P)^{\perp}$, provided such an orthonormal basis exists, which occurs iff $\mathrm{im}(P)^\perp$ has dimension at least $1$, or equivalently, $P$ is singular. IN SUMMARY: ● We can never have a $B$ with orthonormal columns satisfy $BP = O$. ● If $P$ is singular, then it is possible to have $B$ with orthonormal rows with $BP = O$. This will happen iff the rows of $B$ are orthonormal elements of $\mathrm{im}(P)^\perp \subseteq \mathbb{R}^{n}$. ● If $P$ is non-singular (has full rank), then we cannot have a $B$ with orthonormal rows satisfying $BP = O$. ● Any possible $B$ must be non-square. By the way, for any real matrix $P$, we have $\mathrm{im}(P)^\perp = \ker(P^T)$, which you can use to find orthogonal complements if you wish.
Does this definition say what I want it to?
It appears you want to say $$R_o=\{\,p(t)\mid \operatorname{freq}(p(t))\ge \alpha\,\}$$
What is the difference between $\mathbb{Q}(\alpha,i\alpha)$ and $\mathbb{Q}(\alpha,i)$? Where $\alpha = \sqrt[4]{10}$.
They're equal. To see this it suffices to notice that each contains the generators of the other, i.e. that $\alpha,i\alpha\in \mathbb{Q}(\alpha,i)$ and $\alpha, i=\frac{i\alpha}{\alpha}\in \mathbb{Q}(\alpha,i\alpha)$.
Singular power cardinals?
Lemma 3.4 in the following paper by Golshani & Hayut implies that this is consistent relative to the existence of a strong cardinal. Golshani, Mohammad; Hayut, Yair, On Foreman’s maximality principle, J. Symb. Log. 81, No. 4, 1344-1356 (2016). ZBL1387.03037.
Are quasinilpotent groups a Fitting class?
The generalized Fitting subgroup of $G$ is $F^*(G) = F(G)E(G)$, where $F(G)$ is the Fitting subgroup, and $E(G)$ is a central product of the quasisimple subnormal subgroups of $G$ (its components). So, if $M$ and $N$ are normal quasinilpotent subgroups of $G$, then $M=F(M)E(M)$ and $N=F(N)E(N)$, where $F(M)$, $E(M)$, $F(N)$, $E(N)$ are all normal in $G$. Clearly $F(MN) = F(M)F(N)$. The components of $E(M)$ and of $E(N)$ are all quasisimple subnormal subgroups of $G$ and hence components of $G$, so they are also components of $MN$. Hence $E(M)E(N) = E(MN)$ and so $F^*(MN)=F^*(M)F^*(N)=MN$.
Finding expression for probability given its PGF
You say you have obtained: $\varphi_X(s)=\dfrac{7-3s}{15-14s+3s^2} = \dfrac{-1}{2(s-3)}+\dfrac{-3}{2(3s - 5)} $ This partial fraction decomposition is correct. Now obtain the $k$-th derivative using: $\dfrac{\mathrm d^k ~~}{(\mathrm d s)^{k}}\dfrac 1{(as+b)} = \dfrac{k!\; (-a)^k }{(as+b)^{(1+k)}}$ So $\varphi_X^{(k)}(s)= \dfrac{k!\,(-1)^{k+1}}{2(s-3)^{k+1}}+\dfrac{k!\,(-3)^{k+1}}{2(3s - 5)^{k+1}}$ From there it's plain sailing since $\mathsf P(X=k) = \dfrac{\varphi_X^{(k)}(0)}{k!}$
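A numeric cross-check in R; the closed form for $\mathsf P(X=k)$ below is my simplification of $\varphi_X^{(k)}(0)/k!$, not something stated in the question:
p <- function(k) 1/(2 * 3^(k + 1)) + (3/5)^(k + 1) / 2
k <- 0:200; s <- 0.4
c(sum(p(k) * s^k), (7 - 3*s) / (15 - 14*s + 3*s^2))   # both ~ 0.58704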
Show that every nonsingular symmetric matrix is congruent to its inverse.
Since $A^{-1}=(A^{-1})^T$, it follows that $P^TAP=A^{-1}$ holds for $P=A^{-1}$.
Inverse Fourier transform in 3 dimensions
The inverse FT in 3D of a spherically symmetric function is given by $$f(x) = \frac{1}{(2 \pi)^3} \int_0^{\infty} dk \: k^2 \hat{f}(k) \int_0^{\pi} d\theta \, \sin{\theta}\, e^{-i k x \cos{\theta}} \: \int_0^{2 \pi} d\phi$$ For the function $\hat{f}(k) = 1/(1+k^2)$: $$\begin{align}f(x) &= \frac{2 \pi}{(2 \pi)^3} \int_0^{\infty} dk \: \frac{k^2}{1+k^2} \: \int_0^{\pi} d\theta \, \sin{\theta} \, e^{-i k x \cos{\theta}}\\ &= \frac{1}{4 \pi^2} \int_0^{\infty} dk \frac{k^2}{1+k^2} \frac{i}{k x} (e^{-i k x} - e^{i k x}) \\ &= \frac{1}{2 \pi^2 x}\int_0^{\infty} dk \frac{k^2}{1+k^2} \frac{\sin{k x}}{k} \\ &= \frac{1}{2 \pi^2 x}\left ( \frac{\pi}{2} - \int_0^{\infty} \frac{dk}{1+k^2} \frac{\sin{k x}}{k} \right )\\ &= \frac{1}{2 \pi^2 x}\left ( \frac{\pi}{2} - \frac{\pi}{2}\left(1-e^{-x}\right) \right)\end{align}$$ Therefore $$f(x) = \frac{e^{-x}}{4 \pi x}$$ as was to be shown. The reader should note that I used the fact that $$\frac{k^2}{1+k^2} = 1-\frac{1}{1+k^2}$$ in the 4th line, $$\int_{0}^{\infty} dk \frac{\sin{k x}}{k} = \frac{\pi}{2}$$ when $x>0$, in the 4th line, as well as $$\int_{0}^{\infty} \frac{dk}{1+k^2} \frac{\sin{k x}}{k} = \frac{\pi}{2}\left(1-e^{-x}\right),$$ which can be obtained via Parseval's theorem, in the 5th line.
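A quick numeric check of that last integral identity in R, for an arbitrary $x=1.5$ (integrate() may need a large finite upper limit instead of Inf if it complains about the oscillatory tail):
x <- 1.5
c(integrate(function(k) sin(k*x) / (k * (1 + k^2)), 0, Inf)$value,
  pi/2 * (1 - exp(-x)))   # the two agree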
Nonlinear recurrence relation
Hint: If your relation is $a_{n+1}^2a_n^2=2^n \sqrt{a_n}$, then put $a_n=2^{u_n}$, with $u_1=0$. If I am not wrong, you will find that $u_{n+1}+\frac{3}{4}u_n=\frac{n}{2}$, that is easy to solve. (Equivalently, you can look at $b_n=\log a_n$).
Using a point cloud as a basis
Let's say you have three vectors $u_{1,1}, u_{1,2}, u_{1,3}$ that span $\mathbb R^3$ and you have a fourth vector $v_1$, so you can write: $$v_1 = \tau_1 u_{1,1} + \tau_2 u_{1,2} + \tau_3 u_{1,3}$$ Finding the $\tau$:s is just solving a linear system. Now, let's say that you also have vectors $v_2$, $v_3$ such that $v_1, v_2, v_3$ span $\mathbb R^3$. And that you can decompose $v_2$ and $v_3$ as you decomposed $v_1$: $$v_2 = \tau_1 u_{2,1} + \tau_2 u_{2,2} + \tau_3 u_{2,3}$$ $$v_3 = \tau_1 u_{3,1} + \tau_2 u_{3,2} + \tau_3 u_{3,3}$$ and you have an additional vector $w = \alpha_1 v_1 + \alpha_2 v_2 + \alpha_3 v_3$. Now you can write: $$\begin{align} w &= \frac{1}{2} \left( \alpha_1 v_1 + \alpha_2 v_2 + \alpha_3 v_3 \right) + \frac{1}{2} \left( \alpha_1 v_1 + \alpha_2 v_2 + \alpha_3 v_3 \right) = \\ &= \frac{1}{2} \left( \alpha_1 v_1 + \alpha_2 v_2 + \alpha_3 v_3 \right) + \frac{1}{2}\alpha_1 \left(\tau_1 u_{1,1} + \tau_2 u_{1,2} + \tau_3 u_{1,3} \right) + \\ &+ \frac{1}{2}\alpha_2 \left( \tau_1 u_{2,1} + \tau_2 u_{2,2} + \tau_3 u_{2,3} \right) + \frac{1}{2}\alpha_3 \left( \tau_1 u_{3,1} + \tau_2 u_{3,2} + \tau_3 u_{3,3} \right) \end{align}$$ You can continue this process as long as you need (decomposing the $u_{i,j}$ and so on). The $v$ and $u$-vectors are your vectors that you have got from the points. Of course, all the decompositions of these can be calculated beforehand, if you want to apply this to more than one vector, and you just calculate the $\alpha$:s for every vector. You will get zeros if one of the $v$:s is not part of $w$, and this zero will "trickle down" to produce a lot of zeros. You might want to have "backup decompositions" if you get a zero $\alpha$ (if this is an option). Of course, when you choose which vectors should go where, you should make sure that all coefficients are non-zero. You don't have to decompose in halves, as I have done. You can use any other constants as long as they sum to 1. Of course, this decomposition is not unique. And technically, this is not a basis. A basis is linearly independent. Now I am curious, why do you want this? Can't you just calculate the image of every point?
How can I solve $x\cos(x)=\pi$ without looking at a graph
In the first place, there is no hope for an analytical solution because $x$ appears both outside and inside of a trigonometric function. There is indeed the simple solution $x=-\pi$, but this is "accidental"; you won't find a formula for the other roots. The trick to address such difficult equations is to find the extrema of the function, because you are sure that there is at most one solution of $f(x)=y$ between successive extrema (unless there are discontinuities), and there is exactly one when the values at those extrema have opposite signs. To locate the extrema, we need to cancel the derivative, $$f'(x)=\cos(x)-x\sin(x)=0.$$ This equation isn't much more appetizing than the first and we seem to be stuck. Anyway, as $x=0$ isn't a solution, we can solve $$g(x)=\cot(x)-x=0$$ instead. Now, taking the derivative, $$g'(x)=-\frac1{\sin^2(x)}-1=0$$ has no solution as the LHS is strictly negative, and the function $g$ has no extrema. Anyway, there are discontinuities as $g$ has vertical asymptotes for $x=k\pi$, and it is monotonic in all intervals $(k\pi,k\pi+\pi)$, running from $\infty$ to $-\infty$. So in every such interval, the function $g$ and the derivative $f'$ have exactly one root, and the initial function $f$ has exactly one extremum. It just remains to check if there is a change of sign between $$f(k\pi)=k\pi\cos(k\pi)-\pi=((-1)^kk-1)\pi$$ and $$f(k\pi+\pi)=(k\pi+\pi)\cos(k\pi+\pi)-\pi=(-(-1)^{k}(k+1)-1)\pi.$$ The sequence of values $f(k\pi)/\pi$ for $k=-5,\dots,5$ is $$4,-5,2,-3,0,-1,-2,1,-4,3,-6,$$ and the sign changes for every $k$ but $k=-1$, corresponding to the exact root $x=-\pi$, and $k=0$, no root. In conclusion, every interval $(k\pi,k\pi+\pi)$, for $k<-1$ and $k>0$, contains a single root (to be computed by numerical methods), and there is an extra root at $x=-\pi$. (Also notice that for growing $k$, the equation $\cos(x)=\frac\pi x$ tends to $\cos(x)=0$, which you know how to solve :) A slightly better approximation is obtained by assuming the multiplicative $x$ to remain constant in the interval, $x\approx(k+\frac12)\pi$, and solving $\cos(x)=\frac1{k+\frac12}$.)
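Locating a few of these roots numerically in R, picking intervals whose endpoint values have opposite signs:
f <- function(x) x * cos(x) - pi
sapply(c(-4, -3, 1, 2, 3), function(k)
  uniroot(f, c(k * pi, (k + 1) * pi))$root)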
Constraints on sets of RPG skill bonuses as a graph problem
I will use slightly different notation. Denote an empire as a function $E: \{1,\dots,32\} \rightarrow \mathbb{N}_0$ satisfying your constraints, such as "there are only two $i,j$ with $E(i)=E(j)=4$" and so on. Generate $n$ random empires $E_1,\dots,E_n$ and now start generating religions $R: \{1,\dots,32\} \rightarrow \mathbb{N}_0$, but such that $R(i)+\max_k E_k(i) \leq 5$ for all $i$. If you generate a religion satisfying this condition, then you get a valid set of empires and religions. It would be quite boring to calculate the number of possible combinations by hand, but it should not be hard to write a program for that.
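A minimal generate-and-test sketch in R; the bounds (bonuses in 0..5, a cap of 5, 32 slots, 5 empires) are assumptions for illustration, and the full empire constraints (e.g. "exactly two slots equal 4") would need their own validity check:
n_slots <- 32; n_emp <- 5
E <- matrix(sample(0:5, n_emp * n_slots, replace = TRUE), nrow = n_emp)
cap <- 5 - apply(E, 2, max)                         # headroom left in each slot
R <- sapply(cap, function(m) sample(m + 1, 1) - 1)  # uniform on 0..m
all(R + apply(E, 2, max) <= 5)                      # TRUE by construction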
What is the term independent of $x$ in the expansion of $(2x^{-1} + 3x^2)^{12}$?
The $k$-th term in the binomial expansion of this binomial is $$\binom {12}k2^kx^{-k}3^{12-k}x^{2(12-k)}=\binom {12}k2^k3^{12-k}x^{24-3k},$$ and it is a constant if and only if $k=8$, in which case the coefficient is $$\binom {12}82^8\,3^4=10\,264\,320.$$
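A one-line check in R:
choose(12, 8) * 2^8 * 3^4   # 10264320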
Fixed Point Property for a special space?
Here is one solution which works even when you replace the closed unit interval with the closed unit $N$-ball $B$. I will describe the steps of the proof and leave you to fill in the details. (Since the problem smells like homework.) (0). Prove the following lemma akin to the uniform continuity of continuous functions on compact metric spaces (and proven in a similar fashion): Lemma. Let $f: (X,d_X)\to (Y,d_Y)$ be a continuous map between two compact metric spaces such that for every $y\in Y$, $diam(f^{-1}(y))\le \epsilon$. Then there exists $\sigma>0$ (depending on $f$ and on $\epsilon$) such that $$ \forall x_1, x_2\in X, ~~~d_Y(f(x_1), f(x_2))<\sigma \Rightarrow d_X(x_1, x_2)< 2\epsilon. $$ Proof. Suppose the claim fails. Then for every natural number $n$ there exists a pair of points $x_n, x'_n\in X$ such that $d_X(x_n, x'_n)\ge 2\epsilon$ while $$ d_Y(f(x_n), f(x'_n))<\frac{1}{n}.$$ In view of compactness of $X$, after passing to a subsequence, we can assume that $$ \lim_{n\to\infty} x_n=x, \lim_{n\to\infty} x'_n=x', $$ with $d_X(x,x')\ge 2\epsilon$, while $$ \lim_{n\to\infty} f(x_n)= \lim_{n\to\infty} f(x'_n)=y\in Y, $$ and $f(x)=f(x')=y$. Thus, the preimage of $y$ has diameter $\ge 2\epsilon$. A contradiction. qed (1). Given $f$ and $g$ as in your problem ($f$ is surjective and $diam(f^{-1}(y))<\epsilon$ for all $y\in B$), construct a piecewise-linear continuous map $h: B\to B$ such that the following diagram is "almost commutative": $$ \require{AMScd} \begin{CD} X @>{g}>> X\\ @VVfV @VVfV \\ B @>{h}>> B \end{CD} $$ meaning that $d(f\circ g, h\circ f)<\delta(\epsilon)$, where $$ \lim_{\epsilon\to 0}\delta(\epsilon)=0. $$ I will explain how to do this in the case $B=[0,1]$, the extension to the case when $B$ is higher-dimensional is quite straightforward. In order to construct such $h$ first pick a finite subset $0=y_0< y_1 < y_2 < ... <y_n\in [0,1]$ (with $|y_i- y_{i-1}|$ sufficiently small for all $i$) such that $$ \forall i\in \{1,...,n\}, ~~diam(f^{-1}([y_{i-1}, y_i])) < 2\epsilon $$ Then define $h$ on the finite subset $\{y_0,...,y_n\}$ so that $$ \forall y_i, \exists x_i\in f^{-1}(y_i), h(y_i)=fg(x_i). $$ Then extend $h$ to the rest of $[0,1]$ linearly on each interval $[y_{i-1},y_i]$. (2). Use the fact that $B$ has the fixed point property to show that $g$ "almost" has a fixed point, i.e.: For every $\epsilon>0$ there is a point $x\in X$ such that $d(g(x), x)\le \eta(\epsilon)$, where $$ \lim_{\epsilon\to 0}\eta(\epsilon)=0. $$ To prove this, take $y\in B$ such that $h(y)=y$ and think of its preimage $f^{-1}(y)$. (3). Conclude that there is a sequence $(x_n)$ in $X$ such that $$ \lim_{n\to\infty} d(g(x_n), x_n)=0. $$ (4). Use compactness of $X$ to show that $g$ has a fixed point in $X$.
If $ab=3$ and $\frac1{a^2}+\frac1{b^2}=4,$ then $(a-b)^2=\;$?
$$\frac1{a^2}+\frac1{b^2}=4$$ $$a^2b^2\left(\frac1{a^2}+\frac1{b^2} \right)=4a^2b^2$$ $$b^2+a^2=(2ab)^2$$ $$a^2-2ab+b^2=(2ab)^2-2ab$$ $$(a-b)^2=(2ab)^2-2ab=(2\cdot3)^2-2\cdot3=30$$
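A numeric instance in R: pick actual $a,b$ with $ab=3$ and $\frac1{a^2}+\frac1{b^2}=4$ (so $a^2+b^2=36$ and $a^2b^2=9$), then check $(a-b)^2$:
a2 <- (36 + sqrt(36^2 - 4*9)) / 2   # a^2 solves t^2 - 36 t + 9 = 0
a <- sqrt(a2); b <- 3 / a
c(1/a^2 + 1/b^2, (a - b)^2)         # returns 4 and 30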
$k$ such that $n,2n,\dots,kn$ have odd sum of digits
Hint: consider the number $n=9\cdots9$ with $2k-1$ $9$'s, i.e. $n=10^{2k-1}-1$. Then for every $1\leq i\leq 10^{2k-1}$, the sum of digits of $in$ is odd (it is equal to $9(2k-1)$), which proves your statement. I will provide a proof if this is necessary.
Evaluate $\sum_{r=2}^{\infty} \frac{2-r}{r(r+1)(r+2)}$
Note that\begin{align}\frac{2-r}{r(r+1)(r+2)}&=\frac1r-\frac3{r+1}+\frac2{r+2}\\&=\frac1r-\frac1{r+1}-\frac2{r+1}+\frac2{r+2}.\end{align}Therefore, your series is naturally the sum of two telescoping series and its sum is$$\frac12-\frac23=-\frac16.$$
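Partial sums confirm the value numerically in R:
r <- 2:1e6
c(sum((2 - r) / (r * (r + 1) * (r + 2))), -1/6)   # both ~ -0.1666667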
Show that the equilibrium point $(0,0)$ is asymptotically stable and give an estimate of its basin of attraction
In addition to what has been said by @Evgeny and @MrYouMath: the set $$ M=\left\{ (x,y)\in\mathbb R^2 :\; x^2+y^2<2 \right\} $$ is a positively invariant set of the considered system, since $\forall (x,y)\in M$ $$ \dot V=-x^4-y^4+x^2y^2(x^2+y^2)\leq-x^4-y^4+2x^2y^2=-(x^2-y^2)^2\leq 0; $$ it is also a subset (a guaranteed estimate) of the domain of attraction.
Asymptotic distribution of the mean?
No, the answer should be $\mathcal N(\mu, \sigma^2/n)$. Of course this is assuming the distribution has a finite variance.
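A quick simulation illustrating the scale of that variance; the exponential data (so $\mu=\sigma=1$) and $n=50$ are arbitrary choices for illustration:
m <- replicate(1e4, mean(rexp(50)))
c(mean(m), var(m), 1/50)   # variance of the sample mean ~ sigma^2/n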
Factor $x^6+x^5+x^4+x^3+x^2+x+1$ in $\mathbb{F}_2[x]$
Well, in general, one usually recursively builds irreducible polynomials of low degree via Euclidean division. But in this case, there is a very nice trick: let $P(X) = X^6+X^5+X^4+X^3+X^2+X+1$. Then it is not hard to show that $P(X)(X+1) = X^7+1$ (one can either compute this directly, or think of the analogous result for truncated geometric series). But then $P(X)(X+1)(X) = X^8+X = X^{2^3}+X$, which is the product of all irreducible polynomials of degree dividing $3$ over $\mathbb{F}_{2}$. So $P(X)$ must factor as the product of the unique two irreducible polynomials of degree $3$ over $\mathbb{F}_{2}$. I leave it to you to compute these; feel free to comment if you need more help.
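Once you have the two factors, you can verify the product in R; convolve() with a reversed second argument is the usual polynomial-multiplication idiom, with coefficients in ascending order:
p1 <- c(1, 1, 0, 1)   # 1 + x + x^3
p2 <- c(1, 0, 1, 1)   # 1 + x^2 + x^3
round(convolve(p1, rev(p2), type = "open")) %% 2   # 1 1 1 1 1 1 1 = P(X)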
Probability of independent & mutually exclusive events
For question $1$, you can draw a Venn diagram for independent events, however you will not be able to tell if the events are independent by looking at the Venn diagram, as it will just look like a standard Venn diagram. (standard meaning how a Venn diagram would look for $2$ events $A,B$, that are not mutually exclusive) For question $2$, if $A$, $B$ are $2$ independent events then $P(A\cup B)=P(A)+P(B)-P(A)P(B)$ For question $3$, mutually exclusive events are necessarily dependent events (assuming the probability of both events is greater than $0$). Proof: Recall the following: Let two events, $X$ and $Y$ be independent. Then it follows that $P(X \cap Y)=P(X)P(Y).$ These events are mutually exclusive if $P(X \cap Y)=0.$ Lastly, remember that we are assuming $P(X)>0$ and $P(Y)>0$. So it follows that $P(X)P(Y)>0.$ If these events were independent then $P(X)P(Y)=P(X\cap Y)>0$, but this would mean that they aren't mutually exclusive. Therefore, the events can not be independent and mutually exclusive simultaneously if both their probabilities are more than $0$. $Q.E.D.$
Bounded $C^\infty$ functions $\{u_\epsilon\}$ with bounded derivatives s.t. $\to$ uniformly to bounded and unif. cont. $u:\mathbb{R}^n \to \mathbb{R}$
If $u$ is not bounded, it is impossible for a sequence of bounded functions to converge uniformly to $u$. EDIT: For the revised question, the answer is still no. Consider e.g. $u(x) = \cos(x^2)$, which is bounded and has $|u(\sqrt{n\pi}) - u(\sqrt{(n+1)\pi})| = 2$. Note that $\sqrt{(n+1)\pi} - \sqrt{n\pi} \to 0$ as $n \to \infty$. If $|v'| \le B$, then for sufficiently large $n$ we have $|v(\sqrt{n\pi}) - v(\sqrt{(n+1)\pi})| \le B (\sqrt{(n+1)\pi} - \sqrt{n\pi}) < 1$, so one of $|v(\sqrt{n \pi}) - u(\sqrt{n\pi})|$ and $|v(\sqrt{(n+1)\pi}) - u(\sqrt{(n+1)\pi})|$ is greater than $1/2$. EDIT: With the added assumption of uniform continuity, the answer is yes. Take convolutions of $u$ with a sequence of functions $v_n \ge 0$ which are $C^\infty$, supported in $\{x: \|x\| < 1/n\}$, and have $\int v_n = 1$.
Conditional Expected Value for uniformly distributed variable
The solution is: $$\begin{align}\mathsf E(\Theta\mid\alpha\Theta\leq w) ~=~& \dfrac{\int\limits_0^{\min(w/\alpha,2)} \theta~f_\Theta(\theta)\operatorname d\theta}{\int\limits_0^{\min(w/\alpha,2)} f_\Theta(\theta)\operatorname d\theta} \\[1ex] =~& \dfrac{\tfrac 1 2\int\limits_0^{\min(w/\alpha,2)} \theta\operatorname d\theta}{\tfrac 1 2\int\limits_0^{\min(w/\alpha,2)} 1\operatorname d\theta} \\[1ex] =~& \tfrac 1 2 \min(w/\alpha,2) \\[1ex] =~&\begin{cases}w/2\alpha & : w< 2\alpha \\[0.5ex] 1 & :2\alpha\leq w\end{cases}\end{align}$$ A quicker way to arrive at this is to observe that if $\Theta~\sim~\mathcal U[0;2]$ then $\Theta\vert\Theta{\leq}w/\alpha ~\sim~\mathcal U[0;w/\alpha]$ whenever $0<w<2\alpha$. $$\mathsf E(\Theta\mid\Theta\leq w/\alpha) ~=~\int_0^{\min(w/\alpha, 2)}\frac \theta{\min(w/\alpha,2)}\operatorname d\theta$$
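A Monte Carlo check of the formula in R, with arbitrary $\alpha=0.8$, $w=1$ (so $w/\alpha<2$):
set.seed(42)
theta <- runif(1e6, 0, 2)
alpha <- 0.8; w <- 1
mean(theta[alpha * theta <= w])   # ~ w/(2*alpha) = 0.625 here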
A set is open in $(X_1 \times X_2, d)$ if and only if it is open in $(X_1 \times X_2, p)$.
For non-negative $a$ and $b$ the following holds: $\max(a,b)\le a+b\le 2\max(a,b)$. From this you can show that each $d$-ball is contained in a $p$-ball and each $p$-ball is contained in a $d$-ball.
The space of full rank matrices in $M(m\times n,\mathbb{R})$ is connected
If you can compute the global functions on this variety, you can tell whether it's connected or not. Recall that if $X=\bigcup_{i}X_i$ is a disconnected algebraic variety with finitely many connected components $X_i$, then $k[X]=\bigoplus_{i}k[X_i]$. So if you can't decompose $k[X]$ into a direct sum of subrings, then your variety is connected. The easiest place to demonstrate you can't do that is with constant functions.
What could be the mathematical equation of the given signal?
Just looking at the signal, it seems to have components only up to the fourth harmonic. You can read the $y(t)$ values at intervals of $\frac T8$ from the plot; I would pick the peaks of all the obvious waves, then use the orthogonality of the Discrete Fourier Transform to compute the coefficients. You can do that in Excel easily. Added: As pointed out in the comments, the function is odd so only sines will be involved. It appears four terms will be enough to get close, so the form would be $$y(t)=\sum_{i=1}^4 a_i \sin \frac {2 \pi i t}T$$ You can just pick off the first four peaks from the graph, which seem to be at $\frac T{16}, \frac {3T}{16}, \frac {5T}{16}, \frac {7T}{16}$, and solve the four simultaneous equations for the $a_i$. The FFT is easier if you learn how to do it, as it gives you each coefficient directly.
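A sketch of the FFT route in R, assuming 16 equally spaced samples over one period; a_true is invented here, whereas in practice y would be read off the plot:
a_true <- c(1.2, 0, 0.4, 0.1)
n <- 0:15
y <- rowSums(sapply(1:4, function(i) a_true[i] * sin(2*pi*i*n/16)))
cf <- fft(y) / length(y)
-2 * Im(cf)[2:5]   # recovers a_true: for a real odd signal, a_i = -2 Im(cf[i+1])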
Can an uncountable subset of $\mathbb{R}$ have empty intersection with its derived set?
Hint: suppose no point of $A$ is a limit point of $A$. Then every point has an open neighbourhood in which it is the only element of $A$...
Fractional part of a certain random variable
From the following simulation, the claim of independence seems incorrect. u = runif(10^5) v = 2*u - floor(2*u) cor(u,v) ## 0.502127 A histogram of $V$ seems consistent with $V \sim Unif(0,1),$ as claimed. But correlation is pretty clear from a plot of $V$ against $U.$
If this limit exists and is finite, does the other one have those properties?
The existence of $\lim_{x\to 0}f(x^2)$ is equivalent to the existence of $\lim_{x\to 0^{+}}f(x)$.
How to compute the series, which seems different from the questions I raised before?
The biggest term in the sum $\sum_{k=2}^N((k-1)/N)^{N^2}$ is the one with $k=N$, and that's $$((N-1)/N)^{N^2}=((1-(1/N))^N)^N$$ Now $(1-(1/N))^N\to(1/e)$ as $N\to\infty$, so the biggest term looks like $e^{-N}$. If you can get some bound for $(1-(1/N))^N$ in terms of $1/e$ then you should be able to relate your whole problem to $N^2e^{-N}$, which goes to 0 as $N\to\infty$.
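A quick numeric illustration of the scale in R, with an arbitrary $N=50$:
N <- 50; k <- 2:N
c(sum(((k - 1)/N)^(N^2)), N^2 * exp(-N))   # both astronomically small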
Cauchy Sequence of Differentiable Functions Implies Cauchy Sequence of Derivatives?
No, no one can provide a proof of that because it's false. Consider the functions $\frac1n\sin(nx)$.
$R = \mathbb{Z}[ i ] / (5)$ is not an integral domain? Why?
You don't need to know that $5 = (2-i)(2+i)$ is a prime factorization; all you need is that the two factors are not units. One way to see that $2-i$ is not a unit is by computation: $$ \mathbb{Z}[i] / (2-i) \xrightarrow{i \to x} \mathbb{Z}[x] / (x^2 + 1, 2-x) \xrightarrow{x \to 2} \mathbb{Z} / (2^2 + 1) \cong \mathbb{Z} / 5 $$ (all arrows are isomorphisms). The result isn't the zero ring, so $2-i$ is not a unit. The fact the result is a domain does additionally prove that $2-i$ is prime, though.
Freely homotopic but not homotopic
Hint. It is not difficult to show that free homotopy classes of closed paths correspond to conjugacy classes in the fundamental group (assume the space is path-connected for this). Therefore you should look for an example with a nonabelian fundamental group.
What is the correct way of defining this function?
Having deciphered that quaint computer talk, it appears the function you seek is $f(n,a) = 2n$, $f(n,b) = 2n + 1$.
If $\det{e^A}$ is maximized, is $\det{A}$ also maximized?
By Jacobi’s formula, $$\det(e^A) = e^{\mathrm{tr}(A)}$$ So if you want to maximise $\det(e^A)$, you want to be maximising the trace of A (and vice-versa), not the determinant of A. These are not the same. So in your case, you have a maximised trace, $\mathrm{tr}(A) = \sum_i \lambda_i$.
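Jacobi's formula is easy to check numerically in base R, e.g. for a random symmetric (hence diagonalizable) matrix:
set.seed(7)
A <- crossprod(matrix(rnorm(9), 3))          # symmetric 3x3
e <- eigen(A)
expA <- e$vectors %*% diag(exp(e$values)) %*% t(e$vectors)   # matrix exponential
c(det(expA), exp(sum(diag(A))))              # the two numbers agree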
Is $\mathbb{Z}[\frac{1}{2}]$ Noetherian?
$\Bbb Z[\frac12]$ is not a finitely generated $\Bbb Z$-module. But $\Bbb Z[\frac12]$ is a finitely generated $\Bbb Z$-algebra. It is isomorphic to a quotient of $\Bbb Z[X]$ which is Noetherian (by the Hilbert basis theorem) and so is itself Noetherian.
How do you convert a transformation function to matrix?
If $T:V\to W$ is a linear transformation and $\{v_1,v_2,\ldots,v_n\}$ is a basis of $V$ knowing the images of $v_1$, $v_2$, $\ldots,v_n$ under $T$ we can get the asked matrix. So, as $\{(1,0),(0,1)\}$ is a basis of $V_2$ we have $T(1,0)=\begin{bmatrix}1\\i\end{bmatrix}$ and $T(0,1)=\begin{bmatrix}i\\1\end{bmatrix}$, then $$T(x,y)=\begin{bmatrix}1 & i \\ i & 1\end{bmatrix}\begin{bmatrix}x\\y\end{bmatrix}$$
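The resulting matrix applied to a sample vector in R (which handles complex matrices natively):
A <- matrix(c(1, 1i, 1i, 1), nrow = 2)   # columns are T(1,0) and T(0,1)
A %*% c(2, 3)                            # T(2,3) = (2 + 3i, 2i + 3)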
Every II-finite set is III-finite
Revised, since there was an oversight in my original argument. I’ve now looked at Tarski’s paper; the very old-fashioned notation is a worse problem than the French. I’ve updated it and filled in some details. The proof in question is on pages $94$ and $95$. In essence he starts by assuming that there is an injection $\varphi:\wp(X)\to\wp(X)$ such that $\varphi[\wp(X)]\subsetneqq\wp(X)$, picking $A_0^{(0)}\in\wp(X)\setminus\varphi[\wp(X)]$, and letting $A_{n+1}^{(0)}=\varphi(A_n^{(0)})$ for $n\in\omega$. Let $\mathscr{A}$ be the closure of the family $\{A_n^{(0)}:n\in\omega\}$ under finite intersections. If there is a sequence $\langle B_n:n\in\omega\rangle$ in $\mathscr{A}$ such that $B_n\supsetneqq B_{n+1}\ne\varnothing$ for each $n\in\omega$, let $D_n=B_n\setminus B_{n+1}$ for $n\in\omega$. The sets $D_n$ are pairwise disjoint and non-empty. For $n\in\omega$ let $C_n=\bigcup_{k\le n}D_k$; then $\{C_n:n\in\omega\}$ is a chain with no maximum element. Assume, then, that no such sequence $\langle B_n:n\in\omega\rangle$ in $\mathscr{A}$ exists. Then in particular there is no strictly increasing sequence $\langle n_j:j\in\omega\rangle$ in $\omega$ such that $$\bigcap_{k\le n_j}A_k^{(0)}\supsetneqq\bigcap_{k\le n_{j+1}}A_k^{(0)}$$ for each $j\in\omega$. Thus, there is an $n_0\in\omega$ such that for each $m>n_0$, either $A_m^{(0)}\subseteq\bigcap_{k\le n_0}A_k^{(0)}$, or $A_m^{(0)}\cap\bigcap_{k\le n_0}A_k^{(0)}=\varnothing$. It follows that there is a strictly increasing sequence $\langle m_k:k\in\omega\rangle$ in $\omega$ such that $m_0>n_0$, and either $A_{m_\ell}^{(0)}\subseteq\bigcap_{k\le n_0}A_k^{(0)}$ for each $\ell\in\omega$, or $A_{m_\ell}^{(0)}\cap\bigcap_{k\le n_0}A_k^{(0)}=\varnothing$ for each $\ell\in\omega$. For $k\in\omega$ let $A_k^{(1)}=A_{m_k}^{(0)}\setminus\bigcap_{j\le n_0}A_j^{(0)}$; the sets $A_k^{(1)}$ are distinct, $$\bigcap_{k\in\omega}A_k^{(1)}\subseteq\bigcup_{k\in\omega}A_k^{(0)}\;,$$ and $$\left(\bigcap_{k\le n_0}A_k^{(0)}\right)\cap\bigcup_{k\in\omega}A_k^{(1)}=\varnothing\;.$$ Recursively construct in this fashion sequences $\langle A_k^{(\ell)}:k\in\omega\rangle$ for $\ell\in\omega$ such that each $A_k^{(\ell)}\in\mathscr{A}$, and for each $\ell\in\omega$ $$\bigcap_{k\in\omega}A_k^{(\ell+1)}\subseteq\bigcup_{k\in\omega}A_k^{(\ell)}\;,$$ and $$\left(\bigcap_{k\le n_\ell}A_k^{(\ell)}\right)\cap\bigcup_{k\in\omega}A_k^{(\ell+1)}=\varnothing\;.$$ Now for $\ell\in\omega$ let $D_\ell=\bigcap_{k\le n_\ell}A_k^{(\ell)}$; then $\{D_\ell:\ell\in\omega\}$ is a family of pairwise disjoint, non-empty subsets of $X$, and $\left\{\bigcup_{k\le\ell}D_k:\ell\in\omega\right\}$ is a chain in $\wp(X)$ with no largest element.
Ordered Sets - Down-sets
$A \prec B$ implies $A < B$ in the poset $\langle \mathcal{O}(P); \subseteq \rangle$, so $A \subsetneq B$, which implies there is at least some $b \in B\setminus A$. So clearly $A \cup \{b\} \subseteq B$. You will need to show $A \cup \{b\}$ is a down-set, so a member of $\mathcal{O}(P)$. Can you show that $b$ is minimal in $P \setminus A$, as required? Can there be another element in $B\setminus A$ besides $b$?
Question about the derivative definition
Instead of $h$ here I will use $\Delta x$ as I think it makes more sense to consider a change in $x$. For $$\lim\limits_{\Delta x\to 0} \frac{f(x+\Delta x) - f(x)}{\Delta x}\tag{1}$$ at no point do we ever substitute $\Delta x=0$. We can only say that in the limit $\Delta x$ tends to zero; this is why there is the limit notation directly in front of the fraction. Consider the graph below, which is a graphical representation of Equation $(1)$: You can see from the graph that the Secant line, which is the line that intersects the purple curve at $(x,f(x))$ and at $(x+\Delta x,f(x+\Delta x))$, is an approximation to the derivative (tangent) of the purple curve at $(x,f(x))$. Now imagine $\Delta x$ getting smaller and smaller until eventually it becomes vanishingly small; which we call 'infinitesimal' (loosely speaking this is as small as you can possibly get but still greater than zero). At this point you can see that the Secant line approaches the Tangent line or derivative at $(x,f(x))$ as $\Delta x$ tends to zero. Thus, the Secant line through $(x,f(x))$ and $(x+\Delta x,f(x+\Delta x))$ approximates the Tangent line arbitrarily well in this limit, and this is the exact meaning of Equation $(1)$.
Determinant of $2\times 2$ matrix over $\mathbb{Z}/2\mathbb{Z}$
Actually, what is true in any commutative ring is that a matrix is invertible if and only if the determinant is a unit in that ring. This is a consequence of Cramer's rule. In this case, you have two options. Either just compute all $2\times 2$ matrices over $\mathbb Z/2$ and check if they are invertible (there are just $2^4=16$ of them, so this is a feasible task). Or, for a generic matrix (let the coefficients be unknown), use row reduction to find the inverse matrix. You will have to use that the determinant is non-zero at some point.
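The brute-force option is a few lines of R; over $\mathbb Z/2$ the determinant is a unit exactly when it is $1$ mod $2$:
M <- expand.grid(a = 0:1, b = 0:1, c = 0:1, d = 0:1)   # all 16 matrices
sum((M$a * M$d - M$b * M$c) %% 2 == 1)   # 6, the order of GL(2, F_2)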
Representing sum of power in different way.
Theorem 1: Let $a_1,a_2,...,a_m$ be finitely many integers, and write $$\sum_{k>0}\binom{n}k a_k= \frac{1}d\sum_{l>0}b_ln^l,$$ where the $b_l$ are integer coefficients and the denominator $d$ is the smallest positive integer for which such integer coefficients exist. Then, writing $n=dt+r$, $$\sum_{k>0}\binom{n}k a_k=\sum_l(x_lt+y_l)n^l$$ with $x_l$ and $y_l$ integers. Proof: Denote $\sum b_l(x+r)^l=\sum c_l x^l$. The numbers $c_l$ are still integers, $c_0=\sum b_l r^l$ is divisible by $d$, and we have, denoting $dt+r=n$, $$ \frac1d\sum b_l(dt+r)^l=\frac1d\sum c_l (dt)^l=\frac{c_0}d+\sum_{l>0} c_l t\cdot (dt)^{l-1}=\\ \frac{c_0}d+\sum_{l>0} c_l t\cdot (n-r)^{l-1}= \frac{c_0}d+\sum_{l>0,0\leqslant j\leqslant l-1} c_l {l-1\choose j}(-r)^{l-1-j}n^jt, $$ In the case of $\sum_{k>0}\binom{n}k a_k$, $c_0=0$ and $$\sum_{l>0,0\leqslant j\leqslant l-1} c_l {l-1\choose j}(-r)^{l-1-j}n^jt=\sum_l(x_lt+y_l)n^l$$ Source link (MO post) Formula: Let $n$ and $m$ be integers with $n\geq 1$ and $m\geq 0$. Then $$\sum_{k=1}^{n} k^{m}=\sum_{b=1}^{m+1} \binom{n}b\sum_{i=0}^{b-1} (-1)^{i}(b-i)^{m}\binom{b-1}i$$ Proof link Take $$a_{b,m}=\sum_{i=0}^{b-1} (-1)^{i}(b-i)^{m}\binom{b-1}i\ \ \in\mathbb{Z}$$ By Theorem 1, $$S_m(n)=\sum_{b=1}^{m+1} \binom{n}ba_{b,m}=\sum_{u=0}^m(x_ut+y_u)n^u$$
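The displayed power-sum formula is easy to check numerically in R, for arbitrary $m=4$, $n=9$:
m <- 4; n <- 9
a_bm <- function(b, m) sum((-1)^(0:(b-1)) * (b - 0:(b-1))^m * choose(b - 1, 0:(b-1)))
c(sum(sapply(1:(m+1), function(b) choose(n, b) * a_bm(b, m))),
  sum((1:n)^m))   # both 15333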
Is this matrix diagonalizable and does it have multiple eigenspaces?
There are several issues with your question. Your first sentence mentions “the basis for the eigenspace”, but each eigenspace has infinitely many bases. Then you talk about “the eigenvector of said eigenvalue”; again, every eigenvalue has infinitely many corresponding eigenvectors. You say that the characteristic polynomial of $A$ is $-\lambda ^3+6 \lambda ^2-12 \lambda +8$ and that its only root is $2$; that is correct. And those $2$ eigenvectors are indeed eigenvectors. They are linearly independent and they form a basis of the eigenspace corresponding to the eigenvalue $2$. Now, concerning your questions: Yes, that is correct. Yes, since the geometric multiplicity of $\lambda$ is $\dim E_\lambda$. If a matrix has $k$ eigenvalues, then it has $k$ distinct eigenspaces.
Mutual information of two random variables with event set 1 >> event set 2
You have made a mistake in the first equation. By the chain rule of mutual information $$ I(X;YZ) = I(X;Y) + I(X;Z|Y). $$ One way to realize that there's an error in your equation is that $I(X;Y) + I(X;Y|Z)$ is symmetric when you swap $X$ and $Y$, but $I(X;YZ)$ is not. Further, let $X_n = Z_n$ be uniformly distributed on $\{1, \ldots, n\}$ and let $Y_n$ be independent from $X_n$ and constant. Then $\frac{|Y_n|}{|X_n|}\to 0$, $I(X_n;Y_n) = 0$, and $I(X_n;Y_n Z_n) = I(X_n;Z_n) = \log n$.
Can a function describe an area on a graph like an integral?
A function $f : A \rightarrow B$ is just an object which assigns to each $a \in A$ an element $f(a) \in B$. The graph of a function, $\{(a,f(a))\mid a\in A\}$, in the case of $f : \mathbb R \rightarrow \mathbb R$ can be viewed as a curve, but that's just an illustration. An anti-derivative, say $F(t) = \int_0^{t} f(x) \mathrm{d}x$, is just another function $F : \mathbb R \rightarrow \mathbb R$. While it is true that, say, $F(3)$ gives the area under the curve $f$ defines between $0$ and $3$ on the x-axis, this idea of 'area' is just an intuitive guide to help motivate the definitions of the integral and similar notions. The integral $\int_{a}^{b}$ could be thought of as a map of type $(\mathbb R \rightarrow \mathbb R) \rightarrow \mathbb R$, but that is not usual, and it can also just be seen as a notation which only has meaning when completed (given an integrand).
Find joint CDF given a joint PDF
Revised correction, based on conversation with Maxim! Summary for all $(x,y)$:
For $x\le 0$ or $y\le 0$: $F(x,y)=0$.
For $0\lt x\le 1,\ 0\le y\le 1$, two parts: 1) for $0\lt y\le x$, $F(x,y)=xy-\frac{y^2}{2}$; 2) for $x\lt y\le 1$, $F(x,y)=\frac{x^2}{2}$.
For $1\lt x,\ 0\le y\le 1$, two parts [part 2) is vacuous for $x\gt 2$]: 1) for $0\le y\le x-1$, $F(x,y)=y$; 2) for $x-1\lt y\le1$, $F(x,y)=xy-\frac{y^2}{2}-\frac{(x-1)^2}{2}$.
For $1\lt y$: $F(x,y)=F(x,1)$.
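These formulas are consistent with a density equal to $1$ on the band $\{0<y<1,\ y<x<y+1\}$; assuming that density (an inference, not stated above), here is a quick Monte Carlo sanity check of the piecewise CDF:

```python
# Monte Carlo check of the piecewise CDF, assuming the underlying
# density is f(x,y) = 1 on the band {0 < y < 1, y < x < y + 1}.
import numpy as np

rng = np.random.default_rng(1)
y = rng.random(1_000_000)
x = y + rng.random(1_000_000)   # uniform on the band

def F(x0, y0):
    """Piecewise CDF from the answer above."""
    if x0 <= 0 or y0 <= 0:
        return 0.0
    y0 = min(y0, 1.0)           # F(x, y) = F(x, 1) for y > 1
    if x0 <= 1:
        return x0 * y0 - y0**2 / 2 if y0 <= x0 else x0**2 / 2
    if y0 <= x0 - 1:
        return y0
    return x0 * y0 - y0**2 / 2 - (x0 - 1)**2 / 2

for (x0, y0) in [(0.5, 0.3), (0.7, 0.9), (1.5, 0.2), (1.5, 0.8), (2.5, 0.6)]:
    emp = np.mean((x <= x0) & (y <= y0))
    print(f"F({x0},{y0}) = {F(x0, y0):.4f}, empirical = {emp:.4f}")
```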
Homomorphic image is projective
Let $P$ be projective, for example $P = \mathbb Z$, and let $M$ be not projective, for example $M = \mathbb Q$. Then $P \times M$ is not projective, and we have a surjection $P \times M \to P$ given by $(p, m) \mapsto p$.
How do we conclude that the determinant is $1$ ?
If $\gamma$ lies entirely in the $x$-$y$ plane, $a=0$ and $P=\mathrm{diag}(1,1,-1)$ is a reflection in the $z$ direction, then $\gamma = \Gamma$ but $P B = - B$. This shows the statement is not true: you need the extra assumption that $M$ is orientation-preserving, i.e. $\det P = 1$. This shouldn't be too surprising: the binormal vector is defined via the cross product, which depends on a choice of orientation.
Pumping lemma (context-free) of $L = \{a^nb^{\max\{n,m\}}a^m\ |\ n, m ≥ 0\}$
If $p$ is the pumping length, let $w=a^pb^pa^p$. If $L$ is context-free, it is possible to write $w=uvxyz$ in such a way that $|vy|\ge 1$, $|vxy|\le p$, and $uv^kxy^kz\in L$ for every integer $k\ge 0$. Now consider cases. If $vxy$ lies entirely within the block of $b$s or one of the blocks of $a$s, $uv^2xy^2z\notin L$: if $vxy$ lies in one of the $a$ blocks, $uv^2xy^2z$ has too few $b$s, and if it lies in the $b$ block, $uv^2xy^2z$ has too many $b$s. If $vxy$ contains both $a$s and $b$s, then $uxz\notin L$, because it contains too few $b$s. (Always remember that it’s possible to pump down, by setting $k=0$, as well as up.)
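As a supplementary sanity check (not part of the proof), one can brute-force all decompositions for a small pumping length, say $p=4$, and confirm that pumping with $k\in\{0,2\}$ always fails:

```python
# Brute-force check: for w = a^p b^p a^p, no decomposition u v x y z
# with |vxy| <= p and |vy| >= 1 keeps u v^k x y^k z in L for k in {0, 2}.
import re

def in_L(s):
    """Membership in L = { a^n b^max(n,m) a^m : n, m >= 0 }."""
    match = re.fullmatch(r"(a*)(b*)(a*)", s)
    if not match:
        return False
    n, b, m = (len(g) for g in match.groups())
    return b == max(n, m)

p = 4
w = "a" * p + "b" * p + "a" * p
for i in range(len(w) + 1):                 # u = w[:i]
    for j in range(i, len(w) + 1):          # v = w[i:j]
        for k in range(j, len(w) + 1):      # x = w[j:k]
            for l in range(k, len(w) + 1):  # y = w[k:l], z = w[l:]
                if l - i > p or (j - i) + (l - k) == 0:
                    continue
                u, v, x, y, z = w[:i], w[i:j], w[j:k], w[k:l], w[l:]
                assert any(not in_L(u + v*t + x + y*t + z) for t in (0, 2))
print("no decomposition of", w, "can be pumped")
```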
Set of orthogonal transformations that preserve projected norm of a vector
It's not obvious what you mean by an orthogonal projection, but I'll assume you mean that $P$ is an orthogonal projection iff $PP^T = I$. If that's what you mean, then yes: those are the only such elements of $O(n)$. Claim: Let $P$ be $m \times n$ with $PP^T = I_m$, and let $U$ be an element of $O(n)$. The following are equivalent: (1) $\|PUx\| = \|Px\|$ for all $x$; (2) $\ker P$ is an invariant subspace of $U$; (3) $[\ker P]^\perp$ is an invariant subspace of $U$. It is easy to show that $2$ and $3$ are equivalent. In fact, these statements are equivalent for any operator $U$ for which $UU^T = U^TU$. $1 \implies 2:$ Suppose for contradiction that $x \in \ker P$ but $Ux \notin \ker P$. Then $\|PUx\| \neq 0$, but $\|Px\| = 0$, which means that $1$ does not hold. $2+3 \implies 1:$ Let $x \in \Bbb R^n$ be arbitrary. Decompose $x$ into $x = x_1 + x_2$ with $x_1 \in \ker P$ and $x_2 \in (\ker P)^\perp$. Note that $\|Px\| = \|x\|$ for any $x \in (\ker P)^\perp$. We have $$ \|PUx\| = \|P(Ux_1) + PUx_2\| = \|P(Ux_2)\| = \|Ux_2\| = \|x_2\|. $$ On the other hand, $$ \|Px\| = \|Px_1 + Px_2\| = \|Px_2\| = \|x_2\|. $$ The conclusion follows.
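A numerical illustration of the claim (a sketch with arbitrary dimensions $n=7$, $m=3$): build $P$ with orthonormal rows, build $U\in O(n)$ that acts separately on $(\ker P)^\perp$ and $\ker P$, and check that $\|PUx\|=\|Px\|$.

```python
# Numerical check: if U in O(n) leaves ker(P) (and hence its
# orthogonal complement) invariant, then ||P U x|| == ||P x|| for all x.
import numpy as np

rng = np.random.default_rng(2)
n, m = 7, 3

Q, _ = np.linalg.qr(rng.standard_normal((n, n)))  # orthonormal basis of R^n
P = Q[:, :m].T                # m x n with orthonormal rows, so P P^T = I_m

def random_orthogonal(k):
    A, _ = np.linalg.qr(rng.standard_normal((k, k)))
    return A

# U acts orthogonally on (ker P)^perp and on ker P separately.
blocks = np.zeros((n, n))
blocks[:m, :m] = random_orthogonal(m)
blocks[m:, m:] = random_orthogonal(n - m)
U = Q @ blocks @ Q.T

assert np.allclose(U @ U.T, np.eye(n))   # U is orthogonal
x = rng.standard_normal((n, 100))
print(np.allclose(np.linalg.norm(P @ U @ x, axis=0),
                  np.linalg.norm(P @ x, axis=0)))  # True
```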
Problem about a functional equation
Let $g(x)=f(x+\sqrt{2}-2)$; then $g$ is also $1$-periodic. Since $g=\tau^{2-\sqrt{2}}(f)$ (a translation), we know that $(\tau^{2-\sqrt{2}}f)^{\wedge}(m)=e^{-2\pi im(2-\sqrt{2})}\widehat{f}(m)$. But $g=f$ by the periodicity and the assumption, so if $\widehat{f}(m)\ne 0$, then we end up with $e^{-2\pi im(2-\sqrt{2})}=1$; this is impossible if $m\ne 0$, since $m(2-\sqrt{2})$ would have to be an integer while $\sqrt2$ is irrational. So $\widehat{f}(m)=0$ for every $m\ne 0$, and the sequence of Fejér means, which reduces to the constant $\widehat{f}(0)$, converges to $f$ a.e.; since $f$ is continuous, $f=\widehat{f}(0)$.
Spline interpolation in SE(3)
James Kajiya wrote a paper back in the 1990s, I think, on spline-like interpolation on manifolds, and that might be worth looking at. But the really rotten thing is that it's just darned tough to know what the answer should be. Let's forget translation for the moment, and let's simplify to knowing only initial and final value and derivative (i.e., make the problem first-order). And let's assume that you want something really spline-like: it's smooth, it's almost "polynomial", it has all the nice symmetry properties one might hope for, like "if I apply a transformation $T$ to all the input data, then the resulting "spline" should just be $T$ applied to the spline for the untransformed data". When $T$ is something like a reflection in a plane, this places symmetry constraints on your interpolation scheme. OK, back to my example: Make the initial orientation be the identity, with a derivative saying "start rotating $x$ towards $y$, leaving $z$ fixed." Let the final orientation be $P_s$, which is rotation by angle $s$ about the $z$-axis, with the final derivative the same as the initial one. Now any "spline-like" interpolant is going to be continuous as a function of those input parameters. So as the final orientation $s$ is changed from, say, $\pi/4$ to $\pi/2$, the resulting path will change from "rotate a little around $z$" to "rotate a bit more around $z$", and so on. What happens when $s$ grows almost to $2\pi$? It doesn't really matter much to me what your answer is, but then ask yourself "and what will happen when $s$ gets a little larger than $2\pi$, say, $2\pi + \pi/4$?" As an element of SO(3), that's indistinguishable from $s = \pi/4$. Are your interpolating curves the same for these two? By symmetry, your interpolating curves must all be in the $SO(2)$ subgroup of $SO(3)$ corresponding to rotations about $z$. I'm hoping that at this point you see the problem you're in: if you assume symmetry, and continuity as a function of input data, you end up with a fundamental-group problem: one such path (for $s=\pi/4$), followed by the inverse of the other (for $s=2\pi+\pi/4$), represents a nontrivial element of the fundamental group. But if you take the one path and follow it by the reverse of itself, you get a trivial element of $\pi_1$. Consequence: things can't actually be continuous the way you wanted. The authors of the cited papers probably don't overcome those problems, because they cannot be overcome. They may choose to weaken their hypotheses (to hell with continuity!) or something else. But you shouldn't frown on them for failing to solve an insoluble problem. :)
Confusion about sum of squares.
Let $p=4m+3,q=4n+3$, so $pq=16mn+12m+12n+9=4k+1$, and assume that $pq=a^2+b^2$ for some positive integers $a,b$, so $$a^2+b^2\equiv 0 \pmod {pq}$$ If $p\mid a$, then $p\mid b^2$ and hence $p\mid b$; letting $a=pr,b=ps$ then leads to $p^2(r^2+s^2)=pq$, which says $p\mid q$, a contradiction. Thus $p\nmid a$, and likewise $p\nmid b$. But this indicates that $$\begin{cases}a^2+b^2\equiv0\pmod p\quad(1)\\\\a^2+b^2\equiv0 \pmod q\quad(2)\end{cases}$$ For equation $(1)$: because $\gcd(a,p)=1$, there exists an integer $a_1$ with $$a_1a\equiv1\pmod p$$ Multiplying equation $(1)$ by $a_1^2$, we get $$(ba_1)^2\equiv-1 \pmod p$$ which cannot occur because $p\equiv3 \pmod4$. The same applies to equation $(2)$. Therefore, $pq$ can't be a sum of two squares. This completes the proof.
Prove $\lim_{n \rightarrow \infty } 2^{-\frac{n}{2}} \int_{n/\sqrt{a}}^{\infty} \frac{e^{-\frac{a}{4n}x^2+(\sqrt{a}-\frac{1}{2})x}}{x} \, dx = 0$
Make the shift $x \to x+\frac{n}{\sqrt{a}}$ to get $$\lim_{n \rightarrow \infty } 2^{-\frac{n}{2}} \int_{0}^{\infty} \frac{\exp\left(-\frac{a}{4n}\left(x+\frac{n}{\sqrt{a}}\right)^2+(\sqrt{a}-\frac{1}{2})\left(x+\frac{n}{\sqrt{a}}\right)\right)}{x+\frac{n}{\sqrt{a}}} \, dx$$ Which expands to $$\lim_{n \rightarrow \infty } 2^{-\frac{n}{2}}\exp\left( \frac{3n}{4}-\frac{n}{2\sqrt{a}} \right)\int_{0}^{\infty}\frac{\exp\left(-\frac{a}{4n}x^{2}+\left(\frac{\sqrt{a}}{2}-\frac{1}{2}\right)x\right)}{x+\frac{n}{\sqrt{a}}}dx$$ The integrand here can be bounded by $\exp\left(-\frac{a}{4n}x^{2}\right)$ for sufficiently large $n$, so the original limit is less than or equal to $$\lim_{n \rightarrow \infty } 2^{-\frac{n}{2}}\exp\left( \frac{3n}{4}-\frac{n}{2\sqrt{a}} \right)\int_{0}^{\infty}\exp\left(-\frac{a}{4n}x^{2}\right)dx$$ This integral evaluates to $\frac{\sqrt{\frac{\pi}{\frac{a}{4n}}}}{2} = \sqrt{\frac{\pi n}{a}}$, so the whole limit is $$\lim_{n \rightarrow \infty } 2^{-\frac{n}{2}}\exp\left( \frac{3n}{4}-\frac{n}{2\sqrt{a}} \right)\sqrt{\frac{\pi n}{a}} = \lim_{n \rightarrow \infty } \left(\frac{\exp\left(\frac{3}{2}-\frac{1}{\sqrt{a}}\right)}{2}\right)^{\frac{n}{2}}\sqrt{\frac{\pi n}{a}}$$ which is $0$ since $0 < \frac{\exp\left(\frac{3}{2}-\frac{1}{\sqrt{a}}\right)}{2} < 1$ for $a \in (0, 1)$. Since the original limit must be nonnegative and less than or equal to this one, the original limit is $0$ as well.
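A quick numerical corroboration (a sketch; $a=0.5$ is an arbitrary choice in $(0,1)$):

```python
# Numerically evaluate 2^(-n/2) * integral for growing n to see it -> 0.
import numpy as np
from scipy.integrate import quad

a = 0.5
for n in [10, 20, 40, 80]:
    integrand = lambda x: np.exp(-a / (4 * n) * x**2
                                 + (np.sqrt(a) - 0.5) * x) / x
    val, _ = quad(integrand, n / np.sqrt(a), np.inf)
    print(n, 2.0 ** (-n / 2) * val)
# the products decrease rapidly toward 0
```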
Compute: $\arctan{\frac{1}{7}}+\arctan{\frac{3}{4}}.$
$$\tan\left(\arctan\frac{1}{7}+\arctan\frac{3}{4}\right)=\frac{\frac{1}{7}+\frac{3}{4}}{1-\frac{1}{7}\cdot\frac{3}{4}}=\frac{4+21}{28-3}=1$$ and since $0^{\circ}<\arctan\frac{1}{7}+\arctan\frac{3}{4}<90^{\circ}$, we get the answer: $$\arctan\frac{1}{7}+\arctan\frac{3}{4}=45^{\circ}$$
Matrix Geometric Series
Hint. \begin{align} (I-A)^{-2} =& \big[(I-A)^{-1}\big]^2 \\ =& \big[\sum_{k=0}^{\infty} A^k\big]^2 \\ =& \big[A^0+A^1+A^2+\ldots +A^{k_0-1}+A^{k_0}+\sum_{k=k_0+1}^{\infty} A^k\big]^2 \\ \end{align}
Does $X_n \overset{L^2}{\rightarrow} X$ imply $X_n^2 \overset{L^1}{\rightarrow} X^2$?
As Did and saz mentioned, the standard definition of convergence in $L^2$ stipulates that all the random variables involved be square integrable. However, OP took the trouble of defining in their question what $L^2$ convergence meant for them, and this definition does not involve the above-mentioned stipulation. Moreover, after I posted my original answer 15 hours ago, which covered only the case that $X$ was square integrable, OP commented on my answer and told me in no uncertain terms that they were interested in the case where there were no restrictions on any of the random variables involved. The fact that OP did not accept my answer then, which was the only game in town at that point, further drove the point home. Therefore, in what follows I do not presuppose this condition. In fact, I don't even presuppose that any of the random variables are integrable. Case 1: $\mathbf{E(X^2) < \infty}$ If $E(X^2)<\infty$, the implication holds. Firstly note that, given a sequence $(a_1, a_2, \dots)$ of real numbers, $$ \lim_{n\rightarrow\infty} a_n = 0 \iff \lim_{n\rightarrow \infty} a_n^2=0. $$ Secondly note that, since $E(|X_n-X|^2)\rightarrow 0$, we may assume, w.l.o.g., that the sequence $(E(|X_1-X|^2), E(|X_2-X|^2),\dots)$ is bounded, say by $L \in [0,\infty)$. Thirdly note that we may assume, w.l.o.g., that the sequence $(E(X_1^2), E(X_2^2), \dots)$ is bounded by $M:=L + 2\sqrt{LE(X^2)}+E(X^2)$. Indeed, $$ \begin{align} E(X_n^2) &= E\Big(\big((X_n-X)+X\big)^2\Big) \\ &\leq E(|X_n-X|^2)+2E(|X_n-X||X|) + E(X^2) \\ &\overset{\text{Cauchy-Schwarz}}{\leq} E(|X_n-X|^2)+2\sqrt{E(|X_n-X|^2)E(X^2)} + E(X^2) \\ &\leq L + 2\sqrt{LE(X^2)} + E(X^2). \end{align} $$ Now write $$ \begin{align} E(|X_n^2-X^2|) &= E(|X_n^2-X_nX+X_nX-X^2|) \\ &= E(|X_n(X_n-X)+X(X_n-X)|) \\ &\leq E(|X_n||X_n-X|)+E(|X||X_n-X|). \end{align} $$ To see that the left-hand summand of the last expression converges to zero, note that $$ E^2(|X_n||X_n-X|) \overset{\text{Cauchy-Schwarz}}{\leq} \underset{\leq M}{E(X_n^2)}\ \cdot\ \underset{\rightarrow 0\text{ by assump.}}{E(|X_n-X|^2)}\rightarrow 0. $$ A similar argument shows that the other summand converges to zero too. Case 2: $\mathbf{E(X^2) = \infty}$ If $E(X^2) = \infty$, the implication does not hold. Here's a counter-example. For every $n \in \{1, 2, \dots\}$ define $$ \begin{align} a_n &:= \frac{1}{2}\sqrt{n}(n+1), \\ b_n &:= \frac{1}{2}\sqrt{n}(n-1) = a_n\frac{n-1}{n+1}. \end{align} $$ Verify that every pair $(a_n, b_n)$, $n \in \{1, 2, \dots\}$, satisfies $a_n > b_n > 0$, and solves the following system of equations: $$ \begin{align} (a - b)^2 &= n, \\ a^2 - b^2 &= n^2. \end{align} $$ We will later use the following estimate: $$ \begin{align} \frac{n-1}{n}a_n - b_n &= \frac{n-1}{n}a_n - \frac{n-1}{n+1}a_n \\ &= \left(\frac{1}{n}-\frac{1}{n+1}\right)(n-1)a_n \\ &\geq 0. \end{align} $$ Set $$ \begin{align} C_2 &:= \sum_{k=1}^\infty k^{-2}, \\ C_3 &:= \sum_{k=1}^\infty k^{-3}, \end{align} $$ and define, for every $n \in \{0, 1, 2, \dots\}$, $$ S_n := \begin{cases} 0 &, n = 0, \\ \sum_{i = 1}^n \frac{1/C_3}{i^3} &, n \geq 1. \end{cases} $$ Consider the standard probability space $([0,1),\mathcal{B},\lambda)$. We now define two random variables, $X_0, X$ on this probability space as follows. $$ \begin{align} X_0 &:= \sum_{n = 1}^\infty a_n\mathbb{1}_{[S_{n-1},S_n)}, \\ X &:= \sum_{n = 1}^\infty b_n\mathbb{1}_{[S_{n-1},S_n)}. \end{align} $$ Furthermore, for every $n \in \{1, 2, \dots\}$ we define $$ X_n := \frac{1}{n} X_0 + \left(1-\frac{1}{n}\right)X. $$
Observe that $X_0 > X \geq 0$, and therefore, for every $n \in \{1, 2, \dots\}$, $X_n > X \geq 0$. Also note that, for every $n \in \{1, 2, \dots\}$, $$ \frac{n-1}{n} X_0 - X \geq 0. $$ Then, for every $n \in \{1, 2, \dots\}$, $$ \begin{align} E(|X_n-X|^2) &= \frac{1}{n^2} E\big((X_0 - X)^2\big) \\ &= \frac{1}{n^2} \sum_{i=1}^\infty (a_i - b_i)^2 \frac{1/C_3}{i^3} \\ &= \frac{1/C_3}{n^2} \sum_{i=1}^\infty \frac{i}{i^3} \\ &= \frac{C_2/C_3}{n^2}, \\ E\big(|X_n^2-X^2|\big) &= E\big(X_n^2-X^2\big) \\ &= E\Big(\big(\frac{1}{n}X_0 + (1-\frac{1}{n})X\big)^2 - X^2\Big) \\ &= E\Big(\frac{1}{n^2}X_0^2 + 2\frac{n-1}{n^2}X_0X + \big(1-\frac{1}{n}\big)^2X^2 - X^2\Big) \\ &= E\Big(\frac{1}{n^2}X_0^2 + 2\frac{n-1}{n^2}X_0X - \big(\frac{2}{n}-\frac{1}{n^2}\big)X^2\Big) \\ &= \frac{1}{n^2}E\Big(X_0^2 + X^2\Big) + \frac{2}{n}E\Big(\big(\frac{n-1}{n}X_0-X\big)X\Big) \\ &\geq \frac{1}{n^2}E\Big(X_0^2 + X^2\Big) \\ &\geq \frac{1}{n^2}E\Big(X_0^2 - X^2\Big) \\ &= \frac{1}{n^2}\sum_{i=1}^\infty (a_i^2-b_i^2)\frac{1/C_3}{i^3} \\ &= \frac{1/C_3}{n^2} \sum_{i=1}^\infty \frac{i^2}{i^3} \\ &= \infty. \end{align} $$ Case 3: $\mathbf{E(X^2) = \infty}$ revisited In this section I will show that it is possible to salvage some of the flavor of Case 1 even if $E(X^2) = \infty$, namely I will show that if $E(X^2) = \infty$, then $E(X_n^2) \rightarrow \infty$. Suppose to the contrary. Then there is some $T \in [0,\infty)$, such that, for all $i$ in some infinite subset $I \subseteq \{1, 2, \dots\}$, $E(X_i^2) \leq T$. Then, for every $i \in I$, $$ \begin{align} E(X^2) &= E\Big(\big((X-X_i)+X_i\big)^2\Big) \\ &\leq E(|X_i-X|^2)+2E(|X_i-X||X_i|) + E(X_i^2) \\ &\overset{\text{Cauchy-Schwarz}}{\leq} E(|X_i-X|^2)+2\sqrt{E(|X_i-X|^2)E(X_i^2)} + E(X_i^2) \\ &\leq L + 2\sqrt{LT} + T, \end{align} $$ a contradiction. ($L$ is the same bound introduced in Case 1.) This, coupled with Case 1, shows that, if $X, X_1, X_2, \dots$ are integrable random variables defined over the same probability space, then $E\big(|X_n-X|^2\big) \rightarrow 0$ implies $V(X_n) \rightarrow V(X)$, and this holds regardless of whether $E(X^2)$ is finite.
Give the cdf and pdf given that we select a point uniformly on an annulus
How would I represent the distance R from the point (X,Y) and would this be the cdf or pdf? By definition of Euclidean distance, $R=\sqrt{X^2+Y^2}$. So immediately we know the support is $\{r: 1\leqslant r\leqslant 2\}$. Now, since the points are uniformly distributed: the pdf for $R$, at any point $r$ within that support, will be equal to the ratio of the circumference of a circle with radius $r$ to the area of the annulus. Call this $f_R(r)$. And the CDF will be the integral $\displaystyle F_R(r)=\int_1^r f_R(s)\,\mathrm d s$.
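Completing the hint for concreteness (assuming, as the support suggests, inner radius $1$ and outer radius $2$, so that $f_R(r)=\frac{2\pi r}{3\pi}=\frac{2r}{3}$ and $F_R(r)=\frac{r^2-1}{3}$), a simulation sketch:

```python
# Simulation sketch: sample points uniformly on the annulus
# 1 <= x^2 + y^2 <= 4 by rejection, then compare the empirical CDF
# of R = sqrt(X^2 + Y^2) with F_R(r) = (r^2 - 1) / 3.
import numpy as np

rng = np.random.default_rng(3)
pts = rng.uniform(-2, 2, size=(2, 4_000_000))
r = np.hypot(pts[0], pts[1])
r = r[(r >= 1) & (r <= 2)]          # keep points on the annulus

for r0 in [1.25, 1.5, 1.75]:
    print(r0, np.mean(r <= r0), (r0**2 - 1) / 3)
# empirical and theoretical values agree to ~3 decimal places
```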
What is the exact closed form of $2^1\cdot1^{2}+2^2\cdot2^{2}+2^3\cdot3^{2}+\cdots+2^n\cdot n^{2}$?
Here is another answer based on the comment by @Yuriy. Let $S = \sum\limits_{k=1}^{n} x^k = \frac{x(1-x^{n})}{1-x}$. Now take the derivative of the LHS with respect to $x$: $\frac{dS}{dx} = \sum\limits_{k=1}^{n}kx^{k-1}$. Now multiply by $x$ and again differentiate with respect to $x$: $\frac{d(x\frac{dS}{dx})}{dx} = \sum\limits_{k=1}^{n} k^2 x^{k-1}$. Finally, multiply by $x$ and set $x=2$: $(x\frac{d(x\frac{dS}{dx})}{dx})_{\,_{x=2}} = \sum\limits_{k=1}^{n} k^2 2^{k}$, which is the desired sum. Now compute the LHS in the above equation by using the closed form of $S$ from the first equation. Do you want me to complete this?
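Carrying out these manipulations symbolically (a sketch using sympy; the printed closed form is whatever sympy produces, not something claimed above):

```python
# Symbolically apply the derivative trick and recover a closed form
# for sum_{k=1}^n k^2 * 2^k, then spot-check it against the raw sum.
import sympy as sp

x, n = sp.symbols('x n', positive=True)
S = x * (1 - x**n) / (1 - x)              # closed form of sum_{k=1}^n x^k
expr = x * sp.diff(x * sp.diff(S, x), x)  # x d/dx (x dS/dx)
closed = sp.simplify(expr.subs(x, 2))

for nv in range(1, 8):
    raw = sum(j**2 * 2**j for j in range(1, nv + 1))
    assert sp.simplify(closed.subs(n, nv)) == raw
print(closed)
```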
the expectation of a random variable of a random variable
As suggested by leonbloy, I am converting my comment to an answer. $X$ is a random variable that necessarily takes on values in $[0,1]$. Conditioned on $X=x$, $0 \leq x \leq 1$, $Z$ is a Bernoulli random variable with parameter $x$. Thus, conditioned on $X=x$, the conditional expected value of $Z$ is $$E[Z|X=x] = 1\times x + 0\times (1-x) = x.$$ The conditional expected value of $Z$ depends on the value taken on by $X$, that is, it is a function of the random variable $X$, and this random variable is denoted as $E[Z|X]$. In this instance, it is obvious that $E[Z|X]=X$ itself. Now, the law of iterated expectations gives us that $E[Z]=E[E[Z|X]]$ where it should be noted that the outer expectation is the expectation of a function of $X$. Thus, assuming that $X$ is a continuous random variable, we have $$E[Z]=E[E[Z|X]]=E[X]=\int_{-\infty}^\infty xf_X(x)\mathrm dx = \int_0^1 xf_X(x)\mathrm dx$$ since $f_X(x) = 0$ for $x < 0$ or $x > 1$.
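A simulation sketch of the identity $E[Z]=E[X]$ (with a made-up choice $X\sim\mathrm{Beta}(2,5)$, whose mean is $2/7$):

```python
# Check E[Z] == E[X] when Z | X=x ~ Bernoulli(x).
import numpy as np

rng = np.random.default_rng(4)
x = rng.beta(2, 5, size=2_000_000)        # X supported on [0, 1]
z = (rng.random(x.size) < x).astype(int)  # Z | X=x ~ Bernoulli(x)

print(z.mean(), x.mean(), 2 / 7)  # all three approximately equal
```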
Differential equation in $\mathcal{S}'$, Fourier method
$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ Notation: $\ds{\mrm{f}\pars{x} = \int_{-\infty}^{\infty}\hat{\mrm{f}}\pars{k}\expo{\ic kx} \,{\dd k \over 2\pi}\iff \,\hat{\mrm{f}}\pars{k} = \int_{-\infty}^{\infty}\mrm{f}\pars{x}\expo{-\ic kx}\,\dd k}$. \begin{align} &\mrm{y}'\pars{x} - \ic\,\mrm{y}\pars{x} = 1 +\delta\,'\pars{x} \implies \ic k\,\hat{\mrm{y}}\pars{k} - \ic\,\hat{\mrm{y}}\pars{k} = 2\pi\,\delta\pars{k} + \ic k \\[5mm] \implies & \hat{\mrm{y}}\pars{k} = {k - 2\pi\,\delta\pars{k}\ic \over k - 1} = 1 + {1 \over k - 1} + 2\pi\,\delta\pars{k}\ic \end{align} \begin{align} \mrm{y}_{\pm}\pars{x} & = \int_{-\infty}^{\infty}\bracks{1 + {1 \over k - 1 \pm \ic 0^{+}} + 2\pi\,\delta\pars{k}\ic}\expo{\ic kx}\,{\dd k \over 2\pi} = \delta\pars{x} + \ic + \expo{\ic x}\int_{-\infty}^{\infty} {\expo{\ic kx} \over k \pm \ic 0^{+}}\,{\dd k \over 2\pi} \\[5mm] & = \delta\pars{x} + \ic + \expo{\ic x}\bracks{% \mrm{P.V.}\int_{-\infty}^{\infty}{\expo{\ic kx} \over k}\,{\dd k \over 2\pi} + \int_{-\infty}^{\infty}\expo{\ic kx}\bracks{\mp\pi\ic\,\delta\pars{k}} \,{\dd k \over 2\pi}} \\[5mm] & = \delta\pars{x} + \ic + \expo{\ic x}\bracks{% \int_{0}^{\infty}{2\ic\sin\pars{kx} \over k}\,{\dd k \over 2\pi} \mp {1 \over 2}\,\ic} = \delta\pars{x} + \ic + \expo{\ic x}\bracks{% {\ic \over \pi}\,\mrm{sgn}\pars{x}\,{\pi \over 2} \mp {1 \over 2}\,\ic} \\[5mm] & = \delta\pars{x} + \ic + \expo{\ic x} \bracks{{2\Theta\pars{x} - 1 \mp 1}}{\ic \over 2} \end{align} $$\bbox[15px,#ffe,border:1px dotted navy]{\ds{% \left\{\begin{array}{rcl} \ds{\quad\mrm{y}_{-}\pars{x}} & \ds{=} & \ds{\delta\pars{x} + \ic + \Theta\pars{x}\expo{\ic x}\ic} \\[3mm] \ds{\quad\mrm{y}_{+}\pars{x}} & \ds{=} & \ds{\delta\pars{x} + \ic - \Theta\pars{-x}\expo{\ic x}\ic} \end{array}\right.}} $$
Are there mathematical concepts that exist in dimension $4$, but not in dimension $3$?
The one that sticks out for me the most is that there are five regular polytopes (called Platonic solids) in $3$ dimensions, and they all have analogues in $4$ dimensions, but there is another regular polytope in $4$ dimensions: the 24-cell. The kicker is that in dimensions higher than $4$... there are only three regular polytopes! Another thing that can happen in $4$-dimensional space but not $3$ is that you can have two planes which only intersect at the origin (and nowhere else). In $3$ dimensions you'd get at least a line in the intersection. I don't know if this also counts, but linear transformations in $3$ dimensions always scale one direction (that is, they have a real eigenvector). This means that in all cases, a line in one direction must either stay put or be reversed to lie upon itself. In $4$ dimensions, it's possible to have transformations (even nonsingular ones) that don't have any real eigenvectors, so all lines get shifted. Also not sure if this counts, but there are no $3$-dimensional associative algebras over $\mathbb R$ which allow division (they're called division algebras), but there is a unique $4$-dimensional one. (Look up the Frobenius theorem.)
Existence of limits of group elements
Note first that $x \in \limsup A_n$ if and only if, for each $N$, there exists some $n >N$ so that $x \in A_n$. In the same way, $x \in \liminf A_n$ if and only if there exists some $N$ such that, for all $n >N$, we have $x \in A_n$. Now, since $\lim A_n$ exists, the above implies the following: for each $x \in X$, exactly one of the following holds. Either $x \in \lim_n A_n$; then there exists some $N$ such that, for all $n >N$, we have $x \in A_n$ (this comes from $x \in \liminf A_n$). Or $x \notin \lim_n A_n$; then there exists some $N$ such that, for all $n >N$, we have $x \notin A_n$ (this comes from $x \notin \limsup A_n$). Now, your problem. Denote $A=\lim A_n, B= \lim B_n$. You have 4 possible scenarios: (i) $x \in A, x \in B$: there exist some $N_1, N_2$ such that $x \in A_n$ for all $n >N_1$ and $x \in B_n$ for all $n >N_2$; then, for all $n > \max\{ N_1, N_2 \}$, you have $x \notin A_n \Delta B_n$. (ii) $x \in A, x \notin B$: there exist some $N_1, N_2$ such that $x \in A_n$ for all $n >N_1$ and $x \notin B_n$ for all $n >N_2$; then, for all $n > \max\{ N_1, N_2 \}$, you have $x \in A_n \Delta B_n$. (iii) $x \notin A, x \in B$: there exist some $N_1, N_2$ such that $x \notin A_n$ for all $n >N_1$ and $x \in B_n$ for all $n >N_2$; then, for all $n > \max\{ N_1, N_2 \}$, you have $x \in A_n \Delta B_n$. (iv) $x \notin A, x \notin B$: there exist some $N_1, N_2$ such that $x \notin A_n$ for all $n >N_1$ and $x \notin B_n$ for all $n >N_2$; then, for all $n > \max\{ N_1, N_2 \}$, you have $x \notin A_n \Delta B_n$. From here, it is trivial to deduce that $$\liminf A_n \Delta B_n =A \Delta B \\ \limsup A_n \Delta B_n =A \Delta B$$
If a non uniform beam is held in equilibrium by two light strings at angles to the beam. What are the magnitudes of T1 and T2.
Your first equation says, "the sum of the vertical components of the forces on the beam equals zero", which is correct because the beam is in equilibrium. The second equation is similar, with "vertical" replaced by "horizontal". But there is no need to introduce variables $\theta_1$ and $\theta_2$, since we know that $\theta_1 = 30^\circ$ and $\theta_2 = 40^\circ.$ So really, the equations are: Vertically: $$ T_1 \sin 30^\circ + T_2 \sin 40^\circ - mg = 0 $$ Horizontally: $$ T_1 \cos 30^\circ - T_2 \cos 40^\circ = 0 $$ and now this is just two equations with two unknowns, $T_1$ and $T_2.$
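Solving that $2\times 2$ system numerically (a sketch; the values $m=20\,$kg and $g=9.8\,\mathrm{m/s^2}$ are made up for illustration):

```python
# Solve T1 sin30 + T2 sin40 = m g,  T1 cos30 - T2 cos40 = 0.
import numpy as np

m, g = 20.0, 9.8                     # hypothetical values
t1, t2 = np.radians(30), np.radians(40)
A = np.array([[np.sin(t1),  np.sin(t2)],
              [np.cos(t1), -np.cos(t2)]])
b = np.array([m * g, 0.0])
T1, T2 = np.linalg.solve(A, b)
print(T1, T2)   # tensions in newtons
```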
How to prove $p\in S^1$ s.t., $d_p(f|_{S^1})=0$
This follows immediately from the case of $\mathbb{R}$-valued functions, since $D$ is simply connected so $f$ lifts to the universal cover. That is, there is a differentiable map $g:D\to \mathbb{R}$ such that $f=p\circ g$ where $p:\mathbb{R}\to S^1$ is the universal covering map $p(t)=(\cos t,\sin t)$.
Conservative extension is equiconsistent?
The general claim that conservative extensions are equiconsistent depends on the theories extending ordinary propositional calculus. So in your $T_2$ we can prove $\neg A$ from $B$ and $\neg B$ (because everything follows from a contradiction), and since $\neg A$ is a sentence in the language of $T_1$ that is not a theorem of $T_1$, $T_2$ is actually not a conservative extension. If we really want, we can weaken the assumption of including propositional calculus to "allows every sentence to be negated, as a matter of syntax" together with "allows everything to be inferred from a sentence together with its negation". The latter condition is implicit in the common definition of consistency as "cannot derive both a sentence and its negation". Also, without these minimal assumptions, it is hard to imagine how you're going to state "B is false" as an axiom in the first place.
invariance property of maximum likelihood estimator when we have monotonically decreasing function
The new parameter space is $\Gamma$, the image of $\Omega$ under the transformation $g$. We are interested in finding the MLE of $\psi=g(\theta)$. The likelihood function can be written $$\begin{split}f(x|\theta)&=f(x|g^{-1}(g(\theta)))\\ &=f(x|g^{-1}(\psi))\end{split}$$ We know that the likelihood is maximized at the MLE of $\theta$, namely $\hat \theta$. So we have $$\begin{split}f(x|g^{-1}(\hat\psi))&=f(x|\hat\theta)\\ g^{-1}(\hat\psi)&=\hat\theta\\ \hat\psi&=g(\hat\theta)\end{split}$$ Therefore the MLE of $\psi=g(\theta)$ is $g(\hat\theta)$. Note that $g$ being monotonically decreasing has nothing to do with the likelihood function; the fact that $g$ is one-to-one allows us to take its inverse.
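A small numerical illustration (a sketch with made-up data: Bernoulli samples, $\theta=p$, and the decreasing reparametrization $\psi=g(p)=1-p$):

```python
# Maximize the Bernoulli log-likelihood over p, then over psi = 1 - p,
# and check that psi_hat == g(p_hat).
import numpy as np

rng = np.random.default_rng(5)
data = rng.random(500) < 0.3          # Bernoulli(0.3) sample
s, n = data.sum(), data.size

def loglik(p):
    return s * np.log(p) + (n - s) * np.log(1 - p)

grid = np.linspace(1e-6, 1 - 1e-6, 100_000)
p_hat = grid[np.argmax(loglik(grid))]
psi_hat = grid[np.argmax(loglik(1 - grid))]  # likelihood in terms of psi

print(p_hat, 1 - psi_hat)  # the two estimates coincide (up to grid step)
```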
Using transfinite induction to split $\mathbb{R}$ into continuum many pairwise disjoint subsets of $\mathbb{R}$
Quite a few fairly strong such partition results are known. A few can be found in the following references, and googling their titles will give you many more: Sur une décomposition d'un intervalle en une infinité non dénombrable d'ensembles non mesurables by Luzin/Sierpiński (1917), Sur la décomposition de l'espace euclidien en ensembles homogènes by Erdős/Marcus (1957; Zbl review), Point Set Theory by John Clifford Morgan (1990; see pp. 152-154, pp. 245-248, and the references he gives), A nonmeasurable partition of the reals by Paula Ann Kemp (2001). Regarding applications of transfinite induction for results such as your (1)-(3), you'll find a huge number by looking page-by-page through the earliest volumes (1920s and 1930s) of the journal Fundamenta Mathematicae.
Inverse hyperbolic functions
The text of the question is not clear, and I am a few miles away from my copy of Stewart. But I imagine that the calculation goes something like this. Suppose that $$x=\sinh y= \frac{e^y-e^{-y}}{2}.$$ We want to solve for $y$ in terms of $x$. The first thing to do is to multiply both sides by $2$, so we won't have to carry fractions around. They are heavy. So we get $2x=e^y-e^{-y}$. Now we can do one of several things. Maybe rewrite $e^{-y}$ as $1/e^y$. So now we have $$2x=e^y -\frac{1}{e^y}.$$ It will save typing, and be useful in other ways, to let $w=e^y$. We obtain $$2x=w -\frac{1}{w}.$$ Multiply both sides by $w$. We get $2xw=w^2-1$. Rearrange this equation a little. We get $$w^2-2xw-1=0.$$ This is a quadratic equation in $w$. The solutions are, by the Quadratic Formula, $$w=\frac{2x\pm\sqrt{4x^2+4}}{2}=x \pm\sqrt{x^2+1}.$$ Now we remember that $w=e^y$, so $$e^y=x \pm\sqrt{x^2+1}.$$ But note that $e^y$ is always positive, and $x-\sqrt{x^2+1}$ is negative. So the solution with the minus sign has to be rejected, and $$e^y=x + \sqrt{x^2+1}.$$ Take the natural logarithm ($\ln$) of both sides. We get $$y=\ln\left(x+\sqrt{x^2+1}\right).$$
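A one-line numerical check of the resulting identity against a library implementation (a sketch):

```python
# Check ln(x + sqrt(x^2 + 1)) against the library arcsinh.
import numpy as np

x = np.linspace(-5, 5, 11)
print(np.allclose(np.log(x + np.sqrt(x**2 + 1)), np.arcsinh(x)))  # True
```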
student's $t$-distribution
Hint: when the number of degrees of freedom / data points is very large, the $t$ distribution is very close to a normal one. Quoting the linked answer, A Student's $t$ distribution with mean $\mu$, scale $\sigma^2$ and $n$ degrees of freedom converges in distribution (i.e. their density functions converge) to a normal distribution with mean $\mu$ and variance $\sigma^2$ when the number of degrees of freedom $n \to \infty $. Or from Wikipedia, Whereas a normal distribution describes a full population, t-distributions describe samples drawn from a full population; accordingly, the t-distribution for each sample size is different, and the larger the sample, the more the distribution resembles a normal distribution. From another angle, recall that the $t$ statistic is the $z$ statistic where we replaced the true population standard deviation $\sigma$ by the sample standard deviation $s_n$. As $n$ grows, $s_n \to \sigma$ and, intuitively, $t_n \to z \sim N(0,1)$. In your case, what you need to calculate is the cumulative probability (the area under the density curve) of a Student $t_{456}$ distribution between $-t' = \frac{3.49-3.59}{1.045/\sqrt{457}}$ and $t' = \frac{3.69-3.59}{1.045/\sqrt{457}}$. If you want to use tables (which I find is a good idea for training), take a $z$ table and look for $t' \simeq z' = \frac{3.69-3.59}{1.045/\sqrt{457}} = 2.05$; this gives you a value of $P(z < z') = 0.9798$ (area under the curve excluding the right tail). Given that you want the area under the curve excluding both tails, the value you are looking for is $P(-z' < z < z') = 0.9798 - (1-0.9798) = 0.96$. If you only have a $t$ table, since you know that $t_n \to z$, just use the bottommost row as an approximation. Using this one (which is particularly handy since it gives values for $df=100$ and $df=1000$ as well as $z^*$), your value of $t_n' = 2.05$ falls very close to the column corresponding to a right-tail area of $0.02$, or a confidence interval of $96\%$. QED
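The same computation as a code sketch (the numbers $3.59$, $1.045$, $457$ are taken from the calculation above; the t and normal CDFs come from scipy):

```python
# Compare the exact t_{456} probability with the normal approximation.
import numpy as np
from scipy.stats import t, norm

s, n = 1.045, 457
tprime = (3.69 - 3.59) / (s / np.sqrt(n))   # about 2.05

print(t.cdf(tprime, n - 1) - t.cdf(-tprime, n - 1))  # exact t probability
print(norm.cdf(tprime) - norm.cdf(-tprime))          # normal approximation
# both are approximately 0.96
```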
Colouring edges of hexagon - Burnside's lemma
There are $\binom6{2,2,2}=90$ different colourings. The identity fixes all $90$ of them. A rotation of order $6$ has a single orbit, and a rotation of order $3$ only has two orbits, so neither fixes any colourings (an orbit of size $6$ or $3$ would need $6$ or $3$ edges of one colour, but each colour is used only twice). A rotation of order $2$ (of which there is $\phi(2)=1$) has three orbits of size $2$, and we can assign the three colours to these three orbits in $3!=6$ ways. A reflection in a line through opposite vertices (of which there are $\frac62=3$) also has three orbits of size $2$, and thus also fixes $6$ colourings. A reflection in a line through opposite edge midpoints (of which there are also $\frac62=3$) has two orbits of size $2$ and two fixed edges. Since two of the colours have to be used for the orbits of size $2$, the two fixed edges have to be the third colour; so this also yields $3!=6$ invariant colourings. In total, that makes $$ 90+6+3\cdot6+3\cdot6=132\;, $$ and dividing by the size of the group, $12$, yields a count of $11$ equivalence classes of colourings.
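A brute-force confirmation of the count (a sketch; edges are indexed $0$ to $5$ cyclically around the hexagon):

```python
# Count colourings of the hexagon's 6 edges (each of 3 colours used
# exactly twice) up to the dihedral group D6, by brute force.
from itertools import permutations

def rotate(c, k):
    return tuple(c[(i - k) % 6] for i in range(6))

def reflect(c):
    return tuple(c[(-i) % 6] for i in range(6))

colourings = {c for c in permutations("AABBCC")}  # 90 distinct tuples
group = [lambda c, k=k: rotate(c, k) for k in range(6)]
group += [lambda c, k=k: reflect(rotate(c, k)) for k in range(6)]

orbits = set()
for c in colourings:
    orbits.add(min(g(c) for g in group))  # canonical orbit representative
print(len(colourings), len(orbits))       # 90 11
```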
Dynamical system $f(x)=x^4\sin(1/x)$. How to determine the stability of each equilibrium point?
Hint: Consider a small perturbation $x(t)$ of the zero solution, say with $x(0)=\epsilon \neq 0$. This solution will be caught in between two of the other equilibrium points (or perhaps will start exactly at one of them), and this lets you predict what happens to the solution $x(t)$ in the long run.
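A sketch of that squeeze argument in code (assuming the one-dimensional flow $\dot x = f(x)$): the nonzero equilibria near $0$ are $x_n=1/(n\pi)$, and the sign of $f$ between consecutive equilibria tells you which way solutions move, hence which equilibria attract.

```python
# Sign of f(x) = x^4 sin(1/x) between the equilibria x_n = 1/(n*pi):
# f > 0 pushes solutions right, f < 0 pushes them left.
import numpy as np

def f(x):
    return x**4 * np.sin(1 / x)

for n in range(1, 7):
    left, right = 1 / ((n + 1) * np.pi), 1 / (n * np.pi)
    mid = (left + right) / 2
    print(f"on ({left:.4f}, {right:.4f}): sign of f = {np.sign(f(mid)):+.0f}")
# the signs alternate, so the equilibria 1/(n*pi) alternate between
# attracting and repelling as n increases
```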