How to prove that a group of order $72=2^3\cdot 3^2$ is solvable?
If $G$ has 4 Sylow-3 subgroups, $G$ acts on those subgroups via conjugation, inducing a homomorphism $G\to S_4$. Since $|S_4|=24<72=|G|$, this map must have a non-trivial kernel. If the morphism is not the trivial map, you are done. What can you say if the kernel is all of $G$?
Setup for evaluating $\iint_R y^2dA$
Let's convert the double integral to an iterated integral with integration order $dxdy$, since the region is horizontally simple. $R$ is bounded above by the line $x=7-3y$ and below by $x=y-1$ from $y=1$ to $y=2$. Thus the integral becomes: $$\iint_R y^2dA=\int_1^2 \int_{y-1}^{7-3y}y^2dxdy=\int_1^2y^2x\Big{|}_{y-1}^{7-3y}dy=\int_1^28y^2-4y^3dy.$$ I trust that you can take it from here? If not, please tell me.
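As a quick sanity check of the iterated integral (a verification sketch, not part of the original answer; it assumes sympy is available):

```python
import sympy as sp

x, y = sp.symbols('x y')
# Inner integral: integrate y^2 in x from x = y - 1 to x = 7 - 3y
inner = sp.integrate(y**2, (x, y - 1, 7 - 3*y))
print(sp.expand(inner))                 # 8*y**2 - 4*y**3
# Outer integral in y from 1 to 2
print(sp.integrate(inner, (y, 1, 2)))   # 11/3
```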
How to find the length of a right triangle?
Note that the triangle $ADB$ is similar to triangle $CDA$. Hence, we get that $$\dfrac{AD}{DB} = \dfrac{CD}{DA} \implies \dfrac{12}{x} = \dfrac{7}{12}$$ Hence, $x = \dfrac{144}{7}$. EDIT If $CD$ is $x$ and $BD$ is $x+7$, the same procedure gives us $$\dfrac{12}{x+7} = \dfrac{x}{12} \implies x^2 + 7x = 144 \implies (x+16)(x-9) = 0$$ Since $x = CD > 0$, we get that $x=9$.
Translating coordinates from tilted axis system
If the rotation is counterclockwise by angle $\alpha$ and $(x,y)$ are the coordinates in the original axes and $(x', y')$ are the coordinates in the new (transformed) axes, then: $x' = x \cos\alpha + y \sin\alpha$ $y' = y\cos\alpha - x\sin\alpha$
The matrix denotation. Clarification on Shear?
Note that $A(1,0)=(3,5)$, $A(0,1)=(5,-3)$, and $\det(A)=-34$; since the determinant is negative, it should be a reflection with scaling. To check that completely we can find the eigenvalues and eigenvectors. We can exclude a shear, since a shear does not preserve angles while this map does: indeed $A^TA=34I$, so up to scaling the transformation is an isometry.
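A small numerical check of the claims above (a sketch, not part of the original answer):

```python
import numpy as np

# Columns are the images A(1,0) = (3,5) and A(0,1) = (5,-3)
A = np.array([[3.0, 5.0],
              [5.0, -3.0]])
print(np.linalg.det(A))      # -34: negative, so orientation-reversing (reflection)
print(A.T @ A)               # 34 * I: up to scaling, A is an isometry
print(np.linalg.eigvals(A))  # +sqrt(34) and -sqrt(34): reflection axis and its normal
```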
Two definitions of a projective morphism
Certainly, if $X$ is "Hartshorne-projective" it is also "EGA-projective" (take $\mathcal{E}$ to be a free bundle of rank $n+1$). The converse holds if every coherent sheaf is globally generated by a finite-dimensional vector space of sections after some line bundle twist; indeed, if $V$ generates $\mathcal{E} \otimes L$ then the surjection $V \otimes \mathcal{O}_Y \to \mathcal{E} \otimes L$ induces a closed embedding $$ \mathbb{P}(\mathcal{E}) = \mathbb{P}(\mathcal{E} \otimes L) \to \mathbb{P}(V \otimes \mathcal{O}_Y) = \mathbb{P}^n_Y. $$ So, for instance, if $Y$ satisfies reasonable finiteness conditions and admits an ample line bundle, the two definitions are equivalent.
Show that $f(x)=x^2−3$ is Riemann integrable on $[0,1]$
It's probably difficult to decipher what I wrote up there, so here I'll show you how to get a $U(f,P)$. We know that $f(x) = x^2 - 3x + 5$ is a parabola, and on $[0,3/2]$ it is decreasing, and on $[3/2,2]$ it is increasing. (Do you see why?) To get a $U$, I am going to choose the left-end points on $[0,3/2]$ and right-end points on $[3/2,2]$. If you are feeling a bit skeptical as to why this is the case, sketch the graph of $x^2-3x+5$. The rectangles that I am choosing contain the area under the curve. On $[0,3/2]$, we have $h = \frac{3/2 - 0}{n}$ as the width of each rectangle, and each left-end point is going to be $x_k = 0 + k h$, where $k = 0, \dots, n-1$. So the left-hand sum on $[0,3/2]$ will be: $$\text{Left-sum on $[0,3/2]$} = \sum_{k=0}^{n-1} f(x_k) h = \sum_{k=0}^{n-1} f(k\frac{3}{2n})\frac{3}{2n} = \sum_{k=0}^{n-1} (k^2\frac{9}{4n^2} - k \frac{9}{2n} + 5)\frac{3}{2n}$$ On $[3/2,2]$, we have $h = \frac{2 - 3/2}{n}$ as the width of each rectangle, and each right-end point is going to be $x_k = 3/2 + k h$, where $k = 1, \dots, n$. So the right-hand sum on $[3/2,2]$ will be: $$\text{Right-sum on $[3/2,2]$} = \sum_{k=1}^{n} f(x_k) h = \sum_{k=1}^{n} f(3/2+k\frac{1}{2n})\frac{1}{2n} = \sum_{k=1}^{n} ((\frac32+k\frac{1}{2n})^2 - 3(\frac32+k\frac{1}{2n}) + 5)\frac{1}{2n}$$ Let me know if you feel like you are completely lost, and maybe I can clarify some more points.
On the Hardy-Littlewood Maximal function in $L^{2}(\mathbb{R}^n)$
If $f \in L^2$ then by the Hardy-Littlewood maximal inequality $f^* \in L^2$ as well. Thus, by the Cauchy-Schwarz inequality $$ \left(\int_{\mathbb{R}^n} |f(x) f^*(x)| dx \right)^2 \leq \left( \int_{\mathbb{R}^n} |f(x) |^2 dx \right)\left(\int_{\mathbb{R}^n} |f^* (x)|^2 dx\right) < \infty $$ we see it is indeed true that $ ff^* \in L^1.$ And of course, $f^*$ is locally $L^2$ since it is in fact $L^2.$ So both the results are true.
Variance, Covariance, and Correlation answer check
Firstly: If $Y=0.5+0.6X$ and $\mathsf {Var}(X)=\sigma^2$, then $\mathsf{Var}(Y)=0.36\sigma^2$, because $\mathsf {Var}(a+bX) ~=~ b^2~\mathsf{Var}(X)$ when $a,b$ are constants. Similarly: $\mathsf {Cov}(a+bX, c+dX) ~=~ bd~\mathsf{Var}(X)$ Revisit all your calculations. Secondly: The correlation coefficient is defined as: $$\mathsf {Corr}(U,V) ~=~ \dfrac{\mathsf {Cov}(U,V)}{\sqrt{~\mathsf{Var}(U)~\mathsf{Var}(V)~}}$$ Just substitute as appropriate.
How long are $AO$ and $OC$?
Hint: From the angle bisector theorem, we have $\dfrac{AB}{BC}=\dfrac{15}{33}=\dfrac xy$. Also, $x+y=21$, therefore $x=21-y$. Plugging in these values, we get $\dfrac{21-y}{y}=\dfrac{15}{33}$. Solve this, and you get the value of $y$, which gives you the corresponding value of $x$.
Proof that the tensor product is the coproduct in the category of R-algebras
Proving the tensor product of commutative $R$-algebras satisfies the universal property of the coproduct is straightforward, but it might be more illuminating to figure out the more general universal property of the tensor product of not necessarily commutative $R$-algebras. Let $A$, $B$ and $C$ be $R$-algebras, not necessarily commutative. Let's compute $\mathrm{Hom}(A \otimes_R B, C)$, where $\mathrm{Hom}$ means morphisms of $R$-algebras. An $R$-algebra homomorphism $f : A \otimes_R B \to C$ is an $R$-linear map such that $$f(a_1 a_2 \otimes b_1 b_2) = f(a_1 \otimes b_1) f(a_2 \otimes b_2).$$ Let $g : A \to C$ and $h:B \to C$ be given by $g(a) = f(a \otimes 1)$ and $h(b) = f(1 \otimes b)$. Plugging in $b_1 = b_2 = 1$ to the displayed equation, we see that $g$ is an $R$-algebra homomorphism. Similarly $h$ is also an $R$-algebra homomorphism and $f$ is determined by $f(a \otimes b) = f((a \otimes 1)(1 \otimes b)) = g(a) h(b)$. But we can't fully go backwards: if you start with $R$-algebra maps $g$ and $h$ and try to use this last formula to define an $f$, you find it is only well defined if $g(a)$ and $h(b)$ commute for any $a \in A$ and $b\in B$. Indeed, that is necessary because $(a \otimes 1)(1 \otimes b) = a \otimes b = (1 \otimes b) (a \otimes 1)$, so, applying $f$, we'd find $g(a) h(b) = f(a \otimes b) = h(b) g(a)$. You can easily check that that commutativity is enough to make $f$ well defined when starting from $g$ and $h$, and thus we get the universal property of the tensor product: $$ \mathrm{Hom}(A \otimes_R B, C) = \{ (g,h) \in \mathrm{Hom}(A,C) \times \mathrm{Hom}(B,C) : \forall a \in A, b \in B, g(a) \text{ and } h(b) \text{ commute}\}.$$ Clearly, if we restrict to the subcategory of commutative $R$-algebras, the commutation condition is automatic and this reduces to the universal property of the coproduct. Notice that the situation in the category of groups is quite similar: the direct product of groups has a universal property analogous to this one for the tensor product, i.e., group homomorphisms from $G \times H$ into $K$ are given by pairs of homomorphisms $G \to K$ and $H \to K$ whose images commute in $K$. The direct product of groups is also not the coproduct (that would be the free product of groups), but it is the coproduct when you restrict to Abelian groups. (One place where this analogy breaks down is that the direct product of groups is the product of groups, but the tensor product of $R$-algebras is not the product.)
Subgroup that generates $\mathbb{Z}$
The only thing you need to do is find a combination $8a+13b=1$. From this one you can get any other, by multiplying by $n$. To find the required combination use the extended euclidean algorithm: $13=8+5$ $8=5+3$ $5=3+2$ $3=2+1$ Now we flip the equalities: $1=3-2$ $2=5-3$ $3=8-5$ $5=13-8$ Then we use them recursively to get a combination of $8$ and $13$: $1=3-2=3-(5-3)=2(3)-5=2(8-5)-5=2(8)-3(5)=2(8)-3(13-8)=$ $5(8)-3(13)$
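The same computation can be automated with the extended Euclidean algorithm; here is a minimal sketch (not part of the original answer):

```python
def extended_gcd(a, b):
    """Return (g, s, t) with g = gcd(a, b) and s*a + t*b = g."""
    if b == 0:
        return a, 1, 0
    g, s, t = extended_gcd(b, a % b)
    # gcd(a, b) = gcd(b, a mod b); back-substitute the coefficients
    return g, t, s - (a // b) * t

g, s, t = extended_gcd(8, 13)
print(g, s, t)               # 1 5 -3, i.e. 5*8 - 3*13 = 1
assert s * 8 + t * 13 == g
```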
Euler's method: plotting total error (round-off included) as a function of stepsize
The first graph is the correct one, or at least one of the usual ones. One could also use $\log_2(stepsize)$ on the horizontal axis, which would give the usual loglog plot. Remember that $\log$ is monotonically increasing, so that a V shape in the errors gets translated into a V shape in the log-errors. As you have observed, the error has one component from the method that behaves like $h^p$ and one part from the numerical noise that is in first order proportional to the number of steps or $1/h$. Thus one can say that the error looks like $$ error \approx \max(Ah^p,\frac Bh) $$ (Normally it is the sum of those terms, but away from the intersection point one of them rapidly dominates the other, so that the maximum is a valid approximation.) Taking the logarithm this changes to $$ \log(error)≈\max(\log(A)+p\cdot \log(h),\,\log(B)-\log(h)) $$ so that the loglog plot should look like a piecewise linear V shape where the slope of the branch towards the larger values of $h$ is the order $p$ of the method.
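Here is a minimal sketch that reproduces the V shape, assuming the test problem $y'=y$, $y(0)=1$ integrated to $t=1$ (my choice of example, not from the original question); single precision is used so that the round-off branch appears at moderate step counts:

```python
import math
import numpy as np

def euler_error(n):
    """Error at t=1 of Euler's method for y' = y, y(0) = 1, with n steps,
    carried out in single precision so round-off shows up early."""
    h = np.float32(1.0 / n)
    y = np.float32(1.0)
    for _ in range(n):
        y = y + h * y
    return abs(float(y) - math.e)

# log2(h) vs log2(error): slope +1 branch (method error ~ h) for large h,
# rising round-off branch (~ 1/h) for small h, with a V in between.
for k in range(1, 21):
    print(-k, round(math.log2(euler_error(2**k)), 2))
```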
Probability question regarding taking seats
Please note that the number of seats taken (say, $m$ out of $n$) can be anywhere between one third and one half of $n$; you will have to apply the ceiling function to get the exact bounds: $\lceil{\dfrac{n}{3}}\rceil \le m \le \lceil{\dfrac{n}{2}}\rceil$ Given that how people come in and sit is random (within the rules), you should just look at the possible number of cases for $m$ in the above range. The expected number of seats taken is a mean over these cases. For example, take $n = 11$: there are possibilities of $4$ seats taken, $5$ seats taken and $6$ seats taken. You get that using the above range.
How to find point of a function maximum
Compute the derivative of $f(x)$ and set it to zero, $$f'(x) = e ^ {-bx^2} \frac {L} {R^2} \left[-2b(x^6-R^2)x+6x^5\right] =0$$ or, with $R=1$ and $b=1$, $$x\left(x^6-3x^4-1\right)=0$$ which has the obvious solution at $x=0$ for the minimum value. Note that the cubic equation in $x^2$ has only one real root and it is given analytically by $$x^2= 1+\left(\frac{3-\sqrt5}{2}\right)^{1/3} +\left(\frac{3+\sqrt5}{2}\right)^{1/3}$$ Thus, the maximum values are at $x =\pm 1.762$.
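A quick numerical confirmation of the closed-form root (a sketch, not part of the original answer):

```python
import numpy as np

# Real root of u^3 - 3u^2 - 1 = 0, where u = x^2
u = 1 + ((3 - 5**0.5) / 2) ** (1 / 3) + ((3 + 5**0.5) / 2) ** (1 / 3)
print(u**3 - 3 * u**2 - 1)                 # ~0, so u solves the cubic
print(u**0.5)                              # ~1.762, the maximising |x|
print(np.roots([1, 0, -3, 0, 0, 0, -1]))   # all roots of x^6 - 3x^4 - 1
```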
Homeomorphism confusion
I'll give a conceptual answer to your question. Two curves are homeomorphic if the first can be continuously deformed into the second. Intuitively, if you had a circular loop of wire, you could hammer and bend it into the shape of any polygon (without breaking the wire or adding any new connections). The cardinality of $S^1$ is the same as the cardinality of any line segment, which in turn has the same cardinality as any finite union of line segments. Cardinality of a set is different from "length" or "measure". For example, though $[1,2]$ is a proper subset of $[0,3]$, the two sets have the same cardinality.
The probability of exactly $r$ guests leaving with their own hats after a random permutation
The number of permutations of $n$ hats with exactly $r$ fixed points (i.e., exactly $r$ hats return to their original owner) is: $$D_{n,r} = \binom{n}{r}D_{n-r,0} = \frac{n!}{r!}\sum_{i=0}^{n-r}\frac{(-1)^i}{i!}$$ since the number of ways of selecting the $r$ fixed points is $\displaystyle \binom{n}{r}$ and the number of complete derangements of the rest is $\displaystyle D_{n-r,0} = (n-r)!\sum_{i=0}^{n-r}\frac{(-1)^i}{i!}$. Thus, the required probability is $\displaystyle \frac{1}{r!}\sum_{i=0}^{n-r}\frac{(-1)^i}{i!}$. Take the limit $n \to \infty$ and see that it approaches $\dfrac{e^{-1}}{r!}$
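A brute-force check of the count and the limit (an illustrative sketch, not part of the original answer):

```python
from itertools import permutations
from math import factorial, exp

def count_fixed(n, r):
    """Count permutations of n items with exactly r fixed points."""
    return sum(1 for p in permutations(range(n))
               if sum(p[i] == i for i in range(n)) == r)

n, r = 7, 2
formula = factorial(n) / factorial(r) * sum((-1)**i / factorial(i)
                                            for i in range(n - r + 1))
print(count_fixed(n, r), round(formula))                         # both 924
print(count_fixed(n, r) / factorial(n), exp(-1) / factorial(r))  # ~0.1833 vs ~0.1839
```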
Quadratic residue and primitive root
Regarding your statements $(1)$ and $(2)$ for composite $n$ which have primitive roots, note they are true only for all $a$ which are coprime to $n$, e.g., as it states in Primitive root modulo $n$ ... $g$ is a primitive root modulo $n$ if for every integer $a$ coprime to $n$, there is an integer $k$ such that $g^{k} \equiv a \pmod{n}$ Such a value $k$ is called the index or ... Your $(1)$ then is, as you stated, true when the index is $2k$ to get $x = g^{k}$. For your $(2)$, have the odd index be $0 \le 2k + 1 \lt \phi(n)$ and assume there's an $x$ where $$x^2 \equiv g^{2k + 1} \pmod{n} \tag{1}\label{eq1A}$$ Now, $x$ must be coprime to $n$ so there's a $0 \le j \lt \phi(n)$ where $x \equiv g^j$ so you then have $$g^{2j} \equiv g^{2k + 1} \pmod{n} \implies g^{2j - (2k + 1)} \equiv 1 \pmod{n} \tag{2}\label{eq2A}$$ With $d = 2j - (2k + 1)$, since the multiplicative order of $g$ modulo $n$ is $\phi(n)$, and you have $0 \le 2j \lt 2\phi(n)$ so $-\phi(n) \lt d \lt 2\phi(n)$, this means you either have $d = 0 \implies 2j = 2k + 1$, which is not possible since you can't have an even equal an odd, or $d = \phi(n) \implies 2j = \phi(n) + (2k + 1)$. However, apart from $n = 2$ (where statement $(2)$ doesn't apply), $\phi(n)$ for all the other cases, i.e., $n = 4, p^{k}$ and $2p^k$, is even. Thus, once again, you have an even number on the left and an odd on the right, so it can't be true. This shows the original assumption of $x$ existing can't be true, so $a$ must be a quadratic nonresidue. As for handling $a$ when it's not coprime to $n$, for simpler algebra & handling, first reduce $a$, if need be, so it's $0 \le a \lt n$. With $a = 0$, it's a quadratic residue. With $a \gt 0$, for $n = 2$, there's no other values, while for $n = 4$, you have $a = 2$ being a quadratic nonresidue. For $p^k$ and $2p^k$, where $p$ is an odd prime, you have $$a = 2^i p^j(m) \tag{3}\label{3A}$$ for some $i \ge 0$ and $0 \le j \le k$, with $ij \neq 0$, and $m$ where $\gcd(m, 2p) = 1$. For $j = k$, the only possibility is $a = p^k$ with $n = 2p^k$ and $m = 1$, i.e., $$x^2 \equiv p^k \pmod{2p^k} \tag{4}\label{eq4A}$$ If $k$ is even, then $x \equiv p^{\frac{k}{2}} \pmod{2p^k}$, while if $k$ is odd, then $x \equiv p^{\frac{k + 1}{2}} \pmod{2p^k}$, so $a$ is a quadratic residue in either case. Next, consider $j \lt k$, with the $2$ cases for $n$: Case #$1$: $n = p^k$ There's an integer $q$ such that $$x^2 \equiv 2^i p^j(m) \pmod{p^k} \iff x^2 - 2^i p^j(m) = qp^k \tag{5}\label{eq5A}$$ Let $x$ have $r$ factors of $p$, so $x^2$ has $2r$ factors. If $2r \lt j$, the left side has $2r$ factors of $p$ altogether, while if $2r \gt j$, then it has $j$ factors in total. In summary, it has $b = \min(2r, j)$ factors of $p$. However, since the right side has at least $k \gt j \ge b$ factors, this means it has more factors of $p$, which is not possible. As such, with $j$ being odd, $a$ would be a quadratic nonresidue. Otherwise, with $j = 2r$, if you have $x = p^r x'$, dividing both sides by $p^j$ gives $$(x')^2 - 2^i(m) = qp^{k - j} \iff (x')^2 \equiv 2^i(m) \pmod{p^{k - j}} \tag{6}\label{eq6A}$$ Since $p^{k - j}$ has a generator and $2^i(m)$ is coprime to $p^{k - j}$, you can then use $a = 2^i(m)$ and $n = p^{k - j}$ with your statements $(1)$ and $(2)$ to determine whether or not this $a$ is a quadratic residue. 
Case #$2$: $n = 2p^k$ As before, there's an integer $q$ such that $$x^2 \equiv 2^i p^j(m) \pmod{2p^k} \iff x^2 - 2^i p^j(m) = q(2p^k) \tag{7}\label{eq7A}$$ As with case #$1$, if $j$ is odd then it's a quadratic nonresidue, else $j = 2r$ with $x = p^r x'$ giving, after dividing by $p^j$, $$(x')^2 - 2^i(m) = q(2p^{k-j}) \tag{8}\label{eq8A}$$ If $i = 0$, you then have $$(x')^2 \equiv m \pmod{2p^{k-j}} \tag{9}\label{eq9A}$$ You can use $a = m$ and $n = 2p^{k-j}$ with your statements $(1)$ and $(2)$ to find whether or not this is a quadratic residue. For $i \gt 0$, $x'$ must be even, i.e., $x' = 2x''$, so \eqref{eq8A} becomes $$4(x'')^2 - 2^i(m) = q(2p^{k-j}) \iff 2(x'')^2 - 2^{i-1}(m) = q(p^{k-j}) \tag{10}\label{eq10A}$$ The multiplicative inverse of $2$ modulo $p^{k-j}$ is $\frac{p^{k-j} + 1}{2}$, so multiplying both sides of \eqref{eq10A} by this value means it then becomes the equivalent of $$(x'')^2 \equiv \left(\frac{p^{k-j} + 1}{2}\right)2^{i-1}(m) \pmod{p^{k-j}} \tag{11}\label{eq11A}$$ Similar to in case #$1$, you can now use $a = \left(\frac{p^{k-j} + 1}{2}\right)2^{i-1}(m)$ and $n = p^{k - j}$ with your statements $(1)$ and $(2)$ to determine whether or not this $a$ is a quadratic residue.
$y=\arcsin xy;\quad xy'+y=y'\sqrt{1-x^2 y^2}$
The relation $y = \arcsin(xy)$ implies that $xy = \sin(y)$. Differentiating both sides with respect to $x$, you obtain $$xy' + y = y'\cos(y).$$ But $\cos(y) = \cos(\arcsin(xy))$...
Bounded or unbounded?
In fact any closed set can be the zero set of $f$ in this context. (That gives you all kinds of examples where $A$ is not bounded.) Proof: Suppose $A\subset \mathbb R^n$ is closed. Then $g(x) = d(x,A)$ maps $\mathbb R^n$ continuously to $[0,\infty)$ and $\{x\in \mathbb R^n : g(x)=0\}=A.$ You can now define $f(x) = (g(x),0,\dots,0)$ for a continuous map from $\mathbb R^n$ to $\mathbb R^m$ such that $\{x\in \mathbb R^n : f(x)=0\}=A.$
Is it possible to prove uniqueness without using proof by contradiction?
There is often no need for contradiction; to say that there is a unique object $x$ satisfying some formula $\varphi(x)$ is to say that There exists $x$ satisfying $\varphi(x)$ — symbolically, this is $\exists x\, \varphi(x)$; If $x,y$ are such that $\varphi(x)$ and $\varphi(y)$ are true, then $x=y$ — symbolically, this is $\forall x \forall y (\varphi(x) \wedge \varphi(y) \to x=y)$. So you can prove uniqueness by first supposing $x$ and $y$ are objects for which $\varphi(x)$ and $\varphi(y)$ are both true, and deriving $x=y$. You've probably done this a thousand times without realising. For example There is a unique empty set. To see this, suppose that $A$ and $B$ are empty sets. For any $x$, the statements $x \in A$ and $x \in B$ are both false, so that $x \in A \Leftrightarrow x \in B$ is true. By the axiom of extensionality, $A=B$. Every group (or even monoid) has a unique identity element. To see this, let $G$ be a group and suppose $u,v \in G$ satisfy $ug=g=gu$ and $vg=g=gv$ for all $g \in G$. Then $u=uv$ since $v$ is an identity element, and $uv=v$ since $u$ is an identity element, so $u=v$. Every time you prove a function is injective, you're proving a uniqueness result. To say a function $f : X \to Y$ is injective is to say that, for all $y \in \mathrm{im}(f)$, there exists a unique $x \in X$ such that $f(x)=y$. This is proved by showing that if $x,x' \in X$ with $f(x)=f(x')$ ($=y$), then $x=x'$.
Maximum Value of $a+b+c$
Hint: $a=1$ and $\frac{44}{b}=76-c$ implies $b|44$.
Let $S:U\rightarrow V, \ T:V\rightarrow W$ be linear maps. If $TS$ is surjective, does this imply $T$ is injective?
Take $U=V=\mathbb{R^n},W=\{0\}$ and let $S$ be the identity transformation, $T$ the zero transformation. Of course $T$ is not injective and of course $TS:U\to W$ is surjective.
Game Theory: Penalty Shot Game
Suppose the row player chooses top with probability $p$ and bottom with probability $1-p$. If the column player chooses left, the row player's expected payoff is $$\frac{1}{2}p-(1-p)\cdot 1.$$ If the column player chooses right, the row player's expected payoff is $$-p\cdot1+\frac{1}{3}(1-p).$$ The minimax strategy is the $p$ that causes these two payoffs to be equal. $$\frac{1}{2}p-(1-p)=-p+\frac{1}{3}(1-p),$$ so $p=8/17$ is the minimax strategy. The value is found by substituting this value of $p$ into the payoffs from above, so the value is $-5/17$. You can read a summary of minimax solutions here: http://www.mit.edu/~jcrandal/16.499/GameTheoryBasics.pdf. Edit: fixed mistake in the algebra spotted by memo.
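The algebra can be double-checked symbolically (a sketch, not part of the original answer):

```python
from sympy import Eq, Rational, solve, symbols

p = symbols('p')
left = Rational(1, 2) * p - (1 - p)    # expected payoff if column plays left
right = -p + Rational(1, 3) * (1 - p)  # expected payoff if column plays right
sol = solve(Eq(left, right), p)[0]
print(sol, left.subs(p, sol))          # 8/17 and -5/17
```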
Distribution Probability Problem
Hints: You are given the distribution of $(X_1,X_2)$, this allows you to compute $\mathbb E(\varphi(X_1,X_2))$ for every bounded measurable function $\varphi$. You can deduce from this the value of $\mathbb E(\varphi(R,T))$ for every bounded measurable function $\varphi$. This yields the distribution of $(R,T)$. If a step is unclear, say so.
Intersection of topological manifolds.
Actually, the intersection of two topological manifolds in a Euclidean space can be almost anything. Here's an example to show how bad things can get. Let $C$ be any closed subset of $\mathbb R^n$ whatsoever, and let $f\colon \mathbb R^n \to \mathbb R$ be the function $$ f(x) = \operatorname{dist}(x,C). $$ Thus $f$ is continuous, and $f(x)=0$ if and only if $x\in C$. Let $M\subset\mathbb R^{n+1}$ be the graph of $f$, and let $N\subset\mathbb R^{n+1}$ be the graph of the zero function (i.e., $N = \mathbb R^n\times \{0\}$). Then $M$ and $N$ are both topological submanifolds of $\mathbb R^{n+1}$, and $M\cap N = C \times \{0\}$.
how many times do we have to choose to remove all the edges from the entire graph?
The answer is 6 times. First you choose any 8 vertices and you remove all the edges that connect those vertices. Then you take the other 8 vertices and also remove their edges. At this point you have a bipartite graph, $K_{8,8}$, and you have made 2 choices. Now you choose any four vertices from one side and four from the other and remove their edges. Then you choose the other four vertices from one side and the other four from the other side and remove the edges. In this step you have made 4 choices and you have 2 bipartite graphs $K_{4,4}$. Finally, you make two more choices to remove the edges of each $K_{4,4}$.
A question about prime factorization of $n!$
The primes in $(N/2,N]$ appear with exponent 1, so you need to show that there are arbitrarily many primes in such an interval. You could use Ramanujan's proof of Bertrand's postulate, for example. If you have access to sledgehammers like the Prime Number Theorem that would suffice as well.
Show $B=\bigcup \mathcal{A}$ is well-ordered and $A\leq B, \forall A\in \mathcal{A}$
Consider a subset $\emptyset\neq S\subseteq B$ and take $s\in S$. Select $A_i$ such that $s\in A_i$ (such $A_i$ exists of course, by definition of $B$). Then the set $$\{t\in S:t\leqslant s\}$$ is contained in $A_i$. This is because if $t\in A_j\succeq A_i$ (I use $\preceq$ and $\succeq$ for the ordering you've defined on the subsets of $X$, so there is no confusion with the order on the elements of $X$), then $t\in A_i$ since $t\leqslant s\in A_i$ (here I apply the definition of $\preceq$). On the other hand, if $t\in A_j\preceq A_i$, then $t\in A_j\subseteq A_i$ (again, by definition of $\preceq$). These are the only cases because of the hypothesis on $\mathcal A$ (that either $C\preceq D$ or $D\preceq C$, for all $C,D\in\mathcal A$). So, $A_i$ being well-ordered, there is a minimum $m\in A_i$ of $J=\{t\in S:t\leqslant s\}$. Can you prove that $m$ is the minimum of $S$? Hint: $B$ is totally ordered, since every two $x,y\in B$ belong to some $A_i$, which is totally ordered (any well-ordered set is also totally ordered), so for every $q\in S$, either $s\leqslant q\rightarrow m\leqslant s\leqslant q$, or $q\leqslant s\rightarrow q\in J\rightarrow m\leqslant q$
The fractional order derivative approach above
Actually, it doesn't exist for $s=1$. Indeed, this is why we have $0<s<1$. You will notice that we would have $\Gamma(0)$. Indeed, $\Gamma(0)$ doesn't exist, and as $s\to1$, $\frac1{\Gamma(1-s)}\to0$. Just the same, you will notice that if $f(x)\ne0$, then $\int_0^x\frac{f(y)}{(x-y)^s}dy\to\pm\infty$. Thus we are left with the following indeterminate form: $$\frac1{\Gamma(1-s)}\frac d{dx}\int_0^x\frac{f(y)}{(x-y)^s}dy\stackrel{s\to1}=0\times\infty$$ which is quite unfortunate. However, it is noticeable that if we were to evaluate the limit, we'd end up with the following: $$\lim_{s\to1^-}\frac1{\Gamma(1-s)}\frac d{dx}\int_0^x\frac{f(y)}{(x-y)^s}dy=f'(x)$$ just as you would expect. Now, if $u,v$ were functions of $x$, one might apply the multivariate chain rule as follows: $$\frac d{dx}\int_0^uf(y)(v-y)^sdy=u'f(u)(v-u)^s+sv'\int_0^uf(y)(v-y)^{s-1}dy$$ Whenever $0<s<1$ and $u(x)=v(x)=x$, we end up with $$\frac d{dx}\int_0^xf(y)(x-y)^sdy=s\int_0^xf(y)(x-y)^{s-1}dy$$ This result means that $$\frac1{\Gamma(1-s)}\frac d{dx}\int_0^xf(y)(x-y)^{-s}dy=\frac1{\Gamma(2-s)}\frac{d^2}{dx^2}\int_0^xf(y)(x-y)^{1-s}dy$$ Now letting $s\to1$ on the RHS gives us $$\lim_{s\to1}\frac1{\Gamma(2-s)}\frac{d^2}{dx^2}\int_0^xf(y)(x-y)^{1-s}dy=\frac1{\Gamma(1)}\frac{d^2}{dx^2}\int_0^xf(y)dy=f'(x)$$ You may also want to check Wikipedia or Google for other references.
Check differentiability of $(x,y)$ at $(0,0)$.
Observe that both partial derivatives of first order at $\;(0,0)\;$ exist and equal zero, and besides: $$\frac{\partial f}{\partial x}=\frac{y\sqrt{x^2+y^2}-\frac{x^2y}{\sqrt{x^2+y^2}}}{x^2+y^2}=\frac{y^3}{(x^2+y^2)^{3/2}}\xrightarrow[(x,y)\to(0,0)]{}0\;\;\text{(why?)}$$ so the partial derivatives exist and are continuous at the origin...thus the function is differentiable there. Now fill in what's needed in the above.
Question about passage in Halbeisen's book
(This is basically exactly what Brian mentioned in his comment, but in way more than 600 characters.) Note that if $\mathsf{ZFC}$ refutes the sentence $\varphi$, then there must be a finite fragment $\Phi$ of $\mathsf{ZFC}$ which refutes $\varphi$. So what we are aiming for is to show that (relative to the consistency of $\mathsf{ZFC}$) this cannot happen. So we begin with a finite fragment $\Phi$ of $\mathsf{ZFC}$, and we have in mind a forcing notion that should produce generic extensions satisfying $\Phi + \varphi$. Unfortunately, even demonstrating that the desired forcing notion $\mathbb{P}$ is an element of an arbitrary set model $\mathsf{M}$ of $\Phi$ might require axioms not in $\Phi$. Furthermore, the demonstration that the generic extension satisfies $\Phi + \varphi$ might also require axioms of $\mathsf{ZFC}$ not in $\Phi$ (because we will have to construct the required names, which will in all likelihood require, for example, instances of Replacement not in $\Phi$). We must then analyse exactly what we need so that the above process can be carried out, and get a suitable finite fragment $\mathsf{ZFC}^*$ of $\mathsf{ZFC}$ such that if you begin with a set model $M$ of $\mathsf{ZFC}^*$ the forcing notion $\mathbb{P}$ is an element of $\mathsf{M}$, and, moreover, constructing a generic extension $\mathsf{M}[X]$ results in a model of $\Phi + \varphi$. This then shows (relative to the consistency of $\mathsf{ZFC}$) that the finite fragment $\Phi$ cannot refute $\varphi$. This analysis can be carried out for any finite fragment $\Phi$ of $\mathsf{ZFC}$, leading to an appropriate finite fragment $\mathsf{ZFC}^*$ so that the above works. In this manner we can demonstrate the relative consistency of $\varphi$ with $\mathsf{ZFC}$. Note that there are many relative consistency results that begin not with finite fragments of $\mathsf{ZFC}$, but rather of stronger theories, such as $\mathsf{ZFC} + \exists \text{ inaccessibles}$. The above description, mutatis mutandis, will handle those cases as well. (But in practice we don't tend to worry about the particulars, and think of forcing over models of $\mathsf{ZF(C)}$ -- or stronger theories. Even more, (and perhaps far more often than the formalist in me would like to admit) we generally think of forcing over the entire von Neumann universe $\mathsf{V}$.)
Find the equation in polar coordinate form for a straight line through the points with polar coordinates (4,0) and (4,π/3).
Your Cartesian coordinates are incorrect. They should be $(4,0)$ and $(2, 2\sqrt{3})$. Sorry for the bad notation. You then need to use the $R\sin(\theta + \alpha)$ form to get it into the form of the book's answer.
Operator precedence - Discrete Math (Predicate logic)
The first one says that for all $x$ in $S$ there exists a $y$ in $T$ such that the predicate $P(x,y)$ implies $Q(x)$, whereas in the second one, the existence of a $y$ in $T$ such that $P(x,y)$ is true implies $Q(x)$. In the first one, $Q(x)$ is a consequence of $P(x,y)$ being true; in the other one, it is a consequence of the existence of an element that makes the predicate true.
The strictly upper triangular "subalgebra" of $M_n(K)$ is not unital?
You are right that $UT_n(\mathbb{K})$ doesn't have a unit. A lot of algebras you want to study won't have a unit element. So I guess that the author was saying that it's better to work with unital algebras in the sense that if you work with a non-unital algebra $A$, then try to find a unital algebra $B$ such that $A$ is a subalgebra of $B$.
Function with $f(a)-f(b)$ dividing $a^3-b^3$
What about $f(x)=x^3$?
A better proof for the set of irrational number not closed under ordinary multiplication.
You denote (not notate!) the set of rational numbers by $\mathbb Q$ and that of the real numbers by $\mathbb R$ ; therefore the set of irrational numbers can be written as $\mathbb R \backslash \mathbb Q$ or $\mathbb R - \mathbb Q$, depending on your taste. Your proof could simply go as follows : since $\sqrt 2 \in \mathbb R \backslash \mathbb Q$ but $(\sqrt 2)^2 = 2 \in \mathbb Q$, $\mathbb R \backslash \mathbb Q$ is not closed under multiplication. Hope that helps,
Prove that for all real numbers $x,y$ and $z$ that $x^2+y^2+1 \geq x+xy$.
We have: $$ \frac{x^2+1}{2}≥|x| \\ \frac{x^2+y^2}{2}≥|xy| \\ \frac{y^2+1}{2}≥|y| $$ by AM-GM. Thus, adding yields: $$ x^2+y^2+1≥|xy|+|x|+|y|≥xy+x $$
How would you find the exact roots of $y=x^3+x^2-2x-1$?
Let $p(x) = x^3+x^2-2x-1$, we have $$p(t + t^{-1}) = t^3 + t^2 + t + 1 + t^{-1} + t^{-2} + t^{-3} = \frac{t^7-1}{t^3(t-1)}$$ The RHS has roots of the form $t = e^{\pm \frac{2k\pi}{7}i}$ ( coming from the $t^7 - 1$ factor in numerator ) for $k = 1,2,3$. So $p(x)$ has roots of the form $$e^{\frac{2k\pi}{7} i} + e^{-\frac{2k\pi}{7} i} = 2\cos\left(\frac{2 k\pi}{7}\right)$$ for $k = 1,2,3$.
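A numerical check that $2\cos(2k\pi/7)$ really are the roots (a sketch, not part of the original answer):

```python
import math

for k in (1, 2, 3):
    x = 2 * math.cos(2 * k * math.pi / 7)
    # p(x) = x^3 + x^2 - 2x - 1 should vanish
    print(k, round(x, 6), x**3 + x**2 - 2 * x - 1)
```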
Why does $U(f,P) - L(f,P) < \epsilon$ make a good criterion for integrability?
Isn't the goal of analysis always to make $\epsilon$ as small as possible, and possibly 0? You seem to have a misunderstanding of what $\epsilon$ stands for here. You are given $\epsilon>0$ and you want to make something smaller than this $\epsilon>0$. Maybe this can clear some of this up. Why does $U(f,P) - L(f,P) < \epsilon$ make a good criterion for integrability? Because THM Let $x,y$ be arbitrary real numbers. If $x<y+\epsilon$ for each $\epsilon>0$, then $x\leq y$. P By contradiction. Suppose $x>y$. Then $x-y>0$. Take $\epsilon=x-y$. The above gives $x<y+\epsilon=y+x-y=x$, which is impossible. It must be the case that $x\leq y$. Note that since $\sup L\leq \inf U$ is always true, the criterion gives that $\inf U\leq \sup L$, which means $\sup L=\inf U$ and $f$ is integrable. Now, proving that for each $\epsilon>0$ there exists $P=P_\epsilon$ such that $$U(f,P)-L(f,P)<\epsilon$$ is usually easier than proving $\sup L=\inf U$ directly, in particular when the function is not given explicitly (say, if we want to prove $f$ is integrable when it is continuous) or in any other case where $f$ is incognito.
How to know the j-invariant of the modular elliptic curve from the modular form?
This is not a well-defined question, because the modular form $f$ corresponds to an isogeny class of elliptic curves, not a single elliptic curve; and the elliptic curves in the isogeny class can have different $j$-invariants. However, if you ask for the $j$-invariant of some elliptic curve in the isogeny class, then this is a very well-studied problem, and there is an effective way of doing so using the period lattice. This is all described comprehensively and beautifully in John Cremona's book Algorithms for modular elliptic curves (available for free online here); the algorithm you're after is described in section 2.14.
Is the following true: A is a proper subset of B implies A is strictly dominated by B
No. Infinite sets provide a counterexample. In fact, one definition of an infinite set is as a set that is in bijective correspondence with a proper subset. Consider $\Bbb N$ and the subset $\Bbb N\setminus \{1\}$, say. There is a bijection between these two sets. For the bijection, just shift by $1$. This is sometimes referred to as "Hilbert's hotel".
Approximating the area of triangles (same base and height), using rectangles stacked differently
You don't need Cavalieri's principle. Place the triangles with their bases on the same line and draw $n$ lines parallel to the bases so that they divide the altitudes into $n$ equal parts. Those lines intersect the triangles in $n$ pairs of segments, and it is easy to prove (by similar triangles) that segments on the same line are equal. On those segments, taken as bases, you can then construct $n$ pairs of rectangles, all having the same height $h/n$. They are the approximating rectangles you are looking for.
A question from NBHM regarding minimal and characteristic polynomials.
Yes, the statement and your justification are ultimately correct. That said, your solution is longer than it needs to be. The question was whether the statement "If $A\in M_3(\mathbb R)$ and $A^3=I,A\neq I$, then $A^2+A+I=O$" is correct, and you are trying to show that this statement is not correct. To show that this statement is not correct, you only need to give an example (or otherwise prove the existence of an example) of an $A$ for which $A\in M_3(\mathbb R)$ and $A^3=I,A\neq I$, but $A^2 + A + I \neq O$. So, the following is a complete answer: The statement is incorrect. For example, $$ A = \begin{pmatrix} -\frac{1}{2} & -\frac{\sqrt{3}}{2} &0 \\ \frac{\sqrt{3}}{2} & -\frac{1}{2} & 0 \\ 0 & 0 & 1 \\ \end{pmatrix} $$ satisfies $A^3=I,A\neq I$, but $A^2 + A + I \neq O$. Interestingly, you prove the (correct) stronger statement that there exist no matrices $A \in M_3(\Bbb R)$ for which $A^3 = I, A \neq I$, and $A^2 + A + I = O$.
Is there a way to generate a random matrix that satisfies a given linear matrix inequality (LMI)?
I haven't dug into the details, but judging from the title it should be what I am looking for, or at least give me an understanding of what can be achieved: Uniform sampling in semi-algebraic sets.
$\{(1), (12)(34), (13)(24), (14)(23)\}$ is the only non-cyclic proper subgroup of $A_4$?
You can think of it in the following way: $A_4$ is a group of order 12. By Lagrange's theorem all of its proper subgroups must have orders dividing 12, other than 12 itself. So, their orders are 1, 2, 3, 4 and 6. Order one is cyclic; two and three are prime, so those are also cyclic. If a group of order 4 were cyclic, then $A_4$ would have an element of order 4, which is not possible, since a permutation of order 4 in $S_4$ is of type $(a,b,c,d)$, which is not in $A_4$. So, one example is among subgroups of order 4. Claim: $A_4$ has no subgroup of order 6. Proof: Let H be a subgroup of order 6 in $A_4$. $A_4$ consists of the neutral element, eight three-cycles and three elements of type $(a,b)(c,d)$, called double transpositions. So, elementary counting gives us that H must contain at least one three-cycle; without loss of generality let it be $(1,2,3)$. With it, H, being a subgroup, must contain $(1,3,2)$, its inverse. So far, we have three elements in H, namely two three-cycles and the neutral element. So, we must have a double transposition or another three-cycle. Option one: We have one double transposition in H; let it be, again without loss of generality, (1,2)(3,4). Now, (1,2,3)(1,2)(3,4)=(1,3,4) must be in H, and so must be its inverse (1,4,3). Now H has the neutral element, (1,2,3), (1,3,2), (1,2)(3,4), (1,3,4), (1,4,3), which is six elements, and it is enough to show that H is not closed under multiplication: take (1,2,3)(1,3,4) = (2,3,4), which is not in H. The other option is handled similarly. Now, the only possibility is a group of order 4. Here we consult Lagrange's theorem again: the order of a group is a multiple of the order of any element. So, we have a group of order 4 whose elements can only have order 1 or 2. So H, of order 4, cannot contain three-cycles, since they have order 3. We are left with the neutral element and the three double transpositions, which form a non-cyclic group: your example, and the only possibility.
Find the number of solutions $(x,y)$ in non-negative integers such that $ax+by\leq ab$, where $a$ and $b$ are positive integers.
So far, you should agree that the number of solutions $(x,y)$ is $$ \frac{(a+1)(b+1) + B}{2}, \tag{1} $$ where $B$ is the number of points on the boundary $ax + by = ab$. So we need to count $B$. Write $a = da'$, $b = db'$, where $d = \gcd(a,b)$, so that $\gcd(a',b') = 1$. Our equation becomes \begin{align*} (da') x + (db') y &amp;= d^2 a' b'\\ \iff a' x + b' y &amp;= d a' b' \tag{2} \end{align*} Mod $a'$, $b'y \equiv 0$ so $a' \mid y$. Likewise mod $b'$, $a'x \equiv 0$ so $b' \mid x$. So if we write $x = b'x'$ and $y = a' y'$, the number of solutions to (2) is equal to the number solutions $(x',y')$ to $$ a'b'x' + a'b'y' = da'b' $$ i.e. just $x' + y' = d$. There are $d+1$ solutions to this since $x'$ and $y'$ are nonnegative. So $B = d + 1 = \gcd(a,b) + 1$ in (1), and the final answer is $$ \boxed{\frac{(a+1)(b+1) + \gcd(a,b) + 1}{2}.} $$
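A brute-force check of the boxed formula for small $a,b$ (a sketch, not part of the original answer):

```python
from math import gcd

def brute(a, b):
    # x <= b and y <= a whenever ax + by <= ab with x, y >= 0
    return sum(1 for x in range(b + 1) for y in range(a + 1)
               if a * x + b * y <= a * b)

def formula(a, b):
    return ((a + 1) * (b + 1) + gcd(a, b) + 1) // 2

for a in range(1, 9):
    for b in range(1, 9):
        assert brute(a, b) == formula(a, b), (a, b)
print("formula verified for 1 <= a, b <= 8")
```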
basic question on varieties (algebraic geometry)
If $V$ were affine, then so too would be its intersection with the (closed!) plane $z=0$, because a closed subset of an affine variety is affine. But that intersection is the punctured affine plane $\mathbb A^2_{x,y,0}\setminus\{(0,0)\}$, which is well known not to be affine. The open subset $V$ is the union of the two open subsets $U_1: xz-y^2\neq0$ and $U_2:x^3-yz\neq 0$ of $\mathbb A^3$. This is equivalent to proving that given $a,b,c\neq 0$ with $ac-b^2=a^3-bc=0$, we can write $a=t^3,b=t^4, c=t^5$ for $t=\frac ba$. The open subsets $U_i$ are affine varieties because the complement of a hypersurface (like $xz-y^2=0$) in $\mathbb A^3$ is affine. Thus we have written $U=U_1\cup U_2$ as the union of two affine open subsets of $\mathbb A^3$.
Are there other simple conditions we can use to demonstrate the non-existence of a universal set?
We can actually use Cantor's Theorem. Cantor's Theorem states that for any set $X$, the cardinality of the power set satisfies $|P(X)| > |X|$. Now, suppose for a contradiction that the set of all sets $S$ exists. But then $P(S) \subseteq S$, because every set in $P(S)$ is also included in $S$ by definition of $S$. But then $|P(S)| \le |S|$, while by Cantor's Theorem $|P(S)| > |S|$, a contradiction.
Show $ M $ is a characteristic subgroup of $ K $
If $G$ is a finite group, then there is no problem in the proof: If $K/M_1$ is nilpotent and $K/M_2$ is nilpotent, then $K/(M_1\cap M_2)$ is also nilpotent. Since $K/M_1$ is nilpotent, there exists $n\geq 2$ such that $\gamma_n(K/M_1)=1$, i.e. $\gamma_n(K)\subseteq M_1$. (Then this subset relation also holds for $n+1,n+2,\cdots$.) Similarly there exists $m\geq 2$ such that $\gamma_m(K)\subseteq M_2$. Taking the maximum of $m,n$, say $n$, we have that $$\gamma_n(K)\subseteq M_1 \mbox{ and }\gamma_n(K)\subseteq M_2.$$ Thus, $\gamma_n(K)\subseteq M_1\cap M_2$, i.e. $\gamma_n(K/(M_1\cap M_2))=1$, i.e. $K/(M_1\cap M_2)$ is nilpotent. As you already mentioned, $M$ is the smallest normal subgroup of $K$ modulo which $K$ becomes nilpotent. So, $\phi(M)$ must be $M$. I didn't see any role of $G$ and $L$. What is the exact problem?
Minimize the probability of winning a game with infinite independent $U(0, 1)$ random variables
The following is my own attempt at solving the problem after mathworker21's suggestions. Let $P(k)$ be the probability of winning the game with threshold $k$ using the best strategy. Notice that if $0 \le k \le \frac 1 2$, by stopping right away we have probability $p = 1 - k \ge \frac 1 2$ of winning and probability $q = k \le \frac 1 2$ of losing, so we should stop right away. In fact, there must be some $\alpha \ge \frac 1 2$ such that the best strategy will stop right away if $0 \le k \le \alpha$, and show $X_1$ otherwise. If $0 \le k \le \alpha$, the probability of winning is $p = 1 - k$. If $\alpha < k \le 1$, we show $X_1 = x$. Then: If $0 \le x \le k$, we haven't lost yet, and we continue the game with threshold $k - x$. Therefore we win with probability $p = P(k - x)$. If $k < x \le 1$, we have lost, so we win with probability $p = 0$. Thus the probability of winning is: $$p = \int_0^k P(k - x) \, dx + \int_k^1 0 \, dx = \int_0^k P(x) \, dx$$ Therefore we can write: $$P(k) = \begin{cases} 1 - k & \text{if } 0 \le k \le \alpha \\ \int_0^k P(x) \, dx & \text{if } \alpha < k \le 1 \end{cases}$$ Since $P$ must be continuous in $\alpha$, we must have $1 - \alpha = \int_0^\alpha P(x) \, dx$. The integral is: $$\int_0^\alpha P(x) \, dx = \int_0^\alpha (1 - x) \, dx = -\frac {\alpha^2} 2 + \alpha$$ Therefore $1 - \alpha = -\frac {\alpha^2} 2 + \alpha$, which implies $\alpha = 2 - \sqrt 2$. Now, if $\alpha < k < 1$, then $P'(k) = P(k)$, so $P(k) = c e^k$ for some $c \in \mathbb R$. Again, since $P$ is continuous in $\alpha$, we have that $1 - \alpha = c e^\alpha$, which implies $c = (1 - \alpha) e^{-\alpha}$. Finally, we can write: $$P(k) = \begin{cases} 1 - k & \text{if } 0 \le k \le \alpha \\ (1 - \alpha) e^{k - \alpha} & \text{if } \alpha < k \le 1 \end{cases}$$ As expected, $P$ is decreasing in $[0, \alpha]$ and increasing in $[\alpha, 1]$. Thus the probability of winning is minimized for $k = \alpha = 2 - \sqrt 2 \approx 0.586$.
Proof that $f:\mathbb{R}^2 \to \mathbb{R}$ is quasi-concave iff $\forall r \in \mathbb{R}$ the set $A=\{x \in \mathbb{R}^2:f(x)\geq r\}$ is convex
Let's recall that a function $f:\mathbb{R}^2 \rightarrow \mathbb{R}$ is quasiconcave if for all $x_1,x_2 \in \mathbb{R}^2$ and $\alpha \in (0,1)$, we have $f(\alpha x_1 + (1-\alpha) x_2) \geq \min \{f(x_1),f(x_2)\}$. Let's suppose that for all $r \in \mathbb{R}$, $A_r := \{x \in \mathbb{R}^2: f(x) \geq r \}$ is convex. Take $x_1,x_2 \in \mathbb{R}^2$ and $\alpha \in (0,1)$. Without loss of generality suppose $f(x_1) \geq f(x_2)$. Then, it is clear that $x_1,x_2 \in A_{f(x_2)}$. This set being convex, we further have that $\alpha x_1 + (1-\alpha) x_2 \in A_{f(x_2)}$, which means $$ f(\alpha x_1 + (1-\alpha) x_2) \geq f(x_2) = \min\{f(x_1),f(x_2)\}. $$ Hence, $f$ is quasiconcave. EDIT: For the converse, suppose $f$ is quasiconcave. Let $r \in \mathbb{R}$ and take $x_1,x_2 \in A_r$ and $\alpha \in (0,1)$. $x_1,x_2$ being in $A_r$ means $f(x_1) \geq r$ and $f(x_2) \geq r$, hence $\min\{f(x_1),f(x_2)\} \geq r$. Furthermore, the quasiconcavity of $f$ implies $$ f(\alpha x_1 + (1-\alpha)x_2) \geq \min\{f(x_1),f(x_2)\} \geq r. $$ That is, $\alpha x_1 + (1-\alpha)x_2 \in A_r$.
How do I know if I'm well-prepared for a real analysis exam? Or more generally, proof-based exams?
Did you get all the problems on the professor's sample exam right? By “right”, I mean, were you able to write perfect proofs, from scratch, without notes or checking against the solutions? If so, stop. You are definitely prepared. If not, you know what topics you are less than prepared on: the ones that you missed the questions on, or had to check notes for. Find more problems about these topics and attempt those problems. If you have a textbook, it probably has problems in it. Ask your professor or TA about them if you have trouble solving the additional problems. When organizing the material, you should definitely memorize the definitions and theorems. But see if you can organize them, not in a list, but in a concept map. That will help you see the connections between the ideas, so you know how to get from one to another in a new situation. For instance, you tagged this with linear algebra, and I was able to find some premade linear algebra concept maps online (example). But the real benefit will be when you create your own.
What is common domain of two equal function
"So instead of writing Domain of f or Domain g why they have used the word common domain?" You have a point here in the sense that the third condition is given as some enlargement of the first condition. But that does not really harm, does it? Let me give you a more handsome definition of the statement $f=g$ where $f$ and $g$ are both functions. $f$ and $g$ have the same domain and for every $x$ that is an element of that domain we have $f(x)=g(x)$. This definition is equivalent with the one in your question. Actually I put the conditions 1) and 3) together and leave condition 2) out. This because condition 2) is automatically satisfied if the conditions 1) and 3) are satisfied, hence is redundant. This definition is practicized in e.g. set theory. Another point is that definitions are usually given as "if" statements, but should be read as "if and only if" statements (so necessity and sufficiency). P.S. In certain areas of mathematics (e.g. categories) more is demanded for functions $f$ and $g$ to be equal. The condition that I gave is then accompanied with the condition that $f$ and $g$ have a common codomain (which is - at least in this context - not the same thing as range). Here by statement "$f$ and $g$ have a common codomain" is meant that the codomain of $f$ is the same as the codomain of $g$. So it is not the (much weaker) statement that the codomains of $f$ and $g$ have a non-empty intersection.
How to prove that $x\cdot y\neq 0$ when $x\neq 0$ and $y\neq0$ via field axioms?
Suppose that $x\cdot y=0$ and $x\neq 0$; then $\frac1x$ exists and $\frac1x\cdot x\cdot y=\frac1x\cdot 0$, which is equivalent to $y=0$.
Isolate the a value from this formula
You want: $5337\cdot(0.1^{a\cdot 0.1}+0.1^{a\cdot 0.2}+0.1^{a\cdot 0.3})=10159$ Set $x:=0.1^{a\cdot 0.1}=(0.1^{0.1})^a$; then your problem becomes: $$x+x^2+x^3=\frac{10159}{5337}$$ A third degree equation may be solved formally (by WolframAlpha, for example), with a real solution $x\approx 0.7891830272818$. The $a$ solutions may be obtained by using $a\cdot 0.1\ln(0.1)=\ln(x)$, and the real solution is indeed $a\approx \frac{\ln(0.7891830272818)}{0.1\ln(0.1)}\approx 1.028222635581$
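A quick numerical reproduction of both constants (a sketch, not part of the original answer), solving the cubic by bisection:

```python
import math

target = 10159 / 5337

# x + x^2 + x^3 is increasing on [0, 1], so bisect for the real root
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if mid + mid**2 + mid**3 < target:
        lo = mid
    else:
        hi = mid
x = (lo + hi) / 2
a = math.log(x) / (0.1 * math.log(0.1))
print(x, a)   # ~0.7891830272818 and ~1.028222635581
```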
Probability calculator
Maybe I understand the problem now. I'll do the Option 2 case, you ought to see how to do the others. The probability of getting any one of the three winning patterns with Option 2 is $$a={12\over168}\,{11\over167}\,{10\over166}$$ So the probability of winning with Option 2 is $3a$ --- almost. Trouble is, you could have more than one of the three patterns, and you don't want to count that twice. So, you have to subtract the probability of having two winning patterns. For any two of the patterns, that's $$b={12\over168}\,{11\over167}\,{10\over166}\,{9\over165}\,{8\over164}$$ and there are three ways to have two winning patterns, so we are down to $3a-3b$. But we've now undercounted the situations where we have all three winning patterns. This has probability $$c={12\over168}\,{11\over167}\,{10\over166}\,{9\over165}\,{8\over164}\,{7\over163}\,{6\over162}$$ so the final answer is $3a-3b+c$. The same idea will work with any of the options (except that since option 1 appears only 6 times we can't get all three winning patterns of option 1). The technical name for what I've done here is "the principle of inclusion-exclusion".
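Mirroring the three products above in exact arithmetic (a sketch, not part of the original answer):

```python
from fractions import Fraction

def run(k):
    """Product (12/168)(11/167)... with k factors: k successive draws
    landing among the 12 relevant cards out of 168, without replacement."""
    p = Fraction(1)
    for i in range(k):
        p *= Fraction(12 - i, 168 - i)
    return p

a, b, c = run(3), run(5), run(7)   # the a, b, c from the answer
win = 3 * a - 3 * b + c            # inclusion-exclusion
print(win, float(win))             # ~0.000848
```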
Complex Representations of A4
Hint: As OP suggests in the comments, one can start by identifying $$A_4 / [A_4 , A_4] \cong A_4 / (\Bbb Z_2 \times \Bbb Z_2) \cong \Bbb Z_3 ,$$ which gives immediately that there are $3$ representations of dimension $1$, and so by the sum-of-squares formula $1$ representation of dimension $3$. Alternatively, one can get away with using only the count of the conjugacy classes (which you've already found to be $4$) and the sum-of-squares formula, $$12 = |A_4| = \sum_{i = 1}^4 n_i^2 ,$$ where $n_i$ is the dimension of the $i$th irreducible representation. Checking manually shows that the only way to write $12$ as a sum of four positive squares is $1^2 + 1^2 + 1^2 + 3^2$. Of course, the existence of the trivial representation means that we need only look for ways to write $11$ as a sum of three squares.
Solving in terms of z , three variable two equation system
From the first equation we get $$z=\frac{x+2y}{4},$$ substituting that into equation two we get $$3\Big(\frac{x+2y}{4}\Big)^2=\frac{1}{2}x^2+y^2,$$ or equivalently $$\frac{3x^2+12yx+12y^2}{16}=\frac{1}{2}x^2+y^2\Rightarrow 3x^2+12yx+12y^2=8x^2+16y^2,$$ which simplifies to $$5x^2-12xy+4y^2=0.$$ This is a quadratic in $x$ and it factorises (by inspection or, if you like, using the quadratic formula) as $$(5x-2y)(x-2y)=0\Rightarrow x=\frac{2y}{5},\quad x=2y.$$ Now, taking the first branch, observe from equation one that $x=4z-2y$, so $$\frac{2y}{5}=4z-2y\Rightarrow\frac{12y}{5}=4z\Rightarrow y=\frac{5z}{3}.$$ Then $$x=4z-2\Big(\frac{5z}{3}\Big)=4z-\frac{10z}{3}=\frac{2z}{3}.$$ (The other branch, $x=2y$, similarly gives $y=z$ and $x=2z$.)
Computability problems -- can't solve
$g(x)$ outputs the program which takes $n$ as input, tests whether $x \in A$ and if so outputs $n$. $\operatorname{cod}(f)$ is finite hence recursive. There is some $n$ for which $\phi_n(n)$ does not terminate, and hence for all subsequent $N > n$, $f(N)$ does not terminate. Such an $f$ would allow you to solve the halting problem. To test whether program $n$ terminates on $i$, run $\phi_n(i)$ in parallel with $\phi_{f(n)}(i)$; whichever one terminates tells you whether $i \in W_n$.
Divergence Theorem specific question
As you write correctly, we have $$ \int_{\partial\omega} \langle F,n\rangle\,dS = \int_{\omega}\def\div{\mathop{\rm div}} \div F\, d(x,y,z) $$ Your domain of integration is $\omega$, that is the part of the unit ball, where $x-2y+z \ge 0$. Note that $x-2y+z = 0$ describes a plane through the origin, that is, it splits the unit ball in two halves. Therefore $\omega$ is half a unit ball, hence its volume is $\lambda(\omega) = \frac 12 \cdot \frac 43\pi =\frac 23\pi$. Hence $$ \int_{\partial\omega} \langle F,n\rangle\,dS = \int_{\omega}\def\div{\mathop{\rm div}} \div F\, d(x,y,z) = \frac 23 \pi$$
Divergence theorem(calculating flux of vector field)
Yes, you have correctly applied the divergence theorem. Now it is about finding the volume of the region. The base of the pyramid is the square in the $XY$ plane with vertices $(\pm 4,0,0)$ and $(0,\pm 4,0)$. So there are two square pyramids, one above the $XY$ plane and one below, with apexes at $(0,0,4)$ and $(0,0,-4)$. The side of the base is $b = 4\sqrt2$ and the height is $h = 4$. You can simply use the formula for the pyramid volume $V = \frac{1}{3}b^2h$. Multiply by $2$ as you have two pyramids. If you are going the integral route, take the part of the base above the $x$-axis and the part of the pyramid with positive $z$. Find its volume and then multiply by $4$. $V = 4 \displaystyle \int_0^4 \int_{0}^{4-z} \int_{y+z-4}^{4-y-z} \ dx \ dy \ dz$
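Both routes can be confirmed symbolically (a sketch, not part of the original answer):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
# Quarter of the upper pyramid, multiplied by 4 (innermost integral first)
V_integral = 4 * sp.integrate(1, (x, y + z - 4, 4 - y - z),
                              (y, 0, 4 - z), (z, 0, 4))
# Two square pyramids: V = 2 * (1/3) * b^2 * h with b = 4*sqrt(2), h = 4
V_formula = 2 * sp.Rational(1, 3) * (4 * sp.sqrt(2))**2 * 4
print(V_integral, V_formula)   # both 256/3
```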
Proof of $5|p^2 \implies 5|p$ for all $p \in \mathbb{N}$
(Expanding on my comment) Hint: Consider the unique prime factor decomposition ($p_i$ are prime numbers, $m_i\in \mathbb{N}_{>0}$) of $p$ $$p=p_1^{m_1}\cdot\dots\cdot p_n^{m_n}$$ then $p^2$'s prime factor decomposition is (why?) $$p^2=p_1^{2m_1}\cdot\dots\cdot p_n^{2m_n}$$ but if $5\mid p^2$ then there exists $p_i=5$ (why?). Conclude.
In how many ways can you subtract edges from the graph of a cube so that there are no isolated vertices?
Take two opposite vertices. Each has 1,2 or 3 neighbours. For each combination, there is a hexagon made by the other six vertices, some of which are already connected. Count how many ways you can make sure the rest are connected.
The sum of series $1\cdot 3\cdot 2^2+2\cdot 4\cdot 3^2+3\cdot 5\cdot 4^2+\cdots \cdots n$ terms
Hint: Observe that \begin{align*} a_k&amp;=k(k+2)(k+1)^2\\ &amp;=(k+1-1)(k+1+1)(k+1)^2\\ &amp;=\left[(k+1)^2-1\right](k+1)^2\\ &amp;=(k+1)^4-(k+1)^2 \end{align*} Now, here is something that can be useful in order to compute the sum of the fourth powers and squares Geometric interpretation for sum of fourth powers
Logarithmic differentiation for this function
For the question as currently written: "What is the derivative of $f(x)=\log_x(c)$ at $x=c$?", the simplest thing is to use the change-of-base formula to move $x$ from the base to being the argument of a function. Since (using $\ln$ for the natural logarithm) $\log_x c = \frac{\ln c}{\ln x}$, differentiating $f(x) = \log_x(c)$ with respect to $x$ gives $$\begin{align*} \frac{d}{dx}\log_x(c) &= \frac{d}{dx}\frac{\ln c}{\ln x}\\ &= \ln(c)\frac{d}{dx}\left(\frac{1}{\ln x}\right)\\ &=\ln(c)\left(\frac{-(\ln x)'}{(\ln x)^2}\right)\\ &= -\frac{\ln(c)}{\ln(x)}\left(\frac{1}{x\ln x}\right)\\ &= -\log_x(c)\left(\frac{1}{x\ln x}\right). \end{align*}$$ Evaluating at $x=c$, we have: $$f'(c) = -\log_c(c)\left(\frac{1}{c\ln c}\right) = -\frac{1}{c\ln(c)}.$$ On the other hand, if, as joriki surmises, the question was "What is the derivative $f'(c)$ of $f(x)=\log(x)$ [natural logarithm] at $c=e$?" then since $f'(x) = \frac{1}{x}$, simply plugging in $c=e$ yields $\frac{1}{e}$.
If $A$ is a $2\times1$ matrix and $B$ is a $1\times {2}$, prove $C = AB$ is not invertible
We have$$C=AB=\left(\begin{array}{c} A_{1}\\ A_{2} \end{array}\right)\left(\begin{array}{cc} B_{1} &amp; B_{2}\end{array}\right)=\left(\begin{array}{cc} A_{1}B_{1} &amp; A_{1}B_{2}\\ A_{2}B_{1} &amp; A_{2}B_{2} \end{array}\right)$$so $\det C=A_{1}B_{1}A_{2}B_{2}-A_{1}B_{2}A_{2}B_{1}=0$. Hence C is not invertible.
Show that a sequence of measurable functions {$f_n$} converging pointwise to f. Show that its uniform
Consider $[-n,n]\cap E$, which is of finite measure. You can use Egoroff to get a set $E_n\subseteq [-n,n]\cap E$ such that $f_n$ converges uniformly on $E_n$, and $m(([-n,n]\cap E)\setminus E_n)<1/n$. Then you have to show that $m(E\setminus \bigcup E_n)=0$. To do this, note that $F_n:=(E\setminus\bigcup E_n)\cap [-n,n]\subseteq (E\cap[-n,n])\setminus E_n$. Hence, $m(F_n)<1/n$. But $F_n\uparrow (E\setminus \bigcup E_n)$. By continuity of measure $$m(E\setminus \bigcup E_n)=\lim_n m(F_n)=0.$$
Describe the equivalence classes for each equivalence relation
Your hunches are correct. For the first one, it's horizontal lines in the plane. For the second, it's circles centered at the origin.
If $n = 2^k - 1$ for $k \in \mathbb{N}$, then every entry in row $n$ of pascal's triangle is odd.
Hint: $\binom{2^n-1}{k-1}+\binom{2^n-1}{k} = \binom{2^n}{k}$. Show that $\binom{2^n}{k}$ is always even for $k\neq 0,2^n$.
$\pi$-system generating cylindrical $\sigma$-algebra
$\mathfrak{U}(C)$ is by definition the minimal $\sigma$-Algebra such that the functions $p_t: x\mapsto x_t$ are $\mathfrak{U}(C)$-$\mathscr{B}(\mathbb{R})$-measurable for $t \in [0, \infty)$. Now, it can easily be shown that the system of sets \begin{align*} \mathscr{A}:=\left\{ A\in \mathscr{B}(\mathbb{R}): p_t^{-1}(A) \in \sigma(\left\{ p_t^{-1}(B):B\in \prod \right\}) \right\} \end{align*} is a $\sigma$-Algebra which contains $\prod$. Therefore $\sigma(\prod) = \mathscr{B}(\mathbb{R}) \subset \mathscr{A}$, i.e $\mathscr{A} = \mathscr{B}(\mathbb{R})$. From this it follows that any $p_t$ is $\sigma(\left\{ p_t^{-1}(B):B\in \prod \right\})$-$\mathscr{B}(\mathbb{R})$-measurable, i.e. $\mathfrak{U}(C)\subset \sigma(\left\{ p_t^{-1}(B):B\in \prod \right\})$ since $\mathfrak{U}(C)$ is the minimal $\sigma$-Algebra which satisfies this. BS
Calculating surface area of part of a sphere
Find the area of the spherical caps on either side, and subtract it from the total surface area $4\pi r^2$. For the area of the spherical caps, you can use $A = \Omega r^2$, where $\Omega$ is the solid angle (in steradians) of a cone whose cross-section subtends the angle $\theta$ at the center, given by $\Omega = 2\pi (1 - \cos\theta)$
Subtraction with a negative result
Being the curious kid that he is he asked why the two results weren't the same, and I couldn't give him an answer. That's because the two results are the same, and he is implicitly using a slightly different and context-dependent notation to express his answer. The arithmetic is correct, but $-4$ is not a decimal digit in the usual scheme of things. A correct answer of $(-4).3$ was found, with an intended meaning of $-4 +0.3$. That notation is non-standard, and writing it as $-4.3$ gives the wrong answer when read as a standard decimal. Although it's clear what an expression like $(-4).3$ should mean here, to represent that result in the standard system with digits 0-9, the minus sign can only apply to all digits in the number at once. The conversion to standard notation is $("-4").3 = -(3.7) = -3.7 $
Reflection in terms of simple reflections
I suspect your best hope is $$s_{\beta} = w s_{\alpha} w^{-1},$$ where $\beta = w(\alpha)$ for some simple root $\alpha$. It is always possible to write $\beta = w(\alpha)$ when $\beta$ is a root. There are many possibilities for $w$ and $\alpha$. Define the depth of a positive root $\beta$ to be the smallest $k$ such that $w(\beta)$ is negative and $\ell(w) = k$. (See chapter 4 of ``Combinatorics of Coxeter Groups'' for further details.) It's clear that simple roots have depth 1. It's not hard to show that the depth of $s_{\alpha_i}(\beta)$ is smaller than the depth of $\beta$ if $\langle \beta, \alpha_i \rangle &gt; 0$. This provides a brute force procedure for finding $w$ (and $\alpha$) and expressing it as a product of simple reflections. This is just an elaboration of Matt Pressland's comment. A good general formula appears to be too much to ask for. However, in type A (with the simple roots ordered in the usual way), there is a nice answer: $$\beta = \alpha_i + \cdots + \alpha_j \implies s_\beta = s_{\alpha_i} s_{\alpha_{i+1}} \cdots s_{\alpha_{j-1}}s_{\alpha_j}s_{\alpha_{j-1}}\cdots s_{\alpha_i}$$
Find the sufficient and necessary condition to make a matrix rational
If $A^5=2E_n$ for some rational $A$, then $\det(A)^5=\det(2E_n)=2^n$, so $\det(A)=2^{n/5}$ is rational. Hence $n$ must be divisible by $5$. Conversely, if $n=5m$, then since $C^5=2E_5$ when $C$ is the $5\times5$ companion matrix of the polynomial $x^5-2$, we have $(\underbrace{C\oplus\cdots\oplus C}_{m\text{ copies}})^5=2E_n$.
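A quick computational sanity check of this construction (a sketch with NumPy; the companion matrix below represents multiplication by $x$ on the basis $\{1,x,x^2,x^3,x^4\}$ of $\mathbb{Q}[x]/(x^5-2)$):

    import numpy as np

    C = np.zeros((5, 5), dtype=int)
    for k in range(4):
        C[k + 1, k] = 1        # x * x^k = x^(k+1)
    C[0, 4] = 2                # x * x^4 = x^5 = 2

    assert (np.linalg.matrix_power(C, 5) == 2 * np.eye(5, dtype=int)).all()

    # block-diagonal copies handle any n = 5m:
    m = 3
    A = np.kron(np.eye(m, dtype=int), C)
    assert (np.linalg.matrix_power(A, 5) == 2 * np.eye(5 * m, dtype=int)).all()
    print("C^5 = 2E_5 and (C ⊕ C ⊕ C)^5 = 2E_15 verified")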
Train wait problem, probability
The probability that you have to wait more than $x$ minutes for the $15$-minute train is the probability that you arrive in the first $15-x$ minutes of its cycle, which assuming a uniform distribution is $$\frac{15-x}{15}=1-\frac{x}{15}\quad\hbox{for $0\le x\le15$}\ .$$ The probability that you have to wait more than $x$ minutes for the other train is similarly $$1-\frac{x}{40}\quad\hbox{for $0\le x\le40$}\ .$$ The probability that you have to wait more than $x$ minutes for the first train to arrive is $$\Bigl(1-\frac{x}{15}\Bigr)\Bigl(1-\frac{x}{40}\Bigr)\quad\hbox{for $0\le x\le15$}\ ,$$ assuming independence since the trains are not synchronised. The cumulative distribution function for your waiting time is $$P(X\le x)=1-\Bigl(1-\frac{x}{15}\Bigr)\Bigl(1-\frac{x}{40}\Bigr) =\frac{11x}{120}-\frac{x^2}{600}\ .$$ The density function is $$f(x)=\frac{d}{dx}P(X\le x)=\frac{11}{120}-\frac{x}{300}\quad\hbox{for $0\le x\le15$}$$ and the expected waiting time is $$E(X)=\int_0^{15} xf(x)\,dx=\cdots=6.5625\ .$$
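A Monte Carlo check of the $6.5625$ figure (a sketch; it simulates independent, uniformly random phases for the two unsynchronised trains):

    import random

    N = 10**6
    total = 0.0
    for _ in range(N):
        wait_15 = random.uniform(0, 15)   # wait for the 15-minute train
        wait_40 = random.uniform(0, 40)   # wait for the 40-minute train
        total += min(wait_15, wait_40)    # you board whichever comes first
    print(total / N)                      # should be close to 6.5625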
How to tell a function is convex but not strictly convex?
Hint: $\;f(x,y)=4x^2 + 9y^2 + 12xy = (2x+3y)^2\,$ is constant along the lines $\,2x+3y=c\,$.
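A small numerical illustration (a sketch with NumPy): the Hessian of $f$ is positive semidefinite but singular, which is exactly the convex-but-not-strictly-convex situation.

    import numpy as np

    # Hessian of f(x, y) = 4x^2 + 9y^2 + 12xy is constant:
    H = np.array([[8.0, 12.0],
                  [12.0, 18.0]])
    print(np.linalg.eigvalsh(H))  # [0., 26.]: PSD, hence convex, but the zero
                                  # eigenvalue (direction of the lines 2x+3y=c)
                                  # rules out strict convexity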
Let $f:A→B$ be an onto function and let $T⊆B$. Prove that $(f◦f^{-1})(T) =T$
An important point here is that $f^{-1}(z)$ must be treated as a set, whereas $f(w)$ is a single element. Note that $(f \circ f^{-1})(T) = f(f^{-1}(T)). \tag 0$ Let $z \in (f \circ f^{-1})(T) = f(f^{-1}(T)); \tag 1$ then in accord with (0) there is some $w \in f^{-1}(T) \subset A \tag 2$ with $z = f(w); \tag 3$ but (2) implies $f(w) \in T; \tag 4$ thus, $z \in T, \tag 5$ and $f(f^{-1}(T)) \subset T; \tag{5.5}$ to go the other way, suppose $z \in T; \tag 6$ since $f$ is onto, $f^{-1}(z) \ne \emptyset$, so we may choose $w \in f^{-1}(z) \subset f^{-1}(T), \tag 7$ which yields $z = f(w) \in f(f^{-1}(T)); \tag 8$ this shows $T \subset f(f^{-1}(T)); \tag 9$ (5.5) and (9) together imply $T = f(f^{-1}(T)), \tag{10}$ the requisite result.
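A tiny finite sanity check in Python (a sketch; the sets and the onto map below are made up purely for illustration):

    A = {1, 2, 3, 4}
    B = {'a', 'b'}
    f = {1: 'a', 2: 'a', 3: 'b', 4: 'b'}     # onto: every element of B is hit

    def preimage(T):
        return {w for w in A if f[w] in T}   # f^{-1}(T) is a set

    def image(S):
        return {f[w] for w in S}

    for T in ({'a'}, {'b'}, {'a', 'b'}):
        assert image(preimage(T)) == T       # f(f^{-1}(T)) = T since f is onto
    print("checked")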
Where is the mistake? $-1=(-1)^{2/2}=\left((-1)^2\right)^{1/2}=1^{1/2}=\sqrt 1=1$
Every $=$ except the second is clearly correct, so the second one must be the wrong step. Its failure shows that the rule $\color{blue}{x^{a/b}=(x^a)^{1/b}}$ admits counterexamples when $x<0$. Indeed, our only reason to expect the blue rule to be true is the manipulation$$(x^{a/b})^b=x^a\implies\color{blue}{x^{a/b}=(x^a)^{1/b}},$$and taking $b$-th roots in the last step is only justified for $x>0$.
Measure of $\lbrace (x_1,...,x_n) \in \mathbb{R}^n : \sum_{k=1}^{n} x_k \leq 1 \rbrace$
Let $f_n(x)$ be (for $x\geqslant 0$ and $n>0$) the measure of $$\left\{(x_1,\ldots,x_n)\in\mathbb{R}_+^n : \sum_{k=1}^n x_k\leqslant x\right\}.$$ Then $f_1(x)=x$ and $f_n(x)=\int_0^x f_{n-1}(x-x_n)\,dx_n$ for $n>1$. By induction, $f_n(x)=x^n/n!$, and the answer is $f_n(1)=1/n!$.
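A quick Monte Carlo check of the $1/n!$ answer (a sketch):

    import random
    from math import factorial

    n, N = 4, 10**6
    hits = sum(1 for _ in range(N)
               if sum(random.random() for _ in range(n)) <= 1)
    print(hits / N, 1 / factorial(n))   # both should be near 1/24 ≈ 0.0417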
Is there a real function such that the difference of its values at distinct points is bounded from below?
HINT: No. For $k\in\Bbb Z$ let $A_k=\{x\in\Bbb R:kM\le f(x)<(k+1)M\}$. Now apply the pigeonhole principle.
Linear map $f:V\rightarrow V$ injective $\Longleftrightarrow$ surjective
The proof is correct; personally I like to view it as a corollary of the rank-nullity theorem: $\dim V=\dim\operatorname{Im}f+\dim\operatorname{Ker}f$ (try to prove it; it is a generalization of your (iii), and for the finite-dimensional case it is immediate from (ii) and (iii)), which in fact holds for vector spaces of arbitrary dimension.
Many kinds of Infinitely many
[EDIT --- We can prove that for every $n$ there are infinitely many primes $q$ such that $37^n$ divides $q^2+q+1$. The $37$ and the $q^2+q+1$ can be replaced by any prime and any integer polynomial, with some exceptions as highlighted by the example Hurkyl gives in the comments.] Suppose $q^2+q+1$ is divisible by $37^n$, where $n\ge1$. Let $s=q+37^nt$, where $t$ is to be determined. Then $$s^2+s+1=(q^2+q+1)+37^n(2q+1)t+37^{2n}t^2.$$ We want this to be divisible by $37^{n+1}$; since $2n\ge n+1$, that amounts to wanting $at+b$ to be divisible by $37$, where $a=2q+1$ and $b=(q^2+q+1)/37^n$. It's easy to check that $a$ is not divisible by $37$ (note $4(q^2+q+1)=(2q+1)^2+3$, so $37\mid 2q+1$ would force $37\mid 3$), so it is guaranteed that there is a value of $t$ such that $at+b$ is a multiple of $37$, and that makes $s^2+s+1$ a multiple of $37^{n+1}$. This proves, by induction, that for any $n$ you can find $q$ such that $37^n$ divides $q^2+q+1$. [Aside --- what I've done here is essentially a proof of a very special case of Hensel's Lemma, referred to by @jspecter. If you want the general proof for arbitrary primes and polynomials, and you want to know when there are counterexamples along the lines of the comment by Hurkyl, don't play coy with jspecter, but instead look up Hensel's Lemma, try to understand it, and come back with specific questions if you get stuck.] Now, this gives one value of $q$ such that $37^n$ divides $q^2+q+1$, but you want infinitely many. Here are two ways to cook that goose. For any given $q$, there are only finitely many values of $n$ such that $37^n$ divides $q^2+q+1$. So, if you take $n$ too big for $37^n$ to divide $q^2+q+1$, there must be a new value of $q$ that works for that $n$. Repeating gets you infinitely many $q$. Alternatively: if $37^n$ divides $q^2+q+1$, then it also divides $r^2+r+1$ where $r=q+37^nt$ and $t$ is any positive integer whatsoever. So, again, infinitely many. But unless I miss my guess, you want even more --- you want infinitely many prime values of $q$ such that $37^n$ divides $q^2+q+1$. Dirichlet's Theorem on Primes in Arithmetic Progression to the rescue: since $q$ is not a multiple of $37$, Dirichlet says that for each $n$ there are infinitely many values of $t$ such that $q+37^nt$ is prime. And we are done.
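The lifting step is easy to run in Python; here is a sketch of exactly the procedure above, starting from a brute-force root mod $37$:

    p = 37
    q = next(q for q in range(p) if (q*q + q + 1) % p == 0)   # base case n = 1

    for n in range(1, 6):
        assert (q*q + q + 1) % p**n == 0
        a = (2*q + 1) % p                 # never divisible by 37
        b = ((q*q + q + 1) // p**n) % p
        t = (-b * pow(a, -1, p)) % p      # solve a*t + b ≡ 0 (mod 37)
        q = q + p**n * t                  # now 37^(n+1) divides q^2 + q + 1
    print(q, (q*q + q + 1) % p**6 == 0)   # prints the lifted q and True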
Solving 2 simultaneous equation
For the second equation you can write $49^x$ as $7^{2x}$, and then $\frac{7^{2x}}{7^y}=1$. Now rewrite this equality as $7^{2x-y}=1$. What should $2x-y$ be equal to so that the equation is solved? You can do the same for the first equation: write everything as a power of $2$; then you get two equations for $x$ and $y$. I assume you can solve it yourself from here.
prove that $\lim_{x \to \infty} a_n+\frac{a_{n-1}}{x}+\cdots + \frac{a_0}{x^n}=a_n$
There is nothing wrong with your approach. However, if you let $A = \max_{0\le k\le n} |a_k|$ and take $x\geq \max\left(1, \frac{(n+1)A}{\epsilon}\right)$, you should be good: for such $x$, each of the $n$ tail terms satisfies $\left|\frac{a_{n-k}}{x^k}\right|\le \frac{A}{x}$, so their sum is at most $\frac{nA}{x}<\epsilon$.
Prove: [0,1] is supercompact
Cleaner and fewer cases: take the subbase $\mathcal{S}=\{(a,1]: a \in [0,1]\} \cup \{[0,b): b \in [0,1]\}$, which is the standard subbase for an ordered space having a maximum $1$ and a minimum $0$. (These sets are the intersections of $(a,+\infty)$ with $[0,1]$ for $a\in [0,1]$; if $a$ is outside that range the intersection is either $\emptyset$ or $[0,1]$, hence useless. Likewise for $(-\infty,b) \cap [0,1]$.) Let $\{(a_i, 1]: i \in I\} \cup \{[0,b_j): j \in J\}$ be an arbitrary cover by subbasic elements (so all $a_i, b_j \in [0,1]$). $J \neq \emptyset$ as we need to cover $0$, and $I \neq \emptyset$ as we need to cover $1$. Let $b=\sup\{b_j: j \in J\}$, which is well-defined. Note that $b$ cannot be covered by a set of the form $[0,b_j)$, $j \in J$, or else we would have $b_j > b$, contradicting that $b$ is an upper bound for $\{b_j : j \in J\}$. So $b$ must be covered by some set $(a_{i_0}, 1]$ with $i_0 \in I$. As $a_{i_0} < b$, $a_{i_0}$ cannot be an upper bound for $\{b_j: j \in J\}$ (because the sup is the smallest upper bound), so there is some $j_0 \in J$ with $a_{i_0}< b_{j_0}$. It follows that $[0,1] = [0,b_{j_0}) \cup (a_{i_0}, 1]$ and we have a two-element subcover of our subbasic cover.
Stochastic processes with non-zero higher order variations
Here are my two cents. Specifically, you can look at fractional Brownian motion with Hurst index different from $1/2$. There is a stochastic integral and a stochastic calculus for those processes, but I believe this is less general than what you are looking for. I know otherwise that Young integrals allow integrators of finite $q$-variation in the deterministic case, and that rough path analysis can, if I am not mistaken, be used to define stochastic integrals that match what you are looking for. Anyway, intrinsically, as the Bichteler-Dellacherie theorem demonstrates, semimartingales (in a nutshell, quadratic-variation processes + finite-variation processes) are the best processes for defining a stochastic integral, in the sense of the existence of a "dominated convergence theorem" inside, if I may say. Best regards.
Showing that $f(x) = \frac{1}{2} x^3 + 3x^2 -\frac{5}{3} x - 5$ irreducible over $\mathbb{Q}(\sqrt[4]{5})$
Hint: The Rational Root Test shows that $f$ has no rational root, so (being a cubic) $f$ is irreducible over $\Bbb Q$, and its Galois group has an element of order $3$. On the other hand, the Galois group of the minimal polynomial $x^4 - 5$ of $\sqrt[4]{5}$ over $\Bbb Q$ is $D_8$, which has order $8$, and $3 \nmid 8$.
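If you want to double-check this computationally, SymPy can (I believe) factor over an algebraic extension via its `extension` keyword; a sketch:

    from sympy import symbols, Rational, factor, root

    x = symbols('x')
    f = Rational(1, 2)*x**3 + 3*x**2 - Rational(5, 3)*x - 5

    print(factor(f))                        # no rational root: irreducible over Q
    print(factor(f, extension=root(5, 4)))  # expected: still irreducible
                                            # over Q(5^(1/4))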
Mathematical formulation of expanding a matrix
One way to write $N$ in terms of $M$ would be $N[i,j] = M[\lfloor\frac{i}{2}\rfloor,\lfloor\frac{j}{2}\rfloor]$ (with $0$-based indices). But I really don't understand what you mean by separating it into odd and even indexes.
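In NumPy this expansion is a one-liner (a sketch; `np.kron` with an all-ones block duplicates each entry into a $2\times 2$ block, which matches the index formula above):

    import numpy as np

    M = np.array([[1, 2],
                  [3, 4]])
    N = np.kron(M, np.ones((2, 2), dtype=M.dtype))
    print(N)
    # [[1 1 2 2]
    #  [1 1 2 2]
    #  [3 3 4 4]
    #  [3 3 4 4]]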
Finding an equation of a plane through the origin that is parallel to a given plane and parallel to a line.
We know that the required plane's normal is perpendicular to both the given plane's normal $(2,-1,-1)$ and the line's direction vector $(3,-3,-1)$. We can indeed perform a cross product to get the required plane's normal, since its result is perpendicular to both of its inputs: $$(2,-1,-1)\times(3,-3,-1)=\dots$$
Distinct number of prime divisors
Let $k$ be a number with $n$ distinct prime divisors $p_1<p_2<\cdots<p_n$. Then $$ k\ge p_1\cdots p_n\ge \underbrace{2\cdots 2}_{n}=2^n, $$ since every prime is at least $2$.
Embeddings of $\text{SL}_2(\Bbb C)$ into $\text{SL}_3(\Bbb C)$
Without more context it's difficult to be certain what meaning was intended, but I conjecture that $(1,-1,0)$ and $(0,1,-1)$ are to be interpreted as root vectors in the root system of $SL_3(\mathbb C)$. Let $\mathfrak h$ be the usual Cartan subalgebra of $\mathfrak{sl}_3(\mathbb C)$, namely containing trace zero diagonal matrices. Let $r=(1,-1,0)$. Then $e=E_{1,2}$ is in the $r$-root space, in the sense that $$ [h,e]=\mathrm{tr}(h\,\mathrm{diag}(r))e $$ for all $h\in\mathfrak h$. Similarly $f=E_{2,1}$ is in the $(-r)$-root space. The Lie subalgebra of $\mathfrak{sl}_3(\mathbb C)$ generated by $e$ and $f$ consists of trace zero matrices of the form $$ \begin{pmatrix}a&b&0\\ c&d&0\\ 0&0&0\end{pmatrix}. $$ Applying the matrix exponential, the corresponding Lie subgroup of $SL_3(\mathbb C)$ is the first one you describe. Similarly for the other root.
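One can watch the embedded copy of $SL_2(\mathbb C)$ appear numerically (a sketch with SciPy; exponentials of elements built from $e=E_{1,2}$ and $f=E_{2,1}$ stay in the block form above):

    import numpy as np
    from scipy.linalg import expm

    e = np.zeros((3, 3)); e[0, 1] = 1.0   # root space for (1,-1,0)
    f = np.zeros((3, 3)); f[1, 0] = 1.0   # root space for -(1,-1,0)

    g = expm(0.3 * e) @ expm(-1.2 * f) @ expm(0.7 * e)
    print(np.round(g, 3))        # nonzero only in the upper-left 2x2 block,
                                 # plus the 1 in the bottom-right corner
    print(np.linalg.det(g))      # ≈ 1, so g lies in SL_3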
Can there be an $S \subseteq \mathbb{R}$ closed under multiplication and addition with $|\mathbb{Q}| < |S| < |\mathbb{R}|$?
Yes, consistently: assume the Continuum Hypothesis fails, and take $S_0\subseteq\mathbb R$ with $|\mathbb Q|<|S_0|<|\mathbb R|$. Let $S$ be the subfield of $\mathbb R$ generated by $S_0$, as given by the (downward) Löwenheim-Skolem theorem. It has the same cardinality as $S_0$, say $\aleph_1$, and it is a field, so it is closed under addition and multiplication.
Prove that discontinuity set is countable union of closed sets
Note that $\text{osc}_x(f)\ge 0$ by definition, and $f$ is continuous at $x$ if and only if $\text{osc}_x(f) = 0$. The first fact implies that $$ [0,1] = \bigcup_{n\in \mathbb N} D_{1/n} \cup \{ x : \text{osc}_x (f) = 0\}$$ while the second tells you that the set of discontinuities of $f$ is $$\bigcup_{n\in \mathbb N} D_{1/n}. $$ By taking complement, you have $$\{ x : f\text{ is continuous at }x\} = \bigcap_{n\in \mathbb N}\left( [0,1]\setminus D_{1/n}\right).$$
$\cos x + \cos 2x = 1$. Maclaurin series.
You have an extra minus sign in the second term of the $\cos 2x$ series, but you fix it in the next line. When I put $x=0.69$ into the original equation I get about $0.96$, very close to $1$. When I plug in $1.75$ I get $-1.11$, but the Maclaurin series is not very accurate this far out.
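For what it's worth, a quick numerical comparison in Python (a sketch; I am assuming the series was truncated after the $x^4$ terms, which is what produces roots near $0.69$ and $1.75$):

    from math import cos

    def exact(x):
        return cos(x) + cos(2*x)

    def series4(x):
        # (1 - x^2/2 + x^4/24) + (1 - 2x^2 + 2x^4/3)
        return 2 - 2.5*x**2 + (17/24)*x**4

    for x in (0.69, 1.75):
        print(x, exact(x), series4(x))
    # at x = 0.69 both are close to 1 (about 0.96 vs 0.97);
    # at x = 1.75 the truncated series still gives about 0.99
    # while the true value is about -1.11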
Piece wise functions and differentiability
You have to be very careful. Consider $$ f(x)=\begin{cases} x^2\sin\left(\frac1x\right) & x>0\\0& x\leq 0\end{cases} $$ This function is differentiable at $0$ since $$ \lim_{h\to 0^+}\left|\frac{f(h)-f(0)}h\right|=\lim_{h\to 0^+}h\left|\sin\left(\frac1h\right)\right|=0\text{ and }\lim_{h\to 0^-}\frac{f(h)-f(0)}h=0. $$ But $$ f'(x)=\begin{cases} 2x\sin\left(\frac1x\right)-\cos\left(\frac1x\right) & x>0\\0&x\leq 0\end{cases} $$ is not continuous!
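You can see this failure numerically (a sketch; sampling $f'$ along two different sequences tending to $0^+$ gives two different limits):

    import numpy as np

    def fprime(x):
        # derivative of x^2 sin(1/x) for x > 0
        return 2*x*np.sin(1/x) - np.cos(1/x)

    n = np.arange(1, 6)
    print(fprime(1/(2*np.pi*n)))       # -> -1 along x = 1/(2*pi*n)
    print(fprime(1/(np.pi*(2*n+1))))   # -> +1 along x = 1/((2n+1)*pi)
    # so f' has no limit as x -> 0+, even though f'(0) = 0 exists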
Expected value uniform decreasing function
I will make a guess that rand(n) returns uniform reals on the interval $[0,n)$; note that rand(n) then has the same distribution as n*rand(1). Thus for given $n$ and $k$ the random variable produced by the algorithm is $$ X = n \prod_{m=1}^k U_m $$ where $U_m$ are independent identically distributed continuous uniform random variables on the unit interval. Thus $$ \mathbb{E}\left(X\right) = n \prod_{m=1}^k \underbrace{\mathbb{E}(U_m)}_{=\frac{1}{2}} = n 2^{-k} $$ Responding to OP's request, consider now the case when rand(n) returns uniform random integers on $[0,n]$. Let $Y_m \sim \mathcal{DU}\left([0,Y_{m-1}]\right)$ be such a uniform random integer drawn on the $m$-th iteration, and assume $Y_0 = n$. Then we seek to find $$ \mathbb{E}\left(Y_k\right) = \mathbb{E}\left( \mathbb{E}\left(Y_k \mid Y_{k-1}\right)\right) = \mathbb{E}\left(\frac{1}{2}Y_{k-1}\right) = \ldots =\frac{1}{2^k} Y_0 = \frac{n}{2^k} $$ Thus the expectation is the same as in the case of continuous uniforms. Now assume instead that rand(n) generates uniform integers on $[0,n)$ rather than $[0,n]$. In that case $Y_k \mid Y_{k-1} \sim \mathcal{DU}\left( \left[0, \max(Y_{k-1}-1,0)\right]\right)$. Obtaining a closed form is not feasible for arbitrary $k$, but with the help of Mathematica I was able to get expectations for low values of $k$: $$ \mathbb{E}\left(Y_1\right) = \begin{cases} \frac{n-1}{2} & n > 1 \\[5pt] 0 & \text{otherwise} \end{cases} $$ $$ \mathbb{E}\left(Y_2\right) = \begin{cases} \frac{(n-1)(n-2)}{4 n} & n>2 \\[5pt] 0 & \text{otherwise} \end{cases} $$ $$ \mathbb{E}\left(Y_3\right) = \begin{cases} \frac{1}{8 n} \left(4 H_{n-1}+(n-1)(n-6)\right) & n>3 \\[5pt] 0 & \text{otherwise} \end{cases} $$ $$ \mathbb{E}\left(Y_4\right) = \begin{cases} \frac{1}{16 n} \left( 4 \left(H_{n-1}\right){}^2+12 H_{n-1}-4 H_{n-1}^{(2)}+(n-1)(n-14) \right) & n>4 \\[5pt] 0 & \text{otherwise} \end{cases} $$ This is the reproducing code:

    Block[{z, yc, ypr},
      Rest@NestList[(Piecewise[{{Expectation[(#1 /. z -> yc),
            Distributed[yc, DiscreteUniformDistribution[{0, Max[ypr - 1, 0]}]],
            Assumptions -> Element[ypr, Integers] && ypr >= 1], ypr >= 1}},
           # /. z -> 0] /. ypr -> z) &, z, 4] /. z -> n] //
     Simplify[#, Element[n, Integers] && n >= 0] &

The rational part in the expectation appears to be $\frac{(n-1)(n+2-2^k)}{2^k n}$, thus in the large-$n$ limit the expectation agrees with the continuous case. The expression involving harmonic numbers is of order $\mathcal{O}\left(\log(n)^{k-2}\right)$, and thus small compared to $n$. With some guess-work, I was also able to find $\mathbb{E}(Y_5)$: $$ \mathbb{E}(Y_5) = [n > 5 ] \left( \frac{(n-1)(n-30)}{32 n} + \frac{1}{32n} \ell_n \right) $$ where $$ \ell_n = -8 H_{n-1} H_{n-1}^{(2)}+\frac{8}{3} \left(H_{n-1}\right){}^3+12 \left(H_{n-1}\right){}^2+28 H_{n-1}-12 H_{n-1}^{(2)}+\frac{16 }{3} H_{n-1}^{(3)} $$ Here is confirmation with simulations:

    In[153]:= With[{n = 17}, (28 HarmonicNumber[n-1] + 12 HarmonicNumber[n-1]^2 +
        8/3 HarmonicNumber[n-1]^3 - 12 HarmonicNumber[n-1,2] -
        8 HarmonicNumber[n-1] HarmonicNumber[n-1,2] +
        16/3 HarmonicNumber[n-1,3] + (n-1)(n-30))/(32 n)] // N
    Out[153]= 0.131232

    In[157]:= Table[Nest[RandomInteger[{0,Max[#1-1,0]}]&, 17, 5], {10^7}] // N // Mean
    Out[157]= 0.13157
Is there an exact form for the solution for $x$ to $(\frac{1}{x})^{(\frac{1}{x})}=1-(\frac{1}{x})$?
Writing $y=\frac1x$, the equation becomes $$y^y=1-y,$$ and there is no analytical solution for it, even using special functions. We can make a decent approximation by considering that we look for the zero of the function $$f(y)=y\log(y)-\log(1-y)$$ Using a Taylor expansion, we have $$f(y)=y (\log (y)+1)+\frac{y^2}{2}+O\left(y^3\right)$$ Ignoring the higher-order terms, the non-trivial solution is $$y_0=2 W\left(\frac{1}{2 e}\right)$$ where $W(.)$ is the Lambert function. Now, iterating using Newton's method, we have $$y_{n+1}=\frac{(y_n-2) y_n+(y_n-1) \log (1-y_n)}{(y_n-2)+(y_n-1) \log (y_n)}$$ This can be considered as an explicit recurrence which, as shown below, converges very fast. $$\left( \begin{array}{cc} n & y_n \\ 0 & \color{red}{0.3}14369902967628019173913392043 \\ 1 & \color{red}{0.303}893759613688751450583356220 \\ 2 & \color{red}{0.303659}245373552388724380071295 \\ 3 & \color{red}{0.303659127029966}192264680381756 \\ 4 & \color{red}{0.30365912702996605124501895}3168 \\ 5 & \color{red}{0.303659127029966051245018951213} \end{array} \right)$$ The corresponding solution of the original equation is $x=1/y\approx 3.29317$.
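The iteration is easy to reproduce (a sketch with mpmath; `lambertw` supplies the starting value $y_0=2W(1/(2e))$):

    from mpmath import mp, lambertw, log, e, re

    mp.dps = 30
    y = re(2 * lambertw(1 / (2 * e)))   # starting value y_0 (real branch)

    for _ in range(6):                  # Newton recurrence from above
        y = ((y - 2)*y + (y - 1)*log(1 - y)) / ((y - 2) + (y - 1)*log(y))

    print(y)       # 0.30365912702996605124501895...
    print(1 / y)   # the corresponding x ≈ 3.29317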
Determine continuity of two variable function
Let $(x_0,0) \in \mathbb{R}^2$ and let $x_n = (x_0,1/n)$ and $y_n=(x_0, -1/n)$. Then $f(x_n) = \sqrt{x_0^2 + 1/n^2} \longrightarrow |x_0|$ while $f(y_n) = -\sqrt{x_0^2 + 1/n^2} \longrightarrow - |x_0|$. Therefore if $x_0 \neq 0$, $f$ is not continuous at $(x_0,0)$. It remains to check what happens when $x_0=0$, though it's quite clear that $f$ is continuous at $(0,0)$.
How to solve Sin over Tan Limit with variables inside the function?
Clearly we must have $b \neq 0$, otherwise the denominator of the function (whose limit is given in the question) would be identically zero. Now we are given that $$\lim_{x \to 0}\frac{\sin (ax) + b - 2}{\tan (bx)} = 3$$ We can see that $$\lim_{x \to 0}\sin (ax) + b - 2 = \lim_{x \to 0}\frac{\sin (ax) + b - 2}{\tan (bx)}\cdot\tan (bx) = 3\cdot 0 = 0$$ so $0 + b - 2 = 0$, giving $b = 2$. Next note that if $a = 0$ then the numerator $\sin (ax) = 0$ identically, and hence the limit could not be $3$. Therefore $a \neq 0$. We now have $$\lim_{x \to 0}\frac{\sin(ax)}{\tan(bx)} = \lim_{x \to 0}\frac{\sin (ax)}{ax}\cdot\frac{a}{b}\cdot\frac{bx}{\tan (bx)} = 1\cdot\frac{a}{b}\cdot 1 = \frac{a}{2}$$ Thus $a/2 = 3$ and hence $a = 6$. We thus have $a = 6, b = 2$. It is important to show that both $a, b$ are non-zero in order to get the correct answer.
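A numerical sanity check (a sketch; with $a=6$ and $b=2$ the ratio should approach $3$ as $x\to 0$):

    from math import sin, tan

    a, b = 6, 2
    for x in (0.1, 0.01, 0.001):
        print(x, (sin(a*x) + b - 2) / tan(b*x))
    # prints ratios approaching 3 (about 2.785, 2.998, 3.000)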