How to get the Y of an X from exponential distribution function ? (Mean Integrated Squared Error)
Never mind, I confused myself; in fact it's pretty easy: $f(x) = \exp(-x)$ (if $\lambda=1$).
Calculating the determinant of a matrix
You can use the Laplace formula directly: $$ \det\begin{bmatrix} 0 & 1 & 1 & 1\\ 1 & 0& x & x\\ 1 & x & 0 & x\\ 1 & x & x & 0 \end{bmatrix} = -\det\begin{bmatrix}1& x & x\\1 & 0 & x\\ 1 & x & 0\end{bmatrix} +\det\begin{bmatrix}1& 0 & x\\1 & x & x\\ 1 & x & 0\end{bmatrix} -\det\begin{bmatrix}1& 0 & x\\1 & x & 0\\ 1 & x & x\end{bmatrix} $$ Switching two rows negates the determinant. We see that we can get the first determinant by switching the rows of the second determinant once and also by switching the rows of the third determinant twice. We are left with $$ {}=-3\det\begin{bmatrix}1& x & x\\1 & 0 & x\\ 1 & x & 0\end{bmatrix}$$ Applying the formula for $3\times3$ matrices yields $${} = -3 (0 + x^2 + x^2 - 0 - x^2 - 0) = -3x^2. $$
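The expansion can be double-checked symbolically; a quick sketch with SymPy (assuming it is available):

```python
import sympy as sp

x = sp.symbols('x')
# the 4x4 matrix from the question
M = sp.Matrix([
    [0, 1, 1, 1],
    [1, 0, x, x],
    [1, x, 0, x],
    [1, x, x, 0],
])
det = sp.expand(M.det())
# should agree with -3*x**2
assert sp.simplify(det + 3*x**2) == 0
```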
Prove $5^n + 2 \cdot 3^n - 3$ is divisible by 8 $\forall n\in \mathbb{N}$ (using induction)
For the induction step, note that $5^{n+1} + 2\cdot 3^{n+1} - 3 = 5\left(5^n + 2\cdot 3^n - 3\right) - 4\cdot 3^n + 12$. As $3^n$ is an odd number, $4\cdot 3^n\equiv 4 \pmod 8$, and also $12\equiv 4 \pmod 8$, so $-4\cdot 3^n + 12 \equiv 0 \pmod 8$ and the claim follows from the induction hypothesis.
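A quick numerical sanity check of the claim (not a proof, just a sketch):

```python
# verify 8 | 5^n + 2*3^n - 3 for the first few n
for n in range(1, 100):
    assert (5**n + 2 * 3**n - 3) % 8 == 0
```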
Pricing a contract that pays G at T>0
If the filtration $\mathcal{F}(t)$ is generated by the underlying Wiener process (or Brownian Motion), then all $\mathcal{F}(T)$-measurable payoff functions can be hedged and the market is said to be complete. What your professor has said is not correct (or your understanding, anyway!). It is not enough for any measure $\mathbb{Q}$ to exist for there to be no arbitrage opportunities - it needs to be a risk neutral measure (or martingale measure). You have made a couple of mistakes: In the definition of the money market account $B$, you should have that $B(t)=e^{\int_{0}^{t} r(s) ds}$. The replicating value (also called the no-arbitrage value, the risk neutral value, etc) should be $V(t)=\mathbb{E}^\mathbb{Q}\left[ \frac{B(t)}{B(T)} V(T) \vert \mathcal{F}(t) \right]$. In the case that $G(T)\equiv G=1$, this is like getting one dollar in $T$ years discounted back to today, so the present value should be $e^{-\int_{t}^{T} r(s) ds}\cdot1 =e^{-\int_{t}^{T} r(s) ds} $ at time $t$. Let us see it formally: $V(t)=\mathbb{E}^\mathbb{Q}\left[ \frac{B(t)}{B(T)} G \vert \mathcal{F}(t) \right] = \mathbb{E}^\mathbb{Q}\left[ \frac{B(t)}{B(T)} \cdot 1 \vert \mathcal{F}(t) \right] = \frac{B(t)}{B(T)} $ To price the second payoff, we must assume the underlying asset $S$ follows some SDE, say $ dS_t=(r_t-\delta_t) S_t dt+\sigma_t S_t dW_t. $ This has the solution $ S_T=S_te^{\int_{t}^{T}(r_s-\delta_s) ds} \cdot \frac{z_T}{z_t}, $ where the stochastic exponential process $z_t=e^{\int_{0}^{t} \sigma_s dW_s - \frac{1}{2} \int_{0}^{t} \sigma_s^2 ds}$ is a martingale under the risk neutral measure. Then we get that $V(t)=\mathbb{E}^\mathbb{Q}\left[ \frac{B(t)}{B(T)} S_T \vert \mathcal{F}(t) \right] = S_t e^{-\int_{t}^{T} \delta_s ds} \mathbb{E}^\mathbb{Q}\left[ \frac{z_T}{z_t} \vert \mathcal{F}(t) \right] = S_t e^{-\int_{t}^{T} \delta_s ds}. $
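Here is a small Monte Carlo sketch of the final formula, under simplifying assumptions I am adding (constant $r$, $\delta$, $\sigma$, pricing at $t=0$); the discounted expectation of $S_T$ should land near $S_0 e^{-\delta T}$:

```python
import math
import random

random.seed(0)
S0, r, delta, sigma, T = 100.0, 0.03, 0.02, 0.2, 1.0  # assumed constants
N = 200_000

total = 0.0
for _ in range(N):
    z = random.gauss(0.0, 1.0)
    # risk-neutral GBM with dividend yield delta
    ST = S0 * math.exp((r - delta - 0.5 * sigma**2) * T
                       + sigma * math.sqrt(T) * z)
    total += math.exp(-r * T) * ST  # discount the payoff G = S_T

mc_price = total / N
analytic = S0 * math.exp(-delta * T)  # V(0) = S_0 e^{-delta T}
assert abs(mc_price - analytic) / analytic < 0.01
```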
Is the parallel transport on an associated $G$-bundle given by $G$-action in a local $G$-bundle chart?
The answer is yes. I'm not that familiar with the Maurer-Cartan formalism so I'll offer a perspective based directly on parallel transport. Let $c \colon I \rightarrow M$ be a smooth curve with $c(0) = p$. By design, the parallel transport maps $\operatorname{Pt}^P_{c,0,t} \colon P_{c(0)} \rightarrow P_{c(t)}$ of $P$ are $G$-equivariant. How is the parallel transport on $E = P \times_G S$ related to the parallel transport on $P$? Every element $\xi \in E_{p}$ is by definition an equivalence class $\xi = [\sigma_0, s_0]$ with $\sigma_0 \in P_p$ and $s_0 \in S$. The parallel transport on $E$ is then given by $$ \operatorname{Pt}^E_{c,0,t}(\xi) = \operatorname{Pt}^E_{c,0,t}([\sigma_0, s_0]) = [\operatorname{Pt}^P_{c,0,t}(\sigma_0), s_0].$$ Namely, to parallel transport $\xi$ along $c$, we "represent it with respect to an arbitrary frame" as $\xi = [\sigma_0, s_0]$, parallel transport the frame and keep the second component constant. Now suppose you are given in advance a section $\sigma(t)$ of $P$ along $c$ which trivializes $P$ (along $c$). This section also gives us a trivialization of $E$ along $c$ by identifying $[\sigma(t), s]$ with $s$. What do the parallel transport maps look like with respect to this trivialization? We have $$ s \cong [\sigma(0),s] \mapsto [\operatorname{Pt}^P_{c,0,t}(\sigma(0)), s] = [\sigma(t), g^{-1}(t) \cdot s] \cong g^{-1}(t)s $$ for a unique $g(t) \in G$ such that $\operatorname{Pt}^P_{c,0,t}(\sigma(0))g(t) = \sigma(t)$. Hence, the parallel transport maps with respect to such trivializations factor through a left $G$-action. Although I haven't checked the details, it seems reasonable that your $\tilde{c}(t)$ is precisely my $g^{-1}(t)$.
Three equilateral triangles form a hexagon
Rotating a vector around any point in the plane is the same as rotating it around its own tail. Let $R^{\alpha}$ be rotation of the plane by angle $\alpha$. Also, let $K$, $L$ and $M$ be the midpoints of $AF$, $BC$ and $DE$ respectively. Thus, $$R^{60^{\circ}}\left(\vec{KM}\right)=R^{60^{\circ}}\left(\frac{1}{2}\left(\vec{AD}+\vec{FE}\right)\right)=\frac{1}{2}R^{60^{\circ}}\left(\vec{AP}+\vec{PD}+\vec{FE}\right)=$$ $$=\frac{1}{2}R^{60^{\circ}}\left(\vec{AB}+\vec{PC}+\vec{FP}\right)=\frac{1}{2}R^{60^{\circ}}\left(\vec{AB}+\vec{FC}\right)=\vec{KL}$$ and we are done!
Branch points of rational functions
If $b\in X$ and $f(b)\neq\infty$ then $b$ is a branch point iff $f'(b)=0$ (derivative wrt. an arbitrary local coordinate). If $k$ is the maximal integer such that $f'(b)=\dots=f^{(k)}(b)=0$, then the ramification index is $k+1$ (that is, $k+1$ branches meet at $b$). If $f(b)=\infty$, replace $f$ with $1/f$.
Is the isomorphism class of a Galois group a first-order property?
Yes. Fix $N\in\mathbb{N}$ and a subgroup $G\subseteq S_n$. Then you can write down a first-order formula which says: there exists a bilinear map $K^N\times K^N\to K^N$ which makes $K^N$ a field such that there exist elements $\alpha_1,\dots\alpha_n\in K^N$ such that $f(x)$ factors as $a_n(x-\alpha_1)\dots(x-\alpha_n)$ over this field and $\alpha_1,\dots,\alpha_n$ generate $K^N$ as a field (every element of $K^N$ can be written as a polynomial in the $\alpha_i$, where we only need powers less than $n$ since each $\alpha_i$ has minimal polynomial of degree at most $n$) and there exist distinct linear maps $\sigma_g:K^N\to K^N$ for each $g\in G$ which are field automorphisms and satisfy $\sigma_g(\alpha_i)=\alpha_{g(i)}$ (note that distinctness of the $\sigma_g$ is needed since the $\alpha_i$ may not be distinct) and every linear map $K^N\to K^N$ which is a field automorphism is equal to one of the $\sigma_g$. Taken together, this says that $K^N$ is a splitting field of $f$ and its automorphism group is $G$. Since the splitting field of $f$ has degree at most $n!$, we can take a disjunction of these formulas over all $N\leq n!$ and all $G$ which are isomorphic to $H$ to get a formula which says the Galois group of $f$ is isomorphic to $H$.
Expectation of indicator variable squared
Since $X$ is an indicator random variable, $X$ only takes the values $0$ and $1$, so $X^2=X$.
$(1+\frac{1}{n\log n})^n-1=O(\frac{1}{n})$.
It's not true. In fact $$\left(1 + \frac{1}{n \log n}\right)^n = \exp\left(\frac{1}{\log n}\right) + O\left(\frac{1}{n \log n}\right) = 1 + \frac{1}{\log n} + O\left(\frac{1}{(\log n)^2}\right),$$ so the left-hand side of your claim is asymptotic to $\frac{1}{\log n}$, which is much larger than $O\left(\frac{1}{n}\right)$.
Tangent line to a curve and inertia
I would say yes, since from a physical point of view you are basically describing the instantaneous velocity, and instantaneous velocity is always tangent to the trajectory of an object. About point 1: the tangent line at a point of a differentiable function is the line that goes through that point and has the same slope as the function at that point. Thus, the tangent of a straight line is the straight line itself.
Does Disjunctive Syllogism eliminate one premise?
Change the second assumption to "not S" and it would be OK.
“congruence modulo 7” is an equivalence relation on Z. Find three elements in the equivalence class [3].
Yes, for (a) you are right, since $$10 \equiv 3 \pmod 7, \qquad 17 \equiv 3 \pmod 7, \qquad 3 \equiv 3 \pmod 7.$$ For (b), first list the elements of $$[3] = \{3,10,17,24,31,\dots \}.$$ Notice that this list goes on forever, and its elements all have the form $3 + 7k$ where $k$ is an integer; for this question we are only concerned with the nonnegative values of $k$. The first value in $[3]$ is $3$, which is when $k=0$, and the $10^{th}$ value in $[3]$ is when $k = 9$, which is $3 + 7(9) = 66$. So if $N = 66$ then $|S \cap [3]| = 10$, and this holds for all values of $N$ up to $66 + 7 - 1 = 72$. Hence $66 \leq N \leq 72$. Notice that if $N = 73$ then $|S \cap [3]| = 11$, because $73 \in [3]$, and that's why $N$ is strictly less than $73$. Take some time to understand this question! It's pretty neat and will save you a lot of time in the future, because it truly tests your understanding of congruences.
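Here is a quick brute-force check of the counting argument, assuming (as the answer does) that $S = \{1, \dots, N\}$:

```python
def count_in_class(N, rep=3, mod=7):
    """|S ∩ [rep]| for S = {1, ..., N}."""
    return sum(1 for x in range(1, N + 1) if x % mod == rep)

# |S ∩ [3]| = 10 exactly for N = 66, ..., 72
assert all(count_in_class(N) == 10 for N in range(66, 73))
assert count_in_class(73) == 11   # 73 ≡ 3 (mod 7) joins the class
```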
calculus optimization for the volume of a cone
Hint: the slant height ($12$) is the hypotenuse of a right triangle whose other sides are the height ($h$) and the radius ($r$) of the base of the cone. So $144-h^2=r^2$
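Carrying the hint one step further (this goes beyond the hint, so treat it as a sketch): substituting $r^2 = 144 - h^2$ into $V = \frac{1}{3}\pi r^2 h$ and setting $dV/dh = 0$ gives the optimal height.

```python
import sympy as sp

h = sp.symbols('h', positive=True)
V = sp.pi / 3 * (144 - h**2) * h        # volume with r^2 = 144 - h^2
h_star = sp.solve(sp.diff(V, h), h)[0]  # critical point: 144 - 3h^2 = 0
assert sp.simplify(h_star - 4 * sp.sqrt(3)) == 0
```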
Please explain $\frac{\partial}{\partial a_{ij}} \sum_{i=1}^m a_{1i} b_{i1} + \cdots + \sum_{i=1}^m a_{ni} b_{in} = b_{ji}$
Let $$f_k(a_{11},a_{12},\ldots,a_{nm},b_{11},b_{12},\ldots,b_{mn}):=\sum_{l=1}^m a_{kl}b_{lk}$$ so that $$\frac{\partial}{\partial a_{ij}}\sum_{k=1}^n f_k=\sum_{k=1}^n \frac{\partial}{\partial a_{ij}}f_k$$ is the derivative in question. Then note that $f_k$ does not depend on $a_{ij}$ if $k\neq i$ so we have $$\sum_{k=1}^n \frac{\partial}{\partial a_{ij}}f_k=\frac{\partial}{\partial a_{ij}}f_i.$$ However, this is simply given by \begin{align*} &\frac{\partial}{\partial a_{ij}}f_i(a_{11},a_{12},\ldots,a_{nm},b_{11},b_{12},\ldots,b_{mn})=\frac{\partial}{\partial a_{ij}}\sum_{l=1}^m a_{il}b_{li} =\sum_{l=1}^m \frac{\partial}{\partial a_{ij}}a_{il}b_{li}\\=&\frac{\partial}{\partial a_{ij}}a_{ij}b_{ji}=b_{ji}. \end{align*}
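The identity can be spot-checked with SymPy for small $n, m$ (a sketch; the entries are treated as independent symbols):

```python
import sympy as sp

n, m = 2, 3
A = sp.Matrix(n, m, lambda i, j: sp.Symbol(f'a{i}{j}'))
B = sp.Matrix(m, n, lambda i, j: sp.Symbol(f'b{i}{j}'))
f = (A * B).trace()  # = sum_k sum_l a_{kl} b_{lk}

for i in range(n):
    for j in range(m):
        # d f / d a_{ij} should be b_{ji}
        assert sp.simplify(sp.diff(f, A[i, j]) - B[j, i]) == 0
```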
differential equation $y'=\sqrt{|y|(1-y)}$
Given $$\displaystyle \frac{dy}{dx} = \sqrt{|y|\cdot (1-y)}=\sqrt{y(1-y)}\;,$$ because $0<y<1$. So $$\displaystyle \frac{dy}{\sqrt{y(1-y)}} = dx\Rightarrow \int \frac{1}{\sqrt{y(1-y)}}dy = \int dx.$$ Now put $$\displaystyle y=z+\frac{1}{2}\;,$$ so that $dy=dz$ and we get $$\displaystyle \int\frac{1}{\sqrt{\left(\frac{1}{2}\right)^2-z^2}}dz = \int dx.$$ Now put $\displaystyle z=\frac{1}{2}\sin \phi\;,$ so that $\displaystyle dz =\frac{1}{2}\cos \phi\, d\phi$ and we get $$\displaystyle \int\frac{\cos\phi}{\cos \phi}d\phi = \int dx\Rightarrow \phi=x+\mathcal{C}.$$ So we get $$\displaystyle \sin^{-1}\left(2z\right) =x+\mathcal{C},$$ and since $2z=2y-1$, $$\displaystyle \sin^{-1}\left(2y-1\right) = x+\mathcal{C}\Rightarrow 2y-1 = \sin(x+\mathcal{C})$$
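Independently of the algebra, one can verify directly that $y(x)=\tfrac{1}{2}\left(1+\sin(x+\mathcal{C})\right)$ satisfies the ODE wherever $\cos(x+\mathcal{C})\ge 0$ (so that $0\le y\le 1$ and $y'\ge 0$):

```python
import math

# direct check: y(x) = (1 + sin(x + C)) / 2 gives y' = sqrt(|y|(1 - y))
C = 0.0
for x in [0.1, 0.4, 0.9, 1.3]:   # all with cos(x + C) >= 0
    y = (1 + math.sin(x + C)) / 2
    dy = math.cos(x + C) / 2      # derivative of y
    assert abs(dy - math.sqrt(abs(y) * (1 - y))) < 1e-12
```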
Are all vector bundles "flat vector bundles"?
No. Using the Chern-Weil perspective on characteristic classes, you can prove that all the rational Pontryagin classes of a flat vector bundle have to vanish. Thus all you need are vector bundles with non-vanishing rational Pontryagin classes, of which there are many. A very nice source for this perspective on characteristic classes and flat bundles is Morita's book "Geometry of characteristic classes".
if $f(x) = a exp(-a (x-b))$ . Find sufficient statistic for b
The domain for this problem is very important. You haven't specified it but I'm assuming the problem is: find the sufficient statistic for $$f(x) = a\exp[-a(x-b)] \, \textbf{1}\{x > b\}.$$ Then the joint pdf is $$f(x) = a^n \exp\left[-a \sum_{i=1}^{n} x_{i}\right] \exp[a\,b\,n] \, \prod_{i=1}^{n}\textbf{1}\{x_{i}>b\}.$$ Notice all the $x_{i}$ will be greater than $b$ as long as the minimum is greater than $b$, therefore the likelihood function is $$L(b) =a^n \exp\left[-a \sum_{i=1}^{n} X_{i}\right] \exp[a\,b\,n] \, \textbf{1}\{X_{1,n}>b\},$$ where $X_{1,n} = \min(X_{1},\dots,X_{n})$. Then $X_{1,n}$ is a sufficient statistic. It is also the MLE.
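A simulation sketch of this (shifted-exponential sampling under the assumed density):

```python
import random

random.seed(1)
a, b, n = 2.0, 5.0, 10_000
# X = b + Exp(a) has density a * exp(-a(x - b)) for x > b
xs = [b + random.expovariate(a) for _ in range(n)]

b_hat = min(xs)  # the sufficient statistic X_{1,n}, also the MLE of b
assert b < b_hat < b + 0.01  # min(X) - b ~ Exp(n*a), so it hugs b from above
```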
Generate Random Latin Squares
In the combinatorics community, the Jacobson and Matthews approach is widely considered the best way to sample (approximately) uniformly at random from the set of all Latin squares. A practical non-MCMC approach that samples uniformly would be extremely well-received (but seems beyond current techniques). Other uniform sampling methods are: Generating Latin squares row-by-row by appending random permutations and restarting whenever there is a clash gives the uniform distribution. [Or equivalently, uniformly sampling from the set of row-Latin squares, then restarting if there is a clash.] Generating a list of all Latin squares, and picking one at random. Storage requirements could be reduced by (a) storing only a list of normalised Latin squares [i.e. first row in order] then randomly permuting the columns after sampling, and (b) storing only the differences between subsequent Latin squares (i.e. Latin trades) [although this makes the algorithm more complicated]. In either case, for not particularly large n [maybe n>5], these approaches are either impractically slow or require impractically large storage space. However, for some applications, we don't need to sample $n \times n$ Latin squares for $n>5$, in which case this is not a problem. Moreover, for statistical applications, sampling from all possible Latin squares is often not necessary, in which case we just apply a random isotopism to any given Latin square (i.e. we pick a Latin square, then permute its rows, columns and symbols randomly). Any attempt to sample via extending a Latin rectangle (or partial Latin square) to a Latin square without restarting from scratch after a clash occurs will almost certainly result in a non-uniform distribution. [I suppose theoretically you could add weights to your intermediate choices.]
Different Latin rectangles admit different numbers of completions, so if we don't restart from scratch, we will favour Latin squares whose Latin rectangles admit fewer completions (i.e. there's less competition for those Latin squares). This non-uniformity might seem like a subtle difference, but consider an $(n-2) \times n$ Latin rectangle. The number of completions is always a power of 2. If the number of completions is 1 (which can happen: take a cyclic group's Cayley table and delete the last two rows), then its completion is guaranteed to be generated from that point on. If the number of completions is $2^{n/2}$ (which can also happen: take the elementary abelian 2-group's Cayley table and delete the last two rows), then the probability of it being generated from that point on could be $2^{-n/2}$ (depending on how things are implemented). So, the difference in probabilities can be at least exponential in n. Even if you don't care too much about the uniform distribution, the Jacobson and Matthews approach is still reasonable: it is quite fast and simple to implement (there are also implementations available for GAP ("loops") and SAGE, and probably others I'm unaware of).
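For illustration, the restart-on-clash method described above (uniform, but only practical for small $n$) can be sketched as:

```python
import itertools
import random

def random_latin_square(n):
    """Uniformly sample an n x n Latin square by appending random
    permutations row by row, restarting from scratch on any clash."""
    perms = list(itertools.permutations(range(n)))
    while True:
        rows = []
        for _ in range(n):
            row = random.choice(perms)
            if any(row[j] == prev[j] for prev in rows for j in range(n)):
                break  # column clash: restart (this is what keeps it uniform)
            rows.append(row)
        else:
            return rows

random.seed(0)
sq = random_latin_square(4)
# every row and every column is a permutation of 0..3
assert all(sorted(row) == [0, 1, 2, 3] for row in sq)
assert all(sorted(col) == [0, 1, 2, 3] for col in zip(*sq))
```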
Cancelling Handle Attachments
If I understood you correctly: you can always choose to attach handles in increasing order, i.e., start by attaching 0-handles, then move on to 1-handles, etc., until you attach the top k-handles (if any). Then the condition you stated, i.e., that $\partial h^k=\pm h^{k-1}$, so that the algebraic (as opposed to geometric) intersection number is $\pm 1$ (and $h^k$ has algebraic intersection number $0$ with the belt spheres of all the other $(k-1)$-handles that you have attached), is necessary for cancelling: it says the attaching sphere of $h^k$ crosses the belt sphere of $h^{k-1}$ algebraically once. Still, you must actually be able to find an embedding (an isotopy of the attaching sphere) for which the attaching sphere intersects the belt sphere of $h^{k-1}$ geometrically in exactly one point. This is taken from my understanding and from a few different sources. It ultimately means that some embedding of the handle has these properties; it does not mean that the excess geometric intersections can always be removed by a different choice of embedding.
Example of a dynamical system which has an $\omega$-limit which is a cylinder of closed orbits
Hint: Consider a $2$-torus with periodic orbits, and foliate its neighborhood by $2$-tori. Now you can consider orbits that spiral towards the central torus with inclination tending more and more to the inclination of the periodic orbits, but always traveling along the height of the tori. The limit set of all such orbits will be the central torus. I believe that you can use this idea to write down a system explicitly.
$f'(t)\rightarrow b$ as $t\rightarrow +\infty$ $\Rightarrow f (t)/t\rightarrow b $ using Mean Value Theorem
Given an $\epsilon>0$ there is an $M>0$ such that $$|f'(t)-b|<{\epsilon\over2}\qquad(t>M)\ .$$ Given any $t>M$ the MVT guarantees the existence of a $\xi>M$ with $$f(t)-f(M)=f'(\xi)(t-M)=\bigl(f'(\xi)-b\bigr)(t-M) + b(t-M)\ .$$ It follows that $f(t)-bt=\bigl(f'(\xi)-b\bigr)(t-M)+f(M)-bM$, so that $$\bigl|f(t)-bt\bigr|\leq{\epsilon\over2}(t-M)+\bigl|f(M)\bigr|+|b|M\qquad(t>M)\ .$$ After dividing by $t$ we therefore obtain $$\left|{f(t)\over t}-b\right|\leq{\epsilon\over2} +{\bigl|f(M)\bigr|+|b|M\over t}\qquad(t>M)\ .$$ Here the right side is $<\epsilon$ as soon as $t>M'$ for a suitable $M'>M$.
Could $\sum e^{a_i}$ be simplified? Does it have an identity?
One possible way to reduce the cost of the exponentiation operation may be to replace exponentiation involving large exponents with multiplications. This is what I mean: Assume that $\{a_i\}$ is sorted in ascending order (if not, sort them first), so that $a_{i+1}\geq a_i$. Now, let's use the notation $$ A(n) = \sum_{i=1}^n e^{a_i} $$ Obviously, $A(1)=e^{a_1}$. Now, we can write $$ A(2)=A(1)+e^{a_2} $$ But we could also write this as $$ A(2) = A(1)+b_1e^{a_2-a_1} $$ where $b_1=e^{a_1}$. Similarly, we have $$ A(3)=A(2)+b_2e^{a_3-a_2} $$ where $b_2=e^{a_2}=b_1e^{a_2-a_1}$. If we define the sequence $$ b_{i+1} = e^{a_{i+1}-a_i}b_i $$ with $b_1=e^{a_1}$, then we can write $$ A(n+1)=A(n)+b_{n+1} $$ The advantage of this is that, while $a_n$ may be very large, and thus require a lot of computation, it may be much quicker to calculate the various $b_i$. It requires just as many exponentiation operations, but reduces the size of the exponent significantly, which may save on operation cost. However, it is subject to the exponentiation cost relationship. If the cost of exponentiation remains almost constant with size of the exponent, then this will actually be slower than simply evaluating the exponentials directly, due to the added multiplications.
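A minimal sketch of this recurrence (the function name is my own):

```python
import math

def sum_exp(a):
    """Compute sum(e^{a_i}) via b_{i+1} = b_i * e^{a_{i+1} - a_i},
    so each call to exp() sees only the gap between sorted exponents."""
    a = sorted(a)
    b = math.exp(a[0])   # b_1 = e^{a_1}
    total = b            # A(1)
    for prev, cur in zip(a, a[1:]):
        b *= math.exp(cur - prev)  # b_{i+1}
        total += b                 # A(i+1) = A(i) + b_{i+1}
    return total

vals = [0.5, 3.2, 1.1, 2.0]
assert abs(sum_exp(vals) - sum(map(math.exp, vals))) < 1e-9
```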
Intuitive explanation of (why/how) integration gives us the average value of a continuous function?
I think you should look at the geometric representation of integrals of continuous functions in $\mathbb{R}$. It's easy to see that while integrating, we are actually adding the values of a certain function at all the points in a certain interval (multiplied by a weight $\text{dt}$). Now, when we divide by the length of the interval, we get the average value of the function (as $\sum{\text{dt}}$ is the length of the interval). It's not formal, but I think it's easy to visualize it this way.
Proof that set of units of a ring is a multiplicative group
A group is a set with an associative operation that has an identity and inverses. The ring structure grants you the associative law and the identity (which is obviously a unit), so you only have to prove two facts: The set of units is closed under products, that is, the product of two units is also a unit. Every unit has an inverse, and this inverse is also a unit.
Cantor's derived sets
Yes, there are such sets. To describe an example, let's start with simpler tasks. If we just want $P\ne\emptyset$ with $P^1=\emptyset$, take $P$ to be a singleton. If we want $P^1\ne\emptyset$ and $P^2=\emptyset$, take $P$ to be a strictly increasing sequence together with its limit $a$. Then $P^1=\{a\}$. If we want $P^2\ne\emptyset$ and $P^3=\emptyset$, then take $P$ such that $P^1$ is a strictly increasing sequence together with its limit $a$. For instance, start with such a sequence $a_0<a_1<\dots\nearrow a$. Let all these points be in $P$, together with, for each $i>0$, a strictly increasing sequence sitting between $a_{i-1}$ and $a_i$ and converging to $a_i$. Note that $P^1=\{a_1,a_2,\dots\}\cup\{a\}$. We can recursively extend these examples to produce, for each $n$, an example $P$ with $P^n\ne\emptyset$ and $P^{n+1}=\emptyset$. To see this, note that any $P$ so built has a minimal point, some points that are successor points, meaning that they have an immediate predecessor in $P$, and some points that are limits of sequences of points in $P$. We build the next example: For each successor point $b$ in $P$ with immediate predecessor $b^-\in P$, fix a strictly increasing sequence sitting strictly between $b^-$ and $b$ and converging to $b$. Let $Q$ be the union of $P$ and (the ranges of) all these sequences. Note that $Q$ is closed and that all points of $P$, except for its least element, are limit points of elements of $Q$, so $Q'$ is just $P$ with its least element removed. OK, we are ready to build the desired example. Let $P_n$ be a set built as in the procedure just described, with $P_n^n$ a singleton. By means of a dilation and a translation, we may also ensure that $P_0=\{0\}$, $P_1\subset(0,1/2]$ with $\max P_1=1/2$, $P_2\subset(1/2,3/4]$ with $\max P_2=3/4$, and so on, with $P_n\subset(1-1/2^{n-1},1-1/2^n]$ and $\max P_n=1-1/2^n$ for all $n>0$. Now set $P=P_0\cup P_1\cup\dots\cup P_n\cup\dots\cup\{1\}$. In fact, you can ensure much more.
For each countable ordinal $\alpha$ there is a closed countable set of reals $P=P^0$ such that $P\supsetneq P^1\supsetneq\dots\supsetneq P^\beta\supsetneq\dots\supsetneq P^\alpha$ for all $\beta<\alpha$ (at limit stages $\gamma$, define $P^\gamma=\bigcap_{\rho<\gamma}P^\rho$). An easy way to achieve this is to note first that each countable ordinal embeds into $\mathbb R$, and that we may further ensure that the embedding is continuous (where the ordinal is given the order topology). We then just note that successor ordinals are compact, and that their derived sets are easy to compute. For instance, the limit points of an ordinal $\alpha$ are just the limit ordinals below $\alpha$. The limit points of this set are the ordinals below $\alpha$ that are multiples of $\omega^2$, and so on. In the examples above, each set $P_n$, $n>0$, has order type $\omega^n+1$ and $P=P_\omega$ has order type $\omega^\omega+1$ (the exponentiation is in the ordinal sense). (To see that any countable ordinal embeds into $\mathbb R$, in fact note that any countable linear order embeds into $\mathbb Q$, as can be easily verified by constructing the embedding recursively using an enumeration of the linear order and the fact that $\mathbb Q$ is dense in itself and has no endpoints. Once such an embedding of an ordinal is arranged, consider the closure of its range. Check that this is again an ordinal, perhaps slightly larger than the ordinal you began with, and redefine the embedding accordingly, to obtain a continuous embedding.) It is perhaps worth pointing out that the mention of ordinals is not accidental or capricious. For instance, theorem 2.1 of MR0867644 (88a:05013), Baumgartner, James E., Partition relations for countable topological spaces, J. Combin. Theory Ser. A 43 (1986), no. 2, 178–195, implies in particular that if $X$ is a countable Hausdorff space with a countable open base, $0<\alpha$ is countable, and $X^\alpha\ne\emptyset$, then $X$ has a subspace homeomorphic to $\omega^\alpha+1$.
Convergence of an integral with one singularity
Note that we have: $0<1-x+\frac{1}{b}\log(x) \leq 1-x$ for $x<1$ and $x$ close to $1$. So: \begin{align} \frac{1}{bx(1-x+\frac{1}{b}\log(x))} > \frac{1}{bx(1-x)} \end{align} What do you know about the integral of the expression in the RHS? Conclude.
Divisibility of $a^x - 1$
Hint: $$a^k-1=(a-1)\sum_{i=0}^{k-1}a^i\tag{1}$$ What happens if you replace $a$ with $a^x$? Also, don't feel sad about the "obvious" part. When a huge result is proven in physics it's a revolution. When the same thing happens in mathematics the result is obvious. Edit: As this question has been closed, I'll supply some details. Replacing $a$ with $a^x$ in $(1)$ yields $$a^{xk}-1=(a^x-1)\sum_{i=0}^{k-1}a^{xi}\tag{2}$$ Letting $y=xk$ we have $$a^x-1\mid a^y-1$$
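A brute-force sanity check of the conclusion $a^x - 1 \mid a^{xk} - 1$:

```python
# check that a^x - 1 divides a^(x*k) - 1 for a range of small values
for a in range(2, 8):
    for x in range(1, 6):
        for k in range(1, 6):
            assert (a**(x * k) - 1) % (a**x - 1) == 0
```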
correlated or independent
The notation $(X_1, X_2)$ means $X_1$ is the result of the first draw and $X_2$ the result of the second draw which occurs without replacement. Both $X_1$ and $X_2$ are uniformly distributed on $\{1, 2, \ldots, 20\}$, but they are not independent random variables. $$P(E_1) = P\{X_1 \geq 8\} = \frac{13}{20}, ~~ P(E_2) = P\{X_2 \geq 12\} = \frac{9}{20}.$$ But $E_1$ and $E_2$ are independent events if and only if $E_1^c$ and $E_2$ are independent events, in which case we would have $P(E_2 \mid E_1^c) = P(E_2)$. But clearly, $$P(E_2 \mid E_1^c) = \frac{9}{19} > \frac{9}{20} = P(E_2)$$ and so $E_1$ and $E_2$ are dependent events. Since $P(E_1^c) = 1 - P(E_1)$, the law of total probability gives $$\begin{align*} P(E_2) &= P(E_2\mid E_1)P(E_1) + P(E_2\mid E_1^c)P(E_1^c)\\ &= P(E_2\mid E_1^c) + (P(E_2\mid E_1) -P(E_2\mid E_1^c))P(E_1) \end{align*}$$ which shows that $$\min\{P(E_2\mid E_1), P(E_2\mid E_1^c)\} \leq P(E_2) \leq \max\{P(E_2\mid E_1), P(E_2\mid E_1^c)\},$$ and since $P(E_2 \mid E_1^c) > P(E_2)$, we conclude that $$P(E_2 \mid E_1) < P(E_2) ~\text{and}~ P(E_2 \cap E_1) = P(E_2 \mid E_1)P(E_1) < P(E_2)P(E_1).$$ We can also calculate $$P(E_2 \mid E_1) = \frac{P(E_2) - P(E_2\mid E_1^c)P(E_1^c)}{P(E_1)} = \frac{\frac{9}{20}-\left(\frac{9}{19}\times\frac{7}{20}\right)}{\frac{13}{20}} =\frac{9 \times 12}{13 \times 19}$$ and so $$P(E_2 \cap E_1) = P(E_2 \mid E_1)P(E_1) = \frac{9 \times 12}{13 \times 19} \times \frac{13}{20} = \frac{9}{20}\times \frac{12}{19}$$ in contrast to the $P(E_2 \cap E_1) = \dfrac{9}{20}$ claimed by the OP.
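The probabilities above can be confirmed by exact enumeration of the ordered draws:

```python
from fractions import Fraction

# all ordered pairs (X1, X2) drawn without replacement from {1, ..., 20}
pairs = [(i, j) for i in range(1, 21) for j in range(1, 21) if i != j]
total = len(pairs)  # 380, all equally likely

p1 = Fraction(sum(i >= 8 for i, j in pairs), total)               # P(E1)
p2 = Fraction(sum(j >= 12 for i, j in pairs), total)              # P(E2)
p12 = Fraction(sum(i >= 8 and j >= 12 for i, j in pairs), total)  # P(E1 ∩ E2)

assert p1 == Fraction(13, 20) and p2 == Fraction(9, 20)
assert p12 == Fraction(9, 20) * Fraction(12, 19)
assert p12 < p1 * p2  # dependent, with negative association
```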
What does it mean to define $f^{-1}(a)$ in the context of level sets, tangent planes and normals?
$S=f^{-1}(2)$ is the subset of $\mathbb{R}^3$ such that $$ e^{x+2y}\cos(z)-xz+y=2 \quad \mbox{for} \quad (x,y,z)\in S $$ so it is a surface in $\mathbb{R}^3$, analogous to a level curve for a function of two variables.
Let $G$ be a graph such that $\chi(G - x - y) = \chi(G) - 2$, for all distinct vertices $x,y$. Prove that $G$ is complete.
Suppose $G$ is not a complete graph. Then there exist two distinct vertices $x$ and $y$ such that $x$ is not adjacent to $y$. Let $f : V(G - x - y) \to [\chi(G) - 2]$ be a proper $(\chi(G) - 2)$-coloring of $G - x - y$. Since $x$ and $y$ are not adjacent, we can create a new color and assign it to both $x$ and $y$ to obtain a proper $(\chi(G)-1)$-coloring of $G$, a contradiction.
How can I prove $p \mid n^p - n$ for $n$ natural and $p$ prime?
Proof by induction: If $n=1$ then it is obvious. Now do the induction step from $n$ to $n+1$. By the induction hypothesis we have $n^p-n=p\cdot d$. It is easy to see that $p\mid {p\choose k}$ if $1\leq k \leq p-1$, since we have $$k{p\choose k} = p{p-1\choose k-1}$$ Now we use the binomial theorem: \begin{eqnarray*} (n+1)^p-(n+1) &=& n^p + \underbrace{{p\choose 1}n^{p-1}+...+{p\choose p-1}n}_a+1-n-1 \\ &=& \underbrace{n^p-n}_{p\cdot d} + p\cdot b \\ &=& p (d+b) \end{eqnarray*} where $a = p\cdot b$ for some integer $b$, by the divisibility of the binomial coefficients above.
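A quick empirical check of $p \mid n^p - n$ for small primes:

```python
# verify the statement for small primes p and a range of n
for p in [2, 3, 5, 7, 11, 13]:
    for n in range(1, 60):
        assert (n**p - n) % p == 0
```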
Find range of 'a' for which the eqn. has at least one real solution.
Firstly, $$\Delta \ge 0 \implies 1+4(a+1) \ge 0 \implies a \ge -\dfrac{5}{4}$$ Secondly, $$|y|=|\csc x| \ge 1 \implies \left| \frac{-1 \pm \sqrt{4a+5}}{2} \right| \ge 1$$ $$\implies -1-\sqrt{4a+5} \le -2 \quad \text{ or} \quad -1+\sqrt{4a+5} \ge 2$$ $$\implies \sqrt{4a+5} \ge 1 \quad \text{ or} \quad \sqrt{4a+5} \ge 3$$ $$\implies a \ge -1 \quad \text{ or} \quad a \ge 1$$ (Using or because we need "at least one real" only) Combining $$-1 \le a < \infty$$
If $M$ is a flat $R$-module and $rm=0$ for some $r$ and $m$, show $m=0$.
Hint: $r$ is a non-zero divisor on $R$ if and only if the sequence $$0\longrightarrow R\xrightarrow{\:\times r\:} R$$ is exact. This sequence remains exact on tensoring by $M$.
Difficult integrals involving trigonometric functions
These integrals are related to the Poisson kernel. If $R>1$ we have $$ \frac{1}{R+e^{i\theta}} = \sum_{n\geq 0}\frac{(-1)^n e^{ni\theta}}{R^{n+1}},\qquad \frac{1}{R+e^{-i\theta}} = \sum_{n\geq 0}\frac{(-1)^n e^{-ni\theta}}{R^{n+1}} $$ hence by considering the sum/difference/product between these representations, and the fact that $\int_{0}^{2\pi} e^{ki\theta}\,d\theta = 2\pi \delta(k)$, the computation of the given integrals is straightforward. If $R\in(0,1)$ one may apply the same principle to $$ \frac{1}{R+e^{i\theta}} = \sum_{n\geq 0}(-1)^n R^n e^{-(n+1)i\theta},\qquad \frac{1}{R+e^{-i\theta}} = \sum_{n\geq 0}(-1)^n R^n e^{(n+1)i\theta}$$ and if $R=1$ one may tackle the given integrals through the tangent half-angle substitution.
Can any boolean function be expressed as a tree with each node having at least one variable input?
No. This is not possible for arbitrary expressions. It would imply that the output depends on the input which is connected to the final gate. A function from which a single input variable can be factored out through an AND at the output is called positive unate. Similarly, a negative unate function would allow to be decomposed into an input and a remainder function using an OR output gate. Not all functions are unate, let alone can be split into unate sub-functions. Try to compose a 2-input XOR from AND, OR, NOT in this way: the output of XOR depends on two inputs rather than on just one.
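The XOR counterexample can be verified exhaustively; a brute-force sketch (here the "remainder" function $g$ is allowed to depend on both inputs, which only makes the search more generous, and a final NOT is omitted since a NOT of a single variable input cannot be XOR):

```python
import itertools

def xor(x, y):
    return x ^ y

# literals: x, NOT x, y, NOT y
literals = [lambda x, y: x, lambda x, y: 1 - x,
            lambda x, y: y, lambda x, y: 1 - y]
ops = [lambda a, b: a & b, lambda a, b: a | b]  # AND, OR at the final gate

found = False
for lit in literals:
    for table in itertools.product([0, 1], repeat=4):  # every g(x, y)
        g = lambda x, y, t=table: t[2 * x + y]
        for op in ops:
            if all(op(lit(x, y), g(x, y)) == xor(x, y)
                   for x in (0, 1) for y in (0, 1)):
                found = True
assert not found  # XOR admits no such single-literal factorization
```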
The dimension of a matrix as a subspace of $\cal{M}_{3 \times 3}$
Let$$B=\begin{bmatrix}1&1&1\\1&0&-1\\0&1&2\end{bmatrix}.$$Then $A.B=0\iff B^T.A^T=0^T=0$. But\begin{align}B^T.(x,y,z)=(0,0,0)&\iff\left\{\begin{array}{l}x+y=0\\x+z=0\\x-y+2z=0\end{array}\right.\\&\iff z=y=-x\end{align}So, $B^T.A^T=0$ if and only if $A$ is a matrix of the form$$\begin{bmatrix}a&-a&-a\\b&-b&-b\\c&-c&-c\end{bmatrix}.$$Can you take it from here?
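As a quick SymPy check (a sketch), every matrix of the stated form does satisfy $A\cdot B=0$; the three free parameters $a$, $b$, $c$ hint at the dimension you are after:

```python
import sympy as sp

a, b, c = sp.symbols('a b c')
B = sp.Matrix([[1, 1, 1], [1, 0, -1], [0, 1, 2]])
A = sp.Matrix([[a, -a, -a], [b, -b, -b], [c, -c, -c]])
assert (A * B).expand() == sp.zeros(3, 3)
```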
solve inductive inequality $a_m>a_{m-1}+0.5 a_{m-1}^2$
I am able to get $m=O\!\left(\frac{1}{a_0}\log (1/a_0)\right)$. Let $f(x)=x+c\,x^2$ (where $c>0$ takes the place of $0.5$), and $0<a_0<1$. If $g(x)=f(a_0)+f'(a_0)(x-a_0)$, then $g(x)\le f(x)$, $g(a_0)=f(a_0)$ and $g'(a_0)=f'(a_0)$. Define $b_0=a_0$ and $b_{n+1}=g(b_n)$, $n\ge0$. It is clear that the sequence $a_n$ reaches $1$ before $b_n$ does. It is easy to see that $$ g^n(x)=(1+2\,c\,a_0)^nx-\frac{a_0}{2}\bigl((1+2\,c\,a_0)^{n}-1\bigr). $$ If $N$ is the largest index such that $b_N<1$, we have $$ (1+2\,c\,a_0)^Na_0-\frac{a_0}{2}\bigl((1+2\,c\,a_0)^{N}-1\bigr)<1. $$ From here we deduce $$ a_0(1+2\,c\,a_0)^N<2\implies N<\frac{\log(2/a_0)}{\log(1+2\,c\,a_0)}\sim\frac{\log(2/a_0)}{2\,c\,a_0}. $$
Kolmogorov Differential Equations governing random dynamical system
Renaming indices you get $$ A_{jk} = \left. \frac{\partial P_{jk}}{\partial u} \right|_{u=t} $$ so that \begin{eqnarray} \sum_k A_{jk} &=& \sum_k\left. \frac{\partial P_{jk}}{\partial u} \right|_{u=t} \\ &=& \left.\frac{\partial}{\partial u}\left(\color{blue}{\sum_k P_{jk} }\right)\right|_{u=t} \\ &=& \left.\frac{\partial}{\partial u}\left(\color{blue}{1 }\right)\right|_{u=t} \\ &=& 0, \end{eqnarray} i.e., each row of $A$ sums to zero.
Find a power series representation for the function and determine the interval of convergence
Your answer is correct. One may recall that $$ \frac1{1+u}=\sum_{n=0}^\infty (-1)^n u^n, \quad |u|<1, $$ giving, for $2x^2<1$, $$ \frac1{1+2x^2}=\sum_{n=0}^\infty (-2)^n x^{2n} $$ that is $$ \frac{x}{1+2x^2}=\sum_{n=0}^\infty (-2)^n x^{2n+1}, \quad x \in \left(-\frac{\sqrt{2}}2,\frac{\sqrt{2}}2\right). $$
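The expansion can be cross-checked against SymPy's series command:

```python
import sympy as sp

x = sp.symbols('x')
f = x / (1 + 2 * x**2)
partial = sum((-2)**n * x**(2 * n + 1) for n in range(6))  # up to x^11
assert sp.expand(sp.series(f, x, 0, 12).removeO() - partial) == 0
```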
Set of ordered pairs of the transitive closure R* of R
As the transitive closure of the relation $R$, the relation $R^*$ is transitive and satisfies $R\subseteq R^*$. In fact it is the 'smallest' transitive relation that contains $R$. If we have $uRv$ and $vRw$ then also $uR^*v$ and $vR^*w$. The transitivity of $R^*$ then tells us that $uR^*w$. Applying that here to cases like $u=a,v=b,w=c$ (or $u=e,v=c,w=e$) leads to the conclusion that $aR^*c$ (or $eR^*e$).
Dimension 3 indecomposable modules of a group $G = \langle x \rangle \times \langle y \rangle$ of order $p^2$
Here's an outline of a proof using elementary linear algebra. You could probably make it shorter by assuming some theory (about radicals, etc.) First, it will be more convenient to think about the actions of the elements $u=x-1$ and $v=y-1$ of the group algebra, rather than directly about the action of $x$ and $y$. Since $x^p=y^p=1$ and $\operatorname{char}(k)=p$, we have $u^p=v^p=0$, but the condition required of the modules means that $u^2$ and $v^2$ act as zero. Also, the actions of $u$ and $v$ commute. Let $M$ be an indecomposable module on which $u^2$ and $v^2$ act as zero. First consider the action of $u$ on $M$. Since $u^2$ acts as zero, $Mu$ is in the kernel of the action of $u$. Choose a basis of $M$ as follows: first choose a basis of $Mu$, extend to a basis of $\{m\in M\mid mu=0\}$, and then extend to a basis of $M$ by including one element $b$ such that $bu=c$ for each basis element $c$ of $Mu$. This gives a basis of $M$ such that for each basis element $b$, either $bu=0$ or $bu$ is another basis element, with at most one $b$ such that $bu=c$ for each basis element $c$ of $Mu$. [For algebraically closed $k$, you could get this by considering the Jordan normal form of the action of $u$.] Since $\dim(M)=3$, there are not many possibilities for the number of $b$ with $bu=0$. There are either $3$ or $2$, in which case $M$ is the direct sum of $3$ or $2$ nonzero $u$-stable subspaces. But if there are three, then $Mu=0$ and so every subspace is $u$-stable, and the same considerations applied to $v$ show that $M$ is the direct sum of $3$ or $2$ nonzero submodules, and so is not indecomposable. So $M$ has a basis $\{s,t,tu\}$ where $su=tu^2=0$, and $M=S\oplus T$ is the direct sum of two $u$-stable subspaces, where $S=\langle s\rangle$ and $T=\langle t,tu\rangle$. Now consider the action of $v$. If both $S$ and $T$ are $v$-stable, then $M$ is not indecomposable.
If $Sv\not\leq S$, then a simple calculation shows that, up to multiplying $s$ by a nonzero scalar, we can assume that $sv=tu$ and $tv=0$. So $M$ is the module with basis $\{s,t,sv=tu\}$ (with all basis elements killed by $u$ and by $v$ except that $sv\neq0\neq tu$). If $Sv\leq S$ but $Tv\not\leq T$, then a simple calculation shows that, up to replacing $s$ with a linear combination of $s$ and $tu$, we can assume that $tv=s$. So $M$ is the module with basis $\{t,tu,tv\}$ (with all basis elements killed by $u$ and by $v$ except that $tu\neq0\neq tv$).
Retract of Compact $2$-manifold
The map $r^*: H^*(C_k;\Bbb R) \to H^*(S_g;\Bbb R)$ is an injective ring homomorphism, by the fact that it's a retraction. Poincare duality says that the cup-product pairing on $H^1(S_g;\Bbb R)$ is nondegenerate; another way of phrasing it is that $H^1(S_g;\Bbb R)$ is a symplectic vector space. A subspace on which the cup-product is trivial is called an isotropic subspace. It is a standard (linear algebra) theorem that an isotropic subspace is of dimension at most half the dimension of the vector space itself. In particular, because the product on $H^1(C_k;\Bbb R)$ is trivial, and $r^*$ is a ring homomorphism, its image is an isotropic subspace. So the image is of rank at most $\dim H^1(S_g;\Bbb R)/2 = g$. The maps on real cohomology are just the maps in integral cohomology tensored with $\Bbb R$, so you find the same result at the level of integral cohomology rings. In particular, $H^1(C_k;\Bbb Z)$ must have rank at most $g$.
Disproving Littlewood's first principle
So, you want a set $E\subset\mathbb R$ of finite measure for which there is no open set $O$ which is the finite union of disjoint open intervals such that $m(O\triangle E)=0$. Let $E$ be a "Fat" Cantor Set. If $O$ is nonempty, $O\setminus E$ contains an interval. If $O$ is empty, $E\setminus O$ has positive measure.
For every *non-square* matrix prove that $AA^t$ or/and $A^tA$ is singular
To elaborate on my hint, suppose $A$ is an $n \times m$ matrix and $n \neq m$. It must be that $rank(A^t) = rank(A) \leq \min(n,m) < \max(n,m)$. Using the fact that $rank(AB) \leq rank(A)$ for any $A,B$ for which the product is defined, we have that: $$rank(AA^t) \leq rank(A) < \max(n,m)$$ $$rank(A^tA) \leq rank(A^t) < \max(n,m).$$ But one of $AA^t$ and $A^tA$ is a $\max(n,m)\times\max(n,m)$ matrix. Therefore at least one of them does not have full rank. For square matrices, not having full rank is equivalent to being singular.
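A concrete numerical illustration (the $2\times3$ matrix below is just an example): $AA^t$ is $2\times2$ and may well be invertible, but the $3\times3$ product $A^tA$ must be singular since its rank is at most $2$.

```python
# For the 2x3 matrix A below, A A^t is 2x2 and A^t A is 3x3; the larger,
# max(n,m)-sized product must be singular since rank(A) <= 2.
A = [[1, 2, 3],
     [4, 5, 6]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

At = [list(col) for col in zip(*A)]
AAt = matmul(A, At)          # 2x2
AtA = matmul(At, A)          # 3x3

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def det3(M):
    return (M[0][0] * det2([[M[1][1], M[1][2]], [M[2][1], M[2][2]]])
            - M[0][1] * det2([[M[1][0], M[1][2]], [M[2][0], M[2][2]]])
            + M[0][2] * det2([[M[1][0], M[1][1]], [M[2][0], M[2][1]]]))
```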
Stone-Weierstrass applied to trigonometric polynomials on a disc
To expand on @Jose27's comment: $\overline{e^{inz}}$ is not $e^{-inz}$, since $z$ is not assumed to be real. In fact with $z=x+iy$, $$ e^{inz} = e^{in(x+iy)} = e^{inx}e^{-ny} $$ so $$ \overline{e^{inz}} = e^{-inx}e^{-ny} $$ but $$ e^{-inz} = e^{-in(x+iy)} = e^{-inx}e^{ny}. $$ In other words, $\overline{e^{inz}}$ is not a member of $T$ (it's not holomorphic when $n\neq 0$).
$A\subset\mathbb R$ then use quantifier to write statements.
Your attempts are not correct! $(1):\exists x_m\in A, \forall x\in A:x\leq x_m$ $(3): \exists M \, \forall x\in A:|x|\leq M$
Calculate the limit $\lim \limits_{x \to 2} \left(\frac{x^2+2x-8}{x^2-2x}\right)$, although $x\neq 2$
Given $$\lim_{x\rightarrow2}\dfrac{x^2+2x-8}{x^2-2x}=\lim_{x\rightarrow2}\dfrac{\color{red}{(x-2)}(x+4)}{x\color{red}{(x-2)}}=\lim_{x\rightarrow2}\dfrac{x+4}{x}=\lim_{x\rightarrow2}1+\dfrac4x = 3$$ OR You could also use L'Hopital's rule $$\lim_{x\rightarrow2}\dfrac{x^2+2x-8}{x^2-2x}=\lim_{x\rightarrow2}\dfrac{2x+2}{2x-2}=\dfrac{2(2)+2}{2(2)-2}=\dfrac{6}{2}=3$$
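A quick numerical check of the limit, approaching $x=2$ from both sides:

```python
# Approach x = 2 from both sides; the ratio should tend to 3.
def f(x):
    return (x ** 2 + 2 * x - 8) / (x ** 2 - 2 * x)

vals = [f(2 + eps) for eps in (1e-3, -1e-3, 1e-6, -1e-6)]
```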
The De Morgan Formulas (for finite sets )
What you have shown is that if $w \in \left(\cup_k A_k\right)^c$ then $w \in \cap_k A_k^c$, that is $\left(\cup_k A_k\right)^c \subset \cap_k A_k^c$. You must also show the reverse inclusion. In fact, all your implications ($\implies$) are equivalences $(\iff)$, but I leave you to check that. Other than that your proofs are fine. You may observe that you never explicitly use that your index set is the 'finite naturals', so you may exchange it for any arbitrary index set (hence the general De Morgan's).
Algebraic and definable closure of a vector space is a span of its subset
First, recall that the theory of vector spaces over an infinite field $K$ admits quantifier elimination. That is, for all formulae $\psi$ there is a quantifier free formula $\psi'$ such that: $$\psi \leftrightarrow \psi'.$$ Now, for any atomic formula $\varphi(x,y_1,y_2,...,y_n)$, the language can only express terms that are linear combinations, so either: $\varphi(M) = \{\, m \in M \mid M \vDash \varphi(m,e_1,e_2,...,e_n)\,\}=M$ (occurs when $x$ appears with zero coefficient), Or, $M \vDash \varphi(x,e_1,e_2,...,e_n)$ implies that $x$ is in the linear span of the $e_i$'s, and therefore unique. If $v$ is in the linear span of elements of $A$, say $v = \lambda_1 a_1 + \cdots + \lambda_n a_n$, consider the atomic formula: $$\varphi(x,y_1,...,y_n) \equiv x = F_{\lambda_1}y_1 + ... + F_{\lambda_n}y_n.$$ Clearly $M \vDash \varphi(v,a_1,...,a_n)$ and $\varphi(M) =\{\,v\,\}$. So $v \in acl(A)$ and $v \in dcl(A)$. Conversely, if $v \in acl(A)$, then there exists a formula $\psi$ such that: $M \vDash \psi(v,a_1,...,a_n)$ ($a_i \in A$), And $\psi(M)$ is finite. By quantifier elimination, we can assume $\psi$ is quantifier free. By the above remarks, we can thus deduce that $v$ also satisfies an atomic formula of type 2. and so $v$ is in the linear span of $A$. The definable and algebraic closures coincide since $span(A) \subseteq dcl(A) \subseteq acl(A) = span(A)$.
Hypotenuse and angle ratio relationship
Well, if $\angle BAC = 90^\circ$ and $\angle ABC : \angle ACB = 1:2$, this means $\angle ABC = 30^\circ$ and $\angle ACB = 60^\circ$. Now $AC = 4$ and $BC = \sqrt{AB^2 + AC^2}$. But $AB = BC\cos(30^\circ)$.
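A numeric check (taking $\angle ABC=30^\circ$ and $\angle ACB=60^\circ$, the assignment consistent with the $1:2$ ratio, and $AC=4$ as the leg opposite the $30^\circ$ angle):

```python
import math

# Right angle at A, angle B : angle C = 1 : 2, so B = 30 deg, C = 60 deg.
# With AC = 4 opposite the 30-degree angle B, the hypotenuse follows from sin.
B = math.radians(30)
AC = 4.0
BC = AC / math.sin(B)          # hypotenuse
AB = BC * math.cos(B)          # remaining leg
pythag_gap = abs(AB ** 2 + AC ** 2 - BC ** 2)
```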
A way of finding the root of the polynomial equation
There is a very well-known formula for the quadratics (I see that you know it). I assume someone may have asked a similar question on this website so I wouldn't be surprised to see a "duplicate" coming up in the comments... But for degree $3$ and $4$ there are formulas, known as Cardano's formulas (other mathematicians have found these formulas too, but Cardano's name is usually stuck on them) which give the solutions of a polynomial equation as a function of its coefficients. The modern proofs involve Galois Theory, which is in some sense a mix of group theory and field theory, so I would put it as "a little linear algebra but mostly other methods". For degree $n \ge 5$, it is known that there are no formulas involving either addition, subtraction, multiplication, division, radicals, or any combination/composition of them as a function of the coefficients. This is a very deep result, because it means that for polynomials of degree $5$ or more we cannot expect to find the roots in an easy manner at all. As a small consolation, for these polynomials there exist numerical methods if we wish, for instance, to compute the roots over $\mathbb R$ or $\mathbb C$. Over other fields we are pretty much screwed in general, unless the polynomials we look at are particularly pretty and we have a clever way of "guessing" the roots using theory. Hope that helps,
Is this a martingale?
Yes, $Z$ is a proper martingale. However, $\int_0^T(Z_sW_s)^2\,ds$ is not integrable for large $T$. As the quadratic variation of $Z$ is $[Z]_t=4\int_0^t(Z_sW_s)^2\,ds$, Ito's isometry says that this is integrable if and only if $Z$ is a square-integrable martingale, and you can show that $Z$ is not square integrable at large times (see below). However, it is conditionally square integrable over small time intervals. $$ \begin{align} \mathbb{E}\left[Z_t^2W_t^2\;\Big\vert\;\mathcal{F}_s\right]&\le\mathbb{E}\left[W_t^2\exp(W_t^2)\;\Big\vert\;\mathcal{F}_s\right]\\ &=\frac{1}{\sqrt{2\pi(t-s)}}\int x^2\exp\left(x^2-\frac{(x-W_s)^2}{2(t-s)}\right)\,dx \end{align} $$ It's a bit messy, but you can evaluate this integral and check that it is finite for $s \le t < s+\frac12$. In fact, integrating over the range $[s,s+h]$ (any $h < 1/2$) with respect to $t$ is finite. So, conditional on $W_s$, you can say that $Z$ is a square integrable martingale over $[s,s+h]$. This is enough to conclude that $Z$ is a proper martingale. We have $\mathbb{E}[Z_t\vert\mathcal{F}_s]=Z_s$ (almost surely) for any $s \le t < s+\frac12$. By induction, using the tower rule for conditional expectations, this extends to all $s < t$. Then, $\mathbb{E}[Z_t]=\mathbb{E}[Z_0] < \infty$, so $Z$ is integrable and the martingale conditions are met. I mentioned above that the suggested method in the question cannot work because $Z$ is not square integrable. I'll elaborate on that now. If you write out the expected value of an expression of the form $\exp(aX^2+bX+c)$ (for $X$ normal) as an integral, it can be seen that it becomes infinite exactly when $a{\rm Var}(X)\ge1/2$ (because the integrand is bounded away from zero at either plus or minus infinity). Let's apply this to the given expession for $Z$. The expression for $Z$ can be made more manageable by breaking the exponent into independent normals. Fixing a positive time $t$, then $B_s=\frac{s}{t}W_t-W_s$ is a Brownian bridge independent of $W_t$. 
Rearrange the expression for $Z$ $$ \begin{align} Z_t&=\exp\left(W_t^2-\int_0^t(2(\frac{s}{t}W_t-B_s)^2+1)\,ds\right)\\ &=\exp\left(W_t^2-2\int_0^t\frac{s^2}{t^2}W_t^2\,ds+\cdots\right)\\ &=\exp\left((1-2t/3)W_t^2+\cdots\right) \end{align} $$ where '$\cdots$' refers to terms which are at most linear in $W_t$. Then, for any $p > 0$, $$ Z_t^p=\exp\left(p(1-2t/3)W_t^2+\cdots\right). $$ The expectation $\mathbb{E}[Z_t^p\mid B]$ of $Z_t^p$ conditional on $B$ is infinite whenever $$ p(1-2t/3){\rm Var}(W_t)=p(1-2t/3)t \ge \frac12. $$ The left hand side of this inequality is maximized at $t=\frac34$, where it takes the value $3p/8$. So, $\mathbb{E}[Z_{3/4}^p\mid B]=\infty$ for all $p\ge\frac43$. The expected value of this must then be infinite, so $\mathbb{E}[Z^p_{3/4}]=\infty$. It is a standard application of Jensen's inequality that $\mathbb{E}[\vert Z_t\vert^p]$ is increasing in time for any $p\ge1$ and martingale $Z$. So, $\mathbb{E}[Z_t^p]=\infty$ for all $p\ge 4/3$ and $t\ge3/4$. In particular, taking $p=2$ shows that $Z$ is not square integrable.
Relation between the maximum eigenvalues of symmetric positive definite matrix $A$ and $B A B^\dagger$
Let $C=BAB^+=BAB^T(BB^T)^{-1}$. Note that $BAB^T$ and $(BB^T)^{-1}$ are $m\times m$ symmetric $>0$ matrices and, therefore, their product $C$ is diagonalizable and has only $>0$ eigenvalues. More precisely, $C$ is similar to the following $>0$ symmetric matrix $S=(BB^T)^{-1/2}BAB^T(BB^T)^{-1/2}=[(BB^T)^{-1/2}B]A[(BB^T)^{-1/2}B]^T$. For every vector $x\in\mathbb{R}^m$, $x^TSx=y^TAy$ where $y=[(BB^T)^{-1/2}B]^Tx$. Then $x^TSx\leq \rho(A)||y||^2$ where $||y||^2=x^T(BB^T)^{-1/2}BB^T(BB^T)^{-1/2}x=||x||^2$ and we are done.
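A smallest-case numerical check (the matrices below are illustrative): with $A$ a $2\times2$ symmetric positive definite matrix and $B$ a $1\times2$ full-row-rank matrix, $C=BAB^T(BB^T)^{-1}$ is a scalar and must not exceed $\rho(A)$.

```python
# Smallest nontrivial case: A is 2x2 symmetric positive definite, B is 1x2,
# so C = B A B^t (B B^t)^{-1} is a scalar and must not exceed rho(A).
A = [[2.0, 1.0],
     [1.0, 2.0]]          # eigenvalues 1 and 3, so rho(A) = 3
B = [3.0, 4.0]

ABt = [A[0][0] * B[0] + A[0][1] * B[1],
       A[1][0] * B[0] + A[1][1] * B[1]]
BABt = B[0] * ABt[0] + B[1] * ABt[1]      # B A B^t = 74
BBt = B[0] ** 2 + B[1] ** 2               # B B^t   = 25
C = BABt / BBt                            # 74/25 = 2.96
rho_A = 3.0
```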
Calculate the probability to have $5$ consecutive $H$ in $200$ coin tosses.
$\color{blue}{HINT:}$ Let's try to solve the main question using recurrence relations; you can find detailed information about recurrence relations at the following link: https://en.wikipedia.org/wiki/Recurrence_relation. Let $a_n$ denote the number of sequences of $n$ flips that contain $5$ consecutive Heads. Say the last flip is $T$ and the sequence contains $5$ consecutive Heads; the number of such sequences is $a_{n-1}$. Or say the last two flips are $TH$ and the sequence contains $5$ consecutive Heads; the number of such sequences is $a_{n-2}$. Or say the last three flips are $THH$; this case contributes $a_{n-3}$. Or say the last four flips are $THHH$; this case contributes $a_{n-4}$. Or say the last five flips are $THHHH$; this case contributes $a_{n-5}$. Or say the last five flips are $HHHHH$; the number of such sequences is $2^{n-5}$. The desired count is the disjoint union of these cases, so $$a_n=a_{n-1}+a_{n-2}+a_{n-3}+a_{n-4}+a_{n-5}+2^{n-5}.$$ This is a non-homogeneous recurrence relation; I guess you can solve it by yourself or use calculators. When you find an explicit formula, put $200$ in place of $\color{red}{n}$. At last, divide the result by the denominator $2^{200}$.
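No closed form is needed if we simply iterate the recurrence. A sketch that verifies the recurrence against brute force for small $n$ and then evaluates the probability at $n=200$:

```python
from itertools import product

# a_n = number of length-n H/T strings containing 5 consecutive H, via
# a_n = a_{n-1}+a_{n-2}+a_{n-3}+a_{n-4}+a_{n-5} + 2^(n-5),  a_0=...=a_4=0.
def a_upto(N):
    a = [0] * (N + 1)
    for n in range(5, N + 1):
        a[n] = sum(a[n - k] for k in range(1, 6)) + 2 ** (n - 5)
    return a

def brute(n):
    """Direct enumeration of all 2^n strings, for cross-checking small n."""
    return sum(1 for s in product('HT', repeat=n) if 'HHHHH' in ''.join(s))

a = a_upto(200)
checks = [a[n] == brute(n) for n in range(1, 13)]
p200 = a[200] / 2 ** 200
```

The resulting probability is close to $0.97$: a run of five heads in $200$ tosses is very likely.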
How do I find the value of a partial sum without calculator?
So that formula for the sum of geometric series was $$\sum_{i=0}^{n-1} a r^i = a \frac{1-r^n}{1-r}$$ Compare this to $$\sum_{i=0}^{7} (-2/3)^i$$ to identify what is a, r and n. Then plug those in.
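A quick check with exact rational arithmetic (here $a=1$, $r=-2/3$, $n=8$, matching the sum above):

```python
from fractions import Fraction

a, r, n = Fraction(1), Fraction(-2, 3), 8   # sum_{i=0}^{7} has n = 8 terms
direct = sum(a * r ** i for i in range(n))
formula = a * (1 - r ** n) / (1 - r)
```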
Minimizing the length of wire between two poles?
Geometric method If we reflect $A$ in the mirror line to get $A_1$, then obviously $AC+CB = A_1C+BC$. The latter is minimal when $A_1$, $B$ and $C$ are aligned. One can even find the mirror image of $B$ as well, so finally $l = \sqrt{(a+b)^2 + h^2}$. One can use other triangles, triangle similarities, etc., but this one is quite visual. Algebraic method \begin{align} l &= \sqrt{a^2+x^2} + \sqrt{b^2+(h-x)^2} \\ \frac {dl}{dx} &= \frac x{\sqrt{a^2+x^2}} - \frac {h-x}{\sqrt{b^2+(h-x)^2}} = 0 \end{align} from the latter one can find that $$ x^2 \left [ b^2+(h-x)^2\right ] = (a^2+x^2)(h-x)^2 \\ x^2 b^2 + x^2 (h-x)^2 = a^2(h-x)^2 + x^2(h-x)^2 \\ xb = a(h-x) $$ So, \begin{align} A_0C &= x = \frac {ah}{a+b} \\ B_0C &= h-x = \frac {bh}{a+b} \end{align} and \begin{align} l &= \sqrt{a^2+\frac {a^2h^2}{(a+b)^2}} + \sqrt{b^2+\frac {b^2h^2}{(a+b)^2}} = \sqrt{1+\frac {h^2}{(a+b)^2}} (a+b) = \sqrt{(a+b)^2+h^2} \end{align}
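A numerical cross-check of both methods (grid search against the closed forms; the values of $a,b,h$ are illustrative):

```python
import math

# Grid-search the wire length l(x) = sqrt(a^2+x^2) + sqrt(b^2+(h-x)^2)
# and compare with the closed forms x = a h/(a+b) and l = sqrt((a+b)^2+h^2).
a, b, h = 3.0, 5.0, 10.0

def l(x):
    return math.sqrt(a * a + x * x) + math.sqrt(b * b + (h - x) ** 2)

xs = [h * i / 100000 for i in range(100001)]
x_best = min(xs, key=l)
x_closed = a * h / (a + b)                 # 3.75 for these values
l_closed = math.sqrt((a + b) ** 2 + h ** 2)
```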
Concrete examples of valuation rings of rank two.
Qiaochu's answer is sound in principle, but in practice one needs to be more careful with the definition of the ring. The quotient field $K$ of $A$ consists of formal Laurent series of the form $$f=\sum_{r=-r_0}^\infty x^r\sum_{s=-s_0(r)}^\infty a_{r,s}y^s.$$ Here $r_0$ is an integer and for each integer $r$, $s_0(r)$ is an integer (depending on $r$). So these are the power series where the powers of $x$ are bounded below and for each integer $r$ the coefficient of $x^r y^s$ is zero for all $s$ below a bound depending on $r$. This complicated-looking condition ensures that the product of two elements of $K$ is also an element of $K$ (note that one cannot multiply two general Laurent series). Then $A$ will consist of all such series with the additional conditions that $r_0=0$ and $s_0(0)=0$. The valuation of an element $f$ is the least $(r,s)$ under lexicographic ordering with $a_{r,s}\ne0$. Here the ordering is $(r,s)<(r',s')$ if $r < r'$ or $r=r'$ and $s < s'$. A more high-brow interpretation of the condition for membership of $K$ is that the support of $f$, the set of $(r,s)$ for which $a_{r,s}\ne0$, should be well-ordered, that is, each subset of the support has a least element. (With respect to this lexicographic ordering of course.) By considering a version of this construction in $n$ variables one can construct explicitly a ring with a valuation of rank $n$.
Uniformly Lipschitz continuously differentiable?
Every continuously differentiable function is locally Lipschitz. However, the function $f(x)=e^x$ is continuously differentiable, but not uniformly Lipschitz. So we are essentially assuming that the derivative exists and is globally bounded.
Detailed proof of Central Limit Theorem for Markov Chains
The step that interests you is rather direct. To see why it holds, first rewrite the double sum in the expectation as $$S=\sum_{k=0}^\infty\sum_{m=1}^\infty f(X_k) f(X_{k+m})\mathbf 1_{T>k+m}$$ then condition each $k$-term by $\sigma(X_k)$. This yields $$E(S)=\sum_{k=0}^\infty\sum_{m=1}^\infty E\left(f(X_k) E\left( f(X_{k+m})\mathbf 1_{T>k+m}\mid X_k\right)\right)$$ By the Markov property and the homogeneity of the Markov chain, for each $k$ and each positive $m$, $$E\left( f(X_{k+m})\mathbf 1_{T>k+m}\mid X_k\right)=\mathbf 1_{T>k}g_m(X_k)$$ where $$g_m(x)=E\left(f(X_m)\mathbf 1_{T>m}\mid X_0=x\right)$$ This is the formula in your text, minus the typo $Y_0=0$.
Holomorphic function mapping a set onto a straight line
See the Open mapping theorem: the image of an open set in a non-constant holomorphic function is open. No non-empty subset of a line is open, so if the image is contained in a line, the function must be constant. That said, proving the open mapping theorem may depend on already having results like the one you ask for. In that case your approach is fine. Here's an informal way of looking at it: pick a point $z$ in the domain of $f$ and reason that sufficiently close to $z$, $f$ is well-approximated by multiplication by $f^\prime(z)$, which is a scale and/or rotation of the complex plane. Hence there must be some direction you can go from $z$ such that $f(z)$ moves away from the line, unless $f^\prime(z)$ is $0$. So if you always stay within a line, $f^\prime(z)$ must always be $0$, i.e. $f$ must be constant. This approach uses no heavy machinery, just basic facts about the derivative.
asymptote and value of a function
For $x>1$, $y=f(x)=\ln(px+q)$ has an asymptote at $x=1$, which means $f(x)\to-\infty$ as $x\to1^+$. This happens when $p+q=0$, and together with $4p+q=1$ we have $p=1/3$ and $q=-1/3$. We can check by drawing the graph of $f(x)=\ln(x/3-1/3)$.
For $h \in G$ and $\phi \in Aut(G)$ is $\phi^n(h)$ periodic in any finite quotient?
This is false. Let $G$ be the direct sum of countably many copies of $\mathbb{Z}/2\mathbb{Z}$ indexed by $\mathbb{Z}$ with generators $e_i, i \in \mathbb{Z}$. Let $S \subset \mathbb{Z}$ be an infinite set of non-negative integers whose indicator function $1_S$ is not periodic. Let $G \to G/N \cong \mathbb{Z}/2\mathbb{Z}$ be the quotient in which the generators $e_s, s \in S$ are identified and the others are killed. Let $\phi : G \to G$ act by $\phi(e_i) = e_{i+1}$, and take $h = e_0$. Then $\phi^i(h) N$, which corresponds to $1_S(i)$ under the identification $G/N \cong \mathbb{Z}/2\mathbb{Z}$, is not periodic; in fact it can be arbitrary. Perhaps some words about the train of thought behind this counterexample might be helpful. First, $\phi$ descends to an automorphism $G/N \to G/N$ if it is inner, so if there is a counterexample, then $\phi$ necessarily has infinite order in $\text{Out}(G)$. The easiest way I know to get a large group of outer automorphisms is to take the direct sum or direct product of copies of some abelian group, and the simplest automorphisms of these are the permutations. The simplest permutation of infinite order is an infinite cycle, and after that it wasn't hard to see how to choose $N$.
Is there a better way to do this probability calculation?
The event $A$ consists of $|A|$ equally-likely elements of the probability space $\Omega$, where $|A|$ is the number of ways that we can form a sum of $6$ or less with three ordered positive integers. For example, $(1,1,2) \in A$ because $1 + 1 + 2 \leq 6$, and $(2,1,1) \in A$ because $2 + 1 + 1 \leq 6$, and moreover these are two distinct events. The number $|A|$ is also the number of ways we can distribute $6$ or fewer indistinguishable balls into three identified bins (think of the number of balls in each bin as the number of pips showing on each die), which in turn is the number of ways we can distribute exactly $7$ balls into four identified bins (where the first three bins are identified with the three bins in the previous model, and the fourth bin holds the balls that were not put in bins in that model). And we can count $|A|$ by putting one ball into each of the four bins and then distributing the remaining $3$ balls in those bins, zero or more of those balls in each bin. So $|A|$ is the number of ways to distribute $3$ indistinguishable balls into $4$ identified bins, where there is no minimum number of balls to put in any bin. This can be solved by the "stars and bars" method. Once you have found $|A|$ by this method, you can set $P(A) = \dfrac{|A|}{|\Omega|},$ using the same reasoning by which you proposed to find $P(A_i).$ Another possibility is to find each of your $A_i$ by distributing exactly $i-3$ balls into three bins with no minimum number of balls per bin, and compute $P(A_i)$ as you have indicated. This will come to the same result for fairly obvious reasons. This method works for totals up to $8$ (just change the number of balls distributed into the bins). If you want the probability of a total of $9$ or less, you have to take into account the fact that $(1,1,7)$ (for example) is not in your probability space; the simple stars-and-bars method alone does not account for that.
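A brute-force check of the count: enumerate all $6^3$ outcomes and compare with the stars-and-bars value $\binom{6}{3}$ (three balls into four bins, i.e. $\binom{3+4-1}{3}$).

```python
from itertools import product
from math import comb

# Brute-force |A|: ordered triples of die faces with sum at most 6,
# against the stars-and-bars count C(6,3).
outcomes = list(product(range(1, 7), repeat=3))
A_count = sum(1 for t in outcomes if sum(t) <= 6)
prob = A_count / len(outcomes)
```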
Definite integral of $\sin^4x$
"Hint": $$\int \sin^4(x)dx=\int [\sin^2(x)]^2dx=\frac{1}{4}\int(1-\cos(2x))^2dx$$
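A pointwise numerical check of the identity used in the hint, $\sin^4 x = \tfrac14(1-\cos 2x)^2$:

```python
import math

# Check sin^4 x == (1 - cos 2x)^2 / 4 on a grid of points.
pts = [k * 0.1 for k in range(-30, 31)]
gap = max(abs(math.sin(x) ** 4 - (1 - math.cos(2 * x)) ** 2 / 4) for x in pts)
```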
Problem from Iran Olympiad?
Let $a=2^x$ and $b=2^y$ where $x,y$ are distinct and $b$ is a rearrangement of the digits of $a$. We know that $|x-y|<4$, otherwise $a$ and $b$ would have different numbers of digits. Therefore $|x-y|=1,2,3$. Successive powers of $2$ are congruent $\pmod 9$ to $1,2,4,8,7,5,1,2,4,\ldots$ This implies that powers of $2$ differing by a factor of $2^1,2^2$ or $2^3$ cannot be congruent. Yet if $a,b$ have the same digits then $b\equiv a \pmod 9$, which means $|x-y|\neq 1,2,3$. Due to the contradiction we can therefore conclude that there are no such $a,b$.
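A computational check of both ingredients: the $\bmod\ 9$ cycle of powers of $2$, and the absence of digit-permutation pairs among nearby powers of $2$ (checked here up to $2^{60}$, an arbitrary cutoff for illustration):

```python
# Powers of 2 mod 9 cycle with period 6, and no two powers of 2 that differ
# by a factor of 2, 4 or 8 are permutations of each other's digits.
cycle = [2 ** k % 9 for k in range(12)]

def digits(n):
    return sorted(str(n))

clashes = [(x, y) for x in range(1, 61) for y in range(x + 1, x + 4)
           if digits(2 ** x) == digits(2 ** y)]
```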
Generalisation of prime numbers to matrices?
One peculiar connection between primes and matrix products is through the definition of a so-called dynamical zeta function. I am not sure this is what you are looking for, but it may give you some ideas. Here is an explicit application to products of matrices https://arxiv.org/abs/chao-dyn/9301001
Double derivative of the composite of functions
Let $p = (x_1,\ldots,x_n)$, $f(p) = (y_1,\ldots,y_m)$, and $(g\circ f)(p)= (z_1,\ldots,z_k)$. Then the chain rule can be written $$ \frac{\partial z_j}{\partial x_i} \;=\; \sum_{\alpha=1}^m \frac{\partial z_j}{\partial y_\alpha} \frac{\partial y_\alpha}{\partial x_i} $$ What you have written as $(Dg)_{f(p)}$ is the matrix of partial derivatives $\dfrac{\partial z_j}{\partial y_\alpha}$, and what you have written as $(Df)_p$ is the matrix of partial derivatives $\dfrac{\partial y_\alpha}{\partial x_i}$. Taking the derivative again yields $$ \frac{\partial^2 z_j}{\partial x_h \partial x_i} \;=\; \sum_{\alpha=1}^m\sum_{\beta=1}^m \frac{\partial^2 z_j}{\partial y_\beta\partial y_\alpha}\frac{\partial y_\beta}{\partial x_h}\frac{\partial y_\alpha}{\partial x_i}\;+\;\sum_{\alpha=1}^m \frac{\partial z_j}{\partial y_\alpha}\frac{\partial^2 y_\alpha}{\partial x_h \partial x_i} $$ What you have written as $(D^2g)_{f(p)}$ is the rank-three $k\times m\times m$ tensor of second partial derivatives $\dfrac{\partial^2 z_j}{\partial y_\beta \partial y_\alpha}$. As you can see, what you have written as $(Df)_p^2$ is the rank four $m\times n\times m\times n$ tensor whose entries are $\dfrac{\partial y_\beta}{\partial x_h}\dfrac{\partial y_\alpha}{\partial x_i}$. That is, $(Df)_p^2$ is the tensor product (or Kronecker product) of the matrix $(Df)_p$ with itself. In general, given an $m\times n$ matrix $A$ and a $q\times r$ matrix $B$, their tensor product is the $m\times n\times q \times r$ tensor whose $(i,j,k,\ell)$-th entry is $a_{i,j}b_{k,\ell}$. This operation is analogous to the outer product of two vectors. More generally, it is possible to take the tensor product of any rank $R$ tensor with any rank $S$ tensor to get a tensor of rank $R+S$.
From a more algebraic point of view, $(Df)_p$ is a linear transformation from $\mathbb{R}^n$ to $\mathbb{R}^m$, and the second derivative $(D^2f)_p$ is a linear tranformation $$ (D^2f)_p : \mathbb{R}^n\otimes\mathbb{R}^n \to \mathbb{R}^m $$ where $\otimes$ denotes the tensor product of vector spaces. The object $(Df)_p^2$ is the linear transformation $$ (Df)_p^2 : \mathbb{R}^n\otimes\mathbb{R}^n \to \mathbb{R}^m\otimes\mathbb{R}^m $$ defined by $$ (Df)_p^2(v\otimes w) \;=\; (Df)_p(v) \,\otimes\, (Df)_p(w) $$ Since $(D^2g)_{f(p)}$ goes from $\mathbb{R}^m \otimes \mathbb{R}^m$ to $\mathbb{R}^k$, the composition $(D^2g)_{f(p)}\cdot (Df)_p^2$ is defined, and is a linear transformation from $\mathbb{R}^n\otimes\mathbb{R}^n$ to $\mathbb{R}^k$.
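A finite-difference sanity check of the second-derivative chain rule, on a concrete pair $f:\mathbb{R}^2\to\mathbb{R}^3$, $g:\mathbb{R}^3\to\mathbb{R}^2$ chosen only for illustration:

```python
import numpy as np

# Finite-difference check of D^2(g o f) = D^2g . (Df)^2 + Dg . D^2f
# for an illustrative pair f: R^2 -> R^3, g: R^3 -> R^2.
def f(x):
    return np.array([np.sin(x[0]) + x[1] ** 2, x[0] * x[1], np.cos(x[1])])

def g(y):
    return np.array([y[0] * y[2] + y[1] ** 2, np.exp(y[0]) + y[1] * y[2]])

def jac(F, x, h=1e-6):
    """Central-difference Jacobian, shape (outputs, inputs)."""
    cols = [(F(x + h * e) - F(x - h * e)) / (2 * h) for e in np.eye(len(x))]
    return np.stack(cols, axis=-1)

def hess(F, x, h=1e-4):
    """Central-difference Hessians, shape (outputs, inputs, inputs)."""
    d, out = len(x), len(F(x))
    H = np.empty((out, d, d))
    E = np.eye(d)
    for i in range(d):
        for j in range(d):
            H[:, i, j] = (F(x + h*E[i] + h*E[j]) - F(x + h*E[i] - h*E[j])
                          - F(x - h*E[i] + h*E[j]) + F(x - h*E[i] - h*E[j])) / (4*h*h)
    return H

x = np.array([0.3, -0.7])
comp = lambda x: g(f(x))
lhs = hess(comp, x)                        # D^2(g o f), shape (k, n, n)
Jf, Jg = jac(f, x), jac(g, f(x))
Hf, Hg = hess(f, x), hess(g, f(x))
# sum_{beta,alpha} Hg[j,b,a] Jf[b,h] Jf[a,i]  +  sum_alpha Jg[j,a] Hf[a,h,i]
rhs = (np.einsum('jba,bh,ai->jhi', Hg, Jf, Jf)
       + np.einsum('ja,ahi->jhi', Jg, Hf))
gap = np.max(np.abs(lhs - rhs))
```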
Orthogonal projection to graph of Hilbert space
The orthogonal projection of $(y,z)\in\mathcal{H}\times\mathcal{H}$ onto the graph $\mathcal{G}(A)=\{ (x,Ax)\in\mathcal{H}\times\mathcal{H} : x\in\mathcal{H} \}$ is the unique $(x,Ax)\in\mathcal{H}\times\mathcal{H}$ such that the following orthogonality conditions hold in $\mathcal{H}\times\mathcal{H}$: $$ ((y,z)-(x,Ax)) \perp (x',Ax'), \;\;\; x'\in\mathcal{H}. $$ That is, the following hold for all $x'\in\mathcal{H}$: $$ (y-x,z-Ax)\perp(x',Ax'),\;\;\; x'\in\mathcal{H}, \\ \langle y-x,x'\rangle=\langle Ax-z,Ax'\rangle,\;\;\; x'\in\mathcal{H}, \\ \langle y-x,x'\rangle=\langle A^*(Ax-z),x'\rangle,\;\;\; x'\in\mathcal{H}, \\ y-x=A^{*}(Ax-z) \\ y+A^*z =(A^*A+I)x \\ x=(A^*A+I)^{-1}(y+A^*z). $$ Therefore, if $P$ denotes the orthogonal projection of $(y,z)$ onto the graph $\mathcal{G}(A)$, then $$ P(y,z) = (x,Ax) \\ = ((A^*A+I)^{-1}(y+A^*z),A(A^*A+I)^{-1}(y+A^*z)) \in\mathcal{G}(A)\subset\mathcal{H}\times\mathcal{H}. $$ It's ugly and pretty all at the same time.
Combinatorics Question with bridges and inability to cross over each other
For each village on one side, consider the set of villages on the other side it directly connects to. Label the villages on one side from $v_1$ to $v_{15}$ and the other side from $u_1$ to $u_{20}$. Clearly, the highest-numbered village on the $u$ side that $v_i$ connects to is equal to the lowest-numbered village that $v_{i+1}$ connects to. If it were higher, two bridges would intersect; hence it is less than or equal. If it were lower, say $v_i$ connects up to $u_j$ and $v_{i+1}$ connects from $u_k$ with $j<k$, then $u_k$ is not even indirectly connected to $v_i$. Hence the two are equal. Also, if a village $v_i$ directly connects to $u_a$ and $u_b$, it directly connects to every $u_c$ with $a\leq c\leq b$: if $u_c$ were connected to anything else, that would cause an intersection of bridges, so $u_c$ must connect to $v_i$. Hence, this "partitions" the villages on one side among the villages of the other side. Instead of "partitioning" the villages, we look at the bridges. Each bridge is associated to exactly one village $v_i$. We partition all $20+15-1$ bridges into $15$ villages. By stars and bars or otherwise, we obtain the answer of ${(20-1)+(15-1)\choose(15-1)}$.
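The count can be brute-force checked for small numbers of villages: enumerate all bridge sets, discard crossing or disconnected configurations (and those leaving a village with no bridge), and compare with the binomial formula.

```python
from itertools import combinations
from math import comb

# Brute force the number of non-crossing, connected bridge configurations in
# which every village gets a bridge, with v villages on one bank and u on the
# other; compare with C((u-1)+(v-1), v-1).
def count_configs(v, u):
    bridges = [(i, j) for i in range(v) for j in range(u)]
    total = 0
    for r in range(1, len(bridges) + 1):
        for sub in combinations(bridges, r):
            if any((i1 - i2) * (j1 - j2) < 0
                   for (i1, j1), (i2, j2) in combinations(sub, 2)):
                continue                          # two bridges cross
            if ({i for i, _ in sub} != set(range(v))
                    or {j for _, j in sub} != set(range(u))):
                continue                          # some village has no bridge
            # Connectivity check via union-find over all v + u villages.
            parent = list(range(v + u))
            def find(x):
                while parent[x] != x:
                    parent[x] = parent[parent[x]]
                    x = parent[x]
                return x
            for i, j in sub:
                parent[find(i)] = find(v + j)
            if len({find(x) for x in range(v + u)}) == 1:
                total += 1
    return total
```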
What is the conditional probability that the second card is a Spade given that the second-to-last card is a Spade?
I spread out a shuffled deck of cards, face down, before you. I point to any one among these fifty-two and ask you: What is the probability that that card is one from the thirteen spades?   Does your answer depend on to which is the card that I point? Well, it so happens that I have pointed to the second from the bottom of the deck, and when I turn it over it is revealed to be a spade. Now I point to any other card, among the fifty-one remaining in the deck, and now ask: What is the probability that that card is one among the twelve remaining spades? ...
if $x_{m}$be prime number, show that $2m+1$ must prime number!
As far as I can tell, everything you've shown is correct. As such, as you stated near the end of your question text, with $$\alpha = \sqrt{a + 1} + \sqrt{a}, \; \; \beta = \sqrt{a + 1} - \sqrt{a}, \; \; \alpha\beta = 1 \tag{1}\label{eq1A}$$ you get $$x_{n} = \frac{\alpha^{2n + 1} + \beta^{2n + 1}}{\alpha + \beta} \tag{2}\label{eq2A}$$ First, note for any integer $k \ge 0$ that $$\begin{equation}\begin{aligned} \alpha^{2k} + \beta^{2k} & = (\sqrt{a + 1} + \sqrt{a})^{2k} + (\sqrt{a + 1} - \sqrt{a})^{2k} \\ & = \sum_{i=0}^{2k}\binom{2k}{i}(\sqrt{a + 1})^{2k-i}(\sqrt{a})^{i} + \sum_{i=0}^{2k}\binom{2k}{i}(\sqrt{a + 1})^{2k-i}(-\sqrt{a})^{i} \\ & = \sum_{i=0}^{2k}\binom{2k}{i}\left((\sqrt{a + 1})^{2k-i}(\sqrt{a})^{i} + (-1)^i(\sqrt{a + 1})^{2k-i}(\sqrt{a})^{i}\right) \\ & = \sum_{i=0}^{2k}\binom{2k}{i}(\sqrt{a + 1})^{2k-i}(\sqrt{a})^{i}(1 + (-1)^i) \\ & = 2\sum_{i=0}^{k}\binom{2k}{2i}(\sqrt{a + 1})^{2k-2i}(\sqrt{a})^{2i} \\ & = 2\sum_{i=0}^{k}\binom{2k}{2i}(a + 1)^{k-i}(a)^{i} \end{aligned}\end{equation}\tag{3}\label{eq3A}$$ The second last line comes from all of the terms with odd $i$ canceling as $1 + (-1)^{i} = 1 - 1 = 0$ and the even terms having a factor of $1 + (-1)^{i} = 1 + 1 = 2$. Also, since $a$ is an integer, this shows $\alpha^{2k} + \beta^{2k}$ is an integer. Next, consider that $x_n$ in \eqref{eq2A} is a prime, but $2n + 1$ is not a prime. Since $x_0 = 1$ is not a prime, this means $n \ge 1$, so $2n + 1 \gt 1$ and, thus, must be composite. This means there are integers $q \gt 1$ and $r \gt 1$ where $$2n + 1 = qr \tag{4}\label{eq4A}$$ Also, since $2n + 1$ is odd, this means $q$ and $r$ are also odd, so $q \ge 3$ and $r \ge 3$. 
Note for all odd positive integers $s$, we have $$\begin{equation}\begin{aligned} x^s + y^s & = (x + y)(x^{s-1} - x^{s-2}y + \ldots - x(y^{s-2}) + y^{s-1}) \\ & = (x + y)\left(\sum_{i=0}^{s-1}(-1)^{i}x^{s-1-i}y^{i}\right) \end{aligned}\end{equation}\tag{5}\label{eq5A}$$ Using \eqref{eq4A} and \eqref{eq5A}, then \eqref{eq2A} becomes $$\begin{equation}\begin{aligned} x_n & = \frac{\alpha^{qr} + \beta^{qr}}{\alpha + \beta} \\ & = \frac{(\alpha^{q})^{r} + (\beta^{q})^{r}}{\alpha + \beta} \\ & = \frac{(\alpha^{q} + \beta^{q})\left(\sum_{i=0}^{r-1}(-1)^{i}(\alpha^{q})^{r-1-i}(\beta^{q})^{i}\right)}{\alpha + \beta} \\ & = \left(\frac{\alpha^{q} + \beta^{q}}{\alpha + \beta}\right)\left(\sum_{i=0}^{r-1}(-1)^{i}(\alpha^{q})^{r-1-i}(\beta^{q})^{i}\right) \end{aligned}\end{equation}\tag{6}\label{eq6A}$$ With the first factor, letting $q = 2m + 1$, we can see from \eqref{eq2A} it's $x_m$ and, thus, an integer. With the second factor, note the first & last terms are $$(\alpha^{q})^{r-1} + (\beta^{q})^{r-1} \tag{7}\label{eq7A}$$ Since $r$ is odd, then $r - 1$ is even so by \eqref{eq3A} the value in \eqref{eq7A} is an integer. Next, consider the second term & the second last term of that factor, $$\begin{equation}\begin{aligned} -(\alpha^{q})^{r-2}(\beta^{q}) - (\alpha^{q})(\beta^{q})^{r-2} & = -(\alpha\beta)^{q}((\alpha^{q})^{r-3} + (\beta^{q})^{r-3}) \\ & = -((\alpha^{q})^{r-3} + (\beta^{q})^{r-3}) \end{aligned}\end{equation}\tag{8}\label{eq8A}$$ where $\alpha\beta = 1$ from \eqref{eq1A} was used. Once again, $r - 3$ is even so the value in \eqref{eq8A} is an integer. 
You can repeat this pairing of terms from the start and end of the second factor in \eqref{eq6A} until you get to the middle term of $$\begin{equation}\begin{aligned} (-1)^{\frac{r-1}{2}}(\alpha^{q})^{\frac{r-1}{2}}(\beta^{q})^{\frac{r-1}{2}} & = (-1)^{\frac{r-1}{2}}\left(\alpha\beta\right)^{\frac{q(r-1)}{2}} \\ & = (-1)^{\frac{q(r-1)}{2}} \end{aligned}\end{equation}\tag{10}\label{eq10A}$$ This is also, of course, an integer. Thus, the second factor of \eqref{eq6A} is a sum of integers so it, too, is also an integer. In addition, you can easily show both these factors in \eqref{eq6A} are $\gt 1$. As such, this shows that $x_n$ is a product of $2$ integer factors each $\gt 1$ and, thus, it cannot be prime. However, since it was stated it was a prime, this is a contradiction. This means the assumption of $2n + 1$ being composite must be false, thus proving $2n + 1$ is actually a prime if $x_n$ is a prime.
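The divisibility at the heart of this argument, $x_m \mid x_n$ whenever $q = 2m+1 \gt 1$ divides $2n+1$, is easy to sanity-check numerically. Since $\alpha^2$ and $\beta^2$ are the roots of $t^2-(4a+2)t+1=0$ (because $\alpha+\beta=2\sqrt{a+1}$ and $\alpha\beta=1$), the sequence satisfies $x_0=1$, $x_1=4a+1$, and $x_{n+1}=(4a+2)x_n-x_{n-1}$. Here is a small Python sketch (helper names are mine):

```python
# Sanity check of the divisibility x_m | x_n used above.
# From alpha*beta = 1 and alpha + beta = 2*sqrt(a+1), alpha^2 and beta^2
# are the roots of t^2 - (4a+2)t + 1 = 0, so x_n satisfies the recurrence
# x_0 = 1, x_1 = 4a + 1, x_{n+1} = (4a+2) x_n - x_{n-1}.

def x_seq(a, n_max):
    """Return [x_0, ..., x_{n_max}] for the given integer a."""
    xs = [1, 4 * a + 1]
    for _ in range(n_max - 1):
        xs.append((4 * a + 2) * xs[-1] - xs[-2])
    return xs

def check_divisibility(a, n_max):
    """Whenever q = 2m+1 > 1 divides 2n+1, the proof says x_m | x_n."""
    xs = x_seq(a, n_max)
    for n in range(1, n_max + 1):
        for q in range(3, 2 * n + 1, 2):
            if (2 * n + 1) % q == 0:
                m = (q - 1) // 2
                assert xs[n] % xs[m] == 0, (a, n, q)

for a in range(1, 6):
    check_divisibility(a, 30)
```

For $a=1$ the sequence starts $1, 5, 29, 169, 985, \ldots$; for example $2\cdot 4+1 = 9 = 3\cdot 3$ is composite and indeed $x_1 = 5$ divides $x_4 = 985 = 5\cdot 197$.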
Help with minimization problem
The quantity $x^2+y^2$ is minimized when $\sqrt{x^2+y^2}$ is minimized, and $\sqrt{x^2+y^2}$ is just the distance from the point $\langle x,y\rangle$ to the origin. Thus, you’re looking for the point on the line $3x-4y=12$ that is closest to the origin. Call this line $\ell$; the point closest to the origin is the point of intersection of $\ell$ with a line through the origin and perpendicular to $\ell$. The slope of $\ell$ is $\frac34$, so the slope of the perpendicular is $-\frac43$: you want the intersection of $\ell$ and the line $y=-\frac43x$.
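A quick exact-arithmetic check of this approach (variable names are mine):

```python
from fractions import Fraction as F

# Intersect the line 3x - 4y = 12 with its perpendicular through the
# origin, y = -(4/3)x.  Substituting: 3x - 4(-(4/3)x) = 12 => (25/3)x = 12.
x = F(12) * F(3, 25)
y = -F(4, 3) * x
assert 3 * x - 4 * y == 12          # the point lies on the line
min_value = x**2 + y**2
# Cross-check against the point-to-line distance 12/5:
assert min_value == F(12, 5) ** 2
print(x, y, min_value)              # 36/25 -48/25 144/25
```

So the closest point is $\left(\frac{36}{25}, -\frac{48}{25}\right)$ and the minimum of $x^2+y^2$ is $\frac{144}{25}$.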
Determine the expected number of neutrons in a randomly chosen atom.
The first part is correct. For the second part you'll want: $$E(X) = \sum_{x=24}^{28}{x\,P(x)}.$$ Basically, average the possible values, weighting each number of neutrons by its probability. As for which probability distribution this follows: it is a discrete probability distribution, and you're given all of the probabilities of interest explicitly in the table in your image.
Let Y be a random variable with $0\le Y\le 1.$
Since $Y(1-Y)\geq0$ we conclude that $$ \mathbb{E}Y-(\mathbb{E}Y)^2-\mathrm{var}(Y)=\mathbb{E}Y-\mathbb{E}Y^2=\mathbb{E}(Y-Y^2)\geq0 $$ Thus $$\mathrm{var}(Y)\leq(\mathbb{E}Y)(1-\mathbb{E}Y)\leq \frac{1}{4}$$ Now equality holds if and only if $\mathbb{E}Y=1/2$ and $Y=Y^2$ $P$-a.s., or equivalently $P(Y\notin\{0,1\})=0$. That is, equality holds exactly when $Y$ is a Bernoulli trial with success probability $1/2$.
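A quick empirical sanity check (a sketch; helper names are mine): any finite sample with values in $[0,1]$ is itself a distribution on $[0,1]$, so its mean $m$ and population variance $v$ must satisfy $v\leq m(1-m)\leq \frac14$, with equality in the fair Bernoulli case.

```python
import random

def mean_var(ys):
    """Mean and population variance of a finite sample."""
    m = sum(ys) / len(ys)
    v = sum((y - m) ** 2 for y in ys) / len(ys)
    return m, v

random.seed(0)
for _ in range(100):
    ys = [random.random() for _ in range(50)]
    m, v = mean_var(ys)
    assert v <= m * (1 - m) + 1e-12 <= 0.25 + 1e-12

# Equality: a fair Bernoulli variable (half the mass at 0, half at 1).
m, v = mean_var([0, 1])
assert (m, v) == (0.5, 0.25)
```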
Determining how to make a matrix have less pivots than columns (Example)
We can take a direct approach here by row reducing. \begin{align*} \left[\begin{array}{rrr} 2 & a & 0 \\ 1 & 2 & 1 \\ 0 & 1 & 2 \end{array}\right] \xrightarrow{R_2-\frac{1}{2}\cdot R_1\to R_2}\left[\begin{array}{rrr} 2 & a & 0 \\ 0 & -\frac{1}{2} \, a + 2 & 1 \\ 0 & 1 & 2 \end{array}\right] \\ \xrightarrow{R_2\leftrightarrow R_3}\left[\begin{array}{rrr} 2 & a & 0 \\ 0 & 1 & 2 \\ 0 & -\frac{1}{2} \, a + 2 & 1 \end{array}\right] \\ \xrightarrow{R_3-\left(\frac{1}{2}\,a-2\right)\cdot R_2\to R_3}\left[\begin{array}{rrr} 2 & a & 0 \\ 0 & 1 & 2 \\ 0 & 0 & a - 3 \end{array}\right] \end{align*} These reductions show that our matrix $A$ is row equivalent to $$ \left[\begin{array}{rrr} 2 & a & 0 \\ 0 & 1 & 2 \\ 0 & 0 & a - 3 \end{array}\right] $$ What is the rank of this new matrix?
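Equivalently, the matrix has fewer than three pivots exactly when its determinant vanishes; a short Python check (the helper is mine) confirms that $a=3$ is the only such value:

```python
def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion along row 0."""
    a, b, c = M[0]
    d, e, f = M[1]
    g, h, i = M[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# det = 2*(2*2 - 1*1) - a*(1*2 - 1*0) + 0 = 6 - 2a = -2(a - 3),
# so the matrix loses a pivot exactly when a = 3.
for a in range(-10, 11):
    M = [[2, a, 0], [1, 2, 1], [0, 1, 2]]
    assert det3(M) == 6 - 2 * a
    assert (det3(M) == 0) == (a == 3)
```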
Prove: from the definition of the limit if $a_n \to L$ then $|a_n| \to |L|$
The idea is to use the fact that $|a_n-L|$ can be made arbitrarily small to show $||a_n|-|L||$ can be made arbitrarily small. In this case, you can use that for each $x,y$, $$||x|-|y||\leq |x-y|$$ The second case is not always true: you need $L\neq 0$. Now note $$\left| {\frac{1}{{{a_n}}} - \frac{1}{L}} \right| = \left| {\frac{{L - {a_n}}}{{{a_n}L}}} \right| = \frac{{\left| {{a_n} - L} \right|}}{{\left| {{a_n}L} \right|}}$$ Since $a_n\to L$, there is an $n_0$ such that $|a_n-L|<|L|/2$ whenever $n\geq n_0$. Then $$|L|-|a_n|\leq |a_n-L|<|L|/2$$ so that $|a_n|>|L|/2$. Again, since $a_n\to L$, there is an $n_1$ for which $$|a_n-L|<\epsilon |L|^2/2$$ whenever $n\geq n_1$. Then, for $n\geq \max\{n_0,n_1\}$, we must have $$\left| {\frac{1}{{{a_n}}} - \frac{1}{L}} \right| = \frac{{\left| {{a_n} - L} \right|}}{{\left| {{a_n}L} \right|}} < \frac{2}{{\left| L \right|}}\frac{{\left| {{a_n} - L} \right|}}{{\left| L \right|}} < \frac{2}{{\left| L \right|}}\frac{{\varepsilon {{\left| L \right|}^2}}}{{2\left| L \right|}} = \varepsilon $$ Note there are some unusual steps involved, but you get the idea.
Calculate $\int_{-\infty}^\infty \frac{\cos{(kx)}}{\sqrt{x^2+a^2}} \,dx$
Let $F(k,a)$ be given by $$F(k,a)=\int_{-\infty}^\infty \frac{\cos(kx)}{\sqrt{x^2+a^2}}\,dx$$ Exploiting the even symmetry of the integrand and enforcing the substitution $x\to |a|x$, where we assume that $a\in \mathbb{R}$, yields $$\begin{align} F(k,a)&=2\int_0^\infty \frac{\cos(k|a| x)}{\sqrt{x^2+1}}\,dx\\\\ &=2\text{Re}\left(\int_0^\infty \frac{e^{ik|a| x}}{\sqrt{x^2+1}}\,dx\right)\tag 1\\\\ &=2\text{Re}\left(\int_0^\infty \frac{e^{-k|a| x}}{\sqrt{x^2-1}}\,dx\right)\tag 2\\\\ &=2\text{Re}\left(\underbrace{\int_0^1 \frac{e^{-k|a| x}}{\sqrt{x^2-1}}\,dx}_{\text{Purely Imaginary}}\right)+2\text{Re}\left(\underbrace{\int_1^\infty \frac{e^{-k|a| x}}{\sqrt{x^2-1}}\,dx}_{\text{Purely Real}}\right)\\\\ &=2\int_1^\infty \frac{e^{-k|a| x}}{\sqrt{x^2-1}}\,dx\\\\ &=2\int_0^\infty e^{-k|a| \cosh(x)}\,dx\\\\ &=2K_0(k|a|) \end{align}$$ where $K_0(x)$ is the modified Bessel Function of the Second Kind. NOTES: In arriving at $(1)$, we used $\cos(k|a|x)=\text{Re}(e^{ik|a|x})$. In going from $(1)$ to $(2)$, we chose branch cuts from $i$ to $i\infty$ and from $-i$ to $-i\infty$. Then, we applied Cauchy's Integral Theorem and deformed the real-line contour from $0$ to $R$ to the contour in the first quadrant comprised of (i) the line segment from $0$ to $i(1-\epsilon)$, (ii) the semi-circular contour centered at $i$ with radius $\epsilon$ from $i(1-\epsilon)$ to $i(1+\epsilon)$, (iii) the line segment from $i(1+\epsilon)$ to $iR$, and (iv) the quarter-circular arc from $iR$ to $R$. As $\epsilon \to 0$ and $R\to \infty$, the contributions from the integrals around the semi-circle and the quarter circle vanish. What remains is given by $(2)$. The OP edited the question and provided a specific contour over which to proceed. We now analyze the integral over the given contour. First note that the OP assumed that $a>0$. We will proceed under that assumption. 
If we choose to cut the plane with branch cuts from $ia$ to $-i\infty$ and $-ia$ to $-i\infty$, then on the positive real axis $\sqrt{z^2+a^2}=\sqrt{x^2+a^2}$, while on the negative real axis, $\sqrt{z^2+a^2}=-\sqrt{x^2+a^2}$. Hence, this reveals $$\begin{align} \int_{\text{BC}}\frac{e^{ikz}}{\sqrt{z^2+a^2}}\,dz+\int_{\text{FA}}\frac{e^{ikz}}{\sqrt{z^2+a^2}}\,dz&=\int_{-R}^\epsilon\frac{e^{ikx}}{-\sqrt{x^2+a^2}}\,dx+\int_{\epsilon}^R\frac{e^{ikx}}{\sqrt{x^2+a^2}}\,dx\\\\ &=2i\int_\epsilon^R \frac{\sin(kx)}{\sqrt{x^2+a^2}}\,dx\tag 3 \end{align}$$ The sum of the integrals along $CD$ and $EF$ is given by $$\begin{align} \int_{\text{CD}}\frac{e^{ikz}}{\sqrt{z^2+a^2}}\,dz+\int_{\text{EF}}\frac{e^{ikz}}{\sqrt{z^2+a^2}}\,dz&=\int_{0}^{a-\epsilon}\frac{e^{-ky}}{-\sqrt{-y^2+a^2}}\,i\,dy+\int_{a-\epsilon}^0\frac{e^{-ky}}{\sqrt{-y^2+a^2}}\,i\,dy\\\\ &=-2i\int_0^{a-\epsilon} \frac{e^{-ky}}{\sqrt{a^2-y^2}}\,dy\tag 4 \end{align}$$ Using $(3)$ and $(4)$, and letting $R\to \infty$ and $\epsilon\to 0$, reveals (after taking Imaginary Parts) $$\begin{align} \int_0^\infty \frac{\sin(kx)}{\sqrt{x^2+a^2}}\,dx&=\int_0^a \frac{e^{-kx}}{\sqrt{a^2-x^2}}\,dx\\\\ &=\frac \pi 2\left(I_0(k|a|)-L_0(k|a|)\right) \end{align}$$ for $a>0$, where $I_0(x)$ and $L_0(x)$ are the Modified Bessel Function of the First Kind and Zero Order and the Modified Struve Function of Zero Order, respectively. Note that the choice of contour as given in the OP does not provide any insight regarding the integral of interest $\int_{-\infty}^\infty\frac{\cos(kx)}{\sqrt{x^2+a^2}}\,dx$.
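The substitution $x=\cosh(u)$ in the penultimate step of the first derivation can be sanity-checked numerically with the standard library alone (helper names are mine; to tame the integrable singularity at $x=1$ in the first form I substitute $x=1+s^2$):

```python
import math

def simpson(f, lo, hi, n=4000):
    """Composite Simpson rule on [lo, hi] with n (even) subintervals."""
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    s += 4 * sum(f(lo + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(lo + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

t = 1.0  # plays the role of k*|a|

# Form 1: int_1^inf e^{-t x}/sqrt(x^2 - 1) dx, with x = 1 + s^2 so that
# the integrand becomes the smooth 2 e^{-t(1+s^2)} / sqrt(2 + s^2).
i1 = simpson(lambda s: 2 * math.exp(-t * (1 + s * s)) / math.sqrt(2 + s * s),
             0.0, 10.0)

# Form 2: int_0^inf e^{-t cosh(u)} du, i.e. K_0(t).
i2 = simpson(lambda u: math.exp(-t * math.cosh(u)), 0.0, 20.0)

assert abs(i1 - i2) < 1e-7
assert abs(i2 - 0.4210244382) < 1e-6   # K_0(1), for reference
```

Both truncation points are far into the exponentially small tails, so plain Simpson quadrature agrees with $K_0(1)$ to well beyond the tolerances used here.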
Why the derivative of the logarithm of a theta function is not an elliptic function?
From $$\vartheta(z+\tau) =e^{-\pi i\tau-2\pi iz}\vartheta(z)$$ you get $$\vartheta'(z+\tau) =e^{-\pi i\tau-2\pi iz}\vartheta'(z) -2\pi i e^{-\pi i\tau-2\pi iz}\vartheta(z)$$ and so $$\frac{\vartheta'(z+\tau)}{\vartheta(z+\tau)} =\frac{\vartheta'(z)}{{\vartheta(z)}} -2\pi i.$$ Therefore $\vartheta'/\vartheta$ is not an elliptic function but $(\vartheta'/\vartheta)'$ is (essentially the Weierstrass $\wp$-function).
Dirichlet's Approximation Theorem for integer
If $\alpha$ is an integer, you can just take $$\begin{cases} a=1 \\ b=\alpha.\end{cases}$$ Then you have $$\vert a\alpha -b\vert =0 <\frac 1n$$ for all $n\geqslant 1$.
sphere bundles isomorphic to vector bundle.
Apparently, in this context you have a sphere bundle, say $M$, with the orthogonal group as structure group. This is a very special case of a sphere bundle. It means that you have a family of trivializations $\phi_i$ of your bundle over open sets $U_i$, say $\phi _i : U_i\times S \to M$, such that the change of charts over $U_i\cap U_j$ is of the form $\phi _{i,j} (u, v)= O_{i,j}(u).v$, where $O_{i,j}: U_i\cap U_j \to O(n)$ is a certain (continuous or smooth) function. Bredon says that you can use the same functions to define an $\bf R^n$ bundle $E$. First define its trivialization over $U_i$, say $E_i$, by the formula $\Phi _i : U_i\times {\bf R}^n = E _i$. Then $E$ is the union of the $E_i=U_i\times {\bf R}^n$, glued over $U_i \cap U_j$ by the maps $\phi _{i,j} (u, v)= O_{i,j}(u).v$. Another approach is through the theory of principal bundles. To a sphere bundle with $O(n)$ as structure group you can associate its frame bundle $P$, a principal $O(n)$ bundle, which is the bundle whose fiber at a point is the set of isometries between the fiber over the point and the standard sphere. Once this frame bundle is defined, you can construct its associated euclidean bundle by the formula $E= P\times _{O(n)} {\bf R}^n$
Phase plane diagram for system of non-linear odes
The Jacobian is given by $$J[x, y] =\begin{bmatrix} \dfrac{\partial x'}{\partial x} & \dfrac{\partial x'}{\partial y} \\ \dfrac{\partial y'}{\partial x} & \dfrac{\partial y'}{\partial y} \end{bmatrix}=\begin{bmatrix} -2 & - 1 \\ y & x \end{bmatrix}$$ When we evaluate this at the first critical (equilibrium) point, $(x, y) = (0, 2)$, we have $$J[x, y] = \begin{bmatrix} -2 & - 1 \\ 2 & 0 \end{bmatrix} \implies \lambda_{1,2} = -1 \pm i \implies \mbox{Stable Spiral}.$$ When we evaluate this at the second critical point, $(x, y) = (1,0)$, we have $$J[x, y] = \begin{bmatrix} -2 & - 1 \\ 0 & 1 \end{bmatrix} \implies \lambda_{1,2} = -2, 1 \implies \mbox{Unstable Saddle}.$$ As noted in comments, a saddle is always an unstable equilibrium. In this second matrix, we find the roots of the characteristic polynomial using $|A-\lambda I| = 0 \implies \lambda ^2+\lambda -2 = (\lambda -1) (\lambda +2) = 0 \implies \lambda_1 = -2, \lambda_2 = 1$. The phase portrait is correct. Here is another variant Update In the first matrix, the characteristic polynomial is given by: $$\lambda ^2+2 \lambda +2 = (\lambda + (1 - i)) (\lambda + (1 + i)) \implies \lambda_{1,2} = -1 \pm i$$
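The characteristic polynomials quoted above, $\lambda^2+2\lambda+2$ and $\lambda^2+\lambda-2$, can be checked with a few lines of Python (`quad_roots` is my own helper):

```python
import cmath

def quad_roots(b, c):
    """Roots of lambda^2 + b*lambda + c = 0 via the quadratic formula."""
    disc = cmath.sqrt(b * b - 4 * c)
    return (-b - disc) / 2, (-b + disc) / 2

# lambda^2 + 2*lambda + 2 = 0  ->  -1 -/+ i: negative real part with a
# nonzero imaginary part, i.e. a stable spiral.
r1, r2 = quad_roots(2, 2)
assert abs(r1 - (-1 - 1j)) < 1e-12 and abs(r2 - (-1 + 1j)) < 1e-12

# lambda^2 + lambda - 2 = 0  ->  -2 and 1: real eigenvalues of opposite
# sign, i.e. a saddle, which is always unstable.
r1, r2 = quad_roots(1, -2)
assert abs(r1 - (-2)) < 1e-12 and abs(r2 - 1) < 1e-12
```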
Calculus: why do we define rate of change as $dy/dx$?
Besides thinking of derivatives as rates of change, one can think of them as "the best linear approximation". Given any function $f$ depending on a variable $x$ we may inquire what the best linear approximation of the function around some fixed value $x_0$ is. The answer is that $f(x_0+\epsilon) \approx f(x_0)+\epsilon f'(x_0)$ where $f'(x_0)$ is the derivative of $f$ evaluated at $x_0$, that is, the slope of the function at that point. This interpretation generalizes easily to functions of several variables. When thinking of rates of change, imagine a rectangle one of whose sides has a fixed length $l$ while the other depends on time. Suppose the other side depends on time via $b(t)=ct$ where $c$ has units of velocity, that is to say: the $b$ side of the rectangle moves with velocity $c$, making the area of the rectangle larger over time. The area as a function of time is $A(t)=lb(t)=lct$. The rate of change has units of area/time. It is $\frac{dA}{dt}(t)=lc$. What this means is that if you have an area $A_0$ at some time $t_0$ and wait a very small amount of time $\delta t$, your area increases, to a very good approximation (which gets better the smaller $\delta t$ is), to the value $A(t_0+\delta t)\approx A_0+ \delta t\cdot \frac{dA}{dt}(t_0)$. Surely you find this intuitive, as it is basically the same as velocity. The above example justifies identifying "the absolute change of a function due to a small change of the independent variable" with "the rate of change times the small change of the independent variable". These two things are almost equal, and the difference between them becomes smaller as we make the change in the variable smaller. The difference is also small if the function "looks linear" at the initial value of the variable as opposed to "fluctuating wildly". In fact the area of a rectangle is a linear function of one side's length, and the approximation is exactly true in this case. 
This is the same as "the rate of change being constant", or equivalently "the acceleration being zero". Now consider what it means to say how much the area of a rectangle changes if we change one of the sides a little bit. The initial area being $A_0$ and increasing one side by $\delta b $ the area increases by a small rectangle $\delta b\cdot l $. Compare the total area after the increase $A (b+\delta b)=A_0+ \delta b\cdot l=A (b)+\delta b \cdot \frac {dA}{db}(b) $ to the formulas above and maybe you will be convinced that the definition of rate of change as a ratio is correct. The approximation is exactly true here because area is a linear function as discussed above.
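A tiny numeric illustration of that last point (all numbers here are made up): for the rectangle the first-order formula is exact, while for a nonlinear function such as $f(x)=x^2$ the error is of order $\delta^2$.

```python
# Rectangle area A(b) = l*b is linear in b, so the first-order
# approximation A(b + db) ~ A(b) + db * l is exact; for the nonlinear
# f(x) = x^2 the error shrinks like db^2.
l, b, db = 3.0, 2.0, 1e-4

area_exact = l * (b + db)
area_linear = l * b + db * l
assert abs(area_exact - area_linear) < 1e-12   # exact up to rounding

f = lambda x: x * x
err = abs(f(b + db) - (f(b) + db * (2 * b)))   # f'(b) = 2b
assert err < 1e-7                               # the error is db^2 = 1e-8
```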
Subspaces related problem from hoffman kunze
Yes your proof is correct and in this case we say that $V$ is a direct sum of $W_1$ and $W_2$ and we write $$V=W_1\oplus W_2$$
Finding a function based on its Derivative without Integrating
You are correct in saying there is a large number of maps $f$ and points $a$ satisfying this. Take any real number $a$ and any map $f$ differentiable at $a$; then this equals $g'(a)$ where $\forall x \in \mathbb{R}, g(x) = f(x) +(\frac{1}{6} - f'(a))x$. This exercise is just there to remind you that some limits are better calculated when seen as derivatives at some point. One typical example is $\lim \limits_{x \to 0} \frac{\sin(x)}{x} = 1$, though depending on how you define $\sin$ in the first place, this one might be trivial.
Help in understanding a math question
This is how the points are situated: And this is the claim you have to show:
Question of ultrafilters/prime ideals in finite Boolean algebras
Every ultrafilter on a finite Boolean algebra is fixed (principal) and therefore contains an atom; there is exactly one ultrafilter for each atom. The ultrafilters on a Boolean algebra are precisely the prime filters, so the prime ideals are the complements of the ultrafilters. Thus, both statements are correct.
Another way to evaluate the nested radical $x=\sqrt{2+{\sqrt{2+\sqrt{2+\ldots}}}}$
Hint Prove by induction that $$\sqrt{2+{\sqrt{2+\sqrt{2+\ldots\sqrt{2}}}}}=2 \cos \left(\frac{\pi}{2^{n+1}}\right)$$ where $n$ is the number of roots. P.S. If I am not mistaken, this can be interpreted geometrically as follows: If $A_1A_2\ldots A_{2^n}$ is a regular polygon with $A_1A_2=1$ then $$A_1A_3=\sqrt{2+{\sqrt{2+\sqrt{2+\ldots\sqrt{2}}}}}$$ Now, as $n$ increases, the angle at $A_2$ gets closer and closer to $180^\circ$, and hence $A_1A_3$ gets closer and closer to $A_1A_2+A_2A_3$.
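Before proving the hint by induction, it is easy to check numerically (a small sketch; the function name is mine):

```python
import math

def nested_radical(n):
    """sqrt(2 + sqrt(2 + ... + sqrt(2))) with n nested square roots."""
    v = math.sqrt(2)
    for _ in range(n - 1):
        v = math.sqrt(2 + v)
    return v

# The closed form 2*cos(pi / 2^(n+1)) matches for every depth tested:
for n in range(1, 25):
    assert abs(nested_radical(n) - 2 * math.cos(math.pi / 2 ** (n + 1))) < 1e-12

# The limit x = sqrt(2 + sqrt(2 + ...)) is therefore 2*cos(0) = 2.
assert abs(nested_radical(40) - 2) < 1e-12
```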
Is it possible to compare elements in a matrix without converting a matrix equation to a system of equations?
You can always take a row of the first matrix and a column of the second matrix and compute their dot product. In fact it is all a matter of syntax. For example, in your equation, $b$ would become $b = B_{1,\cdot}^T \cdot C_{\cdot,2}$. By the way, it is common to index matrix entries as $b_{i,j}$, where $i$ is the row number and $j$ is the column number; it makes indexing much simpler than using the alphabet.
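In plain Python (a small sketch; names are mine), entry $(i,j)$ of $BC$ is exactly the dot product of row $i$ of $B$ with column $j$ of $C$:

```python
def dot(u, v):
    """Dot product of two equal-length sequences."""
    return sum(x * y for x, y in zip(u, v))

def matmul(B, C):
    """Matrix product via row-by-column dot products."""
    cols = list(zip(*C))                      # columns of C as tuples
    return [[dot(row, col) for col in cols] for row in B]

B = [[1, 2], [3, 4]]
C = [[5, 6], [7, 8]]
BC = matmul(B, C)
# Entry (i, j) of B*C is row i of B dotted with column j of C:
for i in range(2):
    for j in range(2):
        assert BC[i][j] == dot(B[i], [C[k][j] for k in range(2)])
assert BC == [[19, 22], [43, 50]]
```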
Convergence Proof: Difference within Function of Epsilon
The if case Assumptions $\lim_{\epsilon' \to 0} \phi(\epsilon') = 0$, i.e. for any $\epsilon>0$ there exists $\epsilon''>0$ such that for $0<\epsilon'<\epsilon''$ we have $0 \leq \phi(\epsilon')<\epsilon$ for any $\epsilon'>0$ there exists $N$ such that for $n>N$ we have $0 \leq |a_n−a| \leq \phi(\epsilon')$ To show $\lim_{n \to \infty} a_n = a$, i.e. for any $\epsilon>0$ there exists $N$ such that for $n>N$ we have $|a_n−a| < \epsilon$ Proof Take $\epsilon>0$. By assumption 1 there exists $\epsilon''>0$ such that for all $0 < \epsilon' < \epsilon''$ we have $0 \leq \phi(\epsilon') < \epsilon$. For such $\epsilon'',$ let $\epsilon'=\epsilon''/2$. Then $0 \leq \phi(\epsilon') < \epsilon.$ By assumption 2 there now exists $N$ such that for $n > N$ we have $0 \leq |a_n-a| \leq \phi(\epsilon')$. But by the previous paragraph we have $\phi(\epsilon') < \epsilon$ so $0 \leq |a_n-a| < \epsilon$. Thus, for any $\epsilon>0$ there exists $N$ such that for $n > N$ we have $|a_n-a| < \epsilon$.
Uniformly convergent series
Yes, you are correct. Since $e^{-|x|}\leq 1$ for all real number $x$, it follows that the series $$\sum_{n=1}^{\infty} \frac{e^{-\vert{x}\vert}}{n^3}$$ is uniformly convergent in $\mathbb{R}$.
What is Equipotent relation?
The function isn't onto since there are no $a$, $b$ such that $g(a,b)=3$. You can see this by considering a few cases: $g(0,1)=2$, $g(1,0)=1$, $g(0,0)=0$, and $g(1,1)=5$. However, if $g$ is injective (each value is attained by at most one pair $(a,b)$), then it's possible to define a bijective mapping $f$ from the range of $g$ to $\Bbb N$ such that $f \circ g$ is onto. (I don't know whether $g$ actually is injective.) Regardless of whether this function $g$ "works" or not, we can make a bijective map from $\Bbb N\times\Bbb N$ to $\Bbb N$ using the same trick as the one for proving that the rational numbers are countable. The general result is that the cross-product of two countable sets is countable.
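One standard way to make that trick explicit is the Cantor pairing function (my choice for illustration; it is evidently not the post's $g$, since here $\operatorname{pair}(1,1)=4$, not $5$), which is a bijection $\Bbb N\times\Bbb N\to\Bbb N$:

```python
import math

def pair(a, b):
    """Cantor pairing: walk the diagonals a+b = 0, 1, 2, ..."""
    return (a + b) * (a + b + 1) // 2 + b

def unpair(z):
    """Inverse of the Cantor pairing function."""
    w = (math.isqrt(8 * z + 1) - 1) // 2      # diagonal index a+b
    b = z - w * (w + 1) // 2
    return w - b, b

# Injective on a grid, and unpair inverts pair:
seen = set()
for a in range(50):
    for b in range(50):
        z = pair(a, b)
        assert z not in seen
        seen.add(z)
        assert unpair(z) == (a, b)

# Surjective onto initial segments: the first 10 diagonals hit 0..54 once each.
assert sorted(pair(a, b) for a in range(10) for b in range(10) if a + b < 10) \
       == list(range(55))
```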
Pairing possibilities in a game with each board has 3 peoples
When selecting the first board you have $\frac{9!}{6!\,3!}$ choices, because you have 9 people to choose from and will only select 3: there are $\frac{9!}{6!}$ ordered ways to pick them, and since (1,2,3), (2,1,3), (2,3,1), (3,2,1), (3,1,2), (1,3,2) are all the same group, we divide by $3!$. Then you have only 6 people left to choose from, so the second board gives $\frac{6!}{3!\,3!}$ for the same reason. The 3 people left then form the last group. So the answer is $\frac{9!}{6!3!}\cdot\frac{6!}{3!3!}$.
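A quick check of the count with `math.comb`:

```python
from math import comb, factorial

# Choose 3 of 9 for the first board, then 3 of the remaining 6 for the
# second; the last 3 people form the third board automatically.
first = factorial(9) // (factorial(6) * factorial(3))
second = factorial(6) // (factorial(3) * factorial(3))
ways = comb(9, 3) * comb(6, 3)
assert ways == first * second == 1680
# (If the three boards were treated as indistinguishable, one would
# further divide by 3!, giving 280.)
```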
How to find $n$ in this equation? (involving modulus)
You are looking to solve $7n\equiv k \pmod {24}$ for $k=10,11,12,13$. Just like in the reals you would like to multiply by the inverse of $7$. As $7$ is coprime to $24$ you can find it. You can generally use the Euclidean algorithm, but here we can do it by inspection. We note that $7\cdot 3 \equiv -3 \pmod {24}$, so $1=7-2\cdot 3$ gives $7\cdot (1+2\cdot 3)=7\cdot 7 = 49 \equiv 1 \pmod {24}$ and $7$ is its own inverse. Then $$7n \equiv 10 \pmod {24}\\n\equiv 7\cdot {10} \pmod {24}\\22 \equiv n \pmod {24}$$ and the others are similar.
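In Python (3.8+), `pow` with exponent $-1$ computes modular inverses directly, which confirms both the inverse and the four solutions:

```python
inv7 = pow(7, -1, 24)       # modular inverse of 7 mod 24 (Python 3.8+)
assert inv7 == 7            # 7 is its own inverse, as found above
for k in (10, 11, 12, 13):
    n = (inv7 * k) % 24
    assert (7 * n) % 24 == k
    print(k, n)             # 10 22 / 11 5 / 12 12 / 13 19
```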
Composition of functions which is one-to-one.
The correct answer is 2. Assume that $g$ is not 1-1. Then there exist distinct $x,y \in X$ such that $g(x) = g(y)$. But then also $(f\circ g)(x) = f(g(x)) = f(g(y)) = (f\circ g)(y)$, so $f\circ g$ is not 1-1 either. By contraposition, if $f\circ g$ is 1-1, then $g$ must be 1-1.
Rank of endomorphism
Let $V_k$ be the image of $f^k$. This gives a decreasing chain of subspaces $$ V \overset{f}{\to} V_1 \overset{f}{\to} V_2 \overset{f}{\to} \cdots \overset{f}{\to} V_k \overset{f}{\to} V_{k+1} \overset{f}{\to} V_{k+2} \overset{f}{\to} \cdots $$ By definition, $r_k = \dim V_k$, and the restriction $f \colon V_k \to V_{k+1}$ is surjective. So by the Rank-Nullity Theorem, the difference $r_k - r_{k+1}$ is the dimension of the kernel of $f$ (restricted to $V_k$). And so also $r_{k+1}-r_{k+2}$ is the dimension of the kernel of $f$, restricted to $V_{k+1}$. But since $V_{k+1} \subseteq V_k$, the kernel of $f$ on $V_{k+1}$ is a subset of the kernel of $f$ on $V_k$. This gives the inequality.
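A concrete illustration (a sketch; the helpers are mine, using exact rational arithmetic): take $f$ nilpotent with Jordan blocks of sizes $3$ and $2$ on a $5$-dimensional space. The ranks of $f^0, f^1, f^2, \dots$ are $5,3,1,0,0$, and the differences $r_k - r_{k+1}$ are non-increasing, as the argument predicts.

```python
from fractions import Fraction

def rank(M):
    """Rank by exact Gauss elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    r, rows = 0, len(M)
    for c in range(len(M[0])):
        piv = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, rows):
            if M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# Nilpotent f with Jordan blocks of sizes 3 and 2.
N = [[0, 1, 0, 0, 0],
     [0, 0, 1, 0, 0],
     [0, 0, 0, 0, 0],
     [0, 0, 0, 0, 1],
     [0, 0, 0, 0, 0]]

P = [[1 if i == j else 0 for j in range(5)] for i in range(5)]  # f^0 = id
ranks = []
for _ in range(5):
    ranks.append(rank(P))
    P = matmul(P, N)
assert ranks == [5, 3, 1, 0, 0]
diffs = [ranks[k] - ranks[k + 1] for k in range(4)]
assert all(diffs[k] >= diffs[k + 1] for k in range(3))  # differences shrink
```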
Comparing 2 possibly competing definitions of maximal subgroups
The "definition with symbols" says that $H$ is a proper subgroup of $G$, and if $K$ is a subgroup containing $H$, it is either $H$ or $G$. Now, let's think about these two cases. If $K=G$, then it isn't proper. So another way s reading the "definition with symbols" is that any subgroup containing $H$ is either $H$, or isn't proper. That is, the only proper subgroup containing $H$ is $H$ itself. So there is no proper subgroup containing $H$. But this is exactly what the "definition without symbols" says. Going the other way, if there is no proper subgroup containing $H$, that means for any $K$ with $H \leq K < G$ we cannot have $H < K$. This leaves only $K = H$ as an option. I hope this helps ^_^
Derive KL divergence minimization
As he states in the video, "The expectation of $q$ goes away on this last term here because there's no $q$ here." The expectation $\mathbb{E}_q$ is with respect to the randomness in $z$ which follows the distribution $q$. Since $\log p(x)$ has no $z$, it is deterministic/constant, so the expectation can be dropped. Edit for clarification: throughout the derivation, $x$ is constant. However, $z$ is a random variable. In the derivation above, the density of $z$ is $q$. [Note that it is important to clarify this because $z$ can follow other distributions. For example, in the latent variable model the distribution of $z$ is $p$, not $q$.] So, $\mathbb{E}_q[z]$ is just the expectation of $z$ when it follows the distribution $q$. More generally, for any function $f$, $\mathbb{E}_q[f(z)]$ is the expectation of $f(z)$ when $z$ follows the distribution $q$. For example, you begin with $\mathbb{E}_q[\log p(z \mid x)]$ which is a special case where $f(z):= \log p(z \mid x)$. Now, if $c$ is some constant (deterministic, does not depend on the random variable $z$), then $\mathbb{E}_q[c]=c$. This is the case here with $c=\log p(x)$; since $x$ is constant, $\log p(x)$ is a constant.
Real Analysis, Folland problem 3.5.27 Functions of Bounded Variation
Hints: b) is quite easy so no hint (except telling you it's easy is a hint). c) $\sum_{1}^{n}|F(x_j) - F(x_{j-1})| = \sum_{1}^{n}|f'(c_j)(x_j - x_{j-1})|.$ d) Consider $\sum_{k=1}^{n} |F (\pi/2 + 2k\pi) - F (2k\pi)|.$ e) Let $y_k=1/(\pi/2 + 2k\pi), x_k = 1/( 2k\pi).$ Consider $\sum_{k=1}^{n} |F( y_k) - F(x_k)|.$
How is the number of distinct ordered bases be $ > p^n$?
A basis of an $n$-dimensional vector space is a choice of a particular $n$-tuple of elements satisfying some condition. A basis lives in $V^n$. It would be surprising if the number of bases was larger than the cardinality of $V^n$, not that of $V$.
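For $n = 2$ this is easy to verify by brute force (a sketch; the helper name is mine): a pair of vectors in $(\Bbb Z/p)^2$ is a basis exactly when its determinant is nonzero mod $p$, and the count $(p^2-1)(p^2-p)$ indeed exceeds $p^2$.

```python
from itertools import product

def ordered_bases_2d(p):
    """Count ordered bases (u, v) of the plane over Z/p by brute force."""
    vectors = list(product(range(p), repeat=2))
    return sum(1 for u in vectors for v in vectors
               if (u[0] * v[1] - u[1] * v[0]) % p != 0)

for p in (2, 3, 5):
    count = ordered_bases_2d(p)
    assert count == (p**2 - 1) * (p**2 - p)   # standard formula for n = 2
    assert count > p**2                        # more bases than vectors
print(ordered_bases_2d(2))  # 6, while |V| = 2^2 = 4
```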