Show that $((N_t-t)^2-t)_{t \geq 0}$ is a martingale for a Poisson process $(N_t)_{t \geq 0}$
Hint: Write $$X_t^2 = \bigg( \big[ (N_t-N_s)-(t-s) \big]+ \big[ N_s-s \big] \bigg)^2$$ for $s \leq t$ and expand the square. In order to calculate the conditional expectation $\mathbb{E}(X_t^2 \mid \mathcal{F}_s)$, consider the terms separately and use that $(N_t-N_s)-(t-s)$ is independent of $\mathcal{F}_s$ and that $N_t-N_s \sim N_{t-s}$.
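For intuition, here is a quick Monte Carlo sanity check (not a proof, and it assumes unit rate) of the related fact that $\mathbb{E}[(N_t-t)^2-t]=0$ for every fixed $t$, since $\operatorname{Var}(N_t)=t$:

```python
# Monte Carlo sketch: for a rate-1 Poisson process, E[(N_t - t)^2 - t] = 0 for every t,
# because Var(N_t) = t.  This only checks the unconditional expectation.
import numpy as np

rng = np.random.default_rng(0)
for t in [0.5, 1.0, 3.0, 10.0]:
    N_t = rng.poisson(t, size=200_000)     # samples of N_t ~ Poisson(t)
    print(t, np.mean((N_t - t)**2 - t))    # should be close to 0 for each t
```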
When is a convex polygon inscribable?
It seems you want "the diameter is the diameter of the minimum bounding circle". Take a long skinny rectangle, which is inscribable. Now make a pentagon by bending one long side out just a little bit. It is no longer inscribable: the circumscribed circle (of the original rectangle) has not changed. Added: a figure (omitted here) illustrated this. The long rectangle and the pentagon have the same diameter, but the pentagon is not inscribable.
Solve $\frac{1}{x}+\frac{1}{y}= \frac{1}{2007}$
Write the equation as $$(x+y)\cdot 2007=xy$$ $$xy-2007x-2007y+2007^2=2007^2$$ $$(x-2007)(y-2007)=2007^2$$ Also, $2007=3^2\cdot 223$. Can you continue?
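If you want to see the positive solutions explicitly, here is a small brute-force sketch that uses the factorization above, one solution per divisor of $2007^2$:

```python
# Enumerate positive integer solutions of 1/x + 1/y = 1/2007 from (x-2007)(y-2007) = 2007^2.
from fractions import Fraction

N = 2007
target = N * N
solutions = [(N + d, N + target // d) for d in range(1, target + 1) if target % d == 0]
print(len(solutions), "positive solutions")   # one per divisor of 2007^2
assert all(Fraction(1, x) + Fraction(1, y) == Fraction(1, N) for x, y in solutions)
```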
Degree zero in the localization. Does it need to be homogeneous
Yes, by definition $S_{(f)}$ consists of fractions $\frac{a}{b}$ where $a$ and $b$ are homogeneous of equal degree (with $b$ a power of $f$). See, for example, the parenthetical remark following the first displayed equation in Proj as a scheme. Or, better yet, any modern text on algebraic geometry or scheme theory like Hartshorne, or maybe Daniel Murfet's notes here.
# of people that go to a clinic follows a poisson distribution of 4 per day......
Already solved in the Comments, but for the permanent record on this site--and the chance to illustrate a little more about similar problems--here are two direct numerical answers. 1) Let $X \sim Pois(\lambda = 4).$ Then $P(X = 0) = e^{-\lambda}\lambda^0/0! = e^{-4} = 0.01831564$ is the probability of no patients showing on any one day. Because days are independent, the probability of no patients on 5 successive days is (as per @Michael) $[P(X=0)]^5 = 2.0612 \times 10^{-9}.$ 2) Let $Y$ be the number of patients in 5 days. Then $Y \sim Pois(\lambda_Y = 20).$ So the probability of no patients in 5 days is $P(Y = 0) = e^{-20} = 2.0612 \times 10^{-9}.$ In working a Poisson problem, the first thing to do is to make sure the rate is adjusted (if necessary) to match the domain in which the random events occur. In (2) the adjusted rate for five days is $\lambda_Y = 20$. It is worthwhile noting that the two methods are about equally easy for the question of getting no patients in five days. But if the question were something like the probability of getting three or fewer patients in five days, then method (1) becomes a little awkward and method (2) remains fairly simple: $$P(Y \le 3) = e^{-20}(1 + 20 + 20^2/2 + 20^3/6) = 3.2037 \times 10^{-6}$$ or in R software ppois(3, 20) ## 3.20372e-06 Still 'a very low' probability. However, $P(Y \le 25) = 0.8878$ is a reasonable question to ask in practice, with a larger answer, and not difficult to compute with software.
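For convenience, here is the same computation as a short Python sketch using scipy.stats; the values should match those above:

```python
# Numerical check of both methods with scipy.stats (a sketch).
from scipy.stats import poisson

p_zero_one_day = poisson.pmf(0, 4)      # e^{-4}
print(p_zero_one_day**5)                # method (1): ~2.0612e-09
print(poisson.pmf(0, 20))               # method (2): e^{-20}, same value
print(poisson.cdf(3, 20))               # P(Y <= 3) ~ 3.2037e-06
print(poisson.cdf(25, 20))              # P(Y <= 25) ~ 0.8878
```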
Linear transformations with changing bases (given)
That's the way I would do it. Someone please correct me if I do something incorrect. I'm still studying Linear Algebra. $[T1]_{b}^{\gamma}$ is a linear transformation that takes a vector in the standard basis ($\gamma$) for $R^2$, applies the transformation, and then returns it in the basis $b$. Starting with that, we can affirm that: $$ [T1]_{b}^{\gamma} = [I]_{b}^{\gamma}\cdot [T1]_{\gamma}^{\gamma} $$ $[I]_{b}^{\gamma}$ is a change of basis from $\gamma$ to $b$. Now, all we need to do is get the matrix that represents $[I]_{b}^{\gamma}$ and multiply it by the matrix of the transformation $[T1]_{\gamma}^{\gamma}$. The column vectors of the matrix of transformation $[T1]_{\gamma}^{\gamma}$ are the transformed vectors that compose the basis $\gamma$: $$ (1,0) \rightarrow (1,2\cdot 1 + 4\cdot 0) = (1,2)\\ (0,1) \rightarrow (0,2\cdot 0 + 4\cdot 1) = (0,4)\\ \left[\begin{matrix} 1 & 0 \\ 2 & 4 \end{matrix}\right] $$ Now for the matrix of change of basis $\gamma$ to $b$, we need to decompose the $\gamma$ vectors in terms of the $b$ vectors: $$ (1,0) = a(1,2)+b(0,1) \rightarrow a = 1 , b = -2\\ (0,1) = c(1,2)+d(0,1) \rightarrow c = 0 , d = 1\\ \left[\begin{matrix} 1 & 0 \\ -2 & 1 \end{matrix}\right] $$ Now multiplying the matrices, we get: $$ \left[\begin{matrix} 1 & 0 \\ -2 & 1 \end{matrix}\right] \cdot \left[\begin{matrix} 1 & 0 \\ 2 & 4 \end{matrix}\right] = \left[\begin{matrix} 1 & 0 \\ 0 & 4 \end{matrix}\right] $$ Now, to check that everything is correct, let's pick a random vector from $R^2$ and describe it in terms of $\gamma$: $$ V_{\gamma} = (a,b) $$ Then we apply the transformation $[T1]_{b}^{\gamma}$: $$ \left[\begin{matrix} 1 & 0 \\ 0 & 4 \end{matrix}\right]\cdot \left[\begin{matrix} a \\ b \end{matrix}\right] = \left[\begin{matrix} a \\ 4b \end{matrix}\right] $$ Now we just need to check that applying $[T1]_{\gamma}^{\gamma}$ and then $[I]_{b}^{\gamma}$ gives us the same result: $$ \left[\begin{matrix} 1 & 0 \\ 2 & 4 \end{matrix}\right] \cdot \left[\begin{matrix} a \\ b \end{matrix}\right] = \left[\begin{matrix} a \\ 2a + 4b \end{matrix}\right]\\ \left[\begin{matrix} 1 & 0 \\ -2 & 1 \end{matrix}\right] \cdot \left[\begin{matrix} a \\ 2a + 4b \end{matrix}\right] = \left[\begin{matrix} a \\ 4b \end{matrix}\right] $$ So it's indeed correct. I really don't know if I was clear enough, or if it's 100% correct. I hope to have contributed a little to your learning process. Feel free to leave a comment if you have any doubts on what I've done... Let's share some knowledge :) Thanks!
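Here is a short NumPy sketch of the same check; the matrices are the ones computed above and the test vector is arbitrary:

```python
# Verify the change-of-basis computation numerically.
import numpy as np

T_gamma = np.array([[1, 0],
                    [2, 4]])        # [T1]_gamma^gamma
I_b_gamma = np.array([[1, 0],
                      [-2, 1]])     # change of basis gamma -> b
print(I_b_gamma @ T_gamma)          # expected [[1, 0], [0, 4]]

v = np.array([3.0, 5.0])            # an arbitrary vector in gamma-coordinates
print(I_b_gamma @ (T_gamma @ v))    # transform, then change basis
print((I_b_gamma @ T_gamma) @ v)    # same result with the combined matrix
```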
Does the sum of a converging and diverging series converge or diverge?
Let $\sum_{n=1}^{\infty} a_n$ be convergent and $\sum_{n=1}^{\infty} b_n$ be a series. Assume that $\sum_{n=1}^{\infty} (a_n + b_n)$ converges; then $$\sum_{n=1}^{\infty} b_n = \sum_{n=1}^{\infty} (a_n + b_n) - \sum_{n=1}^{\infty} a_n$$ and so $\sum_{n=1}^{\infty} b_n$ is the difference of two convergent series, so it must converge. So we have shown that $\sum_{n=1}^{\infty} b_n$ must converge if the sum converges. To show that the sum/difference of two convergent series is also convergent, just assume that $\sum_{n=1}^{\infty} a_n$ and $\sum_{n=1}^{\infty} b_n$ are convergent. Let $S_k = \sum_{n=1}^{k} a_n$ and $T_k = \sum_{n=1}^{k} b_n$. Then we know $S_k$ and $T_k$ converge as $k \rightarrow \infty$ (to say $S$ and $T$ respectively), so given any $\varepsilon > 0$ there is an integer $N>0$ large enough such that, for $k > N$, $$|S_k -S| < \frac{\varepsilon}{2}, \quad |T_k - T| < \frac{\varepsilon}{2}.$$ Then by the triangle inequality, for $k>N$, we have $$\left|\sum_{n=1}^{k} (a_n + b_n) - (S + T) \right| \leq \left|\sum_{n=1}^{k} a_n - S \right| + \left|\sum_{n=1}^{k} b_n - T \right| < \varepsilon,$$ which is exactly what it means to say that $\sum_{n=1}^{\infty} (a_n + b_n)$ converges to $S+T$.
Splitting field of polynomial
I think, you are a bit confused. The numbers $i, e^{\frac{2 \pi i}{6}}$ are complex numbers, so they do not belong to the algebraic closure of $\mathbb{Z}_7$. Since $f=x^2+1 \in \mathbb{Z}_7[x]$ has no roots in $\mathbb{Z}_7$, then call $a \in \overline{\mathbb{Z}_7}$ a root of $f$. Then also $-a$ is a root of $f$, hence the splitting field $F$ of $f$ over $\mathbb{Z}_7$ is $\mathbb{Z}_7(a,-a) = \mathbb{Z}_7(a)$. Clearly this field is isomorphic to the field $\mathbb{Z}_7[x]/(x^2 +1)$ which has $49$ elements, so $F$ is just the field with $49$ elements. Note that all elements of $F\setminus \{ 0\}$ are roots of the polynomial $x^{48}-1$ since $F\setminus \{ 0\}$ is a group of $48$ elements (all elements have order dividing $48$). Since $x^{48}-1$ cannot have more than $48$ roots, all elements of $F\setminus \{ 0\}$ are all of its roots. So $F$ is the splitting field of $x^{48}-1$. Let $w=2a+2 \in F$. Then $$w^8 = (2a+2)^8 = 2^8(a+1)(a+1)^7=4(a+1)(a^7+1^7)=$$ $$=4(a^8+a^7+a+1)=4((a^2)^4+a(a^2)^3+a+1) =4(1-a+a+1) = 8 = 1$$ so $w$ is a root of $x^8-1$. Finally, the splitting field of $x^8-1$ is contained in the splitting field of $x^{48}-1$ (which is $F$) since $x^8-1$ divides $x^{48}-1$. But $F=\mathbb{Z}_7(w)$, and $w \notin \mathbb{Z}_7$. Since there are no intermediate fields between $\mathbb{Z}_7$ and $F$, the splitting field must be $F$.
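As a quick sanity check, one can model $F$ as pairs $(u,v)=u+va$ with $a^2=-1$ and arithmetic mod $7$, and verify $w^8=1$ directly (a sketch):

```python
# Quick check in F_49 = Z_7[a]/(a^2 + 1): verify that w = 2a + 2 satisfies w^8 = 1.
# Elements are pairs (u, v) meaning u + v*a, with a^2 = -1 and coefficients mod 7.
def mul(x, y, p=7):
    (u1, v1), (u2, v2) = x, y
    return ((u1*u2 - v1*v2) % p, (u1*v2 + v1*u2) % p)

w = (2, 2)                      # w = 2 + 2a
acc = (1, 0)                    # the element 1
for _ in range(8):
    acc = mul(acc, w)
print(acc)                      # expected (1, 0), i.e. w^8 = 1
```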
Which is the signature of the matrix?
Since the eigenvalues $\lambda_1,\lambda_2,\lambda_3,\lambda_4$ of $A$ are the roots of the characteristic polynomial, we have $$\chi_A(x) = x^4-9x^3+cx^2+dx+37 = (x-\lambda_1)(x-\lambda_2)(x-\lambda_3)(x-\lambda_4).$$ By comparing the $x^3$ coefficient of both sides, we get $\lambda_1+\lambda_2+\lambda_3+\lambda_4 = 9$. By comparing the constant term of both sides, we get $\lambda_1\lambda_2\lambda_3\lambda_4 = 37$. We are given that all the eigenvalues have the same sign. Since $\lambda_1\lambda_2\lambda_3\lambda_4 = 37 > 0$, none of the eigenvalues can be zero. If all the eigenvalues were negative, we'd have $9 = \lambda_1+\lambda_2+\lambda_3+\lambda_4 < 0$, a contradiction. Hence, all the eigenvalues must be positive, i.e. the signature of $A$ is $4-0 = 4$.
When flipping a coin 12 times, what is the probability that, at some point after the first flip, the number of heads equals the number of tails?
If after the $n$th toss, the positive difference between heads and tails is $k$. Also, if $H_n$ denotes the number of Heads after $n$ tosses, then either $H_n=H_{n-1}$ and $ T_n=T_{n-1}+1$, or $H_n=H_{n-1}+1$ and $ T_n=T_{n-1}$. This is just saying that the $n$th toss was either H or T. This implies that $$k=|H_n-T_n|=|H_{n-1}-T_{n-1}|\pm1=k_{n-1}\pm1$$ This says that the positive difference between H and T on the ($n-1$)th toss was either $k-1$ or $k+1$. So the number of ways to flip a coin such that the positive difference $F_{n,k}$ between H and T is $k$ after $n$ tosses is made up of a combination of $F_{n-1,k-1}$ and $F_{n-1,k+1}$. Why is this combination simply addition? Well each method in $F_{n-1,k-1}$ and $F_{n-1,k+1}$ correspond to exactly one way in $F_{n,k}$. To see this, choose any way in $F_{n,k}$. It is a sequence $$HHTHTTH\cdots THT$$This has $|H-T|=k$. This means either $H=T+k$ or $T=H+k$. If you remove the final $T$, then this corresponds to a chain with either $H=T+k-1$ or $T-1=H+k$. These are ways from $F_{n-1,k-1}$ and $F_{n-1,k+1}$ respectively. Also the opposite chain, $$TTHTHHT\cdots HTH$$ is also in $F_{n,k}$, and also comes from either $F_{n-1,k-1}$ or $F_{n-1,k+1}$. So in the case when $H=T+k$, you get $2$ chains from two chains in $F_{n-1,k-1}$ and $F_{n-1,k+1}$, and similarly in the case $T=H+k$. From this you can conclude that every chain in $F_{n,k}$ is in one-to-one correspondence with either a chain from $F_{n-1,k-1}$ or a chain from $F_{n-1,k+1}$, with no chain from either of these left out. So $$F_{n,k}=F_{n-1,k-1}+F_{n-1,k+1}$$ You want the probability of landing in a way from one of $$F_{1,0},F_{2,0},\cdots F_{12,0}$$Notice that there are no ways in $F_{n,0}$ if $n$ is odd. The required probability is $$P=\frac{F_{2,0}}{2^2}+\frac{F_{4,0}}{2^4}+\cdots +\frac{F_{12,0}}{2^{12}}$$Using the recurrence relation, we have that $$\require{cancel}\begin{align}F_{n,0}&=F_{n-1,1}+\cancel{F_{n-1,-1}}_{\text{since k must be positive}}\\&=F_{n-2,0}+F_{n-2,2}\\&=F_{n-4,0}+F_{n-4,2}+F_{n-2,2}\end{align}$$ We can also find that $$\begin{align}F_{n,k}&=F_{n-1,k-1}+F_{n-1,k+1}\\&=F_{n-2,k-2}+2F_{n-2,k}+F_{n-2,k+2}\end{align}$$ These two can be used to reduce things like $F_{12,0}$ down to things in terms of $F_{n,n}$, which is equal to $2$, or $F_{2,0}$, $F_{2,2}$, etc. It is quite cumbersome to calculate $P$. I believe this should give the correct answer though. As an example of this process: $$F_{12,0}=F_{2,0}+F_{2,2}+F_{4,2}+F_{6,2}+F_{8,2}+F_{10,2} \\\, \\F_{10,2}=F_{8,0}+2F_{8,2}+F_{8,4} \\F_{8,2}=F_{6,0}+2F_{6,2}+F_{6,4} \\F_{6,2}=F_{4,0}+2F_{4,2}+F_{4,4} \\\, \\F_{8,4}=F_{6,2}+2F_{6,4}+F_{6,6} \\F_{6,4}=F_{4,2}+2F_{4,4} \\\, \\F_{4,2}=F_{3,3}+F_{2,0}+F_{2,2}=2+2+2=6$$ So $$\begin{align}F_{12,0}&=F_{2,0}+F_{2,2}+F_{4,2}+F_{6,2}+F_{8,2}+F_{10,2} \\&=F_{8,0}+3F_{6,0}+8F_{4,0}+21F_{4,2}+18F_{4,4}+F_{6,6}+F_{3,3}+2F_{2,0}+2F_{2,2} \\&=F_{8,0}+3F_{6,0}+F_{4,0}+2F_{2,0}+126+36+2+2+4+4 \\&=F_{8,0}+3F_{6,0}+F_{4,0}+2F_{2,0}+174 \end{align}$$ You can then reduce further until everything is in terms of numbers. Edit: This method gets far too messy, I wouldn't recommend it.
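A brute-force cross-check, independent of the recurrence above, is to enumerate all $2^{12}$ sequences directly (a sketch):

```python
# Count the 12-flip sequences where heads and tails are tied at some point after the first flip.
from itertools import product

hits = 0
for seq in product((0, 1), repeat=12):        # 1 = heads, 0 = tails
    heads = tails = 0
    tied = False
    for flip in seq:
        heads += flip
        tails += 1 - flip
        if heads + tails >= 2 and heads == tails:
            tied = True
            break
    hits += tied
print(hits, hits / 2**12)   # compare with the classical value 1 - C(12,6)/2^12 = 3172/4096
```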
General technique to find partial sum formula for series such as $\sum n^3/2^n$
Let's say you want to compute $$ \sum_{n=0}^{m}\frac{a_0+a_1n+\dotsb+a_kn^k}{c^n} =\sum_{n=0}^{m}\frac{P(n)}{c^n}. $$ First note that $$ \sum_{n=0}^{\infty}(a_0+a_1n+\dotsb+a_kn^k)x^n=P(xD)\left(\frac{1}{1-x}\right)=G(x) $$ where $xD$ is the operator $x\frac{d}{dx}$. Then $$ G(x/c)=\sum_{n=0}^{\infty}\left(\frac{a_0+a_1n+\dotsb+a_kn^k}{c^n}\right)x^n;\quad |x|<|c|. $$ In particular $$ \frac{1}{1-x}G(x/c)=\sum_{n=0}^{\infty}\left(\sum_{m=0}^{n}\left(\frac{a_0+a_1n+\dotsb+a_kn^k}{c^n}\right)\right)x^{n}. $$ Hence it suffices to compute the coefficient of $x^{n}$ in $\frac{1}{1-x}G(x/c)$ which is typically done (by hand) with partial fractions.
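As a small illustration, take $P(n)=n^3$ and $c=2$; applying $(xD)^3$ to $\frac1{1-x}$ gives $G(x)=\frac{x(1+4x+x^2)}{(1-x)^4}$, and one can sanity-check the value $G(1/2)=26$ against direct partial sums before doing the partial-fraction work (a sketch):

```python
# Numerical sanity check for P(n) = n^3 and c = 2: the full sum is G(1/2),
# and the partial sums are the coefficients of G(x/2)/(1-x) as described above.
G = lambda x: x * (1 + 4*x + x**2) / (1 - x)**4   # sum_{n>=0} n^3 x^n

partial = 0.0
for n in range(60):
    partial += n**3 / 2**n
print(partial, G(0.5))     # both should be (very nearly) 26
```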
Does Lambert W (Product Log) count as an explicit solution?
The closed form of a special function is itself. The real question is: is the LambertW function a special function? To answer, we have to look at the list of standard functions. The hitch is that there is no definitive or exhaustive list of such functions. What is a special function? It's a function (usually named after an early investigator of its properties) having particular use in mathematical physics or some other branch of mathematics. From: http://mathworld.wolfram.com/SpecialFunction.html Special functions are particular mathematical functions which have more or less established names and notations due to their importance in mathematical analysis, functional analysis, physics, or other applications: https://en.wikipedia.org/wiki/List_of_mathematical_functions The LambertW function has appeared in the mathematical literature for a long time, is well known, and is implemented in most mathematical software. In this sense, we can say that the LambertW function has reached the honorific rank of special function. Hence, we are not cheating when we express a result of calculus in which the LambertW function is involved. An article for the general public on this subject: https://fr.scribd.com/doc/14623310/Safari-on-the-country-of-the-Special-Functions-Safari-au-pays-des-fonctions-speciales
Summation of stopping point
Since $n$ doesn't change as $i$ varies from $0$ to $n-2$, you're summing multiple copies of $n$. How many copies? There are $n-1$ items in your sum, since the starting point $i=0$ contributes an item beyond the stopping point $i=n-2$, and $(n-2)+1 = n-1$. This makes a total of $n(n-1)$.
Why is the solution set of the reduced row echelon form of A equal to the solution set of A?
When you do elementary operations $O_1, \ldots, O_n$ on the rows of $A$ to get $\text{rref}(A)$, you form an invertible matrix $O = O_n \cdots O_1$ (the product of the corresponding elementary matrices) such that $$ OA = \text{rref}(A) $$ and let us show that $$ \text{Null}(OA) = \text{Null}(A) = \text{Null}(\text{rref}(A)) $$ $(\supset)$ Let $X \in \text{Null}(A)$. We have $ AX = 0 $. Thus $ OAX=0 $. Therefore $X \in \text{Null}(OA)$. $(\subset)$ Let $X \in \text{Null}(OA)$. We have $$ OAX = 0 $$ and by multiplying both sides on the left by $O^{-1}$, we get $$ AX=0 $$ Thus $X \in \text{Null}(A)$. We showed that $\text{rref}(A)$ and $A$ have the same solution set.
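A concrete check with SymPy (a sketch with a made-up matrix): $A$ and $\text{rref}(A)$ have the same null space.

```python
# Verify on an example that A and rref(A) have the same null space.
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 6],
            [1, 0, 1]])
R, _ = A.rref()                  # reduced row echelon form
print(A.nullspace())             # basis of Null(A)
print(R.nullspace())             # same basis (up to scaling)
```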
Finite simplicial complex can be viewed as a subcomplex of a simplex
The key idea is that in a simplicial complex (unlike in, say, a $\Delta$-complex), each simplex is uniquely determined by its vertices (this is part of the definition of a simplicial complex). So, since $K$ has finitely many vertices, say $N$ of them, consider the simplex $\Delta^{N-1}$ (which also has $N$ vertices), and identify the $N$ vertices of $\Delta^{N-1}$ with the $N$ vertices of $K$. Now let's think about how to identify the $k$-simplices of $K$ with $k$-simplices of $\Delta^{N-1}$. For any $k + 1$ vertices in $K$, there might or might not be a $k$-simplex in $K$ with those vertices, but there's at most one $k$-simplex in $K$ with those vertices (since $K$ is a simplicial complex). There is also exactly one $k$-simplex in $\Delta^{N-1}$ with those $k + 1$ vertices. So, if $K$ has a $k$-simplex with those vertices, include the $k$-simplex in $\Delta^{N-1}$ with these vertices; if it doesn't, don't. Repeating this process and taking all simplices in $\Delta^{N-1}$ that correspond to simplices in $K$, we obtain a subcomplex of $\Delta^{N-1}$ which is homeomorphic to our original complex $K$.
The geometric meaning of a line plus a vector
The set $\{k(1, 2, 3) \mid k \in \mathbb{R}\}$ is the line in $\mathbb{R}^3$ through the origin in the direction of the vector $(1, 2, 3)$. The set $\{k(1, 2, 3) + (2, 9, -1) \mid k \in \mathbb{R}\}$ is the line in $\mathbb{R}^3$ through the point $(2, 9, -1)$ in the direction of the vector $(1, 2, 3)$. All we've done is translated the entire line $\{k(1, 2, 3) \mid k \in \mathbb{R}\}$ by the vector $(2, 9, -1)$. Consider the two-dimensional analogue of this question. What is the difference between $\{k(1, 2) \mid k \in \mathbb{R}\}$ and $\{k(1, 2) + (2, 9) \mid k \in \mathbb{R}\}$? The former is the line through the origin in the direction of the vector $(1, 2)$, while the latter is the line through the point $(2, 9)$ in the direction of the vector $(1, 2)$. This can be observed in the image below (omitted here), where the first line is the blue one.
Hint requested for: If $\sum_{n=0}^{\infty} a_n x^n$ converges for some $x_0$, then it converges uniformly and absolutely on $[-a, a]$ with $a<|x_0|$?
Since the series $\displaystyle\sum_{n=0}^\infty a_n{x_0}^n$ converges, you know that $\lim_{n\to\infty}a_n{x_0}^n=0$. This is equivalent to the assertion that $\lim_{n\to\infty}\lvert a_n{x_0}^n\rvert=0$. This is enough to be able to apply the Weierstrass $M$-test to the series $\displaystyle\sum_{n=0}^\infty a_nx^n$ in $[-a,a]$. Don't forget to use the fact that $a<\lvert x_0\rvert$.
A smooth plane is: $x_1+x_2+x_3=3$. A light ray hits the plane along the direction $(−1,−1,0)$. What's the normalized direction of the reflection.
Your answer is wrong. The reflected ray will be in the direction given by $$ \vec{r} = \vec{d} - 2 \, \text{proj}_{\vec{n}}\vec{d} $$ where $\vec{d}$ is the direction of the incident ray, $\vec{n}$ is a normal to the plane and $\text{proj}_{\vec{n}}\vec{d}$ is the orthogonal projection of $\vec{d}$ on $\vec{n}$. $$ \vec{r} = (-1,-1,0) - 2 \frac{(-1,-1,0)\cdot (1,1,1)}{(1,1,1)\cdot (1,1,1)}(1,1,1) =\left(\frac{1}{3},\frac{1}{3},\frac{4}{3}\right) $$ so $\vec{r}$ normalized is $$ \left(\frac{1}{3\sqrt{2}},\frac{1}{3\sqrt{2}},\frac{4}{3\sqrt{2}}\right) $$
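A quick numerical check of the computation (a sketch):

```python
# Verify the reflection formula numerically.
import numpy as np

d = np.array([-1.0, -1.0, 0.0])      # incident direction
n = np.array([1.0, 1.0, 1.0])        # normal to x1 + x2 + x3 = 3
r = d - 2 * (d @ n) / (n @ n) * n    # r = d - 2 proj_n(d)
print(r)                             # expected (1/3, 1/3, 4/3)
print(r / np.linalg.norm(r))         # normalized: (1, 1, 4)/(3*sqrt(2))
```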
Finding two vectors in a tetrahedron
Hint: Let: $$ \vec A= (a,0,0)^T \qquad \vec B= (0,b,0)^T \qquad \vec C= (0,0,c)^T $$ take the vectors $\overrightarrow {AB}=\vec B-\vec A$ and $\overrightarrow {AC}=\vec C-\vec A$
Sum of series of upto $(2n+1)$ terms
The simplest method would be to split this sequence into two different sequences. The first consists of the odd-position terms and it gives: $ 1^2 + 2^2 + 3^2 + 4^2 + ...$. The second consists of the even-position terms and this gives: $ 1^2\cdot 2 + 2^2\cdot 3 + 3^2\cdot 4 + 4^2\cdot 5 + ...$. The $(2n+1)^{th}$ term is an odd-position term, so it is from the first sequence. So, the first sequence goes up to $(n+1)$ terms and the second goes up to $n$ terms. Check this: $(n+1)$ + $n$ = $2n+1$. The sum to $n$ terms of the first sequence is given by: $\sum_{k=1}^{n} k^2 $ which is equal to $\frac{n}{6}(n+1)(2n+1)$. So, the sum of $(n+1)$ terms is $\frac{n}{6}(n+1)(2n+1) + (n+1)^2 $. The sum of the second sequence is given by: $\sum_{k=1}^{n} k^2 (k+1) $ = $\sum_{k=1}^{n} (k^3 + k^2) $ = $\sum_{k=1}^{n} k^3 $ + $\sum_{k=1}^{n} k^2$. We already know the value of the second part (as we did it above). The first part is $\sum_{k=1}^{n} k^3 $ = $ \frac{n^2(n+1)^2}{4}$. So, the total sum is given by: $\frac{n}{6}(n+1)(2n+1) + (n+1)^2 $ + $\frac{n}{6}(n+1)(2n+1)$ + $ \frac{n^2(n+1)^2}{4}$. I will leave you to simplify the final answer. http://pirate.shu.edu/~wachsmut/ira/infinity/answers/sm_sq_cb.html
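A quick check of the split for small $n$ (a sketch; it assumes the series is $1^2+1^2\cdot2+2^2+2^2\cdot3+3^2+\cdots$ taken to $2n+1$ terms):

```python
# Compare the direct sum of the first 2n+1 terms with the formula derived above.
def direct(n):
    terms = []
    for k in range(1, n + 2):
        terms.append(k**2)                # odd-position term
        if k <= n:
            terms.append(k**2 * (k + 1))  # even-position term
    return sum(terms[: 2 * n + 1])

def formula(n):
    return (n*(n+1)*(2*n+1)//6 + (n+1)**2
            + n*(n+1)*(2*n+1)//6 + n**2*(n+1)**2//4)

for n in range(6):
    print(n, direct(n), formula(n))       # the two columns should agree
```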
Linear dependence of these functions?
The workhorse for problems like this is the Wronskian. Put \begin{align*} f(x) &= e^{2x} & g(x) &= e^{3x} & h(x) &= x \end{align*} and define $$ W(x)= \begin{bmatrix} f(x) & g(x) & h(x) \\ f^\prime(x) &g^\prime(x) & h^\prime(x) \\ f^{\prime\prime}(x) & g^{\prime\prime}(x) & h^{\prime\prime}(x) \end{bmatrix} = \begin{bmatrix} e^{2x} & e^{3x} & x \\ 2e^{2x} &3e^{3x} & 1 \\ 4e^{2x} & 9e^{3x} & 0 \end{bmatrix} $$ If there exists an $x$ such that $\det W(x)\neq 0$, then $\{f,g,h\}$ is linearly independent. In our case, note that $$ \det W(0)=\det\begin{bmatrix}1&1&0\\2&3&1\\ 4&9&0\end{bmatrix}=-5 $$ Hence our functions are linearly independent.
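A numerical evaluation of $\det W(0)$ (a sketch):

```python
# Evaluate the Wronskian matrix at x = 0 and take its determinant.
import numpy as np

def W(x):
    return np.array([
        [np.exp(2*x),   np.exp(3*x),   x],
        [2*np.exp(2*x), 3*np.exp(3*x), 1.0],
        [4*np.exp(2*x), 9*np.exp(3*x), 0.0],
    ])

print(np.linalg.det(W(0.0)))   # expected -5, so the functions are independent
```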
Upper bound for an integral of positive functions
Certainly not. Let $\gamma(t)=\sqrt{t}$, $w(t)=\frac1{t^2+1}$. Then $\gamma$ is continuous and increasing with $\gamma(0)=0$. We also have $$ \int_0^\infty w(\tau)\,d\tau=\frac\pi2. $$ But $$ \int_0^\infty\gamma(w(\tau))\,d\tau=\int_0^\infty\frac1{\sqrt{\tau^2+1}}\,d\tau=\infty. $$
Quadratic extensions - understanding
1) A quadratic extension of $K$ must be the splitting field of a quadratic polynomial; by the quadratic formula, we see that $L=K(\alpha)$ for some $\alpha \notin K, \alpha^2 \in K$. It cannot be the case that $\alpha$ is transcendental over $K$, because in that case $K(\alpha)$ is, by definition, of infinite degree over $K$. 2) (A note: Your notation in this question could use improvement, as you're using $\alpha$ in two different ways.) No. Consider the quadratic extension $L=\mathbf{Q}(\sqrt{2})$ over $\mathbf{Q}$. The polynomial $f(x)=x^2 - 2x - 1$ has roots $1 \pm \sqrt{2}$. 3) Yes. Even better, any quadratic extension is Galois.
Find, from first principles, the derivative of $\log(\sec(x^2))$
$$\ln(\sec((x+h)^2))-\ln(\sec (x^2))=\ln\dfrac{\cos (x^2)}{\cos((x+h)^2)}$$ $$=\ln\left(1+\dfrac{\cos (x^2)-\cos((x+h)^2)}{\cos((x+h)^2)}\right)$$ Now $\lim_{h\to0}\dfrac{\ln(1+h)}h=1$. So, we need to find $\lim_{h\to0}\dfrac{\dfrac{\cos (x^2)-\cos((x+h)^2)}{\cos((x+h)^2)}}h$ $$=\dfrac1{\lim_{h\to0}\cos((x+h)^2)[\cos (x^2)+\cos((x+h)^2)]}\cdot\lim_{h\to0}\dfrac{\sin^2((x+h)^2)-\sin^2(x^2)}h$$ Using Prove $ \sin(A+B)\sin(A-B)=\sin^2A-\sin^2B $, $\lim_{h\to0}\dfrac{\sin^2((x+h)^2)-\sin^2(x^2)}h=\lim_{h\to0}\sin(x^2+(x+h)^2)\lim_{h\to0}\dfrac{\sin(2hx+h^2)}{2hx+h^2}\lim_{h\to0}\dfrac{2hx+h^2}h=?$
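For reference, the limits above assemble to the derivative $2x\tan(x^2)$; here is a difference-quotient sketch checking that value numerically:

```python
# Compare a central difference of f(x) = ln(sec(x^2)) with 2x*tan(x^2).
import math

f = lambda x: math.log(1 / math.cos(x**2))   # ln(sec(x^2))
x, h = 0.7, 1e-6
print((f(x + h) - f(x - h)) / (2 * h))       # central difference
print(2 * x * math.tan(x**2))                # claimed derivative
```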
Evaluating $\frac{d}{dx} \int_{1}^{3x} \left(5\sin (t)-7e^{4t}+1\right)\,\mathrm dt$
It is clear that: $$\dfrac{d}{dx}\int_{1}^{3x}\left(5\sin{t}-7e^{4t}+1\right)dt=15\sin{3x}-21e^{12x}+3$$ using $$\dfrac{d}{dx}\int_{u(x)}^{v(x)}f(t)dt=f(v(x))\cdot v'(x)-f(u(x))\cdot u'(x)$$
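A numerical cross-check (a sketch) comparing a central difference of $F(x)=\int_1^{3x}f(t)\,dt$ with the closed form:

```python
# Compare a numerical derivative of F with 15 sin(3x) - 21 e^{12x} + 3.
import math
from scipy.integrate import quad

f = lambda t: 5*math.sin(t) - 7*math.exp(4*t) + 1
F = lambda x: quad(f, 1, 3*x)[0]

x, h = 0.4, 1e-5
print((F(x + h) - F(x - h)) / (2 * h))
print(15*math.sin(3*x) - 21*math.exp(12*x) + 3)
```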
ODE boundary condition and integer values?
$e^{2\pi im} = 1$ does not imply $e^{i\pi}e^{2m} = 1$, as if it did then $e^{2\pi im} = e^{i\pi + 2m}$. In general, $e^{ix} = 1$ iff $x = 2k\pi$ for integer $k$. Hence $e^{2\pi im} = 1$ iff $m$ is an integer.
Pascal's triangle, estimate row value by fixed row and maximum yields.
With respect to the comments following your query, with both $A$ and $k$ fixed, I don't know of any way of precisely computing the largest value of $n$ such that $\binom{n}{k} \leq A.$ However, I think a lower bound and an upper bound for the precise value of $n$ can be supplied, purely from the definition of $\binom{n}{k}.$ Let $B = A \times k!$. Let $g(n) \equiv (n) \times (n-1) \times \cdots \times (n - [k-1]).$ Then you want the largest $n$ so that $g(n) \leq B.$ $g(n) < n^k,$ so if $n$ is chosen so that $n^k < B$, then this guarantees that $g(n) < B.$ $n^k < B \iff k \log n < \log B \iff \log n < \frac{\log B}{k}.$ Therefore, a lower bound for $n$ is given by choosing the largest $n$ such that $\log n < \frac{\log B}{k}.$ Similar analysis can be used to provide an upper bound for $n$. Let $m = (n - [k-1]).$ Clearly, $g(n) > m^k.$ Further, if $\log m > \frac{\log B}{k}$ then $\log m^k = k \times \log m > \log B ~\Rightarrow $ $m^k > B.$ Therefore, an upper bound for $n$ is given by choosing the smallest $n$ such that $\log (n - [k-1]) > \frac{\log B}{k}.$ Note that the above approach merely provides a starting point for (perhaps) attaining tighter bounds. Tighter bounds can be sought by trying to more accurately determine the geometric mean of the factors of $g(n).$ That is, you are looking for some value $r$ where $[n - (k-1)] < r < n$ and $r^k = g(n).$ It is known that the geometric mean of (for example) the integers $i$ such that $ [n - (k-1)] \leq i \leq n$ is less than the arithmetic mean of the integers in this range (i.e. the average $V_n ~= \frac{n + [n - (k-1)]}{2}).$ This means that the first part of the above analysis, which used the idea that $n^k > g(n)$, may instead use the idea that $(V_n)^k > g(n)$. Repeating the analysis re the search for a lower bound for $n$, this means that a tighter (i.e. larger) lower bound can be computed by choosing the largest $n$ such that $\log (V_n) < \frac{\log B}{k}.$ However, we have now reached the boundary of where my knowledge can offer any support.
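If an exact answer is acceptable computationally, the log bound above also gives a good starting point for a direct search (a sketch; `largest_n` is a hypothetical helper name):

```python
# Find the largest n with C(n, k) <= A: start near the log-based estimate, then scan.
import math

def largest_n(A, k):
    B = A * math.factorial(k)
    n = max(int(math.exp(math.log(B) / k)), k)   # rough starting point from n^k < B
    while math.comb(n, k) > A:                   # back off if the float estimate overshot
        n -= 1
    while math.comb(n + 1, k) <= A:              # then walk forward to the exact answer
        n += 1
    return n

print(largest_n(10**6, 5))    # largest n with C(n, 5) <= 1,000,000
```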
If f has a slant asymptote at infinity, prove for $A,B\in\mathbb{R}$ where $A\neq 0$, $\lim_{x\to\infty}\frac{f(x)}{Ax+B}=1$
$$\begin{align} \lim_{x\to\infty}\frac{f(x)}{Ax+B}&=\lim_{x\to\infty}\frac{f(x)-(Ax+B)+(Ax+B)}{Ax+B} \\ &=\lim_{x\to\infty}\left(\frac{f(x)-(Ax+B)}{Ax+B}+1\right) \\ &=\lim_{x\to\infty}\bigl(f(x)-(Ax+B)\bigr)\lim_{x\to\infty}\frac{1}{Ax+B}+1 \end{align}$$ What can we say about the first limit given that $y=Ax+B$ is an oblique asymptote?
Prove $(-\infty,5]$ is not compact set.
This set is not compact. To see this, consider the open cover consisting of the balls $B(1,p)$ for $p\in (-\infty,5]$. This cover has no finite subcover, since any finite subfamily covers only a bounded set while $(-\infty,5]$ is unbounded. So, the set is not compact. Note: here my notation for balls is $B(r,q)$, where $q$ is the center point and $r$ is the radius.
Few basic things unclear to me about inner product spaces and orthonormal basis
Concerning your first question: If, say, $B = \{ e_1, \dots, e_n \}$, $x = \sum_{i=1}^n a_i e_i$ and $y = \sum_{i=1}^n b_i e_i$, and if you mean by $[x]_B$ the tuple $(a_1, \dots, a_n)$ and by $[y]_B$ the tuple $(b_1, \dots, b_n)$, then $\langle x, y \rangle = \langle \sum_{i=1}^n a_i e_i, \sum_{j=1}^n b_j e_j \rangle = \sum_{i=1}^n \sum_{j=1}^n a_i b_j \langle e_i, e_j \rangle = \sum_{i=1}^n \sum_{j=1}^n a_i b_j \delta_{i,j} = \sum_{i=1}^n a_i b_i = \langle [x]_B, [y]_B \rangle_{st}$. As for the second, if $C = (c_{ij})_{i,j}$, then $[Ci] = (c_{1i}, \dots, c_{ni})$ and $[Cj] = (c_{1j}, \dots, c_{nj})$. Then you get $[Ci]^*[Cj] = \sum_{k=1}^n c_{ki} c_{kj} = \langle[Ci],[Cj]\rangle_{st}$ Remark: I suppressed the subindex $B$ here, since I do not see any reason for it.
If a, b, c are three linearly independent vectors show that the vectors a × b, b × c, c × a are also linearly independent.
Assume that $$\lambda(a\times b)+\mu(b\times c)+\nu(c\times a)=\vec 0\ .$$ Taking the scalar product with $c$ gives $$\lambda\,(a\times b)\cdot c=0\ .$$ Since the triple product is assumed $\ne0$ it follows that $\lambda=0$, and similarly for $\mu$ and $\nu$.
Least positive integer $\equiv 3^{18} \pmod{37}?$
Hint. From Fermat's little theorem, given $37$ is prime and $\gcd(3,37)=1$, $$3^{36} \equiv 1 \pmod{37}$$ or $$37 \mid \left(3^{18}+1\right)\cdot \left(3^{18}-1\right)$$ According to Euclid's lemma one of these should happen: $37 \mid \left(3^{18}+1\right)$ or $37 \mid \left(3^{18}-1\right)$. But, there is an "easier" way $$27\equiv 3^3 \equiv -10 \pmod{37} \Rightarrow \\ 3^{18} \equiv 10^6 \equiv 100^3 \pmod{37} \Rightarrow \\ 3^{18} \equiv (-11)^3 \equiv 121\cdot(-11)\pmod{37} \Rightarrow \\ 3^{18} \equiv 10\cdot(-11) \equiv - 110\pmod{37} \Rightarrow \\ 3^{18} \equiv 1\pmod{37}$$ because $37 \mid 111$, so $100 \equiv -11 \pmod{37}$ and $-110 \equiv 1 \pmod{37}$.
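A direct check with modular exponentiation (a sketch):

```python
# Confirm the congruences by direct computation.
print(pow(3, 18, 37))   # expected 1
print(pow(3, 36, 37))   # Fermat's little theorem: expected 1
```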
Dimension of a subspace of $Hom_{\mathbb{C}}(V, V)$
Hint: Let $u_i$ and $w_j$ be arbitrary bases for $U$ and $W$, respectively. They together form a basis for $V$. Translate the condition for $\varphi$ to its matrix representation with respect to this basis.
find all functions such that $f(x+y) \geq f(x) + yf(f(x)) $
From the given inequality, $f$ is increasing. Now, if $f(x)\geq x+1$ for some $x$, then from the given inequality for $y=1$, $$f(x+1)\geq f(x)+f(f(x))\geq f(x)+f(x+1),$$ which implies that $f(x)\leq 0$, a contradiction. Therefore $f(x)<x+1$ for all $x>0$. Hence, for all $x,y>0$, $$f(f(x))\leq\frac{f(x)}{y}+f(f(x))\leq\frac{f(x+y)}{y}\leq\frac{x+y+1}{y},$$ and letting $y\to\infty$ we obtain that $f(f(x))\leq 1$ for all $x>0$. Now, if $f$ is not bounded, then there exists $x_1>0$ such that $f(x_1)>2$. Then, there exists $x_2$ such that $f(x_2)>x_1$, and since $f$ is increasing, we obtain that $$f(f(x_2))\geq f(x_1)>2,$$ which is a contradiction with $f(f(x))\leq 1$ for all $x$. Hence $f$ is bounded, and we can then proceed as in your answer.
X and Y are Bernoulli. Suppose $P(X = 1, Y = 1) = P(X = 1) P(Y = 1)$. Prove that X and Y must be independent
You will need to show:
$P(X=0,Y=0)=P(X=0)P(Y=0)$
$P(X=0,Y=1)=P(X=0)P(Y=1)$
$P(X=1,Y=0)=P(X=1)P(Y=0)$
Show that the graph $\{{(x,y) \in \mathbb{R}^2: y=\cos x}\}$ is closed in the metric space $\mathbb{R}^2$
Consider $g(x,y) = y-\cos x$ defined as a function $\mathbf{R}^2\to \mathbf{R}$. This is a continuous function. So the pre-image of the singleton $\{0\}$ is a closed set. The pre-image is precisely the graph. This works for any continuous function, not necessarily the cosine function.
How do you prove that every vector enclosed in a space has a unique linear combination made from the basis of the space?
We already know that $\text{rank}(A)=n$. But now we are ready: the rank of $A|b$ is also $n$, because it is at least $\text{rank}(A)=n$ and it cannot exceed the number of rows of $A|b$, which is the same as the number of rows of your matrix $A$, namely $n$. Remember that your theorem doesn't ask you to determine the rank of $A \cdot b$ but of $A|b$, so it's pretty easy: the rank of $A | b$ can't be greater or smaller than $n$.
Expected value of maximum function of non-independent random variable
The correct integral is $$\int_0^1 \max(x,1-x)\,dx = \int_0^{1/2} (1-x)\,dx + \int_{1/2}^1 x \,dx $$ $$ = 3/8 + 3/8 = 3/4.$$
If $\mu$ is a measure on (X, $\mathcal{M}$), can $\mu$ be semifinite on a finite space X?
In your example $\mu_0$ is the zero measure, which is certainly semifinite.
Compute $I = \int_0^{2\pi} \frac{ac-b^2}{[a \sin(t)+ b \cos(t)]^2+[b \cos(t)+ c \sin(t)]^2}dt$
I will show you an overkill, but a useful one (at least, IMHO). If the quadratic form $$ q(x,y) = A x^2 + 2B xy + C y^2 $$ associated with the matrix $$ M_q=\begin{pmatrix} A & B \\ B & C\end{pmatrix}$$ is positive definite, i.e. (by Sylvester's criterion) $A>0$ and $AC-B^2>0$, then the integral $$ I_q = \iint_{\mathbb{R}^2}\exp\left(-q(x,y)\right)\,dx\,dy $$ is convergent and its value equals $\frac{\pi}{\sqrt{\det M_q}}=\frac{\pi}{\sqrt{AC-B^2}}$. The proof relies on the spectral theorem, Fubini's theorem and the fact that the determinant of $M_q$ is the product of its eigenvalues. What happens if we perform a change of variables, for instance if we switch to polar coordinates? We get: $$ I_q = \int_{0}^{2\pi}\int_{0}^{+\infty}\rho\exp\left(-\rho^2 q(\cos\theta,\sin\theta)\right)\,d\rho\,d\theta=\frac{1}{2}\int_{0}^{2\pi}\frac{d\theta}{q(\cos\theta,\sin\theta)}.$$ In your case, the coefficients of the quadratic form $q$ are: $$ A=2b^2,\qquad B=ba+bc,\qquad C=a^2+c^2.$$ Can you guess now what the value of your integral has to be? I got: $$ \int_{0}^{2\pi}\frac{dt}{(a\sin t+b\cos t)^2+(b\cos t+c\sin t)^2}=\color{red}{\frac{2\pi}{\left|b\right|\cdot\left|a-c\right|}}$$ under the assumptions $b\neq 0$ and $a\neq c$.
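A numerical spot check of the red formula for, say, $a=1$, $b=2$, $c=3$ (a sketch; the formula predicts $2\pi/(2\cdot 2)=\pi/2$):

```python
# Compare the integral with 2*pi/(|b|*|a-c|) for one choice of parameters.
import math
from scipy.integrate import quad

a, b, c = 1.0, 2.0, 3.0
integrand = lambda t: 1.0 / ((a*math.sin(t) + b*math.cos(t))**2
                             + (b*math.cos(t) + c*math.sin(t))**2)
print(quad(integrand, 0, 2*math.pi)[0])
print(2*math.pi / (abs(b) * abs(a - c)))
```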
Prove that if rref of matrix $\bf (v_1\space v_2 \space \cdots v_m)$ has row of zeroes, vectors $\bf v_1,\cdots v_m$ do not span $\mathbb R^{n}$
As stated in the problem, the last row of the rref consists of all zeros. Work with $R=\operatorname{rref}(\bf v_1\ v_2\ \cdots\ v_m)$: since its last row is zero, every vector in the column space of $R$ has last coordinate $0$, so a vector $w \in \mathbb{R}^n$ with $w_n=1$ cannot be a linear combination of the columns of $R$. Hence the columns of $R$ do not span $\mathbb R^n$, i.e. $\operatorname{rank}(R) \le n-1$. Row operations do not change the rank, so the span of $\bf v_1,\dots, v_m$ also has dimension at most $n-1 < n$, and these vectors do not span $\mathbb R^n$.
Number of vector subspaces in $C_{2}$
More generally, if the ground field is a finite field with $q$ elements, we obtain the number of $k$-dimensional subspaces of $\mathbb{F}_q^n$ as follows: We adopt the following $q$-analogue notations: $$ [n]_q = \frac{1-q^n}{1-q} = 1+q+ \cdots + q^{n-1}, $$ $$ [n]_q!=[1]_q[2]_q\cdots [n]_q, $$ $$ \binom n k_q = \frac{[n]_q!}{[k]_q![n-k]_q!}. $$ Theorem [Number of Subspaces] Let $k\leq n$. Denote by $s(n,k)$ the number of $k$-dimensional subspaces of $\mathbb{F}_q^n$. Then $$ s(n,k)=\binom n k_q. $$ Proof First, we choose an ordered list of $k$ linearly independent vectors in $\mathbb{F}_q^n$. This can be done in $(q^n-1)(q^n-q)\cdots(q^n-q^{k-1})$ ways. Among these, each subspace arises from exactly $|\mathrm{GL}_k(\mathbb{F}_q)|$ such lists. Therefore, $$ s(n,k) = \frac{(q^n-1)(q^n-q)\cdots(q^n-q^{k-1})}{(q^k-1)(q^k-q)\cdots (q^k-q^{k-1})}=\binom n k_q. $$
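A small brute-force check of the theorem over $\mathbb{F}_2$ (a sketch; `qbinom` and `subspaces_f2` are hypothetical helper names):

```python
# Compare the q-binomial formula with a brute-force subspace count over F_2.
from itertools import product

def qbinom(n, k, q):
    num = den = 1
    for i in range(k):
        num *= q**(n - i) - 1
        den *= q**(k - i) - 1
    return num // den

def subspaces_f2(n, k):
    vectors = list(product((0, 1), repeat=n))
    spaces = set()
    for basis in product(vectors, repeat=k):          # all ordered k-tuples of vectors
        span = set()
        for coeffs in product((0, 1), repeat=k):
            v = tuple(sum(c * b[i] for c, b in zip(coeffs, basis)) % 2 for i in range(n))
            span.add(v)
        if len(span) == 2**k:                         # the k vectors were independent
            spaces.add(frozenset(span))
    return len(spaces)

print(subspaces_f2(3, 1), qbinom(3, 1, 2))    # both should be 7
print(subspaces_f2(3, 2), qbinom(3, 2, 2))    # both should be 7
```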
How to show $(\exists x)( \forall y)\varphi\rightarrow( \forall y)(\exists x)\varphi $ is logically valid
You're forgetting to use from item 4 the part that says the sequence differs in at most the $i$th component. For instance, say you have a sequence $a$ satisfying a formula $\psi = \exists x\forall y(\varphi)$. Mendelson uses only the universal quantifier, so, actually, the above rendering of $\psi$ is only syntactic sugar for $\neg \forall x(\neg\forall y(\varphi))$. For a sequence $a$ to satisfy $\psi$, it means that, by item 2, $a$ does not satisfy $\forall x(\neg\forall y(\varphi))$. For $a$ to not satisfy $\forall x(\neg\forall y(\varphi))$, by item 4, it means that there is at least one sequence $a'$ differing from $a$ in at most the $i$th component (in which $i$ is the index of the variable $x$) not satisfying $\neg\forall y(\varphi)$. As for your edit, what you can get is actually:
1. A sequence $a'$ differing from $a$ in at most the $i$th position ($i$ being the index of $x$) satisfying $\forall y (\varphi)$.
2. A sequence $a''$ differing from $a$ in at most the $j$th position ($j$ being the index of $y$) satisfying $\forall x(\neg\varphi)$.
for $x$ and $y$ can be different variables. To obtain a contradiction from this you can use item 4 (from Mendelson) to obtain another sequence, say $a'''$, differing from $a$ in at most the $i$th and $j$th position that will work for both 1. and 2. above.
For what values of "a" does the resulting system have (a) no solution, (b) a unique solution, (c) infinitely many solutions?
From your third row it is evident that if $a^2=3$ there is no solution; otherwise there is a unique solution.
finding the integrating factor of $y \ dx+(x+3x^{3}y^{4}) \ dy = 0$
$$y \ dx+(x+3x^{3}y^{4}) \ dy = 0$$ $$ydx+xdy+3x^{3}y^{4}\ dy = 0$$ $$d(xy)+3x^{3}y^{4}\ dy = 0$$ The integrating factor is now obvious: $$\mu (x,y)= \dfrac 1 {(xy)^3}$$ $$\dfrac {d(xy)}{x^3y^3}+3ydy=0$$ Integrate. The integrating factor given is correct. It depends on both $x$ and $y$ not only on $x$. You can find the correct formula for this kind of integrating factor here at section 3 : Formula integrating factor
does F(x) = -F(1/x) for all x in domain of F for this specific F(x)?
First, write the integral this way: $$ F(x) = \int_1^x e^{t + 1/t}\frac{dt}{t}. $$ Now we'll try the $u$-substitution $u=1/t$. Then $dt=-du/u^2$ and the new integral is $$ F(x)=\int_1^{1/x} e^{1/u+u}\cdot\frac{-du/u^2}{1/u} = -\int_1^{1/x}e^{u+1/u}\frac{du}{u} = -F(1/x). $$
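A numerical confirmation (a sketch): $F(2)$ should equal $-F(1/2)$.

```python
# Check F(x) = -F(1/x) at x = 2 by numerical integration.
import math
from scipy.integrate import quad

def F(x):
    return quad(lambda t: math.exp(t + 1/t) / t, 1, x)[0]

print(F(2.0), -F(0.5))   # the two values should agree
```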
Normed space where unit ball's weak and norm topology coincide?
The answer is "no". The unit sphere is norm closed in the unit ball under the (norm) subspace topology. But in an infinite dimensional normed space, the weak closure of the unit sphere is the unit ball. See this post for a proof of this. So, in the space $B(X, {\rm weak})$, the closure of the unit sphere is again all of $B_X$ (in general if $A$ is a subspace of a topological space $X$ and $C\subset A$, then the closure of $C$ in $A$ is the intersection of $A$ with the closure of $C$ in $X$). In particular, the unit sphere is not weakly closed in $B(X, {\rm weak})$.
Find $\lim_{x\to 0}(1+\int_{2x}^{4x}\sin(t^2) dt)^{\csc(4x^3)}$
Hint: $$ \lim_{x\to 0} \frac{\ln \left(1+\int_{2x}^{4x} \sin(t^{2})dt\right)}{\sin (4x^{3})} = \frac{1}{4} \lim_{x\to 0} \frac{4x^{3}}{\sin(4x^{3})}\frac{\ln \left(1+\int_{2x}^{4x} \sin(t^{2})dt\right)}{\int_{2x}^{4x}\sin(t^{2})dt}\frac{\int_{2x}^{4x}\sin(t^{2})dt}{x^{3}}. $$ The first and second limits are 1, and the last one $$ \lim_{x\to 0} \frac{\int_{2x}^{4x}\sin(t^{2})dt}{x^{3}} = \frac{56}{3} $$ can be proved by using l'Hospital's rule or by Taylor's theorem.
Composite tangent function questions (domain and solving an equation)
$\displaystyle\sqrt{\frac{1}{x+2}} = \frac{\pi}{4} + \pi n,\ n\in\mathbb{N}$ because you should only consider natural number values of n (including 0), otherwise you are equating a square root to a negative number, and you're only looking at real domains/codomains.
If $R$ is an equivalence relation, does $R^2$ too?
The key is that $R_1\subseteq R_2$ implies $R_1^2\subseteq R_2^2$. Together with $I_A^2=I_A$ and $(R^{-1})^2=(R^2)^{-1}$ the three properties for $R^2$ follow. But actually the situation is much simpler: While transitivity (alone) of $R$ in fact just means $R^2\subseteq R$, transitivity plus reflexivity means that $$R^2=(R\cup I_A)^2=R^2\cup RI_A\cup I_AR\cup I_A^2=R^2\cup R\cup I_A=R. $$ Hence $R^2$ is an equivalence relation because it is the same relation as $R$. (We also note that we did not need symmetry to show $R^2=R$, only transitivity and reflexivity; hence $R^2=R$ holds as well for relations such as $\le$.)
find norm of linear operator
You have $|Tf| \le \int_0^1 \|f\| dx = \|f\|$, so $\|T\| \le 1$. Let $f(x) = 1$ for all $x$ to get $|Tf| = 1$ hence $\|T\| =1 $.
Total Utility Value Composition of Different Utility Functions
For equal weighting, the utility as a function of $X$ is constant. And for a linear function (which the linear combination of linear utility functions has to be), the maximum will always be at the boundary, so either $x=0$ or $x=1,000$. You need a non-linearity to have a maximum that is not on the boundary.
Prove that every element of E is equal to a product of primas.
$E$ being the set of all positive even numbers means each element has at least one factor of $2$. Thus, the product of $2$ elements of $E$ would have at least $2$ factors of $2$ and, as such, any element with only $1$ factor of $2$ cannot be a product of $2$ elements. By definition, this means these numbers are all "prima", including $2$, $6 = 2 \times 3$, $10 = 2 \times 5$, etc. These elements are all products of primas, namely just the $1$ prima of themselves. Each $e \in E$ with $2$ or more factors of $2$ can be written as $e = 2^n m$, where $n \ge 2$ and $m$ is an odd integer. In these cases, $e = 2 \times 2^{n-1} m$, where $2$ and $2^{n-1} m$ are each a member of $E$. Thus, this shows that, by the definition, these elements are not considered to be prima, e.g., $4 = 2 \times 2, 8 = 2 \times 4, 12 = 2 \times 6$, etc. Also, note that $e = \left(\Pi_{i=1}^{n-1} 2\right)\left(2m\right)$, where $2$ and $2m$ are each primas, so $e$ is a product of primas, e.g., $24 = 2 \times 2 \times 6$. As this covers all of the cases, it shows that all elements of $E$ can be written as a product of primas. However, note that, unlike with primes, the set of prima factors is not unique: this happens for any integer with $2$ or more factors of $2$ whose odd part $m$ has $2$ or more prime factors. For example, $60 = 2 \times 30 = 10 \times 6$.
Trying to identify mathematical symbol
This looks like U+03D0 GREEK BETA SYMBOL. See the unicode chart containing it. Apparently it's just a typographical variant of $\beta$ that someone may have used for some particular purpose, like some distinguish $\phi/\varphi$, $\epsilon/\varepsilon$, $\theta/\vartheta$. See also this mailing list thread. It seems to be unknown whether this symbol ever had any mathematical use.
Absolute Convergence of Improper Integral of sinx/(e^x-e^(-x))
The integrand is $$\frac 12 \sin (x) \text{csch}(x)$$ Composing Taylor series, you have $$\frac 12 \sin (x) \text{csch}(x)=\frac{1}{2}-\frac{x^2}{6}+O\left(x^4\right)$$ so, no problem around $x=0$
Convergence of a characteristic function
You are almost done. $$\log (\psi_n(t)) = \sum_{j=1}^n \left(\frac{\cos(tj/n)-1}{j} \right) = \sum_{j=1}^n \left(\frac{\cos(tj/n)-1}{j/n} \right) \frac1n $$ This is nothing but a Riemann sum. In the limit as $n \rightarrow \infty$, we get $$\lim_{n \rightarrow \infty} \log (\psi_n(t)) = \lim_{n \rightarrow \infty} \sum_{j=1}^n \left(\frac{\cos(tj/n)-1}{j/n} \right) \frac1n = \int_0^1 \frac{\cos(tx) - 1}{x} dx$$ Hence, $$\log (\psi(t)) = \log \left( \lim_{n \rightarrow \infty} \psi_n(t) \right) = \lim_{n \rightarrow \infty} \log (\psi_n(t)) = \int_0^1 \frac{\cos(tx) - 1}{x} dx$$ For the sake of completeness, a sequence of random variables $X_n$, converges in distribution to a random variable $X$, iff the sequence of characteristic functions of $X_n$, converges to the characteristic function of the random variable $X$ point-wise.
$n! > (n/e)^{n}$ for all $n \geq 1$ by induction
For the induction step, $(k+1)! = (k+1)\,k! > (k+1)(k/e)^k$, and $$(k+1)(k/e)^k=(k+1)k^ke^{-k}\\ =(k+1)^{k+1}e^{-k-1}\,e\left(\frac k{k+1}\right)^k\\ \geq(k+1)^{k+1}e^{-k-1}$$ because $e>\left(\frac{k+1}k\right)^k$, hence $\left(\frac k{k+1}\right)^k>1/e$ and the factor $e\left(\frac k{k+1}\right)^k$ is at least $1$. So the next rung of the inequality has been established.
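A quick numerical check of the inequality for small $n$ (a sketch):

```python
# Check n! > (n/e)^n for n = 1..10.
import math

for n in range(1, 11):
    print(n, math.factorial(n), (n / math.e)**n, math.factorial(n) > (n / math.e)**n)
```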
Is the relation $R = \{a,b\}$ transitive?
Yup, that's transitive! For every triple of elements $a, b, c$, if $aRb$ and $bRc$ then $aRc$. It doesn't matter that there are some elements - namely, $5$ and $6$ - which can't be "extended" to form a triple. There's no more to transitivity than the definition you wrote above.
How to solve a symbolic non-linear vector equation?
Let's rewrite: $$a(P_0 - P_2 + t(V_0-V_2)) + b(P_1 - P_2 + t(V_1 - V_2)) = P - P_2 - t V_2$$ which is linear in $a$ and $b$. If we let $A=P_0-P_2$ and $A'=V_0-V_2$ and $B=P_1-P_2$ and $B'=V_1-V_2$ and $C=P-P_2$ and $C'=-V_2$ then you have $$a(A + tA') + b(B + tB') = C + tC'$$ This can be written as a matrix equation: $$ \begin{bmatrix} A_1 + t A'_1 & B_1 + t B'_1 & C_1 + tC'_1 \\ A_2 + t A'_2 & B_2 + t B'_2 & C_2 + tC'_2 \\ A_3 + t A'_3 & B_3 + t B'_3 & C_3 + tC'_3 \end{bmatrix} \begin{bmatrix} a \\ b \\ -1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0\end{bmatrix}$$ or $Ax=0$, with suitable definitions for $A$ and $x$, and with both $A$ and $x$ unknown. Now you know that this only has a nonzero solution if $\det A = 0$, which gives you a cubic in $t$. Solving for $t$ using your favorite cubic solver then gives you $Ax=0$ with only $x$ unknown - and $x$ is precisely a null vector of the matrix $A$ (an eigenvector with eigenvalue $0$). The fact that the third component of $x$ is $-1$ forces the values of $a$ and $b$, and you are done.
Minimization of integrals in real analysis
Let $f(a,b,c)$ be defined as $$f(a,b,c)=\int_{-\pi}^\pi \left(\gamma -a-b\cos \gamma -c\sin \gamma\right)^2\,d\gamma$$ The first partial derivatives vanish at a local maximum or minimum. We will now evaluate the three first partial derivatives. $$\begin{align} \frac{\partial f}{\partial a}&=-2\int_{-\pi}^\pi \left(\gamma -a-b\cos \gamma -c\sin \gamma\right)\,d\gamma\\\\ &=4\pi a \implies a=0 \\\\ \frac{\partial f}{\partial b}&=-2\int_{-\pi}^\pi \left(\gamma -a-b\cos \gamma -c\sin \gamma\right) \cos \gamma \,d\gamma\\\\ &=2\pi b\implies b=0 \\\\ \frac{\partial f}{\partial c}&=-2\int_{-\pi}^\pi \left(\gamma -a-b\cos \gamma -c\sin \gamma\right) \sin \gamma \,d\gamma\\\\ &=2\pi c -2\int_{-\pi}^\pi \gamma \sin \gamma \,d\gamma\implies c=2 \end{align}$$ The minimum value of $f$ is $f(0,0,2)=\frac23 \pi^3-4\pi$
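A numerical confirmation (a sketch) that $(a,b,c)=(0,0,2)$ gives the stated minimum value:

```python
# Evaluate f(a, b, c) numerically and compare with (2/3)pi^3 - 4*pi; also compare nearby points.
import math
from scipy.integrate import quad

def f(a, b, c):
    return quad(lambda g: (g - a - b*math.cos(g) - c*math.sin(g))**2, -math.pi, math.pi)[0]

print(f(0, 0, 2), 2/3*math.pi**3 - 4*math.pi)                    # should agree
print(f(0, 0, 2) < f(0.1, 0, 2), f(0, 0, 2) < f(0, 0.1, 1.9))    # nearby points are worse
```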
What is the cardinality of the union of uncountable sets
If you mean intervals of the reals, the union of all the open intervals is equal to $\mathbb R$ itself. Therefore this union has the power of the continuum $\frak c=2^{\aleph_0}$ for cardinality.
Determination of all invariant subspaces
Assume that $K$ is not algebraically closed. Let $A\in M_n(K)$; if $m_A$, the minimum polynomial of $A$ is in the form $P(x)^m$, where $P$ is an irreducible polynomial of degree $p$, then $\chi_A$, the characteristic polynomial of $A$, is $P(x)^{n/p}$; in particular, $p$ divides $n$. The case $p=1$ is very particular ($A$ admits eigenvectors). Up to a translation, we may assume that $P(x)=x$, that is $A$ is nilpotent. When $K$ is infinite, the number of invariant subspaces (IS) is finite iff $m=n$, that is when $A$ is cyclic. For example, $A=\begin{pmatrix}0&1&0\\0&0&0\\0&0&0\end{pmatrix}$ admits the IS of dimension $2$: $[e_1,e_2+\tau e_3],\tau\in K$ (the orbit of $e_2+\tau e_3$ under the action of $A$ spans this vector subspace). In the sequel, we assume that $p\geq 2$. If $p=n$, that is $\chi_A=P(x)$, then $A$ has no proper IS. If $p<n$, then we consider the Frobenius form of $A$. There are $>0$ integers $d_1\leq\cdots\leq d_{k-1}\leq d_k=m$ s.t. $\sum_{i=1}^k d_i =n/p$ and $A$ is similar over $K$ to $diag(C_{P^{d_1}},\cdots,C_{P^{d_k}})$, where $C_Q$ is the companion matrix of $Q$. Note that if $F$ is an IS, then $\chi_{A|F}$ divides $\chi_A$ and, therefore, is a power of $P$; in particular, $dim(F)$ is a multiple of $p$. We consider the particular case when $p=2,K=\mathbb{R},n=6$. Note that we may assume that $P(x)=x^2+1$. The IS have even dimension. Below, $\tau$ is a real parameter. There are $2$ cases. Case $1$. The Frobenius form of $A$ is $diag(C_P,C_P,C_P)$ and its real Jordan form is again $diag(U,U,U)$ where $U=C_P=\begin{pmatrix}0&-1\\1&0\end{pmatrix}$. Note that $A$ is diagonalizable non-cyclic. There is an infinity of IS over $\mathbb{C}$ and also over $\mathbb{R}$. Indeed, since $A^2+I=0$, the orbit of any non-zero vector spans an IS of dimension $2$. Thus any IS is the sum of such orbits. For example, the orbit of $e_1+\tau e_3$ spans $[e_1+\tau e_3, e_2+\tau e_4]$. Case 2. The Frobenius form of $A$ is $diag(C_P,C_{P^2})$ and its real Jordan form is $diag(U,\begin{pmatrix}U&I_2\\0&U\end{pmatrix})$. Note that $A$ is non-diagonalizable and non-cyclic. Thus again, there is an infinity of IS of dimension $2$ or $4$. Now we use the real Jordan form. For example, the orbit of $e_1+\tau e_3$ spans $[e_1+\tau e_3,e_2+\tau e_4]$ of dimension $2$. The orbit of $e_1+\tau e_5$ spans $[e_3,e_4,e_1+\tau e_5,e_2+\tau e_6]$ of dimension $4$.
Convolution of an Integrable Function and a Compact Smooth Function
No, the resulting function may not have compact support. For example, suppose $g > 0$ on the whole line (say, $g = \exp(-x^2)$) and $$f = \begin{cases}\exp\left(\frac{-1}{1-|x|^2}\right) & |x|<1 \\ 0 & |x| \ge 1\end{cases}$$ - the standard approximate identity. Then for every $y \in \mathbb{R}$ we have $$(f * g)(y) = \int_{-\infty}^{\infty} g(x)f(y-x)\, dx = \int_{y-1}^{y+1} \exp(-x^2)\exp\left(\frac{-1}{1-|y-x|^2}\right)\, dx > 0.$$ In particular, $\mathrm{supp}(f*g)=\mathbb{R}$. However, $f*g$ will, in fact, be smooth and integrable.
Compacity of the unit group of discrete valuation ring.
The field $K_\mathfrak{p}$ has the metric topology induced by the $\mathfrak{p}$-adic absolute value; any subset of $K_\mathfrak{p}$ then carries the subspace topology (this applies in particular to the valuation ring and its unit group). Regarding the compactness of $U_\mathfrak{p}$ in this topology, as Mindlack says, one usually first proves compactness of the valuation ring itself; since $U_\mathfrak{p}$ is easily seen to be a closed subset of the valuation ring, it too is compact. The compactness of the valuation ring requires more work, but the key ingredient for the proof is the finiteness of the residue field.
Structure of finite pretty ring
The answer to 1) is Yes and 2) is No. Earlier you established they have exactly one unit, and that implies they are reduced. A finite reduced ring is a finite direct product of fields (by the Artin-Wedderburn theorem and Wedderburn's little theorem). Such a product must be commutative, so this answers 2). Clearly if any of the fields is something more than $F_2$, you have extra units. So all the fields have to be isomorphic to $F_2$. This answers 1). 3) I've never heard of them. The uniqueness condition is highly restrictive. TBH I am not sure why one would choose this definition given that "has only one unit" is so much more concise. A more popular subject of study is that of clean rings, in which elements are the sum of a unit and an idempotent. There is a notion of a uniquely clean ring which additionally requires uniqueness. Update Just now I ran across a paper defining a generalization of clean rings called left unit fusible rings which is similar to what you describe with uniqueness dropped, and "nonunit" replaced by "zero divisor." Ghashghaei, E., & McGovern, W. W. (2017). Fusible rings. Communications in Algebra, 45(3), 1151-1165.
$\sin 4\alpha = 2\sin 2\alpha \times \cos 2\alpha $?
Of course yes. If you like, you can also make a simple substitution to see it more clearly. For example, your second example: Let $\lambda=5\alpha$: then, $\sin 10\alpha = \sin 2\lambda = 2\sin\lambda\cos\lambda = 2\sin 5\alpha\cos 5\alpha$.
Why is $0 \to \tilde{H}_0(X) \to H_0(X) \to H_0(\{x\}) \to 0$ exact?
You probably know that Eilenberg and Steenrod introduced axioms to define the concept of a homology theory on an abstract level. Singular homology is one example of such a theory, but there are many other. Given such a homology theory, a general method to define reduced homology groups is $$(D) \quad \tilde H_n(X) = \ker (c_* : H_n(X) \to H_n(*))$$ where $c : X\to *$ denotes the unique map to the one-point space $*$. See Tyrone's comment. However, the standard &quot;textbook approach&quot; for singular homology is to define reduced homology groups $\tilde H_n(X)$ as the homology groups of the augmented chain complex. Doing so, equation $(D)$ becomes a theorem which requires a proof. Working with the augmented chain complex yields $\tilde H_n(X) = H_n(X)$ for $n &gt; 0$ (both are the same quotient groups), thus trivially $(D)$ is satisfied for $n &gt; 0$ since $H_n(*) = 0$. In dimension $0$ let us observe that $\tilde H_0(X)$ is (in contrast to the kernel definition) not a genuine subgroup of $H_0(X)$, but there is a canonical group monomorphism $\iota : \tilde H_0(X) \to H_0(X)$: The first group is $\ker \epsilon / \text{im} \partial_1$, the second is $C_0(X) / \text{im} \partial_1$ and $\iota$ is induced by the inclusion $\mu : \ker \epsilon \hookrightarrow C_0(X)$. If we understand this point, we may laxly write $\tilde H_0(X) \subset H_0(X)$. However, the precise statement is that $\text{im} \iota = \ker c_*$. Consider the induced map $c_\# : C_0(X) \to C_0(*)$ on chain complexes. Then $$\ker c_\# \hookrightarrow C_0(X) \stackrel{c\#}{\to} C_0(*)$$ is trivially exact. But by the definitions of $c_\#$ and $\epsilon$ we have $\ker c_\# = \ker \epsilon$, thus $$\ker \epsilon \stackrel{\mu}{\to} C_0(X) \stackrel{c\#}{\to} C_0(*)$$ is exact. The induced $c_* : H_0(X) \to H_0(*)$ is given by $$c_*([\xi]) = [c_\#(\xi)] . $$ Thus $[\xi] \in \ker c_*$ means $[c_\#(\xi)] = 0$. Since the quotient map $C_0(*) \to H_0(*)$ is an isomorphism, the latter is equivalent to $c_\#(\xi) = 0$ and therefore equivalent to $\xi \in \text{im} \mu$ which is the same as $[\xi] = [\mu(\eta)] = \iota([\eta]) \in \text{im} \iota$. Therefore $\ker c_* = \text{im} \iota$ as desired.
Ordered pairs that satisfy inequalities
I found these $17$ solutions by assigning values to $p$ and finding possible values for $q$: $$\{(0,0), (0, \pm 1), (1, 0), (1,\pm 1), (1, \pm 2), (-1, 0),(2,0),(2,\pm 1),(2,\pm 2),(-2,0),(\pm 3,0)\}$$
Closed form of recursively defined sequence
Hint to see that the sequence converges: Note that the term $a_{n+2}$ is a weighted mean of its two predecessors. Hint to find the limit: Try with $a_0=0$ and $a_1=1$. Then, scale and translate.
Solve for x: $5x=1 \bmod 12$
The solution is $x = 5^{-1} \pmod {12}$. Basically you want to calculate the modular inverse of $5 \pmod {12}$, which is represented as $5^{-1} \pmod{12}$. You can use the Extended Euclidean algorithm for that. But for small moduli, it's easier to just do trial and error. You want the unique number (modulo $12$) that when multiplied by $5$ will give you a result that is $1$ modulo $12$. You only have to test numbers in the range $1$ to $11$. You can quite easily see that $(5)(5) = 25 \equiv 1 \pmod {12}$. So $5^{-1} \equiv 5 \pmod{12}$ and the solution is $x \equiv 5 \pmod{12}$.
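In code, two quick ways to get the same inverse (a sketch; `pow` with a negative exponent needs Python 3.8+):

```python
# Modular inverse of 5 mod 12, two ways.
print(pow(5, -1, 12))                                  # built-in modular inverse
print([x for x in range(1, 12) if (5 * x) % 12 == 1])  # trial and error
```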
complex integral problem
Hint: The integrand isn't holomorphic, but $\lvert z\rvert$ is constant on the contour, so ...
Compute the volume of a cube not axis-aligned
Not sure if I understand exactly what you mean, but it seems you can calculate the length of a side of the cube by applying the formula for the norm of a vector, i.e. the length of the side between vertices $(x_1,y_1,z_1)$ and $(x_2,y_2,z_2)$ equals $\sqrt{(x_1-x_2)^2 + (y_1-y_2)^2 + (z_1 - z_2)^2}$, and then applying the volume formula for a cube.
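A minimal sketch, assuming you know two adjacent vertices of the cube (the coordinates below are made up):

```python
# Side length from two adjacent vertices, then volume = side**3.
import numpy as np

p1 = np.array([1.0, 2.0, 2.0])    # two adjacent vertices of the cube (hypothetical values)
p2 = np.array([3.0, 4.0, 3.0])
side = np.linalg.norm(p2 - p1)
print(side**3)                    # volume of the cube
```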
Finite subsets of $\mathcal{U}$ don't cover $C \setminus \{p\}$
$\newcommand{\ms}{\mathscr}\newcommand{\ext}{\operatorname{ext}}$Let $\ms{F}$ be a finite subset of $\ms U$, say $\ms F=\{\ext(a_1,b_1),\dots,\ext(a_n,b_n)\}$. Let $a=\max\{a_1,\dots,a_n\}$ and $b=\min\{b_1,\dots,b_n\}$; by hypothesis $a<p<b$. To show that $\ms F$ does not cover $C$, it suffices to show that $(a,b)\ne\{p\}$, i.e., that at least one of the open intervals $(a,p)$ and $(p,b)$ is non-empty. In fact both are non-empty, and you can show this simply by showing that $\qquad\qquad\qquad\qquad\qquad$if $x,y\in C$ and $x<y$, then $(x,y)\ne\varnothing$. This follows from the fact that $C$ is connected, though the details of the argument depend on just what definition of connectedness you’re using.
Number of solutions of a linear equation $AX=B$
Another, more abstract way, to see it is this: suppose the system has at least two distinct solutions. Then there exist $X_1$ and $X_2$ such that $AX_1 = AX_2 = B$ with $X_1 \ne X_2$. Thus $A(X_1 - X_2) = 0$, while $X_1 - X_2 \ne 0$. Let $C$ be the set of non-vanishing columns of $X_1 - X_2$. $C \ne \phi$ since $X_1 - X_2 \ne 0$. Let $D$ be any matrix formed by selecting its columns from $span(C)$. Then $AD = 0$, so $A(X_1 + D) = AX_1 + AD = AX_1 = B$; but clearly the number of such matrices $D$ is infinite, whence the number of matrices $X_1 + D$ is infinite as well. QED and Cheers.
Every algebra could be decomposed into a part with unit, and without a unit. Question on uniqueness proof.
The point is that $C \subset A$ and $(e - e_1) \notin A$ (and $A$ is a subspace of $A$-with-a-unit-adjoined): Let $V$ be a vector space, and suppose $W \subset V$ (a subspace) and $v \in V \setminus W$ are such that $W + v = V$. Suppose $C, C' \subset W$ are such that $C + v = C' + v$. Then $C = (C + v) \cap W = (C' + v) \cap W = C'$.
Let $A = \mathbb{Z}$, $B = [−1, \pi]$, $C = (2, 7)$. List all elements of $A \cap (B^c \cap C)$.
You're right. Here are the intermediate steps: $B^c=(-\infty, -1) \cup (\pi, \infty)$, so $B^c\cap C = (\pi, 7)$. And as you wrote, the only integers in that interval are $4,5,6$.
Non linear ordinary differential equation
Yes, you can easily solve this using available numerical solvers. As an example, I will use Sage since it's available online and is free. First step is to convert to a first-order system by introducing another unknown $u=y'$: $$ y'=u,\quad u'=\sin x-\sin(x+y) $$ The rest is up to Sage:

var('x y u')
P = desolve_system_rk4([u, sin(x) - sin(x+y)], [y, u], ics=[0, 0, 1], ivar=x, end_points=30)
list_plot([[i, j] for i, j, k in P])

Pretty neat plot (figure omitted here). It does some other weird things later on, as you can see by increasing the end_points parameter.
Order of operations with matrix multiplication
Matrix multiplication is associative but not commutative. If $B = S^{-1}AS$ and $B$ is invertible then you can show $B^{-1} = S^{-1}A^{-1}S$ since $(S^{-1}A^{-1}S)(S^{-1}AS) = S^{-1}(A^{-1}(SS^{-1})A)S = S^{-1}(A^{-1}A)S = S^{-1}S = I$ and similarly $(S^{-1}AS)(S^{-1}A^{-1}S) = S^{-1}(A(SS^{-1})A^{-1})S = S^{-1}(AA^{-1})S = S^{-1}S = I$ More generally in multiplication of invertible square matrices, you have $(CD)^{-1} = D^{-1}C^{-1}$ and you can prove this the same way
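A quick numerical illustration of $(CD)^{-1}=D^{-1}C^{-1}$ (a sketch with random matrices):

```python
# Check the inverse-of-a-product identity numerically.
import numpy as np

rng = np.random.default_rng(1)
C, D = rng.random((3, 3)), rng.random((3, 3))
print(np.allclose(np.linalg.inv(C @ D), np.linalg.inv(D) @ np.linalg.inv(C)))  # True
```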
Extending multiple sink problem for multigraphs
The easiest thing to do is to subdivide the edges causing you problems. That is, if there are multiple edges from $u$ to $v$ with costs $c_1, c_2, \dots, c_k$, then replace the $i^{\text{th}}$ edge with a pair of edges $uw_i, w_iv$, each with cost $\frac12 c_i$ (where $w_i$ is a new vertex created just for this edge and not used anywhere else). Once this is done, you have a simple graph, and edge-disjoint paths in the new graph correspond to edge-disjoint paths in the old graph with the same cost.
How to partition a set into k subsets, given minimum subset size is limited by some constant?
You're looking for triples of three numbers (which I'm going to list in nondecreasing order) that sum to 36, and where each one is at least 5. Instead you can look for triples of three numbers $p \le q \le r$ that sum to $36 - 15 = 21$, where each is nonnegative, and then the numbers you originally sought are $p+5, q+5, r+5$. (I.e., there's a 1-1 correspondence between the collection of partition-sizes that you seek and the collection of $(p, q, r)$ with $0 \le p \le q \le r$ and $p+q+r = 21$.) These latter triples can be divided into those with $p = 0$, where the remaining numbers sum to 21; those where $p = 1$, and the remaining numbers sum to $20$...but both are at least $1$; those where $p = 2$, and the remaining numbers sum to $19$, but both are at least $2$; etc. But the same trick applies to those: pairs of numbers, in nondecreasing order, summing to $19$, but both being at least $2$, have the same count as pairs of (nondecreasing) numbers summing to $15$, but both being at least $0$. Let's write $$ C(n, k) $$ for the number of nondecreasing lists of $n$ nonnegative numbers that sum to exactly $k$. Then what I've said above shows that the number you're looking for is $C(3, 21)$, and that $$ C(3, 21) = C(2, 21) + C(2, 21-3\cdot 1) + C(2, 21-3\cdot 2) + \ldots + C(2, 21-3\cdot 7). $$ Now how large is $C(2, k)$? It, too, satisfies a recurrence: the first number is either $0$ (in which case the remaining number must sum to $k$), or it's $1$, in which case the remaining number must be at least $1$ and sum to $k-1$, i.e., the count of which is the same as numbers that are at least $0$ and sum to $k-2$, etc. $$ C(2, k) = C(1, k) + C(1, k-2 \cdot 1) + C(1, k-2 \cdot 2) + \ldots + C(1, k-2 \cdot \lfloor k/2 \rfloor). $$ Now how large is $C(1, s)$ for any nonnegative $s$? It's exactly $1$. Since the sum above has $\lfloor k/2 \rfloor + 1$ terms, that means $C(2, k) = \lfloor k/2 \rfloor + 1$. So $$ C(3, 21) = C(2,21) + C(2,18) + C(2,15) + C(2,12) + C(2,9) + C(2,6) + C(2,3) + C(2,0) \\ = 11 + 10 + 8 + 7 + 5 + 4 + 2 + 1 = 48. $$ Let me just sanity check by writing them down (back in the original sizes, each at least 5)...there aren't that many.
12, 12, 12
11, 12, 13
11, 11, 14
10, 13, 13
10, 12, 14
10, 11, 15
10, 10, 16
9, 13, 14
9, 12, 15
9, 11, 16
9, 10, 17
9, 9, 18
8, 14, 14
8, 13, 15
8, 12, 16
8, 11, 17
8, 10, 18
8, 9, 19
8, 8, 20
7, 14, 15
7, 13, 16
7, 12, 17
7, 11, 18
7, 10, 19
7, 9, 20
7, 8, 21
7, 7, 22
6, 15, 15
6, 14, 16
6, 13, 17
6, 12, 18
6, 11, 19
6, 10, 20
6, 9, 21
6, 8, 22
6, 7, 23
6, 6, 24
5, 15, 16
5, 14, 17
5, 13, 18
5, 12, 19
5, 11, 20
5, 10, 21
5, 9, 22
5, 8, 23
5, 7, 24
5, 6, 25
5, 5, 26
That's $37 + 11 = 48$ triples in all, matching the formula. (The easy mistake to make here is an off-by-one in $C(2, k)$: it is $\lfloor k/2 \rfloor + 1$, not $\lfloor (k+1)/2 \rfloor$; the two differ whenever $k$ is even.)
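A brute-force check (illustrative, not part of the original argument) that counts the triples directly:

# Count triples 5 <= a <= b <= c with a + b + c = 36.
count = 0
for a in range(5, 13):                      # smallest part is at most 36 // 3 = 12
    for b in range(a, (36 - a) // 2 + 1):   # middle part; c = 36 - a - b is then >= b
        count += 1
print(count)   # 48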
Infinite product with sign changes.
An example to consider. $$ a_n = \frac{(-1)^n}{\sqrt{n}}\;. $$ [Note, for $n \ge 2$, we have $-1 < a_n < 1$ so $0 < 1+a_n < 2$.] Then of course the alternating series $\sum_{n=2}^\infty a_n$ converges. But what about the infinite product $$ \prod_{n=2}^\infty\big(1+a_n\big)=\prod_{n=2}^\infty\left(1+\frac{(-1)^n}{\sqrt{n}}\right)\;? $$ We have as $n \to \infty$: $$ \log\left(1+\frac{(-1)^n}{\sqrt{n}}\right) = \frac{(-1)^n}{\sqrt{n}}-\frac{1}{2n} + O(n^{-3/2})\;. $$ Thus, $$ \sum_{n=2}^\infty\log\left(1+\frac{(-1)^n}{\sqrt{n}}\right) $$ is the sum of a series $\sum\frac{(-1)^n}{\sqrt{n}}$ that converges, a series $-\sum\frac{1}{2n}$ that diverges to $-\infty$, and a series that converges by limit comparison with $\sum n^{-3/2}$. Therefore $$ \prod_{n=2}^\infty\big(1+a_n\big) $$ diverges to $0$. The reciprocal product $$ \prod_{n=2}^\infty\frac{1}{1+a_n}= \prod_{n=2}^\infty\big({1+b_n}\big) $$ diverges to $+\infty$, and here $\sum b_n$ diverges to $+\infty$ as well (as it must: since $\log(1+b)\le b$ for $b>-1$, a product $\prod(1+b_n)$ can never diverge to $+\infty$ when $\sum b_n$ converges). And $-1 < b_n < 1$ for $n \ge 4$.
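For intuition, here is a quick numerical illustration (not part of the argument) of the partial products drifting toward $0$:

import math

P = 1.0
for n in range(2, 10**6 + 1):
    P *= 1 + (-1) ** n / math.sqrt(n)
    if n in (10**2, 10**3, 10**4, 10**5, 10**6):
        print(n, P)
# The printed partial products keep shrinking (roughly like n**(-1/2)),
# consistent with the product diverging to 0 even though the alternating sum converges.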
choose at most k from n
For completeness and convenience I have taken the information in the comments and put it here as an answer. Thus, my answer is essentially taken from the accepted answer to this very similar question. Yes, there are various closed form expressions for this summation. However, they are all relatively complex, and so in practice using them to calculate the summation may actually be more complicated than just calculating the summation term by term! If $S= \sum_{k\le K}{n\choose k} $, then in terms of hypergeometric ${}_2F_1 (a,b,c,d)$ functions, $$ S = 2^n - 1 - \binom{n}{1+K} \; {}_2F_1 (1, 1+K-n, 2+K,-1) $$ Denote the fractional part of $z$ as $\lfloor z \rfloor = z - [z]$. Now, if for any positive integer $j>1$ we let $x = j^{-n}$, then there are the two following intriguing expressions (which are a result of generating functions): $$ S =\left\lfloor\frac{(j^{K+1}\cdot(1+x))^{n}}{j^{n}-1}-j^{n}\cdot\left\lfloor\frac{(j^{K}\cdot(1+x))^{n}}{j^{n}-1} \right\rfloor\right\rfloor$$ And, $$S=\left\lfloor\frac{(1+x)^n}{x^{K}(1-x)}\right\rfloor \pmod{4^n}.$$
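As noted above, term-by-term summation is usually the simplest option in practice; a minimal sketch in Python:

from math import comb

def choose_at_most(n, K):
    """Sum of binomial(n, k) for k = 0, 1, ..., K."""
    return sum(comb(n, k) for k in range(K + 1))

print(choose_at_most(20, 5))   # 21700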
How does this integer division work?
$$\mu t[((x+1)\dot-(\operatorname{mult}(t,y)+y))=0] $$ is the smallest $t\in\Bbb N_0$ with $((x+1)\dot-(\operatorname{mult}(t,y)+y))=0$, i.e., the smallest $t$ with $ty+y\ge x+1$, i.e., the smallest $t$ with $(t+1)y>x$, hence (as $y>0$) the greatest $t$ with $ty\le x$, as desired.
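A small sketch (the names monus and div are my own) that mirrors the bounded-minimization definition and confirms it computes integer division:

def monus(a, b):
    """Truncated subtraction on the naturals: a - b if a >= b, else 0."""
    return max(a - b, 0)

def div(x, y):
    """Least t with (x + 1) monus (t*y + y) == 0; assumes y > 0."""
    t = 0
    while monus(x + 1, t * y + y) != 0:
        t += 1
    return t

assert all(div(x, y) == x // y for x in range(100) for y in range(1, 12))
print("div agrees with floor division")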
Rewriting supremum over an image set
Yes, these are always equal. The notation $\sup_{x \in f(C)} h(x)$ really means the supremum of the set $\{h(x):x\in f(C)\}$, which is equal to the set $\{h(f(c)):c\in C\}$. So both of your suprema are the suprema of the same set.
Joint/Simultaneous optimization
There are many ways to make it more exact, but they lead to different answers, and there is no abstract sense in which one of these answers is better than the others. So, some examples: you could try to minimize $f^2+g^2$ (this is very popular, under the name "least squares", or "ell-two norm"), or you could try to minimize $|f+g|$, or more generally $\root q\of{f^q+g^q}$ for some chosen number $q$, $1\le q\lt\infty$; you could try to minimize $\max(f,g)$, or $\sqrt{|fg|}$, or the harmonic mean $2fg/(f+g)$. Each of these has its uses. You have to decide which one gives results closest to what you are trying to get. The ones I've given have been symmetric in $f$ and $g$. In case it's more important to minimize one than the other, there are weighted versions of all of these, like minimizing $f^2+17g^2$, or $|39f+g|$, and so on, and so forth. EDIT: Let's look at an example. Let $f(x)=|x|$, $g(x)=|4-2x|$, and try to jointly minimize on $[0,2]$. Here are some candidates: $$\begin{array}{ccccccl} x & f(x) & g(x) & \max & f^2+g^2 & f+g & \text{features}\\ 4/3 & 4/3 & 4/3 & 1.33 & 3.56 & 2.67 & \text{minimizes maximum}\\ 8/5 & 8/5 & 4/5 & 1.6 & 3.2 & 2.4 & \text{minimizes } f^2+g^2\\ 2 & 2 & 0 & 2 & 4 & 2 & \text{minimizes } f+g \end{array}$$
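A crude grid search over $[0,2]$ (illustrative only) reproduces the table's minimizers:

import numpy as np

x = np.linspace(0, 2, 200001)
f = np.abs(x)
g = np.abs(4 - 2 * x)
for name, obj in [("max(f,g)", np.maximum(f, g)),
                  ("f^2+g^2", f**2 + g**2),
                  ("f+g", f + g)]:
    print(name, "minimized near x =", round(float(x[np.argmin(obj)]), 3))
# max(f,g) -> x near 4/3,  f^2+g^2 -> x near 8/5,  f+g -> x = 2.0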
Lagging Distance on Pirate-Merchant Pursuit Curve
Let $p(x) = \frac{dy}{dx}$ and $v_p = v_m = v$. From the arc-length formula, we have $$\int_0^x \sqrt{1+p(z)^2}\,dz = v t = (x_0-x)p(x) + y$$ Taking an $x$ derivative on both sides: $$\sqrt{1+p(x)^2} = (x_0-x)p'(x) $$ Separating variables: $$ \frac{dp}{\sqrt{1+p^2}}= \frac{dx}{x_0-x} $$ Integrating and imposing the condition that $p=0$ at $x=0$, we get $$\ln(p+\sqrt{1+p^2}) = -\ln\left(1-\frac{x}{x_0}\right) $$ With some algebraic manipulation, we find that $$p = \frac12 \left[\left(1-\frac{x}{x_0}\right)^{-1}-\left(1-\frac {x}{x_0}\right)\right] = \frac {2xx_0-x^2}{2x_0 (x_0-x)}$$ Equating this to our original expression for $p(x) $, $$p = \frac {2xx_0-x^2}{2x_0 (x_0-x)}=\frac{v_mt-y}{x_0-x} $$ Or $$v_mt-y = \frac{2xx_0-x^2}{2x_0}$$ In the limit that $x \to x_0$, the LHS is precisely the quantity that we are looking for, so: $$D = \lim_{x \to x_0} \frac{2xx_0-x^2}{2x_0} = \frac12 x_0$$
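A quick numerical check (not part of the derivation): a discrete simulation of the pursuit with unit speeds, the merchant starting a distance $x_0 = 1$ away and sailing straight ahead, and the pirate always steering toward the merchant, shows the lag settling at $x_0/2$.

import numpy as np

x0, v, dt = 1.0, 1.0, 1e-4
merchant = np.array([x0, 0.0])   # sails straight "north" at speed v
pirate = np.array([0.0, 0.0])    # always steers toward the merchant at speed v
for _ in range(200_000):         # total time 20; the gap settles quickly
    d = merchant - pirate
    pirate += v * dt * d / np.linalg.norm(d)
    merchant[1] += v * dt
print(np.linalg.norm(merchant - pirate))   # close to x0/2 = 0.5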
Formal definition of a functor in $\mathsf{ZFC}$
Yes, the definition of a functor as a quadruplet is standard (just as the standard definition of a function between sets is as a triplet). This approach does not lead to the difficulties you mentioned.
Is Least Upper Bound of Empty Set equal to Greatest Lower Bound of a another Set?
Every element of the lattice is vacuously an upper bound of the empty set, since the empty set has no elements to exceed. The minimal element is therefore an upper bound of the empty set, and it is less than or equal to every other upper bound (that is, to every element of the lattice). Hence, by definition, it is the least upper bound of the empty set.
Comparison test in improper integrals
It's not true as stated. For example, $1/x^3 = o(1/(x+1)^2)$ as $x \to \infty$, but $\int_0^\infty 1/x^3\; dx$ diverges while $\int_0^\infty 1/(x+1)^2\; dx$ converges. Of course here the divergence happens as $x \to 0$, not as $x \to \infty$. A correct statement is: if $f$ and $g$ are bounded on $(0, \infty)$, $g \ge 0$, $\int_0^\infty g(x)\; dx$ converges, and $f(x) = o(g(x))$ as $x \to \infty$, then $\int_0^\infty f(x) \; dx$ converges. On the other hand, if $f$ and $g$ are bounded on $(0,\infty)$, $\int_0^\infty g(x)\; dx$ converges, and $f(x) = o(|g(x)|)$, then it is possible for $\int_0^\infty f(x)\; dx$ to diverge, even if $f \ge 0$. The crucial point is not whether $f \ge 0$, rather whether $g \ge 0$.
Does a strictly convex and continuous function always exist?
No, not necessarily. In particular, consider some infinite-dimensional normed linear space $X$, but equip $X$ with the weak topology. Suppose there was a weakly continuous, strictly convex function from $C = X$ to $\Bbb{R}$. By adding affine functions and translating the graph as necessary, we may assume without loss of generality that $f$ achieves a minimum of $0$ at $0 \in X$. Since $f$ is weakly continuous, $f^{-1}(-\infty, 1) = f^{-1}[0, 1)$ is a weakly open set. Weakly open sets in $X$ must contain a finite-codimensional affine subspace, and since $X$ is infinite-dimensional, this subspace is non-trivial. Choose a line contained in this non-trivial affine subspace, and identify it with $\Bbb{R}$. The result is a strictly convex function $$g : \Bbb{R} \to \Bbb{R}$$ such that $g(x) \in [0, 1)$ for all $x \in \Bbb{R}$. But, this is impossible! Pick two distinct points $x_1, x_2 \in \Bbb{R}$ such that $x_1 < x_2$. Then, if $x > x_2$, $$g(x_2) \le \frac{x_2 - x_1}{x - x_1}g(x) + \left(1 - \frac{x_2 - x_1}{x - x_1}\right)g(x_1).$$ Take the limit as $x \to \infty$, remembering that $g(x)$ is bounded, and we see that $g(x_2) \le g(x_1)$. But, on the other hand, if $x < x_1$, then similarly, $$g(x_1) \le \frac{x_2 - x_1}{x_2 - x}g(x) + \left(1 - \frac{x_2 - x_1}{x_2 - x}\right)g(x_2),$$ hence as $x \to -\infty$, $g(x_1) \le g(x_2)$. That is, $g(x_1) = g(x_2)$, hence $g$ is constant, contradicting $g$ being strictly convex.
Eigenvalues of the commutator of two triangular matrices
Write $Y = XT_1X^*$ and $Z = XT_2X^*$. Then $YZ = XT_1T_2X^*$ and $YZ-ZY = X(T_1T_2 - T_2T_1)X^*$, so the eigenvalues of $YZ-ZY$ equal the eigenvalues of $T_1T_2 - T_2T_1$. Since $T_1$ and $T_2$ are both upper triangular, $T_1T_2$ and $T_2T_1$ are both upper triangular and have identical main diagonals, so $T_1T_2 - T_2T_1$ is an upper triangular matrix with all $0$'s down the main diagonal. Its eigenvalues are therefore all $0$; in particular its trace is $0$ and its determinant is $0$.
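A numerical illustration (with a random orthogonal $X$ of my own choosing, purely for demonstration): the eigenvalues of $YZ - ZY$ come out close to zero, as the argument predicts.

import numpy as np

rng = np.random.default_rng(0)
n = 4
T1 = np.triu(rng.standard_normal((n, n)))
T2 = np.triu(rng.standard_normal((n, n)))
X, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthogonal matrix
Y = X @ T1 @ X.T
Z = X @ T2 @ X.T
print(np.linalg.eigvals(Y @ Z - Z @ Y))
# All eigenvalues are close to zero (only "close": eigenvalues of a nilpotent
# matrix are numerically sensitive, so expect sizes around 1e-4, not 1e-16).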
Are these Poisson-related problems and are the solutions correct?
Yes, these are Poisson processes, and the solutions are correct. The key thing to note is that summing independent Poisson processes gives a Poisson process. Thinking of Poisson processes as arrivals, this says that if you count arrivals from two independent Poisson processes together, then you get a Poisson process. You can read more about Poisson processes in Section 1.4 of Perla Sousi's lecture notes. The main result being used here is that interarrival times in a Poisson process are exponentially distributed, and hence have the memoryless property. (a/b) The time to get a red is exponential with rate $\lambda_r = 1/10$, green with rate $\lambda_g = 1/15$ and orange with rate $\lambda_o = 1/20$; call these arrival times $E_r$, $E_g$ and $E_o$. The first arrival is then $\min\{E_r,E_g,E_o\}$, and a minimum of independent exponentials is exponential with parameter $\lambda := \lambda_r+\lambda_g+\lambda_o$; the expected time is then $1/\lambda$. For (a), you can just calculate directly $\Pr( \text{Exponential}(\lambda_g) < \text{Exponential}(\lambda_r+\lambda_o) )$. You can also see this as a sort of symmetry argument. Suppose we're trying to calculate $\Pr( E_1 < E_2 )$, where $E_1 \sim \text{Exponential}(1)$ and $E_2 \sim \text{Exponential}(2)$. Write $E_2$ as $\min\{E_{2,1}, E_{2,2}\}$ with $E_{2,1}, E_{2,2} \sim \text{Exponential}(1)$ independent. There are then three exponentials ($E_1$, $E_{2,1}$ and $E_{2,2}$), each of rate 1; if the first is the smallest, then $E_1 < E_2$, while otherwise $E_2 < E_1$. In this set-up, clearly $\Pr(E_1 < E_2) = 1/3$, by symmetry. For (c), just use the memoryless property and the fact that the Poisson processes are independent, so it doesn't matter that you've already seen orange buses.
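A quick Monte Carlo check of (a) and (b), using the rates quoted above (illustrative only):

import numpy as np

rng = np.random.default_rng(1)
n = 10**6
r = rng.exponential(10, n)    # red:    mean 10 minutes, rate 1/10
g = rng.exponential(15, n)    # green:  mean 15 minutes, rate 1/15
o = rng.exponential(20, n)    # orange: mean 20 minutes, rate 1/20
lam = 1/10 + 1/15 + 1/20
first = np.minimum(np.minimum(r, g), o)
print(first.mean(), 1 / lam)                       # both approx. 4.615
print(((g < r) & (g < o)).mean(), (1/15) / lam)    # both approx. 0.308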
Formula to Count the Number of Unique Vertices in a Grid?
For a larger grid with elements of different sizes, you can often rearrange the position and/or orientation of elements in the grid in order to have more "blue" vertices or fewer "blue" vertices. So merely knowing the sizes of the elements and how many you have of each is not enough information. You also need to know how each element was placed in the grid.
Is "being an integral domain" a local property?
As is pointed out in the comments, this is not actually true. The product of any two fields is a counterexample. The problem is, geometrically, your space isn't connected. Indeed, if you start with a Noetherian ring which isn't the product of non-trivial rings, and every localization is integral, then the ring is integral. Geometrically, this is saying that if $X$ is a Noetherian scheme, then $X$ is integral if and only if $X$ is connected and $\mathcal{O}_{X,p}$ is a domain for all $p\in X$. Try to prove this yourself. Here are some observations that might help: 1) If we can show that the irreducible components are disjoint then we're done, because there are finitely many of them (why?) and so we'd have a disconnection of $X$. 2) If two of the components intersected at $p$, you could descend this intersection to $\mathcal{O}_{X,p}$. Why is that bad?
Can someone please help with this natural deduction proof?
1) Unpack the premise with $\land$E. 2) With $p \lor q$ use $\lor$E: from $p$, we get $p \lor (q \land r)$ by $\lor$I. 3) From $q$, use again $\lor$E with $p \lor r$. Again from $p$ we have $p \lor (q \land r)$. 4) From $r$ we get $q \land r$ by $\land$I and then $p \lor (q \land r)$ by $\lor$I. In conclusion, we have derived $p \lor (q \land r)$ in all three branches of our derivation and it's done.
To prove $\det (xy^t)=0$
The statement is true only when $n\ge2$. There are several possible approaches:
1) Since $n\ge2$, there exists $z\ne 0$ with $y^tz=0$; then $(xy^t)z=x(y^tz)=0$, so $xy^t$ is singular and $\det (xy^t)=0$.
2) $\operatorname{rank}(xy^t)\le1$ and $n>1$, so $xy^t$ cannot have full rank and $\det(xy^t)=0$.
3) If all $x_i=0$ then $xy^t=0$, so $\det(xy^t)=0$. Otherwise, if $x_r\ne 0$, then since $n\ge 2$ we can subtract $x_j/x_r$ times the $r$th row from the $j$th row (for some $j\ne r$) to get a row of all $0$'s without changing the determinant, so $\det(xy^t)=0$.
4) $\det(xy^t)=\sum_\sigma \operatorname{sign}(\sigma)\prod_i(xy^t)_{i\sigma(i)}=\sum_\sigma\operatorname{sign}(\sigma) \prod_ix_iy_{\sigma(i)}=\Bigl(\prod_ix_iy_i\Bigr)\sum_\sigma \operatorname{sign}(\sigma)=0$, since $\sum_\sigma \operatorname{sign}(\sigma)=0$ for $n\ge2$.
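A one-line numerical check with arbitrary vectors (illustrative only):

import numpy as np

x = np.array([[1.0], [2.0], [3.0]])
y = np.array([[4.0], [5.0], [6.0]])
print(np.linalg.det(x @ y.T))   # 0, up to floating-point noise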
If eigenvectors of different eigenvalues are orthogonal, then is the matrix normal?
Counterexample: the matrix $$\begin{bmatrix}0&0&0\\0&1&1\\0&0&1\end{bmatrix}$$ meets your criteria, but isn’t even diagonalizable.
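A quick check that this particular matrix is indeed not normal:

import numpy as np

A = np.array([[0., 0., 0.], [0., 1., 1.], [0., 0., 1.]])
print(np.allclose(A @ A.T, A.T @ A))   # False, so A is not normal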
Meaning of "Complx" as notation
Yes, the paragraph right before the first occurrence of the expression implies that $\mathrm{Complx}(a,b)=a+bi$. ... the vector $\mathbf{y}$ is real with dimension $2N$, where $N$ is the number of complex data points used in the inversion. ... For compact programmable expressions, we let the vector $\mathbf{y}$ ... be redefined as complex so that they can be conveniently stored in the computer memory. ... we have for the $j$th element of the first matrix-vector multiplication $$y_j=\mathrm{Complx}\Bigl\{\cdots\Bigr\}$$ (original text)
Find a matrix $R_\alpha \in \mathbb{R}^{2 \times 2}$ such that $f(x) = f_{R_\alpha}(x)$ for every $x \in \mathbb{R}^2$
Hint: The columns of the matrix are $f(1,0)$ and $f(0,1)$. For the second part, look at what happens to $(1,0)$ and $(0,1)$ and apply a little trigonometry.
Let $X\sim\mathrm{Exp}(1),Y\sim\mathrm{Exp}(2)$ independent random variables. Let $Z = \min(X, Y)$. Calculate $E(Z^2)$.
Hint: $P(Z \ge z) = P(X \ge z) P(Y \ge z)$. Find the distribution of $Z$.