Can I find $a$ and $b$ so that $\cosh(ax) - \cosh(bx) = \cosh(cx)$?
Put $x=0$ to see that such an equation cannot hold.
If a continuous function f equals its inverse then there is x such that f(x)=x
Suppose for some $c$ that $f(c) \ne c$. Without loss of generality suppose $f(c) > c$. Then $f(f(c)) = c < f(c)$. Now you can use the intermediate value theorem on the function $f(x) - x$ on the interval $[c, f(c)]$.
Prove that $(T,+,\cdot)$ is a vector space and find its dimension and one basis
We have $$ \begin{bmatrix} 3 & 2 &1 \\ 1& 1&4 \\ 5& 2 & -2 \end{bmatrix} \vec{x}= \begin{bmatrix} a \\ b \\ c \end{bmatrix}$$ The matrix is invertible, so the system has a solution for every right-hand side $(a,b,c)$; hence all of $\mathbb{R}^3$ works, and this is your $T$. $\mathbb{R}^3$ is a vector space of dimension three, and you can use the standard basis for it.
Integrate overlapping parametric function
Your path must be closed in order to enclose an area. $[-3,3]$ will not do; $[-2,2]$ will work. That is, $x(-2) = x(2)$ and $y(-2) = y(2)$. Area: $\int y \ dx = -\int x\ dy$. You can solve either one... but just to demonstrate that the two are equal, I will do both. $\int_{-2}^2 t^2(5t^4 -12t^2) \ dt = -\int_{-2}^2 (t^5 -4t^3)(2t) \ dt\\ \int_{-2}^2 5t^6 -12t^4 \ dt = -\int_{-2}^2 2t^6 -8t^4 \ dt\\ 2(\frac {5}{7} 2^7 - \frac {12}{5} 2^5) = -2(\frac {2}{7}2^7 - \frac {8}{5} 2^5)\\ 2(\frac {512}{35}) = - 2(-\frac {512}{35}) $
Planar subgraph of $K_{4,4}$
Well, the first answer is "so, don't do that". Using Euler's formula for bipartite graphs gives us an easy way to prove that at least $4$ edges need to be deleted. Using Kuratowski's theorem is going to be harder, because we'll need to care about which edges we delete, and because finding minors in a graph is hard. But let's ignore that, and forge on. First of all, although there are many ways to remove $3$ edges from $K_{4,4}$, we'll only need to care about one of them. Suppose at least $2$ of the three edges we delete share an endpoint. Then there's a $K_{3,3}$ subgraph in the graph remaining! From the side where two deleted edges share an endpoint, take the three vertices not including that endpoint (automatically missing those two deleted edges). From the other side, take three vertices not including the endpoint of the third edge. Then all $9$ edges between the vertices we chose are still present, and we get $K_{3,3}$. A $K_{3,3}$ subgraph is definitely a $K_{3,3}$ minor, so in this case, the graph we're left with is definitely not planar. Now suppose we delete three edges that don't share any endpoints. Well, in $K_{4,4}$, all vertices on each side look identical. So we'll get isomorphic graphs no matter which three edges we delete. Here's one way to do it, identical to all the others (the missing edges are edges $15$, $26$, and $37$): Here, we can get a $K_{3,3}$ by deleting edges $28$ and $47$, then contracting edges $17$ and $27$: You can check that if one side has vertices $3,4$, and the combined $\{1,2,7\}$, and the other side has vertices $5, 6, 8$, then all edges between the two sides are present, and we have a $K_{3,3}$ minor.
Construct a digraph which reflects four given rankings and use component analysis to interpret these rankings
To be honest, I don't understand the problem either. I don't see "component analysis" defined in the text, and Google has not made me any more enlightened. But here are some comments: there are four acyclic tournaments, one for each judge, defined by the transitive closure of each path. If the problem required working with tournaments, these would be them. But this doesn't seem to be the way to approach this problem. This is the graph you get if you take the union of the four directed paths given by the four judges. It's possible the author is asking for a "tournament wins analysis" (something that I've never heard of outside this book), like what they do for the tournament in Figure 7.4, except doing it for the above digraph. This could be what the author means with "This kind of argument can be used generally."
PDF of joint multivariate normal distribution
The symbol $\mathcal P$ in your equation stands for a probability density function. In this case, the arguments are vectors. If you want, you can see it as a function that takes in three arguments ($\mathbf s$, $\mathbf p$, and $\mathbf t$) and spits out the density of that triple. As for the shape of the joint density, it is probably not possible to write it down (e.g., I'm missing some details about $\omega$). However, you can use the law of conditional probability to say that $\mathcal P(\mathbf s,\mathbf p ) = \mathcal P(\mathbf p|\mathbf s )\mathcal P (\mathbf s)$ (go on from there to build the full joint distribution). As for the integral, that just means that the authors marginalize $\mathbf p$ and $\mathbf t$ to find the density of $\mathbf s$ alone (you could in fact do as you wrote; i.e. integrate over $s_1$, then $s_2$, and so on). In some way, you can interpret it as saying that you are looking for the distribution of the skill $\mathbf s$ of the players given the observed ranking $\mathbf r$ and assignment $A$ of players into teams. To find this density, you would need the performance of the players $\mathbf p$ and the performance of the teams $\mathbf t$; because these last two are not observed (they are latent variables), you integrate them out to average their influence over all possible values. In general, in Bayesian modeling, models are constructed as hierarchies of random variables that depend on other random variables until they define a big joint multivariate model; then the model is conditioned on the observed variables and the latent variables are integrated out (marginalized, as $\mathbf p$ and $\mathbf t$ are in your example). In general, it is not possible to solve the models' integrals exactly, so they are approximated. In the paper you pointed to, the authors use some variant of expectation propagation.
How to get apparent linear diameter from angular diameter
It appears you are using the wrong angular units: $2\;\tan^{-1}\left(\frac{5}{100}\right)=5.7248$ degrees $=0.099917$ radians. The formula you cite above is valid for a flat object perpendicular to the line of sight. If your object is a sphere, the angular diameter is given by $2\;\sin^{-1}\left(\frac{5}{100}\right)=5.7320$ degrees $=0.100042$ radians. Usually, the angular size is referred to as the apparent size. Perhaps you want to find the actual size of the object which has the same apparent size but lies at a different distance. In that case, as joriki says, just multiply the actual distance by $\frac{10}{100}$ to get the actual diameter. This is a result of the "similar triangles" rule used in geometry proofs. Update: In a comment to joriki's answer, the questioner clarified that what they want to know is how the apparent size varies with distance. The formulae for the angular size come from the diagram above: for the flat object, $\displaystyle\tan\left(\frac{\alpha}{2}\right)=\frac{D/2}{r}$; for the spherical object, $\displaystyle\sin\left(\frac{\alpha}{2}\right)=\frac{D/2}{r}$
Show that the standard atlas for $\mathbb{RP}^2$, $\{(U_i,\phi_i)\}_{i=1}^3$, is not orienting.
$\frac{1}{x^3}$ has the same sign as $x$. $U_1 \cap U_3$ has points with positive and negative $x_2$, and so positive and negative determinant, so it’s not orientation preserving.
Integrating $\frac{1}{z\sin(z)}$ on a path
If you meant $\,D_1(0)=S^1=\{z\in\Bbb C\;;\;|z|=1\}\,$ , then we have one unique singularity within this path, in $\,z=0\,$: $$f(z):=\frac{1}{z\sin z}=\frac{1}{z}\cdot\csc z=\frac{1}{z}\left(\frac{1}{z}+\frac{z}{6}+\frac{7}{360}z^3+\ldots\right)=\frac{1}{z^2}+\frac{1}{6}+\frac{7}{360}z+\ldots$$ and thus $$Res_{z=0}(f)=0\Longrightarrow\oint\limits_{S^1}f(z)\,dz=0$$ Added on request: Taylor series for $\,z\,$ "close" to zero: $$\csc z=\frac{1}{\sin z}=\frac{1}{\left(z-\frac{z^3}{6}+\frac{z^5}{120}-\ldots\right)}=\frac{1}{z}\frac{1}{\left(1-\left(\frac{z^2}{6}-\mathcal O(z^4)\right)\right)}=$$ $$=\frac{1}{z}\left(1+\frac{z^2}{6}+\mathcal O(z^4)\right)=\frac{1}{z}+\frac{z}{6}+\mathcal O(z^3)$$ We stop there since we're interested only in the coefficient of $\,z^{-1}\,$ ...Note that we're relying on $$\frac{1}{1-z}=1+z+z^2+\ldots\,\,\,,\,\,\text{for}\;\;|z|<1\;\;\text{(this is where the "close to zero" part kicks in)}$$
Derivative of log and exponential function
Not quite, you want the chain rule. The first one would have the form $e^u \mapsto u'e^u.$ In this case, that's $\frac{1}{2 \sqrt{x}}e^\sqrt{x},$ by just taking the derivative of $\sqrt{x}$. Can you do the other one?
Integration of $\sin(x)^{\sin(x)}$
The integral will only be defined over domains where $\sin x > 0$ (or at points where $\sin x = -1$, but those are isolated). There is no reason to expect to be able to express the integral in any simpler form. You could certainly numerically compute a definite integral for any interval within $(0,\pi)$ to get a function. Each interval of length $\pi$ might have a different constant of integration. Alpha gives the definite integral from $0$ to $\pi$ as about $2.60589$ and gives a messy series expansion for the indefinite integral valid near zero.
changes in 2 dependent variables
So $\delta = C_1/(r^3t)$ and $\sigma = C_2/(rt)$. Therefore $$\frac{\delta_2}{\delta_1} = \frac{r_1^3t_1}{r_2^3t_2}$$ and $$\frac{\sigma_2}{\sigma_1} = \frac{r_1t_1}{r_2t_2}$$ We have $$ \frac{2}{3} = \frac{r_1^3t_1}{r_2^3t_2} \,\,\,\,\,\,\,\,\, \frac{2}{3} = \frac{r_1t_1}{r_2t_2}$$ So $r_1$ must be kept the same and $t_1$ changed to $(3/2)t_1$.
List of proofs of non-trivial theorems which were unnoticed to be wrong for at least a few years
There are many variations on this theme on MathOverflow:
- Widely accepted mathematical results that were later shown wrong
- Failures that lead eventually to new mathematics
- Most interesting mathematics mistake
- How to refer to a theorem that you have shown to be wrong
- What are some correct results discovered with incorrect or no proofs
- Examples where physical heuristics led to incorrect answers
- Oldest bug in computer algebra
- Retracted mathematics papers
- What mistakes did the Italian algebraic geometers actually make
- Italian school of algebraic geometry and rigorous proofs
- Can a mathematical definition be wrong
- Mathematicians whose works were criticized by contemporaries but became widely accepted later
- What are examples of theorems which were once valid then became invalid as standard definitions shifted
- Have we ever lost any mathematics
- Examples of conjectures that were widely believed to be true but later proved false
- Statements which were given as axioms which later turned out to be false

And, for good measure:
- Smith-Minkowski-Siegel mass formula
- Grunwald-Wang theorem
- Polygons Flip Finitely: Flaws and a Fix
Spivak Calculus. Why is the book's proof valid? Is my attempt at a proof valid?
All these solutions look overcomplicated. Given $x$, choose a sequence $(x_n)_n$ in $A$ with $x_n\to x$. We have $f(x_n)\geq g(x_n)$ for all $n$. Letting $n\to \infty$ and using continuity of $f,g$, we obtain $f(x)\geq g(x)$.
Proving with Induction When The Claim Fails For Several Numbers
Induction works with any starting point; it doesn't need to start at $n=1$. If you prove that a statement $P(n)$ is true for $n=k$, and that $P(n)\Rightarrow P(n+1)$ for every integer $n\geq k$, then you have proven $P(n)$ to be true for all integers $n\geq k$. So find the first value of $n$ for which $11n+17\leq 2^n$, and start from there.
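As a quick check (a minimal Python sketch, not part of the original answer), one can search for that starting value directly:

```python
# find the first n with 11n + 17 <= 2**n
n = 1
while 11 * n + 17 > 2 ** n:
    n += 1
print(n)  # 7, so the base case of the induction is n = 7
```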
Question about holomorphic functions.
No. Let $z_0$ be a zero of $f$ in $D$; wlog we may assume that $z_0 = 0$ (by making the change of variable $z \mapsto z + z_0,$ which doesn't change anything in the question). Then $f$ has a local power series expansion of the form $z^m(a_0 + a_1 z + a_2 z^2 + \cdots),$ where $m$ is the order of the zero, so that $a_0 \neq 0$. Then $\sqrt{f(z)}$ has the local expansion $\sqrt{f(z)} = z^{m/2}(a_0 + a_1 z + a_2 z^2 + \cdots)^{1/2}.$ Now the binomial theorem shows that $(a_0 + a_1 z + a_2 z^2 + \cdots)^{1/2}$ is a well-defined holomorphic function in a neighborhood of $0$ (once we fix a choice of $a_0^{1/2}$), and so $\sqrt{f(z)}$ has a branch point at $0$ if and only if $z^{m/2}$ does, which is to say if and only if $m$ is odd. So $\sqrt{f(z)}$ will be branched at zeroes of odd order, but not at zeroes of even order.
Why is the weak topology not more widely defined?
In fact, the concept of weak topology occurs in functional analysis: Given a topological vector space $X$, we can retopologize $X$ by giving it the initial topology induced by the family $X'$ of continuous linear functionals living on $X$. Unfortunately the phrase "weak topology" is not standardized. In the context of CW-complexes it means something completely different: The letter "W" in "CW" stands for weak topology, but here it is a suitable final topology. See the discussion in Confusion about topology on CW complex: weak or final? Thus "weak topology" and "initial topology" cannot be regarded as synonyms; the interpretation of "weak topology" depends on the context. Nevertheless, the concept of initial topology is quite useful and important. In my opinion it should be treated in good textbooks. However, in older literature (such as Kelley and Munkres) it usually does not occur. I guess that thinking in universal properties is a modern approach; it was unusual to do that for a long time. Perhaps it is a matter of taste: The price for the modern approach is a higher level of abstraction which is not really needed very frequently.
What is the correct way to express the idea that a set is "entirely contained" in an interval?
$I$ is already a set. By definition, $[a,b]=\{x\in\mathbb R| a\leq x\land x\leq b\}$. So what you want to express is written simply as $S\subseteq I$, since that means that every element of $S$ is also an element of $I$.
Evaluate $\lim_{n\to\infty}(1+x)(1+x^2)\cdots(1+x^{2^n}),|x|<1$
Hint: Multiply by $1-x$. Then you get $$(1-x)(1+x)(1+x^2)\cdots(1+x^{2^n})=(1-x^2)(1+x^2)\cdots(1+x^{2^n}) =\\ (1-x^4)(1+x^4)\cdots(1+x^{2^n})=\cdots=(1-x^{2^{n+1}})$$ Thus $$(1+x)(1+x^2)\cdots(1+x^{2^n})=\frac{1-x^{2^{n+1}}}{1-x}$$ Since $|x|<1$, we have $x^{2^{n+1}}\to 0$ as $n\to\infty$, so the limit is $\frac{1}{1-x}$.
Using MATLAB to visualize the Chirikov Standard Map
The problem may be with the value of the constant $K$. It should be above $18$ for the Chirikov map to behave chaotically.
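For reference, here is a minimal Python sketch of the same iteration (the question asks about MATLAB, but the map itself is the point; the value of $K$ below is just an arbitrary choice of mine):

```python
import numpy as np
import matplotlib.pyplot as plt

K = 1.2      # kick strength; larger K gives more widespread chaos
steps = 500

# iterate the Chirikov standard map for a spread of initial momenta
for p0 in np.linspace(0, 2 * np.pi, 20):
    theta, p = 0.5, p0
    thetas, ps = [], []
    for _ in range(steps):
        p = (p + K * np.sin(theta)) % (2 * np.pi)   # momentum kick
        theta = (theta + p) % (2 * np.pi)           # free rotation
        thetas.append(theta)
        ps.append(p)
    plt.plot(thetas, ps, ',')
plt.xlabel(r'$\theta$')
plt.ylabel(r'$p$')
plt.show()
```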
Induced Module structure of the Sheaf of Ideals with application to the Sheaf of Relative Differentials
The sheaf $J$ is a sheaf on $X$, whereas $\mathscr{O}_Y$ is a sheaf on $Y$, so no, $J$ is not generally an $\mathscr{O}_Y$-module (unless, e.g., $X=Y$). The way to get an $\mathscr{O}_Y$-module from $J$ is to pull it back: $i^*J$ is a sheaf of modules on $\mathscr{O}_Y$. In general, given any $\mathscr{O}_X$-module, the naturally associated $\mathscr{O}_Y$-module is the pullback via $i$. Regarding Hartshorne's remarks, he is abusing notation (a little bit, in my opinion). Since $\Delta:X\rightarrow X\times_YX$ is an immersion, there is an open subscheme $U$ of the target of $\Delta$ such that $\Delta$ factors as a closed immersion $X\rightarrow U$ followed by the open immersion $U\hookrightarrow X\times_YX$. In fact there is a largest open subscheme for which this factorization is possible, namely the complement in $X\times_YX$ of $\overline{\Delta(X)}\setminus\Delta(X)$. Anyway, choosing an open subscheme $U$ such that a factorization of the type above exists, we get a quasi-coherent ideal sheaf $J$ of $\mathscr{O}_U$, and $J/J^2$ is a quasi-coherent $\mathscr{O}_U$-module. It is clear that as an $\mathscr{O}_U$-module, $J/J^2$ is killed by $J$. Because $j:X\rightarrow U$ is a closed immersion (identifying $X$ with $\Delta(X)$), it is a general fact that pushforward along $j$ is fully faithful on quasi-coherent sheaves with essential image the quasi-coherent $\mathscr{O}_U$-modules killed by $J$ (the ideal sheaf). So when Hartshorne says $J/J^2$ can be regarded as a sheaf of modules on $\mathscr{O}_X$ (he actually says $\mathscr{O}_{\Delta(X)}$ but I'm sticking with $X$), he really means there is a unique $\mathscr{O}_X$-module $F$ which, when pushed forward to $U$, is isomorphic to $J/J^2$. In fact $F=j^*(J/J^2)$. So Hartshorne is basically leaving out the $j^*$. Also, the open subscheme $U$ used to get the sheaf on $X$ (which is $\Omega_{X/Y}^1$) doesn't matter in the end. For an arbitrary immersion (not just the diagonal), the sheaf $j^*(J/J^2)$ obtained in the manner above is called the conormal sheaf of the immersion. For a proof of the assertion about pushforward of quasi-coherent sheaves along an immersion, see the section called "Closed immersions and quasi-coherent sheaves" in the Stacks Project chapter on morphisms of schemes. A down-to-earth manifestation of this situation is the following fact. If $A$ is a ring and $J$ is an ideal of $A$, then an $A/J$-module is the "same thing" as an $A$-module killed by $J$, and if $M$ is an $A$-module killed by $J$, then the natural map $M\otimes_A(A/J)\rightarrow M$ is an isomorphism. More precisely, the restriction functor from $A/J$-modules to $A$-modules (which corresponds to pushforward along $\mathrm{Spec}(A/J)\rightarrow\mathrm{Spec}(A)$) is fully faithful with essential image the $A$-modules killed by $J$, and its quasi-inverse (on the essential image) is given by base change, $M\rightsquigarrow M\otimes_A(A/J)$ (which corresponds to pullback along $\mathrm{Spec}(A/J)\rightarrow\mathrm{Spec}(A)$). In fact, the assertion I alluded to above about quasi-coherent sheaves pretty much amounts to this statement about modules.
Find all functions $f:\mathbb{N}\to\mathbb{N}$ such that $xf(y) + yf(x) = (x + y)f(x^2 + y^2)$
Suppose there exists an integer $n$ such that $f(1) <f(n)$. With $x=1$ and $y=n$, we have $$ nf(1)+f(n) = (n+1)f(n^2+1) \implies f(1)<f(g(n))<f(n),$$ where $g(n)=n^2+1$. Repeating the same argument using $f(1)<f(g(n))$, we deduce that $$f(1)<f\left(g^{f(n)-f(1)}(n)\right)<f\left(g^{f(n)-f(1)-1}(n)\right)<\ldots<f(g(n)) <f(n),\tag{1}$$ where $g^m(\cdot)$ denotes the composition of the function $g$ repeated $m$ times. However, there cannot be more than $f(n)-f(1)-1$ integers between $f(n)$ and $f(1)$. Thus, (1) leads to a contradiction because $f(x)\in\mathbb{N},\forall x\in\mathbb{N}$. Hence, $f(1)\nless f(n).$ Using similar arguments, we can show that $f(1)\ngtr f(n)$. Therefore, we conclude that $f(x)=f(1)$ for all values of $x$, which trivially satisfies the given relation.
Modeling a multiple criteria decision problem
I'm not sure a decision matrix is the right way to go here, as any weighting you use will be subjective. I looked for similar examples on this website, but most of them were simple choices between various options, which makes me think this problem needs an objective answer using a clever formula.
What is the relationship between hyperbolic geometry and Einstein's special relativity?
The connection here is the Minkowski space, which can be used to describe both. Hyperbolic geometry For example, take hyperbolic 2-space in the hyperboloid model. You'd represent hyperbolic points as points on the hyperboloid, namely as $$\left\{(x_1,x_2,x_3)^T\;\middle|\;x_1^2-x_2^2-x_3^2=1\right\}$$ This expression $x_1^2-x_2^2-x_3^2$ is the quadratic form which lies at the foundation of the Minkowski space $\mathbb R^{1,2}$. The corresponding bilinear form can be used to compute distances, as $$d(x,y)=\operatorname{arcosh}\left(x_1y_1-x_2y_2-x_3y_3\right)$$ You can even think about this projectively: you may use a vector which does not lie on the hyperboloid, then use that vector to define a line which will intersect the hyperboloid in a given point, which is the point it specifies. This will work for every vector whose quadratic form is positive. This idea is also very useful to define lines. A hyperbolic line (i.e. a geodesic) connecting two hyperbolic points is modeled by the intersection between the hyperboloid and a plane spanned by these two points and the origin. You can describe this plane by its normal vector, and you can compute that normal vector as the cross product of two vectors representing the two points. Conversely, you can obtain the intersection between two geodesics by computing the cross product between two normal vectors of such planes, although the quadratic form for that point likely won't be $1$ yet, but any other positive value instead. Therefore, this normal vector of the plane is a reasonable (and homogeneous) representation of the line. Its quadratic form will be negative. Common vocabulary Now change the vocabulary to use terms which are common for Minkowski spaces. A vector whose quadratic form is positive is said to be time-like. So points of the hyperbolic plane correspond to time-like vectors, with scalar multiples of a vector representing the same point. Likewise a vector whose quadratic form is negative is called space-like. So a line in hyperbolic geometry corresponds to a space-like vector, and all its multiples. In between these two, there are those vectors for which the quadratic form is zero. These correspond to ideal points of your geometry. In a certain sense, an ideal point is as much a line as it is a point. The set of hyperbolic isometries are those linear transformations of your vector space which preserve the set of ideal points, i.e. which preserve the light cone. These correspond roughly to Lorentz transformations in relativistic vocabulary (with some care because here we identify scalar multiples but there we don't, but the central idea of preserving the light cone remains). Relativistic geometry So where do these physically sounding terms come from? Imagining the whole vector space as some kind of space-time-diagram should be fairly simple. The first dimension (with the positive sign) would be time, the other two would be space. A vector would denote an event in this diagram. An event where all spatial coordinates are zero would happen at the same place as the origin, but at some different time. An event with zero time coordinate would happen at the same time but at some other place. The light cone would correspond to a cone of slope $1$, which is the speed of light in our coordinate system. Light travels from the origin along the light cone. But time and space are relative, so the above choice of coordinate system is only valid for a given inertial system.
To convert between inertial systems which meet at the origin, you'd again use a Lorentz transformation, i.e. a transformation which preserves the light cone. Using such a transformation, any event which is a time-like distance away from the origin can be made to happen at the same place but in the past or the future. You'd use an inertial system which travels to that event or came from it. Likewise, any event which is a space-like distance away can be made to happen at the same time, using the right movement to compensate. Conclusion So conversions between inertial systems correspond to isometric transformations of the hyperbolic space. And objects in hyperbolic space correspond to (equivalence classes of) events in space-time diagrams. The above would generalize for higher dimensions, but the part about two points spanning a line would be more complicated to read, since you'd more likely talk about three points spanning a plane.
Another integration by parts problem
Let $u = t^2$, then $du = 2t\,dt$; hence this integral is $\frac{1}{2}\int_0^4 f'(u)\, du = (f(4) - f(0))/2$. Therefore the integral is positive iff $f(4) > f(0)$.
Ball around a set equals union of balls around each of its points
Your argument is correct. For the converse take $y \in \cup_{x \in A} B_d(x,\epsilon)$. There exists $x \in A$ such that $y \in B_d(x,\epsilon)$. Hence $d(y,A) \leq d(y,x) <\epsilon$. So $y \in B_d(A,\epsilon)$.
Area of middle piece in triangle
Given any polygon $P_1P_2\cdots P_n$, let $[P_1P_2\cdots P_n]$ be a shorthand for its area. First, we need a lemma. For the configuration illustrated below, we have $$\frac{[ABD]}{[ABC]} = \frac{uv}{u+v-uv} \quad\text{ where }\quad u = \frac{AE}{AC}, v = \frac{BF}{BC}$$ To prove the lemma, let $a,b,c$ be the barycentric coordinates of $D$ with respect to $\triangle ABC$. Notice $$\begin{cases} D \text{ lies on } AF &\implies b : c = 1-v : v\\ D \text{ lies on } BE &\implies a : c = 1-u : u \end{cases} \quad\implies\quad a : b : c = \frac1u -1 : \frac1v - 1 : 1$$ This leads to $$\frac{[ABD]}{[ABC]} = c = \frac{c}{a+b+c} = \frac{1}{\frac1u + \frac1v - 1} = \frac{uv}{u+v-uv} $$ Back to the original question. Let $\displaystyle\;\varphi(u,v) = \frac{uv}{u+v-uv}$ and label the vertices as shown below. Applying the lemma to $\triangle APB, \triangle AQB, \triangle ARB, \triangle ASB$ with respect to $\triangle ABC$, we obtain $$\frac{[APB]}{[ABC]} = \frac{[ARB]}{[ABC]} = \varphi\left(\frac13,\frac23 \right), \frac{[AQB]}{[ABC]} = \varphi\left(\frac23,\frac23\right) \quad\text{ and }\quad \frac{[ASB]}{[ABC]} = \varphi\left(\frac13,\frac13\right) $$ Notice $[PQRS] = [AQB] - [ARB] + [ASB] - [APB]$, so the desired ratio is $$\frac{[PQRS]}{[ABC]} = \varphi\left(\frac23,\frac23\right) + \varphi\left(\frac13,\frac13\right) - 2 \varphi\left(\frac13,\frac23 \right) = \frac12 + \frac15 - 2\cdot \frac27 = \frac{9}{70}$$
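The lemma is easy to sanity-check numerically; here is a minimal Python sketch (the coordinates and test ratios are arbitrary choices of mine, not from the original answer):

```python
import numpy as np

def area(p, q, r):
    # absolute area of triangle pqr via the shoelace formula
    return 0.5 * abs((q[0] - p[0]) * (r[1] - p[1]) - (r[0] - p[0]) * (q[1] - p[1]))

A, B, C = np.array([0., 0.]), np.array([1., 0.]), np.array([0.3, 1.2])
u, v = 0.37, 0.62          # arbitrary test ratios
E = A + u * (C - A)        # AE/AC = u
F = B + v * (C - B)        # BF/BC = v

# D = intersection of lines AF and BE: solve A + s(F-A) = B + t(E-B)
M = np.column_stack([F - A, B - E])
s, t = np.linalg.solve(M, B - A)
D = A + s * (F - A)

print(area(A, B, D) / area(A, B, C))   # ~0.3016
print(u * v / (u + v - u * v))         # ~0.3016, as the lemma predicts
```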
Simplifying Large Bases with large Exponents
First, notice that $105308 \equiv 5 \pmod {11}$. So, $$105308^{7125} \equiv 5^{7125} \pmod {11}$$ By Fermat's Little Theorem, $5^{10} \equiv 1 \pmod {11}$. We have that $$5^{7125} \equiv 5^{5} \equiv 125 \times 25 \equiv 4 \times 3 \equiv 1 \pmod {11}$$
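Python's built-in modular exponentiation gives a quick sanity check (a minimal sketch, not part of the original answer):

```python
print(105308 % 11)             # 5
print(pow(5, 5, 11))           # 1
print(pow(105308, 7125, 11))   # 1, confirming the computation above
```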
Understanding affine combination of points in affine plane
An affine plane (or space, more generally) is just a linear plane $V$ lifted by some support vector $x$ which is not necessarily in $V$: $$A = x + V.$$ Now if you have a collection of points $a_1, \ldots, a_m \in A$, note you can rewrite them as $x + v_1, \ldots, x + v_m$. Then if $c_1, \ldots, c_m$ are coefficients such that $\sum c_i = 1$, we obtain $$\sum_{i=1}^m c_i a_i = \sum_{i=1}^m c_i (x + v_i) = x + \sum_{i=1}^m c_i v_i,$$ which is an element of $A$.
Is exit probability monotonic in drift and diffusion coefficient?
No, in general we cannot expect monotonicity. Counterexample: It is not difficult to see that $$x_t := \frac{1}{2} +t \qquad \quad y_t := \frac{1}{2} + t + W_{t \wedge \tau}$$ for $$\tau := \inf\{t \geq 0; y_t = 0\}$$ are solutions to the SDEs $$dx_t = \, dt \qquad x_0 = \frac{1}{2}$$ and $$dy_t = \, dt + 1_{(0,\infty)}(y_{t \wedge \tau}) \, dW_t, \qquad y_0 = x_0 = \frac{1}{2},$$ respectively. If we choose $a=1$, then $$\mathbb{P}\left[ \sup_{t \leq \frac{1}{2}} x_t \geq 1 \right] = 1$$ whereas $$\mathbb{P}\left[ \sup_{t \leq \frac{1}{2}} y_t \geq 1 \right] < 1.$$
How to differentiate functions with square roots
$\dfrac{\operatorname d}{\operatorname dx}\sqrt x=\dfrac 12x^{-\frac12}$. The rest is just chain rule etc. For example: $(\sqrt{\dfrac{1-2x}{1+2x}})'=\dfrac 12(\dfrac {1-2x}{1+2x})^{-\frac12}\cdot(\dfrac{1-2x}{1+2x})'=\dfrac 12(\dfrac {1-2x}{1+2x})^{-\frac12}\cdot(\dfrac{-2(1+2x)-(1-2x)\cdot2}{(1+2x)^2})=\sqrt{\dfrac {1+2x}{1-2x}}\cdot(\dfrac{-2}{(1+2x)^2})$.
Mellin transform of digamma function
This problem is solved quickly if you apply the integral form of the digamma function and then manage to simplify the double improper integral, which requires a few tricks. The integral form of digamma that I used to solve the problem was: $$\psi(t) = \int_0^\infty \left({e^{-s} \over s }- {e^{-s t} \over {1 - e^{-s}}}\right)ds $$ Just combine this integral form with the definition of the Mellin transform to create a double improper integral: $$M_x (\psi(x+1)) = \int_0^\infty t^{x-1} \psi(t+1) dt = \int_0^\infty \int_0^\infty t^{x-1} \left({e^{-s} \over s }- {e^{-s (t+1)} \over {1 - e^{-s}}}\right) ds dt$$ Combining the expression inside the double integral and factoring the common terms, we obtain: $$M_x (\psi(x+1)) = \int_0^\infty \int_0^\infty {{t^{x-1} e^{-s} (1 - s e^{-s t} - e^{-s})} \over {s (1 - e^{-s})}} ds dt$$ Now the tricky part begins. Observe that the denominator factor together with part of the numerator can be rewritten as a geometric series: $$\sum_{n=1}^\infty e^{-n s} = {e^{-s} \over {1 - e^{-s}}}$$ This makes the double integral easier to evaluate, by turning part of the integrand into a series: $$M_x(\psi(x+1)) = \sum_{n=1}^\infty \int_0^\infty \int_0^\infty {{t^{x-1} e^{-n s} (1 - s e^{-s t} - e^{-s})} \over {s }} ds dt$$ The integral inside the series can be split into two parts, each of which can be evaluated quickly. Setting the full expression aside for the moment to keep the focus, the double integral splits into: $$\int_0^\infty \int_0^\infty {{t^{x-1} e^{-n s} (1 - s e^{-s t} - e^{-s})} \over {s }} ds dt =$$ $$ \int_0^\infty \int_0^\infty {{t^{x-1} ( e^{-s n} - e^{-s (n+1)})} \over {s }} ds dt - \int_0^\infty \int_0^\infty {{t^{x-1} e^{-s ( n + t)}} } ds dt $$ The first integral is divergent unless an adequate convergence factor (renormalization) is introduced.
Also, the first integral can be evaluated according to Frullani's formula: $${\int_0^\infty {{e^{-a t} - e^{-b t}} \over t }} = {\log \left({b \over a} \right)}$$ Then the first integral should be understood as the following limit of this renormalized integral: $$\int_0^\infty \int_0^\infty {{t^{x-1} ( e^{-s n} - e^{-s (n+1)})} \over {s }} ds dt = \lim_{\alpha \to 0} \int_0^\infty \int_0^\infty {{t^{x-1} e^{-\alpha (s+t)} ( e^{-s n} - e^{-s (n+1)})} \over {s }} ds dt$$ Just expand the exponential terms, and the integrals can be evaluated: $$\int_0^\infty \int_0^\infty {{t^{x-1} ( e^{-s n} - e^{-s (n+1)})} \over {s }} ds dt = \lim_{\alpha \to 0} \int_0^\infty \int_0^\infty {{t^{x-1} e^{-\alpha t} ( e^{-s( n + \alpha)} - e^{-s (n+1+\alpha)})} \over {s }} ds dt = ...$$ $$\rightarrow \lim_{\alpha \to 0} \int_0^\infty {{t^{x-1} e^{- \alpha t} \log \left({{\alpha + n + 1}\over {\alpha + n}}\right)}} dt = \lim_{\alpha \to 0} {\Gamma(x) \over \alpha^x} \log\left({{\alpha + n + 1} \over {\alpha + n}}\right)$$ Since the limit inside the logarithm depends on the series parameter, we should evaluate it first: $$\lim_{\alpha \to 0} \sum_{n=1}^\infty {{\Gamma (x) \over \alpha^x} \log \left({{\alpha + n + 1} \over {\alpha + n}}\right)} = \lim_{\alpha \to 0} {{\Gamma (x) \over \alpha^x} \log \left({ \prod_{n=1}^\infty {{\alpha + n + 1} \over {\alpha + n}}}\right)}$$ The infinite product inside the logarithm is equal to the Wallis Product: $$ \lim_{\alpha \to 0} \log \left({ \prod_{n=1}^\infty {{\alpha + n + 1} \over {\alpha + n}}}\right) = \log \left( {\prod_{n=1}^\infty {{n + 1} \over {n}}}\right) = \log \left({\pi \over 2} \right) $$ And the other limit converges if and only if $x<0$: $$\lim_{\alpha \to 0} {\Gamma (x) \over \alpha^x} \log \left({\pi \over 2}\right) \rightarrow 0 , x<0 $$ This means the first integral is conditionally convergent to zero, if and only if $x<0$. About the second integral, the integration is straightforward; it is just a product of two gamma functions: $$- \int_0^\infty \int_0^\infty {{t^{x-1} e^{-s ( n + t )}} } ds dt = - n^{x-1} \Gamma(x) \Gamma(1-x) $$ And finally, the sum is finite if we restrict the domain to $1-x>1$, i.e. $x<0$, so: $$\sum_{n=1}^\infty - n^{x-1} \Gamma(x) \Gamma(1-x) = - \zeta(1-x) \Gamma(x) \Gamma(1-x)$$ Putting it all together, the Mellin transform of the digamma function is: $$M_x(\psi(x+1)) = - \zeta(1-x) \Gamma(x) \Gamma(1-x)$$ The domain of the Mellin transform of the digamma function can be analytically continued via the functional equation of the zeta function, which is: $$ \zeta(s) = 2^{s} \pi^{s-1} \sin \left({\pi s \over 2}\right) \Gamma(1-s) \zeta(1-s) $$ However, to justify the result announced by this question, just apply Euler's reflection formula: $$\Gamma(s) \Gamma(1-s) ={ \pi \over \sin\left(\pi s\right)}$$ And finally we get: $$M_x(\psi(x+1)) = - {\pi \over \sin(\pi x)} \zeta(1-x)$$
Inequality involving quantiles
You have for all $r\in\Bbb R$ $$F_X(r) - F_Y(r) \le \sup_{r\in\mathbb{R}}|F_X(r)-F_Y(r)|\le \epsilon$$ Hence for all $r\in\Bbb R$ it holds that $$F_X(r) \le F_Y(r) + \varepsilon$$ $q_Y(\alpha)$ is the $\alpha$-quantile of $Y$, hence we have for each $n\in\Bbb N$ $$F_Y\left(q_Y(\alpha) - \frac{1}{n}\right) = P\left( Y \le q_Y(\alpha) - \frac{1}{n}\right) \le P\left( Y < q_Y(\alpha) \right) \le \alpha$$ And because $F_X$ is continuous we get: $$\begin{align*}P\left(X \le q_Y(\alpha)\right) &= F_X\left(q_Y(\alpha)\right) \\ &= \lim_{n\to\infty} F_X\left(q_Y(\alpha) - \frac{1}{n}\right) \\&\le \lim_{n\to\infty} F_Y\left(q_Y(\alpha) - \frac{1}{n}\right) + \varepsilon \\ &\le \alpha + \varepsilon\end{align*}$$
Show that if there are 101 people of different heights standing in a line
This is a special case of the Erdős-Szekeres theorem with $r = s = 11$. Here is a proof for this special case. I am assuming that no two people have the same height. For the $i$:th person $P_i$ in the line, we let $x_i$ be the length of the longest sequence of people behind $P_i$ which are increasing in height (including $P_i$). We let $y_i$ be the length of the longest sequence of people in front of $P_i$ which are decreasing in height (including $P_i$). Now, for $i < j$ we have $(x_i,y_i) \neq (x_j,y_j)$. This follows from the fact that if $P_i$ is shorter than $P_j$, then the longest sequence of increasing heights ending at $P_i$ can be used to build a longer sequence ending at $P_j$, so $x_i < x_j$; and if $P_i$ is taller than $P_j$, the longest sequence of decreasing heights after $P_j$ can be used to build a longer sequence of decreasing heights after $P_i$, so $y_i > y_j$. Thus, we have 101 different tuples of numbers. We can place these into a grid of pigeonholes, putting $(x_i,y_i)$ into the pigeonhole at position $(x_i,y_i)$. Each pigeonhole gets at most one tuple, since $(x_i,y_i) \neq (x_j, y_j)$. But we have more tuples than fit into a $10 \times 10$ grid of pigeonholes, meaning that at least one tuple has $x_i > 10$ or $y_i > 10$; that is, there is a subsequence of at least 11 people with decreasing or increasing heights.
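The construction in this proof is easy to run directly; a minimal Python sketch (the names are mine, not from the original answer):

```python
import random

heights = random.sample(range(1000), 101)  # 101 distinct heights

n = len(heights)
x = [1] * n  # x[i]: longest increasing subsequence ending at position i
y = [1] * n  # y[i]: longest decreasing subsequence starting at position i
for i in range(n):
    for j in range(i):
        if heights[j] < heights[i]:
            x[i] = max(x[i], x[j] + 1)
for i in reversed(range(n)):
    for j in range(i + 1, n):
        if heights[j] < heights[i]:
            y[i] = max(y[i], y[j] + 1)

assert len(set(zip(x, y))) == n    # all 101 tuples are distinct
assert max(x + y) >= 11            # some monotone subsequence has length 11
```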
The function $f: (0,1) \to \mathbb R $ defined by $f(x) := 1/x$ is not uniformly continuous, but it is continuous.
We have $$\delta \leq xy \epsilon, \forall x,y \in (0,1)$$ Fix $y=\frac12, \epsilon=2$. We get $$\delta \leq x, \forall x \in (0,1)$$ No such positive number $\delta$ exists: if it did, we could choose $x=\frac{\delta}{2}$ (note $\frac{\delta}{2}\in(0,1)$ since $\delta\leq x<1$), which gives $\delta \leq \frac{\delta}{2}$ and hence $\delta \leq 0$, a contradiction.
Showing linear independence by using induction
Note first that$$\tilde{B}_{k+1,\,j}^\prime(x)=B_{k+1}^\prime\left(x+\frac{k}{2}-j\right)=B_k\left(x+\frac{k+1}{2}-j\right)-B_k\left(x+\frac{k-1}{2}-j\right)\\=\tilde{B}_{k,\,j-1}(x)-\tilde{B}_{k,\,j}(x),$$provided we define $\tilde{B}_{k,\,k+1}:=0$. If $\sum_{j=0}^{k+1}a_j\tilde{B}_{k+1,\,j}=0$ then, at values of $x$ where derivatives are well-behaved because none of the $B_\ell$ have an argument of $\pm\frac12$,$$0=\sum_{j=1}^{k+1}a_j\tilde{B}_{k+1,\,j}^\prime=\sum_{j=1}^{k+1}a_j(\tilde{B}_{k,\,j-1}-\tilde{B}_{k,\,j})=a_1\tilde{B}_{k,\,0}+\sum_{j=1}^k(a_{j+1}-a_j)\tilde{B}_{k,\,j}.$$By the inductive hypothesis, $a_1=0$ and each $a_{j+1}-a_j=0$, and so the only possible nonzero coefficient is $a_0$. But this must also be $0$, because $\tilde{B}_{k,\,0}$ isn't identically $0$.
Help in Basic Problem of change of coordinates
Do not worry; you can solve it directly (a somewhat laborious method). Just write $ (1,0,i)= c_1(1,1,0) + c_2(1,i,i+1)$. Then expand, compare both sides, and solve for $c_1,c_2$, which gives you the coordinates.
How to prove a set contains no rational numbers?
Suppose for a contradiction that $E+a$ always contains a rational. Then $$ \bigcup_{q\in \mathbb Q}(E-q) $$ would contain every $a\in \mathbb R$, but would also be a countable union of measure-zero sets.
Peculiar presentation of Symmetric group of degree 10
The presentation you've given isn't $S_{10}$. It isn't finite; you can just make words like $s_1 s_2 s_3$ of infinite order. As you said, any time $[s_i,s_j]\not= 1$ you have that $o(s_is_j)=6$, but if you want to use that you still need to define $[s_i,s_j]$ for the other pairs of elements. Note that there are generating sets of $S_{10}$ for which those relations hold. For example: $$\begin{align*}s_1&=(16)(27)(38)(49)(5\text{X})\\ s_2&=(12)(56)\\ s_3&=(12)(36)\\ s_4&=(16)(23)(47)(5\text{X})\\ s_5&=(12)(37)\\ s_6&=(12)(67) \end{align*}$$ but of course these obey additional relations which would be necessary to define a group presentation of $S_{10}$ associated with these generators.
Find all $f:\mathbb {R} \rightarrow \mathbb {R}$ where $f(f(x))=f'(x)f(x)+c$
There is a solution with $f(x)=ax+bx^2+cx^3+dx^4+...$. Here, I am taking your $c=0$, but introducing my $c$ as one of the coefficients. $$f(f(x))=af(x)+bf(x)^2+cf(x)^3+...\\ =a^2x+abx^2+acx^3+...+ba^2x^2+2ab^2x^3+...+ca^3x^3+...\\ f'(x)f(x)=(a+2bx+3cx^2+...)(ax+bx^2+cx^3+...)\\ =a^2x+3abx^2+(4ac+2b^2)x^3+... $$ Equate coefficients $$ a^2=a^2\\ab+ba^2=3ab\to a=2\\ac+2ab^2+ca^3=4ac+2b^2\to c=-b^2$$ so it seems the first few terms must be $f(x)=2x+bx^2-b^2x^3+...$ EDIT: I made Maple find this solution, when your $c=0$: $$\color{red}{f(x)=\frac2b(1+2bx-\sqrt{1+2bx})}$$ When $x=2y$, the coefficients of the series above were $1,1,-2,5,-14,42$. This sequence is fairly famous, or I could have looked it up in the OEIS (Online Encyclopedia of Integer Sequences). I then used the formula, which involves $\frac1{n+1}{2n\choose n}$, in the sequence, and Maple worked out the infinite sum. To check that, let $y=\sqrt{1+2bx}$, and suppose $y>1/2$. Then $$f(x)=\frac2b(y^2-y)\\ 1+2bf(x)=4y^2-4y+1=(2y-1)^2\\f(f(x))=\frac2b(4y^2-4y+1-(2y-1))$$ On the other hand, $f'(x)=4-\frac{2}{\sqrt{1+2bx}}=4-\frac2y$, so $$f(x)f'(x)=\frac2b(y^2-y)(4-\frac2y)=\frac2b(y-1)(4y-2)=\frac2b(4y^2-6y+2)=f(f(x))$$
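The closed form (for the case $c=0$) is also easy to verify numerically; a minimal Python sketch, with an arbitrary value of $b$:

```python
from math import sqrt

b = 1.3

def f(x):
    return (2 / b) * (1 + 2 * b * x - sqrt(1 + 2 * b * x))

def fprime(x):
    # derivative of f: 4 - 2/sqrt(1 + 2bx)
    return 4 - 2 / sqrt(1 + 2 * b * x)

for x in [0.1, 0.5, 2.0]:
    print(f(f(x)), fprime(x) * f(x))   # the two columns agree
```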
finding the ordered pair of $N \times N$ matrices
$$\operatorname{trace}(XY-YX)=\operatorname{trace}(I)$$ $$\operatorname{trace}(XY)-\operatorname{trace}(YX)=n$$ $$0=n$$ hence we have a contradiction.
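A quick numerical illustration of the obstruction (a minimal Python sketch, not part of the original answer):

```python
import numpy as np

n = 4
X, Y = np.random.rand(n, n), np.random.rand(n, n)
print(np.trace(X @ Y - Y @ X))  # ~0 up to floating-point error
print(np.trace(np.eye(n)))      # n, so XY - YX = I is impossible
```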
Proof of $\sin nx=2^{n-1}\prod_{k=0}^{n-1} \sin\left( x + \frac{k\pi}{n} \right)$
There is a nice argument based on Weierstrass products. The sine function has its (simple) zeroes at $\pi\mathbb{Z}$ and $$\frac{\sin x}{x}=\prod_{m\geq 1}\left(1-\frac{x^2}{m^2\pi^2}\right)\tag{1}$$ holds. The $\sin(n x)$ function has its zeroes at $\frac{\pi}{n}\mathbb{Z}=\left(\pi \mathbb{Z}\right)\cup\left(\pi \mathbb{Z}+\frac{\pi}{n}\right)\cup\ldots\cup\left(\pi\mathbb{Z}+\frac{(n-1)\pi}{n}\right)$, so by separating the zeroes according to their residue class $\pmod{\pi}$ and using $(1)$ and $$ \prod_{k=1}^{n-1}\sin\frac{k\pi}{n}=\frac{2n}{2^n}\tag{2}$$ your identity easily follows. With a similar argument, you may check that your RHS and LHS have the same value at $x=0$ and by applying $\frac{d}{dz}\log(\cdot)$ to both sides you get two meromorphic functions with the same Eisenstein series, by Herglotz' trick.
Not understanding a cancellation step in an inequality proof from Spivak's Calculus.
You are right, it should be $<\frac{\varepsilon}{2} + \frac{\varepsilon}{2}$ rather than $=\frac{\varepsilon}{2} + \frac{\varepsilon}{2}$. Good catch.
Query on a simple exercise involving representations of functors.
The idea is correct, but personally I find the argument easier to follow when stated in the following way. In more detail, it should work like this: by the Yoneda lemma we have that every natural transformation of the kind $$\tau' \colon \hom_{\mathcal D}(-,r) \Rightarrow \hom_{\mathcal D}(-,r')$$ is of the form $\hom_{\mathcal D}(-,h)$ for some $h \in \hom_{\mathcal D}(r,r')$; now by the hypothesis we have the following natural transformation $$\hom_{\mathcal D}(-,r) \stackrel{\phi}\cong K \stackrel{\tau}{\Rightarrow} K' \stackrel{\phi'^{-1}}{\cong} \hom_{\mathcal D}(-,r')$$ and by what we've said above $\phi'^{-1} \circ \tau \circ \phi = \hom_{\mathcal D}(-,h)$ for a certain $h \in \hom_{\mathcal D}(r,r')$. Now, composing on the left on both sides of the equation, you get that $$\tau \circ \phi = \phi' \circ \hom_{\mathcal D}(-,h)$$ which is exactly what you wanted to prove. Hope this helps. Edit: The fact that every natural transformation $$\hom_{\mathcal D}(-,r) \Rightarrow \hom_{\mathcal D}(-,r')$$ is of the form $\hom_{\mathcal D}(-,h)$ for some $h \in \mathcal D(r,r')$ follows in the following way. By the Yoneda lemma we have a natural bijection $$[\mathcal D^\text{op},\mathbf{Set}](\hom_{\mathcal D}(-,r),\hom_{\mathcal D}(-,r')) \cong \hom_{\mathcal D}(r,r')$$ The inverse of this bijection is given by the mapping sending every $h \in \hom_{\mathcal D}(r,r')$ to the family of functions $$\langle g \in \hom_{\mathcal D}(z,r) \mapsto \hom_{\mathcal D}(g,r')(h)=h \circ g \in \hom_{\mathcal D}(z,r')\rangle_{z \in\mathcal D}\ .$$ Since for every $g \in \hom_{\mathcal D}(z,r)$ we have that $$h \circ g = \hom_{\mathcal D}(z,h)(g)$$ the mapping above is exactly $\hom_{\mathcal D}(-,h)$. Summarizing, the inverse of the Yoneda bijection sends every $h \in \hom_{\mathcal D}(r,r')$ to the natural transformation $\hom_{\mathcal D}(-,h)$. Since this mapping is a bijection, every natural transformation in $[\mathcal D^\text{op},\mathbf{Set}](\hom_{\mathcal D}(-,r),\hom_{\mathcal D}(-,r'))$ is of the form $\hom_{\mathcal D}(-,h)$ for some $h \in \mathcal D(r,r')$.
How to evaluate line integrals without using Green's Theorem?
[Please don't upvote - just paraphrasing answers from the comments so the question is not in the unanswered state forever.] In (A) you have to evaluate the line integral along a piecewise smooth path. This means breaking the boundary of the rectangle up into 4 smooth curves (the sides), parameterising the curves, evaluating the line integral along each curve and summing the results. In (B) you have to expand $\dfrac{\partial F_2}{\partial x}, \dfrac{\partial F_1}{\partial y}$ and $dA$ and evaluate the result.
Range space $\mathcal{R}(\textbf{A})$ the same as $\mathcal{R}(\textbf{AA}^H)$?
For this kind of problem, it is convenient to use the following form of the SVD: $$A = \sum_{i=1}^{rank} \lambda_i u_i v^H_i$$ where the $\lambda_i > 0$. Then, $$ AA^H = \sum_{i=1}^{rank} \lambda^2_i u_i u^H_i$$ It follows immediately that $A$ and $AA^H$ have the same range, namely the span of $u_1,\ldots,u_{rank}$, and in particular the same rank.
How to solve corrected formula for hose flow and pressures
Ok, with @fedja's encouragement I took another stab and believe I have this figured out. I started with some easy numbers to work with and then tested some real-number situations. So $A=\frac 1{29.71 D^2}$ and $B=\frac{CL}{100\times100^2}$. Then $GPM \times A = \sqrt{NP}$, therefore $GPM^2 \times A^2 = NP$, and $GPM^2 \times B = FL$. Since $PP = FL + NP$, we get $(GPM^2 \times A^2)+(GPM^2\times B)=FL+NP=PP$, i.e. $GPM^2 \times (A^2+B)=PP$, leading to: $GPM = \sqrt{ \left(\frac{PP}{A^2 + B} \right)} $
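For concreteness, here is a minimal Python sketch of the final formula; the input values ($D$, $C$, $L$, $PP$) are hypothetical, not from the question:

```python
from math import sqrt

# hypothetical hose diameter (in), friction coefficient, length (ft), pump pressure (psi)
D, C, L, PP = 1.75, 15.5, 200, 150

A = 1 / (29.71 * D**2)
B = (C * L) / (100 * 100**2)

# PP = GPM^2 * (A^2 + B)  =>  GPM = sqrt(PP / (A^2 + B))
GPM = sqrt(PP / (A**2 + B))
NP = (GPM * A)**2   # nozzle pressure
FL = GPM**2 * B     # friction loss
print(GPM, NP, FL, NP + FL)   # NP + FL recovers PP
```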
Cauchy's Integral formula problem
Hint: Actually $\gamma:|z-2i|=4$, so $-9$ does not lie inside $\gamma$. That is, $\displaystyle f(z)=\frac{z}{(z+9)^2}$ is analytic inside $\gamma$. So what about Cauchy's theorem?
First passage time of rolling dice
A common way to attack this type of problem is with an absorbing Markov chain. For each of the “goal” conditions, define an absorbing state—one that transitions only to itself with probability $1$. Since there’s no way for the product of successive rolls to be six if the previous roll was a 4 or 5, the process effectively starts over in that situation, so we end up with 6 states: the start state, which includes having rolled a 4 or 5, one state for each of the rolls that could lead to a product of 6, and the absorbing goal state. In canonical form, all of the absorbing states are placed last, so for this Markov chain we have the canonical-form matrix $$P = \begin{bmatrix} \frac13 & \frac16 & \frac16 & \frac16 & \frac16 & 0 \\ \frac13 & \frac16 & \frac16 & \frac16 & 0 & \frac16 \\ \frac13 & \frac16 & \frac16 & 0 & \frac16 & \frac16 \\ \frac13 & \frac16 & 0 & \frac16 & \frac16 & \frac16 \\ \frac13 & 0 & \frac16 & \frac16 & \frac16 & \frac16 \\ 0&0&0&0&0&1 \end{bmatrix}.$$ The expected time until absorption given that you start in state $i$ is the sum of the expected number of visits to each of the transient states. From this you can set up a system of linear equations or, equivalently, use matrix operations on $P$. Let $Q$ be the upper-left $5\times5$ submatrix that covers only the transient states. Then $[Q^k]_{i,j}$ is the probability of being in state $j$ in exactly $k$ steps, starting from state $i$. Summing over all $k$, $$N=\sum_{k=0}^\infty Q^k = (I-Q)^{-1} $$ produces the fundamental matrix $N$, which gives the expected number of times that the system is in state $j$ given that it started in state $i$. The expected absorption times are then the row sums of $N$, i.e., $N\mathbf 1$, and the solution to this problem is the first element of this vector. Addition: Since the transition probabilities have a nice symmetry, this system can be simplified by lumping states 2-5 together into a single “might stop on the next roll” state. The reduced transition matrix is $$P=\begin{bmatrix}\frac13 & \frac23 & 0 \\ \frac13 & \frac12 & \frac16 \\ 0 & 0& 1\end{bmatrix}.$$ This is small enough to solve by hand: $$I-Q = \begin{bmatrix} \frac23 & -\frac23 \\ -\frac13 & \frac12 \end{bmatrix} \\ N = (I-Q)^{-1} = \begin{bmatrix} \frac92 & 6 \\ 3 & 6 \end{bmatrix} $$ and so the expected time to absorption is $\frac92 + 6 = \frac{21}2 = 10.5$.
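The reduced $2\times2$ chain is easy to check numerically; a minimal Python sketch:

```python
import numpy as np

# transient part of the reduced chain:
# state 0 = start (or just rolled 4/5), state 1 = might stop on the next roll
Q = np.array([[1/3, 2/3],
              [1/3, 1/2]])
N = np.linalg.inv(np.eye(2) - Q)  # fundamental matrix
print(N)                          # [[4.5, 6.], [3., 6.]]
print(N.sum(axis=1)[0])           # 10.5, expected rolls from the start
```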
How to evaluate $ \sum_{k=0}^{\frac{n}{2}} {n\choose2k}$?
This is the number of subsets of $\{1, \ldots, n\}$ that have even size. If you need a further hint, this question may help.
Multiplication operator $M_a:\ell^p\rightarrow \ell^q$
Firstly, we prove the Generalized Hölder Inequality. Generalized Hölder Inequality: Let $(X,\mathcal{F},\mu)$ be a measure space. Let $1\leq r<p<\infty$. Define $q=\frac{pr}{p-r},$ then $q\in(1,\infty)$ and $\frac{1}{p}+\frac{1}{q}=\frac{1}{r}.$ For any measurable functions $f,g:X\rightarrow\mathbb{R}$, we have that $||fg||_{r}\leq||f||_{p}||g||_{q}$. In particular, if $f\in L^{p}$ and $g\in L^{q}$, then $fg\in L^{r}$. Proof: Let $\lambda=\frac{r}{p}$, then $\lambda\in(0,1)$ and $1-\lambda=\frac{r}{q}$. Recall that $\ln:(0,\infty)\rightarrow\mathbb{R}$ is a concave function. Let $x,y>0$ be arbitrary; we have that \begin{eqnarray*} \lambda\ln x^{p}+(1-\lambda)\ln y^{q} & \leq & \ln(\lambda x^{p}+(1-\lambda)y^{q})\\ x^{r}y^{r} & \leq & \lambda x^{p}+(1-\lambda)y^{q}. \end{eqnarray*} Note that the above inequality continues to hold if $x=0$ or $y=0$. Now, let $f,g:X\rightarrow\mathbb{R}$ be measurable functions. If $||f||_{p}=0$ or $||g||_{q}=0$, then $f=0$ or $g=0$ ($\mu$-a.e.) and hence $fg=0$ ($\mu$-a.e.). In this case, we have $||fg||_{r}=0$ and hence $||fg||_{r}\leq||f||_{p}||g||_{q}.$ Suppose that $||f||_{p}\neq0$ and $||g||_{q}\neq0$. If $||f||_{p}=\infty$ or $||g||_{q}=\infty$, we clearly have $||fg||_{r}\leq||f||_{p}||g||_{q}.$ Consider the case that $||f||_{p},||g||_{q}\in(0,\infty)$. Define $F=|f|/||f||_{p}$ and $G=|g|/||g||_{q}$. Note that $||F||_{p}=||G||_{q}=1$. For each $t\in X$, we have that $$ F^{r}(t)G^{r}(t)\leq\lambda F^{p}(t)+(1-\lambda)G^{q}(t). $$ Integrating both sides with respect to $\mu$ yields \begin{eqnarray*} \int F^{r}(t)G^{r}(t)d\mu(t) & \leq & \lambda\int F^{p}(t)d\mu(t)+(1-\lambda)\int G^{q}(t)d\mu(t)\\ & = & \lambda\cdot1+(1-\lambda)\cdot1\\ & = & 1. \end{eqnarray*} On the other hand, \begin{eqnarray*} \int F^{r}(t)G^{r}(t)d\mu(t) & = & \frac{1}{||f||_{p}^{r}||g||_{q}^{r}}\int|f|^{r}|g|^{r}d\mu\\ & = & \frac{||fg||_{r}^{r}}{||f||_{p}^{r}||g||_{q}^{r}}. \end{eqnarray*} Re-arranging terms, we obtain $||fg||_{r}\leq||f||_{p}||g||_{q}.$ We go back to your problem (note that the notations are different): Let $1\leq r<p<\infty$. Define $q=\frac{pr}{p-r}$, then $q\in(1,\infty)$ and $\frac{1}{p}+\frac{1}{q}=\frac{1}{r}$. (1) Let $a=(a_{n})$ be a sequence such that for any $x=(x_{n})\in l^{p},$ $(a_{n}x_{n})\in l^{r}$. Prove that $a\in l^{q}$. Solution of (1): For each $n$, define $T_{n}:l^{p}\rightarrow l^{r}$ by $T_{n}x=(a_{1}x_{1},a_{2}x_{2},\ldots,a_{n}x_{n},0,0,\ldots),$ where $x=(x_{n})\in l^p$. Clearly $T_{n}$ is linear. Moreover, by the generalized Hölder inequality, we have that \begin{eqnarray*} ||T_{n}x||_{r} & \leq & \left\{ \sum_{k=1}^{n}|a_{k}|^{q}\right\} ^{\frac{1}{q}}\left\{ \sum_{k=1}^{n}|x_{k}|^{p}\right\} ^{\frac{1}{p}}\\ & \leq & \left\{ \sum_{k=1}^{n}|a_{k}|^{q}\right\} ^{\frac{1}{q}}\left\{ \sum_{k=1}^{\infty}|x_{k}|^{p}\right\} ^{\frac{1}{p}}\\ & = & \left\{ \sum_{k=1}^{n}|a_{k}|^{q}\right\} ^{\frac{1}{q}}\cdot||x||_{p}. \end{eqnarray*} This shows that $T_{n}$ is a bounded linear map. Consider the family of bounded linear maps $\{T_{n}\mid n\in\mathbb{N}\}$. Observe that for each $x\in l^{p}$, $\{T_{n}x\mid n\in\mathbb{N}\}$ is a bounded subset of $l^{r}$ (for, let $n\in\mathbb{N}$ be arbitrary, then $||T_{n}x||_{r}^{r}=\sum_{k=1}^{n}|a_{k}x_{k}|^{r}\leq\sum_{k=1}^{\infty}|a_{k}x_{k}|^{r}<\infty$ because it is given that $(a_{n}x_{n})\in l^{r}$). By the Uniform Boundedness Principle, $\sup_{n}||T_{n}||<\infty$.
We now show that $||T_{n}||=\left\{ \sum_{k=1}^{n}|a_{k}|^{q}\right\} ^{\frac{1}{q}}$. From the above discussion, we clearly have $||T_{n}||\leq\left\{ \sum_{k=1}^{n}|a_{k}|^{q}\right\} ^{\frac{1}{q}}$. If $\left\{ \sum_{k=1}^{n}|a_{k}|^{q}\right\} ^{\frac{1}{q}}=0$, we are done. Suppose that $\left\{ \sum_{k=1}^{n}|a_{k}|^{q}\right\} ^{\frac{1}{q}}>0$. Let $x=\eta(|a_{1}|^{\frac{q}{p}},|a_{2}|^{\frac{q}{p}},\ldots,|a_{n}|^{\frac{q}{p}},0,0,\ldots)\in l^{p}$, where $\eta=\left(\sum_{k=1}^{n}|a_{k}|^{q}\right)^{-\frac{1}{p}}$, then $||x||_{p}=1$. Note that \begin{eqnarray*} ||T_{n}x||_{r}^{r} & = & \eta^{r}\sum_{k=1}^{n}\left|a_{k}\cdot|a_{k}|^{\frac{q}{p}}\right|^{r}\\ & = & \eta^{r}\sum_{k=1}^{n}|a_{k}|^{(\frac{p+q}{p})r}\\ & = & \eta^{r}\sum_{k=1}^{n}|a_{k}|^{q}. \end{eqnarray*} Hence, $||T_{n}||\geq||T_{n}x||_{r}\geq\eta\left\{ \sum_{k=1}^{n}|a_{k}|^{q}\right\} ^{\frac{1}{r}}=\left\{ \sum_{k=1}^{n}|a_{k}|^{q}\right\} ^{\frac{1}{q}}$. Now, it follows that \begin{eqnarray*} ||a||_{q} & = & \left\{ \sum_{k=1}^{\infty}|a_{k}|^{q}\right\} ^{\frac{1}{q}}\\ & = & \sup_{n}\left\{ \sum_{k=1}^{n}|a_{k}|^{q}\right\} ^{\frac{1}{q}}\\ & = & \sup_{n}||T_{n}||\\ & < & \infty. \end{eqnarray*} (2) Define $T:l^{p}\rightarrow l^{r}$ by $Tx=(a_{n}x_{n})$, where $x=(x_{n})\in l^{p}$. Prove that $T$ is a bounded linear map and $||T||=||a||_{q}$. Proof: Clearly, $T$ is linear. By the Generalized Hölder Inequality, for any $x\in l^{p}$, we have that $||Tx||_{r}\leq||a||_{q}||x||_{p}.$ This shows that $T$ is bounded and $||T||\leq||a||_{q}$. If $||a||_{q}=0$, we are done. Suppose that $||a||_{q}\neq0$. Define $x=\eta(|a_{n}|^{\frac{q}{p}})$, where $\eta=\left(\sum_{k=1}^{\infty}|a_{k}|^{q}\right)^{-\frac{1}{p}}$. By direct verification, $||x||_{p}=1$. Therefore, $||T||\geq||Tx||_{r}=\left\{ \sum_{k=1}^{\infty}|a_{k}|^{q}\right\} ^{\frac{1}{q}}=||a||_{q}$. This shows that $||T||=||a||_{q}$.
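The Generalized Hölder Inequality itself is easy to sanity-check numerically with counting measure (a minimal Python sketch, not part of the original answer):

```python
import numpy as np

p, r = 3.0, 1.5
q = p * r / (p - r)   # then 1/p + 1/q = 1/r

f, g = np.random.rand(1000), np.random.rand(1000)
norm = lambda v, s: (v ** s).sum() ** (1 / s)
print(norm(f * g, r) <= norm(f, p) * norm(g, q))   # True
```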
Does every projection operator satisfy $\|Px\| \leq \|x\|\,$?
The property does not hold. For example, consider $$ P = \pmatrix{1&0\\\alpha &0} $$ for some $\alpha \neq 0$. Note that $P^2 = P$, but $$ \left\| \pmatrix{1&0\\\alpha &0} \pmatrix{1\\0} \right\| = \left\| \pmatrix{1\\ \alpha} \right\| \geq \left\| \pmatrix{1\\0} \right\| $$
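A quick numerical check of this counterexample (a minimal Python sketch):

```python
import numpy as np

alpha = 2.0
P = np.array([[1.0, 0.0],
              [alpha, 0.0]])
assert np.allclose(P @ P, P)   # P is idempotent
x = np.array([1.0, 0.0])
print(np.linalg.norm(P @ x))   # sqrt(5), strictly larger than...
print(np.linalg.norm(x))       # ...||x|| = 1
```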
Rudin's Analysis, page 2, formula (4): what does the superscript before the 2 mean?
My edition is from 1964, and the formulas on page 2 are not numbered. In any case: This is the first time that you see a proof of this kind. Even if there should be a typo at the indicated place, you should be clearly aware of what Rudin wants to prove there, so that you can fill in the exact small print. Don't go to page 3 until you have understood this point. Rudin put a lot of thought into this proof: In order to show that a $p\in{\mathbb Q}$ with $p^2<2$ cannot be the maximal such $x\in{\mathbb Q}$, by "reverse engineering" he came up with a $q=p+h$, $h>0$, such that $q^2<2$ as well.
When is $V_\kappa \prec_{\Sigma_1} V$
Assume that $V_\kappa\prec_{\Sigma_1}V$ holds. Observe that the sentence Every set is equipotent with an ordinal. is $\Pi_2$, and it holds over $V$. Hence it also holds over $V_\kappa$. By my previous answer, it implies $\kappa$ is a beth fixed point, so we have $H_\kappa=V_\kappa$.
Algorithm for dividing sums of square roots: $\tfrac{\sum_{i=1}^m \sqrt{x_i}}{\sum_{j=1}^n \sqrt{y_j}}$
$$f=\sqrt2+\sqrt3+\sqrt5+\sqrt7+\sqrt{11}\\ g=(\sqrt2+\sqrt3+\sqrt5+\sqrt7)^2-(\sqrt{11})^2\\ =6+2(\sqrt6+\sqrt{10}+\sqrt{14}+\sqrt{15}+\sqrt{21}+\sqrt{35})\\ =2(3+\sqrt6+\sqrt{10}+\sqrt{15})+2\sqrt7(\sqrt2+\sqrt3+\sqrt5)\\ =g_1+\sqrt7g_2$$ $$h=(g_1+\sqrt7g_2)(g_1-\sqrt7g_2)=g_1^2-7g_2^2\\ =4(40+10\sqrt{15}+12\sqrt{10}+16\sqrt6)-28(10+2\sqrt6+2\sqrt{10}+2\sqrt{15})\\ =-120+8\sqrt6-8\sqrt{10}-16\sqrt{15}\\ =(-120+8\sqrt6)-\sqrt5(8\sqrt2+16\sqrt3)\\ =h_1+\sqrt5h_2$$ $$k=(h_1+\sqrt5h_2)(h_1-\sqrt5h_2)=h_1^2-5h_2^2\\ =14400+384-1920\sqrt6-5(128+768+256\sqrt6)\\ =10304-3200\sqrt6\\ N=10304^2-6(3200)^2$$ Given $f(\sqrt2,\sqrt3,\sqrt5)$ in the denominator, let $$g(\sqrt2,\sqrt3)=f(\sqrt2,\sqrt3,\sqrt5)f(\sqrt2,\sqrt3,-\sqrt5)\\ h(\sqrt2)=g(\sqrt2,\sqrt3)g(\sqrt2,-\sqrt3)\\ K=h(\sqrt2)h(-\sqrt2)$$ EDIT $$f(\sqrt2,\sqrt3,\sqrt5,\sqrt7,\sqrt{11})=f_1(\sqrt2,\sqrt3,\sqrt5,\sqrt7)+\sqrt{11}f_2(\sqrt2,\sqrt3,\sqrt5,\sqrt7)\\ g(\sqrt2,\sqrt3,\sqrt5,\sqrt7)=f(\sqrt{11})f(-\sqrt{11})=f_1^2-11f_2^2\\ =g_1(\sqrt2,\sqrt3,\sqrt5)+\sqrt7g_2(\sqrt2,\sqrt3,\sqrt5)\\ h(\sqrt2,\sqrt3,\sqrt5)=g(\sqrt7)g(-\sqrt7)=g_1^2-7g_2^2\\ k(\sqrt2,\sqrt3)=h_1^2-5h_2^2\\ m(\sqrt2)=k_1^2-3k_2^2\\ N=m(\sqrt2)m(-\sqrt2)$$
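If you want to check a computation like the one above, sympy can perform the same repeated-conjugation rationalization (a minimal sketch; I am assuming radsimp's max_terms keyword, which bounds how many surds it will handle):

```python
from sympy import sqrt, radsimp

expr = 1 / (sqrt(2) + sqrt(3) + sqrt(5) + sqrt(7) + sqrt(11))

# radsimp multiplies by conjugates repeatedly, just like the
# f -> g -> h -> k -> N chain above; the denominator has 5 surds
print(radsimp(expr, max_terms=5))
```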
Arranging Drivers and Passengers from IB Math AA HL
The drivers can be placed in $5\cdot4\cdot3 = 60$ ways. There are spaces for $1, 4, 4$ people in the three cars. Since no seating arrangement has been specified, they can be placed in $\binom9 1 \binom 84 \binom 4 4 = 630$ ways. Multiply to get $60\cdot630 = 37800$. Your answer seems correct to me.
Transition Matrix with an unknown variable
A Markov chain transition matrix must have its rows (or columns, depending on the convention) sum to $1$. If we look at the M row, we have $0.2+1-2\alpha+0.2=1$, which means $\alpha=0.2$.
integrate $ x e^{-x^2}$ from ${-\infty} $ to ${+\infty}$
First, the substitution would take the interval $(-\infty,\infty)$ to "$(-\infty,-\infty)$": you have $"-(-\infty)^2=-\infty"$ and $"-(\infty)^2=-\infty"$, so the change of variable $$ z=-x^2 $$ is not allowed; it is not a bijection between these intervals. You had better separate your initial interval, $(-\infty,\infty)=(-\infty,0) \cup (0,\infty)$, and then make the change of variable on each part.
What does the double vertical line mean in linear algebra?
Double bars (or sometimes even single bars) tend to denote a norm in Mathematics. Most likely, the double bars here are denoting the Euclidean norm. This is just the length of the vector. So for example, the vector (I shall write it horizontally for compactness) $(1,2,3)$ has length $$ \|(1,2,3) \|=\sqrt{1^2+2^2+3^2}=\sqrt{14} $$ and the vector $$ \|(3,-1,2) \|=\sqrt{3^2+(-1)^2+2^2}=\sqrt{14} $$ Notice that $A\mathbf{x}$ is just a vector, so $\|A\mathbf{x}\|$ is just the length of the vector. $\|\mathbf{x}\|$ is just the length of $\mathbf{x}$. So here you are looking for scaling of $\mathbf{x}$ under transformation by $A$ to be between $m$ and $M$. (Look at $\frac{\|A\mathbf{x}\|}{\|\mathbf{x}\|}$ and think about what it means 'pictorially' to see what I am talking about).
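To see the stretch factor concretely (a minimal Python sketch, not part of the original answer; the extreme values of the ratio are the smallest and largest singular values of $A$):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
for _ in range(5):
    x = np.random.randn(2)
    print(np.linalg.norm(A @ x) / np.linalg.norm(x))  # the stretch factor
# every ratio above lies between these two singular values
print(np.linalg.svd(A, compute_uv=False))
```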
A bag contains 5 red, 6 blue, and 4 yellow marbles
I am assuming that the order of drawing is not important, i.e. we just care about which marbles we have drawn. If order is important, please state that. Hint for A): How many red / blue / yellow marbles did you draw? $ 6 \times 7 \times 5 - 1 $ Hint for B): If you draw 5 marbles, then at least 2 of them must be the same color, so the additional condition is a red herring. Note that we cannot draw 5 yellow marbles. $ {7 \choose 2} - 1$ Hint for C): This is a standard Stars and Bars question. ${6 \choose 2}$
Factor group of quaternion group
Let's work out explicitly what the quotient looks like by a subgroup of order 4. Specifically, let's quotient by $\langle A\rangle = \{1,A,A^2,A^3\}$. $Q/\langle A\rangle$ is, by definition, the set of cosets of $\langle A\rangle$ in $Q$. As pointed out, there must be two cosets because $8/4=2$. One of these cosets, $S_1$, contains the elements of $\langle A\rangle$, and the other, $S_2$, contains the leftovers: $\{B,AB,A^2B,A^3B\}$. Strictly speaking, these two cosets are the elements of $Q/\langle A\rangle$. The group operation is defined as the usual product for two subsets of a group. If $S$ and $T$ are subsets of a group $G$, their product $ST$ is said to be the set $\{st\mid s\in S, t\in T\}$. Under this product, the two cosets of $\langle A\rangle$ in $Q$ form a group. You can verify directly that $S_1S_1 = S_2S_2 = S_1$ and $S_2S_1=S_1S_2 = S_2$. These relations demonstrate that $Q/\langle A\rangle$ is cyclic of order 2. This is a relatively unexciting example. If there were more than two cosets, you might have to compute several pairwise products before discovering the structure of the quotient group. Because $|Q/\langle A\rangle|=2$, there was only one possibility for a group structure. Also, if you were to take $G/H$ with $H$ not normal in $G$, you would find that the cosets would not form a group under the product we defined.
Let a be a positive rational number. Let $A = \{ x\in\Bbb Q \mid x^2 < a \}$ . Show that $A$ is bounded in $\Bbb Q$
Because a square root can be either positive or negative, when you have $x^2<a$ and you take the square root of both sides, you actually get two inequalities out. $$\begin{align} x^2 &< a \\ \implies\quad x&<\sqrt{a} \;\;\;&\text{and}\;\;\; x&>-\sqrt{a}\\ \implies\quad -\sqrt{a}< \;& x < \sqrt{a} \end{align}$$ That gives you both an upper bound and a lower bound.
Finding a space $X$ such that $\dim C(X)=n$.
This is possible only for $n=0$. In fact, the following stronger statement is true. Let $X$ be a nonempty topological space. Then $C(X)$ is $0$-dimensional if every $f\in C(X)$ is locally constant, and otherwise $C(X)$ is infinite-dimensional. To prove this, first suppose that every $f\in C(X)$ is locally constant. Then for any $f\in C(X)$, the function $g$ defined by $g(x)=1/f(x)$ if $f(x)\neq 0$ and $g(x)=0$ if $f(x)=0$ is continuous. Since $fgf=f$, this proves $C(X)$ is von Neumann regular (and not the zero ring since $X$ is nonempty) and thus $0$-dimensional. Now suppose some $f_0\in C(X)$ is not locally constant. Say $x_0\in X$ is such that $f_0$ is not constant in any neighborhood of $x_0$. By subtracting a constant from $f_0$, we may assume $f_0(x_0)=0$. Since $f_0$ does not vanish identically in any neighborhood of $x_0$, $x_0$ is in the closure of the set $A=f_0^{-1}(\mathbb{R}\setminus\{0\})$. Let $U$ be some ultrafilter on $A$ which converges to $x_0$. For each $k\in\mathbb{N}$, define $$P_k=\{f\in C(X):\lim_Ue^{a|f_0(x)|^{-k}}f(x)=0\text{ for all }a\in\mathbb{R}\}$$ where $\lim\limits_U$ denotes the limit with respect to $x$ along the ultrafilter $U$. Intuitively, you can think of $P_k$ as consisting of those functions which go to $0$ along $U$ "much faster" than $e^{-|f_0|^{-k}}$. I claim that for any $k$, $P_k$ is a prime ideal. First, $P_k$ is clearly closed under addition. Since any continuous function on $X$ is bounded in a neighborhood of $x_0$ and $U$ converges to $x_0$, $P_k$ is closed under multiplication by arbitrary elements of $C(X)$. To show $P_k$ is prime, suppose $g,h\not\in P_k$. Then for some $a\in\mathbb{R}$ and some $\epsilon>0$, the set $$S=\{x\in A: |e^{a|f_0(x)|^{-k}}g(x)|>\epsilon\}$$ is in $U$ (here we use the fact that $U$ is an ultrafilter; if $U$ were just a filter all we would know is that the complement of some such set $S$ is not in $U$). Similarly, for some $b\in\mathbb{R}$ and some $\epsilon'>0$, the set $$T=\{x\in A: |e^{b|f_0(x)|^{-k}}h(x)|>\epsilon'\}$$ is in $U$. Since $U$ is a filter, $S\cap T\in U$ as well. But if $x\in S\cap T$, then $$|e^{(a+b)|f_0(x)|^{-k}}g(x)h(x)|>\epsilon\epsilon'.$$ This witnesses that $gh\not\in P_k$. So each $P_k$ is a prime ideal, and it is clear that if $k\leq \ell$ then $P_\ell\subseteq P_k$. To conclude that we have an infinite chain of prime ideals and so $C(X)$ is infinite-dimensional, we thus only need to show that $P_k\neq P_{k+1}$ for each $k$. To prove this, just consider the function $f(x)=e^{-|f_0(x)|^{-k-1}}$ (with $f(x)=0$ when $f_0(x)=0$). Note that $f_0(x)$ converges to $0$ along $U$ and so $e^{a|f_0(x)|^{-k}-|f_0(x)|^{-k-1}}$ converges to $0$ along $U$ for any $a$. Thus $f\in P_k$. But $f\not\in P_{k+1}$, since $e^{a|f_0(x)|^{-k-1}-|f_0(x)|^{-k-1}}$ goes to $\infty$ as $f_0(x)$ approaches $0$ for $a>1$. Thus $P_{k+1}\neq P_k$. (Actually, we don't need to restrict to $k\in\mathbb{N}$; we could let $k\in[0,\infty)$ and a similar argument would still work. So we actually get a chain of primes indexed by $[0,\infty)$, not just a countable chain.)
Product of two Hermitian matrices
Note that $S^HS$ is not the adjoint of $SS^H$. The adjoint of $SS^H$ is always $SS^H$, whatever $S$ is. In your example, your $S$ is not hermitian, so the commutation of hermitian matrices does not apply.
Derivatives: Interesting (unexpected?) situations where they arise?
A handout from my vector calculus class. Consider the social network of seven individuals with the unimaginative names $A,B,C,D,E,F$ and $G$. An edge connects each pair of friends. This network or graph consists of two smaller, distinct graphs or components. Question: How to write an algorithm to suggest that person $B$ befriend $D$? The computer program should analyze the two components $\{A,B,C,D\}$ and $\{E,F,G\}$, identify that person $B$ is in the first component and then step through that list to find people to whom person $B$ is not currently linked. To do this, the computer will be fed the graph Laplacian, a matrix defined via the formula: \begin{equation*} L = (a_{ij}) = \begin{cases} \text{degree of vertex $i$ along the diagonal} \\ \text{$-1$ when an edge connects vertices $i$ and $j$}. \end{cases} \end{equation*} For the network of seven friends, the Laplacian matrix looks like: \begin{equation} L = \begin{bmatrix} 3 & -1 & -1 & -1 & 0 & 0 & 0 \\ -1 & 2 & -1 & 0 & 0 & 0 & 0 \\ -1 & -1 & 3 & -1 & 0 & 0 & 0 \\ -1 & 0 & -1 & 2 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 0 & -1 & 2 & -1 \\ 0 & 0 & 0 & 0 & 0 & -1 & 1 \end{bmatrix} \end{equation} where rows are in alphabetical order. Question: How to determine the components of the graph using this matrix? Note that the vector $\begin{bmatrix} 1 & 1 & 1 & 1 & 0 & 0 & 0 \end{bmatrix}^T$ is in the nullspace of $L$ and this vector corresponds to the first component. Can you find a second vector in the nullspace? In general, these vectors associated with the components form a basis for the nullspace (and this isn't difficult to prove). So if you find the basis for $N(L)$, you've found the components of the original graph. In real life, graphs aren't as simple as the one pictured above. In fact, the graph may consist of one giant component with tightly clustered "approximate components" embedded within. (See any of the images in this search.) And if the graph does have a lot of components, there are more computationally efficient methods of finding them. So why introduce the graph Laplacian? It turns out that the graph Laplacian is a basic object in the field of spectral clustering, which has numerous "real life" applications. In fact, I actually used the technique at a previous job while analyzing a large dataset. I should point out that in spectral graph theory, you analyze all eigenvalues of the graph Laplacian, not just $\lambda=0$, as we have done. Question: What does this have to do with derivatives? If you take multi-variable calculus, you may learn about the Laplace operator $\Delta = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}$, which I have written in two dimensions. You may not believe it, but there's actually a connection between the Laplace operator and the graph Laplacian which can be explained via the discrete Laplacian!
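As a sketch of that nullspace computation (scipy.linalg.null_space returns an orthonormal basis of $N(L)$, so its column count is the number of components):

```python
import numpy as np
from scipy.linalg import null_space

L = np.array([
    [ 3, -1, -1, -1,  0,  0,  0],
    [-1,  2, -1,  0,  0,  0,  0],
    [-1, -1,  3, -1,  0,  0,  0],
    [-1,  0, -1,  2,  0,  0,  0],
    [ 0,  0,  0,  0,  1, -1,  0],
    [ 0,  0,  0,  0, -1,  2, -1],
    [ 0,  0,  0,  0,  0, -1,  1],
], dtype=float)

print(null_space(L).shape[1])  # 2: one dimension per component

# the two component indicator vectors really do lie in the nullspace
v1 = np.array([1, 1, 1, 1, 0, 0, 0], dtype=float)
v2 = np.array([0, 0, 0, 0, 1, 1, 1], dtype=float)
print(np.allclose(L @ v1, 0), np.allclose(L @ v2, 0))  # True True
```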
Solving algebra word problem?
If $b=2j$, that implies that he has more balls than jacks. However, from the problem, you know that he has more jacks than balls, so the answer should be $b=\frac{j}{2}$.
Equivalent of random variable sequences in distribution?
The assertion is not true. A counterexample: Pick a random variable $Y$ and define $Y_n:=Y$ and $X_n:=Y+1/n$. Then $X_n$ converges in distribution to $Y$, and obviously $Y_n$ converges in distribution to $Y$. But $X_n$ and $Y_n$ are equal nowhere.
Argue that the sum function $F$ for a power series satisfies $F(x) = \frac{x}{(1-x)^2} - \frac{2x}{(2-x)^2}$ when $-1<x<1$
$$\sum_{n=0}^\infty n(1-2^{-n})z^n=\sum_{n=0}^\infty nz^n-\sum_{n=0}^\infty n\left(\frac{z}{2}\right)^n$$ $$\sum_{n=0}^\infty nz^n=z\sum_{n=0}^\infty nz^{n-1}=z \left(\sum_{n=0}^\infty z^{n} \right)'$$
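Finishing from there: since $\sum_{n=0}^\infty z^n=\frac{1}{1-z}$ for $|z|<1$, $$\sum_{n=0}^\infty nz^n=z\left(\frac{1}{1-z}\right)'=\frac{z}{(1-z)^2} \qquad\text{and}\qquad \sum_{n=0}^\infty n\left(\frac{z}{2}\right)^n=\frac{z/2}{(1-z/2)^2}=\frac{2z}{(2-z)^2},$$ and subtracting the two gives $F(x)=\frac{x}{(1-x)^2}-\frac{2x}{(2-x)^2}$ for $-1<x<1$.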
How to integrate $\int_0^{\pi/2} \frac{\sin(x)}{\sin(x+\frac{\pi}{4})}{\rm d}x$ using substitution $x = \frac{\pi}{2} - y$?
Let $$I:=\int_0^{\pi/2}\frac{\sin x\,{\rm d}x}{\sin(x+\pi/4)}=\int_0^{\pi/2}\frac{\sqrt{2}\sin x\,{\rm d}x}{\sin x+\cos x}=\int_0^{\pi/2}\frac{\sqrt{2}\cos y\,{\rm d}y}{\sin y+\cos y}$$ so, averaging the last two expressions, $$I=\frac{1}{\sqrt{2}}\int_0^{\pi/2}{\rm d}x=\frac{\pi}{\sqrt{8}}.$$
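A quick numerical confirmation that $I=\pi/\sqrt 8\approx 1.1107$:

```python
from math import pi, sin, sqrt
from scipy.integrate import quad

I, err = quad(lambda x: sin(x) / sin(x + pi / 4), 0, pi / 2)
print(I, pi / sqrt(8))  # both approximately 1.11072
```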
how to prove $a+b-ab \le 1$ if $a,b \in [0,1]$?
Hint: in that interval, $$(1-a)(1-b) \ge 0$$
About vector space of polynomials.
What is the degree of the $0$ polynomial? Assuming you can skirt this issue, you might note the lack of closure (in the manner suggested by André Nicolas). For example, here are two polynomials with degree at least three, for which the sum is a polynomial that does not have degree at least three: $x^3 + x$ and $-x^3$. (Observe each has degree three.) The sum of these polynomials is $x$, which is a polynomial of degree $1 < 3$. Answer: Nope.
Solve $3^a-5^b=2$ for integers $a$ and $b$.
We know that $b$ is even (since $5^b+2\equiv 2^b+2\pmod 3$ must be divisible by 3). We also know that the only solution to $y^2+2=x^3$ is $y=5,x=3$. (Solving the diophantine equation $y^{2}=x^{3}-2$) Thus it is sufficient to show that $a$ is divisible by 3. Suppose that $a \geq 2$. Since 9 divides $5^b+2$, we get that $b=6k+2=3m+2$. We have $25(125)^m+2=3^a$. Since $125\equiv 1\pmod{31}$, we get that $3^a$ is $27$ mod $31$, which forces $a$ to be $3$ mod $30$, in particular divisible by $3$.
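Both congruence claims are easy to machine-check (a sketch):

```python
# residues b mod 6 with 5^b = -2 (mod 9): only 2, i.e. b = 6k + 2
print({b % 6 for b in range(1, 200) if pow(5, b, 9) == 7})     # {2}

# residues a mod 30 with 3^a = 27 (mod 31): only 3, i.e. a = 3 (mod 30)
print({a % 30 for a in range(1, 200) if pow(3, a, 31) == 27})  # {3}
```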
Draw grid lines of a soccer field in perspective
Generally speaking, what you’re looking for is a homography, a planar perspective transformation, between the field’s ideal rectangle and its image. I highly recommend that you look through the graphics libraries that you’re using. Many have built-in library functions for constructing such a transformation. That will save you some work and allow you to draw other things besides a grid onto the image as well as map points on the image back to the ideal rectangle, which is handy for selection. If you need to construct this for yourself, this answer has a very nice description of how to do so. If all you really need to do is draw a grid, though, there’s a possibly simpler method that uses cross-ratios: given four points $a$, $b$, $c$ and $d$ on a line, the quantity $$(a,b;c,d) = {[a,c][b,d]\over[a,d][b,c]}$$ is invariant under projective transformations of the line. (Here, the notation $[p,q]$ denotes the determinant of the matrix with the homogeneous coordinates of points $p$ and $q$ as its rows or columns.) You can use this invariant to map the distance along the edge of the field into a distance in the image. Let’s say $\lambda$ is the proportion of the distance along a side of the field at which you want to place the end of a grid line. Form the cross-ratio $$(\lambda,0;1,\infty) = {\begin{vmatrix}\lambda&1\\1&1\end{vmatrix} \begin{vmatrix}0&1\\1&0\end{vmatrix} \over \begin{vmatrix}\lambda&1\\1&0\end{vmatrix} \begin{vmatrix}0&1\\1&1\end{vmatrix}} = 1-\lambda.$$ Besides the two endpoints $P_0$ and $P_1$ of the corresponding line segment in the image, you’ll also need the vanishing point $P_\infty$ of the lines parallel to it. This will be the intersection of the extension of the image of the field edge with the extension of the image of the opposite edge. Call the distance between the endpoints $d$ and the distance from the image of the “start” corner to the vanishing point $v$. (Note that $v$ might be negative.) The corresponding cross-ratio for the image is $$(\mu,0;d,v) = {\begin{vmatrix}\mu&1\\d&1\end{vmatrix} \begin{vmatrix}0&1\\v&1\end{vmatrix} \over \begin{vmatrix}\mu&1\\v&1\end{vmatrix} \begin{vmatrix}0&1\\d&1\end{vmatrix}} = {v(\mu-d)\over d(\mu-v)}.$$ Equating the two cross-ratios and solving for $\mu$ gives $$\mu = {d v \lambda \over v+d(\lambda-1)}.$$ Dividing this by $d$ gives you the proportion of the distance between the two endpoints in the image, so the image point that corresponds to the proportion $\lambda$ is $$\left(1-\frac \mu d\right)P_0+\frac \mu d P_1.$$ In particular, if $\lambda=\frac k4$, then this becomes $${(k-4)(d-v) P_0 + k v P_1 \over (k-4)d+4v}.$$
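To make the last few formulas concrete, here is a small sketch; the endpoint coordinates and the vanishing-point distance are made-up numbers:

```python
def grid_point(P0, P1, v, lam):
    """Map a field proportion lam in [0, 1] to a point on the image segment P0-P1.

    d is the distance between the image endpoints; v is the signed distance
    from P0 to the vanishing point along the same line.
    """
    d = ((P1[0] - P0[0]) ** 2 + (P1[1] - P0[1]) ** 2) ** 0.5
    mu = d * v * lam / (v + d * (lam - 1))   # the cross-ratio formula above
    t = mu / d
    return ((1 - t) * P0[0] + t * P1[0], (1 - t) * P0[1] + t * P1[1])

# quarter lines (lambda = k/4) along one edge of the imaged field
for k in range(5):
    print(grid_point((100.0, 400.0), (500.0, 380.0), 1500.0, k / 4))
```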
Does $\sum_{n=1}^{\infty} \left(1- \frac{1}{n^4}\right)^{n^5}$ converge?
Using the root test: $$\lim\limits_{n\to\infty} \sqrt[n]{a_n} =\lim\limits_{n\to\infty} \left(1-\frac{1}{n^4}\right)^{n^5/n} =\lim\limits_{n\to\infty} \left(1-\frac{1}{n^4}\right)^{n^4} =\frac{1}{e}<1.$$ So, it converges.
Fixed point in a map of a function
Since we know $g(x)\in [a,b]$ for all $x\in [a,b]$, we have $$g(a)\geq a \implies g(a)-a\geq 0$$ and also $$g(b)\leq b \implies g(b)-b\leq 0.$$ Applying the Intermediate Value Theorem to the continuous function $g(x)-x$, $$\exists \ x^* \in [a,b] \text{ such that } g(x^*)-x^*=0,$$ that is, $$g(x^*)=x^*.$$ So we know that there must be a fixed point in $[a,b]$.
Cutting an $n$ dimensional vector space into two parts with an $n-1$ dimensional subspace.
We have $S=\ker f$ for some linear functional $f$. Note that $u\equiv v$ if $f(\lambda u+(1-\lambda)v)\ne0$ for all $\lambda\in[0,1]$. As $f(\lambda u+(1-\lambda)v)=\lambda f(u)+(1-\lambda)f(v)$, this is equivalent to $f(u)f(v)>0$ (both need to be on the same side of $0$). Now, if $u\equiv v$ and $v\equiv w$, we know that $f(u)f(v)>0$ and $f(v)f(w)>0$. Then $f(u)f(w)>0$, as both have the same sign as $f(v)$. So the relation is transitive. The equivalence classes are precisely the sets $\{f>0\}$ and $\{f<0\}$, which are the two sides of the hyperplane you are looking for.
Knowledge of $\frac{\partial \log(\mu(x))}{\partial x}$ implies knowledge of $\frac{\partial \mu(x)}{\partial x}$
If you don't know at least one value of $\mu$, no; the set of solutions is closed under multiplying $\mu$ by a positive constant, i.e. adding a constant to $\ln\mu$. If you do, yes: let $y:=\ln\mu$, so that $\mu=\exp y$ and $\mu^\prime=y^\prime\exp y$ (we obtain $y$ by integrating the known $y^\prime$).
How to prove that there exists a constant C such that $\max |f|^{2} \le C \int \left ( |f'|^{2} + f^{2} \right ) dx$?
In fact your question deals with the Sobolev space $H^1$. Have a look at the proof of the Rellich-Kondrachov theorem in this paper.
What is needed to specify a group?
Consider two groups $G_1$ and $G_2$, each of which is generated by two elements $a$ and $b$. In $G_1$, $a$ and $b$ satisfy these relations: $a^2=e$; $b^3=e$; $aba^{-1}=b^{-1}$. In $G_2$, $a$ and $b$ satisfy these relations: $a^2=e$; $b^2=e$; $(ab)^3=e$. So, the two presentations are quite different. However, $G_1\simeq G_2$, because, in fact, both of them are isomorphic to $S_3$. Here's another example. This time, I will provide two presentations of the group $\mathbb{Z}_6$: group generated by $a$ and the relation $a^6=e$; group generated by $a$ and $b$ and the relations $a^2=e$, $b^3=e$ and $ab=ba$.
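A sketch using SymPy's finitely presented groups (coset enumeration) to confirm that both two-generator presentations define a group of order $6$:

```python
from sympy.combinatorics.free_groups import free_group
from sympy.combinatorics.fp_groups import FpGroup

F, a, b = free_group("a, b")
G1 = FpGroup(F, [a**2, b**3, a * b * a**-1 * b])  # a b a^-1 = b^-1
G2 = FpGroup(F, [a**2, b**2, (a * b)**3])
print(G1.order(), G2.order())                     # 6 6
```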
power of 2 involving floor function divides cyclic product
HINT: You're not quite correct for the powers of $2$. There are $2n$ numbers total, so there can in fact be exactly $n$ odds and $n$ evens (i.e. neither has to be $\ge n+1$). Instead: Let there be $k$ odds, and $2n-k$ evens. This gives you ${k \choose 2}$ factors of $2$ from the (odd - odd) terms and ${2n-k \choose 2}$ factors of $2$ from the (even - even) terms. First you can show that their sum is minimized at $k=n$ and that sum is $n(n-1) = n^2 - n$. So now you need another $2[{n \over 3}]$ more factors of $2$. These come from splitting the evens further into $4k$ vs $4k+2$, and the odds into $4k+1$ vs $4k+3$, because some of the differences will provide a factor of $4$, i.e. an additional factor of $2$ beyond the factor of $2$ you already counted. Proof sketch: e.g. suppose there are $n$ evens, and for simplicity of example let's say $n$ is itself even. In the worst split exactly $n/2$ will be multiples of $4$, which gives another $\frac12 {\frac n 2}({\frac n 2}-1)$ factors of $2$ from these numbers of form $4k$, and a similar thing happens with the $4k+1$'s, the $4k+2$'s, and the $4k+3$'s. The funny thing is that if you add up everything, $n({\frac n 2}-1) \ge 2[{\frac n 3}]$ (you can try it, and you will need to prove it). In other words $2[{\frac n 3}]$ is not a tight bound at all, but rather a short-hand that whoever wrote the question settled on just to make you think about the splits into $4k + j$. In fact a really tight bound would involve thinking about splits into $8k+j$, $16k+j$, etc. As for $\operatorname{lcm}(1, 3, 5, \dots, 2n-1)$, first note that the lcm divides $T$ just means each of the odd numbers divides $T$. You can prove this by the pigeonhole principle. Further explanation: E.g. consider $\operatorname{lcm}(3, 9) = 9$, so this lcm divides $T$ if both odd numbers divide $T$. In general, the lcm can be factorized into $3^{n_3} 5^{n_5} 7^{n_7} 11^{n_{11}} \dots$ and it divides $T$ if every term $p^{n_p}$ divides $T$. But in your case $p^{n_p}$ itself must be one of the numbers in the list $(1, 3, 5, \dots, 2n-1)$ or else the lcm wouldn't contain that many factors of $p$. Can you finish from here?
Is $S^\circ$ convex if $S$ is convex?
Let $X$ be a topological vector space over $\mathbb{R}$. Recall the definition of open neighborhoods: $U$ is said to be an open neighborhood of $x\in X$ if it is an open set and it contains $x$. First, $x\in S^\circ$ iff there is an open neighborhood $V_x\ni x$ such that $V_x\subseteq S$. Take $x,y\in S^\circ$ and $V_x,$ $V_y$ two open neighborhoods of $x$ and $y$ in $S$ respectively. We need to prove that $\lambda x + (1-\lambda)y\in S^\circ$ for all $\lambda\in (0,1)$. Since, $S$ is convex we have that $\lambda x + (1-\lambda)y\in S$ for all $\lambda\in (0,1)$. It suffices to find an open neighborhood of the point $\lambda x + (1-\lambda)y$ which lies inside $S$. This is: $$ V^\lambda = \lambda V_x + (1-\lambda)V_y = \{z=\lambda v_x + (1-\lambda)v_y;\ v_x\in V_x, v_y\in V_y\} $$ Since the mappings $+:X\times X\to X$ and $\cdot:\mathbb{R}\times X\to X$ are open mappings, $V^\lambda$ is open. Additionally, $V^\lambda\subseteq S$. Indeed, every $z\in V^\lambda$ is written as a convex combination of points of $S$. We have proved that $V^\lambda$ is an open neighborhood of $\lambda x + (1-\lambda)y$. This completes the proof.
Let $K$ be a field. Prove that the set of all polynomials over $K$ is a vector space over $K$.
If you want to be more precise, maybe show the axioms in a more rigorous manner. For instance, for commutativity under addition, let $$ p(X)=\sum_{i=1}^n a_i X^i\:\:\:\:\:q(X)=\sum_{i=1}^n b_i X^i$$ where some of the $a_i, b_i\in K$ may be zero (in particular the degrees may differ). Then $$ p(X)+q(X)=\sum_{i=1}^n (a_i+b_i)X^i=\sum_{i=1}^n(b_i+a_i)X^i=q(X)+p(X),$$ so that addition of polynomials is commutative. Indeed, we see here more explicitly that the commutativity follows from the fact that $a_i+b_i=b_i+a_i$ in $K$. Perhaps try recasting your proof in this more explicit manner. It's more insightful, and more rigorous.
Abel's Test for convergence proof
In the proof of Abel's Test, $A$ is a constant relative to $n$ but not constant relative to $m$ and should be written as $A_m$; and it should not be defined as just any upper bound for $\{|\sum_{j=1}^ka_j|\}_k$, but as $\sup_k |\sum_{j=1}^ka_j|$. Then $A_m=\sup_k|\sum_{j=1}^ka_j|=\sup_k|\sum_{j=1}^kx_{m+j}|$, which is as small as we want if we choose $m$ large enough. We obtain $$|\sum_{j=m+1}^nx_jy_j|=|\sum_{j=1}^{n-m}a_jb_j|\leq 2A_mb_1=2A_my_{m+1}$$ I suppose we should also write $a_{n,m}$ and $b_{n,m}$ rather than $a_n$ and $b_n$.
Every compact orientable surface with $S^1\times\{0\}$ as its boundary intersects the $z$-axis
Consider $M_{s}^{g}$ to be the manifold in question with genus $g$, and let $M^{g}$ be $M_{s}^{g}$ with the boundary circle filled in by a region homeomorphic to $\mathbb{D}^{2}$ chosen to have no other intersection with $M^{g}_{s}$. This can always be done. By the classification theorem, $M^{g}$ is a closed oriented surface with $g$ handles. I claim the following: 1) $M^{g}$ must contain at least one point of the $z$-axis in its interior. 2) Any line passing through this point must intersect $M^{g}$ at some point. The second claim is clear since $M^{g}$ is compact. The first needs a contradiction-type argument. If $M^{g}$ has an intersection point with the $z$-axis, then we are done. If the $z$-axis is disjoint from $M^{g}$, then we can put a box such that $M^{g}$ is in the box and the $z$-axis is outside of the box. Now, up to homeomorphism, we can deform $M^{g}$ to the 'standard' type such that $\mathbb{S}^{1}$ remains a circle on one of the $g$ handles or on the surface area (hence contractible). In the former case the center of the circle is in $M^{g}$. In the latter case it is clear the line must intersect $M^{g}$ at some point.
A strengthening of delta system lemma
Let $S = S^{\kappa^{+}}_{< \kappa} = \{\delta < \kappa^{+} : cf(\delta) < \kappa\}$. For $\delta \in S$, let $A_{\delta}$ be a club in $\delta$ of order type $cf(\delta)$ (so $|A_{\delta}| < \kappa$). Then by Fodor's lemma, for every stationary $W \subseteq S$, $\langle A_{\delta} : \delta \in W \rangle$ is not a delta system. However, if $S \subseteq S^{\kappa^{+}}_{\kappa} = \{\delta < \kappa^{+} : cf(\delta) = \kappa\}$ is stationary then this is true by Fodor's lemma.
The number of critical points
Under the hypothesis of this question, $f(x)$ has at most one critical point. I argue as follows: First of all, we observe, from scrutiny of the given equation satisfied by $f(x)$, $f''(x)+xf'(x)=\cos(x^3f'(x)), \tag 1$ that $f''(x_c) = 1 \tag 2$ at every critical point $x_c$ of $f(x)$. For $f'(x_c) = 0 \tag 3$ by the definition of critical point. Setting $x = x_c$ in (1) yields $f''(x_c) + x_c f'(x_c) = \cos(x_c^3 f'(x_c)), \tag 4$ or $f''(x_c) + x_c \cdot 0 = \cos (x_c^3 \cdot 0), \tag 5$ whence $f''(x_c) = \cos 0 = 1 \tag 6$ as asserted. Now since $f''(x)$ is continuous, we may for every critical point $x_c$ pick a real $\mu_c$ with $0 < \mu_c < 1$ and find $\delta_c > 0$ such that $s \in (x_c - \delta_c, x_c + \delta_c) \Longrightarrow f''(s) > \mu_c; \tag 7$ then for $x \in (x_c, x_c + \delta_c)$, $f'(x) - f'(x_c) = \displaystyle \int_{x_c}^x f''(s) ds \ge \int_{x_c}^x \mu_c ds = \mu_c(x - x_c) > 0; \tag 8$ by virtue of (3), this becomes $f'(x) > 0; \tag 9$ likewise for $x \in (x_c - \delta_c, x_c)$, $- f'(x) = f'(x_c) - f'(x) = \displaystyle \int_x^{x_c} f''(s) ds \ge \int_x^{x_c} \mu_c ds = \mu_c(x_c - x) > 0, \tag{10}$ i.e., $f'(x) < 0; \tag{11}$ (9) and (11) together show that for any critical point $x_c$ $x \in (x_c - \delta_c, x_c) \Longrightarrow f'(x) < 0,\; x \in (x_c, x_c + \delta_c) \Longrightarrow f'(x) > 0; \tag{12}$ we also see from (12) that the critical points of $f(x)$ are isolated, that is, there is a finite distance in $x$ between any two of the $x_c$. Now suppose $f(x)$ has more than one critical point; then by what we have just shown, there must exist two critical values $x_{c1} < x_{c2}$ with no critical point between them, i.e., $x \in (x_{c1}, x_{c2}) \Longrightarrow f'(x) \ne 0; \tag{13}$ furthermore, by (12) there are $\delta_{c1}$ and $\delta_{c2}$ such that $x_{c1} + \dfrac{\delta_{c1}}{2} < x_{c2} - \dfrac{\delta_{c2}}{2}, \tag{14}$ $f'(x_{c1} + \dfrac{\delta_{c1}}{2}) > 0, \; f'(x_{c2} - \dfrac{\delta_{c2}}{2}) < 0, \tag{15}$ and $x \in [x_{c1} + \dfrac{\delta_{c1}}{2},x_{c2} - \dfrac{\delta_{c2}}{2}] \Longrightarrow f'(x) \ne 0; \tag{16}$ but now we have $f'(x)$ continuous on $[x_{c1} + \dfrac{\delta_{c1}}{2},x_{c2} - \dfrac{\delta_{c2}}{2}]$ and taking values with opposite signs on the endpoints of this interval; by the intermediate value theorem, there must exist some $x_c \in [x_{c1} + \dfrac{\delta_{c1}}{2},x_{c2} - \dfrac{\delta_{c2}}{2}]$ with $f'(x_c) = 0$, in contradiction to (16); therefore, $f(x)$ has at most one critical point on all of $\Bbb R$. Must $f(x)$ have at least one critical point, implying in light of the above it has exactly one? Perhaps a further analysis of (1) would provide an answer to this question, but I don't have one as of this writing. Nor do I have, at the moment, any words on the parity of $f(x)$. Finally, there are some general results available on estimating the number and nature of critical points a function $f: \Bbb R \to \Bbb R$ may have; for example, if they are non-degenerate, meaning $f''(x_c) \ne 0$, one can see they must alternate between local maxima and minima; further results may be available if more hypotheses are placed upon $f(x)$; remember, google is your friend . . .
Random matrix theory
$P(A) dA$ means $$ P( a_{11} < x, a_{12} < y , a_{22} < z ) = \underbrace{Const}_{\text{normalization}} \int_{- \infty}^x \int_{- \infty }^y \int_{-\infty}^z e^{-a\text{Tr}(A^2)+b \text{Tr}(A) +c} da_{11}da_{12}da_{22} $$ I have a feeling they want you to do the following... first note that $$\text{Tr}(A^2) = a_{11}^2 +2a_{12}^2 + a_{22}^2 $$ $$\text{Tr}(A) = a_{11} + a_{22}$$ Using this, we see $$\exp(-a\text{Tr}(A^2)+b \text{Tr}(A)) = \exp \left( (-aa_{11}^2 + b a_{11}) + (-a a_{22}^2 + ba_{22}) - 2a a_{12}^2 \right)$$ Now let's complete the square. We see if $\mu = b /2a$, we may write $$ \exp(-a\text{Tr}(A^2)+b \text{Tr}(A)) =\exp( -a(a_{11} -\mu)^2 - a(a_{22} - \mu)^2 -2a a_{12}^2 +2a\mu^2) $$ Now can you compare the first expression to a normal distribution using this modified form?
Confusing probability riddle
Let's go through each of the three cases for the first generation: With probability $\frac 1 3$ the first snake dies without any offspring, which makes the lineage go extinct. With probability $\frac 1 3$ the first snake dies after having a single offspring. We are then left with one generation consisting of one snake, which is the same scenario we started with. Hence, if we denote by $P$ the probability that our lineage goes extinct (what we're trying to solve for), in this case the lineage will go extinct with probability $P$. With probability $\frac 1 3$ the first snake has two offspring and then dies. In order for this lineage to go extinct, both of the offspring's lineages need to go extinct. Each of these events happens with probability $P$, which gives a probability $P^2$ of both events happening at once. Hence, in total, we get that $P = \frac 1 3 + \frac 1 3P + \frac 1 3 P^2$. Solving for $P$, we get \begin{alignat*}{2} &&3P &= 1 + P + P^2\\ &\iff&\qquad P^2 - 2P + 1 &= 0\\ &\iff&\qquad (P - 1)^2 &= 0\\ &\iff&\qquad P &= 1 &\end{alignat*} so for time tending to infinity the lineage is certain to go extinct. (There are a few issues with this, since we assume that such a probability $P$ exists in the first place, but an approach like this is nice for getting a feeling for the problem)
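A Monte Carlo sketch of the same conclusion; note that convergence is slow for this critical process (surviving to generation $n$ has probability on the order of $1/n$), so the estimate sits a bit below $1$ for any finite cutoff:

```python
import random

def goes_extinct(max_gen=60, cap=10_000):
    """Simulate one lineage; report whether it dies out within max_gen generations."""
    pop = 1
    for _ in range(max_gen):
        if pop == 0:
            return True
        if pop > cap:  # a huge population essentially never dies out
            return False
        pop = sum(random.choice((0, 1, 2)) for _ in range(pop))
    return pop == 0

trials = 20_000
print(sum(goes_extinct() for _ in range(trials)) / trials)  # roughly 0.95 with this cutoff
```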
Finding hermitian conjugate and inverse of a complex matrix
The hermitian conjugate of a matrix $A$ satisfies $(A^H)_{k,l}= A_{l,k}^*$, where $*$ denotes complex conjugation. You can use this formula to easily find the hermitian conjugate of $F$, right? And what happens when you multiply $F^H$ times $F$?
Wronskian of the ODE $u''(t)+P(t)u'(t)+Q(t)u(t)=R(t)$?
The Wronskian is always associated with a homogeneous equation, hence the question is about the Wronskian of the corresponding homogeneous equation, not the Wronskian of the equation itself. By Abel's identity, the Wronskian is either zero for all $t\in [0,1]$, or else it is never zero. Now let's look at the options: $2t-1$ is equal to zero at $t=1/2$, but not equal to zero at some other points in $[0,1]$, so it is not the Wronskian. $\sin 2\pi t$ is equal to zero at $t=1/2$, but not equal to zero at some other points in $[0,1]$, so it is not the Wronskian. $\cos 2\pi t$ is equal to zero at $t=1/4$, but not equal to zero at some other points in $[0,1]$, so it is not the Wronskian. It follows that among the four options given, only $W(t)=1\,\forall t\in[0,1]$ can be the Wronskian, that is, if it is known that it must be one of these four.
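For reference, the dichotomy comes from Abel's identity: any two solutions $u_1,u_2$ of the homogeneous equation $u''+P(t)u'+Q(t)u=0$ have Wronskian $$W(t)=W(0)\exp\left(-\int_0^t P(s)\,ds\right),$$ which is either identically zero (when $W(0)=0$) or never zero on $[0,1]$.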
Why does this subspace need to be closed?
When you used the Hahn-Banach corollary, the induced 'norm' on $X/U$ needs to be an actual norm; in particular, $\|x\|_{X/U}\ne 0$ for nonzero classes is important. However, if $x\in\bar U\setminus U$, then $\|x\|_{X/U}:=d(x,U)=0$ even though the class of $x$ is nonzero, which is why $U$ needs to be closed.
Linear regression distribution
In my textbook it says that Y is a sum of a nonrandom quantity and a normal variable so it is distributed normally. Can someone explain this please? Here $X$ is a fixed variable and hence $\beta X$ is a fixed quantity; $\varepsilon\sim N(0,\sigma^{2})$ is the normal variable. As the distribution of the right hand side is normal, $Y$ is normal. Normality of $Y$ can be easily observed by considering the MGF of $Y$. By definition, \begin{eqnarray*} M_{Y}(t)&=&M_{\beta X+\varepsilon }(t)=E(e^{t(\beta X + \varepsilon)})\\ &=&e^{t\beta X}E(e^{t\varepsilon})\\ &=&e^{t\beta X}\exp\{\dfrac{1}{2}t^{2}\sigma^{2}\}\quad \mbox{ since } \varepsilon\sim N(0,\sigma^2)\\ &=&\exp\{(\beta X)t +\frac{1}{2}t^{2}\sigma^{2}\} \end{eqnarray*} which is the MGF of a normal distribution with mean $\beta X$ and variance $\sigma^{2}$.
Natural Transformations Without Objects
A natural transformation of functors $\mathbb{C} \to \mathbb{D}$ is the same thing as a functor $\mathbb{2} \times \mathbb{C} \to \mathbb{D}$, where $\mathbb{2} = \{ 0 \to 1 \}$. The domain of the natural transformation is the restriction to $\{ 0 \} \times \mathbb{C}$, and the codomain is the restriction to $\{ 1 \} \times \mathbb{C}$.
Strength of "Every finite dimensional subspace of a vector space has a complement"
As a lower bound on the strength of this, by this answer of mine on MO, "every line has a complementary hyperplane" implies the axiom of choice for finite sets of bounded cardinality. More precisely, that answer shows that if $X$ is a set such that there exists an $\mathbb{F}_p$-hyperplane in the space $\mathbb{F}_p(X)$ of rational functions with elements of $X$ as variables for all primes $p\leq n$, then there is a choice function on the subsets of $X$ of cardinality $\leq n$.
For continuous and bounded $f(x,y)$ prove $g(x)=\sup_y f(x,y)$ is continuous too
If the domain is only a region, this is false. Consider the function $f(x,y) = y$ defined on the region $$ G = \{ -1 \leq y \leq 0 \} \cup \{ 0 \leq y \leq 1, y \geq -x \} $$ Here $g(x)=\sup_y f(x,y)$ equals $0$ for $x<-1$ but equals $1$ for $x\geq -1$, so $g$ jumps at $x=-1$.
Airplane decelerating as a function of speed
Hint: You have $$\frac{dv}{dt}=-av^2-b$$ As Narasimham answered, the easiest way is to rewrite the equation as $$\frac{dt}{dv}=-\frac 1{av^2+b}$$ So $$\int dt=-\int\frac {dv}{av^2+b}$$ Change variable $v=\sqrt {\frac b a}z$, $dv=\sqrt {\frac b a}dz$. This makes $$\int dt=-\frac 1{\sqrt{ab}}\int\frac{dz}{1+z^2}$$ which is a known integral.
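Carrying the hint one step further (with an integration constant $C$): $$t=-\frac{1}{\sqrt{ab}}\arctan z+C=-\frac{1}{\sqrt{ab}}\arctan\left(\sqrt{\frac ab}\,v\right)+C,$$ which can be inverted to $v(t)=\sqrt{\frac ba}\,\tan\left(\sqrt{ab}\,(C-t)\right)$, valid until the speed reaches $0$.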
How to find the matrix exponential of a fourth order matrix in Maple 12
You need to load the linear algebra package (with(LinearAlgebra):) and then compute the first four terms in the exponential series. (For a closed form, the LinearAlgebra package also has a MatrixExponential command.)
Encyclopedia of Groups
Groups of order less than 30 are at http://opensourcemath.org/gap/small_groups.html Also, http://world.std.com/~jmccarro/math/SmallGroups/SmallGroups.html goes up to order 32.
Cartan homotopy formula and curl
Apply (plug in by means of the operator $i$) both sides to a volume 3-form $\tau$. The equality of vector fields on both sides is equivalent to the equality of the obtained 2-forms. On the left we have $i_{curl(a\times b)}\tau= di_b i_a \tau$ (note the order: vector field $a$ is plugged in first). On the right first use the Cartan calculus formula $$ i_{[a,b]}\tau=[i_b, L_a]\tau = i_b L_a \tau - L_a i_b\tau = i_b d i_a\tau - di_a i_b\tau -i_a d i_b\tau $$ (note that $d\tau = 0$). Since $L_b\tau = ({div}\,b)\tau$, we have $i_{a\, {div}\, b}\tau = i_a(L_b\tau)= i_a d i_b\tau$ and similarly for $i_{b\, {div}\, a}\tau = i_b d i_a\tau$. Hence the inner multiplication of $[a,b]+ a\, {div}\, b - b\, {div}\, a$ to the 3-form $\tau$ is $$ i_b d i_a\tau - di_a i_b\tau -i_a d i_b\tau + i_a d i_b\tau - i_b d i_a\tau = - di_a i_b\tau, $$ which proves the formula.