Non-homeomorphic (?) subspaces of Euclidean plane
The complement of any neighbourhood of $(0,0)$ in $Y_1$ is a finite collection of line segments, while there are neighbourhoods of $(0,0)\in Y_2$ where that isn't the case. Another property that differentiates them is that $Y_2$ isn't locally connected: for instance, any small enough neighbourhood of $(1/2,0)$ consists of infinitely many line segments.
Good references (books or lecture notes) for self-study on Banach algebras
I would suggest the book by Richard V. Kadison and John R. Ringrose, Fundamentals of the Theory of Operator Algebras, Volume I, and you may pair it with the book by Corneliu Constantinescu, $C^{\ast}$-algebras, Volume II. These two books assume little background in measure theory; they do not even assume very much functional analysis, so I think they suit your needs.
Union of simply connected spaces at a point not simply connected
After posting this question I came across this blog post http://wildtopology.wordpress.com/2014/06/28/the-griffiths-twin-cone/ and now feel like I can answer my own question. As mentioned in the comments, the space I describe looks like this, but it is homeomorphic to a space that looks like this. Using the second of these two spaces and two applications of the Seifert–van Kampen theorem, first with $U$ and $V_\text{odd}$ and then with $U \cup V_\text{odd}$ and $V_{\text{even}}$, we can see that the inclusion map identifying the Hawaiian earring with the intersection of the space with the $xy$ plane induces a surjection of fundamental groups. The kernel of this map is the conjugate closure of the union of subgroups that contain loops going around only even or only odd circles but not both. This is not the entire fundamental group, because the loop that goes once around every circle from biggest to smallest is not contained in this union.
Union of infinitely often events
Yes, it is correct (I assume $X_n\;i.o.$ means $\bigcap_{k\geq 1}\bigcup_{m\geq k} X_m$). I think it would be advisable to make the additional step $$ [A_n \;i.o. ] \subseteq [B_n\cup C_n\;i.o] \subseteq [B_n \;i.o. ]\cup [C_n \;i.o. ]. $$ Explanation for the last $\subseteq$: If $x$ appears infinitely often in the sets $B_n\cup C_n$, but not infinitely often in $B_n$, it has to be contained infinitely often in $C_n\setminus B_n$, and hence in $C_n$. So $x\in [B_n \;i.o. ]\cup [C_n \;i.o. ]$.
Bounded recursive sequence
The sequence $a_n=\sin n$ is bounded, not monotonic, not constant, doesn't converge, and isn't periodic. I'm not certain whether you are using "recursive" in the math-logic sense of the term, or if you just want it to satisfy a recurrence relation. Assuming the latter, $a_n$ satisfies a three-term constant-coefficient linear recurrence. To find it, eliminate $\cos n$ from the formulas $\sin(n+1)=\sin n\cos1+\cos n\sin1$ and $\sin(n+2)=\sin n\cos2+\cos n\sin2$. Alternatively, consider (almost) any solution of the recurrence $$2a_n=3a_{n-1}-2a_{n-2}$$ The solutions are of the form $a_n=b_+(r_+)^n+b_-(r_-)^n$ where $$r_{\pm}={3\pm\sqrt{-7}\over4}$$ and $b_{\pm}$ are constants depending on the initial conditions $a_0$ and $a_1$. Since $r_{\pm}$ have modulus $1$ but are not roots of unity, the sequence satisfies all the criteria (so long as you stay away from $a_0=a_1=0$). It can be expressed in terms of sines and cosines, so it's really a second cousin to the first example.
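A quick numerical sanity check of both claims (a Python sketch; the ranges, tolerance, and initial values are arbitrary choices). Eliminating $\cos n$ as described yields $\sin(n+2)=2\cos(1)\sin(n+1)-\sin n$:

```python
import math

# The recurrence obtained by eliminating cos(n): sin(n+2) = 2*cos(1)*sin(n+1) - sin(n).
a = [math.sin(n) for n in range(1000)]
assert all(abs(a[n + 2] - (2 * math.cos(1) * a[n + 1] - a[n])) < 1e-9
           for n in range(998))

# 2*a_n = 3*a_{n-1} - 2*a_{n-2} stays bounded: the roots r± have modulus exactly 1.
b = [1.0, 0.5]  # arbitrary initial conditions, not both zero
for _ in range(10_000):
    b.append((3 * b[-1] - 2 * b[-2]) / 2)
print(max(abs(x) for x in b))  # remains of order 1
```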
Difference between a linearly independent set and l.i. vectors
The important distinction is between the linear independence of a set of vectors and the linear independence of a list of vectors. For an easy example let's say we're working in $3$-dimensional space with basis vectors $\mathbf i$, $\mathbf j$ and $\mathbf k$. Then the list of vectors $(\mathbf j,\mathbf k,\mathbf i, \mathbf k)$ isn't linearly independent, simply because it contains $\mathbf k$ twice*. But by definition a set can't contain duplicate elements. So the set $\{\mathbf j,\mathbf k,\mathbf i, \mathbf k\}$ is linearly independent because it is equal to the set $\{\mathbf i,\mathbf j,\mathbf k\}$. *The definition of linear dependence of a list $(\mathbf v_i)$ is that there is a list of coefficients $(a_i)$ of the same length such that $\sum_ia_i\mathbf v_i=0$ and such that not all of the $a_i$ are $0$. In this case we are taking our coefficients to be $(0,1,0,-1)$, so $(\mathbf j,\mathbf k,\mathbf i, \mathbf k)$ isn't linearly independent because $0\mathbf j+1\mathbf k+0\mathbf i+(-1) \mathbf k=0$.
Understanding the Glivenko-Cantelli lemma in relation to strong law of large numbers
The law of large numbers says that for each $x$ you have a null set $N_x$ such that $\hat F_n(x)(\omega) \to F(x)$ for all $\omega \in N_x^c$. Now for the convergence of the supremum you'd have to look at the union of all $N_x$ for $x\in\mathbb R$. This could be a non-null set (as the union is over an uncountable index set), thus the Glivenko-Cantelli theorem is a strict improvement over the law of large numbers!
Area between 2 curves
If you wish to take the integral with respect to $y$, the area you seek is between the curves $x=-1$ and $x=y^2-2y$. Your integral should be $$\int\limits_0^1(y^2-2y+1)dy$$ Integrating with respect to $x$ is slightly more complicated. $$x=y^2-2y=(y-1)^2-1$$ $$(y-1)^2=x+1$$ $$y=-\sqrt{x+1}+1$$ The negative square root gives us the lower half of the parabola. So to integrate with respect to $x$, it's $$\int\limits_{-1}^0(-\sqrt{x+1}+1)dx$$
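As a numerical cross-check (a scipy sketch), both integrals come out to $1/3$:

```python
from scipy.integrate import quad

area_dy, _ = quad(lambda y: y**2 - 2*y + 1, 0, 1)
area_dx, _ = quad(lambda x: 1 - (x + 1)**0.5, -1, 0)
print(area_dy, area_dx)  # both 1/3
```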
Area using definite integrals with a straight line
The first step is finding the points of intersection between the two graphs. We solve $3x-x^2 = -3x \implies x = 0$ or $x = 6$. Now, in order to be precise, see that $3x-x^2 \geq -3x$ on $[0,6]$. The question is a little bit tricky since the quadratic is both positive and negative on this interval, but subtracting the integral of the linear function still gives the full area between the curves. Therefore the answer is $\displaystyle \int_{0}^6 (3x-x^2)\,dx-\int_{0}^6(-3x)\,dx = 36$.
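A quick numerical cross-check (scipy sketch):

```python
from scipy.integrate import quad

upper, _ = quad(lambda x: 3*x - x**2, 0, 6)
lower, _ = quad(lambda x: -3*x, 0, 6)
print(upper - lower)  # 36.0
```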
What is the derivative of a matrix w.r.t itself?
You want to differentiate a scalar quantity $x^TVx$ with respect to matrix $V$, so that the derivative will be a matrix with the same dimension as $V$. Now, $x^TVx$ is equal to $Trace(Vxx^T)$, so using standard results of the derivative of the trace of a matrix product, see page $3$ here, the result is $$\frac{\partial x^TVx}{\partial V}=\frac{\partial Trace(Vxx^T)}{\partial V}=xx^T$$
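A hedged finite-difference check of this result (numpy sketch; the dimension and step size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
x = rng.normal(size=(n, 1))
V = rng.normal(size=(n, n))
f = lambda M: float(x.T @ M @ x)

# Perturb one entry of V at a time; the gradient should match x x^T.
h = 1e-6
grad = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        E = np.zeros((n, n)); E[i, j] = h
        grad[i, j] = (f(V + E) - f(V - E)) / (2 * h)
print(np.allclose(grad, x @ x.T, atol=1e-5))  # True
```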
Show that $M \times \left\{0,1 \right\}$ contains two connected components
$M\times\{0,1\}$ is diffeomorphic to $M\sqcup M$, so the number of path-connected components of $M\times\{0,1\}$ is exactly twice that of $M$.
Is there a closed form expression for integral $\int_{0}^a \frac{1}{x} \ln(x+1) dx$?
I don't know if this counts as a "closed form", but if $a\in(0,1)$ we have: $$ \int_{0}^{a}\frac{\log(1+x)}{x}\,dx = \int_{0}^{a}\sum_{n\geq 1}\frac{(-1)^{n-1} x^{n-1}}{n}\,dx = \sum_{n\geq 1}\frac{(-1)^{n-1} a^n}{n^2}. \tag{1}$$ This can be written as $-\text{Li}_2(-a)=\text{Li}_2(a)-\frac{1}{2}\text{Li}_2(a^2).$ By taking the limit as $a\to 1^-$ we also have $\int_{0}^{1}\frac{\log(1+x)}{x}\,dx=\frac{\pi^2}{12}=\eta(2)=\frac{\zeta(2)}{2}.$ If $a>1$, we may use the dilogarithm reflection formulas to get: $$ \int_{0}^{a}\frac{\log(1+x)}{x}\,dx = \frac{\pi^2}{6}+\frac{\log^2(a)}{2}+\sum_{n\geq 1}\frac{(-1)^{n}}{n^2 a^n}.\tag{2} $$ Equivalently, $$\begin{eqnarray*} \int_{0}^{a}\frac{\log(1+x)}{x}\,dx &=& \frac{\pi^2}{12}+\int_{1}^{a}\frac{\log(1+x)}{x}\,dx\\&=&\frac{\pi^2}{12}+\int_{\frac{1}{a}}^{1}\frac{\log(x+1)-\log(x)}{x}\,dx\\&=&\frac{\pi^2}{6}+\frac{\log^2(a)}{2}-\int_{0}^{\frac{1}{a}}\frac{\log(1+x)}{x}\,dx \end{eqnarray*}$$ proves the above reflection formula $(2)$ through $(1)$.
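For a numerical double-check, note that in scipy's convention $\operatorname{spence}(z)=\operatorname{Li}_2(1-z)$, so the integral equals $-\operatorname{spence}(1+a)$; a sketch:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import spence

for a in [0.3, 0.9, 2.5, 10.0]:
    lhs, _ = quad(lambda x: np.log1p(x) / x, 0, a)
    rhs = -spence(1.0 + a)   # -Li_2(-a), since scipy's spence(z) = Li_2(1-z)
    print(a, lhs, rhs)       # the two columns agree
```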
What is the value of $i^i$?
$$i^i = e^{i\log i} = e^{i(\log |i|+i\arg i)} = e^{i(i\arg i)} = e^{-\arg i} = e^{-\frac{\pi}{2}+2 \pi k} \qquad k \in \mathbb{Z}$$
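The $k=0$ (principal) value is the real number $e^{-\pi/2}\approx 0.20788$, which is easy to sanity-check in any language with complex arithmetic, e.g. Python:

```python
import math

print((1j) ** (1j))            # (0.20787957635076193+0j)
print(math.exp(-math.pi / 2))  # 0.20787957635076193
```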
First-Order Languages and Circular Reasoning
The "construction" is circular, the reasoning ... not necessarily. When you write a book about the syntax of (e.g.) english language, you use the language itself. This "procedure" works because you have already learnt how to speak and read. In mathematics you use the language of set (but also arithmetic : is very difficult to speak of "objects" without being able to count them ...) to set up your theory. The same in mathematical logic that is a branch of mathematics: you need set language for describing basic objects like symbols (we need a set of primitive ones), formula (a string, i.e.a set of symbols), derivation (a sequence, i.e.a set of formulas), etc. The "trick" is the interplay between the (mathematical) language you are "speaking of" (the english language subject to the study in your syntax book) and the (mathematical) language you are "speaking with" (the english language with which your syntax book is written). The first we call it : object language. The second we call it : metalanguage.
Identity of inverse matrix
Think about what happens if they were just numbers (but don't forget that it's not commutative): Let's call $A:=C_N$ and $B:=W_N$ for simplicity. If these were numbers, you are looking for $\displaystyle\frac1{1/A\,+B}$. Now we want to pull out $\displaystyle\frac1{1/A}$ (alias $A$) from the left, so that, because of the inverse, we have to pull out $1/A$ from the right: $$1/A+B=(1+BA)\cdot 1/A\\ \frac1{1/A\,+B}=\frac1{1/A}\cdot\frac1{1+BA}\,.$$ Writing it with inverses, we get the same line: As $A^{-1}+B=(I+BA)A^{-1}$, we have $$(A^{-1}+B)^{-1}= A(I+BA)^{-1}\,.$$
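A quick numerical check of the identity (numpy sketch; the random matrices are assumed to make both inverses exist, which holds almost surely):

```python
import numpy as np

rng = np.random.default_rng(1)
A, B = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
lhs = np.linalg.inv(np.linalg.inv(A) + B)
rhs = A @ np.linalg.inv(np.eye(4) + B @ A)
print(np.allclose(lhs, rhs))  # True
```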
Proving $\sum_{n=1}^\infty \frac{x^n}{n}$ equals $-\log(1-x)$
This is indeed incorrect due to neglect of the constant of integration. From $\frac{d}{dx}f(x)=\frac{1}{1-x}$ you can conclude that $f(x)$ differs from $-\log(1-x)$ by a constant, not that they are equal. However, it is easy to determine the constant by plugging in $x=0$, since $f(0)=0=-\log(1-0)$. So the constant is actually $0$, and $f(x)=-\log(1-x)$. (Another step that requires justification is that you can differentiate $f(x)$ term-by-term. This is a standard theorem, though--any power series can be differentiated term-by-term inside its radius of convergence.)
Linear homogeneous DE of minimal order that has the solution $y_1(x) = x^2e^x$
Taking logarithms, $\log{y_1} = x+2\log{x}$. Differentiating, $$ \frac{y'_1}{y_1} = 1+\frac{2}{x}. $$ Hence a minimal linear homogeneous DE is $$ y' -\left(1 + \frac{2}{x} \right) y = 0. $$
weak convergence implies boundedness.
Banach-Steinhaus tells us that a family of continuous linear functionals on a Banach space is either unbounded on a dense $G_\delta$ set or is uniformly bounded in norm. Since this sequence converges weakly, we have for any $g \in L^2$ that $\sup_n |\langle f_n , g \rangle| < \infty$, and therefore the family of functionals is uniformly bounded in norm.
State each of the following English sentences in symbols
Both of your solutions are correct, though the second would be better stated without the unnecessary restriction on $M$, as indicated in my comment. Concerning the "graphical intuition", the definition of continuity can be rephrased as "for every neighborhood of $f(a)$, there is a neighborhood of $a$ which is carried by $f$ into the neighborhood of $f(a)$". In other words: if you draw a circle about $f(a)$, then you can also draw a circle about $a$ such that everything inside it is carried into the circle about $f(a)$. Your negation says there is some neighborhood of $f(a)$ such that any neighborhood of $a$ will contain at least one point that is carried outside the neighborhood of $f(a)$. This is what we mean by discontinuity: no matter how close you get to $a$, something is carried away from $f(a)$.
Correspondence between the Algebraic $K_1$ and the topological $K_1$
As someone already pointed out, it would be better to write $K^1(X)$ for topological K-theory of the space $X$. Note that this is the same as operator K-theory $K_1(C(X))$ of the $C^\ast$-algebra $C(X)$. Now, let us denote algebraic K-theory by $K_1^{\text{alg}}$. You want to compare $K_1(C(X))$ and $K_1^{\text{alg}}(C(X))$. As you say, $K_1(C(X))$ can be defined via $K_0$ and suspension, but for our purposes, the following equivalent formulation is more useful: $$ (\ast)\qquad\qquad K_1(A)= \frac{\mathrm{GL(A)}}{\mathrm{GL(A)^0}},$$ where the $0$-superscript stands for "connected component of the identity". Note that for this to make sense $A$ must be, say, a Banach algebra. This already gives a feeling for comparison since, as you wrote, $$ K_1^{\text{alg}}(A)= \frac{\mathrm{GL(A)}}{[\mathrm{GL(A)},\mathrm{GL(A)}]}.$$ One can also define $S\!K_1$ by replacing $\mathrm{GL}$ with $\mathrm{SL}$ in $(\ast)$. Then if $A$ is commutative, and therefore $A\cong C(X)$, the following result holds: $$K_1^{\text{alg}}(A)\cong A^\ast\oplus S\!K_1(A),$$ where the first summand stands for invertibles. You can find a bit more on this in Karoubi's book "K-theory - An introduction", Chapter II, exercise 6.13.
Conditional expectation of two Poisson variables
I finally solved the problem by myself; the solution is shown below. First, we should know that $P[X+Y=k]=e^{-\lambda_x-\lambda_y}\frac{(\lambda_x+\lambda_y)^k}{k!}$ (verification: Poisson distribution of the sum of two independent random variables $X$, $Y$) and $P[X=k|X+Y=i]= \frac{P[X=k,Y=i-k]}{P[X+Y=i]}$. Now, let us start from $$ E[X|X+Y=i]= \sum_{k=0}^{i}k\frac{P[X=k,Y=i-k]}{P[X+Y=i]} = \sum_{k=0}^{i}k\frac{P[X=k]P[Y=i-k]}{P[X+Y=i]}=\sum_{k=0}^{i}k\frac{\frac{e^{-\lambda_x}\lambda_x^k}{k!}\frac{e^{-\lambda_y}\lambda_y^{(i-k)}}{(i-k)!}}{\frac{e^{-\lambda_x-\lambda_y}(\lambda_x+\lambda_y)^i}{i!}}=\sum_{k=1}^{i}\frac{i!}{(k-1)!(i-k)!}\frac{\lambda_x^k\lambda_y^{(i-k)}}{(\lambda_x+\lambda_y)^i}$$ (the $k=0$ term vanishes) $$ = \frac{i\lambda_x}{(\lambda_x+\lambda_y)^i}\sum_{k=1}^{i}\frac{(i-1)!}{(k-1)!(i-k)!}\lambda_x^{k-1}\lambda_y^{(i-k)}=\frac{i\,\lambda_x}{\lambda_x+\lambda_y}$$ $\because \sum_{k=1}^{i}\frac{(i-1)!}{(k-1)!(i-k)!}\lambda_x^{k-1}\lambda_y^{(i-k)}=(\lambda_x+\lambda_y)^{i-1}$ by the binomial theorem. Since $i=X+Y$, $\therefore E[X|X+Y]=\frac{(X+Y)\lambda_x}{\lambda_x+\lambda_y}$
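A Monte Carlo sanity check of the final identity (a sketch; the rates, the value of $i$, and the sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
lam_x, lam_y, i = 2.0, 3.0, 7          # arbitrary rates and conditioning value
X = rng.poisson(lam_x, 1_000_000)
Y = rng.poisson(lam_y, 1_000_000)
mask = (X + Y) == i
print(X[mask].mean())                   # empirical E[X | X+Y = i]
print(i * lam_x / (lam_x + lam_y))      # formula: 7 * 2/5 = 2.8
```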
Software package for plotting 3-d splines
Your points $P$ do not appear to lie on a rectangular grid. So, they would be referred to as "scattered", and what you're doing is "scattered data interpolation". If you search for this term, you'll get lots of hits. One example is this Wikipedia page. If you want an interpolating polynomial, you simply write down a polynomial in which the number of coefficients is the same as the number of your data points. Then, for each point of $P$, you write down an equation that expresses the interpolation. You get a system of linear equations, which you can (usually) solve to get the coefficients of the polynomial. I say "usually" because sometimes the linear systems arising in 2D interpolation don't have unique solutions. If the number of points is large, you might run into problems -- the so-called Runge phenomenon will bite you. If you want a software package that does the interpolation for you, then there are various recommendations in the answers listed below. Matlab seems to be people's favorite solution: Link1 Link2 Link3 Link4 I have never used SAGE, but I can guess what the error message is telling you. A tensor product spline (or polynomial) surface of degree $kx \times ky$ will have at least $(kx+1) \times (ky+1)$ coefficients. In order for some internally constructed system of equations to be solvable (as I outlined above), the number of input data points $m$ must be at least this large. The "spline" option probably tells SAGE to use cubic splines, which means you will need at least 16 data points.
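If Python is an option, scipy handles scattered-data interpolation directly; a minimal sketch (the sample points and values here are made up for illustration):

```python
import numpy as np
from scipy.interpolate import griddata

# Hypothetical scattered data: points (x_i, y_i) with values z_i.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.3]])
vals = np.array([1.0, 2.0, 0.5, 3.0, 1.7])

# Evaluate the interpolant on a grid; 'cubic' needs enough data points,
# echoing the degree-versus-data-count constraint described above.
gx, gy = np.mgrid[0:1:50j, 0:1:50j]
gz = griddata(pts, vals, (gx, gy), method='cubic')
```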
Derivation of the Dirac-Delta function property: $\delta(bt)=\frac{\delta(t)}{\mid b \mid}$
I suggest that you should think of $\delta$ a bit differently than you do, namely either as a distribution or as a measure. I will discuss the first point of view. The notation with integrals is handy but often leads to confusion when the "integrand" is not a real function. So, what is a distribution? Well, you could say it is something that eats nice functions (typically belonging to some nice space, like infinitely differentiable functions with compact support, $C_0^{+\infty}(\mathbf R)$) and spits out numbers. If $\phi$ is such a function and $T$ is a distribution, then this is usually denoted by $T(\phi)$ or $\langle T,\phi\rangle$. We say that $T$ acts on the test function $\phi$. Distributions should also be linear, so $T(\phi+\psi)=T(\psi)+T(\phi)$ and $T(a\phi)=aT(\phi)$. If we want to use fancy names, we could say that a distribution $T$ is a linear functional on $C_0^{+\infty}(\mathbf R)$. Main example of a distribution Let $f$ be a function such that the integral $$ \int_{-\infty}^{+\infty} f(x)\phi(x)\,dx $$ makes sense for all $\phi\in C_0^{+\infty}(\mathbf R)$ (such a function is usually said to be locally integrable); then there is a natural distribution $T$ associated with $f$, namely $$ T(\phi)=\int_{-\infty}^{+\infty}f(x)\phi(x)\,dx. $$ Without going into details, more or less all the functions you usually meet (with not too strong singularities) can be considered as distributions in this way, and sometimes one does not even distinguish between the function $f$ and the distribution $T$. However, it is possible to define distributions that do not correspond to any function as above, so distributions are in some sense more general than functions. Important point A distribution $T$ is not (unlike functions) defined at specific points $x\in\mathbf R$, so it does not make sense to talk about $T(x)$. So, how could one define something that should correspond to typically "pointwise" operations such as differentiation $f'$, a dilation $f(bx)$ ($b\neq 0$), or a shift $f(x-b)$? Getting closer to your question Let us concentrate on how $T(bx)$ is defined for distributions, since this is what the question is about. Before doing so, let us see what happens in the case when $T$ is given as in the Main example above. Thus, we want to somehow relate $T$ above with $$ \int_{-\infty}^{+\infty}f(bx)\phi(x)\,dx. $$ The change of variables $x\mapsto u=bx$ then transforms this integral into $$ \frac{1}{|b|}\int_{-\infty}^{+\infty}f(u)\phi(u/b)\,du. $$ But this integral can be written as $$ \frac{1}{|b|}T(\phi(\cdot\,/b)), $$ i.e. the factor $1/|b|$ times the action of our original distribution $T$ on the dilated test function. As is common in distribution theory (this is done for derivatives of distributions, shifts, Fourier transforms, ...), we want distributions to work like functions, and make a definition: Definition of dilation of distributions Given a distribution $T$, we define its dilation $T_b$ as the distribution $$ T_b(\phi)=\frac{1}{|b|}T(\phi(\cdot\,/b)). $$ (Just a reminder: To define a distribution, we need to say how it acts on test functions.) Over to $\delta$ and the answer to your question The delta distribution $\delta$ is the distribution defined by $$ \delta(\phi)=\phi(0) $$ i.e. it acts on a test function by returning the value of the test function at zero.
By definition, the dilation of the delta function (which you wrote $\delta(x/b)$, but which I will write $\delta_b$ according to the definition above) is given by $$ \delta_b(\phi)=\frac{1}{|b|}\delta(\phi(\cdot\,/b))=\frac{1}{|b|}\phi(0). $$ The second equality is just the definition of $\delta$. I hope this answers your question. Final comments 1) I hope it is clear to you that there does not exist a function $f$ such that $$ \int_{-\infty}^{+\infty}f(x)\phi(x)\,dx=\phi(0) $$ for all $\phi\in C_0^{+\infty}(\mathbf R)$. 2) The choice of the space $C_0^{+\infty}(\mathbf R)$ above could have been some other space of test functions.
Find the general value of $x$ satisfying $\sin x=\frac{1}{2}$ and $\cos x=-\frac{\sqrt {3}}{2}$
Hint: You can also solve $$\tan(x)=-\frac{\sqrt{3}}{3}$$
Cardinality of the real numbers
Use the Cantor–Bernstein–Schroeder theorem: Choose $x\in \mathbb{R}$, and let $\{b_i\}_{i=1}^{\infty}$ be the (shortest) binary expansion of $\frac{1}{2}+\frac{\arctan x}{\pi}$ (which is in $(0,1)$). Then define $\phi(x) = \{ i | b_i = 1 \}$. This establishes an injective map $\phi:\mathbb{R}\to 2^{\mathbb{N}}$. To go the other way, suppose $A \subset \mathbb{N}$. Then let $t_i = 1_A(i)$ (indicator function), and define $\eta(A) = \sum_{i=1}^{\infty}\frac{t_i}{3^i}$. This establishes an injective map $\eta : 2^{\mathbb{N}} \to \mathbb{R}$. The desired result follows from the Cantor–Bernstein–Schroeder theorem.
How is the bisector of one side of a right-angled triangle, drawn from the right-angled corner, equal to half of the bisected side?
A start: Complete the triangle to a rectangle in the obvious way. (Make a copy of the triangle, and rotate it into place.)
If $t'=t$, why does $\partial /\partial t \neq \partial /\partial t'$?
When you compute $\partial/\partial t$, it is understood that $x$ should be held constant as $t$ varies, but when you compute $\partial/\partial t'$, it is $y$ that should be held constant as $t'$ varies. And this is not the same thing, since $y \neq x$.
Prove that $\int_a^b x^2 dx = \frac{b^3-a^3}{3}$
Let us write a Riemann sum for a partition of $[a,b]$ (assume $0\leq a<b$; the general case follows by splitting the interval at $0$), $a=x_0<x_1<\cdots<x_n=b$ with $x_j-x_{j-1}<\delta$. Note that we can write, using the Mean Value Theorem, $$ x_j^3-x_{j-1}^3=3d_j^2\,(x_j-x_{j-1}). $$ for some $d_j$ with $x_{j-1}\leq d_j\leq x_j$. So $$ \frac{b^3-a^3}3=\sum_{j=1}^n\frac{x_j^3-x_{j-1}^3}3=\sum_{j=1}^nd_j^2\,(x_j-x_{j-1}) $$ Then, for points $c_1,\ldots,c_n$ with $x_{j-1}<c_j<x_j$, consider the Riemann sum $$ \sum_{j=1}^n c_j^2\,(x_j-x_{j-1}). $$ Note that $c_j,d_j\in[x_{j-1},x_j]$, so $|d_j^2-c_j^2|\leq x_j^2-x_{j-1}^2$. We have \begin{align} \left|\frac{b^3-a^3}3-\sum_{j=1}^n c_j^2\,(x_j-x_{j-1})\right| &=\left|\sum_{j=1}^n (d_j^2-c_j^2)\,(x_j-x_{j-1})\right| \leq\sum_{j=1}^n |d_j^2-c_j^2|\,(x_j-x_{j-1})\\ &\leq\,\delta\,\sum_{j=1}^n|d_j^2-c_j^2|\leq\delta\,\sum_{j=1}^n(x_j^2-x_{j-1}^2)\\ &=\delta\,(x_n^2-x_0^2)=\delta\,(b^2-a^2). \end{align} That is, given $\varepsilon>0$, a choice of $\delta=\varepsilon/(b^2-a^2)$ will make $(b^3-a^3)/3$ satisfy the definition.
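A numerical illustration of the bound from the proof (Python sketch; the interval, partition, and random tags are arbitrary choices):

```python
import random

a, b, n = 0.0, 2.0, 10_000
xs = [a + (b - a) * j / n for j in range(n + 1)]
# Riemann sum with randomly chosen tags c_j in [x_{j-1}, x_j]
s = sum(random.uniform(xs[j - 1], xs[j]) ** 2 * (xs[j] - xs[j - 1])
        for j in range(1, n + 1))
exact = (b**3 - a**3) / 3
delta = (b - a) / n
print(abs(s - exact) <= delta * (b**2 - a**2))  # True, as the proof guarantees
```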
Wedge Products with the Symplectic Form
You are right, this map is an isomorphism. In fact, much more is true. $\mathbb{R}^{2n}$ (with its standard symplectic form, complex structure and scalar product) is a Kähler manifold. On any Kähler manifold $M^{2n}$ (with Kähler form $\omega$), each Lefschetz operator $$L^k : \Omega^l_{dR}(M) \to \Omega^{l+2k}_{dR}(M) : \tau \mapsto \tau \wedge \omega^k$$ is an isomorphism when $l = n-k$. The proof I saw most often of this fact uses the 'Hodge adjoint' $\Lambda$ of $L$, the fact that the commutator $[L, \Lambda] = (k-n)Id$ and some induction. This is merely linear algebra, albeit not the most straightforward one. There is certainly a more 'elementary' proof of this result in the case you are interested in, but it would probably be a reformulation and simplification of the above proof. After all, one only needs to prove the injectivity of the map $T = L^1$, which would follow from the existence (for any $(n-1)$-form $\tau \neq 0$) of a $(n-1)$-form $\tau'$ such that $\tau \wedge \omega \wedge \tau' \neq 0$, and the above proof implies that such a $\tau'$ may be constructed by applying $L$ and $\Lambda$ on $\tau$. For more information about this line of thinking, you might take a look at Huybrechts's book, or Voisin's book starting on p.139.
Probability and Interest Rates of Markets
A table could be useful: $\begin{array}{|c|c|c|c|} \hline & 0.7&0.1&0.2 \\ \hline \$ & 6 &7&5 \\ \hline ¥&5 &6&4 \\ \hline £ &5 &6&4 \\ \hline \end{array}$ The probability that the Dollar interest rate is less than or equal to both the Pound interest rate and the Yen interest rate, given that the Dollar interest rate is 6%, is $0.1\cdot 0.1=0.1^2$. The probability that the Dollar interest rate is less than or equal to both the Pound interest rate and the Yen interest rate, given that the Dollar interest rate is 7%, is $0$. The probability that the Dollar interest rate is less than or equal to both the Pound interest rate and the Yen interest rate, given that the Dollar interest rate is 5%, is $(0.7+0.1)\cdot (0.7+0.1)=0.8^2$. Now use the Law of total probability to calculate the probability that the Dollar interest rate is less than or equal to both the Pound interest rate and the Yen interest rate.
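Carrying out the law of total probability numerically (a short sketch using the distributions from the table):

```python
# P($=6)=0.7, P($=7)=0.1, P($=5)=0.2; ¥ and £ each take the values 5, 6, 4
# with probabilities 0.7, 0.1, 0.2, independently.
p_dollar = {6: 0.7, 7: 0.1, 5: 0.2}
p_cond = {6: 0.1**2, 7: 0.0, 5: 0.8**2}   # P(¥ >= $ and £ >= $ | $ = rate)
print(sum(p_dollar[r] * p_cond[r] for r in p_dollar))  # 0.135
```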
How to show that: if $n\ln\left(1+a/n\right)\geqslant k\ln\left(1+a/k\right)$ then $n\geqslant k$?
If $n\gt m\gt0$ and $a\gt0$, then Bernoulli's Inequality says $$ \left(1+\frac an\right)^{n/m}\gt\left(1+\frac an\frac nm\right)=\left(1+\frac am\right) $$ Therefore, $$ \left(1+\frac an\right)^n\gt\left(1+\frac am\right)^m $$ Reversing roles we have that for $m\gt n\gt0$, $$ \left(1+\frac an\right)^n\lt\left(1+\frac am\right)^m $$ Taking logarithms proves the required implication.
Two non-isomorphic groups which are epimorphic images of each other
Take $G=\prod_{i\in \mathbb{N}} \mathbb{Z}$ and $H=\mathbb{Z}/2 \times \prod_{i\in \mathbb{N}} \mathbb{Z}$. Then $G$ is an epimorphic image of $H$ (by projecting the $i$'th coordinate of $H$ to the $(i-1)$'th coordinate of $G$), and $H$ is an epimorphic image of $G$ (by sending the first coordinate to its projection in $\mathbb{Z}/2$ and the other coordinates to themselves). However, $G$ and $H$ are not isomorphic, since $G$ does not contain nontrivial elements of finite order, while $H$ does.
Expected Value and Variance of Moving a Token on a Checkerboard Based on a Die Roll.
The expected value is $0$; if it were not, the die would have to be biased toward a particular direction(s). If each roll is treated as a new instance of a random variable, the variance of each roll will be exactly $1$. For example, with just one roll, you are guaranteed to be one square from the origin. If the new location of the token is set as a new origin, the variance of the next roll must also be $1$, and so on... One of the most important rules of statistics says that variances of independent random variables are additive, regardless of whether you are adding or subtracting the output values of the random variables. Thus, the variance in the question you are asking will simply be equal to the number of rolls performed.
The bijective property on relations vs. on functions
Actually "bijective", like "injective" and "surjective", is a perfectly well defined notion, and any object that claims this badge has to be a function. Most people would not consider applying the notion at all in a context where the object is not assumed to be a function to begin with, but it is acceptable to define a bijective relation to be a one-to-one correspondence, in other words a relation that is actually a bijective function (this seems to be the case for this definition, which does not correspond to (2) of the question). Other notional may be less clearly defined; notably "one-to-one" has always been a mystery to me, because it means "bijective" when used as in my previous sentence, but a "one-to-one map" is actually only an injective function. The notion descibed in (2) might be called an "injective partial function" as you do without causing much confusion. However it does create the precedent of applying the adjective "injective" to something that is not a function (or a module, a resolution, or a metric space). Alternative terms one could think of proposing are "invertible partial function" (but it is not really invertible, as composition with its "inverse" partial function only gives a partial identity), or "partial bijection" (but one has to understand that "partial" applies to both sides, so nothing of surjectivity is left) or "partial injection" (no confusion, but like "injective partial function" the terminology appears to be asymmetric, while the notion itself is not). Finally "zero-or-one-to-zero-or-one correspondence" does seem to suggest the proper definition, but is frankly quite awkward.
Prove that a positive odd integer $N>1$ has a unique representation in the form $N=x^2-y^2$ iff N is prime
HINT $\ $ Nonuniqueness is an immediate consequence of the following composition law $\rm\qquad\qquad\ (a^2-b^2)\ (A^2-B^2)\ =\ (a\:A+b\:B)^2-(a\:B+A\:b)^2$ $\rm\qquad\qquad\ \phantom{(a^2-b^2)\ (A^2-B^2)}\ =\ (a\:A-b\:B)^2-(a\:B-A\:b)^2$ E.g. composing $\rm\ 7 = 4^2 - 3^2\ $ with $\ 11 = 6^2 - 5^2\ $ yields for $\rm\: 7\cdot 11\:$ the following $2$ rep's $\rm\qquad\qquad\ (4^2-3^2)\ (6^2-5^2)\ =\ (4\cdot 6+3\cdot 5)^2-(4\cdot 5+6\cdot 3)^2\ =\ 39^2 - 38^2$ $\rm\qquad\qquad\ \phantom{(4^2-3^2)\ (6^2-5^2)}\ =\ (4\cdot 6-3\cdot 5)^2-(4\cdot 5-6\cdot 3)^2\ =\ 9^2 - 2^2$
Help with solving a system of differential equations
$$\dot{X}= \begin{pmatrix} -1 & -1 \\ -1 & -1\end{pmatrix} X+ e^{-2t} \begin{pmatrix} 1 \\ 0 \end{pmatrix}$$ Rewrite both equations with $x(t),y(t)$, where $X=\pmatrix {x(t) \\ y(t)}$: $$ \begin{cases} x'=-x-y+e^{-2t} \\ y'=-x-y \end{cases} $$ Then you can deduce that $$y'-x'=-e^{-2t}$$ Integrate: $$\implies y(t)=x(t)+\frac 12e^{-2t}+K$$ Plug that into the first equation and solve for $x(t)$: $$x'=-x-y+e^{-2t}$$ $$\implies x'=-2x+\frac 12e^{-2t}-K$$ $$......$$ Can you take it from there ?
Find $f(\Bbb R^2)$ where $f(x,y) = (e^x \cos y, e^x \sin y)$
Either you note that $f(z)=e^z$ and know (or show) that $e^z$ takes all complex values except zero, or you solve explicitly the equations $e^x\cos y=a$ and $e^x\sin y=b$: These equations give $e^x=\sqrt{a^2+b^2}$, and since the real exponential $e^x$ never vanishes you get $a\ne0$ or $b\ne0$. In particular, $x=\log\sqrt{a^2+b^2}$. If $a=0$, then you can take for example $y=\pm\pi/2$ depending on the sign of $b$. Otherwise, if $a\ne0$, you have $\tan y=b/a$ and so you can take $y=\arctan (b/a)$, adding $\pi$ when $a<0$ (so that $\cos y$ has the sign of $a$). In other words, $e^z=a+ib$ has a solution $z$ if and only if $a\ne0$ or $b\ne0$. Hence, identifying $\mathbb R^2$ with $\mathbb C$, $f(\mathbb R^2)=\mathbb C\setminus\{0\}$.
Normal Curvature Along a non arclength-parameterized curve
You can still compute curvature of a non-arclength-parametrized curve; there are various ways to do it. My favorite is just to adjust the usual calculations with the chain rule. See, for example, p. 13 of my differential geometry text. So compute $\kappa\vec N$ at the point and then dot with the unit surface normal. (This is the application of Meusnier's formula that the comments referred to.) Alternatively, yes, evaluate the second fundamental form of the surface at $\phi(t^2,t)$ on the unit tangent vector of the curve. At $t=0$ this will be easy enough; for other values of $t$ it will be yucky algebra either way, I guess. I can answer further questions if you ask.
Reference book for Calculus Conceptual problems
This book is a very accessible introduction to analysis: 'How to think about analysis', Lara Alcock, Oxford University Press.
Distance of rays from the center of a chord to the arc
The point $C$ has coordinates $(0,452)$. In order to find the length of segment $CG$ as a function of the angle $\alpha$ one can first find the coordinates of the point $G$ as functions of $\alpha$. The point $G$ is one of the two points of intersection (the one with positive $x$-coordinate) of the two graphs: \begin{equation} y=x\tan\alpha+452 \end{equation} \begin{equation} x^2+y^2=500^2 \end{equation} Substituting the first equation into the second gives \begin{equation} x^2+(x\tan\alpha+452)^2=500^2 \end{equation} which can be solved for the non-negative value of $x$ in terms of $\alpha$. Then using the first of the two equations above, one can find $y$ as a function of $\alpha$. Then one can derive the length of $CG$ as a function of $\alpha$ for $0\le\alpha\le\tfrac{\pi}{2}$. For $\tfrac{\pi}{2}<\alpha\le\pi$ the distance is the same as for the acute angle $\pi-\alpha$.
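A sketch of the resulting computation (Python; valid for $0\le\alpha<\tfrac{\pi}{2}$, with the quadratic solved as above and $|CG| = x/\cos\alpha$ since $G$ lies on the ray from $C$ at angle $\alpha$):

```python
import numpy as np

def CG(alpha, r=500.0, d=452.0):
    # Positive root of x^2 + (x*tan(alpha) + d)^2 = r^2, then |CG| = x / cos(alpha).
    t = np.tan(alpha)
    qa, qb, qc = 1 + t**2, 2 * d * t, d**2 - r**2
    x = (-qb + np.sqrt(qb**2 - 4 * qa * qc)) / (2 * qa)
    return x / np.cos(alpha)

print(CG(0.0))  # sqrt(500^2 - 452^2) ≈ 213.8
```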
How to find vector components of velocities of two balls after elastic collision, using angle-free representation
I am an intermediate student and have seen such questions. I believe you will not be getting an "ANGLE FREE x-y component separated" dish on your platter. I believe you should notice that $$\vec {v'_1}=\vec v_1 - \frac{2m_2}{m_1+m_2} \frac{<\vec v_1 - \vec v_2, \vec x_1-\vec x_2>}{||\vec x_1-\vec x_2||^2}(\vec x_1-\vec x_2)$$ and $$\vec {v'_2}=\vec v_2 - \frac{2m_1}{m_1+m_2} \frac{<\vec v_2 - \vec v_1, \vec x_2-\vec x_1>}{||\vec x_2-\vec x_1||^2}(\vec x_2-\vec x_1)$$ are your best bet (note the squared norm in the denominator). You will have the vector components of these quantities ($\vec v_1 ,\vec v_2$ etc.), and using the $\mathbf i,\mathbf j,\mathbf k$ orthogonal components will do the job. Suggestions are always welcome! Cheers
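A minimal sketch of these formulas in code (Python with numpy; the equal-mass head-on test case, where the velocities should simply swap, is an arbitrary choice):

```python
import numpy as np

def collide(v1, v2, x1, x2, m1, m2):
    # Angle-free elastic collision update (note the squared norm).
    d = x1 - x2
    n2 = np.dot(d, d)
    v1p = v1 - (2 * m2 / (m1 + m2)) * np.dot(v1 - v2, d) / n2 * d
    v2p = v2 - (2 * m1 / (m1 + m2)) * np.dot(v2 - v1, -d) / n2 * (-d)
    return v1p, v2p

# Head-on collision of equal masses: velocities should swap.
v1p, v2p = collide(np.array([1.0, 0.0]), np.array([0.0, 0.0]),
                   np.array([0.0, 0.0]), np.array([1.0, 0.0]), 1.0, 1.0)
print(v1p, v2p)  # [0. 0.] [1. 0.]
```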
p factorial minus p primorial
Because of Wilson's theorem the given condition is equivalent to $\ p$# $\equiv -p\mod p^2$. Up to $10^6$, the following primes do the job ($p=2$ and $p=3$ lead to $0$, so might better be ruled out): ? s=1;forprime(p=1,10^6,s=s*p;if(Mod(s,p^2)==-p,print1(p," "))) 2 3 19 1471 3001 ?
Evaluation of the integral of a linear gaussian model?
First you use $$\int_{-\infty}^{\infty} e^{-a x^2+b x+c} dx = \frac{\sqrt{\pi } e^{\frac{b^2}{4 a}+c}}{\sqrt{a}}$$ to reduce it to a single integral; after that, use $$\int_{A}^{\infty} e^{-a x^2+b x+c} dx =\frac{\sqrt{\pi } e^{\frac{b^2}{4 a}+c} \left(\text{erf}\left(\frac{b-2 a A}{2 \sqrt{a}}\right)+1\right)}{2 \sqrt{a}}$$ to compute the remaining integral.
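A numerical check of the second formula (scipy sketch; the parameter values are arbitrary, with $a>0$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

a, b, c, A = 1.3, 0.7, -0.2, 0.5
num, _ = quad(lambda x: np.exp(-a * x**2 + b * x + c), A, np.inf)
closed = (np.sqrt(np.pi) * np.exp(b**2 / (4 * a) + c)
          * (erf((b - 2 * a * A) / (2 * np.sqrt(a))) + 1) / (2 * np.sqrt(a)))
print(num, closed)  # agree
```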
Show that the set $\{x\in[0,1]\mid f(x)=g(x)\}$ is compact when $f,g$ are continuous
Hint: Recall that $f-g$ is a continuous function. For the first show that $S$ is the preimage of a closed set, intersected with a compact set; for the second show that $U$ is the preimage of an open set, intersected with an open interval.
A fly on a triangle?
The easy way is to note that the two non-starting nodes are symmetric, so the expected number of moves to get back to start from $L$ and $R$ are the same. As your move from start is to $L$ or $R$, $E[N]=1+E[R]$ Then $E[R]=\frac 12(1) + \frac 12(1+E[R])$ because the move costs $1$ either way: with probability $\frac12$ you go to start, and with probability $\frac12$ you go to the left node (which has the same expectation as the right). So $E[R]=2, E[N]=3$ For your try, you forgot to add the $1$ move from $N$ to $L$ or $R$. Also writing $2\frac 12$ when you mean the product is confusing. Many (and maybe you later) will read it as $2+\frac 12$
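A simulation confirms $E[N]=3$ (Python sketch; state $0$ is the start, $1$ and $2$ are the other two vertices):

```python
import random

def moves_to_return():
    pos, n = random.choice([1, 2]), 1          # the first move goes to L or R
    while pos != 0:
        pos = random.choice([p for p in (0, 1, 2) if p != pos])
        n += 1
    return n

trials = 200_000
print(sum(moves_to_return() for _ in range(trials)) / trials)  # ~3
```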
Germs and local ring.
You need to assume something about the space $X$ for the claimed statement to be true. A natural assumption to make is that $X$ is completely regular; I will use this assumption in addressing (2) below. (1) Well, $A_{m_x}$ is not (a priori) a set of functions, it's just a ring of formal "fractions" (which may or may not make sense when evaluated as pointwise fractions of functions). You can still think of it perfectly well as a ring without having to think of its elements as functions on $X$ (which, as you observe, you can't exactly do). (2) If $a\in A$ is such that $a=0$ in a neighborhood $U$ of $x$, then in fact the image of $a$ in the localization $A_{m_x}$ vanishes: the canonical "inclusion" $A\to A_{m_x}$ is not injective! To prove this, note that by complete regularity, there is a function $f:X\to[0,1]$ such that $f(x)=1$ and $f(y)=0$ for all $y\not\in U$. We then have $f\not\in m_x$ and $fa=0$, so it follows that $a$ maps to $0$ in the localization $A_{m_x}$. Note that you also need to use complete regularity to show that your map $\phi:A_{m_x}\to A_x$ is surjective: given a germ of a continuous function at $x$, it is not at all obvious a priori that you can write it as a quotient of two continuous functions that are defined on all of $X$. In detail, if you have a function $f:U\to\mathbb{C}$ where $U$ is an open neighborhood of $x$, let $V$ be an open neighborhood of $x$ whose closure is contained in $U$ (by regularity) and let $g:X\to[0,1]$ be a function such that $g(x)=1$ and $g(y)=0$ for all $y\not\in V$ (by complete regularity). Define $h(y)=\min(1,2g(y))$. Then $h=1$ on a neighborhood of $x$ (namely, the set where $g>1/2$), so $hf$ (which is defined on $U$) has the same germ at $x$ as $f$. But $hf$ vanishes on $U\setminus\overline{V}$, so we can continuously extend it to all of $X$ by setting it equal to $0$ outside of $U$. This continuous extension is then an element of $A$ whose germ at $x$ coincides with the germ of $f$. Thus the map $A\to A_x$ is surjective, and hence so is the map $A_{m_x}\to A_x$.
Trig integral simplification
The basic ingredient is$$\sec^2(\theta)-1=\tan^2(\theta).$$The rest follows easily.
Real analysis problems involving proofs with sequences and sets
For your (i), why do you have a $\beta'$ and what is $\alpha$? If $a,b \in S$, there exist $m_a, n_a, m_b, n_b \in \mathbb{Z}$ such that $a = m_a + n_a \beta$ and $b = m_b + n_b \beta$. To show $ra+sb \in S$, you need to find $x,y \in \mathbb{Z}$ such that $ra+sb = x + y \beta$. (Edited to follow edits to the Question) For your (ii): Generally, you need to say what $n_k$ is. With your edit, $\left( n \sqrt{2} - \lfloor n \sqrt{2}\rfloor\right)_{n \in \mathbb{N}}$ is bounded (by $0$ and $1$), but is not monotonic. However, think about $\frac{m}{n}$ being close to $\beta$. Then $m$ is close to $n\beta$. For each $n$, it is automatic that you can pick $m$ such that $\left|\frac{m}{n} - \beta \right| \leq \frac{1}{2n}$. (The fractions with denominator $n$ partition the line into segments of width $\frac{1}{n}$ and $\beta$ is in one of those segments. It can't be more than halfway away from the closest endpoint.) Rather than think of incrementing $n$, perhaps think of choosing an $n$ that exactly subdivides the interval containing $\beta$ more finely. For (iii): Let $\varepsilon > 0$ and $r \in \mathbb{R}$. Now pretend to solve for $\beta$: $m + n\beta = r$, so $\beta = \frac{r-m}{n}$. Can you find an $n$ large enough (and a matching $m$) so that the gap between $\beta$ and $\frac{r-m}{n}$ is less than $\varepsilon$?
Solving a linear system of equations with multiple solutions by minimizing the total error
I would add slack variables $s_i$ to each of your 3 equations in the system; these variables represent the error in each equation. However, we have to be careful about the sign of each slack variable, because we want the sum of their magnitudes to be minimized in the objective function: we want to minimize $\sum_i |s_i|$. Can you take it from here? There are excellent resources on here about introducing more variables to get rid of the absolute values in the objective function of an LP. Edit: For example, add constraints $$t_{ip} - t_{im} = s_i$$ $$t_{ip},\,t_{im} \geq 0 \quad \forall i$$ and the objective is to minimize $\sum_i (t_{ip} + t_{im})$
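A minimal sketch of this construction with scipy's linprog (the $3\times 3$ system here is hypothetical; the variable vector is $[x, t_p, t_m]$ and the equality constraints encode $Ax + t_p - t_m = b$):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 3x3 system A x = b.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
b = np.array([1.0, 2.0, 4.0])
m, n = A.shape

# Variables [x, t_p, t_m]; constraints A x + t_p - t_m = b; minimize sum(t_p + t_m).
A_eq = np.hstack([A, np.eye(m), -np.eye(m)])
cost = np.concatenate([np.zeros(n), np.ones(2 * m)])
bounds = [(None, None)] * n + [(0, None)] * (2 * m)
res = linprog(cost, A_eq=A_eq, b_eq=b, bounds=bounds)
print(res.x[:n], res.fun)  # fitted x and the minimized total absolute error
```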
Probability to win or lose tournament
If the probability of winning is $p$, then the probability of winning $3$ times is $p^3$. Hence, for the probability of winning $3$ times to be at least $40\%$ you need $$ p^3 \geq 0.4 \Leftrightarrow p \geq 0.4^{\frac{1}{3}}.$$
3D Surface Area Integral
Mostly you forgot to square components before adding them under the square root. Along the surface, $\vec r=\langle x,y,2\sqrt{x^2+y^2}\rangle$, so $$d\vec r=\langle1,0,\frac{2x}{\sqrt{x^2+y^2}}\rangle dx+\langle0,1,\frac{2y}{\sqrt{x^2+y^2}}\rangle dy$$ So $$\begin{align}d^2\vec A & =\langle1,0,\frac{2x}{\sqrt{x^2+y^2}}\rangle dx\times\langle0,1,\frac{2y}{\sqrt{x^2+y^2}}\rangle dy\\ & =\pm\langle\frac{-2x}{\sqrt{x^2+y^2}},\frac{-2y}{\sqrt{x^2+y^2}},1\rangle dx\,dy\end{align}$$ Taking magnitude, $$d^2A=||d^2\vec A||=\sqrt{\frac{4x^2+4y^2}{x^2+y^2}+1}\,dx\,dy=\sqrt5\,dx\,dy$$ So you should get $$A=\int d^2A=\int_0^1\int_{x^2}^x\sqrt5\,dy\,dx=\frac{\sqrt5}6$$
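Numerical confirmation (scipy sketch):

```python
import numpy as np
from scipy.integrate import dblquad

# Integrate the constant sqrt(5) over the region x^2 <= y <= x, 0 <= x <= 1.
area, _ = dblquad(lambda y, x: np.sqrt(5), 0, 1, lambda x: x**2, lambda x: x)
print(area, np.sqrt(5) / 6)  # agree
```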
Statistics: Random Variable within a Random Variable
You are told that $X\sim \text{unif}(a,b)$, and that $Y|X\sim\text{unif}(0, X)$. The simplest answer is to use the law of total expectation, which gives $$E[Y] = E\{E[Y|X]\} = E\left[\frac{0+X}{2}\right] = \frac{1}{2}E[X] = \frac{1}{2}\cdot\frac{a+b}{2}.$$ You can also use $$E[Y] = \int_{-\infty}^\infty E[Y|X = x] f_X(x)\,dx = \int_a^b \frac{0+x}{2}\cdot\frac{1}{b-a}\,dx$$ if you haven't been taught/don't cover the previous law.
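A simulation check (numpy sketch; the values of $a$ and $b$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = 2.0, 10.0
X = rng.uniform(a, b, 1_000_000)
Y = rng.uniform(0.0, X)                # Y | X ~ unif(0, X)
print(Y.mean(), (a + b) / 4)           # both ~3.0
```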
Does the joint distribution of the solution of a linear SDE with its delayed version have a PDF? How to compute their mutual entropy?
You can write the random variable $\mathbf{Z}(t)$ as $$ \begin{align*} \mathbf{Z}(t) = e^{\mathbf{A}(t-t_0)}\mathbf{z}_{t_0} + \int_{t_0}^{t}e^{\mathbf{A}(t-s)}\Gamma(s)ds, \end{align*} $$ where $\Gamma(s)$ is the underlying white-noise process. The same is true of $\mathbf{Z}(t+\tau)$, and indeed we have $$ \begin{align*} \mathbf{Z}(t+\tau) &= e^{\mathbf{A}(t+\tau - t_0) }\mathbf{z}_{t_0}+\int_{t_0}^{t+\tau}e^{\mathbf{A}(t+\tau-s)}\Gamma(s)ds \\ &= e^{\mathbf{A}\tau }\mathbf{Z}(t) +\int_{t}^{t+\tau} e^{\mathbf{A}(t + \tau-s)}\Gamma(s)ds. \end{align*} $$ So we see that $\mathbf{Z}(t)$ is given by the linear transformation acting on functions \begin{align*} f(t) \mapsto e^{\mathbf{A}t}\mathbf{z}_{t_0} + \int_{t_0}^{t}e^{\mathbf{A}(t-s)}f(s)ds \end{align*} and so since $\mathbf{Z}(t), \mathbf{Z}(t+\tau)$ are given by a linear transformation of a common Gaussian random variable, $\Gamma(s)$, it follows that they will themselves have a multivariate Gaussian distribution. Now all that remains is to calculate the parameters of this distribution. The mean and variance are straightforward, and we also have \begin{align*} \mathbb{E}\left[ \mathbf{Z}(t+\tau) \mathbf{Z}(t)^{T} \right] &= \mathbb{E}\left[ e^{\mathbf{A}\tau} \mathbf{Z}(t) \mathbf{Z}(t)^T \right] + \mathbb{E}\left[ \int_{t}^{t+\tau}e^{\mathbf{A}(t+\tau - s)} \Gamma(s) ds \cdot \mathbf{Z}(t)^T\right] \\ &= e^{\mathbf{A}\tau} \mathbb{E}\left[ \mathbf{Z}(t)\mathbf{Z}(t)^T \right] + \underbrace{ \mathbb{E} \left[\int_{t}^{t+\tau} e^{\mathbf{A}(t+\tau - s)}\Gamma(s)ds \right]}_{=0} \cdot \mathbb{E}\left[ \mathbf{Z}(t)^T \right] \\ &= e^{\mathbf{A}\tau}\mathbb{E}\left[ \mathbf{Z}(t)\mathbf{Z}(t)^T \right], \end{align*} where the expectation factors because the noise after time $t$ is independent of $\mathbf{Z}(t)$. Now you can use these results to calculate the mutual entropy, or any other functional you desire. To expand on the comments; defining the random variable $$ \mathbf{V} = \int_{t}^{t+\tau} e^{\mathbf{A}(t+\tau - s)}\Gamma(s) ds, $$ then $$ \begin{bmatrix} \mathbf{Z}(t) \\ \mathbf{Z}(t+\tau) \end{bmatrix} = \begin{bmatrix} \mathbf{I} & \mathbf{0} \\ e^{\mathbf{A}\tau} & \mathbf{I} \end{bmatrix} \begin{bmatrix} \mathbf{Z}(t) \\ \mathbf{V} \end{bmatrix} $$ now it remains to convince yourself that $\mathbf{Z}(t)$ and $\mathbf{V}$ are themselves jointly Gaussian, which I will leave you to do, but maybe it will help to declutter things slightly and consider just the univariate case, with initial condition $z_0 = 0$, and writing the finite approximation to $Z_n(t)$ as $$ Z_n(t) = \sum_{t_k < t} e^{a (t - t_k )} \Delta B_{t_k} $$ where $\Delta B_{t_k} = B(t_{k+1}) - B( t_k )$ is the Brownian motion increment process; then I hope it seems at least plausible that $$ \begin{align*} Z_n(t+\tau) &= \sum_{t_k < t + \tau} e^{a (t + \tau -t_k) } \Delta B_{t_k} \\ &= e^{a \tau} \left( \sum_{t_i < t } e^{a (t - t_i )}\Delta B_{t_i} + \sum_{t < t_j < t + \tau} e^{a (t - t_j) }\Delta B_{t_j} \right) \\ &= e^{a \tau} \left( Z_{n}(t) + \sum_{t < t_j < t + \tau} e^{a (t - t_j) }\Delta B_{t_j} \right) \end{align*} $$ is suggestive of $Z(t+\tau)$ and $Z(t)$ being jointly Gaussian.
Inverse of a matrix over a finite field from inverse over $\mathbb{Q}$
Look at the adjugate matrix. $M_n(R)$ is the ring of $n \times n$ matrices with coefficients in a commutative ring $R$. If $A \in M_n(\mathbb{Z})$ and $A_p \in M_n(\mathbb{F}_p),\ A_p \equiv A \bmod p$ then $$A\ \text{adj}(A) = \det(A) I \qquad \text{in } M_n(\mathbb{Z})$$ Whose reduction $\bmod p$ of each term is $$A_p\ \text{adj}(A_p) = \det(A_p) I \qquad \text{in } M_n(\mathbb{F}_p)$$ If $\det(A_p) \not\equiv 0 \bmod p$ then $\det(A) \ne 0$ and $$A^{-1} = \frac{1}{\det(A)}\text{adj}(A)\qquad \text{in } M_n(\mathbb{Q})$$ whose reduction $\bmod p$ of each term is $$ \qquad A_p^{-1} =\frac{1}{\det(A_p)}\text{adj}(A_p) \qquad \text{in } M_n(\mathbb{F}_p)$$
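A small sanity check with sympy (a sketch; the matrix and prime are arbitrary), comparing the reduced adjugate formula against sympy's built-in inv_mod:

```python
from sympy import Matrix

A = Matrix([[3, 1], [4, 2]])               # det = 2, invertible over Q and mod 5
p = 5
det_inv = pow(int(A.det()) % p, -1, p)     # inverse of det(A) mod p
inv_p = (det_inv * A.adjugate()).applyfunc(lambda e: e % p)
print(inv_p)                               # Matrix([[1, 2], [3, 4]])
print(A.inv_mod(p))                        # sympy's direct computation agrees
```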
Is this proof for area of a circle correct?
In the 18th century Leonhard Euler would have said that if $n$ is an infinitely large integer, then $\sin\dfrac{2\pi}n = \dfrac{2\pi}n$. Today we say that the limit as $n\to\infty$ of the ratio of one of those to the other is $1$. I'd say the argument is on the right track but seems more complicated than it needs to be and seems to rely on less primitive concepts than what is needed. I also have some qualms about nitpicking details. One may consider the polygon as inscribed in the circle or circumscribed about the circle or some compromise between those extremes, and as far as I know it doesn't matter which you pick. Suppose we consider it circumscribed rather than inscribed and we eschew trigonometric functions and argue as follows: the area of each triangle is 1/2 base times height. The sum of the bases is the perimeter of the polygon, so the area of the polygon is 1/2 perimeter times "radius", where the "radius" is the distance from any vertex to the center. By making $n$ large enough, we can make the perimeter of the polygon differ by as little as desired from the circumference of the circle, and we can make the area of the polygon differ by as little as desired from the area of the circle. Thus 1/2 times the circumference of the circle times the radius can be made as close as desired to the area of the circle by making $n$ big enough. But the circumference of the circle and the radius and the area of the circle do not depend on $n$. Therefore 1/2 times the circumference times the radius must be exactly the area of the circle.
Proof that a particular element of this matrix is always real
Let $$ A^{-1} = \pmatrix{C&D\\E&F} $$ where $C$ is $(r-1) \times (r - 1)$. Per my comment above, we have $$ X^{-1} = BA^{-1} = \pmatrix{I&0\\E&F} $$ Note that $F$ is invertible whenever $B$ is invertible. Moreover, we may therefore state that $X$ has the form $$ X = (X^{-1})^{-1} = \pmatrix{I&0\\\star & F^{-1}} $$ where $\star$ denotes an unimportant $(n +1 - r) \times (r-1)$ matrix. However, $F$ is a principal submatrix of the Hermitian matrix $A^{-1}$, so it is Hermitian. It follows that $F^{-1}$ is Hermitian. It follows that $X_{rr} = (F^{-1})_{11}$ is real, as desired.
General solution of $yy'' - (y')^2 + y' = 0$
Hint: Divide everything by $y^2$. Then use the identities $$ \left(\dfrac{y'}y\right)'=\dfrac{y''y-(y')^2}{y^2}\qquad\text{and}\qquad\left(\dfrac1y\right)'=\dfrac{-y'}{y^2}. $$
How to integrate for a countable sum of measures?
There is a more precise answer to the question, as a matter of fact a necessary and sufficient condition, as I pointed out a few months ago answering another question on MathOverflow. Since $$ m=\sum_{n\in\mathbb{N}}a_n m_n\triangleq\lim_{N\to\infty} \sum_{|n|\leq N}a_n m_n\tag{1}\label{1} $$ the OP's question is equivalent to asking when it is possible to pass to the limit under the integral symbol, i.e. when $$ \int f\mathrm{d}m\triangleq\int f\mathrm{d}\left(\sum_{n\in\mathbb{N}}a_n m_n\right)=\lim_{N\to\infty} \sum_{|n|\leq N}a_n\int f \mathrm{d}m_n\triangleq\sum_{n\in\mathbb{N}}a_n \int f \mathrm{d}m_n\; ?\tag{2}\label{2} $$ The answer requires the two following definitions. Definition 1. Let $(E,\mathcal{E})$ be a measure space and $\phi:\mathcal{E}\to\overline{\mathbb{R}}$ a numerical set function: $\phi$ is called exhaustive if $$ \lim_n\phi(A_n)=0 $$ for all families $\{A_n\}$ of pairwise disjoint sets in $\mathcal{E}$. Definition 2. Let $(E,\mathcal{E})$ be a measure space and $H$ a set (and thus possibly a family) of numerical set functions defined on $\mathcal{E}$: $H$ is called uniformly exhaustive if the numerical set function $$ A\mapsto\sup_{\phi\in H} \vert\phi(A)\vert\;\text{ is exhaustive.} $$ By Cafiero's theorem (on the passage to the limit under the integral), formula \eqref{2} holds if and only if the limit \eqref{1} exists pointwise and $$ \bigg(f\cdot\sum_{|n|\leq N}a_n m_n\bigg)_{N\geq 1}=\bigg(\sum_{|n|\leq N}f\cdot a_n m_n\bigg)_{N\geq 1}=\bigg(\sum_{|n|\leq N} a_n\, f\cdot m_n\bigg)_{N\geq 1}\text{ is uniformly exhaustive.} $$ A minimal bibliography on the theorem is available in my MathOverflow post cited above
How to find where the magnitude of the gradient of a function is maximized?
The magnitude of the gradient just depends on the distance from the origin $\rho=\sqrt{x^2+y^2}$. The function $f:[0,+\infty)\to[0,+\infty)$ defined by: $$ f(\rho) = 2\rho\, e^{-\rho^2} $$ attains its maximum when $f'=0$. Since: $$ f'(\rho) = 2(1-2\rho^2)e^{-\rho^2} $$ the maximum is attained for $\rho=\frac{1}{\sqrt{2}}$: $$ f(\rho)\leq \sqrt{\frac{2}{e}}.$$
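A quick numerical confirmation (numpy sketch):

```python
import numpy as np

rho = np.linspace(0, 3, 100_001)
f = 2 * rho * np.exp(-rho**2)
print(rho[f.argmax()], 1 / np.sqrt(2))   # ~0.7071
print(f.max(), np.sqrt(2 / np.e))        # ~0.8578
```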
Prove $n^5+n^4+1$ is not a prime
$n^5 + n^4 + 1 = n^5 + n^4 + n^3 - n^3 - n^2 - n + n^2 + n + 1$ $= n^3(n^2 + n + 1) - n(n^2 + n + 1) + (n^2 + n + 1)$ $= (n^2 + n + 1)(n^3 - n + 1)$ Hence, for $n>1$, $n^5 + n^4 + 1$ is not a prime number, since both factors exceed $1$.
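One can confirm the factorization with a computer algebra system, e.g. sympy:

```python
import sympy as sp

n = sp.symbols('n')
print(sp.factor(n**5 + n**4 + 1))   # (n**2 + n + 1)*(n**3 - n + 1)
```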
path-connected subgroup of Lie group is Lie group
I found a proof on page 354 of Structure and Geometry of Lie Groups (Hilgert and Neeb) with a free Google preview. It uses the Brouwer fixed point theorem, and $A(H)$ is the subalgebra you describe.
How to show that this function is continuous but not differentiable at x=0?
Hint: you can use L'Hospital's rule for $$ \lim _{x \to 0} \frac{\tan^{-1}(\frac{1}{x})}{\frac{1}{x}} $$
Derivation of dice-sides polynomial
Here's a derivation. It's less clean than I'd like, but it does the job. Let a triple $(a,b,c)$ denote three visible faces of the die. We'll work modulo 7 and treat the die faces as $\{-3,-2,-1,+1,+2,+3\}$, noting that opposite faces add to $0$. So $(a,b,c)$ are a permutation of $(\pm 1, \pm2, \pm3)$. We will express a relationship between three $\pm1$-valued functions that describe the triple. The handedness $h(a,b,c)$ is $+1$ for right-handed triples on the die and $-1$ for left-handed triples. The sign $s(a,b,c)$ is $+1$ or $-1$ according as the number of minus signs among $(a,b,c)$ from $\{\pm 1, \pm2, \pm3\}$ is even or odd. The chirality $f(a,b,c)$ is the cyclic order of $(1,2,3)$ in $(a,b,c)$, ignoring sign. The cyclic order $(1,2,3)$ gives $+1$ and $(1,3,2)$ gives $-1$. Claim: For any triple, the product of the three functions satisfies $h s f = +1$. Proof: This is satisfied by the triple $(1,2,3)$, for which they are all $+1$. When we negate one of the triple elements, we've mirrored the triple geometrically, so $h$ and $s$ negate but $f$ is sign-independent and stays the same, preserving the equality. When we swap two elements, we negate $h$ and $f$ but keep $s$, again preserving equality. Since these operations suffice to reach any triple, all triples satisfy it. Now, we use this equality to derive the last element $c$ from the other two elements of the right-handed triple $(a,b,c)$. Right-handed means $h=1$, so we can rewrite the claim as $sf=1$ or $s=f$. Since $1\times 2 \times 3 = -1 \bmod 7$, we have $s(a,b,c) = -abc$, as each minus sign flips the product. We can express the chirality $f$ from only $a,b$, since $c$ is forced up to sign, and moreover from $a^2,b^2$, which erases the sign. Squaring mod 7 takes $(1,2,3)$ to $(1,4,2)$, where each element is $4$ times the cyclically previous one modulo 7. So, $b^2/a^2\bmod 7$ is $4$ for this chirality and $2=4^{-1}\bmod 7$ for the opposite one. Therefore, we can find $f$ from the difference of this ratio and its inverse: $$b^2/a^2-a^2/b^2 = 2f \bmod 7$$ and dividing both sides by 2, which is the same as multiplying by -3, $$-3(b^2/a^2-a^2/b^2) = f \bmod 7$$ Now, plugging expressions into $s=f$ gives $$abc = 3(b^2/a^2-a^2/b^2) \bmod 7.$$ Finally, dividing to isolate $c$ and using the fact that the nonzero elements satisfy $x^6=1$ modulo 7, so $x^{-3} = x^3$, we get the polynomial expression $$c=3(a^3b-ab^3)\bmod 7.$$
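Since the claim and the final polynomial are finite checks, they can be verified by brute force; a Python sketch (enumerating all $24$ right-handed triples, i.e. those with $sf=1$ per the claim above):

```python
from itertools import permutations

def chirality(a, b, c):
    # +1 if (|a|,|b|,|c|) is a cyclic rotation of (1,2,3), else -1
    return 1 if (abs(a), abs(b), abs(c)) in {(1, 2, 3), (2, 3, 1), (3, 1, 2)} else -1

def sign(a, b, c):
    # +1 for an even number of minus signs, -1 for odd
    return (-1) ** sum(v < 0 for v in (a, b, c))

checked = 0
for a, b, c in permutations([-3, -2, -1, 1, 2, 3], 3):
    if {abs(a), abs(b), abs(c)} != {1, 2, 3}:
        continue                               # faces must come from distinct axes
    if sign(a, b, c) * chirality(a, b, c) != 1:
        continue                               # keep right-handed triples (h = +1)
    r = 3 * (a**3 * b - a * b**3) % 7
    r = r - 7 if r > 3 else r                  # map residues {4,5,6} to {-3,-2,-1}
    assert r == c
    checked += 1
print(checked)  # 24 right-handed triples, all satisfying the polynomial
```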
Conditions for an object to be called a variable
Regarding (6): This result is gibberish. You have stated that $a$ is a variable that can only take two values. Consequently, the limit required to evaluate the derivative, $f'(a)$, does not exist. Also, your source for $a$, the solutions to $f(x) = 0$, makes $a$ a constant. This might be clearer if you don't reuse $a$ to mean multiple things. In particular, let $a_1 = 3$, $a_2 = -1$, and observe that your set of solutions is $\{a_1, a_2\}$. That is, stop forcing $a$ to simultaneously represent two distinct values. Then $f(a_1) = 0$ -- a constant function always producing the value zero -- and of course the derivative of a constant function with respect to any independent variable is zero. Similarly, $f(a_2) = 0$ gives a zero derivative. I don't make sense of your equation (7). I would check this by plugging both choices of $a \in \{a_1,a_2\}$ into this equation to see if they work. (They don't in this case, but I interpret your particular (7) as a generic example, not an example intended to work with the given $f$.)
Ordered Permutations
For a sequence consisting of the numbers $1$ through $9$ each exactly once containing another sequence is equivalent to ordering constraints. For example, containing $123$ is equivalent to "$1 \prec 2$ and $2 \prec 3$". Such constraints can be visualized with a directed graph. (In cases where the constraints can all be satisfied, the graph must be acyclic.) For the quoted question, the graph looks like this: Counting the number of arrangements that satisfy the constraints is equivalent to counting topological orderings of the graph. The process of choosing such an ordering can be described by choosing a node that has no incoming edges, deleting it, and then repeating until the graph is empty. In general, however, counting the number of ways can be hard. In this case, it is not too bad to count by hand and exploit some symmetry. (Maybe there is a more clever way, but I didn't see it yet.) We start with $1$, then go to either $2$ or $4$. Due to symmetry, the number of sequences that start with $12$ is that same as the number that start with $14$, so consider only $12$. The next choice must be $3$ or $4$. There are only $5$ sequences that start with $123$: $$ \begin{array}{c} 123456789 \\ 123457689 \\ 123457869 \\ 123475689 \\ 123475869 \\ \end{array} $$ For sequences that start with $124$, the next choice must be either $3$, $5$, or $7$. For $1243$: $$ \begin{array}{c} 124356789 \\ 124357689 \\ 124357869 \\ 124375689 \\ 124375869 \\ \end{array} $$ $1247$ also has $5$ by symmetry. For $1245$: $$ \begin{array}{c} 124536789 \\ 124537689 \\ 124537869 \\ 124573689 \\ 124573869 \\ 124578369 \\ \end{array} $$ Thus the total is $2\cdot(5 + 2 \cdot 5 + 6) = 42$. What is the smallest number of 3-digit sequences needed such that there is exactly 1 permutation of 9 that contains all of them? If the graph has fewer than $8$ edges, then it must have at least two connected components, and each connected component has at least one digit that can start the permutation. So, to have only $1$ permutation, there must be at least $8$ edges. Each 3-digit sequence gives no more than $2$ edges, so $4$ sequences are required. (You have already given an example of this.) Exactly 2 permutations? Given a permutation, if there is not a constraint between two consecutive digits, those digits can be swapped, and no constraints will be violated. So if exactly two permutations are allowed, they must differ only by a swap of consecutive elements, as any other missing constraint between consecutive elements would admit another permutation. We can still mostly argue by counting edges. (Note that none of the needed edges can come only from transitivity, as a transitive relation cannot describe directly adjacent elements of the permutation.) If the swap is at the beginning or end, only $8$ edges are needed, but overlapping prevents a solution with only $4$ 3-digit sequences. Otherwise, $9$ edges are needed, so still there must be $5$. Examples: $$ \begin{array}{c} 134, 234, 456, 678, 789 \\ 124, 134, 456, 678, 789 \\ 123, 235, 245, 567, 789 \\ \end{array} $$ 4-digit sequences instead of 3? We can follow the same outlines as the previous arguments, but each 4-digit sequence can introduce $3$ edges instead of $2$. To specify exactly one permutation, we need $8$ edges, so at least $3$ 4-digits sequences. For example: $1234, 4567, 6789$. To specify exactly two permutations, at least $3$ sequences are needed, and indeed $3$ suffice (but not for all swap positions): $1235, 2456, 6789$.
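Since brute force over $9!$ permutations is cheap, the counting claims above can be verified directly; a Python sketch checking two of them:

```python
from itertools import permutations

def contains(perm, pattern):
    # True if `pattern` occurs in `perm` as a (not necessarily contiguous) subsequence
    it = iter(perm)
    return all(ch in it for ch in pattern)

def count_perms(patterns):
    return sum(all(contains(p, pat) for pat in patterns)
               for p in permutations("123456789"))

print(count_perms(["134", "234", "456", "678", "789"]))  # 2, as claimed
print(count_perms(["1234", "4567", "6789"]))             # 1, as claimed
```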
Canadian Senior Math Contest Question
I would call all of these algebra, but this is definitely competition math algebra. These are solvable by expanding through binomial expansion (and I believe this is the intended solution). If you need more hints, ask below.
When does an under-specified linear system have a unique solution?
For example, if you have $a_{11},a_{12},a_{13}>0$ and $b_1=0$, then (using the condition $x,y,z\geq 0$) the only solution of the first equation is $x=y=z=0$, and this is a solution of the second equation if and only if $b_2=0$. So in the case \begin{cases} b_1=0\\ b_2=0 \end{cases} your only solution is $x=y=z=0$. The geometric idea is not difficult. Each of your equations represents a plane, but in general in $\mathbb{R}^3$ the intersection of two planes is the empty set, or a plane (if the two planes are equal), or a line; there is no case in which the intersection is a single point. But in your case you must also consider the condition $x,y,z\geq 0$, so you can choose $b_1,b_2$ such that the two different planes intersect in a line, that line passes through the origin, and its intersection with the locus $x,y,z\geq 0$ is only the origin. In this case your only solution will be the origin.
How do I find the dot product of these vectors?
Recall that if $v$ and $w$ are two vectors in $\mathbb{R}^2$, then $v\cdot w=\|v\|\|w\|\cos{\theta}$, where $\theta$ is the angle between $v$ and $w$ and $\|v\|$ and $\|w\|$ are the lengths of the vectors. So since each of your vectors has length $1$, this problem reduces to finding the angle between each of these pairs of vectors. We are given that $AEB$ is an equilateral triangle, so the angle between $EA$ and $EB$ is $60^\circ$. The angle between $AB$ and $BE$ ($60^\circ$) and the angle between $BE$ and $BC$ must sum to $90^\circ$, so the angle between $BC$ and $BE$ is $30^\circ$. Finally, to find the angle between $DA$ and $BE$, we translate $BE$ over to the point $D$. Then we see that the angle is $90^\circ$ plus the angle between $AB$ and $BE$ ($60^\circ$), and hence is $150^\circ$. So $BC\cdot BE=1\cdot 1\cdot\cos{30^\circ}=\frac{\sqrt{3}}{2}$, $DA\cdot BE=1\cdot1\cdot\cos{150^\circ}=-\frac{\sqrt{3}}{2}$, and $EA\cdot EB=1\cdot1\cdot\cos{60^\circ}=\frac{1}{2}$.
"Vanishing inner product implies orthogonality" Is it a definition or theorem?
Just to clarify the excellent other answers and Willie Wong's lucid comment: Orthogonal := inner product equals zero. A linear algebra term about vectors (it applies to all inner product spaces); this is a definition. Perpendicular := intersect at a right angle. A geometry term about lines, planes, or higher-dimensional hyperplanes (it applies to Euclidean planes and spaces); this is a definition. $\mathbb R^n$ representing Euclidean $n$-space, and vectors in $\mathbb R^n$ representing Euclidean lines, planes, or hyperplanes: this is an interpretation. In $\mathbb R^n$, orthogonal if and only if perpendicular: this is a theorem (provable by the Pythagorean Theorem), and one which is frequently not considered important and is ignored, as many [most?] texts do not consider classical geometry to be important; instead all geometry is interpreted only in linear algebra terms, in which "perpendicular" is not used. (In a way, asking why orthogonal/perpendicular means the inner product is $0$ is like asking why $\{t\,v+a \mid t \in \mathbb R\}$ (for $v\neq 0$) is a line. It's a result of interpreting classical geometry in analytic terms, and then deciding the classical interpretation is no longer pertinent and, from then on, using only the analytic interpretation as the very basis and definition of geometry.)
How to eliminate $\theta$?
$$r_1\cdot r_2 = \frac{(4a)^2}{\sin \theta \cos \theta}$$ $$\left(\frac{r_2}{r_1}\right)^{\frac{1}{3}}=\frac{\sin \theta}{\cos \theta}$$ $$\frac{\left(\frac{r_2}{r_1}\right)^{\frac{1}{3}}}{r_1\cdot r_2 }=\frac{\sin^2 \theta}{(4a)^2}=r_1^{-\frac{4}{3}}r_2^{-\frac{2}{3}}$$ $$\frac{1}{\left(\frac{r_2}{r_1}\right)^{\frac{1}{3}}r_1\cdot r_2 }=\frac{\cos^2 \theta}{(4a)^2}=r_1^{-\frac{2}{3}}r_2^{-\frac{4}{3}}$$ Adding the last two lines and using $\sin^2\theta+\cos^2\theta=1$: $$\therefore r_1^{-\frac{4}{3}}r_2^{-\frac{2}{3}}+r_1^{-\frac{2}{3}}r_2^{-\frac{4}{3}}=\frac{1}{(4a)^2}$$
Hashing upper bound?
This is not a full answer but a potential start. For $n=1$ it's clear that $E(X)=1$. For $n=2$, consider all possible assignments of the $4$ objects to the $2$ slots: $$((1, 1, 1, 1), (1, 1, 1, 2), (1, 1, 2, 1), (1, 1, 2, 2), (1, 2, 1, 1), (1, 2, 1, 2), (1, 2, 2, 1), (1, 2, 2, 2), (2, 1, 1, 1), (2, 1, 1, 2), (2, 1, 2, 1), (2, 1, 2, 2), (2, 2, 1, 1), (2, 2, 1, 2), (2, 2, 2, 1), (2, 2, 2, 2))$$ This gives the following probability distribution: $Pr(X=2)=\frac{6}{16}$, $Pr(X=3)=\frac{8}{16}$, $Pr(X=4)=\frac{2}{16}$, and hence $E(X)=\frac{11}{4}$. For $n=3$ a similar analysis gives $E(X)=\frac{1467}{256}$. For $n=4$ it gives $E(X)=\frac{39203}{4096}$. Higher cases tax my ability to quickly code it efficiently, so I haven't yet examined them, but I highly suspect a pattern should emerge.
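For what it's worth, here is a brute-force sketch of the $n=2$ enumeration. It assumes $X$ is the maximum number of objects landing in any one slot, which reproduces the table above; the question's actual definition of $X$ may differ (the quoted values for $n\geq 3$ do not seem to come from this exact model), so treat the x = ... line as the part to adapt.

from itertools import product
from fractions import Fraction
from collections import Counter

n = 2
dist = Counter()
for assignment in product(range(1, n + 1), repeat=2 * n):
    x = max(Counter(assignment).values())  # assumed: X = maximum slot load
    dist[x] += 1
total = n ** (2 * n)
e_x = sum(Fraction(x * c, total) for x, c in dist.items())
print(dict(dist), e_x)  # {2: 6, 3: 8, 4: 2} 11/4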
Solving inequality-constrained least-norm problem non-iteratively
Assuming the problem has a solution, your strategy is pretty much there, with a couple of small tweaks. You might have to recursively apply something like step 2 as you have outlined. Imagine that the optimal solution occurs with two or three $x_i$ fixed at a boundary instead of just one. Worst case, this now has factorial time complexity in the dimension of $x$ rather than linear. That isn't bad in $4$ dimensions like you have. Since you have $4$ coordinates and $2$ boundaries each to worry about, you really have $2^4=16$ vectors to consider even in a non-recursive version of step 2 (since you don't know if it's the upper bound or the lower bound that you care about). The problem you have specified, though, looks familiar enough that dealing with the combinatorial explosion at the boundary is probably already solved with a simpler algorithm. For example, applying the method of Lagrange multipliers you can conclude that there is some vector $\lambda$ so that $x=\frac12A^T\lambda$, hence $AA^T\lambda=2b$. That won't really help if the optimal solution is outside your range of interest, but symmetric linear solvers are typically quite a bit faster than more general purpose tools. Other thoughts that come to mind: if the constraint $Ax=b$ is tight, then with probability $1$ (for most common distributions of the entries of $A$ and $b$) the system has the maximum possible rank (in your case, a $3\times4$ matrix has maximal rank $3$). If the system were square then, by definition, its unique solution would minimize $\|x\|_2^2$ (since it's the only solution, it's the minimum across all solutions), and you would just have to hope that it satisfies the remaining inequalities. Update: Your comment is interesting. In the event that you don't care about the general problem and are definitely working with a $3\times4$ matrix with maximal rank, there is quite a bit more we can say about the system. First, that means the general solution to the system of equations can be represented by $\lambda v_h+v_p$, where $v_h$ is any nonzero solution to the homogeneous equation $Ax=0$ and $v_p$ is any solution to the particular equation $Ax=b$. Note that $v_h$ is any nonzero right singular vector corresponding to the singular value $0$, and any major linear algebra library can compute those extremely efficiently. Additionally, finding a single solution to $Ax=b$ is also implemented in those libraries and very fast. All that said, now we have the reduced problem of minimizing $\|\lambda v_h+v_p\|_2^2$ where $v_h$ and $v_p$ are both known and $\lambda$ is just a scalar, still subject to $m\leq\lambda v_h+v_p\leq M$ (coordinate-wise inequality) for some vectors $m$ and $M$. That system of inequalities can quickly be turned into a single lower bound and a single upper bound for $\lambda$. It might be possible that the inequalities can't all be satisfied, and this is the step where you determine the satisfiability of your equations. Let $v_{hi}$ and $v_{pi}$ denote the $i$-th coordinates of the homogeneous and particular solutions, respectively. Differentiating with respect to $\lambda$ and setting equal to $0$ we conclude $$\lambda=-\frac{\sum_i v_{hi}v_{pi}}{\sum_i v_{hi}^2}=-\frac{v_h\cdot v_p}{\|v_h\|_2^2}$$ (if $v_h$ is a unit singular vector, the denominator is $1$). From there, since $\|\lambda v_h+v_p\|_2^2$ is a convex quadratic in $\lambda$, if this stationary value falls outside the interval of interest you simply clamp it to the nearer endpoint.
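A minimal numpy sketch of that reduced one-dimensional procedure, assuming a full-row-rank $3\times4$ matrix so that the solution set of $Ax=b$ is a line (the function name and tolerance are my own):

import numpy as np

def min_norm_box(A, b, lo, hi):
    # Solution set of Ax = b (A is 3x4, full row rank) is the line
    # x = v_p + lam * v_h.
    v_p, *_ = np.linalg.lstsq(A, b, rcond=None)   # particular solution
    v_h = np.linalg.svd(A)[2][-1]                 # right singular vector for sigma = 0
    # Convert lo <= v_p + lam * v_h <= hi (coordinate-wise) into bounds on lam.
    lam_lo, lam_hi = -np.inf, np.inf
    for h, p, l, u in zip(v_h, v_p, lo, hi):
        if abs(h) < 1e-12:
            if not (l <= p <= u):
                return None                       # infeasible
            continue
        end1, end2 = (l - p) / h, (u - p) / h
        lam_lo = max(lam_lo, min(end1, end2))
        lam_hi = min(lam_hi, max(end1, end2))
    if lam_lo > lam_hi:
        return None                               # infeasible
    # Unconstrained minimizer of ||v_p + lam*v_h||^2, clamped to the interval:
    lam = -np.dot(v_h, v_p) / np.dot(v_h, v_h)
    return v_p + min(max(lam, lam_lo), lam_hi) * v_h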
How many N by N adjacency matrices exist with maximum degree of m?
This is an incomplete solution, and I would like to come back to this when I have time. Hopefully you can take this as a starting point and run with it. Note that if you just want an approximation, you can use the Monte Carlo method. Let me know if this interests you and I can expand.

Assumption. $0 \leq n \leq m < N$ are integers.

Let's start with a simple version of your problem in which we drop the requirement of undirectedness. Let $g(i) \equiv {}_{N-1} C_i \cdot {}_2F_1(1, i - N + 1; i + 1; -1)$ where ${}_a C_b$ is the binomial coefficient and ${}_2F_1{}$ is the ordinary hypergeometric function.

Lemma (Counting digraphs). There are exactly $(g(n) - g(m+1))^N$ directed graphs with (i) $N$ nodes, (ii) $n \leq \operatorname{outdeg}(v) \leq m$ for every node $v$, and (iii) no self-loops (i.e., no edges of the form $v \rightarrow v$). By symmetry, the above also holds if we replace $\operatorname{outdeg}$ with $\operatorname{indeg}$.

Proof. There are ${}_{N-1} C_k$ ways to assign $k$ edges to a single node if we exclude self-loops. It follows that there are $\sum_{k = n}^m {}_{N-1} C_k = \sum_{k = 0}^m {}_{N-1} C_k - \sum_{k = 0}^{n-1} {}_{N-1} C_k$ ways to assign edges to a single node. Some tedious algebra reveals that $\sum_{k = 0}^{\ell-1} {}_{N-1} C_k = 2^{N-1} - g(\ell)$, from which the result follows.

Figure. $\log$ of the number of digraphs when $n = 0$ (values $m \geq N$ are capped to $N - 1$).

Let's return to your original problem. This problem is much harder because, unlike the simpler problem above, one cannot "uncouple" the nodes of the graph to obtain an expression of the form $(\cdot)^N$. Let $\mathbb{N}$ denote the nonnegative integers. When $k$ is a positive integer, let $\mathbb{N}^k$ denote the usual Cartesian product. For convenience, let $A^0 \equiv \{ 0 \}$ for any set $A$. Let $\mathbf{x}_{(-1)} \equiv (x_2, \ldots, x_k)$ denote the vector $\mathbf{x}\in\mathbb{N}^k$ with its first coordinate removed. When $\mathbf{x} \equiv (x_1)$ is a singleton vector, we interpret this as $\mathbf{x}_{(-1)} \equiv 0$. Let $u_0(0) = 1$. For each positive integer $k \leq N$, let $u_k : \mathbb{N}^k \rightarrow \mathbb{N}$ by $$ u_k(\mathbf{x}) = \sum_{\mathbf{y} \in \mathcal{Y}} u_{k-1}(\mathbf{x}_{(-1)} + \mathbf{y}) \qquad \text{where} \qquad \mathcal{Y} \equiv \left \{ \mathbf{y} \in \{0,1\}^{k-1} \colon n \leq x_1 + \mathbf{1} \cdot \mathbf{y} \leq m \right \}. $$

Lemma (Counting undirected graphs). There are exactly $u_N(\mathbf{0})$ undirected graphs satisfying the same properties (i), (ii), and (iii) as above (with out-degree read as degree).

Code implementing the recurrence is given at the end of this post. Note, however, that this code only has a slight edge over the naive $O(2^{N^2})$ algorithm involving enumerating all possibilities.

Figure. $\log$ of the number of undirected graphs when $n = 0$.

import itertools
import numpy as np

_cache = {}

def log_num_undirected_graphs(num_nodes, min_out_degree, max_out_degree):
    def u(x):
        k = len(x)
        if k == 0:
            return 1
        key = (min_out_degree, max_out_degree) + x
        result = _cache.get(key, None)
        if result is not None:
            return result
        result = 0
        Y = itertools.product((0, 1), repeat=k-1) if k > 1 else [()]
        for y in Y:
            tmp = x[0] + sum(y)
            if tmp < min_out_degree or tmp > max_out_degree:
                continue
            x_prime = tuple(x_i + y_i for x_i, y_i in zip(x[1:], y))
            result += u(x_prime)
        _cache[key] = result
        return result
    zero = (0,) * num_nodes
    return np.log(u(zero))
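For small $N$ the recurrence can be sanity-checked against direct enumeration of all simple graphs; here is a quick (exponential-time) checker, written for this answer:

from itertools import combinations

def count_graphs_bruteforce(N, n, m):
    # Count simple undirected graphs on N labeled nodes (no self-loops)
    # in which every node has degree between n and m inclusive.
    edges = list(combinations(range(N), 2))
    count = 0
    for mask in range(1 << len(edges)):
        deg = [0] * N
        for i, (a, b) in enumerate(edges):
            if mask >> i & 1:
                deg[a] += 1
                deg[b] += 1
        if all(n <= d <= m for d in deg):
            count += 1
    return count

# e.g. compare exp(log_num_undirected_graphs(4, 0, 2)) with
# count_graphs_bruteforce(4, 0, 2)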
For positive T, show ⟨Tx,y⟩≤⟨Tx,x⟩⟨Ty,y⟩
So let $[x,y]=\left<Tx,y\right>$. We have $[x,x]=\left<Tx,x\right>\geq 0$ and $[y,x]=\left<Ty,x\right>=\left<y,Tx\right>=\overline{\left<Tx,y\right>}=\overline{[x,y]}$; linearity in the first argument and conjugate symmetry are easy to check. Now use Cauchy-Schwarz, which only requires a positive semi-definite form. It gives $|\left<Tx,y\right>|^{2}\leq\left<Tx,x\right>\left<Ty,y\right>$.
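In case the semi-definite version of Cauchy-Schwarz is unfamiliar, here is the standard argument, written out for this form: pick $\theta$ with $e^{i\theta}[x,y]=|[x,y]|$; then for every real $t$, \begin{align*} 0\leq\bigl[e^{i\theta}x+ty,\;e^{i\theta}x+ty\bigr]=[x,x]+2t\,\bigl|[x,y]\bigr|+t^2[y,y], \end{align*} and a real quadratic in $t$ that is nonnegative everywhere has nonpositive discriminant, i.e. $|[x,y]|^2\leq[x,x][y,y]$. (If $[y,y]=0$, the expression is linear in $t$ and nonnegativity forces $|[x,y]|=0$, so the inequality still holds.)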
Trouble simplifying the follow expression of a squared norm: $\Bigl\lVert\frac{\langle u,v \rangle}{\lVert v \rVert} v \Bigr\rVert ^2$
As you said, we can pull out the inner product as a scalar: $$\left\| \frac{\langle u,v \rangle}{\|v\|}v\right\|^2 = |\langle u,v \rangle|^2 \left\| \frac{v}{\|v\|}\right\|^2$$ But notice that $ \frac{v}{\|v\|}$ is a unit vector. Therefore, $ \|\frac{v}{\|v\|}\| =1$. So we have: $$|\langle u,v \rangle|^2 \left\| \frac{v}{\|v\|}\right\|^2=|\langle u,v \rangle|^2 .$$
What is meant by "prove $M$ is a left $R$-module"?
First, I think you meant "...annihilated by $I$", not "...annihilated by $R$". If someone says "Show $X$ is a thing" and just specifies a set $X$ (without specifying any operations), there must be some understood obvious operations. So if you are asked to show $\mathbb{R}$ is a group, it is understood that the group operation is $+$ ($\mathbb{R}$ has two obvious operations, addition and multiplication; multiplication doesn't make $\mathbb{R}$ a group due to lack of inverses). To answer your first question: yes, there are pairs of abelian groups and rings which do not allow for a module structure. For example: let $R=\mathbb{R}$. Then any $\mathbb{R}$-module is a (real) vector space. So pick any finite abelian group, say $\mathbb{Z}_4$, and it cannot be a real vector space, since real vector spaces are either trivial (sets of cardinality $1$) or infinite (and $|\mathbb{Z}_4|=4 \not= 1$ or $\infty$). For your next question: if $M$ is an $R$-module, then there is a natural $R/I$-action iff $M$ is annihilated by $I$, namely $(r+I) \cdot m = r \cdot m$ for $r \in R$ and $m \in M$ (i.e., have cosets act via their representatives). This is the proper interpretation of your problem. The author wasn't explicit because "everyone knows" what the "correct" action "should" be. :) To prove this: assume $M$ is annihilated by $I$. Then verify that $(r+I) \cdot m = r \cdot m$ is a well-defined operation (i.e., equivalent representatives yield equal answers). Then run through the rest of the module axioms to see that this actually makes $M$ an $(R/I)$-module. For the converse: assume $M$ is an $(R/I)$-module with this action. Then for all $x \in I$ and $m \in M$ you have $x \cdot m = (x+I) \cdot m = (0+I) \cdot m = 0 \cdot m = 0$ since $x+I=0+I$. Thus $M$ is annihilated by $I$. EDIT (to address your question from the comments): Let $M$ be an abelian group. If $M$ is also an $R$-module, the $R$-module structure doesn't have to be "internal" to the group. In fact, a single abelian group can have multiple module structures. For example: consider $\mathbb{R}^2$ as an $\mathbb{R}[x]$-module. Let $\mathbb{R}$ act on $\mathbb{R}^2$ as scalar multiplication. Let $x$ act as your favorite linear endomorphism (linear operator) on $\mathbb{R}^2$. Each linear operator yields a different $\mathbb{R}[x]$-module structure (some may be isomorphic, some not, but all unequal) even though the underlying abelian group $\mathbb{R}^2$ is the same. However, sometimes the module structure is determined by the group alone. This is true of $\mathbb{Z}$-modules. It happens because the $\mathbb{Z}$-action is forced to agree with repeated addition. Consider for example: $3 \cdot x = (1+1+1) \cdot x = 1\cdot x +1\cdot x+1\cdot x=x+x+x=3x$. Each abelian group has exactly one $\mathbb{Z}$-module action. This is why texts identify $\mathbb{Z}$-modules and abelian groups as the "same thing".
Is $[0,4,4]^T$ in the plane in $\mathbb{R}^3$ spanned by the columns of $A$?
Assuming that the column vectors of your matrix $A$ are $v=\begin{bmatrix}3&-2&1\end{bmatrix}^T$ and $w=\begin{bmatrix}-5&6&1\end{bmatrix}^T$, with $A=[v\quad w]$, then your linearly independent vectors $v$ and $w$ obviously span a plane $P=\operatorname{span}\{v,w\}\subset\mathbb{R}^3$ with $\dim(P) = 2$. Any vector $x\in P$ has the form $$x=\lambda v+\mu w\in P$$ and therefore can be written as a linear combination of the two vectors $v,w$ that span $P$. If you want to check whether $u=\begin{bmatrix}0&4&4\end{bmatrix}^T$ is in $P$, you have to find $\lambda,\mu$ such that this combination of the vectors $v,w$ equals $u$. If there is no solution, then the vector is not part of the plane. Basically you solve the following system of equations: $$\begin{bmatrix}3&-5\\-2&6\\1&1\end{bmatrix}\cdot\begin{bmatrix}\lambda\\\mu\end{bmatrix}=\begin{bmatrix}0\\4\\4\end{bmatrix}$$ Hint: This system has a unique solution.
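If you want to verify your hand computation afterwards, a least-squares solve does the job numerically (a throwaway numpy snippet; if the recombined vector matches $u$, then $u$ lies in the plane):

import numpy as np

A = np.array([[3.0, -5.0], [-2.0, 6.0], [1.0, 1.0]])
u = np.array([0.0, 4.0, 4.0])
coef, *_ = np.linalg.lstsq(A, u, rcond=None)  # candidate (lambda, mu)
print(coef, np.allclose(A @ coef, u))         # True means u is in the plane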
Why in cyclotomic extension, Gal($K/F$) may not isomorphic to the whole $Z/nZ$?
Once you choose a primitive root $w$, any automorphism of $F(w)$ sends $w$ to another root of the minimal polynomial of $w$ over $F$. Consequently, you do not obtain ALL the possible primitive roots of $1$. In other words, since $X^7-1$ is not irreducible, the Galois group does not act transitively on the roots. To be more explicit, in your example (over $F=\mathbb F_2$), say that $w$ is a root of $X^3+X+1$. What are the other roots? Well, we have $w(w^6+w^2+1)=w^7+w^3+w=w^3+w+1=0$, so $w^2$ is another root. The last one is $w^4$, since $w^3(w^{12}+w^4+1)=w^{15}+w^7+w^3=w^3+w+1=0$. Hence, the map $\sigma$ sending for example $w$ to $w^3$ is NOT an automorphism of $F(w)$. In fact, it is not even well defined: applying it to the relation $w^3+w+1=0$ would give $0=\sigma(0)=\sigma(w)^3+\sigma(w)+1=w^9+w^3+1=w^2+w^3+1=w^2+(w+1)+1=w^2+w=w(w+1)\neq 0$ (using $w^7=1$, $w^3=w+1$, and characteristic $2$).
Does a symmetric matrix necessarily have a symmetric square root?
Any symmetric matrix $P$ (I assume you mean real matrices) can be diagonalized by an orthogonal matrix $U$: $$ P=U^T D U,\quad D=\left(\begin{array}{ccc} \lambda_1&\ldots&0\\ \vdots&\ddots&\vdots\\ 0&\ldots&\lambda_n\\ \end{array}\right),\quad U U^T=I, $$ where $\lambda_1,\ldots,\lambda_n$ are the eigenvalues of $P$, $\lambda_1,\ldots,\lambda_n\in\mathbb R$. If $P$ is positive semi-definite, then we also have $\lambda_1\ge 0,\ldots,\lambda_n\ge 0$. Consider the matrix $$ Q=U^T \left(\begin{array}{ccc} \sqrt{\lambda_1}&\ldots&0\\ \vdots&\ddots&\vdots\\ 0&\ldots&\sqrt{\lambda_n}\\ \end{array}\right) U. $$ It is easy to see that $$ Q^2=U^T \left(\begin{array}{ccc} \sqrt{\lambda_1}&\ldots&0\\ \vdots&\ddots&\vdots\\ 0&\ldots&\sqrt{\lambda_n}\\ \end{array}\right) U U^T \left(\begin{array}{ccc} \sqrt{\lambda_1}&\ldots&0\\ \vdots&\ddots&\vdots\\ 0&\ldots&\sqrt{\lambda_n}\\ \end{array}\right) U= $$ $$ =U^T \left(\begin{array}{ccc} \sqrt{\lambda_1}&\ldots&0\\ \vdots&\ddots&\vdots\\ 0&\ldots&\sqrt{\lambda_n}\\ \end{array}\right) I \left(\begin{array}{ccc} \sqrt{\lambda_1}&\ldots&0\\ \vdots&\ddots&\vdots\\ 0&\ldots&\sqrt{\lambda_n}\\ \end{array}\right) U= U^T \left(\begin{array}{ccc} \lambda_1&\ldots&0\\ \vdots&\ddots&\vdots\\ 0&\ldots&\lambda_n\\ \end{array}\right) U=P. $$ Now suppose that a symmetric matrix $R$ which is not positive semi-definite, i.e. has one or more negative eigenvalues, has a symmetric square root $S$. $S$ is diagonalizable by an orthogonal matrix $U$, $$ S=U^T \left(\begin{array}{ccc} \lambda_1&\ldots&0\\ \vdots&\ddots&\vdots\\ 0&\ldots&\lambda_n\\ \end{array}\right) U; $$ $$ S^2=U^T \left(\begin{array}{ccc} \lambda_1&\ldots&0\\ \vdots&\ddots&\vdots\\ 0&\ldots&\lambda_n\\ \end{array}\right) U U^T \left(\begin{array}{ccc} \lambda_1&\ldots&0\\ \vdots&\ddots&\vdots\\ 0&\ldots&\lambda_n\\ \end{array}\right) U= U^T \left(\begin{array}{ccc} \lambda_1^2&\ldots&0\\ \vdots&\ddots&\vdots\\ 0&\ldots&\lambda_n^2\\ \end{array}\right) U. $$ We can see that the eigenvalues of $R=S^2$ are $\lambda_1^2\ge 0,\ldots,\lambda_n^2\ge 0$. This contradicts the assumption that $R$ has negative eigenvalues.
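As a quick numerical illustration of the construction (a sketch using numpy's eigendecomposition; the function name and tolerance are mine):

import numpy as np

def sym_sqrt(P):
    # Symmetric square root of a symmetric positive semi-definite matrix,
    # following the eigendecomposition argument above.
    lam, U = np.linalg.eigh(P)  # columns of U are orthonormal eigenvectors
    if np.any(lam < -1e-12):
        raise ValueError("P is not positive semi-definite")
    return U @ np.diag(np.sqrt(np.clip(lam, 0.0, None))) @ U.T

P = np.array([[2.0, 1.0], [1.0, 2.0]])
Q = sym_sqrt(P)
print(np.allclose(Q @ Q, P), np.allclose(Q, Q.T))  # True True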
On logic vs information theory
This is an interesting question. Here's a stab: Let the statement be $A$. I'm considering two ways of convincing you of $A$. 1) Enumerate all crows, and demonstrate that each one is black. 2) Enumerate all non-black things, and demonstrate that each one is not a crow. Now, there are far more non-black things than crows. So if I give you an example where I point out a crow and demonstrate that it is black, I've made further progress towards convincing you than I would have if I pointed out a non-black thing and demonstrated that it was not a crow. So I'm likely to choose strategy 1). Now I'm trying to think of an example where strategy 2) is the better bet, i.e. where there are far more "crows" than there are "non-black" things. EDIT: Mark Dominus points out that this is a thing; it's called Hempel's Paradox. They use ravens, but that's probably not really important.
Can you write a non-piecewise equation that describes an arbitrary shape?
As has been alluded to in the comments, your question depends on the definition of "equation" and "piecewise function". I'm sure you have some idea ("I'll know it when I see it") what sorts of things you'd like to allow and forbid, and we could try to make this precise in some algebraic way by building functions up out of some collection of "elementary" functions you'd like to start with. However, the functions you end up allowing will likely end up conflicting with the piecewise functions you want to forbid. For example, if you allow squares and square roots, you end up letting the absolute value function in the back door: $$\sqrt{x^2}=|x|=\begin{cases}x&\text{if }\ x\geq 0\\-x & \text{if }\ x<0\end{cases}$$ Once you have one piecewise function, you can set about making more. Assuming you allow division, you also have $$\frac{\sqrt{x^2}}{x}=\frac{|x|}{x}=\begin{cases}1&\text{if }\ x>0\\-1&\text{if }\ x<0\end{cases}$$ and hence $$\frac{\sqrt{x^2}+x}{2x}=\frac{|x|+x}{2x}=\begin{cases}1&\text{if }\ x>0\\0&\text{if }\ x<0\end{cases}.$$ Fiddling around with transformations of the above function, you can essentially make indicator functions of intervals, and from there you can get nearly any piecewise function at all. All this is to show that it will be very difficult, if not impossible, to formally describe the class of curves which are expressible by equations which you might consider "nice". However, if it's just the shape you're looking for, pretty much anything you could scribble down can be approximated by nice equations, as @Jim Conant points out in the comments.
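To see the back door in action, here is a tiny Python sketch of the construction above (the function names are mine): the step function built from $\sqrt{x^2}$ yields indicator functions of intervals.

from math import sqrt

def step(x):
    # (|x| + x) / (2x): equals 1 for x > 0 and 0 for x < 0 (undefined at 0),
    # yet is built only from "nice" operations.
    return (sqrt(x * x) + x) / (2 * x)

def indicator(a, b):
    # Indicator of the open interval (a, b), assembled from two steps.
    return lambda x: step(x - a) * step(b - x)

f = indicator(1.0, 2.0)
print(f(1.5), f(3.0))  # 1.0 0.0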
How to convert x versus v graph to v versus t
Let the equation of the $v$ vs. $x$ graph be $v=ax+b$. Then $$\dfrac{dx}{dt}=ax+b$$ $$\dfrac{dx}{ax+b}=dt$$ Integrate both sides: $$\int \dfrac{dx}{ax+b}=\int dt$$ $$\dfrac{1}{a}\ln(ax+b) = t + c$$ Exponentiating, $ax+b=e^{a(t+c)}$, so $v(t)=ax+b=e^{a(t+c)}$ is an exponential function of $t$. You probably know $a$ and $b$ from the graph; find $c$ from the initial condition [should be given].
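A quick symbolic check (sympy here purely as an illustration) that the general solution has this exponential form:

import sympy as sp

t, a, b = sp.symbols('t a b')
x = sp.Function('x')

sol = sp.dsolve(sp.Eq(x(t).diff(t), a * x(t) + b), x(t))
print(sol)  # equivalent to x(t) = C1*exp(a*t) - b/a, so v = a*x + b = a*C1*exp(a*t)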
Edges of what kind of graph may not be partitioned as triangles?
Hint: Show that if the edge set can be partitioned into triangles, then every vertex must have even degree. (This rules out exactly one of your examples, although it might still be the case that the edge set of another one cannot be partitioned into triangles.)
What projection (transformation) is it?
From each data point $(A_i,B_i,Z_i)$ find the nearest point on the $Z$ axis, which is $(0,0,Z_i).$ Construct the line between those two points. The point where that line intersects the plane $B = 1$ is $\left(\frac{A_i}{B_i}, 1, Z_i\right).$ You can eliminate the middle coordinate, which is always $1,$ and write the projected point as $\left(\frac{A_i}{B_i}, Z_i\right).$ The axes for those coordinates are simply copies of the $A$ and $Z$ axes, translated from the plane $B=0$ to the plane $B=1;$ specifically, you can use the lines $B=1,Z=0$ and $A=0,B=1$ as the new axes. This projection is a kind of hybrid of orthogonal and central projections onto the plane $B=1.$ In any plane containing the $Z$ axis, the projection takes points orthogonally onto a line parallel to the $Z$ axis. But in any plane parallel to the $A,B$ plane, the projection is a central projection through a point with coordinates $(0,0,Z).$
Why is $\sqrt{4} = 2$ and Not $\pm 2$?
It is correct that if $x^2 = 4$ then $x$ is either $2$ or $-2$. But by definition $\sqrt y$ (for a positive real number $y$) is the positive real number $z$ such that $z^2 = y$.
Is Quotient Spaces of Hausdorff also Hausdorff where the Quotient Spaces is T1?
Let $X$ be two copies of the real line, and let $X\to Y$ be the quotient map that identifies the two lines along the set $\{x\neq 0\}$. $X$ is Hausdorff, but $Y$, the "line with doubled origin", is locally Hausdorff and therefore $T_1$, but not Hausdorff. In fact, every topological space is the quotient of a Hausdorff space. This appears to have been proved by M. Shimrat in 1956, though there may be a very short argument. I think that something stronger should be true: every topological space $Y$ is a quotient of a Stonean space, which is a Hausdorff extremally disconnected space. I believe that it is possible to show this by embedding the complete lattice of opens in $Y$ into a complete Boolean algebra, though if $Y$ is not sober this may complicate the proof or weaken the conclusion.
Counting Binary Strings (No block decompositions)
We denote by a run of length $n$ a substring of $n$ consecutive, equal characters from a binary alphabet $\{0,1\}$. A maximum run of a string is a run of maximum length. An $n$-length zero-run is a run consisting of $n$ consecutive $0$s, and a one-run is specified accordingly. We derive a generating function for the number $a_n$ of strings of length $n$ having no odd maximum zero-run. To avoid ambiguities we follow OP's comment and allow odd maximum runs of $1$'s together with odd runs of $0$'s, as long as the odd zero-runs are shorter than the odd maximum run of $1$'s, which seems to be a somewhat more challenging problem. The string $$0111\color{blue}{000}11$$ has an odd maximum zero-run of length $3$, equal to the maximum one-run, and is not admissible. The string $$\color{blue}{0}111011$$ has an odd maximum zero-run of length $1$, less than the maximum one-run of length $3$, and is admissible. We derive the number of admissible words of length $n$ by counting those words which are not admissible. The number of wanted words is $2^n$ minus this quantity. We do so by deriving a generating function \begin{align*} G^{(=2k+1,\leq 2k+1)}(z)\qquad\qquad\qquad k\geq 0 \end{align*} The first component of the index refers to the zero-runs, the second component to the one-runs. So, $[z^n]G^{(=2k+1,\leq 2k+1)}(z)$ gives the number of words of length $n$ having maximum zero-run of length exactly $2k+1$ and maximum one-run of length at most $2k+1$. It follows that the number $a_n$ of admissible words of length $n$ is \begin{align*} a_n=2^n-[z^n]\sum_{k=0}^{\lfloor (n-1)/2\rfloor}G^{(=2k+1,\leq 2k+1)}(z)\qquad\qquad n\geq 0 \end{align*} We build the generating functions $G(z)$ from the generating function of Smirnov words, also called Carlitz words. These are words with no consecutive equal characters at all. (See example III.24, Smirnov words, from Analytic Combinatorics by Philippe Flajolet and Robert Sedgewick for more information.) A generating function for the number of Smirnov words over a binary alphabet is given by \begin{align*} \left(1-\frac{2z}{1+z}\right)^{-1} \end{align*} Replacing each occurrence of $0$ in a Smirnov word by a run of one up to $2k+1$ zeros generates words whose zero-runs have length at most $2k+1$.
\begin{align*} z\longrightarrow z+z^2+\cdots+z^{2k+1}=\frac{z\left(1-z^{2k+1}\right)}{1-z} \end{align*} The same can be done with the ones, and the resulting generating function is \begin{align*} G^{(\leq 2k+1,\leq 2k+1)}(z)=\left(1-2\cdot\frac{\frac{z\left(1-z^{2k+1}\right)}{1-z}}{1+\frac{z\left(1-z^{2k+1}\right)}{1-z}}\right)^{-1} &=\frac{1-z^{2k+2}}{1-2z+z^{2k+2}} \end{align*} Since we need a generating function counting words whose maximum zero-run has length exactly $2k+1$, we calculate \begin{align*} &G^{(=2k+1,\leq 2k+1)}(z)=G^{(\leq 2k+1,\leq 2k+1)}(z)-G^{(\leq 2k,\leq 2k+1)}(z)\\ &\qquad=\left(1-\frac{\frac{2z\left(1-z^{2k+1}\right)}{1-z}}{1+\frac{z\left(1-z^{2k+1}\right)}{1-z}}\right)^{-1} -\left(1-\frac{\frac{z\left(1-z^{2k}\right)}{1-z}}{1+\frac{z\left(1-z^{2k}\right)}{1-z}} -\frac{\frac{z\left(1-z^{2k+1}\right)}{1-z}}{1+\frac{z\left(1-z^{2k+1}\right)}{1-z}}\right)^{-1}\\ &\qquad=\frac{1-z^{2k+2}}{1-2z+z^{2k+2}}-\frac{(1-z^{2k+1})(1-z^{2k+2})}{1-2z+z^{2k+2}+z^{2k+3}-z^{4k+3}}\\ &\qquad=\frac{(1-z)^{2}(1-z^{2k+2})z^{2k+1}}{(1-2z+z^{2k+2})(1-2z+z^{2k+2}+z^{2k+3}-z^{4k+3})} \end{align*} We finally conclude: The number $a_n$ of admissible words of length $n$ is \begin{align*} a_n&=2^n-[z^n]\sum_{k=0}^{\lfloor (n-1)/2\rfloor}G^{(=2k+1,\leq 2k+1)}(z)\\ &=2^n-[z^n]\sum_{k=0}^{\lfloor (n-1)/2\rfloor} \frac{(1-z)^{2}(1-z^{2k+2})z^{2k+1}}{(1-2z+z^{2k+2})(1-2z+z^{2k+2}+z^{2k+3}-z^{4k+3})}\tag{1} \end{align*} The sequence $(a_n)$ starts with \begin{align*} (a_n)_{n\geq 1}=(1,2,5,12,24,\color{blue}{48},95,187,367,724,\ldots) \end{align*} Example: Let's calculate the number $a_6$ of admissible words of length $6$. According to (1) we obtain the following series expansions (with some help from Wolfram Alpha) \begin{align*} G^{(=1,\leq 1)}(z)&=\frac{(1-z)^2(1-z^2)z}{(1-2z+z^2)(1-2z+z^2)}=\frac{(1+z)z}{1-z}\\ &=z+2z^2+2z^3+2z^4+2z^5+\color{blue}{2}z^6+2z^7+2z^8+2z^9+2z^{10}+\cdots\\ G^{(=3,\leq 3)}(z)&=z^3+2z^4+5z^5+\color{blue}{12}z^6+25z^7+53z^8+109z^9+220z^{10}+\cdots\\ G^{(=5,\leq 5)}(z)&=z^5+\color{blue}{2}z^6+5z^7+12z^8+28z^9+64z^{10}+\cdots\\ \end{align*} We conclude the number $a_6$ is given by \begin{align*} a_6&=2^6-[z^6]\sum_{k=0}^2G^{(=2k+1,\leq 2k+1)}(z)\\ &=64-[z^6]\left(G^{(=1,\leq 1)}(z)+G^{(=3,\leq 3)}(z)+G^{(=5,\leq 5)}(z)\right)\\ &=64-\color{blue}{2}-\color{blue}{12}-\color{blue}{2}\\ &=48 \end{align*} The $64-a_6=16$ non-admissible words are \begin{array}{cccc} 1\color{blue}{0}1010&\color{blue}{0}10101\\ 001\color{blue}{000}&101\color{blue}{000}&011\color{blue}{000}&111\color{blue}{000}\\ \color{blue}{000}100&1\color{blue}{000}10&\color{blue}{000}110&01\color{blue}{000}1\\ 11\color{blue}{000}1&\color{blue}{000}101&1\color{blue}{000}11&\color{blue}{000}111\\ 1\color{blue}{00000}&\color{blue}{00000}1\\ \end{array}
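The closed formula can be cross-checked by brute force over all binary words; a short sketch (the admissibility test encodes the rule stated at the top: an odd maximum zero-run is forbidden unless it is strictly shorter than the maximum one-run):

from itertools import groupby, product

def admissible(word):
    runs = [(c, len(list(g))) for c, g in groupby(word)]
    zero = max((n for c, n in runs if c == '0'), default=0)
    one = max((n for c, n in runs if c == '1'), default=0)
    # forbidden: an odd zero-run that is a maximum run of the whole word
    return not (zero % 2 == 1 and zero >= one)

counts = [sum(admissible(''.join(w)) for w in product('01', repeat=n))
          for n in range(1, 11)]
print(counts)  # [1, 2, 5, 12, 24, 48, 95, 187, 367, 724]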
$\int \pi e^{\pi\overline{z}}\ dz$ over a square
There's an error in your $z_2$ integral. You're missing the ${\rm e}^{\pi}$ factor that came out of the integral after the third equals sign, $$ \int _{z_2} dz\;{\rm e}^{\pi \bar{z}} = -2 i {\rm e}^\pi $$
Are integral functions of continuous functions of bounded variation?
Yes. If $f$ is continuous on $[a,b]$: then $f$ is bounded, i.e., there is $M>0$ s.t. $|f(x)|\leq M$ for all $x\in [a,b]$. Therefore $$|F(t)-F(s)|\leq \int_s^t |f(x)|\,\mathrm d x\leq M|t-s|$$ for all $s\leq t$ in $[a,b]$, so $F$ is Lipschitz and therefore has bounded variation on $[a,b]$. If $f$ is only locally $L^1(a,b)$: set $$f^{+}(x)=\max\{f(x),0\}\quad \text{and}\quad f^-(x)=-\min\{f(x),0\};$$ then $f^+$ and $f^-$ are nonnegative. Therefore $$t\mapsto \int_0^tf^+(x)\,\mathrm d x\quad \text{and}\quad t\mapsto \int_0^tf^-(x)\,\mathrm d x$$ are increasing. Since $$F(t)=\int_0^t f^+(x)\,\mathrm d x-\int_0^t f^-(x)\,\mathrm d x$$ (because $f(x)=f^+(x)-f^-(x)$), you can write $F$ as a difference of two increasing functions, and thus it has bounded variation.
Help needed with differential equation which does not seem to fit any type
Note that the differential equation can be written as $$ \frac{dy}{dx}=(x+y)^2+2(x+y) $$ I would do $u = x+y$ which in turn will give us $$ \frac{du}{dx} = \frac{dy}{dx}+1 $$ Substituting in the original equation should give $$ \frac{du}{dx} -1 = u^2+2u \Rightarrow \frac{du}{dx}=1+2u+u^2 = (1+u)^2 $$ Can you complete it from here?
Sum of derivative of integrals: $f(x)=\left(\int\limits_0 ^{x} e^{-t^2}dt\right)^2$ and $g(x)=\int\limits_{0}^{1}\frac{e^{-x^2(t^2+1)}}{t^2+1}dt$
There was a minor error, almost but not quite a typo. When you substituted, "changing" $xt$ to $t$, there were two slips. I think the slips could have been avoided if you had made the substitution in slightly different language, letting $u=xt$. Then $du=(x)dt$, which absorbs the extra $x$ in the integral. And your $e^{t^2}$ should be, in my notation, $e^{-u^2}$ (this really was a typo). With these minor corrections, things work out fine. There must be a more conceptual way of doing it, though the computational approach you took is reasonable, and works quickly enough.
Is it possible that this angle is $45^\circ$?
$x$ is not $45^\circ$. Let $E$ be the point of the intersection of $\overline{AC}$ and $\overline{BD}$. If $x=45$, then $\angle DEC= 45^\circ$ and due to vertical angles, $\angle AEB=45^\circ$. If $\angle AEB=45^\circ$, then $\angle ABE=45^\circ$. But, if this were true, $\triangle ABC$ is a $45-45-90$ triangle. If this were true, then there would be no $\triangle DEC$, as points $B$ and $E$ would lie on the circumference of the circle.
Condition when inequality with numbers is true
$4\sqrt{\pi}(2m)^{mn} = 4\sqrt{\pi}\,2^{mn}m^{mn} \geq 4\sqrt{\pi}\,2^k m^{mn}$ since $k \leq n \leq mn$ and $m > 1$. Clearly, $4\sqrt{\pi}\,2^k m^{mn} > 2^k$ for all $k \in \mathbb{N}$. Are you sure that the conditions on $n$, $m$ and $k$ are correct?
Show that $f$ is continuous at exactly one point
Using the sequential criterion for continuity is fine, but I think you have to make clear why $\bigl(f(z_n)\bigr)$ converges for all $z_n \to 1$, even when not all $z_n$ are rational or all irrational. That almost follows from your argument (just look at the subsequences of the rational resp. irrational $z_n$), but you can expand on this a little: Let $z_n \to 1$ be any sequence. If all but finitely many $z_n$ are rational, then $f(z_n) \to f(1)$, as already shown (the finitely many other terms do not make a difference); the same holds if all but finitely many $z_n$ are irrational. Otherwise (that is, we have infinitely many rational and infinitely many irrational $z_n$), let $(z_{n_k})$ denote the subsequence of rational and $(z_{n'_k})$ the subsequence of irrational terms. Then $(z_{n_k})$ and $(z_{n'_k})$ together cover $(z_n)$, and $f(z_{n_k})$ and $f(z_{n'_k})$ have the same limit $f(1)$ (as shown); hence $f(z_n) \to f(1)$.
Null hypothesis with values seems way too simple for the marks given
It should be: $$t=\frac{\bar{x}-\mu}{s/\sqrt{n}}=\frac{47.1-48}{\sqrt{4.7}/\sqrt{12}}=-1.44.$$ $$t_{0.025;11}=2.2.$$ $$|t|<t_{0.025;11} \Rightarrow \text{Do not reject } H_0.$$ You must use the $t$-distribution (here $s^2=4.7$ is the sample variance) because $n=12$ is a small sample.
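For a quick numerical check of both the statistic and the critical value (scipy here just as an illustration):

import numpy as np
from scipy import stats

xbar, mu0, s2, n = 47.1, 48.0, 4.7, 12
t = (xbar - mu0) / np.sqrt(s2 / n)
t_crit = stats.t.ppf(1 - 0.025, df=n - 1)
print(round(t, 2), round(t_crit, 2))  # -1.44 2.2
print(abs(t) < t_crit)                # True -> do not reject H0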
Definition of partial derivatives from Rudin's PMA
You can do linear algebra in terms of $n$-tuples $(x_1,x_2,\ldots, x_n)$ and matrices $\bigl[a_{ik}\bigr]$, or in terms of "abstract" vectors ${\bf x}$ and linear maps $A$. Similarly with derivatives in a multivariate setting: You can consider component functions $f_i$ $(1\leq i\leq m)$ and their partial derivatives $f_{i.k}$, as defined in $(25)$, or you can talk about the derivative as a linear map between tangent spaces, and satisfying $${\bf f}({\bf p}+{\bf X})-{\bf f}({\bf p})=d{\bf f}({\bf p}).{\bf X}+o\bigl(|{\bf X}|\bigr)\qquad({\bf X}\to{\bf 0})\ .$$ What you are proposing is a chimerical mixed version, which certainly makes mathematical sense, but is only seldom used. At any rate, you would have $${\bf f}_{.k}({\bf p})=d{\bf f}({\bf p}).{\bf e}_k\ .$$ (The dots have the following significance: In $d{\bf f}({\bf p}).{\bf X}$ I write a dot to enhance that one should first compute the derivative $d{\bf f}({\bf p})$ and then apply this linear map to the vector ${\bf X}$. Some people write $d{\bf f}_{\bf p}\,{\bf X}$ instead. In $f_{i.k}$ the $i$ denotes the $i^{\rm th}$ component of ${\bf f}$, and $k$ denotes the number of the variable with respect to which one differentiates. This then allows to write ${\bf f}_{.k}$ with the exact meaning the OP had in mind in his question.)
Surface integral of sphere within a paraboloid in spherical coordinates
The limits for $\phi$ are wrong. Note that $\sqrt{3}>1$ and $\arccos(x)$ is defined in $[-1,1]$. Solving the equation $2cz+z^2=3c^2$ we get $z=c$ and $z=-3c$ (which is not acceptable because $x^2+y^2=2cz=-6c^2<0$). Hence $\phi$ goes from $0$ (north pole) to $\arccos(1/\sqrt{3})$. Therefore $$S=3c^2\int_{\theta=0}^{2\pi}\int_{\phi=0}^{\arccos(1/\sqrt{3})}\sin(\phi) d\phi d\theta=6\pi c^2[-\cos(\phi)]_0^{\arccos(1/\sqrt{3})}=2\pi c^2(3-\sqrt{3}).$$ P.S. Your result $4\sqrt 3\pi c^3$ seems to be a volume not a surface: $$S=\iint_{x^2+y^2\leq 2c^2}\frac{\sqrt{3}c}{\sqrt{3c^2-x^2-y^2}}dx dy= 2\pi\sqrt{3}c\int_{\rho=0}^{\sqrt{2}c}\frac{\rho}{\sqrt{3c^2-\rho^2}}d\rho=2\pi c^2(3-\sqrt{3})$$
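If you want to double-check the corrected bounds numerically, the spherical-coordinates integral is easy to evaluate with scipy (an illustrative sketch, taking $c=1$):

import numpy as np
from scipy.integrate import dblquad

c = 1.0
# integrand (sqrt(3)c)^2 sin(phi); phi from 0 to arccos(1/sqrt(3)), theta from 0 to 2pi
area, _ = dblquad(lambda phi, theta: 3 * c**2 * np.sin(phi),
                  0.0, 2 * np.pi,
                  lambda theta: 0.0,
                  lambda theta: np.arccos(1 / np.sqrt(3)))
print(area, 2 * np.pi * c**2 * (3 - np.sqrt(3)))  # both approx 7.9658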
Calculus Exponential Functions Again
Intuitively, when $x \to +\infty, e^x \to +\infty$, so the whole thing goes to zero. Can you make that more precise?
two married couples seating in a row
The simple approach is to note that the only way you can fail to have a couple together is if one couple is in seats $1,3$ and the other is in seats $2,4$. So have the first person sit down somewhere. To fail, that person's spouse must sit in the corresponding seat, which happens with probability $\frac 13$.
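A two-minute simulation confirms the resulting probability $1-\frac13=\frac23$ that at least one couple sits together:

import random

trials, together = 100_000, 0
for _ in range(trials):
    seats = ['A1', 'A2', 'B1', 'B2']  # members of couples A and B
    random.shuffle(seats)
    if any(seats[i][0] == seats[i + 1][0] for i in range(3)):
        together += 1
print(together / trials)  # close to 2/3 = 0.6667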
Proofs: Is it legitimate to modify each expression and show equivalence?
The proof is valid. To show that the two matrices are equal, what we need is just to show that their entries are equal to each other, which is what the proof has illustrated. In general, if you want to prove that $A=B$, we can show that $A=C$ and $B=C$ and conclude that $A=B$, since equality is an equivalence relation. Perhaps your teacher was talking about proving an "If $A$ then $B$" statement; in that case we can't assume that we have $B$, since that is what we are trying to prove.