Exponential growth with a growth rate that varies with time
If $f(t)$ grows exponentially, that is if the growth (or derivative) of $f$ at some point $t$ is proportional to $f(t)$, $f$ is described by the following linear differential equation with constant coefficients $$ f'(t) = \lambda f(t) $$ If you replace the constant $\lambda$ with a function $\lambda(t)$ you again get a linear differential equation, but with time-varying coefficients $$ f'(t) = \lambda(t) f(t) $$ Equations of that type can always be solved by separation of variables ( http://en.wikipedia.org/wiki/Separation_of_variables ). If you set $y = f(t)$, you get $$ \frac{dy}{dt} = \lambda(t) y $$ Then you throw all caution to the wind, forget that you're not supposed to handle $dt$ and $dy$ as if they were actual quantities, and get $$ \begin{eqnarray*} \frac{1}{y} dy &=& \lambda(t) dt &\implies\\ \int \frac{1}{y} dy &=& \int \lambda(t) dt &\implies\\ \ln f(t) &=& \int \lambda(t) dt &\implies\\ f(t) &=& e^{\int \lambda(t) dt} \end{eqnarray*} $$ By the chain rule, $f'(t) = \lambda(t) e^{\int \lambda(t) dt}$, i.e. $f$ is actually a solution of the original differential equation. Since you can pick any antiderivative of $\lambda(t)$, i.e. add an arbitrary constant to $\int \lambda(t) dt$ (which just multiplies the solution by a constant), you might as well pick $\int_0^t \lambda(s) ds$ and fix the constant by the initial value, which yields the general solution $$ f(t) = f(0)e^{\int_0^t \lambda(s) ds} $$
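As a quick sanity check of that formula (a minimal sketch, not part of the original argument: the rate $\lambda(t)=\cos t$, the initial value $f(0)=2$ and the time grid are all arbitrary choices), one can compare a numerical solution of $f'=\lambda(t)f$ with $f(0)e^{\int_0^t\lambda(s)\,ds}=f(0)e^{\sin t}$:

    import numpy as np
    from scipy.integrate import solve_ivp

    f0 = 2.0                                         # arbitrary initial value f(0)
    sol = solve_ivp(lambda t, f: np.cos(t) * f,      # f'(t) = lambda(t) f(t) with lambda(t) = cos(t)
                    (0, 5), [f0], dense_output=True, rtol=1e-8)

    t = np.linspace(0, 5, 11)
    closed_form = f0 * np.exp(np.sin(t))             # f(0) * exp( integral_0^t cos(s) ds )
    print(np.max(np.abs(sol.sol(t)[0] - closed_form)))   # should be tiny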
Partition of Unity in Spivak's Calculus on Manifolds
I was looking back at my question today for some reason and immediately saw why the function $f$ is required. Although $\psi_i$ is smooth with compact support in $U_i$, the functions $\varphi_i$ can only be defined on $U$ where $\sum_{i = 1}^n\psi_i > 0$. The problem is that $\varphi_i$ usually does not go to zero at the boundary of $\operatorname{supp}(\psi_i)$ (much less smoothly extend to zero outside the boundary). You can see this near the boundary of a $\operatorname{supp}(\psi_i)$ which is away from all other $\operatorname{supp}(\psi_j)$. Near this boundary $\varphi_i(x) = \frac{\psi_i(x)}{\psi_i(x)} = 1$. The solution is to use a cutoff function $f$ which forces everything to smoothly go to $0$ near the boundary of $U$.
Find all prime triples $(a,b,c)$ such $a+1,b+1,c+1$ form a geometric sequence
You're looking for all triplets of prime numbers $(p,(p+1)k-1,(p+1)k^2-1 )$, with $k>1$ and $(p+1)k^2-1<100$ so $$1<k<\sqrt{\frac{101}{p+1}}$$ As $k$ is a rational number let $$k=\frac{a}{b}$$ $$b<a<b\sqrt{\frac{101}{p+1}}$$ And $b^2$ must divide $p+1$, so we can consider $b$ as the largest integer such that $b^2$ divides $p+1$ and find possible values of $a$. $$\begin{array}{c|c|c} p&b&a\\\hline 2&1&2,3,4,5\\ 3&2&3,4,5,6,7,8,9,10\\ 5&1&2,3,4\\ 7&2& 3,4,5,6,7 \\ 11&2&3,4,5\\ 13&1&2\\ 17&3&4,5,6,7\\ 19,23&2&3,4\\ 29,37,41,61,73&1&\\ 31&4&5,6,7\\ 43&2&3\\ 47&4&5\\ 53&3&4\\ 59,67,83&2&\\ 71&6&7\\ 79&4&\\ 89&3&\\ 97&7&\\ \end{array} $$ So only 39 cases to try.
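Since there are so few cases, a direct brute-force check is also easy (a minimal sketch; it simply enumerates all increasing prime triples below 100 with $a+1,b+1,c+1$ geometric, i.e. the $k>1$ case considered above, rather than going through the table):

    def is_prime(m):
        return m > 1 and all(m % d for d in range(2, int(m**0.5) + 1))

    primes = [p for p in range(2, 100) if is_prime(p)]
    triples = [(a, b, c) for a in primes for b in primes for c in primes
               if a < b < c and (b + 1)**2 == (a + 1)*(c + 1)]   # a+1, b+1, c+1 geometric
    print(triples)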
About functions of bounded variation
First, let's state the definition given in the article by Weston J.D., Functions of Bounded Variation in Topological Vector Spaces, The Quarterly Journal of Mathematics, Vol 8, Issue 1, 1957, pp. 108-111: Let $[a,b]$ be a real interval, and let $S$ be a finite succession of nonoverlapping closed subintervals $[a_i,b_i]$, $i=1,\dots, m$. Given $g:[a,b]\to X$, let $$V_S(g)=\sum_{i=1}^m (g(b_i)-g(a_i)) \tag1$$ and let $V(g)$ be the set of all points $V_S(g)$, for all possible choices of $S$. If $V(g)$ is bounded, $g$ is said to be of bounded variation. The key point here is that $[a_i,b_i]$ do not have to be a partition of $[a,b]$: we can leave gaps between them. For example, if $g$ is real-valued, we would take only the intervals on which $g$ increases, or only those on which it decreases. It would not make sense to include intervals of both kinds, since it would decrease $V_S(g)$, creating cancellation in (1). Now, you ask about the standard definition of bounded variation if the space X is Banach Well, is there the standard definition when $X$ is a Banach space? In Functional Analysis and Semi-Groups by E. Hille we find 3 definitions (see 3.2.4 on page 59): $g:[a,b]\to X$ is of weak bounded variation if $x^*\circ g$ is of bounded variation (in the usual, real-variable sense) for every linear functional $x^*\in X^*$. $g$ is of bounded variation if $V(g)$, as defined above, is bounded. This is the same definition that Weston gives. $g$ is of strong bounded variation is $\sup_i \|g(\alpha_i)-g(\alpha_{i-1})\|<\infty$ where the supremum is over all partitions of $[a,b]$. This is probably what most people would think of as the standard definition, same as for maps into any metric space. According to Hille, the following results are due to Dunford and Gelfand: 1 and 2 are equivalent 3 implies 2, but not conversely The latter is shown by an example. Take $X=L^\infty[0,1]$ and $g(t)=\chi_{[0,t]}$ for $t\in [0,1]$. Since $\|g(t)-g(s)\|_{L^\infty}=1$ whenever $t\ne s$, it's clear that $g$ is not of strong bounded variation. On the other hand, for any $0\le t<s\le 1$ the difference $g_s-g_t$ is $\chi_{[s,t]}$; summing such differences over any collection of nonoverlapping intervals we get a function of norm $1$. Thus, $g$ is of bounded variation.
Intersection of two lines, computing the t-value
Hint: Your first line has points of the form $((1-t)x_1 + tx_2, (1-t)y_1 + ty_2)$, so substitute this into the equation for the second line $y=mx+ b$ to get $$ (1-t)y_1 + ty_2 = m((1-t)x_1 + tx_2)+b$$ which is a linear equation which you can solve for $t$ and then use that value of $t$ in the first expression to find the point of intersection.
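Solving that linear equation explicitly (just expanding the hint; assuming the lines are not parallel, so the denominator is nonzero) gives $$t=\frac{mx_1+b-y_1}{(y_2-y_1)-m(x_2-x_1)}.$$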
A representation is semisimple if its restriction to a subgroup of index prime to Char(F) is semisimple
(Expanded form of the answers from the comments.) A module is semisimple iff every submodule has a direct complement. Aschbacher's Finite Group Theory on pages 39-40 proves that if U is a sub FG-module of V and U has an FP-module direct complement, then it has an FG-module direct complement where P is a Sylow p-subgroup of G. The argument is an averaging argument as described. Since H has index coprime to p, H contains a Sylow p-subgroup P of G. Every semisimple FH-module is also semisimple as an FP-module (or just replace n in Aschbacher's proof with $[G:P]$ and allow $P=H$ to be any subgroup of index coprime to p), and so Aschbacher's result answers your question. The same proof is phrased in more complicated language on pages 70, 71, and 72 of Benson's Representations and Cohomology Part 1. A module is called relatively H-projective if G-module homomorphisms that split as H-module homomorphisms also split as G-module homomorphisms. In other words, you just want to show that every G-module is relatively H-projective when $[G:H]$ is invertible in the ring, which is Corollary 3.6.9.
Which mathematical topics should an applied math major know to be employable in industry?
As Najib Idrissi mentions in a comment to your question, you should see appropriate career advisors at your university to help you with your particular situation. That said, you may find it helpful to take a look on the Society for Industrial and Applied Mathematics (SIAM) website, especially: http://www.siam.org/careers/thinking.php Here is a relevant quote from http://www.siam.org/careers/thinking/solve.php: Part of the preparation for your future is obtaining a solid foundation in mathematical and computational knowledge—tools like differential equations, probability, combinatorics, applied algebra, and matrices, as well as the art of abstraction and advanced computing and programming skills. Preparation for a career in applied mathematics and computational science also involves being able to apply these skills to real-life problems, and achieving practical results. See also http://www.siam.org/reports/mii/2012/index.php for the SIAM Report on Mathematics in Industry (MII 2012). There is a brief summary of the report at http://sinews.siam.org/DetailsPage/tabid/607/ArticleID/181/IMA-To-Host-Workshop-on-Careers-in-Industry.aspx. In particular: The 2012 report documents the technical knowledge and skills needed to succeed in industry. Requirements include mastery of the core areas of the mathematical sciences, with depth in one area and enough breadth that it is possible to quickly come up to speed in another. Also important are a sufficient grasp of an application discipline relevant to the prospective employer and proficiency in a programming language. As to the required soft skills, communication and teamwork were mentioned in the first report; the new report also cites the need for enthusiasm, self-direction, the ability to complete projects, and a sense of the business. It is clear that no academic program can, by itself, provide all of these requirements. (The "first report" above refers to the 1996 SIAM Report on Mathematics in Industry: http://www.siam.org/reports/mii/1996/report.php.)
Derivation of Dual Curve for Parametric Equations
Let $$ Xx + Yy + 1 = 0 \tag{1} $$ be the equation of a line in line coordinates. A point $(x)$ and line $[X]$ (in the notation of Coxeter) are said to be incident in projective geometry if their inner product is zero: $$ \{xX\} = 0. $$ The tangent to our parametric equation at point $t_0$ is given by $<x'(t_0),y'(t_0)>$. Hence, the point $(x')$ is incident on our line if $$ \{x'X\} = 0. $$ That is, $$ Xx' + Yy' = 0 \tag{2} $$ Thus, we have two equations in two unknowns ($(1)$ and $(2)$), which via Cramer's rule yield $$ X=\frac{-y'}{xy'-yx'}, Y=\frac{x'}{xy'-yx'}. $$ Q.E.D.
Proof of identity involving binomial coefficient
\begin{align} \binom{-n}{k} &= \frac{(-n - k + 1)(-n - k + 2)\cdots (-n)}{k!}\\ & = (-1)^k \frac{(n + k - 1)(n + k - 2)\cdots (n)}{k!} \\ & = (-1)^k \binom{n+k-1}{k}. \end{align}
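For instance, with $n=2$ and $k=3$ the identity reads $$\binom{-2}{3}=\frac{(-4)(-3)(-2)}{3!}=-4=(-1)^3\binom{4}{3},$$ as expected.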
Transform $\int \frac{x^2}{x - 2}$ to $ \int x +\frac{4}{x-2} + 2 $
You'd need to do polynomial division. Carrying it out gives $$\frac{x^2}{x-2}=x+2+\frac{4}{x-2}$$ Note: polynomial division uses the same principle as regular division when you have a remainder.
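With that division in hand the integral follows directly (adding the usual constant of integration; valid for $x\neq 2$): $$\int\frac{x^2}{x-2}\,dx=\int\left(x+2+\frac{4}{x-2}\right)dx=\frac{x^2}{2}+2x+4\ln|x-2|+C.$$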
Determining Isomorphisms and Identifying Structural Properties
One crucial way to distinguish groups from one another is looking at the orders of elements. If one group has more elements of a certain order than another group, they cannot be isomorphic. There is no general easy way to prove groups are isomorphic. Perhaps you know the Chinese Remainder Theorem?
Coordinate dependence of the volume of parallelotope
What makes the standard basis special is that it's one in which the measure was defined. It's not SO special, however: bases that differ from it by an orthogonal change-of-basis matrix work fine, too. And to say things "should not be coordinate dependent" is a little odd -- is this a moral imperative, or just a principle based on "some other things behave this way, and I like it"? Sometimes coordinates matter. In geometry, the measure of curvature is coordinate independent, but one of the best ways to compute it involves the Christoffel symbols, which don't even transform nicely ("tensorially") wrt coordinates. I think that if you ask this question not for $R^n$, but for $R$, where things are really pretty clear, you'll see that coordinate dependence of "size" (in this case, length) isn't so crazy -- it amounts to the chain rule.
Find a method of moments estimator of the survival function $S(t)=P(X>t)$ for a given $t>\mu$
$S(t) = P(X>t) = E[1_{X>t}]$. This is the first population moment. Thus: set $S(t)$ estimate = sample mean of $1_{X_i > t}$.
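In code the estimator is just the empirical proportion of observations exceeding $t$; here is a minimal sketch with made-up exponential data of mean $2$ (so the true value is $e^{-t/2}$), purely for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.exponential(scale=2.0, size=10000)   # hypothetical sample X_1, ..., X_n
    t = 3.0
    S_hat = np.mean(x > t)                       # sample mean of the indicators 1_{X_i > t}
    print(S_hat, np.exp(-t / 2.0))               # estimate vs. true P(X > t)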
Find all solutions to $a+b+2c-d=3$
If you just want all real solutions ($a,b,c,d\in \mathbb{R}$) to $$a+b+2c-d=3$$ then just solve the equation for any one of the variables: $$d=a+b+2c-3.$$ The set of solutions is $$\{(a,b,c,a+b+2c-3)\ | \ a,b,c \in\mathbb{R} \}$$
Expectation of product of independent random variables
If two random variables $X,Y$ have a joint distribution then they are independent if and only if the corresponding CDF's satisfy: $$F_{X,Y}(x,y)=F_X(x)F_Y(y)\tag1$$ Here $(1)$ is a necessary but also sufficient condition for:$$\mathsf P_{X,Y}=\mathsf P_X\times \mathsf P_Y$$where $\mathsf P_{X,Y}$ denotes the probability on $(\mathbb R^2,\mathcal B^2)$ induced by $(X,Y):\Omega\to\mathbb R$ and $\mathsf P_X,\mathsf P_Y$ denote the probabilities on $(\mathbb R,\mathcal B)$ induced by $X:\Omega\to\mathbb R$ and $Y:\Omega\to\mathbb R$. Then under suitable conditions: $$\mathsf EXY=\int xydF_{X,Y}(x,y)=\int\int xydF_X(x)dF_Y(y)=\int xdF_X(x)\int ydF_Y(y)=\mathsf EX\mathsf EY$$
Convergence in $L_{\infty}$ norm implies uniform convergence
You were not very specific about your hypotheses - I assume you're working on $\mathbb{R}^{n}$ with Lebesgue measure. Suppose there exists a point $x$ such that $|f_{n}(x) - f(x)| > \varepsilon$. Then there exists an open set $U$ containing $x$ such that for all $y \in U$ you have $|f_{n}(y) - f(y)| > \varepsilon$ by continuity. But this contradicts the a.e. statement you gave. In case you don't know yet that $f$ is continuous (or rather: has a continuous representative in $L^\infty$), a similar argument shows that $(f_{n})$ is a uniform Cauchy sequence (with the sup-norm, not only the essential sup-norm), hence $f$ will be continuous (in fact, uniformly continuous). Note that I haven't used compact support at all, just continuity. If you're working in a more general setting (like a locally compact space), you'd have to require that the measure gives positive mass to each non-empty open set. Finally, note that $f$ need not have compact support. It will only have the property that it will be arbitrarily small outside some compact set ("vanish at infinity" is the technical term). For instance, $\frac{1}{1+|x|}$ can easily be uniformly approximated by functions with compact support.
Freeness of $R^I$ ($R$ ring and $I$ infinite)
I presume $R^{(I)}$ denotes the direct sum of copies of $R$ indexed by $I$, while $R^I$ denotes the direct product. Of course when $R$ is a field, then $R^I$ is free. It is well-known that $\Bbb Z^I$ is not free over $\Bbb Z$ for $I$ infinite though. When $R$ is a slender ring then the natural pairing $R^I\otimes R^{(I)}\to R$ is a perfect pairing, and I think this implies that for $I$ infinite then $R^I$ isn't free. For example, $\Bbb Z$ is a slender ring.
Sigma and Series involving factorials
Please note that $$16n^2+20n+7=(4n+2)(4n+1)+2(4n+2)+1$$ So, $$\frac{16n^2 +20n +7}{(4n+2)!}=\frac{1}{(4n)!}+\frac{2}{(4n+1)!}+\frac{1}{(4n+2)!}$$$$=\frac{1}{(4n)!}+\frac{1}{(4n+1)!}+\frac{1}{(4n+2)!}+\frac{1}{(4n+3)!}+\left(\frac{1}{(4n+1)!}-\frac{1}{(4n+3)!}\right)$$ Therefore, $$\sum_{n=0}^{\infty} \frac{16n^2 +20n +7}{(4n+2)!}=\left(\sum_{n=0}^{\infty} \frac{1}{n!}\right)+\left(\frac{1}{1!}-\frac{1}{3!}+\frac{1}{5!}-+\ldots \right)$$ $$=e+\sin1$$
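A quick numerical check of that closed form (a minimal sketch; a handful of terms already suffices because of the factorials):

    from math import factorial, e, sin

    s = sum((16*n**2 + 20*n + 7) / factorial(4*n + 2) for n in range(10))
    print(s, e + sin(1))   # the two values agree to many decimal places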
Is it that $\frac{p}{2-p}(n-1)^{(p-2)/p} \leq Cy^{p-2}$, $ p \in (1,2), y \in [(n-1)^{1/p}, n^{1/p}]$?
Your question: Does there exist $C > 0$ such that $$\forall\,y \in [(n-1)^{1/p}, n^{1/p}], \frac{p}{2-p}(n-1)^{(p-2)/p} \leq Cy^{p-2}?$$ To make things easier, let's move $y^{p-2}$ to the LHS, so that every quantity in the inequality is in the form of $\pm\underbrace{(\dots)}_{\text{positive}}$. $$\frac{p}{2-p} \frac{(n-1)^{-(2-p)/p}}{y^{-(2-p)}} = \frac{p}{2-p} \frac{y^{2-p}}{(n-1)^{(2-p)/p}} \le \frac{p}{2-p} \underbrace{\frac{n^{(2-p)/p}}{(n-1)^{(2-p)/p}}}_{\to 1}$$ Since a convergent sequence is bounded, simply take $C$ to be any number larger than or equal to $\frac{p}{2-p}\sup_n (n/(n-1))^{(2-p)/p}$.
What is the value of $ \sum\limits_{k=0}^{n-1}\binom {n-k-1}{j-1} \binom {r+k}{j+k}$?
$$ \begin{align} \sum_{k=0}^{n-1}\binom{n-k-1}{j-1}\binom{r+k}{j+k} &=\sum_{k=0}^{n-1}\binom{n-k-1}{j-1}\binom{r+k}{r-j} \end{align} $$ which looks like $\binom{n+r}{r}$, but we are missing some $-j\le k\lt0$. For example, let $n=6$, $j=3$, and $r=4$ $$ \begin{align} \sum_{k=0}^{n-j}\binom{n-k-1}{j-1}\binom{r+k}{r-j} &=\overbrace{\binom{5}{2}\binom{4}{1}}^{k=0}+\overbrace{\binom{4}{2}\binom{5}{1}}^{k=1}+\overbrace{\binom{3}{2}\binom{6}{1}}^{k=2}+\overbrace{\binom{2}{2}\binom{7}{1}}^{k=3}\\ &=95 \end{align} $$ But $\binom{10}{4}=210$. Where is the missing $115$? $$ \begin{align} \sum_{k=-j}^{-1}\binom{n-k-1}{j-1}\binom{r+k}{r-j} &=\overbrace{\binom{8}{2}\binom{1}{1}}^{k=-3}+\overbrace{\binom{7}{2}\binom{2}{1}}^{k=-2}+\overbrace{\binom{6}{2}\binom{3}{1}}^{k=-1}\\ &=115 \end{align} $$
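The same bookkeeping is easy to verify in code (a minimal sketch for the $n=6$, $j=3$, $r=4$ example above):

    from math import comb

    n, j, r = 6, 3, 4
    partial = sum(comb(n - k - 1, j - 1) * comb(r + k, r - j) for k in range(0, n - j + 1))
    missing = sum(comb(n - k - 1, j - 1) * comb(r + k, r - j) for k in range(-j, 0))
    print(partial, missing, comb(n + r, r))   # 95 115 210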
double sum formulation
We can write for instance: Let $\left(S_j\right)_{1\leq j \leq 4}=(4,1,2,3)$. We consider \begin{align*} \sum_{j=1}^4\sum_{k=1}^{S_j} (-1)^{j+k}l_{j,k} \end{align*} ... The $4$ solid vertices partition the graph into $4$ sections $\left(S_j\right)_{1\leq j \leq 4}$ with lengths $(4,1,2,3)$ summing up to a total of $10$. It is convenient to use double indices for the length of each subsection $l_{j,k}$ with $1\leq j\leq 4,1\leq k\leq S_j$.
Domain of the function and its simplified expression
$f(x)=\frac{x+3}{x^2-9}=\frac{x+3}{(x+3)(x-3)}=\frac{1}{x-3}$ For the original expression $x$ can be neither $3$ nor $-3$, because either value makes the denominator $(x+3)(x-3)$ equal to $0$ and division by $0$ is not defined; any other number substituted for $x$ will give you a real value, so the domain of the function is $(-\infty,-3)\cup(-3,3)\cup(3,\infty)$. (The simplified expression $\frac{1}{x-3}$ on its own only excludes $x=3$, which is why the domain must be read off before cancelling the common factor.) $f(x)=\frac{-x}{3-\sqrt{x+9}}$, here $(x+9)$ must be $\geq 0$ so $x$ can assume values $[-9,\infty)...(I)$, but since $3-\sqrt{x+9}$ is in the denominator, $3-\sqrt{x+9}\neq 0 \Rightarrow 3\neq\sqrt{x+9}\Rightarrow 9\neq x+9 \Rightarrow x\neq 0...(II)$, combining $(I), (II)$ the domain is $[-9,0)\cup(0,\infty)$ $f(x)=\frac{|x+2|}{3x-6}$ when $(x+2)$ is $\geq 0$ then $f(x)=\frac{x+2}{3(x-2)}$ when $(x+2)$ is $< 0$ then $f(x)=\frac{-(x+2)}{3(x-2)}$ find the domain for the above two cases (remembering that $x\neq 2$) and then combine them
How to calculate $\int \frac{x}{\sqrt x -2}dx$
Let $t=\sqrt x$ so $x=t^2$ and $dx=2tdt$ hence $$\displaystyle \int \frac{x}{\sqrt x -2}dx=2\int\frac{t^3}{t-2}dt$$ Can you take it from here?
Combinatorics Problem: Number of lists $L(N,T)$ of length $N$ and typing time $T$.
HINT: I would say that it would be the generalised formula of $(a)$, that is $$ T=\sum_{i=0}^{9}n_it_{d_i},\qquad \text{where}\quad \sum_{j=0}^{9}n_j=N. $$ So $n_j$ stands for the number of times you typed the digit $j$. EDIT: The solution for the question $(a)$ is the same $$ T=\sum_{i=0}^{2}n_it_{d_i},\qquad \text{where}\quad \sum_{j=0}^{2}n_j=N. $$
While solving an equation I get 2 answers, but when I substitute one of the answers the equality doesn't hold?
For your equation you must demand $$x\geq -1$$ and $$x\geq -\frac{3}{2}$$ and $$x\geq -\frac{1}{8}$$ because of the existence of the square roots in your equation. After squaring once we obtain $$2\sqrt{x+1}\sqrt{2x+3}=5x-3$$ and you can only square again if $$x\geq \frac{3}{5}$$
if $f$ be continuous and one to one on $D$
No, $D$ doesn't have to be a compact set. Pick $$f:\mathbb R\setminus\{0\} \to \mathbb R\setminus\{0\}, \quad f(x)=\frac 1x$$ which is continuous, one-to-one and it's also its own inverse.
Gaussian integral with a shift in the complex plane
I saw this used once. Basically, we show that the value of the integral doesn't change with respect to $a$. It requires differentiation under the integral. $$ I(a) = \int_{-\infty}^{\infty} \exp(-(x+ia)^2) dx $$ \begin{align*} \frac{dI}{da} &= \int_{-\infty}^{\infty} \frac{d}{da} \exp(-(x+ia)^2) dx \\\\ &= \int_{-\infty}^{\infty} -2i(x+ia)\exp(-(x+ia)^2) dx \\ &= i \int_{-\infty}^{\infty} \frac{d}{dx} \exp(-(x+ia)^2) dx \\ &= i \exp(-(x+ia)^2) |_{x=\pm \infty} = 0 \end{align*}
Flea hopping on triangle combinatorial solution
You can set up a recursion for the situation. Let $p_n$ be the probability that the flea is on the starting vertex after $n$ hops. We have $p_0=1$ and $p_k=\frac12 (1-p_{k-1})$. The reasoning is that if on step $k-1$ the flea is on the starting vertex, then the flea can't be there on the $k$th hop; while if on step $k-1$ the flea is not on the starting vertex, it has a $.5$ probability of getting there on the next step. You can then use induction to establish your result; a small numerical check is sketched below.
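For what it's worth, iterating the recursion and solving it give the closed form $p_n=\frac13+\frac23\left(-\frac12\right)^n$; here is a minimal sketch comparing the two (the number of hops shown is arbitrary):

    p = 1.0                               # p_0 = 1
    for n in range(11):
        closed = 1/3 + (2/3) * (-0.5)**n  # closed form for p_n
        print(n, p, closed)               # the two columns agree
        p = 0.5 * (1 - p)                 # p_{n+1} = (1 - p_n)/2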
Relation chi-square and student-t distribution
For induction you need to show that $f_{n+1} = f_1 * f_n$ where $*$ is the convolution. As for the student-$t$ distribution write: $$P(\frac{X_0}{\sqrt{\frac{1}{n}\sum_{i=1}^nX_i ^2}}\leq t)=\mathbb{E}\left[\Phi\left(t\sqrt{\frac{1}{n}\sum_{i=1}^nX_i ^2}\right)\right],$$ where $\Phi$ is the normal CDF. Take the derivative with respect to $t$ and you'll have $$f(t)=\mathbb{E}\left[\phi\left(t\sqrt{\frac{1}{n}\sum_{i=1}^nX_i ^2}\right)\sqrt{\frac{1}{n}\sum_{i=1}^nX_i ^2}\right]\\ =\frac{1}{\left(2\pi\right)^{n/2}}\int_\mathbb{R^n}\phi\left(\sqrt{t^2+n}\sqrt{\frac{1}{n}\sum_{i=1}^nX_i ^2}\right)\sqrt{\frac{1}{n}\sum_{i=1}^nX_i ^2}\mathrm{d}X_1\ldots\mathrm{d}X_n\\=\frac{1}{\left(2\pi\right)^{n/2}}\times\frac{1}{\left(1+t^2/n\right)^\frac{n+1}{2}}\int_\mathbb{R^n}\phi\left(\sqrt{\sum_{i=1}^n\hat{X}_i ^2}\right)\sqrt{\sum_{i=1}^n\hat{X}_i ^2}\mathrm{d}\hat{X}_1\ldots\mathrm{d}\hat{X}_n,$$ with $\phi(z)=\exp(-z^2/2)/\sqrt{2\pi}$ being the normal PDF. The last integral is independent of $t$ so you got what you needed. (The value of the integral is the mean of a $\chi$-distributed random variable of order $n$.)
Good book on Bolza problem and Hamilton systems
I found a book that seems pretty good. Will confirm quality once I use it a little. "The Calculus of Variations" - Bruce van Brunt Also recommended "Stochastic Differential Equations: An Introduction with Applications" - Bernt Oksendal
Variant of dominated convergence theorem, does it follow that $\int f_n \to \int f$?
First observe that $|f|\leq g$ since $|f_n|\leq g_n$ for all $n$ and $f_n\to f,g_n\to g$ almost everywhere. Therefore the function $$ h_n=g+g_n-|f-f_n| $$ is non-negative, so we can apply Fatou's lemma to conclude that $$ \int \liminf_{n\to\infty}h_n\leq \liminf_{n\to\infty}\int h_n$$ Since $h_n\to 2g$ almost everywhere, the left-hand side is $\int 2g$, while the right-hand side equals $\int g+\lim_{n\to\infty}\int g_n-\limsup_{n\to\infty}\int |f-f_n|$. Using the fact that $\int g_n\to \int g<\infty$, it follows that $$ \limsup_{n\to\infty}\int |f_n-f|=0 $$ and since $$ \Big|\int f_n-\int f\Big|\leq \int |f-f_n| $$ this shows that $\int f_n\to\int f$.
Are $\prod_{i\in I,j\in J}X_{ij}$ and $\prod_{i\in I}\prod_{j\in J}X_{ij}$ homeomorphic?
Set $X = \cup\left\{X_{ij}: i \in I, j \in J \right\}$. Then first set in your question equals (as a set, by the definition of Cartesian products): $$ \prod_{i \in I,j \in J} X_{ij} = \left\{ f: I \times J \rightarrow X: \forall_{i \in I}\,\forall_{j \in J}\, f(i,j) \in X_{ij} \right\}\mbox{,}$$ while if we set (for $i \in I$): $$X_i = \prod_{j \in J} X_{ij} = \left\{ f: J \rightarrow \cup_{i \in I} X_{ij} : \forall_{j \in J}\,f(j) \in X_{ij} \right\}\mbox{,}$$ the second set equals $$\left\{f: I \rightarrow \cup_{i \in I} X_i : \forall_{i \in I}\,f(i) \in X_i \right\}\mbox{.}$$ Now, in the last definition we thus have that each $f(i)$ is itself a function from $J$ to $\cup_{i \in I} X_{ij}$ such that $f(i)(j) \in X_{ij}$, for all pairs $(i,j)$ from $I \times J$. So if we have a member $f$ of the first set, set $H(f)$ in the other set as the function defined by $(H(f)(i))(j) = f(i,j)$: $H(f)$ is a function defined on $I$, so we have to define $H(f)(i)$ for all $i$, but these themselves are functions from $J$, so we need to define all $(H(f)(i))(j)$ values. One checks easily that the functions have the right co-domains and satisfy the conditions to be in $X_i$ and the second set. This is the required bijection (the inverse is similarly defined, I leave that to the reader). Now use that function into a product is continuous iff all compositions with the projections of the product are continuous. This allows one to easily show that $H$ and its inverse are both continuous.
What is the inner product appearing in front of the integral?
The footnote on page 25 states that: "We use $g(\cdot, \cdot)$ and $\langle \cdot, \cdot\rangle$ interchangeably, although with the latter, it is easier to forget any $t$-dependence that $g$ might have."
Show that $\ker(C)=\{0\}$ - what is wrong with this argument?
There are precisely two errors in your reasoning (each of which was separately observed in the comments on your question): The first error is that the passage from $$\int_0^1\int_0^ty(\tau)d\tau dt=-\int_0^1y(t)dt $$ to $$\int_0^ty(\tau)d\tau=-\int_0^1y(t)dt$$ is completely unjustified. (Why did you drop the $\int_0^1 ... dt$ on the left? Are you assuming the integrand ($\int_0^ty(\tau)d\tau$) is constant with respect to the variable $t$? It is not.) The second error is that $$\int_0^1y(\tau)d\tau=-\int_0^1y(t)dt$$ only implies that $\int_0^1y(t)dt = 0$; it does not follow that $y$ is itself constantly $0$. All the rest of your reasoning would have been sound, but each of these steps is unsupported and in fact fallacious.
Caro-Wei Theorem Proof
The characteristic function you’re thinking of is $\mathbb{1}_S$, defined as $\mathbb{1}_S(x)=1$ when $x\in S$ and $\mathbb{1}_S(x)=0$ when $x\notin S$. In the proof you reproduced, don’t think of $1_{\sigma\in A_i}$ as a characteristic function; it’s just a number that depends on $i$. It’s the number $1$ if $\sigma\in A_i$, and it’s $0$ if $\sigma\notin A_i$. So $X(\sigma)$ (defined as $X(\sigma)=\sum_{i=1}^n 1_{\sigma\in A_i}$) is the number of values of $i$ between $1$ and $n$ for which $\sigma\in A_i$, or, using the definition of $A_i$ (and the unmentioned assumption that the vertex set of the graph is $\{1,\dots,n\}$), it’s the number of vertices $i$ in the graph $G$ for which $\sigma(i)<\sigma(j)$ for some vertex $j$ that is a neighbor of $i$ in $G$.
Is there such an equivalence relation existing on the set of natural numbers $\mathbb N$?
Sure. Consider this relation: $m\sim n$ when one of these conditions occurs: $m,n\in\{1\}$; $m,n\in\{2,3\}$; $m,n\in\{4,5,6\}$; $m,n\in\{7,8,9,10\}$; $\vdots$
How is the dual cone of a subspace its orthogonal complement?
Let $y\in V^*$ and assume that there exists $v\in V$ such that $y^Tv >0.$ Since $V$ is a subspace, it follows that $-v \in V.$ But then $$0\leq y^T(-v)= -y^Tv<0,$$ a contradiction.
Find the limit of a recursive sequence with parameter
I think there are three cases, $|\alpha| > 1$ then $\lim_{n \to \infty} a(n) = +\infty$ $|\alpha| = 1$ then $\lim_{n \to \infty} a(n) = 1$ $|\alpha| < 1$ then $\lim_{n \to \infty} a(n) = 1/2$
Find a basis for the vector space of symmetric matrices with an order of $n \times n$
Let $E_{ij}$ be the matrix with all its elements equal to zero except for the $(i,j)$-element which is equal to one. Then a desired basis is $$ \frac{1}{2}\big(E_{ij}+E_{ji}\big), \quad 1\le i\le j\le n. $$
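To make this concrete, here is a minimal sketch that builds these matrices for $n=3$ and confirms there are $n(n+1)/2=6$ of them, each symmetric:

    import numpy as np

    n = 3
    basis = []
    for i in range(n):
        for j in range(i, n):
            E_ij = np.zeros((n, n)); E_ij[i, j] = 1
            E_ji = np.zeros((n, n)); E_ji[j, i] = 1
            basis.append((E_ij + E_ji) / 2)

    print(len(basis))                                  # 6 = n(n+1)/2
    print(all(np.array_equal(B, B.T) for B in basis))  # True: each basis matrix is symmetric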
show that χ(G)≤√(2|E|)
For a clique $K_n$ you have $$n = \chi(K_n) \not\leq \sqrt{2\big|E(K_n)\big|} = \sqrt{n(n-1)},$$ in other words that inequality is not true. However, you can prove $$\chi(G)\big(\chi(G)-1\big) \leq 2\big|E(G)\big|.$$ Hint: Imagine there is no edge between some pair of different colors, what could you do? I hope this helps $\ddot\smile$
Exact sequences in category theory
Since in abelian categories all epimorphisms and monomorphisms are normal, $g$ and $f$ are normal. Hence, $$g \cong \text{coker}(\text{ker}g).$$ Moreover, by exactness, $\text{coker}f=\text{coim}g$. That is, $$\text{coker}f\cong \text{coker}(\text{ker}g).$$ Hence, $g\cong \text{coker}f.$
Limits in topological spaces
There’s no need to consider sequences at all; you just need to show that $\mathscr{X}=\{X_n:n\in\Bbb N\}$ is an open cover of $\Bbb R$ that has no finite subcover. Clearly $\Bbb R\setminus\Bbb N\subseteq X_n$ for each $n\in\Bbb N$, and for $k,n\in\Bbb N$ we have $n\in X_k$ if and only if $k>n$, so in particular $n\in X_{n+1}$, and $\mathscr{X}$ is an open cover of $\Bbb R$. But if $\mathscr{R}$ is a finite subset of $\mathscr{X}$, let $m=\max\{n\in\Bbb N:X_n\in\mathscr{R}\}$; then $m\notin\bigcup\mathscr{R}$, so $\mathscr{X}$ has no finite subcover. You can get an even more efficient example by letting $C_n=\Bbb N\setminus\{n\}$ for each $n\in\Bbb N$ and then setting $U_n=\Bbb R\setminus C_n$: $\mathscr{U}=\{U_n:n\in\Bbb N\}$ is an open cover of $\Bbb R$, but for each $n\in\Bbb N$ the only member of $\mathscr{U}$ that contains $n$ is $U_n$, so clearly $\mathscr{U}$ has no proper subcover, let alone a finite one. By the way, the convergence of sequences in the co-countable topology on $\Bbb R$ is extremely simple: it’s a nice (and quite easy) exercise to show that a sequence $\langle x_n:n\in\Bbb N\rangle$ converges if and only if it is eventually constant, i.e., if and only if there is an $m\in\Bbb N$ such that $x_n=x_m$ for all $n\ge m$. Of course in this case the sequence converges to $x_m$.
Distribution of the normal cdf
Let $X$ and $Y$ be independent standard normal random variables. Then $$ \mathbb{E}(\Phi(a X + b)) = \mathbb{E}( \mathbb{P}( Y \le a x + b \vert X = x ) ) = \mathbb{P}(Y- a X \le b ) $$ But the combination $Z = Y-a X$ also follows a normal distribution (being a linear combination of normals), with zero mean and variance $\mathbb{E}((Y-a X)^2) = 1 + a^2$. Hence $$ \mathbb{E}(\Phi(a X + b)) = \Phi\left(\frac{b}{\sqrt{1+a^2}}\right) $$ Here is a numerical check: In[14]:= With[{a = 3., b = 1/2}, {NExpectation[CDF[NormalDistribution[], a x + b], x \[Distributed] NormalDistribution[]], CDF[NormalDistribution[], b/Sqrt[1 + a^2]]}] Out[14]= {0.562816, 0.562816}
Understanding application of the internal language of a category (Elephant, D1.3.12)
Letting $\langle a,b\rangle : [\![x,y.\varphi]\!]\rightarrowtail A\times B$, we have the pullback square $$\require{AMScd} \begin{CD} [\![x,y,y'.\varphi\land\varphi[y'/y]]\!] @>q>> [\![x,y.\varphi]\!]\times B \\ @VpVV @VV(id\times\sigma)\,\circ\,\alpha\,\circ\,(\langle a,b\rangle\times id)V \\ [\![x,y.\varphi]\!]\times B @>>\alpha\,\circ\,(\langle a,b\rangle\times id)> A\times (B\times B) \end{CD}$$ We also have $$\begin{CD} [\![x,y,y'.\varphi\land\varphi[y'/y]]\!] @= [\![x,y,y'.\varphi\land\varphi[y'/y]]\!] \\ @V\langle h,k\rangle VV @VVV \\ A\times B @>>id\times\langle id,id\rangle> A\times(B\times B) \end{CD}$$ where $\langle h,k\rangle$ witnesses the fact that the subobject $[\![x,y,y'.\varphi\land\varphi[y'/y]]\!]$ factors through $id\times\langle id,id\rangle$, i.e. $\varphi(y)\land\varphi(y')\vdash_{x,y,y'}y=y'$. Let's say we have $r,s:X\to[\![x,y.\varphi]\!]$ such that $a\circ r = a \circ s$. We want to show that this means $b\circ r = b\circ s$ at which point we'll have $\langle a,b\rangle\circ r = \langle a,b\rangle\circ s$ which implies that $r = s$ since $\langle a,b\rangle$ is a mono. Consider the arrow $\langle r,b\circ s\rangle:X\to[\![x,y.\varphi]\!]\times B$, we have $$\begin{align} \alpha\circ(\langle a,b\rangle\times id)\circ\langle r,b\circ s\rangle & = \alpha\circ\langle\langle a,b\rangle\circ r,b\circ s\rangle \\ & = \alpha\circ\langle \langle a\circ r, b\circ r\rangle, b\circ s\rangle \\ & = \langle a\circ r, \langle b\circ r, b\circ s\rangle\rangle \end{align}$$ and the arrow $\langle s,b\circ r\rangle:X\to[\![x,y.\varphi]\!]\times B$ for which we have $$\begin{align} (id\times\sigma)\circ\alpha\circ(\langle a,b\rangle\times id)\circ\langle s,b\circ r\rangle & = (id\times\sigma)\circ\alpha\circ\langle\langle a,b\rangle\circ s,b\circ r\rangle \\ & = (id\times\sigma)\circ\alpha\circ\langle\langle a\circ s, b\circ s\rangle, b\circ r\rangle \\ & = (id\times\sigma)\circ\langle a\circ s, \langle b\circ s, b\circ r\rangle\rangle \\ & = \langle a\circ s, \langle b\circ r, b\circ s\rangle\rangle \end{align}$$ and since $a\circ r = a\circ s$ these two arrows are equal, so there is a unique arrow $t:X\to[\![x,y,y'.\varphi\land\varphi[y'/y]]\!]$ determined by them. But, $$\langle a\circ r,\langle b\circ r,b\circ s\rangle\rangle = \langle h,\langle k,k\rangle\rangle\circ t = \langle h\circ t,\langle k\circ t,k\circ t\rangle\rangle$$ so $b\circ r = k \circ t = b \circ s$.
Sharpness of Kolmogorov-Chentsov
Consider a Poisson process with intensity $1$. Since $\mathbb{E}(|X_t-X_s|^2)=|t-s|$, it follows that $\alpha \geq 0$ and $\alpha_c \geq 0$. However, Poisson process does not have a continuous modification. If you want to see sharpness of Kolmogorov-Chentsov theorem for continuous processes, then you can consider a deterministic function $f$ which is $\beta$-Hölder for $\beta<\alpha$ but not $\alpha$-Hölder-continuous, e.g. $\alpha=1$ and $$f(t) := t \log(|t|), \qquad t>0.$$ Then consider the deterministic process $X_t := f(t)$. You can easily check that $\alpha_c=1$ but $(X_t)_{t \geq 0}$ has no modification which is Lipschitz continuous.
How to prove an if and only if for an equivalence relation?
When they say $x \equiv y$ if and only if there exists $h \in H$ and $k \in K$ such that $x = hyk$ then that's something they are assuming. The problem asks you to prove that given this assumption, the relation $\equiv$ fulfills the requirements to be an equivalence relation.
Determination of a volume.
Recall that the volume is the integral of the cross sectional areas. The two graphs that make the base square are $$\begin{cases} y_1 = 1-|x| & -1\le x\le 1 \\ y_2=|x|-1 & -1\le x\le 1\end{cases}$$ The area of the square cross section is just the square of the side-length, but all you have is the diagonal length $y_1-y_2$. However we know that $s\sqrt{2}=d$ is the relation between sides and diagonals, so the cross sectional area is $\left({1\over\sqrt{2}}(y_1-y_2)\right)^2$; integrating this across $-1\le x\le 1$ gives us the formula for the volume: $$V={1\over 2}\int_{-1}^12^2(1-|x|)^2\,dx$$ Doing the algebra out, this gives: $$V=2\left(\int_{-1}^11-2|x|+|x|^2\,dx\right)$$ $$=2\left(\int_{-1}^11\,dx-2\int_{-1}^1 |x|\,dx +\int_{-1}^1 x^2\,dx\right)$$ Now the first integral is easily seen to be $2$, the second is twice the area under $|x|$ which by basic geometry is $2$ because each of the two triangles has area ${1\over 2}$, so the first and second integral cancel one another out, and we are left with $$\int_{-1}^1 x^2\,dx = {2\over 3}$$ which leaves us with a grand total of $V={4\over 3}$.
Infinity laplacian of radial functions
The mistake comes from forgetting the case when $i = j $ in the double derivative. The actual formula is: $$ \frac{\partial^2 R}{\partial x_i x_j} = \frac{x_i x_j}{r^2} \left( R^{\prime \prime} - \frac{R^\prime} {r} \right) + \delta_j(i) \frac{R^\prime}{r}$$ Therefore the infinite laplacian is: $$ \Delta_{\infty}R(r)=\sum_{i,j=1}^n\left( \frac{x_ix_j}{r^2}\left(R''-\frac{R'}{r}\right) + \delta_j(i) \frac{R^\prime}{r} \right) \frac{x_ix_j}{r^2}R'^2 $$ Using your same computations, we have that $\begin{align} \Delta_{\infty}R(r) & = \left( R^{\prime \prime} - \frac{R^\prime}{r}\right) R^{\prime 2} + \sum_{i,j= 1}^n \delta_j(i) \frac{R^\prime}{r} \frac{x_i x_j}{r^2} R^{\prime 2} \\ & = \left( R^{\prime \prime} - \frac{R^\prime}{r}\right) R^{\prime 2} + \sum_{i=1}^n \frac{R^\prime}{r} \frac{x_i^2}{r^2} R^{\prime 2} \\ & = \left( R^{\prime \prime} - \frac{R^\prime}{r}\right) R^{\prime 2} + \frac{R^\prime}{r}R^{\prime 2} \\ & = R^{\prime \prime} R^{\prime 2}. \end{align}$
Complex analysis cookbook? (Soft question)
I used the textbook "Complex Variables and Applications" by Churchill and Brown, which is easy to follow and has a "cookbook" of techniques for applying $\mathbb{C}$ contour integral methods in $\mathbb{R}$ situations.
Complex Amplitudes and their Exp'l Increase/Decrease
For question 1, you have $S(t)=Ae^{i\omega t}$. You are thinking that $S'(t)=i\omega Ae^{i\omega t}$. That's true only if $A$ is a constant in time. If $A$ changes, you have $$S'(t)=i\omega Ae^{i\omega t}+\frac{dA}{dt}e^{i\omega t}$$ If the angle $\psi$ is constant, then we have the ratio between the two components (radial and tangential) is constant. $$\frac{\frac{dA}{dt}}{\omega A}=c$$ or $$\frac{dA}{dt}=c\omega A$$ The solution of this equation is an exponential $A(t)=A_0e^{c\omega t}$.
Proving $A^{-1}=-\frac{1}{6}A^{2}+\frac{2}{3}A-\frac{1}{6}I$
A hunch at what they're going for: Suppose that an invertible matrix $A$ has characteristic polynomial (say $A$ is $3 \times 3$) $a_3 x^3 + a_2x^2 + a_1 x + a_0$. By the Cayley Hamilton theorem, we have $$ a_3 A^3 + a_2 A^2 + a_1 A + a_0 I = 0 $$ That is, $$ a_3A^3 + a_2 A^2 + a_1 A = -a_0 I $$ Which we can factor to get $$ -\frac{1}{a_0}\left( a_3A^2 + a_2 A + a_1I \right)A = I $$ So that $a_3A^2 + a_2 A + a_1I$ must be the inverse of $A$.
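For a concrete sanity check: multiplying the claimed identity by $A$ and rearranging gives $A^3-4A^2+A+6I=0$, i.e. characteristic polynomial $x^3-4x^2+x+6=(x+1)(x-2)(x-3)$. Here is a minimal numerical sketch with the hypothetical choice $A=\operatorname{diag}(-1,2,3)$ (the matrix from the actual exercise is not given here, so this only illustrates the mechanism):

    import numpy as np

    A = np.diag([-1.0, 2.0, 3.0])             # hypothetical matrix with char. poly (x+1)(x-2)(x-3)
    lhs = np.linalg.inv(A)
    rhs = -(A @ A) / 6 + (2 / 3) * A - np.eye(3) / 6
    print(np.allclose(lhs, rhs))              # True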
Binary expression of a fraction
The answer is $n=2053$. I have an answer but it is lacking lots of rigor. I have verified it by exhaustive search with Mathematica. What follows in my thoughts/reasoning. I welcome others to post further answers building upon and improving my working. Initially I explored this problem with a shorter range to find some patterns (e.g. only up to $15$). Firstly we need a value of $n$ such that $\frac{1}{n}$ in base $2$ is of sufficient length (before it terminates or recurrs) to include all possible representations. Prime values of $n$ hence have longer lengths and we do not need to consider composite numbers (except possible powers of primes). (Also primes of the form $2^n-1$ appear to be shorter than other nearby primes.) We only need to consider the binary representations of values between $1024$ and $2047$ as values from $1$ to $1023$ will occur within twice their value. Even though we only want up to $1990$ we need to consider up until $2047$ as some lower values will not be present in the binary representations of $1024$ to $1990$ (such as $997=1994\div2$). So I looked for the first prime after $2048$ which is $2053$. Using a computer I found that the binary $\frac{1}{2053}$ is ($2052$ recurring digits): 0.000000000001111111101100000011000111100000110100110111101111010010100111000101111001000101000101001101001011111100001000100110101001111101011100011001100100000000010111111100010000100101011010001001111010011100110111011111010101000110101100111100111110011110001111010001100111001111110111100001010100110010110000000100011111010011000111000000111001110110111101011010011001110111111101010000011011011011101101101010110111010011010110111110011010001111111001100001000000110101110111100101010100001010110110010011100000111100110110011111011111000101001001001100100100000010010111101000010011101100111010111110110010001100001010000110011010111111110010000010001011101010001011011010001101111001110100111101101110010110110000011100011011100011101100011011000011110001011010010001111001001101000011111101011000011010001011111010001000111010100110110101111011100100101100010001000101010101001010101100010101000100101101010000111011010110101110011100101111100000100100111010001110111001101010111111010010000111001010111000010011001100111111111110000000010011111100111000011111001011001000010000101101011000111010000110111010111010110010110100000011110111011001010110000010100011100110011011111111101000000011101111011010100101110110000101100011001000100000101010111001010011000011000001100001110000101110011000110000001000011110101011001101001111111011100000101100111000111111000110001001000010100101100110001000000010101111100100100100010010010101001000101100101001000001100101110000000110011110111111001010001000011010101011110101001001101100011111000011001001100000100000111010110110110011011011111101101000010111101100010011000101000001001101110011110101111001100101000000001101111101110100010101110100100101110010000110001011000010010001101001001111100011100100011100010011100100111100001110100101101110000110110010111100000010100111100101110100000101110111000101011001001010000100011011010011101110111010101010110101010011101010111011010010101111000100101001010001100011010000011111011011000101110001000110010101000000101101111000110101000111101100110011 I then calculated all substrings of digits up to length $11$ to see if all digits were covered. Conveniently they were. I then went back and exhaustively searched all values up to $2053$ and didn't find a lower value for $n$. 
Here is my Mathematica code for exhaustively searching up to $2053$ if anyone wants to run it/check it/improve it/build upon it. Select[Table[{x,Length[Complement[Range[1990],Select[Map[FromDigits[#,2]&,Flatten[With[{a=RealDigits[1/x,2][[1]][[1]]},Table[a[[i;;i+b]],{b,0,10},{i,1,Length[a]-b}]],1]],1<=#<=1990&]]]},{x,1,2053}],#[[2]]==0&]
Is the submanifold compact?
Yes, because $M$ is closed and bounded. Closedness: $M$ is given implicitly in the form $F=0$ with $F:\mathbb{R}^4\rightarrow\mathbb{R}^2$, $F$ continuous. Continuity guarantees closedness. To see this, recall that a set is closed if and only if it contains all its limit points. Let $u_n = (x_n,y_n,z_n,w_n) \in \mathbb{R}^4$ be a sequence on $M$, i.e., $F(u_n) = 0$ for all $n$. If $u_n$ converges to some $u\in\mathbb{R}^4$, it follows by continuity of $F$ that $F(u)=0$. Thus, the limit point $u\in M$ as well. Boundedness: The defining equations $F$ can be combined to give $$ 3x^2+y^2 = x^2 + w^2 = 2x^2 + 2$$ and hence $$ x^2 + y^2 = 2. $$ Thus the projection of $M$ on the $xy$ plane is a circle of radius $\sqrt{2}$. Hence, $x^2$ and $y^2$ are smaller than 2 for every point $(x,y,z,w)\in M$. Similarly, $$ z^2 + w^2 = 2 + 2x^2 \leq 6, $$ so the projection of $M$ on the $zw$ plane is contained in a circle of radius $\sqrt{6}$. Thus, both $z^2$ and $w^2$ are smaller than 6. It follows that for all $(x,y,z,w)\in M$, $x^2+y^2+z^2+w^2 \leq 8$, and $M$ is bounded.
Is this simple drawing a category?
(I don't know why people keep using comments to answer the question; this way even "answered" questions will stay on the list of unanswered questions.) Yes, this is a category. Actually, every preordered set (a set with a reflexive and transitive relation $\leq$) can be regarded as a category in a natural way, where there is exactly one morphism $x \to y$ when $x \leq y$. In your case, you consider the preordered set $\{A,B,C\}$ with $A \leq B$ and $A \leq C$ (and of course $A \leq A$, $B \leq B$, $C \leq C$).
Finding Jordan Normal Form of 4x4 Matrix
The geometric multiplicity of an eigen-value $\lambda$ of a matrix $A$ is usually defined as the dimension of the eigen-space associated to this eigen-value. This space is the same as the kernel of $A - \lambda I$. By the rank-nullity theorem one has that the dimension of the kernel of an $n \times n$ matrix $B$ is $n - \text{rank}(B)$.
Prove that $\angle I_aB_0I_c=90$
This follows from basic rules about tangents to the incircle. We have $$ A'B_0=AB_0-AA'=\frac{AB+AC-BC}{2}-\frac{AB+AD-BD}{2}=\frac{BD+CD-BC}{2}=C'D.$$
Does the series $\sum\limits_{k=1}^{\infty} \int\limits_0^1 \frac{\cos(2 k \pi x) \cos(2 n \pi x)}{k}\,\mathrm dx$ equal $0$ for $n \in \Bbb N$?
$$I:=\int\limits_0^1\frac{\cos^22n\pi x}{n}dx$$ $$u:=2n\pi x\implies dx=\frac{du}{2n\pi}\implies$$ $$I=\frac1{2n^2\pi}\int\limits_0^{2n\pi}\cos^2u\,du=\left.\frac1{2n^2\pi}\left(\frac{u+\sin u\cos u}2\right)\right|_0^{2n\pi}=\frac1{2n^2\pi}\left(\frac{2n\pi}2\right)=\frac1{2n}$$
Is it possible to prove that if $f(x)$ is differentiable, then $(f(x))^m$ is differentiable?
Yes: use the chain and power rules.
Domain of a variable and equations involving complex numbers.
Loosely speaking, a domain is a set from which values can be taken and put into a function. You're right, it does not have to be the set of all real numbers $\mathbb R$, or even the set of complex numbers $\mathbb C$. I could even have some function $f$ that has $X = \{a,b,c,d\}$ be the domain. You can read more here.
Differentiation in $R^n$. If all partial derivatives exist and zero everywhere show that $f$ is constant.
The mean value theorem for functions of one variable and the assumption that all the partial derivatives of $f$ vanish (surely you meant "partial derivatives", not "partial orders") give you that $f$ is constant along all lines parallel to the coordinate axes. But you can get from any point in $\mathbb R^n$ to any other point by a sequence of (at most $n$) steps parallel to the axes. So $f$ takes the same value at all points.
Evaluate $\iint_{0<x<y<1}xy\,dxdy$
Draw the figure. For the region $0<x<y<1$ you will get $$\int_{x=0}^1\int_{y=x}^1 f(x,y)\,dy\,dx.$$
What does it mean when standard deviation is higher than the variance?
Generally speaking, the standard deviation doesn't have the same units as the variance, so a simple comparison of two particular values doesn't mean anything. You might have a formalism that allows you to claim that $0.5\thinspace\mathrm m>0.25\thinspace\mathrm m^2$, but you're not going to get much insight from that inequality. On the contrary, it should leave you feeling a bit dirty. What's more meaningful is to say that as $\Delta t$ approaches $0$, the standard deviation approaches $0$ more slowly than the variance does. That might be what Hull means; you'd have to unpack the context.
Proving multivariable chain rule
From your post, I assume you are ok with the first equality, so we are at $$ g(f(x_0+u)) = g \Big( f(x_0) + D_f(x_0)u + |u| \varepsilon_1(u) \Big) $$ Then you say: We apply $g$ to each term in the sum, and somehow ... We do not apply $g$ to each term in the sum. The next step is actually the same thing that happened in the first step: in the first step, you used the linear approximation of $f$ to say that $$ f(x_0+u) = f(x_0) + D_f(x_0) \cdot u + |u| \varepsilon_1(u) $$ The next step that you are asking about is doing the same thing, but with $g$. In place of $x_0$ we now have $y_0=f(x_0)$, and in place of $u$, we now have $$ v = D_f(x_0) \cdot u + |u|\varepsilon_1(u) $$ So we get $$ g(y_0 + v) = g(y_0) + D_g(y_0) \cdot v + |v|\varepsilon_2(v) $$ Now just plug in the expression for $v$ and you get what you have in your post. Although on your last line you call it $h(u)$ where I call it $v$.
Can I take $\log$ on both sides of an inequality this way?
You can take the logarithm on both sides of the inequality, if you know the numbers are positive. This produces $\log(f(n))\le \log(cn^k)$. However $\log(cn^k)$ is not the same thing as $kc\log n$, so your second inequality doesn't follow. What you do have is $\log(cn^k) = \log(c) + k\log(n)$, so you can get $$ \log(f(n)) \le \log(c) + k\log(n)$$
How do we gain a better intuition on the definition of uniform continuity and its advantages compared to usual continuity?
In the first round continuity is defined at individual points. A function $f:X\to Y$ is continuous at the point $p\in X$, if for each $\epsilon>0$ there is a $\delta>0$ such that $$x\in U_\delta(p)\ \Rightarrow \ f(x)\in U_\epsilon\bigl(f(p)\bigr)\ .$$ This definition creates a certain dependence $\epsilon\rightsquigarrow\delta$ that describes how large distances of $x$ from $p$ guarantee a value error of $f(x)$ from $f(p)$ which is still $<\epsilon$. It is then said that the function $f$ is continuous on the space $X$ if $f$ is continuous at each point $p\in X$. This sounds innocuous. But it means that for such a spacewide continuity of $f$ we have to have "uncountably many" dependencies $\epsilon\rightsquigarrow\delta$ under control, one for each point $p\in X$. Now in proofs about continuous functions, e.g., about the existence of the Riemann integral, we want just one such dependency, in order to simplify matters. That's where uniform continuity comes in. The function $f$ is uniformly continuous on $X$, if for each $\epsilon>0$ there is a $\delta>0$ such that for all $p\in X$ we have $$x\in U_\delta(p)\ \Rightarrow \ f(x)\in U_\epsilon\bigl(f(p)\bigr)\ .$$ (When you don't like neighborhoods you can write $|x-p|<\delta$ instead of $x\in U_\delta(p)$.)
Induction differential proof
Another way : suppose $\frac{d^n}{dx^n}(x^n) = n!$ for some $n\in\mathbf N$. Then : $$\frac{d^{n+1}}{dx^{n+1}}(x^{n+1}) = \frac{d^n}{dx^n}\left(\frac{d}{dx}(x^{n+1})\right) = \frac{d^n}{dx^n}((n+1)x^n) = (n+1)\frac{d^n}{dx^n}(x^n) = (n+1)n! = (n+1)!$$ There you have to know the first derivative of $x^n$ for any $n$, which can be proved... by induction, knowing the derivative of a product and the derivative of $x$ and $1$ :-)
Given an angle and a point, construct a circle through the point and tangent to the sides of the angle
Hint. Construct any circle tangent to the sides of the angle. Use $\overleftrightarrow{BP}$ to find a point $P'$ on that circle. (There are two choices.) Now, figure out how to construct the "matching" circle through $P$. Note: Each choice of $P'$ yields a different matching circle. Since OP has taken the hint, here's an illustration, with $\bigcirc O$ the "any circle":
Invariance properties of transformations
Linear maps don't necessarily preserve individual vectors (but when they do, those vectors are called eigenvectors with eigenvalue $1$). This really comes down to how you define line, colinear, angle, and length. Here's what I would use: A line is a set of the form $\{u+rv:r\in\mathbb{R}\}$ for some $u,v\in V$ with $v\ne 0$. Three points in $V$ are colinear if there is a line that contains those points. The angle between $u,v\in V$ is usually defined as the quantity $$\arccos\left(\frac{\langle u,v \rangle}{|u||v|}\right).$$ Here $V$ must be an inner product space (e.g. $\mathbb{R}^n$ with the dot product). The length of $v\in V$ is just the norm of $v$. If $V$ is an inner product space, the length of $v$ is $\langle v, v \rangle^{1/2}$. For example, let $f:V\to W$ be a linear map and let $L=\{u+rv:r\in\mathbb{R}\}$ be a line. If we assume that $f(v)\ne 0$, then $f(L)=\{f(u)+rf(v):r\in\mathbb{R}\}$, which is also a line. (What is the geometric interpretation in the case $f(v)=0$?) Another example: if $f:V\to V$ is a scaling map, then there is some $r>0$ such that $f(x)=rx$ for all $x\in V$. If $u,v\in V$, then the angle between $f(u)$ and $f(v)$ is $$ \arccos\left(\frac{\langle ru,rv \rangle}{|ru||rv|}\right) = \arccos\left(\frac{\langle u,v \rangle}{|u||v|}\right),$$ which is the same as the angle between $u$ and $v$. (Can you see why the $r$'s cancel?)
Find convergence of series ....
Hint $$\left|\frac{\sin((k+1)!)}{k!}\right|\le\frac{1}{k!}$$
Adam Optimising Algorithm
If you are referring to Theorem 4.1, it says that both $\alpha_t \leq \frac{\alpha}{\sqrt{t}}$ and $\beta_{1,t} = \beta_1 \lambda^{t-1}$ for some $0<\lambda<1$ are necessary for the upper regret bound to hold. Under these conditions + assumptions on bounded gradients, the regret bound is $O(\sqrt{T})$ (note the large $T$). If you increase $O(\frac{1}{\sqrt{t}})$ to $O(t^{-\frac{1}{4}})$, i.e. make the learning rate converge faster, this bound may not hold because $$ \frac{\alpha_t}{t^{-\frac{1}{4}}} = \frac{\alpha_t}{\frac{1}{\sqrt{t}}}\cdot\frac{\frac{1}{\sqrt{t}}}{t^{-\frac{1}{4}}} \to_t c \cdot 0 \Rightarrow\alpha_t = o\bigg(\frac{1}{t^{\frac{1}{4}}}\bigg) $$
Prove a property of divisor function
Hint: Instead of inducting on $n$ consider inducting on $k$.
Span of a set of vectors containing the zero vector
if I have a set of 3 vectors, one of which is the zero vector, and the question asks if the set of these three vectors spans R2, why is the answer yes? The answer is not necessarily yes. For example, consider $$ \{(0,0),(1,1),(2,2)\} $$ I thought the three vectors had to be linearly independent to span R2, No. A set of two vectors must be linearly independent if it spans $\Bbb R^2$. but if one of the vectors is the zero vector, isn't the set linearly dependent? Yes, that's right.
Expectation of Brownian motion $W^k, k = 2,4$
I think you mean $\mathbb{E}[W^k_t]$ on the left-hand side of your formulas. Anyway, for $k=2$ you use the fact that $W^0_t = 1$, while for $k=4$ you have to use $\mathbb{E}[W^2_s] = s$.
Is this proof in which I prove a statement by the contrapositive correct?
Your proof is correct; I have two pieces of advice to give you. 1) You do not need to be so detailed, just a couple of lines are enough. For example: Assume $x$ is even, $ x= 2k$; then $$ x^2 - 4x + 1 = (2k)^2 - 4(2k) + 1 = 4k^2 - 8k +1 = 2 (2k^2 - 4k) + 1, $$ and this is odd because it is of the form $ 2u + 1. $ 2) Try to find direct proofs whenever possible, and especially if the statement is relatively easy. A direct proof could be: Assume $x^2 - 4x + 1$ is even, $ x^2 - 4x + 1 = 2n. $ Then $$ x^2 = 2n + 4x -1 = 2 (n+2x) - 1 $$ and this is odd, which implies $x$ odd.
cardinality of collection of chains in the collection of infinite subsets of $\mathbb{N}$ with inclusion partial order
Uncountable, as the collection of infinite subsets of $\mathbb N$ is already uncountable: for each infinite $A\subseteq\mathbb N$ the singleton $\{A\}$ is a chain, and distinct $A$ give distinct chains.
Proof Verification: Show sequence is bounded and find limit: $x_1 \gt 1$ and $x_{n + 1} = 2 - \frac{1}{x_n}$
To take this out of the unanswered list: We will prove that $1 \lt x_{n +1} \lt x_n$ for each $n \in \Bbb N$. $x_2 = 2 - \frac 1 {x_1} \gt 2 - 1 = 1$ and $x_2 = 2 - \frac 1 {x_1} = \dfrac{2x_1 - 1}{x_1} = 1 + \dfrac{x_1 -1}{x_1} \lt 1 + x_1 - 1 = x_1$ whence we have that $1 \lt x_2 \lt x_1$. Now suppose $1 \lt x_{n +1} \lt x_n$ for an arbitrary natural number $n$. $$x_{n + 2} = 2 - \frac{1}{x_{n+ 1} } \gt 2 - 1 = 1$$ $$ x_{n + 2} = 2 - \frac{1}{x_{n+ 1} } = \frac{2x_{n + 1} - 1}{x_{n + 1}} = 1 + \frac{x_{n + 1} - 1}{x_{n + 1}} \lt 1 + x_{n + 1} - 1 = x_{n + 1} $$ Therefore we have $ 1 \lt x_{n + 2} \lt x_{n + 1} $ and hence the induction is complete. So the sequence is monotone decreasing and is bounded below by $1$. By the Monotone Convergence Theorem $\lim (x_n) = x$ exists in $\Bbb R$. We know that $x_nx_{n + 1} = 2x_n - 1$ and that $(x_n)$ converges and so does $(x_{n + 1})$ to the same limit since it can be interpreted as a subsequence of $(x_n)$. Then the sequence $(x_nx_{n + 1})$ will also converge to the limit $\lim {(x_n)} \lim {(x_{n+ 1})} = x^2 $. The sequences $(y_n)$ and $(z_n)$ given by $y_n = 2$ and $ z_n = -1 $ for $n \in \Bbb N$ also converge to $2$ and $(-1)$ respectively so $\lim (2x_n - 1) = 2x - 1$. Whence we have that $$x^2 = 2x - 1 \implies x = 1$$
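A few iterations make the monotone decrease towards $1$ visible (a minimal sketch; the starting value $x_1=5$ is arbitrary, any $x_1>1$ behaves the same way):

    x = 5.0                    # any x_1 > 1
    for n in range(1, 11):
        print(n, x)            # values decrease monotonically towards 1
        x = 2 - 1 / x          # x_{n+1} = 2 - 1/x_n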
manifolds without simplicial or cell structure
A homeomorphism from a manifold to a simplicial complex is called a triangulation. The $E_8$ manifold (a four-dimensional manifold) does not have a triangulation. I don't know the history of this result but I suspect it's from the late 80s as the proof uses Casson's invariant. More recently -- just over a year ago! -- Ciprian Manolescu proved that there are manifolds of any dimension greater than 4 which do not admit a triangulation. He uses Pin(2)-equivariant Seiberg-Witten Floer homology. Here is the paper.
Clarification needed for the proof of $\dim(U_1+U_2)=\dim(U_1)+\dim(U_2)-\dim(U_1 \cap U_2)$
Since $w_1,\ldots,w_k\in U_2$, $c_1w_1+\cdots+c_kw_k\in U_2$. But you also know that $c_1w_1+\cdots+c_kw_k\in U_1$. Therefore, $c_1w_1+\cdots+c_kw_k\in U_1\cap U_2$. There is a conceptual problem in what you wrote. You wrote that $\{w_1,\ldots,w_k\}$ is a basis of $U_2\setminus U_1\cap U_2$. How can that be? $U_2\setminus U_1\cap U_2$ is not a vector space.
$\mathscr{B} = \{ [a, b) | a< b \in \mathbb{R} \}$ is a basis for a Topology in $\mathbb{R}$
Simpler and direct is $[a,b) \cap [r,s) = [\max(a,r), \min(b,s))$. In addition, it is necessary to note that every $r \in \mathbb{R}$ is in some base set. $[r, r+1)$ for example.
if $T$ is a stopping time, $X_T$ is $\mathcal F_T$ measurable. Proof with characteristic functions
Let $B$ be any Borel set, then $$\{X_T \in B\} \cap \{T \leq k\} = \bigcup_{n=0}^k \{X_T \in B\} \cap \{T=n\} = \bigcup_{n=0}^k \{X_n \in B\} \cap \{T=n\} \in \mathcal{F}_k,$$ and so $X_T$ is $\mathcal{F}_T$-measurable. Alternatively you can use the following small lemma which follows directly from the definition of $\mathcal{F}_T$. A random variable $Y$ is $\mathcal{F}_T$ measurable if, and only if, $Y 1_{\{T \leq k\}}$ is $\mathcal{F}_k$ measurable for each $k \geq 0$. Since (according to the identity from the body of your question) $$X_T 1_{\{T\leq k\}} = \sum_{n=0}^k X_n 1_{\{T=n\}},$$ it follows that $X_T 1_{\{T \leq k\}}$ is $\mathcal{F}_k$-measurable and so $X_T$ is $\mathcal{F}_T$-measurable.
Formula for $P(X_1=a_1,\ldots , X_n=a_n)$
If we let $E_k=\{X_k=a_k\}, T_k=\{X_k\leq a_k\}, S_k=\{X_k\lt a_k\}$, so that $E_k=T_k\setminus S_k$ then: $$\mathsf P(E_1,\ldots,E_n)~{=\mathsf P(E_1,\ldots, E_{n-1}, T_n)-\mathsf P(E_1,\ldots, E_{n-1}, S_n)\\~\\={\mathsf P(E_1,\ldots, E_{n-2},T_{n-1},T_n)\\-\mathsf P(E_1,\ldots, E_{n-2},T_{n-1},S_n)-\mathsf P(E_1,\ldots, E_{n-2},S_{n-1},T_n)\\+\mathsf P(E_1,\ldots, E_{n-2},S_{n-1},S_n)}\\\vdots\\ ={\mathsf P(T_1,\ldots, T_n)\\-\mathsf P(T_1,\ldots, T_{n-1},S_n)-\mathsf P(T_1,\ldots,T_{n-2},S_{n-1},T_n)-\ldots-\mathsf P(S_1,T_2,\ldots,T_n)\\+\ldots\\\vdots\\+(-1)^n \mathsf P(S_1,\ldots,S_n)}}$$ Well, you get the idea. For each $k\in\{0,..,n\}$, the series will contain $\binom nk$ terms which are the joint probabilities of the distinct selections of $k$ strict inequality events and $n-k$ non-strict inequality events, and the coefficients of these terms will be $(-1)^k$. So for example: ${\mathsf P(X_1=a_1,X_2=a_2, X_3=a_3) ~}{= \mathsf P(E_1,E_2,E_3) \\ ={\mathsf P(T_1,T_2,T_3)\\-\mathsf P(T_1,T_2,S_3)-\mathsf P(T_1,S_2,T_3)-\mathsf P(S_1,T_2,T_3)\\+\mathsf P(T_1,S_2,S_3)+\mathsf P(S_1,T_2,S_3)+\mathsf P(S_1,S_2,T_3)\\-\mathsf P(S_1,S_2,S_3)}\\ ={\mathsf P(X_1\leq a_1,X_2\leq a_2,X_3\leq a_3)\\-\mathsf P(X_1\leq a_1,X_2\leq a_2,X_3\lt a_3)-\mathsf P(X_1\leq a_1,X_2\lt a_2,X_3\leq a_3)-\mathsf P(X_1\lt a_1,X_2\leq a_2,X_3\leq a_3)\\+\mathsf P(X_1\leq a_1,X_2\lt a_2,X_3\lt a_3)+\mathsf P(X_1\lt a_1,X_2\leq a_2,X_3\lt a_3)+\mathsf P(X_1\lt a_1,X_2\lt a_2,X_3\leq a_3)\\-\mathsf P(X_1\lt a_1,X_2\lt a_2,X_3\lt a_3)}}$
Compare the topologies $\tau$ and $\tau^{*}$
In an attempt to compare them, consider $X=\mathbb{R}$ (with $\tau$ being the standard topology) and $S=[0,1)$. Then $S\in\tau^*$, but $S\notin\tau$. Also $(1,2)\in\tau$, but $(1,2)\notin\tau^*$, so they are not comparable. Note that if $S$ is in $\tau$, then $\tau^*\subset\tau,$ and if $S=\varnothing$ then the two topologies coincide.
How much choice is needed to prove that every compact metric space is a continuous image of the Cantor set?
The proof I would use is to use that $X$ has a countable base (this needs countable choice, I think), so that $X$ embeds into $[0,1]^\omega$ quite explicitly. And as the Cantor function $c: C \to [0,1]$ maps $C$ onto $[0,1]$ (also choicelessly), $C$ maps onto $[0,1]^\omega$ too, with $c'=c^\omega$ essentially. Then Kechris in his book shows that any closed subset of $C$ is a retract of it (using a branches-in-a-finitary-tree argument; I recall it as being quite explicit), so $c'^{-1}[X]$ is a retract too, and hence $X$ is an image of $C$ (composing the retract with $c'$). Cursorily, I only see the Urysohn embedding (from a countable base) as the part that would need countable choice.
Proof involving dot product of 3 vectors
I don't understand why you are complicating things when it can be done so simply. $$w\cdot f_i=(a_1f_1+a_2f_2+\cdots +a_mf_m)\cdot f_i$$ $$w\cdot f_i=a_1(f_1\cdot f_i)+a_2(f_2\cdot f_i)+\cdots +a_m(f_m \cdot f_i)$$ Now because $f_1, \ldots ,f_m $ are mutually orthogonal, $f_j \cdot f_i =0$ for $j \ne i$. Also note that the dot product of a vector with itself is $f_i \cdot f_i=\|f_i\|^2$, so we have: $$\frac{w\cdot f_i}{f_i \cdot f_i}=\frac{a_1(f_1\cdot f_i)+a_2(f_2\cdot f_i)+\cdots +a_m(f_m \cdot f_i)}{f_i \cdot f_i }$$ $$\implies \frac{w\cdot f_i}{f_i \cdot f_i}=\frac{0+0+\cdots+a_i(f_i \cdot f_i)+\cdots +0+0}{f_i \cdot f_i}=a_i\frac{\|f_i\|^2}{\|f_i\|^2}=a_i$$
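A quick numerical illustration of the same computation (the vectors and coefficients below are arbitrary choices for the sketch, not taken from the question):

```python
import numpy as np

# Mutually orthogonal (not necessarily unit) vectors in R^3 -- illustrative choices.
f1 = np.array([1.0, 1.0, 0.0])
f2 = np.array([1.0, -1.0, 0.0])
f3 = np.array([0.0, 0.0, 2.0])

a = np.array([3.0, -2.0, 0.5])           # the "unknown" coefficients
w = a[0] * f1 + a[1] * f2 + a[2] * f3    # w = a1 f1 + a2 f2 + a3 f3

# Recover each coefficient as (w . f_i) / (f_i . f_i).
recovered = np.array([w @ f / (f @ f) for f in (f1, f2, f3)])
print(recovered)  # [ 3.  -2.   0.5]
```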
Show that partitions and equivalence relations are interdefineable.
I think what the authors want to say by $p=p_{R_p}$ is that $p$ "is" the coequalizer of $p_0,p_1 : R_p = X\times_I X \to X$. By $R=R_{p_R}$ they mean that $R$ "is" the pullback of $p_R: X\to X\amalg_R X$ along itself. As is common in category theory, "being" a coequalizer or pullback of something just means it satisfies the universal property. Usually two such objects are isomorphic via a unique isomorphism. Edit: Now for some hints: Let $p: X\to I$ be an epimorphism and $p_0,p_1:R_p = X\times_I X\to X$ a pullback of $p$ along itself. Let $T$ be a set with a morphism $t:X\to T$ such that $tp_0=tp_1$. Then $\bar t: I\to T$ can be defined as follows: Given $i\in I$, find a preimage $x\in X$ of $i$ via $p$. Then set $\bar t(i)=t(x)$. (Why is $\bar t$ well defined? And why is it unique?) Now let $(p_0,p_1):R\to X\times X$ be an equivalence relation. Let $p_R:X\to P_R = X\amalg_RX$ be the coequalizer of $p_0,p_1$. Check the following property: $(\star)$ Since $R$ is an equivalence relation, two elements $p_R(x),p_R(x')\in X\amalg_RX$ are equal iff there is an $r\in R$ such that $p_0(r)=x$ and $p_1(r)=x'$, i.e. iff $(x,x')\in R$. (Check this against the concrete form of the coequalizer. Note that if $R$ is not an equivalence relation, we have to take the equivalence closure of $R$ first.) Let $T$ be a set with morphisms $t_0,t_1:T\to X$ such that $p_Rt_0=p_Rt_1$. Then $\bar t: T\to R$ can be defined as follows: Given $s\in T$, consider $(t_0(s),t_1(s))\in X\times X$. Since $p_Rt_0(s)=p_Rt_1(s)\in X\amalg_RX$, by $(\star)$ we have $(t_0(s),t_1(s))\in R$. Set $\bar t(s)=(t_0(s),t_1(s))$. (Check uniqueness.) P.S.: You got the definition of a reflexive relation wrong. It should be: $R$ is reflexive iff $(x,x)\in R$ for all $x\in X$.
Doubt in understanding the definite integrals
Well, in order to prove that statement rigorously we need to first make sense of it rigorously - and to do that we need to have some formal definition of area to start with. Since that's exactly what integration is intended to give us, this is a bit circular. That said, there's a basic result which gets the ball rolling, namely the convergence of the upper and lower Riemann sums. Intuitively, each lower Riemann sum should be a lower bound on the area and each upper Riemann sum should be an upper bound on the area, so their joint limit has to be the exact area. In the particular case you're looking at, this is something you can - and should! - prove by hand. Of course this isn't really rigorous since - as said above - we don't already have a formal definition of area. However, it motivates the following first stab at a definition of area: Suppose $f$ is defined and continuous on $[a,b]$. We say that the area under the graph of $f$ from $a$ to $b$ equals $A$, and write $$\int_a^bf(x)dx=A,$$ iff for every $\epsilon>0$ there is some $\delta>0$ such that whenever $a=x_0<x_1<...<x_n=b$ with $x_{i+1}-x_i<\delta$ for each $i$ the associated upper and lower Riemann sums are each within $\epsilon$ of $A$. Here we're being a bit more general than you may expect by allowing Riemann sums with uneven rectangle widths. In fact we're going to want to be much more general very soon: we'll want to look at the areas given by possibly discontinuous functions, in which situation "upper" and "lower" sums may not make sense, and consequently the full definition of the Riemann integral is somewhat more complicated. And even that's not the end of the story, but that's getting a bit far afield.
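To make the "squeeze" concrete, here is a small numerical sketch (my own illustration, using $f(x)=x^2$ on $[0,1]$, where the exact area is $1/3$): since $x^2$ is increasing there, the lower sum uses left endpoints and the upper sum uses right endpoints, and both close in on $1/3$ as the partition is refined.

```python
def riemann_sums(f, a, b, n):
    """Lower and upper Riemann sums of an increasing f on [a, b] with n equal pieces."""
    h = (b - a) / n
    lower = sum(f(a + i * h) for i in range(n)) * h        # left endpoints
    upper = sum(f(a + (i + 1) * h) for i in range(n)) * h  # right endpoints
    return lower, upper

for n in (10, 100, 1000, 10000):
    lo, up = riemann_sums(lambda x: x * x, 0.0, 1.0, n)
    print(n, lo, up)  # both columns approach 1/3, the lower from below, the upper from above
```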
What's the real name for these things? Categories whose morphisms have "length."
A while back I asked a similar question on MO. There did not seem to be a consensus regarding the name of such a gadget, but perhaps you will find the discussion interesting.
Conversion of a given statement into predicate logic?
Your conclusion is "If a person is not afraid, then Mr. A is guilty", which just intuitively should sound wrong to you. I'd say you have two separate sentences. The first is "If Mr. A is guilty, then no witness is lying unless he is afraid." And this sentence starts with an "If". Now, "If $X$, then $Y$" sentences usually translate to $X\implies Y$, so I suggest you do the same here. This means the logical statement will start with $G\implies$, not with $\neg A\implies$. Now, the second part must be the "no witness is lying unless he is afraid". This can be rephrased either as "if a witness is lying, he is afraid", so $(W(x)\land L(x))\implies A(x)$, or as "if a witness is not afraid, he is not lying", so $(W(x)\land \neg A(x))\implies \neg L(x)$ (it's easy to see those two phrasings are logically equivalent). So, the resulting statement is $$G\implies \forall x:((W(x)\land L(x))\implies A(x))$$ with the second statement of course being $\exists x: W(x)\land A(x)$. Now, from just the fact that the only statement you have about $G$ is $$G\implies\text{something}$$ it should be clear that there is no way you can prove that $G$ is true (it could always be false, and that statement will still be true). However, you can maybe prove $\neg G$, since $A\implies B$ is the same as $\neg B\implies \neg A$. In our case, this means we are interested in the statement $$\neg(\forall x:((W(x)\land L(x))\implies A(x)))\implies \neg G$$ which simplifies to: $$(\exists x: W(x)\land L(x)\land \neg A(x)) \implies \neg G$$ which means "if there exists a lying witness that is not afraid, then Mr. A is not guilty". However, you only know about one witness, and you don't know if they are lying. You know that if they are lying, then Mr. A is not guilty, but if they are not lying, Mr. A might be guilty or not; you just don't know.
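To see concretely that the premises leave $G$ undetermined, here is a tiny brute-force sketch (entirely my own illustration, restricted to models with a single individual): it looks for a model in which both premises hold while $G$ is false; finding one shows $G$ cannot be derived from them.

```python
from itertools import product

# Brute-force search over one-individual models: if the premises can hold
# while G is false, then G is not entailed by the premises.
for guilty, witness, lying, afraid in product([False, True], repeat=4):
    # G  =>  forall x ((W(x) & L(x)) => A(x)), evaluated on the single individual
    premise1 = (not guilty) or ((not (witness and lying)) or afraid)
    # exists x (W(x) & A(x))
    premise2 = witness and afraid
    if premise1 and premise2 and not guilty:
        print("counter-model:", dict(G=guilty, W=witness, L=lying, A=afraid))
        break
```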
Finitely generated projective modules are locally free
Yes, this is true. See this Math Overflow question for a precise statement and a reference to its proof in Bourbaki's Commutative Algebra. This result is also stated in my commutative algebra notes, but the proof is unfortunately not yet written up there. I certainly hope that this will be remedied soon though, as I will be teaching a course out of these notes starting on Monday. When the proof gets written, I will update this answer with a page number. Added: Here is something in the MO answer that I decided was worth a comment here. For finitely generated modules, this stronger version of local freeness is actually equivalent to projectivity, whereas the weaker "pointwise local freeness" is subtly weaker in general.
What does $(a,b) = 1$ mean in the context of the irrationality of $\sqrt 2$?
The expression "$a^2=2b^2$ is soluble in integers $a,b$ with $(a,b)=1$" is simply another way of saying that there exists a solution $(a,b)$ where $a,b\in\mathbb{Z}$ and $\gcd(a,b)=1$.
Is this a known result?
You at least need differentiability for the KKT conditions to be defined. But if you focus on sufficiency, you can say this: Sufficiency: Suppose your problem is to minimize $f(x)$ over a convex set $X$ and subject to $g_k(x)\leq 0$ for all $k \in \{1, \ldots, K\}$ (call this Problem P1). Define: $$ L(x, \lambda) = f(x) + \sum_{k=1}^K\lambda_k g_k(x) $$ Assume that $\lambda^*$ is a vector with nonnegative components. Assume that $L(x,\lambda^*)$ is a convex function over $x \in X$. Assume we have a vector $x^* \in X$ for which the following "KKT conditions" hold: \begin{align} &\nabla f(x^*) + \sum_{k=1}^K \lambda_k^* \nabla g_k(x^*) = 0 \\ & g_k(x^*) \leq 0 \: \: \forall k \in \{1, \ldots, K\} \\ &\lambda_k^* \geq 0 \: \: \forall k \in \{1, \ldots, K\} \\ &\lambda_k^*g_k(x^*) = 0 \: \: \forall k \in \{1, \ldots, K\} \end{align} We want to show that $x^*$ is an optimal solution to the problem P1. The above assumptions show that $x^*$ satisfies the constraints (namely, $x^* \in X$ and $g_k(x^*) \leq 0$ for all $k \in \{1, \ldots, K\}$). The derivative assumptions also show that $x^*$ is a point of zero derivative for the function $L(x, \lambda^*)$. Since this function is convex in $x$, $x^*$ must minimize $L(x,\lambda^*)$ over all $x \in X$. Thus, for all $x \in X$ we have: $$ L(x^*,\lambda^*) \leq L(x,\lambda^*) $$ That is, for all $x \in X$: $$ f(x^*) + \underbrace{\sum_{k=1}^K\lambda_k^*g_k(x^*)}_{\text{zero}} \leq f(x) + \sum_{k=1}^K\lambda_k^*g_k(x) $$ Thus, if $x$ is a vector in $X$ that satisfies the constraints $g_k(x) \leq 0$ for all $k$, then $\sum_{k=1}^K\lambda_k^*g_k(x) \leq 0$ (recall that $\lambda_k^*\geq 0$ for all $k$), and the above inequality reduces to $f(x^*) \leq f(x)$, meaning that $x^*$ is an optimal solution to problem P1.
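As a concrete (made-up) illustration of the sufficiency check: take $X=\mathbb R$, $f(x)=(x-2)^2$ and a single constraint $g(x)=x-1\le 0$. The candidate pair $x^*=1$, $\lambda^*=2$ satisfies all four conditions, and $L(x,\lambda^*)$ is convex, so $x^*$ is optimal. The sketch below just verifies the conditions numerically.

```python
import numpy as np

f  = lambda x: (x - 2.0) ** 2      # objective
df = lambda x: 2.0 * (x - 2.0)     # its derivative
g  = lambda x: x - 1.0             # constraint g(x) <= 0
dg = lambda x: 1.0

x_star, lam_star = 1.0, 2.0        # candidate primal/dual pair (illustrative)

stationarity = np.isclose(df(x_star) + lam_star * dg(x_star), 0.0)
primal_feas  = g(x_star) <= 1e-12
dual_feas    = lam_star >= 0.0
compl_slack  = np.isclose(lam_star * g(x_star), 0.0)

print(stationarity, primal_feas, dual_feas, compl_slack)  # all True
print(f(x_star) <= f(0.5), f(x_star) <= f(0.999))          # x* beats other feasible points
```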
Is Gauss's lemma valid for polynomials with coefficients in a GCD domain?
FYI: a proof of Gauss's Lemma for GCD domains is given in Section 15.5 of these notes. (I don't claim any superiority to any other proofs you may have seen or discovered...)
Is my reasoning on Transformation Matrices right?
I think I got it and I can answer myself: Can we say in this case that $M = B^{-1} \cdot A$? In other words: $$[v]_B = \underbrace{B^{-1}\cdot A}_{\text{M?}} \cdot [v]_A$$ $T(\overrightarrow v)$ is the linear transformation that maps $\overrightarrow v$ to its coordinate tuple with respect to basis $B$. As described before, this is: $$T(\overrightarrow v) = [v]_B$$ Since $\overrightarrow v$ is expressed as a linear combination of the basis-$A$ vectors, the linear transformation acts on each vector of basis $A$, yielding the associated transformation matrix $M$ of the linear transformation. The column vectors of $M$ are the vectors of basis $A$ transformed to coordinate tuples with respect to basis $B$. Since $T(\overrightarrow{v_i}) = B^{-1}\cdot \overrightarrow{v_i}$, the columns of $M$ are exactly the columns of $B^{-1}\cdot A$, so $M = B^{-1}\cdot A$. Then $[v]_B = M\cdot [v]_A$; as we already know that $[v]_B = B^{-1}\cdot A \cdot [v]_A$, the proof is concluded.
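A numerical sketch of the conclusion $[v]_B = B^{-1}A\,[v]_A$, with the basis vectors stored as matrix columns (the bases and the vector are arbitrary choices for illustration, not from the question):

```python
import numpy as np

# Columns of A and B are the basis vectors, written in standard coordinates.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.array([[2.0, 0.0],
              [0.0, 1.0]])

v_A = np.array([3.0, -1.0])        # coordinates of v with respect to basis A
v_std = A @ v_A                    # the vector itself, in standard coordinates

M = np.linalg.inv(B) @ A           # claimed change-of-basis matrix
v_B = np.linalg.inv(B) @ v_std     # coordinates with respect to basis B, computed directly

print(np.allclose(M @ v_A, v_B))   # True: [v]_B = M [v]_A
```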
Angle between two vectors in 3D Cartesian space looking down z axis
To project $3$D vectors onto the $2$D $xy$-plane (the plane described by the equation $z=0$), simply set the third coordinate to zero. Hence the two processes that you described are the same. To show that it is valid to set the third coordinate to zero, note that we are minimizing $$(\hat{x} - x)^2 + (\hat{y} - y)^2 +(0-z)^2$$ where the variables are $\hat{x}$ and $\hat{y}$. The minimal value is attained at $(x,y,0)$.
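A quick check that the two processes agree (the vectors below are made-up illustrative values):

```python
import numpy as np

u = np.array([1.0, 2.0, 5.0])
v = np.array([3.0, -1.0, -2.0])

# Method 1: zero out the z-coordinate, then use the usual 3D angle formula.
u_p, v_p = u.copy(), v.copy()
u_p[2] = v_p[2] = 0.0
angle1 = np.arccos(u_p @ v_p / (np.linalg.norm(u_p) * np.linalg.norm(v_p)))

# Method 2: drop the z-coordinate entirely and work in 2D.
angle2 = np.arccos(u[:2] @ v[:2] / (np.linalg.norm(u[:2]) * np.linalg.norm(v[:2])))

print(np.isclose(angle1, angle2))  # True: both give the same projected angle
```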
On the density of $C[0,1]$ in the space $L^{\infty}[0,1]$
Let $\psi_n$ be a sequence of standard mollifiers, symmetric about 0. If $f \in L^\infty$, $g \in L^1$, it is a quick application of Fubini's theorem to see that $\int (f * \psi_n) g = \int f (\psi_n * g)$, where $*$ denotes convolution. Since $f * \psi_n$ is continuous and $\psi_n * g \to g$ in $L^1$, we are done.
Given two vertices, how to find the other two vertices of a rhombus?
Let $B=(x,y)$ be the vertex below the given diagonal. Since $B$ is equidistant from $A$ and $C$: $$ (x-5)^2+(y-4)^2=(x+3)^2+(y+4)^2. $$ Simplifying the above yields $$\tag{1} 1=x+y $$ Since $BC$ has gradient $5/3$, $$ {y-4\over x-5}={5\over3}; $$ whence $$\tag{2}3y-5x=-13.$$ By $(1)$, we have $y=1-x$. Substituting into $(2)$ gives $ 3(1-x)-5x=-13$. This gives $x=2$; and, from $(1)$, $y=-1$. So $B=(2,-1)$. Since the given diagonal has slope 1, and since $B$ is 5 units to the right of and 3 units up from $A$, the fourth vertex is 5 units up from and 3 units to the right of $A$. So $D=(0,1)$.
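A quick numerical check of the result (just verification, not part of the derivation; the two given diagonal endpoints are labelled $P$ and $Q$ here): all four sides have the same length and the diagonals share a midpoint, as a rhombus requires.

```python
import math

P, Q = (-3, -4), (5, 4)     # the two given opposite vertices
B, D = (2, -1), (0, 1)      # the two vertices found above

sides = [math.dist(P, B), math.dist(B, Q), math.dist(Q, D), math.dist(D, P)]
print(sides)                # four equal values, each sqrt(34)

mid_PQ = ((P[0] + Q[0]) / 2, (P[1] + Q[1]) / 2)
mid_BD = ((B[0] + D[0]) / 2, (B[1] + D[1]) / 2)
print(mid_PQ == mid_BD)     # True: the diagonals bisect each other
```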
If the Collatz Conjecture is unsolvable is it true?
If you proved that Collatz was not disprovable in (say) PA, this would mean that there were no finite cycles not hitting $1$. However, it would leave open the possibility that there was some number $n$ which never hit $1$, and never entered a finite cycle (just "shot off to infinity"); so this wouldn't actually prove the Collatz conjecture. I believe there have been no serious attempts at proving Collatz to be unprovable in PA (or related theories). The problem is that we only know a small handful of techniques for showing PA-unprovability, and none of them seem to apply to Collatz. Proving unprovability is extremely hard. Incidentally, the statements for which PA-unprovability implies truth are those which are equivalent in PA to a $\Pi^0_1$ sentence; it is not known whether the Collatz conjecture is equivalent over PA to a $\Pi^0_1$ sentence, so currently no way is known for turning a proof of the PA-undisprovability of Collatz into a proof of the Collatz conjecture.
If $k$ cannot equal $0$ and $A$ is as given below, what is $A$ inverse?
Using the row-reduction method, we try to bring the augmented block matrix $(A|I)$ to the form $(I|A^{-1})$. In this case, that is fairly simple: $$ \left(\begin{array}{ccc|ccc}1&0&2k&1&0&0\\0&1&k&0&1&0\\0&0&k&0&0&1\end{array}\right) \to \left(\begin{array}{ccc|ccc}1&0&0&1&0&-2\\0&1&0&0&1&-1\\0&0&1&0&0&\tfrac{1}{k}\end{array}\right) $$ So that your inverse should be $$ A^{-1} = \left(\begin{array}{ccc}1&0&-2\\0&1&-1\\0&0&\tfrac{1}{k}\end{array}\right) $$
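A symbolic double-check of the row reduction (using SymPy; this is my own verification sketch, not part of the original answer):

```python
import sympy as sp

k = sp.symbols('k', nonzero=True)
A = sp.Matrix([[1, 0, 2*k],
               [0, 1, k],
               [0, 0, k]])
A_inv = sp.Matrix([[1, 0, -2],
                   [0, 1, -1],
                   [0, 0, 1/k]])

print(sp.simplify(A * A_inv))  # the 3x3 identity matrix
print(A.inv())                  # SymPy's own inverse agrees with A_inv
```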
Proving independence among vectors: u+v
$$a\cdot(u+v)+b\cdot( v+w)+c\cdot(u+w)=(a+c)\cdot u+(a+b)\cdot v+(b+c)\cdot w$$ If this combination is $0$, then since $u,v,w$ are linearly independent we get $a+c=a+b=b+c=0$, which forces $a=b=c=0$.
How is this NOT injective and IS surjective?
Small $p$ is a function. Big $\mathscr P$ is the power set symbol. Different things. The domain of the function is $\mathscr P(\{1,2,\ldots,n\})$. The codomain of the function is the set of integers from $0$ to $n$. In other words: $$p:\mathscr P(\{1,2,\ldots,n\})\to \{0,1,\ldots,n\}$$ The function is simply that $p(X) = $ the number of elements in $X$. In other words: If $S\subseteq \{1,2,3,\ldots,n\}$ then $$p:S \mapsto |S|$$ or in other words $$p(S) = |S|$$ So for example $p(\emptyset) = 0$. And $p(\{1,2,3,\ldots, n\}) =n$. And $p(\{2,3,7\}) = 3$ and so on. If $n> 1$ then $p$ is obviously not injective, as if $|X| = |Y|$ but $X\ne Y$ then $p(X) = |X| = |Y| = p(Y)$. (Example: If $n\ge 7$ then $p(\{1,2,3\}) = p(\{2,3,7\}) = 3$.) [Also: $|\mathscr P(\{1,2,3,\ldots,n\})| = 2^n$ and $|\{0,1,2,3,\ldots,n\}| = n+1$. If $n> 1$ then $2^n > n+1$ and you can't have an injection from a set of higher cardinality to one of lower cardinality.] [On the other hand if $n=1$ then $\mathscr P(\{1\}) = \{\emptyset, \{1\}\}$ and $p(\emptyset) =0$ and $p(\{1\}) = 1$ and $p$ is a bijection $\{\emptyset,\{1\}\} \leftrightarrow \{0,1\}$.] [Oh, and as Wuestenfux points out in the comments: If $n=0$ then $\{\} =\emptyset$ and $\mathscr P(\emptyset)= \{\emptyset\}$ and $p: \{\emptyset\}\to \{0\}$ via $p(\emptyset) = |\emptyset| = 0$. That is also a bijection.] [Worth noting (maybe) that $|\mathscr P(\{\})| = 2^0$ and $|\{0\}| = 0+1$ and $2^0 = 0+1$. And $|\mathscr P(\{1\})| = 2^1$ and $|\{0,1\}| = 1+1$ and $2^1 = 1+ 1$. But for $n>1$ we have $|\mathscr P(\{1,\ldots,n\})| = 2^n$ and $|\{0,1,\ldots,n\}| = n+1$ and $2^n > n+1$.] [Or maybe that wasn't worth noting...] And $p$ is clearly surjective, as $p(\emptyset)=0$ and for any $k$ with $0< k\le n$ we have $p(\{1,2,\ldots,k\}) = k$. That's all the statement is saying.
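A small enumeration for $n=3$ (my own illustration) makes both claims visible: there are $2^n=8$ subsets but only $n+1=4$ possible sizes, so $p$ cannot be injective, while every size from $0$ to $n$ is attained, so $p$ is surjective.

```python
from itertools import chain, combinations

n = 3
universe = range(1, n + 1)
subsets = list(chain.from_iterable(combinations(universe, r) for r in range(n + 1)))

sizes = [len(s) for s in subsets]                # p(S) = |S| for every subset S
print(len(subsets), len(set(sizes)))             # 8 subsets, only 4 distinct values: not injective
print(sorted(set(sizes)) == list(range(n + 1)))  # True: every value 0,...,n is attained (surjective)
```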