Constructing $R$-modules using formal sums
The construction you describe is that of a free $R$-module, and it can be made precise. Let $X$ be any set (it need not be finite, but then the following finiteness condition is superfluous). Let the elements of $M$ be functions $f : X \to R$ (families $(f_x)_{x\in X}$) such that $\{x\in X : f_x\neq 0\}$ is finite. Then define the module structure in the obvious pointwise fashion. Finally you can prove that a "copy" $X'$ of $X$ is a basis for $M$ and that $f = \sum_{x\in X'} f_x\cdot x$; this sum is well-defined because of the finiteness condition above.
Does my set have empty interior?
In general, there may be no interior points. Let $(r_n)$ be an enumeration of the rationals. Let me now construct balls that will contain the points $(x,r_n)$. For each $n$ there is a rational number $s_n$ such that $|s_n-x| \le \frac1{2^{2n+1}}$. Now set $q_{2n}:=(s_n,r_n)$, and fill all other rational points of $\mathbb R^2$ into the odd-indexed terms $(q_{2n+1})$. Then $(x,r_n) \in B_{\frac1{2^{2n}}}(q_{2n})= B_{\frac1{2^{2n}}}(s_n,r_n)$. The points $(x,r_n)$ are dense in $\{x\}\times \mathbb R$ but not in $C$. So $C$ has no interior points. [This construction is similar to the proof that subsets of separable metric spaces are separable.]
Conditional expectation of number of dice rolls
Note the following: For every $1\leqslant x\leqslant y$, $P(X\geqslant x\mid Y=y)=a^{x-1}$ where $a=4/5$. For every $x\geqslant y+1$, $P(X\geqslant x\mid Y=y)=a^{y-1}b^{x-y-1}$ where $b=5/6$. Finally, $E(X\mid Y=y)=\sum\limits_{x\geqslant1}P(X\geqslant x\mid Y=y)$.
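As a quick sanity check, here is a minimal Python sketch that evaluates the tail-sum formula above numerically; the closed form on the last line is my own summation of the two geometric pieces, not part of the original answer.

    a, b = 4/5, 5/6

    def cond_expectation(y, terms=10_000):
        # E(X | Y = y) = sum_{x >= 1} P(X >= x | Y = y), split as in the answer.
        total = sum(a ** (x - 1) for x in range(1, y + 1))
        total += sum(a ** (y - 1) * b ** (x - y - 1) for x in range(y + 1, terms))
        return total

    for y in (1, 2, 5):
        # My closed form: (1 - a^y)/(1 - a) + a^(y-1)/(1 - b).
        closed = (1 - a ** y) / (1 - a) + a ** (y - 1) / (1 - b)
        print(y, cond_expectation(y), closed)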
Quadrilateral $ABCD$ with $AB=AD$, $\angle BAD=60^\circ$, $\angle BCD=120^\circ$. Prove $BC+DC=AC$
Hint: $ABD$ is an equilateral triangle. Hint: $ABCD$ is a cyclic quad. The result follows by applying Ptolemy's theorem.
Real analysis proof where $n\geq1$ in the naturals by induction
$n\geq 1$ by definition of the natural numbers.
$\cap$-stable subset of a Dynkin system implies that its $\sigma$-algebra is contained in the Dynkin system?
The argument is simple indeed (I had overlooked Theorem 5.5 in Schilling; they even refer to this theorem, but I thought they were referring to Lemma 5.4 for some reason): By Theorem 5.5, we have $\delta(\mathscr A\times\mathscr B)=\sigma(\mathscr A\times\mathscr B)$, since $\mathscr A\times\mathscr B$ is stable under finite intersections. Since $\mathscr A\times\mathscr B\subset\mathscr D$, we have that $\delta(\mathscr A\times\mathscr B)\subset\delta(\mathscr D)=\mathscr D$ (by Prop. 5.3), hence the inclusion I wanted to show holds.
Check convergence $\sum\limits_{n=1}^\infty\left(\sqrt{1+\frac{7}{n^{2}}}-\sqrt[3]{1-\frac{8}{n^{2}}+\frac{1}{n^{3}}}\right)$
Hint: Consider the expansion of the summand in powers of $1/n$ for $n$ large: $$\sqrt{1+\frac{7}{n^{2}}}-\sqrt[3]{1-\frac{8}{n^{2}}+\frac{1}{n^{3}}} \approx \left (1 + \frac{7}{2 n^2} \right ) - \left (1 - \frac{8}{3 n^2} \right ) = \frac{37}{6 n^2}$$ Use the comparison test against the convergent series $\sum 1/n^2$.
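A short numeric check (my own addition): multiplying the summand by $n^2$ should approach $37/6 \approx 6.1667$, which is exactly what justifies the comparison with $\sum 1/n^2$.

    from math import sqrt

    # n^2 * (summand) should tend to 37/6 as n grows.
    for n in (10, 100, 1000, 10000):
        term = sqrt(1 + 7 / n**2) - (1 - 8 / n**2 + 1 / n**3) ** (1 / 3)
        print(n, n**2 * term, 37 / 6)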
Any locally compact space is peripherally compact
If compact sets are closed (so in a Hausdorff space, e.g.) $\partial B$ is a closed subset of $\overline{B}$ so also compact. So in a Hausdorff space having one compact neighbourhood implies we have a local base of compact neighbourhoods (in the general sense), and so then the space is peripherally compact. If $X$ is $\Bbb R$ in the included point topology wrt $0$, say, then each point $x$ has a compact neighbourhood $\{x,0\}$ but the boundaries of open sets are mostly not compact.
Napier analogy and algebra in triangle.
$$\sum_{cyc}\frac{a-b}{a+b}=\sum_{cyc}\frac{(a-b)(c^2+ab+ac+bc)}{\prod\limits_{cyc}(a+b)}=$$ $$=\frac{\sum\limits_{cyc}c^2(a-b)}{\prod\limits_{cyc}(a+b)}=\frac{(a-b)(a-c)(b-c)}{\prod\limits_{cyc}(a+b)}=-\prod_{cyc}\frac{a-b}{a+b}$$ and we are done!
Evaluate $\lim \limits_{x \to 2} \frac{(\cos\theta)^x+(\sin\theta)^x-1}{x-2}, \theta \in (0,\frac{\pi}{2})$
HINT: Write $1=\cos^2\theta+\sin^2\theta$. Now set $x-2=u$ in $$\lim_{x\to2}\dfrac{a^x-a^2}{x-2}=a^2\lim_{u\to0}\dfrac{a^u-1}u=a^2\ln a,$$ using $\lim_{h\to0}\dfrac{e^h-1}h=1$ and $a=e^{\ln a}$.
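Following the hint, the limit works out to $\cos^2\theta\ln\cos\theta+\sin^2\theta\ln\sin\theta$; here is a small numeric check of that value (my own addition), at a sample $\theta$:

    from math import cos, sin, log, pi

    t = pi / 6  # sample theta in (0, pi/2)
    expected = cos(t)**2 * log(cos(t)) + sin(t)**2 * log(sin(t))
    for h in (1e-2, 1e-4, 1e-6):
        x = 2 + h
        print(h, (cos(t)**x + sin(t)**x - 1) / (x - 2), expected)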
How to prove that the bracket of two vector fields can be computed by second derivatives
For the vector field $X$ and the associated dynamical system $\dot{x}=X(x)$ we have $$x(t+h)=x(t)+\int_t^{t+h}{X(x(s))ds}\\ =x(t)+\int_t^{t+h}{\left[X(x(t))+\frac{dX}{dx}(x(t))X(x(t))(s-t)\right]ds}+o(h^2)\\ =x(t)+X(x(t))h+\frac{1}{2}\frac{dX}{dx}(x(t))X(x(t))h^2+o(h^2)$$ We calculate the evolution of an initial point $x_0$ under the various flows. Initially under $\Phi_t^X$ $$x(t)=x_0+X(x_0)t+\frac{1}{2}\frac{dX}{dx}(x_0)X(x_0)t^2+o(t^2)$$ Then under $\Phi_t^Y$ $$x(2t)=x(t)+Y(x(t))t+\frac{1}{2}\frac{dY}{dx}(x(t))Y(x(t))t^2+o(t^2)\\=x_0+[X(x_0)+Y(x_0)]t+\left[\frac{1}{2}\frac{dX}{dx}(x_0)X(x_0)+\frac{dY}{dx}(x_0)X(x_0)+\frac{1}{2}\frac{dY}{dx}(x_0)Y(x_0)\right]t^2+o(t^2)$$ Then under $\Phi_{-t}^X$ $$x(3t)=x(2t)-X(x(2t))t+\frac{1}{2}\frac{dX}{dx}(x(2t))X(x(2t))t^2+o(t^2)\\ =x_0+Y(x_0)t+\left[-\frac{dX}{dx}(x_0)Y(x_0)+\frac{dY}{dx}(x_0)X(x_0)+\frac{1}{2}\frac{dY}{dx}(x_0)Y(x_0)\right]t^2+o(t^2)$$ And finally under $\Phi_{-t}^Y$ $$x(4t)=x(3t)-Y(x(3t))t+\frac{1}{2}\frac{dY}{dx}(x(3t))Y(x(3t))t^2+o(t^2)\\ =x_0+\left[-\frac{dX}{dx}(x_0)Y(x_0)+\frac{dY}{dx}(x_0)X(x_0)\right]t^2+o(t^2)\\ =x_0+\mathcal{L}_XY(x_0)t^2+o(t^2)$$
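Here is a hedged numeric sketch (my own addition): it composes the four flows for two concrete polynomial vector fields, chosen only for illustration, and checks that $(x(4t)-x_0)/t^2$ approaches $\mathcal{L}_XY(x_0)=\frac{dY}{dx}X-\frac{dX}{dx}Y$ at $x_0$.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Two sample vector fields on R^2 (arbitrary choices for illustration).
    def X(p):
        x, y = p
        return np.array([y, -x + x * y])

    def Y(p):
        x, y = p
        return np.array([x * x, y])

    def flow(V, p0, t):
        # High-accuracy integration of p' = V(p) for time t (t may be negative).
        sol = solve_ivp(lambda s, p: V(p), (0.0, t), p0, rtol=1e-12, atol=1e-12)
        return sol.y[:, -1]

    def bracket(p, h=1e-6):
        # [X, Y](p) = DY(p) X(p) - DX(p) Y(p), via finite-difference Jacobians.
        def jac(V):
            return np.column_stack([(V(p + h * e) - V(p - h * e)) / (2 * h)
                                    for e in np.eye(2)])
        return jac(Y) @ X(p) - jac(X) @ Y(p)

    p0 = np.array([0.3, 0.7])
    t = 1e-3
    q = flow(Y, flow(X, flow(Y, flow(X, p0, t), t), -t), -t)
    print((q - p0) / t**2)   # should approximate the bracket below
    print(bracket(p0))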
Convergence in topologies
They need not be the same. Let $X = ^{\omega_1}\mathbb{Z}$, the set of integer sequences of length $\omega_1$, and give it the order topology $\tau_1$ induced by the lexicographic order. Every linearly ordered topological space is $T_1$ and hereditarily normal, so $\langle X,\tau_1 \rangle$ is certainly $T_3$. There are no non-trivial convergent sequences in $\langle X,\tau_1 \rangle$. To see this, suppose that $x \in X$, and $A \subseteq X\setminus\{x\}$ is countable. There is an $\alpha < \omega_1$ such that for each $y\in A$ there is some $\xi < \alpha$ such that $x(\xi)\ne y(\xi)$. Let $x^-,x^+\in X$ be defined as follows: $$\begin{align*} x^-(\xi) &= \begin{cases} x(\xi),&\text{if }\xi \ne \alpha\\ x(\alpha)-1,&\text{if }\xi = \alpha \end{cases}\\ x^+(\xi) &= \begin{cases} x(\xi),&\text{if }\xi \ne \alpha\\ x(\alpha)+1,&\text{if }\xi = \alpha. \end{cases} \end{align*}$$ Then $x \in (x^-,x^+) \subseteq X\setminus A$. Now let $\tau_2$ be the discrete topology on $X$; clearly $\tau_1 \subseteq \tau_2$, and there are no non-trivial convergent sequences in $\langle X,\tau_2 \rangle$, either. But the Borel $\sigma$-field of $\tau_2$ is $\wp(X)$, and I’m reasonably sure that that of $\tau_1$ isn’t. Edit: I originally had $X = ^{\omega_1}2$ with the lexicographic order topology, which, as Byron Schmuland pointed out, does have convergent sequences. I was actually thinking of the tree topology on $X$, which has a base consisting of all sets of the form $\{x \in X:x \upharpoonright\alpha = \varphi\}$, where $\alpha < \omega_1$ and $\varphi \in ^{\alpha}2$. It’s zero-dimensional and $T_1$, hence completely regular, and every countable subset is easily seen to be closed and discrete by an argument similar to the one used above to show that $\langle X,\tau_1 \rangle$ has no non-trivial convergent sequences.
Residues of a Meromorphic Function
The answer to all questions is "not in general". It may happen, but that's an exceptional case. Clearly, if $A$ is a finite set, there are no problems. Now consider an infinite set $A$, and a bounded domain $\Omega$. If we have e.g. the residue $1$ at all points of $A$, then show that the given series cannot converge. On the other hand, if the sequence of residues converges to $0$ fast enough (what "fast enough" means depends on $A$), then the given series will converge absolutely (and locally uniformly) on $\Omega\setminus A$, and so define a meromorphic function on $\Omega$. Examples where the series converges absolutely (and locally uniformly) are, however, easier to construct on unbounded domains. To get some experience, show that $$\sum_{n = 0}^\infty \frac{1}{z - n^2}$$ converges absolutely and locally uniformly on $\mathbb{C}\setminus \{n^2 :n \in \mathbb{N}\}$, and hence defines a meromorphic function on $\mathbb{C}$.
How is the Fourier transform "linear"?
Let $f$, $g$ be functions of a real variable and let $F(f)$ and $F(g)$ be their Fourier transforms. Then the Fourier transform is linear in the sense that, for complex numbers $a$ and $b$, $$F(af + bg) = a F(f) + b F(g)$$ i.e. it has the same notion of linearity that you may be used to from linear algebra. This is not a quirk - it expresses the fact that functions form an infinite dimensional vector space, with addition and multiplication by a scalar defined in the obvious way: $$(f+g)(x) = f(x) + g(x)$$ $$(af)(x) = a f(x)$$
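For a concrete illustration (my own addition), the discrete Fourier transform is a finite-dimensional analogue of this map, and its linearity can be verified directly with numpy:

    import numpy as np

    # F(a*f + b*g) = a*F(f) + b*F(g) holds exactly for the DFT.
    rng = np.random.default_rng(0)
    f = rng.standard_normal(128)
    g = rng.standard_normal(128)
    a, b = 2.0 + 1.0j, -0.5j

    lhs = np.fft.fft(a * f + b * g)
    rhs = a * np.fft.fft(f) + b * np.fft.fft(g)
    print(np.allclose(lhs, rhs))   # True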
If $x+y+z+w=29$ where x, y and z are real numbers greater than 2, then find the maximum possible value of $(x-1)(y+3)(z-1)(w-2)$
Let $$f(x,y,z,w) = (x-1)(y+3)(z-1)(w-2)$$ and $$g(x,y,z,w) = x + y + z + w - 29$$ We want to $$\max\{f(x,y,z,w)\}$$ subject to: $$g(x,y,z,w) = 0, \ \ \ x,y,z,w > 2$$ Let \begin{align*} \mathcal{L}(x,y,z,w,\lambda) &= f(x,y,z,w) + \lambda g(x,y,z,w)\\ &= (x-1)(y+3)(z-1)(w-2) + \lambda(x + y + z + w - 29) \end{align*} Then $$\nabla \mathcal{L} = 0$$ yields the constraint together with $4$ equations: \begin{equation}{\tag{1}} (y+3)(z-1)(w-2) + \lambda = 0 \end{equation} \begin{equation}{\tag{2}} (x-1)(z-1)(w-2) + \lambda = 0 \end{equation} \begin{equation}{\tag{3}} (x-1)(y+3)(w-2) + \lambda = 0 \end{equation} \begin{equation}{\tag{4}} (x-1)(y+3)(z-1) + \lambda = 0 \end{equation} Now, setting these equal to one another and solving for $y$ in terms of each of $x,z,w$, we find another $4$ equations: \begin{equation}{\tag{5}} y = x - 4 \end{equation} \begin{equation}{\tag{6}} y = z - 4 \end{equation} \begin{equation}{\tag{7}} y = y \end{equation} \begin{equation}{\tag{8}} y = w - 5 \end{equation} Now, we see that $$(y+4) + y + (y + 4) + (y+5) = 29 \Rightarrow y = 4$$ Then it is trivial to find $x,z,w$ by plugging $y=4$ into the equations above. Hope that helps!
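A quick back-substitution check (my own addition), confirming the constraint and evaluating the maximum:

    # y = 4, then x = y + 4, z = y + 4, w = y + 5 from equations (5), (6), (8).
    y = 4
    x, z, w = y + 4, y + 4, y + 5          # x = 8, z = 8, w = 9
    assert x + y + z + w == 29             # constraint satisfied
    print((x - 1) * (y + 3) * (z - 1) * (w - 2))   # 7 * 7 * 7 * 7 = 2401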
How to cite preprints from arXiv?
I don't see any reason for treating preprints on arXiv differently in references, so you should do the same thing as you would for a preprint found on the author's webpage. Personally, I would simply put NOTE = {preprint, \url{http://arxiv.org/abs/****}} in the bibtex entry. (You also need \usepackage{url} in your main tex-file, if you use this syntax. If your bibtex style supports the url field, you could put the URL there instead.) BTW by checking a few results from this Google Scholar search: "preprint arxiv" site:springerlink.com you can see that there are other people using the same convention. EDIT: After adding this answer I've noticed that it is basically the same thing as Willy Wong's suggestion from the TeX.SE thread linked in Marvis' comment: How to cite an article from Arxiv using bibtex.
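For concreteness, here is what a complete entry might look like (a sketch only: the author, title, year, and arXiv identifier below are placeholders, not a real reference):

    @unpublished{doe2012example,
      author = {Doe, Jane},
      title  = {An Example Preprint Title},
      note   = {preprint, \url{http://arxiv.org/abs/1234.5678}},
      year   = {2012}
    }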
$f$ is analytic in the unit disc and $f(z)=f(z^2)$ for every $z\in D(0,1)$. Prove that $f$ is constant
Since $f$ is continuous, and $f(z) = f(z^{2^n})$ for all $n$, we have that $$ f(z) = \lim_{n \to \infty} f(z) = \lim_{n \to \infty} f\left(z^{2^n}\right) = f\left(\lim_{n \to \infty} z^{2^n}\right) = f(0) $$ for all $z$ with $|z| < 1$.
Non-empty limit set for the dynamical system : $x_1' = x_1 + 2x_2 - 2x_1(x_1^2 + x_2^2)^2, \space x_2' = 4x_1 + 3x_2 - 3x_2(x_1^2 + x_2^2)^2 $
The Cauchy–Schwarz inequality can be rewritten in the form $$(x_1\cdot 1+x_2\cdot 1)^2\le (x_1^2+x_2^2)\cdot (1^2+1^2)$$ or $$(x_1+x_2)^2\le 2(x_1^2+x_2^2).$$ This implies that $$ \dot V\leq 3(x_1+x_2)^2 - 2(x_1^2 + x_2^2)^3\le 6(x_1^2+x_2^2)-2(x_1^2+x_2^2)^3 $$ It is easy to check that $\dot V<0$ for all $(x_1,x_2)$ with $x_1^2+x_2^2>\sqrt3$; thus the solution for any initial value enters the bounded set $$ \Omega_C=\left\{ (x_1,x_2):\; x_1^2+x_2^2<C \right\} $$ in finite time for any $C>\sqrt3$ and stays there forever, hence any solution is bounded and, due to the Bolzano–Weierstrass theorem, has a nonempty $\omega$-limit set.
$\min\{X, Y\}$ is geometrically distributed according to parameter $1 - (1-p)^{2}$
\begin{align} P(\min(X,Y) > k) &= P(X > k, Y>k) \\ &= P(X > k)P(Y>k) \\ &= P(X >k)^2 \\ &=((1-p)^2)^{k} \end{align} Hence the CDF is $1-((1-p)^2)^k=1-(1-\color{blue}{(1-(1-p)^2)})^k$ Hence it is a geometric distribution (we just have to compare with the CDF of a geometric distribution) with success probability $1-(1-p)^2.$
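A quick Monte Carlo sanity check (my own addition): numpy's geometric sampler counts trials up to and including the first success, matching the support $\{1,2,\dots\}$ used here.

    import numpy as np

    rng = np.random.default_rng(1)
    p, n = 0.3, 1_000_000
    m = np.minimum(rng.geometric(p, n), rng.geometric(p, n))
    q = 1 - (1 - p) ** 2
    print(m.mean(), 1 / q)   # both ≈ 1.96, the mean of a Geometric(q)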
Find the upper bound for $\chi(G)$ using theorem 8.20
As JMoravitz points out, there are only two vertices of degree at least six, so $\max_{H \subseteq G} \delta(H) \leq 5$. By deleting the edge that connects these two vertices of degree six, we obtain a $5$-regular subgraph $H'$ so that $\max_{H \subseteq G} \delta(H) \geq \delta(H') = 5$. Hence, using the given theorem, it follows that: $$ \chi(G) \leq 1 + \max_{H \subseteq G} \delta(H) = 1 + 5 = 6 $$ Now to obtain the lower bound $\chi(G) \geq 5$, we argue by contradiction. Suppose instead that $G$ is $4$-colourable. Then without loss of generality, we can colour the four vertices of the $4$-clique subgraph with four distinct colours as follows: But by symmetry, $G$ has another $4$-clique, two of whose vertices are already coloured blue and purple. This forces the remaining two vertices of this reflected $4$-clique to be coloured as follows: But it is now impossible to properly colour the topmost vertex, as it is adjacent to four vertices with four different colours. To prove the tighter upper bound $\chi(G) \leq 5$, we use the following proper $5$-colouring: Thus, $\chi(G) = 5$, as desired.
Prove that there exists a pair $(i,j)$
Observation 1. According to Wilson's theorem, we have $\prod_{k=1}^{p-1}a_{k} \equiv -1 \pmod p$ and $\prod_{k=1}^{p-1}k \equiv -1 \pmod p$. So we obtain $\prod_{k=1}^{p-1}ka_{k} \equiv (-1)^2 \equiv 1 \pmod p$. Observation 2. For any $k \in \{1,\dots, p-1\}$, $ka_{k} \not\equiv 0 \pmod p$. For the sake of contradiction, assume that $ia_{i} \not\equiv ja_{j} \pmod p$ for every pair $i \neq j$. Then according to Observation 2, we conclude $$\{ka_k \bmod p \mid k=1,\dots,p-1\} = \{1,2,\dots,p-1\},$$ where the elements of both sets are congruence classes modulo $p$. From this we derive $$\prod_{k=1}^{p-1}ka_{k} \equiv \prod_{k=1}^{p-1}k \equiv -1 \pmod p,$$ by Wilson's theorem again, which contradicts Observation 1, since $1 \not\equiv -1 \pmod p$ when $p > 2$.
Merge two sets, list and tree
Unfortunately your code is far from perfect, or I am misunderstanding it. If I do understand your intentions, there are multiple places where it could be improved; to name some issues: In the following code \begin{align} &\verb`if(p2 != NULL)`\\ &\verb` p2=p2->LC`\\ &\verb`else`\\ &\verb` p2=p2->RC` \end{align} the else branch looks up NULL->RC, which would cause an error. In the first loop you never set new->next. In general, you need either some additional data structure (like a stack or list to remember tree pointers), more loops (to recreate the tree pointers), or recursion; otherwise you won't be able to access all the elements of the tree. To approach this problem I suggest you divide it into two parts: making the tree into a list and then merging two lists. Some may call such a solution suboptimal, but it is simple, natural, and its asymptotics are just as good (even if the constants are a bit worse). To give you a start, the following function appends at the front the contents of a tree tree to a list tail (I'm trying to write in a similar style of pseudocode to yours, sorry if I got it wrong): tree_to_list(tree, tail) = if (tree == NULL) return tail else middle = tree_to_list(tree->RC, tail) new_tail = tree_to_list(tree->LC, middle) head = newcell(NODE) head->data = tree->data head->next = new_tail return head Now you only need to write a function that would merge two lists, one given and one obtained from tree_to_list; see the runnable sketch below. On the other hand, if you insist on doing this without any additional lists, you can make it using a function which takes a list and a tree and returns a pair (the first element of the merged list, the element of the list that corresponds to the last element of the tree). If you would like to follow this approach, to give you a start, consider the following code. It's not complete (some special cases are marked by ...) and it has some issues (what would happen if middle in the last branch were NULL?), but it should give you some general idea. merge_subprocedure(list, tree) if (list == NULL) ... else if (tree == NULL) ... else if (list->data <= tree->data) (first, middle) = merge_subprocedure(list->next, tree) head = newcell(NODE) head->data = list->data head->next = first return (head, middle) else head = newcell(NODE) head->data = tree->data (first, middle) = merge_subprocedure(list, tree->LC) head->next = first (first2, middle2) = merge_subprocedure(middle->next, tree->RC) middle->next = first2 return (head, middle2) Finally, I think that stackoverflow would be much better for this question. Nevertheless, I hope this helps $\ddot\smile$
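To make the first suggestion concrete, here is a hedged runnable Python sketch (my own addition): the node classes stand in for the NODE cells above, and the flattening is arranged in-order so that a binary search tree turns into a sorted list (the pseudocode above emits the root first instead).

    class ListNode:
        def __init__(self, data, next=None):
            self.data, self.next = data, next

    class TreeNode:
        def __init__(self, data, lc=None, rc=None):
            self.data, self.lc, self.rc = data, lc, rc

    def tree_to_list(tree, tail=None):
        # In-order flattening: left subtree, root, right subtree, then tail.
        if tree is None:
            return tail
        right = tree_to_list(tree.rc, tail)
        return tree_to_list(tree.lc, ListNode(tree.data, right))

    def merge(a, b):
        # Standard merge of two sorted linked lists.
        dummy = tail = ListNode(None)
        while a and b:
            if a.data <= b.data:
                tail.next, a = a, a.next
            else:
                tail.next, b = b, b.next
            tail = tail.next
        tail.next = a or b
        return dummy.next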
$x\in \partial A \iff d(x,A)=d(x,A^{c})=0 \iff x \in \partial A^{c}$
$d(x,A)=0$ implies that for any $r>0$ there exists $a \in A$ with $d(a,x) <r$. Thus $a \in B_r(x) \cap A$. Hence every ball $B_r(x)$ centered at $x$ intersects $A$. Similarly, $d(x,A^{c})=0$ implies that every ball $B_r(x)$ centered at $x$ intersects $A^{c}$. Hence $x \in \partial A$ and $x \in \partial A^{c}$.
Understanding the relationship between a distribution, an event, and a random variable
"A probability distribution defines a function mapping events to a range $[0,1]$" Yes, the function maps subsets of the sample space (events) to that real interval. Further, it is required that the probability measure for the entire sample space equals $1$, and the probability measure for the empty set equals zero. "An event is a set of (equivalent?) objects" An event is a subset of the sample space; a set of outcomes. "The probability of anything from an event set occurring is the sum of the probabilities of its elements (naturally)" Well, that works for sample spaces of countably many outcomes. More generally, the probability for any union of pairwise disjoint events is the sum of the probabilities for each event. $$\forall A\subseteq U~\forall B\subseteq U: \Big(~A\cap B=\emptyset \to \big(\mathsf P(A\cup B)=\mathsf P(A)+\mathsf P(B)\big)~\Big)$$ "The probability of a union of events is bounded above by the sum of their probabilities (union bound / Boole's inequality, seems straightforward)" Yes. Even when such events may not be disjoint, the probability for their union cannot be greater than the sum of their probabilities. $$\mathsf P(\bigcup_i E_i) \leq \sum_i \mathsf P(E_i)$$ "A discrete random variable is a function mapping a value from $U$ to some other set $V$ (it is said to "take values in V" -- why "take"?) and it defines a distribution on $V$" Well, if $U$ is the sample space, $V$ some measure space, and the image only contains countably many values, then that is so. Outcomes are said to 'take values', in the sense that they are assigned them by this mapping. "$Pr[X=x]$ represents the probability that event x will occur under the probability distribution defined by random variable (function) $X$ (I'm especially interested in getting this right since he seems to use this syntax a lot)" Then we'll take it slow. $X$ is a discrete random variable. It is a function of the sample space ($U$) mapping to measure space $V$. It is to be understood that $X$ on its own stands for the measure of the outcome which is realised (i.e. the one that "happens"). $x$ is a value in that measure space. $X=x$ is an event. It is shorthand for the event $\{\omega\in U: X(\omega)=x\}$, the set of all outcomes taking value $x$ (i.e. all outcomes with an $X$-measure of $x$). And so $\Pr(X=x)$ is the probability for that event. "A uniform random variable is a random variable whose probabilities are all $1/|U|$ (simple enough)" That is for a uniform discrete random variable. Later you will meet uniform continuous random variables, which have a similar definition. "The notation for a uniform random variable is $r \xleftarrow R U$ -- what does the $R$ represent here?" "Selected at Random", presumably. I am not familiar with this notation.
Boundary of compact set
No. Take $A$ to be any infinite set and $x$ to be anything not in $A$. Now consider the space $X = A \cup \{x\}$ with the topology where a set is open iff it's empty or it contains $x$. Thus $A$ has the discrete topology as a subspace of $X$ and $\{x\}$ is compact (as it's finite). But $\overline{\{x\}} = X$ and $\{x\}^\circ = \{x\}$, so $\partial \{x\} = A$ which—being an infinite discrete space—is not compact.
$G$ infinite where every non trivial proper subgroup is maximal, show $G$ is simple
Use the fact that any group with no nontrivial subgroups must be finite (try to prove this for yourself if you didn't already know this): If $G$ were not simple, it would have a nontrivial normal subgroup $N$, which must be maximal. Hence $G/N$ has no nontrivial subgroups, and is therefore finite. On the other hand, using the hypothesis, you can also show that $N$ has no nontrivial subgroups. This implies that $N$ is finite.
planar representation of $K_n$
Given any four points in convex position, draw straight lines between each pair: there are six lines and exactly one crossing. So drawing straight lines between $n$ points in convex position gives $n\choose4$ crossings, although some may coincide. Try to arrange the points so that none of the crossings coincide.
Two problems about 1-form
A differential $1$-form on the line is always closed, and in fact exact: if $\alpha= f(x)dx$, it is the differential of $g(x)=\int_0^xf(t)dt$. For the circle $S^1$: it is the quotient of $\mathbb{R}$ by the translation $t(x)=x+1$. Consider the form $dx$ on $\mathbb{R}$; it is invariant under $t$ and induces a $1$-form $\alpha$ on $S^1$ which is not exact, since there does not exist a point $x$ with $\alpha_x=0$: if $\alpha=df$, then $f$ has a maximum $x_0$ (since $S^1$ is compact) and $df_{x_0}=0$. On $\mathbb{R}^2$, the form $xdy$ is not closed, since its differential is $dx\wedge dy$.
How to calculate the expectation for the following cost function
Since $$ W=(X_2+X_3)\mathbb 1_{\{X_2<10\}}\mathbb 1_{\{X_3<10\}} = X_2 \mathbb 1_{\{X_2<10\}}\mathbb 1_{\{X_3<10\}} + X_3\mathbb 1_{\{X_2<10\}}\mathbb 1_{\{X_3<10\}} $$ and $X_2,X_3$ are independent, you can split it into $$ \mathbb E[W] = \mathbb E[X_2; X_2<10]\mathbb P(X_3<10)+\mathbb E[X_3; X_3<10]\mathbb P(X_2<10) $$ $$ =2\left(1-e^{-10\lambda}\right)\int_0^{10} x \lambda e^{-\lambda x } dx $$ Note that $$ \mathbb E[W] \neq \mathbb E[X_2; X_2<10]+\mathbb E[X_3; X_3<10]. $$ A double integral gives the right answer too.
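A hedged Monte Carlo sketch (my own addition; the value of $\lambda$ and the exponential model for $X_2,X_3$ are assumptions for illustration). The truncated mean is evaluated in closed form via integration by parts.

    import numpy as np

    rng = np.random.default_rng(2)
    lam, n = 0.2, 1_000_000
    x2 = rng.exponential(1 / lam, n)
    x3 = rng.exponential(1 / lam, n)
    w = (x2 + x3) * (x2 < 10) * (x3 < 10)

    # E[X; X < 10] = (1 - e^{-10 lam})/lam - 10 e^{-10 lam}  (by parts)
    partial = (1 - np.exp(-10 * lam)) / lam - 10 * np.exp(-10 * lam)
    print(w.mean(), 2 * (1 - np.exp(-10 * lam)) * partial)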
How to describe conjugacy classes for elements of $\mathbb{Z}_{2} \times \mathbb{Z}_{2}$?
If you have a group $G$, the conjugacy class of an element $h\in G$ is all the elements of the form $g\cdot h\cdot g^{-1}$ for $g\in G$. In your case, the conjugacy class of $(0,1)$ consists of the elements: $$ \begin{split} (0,0)+(0,1)+(0,0)&=(0,1) \\ (0,1)+(0,1)+(0,1)&=(0,1) \\ (1,0)+(0,1)+(1,0)&=(0,1) \\ (1,1)+(0,1)+(1,1)&=(0,1) \end{split} $$ Note that there is only one element in the conjugacy class of $(0,1)$ which is $(0,1)$. This is because the group is abelian so in all cases $g\cdot h\cdot g^{-1}=g\cdot g^{-1}\cdot h=h$, which is what Adam Hughes meant by the conjugation being trivial. The conjugacy classes of $\mathbb{Z}_2\times \mathbb{Z}_2$ are therefore the sets consisting of each element individually: $$ \begin{split} Cl((0,0))&=\{(0,0)\} \\ Cl((0,1))&=\{(0,1)\} \\ Cl((1,0))&=\{(1,0)\} \\ Cl((1,1))&=\{(1,1)\}\end{split} $$ So $\mathbb{Z}_2\times \mathbb{Z}_2$ has four conjugacy classes each consisting of one element.
construct tempered distribution
You will need something that oscillates wildly, yet defines a tempered distribution. The idea is to take something bounded (hence tempered), but with a very capricious derivative - and since derivatives of tempered distributions are tempered, we will obtain what we want. Consider, for example, $f(x)=\sin(e^x)$. This is a bounded smooth function, hence $f$ is tempered: $f\in S'(\Bbb R)$. Its derivative belongs to $S'$, too, as the derivative of a tempered distribution. Finally, $f' = e^x \cos(e^x) \in S'(\Bbb R)$, and it cannot be bounded by a polynomial for obvious reasons - an exponential grows faster than any polynomial.
Taking element that is not in given set
Provided that your background theory includes The Axiom of Extensionality and The Axiom of Regularity you can indeed take $x := J$ as a new element, i.e. we have $J \not \in J$ for all sets $J$. This is true for all commonly used set theories. However, I'd also like to add that you're posing a valid question here. There are set theories in which such a choice for $x$ may not be possible. For example, in NF, the set of all sets, call it $V$, does exist and hence for $V$ there is no possible choice of $x$ with $x \not \in V$. Note that, while it has extensionality, NF does not and cannot have the axiom of regularity for the reasons outlined above.
How could I describe a function whose domain is x>=1 for integers, starts at 3 f(1)=3, then multiplied by 2 f(2)=6, then by 3 f(3)=18, repeat
How about $f(n)=3^{\lceil n/2\rceil}\cdot 2^{\lfloor n/2\rfloor}$? Here $\lceil x\rceil$ denotes the smallest integer not less than $x$, and $\lfloor x\rfloor$ is the largest integer not greater than $x$.
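A one-line check of the first few values (my own addition):

    from math import ceil, floor

    # f(1..6): starting from 3, alternately multiply by 2 and by 3.
    print([3 ** ceil(n / 2) * 2 ** floor(n / 2) for n in range(1, 7)])
    # [3, 6, 18, 36, 108, 216]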
Find the units and zero divisors of $\mathbb Z_3 \oplus\mathbb Z_{6}$
Note that if $R_1, R_2$ are rings then $U(R_1\times R_2)\cong U(R_1)\times U(R_2)$, where $U(R)$ denotes the group of units of the ring $R$. By this, the ring $\mathbb Z_3\times \mathbb Z_6$ has $2\times 2=4$ units. In a finite ring every nonzero element is either a zero divisor or a unit. So the ring has $18-4-1=13$ zero divisors ($14$ if your convention counts $0$ as a zero divisor).
$\frac{(n+1)\log 2}{\log(2n + 2)} \ge \frac{x \log 2}{2 \log x}$
Note that $f:[e,\infty)\to\mathbb{R}$ given by $f(x)=\dfrac{x}{2\log(x)}$ is an increasing function. If $2n<x\le 2n+2$, we have $f(2n)\lt f(x)\le f(2n+2)$. That means $ \dfrac{x}{2\log(x)} \le \dfrac{2n+2}{2 \log(2n+2)}$, which implies the result $\dfrac{x}{2\log(x)}\log(2) \le \dfrac{(n+1)}{\log(2n+2)}\log(2)$.
Am I solving this constrained variational problem correctly?
(In this answer, we assume that $a\neq b$. By renaming if necessary, we may assume that $a<b$.) OP's approach is correct in principle, but his result (v1) cannot be right. OP must have made a mistake somewhere in his algebra. The coefficient in front of $\lambda$ in the integral $\int_a^b \!\mathrm{d}x~y(x)$ over the solution cannot vanish. While nothing can replace a mathematical proof, it is always good to have some intuition for what the answer could or should be. We leave it to OP to find his mistake, but below is a physical argument why his result (v1) cannot be right. In the physics model from Newtonian point mechanics, $x$ is time; $y$ is position; $y^{\prime}$ is velocity; the mass $m=1$; the Lagrange multiplier $\lambda$ is a constraint force that imposes the constraint; and $\frac{C}{b-a}$ is an average position. OP's action functional reads (up to normalization) $$ J[y]~:=~\frac{1}{2}\int_a^b \!\mathrm{d}x~(y^{\prime 2}(x)+ y^2(x))+ \lambda\left(\int_a^b \!\mathrm{d}x~y(x) -C\right). \tag{1}$$ To derive the EL eq. for $y$ (but not $\lambda$!) we can drop the last term, and use the following action functional instead $$ S[y]~:=~\int_a^b \!\mathrm{d}x~\{\frac{1}{2}y^{\prime 2}(x) -V(y(x))\}, \tag{2}$$ where $$V(y)~:=~-\left(\frac{y}{2}+\lambda\right)y \tag{3}$$ is an unstable quadratic potential. The EL eq. for $y$ is just Newton's 2nd law: $$ y^{\prime\prime}~=~-\frac{dV}{dy}~=~y+\lambda. \tag{4}$$ While $\lambda$ is a Lagrange multiplier in the action functional (1), it can be viewed as a constant external force in the action functional (2). Regardless of the boundary conditions, by tuning the external force $\lambda$ any way we please, e.g. very big, the average position $\frac{1}{b-a}\int_a^b \!\mathrm{d}x~y(x)$ of the solution must be affected, contradicting OP's result (v1).
Irrational integral $\int \frac{dx}{x \sqrt{x^2+5x+1}}$
Hint:$$ \int \frac{dx}{x\sqrt{x^2+5x+1}}=\int \frac{dx}{x^2\sqrt{1+\frac{5}{x}+\frac{1}{x^2}}}=-\int \frac{d(\frac{1}{x})}{\sqrt{(\frac{1}{x})^2+\frac{5}{x}+1}} $$
Prove that if $f$ is continuous at a point, then there is an interval around that point at which its intersection with the domain is bounded.
All you need is to choose a positive epsilon, say $\epsilon = 1$. Then there is a $\delta > 0$ such that $|f(x) - f(c)| < \epsilon = 1$ whenever $|x-c| < \delta$ and $x$ is in the domain. This gives $-1 < f(x) - f(c) < 1$, hence $f(c) - 1 < f(x) < f(c) + 1$, and therefore $|f(x)| < 1 + |f(c)| =: M$, which shows that $f$ is bounded on the intersection of $(c-\delta, c+\delta)$ with the domain.
Cardinality of Sigma Algebra
For the infinite case, let us pick a countable subset $T$ of $X$ such that $\mathfrak{M}$ induces an infinite sigma algebra over $T$ (that is, define $\mathfrak{M}(T) = \{A\cap T: A\in\mathfrak{M}\}$). If we manage to show that $\mathfrak{M}(T)$ contains a subset of cardinality $2^{\aleph_0}$, we are done. Hence it is enough to show that an arbitrary infinite sigma algebra over $\mathbb{N}$ contains a subset of cardinality $2^{\aleph_0}$. Let's work with that. We need more assumptions on $\mathfrak{M}(\mathbb{N})$ (see Brian's comments). So assume that $(*)$ there exists $\mathfrak{A} = \{A_k\}_{k\in\mathbb{N}}$, a sequence of nonempty pairwise disjoint elements of $\mathfrak{M}(\mathbb{N})$. This sequence is in bijective correspondence with $\mathbb{N}$, hence the set of all possible finite and countable unions of elements of the sequence $\mathfrak{A}$ is in bijective correspondence with the powerset of $\mathbb{N}$; that is, $\mathfrak{M}(\mathbb{N})$ is at least of size $2^{\aleph_0}$. On the other hand, $\mathfrak{M}(\mathbb{N})\subset \mathcal{P}(\mathbb{N})$. So, given $X$ and an infinite sigma algebra on $X$, there exists (countable) $T\subset X$ such that if the induced sigma algebra on $T$ satisfies $(*)$, then $\mathfrak{M}(T)$ has cardinality $2^{\aleph_0}$.
Can natural deduction prove its own rules, as my logic book says? Is there a level confusion there?
Natural Deduction consists of a set of fundamental rules, which are each independent, and justified by the semantics of the connectives.   The fundamental rules can be used to prove sentences which may be used to justify derived rules.   Sometimes these sentences may be called Tautological Consequences (TautCon). Here is the Natural Deduction proof for $\vdash (p\to q)\to(\lnot q\to\lnot p)$ using only the usual fundamental rules. (Note: in most Natural Deduction systems, implication equivalence is not actually considered a fundamental rule.) $$\def\fitch#1#2{\quad\begin{array}{|l} #1 \\ \hline #2 \end{array}} \fitch{} {\fitch{1.~p\to q\hspace{14.75ex}\text{Assumption}} {\fitch{2.~\lnot q\hspace{14.5ex}\text{Assumption}} {\fitch{3.~p\hspace{12.5ex}\text{Assumption}} {4.~q\hspace{12.5ex}\text{1,3,Conditional Elimination} \\5.~\bot\hspace{11.75ex}\text{2,4,Negation Elimination}} \\6.~\lnot p\hspace{14.25ex}3{-}5,\text{Negation Introduction}} \\7.~\lnot q\to\lnot p\hspace{11.5ex}2{-}6,\text{Conditional Introduction}} \\8.~(p\to q)\to(\lnot q\to\lnot p)\hspace{2ex}1{-}7,\text{Conditional Introduction}}$$ Now, because this sentence is provable, we don't need to repeat all the above typesetting to apply conditional elimination.   We can just cite such a proof to justify deriving $(\lnot \psi\to\lnot \phi)$ from $(\phi\to\psi)$.   We can call doing this applying a derived rule of inference and, in this case, name it Contraposition .
Row swapping not working as expected for determinant
It is$$\begin {vmatrix} 1&0&1 \\ 5&-1&0 \\ 1&0&0 \end {vmatrix}=1$$ and $$\begin {vmatrix} 1&0&0 \\ 5&-1&0 \\ 1&0&1 \end {vmatrix}=-1.$$ So $$\begin {vmatrix} 1&0&1 \\ 5&-1&0 \\ 1&0&0 \end {vmatrix}=-\begin {vmatrix} 1&0&0 \\ 5&-1&0 \\ 1&0&1 \end {vmatrix}.$$
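A quick numeric confirmation of the sign change (my own addition):

    import numpy as np

    A = np.array([[1, 0, 1], [5, -1, 0], [1, 0, 0]])
    B = A[[2, 1, 0]]          # swap the first and third rows
    print(np.linalg.det(A), np.linalg.det(B))   # 1.0, -1.0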
If $x$ and $y$ are conjugates, and $H_1$ the smallest normal subgroup containing $x$ and $H_2$ that containing $y$. Show that $|H_1| = |H_2|$
You're right. Easy argument: the intersection of all normal subgroups containing a subset $X \subset G$ is indeed the smallest one with that property, and it is unique (it's usually called the normal closure of the subset). So if $x = g^{-1}yg$, then $y$ lies in the normal closure of $x$ and vice versa. (The "theorem" you're referring to probably says that if you define normal subgroups as kernels of homomorphisms, then they are precisely the subgroups invariant under conjugation.)
A problem in real valued function on compact set.
This is an example of the intermediate value theorem. Let $$g(x)=f(x+1)-f(x)$$ Then $$g(0)=f(1)-f(0)$$ and $$g(1)=f(2)-f(1)=-g(0)$$ Now if $g(0)=0$ we are done with $x=0$; otherwise $g(0)\neq 0$ and by the intermediate value theorem there is $x\in (0,1)$ such that $g(x)=0$. And we are done.
Can the Lebesgue integral be defined on a space which is not $\sigma$-finite?
DH Fremlin in his Measure Theory Volume II (2010) discusses integration in non-$\sigma$-finite spaces. He proposes an integral for semi-finite spaces. Integrals for spaces that are not at least semi-finite must contend with ill-behaved infinite sets. He proposes to transform such "bad" spaces to better behaved semi-finite, complete, and saturated spaces that rid themselves of the "bad" infinite sets while preserving the spaces' "good" properties. Fremlin's books on measure theory are free on the internet but are becoming somewhat hard to find.
What are some features of trees (graph theory)?
There are two features of trees that I always found interesting. First, they are minimally connected: the removal of any edge creates a disconnected graph; second, they are maximally acyclic on their set of vertices: joining any two vertices with a new edge produces a cycle. So, trees are delicately balanced between disconnected graphs and graphs with cycles. A standard way to define the length of a path in a tree is by the number of edges it uses. However, one may assign weights to each edge and interpret the length of a path as the sum of the weights of the edges in the path. This setup allows for the somewhat disconcerting situation in which a path using more edges may actually be "shorter" than one using fewer edges. EDIT: Re my comment to Neal. I have been sloppy. I am thinking in terms of paths connecting arbitrary pairs of vertices, not a specific pair. So, the path connecting a particular pair of vertices may use fewer edges than the path connecting a different pair, yet still be longer due to the weights of the edges in the two paths.
Probability the driver has no accident in the next 365 days
You want the first accident to be between the first year and second year. \begin{align} P(365< T \leq 2 \cdot 365) &= F(2 \cdot 365) - F(365) \end{align}
Intersection of two cosets of two different normal subgroups of finite index is a coset of a normal subgroup of finite index
$\newcommand{\Size}[1]{\left\lvert #1 \right\rvert}$Suppose $N, H$ are any subgroups of finite index in $G$. If $x N \cap y H \ne \emptyset$, let $z \in x N \cap y H$. Then $z \in x N$, so that $z N = x N$, and similarly $z H = y H$, so $$ x N \cap y H = z N \cap z H = z (N \cap H). $$ In fact clearly $z (N \cap H) \subseteq z N \cap z H$, and if $w \in z N \cap z H$, then $w = z n = z h$ for some $n \in N$ and $h \in H$, and thus $n = h \in N \cap H$. Now use the fact that if two subgroups have finite index, then their intersection has finite index. This follows from the formula $$ \Size{N H : H} = \Size{N : N \cap H} $$ which holds if $H$ has finite index (note that $N H$ is not necessarily a subgroup), so that if $N$ has also finite index $$ \Size{G : N \cap H} = \Size{G : N } \cdot \Size{N : N \cap H}. $$
Tiling an $n\times n$ Grid
Yes, $n$ black squares is minimal. No matter how you tile your grid, there will always be at least one black square in each row and in each column, because adding a new tile always places a black square in both the row and the column in which the tile was added. The best you can do is have one black line along the diagonal. Here's one way to achieve it:
An isomorphism between product of number fields, contains the same number of factors
In a product of $n$ fields there are exactly $2^n$ idempotent elements. That tells you that the number of factors is the same. To get that the factors are the same up to order, you can use the fact that in a product of fields the factors are precisely the minimal ideals.
In a metabelian group $M$, is it true that $[[u,g],f] = [[u,f],g]$ for $u\in M'$, $g,h\in M$?
Yes, this is well-known, and a proof can be found in many references, e.g., in Lemma 3.1 on page $7$ here. It states that for a metabelian group $G$ we have $$ [c,b,a]=[c,a,b] $$ for all $c\in G'=[G,G]$ and all $a,b\in G$, with the notation $[a,b,c]=[[a,b],c]$. For the proof one first shows the Hall-Witt identity $$ [c, b, a] = [b, a, c]^{−1} [c, a, b] $$ which holds for all groups, and then derives the result as a corollary.
Show that T2-space is preserved by continuous map.
Let $x,y$ be distinct points in $X$. Since $f$ is one-to-one, $f(x)$ and $f(y)$ are distinct points. Since $Y$ is Hausdorff, we can find disjoint neighbourhoods separating these two points. The preimages of these neighbourhoods are then disjoint, open (since $f$ is continuous), and contain $x$ and $y$ respectively.
Abstract Algebra. Let $\mathit{G} $ be an abelian group. Show that the elements of finite order in $\mathit{G}$ form a subgroup of $\mathit{G}$.
Not every set of elements of finite order is a subgroup. What is true, though, is that the product and inverse of elements with finite order also have finite order, so the collection of ALL elements of finite order is a subgroup. For the product of elements of finite order, let $g$ and $h$ be of order $n$ and $m$ respectively. Try to find some large number $k$ such that $(gh)^k=e$. Hint: in an abelian group exponents distribute over multiplication. I'll leave proving that inverses of elements of finite order have finite order to you.
How to solve this matrix for h and k?
Hint : Multiply the first row with $8-4h$ and the second row with $h$. Subtract the second row from the first.
Linear Algebra: solving minimization problems using orthogonal projections
By definition$$U^\perp=\{v\in V\,|\,(\forall u\in U):\langle u,v\rangle=0\}.$$It's therefore the space of all elements of $V$ which are orthogonal to each element of $U$. And $U\cap U^\perp=\{0\}$, from which it follows that you can form a direct sum with them. Because it is with this inner product that you have $\|f\|=\sqrt{\langle f,f\rangle}$, where$$\|f\|=\sqrt{\int_{-\pi}^\pi\bigl\lvert f(x)\bigr\rvert^2\,\mathrm dx}$$ Because, in this context,$$\|u-v\|=\sqrt{\int_{-\pi}^\pi\bigl\lvert u(x)-v(x)\bigr\rvert^2\,\mathrm dx}.$$
Is math capable of predicting social evolution?
The problem of prediction is not so much the math as it is the complexity of those social interactions and sparsity of data. In a very simple physics experiment, knowing all the physical laws and observing the initial state entirely, you can predict what is going to happen when, say, you increase the temperature. But in reality you neither know the laws that govern those social processes, nor do you observe the initial state perfectly. A rather recent tool used to predict outcomes is the prediction market. Suppose you want to predict the outcome of the election Obama vs Romney. You create a new market that sells two securities. The Obama security pays \$1 if and only if Obama wins (after the election is over), and \$0 otherwise (it becomes worthless). The Romney security similarly pays \$1 if and only if Romney wins. Then you let people trade these securities in the prediction market, very much like a stock market. Now, intuitively, if more people think Obama wins, then more people want to buy this security, which drives up the price. Since the security pays at most \$1, you want to pay at most \$1 for that security, and since he might win, you want to bid at least \$0. Hence, the price $p$ takes value $p\in[0,1]$, and the Romney price is $1-p$. Thus, the price fulfills the axioms of probability and can be interpreted as the "market probability", "market prediction" or "market estimate" of the probability that, say, Obama wins. Indeed, prediction markets have been shown to be more accurate in predicting the outcomes of elections than polls. See here for an older overview article, and here for evidence that prediction markets outperform polls. This may not be surprising if you think about it: insiders can make a lot of money in these markets, so they use that knowledge and drive the price in the right direction.
Prob. 25(b), Chap. 4 in Baby Rudin: The set $C_1 + C_2$ need not be closed in $\mathbb{R}$ even for closed sets $C_1$ and $C_2$
Hint: Take $a_n=n\alpha-[n\alpha]$. Since $\alpha$ is irrational, the $a_n$'s are all distinct. This shows that there are infinitely many such numbers in $(0,1)$. Can you choose two such that they are within $\epsilon$-distance from each other? What about the integer multiples of the difference? Expansion: There must be a half-open interval of the form $\left[\frac{k-1}{N-1},\frac{k}{N-1}\right)$ for $k=1,2,\ldots,N-1$ containing two of the numbers $a_1,a_2,\ldots,a_N$, for there are $N-1$ such intervals and all these numbers are distinct. The inequalities will tell you that for some $i,j$ we have $0 \lt (i\alpha-[i\alpha])-(j\alpha-[j\alpha]) \lt \frac{1}{N-1}$, which implies that $a=(i-j)\alpha+([j\alpha]-[i\alpha]) \in \left(0,\frac{1}{N-1}\right)$ and $a$ is an element of $C_1+C_2$. Now go ahead and show that there is such a point in every interval of the form $\left(\frac{k}{n}, \frac{k+1}{n}\right)$ for any positive integer $n$ and any integer $k$.
Identity involving partial sums of Fourier series
This relies on switching the order of summation and integration. For one particular value of $k$: $$\begin{align}S_k &= \frac{1}{2 \pi} \int_0^{2 \pi} dx' \: f(x') \sum_{n=-k}^k e^{i n (x-x')} \\ &= \frac{1}{2 \pi} \int_0^{2 \pi} dx' \: f(x') \frac{e^{i (k+1)(x-x')} - e^{-i k (x-x')}}{e^{i (x-x')} -1}\\ &= \frac{1}{2 \pi} \int_0^{2 \pi} dx' \: f(x') \frac{\sin{\left[\left(k+\frac{1}{2}\right)(x-x')\right]}}{\sin{\left[\frac{1}{2}(x-x')\right]}} \end{align}$$ Now we want to evaluate a sum over $k$ of $S_k$: $$\begin{align}\sum_{k=0}^{N-1} S_k &= \frac{1}{2 \pi} \int_0^{2 \pi} dx' \: f(x') \frac{1}{\sin{\left[\frac{1}{2}(x-x')\right]}}\sum_{k=0}^{N-1}\sin{\left[\left(k+\frac{1}{2}\right)(x-x')\right]}\end{align}$$ Now $$\begin{align}\sum_{k=0}^{N-1}\sin{\left[\left(k+\frac{1}{2}\right)(x-x')\right]}&= \Im{\left[\sum_{k=0}^{N-1}e^{i\left[\left(k+\frac{1}{2}\right)(x-x')\right]}\right]} \\ &= \Im{\left[e^{i\left[\frac{1}{2}(x-x')\right]} \sum_{k=0}^{N-1}e^{i\left[k(x-x')\right]}\right]}\\ &=\Im{\left[e^{i\left[\frac{1}{2}(x-x')\right]}\frac{e^{i N (x-x')}-1}{e^{i(x-x')}-1}\right]}\\ &= \Im{\left[e^{i N (x-x')/2} \frac{\sin{[N (x-x')/2]}}{\sin{[(x-x')/2}]} \right]} \\ &= \frac{\sin^2{[N (x-x')/2]}}{\sin{[(x-x')/2}]}\end{align}$$ Therefore $$\sum_{k=0}^{N-1} S_k = \frac{1}{2 \pi} \int_0^{2 \pi} dx' \: f(x') \frac{\sin^2{[N (x-x')/2]}}{\sin^2{[(x-x')/2}]}$$ The stated result follows, save for the factor of $2 \pi$.
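A quick numeric check of the kernel identity used above (my own addition; $N$ and $u$ are arbitrary samples):

    import numpy as np

    # sum_{k=0}^{N-1} sin((k + 1/2) u) = sin^2(N u / 2) / sin(u / 2)
    N, u = 7, 0.83
    lhs = sum(np.sin((k + 0.5) * u) for k in range(N))
    rhs = np.sin(N * u / 2) ** 2 / np.sin(u / 2)
    print(lhs, rhs)   # both ≈ 0.136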
Demonstration by induction without using the induction hypothesis
In the induction step, the induction hypothesis is available for you to use. But there's no requirement that you use it, if you can reach the desired conclusion without it. It is only rarely possible to reach that goal without appealing to the induction hypothesis, so when you find yourself having done so, it is indeed a good habit to stop and check what is going on. It may be that either you have made a mistake along the way, or that you actually don't need induction at all -- that is, what you thought was an induction step would work as a free-standing proof of your final conclusion. In this particular case, it looks like you've hit on one of the rare cases where it is meaningful to do an induction proof without using the induction hypothesis. What's going on here is that you have two things you're allowed to use in the induction step: The number you're looking at is one plus something. The induction hypothesis holds for that "something". In your proof it turns out that part (1) is enough for you and you don't need to use part (2). That's completely fine. It is not one of the cases where the induction could have been omitted, because without the induction you wouldn't have (1). (The only other case of this happening I can recall offhand is proving "every natural number is either 0 or a successor" by induction. You could instead have used that fact as a lemma and then instead of the induction just do a case analysis on whether $n$ is $0$ or a successor. But doing it the way you do is valid too).
Probability of Train Arriving in 5-minute interval given one and only one MUST arrive every 5-minute interval.
At first glance, this problem looks ill-posed, since nothing tells you the mechanism for enforcing the rule that exactly one train must arrive in every 5-minute interval, combined with Poisson distributions of arrival. (For example, you could say that trains arrive at Poisson distributed times, except that after a train arrives there is a five minute gap and then the Poisson rate resumes, and except that if there is a 5-minute gap, a train is forced to arrive immediately.) However, in fact the problem is well-posed, and even trivial. If one and only one train must arrive in any 5-minute interval, then the trains must arrive exactly 5 minutes apart, and the probability of a train arriving in any given 5-minute interval is exactly 1.
On reciprocal-sums of integer polynomials
Partial answer for 1: take the polynomial $p(x)=(x-1)^2(x-2)^2\cdots(x-m)^2+1$. Then $p(k)=1$ for $k=1,\dots,m$, and $p>0$ on the reals, so there are no integer roots and no negative terms; obviously $S_{p,N} \ge m$ for $N \ge m$, so the supremum over all polynomials and all $N$ is definitely infinity, while in degree $2m$ we see that the supremum is at least $m$. Edit later - actually we can even take $q(x)=(x-1)(x-2)\cdots(x-m)+1$, since it is at least $1$ on the natural numbers, and then we get that in degree $m$ the supremum is at least $m$.
How does (21) factor into prime ideals in the ring $\mathbb{Z}[\sqrt{-5}]$?
Note that $(21)=(3)(7)$, so that it suffices to factorize $(3)$ and $(7)$. This is easier because $3$ and $7$ are primes in $\mathbb Z$. Is $(3)$ a prime ideal? Inspection reveals it is not: $z_1=1-\sqrt{-5}$ and $z_2=1+\sqrt{-5}$ are both not in $(3)$, but $z_1z_2=6$ is. Straightforward computations show that $(3)=(3,z_1)(3,z_2)$. Note that $(3,z_1)$ and $(3,z_2)$ are the same thing as $\lbrace x+y\sqrt{-5} \ | \ x,y\in{\mathbb Z}, y\equiv -x\ ({\sf mod} \ 3) \rbrace$ and $\lbrace x+y\sqrt{-5} \ | \ x,y\in{\mathbb Z}, y\equiv x\ ({\sf mod} \ 3) \rbrace$ respectively, and those two ideals are easily seen to be prime. Similarly, one obtains the factorization $(7)=(7,3-\sqrt{-5})(7,3+\sqrt{-5})$. In the end, the complete Dedekind factorization of $(21)$ is $$ (21)=(3,1-\sqrt{-5})(3,1+\sqrt{-5})(7,3-\sqrt{-5})(7,3+\sqrt{-5}) \tag{1} $$ Call those factors $J_1,J_2,J_3,J_4$ in that order. For an ideal $J$, denote its ideal class by $c(J)$ and let $c_i=c(J_i)$. Straightforward computations show that $J_1^2=(2+\sqrt{-5})$, $J_3^2=(-2+3\sqrt{-5})$, $J_1J_3=(1+2\sqrt{-5})$, so $c_1=c_2=c_3=c_4$ and the subgroup generated by the $c_i$ is a two-element group. One can also show that the whole class group consists of only two elements, but that’s a little harder.
Jordan Block of a complex matrix, with $A^4=I$
Observe that $$A^4=I\implies (A-I)(A+I)(A^2+I)=(A-I)(A+I)(A-iI)(A+iI)=0$$ Thus, over $\;\Bbb C\;$, the matrix's minimal polynomial divides a product of distinct linear factors; the matrix is therefore diagonalizable, which means it cannot have a Jordan block like the one you wrote.
There exist less than 3 in predicate logic
The most natural reading of the sentence Anna and Bob have less than $3$ sons is that Anna and Bob are a couple, and that they are the parents of at most two boys (and some unknown number of girls, possibly none). This does not rule out the possibility that Bob, say, has more sons by some other woman. I would interpret it, then, as the verbal equivalent of this: $$\neg\exists x,y,z(x\ne y\land x\ne z\land y\ne z\land\neg Ax\land\neg Ay\land\neg Az\land Rxa\land Rxb\land Rya\land Ryb\land Rza\land Rzb)$$ ‘There do not exist $x,y$, and $z$ that are distinct, not women, and children of both Anna and Bob.’ You can of course replace $$\neg\exists x,y,z\big(x\ne y\land x\ne z\land y\ne z\land\varphi(x,y,z)\big)$$ by $$\forall x,y,z\Big(\varphi(x,y,z)\to x=y\lor x=z\lor y=z\Big)$$ if you prefer.
Unable to understand the last part of the solution
I can't answer your first question, but I can (sort of) answer your second question: From (1), we can also get $$ (m + m'\cos\theta)^2 + {m'}^2 - {m'}^2 \cos^2 \theta = 1 $$ or $$ (m + m'\cos\theta)^2 = 1 - {m'}^2 \sin^2 \theta $$ Similarly, from (2) we can also get $$ (n + n'\cos\theta)^2 = 1 - {n'}^2 \sin^2 \theta $$ These equations are basically (4) and (5) except that the positions of $m, n$ and $m', n'$ are interchanged. Do you see how to proceed from here?
How to make the probability that two random sets have any intersection close to zero (negligible)?
The probability that $S_2$ is disjoint from $S_1$ is $\frac{\binom{p-d}{d}}{\binom{p}{d}} = \prod_{j=0}^{d-1} \frac{p-d-j}{p-j}$. This product is at least $\left( 1 - \frac{d}{p-d} \right)^d \ge 1 - \frac{d^2}{p-d}$. If $d = o( \sqrt{p})$, this is $1 - o(1)$, and hence the sets are almost surely disjoint. On the other hand, the product is at most $\left( 1 - \frac{d}{p} \right)^d \le e^{ - \frac{d^2}{p}} = o(1)$ when $d = \omega(\sqrt{p})$. Hence the sets will almost surely intersect when the set size is large compared to $\sqrt{p}$. For a heuristic argument, suppose $S_1$ and $S_2$ are formed by choosing each element independently with probability $q = \frac{d}{p}$. The probability that an element is in both sets is $q^2$. Hence the expected size of the intersection is $p q^2 = \frac{d^2}{p}$, which suggests that the threshold should be when $d = \sqrt{p}$.
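An empirical illustration of the $\sqrt{p}$ threshold (my own addition; the sizes are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(3)
    p, trials = 10_000, 2_000
    for d in (10, 100, 300):   # below, at, and above sqrt(p) = 100
        disjoint = sum(
            len(np.intersect1d(rng.choice(p, d, replace=False),
                               rng.choice(p, d, replace=False))) == 0
            for _ in range(trials)
        )
        # Compare with the heuristic exp(-d^2 / p).
        print(d, disjoint / trials, np.exp(-d**2 / p))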
Showing distribution has a $\chi^2$ distribution with df = n
One approach is to work instead with the moment generating function. Consider $t < 1/2$ and note that the moment generating function of $h(X_{i}) = \frac{1}{\theta}(X_{i} + X_{i}^{-1} - 2)$ is \begin{eqnarray} M_{h(X)}(t) &=& E\Big[ \exp\Big\{ \frac{t}{\theta}( X_{i} + X_{i}^{-1} - 2) \Big\}\Big] \\ &=& \frac{e^{-2t/\theta}}{\sqrt{2\pi\theta}} \int_{0}^{\infty}e^{1/\theta} \exp\Big(\frac{t}{\theta}\{x + 1/x \}\Big)x^{-3/2}\exp\Big(\frac{-1}{2\theta}\{x + 1/x\}\Big) dx \\ &=& \frac{e^{(1-2t)/\theta}}{\sqrt{2\pi\theta}} \int_{0}^{\infty} x^{-3/2}\exp\Big(\frac{-(1 - 2t)}{2\theta}\{x + 1/x\}\Big) dx \\ &=& \frac{1}{\sqrt{1 - 2t}} \int_{0}^{\infty} \frac{e^{(1-2t)/\theta}}{\sqrt{2\pi}}\sqrt{\frac{1 - 2t}{\theta}}x^{-3/2}\exp\Big(\frac{-(1 - 2t)}{2\theta}\{x + 1/x\}\Big) dx \\ &=& \frac{1}{\sqrt{1 - 2t}}\int_{0}^{\infty}{ p(x; \eta) dx}, \qquad \textrm{where } \eta = \theta/(1 - 2t) \\ &=& \frac{1}{\sqrt{1 - 2t}} \end{eqnarray} This is exactly the mgf of a chi-squared random variable with one degree of freedom. Hence, $h(X_{i}) \sim \chi_{1}^{2}$. Since $\frac{1}{\theta}T(x) = \sum_{i=1}^{n} h(X_{i})$, this means that $\frac{1}{\theta}T(x) \sim \chi_{n}^{2}$.
Lagrange Multipliers: "What is a Critical Point?"
Courtesy of Wolfram, here's a contour plot of $z=e^{xy}$ (in the white region, $z$ is large; in the red region, it's close to $0$) along with $x^3+y^3=16$ (the blue curve): If you are standing at the solution $(2,2)$, you can see that by walking along the constraint, you will be going downhill very quickly. You have found the maximum of $e^{xy}$ subject to the given constraint. Also note that the point $(2,2)$ is precisely where the tangent of $x^3+y^3=16$ coincides with the tangent of the level curves of $e^{xy}$.
If a function is odd/even, then its best polynomial approximation is also odd/even.
Assume $a = -b$, and to introduce some notation, fix a positive integer $m$, let $P_m$ denote the subspace of polynomial functions in $C[a,b]$ of degree at most $m$, and for any $f \in C[a,b]$, let $f_m$ denote the element of $P_m$ that is closest to $f$ in the supremum norm. (By a bunch of complicated theorems, you know such an element exists, and that it is unique. This is the key to at least one way of doing the exercise. In other words, you can leverage the known uniqueness of the best approximation into a fairly simple proof of the result.) Here is a strategy to the problem. If $f$ is even, define $p \in P_m$ by declaring for all $x \in [a,b]$ that $p(x) = (1/2)(f_m(x) + f_m(-x))$. Verify that $p$ thus defined is indeed an element of $P_m$ and that $p$ is even. Prove that $\|f - p\| \leq \|f - f_m\|$. It follows from the uniqueness property mentioned above that $f_m = p$. If $f$ is odd, follow the same program, but with $p$ defined by $p(x) = (1/2)(f_m(x) - f_m(-x))$. For a little more guidance on (1). Why $p$ is in $P_m$. Because $f_m$ is in $P_m$ by definition and it is generally true that whenever $g(x)$ is a polynomial, the function $g(-x)$ is a polynomial of the same degree, it follows that $f_m(x)$ and $f_m(-x)$ are both polynomials of degree at most $m$, and hence (because $P_m$ is a subspace of $C[a,b]$) that $p$ thus defined is a polynomial of degree at most $m$. Why $p$ is even. The fact that $p$ is even is just a simple computation from the definition. (It doesn't depend in any way on $f_m$ being a polynomial. If $g$ is any element of $C[a,b]$, the function $(1/2)(g(x) + g(-x))$ is even.) Why $\|f - p\| \leq \|f - f_m\|$. This can be seen from a short computation. Fix $t \in [a,b]$. Note that \begin{align} |f(t) - p(t)| & = |f(t) - (1/2)(f_m(t) + f_m(-t))| \\ & = (1/2) |2f(t) - f_m(t) - f_m(-t)| \\ & \leq (1/2) (|f(t) - f_m(t)| + |f(t) - f_m(-t)|) \\ & = (1/2) (|f(t) - f_m(t)| + |f(-t) - f_m(-t)|), \end{align} where in the last line we used the fact that $f$ is even to replace $f(t)$ with $f(-t)$. Now note that by definition of the norm we have $|f(t) - f_m(t)| \leq \|f - f_m\|$, and also that $|f(-t) - f_m(-t)| \leq \|f - f_m\|$ (this latter inference is where we use the fact that $a = -b$, because we need to know that $-t$ is also in $[a,b]$ to deduce that $|f(-t) - f_m(-t)| \leq \sup_{s \in [a,b]} |f(s) - f_m(s)| = \|f - f_m\|$). We can therefore deduce from the chain of above inequalities and equalities that $|f(t) - p(t)| \leq \|f - f_m\|$. Since this holds for all $t \in [a,b]$, we deduce that $\|f - p\| \leq \|f - f_m\|$ as desired. We can generalize this exercise a little bit. Suppose $V$ is a normed vector space and $W$ a subspace of $V$. Suppose further that $T: V \to V$ is a linear map with the property that (1) $T \circ T$ is the identity, (2) $T$ preserves the norm (i.e. for all $v \in V$ one has $\|T(v)\| = \|v\|$), and (3) $W$ is invariant under $T$ (i.e. for all $w \in W$, $T(w) \in W$). Suppose finally that $v \in V$ is a fixed element of $V$ and that $T(v) = v$. Theorem: under these hypotheses, if there is a unique element $w$ of $W$ such that $\|v - w\| = \inf_{z \in W} \|v - z\|$, then one has $T(w) = w$. Proof: let $w' = (1/2)(w + T(w))$. Our hypotheses imply that $w' \in W$ and that $T(w') = w'$. And clearly $\|v - w'\| = (1/2) \|v - w + v - T(w)\| \leq (1/2) (\|v - w\| + \|v - T(w)\|)$.
But since $T(v) = v$ and $T$ is linear and norm-preserving, we know that $\|v - T(w)\| = \|T(v) - T(w)\| = \|T(v - w)\| = \|v - w\|$, and it follows that $\|v - w'\| \leq \|v - w\|$ and hence (from uniqueness) that $w = w'$. End of proof. We recover the result above by taking $V = C[-b,b]$, $W = P_m$, and $T$ the map $f(x) \mapsto f(-x)$ (in the even case) or $f(x) \mapsto -f(-x)$ (in the odd case).
Strategy to maximize no. of balls from N boxes
There are iterative solutions to such stopping problems; one generally solves them by working backwards. For example, if you are at the last box and you have 1 more box to choose, you obviously choose the last box. If you are at the second-to-last box and you have one more box to choose (and the number of balls in each box is random and uniformly distributed in $[B,A]$), you take the second-to-last box if and only if it has more than $(A+B)/2$ balls. Otherwise you take the last box sight unseen. Etc. Here I have ignored the fact that the boxes all have distinct numbers of balls, but that case is also relatively easy to solve by working backwards. Update: Let's first work this out fully where the boxes are not distinct. In that case there is a threshold function $f(k,n)$ defined on all $n \leq k \leq N$, where $k$ is the number of boxes left unopened and $n$ is the number of boxes left to pick, and there's an expected score function $e(k,n)$ which tells you how much you expect to get out of the last $k$ boxes with $n$ picks. You take the $k$th-from-last box iff it has at least $f(k,n)$ balls. $f$ and $e$ are defined inductively by $f(n,n)=B$ (if there are $n$ boxes left of which you have to pick $n$, you take this box no matter what's in it), $e(n,n) = n*(A+B)/2$, and by the principle that you only want to take the $k$th-from-last box if you expect it to improve your final score. That is, $$m + e(k-1, n-1) \geq e(k-1,n) \Leftrightarrow m \geq f(k,n)$$ or $$f(k,n) = e(k-1,n) - e(k-1, n-1)\;,$$ where $m$ is the occupation of the $k$th-from-last box. You also have that $$\begin{align*}&e(k,n) =\\ &P(m\geq f(k,n)) *(e(k-1,n-1) + E[m| m\geq f(k,n)]) + (1-P(m\geq f(k,n))) * e(k-1,n)\end{align*}$$ or $$e(k,n) = \frac{A-f(k,n) +1}{A-B+1} * \left(e(k-1,n-1) + \frac{f(k,n)+A}{2}\right) + \frac{f(k,n)-B}{A-B+1} e(k-1,n)\;.$$ Now we move on to the distinct-boxes case. Here, there is a threshold function $f(k,n,S)$ and an expected score function $e(k,n,S)$, defined on all $n \leq k \leq N$ and $S \subset [B,A] ,\space |S| = (A-B+1) - (N-k)$, where $k$ is the number of boxes left unseen, $n$ is the number of boxes left to pick and $S$ is the set of box occupation numbers you have not yet seen. If the $k$th-from-last box has at least $f(k,n,S)$ balls, you take it, otherwise you move to the next box. This function is defined inductively by $f(n,n,S)=\min S$ (if there are $n$ boxes left of which you can pick $n$, you take all boxes no matter what's in them), $e(n,n,S) = n * E[S]$ where $E[S]$ is the average value of the elements of $S$, and by the principle that you only want to take the $k$th-from-last box if you expect it to improve your final score. That is, $$m + e(k-1, n-1, S/\{m\}) \geq e(k-1,n,S/\{ m\}) \Leftrightarrow m \geq f(k,n,S)\;.$$ Also once we've defined $f(k,n,S)$ we can define $$\begin{align*}&e(k,n,S) =\\&\frac{1}{|S|} \left( \sum_{m \in S, m\geq f(k,n,S)} (m + e(k-1,n-1, S/\{m\})) + \sum_{m \in S, m< f(k,n,S)} e(k-1,n,S/\{m \}) \right).\end{align*}$$ These are all tricky to compute, but you can do it by working backwards and thinking of this like a tree. You use $e(k-1,n-1)$ and $e(k-1,n)$ to compute $f(k,n)$, then use all 3 quantities to compute $e(k,n)$.
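Here is a hedged sketch of the backward recursion for the i.i.d. (non-distinct) case (my own addition). It assumes occupancies uniform on the integers $\{B,\dots,A\}$; the conventions $e(k,0)=0$ and the integer rounding of the threshold are mine.

    import math
    from functools import lru_cache

    A, B = 100, 0   # box occupancies uniform on the integers {B, ..., A}

    @lru_cache(maxsize=None)
    def e(k, n):
        # Expected total from the last k unseen boxes with n picks remaining.
        if n == 0:
            return 0.0                  # convention: no picks left
        if k == n:
            return n * (A + B) / 2      # forced to take every remaining box
        f = e(k - 1, n) - e(k - 1, n - 1)   # take the current box iff m >= f
        t = min(max(math.ceil(f), B), A)    # smallest integer occupancy taken
        p_take = (A - t + 1) / (A - B + 1)
        take = p_take * ((t + A) / 2 + e(k - 1, n - 1))
        skip = (1 - p_take) * e(k - 1, n)
        return take + skip

    print(e(2, 1))    # ≈ 62.62: take the first of two boxes iff it holds >= 50
    print(e(10, 3))   # expected score with 10 boxes left and 3 picks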
How many pages does the book have?
On average, the last $P$ pages have $r=\frac DP$ digits each. If $r$ is an integer, then any $Q$ with $10^{r-1}+P-1\le Q\le 10^r-1$ is a solution, provided $10^{r-1}+P-1\le 10^r-1$ (i.e., $P\le 9\cdot 10^{r-1}$). If on the other hand $P> 9\cdot 10^{r-1}$, then $Q$ must be $\ge 10^r$; more precisely, we need as many $(r+1)$-digit pages as $(r-1)$-digit pages, so $Q=10^r-1+\frac{P-9\cdot 10^{r-1}}2$, unless that makes $Q-P<10^{r-1}$, in which case we have to adjust $Q$ once again. Ultimately, this boils down to some kind of binary search. If $r$ is not an integer, there is at most one valid $Q$. We can find a first approximation by assuming that we start with $\lfloor r\rfloor$-digit numbers and end with $P\cdot(r-\lfloor r\rfloor)$ numbers of $\lfloor r\rfloor +1$ digits. That would make $Q=10^{\lfloor r\rfloor}+P\cdot(r-\lfloor r\rfloor)-1=10^{\lfloor r\rfloor}+(D\bmod P)-1$. However, if this makes $Q\ge 10^{\lfloor r\rfloor+1}$ or $Q-P+1<10^{\lfloor r\rfloor-1}$, we have to do some fine-tuning similar to the first case, again with a kind of binary search.
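Since the fine-tuning amounts to searching a monotone quantity, here is a small Python sketch (function names and the upper bound are my own choices) that finds $Q$ directly:

```python
def digits_in_last_pages(Q, P):
    """Total number of digits used by pages Q-P+1 through Q."""
    return sum(len(str(p)) for p in range(Q - P + 1, Q + 1))

def find_Q(P, D, hi=10**7):
    """Smallest Q whose last P pages use exactly D digits, or None.

    Sliding the P-page window one step right never decreases the digit
    count, so the count is monotone in Q and binary search applies.
    hi is an assumed upper bound on the answer.
    """
    lo = P
    while lo < hi:
        mid = (lo + hi) // 2
        if digits_in_last_pages(mid, P) < D:
            lo = mid + 1
        else:
            hi = mid
    return lo if digits_in_last_pages(lo, P) == D else None

print(find_Q(P=25, D=68))  # 117: pages 93..117 use exactly 68 digits
```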
Continuity of derivative of convex function
Continuity follows immediately from Darboux's theorem: the derivative of a convex function is monotone (nondecreasing), and by Darboux's theorem every derivative has the intermediate value property. It is a straightforward exercise to show that any monotonic function with the intermediate value property is continuous. Let me know if you need a hint.
Use of the Leibniz integral rule in Laplace transform proof
The Leibniz rule for an infinite region: if there is a positive function $g(x, y)$ that is integrable with respect to $x$ on $[0,\infty)$ for each $y$, and such that $\left|\frac{\partial f}{\partial y} (x, y)\right| \le g(x, y)$ for all $(x, y)$, then $$\frac{d}{dy}\int_{0}^\infty f(x,y)\,dx=\int_{0}^\infty \frac{\partial}{\partial y} f(x,y)\,dx.$$ Ref.: https://math.hawaii.edu/~rharron/teaching/MAT203/LeibnizRule.pdf
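As a quick symbolic sanity check (my own illustration with $f(t)=\sin t$, not taken from the proof), SymPy confirms the interchange for a Laplace transform:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
f = sp.sin(t)  # illustrative choice of transformable function

F = sp.integrate(sp.exp(-s*t)*f, (t, 0, sp.oo))                # L{f}(s)
lhs = sp.diff(F, s)                                            # d/ds outside
rhs = sp.integrate(sp.diff(sp.exp(-s*t)*f, s), (t, 0, sp.oo))  # d/ds inside
print(sp.simplify(lhs - rhs))  # 0, as the Leibniz rule predicts
```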
Finding a moment generating function
What you are doing is correct, except there was a typo where the '2' should be a '1'. $$ \begin{aligned} M(t)&=E(e^{tX})=\int_{-1}^{\infty} e^{tx} e^{-x-1}\, dx \\ &=e^{-1}\int_{-1}^{\infty} e^{(t-1)x}\, dx \\ &=e^{-1}\frac{1}{t-1}\left. e^{(t-1)x} \right|_{-1}^\infty \\ &=e^{-1}\frac{e^{1-t}}{1-t} \\ &=\frac{e^{-t}}{1-t} \\ \end{aligned} $$ so long as $t<1$. We can do a quick check: the distribution given is a standard exponential distribution with parameter $1$, shifted $1$ unit in the negative direction, so we can argue that $E(X+1)=1 \Rightarrow E(X)=0$. The mean should be given by $E(X)=M'(0)=0$, which checks out. Now you can take a few more derivatives of $M(t)$, plug in $t=0$, and compare with the moments $E(X^n)$ calculated directly by integration.
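A quick symbolic check of the MGF and the first two moments (my own verification, not part of the original answer):

```python
import sympy as sp

t, x = sp.symbols('t x')
# density e^{-x-1} on [-1, oo); the integral converges for t < 1
M = sp.integrate(sp.exp(t*x)*sp.exp(-x - 1), (x, -1, sp.oo), conds='none')
M = sp.simplify(M)
print(M)                            # exp(-t)/(1 - t), possibly in equivalent form
print(sp.diff(M, t).subs(t, 0))     # E[X]   = M'(0)  = 0
print(sp.diff(M, t, 2).subs(t, 0))  # E[X^2] = M''(0) = 1
```

The second moment $1$ matches the variance of a standard exponential, which shifting does not change.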
Probability of multiple chances with at least one result
"And what if I want at least one ball to be one of the 36 for 5000 randomly selected balls, with each ball put back after every try?" Hint: think of the binomial distribution. "And what if, over the 5000 tries, I want the 36 balls to appear $n$ times?" You mean that one of the 36 balls appears $n$ times? If so, the binomial distribution will be helpful there, too.
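A hedged numerical illustration of both hints: the total number of balls is not stated in the thread, so the value 1000 below is purely an assumption.

```python
from math import exp, log, lgamma

# Each draw (with replacement) hits one of the 36 special balls with
# probability p = 36/total; total = 1000 is an assumed figure.
p, trials = 36/1000, 5000

p_at_least_one = 1 - (1 - p)**trials

def p_exactly(n):
    """Binomial P(exactly n of the draws hit a special ball), computed in
    log space to avoid overflowing the huge binomial coefficient."""
    logprob = (lgamma(trials + 1) - lgamma(n + 1) - lgamma(trials - n + 1)
               + n*log(p) + (trials - n)*log(1 - p))
    return exp(logprob)

print(p_at_least_one)  # essentially 1 for these numbers
print(p_exactly(180))  # ~0.03; 180 = 5000 * 0.036 is the expected count
```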
How to use triangle inequality to establish Reverse triangle inequality
The answer is quite easy: by the triangle inequality, $|a-b|+|b|\geq |a|$ and $|b-a|+|a|\geq |b|$. Then $|a-b| \geq \max\{|a|-|b|,\;|b|-|a|\}=\big||a|-|b|\big|$. This argument is quite standard and applies, for example, in proving the continuity of norms.
Generalisation of the binomial theorem to other geometries
A sphere has area $4\pi R^2$ and volume $\frac{4}{3}\pi R^3$, so the binomial theorem will work here if you add some $R_0$ to the previous radius. For the area: $$4\pi (R+R_0)^2 = 4\pi (R^2 + 2RR_0 + {R_0}^2)$$ And for the volume: $$\frac{4}{3}\pi (R+R_0)^3 = \frac{4}{3}\pi (R^3 + 3R^2R_0 + 3R{R_0}^2 + {R_0}^3)$$ mostly because it is so much fun to have an excuse to say RR in my equations. I don't have the formula for tori in my head, so I can't help you there, but the formulas for ellipses and hyperbolas are also polynomial in their parameters.
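A one-line symbolic check of both expansions (my own verification, confirming the $\frac{4}{3}\pi$ coefficient carries through):

```python
import sympy as sp

R, R0 = sp.symbols('R R_0')
print(sp.expand(4*sp.pi*(R + R0)**2))
# 4*pi*R**2 + 8*pi*R*R_0 + 4*pi*R_0**2
print(sp.expand(sp.Rational(4, 3)*sp.pi*(R + R0)**3))
# 4*pi*R**3/3 + 4*pi*R**2*R_0 + 4*pi*R*R_0**2 + 4*pi*R_0**3/3
```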
How do I determine from a picture of a vector field if it's a possible formula for the vector field and conservative or not?
I will answer the second part of the question, about the plots (the first part seems settled by now). Thought experiment No. 1, circulation: imagine yourself piloting a plane flying in a closed loop within the vector field, discard gravity, and ask yourself whether the total energy spent when traveling against the wind could possibly be cancelled out by the free ride you catch when flying with the wind. In other words, is it possible that the circulation (the line integral around a closed loop) is zero? We can guess by visual inspection whether this is at all possible; if it is impossible, the field is not conservative. So draw a box in a somewhat arbitrary fashion and observe whether, circulating counterclockwise, the field is consistently flowing either against or with you. Say the vector field in your example is $\langle y,-x\rangle,$ and you draw a square centered below the horizontal axis: the field fights your progress throughout the path, resulting in a negative circulation ($-190$ in my plot, represented as an orange stripe along the path). This one random closed path is enough to rule out that the field is conservative. Now contrast this with a conservative field, $\langle x, y \rangle:$ here it is not possible to tell by visual inspection that the circulation is zero, but it is certainly impossible to rule it out, since the field alternates between being aligned with the motion (green in my plot) and flowing against it (orange) along the rectangular path. Another way to look at it is to detect the presence of curl, which is zero in path-independent vector fields; the presence of negative curl is easy to notice on a plot. This can be tricky in different ways (macroscopic circulation without microscopic curl, and the role of simple connectedness). Thought experiment No. 2, flux: conservative fields are the gradient of a scalar function (a so-called "potential" function), and gradients are examples of covector fields; they indicate the directions of steepest ascent and are often represented as isolines or equipotential lines (joining points with the same value of the potential function). These can be dotted with vectors to obtain directional derivatives; equivalently, a common visualization is as stacks of parallel lines at every point. Here the experiment is running from the bottom of the valleys to the top of the mountains along the steepest paths: following the direction of the gradient field, which, portrayed in vectorial form, is orthogonal to the isolines. If the vector field is indeed the gradient of some other function, it should be possible to visualize on a plot of the vector field isolines orthogonal to the vectors, never crossing, and packed more closely together as the magnitude of the vectors increases. The mental image is of many alpinists climbing away from (perpendicular to) the isobars, from the valley to the mountains, along the steepest paths. The underlying idea is the two-dimensional flux integral through an isoline. In the case of the conservative field $\langle x, y\rangle,$ the flux across a circle centered at the origin appears as a consistent pink stripe of uniform width, and the representation with stacks (covectors) does convey the idea of isolines, with each stack denser in proportion to the length of the vector at that position.
The flux across the isoline in the plot is shown in pink, and is consistent in sign along the isoline. By contrast, the example in the original post, expressed as stacks, would not let equally dense covectors join up into an isoline. A dual example/counterexample that parallels the graph in the original question is the magnetic field created by a straight electrical wire, which spirals following the right-hand rule and is non-conservative: Ampère's circuital law $\nabla \times \mathbf{B} = \mathbf{J}_{\mathrm f} + \frac{\partial\mathbf{E}}{\partial t}$ (one of Maxwell's equations, here in units suppressing the constants) dictates that the field is curl-free only in the absence of free currents or time-varying electric fields (see here or here) (left plot, with arrows and stacks), in contradistinction to the diverging electric field created by a positive charge (right plot).
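For readers without the original images, a minimal matplotlib sketch reproduces the two fields side by side (the grid extent and styling are my own choices):

```python
import numpy as np
import matplotlib.pyplot as plt

# Quiver plots of the two fields discussed above.
Y, X = np.mgrid[-3:3:15j, -3:3:15j]

fig, axes = plt.subplots(1, 2, figsize=(10, 5))
axes[0].quiver(X, Y, Y, -X)   # <y, -x>: circulating, nonzero curl
axes[0].set_title(r'$\langle y,\,-x\rangle$ (not conservative)')
axes[1].quiver(X, Y, X, Y)    # <x, y>: gradient of (x^2 + y^2)/2
axes[1].set_title(r'$\langle x,\,y\rangle$ (conservative)')
for ax in axes:
    ax.set_aspect('equal')
plt.show()
```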
How can I prove the majority of three languages is also regular if the three languages are regular?
You can use $${\rm maj}(A,B,C)=(A\cap B)\cup(A\cap C)\cup(B\cap C).$$ Since regular languages are closed under finite unions and intersections, the right-hand side is regular whenever $A$, $B$, and $C$ are.
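A toy illustration of the same Boolean combination, with membership predicates standing in for automata (the three example languages are mine; the identical union/intersection construction works on DFAs via the product construction):

```python
def maj(A, B, C):
    """Membership test for maj(A, B, C): w belongs to at least two of the
    three languages, expressed as (A and B) or (A and C) or (B and C)."""
    return lambda w: (A(w) and B(w)) or (A(w) and C(w)) or (B(w) and C(w))

# Toy regular predicates over {0, 1}*:
in_maj = maj(lambda w: w.endswith('1'),
             lambda w: len(w) % 2 == 0,
             lambda w: '00' not in w)
print(in_maj('01'), in_maj('100'))  # True False
```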
L'Hôpital's Rule, rewriting $(8+x)^{\frac{1}{x}}$
HINT: Rewrite $(8+x)^{\frac{1}{x}}$ as $e^{\frac{1}{x}\ln(8+x)}$. So $\lim\limits_{x\to\infty} (8+x)^{\frac{1}{x}}$ becomes $e^{\lim\limits_{x\to\infty}\frac{\ln(8+x)}{x}}$. Now apply L'Hôpital's rule in the exponent.
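If you want to check where the hint leads (spoiler for the final value, of course), SymPy confirms it:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
print(sp.limit(sp.log(8 + x)/x, x, sp.oo))  # 0: the exponent vanishes
print(sp.limit((8 + x)**(1/x), x, sp.oo))   # 1 = e^0
```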
Group of order 21 and subgroup of order 7
Suppose that $H$ and $K$ are distinct subgroups of $G$ with $|H| = |K| = 7$. Then, since $7$ is prime, we must have $H \cap K = 1$, and so $$|HK| = \frac{|H||K|}{|H \cap K|} = \frac{49}{|H \cap K|} = 49$$ This obviously can't happen in a group containing only $21$ elements. Therefore $\langle a \rangle$ must be the only subgroup of order $7$. This implies that $\langle a \rangle$ is normal (indeed, characteristic). Proof: any automorphism $\phi : G \to G$ must map $\langle a \rangle$ to a subgroup of the same size, hence must map $\langle a \rangle$ to itself. Conjugation by $g$ is an automorphism, hence $g\langle a \rangle g^{-1} = \langle a \rangle$ for all $g \in G$.
integration by partial fraction
Okay, first a bunch of nitpicks: you are not trying to integrate the integral, you are trying to integrate the function itself; the integral does not equal the partial fraction decomposition, only the rational function itself does; and you are missing the $dx$ in all the integrals. So. You are trying to integrate $\displaystyle \frac{3x^2+x+4}{x^3+x}$. First, you make sure the numerator has smaller degree than the denominator, doing long division if necessary to get it into that form (done). Then, you factor the denominator completely (done). Then you set up the partial fraction decomposition problem (done): $$\frac{3x^2+x+4}{x(x^2+1)} = \frac{A}{x} + \frac{Bx+C}{x^2+1}.$$ Then, you perform the operation on the right-hand side (done): $$\frac{3x^2+x+4}{x(x^2+1)} = \frac{A(x^2+1) + (Bx+C)x}{x(x^2+1)}.$$ Then, you know the numerators are equal (done): $$3x^2 + x+4 = A(x^2+1)+ (Bx+C)x.$$ Finally, you figure out the values of $A$, $B$, and $C$. There are two strategies:

1. Do the operations on the right and write it as a polynomial; since two polynomials are equal if and only if they have the same coefficients, the coefficient of $x^2$ on the right equals $3$, etc. This sets up a system of linear equations for $A$, $B$, and $C$, which you can solve: $$3x^2 + x + 4 = Ax^2 + A + Bx^2 + Cx = (A+B)x^2 + Cx + A,$$ so $A+B=3$, $C=1$, and $A=4$. From this, you get $A=4$, $B=-1$, and $C=1$, so $$\frac{3x^2+x+4}{x(x^2+1)} = \frac{4}{x} + \frac{-x+1}{x^2+1}.$$ Now you just need to do the simpler integrals $$\int\frac{3x^2+x+4}{x^3+x}\,dx = \int\frac{4}{x}\,dx - \int\frac{x}{x^2+1}\,dx + \int\frac{1}{x^2+1}\,dx.$$

2. Plug in some values of $x$ to get information about $A$, $B$, and $C$. Specifically, pick values that make some of the terms equal to $0$ (the roots of the original denominator), and start simplifying. For example, from $$3x^2 + x + 4 = A(x^2+1) + (Bx+C)x,$$ plugging in $x=0$ you get $4 = A(1) + 0$, so $A=4$; now we have $$3x^2 + x + 4 = 4x^2 + 4 + (Bx+C)x.$$ Moving all the known terms to the left, we have $$-x^2 + x = (Bx+C)x,$$ and factoring out $x$ gives $x(-x+1) = (Bx+C)x$, from which you can cancel $x$ to get $$-x+1 = Bx+C,$$ which immediately gives $B=-1$ and $C=1$, as before. Now that we know $A$, $B$, and $C$, proceed as in 1.
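For completeness (this last step is left to the reader above), the three remaining integrals are standard; the middle one follows from the substitution $u = x^2+1$:

$$\int\frac{4}{x}\,dx - \int\frac{x}{x^2+1}\,dx + \int\frac{1}{x^2+1}\,dx = 4\ln|x| - \frac{1}{2}\ln(x^2+1) + \arctan x + C.$$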
Can a series be empty?
Yes. An empty sum is defined to have the value $0$.
Linear Algebra - Linear Transformations
You're given a basis which I'll label as $\vec{v}_1 = \cos(t)$ and $\vec{v}_2 = \sin(t)$. This lets you interpret columns of numbers as vectors: the column vector $\begin{bmatrix} a\\b\end{bmatrix}$ literally means the vector $a\cos(t) + b\sin(t)$. Writing the matrix $A$ relative to this basis is done by computing $T\vec{v}_1$ and expressing it in terms of $\vec{v}_1$ and $\vec{v}_2$, and then doing the same for $T\vec{v}_2$. In matrix language, $T\vec{v}_1$ corresponds to $A\begin{bmatrix}1\\0\end{bmatrix}$. On the other hand, $T\vec{v}_1 = T\cos(t) = (\cos t)''+7(\cos t)' +4\cos(t) = -\cos(t)-7\sin(t)+4\cos(t) = 3\cos(t) -7\sin(t) = \begin{bmatrix}3\\-7\end{bmatrix}$. So, the matrix $A$ should have the property that $A\begin{bmatrix}1\\0\end{bmatrix} = \begin{bmatrix} 3\\-7\end{bmatrix}$. This forces $A$ to have the form $A = \begin{bmatrix} 3 & *\\-7 & *\end{bmatrix}$. Can you take it from here?
Probability - None are Defective
$12$ total, $4$ defective, $8$ non-defective. Number of ways of selecting $3$ non-defective balls out of $8$: ${8\choose 3}$. Total number of ways of selecting $3$ balls: ${12\choose 3}$. Thus the required probability is $\frac{\binom{8}{3}}{\binom{12}{3}}$, and when you simplify it, it is the same as your answer (which is correct).
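A one-line check of the arithmetic (my own verification):

```python
from fractions import Fraction
from math import comb

p = Fraction(comb(8, 3), comb(12, 3))  # 56 / 220
print(p, float(p))                     # 14/55 ≈ 0.2545
```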
What are some tricks for checking a complex function's analyticity?
From Wikipedia's holomorphic function page: "Because complex differentiation is linear and obeys the product, quotient, and chain rules, the sums, products and compositions of holomorphic functions are holomorphic, and the quotient of two holomorphic functions is holomorphic wherever the denominator is not zero." This addresses your particular examples. In fact, the material in the rest of that section covers much of the first material in a complex analysis course; with the examples immediately following it, you get a starting stock of several holomorphic functions.
Expected time spent in a node between visits to another node in an asymmetric random walk on integers
\begin{equation*} \gamma^0_0=1, \qquad \gamma^0_i=q\gamma^0_{i+1}+p\gamma^0_{i-1},\quad i\neq0. \end{equation*} Thus \begin{equation*} \gamma^0_i=A+(1-A)\Big(\frac{p}{q}\Big)^i. \end{equation*} Consider first the case $i>0$. To visit $i+1$ and return to $0$, the walker must visit $i$ at least twice, so $\gamma^0_i$ is a non-increasing function of $i$. But $(p/q)^i$ is an increasing function (for $p>q$), so the only possibility is $A=1$, giving $\gamma^0_i=1$ for all $i>0$. Now consider the sites to the left of $0$; still let $i>0$. We have \begin{equation*} \gamma^0_{-i}=A+(1-A)\Big(\frac{q}{p}\Big)^i. \end{equation*} Since $\lim_{i\rightarrow\infty}\gamma^0_{-i}=0$, we get $A=0$ and $\gamma^0_{-i}=(q/p)^i$ for all $i>0$.
The limit of the convergent Fourier series of $f$ is $f$
If the Fourier series for $f$ converges uniformly to $g$, then $g$ is continuous and \begin{align} \hat{g}(n)&=\frac{1}{2\pi}\int_{0}^{2\pi}g(x)e^{-inx}dx \\ &=\lim_{N\rightarrow\infty}\frac{1}{2\pi}\int_{0}^{2\pi}\sum_{k=-N}^{N}\hat{f}(k)e^{ikx}e^{-inx}dx \\ &= \lim_{N\rightarrow\infty}\sum_{k=-N}^{N}\hat{f}(k)\frac{1}{2\pi}\int_0^{2\pi}e^{i(k-n)x}dx=\hat{f}(n), \end{align} where the interchange of limit and integral is justified by the uniform convergence. Therefore, since $f$ and $g$ have the same Fourier coefficients, the Parseval identity gives $\int_0^{2\pi}|f(x)-g(x)|^2dx = 0$. So the following holds for all $y$: $$ 0 \le \int_0^y|f(x)-g(x)|^2dx \le \int_0^{2\pi}|f(x)-g(x)|^2dx =0. $$ It follows that $\int_0^y|f(x)-g(x)|^2dx =0$ for all $0\le y\le 2\pi$. By the Fundamental Theorem of Calculus, the derivative with respect to $y$ is $|f(y)-g(y)|^2$, which must be $0$. Therefore $f=g$.
How to determine the time and place of collision of 2 vectors.
I started to answer your question when I noticed some parts don't make much sense. For one thing, you say for $A$ that time $t$ is measured in seconds, but $B$'s time variable $p$ is in minutes. Well, after $5$ minutes, i.e., $300$ seconds, object $A$'s $x$ value would be at $2 + 300(0.7) = 212$. Also, every minute, it would be increasing by $60(0.7)=42$. However, $B$'s $x$ value would start at $1$ and only increase by $1$ every minute after that. As you can see, $A$'s $x$ value starts off larger and is increasing much faster, so there's no chance it would ever coincide with $B$'s $x$ value at some later time. I believe your sheet has a mistake. If $t$ is in minutes instead, then the question becomes much more reasonable, so let us assume this. Then, since $B$ starts $5$ minutes after object $A$, $p$ is $5$ less than $t$, i.e., $p = t - 5$. Using this, you equate the $x$ and $y$ coordinate equations for objects $A$ and $B$ and solve each for $t$. If they give the same value, then the $2$ objects collide at that time. Otherwise, with different values, the $x$ and $y$ coordinates never match at the same time between the $2$ objects and, thus, they never collide. The equations become $$\begin{equation}\begin{aligned} 2 + 0.7t & = 1 + t - 5 \\ 6 & = 0.3t \\ t & = 20 \end{aligned}\end{equation}\tag{1}\label{eq1A}$$ and $$\begin{equation}\begin{aligned} t & = 35 - (t - 5) \\ 2t & = 40 \\ t & = 20 \end{aligned}\end{equation}\tag{2}\label{eq2A}$$ This means they would collide at a time of $20$ minutes, at position $x = 16$ and $y = 20$.
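The same system solved symbolically (object $B$'s coordinate equations are reconstructed from the working above):

```python
import sympy as sp

t = sp.symbols('t')  # time in minutes, as argued above
# Object A: (2 + 0.7t, t); object B starts 5 minutes later, p = t - 5,
# giving (1 + (t - 5), 35 - (t - 5)).
t_x = sp.solve(sp.Eq(2 + sp.Rational(7, 10)*t, 1 + (t - 5)), t)
t_y = sp.solve(sp.Eq(t, 35 - (t - 5)), t)
print(t_x, t_y)                       # [20] [20]: same t, so they collide
print(2 + sp.Rational(7, 10)*20, 20)  # collision point (16, 20)
```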
Does the sequence $x_n = -\frac{1}{n}$ converge to $0$ in $\mathbb{R}_\ell$?
Let $(x_n)$ be defined by $x_n=-1/n$. If $a\ge 0$, the open set $[a,\infty)$ contains no terms of $(x_n)$, so $(x_n)$ does not converge to $a$. If $a < 0$, the open set $[a,a/2)$ contains only finitely many terms of $(x_n)$, so $(x_n)$ does not converge to $a$. Hence the sequence $(x_n)$ does not converge.
Reduction modulo a prime ideal
Apparently, the operation that is performed is this: the coefficients of the polynomials $a_i(x)$ belong to the ring of algebraic integers $A$ of an algebraic number field $K$. For any prime ideal $\mathfrak p$ of $A$, we can consider the quotient ring $A/\mathfrak p$, which is an integral domain and has a field of fractions, called the residue (class) field. Alternatively, we can consider the localisation $A_{\mathfrak p}$ of $A$ at $\mathfrak p$, which is a local ring with maximal ideal $\mathfrak pA_{\mathfrak p}$, and its residue field $\,A_{\mathfrak p}/\mathfrak pA_{\mathfrak p}$. In short, we reduce each coefficient of the $a_i(x)$ modulo the prime ideal $\mathfrak p$, in the same way as we reduce a polynomial with integer coefficients modulo a prime number.
Numerical method for fourth order PDE.
This is the product-rule expansion of the fourth-order term: $$\frac{\partial^2}{\partial x^2}\!\left(K(x)\,\frac{\partial^2 w(x)}{\partial x^2}\right) = \frac{\partial^2 K(x)}{\partial x^2}\frac{\partial^2 w(x)}{\partial x^2}+2\frac{\partial K(x)}{\partial x}\frac{\partial^3 w(x)}{\partial x^3}+K(x)\frac{\partial^4 w(x)}{\partial x^4}.$$
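If a direct discretisation is wanted, here is a minimal finite-difference sketch (the grid, test functions, and the omission of boundary handling are my own choices, not from the question):

```python
import numpy as np

def apply_operator(w, K, h):
    """Second-order central-difference approximation of (K(x) w''(x))''.

    w and K are samples on a uniform grid of spacing h; boundary treatment
    is omitted (values computed only two cells away from each end).
    """
    Kw2 = np.full_like(w, np.nan)
    Kw2[1:-1] = K[1:-1] * (w[2:] - 2*w[1:-1] + w[:-2]) / h**2  # K * w''
    out = np.full_like(w, np.nan)
    out[2:-2] = (Kw2[3:-1] - 2*Kw2[2:-2] + Kw2[1:-3]) / h**2   # (K w'')''
    return out

# Quick check against the product-rule expansion: K = 1 + x^2, w = sin x.
x = np.linspace(0, np.pi, 2001); h = x[1] - x[0]
K, w = 1 + x**2, np.sin(x)
exact = 2*(-np.sin(x)) + 2*(2*x)*(-np.cos(x)) + (1 + x**2)*np.sin(x)
print(np.nanmax(np.abs(apply_operator(w, K, h) - exact)))  # small, O(h^2)
```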
Centralizer of $A$ is equal to $\langle A \rangle$
Notice that multiplying $f=c+be$ by $a$ yields $$af=ac+abe=ac+bd$$because $ae=d$, so you can drop the second equation. So the only conditions needed are $d=ae$ and $f=c+be$, and any matrix of the form $$\begin{pmatrix}c & ae \\ e & c+be\end{pmatrix}$$is in the centralizer of $A$. In particular, you can take $e=0$ and get (unsurprisingly) that any matrix of the form $cI$ commutes with $A$, even though it is not necessarily a multiple of $A$; thus the centralizer is not spanned by $A$ alone as a vector space. But notice that $$\begin{pmatrix}c & ae \\ e & c+be\end{pmatrix}=c\begin{pmatrix}1 & 0 \\ 0 & 1\end{pmatrix}+e\begin{pmatrix}0 & a \\ 1 & b\end{pmatrix},$$so the centralizer is actually the subspace generated by $I$ and $A$. Note also that $$A^2=\begin{pmatrix}a & ab \\ b & a+b^2\end{pmatrix}=a\begin{pmatrix}1 & 0 \\ 0 & 1\end{pmatrix}+b\begin{pmatrix}0 & a \\ 1 & b\end{pmatrix}=aI+bA,$$thus the subalgebra of matrices commuting with $A$ is generated by $A$ as a subalgebra.
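Both identities can be verified symbolically, assuming $A=\begin{pmatrix}0 & a\\ 1 & b\end{pmatrix}$, which is consistent with the $A^2$ computed above:

```python
import sympy as sp

a, b, c, e = sp.symbols('a b c e')
A = sp.Matrix([[0, a], [1, b]])                # the matrix from the problem
M = sp.Matrix([[c, a*e], [e, c + b*e]])        # general centralizer element
print(sp.simplify(A*M - M*A))                  # zero matrix: M commutes with A
print(sp.simplify(A*A - (a*sp.eye(2) + b*A)))  # zero matrix: A^2 = aI + bA
```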
Understanding quantifiers
From what I see, your answer is incorrect in one aspect: you've written $\forall y$ where it should be $\exists y$. The point is that you quantify over all people $x$, and then say that if there exists a friend $y$ of $x$ who has measles, then $x$ will be quarantined. Hope that answers your question! ☺️
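Written out with hypothetical predicate symbols (not fixed in the thread): $F(x,y)$ for "$y$ is a friend of $x$", $M(y)$ for "$y$ has measles", and $Q(x)$ for "$x$ is quarantined", the corrected sentence reads:

$$\forall x\,\big(\exists y\,(F(x,y)\wedge M(y))\;\rightarrow\; Q(x)\big).$$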
Suppose that $f(x) \to\ell$ as $x\to a$ and $g(y) \to k$ as $y \to\ell$. Does it follow that $g(f(x)) \to k$ as $x \to a$?
No, not necessarily: we can abuse the fact that the definition of the limit ignores the value the function may take at the point itself. For example, take $a = \ell = k = 0$ and let $f,g:\mathbb R \to \mathbb R$ be as follows. $$f(x) := 0,\qquad\text{ and}$$ $$g(x) := \begin{cases} 1 & x = 0 \\ 0 & x \neq 0.\end{cases}$$ Then $$\lim_{x\to 0} g(x) = \lim_{x\to 0} f(x) = 0,$$ but $g(f(x)) = 1$ for all $x \in \mathbb R$, so $\displaystyle\lim_{x \to 0} g(f(x)) = 1$, not $0$. Claim: the statement in the title holds if and only if one of the following holds: (a) $g$ is continuous at $\ell$, or (b) there is some neighbourhood of $a$ (excluding $a$ itself) on which $f$ does not attain the value $\ell$. Proof. Case 1. Assume $g$ is continuous at $\ell$. Let $\epsilon > 0$. Since $\lim_{y \to \ell}g(y) = k$ and $g(\ell) = k$ (by continuity), there exists $\delta>0$ such that $$ 0 \boldsymbol{\leq} |y - \ell| < \delta \implies |g(y)-k| < \epsilon. \tag{1}$$ (Note that we can only use "$\leq$" in $(1)$ because $g$ is continuous!) Since $\lim_{x \to a}f(x) = \ell$, there exists $\eta>0$ such that $$ 0 < |x - a| < \eta \implies |f(x)-\ell| < \delta, \tag{2}$$ which, combined with $(1)$, gives you the limit you want. Case 2. Assume there is some neighbourhood of $a$ (excluding $a$) on which $f$ does not attain $\ell$ at all: i.e., there exists $\tau>0$ such that $$0<|x - a|<\tau \implies f(x) \neq \ell. \tag{3}$$ Then, repeating the argument of the first case: given $\epsilon > 0$, since $\lim_{y \to \ell}g(y) = k$, there exists $\delta>0$ such that $$ 0 < |y - \ell| < \delta \implies |g(y)-k| < \epsilon. \tag{4}$$ (And without continuity, that strict inequality is all we get.) Since $\lim_{x \to a}f(x) = \ell$, there exists $\eta>0$ such that $$ 0 < |x - a| < \eta \implies |f(x)-\ell| < \delta. \tag{5}$$ But moreover, combining $(5)$ with $(3)$ gives $$ 0 < |x - a| < \min(\eta,\tau) \implies 0 < |f(x)-\ell| < \delta, \tag{6}$$ which you can combine with $(4)$ to get the desired limit. Case 3. Assume $g$ is discontinuous at $\ell$, and that $f$ attains the value $\ell$ on every (arbitrarily small) neighbourhood of $a$. We'll use a sequence. You can prove (by contradiction, using the second assumption) that there must be a sequence of points $(x_n)_{n=1}^\infty \to a$, with $x_n \neq a$ for every $n$, such that $f(x_n) = \ell$ for every $n \in \mathbb N$. Consequently, $g(f(x_n)) = g(\ell)$ for every $n$, and by the discontinuity of $g$ at $\ell$, $g(\ell) \neq k$. For contradiction, assume that $g(f(x)) \to k$ as $x \to a$. Then (by the equivalent sequential characterisation of the limit), every sequence $(a_n)_{n=1}^\infty\to a$ with $a_n \neq a$ satisfies $g(f(a_n)) \to k$. But aha! $(x_n)_n$ is exactly a sequence for which $g(f(x_n)) \not\to k$, because $g(f(x_n)) = g(\ell) \neq k$ for every $n$.
How to show that $f=0$ a.e. on $[0,1]\times [0,1]$.
Ben Derrett's argument works as well as this one, but this one is a bit shorter, and I like to differentiate integrals. Since $f$ is bounded, it is in $L^1_{loc}$. Let $(x,y)$ be a Lebesgue point of $f$ and consider that $\frac{1}{4r^2} \int_{x-r}^{x+r} \int_{y-r}^{y+r} f(u,v)\, dv\, du = 0$ for all $r$. Since these squares are nicely shrinking sets (in that the area of a square is comparable to the area of a circle), Lebesgue's differentiation theorem gives that $f(x,y) = 0$. Since the Lebesgue points form a set of full measure, $f = 0$ a.e. Edit: Lebesgue's differentiation theorem on $\mathbb{R}^n$ says that if $B_r(x)$ is the ball of radius $r$ around $x$ and $f$ is an $L^1_{loc}$ function, then for almost every $x$ we have $f(x)=\lim_{r \to 0} \frac{1}{m(B_r(x))}\int_{B_r(x)}f\, dm$, where $m$ is the Lebesgue measure. An easy lemma shows that if a sequence of sets shrinks to zero at the same rate as balls (such sets are called 'nicely shrinking sets'), then the differentiation theorem still holds.
Proof for '$AB = I$ then $BA = I$' without Motivation?
Here is a sketch of a "brutal proof" of the sort you imagine. (On rereading the other answers, you've already seen this. But the notes at the end may be interesting.) A more detailed version of the following can be found in "Bijective Matrix Algebra", by Loehr and Mendes. Remember that the adjugate matrix, $\mathrm{Ad}(A)$, is defined to have $(i,j)$ entry equal to $(-1)^{i+j}$ times the determinant of the $(n-1) \times (n-1)$ matrix obtained by deleting the $i$-th row and $j$-th column from $A$. Write down a brute force proof of the identity $$\mathrm{Ad}(A) \cdot A = A \cdot \mathrm{Ad}(A) = \det A \cdot \mathrm{Id}_n$$ by grouping like terms on both sides and rearranging. Likewise, write down a brute force proof of the identity $$\det(AB) = \det(A) \det(B).$$ So, if $AB=\mathrm{Id}$, you know that $\det(A) \det(B)=1$. Now compute $\mathrm{Ad}(A) ABA$ in two ways: $$(\mathrm{Ad}(A) A)BA = \det(A)\, BA$$ and $$\mathrm{Ad}(A) (AB) A = \mathrm{Ad}(A) A = \det(A)\, \mathrm{Id}.$$ Since $\det(A) \det(B) =1$, the scalar $\det(A)$ is invertible, so you may cancel it and deduce that $BA = \mathrm{Id}$. There is some interesting math here. Let $R$ be the polynomial ring in $2n^2$ variables $x_{ij}$ and $y_{ij}$, and let $X$ and $Y$ be the $n \times n$ matrices with these entries. Let $C_{ij}$ be the entries of the matrix $XY-\mathrm{Id}$ and let $D_{ij}$ be the entries of $YX-\mathrm{Id}$. Tracing through the above proof (if your subproofs are brutal enough) should give you identities of the form $D_{ij} = \sum_{k,\ell} P_{k \ell}^{ij} C_{k\ell}$. It's an interesting question how simple, either in terms of degree or circuit length, the polynomials $P_{k \ell}^{ij}$ can be made. I blogged about this question and learned about some relevant papers (1, 2), which I must admit I don't fully understand.
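The adjugate identity at the heart of the argument can be checked symbolically for any small $n$ (my own verification; symbolic entries keep it fully general for that $n$):

```python
import sympy as sp

n = 3
A = sp.Matrix(n, n, lambda i, j: sp.Symbol(f'a_{i}{j}'))
print(sp.simplify(A.adjugate()*A - A.det()*sp.eye(n)))  # zero matrix
print(sp.simplify(A*A.adjugate() - A.det()*sp.eye(n)))  # zero matrix
```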
Finding a graph that does not have an Euler cycle
Any connected graph in which each vertex has even degree has an Euler cycle. Try a disconnected graph in which every vertex still has even degree (and at least two components contain an edge).
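A concrete instance of the hint, checked with networkx (the specific graph is my own example):

```python
import networkx as nx

# Two disjoint triangles: every vertex has even degree (2), yet no single
# closed walk can traverse all edges, because the graph is disconnected.
G = nx.Graph([(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3)])
print(all(d % 2 == 0 for _, d in G.degree()))  # True
print(nx.is_eulerian(G))                       # False
```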
How to find the dual cone?
If the cone $K$ is a subspace, then the dual cone is the orthogonal complement of $K$. This is because $x^T y \geq 0$ for all $x$ in $K$ forces $x^T y = 0$ for all $x$ in $K$ (when $K$ is a subspace). More detail: assume that $K$ is a subspace and let $y \in K^*$ (the dual cone of $K$). If $x \in K$, then $-x \in K$ also (because $K$ is a subspace), and so it follows from the definition of $K^*$ that $y^Tx \geq 0$ and $y^T(-x) \geq 0$. In other words, $y^T x = 0$. Hence, if $y \in K^*$, then $y \in K^\perp$ (the orthogonal complement of $K$). Conversely, if $y \in K^\perp$, then $y^T x = 0 \geq 0$ for all $x \in K$, so $y \in K^*$; thus $K^* = K^\perp$. For example, if $K = \{(t,0) : t \in \mathbb{R}\} \subset \mathbb{R}^2$, then $K^* = K^\perp = \{(0,s) : s \in \mathbb{R}\}$.
Pigeonhole principle nontrivial problem
There are $2n+1$ residue classes $\pmod {2n+1}$. If two of your numbers are in the same residue class, then (by definition) their difference is divisible by $2n+1$, so let's suppose that they occupy $n+2$ distinct residue classes. Now remark that we can divide the nonzero classes into pairs that add up to a multiple of $2n+1$, namely $(1,2n),(2,2n-1),\cdots, (n,n+1)$, with $0$ left unpaired; there are $n$ pairs. Even if $0$ is one of your choices, you would still have to make $n+1$ choices from those pairs. By the pigeonhole principle, therefore, if you have $n+2$ numbers in distinct classes, you must have chosen two from the same pair, and their sum is divisible by $2n+1$, so we are done.
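An exhaustive check of the claim for a small $n$ (my own verification; for distinct residues the difference condition is vacuous, so it is the paired sums doing the work, exactly as in the proof):

```python
from itertools import combinations

n = 4
mod = 2*n + 1
# Any n+2 distinct residues mod 2n+1 contain two whose sum or
# difference is divisible by 2n+1.
ok = all(
    any((a + b) % mod == 0 or (a - b) % mod == 0
        for a, b in combinations(residues, 2))
    for residues in combinations(range(mod), n + 2)
)
print(ok)  # True
```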
divisors of natural numbers
The first one is true even if you remove the hypothesis that $a<b$. I also see no difference between the assertions.