A proof of the Isoperimetric Inequality - how does it work?
My attempt at an answer has two parts. First, I think the geometric mean thing slightly obscures what's going on and contributes to your bafflement why "the $r$'s magically fall out of the equation in the last step". A more geometric view of this step is: We know that the curve has "width" $w=2r$ (along the chosen direction). Then $A = w\overline{h} = 2r\overline{h}$, where $\overline{h}$ is the "average height" of the curve (along the chosen direction). Thus we have \[A + \pi r^2 \le lr \;\Leftrightarrow\; 2r\overline{h} + \pi r^2 \le lr \;\Leftrightarrow\; 2\overline{h} + \pi r \le l \;\Leftrightarrow\; 2\overline{h} + \frac{\pi}{2} w \le l\;.\] Now the cancellation of the $r$'s seems more natural, and the inequality gives an upper bound for the "average height" we can achieve for a given "width" $w$ and length $l$. This bound implies the isoperimetric inequality: \[l^2 \ge (2\overline{h} + \frac{\pi}{2} w)^2=(2\overline{h} - \frac{\pi}{2} w)^2 + 4\pi w\overline{h} \ge 4\pi w\overline{h}=4\pi A\;.\] (There may be a connection here to the inequality by Bonnesen cited in Christian Blatter's comment.) Second, regarding the proof as a whole, it seems useful to think of it as a way of transforming the difficult global optimization problem implied by the isoperimetric inequality (how to enclose the greatest possible area within a given circumference) into a trivial local optimization problem through some clever bookkeeping. In a sense, what makes the problem difficult is that how much area you can enclose with a curve element (say, with respect to the origin) depends both on where you are and in which direction you move, but in which direction you move in turn determines where you will end up, and hence how much area you will be able to enclose later. The proof decouples this by adding to the area element a suitable penalty which has two crucial properties: It exactly cancels the "where you are" part of how much area you can enclose, and because it is itself the area element of a circle, it automatically adds up to a constant. There are no longer any variable lengths in the integrand, only the angle between the tangent vector at the curve and the tangent vector at the corresponding point of the circle, and it is then obvious that the integral is maximized by always choosing the tangent vector of the curve parallel to the tangent vector of the circle -- which necessarily results in a circle.
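As a quick numerical sanity check of the final inequality $l^2 \ge 4\pi A$ (not part of the argument above, just a Python sketch using SciPy's complete elliptic integral for the ellipse perimeter):

```python
import numpy as np
from scipy.special import ellipe  # complete elliptic integral of the 2nd kind

def ellipse_perimeter(a, b):
    # perimeter of an ellipse with semi-axes a >= b: l = 4 a E(m), m = 1-(b/a)^2
    m = 1.0 - (b / a) ** 2
    return 4.0 * a * ellipe(m)

for b in [0.2, 0.5, 0.9, 1.0]:    # semi-minor axis, with a = 1 fixed
    l = ellipse_perimeter(1.0, b)
    A = np.pi * b                  # area pi*a*b
    print(f"b={b:.1f}  l^2 - 4*pi*A = {l ** 2 - 4 * np.pi * A:.6f}")

# The deficit is nonnegative and vanishes exactly for the circle (b = 1),
# consistent with the isoperimetric inequality.
```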
Connectedness in a Product Metric?
Connectedness is a topological property, so since all product metrics induce the same topology, it suffices to prove Theorem 11.4.4 for one particular choice of product metric. In particular, you can choose a product metric which is also conserving (e.g., $d(a,b)=\sum_i\tau_i(a_i,b_i)$). (Really, the right way to prove this is to not talk about metrics at all and only talk about topologies: $X_{j,a}$ is always a homeomorphic copy of $X_j$ if you give $P$ the product topology, and that is all the argument needs.)
Liouville's theorem and the Wronskian
Equations of Motion What you are describing is Hamilton's picture of the evolution of a dynamical system. $\mathbf q$ is the vector describing the system's configuration with generalized coordinates (or degrees of freedom) in some abstract configuration space isomorphic to $\mathbb R^{s}$; for instance $s=3n$ particle coordinates for a system of $n$ free particles in Euclidean space. $\mathbf p$ is the vector of generalized momenta attached to the instantaneous time rate of the configuration vector. For example, for particles of mass $m$ in non-relativistic mechanics, $\mathbf p= m \frac{d}{dt}\mathbf q$. Classical mechanics postulates that the simultaneous knowledge of both $\mathbf q(t)$ and $\mathbf p(t)$ at time $t$ is required to predict the system's temporal evolution for any time $t'>t$ (causality). So the complete dynamical state of the system is in fact described by the phase $\mathbf y =(\mathbf p, \mathbf q)$, which evolves in an abstract space called the phase space, isomorphic to $\mathbb R^{2s}$. Mathematically, the fact that knowledge of this phase is sufficient to predict the evolution demands that the configuration $\mathbf q$ evolve according to a system of $s$ ODEs of at most second order in time (known as Lagrange's equations) or, equivalently, that the phase evolve according to a system of $2s$ ODEs of first order in time known as Hamilton's equations of motion, $\frac{d\mathbf p}{dt}=-\frac{\partial H}{\partial \mathbf q}(\mathbf y,t)$, $\frac{d\mathbf q}{dt}=\frac{\partial H}{\partial \mathbf p}(\mathbf y,t)$, given some Hamilton function $H(\mathbf y,t)$ describing the system. Physically it represents the mechanical energy. For non-dissipative systems it does not depend on time explicitly, and it is then a conserved quantity along the phase-space curves. More generally, the equations of motion can be written as a first-order (Cauchy-form) system $$\frac{d\mathbf y}{dt} = \mathbf f(\mathbf y,t)\tag{1}$$ with $\mathbf f$ some vector function on $\mathbb R^{2s}\times\mathbb R$. In the Hamiltonian formalism, $\mathbf f= (-\frac{\partial H}{\partial \mathbf q}, \frac{\partial H}{\partial \mathbf p})$. Liouville's theorem Now consider the phase flow $\mathbf y_t$, that is, the one-parameter ($t\in \mathbb R$) group of transformations $\mathbf y_0\mapsto \mathbf y_t(\mathbf y_0)$ mapping the initial phase at time $t=0$ to the current one in phase space. This is another parametrization of the phase, using $\mathbf y_0$ as curvilinear coordinates. Suppose now that you have hypothetical replicas of the system with initial phase points distributed so as to fill some volume $\mathcal V_0$ in phase space. Liouville's theorem says that the cloud of points will evolve preserving its density along the curves in phase space, like an incompressible fluid flow, keeping the filled volume unchanged. Indeed, the volume at time $t$ is $\mathcal V(t)=\int_{\mathcal V_0} \det \frac{\partial \mathbf y_t}{\partial \mathbf y_0}(\mathbf y_0,t)\ \underline{dy_0} = \int_{\mathcal V_0} J(\mathbf y_0,t)\ \underline{dy_0}$, and computing its time rate at any instant $t$ using form (1) gives $\frac{d\mathcal V}{dt}=\int_{\mathcal V(t)} \nabla_{\mathbf y} \cdot\mathbf f(\mathbf y,t)\ \underline{dy}$. Setting this time rate to zero gives Liouville's theorem in local form: $\nabla_{\mathbf y} \cdot\mathbf f(\mathbf y,t)=0$.
Applying this to Hamilton's form, $$\nabla_{\mathbf y}\cdot\mathbf f = \frac{\partial}{\partial \mathbf q}\cdot\frac{\partial H}{\partial \mathbf p} - \frac{\partial}{\partial \mathbf p}\cdot\frac{\partial H}{\partial \mathbf q}=0,$$ which holds identically by the equality of mixed partial derivatives: a Hamiltonian phase flow is automatically incompressible, whether or not $H$ depends on time explicitly. Liouville's theorem can be generalized to any physical observable $A(\mathbf y,t)$ depending on the phase of the system which is conserved along the curves of the phase space, $$\frac{dA}{dt}= \frac{\partial A}{\partial t} + \frac{\partial A}{\partial \mathbf q}\cdot\frac{\partial H}{\partial \mathbf p} - \frac{\partial A}{\partial \mathbf p}\cdot\frac{\partial H}{\partial \mathbf q}=0.$$ The Wronskian Now consider the situation where Euler's ODE (1) can be linearized around a phase $\mathbf y_0$. The vector function $\mathbf f$ is then expressed, to first order, through a matrix-vector product, $$\frac{d\mathbf y}{dt} = {\mathbf f}(\mathbf y_0,0)+M_{\mathbf y_0}(t)(\mathbf y-\mathbf y_0)+\ldots$$ with $M=\frac{\partial \mathbf f}{\partial \mathbf y_0}$, so you now have a $2s\times 2s$ square linear system of ODEs. One can solve the new system $\mathbf y'=M_{\mathbf y_0}(t) \mathbf y$ for any translated phase around $\mathbf y_0$. Consider $2s$ phase solutions of this system $(\mathbf y^1, \mathbf y^2,\ldots, \mathbf y^{2s})$. Then the Wronskian $W=\det(\mathbf y^1, \mathbf y^2,\ldots, \mathbf y^{2s})$ satisfies the first-order ODE $\frac{d}{dt}W= \mathrm{tr}(M_{\mathbf y_0}) W$ and can be integrated as $W(t) = W_0\exp \int _0^t\mathrm{tr}(M_{\mathbf y_0}(s))\, ds$. Hope this helps.
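As an aside (not in the original answer), the last formula is easy to check numerically for a small, arbitrarily chosen $M(t)$; here is a minimal Python sketch:

```python
import numpy as np
from scipy.integrate import solve_ivp, quad

def M(t):
    # an arbitrary illustrative 2x2 coefficient matrix (hypothetical example)
    return np.array([[np.sin(t), 1.0], [-1.0, 0.5 * t]])

def rhs(t, y):
    return M(t) @ y

t_end = 2.0
# two solutions with linearly independent initial conditions
y1 = solve_ivp(rhs, (0, t_end), [1.0, 0.0], rtol=1e-10, atol=1e-12).y[:, -1]
y2 = solve_ivp(rhs, (0, t_end), [0.0, 1.0], rtol=1e-10, atol=1e-12).y[:, -1]
W_numeric = np.linalg.det(np.column_stack([y1, y2]))

# Abel/Liouville formula: W(t) = W(0) exp(int_0^t tr M(s) ds), here W(0) = 1
trace_integral, _ = quad(lambda s: np.trace(M(s)), 0, t_end)
W_abel = np.exp(trace_integral)

print(W_numeric, W_abel)  # the two values agree to integration tolerance
```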
Two ways to define Spec of a ring
There is a standard argument here that appears in most texts on the subject, e.g. Hartshorne II.2.2. It's not a particularly short argument so I'll just summarize it here. What we have to show is that the natural map $\psi: A_f \to \mathcal{O}_X(D(f))$ is both injective and surjective. Injectivity: If $\psi(a/f^n) = \psi(b/f^m)$ then we let $\mathfrak{a}$ be the annihilator of $af^m - bf^n$. By looking at the localization at primes $\mathfrak{p}\in D(f)$, we can show that $f\in\sqrt{\mathfrak{a}}$, so we have $f^l (af^m - bf^n) = 0$ for some $l$, and therefore $a/f^n = b/f^m$ in $A_f$. Surjectivity: We start with a section defined by $a_i/g_i$ on some finite covering set of principal opens $\{D(h_i)\}$. Looking at radicals, we can assume that $g_i=h_i$. It is argued that we can further assume $h_j a_i = h_i a_j$ for all $i,j$. Write $f^n = \sum_i h_i b_i$ for some large $n$ and set $a = \sum_i a_i b_i$. Then $a/f^n = a_i / h_i$ on $D(h_i)$, so $\psi(a/f^n)$ is actually the section we started with.
"Free-available" universities beyond Russia.
While not English-speaking, in France you can do part of that. Universities In principle, universities are open to everyone: you have the "right" to attend any course any time you want. Specific sessions with TAs are, however, restricted to enrolled students. In practice, there are access restrictions in some cases, like entering the buildings or premises. You may have to state your case and desire to be granted access, and pass through the control (and the people enforcing it might not be aware of this "fundamental" right). This is usually easier in early years of teaching, with very large attendance, than it is with classes of a dozen people all on the teacher's attendance list. Collège de France This is an institution whose purpose is both research and teaching at a very high level. You can attend lectures freely (sometimes you need to register in advance). However, the topics do not follow the flow of a course, and can be on any subject, so you can't rely on it to build your knowledge logically and reliably. This is for cherry picking! PhD defenses Like all "exams" in France, you have the right to attend one as long as you don't disrupt it. It is, of course, very specific and very much like a lecture at the Collège de France. Caveats and workarounds With all of the above, you can't take exams. Exams are restricted to people enrolled in the universities. However, in France university fees are usually fairly small (a few hundred euros per year for most universities), and there is no age restriction. There might be restrictions based on: (i) level (you need to have passed the initial years - or have an equivalent level - before taking more advanced courses) or (ii) the number of people wanting to take the class (more a local problem for some specific courses, and usually only for the first and second year of university). For English-language courses, I will let the Americans, British, Irish, Scottish, Welsh, Australians... speak for themselves!
Ultrafilters preserving infinite joins
This is certainly not true in general. For instance, if you take a single infinite join $1=\bigcup a_i$ where the $a_i$ are disjoint, and let $S$ consist of the complements of all the $a_i$, then no proper filter containing $S$ can preserve this join. More generally, an obvious necessary condition is that for each join $a=\bigcup a_i$ which you want to preserve, the filter generated by $S$ cannot contain every $a\setminus a_i$. (Conversely, it is similarly obviously both necessary and sufficient that it be possible simultaneously to choose an $i$ for each join such that $S$ together with all the elements $\neg(a\setminus a_i)$ still has the finite intersection property. I don't see any way to simplify that condition in general, though.)
Do these higher-dimensional analogues of Möbius transformations have a name?
These seem to be projective transformations / homographies / collineations. See particularly the formulas given when projective spaces are defined by adding points at infinity to affine spaces. This is no surprise since there is a long history of projective geometry in optics, going back to the study of perspective. I think you are probably already aware of this, but these maps provide a good description of image transformations by lenses only in the paraxial approximation. Here's a chapter by Douglas S. Goodman from the Optical Society of America's Handbook of Optics which contains a discussion of these transformations in Section 1.15 (page 59 of the PDF, page 1.60 in the internal numbering of the book). It seems the preferred terminology in optics is "collineation"; note however that Wikipedia distinguishes collineations from homographies, though they agree for real projective spaces.
Prove $|z_1|-|z_2|\leq|z_1-z_2|$
From the triangle inequality, $$|z|=|z-w+w|\leq |z-w|+|w| \;\Rightarrow\; |z|-|w|\leq |z-w|$$
existence of a linear operator which extends linear map
Hint: Begin with a basis $(v_1,\dots,v_j)$ of null $T_1$. Extend this to a basis $(v_1,\dots,v_k)$ of null $T_2$. Extend that to a basis $(v_1,\dots,v_n)$ of $V$. Define your map so that it takes the vectors $T_1v_{j+1},\dots,T_1 v_k$ to $0$ and $T_1 v_{k+1},\dots,T_1 v_{n}$ to $T_2 v_{k+1},\dots,T_2 v_{n}$.
Solving differential equation with initial conditions
You're basically done. If you must solve for $y$, then: \begin{align*} -\ln|6-y| &= \frac{x^3}{3} \\ \ln|6-y| &= -\frac{x^3}{3} \\ |6 - y| &= e^{-x^3/3} \\ 6 - y &= e^{-x^3/3} &\text{since $y(0) = 5$ implies that $6 - y > 0$}\\ y &= 6 - e^{-x^3/3} \end{align*}
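If you want to double-check with a CAS, here is a SymPy sketch, assuming the original problem was the separable ODE $y' = x^2(6-y)$ implied by the working above:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# y' = x^2 (6 - y) with y(0) = 5, as implied by the separated form above
sol = sp.dsolve(sp.Eq(y(x).diff(x), x**2 * (6 - y(x))), y(x), ics={y(0): 5})
print(sol)  # Eq(y(x), 6 - exp(-x**3/3))
```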
Placing complex polynomial into Taylor series form
On one hand, if $f(z)=z^{10}$, then $f(z)=a_0+a_1(z-2)+a_2(z-2)^2+\cdots +a_{10}(z-2)^{10}$, with $a_k=\dfrac{f^{(k)}(2)}{k!}$. All higher order terms vanish because $f^{(11)}(z)\equiv 0$. You can directly compute $f^{(k)}(2)$ for $k\in\{0,1,\ldots,10\}$. On the other hand, $f(z)=z^{10}=(2+(z-2))^{10}=\sum\limits_{k=0}^{10}{10\choose k}2^{10-k}(z-2)^k$ by the binomial theorem. The fact that these expansions are identical is a consequence of the uniqueness of Taylor series coefficients, but it is also straightforward to verify by induction, without reference to the series, that $f^{(k)}(z)=k!{10\choose k}z^{10-k}=\dfrac{10!}{(10-k)!}z^{10-k}$ for each $k\in \{0,1,\ldots,10\}$ and $z\in\mathbb C$.
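A quick SymPy check that the two expansions agree coefficient by coefficient:

```python
import sympy as sp

z = sp.symbols('z')
f = z**10

# a_k = f^(k)(2)/k!  versus the binomial coefficient C(10,k) * 2^(10-k)
for k in range(11):
    a_k = sp.diff(f, z, k).subs(z, 2) / sp.factorial(k)
    assert a_k == sp.binomial(10, k) * 2**(10 - k)
print("all 11 coefficients agree")
```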
Prove that if $a,b\geq0$ and $a^2<b^2$ then $a<b$
Note that $(b+a)(b-a) = b^2 - a^2 > 0$, which means that $b+a$ and $b-a$ have the same sign and neither is zero. Since $a$ and $b$ are nonnegative, $b+a$ is positive, so $b-a$ is positive as well, i.e. $b > a$.
Determine the interval of convergence of series $\sum_{n=1}^{\infty } \frac{1}{\cos ^2(n \cdot x)+\sqrt{n}}$
We have $$\frac{1}{\cos^2(nx)+\sqrt n}\ge \frac1{\sqrt n+1}\sim\frac1{\sqrt n}$$ and the Riemann series $\sum \frac1{\sqrt n}$ is divergent so the given series is divergent for all $x$.
Arithmetic series addition
Let your sequences be $m=1,2,3,4,5,\dots$ and $n=6,14,22,30,\dots$. It is true that $n(i)=8m(i)-2$. Then your sums are $M(i)=\sum_{j=1}^i j=\frac 12 i(i+1)$ and $N(i)=\sum_{j=1}^i (8j-2)=8M(i)-2i$. The termwise relation does not carry over to the sums: even after two terms we have $N(2)=20 \neq 8M(2)-2=22$.
Definition of Hodge star operator
Some define the Hodge star operator by $\langle \alpha, \beta\rangle\operatorname{vol} = \alpha\wedge\ast\beta$, in which case $\ast : A^{p,q}(M) \to A^{n-p,n-q}(M)$. Other people instead define the Hodge star operator by $\langle \alpha, \beta\rangle\operatorname{vol} = \alpha\wedge\overline{\ast\beta}$ or $\langle\alpha, \beta\rangle \operatorname{vol} = \alpha\wedge\ast\bar{\beta}$, in which case $\ast : A^{p,q}(M) \to A^{n-q,n-p}(M)$. The fact that there are different conventions is not deep, it is just a matter of preference. In my experience, $\langle\alpha, \beta\rangle\operatorname{vol} = \alpha\wedge\overline{\ast\beta}$ is more common.
Unramified subextension of the Galois closure of a totally ramified $p$-adic field
(edit: in hindsight, I think there is a flaw in my argument, so read if you please.) Obviously, if we know $M$ contains $L'$, by the definition of $L$ as the Galois closure of $L'$ we immediately get $L=M$. Since your statement is about a bunch of finite extensions of $\mathbb{Q}_p$ and about unramifiedness, I figure it's best to think about it using finite fields. With this slogan in mind, we know that $k_{L} / (k_{L'} = k_K) / \mathbb{F}_p$ are finite extensions. Let's think about where $k_M$ should sit. Since $L/M/K$, we know that $k_L/k_M/k_K$, so our tower of finite fields becomes $k_L/k_M/(k_K = k_{L'})$. The theory of unramified extensions (along with the fact that there is only one extension of a given degree of a finite field) now forces $M/L'$, so we are done.
Evaluation of $\int_\Gamma e^{-z^2}\ dz$
Consider $$\oint_C dz \, e^{-z^2}$$ where $C$ is a rectangle having vertices $-R$, $R$, $R+i c$, $-R+i c$. By Cauchy's Theorem, this integral is zero. On the other hand, it is also equal to $$\int_{-R}^R dx \, e^{-x^2} + i \int_0^c dy \, e^{-(R+i y)^2} -\int_{\Gamma} dz \, e^{-z^2} -i \int_0^c dy \, e^{-(-R + i y)^2} $$ As $R\to\infty$, the 2nd and 4th integrals vanish because each integral is bounded by the value $$e^{-R^2} \int_0^c dy \, e^{y^2} \le c e^{-(R^2-c^2)}$$ Thus, we are left with the equality $$\int_{-\infty}^{\infty} dx \, e^{-x^2} = \int_{\Gamma} dz \, e^{-z^2}$$ as was to be shown.
Confused on how to find the interior of a set
Draw the set $[0,1] \cup [2,3]$ on a number line. The interior is the subset of points where for each point you can draw some open interval around it that is still contained in $[0,1] \cup [2,3]$. In other words, the interior of $[0,1] \cup [2,3]$ is the subset of points $x$ such that for each $x$, there is some $\epsilon > 0$ with $(x - \epsilon, x + \epsilon)$ being a subset of $[0,1] \cup [2,3]$. Can you tell from the picture below (the green shaded part is the set $[0,1] \cup [2,3]$) what the interior is? Which points have an $\epsilon > 0$ so that $(x - \epsilon, x + \epsilon)$ is still a subset of $[0,1] \cup [2,3]$? Hopefully, you said all points in $(0,1) \cup (2,3)$. Why don't we include $0$ (or $1$ or $2$ or $3$)? Observe what happens if you look at $(0 - \epsilon, 0 + \epsilon)$ for any positive number $\epsilon > 0$. Is that interval contained in $[0,1] \cup [2,3]$? If it were contained in this set for at least one $\epsilon$, then $0$ would be in the interior, but hopefully you see that the interval is not contained in the set. The part $(-\epsilon, 0)$ of the interval $(0 - \epsilon, 0 + \epsilon)$ is not contained in $[0,1] \cup [2,3]$.
If a normal series has maximal length in $G$, then it is a composition series.
Thanks to the hint of fulges in the comments, I worked out the following proof: Remark: here $\subset$ means strict inclusion. Let $G_0 \subset ... \subset G$ be a normal series of maximal length $n$ in $G$. If $G_0 \neq \{e \}$, then we can construct another normal series $\{ e \} \subset G_0 \subset ... \subset G$ having a greater length $n+1 > n$. So $G_0 = \{ e \}$. Now let $\{e \} \subset ... \subset G_i \subset G_{i+1} \subset ... \subset G$ be a normal series of maximal length in $G$. Assume $\exists i \in \{0, ...,$ maximal length$\} \subset \mathbb{N}$ such that $G_{i+1}/G_i$ is not simple. But then there exists a normal subgroup $K/G_i: \ G_i/G_i \subset K/G_i \subset G_{i+1}/G_i$. But then $K$ is a normal subgroup of $G_{i+1}$ such that $G_i \subset K \subset G_{i+1}$, and the normal series $\{e \} \subset ... \subset G_i \subset K \subset G_{i+1} \subset ... \subset G$ has a greater length. A contradiction. Then $G_{i+1}/G_i$ is simple for any appropriate $i$ in our series.
Help understanding this solution from this book
Let's try an intuitive explanation. FACT: A wins the first game. To get to 3 winning games, the longest series is the following: A-BBAA. Thus the tournament can be at most 5 games long. For A to be the winner, he has to win at least 2 of the remaining 4 games; otherwise the winner will be B. In numbers: $$\sum_{i=2}^{4} \binom{4}{i}p^i(1-p)^{4-i}$$ But we could follow a different, equivalent line of reasoning. For A to be the winner, he has to avoid B winning at least 3 of the 4 remaining games. In numbers: $$1-\sum_{i=3}^{4} \binom{4}{i}(1-p)^i p^{4-i}$$ The two solutions lead to the same result, but the latter requires fewer computations than the one shown in your textbook.
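A symbolic check that the two expressions coincide for every $p$ (a small SymPy sketch, not part of the original answer):

```python
import sympy as sp

p = sp.symbols('p')
q = 1 - p

# A wins at least 2 of the remaining 4 games...
direct = sum(sp.binomial(4, i) * p**i * q**(4 - i) for i in range(2, 5))
# ...equivalently, B does not win at least 3 of the remaining 4
complement = 1 - sum(sp.binomial(4, i) * q**i * p**(4 - i) for i in range(3, 5))

print(sp.simplify(direct - complement))  # 0
```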
How can one construct a strict quasigroup with neither a left nor a right identity on $n$ elements?
Instead of swapping the first two columns, swap the first and third columns. Then there will still be no identity on either side: since $2\cdot 2=2$, the only possible identity is $2$, but $2\cdot 1=3$ and $1\cdot 2=3$. Moreover, the quasigroup is not associative because (for instance) $$(1\cdot 2)\cdot 2=3\cdot 2=4$$ (or $1$ if $n=3$) and $$1\cdot (2\cdot 2)=1\cdot 2=3.$$ Note that this works only if $n\geq 3$. If $n\leq 1$ then obviously it is impossible, and for $n=2$ it is easy to see that any quasigroup must have an identity element.
Resolution Principle - First Order Logic
The resolvent simply represents a statement that is a logical consequence from the two original clauses. Remember that a clause like $\{ P, Q \}$ can be seen as a disjunction $P \lor Q$. So, when you resolve clauses $\{ P, Q \}$ and $\{ \neg P, R \}$ to get the resolvent $\{ Q, R \}$, you are really inferring the claim $Q \lor R$ from the claims $P \lor Q$ and $\neg P \lor R$ ... which is indeed a valid inference.
Is a locally compact Hausdorff vectorspace with countable base a topological vector space?
This is obviously false, because the topology could have nothing to do with the vector space structure. For instance, with $K=\mathbb{R}$ and $X=\mathbb{R}$ with its usual vector space structure, we can pick a bijection between $X$ and $[0,1]$ and thus give $X$ a topology which is homeomorphic to the usual topology on $[0,1]$. Or, we could pick some crazy bijection between $X$ and $\mathbb{R}$ to put a topology on $X$ which is homeomorphic to the usual topology but for which addition is not continuous.
Notation for isomorphic groups: $A\cong B$ versus $A\xrightarrow{\sim} B$.
Upgraded to an answer from a comment by request: I think it's better practice to use $A \cong B$ when you mean "there exists an isomorphism from A to B" and $A \xrightarrow{\sim} B$ when you mean "I have a specific isomorphism from A to B in mind". It's fine to use $A \cong B$ even in the latter case, but it would be strange to read $A \xrightarrow{\sim} B$ when there's no specific map being discussed.
Show that $||x| - |y|| \le |x-y|$ for all $x, y, \in \mathbb{R}$
Yes you can. If you are unsure, you can divide the proof into two cases: $|x|-|y|<0$ or $|x|-|y| \geq 0$. You can prove the inequality in both cases, and these cases contain all possibilities, so you have proved the inequality in all cases.
Quick way to show that inclusion is a local property?
One implication is easy. For the harder one, suppose $S \not \subset T$. Fix an element $x \in S \setminus T$. Show that the set of all ideals of $R$ that contain $T$ and do not contain $x$ satisfies the condition for Zorn's lemma, so it has a maximal element. Show that such a maximal element $P$ is a prime ideal in $R$, and $SR_P \not \subset TR_P$.
Covariant differentiation of a section with zero differential is zero?
It's not possible for a section to satisfy $d\sigma_p=0$. That's because the projection $\pi\colon E\to M$ is a left inverse for it: $\pi\circ\sigma = \operatorname{Id}_M$. This implies $d\pi_{\sigma(p)}\circ d\sigma_p = \operatorname{Id}_{T_pM}$, which implies $d\sigma_p$ is injective for every $p$.
Probability of a random variable less than the other
Yes, that is so. Although your method is a little roundabout. $\begin{align}\mathsf P(X<Y) = & ~ \int_{-\infty}^0\int_{-\infty}^y ab\,\mathsf e ^{ax+by}\operatorname d x\operatorname d y\\[1ex] = & ~ \int_{-\infty}^0 b\,\mathsf e^{(a+b)y}\operatorname d y\\[1ex] = & ~ {\frac{b}{a+b}}\end{align}$
Understanding the process for finding partial derivatives of a variable in convoluted implicit functions
Just let $c(x,y) = \langle c^1=x+u,c^2=yu\rangle$ then using Chain Rule you have: $$\frac{\partial u}{\partial x}=\frac{\partial}{\partial x} (F \circ c) = \nabla F\Bigr|_{c(x,y)} \cdot \left\langle \frac{\partial c^1}{\partial x} \Bigr|_{(x,y)}, \frac{\partial c^2}{\partial x}\Bigr|_{(x,y)} \right\rangle = \frac{\partial F}{\partial x^1}\Bigr|_{c(x,y)}\frac{\partial c^1}{\partial x^1} \Bigr|_{(x,y)} +\frac{\partial F}{\partial x^2}\Bigr|_{c(x,y)}\frac{\partial c^2}{\partial x^1} \Bigr|_{(x,y)}$$ Edit: Let $f(x,y)$ be differentiable and $c(x,y) = \langle c^1,c^2 \rangle$ be differentiable as well. Suppose also that $f \circ c$ is defined (then it is differentiable). Let $ f\circ c$ be differentiable at $(a,b)$ then: $$\frac{\partial (f \circ c)}{\partial x}\Bigr|_{(a,b)} = \frac{d}{dt}\Bigr|_{t=a} (f \circ c)(t,b) = \nabla f\Bigr|_{c(a,b)} \cdot \left\langle\frac{\partial c^1}{\partial x}\Bigr|_{(a,b)},\frac{\partial c^2}{\partial x}\Bigr|_{(a,b)}\right\rangle$$
Finding the bounds for a truncation error
We have: $$ \sum_{i=1}^{n}\frac{(-1)^{i+1}}{2i-1}=\sum_{j=0}^{n-1}\frac{(-1)^j}{2j+1}=\sum_{j=0}^{n-1}(-1)^j\int_{0}^{1}x^{2j}\,dx=\frac{\pi}{4}-(-1)^n\int_{0}^{1}\frac{x^{2n}}{1+x^2}\,dx $$ but over the interval $[0,1]$, the function $\frac{1}{1+x^2}$ is between $\frac{1}{2}$ and $1$, hence: $$\frac{1}{4n+2}\leq\left|-\frac{\pi}{4}+\sum_{i=1}^{n}\frac{(-1)^{i+1}}{2i-1}\right|\leq \frac{1}{2n+1}.$$
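The bounds are easy to check numerically (a small Python sketch):

```python
import math

def partial_sum(n):
    # sum_{i=1}^{n} (-1)^(i+1) / (2i - 1), the truncated Leibniz series
    return sum((-1) ** (i + 1) / (2 * i - 1) for i in range(1, n + 1))

for n in [1, 5, 50, 500]:
    err = abs(partial_sum(n) - math.pi / 4)
    assert 1 / (4 * n + 2) <= err <= 1 / (2 * n + 1)
    print(f"n={n:<4} lower={1/(4*n+2):.2e}  error={err:.2e}  upper={1/(2*n+1):.2e}")
```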
Clustering with Nearest Neighbours algorithm
Normally, nearest neighbours (or $k$-nearest neighbours) is, as you note, a supervised learning algorithm (i.e. for regression/classification), not a clustering (unsupervised) algorithm. That being said, there is an obvious way to "cluster" (loosely speaking) via nearest neighbours (so-called unsupervised nearest neighbours). This "clustering" simply refers to getting the nearest neighbours of a given point $p$ (not necessarily in the data set), either by taking all the neighbours in some ball around $p$ with cutoff radius $r$ or by taking the $k$ nearest neighbours and returning them as the cluster. In such cases, the use of fast spatial partitioning data structures is the key: simply use either a $k$D-tree or a ball tree (metric tree). Either algorithm lets you do fast nearest neighbour search (average time is around $O(n\log n)$, under some reasonable distribution assumptions, I believe). See here for a comparison. Since you mention big data though, I would say that both of those standard tree algorithms will fail miserably when there are millions of points in hundreds of dimensions (or less even). In such cases, you will have to use approximate nearest neighbours algorithms (e.g. locality-sensitive hashing). A good read for this might be An Investigation of Practical Approximate Nearest Neighbor Algorithms by Liu et al. Separately, a different approach that you may be thinking of is using the Nearest-neighbor chain algorithm, which is a form of hierarchical clustering. In this case, one is interested in relating clusters, as well as the clustering itself. Lastly, maybe look into clustering methods based on nearest neighbours (i.e. extending them). E.g.: KNN-kernel density-based clustering for high-dimensional multivariate data, Tran et al; Unsupervised Nearest Neighbors Clustering with Application to Hyperspectral Images, Cariou & Chehdi
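A minimal scikit-learn sketch of the "unsupervised nearest neighbours" idea described above (the data set, radius and $k$ are arbitrary placeholders):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X = rng.random((1000, 3))               # toy data set in 3 dimensions

# ball-tree backed neighbour search structure
nn = NearestNeighbors(radius=0.1, algorithm='ball_tree').fit(X)

p = np.array([[0.5, 0.5, 0.5]])         # query point, not necessarily in X

# "cluster" = all points within radius r of p ...
dist, idx = nn.radius_neighbors(p)
print("points within r=0.1 of p:", idx[0].size)

# ... or the k nearest neighbours of p
dist, idx = nn.kneighbors(p, n_neighbors=10)
print("10 nearest neighbours of p:", idx[0])
```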
Set used as a "mask" to "clip" that set
"Clip" here is used in the sense of computer graphics -- to selectively draw visual elements that are not obstructed (or clipped) by an opaque foreground element. A similar use appears in signal processing -- where very large or very small values are replaced with pre-specified extreme values. In this usage, the range of the function is constrained to a set and points whose image would be outside the set are mapped to specified points in the set. These uses are based on the verb "to clip" meaning to cut away, as in gardening. "Mask" is used in the same sense as described above for computer graphics. In this case, the foreground object masks the background object and (partially) prevents the background object being rendered. A similar use with a more physical basis is photomasking, where a physical obstruction is used to prevent certain wavelengths of light reaching a chemical reaction which would etch a semiconducting substrate -- the net result is that a negative image of the mask is etched from the semiconductor. A similar idea from digital image manipulation is "image masking", where "transparent"/"opaque" data from one array is used to select which members of another array are used in an image combining operation. This use is based on a long-time meaning of the word "mask": an object which obscures all or part of a face. When you write "$A \cap E$", you are using the set $E$ to clip the set $A$ -- only those elements of $A$ that pass through the set $E$ (by being the same elements) appear in the result. When you write "$A \cap E^\mathrm{c}$", you are using the set $E$ to mask the set $A$ -- those elements in $A$ that are blocked by $E$ (by being the same elements) do not appear in the result.
Can two 'different' vector spaces have the same vector?
Strictly speaking, $(v_1,\ldots,v_m,0)\ne (v_1,\ldots,v_m)$. Then again, the obvious injective map $\iota\colon\Bbb R^m\to \Bbb R^{m+1}$ is sometimes viewed as being so natural that we sometimes identify the image $\iota(\Bbb R^m)$ and $\Bbb R^m$ and say, for example, $\Bbb R^m\subseteq \Bbb R^{m+1}$ (instead of more correctly $\iota(\Bbb R^m)\subseteq \Bbb R^{m+1}$). In such a context, we would also identify $(v_1,\ldots,v_m)$ with $\iota(v_1,\ldots,v_m)=(v_1,\ldots,v_m,0)$ and say they are equal. There are contexts where this comes really naturally, such as when we construct, e.g., rationals as equivalence classes of pairs of integers and yet view the integers as a subset of the rationals; every step of $\Bbb N\subset\Bbb Z\subset \Bbb Q\subset \Bbb R\subset \Bbb C$ is of this kind: We construct the larger object from the smaller and identify the original set with its different-looking copy in the larger set. However, for the case at hand, $\Bbb R^m\to\Bbb R^{m+1}$, the embedding may be what first comes to mind - but it is not really natural. Hence here I would rather not make the identification (unless after specifically mentioning the choice of embedding). As a side note, let $V$ be any vector space, $v\in V$ any element, and $\alpha$ any object with $\alpha\notin V$ (for example, my neighbour's dog). Then we can define a vector space $V'$ containing $\alpha$ in place of $v$. This way, we can easily construct vector spaces having arbitrary elements in common (and in fact with the same elements playing different "roles" in the different vector spaces).
Limit of $\lim_{x \to 0} (\cot (2x)\cot (\frac{\pi }{2}-x))$ (No L'Hôpital)
HINT We have that $$\cot (2x)\cot\left(\frac{\pi }{2}-x\right)=\frac{\cos(2x)}{\sin(2x)}\frac{\sin x}{\cos x}$$
Spherical triple integral for volume enclosed by $(x^2+y^2+z^2)^2=a^2(x^2+y^2-z^2)$
As the volume is symmetric with respect to the coordinate planes, the first octant contains $1/8$ of the volume. For the angles we have the restriction $1-2\cos^2 \theta = -\cos 2\theta \geqslant 0$. $$V=8\int\limits_{\frac{\pi}{4}}^{\frac{\pi}{2}}\sin \theta d\theta \int\limits_{0}^{\frac{\pi}{2}}d\phi\int\limits_{0}^{a\sqrt{-\cos 2\theta}}r^2dr=\frac{4\pi a^3}{3}\int\limits_{0}^{\frac{\pi}{4}}(1-2 \sin^2 t)^{3/2}d(\sin t)=\frac{\pi^2 a^3}{4\sqrt{2}}$$
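A numerical cross-check of the final value with SciPy (about $1.7448$ for $a=1$):

```python
import numpy as np
from scipy.integrate import tplquad

a = 1.0

# integrand r^2 sin(theta); tplquad integrates the innermost variable first,
# so the function signature is (r, phi, theta) with theta the outer variable
V, err = tplquad(
    lambda r, phi, theta: r**2 * np.sin(theta),
    np.pi / 4, np.pi / 2,                                   # theta limits
    0, np.pi / 2,                                           # phi limits
    0, lambda theta, phi: a * np.sqrt(-np.cos(2 * theta)),  # r limits
)
V *= 8  # eight symmetric octants

print(V, np.pi**2 * a**3 / (4 * np.sqrt(2)))  # both ~ 1.7448
```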
Radius of Convergence; sum of function series
With a little help from a fellow student, I figured it out. In case anyone ever has a similar problem, I'll leave the solution here. Starting with the last line from the above: \begin{align*} =& \sum_{n=0}^{\infty} a_n x^{(2n+1)}(-2n)\\ =& \sum_{n=0}^{\infty} \frac{(-1)^n 2^{(-n)}}{n!} x^{(2n+1)}(-2n)\\ =& x^2\sum_{n=1}^{\infty} \frac{(-1)^{n-1} 2^{(1-n)}}{(n-1)!} x^{(2n-1)}\\ \end{align*} The last sum starts at $n=1$, because it makes no sense for $n=0$ (where we'd get stuff like $(-1)!$), so we shift it to $n=1$. To shift it back down to starting at $n=0$, we have to add $+1$ to each $n$. \begin{align*} =& x^2\sum_{n=1}^{\infty} \frac{(-1)^{n-1} 2^{(1-n)}}{(n-1)!} x^{(2n-1)}\\ =& x^2\sum_{n=0}^{\infty} \frac{(-1)^{n} 2^{(-n)}}{n!} x^{(2n+1)}\\ =& x^2 f(x) \end{align*} Which is what was to be shown.
A few statements about linear maps and vector spaces
If $A\subset V$ is a generator for $V$, then $f(A)$ is a generator for $f(V)$. Suppose a set of vectors $A = \{\mathbf{v}_k \}_{k=1}^n$ generates $V$. This means we can write any $\mathbf{v} \in V$ as $\mathbf{v} = \displaystyle \sum_{k=1}^n c_k\mathbf{v}_k$, where the $c_k$'s are constants in some field, such as $\mathbb{R}$. So what about vectors in the space $f(V)$? Vectors in $V$ can be written as the sum above, hence vectors here are of the form $\displaystyle f \left( \sum_{k=1}^n c_k\mathbf{v}_k \right)$. Since $f$ is a linear map, we can rewrite this as: $$f \left( \sum_{k=1}^n c_k\mathbf{v}_k \right) \ = \ \sum_{k=1}^n f(c_k \mathbf{v}_k) \ = \ \sum_{k=1}^n c_k f(\mathbf{v}_k)$$ This shows that vectors in $f(V)$ can be written as a linear combination of the vectors in the set $\{f(\mathbf{v}_k) \}_{k=1}^n = f(A)$. In other words, $f(A)$ generates $f(V)$. If $\{a_1,...,a_n\}\subseteq V$ is linearly dependent, then $\{f(a_1),...,f(a_n)\}\subseteq W$ is as well. A set of vectors $\{\mathbf{v}_k\}_{k=1}^n$ is called linearly dependent when the equation $\displaystyle \sum_{k=1}^n c_k\mathbf{v}_k = 0$ has a solution where $c_k \neq 0$ for at least one $k$. In other words, $\{\mathbf{v}_k\}_{k=1}^n$ is a linearly dependent set when we can find some nontrivial linear combination of those vectors giving zero. Applying the same trick as above: $$f \left( \sum_{k=1}^n c_k\mathbf{v}_k \right) = \ \sum_{k=1}^n f(c_k \mathbf{v}_k) \ = \ \sum_{k=1}^n c_k f(\mathbf{v}_k)$$ Because $f$ is linear, $f(\mathbf{0}) = \mathbf{0}$. So a nontrivial linear combination of the $\mathbf{v}_k$'s that gives zero can be "factored through" $f$ as above to get a nontrivial linear combination of $f( \mathbf{v}_k)$'s also giving zero. This shows linear dependence of the set $\big\{ f( \mathbf{v}_k) \big\}_{k=1}^n$. In the general case $\dim(f(V))\leq \dim(V)$. Can you find examples for $=$ and $<$ respectively? This fact is implied by the first fact above. The dimension of a space is merely the minimum number of linearly independent vectors needed to generate the space$^\dagger$. So if $A$ is a linearly independent generating set for $V$, then we have shown above that $f(A)$ must generate $f(V)$. But need $f(A)$ be linearly independent as well? If not: $\text{dim}(f(V)) < \text{dim}(V)$ since we can arrive at a linearly independent generating set for $f(V)$ by removing "superfluous" vectors from $f(A)$. $^\dagger$Any two linearly independent generating sets for a vector space will have the same cardinality. This is the dimension theorem for vector spaces.
Dual of the multiplication map $m: A \otimes_k A \rightarrow A$, where $A$ is a fin. dim. algebra over a field $k$.
The elements of $D(A)$ are maps $f:A \to k$, and the elements of $D(A) \otimes D(A)$ are maps of the form $$ (a,b) \mapsto \sum_i f_i(a)g_i(b), \quad \text{for all } a,b \in A $$ for some $f_i,g_i \in D(A)$. That is, $D(A) \otimes D(A)$ is the space of all bilinear maps $A \times A \to k$. With that: for any $f \in D(A)$, we have $$ (\delta(f))(a,b) = (f \circ m)(a,b) = f(ab). $$ So, $\delta$ maps $f \in D(A)$ to the bilinear map $\delta(f): (a,b) \mapsto f(ab)$. Strictly speaking, if we consider $m$ to be a linear function on $A \otimes A$ and we apply the duality functor $D$, then we should find that the target space of $\delta$ should be $D(A \otimes A)$ rather than $D(A) \otimes D(A)$. With that, we would find that $\delta$ maps $f \in D(A)$ to the unique linear map $\delta(f):A \otimes A \to k$ satisfying $\delta(f): a \otimes b \mapsto f(ab)$. That being said: because of the universal property that defines tensor products, these two $\delta$'s are the same up to canonical isomorphism.
inverse Laplace transfor by using maple or matlab
I am going to derive your result by hand using Cauchy's theorem applied to a contour integral. The analysis will follow that in the solution to this problem. Consider the following contour integral in the complex plane: $$\oint_C dz \, \frac1{\sqrt{z}} e^{-\sqrt{a z}} \cos{\left (\sqrt{a z}\right )} \, e^{z t}$$ where $C$ is the following contour, described piece by piece below. We will define $\operatorname{Arg} z \in (-\pi,\pi]$, so the branch cut is the negative real axis. There are $6$ pieces to this contour, $C_k$, $k \in \{1,2,3,4,5,6\}$, as follows. $C_1$ is the contour along the line $z \in [c-i R,c+i R]$ for some large value of $R$. $C_2$ is the contour along a circular arc of radius $R$ from the top of $C_1$ to just above the negative real axis. $C_3$ is the contour along a line just above the negative real axis between $[-R, -\epsilon]$ for some small $\epsilon$. $C_4$ is the contour along a circular arc of radius $\epsilon$ about the origin. $C_5$ is the contour along a line just below the negative real axis between $[-\epsilon,-R]$. $C_6$ is the contour along the circular arc of radius $R$ from just below the negative real axis to the bottom of $C_1$. We will show that the integrals along $C_2$, $C_4$, and $C_6$ vanish in the limits of $R \rightarrow \infty$ and $\epsilon \rightarrow 0$. On $C_2$, the real part of the argument of the exponential is $$R t \cos{\theta} - \sqrt{a R} \cos{\frac{\theta}{2}}$$ where $\theta \in [\pi/2,\pi)$. Clearly, $\cos{\theta} < 0$ and $\cos{\frac{\theta}{2}} > 0$, so that the integrand exponentially decays as $R \rightarrow \infty$ and therefore the integral vanishes along $C_2$. On $C_6$, we have the same thing, but now $\theta \in (-\pi,-\pi/2]$. This means that, due to the evenness of cosine, the integrand exponentially decays again as $R \rightarrow \infty$ and therefore the integral also vanishes along $C_6$. On $C_4$, the integral vanishes like $\sqrt{\epsilon}$ in the limit $\epsilon \rightarrow 0$, since the integrand grows only like $\epsilon^{-1/2}$ while the arc length shrinks like $\epsilon$.
Thus, we are left with the following by Cauchy's theorem (i.e., no poles inside $C$): $$\left [ \int_{C_1} + \int_{C_3} + \int_{C_5}\right] dz \: \frac1{\sqrt{z}} e^{-\sqrt{a z}} \cos{\left ( \sqrt{a z} \right )} e^{z t} = 0$$ On $C_3$, we parametrize by $z=e^{i \pi} x$ and the integral along $C_3$ becomes $$\int_{C_3} dz \: \frac1{\sqrt{z}} e^{-\sqrt{a z}} \cos{\left ( \sqrt{a z} \right )} e^{z t} = -i e^{i \pi} \int_{\infty}^0 dx \:x^{-1/2} e^{-i \sqrt{a x}} \cosh{\left ( \sqrt{a x} \right )} e^{-x t}$$ On $C_5$, however, we parametrize by $z=e^{-i \pi} x$ and the integral along $C_5$ becomes $$\int_{C_5} dz \: \frac1{\sqrt{z}} e^{-\sqrt{a z}} \cos{\left ( \sqrt{a z} \right )} e^{z t} = i \, e^{-i \pi} \int_0^{\infty} dx \: x^{-1/2} e^{i \sqrt{a x}} \cosh{\left ( \sqrt{a x} \right )} e^{-x t}$$ Or, parametrizing the integral over $C_1$ as $z=c+i p$: $$\int_{c-i \infty}^{c+i \infty} dp \: \frac1{\sqrt{p}} e^{-\sqrt{a p}} \cos{\left ( \sqrt{a p} \right )} e^{p t} = i 2 \int_0^{\infty} dx \: x^{-1/2} \cos{\left (\sqrt{a x}\right )} \cosh{\left ( \sqrt{a x} \right )} e^{-x t}$$ Now we may write down an expression for the inverse Laplace transform: $$\begin{align} \mathcal{L}^{-1}\left [ \frac1{\sqrt{p}} e^{-\sqrt{a p}} \cos{\left ( \sqrt{a p} \right )} \right ] &= \frac1{i 2 \pi} \int_{c-i \infty}^{c+i \infty} dp \: \frac1{\sqrt{p}} e^{-\sqrt{a p}} \cos{\left ( \sqrt{a p} \right )} e^{p t}\\ &= \frac1{\pi} \int_0^{\infty} dx \: x^{-1/2} \cos{\left (\sqrt{a x}\right )} \cosh{\left ( \sqrt{a x} \right )} e^{-x t}\\ &= \frac1{2 \pi} \operatorname{Re}{\left [\int_{-\infty}^{\infty} dy \, e^{-t y^2} \, \left ( e^{-(1-i) \sqrt{a} y} + e^{(1+i) \sqrt{a} y} \right ) \right ]}\\ &= \frac1{2 \pi} \operatorname{Re}{\left [e^{-i a/(2 t)} \int_{-\infty}^{\infty} dy \, e^{-t \left (y + \frac{(1-i) \sqrt{a}}{2 t} \right )^2 } + e^{i a/(2 t)} \int_{-\infty}^{\infty} dy \, e^{-t \left (y - \frac{(1+i) \sqrt{a}}{2 t} \right )^2 } \right ]} \\ &= \frac1{\sqrt{\pi t}} \cos{\left ( \frac{a}{2 t} \right )}\end{align} $$ as was to be shown. Note that each of the integrals could be shifted back to the origin without affecting its value.
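The closed form can also be cross-checked numerically with mpmath's invertlaplace (Talbot's method); this is just a sanity check, not part of the derivation:

```python
import mpmath as mp

mp.mp.dps = 30
a = 1

def F(p):
    # the transform we inverted: exp(-sqrt(a p)) cos(sqrt(a p)) / sqrt(p)
    return mp.exp(-mp.sqrt(a * p)) * mp.cos(mp.sqrt(a * p)) / mp.sqrt(p)

for t in [mp.mpf('0.5'), mp.mpf(1), mp.mpf(2)]:
    numeric = mp.invertlaplace(F, t, method='talbot')
    closed = mp.cos(a / (2 * t)) / mp.sqrt(mp.pi * t)
    print(t, numeric, closed)  # the two columns agree
```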
Assume we have 10 men and 10 women, how many ways are there to pair them up into 10 pairs with one man and one woman in each pair?
Your first approach is correct. The way to get an answer to this is to look at the problem inductively. You can pair the first woman off with 10 men. The second woman could then choose from 9 men. Going on like this, you would conclude that the tenth woman could choose from 1 man. Hence your answer is going to be $10 \times 9 \times \dots \times 1 = 10!$. For the second approach, if $N=4$ ($2$ men, $2$ women) then the first woman can pair with 2 men while the second woman can pair with 1 man. Therefore, it would be $2!$. As noted in the comments, for $2$ men and $2$ women there are only $2$ possibilities $$m_1w_1, m_2w_2\tag{1}$$ $$m_2w_1, m_1w_2\tag{2}$$ which is why it is $2!$. If $N=6$ ($3$ men, $3$ women) then the first woman can pair with 3 men. The second woman can pair with 2 men and the third woman can pair with one man. Therefore, it would be $3!$. This can be seen by $$m_1w_1, m_2w_2, m_3w_3\tag{1}$$ $$m_2w_1, m_1w_2, m_3w_3\tag{2}$$ $$m_3w_1, m_2w_2, m_1w_3\tag{3}$$ $$m_1w_2, m_2w_3, m_3w_1\tag{4}$$ $$m_3w_2, m_2w_1, m_1w_3\tag{5}$$ $$m_2w_3, m_1w_1, m_3w_2\tag{6}$$
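A brute-force confirmation for small cases (each pairing is a permutation assigning a man to each woman):

```python
from itertools import permutations

for N in [2, 3, 4]:
    # woman i is paired with man sigma(i); count all such assignments
    count = sum(1 for _ in permutations(range(N)))
    print(f"{N} men and {N} women: {count} pairings")  # N! each time
```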
determine the set of prime ideals of the Dedekind Domain
It is known that $\mathbb{Z}[i]$ is a Euclidean domain, so all ideals are principal. This can be done using the norm function, which is given by $N(a+bi) = a^2+b^2$. This is a multiplicative function, too; you can check this. To check whether a given element is prime, it suffices to check that its norm is a prime number (why?). We have that $N(1+i) = 1^2+1^2 = 2$ is prime, so $1+i$ is prime. However, $N(3-4i) = 3^2+4^2 = 25$. Try factoring it using the Euclidean algorithm; I can help if you get stuck. This should also help you with all primes. I can tell you that $p$ remains prime if and only if it is $3 \bmod 4$; try to prove this.
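If you want to check your factoring attempts, here is a tiny brute-force search (plain Python; the floating-point complex arithmetic is exact for these small values):

```python
z = 3 - 4j

# look for Gaussian integers w = x + yi of norm 5 (a prime) dividing z
for x in range(-3, 4):
    for y in range(-3, 4):
        if x * x + y * y == 5:          # N(w) = 5, so w is a Gaussian prime
            q = z / complex(x, y)
            if q.real.is_integer() and q.imag.is_integer():
                print(f"{z} = ({complex(x, y)}) * ({q})")
```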
If she gets the first problem correct, what is the probability she gets an A?
You have the first part correct.$$\mathsf P(A)=\binom 43p^3q+\binom 44p^4$$ Now, when given that she gets the first question correct (event $Q_1$), then she will get an A should she get at least 2 of the 3 remaining questions correct.  So the second verse is much the same as the first. $$\mathsf P(A\mid Q_1)=\binom 32p^2q+\binom 33p^3$$
What's the advantage of defining topologies on open sets rather than closed sets?
To your first question: What's the advantage of defining topologies on closed sets rather than open ones? None, really; the two approaches are precisely equivalent. It's a bit analogous to saying that you can define "even numbers" as integers of the form $2k$ for $k \in \mathbb{Z}$, and then define "odd numbers" as integers that are not even; or you can define "odd numbers" as integers of the form $2k+1$ for $k \in \mathbb{Z}$, and then define "even numbers" as integers that are not odd. It really doesn't matter, neither approach is "right", and it's ultimately a matter of taste and preference. (This analogy is a very crude one, because in topology there are sets that are neither open nor closed, and sets like that don't correspond to anything in my even/odd analogy.) To the second question: Also does this mean limits can be defined as $$\forall \epsilon > 0, \exists \delta > 0$$ such that $$|x - a| \leq \delta \implies |f(x) - f(a)| \leq \epsilon$$ No, it doesn't mean that at all. Munkres isn't saying that you can just replace open intervals with closed intervals throughout the entire theory and leave everything else unchanged and it will all work out the same. What he is saying is that you can take closed sets as the fundamental building blocks of the theory, but if you do that, you may have to make other changes too. For example, in a theory based on open sets, we have the property that an arbitrary union of open sets is open; in a theory based on closed sets, we have the property that an arbitrary intersection of closed sets is closed.
Prove the linear transformations to be independent when their ranges are disjoint
Suppose that $cT = U$ for some constant $c \neq 0$. Since $T \neq 0$, there is some $x \in V$ and some $0 \neq y \in W$ such that $T(x) = y \neq 0$. But then it follows that $$y = \frac{1}{c} c y = \frac{1}{c} U(x) = U(\frac{1}{c} x) \in \operatorname{Range}(U),$$ where we have used the linearity of $U$. This means $y \in \operatorname{Range}(U) \cap \operatorname{Range}(T)$, which is a contradiction.
Find $\delta(\varepsilon)$ function
You don't actually have to bound $|x\log x - y\log y|$ directly. You can prove that $f$ is continuous on $[0,1]$ because $\lim_{x\rightarrow 0^+} f(x) = 0$ and $f(0) = 0$. You also know that $f$ is differentiable on $]0,1[$, and you can then try to show that $f'$ is bounded there. Using the mean value theorem, $$\exists M>0,\ \forall(x,y)\in\, ]0,1[^2,\ \ |f(x)-f(y)| \le M|x-y|$$ with $M = \sup_{x\in\,]0,1[}|f'(x)|$. You can then choose $\delta(\epsilon) = \epsilon/M$.
Show that $E$ is a vector subspace with $\operatorname{dim} E=m-1$
If you are given a path $\lambda$ then the chain rule shows that $Df(x_0) \lambda'(0) = 0$. Hence $\lambda'(0) \in \ker Df(x_0)$. Now suppose $\lambda \in \ker Df(x_0)$, and consider $\phi(s,t) = f(x_0+s \lambda + t Df(x_0)^T)$. It is easy to check that $\phi(0,0) = 0$, ${\partial \phi(0,0) \over \partial s} =0$ and ${\partial \phi(0,0) \over \partial t} = \|Df(x_0)\|^2 >0$, hence the implicit function theorem gives some locally defined $C^1$ function $\tau$ such that $\phi(s,\tau(s)) = 0$ locally. Now define $l(s) = x_0+s \lambda + \tau(s) Df(x_0)^T$ and check that $l'(0) = \lambda$. Hence $E = \ker Df(x_0)$. Note that $\dim {\cal R} Df(x_0) = 1$ and use the rank nullity theorem to finish.
Quadrics are birational to projective space
Over an algebraically closed field $k$ of characteristic $\neq 2$ every irreducible quadric $Q\subset \mathbb P^n_k$ has equation $q(x)=x_0x_1+x_2^2+...+x_m^2=0 \quad (2\leq m\leq n)$ in suitable coordinates . Projecting from $p=(1:0:0:\cdots:0)\in Q$ to the hyperplane $H\subset \mathbb P^n_k$ of equation $x_0=0$ will give the required birational isomorphism. Explicitly, the projection is the birational map $$\pi: Q--\to H:(a_0:a_1:\cdots:a_n)\mapsto (0:a_1:\cdots:a_n)$$ You can compute the inverse rational map and find $$\pi^{-1}:H--\to Q:(0:a_1:\cdots:a_n) \mapsto (-(a_2^2+\cdots +a_m^2):a_1\cdot a_1:\cdots:a_1\cdot a_n)$$
How to prove this set equality? (HINT)
$\{ c_1\cos(t)+c_2\sin(t), \quad c_1,c_2\in\mathbb{R} \} = \{ A\cos(t)+B, \quad A,B\in\mathbb{R} \}$ is not true! Reason: suppose that the above equation is true. Then there are $A,B \in \mathbb R$ such that $ \sin t=A \cos t +B$. With $t= \pi$ we get $A=B$ and with $t= \pi /2$ we get $B=1$, hence $$\sin t=\cos t +1,$$ which is absurd.
Proving that the matrix is not invertible.
The statement would be true if you considered $D = BA$. You can see that the matrix $A$ gives rise to a transformation $T_A : \mathbb R^3 \to \mathbb R^2$. Similarly, the matrix $B$ gives rise to $T_B : \mathbb R^2 \to \mathbb R^3$ and $T_D = T_B \circ T_A : \mathbb R^3 \to \mathbb R^3$. The problem with $T_D$ is that $$ \mathrm{Im}(T_D) \subseteq \mathrm{Im}(T_B) $$ and $T_B$ cannot be surjective because the image of a basis in $\mathbb R^2$ can span at most a subspace of $\mathbb R^3$ of dimension $2$, not $3$. Hope that helps,
Did I find the basis of a subset of $\mathscr{P}_4(\mathbb{F})$?
You have given a correct basis. One needs to give justification. This involves the following steps: (i) the proposed basis elements satisfy the condition on the second derivative, and therefore (ii) all linear combinations of them satisfy the condition. Also (iii) the proposed basis is a linearly independent set. Thus the subspace generated by the proposed basis is a subspace of $U$ and has dimension $4$. However, $U$ itself is a proper subspace of the full space, and therefore has dimension $\le 4$. It follows that the space generated by the proposed basis is all of $U$. Your $W$ is essentially correct, it should be the space generated by $x^2$.
Differentiation of a continuous bilinear form
Well, I think that you can do better by working with this expression: $B(x_1+a_1, x_2+a_2)-B(x_1,x_2)$.
Antiderivative of $\cos(x)\ln(1+\cos(x))$
On the second line, you cannot pull the $\sin(x)$ in front of the integral. You should use $$ \sin^2(x)=1-\cos^2(x)=(1+\cos(x))(1-\cos(x)). $$
Weak convergence preserves pointwise limit inferior?
No, in general it is not preserved. Weakly converging sequences oscillate a lot, so in general they do not have a pointwise limit. The typical example is exactly $f_n(x)=\sin(nx)$. If $x/\pi$ is irrational, $\liminf_n \sin(nx)=-1$, but the sequence converges weakly to zero in $L^2(0,1)$.
Dense Subset of $[0,1]^n$ might have Lebesgue Measure 0
Your specific question should be answered by considering the set $\mathbb{Q}^n\cap [0,1]^n$: this is countable, and hence both measurable and of Lebesgue measure $0$. In the general case, second countability of $\mathbb{R}^n$ should hand you the solution; however, this might prove tricky, since measurable sets can look quite ugly (for instance $\mathbb{R}\setminus \mathbb{Q}$).
Convergence of $\frac{\sin(g(n))}{f(n)}$
Let $S(n) = \sum_{k=0}^n \sin(g(k))$. As others have commented, if $S(n)$ is bounded, your sum converges. On the other hand, if $S(n)$ is unbounded, I will construct $f(n)$ such that your sum diverges. WLOG suppose $S(n)$ is unbounded above, and take an increasing function $N(m)$ on nonnegative integers so that $N(0) = 0$ and $S(N(m)) \ge m + S(N(m-1))$. Let $f_0(n) = m$ if $N(m-1) < n \le N(m)$. Then $$\sum_{n=1}^{N(m)} \frac{\sin(g(n))}{f_0(n)} = \sum_{k=1}^m \ \sum_{n=N(k-1)+1}^{N(k)} \frac{\sin(g(n))}{k} = \sum_{k=1}^m \frac{S(N(k)) - S(N(k-1))}{k} \ge m$$ This $f_0$ is not allowed as $f$ only because it is not strictly increasing. But a very slight adjustment will make it strictly increasing while still having, say, $\displaystyle\sum_{n=N(k-1)+1}^{N(k)} \frac{\sin(g(n))}{f(n)} \ge \frac{1}{2}$.
Where is $\operatorname{Log}(z^2-1)$ Analytic?
If $2xy=0$ then either (a) $x=0$, in which case the other inequality becomes $-y^2-1\leq 0$ which is satisfied by all $y\in\mathbb{R}$, or (b) $y=0$, where the other inequality becomes $x^2 - 1 \leq 0$ which is satisfied by all $|x| \leq 1$. These inequalities must both be satisfied together. You are describing the union of the sets they are satisfied on individually, where what you really want is the intersection.
Compute $f^{(15)}(0)$ for $f(x)=(\sin(x^3))^3.$
Use $$(\sin t)^3=\frac{3\sin t-\sin 3t}{4}$$ to get $$(\sin t)^3=t^3-\frac{t^5}2+\cdots.$$ Then $$(\sin x^3)^3=x^9-\frac{x^{15}}2+\cdots.$$ Then $f^{(15)}(0)$ is $15!$ times the $x^{15}$ coefficient here.
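A one-line SymPy confirmation of the coefficient:

```python
import sympy as sp

x = sp.symbols('x')
c15 = sp.series(sp.sin(x**3)**3, x, 0, 16).removeO().coeff(x, 15)
print(c15, sp.factorial(15) * c15)  # -1/2 and f^(15)(0) = -15!/2
```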
Uniform and pointwise convergence of sequence in standard topology.
It does converge to $0$ pointwise. If the convergence were uniform then there would exist $n_0$ such that $|f_n(x)| <1/2$ for all $x$ and all $n >n_0$. In particular $|f_n(-\frac 1 {n^{2}}) | <1/2$. But this says $1-\frac 1 n <1/2$ for all $n >n_0$. We get a contradiction by letting $n \to \infty$.
Find exact value of $P(1/2)$ given sum of pair of roots
As you wrote, it is possible that the system is inconsistent. To check if it is (or not), let us define $$ f_1=\alpha+\beta - 1 $$ $$ f_2=\alpha+\gamma- 2$$ $$ f_3=\alpha+\delta - 5 $$ $$ f_4=\beta+\gamma-6 $$ $$ f_5=\beta+\delta - 9 $$ $$ f_6=\gamma+\delta -10$$ and define a norm $$F=\sum_{i=1}^6 f_i^2$$ Compute the partial derivatives (this now leads to a square system) $$\frac{\partial F}{\partial \alpha}=6 \alpha +2 \beta +2 \gamma +2 \delta -16$$ $$\frac{\partial F}{\partial \beta}=2 \alpha +6 \beta +2 \gamma +2 \delta -32$$ $$\frac{\partial F}{\partial \gamma}=2 \alpha +2 \beta +6 \gamma +2 \delta -36$$ $$\frac{\partial F}{\partial \delta}=2 \alpha +2 \beta +2 \gamma +6 \delta -48$$ Set each of them equal to $0$ since we look for the minimum of the norm $F$. This gives $$\alpha= -\frac{3}{2}\qquad \beta = \frac{5}{2}\qquad \gamma = \frac{7}{2}\qquad\delta = \frac{13}{2}$$ and for these values $F=0$ : so, the system is consistent. Now $$P(x)=x^4+ax^3+bx^2+cx+d=(x-\alpha)(x-\beta)(x-\gamma)(x-\delta)$$ Replace $\alpha,\beta,\gamma,\delta$ by their values and make $x=\frac 12$ to get the result which is effectively $-72$. However, please notice that the values given in your PS are wrong for $\alpha$ and $\beta$.
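The same consistency check can be done numerically with a least-squares solve (a NumPy sketch):

```python
import numpy as np

# rows encode alpha+beta=1, alpha+gamma=2, alpha+delta=5,
#             beta+gamma=6, beta+delta=9, gamma+delta=10
M = np.array([[1, 1, 0, 0], [1, 0, 1, 0], [1, 0, 0, 1],
              [0, 1, 1, 0], [0, 1, 0, 1], [0, 0, 1, 1]], dtype=float)
rhs = np.array([1, 2, 5, 6, 9, 10], dtype=float)

roots, *_ = np.linalg.lstsq(M, rhs, rcond=None)
print(roots)                  # [-1.5  2.5  3.5  6.5], exact fit: consistent

print(np.prod(0.5 - roots))   # P(1/2) = -72.0
```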
Finding the minimum cost
Suppose there are $n$ trips. For minimum cost, the box must be full on every trip (otherwise you could have made the same number of trips with a smaller box). So volume of box = $xyz = V/n$. By the inequality for arithmetic and geometric means, $$(axy + 2bxz + 2cyz)/3 \ge (axy\cdot2bxz\cdot2cyz)^{1/3} = (4abcV^2/n^2)^{1/3}.$$ The cost of the box is minimized if AM = GM, which happens if and only if $axy = 2bxz = 2cyz.$ From this you can work out the ratios of the sides of the box. The total cost of the $n$ trips is $$dn + 3(4abcV^2)^{1/3}n^{-2/3}.$$ If $n$ were a continuous variable, elementary calculus shows that the cost would be a minimum when $n = 2(abcV^2/d^3)^{1/5}.$ Since in practice $n$ has to be an integer, I think you would have to take the integers on each side of the theoretical minimum and see which gives the smaller cost.
How can I find rank of $A=\sum_{i=1}^4 x_ix_i^T$ without actually finding the matrix $A$?
The matrix $A$ can be written as $A=XX^T$ where $X=[x_1\ x_2\ x_3\ x_4]$. Now use the fact that $\operatorname{rank}XX^T=\operatorname{rank}X$, so all you need to do is the find the rank of $X$.
Alternate definition of Group Object, and how to 'play' with it?
If you understand that the definitions are equivalent, then you understand that $G$ actually does come equipped with structure arising from the lifting of its representable functor to groups. For the specific question about group objects in groups, let $G$ be a group object with multiplication $*$ and let $\times$ be the multiplication on $Hom(X,G)$ determined by the group object structure on $G$. Then $*$ induces another multiplication on $Hom(X,G)$ which is a homomorphism with respect to $\times$, so the Eckmann-Hilton argument applies to show that $\times=*$ and that both operations are commutative. EDIT: To expand on the above, by the Yoneda lemma one gets a group homomorphism $\mu:G\times G\to G$ mapped to the multiplications on $Hom(X,G)$ by the Yoneda embedding. Furthermore, one gets a unit $e:1\to G$ mapped to the units of $Hom(X,G)$ by Yoneda. The Eckmann-Hilton argument shows that $\mu$ and the given multiplication of $G$ must coincide and commute.
Is this a valid linear transformation?
$(a)$ and $(b)$ are. The others do not possess the properties that $T(cv)=cT(v)$ and $T(v+w)=T(v)+T(w)$.
Having problem to find all solutions using the Chinese Remainder Theorem
We have $$x\equiv2 \mod 4 \implies \color{#d05}{x=4a+2}\\ $$ $$\begin{align}x&\equiv3 \mod 5 \\4a+2&\equiv3\mod5\\ 4a&\equiv 1\mod5 \\ a &\equiv 4 \mod 5\implies\color{#2cd}{a = 5b+4}\end{align} $$ $$\begin{align}x&\equiv9 \mod 13 \\ 4(5b+4)+2&\equiv 9\mod13 \\ 20b+18&\equiv9\mod13 \\ 20b&\equiv 4\mod 13 \implies \color{#2c0}{b = 13c+8}\end{align} $$ (for the last step, $20\equiv 7 \pmod{13}$ and $7\cdot 8=56\equiv 4 \pmod{13}$). So $x = 4a+2 = 20b+18=260c + 178$
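SymPy's built-in CRT confirms the result:

```python
from sympy.ntheory.modular import crt

x, mod = crt([4, 5, 13], [2, 3, 9])
print(x, mod)  # 178 260, i.e. x = 260c + 178
```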
Why is my approach to this poisson distribution problem wrong?
There is some confusion in what you write. Even if you ignore the sign error in your computation, you appear to switch the loss of $2$ per unsold roll to a loss of $3$. But the big error you make is that you ignore the fact that the fellow can't sell more than $10$ even if there is demand for more, as he's only got $10$ to sell. Thus the answer is $$E[P]=\sum_{i=0}^9 P(D=i)\times (7i-3(10-i))+P(D≥10)\times 70$$ (trusting that you meant a loss of $3$ per unsold roll...that's what you need to get to the official answer) And the rest follows by direct computation.
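For concreteness, here is how the computation looks in Python; note that the Poisson rate $\lambda$ is a placeholder, since the original problem's value isn't quoted here:

```python
from scipy.stats import poisson

lam = 4                      # assumed demand rate (not given in the excerpt)
stock, profit, loss = 10, 7, 3

EP = sum(poisson.pmf(i, lam) * (profit * i - loss * (stock - i))
         for i in range(stock))
EP += poisson.sf(stock - 1, lam) * profit * stock   # P(D >= 10) * 70

print(EP)
```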
Graph where every vertex has degree 3, perfect matching?
Yes, your hypothesis is correct: it is sufficient to prove it for connected graphs. Given a set of vertices $U$, consider $G-U$. Given a connected component $K$ in $G-U$, count the number of edges in $G$ that have exactly one endpoint in $K$. If this number is $0$, that is because $K=G-U$ (notice that $K$ would also be connected in $G$, and would not have any edges coming out from it). If this number is $1$, then there is a unique edge in $G$ coming out of $K$; this edge would be a bridge of $G$, which does not exist by hypothesis. If this number is $2$, there are two options: either both edges leave from the same vertex $u$ of $K$, in which case the edge connecting $u$ with the rest of $K$ is a bridge, or the two edges come out of different vertices of $K$, in which case $K$ has $2$ vertices of degree $2$ and the rest of odd degree, meaning $K$ has an even number of vertices. What have we concluded? If $G-U$ is not connected, then every connected component of $G-U$ with an odd number of vertices has at least $3$ edges going to $U$. This means that if there are $d$ connected components with an odd number of vertices, then there are at least $3d$ edges going from $U$ to $G-U$. On the other hand, the number of edges going from $U$ to $G-U$ is at most $3|U|$. Let $E$ be the number of edges between $U$ and $G-U$; then $3d\leq E\leq3|U|\implies d\leq |U|$. In other words, if we remove $k$ vertices from $G$ we have at most $k$ connected components with an odd number of vertices, so by Tutte's theorem there is a perfect matching.
Showing that the map $R/I \otimes_R R/J \to R$, $(r+I) \otimes (s+J) \mapsto rs$ is well defined
Your proposed map $$ R/I \otimes_R R/J \to R, \quad \overline{x} \otimes \overline{y} \mapsto xy $$ is well-defined only if $I = J = 0$, in which case it is the usual isomorphism of $R$-modules $R \otimes_R R \to R$. This follows from the universal property of the tensor product: The above map is well-defined if and only if $$ R/I \times R/J \to R, \quad (\overline{x}, \overline{y}) \mapsto xy $$ is a well-defined $R$-bilinear map. By fixing the argument $\overline{1} \in R/I$ we see that the map $$ R/J \to R, \quad \overline{y} \mapsto y $$ must be well-defined, which it is only for $J = 0$ (because it follows for every $y \in J$ from the well-definedness and $\overline{y} = \overline{0}$ that $y = 0$). We find in the same way that we need $I = 0$. What does hold is that $R/I \otimes_R R/J \cong R/(I+J)$: It holds for every $R$-module $M$ that $$ R/I \otimes_R M \cong M/IM, $$ and it therefore follows that $$ R/I \otimes_R R/J \cong (R/J) / (I \cdot R/J) = (R/J) / ((I+J)/J) \cong R/(I+J). $$ This isomorphism is on elements given by $$ R/I \otimes_R R/J \to R/(I+J), \quad \overline{x} \otimes \overline{y} \mapsto \overline{xy}. $$ Note that the well-definedness of your proposed map is therefore equivalent to the well-definedness of the map $$ R/(I+J) \to R, \quad \overline{z} \mapsto z, $$ which holds only for $I + J = 0$, i.e. for $I = J = 0$. This confirms the above result.
singular.invariant_ring in Sagemath
I posted an answer in the Ask SageMath thread Calling singular.invariant_ring (it's a bug, it has been reported, and a workaround is available).
In modular arithmetic, is a residue class a vector space? Does it have other structure, e.g., ring or group structure?
In mod 10 arithmetic the positive integers having, say, $3$ as the units digit, together with the negative integers ending in $7$, form a residue class. What happens to the sum $13+23$?
Given concurrent cevians $AD$, $BE$, $CF$ of $\triangle ABC$, show $\text{area }\triangle DEF\leq \tfrac14\,\text{area }\triangle ABC$
$S_{DEF}\leq \frac{1}{4}S_{ABC}$ can be rewritten as $4S_{DEF}\leq S_{ABC}$. You can scale this triangle so that its circumradius is equal to $\frac{1}{4}$, making $S_{ABC} = abc = (a_1+a_2)(b_1+b_2)(c_1+c_2)$. This is because $A = \frac{abc}{4R}$. Start by finding the areas of $\triangle EDC$, $\triangle AFE$, and $\triangle FBD$. $S_{AFE} = \frac{1}{2}c_1b_2\sin A$. Since $2R = \frac{a}{\sin A}$, $\frac{1}{2} = \frac{a}{\sin A}$, and $\sin A=2a$. Plugging this into the area you get $S_{AFE} = c_1b_2(a_1+a_2)$. With this method, $S_{CDE} = a_2b_1(c_1+c_2)$ and $S_{FBD} = a_1c_2(b_1+b_2)$. When combined, the expressions for these areas contain all the terms from the expression of $S_{ABC}$ except $(a_1b_1c_1)$ and $(a_2b_2c_2)$. From this, you get the expression $S_{DEF}= S_{ABC}-S_{CDE}-S_{FBD}-S_{AFE} = (a_2b_2c_2) + (a_1b_1c_1)$. Ceva's Theorem states $\frac{a_1b_1c_1}{a_2b_2c_2} = 1$, so $a_1b_1c_1=a_2b_2c_2$. $4S_{DEF}\leq S_{ABC}$ is equivalent to $3S_{DEF}\leq S_{ABC} - S_{DEF}$, which as shown above is equivalent to $3S_{DEF}\leq S_{CDE}+S_{FBD}+S_{AFE}$. This can finally be rewritten as $3(a_2b_2c_2 + a_1b_1c_1) \leq (a_1b_1c_2+a_2b_2c_1)+(a_1c_1b_2+a_2c_2b_1)+(b_1c_1a_2+b_2c_2a_1)$. For the first set of parentheses on the right, $a_1b_1c_2+a_2b_2c_1 = a_1b_1c_1(\frac{c_2}{c_1}) + a_2b_2c_2(\frac{c_1}{c_2}) = (\frac{c_1}{c_2}+\frac{c_2}{c_1})a_1b_1c_1$. Remember, $a_1b_1c_1=a_2b_2c_2$ due to Ceva's Theorem. Thus, the above inequality, restricted to the first set of parentheses, can be written as $2(a_1b_1c_1)\leq (\frac{c_1}{c_2}+\frac{c_2}{c_1})a_1b_1c_1$, which is true by AM-GM. This same method can be repeated for the other two sets of parentheses to get the required statement.
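If you want to sanity-check the inequality numerically (a sketch, not part of the proof): pick a random triangle and a random interior point, construct the cevian feet with a standard line-intersection helper, and verify $4S_{DEF}\leq S_{ABC}$ with the shoelace formula:

```python
import random

def area(p, q, r):
    # shoelace formula for triangle pqr
    return abs((q[0]-p[0])*(r[1]-p[1]) - (r[0]-p[0])*(q[1]-p[1])) / 2

def meet(p1, p2, p3, p4):
    # intersection of line p1p2 with line p3p4
    d = (p1[0]-p2[0])*(p3[1]-p4[1]) - (p1[1]-p2[1])*(p3[0]-p4[0])
    t = ((p1[0]-p3[0])*(p3[1]-p4[1]) - (p1[1]-p3[1])*(p3[0]-p4[0])) / d
    return (p1[0] + t*(p2[0]-p1[0]), p1[1] + t*(p2[1]-p1[1]))

for _ in range(1000):
    A, B, C = (0.0, 0.0), (1.0, 0.0), (random.random(), random.random() + 0.1)
    # random interior point as a convex combination of the vertices
    u, v = sorted((random.random(), random.random()))
    P = tuple(u*a + (v-u)*b + (1-v)*c for a, b, c in zip(A, B, C))
    D = meet(A, P, B, C)  # foot of the cevian from A on BC
    E = meet(B, P, C, A)
    F = meet(C, P, A, B)
    assert 4 * area(D, E, F) <= area(A, B, C) + 1e-9
print("4*S_DEF <= S_ABC held in all trials")
```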
Show that if $n$ points are such that any three lie in a circle of radius $1$, then all of them lie in a circle of radius $1$
If any three of the $n$ points lie in a circle of radius $r=1$, consider the closed unit disks $D_1,\dots,D_n$ centered at the $n$ points. Given any three of the points, the center of a circle of radius $1$ containing them is within distance $1$ of each of them, i.e. it is a common point of the corresponding three disks. So every three of the disks $D_1,\dots,D_n$ have a common point, and by Helly's theorem in the plane all $n$ disks share a common point $c$. The circle of radius $1$ centered at $c$ then contains all $n$ points. (Note that bounding the diameter of the point set by $2$ is not enough by itself: by Jung's theorem, a set of diameter $2$ is only guaranteed to fit in a circle of radius $2/\sqrt{3}$.)
The evaluation of Complex integration
Simply start to evaluate the integral, \begin{align} \left\lvert \int_{C_K} \frac{e^{iz}}{z} dz \right \rvert &amp;=\left\lvert \int_0^K \frac{e^{(K-t)i-t}}{(K-t)+it} \cdot(-1+i) dt \right\rvert\\ &amp;\leqslant\sqrt{2}\int_0^K \frac{e^{-t}}{\lvert (K-t) + it\rvert} dt \end{align} The denominator is bounded below by $\frac{1}{\sqrt 2} K$, so you obtain, \begin{align} \left\lvert \int_{C_K} \frac{e^{iz}}{z} dz \right \rvert &amp;\leqslant\frac{2}{K} \int_0^K e^{-t} dt \\ &amp;=\frac{2(1-e^{-K})}{K} \end{align} and this last term has limit zero as $K \to \infty$. Comment on approach taken: the approximation using the length of $C_K$ and the maximum value of the integrand doesn't work because it cannot exploit the fact that $e^{iz}$ decreases rapidly along $C_K$. Moreover, in the answer you gave you still have a term $e^{-t}$, but the $t$ variable is only relevant inside the integral and therefore should never appear as part of the final estimate.
Algebraic expression in its most simplified form
You did not distribute the term $(x - 3)$ in the denominator when you wrote: $$\begin{align} &amp; =\dfrac{x(x-3)+(-4)}{(x-3)}\times \dfrac{(x-3)}{x(x-3)+2+6x} \\ \\ &amp; =\dfrac{x(x-3)+(-4)(x-3)}{(x-3)x(x-3)+2+6x} \end{align}$$ What would be correct is the following denominator: $$\begin{align} &amp; \quad\color{blue}{(x-3)}[x(x-3)+2+6x] \\ \\ &amp; = \color{blue}{(x-3)}x(x-3)+\color{blue}{(x-3)}(2+6x)\end{align} $$ But note $$\dfrac{x(x-3)+(-4)}{\color{blue}{\bf (x-3)}}\times \dfrac{\color{blue}{\bf(x-3)}}{x(x-3)+2+6x}$$ The highlighted terms cancel, leaving you: $$\begin{align} &amp; =\dfrac{x(x-3)+(-4)}{1}\times \dfrac{1}{x(x-3)+2+6x} \\ \\ &amp; = \frac{x(x-3) - 4}{x(x - 3) + 2 + 6x} \\ \\ &amp; = \dfrac{x^2-3x-4}{x^2+3x+2} \tag{$\diamondsuit$} \end{align} $$ Now, both the numerator and denominator of $\diamondsuit$ factor very nicely, and in fact, share a common factor, and hence, can be further simplified. Recall: $$\frac{[b + c]a}{a[d+e]} = \frac{a[b+c]}{a[d+e]} = \frac{b+c}{d+e}$$ Or: $$\frac{a[b+c]}{a[d+e]} = \frac{ab+ac}{ad+ae}= \frac{b+c}{d+e}$$
How to correctly solve $\sqrt{1+x}+\sqrt{1-x}>1$?
Note that if $x > 0$, then $\sqrt{1 + x} > 1$; similarly, when $x < 0$, we have that $\sqrt{1 - x} > 1$; and at $x = 0$ the left-hand side equals $2$. Since the other square root is nonnegative wherever it is defined, the LHS is always larger than one on the whole domain $[-1, 1]$. Because of the existence of this particular argument, your solving process is too complicated indeed. However, removing square roots from an equation by squaring isn't a bad tactic per se. Just make sure the inequalities still hold after squaring.
Relationship between universal quantifier and existential quantification. Is it always equivalent by just moving the negation symbol?
Yes, that’s correct: $\forall x(\neg\varphi(x))$ and $\neg\exists x(\varphi(x))$ are logically equivalent. Similarly, $\exists x(\neg\varphi(x))$ and $\neg\forall x(\varphi(x))$ are logically equivalent. These are essentially infinitary versions of De Morgan’s laws: the first equivalence corresponds to $$(\neg\varphi\land\neg\psi)\leftrightarrow\neg(\varphi\lor\psi)\;,$$ the second to $$(\neg\varphi\lor\neg\psi)\leftrightarrow\neg(\varphi\land\psi)\;.$$
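Over a finite domain these equivalences reduce to Python's `all`/`any`, so they can be illustrated mechanically (a toy check, of course, not a proof of the general laws):

```python
# De Morgan for quantifiers over a finite domain:
# forall x. not phi(x)  <=>  not exists x. phi(x)
# exists x. not phi(x)  <=>  not forall x. phi(x)
domain = range(-5, 6)
phi = lambda x: x * x > 10  # any predicate will do

assert all(not phi(x) for x in domain) == (not any(phi(x) for x in domain))
assert any(not phi(x) for x in domain) == (not all(phi(x) for x in domain))
print("both equivalences hold on this domain")
```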
direct sum of representations of product groups
When $V_1$ and $V_2$ are representations of $G_1$ and $G_2$ respectively, I'll use $V_1 \boxtimes V_2$ to mean the representation of $G_1 \times G_2$ with underlying vector space $V_1 \otimes_{\mathbb{C}} V_2$, and $V_1 \boxplus V_2$ to mean the representation of $G_1 \times G_2$ with underlying vector space $V_1 \oplus V_2$. If $V$ is an irreducible representation of $G_1 \times G_2$, then $V$ is isomorphic to $V_1 \boxtimes V_2$ for some irreducible representations $V_1$ and $V_2$ of $G_1$ and $G_2$ respectively. This means that if we know the representations of $G_1$ and $G_2$, then using the $\boxtimes$ construction we can get to all the (irreducible) representations of $G_1 \times G_2$. Conversely, the $\boxtimes$ product of two irreducible representations always produces an irreducible representation of $G_1 \times G_2$. On the other hand, $V_1 \boxplus V_2$ is always reducible as a $G_1 \times G_2$ representation, since both vector subspaces $V_1$ and $V_2$ are stable under the $G_1 \times G_2$ action. On the $V_1$ subspace, really only the $G_1$ part of the group acts, and the $G_2$ part acts trivially, and similarly for the $V_2$ subspace. We cannot produce all irreducible representations of $G_1 \times G_2$ using this construction, which can already be seen in the example $G_1 = G_2 = \mathbb{Z} / 2 \mathbb{Z}$.
Eight employees in a company, part 2
a) We are permuting three groups so it should be $2!2!4!\over8!$ and your answer is correct, assuming the $1\over2$ in type $C$ is a typing mistake for $2\over2$. b) You are not correct this time. In your reasoning, after you choose "not type $A$", you cannot avoid splitting into cases over "not type $B$", because what you have given the two type-$A$ guys might be type $B$ or not. For the correct probability, first note that the original $A$ and $C$ guys must all get type $B$, so it is actually just $4!4!\over8!$
How to define a sequence that takes every value of Z once and no other?
The simple answer is, of course, $$0,-1,1,-2,2,-3,3,...$$ The harder part is making an explicit formula for $a_n$. How about (for $n=0,1,2,3,...$): $$a_n = \begin{cases} \frac n2 & n \text{ even} \\ -\frac{n+1}2 & n \text{ odd} \end{cases}$$ And if piecewise definitions aren't allowed... $$a_n = \frac{n}2\frac{(-1)^n + 1}2 - \frac{(n+1)}2 \frac{(-1)^{n+1} + 1}2$$
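Here is a quick sketch checking the closed form against the piecewise definition:

```python
def a(n):
    # piecewise definition: 0, -1, 1, -2, 2, -3, 3, ...
    return n // 2 if n % 2 == 0 else -(n + 1) // 2

def closed_form(n):
    # the formula without a piecewise definition
    return (n / 2) * ((-1)**n + 1) / 2 - ((n + 1) / 2) * ((-1)**(n + 1) + 1) / 2

assert all(a(n) == closed_form(n) for n in range(1000))
print([a(n) for n in range(7)])  # [0, -1, 1, -2, 2, -3, 3]
```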
Prove that $n^2(n^2+1)(n^2-1)$ is a multiple of $5$ for any integer $n$.
Hint: $$ n^2(n^2+1)(n^2−1)\equiv n^2(n^2-4)(n^2−1) = (n-2)(n-1)(n^2)(n+1)(n+2) \pmod 5, $$ and among five consecutive integers one is always divisible by $5$.
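A quick empirical check of the divisibility claim (no substitute for the hint, but reassuring):

```python
# n^2 (n^2 + 1)(n^2 - 1) should be divisible by 5 for every integer n
assert all(n**2 * (n**2 + 1) * (n**2 - 1) % 5 == 0 for n in range(-100, 101))
print("divisible by 5 for all tested n")
```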
Understanding the homomorphisms from quotient polynomial rings
Homomorphisms $f:\Bbb R[X]/(X^2+1)\to \Bbb C$ correspond to homomorphisms $\phi:\Bbb R[X]\to\Bbb C$ with $(X^2+1)\,\subseteq\,\ker\phi$ (as $f$ has to take 'all forms of' zero to zero). Now a $\phi:\Bbb R[X]\to\Bbb C$ must map $1$ to $1$ (because, I guess, rings are assumed unital) and, a priori, it can map $X$ anywhere in $\Bbb C$, but that already determines the whole homomorphism $\phi$. Then, $(X^2+1)\subseteq\ker\phi\ \iff\ X^2+1\in\ker\phi\ \iff\ \phi(X)^2=-1$, which means that either $\phi(X)=i$ or $\phi(X)=-i$. So we get exactly two such homomorphisms.
How many ways can you arrange the letters of the word ADDITION so that no vowels are next to each other?
Ordering the vowels and consonants separately, they coincidentally have $\frac {4!}{2!} = 12$ arrangements each (the divisor of $2!$ there accounting for the repeats). Inserting the vowels into the consonants means choosing which of the five spaces to fit them into - the three spaces between letters and the two on the end - which can be done $\binom 54= 5$ ways. Since these three stages are independent of each other - the choices made in one case do not limit the other cases - each stage will multiply up the options from the previous stage, so that overall there are: $$\frac {4!}{2!}\frac {4!}{2!}\binom 54 = 12\cdot 12\cdot 5 = 720\text{ options}$$
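The count is small enough to confirm by brute force; deduplicating with a `set` accounts for the repeated D's and I's (a quick sketch):

```python
from itertools import permutations

vowels = set("AIO")
count = sum(
    1
    for w in set(permutations("ADDITION"))
    if not any(w[i] in vowels and w[i + 1] in vowels for i in range(7))
)
print(count)  # 720
```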
Integral $\int_{0}^{\frac{\pi}{2}}{e^{-x}} \cos3xdx$
There are two ways of doing this question. (i) First you can write $e^{-x}$ as $(-e^{-x})'$ and then apply the integration by parts formula, which gives other terms and, up to numerical factors, the integral of $e^{-x}\sin(3x).$ Then once more write $e^{-x}$ as $(-e^{-x})'$ and apply the integration by parts formula again, which gives you back other terms and the desired integral with a number in front of it, so you can solve for the desired integral. (ii) First you can write $\cos(3x)$ as $(\frac{1}{3}\sin(3x))'$ and then apply the integration by parts formula, which gives other terms and, up to numerical factors, the integral of $e^{-x}\sin(3x).$ Then write $\sin(3x)$ as $(-\frac{1}{3}\cos(3x))'$ and apply the integration by parts formula again, which gives you back other terms and the desired integral with a number in front of it, so you can solve for the desired integral.
Formula to calculate trading quantity so that leaves half the profit in cash and half the profit in stock
Try setting out all the values:

- Initial unit price $P_i=80$
- Current unit price $P_c=85$
- Transaction fee rate $f = 0.01$
- Initial number of units $U_i = 10$
- Sale number of units $U_s$ (unknown as yet)
- Final number of units $U_c$ (unknown as yet)
- Final profit in cash $V_c$ (unknown as yet)

So we need three simultaneous equations to find the three unknowns:

1. two ways of calculating the total profit: $V_c + U_cP_c = U_i (P_c - P_i) - f\, U_i P_i - f\, U_s P_c$
2. some of the initial units are sold and the rest kept: $U_i=U_s+U_c$
3. the cash profit has the same value as the share profit: $V_c=U_cP_c$

Solving these will give $U_c= U_i\dfrac{(1-f) P_c - (1+f)P_i}{(2-f)P_c} \approx 0.198049069$, $U_s =U_i-U_c \approx 9.801950931$, $V_c=U_cP_c\approx 16.83417085$.

As a check: $10$ units were purchased at $\$80$ each for $\$800$ and a fee of $\$8$; $9.801950931$ units were sold at $\$85$ each for $\$833.1658291$ and a fee of $\$8.331658291$; so the cash surplus is $\$16.834170854$, and the remaining $0.198049069$ units valued at $\$85$ each are worth $\$16.834170854$.
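Transcribing the formulas into a short script (a sketch using the example numbers above) reproduces the same figures, including the cross-check:

```python
P_i, P_c, f, U_i = 80.0, 85.0, 0.01, 10.0  # prices, fee rate, initial units

U_c = U_i * ((1 - f) * P_c - (1 + f) * P_i) / ((2 - f) * P_c)
U_s = U_i - U_c
V_c = U_c * P_c
print(U_c, U_s, V_c)  # ~0.198049069, ~9.801950931, ~16.83417085

# cross-check: net cash after the round trip (sale proceeds less fees,
# minus purchase cost plus its fee) equals the cash profit
cash = U_s * P_c * (1 - f) - U_i * P_i * (1 + f)
assert abs(cash - V_c) < 1e-9
```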
Solve the following second order linear differential equation
This link seems to describe this well; I have changed the notation to fit your question. You can use a general form for the particular solution when you know the homogeneous part. You wanted to solve \begin{equation} x''(t) + x(t) = b(t) \end{equation} with \begin{equation} b(t) = \frac{-2}{\cos(t)} \end{equation} and you know \begin{equation} x_c(t) = \underbrace{c_1 \cos(t)}_{x_1(t)} + \underbrace{c_2 \sin(t)}_{x_2(t)} = x_1(t) + x_2(t) \end{equation} define \begin{equation} W(x_1,x_2):=x_1(t)x_2'(t) - x_1'(t)x_2(t) \end{equation} which is called the Wronskian, in this case \begin{equation} W(c_1 \cos(t),c_2 \sin(t))=c_1c_2 \end{equation} then the general solution is \begin{equation} x_p(t) = x_2(t)\int \frac{x_1(t)b(t)}{W(x_1,x_2)}\;dt - x_1 \int \frac{x_2(t)b(t)}{W(x_1,x_2)}\;dt \end{equation} which in this particular case is \begin{equation} x_p(t) = -c_2 \sin(t)\int \frac{2c_1 \cos(t)}{c_1c_2 \cos(t)}\;dt + c_1 \cos(t) \int \frac{2c_2 \sin(t)}{c_1c_2 \cos(t)}\;dt \end{equation} you can cancel the constants and the first integral is easily resolved \begin{equation} x_p(t) = 2\cos(t) \int \tan(t)\;dt -2(t+c_3)\sin(t) \end{equation} \begin{equation} x_p(t) = -2\cos(t) (\log(\cos(t))+c_4) -2(t+c_3)\sin(t) \end{equation} and $x(t)=x_c(t)+x_p(t)$ so \begin{equation} x(t) = c_1 \cos(t) + c_2 \sin(t) -2(t+c_3)\sin(t)-2\cos(t)( \log(\cos(t))+c_4) \end{equation}
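If you want to verify the particular solution mechanically, a short sympy sketch (constants dropped, valid where $\cos t > 0$) confirms that $x_p'' + x_p = -2/\cos t$:

```python
import sympy as sp

t = sp.symbols('t')
x_p = -2*t*sp.sin(t) - 2*sp.cos(t)*sp.log(sp.cos(t))
# residual of x'' + x + 2/cos(t) should simplify to zero
residual = sp.simplify(sp.diff(x_p, t, 2) + x_p + 2/sp.cos(t))
print(residual)  # 0
```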
Where does $\forall xP(x,x)\land\forall x\forall y( P(x,y)\rightarrow\forall z( P(x,z)\land P(y,z) ) )\rightarrow \exists x\forall yP(y,x)$ not hold?
You are right: it is valid. Here is a proof with Tableau: 1) $∀xP(x,x) ∧ ∀x∀y(P(x,y) → ∀z(P(x,z)∧P(y,z)))$ --- premise 2) $\lnot ∃x∀yP(y,x)$ --- negation of the conclusion 3) $\lnot ∀yP(y,a)$ --- from 2) 4) $\lnot P(b,a)$ --- from 3) : $b$ new 5) $∀xP(x,x)$ --- from 1) 6) $∀x∀y(P(x,y) → ∀z(P(x,z)∧P(y,z)))$ --- from 1) 7) $P(b,b)$ --- from 5) 8) $P(b,b) → ∀z(P(b,z)∧P(b,z))$ --- from 6) 9a) $\lnot P(b,b)$ --- left branch from 8): it closes with 7) 9b) $∀z(P(b,z)∧P(b,z))$ --- right branch from 8) 10) $P(b,a)$ --- from 9b): it closes with 4).
Marginal Density function
Your answer is correct. In order to find the density function of $Y_2$, we "integrate out" $y_1$, and therefore $$f_{Y_2}(y_2)=\int_{y_1=y_2}^\infty \frac{1}{2}e^{-y_2/2}e^{-y_1/2}\,dy_1.$$ We get $e^{-y_2}$ (for $y_2\ge 0$). The density function of $Y_1$ is calculated in a similar way, but in integrating out $y_2$, we integrate from $0$ to $y_1$. The expression is marginally more complicated because of the evaluation at $0$ term. Remark: For two random variables, indices are not really a good idea. They look alike, and carry no clear geometry. You didn't get caught.
Trouble with uniqueness of Cauchy problem
Answer: I completely overlooked the obvious question of what $D$ is. Here, $D$ is $\Bbb R \times (0,+\infty)$, which is unbounded. And for unbounded regions there is an additional condition required for a unique solution: the solution itself must be bounded! Thanks anyway.
Find the rank of matrix depending on $p$
Yes, you are right so far. Since the matrix has $3$ rows, its rank is at most $3$, and you know that for $ p \notin \{2,3\}$ the minor you considered is nonzero, making the rank $3$. Then you look at the cases $p=2$ and $p=3$ (but there you will have to look at other minors).
Solving a non exact differential equation given the form of its integrating factor
We want an exact equation of the form $M\,dx + N\,dy = 0$, and since it is not exact, we multiply it by the I.F. $I(x, y) = x^p y^q$ and want to solve for $p$ and $q$ to make it exact, so we have: $$x^p y^q \left((3y-2xy^3)dx+(4x-3x^2y^2)dy\right)=0$$ When we differentiate, we have: $$\dfrac{\partial M}{\partial y} = 3(q+1)x^py^q - 2 (q+3) x^{p+1}y^{q+2} $$ $$\dfrac{\partial N}{\partial x} = 4(p+1) x^p y^q - 3(p+2)x^{p+1}y^{q+2}$$ Notice that this was not written correctly in your answer (although it looks like you did it correctly). We set $\dfrac{\partial M}{\partial y} = \dfrac{\partial N}{\partial x}$ and equate like terms to get: $$3 (q + 1) = 4(p+1)\\-2(q+3) = -3 (p+2)$$ This leads to the simultaneous equations: $$3q - 4 p = 1\\-2q +3p = 0$$ which give: $$p=2, ~ q = 3$$ So, the integrating factor is: $$I(x, y) = x^2 y^3$$ Notice from my comment that you can avoid all of this error-prone math.
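A short sympy sketch confirms that multiplying through by $x^2y^3$ does make the equation exact:

```python
import sympy as sp

x, y = sp.symbols('x y')
I = x**2 * y**3  # the integrating factor found above
M = I * (3*y - 2*x*y**3)
N = I * (4*x - 3*x**2*y**2)
# exactness test: dM/dy - dN/dx should vanish identically
print(sp.simplify(sp.diff(M, y) - sp.diff(N, x)))  # 0
```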
How can I prove that if $f$ is a bijection, then the inverse is also a bijection?
Since $f$ is bijective we know that there is an $f^{-1}$ so that $f^{-1}$ maps $y$ to the unique element $x$ such that $f(x) = y$. Thus for any $x$ in the domain we have $f^{-1}(f(x)) = x$. From here you should be able to show that $f^{-1}$ is surjective and injective.
What is the intuition behind this divergence example
For dealing with flux I find the heat inside a volume to be the easiest example to build my intuition. In this case, with constant divergence everywhere, it means every point is a heat source putting out exactly $3$ units of heat per unit volume. Also, it's not accurate to say that there are three more vectors entering than leaving a point as an interpretation of divergence. This can be seen by calculating a similar example given by $OP = \frac{1}{4}x\boldsymbol{i} + \frac{1}{4}y\boldsymbol{j} + \frac{1}{4}z\boldsymbol{k}$, where the divergence is $\frac{3}{4}$, so you wouldn't even have a whole vector. Sources and sinks tend to be a better interpretation, with the size of the divergence measuring how much flow is produced per unit volume at that point.
Kalman Filter of a Stochastic Vectorial Linear Differential Equation with 1 Delay
Sorry, my reputation has not reached 50, so I give you comments here. The equation you have given should be a kind of functional differential equation. Even without $W_t$, I think there is no closed form for the solution (please refer to the book "Stability Analysis and Robust Control of Time-Delay Systems" by Min Wu et al.). Being a functional differential equation means the state is not a Markov process (note that your initial condition is given in function form; this means the system is a distributed system, i.e., the system state cannot be completely determined by one previous time instant). So I guess the Kalman filter might not work. But I am not sure whether a Wiener filter can be applied. Hopefully, someone can give you a better answer, and I am also waiting for it. Cheers, Ryan
Finding limit function $\lim_{n \rightarrow \infty} n ((x^2 +x + 1)^{1/n} -1)$
Put $\,a:=x^2+x+1\,$. Note that $\,x\in\Bbb R\implies a>0\;$ (why?). Thus, you want $$\lim_{n\to\infty} n\left(\sqrt[n] a-1\right)$$ Let us define for a continuous variable $$x>0\;,\;\;f(x):= x(\sqrt[x]a-1)=\frac{\sqrt[x]a-1}{\frac1x}$$ Now, you can apply l'Hospital when $\,x\to\infty\,$ (why?), so $$\lim_{x\to\infty}f(x)\stackrel{\text{l'H}}=\lim_{x\to\infty}\frac{-\frac1{x^2}a^{\frac1x}\log a}{-\frac1{x^2}}=\lim_{x\to\infty}\frac{a^{1/x}\log a}1=\log a$$ Thus....
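A numeric illustration of the limit (a sketch with the sample value $x=2$, so $a = x^2+x+1 = 7$):

```python
from math import log

a = 7.0  # a = x^2 + x + 1 with x = 2
for n in (10, 100, 1000, 10000):
    print(n, n * (a**(1/n) - 1))  # converges to log(a)
print("log(a) =", log(a))
```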
Drawing a triangle with 2 known corners and all side lengths
I would do as follows. Pick the longest side, $c$. Draw $AB$ of length $c$ along the $x$-axis, starting from zero. Let $C_x = x$. From the right-angled triangles featuring the height of the triangle we have $$b^2 - x^2 = a^2 - (c-x)^2= a^2 - c^2 +2cx - x^2$$ $$2cx = b^2 + c^2 - a^2$$ Then the coordinates of point $C$ are $$C_x = \frac {b^2 + c^2 - a^2}{2c}$$ $$C_y=\sqrt {b^2 - C_x^2}$$ I haven't checked it thoroughly, but it seems I got it right. Oh, and of course, if this is done with arbitrary given numbers $a,b,c$, then you should first check that the triangle inequality holds, or the formulas will simply fail.
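A direct transcription into Python, with the triangle-inequality check included (a sketch; the `third_vertex` name is mine):

```python
from math import sqrt

def third_vertex(a, b, c):
    """A=(0,0), B=(c,0); return C for side lengths a=BC, b=CA, c=AB."""
    if a + b <= c or b + c <= a or c + a <= b:
        raise ValueError("triangle inequality violated")
    cx = (b*b + c*c - a*a) / (2*c)
    cy = sqrt(b*b - cx*cx)
    return (cx, cy)

print(third_vertex(3, 4, 5))  # (3.2, 2.4) for the 3-4-5 right triangle
```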
Continuity of potential function at an interior point
Absolute continuity of the Lebesgue integral can be used here. Since the function $\rho(x')/|x'-x_0|$ is integrable on the ball $B(x_0, 1)$, it follows that for every $\epsilon>0$ there exists $\delta>0$ such that if $E\subset B(x_0, 1)$ has measure less than $\delta$, then $$ \int_E \frac{|\rho(x')|}{|x'-x_0|}\, dx' < \epsilon $$ This argument works for both the 2nd and 3rd integrals; they don't have to be treated separately.
Is a transformed mixing process is still mixing?
The $\sigma$-algebra generated by $f(X_t)$ is contained in the $\sigma$-algebra generated by $X_t$; consequently for all $I\subset \mathbb Z$ (possibly infinite), $$ \sigma\left( f(X_t),t\in I\right)\subset \sigma\left( X_t,t\in I\right) $$ For this reason, for all $n$, $$ \alpha\left( \left( f(X_t)\right)_{t\geqslant 1},n\right)\leqslant \alpha\left( \left( X_t\right)_{t\geqslant 1},n\right) $$ and a similar relation holds for $\beta$ and $\phi$-mixing coefficients.
3D graphing on a 3-D plane?
Consider the range of the function, then draw level curves: $$f(x,y) = |x+y| = 0$$ $$f(x,y) = |x+y| = 1$$ $$f(x,y) = |x+y| = 2$$ etc. Each level $k>0$ is the pair of parallel lines $x+y=\pm k$.
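One way to sketch these level curves programmatically (a rough sketch using numpy/matplotlib):

```python
import numpy as np
import matplotlib.pyplot as plt

xs = np.linspace(-3, 3, 400)
X, Y = np.meshgrid(xs, xs)
cs = plt.contour(X, Y, np.abs(X + Y), levels=[0, 1, 2, 3])
plt.clabel(cs, inline=True)  # label each level curve with its value
plt.gca().set_aspect('equal')
plt.show()
```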
Derivative of Determinant Map
Write $U((V_1,V_2,W_1,W_2)) = V_1 W_2-V_2 W_1$. Then just compute the partial derivatives: $\frac{\partial U(V_1,V_2,W_1,W_2)}{\partial V_1} = W_2$, $\frac{\partial U(V_1,V_2,W_1,W_2)}{\partial V_2} = -W_1$, $\frac{\partial U(V_1,V_2,W_1,W_2)}{\partial W_1} = - V_2$, $\frac{\partial U(V_1,V_2,W_1,W_2)}{\partial W_2} = V_1$. Then the derivative at $(V,W)$ in the direction $(H,K)$ is given by $$DU((V,W))((H,K)) = W_2 H_1-W_1 H_2-V_2 K_1+V_1 K_2 = \det(H,W)+\det(V,K)$$ This can also be written in terms of the Frobenius inner product as $$DU((V,W))((H,K)) =\langle \begin{bmatrix} W_2 &amp; -V_2 \\ -W_1 &amp; V_1\end{bmatrix}, \begin{bmatrix} H_1 &amp; K_1 \\ H_2 &amp; K_2 \end{bmatrix} \rangle$$ and so we can write the gradient $$ \nabla U((V,W)) = \begin{bmatrix} W_2 &amp; -V_2 \\ -W_1 &amp; V_1\end{bmatrix}$$
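A finite-difference sanity check of this formula, with random (hypothetical) inputs:

```python
import random

def det(v, w):
    # determinant of the 2x2 matrix with columns v, w
    return v[0]*w[1] - v[1]*w[0]

V, W, H, K = [[random.random(), random.random()] for _ in range(4)]
eps = 1e-6
# numerical directional derivative of U at (V, W) in direction (H, K)
num = (det([V[0]+eps*H[0], V[1]+eps*H[1]],
           [W[0]+eps*K[0], W[1]+eps*K[1]]) - det(V, W)) / eps
print(num, det(H, W) + det(V, K))  # agree to about 1e-6
```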
Find the time period of the signal $\cos (n^2\cdot\pi/4)$?
If $x$ is a continuous variable then the function $\cos{(kx^2)}$ is not periodic. One can see this by differentiating the function, giving $-2kx\sin{(kx^2)}$, which is unbounded as $x\to\infty$; the derivative of a differentiable periodic function is itself periodic, hence bounded.
How many ways can we divide the Space with $N$ lines?
With four lines, there are seven different ways, i.e. numbers of regions into which a bounded space can be divided by four lines. The minimum number of regions is $4+1=5$, produced when no lines intersect within the space. But every number of regions from $5$ to the maximum $11$ is possible. For beginning with $4$ non-intersecting lines within the space, we can add $1, 2, 3$ regions by redrawing one line so as to cut first one, then two, then three other lines, as in the top row of the figure. Then we add a $4$th and $5$th region by redrawing the third line so as to cut one and then both of the first two lines, as in the second row of the figure. Finally, we get a $6$th region by making the second line cut the first, as in the third row. Thus to the original five regions we have added$$3+2+1=6$$

Generalizing, since the minimum number of regions is $n+1$, and the maximum is$$\frac{n^2+n+2}{2}$$then the number of ways will be$$\frac{n^2+n+2}{2}-n=\frac{n^2-n+2}{2}$$ And since$$\frac{n^2-n+2}{2}=\frac{(n-1)n}{2}+1$$we see that the number of ways $n$ lines can divide the space is equal to the $(n-1)$th triangle number plus $1$.

Note: This seems to be something more general than a geometric problem. If the space is bounded, no pair of non-intersecting lines need be parallel. Nor do the line segments need to be straight, provided that no two intersect more than once within the space. And perhaps the only condition on the bounded space is that it be concave from within?

Correction: If the plane is unbounded, and "how many ways" means "how many arrangements," as arrangement is explained in the comments on OEIS A241600 referenced by @Daniel Mathias, then the above is not a suitable answer to the question posted, and there do appear to be nine arrangements of four lines. The first two rows have parallels, the third does not. There is one 3-line concurrence in the second row, and a 3-line and a 4-line concurrence in the third. The number of regions, left to right and top to bottom, is$$5, 8, 9, 9, 10, 10, 8, 10, 11$$Unlike the situation as I first understood it, there are gaps and repetitions in the number of regions produced by the different arrangements. This seems to make the determination of $P$ as a function of $n$ a more difficult task.

Correction continued: $P$ for $n=5$. The figure shows twenty-one arrangements with two or more lines parallel when $n=5$. In each row (except the last, which is actually two rows of two each), four lines keep a given position throughout the row as a fifth line shifts its position through the essentially different arrangements possible. After the first row, showing one arrangement each for five and four parallels, the second row has only three parallels, the third has two pairs of two, and the fourth and fifth only two. Next we can see by a single figure below the arrangements possible when no lines are parallel. Again we suppose the four lines $AB$, $AC$, $FB$, $FD$ given in position. $G$ is any point not collinear with any two of the six intersection points $A$, $B$, $C$, $D$, $E$, $F$. A fifth line through $G$ can pass through and among the six points in $6+7=13$ different ways. But if $G$ is collinear with a line $BE$, $DC$, or $AF$, it can be seen even without another figure that the number of possible arrangements will be only $5+6=11$. And finally, we have one arrangement each for four and five concurrent lines. Adding them up,$$1+1+5+4+10+13+11+1+1=47$$in agreement with OEIS A241600.