Basic combinatorics question, can you help me? | You have $10$ placeholders, and you choose the $5$ places where the yellow beans will go: ${10 \choose 5}=252$ possibilities.
Regarding the second statement and your comment:
If, by Catalan word, you actually mean Dyck word, then the answer is easy: for a length $2n$, you have ${2n \choose n}$ possible words with two symbols, out of which $C_n$ are Dyck words, with $C_n$ the Catalan numbers:
$$C_n=\frac1{n+1}{2n \choose n}$$
So your probability would be $\dfrac1{n+1}$. |
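Not part of the original answer: a brute-force sanity check of the $\frac1{n+1}$ ratio, counting Dyck words among the $\binom{2n}{n}$ words with $n$ of each symbol (a sketch in Python):

```python
# For small n, count balanced ("Dyck") words among all words with n '(' and
# n ')' and check that exactly 1/(n+1) of them are Dyck words.
from itertools import combinations
from math import comb

def is_dyck(word):
    """True if every prefix has at least as many '(' as ')'."""
    depth = 0
    for c in word:
        depth += 1 if c == '(' else -1
        if depth < 0:
            return False
    return depth == 0

for n in range(1, 8):
    total = dyck = 0
    for pos in combinations(range(2 * n), n):  # positions of the '(' symbols
        word = [')'] * (2 * n)
        for i in pos:
            word[i] = '('
        total += 1
        dyck += is_dyck(word)
    assert total == comb(2 * n, n)
    assert dyck * (n + 1) == total  # the fraction is exactly 1/(n+1)
```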
Proving derivative of a function is a linear isomorphism on some subset of the domain | I don't think you need the inverse function theorem here. All you need to know is that $Df(p)$ is a linear isomorphism at a given $p \in U$ if and only if $\det Df(p) \neq 0$. The mapping $p \mapsto Df(p)$ is continuous (because $f$ is continuously differentiable) and the mapping $A \mapsto \det A$ is continuous, so the composition $p \mapsto \det Df(p)$ is continuous. Therefore, the set $\{ p \in U : \det Df(p) \neq 0 \}$ is open, and your result follows.
Having said that, the official statement of the inverse function theorem does tell you that $Df(p)$ is invertible for $p$ in a local neighbourhood of $a$, which is precisely the result you want. It even gives you a formula for the inverse $Df(p)^{-1}$. But quoting the inverse function theorem here seems overkill, because the non-trivial part of the inverse function theorem is really that the function $f$ itself has a local inverse. |
Defective component probability | Use Bayes' formula:
$$
P(A|B)=\frac{P(A)P(B|A)}{P(B)}
$$
Define $A$ as defective, $B$ as shown defective by test. So we have
$$
P(A|B) = \frac{0.05\cdot 1}{0.95\cdot 0.02+0.05}
$$ |
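Not part of the original answer: a numeric sketch of this computation (it assumes $P(B\mid A)=1$, a perfectly sensitive test, and a $2\%$ false-positive rate, as in the formula above):

```python
# Posterior probability that a component is defective given a positive test.
p_defective = 0.05
p_pos_given_def = 1.0    # test always flags a truly defective component
p_pos_given_ok = 0.02    # false-positive rate

p_pos = p_pos_given_def * p_defective + p_pos_given_ok * (1 - p_defective)
posterior = p_pos_given_def * p_defective / p_pos
print(posterior)  # 0.05 / 0.069 ≈ 0.7246
```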
solve $|x-6|>|x^2-5x+9|$ | Another way is to note the inequality is equivalent to
$$(x-6)^2>(x^2-5x+9)^2\iff -(x-1)(x-3)(x^2-6x+15)>0$$
The quadratic is always positive, so this is the same as $(x-1)(x-3)<0$ which means $x\in (1,3)$. |
Irreducible representation of dimension $5$ of $S_5$ | You can construct one of these as follows (the other comes, as you observed, by tensoring it with the sign character). Undoubtedly you know that $S_5$ has six Sylow 5-subgroups, each with a normalizer $N$ of order 20. It is easy to convince yourself of the fact that the conjugation action on this set of six elements is doubly transitive. Therefore, by the usual result, this 6-dimensional representation splits into a direct sum of a 1-dimensional trivial representation and a 5-dimensional irreducible one.
This gives, indeed, the exceptional transitive embedding $f$ of $S_5$ in $S_6$. Even more so: by studying the type of elements present in $N$ (and its conjugates), you can deduce a number of things about the cycle structure of elements of $f(S_5)$. Furthermore, $f(S_5)$ obviously has six conjugates in $S_6$. This conjugation action gives rise to the famous non-inner automorphism of $S_6$. Using the bits that you get from the cycle structure of elements in $f(S_5)$ you can deduce that this outer automorphism interchanges the conjugacy classes of $(12)$ and $(12)(34)(56)$ (among other things). |
Advantages of IMO students in Mathematical Research | Training for competitions will help you solve competition problems - that's all. These are not the sort of problems that one typically struggles with later as a professional mathematician - for many different reasons. First, and foremost, the problems that one typically faces at research level are not problems carefully crafted so that they may be solved within certain time limits. Indeed, for problems encountered "in the wild", one often does not have any inkling whether or not they are true. So often one works simultaneously looking for counterexamples and proofs. Often solutions require discovering fundamentally new techniques - as opposed to competition problems - which typically may be solved by employing variations of methods from a standard toolbox of "tricks". Moreover, there is no artificial time limit constraint on solving problems in the wild. Some research level problems require years of work and immense persistence (e.g. Wiles's proof of FLT). Those are typically not skills that can be measured by competitions. While competitions might be used to encourage students, they should never be used to discourage them.
There is a great diversity among mathematicians. Some are prolific problem solvers (e.g. Erdos) and others are grand theory builders (e.g. Grothendieck). Most are somewhere between these extremes. All can make significant, surprising contributions to mathematics. History is a good teacher here. One can learn from the masters not only from their mathematics, but also from the way that they learned their mathematics. You will find much interesting advice in the (auto-)biographies of eminent mathematicians. Time spent perusing such may prove much more rewarding later in your career than time spent learning yet another competition trick. Strive to aim for a proper balance of specialization and generalization in your studies. |
Can we qualitatively predict the strategy of the German and US teams in today's World Cup soccer match? | I tried to deal with the complete problem analytically, but what I came up with seemed so involved and unenlightening that I propose a further simplification: Let's assume that after a goal is scored, the scoring team is likely enough to win by adopting a conservative approach that we can neglect the probability of the other team equalizing.
Then the game reduces to trying to score a goal or make it to the final whistle before the other team scores a goal. At each point in the game, each team can choose among various approaches (in game-theoretical terms the "actions"), some more conservative and some more risky. Given a fixed approach of their opponents, each of the team's approaches can be characterized by a rate $\lambda_1$ at which they score goals and a rate $\lambda_2$ at which their opponents score goals. In switching from a more conservative to a more risky approach, they will increase both $\lambda_1$ and $\lambda_2$. (If one increases while the other decreases, that means that one of the approaches is dominated by the other and need not be considered.)
Now the game is a simple continuous-time Markov chain with transition rates $\lambda_1$ and $\lambda_2$ from an initial tied state to two final winning states, and the probability distribution after time $t$ is
$$
\pmatrix{
c_1\left(1-\mathrm e^{-\lambda t}\right)\\
\mathrm e^{-\lambda t}\\
c_2\left(1-\mathrm e^{-\lambda t}\right)
}
$$
with $\lambda=\lambda_1+\lambda_2$ and $c_i=\lambda_i/\lambda$ (where the order of the entries is win, draw, lose). Thus, if time $t$ remains to be played and the team adopts the same approach for this entire time, its probability of advancing to the next round will be
$$
p=c_1\left(1-\mathrm e^{-\lambda t}\right)+\mathrm e^{-\lambda t}\;,
$$
the sum of the first two entries. Let's analyze this for large and small $t$. For $t\gg1/\lambda$, we can neglect the exponential terms and have $p\approx c_1$, i.e. the probability approaches the proportion of goals scored by the team. For $t\ll1/\lambda$, we can expand the exponential terms to first order in $t$ and get $p\approx1-(1-c_1)\lambda t=1-c_2\lambda t=1-\lambda_2 t$, so the probability decreases in proportion to the rate at which the opponents score goals.
Both limiting results make sense – if the game goes on forever, eventually someone is going to score a goal, and you want to make sure that it's likely to be you; if the game lasts only a short time, it's quadratically unlikely that both teams would score a goal during that time, so it doesn't pay to invest in scoring one first and you just want to minimize the chances of the other team scoring one.
To summarize, in the short term you want to minimize $\lambda_2$, and in the long term you want to maximize $c_1=\lambda_1/(\lambda_1+\lambda_2)$ (or equivalently minimize $c_2$). Clearly the most conservative approach minimizes $\lambda_2$. For $c_2$, it depends on the details, but it's at least plausible that by adopting a more risky approach you can increase your own rate of scoring goals disproportionately more than your opponents', and thus maximize $c_1$. If this is so, then the team should indeed adopt a more risky approach when there's a lot of time left and a more conservative approach closer to the end of the match.
To derive a more quantitative result, let's assume for simplicity that one team's approach is fixed (say, they play conservatively throughout the game because a more risky approach would lower their proportion of goals scored); and given this, the other team has exactly two approaches, a conservative approach with goal rates $\lambda_1$ and $\lambda_2$ and a risky approach with goal rates $\lambda'_1$ and $\lambda'_2$ (again with $\lambda'=\lambda'_1+\lambda'_2$). Then there must be a crossover point $t_\mathrm c$ before the end of the match at which the team should switch from the risky to the conservative approach. To find $t_\mathrm c$, consider an infinitesimal time interval $\mathrm dt$ right before the switch. If the team takes the risky approach during this interval, it can lose either by conceding a goal during the interval, with probability $\lambda'_2\mathrm dt$, or by losing during the remainder of the game, with probability $(1-\lambda'\mathrm dt)c_2(1-\mathrm e^{-\lambda t_\mathrm c})$ (since it will take the conservative approach after the switch). If the team takes the conservative approach during the interval, the result will be the same with the primed quantities replaced by the unprimed quantities. The crossover point $t_\mathrm c$ is determined by the condition that these two probabilities are the same, i.e.
\begin{eqnarray}
\mathrm dt\lambda'_2+\left(1-\lambda'\mathrm dt\right)c_2\left(1-\mathrm e^{-\lambda t_\mathrm c}\right)&=&
\mathrm dt\lambda_2+\left(1-\lambda\mathrm dt\right)c_2\left(1-\mathrm e^{-\lambda t_\mathrm c}\right)\;,\\
\lambda'_2-\lambda'c_2\left(1-\mathrm e^{-\lambda t_\mathrm c}\right)&=&
\lambda_2-\lambda c_2\left(1-\mathrm e^{-\lambda t_\mathrm c}\right)\;,
\end{eqnarray}
\begin{eqnarray}
t_\mathrm c&=&-\frac1\lambda\log\left(1-\frac1{c_2}\frac{\lambda'_2-\lambda_2}{\lambda'-\lambda}\right)\\
&=&-\frac1\lambda\log\left(1-\frac\lambda{\lambda_2}\frac{\lambda'_2-\lambda_2}{\lambda'-\lambda}\right)\\
&=&-\frac1\lambda\log\left(1-\frac{1-\lambda'_2/\lambda_2}{1-\lambda'/\lambda}\right)\;.
\end{eqnarray}
To produce some numbers, let's assume that Germany takes a conservative approach throughout and the US has a choice of either doing the same, in which case they score one goal per $90$ minutes and Germany scores one goal per $60$ minutes, or taking risks, in which case they score one goal per $40$ minutes and Germany scores one goal per $30$ minutes (so the US can increase its share of goals by taking risks). Then we get $t_\mathrm c\approx86$ minutes, i.e. there would be about $4$ minutes of interesting play and then $86$ minutes of disappointment.
To improve that, let's assume that the US does a bit worse when playing conservatively, only scoring one goal per $120$ minutes. Then $t_\mathrm c\approx55$ minutes, so almost the entire first half would be interesting. To arrive at an even later crossover point, as predicted in the article, we'd have to assume an even greater advantage from taking risks, or assume overall higher goal rates (as $t_\mathrm c$ scales inversely proportionally if we scale all goal rates proportionally), or drop some of the simplifications. Thus, the continuation of the game after the first goal might favour risk taking, since a conceded goal can be compensated by a single goal whereas after taking the lead only two opposing goals can bring defeat; and certainly the incentive to win in order to get a weaker opponent in the next round favours risk taking. |
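Not part of the original answer: a sketch computing the crossover time $t_\mathrm c$ from the formula above for the two numeric scenarios (rates are goals per minute):

```python
from math import log

def t_c(l1, l2, l1r, l2r):
    """Crossover time: (l1, l2) conservative rates, (l1r, l2r) risky rates."""
    lam, lam_r = l1 + l2, l1r + l2r
    c2 = l2 / lam
    return -1 / lam * log(1 - (l2r - l2) / ((lam_r - lam) * c2))

# Conservative: US 1/90, Germany 1/60; risky: US 1/40, Germany 1/30.
print(t_c(1/90, 1/60, 1/40, 1/30))   # ≈ 86.3 minutes
# Weaker conservative US attack: one goal per 120 minutes instead of 90.
print(t_c(1/120, 1/60, 1/40, 1/30))  # ≈ 55.5 minutes
```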
Do redundant constraints help in big-M reformulation? | I think the question of whether redundant constraints help is in general an empirical one. Sometimes they do, sometimes they don't. The extra binary variables might indeed slow things down, but they also might not.
One concern I would have would be whether the $\beta$ variables might "distract" the solver from focusing on other integer variables that perhaps would be more important. If I thought that were happening, I would try adding branching priorities (which at least some solvers let you supply). Branching priorities are basically weights assigned to the integer variables, such that the solver is encouraged/compelled to branch first on the variables with higher priority. Giving the $\beta$ variables lower priorities would at least keep the solver focused on the other integer variables. |
Is there any geometrical definition of polynomials? | A polynomial curve is one that is annihilated by repeated differentiation: after finitely many successive differentiations you get zero at all points, and just before that you get a straight line parallel to the $x$ axis (a constant function). |
Finding the Laurent Series for $\frac{1}{e^z-1}$ for $0<|z|<2\pi$ | One may observe that $\displaystyle z \to \frac{z}{e^z-1}$ is analytic in $0<|z|<2\pi$, then it admits a power series expansion
$$
\frac{z}{e^z-1}=\sum\limits_{n=0}^{\infty}b_n\frac{z^n}{n!}, \quad 0<|z|<2\pi, \tag1
$$ with $b_0=1$, $b_1=-1/2$.
Then, multiplying $(1)$ by $\displaystyle e^z$,
$$
\left(\sum\limits_{n=0}^{\infty}b_n\frac{z^n}{n!}\right)\left(\sum\limits_{n=0}^{\infty}\frac{z^n}{n!}\right)=\frac{z}{e^z-1}e^z=z+\frac{z}{e^z-1}=z+\sum\limits_{n=0}^{\infty}b_n\frac{z^n}{n!},\tag2
$$ using the Cauchy product, one gets
$$
b_n=\sum_{k=0}^n \binom{n}{k}b_k, \quad n>1, \quad b_0=1,\, b_1=-1/2,\tag3
$$ but from $(3)$ one sees that these are just the standard Bernoulli numbers: $B_n$.
Finally, we have
$$
\frac1{e^z-1}=\frac1z-\frac12+\frac{z}{12}+\cdots=\frac1z+\sum\limits_{n=0}^{\infty}B_{n+1}\frac{z^n}{(n+1)!}, \quad 0<|z|<2\pi. \tag4
$$ |
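Not in the original answer: a quick sympy sketch reproducing expansion $(4)$ (note $z/12 = B_2\,z/2!$ and $-z^3/720 = B_4\,z^3/4!$):

```python
# Laurent expansion of 1/(e^z - 1) around z = 0.
import sympy as sp

z = sp.symbols('z')
print(sp.series(1 / (sp.exp(z) - 1), z, 0, 6))
# 1/z - 1/2 + z/12 - z**3/720 + ..., matching (4)
```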
Determinant of the matrix associated with the quadratic form | Consider the matrix $$A= \begin{pmatrix}
4 & 3 & 3 \\
3 & 9 & 4 \\
3 & 4 & 2 \\
\end{pmatrix}.$$
Note that the coefficients relating to $x^2,y^2$ and $z^2$ lie on its diagonal. The other entries correspond to half of the coefficients of the interaction terms. Multiplying this matrix on the left by the row vector $\begin{pmatrix}
x&y&z \\
\end{pmatrix}$ and on the right by the column vector $\begin{pmatrix}
x\\y\\z \\
\end{pmatrix}$ yields
\begin{align} \begin{pmatrix}
x&y&z \\
\end{pmatrix} A \begin{pmatrix}
x\\y\\z \\
\end{pmatrix} &= \begin{pmatrix}
x&y&z \\
\end{pmatrix} \begin{pmatrix}
4 & 3 & 3 \\
3 & 9 & 4 \\
3 & 4 & 2 \\
\end{pmatrix}\begin{pmatrix}
x\\y\\z \\
\end{pmatrix}\\
&=\begin{pmatrix}
x&y&z \\
\end{pmatrix} \begin{pmatrix}
4x + 3y + 3z \\
3x + 9y + 4z \\
3x + 4y + 2z \\
\end{pmatrix}\\
&= 4x^2 +3xy + 3xz +3xy +9y^2+4yz+3xz+4yz +2z^2\\
&= 4x^2 +9y^2 +2z^2 +6xy + 6xz + 8yz.
\end{align}
Hence, $A$ is the matrix we are looking for: it produces the desired quadratic form.
However, we are interested in the determinant of $A$. Luckily, $A$ is a $3 \times 3$ matrix. Hence, we can use the rule of Sarrus to calculate the determinant:
\begin{align}
\det(A)&= \begin{vmatrix}
4 & 3 & 3 \\
3 & 9 & 4 \\
3 & 4 & 2 \\
\end{vmatrix}\\
&= 4\cdot 9 \cdot 2 +3\cdot4\cdot3 +3\cdot4\cdot3 - 3\cdot9\cdot3 -3\cdot3\cdot2 -4\cdot4\cdot4\\
&= 72 +36+36 - 81-18 -64\\
&=-19.
\end{align}
Hence, $\det(A)=-19$. |
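A one-line numeric confirmation (a sketch, not part of the original answer):

```python
import numpy as np

A = np.array([[4, 3, 3], [3, 9, 4], [3, 4, 2]])
print(round(np.linalg.det(A)))  # -19
```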
Irreducibility issue | As long as there's a complete answer, there might as well be a conceptual explanation too. As Dylan Moreland says in the comments, note that $f(x) = \frac{x^p - 1}{x - 1}$. Since $x^p - 1 \equiv (x - 1)^p \bmod p$ (the binomial coefficients $\binom pk$ with $0 < k < p$ all vanish mod $p$), it follows that
$$f(x) \equiv \frac{(x - 1)^p}{x - 1} \equiv (x - 1)^{p-1} \bmod p$$
hence that
$$f(x+1) \equiv x^{p-1} \bmod p.$$
But $f(1) = p$, so hopefully the idea of using Eisenstein's criterion on $f(x + 1)$ seems more natural now. |
When does $\frac{x^2}{2} - \frac{x}{2} = x\log x$? | One obvious solution is $x=1$, when both sides of
$$\frac12(x^2-x)=x\log_b x$$
become $0$. Note that the right side of
is defined only when $x>0$ so we shall assume so.
Thus we are allowed to divide by $x$ and arrive at
$$\frac12(x-1)=\log_b x$$
and after exponentiation
$$\tag1x=\sqrt{b^{x-1}}.$$
While we don't find an explicit expression for the solution, $(1)$ can be used as a recursion formula for a sequence quickly converging to a solution. The value depends on the base of the logarithm and can only be obtained as a numerical approximation this way: $x\approx 0.749228$ if $b=10$, and $x\approx 3.5128624$ if $b=e$.
EDIT: For some bases $b$, the inverse of $(1)$ is better suited to iterate towards the fixpoint, i.e. to let $x_{n+1}=1+2\log_bx_n$.
For example, this way one obtains $x\approx 6.319722$ if $b=2$. |
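Not part of the original answer: a sketch of both iterations in Python (starting values are my choice; iteration $(1)$ contracts near the fixed point for $b=10$ but not for $b=e$ or $b=2$, where the inverse iteration from the EDIT converges instead):

```python
from math import sqrt, log, e

def forward(b, x=0.5, steps=60):   # x_{n+1} = sqrt(b^(x_n - 1)), formula (1)
    for _ in range(steps):
        x = sqrt(b ** (x - 1))
    return x

def inverse(b, x=5.0, steps=60):   # x_{n+1} = 1 + 2 log_b(x_n), the EDIT
    for _ in range(steps):
        x = 1 + 2 * log(x) / log(b)
    return x

print(forward(10))  # ≈ 0.749228
print(inverse(e))   # ≈ 3.5128624
print(inverse(2))   # ≈ 6.319722
```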
How do you find all minimal vertex cover of bipartite graph $G$? | This depends on the particular bipartite graph. Kőnig's theorem states that in a bipartite graph the size of a minimum vertex cover equals the size of a maximum matching. In your case, the largest matching has size at most $m$. Also see Hall's theorem. |
Show that $f$ satisfies Lipschitz condition if it is Lipschitz on the lines with rational coordinates | Claim: If $f$ is continuous on a metric space $X$, and is Lipschitz continuous on a dense subset $A\subset X$, then it is Lipschitz continuous on $X$.
Proof: Given $x,y\in X$, pick two sequences $x_n\to x$, $y_n\to y$ such that $x_n,y_n\in A$. By continuity of $f$ and of the distance function,
$$
|f(x)-f(y)| = \lim_{n\to\infty} |f(x_n)-f(y_n)| \le L \lim_{n\to\infty} d(x_n,y_n) = L d(x,y)
$$
as desired.
Concerning your proof: you should say that MVT is used twice, to go from $(x_1,y_1)$ to $(x_2,y_1)$ and then to $(x_2,y_2)$. Which raises the issue of whether $(x_2,y_1)$ is in the domain: convexity does not guarantee that.
To repair this gap, first work with a square $Q$ contained in $\Omega$. You will obtain that $f$ is $M$-Lipschitz on every such square, with same constant $M$. Then, given two points $p,q\in\Omega$, cover the line segment connecting them by open squares $Q_j$. There is a partition of this segment such that every subsegment belongs to some $Q_j$ (this is a consequence of the Lebesgue number lemma, or can be proved directly). Finally, use the triangle inequality to sum over the partition. |
Equation manipulation tips/explanation for $2v_1w_1v_2w_2=v_1^{\ \ 2}w_2^{\ \ 2}+v_2^{\ \ 2}w_1^{\ \ 2}$ and its solution of $\mathbf{w}=a\mathbf{v}$ | $$2v_1w_1v_2w_2=v_1^2w_2^2+v_2^2w_1^2$$
$$(v_1w_2-v_2w_1)^2=0$$
Hence we have $$v_1w_2=v_2w_1$$ which is what you obtained.
The determinant of the matrix $\begin{bmatrix} v_1 & v_2 \\ w_1 & w_2\end{bmatrix}$ is $0$, hence the conclusion.
Note: Be careful when you divide by a number, we should check that it is non-zero. |
Proving A is B if and only if C | In my opinion this depends on what type of objects $A,B$ and $C$ are, but most likely it is the second interpretation. In particular, if $A$ and $B$ are sets and $C$ is a logical statement, then your second interpretation holds; you'd have $(A = B) \Leftrightarrow C$.
If all three of $A,B,C$ are logical formulas, then it is plausible to interpret the statement as $A \Leftrightarrow (B \Leftrightarrow C)$. However, this seems an unlikely interpretation because this would mean that "is" and "if and only if" are both used to mean the equivalence of logical formulas. |
Flipping a fraction within an Inequality? | This is possible in your case because all parts are guaranteed to be $>0$. In general:
Given a strictly monotone decreasing function $f : A\to\mathbb R$ where $A\subset \mathbb R$ is an interval and an inequality
$$a<b$$
where both $a,b\in A$ the inequality implies
$$f(a) > f(b)$$
In your case, $A = (0,\infty)$ and $f(x) = \frac1x$. For a non-strict version ($a\le b$) the function $f$ can be monotone (not necessarily strictly monotone).
Using $A = (-\infty, 0)$ and $f(x) = \frac1x$ also works, so
$$-\infty < a<x<b<0 \Leftrightarrow \underbrace0_{="\frac1{-\infty}"} > \frac1a > \frac1x > \frac1b > \underbrace{-\infty}_{="\frac10"}$$
and
$$0<a<x<b<\infty \Leftrightarrow \underbrace{\infty}_{="\frac10"} > \frac1a > \frac1x > \frac1b > \underbrace0_{= "\frac1\infty"}$$
But we must have that $a,b,x$ have the same sign.
Note that you can see that $\frac10$ cannot be defined here. In the first equation, it becomes a $-\infty$ while in the second it becomes a $+\infty$. |
Expressiveness of First Order Logic with unary predicates and finite number of variables? | Does the identity predicate count as part of the logic? If it does, then "there are exactly two $F$s" can be expressed using three variables but not two. |
If $\forall V\subseteq X$ where $x\in \overline V; f(x) \in \overline{f(V)}$, then $f$ is continous in $x$ | I guess the $=$ sign in the last sentence is supposed to be a $\in$.
The important point is that $f(X\setminus f^{-1}(U))$ is a subset of $Y\setminus U$. So if $f(x)\in \overline{f(X\setminus f^{-1}(U))}$, then it is in the closure of $Y\setminus U$, which is absurd. So you reached a contradiction, and this means that your assumption that $f$ is not continuous at $x$ is false. |
Simplifying an expression in $\Bbb{Q}(\zeta_p)$ | There are at least a couple of profitable ways of re-writing this. A short triggy answer is that your expression simplifies to
$$
\frac{\left(2\cos\left(\frac{2\pi}{p}\right)-1\right)^3}{2\cos\left(\frac{2\pi}{p}\right)-2},
$$
which for $p=3$ gives the value of $\boxed{\frac{8}{3}}$ you allude to above, and for $p=5$ gives the neatly random-looking value of
$$\frac{\left(\sqrt{5}-3\right)^3}{4(\sqrt{5} - 5)}=\frac{8\sqrt{5}-18}{\sqrt{5} - 5}=\boxed{\frac{2}{11\sqrt{5}+25}.}$$
For discussing the work that leads to the simplification, it's slightly more convenient from the view of algebraic number theory to deal with the reciprocal
$$
\xi_p:=\frac{\zeta_p^2(\zeta_p-1)^2}{(\zeta_p^2-\zeta_p+1)^3}.
$$
First, it's not too hard to check that $\zeta_p^2-\zeta_p+1$ is a unit of $\mathbb{Z}[\zeta_p]$ for $p>3$, so writing it as
$$
\frac{\zeta_p^2}{(\zeta_p^2-\zeta_p+1)^3}\cdot (\zeta_p-1)^2
$$
makes it clear that $\xi_p$ is an algebraic integer (this is why I wanted the reciprocal), of absolute norm $p^2$ (since $\zeta_p-1$ is a degree 1 prime above $p$ in $\mathbb{Q}(\zeta_p)$.) Second, let's take advantage of the fact that we know that $\xi_p$ is totally real, and so an element of $\mathbb{Z}[\zeta_p^+]$, where $\zeta_p^+:=\zeta_p+\zeta_p^{-1}.$ From the above re-writing, it's unreasonable to expect (actually, probably impossible) for $\xi_p$ to live in any proper subfield of $\mathbb{Q}(\zeta_p^+)$. So a reasonable interpretation for the problem of an ultimate simplification for $\xi_p$ is to write it completely in terms of $\zeta_p^+$. Let's do this now:
$$
\xi_p=\frac{\zeta_p^2(\zeta_p-1)^2}{(\zeta_p^2-\zeta_p+1)^3}=\frac{\zeta_p^3}{(\zeta_p^2-\zeta_p+1)^3}\cdot \frac{(\zeta_p-1)^2}{\zeta_p}=\frac{\zeta_p+\zeta_p^{-1}-2}{(\zeta_p+\zeta_p^{-1}-1)^3}=\boxed{\frac{\zeta_p^+-2}{(\zeta_p^+-1)^3}}
$$
Now writing $\zeta_p^+=2\cos(2\pi/p)$ and reciprocating gives the formula at the top of this answer.
Finally, let me mention that from an algebraic number theory point of view, it might be most useful to write $\xi_p$ not in terms of $\zeta_p^+$, but in terms of a prime of $\mathbb{Z}[\zeta_p]$ above $p$, i.e., in terms of
$$
\mathfrak{p}:=(1-\zeta_p)(1-\zeta_p^{-1})=2-\zeta_p^+.
$$
Re-writing the previous boxed expression, we finally conclude with the reasonably concise (and algebraically transparent) formulation
$$
\xi_p=\boxed{\frac{\mathfrak{p}}{\left(1-\mathfrak{p}\right)^3}.}
$$ |
Find the volume generated by revolving the given region around $y$ axis. | If you draw the figure in the $xy$ plane, you have a triangle with a vertex at the origin. Then, using cylindrical shells with radius $x$ and height $3x/2$, your integral is $$V=2\pi\int_0^2x\frac32 x dx=8\pi$$ |
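A quick symbolic check of the shell integral (a sketch, not part of the original answer):

```python
from sympy import symbols, integrate, pi

x = symbols('x')
print(2 * pi * integrate(x * (3 * x / 2), (x, 0, 2)))  # 8*pi
```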
Is $(M_n^5)_{n \in \mathbb{N}_0}$ a martingale, given that both $(M_n)_{n \in \mathbb{N}_0}$ and $(M_n^2 )_{n \in \mathbb{N}_0}$are martingales? | Suppose that $(M_n)_{n \geq 0}$ is a martingale such that $M_0=10$ and $(M_n^2)_{n \in \mathbb{N}}$ is also a martingale. Then $$\mathbb{E}(M_n) = \mathbb{E}(M_0) = 10$$ and $$\mathbb{E}(M_n^2) = \mathbb{E}(M_0^2) = 100$$ for all $n \in \mathbb{N}$. Consequently, $$\text{var}(M_n) =\mathbb{E} \big[ (M_n-\mathbb{E}(M_n))^2 \big] = \mathbb{E}(M_n^2)-(\mathbb{E}(M_n))^2=0$$ for all $n \in \mathbb{N}$. Thus $M_n=\mathbb{E}(M_n)=10$ almost surely for all $n \in \mathbb{N}$. In particular, $(M_n^k)_{n \in \mathbb{N}}$ is a martingale for any $k \geq 1$.
In conclusion: yes, the statement is true. |
Show that the $n$th real polynomial has $n$ simple real roots | Your proof is fine. A slight variant is to divide out $x-\alpha$ for each of the given roots $\alpha$, to obtain a degree $1$ quotient that can only have real coefficients. But your approach is much simpler. For starters, it doesn't require us to verify $x-\alpha_i|p_n\implies\prod_i(x-\alpha_i)|p_n$. |
Would a class in Linear Optimization/Programming be useful for a CS degree? | It is correct that in CS or when you work as a software engineer you rarely need to implement an optimisation algorithm yourself.
However it is very important to recognise an optimisation problem when you see one in order to be able to apply the correct software packages. This is often not trivial and should be part of every course on optimisation. |
How to approximate/connect two continuous cubic Bézier curves with/to a single one? | You probably already know this, but let me explicitly point out:
== exact match ==
It is not possible to construct a single cubic Bezier curve that exactly matches your new curve.
A single cubic Bezier curve has a constant "curvature" A along its length.
When you shrink it, the new curve has "sharper" curvature B (but still constant along the length of the new curve).
It is not possible to construct a single cubic Bezier curve -- since, like all single cubic curves, it has constant curvature C along its length -- such that C=A along part of the length, and C=B along the rest of the length, where A ≠ B.
(I'm mis-using the term "curvature" here for something that is more precisely the 3rd derivative of a curve, sometimes called the "jerk").
== approximation ==
Perhaps the simplest approximation is:
Use the initial starting point and first control knot -- p0 and p1,
and the final control knot and final ending point -- r'2 and r'3.
Use those 4 points as the start, control knots, and endpoint of a Bezier curve that approximates the one you want: a0, a1, a2, a3.
This approximation exactly hits the endpoints of your desired curve, and has the same initial and final slope at those endpoints, but it slightly diverges from your desired curve in the middle.
In particular, it probably doesn't go exactly through the cutpoint K.
There are many other possible approximations you could make.
The "Don Lancaster's Guru's Lair Cubic Spline Library"
may have the details I'm leaving out:
It is possible to nudge a1 along the line p0-p1, or to nudge a2 along the line r'2-r'3 -- or both -- such that the approximation curve not only starts and ends at the same points and slopes, but also passes through the cutpoint K.
pick any 4 points along your new curve (perhaps the endpoints, the cutpoint K, and some other point) and generate a cubic Bezier curve that goes exactly through all 4 points.
pick many points along your new curve -- bunched together in places where curve matching is important, perhaps near the endpoints and point K, and spaced further apart where curve matching is not so important -- and generate a cubic Bezier curve that is a least-squares best fit to those points. |
Probability of winning Concentration game | Make a tree (by continuing this one):
Then count end conditions to compute probabilities.
Assume perfect play: i.e., that if a player knows the locations of some cards, he'll only turn them over if they complete a pair. |
Prove sum of two Riemann integrable functions is Riemann integrable, using approach in Analysis I of Tao? | Denote the upper and lower Riemann integrals of $f$ by $\overline \int_I f$ and $\underline \int_I f$ respectively.
We have that $U(f+g,P) \le U(f,P) + U(g,P)$ and $L(f,P)+L(g,P)\le L(f+g,P)$. Thus, $$ L(f,P)+L(g,P)\le L(f+g,P)\le U(f+g,P) \le U(f,P)+U(g,P). $$
Note that the right-hand inequality gives us that, for each partition $P$, $$ \overline \int_I (f+g) \le U(f,P) + U(g,P), $$ so that $$ \overline \int_I (f+g) \le \overline \int_I f + \overline \int_I g .$$ Similarly, $$ \underline\int_I f + \underline\int_I g \le \underline\int_I (f+g). $$
We know that $f$ and $g$ are Riemann-integrable, which means that their upper and lower Riemann integrals are the same. This allows us to conclude that $$ \int_I f + \int_I g = \overline\int_I (f+g) = \underline\int_I (f+g),$$ which is what we needed to show. |
Prove that the sum of 2 bivectors in $R^{3}$ is a bivector? Hint:Think geometrically | There is some redundancy in expressing bivectors with the symbols $a\wedge b$. It is possible that $a\wedge b=c\wedge d$ even if $a,b$ are different from $c,d$ (including up to sign). We can think of any bivector $a\wedge b$ as representing a 2D subspace with orientation determined by the basis $\{a,b\}$, but equipped with a scalar equal to the area of the parallelogram spanned by $a$ and $b$. Therefore, if $R$ is a rotation that stabilizes the plane spanned by $a$ and $b$, the rotation preserves the area of the parallelogram spanned by $a$ and $b$, so $a\wedge b=(Ra)\wedge (Rb)$.
(If we assume $a,b$ are linearly independent and use a basis $\{a,b,\bullet\}$ in order to write an explicit formula for $R$, this fact can be checked explicitly with algebra.)
Therefore, to simplify a sum $a\wedge b+c\wedge d$ to a simple bivector, we note that the 2D subspaces spanned by $\{a,b\}$ and $\{c,d\}$ intersect in some axis $\ell$, then apply rotations $R$ and $S$ to $a,b$ and $c,d$ respectively so that the sum looks like $p\wedge q+r\wedge s$ with $p$ and $r$ parallel. When they're parallel, they can be expressed as scalar multiples of a common vector, and then you can apply the distributive property for $\wedge$ in reverse. |
Find $\angle ACB$ using the knowledge of cyclic quadrilaterals | Note that $\angle AOD = 100^\circ$ ($= 2\angle ABD$, since the angle subtended at the centre is twice the angle subtended on the arc).
Hence $\angle AOB = 80^\circ$ (since the points $D$, $O$ and $B$ are collinear).
Hence $\angle ACB = 40^\circ$ (again because the angle subtended at the centre is twice the angle subtended on the arc). |
How do you solve this $\int\limits ^{\infty }_{0}\frac{\cos( x)}{x^{n} +1} dx,\ n >0$ | First, note that if $n$ is an odd number then there is no simple expression for this integral, so let's suppose $n$ is even,
that is, $n=2m$, $m=1,2,3,\dots$
Let's look at a more general integral
$$I(2m)=\int_0^{\infty}\frac{\cos(ax)}{x^{2m}+b^{2m}}dx$$
where $a,b$ are positive, real numbers.
This integral is equivalent to the following integral:
$$I(2m)=\frac{1}{2}\int_{-\infty}^{\infty}\frac{e^{iax}}{x^{2m}+b^{2m}}dx$$
(taking into account $e^{ix}=\cos x+i\sin x$)
The simplest way to compute this integral is to use the contour integration method.
Let's take the simplest contour $C$: a semicircle in the upper half plane with radius $R\rightarrow \infty$. Consider along this contour the following complex integral
$$I(2m)=\frac{1}{2}\int_C\frac{e^{iaz}}{z^{2m}+b^{2m}}dz$$
Along the real axis we get the desired integral but along the circular half-arc the integral vanishes.
So we can apply Cauchy's residue theorem.
The isolated singularities of the integrand $$f(z)=\frac{e^{iaz}}{z^{2m}+b^{2m}}$$ are found by solving the equation
$$z^{2m}+b^{2m}=0$$
Solutions:
$$z_k=be^{i\frac{2k+1}{2m}\pi};k=0,1,2,...,2m-1$$
Only the first $m$ of them ($z_0,z_1,\dots,z_{m-1}$) are located inside the contour.
The end result:
$$\int_0^{\infty}\frac{\cos(ax)}{x^{2m}+b^{2m}}dx=\frac{i\pi}{2m}\sum_{k=0}^{m-1}\frac{e^{iaz_k}}{z_k^{2m-1}}$$
So computing this integral boils down to the complex algebra.
An example:
$$\int_0^{\infty}\frac{\cos(x)}{x^6+1}dx=\frac{\pi}{6e}\left [1+\sqrt e (\cos\frac{\sqrt 3}{2} +\sqrt 3\sin\frac{\sqrt 3}{2}) \right ]$$ |
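Not part of the original answer: a numeric sanity check of the $n=6$ example (a sketch using scipy; the truncation of the integration range at $200$ is my choice, since the integrand decays like $x^{-6}$):

```python
from math import cos, sin, sqrt, pi, e
from scipy.integrate import quad

numeric, _ = quad(lambda x: cos(x) / (x**6 + 1), 0, 200, limit=200)
closed = pi / (6 * e) * (1 + sqrt(e) * (cos(sqrt(3) / 2)
                                        + sqrt(3) * sin(sqrt(3) / 2)))
print(numeric, closed)  # both ≈ 0.8174
```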
Quick probability question (multiplication of expectations) | This holds for all $X, U$:
$$E[(X-E[X])U] = E[XU] - E[X]E[U] = Cov(X,U)$$
You are telling us that the first expression equals:
$$E[U]E[X-E[X]] = E[U](E[X] - E[E[X]]) = E[U] \cdot 0 = 0$$
So this means that the $Cov(X,U)$ is $0$. However that doesn't imply $X$ independent of $U$, because uncorrelatedness does not imply independence. |
Short question about key of Hill Cipher | The determinant must be odd and not 13. This is because of the fact that a matrix with entries mod $n$ is invertible if and only if its determinant is invertible mod $n$. Since the only invertible elements mod 26 are the odds except 13, the same must be required of your determinant. |
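Not in the original answer: a quick sketch checking a candidate $2\times 2$ key matrix (the example matrices are my own):

```python
from math import gcd

def is_valid_key(matrix, n=26):
    """A 2x2 key [[a, b], [c, d]] is usable iff det is invertible mod n."""
    (a, b), (c, d) = matrix
    det = (a * d - b * c) % n
    return gcd(det, n) == 1  # mod 26: det must be odd and not 13

print(is_valid_key([[3, 3], [2, 5]]))  # True:  det = 9
print(is_valid_key([[4, 3], [1, 4]]))  # False: det = 13
```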
Combinatorial geometry problem on polyhedrons | Umm, I think this is a pretty famous problem with a google-able solution. Anyway, we would need to use Euler's theorem for connected planar graphs (as this is one). Using standard notation...
We have $F \geqslant 5$ and $E = 3V/2$. We will show that not all faces of the polyhedron are triangles. Otherwise $E = 3F/2$; together with $E = 3V/2$ this gives $V = F$, and Euler's formula yields $F - 3F/2 + F = 2$, that is, $F = 4$, which is a contradiction.
The game strategy for the two players:
The first player writes his/her name on a face that is not a triangle; call this face $A_1A_2\cdots A_n$, $n \geq 4$. The second player, in an attempt to obstruct the first, will sign a face that has as many common vertices with the face signed by the first as possible, thus claiming a face that shares an edge with the one chosen by the first player. Assume that the second player signed a face containing the edge $A_1A_2$. The first player will now sign a face containing the edge $A_3A_4$. Regardless of the play of the second player, the first can sign a face containing either $A_3$ or $A_4$, and wins. |
uniform convergence of $\sum\limits_{n=0}^{\infty} \frac {(-1)^nx^{2n+1}}{(2n+1)!}$ | No. If $f_k \to f$ uniformly, then $(f_k)$ must be uniformly Cauchy, i.e for every $\varepsilon > 0$ there is an $K$ such that $\sup|f_j-f_k| < \varepsilon$ for every $j,k \ge K$.
In your case, $$f_k(x) = \sum_{n=0}^k \dfrac{(-1)^n x^{2n+1}}{(2n+1)!}$$ and
$$
\sup_{x\in\mathbb{R}} |f_{k+1}-f_k| = \sup_{x\in \mathbb{R}} \frac{|x|^{2k+3}}{(2k+3)!} = \infty
$$
for every $k$.
(In fact, a similar reasoning shows that a sequence of polynomial can never converge uniformly on $\mathbb{R}$ to something other than a polynomial.) |
Examples of a problem solved by a well-chosen derivative equaling zero | In elementary calculus, one shows that the solutions of $\frac{dy}{dt}=ky$ are $y=Ce^{kt}$ by letting $w=ye^{-kt}$. Then $\frac{dw}{dt}=0$. |
$\sum_{k=1}^{\infty} |a_k-a_{k+1}|$ converges implies $\lim_{k\to \infty}|a_1 - a_k|$ converges | The sequence $(a_n)_n$ is Cauchy:
If $m>n>N$ then $$|a_m-a_n|\le\sum_{k=n}^{m-1}|a_{k+1}-a_k|\le \sum_{k=N}^\infty|a_k-a_{k+1}|$$
and the latter is $<\epsilon$ for $N$ large enough, since the convergence of $\sum_{k=1}^\infty|a_k-a_{k+1}|$ means exactly that its tails tend to $0$. Hence with $a:=\lim_{k\to\infty} a_k$, we have $\lim_{k\to\infty}|a_1-a_k|=|a_1-a|$. |
Prove equality between binomial coefficients. | HINT :
$$(1+x)^n=\binom n0+\binom n1x+\cdots +\binom{n}{n-1}x^{n-1}+\binom nnx^n$$
$$(1+x)^m=\binom m0+\binom m1x+\cdots +\binom{m}{m-1}x^{m-1}+\binom mmx^m$$
Now consider the coefficient of $x^k$ in the expansion of $(1+x)^n(1+x)^m=(1+x)^{n+m}$. |
Complex number, series representation | The radius of convergence of the following power series
$$
e^{z-1}=\sum_{n=0}^\infty \frac{(z-1)^n}{n!} \tag1
$$ is infinite, as may be seen by the ratio test for example.
Thus $(1)$ is valid for any finite complex value of $z$, and you get your identity by writing
$$
e^{z}=e\:e^{z-1} \tag2
$$ and using $(1)$. |
Homology of manifolds with boundary | Yes, it is still true: any compact topological manifold (with or without boundary) is homotopy-equivalent to a finite CW-complex, which has finitely-generated homology groups, only finitely many of which are nonzero. |
On the equation $(a^2+1)(b^2+1)=c^2+1$ | See Kenji Kashihara, Explicit complete solution in integers of a class of equations $(ax^2−b)(ay^2−b)=z^2−c$, Manuscripta Math. 80 (1993), no. 4, 373–392, MR1243153 (94j:11031).
The review in Math Reviews says,
"The author studies the Diophantine equation $(ax^2−b)(ay^2−b)=z^2−c$, where $a,b,c\in{\bf Z}$, $a\ne0$, $b$ divides 4, and in the case $b=\pm4$, then $c\equiv0\pmod 4$. This equation for $a=1, b=1$ has been treated by S. Katayama and the author [J. Math. Tokushima Univ. 24 (1990), 1--11; MR1165013 (93c:11013)], and the present paper extends the techniques to show that there exists a permutation group $G$ on all integral solutions of the equation, and also an algorithmic method for computing a minimal finite set of integral solutions, in the sense that all integral solutions are contained in the $G$-orbits of the set. Such minimal sets are listed for the equations with $a=2, b=\pm1, 0\lt|c|\le85$."
Looks like this includes the case $a=1$, $b=-1$, $c=-1$ which is what we want. |
Function vanishing at a point but not in any neighborhood | There need not be such a function. Let $X=\omega_1+1$ with the order topology; $X$ is compact and Hausdorff, but every continuous real-valued function on $X$ is constant on a nbhd of the point $\omega_1$.
It is always possible if $X$ is Tikhonov and first countable, as in that case each singleton is a zero-set. |
How to find which treatment is most effective in gene data given one standard method and 3 variations | If you're concerned with whether the chance of increased expression differs at all between the four treatment groups, you might start off with a likelihood ratio test for checking whether $p_1 = p_2 = p_3 = p_4$, where $p_i$ is the probability of increased expression within treatment group $i$.
The likelihood ratio will compare the maximum likelihood of the data when all probabilities are constrained to be equal (the null hypothesis) to the maximum likelihood when all four are allowed to vary (the alternative). Notice that within group $i$ the number with increased expression can be assumed binomially distributed with parameters $n_i$ and $p_i$, and so our likelihood function will be a product of binomial probabilities. Specifically, if $x_i$ is the number with increased expression within group $i$, $\hat{p}_i \equiv x_i / n_i$, and $\hat{p} \equiv (x_1 + x_2 + x_3 + x_4) / (n_1 + n_2 + n_3 + n_4)$ (these are the estimates which maximize the likelihood under the two hypotheses) the likelihood ratio becomes,
$$
\begin{align}
\Lambda(x) = \frac{\hat{p}^{x_1 + x_2 + x_3 + x_4} (1 - \hat{p})^{n_1 + n_2 + n_3 + n_4 - x_1 - x_2 - x_3 - x_4}}{\hat{p}_1^{x_1} (1 - \hat{p}_1)^{n_1 - x_1} \hat{p}_2^{x_2} (1 - \hat{p}_2)^{n_2 - x_2} \hat{p}_3^{x_3} (1 - \hat{p}_3)^{n_3 - x_3} \hat{p}_4^{x_4} (1 - \hat{p}_4)^{n_4 - x_4}} .
\end{align}
$$
Once you've computed this value, there's theory that says $- 2 \log [\Lambda(x)] \sim \chi^2_3$ under the null hypothesis, where $\chi^2_3$ refers to a chi-squared distribution with three degrees of freedom. This will allow you to compute an approximate $p$-value and determine if there's evidence that the probabilities differ.
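Not part of the original answer: a sketch of this computation in Python (the counts `x` and `n` are made-up illustration data; I work with log-likelihoods rather than the raw ratio to avoid underflow):

```python
import numpy as np
from scipy.stats import chi2

x = np.array([12, 18, 9, 22])   # increased expression per group
n = np.array([40, 40, 40, 40])  # group sizes

p_hat = x.sum() / n.sum()       # pooled estimate under H0
p_i = x / n                     # group-wise estimates under H1

def loglik(p, x, n):
    """Binomial log-likelihood (up to the constant combinatorial term)."""
    return np.sum(x * np.log(p) + (n - x) * np.log(1 - p))

stat = -2 * (loglik(p_hat, x, n) - loglik(p_i, x, n))  # = -2 log Lambda
p_value = chi2.sf(stat, df=3)
print(stat, p_value)
```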
For trying to determine pairwise differences, you could probably just use standard $z$ tests where your test statistic will take the form,
$$
z = \frac{\hat{p}_i - \hat{p}_j}{\sqrt{\frac{\hat{p}_i(1 - \hat{p}_i)}{n_i} + \frac{\hat{p}_j(1 - \hat{p}_j)}{n_j}}} ,
$$
which approximately follows a standard normal distribution when $p_i = p_j$. |
Prime number characterisation using congruences | The proposed congruence $24((n-1)! + 1) \equiv 0 \pmod{n}$ does not hold for all positive integers $n$: the first two counterexamples are $n = 9$ and $n = 10$.
In fact:
$\bullet$ If $2^4 \mid n$, then $24((n-1)!+1)$ is not divisible by $16$, so the congruence fails.
$\bullet$ For any odd prime $p$, if $p^2 \mid n$ then $(n-1)! \equiv 0 \pmod{p^2}$, so $24((n-1)!+1) \equiv 24 \pmod{p^2}$; since $p^2 \nmid 24$, the congruence fails.
$\bullet$ If $n$ is of the form $ap$ for a prime $p \geq 5$ and $a \geq 2$, then
$24((n-1)!+1)$ is not divisible by $p$, so the congruence fails.
Can you finish it off from here? |
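Not in the original answer: a brute-force sketch confirming the counterexamples and the characterization the bullets lead to (it suggests the congruence holds exactly when $n$ is prime or $n$ divides $24$):

```python
from math import factorial
from sympy import isprime

for n in range(1, 200):
    holds = 24 * (factorial(n - 1) + 1) % n == 0
    expected = isprime(n) or 24 % n == 0
    assert holds == expected, n   # first failures would be n = 9, 10 otherwise
```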
Prove that:$(P,\Vert\cdot\Vert)$ is not a Banach space. | The exponential function is a limit of polynomials (the partial sums of its Taylor series converge to it in the norm) but is not itself a polynomial, so those partial sums form a Cauchy sequence in $P$ with no limit in $P$. |
Where did I go wrong in this combinatorics question? In how many ways can the innkeeper assign the guests to the rooms? | Suppose you first choose persons $A$ and $B$ and then you choose room $1$ for them.
Secondly you choose persons $C$ and $D$ and then you choose room $2$ for them.
Now another choice: first the persons $C$ and $D$ and for them room $2$ is chosen, then the persons $A$ and $B$ and for them room $1$ is chosen.
Another choice but the same outcome. This explains your double-counting. |
Uniform continuity on bounded subset of $\mathbb{R}$ implies boundedness of the function | If $B\subset\mathbb R$ is bounded, then $\overline B$ is compact. Since $f$ is uniformly continuous, there is a unique continuous extension $\tilde f$ of $f$ defined on $\overline B$. Since $\overline B$ is compact, $\tilde f$ is uniformly continuous. Let $\delta$ be such that $|\tilde f(x)-\tilde f(y)| < 1$ if $|x-y|<\delta$. By compactness, there are finitely many $\delta$-neighborhoods that cover $\overline B$. Hence $\tilde f$ is bounded on $\overline B$. The restriction $\tilde f|_B = f$ is therefore also bounded. |
Can a sequence which decays more slowly still yield a converging series? | The question is the following:
Show that for every converging series $\sum\limits_na_n$ with positive terms there exists a converging series $\sum\limits_nb_n$ with positive terms such that $\lim\limits_{n\to\infty}\frac{a_n}{b_n}=0$.
A hands-on approach is as follows: for every $n$, consider $$A_n=\sum_{k\geqslant n}a_k,$$ then, by hypothesis, $A_n\to0$ hence, for each $k\geqslant0$, there exists some finite $\nu(k)$ such that $$A_{\nu(k)}\leqslant2^{-k}.$$ Assume without loss of generality that the sequence $(\nu(k))$ is nondecreasing and define $(b_n)$ by
$$\color{red}{\forall k\geqslant0,\quad\forall n\in[\nu(k),\nu(k+1))},\quad \color{red}{b_n=k\,a_n}.$$
Then $\frac{a_n}{b_n}\to0$ when $n\to\infty$ and $$\sum_{n\geqslant\nu(0)}b_n=\sum_{k\geqslant0}k\sum_{n=\nu(k)}^{\nu(k+1)-1}a_n\leqslant\sum_{k\geqslant0}kA_{\nu(k)}\leqslant\sum_{k\geqslant0}k2^{-k},$$ which is finite, hence the series $\sum\limits_nb_n$ converges, as desired. |
If $f^2$ monotonically increasing in $R$ then f monotonic | If there were two points $x_1$, $x_2\in{\mathbb R}$ with $f(x_1)f(x_2)<0$ there would be a point $\xi$ between $x_1$ and $x_2$ with $f(\xi)=0$. This would prevent $f^2$ from being monotone between $x_1$ and $x_2$. Therefore one has $f(x)\geq0$ or $f(x)\leq0$ throughout, and this implies that one of the following is true:
$$f(x)=\sqrt{f^2(x)}\quad \forall x\in{\mathbb R}\qquad{\rm resp.}\qquad f(x)=-\sqrt{f^2(x)}\quad \forall x\in{\mathbb R}\ .$$
In both cases $f$ is monotone. |
Duality between Tor and Ext? | If $k$ is a field as you say it is, it is not difficult to see using the standard tensor-hom adjunction that $Ext^*_R(k,k)\cong (Tor_*^R(k,k))^*$ as $k$-graded modules, where $R$ is a $k$-algebra. I've learned this from these Tor-Ext notes by May.
Here's the relevant passage, which contains other comments which might be useful to you. It's at the very end of the note: |
Prove that sum is an integer. | Since
$$\frac{1}{k^2+\sqrt{k^4+\frac 14}}=\frac{k^2-\sqrt{k^4+\frac 14}}{(k^2)^2-(k^4+\frac 14)}=-4k^2+2\sqrt{4k^4+1}$$
we have
$$\sum_{k=1}^{n}\sqrt{2-\frac{1}{k^2+\sqrt{k^4+\frac 14}}}=\sum_{k=1}^{n}\sqrt{2+4k^2-2\sqrt{4k^4+1}}\tag1$$
Here, since we have that
$$4k^4+1=4k^4+4k^2+1-4k^2=(2k^2+1)^2-(2k)^2=(2k^2+2k+1)(2k^2-2k+1)$$
and that
$$(2k^2+2k+1)+(2k^2-2k+1)=2+4k^2$$
we have
$$\begin{align}(1)&=\sum_{k=1}^{n}\sqrt{\left(\sqrt{2k^2+2k+1}-\sqrt{2k^2-2k+1}\right)^2}\\\\&=\sum_{k=1}^{n}\left(\sqrt{2k^2+2k+1}-\sqrt{2k^2-2k+1}\right)\\\\&=\sum_{k=1}^{n}\left(\sqrt{2k^2+2k+1}-\sqrt{2(k-1)^2+2(k-1)+1}\right)\\\\&=\sqrt{2n^2+2n+1}-\sqrt{1}\end{align}$$
This is an integer for $n=119$ since $2\cdot 119^2+2\cdot 119+1=169^2$.
Added :
$(1)$ is an integer if and only if there is an integer $m$ such that
$$2n^2+2n+1=m^2\iff (2n+1)^2-2m^2=-1$$
This is a Pell's equation, so we have
$$n=-\frac 12+\left(\frac{1-\sqrt 2}{4}\right)\cdot (3-2\sqrt 2)^k+\left(\frac{1+\sqrt 2}{4}\right)\cdot (3+2\sqrt 2)^k$$
where $k=1,2,3,\cdots$.
These satisfy $a_n=6a_{n-1}-a_{n-2}+2$ with $a_1=3,a_2=20$ :
$$a_3=119,\quad a_4=696,\quad a_5=4059,\quad a_6=23660,\quad \cdots$$ |
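Not part of the original answer: a sketch generating the valid $n$ via the recurrence and checking that $2n^2+2n+1$ is a perfect square for each:

```python
from math import isqrt

a, b = 3, 20
ns = [a, b]
for _ in range(4):
    a, b = b, 6 * b - a + 2   # a_n = 6 a_{n-1} - a_{n-2} + 2
    ns.append(b)

for n in ns:
    m2 = 2 * n * n + 2 * n + 1
    assert isqrt(m2) ** 2 == m2   # 2n^2 + 2n + 1 is a perfect square
print(ns)  # [3, 20, 119, 696, 4059, 23660]
```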
conditions for continuous function | It does indeed. First observe that $f(0)=0$. Now define $g(x):=f(x)/x$ for $x\in(0,1]$ and $g(0):=0$. Then $g:[0,1]\to[0,\infty)$ is continuous and
$$
xg(x)\le\int_0^x g(t)\,dt
$$
for all $x\in[0,1]$. Let $x_0$ be the smallest point in $[0,1]$ at which $g$ attains its maximum value. Suppose that $g(x_0)>0$. Then $g(t)<g(x_0)$ for all $t\in[0,x_0)$, and so the average value $x_0^{-1}\int_0^{x_0}g(t)\,dt$ is strictly less than $g(x_0)$. This leads to absurdity:
$$
x_0g(x_0)\le\int_0^{x_0}g(t)\,dt<x_0g(x_0).
$$
Consequently $g(x_0)=0$, so $g\equiv 0$, and finally $f\equiv 0$. |
Measure theory: motivation behind monotone convergence theorem | Here's a simple example where the monotone convergence theorem fails for the Riemann integral. Fix an enumeration $(q_k)_{k\in\mathbb N}$ of the rational numbers in the interval $[0,1]$. Define $f_n(x)$ to equal $1$ for $x=q_0,q_1,\dots, q_n$ and to equal $0$ for all other $x\in[0,1]$. Then the sequence of functions $(f_n)$ converges monotonically pointwise to the characteristic function of $\mathbb Q\cap[0,1]$, which is not Riemann integrable on $[0,1]$, even though each $f_n$ is Riemann integrable with integral $0$.
One of the main motivations (if not the motivation) for Lebesgue's theory of integration was to improve the behavior of integration vis à vis limits. The monotone convergence theorem, the dominated convergence theorem, and Fatou's lemma are among the instances of this improved behavior. |
Poincare inequality question | If $u\in C^1[0,1]$ satisfies $u(0)=0$ then
$$\int_0^1u(x)^2dx=\int_0^1\left(\int_0^xu'(t)dt\right)^2dx\leq \int_0^1x\int_0^x(u'(t))^2dtdx\leq \frac 12\int_0^1(u'(t))^2dt,$$
hence for $K>1/2$ the inequality you want doesn't hold.
If we don't require $u(0)=0$, then take functions like $u(x)=x+C$ to get what we want. |
Can we construct a non-measurable set using only the axiom of countable choice? | Countable choice is not enough: Solovay constructed a model of ZF + DC (dependent choice, which implies countable choice) in which every set of reals is Lebesgue measurable. See https://en.wikipedia.org/wiki/Non-measurable_set |
Proving that for every context-free language there exist a pushdown automata $M$ s.t. $L=L_{e}(M)$ | I’m assuming that $L_e$ means that you’re accepting by empty stack. In that case you should be able simply to modify $\delta(q_0,\epsilon,Z)$, where $Z$ is the initial stack symbol, by adding $(q_0,\epsilon)$. That is, if $\delta(q_0,\epsilon,Z)=A$ in the original PDA, let $\delta(q_0,\epsilon,Z)=A\cup\{(q_0,\epsilon)\}$ in the modified PDA. |
Very interesting problem with integral,number theory and irrationality | $$\begin{align}
\int_0^{\pi/2}\left(\frac{\sin x-1}{\sin x+1}\right)^n dx
&=\int_0^{\pi/2}\left(\frac{\cos x-1}{\cos x+1}\right)^n dx
=\int_0^{\pi/2}\left(-\tan^2\frac x2\right)^n dx\\
&=2\int_0^{1}\frac{(-t^2)^n}{1+t^2} dt
=2\int_0^{1}\left(\frac1{1+t^2}-\frac{1-(-t^2)^n}{1+t^2}\right) dt\\
&=\frac{\pi}2-2\int_0^1 \sum_{k=0}^{n-1}(-1)^kt^{2k} dt
=\frac{\pi}2-2\sum_{k=0}^{n-1}\frac{(-1)^k}{2k+1}.
\end{align}
$$
Thus, $$\frac ab=2\sum_{k=0}^{n-1}\frac{(-1)^k}{2k+1}.$$
The fact that the limit of the above sum for $n\to\infty$ is equal to $\frac\pi4$ is very well-known, and as far as I know it has never been used for a proof of the irrationality of $\pi$. |
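Not in the original answer: a numeric check of the identity for $n=3$ (a sketch using scipy):

```python
from math import sin, pi
from scipy.integrate import quad

n = 3
val, _ = quad(lambda x: ((sin(x) - 1) / (sin(x) + 1))**n, 0, pi / 2)
partial = 2 * sum((-1)**k / (2 * k + 1) for k in range(n))
print(val, pi / 2 - partial)  # both ≈ -0.16254
```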
Calculate the probability that the exponential random variable $X$ is greater than the exponential random variable $Y$? | Assuming independence between the two random variables, and assuming that the specified parameters are "rate" parameters, you get
$$\mathbb{P}[X>Y]=\int_0^{\infty}\lambda_x e^{-\lambda_x x}\left[\int_0^x \lambda_y e^{-\lambda_y y} dy \right]dx$$ |
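For completeness (this evaluation is not in the original answer), the inner integral equals $1-e^{-\lambda_y x}$, and the outer integral then gives a closed form:
$$\mathbb{P}[X>Y]=\int_0^{\infty}\lambda_x e^{-\lambda_x x}\left(1-e^{-\lambda_y x}\right)dx=1-\frac{\lambda_x}{\lambda_x+\lambda_y}=\frac{\lambda_y}{\lambda_x+\lambda_y}.$$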
Method of Exhaustion applied to Parabolic Segment in Apostol's Calculus | In the post, you write that Archimedes used the limiting process, albeit indirectly. That depends on the meaning of "indirectly." What Archimedes did was to show that the area of a parabolic segment is neither greater nor less than four-thirds of the area of a certain triangle.
The inequality that you suggest, using $\frac{n^3}{3}+\frac{n}{6}$, would work just as well as $\frac{n^3}{3}$, since we are dividing everything by $n^3$. The preference for $\frac{n^3}{3}$ is probably because $1^2+2^2+\cdots+n^2$ is a polynomial in $n$. Then $\frac{n^3}{3}$ is the "leading term."
Remark: Someone more combinatorially minded might observe that $1^2+2^2+\cdots +n^2=\frac{1}{4}\binom{2n+2}{3}$, and work with that. |
Can a Proper Partial Order have a Totally-Ordered 'Spine'? | The answer to $(2)$ is no even if we rule out the trivial partial order of equality noted in the comments by Wojowu, and your example $P$ for $(1)$ can be used to show this. Let $Q$ be the set of rational numbers in $\Bbb R\setminus\{5\}$; clearly $Q$ is dense in $P$ and linearly ordered. $P\setminus Q$ includes the irrationals, so it also is dense in $P$, but $P$ is not linearly ordered. |
Use Jensen's inequality to prove that $f$ is an increasing function. | Apply Jensen's inequality to (the convex function) $φ(x)=|x|^{q/p}$, with $1\le p <q <+\infty$, to obtain that $$\|X\|_p=\left(\Bbb E|X|^p\right)^{\frac1p}=\left(φ\left(\Bbb E|X|^p\right)\right)^{\frac1q}\le \left(\Bbb E\, φ\left(|X|^p\right)\right)^{\frac1q}=\left(\Bbb E|X|^q\right)^{\frac1q}=\|X\|_q$$
This shows in particular that $L^q\subseteq L^p$. A common application is that if $X_n \overset{L^q}\to X$ then $X_n \overset{L^p} \to X$ for all $1\le p < q$. |
Irreducible factor decomposition | Hope this answer can help people who are doing algebra.
(a) $f(X)=X^4+2 \in \mathbb{Z}_5[X]$
$f(0)=2,\ f(1)=3,\ f(2)=3,\ f(3)=3,\ f(4)=3 \Rightarrow f(X)$ has no linear factors in $\mathbb{Z}_5[X]$.
Hence if $f(X)$ is reducible, it must be a product of two monic quadratic factors.
$$X^4+2 = (X^2+bX+c)(X^2+b^\prime X+c^\prime)$$
Gives,
$$b = -b^\prime \space\space\space\space\space\space\space\space -(1)\\
c+c^\prime+bb^\prime=0 \space\space\space\space\space\space\space\space -(2)\\
b^\prime c+bc^\prime = 0 \space\space\space\space\space\space\space\space -(3)\\
cc^\prime=2 \space\space\space\space\space\space\space\space -(4)$$
Sub. $(1)$ into $(3)$: we get $-bc+bc^\prime=0 \Rightarrow b=0$ or $c=c^\prime$.
If $b=0$, $(2) \Rightarrow c=-c^\prime$, so $(4) \Rightarrow -c^2 =2$, i.e. $c^2=3$, which is not a square mod $5$ (contradiction).
If $c=c^\prime$, $(4) \Rightarrow c^2 =2$, again not a square mod $5$ (contradiction).
Therefore, $X^4+2$ is irreducible in $\mathbb{Z}_5[X]$
(b) $X=0$ and $X=1$ are roots of the polynomial, and after some long division, $X^5+X=X(X+1)^4$ in $\mathbb{Z}_2[X]$.
(c) $f(X)=X^5 + 4 X^4 - 3 X^3 + X^2 + 7 X + 11 \in \mathbb{Q}[X]$
$$\bar{f}(X) = X^5+X^3+X^2+X+1 \in \mathbb{Z}_2[X]$$
$\bar{f}(0)=\bar{f}(1)=1 \Rightarrow \bar{f}(X)$has no linear factors in $\mathbb{Z}_2[X]$.
So, if reducible, $\bar{f}(X)$ would have to decompose as $$\text{(polynomial of degree 2)} \cdot \text{(polynomial of degree 3)}$$
There is only one irreducible polynomial of degree 2 in $\mathbb{Z}_2[X]$, namely $X^2+X+1$. By long division, we can show that $X^2+X+1 \nmid \bar{f}(X)$. Hence $\bar{f}(X)$ is irreducible in $\mathbb{Z}_2[X]$. Since $\deg(f(X))=\deg(\bar{f}(X))$ (note: this is the same as saying the prime number 2 does not divide the leading coefficient of $f(X)$), $f(X)$ is irreducible in $\mathbb{Q}[X]$ |
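Not in the original answer: a sympy sketch double-checking all three factorizations:

```python
from sympy import GF, Poly, symbols

X = symbols('X')
print(Poly(X**4 + 2, X, domain=GF(5)).factor_list())    # irreducible
print(Poly(X**5 + X, X, domain=GF(2)).factor_list())    # X * (X + 1)**4
print(Poly(X**5 + 4*X**4 - 3*X**3 + X**2 + 7*X + 11, X,
           domain='QQ').factor_list())                  # irreducible
```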
Help with proof involving derivatives | You'll just need to tough it out and expand the first few terms of the binomial. The required cancellations will happen. That is, write, for some $n$,
$$ (x+h)^n = \sum_{j = 0} ^n \binom{n}{j} x^j h^{n-j} $$
Try some test cases then get the general form. |
Hom and Tensor Product of Linear Maps | I think, it wants to simply mean
$$\left[L(v \otimes \varphi)\right](u) = v(\varphi ( u))$$
well.. this notation $\langle\varphi|u\rangle$ now confused me too, as $\varphi$ is not a linear form. The $L$ should be understood, because without it the LHS gives syntax error, as mt_ wrote in the comment.. |
Do problem weights change as the overall grade of an assignment is curved? | Yes, it can. It depends a bit on how the exam is rescaled. Let's assume your teacher uses a very simple scaling by linearly taking the best score to $100\%$ and the worst score to $0\%$. Let us also assume that you have gotten $70$ and everybody else in the class has gotten $71$. This is clearly a terribly written test and an extreme example. For now, you have $0\%$ on the exam as you have the worst grade. If you get $2$ more points, you will have $100\%$ on this exam.
This is extreme, but the basic concept is that scaling that spreads the scores will make those $2$ points more valuable. Scaling that just shifts the scores will not change the value of the $2$ points. There may be scaling at the level of the total score, which again could make the points more valuable. |
Stuck in my induction proof | Write $$(k+1)^2(\frac{1}{4}k^2+k+1)=\frac{1}{4}(k+1)^2(k^2+4k+4)=…$$ |
Does the composition of a non-degenerate random variable-valued function with itself induce dependence? | Let $\Omega$ be a general sample space (possibly different from the set $[0,1]$). Let $f:\mathbb{R}\times \Omega \rightarrow \mathbb{R}$ be a function that satisfies your assumptions: For outcomes $y$ in the sample space $\Omega$, the random variables $f(x,y)$ (indexed by $x \in \mathbb{R}$) are non-degenerate and pairwise independent. In particular $f(x,y)$ is a measurable function of the outcome $y$ for each $x \in \mathbb{R}$. For simplicity of notation define $V_x(y) = f(x,y)$. Non-degenerate implies that for each $x \in \mathbb{R}$ there is a threshold $h_x$ such that
$$P[V_x(y)>h_x] \in (0,1)$$
Define the function $g:\mathbb{R}\times \Omega \rightarrow \mathbb{R}$ by
$$ g(x,y) = \left\{ \begin{array}{ll}
3 &\mbox{ if $x=0$ and $V_0(y)>h_0$} \\
2 & \mbox{ if $x=0$ and $V_0(y)\leq h_0$}\\
0 & \mbox{if $x\neq 0$ and $V_x(y)>h_x$}\\
1 & \mbox{if $x \neq 0$ and $V_x(y)\leq h_x$}
\end{array}\right.$$
Then for each distinct $x_1,x_2 \in \mathbb{R}$ the functions $g(x_1,y)$ and $g(x_2,y)$ are measurable functions of $y$, non-degenerate, and are independent random variables. However for $x_1=1$ and $x_2=2$:
$$G(y)=g(g(1,y),y) =\left\{\begin{array}{ll}
g(0,y) & \mbox{if $V_1(y)>h_1$} \\
g(1,y) & \mbox{ if $V_1(y)\leq h_1$}
\end{array}\right.$$
$$H(y)=g(g(2,y),y) =\left\{\begin{array}{ll}
g(0,y) & \mbox{if $V_2(y)>h_2$} \\
g(1,y) & \mbox{ if $V_2(y)\leq h_2$}
\end{array}\right.$$
Then
$$P[H =3]=P[V_2(y)>h_2]P[V_0(y)>h_0] > 0 $$
but
$$P[H=3|G=2]=0$$
So $G$ and $H$ are not independent. |
Is $\frac{d}{d^2x}$ a valid differential operator? | $$\frac{dy}{d^2x}=\frac{dy}{dx}\cdot\frac{1}{dx}$$
But $\frac{1}{dx}$ has no meaning, and thus neither does $\frac{dy}{d^2x}$.
On the other hand, we have $$\frac{dy}{d(x^2)}=\frac{dy}{dx}\cdot\frac {1}{2x}$$
(can you show why?) |
In the Quaternion group, why does $-1*k = -k$, or any other element for that matter? | Yes, that is what $-k$ means, by definition. We have the element $-1$ and the element $k$, they commute and we call their product $-k$. It is called that partly because of what it represents in the ring of quaternions (where you can prove that $(-1)\cdot k$ fulfills the defining property of $-k$, the additive inverse of $k$), and partly for brevity. |
Over what fields are finite order endomorphisms of vector spaces diagonalizable? | If you want conditions on $F$ that are necessary and sufficient so that any endomorphism $T$ of any finite dimensional vector space with $T^k=I$ is diagonalizable, then...
For characteristic zero, a necessary and sufficient condition is that $F$ contain all $k$th roots of unity for each $k\geq 1$. To see the sufficiency, note that the minimal polynomial of such a $T$ divides $x^k-1$, and over such a field this splits into distinct linear factors. Conversely, the companion matrix of $x^k-1$ has minimal polynomial $x^k-1$, which you need to factor into linear terms in order to be diagonalizable. Thus, the condition (which is weaker than being algebraically closed) is both necessary and sufficient.
For positive characteristic, this is impossible. Let $p$ be the characteristic. The companion matrix of $x^p-1 = (x-1)^p$ has minimal polynomial $(x-1)^p$, and hence is not diagonalizable. So if $\mathrm{char}(F)\gt 0$, there is always an endomorphism of finite multiplicative order that is not diagonalizable.
If $V$ is fixed, of dimension $n$, the situation is slightly different. The minimal polynomial of such a $T$ divides $x^k-1$ and has degree at most $n$. So in this case:
If $F$ has characteristic $0$, or characteristic $p$ with $p\gt n$, a necessary and sufficient condition is that the field contain all $k$th roots of unity, $1\leq k\leq n$. An argument as above works (use the companion matrix of $x^k-1$ and then complete it with $0$s to get an $n\times n$ matrix that has minimal polynomial $x(x^k-1)$).
If $F$ has positive characteristic $p\leq n$, then the same argument as above shows you cannot do it.
Per the comment, we actually have a third permutation: $k$ is fixed. What is required so that every endomorphism of order (dividing?) $k$ is diagonalizable?
If $F$ has characteristic $0$ or characteristic $p$ that does not divide $k$, then you need $F$ to contain (i) all $k$th roots of unity if you want order exactly $k$ only; and (ii) all $m$th roots of unity for all divisors $m$ of $k$ if you want it for any endomorphism such that $T^k=I$.
If the characteristic of $F$ is $p$ and $p$ divides $k$ and is no larger than $\dim(V)$, then you’re still out of luck. Writing $k=pm$, the companion matrix of $x^k -1 = (x^m)^p-1 = (x^m-1)^p$ has minimal polynomial $(x^m-1)^p$, and hence is not diagonalizable. |
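Here is a concrete sketch of the obstruction, checked with plain integer arithmetic mod $p=2$; $T$ is the companion matrix of $x^2-1=(x-1)^2$ over $\mathrm{GF}(2)$:

p = 2
T = [[0, 1], [1, 0]]                 # companion matrix of x^2 - 1 over GF(p)
I = [[1, 0], [0, 1]]

def matmul(A, B):                    # 2x2 matrix product mod p
    return [[sum(A[i][k] * B[k][j] for k in range(2)) % p
             for j in range(2)] for i in range(2)]

print(matmul(T, T) == I)             # True: T^2 = I, so T has finite order
N = [[(T[i][j] - I[i][j]) % p for j in range(2)] for i in range(2)]
print(N != [[0, 0], [0, 0]], matmul(N, N) == [[0, 0], [0, 0]])
# True True: T - I is nonzero but squares to 0, so the minimal
# polynomial is (x-1)^2 and T is not diagonalizable over GF(2).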
How to answer this kind of questions | Assume that $a_i = a_1+(i-1)\delta$ for all $i > 0$ and some $\delta$. Proceed by induction. Let
$$
S_k = \frac{1}{\sqrt{a_1}+\sqrt{a_2}}+
\frac{1}{\sqrt{a_2}+\sqrt{a_3}}+\cdots+
\frac{1}{\sqrt{a_{k-1}}+\sqrt{a_k}}
$$
Observe that for $k = 2$, we have
$$
S_2 = \frac{1}{\sqrt{a_1}+\sqrt{a_2}}
$$
trivially. Next, suppose that
$$
S_k = \frac{k-1}{\sqrt{a_1}+\sqrt{a_k}}
$$
Then, keeping in mind that $a_k = a_1+(k-1)\delta$,
\begin{align}
S_{k+1}
& = \frac{1}{\sqrt{a_1}+\sqrt{a_2}}+
\frac{1}{\sqrt{a_2}+\sqrt{a_3}}+\cdots+
\frac{1}{\sqrt{a_k}+\sqrt{a_{k+1}}} \\
& = \frac{k-1}{\sqrt{a_1}+\sqrt{a_k}} + \frac{1}{\sqrt{a_k}+\sqrt{a_{k+1}}} \\
& = \frac{k-1}{\sqrt{a_1}+\sqrt{a_1+(k-1)\delta}}
+ \frac{1}{\sqrt{a_k}+\sqrt{a_k+\delta}} \\
& = \frac{(k-1)(\sqrt{a_1+(k-1)\delta}-\sqrt{a_1})}{(k-1)\delta}
+ \frac{\sqrt{a_k+\delta}-\sqrt{a_k}}{\delta} \\
& = \frac{\sqrt{a_k}-\sqrt{a_1}}{\delta}
+ \frac{\sqrt{a_{k+1}}-\sqrt{a_k}}{\delta} \\
& = \frac{\sqrt{a_{k+1}}-\sqrt{a_1}}{\delta} \\
& = \frac{\sqrt{a_1+k\delta}-\sqrt{a_1}}{\delta} \\
& = \frac{k\delta}{\delta(\sqrt{a_1+k\delta}+\sqrt{a_1})} \\
& = \frac{k}{\sqrt{a_1}+\sqrt{a_{k+1}}}
\end{align}
For instance,
\begin{align}
\frac{1}{\sqrt{1}+\sqrt{25}}+\frac{1}{\sqrt{25}+\sqrt{49}}
& = \frac{1}{1+5}+\frac{1}{5+7} \\
& = \frac{1}{6}+\frac{1}{12} \\
& = \frac{3}{12} \\
& = \frac{2}{8} \\
& = \frac{3-1}{\sqrt{1}+\sqrt{49}}
\end{align} |
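A quick numerical check of the closed form (Python), using the progression $1, 25, 49$ from the example:

from math import sqrt, isclose

a1, d, k = 1, 24, 3                       # a = 1, 25, 49 as above
a = [a1 + i * d for i in range(k)]
lhs = sum(1 / (sqrt(a[i]) + sqrt(a[i + 1])) for i in range(k - 1))
rhs = (k - 1) / (sqrt(a[0]) + sqrt(a[-1]))
print(lhs, rhs, isclose(lhs, rhs))        # 0.25 0.25 True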
Calculus - prove that limit doesn't exist using epsilon and delta | Choose any $\varepsilon < 1$. For whatever $\delta > 0$ is proposed, you can choose $x = \min\lbrace 4, 3+\frac{\delta}{2}\rbrace$. So:
$$\vert \sqrt{x+1}-1\vert > \vert \sqrt{3+1}-1\vert = 1 > \varepsilon$$ |
How to minimize the chance of making an error in algebraic calculations? | This is arguably more a question for psychologists than for mathematicians, but here are a few techniques which could be helpful:
Check your work! If you have more than one approach which will generate a valid solution and have time to do so, test both. The odds that you'd make the same error in both cases are rather slim. Better yet, if you have a way to verify a solution which does not involve the same degree of effort as solving it in the first place, use it! (i.e. if you are integrating, take the derivative, or if you are finding eigenvalues and eigenvectors of a matrix, actually multiply each of them to be sure you have the correct result).
Verify calculations, signs, etc. at each step rather than waiting until the end to (maybe) spot an error. This may double the amount of time spent crunching numbers, but it reduces to near-zero the amount of time spent correcting past computational errors and doing work which you'll later have to throw away.
Verify whichever computations you can using an independent source. Obviously this is not suitable for an exam under most academic integrity policies, nor would it be ethical to use an external source on homework to do anything more than check for correctness (even that may be questionable depending on the context, but taking solution minus believed solution and seeing if it is or is not zero after checking the work yourself may be acceptable). However, if you are writing a paper then you could run your calculations past at least one collaborator, advisor, or peer if not a computer solver. |
Prove that every nonzero element has an induction given the following axioms. | Your proof is correct, though if they are looking for a formal proof, you have a bit more work to do formalizing those steps. And notice: you don't use any of the 10 axioms .... but you do need induction. |
Notation in Geometry | I just received this theorem:
Theorem 2.3
Given three straight lines $R_1$, $R_2$ and $R_3$ in which $R_1$ is parallel to $R_2$ and $R_3$ intersects $R_1$ and $R_2$, and given two opposite alternate-exterior angles $a$ and $b$: then $a=b$.
In that case we say that $a$ and $b$ are PARALLEL ANGLES.
I also have this:
Definition 2.3
Given two angles $a$ and $b$: if their sides are perpendicular "in pairs", we say that both angles are perpendicular angles; if their sides are parallel "in pairs", then we say that both angles are parallel angles.
Does anyone have any comments? |
Permutation problem involving item order | Pair every true with its following false and make a list of the $7$ false answers and the $14$ true/false answers |
Equation-driven smoothly shaded concentric shapes | I suspect this is not quite what you're looking for*, but my first idea was to draw line segments from some central point, such as the origin, to various points on the curve, shading the line segment from white at the central end to black at the curve end. Here is the Mathematica code and the result for your heart-shaped curve, using the origin as the central point, and drawing line segments for $t$ from $0$ to $2\pi$ in steps of $\frac{\pi}{1200}$:
Graphics[Table[
Line[{{0, 0}, {16 Sin[t]^3,
13 Cos[t] - 5 Cos[2 t] - 2 Cos[3 t] - Cos[4 t]}},
VertexColors -> {White, Black}], {t, 0, 2 \[Pi], \[Pi]/1200}]]
* I don't think this is what you're looking for particularly because I don't think it's all that expedient; beyond that, I wouldn't call this concentrically-shaded, exactly, either; also, I have no idea what such shapes are formally called.
Here's a second thought, which is to graph your curve in $k\%$ black, scaled by a factor of $k\%$. Again, here's Mathematica code and the result with 400 steps from black to white (I don't know what's causing the diagonal line artifacts):
Show[Table[
ParametricPlot[(1 - k) {16 Sin[t]^3,
13 Cos[t] - 5 Cos[2 t] - 2 Cos[3 t] - Cos[4 t]}, {t, 0, 2 \[Pi]},
PlotStyle -> GrayLevel[k], Axes -> None], {k, 0, 1, 0.0025}]] |
Solving a Partial Differential Equation missing term with Separation of Variables | Your eigenfunctions are incomplete. The sinusoidal solution is certainly valid when $\lambda \ne 0$. But what happens when $\lambda = 0$? Your equations reduce to
\begin{align}
X'' &= 0 \\
T' &= 0
\end{align}
With boundary conditions $X'(0) = X'(1) = 0$. You can determine that a non-trivial solution does exist for this case: $X(x) = 1$ (up to a multiplicative constant).
Therefore, the complete solution space is
$$ u(x,t) = b_0 + \sum_{n=1}^\infty b_n e^{-n^2\pi^2 t}\cos(n\pi x) $$ |
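A symbolic sanity check (Python/sympy), assuming, as the displayed solution suggests, that the underlying PDE is the heat equation $u_t=u_{xx}$ on $[0,1]$ with Neumann conditions $u_x(0,t)=u_x(1,t)=0$:

import sympy as sp

x, t = sp.symbols('x t')
n = sp.symbols('n', integer=True, positive=True)
u = sp.exp(-n**2 * sp.pi**2 * t) * sp.cos(n * sp.pi * x)
print(sp.simplify(sp.diff(u, t) - sp.diff(u, x, 2)))        # 0: solves u_t = u_xx
print(sp.diff(u, x).subs(x, 0), sp.diff(u, x).subs(x, 1))   # 0 0: Neumann BCs hold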
Proving $H$ is a normal subgroup of $G$ | Well, for example let us take the set $V = \{a,b,c\}$. Then there are six permutations in $(S_V,\circ)$. A non-trivial permutation is one sending $a \to b , b \to a$ and $c \to c$, for example. I am sure you can figure out the rest of the elements.
Now, take $W = \{a,b\}$. $G$, for this $W$ is defined as the permutations which keep $W$ in $W$.
For example, let $\sigma_1$ be defined as $a \to b, b\to a , c \to c$. We note that $\sigma_1(a) = b \in W$ and $\sigma_1(b) = a \in W$. Thus, $\sigma_1$ sends every element of $W$ to some other element in $W$. This need not be the same element: for example, $a,b$ are not going to themselves under $\sigma_1$, but we have $\sigma_1 \in G$.
An example of an element not in $G$ would be $\sigma_2$ sending $a \to b , b \to c, c \to a$, because $\sigma_2(b) = c$ which is not in $W$, but $b \in W$, so $\sigma_2(W) \neq W$.
However, if every element of $W$ is sent to itself, that is more special than every element going to another in $W$. For example, the identical permutation has this property, because it sends every element to itself,and therefore those in $W$ as well. Such permutations come into $H$.
For an example where a non-identical permutation can be in $H$: take the set $V' = \{a,b,c,d\}$ and $W' = \{a,b\}$. Then the permutation $a \to a , b\to b , c \to d , d \to c$ is not the identity, but it sends every element of $W'$ to itself. Such a permutation would be in $H'$ (where $H'$ is the subgroup for this $V',W'$).
However, $a \to b, b\to a , c\to d , d \to c$ is not an element of $H'$, but rather of $G'$, because not every element of $W'$ goes to itself, but each at least goes to another element of $W'$.
The permutation $a \to b,b\to c , c \to a, d\to d$ is seen to belong to neither $H'$ nor $G'$.
From the above , I leave you to see that $H$ is a subset of $G$(we don't know if they are groups, so we don't use subgroup yet).
We will show that $G$ is a group, I leave $H$ to you.
It is enough to check that $G$ contains the identity, and is closed under inverse and multiplication.
Let $\sigma,\tau \in G$. We claim $\sigma \circ \tau \in G$. To see this, take $w \in W$. Then $\sigma \circ \tau(w) = \sigma(\tau(w))$. Now, $\tau(w) \in W$ by definition of $\tau \in G$, and then $\sigma(\tau(w)) \in W$ by definition of $\sigma \in G$ (and because $\tau(w) \in W$). Thus, $\sigma \circ \tau \in G$, because every element of $W$ goes to another in $W$.
For the inverse, we need to use $\sigma(W) = W$. Let $w\in W$. Then, since $\sigma(W) = W$, we have $w \in \sigma(W)$. By definition of $\sigma(W) = \{\sigma(v) : v \in W\}$, there is some $v \in W$ so that $\sigma(v) = w$. But then, $\sigma^{-1}(w) = v \in W$ by definition of the inverse. Thus, $\sigma^{-1}$ keeps elements of $W$ in $W$, so closure under inverses is done.
The identity is clearly in $G$. Thus $G$ is a group.
Work similarly for $H$, it is even easier. |
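If it helps, here is a brute-force sketch (Python) for $V=\{0,1,2,3\}$ and $W=\{0,1\}$, encoding a permutation as its tuple of images; it also confirms that $H$ is normal in $G$, since conjugating an element of $H$ by an element of $G$ again fixes $W$ pointwise:

from itertools import permutations

V, W = range(4), {0, 1}
compose = lambda s, t: tuple(s[t[i]] for i in range(len(s)))  # (s o t)(i) = s(t(i))

def inverse(s):
    inv = [0] * len(s)
    for i, si in enumerate(s):
        inv[si] = i
    return tuple(inv)

G = {s for s in permutations(V) if {s[w] for w in W} == W}     # sigma(W) = W
H = {s for s in permutations(V) if all(s[w] == w for w in W)}  # fixes W pointwise
print(H <= G)   # True: H is a subset of G
print(all(compose(compose(g, h), inverse(g)) in H for g in G for h in H))
# True: g h g^{-1} is in H for all g in G, h in H, so H is normal in G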
Find the 100th derivative of $x \sinh(2x)$ | The binomial coefficient in the Leibniz formula should be ${100\choose k}$ and not ${2\choose k}$. |
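With that correction, only the $k=0$ and $k=1$ terms of the Leibniz sum survive, since every derivative of $x$ past the first vanishes; as a sketch of the corrected computation:
$$\frac{d^{100}}{dx^{100}}\bigl(x\sinh 2x\bigr)=x\cdot 2^{100}\sinh 2x+\binom{100}{1}2^{99}\cosh 2x=2^{100}x\sinh 2x+100\cdot 2^{99}\cosh 2x,$$
using that the $n$th derivative of $\sinh 2x$ is $2^n\sinh 2x$ for even $n$ and $2^n\cosh 2x$ for odd $n$.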
Prove, with the help of the definition of convergence, that $\lim_{(x,y) \to (-1,1)} (3x^2 + 4y^2) = 7$. | You've made a couple of mistakes. You say
"so for $\delta>0$ with $|x+1|<\delta$ and $|y−1|<\delta$ we have that $|x-1|\leq|x+1|+1<\delta+1$ and $|y+1|\leq|y-1|+1<\delta+1$," and this isn't so.
We have $$|x-1|=|x+1-2|\leq|x+1|+2<\delta+2$$
You've made the same error with $|y+1|$.
The definition of limit in the case of functions of several variables means the same thing intuitively as it means in the case of functions of a single variable. The function value is arbitrarily close to the limit (within $\varepsilon$) whenever it is evaluated at points sufficiently close to the given point (within $\delta$). We can either require that each coordinate is within $\delta$ as in the definition above, or we can require that the distance from $(x,y)$ to $(x_0, y_0)$ is $<\delta$. It comes to the same thing, because if each coordinate is within $\delta$ the distance is $<\sqrt2\delta$. Does this make it any clearer? |
When does tensor product have an (exact) left adjoint? | For question 2, $F\otimes_A -$ has a left adjoint iff $F$ is finitely generated, and the left adjoint is always exact. For if $(f_i)\in F^I$ is an element of an infinite product of copies of $F$, then it is easy to see $(f_i)$ is in the image of the canonical map $F\otimes_A A^I\to F^I$ iff $\{f_i\}$ is contained in a finitely generated submodule of $F$. If $F\otimes_A -$ has a left adjoint, then this canonical map must be an isomorphism, and it follows that $F$ must be finitely generated. Conversely, if $F$ is finitely generated (and hence projective), $F^{\vee}\otimes_A -$ is adjoint to $F\otimes_A -$ on both sides.
(By a similar argument using instead the injectivity of the map $F\otimes_A A^I\to F^I$, you can show that $F$ must be finitely presented, even if you don't assume $A$ is Noetherian. So for non-Noetherian $A$, you still get that $F\otimes_A -$ has a left adjoint iff $F$ is finitely presented and flat, or equivalently finitely generated and projective.)
Note that in this case you can also easily see directly that $F\otimes_A -$ preserves injectives, since $F$ is a direct summand of a finitely generated free module and tensoring with a finitely generated free module obviously preserves injectives. In general, however, $F\otimes_A -$ might preserve injectives without having a left adjoint. For instance, it is a well-known theorem that a ring is Noetherian iff any (possibly infinite) direct sum of injective modules is injective. So since you're assuming $A$ is Noetherian, tensoring with any free module preserves injectives, and hence so does tensoring with any projective module. |
Simple intuitive example as to why this proof technique is wrong | "If $x$ is a dog then $x$ is a mammal" is provable. "If $x$ is a mammal then $x$ is a dog" is not, because of horses. You cannot interchange $A$ and $B$ and expect the two statements to be equally provable. |
Find the Laurent series for $f(z)=\frac{1}{z(1-z)}$ | Since$$f(z)=\frac1z\cdot\frac1{1-z}$$and since, when $|z|>1$, you have$$\frac1{1-z}=-\sum_{n=-\infty}^{-1}z^n,$$you have, in the same region,\begin{align}f(z)&=\frac1z\left(-\sum_{n=-\infty}^{-1}z^n\right)\\&=-\sum_{n=-\infty}^{-1}z^{n-1}\\&=-\sum_{n=-\infty}^{-2}z^n.\end{align}In the other region, use the fact that$$f(z)=\frac1{z-1}\cdot\frac1{1+(z-1)}$$ |
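A quick symbolic check of the $|z|>1$ expansion (Python's sympy; a series at oo expands in powers of $1/z$):

import sympy as sp

z = sp.symbols('z')
print(sp.series(1/(z*(1 - z)), z, sp.oo, 6))
# leading terms -1/z**2 - 1/z**3 - 1/z**4 - ..., i.e. minus the sum of z^n over n <= -2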
what is wmaxima doing here? | I cannot explain the xMaxima behaviour (perhaps having produced an error with parallel substitution, it then tries serial substitution to try to get an error-free answer).
The original point about the partial derivative is that $g(x)=f(x,0)=0$ for all $x$ and so is continuous including at $x=0$, and has a partial derivative with respect to $x$ of $0$. So $f(x,y)$ has a partial derivative everywhere of $\dfrac{\partial f}{\partial x} = \dfrac{y(y^2-x^2)}{x^2+y^2}$ except when $y=0$ where it is $\dfrac{\partial f}{\partial x} = 0 $.
You can make a similar statement about $h(y)=f(0,y)=0$ and so $\dfrac{\partial f}{\partial y}$. |
How do I find residues at simple poles? | (From my above comment:)
$g$ has a removable singularity at $z=a$, i.e. it can be continued analytically to a function $\tilde g$ which is defined and holomorphic in $a$ as well. Frequently, the new function $\tilde g$ is called $g$ as well. In other words, $g(a)$ is defined as the limit of $g(z)$ for $z \to a$. |
Problem in doing conditional probability. | Let $A_j$ denote the event that the $j$-th shell hits the target and $A$ be the event that at least two shells out of $n$ will hit the target. Then, by independence (writing $P_j = P(A_j)$), you can calculate your probability with
\begin{align*}
P(A) = & 1 - [P(\text{none of the shells will hit the target}) + P(\text{only one shell out of $n$ will hit the target})] \\
= & 1 - P(A_1^c \cap \cdots \cap A_n^c) - (P(A_1 \cap A_2^c \cap \cdots \cap A_n^c) + \cdots + P(A_1^c \cap A_2^c \cap \cdots \cap A_n)) \\
= & 1- \prod_{j = 1}^n (1 - P_j) - \sum_{j = 1}^n P_j \prod_{k \neq j} (1 - P_k).
\end{align*} |
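A quick sketch checking the formula against direct enumeration (Python; the hit probabilities $P_j$ are made-up illustrative values):

from itertools import product
from math import prod

P = [0.7, 0.5, 0.3, 0.9]
n = len(P)
formula = (1 - prod(1 - p for p in P)
             - sum(P[j] * prod(1 - P[k] for k in range(n) if k != j)
                   for j in range(n)))
brute = sum(prod(P[j] if hit else 1 - P[j] for j, hit in enumerate(w))
            for w in product([0, 1], repeat=n) if sum(w) >= 2)
print(formula, brute)   # both approximately 0.8555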
How do you prove that $p^2 \mid m$ and $p^2 \mid n \Rightarrow p^2 \mid mn$? | Write $m=ap^2$ and $n=bp^2$, so that $mn=abp^4$.
Then $p^4\mid mn$, and since $p^2\mid p^4$, it follows that $p^2\mid mn$. |
What's the name of a submanifold-plus-any-missing-boundary? | In full generality, you probably can't say anything besides "the closure of a submanifold". Consider the following construction. Take the polar coordinate system $(r,\theta)$ in $\mathbb{R}^2$. Define the open sets
$$\Omega_n = \{ \theta \in (\frac{1}{n} - \frac{1}{2^n}, \frac{1}{n} + \frac{1}{2^n}); r > 0\} $$
Then
$$ \Omega = \left(\cup_{n = 1}^\infty \Omega_n\right) \cup \{ r > 1\} $$
is an open set, and so is a submanifold of $\mathbb{R}^2$ (and it is connected). Its closure necessarily includes the origin. But at the origin $\bar{\Omega}$ is quite a horrible set; as is at the entire half-line given by $\theta = 0$. I'm personally not aware of any established terminology for classes of objects which include $\bar{\Omega}$.
Note also that the unit sphere can be considered to be the closure of itself. It can also be considered to be the closure of the submanifold given by the unit sphere with the north pole removed. So there's some question as to what the "boundary points" are if you are only presented with the closure of the set, but not the set itself. |
Help with probability combinations?! | The total number of hands with $6$ cards is $72 \choose 6$. If there are $N$ ways to get a certain type of hand, then the probability of getting that type of hand is $N/ {72 \choose 6}$.
One example, three of a kind and a pair: The three of a kind can be in any of the $12$ ranks, and the pair can be in any $11$ other ranks. There are $6 \choose 3$ possibilities for the suits of the three of a kind, and $6 \choose 2$ possibilities for the suits of the pair. The last card can be of any of the $10$ remaining ranks, and any of the $6$ suits. So the probability of getting a three of a kind + a pair is $$\frac{12 \cdot 11 \cdot {6 \choose 3} \cdot {6 \choose 2} \cdot 10 \cdot 6}{{72 \choose 6}} \approx 0.0152$$
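The same arithmetic in two lines (Python, just reproducing the count above):

from math import comb

N = 12 * 11 * comb(6, 3) * comb(6, 2) * 10 * 6   # three of a kind + pair
print(N, comb(72, 6), N / comb(72, 6))           # 2376000 156238908 ~0.0152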
By the way, you might be missing three hands: the flush, the straight and the four of a kind + another pair. |
A mass is oscillating on the end of a spring. The distance, y, of the mass from its equilibrium point is given by the formula | The frequency of oscillation is:
$$\nu=\dfrac{\Omega}{2\pi}$$
$$\Omega=12\pi\omega$$
So:
$$\nu=6\omega,$$ which is the frequency of oscillation in $\mathrm{Hz}$. This means you have $6\omega$ oscillations every second. |
Sum of two Beta distributed random variables | The flaw in the claim is made apparent by simply considering the support of $X$, $Y$, and the sum $X+Y$: since $X, Y \in [0,1]$, $X+Y \in [0,2]$ and that is very clearly not Beta distributed no matter what the underlying parameters might be. |
Gödel's incompleteness theorem and real closed fields | The reason that the result does not contradict the Incompleteness Theorem is that the natural numbers are not definable over the theory of real-closed fields. Roughly speaking, this means that there is no formula $F(x)$ of our language such that $F(a)$ is true in the reals if and only if $a$ is a natural number.
If there were such a formula, your intuition would be correct, and we could translate any problem about the integers under addition and multiplication into a problem about the reals. Then the theory of real-closed fields, since it is recursively axiomatizable, could not be complete.
We can produce a formula in the language of real-closed fields that "says" $x=1$. We can also produce a formula that says $x=2$, and a formula that says $x=3$, and so on. Then we can also say that $x=1$, $2$, or $3$. But there is no single formula that says that $x$ is a natural number. (We cannot use infinitely long formulas.) There is, by the way, also no formula that says that $x$ is rational.
There is a similar phenomenon in the theory of algebraically closed fields of characteristic $0$. The theory is complete. From the fact of completeness we can immediately deduce, from the Incompleteness Theorem, that the natural numbers are not definable in the theory.
Added: There is a complete (in the informal sense!) classification of the sets that are definable over the theory of real-closed fields. The idea has been used to establish connections between algebraic geometry and model theory. A number of important recent results come from exploiting that connection. |
What constitues algebraic competency? | From my experience, Calculus teachers will expect you to know how to do some "basic algebra". They might give you a quick refresher at the start of the first semester but if at that point things are still fuzzy, you will feel that the course is heavy and hard to follow. So for example, being able to:
Isolate a variable ($2x+1=3x\Rightarrow x=1$)
Know your exponent and log laws
Know basic math symbols and notation such as $\mathbb{R}$, $\Rightarrow$, $\infty$, etc.
Knowing what a function is and their properties
Know your basic functions, their domains, ranges and graphs (not quite algebra but still useful)
Knowing vocabulary such as "constant", "variable", "factor", "polynomial", "degree", "term", etc.
Being able to use factorisation (of polynomials)
will all come in handy. You don't have to know everything from the start, but the more proficient you will be with this, the more smoothly Calculus will go.
Let me give you an example. In the first Calculus course, you will have to calculate limits like this one:
\begin{equation}
\lim_{x\rightarrow\infty}\dfrac{x^2-x}{x-1}
\end{equation}
In order to do this, you have to see that $x^2-x=x(x-1)$, which will allow you to cancel out the $x-1$ in the numerator and denominator and get to the final answer (which is $\infty$). But if you are not used to doing factorization, your eye might not see it and you might get stuck on the problem longer than someone who is "algebraically competent" would.
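For completeness, a quick symbolic check of that limit (a sketch in Python's sympy, not part of the original course advice):

import sympy as sp

x = sp.symbols('x')
print(sp.limit((x**2 - x) / (x - 1), x, sp.oo))   # oo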
The list I gave you already covers a lot but probably isn't complete. It's a good place to start. |
Logic: proving a contradiction (Compound propositions) | Since it ends in $\dotsb\land(p\land\lnot r)$, this could only be true if $p\land\lnot r$ is true — that is, if $p$ is true and $r$ is false.
Substituting that into the first term, we get that for it to be true, we must have:
\begin{align}(\lnot{\rm T}\lor q)&\land(\lnot q\lor{\rm F})\\
({\rm F}\lor q)&\land(\lnot q\lor{\rm F})\\
q&\land\lnot q
\end{align}
which is clearly never true. Thus, the statement as a whole is never true, i.e., it is a contradiction. |
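As a sketch, the same conclusion by brute force over all eight truth assignments (Python; this assumes the full proposition is $(\lnot p\lor q)\land(\lnot q\lor r)\land(p\land\lnot r)$, as the substitution above indicates):

from itertools import product

stmt = lambda p, q, r: ((not p) or q) and ((not q) or r) and (p and not r)
print(any(stmt(p, q, r) for p, q, r in product([False, True], repeat=3)))
# False: no assignment satisfies it, so it is a contradiction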
how can using lagranigan when subject to is membership function for optimization | This problem has nothing to do with Lagrangians, as it is a discrete optimization problem.
Given $n$ points in ${\mathbb R}^d$, $\>d\geq1$, the function
$$J(y):=\sum_{k=1}^n|x_k-y|^2$$
is minimized iff $y$ is the centroid of the $x_k$, i.e., iff
$$y={1\over n}\sum_{k=1}^n x_k=:x_*\ .$$
In the case at hand we are not allowed to choose $y:=x_*$; instead we have to choose $y:=x_k$ for some $k\in[n]$. Replacing $x_*$ by $x_k$ results in the extra cost $n\,|x_*-x_k|^2$ (this is essentially the parallel axis theorem). In order to make this extra cost minimal we choose $k_*\in[n]$ such that $|x_{k_*}-x_*|$ is minimal, and put $y_*:=x_{k_*}$. The resulting value of $J$ then is
$$J(y_*)=\sum_{k=1}^n|x_k-x_*|^2 +n\,|x_{k_*}-x_*|^2\ .$$ |
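A short numerical sketch (Python/numpy, with made-up data): by the identity above, the best admissible $y$ is simply the data point closest to the centroid.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 2))                    # n points in R^d
J = lambda y: ((x - y) ** 2).sum()
x_star = x.mean(axis=0)                        # unconstrained minimizer (centroid)
k_star = np.argmin(((x - x_star) ** 2).sum(axis=1))
y_star = x[k_star]                             # best choice among the x_k
print(np.isclose(J(y_star), J(x_star) + len(x) * ((y_star - x_star) ** 2).sum()))
# True: the parallel-axis decomposition of the extra cost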
Proving the following function is a Diffeomorphism | To apply the inverse function theorem you need to prove that the derivative $Df(x)$ is invertible. This follows quite easily from the definition (with the limit $\lim_{h\to 0}\frac{f(x+h)-f(x)-Df(x)h}{\Vert h\Vert}=0$): If $y\neq 0$, then take $t$ very small, so that $\frac{\Vert f(x+ty)-f(x)-Df(x)ty\Vert}{\Vert ty\Vert}<\frac{1}{2}$. This implies
$$\frac{\Vert Df(x)y\Vert}{\Vert y\Vert}\geq\frac{\Vert f(x+ty)-f(x)\Vert}{\Vert ty\Vert}-\frac{\Vert f(x+ty)-f(x)-Df(x)ty\Vert}{\Vert ty\Vert}\geq1-\frac{1}{2}=\frac{1}{2},$$
so in particular $Df(x)y\neq 0$. Therefore $Df(x)$ is injective and, since it is a linear map, also surjective and hence invertible.
In particular, the Inverse Function Theorem states that $f$ is an open map with open image. The same proof as in https://math.stackexchange.com/a/3129225/58818 works in this case to prove that $f$ is surjective. |
Clarification about the measurability of a function | Well, $d(-,x)$ is continuous on $\Omega'$. Indeed, by the triangle inequality,
$$
d(y,x)-d(z,x)\leq d(y,z)
$$
and by symmetry, we get $|d(y,x)-d(z,x)|\leq d(y,z)$, so $d(-,x)$ is actually a contraction. Therefore, $d(f,x)$ is the composition of a measurable map with a continuous function, and hence measurable. |
Is this set in $\Bbb{C}^3$ compact? | Note that the set defined in $\Bbb C^n$ by the "unit sphere equation" $z_1^2+\cdots+z_n^2=1$ is a very different set from the one defined by the corresponding equation in $\Bbb R^{2n}$ (with $2n$ variables).
Since $\Bbb C$ is algebraically closed, the "unit sphere" in $\Bbb C^3$ is not bounded. For instance, for any $z_1,z_2\in \Bbb C$, the equation $z_1^2+z_2^2+z=1$ has at least one solution. |
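A two-line numerical illustration (Python; the values $z_1=10$, $z_2=0$ are arbitrary):

import cmath

z1, z2 = 10, 0
z3 = cmath.sqrt(1 - z1**2 - z2**2)    # = i*sqrt(99), so |z3| is large too
print(z1**2 + z2**2 + z3**2)          # (1+0j), up to rounding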
Are set-theoretically defined functions and category-theoretic morphisms equivalent notions? | I don't think there is one (and to many, that's the point of "category"). The one arguable link is that the definition of a category requires set-theoretic functions in order to define composition, usually as a map
$$
\mathrm{Hom}(A,B) \times \mathrm{Hom}(B,C) \to \mathrm{Hom}(A,C).
$$ |