title | upvoted_answer
---|---
proof for propositional logic | The formula you exhibited is not a tautology. It is false when $p$ and $r$ are true while $q$ is false. (It's also false when $p,r$ are false and $q$ is true.)
Perhaps the last $\land$ was supposed to be $\leftrightarrow$; then the formula would be a tautology. |
Maximal ideals in $R/I$ for $R$ a commutative ring | Yes. It is part of the all-powerful and important Correspondence Theorem for rings. There is also a Correspondence Theorem for groups. |
Hereditarily Lindelof Space | Let $X$ be hereditarily Lindelöf. If $F$ is closed in $X$, consider $U=F^\complement$, which is open. For each $x \in U$ pick (by $T_3$-ness of $X$) an open neighbourhood $O_x$ of $x$ such that $\overline{O_x} \subseteq U$.
Then $\{O_x: x \in U\}$ is an open cover of $U$, which is Lindelöf by assumption, so there is a countable $N \subseteq U$ such that $$\bigcup\{O_x: x \in N\}=U$$
Because all closures of $O_x$ are also a subset of $U$ we can even say
$$\bigcup\{\overline{O_x}: x \in N\}=U$$
making $U$ an $F_\sigma$ and so by de Morgan
$$F=\bigcap\{\overline{O_x}^\complement: x \in N\}$$
is a $G_\delta$.
Note that we only need regularity of $X$ (which will follow if $X$ is Hausdorff, in your case) for this direction.
For the reverse: to see $X$ is hereditarily Lindelöf, we only need to show all open subsets of $X$ are Lindelöf. As all closed sets are $G_\delta$ by assumption, it follows by de Morgan (as in the previous proof) that all open sets are $F_\sigma$ and thus $\sigma$-compact (as closed sets are compact) and hence Lindelöf (a countable union of finite subcovers is a countable subcover, etc.)
So there we do use the compactness of $X$. |
Proving a Galois Group is isomorphic to $D_4$ | We put as before $\;a=\sqrt{2+i}\;,\;\;b=\sqrt{2-i}$ . I think the basic automorphisms are ("copying" the embedding $\;D_4\hookrightarrow S_4$ ):
$$\begin{cases}\sigma:\;\;a\mapsto -b\;,\;\;b\mapsto a\;\\{}\\\tau:\;\;a\mapsto a\;,\;\;b\mapsto -b\;\end{cases}$$
Observe then
$$\begin{cases}\sigma^2(a)=\sigma(-b)=-a\;,\;\;\sigma^2(b)=\sigma(a)=-b\;,\;\\{}\\
\sigma^3(a)=\sigma(-a)=b\;,\;\;\sigma^3(b)=\sigma(-b)=-a\;,\\{}\\
\sigma^4(a)=\sigma(b)=a\;,\;\;\sigma^4(b)=\sigma(-a)=b\;.\end{cases}\;\;\;\;\;\;\;\;\;\;\implies \text{ord}\,\sigma=4$$
$${}$$
$$\tau^2(a)=a\;,\;\;\tau^2(b)=b\;\;\;\;\implies \text{ord}\,\tau=2$$
And also
$$\begin{cases}\tau\sigma\tau(a)=\tau\sigma(a)=\tau(-b)=b=\sigma^3(a)\\{}\\
\tau\sigma\tau(b)=\tau\sigma(-b)=\tau(-a)=-a=\sigma^3(b)\end{cases}\;\;\;\;\;\;\;\;\;\;\implies \tau\sigma\tau=\sigma^3$$
and we thus got
$$\text{Gal}\,\left(K/\Bbb Q\right)=\left\langle\;\sigma,\,\tau\;\middle|\;\sigma^4=\tau^2=1\;,\;\;\tau\sigma\tau=\sigma^3\;\right\rangle\cong D_4$$ |
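If you want to double-check these relations mechanically, $\sigma$ and $\tau$ can be encoded as signed permutation matrices acting on $(a,b)$, with column $j$ holding the image of the $j$-th generator. A small Python/NumPy sketch (not part of the original argument):

import numpy as np

S = np.array([[0, 1], [-1, 0]])   # sigma: a -> -b, b -> a
T = np.array([[1, 0], [0, -1]])   # tau:   a -> a,  b -> -b
I = np.eye(2, dtype=int)

print(np.array_equal(np.linalg.matrix_power(S, 4), I))           # True: ord(sigma) = 4
print(np.array_equal(T @ T, I))                                  # True: ord(tau) = 2
print(np.array_equal(T @ S @ T, np.linalg.matrix_power(S, 3)))   # True: tau*sigma*tau = sigma^3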
Strategies for developing explicit formulas for nth term given recurrence relation? | Since you have a first order recurrence relation (i.e. only two successive terms $a_n$ and $a_{n+1}$ are involved) you can solve the recurrence relation by getting from $a_{n+1}$ back to $a_0$ as follows $$\begin{align*}a_{n+1}=2a_n+1&=2^1(2a_{n-1}+1)+1=2^2a_{n-1}+(1+2)=\\\\&=2^2(2a_{n-2}+1)+3=2^3a_{n-2}+(1+2+2^2)=\\\\&=2^3(2a_{n-3}+1)+7=2^4a_{n-3}+(1+2+2^2+2^3)=\\\\&=\ldots (\text{guess the solution, by generalizing the above sequence}) \\\\&=2^{n+1}a_0+\sum_{k=0}^{n}2^k\end{align*}$$ and since $a_0=0$ we have that $$a_{n+1}=2^{n+1}\cdot0+\frac{1-2^{n+1}}{1-2}=2^{n+1}-1$$ where we also used the formula for the finite geometric series. This gives the explicit formula for the given recurrence relation which is $$a_n=2^n-1$$
Note that this method works only in special cases where the relation is rather simple. For general methods, look at the links provided in the comments. |
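As a quick numerical sanity check of the closed form (a small Python sketch, not part of the original answer):

# Check a_n = 2^n - 1 against the recurrence a_{n+1} = 2*a_n + 1 with a_0 = 0.
a = 0
for n in range(10):
    assert a == 2**n - 1
    a = 2*a + 1
print("a_n = 2^n - 1 matches for n = 0..9")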
Which of those courses can get me to Theoretical Physics? | Partial answer.
For particle physics you'll certainly need Lie groups (and their representations). You'll also need partial differential equations (not on your list) and operator theory.
You (probably) won't need number theory, or algebraic geometry (but who knows for sure).
Since you need this information as part of your decision about which school to attend, please edit your question to show which courses are available at school A but not B, and vice versa. For courses common to both schools you can postpone deciding whether to take them until you get there.
Finally, this might be a better question on physics stackexchange. |
How do I calculate the following limit with L'Hospital? | Hint: This is one where you learn a lot more without L'Hopital: Note that for large $x, x^2 < e^{2x}.$ Thus $e^{2x} < x^2+e^{2x} < 2e^{2x}$ for large $x.$ Similarly, $e^x < x + e^x < 2e^x.$ If you use these estimates, you can bound the expression below and above with functions that have the same limit. |
Big-O for $\Sigma_{i = 1}^{N} i \times \binom{N}{i}$ | Yes it is true, but we can get a better upper bound by observing that
$$
\sum_{i=1}^N\binom{N}{i}\binom{i}{1}=\sum_{i=1}^N N\binom{N-1}{i-1}=N\sum_{u=0}^{N-1}\binom{N-1}{u}=N2^{N-1}
$$
where we have used the identity
$$
\binom{n}{k}\binom{k}{r}=\binom{n}{r}\binom{n-r}{k-r}\quad (n\ge k\geq r\geq 0)
$$
as well as the fact that
$$
2^n=(1+1)^n =\sum_{k=0}^n \binom{n}{k}
$$
by the binomial theorem. |
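A one-loop numerical check of the identity $\sum_{i=1}^N i\binom{N}{i}=N2^{N-1}$ (a Python sketch, not part of the original answer):

from math import comb

for N in range(1, 12):
    assert sum(i * comb(N, i) for i in range(1, N + 1)) == N * 2**(N - 1)
print("identity holds for N = 1..11")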
Describe in English a Turing machine that semidecides the following language - Is this solution correct? | Let
$$
L = \{ \langle M \rangle \mid M \text{ accepts the binary code of at least } 4 \text{ odd numbers} \}
$$
Now construct a Turing machine $N$ such that
1. $N$, on input $I$, checks whether $I = \langle M \rangle$ for some Turing machine $M$. If not, stop and reject.
2. If $I = \langle M \rangle$ for some Turing machine $M$, $N$ runs $M$ on the binary codes of all odd numbers. More precisely, $N$ has two phases between which it alternates. During the first phase it writes down the least odd number that it hasn't run $M$ on yet (for at least one step). Then it runs $M$ on all numbers it has written down thus far for $1$ step (or any other finite number of steps). $N$ accepts $I$ if it passes our test in item 1. and $M$ accepts (at any point during $N$'s run) the binary code of $4$ odd numbers. |
Circumscribed circle | $\triangle OBC$ is right angled, and $OB=OC$. You'll be able to find the radius. |
What does "Consider R as an vector space over Q" mean? | A vector space over a field $k$ is a set of vectors $V$ with an addition operation and a scalar multiplication operation (subject to some axioms). Consider $V=\mathbb{R}$ and $k=\mathbb{Q}$, and take vector addition to just be real number addition, and scalar multiplication to just be real number multiplication. It's not hard to see (from the field axioms for $\mathbb{R}$) that all the axioms of a $\mathbb{Q}$-vector space are satisfied.
As far as the dimensionality of $\mathbb{R}$ as a $\mathbb{Q}$-vector space, think about the cardinality of an $n$-dimensional $\mathbb{Q}$-vector space for $n$ finite, and then ask whether $\mathbb{R}$ has that cardinality. |
Prove that the Jordan Canonical Form of $T$ contains the Jordan Canonical Form of $T|_W$ for any $T$-invariant $W.$ | Crucially, the characteristic polynomial of $T|_W$ divides the characteristic polynomial of $T,$ and the minimal polynomial of $T|_W$ divides the minimal polynomial of $T.$ Consequently, if we obtain the Jordan Canonical Form for $T|_W$ by finding the Smith Normal Form of $xI - T|_W$ and using the elementary divisors, then these elementary divisors are among the elementary divisors obtained from the Smith Normal Form of $xI - T.$ Ultimately, this guarantees that the Jordan Canonical Form for $T|_W$ is a submatrix of the Jordan Canonical Form for $T.$ For an excellent reference on how to obtain the Smith Normal Form $xI - T$ and the elementary divisors of $T,$ see sections 12.2 and 12.3 of the text Abstract Algebra by Dummit and Foote. |
First order ODE - Solve $dy/dx=\sin(x+2y)+\cos(x+2y)$ | Hint: set $v=x+2y\implies\frac{dv}{dx}=1+2\frac{dy}{dx}$ |
Is there an infinite sequence that converges only for a specific x? | Example:
$$f_i(x)=\begin{cases}\frac{1}{i!}&x=n\\ x^2&x\neq n\end{cases}$$ |
Explicit piecewise linear approximation of a function of 4 variables | For fixed $t$, the $x$ range is fixed for both $y$s, therefore
$$
f(x,y,z)\sim\sum_{i=1}^n\sum_{j=1}^5\sum_{k=1}^2f(x_i,y_j,z_k)L_i(x)L_j(y)L_k(z),
$$
where $L_i$ is the Lagrange basis polynomial:
$$
L_i(h)=\prod_{l=1,~ l\neq i}^n\frac{h-h_l}{h_i-h_l}.
$$ |
Three Terms Inversely Proportional | You can do this in two steps. In each step, change just one of the variables to the desired final value.
If $3$ workers assemble $5$ computer parts in $3$ hours, how many computer parts will $9$ workers assemble in $5$ hours?
We need to increase the number of workers from $3$ to $9$ and the number of hours from $3$ to $5.$ Choose one thing to do first.
Suppose you choose to increase the number of workers first from $3$ to $9.$ Then $9$ workers can assemble $\frac{9}{3}\times5=15$ parts in $3$ hours.
Now increase the number of hours. We know $9$ workers can assemble $15$ parts in $3$ hours, so in $5$ hours, the same $9$ workers can assemble $\frac{5}{3}\times15=25$ parts.
To do the whole thing in one equation, just apply the second ratio without first simplifying the first multiplication. So instead of $\frac{9}{3}\times5=15$ and then $\frac{5}{3}\times15=25,$ you have
$$
\frac{5}{3}\times\frac{9}{3}\times5=
\frac{5\times9\times5}{3\times3}=25.
$$ |
Continuous partial derivatives | We have
$$\left|\dfrac{x^3y^2+2xy^4}{{(x^2+y^2)^{3/2}}}\right|\le \dfrac{|x|(x^2+y^2)^2+2|x|(x^2+y^2)^2}{(x^2+y^2)^{3/2}}=3|x|(x^2+y^2)^{1/2}\xrightarrow{(x,y)\to(0,0)}0$$
so $\dfrac {\partial f}{\partial x}(x,y)$ is continuous at $(0,0)$ |
Proof that a graph has lebesgue measure zero using fubini's thm | Let $G_f$ denote the graph of $f$. If $\mu$ is Lebesgue measure on $\mathbb R^{n}$ and $\nu$ is Lebesgue measure on $\mathbb R$ then $(\mu \times \nu) (G_f)=\int_{G_f} d(\mu \times \nu)=0$ by the Fubini/Tonelli theorem, because the section of $G_f$ by any point $x \in \mathbb R^{n}$ is the singleton set $\{f(x)\}$ and $\nu \{f(x)\}=0$ for all $x$. |
If $X$ is gamma distributed, find the distribution of $Y=1/X$ | Following @StefanHansen's comment, note that
$$P(Y\lt y)=P(X\gt1/y)=\int_{1/y}^{+\infty}f_X(x)\mathrm dx\stackrel{(x=1/t)}{=}\int_0^yf_X\left(\frac1t\right)\frac{\mathrm dt}{t^2},
$$
hence
$$
f_Y(y)=\frac1{y^2}f_X\left(\frac1y\right).
$$ |
Finding the total probability | Exactly one of the following things can occur:
$$\begin{array}{c|c|c}\text{First pick}&\text{Second pick}&\text{Total number of times a green was selected}\\
\hline \color{blue}{B}& \color{blue}{B}& 0~~~~\color{red}{\times}\\
\color{green}G&\color{blue}{B}&1~~~~\color{green}{\checkmark}\\
\color{blue}{B}&\color{green}G&1~~~~\color{green}{\checkmark}\\
\color{green}G&\color{green}G&2~~~~\color{green}{\checkmark}
\end{array}$$
Getting at least one green corresponds to any of the events above except for the one where you selected two blues.
As for approaching the problem... you may either find the probability of each outcome and add them up for those where a green was selected at least once, or what most people would do instead is find the probability that a green was not selected at least once and subtract it away from $1$ since it is known that the probabilities should have added up to $1$.
As for actually calculating the probabilities, I will show you one of the cases and let you figure out how to repeat the process for the rest. I will show you how to calculate the probability that the first ball is green and the second is blue.
The probability that the first ball selected is green is $\frac{5}{42}$. This should be clear already... since there are five green marbles in a bag of $42$ marbles and we selected one at random (which implies that each marble was equally likely to have been selected).
Once that has happened, we then take another ball and see if it is blue. That would happen with probability $\frac{37}{42}$.
The probability that these both happen one after the other is the product of their probabilities, so the probability that the first ball is green and the second ball is blue is $\frac{5}{42}\times\frac{37}{42}$.
Now, in this problem we are told that we replace the ball, meaning that after we pulled it out and looked at it, we put it back. In other similar problems, the ball might not have been replaced in which case depending on the outcome of the first draw, the second draw will have a different number of each ball available and a different number of total balls remaining, so take that into consideration when finding the probabilities to multiply by.
Now, to continue, again, either recognize that in your problem you have "At least one green" corresponds to "green then blue" or "blue then green" or "green then green" and add the probabilities of these together. Alternatively, you find the probability that there was not at least one green and subtract this away from $1$. They will both give the same answer.
In larger, more challenging problems, you have your choice on whether you approach them directly or indirectly or a different way altogether. Usually, you should pick the one which requires the least arithmetic or requires the least confusing argument, but that is subjective. |
$s_n=s_{n-1}+(n-1)s_{n-2}$ prove $s_n>\sqrt{n!n}$ for $n\ge4$ | Let's try an induction :
You have $s_1 = 1$ and $s_2 = 2$, so $s_3 = 4$ and $s_4 = 10$ and $s_5 = 26$.
Therefore, you have $s_4 = 10 = \sqrt{100} > \sqrt{96} = \sqrt{4! \times 4}$. And $s_5 = 26 = \sqrt{676} > \sqrt{600} = \sqrt{5! \times 5}$.
Now, let's suppose your result is true for two successive ranks $n$ and $n+1$, that is you have $s_n > \sqrt{n! n}$ and $s_{n+1} > \sqrt{(n+1)!(n+1)}$. You have then
$$s_{n+2} = s_{n+1} + (n+1)s_n$$
so
$$s_{n+2}^2 = s_{n+1}^2 + (n+1)^2s_n^2 + 2 (n+1)s_{n+1}s_n$$
so you get
$$s_{n+2}^2 > (n+1)!(n+1) + (n+1)^2n! n + 2(n+1) \sqrt{(n+1)!(n+1)n!n}$$
$$=(n+1)!(n+1)^2 + 2(n+1)!(n+1)\sqrt{n}$$
$$=(n+1)!((n+1)^2 + 2(n+1)\sqrt{n})$$
Now $n \geq 4$, so $\sqrt{n} \geq 2$, so you get
$$s_{n+2}^2 > (n+1)!((n+1)^2 + 4(n+1)) = (n+1)!((n+2)^2 + 2n +1)$$ $$> (n+1)!(n+2)^2 = (n+2)!(n+2)$$
Therefore you have
$$s_{n+2} > \sqrt{(n+2)!(n+2)}$$
which is your inequality at the rank $n+2$. This completes your proof. |
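A quick numeric confirmation of the base cases and several more terms (a Python sketch, not part of the proof):

from math import factorial

s = [0, 1, 2]                          # s_1 = 1, s_2 = 2 (index 0 unused)
for n in range(3, 15):
    s.append(s[n-1] + (n-1)*s[n-2])    # the given recurrence
assert all(s[n]**2 > factorial(n)*n for n in range(4, 15))
print("s_n^2 > n! * n holds for n = 4..14")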
Question on inscribed equilateral triangle | This sketch (made with GeoGebra) should do better than a hand-drawn one. All specs are met: both triangles are equilateral, $BC\parallel DE$, and the circle is in fact a circle.
From this we can use the intercept theorem for $PQ\parallel BC$ and thus $AQ:QC = AR:RS$
But since $AR=OR$ by construction, $RO = OS = AR$ and therefore
$$AR:RS = 1:2=AQ:QC$$ |
Show that $P(\lim\limits_{n\to \infty} X_{n}=0 \operatorname{or} 1)=1$ and if $X_{0}=\theta$ then $P(\lim\limits_{n\to \infty} X_{n}=1)=\theta$ | Since $(X_n)_{n\geq 0}$ is a bounded martingale, it converges almost surely. Let $X_{\infty}$ denote its a.s-limit. We know that $\mathbb{P}(X_{\infty} \in [0, 1]) = 1$.
Now we claim that
$$ \mathbb{E}[X_{n+1}(1-X_{n+1})] = (1-\alpha^2)\mathbb{E}[X_n(1-X_n)] \tag{1}$$
holds. Indeed, write $A_{n+1} = \{ X_{n+1} = \alpha + \beta X_n \}$. Then by noting that
$$ \mathbb{P}(\{ X_{n+1} = \alpha + \beta X_n\} \cup \{ X_{n+1} = \beta X_n\})
\stackrel{\text{(tower)}}{=} \mathbb{E}[X_n + (1-X_n)] = 1, $$
we may write $ X_{n+1} = \alpha \mathbf{1}_{A_{n+1}} + \beta X_n $. From this, we get
\begin{align*}
\mathbb{E}[X_{n+1}(1-X_{n+1})]
&= \mathbb{E}[X_{n+1}] - \mathbb{E}[X_{n+1}^2] \\
&= \mathbb{E}[X_n] - \mathbb{E}[\alpha^2 \mathbf{1}_{A_{n+1}} + 2\alpha\beta X_n \mathbf{1}_{A_{n+1}} + \beta^2 X_n^2] \\
&\stackrel{\text{(tower)}}{=} \mathbb{E}[X_n] - \mathbb{E}[\alpha^2 X_n + 2\alpha\beta X_n^2 + \beta^2 X_n^2] \\
&= (1-\alpha^2)\mathbb{E}[X_n(1-X_n)]
\end{align*}
as desired.
From $\text{(1)}$, we get $\mathbb{E}[X_n(1-X_n)] \to 0$ as $n\to\infty$. Then by the dominated convergence theorem,
$$ \mathbb{E}[X_{\infty}(1-X_{\infty})] = \lim_{n\to\infty} \mathbb{E}[X_n(1-X_n)] = 0. $$
Since $X_{\infty}(1-X_{\infty}) $ a.s. non-negative, the above computation shows that $X_{\infty}(1-X_{\infty}) = 0$ a.s., and so, we get $\mathbb{P}(X_{\infty} \in \{0, 1\}) = 1$ as desired.
Finally, by noting that $X_{\infty}$ is $\{0,1\}$-valued almost surely, we can write
$$ \mathbb{P}(X_{\infty} = 1) = \mathbb{E}[X_{\infty}] = \lim_{n\to\infty} \mathbb{E}[X_n] = \mathbb{E}[X_0] = \theta. $$ |
Probability of a big coastal storm occurring twice over twenty years | The probability that there are at least two storms is $1$ minus the probability of no storms, minus the probability of exactly $1$ storm. For the latter probability there are $20$ choices for what year the storm occurs, so the probability is $$20\cdot.01\cdot.99^{19}$$ |
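Plugging in the numbers (one line of Python, with $p=0.01$ per year as in the question):

p, n = 0.01, 20
print(1 - (1 - p)**n - n*p*(1 - p)**(n - 1))   # ≈ 0.0169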
Show that $f^{-1}(Y\setminus B_{1})=X\setminus f^{-1}(B_{1})$. | The identity $f^{-1}(Y)=X$ is always true for any function $f:X \rightarrow Y$, and is in fact a trivial statement.
If $x\in X$ then $f(x) \in Y$ hence $x \in f^{-1}(Y)$. Thus, $X \subset f^{-1}(Y)$.
But certainly, we also have $f^{-1}(Y) \subset X$ by definition of the inverse image.
The equality follows.
NB: For any function, we can define the inverse image of a subset of the target space, and we use the symbol $f^{-1}$ for it. It must not be mixed with the notation $f^{-1}$ that represents the inverse function of a bijection. In order to distinguish both of them, just look at the nature of the argument. The "inverse image" symbol takes subsets of the target space as argument, whereas the inverse function of a bijection takes elements of the target space as argument. |
Calculus: Application of definite integrals | For (1) the arc length of C is given by
$L = \int_{-a}^a\sqrt{1+[y'(x)]^2}dx = 2\int_0^a\sqrt{1+\sinh^2(x)}dx =2\int_0^a\cosh(x)dx = 2\sinh(a) $ |
Tangent plane to the graph of $f(x,y)=|xy|$ at $(0,0)$ | First, $$\frac {\partial f}{\partial x}\bigg|_{(x,y)}=\lim_{h\to 0} \frac{f(x+h,y)-f(x,y)}{h}=\lim_{h\to 0} \frac {| (x+h)(y)|- |xy|}{h}=\lim_{h\to 0} |y| \frac{|x+h|-|x|}{h}$$
It follows that $$\frac {\partial f}{\partial x}\bigg|_{(0,0)}=\lim_{h\to 0} |0| \frac{|0+h|-|0|}{h}=\lim_{h\to 0} |0| \frac{|h|}{h}=0$$
Also, $$\frac {\partial f}{\partial y}\bigg|_{(x,y)}=\lim_{h\to 0} \frac{f(x,y+h)-f(x,y)}{h}=\lim_{h\to 0} \frac {| (x)(y+h)|- |xy|}{h}=\lim_{h\to 0} |x| \frac{|y+h|-|y|}{h}$$
so $$\frac {\partial f}{\partial y}\bigg|_{(0,0)}=\lim_{h\to 0} |0| \frac{|0+h|-|0|}{h}=\lim_{h\to 0} |0| \frac{|h|}{h}=0$$
This means that both $$\frac {\partial f}{\partial x}\bigg|_{(0,0)}$$ and $$\frac {\partial f}{\partial y}\bigg|_{(0,0)}$$ exist.
It remains to show that $$\lim_{(x,y)\to(0,0)}\frac{f(x,y)-f(0,0)-\frac {\partial f}{\partial x}\bigg|_{(0,0)}(x-0)-\frac {\partial f}{\partial y}\bigg|_{(0,0)}(y-0)}{\|(x,y)\|}=0$$
But $$\lim_{(x,y)\to(0,0)}\frac{f(x,y)-f(0,0)-\frac {\partial f}{\partial x}\bigg|_{(0,0)}(x-0)-\frac {\partial f}{\partial y}\bigg|_{(0,0)}(y-0)}{\|(x,y)\|}$$ $$=\lim_{(x,y)\to(0,0)}\frac{f(x,y)}{\|(x,y)\|}$$ $$=\lim_{(x,y)\to(0,0)}\frac{|xy|}{\|(x,y)\|}$$ $$=\lim_{(x,y)\to(0,0)}\frac{|xy|}{\sqrt{x^2+y^2}}=0,$$ where the last limit is $0$ because $|xy|\le\tfrac12(x^2+y^2)$, so the quotient is at most $\tfrac12\sqrt{x^2+y^2}$.
So with all this we can conclude that $f$ has a tangent plane at $(0,0)$. |
Intuition about Taking an Integral | There are different ways to define integrals, named after different people. What your teacher described is an informal explanation of the Riemann integral. You can see a rigorous construction under the link, but it amounts to subdividing the interval of integration into subintervals of smaller and smaller lengths, and replacing the area under the graph with the sum of areas of the rectangles. The heights of the rectangles are equal to values of the function at some point on the subinterval; the sums are called Riemann sums. If, as the sizes of the subintervals get uniformly smaller, there exists a limit of the Riemann sums, then the function is called Riemann integrable.
This works well for continuous functions and others, but not for unbounded functions because then you can make some rectangles have arbitrarily large areas. For such cases a more general notion of Lebesgue integral is used. It is much more involved, but roughly instead of subdividing the integration interval you are subdividing the range of the integrated function into small subintervals, and then add up areas of "rectangles" with bases being sets where the function takes values from them. If the limit exists it is called the Lebesgue integral, and the function is called Lebesgue integrable. Every Riemann integrable function is Lebesgue integrable but not vice versa. Moreover, the base sets involved can be very complicated, and a whole Lebesgue measure theory has to be developed first to figure out their "lengths".
Another way to define integral, called Daniell integral, is to approximate general functions by some "elementary" functions, whose integrals are either easy to compute as with step functions, or are already defined by some other construction, Riemann's for example. The integral is defined as the limit of integrals of approximating "elementary" functions. Daniell integral is equivalent to the Lebesgue integral in the sense that the same functions are integrable, and the values of integrals are the same. But it does not require developing measure theory in advance. There is a weaker but simpler version of Daniell integral called regulated integral.
There are also other more involved constructions like Henstock–Kurzweil integral, which is even more general than Lebesgue integral, Darboux integral, etc., but they are variations on the three ideas described above. |
Equivalent definition of subharmonic functions. | The "converse part", as you say, is false for regularity reasons. The function $u(x)=|x|$ is subharmonic on $\mathbb R$ in the sense of the second definition, but it is not in the sense of the first, because it is not differentiable. |
Are these two tournaments on four vertices isomorphic? | How many tournaments are there?
The top-ranked answer there suggests persuasively that all tournaments on four vertices with a (directed) Hamilton cycle are isomorphic. Given that both of yours contain the Hamilton cycle $(1\to3\to4\to2\to1)$, they are isomorphic.
Once you have that context, it is easier to find the isomorphism. The tournament without its Hamilton cycle is just two arcs. So you can quickly note that $u_3$ is the only vertex with an outdegree of one that leads to the other vertex with an outdegree of one. In the second graph, the vertex with that condition is $v_4$. Following the two Hamilton cycles around, the isomorphism is $$u_1\mapsto v_3, u_2\mapsto v_1,u_3\mapsto v_4, u_4\mapsto v_2$$ |
The determinant of sum of squares of a special family of real $2\times2$ matrices | Hints.
If $A_0$ is not a scalar multiple of the identity, show that it is either (i) a diagonalisable matrix with two distinct real eigenvalues, (ii) a scalar multiple of a rotation matrix with a conjugate pair of non-real eigenvalues, or (iii) similar to a $2\times2$ Jordan block.
In each of the above three cases, what matrices commute with $A_0$? |
Ap calculus: critical number of $F(x)=x^2-3x-\frac{4}{x}-2$ | If I remember correctly:
$a.)$ Given a function $f(x)$, we calculate the derivative of this function and set it equal to $0$: $f'(x)=0$
We then solve for $x$ using the resulting equation. We set the zeroes of $f'(x)$ as $c$ for easier notation.
$b.)$ Using the values of $c$ that we obtained earlier, note that:
$$\text{if } f(x) \le f(c), f(x) \, \text{has a maximum at $x=c$}$$
$$\text{if } f(x) \ge f(c), f(x) \, \text{has a minimum at $x=c$}$$
$c.)$ Using the values of $c$ obtained, you show that for values of $x$ not equal to $c$, you will get this:
$$\text{if } f'(x) \gt 0, f(x) \, \text{is increasing}$$
$$\text{if } f'(x) \lt 0, f(x) \, \text{is decreasing}$$
$$\text{if } f'(x) = 0, f(x) \, \text{is constant}$$ |
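As a hedged illustration of step $a.)$ applied to the question's $F(x)=x^2-3x-\frac{4}{x}-2$ (a SymPy sketch, not part of the original answer):

import sympy as sp

x = sp.symbols('x')
F = x**2 - 3*x - 4/x - 2
Fp = sp.together(sp.diff(F, x))      # F'(x) = (2x^3 - 3x^2 + 4) / x^2
crit = sp.real_roots(sp.numer(Fp))   # real zeros of the numerator; x = 0 is not in the domain
print([r.evalf(5) for r in crit])    # one real critical number, approximately -0.911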
Prove that a graph doesn't contain a cycle | HINT: How many edges does $G$ have? How many vertices? What do you know about the numbers of vertices and edges in a tree? |
A question on probability measures on uncountable spaces | I don't think the claim is correct: choose $X = [0,1]$ and define $$\displaystyle \mathbb{P}\left[(a,b)\right] = \mathbb{1}_{0.5 \in (a,b)},$$
where $\mathbb{1}_{0.5 \in (a,b)}$ stands for the indicator function.
For this example, any partition of $X$ can have at most one set that contains $0.5$ and therefore at most one set of a partition can have positive measure!
I think we need more conditions on the probability measure $\mathbb{P}$ |
Let $Y$ be a proper subspace of $(X, \| \cdot \|)$. Is $\text{dist}(x,Y) > 0$ for $x \in X \setminus Y$? | The answer is no. For a counterexample, let $(X,\|\cdot\|)=(\ell^\infty,\|\cdot\|_\infty)$ be the usual space of bounded sequences with the sup norm. Define $Y$ to be the subspace consisting of sequences that are eventually $0$ after finitely many terms. Then if we take $x=(1,\frac12,\frac13,\dots)\in X$ and $y_n = (1,\frac12,\dots,\frac1n,0,0,\dots)\in Y$, we have $\|x-y_n\| = \frac1{n+1}$ and therefore $\inf_{y\in Y} \|x-y\| = 0$. |
If $Df \equiv 0$, prove that $f$ is constant. | Here is another approach based on the same idea:
Pick $u_0 \in U$ and let $A = \{ u \in U | f(u) = f(u_0) \}$. Since $f$ is continuous,
$A$ is closed.
If $u \in A$, then $B(u,\epsilon) \subset U$ for some $\epsilon>0$ since $U$
is open. Choose $u' \in B(u,\epsilon)$ and let $p(t) = u+t(u'-u)$.
Now suppose $f(u) \neq f(u')$; then we can find a continuous linear functional $g$ such that
$g(f(u)) \neq g(f(u'))$. Then the function $\phi = g \circ f \circ p$
is a map $[0,1] \to \mathbb{R}$ and we can use the usual mean value
theorem.
We have $D \phi(t) = g(Df(p(t)) (u'-u)) = g(0) = 0$, from which we get
$g(f(u)) = g(f(u'))$,
which is a contradiction. Hence $f(u) = f(u')$, and so we see that
$A$ is open.
Since $U$ is connected, it follows that $A=U$. |
Find the least $n$ such that the fraction is reducible | Euclidean division is the key. Since
$$5n+6 = 5(n-13) + 71$$
We see that $\text{gcd}(5n+6, n-13)$ must divide 71, which is prime. As you say, we want $\text{gcd}(5n+6, n-13)$ to be greater than $1$, so it must be $71$. In particular
$$n-13 = 71k \quad \Longrightarrow \quad n = 13+71k$$
The least value is $n=84$ for $k=1$, and therefore
$$
\frac{n-13}{5n+6} = \frac{84-13}{5\cdot 84 + 6} =\frac{71}{426}
$$
which is reducible because
$$\frac{71}{426} = \frac{1}{6}$$ |
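A two-line check with Python's gcd (not part of the original answer):

from math import gcd

n = 84
print(gcd(5*n + 6, n - 13))               # 71
print((5*n + 6) // 71, (n - 13) // 71)    # 6 1, i.e. 71/426 = 1/6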
Given that $H(x)$ is a Heaviside function, how would I graph $H(x)+2H(x-3)-3H(x-5)$ | @JeanMarie provided an excellent comment on how to do it. Also, @JohnJoy provided a very useful interpretation that is worth exploring.
You would just break up the Heaviside function into three distinct pieces and add the results of them over their respective ranges.
The plot should end up being a step function: $0$ for $x<0$, $1$ on $[0,3)$, $3$ on $[3,5)$, and $0$ for $x\ge 5$. |
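A minimal NumPy/Matplotlib sketch of the sum, using the convention $H(0)=1$ (that convention is my assumption; the question doesn't fix $H(0)$):

import numpy as np
import matplotlib.pyplot as plt

def H(x):
    return np.where(x >= 0, 1.0, 0.0)   # Heaviside step with H(0) = 1

x = np.linspace(-2, 8, 1001)
y = H(x) + 2*H(x - 3) - 3*H(x - 5)      # 0, then 1 on [0,3), 3 on [3,5), 0 after
plt.plot(x, y)
plt.show()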
How do I calculate this double integral? | The simplest way to compute the integral is to change to the polar coordinates:
$$
x=r\cos\phi,\quad y=r\sin\phi,\quad dxdy= rdrd\phi,
$$
so that:
$$\begin{align}
\iint_D \cfrac{2 dx dy}{(x^2 + y^2) \sqrt{x^2 + y^2}}
&=\iint_D \cfrac{2 rdr d\phi}{r^3}\\
&=2\int_{\pi/6}^{\pi/3} d\phi\int^R_{R\frac{\cos(\pi/12)}{\cos(\phi-\pi/4)}}\frac{dr}{r^2}\\
&=2\int_{\pi/6}^{\pi/3} d\phi\left[-\frac1r\right]^R_{R\frac{\cos(\pi/12)}{\cos(\phi-\pi/4)}}\\
&=2\int_{\pi/6}^{\pi/3} d\phi
\left(\frac{\cos(\phi-\pi/4)}{R\cos(\pi/12)}-\frac1R\right)\\
&=\frac2R\left[\frac{\sin(\phi-\pi/4)}{\cos(\pi/12)}-\phi\right]_{\pi/6}^{\pi/3}\\
&=\frac4R\left[\tan\left(\frac\pi{12}\right)-\frac\pi{12}\right].
\end{align}
$$ |
Using Logarithmic Properties to simplify a quotient? | $$\begin{align*}\ln|g(x)|&=\ln(\tan^3x)-\ln(e^{3x^3}x^7) \\
\ln|g(x)|&=3\ln(\tan x)-3x^3\ln(e)-7\ln(x) \\
\ln|g(x)|&=3\ln(\tan x)-3x^3-7\ln(x)\end{align*}$$ |
Find the lengths of the given curves | This equals $x^{-1/3}\sqrt{x^{2/3}+1}$, so use the substitution $u=x^{2/3}$. |
Why 2D linear interpolation formula sums the initial point's y component? | The two formulas only give the same result if the line goes through the origin. The first formula is the correct one. |
How to plot $y=\frac{1}{(x-4)^{1/3}}$ with mathematical software? | The problem is that Maple takes the complex root with the smallest complex argument, so the cube root of $-1$ is understood to be $e^{\frac{i\pi}{3}}$. One possibility is to ask Maple to plot a piecewise function, which in the $x<4$ region should be defined as $-\frac{1}{(4-x)^{\frac{1}{3}}}$. |
Round Table Adjacent Seats Problem | We assign seats at random, by putting place mats labelled $1,2,3,\dots, A, B, C$ at random in front of the $10$ chairs. The place mats labelled $A,B,C$ indicate where any person apart from $1$ to $7$ may sit. A pair of lovers now enters the room, and they wish to sit together. We find the probability that they can do so without asking anybody to move.
The only thing that matters is the placement of $A, B, C$. And since the table is round, all that matters is the placement of $B$ and $C$ relative to $A$. So we have $9$ empty spaces, and have to choose $2$ of them. There are $\binom{9}{2}$ choices, all equally likely.
We count the bad choices. For a choice to be bad, it must involve choosing $2$ non-adjacent seats from the $7$ not next to $A$. There are $\binom{7}{2}$ ways to choose $2$ seats, of which $6$ give an adjacent pair. So there are $15$ bad choices, out of the $\binom{9}{2}$ choices. Thus the probability there will be an adjacent pair is $\frac{21}{36}$. |
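If you want to confirm the count by brute force, here is a Python sketch (seat $0$ stands for the mat labelled $A$ on the circular table of $10$ seats):

from itertools import combinations

total = bad = 0
for b, c in combinations(range(1, 10), 2):   # place B and C relative to A at seat 0
    total += 1
    free = [0, b, c]
    # "bad" = no two free seats are circularly adjacent
    if not any((s - t) % 10 in (1, 9) for s in free for t in free if s != t):
        bad += 1
print(total, bad, 1 - bad/total)   # 36 15 0.5833... = 21/36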
Is there a closed-form solution for the lasso term in optimization problem? | A closed-form solution needs some conditions: for example, that $x$ is a vector, or that the coefficient matrix is the identity. Under other conditions, you must seek an approximate solution. |
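For reference, the best-known special case has an explicit answer: if the problem is $\min_x \frac12\|y-x\|^2+\lambda\|x\|_1$ (identity coefficient matrix), the solution is elementwise soft-thresholding. A minimal sketch under that assumption:

import numpy as np

def soft_threshold(y, lam):
    # argmin_x 0.5*||y - x||^2 + lam*||x||_1, solved elementwise
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

print(soft_threshold(np.array([3.0, -0.5, 1.2]), 1.0))   # [ 2. -0.  0.2]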
Why is $f:\mathbb{R}\to S^1$ $f(t)=(\cos(t),\sin(t))$ a local diffeomorphism? | The map $f$ is smooth and the derivative at a point is an injective linear map. Restricting the codomain of derivative to the tangent space to a point on the circle, the derivative becomes a linear isomorphism.
Inverse function theorem of smooth manifold applies and you obtain that $f$ is really a local diffeomorphism.
Edit: To answer @Idonotknow's questions all at once.
The map $f$ is not a global diffeomorphism for an obvious reason: the map is not injective as $f(x)=f(x+2\pi n)$ for all integers $n$ and all real numbers $x$.
If you view the map as $f:\Bbb R\to\Bbb R^2$, the derivative at $x$ would be given by the linear transformation represented by the Jacobian matrix $$\begin{bmatrix}-\sin x\\\cos x\end{bmatrix}$$ just as what OP found.
You check the above linear map's injectivity by letting it act on a nonzero vector $c\in\Bbb R$ (view it as a vector in the vector space $\Bbb R$), giving the nonzero vector $$c\begin{bmatrix}-\sin x\\\cos x\end{bmatrix}.$$ You check that this vector is nonzero by calculating its inner product with itself: $$c\begin{bmatrix}-\sin x&\cos x\end{bmatrix}c\begin{bmatrix}-\sin x\\\cos x\end{bmatrix}=c^2\not=0.$$
Viewing $f$ as a maps $f:\Bbb R\to\Bbb R^2$, the derivative at a point $x\in\Bbb R$ is a map between the tangent spaces $df_x:T_x\Bbb R\to T_{f(x)}\Bbb R^2$. What I mean by restricting the codomain, is that we choose the subspace $T_{f(x)}S^1$ of $T_{f(x)}\Bbb R^2$, and restrict the codomain to $T_{f(x)}S^1$. You can do so, because the image of $f$, by definition, is on $S^1$ only. |
Implicit surface representation of a cube | $\max (|x|,|y|,|z| ) = 1$ for the unit cube.
See http://en.wikipedia.org/wiki/Lp_space#The_p-norm_in_finite_dimensions. |
Find an equation of the tangent line to the curve $y = x\cos(x)$ at the point $(\pi, -\pi)$ | Your derivative, which we need for slope, is close, but $y'$ should be $$y' =\underbrace{(1)}_{\frac d{dx}(x)}\cdot(\cos x) + (x)\underbrace{( -\sin x)}_{\frac d{dx}( \cos x)}= \cos x -x\sin x$$ Now, for slope itself, we evaluate $y'(\pi) = \cos (\pi) - \pi\sin(\pi) = -1 - 0 = -1$.
That gives you the equation of the line: $$y+\pi = -(x -\pi)$$
To get the slope-intercept form, simply distribute the negative on the right, and subtract $\pi$ from each side to isolate $y$: $$y + \pi = -x+ \pi \iff y = -x$$ |
Why do we require a topological space to be closed under finite intersection? | You need to think about what the intuition behind open sets are. One way to think about it is through neighborhoods: an open set is a set which is a neighborhood of each of its points. What is a neighborhood of a point? A neighborhood of a point $x$ is a set that contains of all points that are "sufficiently close" to $x$ (what does "sufficiently close" mean? It depends on the situation; you think of different neighborhoods perhaps specifying different degrees of closeness). In particular, any set that contains a neighborhood of $x$ is itself a neighborhood of $x$. And specifying two degrees of closeness specifies another degree of closeness that makes sense (the smaller of the two at any given place, say).
So: if you think about open sets as sets that are neighborhoods of all of the points they contain, then it is natural that the union of any family of open sets will be open: each point in the union is in one of the open sets, that open set is a neighborhood of the point, and the union contains that neighborhood and so is itself a neighborhood. So the arbitrary union of open sets should still be open.
What about intersection? Well, if you take two open sets $O_1$ and $O_2$, and you consider a point $x$ in $O_1\cap O_2$, then $O_1$ contains all points that are $1$-sufficiently close to $x$, and $O_2$ contains all points that are $2$-sufficiently close to $x$ (with "$1$-sufficiently" and "$2$-sufficiently" describing the two degrees of closeness required), so $O_1\cap O_2$ will contains all points that are both $1$-sufficiently and $2$-sufficiently close to $x$, and so it contains all points that are "sufficiently close" to $x$ for some meaning of "sufficiently close", so it is also an open set. This gives you, inductively, any finite intersection.
But what about arbitrary intersections? Then you run into trouble, because specifying two degrees of "closeness" gives you a degree of closeness (the smaller one), but an infinite number of degrees of closeness may end up excluding everything! (Just as in your example, taking the intersection of all $(-\frac{1}{n},\frac{1}{n})$, which specify all points that are $\frac{1}{n}$-close to $0$, but the intersection excludes everything). So we don't want to require that an arbitrary intersection of neighborhoods be a neighborhood, and so we don't want to require that an arbitrary intersection of open sets be an open set. |
The sum of five different positive integers is 320. The sum of greatest three integers in this set is 283. | Hint:
To maximise $x$, we need to pay attention to the condition $x + e = 119$. The maximum possible is $x = 118$ as $e ≥ 1$, and with this value of $x$, it is possible to satisfy the other conditions (try it yourself!).
Then to minimise $x$, the largest three numbers should be consecutive, which results in $96 ≤ x$. When $x = 96, e = 23$, but then the remaining number is $320 - 23 - 283 = 14$, so $23$ is no longer the smallest number and $14 + 96 \ne 119$. Hence we need $283 + e = 320 - d$, and in the optimal case where $d = e+1$, what value of $x$ do we need? |
Reconciling definition of tangent vector with intuition | Using $\mathbb{R}^{3}$ as an example for a manifold on which to define tangent spaces is not a helpful start. The tangent spaces corresponding to $p\in\mathbb{R}^{3}$ are all isomorphic to $\mathbb{R}^{3}$ itself. It is more useful to think of a two-dimensional surface in $\mathbb{R}^{3}$ such as $S^{2}$, the surface of the unit ball in $\mathbb{R}^{3}$. A curve $\alpha$ from $(-\epsilon,\epsilon)\rightarrow{}S^{2}$ going through $p\in{}S^{2}$ will have a derivative at $p$, and all curves having the same derivative at $p$ form an equivalence class, which you can define to be a tangent vector at $p$, in other words, $v\in{}T_{p}(S^{2})$, where $v$ is this equivalence class. The set of all such equivalence classes is the tangent space $T_{p}(S^{2})$. $\partial/\partial{}\xi^{i}$ with $i=1,2$ is a basis for this tangent space, where $\xi^{i}$ is a local chart for the manifold $S^{2}$ around $p$, and $\partial/\partial{}\xi^{i}$ is the equivalence class containing a curve which only varies in the coordinate $\xi^{i}$ while the coordinates $\xi^{j}$ remain constant for $j\neq{}i$. Chapter 2 in Manifolds and Differential Geometry (Jeffrey Lee) is a good introduction, and I learned it from a pretty good and dense introduction to tangent vectors in Methods of Information Geometry by Shun-ichi Amari, pages 5-7. |
Generating all solutions for a negative Pell equation | The fundamental solution of the equation $x^2-2y^2=-1$ is $(1,1)$. We get all positive solutions by taking odd powers of $1+\sqrt{2}$. The positive solutions are $(x_n,y_n)$, where $x_n+y_n\sqrt{2}=(1+\sqrt{2})^{2n-1}$.
One can alternately obtain a recurrence for the solutions. If $(x_n,y_n)$ is a positive solution, then the "next" solution $(x_{n+1},y_{n+1})$ is given by
$$x_{n+1}=3x_n+4y_n,\qquad y_{n+1}=2x_n+3y_n.$$
Note that your solution $(7,5)$ is the case $n=2$, and $(41,29)$ is the case $n=3$.
Similarly, the positive solutions of the equation $x^2-dy^2=-1$ (if they exist) are obtained by taking the odd powers of the fundamental solution. |
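The recurrence is easy to iterate; a short Python check (not part of the original answer):

x, y = 1, 1                        # fundamental solution of x^2 - 2y^2 = -1
for n in range(5):
    assert x*x - 2*y*y == -1
    print(x, y)                    # (1,1), (7,5), (41,29), (239,169), (1393,985)
    x, y = 3*x + 4*y, 2*x + 3*y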
Help with integral with parameter. | Hint. Use the Leibniz integral rule. The first step is
$$g'(y)=\frac{\tan^2((\pi/y)\cdot y)}{2(\pi/y)}\cdot \frac{d}{dy}\left(\pi/y\right)
+\int_{0}^{\pi/y}\frac{d}{dy}\left(\frac{\tan^2(xy)}{2x}\right)dx.$$ |
Finding the area on a graph bounded by four curves. | Hint. Use a double integral in the following order:
$$A=\int_{-1}^1\int_?^? dx\,dy\ .$$
See if you can fill in the missing bits and complete the calculation. |
Bound difference of complex numbers to bound their log ratio | We assume that $a \ne 0$ (which implies $ b \ne 0$) and then calling $z=b/a$ the problem becomes $|1-z| \le 1-e^{-\delta}$ implies $|\log z| \le \delta $
Here we note that the inequality $|1-z| \le 1-e^{-\delta} < 1$ implies $\Re z >0$ so we have a canonical logarithm $\log z= \log |z| + i \arg z , |\arg z| < \frac{\pi}{2}$ and of course the inequality is not generally true for other branches as for those the argument, hence $|\log z|$, can go to infinity, so we assume that we use the principal branch with its standard Taylor series.
Then letting $w=1-z, |w| \le 1-e^{-\delta}=c$, $\log z=\log (1-w)=-\sum_{n \ge 1}\frac{w^n}{n}$
By the triangle inequality:
$|\log z| =|\log (1-w)| \le \sum_{n \ge 1}|\frac{w^n}{n}| \le \sum_{n \ge 1}\frac{c^n}{n}=|\log (1-c)|=-\log (1-c)=\delta$ so we are done! |
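A random spot-check of the bound with the principal branch (a Python sketch; $\delta=0.3$ is an arbitrary test value of mine):

import cmath, math, random

delta = 0.3
c = 1 - math.exp(-delta)
for _ in range(100000):
    r, phi = c*random.random(), 2*math.pi*random.random()
    z = 1 - r*cmath.exp(1j*phi)                 # any z with |1 - z| <= c
    assert abs(cmath.log(z)) <= delta + 1e-12   # principal branch
print("bound verified on random samples")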
The diffeomorphism of $\mathbb R^n$ | The function $f:\mathbb{R}\to\mathbb{R}$ defined by $f(x)=-x$ and $K=[-1,1]$ provides a counterexample - any continuous map $\tilde{f}$ satisfying properties 1) and 2) would necessarily fail to be injective by the intermediate value theorem.
As Jason DeVito points out below, we can use a similar setup to create a counterexample for any $n$. Letting $f:\mathbb{R}^n\to\mathbb{R}^n$ be defined by $f(x_1,x_2,\ldots,x_n)=(-x_1,x_2,\ldots,x_n)$, we have for all $p\in\mathbb{R}^n$
$$\det(df_p)=\begin{vmatrix}
-1 & 0 & \cdots & 0\\
0 & 1 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \cdots & 1
\end{vmatrix}=-1.$$
Thus, any diffeomorphism $\tilde{f}$ satisfying 1) would also have to have $\det(d\tilde{f}_p)=-1$ for some $p\in\mathbb{R}^n$. Because $\tilde{f}$ is a diffeomorphism, it can't have $\det(d\tilde{f}_p)=0$ for any $p\in\mathbb{R}^n$. Because $\det(d\tilde{f}_p)$ varies continuously with $p$, we must have $\det(d\tilde{f}_p)<0$ for all $p\in\mathbb{R}^n$. But $\det(dI_p)=1>0$ for all $p\in\mathbb{R}^n$, so we can't have $\tilde{f}=I$ anywhere. |
How would I express multiple intervals in this problem? | Hint: The metric topology on $\mathbb{Q}$ is the same as its subspace topology from $\mathbb{R}$, so a subset of $\mathbb{Q}$ is both closed and open iff it is both the intersection of a closed subset of $\mathbb{R}$ with $\mathbb{Q}$ and the intersection of an open subset of $\mathbb{R}$ with $\mathbb{Q}$.
Spoiler:
$[-\sqrt{2},\sqrt{2}] \cap \mathbb{Q} = (-\sqrt{2},\sqrt{2}) \cap \mathbb{Q}$ |
In how many ways can $5$ exams be scheduled in $40$ class days so that any two exams are at least $3$ class days apart? | This problem can be solved with the concept of integral solutions.
Suppose the teacher arranges the exams with gaps $a_1,a_2,a_3,a_4,a_5$ and $a_6$. The first exam is $a_1$ days after the start, that is, on day $a_1+1$. The next exam is then on day $(a_1+1)+(a_2+1)$, and so on.
So the constraints are $a_1,a_6 \ge 0$ and $a_2,a_3,a_4,a_5 \ge 2$. And their sum should be $40-5=35$.
Now you can frame an equation:
$a_1+a_2+a_3+a_4+a_5+a_6= 35$
You can write each of $a_2,a_3,a_4,a_5$ as $e_2+2,e_3+2,e_4+2,e_5+2$ such that $e_2,e_3,e_4,e_5 \ge 0$. So you get a new equation:
$a_1+e_2+e_3+e_4+e_5+a_6=27$
The number of integral solutions of this equation, each $\ge 0$, is given by $\binom{32}{5}$, which is the required answer |
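Both the formula and a brute-force count agree (a Python sketch, not part of the original answer):

from itertools import combinations
from math import comb

# Count 5-element day subsets of {1,...,40} with consecutive exams >= 3 days apart.
count = sum(1 for days in combinations(range(1, 41), 5)
            if all(b - a >= 3 for a, b in zip(days, days[1:])))
print(count, comb(32, 5))   # both 201376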
Can the median, angle bisector and the altitude of a triangle intersect to form an equilateral triangle? | I found $4$ situations where a median, a bisector and an altitude form an equilateral triangle. I believe this listing to be exhaustive. Note that half of them use external angle bisectors, and most of them have at least some part of the red triangle outside the blue, so not just a decomposition of the blue one. All of them reuse one original vertex. I'll leave it to you to decide which of these you consider solutions. Click on figures for a bigger view.
Edge length ratio $1:\sqrt{13}:4$
Angles ca. $13.9°, 60°, 106.1°$
Edge length ratio $\sqrt3-1:\sqrt2:2 = 1:\sqrt{\sqrt3+2}:\sqrt3+1$
Angles $15°, 30°, 135°$
Edge length ratio $\sqrt2:2:\sqrt3+1 = 1:\sqrt2:\sqrt{\sqrt3+2}$
Angles $30°, 45°, 105°$
Edge length ratio $1:2:\sqrt7$
Angles ca. $19.1°, 40.9°, 120°$
I found this via a considerable bit of Sage computation. The core idea is using homogeneous coordinates, and parametrizing the triangle as a set of three tangents to the unit circle. That way, the angular bisector can be expressed easily by connecting one vertex to the center of the circle, which is either the incircle or some excircle. Two tangents to the unit circle are parametrized using the tangent half-angle formula, while the third is fixed to the one tangent the formula won't cover. If you care about the details, here they are:
import itertools
import string
PR.<t,u> = QQ[]
def simpl(v):
    if not v:
        return v
    g = gcd(v.list())
    return v.parent()(v / g)
def cp(a, b):
    return simpl(a.cross_product(b))
def mp(a, b):
    return simpl(b[-1]*a + a[-1]*b)
AB = vector([1-t^2, 2*t, 1+t^2])
AC = AB(t=u)
BC = vector([-1, 0, 1])
A = cp(AB, AC)
B = cp(AB, BC)
C = cp(AC, BC)
O = vector([0, 0, 1])
ortho = diagonal_matrix([1, 1, 0])
ABC = matrix([A, B, C]).det()
medians = [cp(A, mp(B, C)), cp(B, mp(C, A)), cp(C, mp(A, B))]
bisectors = [cp(A, O), cp(B, O), cp(C, O)]
altitudes = [cp(A, ortho*BC), cp(B, ortho*AC), cp(C, ortho*AB)]
triplets = [_ for _ in itertools.product(medians, bisectors, altitudes) if matrix(_).det()]
def dehom(v):
    return v[:-1]/v[-1]
def distsq(a, b):
    d = a - b
    return d*d
def equilat(ab, bc, ac):
    a = cp(ab, ac)
    b = cp(ab, bc)
    c = cp(ac, bc)
    abc = matrix([a, b, c]).det()
    da = dehom(a)
    db = dehom(b)
    dc = dehom(c)
    dab = distsq(da, db)
    dbc = distsq(db, dc)
    dac = distsq(da, dc)
    eq1 = (dab - dbc).numerator()
    eq2 = (dab - dac).numerator()
    eq3 = (dac - dbc).numerator()
    g = gcd([eq1, eq2, eq3])
    if g == 0:
        return
    if g != 1:
        for f, p in g.factor(False):
            i = PR.ideal([f])
            assert abc in i # Equilateral triangle would become degenerate
    i = PR.ideal([eq1 // g, eq2 // g, eq3 // g])
    dim = i.dimension()
    assert dim == 0 # Finite set of solutions
    for s in i.variety(AA):
        if not abc(t=s[t], u=s[u]):
            # Equilateral triangle would be degenerate
            continue
        if not ABC(t=s[t], u=s[u]):
            # Original triangle would be degenerate
            continue
        names = dict((str(k), v) for k, v in s.items())
        pts1 = [A, B, C, a, b, c]
        pts2 = [_(**names) for _ in pts1]
        if not all(_[-1] for _ in pts2):
            # Exclude points at infinity
            continue
        pts3 = [dehom(_) for _ in pts2]
        assert dab(**names) == dbc(**names) == dac(**names)
        A3, B3, C3 = pts3[:3]
        pts4 = pts3[3:]
        dAB = (A3 - B3).norm()
        dAC = (A3 - C3).norm()
        dBC = (B3 - C3).norm()
        if dAB >= dAC >= dBC:
            yield transform(A3, B3, C3, pts4)
        if dAC >= dAB >= dBC:
            yield transform(A3, C3, B3, pts4)
        if dAB >= dBC >= dAC:
            yield transform(B3, A3, C3, pts4)
        if dBC >= dAC >= dAB:
            yield transform(C3, B3, A3, pts4)
        if dBC >= dAB >= dAC:
            yield transform(B3, C3, A3, pts4)
        if dAC >= dBC >= dAB:
            yield transform(C3, A3, B3, pts4)
def transform(A, B, C, abc):
    B2x, B2y = (B - A).list()
    M1 = matrix([[B2x, -B2y], [B2y, B2x]])
    M2 = M1.inverse()
    R = A.parent().base_ring()
    assert M2 * (B - A) == vector([1, 0])
    C2 = M2 * (C - A)
    if C2[1] < 0:
        M2 = diagonal_matrix([1, -1])*M2
        C2 = M2 * (C - A)
    abc2 = sorted(M2 * (_ - A) for _ in abc)
    pts = [vector(R, [0,0]), vector(R, [1,0]), C2] + abc2
    for p in pts:
        p.set_immutable()
    return tuple(pts)
unique = sorted(set(j for i in triplets for j in equilat(*i)))
def svg(f, A, B, C, a, b, c):
    f.write("""<?xml version="1.0" standalone="no"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<svg width="1000px" height="400px" viewBox="-0.5, -0.7, 2.0, 0.8"
stroke-width="0.008"
version="1.1" xmlns="http://www.w3.org/2000/svg">
""")
    dAB = (A - B).norm()
    dAC = (A - C).norm()
    dBC = (B - C).norm()
    l1 = dAC / dBC
    l2 = dAB / dBC
    f.write("""<descr>
Answer to https://math.stackexchange.com/q/3028611/35416
with edge length ratio 1 : {} : {}
</descr>
""".format(l1.radical_expression(), l2.radical_expression()))
    f.write('<g stroke="blue">\n')
    for p1, p2 in [(A, B), (B, C), (C, A)]:
        p3 = list(map(float, p1 + 10*(p1 - p2)))
        p4 = list(map(float, p2 + 10*(p2 - p1)))
        f.write('<line x1="{}" y1="{}" x2="{}" y2="{}"/>\n'.format(*(p3+p4)))
    f.write('</g>\n')
    f.write('<g stroke="red">\n')
    for p1, p2 in [(a, b), (b, c), (c, a)]:
        p3 = list(map(float, p1 + 10*(p1 - p2)))
        p4 = list(map(float, p2 + 10*(p2 - p1)))
        f.write('<line x1="{}" y1="{}" x2="{}" y2="{}"/>\n'.format(*(p3+p4)))
    f.write('</g>\n')
    f.write('<g stroke="black" fill="blue">\n')
    for p1 in [A, B, C]:
        p2 = list(map(float, p1))
        f.write('<circle cx="{}" cy="{}" r="0.025"/>\n'.format(*(p2)))
    f.write('</g>\n')
    f.write('<g stroke="black" fill="red">\n')
    for p1 in [a, b, c]:
        p2 = list(map(float, p1))
        f.write('<circle cx="{}" cy="{}" r="0.025"/>\n'.format(*(p2)))
    f.write('</g>\n')
    f.write('</svg>\n')
flip = diagonal_matrix([1, -1])
for i, s in enumerate(unique):
    with open('MX3028611{}.svg'.format(string.ascii_letters[i]), 'w') as f:  # string.letters in the Python 2 original
        svg(f, *(flip*_ for _ in s))
You could also consider a slightly different question: Draw all the medians, internal and external bisectors and altitudes for a total of 12 red lines. Can you now find an equilateral triangle with only red edges? This yields a family of situations where you have a $60°$ angle with an angle bisector dividing it, and the heights through the opposite vertices.
This family also includes situations where the bisector becomes the external bisector of a $120°$ vertex, thus still dividing a $60°$ angle. You can obtain one from the other by moving the edge opposite the $60°$ angle in any way you like.
Apart from this one-parameter family there are six more sporadic solutions:
Edge length ratio $1:1:1$
Angles $60°, 60°, 60°$
Edge length ratio $1:2:\sqrt7$ (which we already saw above)
Angles ca. $19.1°, 40.9°, 120°$
and
Edge length ratio $2 : \sqrt5 + 3 : 2\sqrt5 + 2$
Angles ca. $15.5°, 44.5°, 120°$
Edge length ratio $1:\sqrt7:\sqrt7$
Angles ca. $21.8°, 79.1°, 79.1°$
Edge length ratio $1 : \sqrt[3]{\frac19\sqrt{57} + 1} + \frac{2}{3\sqrt[3]{\frac19\sqrt{57} + 1}} + 1 : \frac{\sqrt[3]{6\sqrt{57} + 46}}3 + \frac4{3\sqrt[3]{6\sqrt{57} + 46}} + \frac43$
(I wonder whether @Blue will come up with something nicer for this here as well…)
Angles ca. $18.2°, 60°, 101.8°$ |
Show that orthogonal matrices are diagonalizable | As people have indicated, you could simply apply the spectral theorem. Here I run through a specialized argument to the orthogonal case:
Since $Q$ is orthogonal we have $\langle Qv, Qw \rangle = (Qv)^*Qw = v^* Q^T Q w = \langle v, w \rangle$.
Given any eigenvector $v$ with eigenvalue $\lambda$, if we have some vector $w$ orthogonal to $v$ then we have $\lambda \langle v, Qw \rangle = \langle Qv, Qw \rangle = \langle v, w \rangle = 0$, so $Q$ maps $v^\perp$ into itself. We can induct on the dimension of our space to show $Q$ acts diagonalizably on $v^\perp$, so it acts diagonalizably on $v \oplus v^\perp$.
We can in fact say more:
Note that if $\lambda$ is an eigenvalue of $Q$ with eigenvector $v$, then we have $|\lambda|^2\|v\|^2 = \langle \lambda v, \lambda v \rangle = \langle Qv, Qv \rangle = \|v\|^2$. We conclude all the eigenvalues have norm $1$.
If $v,w$ are eigenvectors with different eigenvalues then we have $\langle v, w \rangle = \langle Qv, Qw \rangle = \langle \lambda v, \mu w \rangle = \lambda \mu^* \langle v, w \rangle$. Thus if $\lambda \mu^* \neq 1$ then $v$ and $w$ are orthogonal.
Combining these one can show that $Q = PRP^{-1}$ where $P$ is an orthogonal matrix and $R$ is a block diagonal matrix with $1,-1$ and $2 \times 2$ rotation matrices down the diagonal. |
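A small numerical illustration with a rotation (orthogonal) matrix (a NumPy sketch, not part of the proof):

import numpy as np

theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
vals, vecs = np.linalg.eig(Q)
print(np.abs(vals))                                                # [1. 1.]: eigenvalues of modulus 1
print(np.allclose(vecs @ np.diag(vals) @ np.linalg.inv(vecs), Q))  # True: Q diagonalizes over C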
When is $\frac {t^a - 1} {t^b -1} $ an integer? | Fix $b$ and $t$ and consider the question of which $a$ are such that $\frac{t^a - 1}{t^b - 1}$ is an integer; i.e., such that $t^a \equiv 1 \pmod{t^b - 1}$. Clearly, the order of $t$ within the multiplicative group of (invertible) integers modulo $t^b - 1$ is $b$ (as every smaller positive power of $t$ is strictly between $1$ and $(t^b - 1) + 1$). Thus, the answer to our question is those $a$ which are multiples of $b$, just as you conjectured. |
Does there exist a harmonic map from S^2 to 3d hyperbolic space | We can prove that the only such maps are the constants using the Bochner technique.
Consider the Bochner Identity for harmonic maps:
If $f:(M,g)\to (N,h)$ is harmonic then $$-\frac12\Delta|Df|^2 = |\nabla Df|^2 + \sum_i h(Df(\text{Rc}^M(e_i)), Df(e_i)) - \sum_{i,j}\text{Rm}^N(Df(e_i),Df(e_j),Df(e_i),Df(e_j))$$ for $e_i$ an orthonormal frame for $g$.
If we had a harmonic map $f:\Bbb S^2 \to \Bbb H^3$, then the second term on the right becomes $|Df|^2 \ge 0$ and the third is a (negative multiple of) a sectional curvature of $\Bbb H^3$. Integrating gives zero on the LHS (Stokes Theorem) and the sum of three non-negative integrals on the RHS; so the three quantities on the RHS must be identically zero. In particular $|Df|^2 = 0$ and thus $f$ is constant. |
How to find a matrix of a linear transformation from $P_2$ to $P_3$ | $$F(1)=x \\
F(x)= \frac{1}{2} x^2 \\
F(x^2) = \frac{1}{3} x^3$$
So the matrix will be:
$$ \begin{bmatrix}
0 & 0 & 0 \\
1 & 0 & 0 \\
0 & \frac{1}{2} & 0 \\
0 & 0 & \frac{1}{3}
\end{bmatrix}$$ |
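A quick check, assuming $F$ is the integration map implied by the values above, in NumPy (monomial bases of $P_2$ and $P_3$):

import numpy as np

M = np.array([[0, 0,   0  ],
              [1, 0,   0  ],
              [0, 1/2, 0  ],
              [0, 0,   1/3]])
p = np.array([5, -4, 3])   # coefficients of 5 - 4x + 3x^2
print(M @ p)               # [0, 5, -2, 1], i.e. 5x - 2x^2 + x^3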
How to find distance from a point to a set of points whose value is smallest? | For these particular points:
It is clear that $(4,4)$ is exactly $3$ units away from $(1,4)$. And since the $y$ value of $(2,0)$ is already $4$ units away from the $y$ value of $(1,4)$, then that's the minimum distance between them: any difference in $x$-value can only increase that distance further. Similarly, $(8,1)$ already has a difference of $7$ in $x$-value from $(1,4)$, so that distance is at least $7$. So, we can very quickly tell $(4,4)$ is the closest, without having to calculate all the exact distances.
In general, though, suppose your fixed point is $(x,y)$, and you have a bunch of other points $(x_i,y_i)$. Then for each of those other points you can calculate $dx_i^2+dy_i^2$ with $dx_i$ and $dy_i$ being the absolute difference in $x$ value and $y$ value respectively (i.e. $dx_i = |x-x_i|$ and $dy_i = |y-y_i|$) The point with the smallest value for this will be closest. There is no need to take the square root of this in order to calculate the actual distance, since if $x$ < $y$, then $\sqrt{x} < \sqrt{y}$. So at least you can avoid that step. Also, if you ever find that $dx_i \leq dx_j$ and $dy_i \leq dy_j$, then you immediately know that point $j$ is not any closer than point $i$.
Unfortunately, what does not work is to simply add $dx$ and $dy$ for each point and compare those. For example, if you have $dx_i=10$ and $dy_i=0$, while $dx_j=6$ and $dy_j=6$, then $dx_i+dy_i < dx_j+dy_j$, but $dx_i^2+dy_i^2 > dx_j^2+dy_j^2$ |
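The whole comparison takes a few lines of Python, using the points discussed above:

def closest(point, others):
    x, y = point
    # rank by dx^2 + dy^2; no square root needed for comparing distances
    return min(others, key=lambda p: (p[0] - x)**2 + (p[1] - y)**2)

print(closest((1, 4), [(4, 4), (2, 0), (8, 1)]))   # (4, 4)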
Prove $(a,b,c)=((a,b),(a,c))$ | It's much simpler using the identities you already know (associativity, commutativity, etc)
$$((a,b),(a,c))\, =\, ((a,b),a,c)\, =\, (a,b,a,c)\, =\, (a,b,c)$$
Remark $\ $ By induction, in the same way, one can always "flatten" such gcd expressions. |
Is there any way to find an explicit formula for the adjoint of a linear transformation? | If the vector space is finite-dimensional and $A$ is the matrix representation of $T$ with respect to the standard basis, then the matrix of representation of $T^*$ is given by conjugating the entries of $A$, then taking the transpose. This is generally denoted $A^*$ or $A^H$. |
The inner product of the Cartesian Product space | One can check directly that setting
$$
\langle (a_1, b_1), (a_2, b_2) \rangle _{A \times B}= \langle a_1 , a_2 \rangle_A + \langle b_1 ,b_2 \rangle_B
$$
defines an inner product on $A\times B$. Linearity and switching properties are easy, and also that
$$
\langle (a,b),(a,b)\rangle_{A\times B}\ge0
$$
Now, suppose that $\langle (a,b),(a,b)\rangle_{A\times B}=0$. Then $\langle a,a\rangle_A+\langle b,b\rangle_B=0$, from which $a=0$ and $b=0$ follows.
The projection maps $p_A\colon A\times B\to A$ and $p_B\colon A\times B\to B$ are bounded (that is, continuous), and it's readily shown that $A\times B$ is also a coproduct, in the sense that if we are given a Hilbert space $C$ and bounded linear maps $f_A\colon A\to C$, $f_B\colon B\to C$, there is a unique bounded linear map $g\colon A\times B\to C$ such that $f_A=g\circ i_A$ and $f_B=g\circ i_B$, where $i_A(a)=(a,0)$ and $i_B(b)=(0,b)$ are the inclusions: just define
$$
g(a,b)=f_A(a)+f_B(b)
$$
and check boundedness. |
greedy algorithm on set cover | Assuming that Suresh is guessing correctly,
There are simple examples involving just three sets, where only two of the three sets are needed to get a cover, but the greedy algorithm uses all three.
EDIT: Just to incorporate in the answer what's already in the comments, one simple example consists of the three sets, $[0,1/2]$, $[1/2,1]$, and $[1/5,4/5]$. |
Proof verification: $g\circ f$ injective $\implies$ $f$ injective | No, you cannot deduce from $g\bigl(f(x)\bigr)=z$ that $f(x)=g^{-1}(z)$ since you have no reason to assume that $g$ has an inverse.
You can prove the statement that you want to prove as follows: if $f(x)=f(y)$, then $g\bigl(f(x)\bigr)=g\bigl(f(y)\bigr)$, and therefore, since we are assuming that $g\circ f$ is injective, $x=y$. |
How can a countably infinite product space be discrete? | You are almost done. You write that a basis of the topology is made of the sets
$$
U = O_1 \times O_2 \times ... \times O_n \times X_{n+1} \times X_{n+2} \times ...
$$
Now you have to remember your hypothesis:
and all but a finite number of the $X_i, i \in \Bbb N$ are singleton sets.
So, there exists $N$ such that, for $i\geq N$, $X_i=\{a_i\}$. Therefore, a basis of the product topology is given by the sets
$$
U = O_1 \times O_2 \times ... \times O_{N-1} \times \{a_{N}\} \times \{a_{N+1}\} \times ...
$$
where $O_1,\ldots, O_{N-1}$ are arbitrary subsets of $X_1,\ldots, X_{N-1}$. Making unions of these elements you obtain arbitrary subsets of: $\;\prod^{\infty}_{i=1} X_i=\left(\prod^{N-1}_{i=1}X_i\right)\times \left(\prod^{\infty}_{i=N}\{a_i\}\right)$. |
How can I prove these two questions without using the following theorem? | This answer is just for part a. Lyapunov's condition is that $$\lim_{n\rightarrow\infty}\frac{\sum_{i=1}^n\mathbb E\left(|X_i-\mu_i|^{2+\delta}\right)}{\left(\sum_{i=1}^n\sigma_i^2\right)^{(2+\delta)/2}}=0$$
for some $\delta>0$. For this problem we have
$$\begin{split}\frac{\sum_{i=1}^n\mathbb E\left(|X_i-\mu_i|^{2+\delta}\right)}{\left(\sum_{i=1}^n\sigma_i^2\right)^{(2+\delta)/2}}&=
\frac{\sum_{i=1}^ni^{\theta(2+\delta)}}{\left(\sum_{i=1}^n i^{2\theta}\right)^{(2+\delta)/2}}\\
&=\frac{\sum_{i=1}^ni^{\theta(2+\delta)}}{\left(\sum_{i=1}^n i^{2\theta}\right)^{(2+\delta)/2}}\cdot \frac{\frac{1}{n^{\theta(2+\delta)}}}{\frac 1{n^{\theta(2+\delta)}}}\cdot\frac{\frac 1 {n^{(2+\delta)/2}}}{\frac 1 {n^{(2+\delta)/2}}}\\
&=\frac{\left[\sum_{i=1}^n\left(\frac i n\right)^{\theta(2+\delta)}\frac 1n\right]\cdot \frac 1 {n^{(2+\delta)/2-1}}}{\left[\sum_{i=1}^n\left(\frac i n\right)^{2\theta}\frac 1 n\right]^{(2+\delta)/2}}\\
\end{split}$$
The two bracketed Riemann sums converge to $\int_0^1x^{\theta(2+\delta)}\,dx=\frac{1}{\theta(2+\delta)+1}$ and $\int_0^1x^{2\theta}\,dx=\frac{1}{2\theta+1}$, both finite provided $\theta>-\frac{1}{2+\delta}$, while the remaining factor $\frac 1 {n^{(2+\delta)/2-1}}=\frac 1{n^{\delta/2}}$ converges to $0$ since $\delta>0$. Hence the whole expression tends to $0$ in the limit. Since $\delta>0$ can be chosen arbitrarily small, the Lyapunov condition for the CLT holds whenever $\theta>-\frac 12$.
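As a quick numerical sanity check (my own addition, with $\theta$ and $\delta$ chosen arbitrarily), one can watch the Lyapunov ratio decay like $n^{-\delta/2}$:

```python
def lyapunov_ratio(n, theta=0.5, delta=1.0):
    # E|X_i - mu_i|^(2+delta) = i^(theta(2+delta)) and sigma_i^2 = i^(2 theta)
    num = sum(i ** (theta * (2 + delta)) for i in range(1, n + 1))
    den = sum(i ** (2 * theta) for i in range(1, n + 1)) ** ((2 + delta) / 2)
    return num / den

for n in (10, 100, 1000, 10000):
    print(n, lyapunov_ratio(n))  # decays roughly like n^(-1/2) for delta = 1
```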
Is Positive Semi-Definiteness of a Matrix a loose measure of Independence? | having PSD of a covariance matrix imply that the random variables are linearly independent
No, all covariance matrices are PSD.
So does having an eigen value greater then 0 imply that there is a correlation and if so does it make another loose measure of independence?
If you have all eigenvalues greater than zero, then the covariance matrix will be PD. And you will be able to use a whitening transformation to transform your original random vector $X$ into a $Y$ that is uncorrelated. Just having uncorrelatedness is not enough for you to claim they are independent, in general.
But, if you are dealing with a random vector $X$ with a multivariate normal distribution, you can conclude that after the transformation the components of $Y$ are not only uncorrelated, but also independent. |
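If it helps, here is a small NumPy sketch (my own code, with a made-up covariance) of one common whitening transformation, the eigendecomposition-based ZCA form:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.multivariate_normal([0, 0], [[2.0, 1.2], [1.2, 1.0]], size=100_000)

Sigma = np.cov(X, rowvar=False)            # sample covariance (PD here)
vals, vecs = np.linalg.eigh(Sigma)         # Sigma = V diag(vals) V^T
W = vecs @ np.diag(vals ** -0.5) @ vecs.T  # whitening matrix Sigma^(-1/2)
Y = X @ W.T

print(np.cov(Y, rowvar=False).round(3))    # ~ identity: components uncorrelated
```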
Random Walk Martingale Proof | "How do we go about proving whether a certain random variable is a martingale?"
Call your random variable $X = (X_n)_n$.
You need to show that $E(X_{n+1} \mid X_1, ..., X_n) = X_n$. Note that this is equivalent to $E( X_{n+1} - X_n \mid X_1, ..., X_n ) = 0$.
I would suggest looking at $X_{n+1} - X_n$, and expanding some squares. Also, you can replace the conditioning on $\sigma(X_1, ..., X_n)$ by $\sigma(S_1, ..., S_n)$, since by knowing $X_1, ..., X_n$ you can determine $S_1, ..., S_n$.
Now you just need to calculate some things like $E(S_{n+1}^2 \mid S_n)$ and $E(S_{n+1} \mid S_n)$. |
Simple combinatorics question - caught off guard! | Combinatorial proof: From a set of $2n$ chocolates, choose $n$ to eat. These choices come in pairs -- the $n$ I didn't choose I could have equally well chosen, and no choice is paired with itself, since an $n$-element set is never equal to its complement. Hence, all ${2n\choose n}$ "menus" can be paired off, so there must be an even number of them.
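A quick computational check of the parity (my own addition):

```python
from math import comb
print(all(comb(2 * n, n) % 2 == 0 for n in range(1, 30)))  # True
```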
Olympic Badminton, or How to Design a Tournament | There are no "Arrow-like" impossibility results regarding tournament design, but there are lots of open problems, especially in the incomplete-information case.
Kay Konrad's book on contests is a good general reference.
As tournaments can be modeled as all-pay auctions, the literature on optimal auction design may also be relevant, see Vijay Krishna's book.
I know the above economics literature well, but I am not familiar with the computer science literature, which may also be relevant: see this thesis https://stacks.stanford.edu/file/druid:qk299yx6689/TV-thesis-final-augmented.pdf
Why this order is well-defined? | The answer to your first question is explained right before proposition 7 of the preceding section.
For your second question, $f/g\in \mathcal O_p(X)$ means that $\operatorname{ord}_P(f/g)=\operatorname{ord}_P(f)-\operatorname{ord}_P(g)\geq 0$. Likewise, $\operatorname{ord}_P(g)-\operatorname{ord}_P(f)\geq 0$. |
Find $\tan^{-1} (i\sqrt{2})$. | Let $u=e^{iz}$ just as you did.
$$\begin{array}{rcl}
u^2 &=& \dfrac{1-\sqrt{2}}{1+\sqrt{2}} \\
u^2 &=& \dfrac{(1-\sqrt{2})^2}{(1+\sqrt{2})(1-\sqrt{2})} \\
u^2 &=& \dfrac{(1-\sqrt{2})^2}{-1} \\
e^{2iz} &=& -(1-\sqrt{2})^2 \\
e^{2iz} &=& (1-\sqrt{2})^2 e^{i\pi} \\
2iz &=& \ln[(1-\sqrt{2})^2e^{i\pi}] + 2ni\pi \\
2iz &=& 2\ln[\sqrt{2}-1] + i\pi + 2ni\pi \\
z &=& -i\ln[\sqrt{2}-1] + \dfrac\pi2 + n\pi \\
\end{array}$$ |
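One can sanity-check the principal branch ($n=0$) numerically (my own addition):

```python
import cmath, math

z = -1j * math.log(math.sqrt(2) - 1) + math.pi / 2  # the n = 0 solution
print(cmath.tan(z))  # ~ 1.41421356j, i.e. i*sqrt(2)
```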
Integral of binomial coefficients | There is indeed a relation with the Bernoulli numbers of the second kind. If I'm right, this is the result:
Theorem (generating function of $f_n(x)$):
$$\sum_{n=0}^{+\infty} f_n(x)z^n = \dfrac{(1+z)^{x-1}-1}{\log (1+z)}$$
Corollary:
$$f_n(x) = \sum_{k=0}^n {{x-1}\choose{k+1}} \dfrac{b_{n-k}}{(n-k)!}$$
I'll present here mainly the "formal" steps to get to this result; I'll skip the details of convergence, differentiation under the integral sign, and so on. Start with:
$$f_n(x)=\dfrac{1}{n!} \int_1^x \frac{\Gamma(t)}{\Gamma(t-n)} \text{d}t=\dfrac{1}{n!} \int_1^x (t-1)(t-2) \cdots (t-n) \text{d}t$$
Now introduce $g(t,y)=y^{t-1}$. We have $\dfrac{\partial^n g}{\partial y^n}(t,y)=(t-1)(t-2) \cdots (t-n)y^{t-n-1}$. Thus :
$$\begin{array}{rcl}
f_n(x,y) & = & \displaystyle \frac{1}{n!} \int_1^x (t-1)(t-2) \cdots (t-n)y^{t-n-1} \text{d}t\\
& = & \displaystyle \frac{1}{n!} \dfrac{\partial^n \;}{\partial y^n} \left( \int_1^x y^{t-1} \text{d}t\right) \\
& = & \displaystyle \frac{1}{n!} \dfrac{\partial^n \;}{\partial y^n} \left( \frac{y^{x-1}-1}{\log(y)}\right)
\end{array}$$
We have $f_n(x)=f_n(x,1)$. Thus, modulo a justification of the Taylor expansion, we get:
$$\boxed{\sum_{n=0}^{+\infty} f_n(x)z^n = \sum_{n=0}^{+\infty} \displaystyle \left. \frac{1}{n!} \dfrac{\partial^n \;}{\partial y^n} \left( \frac{y^{x-1}-1}{\log(y)}\right) \right|_{y=1} z^n = \dfrac{(1+z)^{x-1}-1}{\log (1+z)}}$$
This proves the theorem. The corollary is a direct consequence of the following series expansions:
$$\displaystyle \dfrac{(1+z)^{x-1}-1}{z} = \sum_{n=0}^{+\infty} {{x-1}\choose{n+1}}z^n \; \; \; \; \; \; \; \text{and} \; \; \; \; \; \; \; \displaystyle \frac{z}{\log(1+z)} = \sum_{n=0}^{+\infty} \frac{b_n}{n!}z^n$$
$f_n(x)$ is then given by the Cauchy product :
$$\boxed{f_n(x) = \sum_{k=0}^n {{x-1}\choose{k+1}} \dfrac{b_{n-k}}{(n-k)!}}$$
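A small SymPy check of the corollary for the first few $n$ (my own code; the coefficients $b_n/n!$ are read off from the series of $z/\log(1+z)$ above, so no convention for the Bernoulli numbers of the second kind is hardcoded):

```python
import sympy as sp

x, t, z = sp.symbols('x t z')
N = 4
ser = sp.series(z / sp.log(1 + z), z, 0, N).removeO()
g = [ser.coeff(z, n) for n in range(N)]  # b_n/n! : 1, 1/2, -1/12, 1/24

def f_direct(n):
    # f_n(x) = (1/n!) * Integral_1^x (t-1)(t-2)...(t-n) dt
    poly = sp.Mul(*[t - j for j in range(1, n + 1)])
    return sp.integrate(poly, (t, 1, x)) / sp.factorial(n)

def f_formula(n):
    return sum(sp.binomial(x - 1, k + 1) * g[n - k] for k in range(n + 1))

for n in range(N):
    assert sp.simplify(f_direct(n) - sp.expand_func(f_formula(n))) == 0
print("corollary verified for n = 0..3")
```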
I'm not sure we can go any further than this.
Out of curiosity, how did you come across this problem?
Existence of a prime in between two integers, of which the larger integer is divisible by all prime divisors of the smaller integer. | Let $a=8$, $b=10$. Then the only prime divisor of $a$ is $2$, which also divides $b$, but the only integer strictly between them, $9$, is not prime.
Reduction of Order being done in two different ways? | Note: There is a typo somewhere in the first problem. This is very clear given that the solution they give is not a solution to the homogeneous DEQ.
I am going to write out the details for the second question. There is no guesswork in these problems.
We are asked to use Reduction of Order to solve:
$$y'' - 3y' + 2y = 5e^{3x}$$
We can find the general solution to the homogeneous equation $y'' - 3y' + 2y = 0$:
$$y_h(x) = c_1 e^x + c_2 e^{2x}$$
We are free to choose either of those as our solution, so let's choose, as the author did, $y_1 = e^x$.
Using Reduction of Order, we have:
$$y = y_1 u = e^x u \rightarrow y' = e^x u + e^x u' \rightarrow y'' = e^x u + 2 e^x u' + e^x u''$$
Substituting these into the original equation yields:
$$y'' - 3y'+2y = e^x u'' -e^x u' = 5e^{3x}$$
Dividing by $e^x$ yields:
$$u''-u' = 5 e^{2x}$$
Now, we make the substitution $w = u'$, yielding (exactly what the author has):
$$w' - w = 5e^{2x}$$
Solving for $w(x)$ yields:
$$w(x) = c_1 e^x+5 e^{2x}$$
We know $u' = w$, hence
$$u' = c_1e^x+5 e^{2x} \rightarrow u(x) = c_1 + c_2 e^x + \dfrac{5}{2}e^{2x}$$
Lastly, we know that $y = y_1 u$, hence
$$y(x) = c_1e^x + c_2e^{2x} + \dfrac{5}{2}e^{3x}$$
If you use something like Undetermined Coefficients, you will see it matches this result exactly.
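For what it's worth, a one-line SymPy check (my addition) reproduces this:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
print(sp.dsolve(y(x).diff(x, 2) - 3 * y(x).diff(x) + 2 * y(x) - 5 * sp.exp(3 * x), y(x)))
# y(x) = C1*exp(x) + C2*exp(2*x) + 5*exp(3*x)/2
```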
Because of the error in the first problem, I cannot comment on it, but there is definitely a typo and that is throwing results off (I may be able to reverse engineer something, but it is only guesswork). Maybe you can state which book this is from and we can see if there are multiple versions. |
Which numbers are square modulo 9? | It is better to state it using $0$ instead of $9$, i.e., $\forall{k}:k^2\equiv0,1,4,7\pmod9$.
There you go:
$k\equiv0\pmod9 \implies k^2\equiv0^2\equiv0\pmod9$
$k\equiv1\pmod9 \implies k^2\equiv1^2\equiv1\pmod9$
$k\equiv2\pmod9 \implies k^2\equiv2^2\equiv4\pmod9$
$k\equiv3\pmod9 \implies k^2\equiv3^2\equiv9\equiv0\pmod9$
$k\equiv4\pmod9 \implies k^2\equiv4^2\equiv16\equiv7\pmod9$
$k\equiv5\pmod9 \implies k^2\equiv5^2\equiv25\equiv7\pmod9$
$k\equiv6\pmod9 \implies k^2\equiv6^2\equiv36\equiv0\pmod9$
$k\equiv7\pmod9 \implies k^2\equiv7^2\equiv49\equiv4\pmod9$
$k\equiv8\pmod9 \implies k^2\equiv8^2\equiv64\equiv1\pmod9$ |
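Or, in one line of Python (my addition):

```python
print(sorted({k * k % 9 for k in range(9)}))  # [0, 1, 4, 7]
```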
$\liminf, \limsup$, Measure Theory, show: $\lim \int n \ln(1+(f/n)^{1/2})\mathrm{d}\mu=\infty$ | You can check that for $a>0$ the function
$$
\varphi(t)=t\log\left(1+\left(\frac{a}{t}\right)^{1/2}\right)
$$
is strictly increasing on $[1,+\infty)$ and tends to infinity. Unfortunately, you may have to go as far as the second derivative to prove this. Thus for each $x\in X$ the sequence $\{f_n(x):n\in\mathbb{N}\}$ is strictly increasing to infinity, so
$$
\liminf\limits_{n\to\infty} f_n(x)=\lim\limits_{n\to\infty} f_n(x)=+\infty
$$
The rest is clear. |
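A quick numerical check of the monotonicity claim (my own addition, with an arbitrary $a>0$):

```python
import numpy as np

a = 3.0                                    # any a > 0 will do
t = np.linspace(1, 1e6, 100_000)
phi = t * np.log1p(np.sqrt(a / t))
print(np.all(np.diff(phi) > 0), phi[-1])   # increasing, and growing without bound
```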
Finding the middle of n points | You can always do this when $n$ is odd, but if $n$ is even you can only do this if the respective centroids (or sums) of the even-indexed and the odd-indexed $A_i$ points coincide.
Suppose we have $P_1$ defined somehow. Then
\begin{align}
P_2 &= 2A_1-P_1 \\
P_3 &= 2A_2-P_2 = 2A_2 - 2A_1 + P_1 \\
P_4 &= 2A_3-P_3 = 2A_3 - 2A_2 + 2A_1 - P_1 \\
\end{align}
and consider that we need $P_{n+1}$ as defined by continuation to be equal to $P_1$.
When $n$ is odd, we have $P_1 = 2A_n-2A_{n-1}+ 2A_{n-2}-\ldots-2A_2+2A_1 - P_1$
and so we see that we need $P_1 = A_n - A_{n-1} + A_{n-2}-\ldots- A_2+ A_1$
When $n$ is even , we have $P_1 = 2A_n-2A_{n-1}+ 2A_{n-2}-\ldots+2A_2-2A_1 + P_1$,
which is only possible if $ 2A_n-2A_{n-1}+ 2A_{n-2}-\ldots+2A_2-2A_1 = 0$
or as stated $ A_n+ A_{n-2}+\ldots+A_2 = A_{n-1}+ A_{n-3}+\ldots+ A_1$
(and in this case, the location of $P_1$ is arbitrary).
Notice this result holds regardless of the dimension of space the $A_i$ points are embedded in. |
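Here is a short NumPy sketch (my own code, in 2D with random midpoints) that computes $P_1$ by the alternating-sum formula for odd $n$ and verifies that the chain of reflections closes up:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5                                       # odd
A = rng.random((n, 2))                      # the given midpoints A_1, ..., A_n

signs = np.array([(-1) ** i for i in range(n)])  # +1, -1, ... for A_1, A_2, ...
P1 = (signs[:, None] * A).sum(axis=0)       # P_1 = A_1 - A_2 + ... + A_n

P = P1
for a in A:                                 # P_{i+1} = 2 A_i - P_i
    P = 2 * a - P
print(np.allclose(P, P1))                   # True: P_{n+1} = P_1
```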
Can I conclude that $x=f(x)$ from the assumption that $f(x)=f(f(x))$? | Just write it down from definitions. If $f$ is injective, then this means, by definition, that
$$\forall x_1, x_2\in D_f: f(x_1)=f(x_2)\implies x_1=x_2.$$
Now, set $x_1=x$ and $x_2=f(x)$. What does the above expression change into? |
How to find the closed form of $\int_{0}^{\infty}{(e^{-x}+x-1)^2\over x(e^{x\over n}-1)}\mathrm dx$ | Hint by user reuns:
$$\begin{align}
\int_0^{\infty } \frac{(\exp (-x)+x-1)^2}{x \left(\exp \left(\frac{x}{n}\right)-1\right)} \, dx & =\int_0^{\infty } \frac{(\exp
(-x)+x-1)^2 \sum _{k=1}^{\infty } \exp \left(-\frac{k x}{n}\right)}{x} \, dx \\
&=\sum _{k=1}^{\infty } \int_0^{\infty }
\frac{(\exp (-x)+x-1)^2 \exp \left(-\frac{k x}{n}\right)}{x} \, dx \\
&=\sum _{k=1}^{\infty } \left(\frac{n^2 (-k+n)}{k^2 (k+n)}+2
\ln (k+n)-\ln (k (k+2 n))\right) \\
&=\sum _{k=1}^{\infty } \left(\frac{n^2 (-k+n)}{k^2 (k+n)}+\ln \left(\frac{(k+n)^2}{k (k+2
n)}\right)\right) \\
&=\sum _{k=1}^{\infty } \frac{n^2 (-k+n)}{k^2 (k+n)}+\sum _{k=1}^{\infty } \ln \left(\frac{(k+n)^2}{k (k+2
n)}\right) \\
&=\sum _{k=1}^{\infty } \frac{n^2 (-k+n)}{k^2 (k+n)}+\ln \left(\prod _{k=1}^{\infty } \frac{(k+n)^2}{k (k+2
n)}\right) \\
&=-2 \gamma n+\frac{n^2 \pi ^2}{6}-2 n \, \psi (1+n)+\ln \left(\frac{4^n \,\Gamma \left(\frac{1}{2}+n\right)}{\sqrt{\pi
} \,\Gamma (1+n)}\right) \\
&=-2 n H_n+\frac{n^2 \pi ^2}{6}+\ln \left(\binom{2 n}{n}\right)
\end{align}$$
where
$\gamma$ is Euler’s constant $=0.577216...$
$\psi (1+n) $ is the digamma function
$H_n$ is the $n$th harmonic number |
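A numerical spot-check of the final identity (my own addition, using SciPy; the integrand extends continuously by $0$ at $x=0$):

```python
import numpy as np
from math import comb, log, pi
from scipy.integrate import quad

def check(n):
    f = lambda x: (np.exp(-x) + x - 1) ** 2 / (x * (np.exp(x / n) - 1))
    val, _ = quad(f, 0, np.inf, limit=200)
    H = sum(1 / k for k in range(1, n + 1))
    return val, -2 * n * H + n ** 2 * pi ** 2 / 6 + log(comb(2 * n, n))

print(check(2))  # the two numbers should agree to quadrature accuracy
```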
Logic with increasing monte carlo possible output | Your $MC_3$ is correct with a higher probability than what you calculated. $MC_3$ is correct whenever at least two of the $MC$ trials return correct (as opposed to exactly two). In your notation, we want $R \geq 2$, not $R=2$. The probability that $MC_3$ is correct is $$P(MC_3) = \binom{3}{2}P(MC)^2(1-P(MC))+ P(MC)^3$$
$$P(MC_3) = 3\cdot \left(\frac{3}{4}\right)^2\cdot \frac{1}{4} + \left(\frac{3}{4}\right)^3$$
$$P(MC_3) = \frac{27}{64} + \frac{27}{64} = \frac{27}{32} = 84.375 \%$$
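A quick simulation (my addition, treating each $MC$ trial as an independent Bernoulli with success probability $3/4$) agrees:

```python
import random

random.seed(0)
trials = 10 ** 6
hits = sum(
    sum(random.random() < 0.75 for _ in range(3)) >= 2  # majority of 3 correct
    for _ in range(trials)
)
print(hits / trials)  # ~0.84375 = 27/32
```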
It's a bit unclear what you mean in the "second type" and "third type" experiments ... I suggest you use MathJax and define as many of your variables/functions as possible. |
Auxillary circle and power of point | This is a straightforward application of the generalized angle bisector theorem. I am not sure how you can have $AB$ and $AC$ tangential to the circumcircle of triangle $ADE$ unless angle $BAC = 180$ degrees, which means $ABC$ is not a triangle. Here is a sketch of the proof of your original proposition.
Apply the generalized angle bisector theorem to triangle ABC with point D on BC ignoring point E. Then you get:
$\frac{BD}{DC} = \frac{AB\sin(BAD)}{AC\sin(DAC)} \quad (1)$
Apply the theorem again to triangle ABC with point E on BC ignoring point D. Then you get: $\frac{CE}{BE} = \frac{AC\sin(CAE)}{AB\sin(BAE)} \quad (2)$
Since angle BAD = angle CAE and angle BAE = angle DAC, we can combine $(1)$ and $(2)$ and rearrange terms to yield the required result.
find nth term of a few sequences | The sequence is $$-\left(\frac{5n(n + 1)}{2} - 4\right).$$ The OEIS is a terrific website, but, like any search engine, you occasionally have to finesse your search terms. Like Aleksandar showed, your sequence as you've presented it here is not in the OEIS. But try removing the initial $4$ and you get http://oeis.org/A166137 which is described as 5*n*(n+1)/2-4. The offset is 1,2. I don't know what the 2 means, but I do know the 1 means that the first $n$ is $n = 1$. If you try $n = 0$ in 5*n*(n+1)/2-4 you get $-4$. |
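A one-line check of the formula (my addition):

```python
print([-(5 * n * (n + 1) // 2 - 4) for n in range(8)])
# [4, -1, -11, -26, -46, -71, -101, -136]
```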
Find the values of a and b that make the equation system have infinite solutions | For the system to have infinitely many solutions, two conditions must hold:
$N(A)\neq\{0\}$
$b\in Col(A)$ |
A graph with a disjoint matching contains components that are either paths or even circuits. | Edges in $M$ do not intersect, and edges in $M'$ do not intersect either, so the only possible intersections in $(V(G),M\cup M')$ are between an edge of $M$ and an edge of $M'$. Three edges cannot meet at the same vertex: two of them would have to belong to the same matching, and edges of a matching do not intersect. Hence every vertex of $(V(G),M\cup M')$ has degree at most two, so each component is either a path or a cycle. Moreover, along a cycle the edges must alternate between $M$ and $M'$ (consecutive edges share a vertex, so they cannot lie in the same matching), so every cycle has even length.
Is there an analytic function that takes on every value apart from 3 numbers? | $\sqrt{z}$ does not take on all complex values. Which values it does take depends on which branch you choose.
For an analytic function that takes on all but $3$ complex values, take
$f(z) = z$ on the complex plane with those $3$ points removed.
If you're asking for an entire function that takes on all but $3$ complex values, that is forbidden by Picard's "little" theorem. |
Is this quantity possible? | It certainly is possible, although it may be difficult to actually write down. Here are some hints to a (probably non-optimal) solution:
There is nothing special about the number $2015$. In fact there exist sets of size $n$ with these conditions for every $n$. Proof by induction seems the way to go.
The second condition is easy to satisfy. If we take all the elements to be coprime and non-square then no products of elements can be square.
For the inductive step, suppose we have a set $M$ of size $n$ which satisfies the conditions. Let $$\mathcal M = \left\{\sum_{i=1}^m x_i:1\le m \le n,\ x_1<\cdots<x_m\in M\right\}$$be the set of all sums of elements of $M$. $\mathcal M$ is a finite set, so it contains a maximal element $N$. To construct a set of size $n+1$ which satisfies the conditions, we need to find a non-square integer $k$ which is coprime to every element of $M$, for which $k+m$ is not square for every $m\in\mathcal M$.
Gaps between square numbers can be made arbitrarily long. If we can find some $k\in\mathbb N$ which is non-square, coprime to every element of $M$ and translates $\mathcal M$ into a gap between two large squares, then we'll be done.
Let $$k = 1+N^2\prod_{x\in M}x^2$$Then $k$ is certainly coprime to every element of $M$, and $$\left(N\prod_{x\in M}x\right)^2<k+m<\left(1+N\prod_{x\in M}x\right)^2$$ for every $m \in \mathcal M$. |
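For concreteness, here is a Python sketch of this inductive construction (my own code; it assumes the two conditions are that no nonempty subset sum and no nonempty subset product is a perfect square). The elements grow extremely fast, so only a few steps are run:

```python
from math import isqrt, prod, gcd
from itertools import combinations

def is_square(m):
    r = isqrt(m)
    return r * r == m

def extend(M):
    sums = {sum(c) for r in range(1, len(M) + 1) for c in combinations(M, r)}
    N = max(sums)
    # k = 1 mod every x in M (hence coprime to M), and every k + m lands
    # strictly between (N*prod(M))^2 and (N*prod(M) + 1)^2, so is non-square
    return M + [1 + (N * prod(M)) ** 2]

M = [2]
for _ in range(3):
    M = extend(M)

# verify: pairwise coprime, and no subset sum or product is a perfect square
assert all(gcd(a, b) == 1 for a, b in combinations(M, 2))
for r in range(1, len(M) + 1):
    for c in combinations(M, r):
        assert not is_square(sum(c)) and not is_square(prod(c))
print(M)  # [2, 17, 417317, ...]
```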
Is it possible to to solve an equation with both power(?) and exponential terms for $x$? | There is no nice formula that expresses $x$ as a function of $y$. For any particular numerical value of $y$ you can use software to find a value of $x$. For example, if you ask Wolfram Alpha to find $x$ when $y=100$
solve 12+2x+x^1.92+2^(0.425(x−12))=100
you find out that $x$ is approximately
9.0984743836320913466
Since your function is increasing, it would be straightforward to write a Python program (or a program in any other language) to find values by bisection, or to build a table of values as in a comment. You could even build a table of values in a spreadsheet.
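In that spirit, a minimal bisection sketch in Python (my own code; the bracket $[0,50]$ is chosen by hand so that $f(\text{lo})<y<f(\text{hi})$):

```python
def f(x):
    # y = 12 + 2x + x^1.92 + 2^(0.425(x - 12)), increasing for x > 0
    return 12 + 2 * x + x ** 1.92 + 2 ** (0.425 * (x - 12))

def invert(y, lo=0.0, hi=50.0, tol=1e-10):
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) < y else (lo, mid)
    return (lo + hi) / 2

print(invert(100))  # 9.0984743836...
```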
Infinite indices zero sequence | Your basic idea is correct, but you have to be careful!
You consider $A=\{n\in\mathbb N ~:~a_m\leq a_n ~\forall m\geq n\}$ and assume $A$ is finite. Then you can choose $M:=\max A$. For $n>M$ you know $n\notin A$, so it is not the case that $a_m\leq a_n$ for all $m\geq n$. This doesn't mean $a_m>a_n$ for all $m\geq n$; it only means there exists some $m\geq n$ with $a_m>a_n$.
Hence you can construct a monotonically increasing subsequence. Since $(a_n)_n$ is a sequence of positive real numbers, you can deduce a contradiction to $a_n\to 0$.
How to construct the subsequence:
Consider $M+1\notin A$, so there exists $m>M+1$ such that $a_m>a_{M+1}$. Define $n_1:=m>M+1$.
Since $n_1>M$ we know $n_1\notin A$ hence there exists $m>n_1$ such that $a_m>a_{n_1}$ and define $n_2:= m$.
Since $n_2>n_1>M$ we know $n_2\notin A$ hence there exists $m>n_2$ such that $a_m>a_{n_2}$ and define $n_3:=m$.
Iterate the argument and you get a subsequence $(a_{n_k})_k$ of $(a_n)_n$ which is strictly increasing, with $a_{n_1}>0$. In particular, $a_{n_k}\not\to 0$.
Since $a_n\to 0$ you get a contradiction. |
True yet unprovable statement? | It depends on what kinds of claims you want to make and prove about the natural numbers. If all you're interested in is how the numbers are ordered (i.e. that 18 is the next one from 17), then that's all perfectly provable.
Also, it turns out you can prove all true claims that are purely about addition (see Presburger Arithmetic).
But, once you add multiplication in there, you get into trouble. That is, Godel's Incompleteness Theorem says there is no complete (and sound and recursive) set of axioms that can prove all truths involving addition and multiplication for the natural numbers.
Very crudely, the reason is that you can use natural numbers to encode logic statements about numbers, and even statements about logic proofs. Thus, logic statements about numbers become statements about statements (or proofs) about numbers. And, once you have addition and multiplication in your proof system, the system becomes 'strong enough' to be able to encode the infamous Godel sentence $G$, which effectively ends up saying "I am not provable within this system" (note that the Godel sentence refers to the system you are using, meaning that every different system has its very own Godel sentence).
OK, so now consider $G$: yes, through the encoding it ends up making a claim about its own (un)provability, but it is also still a statement about numbers. So, is it true or false? Well, if false, then it would be provable ... and (assuming the system is sound) therefore true. OK, so it can't be false. So ... it's true ... and therefore not provable (again, within the system you're working with)!
How do you prove this logical equivalence? | $$(\exists! x~:~P(x)) \iff (\forall x~:~P(x) \implies Q(x)) \iff (\exists x~:~P(x) \land Q(x))$$
The first thing to check is whether it is actually true. $\iff$ is associative and commutative; a chain of biconditionals is true exactly when an even number of its constituent statements are false.
$P$ and $Q$ are unary predicates, so they are interchangeable with sets (i.e., $P(x)$ is the same as $x \in P$). So $\exists! x ~:~ P(x)$ is the same as $|P| = 1$, and $\forall x~:~P(x) \implies Q(x)$ is the same as $P \subseteq Q$, and $\exists x~:~P(x) \land Q(x)$ is the same as $P \cap Q \ne \emptyset$.
So the claim is equivalent to the claim that an odd number of the following statements are true:
$|P| = 1$
$P \subseteq Q$
$P \cap Q \ne \emptyset$
So just use trial and error; eventually, try making the first statement false and the last two true:
$$P = \{2, 3\}$$
$$Q = P$$
is a counterexample to the proposition.
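A short Python check of this counterexample (my addition; a chain $A \iff B \iff C$ evaluates as `(A == B) == C`):

```python
P = {2, 3}
Q = set(P)

s1 = len(P) == 1         # exists exactly one x with P(x)
s2 = P <= Q              # forall x: P(x) implies Q(x)
s3 = bool(P & Q)         # exists x: P(x) and Q(x)
print((s1 == s2) == s3)  # False: the proposition fails here
```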
Calculating the Nullity of a Linear Transformation of Polynomials | First of all, I think you got the kernel wrong, as we get
$$\int_{-1}^1 \! ax^2 + bx + c \, \mathrm d x = \frac{2a}{3} + 2c$$
so for $T(p) = 0$ we need $a + 3c = 0$ (probably a typo?).
Hence we have one equation restricting our choice of values for three variables, i.e. the kernel has dimension 2 (note that you can freely choose $b$ and either $a$ or $c$ according to the equation above -- the fact that $b$ can be chosen arbitrarily either way stems from the fact that $bx$ is the odd part of the polynomial $ax^2 + bx + c$ and the integrals of odd functions over intervals symmetric around zero vanish.)
There is another, in my opinion easier, way to get this:
The rank-nullity theorem states that for a linear map $f \colon V \to W$ we have $\dim V = \operatorname{rank} f + \operatorname{nullity} f$.
Applying this to your problem we have $\dim V = 3$ but $\operatorname{rank} T = 1$: the rank of $T$ is bounded by the dimension of its range (which is $1$, since the codomain is $\mathbb R$ and $\dim \mathbb R = 1$ over the base field $\mathbb R$), and clearly the rank cannot be $0$, as not all such polynomials are mapped to $0$ (take e.g. $x^2$).
Hence we quickly determined $\operatorname{rank} T = 1$ and thus find that $T$ has nullity $2$. |
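Concretely (my addition): in the basis $\{1, x, x^2\}$, $T$ is the $1\times 3$ matrix of integrals of the basis vectors, and SymPy confirms rank $1$ and nullity $2$:

```python
import sympy as sp

x = sp.symbols('x')
T = sp.Matrix([[sp.integrate(p, (x, -1, 1)) for p in (1, x, x**2)]])  # [2, 0, 2/3]
print(T, T.rank(), len(T.nullspace()))  # rank 1, nullity 2
```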
Evaluating $\int \sqrt{x^2-3}\:dx$ | Note that $$\cos^2 \theta + \sin^2 \theta = 1$$ so dividing both sides by $\cos^2 \theta$ gives $$1 + \tan^2 \theta = \frac{1}{\cos^2 \theta} \iff \frac{1}{\cos^2 \theta} - 1 = \tan^2 \theta$$
And so, taking the square root of both sides gives you what you want. |
Difficulty in Understanding the Points of Incidence in Applications of AM-GM Inequality. | Since x >= 3, the point of incidence is x = 3. In that case, 1/x = 1/3. So we write x as x/9 + 8x/9, and when we apply AM- GM to x/9 and 1/x, since at the point of incidence they're both equal to 1/3, equality holds when applying AM- GM. Furthermore, in 8x/9, we put x = 3 (because of the point of incidence), and we get the desired result.
If this isn't 100% clear, just let me know, I'll be glad to explain it to you thoroughly :) |
Epsilon-delta definition for a multivariable limit | When one approach doesn't give you what you need, it is time to abandon it and try something else.
Divide the neighborhood of $0$ into two parts: where $|x| \le |y|$ and where $|y| \le |x|$. In the first region, divide top and bottom by $|y|^3$. Then the denominator is trapped between 1 and 2, which allows you to find bounds on the entire fraction. In the other region, divide by $|x|^3$ to do the same. Use the combined bounds to prove your theorem. |