Finding $y(x)$ using a minimization problem. | With the help of a Lagrange multiplier $\lambda$, the minimization problem can be posed as follows:
Determine the stationary points for $(y,\lambda)$ in
$$
\int_0^2 y\sqrt{1+y' ^2} dx + \lambda \left(\int_0^2\sqrt{1+y'^2}dx -3\right)
$$
Now considering
$$
L = y\sqrt{1+y' ^2}+\lambda\left(\sqrt{1+y'^2}-\frac 32\right)
$$
the variation gives us the Euler-Lagrange conditions
$$
\begin{cases}
y''(y+\lambda)-y'^2-1=0\\
\int_0^2\sqrt{1+y'^2}dx -3 = 0
\end{cases}
$$
This ODE has the general solution
$$
y(x,c_1,c_2,\lambda) = \frac{1}{2} \left(e^{-e^{c_1}(c_2+ x)-2
c_1}+e^{e^{c_1}(c_2 + x)}-2 \lambda \right)
$$
From this point, the problem is solved numerically by a minimization process:
calling $r_1(c_1,c_2,\lambda) = y(0)$ and $r_2(c_1,c_2,\lambda) = y(2)$ we need to solve
$$
\min_{c_1,c_2,\lambda}\left(r_1^2+r_2^2+\left(\int_0^2\sqrt{1+y'^2}dx -3\right)^2\right)
$$
A MATHEMATICA V 11 script which accomplishes that follows.
Clear[lambda, c1, c2, y, x]
(* Lagrangian density; the constant 3/2 spreads the constraint value 3 over the interval [0, 2] *)
L = y[x] Sqrt[1 + y'[x]^2] + lambda (Sqrt[1 + y'[x]^2] - 3/2)
(* Euler-Lagrange equation *)
eq1 = D[L, y[x]] - D[D[L, y'[x]], x] // FullSimplify // Numerator
(* general solution, with integration constants renamed c1, c2 *)
soly = DSolve[eq1 == 0, y, x][[1]] /. {C[1] -> c1, C[2] -> c2}
yx = y[x] /. soly;
(* boundary residuals at x = 0 and x = 2 *)
r1 = yx /. {x -> 0}
r2 = yx /. {x -> 2}
(* arc-length of the candidate solution *)
int = Integrate[Evaluate[Sqrt[1 + y'[x]^2] /. soly], {x, 0, 2}]
(* minimize the sum of squared residuals *)
solc1c2 = NMinimize[{r1^2 + r2^2 + (int - 3)^2, lambda > 1}, {c1, c2, lambda}]
yx0 = y[x] /. soly /. solc1c2[[2]]
Plot[yx0, {x, 0, 2}] |
Disprove continuity by e-d criterium | $\Phi_x$ is not continuous under this metric $d$. You can find a counterexample by considering $\Phi_0$, with $g=0$ on $[0,1]$ and $f_n$ a sequence of functions such that $f_n(0)=1$, $f_n(x)=0$ for $x\in [\frac{1}{n},1]$, and $f_n$ linear on $[0,\frac{1}{n}]$.
Using this counterexample, you can show that there exists $\epsilon>0$ such that for all $\delta>0$ there exist $f,g$ with $d(f,g)<\delta$ but $|\Phi_0 f-\Phi_0 g|\ge \epsilon$, which proves the discontinuity of $\Phi_0$ by the definition. |
Why is there an intimate relationship between calculus and $e$? | The roles of $e$ and $\pi$ are similar in calculus.
The number $e$ arises naturally in the solution of the simplest possible first order differential equation because $y(t)=e^t$ is the solution of $y'=y$.
In similar fashion, $\pi$ arises in the solution of the simplest second order differential equation because $y(t)=\sin(t)$ and $y(t)=\cos(t)$ are the solutions of $y^{\prime\prime}=-y$.
While not immediately calculus related, it might also be mentioned that the golden ratio arises as the solution of one of the simplest algebraic equations, namely $x^2-x-1=0$. |
Is $\operatorname{int}(A\cup B)=\operatorname{int}(A)\cup\operatorname{int}(B)$? | Proposition. If the boundaries of $A$ and $B$ are disjoint, then
$\operatorname{int}(A\cup B) = \operatorname{int}(A) \cup \operatorname{int}(B)$. |
Solving using Chinese Remainder Theorem | The congruence
$$x \equiv 1 \mod 3$$ tells us that $x = 1 + 3k$ for some $k$. Plug this into the second equation to get
$$1 + 3k \equiv 2 \mod 5$$
So $3k \equiv 1 \mod 5$, and hence $k\equiv 2 \mod 5$. This means $k = 2 + 5j$ for some $j$, which in turn means $x = 1 + 3(2 + 5j) = 7 + 15j$. Now plug this into the third equation
$$7+15j \equiv 3 \mod 7$$
So $15j \equiv -4 \equiv 3 \mod 7$, whence $j \equiv 3 \mod 7$. This means $j = 3 + 7m$ for some $m$. Therefore $x = 7+15j = 7 + 15(3 + 7m) = 52 + 105m$.
Therefore $x \equiv 52 \mod 105$. Specifically, $52$ is a solution.
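As a quick sanity check, a brute-force Python search over one full period of length $3\cdot5\cdot7=105$:
sols = [x for x in range(105) if x % 3 == 1 and x % 5 == 2 and x % 7 == 3]
print(sols)  # prints [52] |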
Existence of a solution of Neumann problem in $\mathbb{R}^3$ | The solution is correct. It can be phrased in a more direct way, if your statement is made less "negative". Instead of saying "there does not exist a solution unless equality holds", we could say "if there exists a solution, then equality holds". Then the proof is direct: let $u$ be a solution, then use Green's identity and finally conclude that the equality holds.
In any case, yours is not really a proof by contradiction (reductio ad absurdum), but rather a proof of contrapositive statement. |
Proving that a polynomial function with complex roots is uniformly continuous | case 1 : $p(x)=C$ then $f$ is constant therefore uniformely continuous
case 2 : $deg(p)>0$
p has no real root so f is $C^\infty$
$$f'(x) = -p'(x)/p^2(x)$$
$$deg(p^2)=2*(deg(p')+1)$$
so $\lim\limits_{x\to\infty}f'(x)=0$ and $\lim\limits_{x\to-\infty}f'(x)=0$
so $f'$ is bounded and $f$ is uniformly continuous |
$a_n =0$ for all $n \in \mathbb N$ if $\sum a_n^k =0$ for all integers $k \ge 1$ | Assume that the conclusion is wrong, and let $m$ be the smallest
index such that $a_m \ne 0$. Without loss of generality we can
assume that $m=1$, i.e.
$$ \tag 1
a_1 \ne 0 \, .
$$
Let $0 < \varepsilon < \frac 12|a_1|$. $\sum_{n=1}^\infty a_n$ is absolutely convergent, therefore there is an integer $N$ such that
$$
\sum_{n=N+1}^\infty |a_n| < \varepsilon \, .
$$
and in particular $|a_n| < \varepsilon$ for $n > N$. It follows that
for all integers $k$,
$$
\sum_{n=N+1}^\infty |a_n|^k \le \sum_{n=N+1}^\infty \varepsilon^{k-1}|a_n| = \varepsilon^{k-1} \sum_{n=N+1}^\infty |a_n| < \varepsilon^k \, .
$$
Since $\sum a_n^k =0$, the $k$-th power sums
$$
p_k := p_k(a_1, \ldots, a_N) = \sum_{n=1}^N a_n^k
$$
satisfy
$$ \tag 2
|p_k| = \bigl| \sum_{n=N+1}^\infty a_n^k \, \bigr| \le
\sum_{n=N+1}^\infty |a_n|^k < \varepsilon^k
$$
for $1 \le k \le N$.
Now let
$$
e_k := e_k(a_1, \ldots, a_N)
$$ be the $k$-th elementary symmetric polynomial
in the variables $a_1, \ldots, a_N$.
Then $e_0 = 1$, and Newton's identities state that
$$
\begin{aligned}
e_1 &= p_1 \\
2e_2 &= e_1 p_1 - p_2 \\
3e_3 &= e_2 p_1 - e_1 p_2 + p_3
\end{aligned}
$$
and generally
$$ \tag 3
k e_k = \sum_{i=1}^k (-1)^{i-1} e_{k-i} p_i \, \text{ for } k \ge 1 \, .
$$
From $(2)$ and $(3)$ it follows easily by induction that
the elementary symmetric polynomials satisfy
$$
|e_k| \le \varepsilon^k \text{ for } 0 \le k \le N \, .
$$
Now define
$$
P(x) = (x-a_1)(x-a_2) \cdots (x-a_N) \\
= x^N - e_1 x^{N-1} + e_2 x^{N-2} - \cdots \pm e_N \, .
$$
Then $P(a_1) = 0$ and therefore $r := |a_1|$ satisfies
$$
r^N \le |e_1| r^{N-1} + |e_2| r^{N-2} + \cdots + |e_N| \\
\le \varepsilon r^{N-1} + \varepsilon^2 r^{N-2} + \cdots + \varepsilon^N \\
= \varepsilon r^{N-1} \bigl( 1 + \frac{\varepsilon}{r } + \cdots +
(\frac{\varepsilon}{r })^{N-1} \bigr)
$$
or
$$
1 \le \frac{\varepsilon}{r } \bigl( 1 + \frac{\varepsilon}{r } + \cdots +
(\frac{\varepsilon}{r })^{N-1} \bigr) \, .
$$
$\varepsilon$ was chosen such that $0 < \frac \varepsilon r < \frac 12$, therefore
$$
1 < \frac 12 \bigl( 1 + \frac 12 + \cdots +
(\frac 12)^{N-1} \bigr) < 1
$$
which is a contradiction.
So the initial assumption is wrong, and it is proven that
all $a_n$ are zero. |
Applying Hall's Theorem to a tripartite graph | No, it is not strong enough. Suppose $G$ is a cycle on $3n$ vertices, where $n$ is an integer satisfying $n \ge 2$; i.e., $V(G)= \mathbb{Z}/3n\mathbb{Z}$, and $i$ and $j$ are adjacent iff $i-j \equiv \pm 1 \pmod{3n}$. Then $G$ is tripartite with sides $X_0,X_1,X_2$, where $X_i =\{j : j \equiv_3 i\}$. Can you see the rest from here? |
How can we define a partial ordering on the $\mathbb{R}$-valued $n$ by $n$ matrix vector space? | There is no such function $\mu$ satisfying the first and the second properties at the same time.
Assume that such a function exists. It suffices to take $x,y\in V\setminus \left\{ \textbf{0} \right\}$ such that $x*y=\textbf{0}$ and apply $\mu$ to both sides to get $\ \mu (x*y)=0 \iff \mu(x)\mu(y)=0 \iff \mu(x)=0 \ \vee \ \mu(y)=0 \iff x=\textbf{0} \ \vee \ y=\textbf{0} $, which is a contradiction. |
Dedekind cuts: Showing that the set B has no smallest element | We try to discover something that might work.
Suppose we are given a positive rational $r$ such that $r^2\gt 2$. We want to produce a smaller positive rational $s$ such that $s^2\gt 2$.
We will produce $s$ by taking a little off $r$, say by using $s=r-\epsilon$, where $\epsilon$ is a small positive rational.
So we need to make sure that $(r-\epsilon)^2$ is still $\gt 2$.
Calculate. We have
$$(r-\epsilon)^2-2=(r^2-2)-2r\epsilon+\epsilon^2\gt (r^2-2)-2r\epsilon.$$
If we can make sure that $(r^2-2)-2r\epsilon\ge 0$ we will have met our goal. That can be done by choosing $\epsilon=\frac{r^2-2}{2r}$. That leads to the choice
$$s=r-\frac{r^2-2}{2r}=\frac{r^2+2}{2r}$$ |
What's the difference between Pick's formula and the Shoelace formula(Gausse's formula)? | But what's the difference?
Pick's theorem works only if the polygon vertices have integer coordinates.
Shoelace formula works with any real coordinates.
If you have a sequence of tuples corresponding to the consecutive points along the path the user drew, then the Shoelace formula is trivial to implement.
Let's say path is that sequence, with path[0] being the starting point, and path[len(path)-1] the final point. Then, we can use the fact that in Python, path[len(path)-1] == path[-1]:
def pathArea(path):
# Less than three points, and the area is zero.
if len(path) < 3:
return 0
A = 0.0
for i in range(0, len(path)):
A += path[i-1][0] * path[i][1] - path[i][0] * path[i-1][1]
return 0.5 * abs(A)
Mathematically, the above uses
$$A = \frac{1}{2}\left\lvert \sum_{i=0}^{n-1} x_{i-1} y_{i} - x_{i} y_{i-1} \right\rvert$$
where $n$ is len(path), and $x_{-1} = x_{n-1}$ and $y_{-1} = y_{n-1}$ matching the final point in path.
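As a quick usage check (a sketch), the function gives the expected area for a unit square, whether or not the path is explicitly closed:
print(pathArea([(0, 0), (1, 0), (1, 1), (0, 1)]))          # 1.0
print(pathArea([(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]))  # still 1.0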
When drawing, you can use line segments between the points, but note that you'll only want to add a new vertex if it differs from the previous one. So, if self.currentPath is the path being drawn, and newX and newY are the current coordinates being drawn, use
if (self.currentPath is None) or (len(self.currentPath) < 1):
self.currentPath = [ (newX, newY) ]
elif (self.currentPath[-1][0] != newX) or (self.currentPath[-1][1] != newY):
self.currentPath.append( (newX, newY) )
so that the vertex list will not contain consecutive duplicate points.
Note that the above pathArea() function does not need the path to be closed, to start and end with the same point; there is an implicit line segment (polygon side) between the starting and the ending point.
If the user draws a closed figure with line segments poking out, drawing α instead of a nice circle, you will need to trim off the extra vertices. Otherwise the actual area calculated is that of a fish shape, as an extra line segment is added between the starting and ending points.
To do that, you need to find vertex $S$ near the beginning of the path, and vertex $E$ near the end of the path, such that the line segment from the vertex $S$ to the next vertex (from $(x_S , y_S)$ to $(x_{S+1}, y_{S+1})$), intersects the line segment from the vertex preceding $E$ to vertex $E$ (from $(x_{E-1}, y_{E-1})$ to $(x_E , y_E)$); with $S < E$. If you find the intersection, remove all initial vertices on the path up to $S$, and all vertices from $E$ to end, including both vertices $S$ and $E$, and prepend or append the intersection point to the path. |
Field Extension by a product of two elements | Let $sm + tn = 1$; that is, $sm = 1-tn$.
We have $a^mb^m = (ab)^m \in K(ab)$, so $b^m \in K(ab)$ and thus $b\cdot(b^n)^{-t}=(b^m)^s \in K(ab)$. Therefore, $b \in K(ab)$ (and so $a\in K(ab)$ as well).
It follows that $K(a,b) = K(ab)$. |
Proving direct sum when field is NOT of characteristic $2$. | I do not think that the property you want to prove depends on the characteristic. $W_1 \cap W_2 = \{0\}$ holds true for any field: if $A^t = A$ and $A_{ij} =0$ for $i \le j$, then $A_{ji} = (A^t)_{ji} = A_{ij} = 0$ for $i \le j$, hence $A = 0$. So it suffices to show that $\dim W_1 + \dim W_2 = n^2$. $W_1$ has as a basis the $\frac{(n-1)n}2$ matrices $E_{ij}$, $i > j$, where $(E_{ij})_{kl} = \delta_{ik}\delta_{jl}$, and $W_2$ has as a basis the $n$ matrices $E_{ii}$ together with the $\frac{n(n-1)}2$ matrices $E_{ij} + E_{ji}$, $i>j$. So
$$ \dim W_1 + \dim W_2 = (n-1)n + n = n^2. $$
That is $M_n(\mathbf F) = W_1 \oplus W_2$.
But for ${\rm char}\,\mathbf F \ne 2$, we can write
$$ A = \frac{A+A^t}2 + \frac{A-A^t}2 $$
Then write the antisymmetric part $\frac{A-A^t}2$ as the difference of twice its "upper part" (which lies in $W_1$) and the symmetric matrix which has the same "upper part" as $\frac{A-A^t}2$ and whose lower part is the transpose of the upper part. So a more direct approach is possible if we can divide by $2$. |
nowhere zero forms on Spheres | If $g$ is a Riemannian metric on a smooth manifold $M$, there is an isomorphism $\Phi_g : \mathfrak{X}(M) \to \Omega^1(M)$ given by $X \mapsto g(X, \cdot)$. In particular, for any one-form $\alpha$, there is a vector field $X_{\alpha}$ such that $\alpha = g(X_{\alpha}, \cdot)$. It follows that $\alpha_p = 0$ if and only if $(X_{\alpha})_p = 0$, so $M$ admits a nowhere-zero one-form if and only if $M$ admits a nowhere-zero vector field. By the Poincaré-Hopf Theorem, if a closed connected orientable manifold $M$ admits a nowhere-zero vector field, then $\chi(M) = 0$; the converse is also true, see here. Therefore $S^n$ admits a nowhere-zero one-form if and only if $n$ is odd. Viewing $S^{2m-1}$ as the unit sphere in $\mathbb{C}^m$ with coordinates $(x^1, y^1, \dots, x^m, y^m)$, an example of such a form is the restriction of $-y^1dx^1 + x^1dy^1 + \dots - y^mdx^m + x^mdy^m$ to $S^{2m-1}$.
For other forms, note that if $\alpha$ is nowhere-zero one-form on a closed smooth oriented $n$-dimensional manifold $M$, equipped with a Riemannian metric $g$, then $\ast\alpha$ is a nowhere-zero $(n-1)$-form. For intermediate forms, note that $\operatorname{rank}(\bigwedge^kT^*M) = \binom{n}{k} > n = \dim M$, so it follows from obstruction theory that $\bigwedge^kT^*M$ admits a nowhere-zero section, and hence $M$ admits a nowhere-zero $k$-form. In particular, every sphere $S^n$ admits nowhere-zero $k$-forms for all $k = 2, \dots, n - 2$, as well as $k = 0$ and $n$. |
Generating function of a sequence | Suggested steps.
Begin with
$$
g(x) = \sum_{n=0}^\infty a(n) x^n = 1 + \sum_{n=1}^\infty a(n)x^n
$$
as the OP suggests.
Next, in terms of $g$, what is
$$
\sum_{n=1}^\infty a(n-1)x^n
$$
What is
$$
\sum_{n=1}^\infty n x^n
$$
Put these three results together (using your recurrence $a(n)=a(n-1)+n$) to get an equation satisfied by $g$.
Solve it to determine what $g$ actually is.
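For reference, a small SymPy sketch of these steps (assuming $a(0)=1$, as the expansion above suggests):
import sympy as sp

x, g = sp.symbols('x g')
# g - 1 = x*g + x/(1-x)^2, combining the three series via a(n) = a(n-1) + n
sol = sp.solve(sp.Eq(g - 1, x*g + x/(1 - x)**2), g)[0]
print(sp.simplify(sol))         # (x**2 - x + 1)/(1 - x)**3, up to equivalent forms
print(sp.series(sol, x, 0, 5))  # 1 + 2*x + 4*x**2 + 7*x**3 + 11*x**4 + O(x**5) |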
Proof using Cauchy's criterion | You may easily notice that
$$ \lim_{k\to +\infty}\left(\frac{3k+3}{3k-1}\right)^k = \lim_{k\to +\infty}\left(1+\frac{4}{3k-1}\right)^k = e^{4/3} $$
hence the general term of the given series behaves like $\frac{e^{4/3}}{3^k}$ and the series is convergent by asymptotic comparison with a geometric series. |
How to solve this limit: $\lim\limits_{x \to 0}\left(\frac{(1+2x)^\frac1x}{e^2 +x}\right)^\frac1x$ | HINT: write $$y=\left(\frac{(1+2x)^{1/x}}{e^2+x}\right)^{1/x}$$ and take the logarithm on both sides and write
$$e^{\frac{\ln\left(\frac{(1+2x)^{1/x}}{e^2+x}\right)}{x}}$$
and use L'Hospital's rule |
$SU(2)$ subgroups of $SU(3)$: Is my reasoning correct? | Your reasoning is correct: If the two subgroups were conjugate, then the centralizers would be conjugate, too. This follows from the identity $C_G(H^g) = C_G(H)^g$, where $C_G(H)$ is the centralizer of $H$ in the group $G$. Since the centralizers have different cardinalities, they cannot be conjugate. |
Representations of $S_3$ | She is considering a complex representation $V$ which comes with an action of the group elements on $V$. |
Subset proof, show A⊆B | Take $a\in A$. Then either $a\in C$ or $a\in C^\complement$. If $a\in C$, then $a\in A\cap C$ and therefore $a\in B\cap C$; in particular, $a\in B$. And if $a\in C^\complement$, then $a\in A\cap C^\complement$ and therefore $a\in B\cap C^\complement$; in particular, $a\in B$, again.
Concerning your proof, I don't understand the sentence “Since $x\notin C$ and $x\in C\implies\emptyset$”. |
integral question help me please | Let's try the substitution $y=1-x$:
$$
I=\int_0^1 \frac{f(x)}{f(x)+f(1-x)}dx=\int_0^1 \frac{f(1-y)}{f(1-y)+f(y)}dy;
$$
hence,
$$2I=\int_0^1 \frac{f(x)+f(1-x)}{f(x)+f(1-x)}dx=1\Longrightarrow I=\frac12$$ |
nr edges in hamiltonian graph, proof correct? | This is a valid proof. Someone grading it may decide that you are not allowed to use Ore's theorem, but that does not affect the validity. |
Let $E \subseteq X$. Prove or give an counter example to the statement $(E^a)^a \subseteq E^a$ and hence determine if $E^a$ is closed in $X$. | Let $x\in(E^a)^a$. Let $\varepsilon>0$. The ball $B_{\varepsilon/2}(x)$ contains an element $y\in E^a$ such that $y\neq x$. Let $\varepsilon'=d(x,y)$. Then the ball $B_{\varepsilon'}(y)$ contains an element $e\in E$ with $e\neq y$. Since $x\notin B_{\varepsilon'}(y)$, $e\neq x$. And since $\varepsilon'=d(x,y)<\frac\varepsilon2$,$$d(x,e)\leqslant d(x,y)+d(y,e)<\varepsilon.$$Therefore, $e\in B_\varepsilon(x)$ and $e\neq x$. Since this takes place for every $\varepsilon>0$, $x\in E^a$. So, $(E^a)^a\subset E^a$. |
Is there a continuous $f(x,y)$ which is not of the form $f(x,y) = g_1(x) h_1(y) + \dots + g_n(x) h_n(y)$ | Let's call $f$ an $n$-SOP if we can write
$$f(x,y) = \sum_{k = 1}^n g_k(x)\cdot h_k(y)$$
with continuous functions $g_k, h_k \colon [0,1] \to \mathbb{R}$. If $f$ is an $n$-SOP, for every family $x_1 < x_2 < \dotsc < x_r$ of $r > n$ points in $[0,1]$, the set
$$\left\{ \begin{pmatrix} f(x_1,y) \\ f(x_2,y) \\ \vdots \\ f(x_r,y)\end{pmatrix} : y \in [0,1]\right\}$$
is contained in an $n$-dimensional linear subspace of $\mathbb{R}^r$.
But
$$\begin{pmatrix} \exp (x_1\cdot 0/r) & \exp (x_1\cdot 1/r) & \cdots & \exp (x_1 \cdot (r-1)/r) \\ \exp (x_2 \cdot 0/r) & \exp (x_2 \cdot 1/r) & \cdots & \exp (x_2 \cdot (r-1)/r) \\ \vdots & \vdots & & \vdots \\ \exp (x_r\cdot 0/r) & \exp (x_r\cdot 1/r) & \cdots & \exp (x_r \cdot (r-1)/r)\end{pmatrix}$$
is a Vandermonde matrix, hence has rank $r$. Therefore $(x,y) \mapsto e^{xy}$ is not an $n$-SOP. |
Proof the conic hull of a finite set of vectors is closed | Suppose $x$ is in the convex conical hull, then $x = \sum_k \alpha_k u_k$ for some
$\alpha_k \ge 0$. We can presume that none of the $u_k$ are zero.
If $\sum_k \alpha_k = 0$ then we can write $x = 0 = 0 u_1$, so we can suppose that $\alpha_k > 0$ for all $k$.
If the $u_k$ are linearly dependent, there are $\beta_k$ not all zero such that
$\sum_k \beta_k u_k = 0$ and so
$\sum_k \alpha_ku_k = \sum_k (\alpha_k + s \beta_k) u_k$ for any $s$.
Let $k_* \in \operatorname{argmin}_k \left\{ \left| { \alpha_k \over \beta_k} \right| : \beta_k \neq 0 \right\}$ and $s_* = -{\alpha_{k_*} \over \beta_{k_*}}$. Then $\alpha_{k_*} + s_* \beta_{k_*} = 0$, $\alpha_k + s_* \beta_k \ge 0$ for all $k$, and so $x$ lies in the convex conic hull of $u_k$, $k \neq k_*$. This can be repeated as long as the remaining $u_k$ are linearly dependent.
In particular, if $x \in \operatorname{cone} \{ u_\alpha \}$, it lies in the convex conical hull of a linearly independent subset of $ \{ u_\alpha \}$. |
Examples of a cone in $\mathbb{R}^2$ | The cones are exactly the origin (more exactly, the set containing exactly the origin), the rays starting from the origin but not including it (in slight abuse of language, I'll call that an open ray), and arbitrary unions of those sets.
Proof:
The origin is a cone:
Obviously $\lambda(0,0)=(0,0)$.
An open ray from the origin is a cone:
The open ray from the origin passing through point $p=(x,y)\ne (0,0)$ is given by $C_p=\{\mu p:\mu \in \mathbb R^{>0}\}$. Thus if $\lambda\in\mathbb R^{>0}$ and $q\in C_p$, then there exists a $\mu>0$ such that $q=\mu p$, and then $\lambda q = \lambda\mu p\in C_p$ because $\lambda\mu>0$.
The union of cones is a cone.
Let $C_i$, $i\in I$, be a collection of cones ($I$ is an arbitrary index set), and let $C=\bigcup_{i\in I}C_i$. Assume $p\in C$. Then there exists an $i\in I$ such that $p\in C_i$. But then for any $\lambda>0$, $\lambda p\in C_i$. But that implies $\lambda p\in C$.
Any cone is such a union of open rays and possibly the origin.
Let $C$ be a cone, and let $p\in C$. Then either $p$ is the origin, or if not, then by definition of the cone, for all $\lambda>0$ we have $\lambda p\in C$. But the set $\{\lambda p: \lambda>0\}$ is exactly the open ray going through $p$, thus that open ray is a subset of $C$. Since through each $p\in C$ other than possibly the origin there's such a ray, it means $C$ is either the union of such rays (if the origin is not in $C$) or the union of those rays and the origin (if the origin is in $C$). $\square$
Note that arbitrary unions also include the empty union; indeed, the empty set vacuously fulfils the cone condition.
To add a few concrete examples of cones:
The $x$ axis.
The $x$ axis without the origin.
The coordinate cross (union of $x$ axis and $y$ axis).
Any of the four quadrants.
The upper half-plane.
The union of all straight lines through the origin whose slope is rational. |
Ratio test for $Σ ne^{-n^2}$? | Note that
$$
\frac{(n+1)e^{-(n+1)^2}}{ne^{-n^2}}=\frac{n+1}{n}\cdot e^{-(n+1)^2+n^2}=\frac{n+1}{n}\cdot e^{-2n-1}.
$$
The first term converges to $1$ as $n\to\infty$; what does the second term do? |
Proof for : if $((a \mid b),$ & $(a \nmid c))$, then $b\nmid c.$ | Yes, you've successfully proven the implication, by using proof by contradiction.
You are assuming the premise, and the negation of the consequent.
If we call $P: a\mid b$, and $Q: a\nmid c$, and $R: b\nmid c$,
You've assumed $P\land Q$ and $\lnot R$, i.e. $b\mid c$
$P\land Q\tag 1$
$P\;\;\tag{from (1)}$
$Q\;\;\tag {from(1)}$
$\quad|\lnot R\tag{(2): Assumption }$
$\qquad||\lnot Q\;\;\tag{as given by asker}$
$\qquad|| \lnot P \lor \lnot Q\tag{disjunction intro}$
$\qquad|| \lnot (P \land Q)\tag {DeMorgan's}$
$\qquad|| (P\land Q) \land \lnot (P\land Q)\tag{conjunction intro}$
$\lnot (\lnot R)\tag {follows from contradiction}$
$R\tag{double negation}$
So we have proven $$\big((P\land Q)\land \lnot R\big) \to \lnot(P\land Q)$$
We've reached a contradiction. We conclude $(P\land Q) \to R$.
But note, we are essentially done when we arrive at $\lnot Q$, because we reach $Q\land \lnot Q \equiv \bot$, which is where you stopped (having obtained a contradiction). Logically, we have proven $\lnot\lnot R$, or $R$, and hence $(P\land Q) \to R$. |
density of squarefree numbers in $\mathbb{Z}$ that are 1 mod 4 | You have essentially answered the question. The first thing to note is that the density of odd squarefree numbers is $4/\pi^2$, which is easily seen from the Dirichlet generating series for odd squarefree numbers, which is $ \prod_{p\neq 2} (1 + 1/p^s)$. It is easy to see that this is $\zeta(s)/\zeta(2s)$ with the 2-Euler factor removed, from which the residue at $s=1$ is determined as $4/\pi^2$.
With this in hand, let us count the positive fundamental discriminants less than $x$. Let $\chi$ denote the non-trivial Dirichlet character (mod 4). The number of squarefree $D\equiv 1$ (mod 4) is then $\sum_{D \leq x \, odd} \frac{1+\chi(D)}{2} \mu^2(D)$. The key fact needed now is that the Dirichlet series $\sum_{D}\frac{\chi(D)\mu^2(D)}{D^s}$ equals $(1-2^{-s})^{-1}L(s,\chi)/\zeta(s)$ which is analytic for $\Re(s)\geq 1$ so that using the standard Tauberian theorem or otherwise, we get $\sum_{D\leq x} \mu^2(D) \chi(D) = o(x)$. So the density of positive discriminants $\equiv 1$ (mod 4) is $2/\pi^2$.
The other cases are similarly treated. Counting the $D$ of the form $4m$ with $m\equiv 2$ (mod 4), $m$ squarefree and $D\leq x$ is the same as counting the odd squarefree $m_1 \leq x/8$ (writing $m = 2m_1$), and this count is asymptotic to $\frac{4}{\pi^2} (x/8)$, which gives a density of $1/2\pi^2$.
For the case $m\equiv 3$ (mod 4), we get by a similar analysis density of $1/2\pi^2$. Thus the total density of positive fundamental discriminants is $3/\pi^2$.
A similar analysis for negative fundamental discriminants gives a density of $3/\pi^2$.
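A rough numerical sanity check of this density (a sketch; squarefree is a naive trial-division helper, and the criterion for a fundamental discriminant is the one used above):
from math import pi

def squarefree(n):
    d = 2
    while d * d <= n:
        if n % (d * d) == 0:
            return False
        d += 1
    return True

X = 10**5
# D fundamental: D = 1 (mod 4) squarefree, or D = 4m with m = 2, 3 (mod 4) squarefree
count = sum(1 for D in range(2, X)
            if (D % 4 == 1 and squarefree(D))
            or (D % 4 == 0 and (D // 4) % 4 in (2, 3) and squarefree(D // 4)))
print(count / X, 3 / pi**2)  # roughly 0.304 vs 0.30396... |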
Finding the value for $a$ for which three planes don't intersect | Guide:
We do not want the columns to span $\mathbb{R}^3$ or it will be consistent.
Let $C_i$ be the $i$-th column.
If the third column is a linear combination of the first two, determine $a_1$ and $a_2$ where $C_3=a_1C_1+a_2C_2$ using the first two rows.
Using $a_1$ and $a_2$ that you found in the previous step, you should be able to compute $a$ and verify that it satisfies the condition.
Remark about your attempt:
After you obtain $-20z+10az=10$, we have $z(2-a)=-1$. Now, we note that if $a=2$, then we get a contradiction, which is the value that you are looking for.
Suppose $a \ne 2$, then you can solve for $z$ in terms of $a$ and in turn solve for $x$ and then $y$.
General writing remark for improving readability: you might like to label your equations and include how each line is obtained, i.e. which operation was performed.
Things will be neater after you learn about Gaussian elimination. |
Probable positions in line to share birthday in birthday problem | It is probably better to ask one question at a time.
For the question
What positions would have the highest probability of winning the prize, if there is a prize for the first triplet that share the same birthday?
then any of the first three to enter the room is most likely to be in the first triplet.
If the prize is for the third of the triplet then I think you can work out the probability $p(n,d)$ that after $n$ people have entered the room there are $d$ pairs and no triplets, using the recursion something like
$$p(n,d) = \frac{(365-n+1+d)\,p(n-1,d) + (n-2d+1)\,p(n-1,d-1) }{365}$$
starting at $p(0,0)=1$ and $p(0,d)=0$ for $d \not = 0$, assuming there are 365 equally likely and independent birthdays. This is easily coded.
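For instance, a minimal Python sketch of this recursion (p_table is an illustrative name of mine):
def p_table(nmax, days=365):
    # p[n][d] = P(after n people: exactly d shared-birthday pairs, no triplet)
    p = [[0.0] * (nmax + 1) for _ in range(nmax + 1)]
    p[0][0] = 1.0
    for n in range(1, nmax + 1):
        for d in range(0, n // 2 + 1):
            new_day = (days - n + 1 + d) * p[n - 1][d]                       # new birthday
            pair_up = (n - 2 * d + 1) * p[n - 1][d - 1] if d >= 1 else 0.0   # join a singleton
            p[n][d] = (new_day + pair_up) / days
    return p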
You then want to find $n$ such that the probability of being the third of the first triplet $\sum_d p(n-1,d) - \sum_d p(n,d)$ is maximised, which I think may be about $n=85$ with a probability of about 0.0116385.
If you just look at $p(n,0)$, then this reduces to the earlier second of the first pair problem, looking for where $p(n-1,0)-p(n,0)$ is maximised, which is indeed with $n=20$ and a probability of about 0.0323. |
equivalence of finite state and regular expression | Your suggested regular expression is correct, in the sense that it is equal to $R_{ij}^k$, but it's not very useful, because to construct $R_{kk}^k$ for example, you already need to know what $R_{kk}^k$ is!
The point of the proof is to induct on $k$, and to do this we cleverly define $R_{ij}^k$ in terms of expressions $R_{mn}^l$ where $l<k$, and we assume we already know how to construct such expressions. Otherwise we would have a circular definition. |
Proving that $\cos\left(\frac{\pi}{2}+a\pi\right)=0$ when a is an integer | Note that $\cos\left(\frac{\pi}{2}+a\pi\right)=-\sin(a\pi)$ and that $\sin(n\pi)=0$ for $n\in \mathbb Z$; that's all. Now use $\sin (x)=\frac {e^{ix}-e^{-ix}}{2i}$ to complete the proof. |
Showing a Set of Connectives is Inadequate | Short answer.
To prove that the set of connectives $\{\lor, \land, \to, \leftrightarrow\}$ is inadequate, it is sufficient to show that the set $\{\lor, \land, \to\}$ is inadequate, essentially because, as you said correctly, the connective $\leftrightarrow$ can be expressed by means of $\to$ and $\land$.
Long answer.
Let us see why more precisely (and pedantically).
Definition 1. Given a set $X \subseteq \{\bot, \top, \lnot, \lor, \land, \to, \leftrightarrow\}$ of connectives for propositional logic (the constants $\bot$ and $\top$ can be seen as 0-ary connectives), the set of formulas built up from propositional variables by means of the connectives in $X$ is denoted by $\mathcal{F}_X$.
Definition 2. A set $X \subseteq \{\bot, \top, \lnot, \lor, \land, \to, \leftrightarrow\}$ of connectives for propositional logic is inadequate if there is a formula $A \in \mathcal{F}_{\{\bot, \top, \lnot, \lor, \land, \to, \leftrightarrow\}}$ such that every formula in $\mathcal{F}_X$ is not logically equivalent to $A$.
Lemma. Every formula in $\mathcal{F}_{\{\lor, \land, \to, \leftrightarrow\}}$ is logically equivalent to a formula in $\mathcal{F}_{\{\lor, \land, \to\}}$.
Proof. By induction on the structure of the formula in $\mathcal{F}_{\{\lor, \land, \to, \leftrightarrow\}}$. (Hereafter, we will implicitly use that, in formula $A$, if a subformula $B$ of $A$ is replaced by a formula logically equivalent to $B$, the obtained global formula is logically equivalent to $A$).
A formula in $\mathcal{F}_{\{\lor, \land, \to, \leftrightarrow\}}$ is:
either a propositional variable, which is already a formula in $\mathcal{F}_{\{\lor, \land, \to\}}$;
or $A \diamond B$ with $\diamond \in \{\lor, \land, \to\}$ and $A, B \in \mathcal{F}_{\{\lor, \land, \to, \leftrightarrow\}}$, and then by induction hypothesis there are formulas $A', B' \in \mathcal{F}_{\{\lor, \land, \to\}}$ logically equivalent to $A,B$, respectively; therefore, $A' \diamond B'$ is a formula in $\mathcal{F}_{\{\lor, \land, \to\}}$ logically equivalent to $ A \diamond B$;
or $A \leftrightarrow B$ with $A, B \in \mathcal{F}_{\{\lor, \land, \to, \leftrightarrow\}}$, and then by induction hypothesis there are formulas $A', B' \in \mathcal{F}_{\{\lor, \land, \to\}}$ logically equivalent to $A,B$, respectively; therefore, $(A' \to B') \land (B' \to A')$ is a formula in $\mathcal{F}_{\{\lor, \land, \to\}}$ logically equivalent to $ A \leftrightarrow B$. $\square$
You are asking if the following claim holds. The answer is positive.
Claim. If $\{\lor, \land, \to\}$ is inadequate, then $\{\lor, \land, \to, \leftrightarrow\}$ is inadequate.
Proof. Since $\{\lor, \land, \to\}$ is inadequate, there is a formula $A\in \mathcal{F}_{\{\bot, \top, \lnot, \lor, \land, \to, \leftrightarrow\}}$ such that every formula in $\mathcal{F}_{\{\lor, \land, \to\}}$ is not logically equivalent to $A$. But every formula in $\mathcal{F}_{\{\lor, \land, \to, \leftrightarrow\}}$ is logically equivalent to some formula in $\mathcal{F}_{\{\lor, \land, \to\}}$, according to the lemma above. Therefore, every formula in $\mathcal{F}_{\{\lor, \land, \to, \leftrightarrow\}}$ is not logically equivalent to $A$ (otherwise there would be a formula in $\mathcal{F}_{\{\lor, \land, \to\}}$ logically equivalent to $A$, by transitivity of logical equivalence, which is impossible by hypothesis), i.e. $\{\lor, \land, \to, \leftrightarrow\}$ is inadequate. $\square$
Remark (beyond your question).
Now, you just have to prove that $\{\lor, \land, \to\}$ is inadequate, i.e. to show that there is a formula $A \in \mathcal{F}_{\{\bot, \top, \lnot, \lor, \land, \to, \leftrightarrow\}}$ such that every formula in $\mathcal{F}_{\{\lor, \land, \to\}}$ is not logically equivalent to $A$. It is easy to prove that you can take $A = \lnot p$, for any propositional variable $p$.
Hint (to prove that every formula in $\mathcal{F}_{\{\lor, \land, \to\}}$ is not logically equivalent to $\lnot p $). What is the truth value of a formula in $\mathcal{F}_{\{\lor, \land, \to\}}$ with a valuation assigning true to every propositional variable occurring in it? What is the truth value of $\lnot p$ with a valuation that assigns true to the propositional variable $p$? |
$\lim_{x \rightarrow 0} [\frac{\arcsin x}{x}] = 1$, [] is floor function. | Hint. Since $\sin(x)$ and $\tan(x)$ are increasing in $x \in (0,\pi/2)$ then
$$0 < \sin x < x\quad\mbox{and}\quad x< \tan x$$
imply that for $x\in (0,1)$
$$0=\arcsin(0) < x < \arcsin(x)\quad\mbox{and}\quad
\arctan(x) < x.$$
Note that you need also that $x<\sin(2x)$ in $(0,\pi/4)$ which implies that in $(0,1/\sqrt{2})$,
$$\arcsin(x)<2x.$$ |
Where is this function continuous? | This function is NOT continuous at any of the closed endpoints. Continuity must apply from both directions, not just one. That's why only the open intervals with open images are continuous. So Deusovi's comment is correct: all the open intervals are valid continuous segments for the function. Deusovi should make that an answer. |
Kolmogorov Probability Theory Question | Yes, here $\cal F$ is a subset of the power set of $E$. In other words, the types of things in $\cal F$ are sets, and the elements of those sets are elements of $E$. Alternatively, everything in $\cal F$ is in $\mathcal P(E)$ but not necessarily the other way around.
A decomposition $\cal U$ is a special subset of the power set in which the sets in $\cal U$ do not intersect each other at all, and they cover the entire set $E$. Not every subset of the power set is a decomposition, but every decomposition is a subset of the power set.
Finally, independence DOES NOT mean disjoint/mutually exclusive. The definition of independence is exactly that the probability of the intersection equals the products of the probabilities; i.e. if that equation is true, then we can call those experiments "mutually independent", and if the equation is not true, those experiments are not "mutually independent". There is not really an intuitive picture you can draw in your head of when two things are independent.
Lastly, I'm not sure that learning probability theory from Kolmogorov's book is the best idea. I think it would be much easier to learn it from a more modern book with more modern/more intuitive notation first, then come back to Kolmogorov. However, I do not claim to know your situation, so of course this is just my two cents.
Personally I started my probability journey from measure.axler.net/MIRA.pdf (chapter 2,3 and chapter 12 are the key parts). A classic book on pure probability theory would be colorado.edu/amath/sites/default/files/attached-files/…, which I think is pretty good. I don't know what level you want to study the subject at, but these are where I started. |
Finding the eigenvectors of a matrix. | For the matrix:
$$A=\begin{bmatrix}2&1&1\\1&2&1\\1&1&2\end{bmatrix}$$
The CP is given by:
$$|A - \lambda I| = 0$$
For the CP, we get:
$$-\lambda^3+6 \lambda^2-9 \lambda+4 = -(\lambda-4) (\lambda-1)^2 = 0$$
This leads to three eigenvalues as $\lambda_1 = 4, \lambda_2 = 1, \lambda_3 = 1$.
We have a repeated eigenvalue.
To find an eigenvector, for each eigenvalue, we solve:
$$[A - \lambda_i I]v_i = 0$$
When we have repeated eigenvalues, we may need to resort to generalized eigenvectors, which I assumed you learned in class.
Let's find an eigenvector as an example for $\lambda_2 = 1$:
$$[A - \lambda_2 I]v_2 = 0$$
$$[A - \lambda_2 I]v_2 = \begin{bmatrix}1&1&1\\1&1&1\\1&1&1\end{bmatrix}v_2 = 0$$
The row-reduced-echelon-form (RREF) is given by:
$$\begin{bmatrix}1&1&1\\0&0&0\\0&0&0\end{bmatrix}v_2 = 0$$
So, we can choose an eigenvector to satisfy the RREF for:
$$x + y + z = 0 \rightarrow (-1, 0 , 1)$$
Since we have a repeated eigenvalue, we can choose a second, linearly independent, eigenvector as:
$$(-1, 1, 0)$$
Notice that both of these satisfy the RREF equation.
Repeat the above process for the other eigenvector so you can practice.
This leads to the eigenvalues and eigenvectors as:
$$\lambda_1 = 4, v_1 = (1, 1, 1)$$
$$\lambda_2 = 1, v_2 = (-1, 0, 1)$$
$$\lambda_3 = 1, v_3 = (-1, 1, 0)$$
However, we are being asked to find the normalized eigenvectors, so we have to do an additional step.
The way we normalize is to divide each eigenvector by its length.
$v_2 = (-1,0, 1)$
$|v_2| = \sqrt{(-1)^2 + 0^2 + 1^2} = \sqrt{2}$
Thus, the normalized eigenvector is:
$$\tilde v_2 = \frac{v_2}{|v_2|} = \frac{1}{\sqrt{2}}(-1,0, 1)$$
You should verify that the normalized eigenvector has length one.
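A small NumPy sketch to double-check the result numerically:
import numpy as np

A = np.array([[2., 1., 1.],
              [1., 2., 1.],
              [1., 1., 2.]])
vals, vecs = np.linalg.eigh(A)       # eigh, since A is symmetric
print(vals)                          # [1. 1. 4.]
print(np.linalg.norm(vecs, axis=0))  # each column already has length one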
Repeat this for the other two eigenvectors. |
Help Understanding Fields | The statement you are trying to prove is, indeed, not true if the field has characteristic $2$. If $u=(1,0)$ and $v=(0,1)$ then $u$ and $v$ are linearly independent but $u+v=u-v=(1,1)$. |
Finite abelian group has a subgroup $H$ with cyclic quotient that "preserves" the order in the quotient. | Here’s an inductive proof.
It suffices to prove this for abelian $p$-groups (do each $p$-part separately, then take their direct sums).
Let $G$ be abelian of order $p^n$, $g\in G$, and assume the result holds for any abelian groups of order $p^k$, $k\lt n$. If $G$ is cyclic, take $H=\{e\}$ and we are done.
If $G$ is not cyclic, then it has more than one subgroup of order $p$. As $\langle g\rangle$ contains at most one subgroup of order $p$, let $H$ be a subgroup of order $p$ with $H\cap\langle g\rangle = \{e\}$. Then $gH$ has the same order in $G/H$ as $g$ has in $G$, and $|G/H|\lt |G|$. Inductively, $G/H$ has a subgroup $M/H$ such that $G/M \cong (G/H)/(M/H)$ is cyclic, and the order of $(gH)M = gM$ in $G/M$ is the same as the order of $gH$ in $G/H$, which is the same as the order of $g\in G$. $\Box$ |
How to find the cube of a uniform distribution? | You want to find the probability density function; not the cumulative distribution function.
$$\begin{align}f_{X^3}(u) &=\dfrac{\mathrm d~~~}{\mathrm d~u}F_{X^3}(u)\\[1ex] &=\dfrac{\mathrm d~~~}{\mathrm d~u}\mathsf P(X\leq u^{1/3})\\[1ex] &=\dfrac{\mathrm d~\tfrac 13u^{1/3}}{\mathrm d~u~~~}\mathbf 1_{u^{1/3}\in[0..3]}\\[1ex]&=\tfrac 19 u^{-2/3}\mathbf 1_{u\in[0..27]}\end{align}$$ |
How large is the distance between centroids of two equilateral triangles | Supposing the grid length is 1 unit, we know the distance from the centroid to a side is $\frac{1}{2\sqrt{3}}$ units. Thus the distance between two neighbouring centroids is $\frac{1}{\sqrt{3}}$. For centroids that are further apart, note that the centroids form a hexagonal grid; computation from here is not too difficult. |
Proof of the theorem on numerical semigroups | I think you're referring to the $ua+vb=1$ part? This is an idea I've often heard referred to as "the Euclidean algorithm". Technically this is an algorithm for finding appropriate coefficients, but its correctness of course also implies the theorem that such coefficients are guaranteed to exist (whenever $a$ and $b$ are coprime). |
How can I prove or disprove that there exists a function such that... | No. Here's why. Consider $a = 1, b = -2$ and look at $x = 0, y = 2$ and $x = 1, y = 0$. For each of these $bx - ay = -2$. But for the first, $ax - by = 4$, while for the second $ax - by = 1$.
Since the function $f$ can only take on one value for the argument $-2$ (because of the definition of "function"), it must take on the value either $4$ or $1$, but not both.
A similar argument works for almost any other pair of values for $a$ and $b$; the only exception I can see is $a = b = 0$, when it's easy to build $f$ (but not interesting: it's the everywhere-zero function). |
Hermitian Operators and the Spectral Theorem | I think that this is a great question that is usually unasked and unanswered. This is quite unfortunate because it has a very simple answer:
We can use the adjoint operator $T^{*}$ to detect $T$-invariant subspaces of codimension one.
Let me show you how this works. Assume $W \subseteq V$ is a codimension one subspace (geometrically, a hyperplane). Let us choose a normal vector $0 \neq w^{\perp} \in W^{\perp}$ to $W$. If the vector $w^{\perp}$ is an eigenvector of $T^{*}$ then $W$ is a $T$-invariant subspace. To see why, let $w \in W$ and compute
$$ \left< Tw, w^{\perp} \right> = \left< w, T^{*}w^{\perp} \right> = \left< w, \lambda w^{\perp} \right> = \lambda \left< w, w^{\perp} \right> = 0 $$
which shows that $Tw \in W$.
Stated differently, this observation shows that any eigenvector $w$ of $T^{*}$ gives us a $T$-invariant codimension one subspace $W = \operatorname{span} \{ w \}^{\perp}$.
Given the result above, what kind of condition can we impose on an operator $T$ which guarantees that if $v \in V$ is an eigenvector of $T$ then $W = \operatorname{span} \{ v \}^{\perp}$ is $T$-invariant? Well, we can try and guarantee that if $v$ is any eigenvector of $T$ then $v$ is also an eigenvector of $T^{*}$ (possibly with a different eigenvalue).
One condition which guarantees the above is $T = T^{*}$ because then $v$ is trivially an eigenvector of $T^{*}$ with the same eigenvalue. A less trivial condition which guarantees the above is $TT^{*} = T^{*}T$ because then if $v \in V$ is an eigenvector of $T$ with eigenvalue $\lambda$ then $v$ is an eigenvector of $T^{*}$ with eigenvalue $\overline{\lambda}$.
So what is the difference between the real and the complex case? The usual proof of the spectral theorem for the real and complex case goes like this:
Find one eigenvector $v \in V$ of $T$ and set $E = \operatorname{span} \{ v \}$.
Show that $E^{\perp}$ is $T$-invariant.
Restrict $T$ to $E^{\perp}$ and repeat the argument above for $T|_{E^{\perp}}$
When $V$ is complex, step one is trivial and doesn't use any property of $T$. To get step $(2)$, we use $T^{*}T = TT^{*}$. Step three is then again trivial because if $T$ was normal on $(V, \left< \cdot, \cdot \right>)$, then $T|_{E^{\perp}}$ will be normal on $(E^{\perp}, \left< \cdot, \cdot \right>|_{E^{\perp}})$.
When $V$ is real, step one is not trivial. The condition $T^{*}T = TT^{*}$ which is enough for step two to carry unfortunately doesn't guarantee that $T$ has even one eigenvector so we can't start the argument. However, the stronger condition $T = T^{*}$ does guarantee that $T$ has at least one eigenvector (this is a non-trivial result). This condition is also enough for step two and step three is again trivial. |
Arithmetic Progression of a book | There are $9$ single-digit positive integers, $99 - 9 = 90$ two-digit positive integers, and $999 - 99 = 900$ three-digit positive integers. The nine pages numbered with a single digit have $9 \cdot 1 = 9$ digits on them. The $90$ pages numbered with two digits have $90 \cdot 2 = 180$ digits on them. Thus, at the end of the first $99$ pages, we have encountered a total of $9 + 180 = 189$ digits. Each page numbered with a three-digit integer has three digits on it. Since $9 + 90 \cdot 2 = 189 < 1260 < 9 + 90 \cdot 2 + 900 \cdot 3 = 2889$, the number of pages in the book must be a three-digit positive integer. Since $1260 - 189 = 1071$, there must be $1071$ digits on pages numbered with three digits.
Since $1071 = 3 \cdot 357$, the book must contain $357$ pages with three-digit numbers, the smallest of which is $100$ and the largest $100 + 357 - 1 = 456$, so the book has $456$ pages.
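A one-line Python check of the digit count:
print(sum(len(str(page)) for page in range(1, 457)))  # 1260 |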
Expected number of rolling a pair of dice to generate all possible sums | This is the coupon collectors problem with unequal probabilities. There is a treatment of this problem in Example 5.17 of the 10th edition of Introduction to Probability Models by Sheldon Ross (page 322). He solves the problem by embedding it into a Poisson process. Anyway, the answer is
$$ E[T]=\int_0^\infty \left(1-\prod_{j=1}^m (1-e^{-p_j t})\right) dt, $$
when there are $m$ events with probability $p_j, j=1, \dots, m$ of occurring.
In your particular problem with two fair dice, my calculations give
$$E[T] = \frac{769767316159}{12574325400} \approx 61.2173.$$
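Expanding the product turns the integral into the inclusion-exclusion sum $E[T]=\sum_{\emptyset\neq S}(-1)^{|S|+1}\big/\sum_{j\in S}p_j$, which the following Python sketch evaluates exactly and should reproduce the value above:
from fractions import Fraction
from itertools import combinations

p = [Fraction(6 - abs(7 - s), 36) for s in range(2, 13)]  # P(sum = s), s = 2..12
E = Fraction(0)
for r in range(1, len(p) + 1):
    for S in combinations(p, r):
        E += Fraction((-1) ** (r + 1)) / sum(S)
print(E, float(E))  # 769767316159/12574325400, about 61.2173 |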
Number of elements of a set via action | Edit: I am assuming that $X$ and $G$ are finite for everything to make sense.
Note that precisely by Burnside's lemma, your question translates to asking whether
$$
|X| = \sum_{g \in G}|X^g|
$$
where $X^g := \{x \in X : g \cdot x = x\}$.
However, when $g = 1$ we have
$$
X^1 = \{x \in X : 1 \cdot x = x\} = X
$$
so in turn this would mean that
$$
0 = \sum_{g \neq 1}|X^g|,
$$
or equivalently, that $|X^g| = 0$ for all $g \neq 1$.
Thus equality holds when $g \cdot x \neq x $ for all $x \in X, g \neq 1$. If I recall correctly, this is the definition of a free action. |
$(L^\infty ,\left \| . \right \|_\infty )$ is complete | I would say it's just a convention. You want a function $f$; you have found its value outside of $A$, and the easiest $f$ you can think of on the whole space is found by setting it to be $0$ on $A$. Note that this is not really important, since it will just be a representative of the class of functions in $L^\infty$.
So yes, there's nothing special about the value $0$ itself in this case; you may very well choose $1/17$ or $f\equiv g$ on $A$ for some function $g$. |
Prime - composite numbers | The hint says to prove that $T$ can be identified with a subgroup of $(\mathbb{Z}/n\mathbb{Z})^*$, by using the residue classes.
Then $T$ is a subgroup because, whenever you have a finite abelian group $G$, the set $\{x\in G: x^k=1\}$ is a subgroup for any integer $k$.
The group $G=(\mathbb{Z}/n\mathbb{Z})^*$ has order $\varphi(n)$, so $|T|$ is a divisor of $\varphi(n)$: $k|T|=\varphi(n)$. Since $S=G\setminus T$ is by assumption not empty, we can draw a conclusion about $k$ and so…
Regarding the search of $n$ such that $S\neq\emptyset$, try a small composite number. |
quotient between areas of triangles | You have the right idea, but you’re using the wrong heights. The side that the two triangles share is $\overline{AC}$. Measured from this side, the height of $\triangle ABC$ is 9, and the height of $\triangle ADC$ is 6, so the ratio is $\frac{9}{6} = 1.5$. |
ellipse equation giving negative number when trying to solve | Because this equation describes an ellipse with an semimajor axis of $3$ and a semiminor axis of $2$ centered at $(1, 0)$, you can only solve with $-2\leq x\leq 4$ or $-2\leq y\leq 2$ before going beyond where the ellipse exists. |
Associated Legendre Polynomials from the Schrodinger Equation for the Hydrogen Atom | I found a better approach as described here.
When solving the polar equation you get an equation $$(1-x^2)\frac{d^2y}{dx^2}-2x\frac{dy}{dx}+(l(l+1)-\frac{m^2}{1-x^2})y=0$$
If you make the substitution $y=(1-x^2)^{m/2}f(x)$ you get a new differential equation;
$$(1-x^2)\frac{d^2f}{dx^2}-2x(m+1)\frac{df}{dx}+(l(l+1)-m(m+1))f(x)=0$$
If you set $m=0$ you get the regular Legendre equation, for which the ordinary Legendre polynomial $P_l$ is a solution. If you differentiate the equation above, you get the regular Legendre equation again, except that $f$ is replaced by $f'$ and $m$ is replaced by $m+1$; that means that since $P_l$ solves the equation for $m=0$, $P_l'$ solves the equation for $m=1$. You can continue differentiating, and the same holds: for a given $m$, the solution of the above equation is the $m$th derivative of the Legendre polynomial. You could prove that it holds for all $m$ by induction. Then, since we made the substitution $y=(1-x^2)^{m/2}f(x)$, the solution to the original equation is $$P_l^m=(1-x^2)^{m/2}\frac{d^m}{dx^m}P_l$$ which is called an associated Legendre polynomial. You can deduce from the azimuth equation that $m$ must be an integer as described here; I found this easiest if you express the solution to the azimuth equation in terms of sines and cosines rather than exponentials.
If you set $m=0$ you get that the solution is the Legendre polynomial $P_l$ which involves a derivative of order $l$ therefore $l$ must be a nonnegative integer since it is the order of a derivative. Then if you expand the definition of the associated Legendre polynomial you have a derivative of order $l+m$ which means that $m$ must be bounded below by $-l$. We have deduced that $l$ is a nonnegative integer and that $m$ is an integer bounded below by $-l$.
How can we deduce that $m\le l$? If $m>l$ then $P_l^m=0$. Maybe this is not allowed because we have to be able to normalise the wave function. Is this an allowable solution? |
limit of a recursive sequence, then show how it can be unbounded and not monotone | Part a) $\;\ldots\;$ $x^2=x \rightarrow x=1$
$x^2 = x \implies x=1\,$ or $\;x=0\,$. In order to justify that the limit cannot be $\,x=0\,$ you need to use the positivity of $a$ and the "monotone increasing" assumptions, which leaves $\,x=1\,$ indeed.
Part b) Show there exists an $a$ such that the sequence is not monotone.
Hint: $\;x_2=a(a+1) \gt a = x_1\,$. Try to find an $a$ such that $x_3 \lt x_2\,$.
Part c) Show there exists an $a$ such that the sequence is unbounded.
Hint: $\;x_{n+1} \gt x_n^2 \gt x_{n-1}^{2^2} \gt \ldots \gt x_1^{2^{n}}=a^{2^{n}}\,$. Try to find an $\,a\,$ such that $\,a^{2^{n}}\,$ diverges. |
Prove that if $n\geq1$ then $\binom{2n}{2}=$ | Although this question has been answered, I have another proof.
Let $N=\{1,\ldots,n\}$, $N_2=\{1,2\}\times N$, $A$ be the set of subsets of $N$, $B=\{(X,Y)\in A\times A:|X|=|Y|\}$ and $C=\{X\subset N_2:|X|=n\}$.
It's easy to show that
$$|B|=\sum_{k=0}^n\binom nk^2$$
and
$$|C|=\binom {2n}n$$
Therefore, our goal is to show that $|B|=|C|$, so let's build a bijective function from $C$ to $B$.
Let $X\in C$, that is, $X\subset N_2$ with $|X|=n$. Then $X$ has $k$ elements whose first component is $1$ and $n-k$ elements whose first component is $2$. Define
$$f(X)=\Big(\{v:(1,v)\in X\},\{v:(2,v)\notin X\}\Big)$$
Since $f$ is bijective, we are finished.
To say it in plain words, we can render $N_2$ as a matrix like this:
$$\begin{pmatrix}1&2&\cdots& n\\1&2&\cdots& n\end{pmatrix}$$
Choosing a set $X\in C$ is choosing $n$ terms of the matrix. This leaves $k$ terms selected in the first row and $k$ terms unselected in the second row. This is like selecting two subsets of the same size from $N$, which is like selecting an element from $B$.
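A quick numerical check of $|B|=|C|$ for a small $n$:
from math import comb

n = 4
print(sum(comb(n, k) ** 2 for k in range(n + 1)), comb(2 * n, n))  # 70 70 |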
Primitive solutions to $a^2 + 4b^2 = c^2$ | $$a^2+4b^2=c^2\iff a^2+(2b)^2=c^2\iff \\ a=m^2-n^2, \ b=mn , \ c=m^2+n^2 , \ \ (m,n)=1 , m-n>0.$$ |
limit of recursively given seqeuence | You have correctly shown that $x_n$ is decreasing and bounded above by b.
Can you show that it is bounded below as well?
To find the value it converges to, we assume that it converges to a value $x$, say.
Then $x_{2n}=x_{2n+1}=x$
$\frac{x+a}2=\frac{x+\sqrt{ax}+a}3$
$3(x+a)=2(x+\sqrt{ax}+a)$
$3x+3a=2x+2\sqrt{ax}+2a$
$x+a=2\sqrt{ax}$
$(x+a)^2=4ax$
$x^2+2ax+a^2=4ax$
$x^2-2ax+a^2=0$
$(x-a)^2=0$
$x=a$ |
Does $\frac{\text{clockrate}(P_2)}{\text{clockrate}(P_1)}=1.37$ say that $P_2$ is $37\%$ faster than $P_1$, or the opposite? | Since $$P_2 = 137\% \cdot P_1$$ and we are talking about rates, this means that $P_2$ is faster! |
Construct sequence $u_j$ that converges to function $u$ | Let $x \in \Bbb{R}$ then exists $N \in \Bbb{N}$ such that $x<n, \forall n \geq N$
So $u_n(x)=51_{[0,1]},\forall n \geq N$ thus $u(x)-u_n(x)=0, \forall n \geq N$
So $u_n(x) \to u(x)$ pointwise |
I need help understanding the answer of combinatorics problem | There are $13$ different ranks (numbers) and you must choose two different ranks.
This is where the $\binom{13}{2}$ comes from.
You must also choose one of $4$ colors for each of the chosen ranks.
This is where the $\times4\times4$ comes from. |
Factoring polynomials over finite fields | Your polynomial is equivalent to:
$$
x^4+x-1 \equiv x^4+x+1 \pmod 2
$$ |
Count sets of pairwise non-containing sets | If I understand your question correctly, you are referring to an anti-chain: a collection of sets $S$ so that no two sets contained in the collection are comparable, i.e. no $A, B\in S$ with $A\neq B$ and $A\subset B$.
The opposite of an anti-chain is a chain, a set $S$ of sets for which every pair of sets $A,B\in S$ either satisfy $A\subset B$ or $B\subset A$. |
Proving non-contradiction in a natural deduction system | Assume $\lnot (\phi \lor \lnot \phi)$.
Then assume $\phi$ and derive $(\phi \lor \lnot \phi)$ by (1).
Thus, $\lnot (\phi \lor \lnot \phi), \phi \vdash (\phi \lor \lnot \phi)$.
But also $\lnot (\phi \lor \lnot \phi), \phi \vdash \lnot (\phi \lor \lnot \phi)$.
So, applying (4): $\lnot (\phi \lor \lnot \phi) \vdash \lnot \phi$.
But $\lnot \phi \vdash \phi \lor \lnot \phi$.
Thus, $\lnot (\phi \lor \lnot \phi) \vdash \phi \lor \lnot \phi$.
From this and $\lnot (\phi \lor \lnot \phi) \vdash \lnot (\phi \lor \lnot \phi)$, using again (4) we conclude with:
$\vdash \lnot \lnot (\phi \lor \lnot \phi)$
and the result follow by (5). |
When is the sum of a quasi-convex function and a convex function quasi-convex? | This is not always true. In fact, constructing a counterexample is an exercise in Bauschke & Combettes' book (volume 2), Exercise 10.15. Without seeing your particular functions, it is hard to say whether or not their sum will be quasiconvex. |
measure $\lambda(E)=0$ or $\lambda(E)=+\infty$ | Assuming that $P_n$ is positive set in the decomposition for $\lambda-n\mu$, note that for each $n$,
$$
(\lambda-n\mu)(N_n)\leq 0
$$
Since $0\leq \mu(N_n)<\infty$, this is equivalent to $\lambda(N_n)\leq n\mu(N_n)<\infty$, that is, each $N_n$ has finite measure with respect to $\lambda$. It follows that $N$ is $\sigma$-finite.
Next, let $E$ be a measurable subset of $P$. Since $E\subset P_n$ for each $n$ and $P_n$ is a positive set with respect to $\lambda-n\mu$ then
$$
0\leq (\lambda-n\mu)(E)
$$
Again, since $0\leq \mu(E)<\infty$, this is equivalent to $n\mu(E)\leq \lambda(E)$, and this is valid for every positive integer $n$. If $\mu(E)=0$, then $\lambda(E)=0$ because $\lambda\ll\mu$. If $0<\mu(E)$ then $\lambda(E)=\infty$ because in this case we can make $n\mu(E)$ as big as we want and the inequality $n\mu(E)\leq \lambda(E)$ will always hold. |
How to give a consistent mathematical explanation to a paradoxical divergent/convergent phenomena. | $\frac{\sin{x}}{x} \to 1$ when $x \to 0$, so $\cos(90^\circ-x)=\sin{x} \approx x$, meaning that when you're $10$ times closer to $90^\circ$, cosine is $10$ times smaller and the tangent is $10$ times bigger, because sine is approximately equal to one. |
Why frattini sub group is important? | There is no short answer for all your questions. Can one compute the Frattini subgroup? Yes, there are algorithms for doing it. I will not go into detail, but just give a reference by Bettina Eick. For cyclic groups one should know the Frattini subgroup.
For example, $\Phi(C_p)=1$ for primes $p$ and $\Phi(C_{p^a})\cong C_{p^{a-1}}$.
Other examples are:
$$
\Phi(S_n)=1,\; \Phi(D_n)=1 \text{ for squarefree } n,\; \Phi (C_{p} \wr C_{p})\cong [C_{p} \wr C_{p},C_{p} \wr C_{p}]
$$
Why are Frattini subgroups important? See the following references:
Intuition behind the Frattini subgroup
Some questions on the Frattini subgroup
Does every finite nilpotent group occur as a Frattini subgroup? |
Is there an intuitive proof of the identity $ \sum_{L \subset S} \prod_{x \in L} (x-1) = \prod_{x \in S} x$ from general principles? | Let $S'$ be $\{x : x+1 \in S\}$, in other words we shift $S$ down by one.
The equality
$$ \sum_{L \subset S} \prod_{x \in L} (x-1) = \prod_{x \in S} x$$
reduces to
$$ \sum_{L \subset S'} \prod_{x \in L} x = \prod_{x \in S'} (x+1)$$
but this is just distributivity: the LHS is the expanded RHS.
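A small numerical check on a concrete set:
from itertools import combinations
from math import prod

S = [3, 5, 8]
lhs = sum(prod(x - 1 for x in L)
          for r in range(len(S) + 1) for L in combinations(S, r))
print(lhs, prod(S))  # 120 120 |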
Area of sub-triangle inside a triangle | Hint: use the ratio of areas of $2$ triangles. Let's use your diagram to calculate the red area, and you can generalize it easily. The red area is half of the area of the triangle whose vertices are at $0,2,2$, whose area in turn is $\frac{2}{5}$ of the area of the triangle whose vertices are at $0,2,5$, which in turn has half the area of the triangle whose vertices are at $0,4,5$. Hope it helps. |
Question About Dual Vector Spaces and the Adjoint Map | In this context, the annihilator $U^\perp$ of $U$ is defined as the kernel of $I^\ast : V^\ast \to U^\ast$, and hence, in particular, is a subspace of $V^\ast$, making it a set of functionals on $V$.
Now, what you're thinking of is the orthogonal complement of $U$ in $V$, in the case that $V$ is an inner product space; the point is that the annihilator $U^\perp \subset V^\ast$ is the generalisation of the orthogonal complement $U^\perp \subset V$ in the absence of an inner product on $V$. So, in the finite dimensional case, for simplicity, an inner product $\langle \cdot,\cdot \rangle$ defines an isomorphism $R : V \to V^\ast$, conjugate-linear if you're working over $\mathbb{C}$, by $v \mapsto \langle v,\cdot \rangle$. You can then check that $R$ restricts to an isomorphism
$$
\text{orthogonal complement of $U$ in $V$} \to \text{annihilator of $U$ in $V^\ast$}.
$$ |
The Matrix of an Equivalence Relation | For $n\gt1$, there is more than one possible equivalence relation on a set with $n$ elements. Obviously they can’t all be represented by the same matrix. In particular, the “minimal” equivalence relation on $S$ is $R=\{(x,x) \mid x\in S\}$. This relation is represented by the $n\times n$ identity matrix.
It’s a worthwhile exercise to work out what properties the representative matrix must have. The reflexive and symmetric properties are pretty easy to translate into statements about the matrix, but transitivity might be a bit tricky. |
Does an Eulerian semi-graceful polyhedral graph exist? | First we show that there is no Eulerian graceful polyhedral graph.
Suppose such a polyhedral graph can be created. For a polyhedral graph we need minimum degree 3. Since each vertex except the endpoints has even degree, this means that for $n$ vertices we will need at least $2n-1$ edges. Even in the best case the last edge must go from $2n-1$ to 0, but this forces the edge of length $2n-2$ to vertex 1, etc.
In fact this forces so many vertices that you end up with more than $n$ forced vertices. Contradiction.
As an example, consider the case $n=5$. Walking backwards from the highest edge
we see that it must visit vertices $0,9,1,8,2,7,3$ (or an analogous sequence if we start with $9,0$). This is the first time we actually have a choice: the next edge can go to $0$ or to $6$, but we have already visited $7$ distinct vertices, more than $n=5$. Clearly the situation gets even worse if our vertices have a higher degree.
The same argument confirms Ed's observation that there must always be a vertex of degree at most 2.
Answer to one of the other questions:
"Is there a graceful Eulerian labelling for anything other than the path graph?"
Yes, there is. Use vertices with labels $0,1,2,5,6$ and use the path $1,0,2,5,1,6,0$ (the path implies the edges). Since vertices 1 and 0 are used twice, this is not a path graph.
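A quick Python check of this example:

walk = [1, 0, 2, 5, 1, 6, 0]
edges = [abs(a - b) for a, b in zip(walk, walk[1:])]
print(sorted(edges) == list(range(1, 7)))  # True: edge lengths 1..6, each used once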
An "algorithm" to produce lots of them could be the following ("just focus on the edges, do not care about vertex labels").
1. Start with an arbitrarily long path using increasing edge labels $1,\ldots,n$: begin with vertex label 0 and at each step decide arbitrarily whether to add or subtract the edge label to obtain the next vertex label.
2. Add an arbitrary positive constant that is large enough to make all your vertex labels non-negative.
3. Add a "positive path" (always adding your current edge label to the last vertex label) until your vertex label is at least half of the maximum value of your vertex labels so far.
4. Zigzag back to zero, i.e. alternate adding and subtracting the next edge label. Because you started at at least half of the maximum value, this guarantees that your last edge will be from the highest value to zero.
5. Stop, or go back to step 3.
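A minimal Python sketch of these steps, assuming a single pass (stopping rather than looping back to step 3); the function name and the random choices are mine, and the sketch does not check that the same edge is never produced twice:

import random

def graceful_eulerian_walk(k=3):
    walk, e = [0], 1
    # Step 1: arbitrary start: add or subtract edge labels 1..k at random.
    for _ in range(k):
        walk.append(walk[-1] + random.choice([1, -1]) * e)
        e += 1
    # Step 2: shift by a positive constant so all labels are non-negative.
    shift = max(1, 1 - min(walk))
    walk = [v + shift for v in walk]
    # Step 3: "positive path" until the current label is at least half
    # of the maximum label so far.
    while 2 * walk[-1] < max(walk):
        walk.append(walk[-1] + e)
        e += 1
    # Step 4: zigzag back to zero; each add/subtract pair nets -1, and the
    # peaks increase, so the last edge runs from the highest label to 0.
    while walk[-1] != 0:
        walk.append(walk[-1] + e); e += 1
        walk.append(walk[-1] - e); e += 1
    return walk  # vertex labels along the walk; edge lengths are 1, 2, 3, ...

print(graceful_eulerian_walk())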
Local martingale in change of measure | Note that if $M_t$ is a continuous local martingale, and $h_t$ an $\mathbf{F}^M$-predictable function, the stochastic exponential defined by
$$
\mathcal{E}(hM)_t:=\exp\left(\int_0^t h_s\,dM_s - \frac{1}{2}\int_0^t h_s^2\,d\langle M\rangle_s\right)
$$
is a positive local martingale (and it is a true martingale iff $\mathbb{E}[\mathcal{E}(hM)_t]=1\;\forall t$).
However, there are various conditions that ensure that the stochastic exponential is actually a true martingale. For instance, when $h$ is a bounded function and $W_t$ is a Brownian motion, you have the following:
Let $\zeta_t=\int_0^t h(s)dW_s$, where $h$ satisfies
$$
\sup_{s\le t}\,|h(s)|\le C.
$$
Then the process $Z_t=\exp(\zeta_t -\frac{1}{2}\langle\zeta\rangle_t)$ is a martingale.
See, e.g., Revuz & Yor (1999).
Is $2^{O(n)}$ the same as $O(2^n)$ for big O time complexity? | No: $4^n = 2^{2n}$ is $2^{O(n)}$ but not $O(2^n)$, for example. |
Comparing expected counts to observed counts | Chi-squared GOF test. You don't say how many dishes $n$ are involved in the percentage
data provided. I suppose you intend to do a chi-squared goodness-of-fit (GOF) test. If so, the 'expected' counts for the various dishes
would be $E_1 = .25n,\; E_2 = .20n,$ and so on. (Note: Do not round expected counts to integers.)
If $n$ is large
enough that all seven of the $E_i > 5,$ then you can use each
dish as a category, and the degrees of freedom for the approximating
chi-squared distribution would be $k - 1 = 7 - 1 = 6.$ (If some of
the $E_i$ are too small, combine dishes into 'categories' of
two or more until you get all $E_i$ large enough. Then $k$
will be reduced accordingly, and the degrees of freedom also.
For example, with $n = 55,$ you would have to combine dishes 5 and 6 into
a single category with 11%.)
GOF test statistic. These expected counts are to be compared with the observed counts
$X_i,$ which are the numbers of each type of dish, not the
proportions. Then the chi-squared GOF statistic is
$$Q = \sum_{i=1}^k \frac{(X_i - E_i)^2}{E_i},$$
which is approximately distributed as $Chisq(df = k - 1).$
Larger values of this computed value of $Q$ indicate poorer
fit to your probability model upon which the $E_i$ are based.
The question is how large $Q$ can be before the fit is so
bad that you 'reject' the model.
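For instance, in Python with SciPy (hedged: only the 25%, 20%, and the combined 11% for dishes 5 and 6 appear above; the remaining proportions and the observed counts below are made up for illustration):

from scipy import stats

p = [0.25, 0.20, 0.15, 0.12, 0.06, 0.05, 0.17]  # model proportions (partly made up)
obs = [35, 28, 14, 15, 5, 8, 15]                # hypothetical observed counts
n = sum(obs)
exp = [n * pi for pi in p]                      # expected counts, not rounded

Q = sum((x - e) ** 2 / e for x, e in zip(obs, exp))
Q2, pval = stats.chisquare(obs, f_exp=exp)      # same statistic; df = k - 1 = 6
print(Q, Q2, pval)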
P-values. Now, for the p-value. Suppose you have $df = 6$ and $Q = 10.7.$
The p-value is the area under the density curve of $Chisq(6)$
to the right of 10.7.
You can use a printed 'chi-squared' table to 'bracket' the
p-value within 'bounds'. In the table I'm looking at, I notice
that the row for $df = 6$ has entries for 10.64 and 12.59.
Headers of the corresponding columns show right-tail probabilities
.10 and .05, respectively. So you can say the p-value is
between .10 and .05; a lot closer to .10 than to .05, but
still not exact. Thus, you would not reject (at the 5% level) the
model consisting of the percentages given at the start of your Question.
If you have software available, you can use the CDF function
to find the exact probability below 10.7 (and subtract from 1). In R statistical
software the CDF for chi-squared distributions is called pchisq,
so the following R code gives the desired result: the exact
p-value is .098 (which is indeed between .10 and .05).
1 - pchisq(10.7, 6)
## 0.09810273
Illustrating tail areas under PDF curve. The figure below shows the PDF of $Chisq(6)$. Vertical dotted red lines
are at 10.64 and 12.59 (the values found in row $df = 6$ of
the table); areas under the curve to the right of these lines are
.10 and .05, respectively. The vertical blue line is at $Q = 10.7$, and the area to the right of this line is the p-value .098.
Note: Analysis of variance (ANOVA) has nothing to do with this
problem. |
Cryptography/modulus equations help | If the clear-text digit is $x$ and the encrypted digit is $y\equiv 7x\mod{26}$, then for example $E=4$ maps to $y = 7\cdot 4 = 28\equiv 2\mod{26}$, so the encrypted value is C.
If you mean for the clear-text value to be $y$ and the encrypted value for $x$, then you must first solve $y\equiv 7x\mod{26}$ for $x$: $x = 7^{-1}\cdot y\mod{26}$, and now you must find the multiplicative inverse of $7$ modulo $26$. |
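In Python 3.8+, for instance, pow computes the modular inverse directly (the helper name decrypt is mine):

inv7 = pow(7, -1, 26)   # 15, since 7 * 15 = 105 = 4 * 26 + 1

def decrypt(y):
    return (inv7 * y) % 26

print(decrypt(2))       # 4: the encrypted 'C' maps back to 'E'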
Theorem for existence of a solution and existence of a unique solution. | For 2. the correct answer is "False", but your explanation is not correct.
Indeed, if $f$ is Lipschitz, then there exists a unique solution, but there are cases of uniqueness also with $f$ only continuous. Lipschitz continuity is a sufficient condition for (existence and) uniqueness, not a necessary one.
For 3., as you said, the correct answer is (iii). |
Any equivalent process to a right continuous process has a right continuous modification | I assume (as in Ethier-Kurtz) that $X$ and $Y$ take values in a metric space $E$. In fact, to be able to apply a known result later, I suppose that $E$ is complete and separable. Let $\Bbb Q_+$ denote the non-negative rational numbers, and let $H:=E^{\Bbb Q_+}$ denote the space of paths from $\Bbb Q_+$ to $E$ with associated product $\sigma$-algebra $\mathcal H:=\mathcal E^{\otimes\Bbb Q_+}$. (Here $\mathcal E$ is the Borel $\sigma$-algebra on $E$.) Let $(\Omega_1,\mathcal F_1,P_1)$ denote the probability space on which $Y$ is defined. Then $Y$, viewed as a map $\omega\mapsto(Y_s(\omega):s\in\Bbb Q)$ from $\Omega_1$ to $H$, induces a probability measure $P_Y$ on $(H,\mathcal H)$: $P_Y(B)=P_1(\{\omega\in\Omega_1: Y(\omega)\in B\})$, $B\in\mathcal H$.
Likewise, $X$ induces $P_X$, and $P_Y=P_X$ because $X$ and $Y$ are equivalent.
Now define $G:=\{x\in H: x$ is the restriction to $\Bbb Q$ of a right-continuous map of $[0,\infty)$ into $E\}$. It is known that $G$ is in the completion of $\mathcal H$ for each probability measure on $(H,\mathcal H)$ (see, for example, Theorem IV-18 in volume A of Probabilities and Potential by Dellacherie and Meyer), so writing $\overline P_X$ for the completion of $P_X$, one checks that $\overline P_X(G)=1$ because $X$ has right-continuous paths. It follows that $\overline P_Y(G)=1$ as well. Consequently there exists $G_0\in\mathcal H$ with $G_0\subset G$ and $P_Y(G_0)=1$.
Finally, define $Z_t(\omega):=\lim_{s\downarrow t,s\in\Bbb Q}Y_s(\omega)$, $t\ge 0$, if $\omega\in Y^{-1}(G_0)$, and $Z_t(\omega):=e_0$ for all $t\ge 0$ if $\omega\notin Y^{-1}(G_0)$, where $e_0$ is a fixed point of $E$.
(I suspect this is not the argument envisioned by Ethier-Kurtz for an exercise appearing so early in their treatment, but a simpler proof does not occur to me.) |
Set addition between A and B | Let $(u,v) \in A+B$; then there exist $(x,y) \in A$ and $(t,t) \in B$ with $x^2+y^2 \leq 1$ and $0 \leq t \leq 1$ such that
$$(u,v)=(x,y)+(t,t)=(x+t,y+t).$$
Then $x=u-t$ and $y=v-t$. Furthermore
$$x^2+y^2 \leq 1\implies (u-t)^2+(v-t)^2 \leq 1, \qquad \text{ with } 0 \leq t \leq 1. $$ |
Calculate the determinant when the sum of odd rows $=$ the sum of even rows | To row 2 add row 4, then add row 6, then add row 8, $\ldots$. In the end row 2 will be a row that will equal the sum of all the even rows. If you want you can think of it as a vector equalling the sum of a bunch of vectors, but what's important here is it's a row of a matrix.
Repeat with row 1 and the odd rows, and row 1 will agree with row 2.
More details added: Let $A=\left(\begin{smallmatrix}1&2&3&4\\2&3&3&2\\5&6&5&6\\4&5&5&8\end{smallmatrix}\right)$. Then the first and third rows add up to $(\begin{smallmatrix}6&8&8&10\end{smallmatrix})$, as do the second and fourth rows. |
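A quick numeric check of this example (the determinant is zero only up to floating-point rounding):

import numpy as np

A = np.array([[1, 2, 3, 4],
              [2, 3, 3, 2],
              [5, 6, 5, 6],
              [4, 5, 5, 8]])
print(A[0] + A[2], A[1] + A[3])  # both [ 6  8  8 10]
print(np.linalg.det(A))          # 0 up to rounding: the rows are dependent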
Solving a System of Quadratic Equations for Sound Triangulation | Solving systems of quadratic and, indeed, general polynomial equations is possible with techniques like Buchberger's algorithm. See the first two chapters of the book by Cox et al. There are also many software tools to assist you with this. SymPy in Python is one alternative (the one I'd recommend), and there is also one I wrote in C# following the textbook cited.
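For instance, a toy system in SymPy; the system below is mine, chosen for illustration, and groebner computes a Gröbner basis in the spirit of Buchberger's algorithm:

import sympy as sp

x, y = sp.symbols('x y')
eqs = [x**2 + y**2 - 5, x*y - 2]
print(sp.groebner(eqs, x, y, order='lex'))  # a triangular (lex) Groebner basis
print(sp.solve(eqs, [x, y]))  # (1, 2), (2, 1), (-1, -2), (-2, -1) in some order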
Convert IVP to an equivalent Volterra integral equation | As written above, both solutions are wrong. To verify your solution, differentiate it once; you will get $u'(x) = 1 - \cos x + x \Bbb e ^x + u(x) \sin x$. Differentiating a second time: $u'' (x) = \sin x + \Bbb e ^x + x \Bbb e ^x + u'(x) \sin x + u(x) \cos x$. This clearly does not look like your original equation.
Concerning your instructor's solution, differentiating it once gives $u'(x) = x^2 - 1 + u(x) \sin x$; differentiating once more gives $u'' (x) = 2x + u(x) \cos x + u' (x) \sin x$. Again, this is very far from the given equation.
Maybe you have made some typing mistakes when writing the above formulae, or maybe you have not taken notes correctly. As you are going to see next, your instructor's formula is very close to the correct one.
Let us start by integrating the given equation; we get
$$u'(y) - u'(0) = \int \limits _0 ^y u' (t) \sin t \,\Bbb d t - \int \limits _0 ^y \Bbb e ^t u(t) \,\Bbb d t + \int \limits _0 ^y t \,\Bbb d t .$$
Using that $u'(0) = -1$ and integration by parts in the first integral we get
$$u'(y) = u(y) \sin y - \int \limits _0 ^y u(t) (-\cos t) \,\Bbb d t - \int \limits _0 ^y \Bbb e ^t u(t) \,\Bbb d t + \frac {y^2} 2 -1 ,$$
or equivalently
$$u'(y) = \frac {y^2} 2 -1 + u(y) \sin y + \int \limits _0 ^y (\cos t - \Bbb e ^t) u(t) \Bbb d t .$$
Now, let us integrate this once more:
$$u(x) - u(0) = \int \limits _0 ^x \left( \frac {y^2} 2 -1 \right) \Bbb d y + \int \limits _0 ^x u(y) \sin y \,\Bbb d y + \int \limits _0 ^x \int \limits _0 ^y (\cos t - \Bbb e ^t) u(t) \,\Bbb d t \,\Bbb d y .$$
We shall now use $u(0) = 1$ and we shall change the order of integration in the last integral in order to get
$$u(x) = \frac {x^3} 6 - x + 1 + \int \limits _0 ^x u(y) \sin y \Bbb d y + \int \limits _0 ^x \int \limits _t ^x (\cos t - \Bbb e ^t) u(t) \Bbb d y \Bbb d t .$$
Since the function in the last integral does not depend on $y$, if we change the letter from $y$ to $t$ in the first integral we get
$$u(x) = \frac {x^3} 6 - x + 1 + \int \limits _0 ^x \Big( \sin t + (x-t) (\cos t - \Bbb e ^t) \Big)u(t) \Bbb d t .$$
As you can see, the only differences between the instructor's formula and the correct one are the denominator ($6$ instead of $3$) and the sign in front of $\cos t$. |
Different definitions of a relatively compact operator | Both definitions aim to make the same statement -- the operator $KT^{-1}$ is compact. But we cannot really do that, because $T^{-1}$ does not necessarily exist, so in (i) we replace the inverse by the resolvent and in (ii) we write the more usual definition of compactness; if $(Tx_n)$ is bounded then $KT^{-1}(Tx_n)= Kx_n$ should contain a convergent subsequence.
Actually, (i) is equivalent to a version of (ii), where we only allow bounded sequences $(x_n)$. Indeed, to see that (ii) implies (i), pick a bounded sequence $(y_n)$. Then the sequence $x_n:= (T-z)^{-1}y_n$ is bounded as well, and $Tx_n= (T-z)x_n + zx_n = y_n + zx_n$ is bounded. By (ii), $(Kx_n)$ contains a convergent subsequence, but $Kx_n = K(T-z)^{-1}y_n$, so we showed compactness of $K(T-z)^{-1}$.
For the other direction, note that if the sequences $(x_n)$ and $(Tx_n)$ are bounded, then $((T-z)x_n)$ is bounded as well, so by (i) $K(T-z)^{-1} (T-z)x_n = Kx_n$ contains a convergent subsequence.
As stated in your question, the definitions are not equivalent (but (ii) still implies (i)). Note that (i) does not imply (ii), because any compact operator satisfies (i), but not necessarily (ii) (look at $T=0$).
Answer cross-posted from Math Overflow. |
Circle Tangents Questions | HINTS.
(1) Triangles $CBA$ and $ABD$ are similar. Exploit that to find $BD$.
(2) The incenter of $ABC$ lies inside $DEF$, because it is the intersection of the angle bisectors. |
Fixed Point Iterations of F(X) = (S + KX) / (K + X), S > 0 | Solve
$\displaystyle \frac{S+KX}{K+X} = \frac{S+K(\frac{S+KX}{K+X})}{K+(\frac{S+KX}{K+X})} $
i.e., the fixed point is such that an iteration of your function is equal to the next iteration of your function.
Better yet, solve:
$\displaystyle X = \frac{S+KX}{K+X}$ |
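Cross-multiplying the last equation gives $X(K+X) = S + KX$, i.e. $X^2 = S$, so the (positive) fixed point is $\sqrt S$. A quick numeric illustration in Python (the values of $S$, $K$ and the starting point are arbitrary):

def step(x, S, K):
    return (S + K * x) / (K + x)

S, K, x = 2.0, 3.0, 1.0
for _ in range(25):
    x = step(x, S, K)
print(x, S ** 0.5)  # both ~1.4142135623..., i.e. sqrt(S)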
The order of an element in a group | No, $\mathbb Z_{24}$ is an additive group and contains the element zero. What you are looking for here are numbers $k$ such that $5k\equiv 0\mod 24$.
Find minimum and maximum given $x+y+z = 10$ and $x^2+y^2+z^2 = 36 $. | Squaring the first equation, $$x^2+y^2+z^2+2(xy+yz+zx)=100$$
From this equation, subtract the second equation, so we are left with,
$$2(xy+yz+zx)=64\\(xy+yz+zx)=32$$
Thus let $x,y,z$ be the roots of a cubic equation $P(a)=0$; by Vieta's formulas,
$$a^3-10a^2+32a-p=0,$$ where $p=xyz$.
Now you can look for the maxima and minima of $a^3-10a^2+32a$ and hence of $p$. Naively, the maxima and minima come out to be infinitely large. Why?
This is because an unrestricted $a$ corresponds to complex values of the variables $(x,y,z)$. For real values of the variables, the cubic must have three real roots, which happens exactly when $p$ lies between the local extreme values of $g(a)=a^3-10a^2+32a$. Differentiating and equating to zero gives
$$3a^2-20a+32=0$$ with roots $a=\frac{8}{3}, 4$. Evaluating $g$ there yields, over the real domain, the minimum $g(4)=32$ and the maximum $g(8/3)=\frac{896}{27}\approx 33.185$ for $p=xyz$.
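As a quick check, the extremes are attained at the double roots $a=4$ and $a=8/3$ of the cubic, giving the triples $(4,4,2)$ and $(8/3,8/3,14/3)$ (my own computation, not part of the original answer):

for x, y, z in [(4, 4, 2), (8/3, 8/3, 14/3)]:
    print(x + y + z, x**2 + y**2 + z**2, x * y * z)
# -> 10, 36, 32 (the minimum of xyz)
#    10, 36, 33.185... (the maximum, 896/27), up to rounding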
Proving the differentiablity of a function. | The direct answer to your question is no. Continuity of $\frac{d}{dx}f(x)$ at a point $c$ does not imply that $f(x)$ is differentiable at $c$. Furthermore, you must first prove that $f(x)$ is differentiable before worrying about the continuity of $\frac{d}{dx}f(x)$. Continuity does not imply differentiability.
You may be confused with the fact that if $f(x)$ is differentiable at point $c$, then $f(x)$ is in fact continuous at point $c$. This does not always work in reverse.
Here is how you can show that $f(x)$ is differentiable
The function $a:\mathbb{R}\to\mathbb{R}$ defined by $a(x)=x(x+3)=x^2+3x$ is differentiable at every $c\in\mathbb{R}$ with the derivative $\frac{d}{dx}a(x)=2x+3$, since
$$
\frac{d}{dx}a(c)=\lim_{h\to 0} \left[\frac{(c+h)^2+3(c+h)-c^2-3c}{h}\right]=\lim_{h\to 0} \left[\frac{c^2+2ch+h^2+3c+3h-c^2-3c}{h}\right]
$$
$$
=\lim_{h\to 0} \left[\frac{2ch+h^2+3h}{h}\right]=\lim_{h\to 0} \left[2c+h+3\right]= 2c+3
$$
The function $b:\mathbb{R}\to\mathbb{R}$ defined by $b(x)=e^{-\frac{x}{2}}=\frac{1}{\sqrt{e^x}}$ is differentiable at every $c\in\mathbb{R}$ with the derivative $\frac{d}{dx}b(x)=-\frac{1}{2}e^{-\frac{x}{2}}=-\frac{1}{2\sqrt{e^x}}$, since
$$
\frac{d}{dx}b(c)=\lim_{h\to 0} \frac{\frac{1}{\sqrt{e^{c+h}}}-\frac{1}{\sqrt{e^c}}}{h}=\lim_{h\to 0} \left[\frac{1}{h\sqrt{e^c}\sqrt{e^h}}-\frac{1}{h\sqrt{e^c}}\right]
$$
$$
=\frac{1}{\sqrt{e^c}}\lim_{h\to 0} \left[\frac{1}{h\sqrt{e^h}}-\frac{1}{h}\right]=\frac{1}{\sqrt{e^c}}\lim_{h\to 0} \left[\frac{1-\sqrt{e^h}}{h\sqrt{e^h}}\right]=\frac{1}{\sqrt{e^c}}\lim_{h\to 0} \left[\frac{\frac{d}{dh}\left[1-\sqrt{e^h}\right]}{\frac{d}{dh}\left[h\sqrt{e^h}\right]}\right]
$$
$$
=\frac{1}{\sqrt{e^c}}\lim_{h\to 0} \left[\frac{-\frac{\sqrt{e^h}}{2}}{(h+2)\frac{\sqrt{e^h}}{2}}\right]=-\frac{1}{\sqrt{e^c}}\lim_{h\to 0} \left[\frac{1}{h+2}\right]=-\frac{1}{2\sqrt{e^c}}
$$
Since $f(x)=a(x)\cdot b(x)=(x^2+3x)e^{-\frac{x}{2}}$ and both $a(x)$ and $b(x)$ are differentiable at every $c\in\mathbb{R}$, then $f(x)$ is also differentiable at every $c\in\mathbb{R}$. |
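A one-line symbolic cross-check with SymPy; this only confirms the value of the derivative, it is not a substitute for the limit arguments above:

import sympy as sp

x = sp.symbols('x')
f = (x**2 + 3*x) * sp.exp(-x/2)
print(sp.simplify(sp.diff(f, x)))  # e.g. (-x**2 + x + 6)*exp(-x/2)/2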
Relation Big-O and limit of a function? | Since I'm a mathematics major, this answer will take a more mathematical perspective. As a first resource, you should look at the wiki page. It is a great resource.
After this, I would like to say that, at least from a math perspective, $f(x) \neq O(g(x))$, generally. There is a great discussion in the wiki article on this.
What relates to limits in the expression "$f(x) = O(g(x))$"? Well, motivation for this starts when we would like to talk about the behavior of a function, $f$, as values of $x$ get arbitrarily large. Thus, it is often meant by $f(x) = O(g(x))$, that there exists an $M \in \mathbb{R}$ such that $$f(x) \leq M \cdot g(x)$$ for all $x \geq x_0$ for some $x_0 \in \mathbb{R}$. In particular, this is true when $x \rightarrow \infty$; the most straightforward examples are when $g$, $f$ are polynomials.
As a more accurate description of $f(x) = O(g(x))$, we should say $f(x) \in O(g(x))$, where $O(g(x))$ is a set. In particular, it is the set which describes a family of functions which have a "leading" term, $g(x)$: "thinking of $O(g(x))$ as the class of all functions $h(x)$ such that $|h(x)| ≤ C|g(x)|$ for some constant $C$". This makes much more sense and answers your last question.
In the context of computer science, big-O notation is probably used too liberally. |
What is the probability that a player does not have at least 1 card of each suit with a 52-card deck? | Perhaps there's some smart way to count the number of permutations of the 52 cards such that each player gets at least one card of each suit, which is the complement of what is asked for. But sometimes we do not need the exact solution and a simulation can give a pretty good approximation. In this case, with $10^6$ samples, we can estimate the probability being asked for as about $0.184171$. I bet this is pretty close to the exact result.
Here's the code in Mathematica
In[2]:= n = 52
Out[2]= 52
In[13]:= n0 = n/4
Out[13]= 13
In[3]:= l = Range[n]
Out[3]= {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, \
18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, \
35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52}
In[14]:= ll = Partition[l, n0]
Out[14]= {{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13}, {14, 15, 16,
17, 18, 19, 20, 21, 22, 23, 24, 25, 26}, {27, 28, 29, 30, 31, 32,
33, 34, 35, 36, 37, 38, 39}, {40, 41, 42, 43, 44, 45, 46, 47, 48,
49, 50, 51, 52}}
In[39]:= containAll[sample_] := Module[{i, sl, crossl},
sl = Partition[sample, n0];
crossl = Tuples[{sl, ll}];
If[And @@ Map[ContainsAny[#[[1]], #[[2]]] &, crossl], 1, 0]
]
In[53]:= s0 =
Total[Parallelize[
Map[(Permute[l, #] // containAll) &,
RandomPermutation[n, 1000000]]]] // AbsoluteTiming
Out[53]= {191.22, 815829}
In[59]:= 1 - 815829/1000000 // N
Out[59]= 0.184171 |
$l^p$ norms and the relationship between $l^{p_1}$ and $l^{p_2}$ where $p_1<p_2$ | The key point to use is that ($p < \infty$ here) $$a \in \ell^p \implies a_n \to 0 \implies |a_n| < 1 \text{ for all large $n$}$$
Then since $p_2 > p_1$, it's true that $|a_n|^{p_2} < |a_n|^{p_1}$, so we can just compare directly. The finitely many terms that are large don't affect convergence, so we're done. |
Stirling numbers of the second kind with max cardinality | I Googled restricted partitions stirling and the third hit was http://dlmf.nist.gov/26.9
Looks pretty thorough. |
Infinitude of prime numbers | This page at MIT purports to be an attempt to collect as many proofs as possible. But I suspect it has been left unfinished because it's not very long and there are lots of proofs of this theorem.
This page presents several proofs and could probably be expanded. |
Why Is $y^{-1}$ = $\frac{1}{y^1}$? | If you want to understand these fundamentals and the book you are reading is asking you to just believe this or that 'fact', then just get rid of the book and find a good one. You are not supposed to just believe anything. There are plenty of books that will prove the fundamental results as well. Look for introductory books on analysis and/or algebra (not pre-calc or pre-algebra) and enjoy!
For now: $(-1)(-1)+(-1)=(-1)\cdot ((-1)+1)$ by distributivity. Then $(-1)\cdot ((-1)+1)=(-1)\cdot 0=0$, hence $(-1)(-1)=1$ (but now you may want to know why $a\cdot 0=0$; indeed, $a\cdot 0=a\cdot(0+0)=a\cdot 0+a\cdot 0$, and subtracting $a\cdot 0$ from both sides gives $a\cdot 0=0$).
As for $y^{-1}$, that is just defined, for $y\ne 0$, to be $1/y$. The reason is that it agrees with the familiar rules for exponentiation: for instance, $y^{-1}\cdot y^{1}=y^{-1+1}=y^{0}=1$ forces $y^{-1}=1/y$. So there is nothing arbitrary in this definition either.
Linear function definition | If $A$ and $B$ are vector spaces (i.e., sets whose elements you can add to and subtract from one another, and which you can multiply with elements from some field $K$, e.g. $\mathbb{R}$, $\mathbb{Q}$, $\mathbb{C}$, $\ldots$), then $$
f \,:\, A \to B
$$
is called linear if $$\begin{eqnarray}
f(x + y) &=& f(x) + f(y) &\text{for all $x,y \in A$,} \\
f(\lambda x) &=& \lambda f(x) &\text{for all $x \in A$, $\lambda \in K$.}
\end{eqnarray}$$
Note that you don't really need $A$ and $B$ to be vector spaces here, though - the definition is sensible as long as there are operations $+ \,:\, A\times A \to A$ and $+ \,:\, B\times B \to B$, and (for the second part) also operations $\cdot \,:\, K\times A \to A$ and $\cdot \,:\, K\times B \to B$.
Note that the graph of a linear function $f \,:\, \mathbb{R} \to \mathbb{R}$ is necessarily a line through the origin. Thus, not every function $\mathbb{R} \to \mathbb{R}$ whose graph is a line is linear - that's true only if the line goes through the origin. Functions $\mathbb{R} \to \mathbb{R}$ whose graph is a line that doesn't necessarily go through the origin are called affine.
For polynomials, unfortunately, polynomials of degree $1$, i.e. of the form $ax + b$, are often called linear, even though they aren't linear functions, only affine functions. |
On a sum of infinite series | You have $ \frac{1}{2} \sum^m_{n=2} \frac{1}{n-1} - \frac{1}{2} \sum^m_{n=2} \frac{1}{n+1} $.
For the first sum, substitute $ p = n - 1 $; you get $ \frac{1}{2} \sum^{m-1}_{p=1} \frac{1}{p} $. For instance, for the lower bound of the first sum you have $ n = 2 $; after the substitution you obtain $ p + 1 = 2 \leftrightarrow p = 1 $.
For the second sum, substitute $ q = n + 1 $; you get $ \frac{1}{2} \sum^{m+1}_{q=3} \frac{1}{q} $. Subtracting, the terms with indices $3$ through $m-1$ cancel, leaving $\frac{1}{2}\left(1 + \frac{1}{2} - \frac{1}{m} - \frac{1}{m+1}\right)$.
Find area ratio of a semicircle is inscribed in a quarter circle | I don't see why $a\sqrt{2}$, the distance from the quarter-circle's right-angle to the red semicircle's base's centre, should be equal to $R-a$, i.e. the quarter-circle with the semicircle's radius subtracted. That's equivalent to claiming the larger circle's radius joining the two centres has an excess length $a$ beyond the semicircle's base, which doesn't look true.
What you should do instead is note that the shapes' equations imply $x+y=\frac{a^2+R^2}{2a}$ is the equation of the semicircle's base, and the endpoints satisfy $xy=\frac{(x+y)^2-x^2-y^2}{2}=\frac{(a^2-R^2)^2}{8a^2}$. Thus $x,\,y$ are, in some order, the roots of $t^2-\frac{a^2+R^2}{2a}t+\frac{(a^2-R^2)^2}{8a^2}=0$ (changing the order reflects one endpoint in $y=x$ to give the other). These roots are $$t_\pm:=\frac{a^2+R^2\pm\sqrt{6a^2R^2-a^4-R^4}}{4a}.$$The squared distance between $(t_+,\,t_-)$ and $(t_-,\,t_+)$ is $$4a^2=2(t_+-t_-)^2=\frac{6a^2R^2-a^4-R^4}{2a^2}.$$This rearranges to $0=(3a^2-R^2)^2$, i.e. $a=R/\sqrt{3}$.
Now we've proven that, let's answer the original question: $$\frac{\frac{\pi}{2}\left(\frac{R}{\sqrt{3}}\right)^2}{\frac{\pi}{4}R^2}=\frac{2}{3}.$$ |
Choosing integers $(a,b,c,d)$ such that $0\le a\le b\le c\le d\le n$ | You have taken the wrong approach. The right approach is to let $x_1=a-0$, $x_2=b-a$, $x_3=c-b$, $x_4=d-c$ and $x_5=n-d$. This gives a bijection between the quadruples $(a,b,c,d)$ and the nonnegative solutions to $x_1+x_2+x_3+x_4+x_5=n$, whose number, by stars and bars, is $\binom{n+4}4$.
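A brute-force cross-check in Python; note that combinations_with_replacement enumerates exactly the sorted quadruples $0\le a\le b\le c\le d\le n$:

from itertools import combinations_with_replacement
from math import comb

n = 6
count = sum(1 for _ in combinations_with_replacement(range(n + 1), 4))
print(count, comb(n + 4, 4))  # both 210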
First decimal digits of factorial $n$ divided by $x$ | First, you want to find $n! \pmod x$. Then you can just use long division to get all the places you want. Depending on your computer, direct computation works fine up to $n \approx 10^8$ or so. You never need to handle a number bigger than $nx$. |
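A minimal Python sketch of this idea (the function name is mine): reduce $n!$ modulo $x$ on the fly, then long-divide for as many digits as you like.

def frac_digits(n, x, k=10):
    r = 1
    for i in range(2, n + 1):   # n! mod x; intermediate values stay below n * x
        r = r * i % x
    digits = []
    for _ in range(k):          # long division of the remainder by x
        r *= 10
        digits.append(r // x)
        r %= x
    return digits

print(frac_digits(10, 13))  # [4, 6, 1, 5, 3, 8, 4, 6, 1, 5]: 10!/13 = 279138.461538...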