Left adjoint for the "strings category" functor | This is analogous to the adjunction between the singular simplicial set functor and geometric realization, or to the adjunction between the nerve functor and the fundamental category functor.
Let $I : \Delta \to \mathrm{Cat}$ be the "inclusion" that sends $[n]$ to the category also called $[n]$ that you mentioned in your description of $\mathrm{str}$. Then $\mathrm{str}(\mathcal{C}) = \mathrm{Fun}(I(-), \mathcal{C})$. The left adjoint is then given by $- \otimes I$, the enriched functor tensor product with $I$, computed by an enriched coend $\mathcal{X} \mapsto \int^{[n] \in \Delta} \mathcal{X}_n \times [n]$. (Here, $\mathcal{X}$ is a simplicial object in categories, $\mathcal{X}_n$ is its category of $n$-simplices, and the $[n]$ that $\mathcal{X}_n$ is multiplied by is really the category $I([n])$.)
See the nLab article Nerve and Realization for the general construction covering all three cases I mentioned here, and for a proof of the adjunction. |
Find the sum which is not possible | Let's take a variable $i=1$.
Now let's take another variable, $Answer$, which will be our final answer.
Now start iterating the variable $i$ one by one, adding it to the $Answer$ variable (only when $i$ is not present among the $K$ numbers).
Stop this process as soon as $Answer > i$ or $i = N$ holds.
The value of $Answer$ is your final answer. |
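The steps above can be sketched literally in Python. (The full problem statement is not shown here, so the roles of the $K$ excluded numbers and of $N$ are assumptions based on the description.)

```python
def find_answer(k_numbers, n):
    """Literal transcription of the steps above: iterate i from 1,
    adding i to answer unless i is among the K given numbers,
    and stop once answer > i or i reaches N."""
    answer = 0
    i = 1
    while True:
        if i not in k_numbers:
            answer += i
        if answer > i or i == n:
            return answer
        i += 1
```

This is only a sketch of the described loop, not a verified solution to the original exercise.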
Do the ratios of successive primes converge to a value less than 1? | By the prime number theorem,
$$\lim_{n\to\infty} \frac {p_n} {n \log n} = 1$$
Replacing $n$ by $n+1$,
$$\lim_{n\to\infty} \frac {p_{n+1}} {(n+1) \log (n+1)} = 1$$
Using limit arithmetic,
$$\lim_{n\to\infty} \frac {p_n} {p_{n+1}} \frac {(n+1)\log(n+1)}{n \log n} = 1$$
However, it is elementary that
$$\lim_{n\to\infty} \frac {(n+1)\log(n+1)}{n \log n} = 1$$
and we conclude (again using limit arithmetic - dividing the last two results):
$$\lim_{n\to\infty} \frac {p_n}{p_{n+1}} = 1$$ |
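The conclusion is easy to check numerically: the ratios $p_n/p_{n+1}$ creep toward $1$. A sketch with a basic sieve (limits and indices chosen for illustration):

```python
def primes_up_to(limit):
    """Simple sieve of Eratosthenes."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            for m in range(p * p, limit + 1, p):
                is_prime[m] = False
    return [i for i, b in enumerate(is_prime) if b]

primes = primes_up_to(100_000)
# successive ratios p_n / p_{n+1}
ratios = [primes[i] / primes[i + 1] for i in range(len(primes) - 1)]
```

The first ratio is $2/3$, while the ratios near $n\approx 10^4$ are already within a fraction of a percent of $1$.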
Plane that separates two sets of 3D positions | It depends on what you mean by "best separate".
One possible way is as follows:
Label one of the sets with $y_i=1$ and the other with $y_i=-1$.
The goal is to separate the two classes with a plane (assuming one exists).
You can solve the following optimization problem using quadratic programming:
$$\min_{w,b} \|w \|^2$$
subject to $$y_i(w^Tx_i-b) \geq 1$$
This is a famous algorithm called support vector machine (SVM). It maximizes the margin between $2$ classes of points. |
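Solving the QP itself requires a solver, but the hard-margin constraint $y_i(w^Tx_i-b)\ge 1$ is easy to check for a candidate plane. A minimal sketch on made-up 3D toy points (the points and the candidate $(w,b)$ are assumptions for illustration only):

```python
def margin_constraints_hold(points, labels, w, b):
    """Check y_i * (w . x_i - b) >= 1 for every 3D point."""
    def dot(u, v):
        return sum(ui * vi for ui, vi in zip(u, v))
    return all(y * (dot(w, x) - b) >= 1 for x, y in zip(points, labels))

# Two clouds separated by the plane z = 0; candidate w = (0, 0, 1), b = 0.
pts = [(0.0, 1.0, 1.5), (2.0, -1.0, 2.0), (1.0, 0.0, -1.2), (-1.0, 2.0, -3.0)]
ys = [1, 1, -1, -1]
```

Among all $(w,b)$ satisfying these constraints, the SVM picks the one minimizing $\|w\|^2$, i.e. maximizing the margin $2/\|w\|$.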
Finding the law of probability of a random variable | I just finished the probability lectures we had two days ago, and I still have trouble imagining $\Omega$ and $Z(\Omega)$ intuitively.
Probability is still very new to me, so if someone can recommend how I should picture it to better understand how random variables work, it would be very helpful. Here are my thoughts on the problem: $\mathcal{P}(\{1,\dots,n\})$ represents the set of all subsets of
$[[1,n]]$. Since $X$ follows the uniform law on this set, what I understood is that if $S_{k}$ is a subset of $[[1,n]]$,
then $p(X=S_{k})=\frac{1}{2^{n}}$. I then had trouble visualizing how $Y$ works, but after thinking more about it today, here is what I figured out (I hope it makes sense and I am not imagining things):
since $Y$ is the random variable giving the cardinality of the intersection above, the event $Y=m$ means that the $m$ events
$X_{i_{1}}=S_{i_{1}},\dots,X_{i_{m}}=S_{i_{m}}$ happened, hence
$(Y=m)= \bigcup_{(S_{i_{j}})_{j\in [[1,m]] }} \{X_{i_{1}}=S_{i_{1}} , \dots, X_{i_{m}}=S_{i_{m}}\}$. Since this union is disjoint, and the variables are independent and identically distributed,
$p(Y=m)=\sum_{(S_{i_{j}})_{j\in [[1,m]] }} \frac{1}{2^{nm}}$. Since $(S_{i_{j}})_{j\in [[1,m]] }$ is an $m$-list chosen from $S_{1} , \dots, S_{2^{n}}$,
we get $p(Y=m)=\frac{\binom{2^{n}}{m}}{2^{nm}}$. Thanks, provided I haven't written any nonsense. |
Multivariable Chain Rule - How to solve this? | Let $u=x^2-y^2$, $v=2xy$. Then, $g(x,y)=f(u,v)$, and
\begin{align}
g_x &= \frac{\partial f}{\partial x}\\
&=\frac{\partial f}{\partial u}\frac{\partial u}{\partial x}+\frac{\partial f}{\partial v}\frac{\partial v}{\partial x}\\
&=2xf_u+2yf_v\\
\end{align} |
Geometric Series-Bounce Height of a Ball | You need the sum of the infinite geometric series given by
$$S=20+\frac{8}{10}\times20+\left(\frac{8}{10}\right)^2\times20+\cdots$$
For a geometric series
$$S=a+ar+ar^2+\cdots$$
The $N^{th}$ partial sum is given by
$$S_N=a+ar+ar^2+\cdots+ar^{N-1}=\frac{a(1-r^N)}{1-r}$$
Then,
$$S=\lim_{N\to\infty}S_N=\lim_{N\to\infty}\frac{a(1-r^N)}{1-r}$$
If $|r|<1$, the limit exists and is given by,
$$S=\frac{a}{1-r}$$
Hence, here
$$S=\frac{20}{1-\frac{8}{10}}=100$$ |
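A quick numerical check of the closed form: the partial sums of the bounce series approach $100$.

```python
a, r = 20.0, 8 / 10  # first term and ratio from the series above
# partial sums S_N for a few values of N
partial_sums = [sum(a * r**k for k in range(n)) for n in (5, 20, 200)]
closed_form = a / (1 - r)
```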
Differential equation by exact equation | $$ydx-xdy+xy^2dx=0$$
Rewrite the differential equation as:
$$\frac {ydx-xdy}{y^2}+xdx=0$$
It's exact:
$$2 d\left (\frac {x}{y} \right)+dx^2=0$$
After integration, it gives:
$$2 \left (\frac {x}{y} \right)+x^2=C$$ |
$S\Delta T = T$ (where S and T are sets and $\Delta$ denotes the symmetric difference between them). Is $ S = \phi $? | Let $x \in S$.
Either $x \in T$ or $x$ is not in $T$. If $x\in T=S\Delta T$ then $x \not \in T$. A contradiction.
If $x \not \in T$ then $x \in S\Delta T = T$. A contradiction.
So $x \in S$ leads to a contradiction so $S$ is empty.
====
Alternatively: $S\Delta T = (S\cap T^c) \cup (T\cap S^c)$. If $(S\cap T^c) \cup (T\cap S^c) = T$ then $S\cap T^c \subset T$. But $S\cap T^c \subset T^c$, and $T^c$ and $T$ are disjoint, so $S\cap T^c$ and $T$ are disjoint, so $S\cap T^c = \emptyset$. So $S\Delta T = T\cap S^c = T$.
Now $T\cap S \subset T$, so $T\cap S \subset T\cap S^c$. But $T\cap S\subset S$ and $T\cap S^c \subset S^c$. $S$ and $S^c$ are disjoint, so their subsets are disjoint, so $T\cap S \subset T\cap S^c$ implies $T\cap S=\emptyset$.
So $S\cap T^c = \emptyset$ and $S\cap T = T\cap S = \emptyset$. So $S = (S\cap T)\cup (S\cap T^c) =\emptyset \cup \emptyset=\emptyset$.
(This assumes the existence of a universal set $U$ so that $S\subset U$ and $T\subset U$. It's safe to assume there is one. But if not, we can always simply declare $U = S\cup T$, $S^c= U\setminus S$ and $T^c = U\setminus T$.)
======
Alternatively: Venn Diagrams:
Consider $X = S\setminus T$, $Y= T\setminus S$, $Z= S\cap T$. These sets partition $S\cup T$: they are pairwise disjoint and $X\cup Y \cup Z = S\cup T$.
So if $x \in S\cup T$ then exactly one of the following is true: $x\in X$, $x\in Y$, $x\in Z$.
$S\Delta T = X \cup Y$, $S = X\cup Z$, $T=Y\cup Z$.
So $X\cup Y = S\Delta T = T=Y\cup Z$.
Let $x \in S\cup T$.
If $x \in X$ then $x\in X\cup Y =Y\cup Z$. So $x\in Y$ or $x\in Z$. But that contradicts the uniqueness of an element being in exactly one of the sets.
If $x \in Y$ then we get no contradiction: $x \in T$ and $x\not \in S$.
If $x \in Z$ then $x\in Y\cup Z = X\cup Y$. So $x \in Y$ or $x\in X$, which contradicts uniqueness.
So $x \in Y$ is the only possibility for any element of $S\cup T$. Hence $x \in S = X\cup Z$, which would imply $x \in X$ or $x \in Z$, is impossible. So $S = \emptyset$. |
Any set E of real numbers with positive outer measure contains a subset that fails to be measurable | We can write $E$ as a countable union of bounded sets (say, $E\cap [-n,n]$ for eacn $n\in\mathbb{N}$). If each of these bounded sets had outer measure $0$, then $E$ would have outer measure $0$, by countable subadditivity. So one of these bounded sets has positive outer measure, and we can replace $E$ with such a set. |
Book which covers these contents of the same level of Fulton's book. | The theory is about the same in the complex analytic case, and in fact that setting is the original motivation, so it's helpful to study it there. I like (with some reservations) Rick Miranda's book "Algebraic Curves and Riemann Surfaces" for this. |
$T(p(x)) = p'(x)$ is an isomorphism | Given any polynomial $p(x)$ in $\mathbb R[x]$, we need to find another polynomial $q(x)$ in $V$ such that $q'(x) = p(x)$. But this is easy: just consider
$$q(x) = \int_0^x p(t)dt \in V.$$ |
Fractal walking: well defined case or not? | It is hard to imagine due to the infinities that occur*, but it is actually well defined as long as the length of the complete figure converges to some finite value.
Let $L(s)$ be the length of the path you walk for such a "triangle" with edges of length $s$. And let $\alpha$ be the ratio between one side of the triangle and the connecting line between two triangles. The path thus has a length of
$$ L(s) = \frac{3}{2}s + \alpha s + L\left(\frac{s}{4}\right) + \alpha s + \frac{3}{2}s$$
$$ = (3+2\alpha)s + L\left(\frac{s}{4}\right)$$
$$ = (3+2\alpha)s + (3+2\alpha)\frac{s}{4} + L\left(\frac{s}{16}\right)$$
and so on. What we get in the end is
$$ L(s) = (3+2\alpha)s\sum_{i=0}^\infty\frac{1}{4^i} = \frac{4}{3}(3+2\alpha)s $$
(*) Walking through an infinite amount of triangles is not a problem, as long as the total path is finite (and you don't need time to take a turn ;-) ). |
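The telescoping above can be checked numerically: summing the levels $(3+2\alpha)\,s/4^i$ reproduces the closed form $\frac43(3+2\alpha)s$. A sketch with sample values of $\alpha$ and $s$ chosen for illustration:

```python
def path_length(s, alpha, levels=60):
    """Partial sums of L(s) = sum_i (3 + 2*alpha) * s / 4**i."""
    return sum((3 + 2 * alpha) * s / 4**i for i in range(levels))

def closed_form(s, alpha):
    """The closed form (4/3) * (3 + 2*alpha) * s derived above."""
    return 4 / 3 * (3 + 2 * alpha) * s
```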
Left exactness of inverse limit | The limit is left exact in that category, regardless of the indexing set.
If you think of the limit of a family as the submodule of the product subject to the compatibility conditions, then the induced morphisms between the limits are just the restrictions of the product functions. If $0\to M'_i\stackrel{f_i}{\to} M_i \stackrel{g_i}{\to} M''_i \to 0$ is exact, then each $f_i$ is one-to-one, so the induced map
$$\lim M'_i \quad\stackrel{(f_i)}{\longrightarrow }\quad \lim M_i$$
is one-to-one, because $(f_i)$ is just the restriction of the product map $\prod M'_i\to \prod M_i$ to the submodule given by the limit, and this is the restriction of a one-to-one map (each component is one-to-one, so the product map is one-to-one).
Likewise, the composition $(g_i)\circ(f_i)$ is the zero map, because at each component you get $g_i\circ f_i = 0$; since each component of the map is the zero map, the composition is the zero map.
Finally, if $(m_i)$ lies in the kernel of $(g_i)$, then $m_i\in\mathrm{ker}(g_i)$ for each $i$, hence for each $i$ there is an $m'_i\in M'_i$ with $m_i = f_i(m'_i)$; since each $f_i$ is one-to-one, $m'_i$ is uniquely determined. The difficulty lies in showing that this element $(m'_i)$ lies in $\lim M'_i$. But if $j,k\in I$ with $j\leq k$, then we know that $p_{kj}(f_j(m'_j)) = p_{kj}(m_j) = m_k = f_k(p'_{kj}(m'_j))$. Since $f_k$ is one-to-one, that means that $p'_{kj}(m'_j) = m'_k$ (as $f_k(m'_k)=m_k$), so the family is consistent.
If your limit is over a functor (instead of a partially ordered set) the same argument works, considering each arrow instead of each pair.
For more general categories the result still holds if the notion of "exactness" can be made to make sense; but the simplest way of doing it then is to invoke the fact that $\lim$ is a right adjoint, and therefore respects all limits, including kernels (the equalizer of a map and the zero map). |
Problem on differentiation of multivariable function | It is just 0. Recall partial derivative definition:
In mathematics, a partial derivative of a function of several
variables is its derivative with respect to one of those variables,
with the others held constant
So $f_y(0, 0)$ is just going to be:
$$f_y(0, 0) = \lim_{\Delta y \to 0} \frac{f(0, \Delta y) - f(0, 0)}{\Delta y}$$
And from the function definition, when $x = 0$, $f = 0$, so $f(0, \Delta y) - f(0, 0)$ is always zero, which makes the above limit zero. More generally, for this function $f_y(0, y) = 0$ for all $y$. |
Show that the height of the heap is $\lfloor \lg n \rfloor$ | You already have shown it!
You wrote it as
$$k \le \lg n < k+1,$$
so
$$\lg n = k + \alpha,$$
where
$$0 \le \alpha < 1,$$
thus
$$k = \lfloor \lg n \rfloor.$$ |
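An empirical check of the identity, if helpful: the height of a complete binary tree with $n$ nodes, computed by repeated halving, matches $\lfloor\lg n\rfloor$ (in Python, $\lfloor\lg n\rfloor$ for a positive integer is `n.bit_length() - 1`, which avoids floating-point pitfalls of `math.log2`):

```python
def heap_height(n):
    """Height of a complete binary tree (heap) with n >= 1 nodes,
    found by walking up the levels: each level halves the node index."""
    h = 0
    while n > 1:
        n //= 2
        h += 1
    return h
```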
Chain of circles internally tangent to an ellipse. | In the theory of Lucas sequences
if $\,v\,$ is constant, and
$$ x_{n+1} = v\,x_n - x_{n-1} \tag{1}$$
for all $\,n,\,(P=v,\;Q=1)\,$ then
$$ x_n^2 - x_{n+1}x_{n-1} = u \tag{2}$$
for a constant $\,u.\,$ This implies that
$$ x_n^2 -v\,x_n x_{n+1} + x_{n+1}^2 = u \tag{3}$$
for all $\,n\,$ because
$$ x_{n-1}\!+\!x_{n+1} \!=\! v\,x_n, \;\;
x_{n-1}x_{n+1}\!=\!x_n^2\!-\!u. \tag{4}$$
In your case, the constants are
$$ u=4e^2b^2,\quad v=4e^2-2. \tag{5}$$
Also, check that if
$$ r_k:=(x_{k}-x_{k-1})/2, \tag{6} $$ then
$$ (r_n+r_{n+6})/r_{n+3} = v^3-3v \tag{7} $$
for all $\,n.$ This
implies that
$$ r_{n+6}(r_n + r_{n+6}) = r_{n+3}(r_{n+3}+r_{n+6}). \tag{8}$$ for all $\,n.\,$ By the way, there is a
similar result for $\,(x_{n+m}+x_{n-m})/x_n\,$ and
$\,(r_{n+m}+r_{n-m})/r_n\,$ for any integer $\,m.$
Note that my answer is completely based on equation $(3)$
which was given in the question. I have not used any of
the geometric content of the question. |
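Identities $(1)$–$(3)$ and $(7)$ are easy to verify numerically: iterating $x_{n+1}=v\,x_n-x_{n-1}$ from arbitrary starting values keeps $x_n^2-v\,x_nx_{n+1}+x_{n+1}^2$ constant. A sketch with made-up initial data and a made-up $v$ (not the geometric constants of the problem):

```python
def lucas_orbit(x0, x1, v, steps):
    """Iterate x_{n+1} = v*x_n - x_{n-1}, equation (1)."""
    xs = [x0, x1]
    for _ in range(steps):
        xs.append(v * xs[-1] - xs[-2])
    return xs

v = 3                                  # sample value, for illustration
xs = lucas_orbit(1, 2, v, 10)
# invariant of equation (3): x_n^2 - v*x_n*x_{n+1} + x_{n+1}^2
invs = {xs[n]**2 - v * xs[n] * xs[n + 1] + xs[n + 1]**2 for n in range(len(xs) - 1)}
# the radii of equation (6)
rs = [(xs[k] - xs[k - 1]) / 2 for k in range(1, len(xs))]
```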
Graphing an inequality to solve a mathematical model | You are correct.
So your profit equation is defined by:
$$y = 10x+0.5x^{2}-0.001x^{3}-5000$$
So for part b, in order for the company to begin generating a profit, we must find the values where $y > 0$.
So we have that:
$$10x+0.5x^{2}-0.001x^{3}-5000 > 0$$
We find that the positive roots (since we don't want negative cooktops) is:
$$x = 100, 500$$
Now we must check the intervals to determine when $y$ is $> 0$ or $< 0$.
Looking at the graph of our function tells us that when $0 < x < 100 $, $y < 0$.
Also, when $100< x < 500$, $y > 0$.
Finally, when $500 < x < \infty$ $y < 0$.
So we conclude that at least $101$ cooktops (I assume it must be an integer value) must be produced in order for the company to receive a profit.
You can perform a similar procedure for part c. You have set up the problem correctly.
Comment if you have questions. |
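A quick check, if helpful, that $x=100$ and $x=500$ really are the break-even points and that the profit is positive strictly between them:

```python
def profit(x):
    """Profit equation from part (a): y = 10x + 0.5x^2 - 0.001x^3 - 5000."""
    return 10 * x + 0.5 * x**2 - 0.001 * x**3 - 5000
```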
How do I solve this composite function and find its domain? | $$f(x)=-x^2+1 = 1 - x^2,\quad g(x)=\sqrt x$$
$$(g\circ f)(x) = g(f(x))= g(1-x^2 )=\sqrt{f(x)} = \sqrt{1-x^2}.$$
Now, for the domain of $(g\circ f)(x):\quad $ When is $1-x^2 \geq 0\quad ?\quad$
The domain consists of all real numbers such that $1 \geq x^2 \iff x^2 \leq 1$, i.e. the interval $[-1,1]$. |
convergence of a numerical series | $$\frac1{\left(\sqrt[k]2+\log k\right)^{k^2}}<\frac1{\left(\sqrt[k]{2}\right)^{k^2}}=\frac1{2^k}$$ |
Ways to solve the inverse Z transform of complex conjugate poles function | You should have given us the ROC as well. I will assume $|z|>1$.
Rewrite it as
$$\begin{align}
F(z) = \frac{-3z+2}{(z^2-z+1)}&=-3\left(\frac{z}{z^2-z+1}\right)+2\left(\frac{1}{z^2-z+1}\right)\\
&=-3X(z)+2z^{-1}X(z)
\end{align}$$
So after finding the inverse $z$-transform of $X(z)$, the answer will be
$$f[n]=-3x[n]+2x[n-1]$$
Now to find $x[n]=\mathcal{Z}^{-1}\{X(z)=\frac{z}{z^2-z+1}=\frac{z^{-1}}{1-z^{-1}+z^{-2}}\}$ use the standard formula (no 21) in this table for $\omega_0=\frac{\pi}{3}$ to get $$x[n]=\frac{1}{\sin\left(\frac{\pi}{3}\right)}\sin\left(\frac{\pi}{3}n\right)u[n]$$ |
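The result can be sanity-checked numerically: the power-series (long-division) coefficients of $X(z)=\frac{z^{-1}}{1-z^{-1}+z^{-2}}$ satisfy the recurrence $x[n]=x[n-1]-x[n-2]$ with $x[0]=0$, $x[1]=1$, and should match $\sin(\pi n/3)/\sin(\pi/3)$:

```python
import math

def x_seq(n_max):
    """Coefficients of z**-1 / (1 - z**-1 + z**-2) from the recurrence
    x[n] = x[n-1] - x[n-2], x[0] = 0, x[1] = 1."""
    x = [0, 1]
    for _ in range(2, n_max + 1):
        x.append(x[-1] - x[-2])
    return x

def sine_formula(n):
    """The closed form sin(pi*n/3) / sin(pi/3) from the table entry."""
    return math.sin(math.pi * n / 3) / math.sin(math.pi / 3)
```

The sequence is periodic with period $6$: $0,1,1,0,-1,-1,\dots$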
Cauchy Principal Value Integral of Simple Rational Functions | For a given function, its Cauchy Principal Value is defined by:
$$
p.v. = \lim _{ x\rightarrow \infty }{ \int _{ -x }^{ x }{ f\left( t \right)dt } }
$$
So you have to solve the integral between those limits and then evaluate the limit. You would have to be careful at $x=2$ if you were looking for the area under the function, which cannot be the case here because that integral is not convergent. |
Is the integral closure of an integrally closed Noetherian domain in a finite extension field Noetherian? | This is true if $\dim R = 1$, and is known as the Krull-Akizuki theorem. In fact, it is commonly stated with the stronger conclusion that any subring $S \subseteq L$ containing $R$ is Noetherian. If $\dim R = 2$, it is still true that $\overline{R}^L$ is Noetherian, although there may be subrings of $L$ that are not. In dimension $3$ though, Nagata has given examples of $3$-dimensional Noetherian domains whose integral closures are not Noetherian.
If $R$ is a Nagata ring (e.g. an excellent ring, and thus virtually any geometric ring), then the desired conclusion holds. |
A proof of limits relating to a uniformly convergent series | Let $\epsilon>0$ and find some $N$ such that $|F_{n}(x)-F(x)|<\epsilon$, for all $x\in(a,b)$ and $n\geq N$. So for fixed $n\geq N$, we have
\begin{align*}
F_{n}(x)-\epsilon<F(x)<F_{n}(x)+\epsilon,~~~~x\in(a,b)
\end{align*}
taking limit $x\rightarrow x_{0}$, one has
\begin{align*}
-\infty<L_{n}-\epsilon\leq\liminf_{x\rightarrow x_{0}}F(x)\leq\limsup_{x\rightarrow x_{0}}F(x)\leq L_{n}+\epsilon<\infty,
\end{align*}
in particular, $-\infty<\liminf_{x\rightarrow x_{0}}F(x)\leq\limsup_{x\rightarrow x_{0}}F(x)<\infty$.
Now taking $n\rightarrow\infty$, we have
\begin{align*}
\limsup_{n}L_{n}-\epsilon\leq\liminf_{x\rightarrow x_{0}}F(x)\leq\limsup_{x\rightarrow x_{0}}F(x)\leq\liminf_{n}L_{n}+\epsilon,
\end{align*}
taking $\epsilon\downarrow 0$, we have
\begin{align*}
\limsup_{n}L_{n}\leq\liminf_{x\rightarrow x_{0}}F(x)\leq\limsup_{x\rightarrow x_{0}}F(x)\leq\liminf_{n}L_{n},
\end{align*}
we conclude that $\lim_{n}L_{n}=\lim_{x\rightarrow x_{0}}F(x)$. |
Sequence-Function convergence | $0 \le y_n \le |x_n|$. The sandwich lemma says $y_n \to 0$. |
Linear functions vs Linearithmic functions complexity | First, your notation with the Big-O symbol is not the most conventional; to denote that the time complexity of $4n$ grows slower than that of $n\log{n}$, we would use the Little-O symbol:
$$4n \in o(n \log{n})$$
Proving this is quite trivial: since
$$ \lim_{n \to \infty} \frac{4n}{n \log{n}} = \lim_{n \to \infty} \frac{4}{\log{n}} = 0 $$
by definition we have $4n \in o(n\log{n})$. |
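The vanishing ratio is also easy to see numerically, since $\frac{4n}{n\log n}=\frac{4}{\log n}$:

```python
import math

ns = [10, 10**3, 10**6, 10**9]
ratios = [4 * n / (n * math.log(n)) for n in ns]  # equals 4 / log n
```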
Norm of a polynomial $N_{E/K}(f)(X)$ with $E/K$ a Galois extension | $\sigma(f(X))$ means applying $\sigma$ to the coefficients, usually denoted $f^\sigma(X)$ (well KCd prefers the notation $(\sigma f)(X)$) and the subring of $E[X]$ fixed by $Gal(E/K)$ is $K[X]$.
This is obvious.
For $E/F/K$, we have $F=E$ iff $F$ is not fixed by any $\sigma\neq\operatorname{id}$.
The roots of $f^\sigma$ are $K$-conjugates of the roots of $f$.
How to simplify $\det(M)=\det(A^T A)$ for rectangular $A=BC$, with square diagonal $B$ and rectangular $C$ with orthonormal columns? | The simplest thing I can think of is to take the QR decomposition of $A=BC$, then $\det(M)$ is simply the square of the product of the diagonal elements of $R$. |
Proving a Line-integration along a parametrized curve identitiy. | $C$ is defined by
$$C := \{ (x,y) \ | \ x\in[a,b], \ f(x) = y \}$$
so
$$\int_{C} F(x,y)dx = \int_{\substack{x\in [a,b] \\ y = f(x)}} F(x,y)dx = \int_{x\in[a,b]} F(x,f(x))dx =\int_a^b F(x,f(x))dx$$
This just represents any function along $C$. For example the length of the curve would be
$$\int_C \sqrt{dx^2 + dy^2} = \int_C\sqrt{1 + (\frac{dy}{dx})^2}dx = \int_a^b \sqrt{1 + (f'(x))^2}dx$$
Here $F(x,y) = \sqrt{1 + (\frac{dy}{dx})^2}$ |
Showing $\big( \frac{a}{n\bar{x}}, \frac{b}{n\bar{x}} \big)$ is an exact confidence interval for a gamma distribution | You wrote
$$P\left(\frac{a}{n\bar{x}}<Y<\frac{b}{n\bar{x}}\right).$$
But what you need is
$$P\left(\frac{a}{n\bar{X}}<\lambda<\frac{b}{n\bar{X}}\right) = 0.95.\tag{1}$$
I've set $\bar X$ in capital since it's a random variable. One must remember what is random and what is not random in this kind of problem. To say that $a/(n\bar X)$ is "random" in effect means that if you take another sample, the value of that expression will change. $\lambda$ on the other hand is not random since it will remain the same if another sample is taken.
(1) is equivalent to
$$
P\left(\frac a\lambda < n\bar X < \frac b\lambda\right) = 0.95,
$$
or
$$
P\left(\frac a\lambda < Y < \frac b\lambda\right) = 0.95 \tag{2}
$$
Here is an ambiguity in the question: does "exponential with parameter $\lambda$" mean having density proportional to $y\mapsto e^{-\lambda y}$ or does it mean $y\mapsto e^{-y/\lambda}$? Since (2) is equivalent to
$$
P\left(a < \lambda Y < b\right) = 0.95,
$$
I take it to mean $\lambda Y$ has a gamma distribution with parameter $1$ in place of $\lambda$, so the density of the exponential is proportional to $y\mapsto e^{-\lambda y}$, i.e. $\lambda$ is an intensity parameter rather than a scale parameter.
So
$$
P(X_i > c) = \int_c^\infty e^{-\lambda u} (\lambda \; du) = \int_{\lambda c}^\infty e^{-y}\;dy,
$$
and so
$$
P(Y>c) = P(X_1 + \cdots + X_n > c) = \int_{\lambda c}^\infty \frac{y^{n-1} e^{-y}}{\Gamma(n)} \; dy.
$$
Finally we have
$$
P\left( \frac a\lambda < Y < \frac b\lambda \right) = \int_{\lambda(a/\lambda)}^{\lambda(b/\lambda)} \cdots\cdots = \int_a^b \cdots\cdots.
$$ |
Restriction of differential $1$-forms to open subsets? | Since $X_V = 0$, the equation is true at any point in $V$. Since $g = 0$ outside $V$, the equation reduces to $X = (1-0)X$ away from $V$, so it is true at any point not in $V$. |
What is the exact formula for $\frac{\Gamma((x+1)/2)}{\Gamma(x/2)}$? | There's no known closed form for this. Any alternative expression still involves two gammas; say, using the duplication formula, you can rewrite it in terms of $\Gamma(x)$ and $\Gamma(x/2)$, but it doesn't really help. The "power law" holds only in the asymptotic sense, because of $\lim\limits_{x\to+\infty}\Gamma(x+a)/[x^a\Gamma(x)]=1$ for any real $a$ (in our case $a=1/2$). |
Can a functional be expressed by an inner product in an infinite-dimensional space? | Each polynomial's coefficients form a member of $\ell^2$ when extended with zeros. Then you use the standard $\ell^2$ inner product with $q $ isomorphic to $(1,0,0,\dots) $, i.e. $q=1$. |
conformal map of unit disk slit | Note there are many ways to do this.
Let's denote by $U$ our original disc slit along $(-1,r]$ (along the $x$-axis). Let's rotate by $\pi$ radians counterclockwise by applying the map $z\mapsto e^{\pi i}z=-z$ and denote our new region by $-U$. This is the unit disc slit along $[r,1)$ (along the real axis). Apply the inverse Cayley transform $z\mapsto i\frac{1+z}{1-z}$ on $-U$ to obtain the upper half plane slit along $\Big[i\frac{1+r}{1-r},i\infty\Big)$ (along the $y$-axis), so denote this region by $\mathbb{H}-\Big[i\frac{1+r}{1-r},i\infty\Big).$ Apply the map $z\mapsto z^2$ to obtain the plane with slits along $\Big(-\infty,-\Big(\frac{1+r}{1-r}\Big)^2\Big]$ and $[0,\infty)$, both on the real axis. Hence, we apply the map $z\mapsto\frac{z}{z+\big(\frac{1+r}{1-r} \big)^2}$ to obtain the plane slit along $[0,\infty)$ on the real axis. Apply a branch of $\sqrt z$ to map to $\mathbb{H}$, and then apply the Cayley transform into the unit disc.
How to check whether this function is continuous or not..? | The problem with your approach is that your function may have a jump discontinuity at the boundary of $A$. Consider for instance what would happen at $x=1/3$ if $X=[0,1]$, $A=[0,1/3]$, and $B=[2/3,1]$.
For $K\subseteq X$ define $d(x,K)=\inf\{d(x,y):y\in K\}$. Show that this function is continuous. Then define $f:X\to\mathbb R$ by $f(x)=d(x,A)/(d(x,A)+d(x,B))$. Using the fact that $A$ and $B$ are closed, you should be able to show $f(x)=0$ iff $x\in A$ and $f(x)=1$ iff $x\in B$. Now you can also show $f$ is defined on all of $X$ (the denominator never vanishes). It will be continuous, as sums and quotients of continuous real-valued functions are continuous everywhere they're defined. |
What is the maximum vertical distance between the line y = x + 42 and the parabola y = x^2 for −6 ≤ x ≤ 7? | Hint: $$x+42-x^2=-\left(x-\frac12\right)^2+\frac{169}4$$
has a maximum at $(\frac12,\frac{169}4)$ in the interval $[-6,7]$. |
All numbers close enough to 1 are squares | Over $\Bbb Q_2$, the neighbourhood $1+8\Bbb Z_2$ of $1$ is made up of
squares. An argument using Hensel's lemma proves that $x^2=a$ is soluble
over $\Bbb Z_2$ whenever $a\equiv1\pmod8$ in $\Bbb Z_2$.
The same is true in any finite extension of $\Bbb Q_p$. It's fairly obvious
when $p$ is odd, but one has to use a stronger form of Hensel's lemma when $p=2$.
Alternatively it drops out from the theory of the $p$-adic exponential
and logarithm. (They are continuous bijections near the identity.) |
Determining radii of cylinder such that jacobian is of rank 1 | If the rank of matrix is strictly less than 2, it means it's either 1 or 0.
If it's $0$, all element in your matrix are zero. However, it can't be as it implies $2x=2x-1=0$
If it's $1$, the rows a linearly dependent $(2x,2y,2z)=k(2x-1,2y,0)$. This can only be when $y=z=0$.
Can you do the rest?
P.S. Geometrically, this is a very easy question. $f(x,y,z)$ defines two level surfaces. Since the sphere $x^2+y^2+z^2=1$ has no special points, the rank of the Jacobian matrix cannot be $0$. It will be $1$ when the two surfaces, the sphere and the cylinder, have the same tangent plane — in other words, touch — or where the line of intersection has special points. One can easily see when this happens. |
Finite Incidence Geometry Questions | Question 1: Let $v$ be the number of points and $b$ be the number of lines in the incidence geometry. We are given that each line has $k=n$ points, and that any two points determine exactly $\lambda = 1$ line. The only thing we need to show is that there exists a constant $r$ such that every point is incident with exactly $r$ lines (notice that we do not need to obtain a formula for $v$ or $b$ in terms of $n$, they will not be uniquely determined).
If we fix a point $p$, we have that there are $(v-1)$ ordered pairs $(p, p^{\prime})$ of distinct points. Each of these pairs are incident with exactly one line. Since each line on $p$ contains $n-1$ such pairs, we have that $p$ must be incident with $r=(v-1)/(n-1)$ lines, independent of the choice of $p$. Thus we have that the points and lines of an incidence geometry form a $(v,b,(v-1)/(n-1),n,1)$ BIBD. (Double counting can also be used to show that $b = \frac{v(v-1)}{n(n-1)}$, but this is not necessary to solve the problem.)
Question 2: Take a point $p$ and a line $\ell$ not on $p$. Then there are $n$ points $v_{1}$, $\ldots$, $v_{n}$ on $\ell$, and a single line containing each of $(p, v_{1})$, $(p,v_{2})$, $\ldots$, $(p,v_{n})$. These $n$ lines are all distinct, because $\ell$ is the only line containing $(v_{i}, v_{j})$ for $i \neq j$. Then, since there are $(v-1)/(n-1)$ total lines on $p$, there are $(v-1)/(n-1) - n$ lines on $p$ which do not meet $\ell$.
Note that you cannot assume the existence or nonexistence of "parallel" lines (and you need to be clear exactly how these are defined). You may or may not have parallel lines depending on the type of incidence geometry. You get the design regardless. |
On a property of a converging positive martingale | In general, the assertion is wrong. Consider for example, for a Brownian motion $(B_t)_{t \geq 0}$, the process
$$M_t := \exp \left( B_t - \frac{t}{2} \right), \qquad t \geq 0.$$
$(M_t)_{t \geq 0}$ is a continuous positive martingale. Moreover, the strong law of large numbers gives
$$\frac{B_t}{t} \xrightarrow[]{t \to \infty} 0$$
implying
$$M_t = \exp \left( t \left[ \frac{B_t}{t} - \frac{1}{2} \right] \right) \xrightarrow[]{t \to \infty} 0=: M_{\infty}.$$
However, we clearly have $M_t \neq 0 = \mathbb{E}(M_{\infty} \mid \mathcal{F}_t)$. |
How to prove that, for every $n$, $ \lim\limits_{x\to\infty}f_{n}(x)=\frac{1}{n!}$ | The result holds if, for every $n\geqslant0$,
$$
\left(\frac{\log(x+1)}{\log x}\right)^x=\sum_{k=0}^n\frac1{k!(\log x)^k}+O\left(\frac1{(\log x)^{n+1}}\right).
$$
To show this, note that
$$
\frac{\log(x+1)}{\log x}=1+\frac{\log(1+1/x)}{\log x}=1+\frac1{x\log x}+O\left(\frac1{x^2}\right),
$$
hence
$$
x\log\left(\frac{\log(x+1)}{\log x}\right)=\frac1{\log x}+O\left(\frac1{x}\right),
$$
and
$$
\left(\frac{\log(x+1)}{\log x}\right)^x=\exp\left(\frac1{\log x}+O\left(\frac1{x}\right)\right).
$$
For every $n\geqslant0$,
$$
\exp(u)=\sum_{k=0}^n\frac{u^k}{k!}+v_n(u),
$$
when $u\to0$, where $v_n(u)=O\left(u^{n+1}\right)$. Applying this to
$$
u(x)=\frac1{\log x}+O\left(\frac1{x}\right)
$$
yields
$$
\sum_{k=0}^n\frac{u(x)^k}{k!}=\sum_{k=0}^n\frac1{k!(\log x)^k}+O\left(\frac1{x}\right),\qquad
v_n(u(x))=O\left(\frac1{(\log x)^{n+1}}\right),
$$
hence the result follows. |
SVD of a positive definite matrix. | Positive definite matrices are unitarily diagonalisable and possess positive eigenvalues. Hence $A$ has the eigendecomposition $UDU^\ast$ for some unitary matrix $U$ and positive diagonal matrix $D$. Since this by itself is a singular value decomposition, the eigenvalues and singular values of $A$ coincide. |
How to calculate combinations with duplicates? | One way to look at this a little more analytically is to count the non-zero terms in:
$$
(A+B+C)^2
$$
where the variables are not assumed to commute, but are subject to the relations $A^2=0$, $C^2=0$. |
Prove that $2^{n+2}$ is a divisor of the number $k^{2^{n}}-1$ | $$\frac{k^{2^{n+1}}-1}{2^{n+3}}=\frac{k^{2^{n}}-1}{2^{n+2}}\times \frac{k^{2^n}+1}{2}$$
The first fraction on RHS is an integer by induction hypothesis and second one is integer because $k$ is odd, hence LHS is integer and you are done!
To prove without induction
Observe that $$k^{2^n}-1=(k^2-1)\prod_{r=1}^{n-1}(k^{2^r}+1)$$
$$\Longrightarrow \frac{k^{2^n}-1}{2^{n+2}}=\frac{k^2-1}{2^3}\times \prod_{r=1}^{n-1}\frac{k^{2^r}+1}{2}$$
which is an integer because $k^2\equiv 1\pmod 8$ and all powers of $k$ are odd. |
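The divisibility claim is easy to confirm by brute force for small odd $k$ and small $n$ (Python's arbitrary-precision integers make the exact check trivial):

```python
def claim_holds(n, k):
    """Check that 2**(n+2) divides k**(2**n) - 1, for odd k and n >= 1."""
    return (k ** (2 ** n) - 1) % (2 ** (n + 2)) == 0
```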
How many different choice of non-empty sets are there given sum of set sizes | I assume that you meant to imply (or forgot to state) that $A$ and $B$ are in all cases subsets of a given set $S$ with $7$ elements. In that case, since $A$ and $B$ are non-empty, any subset of $S$ with $1$ to $7$ elements can lie between $A$ and $B$. There are $2^7=128$ subsets of $S$, and all of them except the empty set can occur, for a count of $128-1=127$. |
Prove Without Using Borsuk-Ulam | I don't know exactly what you consider a big theorem. Embedding $S^1\subset S^n$ as intersection with a plane through the origin, the restriction to this $S^1$ would have odd degree (does that require a big theorem?), while the embedded $S^1$ can be continuously contracted to a point. The degree of the restriction cannot change under the deformation, but $0$ is not odd. |
Evaluate the limit containing $\arctan{x}$ and $\arcsin{x}$ | Here is an approach which relies on trigonometric identities and certain standard limits. The approach involves some amount of labor in algebra and should be used only when more powerful tools like Taylor series or L'Hospital's Rule are forbidden. On the other hand it does show that simple tools can be used to tackle difficult problems if one applies them properly.
Let $t=\arcsin x$ so that $x=\sin t$ and then $\tan t=x(1-x^2)^{-1/2}$ so that $t=\arctan x(1-x^2)^{-1/2}$ and thus the expression $\arcsin x-\arctan x$ can be written as $$\arctan\frac{x} {\sqrt{1-x^2}}-\arctan x=\arctan\dfrac{x(1-\sqrt{1-x^2})}{x^2+\sqrt{1-x^2}} $$ which can be simplified as $$\arctan\frac{x^3}{(x^2+\sqrt{1-x^2})(1+\sqrt{1-x^2})}=\arctan u\text{ (say)} $$ where $u/x^3\to 1/2$ and further $$\frac{2u}{x^3}-1=\frac{1-(1+x^2)\sqrt{1-x^2}}{(x^2+\sqrt{1-x^2})(1+\sqrt{1-x^2})}$$ which can be written as $$\frac{-x^2+x^4+x^6}{(x^2+\sqrt{1-x^2})(1+\sqrt{1-x^2})(1+(1+x^2)\sqrt{1-x^2})}$$ and thus $$\lim_{x\to 0}\frac{1}{x^2}\left(\frac{2u}{x^3}-1\right)=-\frac{1}{4}\tag{1}$$ If $L$ is the desired limit in question and $f(x) $ is the function whose limit needs to be evaluated then
\begin{align}
\log L&=\log\lim_{x\to 0}f(x)=\lim_{x\to 0}\log f(x)\text{ (via continuity of log)} \notag\\
&=\lim_{x\to 0}\frac{2}{x^2}\log\left(\frac{2\arctan u} {x^3}\right)\notag\\
&=\lim_{x\to 0}\frac{2}{x^2}\cdot\dfrac{\log\left(1+\dfrac{2\arctan u-x^3}{x^3}\right)}{\dfrac{2\arctan u-x^3}{x^3}}\cdot\frac{2\arctan u-x^3}{x^3}\notag\\
&=\lim_{x\to 0}\frac{2}{x^2}\cdot\frac{2\arctan u-x^3}{x^3}\notag\\
&=\lim_{x\to 0}\frac{4\arctan u-4u}{x^5}+\frac{4u-2x^3}{x^5}\notag\\
&=\lim_{x\to 0}4\cdot\frac{\arctan u-u} {u^2}\cdot\frac{u^2}{x^6}\cdot x-\frac{1}{2}\text{ (via equation (1))}\notag\\
&=-\frac{1}{2}\notag\end{align}
The desired limit $L$ is thus $1/\sqrt{e}$. We have used the limit $$\lim_{x\to 0}\frac{\arctan x-x} {x^2}=\lim_{x\to 0}\frac{x-\tan x} {x^2}=\lim_{x\to 0}\frac{x\cos x-\sin x} {x^2}=0$$ which can be proved without Taylor series or L'Hospital's Rule. Also note that the factor $u^2/x^6=(u/x^3)^2\to (1/2)^2=1/4$. |
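A numerical check of $L=1/\sqrt e$, evaluating the function in question, $f(x)=\left(\frac{2(\arcsin x-\arctan x)}{x^3}\right)^{2/x^2}$, at small $x$:

```python
import math

def f(x):
    """f(x) = (2*(arcsin x - arctan x)/x^3) ** (2/x^2)."""
    return (2 * (math.asin(x) - math.atan(x)) / x**3) ** (2 / x**2)
```

Already at $x=0.01$ the value agrees with $e^{-1/2}\approx 0.6065$ to several digits.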
Basic proofs for epsilon delta limits: $\lim_{x \to a} \frac{x}{1+x}=\frac{a}{1+a}$ | You write
we want that for all $\epsilon \gt 0$ such that $0 \lt |x-a| \lt \delta$ , then $|f(x)-L| \lt \epsilon$ for a suitable $\delta$.
What you actually want is: for every $\epsilon > 0$ there corresponds a $\delta > 0$ such that for all $x$, $0 < |x - a| < \delta$ implies $|f(x) - L| < \epsilon$. Also, since $a$ can be zero or negative, you cannot start with the assumption $|x - a| < a$; the $\delta$ you have chosen can be negative for the same reasons.
Using the fact that
$$\frac{x}{1 + x} = 1 - \frac{1}{1 + x}$$
for all $x\neq -1$, we have
$$\left|\frac{x}{1 + x} - \frac{a}{1 + a}\right| = \left|\frac{1}{1 + a} - \frac{1}{1 + x}\right| = \left|\frac{x - a}{(1 + a)(1 + x)}\right| = \frac{|x - a|}{|1 + a||1 + x|}.$$
for all $x\neq -1$. Now since $a \neq -1$, $|1 + a| > 0$. So we may consider first $0 < |x - a| < \frac{|1 + a|}{2}$. By the triangle inequality,
$$|1 + a| = |(1 + x) - (x - a)| \le |1 + x| + |x - a|,$$
thus
$$|1 + x| \ge |1 + a| - |x - a| > |1 + a| - \frac{|1 + a|}{2} = \frac{|1 + a|}{2}.$$
Hence if $0 < |x - a| < \frac{|1 + a|}{2}$, then
$$\frac{|x - a|}{|1+a||1 + x|} < \dfrac{|x - a|}{|1+a|\cdot \frac{|1+a|}{2}} = \frac{2}{(1 + a)^2}|x - a|,$$
which can be made less than $\epsilon$ by having $|x - a| < \frac{\epsilon(1 + a)^2}{2}$. So choose
$$\delta = \min\left\{\frac{|1+a|}{2},\frac{\epsilon(1+a)^2}{2}\right\}.$$ |
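As a sanity check, one can sample points inside the chosen $\delta$-neighbourhood and verify the $\epsilon$ bound numerically (illustrative only, of course; sampling finitely many $x$ proves nothing):

```python
import random

def check_delta(a, eps, trials=10_000):
    # the delta from the proof above
    delta = min(abs(1 + a) / 2, eps * (1 + a) ** 2 / 2)
    f = lambda x: x / (1 + x)
    for _ in range(trials):
        x = a + random.uniform(-delta, delta)
        if 0 < abs(x - a) < delta:
            assert abs(f(x) - f(a)) < eps
    return True

random.seed(1)
assert check_delta(2.0, 0.1) and check_delta(-0.5, 0.01) and check_delta(0.0, 0.05)
```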
Exercise 5.3 in Calculus Made Easy: are these answers equivalent? | $${\sqrt{a^2 - b^2} - a - b \over a - b - \sqrt{a^2 - b^2}}=\frac{\sqrt{a+b}\left(\sqrt{a-b}-\sqrt{a+b}\right)}{\sqrt{a-b}\left(\sqrt{a-b}-\sqrt{a+b}\right)}=\frac{\sqrt{a+b}}{\sqrt{a-b}}$$ |
What does it mean when dx is put on the start in an integral? | The notation $\int f(x) \, dx $ and $\int dx \; f(x)$ mean the exact same thing. |
How can I determine the scale factor of a pantograph from the ratio of the arms? | This page breaks down the triangles pretty well. |
Probability that the eventually a six on a dice will appear. | Let $D$ denote the random variable telling how many rolls Dave took to roll a $6$, and let $L$ denote the variable telling how many rolls Linda took.
Then what you calculated were $P(D=k)$ for certain values of $k$. For example, the probability that Dave took $5$ rolls to roll a $6$ is
$$\frac56\cdot\frac56\cdot\frac56\cdot\frac56\cdot\frac16$$
What you need to calculate is the probability that $D=L\pm1$ or $D=L$, which is the probability
$$P(D=1\wedge (L=1\vee L=2)) + P(D=2\wedge (L=1\vee L=2\vee L=3)) + \dots$$
and the sum is actually infinite. I think writing it down may simplify it a bit, and you will be summing power series, which is not that hard... |
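Assuming $D$ and $L$ are independent geometric variables with parameter $1/6$ (fair dice), the infinite sum is easy to evaluate numerically; summing the power series gives $8/33$:

```python
def p(k):
    # P(the first six appears on roll k) for a fair die
    return (5 / 6) ** (k - 1) * (1 / 6) if k >= 1 else 0.0

# P(|D - L| <= 1) = sum_k P(D = k) * (P(L = k-1) + P(L = k) + P(L = k+1))
total = sum(p(k) * (p(k - 1) + p(k) + p(k + 1)) for k in range(1, 200))
assert abs(total - 8 / 33) < 1e-12  # the power series sums to 8/33
```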
Proof of every finite group is finitely presented. | I'll try to clarify the proof though you might have done it already (I was struggling with this theorem too so the answer could be useful for someone).
First of all, let's check that $N$ $\leq$ ker $\pi$. It is true since $N$ is the intersection of all normal subgroups containing $R_0$, and ker $\pi$ is one of such subgroups. Now we apply the following theorem. Let $h:A\rightarrow B$ be a homomorphism and $A$ $\rhd$ $X$ $\leq$ ker $h$, then for the projection $p:A\rightarrow A/X$ there is a unique homomorphism $f$ such that the following diagram commutes ($h=f\circ p$):
$\hskip2.8in$
Let's say that in our case there is a unique homomorphism $\phi$ such that the following diagram commutes:
$\hskip2.8in$
Now consider the cosets $g_1N,...,g_nN\in F(S)/N$. Note that they are pairwise distinct. Indeed, we have $i\neq j\rightarrow\pi(g_i)=g_i\neq g_j=\pi(g_j)$, and the commutativity implies $\pi(g_i)=\phi(p(g_i))=\phi(g_iN)$, $\pi(g_j)=\phi(p(g_j))=\phi(g_jN)$. Thus $i\neq j\rightarrow\phi(g_iN)\neq\phi(g_jN)$. So it's not the case that $g_iN=g_jN$ for $i\neq j$.
It's easy to verify that $F(S)/N=\langle g_1N,...,g_nN\rangle$. Given this, if we prove that $\{g_1N,...,g_nN\}$ is closed under multiplication and inversion, then we'll get $F(S)/N=\{g_1N,...,g_nN\}$.
Let's begin from the multiplication. We have $g_iNg_jN=g_ig_jN$. Let $g_ig_j=g_k$ then $g_ig_jN=g_kN$. Indeed, $g_ig_jg_k^{-1}\in N$ thus $g_ig_j\in Ng_k=g_kN$. Further we have $(g_jN)^{-1}=g_j^{-1}N$. Let $g_j^{-1}=g_k$ then $g_j^{-1}N=g_kN$. Indeed, let $g_i=e_G$ then $g_ig_kg_j\in N$. So $g_ig_k\in Ng_j^{-1}=g_j^{-1}N$. This implies $g_j^{-1}N=g_ig_kN$. But $g_ig_kN=g_kN$ as we have proved.
We conclude that $F(S)/N$ consists of $n$ distinct elements: $\{g_1N,...,g_nN\}$. The commutativity of our diagram implies $\phi(g_iN)=g_i$, so $\phi$ maps the $n$ distinct cosets to the $n$ distinct elements of $G$ and is therefore injective. Now, if we assume that ker $\pi$ $\setminus$ $N$ contains some element $a$, then $aN\neq N$ but $\phi(aN)=\pi(a)=e_G=\phi(N)$, contradicting injectivity. Thus we have $N$ $=$ ker $\pi$ and $G\cong(S|R_0)$.
Limit with integral and $x^3$ | Use L'Hôpital combined with the fundamental theorem of calculus. |
Another flipping the coin problem | The words "it takes $10$ tosses to get a head and a tail" means the in the first $9$ tosses, you repeatedly only get one of them, and only after the 10th toss, you receive both.
Your $P(\geq 1 H\ and \geq 1T)$ does not respect the fact that all first $9$ tosses yield the same result.
The probability that is asked for is $P(\{\text{first head in 10th toss}\} \cup \{\text{first tail in 10th toss}\})$.
So, yes, the question is for a specific ordering, settled by the words "it takes", which mean "only then and not before". In any other ordering, it would have taken fewer than $10$ tosses to get at least one head and one tail.
Two interesting results in integration $\int_{0}^{a}f(a-x) \ \mathrm{d}x= \int_{0}^{a}f(x)\ \mathrm{d}x$ and differentiation of powers of functions | Try this one!
$$\int_{0}^{1}\frac{\ln(1+x)}{1+x^{2}}dx$$ |
solve $\lim_{x\rightarrow \infty}\frac{x^3}{e^x}$ using L'Hospitals rule | There will be situtation in which you may need to use L'Hospital's rule more than once, and in this case, you need to apply it three times:$$\lim_{x\to\infty} \frac {x^3}{e^x} \quad \overset{L'H}{=}\quad \lim_{x\to\infty} \frac{3x^2}{e^x}\quad \overset{L'H} = \quad \lim_{x\to\infty} \frac{6x}{e^x}\quad \overset{L'H} =\quad \lim_{x\to \infty}\frac{6}{e^x} = 0$$
You may apply L'Hospital repeatedly, provided (and only so long as) your limit evaluates to an indeterminate form. |
Probability Qual problem: $X_n\rightarrow X$ in probability implies $EX_n\rightarrow EX$ under given condition? | It suffices to show the $X_n$ are uniformly integrable. Indeed, since $|x|/f(x) \to 0$ we have that $E[|X_n| 1_{|X_n|>\lambda}] \leq k E[f(X_n)] \leq k C$ where $k$ may be chosen to be as small as we like by taking $\lambda$ large enough. It follows that $X_n$ are UI and so Vitali's convergence theorem implies we have $L_1$ convergence. |
Adding witnesses to prove Gödel's completeness theorem | The part that I don't understand is why do we know that there are exactly $\kappa$ existential sentences of $\mathscr{L}_{\kappa \cdot n} := \mathscr{L}\cup \{c_{\alpha} | \alpha <\kappa \cdot n\}$?
We know there are at most $\kappa$ existential sentences, because each existential sentence is a finite sequence of symbols from an alphabet consisting of the basic logical symbols, a countable infinity of variable letters, the symbols of $\mathscr L$, and the $\kappa\times_c n=\kappa$ new constant letters. This alphabet has size $\kappa$, so the number of existential sentences is no larger than
$$ \sum_{n\in\omega} \kappa^n = \sum_{n\in\omega} \kappa = \omega\times_c\kappa = \max(\omega,\kappa) = \kappa $$
(This is basic cardinal arithmetic).
On the other hand, there are also at least $\kappa$ existential sentences, even before we begin using the new constant letters. For example, there are $\aleph_0$ formulas of the form $\exists x(x=x\land x=x\land \cdots \land x=x)$. This is already enough, unless $\mathscr L$ is uncountable, in which case $\kappa>\aleph_0$. But in that case we can also take $\exists x\,p(x,\ldots, x)$ for each predicate letter $p\in\mathscr L$, $\exists x(x=f(x,\ldots,x))$ for each function letter $f\in\mathscr L$, and $\exists x(x=c)$ for each constant letter $c\in\mathscr L$.
Cantor-Schröder-Bernstein now gives that there are exactly $\kappa$ existential sentences.
In fact, however, the proof would still work well enough if there were fewer than $\kappa$ existential sentences; just choose some of the $\varphi_\alpha$s to be equal! We need a witness for each existential sentence, but it doesn't harm anything if, for lack of enough properties to witness, some of the existential sentences get more than one witness constant.
Also I don't quite understand what that statement actually means. Does it say there are exactly $\kappa$ existential sentences that cannot be stated in $\mathscr{L}_{\kappa \cdot (n-1)}$ and don't need symbols of $\mathscr{L}_{\kappa \cdot (n+1)} \setminus \mathscr{L}_{\kappa \cdot n}$?
That's not what it actually says (and not what is needed; see above remarks about reusing $\varphi_\alpha$s), but it may be most intuitive to imagine that.
In that case you would need to show that there are at least $\kappa$ sentences that can be made with $\mathscr{L}_{\kappa\cdot n}$ but not with $\mathscr L_{\kappa\cdot(n-1)}$. But the sentences $\exists x(x=c_\alpha)$ for every $\alpha$ between $\kappa\cdot(n-1)$ and $\kappa\cdot n$ will do fine for that.
Especially if we do not know the language.
As long as the language allows you to form $\exists x(x=c_\alpha)$ for each different $c_\alpha$, there will be enough sentences. The size of $\mathcal L$ doesn't matter for this -- except if there are more than countably many symbols in $\mathscr L$, in which case they're going to push $\kappa$ itself upwards.
If $=$ is not a primitive logical symbol for you, it needs to be tacitly assumed that $\mathscr L$ does contain at least one predicate letter of nonzero arity, such that every term can be put into a wff somehow. But if it doesn't, then the completeness theorem holds vacuously, because everything collapses into propositional logic at best. |
calculate the value of complex? | Note that the expression can be written as: $$\frac{-1+i\sqrt 3}{\sqrt 2(1+i)} = \frac{-1 + i\sqrt 3}{2} \times \frac{\sqrt 2}{1+i}$$ $$=e^{\frac{2\pi i}{3}} \times \frac{1}{e^{\frac{i \pi}{4}}}$$ $$=e^{\dfrac{5\pi i}{12}}$$ using the well-known Euler’s formula.
Now, use De Moivre’s formula to conclude. |
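A quick numerical confirmation with Python's `cmath`:

```python
import cmath, math

z = (-1 + 1j * math.sqrt(3)) / (math.sqrt(2) * (1 + 1j))
assert abs(z - cmath.exp(5j * math.pi / 12)) < 1e-12
assert abs(abs(z) - 1) < 1e-12  # the quotient lies on the unit circle
```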
Of the first 100 terms in fibonacci sequence, how many are odd? | Odd, Odd, Even, Odd, Odd, Even, is a correct observation.
This is a valid approach. |
Sequence of vectors with specific conditions forming a basis | This is a cool problem. Set $V=\langle e_1+e_2,e_2+e_3,e_3 \rangle$ and $W=\langle e_1+e_4,e_2,e_3+e_4 \rangle$, and set $X=\langle W,e_4\rangle$. Finally, set $Z = \langle e_1,e_2,e_3,e_5 \rangle$; we're trying to prove that $Z=E$.
Note that $V = \langle e_1,e_2,e_3 \rangle$ and $X = \langle e_1,e_2,e_3,e_4 \rangle = \langle V,e_4\rangle$.
Now we do some dimension-counting.
From 1 and 2, we know that $\dim E < 5$.
From 3, we know that $\dim E > 3$. Therefore $\dim E = 4$. To prove that $e_1,e_2,e_3,e_5$ is a basis for $E$, it suffices to prove that $\dim Z\ge 4$.
From 3, we know that $\dim V = 3$.
From 4, we know that $\dim W < 3$. It follows that $\dim X \le \dim W + \dim \langle e_4 \rangle \le \dim W + 1 < 4$. Therefore $V\subseteq X$ but $\dim X \le 3 = \dim V$, and therefore $V=X$. It follows that $e_4$ is a linear combination of $e_1,e_2,e_3$.
Finally, $V\subseteq Z$; if $\dim Z \le 3 = \dim V$, then it would follow that $e_5$ is a linear combination of $e_1,e_2,e_3$. But since $e_4$ is also, it would follow that $\dim E \le 3$, a contradiction. Therefore $\dim Z > 3$, as required. |
How Would You Translate This Sentence To Predicate Logic? | Close but no cigar. The correct answer is:
$$\exists x(\mathrm{Apple}(x) \wedge \forall y(\mathrm{Person}(y) \rightarrow \mathrm{Loves}(y,x)))$$
To see that your answer isn't quite right, consider a universe with one person and no apples. Then the implication is vacuously true (since $\mathrm{Apple}(x)$ is false). So the statement you wrote down becomes
$$∃x∀y \,\mathrm{True}$$
which is vacuously true. So the statement you wrote down is satisfied by a universe in which there are no apples. Therefore, it cannot capture the meaning of a sentence of the form: "There exists an apple such that [whatever]."
Edit. The general principle is this.
"For all apples $x$ [whatever]" = "For all $x$, if $x$ is an apple then [whatever]."
"There exists an apple $x$ such that [whatever]" = "There exists $x$ such that $x$ is an apple, and [whatever]." |
When is the order of convergence of Newton-Raphson greater than 2? | Newton's method is an example of a functional iteration, i.e., $$x_{n+1} = g(x_n).$$ Newton's method corresponds to the choice of $$g(x) = x - \frac{f(x)}{f'(x)}.$$ In general, we say that $r$ is a fixed point of a function $g$ if and only if $g(r) = r$. If $r$ is a fixed point of $g$ and if $$g^{(j)}(r) = 0, \quad j=1,2,\dotsc,k-1 \quad \text{and} \quad g^{(k)}(r) \not = 0,$$ then the functional iteration $$x_{n+1} = g(x_n)$$ will converge to $r$ provided $x_0$ is sufficiently close to $r$. Moreover, the order of convergence is exactly $k$. This last bit follows from Taylor's formula. Specifically, there exists $\xi_n$ between $r$ and $x_n$ such that
$$
x_{n+1} - r = g(x_n) - g(r) = \frac{1}{k!}g^{(k)}(\xi_n)(x_n-r)^k
$$
When $x_n \rightarrow r$, the squeeze lemma will ensure that $\xi_n \rightarrow r$. Continuity of $g^{(k)}$ will therefore imply
$$
\frac{|r - x_{n+1}|}{|r - x_n|^k} \rightarrow \frac{1}{k!}|g^{(k)}(r)| \not = 0
$$
which is exactly what we mean when we say that the order of convergence is $k$.
Now returning to the case of Newton's method. In general, we have
$$ g'(x) = 1 - \frac{f'(x)^2 - f(x)f''(x)}{f'(x)^2} = \frac{f(x)f''(x)}{f'(x)^2}$$
Since $r = g(r)$ if and only if $f(r) = 0$ we always have $$g'(r) = 0.$$
This is the reason why Newton's method has at least quadratic convergence near an isolated root.
When do we have at least cubic convergence? To that end, we consider $g''(r)$. If $f$ is at least three times differentiable, then we have
\begin{align}
g''(x) &= \frac{(f'(x)f''(x)+f(x)f'''(x))f'(x)^2 - 2f(x)f''(x)f'(x)f''(x)}{f'(x)^4} \\ &= \frac{f'(x)^3f''(x) + f(x)f'(x)^2 f'''(x) - 2f(x)f'(x)f''(x)^2}{f'(x)^4}
\end{align}
It follows that
$$
g''(r) = \frac{f''(r)}{f'(r)}
$$
Conclusion: we can only have cubic convergence provided $f''(r) = 0$. This happens quite rarely. One example is $f(x) = \sin(x)$ and $r = \pi$. Here the convergence is cubic as we can just manage to see from the actual numbers:
$$\begin{array}{c|c|c}
n & x_n & x_n - \pi \\ \hline
0 & 3.000000000000000 & -1.415926535897931 \times 10^{-1} \\
1 & 3.142546543074278 & 9.538894844847157 \times 10^{-4} \\
2 & 3.141592653300477 & -2.893161266115385 \times 10^{-10} \\
3 & 3.141592653589793 & 0
\end{array}
$$ |
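The table is easy to reproduce with a few lines of Python; empirically the ratios $|x_{n+1}-\pi|/|x_n-\pi|^3$ settle near a nonzero constant (about $1/3$ for this particular $f$), confirming the cubic order:

```python
import math

x = 3.0
errs = [abs(x - math.pi)]
for _ in range(2):
    x = x - math.sin(x) / math.cos(x)  # Newton step for f = sin
    errs.append(abs(x - math.pi))

# cubic convergence: |e_{n+1}| / |e_n|^3 stays near a nonzero constant
for e0, e1 in zip(errs, errs[1:]):
    assert 0.2 < e1 / e0**3 < 0.5
```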
Discontinuous Fourier transforms? | Just take the inverse Fourier transform of your favourite discontinuous $L^2$ function. |
Convex Hull = Boundary+Segments | Prove this by induction on the dimension $n$. When $n=1$ this is clear.
Now for $n > 1$, by considering closest point projection, you can prove that every point on the boundary of a convex body has a supporting hyperplane $L$. So let $x \in \partial H $ and $L$ be the supporting hyperplane at $x$.
Then $L\cap H$ is the convex hull of $L\cap A$. By the induction hypothesis, we have either $x \in L\cap A$ or $x = \lambda y_1 + (1-\lambda)y_2$ for $y_1, y_2 \in \partial (L \cap A)$. Now notice that $\partial (L \cap A) \subset L \cap \partial A$.
How to compute geodesics in upper half-plane? | The center of the circle lies on the real axis, and also on the perpendicular bisector of the two points, so... |
Please some help with sequences | It is a property of the reals that any bounded sequence of reals has a Cauchy subsequence. However, as drhab points out, $\mathbb{Q}$ is not complete. If we consider sequences in $\mathbb{Q}$, the sequence $a_0=1$ and $a_{n+1}=\frac{a_n^2+2}{2a_n}$ is bounded, but not convergent in $\mathbb{Q}$ (the limit in $\mathbb{R}$ is $\sqrt2\not\in\mathbb{Q}$). |
Sufficient condition for measure convergence on $\mathscr{B}(\mathbb{R}^d)$ | No, suppose on $\mathbb R$ we let $g_n = (1/n)\chi_{[0,n]}, n = 1,2,\dots ,$ and define $\mu_n$ accordingly. Set $\mu = 0, g=0.$ Then $g_n \to 0=g$ pointwise everywhere, and
$$\int_{\mathbb R} g_n \, d\mu_n = \int_{\mathbb R} g_n^2 \,d\lambda = \frac{1}{n}\to 0 = \int_{\mathbb R} g\,d\mu .$$
But for every $n,$ $|\mu_n([0,n])- \mu([0,n])| = |1-0| = 1.$
Why a group $G$ can form a 2-category? | If you want to understand how the category of groups becomes a 2-category, and in general if you want to understand 2-categories, you ought to be familiar with the canonical example of a (strict) 2-category, namely Cat itself. This the the category of all (small) categories, and has as its objects categories, as its morphisms functors, and as its 2-morphisms natural transformations. Taking a group to be a category with one object, Grp becomes a full subcategory of Cat. Ultimately, I would recommend that you try to spend some more time understanding all these definitions, as once you have that set, the 2-category structure of Grp will become pretty straightforward. |
Verifying the Biot-Savart Law agrees with the Definition of Vorticity | Since you asked for hints, here are 3:
$\partial_{x_1} f(x-y) = -\partial_{y_1} f(x-y)$.
Changing $y$ to polar $(r,\theta)$ coordinates,
$y_1\partial_{y_1}+y_2\partial_{y_2} = r\partial_r$. |
Smallest $x+y$ for integer equation $n = 5x + 3y$ in closed form | The integer solutions of $n = 5x + 3y$ are given by $x=3t-n$, $y=-5t+2n$, with $t \in \mathbb Z$.
To minimize $x+y=-2t+n$, we need to maximize $t$.
Now:
$x\ge0\ $ iff $\ 3t \ge n$.
$y\ge0\ $ iff $\ 2n \ge 5t$.
$x+y\ge0\ $ iff $\ n \ge 2t$.
These give
$
\frac{n}{3} \le t \le \frac{2n}{5} \le \frac{n}{2}
$.
Therefore, $ t \le \left\lfloor\frac{2n}{5} \right\rfloor$ and so the smallest value of $x+y$ is $-2\left\lfloor\frac{2n}{5} \right\rfloor+n$. |
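A brute-force cross-check of the closed form (assuming, as in the derivation, that $x,y\ge 0$ is required; note that every $n\ge 8$ is representable):

```python
def min_sum(n):
    # closed form derived above: x + y = n - 2*floor(2n/5)
    return n - 2 * ((2 * n) // 5)

def brute(n):
    # minimize x + y over nonnegative integer solutions of 5x + 3y = n
    sums = [x + (n - 5 * x) // 3
            for x in range(n // 5 + 1) if (n - 5 * x) % 3 == 0]
    return min(sums) if sums else None

for n in range(8, 200):
    assert brute(n) == min_sum(n)
```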
calculate: $\int_0^\infty \frac{\log x \, dx}{(x+a)(x+b)}$ using contour integration | A standard way forward to evaluate an integral such as $\displaystyle \int_0^\infty \frac{\log(x)}{(x+a)(x+b)}\,dx$ using contour integration is to evaluate the contour integral $\displaystyle \oint_{C}\frac{\log^2(z)}{(z+a)(z+b)}\,dz$ where $C$ is the classical keyhole contour.
Proceeding accordingly we cut the plane with a branch cut extending from $0$ to the point at infinity along the positive real axis. Then, we have
$$\begin{align}
\oint_{C} \frac{\log^2(z)}{(z+a)(z+b)}\,dz&=\int_\varepsilon^R \frac{\log^2(x)}{(x+a)(x+b)}\,dx\\\\
& +\int_0^{2\pi}\frac{\log^2(Re^{i\phi})}{(Re^{i\phi}+a)(Re^{i\phi}+b)}\,iRe^{i\phi}\,d\phi\\\\
&+\int_R^\varepsilon \frac{(\log(x)+i2\pi)^2}{(x+a)(x+b)}\,dx\\\\
&+\int_{2\pi}^0 \frac{\log^2(\varepsilon e^{i\phi})}{(\varepsilon e^{i\phi}+a)(\varepsilon e^{i\phi}+b)}\,i\varepsilon e^{i\phi}\,d\phi\tag1
\end{align}$$
As $R\to \infty$ and $\varepsilon\to 0$, the second and fourth integrals on the right-hand side of $(1)$ vanish and we find that
$$\begin{align}\lim_{R\to\infty\\\varepsilon\to0}\oint_{C} \frac{\log^2(z)}{(z+a)(z+b)}\,dz&=-i4\pi \int_0^\infty \frac{\log(x)}{(x+a)(x+b)}\,dx\\\\
&+4\pi^2\int_0^\infty \frac{1}{(x+a)(x+b)}\,dx\tag2
\end{align}$$
And from the residue theorem, we have for $R>\max(a,b)$
$$\begin{align}
\oint_{C} \frac{\log^2(z)}{(z+a)(z+b)}\,dz&=2\pi i \left(\frac{(\log(a)+i\pi)^2}{b-a}+\frac{(\log(b)+i\pi)^2}{a-b}\right)\\\\
&=2\pi i\left(\frac{\log^2(a)-\log^2(b)}{b-a}\right)-4\pi ^2 \frac{\log(a/b)}{b-a} \tag3
\end{align}$$
Now, finish by equating the real and imaginary parts of $(2)$ and $(3)$.
Can you finish now? |
Which one of the following are true?[CSIR-June 2017] | Your argument for (d) is not valid because $a_n=2^{n}$ does not satisfy the hypothesis. Note that $|a_n-a_m| \leq |a_n-a_{n+1}|+...+|a_{m-1}-a_m|$ for $n >m$. Conclude that $(a_n)$ Ia Cauchy sequence, hence bounded. It follows that the series $\sum a_n x^{n}$ converges in $(-1,1)$. Being a power series it converges in some interval containing $(-1,1)$. Other parts of your answer are correct. |
Integrating $f(x,y,z)$ within the region of a cylinder and a hyperboloid in cartesian coordinates | Hyperboloid $z = \sqrt{a^2+x^2+y^2} \,$ where $a \in (0,1)$ is upper portion of the hyperboloid (above $XY$ plane). Please note $z = -\sqrt{a^2+x^2+y^2}$ gives you a similar region below $XY$ plane.
Given $a \lt 1,$ it does intersect with cylinder $y^2 + z^2 = 1$.
If you visualize the region is bound below and on sides by hyperboloid and bound above by the cylinder.
In cartesian coordinates,
Bounds of $z$: $ \,\sqrt{a^2+x^2+y^2} \leq z \leq \sqrt{1-y^2} \,$ (bounded below by the hyperboloid and above by the cylinder)
To find bounds of $x,y$ as you wrote, $1 - a^2 = x^2 + 2y^2 \implies y = \pm \sqrt{\frac{1 - a^2 - x^2}{2}}$
Bounds of $y$, $-\sqrt{\frac{1 - a^2 - x^2}{2}} \leq y \leq \sqrt{\frac{1 - a^2 - x^2}{2}}$
From above equation $1 - a^2 = x^2 + 2y^2$, the lower and upper bounds of $x$ (occurs when $y = 0$) is $\pm \sqrt{1-a^2}$.
Bounds of $x, \, \, -\sqrt{1-a^2} \leq x \leq \sqrt{1-a^2}$
If you need to integrate over both regions above $XY$ plane and below $XY$ plane, just multiply the integral by $2$. |
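A quick numerical check that the $z$-bounds are consistent over the whole base region (with $a=1/2$ picked arbitrarily): inside $x^2+2y^2 \le 1-a^2$ the hyperboloid really does sit below the cylinder:

```python
import math, random

a = 0.5
random.seed(0)
for _ in range(10_000):
    xmax = math.sqrt(1 - a * a)
    x = random.uniform(-xmax, xmax)
    ymax = math.sqrt(max((1 - a * a - x * x) / 2, 0.0))
    y = random.uniform(-ymax, ymax)
    lower = math.sqrt(a * a + x * x + y * y)  # hyperboloid
    upper = math.sqrt(1 - y * y)              # cylinder
    assert lower <= upper + 1e-12
```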
Give an example of a map $\pi:\Bbb N\to\Bbb N$ with a right inverse but no left inverse and vice versa | It might help to recall the basic theorem that a function has a left-inverse iff it is injective, and a right-inverse iff it is surjective. If that doesn't sound familiar, then pause to look it up in any introductory text, as it is pretty crucial. [There's a wrinkle which doesn't matter here.]
So your task comes to this: to find a function $f\colon\mathbb{N} \to \mathbb{N}$ which is surjective but not injective, and to find one which is injective and not surjective. |
Quadratic Equation using surds property | Absolutely. Since you know that $$\sqrt{2\pm\sqrt{3}}=\frac{\sqrt{3}\pm1}{\sqrt{2}},$$ you can always make such a substitution. All you're doing is rewriting the same thing in a different-looking form, and there's no problem with doing that.
I'd take a different route, though. Since $\sqrt{\alpha}=\alpha^{1/2}$ for any positive $\alpha$, we can rewrite the equation using exponential properties as $$\left(2+\sqrt{3}\right)^{x/2}+\left(2-\sqrt{3}\right)^{x/2}=4^{x/2},$$ then make the substitution $y=x/2$ to get $$\left(2+\sqrt{3}\right)^y+\left(2-\sqrt{3}\right)^y=4^y.$$ It should be clear just by looking that $y=1$ ($x=2$) provides a solution to this equation. In fact, it is the only solution, which can be proved in various ways. |
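Both the surd identity and the solution are easy to confirm numerically:

```python
import math

s3 = math.sqrt(3)
# sqrt(2 +/- sqrt(3)) = (sqrt(3) +/- 1)/sqrt(2)
assert abs(math.sqrt(2 + s3) - (s3 + 1) / math.sqrt(2)) < 1e-12
assert abs(math.sqrt(2 - s3) - (s3 - 1) / math.sqrt(2)) < 1e-12

# y = x/2 = 1 solves (2+sqrt(3))^y + (2-sqrt(3))^y = 4^y
g = lambda y: (2 + s3) ** y + (2 - s3) ** y - 4.0 ** y
assert abs(g(1.0)) < 1e-12
```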
Complexity of $T(n)=T(n - \sqrt{n})+n$ | Assuming $T$ is monotone, defined on the reals, and usual assumptions for the recurrence relation to make sense (i.e., not having to deal with corner cases and floors/ceilings).
We will show that $T(n) = \Theta(n^{3/2})$.
Why? The reason to assume this is the right thing to prove is heuristic:
$$
T(n) = T(n-\sqrt{n})+ n \simeq T(n-2\sqrt{n})+ 2n-\sqrt{n} \simeq T(n-k\sqrt{n})+ kn-(k-1)\sqrt{n}
$$
and we get $T(1)$ for $k\simeq\sqrt{n}$, which leads to $T(n) \simeq k\sqrt{n} \simeq n^{3/2}$. Of course, there were a lof of approximations made at every step, so we may want to actually prove it.
Upper bound. Suppose there exists $C\geq 1$ such that $T(k) \leq Ck^{3/2}$ for every $k<n$. (The base case is easy, we just need $C$ to be chosen greater than the first few terms of $T$). Then
$$
T(n) = n + T(n-\sqrt{n}) \leq n + C(n-\sqrt{n})^{3/2}
\leq Cn + C(n-\sqrt{n})^{3/2} \leq C n^{3/2}
$$
using the fact that
$$
x^{3/2} \geq (x-\sqrt{x})^{3/2} + x, \qquad x\geq 1\,.
$$
(To see why this is true, observe that, dividing both sides by $x^{3/2}$, this is equivalent to $1-1/\sqrt{x} \geq (1-1/\sqrt{x})^{3/2}$).
Lower bound. Same thing, by induction. Suppose $T(k) \geq ck^{3/2}$ (for some small $c\in(0,1/2)$ chosen based on the first terms $T(1),\dots$) for every $k<n$. Then
$$
T(n) = n + T(n-\sqrt{n}) \geq n + c(n-\sqrt{n})^{3/2}
\geq n + c(n^{3/2}-2n) \geq cn^{3/2}
$$
since $c\leq 1/2$, and using that
$$
(x-\sqrt{x})^{3/2} \geq x^{3/2} - 2x,\qquad x\geq 1
$$
(shown e.g. via calculus). |
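A quick numerical experiment agrees: unfolding the recurrence directly (with base case $T(x)=1$ for $x\le 1$, chosen arbitrarily), the ratio $T(n)/n^{3/2}$ stays bounded between positive constants — empirically it hovers around $2/3$:

```python
import math

def T(n):
    # unfold T(n) = T(n - sqrt(n)) + n down to a base case T(x) = 1 for x <= 1
    total, x = 0.0, float(n)
    while x > 1.0:
        total += x
        x -= math.sqrt(x)
    return total + 1.0

for n in (10**4, 10**5, 10**6):
    ratio = T(n) / n**1.5
    assert 0.5 < ratio < 0.8  # bounded above and below: Theta(n^(3/2))
```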
Calculate the flux caused by a forcefield. | I think you just may have integrated with the wrong bounds, this worked for me:
$$\int_0^1\int_0^{1-t}-s^2dsdt=-\frac{1}{3}\int_0^1 (1-t)^3dt$$ then with $u=1-t$ and $du=-dt$ we get:
$$\frac{1}{12}(1-t)^4 \Big|_{0}^1=-\frac{1}{12}$$
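A crude midpoint-rule evaluation of the double integral as written (note the value comes out negative):

```python
# midpoint rule for  I = int_0^1 int_0^{1-t} (-s^2) ds dt  = -1/12
N = 400
total = 0.0
for i in range(N):
    t = (i + 0.5) / N
    width = 1.0 - t
    for j in range(N):
        s = (j + 0.5) * width / N
        total += (-s * s) * (width / N) * (1.0 / N)

assert abs(total - (-1.0 / 12.0)) < 1e-4
```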
here's a crude picture to show the region of integration once everything is parameterized:
$ E_{\text{TM}} = \{ \langle M \rangle \mid L(M) = \varnothing \} $ is undecidable. | You can use the basic halting problem proof to show this. A Turing Machine can print out its own description, and can simulate an arbitrary Turing machine on arbitrary input. So assume you had a Turing machine $T$ that could decide your problem. Then build a Turing machine $T'$ that prints out its description and simulates $T$ on that input. Then if $T$ says the language of $T'$ is empty, then accept any string. Otherwise, accept no strings. This contradicts the output of $T$ so $T$ cannot exist. |
Construct a matrix given basis for column space and basis for row space [GStrang P193 3.6.22] | 1-2) This construction works because the columns are on the left and the rows are on the right. I imagine it was conceived by thinking about how matrix multiplication on the left and right produces a vector. On a more subtle note, vectors are only matrices when you are working in some basis. More generally, vectors do not a priori have a default matrix representation.
3) If you multiply $$B_{m \times n} = \begin{bmatrix}
\vert & & \vert \\
\mathbf{c_1} & \cdots & \mathbf{c_s} \\
\vert & & \vert \\
\end{bmatrix}_{m \times s}
\begin{bmatrix}
-- & \mathbf{{r_1}^T} & -- \\
& ... & \\
-- & \mathbf{{r_s}^T} & -- \\
\end{bmatrix}_{s \times n}
= \sum\limits_{i=1}^{s}\mathbf{c_i{r_i}^T} $$ by a column vector on the right, the matrix of rows $\begin{bmatrix}
-- & \mathbf{{r_1}^T} & -- \\
 & ... & \\
-- & \mathbf{{r_s}^T} & -- \\
\end{bmatrix}_{s \times n}$ times that vector gives a new column of coefficients. Multiplying this column by $\begin{bmatrix}
\vert & & \vert \\
\mathbf{c_1} & \cdots & \mathbf{c_s} \\
\vert & & \vert \\
\end{bmatrix}_{m \times s}$ gives you an element of the column space. If you multiply a row vector on the left of $B_{m\times n}$, it similarly gives you an element of the row space.
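A tiny pure-Python illustration (the matrices here are made up): with $B = CR$, any product $Bv$ equals $C(Rv)$, i.e. a combination of the chosen columns with coefficient vector $Rv$:

```python
def matmul(A, B):
    # naive matrix product for matrices stored as lists of rows
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

C = [[1, 0], [2, 1], [0, 3]]   # columns c1, c2 side by side (3x2)
R = [[1, 4, 0], [0, 1, 2]]     # rows r1^T, r2^T stacked (2x3)
B = matmul(C, R)               # the 3x3 matrix built as above

v = [[2], [1], [5]]            # an arbitrary column vector
assert matmul(B, v) == matmul(C, matmul(R, v))  # Bv lies in the column space
```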
Example of an antipodal set | If I am reading the definition correctly then the vertices of the unit cube in dimension $d$ form such a set. The two parallel hyperplanes will contain opposite faces of the cube whose dimension depends on the dimension of the smallest face containing the two points.
Imagine the possibilities in the plane and space to see the geometry.
I wonder if size $2^d$ is maximal for antipodal sets in $\mathbb{R}^d$. |
Definition of smooth curves | You can find the more general definition of smooth morphisms in e.g. Vakil or the Stacks project.
$\dim(C)$ is not equal to $1$ in general! Perhaps a more appropriate name here would be to call $C\rightarrow S$ a family of curves parametrised by the base $S$. In any case, the upshot is that here you want your fibres to be (smooth) curves, hence the condition "smooth of relative dimension $1$".
For example, consider the subscheme of $\mathbb{P}^2 \times \mathbb{A}^1_t$ cut out by $y^2z = x(x-z)(x-tz)$, where $x,y,z$ are coordinates on $\mathbb{P}^2$ and $t$ is your coordinate on $\mathbb{A}^1_t$. Then the projection to $\mathbb{A}^1_t$ defines a (non-smooth!) curve over $\mathbb{A}^1_t$. This is not smooth because the fibres over $t=0,1$ are nodal cubics. However, if you restrict to the open subset $\mathbb{A}^1_t - \{0,1\}$, then you do get a smooth morphism.
Evaluating indefinite integral $\int \frac{3x}{x-2}\,dx$ | Hint: try writing the numerator instead as $3x-6+6$ and break it into two nice terms.
Alternatively, try a change of variable $u = x-2$. |
Is the quarter disk diffeomorphic to the half disk? | Hint: Use the fact that the tangent space at any point of the half disk contains some vector $v$ together with its opposite $-v$; the "tangent space" at the corner of the quarter disk does not. Conclude that there cannot be a linear map between the two tangent spaces of full rank. |
Factorize $ 77 $ in $ \mathbb{Z}[\sqrt{-13}]$ | Ideals are useful and important, but in this case I think it would be much simpler to just use the norms of numbers. As you know, $N(a + b \sqrt{-13}) = a^2 + 13b^2$. So we're looking to solve $a^2 + 13b^2 = 7, 11, 49, 77$ or $121$ so that $(a^2 + 13b^2)(c^2 + 13d^2) = 5929$.
Without loss of generality, temporarily ignore solutions with $a$ or $b$ negative. As it turns out, $N(n) = 7$ is impossible, as is $N(n) = 11$. There is only one solution for $49$, and that's $n = 7$. Looking at the sequence $13, 52, 117$, we find that $(2 - 3 \sqrt{-13})(2 + 3 \sqrt{-13}) = 121$, but $77$ is not divisible by either of those factors. So the only solution with positive real integers is $7 \times 11 = 77$.
That takes care of all solutions with $b = d = 0$. Moving on to $b \neq 0$, we have to look in a much smaller circle. The only solutions are $8^2 + 13 \times 1^2 = 77$ and $5^2 + 13 \times 2^2 = 77$. Any higher value of $a$ or $b$ will overshoot the mark.
So, you have found all three solutions.
If you still want to look at ideals, how many combinations of principal ideals can you get from $$\langle 7, 1 - \sqrt{-13} \rangle \langle 7, 1 + \sqrt{-13} \rangle \langle 11, 3 - \sqrt{-13} \rangle \langle 11, 3 + \sqrt{-13} \rangle ?$$ |
integral of Laplacian of a positive function | Since there is likely nothing more to be added in response to the question, I am placing this community wiki answer with the sole purpose of closing the thread (so as not to add to the already excessive number of unanswered questions). |
What is $\frac{d}{dx}(4\dot{y}^3)$ | As a forewarning, I am switching notation from $\frac{dy}{dx}=\dot{y}$ to $\frac{dy}{dx}=y'$ because a dot is usually used to denote a "time" (or parametric) derivative.
Make sure we are starting off with a valid Euler-Lagrange equation. In case you are unaware, it is easy to convince yourself that when minimizing length as the integral of a function $$f(x,y,y') = \sqrt{1+(y')^2}$$ it will be equivalent to minimize the integral of the square $$f^2(x,y,y') = 1+(y')^2 .$$
The Euler-Lagrange equation we want to use is $$\frac{\partial f}{\partial y} - \frac{d}{dx}\bigg[\frac{\partial f}{\partial y'}\bigg]=0 \text{ .}$$
Applying this to our function $f$, we have
$$0 - \frac{d}{dx}\bigg[2y'\bigg]=0 $$ so clearly
$$ \frac{d}{dx}\bigg[2y'\bigg]=\frac{d}{dx}\bigg[2\frac{dy}{dx}\bigg]=2\frac{d^2y}{dx^2}=0 .$$
We are left with the second order ODE
$$\frac{d^2y}{dx^2} = 0$$
which models a line in the plane $y(x) = c_1 x + c_2 \text{ .}$ |
Expected number of connected singletons in random graph | Let $u \in K$ and $v \in M$. I'm using $K$ and $M$ to denote the sets of nodes and theirs sizes, ok?
$$P(u \leftrightarrow v, d(v)=1) = P(d(v)=1 |u \leftrightarrow v)P(u \leftrightarrow v)=(1-p)^{K-1}p$$
Now, let
$$N = \sum_{u\in K}\sum_{v\in M} 1_{\{u↔v,d(v)=1\}}$$
note that $N$ counts the number of nodes in $K$ which connect to nodes in $M$ with degree one. Taking the expected value and using linearity you get
$$\mathbb{E}[N]=KMp(1-p)^{K-1}$$
You can play with the expression above putting $K$ or $M$ as a function of each other. Or you can set $p$ as a function of $K$ and $M$...
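A quick Monte Carlo check of the formula, with small made-up parameters:

```python
import random

def avg_singletons(K, M, p, trials=20_000, seed=0):
    # each of the K*M possible edges is present independently with prob p;
    # N = number of nodes v in M with degree exactly 1
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        for _v in range(M):
            deg = sum(rng.random() < p for _u in range(K))
            total += (deg == 1)
    return total / trials

K, M, p = 5, 4, 0.3
assert abs(avg_singletons(K, M, p) - K * M * p * (1 - p) ** (K - 1)) < 0.05
```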
Hope this can help... |
Riemann Integrability and Max-Min functions | No. Define $f:[0, 1] \to \mathbb{R}$
$$
f(x) = \begin{cases}
-1 \text{ if } x \in \mathbb{Q}\cap[0,1]\\
0 \text{ if } x \not \in \mathbb{Q}\cap[0,1]
\end{cases}
$$
Then $f^{+} = 0$ and
$$
f^{-} = \begin{cases}
1 \text{ if } x \in \mathbb{Q}\cap[0,1]\\
0 \text{ if } x \not \in \mathbb{Q}\cap[0,1]
\end{cases}
$$
Therefore, $f^{+} \in \mathcal{R}[0,1], f^{-} \not \in \mathcal{R}[0,1]$ and $f \not \in \mathcal{R}[0,1]$. |
Premises of the extreme value theorem on “restricted” domains | For $S\subset D\subset X$ with $D$ equipped with the subspace topology, the set $S$ is compact in $D$ iff $S$ is compact in $X$ (see Q1 and Q2). Then by the Heine-Borel theorem, you need to check whether $S$ is a closed and bounded subset of $\mathbb{R}^n$ (and not of $D$). |
Checking to see if a Random amount of Dice are the Same | For an even number of dice it can also fail. Consider $4$ dice rolls with results $3,4,4,5$.
The proof is simple, really. Consider an arrangement of $n$ dice rolls where $n-2$ dice have value $k$, one die has value $k-1$ and another die has value $k+1$.
This works provided $1<k<6$ and $n\geqslant 2$.
In other words, your algorithm checks a condition that is necessary, but not sufficient, for the array to be of equal numbers. |
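For illustration, here is a hedged sketch in Python. I am assuming the algorithm under discussion only tests whether the total is divisible by the number of dice, which is exactly the condition the counterexample family above satisfies; the original question's code is not shown here.

```python
def all_equal(rolls):
    # the condition actually wanted
    return len(set(rolls)) == 1

def sum_check(rolls):
    # necessary but not sufficient: n equal dice of value k always sum to n*k
    return sum(rolls) % len(rolls) == 0

counterexample = [3, 4, 4, 5]         # n-2 dice of value k, one k-1, one k+1
assert sum_check(counterexample)      # passes the divisibility test...
assert not all_equal(counterexample)  # ...but the dice are not all equal
```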
Determine recurrence and transience on an infinite state space | A Markov chain of this form is recurrent. For a state $i>0$, define
$$
\tau_i = \inf\{n>0:X_n=i\},\qquad X_0=i,
$$
the first return time to $i$, and let $\sigma_1<\sigma_2<\cdots$ be the successive visit times to $0$:
$$
\sigma_1 = \inf\{n>0: X_n =0\},\qquad \sigma_{m+1} = \inf\{n>\sigma_m: X_n = 0\}.
$$
Then
$$
\mathbb P(\tau_i<\infty) \geqslant \mathbb P\left(\bigcup_{m=1}^\infty \{X_{\sigma_m+1}\geqslant i\}\right) = 1,
$$
as by independence,
$$
\mathbb P\left(\bigcap_{m=1}^\infty \{X_{\sigma_m+1}<i\} \right) = \lim_{m\to\infty} \left(\sum_{j=1}^{i-1} q_j\right)^m = 0.
$$
Now, whether the chain is positive or null recurrent is less clear; I believe it depends on the $q_i$, but I am not sure of an example where e.g. $\mathbb E[\tau_1]=\infty$. |
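For intuition, here is a simulation of one hypothetical chain of this form (my own construction, not from the question): from $0$ it jumps to $j$ with probability $q_j = 2^{-j}$, and from $i>0$ it steps down deterministically to $i-1$. Every simulated path returns to its starting state:

```python
import numpy as np

rng = np.random.default_rng(1)

def step(x):
    if x > 0:
        return x - 1           # deterministic step down
    return rng.geometric(0.5)  # from 0, jump to j with probability 2^{-j}

# each path started at i = 3 returns to 3 in finite time (recurrence)
for _ in range(200):
    x, n = step(3), 1
    while x != 3 and n < 10**6:
        x, n = step(x), n + 1
    assert x == 3
```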
what is the maximum number of edges in a graph with self-loop? | If there are no loops it is $\binom{n}{2}$; there are at most $n$ loops, so the new maximum is $\binom{n}{2}+n=\binom{n+1}{2}$ |
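A tiny numerical confirmation of the identity $\binom{n}{2}+n=\binom{n+1}{2}$ used above (Pascal's rule, my addition):

```python
from math import comb

def max_edges_with_loops(n):
    # simple-graph edges plus one loop per vertex
    return comb(n, 2) + n

# Pascal's rule: C(n,2) + C(n,1) = C(n+1,2)
assert all(max_edges_with_loops(n) == comb(n + 1, 2) for n in range(100))
```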
Using the Pumping Lemma to show that the language of all strings of even length having no $0s$ in their second half is not regular | Assume $L$ is regular, with pumping length $n$.
Consider $0^n1^n\in L$ (it has even length and its second half is all $1$s). Then $0^n1^n=uvw$ with $|uv|\le n$ (so $u$ and $v$ consist only of $0$s), $|v|>0$, and $uv^kw\in L$ for all $k\in\Bbb N_0$. But $uv^2w=0^{n+|v|}1^n$: if $|v|$ is odd this string has odd length, and if $|v|$ is even its second half begins with at least one $0$. Either way $uv^2w\notin L$, a contradiction. |
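A brute-force verification of the argument above for a concrete $n$ (my addition); `in_L` encodes membership in the language:

```python
def in_L(s):
    # even length, and no '0' anywhere in the second half
    return len(s) % 2 == 0 and '0' not in s[len(s)//2:]

n = 5
s = '0'*n + '1'*n
assert in_L(s)

# every split s = u v w with |uv| <= n and |v| >= 1 fails when v is pumped twice
for i in range(n + 1):               # |u| = i
    for j in range(i + 1, n + 1):    # |uv| = j
        u, v, w = s[:i], s[i:j], s[j:]
        assert not in_L(u + v*2 + w)
```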
using one vector to build positive definite matrix | The matrix will be positive semidefinite. To see this, multiply $A$ on the left by $Y$ and on the right by $Y^T$ for a row vector $Y$; the result is $(XY^T)^2\geqslant 0$.
It will be positive definite if and only if $X\ne0$ and $n=1$: for $n>1$ there is always a non-zero vector $Y$ orthogonal to $X$, and for such $Y$ we get $YAY^T=0$. |
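A numeric illustration with NumPy, assuming (as in the question) the matrix is built from a single vector as $A = X^TX$:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
A = np.outer(x, x)                 # A = X^T X for the row vector X

eig = np.linalg.eigvalsh(A)
assert np.all(eig >= -1e-12)       # positive semidefinite

# for n > 1 there is a nonzero Y orthogonal to X, so A is not definite
y = np.array([2.0, -1.0, 0.0])     # X Y^T = 0
assert np.isclose(y @ A @ y, 0.0)
```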
Application of Jensen's Inequality for convex functions | A well known fact is $Ee^{tX} =e^{t^{2}\sigma^{2}/2}$ if $X$ is normal with mean $0$ and variance $\sigma^{2}=EX^{2}$. Just put $t=\sum a_k$ and expand $t^{2}$ as $\sum_{i,j} a_ia_j$. |
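A Monte Carlo check of the identity $Ee^{tX}=e^{t^2\sigma^2/2}$ used above (my addition; the values of $t$ and $\sigma$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
t, sigma = 0.7, 1.5

x = rng.normal(0.0, sigma, size=1_000_000)
mc = np.exp(t * x).mean()                 # estimate of E[e^{tX}]
exact = np.exp(t**2 * sigma**2 / 2)

assert np.isclose(mc, exact, rtol=1e-2)
```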
How to check either the series is convergent or not? | HINT:
The sum needs to start at $n\ge 3$.
Note that we can write
$$\log\left(\frac{n-2}{n+3}\right)=\log\left(1-\frac{5}{n+3}\right)$$
and
$$-\frac{5}{n-2}\le \log\left(1-\frac{5}{n+3}\right)\le -\frac{5}{n+3}$$ |
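In fact the partial sums telescope, which makes the divergence easy to verify numerically (my addition, beyond what the hint asks):

```python
import math

def partial_sum(N):
    return sum(math.log((n - 2) / (n + 3)) for n in range(3, N + 1))

def closed_form(N):
    # telescoping: prod_{n=3}^{N} (n-2)/(n+3) = 120 / ((N-1)N(N+1)(N+2)(N+3))
    return math.log(120) - sum(math.log(N + k) for k in (-1, 0, 1, 2, 3))

assert math.isclose(partial_sum(1000), closed_form(1000), rel_tol=1e-9)
# the partial sums behave like -5 log N, so the series diverges to -infinity
```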
Confusion regarding Lagrange multipliers | Setting the partial derivative of $L$ with respect to $\lambda$ to $0$ forces $g(x,y)=0$. Setting the partial derivatives of $L$ with respect to $x$ and $y$ to $0$ leads to a critical point of $f$ subject to $g(x,y)=0$. Whether that critical point is a minimum, a maximum, or a saddle depends on the particular problem; the multiplier equations alone do not decide this. |
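A worked instance with SymPy; the choices $f=x+y$ and $g=x^2+y^2-1$ are my own illustration, not from the question:

```python
import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)
f = x + y                    # objective (illustrative choice)
g = x**2 + y**2 - 1          # constraint g(x, y) = 0
L = f - lam * g

# dL/dlambda = 0 recovers the constraint; dL/dx = dL/dy = 0 are the
# stationarity conditions
sols = sp.solve([sp.diff(L, v) for v in (x, y, lam)], [x, y, lam], dict=True)

pts = [(float(s[x]), float(s[y])) for s in sols]
assert len(pts) == 2  # a maximizer and a minimizer of f on the circle
assert all(abs(px**2 + py**2 - 1) < 1e-9 for px, py in pts)  # each satisfies g = 0
```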
Proof of Generalized Distributive Laws by mathematical induction | Use associativity to break it up in two steps: first apply the two-term distributive law, then the induction hypothesis (the distributive law for $n$ terms):
$$p\land(q_1\lor q_2 \lor \cdots \lor q_n \lor q_{n+1}) \\
\Leftrightarrow p\land((q_1\lor q_2 \lor \cdots \lor q_n) \lor q_{n+1}) \\
\Leftrightarrow (p\land(q_1\lor q_2 \lor \cdots \lor q_n)) \lor (p \land q_{n+1}) \\
\Leftrightarrow ((p\land q_1)\lor (p\land q_2) \lor \cdots \lor (p \land q_n)) \lor (p \land q_{n+1}) \\
\Leftrightarrow (p\land q_1)\lor (p\land q_2) \lor \cdots \lor (p \land q_n) \lor (p \land q_{n+1})
$$ |
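The generalized law is also easy to check exhaustively for small $n$ (a quick brute-force verification, my addition):

```python
from itertools import product

def lhs(p, qs):
    return p and any(qs)             # p ∧ (q1 ∨ ... ∨ qn)

def rhs(p, qs):
    return any(p and q for q in qs)  # (p ∧ q1) ∨ ... ∨ (p ∧ qn)

# exhaustive check over all truth assignments for n = 1, ..., 5
for n in range(1, 6):
    for bits in product([False, True], repeat=n + 1):
        p, qs = bits[0], bits[1:]
        assert lhs(p, qs) == rhs(p, qs)
```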