What are some other topological invariants apart from connectedness, compactness and fundamental groups?
Homology groups, related to homotopy groups (the abelianization of the fundamental group (the first homotopy group) is the first homology group). Cohomology groups, related to homology groups via the universal coefficient theorem (and, on closed oriented manifolds, via Poincaré duality). If your topological space is a low-dimensional manifold or a knot, then there are plenty of invariants.
Image of a set by the transformation: $\tan(z)$
IMHO the best way to solve this is to break the transformation down into simple transformations and apply them successively. Extending your formula for $T(z)$, we have $$T(z)=i\Bigl(-1+\frac2{1+e^{2iz}}\Bigr)\ :$$ the reason I have written it this way is that now $z$ occurs only once in the formula. We now see what happens to complex numbers $z\in\Omega$ if we apply this formula step by step. The best way to understand it is to draw diagrams, but I am not good at posting diagrams online so I will leave that part of it up to you. Firstly, we have $$e^{2iz}=e^{-2y}e^{2ix}\ .$$ For $z\in\Omega$, the modulus $e^{-2y}$ can be any positive real number, and $e^{2ix}$ can be any point on the unit circle except for $-1$. Multiplying these (this is where a diagram will be really helpful), $e^{2iz}$ takes all complex values except for the negative real axis (including the origin). Now $1+e^{2iz}$ shifts everything $1$ unit to the right. So now we have all complex numbers except for real numbers $x\le1$. If we take the reciprocal of all real numbers $x\le1$, we get all real numbers $x'\ge1$ together with all real numbers $x'<0$. So taking the reciprocal of $1+e^{2iz}$, we don't get these numbers. We also don't get the origin. So the image of $\Omega$ under the transform $$z\mapsto\frac1{1+e^{2iz}}$$ consists of all complex numbers except for real $x\ge1$ and real $x\le0$. The rest is downhill. Multiply by $2$ and add $-1$: we get all complex numbers except for real $x\ge1$ and real $x\le-1$. Finally, multiply by $i$: this rotates the plane through a right angle, so we have all of $\Bbb C$ except for the set you have indicated.
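As a quick numerical sanity check (a minimal sketch; I am assuming $\Omega$ is the vertical strip $-\pi/2<\operatorname{Re}z<\pi/2$, which is what the description of $e^{2ix}$ above suggests), one can sample points of $\Omega$ and verify that no image lands on the excluded set $\{iy : |y|\ge1\}$:

    import cmath
    import random

    def T(z):
        # T(z) = i(-1 + 2/(1 + e^{2iz})), which is tan(z) rewritten
        return 1j * (-1 + 2 / (1 + cmath.exp(2j * z)))

    random.seed(0)
    for _ in range(10_000):
        # sample z in the (assumed) strip -pi/2 < Re(z) < pi/2
        z = complex(random.uniform(-1.57, 1.57), random.uniform(-5, 5))
        w = T(z)
        # the image should avoid the rays {iy : |y| >= 1}
        assert not (abs(w.real) < 1e-12 and abs(w.imag) >= 1)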
On the existence of a specific linear operator
As written, your proof will generally not produce a linear operator. Consider $T:\ell^1\to\ell^1$ given by $$Tx(n)=x(n-1).$$ Then we have the left inverse $S:T(\ell^1)\to\ell^1$ defined by $$Sx(n)=x(n+1).$$ But, we have $y=\left(1,1,\frac{1}{4},\ldots,\frac{1}{(n-1)^2},\ldots\right)\in \ell^1\setminus T(\ell^1),$ and if we extend $S$ to $\tilde{S}$ by means of your proof, we have $$\tilde{S}y=0\neq \tilde{S}\left(1,0,0,\ldots\right)+\tilde{S}\left(0,1,\frac{1}{4},\ldots,\frac{1}{(n-1)^2},\ldots\right).$$ Generally, what is required is that $T(X)$ be a complemented subspace of $Y$. Because then $Y=T(X)\oplus Z$ for some closed subspace $Z$ of $Y$, and we can extend $S$ by linear extension of \begin{align} \tilde{S}y=\left\{ \begin{array}{lcl} Sy&:&y\in T(X)\\ 0&:&y\in Z \end{array} \right. \end{align} In this case, we also get preservation of norm, i.e. $\|\tilde{S}\|=\|S\|$.
How to evaluate the integral $\frac{\log x}{1+x^2}$
Break up the integral into two pieces: $$\int_0^1 dx \frac{\log{x}}{1+x^2} + \int_1^{\infty} dx \frac{\log{x}}{1+x^2}$$ Sub $x=1/y$ in the second integral and see that the sum of the above integrals, and hence the original integral, is zero.
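Explicitly, with $x=1/y$ (so $dx=-dy/y^2$ and $\log x=-\log y$): $$\int_1^{\infty}\frac{\log x}{1+x^2}\,dx=\int_0^1\frac{-\log y}{1+1/y^2}\,\frac{dy}{y^2}=-\int_0^1\frac{\log y}{1+y^2}\,dy,$$ which cancels the first piece exactly.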
Smallest irreducible periodic Markov chain
$0\to1\to0\to1\to0\to1\to0\to1\to0\to1\to0\to1\to0\to1\to0\to1\to\ldots$ -- Did
From state $0$ you go to state $1$ with probability $1$ and from state $1$ you go to state $0$ with probability $1$. -- Ritz
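For concreteness, this chain has transition matrix $$P=\begin{pmatrix}0&1\\1&0\end{pmatrix}.$$ It is irreducible (each state reaches the other), and every return to a state takes an even number of steps, so both states have period $2$.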
Conclusion about Zeros of a polynomial ,when sum of it's coefficients is zero
The sum of the coefficients is zero if and only if $1$ is a root of the given polynomial.
How to get the minimizer for the following function?
With the form of the gradient that you have, it seems like your original function is $$f(x_1,x_2)=(x_1+1)^2(x_2-2)^2+C$$ Expand the right-hand side and compare with your expression to get $C=-4$. The term containing squares is non-negative (it is $0$ exactly when $x_1=-1$ or $x_2=2$). So the minimum value is indeed $-4$.
Finding side of a parallelogram
We have $$\angle{ABP}=\angle{CBP}=\angle{BPA}=\angle{BCP}$$ from which we know that $\triangle{ABP}$ and $\triangle{PBC}$ are similar. Let $AB=x$. Then, $AP=x,BC=x+5$ to have $$AB:PB=PB:CB\iff x:6=6:(x+5).$$
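Completing the computation: cross-multiplying gives $$x(x+5)=36\iff x^2+5x-36=0\iff(x+9)(x-4)=0,$$ and since $x>0$ we get $AB=x=4$.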
Find values of $a$ and $b$ that make matrix orthogonal
Orthogonal matrix means $$AA^T=I$$ Hence, $$\begin{bmatrix} \frac{1}{2} & a\\ b& \frac{1}{2} \end{bmatrix}\begin{bmatrix} \frac{1}{2} & b\\ a& \frac{1}{2} \end{bmatrix}=\begin{bmatrix} \frac{1}{4}+a^2 & \frac{1}{2}(a+b)\\ \frac{1}{2}(a+b)& \frac{1}{4}+b^2 \end{bmatrix}=\begin{bmatrix} 1 & 0\\ 0& 1\end{bmatrix}$$ which implies $$a=-b=\pm\frac{\sqrt{3}}{2}$$
If $U \subseteq W^{\perp}$ and $V = W+U$, then $U=W^{\perp}$.
Vector space decompositions do not "care" about inner products, so in general $V = W_1 \oplus W_2$ and $V=W_1 \oplus W_3$ does not imply $W_2 = W_3$. For example, if $W_1= \langle(1,0)\rangle$, $W_2 = \langle(0,1)\rangle$, and $W_3 = \langle(1,1)\rangle$, then $\mathbb{R}^2 = W_1 \oplus W_2$ and $\mathbb{R}^2 = W_1 \oplus W_3$.
equal unions and intersections
I will build upon the answer given to the weaker problem and use the same notations for clarity - except for the subsets which are $P_i$ here instead of $A_i$. Here the number of subsets is $n+2$, which means the incidence matrix $M$ has a kernel of dimension at least $2$ when regarded as a linear map $\mathbb{R}^{n+2}\rightarrow \mathbb{R}^n$. The subspace $V=\left\lbrace \vec c \in \mathbb{R}^{n+2} \ |\ c_1+\ldots + c_{n+2}=0\right\rbrace$ has codimension $1$ and hence must intersect $\operatorname{Ker}M$ along a subspace of dimension at least $1$. Take a nonzero vector $\vec c \in V\cap \operatorname{Ker}M$. Define again $I=\left\lbrace i \ | \ c_i>0\right\rbrace$ and $J=\left\lbrace j \ | \ c_j<0\right\rbrace$. The conditions $I\cap J=\emptyset$ and $\cup_{i\in I}P_i = \cup_{j\in J}P_j$ are for free, according to Tad's answer. Now let $M_i$ denote the $i$-th column of $M$. $\cap_{i\in I}P_i$ is the set of all indices $k$ such that the $k$-th coordinate of $\sum_{i\in I}c_i M_i$ is $\sum_{i\in I}c_i.$ Similarly $\cap_{j\in J}P_j$ is the set of all indices $k$ such that the $k$-th coordinate of $\sum_{j\in J}c_j M_j$ is $\sum_{j\in J}c_j.$ Since $\vec c $ was chosen in $V$, the intersections must agree as required.
Find a relationship between $f(f^{-1}(f(A)))$ and $f(A)$. Prove it in general.
You might have the right idea, but you need to write it better. In particular, you should never write things like $f^{-1}(x)$, because you don't know whether $f$ is bijective. Here's the "LHS" (it's the easiest one): Let $y \in f(f^{-1}(f(A)))$, this means that there is some $x \in f^{-1}(f(A))$ such that $y = f(x)$. But $f^{-1}(f(A))$ is by definition the set of all $x$ such that $f(x) \in f(A)$. Hence $y \in f(A)$.
“if p(k) is true and p(k+1) is true, then p(k+2) is true”. How can I write this phrase using $\implies$?
How about $p(k)\wedge p(k+1)\Rightarrow p(k+2)$ meaning "if $p(k)$ and $p(k+1)$ then $p(k+2)$"
Is a field (ring) an algebra over itself?
A field $K$ is an algebra over a field (the field being $K$). A commutative ring $R$ is an algebra over a ring (the ring being $R$).
How do you write an integral and why
I personally prefer the following notation for integration \[ \int f(x)\ dx \] I try to leave no room for ambiguity and to me this notation makes the integral clear, explicit and easily understood. Also the $dx$ can be read as "with respect to $x$".
Prove the inequality of edge number, $||H||\leq||H_1||+||H_2||$
You said that if we separate $H$ into $H_1,H_2$, we lose the edges which have endpoints in both $H_1$ and $H_2$. This is your mistake. $H_1$ and $H_2$ are not disjoint; they meet at $H_1\cap H_2$, with $\vert H_1\cap H_2\vert \leq k$. So there are no edges from $H_1\setminus H_2$ to $H_2\setminus H_1$, and you do not miss any edges. However, when summing $\vert\vert H_1\vert\vert + \vert\vert H_2\vert\vert$ you are counting the edges inside $H_1\cap H_2$ twice. That is why you have $\vert\vert H\vert\vert\leq\vert\vert H_1\vert\vert + \vert\vert H_2\vert\vert$. In total you should have $$\vert\vert H\vert\vert = \vert\vert H_1\vert\vert + \vert\vert H_2\vert\vert - \vert\vert G[U_1\cap U_2]\vert\vert$$ Note: it would be nice to make your question as independent from any links as possible. Maybe next time try to put the definitions and some explanations directly here.
Proving that $(ax^n)' = nax^{n-1}$ using the definition of the derivative
The easiest way to do this is by simply multiplying out the $(x+h)^n$: $$ (x+h)^n=\sum_{k=0}^{n}\binom{n}{k}x^kh^{n-k}, $$ where $\binom{n}{k}=\frac{n!}{k!(n-k)!}$ is the binomial coefficient. Then $$ \frac{a(x+h)^n-ax^n}{h}=\frac{a}{h}\sum_{k=0}^{n-1}\binom{n}{k}x^kh^{n-k}=a\sum_{k=0}^{n-1}\binom{n}{k}x^kh^{n-k-1}. $$ Noting that the $k=n-1$ term does not involve an $h$, while all other terms involve a positive power of $h$, you can compute the desired limit.
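Carrying out that last step: every term with $k<n-1$ carries a positive power of $h$ and vanishes as $h\to0$, so $$\lim_{h\to 0}\frac{a(x+h)^n-ax^n}{h}=a\binom{n}{n-1}x^{n-1}=nax^{n-1}.$$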
Sylvester's criterion about positive definite matrices.
What the author is saying is that if you let $M=\mbox{span }\,\{\alpha_1,\ldots,\alpha_k\}$ and you write $A'$ for the "left upper corner of size $k$" of $A$, then for any nonzero $y\in M$ you have $$ y^*A'y=\begin{bmatrix}y\\0\end{bmatrix}^*\,A\,\begin{bmatrix}y\\0\end{bmatrix}>0 $$ (where the zeroes in the matrices above have size $(n-k)\times1$). So $A'$ is positive definite.
Infimum taken over finite coverings
A finite covering is just that: a finite collection of sets $I_1,...,I_N$ such that $E\subset \cup_{j=1}^N I_j$. Depending on your definitions, the sets $I_k$ are probably required to be open or closed. In your example $E_1$ is covered by a single element $[0,1+1/j]$ for any $j$. As an example, suppose that $E_1=[0,1)$. Then $E_1\subset \cup_{j=1}^\infty [0,1-1/j]$, but this is an infinite cover (and cannot be done in a finite way for that particular set and family of covers).
Numbers between $1$ to $80$ with non-terminating decimal representations
Let's count the number of terminating decimals first. It is easy to see that if $\frac{1}{x}$ has some terminating decimal expansion, where $x\in\mathbb{N}$, then $x=2^a5^b$, where $a,b\in\mathbb{N_0}$. It is easy to list these numbers systematically: $1, 2, 4, 8, 16, 32, 64$ (powers of 2) $5, 10, 20, 40, 80$ ($5$ times a power of 2) $25, 50$ ($5^2$ times a power of 2) Hence there are 14 of these numbers in total. Hence $80-14=66$ of these numbers do not have terminating decimals.
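A brute-force check of the count (a small Python sketch; it tests whether $1/x$ terminates by stripping the factors of $2$ and $5$ from $x$):

    def terminates(x):
        # 1/x has a terminating decimal expansion iff x = 2^a * 5^b
        for p in (2, 5):
            while x % p == 0:
                x //= p
        return x == 1

    print(sum(terminates(x) for x in range(1, 81)))      # 14
    print(sum(not terminates(x) for x in range(1, 81)))  # 66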
Finding pdf and cdf of a random variable
The random variable is neither discrete nor continuous. It takes values other than $3,5,6,7,8$. It is defined on the space $(-3,9)$ with normalized Lebesgue measure as the basic probability measure. The correct values of $F(x)$ are $0$ for $x <3$, $2/3$ for $3 \leq x \leq 5$, $\frac {x+5} {12}$ for $5 \leq x \leq 7$, and $1$ for $x \geq 7$ (note $\frac{7+5}{12}=1$).
Assigning values for $\lambda$ in Poisson distribution
There are $576$ areas and $537$ hits.   That's $n=537$, $p=\tfrac 1{576}$ for a Binomial Distribution (of the count of successful hits in a square). The Poisson approximation to a Binomial uses $\lambda = np$ as its rate parameter.   The probability of a square having $k$ hits is thus approximated: $$\mathsf P(X=k) \approx \frac{(np)^k~\mathsf e^{-np}}{k!} = \frac{537^k~\mathsf e^{-537/576}}{576^k~k!}$$ The estimated expected number of squares with exactly $k$ hits is then ($576$ being the number of squares): $$\mathsf E(N_{X=k}) \approx 576~\mathsf P(X=k) = \frac{537^k~\mathsf e^{-537/576}}{576^{k-1}~k!}$$ (Due to the linearity of expectation and that $\mathsf P(A) = \mathsf E(\mathbf 1_A)$.) Hence: $$\mathsf E(N_{X=0}) \approx 576~\mathsf P(X=0) = \frac{537^0~\mathsf e^{-537/576}}{576^{-1}~0!} \approx 226{\small .74} $$
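A one-line numerical check of the final value:

    import math

    lam = 537 / 576                  # Poisson rate np
    print(576 * math.exp(-lam))      # expected squares with 0 hits: ~226.74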
Integrate: $\int_0^1 \mathrm{d}x_1 \int_0^1 \mathrm{d}x_2 \ldots \int_0^1 \mathrm{d}x_n \delta\left( \sum_{i=1}^n k_i x_i \right)$
Using the Fourier representation $\delta(t)=\int_{-\infty}^{\infty}e^{iqt}\,\frac{dq}{2\pi}$ of the delta function: \begin{align} &\int_{0}^{1}dx_{1}\int_{0}^{1}dx_{2}\ldots\int_{0}^{1}dx_{n}\,\delta\left(\sum_{i = 1}^{n}k_{i}x_{i}\right) \\[3mm]&=\int_{0}^{1}dx_{1}\int_{0}^{1}dx_{2}\ldots\int_{0}^{1}dx_{n} \int_{-\infty}^{\infty}e^{iq\sum_{i = 1}^{n} k_{i}x_{i}}\,{dq \over 2\pi} \\[3mm]&= \int_{-\infty}^{\infty}{dq \over 2\pi}\prod_{i = 1}^{n}\int_{0}^{1} e^{iqk_{i}x_{i}}\,dx_{i} = \int_{-\infty}^{\infty}{dq \over 2\pi}\prod_{i = 1}^{n} {e^{iqk_{i}} - 1 \over iqk_{i}} = \int_{-\infty}^{\infty}{dq \over 2\pi}\prod_{i = 1}^{n} e^{iqk_{i}/2}\,{2i\sin\left(qk_{i}/2\right) \over iqk_{i}} \\[3mm]&= {2^{n - 1} \over \pi}\int_{-\infty}^{\infty}{dq \over q^{n}}\prod_{i = 1}^{n} {e^{iqk_{i}/2}\sin\left(qk_{i}/2\right) \over k_{i}} \end{align} I don't see any further reduction unless we know something else about the $\{k_{i}\}$.
Requirements for a Linear Transformation
Another way to say this is that $T$ is additive and homogeneous of degree $1$. In general, a function $f:X \to Y$ is additive if $f(x+y)=f(x)+f(y)$ for all $x,y \in X$. If $X$ and $Y$ are vector spaces then, for a fixed integer $k$, we say that $f$ is homogeneous of degree $k$ if $f(ax)=a^kf(x)$ for all nonzero $a$ in the underlying scalar field of $X$ and all $x \in X$.
Right adjoint to the product in an over category
I don't understand the point of your question. If it is easier to describe the construction in $\widehat{\mathbb{C} \downarrow X}$ than in $\hat{\mathbb{C}} \downarrow Y(X)$, then why not use the construction in $\widehat{\mathbb{C} \downarrow X}$? No matter. Let's just translate what's going on in the first description to the second description. Given objects $A \to Y(X)$ and $B \to Y(X)$ in $\hat{\mathbb{C}} \downarrow Y(X)$, we construct the fibred exponential as follows: $$B^A (C) = \coprod_{c \in \mathbb{C}(C, X)} (\hat{\mathbb{C}} \downarrow Y(X)) (\tilde{Y}( c) \times_{Y (X)} A, B)$$ Here, $\tilde{Y}$ is the modified Yoneda embedding $(\mathbb{C} \downarrow X) \to (\hat{\mathbb{C}} \downarrow Y(X))$ given by $$\tilde{Y} (c) (D) = \coprod_{d \in \mathbb{C}(D, X)} (\mathbb{C} \downarrow X)(c, d)$$ The structural morphisms $B^A \to Y(X)$ and $\tilde{Y}(c) \to Y(X)$ are the obvious ones arising from their fibrewise construction.
Given these conditions on a matrix, prove (or provide a counterexample) that the matrix is invertible
Much of your reasoning is very good. Let's consider $(b)$. Suppose that $U + \frac{3}{4}I$ is not invertible. Then there is some nonzero vector $x$ such that $(U + \frac{3}{4}I)x = 0$, or equivalently that $Ux = -\frac{3}{4} I x = - \frac{3}{4}x$. This would mean that $-\frac{3}{4}$ is an eigenvalue of $U$. But as $U$ is unitary, all of its eigenvalues are of magnitude $1$, a contradiction; so $U + \frac{3}{4}I$ is invertible.
Lower estimate using Taylor theorem
Here is an unsatisfactory counterexample: Let $h_n$ ('Hessian') be given by the graph created by joining the following points: $(-1,0), (0, 0), ({1 \over n}, -4), ({1 \over 4}-{1 \over n},-4), ({1 \over 4}, 0), (1,0)$. Let $g_n(x) = \int_{-1}^x h_n(t)dt$, $f_n(x) = \int_{-1}^x g_n(t)dt$. We see that $f_n$ is $C^2$, $f_n(x) \le 0$, $f_n(0) = 0$. Note that $h_n(x) \downarrow -4 \cdot 1_{(0,{1 \over 4})}(x)$, $g_n(x) \downarrow -4x \cdot 1_{(0,{1 \over 4})}(x)-1_{[{1 \over 4},1]}(x)$, and $f_n(x) \downarrow -2x^2 \cdot 1_{(0,{1 \over 4})}(x)-(x-{1 \over 8})\cdot 1_{[{1 \over 4},1]}(x)$. In particular, $f_n(1) \to -{7\over 8}$. However, $x^2 h_n(x) \ge - 4 x^2 \cdot 1_{(0,{1 \over 4})}(x)$, and so $\inf_{x \in [0,1]} {1 \over 2} x^2 h_n(x) \ge -{1 \over 8}$.
Rotating a plane in 3D Space
OK, I finally found an answer. The method I should have used is Rodrigues' rotation formula. Now I have another problem with it though. :)
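For reference, Rodrigues' rotation formula for rotating a vector $\vec v$ about a unit axis $\vec k$ by an angle $\theta$ is $$\vec v_{\mathrm{rot}}=\vec v\cos\theta+(\vec k\times\vec v)\sin\theta+\vec k\,(\vec k\cdot\vec v)(1-\cos\theta).$$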
Finding the rectangle surrounding a capsule defined by two points while minimizing trig, and square root
Let your position vector be $\vec{P_1}=[x_1,y_1]$ and your direction vector between $P_1$ and $P_2$ be normalized $\vec{u}= \dfrac{[x_2-x_1,y_2-y_1]}{d}$ where $d=\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}$. Let $\vec{n}=\dfrac{[y_2-y_1,\,x_1-x_2]}{d}$ be a normal vector to segment $P_1P_2$. Then $\vec{c_3}$ and $\vec{c_4}$ are the two vectors $\vec{P_1}+(d+r)\vec{u}\pm r\vec{n}$ and the vectors $\vec{c_1}$ and $\vec{c_2}$ are $\vec{P_1}-r\vec{u}\pm r\vec{n}$. Given the labeling of your diagram I believe the equations would be as follows: \begin{equation} \vec{c_1}=\vec{P_1}-r(\vec{u}-\vec{n}) \end{equation} \begin{equation} \vec{c_2}=\vec{P_1}-r(\vec{u}+\vec{n}) \end{equation} \begin{equation} \vec{c_3}=\vec{P_1}+(d+r)\vec{u}-r\vec{n} \end{equation} \begin{equation} \vec{c_4}=\vec{P_1}+(d+r)\vec{u}+r\vec{n} \end{equation}
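Here is a minimal Python sketch of these formulas (the function and variable names are mine, not from the question); note that the single square root for $d$ is the only one needed:

    from math import hypot

    def capsule_corners(x1, y1, x2, y2, r):
        """Corners of the rectangle bounding a capsule of radius r around P1P2."""
        d = hypot(x2 - x1, y2 - y1)            # the one unavoidable square root
        ux, uy = (x2 - x1) / d, (y2 - y1) / d  # unit direction vector u
        nx, ny = (y2 - y1) / d, (x1 - x2) / d  # unit normal vector n
        c1 = (x1 - r * (ux - nx), y1 - r * (uy - ny))
        c2 = (x1 - r * (ux + nx), y1 - r * (uy + ny))
        c3 = (x1 + (d + r) * ux - r * nx, y1 + (d + r) * uy - r * ny)
        c4 = (x1 + (d + r) * ux + r * nx, y1 + (d + r) * uy + r * ny)
        return c1, c2, c3, c4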
The number of roots for polynomials
A 5th degree polynomial (with real coefficients) has at least 1 real root. A polynomial of odd degree has at least 1 real root. The fundamental theorem of algebra says that a polynomial has a number of roots equal to its degree. However, they may be complex and they may be roots of multiplicity greater than one. $x^2-1$ has 2 real roots. $x^2+1$ has 2 complex roots. $x^2-2x + 1$ has one root of multiplicity 2. Does this help?
A formula for heads and tail sequences
Odlyzko "Enumeration of Strings" (in "Combinatorial Algorithms on Words" (Springer, 1985), pp. 205-228) gets the generating function for the number of strings on $\{0, 1\}$ that don't contain a stretch of $k$ zeros as: $$ B_{0^k}(z) = \frac{1 - z^k}{1 - 2 z + z^{k + 1}} $$ Unless $k = 2$ (when this gives a Fibonacci number), no simple functions are forthcomming. I don't see a simple way to extend the techniques used to consider simultaneously excluding stretches of zeros and ones.
Emily and Greg play a dice-throwing game. They take turns to throw a fair die starting with Emily.
To determine the overall probability that Greg wins: Each round begins with some state, depending on whether Greg has scored a point or not. That is, we have two states: $S_0,S_1$. Accordingly, we let $p_0,p_1$ denote the probability that Greg wins assuming we are starting from the associated state. As we start in $S_0$ the answer we want is $p_0$. Starting in $S_1$ we see that the possible outcomes for the round are: Emily wins (probability $\frac 16$), Greg wins (prob $\frac 56 \times \frac 13=\frac 5{18}$), we end up back in $S_1$ (prob $1-\frac 16 - \frac 5{18}=\frac 59$). Thus $$p_1=\frac 16\times 0 +\frac 5{18} \times 1 +\frac 59 \times p_1\implies p_1=\frac 58$$ Starting in $S_0$ we see that the possible outcomes for the round are: Emily wins (probability $\frac 16$), Greg scores and we move to $S_1$ (prob $\frac 56 \times \frac 13=\frac 5{18}$), we end up back in $S_0$ (prob $\frac 59$ as before). Thus $$p_0=\frac 16\times 0 +\frac 5{18}\times p_1 + \frac 59 \times p_0\implies p_0=\frac {25}{64}$$ Note: Using the formula $p(X=k)=\left( \frac 23 \right)^{k-2}\times \left( \frac 56 \right)^k \times \frac {k-1}9$ and summing over $k$ we confirm this probability.
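Carrying out that check, with $j=k-2$: $$\sum_{k\ge 2}\left(\frac 23\right)^{k-2}\left(\frac 56\right)^k\frac{k-1}9=\frac{25}{324}\sum_{j\ge 0}(j+1)\left(\frac 59\right)^j=\frac{25}{324}\cdot\frac 1{(1-5/9)^2}=\frac{25}{324}\cdot\frac{81}{16}=\frac{25}{64}.$$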
A linear Equation in 4D space corresponds to the whole 3D space
Consider just two variables, $x$ and $y$. The way to describe the 1D number line, i.e. the $x$ axis, is with the equation $y=0$. If you take the equation $x+y=6$, this describes a different line in the plane. $y$ cannot have freedom to vary. So for your four variables, $u$, $v$, $w$, and $z$: If you want to describe the $uvw$ "plane", the only value $z$ can have is $0$. So the equation should just be $z=0$. Your equation describes a different 3D space.
Prove $A^\alpha$ positive semi definite
Write $A^\alpha = e e^T + (\alpha-1) I$, with $e$ being a vector of ones. Note that $A^\alpha$ is real, symmetric. It is straightforward to compute the eigenvalues by noting that $e$ is an eigenvector, and any $y \bot e$ is an eigenvector. Now translate this into the requirement that $A^\alpha \ge 0$. Here is another way: $x^T A^\alpha x = \left(\sum_k x_k\right)^2+(\alpha-1) \sum_k x_k^2 $. We need $\alpha \|x\|_2^2 \ge \|x\|_2^2 - (e^T x)^2$, or $\alpha \ge 1- {(e^T x)^2 \over \|x\|_2^2}$ (for $x \neq 0$, of course). Hence we see that $A^\alpha \ge 0$ iff $\alpha \ge 1- (e^T x)^2$ for all unit vectors $x$ iff $\alpha \ge 1$.
Let $x_n=\sqrt[3]{n+1}-\sqrt[3]{n}$. Prove that $(x_n)$ converges.
Bounded below is clear, $x_n$ is positive. For decreasing, multiply top and (missing) bottom by $(n+1)^{2/3}+(n+1)^{1/3}n^{1/3}+n^{2/3}$. We are using the identity $x^3-y^3=(x-y)(x^2+xy+y^2)$. But this is cheating in a way, since as soon as we have done the manipulation, we can compute the limit without using the decreasing bounded below machinery. If we are allowed to use the derivative, then it is easy to check that $(x+1)^{1/3}-x^{1/3}$ has negative derivative, and the fact that our sequence is decreasing follows.
Elliptic PDE, uniqueness of solution
For this equation, the weak maximum principle is satisfied. You just need $a\in L^\infty(\Omega)$. See Gilbarg/Trudinger, Chapter 8. The result you are looking for is Theorem 8.3
Proof of formulas in sequent calculus
There is no such thing as the sequent calculus (even for a particular logic, like -- to keep things simple -- classical propositional logic). There is a number of varieties. There is no such thing as the natural deduction calculus (for the same logic) either. But on any story, sequent calculi and natural deduction systems are different sorts of beasts (though there are versions of the sequent calculus which are said to be "natural deduction style", but they aren't natural deduction calculi). This means that the question really isn't very well posed at it stands. Perhaps you'd like to sharpen it up? I'll add that if you want a clear modern introduction to sequent calculus and their relation to natural deduction, the place to go is Sara Negri and Jan von Plato's Structural Proof Theory. They also talk about proof search methods for various calculi.
Algebraic complements in vector space of functions without the axiom of choice
If you take a look at the proof that the existence of complements implies the axiom of choice, then by a careful analysis of that proof one can construct a counterexample which actually lives in a space that has a basis. The proof of the algebraic principle goes through proving that the axiom of multiple choice holds, rather than the axiom of choice. The two are equivalent over $\sf ZF$, so it's fine. This version is as follows: The axiom of multiple choice. Let $X$ be a family of pairwise disjoint non-empty sets, each having at least two elements. Then there exists a function $f$ whose domain is $X$ such that $f(x)$ is a proper finite subset of $x$, for all $x\in X$. The proof begins by taking $X$ as above, and defining the following vector space: $$V=\bigoplus_{x\in X}F^{(x)}$$ Where $F^{(x)}$ is just the space of functions with finitely many non-zero values from $x$ into $F$. We also define $F^{(x)}_0$ to be the space of $f\in F^{(x)}$ such that $\sum_{u\in x}f(u)=0$. I will leave it to you to find a basis for this space (hint: $F^{(\bigcup X)}$). Now if we can find a complement to the subspace $W=\bigoplus_{x\in X}F^{(x)}_0$ then we can construct a function $f$ as above. Begin now with a model where the axiom of choice fails; then the axiom of multiple choice fails, and we can find such $X$ where the vector space $V$ has a basis but $W$ doesn't have a direct complement.
Rational Functions are Determined By Locations and Multiplicities of Zeros and Poles (why?)
The precise statement is the other way round: a function meromorphic on the whole Riemann sphere $S$ is rational. This is completely false on $\mathbb C$: the function $e^z$ is meromorphic (even holomorphic) on $\mathbb C$, but certainly not rational on $\mathbb C$, nor anywhere else. Now if $f,g\in Rat(S)$ are rational on the Riemann sphere $S$ and have the same zeros and poles (counted with multiplicities), then the quotient $\phi=f/g$, having neither zeros nor poles, is holomorphic on the whole of $S$ and thus constant (say because $\phi$ is bounded on $S$ by compactness, hence bounded on $\mathbb C$, and Liouville's theorem applies). "Yes, Georges, but I gave you a counterexample" Your rational functions don't have the same zeros: $$f(\frac {1}{2})=0\neq g(\frac {1}{2})=-2 $$ A very optional remark That meromorphic functions on $S$ are rational is the simplest example of a very profound result in algebraic geometry: Serre's GAGA principle.
Algebra, groups and permutations
Hints: i) The dihedral group $\,D_4\,$ consists of rotations (by integer multiples of $\,\pi/2\,$, say around an imaginary pivot in the center of the square (= the intersection point of its diagonals)) and reflections through one of the symmetry axes of the square (either a vertical or a horizontal line through the square's middle, or one of the two diagonals). ii) Example of rotation by an angle $\,\pi\,$ anti-clockwise: we get the following mappings of the vertices: $$1\to 3\;,\;\;2\to 4\;,\;\;3\to 1\;,\;\;4\to 2$$ You can see that the above symmetry of the square is represented by the permutation $\,(13)(24)\,$ (written as a product of disjoint cycles) iii) Example of reflection, say through the diagonal $\,13\,$ in the square: $$1\to 1\;,\;\;2\to 4\;,\;\;3\to 3\;,\;\;4\to 2$$ represented by the permutation $\,(24)\,$ ... etc.
T/F: Any of these two statements imply the third: $A$ is Hermitian, $A$ is unitary, $A^{2}=I$
No, you are not correct. Assume $A = A^*$ and $A^2 = I$. Then $I = A^2 = A^* A = A A^*$, so $A$ is unitary.
Modulo world of four remainder 1
Simply put, the first $\mathbb{M}$-number that is not an $\mathbb{M}$-prime has to be $5\times 5$. All numbers below that in your list are $\mathbb{M}$-primes. For Q.2 an approach is to find a number that has sufficiently many divisors in real life. The fact that almost all $\mathbb{M}$-numbers in your list are $\mathbb{M}$-primes takes care of the rest. E.g. $$21\times 21=9\times 49$$
Is $R= A[T] / (T^3-a)$ where $A$ is a left artinian ring and $a \in A$, left noetherian?
Yes. By the Hopkins-Levitzki Theorem, if $A$ is left Artinian, then it is left Noetherian. Next, by Hilbert's basis theorem, $A[T]$ is left Noetherian. Finally, any quotient of a left Noetherian ring is left Noetherian.
Laplace using $t$-shift
$$F(s)=\frac{5}{s}e^{-2s}+\frac{4}{s^2}e^{-2s}.$$ Now, if we apply the shift theorem we end up with $$f(t)=5\,\mathcal U(t-2)+\dots.$$ You can do the other one, where $\mathcal U(\cdot)$ is the Heaviside step function: $$\mathcal U(t-a)=\cases{1 & if $t\ge a$\\0 & if $t<a$}$$
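For comparison (so you can check your own work), the same shift applied to the second term gives $$f(t)=5\,\mathcal U(t-2)+4\,(t-2)\,\mathcal U(t-2),$$ since $\mathcal L\{4t\}=4/s^2$.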
Conditional Probability Question: What is the probability that the plant will be alive when you return?
What you did is correct. Now $$P(\mbox{alive} | \mbox{no water}) = 1- P(\mbox{die} | \mbox{no water}) = 1 - 0.9 = 0.1$$ and you can conclude the rest.
Can I have formula for all natural numbers, if I know it separately for even and for odd?
$$f(m,n)=\frac{(-1)^{m+1}+1}{2}\left(\frac{n^m-n}{m}+n \right)+ \frac{(-1)^{m}+1}{2}\left(\frac{n^m-n^2}{m}+\frac{n(n+1)}{2} \right)$$ The prefactor $\frac{(-1)^{m+1}+1}{2}$ equals $1$ for odd $m$ and $0$ for even $m$, while $\frac{(-1)^{m}+1}{2}$ does the opposite, so the formula selects your odd-$m$ expression or your even-$m$ expression as appropriate.
How to build 2nd rank tensors from set of vectors?
If $A,B,C$ are rank one tensors then $$A\otimes A\ ,\ B\otimes B\ ,\ C\otimes C$$ are rank two tensors, as are $$A\otimes B\ ,\ A\otimes C\ ,\ B\otimes A\ ,\ B\otimes C\ ,\ C\otimes A\ ,\ C\otimes B,$$ to begin with, but so is any linear combination of them.
Is the complement of an ample divisor always affine
Yes. As you say we may assume $D$ is very ample. So there is a closed immersion of $X$ into $\mathbb{P}^n$ such that $D$ is $X \cap H$ for some hyperplane $H$. Pick coordinates $[x_0: x_1: \ldots: x_n]$ for the ambient $\mathbb{P}^n$ such that the vanishing locus of $x_0$ is $H$. Let $f_1, \ldots, f_m$ be homogeneous equations cutting out $X$ in these coordinates. Then their dehomogenizations with respect to $x_0$ are equations cutting out $X \setminus D$ as a closed subscheme of $\mathbb{P}^n \setminus H = \mathbb{A}^n$, so $X \setminus D$ is affine.
map to a product is a submersion
I am sorry but it is not true. Let us consider the following counterexample. Let $W=\{*\}$ be the singleton, let $Y=Z=X$ be a manifold of dimension greater or equal to $1$ and let $f=g$ be the unique possible map. Then the pullback $Y\times_W Z$ is simply the usual product $X\times X$. Finally, let $F=\Delta : x \in X \mapsto (x,x)\in X\times X$ be the diagonal map, which is not a submersion. However, $pr_1 \circ F = pr_2 \circ F = id_X$ is a submersion.
"go / goes to" $(x \to y)$ vs. "maps to" $(x \mapsto y)$
Well, in view of mappings, the notation is, e.g., $\exp:{\Bbb R}\rightarrow{\Bbb R}_{>0}: x\mapsto e^x$. The symbol $\mapsto$ is only used for the assignment of elements, while the symbol $\rightarrow$ has a broader use.
Sequence with $\mathbb N$ as set of limit points
$1, 1, 2, 1, 2, 3, 1, 2, 3, 4, 1, 2, 3, 4, 5 \ldots$ I suppose you get the idea
Limit and Lebesgue integral in a compact
Let $f_n(x):=|f(x)|\chi_{|x|\leq n}$. This forms an increasing sequence of integrable functions. By the monotone convergence theorem, $\lVert f-f_n\rVert_1\to 0$. This gives $$\int_{z_m+K}|f(x)|dx\leq \int_{z_m+K}|f(x)-f_n(x)|dx+\int_{z_m+K}|f_n(x)|dx\\\leq\lVert f-f_n\rVert_1+\int_{z_m+K}|f(x)|\chi_{B(0,n)}(x)dx.$$ For a fixed $n$, as $|z_m|\to +\infty$, $B(0,n)$ and $z_m+K$ are disjoint for $m\geq N_n$. This implies $$\limsup_{m\to +\infty}\int_{z_m+K}|f(x)|dx\leq \lVert f-f_n\rVert_1.$$ As $n$ is arbitrary, the result follows.
How to show that $(-a)(-b)=ab$ holds in a field?
Since $a+(-a)=0$ and $0\cdot b=0$: $$(a+(-a))b=0$$ $$ab+(-a)b=0$$ $$(-a)b=-(ab)$$ Now $$(-a)(b+(-b))=0$$ $$(-a)b+(-a)(-b)=0$$ $$-(ab)+(-a)(-b)=0$$ $$(-a)(-b)=ab$$
Exact Differential Equation. Solve: $ x^4 \frac{dy}{dx} + x^3y + \operatorname{cosec} (xy) = 0 $
First get rid of the embarrassing argument $xy$ of the cosecant by setting $z:=xy$, so that $y'=\dfrac{z'}x-\dfrac z{x^2}$. This gives $$x^3z'-x^2z+ x^2z + \operatorname{cosec} (z) = 0 ,$$ which happens to be separable: $$\sin(z)\,z'+\frac1{x^3}=0$$ and $$\cos(z)=\cos(xy)=C-\frac1{2x^2}.$$
Equality of two sets written differently
Notice that $2(m-1)+3=2m-2+3=2m+1$.
Proof of Convergence Based on Monotonicity and a Limit
Let $M$ be a bound for $(b_n)$. Then, for sufficiently large $n$: $$a_n -b_n < 1 \implies a_n <1+b_n < 1+M.$$ So $(a_n)$, being monotone and bounded, converges. Since $(a_n)$ and $(b_n-a_n)$ converge, so does $a_n+ (b_n-a_n)=b_n$.
Points in a triangular lattice at the same distance from the origin and "breaking of symmetry"
This seems to be the prettiest one: for $$ u^2 + uv + v^2 = x^2 + xy+y^2, $$ take coprime integers $p,q,r,s$ and $$ u = rs+ps- rq $$ $$ v = pq-ps + rq $$ $$ x = pq -rs + rq $$ $$ y = -pq+rs+ps $$ $$ \left( \begin{array}{r} u \\ v \\ x \\ y \\ \end{array} \right) = \left( \begin{array}{rrrr} 0 & 1 & 1 & -1 \\ 1 & 0 & -1 & 1 \\ 1 & -1 & 0 & 1 \\ -1 & 1 & 1 & 0 \\ \end{array} \right) \left( \begin{array}{r} pq \\ rs \\ ps \\ rq \\ \end{array} \right) $$ A PARI/GP check:

    ? u = 0*p*q+r*s+p*s- r*q ; v = p*q+ 0*r*s-p*s +r*q ; x = p*q -r*s +0*p*s+ r*q ; y = -p*q+r*s+p*s + 0* r*q ;
    ? u
    %2 = s*p + (-r*q + s*r)
    ? v
    %3 = (q - s)*p + r*q
    ? x
    %4 = q*p + (r*q - s*r)
    ? y
    %5 = (-q + s)*p + s*r
    ? f = u^2 + u*v + v^2
    %7 = (q^2 - s*q + s^2)*p^2 + (r*q^2 - s*r*q + s^2*r)*p + (r^2*q^2 - s*r^2*q + s^2*r^2)
    ? g = x^2 + x*y + y^2
    %8 = (q^2 - s*q + s^2)*p^2 + (r*q^2 - s*r*q + s^2*r)*p + (r^2*q^2 - s*r^2*q + s^2*r^2)
    ? f - g
    %9 = 0
Branch cuts for $(z^2+1)^{1/3}$
Having thought about this a lot more I think I have an explanation. Consider the behaviour as $|z|\to\infty$: $f(z) \sim z^{2/3}$, so we clearly have a branch point at $\infty$ as well as the ones previously found in the question. In branch cut $A$, our choice of cut going through infinity is important, as this then includes our new branch point. Originally it was thought that the path through infinity was similar to the case of $g(z)=(1+z^2)^{1/2}$, where the principal branch cut 'happens' to go through infinity but isn't required to. It turns out this is not the case (see footnote 1). When we then move to branch cut $B$, we no longer have this branch point at infinity on our branch cut. This is why it is necessary to include the $(-\infty, \infty)$ branch cut. Presumably any additional branch cut through infinity would also suffice. Hopefully this answer clears up this question if anyone else has it. If I am wrong or unclear about anything then please let me know and I'll have another look and hopefully understand it better afterwards. 1. This can be seen by noticing that the asymptotic behaviour of $g$ is clearly different: $g(z) \sim z$ as $|z|\to\infty$.
How does one find pairs of integers $U,K_1 \geq 1$ that satisfy $9U=9K_1 + 4 K_1^2$?
Since $9\mid 4K_1^2$ and $\gcd(4,9)=1$, we get $9\mid K_1^2$ and hence $3\mid K_1$. Writing $K_1=3a$ turns $9U=9K_1+4K_1^2$ into $U=3a+4a^2$. So the solution set is $\{K_1=3a,\ U=3a+4a^2 \mid a\in \mathbb Z,\ a\ge 1\}$, the condition $a\ge1$ coming from the requirement $U,K_1\ge1$.
$f(z)$ is analytic if and only if $f(z)$ cannot be written as function of $\bar z$ ( where $z\in\mathbb C$ and $\bar z$ is conjugate of $z$)
By definition, the Wirtinger derivative is $$ {\partial\over\partial\overline{z}}=\frac12\left({\partial\over\partial x}+i{\partial\over\partial y}\right) $$ This is just a formal definition of a differential operator. If $f$ is an analytic function, you can check that the condition $$ {\partial f\over\partial\overline{z}}=0$$ is equivalent to the Cauchy-Riemann equations. Similarly, with the formal definition $$ {\partial\over\partial z}=\frac12\left({\partial\over\partial x}-i{\partial\over\partial y}\right)$$ you can check that $$ {\partial f\over\partial z}=f'(z)$$ This is sometimes expressed by saying that an analytic function $f$ is a function of $z$ alone, not of $\overline{z},$ but this is just an informal statement. Obviously, $z$ and $\overline{z}$ are not independent variables, and it makes no real sense to talk about partial derivatives with respect to them.
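Carrying out the first of those checks: writing $f=u+iv$, $$\frac{\partial f}{\partial\overline{z}}=\frac12\left(u_x-v_y\right)+\frac{i}2\left(u_y+v_x\right),$$ so $\partial f/\partial\overline{z}=0$ is exactly the pair of Cauchy-Riemann equations $u_x=v_y$ and $u_y=-v_x$.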
How to find the 4th and 5th coefficients (a_4 and a_5) of the series expansion of cos(1-cos(x))
I think that's exactly what you're supposed to do. Note, however, that you don't need any more than $\frac{x^2}2-\frac{x^4}{24}$, as any higher order terms will clearly only contribute to $a_6$ and higher. As you're expanding $\cos\left(\frac{x^2}2-\frac{x^4}{24}\right)$ you can also remember that you don't need the terms with higher degree than $5$, which means you can discard a lot of terms as you apply the binomial theorem. It's less of a mess than you think, I think.
Show that $\lim_{x\to0}f(x)$ doesn't exist using $\epsilon$-$\delta$
We are to prove that, for every $l \in \mathbb{R}$ there is some $\varepsilon > 0$ such that for every $\delta > 0$ there is some $0 < |x| < \delta$ such that $|f(x) - l| \geq \varepsilon$. (Are you sufficiently familiar with such a statement?) Let $l \in \mathbb{R}$. If $l \geq 1$, then for all $x \in \mathbb{Q}$ we have $|f(x) - l| = l$; taking $\varepsilon := l/2$ suffices. If $0 < l < 1$, let $m := \min \{ l, 1-l \}$. Then for all $x \in \mathbb{R}$ we have $|f(x) - l| \geq m$; taking $\varepsilon := m/2$ suffices. If $l \leq 0$, then for all $x \in \mathbb{Q}^{c}$ we have $|f(x) - l| = 1 + |l|$; taking $\varepsilon := (1+|l|)/2$ suffices.
Plotting function g(v) against p(v)
If you cannot use a parametric plot function then, since the calculations will be very fast, do the following: assign a value to $V$ and compute $P$ from the equation of state (we do not care if $P <0$); using this $V$, compute $G$. Now you have a table $(V_i,P_i,G_i)$.
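A minimal sketch of that loop in Python. The van der Waals form below is only an illustration, and G_of_V is a hypothetical placeholder; substitute whatever your equation of state and Gibbs-energy formula actually are:

    import numpy as np

    def P_of_V(V, T=0.9):
        # illustrative only: van der Waals equation of state in reduced units
        return 8 * T / (3 * V - 1) - 3 / V**2

    def G_of_V(V):
        # hypothetical placeholder: replace with your formula for G at this V
        return 0.0

    Vs = np.linspace(0.4, 5.0, 500)   # grid of volumes (keep V > 1/3)
    table = [(V, P_of_V(V), G_of_V(V)) for V in Vs]
    # now plot the G column against the P column; V is the hidden parameter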
Is the image of a path metrizable?
For $x,y\in \alpha(I)$ let $d(x,y)=\inf\{\,|s-t|:\alpha(s)=x,\alpha(t)=y\,\}$.
Schroder numbers recurrence relation
With a quick search on Wikipedia, I've found out that $$G(x)=\frac{1-x-\sqrt{1-6x+x^2}}{2x}= \sum_n S_nx^n$$ is a generating function for the Schröder numbers $S_n$. Now, we can simplify this by cross-multiplying, taking all the terms to the same side, and equating the coefficients of the resulting power series to $0$. This must be the source of the recurrence relation, though I have not verified it.
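One can at least confirm the generating function numerically; the large Schröder numbers begin $1, 2, 6, 22, 90, \ldots$ (a small sympy sketch):

    from sympy import series, sqrt, symbols

    x = symbols('x')
    G = (1 - x - sqrt(1 - 6*x + x**2)) / (2*x)
    print(series(G, x, 0, 5))  # 1 + 2*x + 6*x**2 + 22*x**3 + 90*x**4 + O(x**5)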
Expected value of power of sum of square of dependent Gaussian
Is $\beta$ an integer or what? If what you actually know is that the vector $\vec X=(X_1,\ldots,X_N)$ follows a multivariate normal distribution with zero mean and covariance matrix $\Sigma$, then you have joint density $$f_{\vec X}(\vec x)=\frac1{\sqrt{(2\pi)^N \det(\Sigma)}}e^{-\frac12\vec x^T\Sigma^{-1}\vec x},$$ where $\vec x=(x_1,\ldots,x_N)^T$. So, as a very general answer, we can say that $$E\left[\left(\sum_{k=1}^N X_k^2\right)^\beta\right]=\int_{\mathbb R^N}\left(\sum_{k=1}^N x_k^2\right)^\beta \cdot f_{\vec X}(\vec x)\quad dx_1\ldots dx_N,$$ but I can't think of a general expression for this integral.
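If a numerical value is all that is needed, the integral can be estimated by Monte Carlo; here is a minimal sketch with an arbitrary example $\Sigma$ and $\beta$ of my own choosing:

    import numpy as np

    rng = np.random.default_rng(0)
    N, beta = 3, 1.7
    Sigma = np.array([[1.0, 0.3, 0.1],
                      [0.3, 1.0, 0.2],
                      [0.1, 0.2, 1.0]])
    X = rng.multivariate_normal(np.zeros(N), Sigma, size=1_000_000)
    # sample-mean estimate of E[(sum_k X_k^2)^beta]
    print(np.mean(np.sum(X**2, axis=1) ** beta))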
Proving that a natural number made entirely of 6's and 0's is not a square.
To make the last part of the argument of @lsp explicit, if a square $x^2$ ends in $6$, then the last two decimal digits of $x$ are either $a4$, and then $$ x^2 \equiv (a \cdot 10 + 4)^2 \equiv a \cdot 80 + 16 \pmod{100}, $$ or $a6$, and then $$ x^2 \equiv (a \cdot 10 + 6)^2 \equiv a \cdot 120 + 36 \pmod{100}. $$ In both cases the last-but-one digit is indeed odd.
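A quick exhaustive check of the claim (the tens digit of any square ending in $6$ is odd), in Python:

    for x in range(100):       # squares mod 100 depend only on x mod 100
        sq = x * x
        if sq % 10 == 6:
            assert ((sq // 10) % 10) % 2 == 1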
injective map between set of subfields and set of endomorphisms
An element $x\in E$ is fixed by $End_FE$ if and only if $x\in F$. To see this, note that if $x\in F$, by definition the result is true; thus $F\subset Fix(End_F(E))$. On the other hand, let $x\in Fix(End_FE)$ and suppose that $x$ is not in $F$. You can find a basis $(x,(y_i)_{i\in I})$ of the $F$-vector space $E$. Let $c\in F$, $c\neq 1$; you can define $f(x)=cx, f(y_i)=y_i$. This is an $F$-endomorphism of $E$ such that $f(x)\neq x$. Contradiction. We deduce that $End_FE=End_{F'}E$ implies that $F=Fix(End_FE)=Fix(End_{F'}E)=F'$.
Can x mod (N - a) or x mod (N + a) be calculated just by knowing x mod N?
For the specific case $22!$, the answer is easy: since $4 \leq 22$ and $6 \leq 22$ you have that $22! = 0 \pmod{24}$. However, we didn't use the fact we got from Wilson's Theorem, and it doesn't help us in general. The answer to your question is no, and here is an illustrative example: suppose you know that $x = 5 \pmod{10}$. Then you could have, among many options, that $x = 5$, or that $x = 15$. In the former case, $x = 5 \pmod{12}$. In the latter case, $x = 3 \pmod{12}$.
A Basic Limit From Exponentials
Let $a>0$ and $a\ne1$. We want to prove the existence of $\displaystyle \lim_{h\to0}\frac{a^h-1}{h}$. Assume that $r>1$ and let $f(x)=x^r-rx+r-1$ for $x>0$. Then $$f'(x)=r(x^{r-1}-1)\begin{cases}<0 &\text{if }0<x<1\\ =0 &\text{if }x=1\\ >0 &\text{if }x>1 \end{cases}$$ Therefore, $f$ attains its absolute minimum at $x=1$. So for all $x>0$, we have $$f(x)\ge f(1)=0$$ $$x^r\ge rx+1-r$$ So when $r>1$ and $h>0$, $\displaystyle\frac{a^{rh}-1}{rh}\ge\frac{ra^h+1-r-1}{rh}$ and hence \begin{align} \frac{a^{rh}-1}{rh}-\frac{a^h-1}{h}\ge0 \end{align} When $r>1$ and $h<0$, $\displaystyle\frac{a^{rh}-1}{rh}\le\frac{ra^h+1-r-1}{rh}$ and hence \begin{align} \frac{a^{rh}-1}{rh}-\frac{a^h-1}{h}\le0 \end{align} Therefore, $\displaystyle \frac{a^h-1}{h}$ is an increasing function in $h$. As it is bounded below on $(0,\infty)$ (it is bounded below by $0$ when $a>1$; when $0<a<1$, the identity $\frac{a^h-1}{h}=-a^h\cdot\frac{(1/a)^h-1}{h}$ reduces the existence of the one-sided limit to the case $a>1$), $\displaystyle \lim_{h\to0^+}\frac{a^h-1}{h}$ exists. When $h<0$, $$\frac{a^h-1}{h}=a^h\left(\frac{a^{-h}-1}{-h}\right)$$ As $\displaystyle \lim_{h\to0^-}a^h$ exists and equals $1$, $\displaystyle \lim_{h\to0^-}\frac{a^h-1}{h}$ exists and $\displaystyle \lim_{h\to0^-}\frac{a^h-1}{h}= \lim_{h\to0^+}\frac{a^h-1}{h}$. Therefore, $\displaystyle \lim_{h\to0}\frac{a^h-1}{h}$ exists. Source: How does one prove that $e$ exists?
An alternative definition of open sets using the following property?
As Tim Raczkowski says, your definition does not work as is (consider the uncountable well ordered set with a point at $\infty$). But there is a generalization of sequence which may help. That generalization is nets. Nets are mappings of arbitrary directed sets into the topological space, so they are like sequences, but may have uncountably many terms. For more information, you could take a look at General Topology by Stephen Willard. You can even see the original exposition of nets here, though the article is almost 100 years old, and a bit dense.
Question on calculating work in a vector field while having a surface parametrized with two variables.
Consider the force given by, $$F(x,y,z)=(x+y,2x-3y,x+5y+1)$$ With $z=0$. Use Stokes' theorem to show that, $$w=\iint_{D} 1\, dA=A(D),$$ which works because $(\nabla\times F)\cdot\hat k=\partial_x(2x-3y)-\partial_y(x+y)=2-1=1$. Now all you have to show is that, $$\frac{(x-c_1)^2}{a^2}+\frac{(y-c_2)^2}{b^2}=1$$ Encloses the same area if we vary $c_1$ and/or $c_2$.
Vector space and span?
The set $\{ v_{j} : j \neq k\}$ is the following set $$S=\{v_{1},v_{2},\dots,v_{k-1},v_{k+1},\dots,v_{m-1},v_{m}\}$$ So you have that the vector space $X_{k} \not\subset \langle S \rangle$. Furthermore, you will have that $$X_{k} \not\subset \langle v_{i} \rangle \ \forall i \neq k,$$ because if it were a subset of one of these "spans", then it would be contained in the span of $S$. But see that this is derived from the definition of $S$, and the definition is that $$S=\{v_{1},v_{2},\dots,v_{k-1},v_{k+1},\dots,v_{m-1},v_{m}\}.$$ But you could deduce from $$X_{k} \not\subset \langle v_{i} \rangle \ \forall i \neq k$$ that $X_{k} \not\subset \langle S \rangle$. I think you are asking about the definition of the set $S$.
Surface bounded by a cylinder and two planes
I was just going to comment with a hint, but apparently I don't have enough rep for that yet... As mentioned in the question, the surface $S$ can be broken up into three parts: a circular base (where the cylinder intersects the plane $y = -2$), a circular top (where the cylinder intersects the plane $x + y = 3$) and finally the original side of the cylinder (minus the sections sliced away by the planes). The circular base of the cylinder, where $y = -2$, is a circle (centred at the origin and with a radius of $2$) in the $x$-$z$ plane and as such one possible parameterisation is: $$x(r,t) = r\cos(t), \quad y(r,t) = -2, \quad z(r,t) = r\sin(t)$$ where $r$ is the distance of the point $(x, y, z)$ from the origin of the circle, $r = \sqrt{x^2 + z^2}$. A similar line of reasoning will help you find the parameterisation of the circular region formed by the intersection of the cylinder and the plane $x + y = 3$. (That it's a circle is the hint, hopefully something you'll be able to confirm when you find the curve defined by the intersection of the plane and the cylinder.) As for the third part of the surface area you should be able to re-use one of the parameterisations for the circular regions and just take a note of the possible values of $x$ and $z$ (hint: they're on a cylinder). Let me know if I've messed up or been unclear. =] Edit: Whoever said that the intersection of the plane $x + y = 3$ and the cylinder is a circle was certainly wrong... I'll sit down now.
What does a line before an equal sign mean? (Propositional logic)
The image is: $$\models $$ In logic, this means that the RH side is the logical consequence of the LH side. That is, in every possible model the RH side is true given that every element in the LH side is satisfied by the model. If the LH side is a model instead of a set of axioms, it means the RH side is true in the model. An example probably helps illustrate what is going on. Say we have $$p_0,p_1 \in\Phi $$ Now, in every possible model where both of those propositions are satisfied, one of them is also satisfied so we can say: $$\Phi \models p_1$$ On the other hand if we have a model $\mathfrak A$, which has as its domain {1}, then we can say: $$\mathfrak A \models 1=1$$ (Assuming the rules of usual predicate calculus).
Matrix form of composition of two linear transformations
You are wrong about $T$. It could never be the matrix that you mentioned, because that matrix is not invertible. The matrix of $T$ is $\left[\begin{smallmatrix}1&0\\0&-1\end{smallmatrix}\right]$. And the matrix of $S$ is $\left[\begin{smallmatrix}\cos\left(\frac\pi4\right)&-\sin\left(\frac\pi4\right)\\\sin\left(\frac\pi4\right)&\cos\left(\frac\pi4\right)\end{smallmatrix}\right]$. So,$$U=\begin{bmatrix}\frac1{\sqrt2}&\frac1{\sqrt2}\\\frac1{\sqrt2}&-\frac1{\sqrt2}\end{bmatrix}\text{ and }U.\begin{bmatrix}0\\\frac1n\end{bmatrix}=\frac1{\sqrt2}\begin{bmatrix}\frac1n\\-\frac1n\end{bmatrix}.$$So, $\lim_{n\to\infty}U.\vec{v_n}=\left[\begin{smallmatrix}0\\0\end{smallmatrix}\right]$.
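A quick numerical check of this composition with numpy (the matrices are exactly the $S$ and $T$ above):

    import numpy as np

    c = np.cos(np.pi / 4)
    T = np.array([[1, 0], [0, -1]])   # reflection across the x-axis
    S = np.array([[c, -c], [c, c]])   # rotation by pi/4
    U = S @ T
    print(U)                          # [[ 0.7071  0.7071] [ 0.7071 -0.7071]]
    print(U @ np.array([0, 1e-9]))    # essentially the zero vector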
Zeckendorf with Negative Fibonacci Numbers
The negative Fibonacci numbers run as $F(-n)=F(n)$ for $n$ odd and $F(-n)=-F(n)$ for $2\mid n$. So for the Fibonacci numbers of index $-8,\dots,-1$ we get

    -21 13 -8 5 -3 2 -1 1

    -1
    -3 1
    -3
    -3 -1
    -8 2 1
    -8 2
    -8 t0 12
    -21 5 2 1
    -21 -8 -3 -1

So yes, all of the natural numbers can be expressed as a sum of non-consecutive Fibonacci numbers of negative index. It ought to be a straightforward conversion from the positive to the negative schema, e.g.

    -21 5 2 1
    21 -5 -2 -1
    13 8 -5 -2 -1
    13 3 -2 -1
    13

It's possible to go both ways, and there is a carry rule to prevent adjacent columns being filled with more than one counter.
How bad is this analogy for logical independence?
It's not a bad analogy. But it can only take you so far. Similar analogies are comparing forcing to the construction of field extensions (in particular the algebraic closure). Both the analogies have some of the essence of the idea we want to state: Sometimes you have insufficient data to decide whether a statement is true or false. But ultimately these examples are very different because there is a very canonical way we see the real numbers and the interval $(-1,1]$, but there is absolutely no canonical way we see models of set theory (except, of course, for those who feel it is inconsistent; they see it canonically as having no models). If you are trying to understand, or explain the idea of independence to someone who is not familiar with set theory, or even mathematics, enough to grasp "axioms" and "models" and so on -- then the analogy could serve you well if you use it properly (i.e. not stretching it too far, due to the point I made above). Another analogy which I like very much, and which fits very well for independence, is asking a question of the form "How many numbers satisfy the equation $x^3=2$?" An immediate answer would be one, or three, but it could also be zero. We didn't specify what "number" means here, and if we only think about it as a formula in the language of ring theory, or field theory if you will, then there are different models -- which are far from bizarre -- in which the answers are different. In the rational numbers there are no numbers with this property; in the real numbers there is just one; and in the complex numbers there is a set of three numbers with this property. We learn two things from this: the first is that the axioms of a field cannot prove the existence or the uniqueness (if it does exist) of such a number, and the second is that context is important. But when passing to set theory, or just any theory $T$, the context is lost because context is in the semantics and this is now a game of syntax (which by the completeness theorem we can turn into a game of semantics as well), and with only syntax to support us we learn that the axioms of field theory are too weak for this sort of proof. And these are the subtleties which, I find, distinguish the everyday examples of independence which we meet but often fail to recognize from the independence which is "in your face" in set theory -- which is also filled with technicalities and fine points about internal and external objects, and so on. I like this example because everyone knows (or should know) about irrational numbers, about the complex numbers and so on. It's easier to explain to the layman compared to group theory (which is easy to explain to a mathematician, but not to the common man). But as I learned during the previous year (I was writing my masters thesis) -- there are no real shortcuts in understanding all the delicate points, even in the big picture.
Why is $A^2$ halfway between $A$ and $A^\infty$?
The matrix $A$ has eigenvectors $v_1$ and $v_2$ with $Av_1=v_1$ and $Av_2=(1/2)v_2$. Then $A^2 v_1=v_1$, $A^2 v_2=(1/4)v_2$, $A^\infty v_1=v_1$ and $A^\infty v_2=0$. So $$\frac12(A+A^\infty)v_1=\frac12(v_1+v_1)=v_1=A^2v_1$$ and $$\frac12(A+A^\infty)v_2=\frac12\left(\frac12 v_2+0\right)=\frac14 v_2=A^2v_2.$$ As every vector $v$ is a linear combination of $v_1$ and $v_2$, then $$\frac12(A+A^\infty)v=A^2v.$$
Probability to shoot a target
Let $ A_i $ be the event that shooter number $ i $ hits the target. What you're looking for is just $ (A_1 \cap A^c_2 \cap A^c_3)\cup(A^c_1 \cap A_2 \cap A^c_3) \cup (A^c_1 \cap A^c_2 \cap A_3) $. Those are all disjoint events, so it shouldn't be hard to calculate: if the shots are independent with hit probabilities $p_i$, the probability is $p_1(1-p_2)(1-p_3)+(1-p_1)p_2(1-p_3)+(1-p_1)(1-p_2)p_3$.
Is my proof by natural deduction for $\vdash (p\rightarrow(q\wedge r))\rightarrow((p\rightarrow q)\wedge(p\rightarrow r))$ correct?
Semantically that is of course perfectly valid, and it is indeed no problem in most formal proof systems!
Find all the equilibrium points of this second-order equation $x''+2x'=3x-x^3$
Of course, I agree with the usual method used by Moo and Lutz. Let me propose an alternative approach. $\frac{d^2x}{dt^2}+2\frac{dx}{dt}=3x-x^3\quad$ is an autonomous ODE. It is well known that the reduction of order is obtained thanks to the change of function : $\frac{dx}{dt}=u(x(t))\quad\to\quad \frac{d^2x}{dt^2}=\frac{du}{dx}\frac{dx}{dt}=\left(\frac{du}{dx}\right)u \quad\to\quad \frac{d^2x}{dt^2}+2\frac{dx}{dt}= \left(\frac{du}{dx}\right)u+2u=3x-x^3$ $$\left(2+\frac{du}{dx}\right)u=3x-x^3$$ The stationary points are obtained for $\quad\frac{dx}{dt}=0 \quad\to\quad u=0 \quad\to\quad 0=3x-x^3$ $$x(3-x^2)=0 \quad\to\quad \begin{cases}x=0\\ x=\pm\sqrt{3}\end{cases}$$
Show $T: \mathbb{F}^{n \times n} \to \mathbb{F}^{n \times n},\;T(A): = BA$ is diagonalizable iff B is diagonalizable
Suppose that $B$ is diagonalizable, and let $\{e_1,\ldots,e_n\}$ be a basis such that $B(e_i)=c_ie_i$. Consider the basis $\{e_{ij}\}$ of ${\mathbb F}^{n\times n}$, where the only nonzero entry of $e_{ij}$ is a $1$ at the intersection of the $i$-th row and $j$-th column. Then $T(e_{ij})=c_ie_{ij}$. Conversely, suppose that $T$ is diagonalizable. There exists a basis $M_j$ of ${\mathbb F}^{n\times n}$ such that $BM_j=d_jM_j$. There exist $n$ elements $M_1,\ldots,M_n$ of the basis and vectors $u_1,\ldots,u_n$ such that $\{M_1(u_1),\ldots,M_n(u_n)\}$ is a basis. (If this were not true, there would exist a vector subspace $V$ with $\dim V\lt n$ such that $\operatorname{Im}(M_i)\subset V$ for all $i\in\{1,\ldots,n^2\}$, and then $\operatorname{Im}(M)\subset V$ for every $M\in\operatorname{span}(M_1,\ldots,M_{n^2})$, a contradiction, since the span is all of ${\mathbb F}^{n\times n}$ and so contains matrices of full rank.) Finally, $TM_i(u_i)=BM_i(u_i)=d_iM_i(u_i)\;\implies B$ is diagonalizable in the basis $\{M_1(u_1),\ldots,M_n(u_n)\}$.
General question about representations of groups
I am not sure if I understood your question correctly, but it seems to be a Linear Algebra question. It looks as if you are asking this: given an $n\times n$ matrix $A$ and a vector space $V$ over a field $K$ such that $\dim V=n$, is there always an endomorphism $f$ of $V$ and a basis $B=(e_1,\ldots,e_n)$ of $V$ such that the matrix of $f$ with respect to $B$ is $A$? Yes, there is. Actually, you can take any basis $(e_1,\ldots,e_n)$ of $V$ and consider the endomorphism $f$ of $V$ such that$$(\forall j\in\{1,2,\ldots,n\}):f(e_j)=\sum_{i=1}^na_{ij}e_i.$$
What are the homology groups of this quotient space of the torus?
Although you've got the answer by yourself, I would like to write an answer solving the problem with cellular homology, so that someone who asks the same question can find an answer here. I solved this problem a few months ago in an Algebraic Topology course as an exercise. Proof: Let $X = S^1\times S^1/ \sim$ be the space with the identifications: $$(e^{2\pi i/m}z,x_0)\sim (z,x_0)$$ $$(x_0,e^{2\pi i/n}z)\sim (x_0,z)$$ Like @Berci said, you should imagine this space as a grid of $m$ and $n$ lines, i.e. there are $m$ vertical and $n$ horizontal repetitions. (Picture omitted: a square subdivided into such a grid; it is enough to induce the right mental image.) $X$ consists of one 0-cell ($x_0$ is $e_1^0$), two 1-cells ($a$ is $e_1^1$, $b$ is $e_2^1$) and one 2-cell (we call it $e_1^2$). The attaching map identifies $x\in \partial D_1^2$ with $a^nb^ma^{-n}b^{-m}$. This implies the cellular chain complex $$0\to \mathbb{Z}[e_1^2]\overset{\partial_2=0}{\longrightarrow} \mathbb{Z}[a] \oplus\mathbb{Z}[b]\overset{\partial_1=0}{\longrightarrow} \mathbb{Z}[x_0]\to 0.$$ This implies $$H_p(X) = \begin{cases} \mathbb{Z}\mbox{ for } p=0,2 \\ \mathbb{Z}^2\mbox{ for } p=1 \\ 0\mbox{ for } p>2 \end{cases}.$$ Otherwise you can just see that the space $X$ is still a torus (cf. remark above). So it is not surprising that we've got the homology groups of the torus.
About the definition of tangent space and tangent cone.
Comment: "We can see that it is important in the definition of tangent space that I(X) is the radical ideal with X as zero locus." Response: Let $A$ be a commutative ring and let $A_{red}:=A/nil(A)$ where $nil(A)$ is the nilradical. Let $\mathfrak{m} \subseteq A$ be a maximal ideal with $\overline{\mathfrak{m}}:=\mathfrak{m}A_{red}$ with corresponding point $x\in S:=Spec(A)$. You may define the tangent cone $C_x(S)$ at $x$ as $$C1.\text{ } C_x(S):=Spec(Gr(\mathfrak{m}))$$ where $$C2.\text{ }Gr(\mathfrak{m}):= \oplus \mathfrak{m}^n/\mathfrak{m}^{n+1}:=A/\mathfrak{m} \oplus \mathfrak{m}/\mathfrak{m}^2\oplus \cdots.$$ Question: "I was wondering if the choice of the ideal J in the definition of tangent cone is also important. It seems not but I have some troubles to write down a formal proof, any help?" Answer: Since the cotangent space $\mathfrak{m}/\mathfrak{m}^2$ depends on the reduced structure a similar result holds for the tangent cone. You may define the tangent cone in terms of the local ring: Let $S:=Spec(A)$ and let $\mathfrak{m} \in S$ be a maximal ideal. It follows $$\mathcal{O}_{S, \mathfrak{m}} \cong A_{\mathfrak{m}}$$ and there is an inclusion $$ \mathfrak{m}_{\mathfrak{m}} \subseteq \mathcal{O}_{S,\mathfrak{m}}$$ and we may define $$C3.\text{ }Gr(\mathfrak{m}_{\mathfrak{m}}):= \oplus_n \mathfrak{m}_{\mathfrak{m}}^n/\mathfrak{m}_{\mathfrak{m}}^{n+1}.$$ there is a canonical isomorphism $$\mathfrak{m}_{\mathfrak{m}}^n/\mathfrak{m}_{\mathfrak{m}}^{n+1} \cong \mathfrak{m}^n/\mathfrak{m}^{n+1}$$ and an isomorphism of rings $$Gr(\mathfrak{m}) \cong Gr(\mathfrak{m}_{\mathfrak{m}}),$$ hence the definition in $C2$ may be done using the local ring at $\mathfrak{m}$. The definition in $C3$ is intrinsic since it is defined using the local ring and the local ring does not depend on an embedding of $S$ into some affine space: If $A$ is a finitely generated $k$-algebra for some field $k$ you may choose a set of generators $a_1,..,a_n \in A$ and a presentation $$A\cong k[x_1,..,x_n]/I$$ giving an embedding $$i:S \rightarrow \mathbb{A}^n_k$$ as a closed sub-scheme of affine $n$-space. The local ring of $S$ at $x$ and the cone $C_x(S)$ does not depend on the embedding $i$ - it depends on the maximal ideal $\mathfrak{m}_{\mathfrak{m}}$. As you have observed: A similar statement is true for the tangent space. If your variety/scheme $S \subseteq \mathbb{A}^n_k$ contains the origin $(0)$ with corresponding ideal $\mathfrak{m}:=(x_1,..,x_n)$, you must prove that your definition of $C_0(S)$ agrees with the definition in $C1$. If $A$ is a $k$-algebra with $k$ a field and if $I \subseteq A \otimes_k A$ is the ideal of the diagonal. Assume $I^n/I^{n+1},A\otimes_k A/I^{n+1}$ is a projective $A$-module for all $n\geq 1$. It follows the module $I^n/I^{n+1}$ has the property that $$ I^n/I^{n+1}\otimes_A A/\mathfrak{m}\cong \mathfrak{m}^n/\mathfrak{m}^{n+1}$$ for any maximal ideal $\mathfrak{m}$ with $A/\mathfrak{m} \cong k$. If you define $Gr(I):= \oplus_n I^n/I^{n+1}$ you get a map $$\pi: Spec(Gr(I)) \rightarrow Spec(A)$$ with the property that for any such maximal ideal $\mathfrak{m}\in Spec(A)$ it follows $$\pi^{-1}(\mathfrak{m}) \cong Gr(\mathfrak{m}).$$ Hence $Spec(Gr(I))$ has the tangent cone $C_x(S)$ as fibers. This is similar to the definition of the cotangent sheaf $\Omega:=\tilde{\Omega^1_{A/k}}$: There is an isomorphism $$ \Omega_x \otimes_{\mathcal{O}_{S,x}} \kappa(x) \cong \mathfrak{m}/\mathfrak{m}^2$$ where $\mathfrak{m}$ is the ideal of the $k$-rational point $x$. 
Note: The maximal ideals $\mathfrak{m}$ and $\overline{\mathfrak{m}}$ have the same residue field. On page 303 in Mumford's "The red book.." you will find this discussed, together with a relation between the tangent cone and "blowing up" a variety/scheme at a point. In Mumford's notation: Let $S=Spec(k[x_1,..,x_n]/A)$ and let $x:=(0,..,0)\in S$. Let $A^*$ be the ideal generated by $\{f^*: f\in A\}$, where for any element $f\in k[x_1,..,x_n]$ we write $$f:=f_r+f_{r+1}+ \cdots $$ with $f_i \in k[x_1,..,x_n]_i$ the homogeneous components of $f$ and $f_r \neq 0$, and define $f^*:=f_r$. It follows that $A^*$ is a homogeneous ideal. On page 305 in Mumford you will find the following proved: Let $\mathfrak{m}_x \subseteq \mathcal{O}_{S,x}$ be the maximal ideal in the local ring of $S$ at $x$ and define $gr(\mathcal{O}_{S,x}):=\oplus_i \mathfrak{m}_x^i/\mathfrak{m}_x^{i+1}$. There is an isomorphism $$Spec(gr(\mathcal{O}_{S,x})) \cong Spec(k[x_1,..,x_n]/A^*).$$
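For a concrete illustration of $C1$ and of $A^*$ (the standard nodal-cubic example, added here for illustration): let $S = Spec(k[x,y]/(y^2 - x^2 - x^3))$ and $x_0:=(0,0)$. The lowest-degree homogeneous component of $f = y^2 - x^2 - x^3$ is $f^* = y^2 - x^2$, and here $A^* = (y^2 - x^2)$, so $$C_{x_0}(S) \cong Spec\left(k[x,y]/(y^2-x^2)\right),$$ the union of the two lines $y = \pm x$ through the origin (for $char(k)\neq 2$). The tangent cone thus records the two branch directions of the node, while the tangent space at $x_0$ is all of $\mathbb{A}^2_k$.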
Random variable(s) transformation, various cases
Yes, this is correct. However, whereas in the case $x,y\to z$ the delta distribution approach is useful in practice, in the case $x,y\to z,w$ it’s just a reformulation of the usual differential relationship that doesn’t add any practical value. We have $$ f_{Z,W}(z,w)\mathrm dz\mathrm dw=f_{X,Y}(x,y)\mathrm dx\mathrm dy $$ and thus $$ f_{Z,W}(z,w)=f_{X,Y}(x,y)\frac{\partial(x,y)}{\partial(z,w)}\;, $$ where $\frac{\partial(x,y)}{\partial(z,w)}$ is the Jacobian of the transformation (taken in absolute value). The delta distribution formulation follows directly by integration, changing variables from $(z,w)$ to $(x,y)$: \begin{eqnarray} f_{Z,W}(z',w') &=& \iint\mathrm dz\mathrm dwf_{Z,W}(z,w)\delta(z'-z)\delta(w'-w) \\ &=& \iint\mathrm dx\mathrm dy\frac{\partial(z,w)}{\partial(x,y)}f_{Z,W}(z,w)\delta(z'-z)\delta(w'-w) \\ &=& \iint\mathrm dx\mathrm dyf_{X,Y}(x,y)\delta(z'-z(x,y))\delta(w'-w(x,y))\;. \end{eqnarray}
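As a numerical sanity check of the Jacobian formula (a sketch of my own, with the simple linear choice $z=x+y$, $w=x-y$, for which $\frac{\partial(x,y)}{\partial(z,w)}=\frac12$):

```python
import numpy as np

rng = np.random.default_rng(0)

# X, Y independent standard normals; Z = X + Y, W = X - Y,
# so x = (z+w)/2, y = (z-w)/2 and |d(x,y)/d(z,w)| = 1/2.
x = rng.standard_normal(1_000_000)
y = rng.standard_normal(1_000_000)
z, w = x + y, x - y

def f_zw(zv, wv):
    """Density predicted by the Jacobian formula."""
    f_xy = np.exp(-(((zv + wv) / 2) ** 2 + ((zv - wv) / 2) ** 2) / 2) / (2 * np.pi)
    return 0.5 * f_xy

# Compare P(0 < Z < 1, 0 < W < 1) from the samples with the integral
# of the predicted density over the same box.
emp = ((z > 0) & (z < 1) & (w > 0) & (w < 1)).mean()
zz, ww = np.meshgrid(np.linspace(0, 1, 201), np.linspace(0, 1, 201))
pred = f_zw(zz, ww).mean()   # mean density * box area (= 1)
print(emp, pred)             # both ~ 0.068
```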
How to find probability after finding the CDF of a max?
Hint: What is the probability $0 \le X_1 \le 0.9999$? What is the probability $0 \le X_i \le 0.9999$ for all $i$? What is the probability $0 \le Y \le 0.9999$? What is the probability $0.9999 \lt Y \lt 1$?
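If, as the hint presumes, the $X_i$ are i.i.d. uniform on $[0,1]$ and $Y=\max_i X_i$, then $P(0.9999 \lt Y \lt 1)=1-0.9999^n$; a quick simulation confirms this (the value $n=10$ is my own illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10                      # number of X_i (illustrative choice)
t = 0.9999

samples = rng.random((1_000_000, n)).max(axis=1)  # Y = max(X_1..X_n)
empirical = (samples > t).mean()
exact = 1 - t ** n                                # P(Y > t) = 1 - t^n
print(empirical, exact)     # both ~ 1e-3
```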
Pull back of line bundles between projective schemes
What is $O_{Proj S_*}(n) = O_S(n)$? Whatever it is, you can describe it and thus work with it by knowing the following facts:

1. $O_S(n)$ is isomorphic to the structure sheaf on each open set of the cover by the $D(s)$, $s \in S_n$, via some $\phi_s: O|_{D(s)} \to O(n)|_{D(s)}$. In fact the sections of the structure sheaf over $D(s)$ are just $S[s^{-1}]_0$, and those of $O_S(n)$ are just $S[s^{-1}]_n$. So $\phi_s$ can be taken to be multiplication by $s$, which is an isomorphism on this patch.

2. The transition function $\phi_s \circ \phi_t^{-1}$ is precisely multiplication by $s/t$. (This follows immediately from 1.)

Main point of this: To understand a line bundle, we just need to know an open cover on which it trivializes, and the transition functions on the intersections. We also know how to pull back a line bundle. Explicitly: If $O_V \to L$ is a trivialization over $V$, then the pullback by $\phi^*$ of this map gives a trivialization $O_{\phi^{-1}(V)} \cong \phi^* O_V \to \phi^* L$. The map $\psi: O_{\phi^{-1}(V)} \cong \phi^* O_V$ is the restriction of a canonical globally defined sheaf isomorphism between $\phi^* O_S$ and $O_T$, which is locally just "multiplication" $A \otimes_A B \to B$ - you can probably replace it with an equality without loss, but I personally am confused by that level of sloppiness - at least at this point in my mathematical life. Anyway it is not too difficult to keep track of it. So let's just apply our understanding of the pullback to $O_S(n)$ and see what happens.

1) $\phi^{-1}(D(s)) = D(\phi(s))$. So far so good - your condition that $V(\phi(S_+)) = \emptyset$ implies that this gives an open cover of $Proj T$ on which we know that $\phi^* O(n)$ trivializes. (It may not be the case that $S_n$ surjects onto $T_{dn}$. That is okay, we still get a cover from the $D(\phi(s))$, $s \in S_n$: assuming your condition that $V(\phi(S_+)) = \emptyset$, for any homogeneous prime $P$ in $Proj(T)$ there are some $k$ and $s$ so that $\phi(s)^k \not \in P$, and this implies that $\phi(s) \not \in P$.)

2) Also, the trivialization $T_s: O_S|_{D(s)} \to O_S(n) |_{D(s)}$ becomes some $\phi^*(T_s) : \phi^* O_S \to \phi^*(O_S(n))$ over $D(\phi(s))$. Combining this with the isomorphism $\psi : O_T \to \phi^* O_S$, our trivialization over this chart is $\phi^*(T_s) \circ \psi|_{D(\phi(s))}$.

3) Claim: The transition functions become $\phi(s/t) = \phi(s) / \phi(t)$. Proof: First we compute: $(\phi^*(T_t) \circ \psi)^{-1} \circ \phi^*(T_s) \circ \psi = \psi^{-1} \circ \phi^* (T_t^{-1} \circ T_s) \circ \psi = \psi^{-1} \circ (\_ \times \phi(s/t)) \circ \psi = \psi^{-1} \circ (\_ \times \phi(s)/\phi(t)) \circ \psi$. Then note: the isomorphism $\psi$ commutes with multiplication by elements of the structure sheaf (it is an isomorphism of $O_T$-algebras, in particular of $O_T$-modules), so in the end we get that our transition function is just multiplication by $\phi(s) / \phi(t)$.

If you look back at the first paragraph, this is exactly a cover and a set of transition maps that describe $O_T(dn)$. So apparently the pullback is isomorphic as a line bundle to $O_T(dn)$. I think your claim is correct - but maybe I am overlooking something subtle! This stuff is confusing!
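A standard example to see this in action (my own addition, not from the original question): the $d$-uple Veronese map. Let $S = k[y_0,..,y_d]$, $T = k[s,t]$, and $\phi(y_i) = s^i t^{d-i}$, so $\phi$ sends $S_n$ into $T_{dn}$, and $V(\phi(S_+)) = \emptyset$ since a homogeneous prime containing both $s^d$ and $t^d$ would have to be $(s,t)$. The induced map $Proj\,T = \mathbb{P}^1 \to \mathbb{P}^d = Proj\,S$ is the Veronese embedding. On the cover by the $D(\phi(y_i))$, the transition functions of the pullback of $O_S(1)$ are $$\frac{\phi(y_i)}{\phi(y_j)} = \frac{s^i t^{d-i}}{s^j t^{d-j}} = \left(\frac{s}{t}\right)^{i-j},$$ which are exactly the transition functions of $O_T(d)$ on that cover (the elements $s^i t^{d-i}$ lie in $T_d$), consistent with the conclusion $\phi^* O_S(n) \cong O_T(dn)$ for $n = 1$.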
How can I prove that $(a,b) = (c,d) \land (c,d)=(e,f) \implies (a,b)=(e,f)$ is true
Hint: assuming $$(a,b)=(c,d) \iff a=c\wedge b=d\;\;\;\;\;\;(*)$$ apply $(*)$ once in the $\rightarrow$ direction to break up the pairs on the left-hand side, then use the fact that $=$ is transitive: $$p=q \wedge q=r \implies p=r,$$ and finally apply $(*)$ a second time in the $\leftarrow$ direction to form the pairs on the right-hand side.
Probability a football team wins with three probabilities given
No, this is wrong. Let "Captain in good form" $=C$. For part 1: $$P(Win)=1-P(Loss),$$ $$P(Loss)=P(Loss\cap C)+P(Loss\cap C')=P(Loss\mid C)\,P(C)+P(Loss\mid C')\,P(C')=0.25\times 0.7+0.6\times 0.3=0.355.$$ Hence $P(Win)=0.645$. For part 2: $$P(Loss\mid C)\,P(C)=P(Loss\cap C),$$ hence $P(C\cap Loss)=0.25\times 0.7=0.175$.
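A short numerical check of the arithmetic above (a sketch of my own; the variable names are mine):

```python
# Law of total probability: P(Loss) = P(Loss|C)P(C) + P(Loss|C')P(C')
p_c = 0.7
p_loss_given_c, p_loss_given_not_c = 0.25, 0.60

p_loss = p_loss_given_c * p_c + p_loss_given_not_c * (1 - p_c)
print(1 - p_loss)              # P(Win)        -> 0.645
print(p_loss_given_c * p_c)    # P(C and Loss) -> 0.175
```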
Need help evaluating $\lim\limits_{n \rightarrow +\infty} \dfrac{ n^{kn} }{ (kn)! } , k\in N, k\neq 0 $
We use Stirling's approximation $m!\approx\sqrt{2\pi m}\,(m/e)^m$ for large $m$, applied to $m=kn$: $$ \frac{n^{kn}}{(kn)!}\approx\frac{n^{kn}}{\sqrt{2\pi kn}\,(kn/e)^{kn}}=\frac{1}{\sqrt{2\pi kn}}\left(\frac{e}{k}\right)^{kn}. $$ The behaviour as $n\to\infty$ is governed by the geometric factor $(e/k)^{kn}$. If $k<e$, i.e. $k\in\{1,2\}$, then $e/k>1$, this factor grows geometrically, and the limit is $\infty$. If $k\geq 3$ then $e/k<1$, the factor decays geometrically (much faster than $\sqrt{2\pi kn}$ grows), and the limit is $0$.
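A quick numerical confirmation of the two regimes (a sketch of my own; `math.lgamma(m + 1)` from the standard library computes $\ln m!$):

```python
import math

# log of n^(kn) / (kn)!  =  kn*ln(n) - ln((kn)!)
def log_ratio(n, k):
    return k * n * math.log(n) - math.lgamma(k * n + 1)

for k in (1, 2, 3):
    print(k, [round(log_ratio(n, k), 1) for n in (10, 100, 1000)])
# k = 1, 2: log-ratio -> +inf (the limit is infinity)
# k = 3:    log-ratio -> -inf (the limit is 0)
```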
Geometric interpretation of $\Sigma^{-1}\Phi x$ where $\Phi = [\phi_1,\phi_2,\cdots, \phi_n]$, $\Sigma=\sum_{i=1}^{n} \phi_i\phi_i^T+\lambda I$
Some potentially useful observations:

- $\sum_{i=1}^{n} \phi_i\phi_i^T = \Phi \Phi^T$.
- $\eta := \Sigma^{-1}\Phi x$ solves the equation $(\Phi \Phi^T + \lambda I) \eta = \Phi x$.
- The $\phi_i$ are linearly independent if and only if $\Phi^T\Phi$ is invertible.
- $\Phi\Phi^T$ is invertible if and only if the $\phi_i$ span $\Bbb R^d$; it is only in this case that $\Sigma^{-1}$ exists for $\lambda = 0$.
- If the $\phi_i$ span $\Bbb R^d$, then there exists a matrix $\Psi$ such that $\Phi \Psi = I_d$. For instance, we can use the Moore-Penrose pseudoinverse $\Psi = \Phi^T(\Phi\Phi^T)^{-1}$. In this case, we can rewrite our equation for $\eta$ as $$ (\Phi \Phi^T + \lambda I) \eta = \Phi x\\ (\Phi \Phi^T + \lambda \Phi\Psi) \eta = \Phi x \\ \Phi(\Phi^T + \lambda \Psi)\eta = \Phi x $$
- When $\lambda = 0$, the equation $\Phi\Phi^T \eta = \Phi x$ describes a least squares solution to $\Phi^T \eta = x$.
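A numerical sketch of my own (not from the original question) showing that $\eta=\Sigma^{-1}\Phi x$ is the ridge-regression solution, i.e. the minimizer of $\|\Phi^T\eta-x\|^2+\lambda\|\eta\|^2$:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, lam = 3, 5, 0.1

Phi = rng.standard_normal((d, n))   # columns are the phi_i, spanning R^d
x = rng.standard_normal(n)

Sigma = Phi @ Phi.T + lam * np.eye(d)
eta = np.linalg.solve(Sigma, Phi @ x)          # eta = Sigma^{-1} Phi x

# eta minimizes ||Phi^T eta - x||^2 + lam * ||eta||^2:
# gradient = 2 Phi (Phi^T eta - x) + 2 lam eta = 0  <=>  (Phi Phi^T + lam I) eta = Phi x
grad = Phi @ (Phi.T @ eta - x) + lam * eta
print(np.linalg.norm(grad))   # ~ 0 (up to floating point)

# For lam = 0 it is a least-squares solution of Phi^T eta = x:
eta_ls, *_ = np.linalg.lstsq(Phi.T, x, rcond=None)
print(np.linalg.norm(np.linalg.solve(Phi @ Phi.T, Phi @ x) - eta_ls))  # ~ 0
```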
Zeros of a cubic polynomial with rational coefficients
It is possible. For example, given a prime number $p$ and $a,b\in\mathbb Z$ with $a+b\le-2$, let $$f(x)=x^3+pax^2+pbx+p\in \mathbb Z[x].$$ Then by Eisenstein's criterion, $f$ is irreducible in $\mathbb Q[x]$; in particular $f$ has no rational root. However, since $f(x)\to-\infty$ as $x\to-\infty$, $f(0)=p>0$, $f(1)=1+(a+b+1)p\le 1-p<0$, and $f(x)\to+\infty$ as $x\to+\infty$, $f$ has three distinct real roots, located in $(-\infty,0)$, $(0,1)$ and $(1,+\infty)$ respectively.
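To see a concrete instance numerically (my own choice of parameters: $p=2$, $a=b=-1$, giving $f(x)=x^3-2x^2-2x+2$):

```python
import numpy as np

# f(x) = x^3 - 2x^2 - 2x + 2: Eisenstein at p = 2, so no rational roots,
# yet there are sign changes at -inf, 0, 1, +inf, so three real roots.
coeffs = [1, -2, -2, 2]
roots = np.roots(coeffs)
print(np.sort(roots.real))        # one root in each of (-inf,0), (0,1), (1,inf)
print(np.abs(roots.imag).max())   # ~ 0: all three roots are real
```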
How can I express xNORy solely with NAND operations?
I’ll use $\mid$ for the connective NAND. You want $\overline{x+y}$; as a first step, this is $(x+y)\mid(x+y)$. Now check that $x+y=(x\mid x)\mid(y\mid y)$, and put the pieces together.
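A brute-force truth-table check of this construction (my own sketch, reading the target as NOR, $\overline{x+y}$, exactly as the answer does):

```python
def nand(a, b):
    return 1 - (a & b)

for x in (0, 1):
    for y in (0, 1):
        x_or_y = nand(nand(x, x), nand(y, y))   # x + y = (x|x)|(y|y)
        nor = nand(x_or_y, x_or_y)              # complement of x + y
        assert nor == 1 - (x | y)
print("NOR via NAND verified on all inputs")
```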
What's the definition of $\operatorname{div}(u\otimes u)$?
A quick Google search turns up the identity $$\operatorname{div}(u\otimes v) = (\nabla u)v + (\operatorname{div}v)u,$$ as well as explanations of your statement using the notion of a tensor field, which might help you.
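Spelling the identity out in components may help (my own addition; the conventions $(u\otimes v)_{ij}=u_iv_j$ and $(\nabla u)_{ij}=\partial_j u_i$ are the ones that make the formula above come out): $$\bigl(\operatorname{div}(u\otimes v)\bigr)_i = \sum_j \partial_j (u_i v_j) = \sum_j v_j\,\partial_j u_i + u_i \sum_j \partial_j v_j = \bigl((\nabla u)v\bigr)_i + (\operatorname{div} v)\,u_i\;,$$ so in particular $\operatorname{div}(u\otimes u) = (\nabla u)u + (\operatorname{div} u)\,u = (u\cdot\nabla)u + (\operatorname{div} u)\,u$.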
$C^n$ function with big/small O notation
No, they are both weaker; both properties can only encode the existence of the first derivative when $n=1$, but cannot go any further. Take for instance the function $$ f(x) = \exp\left(-\frac1{|x|}\right) \sin\left(\exp\left(\frac1{|x|}\right)\right),\qquad f(0):=0. $$ Since $|f(x)|\le e^{-1/|x|}=o(|x|^n)$ for every $n$, it satisfies both properties for every $a$ and every $n$. But for $x>0$ we compute $$ f'(x) = \frac{e^{-1/x}}{x^2}\,\sin\left(e^{1/x}\right)-\frac{1}{x^2}\,\cos\left(e^{1/x}\right), $$ whose second term is unbounded near $0$. Hence $f'$ is discontinuous at $0$ (where it exists and equals $0$), and $f$ is not $C^1$.
Theater seating arrangement probability
For your answer of $\frac{(n-1)!}{n!}$, you're only counting possibilities where $A$ is always sitting on the same side of $B$. Multiply by the missing factor of $2$, and the answers agree. Their numerator comes from the fact that there are only $n - 1$ pairs of adjacent seats, in a row of $n$ seats.
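A simulation sketch of my own confirming the corrected probability $2(n-1)!/n!=2/n$ that two particular people end up adjacent (here with $n=8$):

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials = 8, 200_000

adjacent = 0
for _ in range(trials):
    seats = rng.permutation(n)               # seats[i] = person in seat i
    a = np.where(seats == 0)[0][0]           # seat of person A
    b = np.where(seats == 1)[0][0]           # seat of person B
    adjacent += abs(a - b) == 1
print(adjacent / trials, 2 / n)              # both ~ 0.25
```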
What is the purpose/meaning of parametrizing a set by an index set
If the vector spaces are finite dimensional, there is not much conceptual difference between using $i\in I$ and using $1\leq i\leq n$ to index your bases. However, $i\in I$ allows for infinite dimensions, without changing your notation.
labelled graph characteristic polynomial
OK, if you wish to obtain an answer here then I confirm my above comments that you should talk with a specialist. My Google search did not extend my near-zero knowledge very far. I did not find exactly the same generalization of the characteristic polynomial of a graph as you proposed, but I guess some exist. See, for instance, the paper “A Generalization of the Characteristic Polynomial of a Graph” by Richard J. Lipton and Nisheeth K. Vishnoi. Personally I see two natural directions in which a further generalization may be useful. The first is the problem of isomorphism of vertex- or edge-colored graphs. The second is the problem of simultaneous isomorphism of two graphs with a common vertex set. (If I remember right, the corresponding problem for simultaneous similarity of pairs of matrices is a so-called “wild problem”; when specialists in matrices encounter an equivalent problem they say with respect, “Oh, so it is the wild problem,” and stop. There is even a claim (in Russian) that one should not expect a reasonable answer to such a problem.) As to “how to go about constructing non-isomorphic graphs with matching characteristic polynomial with a single ‘edge label’”: I expect such graphs exist, because the quest to find two non-isomorphic graphs with matching characteristic polynomials (so-called cospectral graphs), even under additional restrictions, looks well studied. See, for instance, the following papers: Anthony DiGenio, “A Construction of Cospectral Graphs”; Haruo Hosoya, Kyoko Ohta, Masaki Satomi, “Topological twin graphs II. Isospectral polyhedral graphs with nine and ten vertices”; M. Randić, W. R. Müller, J. V. Knop, and N. Trinajstić, “The Characteristic Polynomial as a Structure Discriminator”.