$\mathcal{O}_L$ free over $\mathcal{O}_K[G]$
The answer is yes. Firstly, since $\mathcal{O}_K$ is a Dedekind domain and $\mathcal{O}_L$ is an integral domain, $\mathcal{O}_L$ is a torsion-free (and finitely generated) $\mathcal{O}_K$-module, hence flat, hence locally free. Locally free modules over integral domains have a well-defined rank, namely the rank of any localization of the module at a prime. Localizing at $(0)$, we find that $\mathcal{O}_L$ is a locally free $\mathcal{O}_K$-module of rank $[L:K] = |G|$. On the other hand, $\mathcal{O}_K[G]$ is clearly a free $\mathcal{O}_K$-module of rank $|G|$. If $\mathcal{O}_L$ is a free $\mathcal{O}_K[G]$-module, then it has to have rank $1$: since $\mathcal{O}_L\ne 0$, it's not rank $0$, and if it had rank $\ge 2$ over $\mathcal{O}_K[G]$, then since ranks are multiplicative, it would have rank $\ge 2|G|$ over $\mathcal{O}_K$, which is not true.
Describing Homomorphisms....?
Hint: Show that a homomorphism of abelian groups $\mathbb{Q} \to \mathbb{Q}$ is actually $\mathbb{Q}$-linear. Then, recall what $K$-linear maps $K \to V$ look like, where $V$ is any $K$-vector space and $K$ is a field. (Big hint: Look at the image of $1 \in K$.)
Indefinite integrals with absolute values
HINT: I would consider the two cases $$2x+3\geq 0$$ and $$2x+3<0$$
Two possible answers of $\int\frac{x}{4x-1}dx$
You got the solution of the given differential equation as $$-e^{-y}=\frac{1}{16}(4x-1+\ln|4x-1|)+c\tag1$$ where $c$ is an arbitrary constant, and it can be written as $$-e^{-y}=\frac{1}{16}(4x-1+\ln|4x-1|)+c\\\implies -e^{-y}=\frac{1}{16}(4x+\ln|4x-1|)+c'\tag2$$ where $~c'=c-1/16~$ is an arbitrary constant. Now putting in the initial condition $~y(0)=0~,$ from $(1)$ you get $~c=-15/16~$ and from $(2)$ you get $~c'=-1~.$ Now putting $~c=-15/16~$ into equation $(1)$, you get $$-e^{-y}=\frac{1}{16}(4x-1+\ln|4x-1|)-\frac{15}{16}\\\implies -e^{-y}=\frac{1}{16}(4x+\ln|4x-1|)-1\tag3$$ Again, putting $~c'=-1~$ into equation $(2)$, you get $$-e^{-y}=\frac{1}{16}(4x+\ln|4x-1|)-1$$ which is the same as equation $(3)$. So from here you can see that the particular solution of the given differential equation is the same in both cases and is independent of the choice of the arbitrary constant. Therefore both of your answers are correct and you are free to choose either one of them.
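As a quick sanity check (a sympy sketch, not part of the original argument, working on $x>1/4$ so the absolute value drops), both antiderivative forms differentiate back to $\frac{x}{4x-1}$ and differ by the constant $\frac{1}{16}$:

```python
import sympy as sp

x = sp.symbols('x')

# The two candidate antiderivatives of x/(4x - 1), without the constants
F1 = sp.Rational(1, 16) * (4*x - 1 + sp.log(4*x - 1))
F2 = sp.Rational(1, 16) * (4*x + sp.log(4*x - 1))

# Both derivatives should simplify to x/(4x - 1)
print(sp.simplify(sp.diff(F1, x) - x/(4*x - 1)))  # 0
print(sp.simplify(sp.diff(F2, x) - x/(4*x - 1)))  # 0

# And the two forms differ by the constant 1/16
print(sp.simplify(F2 - F1))  # 1/16
```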
$S^1$ is homeomorphic to $[0, 1]/ \{0, 1\}$.
The function $f:[0,1] \to S^1$, defined by $f(x) = (\cos(2\pi x), \sin(2\pi x))$, is surjective and continuous and has the property that $f(x) = f(x')$ iff ($x = x'$ or $\{x,x'\} = \{0,1\}$). So the decomposition $R$ from $f$ of $[0,1]$, as in the first paragraph, is exactly your $D$ from below. So here $X = [0,1], Y = S^1$. The $\theta$ from $S^1$ to $X/R$ is just (as defined in the first paragraph) the function that maps a point $(\cos(2\pi x),\sin(2\pi x))$ on the circle to the equivalence class of $x$, where we ensure that $x \in [0,1]$. Any point on the circle can be written that way.
Proof of the limit of a function as X tends towards infinity
Let $f(x)=\frac{2x-1}{x^3}$ and let $\epsilon>0$ be given. We have $$(\forall x>\color{red}{1})\;\;\; |f(x)|<\frac{2x}{x^3}=\frac{2}{x^2}<\frac{2}{x}$$ Thus, if we have $\frac{2}{x}<\epsilon$, then we are sure that $\;\;|f(x)|<\epsilon$. Put $A=\max(\frac{2}{\epsilon},\color{red}{1})$. Then we can write $(\forall x>A)\;\;\; |f(x)|<\frac{2}{x}<\epsilon$. $$\implies \lim_{x\to+\infty}f(x)=0.$$
Differentiate the Function: $y=2x \log_{10}\sqrt{x}$
Your derivative of $\log_{10} \sqrt{x}$ is incorrect. I recommend having another go and posting your working here for correction if you get the same thing.
On the properties of characteristic subgroup
No. For example, take $G = \mathbb{Z}_4 \times \mathbb{Z}_4$ with $H_1 = 2\mathbb{Z}_4$ and $K_1 = \mathbb{Z}_4$. Let $\phi \colon G \to G$ swap the two components, i.e., $\phi(x,y) = (y,x)$. The subgroup $H_1 \times K_1$ is not preserved under $\phi$.
Finding an inverse of a polynomial modulo some other polynomial
Hint. Use the Euclidean algorithm on $f(x) := 3x^2 + 7x + 5$ and $g(x) := x^3 - 3x - 1$ to find polynomials $a, b \in \mathbf Q[x]$ such that $$ a(x) \cdot (3x^2 + 7x + 5) + b(x) \cdot (x^3 - 3x - 1) = \gcd(f,g) = 1 $$ In $L$ you then have $$ (a + \left<g\right>)\cdot (f + \left<g\right>) = 1 + \left<g\right>$$
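If you want to verify the computation, here is a small sympy sketch (sympy's `gcdex` returns $a$, $b$, $\gcd$ with $a f + b g = \gcd$):

```python
import sympy as sp

x = sp.symbols('x')
f = 3*x**2 + 7*x + 5
g = x**3 - 3*x - 1

# Extended Euclidean algorithm over Q[x]: a*f + b*g = gcd(f, g)
a, b, d = sp.gcdex(f, g)
print(d)                     # 1, so f is invertible mod g
print(sp.expand(a*f + b*g))  # 1, confirming the Bezout identity
print(sp.expand(a))          # a represents the inverse of f in Q[x]/<g>
```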
Is $(0,1)\cup[2,3)$ not Dedekind complete or does it have a gap somewhere?
You're right. What you've discovered is that "Dedekind completeness" and topological connectedness are not necessarily the same thing. They coincide only under the following assumptions: the topology in question is the order topology, and the order is dense (i.e. for any distinct elements $x$ and $y$ in the space, there is a third, $z$, such that $x < z < y$). Under these assumptions, any Dedekind-complete ordered and topological set is topologically connected, and vice versa.

However, note that to make sense of these notions, we need both an order and a topology on the space in question. These two things can be chosen separately, and that is, in fact, what is going on in the case in question. Typically, if we are given an order, but no topology, we will assume the order topology to treat the space topologically, but we don't have to. If we are given both, and that topology is at variance with the order topology, the specified topology overrides the order topology "default", and topological and ordering results about the space in question need not be in agreement.

And that is what is happening in the case here. We are considering this set as inheriting both order and topology, individually, from $\mathbb{R}$. The inherited order on $(0, 1) \cup [2, 3)$ is equivalent to that on, say, $(0, 2)$ (or even $(0, 1)$ itself, but I choose the other for metrical consistency and hence intuitiveness): to see this, consider that $(0, 2) = (0, 1) \cup [1, 2)$, and then note that the original set has the same interval structure, but that we've in effect simply relabeled the points in the second interval from $[1, 2)$ to $[2, 3)$. But the inherited topology on $(0, 1) \cup [2, 3)$ is not the same as the order topology from the previous inherited order. This can be seen by noting that the order-equivalence to $(0, 2)$ would make the order topology connected (order-equivalence implies equivalence of order topologies), since $(0, 2)$ is a connected subset of $\mathbb{R}$, but as you've seen, this set, under topological inheritance, is disconnected. In other words, your observation proves the inequivalence of the inherited order's order topology and the directly-inherited topology from $\mathbb{R}$, for this set.
Exercise on Fundamentals of Divisibility: Factorization Domains
Since $10\not\equiv 1 \pmod 4,$ $\mathbb{Z}[\sqrt{10}]$ is the ring of integers of $\mathbb{Q}(\sqrt{10}).$ The ring of integers of a number field is always a Noetherian domain. We show that every Noetherian domain $R$ is a factorization domain like this: Suppose $a_0\in R$ is not zero or a unit, but not expressible as a product of irreducibles. Then in particular, it cannot be irreducible itself, so we can factorize $a_0 = bc$ where neither $b$ nor $c$ is a unit. Hence we have strict containments $Ra_0 \subset Rb$ and $Ra_0 \subset Rc.$ Now if both $b$ and $c$ could be written as a product of irreducibles, $a_0$ could be too, so let $a_1=b$ if $b$ is not a product of irreducibles, otherwise set $a_1=c.$ Thus $Ra_0 \subset Ra_1$ and $a_1$ is not zero or a unit and not expressible as a product of irreducibles. We can repeat this process indefinitely to get the strictly ascending chain $$ Ra_0 \subset Ra_1 \subset Ra_2 \subset Ra_3 \subset \cdots $$ contradicting the fact that $R$ is Noetherian. Also, about how you did the rest of the problem, have you checked that the norm you used is indeed a multiplicative norm? Usually the norm on that ring is $N(a+\sqrt{10}b) = a^2 - 10b^2.$ To show $3$ is irreducible with this norm we have to show $a^2-10b^2=\pm3$ has no solutions, which follows from the fact that neither $3$ nor $-3\equiv 2$ is a quadratic residue mod $5$. For the second part, proving $3$ is not prime, once you see $3\mid(1+\sqrt{10})(1-\sqrt{10})=-9$ it is very easy to finish off: your ring lies inside $\mathbb{C}$, and $3$ can only divide $1\pm \sqrt{10}$ if $\dfrac{1\pm \sqrt{10}}{3} \in \mathbb{Z}[\sqrt{10}].$
Convergent subsequence of a given sequence
I think you meant that for every sequence $f_{n}$ such that $||f_{n}||\leq 1$, there exists a subsequence $f_{n_{k}}$ such that $I(f_{n_{k}})$ converges uniformly on $[0,1]$. To do this, consider the closed unit ball $D$ in $C[0,1]$, and show that $I(D)$ is pre-compact. In order to proceed this way, observe that for every continuous $f$, $I(f)$ is of class $C^1$; hence, the collection $\{I(f):f\in D\}$ is uniformly Lipschitz, with Lipschitz constant $1$. Hence, what we have is an equicontinuous family of maps in $C[0,1]$. Now, by the Arzela-Ascoli theorem, we are done.
Functions in modular arithmetic that are injective, surjective, or invertible.
You could shorten your answer by observing that, since $\mathbf Z/12\mathbf Z$ is a finite set, injective $\iff$ surjective $\iff$ bijective. Furthermore, in any ring $R$, the map $x\mapsto ax+b$ is injective (resp. surjective, bijective) if and only if $x\mapsto ax$ is, and: \begin{alignat}{2}&x\mapsto ax(+b)\;\text{is injective}&&\iff a\;\text{is a non-zero divisor},\\ &x\mapsto ax(+b)\;\text{is surjective}&&\iff a\in R^\times\;(\text{the set of units in}\;R). \end{alignat}
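For a concrete confirmation of the criterion on $\mathbf Z/12\mathbf Z$, here is a quick brute-force sketch in Python (the helper name is mine; in a finite ring, non-zero-divisors and units coincide, so one test suffices):

```python
from math import gcd

n = 12

def is_bijective(a, b, n):
    # x -> a*x + b on Z/nZ is bijective iff its image has n distinct elements
    return len({(a*x + b) % n for x in range(n)}) == n

for a in range(n):
    assert is_bijective(a, 5, n) == (gcd(a, n) == 1)
print("x -> a*x + b is bijective on Z/12Z exactly when gcd(a, 12) = 1")
```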
Finite extensions of $\mathbb Q_p$ are exactly completions of numberfields
Maybe Krasner’s Lemma is the most direct route to your first statement. It says, very roughly, that if you take an irreducible polynomial over $\Bbb Q_p$ and jiggle its coefficients just slightly, each root $\rho'$ of the new polynomial will be close to a unique root $\rho$ of the original, and in fact the two will generate the same field over $\Bbb Q_p$. So, if $K=\Bbb Q_p(\alpha)$, take its $\Bbb Q_p$-polynomial $f(X)$ and jiggle it to an $\,\bar f\in\Bbb Q[X]$. Then $\,\bar f$ has a root $\alpha'$ also generating $K$ over $\Bbb Q_p$, but $\alpha'$ is algebraic (over $\Bbb Q$). In case your extension $K$ is unramified over $\Bbb Q_p$, you don’t need Krasner or anything like him. For, such a $K$ can be gotten by adjoining a root of unity to $\Bbb Q_p$. To get your field that completes to give $K$, adjoin a root of unity of the same order to $\Bbb Q$. Of course the really interesting extensions are the ramified ones, so this argument doesn’t apply.
Symmetric Operator with Different dot products
Hint: Take your vector space to be $\mathbb{C}^2$ and pick $A$ to be represented by a Hermitian $2 \times 2$ matrix. Then $A$ is symmetric with respect to the standard dot product. Now pick a new inner product and try seeing if $A$ is symmetric.
Can we halve the real number
Suppose $A \subset \mathbb{R}$ is dense and complete. Then its closure by assumption is $\mathbb{R}$, i.e. its limit points together with $A$ make up all of $\mathbb{R}$. A limit point of $A$ is the limit of some convergent (in $\mathbb{R}$) sequence in $A$. A convergent sequence is Cauchy, and so this sequence is Cauchy in $A$. $A$ is complete, so the limit is in $A$. So the closure of $A$ is $A$, i.e. $A=\mathbb{R}$.
What is the motivation behind the, convex and concave closures of submodular functions?
Submodular functions act like concave functions in that they share the subadditivity property: a concave function $f$ with $f(0)=0$ is subadditive, and a submodular function $f$ with $f(\emptyset)=0$ is subadditive. We can't say a submodular function is concave, because a submodular function is by nature a set function. Also, having a definition of the concave and convex closures of a function is useful in several ways, and the same goes for a set function. In general we are used to minimizing convex functions; maximizing them is more difficult. You can see a concave function $g$ as the opposite of a convex function $f=-g$; it follows that maximizing a concave function is easier than minimizing one.
Roll two dice. What is the probability that one die shows exactly two more than the other die?
To get yourself started, you could draw a table. The rows could be one roll, and the columns could be the other roll. Then a checkmark shows where the rolls are "two away" from each other. \begin{array}{r|c|c|c|c|c|c} &1&2&3&4&5&6\\\hline 1&&&\checkmark&&&\\\hline 2&&&&\checkmark&&\\\hline 3&\checkmark&&&&\checkmark&\\\hline 4&&\checkmark&&&&\checkmark\\\hline 5&&&\checkmark&&&\\\hline 6&&&&\checkmark&& \end{array} Notice that, since all pairs are equally likely, we have an $8/36 = 2/9$ chance of being "two away".
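If you'd rather let the computer draw the table, a two-line enumeration (Python sketch) confirms the count:

```python
from itertools import product

# Enumerate all 36 equally likely ordered rolls of two dice
hits = [(d1, d2) for d1, d2 in product(range(1, 7), repeat=2) if abs(d1 - d2) == 2]
print(len(hits), "favourable outcomes out of 36")  # 8 out of 36 = 2/9
```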
probability counting order with no replacement
Self-explanatory answer: $$\binom{26}{2}\cdot\binom{26-2+9}{2}=171600$$
How can I determine what objects an object can "see" in 2d space?
Let $\vec{p}_i = (x_i, y_i)$ be the position of point $i$, and $\hat{n}_i = (X_i, Y_i)$ the unit vector it is facing along, with $\lVert\hat{n}_i\rVert = \sqrt{X_i^2 + Y_i^2} = 1$. In a right-handed coordinate system, with $\theta=0$ in the direction of the positive $x$ axis, $\theta=90^\circ \text{ and } -270^\circ$ in the direction of the positive $y$ axis, $\theta=\pm180^\circ$ in the direction of the negative $x$ axis, and $\theta=270^\circ \text{ and } -90^\circ$ in the direction of the negative $y$ axis, if point $i$ is facing in direction $\theta_i$, then $$\left\lbrace\begin{aligned} X_i &= \cos\theta_i \\ Y_i &= \sin\theta_i \\ \end{aligned} \right. \quad \iff \quad \theta_i = \operatorname{atan2}\left( Y_i, X_i \right) \tag{1}\label{G1}$$ where $\operatorname{atan2}$ is the two-parameter form of the arctangent, equivalent to $\arctan\left(\frac{Y_i}{X_i}\right)$ for positive $X_i$, but covering the full $360^\circ$ by taking into account the signs of both $X_i$ and $Y_i$. It is provided by most programming languages. To determine the angle between the direction point $i$ is facing and the direction point $k$ is at relative to point $i$, we need to find the angle $\phi_{i,k}$ between the vectors $\hat{n}_i$ and $(\vec{p}_k - \vec{p}_i)$. We can use the fact that the angle $\phi$ between two vectors $\vec{a} = (a_x, a_y)$ and $\vec{b} = (b_x, b_y)$ satisfies $$\cos(\phi) = \frac{\vec{a} \cdot \vec{b}}{\lVert\vec{a}\rVert \, \lVert\vec{b}\rVert} = \frac{a_x b_x + a_y b_y}{\sqrt{(a_x^2 + a_y^2) (b_x^2 + b_y^2)}}$$ In other words, $$\cos\left(\phi_{i,k}\right) = \frac{\hat{n}_i \cdot (\vec{p}_k - \vec{p}_i)}{\left\lVert \vec{p}_k - \vec{p}_i \right\rVert} = \frac{ X_i ( x_k - x_i ) + Y_i ( y_k - y_i ) }{\sqrt{ (x_k - x_i)^2 + (y_k - y_i)^2 }}\tag{2a}\label{G2a}$$ or equivalently $$\cos\left(\phi_{i,k}\right) = \frac{\cos(\theta_i) ( x_k - x_i ) + \sin(\theta_i) ( y_k - y_i ) }{\sqrt{ (x_k - x_i)^2 + (y_k - y_i)^2 }}\tag{2b}\label{G2b}$$ Because $\cos(\phi) = \cos(-\phi)$, you don't actually need to use the inverse cosine (arccos) function at all: just compare the right side to the cosine of half the visible sector angle (since the visible sector is centered in the direction the point is facing, the limits are $\pm$ half the sector angle) to find whether point $k$ is visible to point $i$.
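Putting the final comparison (2b) into code, here is a minimal Python sketch (the function name and parameters are illustrative, not from the question):

```python
import math

def can_see(xi, yi, theta_i, xk, yk, sector_angle):
    """Return True if point k lies within point i's visible sector.

    theta_i: direction point i is facing (radians);
    sector_angle: full opening angle of the visible sector (radians).
    """
    dx, dy = xk - xi, yk - yi
    dist = math.hypot(dx, dy)
    if dist == 0.0:
        return False  # same position; define as not visible
    cos_phi = (math.cos(theta_i) * dx + math.sin(theta_i) * dy) / dist
    # phi <= sector/2  <=>  cos(phi) >= cos(sector/2), since cos decreases on [0, pi]
    return cos_phi >= math.cos(sector_angle / 2.0)

# Example: point at the origin facing +x with a 90-degree field of view
print(can_see(0, 0, 0.0, 1, 0.2, math.radians(90)))   # True
print(can_see(0, 0, 0.0, -1, 0.0, math.radians(90)))  # False
```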
How to solve this 4 terms equation?
For the first one, if the product of three factors is $0$ then at least one of them is $0$; so you have to solve three (easy) equations. For the second one, write it as $x(x^3−x^2−x+1)=0$ and note that the expression in parentheses is $0$ when $x=1$, so you may factor $x^3−x^2−x+1 = (x-1)(x^2+Ax-1)$. Find the value of $A$ and solve the resulting second-degree equation.
There are enough Galois extensions?
A finite extension of fields is normal if and only if it is the splitting field extension of some polynomial. By the primitive element theorem, a finite separable extension $E/L$ is simple, i.e. $E=L(\alpha)$ for some $\alpha\in E$. Since $E/L$ is separable, the minimal polynomial of $\alpha$ over $L$ is separable. The Galois closure of $E/L$ is the splitting field of this polynomial, which is normal (since it is a splitting field) and separable (since it is the splitting field of a separable polynomial).
What does the Tate module of an elliptic curve tell us?
You should begin by carefully understanding, in the case that $E = \mathbb C/\Lambda$ is an elliptic curve over the complex numbers, the canonical isomorphism between the $\ell$-adic Tate module and $\mathbb Z_{\ell} \otimes_{\mathbb Z} \Lambda$ (or, what is the same, the inverse limit $\varprojlim_n \Lambda/\ell^n\Lambda$). Once you've understood that, you could read the proofs in Silverman classifying the possible endomorphism rings for elliptic curves. Silverman's proofs work in arbitrary characteristic, and use Tate modules. But you could try to imitate them for elliptic curves over $\mathbb C$, using the lattice $\Lambda$ directly as a tool. The arguments then become quite a bit simpler. Comparing these simple arguments with the Tate module arguments should further build up your intuition. The next step is to learn the proof of the Hasse--Weil theorem counting points on elliptic curves over finite fields, and the philosophy of thinking of it as an application of the Lefschetz fixed point theorem for the Frobenius endomorphism --- with the Tate module playing the role of $H_1$. If you recall that in the complex case the lattice $\Lambda$ is canonically identified with $H_1(E,\mathbb Z)$, this will add yet more intuition. Summary: Tate modules substitute for the lattice $\Lambda$, which plays such an important role in the study of elliptic curves over $\mathbb C$.
Interesting question regarding proof for a number to be composite.
Actually, Sophie Germain's Identity kills it: $n^4+4^n=((n+2^k)^2+4^k)((n-2^k)^2+4^k)$ when $n=2k+1$, and you can check that $n$ must be odd in your case.
Verify language that i found of this automata
A short regular expression for the language accepted by this DFA is $$ 1^*0^+(\varepsilon \cup 1\{0,1\}^*). $$
Assigning drivers to buses and routes
Since there is no direct restriction on which drivers can be on which routes, we can just solve two independent problems: Which drivers are assigned to which buses. Which buses are assigned to which routes. Each problem is a separate instance of finding a perfect matching in a bipartite graph, which we know can be checked efficiently. A more general problem, finding a 3-dimensional matching, is NP-complete. But here, our restrictions are that we have a set of allowed triples of driver, bus, and route. For example, we could have a triple $(C_1,B_1,R_1)$ and $(C_2,B_1,R_2)$, saying that driver $C_1$ can drive bus $B_1$ on route $R_1$, or another driver $C_2$ can drive the same bus $B_1$ on route $R_2$. This is strictly more general, because we can allow these two triples without allowing the triple $(C_1, B_1, R_2)$: whether or not bus $B_1$ can drive on route $R_2$ in this version of the problem can depend on who is driving it.
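Here is a sketch of one matching subproblem in Python, using Kuhn's augmenting-path algorithm (the input format is an assumption; run it once for drivers-to-buses and once for buses-to-routes):

```python
def max_bipartite_matching(adj, n_left, n_right):
    """Kuhn's augmenting-path algorithm.

    adj[u] lists the right-side vertices that left vertex u may be matched to.
    Returns the matching as a dict {left: right}.
    """
    match_right = [-1] * n_right  # match_right[v] = left vertex matched to v

    def try_augment(u, seen):
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                if match_right[v] == -1 or try_augment(match_right[v], seen):
                    match_right[v] = u
                    return True
        return False

    for u in range(n_left):
        try_augment(u, [False] * n_right)
    return {match_right[v]: v for v in range(n_right) if match_right[v] != -1}

# Example: drivers 0,1,2 and buses 0,1,2 with allowed pairs
allowed = [[0, 1], [0], [1, 2]]
print(max_bipartite_matching(allowed, 3, 3))  # a perfect matching if one exists
```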
ith term in a series after m operations, where an operation is: ith term = ith term + (i-1)th term
After $m$ operations, the new $A_i$ will be $$A_i + \binom{m}{1} A_{i-1} + \binom{m+1}{2} A_{i-2} + \cdots + \binom{m+i-2}{i-1} A_{1} + \binom{m+i-1}{i} A_{0}$$ Try it out on your example if you are doubtful, or see the sketch below.
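Here is a quick check in Python (a sketch; it assumes the operation is the in-place left-to-right pass $A_i \mathrel{+}= A_{i-1}$ for increasing $i$, i.e. one running-sum pass over the sequence):

```python
from math import comb

A = [3, 1, 4, 1, 5, 9]  # arbitrary test sequence; A[0] plays the role of A_0
m = 4

# One operation, done in place from left to right, is a running (prefix) sum:
# A[i] = A[i] + A[i-1] picks up everything accumulated before it
B = A[:]
for _ in range(m):
    for i in range(1, len(B)):
        B[i] += B[i - 1]

# Closed form from above: new A_i = sum_j C(m-1+j, j) * A_{i-j}
for i in range(len(A)):
    assert B[i] == sum(comb(m - 1 + j, j) * A[i - j] for j in range(i + 1))
print("closed form matches:", B)
```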
How many different types of sandwiches can a customer order?
Yes, just multiply the number of possibilities for each category, since the categories are all independent and there's no other stipulation. Your answer is correct: $(4)(5)(8)(3)=480$
computing the orbits for a group action
Hint The Frobenius is an involution on $\mathbb F_9$ with $3$ fixed points (the elements of $\mathbb F_3$). So what about the orbits of the non-fixed points?
"Englightening" Definition of Simple Lie Group
This is a definition made more for convenience than to capture some Platonic ideal of exactly the right concept. The point is more or less to guarantee that the Lie algebra is simple, which is really what one wants. Here the "atomic" theorem you want is the Levi decomposition, which says that finite-dimensional Lie algebras are built from solvable Lie algebras, which are in some sense all built from abelian Lie algebras, and semisimple Lie algebras, which are built from simple Lie algebras in a very strong sense: namely, they are finite direct sums of simple Lie algebras. It breaks the analogy to finite groups for $\mathbb{R}$ not to be regarded as a simple Lie algebra, but it just turns out that as you study Lie algebras more, $\mathbb{R}$ doesn't behave very much like the other simple Lie algebras.
Line Segment is an edge of the Convex hull
With a convex hull, all interior angles are less than $\pi$. This means if we enter a halfplane of $l(x,y)$, we must rejoin $l$ from the same halfplane, and so the other halfplane is out of bounds. If $l$ is the black line, the green line is okay, but the blue line isn't.
How do I calculate the Radius of convergence of this sum
Your sum is a power series: $$\sum_{n=0}^{\infty} n^3 (5x+10)^n = \sum_{n=0}^{\infty} n^3 5^n (x- (-2))^n$$ So the series is centered at $x_0 = -2$, and its radius of convergence is given by $$\frac{1}{\rho} = \overline{\lim_{n \to \infty}} \sqrt[n]{|a_n|}$$ (the Cauchy-Hadamard formula, where $\rho$ is the radius of convergence and the RHS is the limit superior). In your case, $a_n = n^3 5^n$, so $$\frac{1}{\rho} = 5 \Rightarrow \rho = \frac{1}{5}$$ That is to say that the series converges $\forall x \in (-2 - \frac{1}{5}, -2 + \frac{1}{5})$. You should also check what happens at the endpoints of the interval, but that was not required by the question I think :)
What is the relation between Kruskal tensor and CP decomposition?
Tensor Toolbox stores the output of a CP decomposition as a Kruskal tensor. The output format of a Kruskal tensor is $[\lambda; A, B, C]$, where each element of $\lambda$ is the value obtained when a particular column of $A$, $B$, $C$ is normalized. Let $\lambda_{a_i}$ be the value obtained by normalizing the $i$th column of $A$, and so on; then (my guess) $\lambda_i = \lambda_{a_i} \cdot \lambda_{b_i} \cdot \lambda_{c_i}$. Therefore $\lambda$ is a vector of size $R$. A particular column of $A$ is in an outer product with the corresponding columns of $B$ and $C$, $(a_i \circ b_i \circ c_i)$; this outer product is multiplied by $\lambda_i$ to give a rank-1 tensor. This format of writing a tensor is known as a Kruskal tensor.
Prove $\vec i^i = \vec i_i$ in rectangular coordinates
I see this from a geometric perspective, but there may be a more general way to see it! You see, Euclidean space is actually a Riemannian space with $g=I$ or $g_{ij}=\delta_{ij}$ in Cartesian coordinates. What you seem to be asking about is the musical isomorphism and raising and lowering indices. Here we are talking about vectors and covectors, but more generally one can talk about contravariant and covariant tensors. In any case, when a metric is available, one can lower an index by contracting with the metric. So, in this case: $$ v_j = g_{ij}{v}^i = \delta_{ij}v^i $$ where notice that the RHS is just the $j$th component of the vector. Or, in matrix-vector notation: $$ v = Iu = u $$ where $v$ is the dual of $u$.
Harmonic function as real part of any analytic function
Unless I'm missing something, your question makes little sense to me: $\phi\left(\frac{z+1}{2},\frac{z-1}{2i}\right)$ is not defined, since $\phi(x,y)$ only exists for $x$ and $y$ real. Your candidate function $f$ has constant imaginary part, which is impossible for a non-constant holomorphic function. It's a classical fact or exercise that a harmonic function is (at least locally) the real part of a holomorphic function, and that this holomorphic function is unique up to a (purely imaginary) additive constant (and this function looks nothing like what you wrote). You should try to do this exercise (use the Cauchy-Riemann equations); it would be instructive. As for your example, I can explain it to you: recall the principal value logarithm is $\log z = \ln |z| + i \operatorname{Arg}(z)$ (for $z \notin \mathbb{R}_-$), where $\operatorname{Arg}(z)$ is the principal value argument of $z$, i.e. in $(-\pi, \pi]$. In your problem, I'm assuming you are working in the half-plane $H = \{z\in \mathbb{C}, ~\operatorname{Re}(z)>0\}$. For $z \in H$, it's not hard to check that $\operatorname{Arg}(z) = \tan^{-1}(y/x)$, where $z = x+iy$. It follows that $\operatorname{Re}(-i\log(z)) = \operatorname{Arg}(z) = \tan^{-1}(y/x)$.
The complex equation
HINT: $$2y=-\sqrt{x^2+y^2}$$ But $\displaystyle\sqrt{x^2+y^2}\ge0\implies y\le0$
Direct sum of $L^2$ spaces
Map $f \in L^{2}(X, \Omega,\mu)$ to $\bigoplus f_i$ where $f_i$ is the restriction of $f$ to $X_i$.
path homotopy inverses
We may define $i\colon I \to I$ to be the identity map, $\bar i\colon I \to I$ by $\bar i(s) = 1-s$, and $e\colon I \to I$ by $e(s) = 0$ for all $s\in I$. Then we may define $H\colon I\times I \to I$ by $H(s,t) = (1-t)(i*\bar i)(s)+te(s)$ for all $(s,t)\in I\times I$. Since $I$ is a convex subset of the real numbers, $H$ is well defined. Then you may check that $f\circ H$ is a path homotopy between $f*\bar f$ and $e_{x_0}$.
Equation of ellipsoid surface obtained by revolving an ellipse
"If the ellipse $x^{2} + x^{2/9}=1$": this can't be an ellipse, can it? For one thing, it's an equation with $x$ and nothing else. Clearly, a typo: one of the two $x$s should be $z$. Even then, it's not an ellipse, unless we evict the $/9$ from the exponent. So, the correct equation is $$x^{2} + \frac19 z^{2 }=1$$ Now if you do what you did, the result is what you expect.
Finding the equation for a tangent line at a certain point
The equation of the tangent line to $f(x)=(x^3-25x)^8$ at $(-5,0)$ is $$y−y_0=f'(x_0)\,(x−x_0)$$ where $y_0=0,\ x_0=-5$. $$f'(x)=8(x^3-25x)^7(3x^2-25)\Rightarrow f'(-5)=0$$ So the equation of the tangent line is $$y-0=0(x+5)\Rightarrow y=0$$
Polynomials are dense in $L^2$
Hopefully I am not saying something stupid :) Let $\epsilon >0$. Then, there exists some $N$ so that $$\|f(x)- \sum_{n=-N}^N \hat{f}(n)e^{inx} \|_2 < \frac{\epsilon}{2}$$ Now for each $-N \leq n \leq N$ you can find some polynomial $P_n$ so that $$\| \hat{f}(n) e^{inx} - \hat{f}(n)P_n \|_2 < \frac{\epsilon}{2(2N+1)}$$ Now prove that $$P= \sum_{n=-N}^N \hat{f}(n) P_n$$ works.
What is known about totally positive matrices?
Here's one answer (taken from Fomin and Zelevinsky's review paper, which you can download from here (ref. [2]): An initial minor is a solid minor (consecutive rows and columns) bordering either the left or the top edge of the matrix. Each matrix entry is the lower right corner of exactly one initial minor, so there are $n^2$ of them (as opposed to $\binom{2n}{n}-1$ if you count all minors). According to Theorem 9 in the paper, a square matrix is totally positive iff all its initial minors are positive, and it can be shown that there is no criterion which can get away with testing less than $n^2$ minors. And then there are of course many other things known about totally positive matrices; for example, the eigenvalues are positive and simple.
Counting positive integral solutions to an equation(2)
You have reduced the problem of determining the number of solutions in the positive integers of the equation $$y_1 + y_2 + y_3 + y_4 + y_5 + y_6 = 33 \tag{1}$$ subject to the restrictions that $y_i \leq 8$ for each $i = 1, 2, 3, 4, 5, 6$ to the equivalent problem of determining the number of solutions in the nonnegative integers of the equation $$z_1 + z_2 + z_3 + z_4 + z_5 + z_6 = 27 \tag{2}$$ subject to the restriction $z_i \leq 7$ for each $i = 1, 2, 3, 4, 5, 6$. Since $27$ is closer to $42 = 6 \cdot 7$ than to $0 = 6 \cdot 0$, we can simplify the problem by setting $x_i = 7 - z_i$ for $i = 1, 2, 3, 4, 5, 6$. Making these substitutions in equation 2 yields \begin{align*} z_1 + z_2 + z_3 + z_4 + z_5 + z_6 & = 27\\ 7 - x_1 + 7 - x_2 + 7 - x_3 + 7 - x_4 + 7 - x_5 + 7 - x_6 & = 27\\ -x_1 - x_2 - x_3 - x_4 - x_5 - x_6 & = -15\\ x_1 + x_2 + x_3 + x_4 + x_5 + x_6 & = 15 \tag{3} \end{align*} Since $x_i = 7 - z_i$ and $0 \leq z_i \leq 7$, equation 3 is an equation in the nonnegative integers with the restriction that $x_i \leq 7$ for $i = 1, 2, 3, 4, 5, 6$. If there were no restrictions, the number of solutions of equation 3 would equal the number of ways we could insert five addition signs into a row of fifteen ones, which can be done in $$\binom{15 + 5}{5} = \binom{20}{5}$$ ways, since a particular solution is determined by selecting which five of the twenty symbols (five addition signs and fifteen ones) will be addition signs. From these, we must remove those solutions in which one or more of the $x_i$'s exceed $7$. Since $2 \cdot 8 = 16 > 15$, at most one of the $x_i$'s can exceed $7$ at a time, so no further Inclusion-Exclusion corrections will be needed. Suppose $x_1 > 7$. Let $t_1 = x_1 - 8$. Then $t_1$ is a nonnegative integer. Substituting $t_1 + 8$ for $x_1$ in equation 3 yields \begin{align*} t_1 + 8 + x_2 + x_3 + x_4 + x_5 + x_6 & = 15\\ t_1 + x_2 + x_3 + x_4 + x_5 + x_6 & = 7 \tag{4} \end{align*} Equation 4 is an equation in the nonnegative integers with $$\binom{7 + 5}{5} = \binom{12}{5}$$ solutions. By the same argument, there are $\binom{12}{5}$ solutions in which $x_i > 7$, for each $i = 1, 2, 3, 4, 5, 6$. Hence, there are $$\binom{20}{5} - \binom{6}{1}\binom{12}{5}$$ solutions to equation 1 that satisfy the given restrictions.
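If you want to double-check the count, a brute-force enumeration (a short Python sketch) agrees with the formula:

```python
from itertools import product
from math import comb

# Brute force: positive integers y_1..y_6 with y_i <= 8 summing to 33
brute = sum(1 for y in product(range(1, 9), repeat=6) if sum(y) == 33)

# Inclusion-exclusion result derived above
formula = comb(20, 5) - comb(6, 1) * comb(12, 5)

print(brute, formula)  # both 10752
```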
Given two polytopes $P$ and $Q$, show that $(P^* \times Q^*)^* = P \oplus Q $
One way to see this is using the duality between facets and vertices. A polytope $P\subset\Bbb R^d$ containing the origin $0$ is given by either vertices $v_1,\dots,v_n$ such that $ P = \operatorname{conv}(v_1,\dots,v_n) $, or facet normals $a_1,\dots,a_m$ such that $P = \left\{\, x \in \Bbb R^d \,\middle|\, \langle x,a_i\rangle \le 1 \text{ for $i=1,\dots,m$}\,\right\}$. The dual polytope $P^*$ then also contains the origin in its relative interior and has vertices $a_1,\dots,a_m$ and facet normals $v_1,\dots, v_n$. Given two such polytopes $P$ and $Q$ what can we say about the descriptions of $P\times Q$ and $P\oplus Q$ in terms of the vertices and facets of $P$ and $Q$? The vertices of $P\oplus Q$ are $(v,0)$ for vertices $v$ of $P$ and $(0,w)$ for vertices $w$ of $Q$, The facet normals of $P\times Q$ are $(a,0)$ for facet normals $a$ of $P$ and $(0,b)$ for facet normals $b$ of $Q$. Now what are the vertices of $(P^*\times Q^*)^*$? They are the facet normals of $P^*\times Q^*$, so of the form $(v,0)$ and $(0,w)$, where $v$ and $w$ are facet normals of $P^*$ and $Q^*$, respectively. However this just means $v$ and $w$ are vertices of $P$ and $Q$, respectively. Therefore, the vertices of $(P^*\times Q^*)^*$ are exactly those of $P\oplus Q$.
Sum identity using Stirling numbers of the second kind
Here is a combinatorial proof; it can probably be turned into an algebraic proof (in the sense of algebraic manipulation of sums) with some effort. The combinatorics will tell you what needs to be reindexed to get things to work out. This is an exponential generating function, so a natural question is what sequence it's the exponential generating function of. The answer, which can be deduced from a suitable version of the exponential formula, is: The coefficient of $\frac{x^n}{n!}$ in $e^{xe^x}$ counts the number of ways to partition a set $N$ with $n$ elements into nonempty subsets, then distinguish an element of each subset. The LHS counts pairs consisting of a subset $K$ of $N$ and a function $K \to N \setminus K$. So we want to show this is in bijection with the above. The bijection turns out to be that $N \setminus K$ is the set of distinguished elements, and the preimages of the function $K \to N \setminus K$ give the rest of the partitions the distinguished elements are in. The RHS counts tuples consisting of a subset $K$ of $N$, a subset $I$ of $K$, a partition of $K \setminus I$ into $|I|$ nonempty subsets, and a bijection between $I$ and the partitions of $K \setminus I$. Again we want to show this is in bijection with the above. Here the bijection turns out to be that $N \setminus K$ is the set of distinguished elements such that the partition they're in has no other elements. All the other partitions have at least one other element, and the distinguished elements for those correspond to $I$, while the rest of the partitions correspond to the nonempty partitions of $K \setminus I$.
Convergence of finite difference scheme for PDE $u_t+au_{xxx}=f$
For the consistency part, expand $v_m^{n+1}$ and $v_{m+M}^{n}$ as Taylor series about $v_{m}^{n} \simeq u(mh,nk)$ to be inserted in the scheme. As discussed in Definition 1.4.2 of the book by Strikwerda, the source $f$ does not matter in the definition of consistency. Thus, $$ \frac{v_{m}^{n+1} - v_{m}^{n}}{k} \simeq (u_t)_m^n + O(h,k) $$ $$ \frac{v_{m+2}^{n} - 3 v_{m+1}^{n} + 3 v_{m}^{n} - v_{m-1}^{n}}{h^3} \simeq (u_{xxx})_m^n + O(h,k) $$ proves that the scheme is consistent. As usual, the Von Neumann analysis consists in a Fourier transformation of the scheme (see Sec. 2.2 of the book by Strikwerda), and the source $f$ can be ignored. Under the periodicity assumption, we have $$ v_m^{n} = \frac{1}{\sqrt{2\pi}}\int_{-\pi/h}^{\pi/h} \text{e}^{\text{i}mh\xi}\, \hat{v}^n(\xi) \, d\xi $$ to be injected in the scheme. This step defines the amplification factor $g(\theta)$ to be determined, which is defined in such a way that $\hat{v}^{n+1}(\xi) = g(h\xi) \, \hat{v}^{n}(\xi).$ Neumann stability imposes $|g(\theta)| \leq 1$.
How to find the limit of $\frac{\ln(n+1)}{\sqrt{n}}$ as $n\to\infty$?
You need to use L'Hopital's rule, twice. What this rule states is that, basically, if you are trying to take the limit of something that looks like $\frac{f(x)}{g(x)}$, you can keep on taking the derivative of the numerator and denominator until you get a simple form where the limit is obvious. Note: L'Hopital's rule doesn't always work. We have: $$\lim_{n\to \infty} \frac{\ln(n+1)}{\sqrt{n}} $$ We can keep on taking the derivative of the numerator and denominator to simplify this into a form where the limit is obvious: $$= \lim_{n\to \infty} \frac{\frac{1}{n+1}}{\frac{1}{2\sqrt{n}}} $$ $$= \lim_{n\to \infty} \frac{(n+1)^{-1}}{(2\sqrt{n})^{-1}} $$ $$= \lim_{n\to \infty} \frac{2\sqrt{n}}{n+1} $$ Hrm - still not clear. Let's apply L'Hopital's rule one more time: $$= \lim_{n\to \infty} \frac{\frac{1}{\sqrt{n}}}{1} $$ $$= \lim_{n\to \infty} \frac{1}{\sqrt{n}} $$ The limit should now be obvious. It is now of the form $\frac{1}{\infty}$, which equals zero. $$\lim_{n\to \infty} \frac{\ln(n+1)}{\sqrt{n}} = 0 $$
Do we have the following notation?
By definition $E[1_B|F]$ must be an $F$ measurable r.v. with $\int_A E[1_B|F]dP=\int_A 1_BdP$ for all $A\in F$. The r.h.s is straightforward: $\int_A 1_BdP = P(A\cap B)$. Thus, $E[1_B|F]$ is zero whenever $B\cap A=\emptyset$. There's some collection of sets $A\in F$ with $A\cap B\neq \emptyset$ so set $E[1_B|F]=P(B\cap A)/ P(A)$, whenever $P(A)\neq 0$ and zero otherwise (check this satisfies the definition!). Of course $P(B\cap A)/P(A):=P(B|A),$ which gives you back the original, classical definition of conditional probability. Hopefully you can see the advantage of this more abstract definition as it assigns $P(B|A)=0$ whenever $P(A)=0$.
Relation between one-sheet and two-sheet hyperboloid, and hyperbolic space
Embedded in standard Minkowski space $\mathbf{R}^{2, 1}$, each sheet of the two-sheeted hyperboloid acquires a Riemannian metric of constant negative curvature. Qualitatively, the induced metric on the upper sheet $H$ is Riemannian because each tangent plane is spacelike. To show the curvature of $H$ is constant and negative, it may be easiest to show that the identity component of the orthogonal group $O(2, 1)$ acts transitively (and isometrically) on $H$, and to show the curvature of $H$ is negative at $(0, 0, 1)$. For the latter, stereographic projection from $(0, 0, -1)$ to the open unit disk $\{(x, y, z) : x^{2} + y^{2} < 1, z = 0\}$ defines an isometry with the Poincaré metric, see for example Problem in understanding models of hyperbolic geometry.
Is it possible to get the CNF out of the DNF of this expression
First, the CNF of any expression is equivalent to that expression, and the same holds for the DNF, meaning that the CNF and DNF should always be equivalent to each other as well. So, if the DNF is $A \lor B$, then the CNF cannot be $\neg A \land \neg B$, since that is the negation of it, i.e. not something that is equivalent. OK, so then what is the CNF? Well, the CNF is just $A \lor B$ as well. Think of it as a conjunction that consists of a single conjunct. And, since that conjunct is a disjunction of literals, it fits the definition of CNF just fine.
The difference between well order and total order?
A well-ordering, as you say, is a linear ordering where every nonempty set has a least element. Every well-ordering is a linear ordering by definition$^*$, but the converse is not true - the following are examples of linear orders which are not well-ordered: $\mathbb{Z}$ ($\mathbb{Z}$ itself has no least element). $\{{1\over n+1}: n\in\mathbb{N}\}\cup\{0\}$ (while the whole set does have a least element, the nonempty proper subset $\{{1\over n+1}: n\in\mathbb{N}\}$ does not). $\mathbb{Q}$, $\mathbb{R}$, $[0, 1]$, ... There are lots. The simplest examples of well-orderings are the finite linear orders and $\mathbb{N}$ itself (the fact that $\mathbb{N}$ is well-ordered is the thing that makes proof by induction work!). However, there are bigger well-orderings; for example, "$\mathbb{N}+\mathbb{N}$," where "$+$" denotes the sum of linear orders (put the first "after" the second). Concretely, an example of a linear ordering of type $\mathbb{N}+\mathbb{N}$ would be $$\{1-{1\over n+1}: n\in\mathbb{N}\}\cup\{2-{1\over n+1}: n\in\mathbb{N}\}$$ with the usual ordering. Indeed, there are arbitrarily large well-orderings, even though they get increasingly difficult to visualize. For any infinite cardinal $\kappa$, the set of isomorphism types of well-orderings$^{**}$ of cardinality $\le \kappa$ is itself well-ordered by "embeds into," and has cardinality $>\kappa$ (in fact, $\kappa^+$). For a concrete example, there are uncountable well-ordered sets! Note that this does not rely on the axiom of choice. $^*$Actually, we can say a bit more: any antisymmetric relation $R$ on a set $X$ satisfying $$\mbox{For all nonempty $Y\subseteq X$, there is some $y\in Y$ with $yRz$ for all other $z\in Y$}$$ is actually a well-ordering (that is, the "linear ordering" requirement is superfluous): we already have antisymmetry, so now just show trichotomy and transitivity: For trichotomy, given $x\not=y$ think about the two-element set $\{x, y\}$ ... For transitivity, suppose $xRy$ and $yRz$ (note: by antisymmetry this means $x\not=z$) but $x\not Rz$. By trichotomy, we have $zRx$. But now think about the three-element set $\{x, y, z\}$ ... $^{**}$There's a bit of an issue here actually: an isomorphism type is a proper class, so we can't form the set of isomorphism types of well-orderings of a given cardinality. There are various ways to get around this, and it's at this point that the von Neumann ordinals should be introduced. But this should really be a side issue.
How do I know the following multivariable set (domain) is convex?
This domain is not convex. Suppose $\lambda=\frac{1}{2}$ with $$x=(\alpha,B_s,B_r,B_d)=(1,1,1,0)$$ $$y=(\alpha',B_s',B_r',B_d')=(0,0,0,0)$$ Then $x$ and $y$ both satisfy all the inequalities, but $\lambda x + (1-\lambda) y=(\frac{1}{2},\frac{1}{2},\frac{1}{2},0)$ does not satisfy 2) because $$\frac{1}{2} \nleq \frac{1}{2}\left(\frac{1}{2}+0\right).$$
Subgroup of maximal order is normal
A normal subgroup of prime index is maximal, but it need not be "of largest order." For example, in the cyclic group of order $6$ generated by $x$, $\langle x^3\rangle$ is of prime index (namely, index $3$), but is not of largest order (the largest order for a proper normal subgroup is $3$, given by $\langle x^2\rangle$). The mistake here is that even though a normal subgroup of prime index cannot be properly contained in a proper subgroup, it can have smaller size than a subgroup that does not contain it. Also: you cannot assume that you can do a quotient modulo a subgroup of index $p$ unless you first prove that the subgroup is normal. So using the quotient would be circular. As for the proof of this standard problem: consider the action of $G$ on the left cosets of $H$ given by $g\cdot xH =gxH$. This induces a group homomorphism from $G$ to $S_{G/H}$, the permutation group of the cosets, with kernel contained in $H$.
Hölder Condition for Fourier Series
The lacunary series is kind of self-similar. Namely, $$ f(x) = e^{ix} + \sum_{k=1}^\infty 2^{-k\alpha}e^{i2^{k }x} = e^{ix} + \sum_{k=0}^\infty 2^{-(k+1)\alpha}e^{i2^{k+1 }x} \\ =e^{ix} + 2^{-\alpha} f(2x) \tag{1} $$ Take distinct $x,y\in \mathbb R$. If $|x-y|\ge 1$, we have the Hölder bound simply because $f$ is bounded. Suppose $|x-y|<1$. Apply (1) to get $$ |f(x)-f(y)| \le |x-y| + 2^{-\alpha} | f(2x) -f(2y)| \tag{2}$$ Iterate this $n$ times, where $n$ is the smallest integer such that $2^n|x-y|\ge 1$. The result is $$ |f(x)-f(y)| \le |x-y|\sum_{k=0}^{n-1} 2^{(1-\alpha) k} + 2^{-\alpha n} | f(2^n x) -f(2^n y)| \tag{3} $$ The geometric sum $ \sum_{k=0}^{n-1} 2^{(1-\alpha) k}$ is dominated by its largest term. The difference $| f(2^n x) -f(2^n y)|$ is bounded because $f$ is. Thus, $$ |f(x)-f(y)| \le C ( |x-y|\, 2^{(1-\alpha) n} + 2^{-\alpha n} ) \tag{4} $$ It remains to note that $2^{-n}$ is comparable to $|x-y|$, and the desired bound $$ |f(x)-f(y)| \le C |x-y|^\alpha \tag{5} $$ follows.
Relation between Exterior derivative of a section and a Connection on the Vector bundle
A connection on $E$ determines a splitting of the tangent bundle $TE = H\oplus V$, where for each $q\in E$, $V_q$ is the vertical tangent space at $\boldsymbol q$ (the tangent space to the fiber $E_q$ through $q$) and $H_q$ is a complementary subspace called the horizontal tangent space at $\boldsymbol q$. The vertical space is defined independently of any choice of connection, while the horizontal space is defined in terms of the connection as follows: Given any smooth curve $\gamma:I\to M$ (where $I\subseteq\mathbb R$ is an interval), a section of $\boldsymbol E$ along $\boldsymbol \gamma$ is a lift of $\gamma$ to $E$, i.e., a curve $\sigma\colon I\to E$ such that $p\circ \sigma = \gamma$, which is to say that $\sigma(t) \in E_{\gamma(t)}$ for each $t\in I$. A section $\sigma$ along $\gamma$ is a horizontal lift if it is parallel along $\gamma$: $D_t \sigma(t) \equiv 0$ (where $D_t$ is the covariant differentiation along $\gamma$ determined by $\nabla$). A vector $w\in T_qE$ is said to be horizontal (with respect to $\nabla$) if it is the velocity vector of a horizontal lift of some curve. The decomposition $TE = V\oplus H$ determines a linear bundle map $\pi_V\colon TE\to V$ called vertical projection, which for each $q\in E$ is just the projection from $T_qE$ to $V_q$ with kernel $H_q$. Given a section $s$ of $E$ and a vector field $X\in \mathfrak X(M)$, the relation between $ds(X)$ and $\nabla_X s$ is $$ \nabla_X s = \pi_V(ds(X)). $$
Tight bounds for Bowers array notation
For 3-entry arrays the approximation is not very good. It gives $a \uparrow^c b \approx f_{c-1}^b(a)$. The actual result is closer to $a \uparrow^c b \approx f_{c}^b(a)$. I will actually prove something this time. Let $a<65536$. We take $b\geq4$. Note that $f_{2}(f_2(b)) > f_{2}(16b) = 65536^b\cdot 16b > \{a,b\}$. Therefore $f_{3}(f_3(b)) > f_{3}(2b) > f_{2}^{2b}(4) > \{a,b,2\}$. By induction, $f_{c+1}(f_{c+1}(b)) > f_{c+1}(2b) > f_{c}^{2b}(4) > \{a,b,c\}$. Then we have $f_{\omega}(f_{\omega}(4)) >f_{a+1}(2a) > \{a,2,1,2\}=\{a,a,a\}$. $\{a,3,1,2\}=\{a,a,\{a,a,a\}\}<\{a,a,f_{\omega}(f_{\omega}(4))\}<f_{f_{\omega}(f_{\omega}(4))+1}(2a)<f_{f_{\omega}(f_{\omega}(f_{\omega}(4)))}(2a)<f_{f_{\omega}(f_{\omega}(f_{\omega}(4)))}(f_{\omega}(f_{\omega}(f_{\omega}(4))))=f_{\omega}(f_{\omega}(f_{\omega}(f_{\omega}(4))))$. By induction again, $\{a,b,1,2\}<f_{\omega+1}(2b)$ for $b\geq2$. By induction again, $\{a,b,c,2\}<f_{\omega+c}(2b)$ for $b\geq2$. $\{a,2,1,3\}=\{a,a,a,2\}<f_{\omega+a}(2a)<f_{\omega2}(f_{\omega2}(4))$. By triple induction, we get $\{a,b,c,d\}<f_{\omega(d-1)+c}(2b)$. $\{a,a,a,a\}<f_{\omega3+a}(a)<f_{\omega^2}(f_{\omega^2}(4))$. By induction, $\{a,b,1,1,2\} < f_{\omega^2+1}(b)$. By induction, $\{a,b,c,1,2\} < f_{\omega^2+c}(b)$. $\{a,2,1,2,2\}=\{a,a,a,1,2\}< f_{\omega^2+a}(a)<f_{\omega^2+\omega}(f_{\omega^2+\omega}(4))$. $\{a,b,1,2,2\} < f_{\omega^2+\omega+1}(2b)$. $\{a,b,c,d,2\} < f_{\omega^2+\omega(d-1)+c}(2b)$. $\{a,b,c,d,e\} < f_{\omega^2(e-1)+\omega(d-1)+c}(2b)$. Now, realize that $$f_{\omega^2(e-1)+\omega(d-1)+c-1}^{2b}(a) \approx f_{\omega^2(e-1)+\omega(d-1)+c-1}^{2b}(2b) =f_{\omega^2(e-1)+\omega(d-1)+c}(2b)$$ Also note that large overestimations have been made; first expanding and then using the proper upper bound actually gives a much better bound. In general the bound is a good bound for $n\geq3$ and more than 3 entries. Note that intuitively, we can replace every $2b$ with $b-1$ to get a lower bound. However we run into some problems when formally proving it. E.g., estimating $f_3(f_3(b-1)-1)$ formally is very hard.
Converting Linear Combination to GCD
Hint $\,e:=c/d\,$ times $\,\gcd(a,b)=d \Rightarrow \bbox[5px,border:1px solid #c00]{\gcd(ae,be) = c}\,\ $ Note $\,d\mid a,b\,\Rightarrow\,d\mid c=ax\!+\!by$
Quick question on multivariate differentiation
The usual interpretation is $$ \frac{\partial ^2f}{\partial x \partial y} = \frac{\partial}{\partial x} \frac{\partial}{\partial y} f , $$ so $y$ first, then $x$. (Just to give a reference to a standard textbook, this is how it's defined in Calculus: A Complete Course by Adams.)
Find non trivial homomorphism $\mathbb{Z}/q\mathbb{Z} \rightarrow \text{Aut}(\mathbb{Z}/p\mathbb{Z})$
The map in your problem could represent the morphism induced by a semidirect product of a $p$-Sylow and a $q$-Sylow, $\mathbb Z_p\rtimes_{\psi}\mathbb Z_q$. Consider a homomorphism $\psi: \mathbb Z_q\to \operatorname{Aut}(\mathbb Z_p)$; you have that $|\psi(\mathbb Z_q)|$ divides both $|\mathbb Z_q|$ and $|\operatorname{Aut}(\mathbb Z_p)|$, where $|\operatorname{Aut}(\mathbb Z_p)|=\varphi(p)=p-1$ (for example the two groups could be $\mathbb Z_2$ and $\mathbb Z_3$). Note: since $q$ divides $|\operatorname{Aut}(\mathbb Z_p)|$, which is a cyclic group, you could take the homomorphism that maps a generator of $\mathbb Z_q$ to a generator of the (unique) $q$-subgroup of $\operatorname{Aut}(\mathbb Z_p)$ (in a cyclic group $P$ of prime order, every non-identity element has the same order as the group and generates it). If $q\mid p-1$, we can arrange $|\psi(\mathbb Z_q)|=q$, and by the first isomorphism theorem we know that $$|\psi(\mathbb Z_q)|=q=[\mathbb Z_q:\ker(\psi)]\implies|\ker(\psi)|=\dfrac{|\mathbb Z_q|}{q}=1.$$ Since the kernel is trivial while the group is not, this shows that $$\exists\,\psi\in \operatorname{Hom}(\mathbb Z_q,\operatorname{Aut}(\mathbb Z_p))\text{ which is non-trivial, and it is even injective}.$$
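To make this concrete, here is a small Python sketch with the illustrative choice $p=7$, $q=3$ (so $q \mid p-1$):

```python
# Aut(Z_7) is (Z/7)^* of order 6, and q = 3 divides 6
p, q = 7, 3

# Find an element of order q in (Z/p)^*; since q is prime, any a != 1 with
# a^q = 1 (mod p) generates the unique subgroup of order q
g = next(a for a in range(2, p) if pow(a, q, p) == 1)
assert pow(g, q, p) == 1 and g != 1

# psi sends a generator of Z_q to the automorphism x -> g*x (mod p);
# the resulting semidirect product Z_7 x| Z_3 is a nonabelian group of order 21
print("psi(1) = multiplication by", g, "mod", p)
```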
Find $\dim \operatorname{Ker}T$
Your solution is correct. Here is an alternative: for any linear transformation, the dimension of the domain is equal to the dimension of the image plus the dimension of the kernel. Your domain has dimension 4, and it should be easy to see that your map is surjective. So the image is one-dimensional, leaving three dimensions for the kernel. Google "rank-nullity theorem" for details, or "isomorphism theorem" for a more general version.
How to find singularities of $\cos(\frac{1}{z})\cdot\frac{\sin(z-1)}{z^2+1}$?
Both $\cos(1/z)$ and $\sin(1-z)$ are analytic and non-zero on small enough neighborhoods of $i$ and $-i$. This means that $\frac{\cos(1/z)\sin(1-z)}{(z-i)(z+i)}$ has the same singularity types at $z = \pm i$ as $\frac{1}{(z-i)(z+i)}$.
Prove existence of subsequence in finite sequences
The factors of $n$ do not matter. Just show that there is a monotone subsequence of $(a_n)$ of length $\lceil \sqrt n \rceil$ using the theorem. Then select the corresponding subsequence of the $b$'s and do the same.
non uniform convergence of integrable functions
Use the inequality $$\left|\int_A |f_n|\mu (dx) -\int_A |f| \mu (dx)\right|\leq \int_A |f_n -f|\mu (dx) $$
Judgment-level negation $\nvdash$
I assume that $x$ does not occur free in $\Gamma$. Yes, it is equivalent to say $\Gamma \not\vdash \lnot \exists x A(x)$ and to say that for some $x$, $\Gamma \not\vdash \lnot A(x)$. Both of them mean that there is a model of $\Gamma$ and $\exists x A(x)$. Roughly, it means that it is possible to make $\Gamma$ and $\exists x A(x)$ true simultaneously. Indeed, $\Gamma \not\vdash \lnot \exists x A(x)$ means that $\lnot \exists x A(x)$ is not provable from the hypotheses $\Gamma$, which amounts to saying that there is a model of $\Gamma$ and $\exists x A(x)$. Under the assumption that $x$ is not free in $\Gamma$, $\Gamma \vdash \lnot A(x)$ means that $\lnot A(x)$ is provable from the hypotheses $\Gamma$, for any $x$. It amounts to saying that $\Gamma \vdash \forall x \lnot A(x)$. Therefore, saying that $\Gamma \not\vdash \lnot A(x)$ for some $x$ (i.e. negating that $\Gamma \vdash \lnot A(x)$ for any $x$) means that $\Gamma \not\vdash \forall x \lnot A(x)$, which amounts to saying that there is a model of $\Gamma$ and $\lnot \forall x \lnot A(x)$, i.e. there is a model of $\Gamma$ and $\exists x A(x)$.
Find the maximum length of a line segment enclosed in a given area
$x = u+v$, $y = v$, $u^2+v^2 \le 1$. Since $u^2+v^2 \le 1$, the point $(u,v)$ lies in the unit disk, therefore: $-1 \leq u \leq 1$, $-1 \leq v \leq 1$, $-\sqrt{2} \leq u+v \leq \sqrt{2}$. Therefore: $-\sqrt{2} \leq x \leq \sqrt{2}$ and $-1 \leq y \leq 1$, so the region is contained in this rectangle. The longest segment within the rectangle is its diagonal, of course, which gives the upper bound $$\sqrt{(2\sqrt{2})^2+2^2} = \sqrt{12}$$ for the length of any segment in the region.
complex number quotient and proof
The proposition does not hold true. Counterexample: $w=1+i\,$, $\;k=2+i$ satisfy the condition $\,\operatorname{Im}(w) = \operatorname{Im}(k) = 1 \ne 0\,$, but the roots of the equation are $\,z_1 = \cfrac{1+i}{2}\,$ and $\,z_2=1-i\,$ neither of which has modulus $1$.
Conservation of norms by the 2-d euler vorticity equation
Well, you can do that by standard energy estimates. Consider the equation $$ \partial_t w+u\cdot\nabla w=0, $$ where $u$ is divergence free. Then you have the following estimate: $$ \frac{d}{dt}\frac{1}{1+p}\int w^{p+1}=\int w^p\partial_tw=-\int u\cdot \nabla w\, w^p=-\frac{1}{1+p}\int u\cdot \nabla(w^{p+1})=\frac{1}{1+p}\int (\nabla\cdot u)\, w^{p+1}. $$ The last integral vanishes due to the divergence-free condition. Consequently, integrating in time, we have $$ \|w(t)\|_{L^{p+1}}=\|w(0)\|_{L^{p+1}} $$ To recover the $L^\infty$ case, just take the appropriate limit in $p$. Does this help?
Conceptual difference between strong and weak formulations
If we say a solution is weak/strong/classical/viscous, the following aspects (or more) are concerned: 1. How we obtain the solution. 2. The regularity of the solution (how smooth this solution is: integrability, differentiability). 3. In what sense the solution satisfies the equation. Weak solution: We can obtain the solution by the Ritz-Galerkin formulation: find the minimizer of the following quadratic functional in an appropriate Hilbert space, $$ \mathcal{F}(u) = \frac{1}{2}\int_{\Omega} |\nabla u|^2 - \int_{\Omega} fu. $$ Smoothness depends on the right side data. If $f\in H^{-1}$, then $u\in H^1$. If $f\in L^2$, then $u\in H^2_{loc}$. Moreover, if $\Omega$ is $C^{1,1}$, we have an $H^2$-solution $u$ globally. The solution satisfies the equation in the distribution sense (see the following explanation). Why "weak": The term "weak" normally refers to points 2 and 3: The solution $u$ is only in $H^1$ in the most general setting; this means that $u$ is only differentiable once (notice $-\Delta$ has second partial derivatives in it). A strong solution, however, does have twice differentiability; normally if we say $u$ is a strong solution, we mean that $u$ has $W^{2,p}$-regularity (please refer to Gilbarg and Trudinger). The weak solution satisfies the equation only in the "weak" formulation $$ \int_{\Omega} \nabla u \cdot \nabla v \, dx = \int_{\Omega} fv \, dx \quad \forall v \in V, \tag{1} $$ where $V$ is a certain Sobolev space. There are two ways to get this weak form. The first is to write down the condition the minimizer of $\mathcal{F}(u)$ must satisfy: if $u$ is a minimizer, then $$ \lim_{\epsilon \to 0}\frac{d}{d\epsilon} \mathcal{F}(u+\epsilon v) =0 $$ and the weak form of the Euler-Lagrange equation is (1). The other is to multiply the original equation by a test function and then integrate by parts. The intuition behind this should be the Riesz representation theorem (at least to me it makes sense); we have: $$ \langle (-\Delta)u,v\rangle = l_u(v) = (u,v)_{V}, $$ going from the differential operator $-\Delta$ $\to$ the linear functional $l_u$ $\to$ a representation using the inner product $(\cdot ,\cdot)_V$. The inner product $(\cdot ,\cdot)_V$ on this Hilbert space $V$ is the left hand side of (1), if we make the test function space have zero boundary condition (we can use the Poincaré inequality to prove equivalence with the standard $H^1$-inner product). If you have taken any numerical PDE course on finite elements, the professor would have introduced the Lax-Milgram theorem, and Lax-Milgram relies on Riesz. Why the weak form is useful in the finite element method: Short answer: the weak form is very handy in that it helps us formulate a linear equation system which can be solved by a computer! Long answer: the essence of the Galerkin-type approach is that we exploit the fact that the infinite dimensional Hilbert space has a set of basis functions $\{\phi_i\}_{i=1}^{\infty}$, so we can expand $u$ in this basis: $$ u = \sum_{i=1}^{\infty} u_i\phi_i, $$ where each $u_i$ is a number. Plugging back into (1), and letting the test function $v$ run through all the $\phi_j$ (same functions, different subscript): $$ \int_{\Omega} \nabla \Big(\sum_{i=1}^{\infty} u_i\phi_i\Big) \cdot \nabla \phi_j \, dx =\sum_{i=1}^{\infty} u_i \int_{\Omega} \nabla \phi_i \cdot \nabla \phi_j \, dx = \int_{\Omega} f\phi_j \, dx \quad \forall j =1,2,\ldots. \tag{2} $$ We have an infinite dimensional linear equation system: $$ AU = F, $$ where $A_{ji} = \displaystyle\int_{\Omega} \nabla \phi_i \cdot \nabla \phi_j \, dx$, $U_i = u_i$, and $F_j = \displaystyle\int_{\Omega} f\phi_j\, dx$.
The finite element method essentially chooses a finite dimensional subspace $V_h\subset V$ (which may not literally be a subspace; please google the Discontinuous Galerkin method), so that we approximate the solution in this finite dimensional subspace $V_h$! The summation in (2) no longer has an infinite upper limit: there are finitely many $\phi_i$, and $v$ runs from $\phi_1$ to $\phi_N$, so that the linear system generated is still $AU = F$, but this time it only has $N$ equations, and we can use a computer to solve it.
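To see the system $AU=F$ in action, here is a minimal one-dimensional sketch (illustrative only: numpy, a uniform mesh, hat functions, and a lumped quadrature for the load) solving $-u''=f$ on $(0,1)$ with $u(0)=u(1)=0$:

```python
import numpy as np

N = 49                     # number of interior nodes
h = 1.0 / (N + 1)
x = np.linspace(h, 1 - h, N)

f = lambda t: np.pi**2 * np.sin(np.pi * t)   # exact solution: u = sin(pi x)

# Stiffness matrix A_ji = int grad(phi_i).grad(phi_j): tridiagonal (2, -1)/h
A = (np.diag(2 * np.ones(N)) - np.diag(np.ones(N - 1), 1)
     - np.diag(np.ones(N - 1), -1)) / h

# Load vector F_j = int f phi_j, approximated by f(x_j)*h (each hat integrates to h)
F = f(x) * h

U = np.linalg.solve(A, F)   # coefficients u_i in u_h = sum u_i phi_i
print("max nodal error:", np.abs(U - np.sin(np.pi * x)).max())  # O(h^2)
```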
A question from the mod p irreducibility test's proof
One of the listed assumptions was that $\deg f_1=\deg f$. A consequence of this is that $\deg g_1=\deg g$ and $\deg h_1=\deg h$. For if this were not the case, then we would have $$ \deg f_1=\deg (g_1h_1)=\deg g_1+\deg h_1<\deg g+\deg h=\deg (gh)=\deg f, $$ which is a contradiction.
Proving that $\sum_{k=1}^{\infty}\frac{1}{k}$ diverges
Simple way: suppose $\sum_{k=1}^{\infty}\frac 1k=L$ for some $L\in \mathbb{R}$. We have $$L=1+\frac 12+\frac 13+\frac 14+\frac 15+\frac 16+\cdots$$ Comparing term by term ($1>\frac12$, $\frac13>\frac14$, $\frac15>\frac16$, and so on), $$L>\frac 12+\frac 12+\frac 14+\frac 14+\frac 16+\frac 16+\cdots$$ $$L>1+\frac 12+\frac 13+\cdots$$ $$L>L$$ This is a contradiction, so the series diverges.
Fourier and $Z$ transform of a signal?
I assume that by $d(k)$ you mean the discrete delta impulse $\delta(k)$. By noting that $$f(k)*\delta(k-l)=f(k-l)$$ for any sequence $f(k)$, the signal $x(k)$ can be written as $$x(k)=4[u(k-2)-u(k-3)]=4\delta(k-2)$$ The $\mathcal{Z}$-transform of $X(k)$ is then simply $$X(z)=4z^{-2}$$ which converges anywhere except for $z=0$, i.e. its region of convergence is $|z|>0$. Since the region of convergence includes the unit circle, the Fourier transform of $x(k)$ exists and is simply given by $X(z)$ with $z=e^{j\omega}$: $$X(e^{j\omega})=4e^{-2j\omega}$$ The magnitude is $|X(e^{j\omega})|=4$, and the phase is $\arg\{X(e^{j\omega})\}=-2\omega$.
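A quick numerical confirmation (a numpy sketch; the grid sizes are arbitrary):

```python
import numpy as np

# x(k) = 4*delta(k-2): nonzero only at k = 2
k = np.arange(8)
x = np.where(k == 2, 4.0, 0.0)

# Evaluate the DTFT X(e^{jw}) = sum_k x(k) e^{-jwk} on a grid and compare with 4 e^{-2jw}
w = np.linspace(-np.pi, np.pi, 101)
X = np.array([np.sum(x * np.exp(-1j * wi * k)) for wi in w])

print(np.allclose(X, 4 * np.exp(-2j * w)))  # True
print(np.allclose(np.abs(X), 4.0))          # magnitude is the constant 4
```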
Predicate logic from formula to English
Let's write $A = \text{Monkey}(x)$ and $B = \forall y(\text{Monkey}(y) \land x \ne y \Rightarrow \text{Likes}(y, x))$. We look at "$\exists x(A \land B)$". So there exists $x$ such that $A$ and $B$ both hold. By $A$, $x$ is a monkey. By $B$, we know the following fact: if $y$ is a monkey and $y$ is not $x$, then $y$ likes $x$. As a conclusion, we obtain: "there is a monkey (let's call him $x$) such that every other monkey likes him (i.e. any monkey that is not $x$ likes $x$)."
Model existence theorem in set theory
You are asking for the completeness theorem of first-order logic, proved by Kurt Gödel in 1929. There are various ways to state the completeness theorem, and among them are the following two assertions: (1) whenever a statement $\varphi$ is true in every model of a theory $T$, it is derivable from $T$; (2) whenever a theory $T$ is consistent, it has a model. These assertions are easily seen to be equivalent, by the following argument. If (1) holds, and a theory $T$ has no model, then false holds (vacuously) in every model of $T$, and so $T$ derives a contradiction; so (2) holds. If (2) holds, and $\varphi$ holds in every model of $T$, then $T+\neg\varphi$ has no models and so is inconsistent by (2), so by elementary logic, $T$ derives $\varphi$; so (1) holds.
How can I further simplify $(a \le b) \lor (b \le a)$ to prove that it is a tautology?
Your simplified $(a \le b) \lor (b \le a)$ is indeed a tautology, because its negation is the following contradiction: $(a > b) \land (b > a)$.
Is there a procedure to solve Diophantine Equations?
By a famous result of Matiyasevich, there is no universal algorithm which, when fed any Diophantine equation, will determine whether or not that equation has a solution in integers. Interestingly, it is still unknown whether there is an algorithm that will always determine whether or not a Diophantine equation has a solution in rationals. For quadratic Diophantine equations, in any number of variables, there is an algorithm. For degree $4$ equations, it is known that there is not. The question for cubic equations is unresolved.
Finite field with real-like square root
Let $F$ be a finite field of $q$ elements. Then, for all $r$, $ x^2 = r $ or $ x^2 = -r $ has a solution in $F$ iff $(x^2-r)(x^2+r)=x^4-r^2$ always has a root in $F$ iff the set of 4th powers is the same as the set of squares. In a cyclic group of order $n$, the set of $m$th powers is the same as the set of $d$th powers, where $d=\gcd(m,n)$. Therefore, the set of 4th powers is the same as the set of squares iff $(n,4)=(n,2)$. The cyclic group $F^\times$ has $n=q-1$ elements. Therefore, the set of 4th powers is the same as the set of squares in $F$ iff $(q-1,4)=(q-1,2)$. If $q$ is odd, then $(q-1,2)=2$. Thus $(q-1,4)=2$, which happens iff $q \equiv 3 \bmod 4$. If $q$ is even, then $(q-1,4)=(q-1,2)=1$. Bottom line: $q \not\equiv 1 \bmod 4$.
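For what it's worth, here is a brute-force check of the conclusion over small prime fields (my addition; it only tests prime $q$, which suffices to illustrate the pattern):

```python
# For each prime p, test whether every r in F_p has r or -r a square.
# The property should fail exactly when p ≡ 1 (mod 4).
for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29):
    squares = {x * x % p for x in range(p)}
    has_property = all(r in squares or (-r) % p in squares for r in range(p))
    print(p, p % 4, has_property)   # False precisely when p % 4 == 1
```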
Linear independence: span of vectors
If $U$ is in the span of $V$ and $W$, we can write $U=aV+bW$ for some scalars $a$ and $b$. Since $U$ is orthogonal to both $V$ and $W$ by hypothesis, $ \langle U, U \rangle=\langle U, aV+bW \rangle=a\langle U, V \rangle+b\langle U, W \rangle =0$, so $U=0$.
Find the sum of coefficients of all the integral powers of $x$ in the expansion of $\big(1 + 2\sqrt x\big)^{40}$?
The answer is $\frac{1}{2} (3^{40} + 1)$. Let $$g(y) = (1 + 2 y)^{40}.$$ With $y=\sqrt x$, we seek the sum of the coefficients of the even powers $y^{2i}$ in the series expansion of $g(y)$. Now the subseries of $g(y)$ consisting only of even powers is $h(y) = \frac{1}{2}(g(y) + g(-y))$. The sum of these coefficients is therefore $h(1) = \frac{1}{2}(g(1) + g(-1)) = \frac{1}{2}(3^{40} + (-1)^{40}) = \frac{1}{2}(3^{40} + 1).$
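The answer is easy to confirm by direct summation (my addition): the integral powers of $x$ come exactly from the even-$k$ terms $\binom{40}{k}2^k$ of the binomial expansion.

```python
from math import comb

# Sum the coefficients of even powers of y in (1 + 2y)^40 and compare
# with (g(1) + g(-1)) / 2 = (3^40 + 1) / 2.
total = sum(comb(40, k) * 2**k for k in range(0, 41, 2))
print(total == (3**40 + 1) // 2)   # True
```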
Show that $F^2_{n+2} - F^2_{n-2}$ is not a multiple of a Fibonacci number.
Maybe this helps: $$F_{n+2}^2-F_{n-2}^2=(F_{n+2}-F_{n-2})(F_{n+2}+F_{n-2}),$$ $$ F_{n+2}-F_{n-2}=F_{n+1}+F_n-F_{n-2}=F_{n+1}+F_{n-1}=F_n+2F_{n-1},$$ $$ F_{n+2}+F_{n-2}=F_{n+1}+F_n+F_{n-2}=2F_n+F_{n-1}+F_{n-2}=3F_n.$$ Combining the three lines, $$F_{n+2}^2-F_{n-2}^2=3F_n\,(F_n+2F_{n-1}),$$ which is in particular a multiple of the Fibonacci number $F_n$.
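A quick mechanical confirmation of the factorization (my addition; the helper `fib` is just the obvious iterative definition):

```python
def fib(n):
    # Iterative Fibonacci with fib(0) = 0, fib(1) = 1.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Check F(n+2)^2 - F(n-2)^2 == 3*F(n)*(F(n) + 2*F(n-1)) for a range of n.
for n in range(2, 60):
    assert fib(n + 2)**2 - fib(n - 2)**2 == 3 * fib(n) * (fib(n) + 2 * fib(n - 1))
print("identity holds for n = 2, ..., 59")
```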
Construct a counterexample when some constraint in Burnside's Theorem is missed
You may as well start with your favorite non-algebraically closed field, like $\mathbb R$. Let $E=\mathbb R^2$. Then $\mathrm{End}(E_\mathbb R)\cong M_2(\mathbb R)$, and $\mathbb C\cong R=\left\{\left[\begin{smallmatrix}a&b\\-b&a\end{smallmatrix}\right]\mid a,b\in \mathbb R\right\}$ is an $\mathbb R$-subalgebra of the endomorphism ring. $E$ is simple over $R$ because, given the equation $$ \begin{bmatrix}a&b\\-b&a\end{bmatrix}\begin{bmatrix}x\\ y\end{bmatrix}=\begin{bmatrix}w\\ z\end{bmatrix} $$ you always have solutions if $[x,y]^\top$ is nonzero: $$ a=\frac{wx+yz}{x^2+y^2}\\ b=\frac{wy-xz}{x^2+y^2} $$ Therefore $R$ acts transitively on the nonzero elements of $E$, and $E$ is a simple $R$-module. But $\mathbb C\ncong M_2(\mathbb R)$.
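A small numerical spot check of the displayed solution formulas (my addition; the test data is random):

```python
import numpy as np

# Verify that [[a, b], [-b, a]] maps a nonzero (x, y) to an arbitrary (w, z)
# when a, b are chosen by the formulas above.
rng = np.random.default_rng(0)
x, y, w, z = rng.standard_normal(4)
d = x**2 + y**2
a = (w * x + y * z) / d
b = (w * y - x * z) / d
print(np.allclose(np.array([[a, b], [-b, a]]) @ np.array([x, y]), [w, z]))  # True
```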
Why are elliptic points called elliptic?
This is because of the condition on elements $\gamma \in \mathrm{SL}_2(\mathbb{Z})$ obtained by imposing that $\gamma$ has exactly one fixed point in the interior of the upper half plane. If you solve the equation $\gamma \tau = \tau$ for some $\tau \in \mathbb{H}$, and $\gamma$ doesn't fix any other point, it follows that the absolute value of the trace of $\gamma$ is less than $2$. Elements of the modular group with absolute value of trace equal to $2$ are called parabolic, and those with absolute value of trace greater than $2$ are called hyperbolic. So the monikers elliptic/parabolic/hyperbolic record which of the relations $|\operatorname{tr}\gamma| < 2$, $= 2$, $> 2$ is satisfied.
Draw numbers from 1 to n until increasing, find conditional expectation
Interesting question! Here is my suggestion. It is almost a complete argument, but there are some details for you to fill in.

First, let me change the question slightly. Define $T$ to be the length of the longest initial increasing sequence. This is $Y-1$ in your notation, except when all the balls are chosen, in which case $T = n = Y$.

Consider the following memoryless-type property. Suppose that you are able to calculate the unconditional expectation; let's somehow relate the conditional expectation to this. Suppose we have $n = 100$ balls, and the first that we draw is $X = 10$. From now on, if any ball in $\{1, ..., X-1 = 9\}$ is chosen, then we 'lose'. While this is not the case, the balls are being drawn uniformly from the remainder. That is, on the event that only balls not from $\{1, ..., 9\}$ are drawn until there is a decrease in value, the game is exactly the same as if we were just working with the $n-X = 90$ 'good' balls initially. (Of course, we need to increment $T$ by $1$, since we have already done a single step.)

Along lines suggested by @awkward, let $\sigma = (\sigma_1, ..., \sigma_n)$ be the chosen order. Also consider $\eta = (\eta_0, \eta_1, ..., \eta_n) \in \{0,1\}^{n+1}$, and the following slightly generalised problem. Define $T_0(\sigma)$ to be the length of the longest initial increasing sequence in $(\sigma_1, \sigma_2, ...)$ -- so $T_0 = Y - 1$ in your notation. Define $T_1(\eta) := \min\{ r \in \mathbb N_0 \mid \eta_r = 0 \}$. Set $T(\sigma, \eta) := T_0(\sigma) \wedge T_1(\eta)$. In words, this says "choose $T$ to be the length of the initial increasing sequence, but stop early if any of the $\eta_r$-s are $0$". (This includes terminating immediately, setting $T = 0$, if $\eta_0 = 0$.)

Now consider the case where $\sigma \sim \textrm{Uniform}(S_n)$ is a uniform permutation and $\eta \sim \textrm{Bernoulli}(p)^{\otimes (n+1)}$, ie an iid sequence of $\textrm{Bernoulli}(p)$-s. (Here $\textrm{Bernoulli}(p) = 1$ with probability $p$ and $0$ with probability $1 - p$. The case $p = 1$ reduces to your original question.) Write $T$ for the random variable in this case.

Let's look at $\Pr(T > r)$ for some $r \in \{0, 1, ..., n-1\}$. This requires the first $r+1$ balls to be drawn in increasing order and also $\eta_0 = \eta_1 = \cdots = \eta_r = 1$. These events are independent, with probabilities $1/(r+1)!$ and $p^{r+1}$, respectively. Hence $$ \textstyle \Pr_{n,p}( T > r ) = p^{r+1} / (r+1)! $$ (note that $\Pr_{n,p}(T > n) = 0$), where the subscripts indicate the parameters $p \in [0,1]$ and $n \in \mathbb N$. Using the fact that $\textrm{Ex}_{n,p}(T) = \sum_{r=0}^\infty \Pr_{n,p}(T > r)$, we obtain $$ \textstyle \textrm{Ex}_{n,p}(T) = \sum_{r=0}^{n-1} p^{r+1} / (r+1)! = \sum_{r=1}^n p^r / r! = \sum_{r=0}^n p^r / r! - 1. $$ If $n$ is large (or $p$ is very small), then this is roughly $e^p - 1$.

Now let's relate this to the original question. Suppose that $X = \sigma_1 = N$. Then, as described above, it is (almost) the case that $$ \textrm{Ex}_{n,1}(T \mid \sigma_1 = N) = \textrm{Ex}_{n-1, (n-N)/(n-1)}(T) + 1. $$ This is because when we select the next ball, there are $N-1$ out of the $n-1$ remaining balls which are 'bad', ie less than the first. (This is not quite exact: for selecting the second ball, ie $\sigma_2$, this is the case; but for $\sigma_3$ it is $N-1$ out of $n-2$; for $\sigma_4$, it is $N-1$ out of $n-3$; etc. However, since the expectation of $T$ is always at most $e$, only an order-$1$ number of balls are chosen, so this variation should be irrelevant in the limit.)
But we have an expression for this expectation. Assuming that $n$ is large (so that this approximation is probably valid), we deduce that $$ \mathrm{Ex}_{n,1}(T \mid \sigma_1 = N) \approx \exp(1 - N/n). $$ I hope this is helpful for you!
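If you want to sanity-check the approximation numerically, here is a small Monte Carlo experiment (my addition; the values of $n$, $N$ and the trial count are arbitrary choices):

```python
import random
from math import exp

def initial_run_length(perm):
    # Length of the maximal initial increasing run of the sequence.
    t = 1
    while t < len(perm) and perm[t] > perm[t - 1]:
        t += 1
    return t

n, N, trials = 100, 10, 200_000
total = 0
rest = [k for k in range(1, n + 1) if k != N]
for _ in range(trials):
    random.shuffle(rest)
    total += initial_run_length([N] + rest)   # condition on sigma_1 = N

print(total / trials, exp(1 - N / n))   # the two numbers should be close
```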
Pullbacks on differential forms are linear
Let $X,Y$ be arbitrary functionals (and $a$ an arbitrary constant). Since the pullback is precomposition, evaluating pointwise gives \begin{align} T^*(aY + X) &= (aY + X) \circ T \\ &= a(Y \circ T) + (X \circ T) \\ &= aT^*(Y) + T^*(X). \end{align} For composition, \begin{align} (T \circ S)^* (Y) &= Y \circ (T \circ S) \\ &= (Y \circ T) \circ S \\ &= S^*(Y \circ T) \\ &= S^*(T^* (Y)) = (S^* \circ T^*)(Y). \end{align}
Is $X$ connected if for every continuous $ f $, $ f(X)$ is an interval
It is false. Any constant function $f:X\to\{a\}$ is continuous and its image is connected. But $X$ could be "anything". EDIT: Your second statement is true. If $X$ is not connected, write $X=A\cup B$ for nonempty, open, disjoint sets $A$ and $B$. Now define $f(x)=0$ for $x\in A$ and $f(x)=1$ for $x\in B$.
Real matrices such that $A^2=-I_n$
Hint. $A$ is similar to a block-diagonal matrix composed of $2 \times 2$ blocks of the type $$\begin{pmatrix} 0 &-1\\ 1 & 0 \end{pmatrix}$$ This can be seen by picking a nonzero vector $x$: the family $(x,Ax)$ is linearly independent, and the matrix of $A$ restricted to the plane it spans is of the type mentioned above. By induction, suppose that you've built a linearly independent family of vectors $$(x_1,Ax_1, \dots , x_p,Ax_p).$$ If it is a basis of $\mathbb R^{n}$ you're done. If not, pick a vector $x \notin \text{Vect}\{x_1,Ax_1, \dots , x_p,Ax_p\}$. I claim that $$(x_1,Ax_1, \dots , x_p,Ax_p,x,Ax)$$ is linearly independent. For the proof, take $(\alpha_1,\beta_1, \dots, \alpha_p,\beta_p,\alpha,\beta) \in \mathbb R^{2p+2}$ such that $$\alpha_1 x_1 + \beta_1 Ax_1+ \dots + \alpha_p x_p + \beta_p Ax_p+\alpha x + \beta Ax=0.$$ Applying $A$ to both sides, you get $$\alpha_1 Ax_1 - \beta_1 x_1+ \dots + \alpha_p Ax_p - \beta_p x_p+\alpha Ax - \beta x=0.$$ Taking $\alpha$ times the first equation minus $\beta$ times the second kills the $Ax$ terms and gives $$(\alpha^2+\beta^2)x \in \text{Vect}\{x_1,Ax_1, \dots , x_p,Ax_p\}.$$ Hence $\alpha^2+\beta^2=0$ and, since the scalars are real, $\alpha=\beta=0$; the independence of the original family then kills the remaining coefficients. You can then complete the proof by induction.
Limit of Lambert $W$ Product Log is the Natural Log?
On any compact interval of $\mathbb{R}^+$ the sequence of continuous and monotonic functions given by $$ f_m(x) = x^{1/m} e^{x} $$ converges uniformly towards $e^x$, hence, for a given $y\in\mathbb{R}^+$, the sequence $\{x_m\}_{m\in\mathbb{N}^+}$ of the solutions of $f_m(x)=y$ converges towards the solution of $e^x = y$, i.e. $\log y$.
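Numerically, this convergence is easy to watch (my addition; it uses SciPy's `brentq` root-finder and an arbitrary $y=5$):

```python
from math import exp, log
from scipy.optimize import brentq

# The root x_m of x**(1/m) * exp(x) = y should approach log(y) as m grows.
y = 5.0
for m in (1, 10, 100, 1000):
    x_m = brentq(lambda x: x ** (1.0 / m) * exp(x) - y, 1e-12, 10.0)
    print(m, x_m)
print("log(y) =", log(y))   # the roots approach this value
```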
Problem with left limits.
Showing that $\frac {G(x-)-G(x_0)} {x-x_0} \to G'(x_0)$ as $ x \to x_0$: let $\epsilon >0$ and choose $\delta>0$ such that $\left|\frac {G(x)-G(x_0)} {x-x_0}- G'(x_0)\right|<\epsilon$ for $0<|x-x_0| <\delta$. Then $\left|\frac {G(x-)-G(x_0)} {x-x_0}- G'(x_0)\right|\leq \epsilon$ for $0<|x-x_0| <\delta$, by just taking left-hand limits (the strict inequality may become non-strict in the limit).
Best comprehensive guide to number terminology
MathWorld, the Prime Pages, OEIS, Numberphile, standupmaths, singingbanana, and Mathologer come to mind. Numberphile has videos on sets, sexy primes, and happy numbers. If you don't have some elementary set theory knowledge, number theory will be basically impossible. EDIT: Also forgot the Wikipedia pages List of mathematical symbols, List of math jargon, Glossary of math, etc.
Find length of a side from given measurements
$PA=PC\implies P$ lies on the perpendicular bisector of the segment $AC$, which is the diagonal $BD$. Proceed in the following way: let $Q$ be the center of the rhombus, so that $Q$ is the midpoint of $BD$. Then $$PB+PD=BD=10\implies QD= 5, \\ AQ=\sqrt{PA^2-PQ^2}=\sqrt{5^2-3^2}=4,\\ AD=\sqrt{AQ^2+QD^2}=\sqrt{4^2+5^2}=\sqrt{41}.$$
Exercise 1.1.3 in Charles Weibel’s book “An Introduction to Homological Algebra”
Actually everything you wrote is correct, except the last equation! Why do you say $(u_{n-1} \circ d_n)(v) = 0$? In reality, $d_n v$ is an element of $C_{n-1}$ that you need to write as a sum $v_1' + v_2' + v_3'$ where $v_1' \in \operatorname{im}d_n$ etc. But you know that such a sum is unique, and you already have the decomposition $d_n v = d_n v + 0 + 0$. Therefore $u_{n-1} d_n v = (d_n v, 0, 0)$. On the other hand, $\partial_n u_n v = (d_n v_3, 0, 0)$. But by the identification that allowed you to define $\partial_n$, namely $C_n / \operatorname{ker} d_n = \operatorname{im} d_n$, it follows that this last element is equal to $(d_n v, 0, 0)$. Everything checks out. PS: You perhaps know this, but working over a field was essential here. Being able to write stuff as direct sums is what allows you to perform this trick; over a general ring this result is false.
Collatz Conjecture: Is there a straightforward argument showing that there are no nontrivial 2 "step" repeats (where each "step" is an odd number)
You got $$x = \frac{3 + 2^{w_1}}{2^{w_1 + w_2} - 9} \tag{1}\label{eq1A}$$ For $x$ to be a positive integer means the denominator must divide evenly into the numerator which means the denominator must also be less than or equal to the numerator. For somewhat easier algebra, let $$y = 2^{w_1} \tag{2}\label{eq2A}$$ so \eqref{eq1A} now becomes $$x = \frac{y + 3}{2^{w_2}y - 9} \tag{3}\label{eq3A}$$ This gives $$y + 3 \ge 2^{w_2}y - 9 \iff 12 \ge (2^{w_2} - 1)y \iff \frac{12}{2^{w_2} - 1} \ge y \tag{4}\label{eq4A}$$ The denominator must also be a positive integer, but since the smallest power of $2$ greater than $9$ is $16$, we get $$2^{w_2}y \ge 16 \iff y \ge \frac{16}{2^{w_2}} \tag{5}\label{eq5A}$$ Combining \eqref{eq4A} and \eqref{eq5A} gives $$\frac{16}{2^{w_2}} \le y \le \frac{12}{2^{w_2} - 1} \tag{6}\label{eq6A}$$ For $w_2 = 1$, \eqref{eq6A} gives $8 \le y \le 12$. Since \eqref{eq2A} states $y$ is a power of $2$, the only possible solution is $w_1 = 3$ giving $y = 8$. However, \eqref{eq3A} gives $x = \frac{11}{7}$, which is not an integer. Next, if $w_2 = 2$, \eqref{eq6A} gives $4 \le y \le 4$, i.e., $w_1 = 2$. Substituting these into \eqref{eq3A} gives $x = 1$. If $w_2 = 3$, then \eqref{eq6A} gives $2 \le y \le \frac{12}{7}$, i.e., there's no value of $y$. Likewise, any value of $w_2 \gt 3$ will not allow any value of $y$. Also, Steven Stadnicki's question comment gives another way to see $w_2 = 2$ is its maximum possible value. This means the only valid positive integer solution of \eqref{eq1A} is $w_1 = w_2 = 2$ giving $x = 1$.
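The case analysis can also be confirmed by a brute-force scan (my addition; the exponent bound $20$ is arbitrary but more than covers the range permitted by (6)):

```python
# Search all small exponent pairs for positive-integer solutions of
# x = (3 + 2**w1) / (2**(w1 + w2) - 9).
for w1 in range(1, 21):
    for w2 in range(1, 21):
        num, den = 3 + 2**w1, 2**(w1 + w2) - 9
        if den > 0 and num % den == 0:
            print(w1, w2, num // den)   # prints only: 2 2 1
```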
Show that $f(n)$ tends to $+\infty$ if $f$ is injective
We need to show, for all $M$ (without loss of generality, assume $M \in \Bbb{N}$), we can find an $N \in \Bbb{N}$ such that $$n > N \implies f(n) > M.$$ The set $\{1, 2, \ldots, M\}$ is a finite set. Note that $$f^{-1}\{1, 2, \ldots, M\} = \bigcup_{n=1}^M f^{-1}\{n\},$$ where the right hand side is a finite union of singleton or empty sets (since $f$ is injective), which means the left hand side is finite. Thus, we may define $$N = \max f^{-1}\{1, 2, \ldots, M\}$$ (take $N = 1$ if this preimage is empty). Hence, $$n > N \implies n \notin f^{-1} \{1, 2, \ldots, M\} \implies f(n) \notin \{1, 2, \ldots, M\} \implies f(n) > M,$$ as required.
Prove that $y_1(x)=y_2(x)$
The Wronskian $w(x)= y_1y_2'-y_2y_1'$ is constant (one checks $w'=0$ using the equation), so it must be $0$ because of the limit conditions. Hence $(y_1/y_2)'=-w/y_2^2=0$.
Twin Primes Conjecture and related problems
You're about 90 years too late. The probabilistic approach to prime numbers was developed by Cramér in the 1930's. You might look at this paper by the other Granville.
Integers $m$ so that $m+2011\mid m^3+2011$
HINT: Set $m+2011=y$, so $m$ will be an integer $\iff y$ is. Then $$\dfrac{m^3+2011}{m+2011}=\dfrac{(y-2011)^3+2011}y=y^2-3\cdot2011\,y+3\cdot2011^2+\dfrac{2011-2011^3}y.$$ So we need $y\mid(2011-2011^3)=-2011(2011^2-1)$.
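Following the hint to its end (my addition, as a sketch): enumerate the divisors $y$ of $2011(2011^2-1)$, with both signs, and recover all integers $m=y-2011$.

```python
from math import isqrt

# All integers m with (m + 2011) | (m^3 + 2011): m + 2011 = y must divide
# K = 2011 * (2011**2 - 1); scan the positive and negative divisors of K.
K = 2011 * (2011**2 - 1)
divisors = {d for d in range(1, isqrt(K) + 1) if K % d == 0}
divisors |= {K // d for d in list(divisors)}
solutions = sorted(m for y in divisors for m in (y - 2011, -y - 2011)
                   if (m**3 + 2011) % (m + 2011) == 0)
print(len(solutions), solutions[:4], solutions[-4:])
```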
Integral equality using lower limit of integration
Rearrange $$\int_{r_0}^r \frac{sds}{f(s)}-\alpha=\int_{\tilde r}^r\frac{sds}{f(s)}$$ as $$\alpha = \int_{r_0}^r \frac{sds}{f(s)}-\int_{\tilde r}^r\frac{sds}{f(s)}=\int_{r_0}^{\tilde r} \frac{sds}{f(s)}$$ Since the integral of any function varies continuously with its limits of integration, such an $\tilde r$ must exist provided $$\min \left\{\int_{r_0}^r \frac{sds}{f(s)}\ \middle |\ r \ge 0\right\} \le \alpha \le \max \left\{\int_{r_0}^r \frac{sds}{f(s)}\ \middle|\ r \ge 0\right\}$$
Good books on complex numbers
A distinction needs to be made between purely geometric uses of complex numbers and uses in the theory of equations (polynomials, rational functions, etc.). Obviously, there is a good deal of overlap, but some books deal primarily with one aspect or the other.

Parsonson (1971). Pure Mathematics, Vol. 2 (both). The material on complex numbers and equations occupies roughly the first half of the book. Challenging problems, similar to STEP papers or old S-levels.

Ferrar (1943). Higher Algebra (both). About 60 pages on geometric/trigonometric applications and 100 on the theory of equations. Problems at or above the difficulty in Parsonson. Not to be confused with the same author's Higher Algebra for Schools.

Durell and Robson (1930, 1937). Advanced Algebra, Volume II and Advanced Trigonometry (both). I'm less familiar with these books, but I know they were the standard books on these subjects at higher certificate/scholarship level in England for many years. They can be downloaded here.

Hahn (1994). Complex Numbers and Geometry (geometry).

Andreescu and Andrica (2005). Complex Numbers from A to... Z (geometry). I don't know the last two books well, but they're recommended at imomath.com. They seem to be mostly about geometry and have little on the theory of equations in comparison with Parsonson and Ferrar. Andreescu and Andrica's book is very focused on using complex numbers to do coordinate geometry (including cases where this results in pages' worth of calculations), and it comes with solutions to the exercises.

Colin and Morvan (2011). Nombres complexes, polynômes et fractions rationnelles. After briefly introducing the theory, most of the book is devoted to presenting detailed solutions to exercises on these topics.

Gautier, Girard, Gerll, Thiercé, Warusfel (1971). Aleph 0. Algèbre, Terminale CDE: nombres réels, calcul numérique, nombres complexes (both). Most of this book is devoted to geometric and algebraic uses of complex numbers. Similar or slightly lower level to Parsonson, but more detailed treatment.

Engel (2009). Komplexe Zahlen und ebene Geometrie. In addition to the basic material, this book discusses the Riemann sphere and gives some computer visualizations in MAPLE.

Kretzschmar (2011). Komplexe Zahlen für Dummies. The title speaks for itself! I'm including this title just for fun, as it seems to be aimed at very elementary users such as those in electronics.

The book by Engel gives an analytic proof of the fundamental theorem of algebra. Unfortunately, I don't believe any of the other books proves it. There is a book by Yaglom called Complex Numbers in Geometry, but it actually discusses topics that are far removed from what one usually thinks of with this title. The book Geometry of Complex Numbers by Schwerdtfeger deals with advanced topics.
height 'h' in integration
The formula is fine to use and is in fact one of the ways to define integration over the real line (granted, it doesn't work for all functions, but it's "good enough"). I'm not sure why it would have been called satan's formula at all: perhaps because it's only an approximation, not an exact result? Or maybe because you can vary $n$ in the formula? I'd only encourage using it when you have a computer handy and you can't integrate the original expression, as otherwise it's pretty time-consuming and useless.
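For reference, a tiny demonstration (my addition; I'm assuming the "height $h$" formula in question is the left-endpoint Riemann sum $\sum_i f(a+ih)\,h$ with $h=(b-a)/n$):

```python
from math import sin, pi

def riemann(f, a, b, n):
    # Left-endpoint Riemann sum with n rectangles of width h = (b - a)/n.
    h = (b - a) / n
    return sum(f(a + i * h) for i in range(n)) * h

print(riemann(sin, 0.0, pi, 10))      # about 1.98
print(riemann(sin, 0.0, pi, 10_000))  # about 2.0 (the exact integral is 2)
```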
Alternate proof of Liouville's Theorem (Is it right)?
It's hard to say exactly where the error is because you leave out details. But the proof isn't right (for example, you can't approximate an entire function uniformly by polynomials, not that I see how the proof would go if you could). My point in posting this is to say I really don't think the proof can be fixed. Because if it worked, it seems like the same argument would show that $\sin(t)$ is unbounded for real $t$: It's easy to see that a non-constant polynomial must be unbounded on $\mathbb R$. Now $\sin(t) = t-t^3/6+\dots$; approximate it by a polynomial. Insert here the parts of the argument you left out. QED. (I really can't be sure this is relevant because, as I said, I'm not entirely certain exactly how the argument you have in mind is supposed to work. But it does show that a correct version must involve something that doesn't work on the line... And it shows in particular that a bounded function can be approximated uniformly on compact sets by unbounded functions.)
Algebraic group acting on projective space
You might be interested in the flag variety $G/B$ associated with a connected algebraic group with maximal connected solvable subgroup $B$. It's a fact that $G/B$ is a projective variety and may be realized as an orbit in projective space with the properties you listed. To prove this, one may consider a representation $G \to GL(V)$ such that $B$ is the stabilizer of a line $L \subseteq V$. Then $B$ will be the stabilizer of the point $L$ in the projective space $\mathbb{P}(V)$. By maximality of $B$ in $G$, the orbit of $L$ is closed and so $G/B$ is projective. In other words, $G/B$ is an example of an orbit of an action by $G$ in projective space that is not dense (since it's closed and if we assume $\operatorname{dim}G$ is large enough), and has the nontrivial stabilizer $B$. Note also that the Heisenberg group is the group of unipotent matrices $U_3 \subseteq B \subseteq GL_3$ contained in the standard Borel subgroup in $GL_3$ so that $U_3$ is contained in the stabilizer of the action given above on $G/B$.