Can we use the probability of a continuous random variable in the distribution of a discrete random variable?
Yes, indeed, your approach is okay. However, do notice that your $f(x,y)$ is the probability that exactly $y$ of the $20$ bulbs last longer than $x$ days. $$\begin{align} f_Y(x,y) &= \mathrm C_y^{20} \,\mathsf P(X>x)^y\,\mathsf P(X\leq x)^{20-y} \\[1ex] & = \mathrm C_y^{20}\big(1-F_X(x)\big)^y\big(F_X(x)\big)^{20-y}\end{align}$$ For maintenance not to be required, you need fewer than three of the twenty bulbs to expire within the ten-day period; that is, no fewer than $18$ must survive. So you seek: $f(10,20)+f(10,19)+f(10,18)$. That is all.
Arzela-Ascoli and compactness in $C(X), l^p, L^p$
Question 1: I don't know of any connection between equicontinuity and equisummability. There is indeed something similar for $L^p$, sometimes known as the Kolmogorov–Riesz theorem: A subset $\mathcal{F}$ of $L^p(\mathbb{R}^n)$ (with $1\le p<\infty$) is compact if and only if it is closed, bounded, equi-integrable in the sense that for every $\varepsilon>0$ there is some $R$ so that for every $f\in\mathcal{F}$, $$\int_{|x|>R}|f(x)|^p\,dx<\varepsilon,$$ and one more condition is satisfied, a sort of equicontinuity in the mean: for every $\varepsilon>0$ there is some $\delta>0$ so that, for every $f\in\mathcal{F}$ and every $y\in\mathbb{R}^n$ with $|y|<\delta$, $$\int_{\mathbb{R}^n}|f(x+y)-f(x)|^p\,dx<\varepsilon.$$ Question 2: The intuition for equisummability is most easily explained using a non-equisummable sequence: Let $e_n$ be the sequence with all zeros, except for $1$ in the $n$th position. It has no convergent subsequence in $\ell^p$, and the reason for that is that there is “mass escaping to infinity”. The equisummability condition stops that from happening. As for equicontinuity, without it you can have examples like $\arctan(nx)$ which converges (as $n\to\infty$) to a discontinuous function. Here, the problem is that the continuity of each function is progressively worse. Equicontinuity stops that from happening. Shameless plug: See a paper on the Kolmogorov–Riesz theorem that I cowrote.
Find $k, m$ so solutions to $(iz+k)^2=-2+2\sqrt3i$ are the same as those to $z^2-2iz+m=0$
HINT: If you expand the LHS and compare the real and imaginary components, you can find what $k$ and $z$ are. If $z$ is a complex number, write it as $a+bi$. The algebra might be a bit long, but when you solve the second equation you can see that $b=1$. I got $k=0$ and $m=-4$.
Determining order of an element
A variant: As $a^{2n}=1$, the order of $a$ is a divisor of $2n$ which cannot be a divisor of $n$. Hence it is $2d$ for some strict divisor $d$ of $n$. Then we have $a^{n-2d}=-1$. However, as $d$ is a strict divisor of $n$ and $2d$ is not, we have $0 <n-2d <n$, contradicting the minimality of $n$.
How to test a hypothesis when the population mean and standard deviation are known?
The null hypothesis is that the mean weight is 49.3 lbs, with 14 lbs. standard deviation. The question is, what is the probability that you observed a mean of 51.5 lbs. in a sample of 196 children, if the true mean and standard deviation followed the null hypothesis? We'll assume that a great many random factors are involved in the weight of any particular child, and that the sampling of children was random, and that their weights are independent; then by appealing to the central limit theorem (and observation), we'd expect the weights of children to be distributed as a normal distribution. Let $X_n\sim\mathcal{N}(\mu,\sigma^2)$ be independent, identically distributed random variables, for $n=1,2,\ldots,N$, each representing the weight of a child. $N=196$, $\mu=49.3$, and $\sigma=14$. What we observed was the sample mean $$ \hat\mu = \frac{1}{N}\sum_{n=1}^N X_n $$ By the properties of normal random variables, the sample mean is also distributed as a normal, with $\hat\mu\sim\mathcal{N}(\mu,\sigma^2/N)$. Now our question is, what were the chances that we observed a sample mean different than the true mean? How often could we have expected to see this value, or a more extreme value? This is the definition of the $p$-value. $$ p = \Pr[\hat\mu\geq51.5] = \Pr[\mathcal{N}(\mu,\sigma^2/N)\geq51.5] = $$ $$ \Pr\left[\mathcal{N}(0,1)\geq \frac{51.5-49.3}{14/\sqrt{196}}\right] = 1-\Phi(2.2) \approx 1-0.9861 = 0.0139 = 1.39\% $$ So if the null hypothesis was true to begin with, there is slightly more than a $1\%$ chance that we'd observe a sample mean of $51.5$ in a sample of $196$. That's a rare-sounding outcome; I'll leave it to you to figure out if this is significant at the $1\%$ level, and what the z-score is above.
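The arithmetic above can be reproduced with the standard library alone, computing $\Phi$ from the error function:

```python
import math

def phi(z):
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

mu, sigma, N = 49.3, 14.0, 196
observed = 51.5

# z-score of the observed sample mean under the null hypothesis
z = (observed - mu) / (sigma / math.sqrt(N))   # = 2.2

# one-sided p-value: P(sample mean >= 51.5)
p = 1.0 - phi(z)
```

Here `phi` is a stand-in helper, not a library function; with these inputs $z = 2.2$ and $p \approx 0.0139$, matching the hand calculation.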
Problems with introducing ordered pairs axiomatically
There is no problem introducing a new symbol $g$ for a pairing function to ZFC. Let's call $\mathsf{ZFC}_g$ the theory with the new symbol $g$ together with the new axiom $$\forall x_1 \forall x_2 \forall y_1 \forall y_2 (g(x_1,x_2) = g(y_1,y_2) \to x_1 = y_1 \land x_2 = y_2)$$ and also with the comprehension and replacement axioms expanded to include formulas mentioning this new function symbol $g$. Since $\newcommand{\ZFC}{\mathsf{ZFC}}\ZFC$ has a definable pairing function (e.g. the Kuratowski pairing function), this new theory $\ZFC_g$ is relatively consistent with $\ZFC$. In fact, since every model of $\ZFC$ can be expanded to a model of $\ZFC_g$, it is conservative over $\ZFC$: every statement that doesn't involve $g$ that is provable in $\ZFC_g$ was already provable in $\ZFC$. In other words, $\ZFC_g$ is a 100% harmless extension of $\ZFC$. This trick does avoid most junk theorems, e.g. $\varnothing \in g(\varnothing,\varnothing)$ is neither provable nor disprovable in $\ZFC_g$. However, it does not avoid junk facts, e.g. $\varnothing \in g(\varnothing,\varnothing)$ has to be true or false in any model of $\ZFC_g$. As with any conventions for tuples, functions, sequences, and so on, $\ZFC$ is completely agnostic as to how these are defined; it is only the fact that at least one encoding exists that really matters.
unmarked number in a circle
A number $n\in I_{1000}=\{1, 2, \dotsc, 1000\}$ is marked if and only if there exist $i, j\in\mathbb N$ such that $$ 15i+1=n+1000j\Leftrightarrow n-1=5(3i-200j). $$ Observe that $$ 3\cdot 67 -200 = 201-200 = 1, $$ so taking $i=67k$ and $j=k$ gives $n-1 =5(3\cdot 67k-200k)=5k$. Hence $n$ is marked if and only if $n-1$ is a multiple of $5$, i.e. $$ n\in\{1, 6, 11, 16, 21, 26, \dotsc, 996\}. $$
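The claim can also be verified by direct simulation; this sketch assumes the setup is "mark the number 1, then every 15th number around the circle of 1000" (my reading of the problem):

```python
# Mark position 0 (the number 1), then keep stepping 15 places around
# the circle until the pattern repeats.
marked = set()
pos = 0
while pos not in marked:
    marked.add(pos)
    pos = (pos + 15) % 1000

marked_numbers = {p + 1 for p in marked}
# exactly the numbers congruent to 1 mod 5
assert marked_numbers == {n for n in range(1, 1001) if n % 5 == 1}
```

The cycle has length $1000/\gcd(15,1000) = 200$, so 200 numbers are marked and 800 remain unmarked.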
Neighbourhood filter of an isolated point of a topological space
HINT: If $\mathcal V(x)$ is an ultrafilter then either $\{x\}$ is in $\mathcal V(x)$ or it isn't. Can it not be there? Also, recall that if $A\in\mathcal V(x)$ then there is an open set $U$ such that $x\in U\subseteq A$. The other direction is much simpler, and you should probably be able to figure it out on your own.
Let $r_0,r_1,...,r_m$ be the real roots of $a_nx^n+a_{n-1}x^{n-1}+...+a_0$.Is there a closed-form expression for $\sum_{i=1}^mr_i -\sum_{i=1}^m1/r_i$?
$\sum_{i=1}^mr_i=-\frac{a_{n-1}}{a_n}$ by Vieta's formulas. Now substitute $\frac{1}{x}$ for $x$: the reciprocals of the roots are the roots of the reversed polynomial, so $\sum_{i=1}^m \frac{1}{r_i}=-\frac{a_{1}}{a_0}$ (assuming $a_0\neq 0$ and that all the roots are real).
Finding a conformal mapping with certain points explicitly mapped
You need a special bilinear transformation of the type $w(z)=k\cdot\frac{z-a}{z-\overline{a}}$ where $a$ is a zero of $w(z)$. In your case $a=1+i$, so $w(z)=k\cdot\frac{z-(1+i)}{z-(1-i)}$. To determine $k$, use $w(-1+i)=\frac{1}{\sqrt 2}$
Vector cube question
When you add vectors graphically, place the tail of the second one at the head of the first one. Then the vector sum goes from the tail of the first to the head of the second. You can move $\vec{a}, \vec{b},$ and $\vec{c}$ around (translate them) to get them where you need them to be, but don't rotate them. You can also subtract two vectors. It's the same as adding them, but turn the second one backwards before you add them. So if you move $\vec{a}$ from $AB$ up to $EF$, you start at $A$ and end at $F$ by adding together $\vec{c} + \vec{a}$. Can you take it from here?
If $\sum a_n$ converges and for almost all n $a_n>0$, does it mean that the series $\sum a_n$ converges absolutely?
If 'almost all' means that $F:=\{n\mid a_n<0\}$ is finite, then the answer is 'yes'. It is already sufficient to have $a_n \ge 0$ for almost all $n$ (a bit weaker). The sum of the negative terms is then a finite negative number (or $0$ if there are no negatives), not something like $-\infty$, so: $$\sum\left|a_{n}\right|=\sum_{n\in F^{c}}a_{n}-\sum_{n\in F}a_{n}<\infty$$
Alternative Hypergeometric Proof
No, the expression given is somewhat analogous to the Binomial case, but not exactly the same thing at all.   Those brackets are not just for associating the terms in the exponents.$$\mathbb P(Y{=}y)=\dfrac{\dbinom{n}{y}r^{(y)}(m-r)^{(n-y)}}{m^{(n)}}$$ Here $r^{(y)}, m^{(n)},$ et cetera, are used to indicate the falling factorial: $$\begin{align}r^{(y)} & = r(r-1)\cdots(r-y+1) \\[1ex] & =\frac{r!}{(r-y)!} \\[1ex] & = {}^r\mathrm P_y\end{align}$$ That counts the number of ways to select an ordered list of $y$ items from a set of $r$. Your favoured event is the combination of two ordered lists: of $y$ from $r$ 'success' items, and $n-y$ from $m-r$ 'fail' items.   $r^{(y)}$ counts the way to select the first ordered list, $(m-r)^{(n-y)}$ counts the ways to select the second, and $\binom ny$ counts the ways to blend the two lists together into one ordered list. The total space is the selection of an ordered list of $n$ from $m$ items; counted by $m^{(n)}$. $$\begin{align}\mathbb P(Y{=}y) ~&=~ \dfrac{{}^n\mathrm C_y\cdot{}^r\mathrm P_y\cdot{}^{m-r}\mathrm P_{n-y}~{}}{{}^m\mathrm P_n} \\[1ex] &=~\dfrac{\dbinom{n}{y}r^{(y)}(m-r)^{(n-y)}}{m^{(n)}} \end{align}$$
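A quick numerical cross-check (with illustrative values $m=20$, $r=7$, $n=5$) that the falling-factorial form agrees with the usual binomial-coefficient form of the hypergeometric pmf:

```python
from math import comb, perm

m, r, n = 20, 7, 5          # population, successes, draws (illustrative values)
probs = []
for y in range(0, n + 1):
    # ordered-count form: C(n,y) * r^(y) * (m-r)^(n-y) / m^(n)
    ordered = comb(n, y) * perm(r, y) * perm(m - r, n - y) / perm(m, n)
    # usual unordered form: C(r,y) * C(m-r,n-y) / C(m,n)
    unordered = comb(r, y) * comb(m - r, n - y) / comb(m, n)
    assert abs(ordered - unordered) < 1e-12
    probs.append(ordered)
```

Here `math.perm(r, y)` is exactly the falling factorial $r^{(y)} = {}^r\mathrm P_y$, and the probabilities sum to $1$ as they should.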
Outward normal vector on a curve
Here's one way (among many), provided certain differentiability assumptions, though strictly speaking it involves calculus as well as algebra. One can ask if the velocity sweeps out a circle in the direction of the normal or in the opposite direction. For the unit normal to be well defined and unique up to sign, the unit tangent vector $\mathbf{u}(t)=\dot{\gamma}(t)/\|\dot{\gamma}(t)\|$ has to exist everywhere. If $\mathbf{u}$ is differentiable, we can define the angular velocity function with respect to a particular choice of normal. $$ \omega(t)=\dot{\mathbf{u}}(t)\cdot\mathbf{n}(t) $$ It can then be shown (for a simple curve in $\mathbb{R}^2$) that $$ \int_a^b\omega(t)dt=\begin{cases}2\pi & \mathbf{n}\ \ \text{inward} \\ -2\pi & \mathbf{n} \ \ \text{outward} \end{cases} $$ Computing this quantity allows one to identify the two normals.
Making an argument regarding the minimal polynomial rigorous.
I agree: the answer is correct, but it's hard to see a way of making this argument rigorous. Can I suggest an alternative approach? Let's focus on the vector $$e_1 = \left[ \begin{array}{c} 1 \\ 0 \\ 0 \end{array} \right]$$ Suppose, for contradiction, that there exists a non-zero polynomial of degree less than three, $$c_0 I + c_1 A + c_2 A^2,$$ that annihilates every vector in the vector space. Then in particular, it must annihilate $e_1$. But $$ (c_0 I + c_1 A + c_2 A^2) e_1 = \left[ \begin{array}{c} c_0 \\ c_1 \\ c_2 \end{array} \right],$$ so $c_0 I + c_1 A + c_2 A^2$ can't annihilate $e_1$ (unless $c_0 = c_1 = c_2 = 0$). Hence the minimal polynomial must be at least a cubic! Finally, I would invite you to think about how this idea might generalise to arbitrary matrices written in rational canonical form.
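To make the computation concrete (the question's matrix is not shown here, so this takes as an assumption the companion matrix of $x^3 - 2x^2 + 3x - 5$ in rational canonical form): since $Ae_1 = e_2$ and $A^2e_1 = e_3$, the vector $(c_0 I + c_1 A + c_2 A^2)e_1$ is just the coefficient vector.

```python
import numpy as np

# Hypothetical example: companion matrix of x^3 - 2x^2 + 3x - 5,
# so that A e1 = e2 and A e2 = e3.
A = np.array([[0.0, 0.0,  5.0],
              [1.0, 0.0, -3.0],
              [0.0, 1.0,  2.0]])
e1 = np.array([1.0, 0.0, 0.0])

c0, c1, c2 = 4.0, -1.0, 7.0           # arbitrary nonzero coefficients
v = c0 * e1 + c1 * (A @ e1) + c2 * (A @ A @ e1)
# (c0 I + c1 A + c2 A^2) e1 is just the coefficient vector:
assert np.allclose(v, [c0, c1, c2])
```

So no nonzero polynomial of degree less than three can send $e_1$ to zero.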
Difference between the principle of addition, the principle of multiplication, permutations and combinations
Yes, you are correct. A more down-to-earth example: if I have $4$ apples and $3$ oranges in a basket, there are $3+4=7$ ways I can pick one fruit from the basket, and there are $4\cdot 3=12$ ways I can pick one apple and one orange from the basket. "Independent" here means that no matter how one task is performed, the number of ways you can perform the second task is the same. For example, the tasks "pick one apple" and "pick one orange" are independent in the previous example, since no matter which apple I pick, I still have the same $3$ oranges to pick from. On the other hand, if my first "task" is "pick one fruit" and the second is "pick one orange", then the number of ways I can perform the second task depends on the way I performed my first task. If I picked an apple, I have $3$ ways of performing the second task; if I picked an orange, I have only $2$. The formula for the number of permutations is obtained by applying the principle of multiplication. These are two different things: the principle of addition is a general principle, while combinations are specific collections of objects from a set.
Prime ideals in $k[x,y]/(xy-1)$.
HINT: $k[x,y]/(xy-1)$ is naturally isomorphic to the ring of fractions $k[x][\frac{1}{x}] = S^{-1}k[x]$, where $S= \{1,x,x^2, \ldots\}$.
For which values of $p,q$, does the following double integral converge?
HINT We can show that $$I'=\iint_{x+y\ge1}\frac{1}{x^p+y^q}dxdy\ge \int_1^\infty\frac{1}{x^p+1}dx $$ which diverges for $p\le 1$ and $$I'=\iint_{x+y\ge1}\frac{1}{x^p+y^q}dxdy\le \int_1^\infty\frac{1}{x^p}dx+\int_1^\infty\frac{1}{y^q}dy+\iint_{x+y\ge1, \, x\le1, \, y\le1}\frac{1}{x^p+y^q}dxdy+\\+\int_0^{\frac \pi 2}\, d\theta\int_1^\infty \frac{r}{r^p\cos^p\theta+r^q \sin^q \theta}dr$$ The first two integrals converge for $p,q>1$, the third is a proper integral, and for the last one, assuming wlog $p<q$, $$\int_0^{\frac \pi 2}\, d\theta\int_1^\infty \frac{r}{r^p\cos^p\theta+r^q \sin^q \theta}dr\le\frac \pi 2 \cdot M(p,q)\int_1^\infty \frac1{r^{p-1}}dr$$ where $M(p,q)$ is an upper bound for the function $$\frac{1}{\cos^p\theta+r^{q-p} \sin^q \theta}$$
Jacobian of sinc$\Bigl(\frac{|x|}{2}\Bigr)\frac{x}{2}$
Numpy is using $$\textrm{sinc}\,x=\frac{\sin\pi x}{\pi x}$$ So create a function def mysinc(x): return np.sinc(x/np.pi) and call it instead of np.sinc. You need to do this both in the definition of $f$ as well as when you calculate the analytic formula. The differences will be on the order of 1e-10, which is just numerical error.
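In other words, np.sinc is the normalized sinc, so dividing its argument by $\pi$ recovers the unnormalized $\sin(x)/x$; a quick check of the wrapper:

```python
import numpy as np

def mysinc(x):
    # np.sinc(t) = sin(pi t)/(pi t), so mysinc(x) = sin(x)/x
    return np.sinc(x / np.pi)

x = np.array([0.5, 1.0, 2.0, 3.0])
assert np.allclose(mysinc(x), np.sin(x) / x)
assert mysinc(0.0) == 1.0   # the removable singularity at 0 is handled for free
```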
Find the probability $P(X<Y)$.
For the special case of $X$ and $Y$ being identically distributed, you have $$P(X < Y) + P(Y < X) + P(Y = X) = 1$$ $$2 P(X < Y) + P(X = Y) = 1$$ $$ P( X < Y) = \frac{1}{2} \left(1 - P(X = Y)\right)$$ So it reduces to computing $P(X = Y)$, whose computation appears here: Probability that two independent Poisson random variables with the same parameter are equal
Integral $\int^1_0\frac{\log\cos(\frac{\pi x}{2})}{x(1+x)}\,dx$ with logarithm of cosine and rational function.
One may write, with a partial fraction decomposition and standard changes of variable, $$ \begin{align} \int^{1}_{0}\frac{\log\cos\left(\frac{\pi x}{2}\right)}{x(1+x)}\,dx &=\frac\pi2 \int^{\large\frac\pi2}_{0}\frac{\log\cos u}{u\left(u+\frac\pi2\right)}\,du \\\\&=\int^{\large\frac\pi2}_{0}\frac{\log\cos u}{u}\,du-\int^{\large\frac\pi2}_{0}\frac{\log\cos u}{u+\frac\pi2}\,du \\\\&=\int^{\large\frac\pi2}_{0}\frac{\log\cos u}{u}\,du-\int^{\pi}_{\large\frac\pi2}\frac{\log\sin v}{v}\,dv\qquad \left(v=u+\frac\pi2\right) \\\\&=\int^{\large\frac\pi2}_{0}\frac{\log\cos v}{v}\,dv-\int^{\pi}_{\large\frac\pi2}\frac{\log\frac{\sin v}{v}}{v}\,dv-\int^{\pi}_{\large\frac\pi2}\frac{\log v}{v}\,dv \\\\&=\int^{\large\frac\pi2}_{0}\frac{\log\cos v}{v}\,dv-\int^{0}_{\large\frac\pi2}\frac{\log\frac{\sin v}{v}}{v}\,dv-\int^{\pi}_{0}\frac{\log\frac{\sin v}{v}}{v}\,dv-\int^{\pi}_{\large\frac\pi2}\frac{\log v}{v}\,dv \\\\&=\int^{\large\frac\pi2}_0\frac{\log\frac{\cos v\sin v}{v}}{v}\,dv-\int^{\pi}_{0}\frac{\log\frac{\sin v}{v}}{v}\,dv-\int^{\pi}_{\large\frac\pi2}\frac{\log v}{v}\,dv \end{align} $$ then, by observing that $\displaystyle\frac{\cos v\sin v}v=\frac{\sin 2v}{2v}$, the first integral above is equal to the second one (substitute $w=2v$), so we are left with $$ \int^{1}_{0}\frac{\log\cos\left(\frac{\pi x}{2}\right)}{x(1+x)}\,dx=-\int^{\pi}_{\large\frac\pi2}\frac{\log v}{v}\,dv =\left[-\frac{\log^2 v}{2}\right]^{\pi}_{\large\frac\pi2} $$ giving $$ \int^{1}_{0}\frac{\log\cos\left(\frac{\pi x}{2}\right)}{x(1+x)}\,dx=\frac12\log^2 2-\log2 \log \pi. $$
Question about simplifying absolute value/exponents
$$\quad \left|x^n\right|=\left|x\right|^n\quad \mathrm{so} \quad\sqrt[n]{\left|x^n\right|}=\sqrt[n]{\left|x\right|^n}=|x|$$
What is the sum $\sum_{j=1}^{2n} j^2$?
The formula you mentioned itself contains the answer. As you know:$\sum_{j=1}^{n} j^2 = \frac{1}{6}n(n+1)(2n+1)$. Here on substituting $2n$ in place of $n$ you will get: $$\sum_{j=1}^{2n} j^2=\frac{1}{6}(2n)(2n+1)(4n+1)=\frac{1}{3}n(2n+1)(4n+1)$$
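A quick brute-force check of the substitution (pure Python):

```python
def sum_squares_formula(n):
    # sum_{j=1}^{2n} j^2 = n(2n+1)(4n+1)/3
    return n * (2 * n + 1) * (4 * n + 1) // 3

for n in range(1, 50):
    assert sum_squares_formula(n) == sum(j * j for j in range(1, 2 * n + 1))
```

The integer division is safe because one of $n$, $2n+1$, $4n+1$ is always divisible by $3$.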
Finding gcd of polynomials with extra parameter in Maple
How are you passing the info? From the help: gcd(a, b, 'cofa', 'cofb'). The optional third argument cofa is assigned the cofactor a/gcd(a, b); the optional fourth argument cofb is assigned the cofactor b/gcd(a, b). Example:

gcd(x^2-y^2, x^3-y^3, c, d);
                             -y + x
c;
                             x + y
d;
                             x^2 + x y + y^2
Recursive set that contains in a way all the other recursive ones?
Let $c_A$ be a natural number that corresponds to any effective coding (for example: a Turing machine) of a recursively enumerable set $A$ and consider $S = \bigcup_A \{c_A\} \times A$. The answer to your modified question is: no. Let us try to imitate the usual liar paradox. Suppose, that there exists a total Turing machine $U$ that computes $S$. And consider another Turing machine: $$N(x) = \mathit{not}\; U(x, x)$$ By the closure property $N$ is also recursive. So, there is a natural number $c_N$ corresponding to $N$ under $U$. Let us see what we get by applying $N$ to itself: $$N(c_N) = \mathit{not}\; U(c_N, c_N) = \mathit{not}\; N(c_N)$$ where the first equality is the definition of $N$ and the second holds by the definition of $S$.
Formulate xor matrix
Note that $a\mathbin{\sf xor}b = a+b \bmod 2$ when $a,b\in\{0,1\}$. So your matrix is Pascal's triangle modulo 2: $$ f(i,j) = \binom{i+j}{i} \bmod 2 $$ So in order to calculate it, you just need to be able to find out whether $\binom{m}{n} = \frac{m!}{(m-n)!n!} $ is even or odd. You can do that by computing how many factors of $2$ there are in each of the factorials and checking if there are as many in the denominator as in the numerator. For counting factors of $2$ in a factorial, see Highest power of a prime $p$ dividing $n!$.
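A sketch of this parity test, using Legendre's formula to count the factors of $2$ in each factorial, cross-checked against math.comb:

```python
from math import comb

def v2_factorial(m):
    """Number of factors of 2 in m! (Legendre's formula)."""
    count = 0
    while m:
        m //= 2
        count += m
    return count

def f(i, j):
    # binom(i+j, i) is odd iff no factor of 2 survives in the quotient
    return 1 if v2_factorial(i + j) == v2_factorial(i) + v2_factorial(j) else 0

for i in range(16):
    for j in range(16):
        assert f(i, j) == comb(i + j, i) % 2
```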
Why is there no parabolic (i.e. on a paraboloid) non-Euclidean geometry?
Hyperbolic geometry is not really geometry on a hyperboloid. It's geometry on an infinite surface of constant negative Gaussian curvature, something which cannot be represented even in 3D. You can model it using a sheet of a hyperboloid, but the metric you get isn't the normal 3D metric you'd intuitively expect. Elliptic geometry is not the geometry on an ellipsoid either. While spherical geometry is what you get as geometry on the sphere, elliptic geometry is what you get from that if you identify antipodal pairs of points. It's the geometry on a surface of constant positive Gaussian curvature. Just like the parabola is the singular limiting case between ellipse and hyperbola, the parabolic geometry is the limiting case between elliptic and hyperbolic geometry. And between constant positive and constant negative curvature, that limiting case is zero curvature. Might be you could model parabolic geometry on a paraboloid using some strange metric, but why bother if you can have a flat plane using normal Euclidean metric, perfectly intuitive? For a nice uniform way of looking at these different geometries, I suggest looking into Cayley-Klein metrics. Perspectives on Projective Geometry by Richter-Gebert has some nice chapters on this. Disclaimer: I'm working with that author, so I might be somewhat biased here.
mutually exclusive events where one event occurs before the other
Assuming that at least one of $E$ or $F$ has positive probability, then with probability $1$, one of them will occur eventually. Then $P(E|(E\cup F))=\frac{P(E\cap(E\cup F))}{P(E\cup F)}=\frac{P(E)}{P(E)+P(F)}$
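If one models "eventually" as independent repeated trials where $E$ and $F$ have per-trial probabilities $p$ and $q$ (illustrative values below), the formula agrees with the exact series for "$E$ occurs before $F$":

```python
p, q = 0.2, 0.3   # illustrative per-trial probabilities of E and F

# P(E before F) = sum over k of (neither occurs for k trials) * (E on trial k+1)
series = sum((1 - p - q) ** k * p for k in range(200))

# closed form: p / (p + q)
assert abs(series - p / (p + q)) < 1e-12
```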
Proving that if G is a 3-regular graph, then the size of a minimum edge cut equals the size of a minimum vertex cut
Let $S$ be a minimum vertex cut, and let $H_1, H_2$ be the two components of $G-S$. Each vertex in $S$ has one edge to a vertex in $H_1$, one edge to a vertex in $H_2$, and one edge to another vertex in $S$. The claim is that the edges from $S$ to $H_1$ form an edge cut. Since each vertex in $S$ has exactly one edge to $H_1$, we pick exactly one edge for every vertex of the cut set $S$, so this edge cut has size $|S|$. Hence $\kappa(G)=\kappa'(G)$.
Weak Mathematical Induction for Modulo Arithmetic $8\mid 3^{2n}-1$
If $8$ divides $3^{2k}-1$, then $3^{2k}=8m+1$ for some $m$. Then $3^{2(k+1)}-1 = 3^{2k} \cdot 3^2-1 = (8m+1) \cdot 3^2-1 = 8m \cdot 3^2+3^2-1 = 8m \cdot 3^2+8,$ which is divisible by $8$.
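A quick brute-force check of both the claim and the inductive step:

```python
# the statement: 8 divides 3^(2n) - 1
for n in range(1, 30):
    assert (3 ** (2 * n) - 1) % 8 == 0

# the inductive step: 3^(2(k+1)) - 1 = 9*(3^(2k) - 1) + 8
for k in range(1, 30):
    assert 3 ** (2 * (k + 1)) - 1 == 9 * (3 ** (2 * k) - 1) + 8
```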
Determinants and Adjugates
$$B\operatorname{adj}(B) = \det(B)I_n$$ Taking determinants on both sides, $$\det(B)\det(\operatorname{adj}(B))=\det(B)^n$$ so, when $\det(B)\neq 0$, $$\det(\operatorname{adj}(B))=\det(B)^{n-1}$$ (the identity in fact holds for all $B$, by a continuity argument). Here $n=3$, hence $\det(\operatorname{adj}(B))=\det(B)^2$ for this question.
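As a sanity check with a hypothetical $3\times 3$ matrix, one can build the adjugate from cofactors and verify the identity in exact integer arithmetic:

```python
def det3(M):
    a, b, c = M[0]; d, e, f = M[1]; g, h, i = M[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def adjugate3(M):
    a, b, c = M[0]; d, e, f = M[1]; g, h, i = M[2]
    # transpose of the cofactor matrix
    return [[e*i - f*h, c*h - b*i, b*f - c*e],
            [f*g - d*i, a*i - c*g, c*d - a*f],
            [d*h - e*g, b*g - a*h, a*e - b*d]]

B = [[2, 1, 0],
     [0, 3, 1],
     [1, 0, 4]]
# det(adj(B)) = det(B)^(n-1) with n = 3
assert det3(adjugate3(B)) == det3(B) ** 2
```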
Not understanding steps in Algebraic simplification
From the first equation, putting $(3-x)$ as a common factor, we get $$(3-x)\left((4-x)(6-x)-8\right)=0.$$ Expanding and simplifying the second factor gives $$(4-x)(6-x)-8=x^2-10x+16,$$ and finally we factor the last expression $$x^2-10x+16=(x-2)(x-8)$$ and conclude.
Supremum of derivatives of absolutely continuous functions
Usual trick: Let $$p_n(x) = \begin{cases} n+1 &\text{when }x\in \left( \frac{1}{n+1}, \frac{1}{n}\right), \\ 0 &\text{otherwise.}\end{cases}$$ and $$f_n(x) = \int_0^x p_n(s) ds.$$ Then $\|f_n\| = \frac{1}{n}$ and thus $\{f_n\}$ converges in $\| \cdot\|$ to the zero function. In particular, $$A = \{ f_n\} \cup \{0\}$$ is a compact set. But $x\mapsto \sup_n |\dot {f_n}(x)|$ is not an integrable function.
what are the last 2 digits of $n^n$ without computing the formula
The last 2 digits of the result depend only on the last 2 digits of the original number. Thus do all operations modulo 100. For details on why multiplication can be carried out modulo 100 like this, see modular arithmetic on Wikipedia.

result1 = n mod 100
result2 = (result1 * result1) mod 100
result3 = (result2 * result1) mod 100
result4 = (result3 * result1) mod 100
result5 = (result4 * result1) mod 100

And so on, repeating until you reach the $n$th power. Thus you get the result quickly. Also you can write a simple Python script; it would be literally a few lines of code.
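In Python, the whole loop above collapses to the built-in three-argument pow, which performs fast modular exponentiation; a minimal sketch:

```python
def last_two_digits(n):
    # n^n mod 100, computed by modular exponentiation
    return pow(n, n, 100)

# cross-check against the naive computation for small n
for n in range(1, 50):
    assert last_two_digits(n) == (n ** n) % 100
```

For example, $7^7 = 823543$, so `last_two_digits(7)` is 43.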
Can we extend a function $u \in H_0^1(\Omega)$ to $\overline{u} \in H_0^1(\widetilde{\Omega})$ with $\Omega \subset \widetilde{\Omega}$?
If $f_n \in C^\infty_c(\Omega)$ satisfies $$\|f_n - u\|_{H^1(\Omega)} \to 0,$$ then $\overline f_n \in C^\infty_c(\widetilde \Omega)$ satisfies $$\| \overline f_n - \overline u \|_{H^1(\widetilde \Omega)} \to 0$$ Thus $\overline u \in H^1_0(\widetilde \Omega)$.
Tarski's Theorem: V=HSP
Notice that a variety is, by definition (in the context of Tarski's theorem), closed under $H,S,P$, and so if $K=V(K)$, that is, $K$ is a variety, then $H(K), S(K), IP(K) \subseteq K$, whence $HV, SV, IPV \leq V$, while the converse is trivial. It also follows that $H,S,IP \leq V$, and of course, $I \leq H$. I suppose this is enough to answer your questions.
If a finite field has characteristic 2 why is every element a square?
Consider the function $f : \Bbb F \to \Bbb F$ given by $f(x) = x^2$. Then, $$f(x) = f(y) \iff x^2 = y^2 \iff x^2 - y^2 = 0 \color{red}{\iff} (x - y)^2 = 0 \iff x - y = 0.$$ The $\color{red}{\iff}$ is true since the field has characteristic $2$. Thus, $f$ is actually injective. An injective map from a finite set to itself is always surjective and thus, every element of $\Bbb F$ is a square. In general, the above argument shows that if $\Bbb F$ is a finite field with characteristic $p$, then $x \mapsto x^p$ is a bijection. In fact, this is actually an isomorphism since $(x + y)^p = x^p + y^p$.
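As an illustration (this concrete construction of $\mathbb F_8$ as $\mathbb F_2[x]/(x^3+x+1)$, with elements stored as 3-bit masks, is my own choice, not from the question), one can check that squaring is a bijection:

```python
def gf8_mul(a, b):
    """Multiply in GF(8) = GF(2)[x]/(x^3 + x + 1), elements as 3-bit masks."""
    r = 0
    for _ in range(3):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b1000:          # reduce modulo x^3 + x + 1 (mask 0b1011)
            a ^= 0b1011
    return r

squares = {gf8_mul(a, a) for a in range(8)}
# x -> x^2 hits every element exactly once: every element is a square
assert squares == set(range(8))
```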
A problem regarding inequality
Suppose $|x - 1| < \frac{1}{10}$. Then $$|x^2 - 1| = |(x-1)(x+1)| = |x-1|\cdot|x+1| < \frac{1}{10} |x+1|$$ We now need to show $|x+1| < \frac{21}{10}$ in order to show $|x^2 - 1| < \frac{21}{100}$. Since $|x-1| < \frac{1}{10}$, this means $$ -\frac{1}{10} < x-1 < \frac{1}{10}.$$ Adding $2$ to each side gives us: $$ -\frac{1}{10} + 2 < x-1+2 < \frac{1}{10} + 2,$$ which we can do without any issues since $2$ is positive. Simplifying, we get: $$ \frac{19}{10} < x + 1 < \frac{21}{10}.$$ Since $-\frac{21}{10} < \frac{19}{10}$ and $\frac{19}{10} < x+1$, we have by the transitive property that .... Can you take it from here?
How to find 'k' from this equation
By multiplying both sides of the equation by a suitable constant, you get an equation of the form $$(A+k)\rho^k=B(X,\lambda)$$ Multiplying by $\rho^A$, and then by $\ln\rho$, $$(A+k)\rho^{A+k}=(A+k)e^{(A+k)\ln\rho}=B\rho^A\\ \left((A+k)\ln\rho\right)e^{(A+k)\ln\rho}=B\rho^A\ln\rho$$ Then, recognizing the form $we^w$, apply the Lambert W function to obtain $$(A+k)\ln\rho=W_e(B\rho^A\ln\rho)$$ and then the solution is $$k=\frac{W_e(B\rho^A\ln\rho)}{\ln\rho}-A$$
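As a numeric sanity check, one can solve $we^w = z$ with Newton's method (a minimal stand-in for the principal branch of Lambert $W$, valid for $z > 0$) and verify that the recovered $k$ satisfies the original relation for illustrative values of $A$, $B$, $\rho$:

```python
import math

def lambert_w(z, tol=1e-14):
    """Principal branch of w*e^w = z for z > 0, via Newton's method."""
    w = math.log(1.0 + z)           # reasonable starting point for z > 0
    for _ in range(100):
        e = math.exp(w)
        step = (w * e - z) / (e * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

A, B, rho = 2.0, 5.0, 1.5           # illustrative values (rho > 1, B > 0)
lr = math.log(rho)
k = lambert_w(B * rho ** A * lr) / lr - A

# check the original relation (A + k) * rho^k = B
assert abs((A + k) * rho ** k - B) < 1e-9
```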
Prove that the sum of convex functions is again convex.
Hint: $$(f+g)(tx_1 +(1-t)x_2)=f(tx_1 +(1-t)x_2)+g(tx_1 +(1-t)x_2)\leq ...$$
Visualizing Infinity discerning countable and uncountable
I'll try to answer the question of how to visualize a countable set $C$ in, e.g., $\mathbb{R}^2$, although I'm not sure if this was your main question. If $C$ is dense then I'm not sure how to visualize it differently than the ambient space. Instead let's consider the special case that $C$ is closed. In this case the usual proof of the Cantor–Bendixson theorem (see https://en.wikipedia.org/wiki/Perfect_set_property) maybe gives a way to "visualize" $C$ in terms of countable discrete sets, which you suggested that you already can visualize. The proof of this theorem shows that if we transfinitely iterate the process of removing the isolated points of a countable set $C$, taking intersections at limit stages, we end up with the empty set after countable many stages. So running this process in reverse, it says that $C$ is "constructed" in countably many stages by starting with a countable discrete set, and at each successor stage, adding another countable discrete set whose limit points are the current set. (Well, what happens at the limit steps if you go in reverse isn't really a "construction", but a fuzzy question gets a fuzzy answer.) Edit by questioner: This answer actually didn't satisfy me, but the discussion in the comments of this answer eventually convinced me to believe that one only can visualize (in the sense I meant) finite unions of bounded connected subspaces of the plane, why the answer to my question would be "no, there is no way". This is why I accepted the answer.
What's the best way to find a perpendicular vector?
You don't need the cross product, as long as you have a scalar product. Remember, most vector spaces do not have a cross product, but a lot of them do have a scalar product. Take any arbitrary vector $\vec{r}$ that is not parallel to the given vector $\vec{v}$. Then $\vec{q} = \vec{r} - \left(\frac{\vec{r}\cdot\vec{v}}{\vec{v}\cdot\vec{v}}\right) \vec{v}$ will be orthogonal to $\vec{v}$. To prove this, just show that $\vec{q} \cdot \vec{v} = 0$, which takes only a little bit of algebra. This is the basis of the Gram-Schmidt process, an important procedure in linear algebra.
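A minimal sketch of this projection trick in plain Python (the helper names are mine, not from any particular library), working in any dimension:

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def perpendicular(v, r):
    """Component of r orthogonal to v (one Gram-Schmidt step).
    Requires v nonzero and r not parallel to v."""
    scale = dot(r, v) / dot(v, v)
    return [ri - scale * vi for ri, vi in zip(r, v)]

v = [3.0, 1.0, 2.0]
q = perpendicular(v, [1.0, 0.0, 0.0])   # any non-parallel r works
assert abs(dot(q, v)) < 1e-12           # q is orthogonal to v
```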
Proof that for any convex polyhedron there exist 2 faces with equal number of edges
Since I don't have 50 reputation, this is an answer rather than a comment, though I am not sure it merits one on its own. I was thrown off by your mention of Dirichlet: the pigeonhole principle is not called that in my country! Basically, take the face $K$ with the highest number of edges ($n$ in your case). There are exactly $n$ other faces adjacent to $K$, and each of them has between $3$ and $n$ edges, so at most $n-2$ values are distributed among those $n$ faces. Hence, by pigeonhole, there must be at least two faces with the same number of edges.
The number of numbers not divisible by $2,3,5,7$ or $11$ between multiples of $2310$
Use the Chinese remainder representation with basis $[2,3,5,7,11]$. Given a CRR basis $[a,b,c,d,e]$ with $M = abcde = 2\times 3 \times 5 \times 7 \times 11 = 2310$, each valid tuple of residues represents exactly one number from $0$ to $M - 1$. So you want the number of tuples that don't contain a zero, which is $(a - 1)(b - 1)(c - 1)(d - 1)(e - 1) = 1 \times 2 \times 4 \times 6 \times 10 = 480$.
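This count is exactly Euler's totient $\varphi(2310)$, which a direct check confirms:

```python
from math import gcd

# numbers in [0, 2310) not divisible by 2, 3, 5, 7 or 11
count = sum(1 for n in range(2310) if gcd(n, 2310) == 1)
assert count == 1 * 2 * 4 * 6 * 10 == 480
```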
Stochastic Differential Equations - A Few General Questions
For a) Simulating the SDE of an underlying stock price is not used to price the stock; that would yield nothing more than a self-fulfilling prophecy. Rather, practitioners use simulation methodologies to price financial derivatives of an underlying asset. For example, the "payoff" function for a simple European call option is $\max(0,P_T - K)$, where $K$ is the "strike" price and $P_T$ is the price of the underlying asset at expiry time $T$. An analyst would simulate the underlying price, and evaluate the payoff function at time $T$. Finally, the call option value (i.e., price) is simply the present value of the expected value of the payoff function. For b) There are several statistical and market-based methods that practitioners use to determine the parameters of the SDE model. For market makers, the most common methodology involves the use of the term structure of the market implied volatility. Other methods include non-parametric models of the PDF of the underlying. Simpler methods include statistical estimations using historical time series data. For c) Darrell Duffie's book Dynamic Asset Pricing Theory might be a good one.
Closed sets and subtraction of elements.
Consider $A = \mathbb N_{\ge 2}$ and $B = \{ n + \tfrac1n \mid n\in A \}.$ Notice that $A,B$ are in fact closed. For example, for any sequence $b_k = n_k + \frac{1}{n_k}\in B$ we have $$ |b_k - b_l| \ge |n_k - n_l| - \left| \frac{1}{n_k} - \frac{1}{n_l} \right| \ge |n_k - n_l| - \frac{1}{2}. $$ So, $b_k$ converges if and only if $b_k$ is eventually constant. Now, $M = A-B$ contains the sequence $m_k = -\frac{1}{1+k}$, which converges to $0\notin M$. That is, $M$ is not closed.
Identify the regions in $(\alpha_{1},\ \alpha_{2})$ plane for which $f_{\alpha_{1},\ \alpha_{2}}$ is a convex or concave function
Hint: there are first- and second-order characterizations of convex functions. A well-known result is the following: suppose the function $f$ is defined and has continuous second partial derivatives in the interior of a convex set $X \subset \mathbb{R}^n$. Then $f$ is convex if and only if the quadratic form $$\sum_{i,j=1}^{n}f_{ij}(x)\xi_i \xi_j$$ is positive semidefinite at every $x$ in the interior of $X$.
Close Form Solution of Quasi Concave Maximization Problem - $ \arg \max_{x \in \mathbb{R}_{++}} \frac{ \ln(1+ax) + \ln(1+bx) }{x+c} $
I have tried Dinkelbach's and Schaible's method, but have not found a solution. I still provide my notes so anyone can check them and maybe provide useful comments. Denote the problem as $$\max_{x \in \mathbb{R}_{++}} \frac{\ln(1+ax) + \ln(1+bx)}{x+c}$$ and define: $$h(\lambda) = \max_{x \in \mathbb{R}_{++}} \{ \ln(1+ax) + \ln(1+bx)-\lambda (x+c) \}$$ Dinkelbach (1967) showed that $h(\lambda) \geq 0$ if and only if: $$\max_{x \in \mathbb{R}_{++}} \frac{\ln(1+ax) + \ln(1+bx)}{x+c} \geq \lambda.$$ The problem now is to find the largest $\lambda$ such that $h(\lambda) \geq 0$. Since the inner objective is concave in $x$, its maximum can be found with the first order condition $a/(1+ax) + b/(1+bx) - \lambda = 0$, which (multiplying through by $(1+ax)(1+bx)$) simplifies to: $$\lambda abx^2 + \big(\lambda(a+b) - 2ab\big) x + (\lambda - a - b) = 0.$$ Plugging this into $h(\lambda)$ and solving for $h(\lambda)=0$ does not give me a closed form solution. Schaible (1974) showed that you can substitute $y = x / (x+c)$ and $t = 1/(x+c)$ to get an equivalent but convex optimization problem: $$\max_{y \in \mathbb{R}_{++}} \{ t \ln(1+ay/t) + t \ln(1+by/t) : y+ct\geq 1 \}.$$ The KKT conditions are: $$\begin{align} \frac{at}{t+ay} + \frac{bt}{t+by} - \lambda = 0 \\ \ln(1+ay/t) + \ln(1+by/t) - \frac{ay}{t+ay} - \frac{by}{t+by} - c\lambda = 0 \\ \lambda(1-y-ct) = 0 \\ y+ct \geq 1 \\ \lambda \geq 0 \end{align}$$ I do not find a closed form solution for this either.
Method of characteristics for a system of PDEs when equations are dependent on both variables
$$\frac{\partial a}{\partial x}+g\frac{\partial a}{\partial y}=ab-a \tag 1$$ $$\frac{\partial b}{\partial x}+g\frac{\partial b}{\partial y}=ab-b \tag 2$$ Let $u(x,y)=a-b$ $$\frac{\partial u}{\partial x}+g\frac{\partial u}{\partial y}=-u$$ This is a first order linear PDE, easy to solve. You get the general solution: $$u=e^{-x}F(y-gx) \tag 3$$ with arbitrary function $F$. The condition $u(0,y)=\alpha-\beta$ implies $\alpha-\beta=e^0F(y-0)=F(y)$. Thus the function $F$ is constant and is known now: $F=\alpha-\beta$ $$u=(\alpha-\beta)e^{-x}$$ $$b=a-(\alpha-\beta)e^{-x}$$ Putting it into Eq.$(2)$ : $$\frac{\partial b}{\partial x}+g\frac{\partial b}{\partial y}=(a-1)\big(a-(\alpha-\beta)e^{-x}\big) \tag 4$$ This is a first order PDE. The Charpit-Lagrange characteristic equations are : $$\frac{dx}{1}=\frac{dy}{g}=\frac{da}{(a-1)\big(a-(\alpha-\beta)e^{-x}\big)}$$ A first characteristic equation comes from $\frac{dx}{1}=\frac{dy}{g}$ $$y-gx=c_1$$ A second characteristic equation comes from $\frac{dx}{1}=\frac{da}{(a-1)\big(a-(\alpha-\beta)e^{-x}\big)}$. Solving this Riccati ODE leads to : $$a=1+\frac{x+(\alpha-\beta)e^{-x}}{(\alpha-\beta)\text{Ei}\big((\alpha-\beta)e^{-x}-\exp\big(x+(\alpha-\beta)e^{-x}\big)\big)+c_2}$$ Ei is the special function called "Exponential Integral" : https://mathworld.wolfram.com/ExponentialIntegral.html The general solution of the PDE $(4)$ is : $$a=1+\frac{x+(\alpha-\beta)e^{-x}}{(\alpha-\beta)\text{Ei}\big((\alpha-\beta)e^{-x}-\exp\big(x+(\alpha-\beta)e^{-x}\big)\big)+\Phi\big(y-gx \big)} \tag 5$$ with arbitrary function $\Phi$. The condition $a(0,y)=\alpha$ gives : $\alpha=1+\frac{0+(\alpha-\beta)e^{0}}{(\alpha-\beta)\text{Ei}\big((\alpha-\beta)e^{0}-\exp\big(0+(\alpha-\beta)e^{0}\big)\big)+\Phi\big(y-0 \big)} =1+\frac{(\alpha-\beta)}{(\alpha-\beta)\text{Ei}\big((\alpha-\beta)-\exp\big((\alpha-\beta)\big)\big)+\Phi\big(y\big)}$ This implies that $\Phi(y)$ is constant.
$\Phi=\frac{(\alpha-\beta)}{\alpha-1}-(\alpha-\beta)\text{Ei}\big((\alpha-\beta)-\exp\big((\alpha-\beta)\big)\big)$ The constant $\Phi$ is now known. We put it into $(5)$, which gives the solution : $$a=1+\frac{x+(\alpha-\beta)e^{-x}}{(\alpha-\beta)\text{Ei}\big((\alpha-\beta)e^{-x}-\exp\big(x+(\alpha-\beta)e^{-x}\big)\big)+\frac{(\alpha-\beta)}{\alpha-1}-(\alpha-\beta)\text{Ei}\big((\alpha-\beta)-\exp\big((\alpha-\beta)\big)\big)}$$ Then one finds $b$ via : $$b=a-(\alpha-\beta)e^{-x}$$ Note that the specified condition leads to a solution which is a function of $x$ only. If this had been known from the very start, the calculation would have been much simpler, with the system reduced to ODEs instead of PDEs : $$\frac{d a}{d x}=ab-a$$ $$\frac{d b}{d x}=ab-b$$
Do not understand: "A real function $f$ is continuous if $f^{-1}(O)$ is open for each open set $O$"
If $f$ is a function from $A$ to $B$ and $C \subset B$ then $f^{-1}(C)$ is defined as $\{a\in A:f(a)\in C\}$. This does not require the existence of an inverse for $f$.
Counterexample for the invalidity of the following
Choose $A(0)$ to be true and $A(x)$ to be false for all $x\neq 0$ and $B(x)$ to be false for all $x$.
Finding $n$ using Chebyshev's inequality
Chebyshev's inequality is of the form $\newcommand{\Var}{\operatorname{Var}}$ $\newcommand{\E}{\operatorname{E}}$ $\def\Xbar{\,\overline{\!X}}$ $$P(|Y-\E(Y)| \geq \alpha) \leq \dfrac{\Var(Y)}{\alpha^2},$$ where $Y$ is a random variable. (Please note that the $\geq$ and $\leq$ are reversed relative to how you typed them.) But as stated in your question, you are not to apply Chebyshev to a single random variable but rather to a sample mean (i.e., a mean of random samples) $$Y=\Xbar=\frac{X_1+\dots+X_n}{n}$$ where $X$ is a random variable (the height of a person) and the $X_1$, $\ldots$, $X_n$ are just $n$ numbers generated by $X$ at distinct times. It so happens that the $X_1$, $\ldots$, $X_n$ can be treated as independent, identically distributed random variables, and by the properties of expectation and variance we have \begin{align*} \E(\Xbar) &= \E\left(\frac{X_1+\dots+X_n}{n}\right)\\ &=\frac{\E(X_1)+\dots+\E(X_n)}{n}\\ &=\frac{n\E(X_1)}{n}\\ &=\E(X);\\ \Var(\Xbar) &= \Var\left(\frac{X_1+\dots+X_n}{n}\right)\\ &=\frac{\Var(X_1)+\dots+\Var(X_n)}{n^2}\\ &=\frac{n\Var(X_1)}{n^2}\\ &=\tfrac{1}{n} \Var(X). \end{align*} Then Chebyshev's inequality for $Y=\Xbar$ takes the form $$P(|\Xbar-\E(X)| \geq \alpha) \leq \dfrac{\Var(X)}{n\ \alpha^2},$$ and you can plug in your values. But since you want the probability of the "$\leq$" case, the inequality yields $$P(|\Xbar-\E(X)| < \alpha) = 1 - P(|\Xbar-\E(X)| \geq \alpha) \geq 1 -\dfrac{\Var(X)}{n\ \alpha^2}.$$ P.S.: It is a little bit odd that your desired probability is $\geq 95$% rather than just $=95$%, since that would yield $$0.95 \leq P(|\Xbar-\E(X)| < \alpha) \geq 1 -\dfrac{\Var(X)}{n\ \alpha^2}.$$ Then, there is no way to actually relate the leftmost side with the rightmost side. P.S. $\boldsymbol 2$: Never mind the P.S.
(which was a confusion all on my own); this answer suggests relating all the quantities as $$P(|\Xbar-\E(X)| < \alpha) \geq 1 -\dfrac{\Var(X)}{n\ \alpha^2} \geq 1 -\dfrac{5}{n\ \alpha^2} \geq 0.95$$ since $1 -\Var(X)/n\alpha^2$ is an increasing function in $\alpha$.
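To make the last chain concrete: taking the bound $\Var(X)\le 5$ used above and an illustrative tolerance $\alpha=0.5$ (my assumption, since the exact numbers of the question are not restated here), the smallest admissible $n$ can be computed directly.

```python
import math

def chebyshev_n(var, alpha, prob=0.95):
    """Smallest n with 1 - var/(n*alpha**2) >= prob,
    i.e. n >= var / ((1 - prob) * alpha**2)."""
    exact = var / ((1 - prob) * alpha**2)
    return math.ceil(exact - 1e-9)   # tiny slack guards against float round-off

n = chebyshev_n(var=5, alpha=0.5)    # 5 / (0.05 * 0.25) = 400
```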
Geometric meaning of a vector space
Consider $\tilde x = \ln x$ and $\tilde y = \ln y$. Then $(\tilde x, \tilde y)$ is just the standard $\mathbb R^2$. Therefore your vector space is just the standard $\mathbb R^2$ using logarithmic coordinates instead of Cartesian coordinates.
Integral of Dirac Delta Function
To talk about the Delta function rigorously, you need to introduce the idea of distributions. In this case you never look at $\delta(x)$ outside an integral, and you define it to have the property $$\int\delta(x-x_0)f(x)dx=f(x_0),$$ which then implies that it has unit integral if we take $f(x)=1$. In fact, if we actually define $\delta(x)$ to be $0$ everywhere and $\infty$ at $0$, then its Lebesgue integral would be zero. We can however look at $\delta(x)$ as the limit of a sequence of Gaussians: $$\delta_a(x)=\frac{1}{a\sqrt{\pi}}e^{-x^2/a^2},$$ as on the Wikipedia page. Each Gaussian $\delta_a$ has unit integral, and we can show that as $a\rightarrow 0$, we start to arrive at the properties of $\delta(x)$. You can also see in the animation that it approaches the shape of being zero everywhere, but highly peaked at the origin.
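The limiting property is easy to see numerically: smearing a test function against $\delta_a$ for small $a$ returns roughly the value at the origin. A small Python sketch (the midpoint grid and the choice of $\cos$ as test function are mine):

```python
import math

def delta_a(x, a):
    """Gaussian approximation to the delta function, unit integral for every a."""
    return math.exp(-(x / a)**2) / (a * math.sqrt(math.pi))

def smeared(f, a, lo=-1.0, hi=1.0, steps=8000):
    """Midpoint-rule approximation of  integral of delta_a(x) * f(x) dx."""
    h = (hi - lo) / steps
    return sum(delta_a(lo + (k + 0.5) * h, a) * f(lo + (k + 0.5) * h)
               for k in range(steps)) * h

val = smeared(math.cos, a=0.05)   # close to cos(0) = 1
```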
Complex functions decomposition
Of course you can; the value of a complex function is itself a complex scalar. The general form of a function $f(z)$ can then be written as $$f(z)=\Re f(z)+i\,\Im f(z)=\sqrt{f(z)\bar f(z)}\,\exp\Big\{i\arctan\frac{\Im f(z)}{\Re f(z)}\Big\}$$ (with the usual caveat that $\arctan$ recovers the argument only up to a multiple of $\pi$).
A $k$-linear map $\sigma: k[x]\rightarrow k[x]$ with property $\sigma(fg)=f\sigma(g)+\sigma(f)g$ has representation $\sigma(f)=aD(f)$
Such a map is called a derivation. Notice that $D$ is a derivation by the linearity of the derivative and the product rule. First notice that if such an $a$ exists, then $$ a = a \cdot 1 = a \cdot D(x) = \sigma(x). $$ Therefore we define $a := \sigma(x)$. Next notice that $$ \sigma(1) = \sigma(1 \cdot 1) = 1 \cdot \sigma(1) + \sigma(1) \cdot 1 = 2 \sigma(1), $$ so $\sigma(1) = 0 = a D(1)$. So we already have $\sigma(x^n) = a D(x^n)$ for $n = 0$ and $n = 1$. Suppose that $\sigma(x^n) = a D(x^n)$ for some $n \geq 0$. Then \begin{align*} \sigma(x^{n+1}) &= \sigma(x \cdot x^n) = x \sigma(x^n) + \sigma(x) x^n = x a D(x^n) + a x^n \\ &= a (D(x^n) x + x^n) = a (D(x^n) x + x^n D(x)) = a \cdot D(x^{n+1}). \end{align*} This shows that $\sigma(x^n) = a D(x^n)$ for all $n \in \mathbb{N}$. By the linearity of $\sigma$ and $D$ it follows that $\sigma = aD$. PS: Instead of induction we can also notice that for every two derivations $D_1$ and $D_2$ on the $k$-algebra $k[x]$ the set $\{p \in k[x] \mid D_1(p) = D_2(p)\}$ forms a unital subalgebra of $k[x]$. Taking $D_1 = \sigma$ and $D_2 = aD$, this subalgebra contains $x$ by construction of $a$, and therefore already all of $k[x]$. More generally we can use this to show that a derivation of a $k$-algebra $A$ is uniquely determined by its values on generators of $A$.
Properties of two squares subgroup of Rubiks square group
If by twisted corners or flipped edges you mean situations where two configurations have the pieces in the same positions but differ in the orientation of the pieces - this cannot happen in that group. This is guaranteed by the fact that the operations preserve the axis of the normal vector of each face, although a normal vector can become its opposite. Given the position of a piece, this uniquely determines the face normals and thereby also the orientation. In addition, the operations are such that edges in the $x$-$y$, $x$-$z$ and $y$-$z$ planes remain in their plane. Also, corners remain in one of the two groups $xyz=\pm 1$. This makes the group act by simultaneous permutations on five sets of four elements each. I think these are almost independent: the permutations on the two corner sets have to have the same parity, and the parities of the permutations of the edges have to add up to even. This would make it $(4!)^5/4$ elements.
Link complements in $\mathbb{R}^{3} $ and $S^{3} $
They're clearly not homeomorphic as one is compact and the other is not. The main similarity though is that two $\mathbb{R}^3$ knot complements (not link complements) are homeomorphic iff the corresponding knots are isotopic (up to chiral pairings) iff the same knot complements in the one-point compactification (the corresponding link complements in $S^3$) are homeomorphic. So they are as strong as each other as far as being a knot invariant is concerned. It is also not hard to see that the fundamental groups of both types of link complements will be isomorphic (Van Kampen's theorem). This is called the knot/link group.
Prove that pairs of numbers in a set have same parity given certain criteria
First, yes, you should prove it by cases. First you should prove that (i) implies that every pair of integers of $S$ is of the same parity. Then you should prove that (ii) implies the same thing. Second, it seems to me that these statements (both cases) can be proved directly or by contrapositive. Or in other ways. So, since you asked "Would Proof by Contrapositive help me here?", I will say yes, it can be used. That's not the same as saying that it must, or should, be used. Third, I don't think a truth table is appropriate for this question. Finally, here are some thoughts and hints on a proof. Direct proof: (i) Pick a pair $x,y \in S$. We will try to prove that $x$ and $y$ have the same parity. Write $S = \{x,y,z,w\}$. By hypothesis, $w$ and $z+x$ have the same parity. And so do $w$ and $z+y$. So, $z+x$ and $z+y$.... (Here, "..." means that remaining steps and details are left for you to think about.) (ii) This time, $z+x$ and $z+y$ each have the opposite parity of $w$, so.... Contrapositive: (i) Assume $x, y \in S$ and $x,y$ have opposite parity. Now consider $x$ and $w+z$; and consider $y$ and $w+z$.... (ii) I leave this entirely for you to think about. [The approach I hinted in "direct" could be used for "contrapositive". The approach I hinted for "contrapositive" could be used for "direct".] Cases: The four elements of $S$ can each separately be even or odd. There are $16$ cases. You can analyze the $16$ cases explicitly (check which ones satisfy the hypotheses, and verify that those ones also satisfy the conclusion). I'm not claiming that last one would be very aesthetically pleasing. Other: If $x$ has the same parity as $y+z$, then $x+y+z$ is even. If this holds for all triples, then what happens when you add up all the triples? There are $4$ triples. $x$ shows up in $3$ of them. In other words, add together $x+y+z$, plus $x+y+w$, plus $x+z+w$, plus $y+z+w$. What is the total? What does it mean to say that each total $x+y+z$ is even (or, in case (ii), odd)? 
If we know the parity of the total $x+y+z+w$, and the parity of $x+y+z$, what does that tell us about the parity of $w$?
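Since there are only $16$ parity patterns, the "cases" approach is easy to automate. A short Python sketch, reading hypothesis (i) as "every three-element sum is even" and (ii) as "every three-element sum is odd", which is how the triples are used above:

```python
from itertools import product, combinations

def check(rule):
    """If every 3-element sum of the parity vector satisfies `rule`,
    then all four parities must agree."""
    for bits in product((0, 1), repeat=4):
        if all(rule(sum(t) % 2) for t in combinations(bits, 3)):
            assert len(set(bits)) == 1, bits
    return True

check(lambda p: p == 0)   # case (i):  x has the same parity as y + z
check(lambda p: p == 1)   # case (ii): x has the opposite parity to y + z
```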
Can the continuum $\mathfrak c$ be a limit cardinal?
Yes. The only restriction is that $\mathfrak{c}$ must be a cardinal of uncountable cofinality. So $\mathfrak{c}$ can be $\aleph_{\omega_1}$, but not $\aleph_\omega$.
How to show whether these integrals converge or not?
Dirichlet's test is the key. $$\int_{1}^{+\infty}\frac{\sin(x^2)\cos(2x)}{\sqrt{x}}\,dx=\frac{1}{2}\left(\int_{1}^{+\infty}\frac{\sin((x+1)^2-1)}{\sqrt{x}}\,dx+\int_{1}^{+\infty}\frac{\sin((x-1)^2-1)}{\sqrt{x}}\,dx\right)\tag{1}$$ and: $$\int_{1}^{+\infty}\frac{\sin((x+1)^2-1)}{\sqrt{x}}\,dx=\int_{3}^{+\infty}\frac{\sin(u)}{2\sqrt{1+u}\sqrt{-1+\sqrt{1+u}}}\,du, $$ $$\int_{1}^{+\infty}\frac{\sin((x-1)^2-1)}{\sqrt{x}}\,dx=\int_{0}^{+\infty}\frac{\sin(u)}{2\sqrt{1+u}\sqrt{1+\sqrt{1+u}}}\,du, \tag{2}$$ where $\sin(u)$ is a function with a bounded primitive and $\frac{1}{\sqrt{1+u}\sqrt{\pm 1+\sqrt{1+u}}}$ is a function eventually decreasing to zero. Dirichlet's test hence gives that both your integrals are converging - as improper Riemann integrals, obviously.
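The product-to-sum step in $(1)$ can be sanity-checked numerically, since $(x\pm 1)^2-1 = x^2\pm 2x$; a tiny Python sketch:

```python
import math

def lhs(x):
    # original integrand factor: sin(x^2) * cos(2x)
    return math.sin(x**2) * math.cos(2 * x)

def rhs(x):
    # product-to-sum form: (1/2)[sin((x+1)^2 - 1) + sin((x-1)^2 - 1)]
    return 0.5 * (math.sin((x + 1)**2 - 1) + math.sin((x - 1)**2 - 1))

samples = [0.3, 1.7, 2.9, 5.1, 12.4]
```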
Is it correct my demonstration in this issue of functions on sets?
Yes, your proof is correct, but unorthodox. Usually functions are not used so formally; one writes $y=f(x)$ instead of $(x,y)\in f$. Furthermore, just point out that to prove that $f$ is invertible it is enough to show that it is injective. You will need the fact that $f$ is also surjective later, to prove that $f^{-1}=g$, so it is ok anyway. The "orthodox proof" would be as follows: Let $x_1,x_2\in A$. The equation $f(x_1)=f(x_2)$ implies by hypothesis that $$x_1=g(f(x_1))=g(f(x_2))=x_2,$$ so $f$ must be injective and therefore there exists $f^{-1}:f(A)\subset B\to A$. To show that $f^{-1}=g$ we first need to check that $f(A)=B$; i.e. that $f$ is surjective. Let $y\in B$ and denote $x=g(y)$. By hypothesis $y=f(x)$, so $y\in f(A)$ and the surjectivity of $f$ follows. To see that $f^{-1}=g$ we finally need to check that $f(g(y))=y$ for every $y\in B$ and $g(f(x))=x$ for every $x\in A$. Using the hypothesis again you get that these conditions are equivalent to $f(x)=f(x)$ and $g(y)=g(y)$, which obviously hold.
Show that there is $F$ contained in $E$ and $(g_n)$ of simple functions such that $(g_n)\to f$ uniformly on $F$ and $m(E\setminus F)<\epsilon$.
If $f$ is a bounded non-negative function (say by $N\in\mathbb N$), then defining $$g_n:=\sum_{i=0}^{N2^n-1}i2^{-n}\mathbf 1\left\{i2^{-n}\lt f\leqslant \left(i+1\right)2^{-n}\right\},$$ we get a sequence of simple functions such that $\left|g_n\left(x\right)-f\left(x\right)\right|\leqslant 2^{-n}$ for any $x$.
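The uniform bound $|g_n - f|\leqslant 2^{-n}$ is easy to check by machine; a small Python sketch (the particular $f(x)=x^2$ on $[0,1]$ and the evaluation grid are my choices):

```python
import math

def g_n(f, x, n):
    """The simple function from the display: g_n = i * 2**-n on the set
    {i * 2**-n < f <= (i + 1) * 2**-n}, and 0 where f = 0."""
    v = f(x)
    if v <= 0:
        return 0.0                    # no indicator fires when f(x) = 0
    i = math.ceil(v * 2**n) - 1       # the unique i with i*2**-n < v <= (i+1)*2**-n
    return i * 2**-n

f = lambda x: x * x                   # bounded by N = 1 on [0, 1]
xs = [k / 100 for k in range(101)]
err = max(abs(g_n(f, x, 5) - f(x)) for x in xs)   # should be at most 2**-5
```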
Maximizing a convex function under constraints
In principle yes, but it is non-convex, so in practice hard. A trivial reformulation is to simply define $X$ as $RR^T$, and now you have a standard non-convex program which you can throw your favorite global non-convex solver at. Alternatively, upper bound the objective with a simple concave function, solve the relaxation, and then proceed with a spatial branch-and-bound strategy. Not saying these are preferred or good approaches, just answering the question of whether it is possible in principle.
Questions about Notation Used in Dirac Equation:
This is all explained in chapter 3.2, notably equation 3.16 $x_{\mu}=g_{\mu\nu} x^{\nu}$, where $g_{\mu\nu}$ is the Minkowski metric.
1-dimensional solution space of homogeneous system Ax=0?
You only need to show that the $x$ you defined is orthogonal to every row of $A$. You can see that's the case by using the row-expansion of the determinant: given any row $j$ of $A$, consider the $n \times n$ matrix obtained from $A$ by duplicating that row. This new square matrix $A'$ has determinant $0$, and you can compute its determinant by expanding along the newly created duplicate row: $$ 0=\det (A') = \sum_i (-1)^{j-1} (-1)^i A_{ji} |A_i | = (-1)^{j-1} \sum_i A_{ji} x_i$$
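For a concrete instance, take $n=3$: the vector built from the signed maximal minors of a $2\times 3$ matrix is, up to an overall sign convention, the cross product of its rows, and it is orthogonal to both rows. A small Python sketch with an illustrative matrix:

```python
def det2(m):
    """Determinant of a 2x2 matrix given as nested lists."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

A = [[1, 2, 3],
     [4, 5, 6]]                      # a 2x3 matrix of rank 2

def minor(A, i):
    """2x2 matrix obtained from A by deleting column i."""
    return [[row[j] for j in range(3) if j != i] for row in A]

# x_i = (-1)**i * det(A_i); the overall sign convention is immaterial
x = [(-1)**i * det2(minor(A, i)) for i in range(3)]

# every row of A is orthogonal to x
dots = [sum(A[r][i] * x[i] for i in range(3)) for r in range(2)]
```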
Number of ways to represent any N as sum of odd numbers?
Let's say $S(n)$ is the set of ways to write $n$ as a sum of odd numbers. We can partition this set into two subsets: $A(n)$ and $B(n)$, where $A(n)$ is the set of sums where the last summand is a $1$, and $B(n)$ is the set of all other sums. Can you see why $A(n)$ has the same size as $S(n-1)$? Can you see why $B(n)$ has the same size as $S(n-2)$? If you prove this, you find that $|S(n)| = |A(n)| + |B(n)| = |S(n-1)| + |S(n-2)|$, which is the Fibonacci recurrence relation. You can then prove by induction that your sequence is equal to the Fibonacci sequence.
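A brute-force count confirms the recurrence; a short Python sketch:

```python
from functools import lru_cache

@lru_cache(None)
def count_odd_compositions(n):
    """Number of ordered ways to write n as a sum of odd positive integers,
    counted by recursion on the first summand."""
    if n == 0:
        return 1                      # the empty sum
    return sum(count_odd_compositions(n - k) for k in range(1, n + 1, 2))

def fib(n):
    a, b = 1, 1                       # fib(1) = fib(2) = 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a
```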
Gauss curvature of C^2 surfaces
This question depends strongly on the regularity of the local isometries. If the surface is $C^2$ and there is a local isometry of class $C^2$, it will indeed preserve the Gaussian curvature. This is shown in the paper "On the Fundamental Equations of Differential Geometry" by Philip Hartman and Aurel Wintner. Nash's isometric $C^1$-embedding theorem shows that curvature is no longer an obstruction to $C^1$-isometries: there are $C^1$-isometric embeddings of $S^2$ into arbitrarily small portions of $\mathbb R^3$, which would be forbidden if curvature were preserved. $C^2$-isometric embeddings of $S^2$ into $\mathbb R^3$ are unique up to the action of the Euclidean group in the target. This, however, is a non-trivial regularity phenomenon. A side note: the rigidity stated for $C^2$-embeddings of the sphere holds furthermore for $C^{1,\alpha}$-embeddings provided $\alpha>2/3$, while the Nash-type flexibility remains true for some small $\alpha>0$. What happens at the critical exponents in between, $0<\alpha<1$, is unknown.
Equality in de Rham cohomology
Well, if those sets are pairwise disjoint, then everything splits as a direct sum: the isomorphism on $k$-forms $$\Omega^k(\cup_i U_i)\simeq\oplus_i\Omega^k(U_i)$$ is given by $\omega\mapsto (\omega|_{U_i})_i$, and the De Rham differential $d$ respects this splitting. So both the kernel and image of $d$ respect this splitting as well. Note, however, that for an infinite number of $U_j$'s the cohomology of $\cup_j U_j$ equals the product of the cohomologies, not the direct sum.
If $f$ is convex and decays exponentially, then $f'$ decays exponentially
Suppose $0\le f(x)\le e^{-x}.$ Let $x\ge 1.$ Look at the chord connecting $ (x-1,f(x-1))$ to $ (x,f(x)).$ The slope of this chord equals $$\frac{f(x)-f(x-1)}{1} \ge \frac{0-f(x-1)}{1}\ge -e^{-(x-1)}.$$ But by the MVT, this slope equals $f'(c)$ for some $c\in (x-1,x).$ Because $f'$ is increasing, we have $$f'(x) \ge f'(c) \ge -e^{-(x-1)} = - e\cdot e^{-x}.$$ Thus $0 > f'(x) \ge - e\cdot e^{-x}$ for all $x\ge 1.$ So we have the desired result in this case, and I expect the general result to follow in a like manner.
How to determine new dimensionless variables when non-dimensionsionalizing a system of ODEs?
Take a look at the Buckingham PI theorem, which provides a way for computing sets of dimensionless parameters from the given variables, even if the form of the equation is still unknown.
Solving for a variable in an inverse function
Your solution is $100\%$ correct.
Is the $L^2$ norm of vectors convex?
Let $X=(x,y),\ A=(a,b),\ C=(c,d)$. Triangle inequality: $||A-C||=||A-X+X-C||\le ||A-X||+||X-C||$. Equality (which gives the minimum) holds when $X$ lies between $A$ and $C$ on the straight line connecting $A$ and $C$. Note: $||A-C||=\sqrt{(a-c)^2+(b-d)^2}$, etc.
Prove that if $\alpha$ is injective, α∗ is injective.
Take $A=B=\Bbb Z$ and $A'=4\Bbb Z $ , $B'=2\Bbb Z$ and $\alpha(x )=2x $
If two tailed or one tailed should be used to test Hypothesis
You write: "Management wants to test whether the production rate is increased or not." So $H_1$ is $\mu > \mu_0$, and a one-tailed (right-tailed) test is the appropriate one.
Finite region in $R^3$ that is bounded by the three surfaces (cylindrical)
For these types of problems there are two similar ways to compute the volume. One is that you take long cylinders along $z$ of thickness $dr$. The volume of such cylindrical shell is $$dV=2\pi rh\ dr$$ The other is you take disks and washers (perforated disks) of height $dz$, with volume $$dV=\pi(r_2^2-r_1^2)dz$$ Here $r_1$ is the inner radius (might be $0$), and $r_2$ is the outer radius. It does not matter which method you choose, first you need to find the "corners" of the orange region in your figure. At $r=0$ you have $z=2$ and $z=8$. Then for the lowest corner: $$r=z=2-r^2$$ The only positive $r$ solution for this is $r=1$, so $r=z=1$. For the right corner: $$r=z=8-r^2$$ The positive solution is $z=r=\frac{\sqrt{33}-1}2\approx 2.37$. Now we go back to the integration. You will need to split the volume calculations in regions. In the washer method, you have three regions: 1. $z$ from $1$ to $2$, with $r_1=\sqrt{2-z}$ and $r_2=z$, 2. $z$ from $2$ to $2.37$, where $r_1=0$ and $r_2=z$, and 3. $z$ from $2.37$ to $8$, with $r_1=0$ and $r_2=\sqrt{8-z}$. For the cylindrical shells method you have only two regions: 1. $r$ from $0$ to $1$, where the height of the shell is $(8-r^2)-(2-r^2)=6$, and 2. $r$ from $1$ to $2.37$, where the height is $(8-r^2)-r$.
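If you want to check that the two set-ups really describe the same region, both integrals can be evaluated numerically; a small Python sketch using a midpoint rule over exactly the regions listed above:

```python
import math

r_star = (math.sqrt(33) - 1) / 2          # where z = r meets z = 8 - r**2

def integrate(f, a, b, steps=20000):
    """Composite midpoint rule for the integral of f over [a, b]."""
    h = (b - a) / steps
    return sum(f(a + (k + 0.5) * h) for k in range(steps)) * h

# Cylindrical shells: dV = 2*pi*r*height(r)*dr
shells = (integrate(lambda r: 2 * math.pi * r * 6.0, 0.0, 1.0) +
          integrate(lambda r: 2 * math.pi * r * ((8 - r**2) - r), 1.0, r_star))

# Washers: dV = pi*(r2**2 - r1**2)*dz
washers = (integrate(lambda z: math.pi * (z**2 - (2 - z)), 1.0, 2.0) +
           integrate(lambda z: math.pi * z**2, 2.0, r_star) +
           integrate(lambda z: math.pi * (8 - z), r_star, 8.0))
```

Both methods agree on the volume (roughly $61.1$), which is a good cross-check on the region splitting.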
Find $P(b^2\ge4ac)$ given that $a,b,c\in\{-9,-8,\dots,8,9\},a\ne0$
In the case where $|4ac| > b^2$, the probability is $1/2$, by the symmetry of changing the sign of $c$. In the case where $|4ac| \leq b^2$ and $a \neq 0$, the probability is $1$. In the case where $a=0$ one has to decide how to interpret the words "real roots". The simplest is to exclude equations of degree less than 2 (probability 1/19). If a single root of multiplicity one is allowed, or the equation $0=0$, then the answer will be different. The main part of the problem is then to count the cases where $b^2 < 4|ac|$. If $|ac| > 20$ there is no constraint on $b$ (since then $b^2 \leq 81 < 4|ac|$). There are a finite number of possibilities with $|ac| \leq 20$ and they can be accounted for by hand.
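Since the sample space has only $18\cdot 19\cdot 19$ triples, the whole count is also easy to do by machine; a quick Python sketch that additionally confirms the sign-flip symmetry used for the $|4ac|>b^2$ case:

```python
vals = range(-9, 10)
triples = [(a, b, c) for a in vals if a != 0 for b in vals for c in vals]

favorable = sum(1 for a, b, c in triples if b * b >= 4 * a * c)
p = favorable / len(triples)

# sanity check of the c -> -c pairing: it bijects {4ac > b^2} with {4ac < -b^2}
big_pos = sum(1 for a, b, c in triples if 4 * a * c > b * b)
big_neg = sum(1 for a, b, c in triples if 4 * a * c < -b * b)
```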
Reinvestment yield rate with bonds
$Fr=1000\times 0.045=45$ $$P(1+i')^n=Fr\cdot\frac{(1+j)^n-1}{j}+C$$ $$950(1+i')^{20}=45\cdot\frac{1.04^{20}-1}{0.04}+1000$$ $$i'\approx 0.046103$$ Multiply by 2 because of semiannual rates, and then the nominal annual yield is $\approx 0.092206$.
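For checking the arithmetic, here is a quick Python sketch; the data $P=950$, $F=1000$, semiannual coupon rate $4.5\%$, reinvestment rate $j=0.04$, redemption $C=1000$ and $n=20$ half-years are read off the figures above.

```python
P, F, coupon_rate = 950.0, 1000.0, 0.045
C, j, n = 1000.0, 0.04, 20

Fr = F * coupon_rate                          # 45 per half-year
accumulated = Fr * ((1 + j)**n - 1) / j + C   # coupons reinvested at j, plus redemption
i_semi = (accumulated / P) ** (1.0 / n) - 1   # semiannual yield
i_annual = 2 * i_semi                         # nominal annual, convertible semiannually
```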
Problem with integral of absolute value of a function
The integrand function is not always positive on $(-\pi,\pi)$. The $L^1$ norm of the Dirichlet kernel is divergent since $$ \left\|D_n\right\|_1 \geq 4\,\text{Si}(\pi)+\frac{8}{\pi}\log(n)\tag{1}$$ as mentioned by Wikipedia. That inequality is obtained by partitioning $(-\pi,\pi)$ into many sub-intervals whose endpoints are given by roots of the integrand function, then applying a Riemann-sum argument near the origin and Jensen's inequality far from the origin. $(1)$ is the reason behind Gibbs' phenomenon, for instance.
Conditions for a family to be normal
As Martin R pointed out, your argument is hard to judge without details. After adopting Montel's theorem, having deduced that it is necessary for $\mathfrak F_{A,B}$ to be normal that for each compact $K\subset\mathbb C$, $|f(z)|\leqslant M_K$ for some positive $M_K$ whenever $f\in\mathfrak F_{A,B}$ and $z\in K$, you can pick a compact $K_0$ that contains $0$ and $1$. Then consider $|f(0)|$ and $|f(1)|$ for each $f\in\mathfrak F_{A,B}$ and this gives the boundedness of $A$ and $B$.
Evaluation of $ \displaystyle \lim_{x \to 0} \frac{\sin ( \pi \cos(x) )}{x \sin x}$ using a Taylor Approximation
You've done everything almost right; the problem is that $$\sin {(\pi - \frac{\pi x^2}{2})} \nsim \pi - \frac{\pi x^2}{2}$$ because the argument of the sine does not go to zero! (it goes to $\pi$!) The correct way of doing it is $$ \sin {(\pi - \frac{\pi x^2}{2})} = \sin {\frac{\pi x^2}{2}} \sim \frac{\pi x^2}{2}$$ (because $\sin(\pi - x) = \sin x$); note that this time the argument of the sine actually goes to zero.
Open and Closed Quotient Maps.
It’s true that the quotient map from $\Bbb R$ to $\Bbb R/\Bbb Q$ is not open, but your argument doesn’t really make sense: $\Bbb R/\Bbb Q$ isn’t a metric space, so it’s not clear what you mean by a ball in $\Bbb R/\Bbb Q$, and $0$ and $1$ aren’t points of $\Bbb R/\Bbb Q$. Let $q$ be the quotient map; then $$q^{-1}\big[q[(0,1)]\big]=(0,1)\cup\Bbb Q\;,$$ which is not open in $\Bbb R$, so $q[(0,1)]$ is not open in $\Bbb R/\Bbb Q$, and $q$ is therefore not an open map. The same quotient space works for (b). This time consider the closed set $F=[0,1]$; $$q^{-1}\big[q[F]\big]=[0,1]\cup\Bbb Q\;,$$ which is not closed in $\Bbb R$, so $q[F]$ is not closed in $\Bbb R/\Bbb Q$, and $q$ is not a closed map. Your example for (b) does not work: if $f:\Bbb R\to\Bbb R/\{a,b\}$ is the quotient map, then $f$ is closed. Let $F$ be any closed set in $\Bbb R$. If $F\cap\{a,b\}=\varnothing$, then $f^{-1}\big[f[F]\big]=F$, so $f[F]$ is closed in $\Bbb R/\{a,b\}$. Otherwise, $f^{-1}\big[f[F]\big]=F\cup\{a,b\}$, which is closed in $\Bbb R$, so again $f[F]$ is closed in $\Bbb R/\{a,b\}$, and $f$ is a closed map. Your answer to (c) is fine.
Proving some segments equal
The image makes you think it's always true because of the way AB and BC seem to be equal, but that's not a given condition. Sorry for the bad drawing, I'm on my phone. Edit: if there is indeed a 45 degree angle at BAC, then the triangle DEC is isosceles, by noticing that the sum of its angles must be 180.
Is there an "addition theorem" of logarithm?
No, it's the other way round: $$\exp(a+b)=\exp(a)\exp(b)$$ or $$\log(ab)=\log(a)+\log(b).$$ In fact, via the complex numbers, the first identity directly leads to the trigonometric ones. We can refute the hypothesis $$\log(a+b)=f(\log(a),\log(b))$$ for rational functions $f$ with algebraic coefficients. Indeed, $$\log(2)=\log(1+1)=f(\log(1),\log(1))=f(0,0)$$ is a transcendental number, but cannot be the result of a rational function with algebraic coefficients applied to algebraic arguments.
What is the total variation of a dirac delta function $\delta(x)$?
Let $(f_1,f_2,f_3,\ldots)$ be any sequence of functions of bounded variation such that, given any continuous function $g$,$$\lim_{n\to\infty}\int f_ng=g(0).$$(Of course, there are many such sequences, but the argument is independent of that.) Then$$\lim_{n\to\infty}\mathrm{TV}(f_n)=\infty.$$This is one way to interpret your question, and then the answer is indeed $\infty$.
$2\sin 2x-2\cos2x=\frac{\cos x+\cos3x}{\cos x-\sin x}$
Hint: $\cos3x+\cos x=2\cos2x\cos x$ $\sin2x-\cos2x=\cos x(\cos x+\sin x)$ $\iff \cos^2x+\cos2x=\sin x\cos x$ Divide both sides by $\cos^2x$
Is the result of the actions $\left((\vec A+\vec B) \times (\vec A\times \vec B)\right)\cdot(\vec A \times \vec B)$ depends by $\vec A$ and $\vec B$
The vector cross product is not associative, but it does distribute over addition: $$(A+B)\times (A \times B) = A \times (A \times B) + B \times (A \times B).$$ The dot product operation is commutative. So you might have some success in applying the property $$\alpha\cdot (\beta \times \gamma) = \beta \cdot (\gamma \times \alpha) = \gamma \cdot (\alpha \times \beta).$$ Using this, let $C = A \times B$, and we find $$\begin{align*} \left((A + B)\times (C)\right) \cdot (C) &= \left(A\times (C) + B \times (C)\right)\cdot (C) \\ &= (C)\cdot (A\times (C)) + (C)\cdot (B\times (C)) \\ &= A \cdot ((C)\times (C)) + B\cdot ((C)\times (C)) \\ &= 0. \end{align*}$$
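The same cancellation is easy to confirm numerically; a tiny Python sketch with arbitrary sample vectors:

```python
def cross(u, v):
    """Cross product of two 3-vectors."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def add(u, v):
    return tuple(x + y for x, y in zip(u, v))

A = (1.0, -2.0, 3.5)
B = (0.4, 2.2, -1.0)
C = cross(A, B)

# ((A + B) x (A x B)) . (A x B) vanishes identically
result = dot(cross(add(A, B), C), C)
```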
Cauchy’s functional equation for non-negative arguments
Yes. It follows easily from induction that $f(nx)=nf(x) \ \forall n \in \mathbb{Z}^+$. Thus $f(\frac{m}{n})=\frac{1}{n}f(m)=\frac{m}{n}f(1)$. If $f(y)<0$ for some $y \in[0, +\infty)$, then $f(ny)=nf(y)$ can be made arbitrarily negative for sufficiently large $n$, contradicting the hypothesis on $f$. Thus $f(y) \geq 0 \ \forall y \in [0, +\infty)$. Now if $a \geq b$ then $f(a)=f(b+(a-b))=f(b)+f(a-b) \geq f(b)$, so $f$ is monotonic. This is sufficient to imply that $f(x)=xf(1)$ (where here we require $f(1) \geq 0$).
Structures in Non-linear Sigma Model
$\newcommand{\Reals}{\mathbf{R}}\newcommand{\dd}{\partial}\newcommand{\Cpx}{\mathbf{C}}$Your three questions boil down to the nature of pullbacks under a mapping. (I): A metric on $X$ determines a corresponding structure on $\Sigma$. (Away from critical points of $\phi$, the pullback $\phi^{*}g$ is a metric on $\Sigma$.) Particularly, the equation $$ \phi^{*}g(\dd_{z}, \dd_{\bar{z}}) = g(\phi_{*}\dd_{z}, \phi_{*}\dd_{\bar{z}}) = g_{i\bar{\jmath}}\frac{\dd \phi^{i}}{\dd z}\, \frac{\dd \phi^{\bar{\jmath}}}{\dd \bar{z}} $$ defines a metric $\phi^{*}g$ on $\Sigma$: To evaluate on a pair of tangent vectors to $\Sigma$, use $\phi_{*}$ to "push forward" the vectors to tangent vectors on $X$, then evaluate $g$. In local coordinates $w^{i} = \phi^{i}(z, \bar{z})$, the chain rule gives \begin{align*} \phi_{*}\frac{\dd}{\dd z} &= \frac{\dd \phi^{i}}{\dd z}\, \frac{\dd}{\dd w^{i}} + \frac{\dd \phi^{\bar{\jmath}}}{\dd z}\, \frac{\dd}{\dd w^{\bar{\jmath}}}, \\ \phi_{*}\frac{\dd}{\dd \bar{z}} &= \frac{\dd \phi^{i}}{\dd \bar{z}}\, \frac{\dd}{\dd w^{i}} + \frac{\dd \phi^{\bar{\jmath}}}{\dd \bar{z}}\, \frac{\dd}{\dd w^{\bar{\jmath}}}. \end{align*} (II): Similarly, a connection $D$ in $TX$ induces a connection $\phi^{*}D$ in $\phi^{*}TX$, via $$ (\phi^{*}D)_{\dd_{z}}(\phi^{*}\psi) = D_{\phi_{*}(\dd_{z})} \psi. $$ The local expression for this connection naturally involves the Christoffel symbols for the connection on $X$ because the covariant derivative of a section $\phi^{*}\psi$ of $\phi^{*}TX$ in the direction of a tangent vector $\dd_{z}$ is defined by pushing $\dd_{z}$ to $TX$ by $\phi_{*}$ and using this vector to differentiate $\psi$. (III): The derivatives are taken with respect to coordinates on $\Sigma$. An $X$-valued mapping $\phi$ is holomorphic if each component function (with respect to some holomorphic coordinate system on $X$) is an ordinary holomorphic function. In symbols, $\bar{\dd} \phi^{i} = 0$, or $$ \frac{\dd \phi^{i}}{\dd \bar{z}} = 0. 
$$ In case it helps, you've secretly been doing the same types of calculation ever since you learned vector calculus. Think of polar coordinates, $$ (x, y) = \phi(r, \theta) = (r\cos\theta, r\sin\theta). $$ By the chain rule, the mapping $\phi$ sends the coordinate vector fields on the $(r, \theta)$-plane $\Sigma$ to the fields \begin{align*} \phi_{*}\frac{\dd}{\dd r} &= \frac{\dd x}{\dd r}\, \frac{\dd}{\dd x} + \frac{\dd y}{\dd r}\, \frac{\dd}{\dd y}, \\ \phi_{*}\frac{\dd}{\dd \theta} &= \frac{\dd x}{\dd \theta}\, \frac{\dd}{\dd x} + \frac{\dd y}{\dd \theta}\, \frac{\dd}{\dd y} \end{align*} on the $(x, y)$-plane $X$, and (in case this is useful to you elsewhere) pulls back the coordinate $1$-forms on $X$ to the $1$-forms \begin{align*} \phi^{*} dx &= \frac{\dd x}{\dd r}\, dr + \frac{\dd x}{\dd \theta}\, d\theta, \\ \phi^{*} dy &= \frac{\dd y}{\dd r}\, dr + \frac{\dd y}{\dd \theta}\, d\theta \end{align*} on $\Sigma$.
Which is the correct answer from the following options-
Hint: You can discard the first two options by looking at $x_p$ for prime values of $p$. Also, it cannot be that the last two options are both correct, since they clearly contradict each other (one claiming $K=L$ is always true, the other saying $K\neq L$ is possible). In fact, only the third option is true, and proving $K=L$ is most efficiently done by looking at the limit of $x_{6n}$ as $n\to\infty$.
Show that $x \mapsto \left( x^{\top} \sigma x , -\mu^{\top}x \right)^{\top}$ transforms a given set into a convex set.
Let $n=2, \sigma = I, \mu = (-1,0)^T$, Then $X = \{(t,1-t)\}_{t \in [0,1]}$ and $f(X) = \{ (t^2+(1-t)^2, t) \}_{t \in [0,1]}$, which is not convex (it is the graph of $t \mapsto t^2+(1-t)^2$ on $[0,1]$).
Geometric Interpretation of members of $\mathrm{O}(2)\setminus\mathrm{SO}(2)$
The matrices in $O(2) \setminus SO(2)$ are exactly those that can be written as $FR$, where $R$ is a rotation, and $F$ is the diagonal matrix with $-1, 1$ as diagonal entries, i.e., a flip around the y-axis. So every matrix in your class is just a rotation followed by an orientation-reversal, or "flip", or whatever you like. (Proof of my claim: suppose $M$ is in $O(2)\setminus SO(2)$. Then $FM$ is in $SO(2)$ because it has determinant $+1$. Hence it's a rotation $R$. Since $FM = R$, and $FF = I$, we have $M = FFM = FR$. )
What is a definable set?
So even though the question talks about the language of set theory specifically, let me answer the question more generally. Also, I'll talk about definable subsets of $M$, as opposed to definable subsets of the Cartesian powers of $M$ - for example, we can ask "When is a set of ordered pairs of elements of $M$ definable?" The answer is a straightforward generalization of what I write below, and coming up with it is a good exercise. Let me first describe definability without parameters. That is, what I talk about below will not take into account the $a_i$s of your question; I'll get to those later. Suppose I have any first-order structure $M$ in some language $L$. Say, $L$ is the language of rings ($+, \times, 0, 1$) and $M$ is the ring $\mathbb{Z}$. Then $M$ has (of course) many subsets, but some of these subsets are much nicer than others, in the sense of being "describable" using the language $L$. For example, let $E$ be the set of even numbers. Then $E$ is the set of $x$ such that "There is some $y$ such that $x=y+y$." More formally, $E$ is the set of elements $x$ of $M$ such that the statement "there is some $y$ such that $x=y+y$" is a true statement about $M$. This is what is meant by "$M$ satisfies . . . " It's just saying, "... is true in $M$." In particular, being very formal we would write $$E=\{x: M\models \exists y(x=y+y)\}.$$ The formal definition of "$\models$" is actually somewhat annoying; at first, I think it's better to just think of it informally, so "$M\models \varphi$" means "$\varphi$ is a true statement about $M$." But see here, or any good logic textbook. A set like this - such that there is a formula $\varphi(x)$ in the language $L$, so that the set is equal to $\{x: M\models\varphi(x)\}$ - is called definable without parameters. Other definable subsets of $M$ include the set of perfect squares, the set of positive numbers (this one takes some work, since we don't have the symbol "$<$"!), and the set of primes.
Note that the definability of a subset of $M$ depends crucially on the language of $M$. For example, consider the structure $M'$ whose underlying set is $\mathbb{Z}$ as before, but whose language is the empty language (no symbols except $=$). Then no subset of $M'$ is definable without parameters except the whole structure and the empty set; this is a consequence of the fact, provable by induction, that definable sets are preserved by automorphisms: if $\alpha$ is an automorphism of $M$ and $D$ is a definable subset of $M$, then $\alpha(D)=D$. The claim about $M'$ now follows, since every permutation of $M'$ is an automorphism of $M'$.

OK, now what about those $a_i$s? Well, there are a few ways to treat this. First, given any subset $A\subset M$ of a structure in a language $L$, we can consider an expanded language $L_A=L\sqcup\{c_a: a\in A\}$ gotten by adding a new constant symbol for each element of $A$, and the expansion $M_A$ of $M$ gotten by taking $M$ and interpreting the symbol $c_a$ as $a$ for each $a\in A$. The act of passing from $M$ to $M_A$ is all about adding expressive power. Specifically, more sets are definable in $M_A$ than were definable in $M$! For example, every finite subset of $A$ is now definable in $M_A$, and we might get more besides. On the other hand, because definitions can only use finitely many symbols, the whole set $A$ might not be definable in $M_A$ (for example, take $M'$ from before and let $A$ be the set of evens). That is, in defining a set we only use finitely many symbols: for a specific set $D\subset M$, $D$ is definable in $M_A$ iff $D$ is definable in $M_{A_0}$ for some finite $A_0\subset A$.

We say a set $D\subset M$ is definable from parameters (usually, we just say "definable") if, for some $A$, $D$ is a definable subset of $M_A$. Note that by the above paragraph, we can assume $A$ is finite - $A$ is just the set $\{a_0, a_1, \ldots, a_n\}$ of your question! So then $Def(M)$ is the collection of definable-with-parameters subsets of $M$.
Since this is a collection of subsets of $M$, it's a subset of the powerset of $M$.

Alternatively, if you don't want to think of expanding $M$, we can think of a formula $\varphi$ in $n+1$ variables as describing a whole bunch of sets, one for each $n$-tuple of elements of $M$: given the tuple $a_1, \ldots, a_n$, we get the set $$\varphi_{a_1, \ldots, a_n}=\{m\in M: M\models \varphi(m, a_1, \ldots, a_n)\}.$$ Then a set is definable with parameters if it is of the form $\varphi_{a_1, \ldots, a_n}$ for some formula $\varphi$ and some tuple $a_1, \ldots, a_n$ of elements of $M$. I find this approach more difficult to think about, but it is certainly shorter. On the other hand, one advantage of it is that it connects nicely with definable subsets of higher Cartesian powers: a formula $\varphi$ in $n+1$ variables defines - without parameters - a subset of $M^{n+1}$; then we have that a subset of $M$ is definable-with-parameters iff it is a projection of some definable-without-parameters subset of a higher Cartesian power of $M$. This is kind of nice!
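To make the "set carved out by a formula" idea concrete, here is a minimal Python sketch (purely illustrative, with hypothetical helper names) that evaluates the defining formula of $E$ over a finite window of $\mathbb{Z}$. Of course, a finite window only approximates the infinite structure; the point is just to show the formula doing the "carving".

```python
# Illustrative sketch: evaluating the formula "there is some y with x = y + y"
# over a finite window of the ring Z. Helper names here are hypothetical.

def satisfies_even_formula(x, universe):
    """Checks M |= exists y (x = y + y), quantifying over a finite universe."""
    return any(x == y + y for y in universe)

window = range(-10, 11)  # finite stand-in for Z
E = {x for x in window if satisfies_even_formula(x, window)}
print(sorted(E))         # the even numbers in the window
```

Note that the helper quantifies $y$ over the same window, which is why this is only an approximation: over all of $\mathbb{Z}$, the existential quantifier ranges over infinitely many candidates.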
If X has an exponential distribution with expectation 1/λ, find the probability density function for λX. What is the distribution of λX?
It is clearer to work via the cumulative distribution function, $F(x)=P(X<x)=1-e^{-\lambda x}$ for $x>0$. For our new variable $\lambda X$, we seek $P(\lambda X<x)=P(X<x/\lambda) = F(x/\lambda) = 1-e^{-x}$. Differentiating, the new PDF is $e^{-x}$, i.e. $\lambda X$ has an exponential distribution with parameter $1$. This shouldn't surprise us: the mean of the new distribution is $\lambda \times 1/\lambda = 1$, and similarly the variance is $\lambda^2 \times 1/\lambda^2 = 1$.
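A quick Monte Carlo sanity check of this (a sketch, not part of the derivation): sampling $X\sim\text{Exp}(\lambda)$ for an arbitrary $\lambda$ and rescaling by $\lambda$ should give a sample mean and variance both close to $1$.

```python
import random

# If X ~ Exp(lam), then lam * X should behave like Exp(1):
# mean approximately 1 and variance approximately 1.
random.seed(0)
lam = 3.0
samples = [lam * random.expovariate(lam) for _ in range(200_000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(mean, var)  # both should be near 1
```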
Stuck on finding the canonical form of an endomorphism.
Based on the given information, the matrix with respect to the canonical basis can be computed from $$ f(e_1)+\lambda e_2+\mu e_3=2e_1+e_2+e_3, \quad f(e_2)=\lambda e_2, \quad f(e_3)=\mu e_3, $$ whence the matrix is $$\begin{bmatrix} 2 & 0 & 0 \\ 1-\lambda & \lambda & 0 \\ 1-\mu & 0 & \mu \end{bmatrix}$$ Now you can apply the information you have about the subspace spanned by $e_1$ and $e_2$.
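As a quick sanity check of the matrix above (a sketch, with arbitrary sample values substituted for $\lambda$ and $\mu$): the matrix is lower triangular, so its characteristic polynomial $\det(A - xI)$ should vanish exactly at the diagonal entries $2$, $\lambda$, $\mu$.

```python
from fractions import Fraction

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def char_poly(lam, mu, x):
    """det(A - x*I) for the matrix in the answer."""
    a = [[Fraction(2) - x, 0, 0],
         [1 - lam, lam - x, 0],
         [1 - mu, 0, mu - x]]
    return det3(a)

# Arbitrary sample values for lambda and mu:
lam, mu = Fraction(5), Fraction(-3)
print([char_poly(lam, mu, x) for x in (2, lam, mu)])  # each entry should be 0
```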
Find a six-digit perfect square of a particular form – BMO 1993 P1
Hint: The condition $n = 1001 r + 1$ does not yet express one of our conditions, namely that $n$ is a square, say $n = m^2$ for some positive integer $m$. In terms of $m$, our condition is $m^2 = 1001 r + 1$, and the key observation is that this equation can be rearranged and then factored: $$(m + 1)(m - 1) = 1001 r .$$

Now, both sides are factorizations of the same integer. The prime factorization of $1001$ is $1001 = 7 \cdot 11 \cdot 13$, so one or two of these three primes are factors of $m + 1$ and the other(s) are factors of $m - 1$. (If all three were factors of the same term, we would have $m \pm 1 = 1001$, and in both cases $m^2$ has seven digits.)

If we take the case that $7, 11$ are factors of $m + 1$ and $13$ is a factor of $m - 1$, then we have $$\left\{\begin{array}{rcl}m + 1 &=& 7 \cdot 11 s \\ m - 1 &=& 13 t\end{array}\right.$$ for some positive integers $s, t$; here $r$ is just $r = s t$. Eliminating $m$ gives $$77 s = 13 t + 2,$$ and reducing modulo $13$ and solving gives $s \equiv 11 \pmod {13}$, so the smallest positive solution is $s = 11$; substituting gives the corresponding value $t = 65$, so $r = st = 715$ (reproducing one of the given answers) and hence $n = 715716$. The next smallest positive solution is $s = 24$, but this corresponds to a four-digit $m$, and hence to an $m^2$ of at least seven digits, so $r = 715$ is the only solution from this case. Proceeding similarly for the other five cases should yield the other three solutions.
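The case analysis above is the intended solution, but the full answer set can also be confirmed by a short brute-force search over every $m$ whose square has six digits, keeping those with $m^2 \equiv 1 \pmod{1001}$:

```python
# Brute-force check (not the intended contest solution): six-digit squares
# m^2 with m^2 = 1001*r + 1, i.e. m^2 ≡ 1 (mod 1001).
# 317^2 = 100489 is the smallest six-digit square; 999^2 is the largest.
solutions = [m * m for m in range(317, 1000) if (m * m - 1) % 1001 == 0]
print(solutions)  # [183184, 328329, 528529, 715716]
```

This confirms that there are exactly four solutions, including the $n = 715716$ found in the worked case.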
What is the maximal domain of these functions?
I will suppose $f(x)$ should be $f(y)$ and $f(z)$, and that we are working over $\mathbb{R}$.

The first is a rational expression, so the only restriction is that the denominator is not $0$. Since $$2y^2+1=0$$ has no real solution, the denominator imposes no restriction on the domain. Hence $dom(f)=\mathbb{R}$.

The same thinking for the second leads to $$z-2\ne 0 \Leftrightarrow z\ne 2,$$ so the domain is $\mathbb{R}\backslash\{2\}$.
Filtering/mapping elements from one set based on certain joint conditions on the first and a second set.
$C = \{\langle x, y, z \rangle \in B \ |\ \forall \langle x_A, y_A, z_A \rangle \in A \ [ (x - x_A \leq \Delta x) \ \land\ (y - y_A \leq \Delta y) \ \land \ (z - z_A \leq \Delta z)]\}$
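For anyone implementing this, here is a direct Python transcription of the set-builder expression above, taking the universal quantifier and the inequalities exactly as written (the names `filter_by_deltas`, `dx`, `dy`, `dz` are mine, standing in for $\Delta x, \Delta y, \Delta z$):

```python
def filter_by_deltas(B, A, dx, dy, dz):
    """C = { (x, y, z) in B : for all (xa, ya, za) in A,
       x - xa <= dx and y - ya <= dy and z - za <= dz }."""
    return {(x, y, z) for (x, y, z) in B
            if all(x - xa <= dx and y - ya <= dy and z - za <= dz
                   for (xa, ya, za) in A)}

A = {(0, 0, 0), (1, 1, 1)}
B = {(0.3, 0.3, 0.3), (2, 2, 2)}
print(filter_by_deltas(B, A, 0.5, 0.5, 0.5))  # {(0.3, 0.3, 0.3)}
```

If the intended condition is a symmetric "within $\Delta$" test rather than the one-sided differences written above, replace each `x - xa <= dx` with `abs(x - xa) <= dx`.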
Prove that $ A=\sqrt{\frac{(b-c)^2}{a^2}+\frac{(c-a)^2}{b^2}+\frac{(a-b)^2}{c^2}} $ is also a rational number.
$$\sum_{cyc}\frac{a^2}{(b-c)^2}-2=\frac{\left(a^3-a^2 b-a b^2+b^3-a^2 c+3 a b c-b^2 c-a c^2-b c^2+c^3\right)^2}{(a-b)^2 (a-c)^2 (b-c)^2}$$ Since the right-hand side is a ratio of squares, the LHS can be $\leq 0$ only in very few cases, namely when the numerator vanishes: $$ abc=(a+b-c)(a-b+c)(-a+b+c).$$ Then it is not difficult to prove that this identity forces $\sum_{cyc}b^2 c^2 (b-c)^2$ to be a rational square.
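The displayed identity can be sanity-checked with exact rational arithmetic at a sample point (a quick check, not a proof; the point $(a,b,c)=(1,2,4)$ is arbitrary, chosen with $a,b,c$ distinct so no denominator vanishes):

```python
from fractions import Fraction as F

# Verify the displayed identity exactly at one rational point.
a, b, c = F(1), F(2), F(4)
lhs = a**2/(b-c)**2 + b**2/(c-a)**2 + c**2/(a-b)**2 - 2
num = (a**3 - a**2*b - a*b**2 + b**3 - a**2*c + 3*a*b*c
       - b**2*c - a*c**2 - b*c**2 + c**3)
rhs = num**2 / ((a-b)**2 * (a-c)**2 * (b-c)**2)
print(lhs == rhs, lhs)  # True 529/36
```

Note that at this point the value $529/36 = (23/6)^2$ is itself a rational square, as the identity predicts.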
Infinite series for arctan of x
You are only going to get an approximation good to some number of decimal places. If you use $N$ terms in the series, then (since the series is alternating with terms decreasing in magnitude) the error is at most the magnitude of the $(N+1)$th term. The question you must ask yourself is: if I want $M$ decimal places of accuracy, then how big must $N$ be?

Example: say you want $M=6$ places of accuracy at $x=\sqrt{2}-1$. Then you need $$\frac{(\sqrt{2}-1)^{2 N+1}}{2 N+1} < 10^{-6}.$$ By trial and error, I get $N=6$. That means you only need $6$ terms in the series to get that level of accuracy.
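A quick check of this claim (using Python's `math.atan` as the reference value, at $x=\sqrt 2 - 1$): summing the first $6$ terms of $\arctan x = x - x^3/3 + x^5/5 - \cdots$ lands within $10^{-6}$ of the true value.

```python
import math

# Partial sum of the arctan series with N = 6 terms at x = sqrt(2) - 1.
x = math.sqrt(2) - 1
partial = sum((-1)**k * x**(2*k + 1) / (2*k + 1) for k in range(6))
error = abs(partial - math.atan(x))
print(error < 1e-6)  # True
```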