Finite satisfiability and Zorn's lemma
Let $\mathcal Z$ be the set of candidates for $\Delta$ if we ignore (iii), i.e., the elements of $\mathcal Z$ are all sets $A$ of wffs such that $\Sigma \subseteq A$ and $A$ is finitely satisfiable. Then $\mathcal Z$ is partially ordered by $\subseteq $. Show that it is inductively ordered and conclude. However, this will work only if $\Sigma$ is nice.
Do all norms agree on which vector is the longest?
No; different norms can disagree. If they all agreed, the norms would only be multiples of each other. For example: $$\|(30,40)\|_2=50 > 45 =\|(0,45)\|_2$$ $$\|(30,40)\|_\infty=40 < 45 =\|(0,45)\|_\infty$$
What will a circle look like considering this distance function?
Yes. What you obtained is right. In general, the $p$-metric, where $p \geq 1$, is defined as $$d_S(P,Q) = \left(\vert x_2 - x_1 \vert^p + \vert y_2 - y_1 \vert^p\right)^{1/p}$$ Now it is easy to prove that as $p \to \infty$, we get the metric $\max\{|x_2-x_1|,|y_2-y_1|\}$. Now you can use plotting software and see how the unit ball looks for different $p$'s. You will find that for $p=1$ it is a diamond, and as you increase $p$ you encompass "more" and "more" region in the plane. For $p=2$ you get the "usual" circle, and for $p \to \infty$ you get the "unit square". The diagram below shows the unit ball for different $p$'s. The innermost diamond is the case $p=1$, which is also called the taxi-cab metric. As $p$ increases, the "area" enclosed by the unit ball slowly increases. The outer unit square is the metric you get when you let $p \to \infty$, i.e., the metric $\max\{|x_2-x_1|,|y_2-y_1|\}$.
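In case the original image does not come through, here is a minimal matplotlib sketch (my addition) that regenerates the picture; the particular $p$ values and the direction-rescaling trick are arbitrary choices:

    import numpy as np
    import matplotlib.pyplot as plt

    # Plot the unit ball {(x,y) : (|x|^p + |y|^p)^(1/p) = 1} for several p.
    theta = np.linspace(0, 2 * np.pi, 400)
    ux, uy = np.cos(theta), np.sin(theta)

    for p in [1, 1.5, 2, 4, 16]:
        # rescale each direction vector so its p-norm is exactly 1
        r = (np.abs(ux) ** p + np.abs(uy) ** p) ** (1 / p)
        plt.plot(ux / r, uy / r, label=f"p = {p}")

    plt.gca().set_aspect("equal")
    plt.legend()
    plt.title("Unit balls of the p-metric")
    plt.show()

Large $p$ (here $p=16$) already hugs the limiting unit square of the $\max$ metric.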
simple combinatorics question about object division
If there were no restrictions, then the number of solutions of the equation $$x_1 + x_2 + \cdots + x_{15} = 300$$ in the non-negative integers would be $$\binom{300 + 14}{14} = \binom{314}{14}$$ However, we want each $x_i \leq 40$. Thus, we must subtract the number of solutions for which one or more of the $x_i \geq 41$. Since $7 \cdot 41 = 287 < 300 < 328 = 8 \cdot 41$, up to seven of the $x_i$'s could be at least $41$. To count these, we use the Inclusion-Exclusion Principle. Subtracting the number of solutions in which exactly one $x_i$ exceeds $40$ from the total removes each solution in which exactly two $x_i$'s exceed $40$ twice, so we must add the number of solutions in which exactly two $x_i$'s exceed $40$. Removing the number of solutions in which exactly one $x_i$ exceeds $40$ from the total number of solutions, then adding back the number of solutions in which exactly two $x_i$'s exceed $40$, counts each solution in which exactly three $x_i$'s exceed $40$ once (since each such solution was first removed three times, then restored three times), so we must subtract the number of solutions in which exactly three $x_i$'s exceed $40$. By similar reasoning, we must add the number of solutions in which exactly four $x_i$'s exceed $40$, subtract the number in which exactly five $x_i$'s exceed $40$, add the number of solutions in which exactly six $x_i$'s exceed $40$, then subtract the number of solutions in which exactly seven $x_i$'s exceed $40$. Suppose $x_1 > 40$. Let $y_1 = x_1 - 41$. \begin{align*} x_1 + x_2 + \cdots + x_{15} & = 300\\ y_1 + 41 + x_2 + \cdots + x_{15} & = 300\\ y_1 + x_2 + \cdots + x_{15} & = 259 \end{align*} The number of solutions of this equation in the non-negative integers is $$\binom{259 + 14}{14} = \binom{273}{14}$$ Since there are $15$ ways for exactly one of the $x_i$'s to exceed $40$, the number of solutions in which exactly one $x_i > 40$ is $$\binom{15}{1}\binom{273}{14}$$ Suppose $x_1$ and $x_2$ both exceed $40$. Let $y_1 = x_1 - 41$ and $y_2 = x_2 - 41$. Then \begin{align*} x_1 + x_2 + x_3 + \cdots + x_{15} & = 300\\ y_1 + 41 + y_2 + 41 + x_3 + \cdots + x_{15} & = 300\\ y_1 + y_2 + x_3 + \cdots + x_{15} & = 218 \end{align*} This equation has $$\binom{218 + 14}{14} = \binom{232}{14}$$ solutions in the non-negative integers. Since there are $\binom{15}{2}$ ways for exactly two of the $x_i$'s to exceed $40$, there are $$\binom{15}{2}\binom{232}{14}$$ solutions in which exactly two of the $x_i$'s exceed $40$. Calculate the number of solutions in which exactly three, exactly four, exactly five, exactly six, and exactly seven of the $x_i$'s exceed $40$, then use the Inclusion-Exclusion Principle to determine the number of solutions in which each $x_i \leq 40$. Your solution should look something like $$\binom{314}{14} - \binom{15}{1}\binom{273}{14} + \binom{15}{2}\binom{232}{14} - \cdots + \cdots - \cdots + \cdots - \cdots$$
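To evaluate the alternating sum numerically, here is a short Python sketch (my addition); the cross-check against the coefficient of $x^{300}$ in $(1+x+\cdots+x^{40})^{15}$ is independent of the Inclusion-Exclusion argument:

    from math import comb

    # inclusion-exclusion: solutions of x1+...+x15 = 300 with 0 <= xi <= 40
    total = sum((-1) ** k * comb(15, k) * comb(300 - 41 * k + 14, 14)
                for k in range(8))  # 8*41 > 300, so at most 7 terms matter

    # independent check: coefficient of x^300 in (1 + x + ... + x^40)^15
    poly = [1]
    for _ in range(15):
        new = [0] * (len(poly) + 40)
        for i, c in enumerate(poly):
            for j in range(41):
                new[i + j] += c
        poly = new

    assert total == poly[300]
    print(total)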
Use sum and difference formulas to find the derivative of the function. How to make my answer match the correct answer?
$$\frac{dy}{dx}=\frac{d \left( 3x-1+\frac{1}{x}\right)}{dx}=\frac{d(3x)}{dx}-\frac{d(1)}{dx}+\frac{d(\frac{1}{x})}{dx}=3-0-\frac{1}{x^2}$$
Hints of showing these two sets are diffeomorphic
Let $g$ be the map that takes $(r,\theta)\rightarrow (r,\beta\theta)$ (suitable $\beta$) in polar coordinates and on a suitable 'wedge'. It is $C^\infty$ outside of the origin. Let $h$ be a linear map, mapping (in cartesian coordinates) $(1;0)$ to $(1;0)$ and (suitable $\theta_0$): $$(\cos(\theta_0);\sin(\theta_0)) \mapsto (\cos(\beta\theta_0);\sin(\beta\theta_0))$$ Now let $\rho(r)$ be a $C^\infty$ function which is identically $1$ for $0\leq r\leq \delta_0$ and identically zero for $r\geq \delta_1>\delta_0$. Then (here writing $h$ in polar coordinates): $$f(r,\theta) = \rho(r) h(r,\theta) + (1-\rho(r)) g(r,\theta)$$ will do the job in the first case (but not the second). I think you are right that the two other sets are not diffeomorphic, as a linear map may not take a wedge of angle $>\pi$ to one of angle $<\pi$ (and vice versa). Quite interesting btw.
What does this mean??
The $f|A$ means $f$ restricted to $A$, which is just the function $f$ but where we only take $x$ values from the subset $A$ of its domain. Also, two functions with a common domain are equal iff for all $x$ in that domain, their values agree on $x$. So applying this, the formula means: $$\forall x \in A_\lambda \cap A_\mu: f_\lambda(x) = f_\mu(x)$$ The context: we have $f_{\lambda}: A_\lambda \to Y$ (or whatever the codomain is) and $f_\mu: A_\mu \to Y$. Then for all $x$ in $A_\lambda \cap A_\mu$ we have two possible candidate values for a combination function of $f_\lambda$ and $f_\mu$: $x \in A_\lambda$, so $f_\lambda(x)$ is defined, and $x \in A_\mu$, so $f_\mu(x)$ is also defined. As a function can only have one value for a given $x$, we need to have that actually $f_\lambda(x) = f_\mu(x)$ for such a combination function to be definable. As this holds for all $x$ in this intersection, we get $f_\lambda|_{A_\lambda \cap A_\mu} = f_\mu|_{A_\lambda \cap A_\mu}$ as the condition, in a shorthand way. In that case we can define $f: \cup_{\lambda \in L} A_\lambda \to Y$ by: if $x \in A_\lambda$, $f(x) = f_\lambda(x)$, and by the above condition we have that this does not depend on the $\lambda$, because if $x \in A_\mu$ as well, for $\mu \in L$, it would be in $A_\lambda \cap A_\mu$ and there $f_\lambda$ and $f_\mu$ agree on values.
Pulse vertices of a quad from a center point?
This question should perhaps be posted on the Game Development SE site. However, I shall try to provide a solution for its mathematical part. The points $P(x,y,z)$ of the water surface can be modeled to lie on the surface $$z = \frac{\sin(x^2 + y^2 - t)}{x^2 + y^2 + 1},$$ where $(x,y) \in [-L, L] \times [-L, L]$, in which $L$ specifies the extent of the rendered water surface and the parameter $t$ varies with time, i.e. in each iteration/render cycle. A visualization of this can be found here.
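As a rough illustration (mine, not from the original post), a Python sketch that samples this height field on a grid of quad vertices and advances $t$ each render cycle; the grid resolution and the extent $L=5$ are arbitrary choices:

    import numpy as np

    # z(x, y, t) = sin(x^2 + y^2 - t) / (x^2 + y^2 + 1)
    def water_height(x, y, t):
        r2 = x ** 2 + y ** 2
        return np.sin(r2 - t) / (r2 + 1.0)

    L = 5.0                      # extent of the rendered water surface
    xs = np.linspace(-L, L, 64)  # 64 x 64 vertex grid (assumed)
    X, Y = np.meshgrid(xs, xs)

    for frame in range(3):       # in a real loop, t advances every frame
        t = 0.1 * frame
        Z = water_height(X, Y, t)
        print(f"t = {t:.1f}, height at the center vertex: {Z[32, 32]:+.4f}")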
find the grammar for the language that contains all and only the words that have the form: $(a ... b c (b) (c c) b ) (a (c b) c ... a (b) a b)$
Here is a grammar for your language: $\begin{array}{l} S\to SS|(T)\\ T\to (T)|TT|A\\ A\to aA|bA|cA|\epsilon. \end{array} $
How to find the solutions to this modular equation: $-2131\cdot 3^x\equiv y\pmod{1763}$?
Solution in pari/gp:

    erd()=
    {
      m = Mod(3, 1763);
      h = znorder(m);
      print("\nMultiplicative order of "m" is "h"\n");
      S = [];
      for(y=0, 1762,
        x = znlog(y/(-2131), m, h);
        if(x, S = concat(S, [[x,y]]))
      );
      S = vecsort(S);
      print("[x, y] = "S)
    };

Output:

    Multiplicative order of Mod(3, 1763) is 168
    [x, y] = [[1, 659], [2, 214], [3, 642], [4, 163], ... [165, 1227], [166, 155], [167, 465]]

Verify for $(x,y)$ = [2, 214]:

    Mod(-2131,1763)*Mod(3,1763)^(2+168*0) = Mod(214, 1763)
    Mod(-2131,1763)*Mod(3,1763)^(2+168*777) = Mod(214, 1763)

i.e. $-2131\cdot 3^{2+168\cdot k}\equiv 214\pmod {1763}$, where $k=0,1,2,3,\dots$

Verify for $(x,y)$ = [167, 465]:

    Mod(-2131,1763)*Mod(3,1763)^(167+168*0) = Mod(465, 1763)
    Mod(-2131,1763)*Mod(3,1763)^(167+168*77777) = Mod(465, 1763)

i.e. $-2131\cdot 3^{167+168\cdot k}\equiv 465\pmod {1763}$, where $k=0,1,2,3,\dots$
Problem on the dimension of a linear space
Is this equation $r(G(u)) = r(\text{GL}(m))$ valid? Yes. Let $A_1, \dots, A_s\in \text{GL}(m)$ be such that any element of $\text{GL}(m)$ can be expressed as their linear combination. Let $v=uA\in G(u)$, and let $A=\lambda_1A_1+\dots+\lambda_sA_s$. Then $v = \lambda_1uA_1+\dots+\lambda_suA_s$, i.e., any element of $G(u)$ is a linear combination of $uA_1,\dots,uA_s$. Thus $r(G(u)) \leq r(\text{GL}(m))$; together with the opposite inequality, it means that these two values are equal. What's the rank of $\text{GL}(m)$? It's $m^2$. Let $E^{i,j}$, where $1\leq i,j\leq m$, be $m\times m$ matrix with $1$ at the position $(i,j)$ and $0$ at all other positions, and let $I$ be the unit $m\times m$ matrix. Then $I\in\text{GL}(m)$, and for any $i,j$ $I+E^{i,j}\in\text{GL}(m)$, so $E^{i,j}\in\left<\text{GL}(m)\right>$, and the matrices $E^{i,j}$ span the whole $M(m,m)$.
If $E$ is a measurable set, how to prove that there are Borel sets $A$ and $B$ such that $A\subset E$, $E\subset B$ and $m(A)=m(E)=m(B)$?
I assume by $m$ you mean Lebesgue measure on $\mathbb R^n$. Use that this measure is regular. This gives us that, if $m(E)<\infty$, then for any $n$ there are a compact set $K_n$ and an open set $U_n$ with $K_n\subset E\subset U_n$, and $m(E)-1/n<m(K_n)$ and $m(U_n)<m(E)+1/n$. This implies that $A=\bigcup_n K_n$ and $B=\bigcap_n U_n$ have the same measure as $E$, and they are clearly a Borel subset and a Borel superset of $E$, respectively. If $E$ has infinite measure, it is even easier: Take as $B$ the set $\mathbb R^n$. As before, regularity gives us for each $n$ a compact set $K_n$ with $K_n\subset E$ and $m(K_n)\ge n$. Then we can again take as $A$ the set $\bigcup_n K_n$.
Finding the amount of acid removed from a tank after a repetitive process.
To simplify the notation, let's call the $729$-liter capacity $T$ and the $3x$ liters spilled/replaced $y$. Also, do not use percentages but unitary ratios to evaluate the mixture content. So we indicate by $R_w$ the ratio (quantity of water)/(total), and it will be $R_a=1-R_w$. Now: At step 0: you have no water, so that $R_w(0)=0$. At step 1: you spill $y$ liters of acid and replace with $y$ of water, so $R_w(1)=y/T$. At step 2: you spill $y$ liters of the mixture, which contains a fraction $y/T$ of water, so you spill $y\cdot y/T$ liters of water and fill back with $y$ liters, so you change the quantity of the water by $y(1-y/T)$ liters, meaning that you change the ratio by $(y/T)(1-y/T)$ (which, with the opposite sign, is also the change of ratio of the acid). So $R_w(2)-R_w(1)=y/T-(y/T)R_w(1)$. We can easily see that this recurrence holds also for the next steps, and therefore: $$ R_{\,w} (n) = \left( {1 - y/T} \right)R_{\,w} (n - 1) + y/T\quad \left| {\,R_{\,w} (0)} \right. = 0 $$ For the acid instead, which you spill but do not replace, it will be: $$ R_{\,a} (n) = \left( {1 - y/T} \right)R_{\,a} (n - 1)\quad \left| {\,R_{\,a} (0)} \right. = 1 $$ The above readily solve to: $$ R_{\,w} (n) = 1 - \left( {1 - y/T} \right)^{\,n} \quad \quad R_{\,a} (n) = \left( {1 - y/T} \right)^{\,n} $$ And, from here, I suppose you can take over.
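A quick simulation of the process against the closed form (my addition); $T=729$ is from the problem, while $y=27$ is just a stand-in for the unspecified $3x$:

    # spill y liters of mixture, refill with y liters of water, repeat
    T, y = 729.0, 27.0
    acid, water = T, 0.0

    for n in range(1, 6):
        acid -= y * (acid / T)    # acid lost in the spilled mixture
        water -= y * (water / T)  # water lost in the spilled mixture
        water += y                # refill with pure water
        closed = (1 - y / T) ** n
        print(f"n={n}: simulated R_a = {acid / T:.6f}, closed form = {closed:.6f}")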
Modelling "If-Then" with equality constraints in an Integer Linear Program
The following inequalities do the job: $$x\le y\le U x.$$ Here, with $x\in\{0,1\}$ a binary variable and $U$ an upper bound on $y$: $x=0$ forces $y=0$, while $x=1$ reduces the constraints to $1\le y\le U$.
How to find a general equation for the number of colored smaller cubes on a large cube?
A $3\times 3\times 3$ cube is cut into $27$ smaller $1\times 1\times 1$ cubes. Let $A,B,C,D$ be the events of a small cube having $0,1,2,3$ sides painted. Then: $$n(A)=1 \ \text{(the central cube inside)}\\ n(B)=6 \ \text{(the central cubes on the faces)}\\ n(C)=12 \ \text{(the central cubes on the edges)}\\ n(D)=8 \ \text{(the cubes on the vertices)}$$ Let $P$ be the event that a randomly chosen face of a small cube is painted; then: $$P(A)\cdot P(P|A)+P(B)\cdot P(P|B)+P(C)\cdot P(P|C)+P(D)\cdot P(P|D)= \\ \frac{1}{27}\cdot \frac{0}{6}+\frac{6}{27}\cdot \frac{1}{6}+\frac{12}{27}\cdot \frac{2}{6}+\frac{8}{27}\cdot \frac{3}{6}=\frac13.$$ Alternatively, $6\times 9$ unit faces are painted out of $6\times 27$ total unit faces, which results in $1/3$.
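A small enumeration in Python (my addition) confirming both the counts and the $1/3$:

    from itertools import product

    # count painted faces of each 1x1x1 cube in the 3x3x3 cube
    counts = {0: 0, 1: 0, 2: 0, 3: 0}
    painted_faces = 0
    for x, y, z in product(range(3), repeat=3):
        k = sum(c in (0, 2) for c in (x, y, z))  # boundary coordinates
        counts[k] += 1
        painted_faces += k

    print(counts)                    # {0: 1, 1: 6, 2: 12, 3: 8}
    print(painted_faces / (27 * 6))  # 54/162 = 1/3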
What type of function is $y=x^{-1}$?
$$y=x^{-1}$$ So $y=x^{-1}$ is not a polynomial, because $-1 \notin \mathbb Z^+_0$. $$y=\frac{1}{x}$$ $$y=\frac{h(x)}{g(x)}$$ Where $h(x)=1$ and $g(x)=x$. So for $x \neq 0$, $y$ is a rational function.
If $z=\cos \theta + i \sin \theta$, express $\displaystyle \frac {1}{1-z \cos \theta}$ in the form $a+i\cdot b$.
$$\dfrac1{1-\cos\theta(\cos\theta+i\sin\theta)}=\dfrac1{\sin^2\theta-i\sin\theta\cos\theta}=\dfrac1{-i\sin\theta}\cdot\dfrac1{\cos\theta+i\sin\theta}=\dfrac{\cos\theta-i\sin\theta}{-i\sin\theta}$$ Multiplying numerator and denominator by $i$ gives the required form: $$\dfrac{i\cos\theta+\sin\theta}{\sin\theta}=1+i\cot\theta,$$ so $a=1$ and $b=\cot\theta$.
Prove that every odd prime divides a number of the form $l^2+m^2+1$ $(l,m\in \mathbb {Z})$
It's a classical proof that uses the Pigeonhole Principle. You can prove a more general result with the same method: If $p$ is an odd prime (it's clear when $p=2$), then for all $a,b,c\in\mathbb Z$ such that $p\nmid a,b$ there exist $x,y\in\mathbb Z$ such that $p\mid ax^2+by^2+c$. Proof: First I'll prove there are $(p+1)/2$ squares mod $p$. Notice $$r^2\equiv s^2\pmod{p}\iff (r+s)(r-s)\equiv 0\pmod{p}\iff r\equiv \pm s\pmod{p},$$ so $0, 1^2, 2^2,\ldots, ((p-1)/2)^2$ are all the squares mod $p$, and there are $(p+1)/2$ of them. Therefore, in the congruence $ax^2\equiv -by^2-c\pmod{p}$, the LHS can take $(p+1)/2$ different values mod $p$ and the RHS can also take $(p+1)/2$ different values. Since $(p+1)/2+(p+1)/2=p+1>p$, by the Pigeonhole Principle the two value sets overlap, so there exists at least one solution $x,y\in\mathbb Z$.
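A brute-force check in Python (my addition) of the special case $a=b=c=1$, i.e. that every odd prime below $200$ divides some $l^2+m^2+1$:

    from sympy import primerange

    for p in primerange(3, 200):
        squares = {(r * r) % p for r in range(p)}
        # need l^2 = -m^2 - 1 (mod p) to land on a square for some m
        assert any((-s - 1) % p in squares for s in squares), p
    print("verified for all odd primes below 200")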
combinatorics, choose specific number of elements from sets
If the sets' elements are considered to be different, and as each can be used only once, then it's $3\cdot 2\cdot{3 \choose 1}=18$.
Why can I multiply a constant to a first derivative to get the second derivative
$$ \dfrac{d^2W}{dt^2} = \dfrac{d}{dt}\left(\dfrac{dW}{dt}\right) = \dfrac{d}{dt}\left(\dfrac{1}{25}(W-300)\right) = \dfrac{1}{25}\dfrac{dW}{dt} - \dfrac{1}{25}\dfrac{d 300}{dt} = \dfrac{1}{25}\left(\dfrac{1}{25}(W-300)\right).$$
Transcendence degree/dimension of a curve
1) Yes, but the formulation "$f$ generates a quadratic extension" is awkward. I would say "$\bar k(x,y)$ is a quadratic extension of $\bar k(x)$, because the minimum polynomial of $y$ over $\bar k(x)$ is $f(x,Y) \in \bar k(x)[Y]$". 2) Easier just to say that $f(x,y) = 0$, so $x$ and $y$ are algebraically dependent. But 1) also already says this. 3) You mean, for instance, $\bar k(x,y) = \bar k(V_f) \supseteq \bar k(x) \supseteq \bar k$? The first $\supseteq$ is algebraic of degree 2, the second transcendental of transcendence degree 1. Or the other way around, $\bar k(x,y) \supseteq \bar k(y) \supseteq \bar k$. Now the first $\supseteq$ is algebraic of degree 3, the second still transcendental of transcendence degree 1.
Proving this convex hull lemma
Hint 1 : intuition (?) When you're at point $q$, whatever direction you look you have $k$ points "in front" of you. The repeated hull process guarantees that you have $k$ "layers" of points in your point set $P$, so intuitively to have at least $k$ points in every direction, you want to put $q$ in the "innermost layer" of $P$. Hint 2 : a result that can be useful (formal proof omitted) If $q$ belongs to the convex hull of $P$, then any closed half-plane whose bounding line passes through $q$ contains at least one vertex of that convex hull. Hint 3 : basically the solution Take a point $q$ in the $k$-th convex hull ($H_k$ with the pdf's notation). Then $q$ belongs to every $H_i$, $1\le i\le k$, so by Hint 2 every closed half-plane bounded by a line through $q$ contains at least one vertex of each of the $k$ nested hulls, i.e., at least $k$ points of $P$.
Intersections Between a Cubic Bézier Curve and a Line
The calculations are roughly the same regardless of what line is used. Suppose the line has equation $lx+my+nz = d$. In vector form, this is $$ \mathbf{A}\cdot \mathbf{X} = d, $$ where $\mathbf{A} = (l,m,n)$. Also, suppose the Bézier curve has equation $$ \mathbf{X}(t) = (1-t)^3\mathbf{P}_0 + 3t(1-t)^2\mathbf{P}_1 + 3t^2(1-t)\mathbf{P}_2 + t^3\mathbf{P}_3 $$ Substituting this into the previous equation, we get $$ (1-t)^3(\mathbf{A} \cdot \mathbf{P}_0) + 3t(1-t)^2(\mathbf{A} \cdot \mathbf{P}_1) + 3t^2(1-t)(\mathbf{A} \cdot \mathbf{P}_2) + t^3(\mathbf{A} \cdot \mathbf{P}_3) - d = 0 $$ This is a cubic equation that you need to solve for $t$. The $t$ values you obtain (at most 3 of them) are the parameter values on the Bézier curve at the intersection points. If any of these $t$ values is outside the interval $[0,1]$, you'll probably want to ignore it. In your particular case, you know that the line passes through either the start-point or the end-point of the Bézier curve, so either $t=0$ or $t=1$ is a solution of the cubic equation, which makes things easier. Suppose $t=0$ is a solution. Then the cubic above can be written in the form $$ t(at^2 + bt + c) =0 $$ Now you only have to find $t$ values in $[0,1]$ that satisfy $at^2 + bt +c =0$, which is easy.
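A sketch of the 2D case in Python (my own illustration; numpy.roots solves the cubic). The power-basis coefficients come from expanding the Bernstein form above:

    import numpy as np

    def bezier_line_intersections(P0, P1, P2, P3, A, d):
        c = [np.dot(A, P) for P in (P0, P1, P2, P3)]  # c_i = A . P_i
        # Bernstein -> power basis for f(t) = sum_i B_i(t) c_i - d
        a3 = -c[0] + 3 * c[1] - 3 * c[2] + c[3]
        a2 = 3 * c[0] - 6 * c[1] + 3 * c[2]
        a1 = -3 * c[0] + 3 * c[1]
        a0 = c[0] - d
        roots = np.roots([a3, a2, a1, a0])
        # keep real roots inside [0, 1], as suggested above
        return [t.real for t in roots
                if abs(t.imag) < 1e-9 and -1e-9 <= t.real <= 1 + 1e-9]

    # example: the horizontal line y = 0.5, i.e. A = (0, 1), d = 0.5
    ts = bezier_line_intersections((0, 0), (0, 1), (1, 1), (1, 0), (0, 1), 0.5)
    print(ts)  # two parameter values, symmetric about t = 0.5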
stochastic Birth model simulation vs deterministic exponential growth not equal
It sounds like there should be some discrepancy between the model and the simulations. The rate equation you wrote for your model assumes a constant $\lambda$, but in your simulations it's time-varying.
Find a basis of a 3rd order polynomial that contains the basis of a kernel
Hint Theorem: any linearly independent family of vectors can be completed into a basis by picking vectors from an existing basis. So take a basis of $V=\mathbb R_3[t]$... there is a famous one! And pick one of the vectors of this famous basis to transform the linearly independent family $B$ of vectors that generates $\ker \theta$ into a basis of $V$. Additional hint... in your picking process, some vectors are better than others.
How to find out if a function is surjective or injective?
The first function is injective as well as surjective. Since all elements of the codomain have preimages, $f$ is surjective. Again, since the domain and codomain are finite sets containing the same number of elements, $f$ is injective as well. Similar logic can be applied to check the other functions.
Linear connection on a manifold: Math vs. Physics
Your first definition of $\nabla$ is often referred to as the covariant derivative of a vector field. More generally, you can define a connection $\nabla$ on a vector bundle $E$ as a map between sections on $E$ $$\nabla:\Gamma(E)\to\Gamma(E\otimes T^*M)\qquad\text{satisfying}\qquad\nabla(fe_1+e_2)=e_1\otimes\mathrm{d}f+f\nabla e_1+\nabla e_2$$ for $e_i\in\Gamma(E)$ and $f\in C^\infty(M)$. This is Wald's approach. On the other hand, if you "insert" a vector field $X\in\mathcal{T}(M)$ into the second component, you obtain a map $$\nabla:\Gamma(E)\times\mathcal{T}(M)\to\Gamma(E)$$ which is commonly denoted as $\nabla e(X)\equiv\nabla_X e$ and in the case of $\Gamma(E)=\mathcal{T}(M)$ is exactly how Lee introduces a linear connection. However, I am not sure if the two notions can be summarized as "Math vs. Physics", since both notions are used in mathematics (as far as I know).
Finding all continuous and discontinuous points of composite functions
Composition of continuous functions is still continuous, hence $f \circ g$ can only be discontinuous where $g=0$, which is impossible. Hence $f\circ g$ is continuous everywhere. For $g \circ f$, we need to check $x=0$: when $x$ is near $0$ (but nonzero), $g\circ f(x)=2$, while $g\circ f(0)=1$; hence $\lim_{x\to0}g\circ f(x)=2\neq g\circ f(0)$, so it is discontinuous only at $x=0$.
Induction proof fibonacci numbers
Sum up the recurrence relations: $$\sum_{i=1}^n F_{2i}=\sum_{i=1}^n F_{2i-1}+\sum_{i=1}^n F_{2i-2}=\sum_{i=1}^n F_{2i-1}+\sum_{i=0}^{n-1} F_{2i}$$ As $F_0=0$, subtracting $\sum_{i=1}^{n-1} F_{2i}$ from both sides simplifies this equality to: $$F_{2n}=\sum_{i=1}^n F_{2i-1}.$$
Find an equation for a moving rod
Firstly obtain the straight-line equation of the rod at any point in the slide. If $x_0$ and $y_0$ are the $x$-intercept and $y$-intercept, then by Pythagoras, $x_0^2 + y_0^2 = 1$. So we have the equation of this line: \begin{eqnarray*} y &=& -\dfrac{y_0}{x_0}x + y_0 \\ &=& -\dfrac{y_0}{\sqrt{1 - y_0^2}}x + y_0 \end{eqnarray*} With this equation we treat $x$ as a constant and try to maximise $y$ with respect to $y_0$. That is, for each $x$ we want the maximum $y$ these equations achieve. To do this we differentiate: \begin{eqnarray*} \dfrac{dy}{dy_0} &=& \dfrac{-x}{\sqrt{1 - y_0^2}} - \dfrac{y_0^2x}{\left(1 - y_0^2\right)^{\frac{3}{2}}} + 1 \\ &=& 1 - \dfrac{x}{\left(1 - y_0^2\right)^{\frac{3}{2}}} \\ \end{eqnarray*} Setting this to $0$ gives $y_0 = \sqrt{1 - x^{\frac{2}{3}}}$ and $x_0 = x^\frac{1}{3}$. Substitute these back to find $y$: \begin{eqnarray*} y &=& \dfrac{-\sqrt{1 - x^\frac{2}{3}}x}{x^\frac{1}{3}} + \sqrt{1 - x^\frac{2}{3}} \\ &=& \left(1 - x^\frac{2}{3}\right)^\frac{3}{2} \end{eqnarray*} So the region you want is the region bounded by this curve and the $x$ and $y$ axes.
What is the coordinate of a point $P$ on the line $2x-y+5=0$ such that $|PA-PB|$ is maximum where $A=(4,-2)$ and $B=(2,-4)$
Now $PA=\sqrt{(x-4)^2+(y+2)^2}$ and $PB=\sqrt{(x-2)^2+(y+4)^2}$. Now using the cosine formula we get $$\displaystyle \cos \theta = \frac{(PA)^2+(PB)^2-(AB)^2}{2\cdot (PA)\cdot (PB)}$$ Now we know that $$\displaystyle |\cos \theta | \leq 1\Rightarrow \left|\frac{(PA)^2+(PB)^2-(AB)^2}{2\cdot (PA)\cdot (PB)}\right|\leq 1$$ So we get $$|(PA)^2+(PB)^2-(AB)^2|\leq 2|PA||PB|\Rightarrow \left |PA-PB\right|^2\leq |AB|^2$$ So we get $|PA-PB|\leq |AB|$, and equality holds when $\theta = 0$, meaning $A,P,B$ are collinear (with $P$ outside the segment $AB$). So the equation of line $PAB$ is $$\displaystyle y+4 = \frac{-2+4}{4-2}(x-2)\Rightarrow x-y=6$$ Now solving $2x-y=-5$ and $x-y=6\;,$ we get $(x,y) = (-11,-17)$
Graph of $\sin(x)$ along the line $y=x$
Hint: Consider the parametrization of your curve: $$\mathbf{s}(t) = (t, \sin(t)), \quad t \in \mathbb{R},$$ then, the transformation given by: $$ \mathbf{p}(t;\theta) = M \, \mathbf{s}^T(t), \quad M = \left(\begin{array}{cc} \sin{\theta} & \cos{\theta} \\ -\cos{\theta} & \sin\theta \end{array}\right),$$ applies a clockwise rotation of angle $\frac{\pi}{2}-\theta$ to $\mathbf{s}(t)$; in particular, $\theta=\frac{\pi}{4}$ gives the $45^\circ$ clockwise rotation that carries the line $y=x$ onto the $x$-axis. Cheers!
Solving a differential equation with composite functions
You say that $$ c(a,b) = f(a+e^b) + g(a-e^b) $$ where $f, g : \mathbb R \to \mathbb R$, and you need to find some particular form of $c(a,b)$ so that \begin{align} c(a,0) &= 0\\ \left . \frac {\partial c}{\partial b} \right |_{b = 0} &= 1 + a \end{align} Use the first restriction: $$ c(a,0) = f(a+1) + g(a-1) = 0 \tag 1 $$ Now, let's find the form of $c_b(a,b)$: $$ c_b(a,b) = f'(a+e^b)e^b - g'(a-e^b)e^b = e^b \left ( f'(a+e^b) - g'(a-e^b)\right) \tag 2 $$ so $$ c_b(a,0) = f'(a+1)-g'(a-1) = 1 + a \tag 3 $$ Now, differentiate $(1)$: $$ f'(a+1) + g'(a-1) = 0 \tag 4 $$ and therefore $$ g'(a-1) = -f'(a+1) \tag 5 $$ Substitute $(5)$ into $(3)$: $$ 2f'(a+1) = a+1 $$ or $$ f'(x) = \frac x2 $$ which has the obvious solution $$f(x) = \frac {x^2}4 + C$$ Now, find $g(x)$: $$ g(a-1) = -f(a+1) = -\frac {(a+1)^2}4 - C $$ or, after some trivial manipulations, $$ g(x) = -\frac {(x+2)^2}4 - C $$ So, the final answer is $$ c(a,b) = f(a+e^b) + g(a-e^b) = \frac {\left ( a+e^b\right )^2}4 - \frac {\left ( a-e^b+2\right )^2}4 $$
When does a linear operator behave like a polynomial in another linear operator?
Hint: Standard basis of the given vector space: $\{1,x,x^2,x^3\}$ $T(1)=1$ $T(x)=x+1$ $T(x^2)=x^2+2x+1$ $T(x^3)=x^3+3x^2+3x+1$ So the matrix form of $T$ in the standard basis is \begin{bmatrix} 1 & 1 & 1 &1 \\ 0 & 1 & 2 & 3\\ 0 & 0 & 1 & 3\\ 0& 0 & 0 & 1 \end{bmatrix} Similarly, $D(1)=0$ $D(x)=1$ $D(x^2)=2x$ $D(x^3)=3x^2$ So the matrix form of $D$ in the standard basis is \begin{bmatrix} 0 & 1 & 0 &0\\ 0 & 0 & 2 & 0\\ 0 & 0 & 0 & 3\\ 0& 0 & 0 & 0 \end{bmatrix} Verify: $$T=I+D+\tfrac{1}{2}D^2+\tfrac{1}{6}D^3$$ (this is the Taylor expansion $p(x+1)=p(x)+p'(x)+\tfrac12 p''(x)+\tfrac16 p'''(x)$ for cubics).
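A quick numpy check of the identity (my addition):

    import numpy as np

    T = np.array([[1, 1, 1, 1],
                  [0, 1, 2, 3],
                  [0, 0, 1, 3],
                  [0, 0, 0, 1]], dtype=float)
    D = np.array([[0, 1, 0, 0],
                  [0, 0, 2, 0],
                  [0, 0, 0, 3],
                  [0, 0, 0, 0]], dtype=float)

    # Taylor: p(x+1) = p(x) + p'(x) + p''(x)/2 + p'''(x)/6 for cubics
    rhs = np.eye(4) + D + D @ D / 2 + D @ D @ D / 6
    assert np.allclose(T, rhs)
    print("T = I + D + D^2/2 + D^3/6 verified")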
Deriving the Cubic Formula
There's an error there: $\omega=\exp\left(\frac{2\pi i}3\right)$, not $\exp\left(\frac{\pi i}3\right)$. Therefore, $\omega^2=\frac1\omega$ and $\omega=\frac1{\omega^2}$. So,$$-\frac h{\omega u}=-\frac{\omega^2 h}u\text{ and }-\frac1{\omega^2u}=-\frac\omega u.$$
Geometric intuition behind the Hasse principle
Here is one way in which you can interpret the Hasse-Minkowski result "geometrically." Proposition. Let $X$ be a smooth conic over a number field $K$. The following are equivalent: (1) the set $X(K)$ of $K$-rational points of $X$ is nonempty; (2) the curve $X$ is isomorphic over $K$ to $\textbf{P}^1$. The equivalence follows from an application of the Riemann-Roch theorem. Pick a rational point $x \in X(K)$. There is a rational function with only a simple pole at $x$. Such a function gives an isomorphism between $X$ and $\textbf{P}^1$ defined over $K$.
Maps homotopic between convex subspace of $\mathbb{R}^n$ and topological space.
Can you show that any map must be homotopic to a particular constant map? If two maps, $f$ and $g$, are each homotopic to a constant map $h$ with image $y_0$, via $\phi_f$ and $\phi_g$, then they are homotopic to each other via $\phi_g^{-1}\circ\phi_f$. Thinking about this more, I think it's slightly more complicated. First, fix $x_0\in X$. Since $X$ is convex, we can smoothly contract $f$ to a map whose image is the single point $f(x_0)$. We just use $\phi_f(x,t)=f((1-t)x+tx_0)$. Similarly, we can write down $\phi_g(x,t)=g(tx+(1-t)x_0)$, which is the same thing for $g$, but running the other way in time. Both of these maps exploit the convexity of $X$. Finally, we use the path connectedness of $Y$ by joining $\phi_f$ and $\phi_g$ with the path between $f(x_0)$ and $g(x_0)$. I guess we can write that down by saying, let $\chi:I=[0,1]\to Y$ be a path with $\chi(0)=f(x_0)$ and $\chi(1)=g(x_0)$. We can stick these all together as follows: $$\phi:X\times I\to Y:(x,t)\mapsto\cases{\phi_f(x,3t) & $0\le t\le \frac13$\\ \chi(3t-1) & $\frac13\le t\le \frac23$\\ \phi_g(x,3t-2) & $\frac23\le t\le 1$}$$ We verify that this is well-defined and continuous, because: $\phi(x,\frac13)=\phi_f(x,1)=f(x_0)=\chi(0)$ $\phi(x,\frac23)=\chi(1)=g(x_0)=\phi_g(x,0)$. We verify that it is the right homotopy by checking: $\phi(x,0)=\phi_f(x,0)=f(x)$ $\phi(x,1)=\phi_g(x,1)=g(x)$. The trick is that all of the contracting and such has to happen back in $X$, where we have structure that allows things like multiplication by $t$, because we don't know anything about $Y$ other than the presence of paths.
Proving $\ A_i $ and $\ A_j $ are independent events
$P(A_1\cap A_2)=\frac 1 8 +\frac 1 8=\frac 1 4$ and $P(A_1)=P(A_2)=\frac 1 2$, so $P(A_1\cap A_2)=P(A_1)\,P(A_2)$ and hence $A_1$ and $A_2$ are independent.
Show an ideal is a finitely generated projective module via a split exact sequence
Using the surjectivity of the evaluation map $\mathrm{Hom}_R(I,R)\otimes_R I\to R$, you can find $R$-linear maps $\alpha_1,\dots,\alpha_m:I\to R$ and elements $i_1,\dots,i_m\in I$ such that $$\alpha_1(i_1)+\cdots+\alpha_m(i_m)=1$$ Consider the map $\phi:R^{\oplus m}\to I,(r_1,\dots,r_m)\mapsto r_1i_1+\cdots+r_mi_m$. This map is surjective. Indeed, write $\rho_k=\alpha_k(i_k)$ for $k=1,\dots,m$. Then, if $i\in I$, $$i=i\cdot 1=\sum_{k=1}^m i\rho_k=\sum_{k=1}^m i\alpha_k(i_k)=\sum_{k=1}^m \alpha_k(i)i_k=\phi(\alpha_1(i),\dots,\alpha_m(i))$$ This furthermore shows that $$\psi=(\alpha_1,\dots,\alpha_m):I\to R^{\oplus m}$$ is a splitting: $\phi\circ\psi=\mathrm{id}_I$.
$f(x,y) = 0$ when $x = y = 0$. Shouldn't $\frac{\partial^2 f}{\partial y \partial x} = 0$?
Let $f$ be given by $$f(x,y)=\begin{cases} 2xy\frac{x^2-y^2}{x^2+y^2}&,x^2+y^2>0\\\\ 0&,x^2+y^2=0 \end{cases}$$ For $x^2+y^2>0$, we have $$\begin{align}\frac{\partial f(x,y)}{\partial x}&=\frac{2y(x^4+4x^2y^2-y^4)}{(x^2+y^2)^2}\\\\\frac{\partial f(x,y)}{\partial y}&=-\frac{2x(y^4+4x^2y^2-x^4)}{(x^2+y^2)^2}\end{align}$$ To calculate the first partial derivatives at the origin, we revert to using the limit definition of the partial derivatives. Proceeding, we find that $$f_x(0,0)=\lim_{h\to 0}\frac{f(h,0)-f(0,0)}{h}=0\\\\ f_y(0,0)=\lim_{h\to 0}\frac{f(0,h)-f(0,0)}{h}=0$$ We proceed in a similar fashion to find the mixed partial derivatives at the origin. We find that $$\begin{align} f_{yx}(0,0)&=\lim_{h\to 0}\frac{f_y(h,0)-f_y(0,0)}{h}\\\\ &=\lim_{h\to 0}\frac{-\frac{2h(0^4+4h^2\cdot 0^2-h^4)}{(h^2+0^2)^2}-0}{h}\\\\ &=2 \end{align}$$ and $$\begin{align} f_{xy}(0,0)&=\lim_{h\to 0}\frac{f_x(0,h)-f_x(0,0)}{h}\\\\ &=\lim_{h\to 0}\frac{\frac{2h(0^4+4\cdot 0^2h^2-h^4)}{(0^2+h^2)^2}-0}{h}\\\\ &=-2 \end{align}$$ as was to be shown!
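A sympy sketch (my addition) confirming the first partials and the unequal mixed partials at the origin:

    import sympy as sp

    x, y, h = sp.symbols('x y h', real=True)
    f = 2*x*y*(x**2 - y**2) / (x**2 + y**2)

    fx = sp.simplify(sp.diff(f, x))  # 2y(x^4 + 4x^2y^2 - y^4)/(x^2+y^2)^2
    fy = sp.simplify(sp.diff(f, y))

    # mixed partials at the origin via the limit definition
    # (f_x(0,0) = f_y(0,0) = 0, so the difference quotients simplify)
    fyx = sp.limit(fy.subs({x: h, y: 0}) / h, h, 0)
    fxy = sp.limit(fx.subs({x: 0, y: h}) / h, h, 0)
    print(fyx, fxy)  # 2, -2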
positive definite matrix interval
The matrix $\begin{bmatrix}-b&a&a\\a&-b&a\\a&a&-b\end{bmatrix}$ is invertible except when $b=2a$ or $b=-a$. Therefore the matrix $A=\begin{bmatrix}0&a&a\\a&0&a\\a&a&0\end{bmatrix}$ has eigenvalues $2a$ and $-a$. It follows that $I+A =\begin{bmatrix}1&a&a\\a&1&a\\a&a&1\end{bmatrix}$ has eigenvalues $2a+1$ and $1-a$. A real symmetric matrix is positive definite if and only if its eigenvalues are (strictly) positive, so $I+A$ is positive definite exactly when $2a+1>0$ and $1-a>0$, i.e. for $-\frac12<a<1$.
Are the multiplicative inverses contained in the cone?
Point 3 is not just a consequence of point 2. For example, $\Bbb Z$ satisfies 1 and 2 in $\Bbb Q$, but it does not satisfy 3. Point 3 tells you that for any $x\in P$, $x^{-2}\in P$. Multiplying $x\cdot x^{-2}$, you get $x^{-1}\in P$.
Solving a Fredholm Equation of the second kind
There is only a tiny mistake: we have $$ \phi(x) = 3 + \lambda \int \limits_0^\pi R(x,s;\lambda) \color{red}{3} \, \mathrm {d} s = 3 + \frac{\color{red}{6} \lambda}{1 - \lambda \frac{\pi}{2}} \sin(x) \, .$$ This is the correct solution for all $\lambda \in \mathbb{R} \setminus \{\frac{2}{\pi}\}$. For $\lambda = \frac{2}{\pi}$ there is no solution, since $\frac{\pi}{2}$ is an eigenvalue of the integral operator. Note, however, that your series only converges for $|\lambda| < \frac{2}{\pi}$. In order to confirm the above result and to extend its validity to all $\lambda \in \mathbb{R} \setminus \{\frac{2}{\pi}\}$, we can employ the method of degenerate kernels. Using a trigonometric addition formula we may write the integral equation as $$ \phi (x) = 3 + \lambda \cos(x) \int \limits_0^\pi \cos(s) \phi(s) \, \mathrm{d} s + \lambda \sin(x) \int \limits_0^\pi \sin(s) \phi(s) \, \mathrm{d} s \, . $$ Therefore, the solution must have the form $\phi(x) = 3 + a \cos(x) + b \sin(x)$ for some $a,b \in \mathbb{R}$. Plugging this result back into the equation we obtain a system of linear equations for $a$ and $b$, which has no solution for $\lambda = \frac{2}{\pi}$ and the unique solution $a = 0$ and $b = \frac{6 \lambda}{1-\lambda \frac{\pi}{2}}$ for $\lambda \in \mathbb{R} \setminus \{\frac{2}{\pi}\}$ as expected. Alternatively, we can derive a differential equation for $\phi$. Its general solution has the form given above and suitable boundary conditions lead to the same linear system.
Why is the property of modular forms under the transformation of elements in an $SL_{2}(\Bbb Z)$ matrix important?
It helps to know that modular forms came first, and the definition came later. In the theory of Weierstrass elliptic functions, the invariants $g_2, g_3, \Delta$ are modular forms and it was natural to try to find other examples and generalizations of such objects. Once these were found, it was natural to try to characterize them and to make that the definition of modular forms. It has a lot to do with the symmetries of discrete period lattices in the complex plane.
Subrings of a Noetherian ring which inherits the Noetherian property
Your proof is correct! I want to point out another possible approach: for any ideal $\mathfrak{a}$ of $R$, we have $\mathfrak{a} = \varphi(\mathfrak{a}S)$ (both inclusions are easy). Thus, letting $\mathcal{L}_R$ and $\mathcal{L}_S$ denote the posets of ideals of $R$ and $S$, respectively, the map $$\mathfrak{a} \mapsto \mathfrak{a}S : \mathcal{L}_R \to \mathcal{L}_S$$ is injective. This map is also clearly order-preserving, so we can identify $\mathcal{L}_R$ with a subposet of $\mathcal{L}_S$. Since each ascending chain in $\mathcal{L}_S$ stabilizes, each ascending chain in $\mathcal{L}_R$ must also stabilize.
How to derive the Lambert W function series expansion?
There is a variant of Lagrange inversion which I like to call poor man's Lagrange inversion which consists in using the Cauchy Residue Theorem. In the present case we are looking for the inverse to $ze^z$ so that $$W(x) e^{W(x)} = x.$$ From the Cauchy Residue Theorem we get $$[x^n] W(x) = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{n+1}} W(z) \; dz.$$ Now put $W(z) = w$ so that $w e^w = z$ and $e^w (1+w) \; dw = dz$ to obtain (we use the branch that takes $w=0$ to $z=0$, there is another that takes $w=-\infty$ to $z=0$) $$\frac{1}{2\pi i} \int_{|w|=\gamma} \frac{1}{w^{n+1}} e^{-w(n+1)} e^w (w+w^2) \; dw \\ = \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{1}{w^{n+1}} e^{-wn} (w+w^2) \; dw.$$ Extracting coefficients now yields $$\frac{(-1)^{n-1} n^{n-1}}{(n-1)!} + \frac{(-1)^{n-2} n^{n-2}}{(n-2)!} \\ = \frac{(-1)^{n-1}}{n!} n^{n-1} (n - (n-1)) = \frac{(-1)^{n-1}}{n!} n^{n-1}.$$ We conclude that $$W(x) = \sum_{n\ge 1} \frac{(-1)^{n-1}}{n!} n^{n-1} x^n.$$ Here is an example I at MSE and another example II at MSE of Lagrange inversion. Addendum, Sep. 17 2020. For a more compact version use Lagrange Inversion as presented in Analytic Combinatorics by Flajolet and Sedgewick. We first choose the branch that has $W(0) = 0.$ We then have $W'(x) e^{W(x)} + W(x) e^{W(x)} W'(x) = 1$ so that $W'(0) = 1$. This means we may write for $n\ge 1$ by the Cauchy Coefficient Formula $$[x^n] W(x) = \frac{1}{n} [x^{n-1}] W'(x) = \frac{1}{n} \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^n} W'(z) \; dz.$$ Putting $W(z) = w$ we also have $W(z) = z + \cdots$ so that the image of the circle in $z$ is a contour in $w$ that makes one turn by the origin and may in turn be deformed to another small circle. We then have $$[x^n] W(x) = \frac{1}{n} \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{1}{w^n} \exp(-nw) \; dw.$$ This is by inspection $$\frac{1}{n} [w^{n-1}] \exp(-nw) = \frac{1}{n} \frac{(-n)^{n-1}}{(n-1)!} = \frac{(-1)^{n-1}}{n!} n^{n-1}$$ as before.
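A quick numerical sanity check of the series against SciPy's Lambert W (my addition; $x$ small so the series converges fast):

    from math import factorial
    from scipy.special import lambertw

    def w_series(x, terms=30):
        # sum_{n>=1} (-1)^(n-1) n^(n-1) x^n / n!
        return sum((-1) ** (n - 1) * n ** (n - 1) * x ** n / factorial(n)
                   for n in range(1, terms + 1))

    x = 0.1
    print(w_series(x))       # ~0.0912765
    print(lambertw(x).real)  # principal branch; agrees to many digits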
Comparing Poisson Processes
Given that there are $N$ regional planes that arrive before the next international plane, the total number of regional flyers who will board the next international plane is $$S \mid N = Y_1 + Y_2 + \cdots + Y_N$$ where $Y_i$ are IID with first and second moments $f_1$ and $f_2$. Thus the unconditional mean is $$\operatorname{E}[S] = \operatorname{E}[\operatorname{E}[S \mid N]] = \operatorname{E}[N \operatorname{E}[Y]] = \operatorname{E}[N f_1] = f_1 \operatorname{E}[N],$$ and the variance is $$\begin{align*} \operatorname{Var}[S] &= \operatorname{Var}[\operatorname{E}[S \mid N]] + \operatorname{E}[\operatorname{Var}[S \mid N]] \\ &= \operatorname{Var}[N \operatorname{E}[Y]] + \operatorname{E}[N \operatorname{Var}[Y]] \\ &= f_1^2 \operatorname{Var}[N] + \operatorname{E}[N]\operatorname{Var}[Y] \\ &= f_1^2 \operatorname{Var}[N] + (f_2 - f_1^2)\operatorname{E}[N], \end{align*}$$ where we have used the law of total expectation and the law of total variance, respectively. All that remains is to determine the distribution of the counting random variable $N$. To this end, suppose the regional planes arrive according to a Poisson process with rate $\lambda$, so that the counting variable is $$R(t) \sim \operatorname{Poisson}(\lambda t)$$ with $$\Pr[R(t) = r] = e^{-\lambda t} \frac{(\lambda t)^r}{r!},$$ and for the international planes, we know that the first interarrival time is exponentially distributed, namely $$\Pr[T_w \le t] = 1 - e^{-\mu t}, \quad f_{T_w}(t) = \mu e^{-\mu t}.$$ Conditioned on the first arrival time of an international plane, the number of regional planes arriving is then $R(T_w) \mid T_w$, and the unconditional number of regional planes arriving is then $$\Pr[N = r] = \int_{t = 0}^\infty \Pr[R(T_w) = r \mid T_w = t] f_{T_w}(t) \, dt = \int_{t = 0}^\infty e^{-\lambda t} \frac{(\lambda t)^r}{r!} \mu e^{-\mu t} \, dt = \frac{\lambda^r \mu}{(\lambda + \mu)^{r+1}}.$$ This is of course a geometric random variable with support on $\{0, 1, \ldots \}$ with parameter $p = \mu/(\lambda + \mu)$, so that $$\Pr[N = r] = p(1-p)^r.$$ From here, it is trivial to compute the desired moments and substitute back into our earlier formulas for the unconditional mean and variance of $S$.
Showing that $\lim_{R \rightarrow \infty} \bigg(\int_{0}^{R}e^{ix^{2}}dx-e^{i \pi/4}\int_{0}^{R}e^{-r^{2}}dr \bigg)=0$?
From Felix Martin's comment/hint and taking inspiration from Josh Keneda, I managed to get a solution; everything pertaining to the solution can be seen in $\text{Lemma (1.1)}$. $\text{Lemma (1.1)}$ $\text{Cauchy Integral Theorem}$ $(1.1.3)$ Let $U$ be an open subset of $\mathbb{C}$ which is simply connected, let $f : U \to \mathbb{C}$ be a holomorphic function, and let $\Gamma$ be a rectifiable path in $U$ whose start point is equal to its end point. Then in $(1.1.4)$ $(1.1.4)$ $$\oint_{\Gamma}f(z)\,dz = 0.$$ In view of $(1.1.4)$, applied to the closed contour $\gamma_R$ consisting of the segment $[0,R]$, the circular arc $Re^{i\theta}$ for $0\le\theta\le\pi/4$, and the segment from $Re^{i\pi/4}$ back to $0$, we can make the following conclusions in $(1.1.5)$ $(1.1.5)$ $$\oint_{\gamma_{R}}e^{iz^{2}}\, dz = \int_{0}^{R}e^{iz^{2}}\,dz + \int_{\text{arc}} e^{iz^{2}}\,dz + \int_{\text{segment}}e^{iz^{2}}\,dz = 0.$$ Clearly, on the first segment $z=x$ is real, and, parametrizing the arc by $z=Re^{i\theta}$ and the returning segment by $z=re^{i\pi/4}$ (so that $e^{iz^2}=e^{ir^2e^{i\pi/2}}=e^{-r^2}$), in view of $\text{Lemma (0.0)}$ we have the following developments in $(1.1.6)$ $(1.1.6)$ $$\oint_{\gamma_{R}}e^{iz^{2}}\, dz = \int_{0}^{R}e^{ix^{2}}\,dx + \int_{0}^{\pi / 4}e^{iR^{2}e^{2i\theta}}\,iRe^{i \theta}\,d \theta + \int_{R}^{0}e^{-r^{2}}e^{i\pi /4}\, dr = 0.$$ From $(1.1.6)$, one can make the observation $$\bigg |\int_{0}^{\pi /4}e^{iR^{2}e^{2i\theta}}\,iRe^{i\theta}\,d \theta \bigg | \leq R \int_{0}^{\pi /4} \big|e^{iR^{2}(\cos 2\theta + i\sin 2 \theta)}\big|\,d\theta = R\int_{0}^{\pi / 4} e^{-R^{2} \sin 2\theta}\, d \theta \leq R \int_{0}^{\pi / 4 }e^{-R^{2}(4 \theta / \pi)}\,d \theta = \frac{\pi}{4}\, \frac{1-e^{-R^{2}}}{R} \rightarrow 0 \ \text{as}\ R \rightarrow \infty,$$ where the last inequality uses $\sin 2\theta \ge 4\theta/\pi$ on $[0,\pi/4]$, whereas we have the following developments in $(1.1.7)$ $(1.1.7)$ \begin{align*}\lim_{R \rightarrow \infty} e^{i\pi/4} \int_R^0 e^{-r^2} \,(-dr) &= \lim_{R\rightarrow \infty} e^{i\pi/4} \int_0^R e^{-r^2} \,dr\\ &= e^{i \pi/4} \int_0^\infty e^{-r^2} \,dr\\&=e^{i\pi/4}\frac{\sqrt{\pi}}{2}.\end{align*} Combining $(1.1.6)$ - $(1.1.7)$ one can arrive at the following conclusion in $(1.1.8)$ $(1.1.8)$ $$\lim_{R \rightarrow \infty}\int_{0}^{R}e^{ix^{2}}\,dx = \int_{0}^{\infty}(\cos(x^{2}) + i\sin(x^{2}))\,dx = e^{i\pi/4}\frac{\sqrt{\pi}}{2} = \frac{\sqrt{2}}{2}(1+i)\,\frac{\sqrt{\pi}}{2}.$$ Comparing the real and imaginary parts of $(1.1.8)$ gives $(1.1.9)$ $(1.1.9)$ $$I=\int_{0}^{\infty}e^{ix^{2}}\,dx = \int_{0}^{\infty}(\cos(x^{2}) + i\sin(x^{2}))\,dx = e^{i\pi/4}\, \frac{\sqrt{\pi}}{2}, \qquad \int_{0}^{\infty}\cos(x^{2})\,dx=\int_{0}^{\infty}\sin(x^{2})\,dx=\frac{\sqrt{2}}{4}\sqrt{\pi}.$$
In how many ways can we divide 10 apples among 3 people in such a way that each gets at least 2 apples?
1) Let's say the apples are identical: write $w_1+2+w_2+2+w_3+2=10$, where the whole number $w_i$ represents the number of extra apples the $i$-th person gets. Then $w_1+w_2+w_3=4$, giving $\binom{6}{2}=15$ ways (beggar's method / stars and bars). 2) If they are not identical: $10=2+2+6$ or $2+3+5$ or $2+4+4$ or $3+3+4$. We make groups of these sizes and then arrange them among the three people: $$\frac{10!}{2!\,2!\,6!\,2!}\cdot 3!+\frac{10!}{2!\,3!\,5!}\cdot 3!+\frac{10!}{2!\,4!\,4!\,2!}\cdot 3!+\frac{10!}{3!\,3!\,4!\,2!}\cdot 3!$$ (the extra division by $2!$ appears whenever two of the group sizes are equal, since those two groups are interchangeable).
Finding the logarithm of a matrix?
You can make use of the block structure of $A$: $$ A = \begin{pmatrix} C & 0 \\ 0 & 4 \end{pmatrix} = e^B = \sum_{k=0}^\infty \frac{1}{k!} B^k \Rightarrow B = \begin{bmatrix} D & 0 \\ 0 & x \end{bmatrix} $$ so we can assume $4 = e^x \Rightarrow x = \ln(4)$. For the block matrices we get $$ C = \begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix} = e^D = \sum_{k=0}^\infty \frac{1}{k!} D^k $$ and try an upper triangular matrix $$ D = \begin{pmatrix} y & z \\ 0 & y \end{pmatrix} $$ and get the powers $$ D^2 = \begin{pmatrix} y & z \\ 0 & y \end{pmatrix} \begin{pmatrix} y & z \\ 0 & y \end{pmatrix} = \begin{pmatrix} y^2 & 2 y z \\ 0 & y^2 \end{pmatrix} \\ D^3 = \begin{pmatrix} y^2 & 2 y z \\ 0 & y^2 \end{pmatrix} \begin{pmatrix} y & z \\ 0 & y \end{pmatrix} = \begin{pmatrix} y^3 & 3 y^2 z \\ 0 & y^3 \end{pmatrix} \\ \vdots \\ D^k = \begin{pmatrix} y^k & k y^{k-1} z \\ 0 & y^k \end{pmatrix} \quad (k \ge 1) $$ which suggest $$ C = \begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix} = e^D = \begin{pmatrix} e^y & e^y z \\ 0 & e^y \end{pmatrix} $$ so $y = \ln(2)$ and $z = 1/e^y = 1/2$. This gives $$ B = \begin{pmatrix} \ln(2) & 1/2 & 0 \\ 0 & \ln(2) & 0 \\ 0 & 0 & \ln(4) \end{pmatrix} $$
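A quick check with scipy.linalg.expm (my addition):

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[2, 1, 0],
                  [0, 2, 0],
                  [0, 0, 4]], dtype=float)
    B = np.array([[np.log(2), 0.5, 0],
                  [0, np.log(2), 0],
                  [0, 0, np.log(4)]])

    assert np.allclose(expm(B), A)  # exp(B) reproduces A
    print("B is a logarithm of A")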
Calculus of Variation: Euler-Lagrange Equation in 1D
Just treat $u_x$ as an ordinary variable, with $\frac{\mathrm{d} }{\mathrm{d} x} (u_x) = u_{xx}$. Notice that $u(x)$ is a function of one variable.
Will X be a binomial random variable?
Yes! $X\sim Bin(100;\frac{1}{10})$ (Assuming independence...)
Create a mathematical model based on some heuristics
What about something like this: $f(x,y) = e^{-g(x,y)}$ $g(x,y) = \omega_1x^2 + \omega_2y^{-2} + \omega_3\bigl(\frac{y}{x}\bigr)^{-2}$ The intuition is that as $g \rightarrow 0$ we have $f \rightarrow 1$, and as $g \rightarrow \infty$ we have $f \rightarrow 0$. In other words, $f$ scores things between $0$ and $1$. Personally I prefer scoring functions like this because it is easy to tell whether something has a "good" score or not. The $\omega$'s are just positive weighting terms that can be adjusted according to your problem. You can tune them to ensure that you get a good balance of scores close to $0$ and close to $1$. You can also decide if you want things like the ratio to "weigh more" than the other categories. Does this help?
Geometrical interpretation ($\left| \frac{z-1}{z-i}\right| > 1$, $\arg z < \pi$)
There was no need to do so much computation. $|z-1| > |z-i|$ is equivalent to $z$ being farther from $1$ than from $i$, and hence it is clearly the open region bounded by the perpendicular bisector of the points $1$ and $i$. The other condition says that it is in the open half plane above the real axis. Combine the two to get the desired region. (So you should be using dotted lines rather than solid lines by convention.)
Closures and Interiors in a topology
The closed sets are the complements of the open sets. The complement of $\varnothing$ is $X$. Now $U\supseteq A$ iff $X\setminus U\subseteq X\setminus A$, so the closed sets other than $X$ itself are precisely the sets disjoint from $A$. Thus, the only closed set containing $A$ is $X$, and $\operatorname{cl}A=X$. You can also see this by looking at limit points. Suppose that $x\in X$, and $U$ is an open nbhd of $x$. Then $U\ne\varnothing$, so $U\supseteq A$. Thus, every open nbhd of $x$ contains a point of $A$ $-$ contains every point of $A$, in fact! $-$ so $x\in\operatorname{cl}A$. Since $x$ was an arbitrary point of $X$, $X=\operatorname{cl}A$. The closure of $B$ is also $X$, by the same argument. The closure of a non-empty set $B$ can never be empty, because the whole space is always a closed set containing $B$.
$ f(b)\leq f(a)+\omega(|b-a|)^\alpha$ with $\omega(0)=0$ and $\omega$ being differentiable implies that $f$ is smooth
Enlightened by this question: $f$ defined $|f(x) - f(y)| \leq |x - y|^{1+ \alpha}$ Prove that $f$ is a constant. I have an answer. Since $\alpha>1$, the map $x\mapsto x^\alpha$ is differentiable at $x=0$. The inequality $$ \frac{|f(x+h)-f(x)|}{h}\leq \frac{\omega(|h|)^\alpha-\omega(0)^\alpha}{h} $$ implies that for any $x\in{\bf R}$, $$ |f'(x)|\leq \alpha\,\omega(0)^{\alpha-1}\cdot\omega'(0)=0, $$ which implies that $f$ is a constant.
how is this solved? $\lim_{x→0} \frac{1}{x^2}\left(\left(\tan(x+\frac{\pi}{4})\right)^{\frac{1}{x}}−e^2\right)$
We use Taylor series around $0$: $$ \tan \left( {x + \frac{\pi }{4}} \right) = 1 + 2x + 2x^2 + \frac{8}{3}x^3 + \cdots . $$ Hence, by using Taylor series for the logarithm and the exponential function, \begin{align*} \left( {\tan \left( {x + \frac{\pi }{4}} \right)} \right)^{\frac{1}{x}} & = \exp \left( {\frac{1}{x}\log \tan \left( {x + \frac{\pi }{4}} \right)} \right) \\ & = \exp \left( {\frac{1}{x}\log \left( {1 + 2x + 2x^2 + \frac{8}{3}x^3 + \cdots } \right)} \right) \\ & = \exp \left( {\frac{1}{x}\left( {2x + \frac{4}{3}x^3 + \cdots } \right)} \right) = e^2 \exp \left( {\frac{4}{3}x^2 + \cdots } \right) \\ & = e^2 \left( {1 + \frac{4}{3}x^2 + \cdots } \right). \end{align*} This yields the limit $\frac{4}{3}e^2$.
How are codes assigned value in Shannon Fano Coding?
It might be better if you could explain what you don't understand. Did you read the paragraph above? The list is divided into "two groups of as nearly equal probabilities as possible". It looks as though they put $a$ into one group and $b$–$h$ in the other group. Each symbol in the first group ($a$) is assigned a 0 as the first bit, so $a$ gets a 0 as its first bit. The others get a 1 as their first bit. The result is column I of the table. Then the second group is further divided, into $c,e$ and $b,d,f,g,h$. The items in the first group get an additional 0 on their codes; the items in the second group get an additional 1. This is column II of the table.
A relation between convergence in measure and pointwise convergence
Suppose $\{f_n\}$ converges to $f$ in measure. Let $\{f_{n_k}\}$ be a subsequence of $\{f_n\}$. Choose $n_{k_1} < n_{k_2} < n_{k_3} < \cdots$ such that $m(|f_{n_{k_j}} - f| > 2^{-j}) < 2^{-j}$ for all $j \in \Bbb N$. Let $A_j = (|f_{n_{k_j}} - f| > 2^{-j})$ and $A = \limsup A_j$. Since $\sum_j m(A_j) < \infty$, $$m(A) = \lim_{j\to \infty} m\left(\bigcup_{k\ge j} A_k\right) \le \lim_{j\to \infty} \sum_{k = j}^\infty m(A_k) = 0.$$ Given $x\notin A$, there exists a positive integer $N$ such that $x\notin A_j$ for all $j\ge N$, i.e., $|f_{n_{k_j}}(x) - f(x)| \le 2^{-j}$ for all $j \ge N$. Thus $f_{n_{k_j}}(x) \to f(x)$. As $x$ was arbitrary, $f_{n_{k_j}} \to f$ a.e. Now suppose $\{f_n\}$ does not converge to $f$ in measure. Decompose $\Bbb R$ as the countable union of open intervals $I_k$. Then there exists an index $\ell$, positive numbers $\epsilon$ and $\alpha$, and a sequence of indices $n_1 < n_2 < n_3 < \cdots$ such that $m(\{x\in I_\ell :|f_{n_k}(x) - f(x)| > \epsilon\}) \ge \alpha$ for all $k \in \Bbb N$. Let $\{f_{n_{k_j}}\}$ be any subsequence of $\{f_{n_k}\}$. If $X$ is the set of points $x\in I_\ell$ such that $\lim\limits_{j\to \infty} f_{n_{k_j}}(x) \neq f(x)$, then $X = \cup_{m\ge 1} (\limsup\limits_{j\to \infty} X_m^j)$, where $X_m^j = \{x\in I_\ell :|f_{n_{k_j}} - f| > \frac{1}{m}\}$. Let $N$ be a positive integer such that $\frac{1}{N} < \epsilon$. Then $$ m(X) \ge m(\limsup\limits_{j\to \infty} X_N^j) \ge \limsup_{j\to \infty}\, m(X_N^j) \ge \alpha.$$ It follows that $\{f_{n_{k_j}}\}$ does not converge to $f$ pointwise almost everywhere.
Show that the equation $\frac{x^n-1}{x-1}=4y^2$ has only two solutions
There are infinitely many solutions. $(x,n,y)=(4y^2-1,2,y)$ For all $y$. (The op used $y=1$ already.) Edit: The op's question is asking when is the expression $\frac{x^n-1}{x-1}$ is an even square. The more general question of when the expression is a power number has been heavily investigated. When $n&gt;2$ it is conjectured that there are only three solutions $(x,n)=(18,3),(7,4),(3,5)$. Only one of these results in an even square (which already has been stated by the op). Partial results of the general problem shows that there are no other solutions to the op's problem. More information can be found here: https://mathoverflow.net/questions/58697/a-geometric-series-equalling-a-power-of-an-integer
In-place inversion of large matrices
We suppose an (m+n) by (m+n) matrix $\textbf{A}$ with block partition: $\textbf{A}=\begin{pmatrix}\textbf{E}&\textbf{F}\\\\ \textbf{G}&\textbf{H}\end{pmatrix}$

A sequence of steps can be outlined, invoking recursively an in-place matrix inversion and other steps involving in-place matrix multiplication. Some but not all of these matrix multiplication operations would seem to require additional memory to be "temporarily" allocated in between recursive calls to the in-place matrix inversion. At bottom we will argue that the needed additional memory is linear, e.g. size m+n.

1. The first step is recursively to invert in-place matrix $\textbf{E}$: $\begin{pmatrix}\textbf{E}^{-1}&\textbf{F}\\\\ \textbf{G}&\textbf{H}\end{pmatrix}$ Of course to make that work requires the leading principal minor $\textbf{E}$ to be nonsingular.

2. The next step is to multiply $\textbf{E}^{-1}$ times block submatrix $\textbf{F}$ and negate the result: $\begin{pmatrix}\textbf{E}^{-1}&-\textbf{E}^{-1}\textbf{F}\\\\ \textbf{G}&\textbf{H}\end{pmatrix}$ Note that the matrix multiplication and overwriting in this step can be performed one column of $\textbf{F}$ at a time, something we'll revisit in assessing memory requirements.

3. Next multiply $\textbf{G}$ times the previous result $-\textbf{E}^{-1}\textbf{F}$ and add it to the existing block submatrix $\textbf{H}$: $\begin{pmatrix}\textbf{E}^{-1}&-\textbf{E}^{-1}\textbf{F}\\\\ \textbf{G}&\textbf{H}-\textbf{G}\textbf{E}^{-1}\textbf{F}\end{pmatrix}$ This step can also be performed one column at a time, and because the results are accumulated in the existing block submatrix $\textbf{H}$, no additional memory is needed. Note that what has now overwritten $\textbf{H}$ is $\textbf{S}=\textbf{H}-\textbf{G}\textbf{E}^{-1}\textbf{F}$.

4. & 5. The next two steps can be done in either order. We want to multiply block submatrix $\textbf{G}$ on the right by $\textbf{E}^{-1}$, and we also want to recursively invert in-place the previous result $\textbf{S}$. After doing both we have: $\begin{pmatrix}\textbf{E}^{-1}&-\textbf{E}^{-1}\textbf{F}\\\\ \textbf{G}\textbf{E}^{-1}&\textbf{S}^{-1}\end{pmatrix}$ Note that the matrix multiplication and overwriting of $\textbf{G}$ can be done one row at a time.

6. Next we should multiply these last two results, overwriting $\textbf{G}\textbf{E}^{-1}$ with $\textbf{S}^{-1}\textbf{G}\textbf{E}^{-1}$ and negating that block: $\begin{pmatrix}\textbf{E}^{-1}&-\textbf{E}^{-1}\textbf{F}\\\\ -\textbf{S}^{-1}\textbf{G}\textbf{E}^{-1}&\textbf{S}^{-1}\end{pmatrix}$

7. We are on the home stretch now! Multiply the two off-diagonal blocks and add that to the diagonal block containing $\textbf{E}^{-1}$: $\begin{pmatrix}\textbf{E}^{-1}+\textbf{E}^{-1}\textbf{F}\textbf{S}^{-1}\textbf{G}\textbf{E}^{-1}&-\textbf{E}^{-1}\textbf{F}\\\\ -\textbf{S}^{-1}\textbf{G}\textbf{E}^{-1}&\textbf{S}^{-1}\end{pmatrix}$

8. Finally we multiply the block containing $-\textbf{E}^{-1}\textbf{F}$ on the right by $\textbf{S}^{-1}$, something that can be done one row at a time: $\textbf{A}^{-1}=\begin{pmatrix}\textbf{E}^{-1}+\textbf{E}^{-1}\textbf{F}\textbf{S}^{-1}\textbf{G}\textbf{E}^{-1}&-\textbf{E}^{-1}\textbf{F}\textbf{S}^{-1}\\\\ -\textbf{S}^{-1}\textbf{G}\textbf{E}^{-1}&\textbf{S}^{-1}\end{pmatrix}$

Requirements for additional memory: A temporary need for additional memory arises when we want to do matrix multiplication and store the result back into the location of one of the two factors. Such a need arises in step 2 when forming $\textbf{E}^{-1}\textbf{F}$, in step 4 (or 5) when forming $\textbf{G}\textbf{E}^{-1}$, in step 6 when forming $\textbf{S}^{-1}\textbf{G}\textbf{E}^{-1}$, and in step 8 when forming $-\textbf{E}^{-1}\textbf{F}\textbf{S}^{-1}$. Of course other "temporary" allocations are hidden in the recursive calls to in-place inversion in steps 1 and 4 & 5, when the matrices $\textbf{E}$ and $\textbf{S}$ are inverted. In each case the allocated memory can be freed (or reused) after one column or one row of a required matrix multiplication is performed, because the overwriting can be done one column or one row at a time following its computation. The overhead for such allocations is limited to size m+n, or even max(m,n), i.e. a linear overhead in the size of $\textbf{A}$.
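For concreteness, a compact Python/numpy rendering of the recursion (my sketch; it uses numpy temporaries instead of spelling out the one-row/one-column staging, and the base case is $1\times 1$):

    import numpy as np

    def invert_inplace(A):
        # recursive in-place block inversion, steps 1-8 above
        n = A.shape[0]
        if n == 1:
            A[0, 0] = 1.0 / A[0, 0]
            return
        m = n // 2
        E, F = A[:m, :m], A[:m, m:]
        G, H = A[m:, :m], A[m:, m:]
        invert_inplace(E)   # step 1: E <- E^{-1}
        F[:] = -(E @ F)     # step 2: F <- -E^{-1} F
        H += G @ F          # step 3: H <- H - G E^{-1} F  (this is S)
        G[:] = G @ E        # step 4: G <- G E^{-1}
        invert_inplace(H)   # step 5: H <- S^{-1}
        G[:] = -(H @ G)     # step 6: G <- -S^{-1} G E^{-1}
        E += F @ G          # step 7: E <- E^{-1} + E^{-1} F S^{-1} G E^{-1}
        F[:] = F @ H        # step 8: F <- -E^{-1} F S^{-1}

    # check on a matrix whose leading principal minors are nonsingular
    M = 4 * np.eye(4) + np.ones((4, 4))
    C = M.copy()
    invert_inplace(C)
    assert np.allclose(C @ M, np.eye(4))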
Are there infinitely many numbers such that $ \phi(n)=\phi(n-1)+\phi(n-2)$?
I've seen these called Phibonacci numbers. This paper talks about bounding their asymptotic density so presumably there are infinitely many, but I am not familiar with a proof. They are A065557.
Identify all combinations of sets which don't have intersection
If we construct a graph where each vertex represents one set and edges connect only vertices whose sets have an intersection, then my problem can be represented as listing all maximal independent sets, and solved using the Bron-Kerbosch algorithm (run on the complement graph, since Bron-Kerbosch enumerates maximal cliques).
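A sketch with networkx (my addition; the four example sets are made up). nx.find_cliques implements a Bron-Kerbosch variant, so running it on the complement graph lists exactly the maximal independent sets:

    import networkx as nx

    sets = {"A": {1, 2}, "B": {2, 3}, "C": {4}, "D": {1, 5}}

    G = nx.Graph()
    G.add_nodes_from(sets)
    names = list(sets)
    for i, u in enumerate(names):
        for v in names[i + 1:]:
            if sets[u] & sets[v]:  # edge iff the two sets intersect
                G.add_edge(u, v)

    # maximal independent sets of G = maximal cliques of its complement
    for clique in nx.find_cliques(nx.complement(G)):
        print(sorted(clique))      # e.g. ['A', 'C'] and ['B', 'C', 'D']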
Prove that $\sqrt{5.5}$ is irrational
Write $5.5 =\frac{11}{2}$, and suppose $\sqrt{\frac{11}{2}} = \frac{p}{q}$ with $\gcd(p,q)=1$. Then we have $$\frac{11}{2} = \frac{p^2}{q^2} \implies 11q^2 = 2p^2.$$ From this we obtain that $q^2$ is even, and so $q=2m$. Thus we have $$11(2m)^2=4 \cdot 11m^2 = 2 p^2 \implies p^2 = 2 \cdot 11 m^2.$$ So $p^2$ is even and thus $p$ is even. So $2 \ | \ \gcd(p,q)$, which is a contradiction. Your attempt is not correct, as from $5.5q^2 = p^2$ you cannot conclude that $q$ divides $p^2$, since $5.5$ is not an integer.
Find third 3d coordinate given two other coordinates
If the coordinates of the center of the blue sphere are $B=(B_x,B_y,B_z)$ and the coordinates of the center of the green sphere are $G=(G_x,G_y,G_z)$, then the vector $$\overrightarrow{BG}=G-B=\langle G_x-B_x,G_y-B_y,G_z-B_z\rangle$$ and we can unitize that vector (create a vector of length 1 in the same direction) by taking $$\vec{n}=\frac{\overrightarrow{BG}}{\|\overrightarrow{BG}\|}=\left\langle\frac{G_x-B_x}{\|\overrightarrow{BG}\|},\frac{G_y-B_y}{\|\overrightarrow{BG}\|},\frac{G_z-B_z}{\|\overrightarrow{BG}\|}\right\rangle$$ where $\|\overrightarrow{BG}\|=\sqrt{(G_x-B_x)^2+(G_y-B_y)^2+(G_z-B_z)^2}$. Now, the distance from the center of the green sphere to the middle of the far end of the box is the length of the box plus the radius of the sphere. Let's call that total distance $d$. The vector $d\vec{n}$ will have length $d$ and be in the same direction as the vector from $B$ to $G$, which is along the line through the centers of the spheres. If we add $d\vec{n}$ to $G$, we'll get the coordinates of the center of the far end of the box: $$\begin{align} G+d\vec{n} &=(G_x,G_y,G_z)+d\left\langle\frac{G_x-B_x}{\|\overrightarrow{BG}\|},\frac{G_y-B_y}{\|\overrightarrow{BG}\|},\frac{G_z-B_z}{\|\overrightarrow{BG}\|}\right\rangle \\ &=\left\langle G_x+d\cdot\frac{G_x-B_x}{\|\overrightarrow{BG}\|},G_y+d\cdot\frac{G_y-B_y}{\|\overrightarrow{BG}\|},G_z+d\cdot\frac{G_z-B_z}{\|\overrightarrow{BG}\|}\right\rangle. \end{align}$$
Finding beam paths using reflection
To expand a bit on the suggestion in my comment, you can “unfold” the paths by repeated reflections of the target point $B$ and the reflectors themselves. Each image of $B$ and, of course, $B$ itself, that lies within the maximum distance corresponds to a unique path from $A$ to $B$. The trick, then, is to efficiently generate these images of $B$, but that seems fairly straightforward to do since the reflecting surfaces parallel the coordinate axes. If the right edge of the box is the line $x=w$, then the $x$-coordinate of the first reflection is $w+(w-x_B)$, the second is $2w+x_B$, the third $3w+(w-x_B)$ and so on. In general, the even-numbered reflections are spaced $2w$ apart, as are the odd-numbered reflections. This pattern holds going to the left as well. The same pattern occurs for the reflections in the top and bottom walls, except that the spacing between reflections with the same parity is $2h$, where $h$ is the height of the box. A pair of nested loops seem like they’d do the trick here. If you need to trace any of the paths, draw a straight line from $A$ to the corresponding image of $B$. Each time this line crosses one of the grid lines in the above diagram (which are the reflections of the sides of the box), the actual path will change direction. (Sorry about the near-illegible labels. Text doesn’t appear to survive GeoGebra’s graphics export process any more.)
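A Python sketch of the image-generation loop (my addition; walls assumed to be $x=0$, $x=w$, $y=0$, $y=h$, with $A$ and $B$ inside the box):

    from math import hypot

    def beam_paths(A, B, w, h, max_dist):
        ax, ay = A
        bx, by = B
        paths = []
        # images of B have x in {2kw + bx, 2kw - bx}, y in {2lh + by, 2lh - by}
        kmax = int(max_dist / (2 * w)) + 1
        lmax = int(max_dist / (2 * h)) + 1
        for k in range(-kmax, kmax + 1):
            for x in (2 * k * w + bx, 2 * k * w - bx):
                for l in range(-lmax, lmax + 1):
                    for y in (2 * l * h + by, 2 * l * h - by):
                        d = hypot(x - ax, y - ay)
                        if d <= max_dist:
                            paths.append(((x, y), d))
        return paths

    # each surviving image corresponds to one reflected beam path
    print(len(beam_paths((1, 1), (3, 2), 4, 3, 10)))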
Show that $g=\sum_{n=1}^{\infty}|f_{n+1}-f_n|$ has $\|g\|_p\le 1$ if $\|f_{n+1}-f_n\|_p<2^{-n}$
As suggested in the comments, the sequence of partial sums $g_k = \sum_{n=1}^k |f_{n+1} - f_n|$ is monotonically increasing and converges to $g$. It then follows from the Monotone Convergence Theorem (applied to $g_k^p$) that $\lim \int g_k^p = \int g^p$, i.e. $\lim \|g_k\|_p = \|g\|_p$. Applying the triangle inequality and $\|f_{n+1} - f_n\|_p < 2^{-n}$, you can show that each partial sum satisfies $\|g_k\|_p \leq \sum_{n=1}^k 2^{-n} < 1$, and the result follows by passing the inequality to the limit.
How many bit strings of length $12$ have a substring $01$?
It might be easier to ask how many bit strings of length $n$ do not have the substring '01'. I think of the bit string as a little plot, so $0100$ would be the graph $(0,0),(1,1),(2,0),(3,0)$. The substring $01$ corresponds to an upward jump in the graph. It follows that the only graphs that do not have an upward jump are those of the form $1^* 0^*$ (of the appropriate length, of course). Hence the number of bit strings of length $n$ that do not have the substring '01' is $n+1$ (all ones, all ones except for the last place, ..., all zeros), and the answer is $2^n-(n+1)$; for $n=12$ this gives $2^{12}-13=4083$.
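If you want to sanity-check the formula, a brute-force count for small $n$ (a quick sketch of mine in Python) confirms $2^n-(n+1)$:

```python
from itertools import product

# Count length-n bit strings containing '01' by exhaustive enumeration.
def count_with_01(n):
    return sum('01' in ''.join(bits) for bits in product('01', repeat=n))

for n in range(1, 13):
    assert count_with_01(n) == 2**n - (n + 1)
print(count_with_01(12))   # 4083
```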
Give tight asymptotic bounds for the following recurrence: $T(n) = 3T(n/3)+\log n$
You have that (among other things) $\log(n) \in O \left (n^{0.5} \right )$. Since $0.5<\log_b(a)=\log_3(3)=1$, the first case of the master theorem tells you that $T(n) \in \Theta(n)$.
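A quick numeric check is easy if you pick a base case (here $T(1)=1$, an assumption of mine) and watch $T(n)/n$ settle toward a constant:

```python
import math
from functools import lru_cache

# Evaluate the recurrence on powers of 3 and print the ratio T(n)/n,
# which tends to a constant, consistent with T(n) in Theta(n).
@lru_cache(maxsize=None)
def T(n):
    return 1 if n <= 1 else 3 * T(n // 3) + math.log(n)

for k in (6, 9, 12):
    n = 3**k
    print(n, T(n) / n)
```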
Showing that a countable subset of $\mathbb{R}$ is Borel
The fact that singletons are Borel gives an immediate proof of this fact. This suggests that singletons being Borel is very tightly related to the fact, and so any proof avoiding the use of singletons being Borel might feel like a "guise." But let us assume that we have forgotten that singletons are Borel. Then, using De Morgan's Law, since a countable subset of $\mathbb{R}$ is a countable union of singletons, the complement of the countable subset is a countable intersection of open sets (the complement of each singleton is the union of two open rays). As open sets are the prototypical Borel sets, and the Borel sets are closed under countable intersections and complements, this complement is Borel, and hence the countable union of singletons, being the complement of a Borel set, is itself Borel. $\diamondsuit$ But you might notice that what we really did was replicate a proof that singletons are Borel. So it goes.
Simple algebraic manipulation with 2 equations
$$\frac{1}{d_{2}}=\frac{1}{12}-\frac{1}{1.066d_{2}} $$ $$(1+\frac{1}{1.066})\frac{1}{d_{2}}=\frac{1}{12}$$ $$d_{2}=12(1+\frac{1}{1.066})$$ $$d=d_{2}+30$$ $$d=12(1+\frac{1}{1.066})+30$$
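A one-line numeric evaluation of the result, using the figures above:

```python
# d2 = 12 * (1 + 1/1.066), then d = d2 + 30.
d2 = 12 * (1 + 1 / 1.066)
print(d2, d2 + 30)   # d2 ≈ 23.26, d ≈ 53.26
```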
Hilbert-Bernays theorem
It is not in general true [as the question assumed in its original form] that if a formal theory $T$ is consistent, then it has a model. (For example, take second-order Peano Arithmetic extended with a constant $c$ governed by the axioms $c \neq 0$, $c \neq S0$, $c \neq SS0$, etc. This is consistent but has no model -- as nicely explained in this answer by Henning Makholm.) It is a theorem that if a first-order theory is consistent, then it has a model. This is a version of the completeness theorem for first-order logic that, famously, is due ultimately to Kurt Gödel. Any standard mathematical logic text will prove this, but there's a good treatment in Leary/Kristiansen, A Friendly Introduction to Mathematical Logic, which is freely available through the friendly generosity of the authors.
Expected revenue in first-price auction with budget constraint drawn uniformly between [0,1]
The revenue ranking that Che and Gale have in the paper is correct: see their (2006?) Theoretical Economics article for a more general result. The characterization of the first-price auction equilibrium, however, is not. I'm not sure about the particular example you mention, but they assume the equilibrium of the FPA is given by $\min\{ b(v),w\}$, where $b(\cdot)$ is the unconstrained bid and $w$ is the budget. However, even with their assumptions, the differential equation that $b(\cdot)$ must solve may have only a solution that is not always increasing -- so it is not really a solution... A few years ago, Kotowski from Yale solved (correctly this time) for the equilibrium of the FPA with budget constraints: http://mfile.narotama.ac.id/files/Jurnal/MIT%202012-2013%20%28PDF%29/First-Price%20Auctions%20with%20Budget%20Constraints.pdf
Under what condition do $n\,(>2)$ non-zero vectors of equal length form a regular $n$-gon in the Euclidean plane
You can regard $\mathbb{R}^2$ as a $1$-dimensional vector space over $\mathbb{C}$; then take $f_1=1$ and $f_k=e^{\frac{2\pi i}{n}(k-1)}$, the $n$-th roots of unity.
Special properties of the number $146$
$146$ can be written as a sum of squares of two primes: $$146=5^2+11^2=5^2+(1+4+6)^2=(1+4)^2+(1+4+6)^2.$$
What is the least prime containing the digit sequences 00..99?
Beat this (101 digits): 12343820246621009119671503254485735539522606365680728479308616498759414276974588313377899290518170401 Improvement: 11254572024662100919681305234473853359322606563670827489507616497839414286984377515588799290317180401 ... and 10234352248821200911987160326445673663962808386850725479305818495769414278974655313377599290615170401 (Edit after being accepted) This just in, and there's still some room: 10023436224882120711785190329446953993792808389860526457306818476597414258754966313355677270916150401
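For anyone who wants to verify candidates like these, here is a small Python sketch of mine using sympy's `isprime`. It checks the two required conditions; note that `isprime` at this size is a strong probabilistic test (no counterexamples known), and I have not independently certified the primality of the candidates above.

```python
from sympy import isprime

# A candidate qualifies if every two-digit string 00..99 occurs as a
# substring of its decimal expansion and it passes the primality test.
def qualifies(n):
    s = str(n)
    return all(f"{d:02d}" in s for d in range(100)) and isprime(n)

n = int("12343820246621009119671503254485735539522606365680728479308616498759414276974588313377899290518170401")
print(qualifies(n))
```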
Distribution of a random variable related to insertion sort
When you're doing this step of the insertion sort algorithm, A[1] .. A[j-1] are the original A[1] .. A[j-1] sorted into increasing order. Since the initial permutation was random, the rank of A[j] among A[1] .. A[j] is equally likely to be each of the $j$ possibilities. If A[j] is the $i$'th largest of A[1] .. A[j], the algorithm will compare it to A[j-1], ..., A[j-i], so $X = i$, except that if $i=j$ there is no A[0] to compare it to and only $j-1$ comparisons are made. Thus for $k = 1, 2, \ldots, j-2$ we have $P(X = k) = 1/j$, but $X = j-1$ covers two possibilities ($A[j]$ is either lowest or second-lowest), so $P(X = j-1) = 2/j$.
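A short simulation (a sketch of mine, not part of the original argument) reproduces this distribution; for $j=5$ it returns probabilities close to $0.2, 0.2, 0.2, 0.4$ for $X=1,2,3,4$.

```python
import random
from collections import Counter

# Simulate the number of comparisons X when inserting the j-th element
# into the sorted prefix A[1..j-1], for a random permutation.
def comparisons_at_step(j, trials=100_000):
    counts = Counter()
    for _ in range(trials):
        a = random.sample(range(100), j)   # distinct keys, random order
        prefix = sorted(a[:-1])
        key, x, i = a[-1], 0, j - 2        # scan the prefix from the right
        while i >= 0:
            x += 1
            if prefix[i] <= key:           # first smaller element stops us
                break
            i -= 1
        counts[x] += 1
    return {k: v / trials for k, v in sorted(counts.items())}

print(comparisons_at_step(5))   # ≈ {1: 0.2, 2: 0.2, 3: 0.2, 4: 0.4}
```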
How many ways are there to do a round trip if at least one of the roads taken on the return trip is different?
To plan a round trip from $A$ to $C$ and back, where the return route is not the reverse of the first part from $A$ to $C$, we proceed as follows. Since there are $14$ ways to go from $A$ to $C$, we must choose one of those ways. Once that has been done, exactly one of the $14$ routes has been ruled out for reversal to get back to $A$ from $C$, leaving us $13$ choices for the return trip. This gives $14\cdot 13=182$ possible round trips. Note that we do not have to consider the specific types of these trips in terms of how they go through $B$ (or avoid $B$): whichever route is chosen for the first leg from $A$ to $C$, it is the only one excluded on the way back.
Powers of exponential functions
Hint: Use the Hessian evaluated at $0$ and look up Vandermonde matrix.
Simpson's Rule Equivalent for Line Integrals?
It looks to me that dealing with $f^\prime(t)$ and $g^\prime(t)$ when you're unable or unwilling to compute analytic derivatives is what's giving you trouble; there are a number of numerical differentiation methods you can use along with Simpson's rule (but methinks you really should be using Gaussian quadrature instead). On the other hand, there is a simple stratagem due to J.C. Nash for estimating derivatives, based on the central difference approximation $$f^{\prime}(x)\approx\frac{f(x+h)-f(x-h)}{2h}.$$ The trick here is to choose $h$ to be small, but not too small; Nash's suggestion is to use $h=\sqrt{\varepsilon}\left(|x|+\sqrt{\varepsilon}\right)$, where $\varepsilon$ is machine epsilon. This should be good enough for most purposes.
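Here is a minimal Python sketch of mine combining Nash's formula with composite Simpson's rule for a scalar line integral $\int_C F\,ds$; the function names and the quarter-circle test are my own illustration, not a definitive implementation.

```python
import math

EPS = 2.0 ** -52   # double-precision machine epsilon

# Central-difference derivative with Nash's step-size choice.
def nash_derivative(f, x):
    h = math.sqrt(EPS) * (abs(x) + math.sqrt(EPS))
    return (f(x + h) - f(x - h)) / (2 * h)

# Composite Simpson's rule for integral of F(x(t), y(t)) * sqrt(x'^2 + y'^2).
def line_integral(F, x, y, a, b, n=1000):
    n += n % 2                      # Simpson needs an even panel count
    h = (b - a) / n
    def g(t):
        dx, dy = nash_derivative(x, t), nash_derivative(y, t)
        return F(x(t), y(t)) * math.hypot(dx, dy)
    s = g(a) + g(b)
    s += sum((4 if i % 2 else 2) * g(a + i * h) for i in range(1, n))
    return s * h / 3

# Example: arc length of a quarter of the unit circle (F = 1) is pi/2.
print(line_integral(lambda x, y: 1.0, math.cos, math.sin, 0, math.pi / 2))
```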
Evaluate the integral using a specified substitution
Note that\begin{align}\sqrt{\left(t-\frac1t\right)^2+4}&=\sqrt{t^2+2+\frac1{t^2}}\\&=t+\frac1t\end{align}for $t>0$. Can you take it from here?
How do you prove the most basic of statements?
You're supposed to formally express ideas using the givens. Some statements might make complete intuitive sense to you, but without showing why they're true based on something you take to be true (whether an axiom or something already proved), you don't know whether they hold, since there would be no proof. This exercise wants you to think formally, based on already known things (givens, axioms, theorems). Now, for the answers (I replaced the letters with numbers, and the ')' with a '.', for automatic aligning of lists in Math.StackExchange's text editor): 1. You need to show that for each condition, if one of the conditions is true, then the others are not. I'll do only the case $y>x$: here $y>x \Rightarrow y-x\in F^+$. Assume also that $y<x$, i.e. $x>y \Rightarrow x-y \in F^+$; by distributivity and commutativity of $+_F$ we have $x-y=-1\cdot(y-x)$, so both $y-x$ and $-(y-x)$ lie in $F^+$, which is a contradiction. Now assume instead that $y=x$; then $y-x = x-x = 0 \notin F^+$, contradicting $y-x \in F^+$. Again, the idea is: assume one case and show that it excludes the others, i.e. that exactly one of the cases can hold. 2. $x>0 \Rightarrow x-0 \in F^+$ and $y>0 \Rightarrow y-0 \in F^+$. Adding both sides of the inequalities gives $$(x+y)-(0+0)=x+y-0 \in F^+,$$ i.e. $x+y>0$. And I won't finish doing the exercise - I wanted to show you the formal writing and expressing.
How do I construct a function $\operatorname{sog}$ such that $\operatorname{sog}\circ\operatorname{sog} = \log$?
EDIT: I got my copy of Hellmuth Kneser (1950) from the Göttingen digital repository; click to download just the article itself (rather than the whole journal issue): LINK HELLMUTH KNESER 1950 Reelle analytische, or go straight to DIRECT TO PDF. If anyone tries and has trouble with the link(s), I have now made a nice pdf that is small enough for email; I sent it to myself and gmail says 2MB, no trouble. I put the journal cover, contents page, then a third page after the article itself. There are two issues, both of which are non-problems for the logarithm. If you are considering a function on the reals that is also real valued, as long as there are no fixpoints, we can expect to produce real analytic half-iterates in an open set around the relevant portion of the real axis. The open set may vary in width. To be specific, the open set cannot be expected to include complex fixpoints of the function. It will generally turn out that fractional iterates will not extend to the entire complex plane, even when the original function is entire and single valued. If you have a fixpoint on the real line where the derivative of the function is negative, it is easy to see that the half iterate cannot stay real valued. For example, a half iterate of $-x$ is $ix,$ because $i(ix) = -x.$ In the presence of fixpoints with derivative larger than $1$ or strictly between $0$ and $1,$ we get to solve a Schröder equation. With $x^2$ at the fixpoint $1$ we arrive at $x^{\sqrt 2},$ because $$ \left( x^{\sqrt 2} \right)^{\sqrt 2} = x^2 $$ for positive $x.$ As soon as we try to include $0,$ we are stuck with $|x|^{\sqrt 2},$ which is $C^1$ but not $C^2$ at the origin. So, fixpoints are a problem. Finally, the hardest bit is when the fixpoint has derivative $1.$ I spent quite a bit of time finding a half-iterate for $\sin x.$ It works; the result is in between the sine wave and a sawtooth curve (of line segments) that is tangent to the sine curve at multiples of $\pi.$ And, for $0 < x < \pi,$ it is real analytic. I wrote to Jean Ecalle; it turns out that the function really is $C^\infty$ when we expand to include $x=0,$ therefore $C^\infty$ on the entire real line. Let's see: amplitude a little larger than $\sin x$ itself, not as large as the sawtooth; $\pi/2 \approx 1.570796,$ and we get $f_{1/2} (1.570796) \approx 1.140179.$ The (substantial) work is summarized at https://mathoverflow.net/questions/45608/formal-power-series-convergence/46765#46765 As commented, Hellmuth Kneser constructed a real analytic half-iterate for $e^x.$ At about the same time I did $\sin x,$ I also did $e^x-1,$ which has a fixpoint with derivative $1$ at $x=0.$ Once again, real analytic for $x > 0,$ also for $x < 0,$ but just $C^\infty$ on the whole real line. Having trouble posting the picture, partly because it is a jpeg instead of a pdf. Anyway, the outcome is that $\log x$ works, but $\log (1+x)$ is more difficult.
Are (finite dimensional?) inner product spaces also super vector spaces?
Converting my comments into an answer: any vector space $V$ can be made into a super vector space in many different ways, corresponding to any direct sum decomposition $V \cong V_0 \oplus V_1$. This is extra structure in general so it doesn't make sense to say that $V$ "is" a super vector space this way, only that it "can be made" a super vector space this way. There are two canonical such decompositions, namely $V_0 = V$ (concentrated in even degree) or $V_1 = V$ (concentrated in odd degree). The even one is distinguished because that construction is symmetric monoidal. The category of super vector spaces is not that interesting, and equivalent to the category of pairs of vector spaces. What is interesting is the symmetric monoidal category of super vector spaces, which is where you define supercommutative algebras and Lie superalgebras and so forth.
Got stuck with this $L^2(-1, 1)$ optimization problem. Any ideas where it comes from?
Hint: Your problem is addressed in functional analysis and is a constrained calculus of variations problem. I would form the Lagrangian to get rid of the constraint, and then would try to derive the Euler–Lagrange equation for the problem.
Finding values of $x$ such that a sequence of functions converges.
I do not know if you need to prove your answers; if that is the case, there are many steps missing. In any case, (ii) is correct. (iii) is basically a geometric sequence: for that to converge, the base must belong to $(-1,1]$. In your case $-1<\sin(x)\leq1$, which excludes $x=3\pi/2$, and the convergence is as you said.
How and why vector spaces are defined?
Solving systems of linear equations is one of the main goals, but of course there are many other applications (e.g. least squares, dynamical systems, image compression, etc.). See also the related question Why study linear algebra?
Why can't we simplify $\partial x$?
Your tangent line is now a tangent plane: $$ f(x,y) \approx f(c,d) + \operatorname{grad} f \cdot [(x,y) - (c,d)], $$ so $$ \Delta f = f(x,y) - f(c,d) \approx \operatorname{grad} f \cdot (\Delta x, \Delta y) = \frac{\partial f}{\partial x} \Delta x + \frac{\partial f}{\partial y} \Delta y. $$
Strong Triadic Closure and Nodes that Violate/satisfy it
Yes; as the previous answer told you, the nodes are B, E and F. Consider E for now. E has many bridges: to F, C, D and G. Since E has strong ties with B and D, and there is no edge between B and D, E violates the STC rule. But you can also find a case where E satisfies the STC rule: there are strong ties between E and F and between E and B, and a weak tie between F and B.
$\langle x,y \rangle (\lVert x \rVert + \lVert y \rVert) \leq \lVert x \rVert \lVert y \rVert \lVert x+y \rVert$
$\langle x,y\rangle(\lVert x\rVert+\lVert y\rVert)\leq\lVert x\rVert\lVert y\rVert\lVert x+y\rVert\;.$ Proof: $\langle x,y\rangle(\lVert x\rVert+\lVert y\rVert)=\lVert x\rVert\langle x,y\rangle+\lVert y\rVert\langle x,y\rangle\le$ $\underset{\overbrace{\text{Cauchy–Schwarz inequality}}}{\le}\lVert x\rVert^2\lVert y\rVert+\lVert y\rVert\langle x,y\rangle=$ $\underset{\overbrace{\lVert x\rVert^2=\langle x,x\rangle}}{=}\lVert y\rVert\left(\langle x,x\rangle+\langle x,y\rangle\right)=$ $\underset{\overbrace{\text{Bilinearity of scalar product}}}{=}\lVert y\rVert\langle x,x+y\rangle\le$ $\underset{\overbrace{\text{Cauchy–Schwarz inequality}}}{\le}\lVert x\rVert\lVert y\rVert\lVert x+y\rVert\;.$
Is it true that $({\aleph_1})^{\aleph_0} = {\aleph_1}$?
No. It is not necessarily true. And it is also not necessarily true that $|\Bbb R|=\aleph_1$; this is known as the Continuum Hypothesis, and it is neither provable nor refutable from the usual axioms of set theory. Note that $2\leq\aleph_1\leq2^{\aleph_0}$, and therefore $2^{\aleph_0}\leq\aleph_1^{\aleph_0}\leq(2^{\aleph_0})^{\aleph_0}=2^{\aleph_0}$. But if $\aleph_1<2^{\aleph_0}$, meaning if the Continuum Hypothesis fails, then $\aleph_1<\aleph_1^{\aleph_0}$.
When is this integer a perfect square: $n^2 + 20n + 12$
Notice that: $$S_n=n^2+20n+12=(n+10)^2-88 \ .$$ Assume that $S_n$ is a perfect square, say $S_n=m^2$; then we have $(n+10)^2-88=m^2$. Let's define $N:=n+10$. This implies that $(N-m)(N+m)=88$. Notice that $N-m$ and $N+m$ have the same parity: both are odd or both are even, and since their product $88$ is even, both must be even. We have the following cases:

- $N-m=1$ and $N+m=88$: impossible by the parity observation.
- $N-m=2$ and $N+m=44$: gives the solution $N=23$, $m=21$; $n=13$.
- $N-m=4$ and $N+m=22$: gives the solution $N=13$, $m=9$; $n=3$.
- $N-m=8$ and $N+m=11$: impossible by the parity observation.
- $N-m=11$ and $N+m=8$: impossible by the parity observation.
- $N-m=22$ and $N+m=4$: gives the solution $N=13$, $m=-9$; $n=3$.
- $N-m=44$ and $N+m=2$: gives the solution $N=23$, $m=-21$; $n=13$.
- $N-m=88$ and $N+m=1$: impossible by the parity observation.

Note that without loss of generality we may assume that $m$ is non-negative, and also that $N = n+10 > 0+10=10$, which rules out negative factor pairs. A brute-force confirmation is sketched below.
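The cutoff in this check is justified because for $n\geq35$ we have $(n+9)^2 < n^2+20n+12 < (n+10)^2$, so the expression is strictly between consecutive squares and cannot itself be a square:

```python
from math import isqrt

# Search positive n for which n^2 + 20n + 12 is a perfect square.
hits = [n for n in range(1, 1000)
        if isqrt(n*n + 20*n + 12)**2 == n*n + 20*n + 12]
print(hits)   # [3, 13]
```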
Combinations and Permutations
HINT: This is a basic application of the multiplication (or Chinese menu) principle. How many ways are there for Tom to choose an entrée? How many ways are there for Tom to choose a pair of side dishes? Note that you’re just counting $2$-element subsets of the set of $6$ possible side dishes. How must you combine these two partial results to get the final answer?
An Integral Representation of Logarithmic Derivative of Zeta Function
Let us consider the integral $$I_{1}\left(s\right)=\int_{0}^{\infty}\frac{x^{s-1}e^{x}}{\left(e^{x}-1\right)^{3}}dx,\,\mathrm{Re}\left(s\right)>3.$$ Expanding $\frac{e^{x}}{\left(e^{x}-1\right)^{3}}=\sum_{k\geq2}\frac{k\left(k-1\right)}{2}e^{-kx}$ and integrating term by term (justified by the dominated convergence theorem), we get $$I_{1}\left(s\right)=\frac{1}{2}\sum_{k\geq2}k\left(k-1\right)\int_{0}^{\infty}x^{s-1}e^{-kx}dx=\frac{1}{2}\Gamma\left(s\right)\left(\zeta\left(s-2\right)-\zeta\left(s-1\right)\right).$$ In a similar fashion we have $$I_{2}\left(s\right)=\int_{0}^{\infty}\frac{x^{s-1}}{e^{x}-1}dx=\Gamma\left(s\right)\zeta\left(s\right),\,\mathrm{Re}\left(s\right)>1,$$ so we can conclude that $$I_{3}\left(s\right)=\int_{0}^{\infty}\frac{x^{s-1}}{e^{x}-1}\left(\frac{12e^{x}}{\left(e^{x}-1\right)^{2}}-\frac{12}{x^{2}}+1\right)dx$$ $$=-6\Gamma\left(s\right)\left(\zeta\left(s-1\right)-\zeta\left(s-2\right)\right)-12\Gamma\left(s-2\right)\zeta\left(s-2\right)+\Gamma\left(s\right)\zeta\left(s\right),\,\mathrm{Re}\left(s\right)>3;$$ but, using the asymptotic expansion of $\Gamma\left(s\right)$ at $s=0$ and $\zeta\left(s\right)$ at $s=1$, we can see that $I_{3}\left(s\right)$ is defined for $\mathrm{Re}\left(s\right)\geq1$ and $$\lim_{s\rightarrow1}\left(-6\Gamma\left(s\right)\left(\zeta\left(s-1\right)-\zeta\left(s-2\right)\right)-12\Gamma\left(s-2\right)\zeta\left(s-2\right)+\Gamma\left(s\right)\zeta\left(s\right)\right)=\color{red}{\frac{5}{2}-\gamma+12\log\left(A\right)},$$ where $A$ is the Glaisher–Kinkelin constant and $\gamma$ is the Euler–Mascheroni constant, which is equivalent to your result.
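As a sanity check on the closed form, one can compare a numerical evaluation of $I_3(1)$ with the constant above. This mpmath sketch is my own; it uses a small-$x$ Taylor guard, since the integrand behaves like $x/20$ near $0$ and the naive formula suffers catastrophic cancellation there. (mpmath ships the Glaisher–Kinkelin constant as `glaisher` and $\gamma$ as `euler`.)

```python
from mpmath import mp, mpf, exp, quad, euler, glaisher, log, inf

mp.dps = 30

# Integrand of I_3 at s = 1, with a Taylor guard for small x.
def f(x):
    if x < mpf('1e-6'):
        return x / 20   # leading term of the expansion near 0
    e = exp(x)
    return (12*e/(e - 1)**2 - 12/x**2 + 1) / (e - 1)

lhs = quad(f, [0, 1, inf])
rhs = mpf(5)/2 - euler + 12*log(glaisher)
print(lhs, rhs)   # both ≈ 4.9078...
```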
If $a$ and $b$ leave the same remainders when divided by $n$, so do $a^k$ and $b^k$ for all $k \in \mathbb{N}$
Hint: First note that we have $x\equiv_n y$ if and only if $n$ divides $x-y$. For the induction step, we need to show that if $a^k\equiv_n b^k$ then $a^{k+1}\equiv_n b^{k+1}$. We use the fact that $$a^{k+1}-b^{k+1}=a^{k+1}-ab^k+ab^k-b^{k+1}=a(a^k-b^k) +b^k(a-b).$$ Remark: To do it without induction, use the identity $$a^m-b^m=(a-b)(a^{m-1}+a^{m-2}b+\cdots +ab^{m-2}+b^{m-1}).$$
Conditions for $\beta_0=\bar{y}$ in simple linear regression using least squares
If you want $\bar{y}=\beta_0$ then from the first equation you have imposed the condition $\beta_1\mathbf{a}_1=-\beta_2\mathbf{a}_2$, meaning that $\mathbf{a}_1,\mathbf{a}_2$ are linearly dependent. However, for the last equation you formulated, $\mathbf{X}^T\mathbf{X}$ is invertible if and only if $\mathbf{X}$ has full column rank, meaning that $1, \mathbf{a}_1, \mathbf{a}_2$ are linearly independent. Therefore you have conflicting conditions. To conclude, the answer to your original question is that your original data matrix needs to have column rank 1, meaning that all variables are simple linear transformations of each other; but if this condition is satisfied, then solving linear regression via least squares fails.
Diagonalization of circulant matrices
I think the fastest way to see this is to decompose the circulant matrix into a linear combination of powers of the permutation matrix associated with the long cycle $(n\;n-1\;\ldots\;1)$ (this is essentially the definition of a circulant matrix). Writing $\omega$ for a primitive $n$-th root of unity, this permutation matrix has eigenvectors $(1,\omega^{k},\omega^{2k},\ldots,\omega^{(n-1)k})$ for $k=0,\ldots,n-1$, so we can diagonalize the permutation matrix (and hence any linear combination of its powers) by conjugating by the matrix with these eigenvectors as columns, which is the discrete Fourier matrix. There might be a more elegant way to express this, but all my attempts basically boil down to definitions that expand the above.
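A numerical illustration of this argument (a numpy sketch of mine): build a circulant as a polynomial in the cyclic-shift matrix, conjugate by the DFT matrix, and observe that the resulting diagonal equals the DFT of the coefficient vector.

```python
import numpy as np

n = 4
c = np.array([4.0, 1.0, 2.0, 3.0])                 # coefficients c_k

# Cyclic-shift permutation matrix P with P[i, j] = 1 iff j = (i+1) mod n.
P = np.roll(np.eye(n), -1, axis=0)

# The circulant as a linear combination of powers of P.
C = sum(c[k] * np.linalg.matrix_power(P, k) for k in range(n))

# DFT matrix F[j, k] = exp(-2*pi*i*j*k/n); its columns are eigenvectors of P.
F = np.exp(-2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n)

D = np.linalg.inv(F) @ C @ F                       # diagonal up to roundoff
print(np.round(np.diag(D), 6))
print(np.round(np.fft.fft(c), 6))                  # matches the diagonal
```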
If a circle, whose radius squared is an integer, has rational points on its circumference, then at least one of those points is a lattice point.
It's well known which positive integers are the sum of two squares, namely those in whose prime factorisation each prime $\equiv3\pmod 4$ occurs to an even power. Let $A$ be the set of these numbers. Your result follows from the observation that if $n^2a\in A$, where $n$ and $a$ are integers, then $a\in A$. This comes from this characterisation of the elements of $A$.
If $g$ is a permutation, then what does $g(12)$ mean?
It is the composition of the permutation $g$ and the transposition $(1\ 2)$. Since $g$ is even and all transpositions are odd, it is elementary that $g (1\ 2) = g \circ (1\ 2)$ is odd, as the product of an even and odd permutation is odd. (There is a space between $1$ and $2$; this is not the number $12$.) Edit: To also answer your second question: the hint tells you that to each even permutation $g$ you may associate an odd one $g (1 \; 2)$. Note that this correspondence is injective, because $g_1 (1 \; 2) = g_2 (1 \; 2) \implies g_1 = g_2$ (just multiply by the inverse of $(1 \; 2)$ on the right, which happens to be $(1 \; 2)$ again). Now note that this correspondence is also invertible, simply because $(1 \; 2)$ has an inverse in $S_n$. Therefore, you have a bijection between the set of even permutations and the set of odd permutations, therefore these two sets must have the same number of elements, hence the conclusion.
A value at a point of continuity is determined by dense subset?
If $Y$ is Hausdorff and if $D$ is a dense subset of $X$: Let $f:X\to Y$ and $g:X\to Y$ be continuous at $x_0$ with $f|_D=g|_D.$ By contradiction, suppose $f(x_0)\ne g(x_0).$ Let $V_f, V_g$ be disjoint open subsets of $Y$ with $f(x_0)\in V_f$ and $g(x_0)\in V_g.$ By definition of continuity at a point, there are open subsets $U_f, U_g$ of $X$, each containing $x_0$, such that $f[U_f]\subseteq V_f$ and $g[U_g]\subseteq V_g .$ Now $U_f\cap U_g$ is open in $X,$ and non-empty (because it contains $x_0$) so there exists $d\in D\cap U_f\cap U_g.$ So $f(d)\in V_f$ but also $f(d)=g(d)\in V_g,$ which is absurd because $V_f,V_g$ are disjoint.