Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
The ALICE Transition Radiation Detector: Construction, operation, and performance
(Elsevier, 2018-02)
The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While aimed at providing electron identification and triggering, the TRD ...
Constraining the magnitude of the Chiral Magnetic Effect with Event Shape Engineering in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2018-02)
In ultrarelativistic heavy-ion collisions, the event-by-event variation of the elliptic flow $v_2$ reflects fluctuations in the shape of the initial state of the system. This allows to select events with the same centrality ...
First measurement of jet mass in Pb–Pb and p–Pb collisions at the LHC
(Elsevier, 2018-01)
This letter presents the first measurement of jet mass in Pb-Pb and p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV and 5.02 TeV, respectively. Both the jet energy and the jet mass are expected to be sensitive to jet ...
First measurement of $\Xi_{\rm c}^0$ production in pp collisions at $\mathbf{\sqrt{s}}$ = 7 TeV
(Elsevier, 2018-06)
The production of the charm-strange baryon $\Xi_{\rm c}^0$ is measured for the first time at the LHC via its semileptonic decay into e$^+\Xi^-\nu_{\rm e}$ in pp collisions at $\sqrt{s}=7$ TeV with the ALICE detector. The ...
D-meson azimuthal anisotropy in mid-central Pb-Pb collisions at $\mathbf{\sqrt{s_{\rm NN}}=5.02}$ TeV
(American Physical Society, 2018-03)
The azimuthal anisotropy coefficient $v_2$ of prompt D$^0$, D$^+$, D$^{*+}$ and D$_s^+$ mesons was measured in mid-central (30-50% centrality class) Pb-Pb collisions at a centre-of-mass energy per nucleon pair $\sqrt{s_{\rm ...
Search for collectivity with azimuthal J/$\psi$-hadron correlations in high multiplicity p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 8.16 TeV
(Elsevier, 2018-05)
We present a measurement of azimuthal correlations between inclusive J/$\psi$ and charged hadrons in p-Pb collisions recorded with the ALICE detector at the CERN LHC. The J/$\psi$ are reconstructed at forward (p-going, ...
Systematic studies of correlations between different order flow harmonics in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(American Physical Society, 2018-02)
The correlations between event-by-event fluctuations of anisotropic flow harmonic amplitudes have been measured in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The results are ...
$\pi^0$ and $\eta$ meson production in proton-proton collisions at $\sqrt{s}=8$ TeV
(Springer, 2018-03)
An invariant differential cross section measurement of inclusive $\pi^{0}$ and $\eta$ meson production at mid-rapidity in pp collisions at $\sqrt{s}=8$ TeV was carried out by the ALICE experiment at the LHC. The spectra ...
J/$\psi$ production as a function of charged-particle pseudorapidity density in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Elsevier, 2018-01)
We report measurements of the inclusive J/$\psi$ yield and average transverse momentum as a function of charged-particle pseudorapidity density ${\rm d}N_{\rm ch}/{\rm d}\eta$ in p-Pb collisions at $\sqrt{s_{\rm NN}}= 5.02$ ...
Energy dependence and fluctuations of anisotropic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV
(Springer Berlin Heidelberg, 2018-07-16)
Measurements of anisotropic flow coefficients with two- and multi-particle cumulants for inclusive charged particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV are reported in the pseudorapidity range |η| < 0.8 ...
I think I can safely say that nobody understands quantum mechanics. ~ Richard Feynman
Hilbert space
A Hilbert space is a generalization of Euclidean space. A Hilbert space can have an infinite number of dimensions.
A vector in an infinite-dimensional Hilbert space is represented as an infinite vector: \((x_1, x_2, \ldots)\).
The inner product of two vectors: \(\langle x|y\rangle = \sum_{i=1}^{\infty} x_i y_i\)
The norm: \( \|x\| = \langle x|x\rangle^{\frac{1}{2}} = \sqrt{\sum_{i=1}^{\infty} x_i^2} < \infty\)
Tensor product of two vectors: \(\begin{bmatrix} a_1 \\ a_2 \end{bmatrix} \otimes \begin{bmatrix} b_1 \\ b_2 \end{bmatrix} = \begin{bmatrix} a_1 b_1 \\ a_1 b_2 \\ a_2 b_1 \\ a_2 b_2 \end{bmatrix}\)
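As a quick numerical check, this tensor product is the Kronecker product, which NumPy exposes as `np.kron` (the values below are illustrative only):

```python
import numpy as np

a = np.array([1, 2])
b = np.array([3, 4])
# Kronecker product: [a1*b1, a1*b2, a2*b1, a2*b2]
result = np.kron(a, b)
assert np.array_equal(result, np.array([3, 4, 6, 8]))
```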
Qubit
In quantum computing, a qubit or quantum bit is a unit of quantum information. It's the analog of a bit for quantum computation. A qubit is a two-state quantum-mechanical system. The state 0 can be represented as the vector \(\begin{bmatrix} 1 \\ 0 \end{bmatrix} \equiv |0\rangle \equiv |{\rightarrow}\rangle\) (horizontal polarization), and state 1 can be represented as the orthogonal vector \(\begin{bmatrix} 0 \\ 1 \end{bmatrix} \equiv |1\rangle \equiv |{\uparrow}\rangle\) (vertical polarization).
A generic qubit state corresponds to a vector \(\begin{bmatrix} \alpha \\ \beta \end{bmatrix} = \alpha \begin{bmatrix} 1 \\ 0 \end{bmatrix} + \beta \begin{bmatrix} 0 \\ 1 \end{bmatrix} \equiv \alpha \, |0\rangle + \beta \, |1\rangle \) in a two-dimensional Hilbert space such that \(|\alpha|^2 + |\beta|^2 = 1\) (normalized vector). \(|\alpha|^2, |\beta|^2\) are the probabilities for the particle to be in state 0 and 1. \(\alpha\) and \(\beta\) are complex numbers and they are called amplitudes.
Another way to write the state \(\psi\) of a qubit is by using cosine and sine functions:
\(|\psi\rangle = \cos(\theta) \, |0\rangle + \sin(\theta) \, |1\rangle\)
More details about qubits can be found here: https://arxiv.org/pdf/1312.1463.pdf
Two qubits
A state of a combined two-qubit system corresponds to a vector in a four-dimensional Hilbert space: \(|\gamma\rangle = e \, |00\rangle + f \, |01\rangle + g \, |10\rangle + h \, |11\rangle = \begin{bmatrix} e \\ f \\ g \\ h \end{bmatrix} \).
In general, the state vector of two combined systems is the tensor product of the state vector of each system.
Some expressions and their equivalents:
\(|0\rangle \otimes |0\rangle \otimes |0\rangle \equiv |000\rangle\)
\(|0\rangle \otimes |1\rangle \otimes |0\rangle \equiv |010\rangle\)
\((\sqrt{\tfrac{1}{2}}|0\rangle + \sqrt{\tfrac{1}{2}}|1\rangle) \otimes (\sqrt{\tfrac{1}{2}}|0\rangle + \sqrt{\tfrac{1}{2}}|1\rangle) \equiv \frac{1}{2}|00\rangle + \frac{1}{2}|01\rangle + \frac{1}{2}|10\rangle + \frac{1}{2}|11\rangle\)
Bloch sphere
A state \(\psi\) of a qubit can also be written as \(|\psi\rangle = \cos(\frac{\theta}{2})|0\rangle + \sin(\frac{\theta}{2}) \exp(i\phi) \, |1\rangle\), and it can be visualized as a vector of length 1 on the Bloch sphere.
The angles \(\theta, \phi\) correspond to the polar and azimuthal angles of spherical coordinates (\(\theta \in [0,\pi]\), \(\phi \in [0,2\pi]\)).
Bra-ket notation
\(|01\rangle \equiv |0\rangle \otimes |1\rangle \equiv \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \end{bmatrix} \) (column vector)
\(\langle 01| \equiv \langle 0| \otimes \langle 1| \equiv \begin{bmatrix} 0 & 1 & 0 & 0 \end{bmatrix} \) (row vector)
Quantum superposition
An arbitrary state of a qubit can be described as a superposition of the 0 and 1 states.
N qubits
A 3-qubit quantum system can store numbers such as 3 or 7:
\(|011\rangle \equiv |3\rangle\)
\(|111\rangle \equiv |7\rangle\)
But it can also store the two numbers simultaneously, using a superposition state on the first qubit: \((\sqrt{\tfrac{1}{2}}|0\rangle + \sqrt{\tfrac{1}{2}}|1\rangle) \otimes |1\rangle \otimes |1\rangle \equiv \sqrt{\tfrac{1}{2}} \, (|011\rangle + |111\rangle)\)
In general, a quantum computer with n qubits can be in an arbitrary superposition of up to \(2^n\) different states simultaneously.
Quantum gates
Hadamard H gate
The Hadamard gate acts on a single qubit. It maps the state \(|0\rangle\) to a superposition state with equal probabilities for states 0 and 1 (\(\sqrt{\tfrac{1}{2}}|0\rangle + \sqrt{\tfrac{1}{2}}|1\rangle\)).
The Hadamard matrix is defined as: \(H = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}\), so \(H \begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} \sqrt{\frac{1}{2}} \\ \sqrt{\frac{1}{2}} \end{bmatrix}\)
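A minimal sketch of this action with NumPy, using exactly the matrix and input vector above:

```python
import numpy as np

# Hadamard matrix and the |0> basis vector
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
ket0 = np.array([1.0, 0.0])

state = H @ ket0
# both amplitudes are sqrt(1/2), so each outcome has probability 1/2
assert np.allclose(state, [np.sqrt(0.5), np.sqrt(0.5)])
assert np.allclose(state**2, [0.5, 0.5])
```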
Pauli X gate (NOT gate)
The Pauli-X gate acts on a single qubit. It is the quantum equivalent of the NOT gate for classical computers.
The Pauli X matrix is defined as: \(X = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}\), so \(X \begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}\)
Pauli Y gate
The Pauli-Y gate acts on a single qubit. It equates to a rotation around the Y-axis of the Bloch sphere by \(\pi\) radians. It maps \(|0\rangle\) to \(i|1\rangle\) and \(|1\rangle\) to \(-i|0\rangle\).
The Pauli Y matrix is defined as: \(Y = \begin{bmatrix} 0 & -i \\ i & 0 \end{bmatrix}\), so \(Y \begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ i \end{bmatrix}\)
Pauli Z gate
The Pauli-Z gate acts on a single qubit. It equates to a rotation around the Z-axis of the Bloch sphere by \(\pi\) radians. It leaves the basis state \(|0\rangle\) unchanged and maps \(|1\rangle\) to \(-|1\rangle\).
The Pauli Z matrix is defined as: \(Z = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}\), so \(Z \begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}\)
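The stated actions of the three Pauli gates can be verified directly from their matrices (a sketch with plain NumPy arrays):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])
ket0 = np.array([1, 0])
ket1 = np.array([0, 1])

assert np.array_equal(X @ ket0, ket1)     # NOT: |0> -> |1>
assert np.allclose(Y @ ket0, 1j * ket1)   # |0> -> i|1>
assert np.allclose(Y @ ket1, -1j * ket0)  # |1> -> -i|0>
assert np.allclose(Z @ ket0, ket0)        # |0> unchanged
assert np.allclose(Z @ ket1, -ket1)       # |1> -> -|1>
```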
Other gates are described in this link: https://en.wikipedia.org/wiki/Quantum_gate
State measurement
State measurement can be done using polarizing filters. Filters transmit or block particles based on angle of polarization.
IBM Q
IBM Q is a cloud quantum computing platform.
Below is a simple experiment with two qubits. No transformation is applied to the first qubit; a Pauli X transformation is applied to the second qubit. As a result, there is one possible state, |10⟩, with probability equal to 1.
Data representation
Given a vector \(x = (x_1, x_2, \ldots, x_n)\), storing the vector \(x\) in a quantum system requires \(\log_2(n)\) qubits.
For example, the quantum system representation of the following vector requires only 3 qubits.
(0, 0.2, 0.2, 0, 0.1, 0.2, 0.1, 0.3) ≡ 0 |000⟩ + 0.2 |001⟩ + 0.2 |010⟩ + 0 |011⟩ + 0.1 |100⟩ + 0.2 |101⟩ + 0.1 |110⟩ + 0.3 |111⟩
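A small sketch of the qubit count and the normalization step (note that the listed amplitudes are not normalized as written; a physical state must have unit norm, so they would be rescaled first):

```python
import math
import numpy as np

v = np.array([0, 0.2, 0.2, 0, 0.1, 0.2, 0.1, 0.3])
n_qubits = int(math.log2(len(v)))  # 8 components -> 3 qubits

# rescale so the squared amplitudes sum to 1
state = v / np.linalg.norm(v)
assert np.isclose(np.sum(state**2), 1.0)
```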
By generalizing the approach in Integral involving a dilogarithm versus an Euler sum, that is, by using the integral representation of the harmonic numbers and computing a three-dimensional integral over a unit cube analytically, we have found the generating function of the cubes of harmonic numbers. We have: \begin{eqnarray} &&S^{(3)}(x) := \sum\limits_{n=1}^\infty H_n^3 x^n = \frac{-18 \text{Li}_3\left(1-\frac{1}{x}\right)+6 \text{Li}_3\left(\frac{1}{x}\right)-18 \text{Li}_3(x)}{6(1-x)}+ \frac{6 \log ^3(1-x)-9 \log (x) \log ^2(1-x)+3 \left(3 \log ^2(x)+\pi ^2\right) \log (1-x)}{6(x-1)}+\frac{-\log (x) \left(2 \log ^2(x)+ 3 i \pi \log (x)+5 \pi ^2\right)}{6 (x-1)} \end{eqnarray} Clearly some of the terms on the right-hand side are complex, even though the whole expression is of course real. The first two terms in the first fraction on the rhs are complex, and the middle term in the last fraction is complex. My question is: how do I simplify the right-hand side to get rid of the complex terms?
By using the functional equations for the trilogarithm we simplified the result as follows: \begin{eqnarray} &&S^{(3)}(x)= \\ &&\frac{ \text{Li}_3(x)}{(1-x)}+ 3\frac{\text{Li}_3(1-x)-\zeta (3)}{(1-x)}+ \log(1-x)\frac{ \left(-2 \log ^2(1-x)+3 \log (x) \log(1-x)-\pi ^2 \right)}{2 (1-x)} \end{eqnarray}
For a sanity check we expand each of the terms in the formula in a Taylor series about zero we have: \begin{eqnarray} &&\frac{Li_3(x)}{1-x} =\\ && x+\frac{9 x^2}{8}+\frac{251 x^3}{216} + O(x^4) \\ &&3\frac{\text{Li}_3(1-x)-\zeta (3)}{(1-x)} =\\ &&-\frac{\pi ^2 x}{2}+\frac{9 x^2}{4}-\frac{3}{2} x^2 \log (x)-\frac{3 \pi ^2 x^2}{4}-\frac{11 \pi ^2 x^3}{12}+4 x^3-3 x^3 \log (x) + O(x^4) \\ &&\log(1-x)\frac{ \left(-2 \log ^2(1-x)+3 \log (x) \log(1-x)-\pi ^2 \right)}{2 (1-x)} =\\ && \frac{\pi ^2 x}{2}+\frac{3 \pi ^2 x^2}{4}+\frac{3}{2} x^2 \log (x) +\frac{11 \pi ^2 x^3}{12}+x^3+3 x^3 \log (x)+ O(x^4) \end{eqnarray}
As we can see, the terms proportional to $\log(x)$ present in the second and third terms exactly cancel each other. The formula is correct.
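The simplified closed form can also be checked numerically against the defining series. The sketch below uses only the standard library, evaluating $\text{Li}_3$ by its power series (valid here since both $x$ and $1-x$ lie in $(0,1)$) and a hard-coded value of Apery's constant:

```python
import math

ZETA3 = 1.2020569031595942  # zeta(3), Apery's constant

def li3(z, terms=200):
    # trilogarithm Li_3(z) via its defining series, for |z| < 1
    return sum(z**k / k**3 for k in range(1, terms + 1))

def S3(x):
    # the simplified closed form for sum_{n>=1} H_n^3 x^n
    L = math.log(1 - x)
    return (li3(x) / (1 - x)
            + 3 * (li3(1 - x) - ZETA3) / (1 - x)
            + L * (-2 * L**2 + 3 * math.log(x) * L - math.pi**2) / (2 * (1 - x)))

def S3_series(x, terms=200):
    # direct partial sum of the defining series
    H, total = 0.0, 0.0
    for n in range(1, terms + 1):
        H += 1.0 / n
        total += H**3 * x**n
    return total

assert abs(S3(0.5) - S3_series(0.5)) < 1e-9
```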
Here we provide a closed form for another related sum. We have: \begin{eqnarray} &&\sum\limits_{n=1}^\infty \frac{H_n^3}{n} \cdot x^n =\\ &&3 \zeta(4)-3 \text{Li}_4(1-x)+3 \text{Li}_3(1-x) \log (1-x)+\log (x) \log ^3(1-x)+\\ &&\text{Li}_2(x){}^2-2 \text{Li}_4(x)-3 \text{Li}_4\left(\frac{x}{x-1}\right)+\frac{3}{2} \left(\text{Li}_2(x)-\frac{\pi ^2}{6}\right) \log ^2(1-x)+2 \text{Li}_3(x) \log (1-x)+\frac{1}{8} \log ^4(1-x) \end{eqnarray} We obtained this formula by dividing the right-hand side in the question above by $x$ and then integrating.
$(p,q)$th order oriented growth measurement of composite $p$-adic entire functions
Abstract
Let $\mathbb{K}$ be a complete ultrametric algebraically closed field and let $\mathcal{A}\left( \mathbb{K}\right) $ be the $\mathbb{K}$-algebra of entire functions on $\mathbb{K}$. For any $p$-adic entire function $f\in\mathcal{A}\left( \mathbb{K}\right) $ and $r>0$, we denote by $|f|\left(r\right)$ the number $\sup \left\{ |f\left( x\right) |:|x|=r\right\},$ where $\left\vert \cdot \right\vert (r)$ is a multiplicative norm on $\mathcal{A}\left(\mathbb{K}\right).$ For another $p$-adic entire function $g\in\mathcal{A}\left( \mathbb{K}\right),$ $|g|\left(r\right) $ is defined similarly, and the ratio $\frac{|f|\left( r\right) }{|g|\left( r\right) }$ as $r\rightarrow \infty $ is called the comparative growth of $f$ with respect to $g$ in terms of their multiplicative norms. As in complex analysis, in this paper we define the concept of $(p,q)$th order (respectively $(p,q)$th lower order) of growth as $\rho^{\left(p,q\right)} \left( f\right) =\underset{r\rightarrow +\infty }{\lim \sup} \frac{\log ^{[p]}|f| \left(r\right) }{\log ^{\left[ q\right] }r}$ (respectively $\lambda ^{\left( p,q\right) } \left(f\right) =\underset{r\rightarrow +\infty }{\lim\inf }\frac{\log ^{[p]}|f|\left( r\right)} {\log^{\left[ q\right] }r}$), where $p$ and $q$ are any two positive integers. We then study some growth properties of composite $p$-adic entire functions on the basis of their $\left( p,q\right)$th order and $(p,q)$th lower order, where $p$ and $q$ are any two positive integers.
Keywords
$p$-adic entire function, growth, $(p;q)$th order, $(p;q)$th lower order, composition
Suppose that an object is moving along the x-axis with an initial velocity \(v_i\) (which could be \(0\,\text{m/s}\) but, in general, can be any arbitrary value) and is displaced by an amount \(Δx\). Let's suppose that this object is acted upon by an arbitrary force \(\vec{F}\) as it is being displaced, as illustrated in Figure 1. The work \(W\) done by \(\vec{F}\) is, by definition, given by
$$W≡\int_c\vec{F}·d\vec{R}.\tag{1}$$
For an object whose motion is restricted to the x-axis, Equation (1) simplifies to
$$W=\int{\vec{F}·d\vec{x}}.\tag{2}$$
Since the vertical component of the force \(\vec{F}_y\) is always perpendicular to \(d\vec{x}\) (the object's displacement), it follows that \(\vec{F}_y\) does not contribute to the work done on the object. Only \(\vec{F}_x\) contributes to the work done and thus Equation (2) simplifies to
$$W=\int{\vec{F}_x·d\vec{x}}.\tag{3}$$
Since \(\vec{F}_x\) is always parallel (or anti-parallel) to \(d\vec{x}\), we can simplify Equation (3) to
$$W=\int{F_xdx},$$
if \(\vec{F}_x\) is pointing in the same direction as \(d\vec{x}\). (If \(\vec{F}_x\) is pointing in the opposite direction to \(dx\), we can just add a minus sign in front of the integral.) We'll restrict our attention to problems in which the force applied to the object is constant. Given this assumption, we can further simplify the integral to
$$W=F_x\int{dx}=F_xΔx.\tag{4}$$
Since \(F_x\) is constant, we can simplify Newton's second law (so that it is expressed in terms of acceleration instead of the time rate-of-change of momentum) and substitute \(ma_x\) in place of \(F_x\) to get
$$W=ma_xΔx.\tag{5}$$
Also, recall that the equations of kinematics apply to any object moving at a constant acceleration. Since we have chosen to restrict our attention to objects acted upon by a constant force, the x-component of acceleration \(a_x\) must also be constant. This means that we can use the following equation from kinematics (in fact, we could use any of the equations of kinematics, since they all apply to any object moving with a constant acceleration) to describe the object's motion:
$$v_f^2=v_i^2+2a_xΔx.$$
If we algebraically manipulate the above equation in order to solve for \(Δx\), we get
$$Δx=\frac{v_f^2-v_i^2}{2a_x}.$$
Substituting this result into Equation (5), we have
$$W=ma_x(\frac{v_f^2-v_i^2}{2a_x})=m(\frac{v_f^2-v_i^2}{2})=\frac{1}{2}mv_f^2-\frac{1}{2}mv_i^2=ΔKE.\tag{6}$$
What we have essentially just proved to ourselves is that there is some relationship between the work done on an object and that object's change in energy. Or, to be more technical, we've shown that work is an energy transfer mechanism: a way of transferring energy into and out of an object. In this proof in particular, we've shown that whenever a force \(\vec{F}\) acts on an object as that object is being displaced but whose altitude (or height) isn't changing, then that force \(\vec{F}\) transfers kinetic energy into or out of the object (depending on whether \(W=ΔKE\) is positive or negative). Equation (6) is called the work kinetic-energy theorem. In the next lesson, we'll prove that if an object (initially at a height \(y_i\)) moves along any trajectory until it reaches a different height \(y_f\), the force of gravity \(\vec{F}_g\) transfers energy into or out of the object. But it does not transfer kinetic energy into or out of the object; instead, it only transfers potential energy into or out of the object.
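The chain of substitutions in this derivation can be checked numerically with arbitrary sample values for the mass, initial velocity, acceleration, and displacement (all hypothetical):

```python
# constant force along x; sample values only
m, v_i, a_x, dx = 2.0, 3.0, 1.5, 4.0

v_f_sq = v_i**2 + 2 * a_x * dx            # kinematics: vf^2 = vi^2 + 2 a dx
W = m * a_x * dx                          # Eq. (5): W = m a_x dx
dKE = 0.5 * m * v_f_sq - 0.5 * m * v_i**2

assert abs(W - dKE) < 1e-12               # Eq. (6): W = delta KE
```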
In classical mechanics, the condition to fix the variation of the trajectory at the endpoints has a clear-cut meaning. We want the system to propagate from $x\in\mathcal{C}$ to $y\in\mathcal{C}$, therefore, we only consider curves that have these two points as endpoints, so the variations also have to respect this. It is clear.
Moving on to relativistic field theory and the (covariant) Lagrangian approach of it, these conditions however become
much less transparent.
The action is usually defined as $$ S[\phi]=\int_{\mathcal{D}}\mathcal{L}(\phi,\nabla\phi,\ldots,\nabla^{(k)}\phi)\ d^nx, $$ and we consider only variations $\delta\phi$ for which $\delta\phi|_{\partial\mathcal{D}}=0$.
In particular, it is accepted, that the derivatives of the variation $\nabla_\mu\delta\phi$
cannot be set to zero on the boundary.
Sources I have seen, however, did not explain any of this; they just postulated it.
Questions:
What is the motivation to postulate the vanishing of the variation at the boundary?
What is the reason one cannot also postulate the vanishing of the derivative of the variation at the boundary?
Is there any significance to the domain $\mathcal{D}$ in the action? In particular, I have seen some books demand the integral to be extended to the entire manifold, while some books explicitly said that $\mathcal{D}$ should be a compact or precompact domain.
Further notes: I have seen a throwaway note in Wald's General Relativity which, when defining the functional derivative $\delta S/\delta\phi$ via $$ \delta S[\phi]=\int_{\mathcal{D}}\frac{\delta S}{\delta\phi}\delta\phi\ d^nx, $$ said that, more generally, $S$ is functionally differentiable if there exists a tensor distribution $\frac{\delta S}{\delta\phi}$ such that $$ \delta S[\phi]=\langle \frac{\delta S}{\delta\phi},\delta\phi\rangle. $$ This made me think that maybe one should take the variations $\delta\phi$ to be test functions. But then, if they are usual test functions with compact support, taking $\mathcal{D}=\text{supp}(\delta\phi)$ will reproduce the usual boundary conditions; but if we extend the domain of integration outside $\mathcal{D}$, then because the test function is identically zero outside its support, we can also set the derivatives to zero, right? But if we take the variations to be tempered test functions, then the domain always has to be the entire manifold.
It is possible to derive the Israel junction conditions from variational principles, but in this case, the Gibbons-Hawking-York boundary term must be appended to the action. This provides a direct application of the GHY term, which itself is a consequence of not being able to set the derivatives of the variation to zero, so here I see this claim validated by practical results, yet the underlying reason still eludes me.
Your question is,
If the average of the first $n$ terms of a sequence tends to a limit, does the sequence itself tend to a limit?
The answer is no in general, as is discussed in the comments. The simplest counterexamples are the sequences which oscillate between two different values $\alpha$ and $\beta$; we would expect that the average of the first $n$ terms of such a sequence will tend to the average of $\alpha$ and $\beta$. As a concrete example let's define
$$x_n = \frac{1+(-1)^n}{2},$$
so that $x_n$ alternates between $0$ (when $n$ is odd) and $1$ (when $n$ is even). We then have
$$n \, z_n = x_1 + x_2 + \cdots + x_n = \begin{cases}\frac{n}{2} & \text{if } n \text{ is even}, \\\frac{n-1}{2} & \text{if } n \text{ is odd},\end{cases}$$
from which we can deduce that
$$\frac{1}{2} - \frac{1}{2n} \leq z_n \leq \frac{1}{2}$$
and thus
$$\lim_{n \to \infty} z_n = \frac{1}{2}.$$
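A short numerical check of this counterexample, computing the running average $z_n$ by brute force:

```python
def z(n):
    # average of the first n terms of x_k = (1 + (-1)^k)/2
    return sum((1 + (-1)**k) / 2 for k in range(1, n + 1)) / n

assert z(10) == 0.5           # n even: exactly 1/2
assert z(11) == 5 / 11        # n odd: (n-1)/(2n)
assert abs(z(100000) - 0.5) < 1e-6
```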
There are, however, many cases where one can deduce the convergence of the original sequence from the convergence of this average.
For example, if $(x_n)$ is a positive, monotonic sequence, then the convergence of $(z_n)$ implies the convergence of $(x_n)$.
A slightly more difficult (and more useful) example is discussed in this thread.
Results of this general shape are called Tauberian theorems. A nice reference is the book Divergent Series by G. H. Hardy.
Maximum Principle: Let $\Omega$ be a bounded, connected region of $\mathbb{R}^m$ with $u$ defined and continuous in $\bar{\Omega}$, $\Delta u = 0$. Then $u$ achieves its maximum (and minimum) on $\partial \Omega$. Proof:
$$ \Delta u = 0 \text{ in } \Omega $$ $$ \exists \, x^0: \max_{x \in \bar{\Omega}}u(x)=u(x^0)=M < +\infty $$
If $x^0 \in \partial \Omega$: The principle holds true.
If $x^0 \in \Omega$: Using the
mean value property:
$$ u(x^0) = \frac{1}{4 \pi \delta^2} \int\limits_{S(x^0,\delta)} u(x) \, ds = M $$ $$ \Rightarrow u(x) = M, \, \forall x \in S(x^0,\delta) \Rightarrow u(x) = M, \, \forall x \in \Omega. \quad\square $$
I don't get the last line of this proof. Why does $u(x) = M, \, \forall x \in S(x^0,\delta)$ follow from the mean value property, and why does $u(x) = M$ hold for every $x$ in $\Omega$?
I think the OP's proof is correct assuming both functions are entire. However, even if $F(x)$ is entire, the fractional iterate is in general not entire. The OP's result does not hold if the fractional iterate is not entire:
$$F^{\circ\frac{1}{n}}(x)\;\;\;\;h(x)=F^{\circ \frac{1}{2}}(x)\;\;\;\;h(h(x))=F(x)$$
If the half iterate is not entire then there are points for which $F(x)=h(h(x))$ is only by analytic continuation, and not by direct computation. So definition (1), $\forall x\;\;h(h(x))=F(x)$ does not hold $\forall x$, since the half iterate has multiple values depending on the path, unless the half iterate is also entire, which pretty much rules out all non-trivial half iterates!
I wanted to generate a counterexample that was as simple as possible. Let's start with a function that has three fixed points, $0$ and $\pm 1$, and develop the half iterate at the fixed point at the origin. I also wanted to avoid the parabolic case (see Will Jagy's half iterate of sin(x) post), which occurs when the first derivative at the fixed point equals 1, so I chose a first derivative of 4 at the fixed point at the origin. Avoiding the parabolic case gives guaranteed convergence, so the formal half iterate has guaranteed convergence near the origin and a first derivative of 2.
So here is my counterexample: $F(x)=4x-3x^3$, whose half iterate is $h(x)$, with fixed points $0, \pm 1$. For all points within some radius of convergence, $h(h(x))=F(x)$, but eventually we reach the singularity of $h(x)$, which gives it a finite radius of convergence. Weird stuff happens when the radius of convergence of $h(x)$ is larger than $h(1)$, where $1$ is one of the other fixed points. So, below, I post the formal half iterate of $F(x)$, which has a radius of convergence of $\frac{16}{9}\approx 1.78$. In this counterexample, all three fixed points of $F(x)$ have $h(x)$ within its analytic radius of convergence, and even $h(1), h(-1)$ are within the radius of convergence, so the half iterate seems completely unambiguous at these points. And yet this leads to a clear counterexample.
$$F(1)=1$$$$h(1)=1.66125776701924932137$$$$h(1.66125776701924932137)=1$$$$F(h(1))=F(1.66125776701924932137)=-7.10907369782592055937 \neq h(1)$$
However, even though the radius of convergence of the Taylor series of $h(x)$ exceeds $h(1)$, when you look at the number $1.2399067$, $h(1.2399067)=16/9$, so $h(h(x))$ has the smaller radius of convergence $1.2399067$. Of course, this smaller radius of convergence has a singularity that cancels by analytic continuation, since $F(x)$ is entire. And this is what allows the weird stuff to happen: in the complex plane, $h(x)$ at the fixed point of $F(x)$ has multiple values depending on the path, even though $F(x)$ is entire and is always well defined independent of the path. So $h(x)$ can only be fully defined by path-dependent analytic continuation in the complex plane.
{h(x)=
+x *2
-x^ 3*3/10
-x^ 5*27/850
-x^ 7*243/44200
-x^ 9*4391901/3862196000
-x^11*4097709/15835003600
-x^13*263696194479501/4216940633698000000
-x^15*4352793841907459397/276378289132566920000000
-x^17*0.00000408866926284292783744
-x^19*0.00000108632410179569368855
-x^21*0.000000293954426198467149790
-x^23*0.0000000807297320769806555906
-x^25*0.0000000224441951265113300999
-x^27*0.00000000630439537551479828510
-x^29*0.00000000178645419922952969101
-x^31*0.000000000510065370119009481553
-x^33*1.46596264762260914617 E-10
-x^35*4.23776939074721938452 E-11
-x^37*1.23135265881044955235 E-11
-x^39*3.59433071863758569107 E-12 ....
}
Besides the formal Taylor series, one may generate the half iterate by using the identity: $$h(x) = F \circ h \circ F^{-1} = \lim_{n \to \infty} F^{n} \circ h \circ F^{-n} = \lim_{n \to \infty} F^{n}(2\cdot F^{-n}(x)) $$
The $h = F \circ h \circ F^{-1}$ equation also shows that the half iterate radius of convergence is tied to the radius of convergence of $F^{-1}(x)$, which is $\frac{16}{9}$. This may be calculated from where $\frac{d}{dx}F(x)=0$ which is at $x=\pm \frac{2}{3}$ where $F(\pm \frac{2}{3})=\pm\frac{16}{9}$. Here is a graph of h(x), at the real axis, from -16/9 to +16/9, which is out to the radius of convergence, where the derivative of h(x) goes to infinity. The singularity is cancelled out when iterating $h(h(x))$ since the derivative of $h(x)=0$ where $h(x)=16/9$.
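One can check numerically that the truncated series posted above does satisfy $h(h(x)) = F(x)$ well inside the radius of convergence (a sketch keeping only the exact rational coefficients listed, evaluated at $x = 0.1$):

```python
# leading exact coefficients of the formal half iterate h(x)
coeffs = {1: 2, 3: -3/10, 5: -27/850, 7: -243/44200,
          9: -4391901/3862196000, 11: -4097709/15835003600}

def h(x):
    return sum(c * x**k for k, c in coeffs.items())

def F(x):
    return 4 * x - 3 * x**3

x = 0.1
err = abs(h(h(x)) - F(x))
assert err < 1e-6  # agreement up to truncation error near the origin
```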
One more image showing $h(x), h^{\circ 2}(x)=F(x), h^{\circ 3}(x), h^{\circ 4}(x)$ from 0 to 1. Odd iterations are in purple, and even iterations are in red. Notice that $h^{\circ 3}(1)=-7.10907$ as expected, as opposed to $h(1)=1.6612577$. We see that $h(1)$ has multiple values, depending on how many times we have iterated $h(x)$. But $F(x)$ is entire, so no matter how many times we iterate, $F(1)=1$. Also notice that this still contradicts the OP's proof, since for analytic functions the half iterate is multiple valued, apparently infinitely valued in this case, depending on the path in the complex plane.
On page 75 in Sutskever's thesis http://www.cs.utoronto.ca/~ilya/pubs/ilya_sutskever_phd_thesis.pdf
In equation (7.5) setting $a_0=1$,
$a_{t+1} = (1+\sqrt{4 a_t^2 + 1})/2 $
The author said, "to understand the sequence $a_t$ we note that the function $x \rightarrow (1+ \sqrt{4x^2+1})/2$ quickly approaches $x \rightarrow x + 0.5 $ from below as $x \rightarrow \infty$,"
then he claims
"so $a_t \approx (t+4)/2 $ for large $t$ "
I couldn't figure out how he made the deduction here. It is not at all obvious to me how one might even guess the expression. If we brute-force it, we could probably guess $t/2$, but the $t+4$ term seems to come from nowhere in the large-$t$ limit.
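One way to get a feel for the claim is simply to iterate the map numerically: each step adds a little less than $1/2$ (for large $x$ the map behaves like $x + 1/2 + 1/(8x)$), so $a_t$ grows like $t/2$ up to a slowly growing correction. This is an empirical check of the leading behavior, not a derivation of the exact constant:

```python
import math

a = 1.0          # a_0 = 1
T = 10000
for _ in range(T):
    a = (1 + math.sqrt(4 * a * a + 1)) / 2

# a_T stays within a few units of T/2 after many iterations
ratio = a / (T / 2)
assert abs(ratio - 1) < 0.01
```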
I am new to integral equations. In this field, people study the Fredholm equation
$$\phi(x) + \int_0^1 K(x, y) \phi(y) dy = f(x). $$
I am a bit surprised to see the first term on the left hand side. In linear algebra, we have the equation
$$ \sum_j A_{ij} x_j = b_i . $$
Here $A$ is the counterpart of $K$, and $b$ the counterpart of $f$. Therefore, the simplest and most natural integral equation for me would be
$$ \int_0^1 K(x, y) \phi(y) dy = f(x). $$
So why did people not start from this one, rather than the strange one above? This might be a very naive question, but I did not find any discussion in the literature.
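The analogy with the matrix equation can be made concrete by discretizing on a grid (a Nyström-style sketch; the kernel and right-hand side below are arbitrary choices for illustration). The second-kind equation becomes $(I + Kw)\phi = f$, which is typically well conditioned, while the first-kind version $Kw\phi = f$ tends to be badly conditioned, which is part of why the extra identity term is natural:

```python
import numpy as np

n = 100
x = (np.arange(n) + 0.5) / n                   # midpoint nodes on [0, 1]
w = 1.0 / n                                    # quadrature weight
K = np.exp(-np.abs(x[:, None] - x[None, :]))   # illustrative smooth kernel
f = np.sin(np.pi * x)                          # illustrative right-hand side

A = np.eye(n) + K * w                          # second kind: phi + int K phi = f
phi = np.linalg.solve(A, f)
assert np.allclose(A @ phi, f)

# the first-kind operator alone is far worse conditioned
assert np.linalg.cond(A) < np.linalg.cond(K * w)
```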
Is it valid to apply Einstein's Relativity to scenarios involving the expansion of space? For a practical example: is it legitimate to speak of distant red-shift galaxies as experiencing time more slowly in relation to our experience of time? I appreciate that this isn't sensible in other ways, but by explaining whether it is legitimate, even if not sensible, you are kindly helping me understand a bit more about Relativity and the expansion of space :O)
One can apply the underlying
principle of relativity -- that all reference frames are valid and agree on the speed of light -- to expanding space, but one has to be careful.
In particular,
special relativity assumes reference frames are these global things that cover all of space and time. Picture a uniform grid of clocks and rulers stretching as far as the eye can see, existing for all time.
However, once space itself is expanding, curving, or doing anything other than sitting still and behaving nicely, special relativity is no longer adequate. This is where
general relativity comes in. This is Einstein's extension of his theory, where the principle of relativity is taken to only apply locally. That is, two nearby observers can compare their results in the special-relativistic way, but distant ones can't do so quite so easily.
The problem is that it is ambiguous how to transport vector quantities from one location to another. Think of an arrow on the surface of the Earth, somewhere at the equator. You might ask, "How does its direction compare with this other arrow located at the North Pole?" To do the comparison, you slide the equatorial arrow on over to the pole, keeping its orientation the same. But its direction at the pole depends on the path you took to get there!
The same problem happens when comparing distant things in the universe. Many statements like "time is flowing slower over there" are actually devoid of meaning without more information, since it's unclear how to compare things like the flow of time between distant points.
Is it legitimate to speak of distant red-shift galaxies as experiencing time more slowly in relation to our experience of time?
No, it is not, since they are not moving relative to the Hubble flow, which means that they are sitting on their comoving coordinates and are therefore at rest relative to the CMB, just like we are (peculiar velocities neglected). Time dilation only happens when objects are moving in space, not if they are flowing with the expanding space.
Is it valid to apply Einstein's Relativity to scenarios involving expansion of space?
Of course, but you have to keep in mind that you have to subtract the recessional velocity due to the Hubble expansion from the total velocity relative to our galaxy to calculate the time dilation.
For example, if your observed galaxy has a distance of $d$ and a total velocity of $v$ relative to our galaxy (which, for simplicity, we assume to be at rest relative to the CMB), the peculiar velocity $v_{pec}$ of this galaxy would be $v_{pec}=v-H\cdot d$, where $H$ is the Hubble parameter with units of ${\text{sec}^{-1}}$. Now you can plug the peculiar velocity into the formula for special relativistic time dilation. Since peculiar velocities are rather small compared to the speed of light, this effect is more or less negligible.
For example: if you observe an object with redshift $z_{observed}+1=3$, but from its distance you would expect redshift $z_{expected}+1=2$, then you know that $\frac{3}{2}=\sqrt{\frac{c+v}{c-v}}$, so the peculiar velocity is $v_{pec}=\frac{5}{13}c \approx 0.3846 \, c$. This gives a time dilation factor of $12:13$.
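To make the arithmetic concrete, here is a short Python sketch of this calculation. The function names are illustrative, not from any library; the formulas are the relativistic Doppler relation and the standard time-dilation factor used in the example above.

```python
import math

def peculiar_velocity(v_total, H, d):
    """Subtract the Hubble recession velocity H*d from the total velocity."""
    return v_total - H * d

def beta_from_redshift_ratio(ratio):
    """Solve ratio = sqrt((1+beta)/(1-beta)) for beta = v/c."""
    r2 = ratio ** 2
    return (r2 - 1) / (r2 + 1)

def time_dilation_factor(beta):
    """Rate at which the moving clock ticks relative to ours: sqrt(1-beta^2)."""
    return math.sqrt(1 - beta ** 2)

ratio = 3 / 2                            # (1 + z_observed) / (1 + z_expected)
beta = beta_from_redshift_ratio(ratio)   # 5/13, about 0.3846
factor = time_dilation_factor(beta)      # 12/13, the "12:13" in the text
```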
PS: Fraser Cain made a short video for laymen on this topic on YouTube.
Is it legitimate to speak of distant red-shift galaxies as experiencing time more slowly in relation to our experience of time?
Yes, it is. Take a periodic source of light, acting as a clock. Consider these three scenarios:
- the source of light is close enough that the Minkowski approximation holds, and moves away at a high velocity
- the source of light is far away and at rest relative to the Hubble flow
- the source of light sits lower in a gravitational potential, at constant distance from the source of gravity
In any of these cases, the light will be redshifted and time will appear to tick slower. Observationally, the situations are equivalent, even though we attribute the effect to Doppler shift, cosmological and gravitational redshift and time dilation, respectively.
In any of these cases, we can parallel transport the source's velocity vector along the light path and treat the result as the source's velocity relative to the observer. It might seem paradoxical that objects can have a non-zero relative velocity even though they are both 'at rest' according to various interpretations of the term (no motion relative to the Hubble flow in one case, a constant distance from the source of gravity in the other). This is no longer an issue once we reject distant parallelism.
This is a convergence/divergence question from Calculus II! Please give me a hand!
Determine convergence or divergence using any method covered so far.
$$\sum_{n = 1}^{\infty} \sin (1/n)$$
You need to determine convergence for $\sum_{n=1}^\infty\sin(1/n)$. The series diverges. The two hints below may guide you when trying to justify this.
Hint 1: $ \lim_{\theta\to0}\sin(\theta)/\theta=1$ and $1/n\to 0$ as $n\to\infty$.
Hint 2: $\sin(1/n)$ is positive. So you may attempt a limit comparison test.
Hint: $\sin(x) / x \to 1$ as $x \downarrow 0$.
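A quick numerical illustration (not a proof) of what the hints point at: the ratio $\sin(1/n)/(1/n)$ tends to $1$, and the partial sums keep growing, roughly like those of the harmonic series.

```python
import math

# Partial sums of sum sin(1/n): they track the harmonic series,
# whose partial sums grow like ln(N).
def partial_sum(N):
    return sum(math.sin(1 / n) for n in range(1, N + 1))

# sin(1/n) / (1/n) -> 1 as n -> infinity (limit comparison with 1/n)
ratios = [math.sin(1 / n) * n for n in (10, 100, 1000)]

# partial sums at N = 100, 1000, 10000 keep increasing without bound
sums = [partial_sum(10 ** k) for k in (2, 3, 4)]
```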
We can prove this is divergent through direct comparison, though the way I know uses a Taylor Series expansion, which is not normally covered when students first see this problem. For others, this might shed some new light on the intuition that is lost from the limit comparison test. Sometimes it can feel like the limit comparison test just works "by magic," at least for me.
If you are unfamiliar with a Taylor Series expansion, I encourage you to research it. It becomes quite useful in most portions of applied math.
The first term of the Taylor Series can often hint at the convergence or divergence of a function, as you will see below. For this problem, expand $\sin{x}$ about $x=0$.
$$\sin{x}=x-\frac{x^3}{3!}+\frac{x^5}{5!}-\cdots$$
If we truncated the Taylor series at the second term, we’d have the following positive Lagrange remainder: $$\sin{x}=x-\frac{x^3}{3!}+f^{(5)}(c)\frac{x^5}{5!},$$ where $c$ is between $0$ and $x$. Note that $f^{(5)} (x)=\cos{x}$, so we can replace $f^{(5)}(c)$ with $\cos{(c)}$ from this point on.
Now, if we borrow this same Taylor expansion, but let $x=1/n$ for some positive integer n, we’d get $$\sin{(\frac{1}{n})}=\frac{1}{n}-\frac{1}{6n^3}+\frac{\cos{(c)}}{5!}\frac{1}{n^5}.$$
As stated before, looking at the first term $1/n$ hints that this series might diverge. As a similar example, if we were instead observing the function $1-\cos{\frac{1}{n}}$, we'd find the first term to be $\frac{1}{2n^2}$, which hints that the corresponding series converges.
(This expansion is quite weird when you think about it. We built it centered around $x=0$ which would mean that with this substitution, we centered it around $\lim_{n→∞}\frac{1}{n}=0$, an odd concept for Taylor expansions, but we’ll work with it.)
If you note that $1/n$ is always positive and $0<\frac{1}{n}\leq1$, this means that $0<c<1$. As a result, $\cos{(c)}>0$ for any such $c$. The resulting remainder is positive, so removing it results in an under-approximation. To show this, bring the other two terms over to the left, as seen below: $$\sin{(\frac{1}{n})}-\frac{1}{n}+\frac{1}{6n^3}=\frac{\cos{(c)}}{5!}\frac{1}{n^5}>0$$
$$\implies \sin{\left(\frac{1}{n}\right)}>\frac{1}{n}-\frac{1}{6n^3}$$ We can use this for our summation. Observing the corresponding series on the right-hand side, we have a divergent series: $$\sum_{n=1}^{\infty}{\left(\frac{1}{n}-\frac{1}{6n^3}\right)}=\sum_{n=1}^{\infty}{\frac{1}{n}}-\sum_{n=1}^{\infty}{\frac{1}{6n^3}}.$$ The second series above is convergent, but the harmonic series $\sum_{n=1}^{\infty}{1/n}$ is famously divergent, so the difference diverges. By the direct comparison test, because this series is divergent and each of its terms is less than the corresponding term of $$\sum_{n=1}^{\infty}{\sin{\left(\frac{1}{n}\right)}},$$ we have $$\sum_{n=1}^{\infty}{\sin{\left(\frac{1}{n}\right)}}>\sum_{n=1}^{\infty}{\left(\frac{1}{n}-\frac{1}{6n^3}\right)},$$ meaning the sine series diverges as well.
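The lower bound $\sin(1/n) > 1/n - \frac{1}{6n^3}$ derived above is easy to spot-check numerically; this is only a sanity check of the bound, not part of the proof.

```python
import math

# Spot-check the lower bound sin(1/n) > 1/n - 1/(6 n^3) used in the
# direct comparison argument. (Range kept modest so the true gap,
# roughly 1/(120 n^5), stays well above floating-point error.)
holds = all(math.sin(1 / n) > 1 / n - 1 / (6 * n ** 3)
            for n in range(1, 1000))
```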
The Ideal Gas in a Field: Transmembrane Ionic Gradients
A Half-Step Beyond Ideal: Ion Gradients and Transmembrane Potentials
One key way the cell stores free energy is by maintaining different concentrations of molecules in different "compartments", e.g., extracellular vs. intracellular, or in an organelle compared to the cytoplasm. The molecules playing this role are charged molecules, or ions, such as sodium ($\plus{Na}$), chloride ($\minus{Cl}$), potassium ($\plus{K}$), calcium ($\dplus{Ca}$), and numerous nucleotide species. A brief overview of trans-membrane ion physiology is available.
Although the simplest way to study the physics of free energy storage in such a gradient is by considering ideal particles all with zero potential energy, the reality of the cell is that electrostatic interactions are critical. Fortunately, the most important non-ideal effects of charge-charge interactions can be understood in terms of the usual ideal particles (which do not interact with one another) that do, however, feel the effects of a "background" electrostatic field. Such a mean-field picture is a simple approximation to the electrostatic effects induced primarily by having an excess of one or more charged species on a given side of a membrane - for example, the excess of $\plus{Na}$ ions in the extracellular environment.
Two semi-ideal gases of ions in different potentials
Unlike our examination of two membrane-separated ideal gases, the particles here are explicitly charged and they "feel" the electrostatic potential $\Phi$. The positively charged cations schematically represent $\plus{K}$ ions interacting with the field generated by the imbalance of $\plus{Na}$ ions - the extracellular or "outside" concentration of sodium is maintained at a high value relative to the cytoplasm or "inside" by constant ATP-driven pumping. However,
the effects of the $\plus{Na}$ concentration gradient will only be treated implicitly via the different values for $\inn{\Phi} < \out{\Phi}$.
To be precise, the model consists of $N$ particles that do not interact with one another, but which interact with the external potential $\Phi$ as if each had a charge of $q$, leading to potential energy $q \cdot \Phi_{\mathrm{X}}$ for each particle, where X = "in" or "out". The total volume $V$ is divided into inside and outside so that $\inn{V} + \out{V} = V$, with $\inn{N} + \out{N} = N$ ions populating the two compartments. The whole system is maintained at constant temperature $T$. Particles can pass through the channel shown in the figure, but we assume it is closed so that $\inn{V}$ and $\out{V}$ are constants: as shown in our discussion of two membrane-separated ideal gases, the assumption is a convenience and not an approximation because the total system volume $V$ and particle-number $N$ are truly constant.
Deriving the free energy
We have two gases of ideal "ions" (that interact with the external potential but not with other ions). Mathematically, we can largely follow our discussion of two membrane-separated ideal gases. The total free energy is the sum of the two ideal gas free energies and the two electrostatic potential energies:

$$F = \fidl(\inn{N}, \inn{V}, T) + \fidl(\out{N}, \out{V}, T) + \inn{N} \, q \, \inn{\Phi} + \out{N} \, q \, \out{\Phi} \qquad (2)$$

where $\fidl$ is defined in the ideal gas page and $q$ is the ionic charge.
Because it is really the difference in electrostatic potential which governs the ionic behavior, we define $\dphi = \inn{\Phi} - \out{\Phi}$. In terms of this quantity (and using $\out{N} = N - \inn{N}$), we can rewrite the total free energy as

$$F(\inn{N}) = \fidl(\inn{N}, \inn{V}, T) + \fidl(N - \inn{N}, \out{V}, T) + \inn{N} \, q \, \dphi + N \, q \, \out{\Phi} \qquad (3)$$
Eq. (3) is the free energy as a function of the number of particles inside the membrane (in volume $\inn{V}$). Inclusion of the electrostatic effects shifts the location of the most probable state, or free energy minimum.
The most probable concentrations: The Nernst equation
If we open the channel and allow exchange of atoms between the compartments, the value of $\inn{N}$ can change. The probability of having $\inn{N}$ atoms in $\inn{V}$ is proportional to the Boltzmann factor of the free energy:

$$P(\inn{N}) \propto e^{-F(\inn{N}) / k_B T} \qquad (4)$$
The most probable $\inn{N}$ value therefore can be found by determining the minimum of $F$. This will represent the equilibrium point in the thermodynamic limit (very large $N$, when fluctuations about the most probable $\inn{N}$ will be very small compared to $\inn{N}$ itself). We set $\dee F / \dee \inn{N} = 0$ in Eq. (3), which gives

$$\frac{\dee \fidl(\inn{N}, \inn{V}, T)}{\dee \inn{N}} - \frac{\dee \fidl(\out{N}, \out{V}, T)}{\dee \out{N}} + q \, \dphi = 0, \qquad (5)$$

then re-arrange and cancel terms to find

$$\frac{\inn{N}/\inn{V}}{\out{N}/\out{V}} = e^{-q \, \dphi / k_B T}, \qquad (6)$$

where you should recognize the left-hand side as the ratio of concentrations.
In words, Eq. (6) shows that
the concentrations inside and outside vary according to the Boltzmann factor of the ionic charge times the potential difference. Such an equilibrium is called a Donnan equilibrium. It should be comforting that when $\dphi = 0$, we recover equal concentrations.
Comparison to cellular behavior
As the exercise below will show, for some ions ($\minus{Cl}$, $\plus{K}$) the Nernst equation is a reasonable approximation. This suggests that such ions permeate the membrane passively. For some ions ($\plus{Na}$, $\dplus{Ca}$), the concentration ratios are very different from what would be predicted from the Nernst equation because the cell uses active transport to control them.
Mass action and its limitations
It is always worthwhile to pursue both thermodynamic
and kinetic analyses of any system you really care about, or just to train yourself to consider a problem from multiple perspectives. By comparison to the present case, some of the results from the truly ideal (uncharged) two-compartment system may seem puzzling.
In contrast to the uncharged system, we can see that the transport rates through the channel
cannot be equal in the two directions. Let $k_{io}$ be the inside-to-outside rate constant and $k_{oi}$ be the reverse rate constant. Starting from detailed balance, which says that the overall flows must be equal and opposite, so that $(\inn{N}/\inn{V}) \, k_{io} = (\out{N}/\out{V}) \, k_{oi}$, and substituting the Nernst relation (6), we find that

$$\frac{k_{io}}{k_{oi}} = \frac{\out{N}/\out{V}}{\inn{N}/\inn{V}} = e^{\,q \, \dphi / k_B T}$$
In other words, the ratio of rates for an ion channel depends on the potential difference. By itself, this does not contradict the mass action viewpoint (that rate constants are independent of concentrations) ... so long as $\dphi$ is truly constant. But if, more generally, $\dphi$ depends on the relative concentrations of the ion species moving through the channel, then the mass-action picture breaks down. Such a breakdown would occur, for example, if there were two species of ions, one of which could not permeate the membrane and hence was maintained at fixed inside and outside concentrations: in this case, flow of the permeable ion would change $\dphi$ and, in turn, change the rate "constants".
The brief overview of trans-membrane ion physiology may help to clarify the bigger picture of ion/membrane behavior.
References
R. Phillips et al., "Physical Biology of the Cell" (Garland Science, 2009).
B. Alberts et al., "Molecular Biology of the Cell" (Garland Science; many editions available).
Exercises
Derive Eqs. (5) and (6).
Use Eq. (6) to derive a concentration ratio for $\minus{Cl}$ using $\dphi = -90 \, \mathrm{mV}$ (typical for skeletal muscle) and compare your result to the experimental value of $\sim 1 / 30$. This will require careful consideration of units when multiplying together physical constants.
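As a sketch of the second exercise, the following Python snippet evaluates the Nernst relation, Eq. (6), for $\minus{Cl}$ (charge $-e$) at $\dphi = -90$ mV. The temperature value (310 K, roughly body temperature) is an assumption on my part.

```python
import math

# Standard physical constants (SI units)
e = 1.602176634e-19    # elementary charge, C
kB = 1.380649e-23      # Boltzmann constant, J/K
T = 310.0              # K, assumed ~body temperature

q = -e                 # chloride carries charge -e
dphi = -0.090          # phi_in - phi_out, in volts

# Eq. (6): c_in / c_out = exp(-q * dphi / (kB * T))
ratio = math.exp(-q * dphi / (kB * T))   # roughly 1/29, near the ~1/30 quoted
```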
I have a real $ 4\times4$ matrix of the form $$ C = \begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ c_{31} & c_{32} & 0 & c_{34} \\ c_{41} & c_{42} & c_{43} & c_{44} \end{pmatrix} $$ The coefficients satisfy \begin{align*} c_{31}, c_{32}, c_{41} &\ge 0,\\ c_{42} &> 0,\\ c_{34}, c_{43}, c_{44} &\le 0, \end{align*} and $\det(C) = c_{31} c_{42} - c_{32} c_{41} > 0.$
I want to show that the matrix has four distinct, real eigenvalues. My approaches so far:
Calculate the characteristic polynomial and use Mathematica to find the roots. The resulting formulas are too large to work with. Show that the discriminant of the characteristic polynomial is positive. This works a little better, but again, the formulas are huge and ugly.
I would like to exploit the block structure of $C$, but I do not know how.
Edit: The coefficients I am actually working with are\begin{align*} &c_{31} = \frac{\rho(\rho p_0 + 2 \lambda_0)}{p_0}, ~~~~~ c_{32} = \frac{\rho \lambda}{p_0}, ~~~~~ c_{34} = - \frac{\lambda}{p_0},\\ &c_{41} = \frac{\rho \lambda_0}{p}, ~~~~~ c_{42} = \frac{\rho (\rho p + \lambda \frac{n+1}{n})}{p}, ~~~~~ c_{43} = - \frac{\lambda_0}{p}, ~~~~~ c_{44} = - \frac{\lambda \frac{n-1}{n}}{p}\end{align*}with $n \in \mathbb{N} \setminus\{0\}, ~~~\rho, p, p_0 > 0, ~~~\lambda, \lambda_0 \ge 0$ and $\max\{\lambda, \lambda_0\} > 0.$
Actually, one of the eigenvalues is $-\rho,$ so there are only three remaining (and none of these three is equal to $-\rho$). The three remaining eigenvalues are then the roots of a cubic equation. (This implies that at least one other root is real). The discriminant of the cubic equation, however, is still very complicated. |
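Not a proof, but a quick numerical sanity check: for one assumed choice of the parameters (all set to 1, with $n=2$), the matrix indeed has four distinct real eigenvalues, one of which is $-\rho$.

```python
import numpy as np

# Assumed sample parameters satisfying the stated conditions
rho, p, p0, lam, lam0, n = 1.0, 1.0, 1.0, 1.0, 1.0, 2

# Coefficients as given in the question
c31 = rho * (rho * p0 + 2 * lam0) / p0
c32 = rho * lam / p0
c34 = -lam / p0
c41 = rho * lam0 / p
c42 = rho * (rho * p + lam * (n + 1) / n) / p
c43 = -lam0 / p
c44 = -lam * (n - 1) / n / p

C = np.array([[0, 0, 1, 0],
              [0, 0, 0, 1],
              [c31, c32, 0, c34],
              [c41, c42, c43, c44]])

ev = np.linalg.eigvals(C)
re = np.sort(np.real(ev))
gaps = np.diff(re)       # pairwise separations of the sorted eigenvalues
```

For these values the eigenvalues come out real and well separated, with $-\rho = -1$ among them, consistent with the claim in the question.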
A matrix $Q$ is orthogonal if and only if its columns form an orthonormal basis, if and only if $Q^{-1}=Q^T$.
Therefore, if there exists an orthonormal basis of eigenvectors of $A$, the change-of-basis matrix is orthogonal. That is to say, there is an orthogonal $Q$ so that
$Q^{-1}AQ=\Lambda$
But then $A=Q\Lambda Q^{-1}=Q\Lambda Q^T$.
It remains to show that an orthonormal basis of eigenvectors exists.
Eigenspaces corresponding to different eigenvalues are orthogonal. As $A$ is diagonalizable, we have $\mathbb R^n=V_1\oplus V_2\oplus\dots\oplus V_k$ where $V_i$ is the eigenspace corresponding to the eigenvalue $\lambda_i$.
Each $V_i$ has an orthonormal basis $v_{i,1},\dots,v_{i,n_i}$ where $n_i=\dim V_i$. (From the comments we know that we can use this fact.)
Thus, putting these bases together, we get an orthonormal basis $v_{i,j}$ of $\mathbb R^n$ consisting of eigenvectors of $A$.
Edit: If you cannot use that $A$ is diagonalizable, then use the following workaround to show that in fact $A$ is diagonalizable.
Write $\mathbb R^n=V_1\oplus\dots\oplus V_k\oplus W$. Where the $V_i$ are the eigenspaces of $A$ and $W$ is the orthogonal complement of $V_1\oplus\dots\oplus V_k$.
Then the restriction of $A$ to $W$ is symmetric and has no eigenvectors. Now, consider maximizing
$\langle Aw,w\rangle$ as $w$ runs over the unit vectors of $W$ (that is to say, $\|w\|=1$).
Any max point is an eigenvector, contradicting the fact that $A|_W$ has no eigenvectors. (See below.) Therefore $W$ contains no unit vectors, hence $W=\{0\}$ and $V_1\oplus\dots\oplus V_k=\mathbb R^n$.
Let's see that max points are eigenvectors. We restrict to $W$. Let $F(w)=\langle Aw,w\rangle$; then the derivative of $F$ at the point $w$, in the direction $v$, is $dF_w[v]=2\langle Aw,v\rangle$ (because $A$ is symmetric). The tangent space of $\{\|w\|=1\}$ at the point $w$ is $w^\perp$. Thus, $w$ is a critical point if and only if $dF_w[v]=0$ for all $v\in w^\perp$. That is to say, $\langle Aw,v\rangle=0$ for all $v$ such that $\langle w,v\rangle=0$. Therefore $Aw$ must be a multiple of $w$, as the orthogonal complement of $w$ is contained in the orthogonal complement of $Aw$.
Edit: Another way to see that $A|_W$ has an eigenvector is to consider its characteristic polynomial. It has roots in $\mathbb C$ by the fundamental theorem of algebra. They are real because $A|_W$ is symmetric. So $A|_W$ has at least one eigenvalue, and hence an eigenvector.
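A minimal numerical illustration of the result, using NumPy's `eigh` (which, for a symmetric matrix, returns an orthonormal basis of eigenvectors as the columns of $Q$):

```python
import numpy as np

# Build a random symmetric matrix A
rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = (B + B.T) / 2                 # symmetrize

w, Q = np.linalg.eigh(A)          # w: eigenvalues, Q: orthonormal eigenvectors

orthogonal = np.allclose(Q.T @ Q, np.eye(5))       # Q^T Q = I
reconstructed = np.allclose(Q @ np.diag(w) @ Q.T, A)  # A = Q Lambda Q^T
```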
Consider the theory of a chiral (say, left) Weyl fermion $\psi_{L}$, which interacts with an abelian gauge field. This theory contains the
gauge anomaly, which I write in the form $$ \tag 1 \frac{dQ_{L}}{dt} = \text{A}, $$ where $Q_{L}$ is the left charge and $A$ is anomaly function.
The same thing is true about right fermion $\psi_{R}$. If the gauge field is vector (not axial vector), then
$$ \tag 2 \frac{dQ_{R}}{dt} = -\text{A}, $$ The underlying reason for this is that the dynamics of the theory generates an anomalous commutator between the canonical momenta of the EM field (the electric field $\mathbf E$): precisely, $$ \tag 3 [E_{i}(\mathbf x), E_{j}(\mathbf y)]_{L/R} = -i\Delta^{ij}_{L/R}(\mathbf A, \mathbf y)\delta (\mathbf x - \mathbf y), $$ where $L,R$ denotes the subspaces of left and right fermions. This gives (see the question) the anomaly $\text{A}$: $$ \tag 4 \frac{dQ_{L/R}}{dt} = \text{A} = \int d^{3}\mathbf r E_{i}(\mathbf r)\partial_{j}\Delta^{ij}_{L/R}(\mathbf A, \mathbf r) $$
Suppose now we take the "direct sum" of left and right representations:
$$ \psi = \psi_{L} \oplus \psi_{R} $$ In this case, by using $(1), (2)$, we see, that there is no gauge anomaly of vector charge $Q_{\text{vector}}$, $$ \tag 5 \frac{dQ_{\text{vector}}}{dt} = \frac{dQ_{L}}{dt} + \frac{dQ_{R}}{dt} = \text{A} - \text{A} = 0, $$ but there is the chiral anomaly of axial charge $Q_{\text{axial}}$, $$ \tag 6 \frac{dQ_{\text{axial}}}{dt} =\frac{dQ_{L}}{dt} - \frac{dQ_{R}}{dt} =\text{A}+\text{A}= 2\text{A} $$ Since the vector current is the gauge current, then the total contribution into anomalous commutator from the left and right particles must vanish: $$ \tag 7 [E_{i}(\mathbf x), E_{j}(\mathbf y)]_{L\oplus R} = -i\Delta^{ij}_{L}(\mathbf A, \mathbf y)\delta (\mathbf x - \mathbf y)-i\Delta^{ij}_{R}(\mathbf A, \mathbf y)\delta (\mathbf x - \mathbf y) = 0 $$
Since we assume that the left and right particles have the same charge and mass, this looks like anomaly cancellation. The left and right fermions remain anomalous separately.
My question is the following. Although the anomalous commutator $(3)$ exists on the subspaces $L$ and $R$ separately, it vanishes for their direct sum, as shown by $(7)$. But the chiral anomaly $(6)$ remains. In terms of the broken canonical commutator $(3)$, I can understand this phenomenon as the statement that this commutator violates chiral symmetry; this is a direct consequence of Eqs. $(4)$ and $(6)$. But it seems very strange to me that the gauge anomaly of the left and right fermions gives an ungauged anomaly for the difference of their fermion numbers. Is my understanding correct?
Last time we studied meets and joins of partitions. We observed an interesting difference between the two.
Suppose we have partitions \(P\) and \(Q\) of a set \(X\). To figure out if two elements \(x , x' \in X\) are in the same part of the meet \(P \wedge Q\), it's enough to know if they're the same part of \(P\) and the same part of \(Q\), since
$$ x \sim_{P \wedge Q} x' \textrm{ if and only if } x \sim_P x' \textrm{ and } x \sim_Q x'. $$ Here \(x \sim_P x'\) means that \(x\) and \(x'\) are in the same part of \(P\), and so on.
However, this does
not work for the join!
$$ \textbf{THIS IS FALSE: } \; x \sim_{P \vee Q} x' \textrm{ if and only if } x \sim_P x' \textrm{ or } x \sim_Q x' . $$ To understand this better, the key is to think about the "inclusion"
$$ i : \{x,x'\} \to X , $$ that is, the function sending \(x\) and \(x'\) to themselves thought of as elements of \(X\). We'll soon see that any partition \(P\) of \(X\) can be "pulled back" to a partition \(i^{\ast}(P)\) on the little set \( \{x,x'\} \). And we'll see that our observation can be restated as follows:
$$ i^{\ast}(P \wedge Q) = i^{\ast}(P) \wedge i^{\ast}(Q) $$ but
$$ \textbf{THIS IS FALSE: } \; i^{\ast}(P \vee Q) = i^{\ast}(P) \vee i^{\ast}(Q) . $$ This is just a slicker way of saying the exact same thing. But it will turn out to be more illuminating!
So how do we "pull back" a partition?
Suppose we have any function \(f : X \to Y\). Given any partition \(P\) of \(Y\), we can "pull it back" along \(f\) and get a partition of \(X\) which we call \(f^{\ast}(P)\). Here's an example from the book:
For any part \(S\) of \(P\) we can form the set of all elements of \(X\) that map to \(S\). This set is just the
preimage of \(S\) under \(f\), which we met in Lecture 9. We called it
$$ f^{\ast}(S) = \{x \in X: \; f(x) \in S \}. $$ As long as this set is nonempty, we include it in our partition \(f^{\ast}(P)\).
So beware: we are now using the symbol \(f^{\ast}\) in two ways: for the preimage of a subset and for the pullback of a partition. But these two ways fit together quite nicely, so it'll be okay.
Summarizing:
Definition. Given a function \(f : X \to Y\) and a partition \(P\) of \(Y\), define the pullback of \(P\) along \(f\) to be this partition of \(X\):
$$ f^{\ast}(P) = \{ f^{\ast}(S) : \; S \in P \text{ and } f^{\ast}(S) \ne \emptyset \} . $$
Puzzle 40. Show that \( f^{\ast}(P) \) really is a partition using the fact that \(P\) is. It's fun to prove this using properties of the preimage map \( f^{\ast} : P(Y) \to P(X) \).
It's easy to tell if two elements of \(X\) are in the same part of \(f^{\ast}(P)\): just map them to \(Y\) and see if they land in the same part of \(P\). In other words,
$$ x\sim_{f^{\ast}(P)} x' \textrm{ if and only if } f(x) \sim_P f(x') $$ Now for the main point:
Proposition. Given a function \(f : X \to Y\) and partitions \(P\) and \(Q\) of \(Y\), we always have
$$ f^{\ast}(P \wedge Q) = f^{\ast}(P) \wedge f^{\ast}(Q) $$ but sometimes we have
$$ f^{\ast}(P \vee Q) \ne f^{\ast}(P) \vee f^{\ast}(Q) . $$
Proof. To prove that
$$ f^{\ast}(P \wedge Q) = f^{\ast}(P) \wedge f^{\ast}(Q) $$ it's enough to prove that they give the same equivalence relation on \(X\). That is, it's enough to show
$$ x \sim_{f^{\ast}(P \wedge Q)} x' \textrm{ if and only if } x \sim_{ f^{\ast}(P) \wedge f^{\ast}(Q) } x'. $$ This looks scary but we just follow our nose. First we rewrite the right-hand side using our observation about the meet of partitions:
$$ x \sim_{f^{\ast}(P \wedge Q)} x' \textrm{ if and only if } x \sim_{ f^{\ast}(P)} x' \textrm{ and } x\sim_{f^{\ast}(Q) } x'. $$ Then we rewrite everything using what we just saw about the pullback:
$$ f(x) \sim_{P \wedge Q} f(x') \textrm{ if and only if } f(x) \sim_P f(x') \textrm{ and } f(x) \sim_Q f(x'). $$ And this is true, by our observation about the meet of partitions! So, we're really just stating that observation in a new language.
To prove that sometimes
$$ f^{\ast}(P \vee Q) \ne f^{\ast}(P) \vee f^{\ast}(Q) , $$ we just need one example. So, take \(P\) and \(Q\) to be these two partitions:
They are partitions of the set
$$ Y = \{11, 12, 13, 21, 22, 23 \}. $$ Take \(X = \{11,22\} \) and let \(i : X \to Y \) be the inclusion of \(X\) into \(Y\), meaning that
$$ i(11) = 11, \quad i(22) = 22 . $$ Then compute everything! \(11\) and \(22\) are in different parts of \(i^{\ast}(P)\):
$$ i^{\ast}(P) = \{ \{11\}, \{22\} \} . $$ They're also in different parts of \(i^{\ast}(Q)\):
$$ i^{\ast}(Q) = \{ \{11\}, \{22\} \} .$$ Thus, we have
$$ i^{\ast}(P) \vee i^{\ast}(Q) = \{ \{11\}, \{22\} \} . $$ On the other hand, the join \(P \vee Q \) has just two parts:
$$ P \vee Q = \{\{11,12,13,22,23\},\{21\}\} . $$ If you don't see why, figure out the finest partition that's coarser than \(P\) and \(Q\) - that's \(P \vee Q \). Since \(11\) and \(22\) are in the same part here, the pullback \(i^{\ast} (P \vee Q) \) has just one part:
$$ i^{\ast}(P \vee Q) = \{ \{11, 22 \} \} . $$ So, we have
$$ i^{\ast}(P \vee Q) \ne i^{\ast}(P) \vee i^{\ast}(Q) $$ as desired. \( \quad \blacksquare \)
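The computation above can be checked mechanically. Since the figures showing \(P\) and \(Q\) are not reproduced here, the code below uses one choice of partitions consistent with the text: \(21\) is isolated in both, and a zigzag of parts connects \(11, 12, 13, 22, 23\) in the join.

```python
# Partitions represented as sets of frozensets of elements.

def pullback(f, P, X):
    """f*(P): nonempty preimages of the parts of P."""
    parts = {frozenset(x for x in X if f(x) in S) for S in P}
    return parts - {frozenset()}

def meet(P, Q):
    """P wedge Q: nonempty pairwise intersections of parts."""
    return {S & T for S in P for T in Q} - {frozenset()}

def join(P, Q, Y):
    """P vee Q via union-find: merge elements sharing a part of P or of Q."""
    parent = {y: y for y in Y}
    def find(y):
        while parent[y] != y:
            parent[y] = parent[parent[y]]
            y = parent[y]
        return y
    for part in list(P) + list(Q):
        part = list(part)
        for y in part[1:]:
            parent[find(part[0])] = find(y)
    classes = {}
    for y in Y:
        classes.setdefault(find(y), set()).add(y)
    return {frozenset(c) for c in classes.values()}

Y = {11, 12, 13, 21, 22, 23}
# Illustrative choice of P and Q (the lecture's figures are not shown here)
P = {frozenset({11, 12}), frozenset({13, 23}), frozenset({21}), frozenset({22})}
Q = {frozenset({11}), frozenset({12, 13}), frozenset({21}), frozenset({22, 23})}

X = {11, 22}
i = lambda x: x  # inclusion of X into Y

lhs_join = pullback(i, join(P, Q, Y), X)                   # i*(P v Q)
rhs_join = join(pullback(i, P, X), pullback(i, Q, X), X)   # i*(P) v i*(Q)
lhs_meet = pullback(i, meet(P, Q), X)                      # i*(P ^ Q)
rhs_meet = meet(pullback(i, P, X), pullback(i, Q, X))      # i*(P) ^ i*(Q)
```

Running this confirms the Proposition: the meet is preserved by pullback, while the join is not.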
Now for the real punchline. The example we just saw was the same as our example of a "generative effect" in Lecture 12. So, we have a new way of thinking about generative effects:
the pullback of partitions preserves meets, but it may not preserve joins!
This is an interesting feature of the logic of partitions. Next time we'll understand it more deeply by pondering left and right adjoints. But to warm up, you should compare how meets and joins work in the logic of subsets:
Puzzle 41. Let \(f : X \to Y \) and let \(f^{\ast} : P(Y) \to P(X) \) be the function sending any subset of \(Y\) to its preimage in \(X\). Given \(S,T \in P(Y) \), is it always true that
$$ f^{\ast}(S \wedge T) = f^{\ast}(S) \wedge f^{\ast}(T ) ? $$ Is it always true that
$$ f^{\ast}(S \vee T) = f^{\ast}(S) \vee f^{\ast}(T ) ? $$
Increasing the amount of installed renewable energy sources such as solar and wind is an essential step towards the decarbonization of the energy sector.
From a technical point of view, however, the stochastic nature of distributed energy resources (DER) causes operational challenges. Among them, imbalance between production and consumption, overvoltage, and overload of grid components are the most common.
As DER penetration increases, it is becoming clear that incentive strategies such as Net Energy Metering (NEM) threaten utilities, since NEM does not reward prosumers for synchronizing their energy production and demand.
In order to reduce congestion, distribution system operators (DSOs) currently use a simple indirect method, consisting of a bi-level energy tariff, i.e. the price of buying energy from the grid is higher than the price of selling energy to the grid. This encourages individual prosumers to increase their self-consumption. However, it is inefficient at regulating the aggregated power profile of all prosumers.
Utilities and governments believe that better grid management can be achieved by making the distribution grid 'smarter', and they are currently deploying massive amounts of investment to realize this vision.
As I explained in my previous post on the need for decentralized architectures for new energy markets, the common view of the scientific community is that a smarter grid requires more communication between generators and consumers, adopting near real-time markets and dynamic prices, which can steer users' consumption towards periods in which DER energy production is higher, or increase their production during high demand. For example, in California a modification of NEM that allows prosumers to export energy from their batteries during the evening peak of demand has recently been proposed.
But as flexibility will be offered at different levels and will provide a number of services, from voltage control for the DSOs to control energy for the transmission system operators (TSOs), it is important to make sure that these services will not interfere with each other. So far, a comprehensive approach towards the actuation of flexibility as a system-wide leitmotiv, taking into account the effect of demand response (DR) at all grid levels, is lacking.
In order to optimally exploit prosumers' flexibility, new communication protocols are needed which, coupled with a sensing infrastructure (smart meters), can be used to safely steer aggregated demand from the distribution grid up to the transmission grid.
The problem of coordinating dispatchable generators is well known by system operators and has been studied extensively in the literature. When not taking into account grid constraints, this is known under the name of
economic dispatch, and consists in minimizing the generation cost of a group of power plants. When operational constraints are considered, the problem increases in complexity, due to the power flow equations governing currents and voltages in the electric grid. Nevertheless, several approaches are known for solving this problem, a.k.a. optimal power flow (OPF), using approximations and convex formulations of the underlying physics. OPF is usually solved in a centralized way by an independent system operator (ISO). However, when the number of generators increases, as in the case of DERs, the overall problem grows in complexity but can still be solved effectively by decomposing it among the generators.
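As a toy illustration of economic dispatch without grid constraints, assume quadratic generation costs $c_i(g) = a_i g^2$ (the cost weights and demand below are made-up numbers): the optimum equalizes marginal costs across plants, which gives a closed-form allocation.

```python
# Toy economic dispatch with assumed quadratic costs c_i(g) = a_i * g^2.
# At the optimum, all marginal costs 2*a_i*g_i equal one common
# multiplier lambda, so g_i = lambda / (2*a_i).

def economic_dispatch(a, demand):
    """Minimize sum(a_i * g_i^2) subject to sum(g_i) = demand."""
    lam = 2 * demand / sum(1 / ai for ai in a)  # equal-marginal-cost multiplier
    return [lam / (2 * ai) for ai in a]

a = [1.0, 2.0, 4.0]                 # assumed cost weights of three plants
g = economic_dispatch(a, demand=7.0)
marginals = [2 * ai * gi for ai, gi in zip(a, g)]  # all equal at the optimum
```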
The decomposition has two other main advantages over a centralized solution, apart from allowing faster computation. The first is that generators do not have to disclose all their private information in order for the problem to be solved correctly, which preserves competition among the different generators. The second is that the computation has no single point of failure.
In this direction, we have recently proposed a multilevel hierarchical control which can be used to coordinate large groups of prosumers located at different voltage levels of the distribution grid, taking into account grid constraints. The difference between power generators and prosumers is that the latter do not control the timing of their generated power, but they can operate deferrable loads such as heat pumps, electric vehicles, boilers and batteries.
The idea is that prosumers in the distribution grid can be coordinated only by means of a price signal sent by their parent node in the hierarchical structure, an aggregator. This allows the algorithm to be solved using a
forward-backward communication protocol. In the forward pass, each aggregator receives a reference price from its parent node and sends it downwards, along with its own reference price, to its children nodes (prosumers or aggregators) located in a lower hierarchy level. This mechanism is propagated through all the nodes, down to the terminal nodes (or leaves). Prosumers in leaf nodes solve their optimization problems as soon as they are reached by the overall price signal. In the backward pass, prosumers send their solutions to their parents, which collect them and send the aggregated solution upward.
Apart from this intuitive coordination protocol, the proposed algorithm has other favorable properties. One of them is that prosumers only need to share information on their energy production and consumption with a single aggregator, while keeping all other parameters and information private. This is possible thanks to the decomposition of the control problem. The second property is that the algorithm exploits parallel computation of the prosumer-specific problems, ensuring minimal communication overhead.
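One forward-backward pass can be sketched in a few lines of Python. This is an illustrative toy, not the algorithm from the paper: each prosumer's local problem is an assumed quadratic (penalizing deviation from a preferred baseline), and the price signal travels down the tree while aggregated power flows back up.

```python
# Toy forward-backward pass on a two-level aggregator tree. All numbers
# and the quadratic local objective are illustrative assumptions.

class Prosumer:
    def __init__(self, flexibility, baseline):
        self.flexibility = flexibility  # cost weight on deviating from baseline
        self.baseline = baseline        # preferred net power draw (kW)

    def respond(self, price):
        # minimize flexibility*(x - baseline)^2 + price*x  ->  closed form
        return self.baseline - price / (2 * self.flexibility)

class Aggregator:
    def __init__(self, children):
        self.children = children        # prosumers or sub-aggregators

    def respond(self, price):
        # forward: pass the price down; backward: sum the children's responses
        return sum(child.respond(price) for child in self.children)

root = Aggregator([
    Aggregator([Prosumer(1.0, 3.0), Prosumer(2.0, 1.0)]),
    Prosumer(0.5, 2.0),
])
total_at_zero = root.respond(0.0)   # baselines only: 3 + 1 + 2 = 6 kW
total_priced = root.respond(1.0)    # a positive price lowers aggregate demand
```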
However, being able to coordinate prosumers is not enough.
The main difference between the OPF and DR problems is that the latter involves the participation of self-serving agents, which cannot be trusted a priori by an independent system operator (ISO). This implies that if an agent finds it profitable (in terms of its own economic utility), it will solve a different optimization problem from the one provided by the ISO. For this reason, some aspects of DR formulations are better described through a game-theoretic framework.
Furthermore, several studies have focused on the case in which grid constraints are enforced by DSOs, directly modifying voltage angles at buses. Although this is a reasonable solution concept, the current shift of generation from the high voltage network to the low voltage network suggests that in the future prosumers, and not DSOs, could be in charge of regulating voltages and mitigating power peaks.
With this in mind, we focused on analyzing the decomposed OPF using game theory and mechanism design, which study the behavior and outcomes of a set of agents trying to maximize their own utilities $u_i(x_i,x_{-i})$, which depend on their own actions $x_i$ and on the actions of the other agents $x_{-i}$, under a given 'mechanism'. The whole field of mechanism design tries to escape the Gibbard–Satterthwaite theorem, which can perhaps be better understood by means of its corollary:
If a strict voting rule has at least 3 possible outcomes, it is non-manipulable if and only if it is dictatorial.
It turns out that the only way to escape from this impossibility result is to adopt monetary transfers. As such, our mechanism must define both an allocation rule and a taxation (or reward) rule. In this way, the overall value seen by the agents is equal to their own utility augmented by the taxation/remuneration imposed by the mechanism:
$latex v_i (x_i,x_{-i})= u_i(x_i,x_{-i}) + c_i(x_i,x_{-i}) &s=1$
Monetary transfers, however, are as powerful as they are perilous. When designing taxes and incentives, one should always keep two things in mind:
Designing the wrong incentives can result in spectacular failures, as we learned from an anecdotal misuse of incentives in British colonial history known as the cobra effect.
If there is a way to fool the mechanism, self-serving prosumers will almost surely find it. "Know that some people will do everything they can to game the system, finding ways to win that you never could have imagined" ― Steven D. Levitt
A largely adopted solution concept, used to rule out most strategic behaviors from agents (although not the same as a strategyproof mechanism), is the ex-post Nash Equilibrium (NE), or simply equilibrium, which is reached when the following set of problems is jointly minimized:
$latex
\begin{aligned} \min_{x_i \in \mathcal{X}_i} & \quad v(x_i, x_{-i}) \quad \forall i \in \{N\} \\ s.t. & \quad Ax\leq b \end{aligned}&s=1 $
where $latex x_i \in \mathcal{X}_i &s=1$ means that the agents’ actions are constrained to be in the set $latex \mathcal{X}_i &s=1$, which could include for example the prosumer’s battery maximum capacity or the maximum power at which the prosumer can draw energy from the grid. The linear equation in the second row represents the grid constraints, which is a function of the actions of all the prosumers, $latex x = [x_i]_{i=1}^N &s=1$, where N is the number of prosumers we are considering.
Rational agents will always try to reach a NE, since in this situation they cannot improve their values given that the other prosumers do not change their actions.
Using basic optimization notions, the above set of problems can be reformulated using the KKT conditions, which under some mild assumptions ensure that the prosumers' problems are optimally solved. Briefly, we can augment each prosumer's objective function with a first-order approximation of the coupling constraints, through a Lagrangian multiplier $latex \lambda_i$, and with the indicator function encoding its own constraints:
$latex \tilde{v}_i (x_i,x_{-i}) = v_i (x_i,x_{-i}) + \lambda_i (Ax-b) + \mathcal{I}_{\mathcal{X}_i} &s=1$
The KKT conditions now read
$latex
\begin{aligned} 0& \in \partial_{x_i} v_i(x_i,\mathrm{x}_{-i}) + \mathrm{N}_{\mathcal{X}_i} + A_i^T\lambda \\ 0 & \leq \lambda \perp -(Ax-b) \geq 0 \end{aligned} &s=1 $
where $latex \mathrm{N}_{\mathcal{X}_i}&s=1$ is the normal cone operator, which is the sub-differential of the indicator function.
Loosely speaking, the Nash equilibrium is not always a reasonable solution concept, because multiple equilibria usually exist. For this reason, equilibrium refinement concepts are usually applied, in which most of the equilibria are discarded a priori. The variational NE (VNE) is one such refinement. In a VNE, the price of the shared constraints paid by each agent is the same. This has the nice economic interpretation that all the agents pay the same price for the common good (the grid). Note that we have already set all the Lagrangian multipliers equal, $latex \lambda_i = \lambda \quad \forall i \in \{N\}&s=1$, in writing the KKT conditions.
One of the nice properties of the VNE is that for well-behaved problems this equilibrium is unique. Being unique, and with a reasonable economic outcome (price fairness), rational prosumers will agree to converge to it, since at the equilibrium no one is better off changing its own actions while the other prosumers' actions are fixed. It turns out that a trivial modification of the parallelized strategy we adopted to solve the multilevel hierarchical OPF can be used to reach the VNE.
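As a toy illustration of this kind of iteration, the following sketch runs a projected primal-dual scheme on a two-prosumer game with quadratic utilities and one shared linear constraint. The utilities, targets, and step sizes are invented for the example; this is only the standard forward-backward template such algorithms build on, not the algorithm from the paper.

```python
# Primal-dual iteration toward the variational Nash equilibrium (toy example).
# Two agents minimize (x_i - t_i)^2 subject to the shared constraint x1 + x2 <= 1;
# both see the same multiplier lambda, as required by the VNE refinement.

t = [1.0, 1.0]          # invented individual targets
x = [0.0, 0.0]          # initial actions
lam = 0.0               # shared price of the coupling constraint
tau, sigma = 0.1, 0.1   # primal and dual step sizes

for _ in range(5000):
    # Primal step: each agent descends its own Lagrangian gradient in parallel.
    x = [xi - tau * (2 * (xi - ti) + lam) for xi, ti in zip(x, t)]
    # Dual step: the price rises while the shared constraint is violated.
    lam = max(0.0, lam + sigma * (x[0] + x[1] - 1.0))

print(x, lam)  # approaches x = (0.5, 0.5), lambda = 1
```

At the fixed point the KKT conditions above are satisfied: 2(x_i − t_i) + λ = 0 with the constraint active, and every agent pays the same price λ for the common good.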
On top of all this, new economic business models must be put in place in order to reward prosumers for their flexibility. In fact, rational agents would not participate in the market if the energy price they pay is higher than what they pay to their current energy retailer. One such business model is the aforementioned Californian proposal to enable NEM with the energy injected by electrical batteries.
Another possible use case is the creation of a self-consumption community, in which a group of prosumers in the same LV grid pays only at the point of common coupling with the DSO's grid (which could be, e.g., the LV/MV transformer in figure 1). In this way, if the group of prosumers is heterogeneous (someone is producing energy while someone else is consuming), the overall cost that they pay as a community will always be less than what they would have paid as single prosumers, at the loss of the DSO. But if this economic surplus drives the prosumers to take care of power quality in the LV/MV grid, the DSO could benefit from this business model, delegating part of its grid-regulating duties to them.
How does blockchain fit in? Synchronizing thousands of entities connected to different grid levels is a technically hard task. Blockchain technology can be used as a trustless distributed database for creating and managing energy communities of prosumers willing to participate in flexibility markets. On top of the blockchain, off-chain payment channels can be used to keep track of the energy consumed and produced by prosumers and to disburse payments in a secure and seamless way.
Different business models are possible, and technical solutions as well. But
we think that in the distribution grid the economic value lies in shifting the power production and consumption of the prosumers, enabling a truly smarter grid. At Hive Power we are enabling the creation of energy sharing communities where all participants are guaranteed to benefit from their participation, reaching at the same time a technical and financial optimum for the whole community. Key links: |
Stone duality, one of many dualities between certain lattices and certain topological spaces, asserts that there is a contravariant categorical equivalence between the category $\text{Bool}$ of boolean algebras and the category $\text{Stone}$ of Stone spaces. For those who are not familiar with this, here is a brief statement of what it says:
Definition:A boolean algebra is a partially ordered set $A$ such that
1) $A$ has finite meets and finite joins, including the empty join.
2) Meets distribute over joins in $A$
3) For each $a \in A$, there is $b$ such that $a \vee b$ is the greatest element, and $a \wedge b$ is the least element.
(1) says that $A$ is a bounded lattice, (1) and (2) say that $A$ is a bounded distributive lattice, and (3) says that $A$ is complemented.
Definition: A Stone space $X$ is a space occurring as a cofiltered limit $\lim_i X_i$ of finite discrete spaces in the category of topological spaces.
One functor $\text{Spec} : \text{Bool} \rightarrow \text{Stone}$ sends a boolean algebra $A$ to the set $\text{Bool}(A, \{ 0, 1 \})$ of maps of boolean algebras from $A$ to the boolean algebra $\{ 0, 1\}$. The other sends a Stone space $X$ to $\text{Stone}(X, \{ 0, 1\})$, the set of maps of Stone spaces (continuous maps) from $X$ to the discrete space $\{ 0, 1\}$.
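For a finite example, the points of $\text{Spec}(A)$ can be enumerated by brute force. The sketch below (all names mine) takes $A = P(\{a,b\})$ and searches for all maps to $\{0,1\}$ preserving bottom, top, meets, and joins; exactly two survive, one per atom, i.e. evaluation at a point of the two-point set.

```python
# Brute-force the boolean-algebra homomorphisms P({a,b}) -> {0,1} (illustrative sketch).
from itertools import product

S = frozenset({'a', 'b'})
elements = [frozenset(s) for s in ([], ['a'], ['b'], ['a', 'b'])]

def is_hom(f):
    # f must preserve bottom, top, binary meets (intersection) and joins (union);
    # in a boolean algebra this forces complement preservation too.
    if f[frozenset()] != 0 or f[S] != 1:
        return False
    return all(f[x & y] == f[x] & f[y] and f[x | y] == f[x] | f[y]
               for x in elements for y in elements)

homs = []
for values in product([0, 1], repeat=len(elements)):
    f = dict(zip(elements, values))
    if is_hom(f):
        homs.append(f)

print(len(homs))  # 2: one homomorphism per atom, i.e. per point of Spec
```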
I am interested in how this might work for $\sigma$-algebras. A $\sigma$-algebra is a subalgebra of the boolean algebra $P(X)$ (power set of a set) closed under countable meets (and therefore countable joins). By the categorical equivalence, $\sigma$-algebras $A$ on $X$ correspond to certain quotient objects of $\hat{X}$, the profinite completion of $X$ (inverse limit over all quotients onto a finite set).
Question:Let $X$ be a set. Can we characterize the quotient stone spaces of $\hat{X}$ corresponding to $\sigma$-algebras in $P(X)$?
Note: it was mentioned in the comments that $\sigma$-algebra can also refer to an ambient boolean algebra which has countable meets (and therefore countable joins). Here I mean to specifically fix an embedding into $P(X)$, and for joins and meets in the $\sigma$-algebra to match joins and meets in $P(X)$. |
I learnt Arden's theorem and its usage as follows:
Arden's Theorem: Let $P$ and $Q$ be two regular expressions over an alphabet $Σ$. If $P$ does not contain the null string, then $R = Q + RP$ has a unique solution, namely $R = QP^*$.
Using Arden's theorem (source):
Using Arden's theorem to find a regular expression: 1. For getting the regular expression for the automaton we first create equations of the given form for all the states: $q_1 = q_1w_{11} + q_2w_{21} + … + q_nw_{n1} + \epsilon$ ($q_1$ is the initial state), $q_2 = q_1w_{12} + q_2w_{22} + … + q_nw_{n2}$, …, $q_n = q_1w_{1n} + q_2w_{2n} + … + q_nw_{nn}$, where $w_{ij}$ is the regular expression representing the set of labels of edges from $q_i$ to $q_j$. 2. Note: for parallel edges there will be that many expressions for that state in the equation. 3. Ignore trap states while doing the above. 4. Solve these equations to get the expression for $q_i$ in terms of $w_{ij}$, where $q_i$ is a final state; that expression is the required solution.
I feel $R=Q+RP$ is equivalent to the left linear grammar production $R\rightarrow Q \mid RP$. But I don't know whether I should call $R=Q+RP$ a left linear grammar. But I understand the theorem also has a right linear counterpart (if you allow me to call it that for convenience); i.e.
$R = Q + PR$ has the unique solution $R = P^*Q$
Now I have come across two problems, one which uses a right linear grammar, the other a left linear grammar.
Problem 1 (Using right linear grammar) Given: $\Sigma=\{0,1\}$ $X_0=1X_1$ $X_1=0X_1+1X_2$ $X_2=0X_1+\{\lambda\}$ Give the regular expression representing strings in $X_0$.
Solution 1 using Arden's theorem: $X_1=0X_1+1X_2$ $=0X_1+1(0X_1+\lambda)$ $=0X_1+10X_1+1$ $=(0+10)X_1+1$ $=(0+10)^*1$ (by Arden's theorem) $X_0=1X_1$ $=1(0+10)^*1$
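The same elimination can be scripted. The sketch below hard-codes Problem 1's right-linear system (where an equation $X = P\,X + Q$ solves to $X = P^*Q$), builds the regular expression for $X_0$, and sanity-checks it with Python's `re` module; the helper names are mine, not from the source.

```python
# Solving Problem 1 by substitution + Arden's rule (illustrative sketch).
import re

def arden(p, q):
    """Right-linear Arden's rule: X = pX + q has the unique solution X = p*q."""
    return f"(?:{p})*(?:{q})"

# X2 = 0 X1 + lambda  -> substitute into X1:
# X1 = 0 X1 + 1(0 X1 + lambda) = (0|10) X1 + 1
x1 = arden("0|10", "1")    # X1 = (0+10)*1
x0 = "1" + x1              # X0 = 1 X1 = 1(0+10)*1
print(x0)

pattern = re.compile(x0)
assert all(pattern.fullmatch(w) for w in ["11", "101", "1101", "1001"])
assert not any(pattern.fullmatch(w) for w in ["1", "10", "110", "100"])
```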
Problem 2 (using left linear grammar) Given: $\Sigma = \{a,b\}$ $X_0=\epsilon + X_0b$ $X_1=X_0a+X_1b+X_2a+X_3a$ $X_2=X_1a+X_2b+X_3b$ Give regular expression for $X_1\cup X_2$
Solution 1 using Arden's theorem was not given, so I tried to solve it myself. The first thing I noticed was that I can fully ignore $X_3$, as its equation was not given. I felt it is an unreachable state, which is somewhat backed by the solution given below using an automaton (but the steps listed above in the application of Arden's theorem only talk about trap states in the last step, not unreachable states). But I was not able to solve this using Arden's theorem. So, I analysed the equations. In problem 1, $X_2$ is defined in terms of $X_1$, so we were able to put the value of $X_2$ in $X_1$ and get $X_1$ in terms of $X_1$ itself, which can be reduced to a regular expression using Arden's theorem. That doesn't seem to be the case here, as $X_1$ and $X_2$ are defined in terms of each other.
After going through all these, I have bunch of related questions:
1. Which is the initial state? The "Using Arden's theorem" section says $q_1$ is the initial state and $q_1$ has $\epsilon$. So I feel that the variable which has $\epsilon$ added (ORed) to its equation is the starting state. 2. Which is the final state? Again the "Using Arden's theorem" section says we solve for a final state. So I suppose the variable which we have been asked to solve for in the question turns out to be the final state. Right? (Problem 1 asks to solve for $X_0$, so solution 1 for problem 1 solves for $X_0$. Problem 2 asks to solve for $X_1\cup X_2$ and in solution 2 of problem 2, $X_1$ and $X_2$ are depicted as final states.) 3. How do I solve problem 2 using Arden's theorem? Problem 2 asks for the solution of the union $X_1\cup X_2$. What does it mean to solve for the union of two variables in the context of Arden's theorem? How can I solve it? 4. Is it right to draw the automata the way they have been drawn? In problem 1, each equation gives transitions (in the automaton) going out of the state corresponding to the variable on the LHS. In problem 2, each equation gives transitions (in the automaton) coming into the state corresponding to the variable on the LHS. Is this right? 5. Which one is a trap state (and which unreachable)? Is it correct to interpret a variable for which no equation is given as a trap state? For example in problem 2 (left linear), no equation is given for $X_3$, so it turned out to be an unreachable state. But if such a variable (for which no equation is given) appeared in problem 1 (right linear), I guess it would have been a trap state (no transition going out of the state corresponding to that variable). Can we ignore both? |
Counterexample: $[0,1]\times[0,1]$ with the subspace topology induced from $\mathbb{R}^2$ is compact. The open cover $\mathscr{U}$ is just the two circular sectors. When we look at the up-left corner and the down-right corner, it fails - there is no $\delta$ such that the open ball around these points lies in only one of the circular sectors... what is wrong?
The upper-left and bottom-right corners are not contained in either open circle. That is, your claimed counterexample fails to cover the square.
Moreover, I think you may have misunderstood what is important about Lebesgue's number lemma. The following statement
Let $\mathcal U$ be a cover. For any $x$, there is some $\delta$ and some $U\in \mathcal U$ such that $B(x,\delta)\subseteq U$
is clearly true of any cover - no need for compactness. This is basically just writing down what "open" and "cover" are defined to be. Given that your counterexample fails this condition, it is clear that it is not an open cover.
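For contrast, here is a small numeric sketch (cover and grid chosen by me) of the stronger, uniform statement for a genuine open cover of $[0,1]$: compute the largest per-point radius $\delta(x)$ that fits inside a single cover element, then take the minimum over $x$ to get a uniform $\delta$.

```python
# Numeric sketch of a Lebesgue number for a cover of [0, 1] (illustrative).
cover = [(-0.1, 0.6), (0.4, 1.1)]

def delta(x):
    # Largest radius so that (x - r, x + r) fits inside a single cover element.
    return max(min(x - a, b - x) for a, b in cover)

grid = [i / 1000 for i in range(1001)]
lebesgue = min(delta(x) for x in grid)
print(lebesgue)  # ≈ 0.1, attained near x = 0.5
```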
Note that Lebesgue's number lemma exchanges the quantifiers and says that there is some $\delta$ that works for
all $x$. This is what requires compactness. |
This is the sine-Gordon action:$$\frac{1}{4\pi} \int_{\mathcal{M}^2} dt \; dx \; \left[ k\, \partial_t \Phi \, \partial_x \Phi - v\, \partial_x \Phi \, \partial_x \Phi + g \cos(\ell \cdot \Phi) \right]$$Here $\mathcal{M}^2$ is a 1+1 dimensional spacetime, where the 1D space is an $S^1$ circle of length $L$.
At $g=0$: it is a chiral boson theory with a massless, gapless scalar boson $\Phi$.
At large $g$ : It seems to be well-known that at large coupling $g$ of the sine-Gordon equation, the scalar boson $\Phi$ will have a mass gap.
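As a quick sanity check of where such a gap comes from (ignoring the $k$, $v$, and $1/4\pi$ normalizations, which I am not tracking here), one can expand the cosine about its minimum: $g\cos(\ell\Phi) \approx g - \tfrac{1}{2} g\ell^2 \Phi^2$, so the curvature of the potential at the origin - the would-be mass-squared term - is $V''(0) = g\ell^2$. The sketch below verifies this with a finite difference; all numbers are arbitrary.

```python
# Quadratic expansion of the sine-Gordon potential V(phi) = -g cos(l phi) (sketch).
import math

g, l = 2.0, 3.0
V = lambda phi: -g * math.cos(l * phi)

# Central second difference of V at the minimum phi = 0.
h = 1e-3
curvature = (V(h) - 2 * V(0.0) + V(-h)) / h**2

print(curvature, g * l**2)  # both ~ 18: the cosine supplies a mass-squared ~ g l^2
```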
Q1: What is the original reference which states and proves this statement about the nonzero (or large) mass gap at large $g$?
-
Q2: What does the mass gap $m$ scale like in terms of other quantities (like $L$, $g$, etc)?
-
NOTE: I find that S. Coleman discusses this in
(1) "Aspects of Symmetry: Selected Erice Lectures" by Sidney Coleman,
and in this paper:
(2) "Quantum sine-Gordon equation as the massive Thirring model", Phys. Rev. D 11, 2088, by Sidney Coleman.
But I am not convinced that Coleman shows it explicitly. I read these, but could someone point out explicitly and explain how he (or someone else) rigorously proves this mass gap?
Here Eq. (17) of this reference does a quadratic expansion to show the mass gap $m \simeq \sqrt{\Delta^2+\#(\frac{g}{L})^2}$ with $\Delta \simeq \sqrt{ \# g k^2 v}/(\# k)$; perhaps there is an even more mathematically rigorous way to prove the mass gap with the full cosine term?
This post imported from StackExchange Physics at 2014-06-04 08:00 (UCT), posted by SE-user Idear |
RD Sharma Solutions Class 9 Chapter 10 Ex 10.5
(1) ABC is a triangle and D is the mid-point of BC. The perpendiculars from D to AB and AC are equal. Prove that the triangle is isosceles.
Sol:
Given that, in two right triangles, one side and an acute angle of one are equal to the corresponding side and angle of the other.
We have to prove that the triangles are congruent.
Let us consider two right triangles ABC and DEF such that
\(\angle B = \angle E = 90^{\circ}\) …… (i)
AB = DE …… (ii)
\(\angle C = \angle F\) …… (iii)
Now observe the two triangles ABC and DEF
\(\angle B = \angle E\) [From (i)]
\(\angle C = \angle F\) [From (iii)]
And AB = DE [From (ii)]
So, by AAS congruence criterion, we have
\(\Delta ABC \cong \Delta DEF\)
Therefore, the two triangles are congruent
Hence proved
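As a numeric illustration of statement (1) (my own check, not part of the textbook), the sketch below verifies with coordinates that for an isosceles triangle the perpendicular distances from the midpoint D of BC to AB and AC are equal, while for a scalene triangle they are not.

```python
# Coordinate check of problem (1): equal perpendiculars from the midpoint D
# of BC to AB and AC go together with AB = AC (illustrative sketch).
import math

def dist_point_to_line(p, a, b):
    # Distance from point p to the line through a and b: |cross(b-a, p-a)| / |b-a|.
    (px, py), (ax, ay), (bx, by) = p, a, b
    cross = (bx - ax) * (py - ay) - (by - ay) * (px - ax)
    return abs(cross) / math.hypot(bx - ax, by - ay)

def perpendiculars_from_midpoint(A, B, C):
    D = ((B[0] + C[0]) / 2, (B[1] + C[1]) / 2)  # midpoint of BC
    return dist_point_to_line(D, A, B), dist_point_to_line(D, A, C)

# Isosceles triangle (AB = AC): the two perpendiculars agree.
p, q = perpendiculars_from_midpoint((0, 3), (-2, 0), (2, 0))
print(abs(p - q) < 1e-12)  # True

# Scalene triangle: they differ.
p, q = perpendiculars_from_midpoint((0, 3), (-1, 0), (3, 0))
print(abs(p - q) > 0.01)   # True
```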
(2) ABC is a triangle in which BE and CF are, respectively, the perpendiculars to the sides AC and AB. If BE = CF, prove that \(\Delta ABC\) is isosceles.
Sol: Given that ABC is a triangle in which BE and CF are perpendicular to the sides AC and AB respectively, such that BE = CF.
To prove: \(\Delta ABC\) is isosceles, i.e. AC = AB.
Now, consider \(\Delta BFC\) and \(\Delta CEB\)
We have
\(\angle BFC = \angle CEB = 90^{\circ}\) [Given]
BC = CB [Common side]
And CF = BE [Given]
So, by RHS congruence criterion, we have
\(\Delta BFC \cong \Delta CEB\)
Now,
\(\angle FBC = \angle ECB\) [Corresponding parts of congruent triangles are equal]
\(\Rightarrow \angle ABC = \angle ACB\)
AC = AB [Sides opposite to equal angles are equal in a triangle]
Therefore, \(\Delta ABC\) is isosceles.
(3) If perpendiculars from any point within an angle on its arms are congruent. Prove that it lies on the bisector of that angle.
Solution:
Given: the perpendiculars drawn from a point within an angle on its arms are congruent; prove that the point lies on the bisector of that angle.
Now,
Let us consider an angle ABC and let P be a point within the angle, joined to the vertex by the segment BP.
Draw perpendiculars PN and PM from P on the arms BC and BA, such that they meet BC and BA in N and M respectively.
Now, in \(\Delta BPM\) and \(\Delta BPN\)
We have \(\angle BMP = \angle BNP = 90^{\circ}\) [Given]
BP = BP [Common side]
And MP = NP [Given]
So, by RHS congruence criterion, we have
\(\Delta BPM \cong \Delta BPN\)
Now, \(\angle MBP = \angle NBP\) [Corresponding parts of congruent triangles are equal]
BP is the angular bisector of \(\angle ABC\).
Hence proved
(4) In fig. (10).99, AD \(\perp\) CD and CB \(\perp\) CD. If AQ = BP and DP = CQ, prove that \(\angle DAQ = \angle CBP\).
Sol:
Given: in fig. (10).99, AD \(\perp\) CD and CB \(\perp\) CD,
and AQ = BP and DP = CQ.
To prove that \(\angle DAQ = \angle CBP\):
Given that DP = QC
Add PQ on both sides:
DP + PQ = PQ + QC \(\Rightarrow\) DQ = PC ……(i)
Now, consider triangles DAQ and CBP
We have
\(\angle ADQ = \angle BCP = 90^{\circ}\) [Given]
AQ = BP [Given]
And DQ = PC [From (i)]
So, by RHS congruence criterion, we have
\(\Delta DAQ \cong \Delta CBP\)
Now,
\(\angle DAQ = \angle CBP\) [Corresponding parts of congruent triangles are equal]
Hence proved
(5) ABCD is a square, X and Y are points on sides AD and BC respectively such that AY = BX. Prove that BY = AX and \(\angle BAY = \angle ABX\).
Solution:
Given that ABCD is a square, and X and Y are points on sides AD and BC respectively such that AY = BX.
To prove: BY = AX and \(\angle BAY = \angle ABX\).
Join B and X, A and Y.
Since ABCD is a square,
\(\angle DAB = \angle CBA = 90^{\circ}\) \(\Rightarrow \angle XAB = \angle YBA = 90^{\circ}\) ……(i)
Now, consider triangles XAB and YBA
We have
\(\angle XAB = \angle YBA = 90^{\circ}\) [From (i)]
BX = AY [Given]
And AB = BA [Common side]
So, by RHS congruence criterion, we have \(\Delta XAB \cong \Delta YBA\)
Now, we know that corresponding parts of congruent triangles are equal.
BY = AX and \(\angle BAY = \angle ABX\)
Hence proved
(6) Which of the following statements are true (T) and which are false (F):
(i) Sides opposite to equal angles of a triangle may be unequal.
(ii) Angles opposite to equal sides of a triangle are equal
(iii) The measure of each angle of an equilateral triangle is 60°
(iv) If the altitude from one vertex of a triangle bisects the opposite side, then the triangle may be isosceles.
(v) The bisectors of two equal angles of a triangle are equal.
(vi) If the bisector of the vertical angle of a triangle bisects the base, then the triangle may be isosceles.
(vii) The two altitudes corresponding to two equal sides of a triangle need not be equal.
(viii) If any two sides of a right triangle are respectively equal to two sides of other right triangle, then the two triangles are congruent.
(ix) Two right-angled triangles are congruent if hypotenuse and a side of one triangle are respectively equal to the hypotenuse and a side of the other triangle.
Solution:
(i) False (F)
Reason: Sides opposite to equal angles of a triangle are equal
(ii) True (T)
Reason: Since the sides are equal, the corresponding opposite angles must be equal
(iii) True (T)
Reason: Since all the three angles of an equilateral triangle are equal and the sum of the three angles is 180°, each angle will be equal to \(\frac{180^{\circ}}{3} = 60^{\circ}\)
(iv) False (F)
Reason: Here the altitude from the vertex is also the perpendicular bisector of the opposite side.
The triangle must be isosceles and may be an equilateral triangle.
(v) True (T)
Reason: Since it an isosceles triangle, the lengths of bisectors of the two equal angles are equal
(vi) False (F)
Reason: The angular bisector of the vertex angle is also a median
=> The triangle must be an isosceles and also may be an equilateral triangle.
(vii) False (F)
Reason: Since two sides are equal, the triangle is an isosceles triangle. The two altitudes corresponding to two equal sides must be equal.
(viii) False (F)
Reason: The two right triangles may or may not be congruent
(ix) True (T)
Reason: According to RHS congruence criterion the given statement is true.
(7) Fill in the blanks in the following so that each of the following statements is true.
(i) Sides opposite to equal angles of a triangle are ___
(ii) Angle opposite to equal sides of a triangle are ___
(iii) In an equilateral triangle all angles are ___
(iv) In \(\Delta ABC\), if \(\angle A = \angle C\), then AB = ___
(v) If altitudes CE and BF of a triangle ABC are equal, then AB ___
(vi) In an isosceles triangle ABC with AB = AC, if BD and CE are its altitudes, then BD is ___ CE.
(vii) In right triangles ABC and DEF, if hypotenuse AB = EF and side AC = DE, then \(\Delta ABC \cong \Delta\) ___
Solution:
(i) Sides opposite to equal angles of a triangle are equal
(ii) Angles opposite to equal sides of a triangle are equal
(iii) In an equilateral triangle all angles are equal Reason: Since all sides are equal in a equilateral triangle. the angles opposite to equal sides will be equal .
(iv) In a \(\Delta ABC\), if \(\angle A = \angle C\), then AB = BC
(v) If altitudes CE and BF of a triangle ABC are equal, then AB = AC
(vi) In an isosceles triangle \(\Delta ABC\) with AB = AC, if BD and CE are its altitudes, then BD is equal to CE.
(vii) In right triangles ABC and DEF, if hypotenuse AB = EF and side AC = DE, then \(\Delta ABC \cong \Delta EFD\) |
Let $k$ be a field and $M$ a finitely generated, graded module over the graded ring $S=k[x_1,\dots,x_n]$. Let $\cdots \rightarrow F_j \rightarrow F_{j-1} \rightarrow \cdots F_1 \rightarrow F_0 \rightarrow M \rightarrow 0$ be the minimal, free, graded resolution of $M$. Define $b_j$ to be the maximum among the degrees that appear in $F_j$, i.e. if $F_j = \oplus_{i=1}^{n_j} S(a_{i,j})$, then $b_j = \max_i a_{i,j}$. Suppose that $M$ is $m$-regular in the sense of Castelnuovo-Mumford, i.e. that $b_j - j \le m, \, \forall j$.
Question: Why is it true that there exists a maximal integer $j$ such that $b_j -j = m$?
Motivation: Proof of Proposition 20.16 in Eisenbud (Commutative Algebra with a View Toward Algebraic Geometry). |
Search
Now showing items 1-3 of 3
D-meson nuclear modification factor and elliptic flow measurements in Pb–Pb collisions at $\sqrt {s_{NN}}$ = 5.02TeV with ALICE at the LHC
(Elsevier, 2017-11)
ALICE measured the nuclear modification factor ($R_{AA}$) and elliptic flow ($v_{2}$) of D mesons ($D^{0}$, $D^{+}$, $D^{*+}$ and $D_{s}^{+}$) in semi-central Pb–Pb collisions at $\sqrt{s_{NN}} = 5.02$ TeV. The increased ...
ALICE measurement of the $J/\psi$ nuclear modification factor at mid-rapidity in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV
(Elsevier, 2017-11)
ALICE at the LHC provides unique capabilities to study charmonium production at low transverse momenta ($p_{T}$). At central rapidity ($|y| < 0.8$), ALICE can reconstruct J/$\psi$ via their decay into two electrons down to zero ...
Net-baryon fluctuations measured with ALICE at the CERN LHC
(Elsevier, 2017-11)
First experimental results are presented on event-by-event net-proton fluctuation measurements in Pb–Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV, recorded by the ALICE detector at the CERN LHC. The ALICE detector is well ... |
Peter Saveliev
Hello! My name is Peter Saveliev. I am a professor of mathematics at Marshall University, Huntington WV, USA.
My current projects are these two books:
In part, the latter book is about
Discrete Calculus, which is based on a simple idea:$$\lim_{\Delta x\to 0}\left( \begin{array}{cc}\text{ discrete }\\ \text{ calculus }\end{array} \right)= \text{ calculus }.$$I have been involved in research in algebraic topology and several other fields, but nowadays I think this is a pointless activity. My non-academic projects have been: digital image analysis, automated fingerprint identification, and image matching for missile navigation/guidance. Once upon a time, I took a better look at the poster of Drawing Hands by Escher hanging in my office and realized that what is shown isn't symmetric! To fix the problem I made my own picture called Painting Hands:
Such a symmetry is supposed to be an involution of the $3$-space, $A^2=I$; therefore, its diagonalized matrix has only $\pm 1$ on the diagonal. These are the three cases:
(a) One $-1$: mirror symmetry; then the pen draws a pen. No!
(b) Two $-1$s: $180$ degree rotation; then we have two right (or two left) hands. No!
(c) Three $-1$s: central symmetry. Yes!
Other essays:
Integer-valued calculus (Can calculus help to determine if the universe is non-orientable?), an essay making a case for discrete calculus by appealing to topology and physics.
The political spectrum is a circle, an essay based on the very last section of the topology book.
Note: I am frequently asked, what should "Saveliev" sound like? I used to care about that but got over it years ago. The one I endorse is the most popular: "Sav-leeeeeev". Or, simply call me Peter. |
Now let's think about emergent conservation laws!
When a heavy rock connected to a lighter one by a pulley falls down and pulls up the lighter one, you're seeing an emergent conservation law:
Here the height of the heavy rock plus the height of light one is a constant. That's a conservation law! It forces some of the potential energy lost by one rock to be transferred to the other. But it's not a fundamental conservation law, built into the fabric of physics. It's an
emergent law that holds only thanks to the clever design of the pulley. If the rope broke, this law would be broken too.
It's not surprising that biology uses similar tricks. But let's see exactly how it works. First let's look at all four reactions we've been studying:
$$\begin{array}{cccc} \mathrm{X} + \mathrm{Y} & \mathrel{\substack{\alpha_{\rightarrow} \\\longleftrightarrow\\ \alpha_{\leftarrow}}} & \mathrm{XY} & \qquad (1) \\ \\ \mathrm{ATP} & \mathrel{\substack{\beta_{\rightarrow} \\\longleftrightarrow\\ \beta_{\leftarrow}}} & \mathrm{ADP} + \mathrm{P}_{\mathrm{i}} & \qquad (2) \\ \\ \mathrm{X} + \mathrm{ATP} & \mathrel{\substack{\gamma_{\rightarrow} \\\longleftrightarrow\\ \gamma_{\leftarrow}}} & \mathrm{ADP} + \mathrm{XP}_{\mathrm{i}} & \qquad (3) \\ \\ \mathrm{XP}_{\mathrm{i}} +\mathrm{Y} & \mathrel{\substack{\delta_{\rightarrow} \\\longleftrightarrow\\ \delta_{\leftarrow}}} & \mathrm{XY} + \mathrm{P}_{\mathrm{i}} & \qquad (4) \end{array} $$
It's easy to check that the rate equations for these reactions have the following
conserved quantities, that is, quantities that are constant in time:
A) \([\mathrm{X}] + [\mathrm{XP}_{\mathrm{i}} ] + [\mathrm{XY}],\) due to the conservation of X.
B) \([\mathrm{Y}] + [\mathrm{XY}],\) due to the conservation of Y.
C) \(3[\mathrm{ATP}] +[\mathrm{XP}_{\mathrm{i}} ] +[\mathrm{P}_{\mathrm{i}}] +2[\mathrm{ADP}],\) due to the conservation of phosphorus.
D) \([\mathrm{ATP}] + [\mathrm{ADP}],\) due to the conservation of adenosine.
Moreover, these quantities, and their linear combinations, are the
only conserved quantities for reactions (1)–(4).
To see this, we use some standard ideas from reaction network theory. Consider the 7-dimensional space with orthonormal basis given by the species in our reaction network:
$$\mathrm{ATP}, \mathrm{ADP}, \mathrm{P}_{\mathrm{i}}, \mathrm{XP}_{\mathrm{i}}, \mathrm{X}, \mathrm{Y}, \mathrm{XY}$$
We can think of complexes like \(\mathrm{ADP} + \mathrm{P}_{\mathrm{i}}\) as vectors in this space. An arbitrary choice of the concentrations of all species also defines a vector in this space. Furthermore, any reaction involving these species defines a vector in this space, namely the sum of the products minus the sum of the reactants. This is called the
reaction vector of this reaction. Reactions (1)–(4) give these reaction vectors:
$$\begin{array}{ccl} v_\alpha &=& \mathrm{XY} - \mathrm{X} - \mathrm{Y} \\ \\ v_\beta &= & \mathrm{P}_{\mathrm{i}} + \mathrm{ADP} - \mathrm{ATP} \\ \\ v_\gamma &=& \mathrm{XP}_{\mathrm{i}} + \mathrm{ADP} - \mathrm{ATP} - \mathrm{X} \\ \\ v_\delta &= & \mathrm{XY} + \mathrm{P}_{\mathrm{i}} - \mathrm{XP}_{\mathrm{i}} - \mathrm{Y} \end{array} $$
Any change in concentrations caused by these reactions must lie in the
stoichiometric subspace: that is, the space spanned by the reaction vectors. Since these vectors obey one nontrivial relation:
$$v_\alpha + v_\beta = v_\gamma + v_\delta$$
the stoichiometric subspace is 3-dimensional. Therefore, the space of conserved quantities must be 4-dimensional, since these specify the constraints on allowed changes in concentrations.
Now let's compare the situation where 'coupling' occurs! For this we consider only reactions (3) and (4):
Now the stoichiometric subspace is 2-dimensional, since \(v_\gamma\) and \(v_\delta\) are linearly independent. Thus, the space of conserved quantities becomes 5-dimensional! Indeed, we can find an additional conserved quantity:
E) \([\mathrm{Y} ] +[\mathrm{P}_{\mathrm{i}}]\)
that is linearly independent from the four conserved quantities we had before. It does not derive from the conservation of a particular molecular component. In other words, conservation of this quantity is not a fundamental law of chemistry. Instead, it is an
emergent conservation law, which holds thanks to the workings of the cell! It holds in situations where the rate constants of reactions catalyzed by the cell's enzymes are so much larger than those of other reactions that we can ignore those other reactions.
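One can check this dimension count directly. In the sketch below (my own illustration, not from the post) each reaction vector is a list of coefficients over the ordered species (ATP, ADP, Pi, XPi, X, Y, XY); a small exact Gaussian elimination computes the rank of the stoichiometric subspace, and we verify that the quantity E) is conserved by reactions (3)-(4) but not by the full network.

```python
# Rank of the stoichiometric subspace and the emergent conserved quantity (sketch).
from fractions import Fraction

# Species order: ATP, ADP, Pi, XPi, X, Y, XY
v_alpha = [0, 0, 0, 0, -1, -1, 1]   # X + Y <-> XY
v_beta  = [-1, 1, 1, 0, 0, 0, 0]    # ATP <-> ADP + Pi
v_gamma = [-1, 1, 0, 1, -1, 0, 0]   # X + ATP <-> ADP + XPi
v_delta = [0, 0, 1, -1, 0, -1, 1]   # XPi + Y <-> XY + Pi

def rank(rows):
    # Exact row reduction over the rationals.
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

print(rank([v_alpha, v_beta, v_gamma, v_delta]))  # 3 -> 4 conserved quantities
print(rank([v_gamma, v_delta]))                   # 2 -> 5 conserved quantities

# The emergent quantity E) = [Y] + [Pi] annihilates the coupled reactions only:
E = [0, 0, 1, 0, 0, 1, 0]
dot = lambda u, v: sum(a * b for a, b in zip(u, v))
print(dot(E, v_gamma), dot(E, v_delta), dot(E, v_alpha))  # 0, 0, -1
```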
And remember from last time: these are precisely the situations where we have
coupling.
Indeed, the emergent conserved quantity E) precisely captures the phenomenon of coupling! The only way for ATP to form ADP + \(\mathrm{P}_{\mathrm{i}}\) without changing this quantity is for Y to be consumed in the same amount as \(\mathrm{P}_{\mathrm{i}}\) is created... thus forming the desired product XY.
Next time we'll look at a more complicated example from biology: the urea cycle.
You can also read comments on Azimuth, and make your own comments or ask questions there! |
To begin our analysis of rotational mechanics one should first remark that the basic equations of translational kinematics (forming the “starting point” for a description of translational mechanics) cannot be used to describe the motion of a rotating body. This is because in our analysis of translational kinematics we assumed that all of the mass elements composing the object travel, more or less, the same distances in a given amount of time with the same velocities and accelerations. The very first idealizations that we made in our analysis of translational kinematics were that the object remains absolutely rigid (which is to say that it does not deform) and that all parts (mass elements) of the object undergo equal displacements in a given amount of time with the same velocities and accelerations; this is what allowed us to treat the whole object as a single particle. But with a rotating object, this is not the case. As a body rotates, different mass elements travel different linear distances in a given amount of time with different linear velocities and accelerations. For example, if a solid ball were spinning on the ground, in a given amount of time (say one second) the mass elements farther away from the axis of rotation would travel a greater linear distance than mass elements closer to the axis of rotation, and thus the farther away mass elements would also be moving with greater speeds.
In rotational kinematics we begin at a similar starting point. We begin by first only analyzing the motion of rotating objects that are absolutely rigid. We then recognize that if we treat the whole object as a collection of particles, even though the three basic linear quantities of translation kinematics (linear displacement, linear velocity, and linear acceleration) are different for different mass elements, there are three angular/rotational quantities which remain the same for all mass elements—namely, angular displacement, angular velocity, and angular acceleration.
Suppose that we are watching an object rotate about a fixed axis (we will investigate the more complicated case where the object “wobbles” later on). Suppose that \(O\) indicates the center of the object and that the axis of rotation (which we can think of as the z-axis) passes perpendicularly through the object and \(O\). Let \(P\) be an arbitrary point in the object (where some particle is located) that moves as the object rotates. Also let the x-axis be a fixed reference line in space. As the object rotates, the line segment \(\overline{OP}\) will describe an angle \(θ\) which will be taken to be measured
counter-clockwise from the reference line. This is to say that the angle \(θ\) will be positive if the particle rotates counter-clockwise from the reference line and negative if it rotates clockwise from the reference line. Note that the particle at \(P\) and indeed all other particles making up the object will travel in circles as the object rotates. It is therefore convenient to indicate the position of a given particle in polar coordinates as \((r,θ)\). As the particle at \(P\) traverses the arc length \(s\) (describing the angle \(θ\)) in a given period of time \(Δt\), the arc length and the angle are related by the equation
$$s=rθ.\tag{1}$$
As the particle at \(P\) moves from location \(A\) to location \(B\) it sweeps out the angle \(Δθ=θ_f-θ_i\). This quantity \(Δθ\) is called the
angular displacement of the rigid object and is defined as
$$Δθ\equiv{θ_f-θ_i}.\tag{2}$$
Since all other particles making up the object undergo the same angular displacement, this quantity is indicative of the angular displacement of the object as a whole. This is analogous to a rigid object undergoing linear displacement: since every part of the object undergoes the same linear displacement, that quantity can be used to describe the linear displacement of the object as a whole. The rate at which this angular displacement occurs can vary. If the rigid body spins rapidly, the angular displacement occurs in a very short time interval; if the body rotates very slowly, the angular displacement occurs over a longer time interval. To quantify the average rate of change of the angular displacement with respect to time, we define a quantity called the average angular speed:
$$ω_{avg}\equiv{\frac{θ_f-θ_i}{t_f-t_i}}=\frac{Δθ}{Δt}.\tag{3}$$
The instantaneous angular speed is defined as the limit of this quantity as \(Δt\) approaches zero:
$$ω\equiv{\lim_{Δt\to0} \frac{Δθ}{Δt}}=\frac{dθ}{dt}.\tag{4}$$
The instantaneous angular velocity is positive when the object is spinning counter-clockwise, because \(θ\) is increasing (since it is measured counter-clockwise from the reference line), and it is negative when the object is spinning clockwise, because \(θ\) is decreasing.
If a particle moves along not just a circle but, more generally, any curve (with a radius of curvature \(r\)) from any point \(P\) to \(Q\), it will traverse an arc length \(Δs\) and sweep out an angular displacement \(Δθ\) which satisfies equation (1). The ratio \(\frac{Δs}{Δt}\) gives the average tangential speed, \(v_t\), of the particle as it moves from \(P\) to \(Q\). Let's divide both sides of equation (1) by \(Δt\) to obtain
$$v_{avg-t}=\frac{Δs}{Δt}=r\frac{Δθ}{Δt}=rω_{avg}.\tag{5}$$
By taking the limit on both sides as \(Δt→0\), we get
$$v_t=\frac{ds}{dt}=r\frac{dθ}{dt}=rω.\tag{6}$$
If the instantaneous angular speed is changing, then the rigid body is undergoing angular acceleration. We define the average angular acceleration as
$$α_{avg}\equiv{\frac{ω_f-ω_i}{t_f-t_i}}=\frac{Δω}{Δt}\tag{7}$$
and the instantaneous angular acceleration as
$$α\equiv{\lim_{Δt\to0} \frac{Δω}{Δt}}=\frac{dω}{dt}.\tag{8}$$
The angular acceleration \(α\) is positive when a rigid body rotating counter-clockwise is speeding up (\(ω_f>ω_i\) so \(ω_f-ω_i>0\) and \(α>0\)) and when a rigid body rotating clockwise is slowing down (\(ω_f>ω_i\) since it is less negative and thus \(ω_f-ω_i>0\) and \(α>0\)).
Since the rotation axis is fixed, there are only two possible directions associated with the vector quantities \(\vec{ω}\) and \(\vec{α}\). In the case of fixed-axis rotation we do not need to use vector notation to express the directionality of these two quantities; the directionality can simply be expressed by a plus or minus sign. We will, however, briefly cover the directionality of these vectors using vector notation since it will be very useful later on. We can specify the direction of these quantities using the right-hand rule. If the object is rotating counter-clockwise, the direction of \(\vec{ω}\) is out of the page; if the object is rotating clockwise, the direction is into the page. If the object is rotating counter-clockwise then \(\vec{ω}\) is positive; if it is rotating clockwise then \(\vec{ω}\) is negative. From the definition \(\vec{α}\equiv{\frac{d\vec{ω}}{dt}}\), the body is speeding up when \(\vec{α}\) acts in the same direction as \(\vec{ω}\) and slowing down when it acts in the opposite direction.
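The relations above can be checked with a short numeric sketch (the specific numbers below are arbitrary illustrative values, not from the text): for constant angular acceleration, \(θ(t) = θ_0 + ω_0 t + \frac{1}{2}αt^2\), equation (8) gives \(ω(t) = ω_0 + αt\), and equation (6) gives the tangential speed of a mass element at radius \(r\).

```python
# Numeric check of the rotational kinematics relations above.
# Constant angular acceleration; all numbers are arbitrary examples.

theta0 = 0.0   # initial angle (rad)
omega0 = 2.0   # initial angular speed (rad/s)
alpha = 0.5    # angular acceleration (rad/s^2)
r = 0.3        # radius of a mass element (m)

def theta(t):
    return theta0 + omega0 * t + 0.5 * alpha * t**2

def omega(t):
    return omega0 + alpha * t

t = 4.0
# Equation (6): tangential speed v_t = r * omega.
v_t = r * omega(t)   # 0.3 * (2.0 + 0.5 * 4.0) = 1.2 m/s

# Check omega(t) against a numerical derivative of theta(t), equation (4).
h = 1e-6
omega_numeric = (theta(t + h) - theta(t - h)) / (2 * h)

print(v_t, omega(t), omega_numeric)
```

The numerical derivative of \(θ(t)\) reproduces \(ω(t)\), illustrating that the angular quantities are consistent with their definitions as time derivatives.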
This article is licensed under a CC BY-NC-SA 4.0 license.
You can get the correct estimates:
$a_n\leq n^2/(2n^2+1)$, since you have a sum of $n$ terms, each less than or equal to $n/(2n^2+1)$;
$n^2/(2n^2+n) \leq a_n$, since you have a sum of $n$ terms, each greater than or equal to $n/(2n^2+n)$.
In general, you try to reduce the complexity of the expression for the $n$th term of a sequence and see how the original sequence compares. It often helps to know that part of the expression is bounded. For instance, we know that $-1\leq \sin 2n\leq 1$, so we have
$$-\frac{1}{1+\sqrt n}\leq \frac{\sin 2n}{1+\sqrt n}\leq \frac{1}{1+\sqrt n},$$
whence by the squeeze theorem
$$\lim_{n\to\infty} \frac{\sin 2n}{1+\sqrt n}=0.$$
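The squeeze can be illustrated numerically: the term is always trapped by the bound $1/(1+\sqrt n)$, which itself shrinks to zero (a quick sketch, not a proof):

```python
# Numeric illustration of the squeeze:
# |sin(2n)/(1 + sqrt(n))| <= 1/(1 + sqrt(n)), and the bound tends to 0.
import math

for n in (10, 1_000, 100_000):
    term = math.sin(2 * n) / (1 + math.sqrt(n))
    bound = 1 / (1 + math.sqrt(n))
    assert abs(term) <= bound   # the squeeze inequality
    print(n, term, bound)
```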
I'm learning about functions of random variables and am trying to work out an example I made up. If $y = \sin(x)$ and $x$ has domain $[0, 4\pi]$, is the following the correct expression for the pdf of $y$: $$\begin{align*} f_Y(y) &= \frac{d}{dy}F_Y(y)\\ &= \frac{d}{dy}[F_X(2\pi) - F_X(\pi) + F_X(4\pi) - F_X(3\pi)]\\ &= \frac{d}{dx}F_X(2\pi)\left|\frac{dg^{-1}(y)}{dy}\right| - \frac{d}{dx}F_X(\pi)\left|\frac{dg^{-1}(y)}{dy}\right| + \frac{d}{dx}F_X(4\pi)\left|\frac{dg^{-1}(y)}{dy}\right| - \frac{d}{dx}F_X(3\pi)\left|\frac{dg^{-1}(y)}{dy}\right|\\ &=f_X(2\pi)\left|\frac{1}{\sqrt{1-0}}\right| - f_X(\pi)\left|\frac{1}{\sqrt{1-0}}\right| + f_X(4\pi)\left|\frac{1}{\sqrt{1-0}}\right| - f_X(3\pi)\left|\frac{1}{\sqrt{1-0}}\right|\\ &= f_X(2\pi) - f_X(\pi) + f_X(4\pi) - f_X(3\pi)\\ \end{align*}$$
Draw a picture.
Although you can apply a standard formula for changes of variable, this one is tricky because the transformation $X\to Y$ is not one-to-one. Often the most convenient and reliable method is to compute the distribution function (CDF) and then differentiate it.
The distribution function of $Y=\sin(X)$ is, by definition,
$$F_Y(y) = \Pr(Y \le y) = \Pr(\sin(X) \le y) = \Pr(\{X\,|\, \sin(X) \le y\}).$$
The latter probability is with respect to $X$. The graph of $\sin$ has been emphasized where its height is less than or equal to $y$. The values of $X$ where this occurs, shown in thick blue along the axis, form the set $\{x\in[0,4\pi]\,|\, \sin(x) \le y\}$.
When $0 \lt y$, this set consists of three disjoint intervals $\newcommand{s}{\sin^{-1}y}[0, \s]$, $[\pi -\s, 2\pi + \s]$, and $[3\pi - \s, 4\pi]$. Because they are disjoint, the chance that $X$ lies within this union is the sum of the chances of each interval:
$$\eqalign{ \Pr(\sin(X) \le y) &= F_X(\s) - F_X(0) \\ &+ F_X(2\pi + \s) - F_X(\pi - \s)\\ &+1 - F_X(3\pi - \s). }$$
The value $1 = F_X(4\pi)$ appeared because the range of $X$ is $[0, 4\pi]$. However, we may not replace $F_X(0)$ by $0$ because possibly $X$ has nonzero probability there.
When $y \lt 0$, the set $\{X \in[0,4\pi]\,|\, \sin(X) \le y\}$ is the union of just two disjoint intervals:
By emulating the preceding argument, you should have no trouble writing down an expression for their probability in terms of $F_X$.
The PDF, when it exists, is the derivative of $F_Y$. In the first case
$$f_Y(y) = \frac{d}{dy} F_Y(y) = \frac{d}{dy}\left(F_X(\s) - F_X(0) + F_X(2\pi + \s) \cdots - F_X(3\pi - \s)\right).$$
Apply the Chain Rule, recognizing that $\frac{d}{dy}\s = 1/\sqrt{1-y^2}$:
$$f_Y(y) = \frac{1}{\sqrt{1-y^2}}\left(f_X(\s) + f_X(\pi-\s) + f_X(2\pi+\s) + f_X(3\pi-\s)\right)$$
This formula can be understood for all $y$ provided we replace $f_X(\s)$ by $f_X(4\pi + \s)$ whenever $y \lt 0$. Equivalently, and much more generally (with no restrictions on the range of $X$),
$$f_Y(y) = \frac{1}{\sqrt{1-y^2}}\sum_{i=-\infty}^\infty \left[f_X(2i \pi + \s) + f_X((2i+1)\pi - \s)\right].$$
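As a concrete sanity check of the formula, take the special case $X \sim \text{Uniform}[0, 4\pi]$ (my choice for illustration, not part of the question): each $f_X$ term equals $1/(4\pi)$, the four terms sum to $1/\pi$, and $f_Y(y) = 1/(\pi\sqrt{1-y^2})$, the arcsine-type density with CDF $F_Y(y) = 1/2 + \arcsin(y)/\pi$. A Monte Carlo simulation agrees:

```python
# Monte Carlo check for the special case X ~ Uniform[0, 4*pi].
# Then f_Y(y) = 1 / (pi * sqrt(1 - y^2)), so
# F_Y(y) = 1/2 + arcsin(y)/pi.
import math
import random

random.seed(0)
n = 200_000
y0 = 0.5

hits = sum(math.sin(random.uniform(0.0, 4 * math.pi)) <= y0 for _ in range(n))
empirical = hits / n
analytic = 0.5 + math.asin(y0) / math.pi   # = 2/3 for y0 = 0.5

print(empirical, analytic)
```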
I had acquired a ballscrew assembly from one of the loading docks, and was really excited about using it as the main actuator for this desk. (This is the same ballscrew from Kris's first seek&geek.) Even though there was no obvious part number or datasheet, I could estimate the stiffness by looking at similar ballscrews and felt pretty happy using this approximation in the rest of my calculations.
The ballscrew assembly has been sitting on my bookshelf for years with the same wrapping I found it with - paper towels and packing tape. This week I took off the wrapping and started dimensioning things... and started this very wild ride.
Ballscrew assembly pre-shenanigans
Long story short, I accidentally discovered the true reason it was wrapped up. I had thought the towels were simply to prevent dust from getting in the bearings, but the true reason was to prevent pine resin from contaminating everything else!
At the base of the ballscrew where the supporting block bearing is, there was a glob of pine resin. In my excitement to measure all the dimensions, I had allowed the ball nut to sink into this resin. So suddenly, the entire assembly was seized!
In retrospect, what I should have done was soak the entire assembly in acetone to dissolve all the grease and resin. But, for some reason, I thought there would be rubber or plastic components that would be unhappy with the solvent bath. So I painstakingly took everything apart, soaked everything in acetone, reassembled the pieces, and finally relubricated all the parts!
Fixing my mistakes
First discovery: the two bearings in the driver block are in a face-to-face configuration. They use 8mm ID flanged bearings, where the outer flange is held in place and the inner races are preloaded by a torqued nut compressing them against the 12mm screw.
Bearing diagram. Solid lines are outer diameter (outer races), dashed lines are inner races, red lines are approximations of ball contact forces and directions
The face-to-face configuration has more compliance against rolling moments, which makes it more forgiving with misalignment (4x less sensitive to roll than the back-to-back configuration). Assuming a maximum race deflection of these ball bearings of 15μm under a nominal max load of 3300N, the linear stiffness should be 2.2*10^6 N/m, making
$K_{moment} = \frac{K_{linear} L^2}{4} = 3.1\cdot10^4 \;\frac{N \cdot m}{rad}$
So that's neat. The next component in the stack is a steel washer. This item was supposed to prevent the ball of resin from gooping up the bearings below, but when the ballnut plunged into the resin it brought up this guy with it.
Next up was reattaching the shaft. The end of the ballscrew had a really fine thread, which got slightly damaged by me pressing the shaft back on. I used a knife to gently nudge the threads back into place, so I could reattach the nut. There's also a washer on the front end of this assembly that protects the inner races of the bearings from resin goop.
I replaced the resin goop ball with a blob of lithium grease. Probably this was unnecessary.
Next up was re-assembly hell. Luckily for me, this ballnut uses an external ball-return plate. Otherwise I doubt I would have been able to repair this item (or maybe I would've come up with the better idea of dunking the whole assembly in acetone first).
There were originally 50 balls, 2.3mm diameter. Unfortunately I lost one in the repacking process :(
Repacking the balls involved picking them up with tweezers, packing them in the channel, then feeding the shaft such that the balls were evenly spaced. I did this five times in the process of hardware debugging.
Next up was lubrication. Chain oil was too clingy, Tap-magic too light, but machine oil worked fine.
Never again! But, the ballscrew lives! And now I feel justified using this reused ballscrew in the desk.
This is what the balls are doing on the inside.
Modified from barnesballscrew.com
We can take a guess at the load capacity of the ballnut knowing how many balls there are (too many!) and their diameters. First, let's take a look at contact pressure.
Maximum contact pressure can be approximated with
$P_{max} = \frac{P_{load}}{\frac{\pi}{2}r}$,
where we need to take care not to exceed the Brinell hardness... that's how bearings fail! Assuming the bearings are 52100 bearing steel, the hardness should be ~200 BHN.
(wikipedia)
So for these balls, $P_{max}$ < 11.2 N per ball, for a total load capacity of 550N, or 123lbs. That means no attempting to stand on the ballnut by itself.
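The arithmetic behind these numbers can be jotted down in a few lines. The per-ball limit (11.2 N) and linear stiffness (2.2*10^6 N/m) are the post's own figures; the bearing spacing `L` below is a hypothetical value I chose only to illustrate the moment-stiffness formula, since the actual spacing isn't given:

```python
# Back-of-the-envelope numbers from the ballscrew post.
# k_linear and p_per_ball come from the post; L is an assumed
# (hypothetical) effective bearing spacing for illustration.

k_linear = 2.2e6        # N/m, linear stiffness estimate
L = 0.24                # m, assumed bearing spacing (not from the post)
k_moment = k_linear * L**2 / 4   # moment stiffness, K = k*L^2/4

n_balls = 49            # 50 originally, one lost during repacking
p_per_ball = 11.2       # N, per-ball limit from the contact-pressure bound
capacity_n = n_balls * p_per_ball    # ~549 N total
capacity_lbf = capacity_n / 4.448    # 1 lbf = 4.448 N

print(f"{k_moment:.2e} N*m/rad, {capacity_n:.0f} N ({capacity_lbf:.0f} lbf)")
```

With these inputs the moment stiffness lands in the same ballpark as the quoted 3.1*10^4, and 49 balls at 11.2 N each recovers the ~550 N (~123 lbf) capacity.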
I've been working with the convexity adjustment for interest rates that arises when changing from one measure $Q_{T_p}$ with a numéraire $N_p=P(t,T_p)$ to a measure $Q_{T_e}$ with a numéraire $N_e=P(t,T_e)$ where $T_p$ is the time of payment and $T_e$ is the time where the interest of the forward rate ends.
So I have the forward rate:
\begin{align*} L(t, T_s, T_e) = \frac{1}{\Delta_s^e}\bigg(\frac{P(t, T_s)}{P(t, T_e)}-1 \bigg), \end{align*}
And the expectation of the payment with the change of measure:
\begin{align*} &\ P(t_0, T_p)E^{T_p}\big(L(T_s, T_s, T_e) \mid \mathcal{F}_{t_0}\big) \\ =&\ P(t_0, T_p)E^{T_e}\Big(\frac{\eta_{T_p}}{\eta_{t_0}}L(T_s, T_s, T_e) \mid \mathcal{F}_{t_0}\Big)\\ =&\ P(t_0, T_p)E^{T_e}\Big(\frac{P(t_0, T_e)}{P(t_0, T_p)P(T_p, T_e)} L(T_s, T_s, T_e)\mid \mathcal{F}_{t_0}\Big)\\ =&\ P(t_0, T_e)E^{T_e}\Big(\frac{1}{P(T_p, T_e)} L(T_s, T_s, T_e)\mid \mathcal{F}_{t_0}\Big)\\ =&\ P(t_0, T_e)E^{T_e}\Big(\big(1+ \Delta_p^e L(T_p, T_p, T_e) \big) L(T_s, T_s, T_e)\mid \mathcal{F}_{t_0}\Big) \end{align*}
Now I believe this last equation is the tricky one, since I have two forward rates observed at times $T_p$ and $T_s$. I'd like to confirm my thinking: since we are under the $Q_{T_e}$ measure and both $L(t, T_p, T_e)$ and $L(t, T_s, T_e)$ are martingales under it, I can use this equality:
\begin{align*} P(t_0, T_e)E^{T_e}\Big(\big(1+ \Delta_p^e L(T_p, T_p, T_e) \big) L(T_s, T_s, T_e)\mid \mathcal{F}_{t_0}\Big)= P(t_0, T_e)E^{T_e}\Big(\big(1+ \Delta_p^e L(T_s, T_p, T_e) \big) L(T_s, T_s, T_e)\mid \mathcal{F}_{t_0}\Big)\tag{1} \end{align*}
Does this equality hold for that reason? From there I've seen two possible solutions:
Assume a model (e.g. log-normal) for both $L(t, T_p, T_e)$ and $L(t, T_s, T_e)$. Since we'll now be working with both values observed at $T_s$, when I use Ito's lemma to compute the product and then integrate, I'll do it from $t_0$ to $T_s$, and with that I can solve the expectation.
Use the linear model proposed here in page 19, Section 4.2, where essentially the calculations are made with respect to $(1+ \Delta_s^e L(T_s, T_s, T_e))$ but with a specific value of $\Delta_s^e$ being multiplied in order to account for the payment being at $T_p$ instead of at $T_e$.
I'm working on this to price an option, specifically a digital option (which I think should be almost the same). Has anyone here used any of these results, or would you have a preference for one of these options?
Much help appreciated |
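As a sanity check of the first approach (joint log-normal, my illustrative parameter values, not anyone's market data): if both forwards are driftless log-normal martingales under $Q_{T_e}$ with correlation $\rho$, then $E[L_1(T_s)L_2(T_s)] = L_1(0)L_2(0)e^{\rho\sigma_1\sigma_2 T_s}$, which a Monte Carlo reproduces:

```python
# Monte Carlo check of E[(1 + delta*L1(Ts)) * L2(Ts)] under a joint
# log-normal model for the two forwards, both martingales under Q_{Te}.
# All parameter values are hypothetical illustration values.
import math
import random

random.seed(1)
L1_0, L2_0 = 0.020, 0.025      # initial forwards
s1, s2, rho, T = 0.20, 0.25, 0.8, 1.0
delta = 0.25                   # accrual fraction Delta_p^e
n = 200_000

total = 0.0
for _ in range(n):
    z1 = random.gauss(0.0, 1.0)
    z2 = rho * z1 + math.sqrt(1 - rho**2) * random.gauss(0.0, 1.0)
    L1 = L1_0 * math.exp(-0.5 * s1**2 * T + s1 * math.sqrt(T) * z1)
    L2 = L2_0 * math.exp(-0.5 * s2**2 * T + s2 * math.sqrt(T) * z2)
    total += (1 + delta * L1) * L2
mc = total / n

# Closed form: E[L1*L2] = L1_0 * L2_0 * exp(rho*s1*s2*T) for
# correlated driftless log-normals.
closed = L2_0 + delta * L1_0 * L2_0 * math.exp(rho * s1 * s2 * T)
print(mc, closed)
```

The exponential factor $e^{\rho\sigma_1\sigma_2 T_s}$ is exactly the convexity correction that the product term picks up.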
Would it be possible to construct a reflector such that, for a given wavelength (perhaps part of the microwave spectrum?), the reflected wave interferes constructively with itself?
Ideally, this would work for any two arbitrary reflection points that are "close" to one another.
My question has two very interrelated ideas:
Could such a shape even exist? How would you go about finding the defining curve?
Notes:
I initially assumed a parabolic reflector because one nicety of this thought experiment was to produce a collimated beam in addition to the interference pattern, and I remembered that parabolic dishes are widely used to roughly focus light into beams. In actuality I realize that if such a shape does exist, it probably wouldn't have such a nice defining equation.
I tried considering the problem from the $\Delta L =m\lambda$ standpoint, in two dimensions first to simplify things.
If the emitter is considered to be a point source at the focus of the parabola $y=ax^2$ then for any two rays with reflection points $(x,ax^2)$ and $(x+h,a(x+h)^2)$ the path length difference is $\Delta L=\sqrt{x^2+(ax^2-\frac{1}{4a})^2}-\sqrt{(x+h)^2+(a(x+h)^2-\frac{1}{4a})^2}+a((x+h)^2-x^2)$, presuming I did my math right.
If I understand what I have correctly, it will give me the equation of the reflector where rays separated in the x-direction by h units interfere constructively. Ideally, however, there's a shape that will reflect a given wavelength with constructive interference for every h (within realistic constraints, of course; infinitely large reflectors can't exist, etc.).
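One observation worth making numerically (this is my check, under the assumption that the focus sits at height $1/(4a)$ and that $\Delta L$ is measured to a plane wavefront above the dish): the expression for $\Delta L$ vanishes identically for a parabola, for every $x$ and $h$. This follows from the directrix property, distance-to-focus $= ax^2 + 1/(4a)$, and it is precisely why a parabolic dish produces an in-phase collimated beam:

```python
# Evaluate the path-length-difference expression for the parabola
# y = a*x^2 with its focus at height 1/(4a). For a parabola it is
# identically zero: all rays from the focus reach a plane wavefront
# in phase.
import math

def delta_L(a, x, h):
    f = 1 / (4 * a)                           # focal height
    r1 = math.hypot(x, a * x**2 - f)          # focus -> point (x, ax^2)
    r2 = math.hypot(x + h, a * (x + h)**2 - f)
    return r1 - r2 + a * ((x + h)**2 - x**2)

a = 0.7
vals = [delta_L(a, x, h) for x in (-2.0, 0.3, 1.5) for h in (0.1, 1.0, 3.0)]
print(max(abs(v) for v in vals))   # ~0, up to floating-point rounding
```

So with $\Delta L = 0 = 0\cdot\lambda$, the parabola already satisfies the constructive-interference condition for every pair of reflection points.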
Volume 60, № 6, 2008
On the smoothness of a solution of the first boundary-value problem for second-order degenerate elliptic-parabolic equations
Ukr. Mat. Zh. - 2008. - 60, № 6. - pp. 723–736
In this work, the first boundary-value problem is considered for a second-order degenerate elliptic-parabolic equation with, generally speaking, discontinuous coefficients. The matrix of leading coefficients satisfies the parabolic Cordes condition with respect to the space variables. We prove that the generalized solution of the problem belongs to the Hölder space $C^{1+\lambda}$ if the right-hand side $f$ belongs to $L_p$, $p > n$.
Generalized stochastic derivatives on spaces of nonregular generalized functions of Meixner white noise
Ukr. Mat. Zh. - 2008. - 60, № 6. - pp. 737–758
We introduce and study generalized stochastic derivatives on Kondratiev-type spaces of nonregular generalized functions of Meixner white noise. Properties of these derivatives are quite analogous to properties of stochastic derivatives in the Gaussian analysis. As an example, we calculate the generalized stochastic derivative of a solution of a stochastic equation with Wick-type nonlinearity.
On the uniform convergence of wavelet expansions of random processes from Orlicz spaces of random variables. II
Ukr. Mat. Zh. - 2008. - 60, № 6. - pp. 759–775
We establish conditions under which wavelet expansions of random processes from Orlicz spaces of random variables converge uniformly with probability one on a bounded interval.
Ukr. Mat. Zh. - 2008. - 60, № 6. - pp. 776–782
We investigate the problem of stability of a nonlinear system on a time scale and propose a unified approach to the analysis of stability of motion based on a generalized direct Lyapunov method.
Ukr. Mat. Zh. - 2008. - 60, № 6. - pp. 783–795
We construct a linear method of approximation $ \{Q_{n,\psi} \}_{n \in {\mathbb N}}$ in the unit disk for the classes of holomorphic functions $A^{\psi}_p$ that are the Hadamard convolutions of the unit balls of the Bergman space $A_p$ with reproducing kernels $\psi(z) = \sum^\infty_{k=0}\psi_k z^k.$ We give conditions on $\psi$ under which the method $ \{Q_{n,\psi} \}_{n \in {\mathbb N}}$ approximates the class $A^{\psi}_p$ in the metrics of the Hardy space $H_s$ and the Bergman space $A_s,\; 1 \leq s \leq p,$ with an error that coincides in order with the best approximation by algebraic polynomials.
Energy interaction between linear and nonlinear oscillators (energy transfer through the subsystems in a hybrid system)
Ukr. Mat. Zh. - 2008. - 60, № 6. - pp. 796–814
The analysis of the energy transfer between subsystems coupled in a hybrid system is an urgent problem for various applications. We present an analytic investigation of the energy transfer between linear and nonlinear oscillators for the case of free vibrations when the oscillators are statically or dynamically connected into a double-oscillator system and regarded as two new hybrid systems, each with two degrees of freedom. The analytic analysis shows that the elastic connection between the oscillators leads to the appearance of a two-frequency-like mode of the time function and that the energy transfer between the subsystems indeed exists. In addition, the dynamical linear constraint between the oscillators, each with one degree of freedom, coupled into the hybrid system changes the dynamics from single-frequency modes into two-frequency-like modes. The dynamical constraint, as a connection between the subsystems, is realized by a rolling element with inertial properties. In this case, the analytic analysis of the energy transfer between linear and nonlinear oscillators for free vibrations is also performed. The two Lyapunov exponents corresponding to each of the two eigenmodes are expressed via the energy of the corresponding eigentime components.
Ukr. Mat. Zh. - 2008. - 60, № 6. - pp. 815 – 828
We consider the problem of saturation, in the spaces $S^p_\varphi$, of linear summation methods for Fourier series, which are determined by sequences of functions defined on a subset of the space $C$. We obtain sufficient conditions for the saturation of such methods in these spaces.
Ukr. Mat. Zh. - 2008. - 60, № 6. - pp. 829–836
We prove a necessary and sufficient condition of topological equivalence of smooth functions which are given on a circle and have a finite number of local extrema.
Ukr. Mat. Zh. - 2008. - 60, № 6. - pp. 837–842
We prove a new exact Kolmogorov-type inequality estimating the norm of a mixed fractional-order derivative (in Marchaud's sense) of a function of two variables via the norm of the function and the norms of its partial derivatives of the first order.
On the conditions of convergence for one class of methods used for the solution of ill-posed problems
Ukr. Mat. Zh. - 2008. - 60, № 6. - pp. 843–850
We propose a new class of projection methods for the solution of ill-posed problems with inaccurately specified coefficients. For methods from this class, we establish the conditions of convergence to the normal solution of an operator equation of the first kind. We also present additional conditions for these methods guaranteeing the convergence with a given rate to normal solutions from a certain set.
On conditions for Dirichlet series absolutely convergent in a half-plane to belong to the class of convergence
Ukr. Mat. Zh. - 2008. - 60, № 6. - pp. 851–856
For a Dirichlet series $F(s) = \sum^{\infty}_{n=0}a_n \exp \{s\lambda_n\}$ with the abscissa of absolute convergence $\sigma_a = 0$, let $M(\sigma) = \sup\{|F(\sigma+it)|:\;t \in {\mathbb R}\}$ and $\mu(\sigma) = \max\{|a_n| \exp(\sigma \lambda_n):\;n \geq 0\},\quad \sigma < 0.$ It is proved that the condition $\ln \ln n = o(\ln \lambda_n),\;n\rightarrow\infty$, is necessary and sufficient for equivalence of relations $\int^0_{-1}|\sigma|^{\rho-1}\ln M(\sigma)d\sigma < +\infty$ and $\int^0_{-1}|\sigma|^{\rho-1}\ln \mu(\sigma)d\sigma < +\infty,\quad \rho > 0,$ for each such series.
Ukr. Mat. Zh. - 2008. - 60, № 6. - pp. 857–861
We prove the existence of nontrivial functions in $\mathbb{R}^n$, $n > 2$, with vanishing integrals over balls of fixed radius and a given growth majorant.
Ukr. Mat. Zh. - 2008. - 60, № 6. - pp. 862–864
We investigate a Besicovitch-Danzer-type characterization of a circle in a class of compact sets whose boundary divides the plane into several components.
Niels Bohr introduced the atomic hydrogen model in 1913. He described the atom as a positively charged nucleus, composed of protons and neutrons, surrounded by a negatively charged electron cloud. In the model, electrons orbit the nucleus in atomic shells. The atom is held together by electrostatic forces between the positive nucleus and the negative surroundings.
Hydrogen Energy Levels
The Bohr model is used to describe the structure of hydrogen energy levels. The image below represents the shell structure, where each shell is associated with a principal quantum number n. The energy levels presented correspond with each shell. The amount of energy in each level is reported in eV, and the maximum energy is the ionization energy of 13.598 eV.
Figure 1: Some of the orbital shells of a hydrogen atom. The energy levels of the orbitals are shown to the right.
Hydrogen Spectrum
The movement of electrons between these energy levels produces a spectrum. The Balmer equation is used to describe the four wavelengths of hydrogen which are present in the visible light spectrum. These wavelengths are at 656, 486, 434, and 410 nm. They correspond to the emission of photons as an electron in an excited state transitions down to energy level n=2. The Rydberg formula, below, generalizes the Balmer series for all energy level transitions. To get the Balmer lines, the Rydberg formula is used with \(n_f = 2\).
Rydberg Formula
The Rydberg formula explains the different energies of transition that occur between energy levels. When an electron moves from a higher energy level to a lower one, a photon is emitted. The hydrogen atom can emit different wavelengths of light depending on the initial and final energy levels of the transition. It emits a photon with energy proportional to the difference of the inverse squares of the final (\(n_f\)) and initial (\(n_i\)) energy levels.
\[\text{Energy}=R\left(\dfrac{1}{n^2_f}-\dfrac{1}{n^2_i}\right) \label{1}\]
The energy of a photon is equal to Planck’s constant, \(h = 6.626 \times 10^{-34}\; \text{m}^2\,\text{kg/s}\), times the speed of light in a vacuum, divided by the wavelength of emission.
\[E=\dfrac{hc}{\lambda} \label{2}\]
Combining these two equations produces the Rydberg Formula.
\[\dfrac{1}{\lambda}=R\left(\dfrac{1}{n^2_f}-\dfrac{1}{n^2_i}\right) \label{3}\]
The Rydberg Constant (R) = \(10,973,731.6\; m^{-1}\) or \(1.097 \times 10^7\; m^{-1}\).
Limitations of the Bohr Model
The Bohr Model was an important step in the development of atomic theory. However, it has several limitations.
It is in violation of the Heisenberg Uncertainty Principle: the Bohr model considers electrons to have both a known radius and orbit, which is impossible according to Heisenberg.
The Bohr model is very limited in terms of size: poor spectral predictions are obtained when larger atoms are in question.
It cannot predict the relative intensities of spectral lines.
It does not explain the Zeeman effect, when the spectral line is split into several components in the presence of a magnetic field.
The Bohr model does not account for the fact that accelerating electrons do not emit electromagnetic radiation.
References
Bohr, Niels. "On the Constitution of Atoms and Molecules, Part I." Philosophical Magazine 26 (1913): 1-24. <http://web.ihep.su/dbserv/compas/src/bohr13/eng.pdf>
Bohr, Niels. "On the Constitution of Atoms and Molecules, Part II." Philosophical Magazine 26 (1913): 476-502. <http://web.ihep.su/dbserv/compas/src/bohr13b/eng.pdf>
Turner, J. E. Atoms, Radiation, and Radiation Protection. Weinheim: Wiley-VCH, 2007. Print.
Problems
1. An emission spectrum gives one of the lines in the Balmer series of the hydrogen atom at 410 nm. This wavelength results from a transition from an upper energy level to n=2. What is the principal quantum number of the upper level?
2. The Bohr model of the atom was able to explain the Balmer series because:
a. larger orbits required electrons to have more negative energy in order to match the angular momentum.
b. differences between the energy levels of the orbits matched the difference between energy levels of the line spectra.
c. electrons were allowed to exist only in allowed orbits and nowhere else.
d. none of the above
3. One reason the Bohr model of the atom failed was because it did not explain why
a. accelerating electrons do not emit electromagnetic radiation.
b. moving electrons have a greater mass.
c. electrons in the orbits of an atom have negative energies.
d. electrons in greater orbits of an atom have greater velocities.
Answers
1. \(\frac{1}{\lambda} = R\left(\frac{1}{2^2} - \frac{1}{n^2}\right)\), with \(R = 1.097\times 10^7\; m^{-1}\) and \(\lambda = 410\) nm. Then \(\frac{1}{n^2} = \frac{1}{4} - \frac{1}{\lambda R} \approx 0.0278\), so \(n^2 \approx 36\) and \(n = 6\). The emission resulted from a transition from energy level 6 to energy level 2.
2. (B) differences between the energy levels of the orbits matched the difference between energy levels of the line spectra.
3. (A) accelerating electrons do not emit electromagnetic radiation.
Contributors
Michelle Faust
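The Rydberg formula (Equation 3) can be evaluated directly; the short script below reproduces the four Balmer wavelengths quoted above and solves Problem 1 in reverse:

```python
# Balmer-series wavelengths from the Rydberg formula, 1/lambda = R(1/nf^2 - 1/ni^2),
# and the reverse calculation from Problem 1.
R = 1.097e7  # Rydberg constant, m^-1

def balmer_wavelength_nm(n_i, n_f=2):
    """Wavelength (nm) of the transition n_i -> n_f."""
    inv_lambda = R * (1 / n_f**2 - 1 / n_i**2)   # m^-1
    return 1e9 / inv_lambda

for n_i in (3, 4, 5, 6):
    print(n_i, round(balmer_wavelength_nm(n_i)), "nm")   # 656, 486, 434, 410

# Problem 1: which upper level gives the 410 nm line?
lam = 410e-9
inv_n2 = 1 / 4 - 1 / (lam * R)      # = 1/n^2
n_upper = (1 / inv_n2) ** 0.5
print(round(n_upper))               # 6
```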
Warning
Only the calculation of the density is tested for open shell configurations (and relies on a correct .OCCUPATION). All other properties are only tested for closed shell systems and should not be trusted for open shell systems without a thorough testing.
**VISUAL¶
Sampling¶
.LIST¶
Calculate various densities in a few points. Example (3 points; coordinates in bohr):
.LIST
3
1.0 0.0 0.0
0.0 1.0 0.0
0.0 0.0 1.0
.LINE¶
Calculate various densities along a line. Example (line connecting two points; 200 steps; coordinates in bohr):
.LINE
0.0 0.0 0.0
0.0 0.0 5.0
200
Scalar and vector densities are written to the files plot.line.scalar and plot.line.vector, respectively, and should be saved after the calculation, e.g.
pam --get=plot.line.scalar ...
The first three columns of the output files give the coordinates (x, y, z) of the point. They are followed by one/three columns giving the value of the scalar/vector density at that point.
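The column layout described above is easy to read back with a few lines of code. A minimal sketch, assuming whitespace-separated columns; the sample data below is made up, and in a real run the file contents would first be retrieved with pam --get=plot.line.scalar:

```python
# Parse a plot.line.scalar-style file: columns are x, y, z, value.
# The sample text is invented so the snippet is self-contained.
SAMPLE = """\
0.0 0.0 0.0 1.234e-01
0.0 0.0 0.5 9.876e-02
0.0 0.0 1.0 4.321e-02
"""

def parse_scalar_line_file(text):
    """Return a list of ((x, y, z), value) tuples."""
    points = []
    for line in text.splitlines():
        if not line.strip():
            continue
        cols = line.split()
        xyz = tuple(float(c) for c in cols[:3])
        points.append((xyz, float(cols[3])))
    return points

points = parse_scalar_line_file(SAMPLE)
print(points[0])  # ((0.0, 0.0, 0.0), 0.1234)
```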
.RADIAL¶
Compute radial distributions by performing Lebedev angular integration over a specified number of evenly spaced radial shells out to a specified distance from a specified initial point. Example (coordinates and distance in bohr):
.RADIAL
0.0 0.0 0.0
10.0
200
The first line after the keyword specifies the initial point, here chosen to be the origin. The second and third lines give the distance and step size, respectively. Scalar and vector densities are written to the files plot.radial.scalar and plot.radial.vector, respectively, and should be saved after the calculation, e.g.
pam --get=plot.radial.scalar ...
.2D¶
Calculate various densities in a plane. The plane is specified using 3 points that have to form a right angle. Example (coordinates in bohr):
.2D
0.0 0.0 0.0   !origin
0.0 0.0 10.0  !"right"
200           !nr of points origin-"right"
0.0 10.0 0.0  !"top"
200           !nr of points origin-"top"
.2D_INT¶
Integrate various densities in a plane using Gauss-Lobatto quadrature. The plane is specified using 3 points that have to form a right angle. Example (coordinates in bohr):
.2D_INT
0.0 0.0 0.0   !origin
0.0 0.0 10.0  !"right"
10            !nr of tiles to the "right"
0.0 10.0 0.0  !"top"
10            !nr of tiles to the "top"
5             !order of the Legendre polynomial for each tile
.3D¶
Calculate various densities in 3D and write to cube file format. Example (coordinates in bohr):
.3D 40 40 40 ! 40 x 40 x 40 points
.3DFAST¶
Fast evaluation of the molecular electrostatic potential. Example (coordinates in bohr):
.3DFAST 40 40 40 ! 40 x 40 x 40 points
.3D_ADD¶
Add space around the cube file. Default (coordinates in bohr):
.3D_ADD 4.0
.3D_IMP¶
Calculate various densities in 3D on an imported grid (which does not have to be regular). Example:
.3D_IMP grid_file ! a file with x,y,z-coordinates of grid points
.3D_INT¶
Integrate densities in 3D.
Modification of densities¶ .CARPOW¶
Scale densities by Cartesian product \(x^iy^jz^k\). The keyword is followed by three integers specifying the exponents \((i,j,k)\). Example:
.DENSITY
.CARPOW
1 0 0
is equivalent to the specification:
.EDIPX
.SCALE¶
Scale densities by a factor. Default:
.SCALE 1.0
.DSCALE¶
Scale densities down by a factor. Default:
.DSCALE 1.0
Densities¶ .DENSITY¶
Compute number density \(n(\mathbf{r})\) . Example (unperturbed density):
.DENSITY DFCOEF
Another example (perturbed density, first response vector):
.DENSITY PAMXVC 1
.ELF¶
Compute the electron localization function. Example:
.ELF DFCOEF
.GAMMA5¶
Compute the electron chirality density. Example:
.GAMMA5 DFCOEF
.J¶
Compute the current density \(\mathbf{j}(\mathbf{r})=-e\psi_{i}^{\ast}c\boldsymbol{\alpha}\psi_{i}\). Example (use first response vector):
.J PAMXVC 1
.JDIA¶
Compute the nonrelativistic diamagnetic current density. Example:
.JDIA DFCOEF
.JX¶
Compute the x-component \(j_{x}(\mathbf{r})=-e\psi_{i}^{\ast}c\alpha_{x}\psi_{i}\) of the current density. Example (use first response vector):
.JX PAMXVC 1
.JY¶
Compute the y-component \(j_{y}(\mathbf{r})=-e\psi_{i}^{\ast}c\alpha_{y}\psi_{i}\) of the current density. Example (use first response vector):
.JY PAMXVC 1
.JZ¶
Compute the z-component \(j_{z}(\mathbf{r})=-e\psi_{i}^{\ast}c\alpha_{z}\psi_{i}\) of the current density. Example (use first response vector):
.JZ PAMXVC 1
.DIVJ¶
Compute the divergence of the current density. Example (use first response vector):
.DIVJ PAMXVC 1
.ROTJ¶
Compute the curl of the current density. Example (use first response vector):
.ROTJ PAMXVC 1
.BDIPX¶
Compute the x-component \(m^{[1]}_{x}(\mathbf{r})=-\frac{1}{2}(\mathbf{r}\times\mathbf{j})_{x}\) of the magnetic dipole operator. Example (use first response vector):
.BDIPX PAMXVC 1
.BDIPY¶
Compute the y-component \(m^{[1]}_{y}(\mathbf{r})=-\frac{1}{2}(\mathbf{r}\times\mathbf{j})_{y}\) of the magnetic dipole operator. Example (use first response vector):
.BDIPY PAMXVC 1
.BDIPZ¶
Compute the z-component \(m^{[1]}_{z}(\mathbf{r})=-\frac{1}{2}(\mathbf{r}\times\mathbf{j})_{z}\) of the magnetic dipole operator. Example (use first response vector):
.BDIPZ PAMXVC 1
.EDIPX¶
Compute the x-component \(Q^{[1]}_{x}(\mathbf{r})=xn(\mathbf{r})\) of the electric dipole.
.EDIPY¶
Compute the y-component \(Q^{[1]}_{y}(\mathbf{r})=yn(\mathbf{r})\) of the electric dipole.
.EDIPZ¶
Compute the z-component \(Q^{[1]}_{z}(\mathbf{r})=zn(\mathbf{r})\) of the electric dipole.
.ESP¶
Compute the electrostatic potential. Example:
.ESP DFCOEF
.ESPE¶
Compute the electronic part of the electrostatic potential.
.ESPN¶
Compute the nuclear part of the electrostatic potential.
.ESPRHO¶
Compute the electrostatic potential times density.
.ESPERHO¶
Compute the electronic part of the electrostatic potential times density.
.ESPNRHO¶
Compute the nuclear part of the electrostatic potential times density.
.NDIPX¶
Compute the NMR shielding density, with the “X”-component of the nuclear magnetic dipole moment and the component of the magnetically-induced current density selected by the chosen record on the PAMXVC file as perturbing operators.
.NDIPY¶
Compute the NMR shielding density, with the “Y”-component of the nuclear magnetic dipole moment and the component of the magnetically-induced current density selected by the chosen record on the PAMXVC file as perturbing operators.
.NDIPZ¶
Compute the NMR shielding density, with the “Z”-component of the nuclear magnetic dipole moment and the component of the magnetically-induced current density selected by the chosen record on the PAMXVC file as perturbing operators.
.NICS¶
Compute the NMR shielding density in a selected point in space. Is used to calculate NICS. Example:
.NICS 1.2 -1.0 2.0
will calculate the NMR shielding at the point (1.2, -1.0, 2.0). This keyword can be used only together with one of the NDIPX, NDIPY, or NDIPZ keywords.
.READJB¶
Use the grid and the magnetically-induced current density (jB) from a file to calculate the jB-dependent densities, e.g. the NMR shielding density or the magnetizability density. Example:
.READJB file_name ! a file with x,y,z-coordinates of grid points and jB vector field
.GAUGE¶
Specify gauge origin. Example:
.GAUGE 0.0 0.0 0.0
.SMALLAO¶
Force evaluation of small component basis functions.
.OCCUPATION¶
Specify occupation of orbitals. Example (neon atom):
.OCCUPATION
2
1 1-2 1.0
2 1-3 1.0
The first line after the keyword gives the number of subsequent lines to read. In each line, the first number is the fermion ircop. In molecules with inversion symmetry there are two fermion ircops: gerade (1) and ungerade (2). Otherwise there is a single fermion ircop (1). The specification of the fermion ircop is followed by the range of selected orbitals and their occupation. If a single orbital is specified a single number is given instead of the range.
Another example (water):
.OCCUPATION
1
1 1-5 1.0
Another example (nitrogen atom):
.OCCUPATION
2
1 1-2 1.0
2 1-3 0.5
.LONDON¶
Activate LAO contribution. This keyword is followed by a letter “X”, “Y” or “Z” indicating the component of an external perturbing magnetic field. For example:
.LONDON X
.NONE¶
Select “none” connection when plotting LAO perturbed densities.
.NODIRECT¶
Skip direct LAO contribution when plotting perturbed densities.
.NOREORTHO¶
Skip LAO reorthonormalization contribution when plotting perturbed densities.
.NOKAPPA¶
Skip orbital relaxation contribution when plotting perturbed densities. |
Codeforces Round #548 (Div. 2) Finished
Vivek initially has an empty array $$$a$$$ and some integer constant $$$m$$$.
He performs the following algorithm:
Find the expected length of $$$a$$$. It can be shown that it can be represented as $$$\frac{P}{Q}$$$ where $$$P$$$ and $$$Q$$$ are coprime integers and $$$Q\neq 0 \pmod{10^9+7}$$$. Print the value of $$$P \cdot Q^{-1} \pmod{10^9+7}$$$.
The first and only line contains a single integer $$$m$$$ ($$$1 \leq m \leq 100000$$$).
Print a single integer — the expected length of the array $$$a$$$ written as $$$P \cdot Q^{-1} \pmod{10^9+7}$$$.
Input: 1, Output: 1
Input: 2, Output: 2
Input: 4, Output: 333333338
In the first example, since Vivek can choose only integers from $$$1$$$ to $$$1$$$, he will have $$$a=[1]$$$ after the first append operation, and after that quit the algorithm. Hence the length of $$$a$$$ is always $$$1$$$, so its expected value is $$$1$$$ as well.
In the second example, Vivek each time will append either $$$1$$$ or $$$2$$$, so after finishing the algorithm he will end up having some number of $$$2$$$'s (possibly zero), and a single $$$1$$$ in the end. The expected length of the list is $$$1\cdot \frac{1}{2} + 2\cdot \frac{1}{2^2} + 3\cdot \frac{1}{2^3} + \ldots = 2$$$.
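The algorithm itself is elided above, but the notes are consistent with the well-known formulation in which Vivek repeatedly appends a uniformly random integer x from 1 to m and stops as soon as the gcd of the array equals 1. Taking that stopping rule as an assumption, a Monte Carlo sketch:

```python
import math
import random

def simulate_length(m, rng):
    """One run: append uniform x in [1, m] until the gcd of the array is 1.
    The stopping rule is inferred from the examples, not from the (elided)
    problem statement."""
    g = 0          # gcd of the empty array; gcd(0, x) == x
    length = 0
    while True:
        x = rng.randint(1, m)
        g = math.gcd(g, x)
        length += 1
        if g == 1:
            return length

rng = random.Random(0)
print(simulate_length(1, rng))  # always 1: the only choice is x = 1

est = sum(simulate_length(2, rng) for _ in range(20000)) / 20000
print(est)  # close to the expected value 2 from the note above
```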
Consider the following decision problem:
Given: Two (3CNF-)formulas $\varphi_1$, $\varphi_2$ on a shared set $X\cup Y$ of variables ($X$ and $Y$ disjoint).
Question: $\exists$ assignment $\tau_X$ on $X$ such that $\varphi_1$ is satisfiable and $\varphi_2$ is unsatisfiable?
(The "satisfiable" and "unsatisfiable" conditions are relative to the fixed assignment $\tau_X$ from the outer quantifier and can only "choose" the assignment to variables in $Y$.)
This problem is a generalization of the well-known 3-SAT/3-UNSAT problem, which is DP-complete:
Given: Two (3CNF-)formulas $\varphi_1$, $\varphi_2$ on a set $Y$ of variables.
Question: Is $\varphi_1$ satisfiable and $\varphi_2$ unsatisfiable?
The generalization works in the same way in which the NP-complete problem 3-SAT is generalized for higher levels of the polynomial hierarchy. In fact, if the formula $\varphi_1$ and the corresponding condition is removed, this problem coincides with the $\Sigma_2^p$-complete variant of 3-SAT:
Given: A (3CNF-)formula $\varphi_2$ on a set $X\cup Y$ of variables ($X$ and $Y$ disjoint).
Question: $\exists$ assignment $\tau_X$ on $X$ such that $\varphi_2$ is unsatisfiable?
Now I wonder what to call the complexity class that this problem is a member of (and probably complete for). It seems to be $\textit{NP}^{\textit{DP}}$, i.e., $\textit{NP}$ with access to a $\textit{DP}$ oracle, in the same way that $\Sigma_2^p=\textit{NP}^{\textit{coNP}}$ is $\textit{NP}$ with access to a $\textit{coNP}$ oracle. However, I have not found such a class in the literature. Is the problem simply too unnatural to receive any attention? Or can it be simplified such that it falls into another, more common class?
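For tiny instances the decision problem can be checked by brute force, which makes the quantifier structure concrete. A sketch (a formula is a list of clauses; a clause is a list of (variable, polarity) literals; exponential, for illustration only):

```python
from itertools import product

def satisfiable(clauses, fixed, free_vars):
    """Is there an assignment to free_vars (given fixed) satisfying clauses?"""
    for vals in product([False, True], repeat=len(free_vars)):
        assign = dict(fixed, **dict(zip(free_vars, vals)))
        if all(any(assign[v] == pol for v, pol in clause) for clause in clauses):
            return True
    return False

def exists_tx(phi1, phi2, X, Y):
    """Does some assignment t_X on X make phi1 satisfiable and phi2
    unsatisfiable over Y?"""
    for vals in product([False, True], repeat=len(X)):
        tx = dict(zip(X, vals))
        if satisfiable(phi1, tx, Y) and not satisfiable(phi2, tx, Y):
            return True
    return False

# phi1 = (x or y); phi2 = (x) and (not x). phi2 is unsatisfiable for every
# fixed x, and x = true makes phi1 satisfiable, so the answer is True.
phi1 = [[("x", True), ("y", True)]]
phi2 = [[("x", True)], [("x", False)]]
print(exists_tx(phi1, phi2, ["x"], ["y"]))  # True
```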
How many $(a,b)$ for $a,b \in \Bbb{N}$ pairs can satisfy the following equation: $$\log_{2^a}\left(\log_{2^b}\left(2^{1000}\right)\right)=1$$ The answer is $3$, but I can't figure out how to get that answer.
This is my attempt.
$$\log_{2^a}\left(\log_{2^b}\left(2^{1000}\right)\right)=1$$$$\frac{1}{a}\log_2\left(\log_{2^b}\left(2^{1000}\right)\right)=1$$$$\log_2\left(\log_{2^b}\left(2^{1000}\right)\right)=a$$$$\log_{2^b}\left(2^{1000}\right)=2^a$$$$\frac{1}{b}\log_{2}\left(2^{1000}\right)=2^a$$$$\log_{2}\left(2^{1000}\right)=2^ab$$$$2^{1000}=2^{2^ab}$$$$1000=2^ab$$
That's it! This is a dead end. Honestly, this is the best I could do, although I very much doubt that I can get two variables by solving one equation (for that we would need a system of equations!). So, I think I need another approach that will either tell me what $a$ and $b$ can be, or give the answer directly (i.e. the number of possible pairs), but I don't know which.
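The reduction $2^a b = 1000$ obtained above actually settles the problem: the pairs can be enumerated directly. Note that $a \ge 1$, since a logarithm base $2^a$ must differ from $1$. A quick brute-force check (assuming $\Bbb{N}$ starts at $1$):

```python
# Enumerate pairs (a, b) of positive integers with 2**a * b == 1000,
# which is what the derivation above reduces the equation to.
solutions = [(a, 1000 // 2**a)
             for a in range(1, 10)
             if 1000 % 2**a == 0]
print(solutions)       # [(1, 500), (2, 250), (3, 125)]
print(len(solutions))  # 3
```

Since $1000 = 2^3 \cdot 125$, the power $2^a$ divides $1000$ only for $a \le 3$, which is where the answer $3$ comes from.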
I'm new to number theory. This might be kind of a silly question, so I'm sorry if it is.
No apology is necessary since your question is by no means silly. It is not at all surprising that you are puzzled by the cited exposition since it is incredibly sloppy. Kudos to you for reading it very carefully and noticing these problems.
Edit: I'd like to add that this textbook states that if $p$ is a prime number, then so is $-p$. That's where my confusion stems from. The textbook is A Classical Introduction to Modern Number Theory by Ireland and Rosen.
Let's examine closely that initial section on primes and prime factorizations.
On page $1$ begins a section titled "Unique Factorization in $\Bbb Z$" where they briefly review divisibility of "natural numbers $1,2,3\ldots"$ This leads to the following "definition" of a prime:
Numbers that cannot be factored further are called primes. To be more precise, we say that a number $p$ is a prime if its only divisors are $1$ and $p.$
This is imprecise. Is $1$ a prime by this definition? In the next paragraph we find
The first prime numbers are $2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43,\ldots$
So $1$ is not prime. That agrees with modern conventions.
On the next page they segue into factorization in the ring of integers $\Bbb Z$ where they write
If $p$ is a positive prime, $-p$ will also be a prime. We shall not consider $1$ or $-1$ as primes even though they fit the definition.
This poses a few problems. They now claim that $1$
does fit the prior definition of a prime, but they didn't list it above (or explain why it was excluded). Further it implies that $ p = -2$ is a prime but it doesn't fit the above definition (it has divisors $\,\pm1, \pm 2,\,$ not only $1$ and $p$). They don't give any definition of a prime integer (vs. natural).
Readers familiar with basic ring theory and factorization in integral domains will likely have no problem inferring what is intended (the notion of an irreducible or indecomposable element), but any careful reader lacking such background will likely be quite puzzled by these inconsistencies and gaps.
As such, it comes as no surprise that the following proof employing these fuzzy notions may well prove troublesome for readers unfamiliar with the intended notions.
Lemma $1.$ Every nonzero integer can be written as a product of primes.
PROOF $ $ Assume that there is an integer that cannot be written as a product of
primes. Let $N$ be the smallest positive integer with this property. Since $N$
cannot itself be prime we must have $\,N = mn,\,$ where $1 < m,\, n < N.\,$ However, since $m$ and $n$ are positive and smaller than $N$ they must each be a
product of primes. But then so is $N = mn.$ This is a contradiction.
The proof has many problems. It doesn't properly handle the (implied) prime factorization of $\pm1$ and they forgot to handle the possibility that the counterexample is negative (w.l.o.g. reducing to a positive counterexample).
Considering all of the above problems, it is no wonder that you found this proof confusing.
The proof can be given in a more positive way by using mathematical
induction. It is enough to prove the result for all positive integers. $2$ is a
prime. Suppose that $2 < N$ and that we have proved the result for all
numbers $m$ such that $2 \leq m < N$. We wish to show that $N$ is a product of
primes. If $N$ is a prime, there is nothing to do. If $N$ is not a prime, then
$N = mn,$ where $2 \leq m,\, n < N.$ By induction both $m$ and $n$ are products of
primes and thus so is $N.$
Here they've reformulated the induction from negative form - an (infinite)
descent on counterexamples (or a "minimal criminal") - into a positive ascent, i.e. into a complete (or strong) induction, and they give some hint about the reduction to the positive case, but still there is no handling of $\pm1$. What is actually intended can be inferred from the next theorem they present.
Theorem $1.$ For every nonzero integer $n$ there is a prime factorization
$$ n\, =\ (-1)^{e(n)} \prod_p p^{a(p)}$$
with the exponents uniquely determined by $n$. Here $e(n) = 0$ or $1$ depending on whether $n$ is positive or negative and where the product is over all positive primes. The exponents $a(p)$ are nonnegative integers and, of course, $a(p) = 0$ for all but finitely many primes.
That explains how they handle the prime factorization of $\pm1$ and the reduction to positive primes. With that in mind you should be able to fix the proof of the lemma.
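Theorem 1's normal form can be made concrete with a short sketch (trial division, purely for illustration):

```python
def prime_factorization(n):
    """Return (e, factors) with n == (-1)**e * prod(p**a for p, a in
    factors.items()), as in Theorem 1. Uses trial division; n must be a
    nonzero integer."""
    assert n != 0
    e = 0 if n > 0 else 1
    n = abs(n)
    factors = {}
    p = 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return e, factors

print(prime_factorization(-360))  # (1, {2: 3, 3: 2, 5: 1})
print(prime_factorization(1))     # (0, {}) -- the empty product handles +-1
```

Note how $n = \pm1$ is covered by the empty product, exactly the case the lemma's proof glosses over.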
As above, often when there is puzzling exposition in textbooks it can be clarified by reading a bit further to help infer what was intended. But - of course - that is no excuse for sloppy exposition. |
Van der Waals forces are driven by induced electrical interactions between two or more atoms or molecules that are very close to each other. Van der Waals interaction is the weakest of all intermolecular attractions between molecules. However, with a lot of Van der Waals forces interacting between two objects, the interaction can be very strong.
Introduction
Here is a chart to compare the relative weakness of Van der Waals forces to other intermolecular attractions.
Force | Strength (kJ/mol) | Distance (nm)
Van der Waals | 0.4-4.0 | 0.3-0.6
Hydrogen Bonds | 12-30 | 0.3
Ionic Interactions | 20 | 0.25
Hydrophobic Interactions | <40 | varies
Causes of Van der Waals Forces
Quantum Mechanics strongly emphasizes the constant movement of electrons in an atom through the Schrödinger Equation and Heisenberg's Uncertainty Principle. Heisenberg's Uncertainty Principle proposes that the energy of the electron is never zero; therefore, it is constantly moving around its orbital. The square of the wavefunction from the Schrödinger Equation for a particle in a box suggests that the electron (particle) can be found anywhere in the orbital of the atom (box).
These two important aspects of Quantum Mechanics strongly suggest that the electrons in an atom are constantly moving, so dipoles are likely to occur. A dipole is a molecule or atom with equal and opposite electrical charges separated by a small distance.
The electrons may, for example, be found in this state:
This is how spontaneous (or instantaneous) dipoles occur. When groups of electrons move to one end of the atom, they create a dipole. These groups of electrons are constantly moving, so they shift from one end of the atom to the other and back again continuously. Therefore, the opposite state is equally probable.
Opposite state due to fluctuation of dipoles:
Dipole-Dipole Interaction
Dipole-Dipole interactions occur between molecules that have permanent dipoles; these molecules are also referred to as polar molecules. The figure below shows the electrostatic interaction between two dipoles.
The potential energy of the interaction for the top pair of the image above is represented by the equation:
\[ V = -\dfrac{2\mu_A\mu_B}{4\pi\epsilon_o r^3} \tag{1} \]
The potential energy of the interaction for the bottom pair is represented by the equation:
\[ V = -\dfrac{\mu_A\mu_B}{4\pi\epsilon_o r^3} \tag{2}\]
with
\( V \) is the potential energy \( \mu \) is the dipole moment \( \epsilon_o \) is the vacuum permittivity \( r \) is the length between the two nuclei
The negative sign indicates that energy is released from the system, because energy is released when bonds are formed, even weak bonds. The negative sign also indicates that the interaction is attractive (a positive sign would indicate repulsion between the two molecules). If the conditions of the two pairs are the same except for their orientation, the first pair will always have the lower potential energy, because both the negative and positive ends are involved in the interaction.
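Equation (1) can be evaluated numerically to get a feel for the magnitudes. A minimal sketch in SI units; the dipole moments and separation below are illustrative values, not taken from the text:

```python
import math

# V = -2*mu_A*mu_B / (4*pi*eps0*r**3), equation (1) above, in SI units.
EPS0 = 8.8541878128e-12   # vacuum permittivity, C^2 J^-1 m^-1
DEBYE = 3.33564e-30       # one debye, in C m
AVOGADRO = 6.02214076e23  # mol^-1

mu_a = mu_b = 1.0 * DEBYE  # two 1 D dipoles (illustrative)
r = 0.5e-9                 # separation of 0.5 nm (illustrative)

V = -2 * mu_a * mu_b / (4 * math.pi * EPS0 * r**3)
print(f"{V * AVOGADRO / 1000:.2f} kJ/mol")  # about -0.96 kJ/mol
```

The result lands in the 0.4-4.0 kJ/mol van der Waals range quoted in the table above, which is a useful sanity check.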
Induced Dipoles
An induced dipole moment is a temporary condition during which a neutral nonpolar atom (e.g. helium) undergoes a separation of charges due to its environment. When an atom with an instantaneous dipole approaches a neighboring atom, it can induce a dipole in that atom as well. The neighboring atom is then considered to have an induced dipole moment.
Even though these two atoms are interacting with each other, their dipoles may still fluctuate. However, they must fluctuate in synchrony in order to maintain their dipoles and remain attracted to each other. Result of synchronized fluctuation of dipoles:
The potential energy representing the dipole-induced dipole interaction is:
\[ V = -\dfrac{\alpha\mu^2}{4\pi\epsilon_o r^6} \tag{4}\]
\( \alpha \) = polarizability of the nonpolar molecule
Polarizability describes how easily the electron density of an atom or a molecule can be distorted by an external electric field.
Spontaneous Dipole-Induced Dipole Interaction
Spontaneous dipole-induced dipole interactions are also known as dispersion or London forces (named after the German physicist Fritz London). They are large networks of intermolecular forces between nonpolar and uncharged molecules and atoms (e.g. alkanes, noble gases, and halogens). Molecules that have induced dipoles may also induce neighboring molecules to have dipole moments, so a large network of induced dipole-induced dipole interactions may exist. The image below illustrates such a network.
The potential energy of an induced dipole-induced dipole interaction is represented by this equation:
\[ V = -\dfrac{3}{2}\dfrac{I_aI_b}{I_a + I_b}\dfrac{\alpha_a\alpha_b}{r^6} \tag{5}\]
\( I \) = The first ionization energy of the molecule
The radius is a huge determinant of the size of the potential energy, since the potential energy is inversely proportional to \( r^6 \): a small increase in the radius greatly decreases the magnitude of the potential energy of the interaction.
References
Atkins, Peter and Julio de Paula. Physical Chemistry for the Life Sciences. Oxford, UK: Oxford University Press. 2006. 458.
Chang, Raymond. Physical Chemistry for the Biosciences. Sausalito, CA: Edwards Brothers, Inc. 2005. 492-498.
Garrett, Reginald H. and Charles M. Grisham. Biochemistry. Belmont, CA: Thomas Brooks/Cole. 2005. 13.
Petrucci, Ralph H. et al. General Chemistry. Eighth Edition. Upper Saddle River, NJ: Prentice-Hall, Inc. 2002. 497-500.
Contributors
Justin Than (UCD)
Okay, now I've rather carefully discussed one example of \(\mathcal{V}\)-enriched profunctors, and rather sloppily discussed another. Now it's time to build the general framework that can handle both these examples.
We can define \(\mathcal{V}\)-enriched categories whenever \(\mathcal{V}\) is a monoidal preorder: we did that way back in Lecture 29. We can also define \(\mathcal{V}\)-enriched functors whenever \(\mathcal{V}\) is a monoidal preorder: we did that in Lecture 31. But to define \(\mathcal{V}\)-enriched profunctors, we need \(\mathcal{V}\) to be a bit better. We can see why by comparing our examples.
Our first example involved \(\mathcal{V} = \textbf{Bool}\). A
feasibility relation
$$ \Phi : X \nrightarrow Y $$ between preorders is a monotone function
$$ \Phi: X^{\text{op}} \times Y\to \mathbf{Bool} . $$ We shall see that a feasibility relation is the same as a \( \textbf{Bool}\)-enriched profunctor.
Our second example involved \(\mathcal{V} = \textbf{Cost}\). I said that a \( \textbf{Cost}\)-enriched profunctor
$$ \Phi : X \nrightarrow Y $$ between \(\mathbf{Cost}\)-enriched categories is a \( \textbf{Cost}\)-enriched functor
$$ \Phi: X^{\text{op}} \times Y \to \mathbf{Cost} $$ obeying some conditions. But I let you struggle to guess those conditions... without enough clues to make it easy!
To fit both our examples in a general framework, we start by considering an arbitrary monoidal preorder \(\mathcal{V}\). \(\mathcal{V}\)-enriched profunctors will go between \(\mathcal{V}\)-enriched categories. So, let \(\mathcal{X}\) and \(\mathcal{Y}\) be \(\mathcal{V}\)-enriched categories. We want to make this definition:
Tentative Definition. A \(\mathcal{V}\)-enriched profunctor
$$ \Phi : \mathcal{X} \nrightarrow \mathcal{Y} $$ is a \(\mathcal{V}\)-enriched functor
$$ \Phi: \mathcal{X}^{\text{op}} \times \mathcal{Y} \to \mathcal{V} .$$ Notice that this handles our first example very well. But some questions appear in our second example - and indeed in general. For our tentative definition to make sense, we need three things:
We need \(\mathcal{V}\) to itself be a \(\mathcal{V}\)-enriched category.
We need any two \(\mathcal{V}\)-enriched categories to have a 'product', which is again a \(\mathcal{V}\)-enriched category.
We need any \(\mathcal{V}\)-enriched category to have an 'opposite', which is again a \(\mathcal{V}\)-enriched category.
Items 2 and 3 work fine whenever \(\mathcal{V}\) is a commutative monoidal poset. We'll see why in Lecture 62.
Item 1 is trickier, and indeed it sounds rather scary. \(\mathcal{V}\) began life as a humble monoidal preorder. Now we're wanting it to be
enriched in itself! Isn't that circular somehow?
Yes! But not in a bad way. Category theory often eats its own tail, like the mythical ouroboros, and this is an example.
To get \(\mathcal{V}\) to become a \(\mathcal{V}\)-enriched category, we'll demand that it be 'closed'. For starters, let's assume it's a monoidal
poset, just to avoid some technicalities.

Definition. A monoidal poset is closed if for all elements \(x,y \in \mathcal{V}\) there is an element \(x \multimap y \in \mathcal{V}\) such that
$$ x \otimes a \le y \text{ if and only if } a \le x \multimap y $$ for all \(a \in \mathcal{V}\).
This will let us make \(\mathcal{V}\) into a \(\mathcal{V}\)-enriched category by setting \(\mathcal{V}(x,y) = x \multimap y \). But first let's try to understand this concept a bit!
We can check that our friend \(\mathbf{Bool}\) is closed. Remember, we are making it into a monoidal poset using 'and' as its binary operation: its full name is \( (\lbrace \text{true},\text{false}\rbrace, \wedge, \text{true})\). Then we can take \( x \multimap y \) to be 'implication'. More precisely, we say \( x \multimap y = \text{true}\) iff \(x\) implies \(y\). Even more precisely, we define:
$$ \text{true} \multimap \text{true} = \text{true} $$$$ \text{true} \multimap \text{false} = \text{false} $$$$ \text{false} \multimap \text{true} = \text{true} $$$$ \text{false} \multimap \text{false} = \text{true} . $$
Puzzle 188. Show that with this definition of \(\multimap\) for \(\mathbf{Bool}\) we have
$$ a \wedge x \le y \text{ if and only if } a \le x \multimap y $$ for all \(a,x,y \in \mathbf{Bool}\).
We can also check that our friend \(\mathbf{Cost}\) is closed! Remember, we are making it into a monoidal poset using \(+\) as its binary operation: its full name is \( ([0,\infty], \ge, +, 0)\). Then we can define \( x \multimap y \) to be 'subtraction'. More precisely, we define \(x \multimap y\) to be \(y - x\) if \(y \ge x\), and \(0\) otherwise.
Puzzle 189. Show that with this definition of \(\multimap\) for \(\mathbf{Cost}\) we have
$$ a + x \le y \text{ if and only if } a \le x \multimap y . $$But beware. We have defined the ordering on \(\mathbf{Cost}\) to be the
opposite of the usual ordering of numbers in \([0,\infty]\). So, \(\le\) above means the opposite of what you might expect!
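Puzzles 188 and 189 can be spot-checked mechanically. A minimal sketch in Python: the finite sample of \(\mathbf{Cost}\) values is an illustrative assumption (the real \(\mathbf{Cost}\) is all of \([0,\infty]\)), and the hom uses a strict inequality to sidestep the indeterminate \(\infty - \infty\) case:

```python
from itertools import product

# Bool: ({false, true}, and, true), with x -o y being implication.
BOOL = [False, True]
def bool_leq(a, b): return (not a) or b        # false <= true
def bool_tensor(a, b): return a and b
def bool_hom(x, y): return (not x) or y        # implication, as defined above

# Cost: ([0, inf], >=, +, 0); note the order is the REVERSE of the usual one.
INF = float("inf")
COST = [0, 1, 2, 3, INF]
def cost_leq(a, b): return a >= b              # reversed order!
def cost_tensor(a, b): return a + b
def cost_hom(x, y):
    # truncated subtraction: y - x if y > x, else 0
    # (strict > avoids the indeterminate inf - inf case)
    return y - x if y > x else 0

# Verify  a (tensor) x <= y  iff  a <= x -o y  on every sampled triple.
for a, x, y in product(BOOL, repeat=3):
    assert bool_leq(bool_tensor(a, x), y) == bool_leq(a, bool_hom(x, y))
for a, x, y in product(COST, repeat=3):
    assert cost_leq(cost_tensor(a, x), y) == cost_leq(a, cost_hom(x, y))
print("adjunction verified on all sampled triples")
```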
Next, two more tricky puzzles. Next time I'll show you in general how a closed monoidal poset \(\mathcal{V}\) becomes a \(\mathcal{V}\)-enriched category. But to appreciate this, it may help to try some examples first:
Puzzle 190. What does it mean, exactly, to make \(\mathbf{Bool}\) into a \(\mathbf{Bool}\)-enriched category? Can you see how to do this by defining
$$ \mathbf{Bool}(x,y) = x \multimap y $$ for all \(x,y \in \mathbf{Bool}\), where \(\multimap\) is defined to be 'implication' as above?
Puzzle 191. What does it mean, exactly, to make \(\mathbf{Cost}\) into a \(\mathbf{Cost}\)-enriched category? Can you see how to do this by defining
$$ \mathbf{Cost}(x,y) = x \multimap y $$ for all \(x,y \in \mathbf{Cost}\), where \(\multimap\) is defined to be 'subtraction' as above?
Note: for Puzzle 190 you might be tempted to say "a \(\mathbf{Bool}\)-enriched category is just a preorder, so I'll use that fact here". However, you may learn more if you go back to the general definition of enriched category and use that! The reason is that we're trying to understand some general things by thinking about two examples.
Puzzle 192. The definition of 'closed' above is an example of a very important concept we keep seeing in this course. What is it? Restate the definition of closed monoidal poset in a more elegant, but equivalent, way using this concept. |
Double Integral: Volume Under Surface
In this lesson, we'll take a look at double integrals and see that they aren't that much
more complicated than regular integrals. Regular integrals (that is, integrals of the form \(\int{f(x)dx}\)) give the area underneath a curve. In multi-variable calculus, double integrals are written as
$$F(x,y)=\iint f(x,y)\,dx\,dy,$$
and represent the volume underneath the surface \(f(x,y)\). In this lesson, we'll prove that the expression \(\iint f(x,y)\,dx\,dy\) gives the volume underneath \(f(x,y)\) by deriving it. In an earlier lesson we proved that \(\int{f(x)dx}\) gives the area underneath \(f(x)\) by deriving \(\int{f(x)dx}=\lim_{Δx→0}\sum_i f(x_i)Δx_i\) and realizing from the derivation that we took the infinite sum of the areas of infinitely many, infinitesimally skinny rectangles to get the area; analogously, in this lesson, we'll derive
$$\iint f(x,y)\,dx\,dy=\lim_{Δx→0,Δy→0}\sum_{i,j}f(x_i,y_j)ΔxΔy$$
and prove to ourselves (through the derivation) that this expression represents the volume underneath the surface \(f(x,y)\).
Let \(f(x,y)\) be any arbitrary function that is smooth and continuous, as illustrated in Figure 1(b) and Figure 2. This derivation will be almost identical to the derivation of \(\int{f(x)dx}\), except that we must take the infinite sum of infinitely many infinitesimally skinny columns, not rectangles. How do we construct a very skinny column? As you can see in Figure 2, I have subdivided the interval \(x_n-x_1\) on the \(x\)-axis \(n\) times and the interval \(y_m-y_1\) on the \(y\)-axis \(m\) times. To construct a very skinny volume element \(ΔV\) at the coordinate \((1,1)\), we can take the product
$$ΔV_{1,1}=f(x_1,y_1)Δx_1Δy_1,$$
where \(Δx_1=x_2-x_1\) is the width of the column, \(Δy_1=y_2-y_1\) is the depth, and \(f(x_1,y_1)\) is the height of the column. To find the volume \(ΔV_{2,2}\) of the column located at the coordinate point \((2,2)\) (see Figure 2), we just need to take the product
$$ΔV_{2,2}=f(x_2,y_2)Δx_2Δy_2,$$
where \(Δx_2=x_3-x_2\) is the width of the column, \(Δy_2=y_3-y_2\) is the depth, and \(f(x_2,y_2)\) is the column's height. But suppose that we wanted to represent the volume of
any column underneath \(f(x,y)\); how would we do that? We'll let \(x_i\) be any arbitrary \(x\)-value along the interval \(x_n-x_1\) such that \(i=1,...,n\) where \(n\) is any integer and we'll also let \(y_j\) be any arbitrary \(y\)-value along the interval \(y_m-y_1\) such that \(j=1,...,m\). The volume \(ΔV_{i,j}\) of any column located at some arbitrary coordinate value \((x_i,y_j)\) is given by
$$ΔV_{i,j}=f(x_i,y_j)Δx_iΔy_j.$$
To approximate the volume underneath the surface \(f(x,y)\), let's take the Riemann sum of every volume element \(ΔV_{i,j}\) to get
$$\text{Volume underneath f(x,y)}≈\sum_{i,j}^{n,m}f(x_i,y_j)Δx_iΔy_j.$$
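The Riemann sum above is easy to try numerically. A minimal sketch, using the illustrative choice \(f(x,y)=xy\) on the unit square (whose exact volume integral is \(1/4\)) and sampling each column's height at its lower-left corner:

```python
def riemann_volume(f, x0, x1, y0, y1, n, m):
    """Approximate the volume under f over [x0,x1] x [y0,y1] with an
    n-by-m grid of columns, sampling f at the lower-left corner of each."""
    dx = (x1 - x0) / n
    dy = (y1 - y0) / m
    total = 0.0
    for j in range(m):
        for i in range(n):
            total += f(x0 + i * dx, y0 + j * dy) * dx * dy
    return total

def f(x, y):
    return x * y

for n in (10, 100, 1000):
    print(n, riemann_volume(f, 0.0, 1.0, 0.0, 1.0, n, n))
# The approximations approach the exact volume 1/4 as n grows.
```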
Let me take a moment to explain the summation notation for a sum with two indices (in this example, \(i\) and \(j\)). The way that this sum works is by first taking the sum from \(i=1\) to \(i=n\) (while keeping \(j\) a constant at \(j=1\)) to get
$$f(x_1,y_1)Δx_1Δy_1+...+f(x_n,y_1)Δx_nΔy_1.$$
This sum approximates the volume underneath the "edge" of the surface as illustrated in Figure 3. To estimate the volume underneath the adjacent portion of the surface (the highlighted portion represented by \(R_2\)), we need to find the total volume of the adjacent row of columns underneath \(R_2\). This is accomplished by taking the sum from \(i=1\) to \(i=n\) (keeping \(j=2\)) to get
$$f(x_1,y_2)Δx_1Δy_2+...+f(x_n,y_2)Δx_nΔy_2.$$
The expression above approximates the volume underneath the region of the surface designated by \(R_2\). Of course, to approximate the volume underneath both \(R_1\) and \(R_2\) we must add both expressions together to get
$$f(x_1,y_1)Δx_1Δy_1+...+f(x_n,y_1)Δx_nΔy_1$$
$$+f(x_1,y_2)Δx_1Δy_2+...+f(x_n,y_2)Δx_nΔy_2.$$
The expression approximating the volume underneath \(R_1\), \(R_2\), and \(R_3\) is given by
$$f(x_1,y_1)Δx_1Δy_1+...+f(x_n,y_1)Δx_nΔy_1$$
$$+f(x_1,y_2)Δx_1Δy_2+...+f(x_n,y_2)Δx_nΔy_2$$
$$+f(x_1,y_3)Δx_1Δy_3+...+f(x_n,y_3)Δx_nΔy_3.$$
And so on. To approximate the volume underneath the entire surface, we just take the sum
$$f(x_1,y_1)Δx_1Δy_1+...+f(x_n,y_1)Δx_nΔy_1$$
$$+f(x_1,y_2)Δx_1Δy_2+...+f(x_n,y_2)Δx_nΔy_2$$
$$+...+f(x_1,y_m)Δx_1Δy_m+...+f(x_n,y_m)Δx_nΔy_m.$$
The expression, \(\sum_{i,j}^{n,m}f(x_i,y_j)Δx_iΔy_j\), is just a short-hand way of representing the equation above—that's all it is. Also, we can rewrite this sum as
$$\sum_{i,j}^{n,m}f(x_i,y_j)Δx_iΔy_j=\sum_{j=1}^m(\sum_{i=1}^nf(x_i,y_j)Δx_iΔy_j).$$
The equation on the right-hand side is basically the same thing as the one on the left-hand side except the order of addition of each volume element is changed. To evaluate the sum on the right-hand side, we first evaluate the sum
$$\sum_{i=1}^nf(x_i,y_j)Δx_iΔy_j=f(x_1,y_j)Δx_1Δy_j+...+f(x_n,y_j)Δx_nΔy_j.$$
Then, we just take a sum of this result from \(j=1\) to \(j=m\) to get
$$\sum_{j=1}^m\sum_{i=1}^nf(x_i,y_j)Δx_iΔy_j=f(x_1,y_1)Δx_1Δy_1+...+f(x_n,y_1)Δx_nΔy_1$$
$$+f(x_1,y_2)Δx_1Δy_2+...+f(x_n,y_2)Δx_nΔy_2+...+f(x_1,y_m)Δx_1Δy_m+...+f(x_n,y_m)Δx_nΔy_m.$$
Although the equation above is pretty ugly to look at, if we change the order of the sum of each of the terms, we'll end up with the same sum as \(\sum_{i,j}^{n,m}f(x_i,y_j)Δx_iΔy_j\). Both sums are totally equivalent. Recall that to get the
exact area underneath an arbitrary curve \(f(x)\), we had to let the width of each rectangle approach zero and the sum become infinite; analogously, in this situation, we must let the width and depth of each column approach zero (that way, they are infinitesimally skinny) and the sum become infinite (meaning, we'll be taking the sum of the volumes of infinitely many columns) to find the exact volume underneath the surface \(f(x,y)\). Doing this, we have
$$\lim_{Δx_i→0,Δy_j→0}\sum_{j=1}^m\sum_{i=1}^nf(x_i,y_j)Δx_iΔy_j.$$
As the number of columns becomes infinite, the variables \(x_i\) and \(y_j\) become continuous (\(x_i→x\) and \(y_j→y\)), the width and depth of each column become infinitesimal (\(Δx→dx\) and \(Δy→dy\)), and the two finite sums turn into integrals (\(\sum_{i=1}^n→∫\) and \(\sum_{j=1}^m→∫\)). Thus, the equation above becomes\(^1\)
$$∫∫f(x,y)dxdy≡\lim_{Δx_i→0,Δy_j→0}\sum_{j=1}^m\sum_{i=1}^nf(x_i,y_j)Δx_iΔy_j.$$
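To make the limit concrete, here is a small numerical sketch (the example function \(f(x,y)=xy\) is a hypothetical choice, not from the text): a midpoint double Riemann sum over the unit square, which should converge to the exact double integral \(1/4\).

```python
# Midpoint double Riemann sum for f(x, y) = x * y over [0, 1] x [0, 1].
# The exact value of the double integral is 1/4.
def double_riemann_sum(f, n, m):
    dx, dy = 1.0 / n, 1.0 / m
    total = 0.0
    for j in range(m):                # outer sum over y (index j)
        yj = (j + 0.5) * dy           # midpoint of the j-th y subinterval
        for i in range(n):            # inner sum over x (index i)
            xi = (i + 0.5) * dx
            total += f(xi, yj) * dx * dy
    return total

approx = double_riemann_sum(lambda x, y: x * y, 200, 200)
print(abs(approx - 0.25) < 1e-6)  # True: the sum converges to the integral
```

Swapping the loop order leaves the total unchanged, which is exactly the point of the iterated-sum identity above.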
This article is licensed under a CC BY-NC-SA 4.0 license.
Notes
1. I have also inserted the symbol "\(≡\)" to represent the fact that the double integral is defined as the quantity on the right-hand side of the equation above.
If you go through the process of non-dimensionalizing the equations, the math becomes clearer. Start with the momentum equation (ignoring viscous forces, which aren't important for this analysis):$$\frac{\partial u_i}{\partial t} + \frac{\partial u_i u_j}{\partial x_j} = -\frac{1}{\rho} \frac{\partial p}{\partial x_i} + g$$
Then introduce relevant scales to non-dimensionalize things: $\bar{u}_i = u_i/u_0$, $\bar{x}_i = x_i/L$, $\bar{\rho} = \rho/\rho_0$, $\bar{g} = g/g_0$, $\tau = u_0 t/L$ and $\bar{p} = p/p_0$, and you get:
$$\frac{\partial \bar{u}_i}{\partial \tau} + \frac{\partial \bar{u}_i \bar{u}_j}{\partial \bar{x_j}} = -\frac{\text{Eu}}{\bar{\rho}} \frac{\partial \bar{p}}{\partial \bar{x}_i} + \frac{1}{\text{Fr}^2}\bar{g}$$
$\text{Eu} = \frac{p_0}{\rho_0 u_0^2}$ is the Euler number and $\text{Fr} = \frac{u_0}{\sqrt{g_0 L}}$ is the Froude number.
The Froude number is the ratio of convective forces to gravity forces. When convective forces are much larger than gravity forces, the Froude number is large, so $\frac{1}{\text{Fr}^2} \ll 1$ and the gravity term can be neglected relative to the convective terms. This is how we can mathematically justify dropping the gravity term when the convective forces dominate.
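For a quick numerical feel (the scales below are assumed for illustration, not taken from the question), one can compute the Froude number and the resulting size of the gravity coefficient for a fast air flow:

```python
import math

# Illustrative (assumed) scales for a fast air flow over a 1 m body.
u0 = 100.0   # velocity scale, m/s
L = 1.0      # length scale, m
g0 = 9.81    # gravitational acceleration, m/s^2

Fr = u0 / math.sqrt(g0 * L)    # Froude number
gravity_coeff = 1.0 / Fr**2    # coefficient multiplying the g-bar term

print(Fr, gravity_coeff)  # Fr ~ 32, coefficient ~ 1e-3: gravity is negligible
```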
Many times during the course of the Chemistry 105 laboratory you will be asked to report an average, relative deviation, and a standard deviation. You may also have to analyze multiple trials to decide whether or not a certain piece of data should be discarded. This section describes these procedures.
Average and Standard Deviation
The average or mean of the data set, \(\bar{x}\), is defined by:
\(\bar{x} = \dfrac{\sum_{i=1}^N x_i}{N}\)
where \(x_i\) is the result of the \(i\)th measurement, \(i = 1,…,N\). The standard deviation, σ, measures how closely values are clustered about the mean. The standard deviation for small samples is defined by:
\( \sigma = \sqrt{\dfrac{\sum_{i=1}^N (x_i-\bar{x})^2}{N}} \)
The smaller the value of σ, the more closely packed the data are about the mean, and we say that the measurements are precise. In contrast, the measurements have high accuracy if the mean is close to the true result (presuming we know that information). It is easy to tell if your measurements are precise, but it is often difficult to tell if they are accurate.
Relative Deviation
The relative average deviation, d, like the standard deviation, is useful to determine how data are clustered about a mean. The advantage of a relative deviation is that it incorporates the relative numerical magnitude of the average. The relative average deviation, d, is calculated in the following way.
Calculate the average, \(\bar{x}\), with all data that are of high quality. Calculate the deviation, \(d_i=|x_i-\bar{x}|\), of each datum. Calculate the average of these deviations. Divide that average of the deviations by the mean of the data. This number is generally expressed as parts per thousand (ppt); you can do this by simply multiplying by 1000.
Report the relative average deviation (ppt) in addition to the standard deviation in all experiments.
Analysis of Poor Data
Sometimes a single piece of data is inconsistent with other data. You need a method to determine, or test, if the datum in question is so poor that it should be excluded from your calculations. Many tests have been developed for this purpose. One of the most common is what is known as the Q test (section 4-3). While it is very popular, it is not particularly useful for the small samples you will have (you will generally only do triplicate trials). Instead you will use what is commonly known as the 4d test. To use this test you need to follow the procedure outlined below.
Calculate the average, \(\bar{x}\). Calculate the deviation of each datum and the average deviation, \(d\). Calculate the deviation of the "suspect" datum from the mean you calculated above, \(d_s\). If this deviation is greater than 4 times the average deviation, then you should discard this datum (if \(d_s > 4d\), then discard).
Keep in mind that you also always have the right to discard a piece of data that you are sure is of low quality: that is, when you are aware of a poor collection. However, beware of discarding data that do not meet the 4d test. You may be discarding your most accurate determination!
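These procedures are straightforward to script. Here is a small sketch (with hypothetical triplicate data) that computes the mean, the standard deviation with the \(N\) denominator used above, the relative average deviation in ppt, and the 4d check on a suspect datum:

```python
import math

data = [10.10, 10.15, 10.12]   # hypothetical triplicate results
suspect = 10.55                # a fourth, suspicious datum

mean = sum(data) / len(data)
# population-style standard deviation (N denominator, as defined above)
sigma = math.sqrt(sum((x - mean) ** 2 for x in data) / len(data))

# relative average deviation, in parts per thousand (ppt)
avg_dev = sum(abs(x - mean) for x in data) / len(data)
rel_dev_ppt = avg_dev / mean * 1000

# 4d test: discard the suspect datum if its deviation exceeds 4 * avg_dev
d_s = abs(suspect - mean)
discard = d_s > 4 * avg_dev

# mean ~ 10.1233, sigma ~ 0.0205, rel. dev. ~ 1.76 ppt, discard = True
print(round(mean, 4), round(sigma, 4), round(rel_dev_ppt, 2), discard)
```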
Other important concepts and procedures
Associated topics you should be familiar with from your Chem 105 class:
Normal error curve: histogram of an infinitely large number of good measurements, which usually follows a Gaussian distribution
Confidence limit (95%)
Linear least squares fit
Residual sum of squares
Correlation coefficient
The Singular Value Decomposition (SVD) of a matrix is$$A_{m\times n} = U_{m\times m}\Lambda_{m\times n} V_{n\times n}'$$where $U$ and $V$ are orthogonal matrices and $\Lambda$ has $(i, i)$ entry $\lambda_i \geq 0$ for $i = 1, 2, \cdots, \min(m, n)$ and the other entries are zero. Then the left singular vectors $U$ (for the rows of the matrix) and the right singular vectors $V$ (for the columns of the matrix) can be plotted on the same graph, called a bi-plot.
I'm wondering how to do the SVD of a three-dimensional array and plot the singular vectors on the same graph, like a bi-plot.
Thanks
The capacitance of any capacitor is defined as \(C≡\frac{Q}{ΔV}\). In this lesson, we'll be interested in finding the capacitance of what is known as a parallel-plate capacitor. A parallel-plate capacitor is a capacitor whose conductors are two thin plates which are parallel to one another and separated by an insulator, as illustrated in Figure 1. We'll assume that the two conductors are separated by a vacuum.
Let's try to find an expression for the voltage \(ΔV\) across the capacitor and then, after that, we'll substitute it into the equation \(C=\frac{Q}{ΔV}\). The general expression for the voltage between any two points is given by
$$ΔV=\int_B^A\vec{E}·d\vec{r}.\tag{1}$$
For our purposes we'll let the points \(A\) and \(B\) be the two points illustrated in Figure 1. We showed in a previous lesson (using Gauss's law) that the electric field in between two parallel plates is a constant given by \(\vec{E}=\frac{σ}{ε_0}\hat{i}\), where \(σ\) is the magnitude of the charge density on each plate and \(\hat{i}\) is a unit vector pointing from the negatively charged plate to the positively charged plate along the x-axis as illustrated in Figure 1. Substituting \(\vec{E}=\frac{σ}{ε_0}\hat{i}\) into Equation (1), we have
$$ΔV=\frac{σ}{ε_0}\int_B^A\hat{i}·d\vec{r}.\tag{2}$$
Since the path from \(B\) to \(A\) runs parallel to the \(x\)-axis, each displacement \(d\vec{r}\) points along \(\hat{i}\), so \(\hat{i}·d\vec{r}=dr\) and Equation (2) simplifies to
$$ΔV=\frac{σ}{ε_0}\int_B^Adr=\frac{σd}{ε_0},\tag{3}$$
where \(d\) is the separation between the plates.
Substituting Equation (3) into \(C=\frac{Q}{ΔV}\), and using \(σ=Q/A\) where \(A\) is the area of each plate, we find that the capacitance of a parallel-plate capacitor (where the conductors are separated by vacuum) is given by
$$C=\frac{Q}{σd/ε_0}=\frac{ε_0A}{d}.$$
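Plugging in numbers (the geometry below is assumed purely for illustration): using the standard parallel-plate result \(C=ε_0A/d\), plates of area 1 cm² separated by 1 mm of vacuum give just under a picofarad.

```python
# Standard parallel-plate result C = eps0 * A / d, with illustrative
# (assumed) dimensions: 1 cm^2 plates separated by 1 mm of vacuum.
EPS0 = 8.854e-12   # vacuum permittivity, F/m

A = 1e-4           # plate area, m^2 (1 cm^2)
d = 1e-3           # plate separation, m (1 mm)

C = EPS0 * A / d
print(C)  # ~8.85e-13 F, i.e. just under a picofarad
```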
Let $\Omega \subseteq \mathbb{R}^n$ be open. For any compactly supported distribution $u \in \mathcal{E}'(\Omega)$, the distributional Fourier transform $\hat{u}$ is in fact a $C^\infty$ function on $\mathbb{R}^n$ with formula given by $$\hat{u}(\xi) = \langle u(x), \chi(x) e^{i x \cdot \xi} \rangle, $$
where $\chi$ is any element of $C_0^\infty(\mathbb{R}^n)$ such that $\chi \equiv 1$ on the support of $u$.
Suppose we have a sequence $\{u_j\}_{j = 1}^\infty \subseteq \mathcal{E}'(\Omega)$ along with $u \in \mathcal{E}'(\Omega)$ such that the supports of the $u_j$ and of $u$ are contained in a fixed compact set $K \subseteq \Omega$. Moreover, suppose that $$ \langle u_j, \varphi \rangle \to \langle u, \varphi \rangle \qquad \text{as } j \to \infty $$ for all $\varphi$ in the Schwartz Class $\mathcal{S}(\mathbb{R}^n)$.
I would like to know, given any multi-index $\alpha$ and any compact $K' \subseteq \mathbb{R}^n$, must it be the case that the $D^\alpha_\xi \hat{u}_j$ converge uniformly to $D^\alpha_\xi \hat{u}$ on $K'$?
Note: it can be shown that $D_\xi^\alpha \hat{u}(\xi) = \langle u, (ix)^\alpha \chi(x) e^{ix\cdot \xi} \rangle$ for all multi-indices $\alpha$.
Edit: Here is my incomplete attempt to show that the statement is true. I guess that the best way to go is to try to use the uniform boundedness principle, but I may be wrong.
Every element of $\mathcal{E}'(\Omega)$ is a continuous linear map from the Fréchet space $C^\infty(\Omega)$ to $\mathbb{C}$. Note that the family of seminorms $\rho_m$, $m \in \mathbb{N}$, that defines the Fréchet space topology on $C^\infty(\Omega)$ is
$$\rho_m(\varphi) = \sup \{|\partial^\beta \varphi(x)| : x \in K_m, \text{ } \beta \le m \}, \qquad \varphi \in C^\infty(\Omega), $$ where $\{K_m\}_{m= 1}^\infty$ is a compact exhaustion of $\Omega$.
The convergence criterion supposed above guarantees that the continuous linear operators $T_j \equiv u_j-u : C^\infty(\Omega) \to \mathbb{C}$ have the property
$$\sup_j |T_j \varphi| < \infty, \qquad \text{for all $\varphi \in C^\infty(\Omega)$.}$$
Therefore, the uniform boundedness principle for Fréchet spaces implies that the family $T_j$ is equicontinuous. This means that for every $\varepsilon > 0$ there exists $\delta_\varepsilon > 0$ and an $m_\varepsilon \in \mathbb{N}$ so that
$$\rho_{m_\varepsilon}(\varphi) < \delta_\varepsilon \implies |T_j(\varphi)| < \varepsilon, \qquad \text{for all $j$.}$$
So, for given $\varepsilon > 0$, we want to show that there exists $J \in \mathbb{N}$ so that
$$|D^\alpha_\xi (\hat{u}_j - \hat{u})(\xi) | = |T_j((ix)^\alpha \chi(x) e^{i x \cdot \xi})| < \varepsilon \qquad \text{all } \xi \in K' \text{ and all } j \ge J. $$
At this stage, my intuition is that the convergence in $\mathcal{E}'(\Omega)$ which I suppose is not strong enough to guarantee the convergence in $C^\infty(\mathbb{R}^n)$ I am seeking, because I do not have an effective way to make $(ix)^\alpha \chi(x) e^{i x \cdot \xi}$ small (uniformly in $\xi \in K'$) in the $\rho_{m_\varepsilon}$ seminorm.
Although I have yet to come up with a counterexample.
Hints or solutions are greatly appreciated!
OpenCV 4.0.1
Open Source Computer Vision
class cv::DenseOpticalFlow
class cv::DISOpticalFlow: DIS optical flow algorithm.
class cv::FarnebackOpticalFlow: Class computing a dense optical flow using Gunnar Farneback's algorithm.
class cv::KalmanFilter: Kalman filter class.
class cv::SparseOpticalFlow: Base interface for sparse optical flow algorithms.
class cv::SparsePyrLKOpticalFlow: Class used for calculating a sparse optical flow.
class cv::VariationalRefinement: Variational optical flow refinement.
enum {
cv::OPTFLOW_USE_INITIAL_FLOW = 4,
cv::OPTFLOW_LK_GET_MIN_EIGENVALS = 8,
cv::OPTFLOW_FARNEBACK_GAUSSIAN = 256
}
enum {
cv::MOTION_TRANSLATION = 0,
cv::MOTION_EUCLIDEAN = 1,
cv::MOTION_AFFINE = 2,
cv::MOTION_HOMOGRAPHY = 3
}
int cv::buildOpticalFlowPyramid(InputArray img, OutputArrayOfArrays pyramid, Size winSize, int maxLevel, bool withDerivatives=true, int pyrBorder=BORDER_REFLECT_101, int derivBorder=BORDER_CONSTANT, bool tryReuseInputImage=true): Constructs the image pyramid which can be passed to calcOpticalFlowPyrLK.
void cv::calcOpticalFlowFarneback(InputArray prev, InputArray next, InputOutputArray flow, double pyr_scale, int levels, int winsize, int iterations, int poly_n, double poly_sigma, int flags): Computes a dense optical flow using Gunnar Farneback's algorithm.
void cv::calcOpticalFlowPyrLK(InputArray prevImg, InputArray nextImg, InputArray prevPts, InputOutputArray nextPts, OutputArray status, OutputArray err, Size winSize=Size(21, 21), int maxLevel=3, TermCriteria criteria=TermCriteria(TermCriteria::COUNT+TermCriteria::EPS, 30, 0.01), int flags=0, double minEigThreshold=1e-4): Calculates an optical flow for a sparse feature set using the iterative Lucas-Kanade method with pyramids.
RotatedRect cv::CamShift(InputArray probImage, Rect &window, TermCriteria criteria): Finds an object center, size, and orientation.
Mat cv::estimateRigidTransform(InputArray src, InputArray dst, bool fullAffine): Computes an optimal affine transformation between two 2D point sets.
double cv::findTransformECC(InputArray templateImage, InputArray inputImage, InputOutputArray warpMatrix, int motionType=MOTION_AFFINE, TermCriteria criteria=TermCriteria(TermCriteria::COUNT+TermCriteria::EPS, 50, 0.001), InputArray inputMask=noArray()): Finds the geometric transform (warp) between two images in terms of the ECC criterion [53].
int cv::meanShift(InputArray probImage, Rect &window, TermCriteria criteria): Finds an object on a back projection image.
Mat cv::readOpticalFlow(const String &path): Read a .flo file.
bool cv::writeOpticalFlow(const String &path, InputArray flow): Write a .flo to disk.
int cv::buildOpticalFlowPyramid(InputArray img, OutputArrayOfArrays pyramid, Size winSize, int maxLevel, bool withDerivatives = true, int pyrBorder = BORDER_REFLECT_101, int derivBorder = BORDER_CONSTANT, bool tryReuseInputImage = true)
Python: retval, pyramid = cv.buildOpticalFlowPyramid( img, winSize, maxLevel[, pyramid[, withDerivatives[, pyrBorder[, derivBorder[, tryReuseInputImage]]]]] )
Constructs the image pyramid which can be passed to calcOpticalFlowPyrLK.
img: 8-bit input image.
pyramid: output pyramid.
winSize: window size of optical flow algorithm. Must be not less than winSize argument of calcOpticalFlowPyrLK. It is needed to calculate required padding for pyramid levels.
maxLevel: 0-based maximal pyramid level number.
withDerivatives: set to precompute gradients for every pyramid level. If pyramid is constructed without the gradients then calcOpticalFlowPyrLK will calculate them internally.
pyrBorder: the border mode for pyramid layers.
derivBorder: the border mode for gradients.
tryReuseInputImage: put ROI of input image into the pyramid if possible. You can pass false to force data copying.
void cv::calcOpticalFlowFarneback ( InputArray prev, InputArray next, InputOutputArray flow, double pyr_scale, int levels, int winsize, int iterations, int poly_n, double poly_sigma, int flags )
Python: flow = cv.calcOpticalFlowFarneback( prev, next, flow, pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags )
Computes a dense optical flow using the Gunnar Farneback's algorithm.
prev: first 8-bit single-channel input image.
next: second input image of the same size and the same type as prev.
flow: computed flow image that has the same size as prev and type CV_32FC2.
pyr_scale: parameter, specifying the image scale (<1) to build pyramids for each image; pyr_scale=0.5 means a classical pyramid, where each next layer is twice smaller than the previous one.
levels: number of pyramid layers including the initial image; levels=1 means that no extra layers are created and only the original images are used.
winsize: averaging window size; larger values increase the algorithm robustness to image noise and give more chances for fast motion detection, but yield more blurred motion field.
iterations: number of iterations the algorithm does at each pyramid level.
poly_n: size of the pixel neighborhood used to find polynomial expansion in each pixel; larger values mean that the image will be approximated with smoother surfaces, yielding more robust algorithm and more blurred motion field, typically poly_n = 5 or 7.
poly_sigma: standard deviation of the Gaussian that is used to smooth derivatives used as a basis for the polynomial expansion; for poly_n=5, you can set poly_sigma=1.1, for poly_n=7, a good value would be poly_sigma=1.5.
flags: operation flags; can be a combination of OPTFLOW_USE_INITIAL_FLOW and OPTFLOW_FARNEBACK_GAUSSIAN listed above.
The function finds an optical flow for each prev pixel using the [55] algorithm so that
\[\texttt{prev} (y,x) \sim \texttt{next} ( y + \texttt{flow} (y,x)[1], x + \texttt{flow} (y,x)[0])\]
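The indexing convention above is easy to misread, so here is a toy pure-Python sanity check (an illustrative sketch, not using OpenCV): warp `prev` by a known constant flow and verify that `prev(y, x)` matches `next` at the displaced location.

```python
# Toy sanity check of the Farneback flow convention (pure Python, no OpenCV):
#   prev(y, x) ~ next(y + flow(y, x)[1], x + flow(y, x)[0])
# Here the flow is a known constant (fx, fy) for every pixel.

H, W = 4, 6
fx, fy = 1, 2  # every pixel moves 1 to the right and 2 down

prev = [[10 * y + x for x in range(W)] for y in range(H)]

# Build `next` by moving each prev pixel to (x + fx, y + fy).
next_img = [[0] * W for _ in range(H)]
for y in range(H):
    for x in range(W):
        if y + fy < H and x + fx < W:
            next_img[y + fy][x + fx] = prev[y][x]

# Verify the documented relation wherever the target stays in bounds.
ok = all(
    prev[y][x] == next_img[y + fy][x + fx]
    for y in range(H - fy)
    for x in range(W - fx)
)
print(ok)  # True
```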
void cv::calcOpticalFlowPyrLK(InputArray prevImg, InputArray nextImg, InputArray prevPts, InputOutputArray nextPts, OutputArray status, OutputArray err, Size winSize = Size(21, 21), int maxLevel = 3, TermCriteria criteria = TermCriteria(TermCriteria::COUNT+TermCriteria::EPS, 30, 0.01), int flags = 0, double minEigThreshold = 1e-4)
Python: nextPts, status, err = cv.calcOpticalFlowPyrLK( prevImg, nextImg, prevPts, nextPts[, status[, err[, winSize[, maxLevel[, criteria[, flags[, minEigThreshold]]]]]]] )
Calculates an optical flow for a sparse feature set using the iterative Lucas-Kanade method with pyramids.
prevImg: first 8-bit input image or pyramid constructed by buildOpticalFlowPyramid.
nextImg: second input image or pyramid of the same size and the same type as prevImg.
prevPts: vector of 2D points for which the flow needs to be found; point coordinates must be single-precision floating-point numbers.
nextPts: output vector of 2D points (with single-precision floating-point coordinates) containing the calculated new positions of input features in the second image; when OPTFLOW_USE_INITIAL_FLOW flag is passed, the vector must have the same size as in the input.
status: output status vector (of unsigned chars); each element of the vector is set to 1 if the flow for the corresponding features has been found, otherwise, it is set to 0.
err: output vector of errors; each element of the vector is set to an error for the corresponding feature; the type of the error measure can be set in the flags parameter; if the flow wasn't found then the error is not defined (use the status parameter to find such cases).
winSize: size of the search window at each pyramid level.
maxLevel: 0-based maximal pyramid level number; if set to 0, pyramids are not used (single level), if set to 1, two levels are used, and so on; if pyramids are passed to input then the algorithm will use as many levels as the pyramids have but no more than maxLevel.
criteria: parameter, specifying the termination criteria of the iterative search algorithm (after the specified maximum number of iterations criteria.maxCount or when the search window moves by less than criteria.epsilon).
flags: operation flags (OPTFLOW_USE_INITIAL_FLOW, OPTFLOW_LK_GET_MIN_EIGENVALS listed above).
minEigThreshold: the algorithm calculates the minimum eigenvalue of a 2x2 normal matrix of optical flow equations (this matrix is called a spatial gradient matrix in [22]), divided by number of pixels in a window; if this value is less than minEigThreshold, then the corresponding feature is filtered out and its flow is not processed, so it allows to remove bad points and get a performance boost.
The function implements a sparse iterative version of the Lucas-Kanade optical flow in pyramids. See [22] . The function is parallelized with the TBB library.
RotatedRect cv::CamShift ( InputArray probImage, Rect & window, TermCriteria criteria )
Python: retval, window = cv.CamShift( probImage, window, criteria )
Finds an object center, size, and orientation.
probImage: Back projection of the object histogram. See calcBackProject.
window: Initial search window.
criteria: Stop criteria for the underlying meanShift.
returns: (in old interfaces) Number of iterations CAMSHIFT took to converge.
The function implements the CAMSHIFT object tracking algorithm [25]. First, it finds an object center using meanShift and then adjusts the window size and finds the optimal rotation. The function returns the rotated rectangle structure that includes the object position, size, and orientation. The next position of the search window can be obtained with RotatedRect::boundingRect().
See the OpenCV sample camshiftdemo.c that tracks colored objects.
Computes an optimal affine transformation between two 2D point sets.
src: First input 2D point set stored in std::vector or Mat, or an image stored in Mat.
dst: Second input 2D point set of the same size and the same type as src, or another image.
fullAffine: If true, the function finds an optimal affine transformation with no additional restrictions (6 degrees of freedom). Otherwise, the class of transformations to choose from is limited to combinations of translation, rotation, and uniform scaling (4 degrees of freedom).
The function finds an optimal affine transform [A|b] (a 2 x 3 floating-point matrix) that approximates best the affine transformation between:
Two point sets
Two raster images. In this case, the function first finds some features in the src image and finds the corresponding features in the dst image. After that, the problem is reduced to the first case.
In case of point sets, the problem is formulated as follows: you need to find a 2x2 matrix A and a 2x1 vector b so that:
\[[A^*|b^*] = arg \min _{[A|b]} \sum _i \| \texttt{dst}[i] - A { \texttt{src}[i]}^T - b \| ^2\]
where src[i] and dst[i] are the i-th points in src and dst, respectively \([A|b]\) can be either arbitrary (when fullAffine=true ) or have a form of
\[\begin{bmatrix} a_{11} & a_{12} & b_1 \\ -a_{12} & a_{11} & b_2 \end{bmatrix}\]
when fullAffine=false.
double cv::findTransformECC(InputArray templateImage, InputArray inputImage, InputOutputArray warpMatrix, int motionType = MOTION_AFFINE, TermCriteria criteria = TermCriteria(TermCriteria::COUNT+TermCriteria::EPS, 50, 0.001), InputArray inputMask = noArray())
Python: retval, warpMatrix = cv.findTransformECC( templateImage, inputImage, warpMatrix[, motionType[, criteria[, inputMask]]] )
Finds the geometric transform (warp) between two images in terms of the ECC criterion [53] .
templateImage: single-channel template image; CV_8U or CV_32F array.
inputImage: single-channel input image which should be warped with the final warpMatrix in order to provide an image similar to templateImage, same type as templateImage.
warpMatrix: floating-point \(2\times 3\) or \(3\times 3\) mapping matrix (warp).
motionType: parameter, specifying the type of motion (one of the MOTION_* values listed above).
criteria: parameter, specifying the termination criteria of the ECC algorithm; criteria.epsilon defines the threshold of the increment in the correlation coefficient between two iterations (a negative criteria.epsilon makes criteria.maxcount the only termination criterion). Default values are shown in the declaration above.
inputMask: An optional mask to indicate valid values of inputImage.
The function estimates the optimum transformation (warpMatrix) with respect to ECC criterion ([53]), that is
\[\texttt{warpMatrix} = \arg\max_{W} \texttt{ECC}(\texttt{templateImage}(x,y),\texttt{inputImage}(x',y'))\]
where
\[\begin{bmatrix} x' \\ y' \end{bmatrix} = W \cdot \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}\]
(the equation holds with homogeneous coordinates for homography). It returns the final enhanced correlation coefficient, that is the correlation coefficient between the template image and the final warped input image. When a \(3\times 3\) matrix is given with motionType =0, 1 or 2, the third row is ignored.
Unlike findHomography and estimateRigidTransform, the function findTransformECC implements an area-based alignment that builds on intensity similarities. In essence, the function updates the initial transformation that roughly aligns the images. If this information is missing, the identity warp (unity matrix) is used as an initialization. Note that if images undergo strong displacements/rotations, an initial transformation that roughly aligns the images is necessary (e.g., a simple Euclidean/similarity transform that allows for the images showing the same image content approximately). Use inverse warping in the second image to take an image close to the first one, i.e. use the flag WARP_INVERSE_MAP with warpAffine or warpPerspective. See also the OpenCV sample image_alignment.cpp that demonstrates the use of the function. Note that the function throws an exception if the algorithm does not converge.
int cv::meanShift ( InputArray probImage, Rect & window, TermCriteria criteria )
Python: retval, window = cv.meanShift( probImage, window, criteria )
Finds an object on a back projection image.
probImage: Back projection of the object histogram. See calcBackProject for details.
window: Initial search window.
criteria: Stop criteria for the iterative search algorithm.
returns: Number of iterations the search took to converge.
The function implements the iterative object search algorithm. It takes the input back projection of an object and the initial position. The mass center in window of the back projection image is computed and the search window center shifts to the mass center. The procedure is repeated until the specified number of iterations criteria.maxCount is done or until the window center shifts by less than criteria.epsilon. The algorithm is used inside CamShift and, unlike CamShift, the search window size or orientation do not change during the search. You can simply pass the output of calcBackProject to this function. But better results can be obtained if you pre-filter the back projection and remove the noise. For example, you can do this by retrieving connected components with findContours, throwing away contours with small area (contourArea), and rendering the remaining contours with drawContours.
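For intuition, the mean-shift iteration itself can be sketched in a few lines of plain Python (a toy illustration with a hypothetical `mean_shift` helper, not the OpenCV API; the real cv::meanShift operates on a back-projection Mat and a Rect):

```python
# Toy pure-Python mean shift: backproj is a 2-D list of weights,
# window is (x, y, w, h).  The window hops to the local mass center
# until it stops moving, mirroring the iteration described above.
def mean_shift(backproj, window, max_iter=30):
    x, y, w, h = window
    H, W = len(backproj), len(backproj[0])
    for _ in range(max_iter):
        m00 = m10 = m01 = 0.0
        for yy in range(max(y, 0), min(y + h, H)):
            for xx in range(max(x, 0), min(x + w, W)):
                wgt = backproj[yy][xx]
                m00 += wgt
                m10 += wgt * xx
                m01 += wgt * yy
        if m00 == 0:              # window fell entirely off the object
            break
        # shift the window so its center sits on the mass center
        nx = int(m10 / m00 - (w - 1) / 2 + 0.5)
        ny = int(m01 / m00 - (h - 1) / 2 + 0.5)
        if (nx, ny) == (x, y):    # converged: window no longer moves
            break
        x, y = nx, ny
    return (x, y, w, h)

# A 5x5 blob of weight centered at (12, 8); start the window off-center.
H, W = 20, 20
bp = [[0] * W for _ in range(H)]
for yy in range(6, 11):
    for xx in range(10, 15):
        bp[yy][xx] = 1

print(mean_shift(bp, (7, 3, 5, 5)))  # → (10, 6, 5, 5): locked onto the blob
```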
Read a .flo file.
path Path to the file to be loaded
The function readOpticalFlow loads a flow field from a file and returns it as a single matrix. Resulting Mat has a type CV_32FC2 - floating-point, 2-channel. First channel corresponds to the flow in the horizontal direction (u), second - vertical (v).
bool cv::writeOpticalFlow ( const String & path, InputArray flow )
Python: retval = cv.writeOpticalFlow( path, flow )
Write a .flo to disk.
path Path to the file to be written flow Flow field to be stored
The function stores a flow field in a file, returns true on success, false otherwise. The flow field must be a 2-channel, floating-point matrix (CV_32FC2). First channel corresponds to the flow in the horizontal direction (u), second - vertical (v).
Overview
The definite integral \(∫_a^bf(x)dx\) that we're all familiar with gives us the area between the curve \(f(x)\) and the straight line segment along the interval \([a,b]\) on the x-axis. In other words, if you imagined building a "wall" whose base was the straight line segment from \(a\) to \(b\) and whose ceiling was \(f(x)\) over that interval, the area of that wall would represent the definite integral \(∫_a^bf(x)dx\). But what if that wall popped into or out of the screen a little, so that its base, instead of just being a straight line, was actually a curved line? How would we find the area of that wall? The answer is that we would have to use something called a line integral, which will be the subject of this lesson.
Defining and calculating line integrals
Let's suppose that we have some arbitrary curve \(C\) in the \(xy\)-plane whose endpoints are \(c_1\) and \(c_2\), as illustrated in Figure 1. We shall associate with each point along the curve a value \(s\) known as the arc length (which could be measured from either \(c_1\) or \(c_2\)). Let's also say that \(f(x,y)\) can be any arbitrary surface as described in the video above; \(f(s)\) represents the \(z\)-values of the surface associated with each value of \(s\). Let's say that we wanted to find the area of the curvy, blue-colored wall in Figure 1. How could we do that? We would do something analogous to what we did to find the area of a "wall" formed by a single-variable function \(f(x)\) and the \(x\)-axis: we would take the infinite sum of the areas of infinitely many really skinny rectangles. The width of each rectangle in Figure 1 is \(ds\), a very small change in arc length. The height of each rectangle is just \(f(s)\) (or, equivalently, \(f(x,y)\)). Thus, the area of each rectangle is \(f(s)ds\). If we take the infinite sum (\(∫\)) of the tiny areas of all of the infinitesimally skinny rectangles, we'll just get
$$∫_Cf(s)ds,\tag{1}$$
where \(∫_C\) denotes the infinite sum along the curve \(C\). Expression (1) is called a line integral and represents the area between a function and any curved line \(C\). What would happen if the curve \(C\) were a straight line segment lying on the \(x\)-axis, as in Figure 3? Then the arc length (measured from either the \(x\)-value \(x_1\) or \(x_2\) associated with the point \(c_1\) or \(c_2\)) could be measured by \(x\), and a change in arc length would just be \(dx\). Thus, Expression (1) would simplify to
$$\int_{x_1}^{x_2}f(x)dx.$$
Thus, Expression (1) is a generalization of the definite integral. Expression (1) superficially looks identical to a definite integral, since swapping the variable \(x\) for another variable \(s\) doesn't seem to matter. But unlike the integrand of a definite integral of the form \(\int{f(x)dx}\), the integrand \(f(s)\) of Expression (1) is, clearly, a function of the independent variables \(x\) and \(y\) (see Notes 1); thus, we can write \(s(x,y)\). Substituting \(s(x,y)\) into Expression (1), we have
$$∫_Cf(s(x,y))ds=∫_Cf(x,y)ds.\tag{2}$$
Equation (2) shows that the integrand is a multi-variable function of \(x\) and \(y\), while the limits of integration are taken with respect to \(s\). Thus, the integral in Equation (2), at least initially, looks pretty complicated to solve. But we'll see that by making two simplifications, the integral \(∫_Cf(x,y)ds\) will simplify to something that you're already familiar with: a definite integral, as we'll see shortly. The first simplification that we'll have to make is to express each \((x,y)\) coordinate on the curve \(C\) as parametric functions of some variable \(t\). Doing so, we have \(x=x(t)\) and \(y=y(t)\). This allows us to write \(s(x,y)\) as \(s(x(t),y(t))=s(t)\). Substituting \(x(t)\) and \(y(t)\) into Equation (2), we have
$$∫_Cf(x,y)dS=∫_Cf(x(t),y(t))dS.\tag{3}$$
Now the question that you might be asking yourself is: what was the point of expressing the integrand in terms of \(t\)? This brings us to the second simplification that we'll want to make to Equation (3): express \(dS\) as a function of \(t\). To do this, let's start by writing \(dS\) in terms of \(x\) and \(y\). If you imagined looking at Figure 1 from a bird's-eye view, you would see the curve \(C\) lying along the flat \(xy\)-plane, as illustrated in Figure 2. As you can see from Figure 2, the length \(dS\) forms the hypotenuse of a right triangle where the lengths of the two other sides of the triangle are given by \(dx\) and \(dy\). Using the Pythagorean theorem, we can rewrite \(dS\) as
$$dS=\sqrt{dx^2+dy^2}.$$
Now, let's multiply the right-hand side of the above equation by \(dt/dt\) (\(=1\)) and do some algebraic simplifications to get
$$dS=\frac{1}{dt}\sqrt{dx^2+dy^2}dt=\sqrt{\frac{1}{dt^2}(dx^2+dy^2)}dt=\sqrt{(\frac{dx}{dt})^2+(\frac{dy}{dt})^2}dt=\sqrt{(x'(t))^2+(y'(t))^2}dt.$$
Substituting the above expression into Equation (3), we have
$$∫_Cf(s)dS=∫_{t_0}^{t_1}f(x(t),y(t))\sqrt{(x'(t))^2+(y'(t))^2}dt.\tag{4}$$
As you can see on the right-hand side of Equation (4), both the integrand and the limits of integration are expressed in terms of the single variable \(t\). Equation (4) is nice since it means that every line integral can be evaluated as a definite integral of a single variable. Expression (1) represents, conceptually, what a line integral actually is (the area between a function and a curved line \(C\)); but Equation (4) is what we use to actually compute the line integral.
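Equation (4) can be checked numerically. Below is a minimal sketch (not from the original article; `line_integral` is a hypothetical helper) that approximates the right-hand side of Equation (4) with a midpoint rule, taking \(f(x,y)=x^2+y^2\) and \(C\) the unit circle, where \(f=1\) on \(C\) and the integral is just the arc length \(2\pi\):

```python
import math

def line_integral(f, x, y, t0, t1, n=20000):
    """Approximate the right-hand side of Equation (4) with a midpoint rule;
    x'(t) and y'(t) are estimated by central differences."""
    h = (t1 - t0) / n
    total = 0.0
    for i in range(n):
        t = t0 + (i + 0.5) * h                      # midpoint of subinterval i
        dx = (x(t + 1e-6) - x(t - 1e-6)) / 2e-6     # x'(t)
        dy = (y(t + 1e-6) - y(t - 1e-6)) / 2e-6     # y'(t)
        total += f(x(t), y(t)) * math.hypot(dx, dy) * h   # f(x, y) dS
    return total

# C = unit circle, f(x, y) = x^2 + y^2 = 1 on C, so the integral is the
# arc length of C, namely 2*pi.
approx = line_integral(lambda x, y: x * x + y * y,
                       math.cos, math.sin, 0.0, 2 * math.pi)
```
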
This article is licensed under a CC BY-NC-SA 4.0 license.
Notes
1. For example, if the arc from \(c_1\) to \(c_{1m}\) in Figure 2 has an arc length of \(s=1\,\text{m}\), then \(s(x_{1m},y_{1m})=1\,\text{m}\); whereas, if the arc from \(c_1\) to \(c_{2m}\) in Figure 2 has an arc length of \(s=2\,\text{m}\), then \(s(x_{2m},y_{2m})=2\,\text{m}\). Thus, the value of \(s\) depends upon the values of both \(x\) and \(y\), and to capture this fact we write the function \(s(x,y)\).
References
1. Khan Academy. "Introduction to the line integral | Multivariable Calculus | Khan Academy". Online video clip. YouTube. YouTube, 16 December 2016. Web. 27 March 2017.
2. Atheia. How to Calculate Line Integrals. Retrieved from https://www.wikihow.com/Calculate-Line-Integrals |
This question already has an answer here:
I asked a similar question yesterday, but I didn't really get the info I wanted so maybe if I post a question and get an answer I will understand this concept better. Just some background info the topic is using Riemann sums approximation to find upper/lower sums.
The question is $f(x)=(x-2)^{2} +1$, $[a,b]=[1,3]$; find the lower sum with $n=3.$
Here is my attempt: by Riemann's definition, $\Delta x= (b-a)/n,$ so our $\Delta x$ is $2/3.$ $x_k = a+\Delta x\,k= \frac{2}{3}k+1$ for $k\in \{0,1,2,3\}$ according to my book; why does it include 0 is my question?
So I have my $\Delta x$ and my $x_k.$ Now I have to check where the function given is decreasing and increasing on the intervals given, correct?
So my 3 sub intervals are $[1,5/3]$ and $[5/3,7/3]$ and $[7/3,3]$
If I plug $1$ in $(x-2)^{2} +1$ I get $2,$ plugging in $5/3$ I get $10/9,$ so it appears that on $[1,5/3]$ we are decreasing. So for this first interval my lower sum uses the value at $5/3.$
Next interval is from $[5/3,7/3]$. Plugging in $5/3$ I get $10/9.$ Plugging in $7/3$ I get $10/9.$ In this interval neither endpoint gives the lower sum, so I add the two endpoints and divide by 2, giving me $2.$
Final interval is from $[7/3,3]$. Plugging in $7/3$ I get $10/9.$ Plugging in $3$ I get $2.$ In this interval the lower sum uses the value at $7/3.$
Now here is where I get lost. The answer according to my text is $\frac{2}{3}\left(\left((5/3 -2)^2 +1\right) +\left((2-2)^2 +1\right) + \left((7/3 -2)^2 +1\right)\right)$. So what I understand is that they expanded the summation 3 times; I get that.
They plugged in for $x$ the point giving the lower value for each expansion, i.e. $5/3$ for the first expansion, $2$ for the second expansion, and $7/3$ for the last expansion. I get this.
Then they multiplied the whole thing by $\Delta x$
My main concern is: where did we use $\frac{2}{3}k+1$? What was the point of even figuring this out? Wouldn't we have been fine with just $\Delta x$? I thought we would plug $\frac{2}{3}k+1$ in for $x$ in each expansion, like right/left sums.
EDIT: Can someone answer the questions in my post please? I get how to do most of it.
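As a sanity check of the book's answer, the lower sum can be computed numerically (an illustrative sketch; `lower_sum` is a hypothetical helper that approximates each subinterval's minimum by dense sampling):

```python
def lower_sum(f, a, b, n, samples=1000):
    """Lower Riemann sum: on each of the n subintervals, approximate the
    minimum of f by dense sampling, then multiply by the width dx."""
    dx = (b - a) / n
    total = 0.0
    for k in range(n):
        left = a + k * dx
        m = min(f(left + j * dx / samples) for j in range(samples + 1))
        total += m * dx
    return total

f = lambda x: (x - 2) ** 2 + 1
s = lower_sum(f, 1, 3, 3)   # book's answer: (2/3)*(10/9 + 1 + 10/9) = 58/27
```

The minima land at $5/3$, $2$, and $7/3$, matching the three terms the book plugs in.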
Z F XU
Articles written in Bulletin of Materials Science
Volume 39 Issue 2 April 2016 pp 519-523
The effect of plating temperatures between 60 and 90$^{\circ}$C on the structure and corrosion resistance of electroless NiWP coatings on an AZ91D magnesium alloy substrate was investigated. Results show that temperature has a significant influence on the surface morphology and corrosion resistance of the NiWP alloy coating. An increase in temperature will lead to an increase in coating thickness and form a more uniform and dense NiWP coating. Moreover, cracks were observed by SEM in the coating surface and interface at the plating temperature of 90$^{\circ}$C. Coating corrosion resistance is highly dependent on temperature according to polarization curves. The optimum temperature is found to be 80$^{\circ}$C and the possible reasons for the corrosion resistance of the NiWP coating have been discussed.
Volume 40 Issue 3 June 2017 pp 577-582
NiWP alloy coatings were prepared by electrodeposition, and the effects of ferrous chloride (FeCl$_2$), sodium tungstate (Na$_2$WO$_4$) and current density ($D_K$) on the properties of the coatings were studied. The results show that upon increasing the concentration of FeCl$_2$, initially the Fe content of the coating increased and then tended to be stable; the deposition rate and microhardness of the coating decreased while the cathodic current efficiency ($\eta$) initially increased and then decreased; and for a FeCl$_2$ concentration of 3.6 gl$^{−1}$, the cathodic current efficiency reached its maximum of 74.23%. Upon increasing the concentration of Na$_2$WO$_4$, the W content and microhardness of the coatings increased; the deposition rate and the cathodic current efficiency initially increased and then decreased. The cathodic current efficiency reached the maximum value of 70.33% with a Na$_2$WO$_4$ concentration of 50 gl$^{−1}$, whereas the deposition rate is maximum at 8.67 $\mu$mh$^{−1}$ with a Na$_2$WO$_4$ concentration of 40 gl$^{−1}$. Upon increasing the $D_K$, the deposition rate, microhardness, and Fe and W content of the coatings increased; the cathodic current efficiency first increased and then decreased. When $D_K$ was 4 A dm$^{−2}$, the current efficiency reached the maximum of 73.64%.
Volume 41 Issue 2 April 2018 Article ID 0041
In this paper, ternary NiFeW alloy coatings were prepared by jet electrodeposition, and the effects of main salt concentration, jet speed, current density and temperature on the properties of the coatings, including the composition, microhardness, surface morphology, structure and corrosion resistance, were investigated. Results reveal that the deposition rate reaches a maximum value of 27.30 $\mu$m h$^{−1}$, and the total current efficiency is above 85%. The maximum microhardness is 605 HV, and the wear and corrosion resistance values of the alloy coating are good. Moreover, the ternary NiFeW alloy coating is smooth and bright, and it presents a dense cellular growth. The alloy plating is nanocrystalline and has a face-centered cubic structure.
It is a well-known fact that language generated by a context-free grammar is the minimal solution of a particular system of equations, for example:
$$\begin{align*} X &=\{{\epsilon}\} \cup Y\\ Y &=\{{a}\}X \{{b}\} \end{align*}$$
whose minimal solution for $X$ generates the language $\{a^nb^n : n \in \mathbb{N}\}$.
How can I prove that fact? How do I make the step from equations over sets to context-free grammars? In contrast to regular languages, which are accepted by automata without stack, it seems like the equations that generate CFG can't be easily transformed to pushdown automata. |
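One concrete way to see the "minimal solution" is Kleene iteration: start from the empty language and repeatedly apply the right-hand side until a fixed point is reached. A sketch (my own illustration; it assumes the single equation $X = \{\epsilon\} \cup \{a\}X\{b\}$ obtained by substituting one equation into the other, with words truncated at a maximum length so the sets stay finite):

```python
def least_fixed_point(max_len=10, iterations=20):
    """Kleene iteration for X = {epsilon} U {a}X{b}: start from the empty
    language and apply the right-hand side repeatedly; truncation at
    max_len keeps every iterate finite."""
    X = set()
    for _ in range(iterations):
        X = {""} | {"a" + w + "b" for w in X if len(w) + 2 <= max_len}
    return X

L = least_fixed_point()   # {"", "ab", "aabb", ...} up to length 10
```

Each iterate is contained in the next, and the union of all iterates is exactly $\{a^nb^n\}$, which is why it is the *least* solution.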
Working on the two explanations given above, the natural transformation \\(\beta\_\text{Italians}\\) maps *every* Italian in \\(F\\) to *the* Italian in \\(\mathrm{Ran}\_G (H)\\), so there is only one possible such function.
We can therefore reason that \\(\beta\_\text{Germans}\\) and \\(\alpha\_\text{Germans}\\) are effectively the same map, where the sizes of each are equated by
\\[\begin{align}
|\alpha\_\text{Germans}| \\\\
= |F\circ G(\text{Germans})|^{|H(\text{Germans})|} \\\\
= 4^5 \\\\
= |F(\text{Germans})|^{|\mathrm{Ran}\_G(H)(\text{Germans})|} \\\\
= |\beta\_\text{Germans}|.
\end{align}
\\] |
Conformational Statistical Mechanics
Conformational transitions play a key role in biomolecular processes: enzymes tend to close up over their substrates, transporters such as antiporters often undergo an outward to inward-facing transition (facing one side of the bilayer or the other), and the steps taken by molecular motors are nothing other than very large scale conformational changes. Examples of simple conformational changes - without associated (un)binding - in this site's antiporter and molecular motor models are shown below:
Other sections of this website employ kinetic descriptions of conformational transitions, but here we want to probe the more detailed statistical mechanics. What is the meaning of conformational free energy? What is the connection between microscopic forces at the atomic level and observed populations and rate constants?
Equilibrium conformational statistical mechanics
We'll consider a generic conformational transition between two states A and B, characterized by rate constants $\kab$ and $\kba$. A and B could correspond to the numbered states shown above, for instance. If we have a large number of copies $N$ of our molecule (or system) of interest which are in equilibrium, then some subset of these $N_A$ and $N_B$ are in the two states of interest. Our discussion will apply even if there are other states besides A and B, so that $N = N_A + N_B + N_C + N_D + \cdots$.
Equilibrium, as always, implies there is a balance of flows among states. This can be stated in different ways:
where here $N_X$ refers to the equilibrium number of systems in state $X$ and $\conceq{X} = N_X / V$ refers to the equilibrium concentration in a volume $V$.
Statistical mechanics tells us how to derive "macroscopic" observable properties, such as the ratio of state populations $N_A/N_B$, from "microscopic" forces and energies. Our microscopic description will employ classical physics, although the treatment of quantum properties is largely similar. For a good discussion of quantum statistical mechanics and the connections to a classical description, see the book by McQuarrie and Simon.
In classical statistical mechanics, we can derive observable properties solely by knowing the potential energy $U$ as a function of the positions of all the atoms:
You should assume that all the atoms of all the molecules in a system are included - protein, water, ions, ligand, and any other buffer molecules.
Knowing $U(\rn)$ is equivalent to knowing all the forces between atoms, because forces are obtained simply from derivatives of $U$. It is important that $U$ is a function of the atomic positions that includes all the physics and chemistry you would expect - electrostatics, van der Waals interactions, and even complex stereochemical effects. $U$ should not be confused with the thermodynamic internal energy, which is essentially the average energy (averaged over all configurations) for a given set of conditions; $U$ gives the energy for any specific configuration $\rn$.
The essential guiding equation of equilibrium statistical mechanics is that the probability of a configuration $\rn$ (sometimes called a microstate) is proportional to the Boltzmann factor of the potential energy:
where $k_B$ is Boltzmann's constant and $T$ is the temperature in kelvin.
To extend this configuration-specific picture to conformational states, which are defined as large collections of configurations, we must sum up (integrate over) the probabilities of all the configurations consistent with a given state. Thus, we have
where the notation $\int_A$ indicates that we integrate over all configurations of state A. State A can be defined in an arbitrary way - for example, as those configurations in which a certain pair of atoms is closer or further than some threshold distance, or perhaps configurations with an RMSD value less than a threshold. To conform with physical expectations for a state, the selected configurations should interconvert among themselves much more rapidly than they transition to any other macroscopic state (B, C, ...).
We use the (relative) state populations to define the conformational free energies based on Boltzmann factors of free energies instead of potential energies:
where $F_X$ is the Helmholtz free energy of state X and $G_X$ is the Gibbs free energy. The integrals used above strictly refer to the Helmholtz free energy, which is appropriate for constant volume, but the constant-pressure Gibbs free energy will be very similar. See the textbook by Zuckerman for a discussion of this issue.
The bottom line is that the conformational free energy yields an effective energy whose Boltzmann factor yields the state probability. The free energy is not simply the average energy within a state, as discussed in the Zuckerman textbook.
Because the probability of a state is equal to the fraction of systems in that state - that is, $\mathrm{prob} \left( \mbox{state X} \right) = N_X / N$ - we can now re-write the original equilibrium balance condition in terms of free energies:
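As a numerical illustration of this balance condition (not part of the original text; the helper `state_populations` and the value used for $k_BT$ are my assumptions), fractional state populations follow directly from Boltzmann factors of the state free energies:

```python
import math

kB_T = 0.596   # kcal/mol at roughly 300 K (illustrative value)

def state_populations(free_energies):
    """Fractional state populations: prob(X) is proportional to
    exp(-F_X / kB*T), normalized over all listed states."""
    weights = [math.exp(-F / kB_T) for F in free_energies]
    Z = sum(weights)
    return [w / Z for w in weights]

# Two states separated by dF = kB*T*ln(2): populations must be in a 2:1 ratio.
pA, pB = state_populations([0.0, kB_T * math.log(2)])
```
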
Basic non-equilibrium statistical mechanics of conformational transitions
The connection of the microscopic picture based on individual configurations $\rn$ to observable kinetic quantities like rate constants can be considerably more complicated. Nevertheless, the basic ideas can be understood with a minimum of complex math.
The key object for studying a system's nonequilibrium behavior is the trajectory $\rn(t)$, the configuration as a function of time. It's simplest to think of this as a time-ordered list of configurations:
In nature, a trajectory is generated by the full quantum mechanical behavior of the universe, but it's a lot easier to imagine our system of interest contained in a finite volume $V$ including solvent, ligands, etc. Our system is in contact with its environment characterized by its temperature $T$, pressure $P$, pH, etc. Given all this, one can imagine writing down equations which would tell us how the system configuration changes from time $t_j$ to $t_{j+1}$. For concreteness, it's simplest to imagine using Newton's laws (as employed in molecular dynamics simulation): each atom's velocity changes according to the force, and the position according to the velocity.
Now we can perform a valuable thought experiment. Imagine following the trajectory (the "movie") of our system for a very long period of time - so long that every important system behavior occurs multiple times. From this trajectory, we can calculate any observable of interest. For example, we could calculate the rate $\kab$ for transitions from state A to B by averaging the 'first-passage' times required for the trajectory to reach B for the first time after each time it enters A - and then taking the reciprocal of this average. That is, as was explained by Hill,
From the same long trajectory, we can also calculate equilibrium averages by considering all configurations $\rn$ occurring in the trajectory. For example, we can calculate the ratio of equilibrium populations of states A and B based on the time spent in each state
Importantly, we can calculate equilibrium quantities from the (dynamical) trajectory. We cannot, however, calculate non-equilibrium quantities like rates just by considering configuration integrals as in (5).
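The first-passage recipe above can be sketched in code (an illustration only; the two-state Markov trajectory and its hop probabilities are assumptions, not data from the text):

```python
import random

def estimate_kinetics(traj):
    """From a long state trajectory (one 'A'/'B' label per time step),
    estimate k_AB = 1 / <first-passage time from entering A to reaching B>,
    and the equilibrium ratio N_A/N_B as a ratio of dwell times."""
    fpts, t_enter = [], None
    for t in range(1, len(traj)):
        if traj[t] == 'A' and traj[t - 1] == 'B':
            t_enter = t                        # just entered A
        elif traj[t] == 'B' and traj[t - 1] == 'A' and t_enter is not None:
            fpts.append(t - t_enter)           # reached B for the first time
            t_enter = None
    k_AB = len(fpts) / sum(fpts)               # reciprocal of the mean FPT
    ratio = traj.count('A') / traj.count('B')
    return k_AB, ratio

# Synthetic "movie": a two-state Markov chain with hop probabilities
# p(A->B) = 0.1 and p(B->A) = 0.2 per step, so k_AB ~ 0.1 and N_A/N_B ~ 2.
random.seed(0)
traj, s = [], 'A'
for _ in range(200_000):
    traj.append(s)
    if s == 'A' and random.random() < 0.1:
        s = 'B'
    elif s == 'B' and random.random() < 0.2:
        s = 'A'
k_AB, ratio = estimate_kinetics(traj)
```

Note that the same trajectory yields both a kinetic quantity ($\kab$) and an equilibrium one ($N_A/N_B$), exactly as the thought experiment claims.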
When is free energy minimized?
You have probably heard words to the effect that "the free energy is minimized," and indeed this website performs such minimizations in several cases (e.g., for membrane-separated molecules or ions and for binding ) to find equilibrium populations. As was emphasized in the thermodynamic derivations for those cases, free energy minimization is entirely appropriate in the "thermodynamic limit" where we are concerned with the spatial locations of very large numbers of molecules (or ions).
Should we also apply the concept of free energy minimization to conformational states? To see why we should not do so, let's begin by re-examining Eq. (4), which gives the exact statistical mechanical (relative) probability of state A. The absolute (or fractional) probability is necessarily less than one - see Exercises. So in terms of any discrete states that have been defined (A, B, ...), according to Eq. (5) the minimum free energy will correspond to the maximum probability. But depending on the system and the states that have been defined, there is no reason why the maximum probability should approach one. Thus, there may be significant probability in other states.
In slightly different words, we expect each state to have some probability, so it would be wrong to ignore those besides the most probable. Of course, it's possible to define a state that, by construction/choice, has almost all the probability. However, defining conformational states is a tricky business, and something we won't delve into.
Because state definitions can be somewhat arbitrary, in conformational statistical mechanics we often turn to a more precise, if still imperfect, quantity called the conformational free energy. (This also can be called the "potential of mean force", PMF, but it only yields the true mean force if the PMF is defined in special ways.)
To understand basic conformational behavior, we will define the conformational free energy $G(q) \simeq F(q)$ for any coordinate $q$ as the energy whose Boltzmann factor is proportional to the probability of observing the value $q$. For example, $q$ could be the angle between two helices or the distance between two atoms of a protein. In analogy to the way the relative probability for state A was defined as the sum (integral) over all Boltzmann factors within state A via Eq. (4), we can similarly calculate the relative probability for coordinate $q$. We can do this by defining $\rnhat$ (note the 'hat' over $\mathbf{r}$) as the set of all coordinates except $q$, which leads to the relations
To understand this equation in a simple context, imagine we have a two-dimensional system with a potential energy $U(x,y)$. Then, to find the Boltzmann factor of $G(x)$, we integrate over $y$, which is $\rnhat$ in this simple case; that is, we sum up the probability for all configurations $(x,y)$ consistent with a given $x$ value. This process is called "projection" or "marginalization" and is discussed further in the Zuckerman textbook.
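A minimal numerical sketch of this marginalization (my own illustration; the double-well potential $U(x,y)=(x^2-1)^2+y^2$ and the grid are assumptions):

```python
import math

def pmf_x(U, xs, ys, kT=1.0):
    """Project the Boltzmann distribution onto x: for each x, sum
    exp(-U(x, y)/kT) over y, then take G(x) = -kT * ln(sum), shifted so
    the global minimum sits at zero."""
    G = []
    for x in xs:
        w = sum(math.exp(-U(x, y) / kT) for y in ys)
        G.append(-kT * math.log(w))
    g0 = min(G)
    return [g - g0 for g in G]

U = lambda x, y: (x * x - 1.0) ** 2 + y * y     # double well in x, harmonic in y
xs = [i * 0.05 - 2.0 for i in range(81)]        # grid on [-2, 2]
ys = [j * 0.05 - 2.0 for j in range(81)]
G = pmf_x(U, xs, ys)                            # minima near x = -1 and x = +1
```

Because this $U$ is separable, $G(x)$ reproduces the double well $(x^2-1)^2$ up to an additive constant: two symmetric minima at $x=\pm1$ with a barrier of height 1 at $x=0$.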
The sketch shows a schematic $G(q)$ with multiple minima, a phenomenon which should be expected in complex systems such as biomolecules. The degree to which there is a single dominant state depends on the $\Delta G$ values between minima: if they are large compared with $k_B T$, then the global minimum region of $G$ indeed will capture most of the probability. However, unless there is evidence for this, you should expect that a multiplicity of states will be important.
A few words of warning are required about the conformational free energy. As I have discussed at length in a blog post, the problem with $G(q)$ or a PMF is that coordinates tend to be selected based on intuition; and although the resulting landscape has a genuine basis in statistical mechanics, it may obscure important conformational (i.e., "mechanistic") behavior. The choice of $q$ can affect the number, apparent locations, and identities of the local minima, as well as the values of the barrier heights connecting them. Hence mechanistic and kinetic inferences based on $G(q)$ alone must be subjected to more detailed testing.
References
1. D.A. McQuarrie and J.D. Simon, Physical Chemistry: A Molecular Approach (University Science Books, 1997).
2. D.M. Zuckerman, Statistical Physics of Biomolecules: An Introduction (CRC Press, 2010).
Exercises
1. Equation (4) gives an integral proportional to the probability of state A. What ratio of integrals gives the absolute or fractional probability of state A (i.e., $N_A/N$)? How do you know this is less than one?
2. A time-correlation function is another important non-equilibrium quantity. Describe in words a procedure which could be used to calculate such a function from a very long trajectory, namely, the probability to be in state B at time $\tau$ following the most recent entry to state A.
This post describes some geometric machine learning algorithms.
K-Nearest Neighbor Regression
The k-nearest neighbors regression algorithm (k-NN regression) is a non-parametric method used for regression. For a new example x, predict y as the average of the values \(y_1, y_2, …, y_k\) of the k nearest neighbors of x.
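A minimal sketch of this averaging rule (the helper `knn_regress` and the toy data are my own illustrations, not part of the original post):

```python
def knn_regress(train, x_new, k=3):
    """k-NN regression: average the y-values of the k training points
    closest to x_new in squared Euclidean distance."""
    by_dist = sorted(train,
                     key=lambda pair: sum((a - b) ** 2
                                          for a, b in zip(pair[0], x_new)))
    return sum(y for _, y in by_dist[:k]) / k

# Toy 1-D data sampled from y = 2x; predict at x = 2.5 with k = 2.
train = [((0.0,), 0.0), ((1.0,), 2.0), ((2.0,), 4.0),
         ((3.0,), 6.0), ((4.0,), 8.0)]
y_hat = knn_regress(train, (2.5,), k=2)   # neighbors x=2 and x=3 -> (4+6)/2
```
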
K-Nearest Neighbor Classifier
The k-nearest neighbors classifier (k-NN classifier) is a non-parametric method used for classification. For a new example x, predict y as the most common class among the k nearest neighbors of x.
Support Vector Machine (SVM)
In SVM, y has two values -1 or 1, and the hypothesis function is \(h_{W,b}(x) = g(W^T x + b)\) with g(z) = {1 if z >= 0, otherwise -1}.
We need to find W and b so that \(W^T x + b\) >> 0 when y=1 and \(W^T x + b\) << 0 when y=-1.
Functional margin
The functional margin of example \(i\) is \(\widehat{\gamma}^{(i)} = y^{(i)} \cdot (W^T x^{(i)} +b)\).
We want this quantity to be >> 0 for all training examples.
We define \(\widehat{\gamma} = min \ \widehat{\gamma}^{(i)}\) for all values of i.
Geometric margin
The geometric margin \(\gamma^{(i)}\) is the geometric distance between an example \(x^{(i)}\) and the line defined by \(W^Tx + b = 0\). We define \(x_p^{(i)}\) as the point of projection of \(x^{(i)}\) on the line.
The vector \(\overrightarrow{x}^{(i)} = \overrightarrow{x_p}^{(i)} + \gamma^{(i)} * \frac{\overrightarrow{W}}{||W||} \)
\(x_p^{(i)}\) is on the line, so \(W^T (x^{(i)} - \gamma^{(i)} * \frac{W}{||W||}) + b = 0\)
We can find that \(\gamma^{(i)} = \frac{W^T x^{(i)} + b}{||W||} \)
More generally, \(\gamma^{(i)} = y^{(i)}(\frac{W^T x^{(i)}}{||W||} + \frac{b}{||W||}) \)
We define \(\gamma = min \ {\gamma}^{(i)}\) for all values of i
We can deduce that the geometric margin (\(\gamma\)) = functional margin / ||W||
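A quick numerical check of the relation between the two margins (the weight vector, bias, and example point are made-up values for illustration):

```python
import math

def functional_margin(W, b, x, y):
    """Functional margin: gamma_hat = y * (W.x + b)."""
    return y * (sum(wi * xi for wi, xi in zip(W, x)) + b)

def geometric_margin(W, b, x, y):
    """Geometric margin: gamma = functional margin / ||W||."""
    return functional_margin(W, b, x, y) / math.hypot(*W)

W, b = (3.0, 4.0), -5.0                            # ||W|| = 5
g_hat = functional_margin(W, b, (2.0, 1.0), 1)     # 3*2 + 4*1 - 5 = 5
g = geometric_margin(W, b, (2.0, 1.0), 1)          # 5 / 5 = 1
```
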
Max Margin Classifier
We need to maximize the geometric margin \(\gamma\); in other words, \(\underset{W, b, \gamma}{max}\ \gamma\), such that \(y^{(i)}(\frac{W^T x^{(i)}}{||W||} + \frac{b}{||W||})\) >= \(\gamma\) for all values \((x^{(i)},y^{(i)})\).
To simplify the maximization problem, we impose a scaling constraint: \(\widehat{\gamma} = \gamma * ||W|| = 1\).
The optimization problem becomes as follows: \(\underset{W, b}{max}\frac{1}{||W||}\) (or \(\underset{W, b}{min}\frac{1}{2}||W||^2\)), such that \(y^{(i)}(W^T x^{(i)} + b)\) >= 1 for all values \((x^{(i)},y^{(i)})\).
Using the method of Lagrange multipliers, we can solve the minimization problem: \(L(W, b, \alpha) = \frac{1}{2} ||W||^2 - \sum_{i=1}^m \alpha_i * (y^{(i)}(W^T x^{(i)} + b) - 1)\)
Based on the KKT conditions, \(\alpha_i >= 0\) and \(\alpha_i * (y^{(i)}(W^T x^{(i)} + b) - 1) = 0\) for all i.
We know that \(y^{(i)}(W^T x^{(i)} + b) - 1 = 0\) is true only for a few examples (called support vectors) located close to the line \(W^Tx + b = 0\) (for these examples, \(\gamma = {\gamma}^{(i)} = \frac{1}{||W||}\)); therefore \(\alpha_i\) is equal to 0 for the majority of examples.
By differentiating L, we can find: \(\frac{\partial L(W, b, \alpha)}{\partial w} = W - \sum_{i=1}^m \alpha_i * y^{(i)} * x^{(i)} = 0 \\ \frac{\partial L(W, b, \alpha)}{\partial b} = \sum_{i=1}^m \alpha_i * y^{(i)} = 0\)
After further derivations, we can find the equations for W, b and \(\alpha\).
More details can be found in this video: https://www.youtube.com/watch?v=s8B4A5ubw6c
Predictions
The hypothesis function is defined as: \(h_{W,b}(x) = g(W^T x + b) = g(\sum_{i=1}^m \alpha_i * y^{(i)} * K(x^{(i)}, x) + b) \)
The function \(K(x^{(i)}, x) = \phi(x^{(i)})^T \phi(x) \) is known as the kernel function (\(\phi\) is a mapping function).
When \(\phi(x) = x\), then \(h_{W,b}(x) = g(\sum_{i=1}^m \alpha_i * y^{(i)} * (x^{(i)})^T x + b) \)
Kernels
There are many types of kernels: “Polynomial kernel”, “Gaussian kernel”,…
In the example below the kernel is defined as a quadratic kernel: \(K(x^{(i)}, x) = \phi(x^{(i)})^T \phi(x)\) with \(\phi(x) = (x_1^2, x_2^2, \sqrt{2} x_1 x_2)\). By using this kernel, SVM can find a linear separator (or hyperplane).
Regularization
Regularization can be applied to SVM by adding an extra term to the minimization function. The extra term relaxes the constraints by introducing slack variables. C is the regularization parameter.
\(\underset{W, b, \xi_i}{min}\frac{||W||^2}{2} \color{red} {+ C \sum_{i=1}^m \xi_i }\), such that \(y^{(i)}(W^T x^{(i)} + b)\) >= 1 - \(\xi_i\) for all values \((x^{(i)},y^{(i)})\) and \(\xi_i >= 0\).
The regularization parameter serves as a degree of importance that is given to miss-classifications. SVM poses a quadratic optimization problem that looks for maximizing the margin between both classes and minimizing the amount of miss-classifications. However, for non-separable problems, in order to find a solution, the miss-classification constraint must be relaxed.
As C grows larger, wrongly classified examples are penalized more heavily; as C tends to zero, more miss-classifications are allowed.
One-Class SVM
One-Class SVM is used to detect outliers. It finds a minimal hypersphere around the data. In other words, we need to minimize the radius R.
The minimization function for this model is defined as follow:
\(\underset{R, a, \xi_i}{min}\ R^2 + C \sum_{i=1}^m \xi_i \), such that \(||x^{(i)} - a||^2 \leq R^2 + \xi_i\) for all \(x^{(i)}\) and \(\xi_i >= 0\).
Perceptron Algorithm
Similar to SVM, the Perceptron algorithm tries to separate positive and negative examples. There is no notion of a kernel or margin maximization when using the perceptron algorithm. The algorithm can be trained online.
Averaged Perceptron Algorithm
Averaged Perceptron Algorithm is an extension of the standard Perceptron algorithm; it uses averaged weights and biases.
The classification of a new example is calculated by evaluating the following expression: \(\widehat{y} = sign(\sum_{i=1}^m c_i.w_i^T.x + b_i)\)
Note: the sign of w.x+b is related to the location of the point x.
w.x+b = w.(x1+x2)+b = w.x2, because x1 is located on the blue line and therefore w.x1+b = 0.
w.x2 is positive if the two vectors have the same direction.
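A sketch of the standard online perceptron update described above (the toy data and the `perceptron` helper are my assumptions; the averaged variant would additionally accumulate the weights kept between mistakes):

```python
def perceptron(data, epochs=200):
    """Standard online perceptron: whenever an example is misclassified
    (y * (w.x + b) <= 0), nudge w and b toward that example."""
    dim = len(data[0][0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in data:                                   # y in {-1, +1}
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
    return w, b

# Linearly separable toy set (labels follow the sign of x1 + x2 - 3).
data = [((1.0, 1.0), -1), ((0.0, 1.0), -1),
        ((3.0, 2.0), +1), ((2.0, 3.0), +1)]
w, b = perceptron(data)
predict = lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
```

For separable data the perceptron convergence theorem bounds the number of mistakes, so a fixed epoch budget suffices here.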
K-means
Given a training set \(S = \{x^{(i)}\}_{i=1}^m\). We define k as the number of clusters (groups) to create. Each cluster has a centroid \(\mu_k\).
1- For each example, find the j that minimizes: \(c^{(i)} := arg\ \underset{j}{min} ||x^{(i)} - \mu_j||^2\)
2- Update \(\mu_j:=\frac{\sum_{i=1}^m 1\{c^{(i)} = j\} x^{(i)}}{\sum_{i=1}^m 1\{c^{(i)} = j\}}\)
Repeat the operations 1 & 2 until convergence. |
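Steps 1 and 2 above can be sketched as a plain-Python Lloyd iteration (the naive "first k points" initialization is an assumption for determinism; real implementations usually use random restarts or k-means++):

```python
def kmeans(points, k, iters=20):
    """Lloyd's algorithm: repeat step 1 (assign each point to its nearest
    centroid) and step 2 (move each centroid to the mean of its cluster)."""
    mu = [points[i] for i in range(k)]          # naive init: first k points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in points:                        # step 1: nearest centroid
            j = min(range(k),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(x, mu[j])))
            clusters[j].append(x)
        mu = [tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else mu[j]
              for j, cl in enumerate(clusters)] # step 2: recompute means
    return mu

# Two obvious groups near (0, 0) and (10, 10).
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centroids = sorted(kmeans(pts, 2))
```
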
In a $N$ dimensional phase space if I have $M$ 1st class and $S$ 2nd class constraints, then I have $N-2M-S$ degrees of freedom in phase space. How can I calculate the degrees of freedom in configuration space?
The number of
physical degrees of freedom (DOF) or dynamical variables is simply the number of generalized positions whose evolution is given by a second order in time differential equation. Using the OP's notation, the number of DOF is $${1\over 2}(N-2M-S).$$ For instance, in electrodynamics the phase-space is six-dimensional $\{A_i,F_{0i}\}_{i=1}^3$ and the Gauss law is a first class constraint. Thus $N=6,\, M=1, S=0$, so that there are two DOF corresponding to the two polarizations of electromagnetic waves or the two photon helicities.
One can take an alternative and equivalent point of view in which the phase-space consists of $\{A_{\mu},F_{0\mu}\}_{\mu=0}^3$ and besides the Gauss law one has the first class constraint $F_{00}\approx0$ (the symbol $\approx$ is read "weakly zero" and means zero when the constraints are verified, you may perfectly write $=$) which Poisson commutes with the Gauss law and both are therefore first class constraints. Then $N=8,\, M=2, S=0$ and the number of DOF is still two, of course.
In the case of the gravitational field, the counting of DOF is analogous. The phase-space consists of $\{h_{ab},p_{ab}\}_{a=1,b=1}^{a=3,b=3}$, with $h_{ab}$ the components of the spatial metric and $p_{ab}$ their conjugated momenta. The four $(0,\mu)$ Einstein equations are not dynamical equations —since they do not contain second order temporal derivatives— but first class constraints. Hence $N=12,\, M=4,\, S=0$ so that the number of DOF is two corresponding to the two polarizations of gravitational waves.
However, consider the case of the Proca field (a vector field of mass $m$). Now the phase-space consists of $\{A_{\mu},F_{0\mu}\}_{\mu=0}^3$ and there are two constraints, $F_{00}\approx 0$ and $\partial_i\, F_{0i}=m^2A_0$ (I am considering a theory with no matter fields besides the vector field; if one added other fields, there would be a density of charge $\rho$ on the right-hand side), the latter of which reduces to the Gauss law when $m=0$, as in the electromagnetic case. However, now, due to the mass term, the two constraints do not Poisson commute, so the constraints are second class. Hence $N=8,\, M=0, S=2$ and the number of degrees of freedom is three, corresponding to the three helicities of a massive vector particle.
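The counting rule used in these examples can be wrapped in a tiny function (an illustration only; for field theories the counting is understood per spacetime point):

```python
def dof(N, M, S):
    """Configuration-space degrees of freedom for an N-dimensional phase
    space with M first-class and S second-class constraints."""
    assert (N - 2 * M - S) % 2 == 0, "physical phase space must be even-dimensional"
    return (N - 2 * M - S) // 2

em      = dof(N=6,  M=1, S=0)   # Maxwell: two photon polarizations
gravity = dof(N=12, M=4, S=0)   # gravity: two graviton polarizations
proca   = dof(N=8,  M=0, S=2)   # Proca field: three helicities
```
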
With $M$ 1st class constraints there should be imposed $M$ gauge-fixing conditions.
So the dimension of the physical phase space$^1$ is $N-2M-S$.
The dimension of the physical configuration space$^2$ is $\frac{N-2M-S}{2}$.
In other words, there are $\frac{N-2M-S}{2}$ physical degrees of freedom (d.o.f.), cf. e.g. this Phys.SE post.
--
$^1$ Phase space is the space of generalized positions and momenta.
$^2$ Configuration space consists of generalized positions. |
Suppose you have an action $S(\epsilon) = S_1 + S_2 + \epsilon\, S_\mathrm{int}$. Assume that $S_1$ is gauge invariant under the action of the group $G$ and $S_2$ is gauge invariant under the action of the group $H$, such that the action $S_1 + S_2$ is gauge invariant under the action of $G\times H$. Suppose that $S_\mathrm{int}$ breaks the gauge group down to $F \subset G\times H$, that is, the action $S(\epsilon)$ is gauge invariant under the action of $F$ only.
This implies that $$ S(0)=S_1+S_2= \lim\limits_{\epsilon\, \rightarrow\, 0}\,S(\epsilon) $$ has a wider gauge group than $S(\epsilon)$, that is, the gauge symmetry of the action $S(\epsilon)$ is enhanced when sending $\epsilon$ to $0$. For clarity, by "gauge invariant" I mean that the theory has a redundancy of description.
Does this imply that the parameter $\epsilon$ is technically natural?
To clarify, I mean "
natural" in the sense of 't Hooft, e.g. as discussed in
The question is motivated by the fact that I could only find the concept of technical naturalness associated with global symmetries in the literature. On the other hand, I did not find any statement saying that it does not hold in the case of gauge symmetries.
EDIT: I can provide a simpler example to clarify even more what I mean. Consider the Proca Lagrangian density for a real massive spin-1 field,
$$ \mathcal{L}=-\dfrac{1}{4}F^{\mu\nu}F_{\mu\nu}+\dfrac{1}{2}m^2A_\mu A^\mu, \qquad F_{\mu\nu}= \partial_\mu A_\nu - \partial_\nu A_\mu. $$
The corresponding Proca action is not invariant under the gauge group $U(1)$, but taking the limit $m\rightarrow 0$ gives us the action for a free photon, which is gauge invariant under $U(1)$. Hence, sending $m\rightarrow 0$ enhances the gauge symmetry of the action.
In this particular case, my question becomes: is the Proca mass $m$ natural in the sense of 't Hooft? In other words, is a small Proca mass $m$ protected against large quantum corrections, the latter being proportional to the small mass itself? |
Speaking of LaTeX, if you, like us, want to write LaTeX math code in your blog, you should have a look at the LaTeX WP plugin.
The output will become something like this:
Or maybe like this:
These are not as pretty as real output, but they sure are prettier than writing math the hard way:
f(x) = \sum_{n=0}^\infty \frac{f^{(n)}(0)}{n!}x^n = ...
I’m looking forward to serving you with more math-stuff in the future!
This is going to be my first post in this blog, and along with some other more technical posts, I will dual-post it in my own blog over at dragly.org.
If you are using Thunderbird for e-mail and want to send mathematical formulas to your contacts, you should consider the LaTeX It! plugin or the Equations plugin. The former requires you to have LaTeX and ImageMagick installed, while Equations uses an external server to generate your images.
Continue reading |
Torque in rotational motion is the equivalent of force in linear motion. It is the primary quantity that sets an object into rotational motion: a torque applied to an object rotates it with an angular acceleration inversely proportional to its moment of inertia. Mathematically, this is given by-
Torque
\(\tau =I\alpha\)
Where,
\(\tau\) is the torque (the rotational ability of a body), \(I\) is the moment of inertia (by virtue of its mass), and \(\alpha\) is the angular acceleration (the rate of change of angular velocity).

Relationship between Torque and Moment of Inertia
For a simple understanding, we can think of it as Newton’s Second Law for rotation: torque is the equivalent of force, moment of inertia is the equivalent of mass, and angular acceleration is the equivalent of linear acceleration. Rotational motion also obeys Newton’s First Law of motion.
Consider an object under rotatory motion with mass m, moving along an arc of a circle with radius r. From Newton’s Second Law of motion we know that,
\(F= ma \Rightarrow a=\frac{F}{m}\) ———(1)
Substitute the linear acceleration \(a\) with its angular counterpart. That is-
We know that acceleration \(a=\frac{d}{dt}\left ( \frac{ds}{dt} \right )\)
For rotatory motion, \(s = r\theta\). Thus, substituting, we get-\(a=\frac{d}{dt}\left ( \frac{r\,d\theta }{dt} \right )\) \(=r\frac{d}{dt}\left ( \frac{d\theta }{dt} \right )\)
Thus \(a=r\alpha\), where \(\alpha\) is the angular acceleration ———-(2)
Similarly, replacing force \(F\) by torque \(\tau\), we get-\(\tau =Fr\) \(\Rightarrow F=\frac{\tau }{r}\) ——–(3)
Substituting equation (2) and (3) in (1) we get-\((1)\Rightarrow r\alpha =\frac{\left ( \frac{\tau}{r} \right )}{m}\) \(\Rightarrow r\alpha =\frac{\tau }{rm}\) \(\Rightarrow \tau =mr^{2}\alpha\)
We know that moment of inertia \(I =mr^{2}\)
Thus, substituting it in the above equation we get-\(\Rightarrow \tau =I\alpha\)
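As a quick numerical cross-check of the derivation above, with hypothetical values for a point mass:

```python
# Hypothetical values for a point mass on a circle: check tau = I * alpha
m = 2.0    # mass in kg
r = 0.5    # radius in m
F = 10.0   # tangential force in N

I = m * r**2        # moment of inertia of a point mass
tau = F * r         # torque produced by the tangential force
alpha = tau / I     # angular acceleration from tau = I * alpha

# Consistency with the linear form: a = F/m must equal r * alpha
assert abs(F / m - r * alpha) < 1e-12
```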
Hope you understood the relation between torque and moment of inertia in rotational motion.
And I think people said that reading first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs
Yeah here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass the class
I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like yeah the algebraists all place out so I'm assuming everyone here is an analyst and doesn't care about commutative algebra
Then the year after another guy taught and made it mostly commutative algebra + a bit of varieties + Cech cohomology at the end from nowhere and everyone was like uhhh. Then apparently this year was more of an experiment, in part from requests to make things more geometric
It's got 3 "underground" floors (quotation marks because the place is on a very tall hill so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is in the top floor and overlooks the city and lake, it's real nice
The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 has the grad lounge mainly
And then there's one weird area called the math bunker that's trickier to access, you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building)
It's hard to get a feel for which places are good at undergrad math. Highly ranked places are known for having good researchers but there's no "How well does this place teach?" ranking which is kinda more relevant if you're an undergrad
I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore)
In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to do more serious things, and that seems to become a bonus
One of my professors said it to describe a bunch of REUs, basically boils down to problems that some of these give their students which nobody really cares about but which undergrads could work on and get a paper out of
@TedShifrin i think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine) and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students
In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies || x ( t ) - 0 || < \varepsilon.$$The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $...
"If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$" this took me way longer than it should have
Well, $A$ has these two distinct eigenvalues meaning that $A$ can be diagonalised to a diagonal matrix with these two values as its diagonal. What will that mean when multiplied with a given vector (x,y), and how will the magnitude of that vector change?
Alternately, compute the operator norm of $A$ and see if it is larger or smaller than 2, 1/2
Generally, speaking, given. $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$ we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$
Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements worth of variables is taking up more than a single line in Latex, and that is really bugging my desire to keep things straight.
hmm... I wonder if you can argue about the rationals forming a ring (hence using commutativity, associativity and distributivitity). You cannot do that for the field you are calculating, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$
for example writing $x = ac+bd\delta$ and $y = bc+ad$ we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$ and then you can argue with the ring property of $\Bbb{Q}$ thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$
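That componentwise bookkeeping can be offloaded to a computer algebra system; a short sympy sketch of the associativity check, with symbols chosen to match the discussion:

```python
import sympy as sp

a, b, c, d, e, f, delta = sp.symbols('a b c d e f delta')

def mul(p, q):
    # components of (p0 + p1*sqrt(delta)) * (q0 + q1*sqrt(delta))
    return (p[0]*q[0] + p[1]*q[1]*delta, p[0]*q[1] + p[1]*q[0])

alpha, beta, gamma = (a, b), (c, d), (e, f)
left = mul(mul(alpha, beta), gamma)    # (alpha x beta) x gamma
right = mul(alpha, mul(beta, gamma))   # alpha x (beta x gamma)

# both components agree after expansion, so the product is associative
assert all(sp.expand(l - r) == 0 for l, r in zip(left, right))
```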
I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great, that it's the standard working theory by which we judge everything.
I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D
Associativity proofs in general have no shortcuts for arbitrary algebraic systems; that is why non-associative algebras are more complicated and need things like Lie algebra machinery and morphisms to make sense of
One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally any set to its power set).theorem ...
The axiom of triviality is also used extensively in computer verification languages... take Cantor's Diagonalization theorem. It is obvious.
(but seriously, the best tactic is over powered...)
Extensions are such a powerful idea. I wonder if there exists an algebraic structure such that any extension of it will produce a contradiction. Oh wait, there are maximal algebraic structures such that, given some ordering, they are the largest possible, e.g. the surreals are the largest possible ordered field
It says on Wikipedia that any ordered field can be embedded in the Surreal number system. Is this true? How is it done, or if it is unknown (or unknowable) what is the proof that an embedding exists for any ordered field?
Here's a question for you: We know that no set of axioms will ever decide all statements, from Gödel's Incompleteness Theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement?
"Infinity exists" comes to mind as a potential candidate statement.
Well, take ZFC as an example, CH is independent of ZFC, meaning you cannot prove nor disprove CH using anything from ZFC. However, there are many equivalent axioms to CH or derives CH, thus if your set of axioms contain those, then you can decide the truth value of CH in that system
@Rithaniel That is really the crux of those rambles about infinity I made in this chat some weeks ago. I wonder whether one can show that is false by finding a finite sentence and procedure that can produce infinity
but so far failed
Put it in another way, an equivalent formulation of that (possibly open) problem is:
> Does there exists a computable proof verifier P such that the axiom of infinity becomes a theorem without assuming the existence of any infinite object?
If you were to show that you can attain infinity from finite things, you'd have a bombshell on your hands. It's widely accepted that you can't. In fact, I believe there are some proofs floating around that you can't attain infinity from the finite.
My philosophy of infinity however is not good enough as implicitly pointed out when many users who engaged with my rambles always managed to find counterexamples that escape every definition of an infinite object I proposed, which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy of infinity book
The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items.The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science...
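For illustration, a minimal dynamic-programming sketch of the knapsack variant described above (unlimited copies of each item); the item data is hypothetical:

```python
def unbounded_knapsack(weights, values, capacity):
    """Max total value with unlimited copies of each item, total weight <= capacity."""
    dp = [0] * (capacity + 1)  # dp[c] = best value achievable with weight budget c
    for cap in range(1, capacity + 1):
        for w, v in zip(weights, values):
            if w <= cap:
                dp[cap] = max(dp[cap], dp[cap - w] + v)
    return dp[capacity]

# hypothetical items: weights [2, 3], values [3, 4], capacity 6
# best choice is three copies of the first item, total value 9
assert unbounded_knapsack([2, 3], [3, 4], 6) == 9
```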
O great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem
hmm...
By the fundamental theorem of algebra, every complex polynomial $P$ can be expressed as:
$$P(x) = a_n\prod_{k=1}^n (x - \lambda_k)$$
If the coefficients of $P$ are natural numbers, then all $\lambda_k$ are algebraic
Thus given $s$ transcendental, to minimise $|P(s)|$ will be given as follows:
The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases.
In number theory, a Liouville number is a real number $x$ with the property that, for every positive integer $n$, there exist integers $p$ and $q$ with $q > 1$ such that $$0<\left|x-\frac{p}{q}\right|<\frac{1}{q^{n}}.$$
Do these still exist if the axiom of infinity is blown up?
Hmmm...
Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sum:
$$\sum_{k=1}^M \frac{1}{b^{k!}}$$
The resulting partial sums for each $M$ form a monotonically increasing sequence, and the corresponding series converges by the ratio test
therefore by induction, there exists some number $L$ that is the limit of the above partial sums. The proof of transcendence can then proceed as usual; thus transcendental numbers can be constructed in a finitist framework
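The partial sums above can be computed exactly; a small Python sketch using exact rational arithmetic (base $b=10$ chosen for illustration):

```python
from fractions import Fraction
import math

def liouville_partial(b, M):
    """Exact partial sum of sum_{k=1}^{M} b^(-k!), as a rational number."""
    return sum(Fraction(1, b ** math.factorial(k)) for k in range(1, M + 1))

# For b = 10 the partial sums place 1s in the factorial decimal positions:
s = liouville_partial(10, 3)   # 1/10 + 1/100 + 1/10**6 = 0.110001
assert s == Fraction(110001, 10**6)
```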
There's this theorem in Spivak's book of Calculus:Theorem 7Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and$$f'...
and neither Rolle nor mean value theorem need the axiom of choice
Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached from any algebraic procedure
Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. For anything else, I need to finish that book before commenting
typo: neither Rolle nor mean value theorem need the axiom of choice nor an infinite set
The solubility of gases depends on the pressure: an increase in pressure increases solubility, whereas a decrease in pressure decreases solubility. This statement is formalized in Henry's Law, which states that
the solubility of a gas in a liquid is directly proportional to the pressure of that gas above the surface of the solution. This can be expressed in the equation:
\[C = k \times P_{gas} \]
Introduction
\(C\): the solubility of a gas in solvent
\(k\): the proportionality constant
\(P_{gas}\): the partial pressure of the gas above the solution
Example 1: Using Henry's Law
The aqueous solubility at 20 degrees Celsius of Ar at 1.00 atm is equivalent to 33.7 mL Ar(g), measured at STP, per liter of water. What is the molarity of Ar in water that is saturated with air at 1.00 atm and 20 degrees Celsius? Air contains 0.934% Ar by volume. Assume that the volume of the water does not change when it becomes saturated with air.
STP molar volume: (22.414 L = 22,414 mL)
\[\begin{eqnarray} k_{Ar} &=& \frac{C}{P_{Ar}} \\ &=& \frac{(\frac{33.7 mL\ Ar}{1 L})(\frac{1 mol\ Ar}{22,414 mL})}{1 atm} \\ &=& 0.00150 M\ atm^{-1} \end{eqnarray} \]
\[\begin{eqnarray} C &=& k_{Ar}P_{Ar} \\ &=& 0.0015 M\ atm^{-1} \times 0.00934 atm \\ &=& 1.40 \times 10^{-5} M\ Ar \end{eqnarray} \]
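The same arithmetic can be reproduced in a few lines of Python (values copied from the example above):

```python
# Reproducing the worked Ar example in Python
V_Ar_mL = 33.7                 # mL of Ar(g), at STP, dissolved per litre of water
stp_molar_volume_mL = 22_414   # mL per mole of ideal gas at STP

# Henry's law constant k = C / P, with P = 1.00 atm
k_Ar = (V_Ar_mL / stp_molar_volume_mL) / 1.00   # mol L^-1 atm^-1
P_Ar = 0.00934                                  # atm, Ar's share of 1.00 atm of air
C = k_Ar * P_Ar                                 # molarity of dissolved Ar

assert abs(k_Ar - 0.00150) < 1e-5
assert abs(C - 1.40e-5) < 1e-7
```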
The example above illustrates the following: (A) At low pressure, a gas has a low solubility. Decreased pressure allows more gas molecules to be present in the air, with very little being dissolved in solution. (B) At high pressure, a gas has a high solubility. Increased pressure forces the gas molecules into the solution, relieving the pressure that is applied, so that fewer gas molecules remain in the air and more of the gas is in solution.
Common examples of pressure effects on gas solubility can be demonstrated with carbonated beverages, such as a bottle of soda (above). Once the pressure within the unopened bottle is released, CO\(_2\)(g) is released from the solution as bubbles or fizzing.

Deep Sea Divers and "The Bends"
In order for deep sea divers to breathe underwater, they must inhale highly compressed air in deep water, resulting in more nitrogen dissolving in their blood, tissues, and other joints. If the diver returns to the surface too rapidly, the nitrogen gas diffuses out of the blood too quickly and causes pain and possibly death. This condition is known as "the bends."
To prevent the bends, one can return to the surface slowly so that the gases will diffuse slowly and adjust to the partial decrease in pressure, or breathe a mixture of compressed helium and oxygen gas, because helium is only one-fifth as soluble in blood as nitrogen. Think of a human body under water as a soda bottle under pressure. Imagine dropping the bottle and trying to open it. In order to prevent the soda from fizzing out, you open the cap slowly to let the pressure decrease. The atmosphere is approximately 78% nitrogen and 21% oxygen, but the body primarily uses the oxygen. Under water, however, the high pressure of the surrounding water causes more nitrogen to dissolve in our blood and tissue. And like the bottle of soda, if the body ascends to the surface too quickly, the nitrogen is released too quickly and creates bubbles in the blood.
References
Petrucci, et al. General Chemistry: Principles & Modern Applications. 9th ed. Upper Saddle River, NJ: Pearson Prentice Hall, 2007.
http://www.elmhurst.edu/~chm/vchembook/174temppres.html
More information on the bends can be found here: http://www.rescuediver.org/med/bends.htm
Contributors
Michelle Hoang (UCD) |
Document Type: Original Article

Authors
1. Ivan Franko National University of Lviv, Ukraine
2. Jan Kochanowski University, Poland
Abstract
For a constant $K\geq 1$ let $\mathfrak{B}_K$ be the class of pairs $(X,(\mathbf e_n)_{n\in\omega})$ consisting of a Banach space $X$ and an unconditional Schauder basis $(\mathbf e_n)_{n\in\omega}$ for $X$, having the unconditional basic constant $K_u\le K$. Such pairs are called $K$-based Banach spaces. A based Banach space $X$ is rational if the unit ball of any finite-dimensional subspace spanned by finitely many basic vectors is a polyhedron whose vertices have rational coordinates in the Schauder basis of $X$.
Using the technique of Fraïssé theory, we construct a rational $K$-based Banach space $\big(\mathbb U_K,(\mathbf e_n)_{n\in\omega}\big)$ which is $\mathfrak{RI}_K$-universal in the sense that each basis preserving isometry $f:\Lambda\to\mathbb U_K$ defined on a based subspace $\Lambda$ of a finite-dimensional rational $K$-based Banach space $A$ extends to a basis preserving isometry $\bar f:A\to\mathbb U_K$ of the based Banach space $A$. We also prove that the $K$-based Banach space $\mathbb U_K$ is almost $\mathfrak{FI}_1$-universal in the sense that any base preserving $\varepsilon$-isometry $f:\Lambda\to\mathbb U_K$ defined on a based subspace $\Lambda$ of a finite-dimensional $1$-based Banach space $A$ extends to a base preserving $\varepsilon$-isometry $\bar f:A\to\mathbb U_K$ of the based Banach space $A$. On the other hand, we show that no almost $\mathfrak{FI}_K$-universal based Banach space exists for $K>1$. The Banach space $\mathbb U_K$ is isomorphic to the complementably universal Banach space for the class of Banach spaces with an unconditional Schauder basis, constructed by Pełczyński in 1969.
Keywords
Main Subjects |
Find the eigenvalues and eigenvectors of the following matrix and express the
matrix in the form of $P=Ee^{\lambda t}E^{-1}$ where $E$ are the eigenvectors and $\lambda$ are the eigenvalues
\begin{bmatrix}0 & 1 & 0 & 0\\ -a^2-b & 0 & b & 0\\ 0 & 0 & 0 & 1\\b & 0 & -a^2-b & 0\end{bmatrix} Find the matrix $P$
What I tried: First I tried finding the eigenvalues by taking the characteristic polynomial and then solving it to get the four eigenvalues. Then for each eigenvalue I tried finding the corresponding basis vector. Combining the basis vectors gives the eigenvector matrix $E$. Taking the inverse gives $E^{-1}$, after which a matrix multiplication gives the matrix $P$. While I know the steps behind solving this problem, my difficulty lies in the inherent tediousness of carrying out each step to get the correct answer. Is there a simpler way to solve this problem without having to go through all the difficult steps, or at least is there mathematical software that could help me solve it? Could anyone help me with finding the matrix $P$? Thanks
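Since the question asks about software: computer algebra systems handle exactly this. A short sympy sketch, with hypothetical numeric values for $a$ and $b$ to keep the output small:

```python
import sympy as sp

# Hypothetical numeric values for a and b, to keep the symbolic work small
a_val, b_val = 1, 2
A = sp.Matrix([
    [0, 1, 0, 0],
    [-a_val**2 - b_val, 0, b_val, 0],
    [0, 0, 0, 1],
    [b_val, 0, -a_val**2 - b_val, 0],
])

# E has the eigenvectors as columns, D the eigenvalues on the diagonal
E, D = A.diagonalize()
assert sp.simplify(A - E * D * E.inv()) == sp.zeros(4, 4)

# P = E * exp(D t) * E^{-1}, using the matrix exponential of the diagonal part
t = sp.Symbol('t')
P = E * (D * t).exp() * E.inv()
```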
I worked out the four eigenvalues
$$\lambda_{1}=-ia$$ $$\lambda_{2}=ia$$
$$\lambda_{3}=-\sqrt{-a^2-2b}$$ $$\lambda_{4}=\sqrt{-a^2-2b}$$
The eigenvector matrix $E$ is $$\begin{bmatrix}1 & 1 & 1 & 1\\ -ia & ia & -\sqrt{-a^2-2b}& \sqrt{-a^2-2b}\\ 1 & 1 & -1 & -1\\-ia & ia & \sqrt{-a^2-2b}& -\sqrt{-a^2-2b}\end{bmatrix}$$ |
I should calculate the limit $\lim \limits_ {x \to 2} \left(\frac{x^2+2x-8}{x^2-2x}\right)$, although I noticed that the expression is undefined at $x = 2$. Is the limit undefined? Otherwise, with which steps should I go on to calculate the limit?
We have that
$$\frac{x^2+2x-8}{x^2-2x}=\frac{\color{red}{(x-2)}(x+4)}{x\color{red}{(x-2)}}=\frac{x+4}{x}$$
and then take the limit.
To clarify why we are allowed to cancel out the $(x-2)$ factor refer to the related
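For a machine check of the cancellation, sympy computes the limit directly:

```python
import sympy as sp

x = sp.Symbol('x')
expr = (x**2 + 2*x - 8) / (x**2 - 2*x)

# the limit exists even though the expression is undefined at x = 2
assert sp.limit(expr, x, 2) == 3

# the cancelled form agrees with the original away from x = 2
assert sp.simplify(expr - (x + 4)/x) == 0
```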
Given $$\lim_{x\rightarrow2}\dfrac{x^2+2x-8}{x^2-2x}=\lim_{x\rightarrow2}\dfrac{\color{red}{(x-2)}(x+4)}{x\color{red}{(x-2)}}=\lim_{x\rightarrow2}\dfrac{x+4}{x}=\lim_{x\rightarrow2}\left(1+\dfrac4x\right) = 3$$
OR
You could also use L'Hopital's rule, since both numerator and denominator vanish at $x=2$:
$$\lim_{x\rightarrow2}\dfrac{x^2+2x-8}{x^2-2x}=\lim_{x\rightarrow2}\dfrac{2x+2}{2x-2}=\dfrac{2(2)+2}{2(2)-2}=\dfrac{6}{2}=3$$ |
https://doi.org/10.1351/goldbook.C01036
The fractional variation of the resonance frequency of a nucleus in nuclear magnetic resonance (NMR) spectroscopy in consequence of its magnetic environment. The chemical shift of a nucleus, \(\delta \), is expressed as a ratio involving its frequency, \(\nu _{\mathrm{cpd}}\), relative to that of a standard, \(\nu _{\mathrm{ref}}\), and defined as: \[\delta = \frac{\nu _{\text{cpd}}- \nu _{\text{ref}}}{\nu _{\text{ref}}}\] \(\delta \)-values are normally expressed in \(\mathrm{ppm}\). For \(^{1}\)H and \(^{13}\)C NMR the reference signal is usually that of tetramethylsilane (TMS), strictly speaking in dilute solution in CDCl\(_3\). Other references are used in the older literature and for other solvents, such as D\(_2\)O. Resonance lines to high frequency from the TMS signal have positive, and resonance lines to low frequency from TMS have negative, \(\delta \)-values (arising from relative deshielding and shielding respectively). The archaic terms 'downfield' and 'upfield' should no longer be used. For nuclei other than \(^{1}\)H, chemical shifts are expressed either in the same manner relative to an agreed substance containing the relevant nuclide or relative to the \(^{1}\)H resonance of TMS as \(\mathit{\Xi}\) values, defined in the references below.

Sources:
Green Book, 3rd ed., p. 29
PAC, 2001, 73, 1795 (NMR nomenclature. Nuclear spin properties and conventions for chemical shifts (IUPAC Recommendations 2001)), on page 1807
PAC, 2008, 80, 59 (Further conventions for NMR shielding and chemical shifts (IUPAC Recommendations 2008)), on page 61 |
Skills to Develop
Set up a linear equation to solve a real-world application.
Use a formula to solve a real-world application.
Josh is hoping to get an \(A\) in his college algebra class. He has scores of \(75\), \(82\), \(95\), \(91\), and \(94\) on his first five tests. Only the final exam remains, and the maximum number of points that can be earned is \(100\). Is it possible for Josh to end the course with an \(A\)? A simple linear equation will give Josh his answer.
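As a sketch of Josh's problem (assuming, hypothetically, that an A requires a 90 average over six equally weighted tests; the cutoff is not stated above):

```python
# Josh's scores so far; the A cutoff of a 90 average over six equally
# weighted tests is an assumption for illustration, not stated in the text.
scores = [75, 82, 95, 91, 94]
target_average = 90

needed_on_final = target_average * (len(scores) + 1) - sum(scores)
# 540 - 437 = 103, more than the 100 points available on the final
assert needed_on_final == 103
```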
Figure 2.4.1 Credit: Kevin Dooley
Many real-world applications can be modeled by linear equations. For example, a cell phone package may include a monthly service fee plus a charge per minute of talk-time; it costs a widget manufacturer a certain amount to produce
x widgets per month plus monthly operating charges; a car rental company charges a daily fee plus an amount per mile driven. These are examples of applications we come across every day that are modeled by linear equations. In this section, we will set up and use linear equations to solve such problems. Setting up a Linear Equation to Solve a Real-World Application
To set up or model a linear equation to fit a real-world application, we must first determine the known quantities and define the unknown quantity as a variable. Then, we begin to interpret the words as mathematical expressions using mathematical symbols. Let us use the car rental example above. In this case, a known cost, such as \($0.10/mi\), is multiplied by an unknown quantity, the number of miles driven. Therefore, we can write \(0.10x\). This expression represents a variable cost because it changes according to the number of miles driven.
If a quantity is independent of a variable, we usually just add or subtract it, according to the problem. As these amounts do not change, we call them fixed costs. Consider a car rental agency that charges \($0.10/mi\) plus a daily fee of \($50\). We can use these quantities to model an equation that can be used to find the daily car rental cost \(C\) .
\(C=0.10x+50 \tag{2.4.1}\)
When dealing with real-world applications, there are certain expressions that we can translate directly into math. Table \(\PageIndex{1}\) lists some common verbal expressions and their equivalent mathematical expressions.
One number exceeds another by \(a\): \(x,\ x+a\)
Twice a number: \(2x\)
One number is \(a\) more than another number: \(x,\ x+a\)
One number is \(a\) less than twice another number: \(x,\ 2x−a\)
The product of a number and \(a\), decreased by \(b\): \(ax−b\)
The quotient of a number and the number plus \(a\) is three times the number: \(\dfrac{x}{x+a}=3x\)
The product of three times a number and the number decreased by \(b\) is \(c\): \(3x(x−b)=c\)
How to: Given a real-world problem, model a linear equation to fit it
1. Identify known quantities.
2. Assign a variable to represent the unknown quantity. If there is more than one unknown quantity, find a way to write the second unknown in terms of the first.
3. Write an equation interpreting the words as mathematical operations.
4. Solve the equation. Be sure the solution can be explained in words, including the units of measure.
Find a linear equation to solve for the following unknown quantities: One number exceeds another number by \( 17\) and their sum is \( 31\). Find the two numbers.
Solution
Let \( x\) equal the first number. Then, as the second number exceeds the first by \(17\), we can write the second number as \( x +17\). The sum of the two numbers is \(31\). We usually interpret the word is as an equal sign.
\[\begin{align*} x+(x+17)&= 31\\ 2x+17&= 31\\ 2x&= 14\\ x&= 7 \end{align*}\]
\[\begin{align*} x+17&= 7 + 17\\ &= 24\\ \end{align*}\]
The two numbers are \(7\) and \(24\) .
Exercise \(\PageIndex{1}\)
Find a linear equation to solve for the following unknown quantities: One number is three more than twice another number. If the sum of the two numbers is \(36\), find the numbers.
Answer
\(11\) and \(25\)
There are two cell phone companies that offer different packages. Company A charges a monthly service fee of \($34\) plus \($.05/min\) talk-time. Company B charges a monthly service fee of \($40\) plus \($.04/min\) talk-time.
a. Write a linear equation that models the packages offered by both companies.
b. If the average number of minutes used each month is \(1,160\), which company offers the better plan?
c. If the average number of minutes used each month is \(420\), which company offers the better plan?
d. How many minutes of talk-time would yield equal monthly statements from both companies?

Solution
a.
The model for Company A can be written as \( A =0.05x+34\). This includes the variable cost of \( 0.05x\) plus the monthly service charge of \($34\). Company B’s package charges a higher monthly fee of \($40\), but a lower variable cost of \( 0.04x\). Company B’s model can be written as \( B =0.04x+40\).
b.
If the average number of minutes used each month is \(1,160\), we have the following:
\[\begin{align*} \text{Company A}&= 0.05(1,160)+34\\ &= 58+34\\ &= 92 \end{align*}\]
\[\begin{align*} \text{Company B}&= 0.04(1,160)+40\\ &= 46.4+40\\ &= 86.4 \end{align*}\]
So, Company B offers the lower monthly cost of \($86.40\) as compared with the \($92\) monthly cost offered by Company A when the average number of minutes used each month is \(1,160\).
c.
If the average number of minutes used each month is \(420\), we have the following:
\[\begin{align*} \text{Company A}&= 0.05(420)+34\\ &= 21+34\\ &= 55 \end{align*}\]
\[\begin{align*} \text{Company B}&= 0.04(420)+40\\ &= 16.8+40\\ &= 56.8 \end{align*}\]
If the average number of minutes used each month is \(420\), then Company A offers a lower monthly cost of \($55\) compared to Company B’s monthly cost of \($56.80\).
d.
To answer the question of how many talk-time minutes would yield the same bill from both companies, we should think about the problem in terms of \((x,y)\) coordinates: At what point are both the \(x\)-value and the \(y\)-value equal? We can find this point by setting the equations equal to each other and solving for \(x\).\[\begin{align*} 0.05x+34&= 0.04x+40\\ 0.01x&= 6\\ x&= 600 \end{align*}\]
Check the \(x\)-value in each equation.
\(0.05(600)+34=64\)
\(0.04(600)+40=64\)
Therefore, a monthly average of \(600\) talk-time minutes renders the plans equal. See Figure \(\PageIndex{2}\).
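The break-even computation can be cross-checked in a few lines of Python:

```python
# Cross-check of the break-even point for the two phone plans
def plan_a(minutes):
    return 0.05 * minutes + 34   # Company A: $34/month plus $0.05/min

def plan_b(minutes):
    return 0.04 * minutes + 40   # Company B: $40/month plus $0.04/min

# solve 0.05x + 34 = 0.04x + 40  ->  0.01x = 6  ->  x = 600
x = (40 - 34) / (0.05 - 0.04)
assert abs(x - 600) < 1e-9
assert abs(plan_a(600) - plan_b(600)) < 1e-9   # both bills are $64
assert abs(plan_a(600) - 64) < 1e-9
```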
Figure \(\PageIndex{2}\)
Exercise \(\PageIndex{2}\)
Find a linear equation to model this real-world application: It costs ABC electronics company \($2.50\) per unit to produce a part used in a popular brand of desktop computers. The company has monthly operating expenses of \($350\) for utilities and \($3,300\) for salaries. What are the company’s monthly expenses?
Answer
\(C=2.5x+3,650\)
Using a Formula to Solve a Real-World Application
Many applications are solved using known formulas. The problem is stated, a formula is identified, the known quantities are substituted into the formula, the equation is solved for the unknown, and the problem’s question is answered. Typically, these problems involve two equations representing two trips, two investments, two areas, and so on. Examples of formulas include the
area of a rectangular region,
\[A=LW \tag{2.4.2}\]
the
perimeter of a rectangle,
\[P=2L+2W \tag{2.4.3}\]
and the
volume of a rectangular solid,
\[V=LWH. \tag{2.4.4}\]
When there are two unknowns, we find a way to write one in terms of the other because we can solve for only one variable at a time.
It takes Andrew \(30\; min\) to drive to work in the morning. He drives home using the same route, but it takes \(10\; min\) longer, and he averages \(10\; mi/h\) less than in the morning. How far does Andrew drive to work?
Solution
This is a distance problem, so we can use the formula \(d =rt\), where distance equals rate multiplied by time. Note that when rate is given in \(mi/h\), time must be expressed in hours. Consistent units of measurement are key to obtaining a correct solution.
First, we identify the known and unknown quantities. Andrew’s morning drive to work takes \(30\; min\), or \(\frac{1}{2}\; h\), at rate \(r\). His drive home takes \(40\; min\), or \(\frac{2}{3}\; h\), and his speed averages \(10\; mi/h\) less than the morning drive. Both trips cover distance \(d\). A table, such as Table \(\PageIndex{2}\), is often helpful for keeping track of information in these types of problems.
             \(d\)      \(r\)        \(t\)
To Work      \(d\)      \(r\)        \(\frac{1}{2}\)
To Home      \(d\)      \(r-10\)     \(\frac{2}{3}\)
Write two equations, one for each trip.
\[d=r\left(\dfrac{1}{2}\right) \qquad \text{To work} \nonumber\]
\[d=(r-10)\left(\dfrac{2}{3}\right) \qquad \text{To home} \nonumber\]
As both equations equal the same distance, we set them equal to each other and solve for \(r\).
\[\begin{align*} r\left (\dfrac{1}{2} \right )&= (r-10)\left (\dfrac{2}{3} \right )\\ \dfrac{1}{2}r&= \dfrac{2}{3}r-\dfrac{20}{3}\\ \dfrac{1}{2}r-\dfrac{2}{3}r&= -\dfrac{20}{3}\\ -\dfrac{1}{6}r&= -\dfrac{20}{3}\\ r&= -\dfrac{20}{3}(-6)\\ r&= 40 \end{align*}\]
We have solved for the rate of speed to work, \(40\; mi/h\). Substituting \(40\) into the rate on the return trip yields \(30\; mi/h\). Now we can answer the question. Substitute the rate back into either equation and solve for \(d\).
\[\begin{align*}d&= 40\left (\dfrac{1}{2} \right )\\ &= 20 \end{align*}\]
The distance between home and work is \(20\; mi\).
Analysis
Note that we could have cleared the fractions in the equation by multiplying both sides of the equation by the LCD to solve for \(r\) .
\[\begin{align*} r\left (\dfrac{1}{2} \right)&= (r-10)\left (\dfrac{2}{3} \right )\\ 6\times r\left (\dfrac{1}{2} \right)&= 6\times (r-10)\left (\dfrac{2}{3} \right )\\ 3r&= 4(r-10)\\ 3r&= 4r-40\\ r&= 40 \end{align*}\]
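For readers who want to check the arithmetic with code, here is a quick Python check of both trip equations (the variable names are mine, not from the text); the `Fraction` type keeps the halves and thirds exact:

```python
from fractions import Fraction

# Morning rate r = 40 mi/h should make both trip distances agree:
# to work: d = r * (1/2);  to home: d = (r - 10) * (2/3).
r = Fraction(40)
to_work = r * Fraction(1, 2)
to_home = (r - 10) * Fraction(2, 3)

print(to_work, to_home)  # 20 20 -- both trips cover 20 miles
```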
Exercise \(\PageIndex{3}\)
On Saturday morning, it took Jennifer \(3.6\; h\) to drive to her mother’s house for the weekend. On Sunday evening, due to heavy traffic, it took Jennifer \(4\; h\) to return home. Her speed was \(5\; mi/h\) slower on Sunday than on Saturday. What was her speed on Sunday?
Answer
\(45\; mi/h\)
The perimeter of a rectangular outdoor patio is \(54\; ft\). The length is \(3\; ft\) greater than the width. What are the dimensions of the patio?
Solution
The perimeter formula is standard: \(P=2L+2W\). We have two unknown quantities, length and width. However, we can write the length in terms of the width as \(L =W+3\). Substitute the perimeter value and the expression for length into the formula. It is often helpful to make a sketch and label the sides as in Figure \(\PageIndex{3}\).
Figure \(\PageIndex{3}\)
Now we can solve for the width and then calculate the length.
\[\begin{align*} P&= 2L + 2W\\ 54&= 2(W+3)+2W\\ 54&= 2W+6+2W\\ 54&= 4W+6\\ 48&= 4W\\ W&= 12 \end{align*}\]
\[\begin{align*} L&= 12+3\\ L&= 15 \end{align*}\]
The dimensions are \(L = 15\; ft\) and \(W = 12\; ft\).
Exercise \(\PageIndex{4}\)
Find the dimensions of a rectangle given that the perimeter is \(110\; cm\) and the length is \(1\; cm\) more than twice the width.
Answer
\(L=37\; cm\), \(W=18\; cm\)
The perimeter of a tablet of graph paper is \(48\; in\). The length is \(6\; in\). more than the width. Find the area of the graph paper.
Solution
The standard formula for area is \(A =LW\); however, we will solve the problem using the perimeter formula. We use the perimeter formula because we know enough information about the perimeter to solve for one of the unknowns. As both perimeter and area use length and width as dimensions, they are often used together to solve a problem such as this one.
We know that the length is \(6\; in\). more than the width, so we can write length as \(L =W+6\). Substitute the value of the perimeter and the expression for length into the perimeter formula and find the length.
\[\begin{align*} P&= 2L + 2W\\ 48&= 2(W+6)+2W\\ 48&= 2W+12+2W\\ 48&= 4W+12\\ 36&= 4W\\ W&= 9 \end{align*}\]
\[\begin{align*}L&= 9+6\\ L&= 15 \end{align*}\]
Now, we find the area given the dimensions of \(L = 15\; in\). and \(W = 9\; in\).
\[\begin{align*} A&= LW\\ A&=15(9)\\ A&= 135\space{in.}^2 \end{align*}\]
The area is \(135\space{in.}^2\).
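The same substitution steps can be checked numerically. This short Python sketch (my addition, not part of the original solution) solves \(48 = 4W + 12\) and recomputes the area:

```python
# Perimeter P = 2L + 2W with L = W + 6 collapses to P = 4W + 12.
P = 48
W = (P - 12) / 4      # solve 48 = 4W + 12  ->  W = 9
L = W + 6             # L = 15
area = L * W

print(W, L, area)  # 9.0 15.0 135.0
```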
Exercise \(\PageIndex{5}\)
A game room has a perimeter of \(70\; ft\). The length is five more than twice the width. How many \(ft^2\) of new carpeting should be ordered?
Answer
\(250\space{ft}^2\)
Find the dimensions of a shipping box given that the length is twice the width, the height is \(8\; in\), and the volume is \(1,600\space{in.}^3\).
Solution
The formula for the volume of a box is given as \(V =LWH\), the product of length, width, and height. We are given that \(L =2W\) and \(H =8\). The volume is \(1,600\; \text{cubic inches}\).
\[\begin{align*} V&= LWH\\ 1600&= (2W)W(8)\\ 1600&= 16W^2\\ 100&= W^2\\ 10&= W \end{align*}\]
The dimensions are \(L = 20\; in\), \(W= 10\; in\), and \(H = 8\; in\).
Analysis
Note that the square root of \(W^2\) would result in a positive and a negative value. However, because we are describing width, we can use only the positive result.
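As a quick check in Python (my addition, not from the original text), solving \(1600 = 16W^2\) numerically also shows why only the positive root is kept:

```python
import math

V, H = 1600, 8
# V = L*W*H with L = 2W  ->  V = 16 W^2, so W = sqrt(V / 16).
W = math.sqrt(V / 16)   # math.sqrt returns only the positive root: 10.0
L = 2 * W

print(L, W, H)           # 20.0 10.0 8
print(L * W * H == V)    # True
```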
Key Concepts
- A linear equation can be used to solve for an unknown in a number problem. See Example.
- Applications can be written as mathematical problems by identifying known quantities and assigning a variable to unknown quantities. See Example.
- There are many known formulas that can be used to solve applications. Distance problems, for example, are solved using the \(d = rt\) formula. See Example.
- Many geometry problems are solved using the perimeter formula \(P =2L+2W\), the area formula \(A =LW\), or the volume formula \(V =LWH\). See Example, Example, and Example.
In two dimensions, Poisson's equation has the fundamental solution,
$$G(\mathbf{r},\mathbf{r'}) = \frac{\log|\mathbf{r}-\mathbf{r'}|}{2\pi}. $$
I was trying to derive this using the Fourier-transformed equation, and along the way I encountered a divergent integral. I was able to extract the correct function eventually, but the math was sketchy at best. I am hoping someone could look at my work and possibly justify it. Here goes.
First off, make the assumption that $G$ only depends on the difference $\mathbf{v}=\mathbf{r}-\mathbf{r'}$. Now, let's write $G$ as an inverse Fourier Transform and take the Laplacian,
$$\nabla^2G(\mathbf{v}) = \int\frac{d^2k}{(2\pi)^2}(-k^2)e^{i\mathbf{k} \cdot \mathbf{v}} \hat{G}(\mathbf{k}) = \delta(\mathbf{v}) $$
For this to be a delta function, we require that $\hat{G}(\mathbf{k}) = -1/k^2$. Now taking the inverse Fourier Transform of $G$...
\begin{align*} G(\mathbf{v}) &= -\int\frac{d^2k}{(2\pi)^2} \frac{e^{i\mathbf{k}\cdot\mathbf{v}}}{k^2} = -\int\limits_{0}^{\infty} \int\limits_{0}^{2\pi} \frac{dkd\theta}{(2\pi)^2} \frac{e^{i|\mathbf{k}||\mathbf{v}|\cos\theta}}{k}\\ &= - \int\limits_0^{\infty}\frac{dk}{2\pi}\frac{J_0(kv)}{k} \end{align*}
Here $J_0$ is a Bessel function of the first kind. This integral is divergent as far as I can tell, but let's continue onward and take a derivative with respect to $|\mathbf{v}|$.
\begin{align*} \frac{dG}{dv} &= \int\limits_0^{\infty}\frac{dk}{2\pi} J_1(kv)\\ &= \frac{1}{2\pi v} \end{align*}
Then integrating this and setting the constant to zero we get the desired result...
$$ G(\mathbf{v}) = \frac{\log v}{2\pi} $$
Clearly this was a lot of heuristics, but I am hoping someone could justify some of this with distributions etc... Could someone tell me what on earth I have done and why it worked? |
Since you know the center and the point of tangency, you can compute the slope of the radius to the point of tangency. Since the center is $(3, 0)$ and the point of tangency is $(2, 2\sqrt{2})$, the slope of the radius to the point of tangency is
$$m_r = \frac{2\sqrt{2} - 0}{2 - 3} = -2\sqrt{2}$$
The tangent line to the circle is perpendicular to the radius at the point of tangency. If two non-vertical lines are perpendicular, their slopes are negative reciprocals, so the slope of the tangent line to the circle at $(2, 2\sqrt{2})$ is the negative reciprocal of the slope of the radius to the point of tangency. Thus, the tangent line has slope
$$m_{\perp} = -\frac{1}{-2\sqrt{2}} = \frac{1}{2\sqrt{2}} \cdot \frac{\sqrt{2}}{\sqrt{2}} = \frac{\sqrt{2}}{4}$$
You can then use the point-slope equation
$$y - y_0 = m(x - x_0)$$
to write the equation of the tangent line, where $(x_0, y_0)$ is the point $(2, 2\sqrt{2})$ and $m$ is the slope of the tangent line. The equation of the tangent line to the circle $(x - 3)^2 + y^2 = 9$ at $(2, 2\sqrt{2})$ is
$$y - 2\sqrt{2} = \frac{\sqrt{2}}{4}(x - 2)$$
A reflection in the $x$-axis sends point $(x, y)$ to the point $(x, -y)$. Thus, the reflection of the point $(2, 2\sqrt{2})$ in the $x$-axis is $(2, -2\sqrt{2})$. To find the equation of the tangent line to the circle at this point, follow the steps outlined above with $(2, -2\sqrt{2})$ replacing $(2, 2\sqrt{2})$. |
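As a numeric sanity check (my own addition, assuming NumPy is available), one can rebuild the slope from the center and point of tangency, then confirm the resulting line meets the circle in exactly one point, i.e. substituting the line into the circle equation yields a quadratic with zero discriminant:

```python
import numpy as np

center = np.array([3.0, 0.0])
p = np.array([2.0, 2.0 * np.sqrt(2.0)])

radius_slope = (p[1] - center[1]) / (p[0] - center[0])  # -2*sqrt(2)
m = -1.0 / radius_slope                                 # negative reciprocal

# Substitute y = m*(x - p[0]) + p[1] into (x - 3)^2 + y^2 = 9:
# (1 + m^2) x^2 + (-6 + 2 m c) x + c^2 = 0, where c is the intercept.
c = p[1] - m * p[0]
a = 1 + m**2
b = -6 + 2 * m * c
k = c**2
disc = b**2 - 4 * a * k

print(abs(disc) < 1e-9)  # True -- the line touches the circle exactly once
```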
Category:Uniformly Continuous Functions
This category contains results about Uniformly Continuous Functions.
Let $M_1 = \left({A_1, d_1}\right)$ and $M_2 = \left({A_2, d_2}\right)$ be metric spaces. Then a mapping $f: A_1 \to A_2$ is uniformly continuous if and only if:
$\forall \epsilon \in \R_{>0}: \exists \delta \in \R_{>0}: \forall x, y \in A_1: d_1 \left({x, y}\right) < \delta \implies d_2 \left({f \left({x}\right), f \left({y}\right)}\right) < \epsilon$
Let $I \subseteq \R$ be a real interval.
Let $f: I \to \R$ be a real function. Then $f$ is uniformly continuous if and only if: for every $\epsilon > 0$ there exists $\delta > 0$ such that the following property holds: for every $x, y \in I$ such that $\size {x - y} < \delta$ it happens that $\size {\map f x - \map f y} < \epsilon$. Formally: $f: I \to \R$ is uniformly continuous if and only if the following property holds: $\forall \epsilon > 0: \exists \delta > 0: \paren {x, y \in I, \size {x - y} < \delta \implies \size {\map f x - \map f y} < \epsilon}$ Subcategories
This category has only the following subcategory.
U
► Uniformly Continuous Real Functions (1 C, 3 P)
Pages in category "Uniformly Continuous Functions"
The following 3 pages are in this category, out of 3 total. |
Consider a $C^k$, $k\ge 2$, Lorentzian manifold $(M,g)$ and let $\Box$ be the usual wave operator $\nabla^a\nabla_a$. Given $p\in M$, $s\in\Bbb R,$ and $v\in T_pM$, can we find a neighborhood $U$ of $p$ and $u\in C^k(U)$ such that $\Box u=0$, $u(p)=s$ and $\mathrm{grad}\, u(p)=v$?
The tog is a measure of thermal resistance of a unit area, also known as thermal insulance. It is commonly used in the textile industry and often seen quoted on, for example, duvets and carpet underlay.The Shirley Institute in Manchester, England developed the tog as an easy-to-follow alternative to the SI unit of m2K/W. The name comes from the informal word "togs" for clothing which itself was probably derived from the word toga, a Roman garment.The basic unit of insulation coefficient is the RSI, (1 m2K/W). 1 tog = 0.1 RSI. There is also a clo clothing unit equivalent to 0.155 RSI or 1.55 tog...
The stone or stone weight (abbreviation: st.) is an English and imperial unit of mass now equal to 14 pounds (6.35029318 kg).England and other Germanic-speaking countries of northern Europe formerly used various standardised "stones" for trade, with their values ranging from about 5 to 40 local pounds (roughly 3 to 15 kg) depending on the location and objects weighed. The United Kingdom's imperial system adopted the wool stone of 14 pounds in 1835. With the advent of metrication, Europe's various "stones" were superseded by or adapted to the kilogram from the mid-19th century on. The stone continues...
Can you tell me why this question deserves to be negative?I tried to find faults and I couldn't: I did some research, I did all the calculations I could, and I think it is clear enough . I had deleted it and was going to abandon the site but then I decided to learn what is wrong and see if I ca...
I am a bit confused about classical physics's angular momentum. For an orbital motion of a point mass: if we pick a new coordinate system (one that doesn't move w.r.t. the old one), angular momentum should still be conserved, right? (I calculated a quite absurd result: it is no longer conserved; there is an additional term that varies with time.)
in the new coordinate system: $\vec {L'}=\vec{r'} \times \vec{p'}$
$=(\vec{R}+\vec{r}) \times \vec{p}$
$=\vec{R} \times \vec{p} + \vec L$
where the 1st term varies with time. (Here R is the shift of coordinates; R is constant, but p is, roughly speaking, rotating.)
would anyone kind enough to shed some light on this for me?
From what we discussed, your literary taste seems to be classical/conventional in nature. That book is inherently unconventional in nature; it's not supposed to be read as a novel, it's supposed to be read as an encyclopedia
@BalarkaSen Dare I say it, my literary taste continues to change as I have kept on reading :-)
One book that I finished reading today, The Sense of An Ending (different from the movie with the same title) is far from anything I would've been able to read, even, two years ago, but I absolutely loved it.
I've just started watching the Fall - it seems good so far (after 1 episode)... I'm with @JohnRennie on the Sherlock Holmes books and would add that the most recent TV episodes were appalling. I've been told to read Agatha Christy but haven't got round to it yet
Is it possible to make a time machine ever? Please give an easy answer, a simple one. A simple answer, but a possibly wrong one, is to say that a time machine is not possible. Currently, we don't have either the technology to build one, nor a definite, proven (or generally accepted) idea of how we could build one. — Countto1047 secs ago
@vzn if it's a romantic novel, which it looks like, it's probably not for me - I'm getting to be more and more fussy about books and have a ridiculously long list to read as it is. I'm going to counter that one by suggesting Ann Leckie's Ancillary Justice series
Although if you like epic fantasy, Malazan book of the Fallen is fantastic
@Mithrandir24601 lol it has some love story but its written by a guy so cant be a romantic novel... besides what decent stories dont involve love interests anyway :P ... was just reading his blog, they are gonna do a movie of one of his books with kate winslet, cant beat that right? :P variety.com/2016/film/news/…
@vzn "he falls in love with Daley Cross, an angelic voice in need of a song." I think that counts :P It's not that I don't like it, it's just that authors very rarely do anywhere near a decent job of it. If it's a major part of the plot, it's often either eyeroll worthy and cringy or boring and predictable with OK writing. A notable exception is Stephen Erikson
@vzn depends exactly what you mean by 'love story component', but often yeah... It's not always so bad in sci-fi and fantasy where it's not in the focus so much and just evolves in a reasonable, if predictable way with the storyline, although it depends on what you read (e.g. Brent Weeks, Brandon Sanderson). Of course Patrick Rothfuss completely inverts this trope :) and Lev Grossman is a study on how to do character development and totally destroys typical romance plots
@Slereah The idea is to pick some spacelike hypersurface $\Sigma$ containing $p$. Now specifying $u(p)$ is trivial because the wave equation is invariant under constant perturbations. So that's whatever. But I can specify $\nabla u(p)|\Sigma$ by specifying $u(\cdot, 0)$ and differentiate along the surface. For the Cauchy theorems I can also specify $u_t(\cdot,0)$.
Now take the neighborhood to be $\approx (-\epsilon,\epsilon)\times\Sigma$ and then split the metric like $-dt^2+h$
Do forwards and backwards Cauchy solutions, then check that the derivatives match on the interface $\{0\}\times\Sigma$
Why is it that you can only cool down a substance so far before the energy goes into changing its state? I assume it has something to do with the distance between molecules meaning that intermolecular interactions have less energy in them than making the distance between them even smaller, but why does it create these bonds instead of making the distance smaller / just reducing the temperature more?
Thanks @CooperCape, but this leads me to another question I forgot ages ago
If you have an electron cloud, is the electric field from that electron just some sort of averaged field from some centre of amplitude or is it a superposition of fields each coming from some point in the cloud? |
Every $LL(k)$ grammar is $LR(k)$, but there are $LL(k)$ grammars which are not $LALR(k)$.
There's a simple example in Parsing Theory by Sippu & Soisalon-Soininen:
$$\begin{align}S &\to a A a \mid b A b \mid a B b \mid b B a\\A &\to c \\B &\to c\end{align}$$
The language of this grammar is finite, so it is obviously $LL(k)$. (In this case, $LL(3)$.) The grammar is also $LR(1)$. However, the grammar is not $LALR(k)$ for any value of $k$.
The canonical $LR(k)$ machine has two states with $LR(0)$ itemsets $\{[A\to c \cdot], [B\to c\cdot]\}$. These two states have different lookahead sets in each item, corresponding to the two different predecessor states before shifting $c$. The $LALR$ algorithm merges these two states, thereby losing the distinction between the lookahead sets. This produces two reduce-reduce conflicts. Since there is only one token of lookahead at this point, increasing $k$ would make no difference.
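The merge can be illustrated with a few lines of Python (a hand-built sketch of the two states' lookahead sets, read off the grammar above; this is not output from an actual parser generator):

```python
# Canonical LR(1): the two states sharing the LR(0) core
# {[A -> c .], [B -> c .]} carry different lookaheads, depending on
# whether the input read so far was 'a c' or 'b c'.
state_after_ac = {"A -> c .": {"a"}, "B -> c .": {"b"}}
state_after_bc = {"A -> c .": {"b"}, "B -> c .": {"a"}}

# LALR merges states with identical cores by uniting their lookaheads.
merged = {item: state_after_ac[item] | state_after_bc[item]
          for item in state_after_ac}

# Both reductions now share every lookahead: reduce-reduce conflicts.
conflicts = merged["A -> c ."] & merged["B -> c ."]
print(sorted(conflicts))  # ['a', 'b']
```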
Slightly delayed, Feb’s open problem is by one of the three possible “yours truly”s, Sesh. Looking for more reader participation, hint, hint.

Basic setting: Consider \(f:\{0,1\}^n \rightarrow R\), where \(R\) is some ordered range. There is a natural coordinate-wise partial order denoted by \(\prec\). The function is monotone if for all \(x \prec y\), \(f(x) \leq f(y)\). The distance to monotonicity, \(\epsilon_f\), is the minimum fraction of values that must be changed to make \(f\) monotone. This is an old problem in property testing.

Open problem: Is there an efficient constant factor approximation algorithm for \(\epsilon_f\)? In other words, is there a \(poly(n/\epsilon_f)\) time procedure that can output a value \(\epsilon’ = \Theta(\epsilon_f)\)?

State of the art: Existing monotonicity testers give an \(O(n)\)-approximation for \(\epsilon_f\), so there is much, much room for improvement. (I’d be happy to see an \(O(\log n)\)-approximation.) Basically, it is known that the number of edges of \(\{0,1\}^n\) that violate monotonicity is at least \(\epsilon_f 2^{n-1}\) [GGL+00], [CS13]. A simple exercise (given below) shows that there are at most \(n\epsilon_f 2^n\) violated edges. So just estimate the number of violated edges for an \(O(n)\)-approximation. (Exercise: consider \(S \subseteq \{0,1\}^n\) such that modifying \(f\) on \(S\) makes it monotone. Every violated edge must have an endpoint in \(S\).)

Related work: Fattal and Ron [FR10] is a great place to look at various related problems, especially for hypergrid domains.

References
[CS13] D. Chakrabarty and C. Seshadhri.
Optimal Bounds for Monotonicity and Lipschitz Testing over Hypercubes and Hypergrids. Symposium on Theory of Computing, 2013.
[FR10] S. Fattal and D. Ron.
Approximating the distance to monotonicity in high dimensions . ACM Transactions on Algorithms, 2010.
[GGL+00] O. Goldreich, S. Goldwasser, E. Lehman, D. Ron, and A. Samorodnitsky.
Testing Monotonicity . Combinatorica, 2000. |
I think the most obvious way to do this is to treat entropy as the analogue to variance, if mutual information is the analogue to covariance. Notice that one similarity is that $\mathbb{I}[X;X] = \mathbb{H}[X]$ (as for variance vs covariance) one difference is that (unlike covariance) we have $\mathbb{I}[X;Y] \geq 0 $. Then:$$\mathbb{I}_N[X;Y] = \frac{\mathbb{I}[X;Y]}{\sqrt{\mathbb{H}[X]\phantom{|}}\sqrt{\mathbb{H}[Y]\phantom{|}}}$$
This is on the wikipedia page currently, among other measures. See also this question.
Also note that the following hold:\begin{align}\mathbb{I}[X;Y] &= \mathbb{H}[X] + \mathbb{H}[Y] - \mathbb{H}[X,Y] \\\mathbb{H}[X] + \mathbb{H}[Y] &\geq \mathbb{H}[X,Y] \geq 0 \\\mathbb{H}[X,Y] &= \mathbb{H}[X|Y] + \mathbb{H}[Y]\end{align}
Since Shannon entropy and mutual information are non-negative, so is $\mathbb{I}_N[X;Y]$. Moreover, if $X=Y$, $$ \frac{\mathbb{H}[X] + \mathbb{H}[X] - \mathbb{H}[X,X]}{\sqrt{\mathbb{H}[X]\phantom{|}}\sqrt{\mathbb{H}[X]\phantom{|}}} = 1, $$ so the measure attains its maximum of $1$ for perfectly dependent variables (recall that $\mathbb{I}[X;Y] \leq \min(\mathbb{H}[X], \mathbb{H}[Y]) \leq \sqrt{\mathbb{H}[X]\mathbb{H}[Y]}$).
Another natural normalization is:$$\widetilde{\mathbb{I}}_N[X;Y] = \frac{\mathbb{I}[X;Y]}{{\mathbb{H}[X]}+{\mathbb{H}[Y]}}$$ |
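Here is a small numerical sketch of the first normalization (my addition; the function and variable names are mine), using the identity $\mathbb{I}[X;Y] = \mathbb{H}[X] + \mathbb{H}[Y] - \mathbb{H}[X,Y]$:

```python
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a probability vector; zeros are skipped."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def normalized_mi(pxy):
    """I_N[X;Y] = I[X;Y] / sqrt(H[X] H[Y]) for a joint pmf given as a matrix."""
    hx = entropy(pxy.sum(axis=1))   # marginal entropy H[X]
    hy = entropy(pxy.sum(axis=0))   # marginal entropy H[Y]
    hxy = entropy(pxy.ravel())      # joint entropy H[X,Y]
    return (hx + hy - hxy) / np.sqrt(hx * hy)

# X = Y (perfectly dependent): the normalized measure is 1.
print(normalized_mi(np.array([[0.5, 0.0], [0.0, 0.5]])))
# X, Y independent: mutual information, hence the ratio, is 0.
print(normalized_mi(np.array([[0.25, 0.25], [0.25, 0.25]])))
```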
Introduction to kinematics
Kinematics is the study of how to describe the motion of objects using mathematics. Galileo long ago thought about the parabolic motion of cannonballs. This is one of the earliest applications of kinematics that I'm aware of and this will be our starting point. The study of the motion of objects near Earth's surface is a branch of kinematics called projectile motion. We're going to start off just looking at one-dimensional projectile motion and, later on, we'll also delve into two-dimensional and three-dimensional projectile motion. Now, if your cannonball gets too fast (meaning its speed is a considerable fraction of the speed of light), then we'll have to talk about four-dimensional projectile motion—but for now, let's not go there.
We're going to start by learning from examples and save the abstract generalizations for later, since this, to me, seems like the most pedagogical way of teaching this material. It is also valuable to give a glimpse of topics to come and present them in a way which captures interest, even when the concept is new to the student; after all, it is partly the not-yet-knowing that makes a topic interesting in the first place. We will mention Newton's second law and the concept of force throughout these lessons, and hopefully this friendly first encounter with those ideas will spark some interest. But we'll save a more thorough treatment of them for the lessons that come after kinematics.
Position vectors and displacement
We're going to start off with something very general but we'll see that the results are very important for analyzing one-dimensional projectile motion. In physics, we use something called a
position vector (written as \(\vec{R}\)) to specify the location of an object at each instant of time. You could imagine a set of \(x\), \(y\), and \(z\) Cartesian axes sitting stationary on the ground (say, to make things easier to visualize, on Earth's surface) as in Figure 1. These Cartesian coordinate axes should be thought of as imaginary rulers (that are infinitely long) with a bunch of pink tick marks on them that specify how far away the object is from the origin. Indeed, if you were standing at the origin of this coordinate system, then these rulers would measure how far away the object is in the left-right direction, the up-down direction, and the in-out direction.
The position vector of the object is defined as
$$\vec{R}(t)≡x(t)\hat{i}+y(t)\hat{j}+z(t)\hat{k}.$$
Notice that the position vector is a parametric function of time: at each moment in time \(t\), it specifies the location of the object. So, for example, in Figure 1 you can see that at \(t=0\), if you were standing at the origin of the coordinate system, then the ball would start off right next to you since \(\vec{R}(0)=0\). After a time \(t_1\) has gone by, the object will have moved to a position specified by \(\vec{R}(t_1)\). After waiting an additional time of \(Δt=t_2-t_1\), the object will have moved to a position specified by \(\vec{R}(t_2)\). Now, if you were to run a tape measure down the entire arclength of the red line in Figure 1, you would of course measure some amount; that amount is what we call the distance the object traveled. Distance is a useful concept in everyday life but it, to a large extent, takes a backseat in most discussions of kinematics. The purpose of our entire discussion on position vectors was to develop a far more useful concept called displacement. Let me explain what that is. Let the position vectors \(\vec{R}(t)\) and \(\vec{R}(t_0)\) represent any two arbitrary positions of the object at any arbitrary instants of time. The displacement (represented as \(Δ\vec{R}\)) of the object as it moved from \(\vec{R}(t_0)\) to \(\vec{R}(t)\) is given by the difference of these two vectors:
$$Δ\vec{R}≡\vec{R}(t)-\vec{R}(t_0).\tag{1}$$
First off, there is a big difference between distance and displacement, and the two should never be confused. To see this distinction, let's consider the displacement of the object as it moves from \(\vec{R}(t_1)\) to \(\vec{R}(t_2)\). From Equation (1), we know that this displacement is given by \(Δ\vec{R}=\vec{R}(t_2)-\vec{R}(t_1)\). What does this look like graphically? Notice that in order to get \(Δ\vec{R}\), we had to add the two vectors \(\vec{R}(t_2)\) and \(-\vec{R}(t_1)\). Since adding any two vectors always gives a new vector, it follows that \(Δ\vec{R}\) must also be a vector. But what does the vector \(Δ\vec{R}\) look like on the graph? Well, we already know what the vectors \(\vec{R}(t_1)\) and \(\vec{R}(t_2)\) look like since they are already drawn for us in Figure 1. If we wanted to find out what the sum \(\vec{R}(t_1)+\vec{R}(t_2)\) looked like, we'd just have to put the 'tail' of \(\vec{R}(t_2)\) next to the 'head' of \(\vec{R}(t_1)\) to get our new vector; the same procedure works for adding any two vectors. But before we can find out what \(\vec{R}(t_2)-\vec{R}(t_1)\) looks like, we first have to figure out what \(-\vec{R}(t_1)\) is. To get \(-\vec{R}(t_1)\), all you need to do is rotate \(\vec{R}(t_1)\) by 180 degrees. (The reason why is explained very well in the Linear Algebra section of the Khan Academy.) If we then put the tail of \(-\vec{R}(t_1)\) next to the head of \(\vec{R}(t_2)\), we get the vector \(Δ\vec{R}\) shown in Figures 1 and 2.
From Figure 1, you can immediately see that there are two big differences between distance and displacement. First of all, distance is a scalar (meaning it has only a magnitude) whereas displacement is a vector (meaning it has both magnitude and direction). The second big difference is that the magnitude of the displacement is not the same as the distance. As you can see graphically, the length of the vector \(Δ\vec{R}\) is shorter than the distance from \(\vec{R}(t_1)\) to \(\vec{R}(t_2)\) (as a reminder, the distance is the arclength of the red curve from the point at \(\vec{R}(t_1)\) to the point at \(\vec{R}(t_2)\)). An extreme example is often given in textbooks to make this distinction clear. If you watch a runner run for 1 mile on a circular racetrack and return to his starting point, the total distance he would have run would be 1 mile. But his total displacement \(Δ\vec{R}\) would be zero. The reason is that the position vector \(\vec{R}(0)\) specifying his location at the beginning of the run points at the same location as the position vector \(\vec{R}(t)\) specifying his location at the end of the run (which is, of course, the same location). Since the two position vectors are the same, it follows that their difference \(Δ\vec{R}=\vec{R}(t)-\vec{R}(0)\) must be zero.
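The distinction is easy to check numerically. Below is a short sketch (Python with NumPy; the half-circle trajectory is purely illustrative, not one from the figures) that computes both quantities for a sampled path: the displacement from Equation (1), and the distance as the summed arclength.

```python
import numpy as np

# Sample a trajectory R(t) = (x(t), y(t), z(t)) at closely spaced times.
# The path below (a half circle of radius 1 in the xy-plane) is illustrative.
t = np.linspace(0.0, np.pi, 1001)
R = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)

# Displacement: the difference of the endpoint position vectors, Eq. (1).
displacement = R[-1] - R[0]

# Distance: the arclength, approximated by summing small straight segments.
distance = np.sum(np.linalg.norm(np.diff(R, axis=0), axis=1))

print(np.linalg.norm(displacement))  # 2.0 (straight across the circle)
print(distance)                      # ~3.1416 (half the circumference)
```

As expected, the magnitude of the displacement (the chord, length 2) is shorter than the distance traveled (the arc, length \(\pi\)).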
Instantaneous velocity
Here's where things start to get interesting. The quotient \(\frac{Δ\vec{R}}{Δt}\) gives us the average velocity of the object over the time interval \(Δt\). It isn't exact, though, since the object might be accelerating or decelerating during that time interval. It is often best to start with extreme examples to see when \(\frac{Δ\vec{R}}{Δt}\) can be a good estimate and when it can be totally off. If the runner Usain Bolt runs a total distance of 1 mile in a circle and returns back to his starting point, his average velocity \(\frac{Δ\vec{R}}{Δt}\) over the time interval of that entire run would be zero since \(Δ\vec{R}=0\). The reason this calculation gave us such a poor estimate of how fast he was running is that we did the calculation over a time interval \(Δt\) so big that his running speed varied wildly over that time. But if we considered a very small time interval, his speed would remain roughly constant and the calculation of \(\frac{Δ\vec{R}}{Δt}\) would give us a good estimate.
A long time ago, Newton encountered this same kind of problem. When an apple falls, it accelerates and its speed keeps changing. Newton realized that if you calculated \(\frac{Δ\vec{R}}{Δt}\) over a very small time interval, the object wouldn't accelerate much and its speed would be almost constant. For a small value of \(Δt\), the quotient \(\frac{Δ\vec{R}}{Δt}\) would give you a pretty good estimate of how fast the apple was falling. Now, if you keep choosing values of \(Δt\) that get smaller and smaller, the displacement would keep getting smaller and smaller (each of the two values would approach zero). But (assuming the apple is in motion) the ratio of the two would approach some finite number. This number is called the instantaneous velocity. We can define the instantaneous velocity mathematically as
$$\vec{v}(t)≡\lim_{Δt→0}\frac{Δ\vec{R}}{Δt}=\frac{d\vec{R}}{dt}.\tag{2}$$
Equation (2) can be used to tell you how fast the apple is moving right at the time \(t\).
Instantaneous acceleration
Another very important idea is how quickly the velocity is changing. By taking the time derivative of the position \(\vec{R}\), the velocity \(\vec{v}\) captures how quickly the position \(\vec{R}\) is changing. To find out how quickly the velocity \(\vec{v}\) is changing, we take its time derivative as well; the result is called the acceleration and is defined as
$$\vec{a}(t)≡\lim_{Δt→0}\frac{Δ\vec{v}}{Δt}=\frac{d\vec{v}}{dt}.\tag{3}$$
Displacement, instantaneous velocity, and instantaneous acceleration are the three fundamental quantities used to describe kinematics: the motion of objects.
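Equations (2) and (3) also suggest how to estimate velocity and acceleration from sampled positions: compute the quotient over a small but finite \(Δt\). Here is a sketch (Python with NumPy; the free-fall trajectory \(z(t)=-\tfrac{1}{2}gt^2\) is an assumed illustration, not taken from the text):

```python
import numpy as np

# Positions of a falling apple sampled at small time steps; free fall with
# z(t) = -0.5*g*t**2 is assumed purely as an illustrative trajectory.
g = 9.81
dt = 1e-4
t = np.arange(0.0, 1.0 + dt, dt)
z = -0.5 * g * t**2

# Instantaneous velocity and acceleration approximated by the quotients of
# Eqs. (2) and (3) over the small step dt (central differences).
v = np.gradient(z, dt)
a = np.gradient(v, dt)

print(v[5000])  # ~ -g * 0.5, the velocity at t = 0.5 s
print(a[5000])  # ~ -g
```

For a quadratic trajectory the central-difference quotients land essentially on the exact derivatives, which is the numerical counterpart of the limit \(Δt→0\).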
This article is licensed under a CC BY-NC-SA 4.0 license.
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Beauty production in pp collisions at √s=2.76 TeV measured via semi-electronic decays
(Elsevier, 2014-11)
The ALICE Collaboration at the LHC reports measurement of the inclusive production cross section of electrons from semi-leptonic decays of beauty hadrons with rapidity |y|<0.8 and transverse momentum 1<pT<10 GeV/c, in pp ... |
For the first part take any $\alpha \in L$. Then let $\{\alpha_1,\dots \alpha_n\}$ be the set of distinct elements obtained by $\text{Aut}(L/K)$ acting on $\alpha$. Note that this set is finite, as the extension is algebraic. Now consider:
$$h(x) = \Pi_{i=1}^{n} (x-\alpha_i)$$
Now it's not hard to see that $h(x)$ is fixed by $\text{Aut}(L/K)$, since the action only permutes the factors on the right, and so we have that $h(x) \in L^H[x]$. Moreover it's irreducible: if $g = \min(\alpha,L^H)$, then by the transitivity of the automorphism group on the set $\{\alpha_1,\dots,\alpha_n\}$, each $(x-\alpha_i)$ is a factor of $g$, so $h \mid g$. Hence we conclude that $h = \min(\alpha,L^H)$, and as it's separable, $\alpha$ is separable over $L^H$, and so $L^H \subseteq L$ is a separable extension.
However, I'm not able to prove the second part for infinite extensions. Here's a proof for finite extensions.
We first prove that $\text{Aut}(L/K) = \text{Aut}(L/L^H)$. As $K \subseteq L^H \subseteq L$, every automorphism of $L$ fixing $L^H$ also fixes $K$, and so $\text{Aut}(L/L^H) \subseteq \text{Aut}(L/K)$. Conversely, from the condition, any automorphism of $L$ fixing $K$ also fixes $L^H$, and so we must have $\text{Aut}(L/K) \subseteq \text{Aut}(L/L^H)$. From here we conclude that $\text{Aut}(L/K) = \text{Aut}(L/L^H)$.
This gives us that $K \subseteq L^H$ is also a normal extension, and by the Galois correspondence we have that $|\text{Aut}(L^H/K)| = 1$. (Here's the part where I need finiteness.)
Now let $\beta \in L^H$ and consider $f = \min(\beta,K)$. Let $L_f$ be the splitting field of $f$ over $K$. As $K \subseteq L^H$ is normal, we must have $L_f \subseteq L^H$. But then $|\text{Aut}(L_f/K)| = \frac{|\text{Aut}(L/K)|}{|\text{Aut}(L/L_f)|} = 1$, as $\text{Aut}(L/L_f)$ is normal in $\text{Aut}(L/K)$. But now $\text{Aut}(L_f/K)$ acts transitively on the roots of $f$, and so the only root of $f$ is $\beta$. So if $\beta \not\in K$, then $\deg f \ge 2$, and as its only root is $\beta$, $f$ isn't separable and hence $\beta$ isn't separable. From here we conclude that $K \subset L^H$ is a purely inseparable extension.
If $(W_t^1)_{t \geq 0}$ and $(W_t^2)_{t \geq 0}$ are independent Brownian motions, then $W_t^1$ and $W_t^2$ are independent for any $t \geq 0$. Hence,
$$\mathbb{E}(F) = A \cdot \mathbb{E} \exp \bigg[ \sigma (\varrho W_t^1+\sqrt{1-\varrho^2} W_t^2) \bigg] = A \cdot \mathbb{E}\exp(\sigma \varrho W_t^1) \cdot \mathbb{E}\exp(\sigma \sqrt{1-\varrho^2} W_t^2).$$
Now as $W_t^1$ and $W_t^2$ are Gaussian random variables with mean $0$ and variance $t$, we find
$$\begin{align*} \mathbb{E}(F) &= A \cdot \exp \left(\frac{1}{2} \sigma^2 \varrho^2 t \right) \cdot \exp \left(\frac{1}{2} \sigma^2 (1-\varrho^2) t \right) = A \exp \left(\frac{\sigma^2}{2} t \right). \end{align*}$$
Remark: An alternative argument goes as follows. If $(W_t^1)_t$ and $(W_t^2)_t$ are independent Brownian motions, it is not difficult to see that $$B_t := \varrho W_t^1+ \sqrt{1-\varrho^2} W_t^2$$ also defines a Brownian motion. In particular, $B_t \sim N(0,t)$. Using again that the exponential moments of a Gaussian are known, the claim follows.
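Both routes to \(\mathbb{E}(F)\) are easy to sanity-check by Monte Carlo simulation. A sketch (Python with NumPy; the parameter values are arbitrary choices of mine):

```python
import numpy as np

# Monte Carlo check that E[exp(sigma*(rho*W1_t + sqrt(1-rho^2)*W2_t))]
# equals exp(sigma^2 * t / 2); parameter values are illustrative.
rng = np.random.default_rng(0)
sigma, rho, t, n = 0.5, 0.3, 2.0, 10**6

w1 = rng.normal(0.0, np.sqrt(t), n)  # W_t^1 ~ N(0, t)
w2 = rng.normal(0.0, np.sqrt(t), n)  # W_t^2 ~ N(0, t), independent of W_t^1
estimate = np.mean(np.exp(sigma * (rho * w1 + np.sqrt(1 - rho**2) * w2)))

exact = np.exp(sigma**2 * t / 2)     # exponential moment of B_t ~ N(0, t)
print(estimate, exact)               # the two agree to a few decimal places
```

With \(10^6\) samples the Monte Carlo error is of order \(10^{-3}\), so the estimate and the closed-form value visibly coincide.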
Friends, a colleague of mine showed me a book with an intriguing chapter structure:
I was wondering how we could achieve something similar. I understand that this unusual numbering will break the counters, and I don't expect the chapter numbers to be set automatically, but I'd like to see how it could work even with a manual chapter numbering assignment.
As a minimal working example, I wrote the following document with some chapters added:
\documentclass[oneside]{book}
\usepackage[T1]{fontenc}
\usepackage{lipsum}
\begin{document}
\tableofcontents
\chapter{Before the beginning} % Chapter -1
\lipsum[2]
\chapter{Much ado about nothing} % Chapter 0
\lipsum[2]
\chapter{Small is beautiful} % Chapter 0.000000001
\lipsum[2]
\chapter{All is one} % Chapter 1
\lipsum[2]
\chapter{Murdering irrationals} % Chapter \sqrt{2}
\lipsum[2]
\chapter{Golden Phi} % Chapter \Phi
\lipsum[2]
\end{document}
Any ideas?
1. What is a base b logarithm? Discuss the meaning by interpreting each part of the equivalent equations [latex]{b}^{y}=x[/latex] and [latex]{\mathrm{log}}_{b}x=y[/latex] for [latex]b>0,b\ne 1[/latex].
2. How is the logarithmic function [latex]f\left(x\right)={\mathrm{log}}_{b}x[/latex] related to the exponential function [latex]g\left(x\right)={b}^{x}[/latex]? What is the result of composing these two functions?
3. How can the logarithmic equation [latex]{\mathrm{log}}_{b}x=y[/latex] be solved for x using the properties of exponents?
4. Discuss the meaning of the common logarithm. What is its relationship to a logarithm with base b, and how does the notation differ?
5. Discuss the meaning of the natural logarithm. What is its relationship to a logarithm with base b, and how does the notation differ?
For the following exercises, rewrite each equation in exponential form.
6. [latex]{\text{log}}_{4}\left(q\right)=m[/latex]
7. [latex]{\text{log}}_{a}\left(b\right)=c[/latex]
8. [latex]{\mathrm{log}}_{16}\left(y\right)=x[/latex]
9. [latex]{\mathrm{log}}_{x}\left(64\right)=y[/latex]
10. [latex]{\mathrm{log}}_{y}\left(x\right)=-11[/latex]
11. [latex]{\mathrm{log}}_{15}\left(a\right)=b[/latex]
12. [latex]{\mathrm{log}}_{y}\left(137\right)=x[/latex]
13. [latex]{\mathrm{log}}_{13}\left(142\right)=a[/latex]
14. [latex]\text{log}\left(v\right)=t[/latex]
15. [latex]\text{ln}\left(w\right)=n[/latex]
For the following exercises, rewrite each equation in logarithmic form.
16. [latex]{4}^{x}=y[/latex]
17. [latex]{c}^{d}=k[/latex]
18. [latex]{m}^{-7}=n[/latex]
19. [latex]{19}^{x}=y[/latex]
20. [latex]{x}^{-\frac{10}{13}}=y[/latex]
21. [latex]{n}^{4}=103[/latex]
22. [latex]{\left(\frac{7}{5}\right)}^{m}=n[/latex]
23. [latex]{y}^{x}=\frac{39}{100}[/latex]
24. [latex]{10}^{a}=b[/latex]
25. [latex]{e}^{k}=h[/latex]
For the following exercises, solve for x by converting the logarithmic equation to exponential form.
26. [latex]{\text{log}}_{3}\left(x\right)=2[/latex]
27. [latex]{\text{log}}_{2}\left(x\right)=-3[/latex]
28. [latex]{\text{log}}_{5}\left(x\right)=2[/latex]
29. [latex]{\mathrm{log}}_{3}\left(x\right)=3[/latex]
30. [latex]{\text{log}}_{2}\left(x\right)=6[/latex]
31. [latex]{\text{log}}_{9}\left(x\right)=\frac{1}{2}[/latex]
32. [latex]{\text{log}}_{18}\left(x\right)=2[/latex]
33. [latex]{\mathrm{log}}_{6}\left(x\right)=-3[/latex]
34. [latex]\text{log}\left(x\right)=3[/latex]
35. [latex]\text{ln}\left(x\right)=2[/latex]
For the following exercises, use the definition of common and natural logarithms to simplify.
36. [latex]\text{log}\left({100}^{8}\right)[/latex]
37. [latex]{10}^{\text{log}\left(32\right)}[/latex]
38. [latex]2\text{log}\left(.0001\right)[/latex]
39. [latex]{e}^{\mathrm{ln}\left(1.06\right)}[/latex]
40. [latex]\mathrm{ln}\left({e}^{-5.03}\right)[/latex]
41. [latex]{e}^{\mathrm{ln}\left(10.125\right)}+4[/latex]
For the following exercises, evaluate the base b logarithmic expression without using a calculator.
42. [latex]{\text{log}}_{3}\left(\frac{1}{27}\right)[/latex]
43. [latex]{\text{log}}_{6}\left(\sqrt{6}\right)[/latex]
44. [latex]{\text{log}}_{2}\left(\frac{1}{8}\right)+4[/latex]
45. [latex]6{\text{log}}_{8}\left(4\right)[/latex]
For the following exercises, evaluate the common logarithmic expression without using a calculator.
46. [latex]\text{log}\left(10,000\right)[/latex]
47. [latex]\text{log}\left(0.001\right)[/latex]
48. [latex]\text{log}\left(1\right)+7[/latex]
49. [latex]2\text{log}\left({100}^{-3}\right)[/latex]
For the following exercises, evaluate the natural logarithmic expression without using a calculator.
50. [latex]\text{ln}\left({e}^{\frac{1}{3}}\right)[/latex]
51. [latex]\text{ln}\left(1\right)[/latex]
52. [latex]\text{ln}\left({e}^{-0.225}\right)-3[/latex]
53. [latex]25\text{ln}\left({e}^{\frac{2}{5}}\right)[/latex]
For the following exercises, evaluate each expression using a calculator. Round to the nearest thousandth.
54. [latex]\text{log}\left(0.04\right)[/latex]
55. [latex]\text{ln}\left(15\right)[/latex]
56. [latex]\text{ln}\left(\frac{4}{5}\right)[/latex]
57. [latex]\text{log}\left(\sqrt{2}\right)[/latex]
58. [latex]\text{ln}\left(\sqrt{2}\right)[/latex]
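If you prefer code to a hand calculator for the exercises above, Python's math module covers the common and natural logarithms directly, and an arbitrary base b via the change-of-base identity \(\log_b x = \ln x / \ln b\), which math.log(x, b) implements. A quick sketch using a few of the values above:

```python
import math

# Common and natural logs, as in exercises 54-58, rounded to thousandths.
print(round(math.log10(0.04), 3))        # -1.398
print(round(math.log(15), 3))            # 2.708
print(round(math.log(math.sqrt(2)), 3))  # 0.347

# An arbitrary base b via math.log(x, b), i.e. ln(x)/ln(b).
print(math.log(27, 3))  # ≈ 3.0, since 3**3 = 27
```

The same change-of-base call works for any of the base-b exercises in this section.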
59. Is x = 0 in the domain of the function [latex]f\left(x\right)=\mathrm{log}\left(x\right)[/latex]? If so, what is the value of the function when x = 0? Verify the result.
60. Is [latex]f\left(x\right)=0[/latex] in the range of the function [latex]f\left(x\right)=\mathrm{log}\left(x\right)[/latex]? If so, for what value of x? Verify the result.
61. Is there a number x such that [latex]\mathrm{ln}x=2[/latex]? If so, what is that number? Verify the result.
62. Is the following true: [latex]\frac{{\mathrm{log}}_{3}\left(27\right)}{{\mathrm{log}}_{4}\left(\frac{1}{64}\right)}=-1[/latex]? Verify the result.
63. Is the following true: [latex]\frac{\mathrm{ln}\left({e}^{1.725}\right)}{\mathrm{ln}\left(1\right)}=1.725[/latex]? Verify the result.
64. The exposure index EI for a 35 millimeter camera is a measurement of the amount of light that hits the film. It is determined by the equation [latex]EI={\mathrm{log}}_{2}\left(\frac{{f}^{2}}{t}\right)[/latex], where f is the “f-stop” setting on the camera, and t is the exposure time in seconds. Suppose the f-stop setting is 8 and the desired exposure time is 2 seconds. What will the resulting exposure index be?
65. Refer to the previous exercise. Suppose the light meter on a camera indicates an EI of –2, and the desired exposure time is 16 seconds. What should the f-stop setting be?
66. The intensity levels I of two earthquakes measured on a seismograph can be compared by the formula [latex]\mathrm{log}\frac{{I}_{1}}{{I}_{2}}={M}_{1}-{M}_{2}[/latex] where M is the magnitude given by the Richter Scale. In August 2009, an earthquake of magnitude 6.1 hit Honshu, Japan. In March 2011, that same region experienced yet another, more devastating earthquake, this time with a magnitude of 9.0.[1] How many times greater was the intensity of the 2011 earthquake? Round to the nearest whole number.

[1] http://earthquake.usgs.gov/earthquakes/world/historical.php. Accessed 3/4/2014.
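For readers who want to check the last two applied exercises numerically, here is a brief sketch (Python) evaluating the exposure-index formula with the values from exercise 64, and the intensity ratio from exercise 66:

```python
import math

# Exercise 64: exposure index EI = log2(f^2 / t) with f = 8 and t = 2 s.
f_stop, exposure_time = 8, 2
ei = math.log2(f_stop**2 / exposure_time)
print(ei)  # 5.0

# Exercise 66: the intensity ratio follows from log(I1/I2) = M1 - M2,
# so I1/I2 = 10**(M1 - M2) with M1 = 9.0 and M2 = 6.1.
ratio = 10**(9.0 - 6.1)
print(round(ratio))  # 794
```

The 2011 earthquake was thus roughly 794 times as intense as the 2009 one.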
The fourth method of solving a quadratic equation is by using the quadratic formula, a formula that will solve all quadratic equations. Although the quadratic formula works on any quadratic equation in standard form, it is easy to make errors in substituting the values into the formula. Pay close attention when substituting, and use parentheses when inserting a negative number.
We can derive the quadratic formula by completing the square. We will assume that the leading coefficient is positive; if it is negative, we can multiply the equation by [latex]-1[/latex] and obtain a positive a. Given [latex]a{x}^{2}+bx+c=0[/latex], [latex]a\ne 0[/latex], we will complete the square as follows:

1. First, move the constant term to the right side of the equal sign:
[latex]a{x}^{2}+bx=-c[/latex]
2. As we want the leading coefficient to equal 1, divide through by a:
[latex]{x}^{2}+\frac{b}{a}x=-\frac{c}{a}[/latex]
3. Then, find [latex]\frac{1}{2}[/latex] of the middle term, and add [latex]{\left(\frac{1}{2}\frac{b}{a}\right)}^{2}=\frac{{b}^{2}}{4{a}^{2}}[/latex] to both sides of the equal sign:
[latex]{x}^{2}+\frac{b}{a}x+\frac{{b}^{2}}{4{a}^{2}}=\frac{{b}^{2}}{4{a}^{2}}-\frac{c}{a}[/latex]
4. Next, write the left side as a perfect square. Find the common denominator of the right side and write it as a single fraction:
[latex]{\left(x+\frac{b}{2a}\right)}^{2}=\frac{{b}^{2}-4ac}{4{a}^{2}}[/latex]
5. Now, use the square root property, which gives
[latex]\begin{array}{l}x+\frac{b}{2a}=\pm \sqrt{\frac{{b}^{2}-4ac}{4{a}^{2}}}\hfill \\ x+\frac{b}{2a}=\frac{\pm \sqrt{{b}^{2}-4ac}}{2a}\hfill \end{array}[/latex]
6. Finally, add [latex]-\frac{b}{2a}[/latex] to both sides of the equation and combine the terms on the right side. Thus,
[latex]x=\frac{-b\pm \sqrt{{b}^{2}-4ac}}{2a}[/latex]

A General Note: The Quadratic Formula
Written in standard form, [latex]a{x}^{2}+bx+c=0[/latex], any quadratic equation can be solved using the quadratic formula:

[latex]x=\frac{-b\pm \sqrt{{b}^{2}-4ac}}{2a}[/latex]

where a, b, and c are real numbers and [latex]a\ne 0[/latex].

How To: Given a quadratic equation, solve it using the quadratic formula

1. Make sure the equation is in standard form: [latex]a{x}^{2}+bx+c=0[/latex].
2. Make note of the values of the coefficients and constant term, [latex]a,b[/latex], and [latex]c[/latex].
3. Carefully substitute the values noted in step 2 into the equation. To avoid needless errors, use parentheses around each number input into the formula.
4. Calculate and solve.

Example 9: Solve the Quadratic Equation Using the Quadratic Formula
Solve the quadratic equation: [latex]{x}^{2}+5x+1=0[/latex].
Solution
Identify the coefficients: [latex]a=1,b=5,c=1[/latex]. Then use the quadratic formula:

[latex]x=\frac{-5\pm \sqrt{{\left(5\right)}^{2}-4\left(1\right)\left(1\right)}}{2\left(1\right)}=\frac{-5\pm \sqrt{21}}{2}[/latex]
Example 10: Solving a Quadratic Equation with the Quadratic Formula
Use the quadratic formula to solve [latex]{x}^{2}+x+2=0[/latex].
Solution
First, we identify the coefficients: [latex]a=1,b=1[/latex], and [latex]c=2[/latex].
Substitute these values into the quadratic formula:

[latex]x=\frac{-1\pm \sqrt{{\left(1\right)}^{2}-4\left(1\right)\left(2\right)}}{2\left(1\right)}=\frac{-1\pm \sqrt{-7}}{2}=\frac{-1\pm i\sqrt{7}}{2}[/latex]
The solutions to the equation are [latex]x=\frac{-1+i\sqrt{7}}{2}[/latex] and [latex]x=\frac{-1-i\sqrt{7}}{2}[/latex] or [latex]x=\frac{-1}{2}+\frac{i\sqrt{7}}{2}[/latex] and [latex]x=\frac{-1}{2}-\frac{i\sqrt{7}}{2}[/latex].
Try It 8
Solve the quadratic equation using the quadratic formula: [latex]9{x}^{2}+3x - 2=0[/latex].
The Discriminant
The quadratic formula not only generates the solutions to a quadratic equation, it tells us about the nature of the solutions when we consider the discriminant, or the expression under the radical, [latex]{b}^{2}-4ac[/latex]. The discriminant tells us whether the solutions are real numbers or complex numbers, and how many solutions of each type to expect. The table below relates the value of the discriminant to the solutions of a quadratic equation.
Value of Discriminant | Results
[latex]{b}^{2}-4ac=0[/latex] | One rational solution (double solution)
[latex]{b}^{2}-4ac>0[/latex], perfect square | Two rational solutions
[latex]{b}^{2}-4ac>0[/latex], not a perfect square | Two irrational solutions
[latex]{b}^{2}-4ac<0[/latex] | Two complex solutions

A General Note: The Discriminant
For [latex]a{x}^{2}+bx+c=0[/latex], where [latex]a[/latex], [latex]b[/latex], and [latex]c[/latex] are real numbers, the discriminant is the expression under the radical in the quadratic formula: [latex]{b}^{2}-4ac[/latex]. It tells us whether the solutions are real numbers or complex numbers and how many solutions of each type to expect.

Example 11: Using the Discriminant to Find the Nature of the Solutions to a Quadratic Equation
Use the discriminant to find the nature of the solutions to the following quadratic equations:
1. [latex]{x}^{2}+4x+4=0[/latex]
2. [latex]8{x}^{2}+14x+3=0[/latex]
3. [latex]3{x}^{2}-5x - 2=0[/latex]
4. [latex]3{x}^{2}-10x+15=0[/latex]

Solution
Calculate the discriminant [latex]{b}^{2}-4ac[/latex] for each equation and state the expected type of solutions.
1. [latex]{x}^{2}+4x+4=0[/latex]: [latex]{b}^{2}-4ac={\left(4\right)}^{2}-4\left(1\right)\left(4\right)=0[/latex]. There will be one rational double solution.
2. [latex]8{x}^{2}+14x+3=0[/latex]: [latex]{b}^{2}-4ac={\left(14\right)}^{2}-4\left(8\right)\left(3\right)=100[/latex]. As [latex]100[/latex] is a perfect square, there will be two rational solutions.
3. [latex]3{x}^{2}-5x - 2=0[/latex]: [latex]{b}^{2}-4ac={\left(-5\right)}^{2}-4\left(3\right)\left(-2\right)=49[/latex]. As [latex]49[/latex] is a perfect square, there will be two rational solutions.
4. [latex]3{x}^{2}-10x+15=0[/latex]: [latex]{b}^{2}-4ac={\left(-10\right)}^{2}-4\left(3\right)\left(15\right)=-80[/latex]. There will be two complex solutions.
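The How To steps above condense into a few lines of code. A sketch (Python; the function name is mine, and cmath.sqrt is used so that a negative discriminant automatically yields the complex solutions):

```python
import cmath

def solve_quadratic(a, b, c):
    """Return the two solutions of a*x**2 + b*x + c = 0 (a != 0)."""
    disc = b**2 - 4*a*c          # the discriminant
    root = cmath.sqrt(disc)      # complex square root handles disc < 0
    return (-b + root) / (2*a), (-b - root) / (2*a)

# Example 10: x**2 + x + 2 = 0 has the complex solutions (-1 ± i*sqrt(7))/2.
x1, x2 = solve_quadratic(1, 1, 2)
print(x1, x2)

# Try It 8: 9x**2 + 3x - 2 = 0 has two rational solutions, 1/3 and -2/3.
y1, y2 = solve_quadratic(9, 3, -2)
print(y1, y2)
```

Since cmath.sqrt always returns a complex number, real solutions come back with a zero imaginary part.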
Are there any analytical proofs for the 2nd law of thermodynamics?
Or is it based entirely on empirical evidence?
It's simple to "roughly prove" the second law in the context of statistical physics. Consider the evolution $A\to B$ of macrostate $A$, containing $\exp(S_A)$ microstates, to macrostate $B$, containing $\exp(S_B)$ microstates. By the usual rule for the probability of a transition (summing over final outcomes, averaging over initial states), this evolution is easily shown to be a factor $\exp(S_B-S_A)$ more probable than the inverse process (with velocities reversed). Because $S_B-S_A$ is supposed to be macroscopic, such as $10^{26}$ for a kilogram of matter, the probability in the wrong direction is the exponential of minus this large difference and is zero for all practical purposes.
The more rigorous versions of this proof are always variations of the 1872 proof of the so-called H-theorem by Ludwig Boltzmann:
This proof may be adjusted to particular or general physical systems, both classical and quantum. Please ignore the invasive comments on Wikipedia about Loschmidt's paradox and similar objections, which are based on a misunderstanding. The H-theorem is a proof that the thermodynamic arrow of time (the direction of time in which the entropy increases) is inevitably aligned with the logical arrow of time (the direction in which one is allowed to make assumptions, in the past, in order to evolve or predict other phenomena, in the future).
Every universe of our type has to have a globally well-defined logical arrow of time: it has to know that the future evolves directly (probabilistically, but with objectively calculable probabilities) from the past. So any universe has to distinguish the future and the past logically; it has to have a logical arrow of time, which is also imprinted on our asymmetric reasoning about the past and the future. Given these qualitative assumptions, which are vital for the usage of logic in any setup that works with a time coordinate, the H-theorem shows that a particular quantity can't decrease, at least not by macroscopic amounts, for a closed system.
It was first found empirically, and later derived from various more theoretical assumptions.
There is a proof in Section 7.2 of Chapter 7 (Phenomenological Thermodynamics) of Classical and Quantum Mechanics via Lie algebras, based on a few axioms for thermodynamics, and a proof in Chapter 9 that these laws follow from the standard assumptions in statistical mechanics.
The reversibility objections (Loschmidt's paradox) are unjustified since the Poincare recurrence theorem assumes that the system in question is bounded, which is (most likely) not the case for the real universe.
If we assume time evolution is unitary and hence reversible, and the total size of the phase space subject to constraints based upon the total energy and other conserved quantities is finite, then the only conclusion is Poincaré recurrences cycling ergodically through the entire phase space. Boltzmann fluctuations to states of lower entropy might occur with exponentially suppressed probabilities, but the entropy would increase both toward its past and future. This is so not the second law as Boltzmann's critics never tire of pointing out.
The H-theorem depends upon the stosszahlansatz, the assumption that separate events in the past are uncorrelated, but that is statistically exceedingly improbable assuming a uniform probability distribution.
If the total size of the phase space is infinite, Carroll and Chen proposed that in eternal inflation there can be some state with finite entropy with entropy increasing in both time directions.
To me, the most likely scenario is to drop the assumption of unitarity and replace that with time evolution using Kraus operators acting upon the density matrix.
The problem when you include gravity or other long range forces, is that thermodynamics becomes non extensive. For instance, the energy of the union of two systems is not the sum of the energies of the individual systems.
To handle those cases, generalized entropies have been proposed. Generalized means that these formalisms allow for long-range forces and non-extensivity for certain parameters in the definition of the entropy, but reduce to the classical extensive entropy for a certain value of the parameter. One such extended entropy is the Tsallis entropy. It depends on a parameter $q$, and for $q=1$ it reduces to the standard classical entropy.
It has been shown that this entropy works well in some gravitational systems, where it predicts the correct distribution of temperatures and densities, for instance in a polytropic model of a self-gravitating system. It has also been shown that this entropy satisfies the second law for any parameters $q$ in the classical case, and at least for $q\in(0,2]$ in the quantum case.
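For the curious, the Tsallis entropy of a discrete distribution has the closed form \(S_q = \big(1-\sum_i p_i^q\big)/(q-1)\) and recovers the standard Boltzmann-Gibbs-Shannon entropy \(-\sum_i p_i \ln p_i\) in the limit \(q\to 1\). A small numerical sketch (Python with NumPy; the distribution is an arbitrary example of mine):

```python
import numpy as np

# Tsallis entropy S_q = (1 - sum(p_i**q)) / (q - 1); the q -> 1 limit is
# the Boltzmann-Gibbs-Shannon entropy -sum(p_i * ln(p_i)).
def tsallis_entropy(p, q):
    p = np.asarray(p, dtype=float)
    if abs(q - 1.0) < 1e-12:
        return -np.sum(p * np.log(p))   # the q -> 1 limiting value
    return (1.0 - np.sum(p**q)) / (q - 1.0)

p = [0.5, 0.25, 0.25]                   # illustrative distribution
shannon = -sum(pi * np.log(pi) for pi in p)
print(tsallis_entropy(p, 1.000001))     # approaches the Shannon value
print(shannon)
```

Taking q closer and closer to 1 makes the Tsallis value converge to the classical entropy, which is the reduction mentioned above.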
In the strict sense of the question: no. Physics is science based on empirical evidence. But this applies to all laws of physics. E.g. if by tomorrow you find and confirm experimental evidence which contradict current theories, you have to expand the theories (or invent new ones), and you gain insight in the domain of applicability of your old theory (which still stays valid in its domain).
Of course you might be able to derive/prove the second law from certain assumptions, but if you were to find an experiment where the second law doesn't hold, then you start to know the limitations of your assumptions.
There is actually a very simple derivation of the Second Law in classical thermodynamics, assuming only classical mechanics and the First Law. Here is a brief sketch -- whether this constitutes a "proof" depends largely on taste, the level of rigor desired, and how comfortable you are with thermo-style derivations.
The First Law of Thermodynamics is:
\begin{align} dU = dq + dw \end{align}
where the differentials refer to changes of the system. By convention we have defined a gain of energy or heat by the system as positive, work done on the system as positive, and work done by the system on the surroundings as negative.
Without loss of generality, we assume only pressure-volume work. The work done by the system is quantified by the amount of work done in the surroundings, and so the relevant pressure is the external pressure $P_{ext}$ in the surroundings that the system is pushing against. Then, the work done by the system is
\begin{align} dw = -P_{ext} dV \end{align}
If the system is expanding against the surroundings, $dV \ge 0$, and according to classical mechanics the internal pressure of the system must be at least the external pressure of the surroundings, i.e.
\begin{align} P_{int} \ge P_{ext} \end{align}
For a reversible change, the internal and external pressures are equal ($P_{int} = P_{ext}$), and so the work done by the system in a reversible process is
\begin{align} dw_{rev} = -P_{int} dV \end{align}
Therefore,
\begin{align} P_{int} dV &\ge P_{ext} dV \\ -P_{int} dV &\le -P_{ext} dV \\ dw_{rev} &\le dw \end{align}
which means that the magnitude of work done by the system on the surroundings is maximal during a reversible process. Combining this result with the First Law gives:
\begin{align} dq_{rev} &\ge dq \end{align}
We now define the state function entropy $S$ classically as
\begin{align} dS = \frac{dq_{rev}}{T} \end{align}
From the previous inequality for reversible heat, we see that
\begin{align} dS = \frac{dq_{rev}}{T} \ge \frac{dq}{T} \end{align}
which is the generalized Clausius inequality. This is a complete mathematical statement of the Second Law of Thermodynamics. All consequences of the Second Law can be derived from it, including the proposition that heat always spontaneously flows from hot to cold.
The one missing part is that we did not establish that entropy $S$ is a state function, but this is easy and can be found in any introductory thermodynamics treatment (e.g. [1]).
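The inequality \(dw_{rev} \le dw\) can be illustrated with a concrete case. The sketch below (Python) compares an isothermal ideal-gas expansion done reversibly against one done irreversibly in a single step against the constant final pressure; the ideal-gas model and the numbers are my assumptions, not part of the derivation above, which is general:

```python
import math

# Isothermal expansion of n = 1 mol of ideal gas from V1 to V2 = 2*V1 at
# T = 300 K (illustrative numbers; R is the gas constant in J/(mol*K)).
R, T, n = 8.314, 300.0, 1.0
V1 = 0.010                       # initial volume in m^3
V2 = 2 * V1

# Reversible path: P_ext = P_int = nRT/V at every instant, so
# w_rev = -integral of (nRT/V) dV = -nRT * ln(V2/V1).
w_rev = -n * R * T * math.log(V2 / V1)

# Irreversible path: one step against the constant final pressure.
P_ext = n * R * T / V2
w_irrev = -P_ext * (V2 - V1)

# Both are negative (the system does work on the surroundings), and the
# reversible work is the more negative: w_rev <= w_irrev, |w_rev| >= |w_irrev|.
print(w_rev, w_irrev)
```

Numerically, \(w_{rev} \approx -1729\) J versus \(w_{irrev} \approx -1247\) J, so the system indeed does the most work on the surroundings along the reversible path.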
Defining parameters
Level: \( N \) = \( 8 = 2^{3} \)
Weight: \( k \) = \( 21 \)
Nonzero newspaces: \( 1 \)
Newforms: \( 2 \)
Sturm bound: \(84\)
Trace bound: \(0\)

Dimensions
The following table gives the dimensions of various subspaces of \(M_{21}(\Gamma_1(8))\).
  | Total | New | Old
Modular forms | 43 | 21 | 22
Cusp forms | 37 | 19 | 18
Eisenstein series | 6 | 2 | 4

Decomposition of \(S_{21}^{\mathrm{new}}(\Gamma_1(8))\)
We only show spaces with odd parity, since no modular forms exist when this condition is not satisfied. Within each space \( S_k^{\mathrm{new}}(N, \chi) \) we list the newforms together with their dimension.
Label | \(\chi\) | Newforms | Dimension | \(\chi\) degree
8.21.c | \(\chi_{8}(7, \cdot)\) | None | 0 | 1
8.21.d | \(\chi_{8}(3, \cdot)\) | 8.21.d.a | 1 | 1
8.21.d | \(\chi_{8}(3, \cdot)\) | 8.21.d.b | 18 | 1
In a paper by Joos and Zeh, Z Phys B 59 (1985) 223, they say:This 'coming into being of classical properties' appears related to what Heisenberg may have meant by his famous remark [7]: 'Die "Bahn" entsteht erst dadurch, dass wir sie beobachten.'Google Translate says this means something ...
@EmilioPisanty Tough call. It's technical language, so you wouldn't expect every German speaker to be able to provide a correct interpretation—it calls for someone who know how German is used in talking about quantum mechanics.
Litmus are a London-based space rock band formed in 2000 by Martin (bass guitar/vocals), Simon (guitar/vocals) and Ben (drums), joined the following year by Andy Thompson (keyboards, 2001–2007) and Anton (synths). Matt Thompson joined on synth (2002–2004), while Marek replaced Ben in 2003. Oli Mayne (keyboards) joined in 2008, then left in 2010, along with Anton. As of November 2012 the line-up is Martin Litmus (bass/vocals), Simon Fiddler (guitar/vocals), Marek Bublik (drums) and James Hodkinson (keyboards/effects). They are influenced by mid-1970s Hawkwind and Black Sabbath, amongst others. They...
@JohnRennie Well, they repeatedly stressed their model is "trust work time" where there are no fixed hours you have to be there, but unless the rest of my team are night owls like I am I will have to adapt ;)
I think you can get a rough estimate: COVFEFE is 7 characters, and the probability of a given 7-character string being exactly that is $(1/26)^7\approx 1.2\times 10^{-10}$, so I guess you would have to type on the order of $10^{10}$ characters to start getting a good chance that COVFEFE appears.
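This estimate is easy to check numerically (assuming a uniformly random 26-letter typewriter; note COVFEFE has no self-overlap — no proper prefix equals a suffix — so the expected waiting time is exactly $26^7$ keystrokes):

```python
# Back-of-the-envelope check of the estimate above.
p = (1 / 26) ** 7        # probability a given 7-character window matches
expected = 26 ** 7       # expected keystrokes until the first occurrence

print(f"p = {p:.2e}")            # about 1.2e-10, matching the estimate
print(f"expected = {expected}")  # 8031810176, i.e. roughly 8 billion
```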
@ooolb Consider the hyperbolic space $H^n$ with the standard metric. Compute $$\inf\left\{\left(\int u^{2n/(n-2)}\right)^{-(n-2)/n}\left(4\frac{n-1}{n-2}\int|\nabla u|^2+\int Ru^2\right): u\in C^\infty_c\setminus\{0\}, u\ge0\right\}$$
@BalarkaSen sorry if you were in our discord you would know
@ooolb It's unlikely to be $-\infty$ since $H^n$ has bounded geometry so Sobolev embedding works as expected. Construct a metric that blows up near infinity (incomplete is probably necessary) so that the inf is in fact $-\infty$.
@Sid Eating glamorous and expensive food on a regular basis and not as a necessity would mean you're embracing consumer fetish and capitalism, yes. That doesn't inherently prevent you from being a communist, but it does have an ironic implication.
@Sid Eh. I think there's plenty of room between "I think capitalism is a detrimental regime and think we could be better" and "I hate capitalism and will never go near anything associated with it", yet the former is still conceivably communist.
Then we can end up with people arguing in favor of "Communism" who distance themselves from, say, the USSR and Red China, and people arguing in favor of "Capitalism" who distance themselves from, say, the US and the European Union.
since I come from a rock n' roll background, the first thing is that I prefer a tonal continuity. I don't like beats as much as I like a riff or something atmospheric (that's mostly why I don't like a lot of rap)
I think I liked Madvillainy because it had nonstandard rhyming styles and Madlib's composition
Why is the graviton spin 2, beyond hand-waving? My sense is: you do the gravitational-waves thing of reducing $R_{\mu \nu} = 0$ for a weak gravitational field in harmonic coordinates to $g^{\mu \nu} g_{\rho \sigma,\mu \nu} = 0$, with solution $g_{\mu \nu} = \varepsilon_{\mu \nu} e^{ikx} + \varepsilon_{\mu \nu}^* e^{-ikx}$, then magic?
Existence of entire solutions for semilinear elliptic problems on ${\mathbb R}^{N}$
DOI: http://dx.doi.org/10.12775/TMNA.1999.001
Abstract
In this paper, we consider the existence of positive and negative
entire solutions of the semilinear elliptic problem $$ -\Delta u + u = g(x,u), \quad u \in H^{1}({\mathbb R}^{N})\tag{P} $$ where $N \geq 2$ and $g:{\mathbb R}^{N} \times {\mathbb R} \to {\mathbb R}$ is a continuous function with superlinear growth and $g(x,0) = 0$ on ${\mathbb R}^{N}$.
Keywords
Nonlinear elliptic problem; positive solution; entire solution
Make sure that the dev and test sets come from the same distribution.
Not having a test set might be okay (only a dev set).
So having set up a train, dev, and test set will allow you to iterate more quickly. It will also allow you to more efficiently measure the bias and variance of your algorithm, so you can more efficiently select ways to improve your algorithm.
High bias: underfitting
High variance: overfitting
Assumptions: human error is about 0% (the optimal/Bayes error), and the train set and dev set are drawn from the same distribution.
Train set error | Dev set error | Result
1%              | 11%           | high variance
15%             | 16%           | high bias
15%             | 30%           | high bias and high variance
0.5%            | 1%            | low bias and low variance
High bias –> Bigger network, training longer, advanced optimization algorithms, trying a different network architecture.
High variance –> More data, Try regularization, Find a more appropriate neural network architecture.
In logistic regression, $w \in \mathbb{R}^{n_x}$, $b \in \mathbb{R}$, and the regularized cost is
$$ J(w, b) = \frac{1}{m} \sum_{i=1}^m L(\hat{y}^{(i)}, y^{(i)}) + \frac{\lambda}{2m} \|w\|_2^2, \qquad \|w\|_2^2 = \sum_{j=1}^{n_x} w_j^2 = w^T w. $$
This is called L2 regularization.
$$ J(w, b) = \frac{1}{m} \sum_{i=1}^m L(\hat{y}^{(i)}, y^{(i)}) + \frac{\lambda}{2m} \|w\|_1 $$
This is called L1 regularization; $w$ will end up being sparse. $\lambda$ is called the regularization parameter.
In a neural network, the formula is
$$ J(w^{[1]}, b^{[1]}, \ldots, w^{[L]}, b^{[L]}) = \frac{1}{m} \sum_{i=1}^m L(\hat{y}^{(i)}, y^{(i)}) + \frac{\lambda}{2m} \sum_{l=1}^L \|w^{[l]}\|^2 $$
$$ \|w^{[l]}\|^2 = \sum_{i=1}^{n^{[l-1]}} \sum_{j=1}^{n^{[l]}} \left( w_{ij}^{[l]} \right)^2, \qquad w^{[l]} : (n^{[l-1]}, n^{[l]}) $$
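The Frobenius-norm penalty above translates directly into a few lines of NumPy (the function name and example numbers are mine, for illustration):

```python
import numpy as np

def l2_regularized_cost(unregularized_cost, weights, lam, m):
    """Add the L2 (Frobenius) penalty (lambda / 2m) * sum_l ||W^[l]||^2
    to an already-computed unregularized cost.

    weights: list of weight matrices W^[1], ..., W^[L]
    lam: the regularization parameter lambda; m: number of examples
    """
    penalty = (lam / (2 * m)) * sum(np.sum(W ** 2) for W in weights)
    return unregularized_cost + penalty

# Example: two small layers, lambda = 0.1, m = 4
W1 = np.ones((3, 2))   # ||W1||_F^2 = 6
W2 = np.ones((1, 3))   # ||W2||_F^2 = 3
cost = l2_regularized_cost(1.0, [W1, W2], lam=0.1, m=4)
print(cost)  # 1.0 + (0.1 / 8) * 9 = 1.1125
```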
This matrix norm, it turns out, is called the Frobenius norm of the matrix, denoted with an F in the subscript. L2 norm regularization is also called weight decay.
If $\lambda$ is set too large, the weight matrices $W$ are pushed reasonably close to zero, which zeroes out the impact of many hidden units. In that case, the much simplified neural network becomes a much smaller neural network; it can take you from overfitting to underfitting, but there is a just-right case in the middle.
Dropout goes through each layer of the network and sets some probability of eliminating each node in the neural network. By far the most common implementation of dropout today is inverted dropout.

Inverted dropout, where $kp$ stands for keep-prob:
$$ z^{[i + 1]} = w^{[i + 1]} a^{[i]} + b^{[i + 1]} $$
$$ a^{[i]} = a^{[i]} / kp $$
In the test phase, we don't use dropout or keep-prob.
Why does dropout work? Intuition: a unit can't rely on any one feature, so it has to spread out its weights. Spreading out the weights tends to have the effect of shrinking the squared norm of the weights.
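A minimal NumPy sketch of inverted dropout applied to one layer's activations (the shapes and keep-prob value are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def inverted_dropout(a, keep_prob):
    """Zero each unit with probability 1 - keep_prob, then divide by
    keep_prob so the expected activation is unchanged -- which is why
    no extra scaling is needed at test time."""
    mask = rng.random(a.shape) < keep_prob
    return (a * mask) / keep_prob

a = np.ones((4, 1000))                      # activations: 4 units, 1000 examples
a_drop = inverted_dropout(a, keep_prob=0.8)
print(a_drop.mean())                        # close to 1: expectation preserved
```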
Normalizing inputs can speed up training. Normalizing inputs corresponds to two steps. The first is to subtract out or to zero out the mean. And then the second step is to normalize the variances.
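The two steps can be sketched as follows (the statistics are computed on the training set and then reused verbatim for the test set, so both go through the same transformation):

```python
import numpy as np

def normalize(X_train, X_test):
    """Step 1: subtract out the mean. Step 2: normalize the variances.
    The SAME mu and sigma from the training set are applied to the
    test set."""
    mu = X_train.mean(axis=0)
    sigma = X_train.std(axis=0)
    return (X_train - mu) / sigma, (X_test - mu) / sigma

X_train = np.array([[1.0, 200.0], [3.0, 400.0], [5.0, 600.0]])
X_test = np.array([[3.0, 400.0]])
Xn_train, Xn_test = normalize(X_train, X_test)
print(Xn_train.mean(axis=0))  # ~[0, 0]
print(Xn_train.std(axis=0))   # ~[1, 1]
```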
If the network is very deep, it can suffer from the problem of vanishing or exploding gradients.
If the activation function is ReLU, a good choice for the $w$ initialization is
$$ w^{[l]} = np.random.randn(shape) * np.sqrt(\frac{2}{n^{[l-1]}}). $$
This is called He initialization; for tanh, the variant with $\frac{1}{n^{[l-1]}}$ under the square root is used instead, called Xavier initialization. Another formula is
$$ w^{[l]} = np.random.randn(shape) * np.sqrt(\frac{2}{n^{[l-1]} + n^{[l]}}). $$
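In code, following the shape convention $w^{[l]} : (n^{[l-1]}, n^{[l]})$ used above (function names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)

def he_init(n_prev, n_curr):
    """He initialization for ReLU layers: variance 2 / n^[l-1]."""
    return rng.standard_normal((n_prev, n_curr)) * np.sqrt(2.0 / n_prev)

def xavier_init(n_prev, n_curr):
    """Xavier initialization for tanh layers: variance 1 / n^[l-1]."""
    return rng.standard_normal((n_prev, n_curr)) * np.sqrt(1.0 / n_prev)

W = he_init(n_prev=1000, n_curr=500)
print(W.var())  # close to 2/1000 = 0.002
```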
In order to build up to gradient checking, you need to numerically approximate computations of gradients.
$$ g(\theta) \approx \frac{f(\theta + \epsilon) - f(\theta - \epsilon)}{2 \epsilon} $$
Take the matrices $W$ and vectors $b$, reshape them into vectors, and concatenate them all into a giant vector $\theta$. For each $i$:
$$ d\theta_{approx}[i] = \frac{J(\theta_1, \ldots, \theta_i + \epsilon, \ldots) - J(\theta_1, \ldots, \theta_i - \epsilon, \ldots)}{2\epsilon} \approx d\theta_i = \frac{\partial J}{\partial \theta_i} $$
If
$$ \frac{\|d\theta_{approx} - d\theta\|_2}{\|d\theta_{approx}\|_2 + \|d\theta\|_2} \approx 10^{-7}, $$
that's great. If it is $\approx 10^{-5}$, you should double check; if it is $\approx 10^{-3}$, there may be a bug.
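As a sketch, here is the check applied to a function with a known gradient (any smooth $J$ works; the thresholds are the ones above):

```python
import numpy as np

def grad_check(J, theta, dtheta, eps=1e-7):
    """Compare an analytic gradient dtheta against the centered-difference
    approximation, returning the relative error defined above."""
    approx = np.zeros_like(theta)
    for i in range(theta.size):
        t_plus, t_minus = theta.copy(), theta.copy()
        t_plus[i] += eps
        t_minus[i] -= eps
        approx[i] = (J(t_plus) - J(t_minus)) / (2 * eps)
    num = np.linalg.norm(approx - dtheta)
    den = np.linalg.norm(approx) + np.linalg.norm(dtheta)
    return num / den

# J(theta) = sum(theta^2) has gradient 2 * theta.
theta = np.array([1.0, -2.0, 3.0])
err = grad_check(lambda t: np.sum(t ** 2), theta, 2 * theta)
print(err)  # well below the 1e-7 threshold: the gradient is correct
```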
Okay, now I've rather carefully discussed one example of \(\mathcal{V}\)-enriched profunctors, and rather sloppily discussed another. Now it's time to build the general framework that can handle both these examples.
We can define \(\mathcal{V}\)-enriched categories whenever \(\mathcal{V}\) is a monoidal preorder: we did that way back in Lecture 29. We can also define \(\mathcal{V}\)-enriched functors whenever \(\mathcal{V}\) is a monoidal preorder: we did that in Lecture 31. But to define \(\mathcal{V}\)-enriched profunctors, we need \(\mathcal{V}\) to be a bit better. We can see why by comparing our examples.
Our first example involved \(\mathcal{V} = \textbf{Bool}\). A
feasibility relation
$$ \Phi : X \nrightarrow Y $$ between preorders is a monotone function
$$ \Phi: X^{\text{op}} \times Y\to \mathbf{Bool} . $$ We shall see that a feasibility relation is the same as a \( \textbf{Bool}\)-enriched profunctor.
Our second example involved \(\mathcal{V} = \textbf{Cost}\). I said that a \( \textbf{Cost}\)-enriched profunctor
$$ \Phi : X \nrightarrow Y $$ between \(\mathbf{Cost}\)-enriched categories is a \( \textbf{Cost}\)-enriched functor
$$ \Phi: X^{\text{op}} \times Y \to \mathbf{Cost} $$ obeying some conditions. But I let you struggle to guess those conditions... without enough clues to make it easy!
To fit both our examples in a general framework, we start by considering an arbitrary monoidal preorder \(\mathcal{V}\). \(\mathcal{V}\)-enriched profunctors will go between \(\mathcal{V}\)-enriched categories. So, let \(\mathcal{X}\) and \(\mathcal{Y}\) be \(\mathcal{V}\)-enriched categories. We want to make this definition:
Tentative Definition. A \(\mathcal{V}\)-enriched profunctor
$$ \Phi : \mathcal{X} \nrightarrow \mathcal{Y} $$ is a \(\mathcal{V}\)-enriched functor
$$ \Phi: \mathcal{X}^{\text{op}} \times \mathcal{Y} \to \mathcal{V} .$$ Notice that this handles our first example very well. But some questions appear in our second example - and indeed in general. For our tentative definition to make sense, we need three things:
We need \(\mathcal{V}\) to itself be a \(\mathcal{V}\)-enriched category.
We need any two \(\mathcal{V}\)-enriched categories to have a 'product', which is again a \(\mathcal{V}\)-enriched category.
We need any \(\mathcal{V}\)-enriched category to have an 'opposite', which is again a \(\mathcal{V}\)-enriched category.
Items 2 and 3 work fine whenever \(\mathcal{V}\) is a commutative monoidal poset. We'll see why in Lecture 62.
Item 1 is trickier, and indeed it sounds rather scary. \(\mathcal{V}\) began life as a humble monoidal preorder. Now we're wanting it to be
enriched in itself! Isn't that circular somehow?
Yes! But not in a bad way. Category theory often eats its own tail, like the mythical ouroboros, and this is an example.
To get \(\mathcal{V}\) to become a \(\mathcal{V}\)-enriched category, we'll demand that it be 'closed'. For starters, let's assume it's a monoidal poset, just to avoid some technicalities.

Definition. A monoidal poset \(\mathcal{V}\) is closed if for all elements \(x,y \in \mathcal{V}\) there is an element \(x \multimap y \in \mathcal{V}\) such that
$$ x \otimes a \le y \text{ if and only if } a \le x \multimap y $$ for all \(a \in \mathcal{V}\).
This will let us make \(\mathcal{V}\) into a \(\mathcal{V}\)-enriched category by setting \(\mathcal{V}(x,y) = x \multimap y \). But first let's try to understand this concept a bit!
We can check that our friend \(\mathbf{Bool}\) is closed. Remember, we are making it into a monoidal poset using 'and' as its binary operation: its full name is \( (\lbrace \text{true},\text{false}\rbrace, \wedge, \text{true})\). Then we can take \( x \multimap y \) to be 'implication'. More precisely, we say \( x \multimap y = \text{true}\) iff \(x\) implies \(y\). Even more precisely, we define:
$$ \text{true} \multimap \text{true} = \text{true} $$$$ \text{true} \multimap \text{false} = \text{false} $$$$ \text{false} \multimap \text{true} = \text{true} $$$$ \text{false} \multimap \text{false} = \text{true} . $$
Puzzle 188. Show that with this definition of \(\multimap\) for \(\mathbf{Bool}\) we have
$$ a \wedge x \le y \text{ if and only if } a \le x \multimap y $$ for all \(a,x,y \in \mathbf{Bool}\).
We can also check that our friend \(\mathbf{Cost}\) is closed! Remember, we are making it into a monoidal poset using \(+\) as its binary operation: its full name is \( ([0,\infty], \ge, +, 0)\). Then we can define \( x \multimap y \) to be 'subtraction'. More precisely, we define \(x \multimap y\) to be \(y - x\) if \(y \ge x\), and \(0\) otherwise.
Puzzle 189. Show that with this definition of \(\multimap\) for \(\mathbf{Cost}\) we have
$$ a + x \le y \text{ if and only if } a \le x \multimap y . $$But beware. We have defined the ordering on \(\mathbf{Cost}\) to be the
opposite of the usual ordering of numbers in \([0,\infty]\). So, \(\le\) above means the opposite of what you might expect!
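This is no substitute for the proofs asked for in Puzzles 188 and 189, but both residuation laws are easy to sanity-check by brute force — keeping in mind that for \(\mathbf{Cost}\) the poset order \(\le\) is numeric \(\ge\):

```python
import itertools

# Bool: (a and x) <= y  iff  a <= (x -> y), where False <= True.
def bool_leq(a, b): return (not a) or b
def bool_imp(x, y): return (not x) or y

for a, x, y in itertools.product([False, True], repeat=3):
    assert bool_leq(a and x, y) == bool_leq(a, bool_imp(x, y))

# Cost: the order is reversed, so "a <= b" means a >= b as numbers,
# and x -o y is truncated subtraction max(y - x, 0).
def cost_leq(a, b): return a >= b
def cost_sub(x, y): return max(y - x, 0)

sample = [0, 1, 2, 3.5, 10]
for a, x, y in itertools.product(sample, repeat=3):
    assert cost_leq(a + x, y) == cost_leq(a, cost_sub(x, y))

print("both residuation laws hold on the samples")
```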
Next, two more tricky puzzles. Next time I'll show you in general how a closed monoidal poset \(\mathcal{V}\) becomes a \(\mathcal{V}\)-enriched category. But to appreciate this, it may help to try some examples first:
Puzzle 190. What does it mean, exactly, to make \(\mathbf{Bool}\) into a \(\mathbf{Bool}\)-enriched category? Can you see how to do this by defining
$$ \mathbf{Bool}(x,y) = x \multimap y $$ for all \(x,y \in \mathbf{Bool}\), where \(\multimap\) is defined to be 'implication' as above?
Puzzle 191. What does it mean, exactly, to make \(\mathbf{Cost}\) into a \(\mathbf{Cost}\)-enriched category? Can you see how to do this by defining
$$ \mathbf{Cost}(x,y) = x \multimap y $$ for all \(x,y \in \mathbf{Cost}\), where \(\multimap\) is defined to be 'subtraction' as above?
Note: for Puzzle 190 you might be tempted to say "a \(\mathbf{Bool}\)-enriched category is just a preorder, so I'll use that fact here". However, you may learn more if you go back to the general definition of enriched category and use that! The reason is that we're trying to understand some general things by thinking about two examples.
Puzzle 192. The definition of 'closed' above is an example of a very important concept we keep seeing in this course. What is it? Restate the definition of closed monoidal poset in a more elegant, but equivalent, way using this concept. |
In this chapter we learned about left and right adjoints, and about joins and meets. At first they seemed like two rather different pairs of concepts. But then we learned some deep relationships between them. Briefly:
Left adjoints preserve joins, and monotone functions that preserve enough joins are left adjoints.
Right adjoints preserve meets, and monotone functions that preserve enough meets are right adjoints.
Today we'll conclude our discussion of Chapter 1 with two more bombshells:
Joins are left adjoints, and meets are right adjoints.

Left adjoints are right adjoints seen upside-down, and joins are meets seen upside-down.
This is a good example of how category theory works. You learn a bunch of concepts, but then you learn more and more facts relating them, which unify your understanding... until finally all these concepts collapse down like the core of a giant star, releasing a supernova of insight that transforms how you see the world!
Let me start by reviewing what we've already seen. To keep things simple let me state these facts just for posets, not the more general preorders. Everything can be generalized to preorders.
In Lecture 6 we saw that given a left adjoint \( f : A \to B\), we can compute its right adjoint using joins:
$$ g(b) = \bigvee \{a \in A : \; f(a) \le b \} . $$ Similarly, given a right adjoint \( g : B \to A \) between posets, we can compute its left adjoint using meets:
$$ f(a) = \bigwedge \{b \in B : \; a \le g(b) \} . $$ In Lecture 16 we saw that left adjoints preserve all joins, while right adjoints preserve all meets.
Then came the big surprise: if \( A \) has all joins and a monotone function \( f : A \to B \) preserves all joins, then \( f \) is a left adjoint! But if you examine the proof, you'll see we don't really need \( A \) to have
all joins: it's enough that all the joins in this formula exist:
$$ g(b) = \bigvee \{a \in A : \; f(a) \le b \} . $$ Similarly, if \(B\) has all meets and a monotone function \(g : B \to A \) preserves all meets, then \( g \) is a right adjoint! But again, we don't need \( B \) to have
all meets: it's enough that all the meets in this formula exist:
$$ f(a) = \bigwedge \{b \in B : \; a \le g(b) \} . $$ Now for the first of today's bombshells: joins are left adjoints and meets are right adjoints. I'll state this for binary joins and meets, but it generalizes.
Suppose \(A\) is a poset with all binary joins. Then we get a function
$$ \vee : A \times A \to A $$ sending any pair \( (a,a') \in A\) to the join \(a \vee a'\). But we can make \(A \times A\) into a poset as follows:
$$ (a,b) \le (a',b') \textrm{ if and only if } a \le a' \textrm{ and } b \le b' .$$ Then \( \vee : A \times A \to A\) becomes a monotone map, since you can check that
$$ a \le a' \textrm{ and } b \le b' \textrm{ implies } a \vee b \le a' \vee b'. $$And you can show that \( \vee : A \times A \to A \) is the left adjoint of another monotone function, the
diagonal
$$ \Delta : A \to A \times A $$sending any \(a \in A\) to the pair \( (a,a) \). This diagonal function is also called
duplication, since it duplicates any element of \(A\).
Why is \( \vee \) the left adjoint of \( \Delta \)? If you unravel what this means using all the definitions, it amounts to this fact:
$$ a \vee a' \le b \textrm{ if and only if } a \le b \textrm{ and } a' \le b . $$ Note that we're applying \( \vee \) to \( (a,a') \) in the expression at left here, and applying \( \Delta \) to \( b \) in the expression at the right. So, this fact says that \( \vee \) is the left adjoint of \( \Delta \).
Puzzle 45. Prove that \( a \le a' \) and \( b \le b' \) imply \( a \vee b \le a' \vee b' \). Also prove that \( a \vee a' \le b \) if and only if \( a \le b \) and \( a' \le b \).
A similar argument shows that meets are really right adjoints! If \( A \) is a poset with all binary meets, we get a monotone function
$$ \wedge : A \times A \to A $$that's the
right adjoint of \( \Delta \). This is just a clever way of saying
$$ a \le b \textrm{ and } a \le b' \textrm{ if and only if } a \le b \wedge b' $$ which is also easy to check.
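Both adjunctions are easy to verify by brute force on a small example — say the poset of subsets of \(\lbrace 1,2,3 \rbrace\) ordered by inclusion, where join is union and meet is intersection:

```python
from itertools import combinations, product

ground = [1, 2, 3]
subsets = [frozenset(c) for r in range(4) for c in combinations(ground, r)]

def leq(a, b):               # the inclusion order on subsets
    return a <= b

for a, a2, b in product(subsets, repeat=3):
    # join (union) is left adjoint to the diagonal:
    #   a v a' <= b   iff   a <= b and a' <= b
    assert leq(a | a2, b) == (leq(a, b) and leq(a2, b))
    # meet (intersection) is right adjoint to the diagonal:
    #   a <= b and a <= b'   iff   a <= b ^ b'
    assert (leq(a, a2) and leq(a, b)) == leq(a, a2 & b)

print("both adjunctions hold on all", len(subsets) ** 3, "triples")
```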
Puzzle 46. State and prove similar facts for joins and meets of any number of elements in a poset - possibly an infinite number.
All this is very beautiful, but you'll notice that all facts come in pairs: one for left adjoints and one for right adjoints. We can squeeze out this redundancy by noticing that every preorder has an "opposite", where "greater than" and "less than" trade places! It's like a mirror world where up is down, big is small, true is false, and so on.
Definition. Given a preorder \( (A , \le) \) there is a preorder called its opposite, \( (A, \ge) \). Here we define \( \ge \) by
$$ a \ge a' \textrm{ if and only if } a' \le a $$ for all \( a, a' \in A \). We call the opposite preorder \( A^{\textrm{op}} \) for short.
I can't believe I've gone this far without ever mentioning \( \ge \). Now we finally have a really good reason.
Puzzle 47. Show that the opposite of a preorder really is a preorder, and the opposite of a poset is a poset.

Puzzle 48. Show that the opposite of the opposite of \( A \) is \( A \) again.

Puzzle 49. Show that the join of any subset of \( A \), if it exists, is the meet of that subset in \( A^{\textrm{op}} \).

Puzzle 50. Show that any monotone function \(f : A \to B \) gives a monotone function \( f : A^{\textrm{op}} \to B^{\textrm{op}} \): the same function, but preserving \( \ge \) rather than \( \le \).

Puzzle 51. Show that \(f : A \to B \) is the left adjoint of \(g : B \to A \) if and only if \(f : A^{\textrm{op}} \to B^{\textrm{op}} \) is the right adjoint of \( g: B^{\textrm{op}} \to A^{\textrm{op}}\).
So, we've taken our whole course so far and "folded it in half", reducing every fact about meets to a fact about joins, and every fact about right adjoints to a fact about left adjoints... or vice versa! This idea, so important in category theory, is called
duality. In its simplest form, it says that things come on opposite pairs, and there's a symmetry that switches these opposite pairs. Taken to its extreme, it says that everything is built out of the interplay between opposite pairs.
Once you start looking you can find duality everywhere, from ancient Chinese philosophy:
to modern computers:
But duality has been studied very deeply in category theory: I'm just skimming the surface here. In particular, we haven't gotten into the connection between adjoints and duality!
This is the end of my lectures on Chapter 1. There's more in this chapter that we didn't cover, so now it's time for us to go through all the exercises. |
How to Perform a Nonlinear Distortion Analysis of a Loudspeaker Driver
A thorough analysis of a loudspeaker driver is not limited to a frequency-domain study. Some desirable and undesirable (but nonetheless exciting) effects can only be caught by a nonlinear time-domain study. Here, we will discuss how system nonlinearities affect the generated sound and how to use the COMSOL Multiphysics® software to perform a nonlinear distortion analysis of a loudspeaker driver.
Understanding Linear and Nonlinear Distortions
A transducer converts a signal of one energy form (input signal) to a signal of another energy form (output signal). In regard to a loudspeaker, which is an electroacoustic transducer, the input signal is the electric voltage that, in the case of a moving coil loudspeaker, drives its voice coil. The output signal is the acoustic pressure that the human ear perceives as a sound. A distortion occurs when the output signal quantitatively and/or qualitatively differs from the input signal.
Schematic representation of a moving coil loudspeaker.
The distortion can be divided into two principal parts:
Linear distortion Nonlinear distortion
The term linear distortion, which might sound rather confusing, implies that the output signal has the same frequency content as the input signal. In this distortion, it is the amplitude and/or phase of the output signal that is distorted. In contrast, the term nonlinear distortion suggests that the output signal contains frequency components that are absent in the input signal. This means that the energy is transferred from one frequency at the input to several frequencies at the output.

Input and output signals in linear and nonlinear transducers.
Let the input sinusoidal signal, A_\text{in} \sin \left( 2\pi f t \right), be applied to a transducer with a nonlinear transfer function. The frequency content of the output signal will then have more than one frequency. Apart from the fundamental portion, which corresponds to the frequency f, there will be a distorted portion. Its spectrum usually (but not always) consists of the frequencies f^{(2)}, f^{(3)}, f^{(4)}, \ldots, which are multiples of the fundamental frequency f^{(n)} = n f, in which n \geq 2. These frequencies, called
overtones, are present in the sound, and it is the overtones that make musical instruments sound different: A note played on a violin sounds different from the same note played on a guitar. The same happens with the sound emitted from a loudspeaker.
The distortion is a relative quantity that can be described by the value of the total harmonic distortion (THD). This value is calculated as the ratio of the amplitude of the distorted portion of the signal to that of the fundamental part:

\text{THD} = \frac{\sqrt{A_2^2 + A_3^2 + A_4^2 + \ldots}}{A_1} \times 100\%

where A_n is the amplitude of the nth harmonic.
The profile of a signal with a higher THD visibly differs from the pure sinusoidal.
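To make the definition concrete, here is a synthetic example (plain NumPy, not COMSOL output; the 10% third harmonic is an arbitrary choice) that recovers the THD from a signal's spectrum:

```python
import numpy as np

fs, T = 1000, 1.0                      # sample rate (Hz) and duration (s)
t = np.arange(0, T, 1 / fs)
f0 = 50                                # fundamental frequency, Hz
# Fundamental plus a 10% third harmonic:
x = np.sin(2 * np.pi * f0 * t) + 0.1 * np.sin(2 * np.pi * 3 * f0 * t)

spectrum = np.abs(np.fft.rfft(x))      # amplitudes, up to a common factor
fund = spectrum[f0]                    # with T = 1 s, bin k corresponds to k Hz
harmonics = spectrum[2 * f0 :: f0]     # bins at 2*f0, 3*f0, ...
thd = np.sqrt(np.sum(harmonics ** 2)) / fund
print(f"THD = {thd:.1%}")              # THD = 10.0%
```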
Unfortunately, the value of the THD of the output signal itself might not be enough to judge the quality of the loudspeaker. A signal with a lower THD may sound worse than a signal with a higher THD. The reason is that the human ear perceives various overtones differently.
The distortion can be represented as a set of individual even-order, 2nf, and odd-order, (2n-1)f, components. The former are due to asymmetric nonlinearities of the transducer, while the latter are due to symmetric nonlinearities. The thing is that the sound containing even-order harmonics is perceived as “sweet” and “warm”. This can be explained by the fact that there are octave multiples of the fundamental frequency among them. The odd-order harmonics sound “harsh” and “gritty”. That is quite alright for a guitar distortion pedal, but not for a loudspeaker. What matters is, of course, not just the presence of those harmonics, but rather their level in the output signal.
Another interesting effect, called
intermodulation, occurs when the input signal contains more than one frequency component. The corresponding output signals start to interact with each other, producing frequency components absent in the input signal. In practice, if a two-tone sine wave such as A_\text{in} \sin \left( 2\pi f_1 t \right) + B_\text{in} \sin \left( 2\pi f_2 t \right) (in which f_2 > f_1) is applied to the input, the system nonlinearities will result in the modulation of the higher-frequency component by the lower one. That is, the frequencies f_2 \pm f_1, f_2 \pm 2f_1, and so on will appear in the frequency spectrum of the output signal. The quantitative measure of the intermodulation that corresponds to the frequencies f_2 \pm (n-1) f_1, in which n \geq 2, is the nth-order intermodulation distortion (IMD) coefficient. It is defined as the ratio of the amplitudes of the sideband components at f_2 \pm (n-1) f_1 to the amplitude of the component at f_2.
In practice, using an input signal containing three or more frequencies for the IMD analysis is not advisable, as the results become harder to interpret.
Transient Nonlinear Analysis of a Loudspeaker Driver
To summarize, the linear analysis of the loudspeaker, though a powerful tool for a designer, might not be sufficient. The loudspeaker can only be completely described if an additional nonlinear analysis is carried out. The nonlinear analysis is supposed to answer the following questions:
How does the nonlinear behavior of the loudspeaker affect the output signal? What are the limits of the input signal that ensure the loudspeaker functions acceptably? How should I compensate for the undesired distortion of the loudspeaker?
From the simulation point of view, there is both bad and good news. The bad news is that the full nonlinear analysis cannot be performed in the frequency domain. It requires the transient simulation of the loudspeaker, which is more demanding and time consuming than the frequency-domain analysis. The good news is that the effect of certain nonlinearities is only significant at low frequencies.
For example, the voice coil displacement is greater at lower frequencies and therefore the finite strain theory must be used to model the mechanical parts of the motor. Using the finite strain theory is redundant at higher frequencies, where the infinitesimal strain theory is applicable. The figures below show the results for the transient loudspeaker tutorial, driven by the same amplitude (V_0 = 10 V) of input voltage:
Voice coil motion in the air gap of the loudspeaker driver for a single-tone input voltage signal: 70 Hz on the left and 140 Hz on the right.

Acoustic pressure at the listening point for a single-tone input voltage. The blue curves correspond to the nonlinear time-domain analysis, while the red curves correspond to the frequency-domain analysis: 70 Hz on the left and 140 Hz on the right.
The animations above depict the magnetic field in the voice coil gap and the motion of the former and the spider (both in pink) as well as the voice coil (in orange). As expected, the displacements, as well as the spider deformation, are higher at the lower frequency. The spider deformation requires a geometrically nonlinear analysis, and therefore the linear approximation is inaccurate in this case. This is confirmed by the output signal plots. These plots depict the acoustic pressure at the listening point located about 14.5 cm in front of the speaker dust cap tip.
The acoustic pressure profile obtained from the nonlinear time-domain modeling for the 70-Hz input signal deviates from the sinusoidal shape to a certain extent, which means that higher-order harmonics start playing a definite role. This is not visible for the input signal at 140 Hz: There’s only a slight difference in the amplitude between the linear frequency-domain and nonlinear time-domain simulation results. The THD value of the output signal drops from 4.3% in the first case to 0.9% in the second case. The plots below show how the harmonics contribute to the sound pressure level (SPL) at the listening point.
Frequency spectra of the SPL at the listening point: single-tone input voltage (70 Hz on the left and 140 Hz on the right).
The IMD analysis of the loudspeaker is carried out in a similar way. What's different is the input signal applied to the voice coil, which contains two harmonic parts:

V(t) = V_1 \sin \left( 2\pi f_1 t \right) + V_2 \sin \left( 2\pi f_2 t \right)
whose amplitudes, V_1 and V_2, are usually in the ratio 4 : 1, which corresponds to 12 dB.
The example below studies the IMD of the same test loudspeaker driver. The dual-frequency input voltage, in which f_1 = 70 Hz and f_2 = 700 Hz, serves as the input signal. The SPL plot on the left shows how the second- and third-order harmonics arising in the low-frequency part of the output signal generate a considerable level of the corresponding order IMDs in the high-frequency part. The IMD level becomes sufficiently lower if the signal frequency f_1 is increased to 140 Hz. This is seen in the right plot below.
Frequency spectra of the SPL at the listening point for a two-tone input voltage.

Modeling Tips for Analyzing a Loudspeaker Driver
Since transient nonlinear simulations tend to be demanding, the loudspeaker driver model should not be overcomplicated. The 2D axisymmetric formulation is a good starting approach and was used for the tutorial examples in the previous section. After that, it’s important to estimate which effects are more important than others. This will help you set up an adequate multiphysics model of a loudspeaker.
The system nonlinearities include, but are not limited to, the following:
Nonlinear behavior of the magnetic field in the loudspeaker pole piece made of high-permeability metal Geometric nonlinearities in the moving parts of the motor Topology change as the voice coil moves up and down in the air gap
In the language of lumped parameters, this means that they are no longer constants like the Thiele-Small parameters, but functions of the voice coil position, x, and the input voltage, V. The above-mentioned nonlinearities will be reflected in the nonlinear inductance, L \left( x, V \right); compliance, C \left( x, V \right); and dynamic force factor, Bl \left( x, V \right). For instance, the tutorial example shows that the nonlinear behavior of the force factor is more distinct at 70 Hz, whereas it is almost flat (that is, closer to linear) at 140 Hz.
Nonlinear (left) and almost linear (right) behavior of the dynamic force factor: 70 Hz on the left and 140 Hz on the right.
With the following steps, the discussed nonlinearities can be incorporated into the model. First, the nonlinear magnetic effects are taken into account through the constitutive relation for the corresponding material. In the test example, the BH curve option is chosen for the iron pole piece. Next, the
Include geometric nonlinearity option available under the Study Settings section forces the structural parts of the model to obey the finite strain theory. Lastly, the topology change is captured by the Moving Mesh feature. Whenever applied, the feature ensures that the mesh element nodes move together with the moving parts of the system. Since the displacements can be quite high, it is likely that the mesh element distortion reaches extreme levels and the numerical model becomes unstable. The Automatic Remeshing option is used as a remedy against highly distorted mesh elements.
All in all, the nonlinear time-domain analysis of the loudspeaker requires much more effort and patience than the linear frequency-domain study. This is especially relevant when the model includes the
Moving Mesh feature with the Automatic Remeshing option activated. Investing some time in the geometry and mesh preprocessing will pay off, as the moving mesh is very sensitive to the mesh quality. That is, highly distorted mesh elements and near-zero angles between the geometric entities have to be avoided. A proper choice of the Condition for Remeshing option may also require some trial and error.
The loudspeaker design discussed here might not be considered "good" by most standards: odd-order harmonics dominate the frequency content of the output signal.
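The link between a symmetric nonlinearity and odd-order harmonics can be checked with a short numerical experiment. The cubic transfer curve below is an arbitrary stand-in for a real driver's Bl(x) and C(x) curves, chosen only to illustrate the spectral signature:

```python
import numpy as np

fs, f0, n = 48000, 1000, 48000          # 1 s of a 1 kHz tone
t = np.arange(n) / fs
x = np.sin(2.0 * np.pi * f0 * t)
y = x - 0.2 * x**3                      # odd-symmetric distortion

# With a 1 s record, FFT bin k corresponds to exactly k Hz.
Y = np.abs(np.fft.rfft(y)) / n
harmonics = {k: Y[k * f0] for k in range(1, 6)}
# sin^3(wt) = (3 sin(wt) - sin(3wt)) / 4, so only the 1st and 3rd
# harmonics carry energy; the even harmonics vanish.
```

An asymmetric nonlinearity (e.g. an added quadratic term) would instead populate the even harmonics as well.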
Next Steps
To perform your own nonlinear distortion analysis of a loudspeaker, click on the button below. This will take you to the Application Gallery, where you can find the MPH-files for this model together with detailed modeling instructions. (Note: You must have a COMSOL Access account and valid software license.)
Additional Resources
Check out other examples of modeling loudspeakers in these tutorials.
Further reading:
- L.L. Beranek and T.J. Mellow, Acoustics: Sound Fields and Transducers, Academic Press, 2012.
- Brüel & Kjær, "Audio Distortion Measurements," Application Note BO0385, 1993.
- W. Marshall Leach, Jr., Introduction to Electroacoustics and Audio Amplifier Design, Kendall Hunt, 2010.
Last time we tackled von Neumann's minimax theorem:
Theorem. For every zero-sum 2-player normal form game,

$$ \max_{p'} \min_{q'} \; p' \cdot A q' \;=\; \min_{q'} \max_{p'} \; p' \cdot A q' , $$
where \( p'\) ranges over player A's mixed strategies and \( q'\) ranges over player B's mixed strategies.
We reduced the proof to two geometrical lemmas. Now let's prove those... and finish up the course!
But first, let me chat a bit about this theorem. Von Neumann first proved it in 1928. He later wrote:
As far as I can see, there could be no theory of games ... without that theorem ... I thought there was nothing worth publishing until the Minimax Theorem was proved.
Von Neumann gave several proofs of this result:
• Tinne Hoff Kjeldsen, John von Neumann's conception of the minimax theorem: a journey through different mathematical contexts,
Arch. Hist. Exact Sci. 56 (2001) 39–68.
In 1937 he gave a proof which became quite famous, based on an important result in topology: Brouwer's fixed point theorem. This says that if you have a ball
$$ B = \{ x \in \mathbb{R}^n : \; |x| \le 1 \} $$
and a continuous function
$$ f \colon B \to B , $$
then this function has a
fixed point, meaning a point \( x \in B\) with
$$ f(x) = x . $$
You'll often see Brouwer's fixed point theorem in a first course on algebraic topology, though John Milnor came up with a proof using just multivariable calculus and a bit more.
After von Neumann proved his minimax theorem using Brouwer's fixed point theorem, the mathematician Shizuo Kakutani proved another fixed-point theorem in 1941, which let him get the minimax theorem in a different way. This is now called the Kakutani fixed-point theorem.
In 1949, John Nash generalized von Neumann's result to nonzero-sum games with any number of players: they all have Nash equilibria if we let ourselves use mixed strategies! His proof is just one page long, and it won him the Nobel prize!
Nash's proof used the Kakutani fixed-point theorem. There is also a proof of Nash's theorem using Brouwer's fixed-point theorem; see here for the 2-player case and here for the
n-player case.
Apparently when Nash explained his result to von Neumann, the latter said:
That's trivial, you know. That's just a fixed point theorem.
Maybe von Neumann was a bit jealous?
I don't know a proof of Nash's theorem that doesn't use a fixed-point theorem. But von Neumann's original minimax theorem seems to be easier. The proof I showed you last time comes from Andrew Colman's book
Game Theory and its Applications in the Social and Biological Sciences. In it, he writes:

In common with many people, I first encountered game theory in non-mathematical books, and I soon became intrigued by the minimax theorem but frustrated by the way the books tiptoed around it without proving it. It seems reasonable to suppose that I am not the only person who has encountered this problem, but I have not found any source to which mathematically unsophisticated readers can turn for a proper understanding of the theorem, so I have attempted in the pages that follow to provide a simple, self-contained proof with each step spelt out as clearly as possible both in symbols and words.
There are other proofs that avoid fixed-point theorems: for example, there's one in Ken Binmore's book
Playing for Real. But this one uses transfinite induction, which seems a bit scary and distracting! So far, Colman's proof seems simplest, but I'll keep trying to do better.
Now let's prove the two lemmas from last time. A lemma is an unglamorous result which we use to prove a theorem we're interested in. The mathematician Paul Taylor has written:
Lemmas do the work in mathematics: theorems, like management, just take the credit.
Let's remember what we were doing. We had a zero-sum 2-player normal-form game with an \( m \times n\) payoff matrix \( A\). The entry \( A_{ij}\) of this matrix gives A's payoff when player A makes choice \( i\) and player B makes choice \( j\). We defined this set:
$$ C = \{ A q' : \; q' \text{ is a mixed strategy for player B} \} . $$
For example, if
then \( C\) looks like this:
We assumed that
$$ \max_{p'} \min_{q'} \; p' \cdot A q' > 0 . $$
This means there exists \( p'\) with
$$ p' \cdot A q' > 0 \quad \text{for all } q' , $$
and this implies that at least one of the numbers \( (Aq')_i\) must be positive. So, if we define a set \( N\) by
$$ N = \{ x \in \mathbb{R}^m : \; x_i \le 0 \text{ for all } i \} , $$
then \( Aq'\) can't be in this set:
$$ A q' \notin N . $$
In other words, the set \( C \cap N\) is empty.
Here's what \( C\) and \( N\) look like in our example:
Next, we choose a point in \( N\) and a point in \( C\):
• let \( r\) be a point in \( N\) that's as close as possible to \( C,\)
and
• let \( s\) be a point in \( C\) that's as close as possible to \( r.\)
These points \( r\) and \( s\) need to be different, since \( C \cap N\) is empty. Here's what these points and the vector \( s - r\) look like in our example:
To finish the job, we need to prove two lemmas:
Lemma 1. \( r \cdot (s-r) = 0,\) \( s_i - r_i \ge 0\) for all \( i,\) and \( s_i - r_i > 0\) for at least one \( i.\)

Proof. Suppose \( r'\) is any point in \( N\) whose coordinates are all the same as those of \( r,\) except perhaps one, namely the \( i\)th coordinate for one particular choice of \( i.\) By the way we've defined \( s\) and \( r,\) this point \( r'\) can't be closer to \( s\) than \( r\) is:
$$ |r' - s| \ge |r - s| . $$
This means that
$$ \sum_j (s_j - r'_j)^2 \ge \sum_j (s_j - r_j)^2 . $$
But since \( r_j' = r_j\) except when \( j = i,\) this implies
$$ (s_i - r'_i)^2 \ge (s_i - r_i)^2 . $$
Now, if \( s_i \le 0\) we can take \( r'_i = s_i.\) In this case we get
$$ 0 \ge (s_i - r_i)^2 , $$
so \( r_i = s_i.\) On the other hand, if \( s_i > 0\) we can take \( r'_i = 0\) and get
$$ s_i^2 \ge (s_i - r_i)^2 , $$
which simplifies to
$$ r_i^2 \le 2 r_i s_i . $$
But \( r_i \le 0\) and \( s_i > 0,\) so this can only be true if \( r_i = 0.\)
In short, we know that either
• \( r_i = s_i\)
or
• \( s_i > 0\) and \( r_i = 0.\)
So, either way we get
$$ r_i (s_i - r_i) = 0 . $$
Since \( i\) was arbitrary, this implies
$$ r \cdot (s - r) = 0 , $$
which is the first thing we wanted to show. Also, either way we get
$$ s_i - r_i \ge 0 , $$
which is the second thing we wanted. Finally, \( s_i - r_i \ge 0\) for all \( i,\) but we know \( s \ne r,\) so
$$ s_i - r_i > 0 $$
for at least one choice of \( i.\) And this is the third thing we wanted! █
Lemma 2. If \( Aq'\) is any point in \( C\), then \( (s - r) \cdot Aq' \ge 0.\)

Proof. Let's write
$$ a = A q' $$
for short. For any number \( t\) between \( 0\) and \( 1\), the point
$$ t a + (1-t) s $$
is on the line segment connecting the points \( a\) and \( s.\) Since both these points are in \( C\), so is the point \( ta + (1-t)s,\) because the set \( C\) is convex. So, by the way we've defined \( s\) and \( r\), this point can't be closer to \(r\) than \( s\) is:
$$ | t a + (1-t) s - r | \ge | s - r | . $$
This means that
$$ | s - r + t (a - s) |^2 \ge | s - r |^2 . $$
With some algebra, this gives
$$ 2 t \, (a - s) \cdot (s - r) + t^2 | a - s |^2 \ge 0 . $$
Since we can make \( t\) as small as we want, this implies that
$$ (a - s) \cdot (s - r) \ge 0 , $$
or
$$ a \cdot (s - r) \ge s \cdot (s - r) , $$
or
$$ a \cdot (s - r) \ge r \cdot (s - r) + (s - r) \cdot (s - r) . $$
By Lemma 1 we have \( r \cdot (s - r) = 0,\) and the dot product of any vector with itself is nonnegative, so it follows that
$$ a \cdot (s - r) \ge 0 . $$
And this is what we wanted to show! █
Proving lemmas is hard work, and unglamorous. But if you remember the big picture, you'll see how great this stuff is.
We started with a very general concept of two-person game. Then we introduced probability theory and the concept of 'mixed strategy'. Then we realized that the expected payoff of each player could be computed using a dot product! This brings geometry into the subject. Using geometry, we've seen that every zero-sum game has at least one 'Nash equilibrium', where neither player is motivated to change what they do — at least if they're rational agents.
And this is how math works: by taking a simple concept and thinking about it very hard, over a long time, we can figure out things that are not at all obvious.
For game theory, the story goes much further than we went in this course. For starters, we should look at nonzero-sum games, and games with more than two players. John Nash showed these more general games still have Nash equilibria!
Then we should think about how to actually find these equilibria. Merely knowing that they exist is not good enough! For zero-sum games, finding the equilibria uses a subject called linear programming. This is a way to maximize a linear function given a bunch of linear constraints. It's used all over the place — in planning, routing, scheduling, and so on.
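For intuition, a tiny zero-sum game can even be solved by brute-force search instead of a full linear program. Below, matching pennies (an assumed example, not a game from the course) is solved by scanning player A's mixed strategies:

```python
import numpy as np

A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])   # matching pennies payoffs for player A

ps = np.linspace(0.0, 1.0, 1001)
# For a fixed mixed strategy p = (p, 1-p), B's best reply is a pure
# strategy, so A's guaranteed expected payoff is the minimum over
# the columns of p^T A.
worst = np.array([(np.array([p, 1.0 - p]) @ A).min() for p in ps])
best_p = ps[worst.argmax()]
value = worst.max()
# best_p is 0.5 and value is 0: mix 50/50 and expect to break even.
```

A proper linear-programming formulation (maximize v subject to A^T p ≥ v, p a probability vector) scales to larger games where grid search does not.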
Game theory is used a lot by economists, for example in studying competition between firms, and in setting up antitrust regulations. For that, try this book:
• Lynne Pepall, Dan Richards and George Norman,
Industrial Organization: Contemporary Theory and Empirical Applications, Blackwell, 2008.
For these applications, we need to think about how people actually play games and make economic decisions. We aren't always rational agents! So, psychologists, sociologists and economists do experiments to study what people actually do. The book above has a lot of case studies, and you can learn more here:
• Andrew Colman,
Game Theory and its Applications in the Social and Biological Sciences, Routledge, London, 1982.
As this book title hints, we should also think about how game theory enters into biology. Evolution can be seen as a game where the winning genes reproduce and the losers don't. But it's not all about competition: there's a lot of cooperation involved. Life is not a zero-sum game! Here's a good introduction to some of the math:
• William H. Sandholm, Evolutionary game theory, 12 November 2007.
For more on the biology, get ahold of this classic text:
• John Maynard Smith,
Evolution and the Theory of Games, Cambridge University Press, 1982.
And so on. We've just scratched the surface!
You can also read comments on Azimuth, and make your own comments or ask questions there! |
For a transverse wave (or for the pressure waves required to produce longitudinal waves), the motion perpendicular to the direction of propagation of the wave is governed by an equation like $y = A\sin(\omega t)$ in the case of harmonic waves (here $\omega$ is the angular frequency of the simple harmonic motion). The time period ($T$) of the (harmonic) wave is then $2\pi/\omega$, and the frequency $(\nu)$ is $1/T$. Also, the velocity of propagation of a wave is $v = \nu\lambda$.
Now, the Doppler effect shows us that if the source of the wave or the observer is in motion, the frequency $(\nu)$ of the wave changes (here I am talking about longitudinal waves propagating in a stationary medium). My initial doubt was how the frequency and the time period of the wave could change if $\omega$ did not change. However, this animation (scenes 4 and 5 of 8) helped me realise that, in the case of the observer moving and the source remaining stationary, the wave itself does not change, but the apparent frequency of the wave as seen by the observer changes.
Also, as there is relative velocity between the observer and the wave, I thought that the apparent change in frequency would be due to an apparent change in the wave's velocity $(v=\nu\lambda)$, with the wavelength $\lambda$ remaining constant.
However, in the case where the source is moving and the observer is stationary (scene 7 of 8), the observer's state of motion is the same as in the case where both the observer and the source are stationary (scene 3 of 8). This must mean that there is an inherent change in the wavelength and frequency of the wave, visible to any stationary observer. But according to my initial argument, how can the frequency of the wave change if there is no change in $\omega$?
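A quick numerical check with the standard non-relativistic formulas (taking a 340 m/s sound speed and a 100 Hz source as illustrative values) makes the asymmetry explicit:

```python
v, f = 340.0, 100.0              # sound speed and source frequency (illustrative)
lam = v / f                      # wavelength in the medium frame: 3.4 m

# Observer moving toward a stationary source at vo: the wave in the
# medium is unchanged; the observer just sweeps up crests faster.
vo = 34.0
f_obs = f * (v + vo) / v         # higher apparent frequency
lam_obs = (v + vo) / f_obs       # still 3.4 m: wavelength unchanged

# Source moving toward a stationary observer at vs: successive crests
# are emitted from closer points, so the wavelength itself is compressed.
vs = 34.0
lam_src = (v - vs) / f           # 3.06 m: genuinely shorter
f_src = v / lam_src              # higher frequency seen by any stationary observer
```

The medium picks out a preferred frame, which is why only the relative motion between source and observer is not the whole story for sound.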
So my final questions are:
1. In the case where the observer is moving, is it true that the apparent wavelength does not change from the case where both source and observer are stationary? And in the case where the source is moving, is it true that both the wavelength and frequency change from the normal case? And if so, why is it that the relative motion is not what matters?
2. In the case where the source is moving, how can the frequency change when $\omega$ remains constant?
Edit: Clarified that I am talking about longitudinal waves propagating in a stationary medium. |
For this process, the interaction Hamiltonian is given by:
$$\mathcal{H}_{\rm int}=-\frac{g}{\sqrt 2}\left(V_{cb}\bar{b}_L\gamma^\mu c_L W^-_\mu+\bar{\nu}_L\gamma^\mu\ell_L W^+_\mu\right).$$
After integrating out the heavy bosons, we obtain the following Hamiltonian:
$$\mathcal{H}_{\rm eff}=-\dfrac{G_F}{\sqrt{2}}V_{cb}[\bar{b}\gamma^\mu(1-\gamma_5)c][\bar{\nu}\gamma_\mu(1-\gamma_5)\ell],$$ where the Fermi constant $G_F$ satisfies $G_F/\sqrt{2}=g^2/(8 m_W^2)$.
To obtain the tree-level amplitude for the process $B_c\to J/\psi \ell^+ \nu$, we consider the following matrix element
$$\mathcal{A}(B_c\to J/\psi \ell^+ \nu)=-i\langle J/\psi \,\ell^+\, \nu_\ell |\mathcal{H}_{\rm eff} | B_c\rangle. $$ If you write the leptonic fields explicitly in terms of creation and annihilation operators, then you will notice that $$\mathcal{A}(B_c\to J/\psi \ell^+ \nu)=i \dfrac{G_F}{\sqrt{2}}V_{cb}\,\bar{u}_\nu \gamma^\mu (1-\gamma_5) v_\ell \,\langle J/\psi| \bar{b}\gamma_\mu(1-\gamma_5)c| B_c\rangle.$$
Note that we have isolated the hadronic matrix element from the rest. Now, if we are able to determine this element using lattice QCD methods or experimental results, then we will be able to compute the decay rate and other observables. [However, I don't think this is possible for this particular transition at present.]
For your second question: if you consider only valence quarks in the mesons, then you are using a tree-level approximation to describe hadronic states. This is a crude approximation, because QCD is non-perturbative at low energies. You can improve it by computing higher-order QCD corrections, but perturbation theory alone will never give a fully reliable result.

This post imported from StackExchange Physics at 2014-06-15 16:45 (UCT), posted by SE-user Melquíades
The Ideal Gas: The basis for "mass action" and a window into free-energy/work relations
The simplest possible multi-particle system, the ideal gas, is a surprisingly valuable tool for gaining insight into biological systems - from mass-action models to gradient-driven transporters. The word "ideal" really means non-interacting, so in an ideal gas all molecules behave as if no others are present. The gas molecules only feel a force from the walls of their container, which merely redirects their momenta like billiard balls. Not surprisingly, it is possible to do exact calculations fairly simply under such extreme assumptions. What's amazing is how relevant those calculations turn out to be, particularly for understanding the basic mechanisms of biological machines and chemical-reaction systems.
Although ideal particles do not react or bind, their statistical/thermodynamic behavior in the various states (e.g., bound or not, reacted or not) can be used to build powerful models - e.g., for transporters.
Mass-action kinetics are ideal-gas kinetics
The key assumption behind mass-action models is that events (binding, reactions, ...) occur precisely in proportion to the concentration(s) of the participating molecules. This certainly cannot be true for
all concentrations, because all molecules interact with one another at close enough distances - i.e., at high enough concentrations. In reality, beyond a certain concentration, simple crowding effects due to steric/excluded-volume effects mean that each molecule can have only a maximum number of neighbors.
But in the ideal gas - and in mass-action kinetics - no such crowding effects occur. All molecules are treated as point particles. They do not interact with one another, although virtual/effective interactions occur in a mass-action picture. (We can say these interactions are "virtual" because the only effect is to change the number of particles - no true forces or interactions occur.)
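As a concrete illustration of mass-action kinetics (the rate constant and concentrations below are arbitrary), the reaction A + B → C can be integrated with explicit Euler steps; the event rate is proportional to the product of the reactant concentrations:

```python
# Mass-action kinetics for A + B -> C with an illustrative rate constant.
k, dt, steps = 1.0, 1.0e-4, 50000      # integrate out to t = 5 time units
a, b, c = 2.0, 1.0, 0.0                # initial concentrations (arbitrary units)
for _ in range(steps):
    rate = k * a * b                   # events occur in proportion to [A][B]
    a -= rate * dt
    b -= rate * dt
    c += rate * dt
# B is the limiting reagent, so b -> 0 while a -> 1 and c -> 1.
```

Note that nothing in this scheme prevents arbitrarily high concentrations, which is exactly the ideal-gas-like assumption discussed above.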
Pressure and work in an ideal gas
Ideal gases can perform work directly using pressure. The molecules of an ideal gas exert a pressure on the walls of the container holding them due to collisions, as sketched above. The amount of this pressure depends on the number of molecules colliding with each unit area of the wall per second, as well as the speed of these collisions. These quantities can be calculated from the mass $m$ of each molecule, the total number of molecules $N$, the total volume of the container $V$, and the temperature $T$. In turn, $T$ determines the average speed via the relation $(3/2) \, k_B T = \left\langle (1/2) \, m \, v^2 \right\rangle$ for each molecule, and the result is the ideal gas law
$$ P = \frac{N k_B T}{V} . \tag{1} $$
See the book by Zuckerman for more details.
We can calculate the
work done by an ideal gas to change the size of its container by pushing one wall a distance $d$ as shown above. We use the basic rule of physics that work is force ($f$) multiplied by distance and the definition of pressure as force per unit area. If we denote the area of the wall by $A$, we have
$$ W = f \, d = (P A) \, d = P \, \Delta V , $$
where $\Delta V = A \, d$ is the change in volume.
If $d$ is small enough so that the pressure is nearly constant, we can calculate $P$ using (1) at either the beginning or end of the expansion. More generally, for a volume change of arbitrary size (from $V_i$ to $V_f$) in an ideal gas, we need to integrate:
$$ W = \int_{V_i}^{V_f} P \, dV = \int_{V_i}^{V_f} \frac{N k_B T}{V} \, dV = N k_B T \ln\!\left( \frac{V_f}{V_i} \right) , \tag{3} $$
which assumes the expansion is performed slowly enough so that (1) applies throughout the process.
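As a numeric sanity check of the work integral just derived (using roughly one mole at room temperature and a doubling of volume as illustrative numbers), a midpoint-rule quadrature of $\int P\,dV$ reproduces the closed form $N k_B T \ln(V_f/V_i)$:

```python
import math

kB = 1.380649e-23            # Boltzmann constant, J/K
N, T = 6.022e23, 300.0       # roughly one mole at room temperature
Vi, Vf = 1.0e-3, 2.0e-3      # slow isothermal expansion from 1 L to 2 L

W_closed = N * kB * T * math.log(Vf / Vi)

# Midpoint-rule quadrature of W = integral of P(V) dV with P = N kB T / V.
steps = 100000
dV = (Vf - Vi) / steps
W_num = sum(N * kB * T / (Vi + (i + 0.5) * dV) * dV for i in range(steps))
# Both give about 1.73 kJ of work done by the gas.
```

The free-energy change computed later in this section is exactly the negative of this work.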
Free energy and work in an ideal gas
The free energy of the ideal gas can be calculated exactly in the limit of large $N$ (see below). We will see that it does, in fact, correlate precisely with the expression for work just derived. The free energy depends on temperature, volume, and the number of molecules; for large $N$, it is given by
$$ F(N, V, T) = -N k_B T \left[ \ln\!\left( \frac{V}{N \lambda^3} \right) + 1 \right] , $$
where $\lambda$ is a constant for fixed temperature. For reference, it is given by $\lambda = h / \sqrt{2 \pi m k_B T}$ with $h$ being Planck's constant and $m$ the mass of an atom. See the book by Zuckerman for full details.
Does the free energy tell us anything about work? If we examine the
free energy change occurring during the same expansion as above, from $V_i$ to $V_f$ at constant $T$, we get
$$ \Delta F = F(N, V_f, T) - F(N, V_i, T) = -N k_B T \ln\!\left( \frac{V_f}{V_i} \right) . $$
Comparing to (3),
this is exactly the negative of the work done! In other words, the free energy of the ideal gas decreases by exactly the amount of work done (when the expansion is performed slowly). More generally, the work can be no greater than the free energy decrease. The ideal gas has allowed us to demonstrate this principle concretely.

The ideal gas free energy from statistical mechanics
The free energy is derived from the "partition function" $Z$, which is simply a sum/integral over Boltzmann factors for all possible configurations/states of a system. Summing over all possibilities is why the free energy encompasses the full thermodynamic behavior of a system. Specifically, $F = -k_B T \ln Z$, with
$$ Z = \frac{1}{N! \, \lambda^{3N}} \int_V d\rall \; e^{-U(\rall)/k_B T} , $$
where $\lambda(T) \propto 1/\sqrt{T}$ is the thermal de Broglie wavelength (which is not important for the phenomena of interest here), $\rall$ is the set of $(x,y,z)$ coordinates of all molecules, and $U$ is the potential energy function. The factor $1/N!$ accounts for the interchangeability of identical molecules, and the integral is over the volume allowed to each molecule. For more information, see the book by Zuckerman, or any statistical mechanics book.
The partition function can be evaluated exactly for the case of the ideal gas because the non-interaction assumption can be formulated as $U(\rall) = 0$ for all configurations - in other words, the locations of the molecules do not change the energy or lead to forces. This makes the Boltzmann factor exactly $1$ for all $\rall$, and so each molecule's integration over the full volume yields a factor of $V$, making the final result
$$ Z = \frac{V^N}{N! \, \lambda^{3N}} . \tag{8} $$
Although (8) assumes there are no degrees of freedom internal to the molecule - which might be more reasonable in some cases (ions) than others (flexible molecules) - the expression is sufficient for most of the biophysical explorations undertaken here.
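As a check on the large-$N$ step connecting (8) to the free-energy expression quoted earlier, one can compare the exact $F = -k_B T \ln Z$ (via `math.lgamma` for $\ln N!$) with the Stirling approximation $F \approx -N k_B T[\ln(V/(N\lambda^3)) + 1]$; units with $k_B T = 1$ and $\lambda = 1$ are assumed purely for convenience:

```python
import math

kBT, lam = 1.0, 1.0        # convenience units: kB*T = 1, lambda = 1
N, V = 1000, 5000.0

# Exact: Z = V^N / (N! lam^(3N)); use lgamma(N+1) = ln(N!) to avoid overflow.
lnZ = N * math.log(V) - math.lgamma(N + 1) - 3 * N * math.log(lam)
F_exact = -kBT * lnZ

# Large-N (Stirling) approximation.
F_approx = -N * kBT * (math.log(V / (N * lam**3)) + 1.0)
# The relative difference is about 0.2% at N = 1000 and shrinks as N grows.
```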
References
- Any basic physics textbook.
- D.M. Zuckerman, "Statistical Physics of Biomolecules: An Introduction," (CRC Press, 2010), Chapters 5, 7.
These are homework exercises to accompany Miersemann's "Partial Differential Equations" Textmap. This is a textbook targeted for a one-semester first course on differential equations, aimed at engineering students. Partial differential equations are differential equations that contain unknown multivariable functions and their partial derivatives. The prerequisite for the course is the basic calculus sequence.
Q2.1
Suppose \(u:\mathbb{R}^2\mapsto\mathbb{R}^1\) is a solution of
$$ a(x,y)u_x+b(x,y)u_y=0 . $$ Show that for arbitrary \(H\in C^1\) also \(H(u)\) is a solution. Q2.2
Find a solution \(u\not\equiv const.\) of
$$ u_x+u_y=0 $$ such that $$ \mbox{graph}(u):=\{(x,y,z)\in\mathbb{R}^3:\ z=u(x,y),\ (x,y)\in\mathbb{R}^2\} $$ contains the straight line \((0,0,1)+s(1,1,0),\ s\in\mathbb{R}^1\). Q2.3
Let \(\phi(x,y)\) be a solution of
$$ a_1(x,y)u_x+a_2(x,y)u_y=0\ . $$ Prove that level curves \(S_C:=\{(x,y):\ \phi(x,y)=C=const.\}\) are characteristic curves, provided that \(\nabla\phi\not=0\) and \((a_1,a_2)\not=(0,0)\). Q2.4
Prove Proposition 2.2.
Q2.5
Find two different solutions of the initial value problem
$$u_x+u_y=1,$$
where the initial data are \(x_0(s)=s\), \(y_0(s)=s\), \(z_0(s)=s\).
Hint: \((x_0,y_0)\) is a characteristic curve. Q2.6
Solve the initial value problem
$$ xu_x+yu_y=u $$ with initial data \(x_0(s)=s,\ y_0(s)=1\), \(z_0(s)\), where \(z_0\) is given. Q2.7
Solve the initial value problem
$$ -xu_x+yu_y=xu^2, $$ \(x_0(s)=s,\ y_0(s)=1\), \(z_0(s)=\mbox{e}^{-s}\). Q2.8
Solve the initial value problem
$$ uu_x+u_y= 1, $$ $x_0(s)=s,\ y_0(s)=s$, \(z_0(s)=s/2\) if \(0<s<1\). Q2.9
Solve the initial value problem
$$ uu_x+uu_y= 2, $$ \(x_0(s)=s,\ y_0(s)=1\), \(z_0(s)=1+s\) if \(0<s<1\). Q2.10
Solve the initial value problem \(u_x^2+u_y^2=1+x\) with given initial data \(x_0(s)=0,\ y_0(s)=s,\ u_0(s)=1,\
p_0(s)=1,\ q_0(s)=0\), \(-\infty<s<\infty\). Q2.11
Find the solution \(\Phi(x,y)\) of
$$ (x-y)u_x+2yu_y=3x $$ such that the surface defined by \(z=\Phi(x,y)\) contains the curve $$ C:\ \ x_0(s)=s,\ y_0(s)=1,\ z_0(s)=0,\ s\in{\mathbb R}. $$ Q2.12
Solve the following initial value problem from chemical kinetics.
$$ u_x+u_y=\left(k_0e^{-k_1x}+k_2\right)(1-u)^2,\ x>0,\ y>0 $$ with the initial data \(u(x,0)=0,\ u(0,y)=u_0(y)\), where \(u_0\), \(0<u_0<1\), is given. Q2.13
Solve the Riemann problem
\begin{eqnarray*} u_{x_1}+u_{x_2}&=&0\\ u(x_1,0)&=&g(x_1) \end{eqnarray*} in \(\Omega_1=\{(x_1,x_2)\in\mathbb{R}^2:\ x_1>x_2\}\) and in \(\Omega_2=\{(x_1,x_2)\in\mathbb{R}^2:\ x_1<x_2\}\), where $$ g(x_1)=\left\{\begin{array}{r@{\quad:\quad}l} u_l&x_1<0\\ u_r&x_1>0 \end{array}\right. $$ with constants \(u_l\not=u_r\). Q2.14
Determine the opening angle of the Monge cone, that is, the angle between the axis and the apothem (in German: Mantellinie) of the cone, for equation
$$ u_x^2+u_y^2=f(x,y,u), $$ where \(f>0\). Q2.15
Solve the initial value problem
$$ u_x^2+u_y^2=1, $$ where \(x_0(\theta)=a\cos\theta,\ y_0(\theta)=a\sin\theta,\ z_0(\theta)=1, \ p_0(\theta)=\cos\theta\), \(q_0(\theta)=\sin\theta\) if \(0\le\theta<2\pi\), \(a=const.>0\). Q2.16
Show that the integral \(\phi(\alpha,\beta;\theta,r,t)\), see the Kepler problem, is a complete integral.
Q2.17
a) Show that \(S=\sqrt{\alpha}\ x +\sqrt{1-\alpha}\ y +\beta\) , \(\alpha,\
\beta\in\mathbb{R}^1, \ 0<\alpha<1\), is a complete integral of \(S_x-\sqrt{1-S_y^2}=0\). b) Find the envelope of this family of solutions. Q2.18
Determine the length of the half axis of the ellipse
$$ r=\frac{p}{1-\varepsilon^2\sin(\theta-\theta_0)},\ 0\le\varepsilon<1. $$ Q2.19
Find the Hamilton function \(H(x,p)\) of the Hamilton-Jacobi-Bellman differential equation if \(h=0\) and \(f=Ax+B\alpha\), where
\(A,\ B\) are constant and real matrices, \(A:\ \mathbb{R}^m\mapsto \mathbb{R}^n\), \(B\) is an orthogonal real \(n\times n\)-matrix and \(p\in\mathbb{R}^n\) is given. The set of admissible controls is given by $$ U=\{\alpha\in\mathbb{R}^n:\ \sum_{i=1}^n\alpha_i^2\le1\}\ . $$ Remark. The Hamilton-Jacobi-Bellman equation is formally the Hamilton-Jacobi equation \(u_t+H(x,\nabla u)=0\), where the Hamilton function is defined by $$ H(x,p):=\min_{\alpha\in U}\left(f(x,\alpha)\cdot p+h(x,\alpha)\right), $$ where \(f(x,\alpha)\) and \(h(x,\alpha)\) are given. See, for example, Evans [5], Chapter 10.
Cosecant Function is Odd
Jump to navigation Jump to search
Theorem
Let $x \in \R$ be a real number.
Let $\csc x$ be the cosecant of $x$.
Then, whenever $\csc x$ is defined: $\csc \left({-x}\right) = -\csc x$ Proof
\(\displaystyle \csc \left({-x}\right) = \frac 1 {\sin \left({-x}\right)}\)  (Cosecant is Reciprocal of Sine)
\(\displaystyle \phantom{\csc \left({-x}\right)} = \frac 1 {- \sin x}\)  (Sine Function is Odd)
\(\displaystyle \phantom{\csc \left({-x}\right)} = - \csc x\)  (Cosecant is Reciprocal of Sine)
$\blacksquare$
Also see
- Sine Function is Odd
- Cosine Function is Even
- Tangent Function is Odd
- Cotangent Function is Odd
- Secant Function is Even

Sources
- 1968: Murray R. Spiegel: Mathematical Handbook of Formulas and Tables: $\S 5$: Trigonometric Functions: $5.31$
- 1968: Murray R. Spiegel: Mathematical Handbook of Formulas and Tables: $\S 5$: Trigonometric Functions: Functions of Angles in All Quadrants in terms of those in Quadrant I