Preprint 2013-20 Sergio Caucao, David Mora, Ricardo Oyarzúa: Analysis of a mixed-FEM for the pseudostress-velocity formulation of the Stokes problem with varying density Abstract:
We propose and analyse a mixed finite element method for the nonstandard pseudostress-velocity formulation of the Stokes problem with varying density in $\mathbb{R}^d$, $d \in \{2,3\}$. Since the resulting variational formulation does not have the standard dual-mixed structure, we reformulate the continuous problem as an equivalent fixed-point problem. Then, we apply the classical Babuška-Brezzi theory to prove that the associated mapping $T$ is well defined, and assuming that $\|\frac{\nabla \rho}{\rho}\|$ is sufficiently small, we show that $T$ is a contraction mapping, which implies that the variational formulation is well-posed. Under the same hypothesis on $\rho$ we prove stability of the continuous problem. Next, adapting the arguments of the continuous analysis to the discrete case, we establish suitable hypotheses on the finite element subspaces ensuring that the associated Galerkin scheme is well-posed. A feasible choice of subspaces is given by Raviart-Thomas elements of order $k \ge 0$ for the pseudostress and piecewise polynomials of degree $k$ for the velocity. Finally, several numerical results illustrating the good performance of the method with these discrete spaces, and confirming the theoretical rate of convergence, are provided.
This preprint gave rise to the following definitive publication(s):
Sergio Caucao, David Mora, Ricardo Oyarzúa: A priori and a posteriori error analysis of a pseudostress-based mixed formulation of the Stokes problem with varying density. IMA Journal of Numerical Analysis, vol. 36, no. 2, pp. 947-983 (2016).
I want to obtain a good numerical approximation (10 decimal places would be enough for me) to an integral:
$$ \int^{\infty}_{0} f(r)r^2dr $$
I am using the function $f(r)$, which is related to the function
$$g(r)=-\frac{\sqrt[3]{3} \sqrt[3]{e^{-2 r}}}{\pi ^{2/3}}-\frac{\sqrt[3]{2 \pi }}{5 \sqrt[3]{e^{-2 r}} \left(\frac{3 \sqrt[3]{\pi } \sinh ^{-1}\left(\frac{2 \sqrt[3]{2 \pi }} {\sqrt[3]{e^{-2 r}}}\right)}{5\ 2^{2/3} \sqrt[3]{e^{-2 r}}}+1\right)}$$
by the relation
$$ f(r)=-\frac{1}{4\pi}\nabla^2_{r,\theta,\phi} g(r) $$
Explicit symbolic integration is clearly impossible. The product $f(r)r^2$ is well behaved and certainly integrable; in fact, $f(r)$ decays faster than $\frac{1}{r^2}$.
When I try to increase WorkingPrecision in NIntegrate, Mathematica warns that the expression being integrated is itself not specified to that precision. How can I overcome this? Any tips or hints?
I am asking for a general strategy to obtain a precise value of the integral:
NIntegrate[f[r]*4*π*r^2, {r, 0, y}, WorkingPrecision -> x]
where y and x are some numbers.
P.S. I have been using Mathematica for only two days.
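A general strategy (sketched here in Python with a toy integrand $r^2 e^{-2r}$ standing in for $f(r)r^2$, since the actual $f$ is unwieldy) is to bound the tail analytically to choose the truncation point $y$, then apply a fixed quadrature rule on $[0, y]$:

```python
import math

def simpson(f, a, b, n=20000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4.0 if i % 2 else 2.0) * f(a + i * h)
    return total * h / 3.0

def integrand(r):
    # Toy stand-in for f(r) * r^2; the exact integral over [0, inf) is 1/4.
    return r * r * math.exp(-2.0 * r)

# Step 1: bound the tail analytically. For this integrand,
# int_y^inf r^2 e^(-2r) dr = e^(-2y) (y^2/2 + y/2 + 1/4) < 1e-15 at y = 20.
y = 20.0

# Step 2: integrate the truncated range with a fixed rule.
approx = simpson(integrand, 0.0, y)
# approx differs from the exact value 1/4 by far less than the 1e-10 target
```

In Mathematica itself, the analogous cure for the precision warning is to make the integrand exact (e.g. use Pi rather than 3.14159, and Rationalize any machine numbers) so that a high WorkingPrecision setting is actually meaningful.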
Difference between revisions of "Inertia"
Revision as of 05:45, 7 December 2018
In power systems engineering, "inertia" is a concept that typically refers to rotational inertia or rotational kinetic energy. For synchronous systems that run at some nominal frequency (e.g. 50 Hz or 60 Hz), inertia is the energy that is stored in the rotating masses of equipment electro-mechanically coupled to the system, e.g. generator rotors, flywheels, turbine shafts.
Derivation
Below is a basic derivation of power system rotational inertia from first principles, starting from the basics of circle geometry and ending at the definition of moment of inertia (and its relationship to kinetic energy).
The length of a circle arc is given by:
[math] L = \theta r [/math]
where [math]L[/math] is the length of the arc (m)
[math]\theta[/math] is the angle of the arc (radians) [math]r[/math] is the radius of the circle (m)
A point on the surface of a cylindrical body rotating about the axis of its centre of mass therefore has a (tangential) rotational velocity of:
[math] v = \frac{\theta r}{t} [/math]
where [math]v[/math] is the rotational velocity (m/s)
[math]t[/math] is the time it takes for the mass to sweep through the arc of length [math]L[/math] metres (s)
Alternatively, rotational velocity can be expressed as:
[math] v = \omega r [/math]
where [math]\omega = \frac{\theta}{t} = \frac{2 \pi \times n}{60}[/math] is the angular velocity (rad/s)
[math]n[/math] is the speed in revolutions per minute (rpm)
The kinetic energy of a circular rotating mass can be derived from the classical Newtonian expression for the kinetic energy of rigid bodies:
[math] KE = \frac{1}{2} mv^{2} = \frac{1}{2} m(\omega r)^{2}[/math]
where [math]KE[/math] is the rotational kinetic energy (Joules or kg.m^2/s^2 or MW.s, all of which are equivalent)
[math]m[/math] is the mass of the rotating body (kg)
Alternatively, rotational kinetic energy can be expressed as:
[math] KE = \frac{1}{2} J\omega^{2} [/math]
where [math]J = mr^{2}[/math] is called the moment of inertia (kg.m^2).
Notes about the moment of inertia:
In physics, the moment of inertia [math]J[/math] is normally denoted as [math]I[/math]. In electrical engineering, the convention is for the letter "i" to always be reserved for current, so it is often replaced by the letter "j" (e.g. the complex number operator i in mathematics is j in electrical engineering).
Moment of inertia is also referred to as [math]WR^{2}[/math] or [math]WK^{2}[/math], where [math]WK^{2} = \frac{1}{2} WR^{2}[/math]. WR^2 literally stands for weight x radius squared.
WR^2 is often used with imperial units of lb.ft^2 or slug.ft^2. Conversion factors: 1 lb.ft^2 = 0.04214 kg.m^2; 1 slug.ft^2 = 1.356 kg.m^2.
Normalised Inertia Constants
The moment of inertia can be expressed as a normalised quantity called the inertia constant H, calculated as the ratio of the rotational kinetic energy of the machine at nominal speed to its rated power (VA): [math]H = \frac{1}{2} \frac{J \omega_0^{2}}{S_{b}}[/math]
where [math]H[/math] is the inertia constant (s)
[math]\omega_{0} = 2 \pi \times \frac{n}{60}[/math] is the nominal mechanical angular frequency (rad/s)
[math]n[/math] is the nominal speed of the machine (revolutions per minute)
[math]S_{b}[/math] is the rated power of the machine (VA)
Generator Inertia
The moment of inertia for a generator is dependent on its mass and apparent radius, which in turn is largely driven by its prime mover type.
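As a quick numerical sketch of the inertia constant defined above (the machine figures here are hypothetical, chosen only to land in a plausible range for a large steam turbo-generator):

```python
import math

def inertia_constant(J, n_rpm, S_rated):
    """H = 0.5 * J * w0^2 / S_b (seconds), per the definition above."""
    w0 = 2.0 * math.pi * n_rpm / 60.0  # nominal mechanical angular frequency (rad/s)
    return 0.5 * J * w0 ** 2 / S_rated

# Hypothetical 500 MVA, 2-pole steam turbo-generator on a 50 Hz system (3000 rpm)
H = inertia_constant(J=30_000.0, n_rpm=3000.0, S_rated=500e6)
print(round(H, 2))  # about 2.96 s
```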
Based on actual generator data, the normalised inertia constants for different types and sizes of generators are summarised in the table below:
Machine type | Number of samples | MVA rating (min / median / max) | Inertia constant H (min / median / max)
Steam turbine | 45 | 28.6 / 389 / 904 | 2.1 / 3.2 / 5.7
Gas turbine | 47 | 22.5 / 99.5 / 588 | 1.9 / 5.0 / 8.9
Hydro turbine | 22 | 13.3 / 46.8 / 312.5 | 2.4 / 3.7 / 6.8
Combustion engine | 26 | 0.3 / 1.25 / 2.5 | 0.6 / 0.95 / 1.6
Relationship between Inertia and Frequency
Inertia is the stored kinetic energy in the rotating masses coupled to the power system. Whenever there is a mismatch between generation and demand (either a deficit or excess of energy), the difference in energy is made up by the system inertia.
For example, suppose a generator suddenly disconnects from the network. In that instant, the equilibrium in generation and demand is broken and demand exceeds generation. Because energy must be conserved, there must always be energy balance in the system and the instantaneous deficit in energy is supplied by the system inertia. However, the kinetic energy in rotating masses is finite and when energy is used to supply demand, the rotating masses begin to slow down. In aggregate, the speed of rotation of these rotating masses is roughly proportional to the system frequency and so the frequency begins to fall. New generation must be added to the system to reestablish the equilibrium between generation and demand and restore system frequency, i.e. put enough kinetic energy back into the rotating masses such that it rotates at a speed proportional with nominal frequency (50/60 Hz).
The figure to the right illustrates this concept by way of a tank of water where system demand is a flow of water coming out of the bottom of the tap and generation is a hose that tops up the water in the tank (here the system operator manages the tap, which determines how much water comes out of the hose). The system frequency is the water level and the inertia is the volume of water in the tank. This analogy is instructive because it can be easily visualised that if system inertia was very large, then the volume of water and the tank itself would also be very large. Therefore, a deficit of generation would cause the system frequency to fall, but at a slower rate than if the system inertia was small. Likewise, excess generation would fill up the tank and cause frequency to rise, but at a slower rate if inertia is very large.
Therefore, it can be said that system inertia is related to the rate at which frequency rises or falls in a system whenever there is a mismatch between generation and load. The standard industry term for this is the Rate of Change of Frequency (RoCoF).
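A commonly used first-swing approximation (not derived in this article; the system figures below are hypothetical) links RoCoF directly to the size of a generation loss and the system's aggregate inertia: df/dt ≈ -ΔP f0 / (2 H_sys S_sys).

```python
def initial_rocof(dP_mw, f0_hz, H_sys_s, S_sys_mva):
    """First-swing estimate of df/dt (Hz/s) after losing dP of generation."""
    return -dP_mw * f0_hz / (2.0 * H_sys_s * S_sys_mva)

# Hypothetical island system: 30 GVA online at H = 4 s, losing a 1000 MW unit
rocof = initial_rocof(dP_mw=1000.0, f0_hz=50.0, H_sys_s=4.0, S_sys_mva=30_000.0)
print(round(rocof, 3))  # about -0.208 Hz/s
```

The same relation makes the qualitative point in the text quantitative: halving the online inertia doubles the initial rate of frequency decline.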
Figure 4 shows the system frequency response to a generator trip at different levels of system inertia. It can be seen that the rate of change of the frequency decline increases as the system inertia is decreased. Furthermore, the minimum frequency that the system falls to (called the frequency nadir) is also lower as system inertia is decreased. |
Archimedean Principle
Theorem
Let $x$ be a real number.
Then there exists a natural number greater than $x$. $\forall x \in \R: \exists n \in \N: n > x$
Let $x$ and $y$ be natural numbers, with $x \ne 0$.
Then there exists a natural number $n$ such that: $n x \ge y$
Proof
Let $x \in \R$.
Let $S$ be the set of all natural numbers less than or equal to $x$:
$S = \set {a \in \N: a \le x}$
It is possible that $S = \O$. Suppose this is the case.
If $0 \le x$, then by definition $0 \in S$.
But $S = \O$, so this is a contradiction.
From the Trichotomy Law for Real Numbers it follows that $0 > x$.
Thus we have the element $0 \in \N$ such that $0 > x$, and the theorem holds in this case with $n = 0$.
Now suppose $S \ne \O$.
Then $S$ is bounded above (by $x$, for example).
Let $s = \map \sup S$.
Now consider the number $s - 1$.
Since $s$ is the supremum of $S$ and $s - 1 < s$, the number $s - 1$ is not an upper bound of $S$.
So $\exists m \in S: m > s - 1$, which implies $m + 1 > s$.
But as $m \in \N$, it follows that $m + 1 \in \N$.
Because $m + 1 > s$, it follows that $m + 1 \notin S$ and so $m + 1 > x$.
Also known as
This result is also known as:
the Archimedean law
the Archimedean property (of the natural numbers)
the Archimedean ordering property (of the real line)
the axiom of Archimedes.
Also see
In Equivalence of Archimedean Property and Archimedean Law it is shown that on the field of real numbers the two are equivalent.
Not to be confused with the better-known (outside the field of mathematics) Archimedes' Principle.
Source of Name
This entry was named for Archimedes of Syracuse.
The name
axiom of Archimedes was given by Otto Stolz in his $1882$ work: Zur Geometrie der Alten, insbesondere über ein Axiom des Archimedes.
Sources
1975: W.A. Sutherland: Introduction to Metric and Topological Spaces: $\S 1.1$: Real Numbers: Example $1.1.1 \ \text{(a)}$
1977: K.G. Binmore: Mathematical Analysis: A Straightforward Approach: $\S 3$: Natural Numbers: $\S 3.3$: Archimedean Property
2000: James R. Munkres: Topology (2nd ed.): $1$: Set Theory and Logic: $\S 4$: The Integers and the Real Numbers
2008: David Nelson: The Penguin Dictionary of Mathematics (4th ed.): Entry: Archimedean property
2008: David Nelson: The Penguin Dictionary of Mathematics (4th ed.): Entry: Archimedes, axiom of
Your equations are flawed. Also, there is no expectation if the process $\{r_s\}$ is deterministic. The correct reasoning, assuming $\{r_s\}$ is stochastic, is:$$f(t,u)=-\frac{d}{du}\ln P(t,u)=-\frac{\frac{d}{du}P(t,u)}{P(t,u)}\\=-\frac{\frac{d}{du}E^Q_t[e^{-\int_t^u r_s ds}]}{P(t,u)}=\frac{E^Q_t[e^{-\int_t^u r_s ds} r_u]}{P(t,u)}=E^Q_t\left[\frac{e^{-\...
This is indeed a standard result. You can convince yourself by noticing: the bank account grows from 1 at $t=\tau$ to $E\left[\exp(\int_\tau^T r(u)du)\middle|\mathscr{F}_\tau\right]$ at time $T$. The price of a security paying $X$ at time $T$ discounted to $t=\tau$ is then $E\left[X \exp(-\int_\tau^T r(u)du)\middle|\mathscr{F}_\tau\right]$. Hence the price of a credit risk-...
Yes, LIBOR rates can be simulated using short rate models. Or rather, Libor rates can be obtained from simulated short rate values.Usually, you have formulas giving you the zero-coupon bond price as a function of the short rate. For affine models for example, this would be of the form:$$P(t, T) = e^{A(t, T) - r(t)B(t,T)}$$(for example, for the one-factor ...
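As an illustration, the one-factor Vasicek model $dr_t = a(b - r_t)\,dt + \sigma\,dW_t$ has exactly this affine form, with closed-form coefficients (a standard textbook result; the parameter values below are arbitrary):

```python
import math

def vasicek_zcb_price(r, t, T, a, b, sigma):
    """Zero-coupon bond price P(t, T) = exp(A - r * B) under the Vasicek model."""
    tau = T - t
    B = (1.0 - math.exp(-a * tau)) / a
    A = (B - tau) * (a * a * b - 0.5 * sigma * sigma) / (a * a) \
        - sigma * sigma * B * B / (4.0 * a)
    return math.exp(A - r * B)

# Sanity check: with sigma = 0 and r = b the short rate stays flat at b,
# so the price must collapse to exp(-b * (T - t)).
p = vasicek_zcb_price(r=0.05, t=0.0, T=5.0, a=0.1, b=0.05, sigma=0.0)
print(abs(p - math.exp(-0.25)))  # ~0 up to floating-point error
```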
When taking the partial derivative $\frac{\partial}{\partial t}$ of a conditional expectation, not only the parameter $t$ inside the expectation but also the information set $\mathscr{F}_t$ must be considered. For this particular question, based on an answer to this question,\begin{align*}P(t, T) = e^{-(T-t)r_t - \int_t^T (T-u)\...
In practice, most derivatives traded on Fed Funds rates are linear (i.e. forwards) rather than non-linear (options and exotics). As such, there has not been a strong case for precise modelling of the full distribution of a Fed Funds rate for a particular day. In contrast, there is a large market for derivatives on 3-month USD Libor, which is less sensitive ...
I solved (2) to (4) by myself! (2) My answer: Use the result of (1), keeping in mind that the following R.H.S. is $\mathcal{F}_t$-measurable.\begin{eqnarray}V_t &=& E^{\mathbb{P}} \left[ \exp \left(- \int^T_t r_s ds \right) \cdot ( P(T,S) - K )^+ \middle| \mathcal{F}_t \right] \\&=& P(t, T) E^{ \tilde{\mathbb{P}} } \left[ ( P(...
Let $r(s)$ be the process of a short rate. Then, by risk neutral pricing, $$ P(t,T) = \mathbb{E}^\mathbb{Q}\left[ \exp\left( -\int_t^T r(s)\mathrm{d}s\right) \Bigg| \mathcal{F}_t\right].$$Thus, the zero-coupon bond is determined completely by the short rate process. Here, $P(t,T)$ denotes the time $t$ price of a zero-coupon bond maturing at time $T$. You ...
Treasury / OIS spread is simply the difference between a given Treasury bond's yield (typically the on-the-run Treasuries, like 2y, 5y, etc.) and the fixed rate on an OIS of a similar tenor. If you consider OIS to be a decent proxy for repo rates, the Treasury / OIS spread is a way of gauging how cheap / rich Treasuries are versus their funding.Typically, ...
(Cumulative Integration Formula Replacing $du$ and $dB_s$) I have developed formulas to solve this by myself!\begin{eqnarray}\int^t_0 \int^u_0 dB_s \ du &=& \int^t_0 \int^u_s du \ dB_s \\\int^T_t \int^u_0 dB_s \ du &=& \int^T_0 \int^u_s du \ dB_s - \int^t_0 \int^u_s du \ dB_s\end{eqnarray}Therefore, we can use the following ...
(My answer) The Vasicek Bond Price and its Forward Price. Recall the result of Exercise 5.2.(1) or Exercise 4.5.(10).\begin{eqnarray}P(t, T) &=& E \left[ \exp \left( - \int^T_t r_u du \right) \middle| \mathcal{F}_t \right] \\&=& E \left[ \exp \left( - \int^T_t \left( e^{-bu} r_0 + \sigma \int^u_0 e^{-b(u-s)} dB_s \right) du\right) ...
I solved it by myself; the following is the solution. Letting $T-t=s$, one reaches the following equation.\begin{eqnarray}B'(s) + \beta B(s) + \frac{1}{2} \sigma^2 B(s)^2 =1\end{eqnarray}One finds that it is a Riccati equation because $A(s)=0$. Therefore, one reaches the following equation.\begin{eqnarray}B' = - \frac{1}{2} \sigma^2 B^2 - \...
I found the answer to my question! It consists of separating the terms $1$ and $e^{-a(T-u)}$ from $B(u,T)$ and isolating the $e^{aT}$ term from the integral, and it is straightforward from there! All in all, it is the form of $B(u,T)$ that makes it possible to state such a formula. PS: In order to continue the demonstration, we need to derive $-\frac{\partial^2}{\...
The problem should go away if you simulate $r_t$. Ho-Lee should work for the function of the form you assumed:$P(0,T)=e^{-aT^2-bT}=e^{-(aT+b)T}$ The problem with your simulation is that the forward rate, as you correctly derived, is:$f(0,T)=2aT+b$ So when you take the derivative to calculate $\theta$, you lose $b$. But remember the short rate ...
The main thing we want is the $P(t,T)$ function. In the short rate model, we model the system as an instantaneous short rate variable which evolves stochastically. Different models assign different dynamics to the short rate (mean reversion, constant or stochastic vol, etc.), but they all assume that $P(t,T)$ is the expectation of the integral of the ...
Although it has been a long time since this question was asked, I would like to propose an answer in case someone is looking for the same thing. First, I think there is a confusion between $P(t,T)$ and $DF(t,T)$. The former is the $t$-price of a contract paying $1$ unit of currency at date $T$, while the latter is the (stochastic) discount factor at $t$ for flows ...
CIR can be used to simulate paths, although forecasting with a model of that class is a bit unintuitive. Why? The results are highly dependent on the stochastic parameters of the equation. So, let's say you obtained a calibrated model and simulated it 5 times. You would (most likely) get very different results with the same model but with different random ... |
Difference between revisions of "Inertia"
Latest revision as of 00:31, 22 December 2018
In my textbook, it's stated that:
When $\epsilon < -1$, demand is elastic and raising price will result in smaller income, while lowering price will result in bigger income.
When $\epsilon = -1$, demand is neither elastic nor inelastic and change in price won't result in change in income.
When $\epsilon > -1$, demand is inelastic and raising price will result in bigger income, while lowering price will result in smaller income.
$\epsilon = \%\Delta Q / \%\Delta P$.
This is the exercise I found confusing:
Old price: 5
New price: 6
Old quantity: 25
New quantity: 20
Calculate elasticity
This is my solution:
$\% \Delta P = \frac{\text{new price } - \text{ old price}} {\text{old price}} = \frac{6 - 5} 5 = 0.2$
$\%\Delta Q = \frac{\text{new quantity } - \text{ old quantity}} { \text{old quantity}} = \frac{20 - 25} {25} = -0.2$
$\epsilon = \%\Delta Q / \%\Delta P = -0.2 / 0.2 = -1$
This is why I am confused:
$\text{Old income} = \text{old price} \times \text{old quantity} = 5 \times 25 = 125$
$\text{New income} = \text{new price} \times \text{new quantity} = 6 \times 20 = 120$
Old income does not equal new income even though elasticity is -1!
What am I doing wrong? Am I misunderstanding the textbook?
Edit: the answer provided is $\epsilon = 1.22$ but I have no idea where it comes from. |
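For what it's worth, one way to reproduce the provided answer (an assumption on my part, since the textbook's method isn't shown) is the midpoint (arc) elasticity formula, which measures both percentage changes against the averages of the old and new values:

```python
def midpoint_elasticity(p0, p1, q0, q1):
    """Arc elasticity: percentage changes measured against midpoint bases."""
    pct_dq = (q1 - q0) / ((q0 + q1) / 2.0)  # -5 / 22.5
    pct_dp = (p1 - p0) / ((p0 + p1) / 2.0)  # +1 / 5.5
    return pct_dq / pct_dp

e = midpoint_elasticity(p0=5, p1=6, q0=25, q1=20)
print(round(abs(e), 2))  # 1.22
```

With the point formula used in the question, the same data give exactly -1, so the two conventions genuinely disagree here.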
A frustum of a pyramid is the polyhedron that results from cutting a pyramid with a plane parallel to its base. The new face created by the cut is called the minor base.
The lateral faces now have the shape of isosceles trapezoids.
The height is the distance between the bases.
The following figure is an example of a frustum of a pyramid with pentagonal bases.
Calculate the area of a frustum of a pyramid with square bases, given: $$$A_{base}=16 \ m^2 \\ A_{minor \ base}= 9 \ m^2 \\ height = 3 \ m$$$ To find the area of the trapezoidal sides, it is necessary to calculate the value of $$Ap$$, the apothem of the frustum of the pyramid, or height of each trapezoid: $$$Ap^2=\Big(\dfrac{a-b}{2}\Big)^2+h^2$$$
with $$a$$ the side of the base ($$a=4 \ m$$) and $$b$$ the side of the minor base ($$b=3 \ m$$). Analyzing the right triangle that remains, with base $$\dfrac{a-b}{2}=0.5 \ m$$: $$$Ap^2=0.5^2+3^2 \\ Ap=3.04 \ m$$$
Now that we have the apothem, we calculate the lateral area: $$$A_{lateral}=\Big(Perimeter_{base}+Perimeter_{minor \ base}\Big) \dfrac{Ap}{2} \\ A_{lateral}=(16+12) \cdot \dfrac{3.04}{2}=42.56 \ m^2$$$ And the total area is: $$$A_{total}=A_{lateral}+A_{base}+A_{minor \ base} \\ A_{total}=42.56+16+9=67.56 \ m^2$$$
To calculate the volume of the pyramidal frustum we use the following expression ($$h$$ is the height, $$A$$ the area of the base and $$A'$$ the area of the minor base): $$$V=\dfrac{h}{3}(A+A'+\sqrt{A\cdot A'})$$$
In the previous example this gives $$V=\dfrac{3}{3}(16+9+\sqrt{16 \cdot 9})=37 \ m^3$$.
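The worked example can be checked numerically (square bases, with side lengths recovered from the areas; carrying full precision in the apothem shifts the lateral and total areas slightly relative to a rounded hand calculation):

```python
import math

def square_frustum(A_base, A_minor, h):
    """Apothem, lateral area, total area and volume of a square-based frustum."""
    a = math.sqrt(A_base)   # side of the base
    b = math.sqrt(A_minor)  # side of the minor base
    ap = math.sqrt(((a - b) / 2.0) ** 2 + h ** 2)  # apothem of the frustum
    lateral = (4 * a + 4 * b) * ap / 2.0           # (P + P') * Ap / 2
    total = lateral + A_base + A_minor
    volume = h / 3.0 * (A_base + A_minor + math.sqrt(A_base * A_minor))
    return ap, lateral, total, volume

ap, lateral, total, volume = square_frustum(16.0, 9.0, 3.0)
# ap ≈ 3.04, lateral ≈ 42.58, total ≈ 67.58, volume = 37.0
```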
Onsager's regression hypothesis
“…the average regression of fluctuations will obey the same laws as the corresponding macroscopic irreversible process"
comes vividly to life when experimentalists observe the Brownian motion $q(t)$ of a damped oscillator (as nowadays they commonly do). Setting
$\qquad q(t)= x(t) \cos(\omega_0 t) - y(t) \sin(\omega_0 t)$
for $\omega_0$ the resonant frequency of the oscillator and $x(t),\,y(t)$ the (slowly varying) in-phase and quadrature amplitudes, these amplitudes are observed to satisfy
$\displaystyle\qquad \langle x(t) x(t+\tau)\rangle = \langle y(t) y(t+\tau)\rangle = \left[\frac{k_\text{B}T}{m \omega_0^2}\right]\,e^{-\omega_0|\tau|/(2 Q)}$
where $m$ is the mass of the oscillator and $Q$ is its mechanical quality. This example illustrates Onsager's regression principle as follows
“…the average regression of fluctuations (in the above oscillator example, the autocorrelation $\langle x(t) x(t+\tau)\rangle$) will obey the same laws (in the example, exponential decay of fluctuations with rate constant $\Gamma = \omega_0/(2 Q)$) as the corresponding macroscopic irreversible process (in the example, macroscopic damping of the oscillator motion with the same rate constant $\Gamma$)"
It is common experimental practice to deduce $Q$ not from observations of macroscopic damping, but rather by statistical analysis of the observed regression of Brownian motion fluctuations. Thus, in this practical sense, Onsager's regression hypothesis nowadays is universally accepted.
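This measurement can be sketched numerically: model the slowly varying amplitude $x(t)$ as an Ornstein–Uhlenbeck process with relaxation rate $\Gamma = \omega_0/(2Q)$, then recover $\Gamma$ from the decay of the estimated autocorrelation (a minimal illustration with arbitrary parameter values, not a description of any particular experiment):

```python
import numpy as np

rng = np.random.default_rng(0)
gamma_true = 0.5   # relaxation rate Γ, arbitrary units
dt, n = 0.01, 500_000

# Euler-Maruyama simulation of dx = -Γ x dt + dW (Ornstein-Uhlenbeck)
x = np.empty(n)
x[0] = 0.0
kicks = rng.normal(0.0, np.sqrt(dt), n - 1)
for i in range(n - 1):
    x[i + 1] = x[i] * (1.0 - gamma_true * dt) + kicks[i]

# the autocorrelation ratio at lag τ estimates e^{-Γτ}, hence Γ
tau = 1.0
lag = int(tau / dt)
c = np.mean(x[:-lag] * x[lag:]) / np.mean(x * x)
gamma_est = -np.log(c) / tau
```

The estimated `gamma_est` recovers `gamma_true` up to statistical error, illustrating how $Q$ can be inferred from fluctuation data alone.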
By a similar analysis of coupled fluctuations in larger-dimension dynamical systems, Onsager deduced certain reciprocity relations that bear his name (and for which he received the Nobel Prize in Chemistry in 1968). Accessible discussions of the Onsager relations in textbooks include Charles Kittel's
Elementary statistical physics (see Ch. 33, "Thermodynamics of Irreversible Processes and the Onsager Reciprocal Relations") and Landau and Lifshitz' Statistical Physics: Part 1 (see Ch. 122, "The Symmetry of the Kinetic Coefficients").
In the context of separative transport (where these relations find common application) Onsager's principle shows, on general thermodynamic grounds, that if an imposed current $j_\text{A}$ of conserved quantity $\text{A}$ induces a current $j_\text{B}$ of conserved quantity $\text{B}$ via $j_\text{B} = L_\text{BA}\,j_\text{A}$, then a reciprocal flow induction occurs with $j_\text{A} = L_\text{AB}\,j_\text{B}$ and $L_\text{AB}=L_\text{BA}$. As Kittel and Landau/Lifshitz both discuss, this principle follows by considering the temporal decay of microscopic fluctuations (assuming local thermodynamic equilibrium).
Physically speaking, if a flow of $A$ linearly induces a flow of $B$, then the reciprocal induction occurs too, with equal constant of proportionality. This relation applies in a great many physical systems, including for example (and non-obviously) the coupled transport of electrolytes and nutrients across cell membranes.
Whether Onsager's dynamical assumptions hold in a given instance has to be carefully analyzed on a case-by-case basis. That is why Kittel's text cautions, prior to working through an example involving thermoelectric coupling (Chapters 33 and 34):
It is rarely a trivial problem to find the correct choice of (generalized) forces and fluxes applicable to the Onsager relation.
In consequence of this necessary admixture of physical reasoning in applying the Onsager relations in particular cases, it sometimes happens that practical applications of Onsager's formalism are accompanied by lively theoretical and/or experimental controversies
, which are associated not to the Onsager formalism itself, but to the applicability (or not) of various microscopic dynamical models that justify its use.
We thus see that the Onsager relations are not rigorous constraints in the sense of the First and Second Laws, but rather describe simplifying symmetries that emerge in a broad range of idealized (chiefly, linearized & spatially localized) descriptions of dynamical behavior; with these symmetries providing a vital key to the general description of a large set of transport processes that have great practical importance.
Perhaps I should mention that I would myself be very interested in any references that generalize Onsager's relation to the coupled quantum dynamical flow of symbol-function measures; this is associated with the practical challenge of generating quantum spin hyperpolarization via separative transport processes.
But if you don't want to have a Google account: Chrome is really good. Much faster than FF (I can't run FF on either of the laptops here) and more reliable (it restores your previous session if it crashes with 100% certainty).
And Chrome has a Personal Blocklist extension which does what you want.
: )
Of course you already have a Google account but Chrome is cool : )
Guys, I feel a little defeated in trying to understand infinitesimals. I'm sure you all think this is hilarious. But if I can't understand this, then I'm yet again stalled. How did you guys come to terms with them, later in your studies?
do you know the history? Calculus was invented based on the notion of infinitesimals. There were serious logical difficulties found in it, and a new theory developed based on limits. In modern times using some quite deep ideas from logic a new rigorous theory of infinitesimals was created.
@QED No. This is my question as best as I can put it: I understand that lim_{x->a} f(x) = f(a), but then to say that the gradient of the tangent curve is some value, is like saying that when x=a, then f(x) = f(a). The whole point of the limit, I thought, was to say, instead, that we don't know what f(a) is, but we can say that it approaches some value.
I have problem with showing that the limit of the following function$$\frac{\sqrt{\frac{3 \pi}{2n}} -\int_0^{\sqrt 6}(1-\frac{x^2}{6}+\frac{x^4}{120})^ndx}{\frac{3}{20}\frac 1n \sqrt{\frac{3 \pi}{2n}}}$$equal to $1$, with $n \to \infty$.
@QED When I said, "So if I'm working with function f, and f is continuous, my derivative dy/dx is by definition not continuous, since it is undefined at dx=0." I guess what I'm saying is that (f(x+h)-f(x))/h is not continuous since it's not defined at h=0.
@KorganRivera There are lots of things wrong with that: dx=0 is wrong. dy/dx - what's y? "dy/dx is by definition not continuous" - it's not a function, so how can you ask whether or not it's continuous, ... etc.
In general this stuff with 'dy/dx' is supposed to help as some kind of memory aid, but since there's no rigorous mathematics behind it - all it's going to do is confuse people
in fact there was a big controversy about it since using it in obvious ways suggested by the notation leads to wrong results
@QED I'll work on trying to understand that the gradient of the tangent is the limit, rather than the gradient of the tangent approaches the limit. I'll read your proof. Thanks for your help. I think I just need some sleep. O_O
@NikhilBellarykar Either way, don't highlight everyone and ask them to check out some link. If you have a specific user which you think can say something in particular feel free to highlight them; you may also address "to all", but don't highlight several people like that.
@NikhilBellarykar No. I know what the link is. I have no idea why I am looking at it, what should I do about it, and frankly I have enough as it is. I use this chat to vent, not to exercise my better judgment.
@QED So now it makes sense to me that the derivative is the limit. What I think I was doing in my head was saying to myself that g(x) isn't continuous at x=h so how can I evaluate g(h)? But that's not what's happening. The derivative is the limit, not g(h).
@KorganRivera, in that case you'll need to be proving $\forall \varepsilon > 0,\,\,\,\, \exists \delta,\,\,\,\, \forall x,\,\,\,\, 0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon.$ by picking some correct L (somehow)
Hey guys, I have a short question a friend of mine asked me which I cannot answer because I have not learnt about measure theory (or whatever is needed to answer the question) yet. He asks what is wrong with \int_0^{2 \pi} \frac{d}{dn} e^{inx} dx when he applies Lebesgue's dominated convergence theorem, because apparently, if he first integrates and then differentiates, the result is 0, but if he first differentiates and then integrates it's not 0. Does anyone know?
Alright, I have this group $\langle x_i, i\in\mathbb{Z}\mid x_i^2=x_{i-1}x_{i+1}\rangle$ and I'm trying to determine whether $x_ix_j=x_jx_i$ or not. I'm unsure there is enough information to decide this, to be honest.
Nah, I have a pretty garbage question. Let me spell it out.
I have a fiber bundle $p : E \to M$ where $\dim M = m$ and $\dim E = m+k$. Usually a normal person defines $J^r E$ as follows: for any point $x \in M$ look at local sections of $p$ over $x$.
For two local sections $s_1, s_2$ defined on some nbhd of $x$ with $s_1(x) = s_2(x) = y$, say $J^r_p s_1 = J^r_p s_2$ if with respect to some choice of coordinates $(x_1, \cdots, x_m)$ near $x$ and $(x_1, \cdots, x_{m+k})$ near $y$ such that $p$ is projection to first $m$ variables in these coordinates, $D^I s_1(0) = D^I s_2(0)$ for all $|I| \leq r$.
This is a coordinate-independent (chain rule) equivalence relation on local sections of $p$ defined near $x$. So let the set of equivalence classes be $J^r_x E$ which inherits a natural topology after identifying it with $J^r_0(\Bbb R^m, \Bbb R^k)$ which is space of $r$-order Taylor expansions at $0$ of functions $\Bbb R^m \to \Bbb R^k$ preserving origin.
Then declare $J^r p : J^r E \to M$ is the bundle whose fiber over $x$ is $J^r_x E$, and you can set up the transition functions etc no problem so all topology is set. This becomes an affine bundle.
Define the $r$-jet sheaf $\mathscr{J}^r_E$ to be the sheaf which assigns to every open set $U \subset M$ an $(r+1)$-tuple $(s = s_0, s_1, s_2, \cdots, s_r)$ where $s$ is a section of $p : E \to M$ over $U$, $s_1$ is a section of $dp : TE \to TU$ over $U$, $\cdots$, $s_r$ is a section of $d^r p : T^r E \to T^r U$ where $T^k X$ is the iterated $k$-fold tangent bundle of $X$, and the tuple satisfies the following commutation relation for all $0 \leq k < r$
$$\require{AMScd}\begin{CD} T^{k+1} E @>>> T^k E\\ @AAA @AAA \\ T^{k+1} U @>>> T^k U \end{CD}$$
@user193319 It converges uniformly on $[0,r]$ for any $r\in(0,1)$, but not on $[0,1)$, cause deleting a measure zero set won't prevent you from getting arbitrarily close to $1$ (for a non-degenerate interval has positive measure).
The top and bottom maps are tangent bundle projections, and the left and right maps are $s_{k+1}$ and $s_k$.
@RyanUnger Well I am going to dispense with the bundle altogether and work with the sheaf, is the idea.
The presheaf is $U \mapsto \mathscr{J}^r_E(U)$ where $\mathscr{J}^r_E(U) \subset \prod_{k = 0}^r \Gamma_{T^k E}(T^k U)$ consists of all the $(r+1)$-tuples of the sort I described
It's easy to check that this is a sheaf, because basically sections of a bundle form a sheaf, and when you glue two of those $(r+1)$-tuples of the sort I describe, you still get an $(r+1)$-tuple that preserves the commutation relation
The stalk of $\mathscr{J}^r_E$ over a point $x \in M$ is clearly the same as $J^r_x E$, consisting of all possible $r$-order Taylor series expansions of sections of $E$ defined near $x$ possible.
Let $M \subset \mathbb{R}^d$ be a compact smooth $k$-dimensional manifold embedded in $\mathbb{R}^d$. Let $\mathcal{N}(\varepsilon)$ denote the minimal cardinal of an $\varepsilon$-cover $P$ of $M$; that is for every point $x \in M$ there exists a $p \in P$ such that $\| x - p\|_{2}<\varepsilon$....
The same result should be true for abstract Riemannian manifolds. Do you know how to prove it in that case?
I think there you really do need some kind of PDEs to construct good charts.
I might be way overcomplicating this.
If we define $\tilde{\mathcal H}^k_\delta$ to be the $\delta$-Hausdorff "measure" but instead of $diam(U_i)\le\delta$ we set $diam(U_i)=\delta$, does this converge to the usual Hausdorff measure as $\delta\searrow 0$?
I think so by the squeeze theorem or something.
this is a larger "measure" than $\mathcal H^k_\delta$ and that increases to $\mathcal H^k$
but then we can replace all of those $U_i$'s with balls, incurring some fixed error
In fractal geometry, the Minkowski–Bouligand dimension, also known as Minkowski dimension or box-counting dimension, is a way of determining the fractal dimension of a set S in a Euclidean space Rn, or more generally in a metric space (X, d). It is named after the German mathematician Hermann Minkowski and the French mathematician Georges Bouligand.To calculate this dimension for a fractal S, imagine this fractal lying on an evenly spaced grid, and count how many boxes are required to cover the set. The box-counting dimension is calculated by seeing how this number changes as we make the grid...
@BalarkaSen what is this
ok but this does confirm that what I'm trying to do is wrong haha
In mathematics, Hausdorff dimension (a.k.a. fractal dimension) is a measure of roughness and/or chaos that was first introduced in 1918 by mathematician Felix Hausdorff. Applying the mathematical formula, the Hausdorff dimension of a single point is zero, of a line segment is 1, of a square is 2, and of a cube is 3. That is, for sets of points that define a smooth shape or a shape that has a small number of corners—the shapes of traditional geometry and science—the Hausdorff dimension is an integer agreeing with the usual sense of dimension, also known as the topological dimension. However, formulas...
Let $a,b \in \Bbb{R}$ be fixed, and let $n \in \Bbb{Z}$. If $[\cdot]$ denotes the greatest integer function, is it possible to bound $|[abn] - [a[bn]|$ by a constant that is independent of $n$? Are there any nice inequalities with the greatest integer function?
I am trying to show that $n \mapsto [abn]$ and $n \mapsto [a[bn]]$ are equivalent quasi-isometries of $\Bbb{Z}$...that's the motivation. |
Prove that $tr\left(\gamma_\mu\gamma_\nu\gamma_\rho\gamma_\sigma\gamma_5\right)=0$ when the spacetime dimension is not 4.
What I have tried:
We know that $\gamma_\alpha\gamma^\alpha=d\mathbb{1}$, so we can write:
$tr\left(\gamma_\mu\gamma_\nu\gamma_\rho\gamma_\sigma\gamma_5\right)=\frac{1}{d}tr\left(\gamma_\alpha\gamma^\alpha\gamma_\mu\gamma_\nu\gamma_\rho\gamma_\sigma\gamma_5\right)$
Then I thought I could commute $\gamma^\alpha$ past two gammas because if $\alpha\notin\left\{\mu,\,\nu,\,\rho,\,\sigma\right\}$, then $\left\{\gamma_\alpha,\,\gamma_\mu\right\}=0$ and somehow show that I get minus of what I started with, using the cyclicality of the trace and that $\left\{\gamma_5,\,\gamma_\mu\right\}=0$.
However, what I am not sure about is why we can always find such an $\alpha$ that $\alpha\notin\left\{\mu,\,\nu,\,\rho,\,\sigma\right\}$. I understand this is generally possible when $d\in\mathbb{R}$ and $d>4$; however, when $d\in\mathbb{C}$, this claim makes no sense to me.
Can anyone provide a rigorous proof of this claim which avoids the hurdle I mentioned above? |
In the weak-field case,$$\mathrm{d}s^2 = -\left(1+2\frac{\Phi}{c^2}\right)c^2\mathrm{d}t^2 - \frac{4}{c}A_i\mathrm{d}t\mathrm{d}x^i + \left(1-2\frac{\Phi}{c^2}\right)\mathrm{d}S^2\text{,}$$where $\Phi$ is the Newtonian potential and $\mathrm{d}S^2 = \mathrm{d}x^2 + \mathrm{d}y^2 + \mathrm{d}z^2$ is the Euclidean metric. In the static case, $A_i = 0$, which is the form used for GPS calculations, but in general it is more interesting as being a direct analogue to classical electromagnetism, first formulated for gravity by Heaviside in 1893:$$\begin{eqnarray*}\mathbf{E}_\text{g} = -\nabla\Phi - \frac{1}{2c}\frac{\partial\mathbf{A}}{\partial t}& \quad\quad & \mathbf{B}_\text{g} = \nabla\times\mathbf{A}\end{eqnarray*}$$$$\begin{eqnarray*}\nabla\cdot\mathbf{E}_\text{g} = -4 \pi G \rho_\text{g} & \quad\quad & \nabla \times \mathbf{E}_\text{g} = -\frac{1}{2c}\frac{\partial\mathbf{B}_\text{g}}{\partial t} \\\nabla\cdot\mathbf{B}_\text{g} = 0 & \quad\quad & \nabla\times\frac{1}{2}\mathbf{B}_\text{g} = -\frac{4\pi G}{c}\mathbf{J}_\text{g} + \frac{1}{c}\frac{\partial\mathbf{E}_\text{g}} {\partial t}\end{eqnarray*}$$This particular version was taken from
Einstein's General Theory of Relativity by Øyvind Grøn and Sigbjørn Hervik; a few variations in defining these fields exist in the literature.
But probably more importantly, the post-Newtonian formalism gives a more general approximation scheme, the first few terms of which are:$$\mathrm{d}s^2 = -(1+2\Phi+2\beta\Phi^2+\ldots)\mathrm{d}t^2 + (1-2\gamma\Phi+\ldots)\mathrm{d}S^2 + (\ldots)\mathrm{d}t\mathrm{d}x^i\text{,}$$with many other potentials that I'm omitting here. This is very useful for understanding the general predictions of GTR and comparing them to alternative theories of gravity (e.g., GTR predicts $\beta = \gamma = 1$, other theories might not).
Compact Subspace of Linearly Ordered Space/Reverse Implication/Proof 1
Theorem
Let $\left({X, \preceq, \tau}\right)$ be a linearly ordered space.
Let the following hold:
$(1): \quad$ For every non-empty $S \subseteq Y$, $S$ has a supremum and an infimum in $X$.
$(2): \quad$ For every non-empty $S \subseteq Y$: $\sup S, \inf S \in Y$.
Then $Y$ is a compact subspace of $\left({X, \tau}\right)$.
Proof
Let $\tau'$ be the $\tau$-relative subspace topology on $Y$.
Let $\preceq'$ be the restriction of $\preceq$ to $Y$.
Lemma
$\left({Y, \preceq', \tau'}\right)$ is a linearly ordered space.
Proof
By definition of a generalized ordered space:
Suppose then that $\varnothing \subsetneqq U \subsetneqq Y$.
Let $C = Y \setminus U$.
Thus $C$ is also $\tau'$-closed.
$C$ has a supremum in $X$ by the premise.
Let $c = \sup_X C$.
Since $c \in C = Y \setminus U$ and $U$ is an upper set in $Y$:
$c \prec u$ for all $u \in U$.
Furthermore, if $y \in Y$ and $c \prec y$, then by the definition of supremum, $y \notin C$, so $y \in U$.
Thus:
$U = {c^\succeq}_Y$
where ${c^\succeq}_Y$ denotes the upper closure of $c$ in $Y$.
So $U$ is open in the $\preceq'$-order topology for $Y$.
Thus $\tau'$ is the $\preceq'$-order topology.
$\Box$
The premises immediately show that $\left({Y, \preceq'}\right)$ is a complete lattice.
$\blacksquare$ |
Definition:Boolean Algebra/Definition 3
Definition
A Boolean algebra is an algebraic structure $\left({S, \vee, \wedge}\right)$ such that:
\((BA \ 0)\) $:$ $S$ is closed under both $\vee$ and $\wedge$
\((BA \ 1)\) $:$ Both $\vee$ and $\wedge$ are commutative
\((BA \ 2)\) $:$ Both $\vee$ and $\wedge$ distribute over the other
\((BA \ 3)\) $:$ Both $\vee$ and $\wedge$ have identities $\bot$ and $\top$ respectively
\((BA \ 4)\) $:$ $\forall a \in S: \exists \neg a \in S: a \vee \neg a = \top, a \wedge \neg a = \bot$
The operations $\vee$ and $\wedge$ are called join and meet, respectively.
The identities $\bot$ and $\top$ are called bottom and top, respectively.
Also, $\neg a$ is called the complement of $a$.
$\begin{array}{c|cc} + & 0 & 1 \\ \hline 0 & 0 & 1 \\ 1 & 1 & 0 \\ \end{array} \qquad \begin{array}{c|cc} \times & 0 & 1 \\ \hline 0 & 0 & 0 \\ 1 & 0 & 1 \\ \end{array}$
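As a quick sanity check, the axioms can be verified mechanically for the standard two-element Boolean algebra on $\{0, 1\}$ with $\vee = \max$, $\wedge = \min$, $\bot = 0$, $\top = 1$ and $\neg a = 1 - a$ (a sketch we add here, not part of the original page):

```python
# Verify (BA 1)-(BA 4) exhaustively on S = {0, 1}.
S = [0, 1]
join, meet = max, min          # ∨ and ∧ on the chain 0 < 1

for a in S:
    for b in S:
        # (BA 1): commutativity
        assert join(a, b) == join(b, a) and meet(a, b) == meet(b, a)
        for c in S:
            # (BA 2): each operation distributes over the other
            assert meet(a, join(b, c)) == join(meet(a, b), meet(a, c))
            assert join(a, meet(b, c)) == meet(join(a, b), join(a, c))
    # (BA 3): identities ⊥ = 0 for ∨ and ⊤ = 1 for ∧
    assert join(a, 0) == a and meet(a, 1) == a
    # (BA 4): complement ¬a = 1 - a
    assert join(a, 1 - a) == 1 and meet(a, 1 - a) == 0
```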
Some sources refer to a Boolean algebra as:
or
both of which terms already have a different definition on $\mathsf{Pr} \infty \mathsf{fWiki}$.
Other common notations for the elements of a Boolean algebra include:
$0$ and $1$ for $\bot$ and $\top$, respectively
$a'$ for $\neg a$.
When this convention is used, $0$ is called zero, and $1$ is called one or unit.
Also see
Results about Boolean algebras can be found here.
Source of Name
This entry was named for George Boole. |
Rana Baydoun, Omar Samad, Maria Aoun, Bilal Nsouli and Ghassan Younes
Abstract
A new radiocarbon laboratory has been established recently at the Lebanese Atomic Energy Commission. This laboratory consists of benzene synthesis line and a low background liquid scintillation counter, Tri-Carb 3180 TR/SL for measurements with Bismuth Germanate (BGO) guard detector. The effectiveness of the benzene line was tested firstly by preparing benzene from a wood sample after carbide and acetylene receiving. Normalization and standardization of the liquid scintillation counter was carried out, as well as the Factor of Merit (E2/B) was determined for three different counting regions. To assure accuracy and reliability of results, reference materials were used. Based on z-score and u-score evaluation, as well as Student’s t-test, acceptable data were obtained from travertine and wood samples available through the International Atomic Energy Agency (IAEA-C2 and IAEA-C5), and humic acid (U) and murex shell (R) from the Fifth International Radiocarbon Intercomparison (VIRI). After that, a preliminary study was done, which is the base for future research in order to assess the anthropogenic impact and degree of environmental pollution in terms of radiocarbon isotope ratio (Δ14C) deduced from the Percent Modern Carbon or PMC. This work represents the results of four reference materials and those of five green grass samples. The Δ14C of green grass samples collected from two different clean zones were found to be 50‰ and 52‰, while the values of those collected from different polluted zones were 23‰, 7‰ and 15‰.
Rana Baydoun, Omar El Samad, Bilal Nsouli and Ghassan Younes
Internal method validationThe purpose of the validation is to verify that the conventional radiocarbon method, used for the determination of radiocarbon content in tree leaves, grass and wood, when applied in our laboratory, fits to its intended use.TruenessTrueness was used to test the closeness of analytical result to the reference value and it was quantified in terms of bias ( Taverniers et al ., 2004 ). As well as, a statistical parameter, z -score was used to test the acceptance of the results. The whole working procedure consisting of benzene
Hyperspectral remote sensing combined with advanced image processing techniques is an efficient tool for the identification of agricultural crops. In our study we pursued spectral analysis on a relatively small sample area using low number of training points to examine the potential of high resolution imagery. Spectral separability measurements were applied to reveal spectral overlapping between 4 crop species and for the discrimination we also used statistical comparisons such as plotting the PC values and calculating standard deviation of single band reflectance values on our classes. These statistical results were proven to be good indicators of spectral similarity and potential confusion of data samples. The classification of Spectral Angle Mapper (SAM) had an overall accuracy of 72% for the four species where the poorest results were obtained from the test points of garlic and sugar beet. Comparing the statistical analyses we concluded that spectral homogeneity does not necessarily have influence on the accuracy of mapping, whereas separability scores strongly correlate with classification results, implying also that preliminary statistical assessments can improve the efficiency of training site selection and provide useful information to specify some technical requirements of airborne hyperspectral surveys.
Friđgeir Grímsson, Alexandros Xafis, Frank H. Neumann and Reinhard Zetter
reconstruction of seven continuous and 20 categorical pollen traits scored for extant Winteraceae. Supplement to Grímsson et al. „A Winteraceae pollen tetrad from the early Paleocene of western Greenland and the fossil record of Winteraceae in Laurasia and Gondwana”. http://rpubs.com/AlastairPotts/WinterIsComing .PRAGLOWSKI J. 1979. Winteraceae Lindl. World Pollen and Spore Flora, 8: 1–25.PUNT W., HOEN P.P., BLACKMORE S., NILSSON S. & LE THOMAS A. 2007. Glossary of pollen and spore terminology. Rev. Palaeobot. Palynol., 143: 1–81.RAINE J.I., MILDENHALL D
chronologies ( Wigley et al ., 1984 ).Ordination of the pine populations and climate-growth relationshipThe principal component analysis (PCA) was applied to identify the short-time factors affecting tree-ring widths. The identification of PC1 and PC2 was based on an analysis of the component scores. The variables (n=52) were site sensitivity chronologies. The cluster analysis (CA) based on Ward’s method and 1-r Pearson’s distance has been used to analyse similarity of the response of each pine population to climate elements in the 1951–2012 period. The variables (n
Environmental resources and values (natural capital) should be seen as a key factor in regional competitiveness. However, little attention has been paid so far to the role of natural capital in the process of achieving competitive advantage from the territorial perspective. Therefore, the main purpose of this paper is to present the results of a study on the environmental competitiveness of Polish regions. The author’s contribution to the theory is the use of taxonomic metrics for research purposes. Based on certain predefined criteria the environmental potential of each voivodship was assessed in 2004 and 2012. For research purposes, 26 indicators of state, pressure, and environmental protection were proposed. Owing to the fact that the subset of diagnostic variables (indicators) contained elements that could not be directly aggregated, their unification was achieved using standardization formulas. The methodology proposed by the author might be used to assess environmental competitiveness in different regions or countries. The results of the performed analyses indicated that the Subcarpathian province scored highest in terms of environmental competitiveness, while Swiętokrzyskie province scored lowest.
$$H = \sqrt{\sum \left(x(i,j) - x(i,\text{max})\right)^2}$$
where: HI – hierarchy index; x(i, j) – standardized value of variable i for unit j (standardization performed with the z-score method). The z-score formula is the following:
$$z(ij) = \frac{x(ij) - \overline{x(i)}}{d(x)(i)},$$
where z(ij) – standardized value, x(ij) – “original” value of variable i for city j, $\overline{x(i)}$ – mean value of the variable i for all cities, d
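The z-score standardization referred to above can be sketched in a few lines (a generic illustration using the population standard deviation; the snippet does not come from the cited paper):

```python
def zscores(values):
    """Standardize a list of values: z = (x - mean) / std."""
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5  # population std
    return [(v - mean) / std for v in values]
```

Standardized values have mean 0 and unit standard deviation, which makes variables measured on different scales directly comparable before aggregation.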
order to check the attitude of Silesians towards the co-inhabiting nationalities, respondents were asked to assess (on a scale from 1 to 5) relations between Silesians and Poles, as well as between Silesians and Germans. The relationship between Silesians and Germans got better scores: over 90% of respondents rated it between 3 and 5. One-third of them defined relations with the German people as very good ( Fig. 6 ). Among the reasons given for that were: no problems with members of the German minority, family working in Germany, and the overlapping of these two
, synthetic control method, and propensity score matching, among others) (18 articles). The remaining articles employ an assorted group of quantitative and/or qualitative techniques, such as comparative analysis of descriptive financial data, qualitative content analysis, in-depth case study analysis, principal component analysis or descriptive analysis of survey and interview responses (perception-based data).Economic EfficiencyThe first thing that can be said about the effects of municipal amalgamations on economic efficiency and cost savings is that the promises |
The ring $R$ is commutative with unit. An ideal $I$ is called primary, if it stands the following:
If $ab \in I$ then $a \in I$ or $b^n \in I$, for a natural number $n$.
Show that if $I$ is a primary ideal of $R$, then $Rad(I)$ is a prime ideal of $R$.
Could you give me a hint how we could show this?
EDIT:
That's what I have tried:
$Rad(I)=\{ x \in R| \exists n \in \mathbb{N} \text{ such that } x^n \in I \}$
$P$ is a prime ideal iff for all $a,b \in R$ with $a \cdot b \in P$, we have $a \in P$ or $b \in P$
Let $a \cdot b \in Rad(I) \Rightarrow \exists m \in \mathbb{N}$ such that $(a \cdot b)^m \in I \Rightarrow a^m \cdot b^m \in I$ (since $R$ is commutative) $\Rightarrow a^m \in I \text{ or } (b^{m})^n \in I$ for some $n$ $\Rightarrow a \in Rad(I) \text{ or } b^{m \cdot n} \in I \Rightarrow a \in Rad(I) \text{ or } b \in Rad(I)$
Could you tell me if it is right? |
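As a concrete sanity check (not a proof), the statement can be illustrated numerically in $\mathbb{Z}$: the ideal $I = 4\mathbb{Z}$ is primary, and its radical should come out as the prime ideal $2\mathbb{Z}$. A small Python sketch:

```python
# Finite illustration in Z (not a proof): I = 4Z is primary, Rad(I) = 2Z is prime.
BOUND = 200

def in_rad_of_4Z(x, max_power=8):
    # x is in Rad(4Z) iff some power of x lands in 4Z (powers up to 8 suffice here)
    return any((x ** k) % 4 == 0 for k in range(1, max_power + 1))

rad = {x for x in range(BOUND) if in_rad_of_4Z(x)}
evens = {x for x in range(BOUND) if x % 2 == 0}

# primality check for 2Z on a finite range: ab in 2Z forces a in 2Z or b in 2Z
prime_ok = all(
    a % 2 == 0 or b % 2 == 0
    for a in range(1, 50)
    for b in range(1, 50)
    if (a * b) % 2 == 0
)
```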
I haven't done a surface integral in a while so I am asking to get this checked.
$\mathbf{F} = \langle x, y, z\rangle$ and the surface is $z = xy + 1$ where $0\leq x,y\leq 1$.
$\hat{\mathbf{n}} = \nabla f/ \lvert\nabla f\rvert = \frac{1}{\sqrt{3}}\langle 1, 1, 1\rangle$
$dS = \frac{\lvert\nabla f\rvert dxdy}{\frac{\partial f}{\partial z}} = \sqrt{3}dxdy$
$\mathbf{F}\cdot\hat{\mathbf{n}} = \frac{1}{\sqrt{3}}(x+y+z) = \frac{1}{\sqrt{3}}(x+y+xy + 1)$
$$ \int_0^1\int_0^1(x + y + xy + 1)dxdy = \frac{9}{4} $$
However, when I used the divergence theorem, I obtained:
$$ \int_S(\mathbf{F}\cdot\hat{\mathbf{n}})dS = \int_V(\nabla\cdot\mathbf{F})dV $$ and $\nabla\cdot\mathbf{F} = 3$ so $$ \int 3dV = 3V = 3\frac{5}{4} = \frac{15}{4} $$
Which one is wrong, or are both incorrect? Either way, what is the mistake?
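Independently of which setup is right, the arithmetic of the double integral as written can be checked numerically. A Python midpoint-rule sketch (this verifies only the value $9/4$, not whether the flux setup itself is correct):

```python
# Midpoint rule on the unit square; the integrand x + y + xy + 1 is linear in
# each variable separately, so the midpoint rule is essentially exact here.
n = 200
h = 1.0 / n
total = 0.0
for i in range(n):
    for j in range(n):
        x = (i + 0.5) * h
        y = (j + 0.5) * h
        total += (x + y + x * y + 1.0) * h * h
```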
(Sorry was asleep at that time but forgot to log out, hence the apparent lack of response)
Yes you can (since $k=\frac{2\pi}{\lambda}$). To convert from path difference to phase difference, multiply by $k$; see this PSE post for details: http://physics.stackexchange.com/questions/75882/what-is-the-difference-between-phase-difference-and-path-difference
ISSN: 1937-1632
eISSN: 1937-1179
Discrete & Continuous Dynamical Systems - S
June 2016, Volume 9, Issue 3
Issue in memory of Alfredo Lorenzi
Abstract:
In the study of mathematical models, which lead to Cauchy problems for differential equations of parabolic (resp. hyperbolic) type or to an elliptic boundary value problem the following issues typically have a prominent interest:
Abstract:
The boundary controllability problems discussed first in this paper may be described by a one-dimensional $x$-space equation with $t>0$, modeling, at the same time $t$, different physical phenomena in a composite solid made of different materials. These phenomena may be governed, at the same time $t$, for example, by the heat equation and by the Schrödinger equation in separate regions. Interface conditions are assumed. Extensions of such boundary controllability problems to the two-dimensional $(x,y)$-space are also investigated.
Abstract:
Combining a priori estimates with penalization techniques and an implicit function argument based on Campanato's near operators theory, we obtain the existence of periodic solutions for a fourth order integro-differential equation modelling actuators in MEMS devices.
Abstract:
We establish a logarithmic stability estimate for the problem of detecting corrosion by a single electric measurement. We give a proof based on an adaptation of the method initiated in [3] for solving the inverse problem of recovering the surface impedance of an obstacle from the scattering amplitude. The key idea consists in estimating accurately a lower bound of the local $L^2$-norm at the boundary, of the solution of the boundary value problem used in modeling the problem of detection corrosion by an electric measurement.
Abstract:
In the dynamic or Wentzell boundary condition for elliptic, parabolic and hyperbolic partial differential equations, the positive flux coefficient $\beta$ determines the weighted surface measure $dS/\beta$ on the boundary of the given spatial domain, in the appropriate Hilbert space that makes the generator for the problem selfadjoint. Usually, $\beta$ is continuous and bounded away from both zero and infinity, and thus $L^{2}\left( \partial \Omega ,dS\right)$ and $L^{2}\left( \partial \Omega ,dS/\beta \right)$ are equal as sets. In this paper this restriction is eliminated, so that both zero and infinity are allowed to be limiting values for $\beta$. An application includes the parabolic asymptotics for the Wentzell telegraph equation and strongly damped Wentzell wave equation with general $\beta$.
Abstract:
We consider the Ibragimov-Shabat equation, which contains nonlinear dispersive effects. We prove that as the diffusion parameter tends to zero, the solutions of the dispersive equation converge to discontinuous weak solutions of a scalar conservation law. The proof relies on deriving suitable a priori estimates together with an application of the compensated compactness method in the $L^p$ setting.
Abstract:
Let $u$ be a non-negative super-solution to a $1$-dimensional singular parabolic equation of $p$-Laplacian type ($1< p <2$). If $u$ is bounded below on a time-segment $\{y\}\times(0,T]$ by a positive number $M$, then it has a power-like decay of order $\frac p{2-p}$ with respect to the space variable $x$ in $\mathbb R\times[T/2,T]$. This fact, stated quantitatively in Proposition 1.2, is a ``sidewise spreading of positivity'' of solutions to such singular equations, and can be considered as a form of Harnack inequality. The proof of such an effect is based on geometrical ideas.
Abstract:
An identification problem is considered for a degenerate evolution equation with overdetermination on the kernel of the solution semigroup. Solutions of problems with Cauchy and Showalter conditions on the initial values are shown to exist and to be unique. Stability estimates for the solutions are derived. The abstract results are applied to an identification problem for the linearized Oskolkov system of equations. Different degrees of degeneration of the system with respect to the time derivatives of the unknown functions are considered.
Abstract:
We consider operators in divergence form, $A_1u=(au')'$, and in nondivergence form, $A_2u=au''$, provided that the coefficient $a$ vanishes in an interior point of the space domain. Characterizing the domain of the operators, we prove that, under suitable assumptions, the operators $A_1$ and $A_2$, equipped with general Wentzell boundary conditions, are nonpositive and selfadjoint on spaces of $L^2$ type.
Abstract:
We study linear nonautonomous parabolic systems with dynamic boundary conditions. Next, we apply these results to show a theorem of local existence and uniqueness of a classical solution to a second order quasilinear system with nonlinear dynamic boundary conditions.
Abstract:
In this paper we study an inverse problem with time dependent operator-coefficients. We indicate sufficient conditions for the existence and the uniqueness of a solution to this problem. A number of concrete applications to partial differential equations is described.
Abstract:
The aim of the paper is to show a reachability result for the solution of a multidimensional coupled Petrovsky and wave system when a non local term, expressed as a convolution integral, is active. Motivations to the study are in linear acoustic theory in three dimensions. To achieve that, we prove observability estimates by means of Ingham type inequalities applied to the Fourier series expansion of the solution.
Abstract:
Our aim in this paper is to prove the existence and uniqueness of solutions to Cahn-Hilliard and Allen-Cahn type equations based on a modification of the Ginzburg-Landau free energy proposed in [12] (see also [16]) which takes into account strong anisotropy effects. In particular, the free energy contains a regularization term, called Willmore regularization.
Abstract:
Let $X$ be a complex Banach space and $A:\,D(A) \to X$ a quasi-$m$-sectorial operator in $X$. This paper is concerned with the identification of diffusion coefficients $\nu > 0$ in the initial-value problem: \[ (d/dt)u(t) + {\nu}Au(t) = 0, \quad t \in (0,T), \quad u(0) = x \in X, \] with additional condition $\|u(T)\| = \rho$, where $\rho >0$ is known. Except for the additional condition, the solution to the initial-value problem is given by $u(t) := e^{-t\,{\nu}A} x \in C([0,T];X) \cap C^{1}((0,T];X)$. Therefore, the identification of $\nu$ is reduced to solving the equation $\|e^{-{\nu}TA}x\| = \rho$. It will be shown that the unique root $\nu = \nu(x,\rho)$ depends on $(x,\rho)$ locally Lipschitz continuously if the datum $(x,\rho)$ fulfills the restriction $\|x\|> \rho$. This extends those results in Mola [6](2011).
Abstract:
We study the stabilization problem for the wave equation with localized Kelvin--Voigt damping and mixed boundary condition with time delay. By using a frequency domain approach we show that, under an appropriate condition between the internal damping and the boundary feedback, an exponential stability result holds. In this sense, this extends the result of [19] where, in a more general setting, the case of distributed structural damping is considered.
Abstract:
The purpose of this paper is to study a boundary reaction problem on the space $X \times {\mathbb R}$, where $X$ is an abstract Wiener space. We prove that smooth bounded solutions enjoy a symmetry property, i.e., are one-dimensional in a suitable sense. As a corollary of our result, we obtain a symmetry property for some solutions of the following equation $$ (-\Delta_\gamma)^s u= f(u), $$ with $s\in (0,1)$, where $(-\Delta_\gamma)^s$ denotes a fractional power of the Ornstein-Uhlenbeck operator, and we prove that for any $s \in (0,1)$ monotone solutions are one-dimensional.
Abstract:
By means of the Mittag-Leffler function, existence and uniqueness conditions are obtained for a strong solution of the Cauchy problem for a quasilinear differential equation in a Banach space, solved with respect to the highest-order derivative. The results are used in the study of quasilinear equations with a degenerate operator at the highest-order derivative. Some special restrictions on the nonlinear operator in the equation are used here. Existence conditions for a unique strong solution of the Cauchy problem and of the generalized Showalter--Sidorov problem for degenerate quasilinear equations were found. The obtained results are illustrated by an example of an initial-boundary value problem for a quasilinear system of equations not resolved with respect to the highest-order time derivative.
Abstract:
We consider the unique solvability of nonlocal elliptic problems in an infinite cylinder in weighted spaces and in Hölder spaces. Using these results we prove the existence and uniqueness of a classical solution for the Vlasov--Poisson equations with nonlocal conditions in an infinite cylinder for sufficiently small initial data.
Abstract:
We consider nonlinear elliptic functional differential equations. The corresponding operator has the form of a product of a nonlinear elliptic differential mapping and a linear difference mapping. Sufficient conditions for the solvability of the Dirichlet problem are obtained. A concrete example shows that a nonlinear differential--difference operator may fail to be strongly elliptic even if the nonlinear differential operator is strongly elliptic and the linear difference operator is positive definite. The analysis is based on the theory of pseudomonotone-type operators and the linear theory of elliptic functional differential operators.
I will present a very short proof of the Prime Number Theorem.
My question is, if the following proof is acceptable?
Let $\varphi(np)$ be the Euler totient function for any primorial $np$ with (1) $$np=\prod_{p=prime}^{p≤n} p$$ which is defined as (2) $$\varphi(np)=np\cdot\prod_{p=prime}^{p≤n} \left(1-\frac{1}{p}\right)=np\cdot\prod_{p=prime}^{p≤n} \frac{(p-1)}{p}$$ and where the product extends over all primes $p$ dividing the primorial $np$.
This function gives the number of all possible primes up to any primorial $np$, i.e. the numbers that are not divisible by any of the primes up to $n$; all these possible primes in the interval $[1,np]$ are quite uniformly distributed, especially as $n\to\infty$.
We further know that all the possible primes in the interval $[n,n^2]$ are identical with the actual primes.
As $n\to\infty$, the primorial $np$ behaves like (3) $$np=e^n$$ and therefore (4): $$\frac{\varphi(e^n)}{e^n}=\prod_{p=prime}^{∞} \frac{(p-1)}{p}$$
As the possible primes in the interval $[1,np]$ are quite uniformly distributed, and because all the possible primes in the interval $[n,n^2]$ are identical with the actual primes, the equation from above also states the prime number density function (5) $$\frac{π(n)}{n}$$ and we can write (6) $$\frac{\varphi(e^n)}{e^n}=\prod_{p=prime}^{∞} \frac{(p-1)}{p}=\frac{π(n)}{n}$$
From Euler (1737) we also know that for $\lim_{n\to ∞}$ (7)
$$\sum_{n=1}^∞ \frac{1}{n}=\prod_{p=prime}^∞ \frac{p}{(p-1)}$$
and with
$$ln(n)+γ=\sum_{n=1}^∞ \frac{1}{n}$$
and with the Euler-Mascheroni constant $γ=0.57721\ldots$ the two equations from above state that for $\lim_{n\to ∞}$
$$ln(n)≅\sum_{n=1}^∞ \frac{1}{n}=\prod_{p=prime}^∞ \frac{p}{(p-1)}$$
and therefore (8)
$$\frac{1}{ln(n)}≅\prod_{p=prime}^∞ \frac{(p-1)}{p}$$
Equation (6) in conjunction with equation (8) gives (9)
$$\frac{\varphi(e^n)}{e^n}=\prod_{p=prime}^{∞} \frac{(p-1)}{p}=\frac{π(n)}{n}≅\frac{1}{ln(n)}$$
and therefore (10) for $\lim_{n\to ∞}$
$$\frac{π(n)}{n}≅\frac{1}{ln(n)}$$
and (11)
$$π(n)≅\frac{n}{ln(n)}$$
respectively. |
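The asymptotic claim (11) is easy to probe numerically (this checks the statement, not the argument above). A short Python sieve sketch comparing $\pi(n)$ with $n/\ln(n)$:

```python
import math

def prime_pi(n):
    """Count primes <= n with a sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            # zero out all multiples of p starting at p^2
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return sum(sieve)

# ratio pi(n) / (n / ln n) approaches 1 slowly; at n = 10^5 it is still ~1.10
ratio = prime_pi(100_000) / (100_000 / math.log(100_000))
```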
Problem. Suppose $X$ and $Y$ measure the lifetimes of two components operating independently, each with density (in units of 100 hours)
$$ f(x) = \begin{cases} \frac{1}{x^2}, & \text{if } x > 1 \\ 0, & \text{elsewhere}, \end{cases} $$
If $Z = \sqrt{XY}$ measures the quality of the system, show that $Z$ has density function
$$ f(z) = \begin{cases} 4\frac{\ln(z)}{z^3}, & \text{if } z > 1 \\ 0, &\text{elsewhere} \end{cases} $$
I use the substitutions $Z = \sqrt{XY}$ and $U = Y$ to obtain that the Jacobian is $-2z/u$, but then when I try to solve for the marginal distribution of $z$, I obtain a divergent integral! My joint distribution function for $u$ and $z$ comes out to be $2 z^{-3} u^{-1}$. I'm not sure what's going wrong. |
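For what it's worth, the claimed density can be checked by simulation: it integrates to $F(z) = 1 - (2\ln z + 1)/z^2$ for $z > 1$, so $F(2) \approx 0.403$ can be compared against a Monte Carlo estimate. A Python sketch:

```python
import math
import random

random.seed(0)

def sample_lifetime():
    # pdf 1/x^2 on (1, inf); inverse transform with U ~ Uniform(0,1]: X = 1/U
    return 1.0 / (1.0 - random.random())

N = 200_000
hits = sum(
    1 for _ in range(N)
    if math.sqrt(sample_lifetime() * sample_lifetime()) <= 2.0
)
empirical = hits / N
claimed = 1.0 - (2.0 * math.log(2.0) + 1.0) / 4.0  # F(2) from the target density
```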
@Rubio The options are available to me and I've known about them the whole time but I have to admit that it feels a bit rude if I act like an attribution vigilante that goes around flagging everything and leaving comments. I don't know how the process behind the scenes works but what I have done up to this point is leave a comment then wait for a while. Normally I get a response or I flag after some time has passed. I'm guessing you say this because I've forgotten to flag several times
You can always leave a friendly comment if you like, but flagging gets eyes on it to get the problem addressed - ideally before people start answering it. Something we don't want is for people to farm rep off someone else's content, which we see occasionally; but even beyond that, SE in general and we in particular dislike it when people post content they didn't create without properly acknowledging its source. And most of the creative effort here is in the question.
So yeah, it's best to flag it when you see it. That'll put it into the queue for reviewers to agree (or not) - so don't worry that you're single-handedly (-footedly?) stomping on people :)
Unfortunately, a significant part of the time, the asker never supplies the origin. Sometimes they self-delete the question rather than just tell us where it came from. Other times they ignore the request and the whole thing, including whatever effort people put into answering, gets discarded when the question is deleted.
Okay. This is the first Riley I've written, and it gets progressively harder as you go along, so here goes. I wrote this, and then realized that I used a mispronunciation of the target, so I had to sloppily improvise. I apologize. Anyway, I hope you enjoy it!My prefix is just shy of white,Yet...
IBaNTsJTtStPMP means "I'm Bad at Naming Things, so Just Try to Solve this Patterned Masyu Puzzle!".The original Masyu rules apply.Make a single loop with lines passing through the centers of cells, horizontally or vertically. The loop never crosses itself, branches off, or goe...
This puzzle is based off the What is a Word™ and What is a Phrase™ series started by JLee and their spin-off What is a Number™ series.If a word conforms to a certain rule, I call it an Etienne Word™. Use the following examples to find the rule:These are not the only examples of Etienne Wo...
This puzzle is based off the What is a Word™ and What is a Phrase™ series started by JLee and their spin-off What is a Number™ series.If a word conforms to a certain rule, I call it an Eternal Word™. Use the following examples to find the rule:$$% set Title text. (spaces around the text ARE ...
IntroductionI am an enthusiastic geometry student, preparing for my first quiz. Yet while revising I accidentally spilt my coffee onto my notes. Can you rescue me and draw me a diagram so that I can revise it for tomorrow’s test? Thank you very much!My Notes
Sometimes you are this wordRemove the first letter, does not change the meaningRemove the first two letters, still feels the sameRemove the first three letters and you find a wayRemove the first four letters and you get a numberThe letters rearranged is a surnameWh...
– "Sssslither..."Brigitte jumped. The voice had whispered almost directly into her ear, yet there was nobody to be seen. She looked at the ground beneath her feet. Was something moving? She was probably imagining things again.– Did you hear something? she asked her guide, Skaylee....
The creator of this masyu forgot to add the final stone, so the puzzle remains incomplete. Finish his job for him by placing one additional stone (either black or white) on the board so that the result is a uniquely solvable masyu.Normal masyu rules apply.
So here's a standard Nurikabe puzzle.I'll be using the final (solved) grid for my upcoming local puzzle competition logo as it will spell the abbreviation of the competition name. So, what does it spell?Rules (adapted from Nikoli):Fill in the cells under the following rules....
I've designed a set of dominoes puzzles that I call Donimoes. You slide thedominoes like the cars in Nob Yoshigahara's Rush Hour puzzle, always alongtheir long axis. The goal of Blocking Donimoes is to slide all the dominoesinto a rectangle, without sliding any matching numbers next to each ot...
I am mud that will trap you. I am a colloid hydrogel. What am I?Take the first half of me and add me to this:I am dangerous to wolves and werewolves alike. Some people even say that I am dangerous to unholy things. Use the creator of Poirot to find out: What am I?Now, take another word for ...
Clark who is consecutive in nature, lives in California near the 100th street. Today he decided to take his palindromic boat and visit France. He booked a room which has a number of thrice a prime. Then he ordered Taco and Cola for his breakfast. The online food delivery site asked him to enter t...
Suppose you are sitting comfortably in your universe admiring the word SING. Just then, Q enters your universe and insists that you insert the string "IMMER" into your precious word to create a new word for his amusement.Okay, you can make the word IMMERSING...But then you realize, you can a...
You! I see you walking thereNary a worry or a careCome, listen to me speakMy mind is strong, though my body is weak.I've got a riddle for you to ponderSomething to think about whilst you wanderIt's a classic Riley, a word split in threeFor a prefix, an...
@OmegaKrypton rather a poor solution, I think, but I'll try it anyway: Quarrel= cross words. When combined heartlessly: put them together by removing the middle space. Thus, crosswords. Nonstop: remove the final letter. We've made crossword = feature in daily newspaper
I saw this photo on LinkedIn:Is this a puzzle? If so, what does it mean and what is a solution?What I've found so far:$a = \pi r^2$ is clearly the area of a disk of radius $r$$2\pi r$ is clearly its circumference$\displaystyle \int\dfrac{dx}{\sin x} = \ln\left(\left| \tan \dfrac{x}{2}\right|\...
Today we (Chamseddine-Connes-van Suijlekom) posted a preprint on grand unification in the spectral Pati–Salam model which I summarize here.
The paper builds on two recent discoveries in the noncommutative geometry approach to particle physics: we showed how to obtain inner fluctuations of the metric without having to assume the order one condition on the Dirac operator. Moreover the original argument by classification of finite geometries \(F\) that can provide the fine structure of Euclidean space-time as a product \(M\times F\) (where \(M\) is a usual 4-dimensional Riemannian space) has now been replaced by a much stronger uniqueness statement. This new result shows that the algebra
\( M_{2}(\mathbb{H})\oplus M_{4}(\mathbb{C}) \)
where \(\mathbb{H}\) are the quaternions, appears uniquely when writing the higher analogue of the Heisenberg commutation relations. This analogue is written in terms of the basic ingredients of noncommutative geometry where one takes a spectral point of view, encoding geometry in terms of operators on a Hilbert space \(\mathcal{H}\). In this way, the inverse line element is an unbounded self-adjoint operator \(D\). The operator \(D\) is the product of the usual Dirac operator on \(M\) and a `finite Dirac operator’ on \(F\), which is simply a hermitian matrix \(D_{F}\). The usual Dirac operator involves gamma matrices which allow one to combine the momenta into a single operator. The higher analogue of the Heisenberg relations puts the spatial variables on similar footing by combining them into a single operator \(Y\) using another set of gamma matrices and it is in this process that the above algebra appears canonically and uniquely in dimension 4.
This leads without arbitrariness to the Pati–Salam gauge group \(SU(2)_{R}\times SU(2)_{L}\times SU(4)\), together with the corresponding gauge fields and a scalar sector, all derived as inner perturbations of \(D\). Note that the scalar sector can not be chosen freely, in contrast to early work on Pati–Salam unification. In fact, there are only a few possibilities for the precise scalar content, depending on the assumptions made on the finite Dirac operator.
From the spectral action principle, the dynamics and interactions are described by the
spectral action,
\( \mathrm{tr}(f(D/\Lambda)) \)
where \(\Lambda\) is a cutoff scale and \(f\) an even and positive function. In the present case, it can be expanded using heat kernel methods,
\(\mathrm{tr}(f(D/\Lambda))\sim F_{4}\Lambda^{4}a_{0}+F_{2}\Lambda^{2}a_{2}+F_{0}a_{4}+\cdots \)
where \(F_{4},F_{2},F_{0}\) are coefficients related to the function \(f\) and \(a_{k}\) are Seeley deWitt coefficients, expressed in terms of the curvature of \(M\) and (derivatives of) the gauge and scalar fields. This action is interpreted as an effective field theory for energies lower than \(\Lambda\).
One important feature of the spectral action is that it gives the usual Pati–Salam action with unification of the gauge couplings. Indeed, the scale-invariant term \(F_{0}a_{4}\) in the spectral action for the spectral Pati–Salam model contains the terms
\( \frac{F_{0}}{2\pi^{2}}\int\left( g_{L}^{2}\left( W_{\mu\nu L}^{\alpha}\right)^{2}+g_{R}^{2}\left( W_{\mu\nu R}^{\alpha}\right)^{2}+g^{2}\left( V_{\mu\nu}^{m}\right)^{2}\right). \)
Normalizing this to give the Yang–Mills Lagrangian demands
\( \frac{F_{0}}{2\pi^{2}}g_{L}^{2}=\frac{F_{0}}{2\pi^{2}}g_{R}^{2}=\frac{F_{0}}{2\pi^{2}}g^{2}=\frac{1}{4}, \)
which requires gauge coupling unification. This is very similar to the case of the spectral Standard Model where there is unification of gauge couplings. Since it is well known that the SM gauge couplings do not meet exactly, it is crucial to investigate the running of the Pati–Salam gauge couplings beyond the Standard Model and to find a scale \(\Lambda\) where there is grand unification:
\( g_{R}(\Lambda)=g_{L}(\Lambda)=g(\Lambda). \)
This would then be the scale at which the spectral action is valid as an effective theory. There is a hierarchy of three energy scales: SM, an intermediate mass scale \(m_{R}\) where symmetry breaking occurs and which is related to the neutrino Majorana masses (\(10^{11}-10^{13}\)GeV), and the GUT scale \(\Lambda\).
In the paper, we analyze the running of the gauge couplings according to the usual (one-loop) RG equation. As mentioned before, depending on the assumptions on \(D_{F}\), one may vary the scalar particle content to a limited extent, consisting of either composite or fundamental scalar fields. We do not limit ourselves to a specific model but consider all cases separately. This leads to the following three figures:
In other words, we establish grand unification for all of the scenarios with unification scale of the order of \(10^{16}\) GeV, thus confirming validity of the spectral action at the corresponding scale, independent of the specific form of \(D_{F}\). |
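The one-loop running behind such plots is elementary to reproduce. A Python sketch with the standard SM one-loop beta coefficients in GUT normalization and illustrative inverse couplings at \(M_Z\) (textbook approximations, not values taken from the paper) shows that in the pure Standard Model the pairwise crossing scales do not coincide:

```python
import math

MZ = 91.19  # GeV

# Standard one-loop SM beta coefficients, GUT (SU(5)) normalization
b = {"g1": 41 / 10, "g2": -19 / 6, "g3": -7.0}

# Illustrative inverse couplings at MZ (approximate textbook inputs)
alpha_inv_mz = {"g1": 59.0, "g2": 29.6, "g3": 8.45}

def alpha_inv(name, mu):
    """One-loop running: alpha_i^{-1}(mu) = alpha_i^{-1}(MZ) - b_i/(2 pi) ln(mu/MZ)."""
    return alpha_inv_mz[name] - b[name] / (2 * math.pi) * math.log(mu / MZ)

def crossing_scale(i, j):
    """Scale mu where alpha_i^{-1}(mu) = alpha_j^{-1}(mu)."""
    log_ratio = 2 * math.pi * (alpha_inv_mz[i] - alpha_inv_mz[j]) / (b[i] - b[j])
    return MZ * math.exp(log_ratio)

mu12 = crossing_scale("g1", "g2")  # near 10^13 GeV
mu23 = crossing_scale("g2", "g3")  # much higher: no single SM unification point
```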
About the last question, the usual undecidability proof for universality could be adapted.
Recall that in this proof, one considers an instance $\langle \Sigma,\Delta,u,v\rangle$ of Post's correspondence problem, where $\Sigma$ and $\Delta$ are two disjoint alphabets, and $u$ and $v$ are two homomorphisms from $\Sigma^\ast$ to $\Delta^\ast$. Then $$L_u=\{a_1\cdots a_n(u(a_1\cdots a_n))^R\mid n>0\wedge\forall 0<i\leq n.a_i\in\Sigma\}$$ and $$L_v=\{a_1\cdots a_n(v(a_1\cdots a_n))^R\mid n>0\wedge\forall 0<i\leq n.a_i\in\Sigma\}$$—where $w^R$ denotes the reversal of word $w$—are two DCFLs s.t. $L_u\cap L_v=\emptyset$ iff the original PCP instance was negative. Letting $$L=\overline{L_u}\cup\overline{L_v}\;,$$one thus defines a CFL (since DCFLs are effectively closed under complement and CFLs under union), which is universal, i.e. equal to $(\Sigma\cup\Delta)^\ast$, iff the original PCP instance was negative.
Now, if $L$ is universal, i.e. if $L=(\Sigma\cup\Delta)^\ast$, then $L$ is closed under permutations. Conversely, if $L$ is not universal, i.e. if $L_u\cap L_v\neq\emptyset$, there is at least one word $x$ of form $x=w(u(w))^R=w(v(w))^R$ for some $w$ in $\Sigma^+$. Then $x$ does not belong to $L$, but it's easy to find a permutation of $x$ that belongs to $L$: for instance, permute the last letter of $w$ (which is in $\Sigma$) with the first of $(u(w))^R$ (which is in $\Delta$) to obtain a word in $\Sigma^\ast\Delta\Sigma\Delta^\ast\subseteq L$.
Hence $L$ is closed under permutation iff it is universal iff the original PCP instance was negative. |
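The permutation witness in the last step can be made concrete on a toy instance. In this Python sketch the alphabets and the homomorphism $u$ are made up purely for illustration, with $\Sigma=\{a,b\}$ and $\Delta=\{0,1\}$:

```python
import re

# Toy instance: Sigma = {a, b}, Delta = {0, 1}; u is a hypothetical homomorphism
u = {"a": "0", "b": "11"}

def apply_hom(h, w):
    return "".join(h[c] for c in w)

def witness_permutation(w):
    """Build x = w (u(w))^R and swap the last Sigma-letter with the first
    Delta-letter, landing in Sigma* Delta Sigma Delta* (a subset of L)."""
    tail = apply_hom(u, w)[::-1]
    x = w + tail
    i = len(w) - 1  # position of the last letter of w (in Sigma)
    y = x[:i] + x[i + 1] + x[i] + x[i + 2:]  # swap positions i and i+1
    return x, y

x, y = witness_permutation("ab")
in_pattern = bool(re.fullmatch(r"[ab]*[01][ab][01]*", y))
```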
I want to create a table of values, and the values are defined recursively as follows: $$ b_{t,m} = \begin{cases} b_{t+1,m} + b_{t+1,m-1}\delta^{t-T} & \text{ if } t \in \{2,\ldots,T-1\} \text{ and } m \in \{2,\ldots,T-t+1\} \\ % 1 & \text{ if } t\in \{2,\ldots,T\} \text{ and } m = 1 \\ % 0 & \text{ otherwise } \end{cases} $$
I need to create the table for $t=\{1,\ldots,T\}$ and $m=\{1,\ldots,T\}$. I'm new to Mathematica, and I am trying to accomplish this by mapping a recursive function over a list. My current attempt is as follows:
Clear[capT, delta, indexTable, bFunc, bMat]
capT = 3;
bFunc[{t_Integer, 1}] /; 2 <= t <= capT = 1;
bFunc[{t_Integer, m_Integer}] /; m < 1 || m > capT - t + 1 || (1 <= m <= capT - t + 1 && (t < 2 || t > capT - 1)) = 0;
bFunc[{t_Integer, m_Integer}] /; 2 <= t <= capT - 1 && 2 <= m <= capT - t + 1 := bFunc[{t, m}] = bFunc[{t + 1, m}] + bFunc[{t + 1, m - 1}]*delta^(t - capT)
indexTable = Reverse[Table[{t, m}, {t, 1, capT, 1}, {m, 1, capT, 1}]];
bMat = Map[bFunc, indexTable, 2]
Which produces the following output
{bFunc[{1, 0, 0}], bFunc[{1, 1/delta, 0}], bFunc[{0, 0, 0}]}
I must be missing something. This approach partially works, because all the values I need are there, but they are wrapped inside of additional function calls. The output that I want to have would look like
{{1, 0, 0}, {1, 1/delta, 0}, {0, 0, 0}}
What am I not seeing? |
Singer-Terhaar is part of the CFA Level II and III curricula. It estimates the risk premium for some asset, traded in some local market, as a weighted average of the expected premiums for the cases of (1) a local market completely integrated with the global one, and (2) a local market completely isolated from the global one.
For integrated case, risk premium (RP) for asset i
$$ RP_i = \rho_{i,M} \times \sigma_i \times (RP_M/\sigma_M) $$ i.e. the correlation of the asset with the global investable market (GIM), times its standard deviation, times the GIM's Sharpe ratio. For the isolated case, the CFA curriculum recommends simply dropping the $\rho$ term from the formula above.
A weighting parameter $\lambda$ ("integration") then combines the two estimates: $\lambda$ multiplies the integrated estimate, and $(1-\lambda)$ multiplies the isolated estimate, i.e. $RP_i = \lambda \, RP_i^{\text{integrated}} + (1-\lambda) \, RP_i^{\text{isolated}}$.
The question is: how to estimate integration? |
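For reference, the blend itself is mechanical once the integration weight is chosen. A Python sketch with made-up inputs (all numbers illustrative):

```python
def singer_terhaar_rp(rho, sigma, sharpe_gim, integration):
    """Blend fully integrated and fully segmented premium estimates.

    rho         -- correlation of the local market with the global investable market (GIM)
    sigma       -- volatility of the local market
    sharpe_gim  -- Sharpe ratio of the GIM
    integration -- weight in [0, 1] on the fully integrated estimate
    """
    rp_integrated = rho * sigma * sharpe_gim
    rp_segmented = sigma * sharpe_gim  # rho dropped, per the CFA convention
    return integration * rp_integrated + (1 - integration) * rp_segmented

# Illustrative (made-up) inputs
rp = singer_terhaar_rp(rho=0.6, sigma=0.20, sharpe_gim=0.30, integration=0.8)
```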
So I have this equation from cosmology giving $H_0 t$ as a function of $a$:
\begin{equation} \int_0^a \frac{da'}{\left( \frac{\Omega_{\text{rad,0}}}{a'^2} + \frac{\Omega_{\text{m,0}}}{a'} +\Omega_{\Lambda\text{,0}} a'^2\right)^{1/2}}=H_0 t \end{equation} I can plot it on a log-log scale like this:
ParametricPlot[{Log10[NIntegrate[(or0/a^2 + om0/a + ol0 a^2)^(-1/2), {a, 0, a1}]],
   Log10[a1]}, {a1, 0, 10},
 Exclusions -> None,
 Frame -> True,
 FrameLabel -> {"Log[H0 t]", "Log[a]"},
 BaseStyle -> {FontSize -> 16},
 PlotRange -> All,
 FrameStyle -> Directive[Thick, Black],
 FrameTicksStyle -> Directive[Thin, Black],
 PlotStyle -> Black,
 Axes -> False,
 ImageSize -> Large]
which gives this nice plot after throwing out some errors that say
"NIntegrate::nlim: a = a1 is not a valid limit of integration.":
I want to plot the log of the derivative of $a$ with respect to $t$, $\log\dot{a}$, versus $\log(H_0 t)$. How can I do this? I've tried several things but none of them seem close to the best way and I can't get it to work. Thanks!
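One observation worth writing out: from the integral, $da/d(H_0 t)$ is just the reciprocal of the integrand, so $\dot{a}/H_0 = (\Omega_{\text{rad,0}}/a^2 + \Omega_{\text{m,0}}/a + \Omega_{\Lambda\text{,0}}\,a^2)^{1/2}$, and $(\log \dot{a}, \log H_0 t)$ can be plotted parametrically in $a$. A pure-Python sketch of the two quantities (checked against the matter-only case, where $H_0 t = \tfrac{2}{3}a^{3/2}$):

```python
import math

def h0_t(a_max, om, orad, ol, n=20_000):
    """Trapezoid approximation of H0*t = integral_0^a_max da / sqrt(orad/a^2 + om/a + ol*a^2)."""
    total, prev = 0.0, 0.0  # integrand -> 0 as a -> 0 whenever orad > 0 or om > 0
    for i in range(1, n + 1):
        a = a_max * i / n
        f = (orad / a ** 2 + om / a + ol * a ** 2) ** -0.5
        total += 0.5 * (prev + f) * (a_max / n)
        prev = f
    return total

def adot_over_h0(a, om, orad, ol):
    # reciprocal of the integrand: da/d(H0 t)
    return math.sqrt(orad / a ** 2 + om / a + ol * a ** 2)

# Matter-only check: H0 t = (2/3) a^(3/2), so H0 t = 2/3 at a = 1
t_matter = h0_t(1.0, 1.0, 0.0, 0.0)
```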
The spherical spindle is the exterior surface of the solid shaped like a segment of a mandarin or orange; the solid itself is known as a wedge.
The time zones are (although the Earth is not exactly spherical) the most familiar example of this figure.
If we apply proportions according to the area of the sphere (with the spindle's opening angle of $$n$$ degrees) we find that the area of the spherical spindle is: $$$A_{spindle}=4\pi \cdot r^2 \cdot \dfrac{n_{degrees}}{360^\circ}$$$
Using the same procedure but with the expression of volume, we can see that the volume of the spherical wedge is: $$$V_{wedge}=\dfrac{4}{3}\pi \cdot r^3 \dfrac{n_{degrees}}{360^\circ}$$$ |
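Both proportionality formulas are one-liners to encode. A small Python check, using the quarter-sphere case $$n = 90^\circ$$ on a unit sphere:

```python
import math

def spindle_area(r, n_degrees):
    # fraction n/360 of the full sphere surface 4*pi*r^2
    return 4 * math.pi * r ** 2 * n_degrees / 360

def wedge_volume(r, n_degrees):
    # fraction n/360 of the full ball volume (4/3)*pi*r^3
    return (4 / 3) * math.pi * r ** 3 * n_degrees / 360

# A 90-degree spindle on a unit sphere covers a quarter of the surface
area_quarter = spindle_area(1, 90)
volume_quarter = wedge_volume(1, 90)
```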
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... |
I'm still looking at how shocks behave in the ideal information transfer model, but I'd like to discuss non-ideal information transfer for a minute.
The information transfer model of supply $S$ and demand $D$ essentially has the 'invisible hand' operating as an entropic force -- I have some animations here (and here is the underlying model). In the ideal case we have the price $P$ of a good given by:
$$
P = \frac{dD}{dS} = k \; \frac{D}{S}
$$
where $k$ is a constant. In general we have:
$$
\text{(1) }\; P = \frac{dD}{dS} \leq k \; \frac{D}{S}
$$
and we call this case non-ideal information transfer. What does this look like? Well here is a demand shock, sending the price lower:
The ideal price is black and an example non-ideal price satisfying equation (1) is in gray ... it falls below the ideal price. The information transfer model doesn't tell us what that non-ideal price is -- it is the result of any number of effects: expectations of agents, confidence, 'frictions', network effects, asymmetric information, etc. As a physics analogy, one of the sources of non-ideal behavior are interactions between molecules like attractive forces.
The loss in ideal NGDP (total supply × ideal price) is proportional to the loss in entropy [1], as can be seen in the next pair of graphs:
When the fall in the number of points occurs on one side (the fall in demand), there are temporarily unequal numbers of points on each side representing a coordinated state with lower entropy (an uncoordinated state would have equal numbers of points on each side, the highest entropy state [2]). This is the sense in which I mean coordination causes recessions (and is equivalent to entropy loss). Once the coordination is over -- in the model, points aren't being taken away from the demand side -- the situation returns to an uncoordinated state with equal numbers on each side. That is the maximum entropy state (although at lower absolute entropy since there are fewer points).
The fall in the non-ideal price leads to a larger fall in NGDP than the ideal price -- so there could be a component of a recession, for example, that is due to other factors beyond the operation of an ideal market. So in general we can say that:
$$
\Delta NGDP = \Delta NGDP_{ideal} + \Delta NGDP_{nonideal}
$$
This brings me to two recent voxeu.org articles, a paper (H/T John Cochrane and Mark Thoma) and an old post from Scott Sumner:
Item 1: The authors propose that confidence shocks could impact the macroeconomy without seeming changes in the "fundamentals". As the authors put it: The appealing feature of [confidence-based] models was that they could accommodate coordination failures and movements in economic confidence without any commensurate movements in ‘hard’ fundamentals, such as peoples’ abilities and tastes or the economy’s know-how, or expectations of such fundamentals.
We could interpret this as saying that in these scenarios:
$$
\Delta NGDP_{ideal} < \Delta NGDP_{nonideal}
$$
Item 2: Rabah Arezki, Valerie A. Ramey and Liugang Sheng, 5 Jan 2015. This paper shows that news shocks seem to describe one particular market pretty well without any bells and whistles (see John Cochrane for a partial, but extensive, list of mechanisms this leaves out). Noah Smith refers to the paper as an existence proof of cases where real business cycle (RBC) theory works.
We could interpret this as saying in specific markets we have:
$$
\Delta NGDP_{nonideal} \simeq 0
$$
and that the reason RBC theories like the one in the paper get e.g. unemployment going the wrong way is that elevated unemployment arises chiefly from the non-ideal component.
Item 3: Daron Acemoglu, Asuman Ozdaglar and Alireza Tahbaz-Salehi, 27 March 2015. This paper shows that large macroeconomic deviations could be the result of small fluctuations combined with network effects. As put by the authors: In this sense, our results provide a novel solution to what Bernanke et al. (1996) refer to as the 'small shocks, large cycles puzzle'.
We could interpret this as saying that because of network effects in macro situations we again have:
$$
\Delta NGDP_{ideal} < \Delta NGDP_{nonideal}
$$
There is a problem with this particular mechanism, though -- a priori we should assume the network amplification factors are distributed evenly (or logarithmically uniformly) between large and small. That is to say, for a set of small shocks, these should result in small, medium and large cycles. But as we see in the next item, we don't see medium effects.
One way to rescue this is something like the thresholds in random graph theory. In adding random links to a graph, above a certain threshold number, there is almost surely a giant connected component. Basically, in this sense, there are either large connected networks or small pieces disconnected from most of the network -- leading to either large cycles from small shocks when the shocks hit the giant connected component, or small cycles from small shocks when they don't. The linked paper doesn't have anything to say about this (at least not in a language I understand as a mathematician or physicist).
Item 4: Scott Sumner, 20 Dec 2011. Sumner put forward the puzzle of the lack of what he termed mini-recessions. As he describes it: It's often said that nature abhors a vacuum. I'd add that nature abhors a huge donut hole in the distribution of "shocks." Suppose there were lots of earthquakes of zero to six magnitude. And occasional earthquakes of more than seven. ... But nothing between 6 and 7. Wouldn't that be very odd?
From the previous item, we could understand this as shocks being amplified by network effects when they hit the giant connected component of the random input-output graph. We could interpret this as saying in macro situations we again have:
$$
\Delta NGDP_{ideal} < \Delta NGDP_{nonideal}
$$
while additionally positing that $\Delta NGDP_{nonideal} \simeq 0 $ when the shock behind $\Delta NGDP_{ideal}$ is in a disconnected market.
However there is an additional way we could interpret this observation. When shocks randomly rise above the noise, a different non-market amplification effect takes over though e.g. the news that coordinates group behavior. What I have in mind is something like this paper [pdf] from Salganik, Dodds and Watts (2006) where they set up multiple online music sites that differed in whether other people could see which songs were downloaded or not. In that scenario, when social interaction was allowed, the songs that went to the top not only went a lot farther relative to the second or third place, but it was also more unpredictable which songs would go to the top. In a sense, what I'm describing here would be something like which "shocks" unpredictably become the most "popular" through media like CNBC or Bloomberg (or even market indices like the Dow or S&P500). That bad economic news cycle triggers $\Delta NGDP_{nonideal}$ to become large, resulting in a much bigger impact than the fundamentals would suggest.
Overall, these four items paint a picture where $\Delta NGDP_{nonideal}$ may be the most important effect in macroeconomic fluctuations -- but not during "normal times". In recessions, complicated economic models dominate, but the situation simplifies to a simple model with a few variables outside of recessions.
Footnotes:
[1] This is one thing that makes statistical economics different from statistical mechanics -- the second law of thermodynamics $\Delta S > 0$ does not always apply. Most of the time there is economic growth and $\Delta S > 0$ and $\Delta N > 0$, but during recessions there is a spontaneous fall in entropy ($\Delta S < 0$) and $\Delta N < 0$.
[2] Imagine throwing balls into the two buckets at random -- you'd end up with approximately equal numbers in each side. In order to get unequal numbers, you'd need to coordinate your throws. |
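The thought experiment in footnote [2] is easy to simulate. A hedged sketch (the function name, trial count, and seed are arbitrary choices of mine):

```python
import random

def mean_imbalance(n_balls, trials=2000, seed=0):
    """Throw n_balls into two buckets uniformly at random, many times over,
    and return the average absolute difference between the bucket counts."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        left = sum(rng.randint(0, 1) for _ in range(n_balls))  # balls landing left
        total += abs(2 * left - n_balls)                       # |left - right|
    return total / trials

# The typical imbalance grows only like sqrt(n_balls), so relative to the
# total the two sides stay nearly equal: the maximum entropy state.
# Getting a large imbalance requires coordinating the throws.
```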
I previously worked out that ensembles of information equilibrium relationships have a formal resemblance to a single aggregate information equilibrium relationship involving the ensemble averages:
$$
\frac{d \langle A \rangle}{dB} = \langle k \rangle \frac{\langle A \rangle}{B}
$$
I wanted to point out that this means ensemble ratios and abstract prices will exhibit a dynamic equilibrium just like individual information equilibrium relationships if $\langle k \rangle$ changes slowly (with respect to both $B$ and now time $t$):
$$
\frac{d}{dt} \log \frac{\langle A \rangle}{B} \approx (\langle k \rangle - 1) \beta
$$
plus terms $\sim d\langle k \rangle /dt$ where we assume (really, empirically observe) $B \sim e^{\beta t}$ with growth rate $\beta$. The ensemble average version allows for the possibility that $\langle k \rangle$ can change over time (if it changes too quickly, additional terms become important in the solution to the differential equation as well as the last dynamic equilibrium equation).
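For a slowly varying $\langle k \rangle$, the dynamic equilibrium relation follows by integrating the ensemble equation directly; a sketch (treating $\langle k \rangle$ as constant over the interval of interest):

```latex
\frac{d \langle A \rangle}{dB} = \langle k \rangle \frac{\langle A \rangle}{B}
\;\Longrightarrow\;
\langle A \rangle = \langle A \rangle_{0}\left(\frac{B}{B_{0}}\right)^{\langle k \rangle}
\;\Longrightarrow\;
\log \frac{\langle A \rangle}{B} = (\langle k \rangle - 1)\log B + \text{const}
```

With $B \sim e^{\beta t}$ we have $\log B = \beta t + \text{const}$, so differentiating in $t$ gives $\frac{d}{dt} \log \frac{\langle A \rangle}{B} = (\langle k \rangle - 1)\beta$; a time-dependent $\langle k \rangle$ contributes the extra $d\langle k \rangle/dt$ terms.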
Generally, considering the first equation above with a slowly changing $\langle k \rangle$, we can apply nearly all of the results collected in the tour of information equilibrium chart package to ensembles of information equilibrium relationships. These have been described in three blog posts:
The present post arguing the extension of the dynamic equilibrium approach to ensemble averages |
Seeing that in the Chomsky Hierarchy Type 3 languages can be recognised by a DFA (which has no stacks), Type 2 by a DFA with one stack (i.e. a push-down automaton) and Type 0 by a DFA with two stacks (i.e. with one queue, i.e. with a tape, i.e. by a Turing Machine), how do Type 1 languages fit in...
Considering this pseudo-code of a bubblesort:
FOR i := 0 TO arraylength(list) STEP 1
    switched := false
    FOR j := 0 TO arraylength(list)-(i+1) STEP 1
        IF list[j] > list[j + 1] THEN
            switch(list,j,j+1)
            switched := true
        ENDIF
    NEXT
    IF switch...
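For reference, that pseudocode translates to a short runnable version (a sketch in Python; `bubble_sort` is my name, and the `switch` helper is folded into a tuple swap):

```python
def bubble_sort(lst):
    """In-place bubble sort with the early-exit flag from the pseudocode."""
    n = len(lst)
    for i in range(n):
        switched = False
        for j in range(n - (i + 1)):
            if lst[j] > lst[j + 1]:
                lst[j], lst[j + 1] = lst[j + 1], lst[j]  # switch(list, j, j+1)
                switched = True
        if not switched:  # no swaps in a full pass: already sorted
            break
    return lst
```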
Let's consider a memory segment (whose size can grow or shrink, like a file, when needed) on which you can perform two basic memory allocation operations involving fixed size blocks: allocation of one block, and freeing a previously allocated block which is not used anymore. Also, as a requiremen...
Rice's theorem tells us that the only semantic properties of Turing Machines (i.e. the properties of the function computed by the machine) that we can decide are the two trivial properties (i.e. always true and always false). But there are other properties of Turing Machines that are not decidabl...
People often say that LR(k) parsers are more powerful than LL(k) parsers. These statements are vague most of the time; in particular, should we compare the classes for a fixed $k$ or the union over all $k$? So how is the situation really? In particular, I am interested in how LL(*) fits in. As f...
Since the current FAQs say this site is for students as well as professionals, what will the policy on homework be? What are the guidelines that a homework question should follow if it is to be asked? I know on math.se they loosely require that the student make an attempt to solve the question a...
I really like the new beta theme, I guess it is much more attractive to newcomers than the sketchy one (which I also liked). Thanks a lot! However I'm slightly embarrassed because I can't read what I type, both in the title and in the body of a post. I never encountered the problem on other Stac...
This discussion started in my other question "Will Homework Questions Be Allowed?". Should we allow the tag? It seems that some of our sister sites (Programmers, stackoverflow) have not allowed the tag as it isn't constructive to their sites. But other sites (Physics, Mathematics) do allow the s...
There have been many questions on CST that were either closed, or just not answered because they weren't considered research level. May those questions (as long as they are of good quality) be reposted or moved here? I have a particular example question in mind: http://cstheory.stackexchange.com...
Ok, so in most introductory Algorithm classes, either BigO or BigTheta notation are introduced, and a student would typically learn to use one of these to find the time complexity. However, there are other notations, such as BigOmega and SmallOmega. Are there any specific scenarios where one not...
Many textbooks cover intersection types in the lambda-calculus. The typing rules for intersection can be defined as follows (on top of the simply typed lambda-calculus with subtyping):$$\dfrac{\Gamma \vdash M : T_1 \quad \Gamma \vdash M : T_2}{\Gamma \vdash M : T_1 \wedge T_2}...
I expect to see pseudo code and maybe even HPL code on a regular basis. I think syntax highlighting would be a great thing to have. On Stackoverflow, code is highlighted nicely; the schema used is inferred from the respective question's tags. This won't work for us, I think, because we probably wo...
Sudoku generation is hard enough. It is much harder when you have to make an application that makes a completely random Sudoku. The goal is to make a completely random Sudoku in Objective-C (C is welcome). This Sudoku generator must be easily modified, and must support the standard 9x9 Sudoku, a...
I have observed that there are two different types of states in branch prediction. In superscalar execution, where the branch prediction is very important, and it is mainly in execution delay rather than fetch delay. In the instruction pipeline, where the fetching is more of a problem since the inst...
Is there any evidence suggesting that time spent on writing up, or thinking about, the requirements will have any effect on the development time? A study done by Standish (1995) suggests that incomplete requirements partially (13.1%) contributed to the failure of the projects. Are there any studies ...
NPI is the class of NP problems without any polynomial time algorithms and not known to be NP-hard. I'm interested in problems such that a candidate problem in NPI is reducible to it but it is not known to be NP-hard and there is no known reduction from it to the NPI problem. Are there any known ...
I am reading Mining Significant Graph Patterns by Leap Search (Yan et al., 2008), and I am unclear on how their technique translates to the unlabeled setting, since $p$ and $q$ (the frequency functions for positive and negative examples, respectively) are omnipresent. On page 436, however, the au...
This is my first time to be involved in a site beta, and I would like to gauge the community's opinion on this subject. On StackOverflow (and possibly Math.SE), questions on introductory formal language and automata theory pop up... questions along the lines of "How do I show language L is/isn't...
This is my first time to be involved in a site beta, and I would like to gauge the community's opinion on this subject. Certainly, there are many kinds of questions which we could expect to (eventually) be asked on CS.SE; lots of candidates were proposed during the lead-up to the Beta, and a few...
This is somewhat related to this discussion, but different enough to deserve its own thread, I think. What would be the site policy regarding questions that are generally considered "easy", but may be asked during the first semester of studying computer science? Example: "How do I get the symme...
EPAL, the language of even palindromes, is defined as the language generated by the following unambiguous context-free grammar:
$S \rightarrow a a$
$S \rightarrow b b$
$S \rightarrow a S a$
$S \rightarrow b S b$
EPAL is the 'bane' of many parsing algorithms: I have yet to enc...
Assume a computer has a precise clock which is not initialized. That is, the time on the computer's clock is the real time plus some constant offset. The computer has a network connection and we want to use that connection to determine the constant offset $B$. The simple method is that the compu...
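The "simple method" the question alludes to can be sketched as a single round-trip exchange (Cristian/NTP style). A hedged sketch: `server_clock` is a hypothetical callable, and the symmetric-delay assumption is mine:

```python
import time

def estimate_offset(server_clock, local_clock=time.monotonic):
    """Estimate the constant offset between a server clock and the local clock
    from one round trip. The estimate is exact only when the network delay
    is symmetric between request and reply."""
    t0 = local_clock()           # request sent
    t_server = server_clock()    # server's timestamp, read mid-flight
    t1 = local_clock()           # reply received
    # Assume the server was read halfway through the round trip:
    return t_server - (t0 + t1) / 2.0
```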
Consider an inductive type which has some recursive occurrences in a nested, but strictly positive location. For example, trees with finite branching with nodes using a generic list data structure to store the children:
Inductive LTree : Set := Node : list LTree -> LTree.
The naive way of d...
I was editing a question and I was about to tag it bubblesort, but it occurred to me that tag might be too specific. I almost tagged it sorting, but its only connection to sorting is that the algorithm happens to be a type of sort; it's not about sorting per se. So should we tag questions on a pa...
To what extent are questions about proof assistants on-topic? I see four main classes of questions: modeling a problem in a formal setting, going from the object of study to the definitions and theorems; proving theorems in a way that can be automated in the chosen formal setting; writing a co...
Should topics in applied CS be on topic? These are not really considered part of TCS; examples include: computer architecture (operating systems, compiler design, programming language design), software engineering, artificial intelligence, computer graphics, computer security. Source: http://en.wik...
I asked one of my current homework questions as a test to see what the site as a whole is looking for in a homework question. It's not a difficult question, but I imagine this is what some of our homework questions will look like.
I have an assignment for my data structures class. I need to create an algorithm to see if a binary tree is a binary search tree, as well as count how many complete branches there are (a parent node with both left and right child nodes), with an assumed global counting variable. So far I have...
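One standard way to do both checks (not necessarily what the course expects; the interval-based `is_bst` and the recursive counter below avoid the global variable):

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def is_bst(node, lo=float("-inf"), hi=float("inf")):
    """A tree is a BST iff every node's value fits its (lo, hi) window."""
    if node is None:
        return True
    return (lo < node.val < hi
            and is_bst(node.left, lo, node.val)
            and is_bst(node.right, node.val, hi))

def complete_branches(node):
    """Count nodes having both a left and a right child."""
    if node is None:
        return 0
    here = 1 if (node.left and node.right) else 0
    return here + complete_branches(node.left) + complete_branches(node.right)
```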
It's a known fact that every LTL formula can be expressed by a Büchi $\omega$-automaton. But, apparently, Büchi automata are a more powerful, expressive model. I've heard somewhere that Büchi automata are equivalent to linear-time $\mu$-calculus (that is, $\mu$-calculus with usual fixpoints and onl...
Let us call a context-free language deterministic if and only if it can be accepted by a deterministic push-down automaton, and nondeterministic otherwise.Let us call a context-free language inherently ambiguous if and only if all context-free grammars which generate the language are ambiguous,...
One can imagine using a variety of data structures for storing information for use by state machines. For instance, push-down automata store information in a stack, and Turing machines use a tape. State machines using queues, and ones using multiple stacks or tapes, have been shown to be equi...
Though in the future it would probably a good idea to more thoroughly explain your thinking behind your algorithm and where exactly you're stuck. Because as you can probably tell from the answers, people seem to be unsure on where exactly you need directions in this case.
What will the policy on providing code be? In my question it was commented that it might not be on topic as it seemed like I was asking for working code. I wrote my algorithm in pseudo-code because my problem didn't ask for working C++ or whatever language. Should we only allow pseudo-code here?...
I came across John Duffield on Quantum Computing SE via this hot question. I was curious to see an account with 1 reputation and a question with hundreds of upvotes. It turned out that the reason why he has so little reputation despite a massively popular question is that he was suspended. May I ...
@Nelimee Do we need to merge? Currently, there's just one question with "phase-estimation" and another question with "quantum-phase-estimation". Might we as well use just one tag? (say just "phase-estimation")
@Blue 'merging', if I'm getting the terms right, is a specific single action that does exactly that and is generally preferable to editing tags on questions. Having said that, if it's just one question, it doesn't really matter although performing a proper merge is still probably preferable
Merging is taking all the questions with a specific tag and replacing that tag with a different one, on all those questions, on a tag level, without permanently changing anything about the underlying tags
@Blue yeah, you could do that. It generally requires votes, so it's probably not worth bothering when only one question has that tag
@glS "Every hermitian matrix satisfy this property: more specifically, all and only Hermitian matrices have this property" ha? I thought it was only a subset of the set of valid matrices ^^ Thanks for the precision :)
@Nelimee if you think about it it's quite easy to see. Unitary matrices are the ones with phases as eigenvalues, while Hermitians have real eigenvalues. Therefore, if a matrix is not Hermitian (does not have real eigenvalues), then its exponential will not have eigenvalues of the form $e^{i\phi}$ with $\phi\in\mathbb R$. Although I'm not sure whether there could be exceptions for non diagonalizable matrices (if $A$ is not diagonalizable, then the above argument doesn't work)
This is an elementary question, but a little subtle so I hope it is suitable for MO.Let $T$ be an $n \times n$ square matrix over $\mathbb{C}$.The characteristic polynomial $T - \lambda I$ splits into linear factors like $T - \lambda_iI$, and we have the Jordan canonical form:$$ J = \begin...
@Nelimee no! Unitarily diagonalizable matrices are all and only the normal ones (satisfying $AA^\dagger = A^\dagger A$). For general diagonalizability, if I'm not mistaken, one characterization is that the sum of the dimensions of the eigenspaces has to match the total dimension
@Blue I actually agree with Nelimee here that it's not that easy. You get $UU^\dagger = e^{iA} e^{-iA^\dagger}$, but if $A$ and $A^\dagger$ do not commute it's not straightforward that this doesn't give you an identity
I'm getting confused. I remember there being some theorem about one-to-one mappings between unitaries and hermitians provided by the exponential, but it was some time ago and may be confusing things in my head
@Nelimee if there is a $0$ there then it becomes the normality condition. Otherwise it means that the matrix is not normal, therefore not unitarily diagonalizable, but still the product of exponentials is relatively easy to write
@Blue you are right indeed. If $U$ is unitary then for sure you can write it as the exponential of a Hermitian (times $i$). This is easily proven because $U$ is ensured to be unitarily diagonalizable, so you can simply compute its logarithm through the eigenvalues. However, logarithms are tricky and multivalued, and there may be logarithms which are not diagonalizable at all.
I've actually recently asked some questions on math.SE on related topics
@Mithrandir24601 indeed, that was also what @Nelimee showed with an example above. I believe my argument holds for unitarily diagonalizable matrices. If a matrix is only generally diagonalizable (so it's not normal) then it's not true
also probably even more generally without $i$ factors
so, in conclusion, it does indeed seem that $e^{iA}$ unitary implies $A$ Hermitian. It therefore also seems that $e^{iA}$ unitary implies $A$ normal, so that also my argument passing through the spectra works (though one has to show that $A$ is ensured to be normal)
Now what we need to look for is 1) The exact set of conditions for which the matrix exponential $e^A$ of a complex matrix $A$, is unitary 2) The exact set of conditions for which the matrix exponential $e^{iA}$ of a real matrix $A$ is unitary
@Blue fair enough - as with @Semiclassical I was thinking about it with the t parameter, as that's what we care about in physics :P I can possibly come up with a number of non-Hermitian matrices that gives unitary evolution for a specific t
Or rather, the exponential of which is unitary for $t+n\tau$, although I'd need to check
If you're afraid of the density of diagonalizable matrices, simply triangularize $A$. You get $$A=P^{-1}UP,$$ with $U$ upper triangular and the eigenvalues $\{\lambda_j\}$ of $A$ on the diagonal.Then$$\mbox{det}\;e^A=\mbox{det}(P^{-1}e^UP)=\mbox{det}\;e^U.$$Now observe that $e^U$ is upper ...
There's 15 hours left on a bountied question, but the person who offered the bounty is suspended and his suspension doesn't expire until about 2 days, meaning he may not be able to award the bounty himself?That's not fair: It's a 300 point bounty. The largest bounty ever offered on QCSE. Let h... |
@Rubio The options are available to me and I've known about them the whole time but I have to admit that it feels a bit rude if I act like an attribution vigilante that goes around flagging everything and leaving comments. I don't know how the process behind the scenes works but what I have done up to this point is leave a comment then wait for a while. Normally I get a response or I flag after some time has passed. I'm guessing you say this because I've forgotten to flag several times
You can always leave a friendly comment if you like, but flagging gets eyes on it to get the problem addressed - ideally before people start answering it. Something we don't want is for people to farm rep off someone else's content, which we see occasionally; but even beyond that, SE in general and we in particular dislike it when people post content they didn't create without properly acknowledging its source. And most of the creative effort here is in the question.
So yeah, it's best to flag it when you see it. That'll put it into the queue for reviewers to agree (or not) - so don't worry that you're single-handedly (-footedly?) stomping on people :)
Unfortunately, a significant part of the time, the asker never supplies the origin. Sometimes they self-delete the question rather than just tell us where it came from. Other times they ignore the request and the whole thing, including whatever effort people put into answering, gets discarded when the question is deleted.
Okay. This is the first Riley I've written, and it gets progressively harder as you go along, so here goes. I wrote this, and then realized that I used a mispronunciation of the target, so I had to sloppily improvise. I apologize. Anyway, I hope you enjoy it!
My prefix is just shy of white,
Yet...
IBaNTsJTtStPMP means "I'm Bad at Naming Things, so Just Try to Solve this Patterned Masyu Puzzle!". The original Masyu rules apply. Make a single loop with lines passing through the centers of cells, horizontally or vertically. The loop never crosses itself, branches off, or goe...
This puzzle is based off the What is a Word™ and What is a Phrase™ series started by JLee and their spin-off What is a Number™ series. If a word conforms to a certain rule, I call it an Etienne Word™. Use the following examples to find the rule. These are not the only examples of Etienne Wo...
This puzzle is based off the What is a Word™ and What is a Phrase™ series started by JLee and their spin-off What is a Number™ series. If a word conforms to a certain rule, I call it an Eternal Word™. Use the following examples to find the rule: $$% set Title text. (spaces around the text ARE ...
Introduction
I am an enthusiastic geometry student, preparing for my first quiz. Yet while revising I accidentally spilt my coffee onto my notes. Can you rescue me and draw me a diagram so that I can revise it for tomorrow's test? Thank you very much!
My Notes
Sometimes you are this word
Remove the first letter, does not change the meaning
Remove the first two letters, still feels the same
Remove the first three letters and you find a way
Remove the first four letters and you get a number
The letters rearranged is a surname
Wh...
– "Sssslither..."
Brigitte jumped. The voice had whispered almost directly into her ear, yet there was nobody to be seen. She looked at the ground beneath her feet. Was something moving? She was probably imagining things again.
– Did you hear something? she asked her guide, Skaylee....
The creator of this masyu forgot to add the final stone, so the puzzle remains incomplete. Finish his job for him by placing one additional stone (either black or white) on the board so that the result is a uniquely solvable masyu. Normal masyu rules apply.
So here's a standard Nurikabe puzzle. I'll be using the final (solved) grid for my upcoming local puzzle competition logo, as it will spell the abbreviation of the competition name. So, what does it spell? Rules (adapted from Nikoli): Fill in the cells under the following rules....
I've designed a set of dominoes puzzles that I call Donimoes. You slide the dominoes like the cars in Nob Yoshigahara's Rush Hour puzzle, always along their long axis. The goal of Blocking Donimoes is to slide all the dominoes into a rectangle, without sliding any matching numbers next to each ot...
I am mud that will trap you. I am a colloid hydrogel. What am I? Take the first half of me and add me to this: I am dangerous to wolves and werewolves alike. Some people even say that I am dangerous to unholy things. Use the creator of Poirot to find out: What am I? Now, take another word for ...
Clark who is consecutive in nature, lives in California near the 100th street. Today he decided to take his palindromic boat and visit France. He booked a room which has a number of thrice a prime. Then he ordered Taco and Cola for his breakfast. The online food delivery site asked him to enter t...
Suppose you are sitting comfortably in your universe admiring the word SING. Just then, Q enters your universe and insists that you insert the string "IMMER" into your precious word to create a new word for his amusement. Okay, you can make the word IMMERSING... But then you realize, you can a...
You! I see you walking there
Nary a worry or a care
Come, listen to me speak
My mind is strong, though my body is weak.
I've got a riddle for you to ponder
Something to think about whilst you wander
It's a classic Riley, a word split in three
For a prefix, an...
@OmegaKrypton rather a poor solution, I think, but I'll try it anyway: Quarrel= cross words. When combined heartlessly: put them together by removing the middle space. Thus, crosswords. Nonstop: remove the final letter. We've made crossword = feature in daily newspaper
I saw this photo on LinkedIn: Is this a puzzle? If so, what does it mean and what is a solution? What I've found so far:
$a = \pi r^2$ is clearly the area of a disk of radius $r$
$2\pi r$ is clearly its circumference
$\displaystyle \int\dfrac{dx}{\sin x} = \ln\left(\left| \tan \dfrac{x}{2}\right|\...
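The integral identity in that last line can be checked numerically. A small sketch comparing a finite-difference derivative of $\ln|\tan(x/2)|$ with the integrand $1/\sin x$:

```python
import math

def antiderivative(x):
    """A candidate antiderivative of 1/sin(x): ln|tan(x/2)|."""
    return math.log(abs(math.tan(x / 2)))

# A central difference of the antiderivative should reproduce 1/sin(x):
h = 1e-6
for x in (0.5, 1.0, 2.0):
    deriv = (antiderivative(x + h) - antiderivative(x - h)) / (2 * h)
    assert abs(deriv - 1 / math.sin(x)) < 1e-5
```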
Alright, I have this group $\langle x_i, i\in\mathbb{Z}\mid x_i^2=x_{i-1}x_{i+1}\rangle$ and I'm trying to determine whether $x_ix_j=x_jx_i$ or not. I'm unsure there is enough information to decide this, to be honest.
Nah, I have a pretty garbage question. Let me spell it out.
I have a fiber bundle $p : E \to M$ where $\dim M = m$ and $\dim E = m+k$. Usually a normal person defines $J^r E$ as follows: for any point $x \in M$ look at local sections of $p$ over $x$.
For two local sections $s_1, s_2$ defined on some nbhd of $x$ with $s_1(x) = s_2(x) = y$, say $J^r_p s_1 = J^r_p s_2$ if with respect to some choice of coordinates $(x_1, \cdots, x_m)$ near $x$ and $(x_1, \cdots, x_{m+k})$ near $y$ such that $p$ is projection to first $m$ variables in these coordinates, $D^I s_1(0) = D^I s_2(0)$ for all $|I| \leq r$.
This is a coordinate-independent (chain rule) equivalence relation on local sections of $p$ defined near $x$. So let the set of equivalence classes be $J^r_x E$ which inherits a natural topology after identifying it with $J^r_0(\Bbb R^m, \Bbb R^k)$ which is space of $r$-order Taylor expansions at $0$ of functions $\Bbb R^m \to \Bbb R^k$ preserving origin.
Then declare $J^r p : J^r E \to M$ is the bundle whose fiber over $x$ is $J^r_x E$, and you can set up the transition functions etc no problem so all topology is set. This becomes an affine bundle.
Define the $r$-jet sheaf $\mathscr{J}^r_E$ to be the sheaf which assigns to every open set $U \subset M$ an $(r+1)$-tuple $(s = s_0, s_1, s_2, \cdots, s_r)$ where $s$ is a section of $p : E \to M$ over $U$, $s_1$ is a section of $dp : TE \to TU$ over $U$, $\cdots$, $s_r$ is a section of $d^r p : T^r E \to T^r U$ where $T^k X$ is the iterated $k$-fold tangent bundle of $X$, and the tuple satisfies the following commutation relation for all $0 \leq k < r$
$$\require{AMScd}\begin{CD} T^{k+1} E @>>> T^k E\\ @AAA @AAA \\ T^{k+1} U @>>> T^k U \end{CD}$$
@user193319 It converges uniformly on $[0,r]$ for any $r\in(0,1)$, but not on $[0,1)$, cause deleting a measure zero set won't prevent you from getting arbitrarily close to $1$ (for a non-degenerate interval has positive measure).
The top and bottom maps are tangent bundle projections, and the left and right maps are $s_{k+1}$ and $s_k$.
@RyanUnger Well I am going to dispense with the bundle altogether and work with the sheaf, is the idea.
The presheaf is $U \mapsto \mathscr{J}^r_E(U)$ where $\mathscr{J}^r_E(U) \subset \prod_{k = 0}^r \Gamma_{T^k E}(T^k U)$ consists of all the $(r+1)$-tuples of the sort I described
It's easy to check that this is a sheaf, because basically sections of a bundle form a sheaf, and when you glue two of those $(r+1)$-tuples of the sort I describe, you still get an $(r+1)$-tuple that preserves the commutation relation
The stalk of $\mathscr{J}^r_E$ over a point $x \in M$ is clearly the same as $J^r_x E$, consisting of all possible $r$-order Taylor series expansions of sections of $E$ defined near $x$.
Let $M \subset \mathbb{R}^d$ be a compact smooth $k$-dimensional manifold embedded in $\mathbb{R}^d$. Let $\mathcal{N}(\varepsilon)$ denote the minimal cardinality of an $\varepsilon$-cover $P$ of $M$; that is, for every point $x \in M$ there exists a $p \in P$ such that $\| x - p\|_{2}<\varepsilon$....
The same result should be true for abstract Riemannian manifolds. Do you know how to prove it in that case?
I think there you really do need some kind of PDEs to construct good charts.
I might be way overcomplicating this.
If we define $\tilde{\mathcal H}^k_\delta$ to be the $\delta$-Hausdorff "measure" but instead of $diam(U_i)\le\delta$ we set $diam(U_i)=\delta$, does this converge to the usual Hausdorff measure as $\delta\searrow 0$?
I think so by the squeeze theorem or something.
this is a larger "measure" than $\mathcal H^k_\delta$ and that increases to $\mathcal H^k$
but then we can replace all of those $U_i$'s with balls, incurring some fixed error
In fractal geometry, the Minkowski–Bouligand dimension, also known as Minkowski dimension or box-counting dimension, is a way of determining the fractal dimension of a set S in a Euclidean space Rn, or more generally in a metric space (X, d). It is named after the German mathematician Hermann Minkowski and the French mathematician Georges Bouligand.To calculate this dimension for a fractal S, imagine this fractal lying on an evenly spaced grid, and count how many boxes are required to cover the set. The box-counting dimension is calculated by seeing how this number changes as we make the grid...
@BalarkaSen what is this
ok but this does confirm that what I'm trying to do is wrong haha
In mathematics, Hausdorff dimension (a.k.a. fractal dimension) is a measure of roughness and/or chaos that was first introduced in 1918 by mathematician Felix Hausdorff. Applying the mathematical formula, the Hausdorff dimension of a single point is zero, of a line segment is 1, of a square is 2, and of a cube is 3. That is, for sets of points that define a smooth shape or a shape that has a small number of corners—the shapes of traditional geometry and science—the Hausdorff dimension is an integer agreeing with the usual sense of dimension, also known as the topological dimension. However, formulas...
Let $a,b \in \Bbb{R}$ be fixed, and let $n \in \Bbb{Z}$. If $[\cdot]$ denotes the greatest integer function, is it possible to bound $|[abn] - [a[bn]]|$ by a constant that is independent of $n$? Are there any nice inequalities involving the greatest integer function?
I am trying to show that $n \mapsto [abn]$ and $n \mapsto [a[bn]]$ are equivalent quasi-isometries of $\Bbb{Z}$...that's the motivation. |
Erwin Kreyszig Section 2.8, Problem 1:
Define a functional on $C[a,b]$ by fixing $t_0\in[a,b]$ and setting:
$$f_1(x)=x(t_0)$$
Define a second functional on $l^2$ by choosing a fixed $a=(\alpha_j)\in l^2$ and setting $$f(x) = \sum_{j=1}^\infty \xi_j\alpha_j$$ where $x=(\xi_j)\in l^2$
Show these two functionals are linear
This question has been self-answered by the OP (me). |
$f\left(\mathbf{x}\right):\mathbb{R}_+^n\rightarrow\mathbb{R}_+$ is a concave monotonically increasing function to be minimised over the feasible region $\sum_{i=1}^n x_i=1$ and $x_i\geq 0\quad\forall1\leq i\leq n$.
Given that the feasible region is a convex polytope, is it possible to say anything about the optimal $\mathbf{x}^*$? I have a hunch that at $\mathbf{x}^*$, at least one of the inequality constraints will be satisfied with equality. In other words $\mathbf{x}^*$ will be right on the edge of the feasible region. But can't prove it. Am I right there?
If it helps to consider a special case, the function is
$f\left(\mathbf{x}\right)=\log|\mathbf{I}_k+\sum_{i=1}^{n}x_i\mathbf{A}_i|$
where $\mathbf{A}_i\in\mathbb{C}^{k\times k}$ are all positive semidefinite Hermitian matrices. |
Logblog: Richard Zach's Logic Blog
You are looking at an archived page. The website has moved to richardzach.org.
Dana Scott's proof reminded commenter "fbou" of Kalmár's 1935 completeness proof. (Original paper in German on the Hungarian Kalmár site.) Mendelson's Introduction to Mathematical Logic also uses this to prove completeness of propositional logic. Here it is (slightly corrected):
We need the following lemma:
Let $v$ be a truth-value assignment to the propositional variables in $\phi$, and let $p^v$ be $p$ if $v(p) = T$ and $\lnot p$ if $v(p) = F$. If $v$ makes $\phi$ true, then \[p_1^v, \dots, p_n^v \vdash \phi.\]
This is proved by induction on complexity of $\phi$.
If $\phi$ is a tautology, then any $v$ satisfies $\phi$. If $v$ is a truth value assignment to $p_1, \dots, p_n$, let $\Gamma(v,k) = \{p_1^v, \dots, p_k^v\}$. Let's show that for all $v$ and $k = n, \dots, 0$, $\Gamma(v, k) \vdash \phi$. If $k = n$, then $\Gamma(v, n) \vdash \phi$ by the lemma and the assumption that $\phi$ is a tautology, i.e., true for all $v$. Suppose the claim holds for $k+1$. This means in particular $\Gamma(v, k) \cup \{p_{k+1}\} \vdash \phi$ and $\Gamma(v, k) \cup \{\lnot p_{k+1}\} \vdash \phi$ for any given $v$. By the deduction theorem, we get $\Gamma(v, k) \vdash p_{k+1} \to \phi$ and $\Gamma(v, k) \vdash \lnot p_{k+1} \to \phi$. By $\vdash p_{k+1} \lor \lnot p_{k+1}$ and proof by cases, we get $\Gamma(v, k) \vdash \phi$. The theorem then follows since $\Gamma(v, 0) = \emptyset$.
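The semantic premise of the theorem, that $\phi$ is true under every truth-value assignment $v$, can be checked by brute force. Here is a minimal Python sketch of that check (an illustrative stand-in that encodes formulas as Python functions; it is not part of the proof system itself):

```python
from itertools import product

# Check that phi is a tautology: true under every truth-value
# assignment v to its n propositional variables.
def is_tautology(phi, n):
    return all(phi(*v) for v in product([False, True], repeat=n))

imp = lambda a, b: (not a) or b   # material conditional

# Peirce's law ((p -> q) -> p) -> p is a tautology; p -> q alone is not.
print(is_tautology(lambda p, q: imp(imp(imp(p, q), p), p), 2))  # True
print(is_tautology(imp, 2))                                     # False
```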
I apologize if the dynamic equilibrium [1] posts are getting monotonous, but as the blog's primary purpose is as a "working paper" (one that is now apparently a few hundred pages long) I must continue!
The latest Case-Shiller price index data was released earlier today showing a continued rise in housing prices. In looking at the data, I noticed it has the telltale signs of a dynamic equilibrium in the presence of shocks. However, as the previous derivation looked at ratios of quantities in information equilibrium, I thought I needed to expand the theory a bit.
If we have housing demand $H_{d}$ in information equilibrium with housing supply $H_{s}$ with abstract price $P$ (i.e. $P : H_{d} \rightleftarrows H_{s}$), we can say:
$$
P \equiv \frac{dH_{d}}{dH_{s}} = \; k \; \frac{H_{d}}{H_{s}}
$$
We can solve the differential equation to obtain
$$
\begin{align}
H_{d} & = \; H_{d}^{(0)} \left( \frac{H_{s}}{H_{s}^{(0)}} \right)^{k}\\
P & = \; k \frac{H_{d}^{(0)}}{H_{s}^{(0)}} \left( \frac{H_{s}}{H_{s}^{(0)}} \right)^{k-1}
\end{align}
$$
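A quick numerical sanity check (plain Python; the constants $k$, $H_d^{(0)}$, $H_s^{(0)}$ and the evaluation point are arbitrary illustrative values) that this closed form for $H_{d}$ satisfies the defining equation $dH_{d}/dH_{s} = k\,H_{d}/H_{s}$:

```python
# Closed-form solution H_d = H_d0 * (H_s/H_s0)**k with made-up constants
k, Hd0, Hs0 = 1.7, 100.0, 50.0
Hd = lambda Hs: Hd0 * (Hs / Hs0) ** k

Hs, h = 80.0, 1e-6
lhs = (Hd(Hs + h) - Hd(Hs - h)) / (2 * h)  # central-difference dH_d/dH_s
rhs = k * Hd(Hs) / Hs                      # k * H_d / H_s
print(abs(lhs - rhs) < 1e-4)               # True
```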
Now if housing supply grows at some rate $r$ such that $H_{s} \sim e^{rt}$, then
$$
\frac{d}{dt} \log P \approx \; (k-1) r
$$
Note that this is basically identical to the result for the ratios of quantities in information equilibrium in [1]. This should be apparent because the RHS of the first equation above is such a ratio and the LHS is the abstract price. Now let's use our procedure in [1] and say that the Case-Shiller index is our abstract price. The results are pretty decent:
The vertical lines again represent the centroids of the shocks. The negative shocks are at 1982.5, 1993.1, and 2007.7 (each associated with recessions). The positive shocks are at 1978.5 and 2005.6 (likely the California housing bubble and the global housing bubble, respectively).
PS I did want to note that we get increased prices with increased supply per the equations above. That is because we are assuming equilibrium (general equilibrium). If the housing supply increased quickly relative to housing demand, then we would get the standard economics 101 result. I discussed this more extensively here. |
(Sorry was asleep at that time but forgot to log out, hence the apparent lack of response)
Yes you can (since $k=\frac{2\pi}{\lambda}$). To convert from path difference to phase difference, multiply by $k$; see this PSE post for details: http://physics.stackexchange.com/questions/75882/what-is-the-difference-between-phase-difference-and-path-difference
But if you don't want to have a Google account: Chrome is really good. Much faster than FF (I can't run FF on either of the laptops here) and more reliable (it restores your previous session with 100% certainty if it crashes).
And Chrome has a Personal Blocklist extension which does what you want.
: )
Of course you already have a Google account but Chrome is cool : )
Guys, I feel a little defeated in trying to understand infinitesimals. I'm sure you all think this is hilarious. But if I can't understand this, then I'm yet again stalled. How did you guys come to terms with them, later in your studies?
do you know the history? Calculus was invented based on the notion of infinitesimals. There were serious logical difficulties found in it, and a new theory developed based on limits. In modern times using some quite deep ideas from logic a new rigorous theory of infinitesimals was created.
@QED No. This is my question as best as I can put it: I understand that lim_{x->a} f(x) = f(a), but then to say that the gradient of the tangent curve is some value, is like saying that when x=a, then f(x) = f(a). The whole point of the limit, I thought, was to say, instead, that we don't know what f(a) is, but we can say that it approaches some value.
I have a problem with showing that the limit of the following function $$\frac{\sqrt{\frac{3 \pi}{2n}} -\int_0^{\sqrt 6}(1-\frac{x^2}{6}+\frac{x^4}{120})^ndx}{\frac{3}{20}\frac 1n \sqrt{\frac{3 \pi}{2n}}}$$ equals $1$ as $n \to \infty$.
@QED When I said, "So if I'm working with function f, and f is continuous, my derivative dy/dx is by definition not continuous, since it is undefined at dx=0." I guess what I'm saying is that (f(x+h)-f(x))/h is not continuous since it's not defined at h=0.
@KorganRivera There are lots of things wrong with that: dx=0 is wrong. dy/dx: what's y? "dy/dx is by definition not continuous": it's not a function, so how can you ask whether or not it's continuous, ... etc.
In general this stuff with 'dy/dx' is supposed to help as some kind of memory aid, but since there's no rigorous mathematics behind it - all it's going to do is confuse people
in fact there was a big controversy about it since using it in obvious ways suggested by the notation leads to wrong results
@QED I'll work on trying to understand that the gradient of the tangent is the limit, rather than the gradient of the tangent approaches the limit. I'll read your proof. Thanks for your help. I think I just need some sleep. O_O
@NikhilBellarykar Either way, don't highlight everyone and ask them to check out some link. If you have a specific user which you think can say something in particular feel free to highlight them; you may also address "to all", but don't highlight several people like that.
@NikhilBellarykar No. I know what the link is. I have no idea why I am looking at it, what should I do about it, and frankly I have enough as it is. I use this chat to vent, not to exercise my better judgment.
@QED So now it makes sense to me that the derivative is the limit. What I think I was doing in my head was saying to myself that g(x) isn't continuous at x=h so how can I evaluate g(h)? But that's not what's happening. The derivative is the limit, not g(h).
@KorganRivera, in that case you'll need to be proving $\forall \varepsilon > 0,\,\,\,\, \exists \delta,\,\,\,\, \forall x,\,\,\,\, 0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon.$ by picking some correct L (somehow)
Hey guys, I have a short question a friend of mine asked me which I cannot answer because I have not learnt about measure theory (or whatever is needed to answer the question) yet. He asks what is wrong with $\int_0^{2 \pi} \frac{d}{dn} e^{inx}\, dx$ when he applies Lebesgue's dominated convergence theorem, because apparently, if he first integrates and then differentiates the result is 0, but if he first differentiates and then integrates it's not 0. Does anyone know? |
Suppose I made a tag and it is used by many people every day; will it increase my reputation? And what if no one uses it even once for a long time, i.e. 6 months?
Maybe it could be called book-errata? I cannot give many examples of discussions from math.SE offhand but at least one example from here Find limit of unknown function This is an example from a different forum: http://www.sosmath.com/CBB/viewtopic.php?p=181367 I guess questions (and answers) r...
"Math-review is a tag for questions concerning troubles with a text one is reading. Typos and interpretation issues are also pertinent. Please be sure to mention the source you are using and to make quotation to make your question more precise "
I created a new proposal at area 51 its called Math Review (http://area51.stackexchange.com/proposals/90443/math-review) and it is intended to be a Q&A site concerning troubles one find while reading text books and articles. Most of them are bad typos, but not always, for instance I recently fou...
In the definition of martingales, one finds in Stroock and Varadhan (Multidimensional Diffusion processes - page 20) the strange request that it be right-continuous process. However no such requirement is made in the wiki https://en.wikipedia.org/wiki/Martingale_%28probability_theory%29 no...
In the book Multidimensional diffusion processes, of Stroock and Varadhan one reads (page 23): This is the proof of $(i)$. Here the authors say Define $f_t$ on $(\{\tau \leq t\}, \mathcal{F}_t [\{\tau \leq t\}])$ What is the $\sigma$-algebra $\mathcal{F}_t [\{\tau \leq t\}]$? Is it $\{...
This is a meta meta question on the principles by which we should decide which tags to create. The motivation for the question stems from this meta discussion, in which a majority thought that there shouldn't be an errata tag, whereas a significant minority of $5$ upvoted my answer saying there s...
I came across John Duffield on Quantum Computing SE via this hot question. I was curious to see an account with 1 reputation and a question with hundreds of upvotes. It turned out that the reason why he has so little reputation despite a massively popular question is that he was suspended. May I ...
@Nelimee Do we need to merge? Currently, there's just one question with "phase-estimation" and another question with "quantum-phase-estimation". Might we as well use just one tag? (say just "phase-estimation")
@Blue 'merging', if I'm getting the terms right, is a specific single action that does exactly that and is generally preferable to editing tags on questions. Having said that, if it's just one question, it doesn't really matter although performing a proper merge is still probably preferable
Merging is taking all the questions with a specific tag and replacing that tag with a different one, on all those questions, on a tag level, without permanently changing anything about the underlying tags
@Blue yeah, you could do that. It generally requires votes, so it's probably not worth bothering when only one question has that tag
@glS "Every hermitian matrix satisfy this property: more specifically, all and only Hermitian matrices have this property" ha? I though it was only a subset of the set of valid matrices ^^ Thanks for the precision :)
@Nelimee if you think about it it's quite easy to see. Unitary matrices are the ones with phases as eigenvalues, while Hermitians have real eigenvalues. Therefore, if a matrix is not Hermitian (does not have real eigenvalues), then its exponential will not have eigenvalues of the form $e^{i\phi}$ with $\phi\in\mathbb R$. Although I'm not sure whether there could be exceptions for non diagonalizable matrices (if $A$ is not diagonalizable, then the above argument doesn't work)
This is an elementary question, but a little subtle so I hope it is suitable for MO.Let $T$ be an $n \times n$ square matrix over $\mathbb{C}$.The characteristic polynomial $T - \lambda I$ splits into linear factors like $T - \lambda_iI$, and we have the Jordan canonical form:$$ J = \begin...
@Nelimee no! unitarily diagonalizable matrices are all and only the normal ones (satisfying $AA^\dagger =A^\dagger A$). For general diagonalizability, if I'm not mistaken, one characterization is that the sum of the dimensions of the eigenspaces has to match the total dimension
@Blue I actually agree with Nelimee here that it's not that easy. You get $UU^\dagger = e^{iA} e^{-iA^\dagger}$, but if $A$ and $A^\dagger$ do not commute it's not straightforward that this doesn't give you an identity
I'm getting confused. I remember there being some theorem about one-to-one mappings between unitaries and hermitians provided by the exponential, but it was some time ago and I may be confusing things in my head
@Nelimee if there is a $0$ there then it becomes the normality condition. Otherwise it means that the matrix is not normal, therefore not unitarily diagonalizable, but still the product of exponentials is relatively easy to write
@Blue you are right indeed. If $U$ is unitary then for sure you can write it as the exponential of a Hermitian (times $i$). This is easily proven because $U$ is ensured to be unitarily diagonalizable, so you can simply compute its logarithm through the eigenvalues. However, logarithms are tricky and multivalued, and there may be logarithms which are not diagonalizable at all.
I've actually recently asked some questions on math.SE on related topics
@Mithrandir24601 indeed, that was also what @Nelimee showed with an example above. I believe my argument holds for unitarily diagonalizable matrices. If a matrix is only generally diagonalizable (so it's not normal) then it's not true
also probably even more generally without $i$ factors
so, in conclusion, it does indeed seem that $e^{iA}$ unitary implies $A$ Hermitian. It therefore also seems that $e^{iA}$ unitary implies $A$ normal, so that also my argument passing through the spectra works (though one has to show that $A$ is ensured to be normal)
Now what we need to look for is 1) The exact set of conditions for which the matrix exponential $e^A$ of a complex matrix $A$, is unitary 2) The exact set of conditions for which the matrix exponential $e^{iA}$ of a real matrix $A$ is unitary
@Blue fair enough - as with @Semiclassical I was thinking about it with the t parameter, as that's what we care about in physics :P I can possibly come up with a number of non-Hermitian matrices that gives unitary evolution for a specific t
Or rather, the exponential of which is unitary for $t+n\tau$, although I'd need to check
If you're afraid of the density of diagonalizable matrices, simply triangularize $A$. You get $$A=P^{-1}UP,$$ with $U$ upper triangular and the eigenvalues $\{\lambda_j\}$ of $A$ on the diagonal.Then$$\mbox{det}\;e^A=\mbox{det}(P^{-1}e^UP)=\mbox{det}\;e^U.$$Now observe that $e^U$ is upper ...
There's 15 hours left on a bountied question, but the person who offered the bounty is suspended and his suspension doesn't expire until about 2 days, meaning he may not be able to award the bounty himself?That's not fair: It's a 300 point bounty. The largest bounty ever offered on QCSE. Let h... |
Suppose initially there are no fixed costs.
What does it mean to take an average?
Consider a cost function $C(y)$. What does it mean to take the “average” of this function? Mathematically, it is just $$A(y) = \frac{C(y)}{y}$$
Let’s suppose we are considering $C(y) = y^3$. Suppose we now consider $y = 5$. Then $$A(5) = \frac{5^3}{5} = 25 $$This is just saying that, for each unit I buy, I am buying them at $25$ each on average. So I could have paid $$ \frac{15 + 39 + 46 + 14 + 11}{5}$$or $$ \frac{17 + 3 + 78 + 23 + 4}{5}$$
What information does the average cost give us?
The 'cost per unit' given by an average cost function isn't like taking an average by adding up the cost of each unit we bought. When we are given a cost function $C(y)$, this just tells me the total cost. I don't know how much my first TV cost me from this equation alone. And the average doesn't tell me that either.
Note how above we have two sets of 5 TVs that yielded the same average cost. Thus, the average cost function doesn't tell me how much I paid for each specific TV.
What does marginal cost tell me?
This is exactly what marginal cost provides. Marginal cost provides the specific cost of each successive infinitesimal amount of good. Consider again $C(y) = y^3$. $$MC(y) = \frac{d C(y)}{d y}= 3y^2$$
Suppose I have purchased $3.5$ units of TV. Then the MC equation thus says, for $c(y) = y^3$, each additional infinitesimal amount of TV costs me $$3(3.5)^2 = 36.75$$
at that point. If I purchase an additional $0.1$ amount of TV, then my MC changes and now I am at $$3(3.6)^2 = 38.88$$
Thus, this concept is a bit trickier because it involves a continuous amount of TV and the cost per infinitesimal unit changes as you buy more. So you really can't just consider how much $MC(1)$ is and $MC(2)$ is. You are considering it for some infinitesimal additional amount $dy$ at a given point (e.g. $y=2$). The trend is easier to think through if you don't restrict yourself to integers. Note, we are assuming you can have noninteger quantities of goods, otherwise we wouldn't be working in $\Bbb{R}^n$ and the integration later might be trickier.
Example to Clarify Marginal Cost
So suppose I want to buy $5$ TVs. For the $k$th TV purchased, the marginal cost is $$MC(k)= 3k^2$$ Going back to our example above, let's now suppose $k \in [0,5]$. To find the average cost, we simply do the addition formula used above for the 5 TV example, except now summed over each infinitesimal amount (of which there are an infinite number). This gives us
$$A(5) = \frac{3(0)^2+\cdots + 3k^2+ \cdots + 3(5)^2}{5} = \frac{\int_{0}^{5} 3y^2 dy}{5}= \frac{(125-0)}{5}=\frac{C(5)}{5}=25$$
$$A(5) =25$$the same as we calculated earlier.
Summary
Marginal Cost is $$MC(y) = \frac{d C(y)}{d y}$$Average variable cost is
$$A(y) = \frac{\int_{0}^{y} MC(y) dy}{y}$$
Note, since we assumed $FC = 0$, this formula also thus defines AC but would not be true for $FC \neq 0$.
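A numeric sanity check of this formula (a plain-Python sketch; the trapezoidal rule and the step count are arbitrary choices): averaging $MC(t) = 3t^2$ over $[0,5]$ recovers the $A(5) = 25$ from the TV example above.

```python
# Check A(y) = (1/y) * integral_0^y MC(t) dt for C(y) = y**3, FC = 0
MC = lambda t: 3.0 * t * t  # marginal cost of C(y) = y**3

def avg_cost(y, n=100_000):
    h = y / n
    # trapezoidal rule for the integral of MC over [0, y]
    integral = h * (0.5 * (MC(0.0) + MC(y)) + sum(MC(i * h) for i in range(1, n)))
    return integral / y

print(round(avg_cost(5.0), 4))  # 25.0, i.e. C(5)/5
```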
AVERAGE COST VS AVERAGE VARIABLE COST
I have been sloppy about the distinction between AVC and AC up until now. I have avoided this distinction by assuming $FC = 0$. Now I will try to clarify this point by assuming $FC$ can be anything.
Fixed costs ($FC$) are costs the firm pays that do not vary with the amount produced. For example, suppose I work for Uber and I buy a car. That money is spent and doesn't change with the amount I drive. But the amount of gasoline I consume does change with the amount I drive. Costs that scale with production are known as variable costs ($VC(y)$).
$$AC(y) = \frac{C(y)}{y}=\frac{VC(y) +FC}{y}= \frac{VC(y)}{y} + \frac{FC}{y}=AVC + AFC $$
Hyperbolic Behavior of AFC with y >0
As production goes up ($y\rightarrow \infty$), AFC goes down ($AFC \rightarrow 0$) in an inversely proportional fashion. But note, as production goes down ($y\rightarrow 0$), AFC goes to infinity ($AFC \rightarrow \infty$). Thus, the plot of AFC will always be a hyperbola unless $FC = 0$ in which case AFC is just 0.
How does AFC affect AC?
Without specifying $VC(y)$, we cannot know the behavior of $AC(y)$ as $y$ moves away from $0$. There will be some $y_{0}$ such that, for $y > y_{0}$, $AC(y)$ can essentially be anything. Of course, as $y \rightarrow 0$, $AC(y) \rightarrow \infty$. This is because AVC cannot be negative and so we are guaranteed any variable costs will not lower the average cost below AFC. Therefore, $AVC \geq 0$, so since $AFC \rightarrow \infty$ as $y\rightarrow 0$ (remember it is a hyperbola), thus AC must go to infinity as well.
Thus, since the behavior of fixed costs are always known, AVC is the missing ingredient needed to specify the behavior of AC.
AC and AVC can't be exactly equal for $FC>0$
Since $AFC \rightarrow 0$ but does not ever equal 0 (for $FC >0$), we know that $$AC \neq AVC$$ for any $y$ and $FC >0$. But since $AFC$ approaches 0 for large enough $y$, AC approaches AVC asymptotically.
AC and AVC can't be exactly parallel for $FC>0$
If $AVC$ and $AC$ are parallel, then their derivatives should be equal. But notice that $$\frac{dAFC}{dy} = -\frac{FC}{y^2}$$so $$\frac{dAC}{dy} = \frac{dAVC}{dy} + \frac{dAFC}{dy} = \frac{dAVC}{dy} - \frac{FC}{y^2} \neq \frac{dAVC}{dy}$$So they aren't parallel because their derivatives aren't equal. That said,
for large enough $y$, the derivatives will be very close to each other so they may appear nearly parallel over some portion of the curves.
MC intersects AC at minimum point of AC curve
See Alecos's answer.
MC intersects AVC at minimum point of AVC curve
Consider the average variable cost curve. Find $y^*$ that solves
$$\min_{y} AVC(y)$$
So at this point, $$\frac{dAVC(y^*)}{dy^*} = 0$$
This means by quotient rule
$$\frac{VC'(y^*)y^*-VC(y^*)}{(y^*)^2} = 0$$and since $y^*$ can't be 0, this implies $$VC'(y^*)y^*-VC(y^*)=0$$ which rearranged gives $$VC'(y^*)=\frac{VC(y^*)}{y^*}$$Recall, $$C(y) = FC + VC(y)$$ Note, since $FC$ is a constant, $$C'(y) = VC'(y)$$ Therefore,
$$C'(y^*) = \frac{VC(y^*)}{y^*}$$
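A numeric illustration of this fact (a plain-Python sketch; the variable-cost function $VC(y) = y^3 - 2y^2 + 3y$ is made up for the example, chosen so that AVC has an interior minimum): the grid point minimizing AVC is also where MC equals AVC.

```python
# Made-up variable costs: VC(y) = y^3 - 2y^2 + 3y,
# so MC(y) = 3y^2 - 4y + 3 (= C'(y), FC being constant) and AVC(y) = y^2 - 2y + 3
VC  = lambda y: y**3 - 2*y**2 + 3*y
MC  = lambda y: 3*y**2 - 4*y + 3
AVC = lambda y: VC(y) / y

ys = [i / 1000 for i in range(1, 5000)]      # grid on (0, 5)
y_star = min(ys, key=AVC)                    # minimizer of AVC
print(y_star)                                # 1.0
print(abs(MC(y_star) - AVC(y_star)) < 1e-9)  # True: MC = AVC at the minimum
```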
EXTRA STUFF
Why do we define the firm's longrun shutdown point in terms of average cost not marginal cost?
Recall a firm's profit function is $$\pi = py - C(y)$$
I am not defining the behavior of $p$ here or who controls $p$. I am just considering for what $p$ will the firm shut down, and ignoring everything else because it's irrelevant.
So we can easily see that since $$AC(y)= \frac{C(y)}{y}$$ for $p=AC(y)$, this yields $$\pi = \left(\frac{C(y)}{y}\right)y - C(y) = C(y)- C(y) = 0$$So the firm will shut down if $p <AC(y)$ because then $\pi < 0$.
So consider the function $C(y) = y^3$. Note, $$MC = 3y^2 > AC = y^2$$ We know the firm profit maximizes at $MR = MC$. So since, for $y > 0$, $MC(y) > AC(y)$, the firm would always produce if $p = MC$ since this would mean for $y>0$,
$$ MC(y)y-C(y) = \pi_{p=MC} > \pi_{p=AC} = \left(\frac{C(y)}{y}\right)y - C(y) =0$$
$$\pi_{p=MC} > \pi_{p=AC} = 0$$So, for this $C(y)$, at $p=MC$, $\pi >0$ for all $y$.
So this example clearly shows you would never shut down at $p=MC$ for $y>0$ for $C(y) = y^3$. Although this isn't the most thorough explanation, this example clearly invalidates that train of thought and makes clear that firms shut down for $p < AC$.
Does average cost necessarily equal marginal cost at some point?
See Alecos's answer.
Takeaway Rules In long run, firms produce if $p \geq AC(y)$. They shut down for $p < AC(y)$ Firms always produce at $MR = MC$ |
I have a simple code which flags nodes within the region enclosed by a cylinder. On running the code, a mild tilt of the cylinder is observed for the case $\theta=90^{\circ}$.
The algorithm for checking whether a point lies inside an arbitrarily oriented cylinder is as follows. Let $\vec{r}$ be the vector joining the center $\vec{c}$ and an arbitrary point $\vec{x}$: $$\vec{r}=\vec{c}-\vec{x}.$$ For a unit orientation vector $\vec{n}$, the projection of $\vec{r}$ onto it is $$ u = \vec{r}\cdot\vec{n}.$$ The perpendicular component is therefore $$\vec{p}=\vec{r}-u\,\vec{n}.$$ For a cylinder of length $2l$ and radius $a$, check $$\vec{p}\cdot\vec{p}<a^2 \hspace{1cm} \text{for } -l\leq u \leq l. $$
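For reference, here is a language-neutral sketch of the same test in Python (the question's code is Fortran; `axis` below stands for the orientation vector and is assumed to be a unit vector):

```python
# Point-in-cylinder test: project r = center - point onto the axis,
# then check the axial and radial conditions separately.
def inside_cylinder(point, center, axis, half_len, radius):
    r = [c - p for c, p in zip(center, point)]        # r = c - point
    u = sum(ri * ai for ri, ai in zip(r, axis))       # axial projection
    perp = [ri - u * ai for ri, ai in zip(r, axis)]   # perpendicular part
    perp2 = sum(x * x for x in perp)
    return -half_len <= u <= half_len and perp2 < radius ** 2

# cylinder centered at the origin, axis along y, half-length 1, radius 0.5
axis = (0.0, 1.0, 0.0)
print(inside_cylinder((0.0, 0.5, 0.0), (0.0, 0.0, 0.0), axis, 1.0, 0.5))  # True
print(inside_cylinder((0.6, 0.0, 0.0), (0.0, 0.0, 0.0), axis, 1.0, 0.5))  # False
```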
The actual issue: The above algorithm is implemented in Fortran. The code checks which points of a Cartesian grid lie inside the cylinder. The test case is the following: the cylinder makes an angle $\theta=90^{\circ}$ in the yz-plane with respect to the y-axis. Case 1: the orientation vector $\vec{o}$ is specified directly as (0, 1, 0). Case 2: the orientation vector is specified with the double-precision intrinsic Fortran functions dsin and dcos as $\vec{o}=(0.0, \sin(\pi/2.0), \cos(\pi/2.0))$, with the value of $\pi$ assigned to more than 20 significant decimal digits. The resulting cylinder shows a mild tilt.
The highlighted region indicates the extra material due to the tilt of the cylinder with respect to the Cartesian axes. I also tried the architecture-specific maximum-precision value of pi; this results in the same problem.
This suggests that the actual angle made by the cylinder is not $90^\circ$. Can anyone suggest a valid solution to this problem? I need to use the built-in trigonometric functions for arbitrary angles and am looking for an accurate cell-flagging method.
Note: All operations are performed with double precision accuracy. |
I seek to prove the identity
$$\int_2^x\frac{dt}{\log^kt}=O\left(\frac{x}{\log^kx}\right)$$
I was given the following hint:
Split the integral into $\int_2^{f(x)}+\int_{f(x)}^x$ for a well-chosen function $f(x)$ with $2\le f(x)<x$ and estimate both parts from above.
but my proof was different. Can anyone (i) confirm if my proof is correct or incorrect and (ii) find the proof using the author's hint? My proof:
Pick any $a>e^k$. Then $\int_2^a\frac{dt}{\log^kt}$ is constant and finite, so it suffices to prove that $\int_a^x\frac{dt}{\log^kt}=O\left(\frac{x}{\log^kx}\right)$, which follows from
$$\left(1-\frac{k}{\log a}\right)\int_a^x\frac{dt}{\log^kt}\le\int_a^x\frac{dt}{\log^kt}\left(1-\frac{k}{\log t}\right)=\frac{x}{\log^kx}-\frac{a}{\log^ka}\le\frac{x}{\log^kx}$$ |
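As for (ii), here is one sketch along the hint's lines; the split point $f(x)=\sqrt{x}$ is my guess at a "well-chosen" $f$ (assuming $k>0$), and each piece is bounded by the maximum of its integrand on the corresponding interval:

$$\int_2^x\frac{dt}{\log^kt}=\int_2^{\sqrt x}\frac{dt}{\log^kt}+\int_{\sqrt x}^x\frac{dt}{\log^kt}\le\frac{\sqrt x}{\log^k2}+\frac{x}{\log^k\sqrt x}=\frac{\sqrt x}{\log^k2}+\frac{2^k\,x}{\log^kx}$$

Since $\sqrt{x}\,\log^k x\le x$ for all sufficiently large $x$, the first term is also $O(x/\log^kx)$, giving the claim.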
Morris (2008) discusses various ways for computing a (standardized) effect size measure for pretest posttest control group designs, where the characteristic, response, or dependent variable assessed in the individual studies is a quantitative variable.
As described by Becker (1988), we can compute the standardized mean change (with raw score standardization) for a treatment and control group with $$g_T = c(n_T-1) \frac{\bar{x}_{post,T} - \bar{x}_{pre,T}}{SD_{pre,T}}$$ and $$g_C = c(n_C-1) \frac{\bar{x}_{post,C} - \bar{x}_{pre,C}}{SD_{pre,C}},$$ where $\bar{x}_{pre,T}$ and $\bar{x}_{post,T}$ are the treatment group pretest and posttest means, $SD_{pre,T}$ is the standard deviation of the pretest scores, $c(m) = \sqrt{2/m} \Gamma[m/2] / \Gamma[(m-1)/2]$ is a bias-correction factor, $n_T$ is the size of the treatment group, and $\bar{x}_{pre,C}$, $\bar{x}_{post,C}$, $SD_{pre,C}$, and $n_C$ are the analogous values for the control group. Then the difference in the two standardized mean change values, namely $$g = g_T - g_C,$$ indicates how much larger (or smaller) the change in the treatment group was (in standard deviation units) when compared to the change in the control group. Values of $g$ computed for a number of studies could then be meta-analyzed with standard methods.
Morris (2008) uses five studies from a meta-analysis on training effectiveness by Carlson and Schmidt (1999) to illustrate these computations. We can create the same dataset with:
datT <- data.frame(
  m_pre   = c(30.6, 23.5, 0.5, 53.4, 35.6),
  m_post  = c(38.5, 26.8, 0.7, 75.9, 36.0),
  sd_pre  = c(15.0, 3.1, 0.1, 14.5, 4.7),
  sd_post = c(11.6, 4.1, 0.1, 4.4, 4.6),
  ni      = c(20, 50, 9, 10, 14),
  ri      = c(0.47, 0.64, 0.77, 0.89, 0.44))
and
datC <- data.frame(
  m_pre   = c(23.1, 24.9, 0.6, 55.7, 34.8),
  m_post  = c(19.7, 25.3, 0.6, 60.7, 33.4),
  sd_pre  = c(13.8, 4.1, 0.2, 17.3, 3.1),
  sd_post = c(14.8, 3.3, 0.2, 17.9, 6.9),
  ni      = c(20, 42, 9, 11, 14),
  ri      = c(0.47, 0.64, 0.77, 0.89, 0.44))
The contents of
datT and
datC are then:
  m_pre m_post sd_pre sd_post ni   ri
1  30.6   38.5   15.0    11.6 20 0.47
2  23.5   26.8    3.1     4.1 50 0.64
3   0.5    0.7    0.1     0.1  9 0.77
4  53.4   75.9   14.5     4.4 10 0.89
5  35.6   36.0    4.7     4.6 14 0.44
and
  m_pre m_post sd_pre sd_post ni   ri
1  23.1   19.7   13.8    14.8 20 0.47
2  24.9   25.3    4.1     3.3 42 0.64
3   0.6    0.6    0.2     0.2  9 0.77
4  55.7   60.7   17.3    17.9 11 0.89
5  34.8   33.4    3.1     6.9 14 0.44
After loading the metafor package with
library(metafor), the standardized mean change within each group can be computed with:
datT <- escalc(measure="SMCR", m1i=m_post, m2i=m_pre, sd1i=sd_pre, ni=ni, ri=ri, data=datT)
datC <- escalc(measure="SMCR", m1i=m_post, m2i=m_pre, sd1i=sd_pre, ni=ni, ri=ri, data=datC)
Now the contents of
datT and
datC are:
  m_pre m_post sd_pre sd_post ni   ri     yi     vi
1  30.6   38.5   15.0    11.6 20 0.47 0.5056 0.0594
2  23.5   26.8    3.1     4.1 50 0.64 1.0481 0.0254
3   0.5    0.7    0.1     0.1  9 0.77 1.8054 0.2322
4  53.4   75.9   14.5     4.4 10 0.89 1.4181 0.1225
5  35.6   36.0    4.7     4.6 14 0.44 0.0801 0.0802
and
  m_pre m_post sd_pre sd_post ni   ri      yi     vi
1  23.1   19.7   13.8    14.8 20 0.47 -0.2365 0.0544
2  24.9   25.3    4.1     3.3 42 0.64  0.0958 0.0173
3   0.6    0.6    0.2     0.2  9 0.77  0.0000 0.0511
4  55.7   60.7   17.3    17.9 11 0.89  0.2667 0.0232
5  34.8   33.4    3.1     6.9 14 0.44 -0.4250 0.0864
The standardized mean change values are given in the
yi columns. Note that internally, the
escalc() function computes
m1i-m2i, so the argument
m1i should be set equal to the posttest means and
m2i to the pretest means if one wants to compute the standardized mean change in the way described above. The sampling variances (the values in the
vi columns) are computed based on equation 13 in Becker (1988).
We can now compute the difference between the two standardized mean change values for each study. In addition, since the treatment and control groups are independent, the sampling variance of the difference is simply the sum of the sampling variances of the two groups:
dat <- data.frame(yi = datT$yi - datC$yi, vi = datT$vi + datC$vi)
round(dat, 2)

    yi   vi
1 0.74 0.11
2 0.95 0.04
3 1.81 0.28
4 1.15 0.15
5 0.51 0.17
The
yi values above are exactly the values given in Table 5 (under the $d_{ppc1}$ column) by Morris (2008).
Equation 16 in Morris (2008) is the exact sampling variance of $g$. To actually compute the sampling variance in practice, the unknown parameters in this equation must be replaced with their sample counterparts. As noted earlier, the
escalc() function actually uses a slightly different method to estimate the sampling variance (based on equation 13 in Becker, 1988). Hence, the values above and the ones given in Table 5 (column $\hat{\sigma}^2(d_{ppc1})$ in Morris, 2008) differ slightly.
There are in fact dozens of ways in which the sampling variance of the standardized mean change can be estimated (see Viechtbauer, 2007, Tables 2 and 3, and even that is not an exhaustive list). Hence, there are also dozens of ways of estimating the sampling variance of $g$ above. The differences should only be relevant in small samples.
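To see exactly what these numbers are, the `escalc()` computation can be reproduced by hand. The following Python sketch (my own illustration, not metafor code) applies the exact small-sample bias correction together with the variance from equation 13 in Becker (1988):

```python
from math import exp, lgamma, sqrt

def smcr(m_post, m_pre, sd_pre, n, r):
    """Standardized mean change with raw-score (pretest SD) standardization,
    plus the sampling variance from Becker (1988), equation 13."""
    df = n - 1
    # exact small-sample bias-correction factor (approx. 1 - 3/(4*df - 1))
    J = exp(lgamma(df / 2) - lgamma((df - 1) / 2)) / sqrt(df / 2)
    yi = J * (m_post - m_pre) / sd_pre
    vi = 2 * (1 - r) / n + yi**2 / (2 * n)
    return yi, vi

# first study from the Carlson & Schmidt (1999) example above
yi_t, vi_t = smcr(38.5, 30.6, 15.0, 20, 0.47)   # treatment: ~ (0.5056, 0.0594)
yi_c, vi_c = smcr(19.7, 23.1, 13.8, 20, 0.47)   # control:   ~ (-0.2365, 0.0544)
g, v_g = yi_t - yi_c, vi_t + vi_c               # difference: ~ (0.74, 0.11)
```

The returned values match the `yi` and `vi` columns shown above to four decimals.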
For the actual meta-analysis part, we simply pass the
yi and
vi values to the
rma() function. For example, a fixed-effects model can be fitted with:
rma(yi, vi, data=dat, method="FE")

Fixed-Effects Model (k = 5)

Test for Heterogeneity:
Q(df = 4) = 4.43, p-val = 0.35

Model Results:

estimate      se    zval    pval   ci.lb   ci.ub
    0.95    0.14    6.62    <.01    0.67    1.23  ***

---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Note that these results are slightly different than the ones in Table 5 due to the different ways of estimating the sampling variances.
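What `rma()` does for a fixed-effects model is standard inverse-variance pooling, which is easy to reproduce by hand. This Python sketch (my own, using the rounded `yi` and `vi` values from the table above) recovers the estimate and standard error shown:

```python
from math import sqrt

# rounded effect sizes and sampling variances from the table above
yi = [0.74, 0.95, 1.81, 1.15, 0.51]
vi = [0.11, 0.04, 0.28, 0.15, 0.17]

w = [1 / v for v in vi]                                # inverse-variance weights
est = sum(wi * y for wi, y in zip(w, yi)) / sum(w)     # pooled estimate
se = sqrt(1 / sum(w))                                  # its standard error
Q = sum(wi * (y - est) ** 2 for wi, y in zip(w, yi))   # heterogeneity statistic

print(round(est, 2), round(se, 2))   # 0.95 0.14
```

Because the inputs are rounded to two decimals, Q comes out near (but not exactly at) the 4.43 reported by metafor.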
In his article, Morris (2008) discusses two other ways of computing an effect size measure for pretest-posttest control group designs. The second approach, which pools the two pretest SDs, can be more efficient under certain conditions, but it assumes that the true pretest SDs of the two groups are equal. That may not be the case. The approach given above does not make that assumption and is therefore more broadly applicable (although it may be less efficient).
If you really want to use the approach with pooled pretest SDs, then this can be done as follows:
sd_pool <- sqrt((with(datT, (ni-1)*sd_pre^2) + with(datC, (ni-1)*sd_pre^2)) /
                (datT$ni + datC$ni - 2))
dat <- data.frame(yi = metafor:::.cmicalc(datT$ni + datC$ni - 2) *
                       (with(datT, m_post - m_pre) - with(datC, m_post - m_pre)) / sd_pool)
dat$vi <- 2*(1-datT$ri) * (1/datT$ni + 1/datC$ni) + dat$yi^2 / (2*(datT$ni + datC$ni))
round(dat, 2)

    yi   vi
1 0.77 0.11
2 0.80 0.04
3 1.20 0.14
4 1.05 0.07
5 0.44 0.16
The
yi values above are exactly the values given in Table 5 (under the $d_{ppc2}$ column) by Morris (2008). Note that the equation used for computing the sampling variances above differs slightly from the one used in the paper, so the 'vi' values above and the ones given in Table 5 (column $\hat{\sigma}^2(d_{ppc2})$ in Morris, 2008) differ slightly.
The example above assumes that the pretest posttest correlations (the values given under the
ri column) are the same for the control and treatment groups. Ideally, those values should be coded separately for the two groups.
In practice, one is likely to encounter difficulties in actually obtaining those correlations from the information reported in the articles. In that case, one can substitute approximate values (e.g., based on known properties of the dependent variable being measured) and conduct a sensitivity analysis to ensure that the conclusions from the meta-analysis are unchanged when those correlations are varied.
Becker, B. J. (1988). Synthesizing standardized mean-change measures. British Journal of Mathematical and Statistical Psychology, 41(2), 257–278.
Carlson, K. D., & Schmidt, F. L. (1999). Impact of experimental design on effect size: Findings from the research literature on training. Journal of Applied Psychology, 84(6), 851–862.
Morris, S. B. (2000). Distribution of the standardized mean change effect size for meta-analysis on repeated measures. British Journal of Mathematical and Statistical Psychology, 53(1), 17–29.
Morris, S. B. (2008). Estimating effect sizes from pretest-posttest-control group designs. Organizational Research Methods, 11(2), 364–386.
Viechtbauer, W. (2007). Approximate confidence intervals for standardized effect sizes in the two-independent and two-dependent samples design. Journal of Educational and Behavioral Statistics, 32(1), 39–60.
Is there a bijection from a finite (closed) segment of the real line to $\mathbb{R}$? For example, is there a bijection from $[0,1]$ to $\Bbb{R}$?
If so, is there a straightforward example? If not, why?
There are many bijections from an open interval $(a, b)$ to $\mathbb{R}$; for example, $g(x) = \cot(\pi x)$ is a bijection $g: (0, 1)\to \mathbb{R}$.
Now we need a bijection from the closed interval $[a, b]$ to $\mathbb{R}$, which we obtain by showing that there exists a bijection from the closed interval $[a, b]$ to the open interval $(a, b)$.
Taking the interval $[0,1]$, define $f(x)$ as follows: $$f(x) = \begin{cases} \frac{1}{2} & \mbox{if } x = 0\\ \frac{1}{2^{n+2}} & \mbox{if } x = \frac{1}{2^n}\\ x & \mbox{otherwise} \end{cases}$$
Then $f: [0, 1] \to (0, 1)$ is a bijection.
Now, compose: $g(f(x)): [0, 1] \to \mathbb{R}$, and you have your bijection.
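As a numerical sanity check of the construction, the composition can be probed at a few points. The sketch below (my own) uses $g(x)=\cot(\pi x)$ as the open-interval bijection (note that $\cot(\frac{\pi}{2}x)$ maps $(0,1)$ onto $(0,\infty)$ only) and exact rationals for $f$:

```python
import math
from fractions import Fraction

def f(x):
    """Bijection [0,1] -> (0,1): shifts 0 and the dyadic points 1/2^n."""
    if x == 0:
        return Fraction(1, 2)
    d = x.denominator
    if x.numerator == 1 and d & (d - 1) == 0:   # x = 1/2^n exactly
        return x / 4                            # 1/2^n -> 1/2^(n+2)
    return x

def g(x):
    """Bijection (0,1) -> R: g(x) = cot(pi*x)."""
    return math.cos(math.pi * x) / math.sin(math.pi * x)

xs = [Fraction(0), Fraction(1), Fraction(1, 2), Fraction(1, 3), Fraction(2, 3)]
ys = [g(f(x)) for x in xs]
assert len(set(ys)) == len(ys)   # injective on the sample points
```

In particular $g(f(0)) = \cot(\pi/2) = 0$ and $g(f(1)) = \cot(\pi/4) = 1$, so the former endpoints land at ordinary real values.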
Yes, there is such a function, but it is less straightforward than one would think.
The reason is that "straightforward" functions are usually continuous, and the image of $[0,1]$ under a continuous function is compact: its range is a closed and bounded interval $[a,b]$, never the entire real line.
However there are relatively simple ways of removing the two endpoints and then you can write a bijection from $(0,1)$ to $\mathbb R$ simply by $\frac{1-2x}{2x(x-1)}$ or some other function which you can find.
Composing these two bijections will give you a bijection between $[0,1]$ and $\mathbb R$. Examples for both bijections have been given plenty of times on this site before. |
For analytic $f$, how can I represent the expression $f(z)\cdot\exp\left({s\,\log(z)}\right)$, i.e. $f(z)\cdot z^s$ in the form
$$\sum_{n}^\infty\left(\sum_{k}^\infty a_k s^k\right)z^n,$$
at least as a formal power series, where the index runs over the number needed?
I got to $$f(z)\cdot\exp\left({s\,\mathrm{log}(z)}\right)$$ $$=\left(\sum_{m}^\infty\frac{1}{m!}f^{(m)}(0)\,z^m\right) \left(\sum_{j=0}^\infty \frac{1}{j!} \left(s\,\log{(z)}\right)^j\right)$$ $$=\sum_{m}^\infty\sum_{j=0}^\infty \frac{1}{m!\,j!}f^{(m)}(0)\,s^j\cdot z^m\, \log{(z)}^j,$$
and I know
$$\log(z)=\sum_{l=1}^\infty (-1)^{l+1}\frac{1}{l}(z-1)^{l}.$$
I'm motivated by wanting to understand the Mellin transform (and the Laplace transform, for that matter) and here I approach this by looking at what it does to series components of a function
$$f(z)=\sum c_n z^n\mapsto \mathcal M(f)=\sum c_n^\mathcal{M} z^n$$ |
What will be the complexity of finding Gini Index of a sorted vector of $N$ values, which is defined as:
$\operatorname{Gini}(\mathbf{x})=1-2\sum_{k=1}^N \frac{\mathbf{x}(k)}{\Vert\mathbf{x}\Vert_1}\left(\frac{N-k+0.5}{N}\right)$
Even though the biggest part of the question has been answered in the comments, I want to point out a detail that is very important in my opinion.
There are several questions about the complexity you can ask:
Regarding the difference between tight ($\Theta(N)$) and upper-bound ($\mathcal{O}(N)$) complexities, the following question on SO can be useful. I will be using $\Theta$ notation for this answer, though using $\mathcal{O}$ is sufficient as well.
Now, for the Gini index, let's consider two "algorithms" to find the desired quantity: variant $A$, described exactly by the formula in your question, and variant $B$, described by the modified formula:
$$ \text{Gini}_A(\mathbf{x}) = 1-2\sum_{k=1}^{N}\frac{\mathbf{x}(k)}{\color{red}{||\mathbf{x}||_1}}\frac{N-k+0.5}{N}\\ \text{Gini}_B(\mathbf{x}) = 1-\frac{2}{\color{red}{||\mathbf{x}||_1}}\sum_{k=1}^{N}\mathbf{x}(k)\frac{N-k+0.5}{N} $$
Notice that in algorithm $B$ I explicitly moved the calculation of the vector norm $\Vert\mathbf{x}\Vert_1$ outside of the summation to signal that its computation happens only once. Now, if we assume that algorithm $A$ recomputes the norm $\Vert\mathbf{x}\Vert_1$ for each of the $N$ summands (for no reason), then algorithm $A$ takes $\Theta(N^2)$ time, while algorithm $B$ takes only $\Theta(N)$. The complexity is therefore a property of the algorithm, not of the formula alone.
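To make the distinction concrete, here is a sketch (my own, in Python) of variant $B$: the norm is computed once, so for an already-sorted input the whole computation is $\Theta(N)$:

```python
def gini_sorted(x):
    """Gini index of a vector sorted in ascending order, in Theta(N):
    one pass for the L1 norm, one pass for the weighted sum."""
    N = len(x)
    norm = sum(abs(v) for v in x)   # ||x||_1, computed exactly once
    s = sum(v * (N - k + 0.5) for k, v in enumerate(x, start=1))
    return 1 - 2 * s / (norm * N)

print(gini_sorted([5] * 10))        # perfectly equal values -> 0.0
print(gini_sorted([0] * 9 + [1]))   # one value holds everything -> 0.9
```

Moving `norm` inside the inner sum would turn this into the $\Theta(N^2)$ variant $A$ without changing the result.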
Extension of generalized solidarity values to interval-valued cooperative games
a. School of Economics and Management, Fuzhou University, Fuzhou, Fujian 350108, China
b. School of Architecture, Fuzhou University, Fuzhou, Fujian 350108, China
The main purpose of this paper is to extend the concept of generalized solidarity values to interval-valued cooperative games and hereby develop a simplified and fast approach for solving a subclass of interval-valued cooperative games. In this paper, we find some weaker coalition monotonicity-like conditions so that the generalized solidarity values of the $ \alpha $-cooperative games associated with interval-valued cooperative games are always monotonic and non-decreasing functions of any parameter $ \alpha \in [0,1] $. Thereby the interval-valued generalized solidarity values can be directly and explicitly obtained by computing their lower and upper bounds through only using the lower and upper bounds of the interval-valued coalitions' values, respectively. The developed method does not use the interval subtraction and hereby can effectively avoid the issues resulted from it. Furthermore, we discuss the effect of the parameter $ \xi $ on the interval-valued generalized solidarity values of interval-valued cooperative games and some significant properties of interval-valued generalized solidarity values.
Keywords: Cooperative game, interval-valued cooperative game, solidarity value, interval computing, uncertainty.
Mathematics Subject Classification: Primary: 91A12.
Citation: Deng-Feng Li, Yin-Fang Ye, Wei Fei. Extension of generalized solidarity values to interval-valued cooperative games. Journal of Industrial & Management Optimization, doi: 10.3934/jimo.2018185
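For readers unfamiliar with the underlying solution concept, the classical (crisp, non-interval) solidarity value of Nowak and Radzik can be sketched in a few lines. This illustration is my own; it shows only the baseline concept that the paper generalizes, not the paper's $\xi$-parameterized interval-valued construction:

```python
from itertools import combinations
from math import factorial

def solidarity_value(n, v):
    """Classical solidarity value of an n-player TU-game.
    v maps frozenset coalitions (including frozenset()) to their worths."""
    psi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for rest in combinations(others, size):
                S = frozenset(rest) | {i}
                s = len(S)
                # A(S): average marginal contribution over the members of S
                A = sum(v[S] - v[S - {j}] for j in S) / s
                psi[i] += factorial(n - s) * factorial(s - 1) / factorial(n) * A
    return psi

# symmetric 3-player example: only cooperation creates value
v = {frozenset(c): w for c, w in [
    ((), 0.0), ((0,), 0.0), ((1,), 0.0), ((2,), 0.0),
    ((0, 1), 0.6), ((0, 2), 0.6), ((1, 2), 0.6), ((0, 1, 2), 1.0)]}
print(solidarity_value(3, v))   # equal split of v(N): each player gets 1/3
```

Like the Shapley value, this value is efficient (the payoffs sum to $v(N)$); the symmetric game above therefore yields an equal split.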
This is a continuation of a problem I asked over at physics exchange and math exchange. Basically I have two ODEs that I am solving in order to calculate the radial and tangential velocity of liquid dispensed on the center of a disk rotating at rate of $\omega$. $u_m$ is the average flow velocity in the $r$-direction, and the $v_m$ is the average flow velocity in the $\theta$-direction. They are defined as
$$u_m(r)=\frac{1}{h(r)}\int_0^{h(r)}u(r,z)\,dz$$
$$v_m(r)=\frac{1}{h(r)}\int_0^{h(r)}v(r,z)\,dz$$
Where $h(r)$ is the height of the liquid film on the spinning disk. Using the continuity equation, the height of the film is
$$h(r)=\frac{Q}{2\pi\,r\,u_m}$$
Where $Q$ is the flow rate of liquid onto the spinning disk. Using appropriate boundary conditions of no-slip at $z=0$ and no shear at $z=h(r)$, the velocities $u$ and $v$ are:
$$u(r,z)=3u_m\left[ \frac{z}{h}-\frac{1}{2}\left(\frac{z}{h} \right)^2 \right]$$
$$v(r,z)=r\,\omega+\frac{3}{2}(r\,\omega- v_m)\left[\left(\frac{z}{h} \right)^2 -2 \frac{z}{h}\right]$$
Then taking the Navier Stokes equations in cylindrical coordinates and assuming that $\partial P/\partial r=0$, the remaining relevant terms are then
$$u\frac{\partial u}{\partial r}-\frac{v^2}{r}=\nu \left(\frac{\partial^2 u}{\partial z^2} \right)$$
$$u\frac{\partial v}{\partial r}+\frac{vu}{r}=\nu \left(\frac{\partial^2 v}{\partial z^2} \right)$$
Then I substitute $u(r,z)$ and $v(r,z)$ into the N-S equations, integrate them from $0$ to $h(r)$ in the $z$-direction, and then substitute $h(r)$ in terms of $Q$ using the equation above. This then gives me two coupled ODEs where $u_m$ and $v_m$ are the dependent variables and $r$ is the independent variable. The ODEs are
$$69r\,u_m\frac{du_m}{dr}=8r^2\omega^2-16r\omega v_m-21{u_m}^2+48{v_m}^2-\frac{480\pi^2\nu r^3{u_m}^3}{Q^2}$$
$$21r(v_m-r\omega)\frac{du_m}{dr}+48r\,u_m\frac{dv_m}{dr}=-69u_m v_m+37r\omega u_m+\frac{480\pi^2\nu r^3(v_m-r\omega){u_m}^2}{Q^2}$$
To solve these equations I rearranged them in matrix form:
$$\textbf{A}=\begin{bmatrix}69\,r\,u_m & 0\\ 21(r\,v_m-r^2\omega) & 48\,r\,u_m \end{bmatrix}$$
$$\textbf{u}=\begin{bmatrix}u_m \\ v_m \end{bmatrix}$$
$$\textbf{f}=\begin{bmatrix} 8\,r^2\omega^2-16\,r\,\omega\,v_m-21{u_m}^2+48{v_m}^2-\frac{480\pi^2\nu}{Q^2}r^3{u_m}^3 \\ -69\,u_m\,v_m+37\,r\,\omega\,u_m-\frac{480\pi^2\nu}{Q^2}r^3{u_m}^2(v_m-r\,\omega) \end{bmatrix}$$
Then the equations are of the form
$$\textbf{A} \frac{d \textbf{u}}{dr}=\textbf{f}$$
I can rearrange this as
$$\frac{d \textbf{u}}{dr}=\textbf{A}^{-1}\textbf{f}$$
And solve it using a Runge-Kutta method. I have tried solving it using the following code in MATLAB:
function output = improvedPigford(flow,spin)
if nargin<2
    spin = 75;   % [RPM]
end
if nargin<1
    flow = 935;  % [mL/min]
end
w  = spin*pi*2/60;     % rotation of wafer [rad/s]
d  = 0.002;            % diameter of injection nozzle [m]
r0 = 0.5*d;            % begin solving here [m]
q  = flow/(100^3*60);  % volumetric flow rate of liquid [m^3/s]
kv = 8.926780E-07;     % kinematic viscosity of water @ 25C [m^2/s]
u0 = q/(pi*r0^2);      % initial r-velocity (assume same as nozzle velocity) [m/s]
v0 = 0;                % initial theta-velocity [m/s]

%% solve improved Pigford model
rRange   = [r0 0.1];
initCond = [u0 v0];
options  = odeset('Stats','on','RelTol',1e-6,'MStateDependence','strong');
dudr = @(r,u) improvedModel(r,u,q,kv,w);
sol  = ode15s(dudr,rRange,initCond,options);
r = sol.x;
u = sol.y(1,:);
v = sol.y(2,:);
h = q./(2*pi*r.*u);

%% plot section
figure(1);
clf;
plot(r*1000,u,'r-',r*1000,v,'b-');
xlabel('Radius (mm)');
ylabel('Fluid Velocity (m/s)');

%% output section
output = [r,u,v,h];
end

% --------------------------------------------------------------------------
function dudr = improvedModel(r,u,q,kv,w)
%{
System is of the form A*u' = f, so u' = inv(A)*f
%}
A = [69*r*u(1), 0; 21*r*(u(2)-r*w), 48*r*u(1)];
f = [8*r^2*w^2 - 16*r*w*u(2) - 21*u(1)^2 + 48*u(2)^2 - (480*pi^2*kv*r^3*u(1)^3)/q^2; ...
     -69*u(1)*u(2) + 37*r*w*u(1) + (480*pi^2*kv*r^3*u(1)^2*(u(2)-r*w))/q^2];
dudr = A\f;
end
However, when I run this code the solution always blows up at about r = 0.0441, no matter which solver I use: ode45, ode23, ode113, ode15s, ode23s, ode23t, or ode23tb. The place where the solution blows up changes when I change parameters like flow rate, spin speed, and viscosity, but it always blows up.
Am I perhaps doing something wrong in my implementation of the Runge-Kutta method in MATLAB, or is it possible that the equations are simply ill-posed? Maybe there is simply a singularity at the location where it blows up or something? I've checked the integration of the N-S equations several times using symbolic mathematics software, and I'm fairly confident that I did it correctly. Otherwise though I'm kind of out of ideas of what to look at next. Any suggestions or ideas would be appreciated.
Update
Following the suggestion of @Geoff Oxberry I changed the MATLAB formulation to include the possibility that the mass matrix $\textbf{A}$ may be singular. Doing that the new code is this:
function output = improvedPigford(flow,spin)
if nargin<2
    spin = 200;  % [RPM]
end
if nargin<1
    flow = 1000; % [mL/min]
end
w  = spin*pi*2/60;     % rotation of wafer [rad/s]
d  = 0.002;            % diameter of injection nozzle [m]
r0 = 0.5*d;            % begin solving here [m]
q  = flow/(100^3*60);  % volumetric flow rate of liquid [m^3/s]
kv = 8.926780E-07;     % kinematic viscosity of water @ 25C [m^2/s]
u0 = q/(pi*r0^2);      % initial r-velocity (assume same as nozzle velocity) [m/s]
v0 = 0;                % initial theta-velocity [m/s]

%% solve improved Pigford model
rRange   = [r0 0.1];
initCond = [u0 v0];
dudr = @(r,u) improvedModel(r,u,q,kv,w);
M    = @(r,u) massMatrix(r,u,w);
options = odeset('Stats','on','RelTol',1e-6,'Mass',M);
sol = ode15s(dudr,rRange,initCond,options);
r = sol.x;
u = sol.y(1,:);
v = sol.y(2,:);
h = q./(2*pi*r.*u);

%% plot section
figure(1);
clf;
plot(r*1000,u,'r-',r*1000,v,'b-');
xlabel('Radius (mm)');
ylabel('Fluid Velocity (m/s)');

%% output section
output = [r,u,v,h];
end

function f = improvedModel(r,u,q,kv,w)
%{
System is of the form M*u' = f
%}
f = [8*r^2*w^2 - 16*r*w*u(2) - 21*u(1)^2 + 48*u(2)^2 - (480*pi^2*kv*r^3*u(1)^3)/q^2; ...
     -69*u(1)*u(2) + 37*r*w*u(1) + (480*pi^2*kv*r^3*u(1)^2*(u(2)-r*w))/q^2];
end

function M = massMatrix(r,u,w)
%{
Mass matrix M for M*u' = f
%}
M = [69*r*u(1), 0; 21*r*(u(2)-r*w), 48*r*u(1)];
end
However, the result is the same: the solution still blows up at the same location.
Update 2
As @Kirill asked, I am showing a plot of what the solution looks like when it blows up.
$v_m$ goes to $-\infty$, while $u_m$ goes to $+\infty$. What makes me suspicious that something may be wrong is the fact that due to boundary conditions on $v(r,z)$ [$v=r\,\omega$ @ $z=0$ and $\frac{\partial v}{\partial z}=0$ @ $z=h(r)$], $v_m$ should almost certainly be positive. @Kirill found a sign discrepancy between my derivation and his derivation done using Mathematica, so I will be checking my derivation again, then getting back with my results.
Update 3
I went over the calculations again, and I think I did have a sign error, as @Kirill postulated. After working through the derivation one more time, I ended up with the following equations:
$$69r\,u_m\frac{du_m}{dr}=8r^2\omega^2-16r\omega v_m-21{u_m}^2+48{v_m}^2-\frac{480\pi^2\nu r^3{u_m}^3}{Q^2}$$
$$21r(v_m-r\omega)\frac{du_m}{dr}+48r\,u_m\frac{dv_m}{dr}=-69u_m v_m+37r\omega u_m-\frac{480\pi^2\nu r^3(v_m-r\omega){u_m}^2}{Q^2}$$
This is equivalent to the equations that @Kirill obtained doing the derivation in Mathematica. When solving this system of equations (using the above code but with the corrected term) I get the following plots for $u_m$ and $v_m$:
This appears to be identical to the plot that @Kirill showed.
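For completeness, the corrected system also integrates cleanly outside MATLAB. The following SciPy sketch (my own translation, using the same parameter values as the MATLAB code above) solves $\mathbf{A}\,d\mathbf{u}/dr=\mathbf{f}$ by inverting $\mathbf{A}$ at each step with a stiff solver:

```python
import numpy as np
from scipy.integrate import solve_ivp

# parameters matching the MATLAB script: 1000 mL/min, 200 RPM, water @ 25 C
w = 200 * 2 * np.pi / 60   # disk rotation rate [rad/s]
q = 1000 / (100**3 * 60)   # volumetric flow rate [m^3/s]
kv = 8.926780e-07          # kinematic viscosity [m^2/s]
r0 = 0.001                 # nozzle radius, start of integration [m]
u0 = q / (np.pi * r0**2)   # initial radial velocity [m/s]

def rhs(r, y):
    """Corrected ODEs in the form A(u) u' = f, returned as u' = A^-1 f."""
    u, v = y
    A = np.array([[69 * r * u, 0.0],
                  [21 * r * (v - r * w), 48 * r * u]])
    f = np.array([8 * r**2 * w**2 - 16 * r * w * v - 21 * u**2 + 48 * v**2
                  - 480 * np.pi**2 * kv * r**3 * u**3 / q**2,
                  -69 * u * v + 37 * r * w * u
                  - 480 * np.pi**2 * kv * r**3 * u**2 * (v - r * w) / q**2])
    return np.linalg.solve(A, f)

sol = solve_ivp(rhs, (r0, 0.05), [u0, 0.0], method="LSODA",
                rtol=1e-6, atol=1e-9)
```

With the sign corrected, the integration reaches the end of the interval with finite, positive radial velocities, consistent with the plots described above.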
For a further comparison, this derivation is an improvement of the Pigford model, which can be found here:
R.M. Wood and B.E. Watts, "The Flow, Heat, and Mass Transfer Characteristics of Liquid Films on Rotating Disks",
Trans. Instn Chem. Engrs., Vol 51, 1973.
The derivation of the Pigford model is identical to mine, except that the final integration $\int_0^{h(r)} dz$ is simply skipped, with $u(r,z)$ and $v(r,z)$ simply being replaced by $u_m(r)$ and $v_m(r)$, and an $O(1)$ correction factor added to each equation. From this the Pigford model equations are:
$$r\,u_m \frac{du_m}{dr}={v_m}^2-\frac{12 K_1 \nu\, \pi^2\,r^3\,{u_m}^3}{Q^2}$$
$$r\,u_m \frac{dv_m}{dr}=-u_m v_m-\frac{12 K_2 \nu\, \pi^2\,r^3\,(v_m-r\,\omega){u_m}^2}{Q^2}$$
If I solve those two equations simultaneously, I get the following solution:
Which is qualitatively close to the equations that I derived. For a final comparison, I'll look at the height $h(r)$ calculated by both of the models:
The shapes of the two curves are a little different, but they mostly differ by an $O(1)$ constant, as one might expect given the fitting constants added in the Pigford model.
Alternating current
From Academic Kids
An alternating current (AC) is an electrical current whose magnitude and direction vary cyclically, as opposed to direct current, whose direction stays constant. The usual waveform of an AC power circuit is a sine wave, as this results in the most efficient transmission of energy. However, in certain applications different waveforms are used, such as triangular or square waves.
Used generically, AC refers to the form in which electricity is delivered to businesses and residences. However, audio and radio signals carried on electrical wire are also examples of alternating current. In these applications, an important goal is often the recovery of information encoded (or modulated) onto the AC signal.
History
William Stanley, Jr. designed one of the first practical coils to produce alternating currents. His design was an early precursor of the modern transformer, called an induction coil. From 1881 to 1889, the system used today was devised by Nikola Tesla, George Westinghouse, Lucien Gaulard, John Dixon Gibbs, and Oliver Shallenberger. These systems overcame the limitations imposed by using direct current, as found in the system that Thomas Edison first used to distribute electricity commercially.
The first long-distance transmission of alternating current took place in 1891 near Telluride, Colorado, followed a few months later in Germany. Thomas Edison strongly advocated the use of direct current (DC), having many patents in that technology, but eventually alternating current came into general use (see War of Currents). Charles Proteus Steinmetz of General Electric solved many of the problems associated with electricity generation and transmission using alternating current.
Distribution and domestic power supply Main article: Electricity distribution
Unlike DC, AC can be stepped up or down by a transformer to a different voltage. Voltage conversion for DC requires rotating machines or inverters; high-voltage direct current (HVDC) transmission systems do exist as an alternative to the more common alternating-current systems for the bulk transmission of electrical power, but they tend to be more expensive and less efficient than transformers, and did not exist when Edison, Westinghouse and Tesla were designing their power systems.
Use of a higher voltage leads to more efficient transmission of power. The power losses in the conductor are due to the current and are described by the formula $P = I^2 R$, implying that if the current is doubled, the power loss will be four times greater. Therefore it is advantageous when transmitting large amounts of power to convert the power to extremely high voltages (sometimes as high as hundreds of kilovolts). However, high voltages also have disadvantages, the main ones being the increased danger to anyone who comes into contact with them, the extra insulation required, and generally increased difficulty in their safe handling. Therefore power for general use is stepped down to a relatively low level, generally around 200 V to 500 V phase-to-phase and 100 V to 250 V phase-to-neutral.
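The quadratic dependence on current is easy to quantify with a toy calculation (the line resistance and power figures below are made up purely for illustration):

```python
def line_loss(power_w, voltage_v, resistance_ohm):
    """I^2 R loss for transmitting a given power at a given voltage."""
    current = power_w / voltage_v          # I = P / V
    return current**2 * resistance_ohm    # P_loss = I^2 R

# 100 kW over a line with 0.5 ohm resistance
low = line_loss(100e3, 1e3, 0.5)     # at 1 kV:   (100 A)^2 * 0.5 = 5000 W
high = line_loss(100e3, 100e3, 0.5)  # at 100 kV: (1 A)^2 * 0.5  = 0.5 W
print(low, high)
```

Stepping the voltage up by a factor of 100 cuts the resistive loss by a factor of 10,000, which is why long-distance lines run at very high voltage.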
Three-phase electrical generation is very common and makes more efficient use of commercial generators. Electrical energy is generated by rotating a coil inside a magnetic field, in large generators with a high capital cost. However, it is relatively simple and cost-effective to include three separate coils in the generator stator (instead of one). These sets of coils are physically separated and at an angle of 120° to each other. Three current waveforms are produced that are 120° out of phase with each other, but of equal magnitude.
Three-phase systems are designed so that they are balanced at the load; if a load is correctly balanced no current will flow in the neutral point. Also even in the worst case unbalanced (linear) load the neutral current will not exceed the highest of the phase currents. For three-phase at low (normal mains) voltages a four-wire system like this is normally used reducing the cable requirements by one-third over using a separate neutral per phase. When stepping down three-phase a transformer with a Delta primary and a Star secondary is often used so there is no need for a neutral on the supply side.
For smaller customers (just how small varies by country and age of install) only a single phase and the neutral are taken to the property. For larger installs all three phases and the neutral are taken to the main board. From a three-phase main board both single and three-phase circuits may lead off (and in some cases also circuits with two phases (not to be confused with two-phase) and a neutral are led off).
Three-wire single-phase systems, with a single centre-tapped transformer giving two live conductors, are a common distribution scheme for residential and small commercial buildings in North America. A similar method is used for a different reason on construction sites in the UK. Small power tools and lighting are supposed to be supplied by a local centre-tapped transformer with a voltage of 55 V between each power conductor and earth. This significantly reduces the risk of electrocution in the event that one of the live conductors becomes exposed through an equipment fault, whilst still allowing a reasonable voltage for running the tools.
A third wire is usually connected (it should always be, but there are many older, non-compliant, or developing-world installations where it is not) between the individual electrical appliances in the house and the main consumer unit or distribution board. The third wire is known in Britain and most other English-speaking countries as the earth wire, but in (English-speaking) North America it is the ground wire. Exactly what happens to the ground wire before the main board varies, but there are three main possibilities, which are listed here by their European names: TT (customer's earth not connected to neutral at all), TN-S (neutral and earth run back separately to the transformer star point), and TN-C-S (neutral and earth are joined at the intake position).
There is also TN-C where neutral and earth are joined right through the install, but this is much less common than the others and requires special procedures to make it safe.
A system should be designed so that in the event of a short to earth on any part of the system some form of fuse or breaker will make the system safe. In a TT system the high earth loop impedance means that a Residual-Current Device (RCD) must be used. In other earthing systems this can be covered by the normal overcurrent protection devices. RCDs may still be used on such systems, though, as they can protect against small earth faults such as through a person.
AC frequencies by country
The frequency of the electrical system varies by country; most electric power is generated at either 50 or 60 Hz. The 60 hertz countries are: American Samoa, Antigua and Barbuda, Aruba, Bahamas, Belize, Bermuda, Canada, Cayman Islands, Colombia, Costa Rica, Cuba, Dominican Republic, El Salvador, French Polynesia, Guam, Guatemala, Guyana, Haiti, Honduras, South Korea, Marshall Islands, Mexico, Micronesia, Montserrat, Nicaragua, Northern Mariana Islands, Palau, Panama, Peru, Philippines, Puerto Rico, Saint Kitts and Nevis, Suriname, Taiwan, Trinidad and Tobago, Turks and Caicos Islands, United States, Venezuela, Virgin Islands (U.S.), Wake Island.
[1] http://www.philip.allen.org/voltages.htm
The following countries have a mixture of 50 Hz and 60 Hz supplies: Bahrain, Brazil (mostly 60 Hz), Japan (60 Hz used in western prefectures), Liberia (now officially 50 Hz, formerly 60 Hz and many independent 60 Hz generating plants still exist).
[2] http://www.50hz.com/pwchrt2.htm
Very early AC generating schemes used arbitrary frequencies based on convenience for water turbine and generator design, since frequency was not critical for incandescent lighting loads. Once induction motors became common, it became important to standardize frequency for compatibility with the customer's equipment. Standardizing on one frequency later also allowed interconnection of generating plants on a grid for economy and security of operation.
It is generally accepted that Nikola Tesla chose 60 hertz as the lowest frequency that would not cause street lighting to flicker visibly. The origin of the 50 hertz frequency used in other parts of the world is open to debate, but it seems likely to be a rounding of 60 Hz to fit the 1-2-5-10 structure popular with metric standards.
Other frequencies were somewhat common in the first half of the 20th century, and remain in use in isolated cases today, often tied to the 60 Hz system via a rotary converter or static inverter frequency changer. 25 Hz power was used in Ontario, Quebec, the northern USA, and for electrified railroads. In the 1950s, much of this electrical system, from the generators right through to household appliances, was converted and standardised to 60 Hz. Some 25 Hz generators are still in use at Niagara Falls for large industrial customers who did not want to replace existing equipment. The lower frequency eases the design of low-speed electric motors, particularly for hoisting, crushing and rolling applications, and commutator-type traction motors for applications such as railways, but also causes a noticeable flicker in incandescent lighting. 16.67 Hz power (1/3 of the mains frequency) is still used in some European rail systems, such as in Sweden and Switzerland.
Off-shore, textile industry, marine, computer mainframe, aircraft, and spacecraft applications sometimes use 400 Hz, for benefits of reduced weight of apparatus or higher motor speeds.
AC-powered appliances can give off a characteristic hum at the multiples of the frequencies of AC power that they use. Most countries have chosen their television standard to match (or at least approximate) their mains supply frequency. This helps prevent unfiltered powerline hum and magnetic interference from causing visible beat frequencies in the displayed picture.
Mathematics of AC voltages
Alternating currents are usually associated with alternating voltages. An AC voltage
v can be described mathematically as a function of time by the following equation:
<math>v(t) = A \times \sin(\omega t),</math>
where
A is the amplitude in volts (also called the peak voltage), ω is the angular frequency in radians per second, and t is the time in seconds.
Since angular frequency is of more interest to mathematicians than to engineers, this is commonly rewritten as:
<math>v(t) = A \times \sin(2 \pi f t),</math>
where f is the frequency in hertz, related to the angular frequency by ω = 2πf.
The peak-to-peak value of an AC voltage is defined as the difference between its positive peak and its negative peak. Since the maximum value of sin(x) is +1 and the minimum value is −1, an AC voltage swings between +A and −A. The peak-to-peak voltage, written as V P-P, is therefore (+A) − (−A) = 2 × A.
In power distribution work the AC voltage is nearly always given as a root mean square (rms) value, written
V rms. For a sinusoidal voltage:
<math>V_\mbox{rms} = {A \over {\sqrt 2}}</math>
V rms is useful in calculating the power consumed by a load. If a DC voltage of V DC delivers a certain power P into a given load, then an AC voltage of V rms will deliver the same average power P into the same load if V rms = V DC. Because of this fact rms is the normal means of measuring voltage in mains (power) systems.
To illustrate these concepts, consider the 240 V AC mains used in the UK (it should be noted that the UK is now officially 230 V +10% −6% but in reality voltages are still closer to 240 V than 230 V in most cases). It is so called because its rms value is (at least nominally) 240 V. This means that it has the same heating effect as 240 V DC. To work out its peak voltage (amplitude), we can modify the above equation to:
<math>A = V_\mbox{rms} \times \sqrt 2</math>
For our 240 V AC, the peak voltage
V P or A is therefore 240 V × √2 = 339 V (approx.). The peak-to-peak value V P-P of the 240 V AC mains is even higher: 2 × 240 V × √2 = 679 V (approx.)
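The conversions in this worked example can be checked with a few lines of code (a sketch, not part of the article; the function name is mine, and the formula applies to pure sinusoids only):

```python
import math

def peak_from_rms(v_rms):
    # A = Vrms * sqrt(2); valid for a pure sinusoid only
    return v_rms * math.sqrt(2)

v_rms = 240.0                  # UK nominal mains value used in the text
v_peak = peak_from_rms(v_rms)  # about 339 V
v_pp = 2 * v_peak              # about 679 V peak-to-peak
```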
Note that non-sinusoidal waveforms have a different relationship between their peak magnitude and effective (RMS) value. This is of practical significance when working with non-linear circuit elements that produce harmonic currents, such as rectifiers.
The European Union (including the UK) has now officially harmonized on a supply of 230 V 50 Hz. However, the tolerance bands were made very wide, at ±10%. Some countries actually specify stricter standards than this; for example, the UK specifies 230 V +10% −6%. Most supplies to the old standards therefore conform to the new one and do not need to be changed.
|
I am studying Numerical Analysis with the book of Richard L.Burden. A question which I'm struggling with right now is following.
Transform the second-order initial-value problem
$y'' - 2y' + 2y = e^{2t}\sin t$ for $0 \leq t \leq 1, $ with $y(0) = -0.4, y'(0) = -0.6, h=0.1$
into a system of first-order initial-value problems, and use the Runge-Kutta method with h=0.1 to approximate the solution.
Then, $$u_1(t) = y(t), u_2(t) = y'(t)$$ $$u_1'(t) = u_2(t)$$ $$u_2'(t) = e^{2t}\sin t - 2u_1(t) + 2u_2(t)$$ $$u_1(0) = -0.4, u_2(0) = -0.6$$
These initial conditions give $w_{1,0} = -0.4, w_{2,0}=-0.6$
I can understand that $k_{1,1} = hf_1(t_0, w_{1,0}, w_{2,0}) = hw_{2,0}$
$f_1 = u_1'= u_2(t)$, So $f_1(t_0, w_{1,0}, w_{2,0}) = u_2(t_0, w_{1,0}, w_{2,0}) = w_{2,0}$ (By definition of $w_{i,j}$)
However, I can't understand the following. $$k_{2,1} = hf_1(t_0 + \frac{h}{2}, w_{1,0} + \frac{1}{2}k_{1,1}, w_{2,0} + \frac{1}{2}k_{1,2}) = h\left[w_{2,0} + \frac{1}{2}k_{1,2}\right]$$
Why does $f_1(t_0 + \frac{h}{2}, w_{1,0} + \frac{1}{2}k_{1,1}, w_{2,0} + \frac{1}{2}k_{1,2})$ equal $w_{2,0} + \frac{1}{2}k_{1,2}$? It seems that the third argument of the function simply comes out, but there is no detailed explanation in this book.
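For reference, here is a minimal sketch of the classical fourth-order Runge-Kutta method applied to this system (variable names are mine, not Burden's; the exact solution $y(t) = 0.2e^{2t}(\sin t - 2\cos t)$ is used only as a check). Since $f_1(t, u_1, u_2) = u_2$, every stage evaluation of $f_1$ simply returns whatever is passed as its third argument, which is why $f_1(t_0 + \frac{h}{2}, w_{1,0} + \frac{1}{2}k_{1,1}, w_{2,0} + \frac{1}{2}k_{1,2}) = w_{2,0} + \frac{1}{2}k_{1,2}$:

```python
import math

def rk4_system(f, t0, u0, h, n):
    """Classical 4th-order Runge-Kutta for a first-order system u' = f(t, u)."""
    t, u = t0, list(u0)
    for _ in range(n):
        k1 = [h * v for v in f(t, u)]
        k2 = [h * v for v in f(t + h / 2, [ui + ki / 2 for ui, ki in zip(u, k1)])]
        k3 = [h * v for v in f(t + h / 2, [ui + ki / 2 for ui, ki in zip(u, k2)])]
        k4 = [h * v for v in f(t + h, [ui + ki for ui, ki in zip(u, k3)])]
        u = [ui + (a + 2 * b + 2 * c + d) / 6
             for ui, a, b, c, d in zip(u, k1, k2, k3, k4)]
        t += h
    return u

# u1 = y, u2 = y':  u1' = u2,  u2' = e^{2t} sin t - 2 u1 + 2 u2
f = lambda t, u: [u[1], math.exp(2 * t) * math.sin(t) - 2 * u[0] + 2 * u[1]]
w = rk4_system(f, 0.0, [-0.4, -0.6], 0.1, 10)  # approximations to y(1), y'(1)
exact_y1 = 0.2 * math.exp(2) * (math.sin(1) - 2 * math.cos(1))
```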
This question already has an answer here:
Let $f_n$ be a uniformly bounded sequence of holomorphic functions on $D$. Suppose there exists a point $a\in D$, such that $\lim_{n\rightarrow\infty}f_n^{(k)}(a)=0$ for each $k$. Show that $f_n\rightarrow0$ uniformly on each compact subset of $D$.
Since the functions $f_n$ are holomorphic and uniformly bounded, the Cauchy estimates give $\displaystyle |f_n^{(k)}(a)|\leq \dfrac{k!\sup_{z\in D}|f_n|}{r^k},$ where $r$ is the distance from $a$ to the boundary of $D$. Hence, for each fixed $k$, the sequence $f_n^{(k)}(a)$ is bounded.
I'm having trouble how to proceed further. |
Number problems involve finding two numbers that satisfy certain conditions.
If we label the numbers using the variables \(x\) and \(y,\) we can compose the objective function \(F\left( {x,y} \right)\) to be maximized or minimized.
The constraint specified in the problem allows us to eliminate one of the variables.
When we get the objective function as a single variable function, we can use differentiation to find the extreme values.
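The recipe above (eliminate one variable via the constraint, then find the extreme values of the single-variable objective) can also be cross-checked numerically. A hedged sketch, with the constraint \(xy = a\) and \(a = 9\) chosen purely for illustration:

```python
# Constraint: x * y = a  ->  y = a / x; objective: F(x) = x + a / x, x > 0.
a = 9.0  # value chosen for this demo only
xs = [0.001 * k for k in range(1, 20000)]  # grid over 0 < x < 20
best_x = min(xs, key=lambda x: x + a / x)  # minimize F on the grid
best_y = a / best_x
# Calculus predicts the minimum at x = y = sqrt(a) = 3.
```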
Solved Problems
Example 1. Find two numbers whose sum is \(36\) if the sum of their squares is to be a minimum.
Solution.
Let \(x\) and \(y\) be the two numbers. We want to find the minimum of the function
\[{F\left( {x,y} \right)} = {{x^2} + {y^2}}.\]
As \(x + y = 36,\) we can eliminate one variable in the objective function. Substitute \(y = 36 - x\) in the objective function.
\[{F\left( {x,y} \right) = {x^2} + {y^2} }={ {x^2} + {\left( {36 - x} \right)^2} }={ {x^2} + 1296 - 72x + {x^2} }={ 2{x^2} - 72x + 1296 \equiv F\left( x \right).}\]
Take the derivative:
\[{F^\prime\left( x \right) }={ \left( {2{x^2} - 72x + 1296} \right)^\prime }={ 4x - 72.}\]
The critical points are
\[{F^\prime\left( x \right) = 0,}\;\; \Rightarrow {4x - 72 = 0,}\;\; \Rightarrow {x = 18.}\]
Note that the second derivative is positive:
\[F^{\prime\prime}\left( x \right) = \left( {4x - 72} \right)^\prime = {4 \gt 0}.\]
Hence, the objective function has a local minimum at \(x = 18.\)
So the sum of squares is a minimum when \(x = 18,\) \(y = 36-x = 18.\)
Example 2. Find two positive numbers whose product is \(a\) such that their sum is minimum.
Solution.
Let \(x\) and \(y\) be the two numbers. The objective function is written in the form
\[F = x + y.\]
As \(xy = a,\) we can substitute \(y = \large{\frac{a}{x}}\normalsize\) into the objective function:
\[{F = x + y }={ x + \frac{a}{x} }={ F\left( x \right).}\]
Take the derivative and find the critical points:
\[{F^\prime\left( x \right) }={ \left( {x + \frac{a}{x}} \right)^\prime }={ 1 - \frac{a}{{{x^2}}};}\]
\[{F^\prime\left( x \right) = 0,}\;\; \Rightarrow {1 - \frac{a}{{{x^2}}} = 0,}\;\; \Rightarrow {{x^2} = a,}\;\; \Rightarrow {x = \pm \sqrt a .}\]
We should take only positive root \(x = + \sqrt a .\)
Find the second derivative and determine its sign at this point:
\[{F^{\prime\prime}\left( x \right) = \frac{{2a}}{{{x^3}}},}\;\; \Rightarrow {F^{\prime\prime}\left( {\sqrt a } \right) = \frac{{2a}}{{{{\left( {\sqrt a } \right)}^3}}} }={ \frac{2}{{\sqrt a }} \gt 0.}\]
We see that \(x = \sqrt a\) is a point of minimum by the Second Derivative Test.
Hence, the answer is
\[x = y = \sqrt a .\]
Example 3. Find two numbers whose difference is \(8\) and whose product is a minimum.
Solution.
The objective function is
\[F = xy,\]
where \(x\) and \(y\) are the two numbers.
Since \(x - y = 8,\) we can substitute \(y = x - 8\) in the objective function above. This yields:
\[{F = xy }={ x\left( {x - 8} \right) }={ {x^2} - 8x }={ F\left( x \right).}\]
Take the derivative:
\[{F^\prime\left( x \right) }={ \left( {{x^2} - 8x} \right)^\prime }={ 2x - 8.}\]
There is one critical point \(x = 4\).
Note that the second derivative is always positive:
\[{F^{\prime\prime}\left( x \right) }={ \left( {2x - 8} \right)^\prime }={ 2 \gt 0.}\]
Hence, the objective function has a minimum at the point \(x = 4.\) The other number equals \(y = -4.\)
Example 4. Determine two positive numbers whose product is \(4\) such that the sum of their squares is minimum.
Solution.
The objective function is given by
\[F = {x^2} + {y^2},\]
where \(x,y\) are the two unknown numbers.
As \(xy = 4,\) we obtain:
\[{F = {x^2} + {y^2} }={ {x^2} + {\left( {\frac{4}{x}} \right)^2} }={ {x^2} + \frac{{16}}{{{x^2}}} }={ F\left( x \right).}\]
The derivative of the objective function is
\[{F^\prime\left( x \right) }={ \left( {{x^2} + \frac{{16}}{{{x^2}}}} \right)^\prime }={ 2x - \frac{{32}}{{{x^3}}} }={ \frac{{2{x^4} - 32}}{{{x^3}}}.}\]
Now it is easy to find the critical points:
\[{F^\prime\left( x \right) = 0,}\;\; \Rightarrow {\frac{{2{x^4} - 32}}{{{x^3}}} = 0,}\;\; \Rightarrow {{x^4} = 16,}\;\; \Rightarrow {x = \pm 2.}\]
(The derivative is also undefined at \(x = 0,\) but that point lies outside the domain.) Since the numbers must be positive, we take only the point \(x = 2.\) Then \(y = 2.\)
Example 5. Find the number whose sum with its reciprocal is a minimum.
Solution.
The function to be minimized is written as
\[{F\left( x \right) }={ x + \frac{1}{x},}\]
where \(x\) is supposed to be a positive number.
Take the derivative:
\[{F^\prime\left( x \right) }={ \left( {x + \frac{1}{x}} \right)^\prime }={ 1 - \frac{1}{{{x^2}}} }={ \frac{{{x^2} - 1}}{{{x^2}}}.}\]
The critical values are
\[x = \pm 1\]
(the derivative is also undefined at \(x = 0,\) which lies outside the domain).
Only the root \(x = 1\) satisfies the condition \(x \gt 0.\)
Determine the second derivative:
\[{F^{\prime\prime}\left( x \right) }={ \left( {1 - \frac{1}{{{x^2}}}} \right)^\prime }={ \frac{2}{{{x^3}}} \gt 0.}\]
As the second derivative is positive at \(x = 1,\) this point corresponds to the minimum of the objective function. The minimum value of the function is
\[{{F_{\min }} = F\left( 1 \right) }={ 1 + \frac{1}{1} }={ 2.}\]
Example 6. Find two numbers whose difference is \(6\) such that the sum of their squares is a minimum.
Solution.
Let \(x\) and \(y\) be the two numbers. The objective function is written as
\[F = {x^2} + {y^2}.\]
As \(x - y = 6,\) we substitute \(y = x - 6\) in the function above:
\[{F = {x^2} + {y^2} }={ {x^2} + {\left( {x - 6} \right)^2} }={ {x^2} + {x^2} - 12x + 36 }={ 2{x^2} - 12x + 36 }={ F\left( x \right).}\]
Compute the derivative:
\[{F^\prime\left( x \right) }={ \left( {2{x^2} - 12x + 36} \right)^\prime }={ 4x - 12,}\]
so the critical point is \(x = 3.\)
The second derivative is
\[{F^{\prime\prime}\left( x \right) }={ \left( {4x - 12} \right)^\prime }={ 4 \gt 0.}\]
Hence, \(x = 3\) corresponds to the minimum of the objective function by the Second Derivative Test. The other number equals \(y = -3.\)
Example 7. Find two positive numbers whose sum is \(12\) so that the product of the square of one and \(4\text{th}\) power of the other is maximum.
Solution.
The objective function is written in the form
\[F\left( {x,y} \right) = {x^2}{y^4},\]
where \(x\) and \(y\) are the two numbers.
As \(x + y = 12\) we can write
\[{F = {x^2}{y^4} }={ {x^2}{\left( {12 - x} \right)^4} }={ F\left( x \right).}\]
Compute the derivative:
\[{F^\prime\left( x \right) }={ \left[ {{x^2}{{\left( {12 - x} \right)}^4}} \right]^\prime }={ 2x \cdot {\left( {12 - x} \right)^4} }+{ {x^2} \cdot 4{\left( {12 - x} \right)^3} \cdot \left( { - 1} \right) }={ 2x{\left( {12 - x} \right)^3}\left( {12 - 3x} \right) }={ 6x{\left( {12 - x} \right)^3}\left( {4 - x} \right).}\]
Determine the critical points:
\[{F^\prime\left( x \right) = 0,}\;\; \Rightarrow {6x{\left( {12 - x} \right)^3}\left( {4 - x} \right) = 0,}\;\; \Rightarrow {x = 0,\,4,\,12.}\]
At \(x = 0\) and \(x = 12,\) the objective function is equal to zero.
When \(x = 4,\) the value of \(y\) is
\[{y = 12 - x }={ 12 - 4 }={ 8.}\]
At this point, the objective function attains the maximum value:
\[{{F_{\max }} }={ {4^2} \cdot {8^4} }={ {2^{16}} }={ 65536.}\]
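As a quick numeric sanity check of this example (a sketch, not part of the original solution), a grid scan of \(F\left( x \right) = {x^2}{\left( {12 - x} \right)^4}\) over \(\left[ {0,12} \right]\) locates the same maximum:

```python
xs = [k / 1000 for k in range(0, 12001)]  # grid on [0, 12] with step 0.001
best = max(xs, key=lambda x: x ** 2 * (12 - x) ** 4)
peak = best ** 2 * (12 - best) ** 4
# Calculus predicts the maximum at x = 4 with F = 4^2 * 8^4 = 65536.
```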
Example 8. Find two positive numbers whose product is \(2\) and the sum of one number and the square of the other is a minimum.
Solution.
Let \(x\) and \(y\) be the two numbers. The constraint equation is written in the form
\[{xy = 2,}\;\; \Rightarrow {y = \frac{2}{x}.}\]
The objective function is given by
\[{F = x + {y^2} }={ x + {\left( {\frac{2}{x}} \right)^2} }={ x + \frac{4}{{{x^2}}}.}\]
Find the derivative and determine the critical points:
\[{F^\prime\left( x \right) = \left( {x + \frac{4}{{{x^2}}}} \right)^\prime }={ 1 + 4 \cdot \left( { - \frac{2}{{{x^3}}}} \right) }={ 1 - \frac{8}{{{x^3}}} }={ \frac{{{x^3} - 8}}{{{x^3}}};}\]
\[{F^\prime\left( x \right) = 0,}\;\; \Rightarrow {\frac{{{x^3} - 8}}{{{x^3}}} = 0,}\;\; \Rightarrow {{x^3} = 8,\;}\; \Rightarrow {x = 2.}\]
Thus, the only critical point in the domain \(x \gt 0\) is \(x = 2.\)
Using the First Derivative Test, one can show that \(x = 2\) is a point of minimum.
The second number is \(y = 1.\)
Example 9. Find two positive numbers whose sum is \(7\) and the product of the cube of one number and the exponential function of the other is a maximum.
Solution.
Let \(x\) and \(y\) be the two numbers. The objective function is given by
\[F = {x^3}{e^y}.\]
As \(x + y = 7,\) we substitute \(y = 7 - x\) in the function above.
\[{F = {x^3}{e^y} }={ {x^3}{e^{7 - x}} }={ F\left( x \right).}\]
Differentiate \(F\left( x \right):\)
\[{F^\prime\left( x \right) }={ \left( {{x^3}{e^{7 - x}}} \right)^\prime }={ \left( {{x^3}} \right)^\prime \cdot {e^{7 - x}} + {x^3} \cdot \left( {{e^{7 - x}}} \right)^\prime }={ 3{x^2}{e^{7 - x}} + {x^3} \cdot {e^{7 - x}} \cdot \left( { - 1} \right) }={ 3{x^2}{e^{7 - x}} - {x^3}{e^{7 - x}} }={ {x^2}{e^{7 - x}}\left( {3 - x} \right).}\]
It is clear that the positive critical value is only \(x = 3.\) Using the First Derivative Test, one can show that \(x = 3\) is a point of local maximum.
Respectively, the other number is \(y = 4.\)
Example 10. The sum of two positive numbers is \(24.\) The product of one and the square of the other is maximum. Find the numbers.
Solution.
Let the two numbers be \(x\) and \(y.\) The objective function is written as
\[F\left( {x,y} \right) = x{y^2}.\]
The constraint equation has the form
\[{x + y = 24,}\;\; \Rightarrow {y = 24 - x.}\]
Hence
\[{F = x{y^2} }={ x{\left( {24 - x} \right)^2}.}\]
Expanding \({\left( {24 - x} \right)^2},\) we obtain:
\[{F\left( x \right) }={ x{\left( {24 - x} \right)^2} }={ x\left( {576 - 48x + {x^2}} \right) }={ 576x - 48{x^2} + {x^3}.}\]
Differentiate:
\[{F^\prime\left( x \right) }={ \left( {576x - 48{x^2} + {x^3}} \right)^\prime }={ 576 - 96x + 3{x^2} }={ 3\left( {192 - 32x + {x^2}} \right).}\]
Find the critical points:
\[{F^\prime\left( x \right) = 0,}\;\; \Rightarrow {3\left( {192 - 32x + {x^2}} \right) = 0,\;}\; \Rightarrow {{x^2} - 32x + 192 = 0;}\]
\[{D = {\left( { - 32} \right)^2} - 4 \cdot 192 }={ 1024 - 768 }={ 256;}\]
\[{{x_{1,2}} = \frac{{ - \left( { - 32} \right) \pm \sqrt {256} }}{2} }={ \frac{{32 \pm 16}}{2} }={ 24,\,8.}\]
When \(x = 24,\) then \(y = 0,\) so the objective function is equal to zero in this case.
Note that the second derivative is
\[{F^{\prime\prime}\left( x \right) }={ \left( {576 - 96x + 3{x^2}} \right)^\prime }={ 6x - 96.}\]
Hence, the second derivative is negative at \(x = 8\) \(\left( {F^{\prime\prime}\left( 8 \right) = -48 \lt 0} \right),\) so the point \(x = 8\) is a point of maximum of the objective function.
The other number \(y\) is equal to
\[{y = 24 - x }={ 24 - 8 }={ 16.}\]
Example 11. Find two positive numbers whose sum is \(32\) and the sum of their square roots is maximum.
Solution.
We write the objective function in the form
\[F = \sqrt x + \sqrt y ,\]
where \(x, y\) are two positive numbers.
As \(x + y = 32,\) we can plug \(y = 32 - x\) into the objective function.
\[{F = \sqrt x + \sqrt y }={ \sqrt x + \sqrt {32 - x} }={ F\left( x \right).}\]
Differentiate \(F\left( x \right):\)
\[{F^\prime\left( x \right) }={ \left( {\sqrt x + \sqrt {32 - x} } \right)^\prime }={ \frac{1}{{2\sqrt x }} - \frac{1}{{2\sqrt {32 - x} }} }={ \frac{{\sqrt {32 - x} - \sqrt x }}{{2\sqrt x \sqrt {32 - x} }}.}\]
Determine the critical points:
\[{F^\prime\left( x \right) = 0,}\;\; \Rightarrow {\frac{{\sqrt {32 - x} - \sqrt x }}{{2\sqrt x \sqrt {32 - x} }} = 0,}\;\; \Rightarrow {\sqrt {32 - x} - \sqrt x = 0,}\;\; \Rightarrow {\sqrt {32 - x} = \sqrt x ,}\;\; \Rightarrow {\left\{ {\begin{array}{*{20}{l}} {32 - x = x}\\ {x \lt 32}\\ {x \gt 0} \end{array}} \right.,} \Rightarrow {x = 16.}\]
There are in total \(3\) candidate points: \(x = 0, 16, 32.\) We calculate the values of the objective function at these points:
\[{F\left( 0 \right) }={ \sqrt 0 + \sqrt {32 - 0} }={ \sqrt {32} }={ 4\sqrt 2 \approx 5.66;}\]
\[{F\left( {16} \right) }={ \sqrt {16} + \sqrt {32 - 16} }={ 4 + 4 }={ 8;}\]
\[{F\left( {32} \right) }={ \sqrt {32} + \sqrt {32 - 32} }={ \sqrt {32} }={ 4\sqrt 2 \approx 5.66.}\]
Thus, the maximum value \({F_{\max }} = 8\) is attained at \(x = 16,\) \(y = 16.\) |
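The comparison of endpoint and interior values can be reproduced numerically (a sketch, not part of the original solution):

```python
import math

F = lambda x: math.sqrt(x) + math.sqrt(32 - x)
values = {x: F(x) for x in (0, 16, 32)}  # endpoints and the critical point
# F(16) = 8 exceeds F(0) = F(32) = sqrt(32), about 5.66
```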
Search
Now showing items 1-1 of 1
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... |
One of the most important topics in complex analysis is complex integration. When we talk about complex integration, we refer to the line integral.
The definition of the line integral begins with a differentiable curve γ such that
$$ \begin{matrix}\gamma : [a,b] \mapsto \mathbb{C}\\ \;\;\;\;\; \;\;\;\;\;\;\; x \mapsto \gamma(x) \end{matrix}$$
Now we divide the interval [a, b] into n parts by points \( z_{i} \)
such that \( z_{0} = a \) and \( z_{n} = b \).
For each subinterval we pick a point \( \zeta_{i} \) in it and take \( E_{i}=f(\zeta_{i})(z_{i}-z_{i-1}), \; i=1,..,n\).
Then we take the partial sums \( \sum_{i=1}^{n}E_{i} = \sum_{i=1}^{n}f(\zeta_{i})(z_{i}-z_{i-1}) \). Taking the limit as n tends to infinity, we get the line integral, written as
$$\int_{a}^{b}f(z)dz \;\;, \;\; \int_{C}f(z)dz $$
The two notations denote the same integral.
The complex integral over a C curve is defined as
$$\int_{C}f(z)dz = \int_{C}(u+iv)(dx+idy)= \int_{C}udx -vdy + i\int_{C}vdx +udy$$
A very useful property of the integral, used in many proofs and estimates, is the following
$$\left | \int_{a}^{b }f(z)dz \right | \le \int_{a}^{b }\left |f(z) \right |dz$$
Line integral definition
Given a complex function f and a piecewise differentiable curve γ, we define the line integral of f over γ as:
$$\int_{\gamma}f(z)dz = \int_{a}^{b}f(\gamma(t))\gamma'(t)dt $$
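This definition translates directly into a numerical approximation: sample the parametrization, multiply by γ'(t), and sum. The sketch below (the function names and the test integrand are mine, not from the text) integrates f(z) = z along the unit-circle arc γ(t) = e^{it} from 1 to i, where the antiderivative z²/2 predicts the value (i² - 1)/2 = -1:

```python
import cmath

def line_integral(f, gamma, dgamma, a, b, n=2000):
    """Midpoint-rule approximation of ∫ f(gamma(t)) gamma'(t) dt over [a, b]."""
    h = (b - a) / n
    total = 0j
    for k in range(n):
        t = a + (k + 0.5) * h
        total += f(gamma(t)) * dgamma(t) * h
    return total

# f(z) = z along gamma(t) = e^{it}, t in [0, pi/2] (unit arc from 1 to i)
val = line_integral(lambda z: z,
                    lambda t: cmath.exp(1j * t),
                    lambda t: 1j * cmath.exp(1j * t),
                    0.0, cmath.pi / 2)
```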
The most important theorem is Cauchy's Theorem,
which states that the integral over a closed and simple curve is zero on simply connected domains. Cauchy gave a first proof assuming that the function f has a continuous first derivative; later Édouard Goursat discovered that this hypothesis was in fact redundant, and for this reason Cauchy's theorem is sometimes called the Cauchy-Goursat Theorem. This is the version that we will see here.
In the following theorems, C is a closed and simple curve contained in a simply connected open region R (that is, a domain).
Cauchy-Goursat Theorem
Given f, a holomorphic
function over R, then
$$\int_{C}f(z)dz = 0 $$
Green's Theorem in the plane
Let P and Q be continuous functions with continuous partial derivatives in R and on its boundary C. Then
$$ \int_{C} P dx+ Q dy = \int\int_{R}[\frac{\partial Q}{\partial x}- \frac{\partial P}{\partial y}]dx dy $$
It is relatively simple to put Green's theorem in complex form:
Green's theorem in complex form
Given F, with continuous partial derivatives in R and on its boundary C. Then
$$ \int_{C}F(z, \bar{z})dz = 2i\int\int_{R}\frac{\partial F}{\partial \bar{z}}dA $$
The following theorem is sometimes called the converse of Cauchy's theorem.
Morera's Theorem
Given f, a continuous complex function on R, let us suppose that for every closed and simple curve C in R it verifies
$$\int_{C}f(z)dz = 0 $$
then f is holomorphic
over R.
The following theorems are consequences of Cauchy's theorem
Theorem 1
If a and b are two points of R then the integral
$$ \int_{a}^{b} f(z) dz $$
is independent of the path followed between a and b.
The proof of this theorem is simple: if C is any path between a and b and C' is another, then C followed by the reverse of C' is a closed curve, so by Cauchy's theorem the integral around it is zero; hence the line integrals along C and C' are equal.
Theorem 2
Let a and b be two points of R and let F be such that F'(z) = f(z). Then
$$ \int_{a}^{b} f(z) dz = F(b) - F(a) $$
Reciprocally, if a and z are points of R and we define
$$ F(z) = \int_{a}^{z} f(w) dw $$
then F is holomorphic
in R and F'(z) = f(z)
The following theorem is very important: it states that the value of an integral over a closed and simple curve surrounding a singularity does not depend on the particular curve.
Theorem 3
Given f, a function holomorphic
in the region bounded by two closed and simple curves C and C'. Then
$$ \int_{C} f(z) dz = \int_{C'} f(z) dz$$
Where C and C' are traversed positively oriented, that is, counterclockwise.
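Theorem 3 can be illustrated numerically (a sketch with invented names, not from the text): integrating f(z) = 1/z over circles of radius 1 and radius 2, both enclosing the singularity at 0, gives the same value, 2πi:

```python
import cmath
import math

def circle_integral(f, radius, n=4000):
    """Riemann-sum approximation of the integral of f over |z| = radius,
    traversed once counterclockwise."""
    total = 0j
    for k in range(n):
        t = 2 * math.pi * (k + 0.5) / n
        z = radius * cmath.exp(1j * t)        # gamma(t)
        dz = 1j * radius * cmath.exp(1j * t)  # gamma'(t)
        total += f(z) * dz * (2 * math.pi / n)
    return total

i1 = circle_integral(lambda z: 1 / z, 1.0)
i2 = circle_integral(lambda z: 1 / z, 2.0)
# both approximate 2*pi*i, independent of the radius
```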
The following theorem is a generalization of the previous one to a region with n inner curves instead of one.
Figure 1: Region enclosed between the curves C and C'
Theorem 4
Given f, a function holomorphic
in a region bounded by n closed and simple curves \( C_{1}, C_{2}, ..., C_{n}\), which are enclosed by another larger curve C. Then
$$ \int_{C} f(z) dz = \int_{C_{1}} f(z) dz + \int_{C_{2}} f(z) dz + ... + \int_{C_{n}} f(z) dz$$
Where the curves \( C_{1}, C_{2}, ..., C_{n}\) are traversed positively oriented, that is, counterclockwise.
Figure 2: Region enclosed between the curve C and the curves \( C_{1}, C_{2}, ..., C_{n}\) |
Let's start with the experiment of throwing a six-sided dice and looking at what number turns out. We can represent its sample space by $$\Omega=\lbrace 1,2,3,4,5,6 \rbrace$$.
Let's consider two events: $$A =$$ "to extract an even number", $$B =$$ "to extract the number $$4$$ or higher". As we already know, the set of results that fulfill $$A$$ and $$B$$ is, respectively, $$A= \lbrace 2, 4, 6 \rbrace$$, $$B=\lbrace 4, 5, 6 \rbrace$$.
We can consider the following operations between two events: union, intersection, difference and complementary.
Let's see what they mean in our example.
The union of $$A$$ and $$B$$, which is written as "$$A$$ or $$B$$", or $$$A\cup B$$$ is the event formed by all the results that satisfy $$A$$ or $$B$$. In our case, it would be the event $$C =$$ "to extract an even number or a number higher than $$4$$". If we represent it as the set of possible results, $$C= \lbrace 2, 4, 5, 6 \rbrace$$, which are all the results that satisfy one of the two events.
It can be useful to express it with the sets notation since the union of $$A$$ and $$B$$ is, in fact, $$$A \cup B= \lbrace 2, 4, 6 \rbrace \cup \lbrace 4, 5, 6 \rbrace = \lbrace 2, 4, 5, 6 \rbrace$$$.
The intersection of $$A$$ and $$B$$, which we write as "$$A$$ and $$B$$", or $$$A\cap B$$$ is the event formed by all the results that satisfy $$A$$ and $$B$$. In our case, it would be the event $$C =$$ "to extract an even and higher than or equal to $$4$$ number". If we represent it as the set of possible results, $$C=\lbrace 4, 6 \rbrace$$, which are all the results that satisfy both events simultaneously.
As before, if we express it as operations between sets, the intersection of $$A$$ and $$B$$ is in fact $$$A \cap B= \lbrace 2, 4, 6 \rbrace \cap \lbrace 4, 5, 6 \rbrace = \lbrace 4, 6 \rbrace$$$.
The difference of $$A$$ and $$B$$, which we write as $$$A-B$$$ is the event formed by all the results that satisfy $$A$$, but do not satisfy $$B$$. In our case, it would be the event $$C =$$"to extract an even number, but not higher than or equal to $$4$$", or what amounts to the same, $$C =$$ "to extract an even number smaller than $$3$$".
We can see that $$C=\lbrace 2 \rbrace$$, since it is the only result that satisfies both conditions. The fact that there are two different ways of writing $$C$$ is not a coincidence; in fact, it is always satisfied that $$A-B=A\cap \overline{B}$$.
With sets, the difference between $$A$$ and $$B$$ is $$$A-B= \lbrace 2, 4, 6 \rbrace - \lbrace 4, 5, 6 \rbrace = \lbrace 2 \rbrace$$$
Sometimes we can also find it written as $$A$$ \ $$B$$. To calculate this, we remove from $$A$$ all the results that are in $$B$$. We must be careful because $$A - B$$ is not the same as $$B - A$$.
In our case, $$B - A = \lbrace 5 \rbrace$$, which is the only result that is in $$B$$, but is not in $$A$$.
Finally, there is the complementary or the opposite of $$A$$ . If our event is $$A$$, we write the opposite event as $$$\overline{A}$$$ which is formed by all the elementary events that do not satisfy $$A$$.
In our case $$\overline{A}=$$"be an odd number"$$=\lbrace 1,3,5 \rbrace$$. With the notation on theory of sets, we calculate the complementary by means of $$$\overline{A}=\Omega-A$$$
That is to say, the complementary of an event $$A$$ is, as a matter of fact, the difference between $$\Omega$$ and $$A$$: all the elementary events except those that satisfy $$A$$.
As a result of our definition, we see clearly that the opposite event of an impossible event is a sure event since if $$C=\emptyset$$, that is to say, the event $$C$$ is impossible, then $$\overline{C}=\Omega$$, and vice versa, the opposite of a sure event is an impossible event, since if $$D$$ is a sure event, that is to say $$D=\Omega$$, then $$\overline{D}=\emptyset$$.
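These operations can be checked directly with Python's built-in `set` type, using the die example above (a small sketch; the variable names are mine):

```python
# Sample space and events for the six-sided die.
omega = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}  # "to extract an even number"
B = {4, 5, 6}  # "to extract the number 4 or higher"

union = A | B             # A or B         -> {2, 4, 5, 6}
intersection = A & B      # A and B        -> {4, 6}
difference = A - B        # A but not B    -> {2}
complement_A = omega - A  # opposite of A  -> {1, 3, 5}
```

Note that the complement of the empty set is `omega` itself, matching the remark that the opposite of an impossible event is a sure event.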
Properties of operations between events
Next, we highlight a series of properties of the sets that can turn out to be useful to us concerning probability.
Commutative:
$$A \cup B = B \cup A$$
$$A \cap B = B \cap A$$
Associative:
$$(A \cup B)\cup C = A \cup (B \cup C)$$
$$(A \cap B)\cap C = A \cap (B \cap C)$$
For this reason, when we only have unions or intersections, we usually do not use brackets since there is no risk of confusion.
Idempotence:
$$A \cup A=A$$
$$A \cap A=A$$
Simplificative (absorption):
$$A \cup (A \cap B) = A$$
$$A \cap (A \cup B)=A$$
Distributive:
$$A \cup (B \cap C) = (A \cup B)\cap(A \cup C)$$
$$A \cap (B \cup C) = (A \cap B)\cup(A \cap C)$$
Neutral element:
$$A \cup \emptyset=A$$
$$A \cap \Omega=A$$
Complementary:
$$A \cup \overline{A}=\Omega$$
$$A\cap\overline{A}=\emptyset$$
Involution:
$$\overline{\overline{A}}=A$$, that is to say, the complementary of the complementary of $$A$$ is $$A$$.
De Morgan's Laws:
$$ \overline{A\cup B}=\overline{A}\cap\overline{B}$$
$$\overline{A\cap B}=\overline{A}\cup\overline{B}$$
Written like that, these properties seem difficult, but in fact, if you think about them a little bit, most of them will seem like a question of common sense to you.
The commutative property of the union is telling us that it is the same "to extract a number one or a number four" as it is "to extract a number four or a number one".
The complementary, with the intersection, tells us that "to extract three and not to extract three" is the impossible event, that is to say, that this can never happen. Logical, right?
The idempotency with the intersection only says that "to extract two and to extract two" is just "to extract two".
Do you dare to translate others? If you try it, you will see that in fact this table is not complicated.
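All of the listed identities can also be checked by brute force over a small sample space — a verification for the die example, not a proof (the code is my addition):

```python
from itertools import product

omega = frozenset({1, 2, 3, 4, 5, 6})
events = [frozenset(s) for s in [(), (2, 4, 6), (4, 5, 6), (1, 2)]] + [omega]

def comp(A):
    """Complementary event: omega - A."""
    return omega - A

# Try every triple of test events.
for A, B, C in product(events, repeat=3):
    assert A | (B & C) == (A | B) & (A | C)  # distributive laws
    assert A & (B | C) == (A & B) | (A & C)
    assert A | (A & B) == A                  # simplificative laws
    assert A & (A | B) == A
    assert comp(A | B) == comp(A) & comp(B)  # De Morgan's laws
    assert comp(A & B) == comp(A) | comp(B)
    assert comp(comp(A)) == A                # double complement
```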
In a meeting we have $$20$$ people, some of them are wearing glasses. Determine which results form the event "be a woman and not wear glasses, or to wear glasses".
We need to define, first of all, which ones are our possible results.
For example, we can suppose that they are, on one hand, $$H=$$"to be a man" and $$M=$$"to be a woman" and on the other hand, $$G=$$"to wear glasses" and $$NG=$$"not to wear glasses".
In this case, our sample space is formed by $$$\Omega=\lbrace (H,G), (H,NG), (M,G), (M,NG) \rbrace$$$
With this notation, the event "to be a woman and not wear glasses" $$=\lbrace (M, NG) \rbrace$$.
The event "to wear glasses"$$= \lbrace (H, G), (M, G) \rbrace$$, is formed by the men that wear glasses and the women wearing glasses. Then, the event union of "being a woman and not wearing glasses" or "wearing glasses" is formed by $$\lbrace (H,G), (M,G), (M,NG) \rbrace$$, that is to say, the only ones that do not satisfy the event are the men that do not wear glasses.
Let's have a look at that: in fact $$\overline{H}=M$$ and $$\overline{G}=NG$$, meaning that if someone is not a man, then they are a woman, and the opposite of wearing glasses is not wearing glasses.
For this reason, we could also have described our sample space like $$$\Omega=\lbrace (H,G), (H,\overline{G}), (\overline{H},G), (\overline{H},\overline{G}) \rbrace$$$
An urn contains three red balls and two blue balls. We extract two balls; we take one at a time, look at its color, then put it back before taking out the second ball.
Define what possible results satisfy the event "to extract one red ball and one blue ball, no matter in what order". Verify that this event coincides with the intersection of "to extract a red ball" and "to extract a blue ball"
Define which results satisfy the event "to extract a red ball the first time, or a blue ball the second time". Verify that this event coincides with the union of "to extract a red ball the first time" or "to extract a blue ball the second time".
That is a frequent kind of experiment, which is related to combinatorial analysis.
1
First, let's analyze what happens when the order does not matter to us. Every time that we extract a ball, it can be a red ball $$(R)$$, or a blue ball $$(B)$$. Then, we can consider that our sample space is $$\Omega=\lbrace \{R,R\}, \{R,B\}, \{B,B\} \rbrace$$. In that case, the possible result that satisfies the statement is $$\{R,B\}$$.
Let’s now see that the event "to extract a red ball and a blue one" matches up with the intersection of the events "to extract a red ball" (in either one of the two extractions) and "to extract a blue ball" (also in any of the extractions). The results that satisfy the event "to extract a red ball" are $$\{\{R,R\} , \{R,B\}\}$$. The results that satisfy the event "to extract a blue ball" are $$\{\{R,B\} , \{B,B\}\}$$. The event intersection is formed by those that satisfy both events simultaneously, that is to say, only $$\{R,B\}$$.
This is not the only way to solve this section. We can also consider the results as ordered, and then see which ones satisfy the statement. If we consider the results in order, then our sample space is
$$\Omega=\{ (R,R), (R,B), (B,R), (B,B)\}$$, in short, we usually write it as $$\Omega=\{RR,RB,BR,BB\}$$. Then, the possible results that satisfy the statement are $$RB$$ and $$BR$$.
In this case, the event "extract a red ball" $$=\{RR, RB, BR \}$$. The event "extract a blue ball"$$=\{ RB, BR, BB \}$$. The event intersection is formed by the events that satisfy both. In this case, the common events are $$\{RB, BR\}$$, as we have seen before.
2
Now let's think about what happens when the order IS important. In this case, we need to know in what order we have extracted the balls, therefore we have to write the sample space as $$\Omega=\{RR, RB,BR,BB\}$$. The results that satisfy "to extract a red ball the first time, or a blue one the second one" are: $$RR, RB, BB$$.
Let's now see that the event "to extract a red ball the first time" $$=\{ RR, RB \}$$, and the event "to extract a blue ball the second time" $$=\{ RB, BB\}$$. Therefore, the union of both is the set formed by those that satisfy one or the other, that is to say, "to extract a red ball the first time, or a blue one the second time" $$=\{ RR, RB, BB \}$$, as we have seen before.
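The ordered version of this exercise is small enough to enumerate by machine; the following sketch (mine, not part of the original solution) confirms both verifications:

```python
from itertools import product

# Ordered sample space for two draws with replacement: R = red, B = blue.
omega = ["".join(p) for p in product("RB", repeat=2)]  # RR, RB, BR, BB

red = {w for w in omega if "R" in w}    # "to extract a red ball"
blue = {w for w in omega if "B" in w}   # "to extract a blue ball"
assert red & blue == {"RB", "BR"}       # one ball of each colour, in any order

red_first = {w for w in omega if w[0] == "R"}
blue_second = {w for w in omega if w[1] == "B"}
assert red_first | blue_second == {"RR", "RB", "BB"}
```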
$\newcommand{\Sym}[1]{\operatorname{Sym}{#1}}$
Let $V$ be a $n$-dim real vector space with dual space $V^*$. Let $\alpha$ be a covariant $k$-tensor, i.e., $\alpha \in T^k(V^*) \equiv (V^*)^{\otimes k}$. Then how would you show that the symmetrization $\Sym{\alpha}$ of $\alpha$ is the unique symmetric $k$-tensor such that $$ \boxed{\Sym{\alpha} (v,...,v) =\alpha(v,...,v), \qquad v\in V} $$
Note the symmetrization is defined by $$ \Sym{\alpha} (v_1,...,v_k) = \frac{1}{k!} \sum_{\sigma\in S_k} \alpha (v_{\sigma_1},...,v_{\sigma_k}) $$ where $S_k$ is the symmetric group on $k$ letters and $T^k(V^*)$ is identified with the space of multilinear real functionals on $V^k$.
EDIT: The key seems to be proving the following fact $$ \boxed{k!(v_1\cdots v_k) = \sum_{l=0}^k (-1)^l \sum_{|J|=l,J\subseteq \{1,...,k\}}\left( \sum_{i\in\{1,...,k\}-J} v_i \right)^k} $$ where $|J|$ is the number of elements of the set $J$. I used $v_1\cdots v_k$ to denote $\beta(v_1,..., v_k)$ where $\beta$ is a symmetric $k$-tensor. Similarly, I used $v^k$ to denote $\beta(v,...,v)$.
E.g. when $k=3$, we have $$ 3!(abc)=(a+b+c)^3-(a+b)^3-(a+c)^3-(b+c)^3+a^3+b^3+c^3 $$ However, I'm having trouble proving this identity for general $k$.
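A quick numerical sanity check of this identity is possible in the scalar case, where the symmetric $k$-tensor is simply the product $\beta(v_1,\ldots,v_k)=v_1\cdots v_k$ (the code is my addition, and of course not a proof):

```python
from itertools import combinations
from math import factorial, isclose

def polarization_rhs(vals):
    """Sum over subsets J of {1..k} of (-1)^|J| * (sum_{i not in J} v_i)^k."""
    k = len(vals)
    total = 0.0
    for l in range(k + 1):
        for J in combinations(range(k), l):
            s = sum(v for i, v in enumerate(vals) if i not in J)
            total += (-1) ** l * s ** k
    return total

vals = [1.5, -2.0, 3.0, 0.5]
prod = 1.0
for v in vals:
    prod *= v
# k! * (v_1 ... v_k) equals the alternating sum above.
assert isclose(factorial(len(vals)) * prod, polarization_rhs(vals))
```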
EDIT 2: I just learned that the formula in the first EDIT refers to the polarization formula, which can be found in this post
With the release of the Fed transcripts from the September 16th 2008 meeting, a narrative of the Fed worrying about commodities inflation distracted it from the worsening economic situation is forming. Here are e.g. Matthew Yglesias and David Glasner. In general this is part of a larger monetarist narrative that the Fed caused the recession and the financial crisis with tight monetary policy prior to September 2008. Here fore example is Scott Sumner.
In this post I will analyze this scenario with the information transfer model. First, I will look at the direct effect of monetary policy on NGDP and the price level. One issue is determining what the counterfactual monetary policy would have been. I chose a linear extrapolation from 2006 as this counterfactual. A second issue is the counterfactual for NGDP: was the shock an exogenous shock (i.e. independent of monetary policy) or not (i.e. potentially caused by monetary policy). Due to this ambiguity, I decided to do the calculation two ways: NGDP follows its empirical path (scenario 1: the NGDP shock was exogenous -- not due to monetary policy) and NGDP follows a counterfactual path (scenario 2: no exogenous NGDP shock) [2].
Turns out both gave me the same answer, so we can be fairly confident about the effect of monetary policy. I used this procedure to extract NGDP shocks. Here is the effect of monetary policy relative to the counterfactual in scenario 1 (the effect of monetary policy is dashed blue relative to the counterfactual solid blue line, the gray shaded area is the actual shock):
Here is the effect of monetary policy relative to the counterfactual in scenario 2 (same key to the graph as above):
Both of these result in the same impact on NGDP (this graph shows the difference between the dashed curves and the solid curves in the previous two graphs in black and the actual shock in blue):
From this analysis, base adjustments resulted in a peak -3% of GDP shock, but that is only about 10% of the required shock (integrated) or 23% of the required shock (amplitude). Therefore the direct impact of monetary policy through the price level (the quantity theory of money) is insufficient to account for the entire shock. There is another potential source of a shock from monetary policy: interest rates. Here I show the effect of scenario 1 (dashed black) and scenario 2 (solid black) on the long term interest rates and the short term interest rates (green, which is shown relative to the long term rate):
The Fed was effectively raising interest rates gradually from well before the onset of the financial crisis by having the base grow more slowly than NGDP. Short run rates followed long run rates up until the first rounds of QE. If we use the IS-LM model, we can get an estimate of the impact of this interest rate increase. Begin with the IS market equation:
$$
\log r = \log \frac{Y^{0}}{\kappa_{IS} IS_{ref}} - \kappa_{IS}\frac{\Delta Y}{Y^{0}}
$$
If we have a small change in $r = r_{0} + \delta r$, then we have
$$
\log r_{0} + \frac{\delta r}{r_{0}} + \cdots = \log \frac{Y^{0}}{\kappa_{IS} IS_{ref}} - \kappa_{IS}\frac{\Delta Y}{Y^{0}}
$$
So that
$$
\frac{\delta r}{r_{0}} \simeq - \kappa_{IS}\frac{\Delta Y}{Y^{0}}
$$
Essentially the percent increase in the interest rate $r$ is equal to $\kappa_{IS}$ times the percent decrease in output. Now we don't know what $\kappa_{IS}$ is (it is effectively the slope of the IS curve, and estimates tend to cluster around 1, but can be as high as 5, see e.g. here [1]), but if we assume $\kappa_{IS} \sim 1$ then our approximate 10% change (about 50 basis points) in the 10-year interest rate (which was averaging 4.7% for the two years prior to the financial crisis) would result in a shock of 10%. Coupled with the 3% shock due to the base adjustment above, this would account for all of the shock that caused the Great Recession. (The 25 basis point adjustment referred to as Alternative A at the last meeting would have reduced the impact by about half.)
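For concreteness, the arithmetic of that estimate (with the stated assumption $\kappa_{IS} \sim 1$) is just:

```python
# Back-of-the-envelope interest-rate-channel estimate from the text.
r0 = 0.047        # average 10-year rate over the two years pre-crisis
delta_r = 0.0050  # the ~50 basis point increase
kappa_is = 1.0    # assumed slope of the IS curve

pct_rate_change = delta_r / r0              # about 10.6%
output_shock = -pct_rate_change / kappa_is  # Delta Y / Y: about -10.6%
```

Halving `delta_r` (the Alternative A adjustment) halves the implied shock.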
So according to this analysis the Fed is to blame, acting through the interest rate channel. An exogenous shock from the financial crisis is not necessary to account for any additional shock to NGDP. This is effectively Sumner's view above, however I don't think he'd agree with my use of the IS-LM model! The unfortunate thing is that after the shock occurred, we ended up mired in a liquidity trap.
[1] Comparative Performance of U.S. Econometric Models (1991) Edited by Lawrence R. Klein
[2] PS: Here is another representation of the counterfactuals (scenario 1 dotted and scenario 2 solid black) and the actual path (blue):
It is well-known that the complement of $\{ ww \mid w\in \Sigma^*\}$ is context-free. But what about the complement of $\{ www \mid w\in \Sigma^*\}$?
Still CFL I believe, with an adaptation of the classical proof. Here's a sketch.
Consider $L = \{xyz : |x|=|y|=|z| \land (x \neq y \lor y \neq z)\}$, which is the complement of $\{www\}$, with the words of length not $0$ mod $3$ removed.
Let $L' = \{uv : |u| \equiv_3 |v| \equiv_3 0 \land u_{2|u|/3} \neq v_{|v|/3}\}$. Clearly, $L'$ is CFL, since you can guess a position $p$ and consider that $u$ ends $p/2$ after that. We show that $L = L'$.
$L \subseteq L'$: Let $w = xyz \in L$. Assume there's a $p$ such that $x_p \neq y_p$. Then write $u$ for the $3p/2$ first characters of $w$, and $v$ for the rest. Naturally, $u_{2|u|/3} = x_p$. Now what is $v_{|v|/3}$? First:
$$|v|/3 = (|w| - 3p/2)/3 = |w|/3 - p/2.$$
Hence, in $w$, this is position: $$|u|+|v|/3 = 3p/2 + |w|/3 - p/2 = |w|/3 + p,$$ or, in other words, position $p$ in $y$. This shows that $u_{2|u|/3} = x_p \neq y_p = v_{|v|/3}$.
If $y_p \neq z_p$, then let $u$ be the first ${3\over2}(|w|/3 + p)$ characters of $w$, so that $u_{2|u|/3}$ is $y_p$; $v$ is the rest of $w$. Then: $$|u| + |v|/3 = 2|w|/3 + p$$ hence similarly, $v_{|v|/3} = z_p$.
$L' \subseteq L$: We reverse the previous process. Let $w = uv \in L'$. Write $p = 2|u|/3$. Then: $$p+|w|/3 = 2|u|/3+|uv|/3 = |u| + |v|/3.$$ Thus $w_p = u_{2|u|/3} \neq v_{|v|/3} = w_{p + |w|/3}$, and $w \in L$ (since if $w$ is of the form $xxx$, it must hold that $w_p = w_{p+|w|/3}$ for all $p$).
Here is the way I think about solving this problem. In my opinion, it's intuitively clearer.
A word $x$ is not of the form $www$ iff either (i) $|x| \not\equiv 0$ (mod 3), which is easy to check, or (ii) there is some input symbol $a$ that differs from the corresponding symbol $b$ that occurs $|w| = |x|/3$ positions later.
We use the usual trick of using the stack to maintain an integer $t$ by having a new "bottom-of-stack" symbol $Z$, storing the absolute value $|t|$ as the number of counters on the stack, and sgn($t$) by the state of the PDA. Thus we can increment or decrement $t$ by doing the appropriate operation.
The goal is to use nondeterminism to guess the positions of the two symbols you are comparing, and use the stack to record $t := |x|-3d$, where $d$ is the distance between these two symbols.
We accomplish this as follows: increment $t$ for each symbol seen until the first guessed symbol $a$ is chosen, and record $a$ in the state. For each subsequent input symbol, until you decide you've seen $b$, decrement $t$ by $2$ ($1$ for the input length and $-3$ for the distance). Guess the position of the second symbol $b$ and record whether $a \not= b$. Continue incrementing $t$ for subsequent input symbols. Accept if $t = 0$ (detectable by $Z$ at top) and $a \not= b$.
The nice thing about this is that it should be completely clear how to extend this to arbitrary powers.
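Indeed, the characterization used here — $x$ is not a $k$-th power iff $|x|$ is not divisible by $k$, or some symbol differs from the one $|x|/k$ positions later (the pair $(a,b)$ the PDA guesses) — is easy to sanity-check by brute force (a sketch in Python, my addition):

```python
from itertools import product

def not_kth_power(x, k):
    """True iff x is NOT of the form w^k, via the guessed pair (a, b)."""
    n = len(x)
    if n % k:
        return True
    d = n // k  # the distance |w| between the two compared symbols
    return any(x[p] != x[p + d] for p in range(n - d))

# Brute-force comparison against the direct definition, for small k and n.
for k in (2, 3, 4):
    for n in range(9):
        for x in map("".join, product("ab", repeat=n)):
            direct = not (n % k == 0 and x == x[:n // k] * k)
            assert not_kth_power(x, k) == direct
```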
Just a different ("grammar oriented") perspective to prove that the complement of $\{ w^k \}$ is CF for any fixed $k$ using closure properties.
First note that in the complement of $\{ w^k \}$ there is always $i$ such that $w_i \neq w_{i+1}$. We focus on $w_1 \neq w_2$ and start with a simple CF grammar that generates:
$L = \{\underbrace{a00...0}_{w_1} \; \underbrace{b00...0}_{w_2} ... \underbrace{000...0}_{w_k} \mid |w_i|=n \} = \{ a 0^{n-1} \, b 0^{n(k-1)-1} \}$
E.g. for $k = 3$, we have $L = \{ a\,b\,0, a0\,b0\,00, a00\,b00\,000, ...\}$, $G_L = \{ S \to ab0 | aX00, X \to 0X00 | 0b0 \}$
Then apply closure under
inverse homomorphism, and union:
First homomorphism: $\varphi(1) \to a, \varphi(0) \to b, \varphi(1)\to 0, \varphi(0) \to 0 $
Second homomorphism: $\varphi'(0) \to a, \varphi'(1) \to b, \varphi'(1)\to 0, \varphi'(0) \to 0$
$L' = \varphi^{-1}(L) \cup \varphi'^{-1}(L)$ is still context free
Apply closure under
cyclic shifts to $L'$ to get the set of strings of length $kn$ not of the form $w^k$:
$L'' = Shift(L') = \{ u \mid u \neq w^k \land |u| = kn \}$.
Finally
add the regular set of strings whose length is not divisible by $k$ in order to get exactly the complement of $\{w^k\}$:
$L'' \cup \{\{0,1\}^n\mid n \bmod k \neq 0\} = \{ u \mid u \neq w^k\}$
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Beauty production in pp collisions at √s=2.76 TeV measured via semi-electronic decays
(Elsevier, 2014-11)
The ALICE Collaboration at the LHC reports measurement of the inclusive production cross section of electrons from semi-leptonic decays of beauty hadrons with rapidity |y|<0.8 and transverse momentum 1<pT<10 GeV/c, in pp ...
The ALICE Transition Radiation Detector: Construction, operation, and performance
(Elsevier, 2018-02)
The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While aimed at providing electron identification and triggering, the TRD ...
Constraining the magnitude of the Chiral Magnetic Effect with Event Shape Engineering in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2018-02)
In ultrarelativistic heavy-ion collisions, the event-by-event variation of the elliptic flow $v_2$ reflects fluctuations in the shape of the initial state of the system. This allows to select events with the same centrality ...
First measurement of jet mass in Pb–Pb and p–Pb collisions at the LHC
(Elsevier, 2018-01)
This letter presents the first measurement of jet mass in Pb-Pb and p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV and 5.02 TeV, respectively. Both the jet energy and the jet mass are expected to be sensitive to jet ...
First measurement of $\Xi_{\rm c}^0$ production in pp collisions at $\mathbf{\sqrt{s}}$ = 7 TeV
(Elsevier, 2018-06)
The production of the charm-strange baryon $\Xi_{\rm c}^0$ is measured for the first time at the LHC via its semileptonic decay into e$^+\Xi^-\nu_{\rm e}$ in pp collisions at $\sqrt{s}=7$ TeV with the ALICE detector. The ...
D-meson azimuthal anisotropy in mid-central Pb-Pb collisions at $\mathbf{\sqrt{s_{\rm NN}}=5.02}$ TeV
(American Physical Society, 2018-03)
The azimuthal anisotropy coefficient $v_2$ of prompt D$^0$, D$^+$, D$^{*+}$ and D$_s^+$ mesons was measured in mid-central (30-50% centrality class) Pb-Pb collisions at a centre-of-mass energy per nucleon pair $\sqrt{s_{\rm ...
Search for collectivity with azimuthal J/$\psi$-hadron correlations in high multiplicity p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 8.16 TeV
(Elsevier, 2018-05)
We present a measurement of azimuthal correlations between inclusive J/$\psi$ and charged hadrons in p-Pb collisions recorded with the ALICE detector at the CERN LHC. The J/$\psi$ are reconstructed at forward (p-going, ...
Systematic studies of correlations between different order flow harmonics in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(American Physical Society, 2018-02)
The correlations between event-by-event fluctuations of anisotropic flow harmonic amplitudes have been measured in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The results are ...
$\pi^0$ and $\eta$ meson production in proton-proton collisions at $\sqrt{s}=8$ TeV
(Springer, 2018-03)
An invariant differential cross section measurement of inclusive $\pi^{0}$ and $\eta$ meson production at mid-rapidity in pp collisions at $\sqrt{s}=8$ TeV was carried out by the ALICE experiment at the LHC. The spectra ...
J/$\psi$ production as a function of charged-particle pseudorapidity density in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Elsevier, 2018-01)
We report measurements of the inclusive J/$\psi$ yield and average transverse momentum as a function of charged-particle pseudorapidity density ${\rm d}N_{\rm ch}/{\rm d}\eta$ in p-Pb collisions at $\sqrt{s_{\rm NN}}= 5.02$ ...
Energy dependence and fluctuations of anisotropic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ and 2.76 TeV
(Springer Berlin Heidelberg, 2018-07-16)
Measurements of anisotropic flow coefficients with two- and multi-particle cumulants for inclusive charged particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ and 2.76 TeV are reported in the pseudorapidity range |η| < 0.8 ...
Let's imagine the following scenario:
algebraic multiplicity: $\lambda_{1} = \lambda_{2} = \lambda_3: 3$ geometric multiplicity: 1
the first column of the fundamental matrix can be found as follows, using the series expansion of the matrix exponential:
$\vec{y_{1}}(t) = e^{\lambda_{1}t}[I\vec{x_{1}}+ \frac{t^1}{1!}(A-\lambda_{1}I)\vec{x_{1}}+...+\frac{t^n}{n!}(A-\lambda_{1}I)^{n}\vec{x_{1}}] = e^{\lambda_{1}t}\vec{x_{1}} $
Now we need to create a new vector ourselves, as we only have one eigenvector:
$ \vec{y_{2}}(t) = e^{\lambda_{2}t}[I\vec{x_{2}}+ \frac{t^1}{1!}(A-\lambda_{2}I)\vec{x_{2}}+...+\frac{t^n}{n!}(A-\lambda_{2}I)^{n}\vec{x_{2}}] = e^{\lambda_{2}t}(t\vec{x_{1}}+\vec{x_{2}}) $
where $(A-\lambda_{2}I)\vec{x_{2}} = \vec{x_{1}}$ and all the higher-order terms become $0$.
And let's now create the third and final vector:
$ \vec{y_{3}}(t) = e^{\lambda_{3}t}[I\vec{x_{3}}+ \frac{t^1}{1!}(A-\lambda_{3}I)\vec{x_{3}}+...+\frac{t^n}{n!}(A-\lambda_{3}I)^{n}\vec{x_{3}}] = e^{\lambda_{3}t}(\vec{x_{3}}+t\vec{x_{2}}+ \frac{t^2}{2!}\vec{x_{1}}) $
with $(A-\lambda_{3}I)\vec{x_{3}} = \vec{x_{2}}$, so that $(A-\lambda_{3}I)^{2}\vec{x_{3}} = \vec{x_{1}}$, and all the higher-order terms equal $0$.
My question is: are the columns of the fundamental matrix composed of those three vectors (2+1) still linearly independent? The last two vectors are calculated using the first eigenvector. This makes me think that the three vectors won't be linearly independent, meaning the columns of the fundamental matrix will not be linearly independent either.
But when solving a system of first-order differential equations in matrix form using the eigenvalues and eigenvectors, the solutions are the columns of the FM, and they have to be linearly independent. Isn't that paradoxical?
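For what it's worth, a concrete check (my addition) for a single $3\times 3$ Jordan block with eigenvector $\vec{x_{1}}=e_1$ and generalized eigenvectors $\vec{x_{2}}=e_2$, $\vec{x_{3}}=e_3$: the Wronskian of the three solutions above is $e^{3\lambda t}\neq 0$, so the columns stay linearly independent even though all three are built from a single eigenvector chain.

```python
from math import exp

def det3(M):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def wronskian(lam, t):
    """Wronskian of y1, y2, y3 for a 3x3 Jordan block with eigenvalue lam."""
    e = exp(lam * t)
    y1 = [e, 0.0, 0.0]              # e^{lt} x1
    y2 = [e * t, e, 0.0]            # e^{lt} (t x1 + x2)
    y3 = [e * t * t / 2, e * t, e]  # e^{lt} (t^2/2 x1 + t x2 + x3)
    return det3([[y1[i], y2[i], y3[i]] for i in range(3)])

# The determinant is e^{3*lam*t}, nonzero for every t.
for t in (-1.0, 0.0, 2.5):
    assert abs(wronskian(0.7, t) - exp(3 * 0.7 * t)) < 1e-9
```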
So this is a little cheating, but you asked for an interesting order:
Classify all groups of order $24k+4$. ALL OF THEM.
I find it interesting that there is even an answer!
I'll let you assume someone else has already classified the groups of order $4$ and $n/4 = 6k+1$, though the latter can get pretty tricky [ no one has managed $5^{10} = 6(1627604)+1$ yet ].
Proposition: For each distinct group $N$ of order $6k+1$, find the conjugacy classes of (a) automorphisms of $N$ with order dividing 4, (b) unordered pairs of commuting automorphisms of $N$ with orders dividing 2 [ where $(x,y) \equiv (x,xy)$ and $(x,y) \equiv (y,x)$ as well ]. Then for each class, we get an isomorphism class of group, either (a) $C_4 \ltimes N$ or (b) $K_4 \ltimes N$. These are precisely all isomorphism classes of groups of order $24k+4$.
Proof: Burnside's fusion theorem and checking some details on isomorphisms of semi-directs. $\square$
In general the groups of order 4 are easy: $C_4$ and $K_4$. Since the $6k+1$ can be tricky in general, let's choose a specific number.
Example: Take $n=316=24\cdot 13+4 = 4\cdot 79$. First we have someone else classify the groups of order $4$ and $n/4 = 79$. We get $C_4$ and $K_4$ for the Sylow $2$-subgroup and then just $C_{79}$ for the Sylow $2$-complement.
Now the automorphism group of $C_{79}$ is cyclic of order 78. It has (a) the trivial automorphism $1 = (x \mapsto x)$ and the order two automorphism $-1 = (x \mapsto x^{-1})$ and that's it, and (b) the pairs $(1,1)$, $(1,-1)\equiv(-1,-1)\equiv(-1,1)$. That gives us the four groups (a) $C_4 \times C_{79}$, $C_4 \ltimes C_{79} = \langle a,b: a^4=b^{79}=1, ba=ab^{-1} \rangle$, and (b) $K_4 \times C_{79}$, $C_2 \times D_{2\cdot 79}$.
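The claim about $\operatorname{Aut}(C_{79})$ is easy to verify computationally (my check): in $(\mathbb{Z}/79\mathbb{Z})^{*}$, cyclic of order $78$, the only elements of order dividing $4$ are $\pm 1$, since $4 \nmid 78$ and $-1$ is not a square mod $79$.

```python
# Automorphisms of C_79 are x -> x^a for a in (Z/79Z)^*; those of order
# dividing 4 satisfy a^4 = 1 (mod 79).
p = 79
order_div_4 = [a for a in range(1, p) if pow(a, 4, p) == 1]
assert order_div_4 == [1, p - 1]  # only the identity and x -> x^{-1}
```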
Generalization: This also works for $24k+20$. However, $24k+12$ (the last of the $8k+4$ cases) presents some added difficulty, as $A_5$ of order $60 = 24(2) +12$ demonstrates. This difficulty comes from the 3. Without it, the Sylow 2-subgroup cannot be affected by the rest of the group, and so “the rest of the group” becomes a normal subgroup of order $6k\pm1$, so we just get a semi-direct product. The $24k+12$ case has been settled as well, but the proof by Gorenstein–Walter is a few hundred pages long and involves some very interesting mathematics.
Even crazier is that we can handle some $32k+16$. If the $16$, I mean the Sylow $2$-subgroup, is $C_{16}$ or $C_4 \times C_4$, then handling the 3 is easy (there is no “$A_5$” case), and nearly the same proposition holds! This is a shorter, earlier result of Brauer, and it was one of the earlier uses of modular character theory.
Infeasible-start primal-dual methods and infeasibility detectors for nonlinear programming problems
Yurii Nesterov, Yinyu Ye, and Michael Todd
In this paper we present several ``infeasible-start'' path-following and potential-reduction primal-dual interior-point methods for nonlinear conic problems. These methods try to find a recession direction of the feasible set of a self-dual homogeneous primal-dual problem. The methods under consideration generate an $\epsilon$-solution for an $\epsilon$-perturbation of an initial strictly (primal and dual) feasible problem in $O(\sqrt{\nu} \ln {\nu \over \epsilon \rho_f})$ iterations, where $\nu$ is the parameter of a self-concordant barrier for the cone, $\epsilon$ is a relative accuracy and $\rho_f$ is a feasibility measure.
We also discuss the behavior of path-following methods as applied to infeasible problems. We prove that strict infeasibility (primal or dual) can be detected in $O(\sqrt{\nu} \ln {\nu \over \rho_{\cdot}})$ iterations, where $\rho_{\cdot}$ is a primal or dual infeasibility measure.
Contact: [email protected]
Technical Report No. 1156, School of Operations Research and Industrial Engineering, Cornell University, Ithaca, NY 14853-3801.
Let $A\in \mathbb{R}^{n\times n}$ symmetric and positive semidefinite, and $\omega\in \mathbb{R}\setminus\{0\}$. I am interested in solving the following linear system for a range of values of $\omega$:
$$((A-\omega^2 I)(A-\omega^2 I)+\omega^2 I)x = b.$$ It may be useful to note that the matrix factors as $$ (A-(\omega^2-i\omega)I)(A-(\omega^2+i\omega)I), $$ where $i^2 = -1$.
Details: $A$ is sparse and I won't have direct access to its entries. The dimension of the null space of $A$ is a non-negligible fraction of $n$. The dimension of the problem, $n$, will be as big as the computer's RAM will allow.
What is a good way to preprocess / precondition this system? Note that the RHS, $b$, will change when $\omega$ changes.
Notes: This is a follow up question to this one. The idea of the proposed solution to that question shows that if we could perform a complete eigendecomposition of $A$, we would have a pretty much ideal preprocess. I have implemented a Lanczos iteration to approximate this eigendecomposition, but it doesn't perform as well as I had hoped. I can explain this idea in more detail as an addendum if there is interest.
Of course full answers are appreciated, but they are not expected. I am mainly looking for ideas to investigate. Any comments and pointers to the literature are much appreciated.
Note to mods: Is this kind of question acceptable? I can change it to something more definite if asking for ideas is unacceptable.
Edit
This is what I plan on doing. First note that as $\omega\to \infty$ the matrix starts looking like $I(\omega^4+\omega^2)$, so we are mainly interested in when $\omega$ is comparable to the norm of $A$ and smaller.
To that end, we compute $r$ eigenpairs of $A$, $(\lambda_i,q_i)\in \mathbb{R}\times \mathbb{R}^{n}$, with the largest eigenvalues. Then, since these eigenvectors can be made orthonormal, we have $$ x= \sum_{i=1}^r \alpha_i q_i + \sum_{i={r+1}}^n \alpha_i q_i. $$
Now, taking the dot product of both sides of the equation with $q_i$ for $1\le i\le r$, we get
$$ \alpha_i = \left\langle q_i,b \right\rangle \frac{1}{(\lambda_i - \omega^2)(\lambda_i - \omega^2) + \omega^2}. $$
I plan on using this information to construct an initial guess for $x$. I am still unsure of what preconditioner to use.
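To make the plan concrete, here is a small dense sketch (my construction, using NumPy) showing that with a full eigendecomposition the $\alpha_i$ formula solves the system exactly; truncating to the top-$r$ pairs gives the intended initial guess:

```python
import numpy as np

rng = np.random.default_rng(0)
n, w = 50, 1.3
B = rng.standard_normal((n, n))
A = B @ B.T                  # symmetric positive semidefinite test matrix
b = rng.standard_normal(n)

# Full eigendecomposition A = Q diag(lam) Q^T (in practice: Lanczos, top r only).
lam, Q = np.linalg.eigh(A)
alpha = (Q.T @ b) / ((lam - w**2) ** 2 + w**2)  # the alpha_i formula
x = Q @ alpha

M = (A - w**2 * np.eye(n)) @ (A - w**2 * np.eye(n)) + w**2 * np.eye(n)
assert np.allclose(M @ x, b)

# Truncated version: initial guess from the r largest eigenpairs.
r = 10
x0 = Q[:, -r:] @ alpha[-r:]
```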
Every functor $\mathbf{Set} \rightarrow \mathbf{Set}$ I can think of preserves monomorphisms (i.e. injective functions), including:
$\mathrm{Hom}(X,-)$, $X \times -$, $X \sqcup -$, and the constant functors.
The monads I can think of all have this property, too.
What are some natural examples that don't?
A general class of counterexamples is given by Exercises II.3.9 and II.3.10 of [Conway]; see references below.
Exercise II.3.9. Let $A \in \mathscr B(\mathscr H)$ and $\mathscr N = \operatorname{graph}(A) \subseteq \mathscr H \oplus \mathscr H$, that is, $\mathscr N = \{h \oplus Ah \, : \, h\in\mathscr H\}$. Because $A$ is continuous and linear, $\mathscr N \leqslant \mathscr H \oplus \mathscr H$. Let $\mathscr M = \mathscr H \oplus (0) \leqslant \mathscr H \oplus \mathscr H$. Prove the following statements: (a) $\mathscr M \cap \mathscr N = (0)$ if and only if $\ker(A) = (0)$. (b) $\mathscr M + \mathscr N$ is dense in $\mathscr H \oplus \mathscr H$ if and only if $\operatorname{ran}(A)$ is dense in $\mathscr H$. (c) $\mathscr M + \mathscr N = \mathscr H \oplus \mathscr H$ if and only if $A$ is surjective.
(Here the notation $V \leqslant W$ means that $V$ is a closed linear subspace of $W$.)
The proofs of the statements in the preceding exercise are straightforward. This exercise is followed, suggestively, by the following.
Exercise II.3.10. Find two closed linear subspaces $\mathscr M,\mathscr N$ of an infinite-dimensional Hilbert space $\mathscr H$ such that $\mathscr M \cap \mathscr N = (0)$ and $\mathscr M + \mathscr N$ is dense in $\mathscr H$, but $\mathscr M + \mathscr N \neq \mathscr H$.
Of course, the solution is to give an example of a Hilbert space $\mathscr H$ and an operator $A \in \mathscr B(\mathscr H)$ with $\ker(A) = (0)$ such that $\operatorname{ran}(A)$ is dense in $\mathscr H$, but $\operatorname{ran}(A) \neq \mathscr H$. Then choose $\mathscr M$ and $\mathscr N$ as in Exercise II.3.9, and the result follows.
A clear example of such an operator is the operator $A : \ell^2 \to \ell^2$ given by $e_n \mapsto \frac{1}{n + 1}e_n$, where $\{e_0,e_1,\ldots\} \subseteq \ell^2$ denotes the standard orthonormal basis. This choice of $\mathscr H$ and $A$ gives rise to a counterexample similar to the one given by Robert Israel.
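As a quick numerical illustration of why this range is dense but not closed (my own sketch, not part of Conway's text): the vector $b = (1, \tfrac12, \tfrac13, \ldots)$ is in $\ell^2$, but its only candidate preimage under $A$ has every coordinate equal to $1$ and so is not in $\ell^2$.

```python
import numpy as np

# A : e_n -> e_n / (n + 1).  The target b_n = 1/(n+1) is square-summable,
# but the coordinatewise solution x_n = (n+1) * b_n = 1 is not: the
# truncated solutions have norm sqrt(N), which diverges as N grows.
for N in (10, 100, 10000):
    n = np.arange(N)
    bN = 1.0 / (n + 1)
    xN = (n + 1) * bN                 # solves the truncated system exactly
    print(N, np.linalg.norm(xN))      # sqrt(N): grows without bound
```

So $b$ is approximated arbitrarily well by elements of $\operatorname{ran}(A)$ (truncate it), yet is not itself in the range.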
Other, more complicated examples of this form exist as well, as t.b. shows us. (While his answer to the present question gets a little carried away with specifics, the more general construction is mentioned in his answer to another question.)
References:
[Conway]: John B. Conway, A Course in Functional Analysis (1985), Springer Graduate Texts in Mathematics 96.
There is a classic problem:
Suppose that $X_1,\ldots,X_n$ form an i.i.d. sample from a uniform distribution on the interval $(0,\theta)$, where $\theta>0$ is unknown. I would like to find the MLE of $\theta$.
The pdf of each observation will have the form: $$ f(x\mid\theta) = \begin{cases} 1/\theta\quad&\text{for }\, 0\leq x\leq \theta\\ 0 &\text{otherwise}. \end{cases} $$ The likelihood function therefore has the form: $$ L(\theta) = \begin{cases} 1/\theta^n \quad&\text{for }\; 0\leq x_i \leq \theta\;\; \text{for all }i,\\ 0 &\text{otherwise}. \end{cases} $$ The general solution is usually that the MLE of $\theta$ must be a value of $\theta$ for which $\theta \geq x_i$ for all $i$ and which maximizes $1/\theta^n$ among all such values.
The reasoning is that since $1/\theta^n$ is a decreasing function of $\theta$, the estimate will be the smallest possible value of $\theta$ such that $\theta\geq x_i$ for all $i$.
Therefore, the mle of $\theta$, $\hat{\theta}$, is $\max(X_1,\ldots,X_n)$.
Here is what I do not understand: why can we not just differentiate the likelihood function with respect to $\theta$, set the derivative equal to $0$, and solve?
Thanks! |
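A quick simulation (my own illustration, not part of the original question) shows the derived MLE in action: the likelihood is maximized at the sample maximum, which sits just below the true $\theta$.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 2.0
x = rng.uniform(0, theta, size=1000)

# Any candidate theta < max(x) has likelihood 0, and 1/theta^n is
# decreasing, so the likelihood is maximized exactly at max(x).
theta_hat = x.max()
print(theta_hat)   # slightly below 2.0; E[max] = theta * n/(n+1)
```

This also illustrates why calculus fails here: $L(\theta)$ is discontinuous at $\theta = \max(x_i)$ and strictly decreasing to its right, so the maximum is at a boundary point where the derivative condition $L'(\theta)=0$ never holds.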
Higher harmonic flow coefficients of identified hadrons in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2016-09)
The elliptic, triangular, quadrangular and pentagonal anisotropic flow coefficients for $\pi^{\pm}$, $\mathrm{K}^{\pm}$ and p+$\overline{\mathrm{p}}$ in Pb-Pb collisions at $\sqrt{s_\mathrm{{NN}}} = 2.76$ TeV were measured ... |
Here are some details that are in the spirit of Tarski's work:
Let $M$ be a system of magnitudes and select any element in the carrier set and call it $1$, so that the set $M$ is a pointed set and the object of study becomes $(M,1,+)$. We also have an injective morphism
$\tag 1 \iota: \mathbb N^> = \mathbb N \setminus \{0\} \to M \text{ such that } 1 \mapsto 1$
so we can view the image of the imbedding as an inclusion, $\mathbb N^> \subset M$.
It is not difficult to show that for any $x \in M$ there exists a unique element $H(x)$ such that $H(x)+H(x)=x$.
For any $x \in M$ there exists an $N_x \ge 0$ such that for all $n \ge N_x$ the equations $m H^n(1) + u = x$ with $m \gt 0$ have solutions ('$m \; \text{times}$' is shorthand for repeated addition), so we can take the maximum $m_{(x,n)}$ and define sets $\{m_{(x,n)}H^{n}(1)\}$.
For $s, t \in M$, we can create a set
$\tag 2 X_{(s,t)} = \{\; (m_{(s,n)} \times m_{(t,n)})\, H^{(n+n)}(1) \;\}$
and set $Y_{(s,t)} = \{ m \in M \; | \; (\forall x \in X_{(s,t)}) (\exists u \in M) \,[x + u = m]\}$.
Invoking $\text{P-5}$ (see above link) we can get a $z_{(s,t)} \in M$ that separates $X_{(s,t)}$ and $Y_{(s,t)}$. This element is clearly in $Y_{(s,t)}$ and is therefore unique.
We state the following two theorems without proof.
Theorem 1: The mapping $(s,t) \mapsto z_{(s,t)}$ is a commutative operation that distributes over addition in $(M,1,+)$. Moreover, $1$ is a multiplicative identity in $(M,1,+,*)$.
Theorem 2: Every element $x \in (M,1,+,*)$ has a multiplicative inverse. |
I have
$$\begin{pmatrix} 0&B_3&-B_2 \\ -B_3&0&B_1 \\ B_2&-B_1&0 \end{pmatrix}\begin{pmatrix} \omega_1 \\ \omega_2 \\ \omega_3 \end{pmatrix}=\begin{pmatrix} \Delta_1 \\ \Delta_2 \\ \Delta_3 \end{pmatrix}$$
With $\Delta_1 B_1 + \Delta_2 B_2 + \Delta_3 B_3 = 0$. Because of the constraint, the system clearly has solutions, up to the null space of the $B$ matrix, for every possible set of values $\Delta$ that satisfies the constraint.
However, I'm not sure what is the best way to incorporate the constraint into LinearSolve so that it will be able to find the whole set of solutions.
I tried replacing one of the $\Delta$ in terms of the others, but no matter which $\Delta$ I choose to replace, I keep getting solutions where $\omega_3$ is zero. This seems weird, since nothing in the problem makes a special distinction for the last coordinate. I've noticed that RowReduce would turn the matrix into
$$\begin{pmatrix} 1&0&-B_1/B_3 \\ 0&1&-B_2/B_3 \\ 0&0&0 \end{pmatrix}$$
In any case, if I replace $\Delta_1$ with the constraint equation, I get a solution
$$\begin{pmatrix} -\Delta_2 /B_3 \\ -B_2 \Delta_2/B_1 B_3 - \Delta_3/B_1 \\ 0 \end{pmatrix} $$
Intuitively, the plane orthogonal to $B$ as a vector contains the elements of $\Delta$ and also a set of solutions to the inhomogeneous system of equations, which can be extended with NullSpace in order to construct the whole set of solutions. But I wonder why these solutions are constrained to the $\omega_3 = 0$ subspace in the first place.
Code that manifests the issue:
LinearSolve[{{0,B3, -B2},{-B3,0,B1},{B2, -B1,0}},{-d2*(B2/B1)-d3*(B3/B1),d2,d3}]
Also
LinearSolve[{{0,B3, -B2},{-B3,0,B1},{B2, -B1,0}},{d1,-d1*(B1/B2)-d3*(B3/B2),d3}] |
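The same structure can be seen numerically: a solver returns one particular solution of the singular system, and the general solution is that particular solution plus any multiple of $B$, since the null space of the cross-product matrix is spanned by $B$ itself. A NumPy sketch of this (my own illustration, with arbitrary numeric values):

```python
import numpy as np

B1, B2, B3 = 1.0, 2.0, 3.0
M = np.array([[0, B3, -B2],
              [-B3, 0, B1],
              [B2, -B1, 0]])
B = np.array([B1, B2, B3])
d = np.cross(B, np.array([1.0, 0.0, 0.0]))   # any Delta orthogonal to B

# Least squares picks one particular solution of the singular system;
# the general solution is w_part + t*B, because M @ B == 0.
w_part, *_ = np.linalg.lstsq(M, d, rcond=None)
print(np.allclose(M @ w_part, d), np.allclose(M @ B, 0))  # True True
```

So the $\omega_3 = 0$ answers are not a restriction of the solution set, just the particular representative the solver happens to pick; adding $t\,(B_1,B_2,B_3)$ recovers the full family.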
Recent developments of the CERN RD50 collaboration / Menichelli, David (U. Florence (main) ; INFN, Florence)/CERN RD50 The objective of the RD50 collaboration is to develop radiation hard semiconductor detectors for very high luminosity colliders, particularly to face the requirements of the possible upgrade of the large hadron collider (LHC) at CERN. Some of the RD50 most recent results about silicon detectors are reported in this paper, with special reference to: (i) the progresses in the characterization of lattice defects responsible for carrier trapping; (ii) charge collection efficiency of n-in-p microstrip detectors, irradiated with neutrons, as measured with different readout electronics; (iii) charge collection efficiency of single-type column 3D detectors, after proton and neutron irradiations, including position-sensitive measurement; (iv) simulations of irradiated double-sided and full-3D detectors, as well as the state of their production process. 2008 - 5 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 596 (2008) 48-52 In : 8th International Conference on Large Scale Applications and Radiation Hardness of Semiconductor Detectors, Florence, Italy, 27 - 29 Jun 2007, pp.48-52
Performance of irradiated bulk SiC detectors / Cunningham, W (Glasgow U.) ; Melone, J (Glasgow U.) ; Horn, M (Glasgow U.) ; Kazukauskas, V (Vilnius U.) ; Roy, P (Glasgow U.) ; Doherty, F (Glasgow U.) ; Glaser, M (CERN) ; Vaitkus, J (Vilnius U.) ; Rahman, M (Glasgow U.)/CERN RD50 Silicon carbide (SiC) is a wide bandgap material with many excellent properties for future use as a detector medium. We present here the performance of irradiated planar detector diodes made from 100-$\mu \rm{m}$-thick semi-insulating SiC from Cree. [...] 2003 - 5 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 509 (2003) 127-131 In : 4th International Workshop on Radiation Imaging Detectors, Amsterdam, The Netherlands, 8 - 12 Sep 2002, pp.127-131
Measurements and simulations of charge collection efficiency of p$^+$/n junction SiC detectors / Moscatelli, F (IMM, Bologna ; U. Perugia (main) ; INFN, Perugia) ; Scorzoni, A (U. Perugia (main) ; INFN, Perugia ; IMM, Bologna) ; Poggi, A (Perugia U.) ; Bruzzi, M (Florence U.) ; Lagomarsino, S (Florence U.) ; Mersi, S (Florence U.) ; Sciortino, Silvio (Florence U.) ; Nipoti, R (IMM, Bologna) Due to its excellent electrical and physical properties, silicon carbide can represent a good alternative to Si in applications like the inner tracking detectors of particle physics experiments (RD50, LHCC 2002–2003, 15 February 2002, CERN, Ginevra). In this work p$^+$/n SiC diodes realised on a medium-doped ($1 \times 10^{15} \rm{cm}^{−3}$), 40 $\mu \rm{m}$ thick epitaxial layer are exploited as detectors and measurements of their charge collection properties under $\beta$ particle radiation from a $^{90}$Sr source are presented. [...] 2005 - 4 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 546 (2005) 218-221 In : 6th International Workshop on Radiation Imaging Detectors, Glasgow, UK, 25-29 Jul 2004, pp.218-221
Measurement of trapping time constants in proton-irradiated silicon pad detectors / Krasel, O (Dortmund U.) ; Gossling, C (Dortmund U.) ; Klingenberg, R (Dortmund U.) ; Rajek, S (Dortmund U.) ; Wunstorf, R (Dortmund U.) Silicon pad-detectors fabricated from oxygenated silicon were irradiated with 24-GeV/c protons with fluences between $2 \cdot 10^{13} \ n_{\rm{eq}}/\rm{cm}^2$ and $9 \cdot 10^{14} \ n_{\rm{eq}}/\rm{cm}^2$. The transient current technique was used to measure the trapping probability for holes and electrons. [...] 2004 - 8 p. - Published in : IEEE Trans. Nucl. Sci. 51 (2004) 3055-3062 In : 50th IEEE 2003 Nuclear Science Symposium, Medical Imaging Conference, 13th International Workshop on Room Temperature Semiconductor Detectors and Symposium on Nuclear Power Systems, Portland, OR, USA, 19 - 25 Oct 2003, pp.3055-3062
Lithium ion irradiation effects on epitaxial silicon detectors / Candelori, A (INFN, Padua ; Padua U.) ; Bisello, D (INFN, Padua ; Padua U.) ; Rando, R (INFN, Padua ; Padua U.) ; Schramm, A (Hamburg U., Inst. Exp. Phys. II) ; Contarato, D (Hamburg U., Inst. Exp. Phys. II) ; Fretwurst, E (Hamburg U., Inst. Exp. Phys. II) ; Lindstrom, G (Hamburg U., Inst. Exp. Phys. II) ; Wyss, J (Cassino U. ; INFN, Pisa) Diodes manufactured on a thin and highly doped epitaxial silicon layer grown on a Czochralski silicon substrate have been irradiated by high energy lithium ions in order to investigate the effects of high bulk damage levels. This information is useful for possible developments of pixel detectors in future very high luminosity colliders because these new devices present superior radiation hardness than nowadays silicon detectors. [...] 2004 - 7 p. - Published in : IEEE Trans. Nucl. Sci. 51 (2004) 1766-1772 In : 13th IEEE-NPSS Real Time Conference 2003, Montreal, Canada, 18 - 23 May 2003, pp.1766-1772
Radiation hardness of different silicon materials after high-energy electron irradiation / Dittongo, S (Trieste U. ; INFN, Trieste) ; Bosisio, L (Trieste U. ; INFN, Trieste) ; Ciacchi, M (Trieste U.) ; Contarato, D (Hamburg U., Inst. Exp. Phys. II) ; D'Auria, G (Sincrotrone Trieste) ; Fretwurst, E (Hamburg U., Inst. Exp. Phys. II) ; Lindstrom, G (Hamburg U., Inst. Exp. Phys. II) The radiation hardness of diodes fabricated on standard and diffusion-oxygenated float-zone, Czochralski and epitaxial silicon substrates has been compared after irradiation with 900 MeV electrons up to a fluence of $2.1 \times 10^{15} \ \rm{e} / cm^2$. The variation of the effective dopant concentration, the current related damage constant $\alpha$ and their annealing behavior, as well as the charge collection efficiency of the irradiated devices have been investigated. 2004 - 7 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 530 (2004) 110-116 In : 6th International Conference on Large Scale Applications and Radiation Hardness of Semiconductor Detectors, Florence, Italy, 29 Sep - 1 Oct 2003, pp.110-116
Recovery of charge collection in heavily irradiated silicon diodes with continuous hole injection / Cindro, V (Stefan Inst., Ljubljana) ; Mandić, I (Stefan Inst., Ljubljana) ; Kramberger, G (Stefan Inst., Ljubljana) ; Mikuž, M (Stefan Inst., Ljubljana ; Ljubljana U.) ; Zavrtanik, M (Ljubljana U.) Holes were continuously injected into irradiated diodes by light illumination of the n$^+$-side. The charge of holes trapped in the radiation-induced levels modified the effective space charge. [...] 2004 - 3 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 518 (2004) 343-345 In : 9th Pisa Meeting on Advanced Detectors, La Biodola, Italy, 25 - 31 May 2003, pp.343-345
First results on charge collection efficiency of heavily irradiated microstrip sensors fabricated on oxygenated p-type silicon / Casse, G (Liverpool U.) ; Allport, P P (Liverpool U.) ; Martí i Garcia, S (CSIC, Catalunya) ; Lozano, M (Barcelona, Inst. Microelectron.) ; Turner, P R (Liverpool U.) Heavy hadron irradiation leads to type inversion of n-type silicon detectors. After type inversion, the charge collected at low bias voltages by silicon microstrip detectors is higher when read out from the n-side compared to p-side read out. [...] 2004 - 3 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 518 (2004) 340-342 In : 9th Pisa Meeting on Advanced Detectors, La Biodola, Italy, 25 - 31 May 2003, pp.340-342
Formation and annealing of boron-oxygen defects in irradiated silicon and silicon-germanium n$^+$–p structures / Makarenko, L F (Belarus State U.) ; Lastovskii, S B (Minsk, Inst. Phys.) ; Korshunov, F P (Minsk, Inst. Phys.) ; Moll, M (CERN) ; Pintilie, I (Bucharest, Nat. Inst. Mat. Sci.) ; Abrosimov, N V (Unlisted, DE) New findings on the formation and annealing of interstitial boron-interstitial oxygen complex ($\rm{B_iO_i}$) in p-type silicon are presented. Different types of n+−p structures irradiated with electrons and alpha-particles have been used for DLTS and MCTS studies. [...] 2015 - 4 p. - Published in : AIP Conf. Proc. 1583 (2015) 123-126
Astrophysics > Astrophysics of Galaxies
Title: The MASSIVE Survey - X. Misalignment between Kinematic and Photometric Axes and Intrinsic Shapes of Massive Early-Type Galaxies
(Submitted on 31 Jan 2018 (v1), last revised 21 Jun 2018 (this version, v2))
Abstract: We use spatially resolved two-dimensional stellar velocity maps over a $107"\times 107"$ field of view to investigate the kinematic features of 90 early-type galaxies above stellar mass $10^{11.5}M_\odot$ in the MASSIVE survey. We measure the misalignment angle $\Psi$ between the kinematic and photometric axes and identify local features such as velocity twists and kinematically distinct components. We find 46% of the sample to be well aligned ($\Psi < 15^{\circ}$), 33% misaligned, and 21% without detectable rotation (non-rotators). Only 24% of the sample are fast rotators, the majority of which (91%) are aligned, whereas 57% of the slow rotators are misaligned with a nearly flat distribution of $\Psi$ from $15^{\circ}$ to $90^{\circ}$. 11 galaxies have $\Psi \gtrsim 60^{\circ}$ and thus exhibit minor-axis ("prolate") rotation in which the rotation is preferentially around the photometric major axis. Kinematic misalignments occur more frequently for lower galaxy spin or denser galaxy environments. Using the observed misalignment and ellipticity distributions, we infer the intrinsic shape distribution of our sample and find that MASSIVE slow rotators are consistent with being mildly triaxial, with mean axis ratios of $b/a=0.88$ and $c/a=0.65$. In terms of local kinematic features, 51% of the sample exhibit kinematic twists of larger than $20^{\circ}$, and 2 galaxies have kinematically distinct components. The frequency of misalignment and the broad distribution of $\Psi$ reported here suggest that the most massive early-type galaxies are mildly triaxial, and that formation processes resulting in kinematically misaligned slow rotators such as gas-poor mergers occur frequently in this mass range.
Submission history: From Irina Ene. [v1] Wed, 31 Jan 2018 19:00:21 GMT (2305kb,D); [v2] Thu, 21 Jun 2018 05:02:24 GMT (2460kb,D)
Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV
(Springer, 2015-09)
We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ...
Centrality dependence of particle production in p-Pb collisions at $\sqrt{s_{\rm NN} }$= 5.02 TeV
(American Physical Society, 2015-06)
We report measurements of the primary charged particle pseudorapidity density and transverse momentum distributions in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV, and investigate their correlation with experimental ...
Measurement of jet quenching with semi-inclusive hadron-jet distributions in central Pb-Pb collisions at ${\sqrt{\bf{s}_{\mathrm {\bf{NN}}}}}$ = 2.76 TeV
(Springer, 2015-09)
We report the measurement of a new observable of jet quenching in central Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV, based on the semi-inclusive rate of charged jets recoiling from a high transverse momentum ...
Centrality dependence of inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(Springer, 2015-11)
We present a measurement of inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV as a function of the centrality of the collision, as estimated from the energy deposited in the Zero Degree ...
One-dimensional pion, kaon, and proton femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm {NN}}}$ =2.76 TeV
(American Physical Society, 2015-11)
The size of the particle emission region in high-energy collisions can be deduced using the femtoscopic correlations of particle pairs at low relative momentum. Such correlations arise due to quantum statistics and Coulomb ...
Measurement of jet suppression in central Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2015-06)
The transverse momentum ($p_{\rm T}$) spectrum and nuclear modification factor ($R_{\rm AA}$) of reconstructed jets in 0-10% and 10-30% central Pb-Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV were measured. Jets were ...
Coherent $\psi(2S)$ photo-production in ultra-peripheral Pb-Pb collisions at $\sqrt{s_{\rm NN}}$= 2.76 TeV
(Elsevier, 2015-12)
The ALICE Collaboration has performed the first measurement of the coherent $\psi(2S)$ photo-production cross section in ultra-peripheral Pb-Pb collisions at the LHC. This charmonium excited state is reconstructed via the ...
Production of inclusive $\Upsilon$(1S) and $\Upsilon$(2S) in p-Pb collisions at $\mathbf{\sqrt{s_{{\rm NN}}} = 5.02}$ TeV
(Elsevier, 2015-01)
We report on the production of inclusive $\Upsilon$(1S) and $\Upsilon$(2S) in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV at the LHC. The measurement is performed with the ALICE detector at backward ($-4.46< y_{{\rm ...
Measurement of dijet $k_T$ in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV
(Elsevier, 2015-06)
A measurement of dijet correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector is presented. Jets are reconstructed from charged particles measured in the central tracking detectors and ...
Measurement of charged jet production cross sections and nuclear modification in p-Pb collisions at $\sqrt{s_\rm{NN}} = 5.02$ TeV
(Elsevier, 2015-10)
Charged jet production cross sections in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV measured with the ALICE detector at the LHC are presented. Using the anti-$k_{\rm T}$ algorithm, jets have been reconstructed in ... |
Morris (2008) discusses various ways for computing a (standardized) effect size measure for pretest posttest control group designs, where the characteristic, response, or dependent variable assessed in the individual studies is a quantitative variable.
As described by Becker (1988), we can compute the standardized mean change (with raw score standardization) for a treatment and control group with $$g_T = c(n_T-1) \frac{\bar{x}_{post,T} - \bar{x}_{pre,T}}{SD_{pre,T}}$$ and $$g_C = c(n_C-1) \frac{\bar{x}_{post,C} - \bar{x}_{pre,C}}{SD_{pre,C}},$$ where $\bar{x}_{pre,T}$ and $\bar{x}_{post,T}$ are the treatment group pretest and posttest means, $SD_{pre,T}$ is the standard deviation of the pretest scores, $c(m) = \sqrt{2/m} \Gamma[m/2] / \Gamma[(m-1)/2]$ is a bias-correction factor, $n_T$ is the size of the treatment group, and $\bar{x}_{pre,C}$, $\bar{x}_{post,C}$, $SD_{pre,C}$, and $n_C$ are the analogous values for the control group. Then the difference between the two standardized mean change values, namely $$g = g_T - g_C,$$ indicates how much larger (or smaller) the change in the treatment group was (in standard deviation units) when compared to the change in the control group. Values of $g$ computed for a number of studies could then be meta-analyzed with standard methods.
Morris (2008) uses five studies from a meta-analysis on training effectiveness by Carlson and Schmidt (1999) to illustrate these computations. We can create the same dataset with:
datT <- data.frame(
  m_pre   = c(30.6, 23.5, 0.5, 53.4, 35.6),
  m_post  = c(38.5, 26.8, 0.7, 75.9, 36.0),
  sd_pre  = c(15.0, 3.1, 0.1, 14.5, 4.7),
  sd_post = c(11.6, 4.1, 0.1, 4.4, 4.6),
  ni      = c(20, 50, 9, 10, 14),
  ri      = c(0.47, 0.64, 0.77, 0.89, 0.44))
and
datC <- data.frame(
  m_pre   = c(23.1, 24.9, 0.6, 55.7, 34.8),
  m_post  = c(19.7, 25.3, 0.6, 60.7, 33.4),
  sd_pre  = c(13.8, 4.1, 0.2, 17.3, 3.1),
  sd_post = c(14.8, 3.3, 0.2, 17.9, 6.9),
  ni      = c(20, 42, 9, 11, 14),
  ri      = c(0.47, 0.64, 0.77, 0.89, 0.44))
The contents of datT and datC are then:
  m_pre m_post sd_pre sd_post ni   ri
1  30.6   38.5   15.0    11.6 20 0.47
2  23.5   26.8    3.1     4.1 50 0.64
3   0.5    0.7    0.1     0.1  9 0.77
4  53.4   75.9   14.5     4.4 10 0.89
5  35.6   36.0    4.7     4.6 14 0.44
and
  m_pre m_post sd_pre sd_post ni   ri
1  23.1   19.7   13.8    14.8 20 0.47
2  24.9   25.3    4.1     3.3 42 0.64
3   0.6    0.6    0.2     0.2  9 0.77
4  55.7   60.7   17.3    17.9 11 0.89
5  34.8   33.4    3.1     6.9 14 0.44
After loading the metafor package with library(metafor), the standardized mean change within each group can be computed with:
datT <- escalc(measure="SMCR", m1i=m_post, m2i=m_pre, sd1i=sd_pre, ni=ni, ri=ri, data=datT)
datC <- escalc(measure="SMCR", m1i=m_post, m2i=m_pre, sd1i=sd_pre, ni=ni, ri=ri, data=datC)
Now the contents of datT and datC are:
  m_pre m_post sd_pre sd_post ni   ri     yi     vi
1  30.6   38.5   15.0    11.6 20 0.47 0.5056 0.0594
2  23.5   26.8    3.1     4.1 50 0.64 1.0481 0.0254
3   0.5    0.7    0.1     0.1  9 0.77 1.8054 0.2322
4  53.4   75.9   14.5     4.4 10 0.89 1.4181 0.1225
5  35.6   36.0    4.7     4.6 14 0.44 0.0801 0.0802
and
  m_pre m_post sd_pre sd_post ni   ri      yi     vi
1  23.1   19.7   13.8    14.8 20 0.47 -0.2365 0.0544
2  24.9   25.3    4.1     3.3 42 0.64  0.0958 0.0173
3   0.6    0.6    0.2     0.2  9 0.77  0.0000 0.0511
4  55.7   60.7   17.3    17.9 11 0.89  0.2667 0.0232
5  34.8   33.4    3.1     6.9 14 0.44 -0.4250 0.0864
The standardized mean change values are given in the yi columns. Note that internally, the escalc() function computes m1i-m2i, so the argument m1i should be set equal to the posttest means and m2i to the pretest means if one wants to compute the standardized mean change in the way described above. The sampling variances (the values in the vi columns) are computed based on equation 13 in Becker (1988).
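For readers without R at hand, these numbers can also be reproduced directly from the formulas above. The sketch below uses $v = 2(1-r)/n + g^2/(2n)$ for the sampling variance, which is my reading of Becker's equation 13 as implemented in escalc(); it reproduces yi and vi for study 1 of the treatment group.

```python
import math

def cm(m):
    # Bias-correction factor c(m) = sqrt(2/m) * Gamma(m/2) / Gamma((m-1)/2)
    return math.sqrt(2 / m) * math.gamma(m / 2) / math.gamma((m - 1) / 2)

# Study 1, treatment group: m_pre=30.6, m_post=38.5, sd_pre=15.0, n=20, r=0.47
n, r = 20, 0.47
g = cm(n - 1) * (38.5 - 30.6) / 15.0      # standardized mean change
v = 2 * (1 - r) / n + g**2 / (2 * n)      # sampling variance
print(round(g, 4), round(v, 4))           # ~0.5056 ~0.0594, as in the table
```

The same two lines reproduce every row of the yi/vi columns when fed the corresponding means, SDs, $n$, and $r$.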
We can now compute the difference between the two standardized mean changes values for each study. In addition, since the treatment and control groups are independent, the corresponding sampling variances can be computed by adding up the sampling variances of the two groups:
dat <- data.frame(yi = datT$yi - datC$yi, vi = datT$vi + datC$vi)
round(dat, 2)

    yi   vi
1 0.74 0.11
2 0.95 0.04
3 1.81 0.28
4 1.15 0.15
5 0.51 0.17
The yi values above are the exact same values given in Table 5 (under the $d_{ppc1}$ column) by Morris (2008).
Equation 16 in Morris (2008) is the exact sampling variance of $g$. To actually compute the sampling variance in practice, the unknown parameters in this equation must be replaced with their sample counterparts. As noted earlier, the escalc() function actually uses a slightly different method to estimate the sampling variance (based on equation 13 in Becker, 1988). Hence, the values above and the ones given in Table 5 (column $\hat{\sigma}^2(d_{ppc1})$ in Morris, 2008) differ slightly.
There are in fact dozens of ways in which the sampling variance for the standardized mean change can be estimated (see Viechtbauer, 2007, Tables 2 and 3 – and even that is not an exhaustive list). Hence, there are dozens of ways of estimating the sampling variance of $g$ above. Differences should only be relevant in small samples.
For the actual meta-analysis part, we simply pass the yi and vi values to the rma() function. For example, a fixed-effects model can be fitted with:

rma(yi, vi, data=dat, method="FE")

Fixed-Effects Model (k = 5)

Test for Heterogeneity:
Q(df = 4) = 4.43, p-val = 0.35

Model Results:

estimate      se    zval    pval   ci.lb   ci.ub
    0.95    0.14    6.62    <.01    0.67    1.23    ***

---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Note that these results are slightly different than the ones in Table 5 due to the different ways of estimating the sampling variances.
In his article, Morris (2008) discusses two other ways of computing an effect size measure for pretest posttest control group designs. The second approach that pools the two pretest SDs actually can be more efficient under certain conditions. However, that approach assumes that the true pretest SDs are equal for the two groups. That may not be the case. The approach given above does not make that assumption and therefore is more broadly applicable (but may be less efficient).
If you really want to use the approach with pooled pretest SDs, then this can be done as follows:
sd_pool <- sqrt((with(datT, (ni-1)*sd_pre^2) + with(datC, (ni-1)*sd_pre^2)) / (datT$ni + datC$ni - 2))
dat <- data.frame(yi = metafor:::.cmicalc(datT$ni + datC$ni - 2) * (with(datT, m_post - m_pre) - with(datC, m_post - m_pre)) / sd_pool)
dat$vi <- 2*(1-datT$ri) * (1/datT$ni + 1/datC$ni) + dat$yi^2 / (2*(datT$ni + datC$ni))
round(dat, 2)

    yi   vi
1 0.77 0.11
2 0.80 0.04
3 1.20 0.14
4 1.05 0.07
5 0.44 0.16
The yi values above are the exact same values given in Table 5 (under the $d_{ppc2}$ column) by Morris (2008). Note that the equation used for computing the sampling variances above is slightly different from the one used in the paper, so the values for 'vi' above and the ones given in Table 5 (column $\hat{\sigma}^2(d_{ppc2})$ in Morris, 2008) differ slightly.
The example above assumes that the pretest-posttest correlations (the values given under the ri column) are the same for the control and treatment groups. Ideally, those values should be coded separately for the two groups.
In practice, one is likely to encounter difficulties in actually obtaining those correlations from the information reported in the articles. In that case, one can substitute approximate values (e.g., based on known properties of the dependent variable being measured) and conduct a sensitivity analysis to ensure that the conclusions from the meta-analysis are unchanged when those correlations are varied.
Becker, B. J. (1988). Synthesizing standardized mean-change measures. British Journal of Mathematical and Statistical Psychology, 41(2), 257–278.
Carlson, K. D., & Schmidt, F. L. (1999). Impact of experimental design on effect size: Findings from the research literature on training. Journal of Applied Psychology, 84(6), 851–862.
Morris, S. B. (2000). Distribution of the standardized mean change effect size for meta-analysis on repeated measures. British Journal of Mathematical and Statistical Psychology, 53(1), 17–29.
Morris, S. B. (2008). Estimating effect sizes from pretest-posttest-control group designs. Organizational Research Methods, 11(2), 364–386.
Viechtbauer, W. (2007). Approximate confidence intervals for standardized effect sizes in the two-independent and two-dependent samples design. Journal of Educational and Behavioral Statistics, 32(1), 39–60.
A very basic question. As in the title: what is the difference between "sort" and "universe" in type theory? Are they interchangeable? Or are there only a finite number of sorts, but infinitely many universes?
Sort is (typically, though see Pure Type Systems) a meta-level concept and universes are an internalization of a particular case of a sort. The second chapter of Bart Jacobs' thesis covers a fairly general case of how sorts interact with a language. I'll be roughly following that.
I'll use the terminology: $T$ is a type of sort $s$ if $T : s$, and $e$ is a term of type $T$ if $e : T$; so terms are always "grandchildren" of some sort, and types are always "children". $s$ is a sort if we can introduce variables of a type of sort $s$ into the context, i.e. $$\frac{\Gamma\vdash A : s}{\Gamma, x:A\vdash x:A}$$
If there were no dependencies between sorts, we could almost say a sort was an index into a collection of contexts, which leads to the notion of dependencies between sorts. We say $s_2$ depends on $s_1$ when types of sort $s_2$ can contain terms of sort $s_1$; i.e., if $\Gamma\vdash A:s_1$ and $\Gamma,x:A\vdash B:s_2$, then it can be the case that $x$ occurs free in $B$. Without the dependency it would necessarily be the case that $x$ was not free in $B$.
That's all that is required to be a sort. Many additional features can be specified, such as the existence of function types of sort $s$ built out of other types of sort $s$. One feature we can add (called axioms in Jacobs' thesis) is the fact that one sort is a type of another sort, i.e. $s_1 : s_2$. This internalizes the types of sort $s_1$ as terms of sort $s_2$. If we want to embed $s_1$ into $s_2$ so that terms of $s_1$ are terms of $s_2$, we can add what Jacobs calls an $(s_1,s_2)$-inclusion.
A universe is then a sort that is closed under all the operations in the language. A hierarchy of universes means that we further internalize the types of every universe and have an embedding of each universe into (at least) the next higher universe. Closure typically includes the following rule arising from dependent products: $$\frac{\Gamma\vdash A:\mathbb{U}_n\qquad\Gamma,x:A\vdash B:\mathbb{U}_n}{\Gamma\vdash(\Pi x\!:\!A.B) : \mathbb{U}_n}$$ For this to be interesting, $x$ has to occur free in $B$ meaning $\mathbb{U}_n$ depends on $\mathbb{U}_n$. Cycles of dependencies between sorts are characteristic of dependently typed languages.
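Lean makes this hierarchy concrete: each `Type u` is a universe, the axiom `Type u : Type (u+1)` internalizes it in the next one, and dependent products over a universe land back in that universe. A small Lean 4 sketch (illustrative, not from Jacobs' thesis):

```lean
#check (Nat : Type)        -- a type of sort `Type`
#check (Type : Type 1)     -- `Type` is itself a term of `Type 1`
#check (Type 1 : Type 2)   -- ... giving the hierarchy of universes

-- Dependent products stay inside a universe when domain and codomain do:
#check ((a : Nat) → Fin a → Nat : Type)
```

Here the last line is exactly the closure rule for dependent products, with `Fin a → Nat` playing the role of the $B$ that depends on `a : Nat`.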
So a (hierarchy of) universe(s) is a (hierarchy of) sort(s) with a variety of additional features many of which imply additional dependencies. For contrast, we can consider sorts (potentially in addition to a hierarchy of universes) that aren't universes (or at least aren't
the universe, i.e. we could have multiple universe hierarchies). One example is separating compile-time calculation from run-time calculation as in Higher-order ML. In this we'd have a sort that represents compile-time terms in addition to a sort for run-time terms and a dependency allowing run-time terms to depend on compile-time terms. Another example is Cloud Haskell's
Static type constructor. We can view it as part of a $(\square,*)$-inclusion where the sort $\square$ classifies "static" terms, i.e. terms that only depend on terms bound at the top level.
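The internalization and Π-closure features described above can be seen concretely in a proof assistant. A minimal sketch in Lean 4 notation (my own example, not from the original discussion):

```lean
universe u

-- internalization: each universe is a term of the next (𝕌ₙ : 𝕌ₙ₊₁)
#check (Type u : Type (u + 1))

-- closure under dependent products: if A : 𝕌ₙ and B : A → 𝕌ₙ,
-- then (x : A) → B x : 𝕌ₙ
example (A : Type u) (B : A → Type u) : Type u := (x : A) → B x
```

The `example` line is exactly the Π-closure rule: the universe level of the dependent product stays at `u` because both components live at `u`.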
I was inspired by Dietrich Vollrath's latest blog post to work out the generalization of the macro ensemble version of the information equilibrium condition [1] to more than one factor of production. However, as it was my lunch break, I didn't have time to LaTeX up all the steps so I'm just going to post the starting place and the result (for now).
We have two ensembles of information equilibrium relationships $A_{i} \rightleftarrows B$ and $A_{j} \rightleftarrows C$ (with two factors of production $B$ and $C$), and we generalize the partition function analogously to multiple thermodynamic potentials (see also here):
$$
Z = \sum_{i j} e^{-k_{i}^{(1)} \log B/B_{0} -k_{j}^{(2)} \log C/C_{0}}
$$
Playing the same game as worked out in [1], except with partial derivatives, you obtain:
$$
\begin{align}
\frac{\partial \langle A \rangle}{\partial B} = & \; \langle k^{(1)} \rangle \frac{\langle A \rangle}{B}\\
\frac{\partial \langle A \rangle}{\partial C} = & \; \langle k^{(2)} \rangle \frac{\langle A \rangle}{C}
\end{align}
$$
This is the same as before, except now the values of $k$ can change. If the $\langle k \rangle$ change slowly (i.e. can be treated as almost constant), the solution can be approximated by a Cobb-Douglas production function:
$$
\langle A \rangle = a \; B^{\langle k^{(1)} \rangle} C^{\langle k^{(2)} \rangle}
$$
And now you can read Vollrath's piece keeping in mind that using an ensemble of information equilibrium relationships implies $\beta$ (e.g. $\langle k^{(1)} \rangle$) can change and we aren't required to maintain $\langle k^{(1)} \rangle + \langle k^{(2)} \rangle = 1$.
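As a quick numerical sanity check (a sketch with illustrative constants, not values from any fit), the Cobb-Douglas form satisfies exactly the two partial-derivative relations above:

```python
# Check that A = a * B**k1 * C**k2 satisfies dA/dB = k1*A/B and dA/dC = k2*A/C.
# The constants a, k1, k2 are illustrative stand-ins for <k1>, <k2>.
a, k1, k2 = 2.0, 0.4, 0.7
B, C, h = 3.0, 5.0, 1e-6

A = lambda B, C: a * B**k1 * C**k2
dA_dB = (A(B + h, C) - A(B - h, C)) / (2 * h)  # central differences
dA_dC = (A(B, C + h) - A(B, C - h)) / (2 * h)

assert abs(dA_dB - k1 * A(B, C) / B) < 1e-6
assert abs(dA_dC - k2 * A(B, C) / C) < 1e-6
```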
Update 28 July 2017
I'm sure it was obvious to readers, but this generalizes to any number of factors of production using the partition function
$$
Z = \sum_{i_{n}} \exp \left( - \sum_{n} k_{i_{n}}^{(n)} \log B^{(n)}/B_{0}^{(n)} \right)
$$
where instead of $B$ and $C$ (or $D$), we'd have $B^{(1)}$ and $B^{(2)}$ (or $B^{(3)}$). You'd obtain:
$$
\frac{\partial \langle A \rangle}{\partial B^{(n)}} = \; \langle k^{(n)} \rangle \frac{\langle A \rangle}{B^{(n)}}
$$ |
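The generalized partition function is easy to play with numerically. A sketch (random illustrative $k$ values, three factors of production) verifying the identity $\partial \log Z/\partial B^{(1)} = -\langle k^{(1)}\rangle / B^{(1)}$ that underlies the result above:

```python
import numpy as np

# Partition function Z = sum over index combinations of
# exp(-sum_n k^{(n)} log(B^{(n)}/B0^{(n)})), with random k values.
rng = np.random.default_rng(0)
ks = [rng.uniform(0.2, 1.0, size=40) for _ in range(3)]  # k^{(n)}_{i_n}
B0 = np.ones(3)
grids = np.meshgrid(*ks, indexing="ij")

def log_Z(B):
    expo = -sum(g * np.log(B[n] / B0[n]) for n, g in enumerate(grids))
    return np.log(np.exp(expo).sum())

B = np.array([4.0, 2.0, 3.0])
w = np.exp(-sum(g * np.log(B[n] / B0[n]) for n, g in enumerate(grids)))
w /= w.sum()                          # ensemble weights
k1_avg = float((w * grids[0]).sum())  # <k^{(1)}>

h = 1e-5
dB = np.array([h, 0.0, 0.0])
numeric = (log_Z(B + dB) - log_Z(B - dB)) / (2 * h)
assert abs(numeric + k1_avg / B[0]) < 1e-6  # dlogZ/dB1 = -<k1>/B1
```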
Nick Rowe has a post up where he blegs the impossible ... A 3D Edgeworth box does not exist (it is at a minimum 6D as I mention in a comment at the post) [1]. However, Nick does cite MINIMAC, a minimal macro model described by Paul Krugman here. It gives us a fun new example to apply the information equilibrium framework!
$$
U \rightarrow C_{i}
$$
$$
U \rightarrow M_{i}
$$
where there are a large number of periods $i = 1, \dots, n$. Our utility function is:
$$
U \sim \prod_{i}^{n} C_{i}^{s} \prod_{j}^{n} M_{j}^{\sigma}
$$
I'm going to build in a connection to the information transfer model right off the bat by adding (assuming constant information transfer index for simplicity):
$$
P : N \rightarrow M
$$
so that:
$$
P \sim M^{k - 1}
$$
and more importantly for us:
$$
\left( \frac{M}{P} \right)^{1 - s} \sim M^{(2 - k)(1 - s)}
$$
Which means that our utility function matches the MINIMAC utility function up to a logarithm (our $U$ would be $\log U$ using Krugman's $U$) if $\sigma = (2 - k)(1 - s)$:
$$
U \sim \prod_{i}^{n} C_{i}^{s} \prod_{j}^{n} \left( \frac{M_{j}}{P_{j}} \right)^{1 - s}
$$
The general budget constraint is given by:
$$
L = \sum_{i} C_{i} + \sum_{j} M_{j}
$$
In the MINIMAC model, we're only concerned with two periods (call them $i$ and $j$). Essentially in period $i$, $C_{k \leq i} = 0$ and $M_{k \leq i} = M$ and in period $j$, $C_{k \geq j} = C$ and $M_{k \geq j} = M'$ to make the connection with Krugman's notation. We'll use the maximum entropy assumption with a large number of time periods so that the most likely point is near the budget constraint (first shown here):
My terrible handwriting and the maximum entropy solution for a large number of periods (high dimensional volume). The orange dots represent the density of states for a high dimensional system. Connection with Krugman's notation also shown.
There are some interesting observations. If $k = 2$, which is $\kappa = 1/2$, then we have the quantity theory of money, but $\sigma = 0$, so utility only depends on consumption. Also if we take $M_{i} = M_{j}$ (constant money supply), we should randomly observe cases of unemployment where $L' < L$ and consumption is below the maximum entropy level near the budget constraint:
Occasionally, you observe a point (red) that moves away from the budget constraint resulting in unemployment.
In fact, we should typically observe $L' < L$, since the maximum entropy point is near, but not exactly at, the budget constraint. Voilà! The natural rate of unemployment is essentially dependent on the dimensionality of the consumption periods. With an infinite number, you'd observe no unemployment. For two time periods, you'd observe ~ 50% unemployment (the red dot in the image above would appear near the center of the triangle most of the time). In our world with some large, but not infinite, number of periods we have a distribution that peaks around a natural rate around ~ 5%:
The natural rate is given by the dimensionality of the temporal structure of the model. In some large, but finite, number of time periods you have an unemployment rate near e.g. 5%. Footnotes:
[1] You can easily fit a pair of $xy$ axes together where $x_1 = x_0 - x_2$ and $y_1 = y_0 - y_2$ (flip both axes), but you can't do it for three sets of $xyz$ axes since $x_1 = x_0 - x_2 - x_3$ (i.e. it depends on two axes). As Nick mentions in reply to my comment, you can do it for 2 agents and 3 goods. And he's right that the math works out fine -- it's basically a three-good Arrow-Debreu general equilibrium model. For my next trick, I think I will build Nick's model.
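The dimensional argument in the main text can be sketched numerically. Under my own simplifying assumption that states are drawn uniformly from the region below the budget constraint (so the exact numbers are illustrative, not the post's geometry), the average gap $1 - L'/L$ falls off as $1/(n+1)$ with the number of periods $n$:

```python
import numpy as np

# For a uniform point in the solid simplex {C_i >= 0, sum_i C_i <= L},
# the normalized total L'/L is Beta(n, 1)-distributed; draw it directly.
rng = np.random.default_rng(1)

def mean_gap(n, samples=20000):
    utilization = rng.beta(n, 1, size=samples)  # L'/L
    return float((1 - utilization).mean())

assert abs(mean_gap(2) - 1 / 3) < 0.01   # few periods: large average gap
assert abs(mean_gap(19) - 0.05) < 0.01   # many periods: gap near 5%
```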
ISSN: 1937-1632
eISSN: 1937-1179
Discrete & Continuous Dynamical Systems - S
June 2008, Volume 1, Issue 2
Guest Editors
Boris Belinskiy, Kunquan Lan, Xin Lu, Alain Miranville, and R. Shivaji
Abstract:
In this paper we describe a new implicit finite element scheme for the discretization of Landau-Lifchitz equations. A proof of convergence of the numerical solution to a (weak) solution of the original equations is given and numerical tests showing the applicability of the method are also provided.
Abstract:
The stability of functional differential equations under delayed feedback is investigated near a Hopf bifurcation. Necessary and sufficient conditions are derived for the stability of the equilibrium solution using averaging theory. The results are used to compare delayed versus undelayed feedback, as well as discrete versus distributed delays. Conditions are obtained for which delayed feedback with partial state information can yield stability where undelayed feedback is ineffective. Furthermore, it is shown that if the feedback is stabilizing (respectively, destabilizing), then a discrete delay is locally the most stabilizing (resp., destabilizing) one among delay distributions having the same mean. The result also holds globally if one considers delays that are symmetrically distributed about their mean.
Abstract:
In the context of a coupled partial differential equation (PDE) model, we provide a rather general procedure by which one may invoke a recently derived operator theoretic result in [21], so as to obtain strong stability of those dissipative $C_{0}$-semigroups which model PDEs in Hilbert space. In particular, the procedure is applied here to a PDE which models structural acoustic interactions; it is well-known that for this interactive PDE the classical stability tool--i.e., the Nagy-Foias decomposition--is inapplicable. The novelty of adopting the present strong stability technique is that one does not need to have an explicit representation of the resolvent.
Abstract:
For nonautonomous linear delay equations $v'=L(t)v_t$ admitting a nonuniform exponential contraction, we establish the nonuniform exponential stability of the equation $v'=L(t) v_t +f(t,v_t)$ for a large class of nonlinear perturbations.
Abstract:
We establish a priori estimate for solutions to the prescribing Gaussian curvature equation
$ - \Delta u + 1 = K(x) e^{2u}, x \in S^2,$ (1)
for functions $K(x)$ which are allowed to change signs. In [16], Chang, Gursky and Yang obtained a priori estimate for the solution of (1) under the condition that the function K(x) be positive and bounded away from 0. This technical assumption was used to guarantee a uniform bound on the energy of the solutions. The main objective of our paper is to remove this well-known assumption. Using the method of moving planes in a local way, we are able to control the growth of the solutions in the region where K is negative and in the region where K is small and thus obtain a priori estimate on the solutions of (1) for general functions K with changing signs.
Abstract:
We consider a structural acoustic model where the active wall is a nonlinear shell. We use a shell modeled with the intrinsic method of Michel Delfour and Jean-Paul Zolésio. We show the existence and uniqueness of solutions in the finite energy space as a consequence of a special trace estimate.
Abstract:
This work studies the sensitivity of a global climate model with deep ocean effect to the variations of a Solar parameter $Q$. The model incorporates a dynamic and diffusive boundary condition. We study the number of stationary solutions according to the positive parameter $Q$.
Abstract:
An approximate analytical solution characterizing initial conditions leading to action potential firing in smooth nerve fibres is determined, using the bistable equation. In the first place, we present a non-trivial stationary solution wave. Then, we extract the main features of this solution to obtain a frontier condition between the initiation of the travelling waves and a decay to the resting state. This frontier corresponds to a separatrix in the projected dynamics diagram depending on the width and the amplitude of the stationary wave.
Abstract:
In this paper we prove a global existence result for the solution of a phase-field model with initial data in high order Sobolev spaces using the invariant regions. This improves, in some sense, the result of [9].
Abstract:
We consider a model for one-dimensional transversal oscillations of an elastic-ideally plastic beam. It is based on the von Mises model of plasticity and leads after a dimensional reduction to a fourth-order partial differential equation with a hysteresis operator of Prandtl-Ishlinskii type whose weight function is given explicitly. In this paper, we study the case of clamped beams involving a kinematic hardening in the stress-strain relation. As main result, we prove the existence and uniqueness of a weak solution. The method of proof, based on spatially semidiscrete approximations, strongly relies on energy dissipation properties of one-dimensional hysteresis operators.
Abstract:
Science missions around natural satellites require low eccentricity and high inclination orbits. These orbits are unstable because of the planetary perturbations, making control necessary to reach the required mission lifetime. Dynamical systems theory helps in improving lifetimes reducing fuel consumption. After a double averaging of the 3-DOF model, the initial conditions are chosen so that the orbit follows the stable-unstable manifold path of an equilibria of the 1-DOF reduced problem. Corresponding initial conditions in the non-averaged problem are easily computed from the explicit transformations provided by the Lie-Deprit perturbation method.
Abstract:
The dynamics of constant harvesting of a single species has been studied extensively within the framework of ratio-dependent predator-prey models. In this work, we investigate the properties of a Michaelis-Menten ratio-dependent predator-prey model with two nonconstant harvesting functions depending on the prey population. Equilibria and periodic orbits are computed and their stability properties are analyzed. Several bifurcations are detected as well as connecting orbits, with an emphasis on analyzing the equilibrium points at which the species coexist. Smooth numerical continuation is performed that allows computation of branches of solutions.
Abstract:
A mathematical model is introduced for weakly nonlinear wave phenomena in molecular systems like DNA and protein molecules that includes thermal effects: exchange of heat energy with the surrounding aqueous medium. The resulting equation is a stochastic discrete nonlinear Schrödinger equation with focusing cubic nonlinearity and "Thermal'' terms modeling heat input and loss: PDSDNLS.
New numerical methods are introduced to handle the unusual combination of a conservative equation, stochastic, and fully nonlinear terms. Some analysis is given of accuracy needs, and the special issues of time step adjustment in stochastic realizations. Numerical studies are presented of the effects of thermalization on solitons, including damping-induced self-trapping of wave energy, a discrete counterpart of single-point blowup.
Abstract:
We generalize logarithmic Sobolev inequalities to logarithmic Gagliardo-Nirenberg inequalities, and apply these inequalities to prove ultracontractivity of the semigroup generated by the doubly nonlinear $p$-Laplacian
$\dot{u}=\Delta_p u^m.$
Our proof does not use Moser iteration, but shows that the time-dependent Lebesgue norm $\||u(t)|\|_{r(t)}$ stays bounded for a variable exponent $r(t)$ blowing up in arbitrary short time.
Abstract:
In the paper we will study the problem of steady viscous linear case with Coriolis force in the exterior domain.
Abstract:
One-dimensional wave equations with cubic power law perturbed by Q-regular additive space-time random noise are considered. These models describe the displacement of nonlinear strings excited by state-independent random external forces. The presented analysis is based on the representation of its solution in form of Fourier-series expansions along the eigenfunctions of Laplace operator with continuous, Markovian, unique Fourier coefficients (the so-called commutative case). We shall discuss existence and uniqueness of Fourier solutions using energy-type methods based on the construction of Lyapunov-functions. Appropriate truncations and finite-dimensional approximations are presented while exploiting the explicit knowledge on eigenfunctions of related second order differential operators. Moreover, some nonstandard partial-implicit difference methods for their numerical integration are suggested in order to control its energy functional in a dynamically consistent fashion. The generalized energy $\mathcal{E}$ (sum of kinetic, potential and damping energy) is governed by the linear relation $\mathbb{E}[\mathcal{E}(t)] = \mathbb{E}[\mathcal{E}(0)] + b^2 \operatorname{trace}(Q)\, t/2$ in time $t \ge 0$, where $b$ is the scalar intensity of noise and $Q$ its covariance operator.
In the canonical ensemble, the Helmholtz free energy \(A (N, V, T)\) is a natural function of \(N , V \) and \(T\). As usual, we perform a Legendre transformation to eliminate \(N\) in favor of \( \mu = \frac {\partial A}{\partial N} \):
\[ \tilde{A}(\mu,V,T) = A(N(\mu),V,T) - N\left(\frac {\partial A}{\partial N}\right)_{V,T} = A(N(\mu),V,T) - \mu N \]
It turns out that the free energy \( \tilde{A}(\mu,V,T)\) is the quantity \(- PV \). We shall derive this result below in the context of the partition function. Thus,
\[-PV = A(N(\mu),V,T) - \mu N\]
To motivate the fact that \(PV\) is the proper free energy of the grand canonical ensemble from thermodynamic considerations, we need to introduce a mathematical theorem, known as Euler's theorem:
Euler's Theorem: Let \( f(x_1,...,x_N) \) be a function such that
\[ f(\lambda x_1,...,\lambda x_N) = \lambda^n f(x_1,...,x_N)\]
Then \(f\) is said to be a homogeneous function of degree \(n\). For example, the function \( f(x) = 3x^2 \) is a homogeneous function of degree 2, \( f(x,y,z) = xy^2 + z^3 \) is a homogeneous function of degree 3; however, \( f(x,y) = e^{xy}-xy \) is not a homogeneous function. Euler's Theorem states that, for a homogeneous function \(f\),
\[nf(x_1,...,x_N) = \sum_{i=1}^N x_i \frac {\partial f}{\partial x_i}\]
Proof: To prove Euler's theorem, simply differentiate the homogeneity condition with respect to \(\lambda\):
\[ \frac {d}{d\lambda} f(\lambda x_1,...,\lambda x_N) = \frac {d}{d\lambda} \lambda^nf(x_1,...,x_N)\]
\[ \sum_{i=1}^N x_i \frac {\partial f}{\partial (\lambda x_i)} = n\lambda^{n-1}f(x_1,...,x_N)\]
Then, setting \(\lambda = 1 \), we have
\[\sum_{i=1}^N x_i \frac {\partial f}{\partial x_i} = nf(x_1,...,x_N)\]
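A quick numerical illustration of the theorem, using the degree-3 example \( f(x,y,z) = xy^2 + z^3 \) from above (the sample point is chosen arbitrarily):

```python
# Euler's theorem check: x*f_x + y*f_y + z*f_z = n*f for n = 3.
def f(x, y, z):
    return x * y**2 + z**3

x, y, z, h = 1.3, 0.7, 2.1, 1e-6
fx = (f(x + h, y, z) - f(x - h, y, z)) / (2 * h)  # central differences
fy = (f(x, y + h, z) - f(x, y - h, z)) / (2 * h)
fz = (f(x, y, z + h) - f(x, y, z - h)) / (2 * h)

assert abs((x * fx + y * fy + z * fz) - 3 * f(x, y, z)) < 1e-6
# homogeneity itself: f(lam*x, lam*y, lam*z) = lam**3 * f(x, y, z)
lam = 1.7
assert abs(f(lam * x, lam * y, lam * z) - lam**3 * f(x, y, z)) < 1e-9
```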
Now, in thermodynamics, extensive thermodynamic functions are homogeneous functions of degree 1. Thus, to see how Euler's theorem applies in thermodynamics, consider the familiar example of the Gibbs free energy:
\[ G = G (N, P, T ) \]
\[ G (\lambda N, P, T) = \lambda G (N, P, T ) \]
so that, applying Euler's theorem with \(n = 1\),
\[ G(N,P,T) = N \frac {\partial G}{\partial N} = \mu N\]
or, for a multicomponent system,
\[ G = \sum_{j} \mu_j N_j\]
\[ G = E - TS + PV \]
Now, for the Legendre transformed free energy in the grand canonical ensemble, the thermodynamics are
\[ d\tilde{A} = dA - \mu dN - Nd\mu = -PdV - SdT - Nd\mu\]
But, since \(\tilde{A} = \tilde{A}(\mu,V,T)\),
\[ d\tilde{A} = \left(\frac {\partial \tilde{A}}{\partial \mu}\right)_{V,T}d\mu+\left(\frac {\partial \tilde{A}}{\partial V}\right)_{\mu,T}dV+ \left(\frac {\partial \tilde{A}}{\partial T}\right)_{\mu,V}dT\]
the thermodynamics will be given by
\[ N = -\left(\frac {\partial \tilde{A}}{\partial \mu}\right)_{V,T}, \qquad P = -\left(\frac {\partial \tilde{A}}{\partial V}\right)_{\mu,T}, \qquad S = -\left(\frac {\partial \tilde{A}}{\partial T}\right)_{V,\mu}\]
Since \(\tilde{A}\) is a homogeneous function of degree 1, and its extensive argument is \(V\), it should satisfy
\[ \tilde{A}(\mu,\lambda V,T) = \lambda \tilde{A}(\mu,V,T)\]
Thus, applying Euler's theorem in \(V\),
\[ \tilde{A}(\mu,V,T) = V \frac {\partial \tilde{A}}{\partial V} = -PV\]
and since
\[\tilde{A} = A-\mu N = E - TS - \mu N\] |
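As a concrete check of \(\tilde{A} = A - \mu N = -PV\), take the classical ideal gas with \(A = NkT[\ln(N\lambda^3/V) - 1]\) (my own example system; units arbitrary):

```python
import math

# Ideal-gas Helmholtz free energy A(N) at fixed V, T.
kT, V, lam3 = 1.0, 10.0, 0.01

def A(N):
    return N * kT * (math.log(N * lam3 / V) - 1)

N, h = 500.0, 1e-4
mu = (A(N + h) - A(N - h)) / (2 * h)  # mu = (dA/dN)_{V,T}
P = N * kT / V                        # ideal-gas equation of state

# The Legendre transform equals -PV (here both sides are -N kT).
assert abs((A(N) - mu * N) + P * V) < 1e-3
```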
Given the radius and $x,y$ coordinates of the center point of two circles how can I calculate their points of intersection if they have any?
This can be done without any trigonometry at all. Let the equations of the circles be$$(x-x_1)^2 + (y-y_1)^2 = r_1^2, \tag{1}$$$$(x-x_2)^2 + (y-y_2)^2 = r_2^2. \tag{2}$$By subtracting $(2)$ from $(1)$ and then expanding, we in fact obtain a linear equation for $x$ and $y$; after a little rearranging it becomes$$-2x(x_1 - x_2) - 2y(y_1 - y_2) = (r_1^2 - r_2^2) - (x_1^2 - x_2^2) - (y_1^2 - y_2^2).$$(If the circles intersect, this is the equation of the line that passes through the intersection points.) This equation can be solved for one of $x$ or $y$; let's suppose $y_1 - y_2 \ne 0$ so that we can solve for $y$:$$y = -\frac{x_1 - x_2}{y_1 - y_2} x + \dotsc. \tag{3}$$ Substituting this expression for $y$ into $(1)$ or $(2)$ gives a quadratic equation in only $x$. Then the $x$-coordinates of the intersection points are the solutions to this; the $y$-coordinates can be obtained by plugging the $x$-coordinates into $(3)$.
Easy solution is to consider another plane such that the centers are along an axis.
Given the points $(x_1,y_1)$ and $(x_2,y_2)$. We focus on the center point of both circles given by $$ \left( \frac{x_1+x_2}{2}, \frac{y_1+y_2}{2} \right). $$
The distance between the centers of the circles is given by $$ R = \sqrt{ (x_2-x_1)^2 + (y_2-y_1)^2}. $$
We can consider the following orthogonal vectors $$ \vec{a} = \left( \frac{x_2-x_1}{R}, \frac{y_2-y_1}{R} \right), \vec{b} = \left( \frac{y_2-y_1}{R}, - \frac{x_2-x_1}{R} \right). $$
In the $(\vec{a},\vec{b})$ plane we get the equations $$ \big( a + R / 2 \big)^2 + b^2 = r_1^2,\\ \big( a - R / 2 \big)^2 + b^2 = r_2^2. $$
Whence $$ a = \frac{r_1^2 - r_2^2}{2R},\\ b = \pm \sqrt{ \frac{r_1^2+r_2^2}{2} - \frac{(r_1^2-r_2^2)^2}{4R^2} - \frac{R^2}{4}}. $$ The intersection points are given by $$ (x,y) = \frac{1}{2} \big( x_1+x_2, y_1+y_2 \big) + \frac{r_1^2 - r_2^2}{2R^2} \big( x_2-x_1, y_2-y_1 \big)\\ \pm \frac{1}{2} \sqrt{ 2 \frac{r_1^2+r_2^2}{R^2} - \frac{(r_1^2-r_2^2)^2}{R^4} - 1} \big( y_2-y_1, x_1-x_2 \big), $$ where $R$ is the distance between the centers of the circles.
A nice way to look at this is to first consider the case when one point is at the origin and the other lies on the x-axis. Let the points be at $(0,0)$, $(d,0)$ and the radii be $r_1$, $r_2$. The two equations simplify to $$x^2+y^2=r_1^2$$ and $$(x-d)^2+y^2=r_2^2$$ Use the first to find $y^2$ and substitute in the second. $$(x-d)^2+r_1^2-x^2=r_2^2$$ expand and simplify $$-2xd+d^2+r_1^2=r_2^2$$ so $$x=\frac{r_1^2-r_2^2+d^2}{2d}$$ and from Pythagoras $$y=\sqrt{r_1^2-x^2}.$$ This part came from mathworld.
For the general position case with points $(x_1,y_1)$ $(x_2,y_2)$ let $$\begin{align} d&=\sqrt{(x_1-x_2)^2+(y_1-y_2)^2}\\ l&=\frac{r_1^2-r_2^2+d^2}{2d}\\ h&=\sqrt{r_1^2-l^2} \end{align}$$ Now $\left(\tfrac{x_2-x_1}{d},\tfrac{y_2-y_1}{d}\right)$ $\left(\tfrac{y_2-y_1}{d},-\tfrac{x_2-x_1}{d}\right)$ are two orthogonal unit vectors and we can rotate and translate to get the general solution $$\begin{align} x&=\frac{l}{d}(x_2-x_1) \pm \frac{h}{d}(y_2-y_1) + x_1,\\ y&=\frac{l}{d}(y_2-y_1) \mp \frac{h}{d}(x_2-x_1) + y_1.\\ \end{align}$$
Example 1: Find the points of intersection of the circles given by their equations as follows:
$(x - 2)^2 + (y - 3)^2 = 9$
$(x - 1)^2 + (y + 1)^2 = 16$
Solution to Example 1:
We first expand the two equations as follows:
$x^2 - 4x + 4 + y^2 - 6y + 9 = 9 $
$x^2 - 2x + 1 + y^2 + 2y + 1 = 16 $
Multiply all terms in the first equation by -1 to obtain an equivalent equation and keep the second equation unchanged
$-x^2 + 4x - 4 - y^2 + 6y - 9 = -9 $
$x^2 - 2x + 1 + y^2 + 2y + 1 = 16 $
We now add the same sides of the two equations to obtain a linear equation
$2x - 3 + 8y - 8 = 7 $
Which may written as
$x + 4y = 9$, or $x = 9 - 4y$
We now substitute $x$ by $9 - 4y$ in the first equation to obtain
$(9 - 4y)^2 - 4(9 - 4y) + 4 + y^2 - 6y + 9 = 9 $
Which may be written as
$17y^2 -62y + 49 = 0 $
Solve the quadratic equation for y to obtain two solutions
$y = \frac{(31 + 8\sqrt{2})}{17} \approx 2.49 $
and $ y =\frac{31 - 8\sqrt{2}}{17} \approx 1.16 $
We now substitute the values of y already obtained into the equation $x = 9 - 4y $ to obtain the values for x as follows
$x = \frac{29 - 32\sqrt{2}}{17} \approx - 0.96 $
and $x = \frac{29 + 32\sqrt{2}}{17} \approx 4.37 $
The two points of intersection of the two circles are given by
$(- 0.96 , 2.49)$ and $(4.37 , 1.16)$
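The example above can be cross-checked with a short script implementing the distance/half-chord construction from one of the earlier answers (a sketch; the function name and structure are mine):

```python
import math

def circle_intersections(x1, y1, r1, x2, y2, r2):
    d = math.hypot(x2 - x1, y2 - y1)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []                            # no intersection (or concentric)
    l = (r1**2 - r2**2 + d**2) / (2 * d)     # distance to the chord
    h = math.sqrt(max(r1**2 - l**2, 0.0))    # half-length of the chord
    ex, ey = (x2 - x1) / d, (y2 - y1) / d    # unit vector between centers
    px, py = x1 + l * ex, y1 + l * ey        # foot of the common chord
    return [(px + h * ey, py - h * ex), (px - h * ey, py + h * ex)]

# Circles from the worked example: centers (2,3), (1,-1), radii 3, 4.
pts = circle_intersections(2, 3, 3, 1, -1, 4)
```

Each returned point satisfies both circle equations and agrees with the approximate coordinates found above.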
After going through most of the answers posted here, I still struggled implementing them. After much searching around I found this page: http://www.ambrsoft.com/TrigoCalc/Circles2/circle2intersection/CircleCircleIntersection.htm
It includes extensive notation on every part of the equation as well as drawings, a calculator and JavaScript implementation. Hope it can help others.
Let $C_1 = (x_1,y_1), C_2 = (x_2,y_2)$ be the centers of the two circles and $r_1,r_2$ be their radii respectively.
Their equations are $$(x-x_1)^2 + (y-y_1)^2 = r_1^2$$ $$(x-x_2)^2 + (y-y_2)^2 = r_2^2$$
They intersect iff $|r_1-r_2|\leq|C_1-C_2|\leq r_1+r_2$, where $|C_1-C_2|$ is the distance between the two centers. If equality holds, the circles touch and there is one solution. For strict inequalities, they intersect and they have two solutions.
Just solve the system of equations. Suppose that $x_0$ is a point on the first circle. Then, its parametric representation is $x_0 = (x_1+r_1\cos\theta,y_1+r_1\sin\theta)$ for some $\theta$. If $x_0$ also lies on the second circle, which will make it a point of intersection, it must also satisfy the equation of the second circle i.e. $$(x_0-x_2)^2 + (y_0-y_2)^2 = r_2^2$$ Substitute the parametric form, and find out the value(s) of $\theta$.
Make the equations of both circles start with $x^2 + y^2$, then subtract one from the other to get the equation of their radical axis, which is a straight line. The intersection of this radical axis with one of the circles can be found by plugging the expression for $x$ or $y$ from the line into that circle's equation.
EDIT 1
If the discriminant of this quadratic equation (which is the real guide) is $>0$, $=0$, or $<0$, then the intersections are respectively two real points, two coinciding points (at a tangent), or two complex points.
Following the very nice hint of arkeet, you can, after defining a, b, c, g, h using rc1, zc1, rc2, zc2 as the center points of the circles and R1 and R2 as their radii, and assuming that rc2 != rc1 (the case rc2 == rc1 can also be done in the manner of arkeet, however the present assumption was useful to me), compute:

g = (zc2 - zc1)/(rc2 - rc1);
h = (R1*R1 - R2*R2 + zc2*zc2 - zc1*zc1 + rc2*rc2 - rc1*rc1)/(2*(rc2 - rc1));

a = g*g + 1;
b = -2*g*h + 2*g*rc1 - 2*zc1;
c = h*h - 2*h*rc1 + rc1*rc1 + zc1*zc1 - R1*R1;

zplus = (-b + sqrt(b*b - 4*a*c))/(2*a);
zminus = (-b - sqrt(b*b - 4*a*c))/(2*a);
rplus = -g*zplus + h;
rminus = -g*zminus + h;

You can test this by verifying that the intersection points do lie on the 2 circles.
Diamagnetism arises from closed atomic shells of electrons. When a B-field is applied, these electrons set up a screening current that opposes the applied field.
Start by thinking of electrons in a circular orbit of radius $\rho$ in the $xy$ plane.
Recall Faraday's law: changing the flux through the orbit induces an EMF, $\varepsilon = -\frac{d\Phi}{dt}$.
Flux through the orbit: $\Phi = \pi\rho^2 B$.
Force on an electron: the induced azimuthal electric field $E = -\frac{\rho}{2}\frac{dB}{dt}$ exerts a force $-eE$ on each electron; integrating over the switch-on of the field gives a velocity change $\Delta v = \frac{e\rho B}{2m_e}$.
Current due to $Z$ electrons in an atom: $\Delta I = -\frac{Ze\,\Delta v}{2\pi\rho} = -\frac{Ze^2 B}{4\pi m_e}$.
So the magnetisation per atom is $\mu = \pi\rho^2\,\Delta I = -\frac{Ze^2 B \langle\rho^2\rangle}{4m_e}$,
where $\langle\rho^2\rangle = \frac{2}{3}\langle r^2\rangle$ for a spherically symmetrical orbit (note that strictly we should be using the average values $\langle\rho^2\rangle$ and $\langle r^2\rangle$).
So the total magnetisation for $N$ atoms per unit volume is $M = N\mu = -\frac{NZe^2\langle r^2\rangle}{6m_e}B$
-> diamagnetic susceptibility $\chi = \frac{\mu_0 M}{B} = -\frac{\mu_0 N Z e^2 \langle r^2\rangle}{6 m_e}$
Again note that strictly not all atoms will have the same $r^2$, so really we need to consider an average over all atoms.
Gyromagnetic ratio, $\mu/l$: classically, $\frac{\mu}{l} = -\frac{e}{2m_e}$. Similarly, in quantum mechanics the same ratio appears, up to the $g$-factor: $\mu = -g\frac{e}{2m_e}J$.
Magnetic levitation can occur when the magnetic force balances gravity; the magnetic energy is $U_m = -\mathbf{m}\cdot\mathbf{B}$.
Using this principle, scientists have managed to levitate Brazilian tree frogs, which isn’t as impressive as it sounds when you realise how small they are. Unfortunately, the need for a magnetic field gradient makes it hard to levitate larger items, so there won’t be any floating elephants just yet.
Paramagnetism: unfilled atomic shells have an overall magnetic moment which aligns along the direction of an applied field.
States with different $M_J$ are shifted in energy by $\Delta E = g_J \mu_B M_J B$.
For now, we will consider only spin-1/2 systems.
Take $N$ spin-1/2 atoms per unit volume ($J = 1/2$, $g_J = 2$), so each atom has two levels $M_J = \pm 1/2$ with energies $\mp\mu_B B$.
Boltzmann factors: the populations are proportional to $e^{\pm\mu_B B/kT}$.
Total magnetisation:
$$M = \frac{\chi B}{\mu_0} = N \mu_B \tanh\frac{\mu_B B}{kT}$$
In the high-temperature, low-field limit, $\tanh x \approx x$, so
$$\chi = \frac{\mu_0 N \mu_B^2}{kT} \propto \frac{1}{T}$$
-> Curie's Law
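As a numerical sanity check of Curie's law (SI constants; the number density is my own illustrative choice), the exact $\tanh$ expression for the spin-1/2 magnetisation agrees with the Curie form to high accuracy at room temperature and weak field:

```python
import math

mu_B, k, mu0 = 9.274e-24, 1.381e-23, 4e-7 * math.pi  # SI values
N = 1e28            # atoms per m^3 (illustrative)
B, T = 0.1, 300.0   # weak field, room temperature

M = N * mu_B * math.tanh(mu_B * B / (k * T))
chi_exact = mu0 * M / B
chi_curie = mu0 * N * mu_B**2 / (k * T)
assert abs(chi_exact / chi_curie - 1) < 1e-4
```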
Collective magnetisation: so far we have considered the atoms individually and not worried about interactions between them. The next types of magnetism all arise because of interactions of atoms and so fall under the heading of collective magnetism. Magnetic dipole-dipole interactions: neighbouring atoms exert a force on each other which tries to align dipoles m.
Interaction energy of two dipoles a distance $r$ apart: $U \sim \frac{\mu_0 m^2}{4\pi r^3}$.
Take $m \sim \mu_B$. For the purposes of making a quick calculation, we'll use typical values and $r \sim 0.3\,$nm, giving us $U \sim 10^{-24}\,$J.
This corresponds to an ordering temperature of tenths or hundredths of Kelvin (i.e. very small). At room temperature we wouldn’t expect the spins to be aligned because of magnetic dipole interactions- and yet we do see magnetism at room temperature, so we need to look for other mechanisms.
Exchange interaction: this is the reason why spins align at temperatures as high as 1000K.
Consider two adjacent atoms and two electrons with a total wavefunction $\psi(r_1, r_2, s_1, s_2)$.
Electrons are fermions, so $\psi(r_1, r_2, s_1, s_2) = -\psi(r_2, r_1, s_2, s_1)$.
Electrons with the same spin ‘repel’ each other and form symmetric and antisymmetric wavefunctions.
What we end up with are a singlet ($S = 0$) and a triplet ($S = 1$, $M_S = 0, \pm 1$).
The triplet energy lies below that of the singlet. The energy difference between the singlet and triplet is $E_S - E_T = 2J$, where $J$ is the exchange integral. More will be added on the exchange integral at a later date.
Ferromagnetism and the molecular field model: ferromagnetism occurs when the exchange integral J>0 (electron-electron coupling). Spins align to create a spontaneous magnetisation below a critical temperature; we can see this using the molecular field (also called the mean field) model.
Say that the atoms look like they are in an effective flux density
$$B_{eff} = B_0 + \mu_0 \lambda M$$
(first term on RHS is an external field, second is a molecular 'field' due to exchange interactions). Note that $B_{eff}$ is not a real magnetic field that we can measure, it's just a way of thinking about these interactions.
Assume that Curie's Law holds for the total field $B_{eff}$: $M = \frac{C}{T}\frac{B_{eff}}{\mu_0}$. Solving for $M$ gives
$$\chi = \frac{\mu_0 M}{B_0} = \frac{C}{T - T_C}, \qquad T_C = C\lambda$$
$T_C$ is the critical temperature where we see a spontaneous magnetisation.
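The appearance of a spontaneous magnetisation below the critical temperature can be sketched by iterating the dimensionless spin-1/2 mean-field equation $m = \tanh(m/t)$, with $m = M/M_s$ and $t = T/T_C$ (my own normalisation of the molecular field model):

```python
import math

def spontaneous_m(t, iters=200):
    # Fixed-point iteration of m = tanh(m / t), starting from saturation.
    m = 1.0
    for _ in range(iters):
        m = math.tanh(m / t)
    return m

assert spontaneous_m(0.5) > 0.9    # well below T_C: nearly saturated
assert spontaneous_m(1.5) < 1e-3   # above T_C: only the m = 0 solution
```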
Domains: pure iron only magnetises in the presence of an external field. Why don’t all the spins line up spontaneously in zero field?
We need to consider the three types of energy in a magnetic crystal.
Magnetostatic energy: this will be large if all spins are lined up in the same direction.
Anisotropy energy: spins like to line up along particular crystal directions. There is an energy cost for any spins not aligned with this easy axis.
Exchange energy: there is an energy cost whenever spins are not aligned.
(Figure sequence: all spins aligned along the easy axis but magnetostatic energy large; magnetostatic energy reduced; further reduction of magnetostatic energy; all magnetostatic energy now contained within the crystal but lots of atoms perpendicular to the easy axis; better still, now only a few atoms perpendicular to the easy axis.)
So why doesn’t this just go on until the domains get infinitely small? To answer this, we need to think about domain walls.
Domain walls (Bloch walls) look like this:
Most of the spins in the wall are not aligned with the easy axis; also, adjacent spins are at an angle $\theta$ to each other, so their exchange energy is higher. This means that each domain wall costs energy. What we have to do is find an optimum where we are not 'spending too much' on any one type of energy.
Thickness of domain walls: energy is minimised by changing the spin slowly in $N$ steps by a small angle $\theta = \pi/N$ each.
For each line of spins, exchange energy cost $\approx NJS^2\theta^2$ (here we're considering the difference in exchange energy when the $N$ spins are at angle $\theta$ to their neighbours instead of being aligned; $\theta$ is small, so do a power series expansion of the $\cos\theta$ in the exchange energy). With $\theta = \pi/N$ this is $\frac{JS^2\pi^2}{N}$ per line of atoms.
We have $1/a^2$ lines per unit area ($a$ is just the atomic spacing).
Anisotropy energy cost $\approx KNa$ per unit area, where $K$ is the anisotropy energy per unit volume and $Na$ is the wall thickness.
Total energy cost per unit area: $U_{tot} = \frac{JS^2\pi^2}{Na^2} + KNa$
So $U_{tot}$ is a minimum when $N = \pi S\sqrt{\frac{J}{Ka^3}}$.
For iron, a domain wall will be around 300 atoms thick (i.e. $300a$ in metres, where $a$ is the atomic spacing).
Impurities and hard magnets: pure iron is a soft magnetic material, i.e. it does not retain its magnetisation in zero field. In order to pin the domain walls and keep the magnetisation, we need to introduce impurities, e.g. carbon in steel.
Magnetisation of a hard magnet proceeds in three stages:
(1) Reversible boundary displacement: domain boundaries move a little but they don't hit impurities so they can easily go back to their original position. (2) Irreversible boundary displacement: the applied field forces the domain walls through impurities; when the field is removed, they can't pass back through the impurities. (3) Magnetisation rotation: all spins line up with the field, i.e. saturation.
Because (2) is irreversible, the system has a hysteresis and an overall magnetisation is retained when the applied field is reduced to zero. A reverse field is then required to demagnetise the material.
Antiferromagnetism: if the exchange integral J<0, adjacent spins line up antiparallel to each other- this is antiferromagnetism. To model this, we use the molecular field model, but this time we have two sublattices, A and B, each with N/2 atoms.
Mean field at A due to B: $B_A = B_0 - \mu_0 \lambda M_B$
Mean field at B due to A: $B_B = B_0 - \mu_0 \lambda M_A$
Total magnetisation $M = M_A + M_B$ in an applied field $B_0$.
As before, assume Curie's Law holds for each sublattice (with Curie constant $C/2$). This gives
$$\chi = \frac{C}{T + T_N}$$
where $T_N = \frac{C\lambda}{2}$ is the Néel temperature.
So
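The mean-field algebra can be checked numerically. This is a sketch with arbitrary sample values (C = 2, λ = 3, T = 5, B₀ = 1, none taken from the notes), solving the two coupled Curie-law equations by fixed-point iteration:

```python
# Two-sublattice mean-field model of an antiferromagnet (sketch).
# Each sublattice obeys Curie's law in its local field:
#   M_A = (C/(2T)) * (B0 - lam*M_B),  M_B = (C/(2T)) * (B0 - lam*M_A)
C, lam, T, B0 = 2.0, 3.0, 5.0, 1.0

M_A = M_B = 0.0
for _ in range(200):  # converges since C*lam/(2T) < 1 here
    M_A, M_B = (C / (2 * T)) * (B0 - lam * M_B), (C / (2 * T)) * (B0 - lam * M_A)

chi = (M_A + M_B) / B0
T_N = C * lam / 2  # Neel temperature
print(round(chi, 6), round(C / (T + T_N), 6))  # 0.25 0.25 -- chi = C/(T + T_N)
```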
Antiferromagnets have an anisotropic susceptibility:
Vertical B-field (parallel to the spin axis) has little effect.
Horizontal B-field (perpendicular to the spin axis) -> the spins cant slightly towards the field, giving a susceptibility that is roughly independent of temperature below T_N.
Magnetic resonance experiments: the basic idea is to split up the atomic energy levels by applying a magnetic field and then observe the transitions between them.
We’ll use a
J=3/2 level for this example.
Selection rule for
ΔJ=0.
Expect to see three peaks corresponding to the three allowed transitions.
You may not expect there to be a splitting at zero field, but remember that this isn’t an isolated atom- it is in a crystal, where there are other perturbations to consider.
Apparatus- Nuclear Magnetic Resonance (NMR)
For ESR (electron spin resonance), we are looking at atomic rather than nuclear transitions so we need a frequency in the GHz region. To achieve this, replace the coil with a cavity resonator. We also need to cool the sample to a low temperature to see a signal.
In both cases, either sweep the frequency through resonance whilst keeping the B-field constant, or vice versa.
Adiabatic demagnetisation (used for cooling): we can cool a solid containing magnetic ions by using a magnetic field. To do this, we apply a field, and then remove it adiabatically (keeping the populations of the spin states the same). The final temperature T_f is limited by small interactions which split the energy levels even at B=0.
Superconductivity: in a superconductor, resistivity goes to zero below the critical temperature T_C. But a superconductor is not just a perfect conductor, it is also a perfect diamagnet (χ=-1): it expels magnetic flux (Meissner effect).
Superconductivity can be destroyed by a critical magnetic field B_C or a critical current J_C.
Meissner effect and flux expulsion: when a superconductor is cooled below T_C (which is usually in the range of mK to ~160 K), surface currents are set up which expel magnetic flux. For a merely zero-resistance metal cooled below T_C, the flux would become trapped rather than expelled.
Type I superconductors: sharp transition to the normal state at B_C.
Type II superconductors (e.g. Nb): a fully superconducting (Meissner) state at low fields, and a vortex state at higher fields in which a few lines of flux get through the superconductor, each surrounded by helical screening currents.
Type II are of more practical use.
Penetration depth: from London theory, surface currents and the field decay as $latex B(x)=B_0e^{-x/\lambda}$, where $latex \lambda=\sqrt{\frac{m}{\mu_0 n_s e^2}}$ (the penetration depth). Magnetisation energy occurs over this penetration depth.
BCS theory, Cooper pairs and energy gap: specific heat, IR absorption and tunnelling all indicate that superconductivity is related to an energy gap (this idea will be discussed further a bit later).
Electrons experience an attraction caused by the interaction with the crystal lattice, leading to binding in pairs (Cooper pairs). Electrons of wavevector k_1 and k_2 can exchange virtual phonons. This interaction is strongest when k_1 = -k_2, so electrons bind together in pairs with momenta k_F and -k_F.
Wavefunction: the two electrons form a spin singlet.
The pair has charge 2e, mass 2m and binding energy Δ per electron.
Pairs are destroyed when there is enough energy to excite electrons across the gap.
We have a perfect conductor because no scattering can occur until there is sufficient energy to excite pairs across the gap.
Coherence length: say (roughly) that the pair is built from electron states spread over an energy ~Δ about the Fermi surface, so the momentum spread is $latex \delta p\sim\Delta/v_F$.
So the pair size (coherence length) is $latex \xi\sim\frac{\bar{h}}{\delta p}\sim\frac{\bar{h}v_F}{\Delta}$.
Flux quantisation
Current density: for the general wavefunction of a Cooper pair, mass 2m and charge 2e, written $latex \psi(\bold{r})=|\psi(\bold{r})|e^{i\theta(\bold{r})}$,
$latex \bold{j}(\bold{r})=\frac{-e}{2m}|\psi(\bold{r})|^2(\bar{h}\nabla \theta+2e\bold{A})$
Far from the surface of a superconductor in its Meissner state, j=0, so $latex \bar{h}\nabla\theta=-2e\bold{A}$.
Integrate around a closed curve C inside the superconductor: $latex \bar{h}\oint_C\nabla\theta\cdot d\bold{l}=-2e\oint_C\bold{A}\cdot d\bold{l}=-2e\Phi$
ψ(r) is single-valued, so around a closed loop $latex \oint_C\nabla\theta\cdot d\bold{l}=2\pi n$ for some integer n.
$latex \Phi=\pm\frac{2\pi n \bar{h}}{2e}=\pm\frac{nh}{2e}=\pm n\Phi_0$
where $latex \Phi_0=\frac{h}{2e}=2.07\times 10^{-15}\ \mathrm{T\,m^2}$ is the flux quantum. Flux through a closed loop is quantised in units of $latex \Phi_0$.
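The flux-quantum value h/2e can be checked directly from the SI defining constants (a quick sketch):

```python
# Flux quantum Phi_0 = h / (2e) (sketch).
h = 6.62607015e-34   # Planck constant, J s (exact in the 2019 SI)
e = 1.602176634e-19  # elementary charge, C (exact in the 2019 SI)

phi_0 = h / (2 * e)
print(f"{phi_0:.3e}")  # 2.068e-15 (T m^2), i.e. ~2.07e-15
```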
Evidence for the 2Δ band gap in superconductors:
Infrared absorption: analogous to results for the semiconductor band gap; absorption only occurs when hν > 2Δ.
Tunnelling between two superconductors with a thin barrier (~10^-9 m): the tunnel current shows features due to the alignment of energy levels either side of the barrier, and so measures the energy gaps.
The form of the heat capacity below T_C also indicates a 2Δ band gap.
Experiment to measure the flux quantum
Place sample in a magnetic field of ~10μT.
Cool sample through T_C.
Vibrate sample between search coils to find the trapped flux. |
Skills to Develop
Graph exponential functions. Graph exponential functions using transformations.
As we discussed in the previous section, exponential functions are used for many real-world applications such as finance, forensics, computer science, and most of the life sciences. Working with an equation that describes a real-world situation gives us a method for making predictions. Most of the time, however, the equation itself is not enough. We learn a lot about things by seeing their pictorial representations, and that is exactly why graphing exponential equations is a powerful tool. It gives us another layer of insight for predicting future events.
Graphing Exponential Functions
Before we begin graphing, it is helpful to review the behavior of exponential growth. Recall the table of values for a function of the form \(f(x)=b^x\) whose base is greater than one. We’ll use the function \(f(x)=2^x\). Observe how the output values in Table \(\PageIndex{1}\) change as the input increases by \(1\).
| \(x\) | \(−3\) | \(−2\) | \(−1\) | \(0\) | \(1\) | \(2\) | \(3\) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| \(f(x)=2^x\) | \(\dfrac{1}{8}\) | \(\dfrac{1}{4}\) | \(\dfrac{1}{2}\) | \(1\) | \(2\) | \(4\) | \(8\) |
Each output value is the product of the previous output and the base, \(2\). We call the base \(2\) the
constant ratio. In fact, for any exponential function with the form \(f(x)=ab^x\), \(b\) is the constant ratio of the function. This means that as the input increases by \(1\), the output value will be the product of the base and the previous output, regardless of the value of \(a\).
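The constant-ratio property is easy to verify directly; this sketch reproduces the table for \(f(x)=2^x\) and checks that consecutive outputs differ by a factor of the base:

```python
# Reproduce the table of values for f(x) = 2^x and check the constant ratio (sketch).
def f(x, a=1, b=2):
    return a * b ** x

outputs = [f(x) for x in range(-3, 4)]
print(outputs)  # [0.125, 0.25, 0.5, 1, 2, 4, 8]

# Consecutive outputs differ by a factor of the base b = 2.
ratios = [outputs[i + 1] / outputs[i] for i in range(len(outputs) - 1)]
print(ratios)   # [2.0, 2.0, 2.0, 2.0, 2.0, 2.0]
```

Changing \(a\) rescales every output but leaves the ratios, and hence the base, unchanged.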
Notice from the table that
the output values are positive for all values of \(x\); as \(x\) increases, the output values increase without bound; and as \(x\) decreases, the output values grow smaller, approaching zero.
Figure \(\PageIndex{1}\) shows the exponential growth function \(f(x)=2^x\).
Figure \(\PageIndex{1}\): Notice that the graph gets close to the x-axis, but never touches it.
The domain of \(f(x)=2^x\) is all real numbers, the range is \((0,\infty)\), and the horizontal asymptote is \(y=0\).
To get a sense of the behavior of exponential decay, we can create a table of values for a function of the form \(f(x)=b^x\) whose base is between zero and one. We’ll use the function \(g(x)={\left(\dfrac{1}{2}\right)}^x\). Observe how the output values in Table \(\PageIndex{2}\) change as the input increases by \(1\).
| \(x\) | \(−3\) | \(−2\) | \(−1\) | \(0\) | \(1\) | \(2\) | \(3\) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| \(g(x)={\left(\dfrac{1}{2}\right)}^x\) | \(8\) | \(4\) | \(2\) | \(1\) | \(\dfrac{1}{2}\) | \(\dfrac{1}{4}\) | \(\dfrac{1}{8}\) |
Again, because the input is increasing by \(1\), each output value is the product of the previous output and the base, or constant ratio \(\dfrac{1}{2}\).
Notice from the table that
the output values are positive for all values of \(x\); as \(x\) increases, the output values grow smaller, approaching zero; and as \(x\) decreases, the output values grow without bound.
Figure \(\PageIndex{2}\) shows the exponential decay function, \(g(x)={\left(\dfrac{1}{2}\right)}^x\).
Figure \(\PageIndex{2}\)
The domain of \(g(x)=\left(\dfrac{1}{2}\right)^x\) is all real numbers, the range is \((0,\infty)\),and the horizontal asymptote is \(y=0\).
CHARACTERISTICS OF THE GRAPH OF THE TOOLKIT FUNCTION \(f(x) = b^x\)
An exponential function with the form \(f(x)=b^x\), \(b>0\), \(b≠1\), has these characteristics:
one-to-one function horizontal asymptote: \(y=0\) domain: \((–\infty, \infty)\) range: \((0,\infty)\) x-intercept: none y-intercept: \((0,1)\) increasing if \(b>1\) decreasing if \(b<1\)
Figure \(\PageIndex{3}\) compares the graphs of exponential growth and decay functions.
Figure \(\PageIndex{3}\)
Given an exponential function of the form \(f(x)=b^x\), graph the function
1. Plot at least \(3\) points of the graph by finding 3 input-output pairs, including the y-intercept \((0,1)\).
2. Draw a smooth curve through the points.
3. State the domain, \((−\infty,\infty)\), the range, \((0,\infty)\), and the horizontal asymptote, \(y=0\).
Sketch a graph of \(f(x)=0.25^x\). State the domain, range, and asymptote.
Solution
Since \(b=0.25\) is between zero and one, we know the function is decreasing. The end behavior of the graph is as follows: as \(x \rightarrow -\infty\), \(y \rightarrow \infty\), and as \(x \rightarrow \infty\), \(y \rightarrow 0\), so the graph has an asymptote \(y=0\).
Find two other points. Plot the y-intercept, \((0,1)\), along with the two other points. We will use \((−1,4)\) and \((1,0.25)\).
Draw a smooth curve connecting the points as in Figure \(\PageIndex{4}\).
Figure \(\PageIndex{4}\)
The domain is \((−\infty,\infty)\); the range is \((0,\infty)\); the horizontal asymptote is \(y=0\).
Exercise \(\PageIndex{1}\): Sketch the graph of \(f(x)=4^x\). State the domain, range, and asymptote.
Answer
The domain is \((−\infty,\infty)\); the range is \((0,\infty)\); the horizontal asymptote is \(y=0\).
Graphing Transformations of Exponential Functions
Transformations of exponential graphs behave similarly to those of other functions. Just as with other toolkit functions, we can apply the four types of transformations—shifts, reflections, stretches, and compressions—to the toolkit function \(f(x)=b^x\) without loss of shape. For instance, just as the quadratic function maintains its parabolic shape when shifted, reflected, stretched, or compressed, the exponential function also maintains its general shape regardless of the transformations applied.
Graphing a Vertical Shift
The first transformation occurs when we add a constant \(d\) to the toolkit function \(f(x)=b^x\), giving us a vertical shift \(d\) units in the same direction as the sign. For example, if we begin by graphing a toolkit function, \(f(x)=2^x\), we can then graph two vertical shifts alongside it, using \(d=3\): the upward shift, \(g(x)=2^x+3\) and the downward shift, \(h(x)=2^x−3\). Both vertical shifts are shown in Figure \(\PageIndex{5}\).
Figure \(\PageIndex{5}\)
Observe the results of shifting \(f(x)=2^x\) vertically:
The domain, \((−\infty,\infty)\), remains unchanged.
When the function is shifted up \(3\) units to \(g(x)=2^x+3\): the y-intercept shifts up \(3\) units to \((0,4)\); the asymptote shifts up \(3\) units to \(y=3\); the range becomes \((3,\infty)\).
When the function is shifted down \(3\) units to \(h(x)=2^x−3\): the y-intercept shifts down \(3\) units to \((0,−2)\); the asymptote shifts down \(3\) units to \(y=−3\); the range becomes \((−3,\infty)\).
Graphing a Horizontal Shift
The next transformation occurs when we add a constant \(c\) to the input of the toolkit function \(f(x)=b^x\), giving us a horizontal shift \(c\) units in the opposite direction of the sign. For example, if we begin by graphing the toolkit function \(f(x)=2^x\), we can then graph two horizontal shifts alongside it, using \(c=3\): the shift left, \(g(x)=2^{x+3}\), and the shift right, \(h(x)=2^{x−3}\). Both horizontal shifts are shown in Figure \(\PageIndex{6}\).
Figure \(\PageIndex{6}\)
Observe the results of shifting \(f(x)=2^x\) horizontally:
The domain, \((−\infty,\infty)\), remains unchanged.
The asymptote, \(y=0\), remains unchanged.
The y-intercept shifts such that:
When the function is shifted left \(3\) units to \(g(x)=2^{x+3}\), the y-intercept becomes \((0,8)\). This is because \(2^{x+3}=2^3 2^x=(8)2^x\), so the initial value of the function is \(8\).
When the function is shifted right \(3\) units to \(h(x)=2^{x−3}\), the y-intercept becomes \(\left(0,\dfrac{1}{8}\right)\). Again, see that \(2^{x−3}=\left(\dfrac{1}{8}\right)2^x\), so the initial value of the function is \(\dfrac{1}{8}\).
Given an exponential function with the form \(f(x)=b^{x+c}+d\), graph the translation
1. Draw the horizontal asymptote \(y=d\).
2. Shift the graph of \(f(x)=b^x\) left \(c\) units if \(c\) is positive, and right \(c\) units if \(c\) is negative.
3. Shift the graph of \(f(x)=b^x\) up \(d\) units if \(d\) is positive, and down \(d\) units if \(d\) is negative.
4. State the domain, \((−\infty,\infty)\), the range, \((d,\infty)\), and the horizontal asymptote \(y=d\).
Graph \(f(x)=2^{x+1}−3\). State the domain, range, and asymptote.
Solution
We have an exponential equation of the form \(f(x)=b^{x+c}+d\), with \(b=2\), \(c=1\), and \(d=−3\).
Draw the horizontal asymptote \(y=d\), so draw \(y=−3\). Shift the graph of \(f(x)=b^x\) left \(1\) unit and down \(3\) units.
Figure \(\PageIndex{7}\)
The domain is \((−\infty,\infty)\); the range is \((−3,\infty)\); the horizontal asymptote is \(y=−3\).
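The key features of the example can be confirmed numerically (a sketch):

```python
# Check f(x) = 2^(x+1) - 3: y-intercept (0, -1) and horizontal asymptote y = -3 (sketch).
def f(x):
    return 2 ** (x + 1) - 3

print(f(0))         # -1, the y-intercept
print(f(-20) > -3)  # True: outputs stay above the asymptote, so the range is (-3, inf)
print(f(-20) + 3)   # tiny positive number, so f approaches y = -3 from above
```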
Exercise \(\PageIndex{2}\): Graph \(f(x)=2^{x−1}+3\). State the domain, range, and asymptote.
Answer
The domain is \((−\infty,\infty)\); the range is \((3,\infty)\); the horizontal asymptote is \(y=3\).
Given an equation of the form \(f(x)=b^{x+c}+d\), use a graphing calculator to approximate the solution for \(x\)
1. Press [Y=]. Enter the given exponential equation in the line headed “Y1=”.
2. Enter the given value for \(f(x)\) in the line headed “Y2=”.
3. Press [WINDOW]. Adjust the \(y\)-axis so that it includes the value entered for “Y2=”.
4. Press [GRAPH] to observe the graph of the exponential function along with the line for the specified value of \(f(x)\).
5. To find the value of \(x\), we compute the point of intersection. Press [2ND] then [CALC]. Select “intersect” and press [ENTER] three times. The point of intersection gives the value of \(x\).
Solve \(42=1.2{(5)}^x+2.8\) graphically. Round to the nearest thousandth.
Solution
Press [Y=] and enter \(1.2{(5)}^x+2.8\) next to Y1=. Then enter \(42\) next to Y2=. For a window, use the values \(–3\) to \(3\) for \(x\) and \(–5\) to \(55\) for \(y\). Press [GRAPH]. The graphs should intersect somewhere near \(x=2\).
For a better approximation, press
[2ND] then [CALC]. Select [5: intersect] and press [ENTER] three times. The x-coordinate of the point of intersection is displayed as \(2.1661943\). (Your answer may be different if you use a different window or use a different value for Guess?) To the nearest thousandth, \(x≈2.166\).
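The intersection can also be reproduced without a calculator by bisection (a sketch; the bracket \([0, 3]\) matches the window above):

```python
# Solve 1.2*5^x + 2.8 = 42 by bisection on [0, 3] (sketch).
def g(x):
    return 1.2 * 5 ** x + 2.8 - 42

lo, hi = 0.0, 3.0   # g(lo) < 0 < g(hi), so a root lies between them
for _ in range(60):
    mid = (lo + hi) / 2
    if g(lo) * g(mid) <= 0:  # root in the left half
        hi = mid
    else:                    # root in the right half
        lo = mid

x = (lo + hi) / 2
print(round(x, 3))  # 2.166, matching the calculator's intersection
```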
Exercise \(\PageIndex{3}\): Solve \(4=7.85{(1.15)}^x−2.27\) graphically. Round to the nearest thousandth.
Answer
\(x≈−1.608\)
Graphing a Stretch or Compression
While horizontal and vertical shifts involve adding constants to the input or to the function itself, a
stretch or compression occurs when we multiply the toolkit function \(f(x)=b^x\) by a constant \(|a|>0\). For example, if we begin by graphing the toolkit function \(f(x)=2^x\), we can then graph the stretch, using \(a=3\), to get \(g(x)=3(2)^x\) as shown on the left in Figure \(\PageIndex{8}\), and the compression, using \(a=\frac{1}{3}\), to get \(h(x)=\frac{1}{3}{(2)}^x\) as shown on the right in Figure \(\PageIndex{8}\).
Figure \(\PageIndex{8}\): (a) \(g(x)=3{(2)}^x\) stretches the graph of \(f(x)=2^x\) vertically by a factor of \(3\). (b) \(h(x)=\frac{1}{3}{(2)}^x\) compresses the graph of \(f(x)=2^x\) vertically by a factor of \(\frac{1}{3}\).
Graphing Reflections
In addition to shifting, compressing, and stretching a graph, we can also reflect it across the
x-axis or the y-axis. When we multiply the toolkit function \(f(x)=b^x\) by \(−1\),we get a reflection across the x-axis. When we multiply the input by \(−1\),we get a reflection across the y-axis. For example, if we begin by graphing the toolkit function \(f(x)=2^x\), we can then graph the two reflections alongside it. The reflection across the \(x\)-axis, \(g(x)=−2^x\), is shown on the left side of Figure \(\PageIndex{10}\), and the reflection across the \(y\)-axis \(h(x)=2^{−x}\), is shown on the right side of Figure \(\PageIndex{10}\).
Figure \(\PageIndex{10}\): (a) \(g(x)=−2^x\) reflects the graph of \(f(x)=2^x\) across the x-axis. (b) \(h(x)=2^{−x}\) reflects the graph of \(f(x)=2^x\) across the \(y\)-axis.
Summarizing Translations of the Exponential Function
Now that we have worked with each type of translation for the exponential function, we can summarize them in
Table \(\PageIndex{3}\) to arrive at the general equation for translating exponential functions.
TRANSLATIONS OF EXPONENTIAL FUNCTIONS
A translation of an exponential function has the form
\(f(x)=ab^{x+c}+d\)
Where the toolkit function, \(y=b^x\), \(b>1\), is
shifted horizontally \(c\) units to the left;
stretched vertically by a factor of \(|a|\) if \(|a|>1\);
compressed vertically by a factor of \(|a|\) if \(0<|a|<1\);
shifted vertically \(d\) units;
reflected across the x-axis when \(a<0\).
Note that the order of the shifts, transformations, and reflections follows the order of operations.
Write the equation for the function described below. Give the horizontal asymptote, the domain, and the range.
\(f(x)=e^x\) is vertically stretched by a factor of \(2\), reflected across the y-axis, and then shifted up \(4\) units.
Solution
We want to find an equation of the general form \(f(x)=ab^{x+c}+d\). We use the description provided to find \(a, b, c,\) and \(d\).
We are given the toolkit function \(f(x)=e^x\), so \(b=e\). Note: \(e\) is a number, not a variable. It was defined in Section 4.1. Write down its definition, hand it to your professor, and make her give you extra credit! The function is stretched by a factor of \(2\), so \(a=2\). The function is reflected about the y-axis. We replace \(x\) with \(−x\) to get: \(e^{−x}\). There is no horizontal shift, so \(c=0\). The graph is shifted vertically 4 units, so \(d=4\).
Substituting in the general form we get,
\(f(x)=ab^{x+c}+d\)
\(=2e^{−x+0}+4\)
\(=2e^{−x}+4\)
The domain is \((−\infty,\infty)\); the range is \((4,\infty)\); the horizontal asymptote is \(y=4\).
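A short numeric check of the constructed function (sketch):

```python
# Check f(x) = 2e^(-x) + 4: y-intercept (0, 6), asymptote y = 4 approached as x grows (sketch).
import math

def f(x):
    return 2 * math.exp(-x) + 4

print(f(0))       # 6.0, the y-intercept from the vertical stretch by 2
print(f(30) > 4)  # True: the range is (4, infinity)
print(f(30) - 4)  # tiny positive number, so the horizontal asymptote is y = 4
```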
Exercise \(\PageIndex{6}\): Write the equation for the function described below. Give the horizontal asymptote, the domain, and the range.
\(f(x)=e^x\) is compressed vertically by a factor of \(\dfrac{1}{3}\), reflected across the x-axis, and then shifted down \(2\) units.
Answer
\(f(x)=−\frac{1}{3}e^{x}−2\); the domain is \((−\infty,\infty)\); the range is \((−\infty,−2)\); the horizontal asymptote is \(y=−2\).
Media
Access this online resource for additional instruction and practice with graphing exponential functions.
Key Equations
General form for the translation of the toolkit function \(f(x)=b^x\): \(f(x)=ab^{x+c}+d\)
Key Concepts
The graph of the function \(f(x)=b^x\) has a y-intercept at \((0, 1)\), domain \((−\infty, \infty)\), range \((0, \infty)\), and horizontal asymptote \(y=0\).
If \(b>1\), the function is increasing. The end behavior of the graph to the left: \(y\) will approach the asymptote \(y=0\); to the right: \(y\) will increase without bound.
If \(0<b<1\), the function is decreasing. The end behavior of the graph to the left: \(y\) will increase without bound; to the right: \(y\) will approach the asymptote \(y=0\).
The equation \(f(x)=b^x+d\) represents a vertical shift of the toolkit function \(f(x)=b^x\).
The equation \(f(x)=b^{x+c}\) represents a horizontal shift of the toolkit function \(f(x)=b^x\).
Approximate solutions of the equation \(f(x)=b^{x+c}+d\) can be found using a graphing calculator.
The equation \(f(x)=ab^x\), where \(a>0\), represents a vertical stretch if \(|a|>1\) or compression if \(0<|a|<1\) of the toolkit function \(f(x)=b^x\).
When the toolkit function \(f(x)=b^x\) is multiplied by \(−1\), the result, \(f(x)=−b^x\), is a reflection across the x-axis. When the input is multiplied by \(−1\), the result, \(f(x)=b^{−x}\), is a reflection across the y-axis.
All translations of the exponential function can be summarized by the general equation \(f(x)=ab^{x+c}+d\).
Using the general equation \(f(x)=ab^{x+c}+d\), we can write the equation of a function given its description.
Contributors
Lynn Marecek (Santa Ana College) and MaryAnne Anthony-Smith (formerly of Santa Ana College). This content was produced by OpenStax and is licensed under a Creative Commons Attribution License 4.0 license.
Hi, I'm learning about Lie groups to understand gauge theory (in the principal bundle context) and I'm having trouble with some concepts. Now let $a$ and $g$ be elements of a Lie group $G$; the left translation $L_{a}: G \rightarrow G$ of $g$ by $a$ is defined by $L_{a}g=ag$, which induces a map $L_{a*}: T_{g}G \rightarrow T_{ag}G$. Let $X$ be a vector field on a Lie group $G$. $X$ is said to be a left-invariant vector field if $L_{a*}X|_{g}=X|_{ag}$. A vector $V \in T_{e}G$ defines a unique left-invariant vector field $X_{V}$ throughout $G$ by $X_{V}|_{g}= L_{g*}V$, $g \in G$. Now the author gives an example of the left-invariant vector field of $GL(n,\mathbb{R})$: Let $g=\{x^{ij}(g)\}$ and $a=\{x^{ij}(a)\}$ be elements of $GL(n,\mathbb{R})$, where $e= I_{n}=\{\delta^{ij}\}$ is the unit element. The left translation is $L_{a}g=ag=\Sigma x^{ik}(a)x^{kj}(g)$. Now take a vector $V=\Sigma V^{ij}\frac{\partial}{\partial x^{ij}}|_{e} \in T_{e}G$, where the $V^{ij}$ are the entries of $V$. The left-invariant vector field generated by $V$ is:
$X_{V}|_{g}=L_{g*}V=\Sigma V^{ij}\frac{\partial}{\partial x^{ij}}\Big|_{e}\left(x^{kl}(g)x^{lm}(e)\right) \frac{\partial}{\partial x^{km}}\Big|_{g}= \Sigma V^{ij}x^{kl}(g) \delta^{l}_{i} \delta^{m}_{j} \frac{\partial}{\partial x^{km}}\Big|_{g}= \Sigma x^{ki}(g)V^{ij} \frac{\partial}{\partial x^{kj}}\Big|_{g}= \Sigma (gV)^{kj} \frac{\partial}{\partial x^{kj}}\Big|_g$
Where $gV$ is the usual matrix multiplication. This is a bit over my head. What does it mean that one has a tangent vector at the unit element of a Lie group? Maybe solving this exercise would help with the question:
Let $c(s)=\begin{pmatrix} \cos s & -\sin s & 0 \\ \sin s & \cos s & 0 \\ 0 & 0 & 1 \end{pmatrix}$ be a curve in $SO(3)$. Find the tangent vector to this curve at $I_{3}$. And why does this induce a left-invariant vector field? And btw, what is a left-invariant vector field? What does it mean geometrically? And what does it mean that a vector $V^{ij}$ has two indices? Can someone explain the example to me?
Learning Objectives To understand the autoionization reaction of liquid water. To know the relationship among pH, pOH, and p K w.
As you learned in Chapter 8 and Chapter 4, acids and bases can be defined in several different ways (Table 16.1.1). Recall that the Arrhenius definition of an acid is a substance that dissociates in water to produce H⁺ ions (protons), and an Arrhenius base is a substance that dissociates in water to produce OH⁻ (hydroxide) ions. According to this view, an acid–base reaction involves the reaction of a proton with a hydroxide ion to form water. Although Brønsted and Lowry defined an acid similarly to Arrhenius by describing an acid as any substance that can donate a proton, the Brønsted–Lowry definition of a base is much more general than the Arrhenius definition. In Brønsted–Lowry terms, a base is any substance that can accept a proton, so a base is not limited to just a hydroxide ion. This means that for every Brønsted–Lowry acid, there exists a corresponding conjugate base with one fewer proton, as we demonstrated in Chapter 8. Consequently, all Brønsted–Lowry acid–base reactions actually involve two conjugate acid–base pairs and the transfer of a proton from one substance (the acid) to another (the base). In contrast, the Lewis definition of acids and bases, discussed in Chapter 8, focuses on accepting or donating pairs of electrons rather than protons. A Lewis base is an electron-pair donor, and a Lewis acid is an electron-pair acceptor.
Table 16.1.1 Definitions of Acids and Bases
| | Acids | Bases |
| --- | --- | --- |
| Arrhenius | H⁺ donor | OH⁻ donor |
| Brønsted–Lowry | H⁺ donor | H⁺ acceptor |
| Lewis | electron-pair acceptor | electron-pair donor |
Because this chapter deals with acid–base equilibriums in
aqueous solution, our discussion will use primarily the Brønsted–Lowry definitions and nomenclature. Remember, however, that all three definitions are just different ways of looking at the same kind of reaction: a proton is an acid, and the hydroxide ion is a base—no matter which definition you use. In practice, chemists tend to use whichever definition is most helpful to make a particular point or understand a given system. If, for example, we refer to a base as having one or more lone pairs of electrons that can accept a proton, we are simply combining the Lewis and Brønsted–Lowry definitions to emphasize the characteristic properties of a base.
In Chapter 8, we also introduced the acid–base properties of water, its autoionization reaction, and the definition of pH. The purpose of this section is to review those concepts and describe them using the concepts of chemical equilibrium developed in Chapter 15.
Acid–Base Properties of Water
The structure of the water molecule, with its polar O–H bonds and two lone pairs of electrons on the oxygen atom, was described in Chapter 8 and Chapter 4, and the structure of liquid water was discussed in Chapter 13. Recall that because of its highly polar structure, liquid water can act as either an acid (by donating a proton to a base) or a base (by using a lone pair of electrons to accept a proton). For example, when a strong acid such as HCl dissolves in water, it dissociates into chloride ions (Cl⁻) and protons (H⁺). As you learned in Chapter 8, the proton, in turn, reacts with a water molecule to form the hydronium ion (H₃O⁺):

\[ HCl_{(aq)} + H_2O_{(l)} \rightarrow H_3O^+_{(aq)} + Cl^-_{(aq)} \tag{16.1.1}\]
In this reaction, HCl is the acid, and water acts as a base by accepting an H⁺ ion. The reaction in Equation 16.1.1 is often written in a simpler form by removing H₂O from each side:
\[ HCl_{(aq)} \rightarrow H^+_{(aq)} + Cl^-_{(aq)} \tag{16.1.2}\]
In Equation 16.1.2, the hydronium ion is represented by H⁺, although free H⁺ ions do not exist in liquid water.
Water can also act as an acid, as shown in Equation 16.1.3. In this equilibrium reaction, H₂O donates a proton to NH₃, which acts as a base:
\[\underset{acid}{H_2O_{(aq)}} + \underset{base}{NH_{3(aq)}} \rightleftharpoons \underset{acid}{NH^+_{4 (aq)}} + \underset{base}{OH^-_{(aq)}} \tag{16.1.3}\]
Thus water is amphiprotic: it can behave as either an acid or a base in a chemical reaction, depending on the nature of the other reactant(s). Notice that Equation 16.1.3 is an equilibrium reaction, as indicated by the double arrow.
The Ion-Product Constant of Liquid Water
Because water is amphiprotic, one water molecule can react with another to form an OH⁻ ion and an H₃O⁺ ion in an autoionization process:
\[2H_2O(l) \rightleftharpoons H_3O^+_{(aq)}+OH^−_{(aq)} \tag{16.1.4}\]
The equilibrium constant
K for this reaction can be written as follows:
\[ K=\dfrac{[H_3O^+(aq)][OH^−(aq)]}{[H_2O(l)]^2} \tag{16.1.5}\]
When pure liquid water is in equilibrium with hydronium and hydroxide ions at 25°C, the concentrations of the hydronium ion and the hydroxide ion are equal: [H₃O⁺] = [OH⁻] = 1.003 × 10⁻⁷ M. Thus the number of dissociated water molecules is very small indeed, approximately 2 ppb. We can calculate [H₂O] at 25°C from the density of water at this temperature (0.997 g/mL):
\[[H_2O(l)]=mol/L=(0.997\; \cancel{g}/mL)\left(\dfrac{1 \;mol}{18.02\; \cancel{g}}\right)\left(\dfrac{1000\; \cancel{mL}}{L}\right)=55.3\; M \tag{16.1.6}\]
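The arithmetic of Equation 16.1.6 is easy to reproduce (a sketch):

```python
# Molarity of pure water from its density and molar mass, as in Equation 16.1.6 (sketch).
density = 0.997     # g/mL at 25 C
molar_mass = 18.02  # g/mol

molarity = density / molar_mass * 1000  # 1000 mL per L, giving mol/L
print(round(molarity, 1))  # 55.3 M
```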
With so few water molecules dissociated, the equilibrium of the autoionization reaction (Equation 16.1.4) lies far to the left. Consequently, [H₂O] is essentially unchanged by the autoionization reaction and can be treated as a constant. Incorporating this constant into the equilibrium expression allows us to rearrange Equation 16.1.5 to define a new equilibrium constant, the ion-product constant of liquid water (K_w):
\[K=\dfrac{K_w}{[H_2O(l)]^2} \tag{16.1.7a}\]
with
\[K_w = K[H_2O(l)]^2 = [H_3O^+(aq)][OH^−(aq)] \tag{16.1.7b}\]
Substituting the values for [H₃O⁺] and [OH⁻] at 25°C into this expression,
\[K_w=(1.003 \times10^{−7})(1.003 \times 10^{−7})=1.006 \times 10^{−14} \tag{16.1.8}\]
Thus, to three significant figures, K_w = 1.01 × 10⁻¹⁴. Like any other equilibrium constant, K_w varies with temperature, ranging from 1.15 × 10⁻¹⁵ at 0°C to 4.99 × 10⁻¹³ at 100°C.
In pure water, the concentrations of the hydronium ion and the hydroxide ion are equal, and the solution is therefore neutral. If [H₃O⁺] > [OH⁻], however, the solution is acidic, whereas if [H₃O⁺] < [OH⁻], the solution is basic. For an aqueous solution, the H₃O⁺ concentration is a quantitative measure of acidity: the higher the H₃O⁺ concentration, the more acidic the solution. Conversely, the higher the OH⁻ concentration, the more basic the solution. In most situations that you will encounter, the H₃O⁺ and OH⁻ concentrations from the dissociation of water are so small (1.003 × 10⁻⁷ M) that they can be ignored in calculating the H₃O⁺ or OH⁻ concentrations of solutions of acids and bases, but this is not always the case.
The Relationship among pH, pOH, and pK_w
The pH scale is a concise way of describing the H₃O⁺ concentration and hence the acidity or basicity of a solution. Recall from Chapter 8 that pH and the H⁺ (H₃O⁺) concentration are related as follows:
\[pH=−log_{10}[H^{+}(aq)] \tag{16.1.9}\]
\[[H^{+}(aq)]=10^{−pH} \tag{16.1.10}\]
Because the scale is logarithmic, a pH difference of 1 between two solutions corresponds to a difference of a factor of 10 in their hydronium ion concentrations. (Refer to Essential Skills 3 in Section 8.11 if you need to refresh your memory about how to use logarithms.) Recall also that the pH of a neutral solution is 7.00 ([H₃O⁺] = 1.0 × 10⁻⁷ M), whereas acidic solutions have pH < 7.00 (corresponding to [H₃O⁺] > 1.0 × 10⁻⁷ M) and basic solutions have pH > 7.00 (corresponding to [H₃O⁺] < 1.0 × 10⁻⁷ M).
Similar notation systems are used to describe many other chemical quantities that contain a large negative exponent. For example, chemists use an analogous pOH scale to describe the hydroxide ion concentration of a solution. The pOH and [OH⁻] are related as follows:
\[pOH=−log_{10}[OH^{−}(aq)] \tag{16.1.11}\]
The constant K_w can also be expressed using this notation, where pK_w = −log K_w.
Because a neutral solution has [OH⁻] = 1.0 × 10⁻⁷ M, the pOH of a neutral solution is 7.00. Consequently, the sum of the pH and the pOH for a neutral solution at 25°C is 7.00 + 7.00 = 14.00. We can show that the sum of pH and pOH is equal to 14.00 for any aqueous solution at 25°C by taking the negative logarithm of both sides of Equation 16.1.7:
\[−\log K_w=−\log([H_3O^+(aq)][OH^−(aq)])=(−\log[H_3O^+(aq)])+(−\log[OH^−(aq)])=pH+pOH \tag{16.1.13}\]
Thus at any temperature, pH + pOH = pK_w, so at 25°C, where K_w = 1.0 × 10⁻¹⁴, pH + pOH = 14.00. More generally, the pH of any neutral solution is half of the pK_w at that temperature. The relationship among pH, pOH, and the acidity or basicity of a solution is summarized graphically in Figure 16.1.1 over the common pH range of 0 to 14. Notice the inverse relationship between the pH and pOH scales.
Note the Pattern
For any neutral solution, pH + pOH = 14.00 (at 25°C) and pH = ½ pK_w.
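The pattern can be verified numerically at 25°C (a sketch using the K_w value from Equation 16.1.8):

```python
# Check that pH + pOH = pKw = ~14 for neutral water at 25 C (sketch).
import math

Kw = 1.006e-14        # ion-product constant of water at 25 C
conc = math.sqrt(Kw)  # [H3O+] = [OH-] in a neutral solution

pH = -math.log10(conc)
pOH = -math.log10(conc)
pKw = -math.log10(Kw)

print(round(pH + pOH, 2))  # 14.0
print(round(pH, 2))        # 7.0, half of pKw
```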
Figure 16.1.1 The Inverse Relationship between the pH and pOH Scales. As pH decreases, [H⁺] and the acidity increase. As pOH increases, [OH⁻] and the basicity decrease. Common substances have pH values that range from extremely acidic to extremely basic.
Example 16.1.1
The K_w for water at 100°C is 4.99 × 10⁻¹³. Calculate pK_w for water at this temperature and the pH and the pOH for a neutral aqueous solution at 100°C. Report pH and pOH values to two decimal places.
Given: K_w
Asked for: pK_w, pH, and pOH
Strategy:
A. Calculate pK_w by taking the negative logarithm of K_w.
B. For a neutral aqueous solution, [H₃O⁺] = [OH⁻]. Use this relationship and Equation 16.1.7 to calculate [H₃O⁺] and [OH⁻]. Then determine the pH and the pOH for the solution.
Solution:
A. Because pK_w is the negative logarithm of K_w, we can write
\[pK_w = −\log K_w = −\log(4.99 \times 10^{−13}) = 12.302 \notag \]
The answer is reasonable: \(K_w\) is between \(10^{−13}\) and \(10^{−12}\), so p\(K_w\) must be between 12 and 13. B Equation 16.1.7 shows that \(K_w = [H_3O^+][OH^−]\). Because \([H_3O^+] = [OH^−]\) in a neutral solution, we can let \(x = [H_3O^+] = [OH^−]\):
\[K_w =[H_3O^+][OH^−]=(x)(x)=x^2 \notag \]
\[x=\sqrt{K_w} =\sqrt{4.99 \times 10^{−13}} =7.06 \times 10^{−7}\; M \notag \]
Because \(x\) is equal to both \([H_3O^+]\) and \([OH^−]\),
pH = pOH = \(−\log(7.06 \times 10^{−7}) = 6.15\) (to two decimal places)
We could obtain the same answer more easily (without using logarithms) by using the p\(K_w\). In this case, we know that p\(K_w\) = 12.302, and from Equation 16.1.13, we know that p\(K_w\) = pH + pOH. Because pH = pOH in a neutral solution, we can use Equation 16.1.13 directly, setting pH = pOH = y. Solving to two decimal places we obtain the following:
\[pK_w = pH + pOH = y + y = 2y \notag \]
\[y=\dfrac{pK_w}{2}=\dfrac{12.302}{2}=6.15=pH=pOH \notag \]
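The arithmetic in this example is easy to check by machine. The short Python sketch below is our illustration (the function name is ours, not part of the text); it reproduces the p\(K_w\) and pH values computed above.

```python
import math

def neutral_ph_from_kw(kw):
    """Return (pKw, pH) for a neutral solution: pKw = -log10(Kw), pH = pOH = pKw/2."""
    pkw = -math.log10(kw)
    return pkw, pkw / 2

# Kw for water at 100 °C, from the example above
pkw, ph = neutral_ph_from_kw(4.99e-13)
print(round(pkw, 3), round(ph, 2))  # 12.302 6.15
```

The same function answers the exercise that follows by substituting its \(K_w\).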
Exercise
Humans maintain an internal temperature of about 37°C. At this temperature, \(K_w = 3.55 \times 10^{−14}\). Calculate p\(K_w\) and the pH and the pOH of a neutral solution at 37°C. Report pH and pOH values to two decimal places. Answer: p\(K_w\) = 13.45; pH = pOH = 6.73. Summary
Water is amphiprotic: it can act as an acid by donating a proton to a base to form the hydroxide ion, or as a base by accepting a proton from an acid to form the hydronium ion (\(H_3O^+\)). The autoionization of liquid water produces \(OH^−\) and \(H_3O^+\) ions. The equilibrium constant for this reaction is called the ion-product constant of liquid water (\(K_w\)) and is defined as \(K_w = [H_3O^+][OH^−]\). At 25°C, \(K_w\) is \(1.01 \times 10^{−14}\); hence pH + pOH = p\(K_w\) = 14.00. Key Takeaway For any neutral solution, pH + pOH = 14.00 (at 25°C) and \(pH = \frac{1}{2}pK_w\). Key Equations Ion-product constant of liquid water
Equation 16.1.7: \(K_w = [H_3O^+][OH^−]\)
Definition of pH
Equation 16.1.9: \(pH = −\log_{10}[H^+]\)
Equation 16.1.10: \([H^+] = 10^{−pH}\)
Definition of pOH
Equation 16.1.11: \(pOH = −\log_{10}[OH^−]\)
Equation 16.1.12: \([OH^−] = 10^{−pOH}\)
Relationship among pH, pOH, and p\(K_w\)
Equation 16.1.13: \(pK_w = pH + pOH\)
Conceptual Problems
What is the relationship between the value of the equilibrium constant for the autoionization of liquid water and the tabulated value of the ion-product constant of liquid water (\(K_w\))?
The density of liquid water decreases as the temperature increases from 25°C to 50°C. Will this effect cause \(K_w\) to increase or decrease? Why?
Show that water is amphiprotic by writing balanced chemical equations for the reactions of water with \(HNO_3\) and \(NH_3\). In which reaction does water act as the acid? In which does it act as the base?
Write a chemical equation for each of the following.
Nitric acid is added to water. Potassium hydroxide is added to water. Calcium hydroxide is added to water. Sulfuric acid is added to water.
Show that \(K\) for the sum of the following reactions is equal to \(K_w\). \[HMnO_{4(aq)} \rightleftharpoons H^+_{(aq)} + MnO^−_{4(aq)} \notag \]
\[ MnO_{4}^{-} \left ( aq \right )+H_{2}O \left ( aq \right ) \rightleftharpoons HMnO_{4} \left ( aq \right )+OH^{-} \left ( aq \right ) \notag \]
Answers
\[K_{auto} = \dfrac{[H_3O^+][OH^−]}{[H_2O]^2} \notag \]
\[K_w = [H_3O^+][OH^−] = K_{auto}[H_2O]^2 \notag \]
water is the base: \[ H_2O_{(l)} + HNO_{3(g)} \rightarrow H_3O^+_{(aq)} + NO^−_{3(aq)} \notag \]water is the acid: \[H_2O_{(l)} + NH_{3(g)} \rightarrow OH^−_{(aq)} + NH^+_{4(aq)} \notag \] Numerical Problems
The autoionization of sulfuric acid can be described by the following chemical equation:
\[H_2SO_{4(l)}+H_2SO_{4(l)} \rightleftharpoons H_3SO^+_{4(soln)}+HSO^−_{4(soln)} \notag \]
At 25°C, \(K = 3 \times 10^{−4}\). Write an equilibrium constant expression for \(K(H_2SO_4)\) that is analogous to \(K_w\). The density of \(H_2SO_4\) is \(1.8\; g/cm^3\) at 25°C. What is the concentration of \(H_3SO_4^+\)? What fraction of \(H_2SO_4\) is ionized?
An aqueous solution of a substance is found to have \([H_3O^+] = 2.48 \times 10^{−8}\; M\). Is the solution acidic, neutral, or basic?
The pH of a solution is 5.63. What is its pOH? What is the \([OH^−]\)? Is the solution acidic or basic?
State whether each solution is acidic, neutral, or basic.
\([H_3O^+] = 8.6 \times 10^{−3}\; M\); \([H_3O^+] = 3.7 \times 10^{−9}\; M\); \([H_3O^+] = 2.1 \times 10^{−7}\; M\); \([H_3O^+] = 1.4 \times 10^{−6}\; M\)
Calculate the pH and the pOH of each solution.
0.15 M HBr; 0.03 M KOH; \(2.3 \times 10^{−3}\; M\) \(HNO_3\); \(9.78 \times 10^{−2}\; M\) NaOH; 0.00017 M HCl; 5.78 M HI
Calculate the pH and the pOH of each solution.
25.0 mL of \(2.3 \times 10^{−2}\; M\) HCl, diluted to 100 mL; 5.0 mL of 1.87 M NaOH, diluted to 125 mL; 5.0 mL of 5.98 M HCl added to 100 mL of water; 25.0 mL of 3.7 M \(HNO_3\) added to 250 mL of water; 35.0 mL of 0.046 M HI added to 500 mL of water; 15.0 mL of 0.0087 M KOH added to 250 mL of water.
The pH of stomach acid is approximately 1.5. What is the \([H^+]\)?
Given the pH values in parentheses, what is the \([H^+]\) of each solution? household bleach (11.4); milk (6.5); orange juice (3.5); seawater (8.5); tomato juice (4.2)
A reaction requires the addition of 250.0 mL of a solution with a pH of 3.50. What mass of HCl (in milligrams) must be dissolved in 250 mL of water to produce a solution with this pH?
If you require 333 mL of a pH 12.50 solution, how would you prepare it using a 0.500 M sodium hydroxide stock solution?
Answers \[K_{H_2SO_4}=[H_3SO_4^+][HSO_4^−]=K[H_2SO_4]^2 \notag \]
\[[H_3SO_4^+] = 0.3 M \notag \]
\([H_3SO_4^+]\) = 0.3 M; the fraction ionized is 0.02.
pOH = 8.37; \([OH^−] = 4.3 \times 10^{−9}\; M\); acidic
pH = 0.82, pOH = 13.18; pH = 12.5, pOH = 1.5; pH = 2.64, pOH = 11.36; pH = 12.990, pOH = 1.010; pH = 3.77, pOH = 10.23; pH = −0.762, pOH = 14.762
2.9 mg HCl
Contributors Anonymous
Modified by Joshua B. Halpern |
Something has been bugging me recently. It is well known that $\textbf{PCP}[poly(n), 0] = \textbf{coRP}$, but does $\textbf{PCP}[poly(n), O(1)] = \textbf{coRP}$?
I have found a proof of this statement, but something feels wrong about it. Here it goes:
Let $L \in \textbf{PCP}[poly(n), O(1)]$. There is a verifier $V$ for $L$ which flips $poly(n)$ random coins and reads $q$ bits from the certificate, where $q$ is a constant.
The proof query of $q$ bits can return $2^q$ possible values. My idea is to build a $\textbf{coRP}$ algorithm $V'$ for $L$ by simulating $2^q$ copies of $V$, where each copy is given a different answer to the certificate query. $V'$ accepts if at least one of the copies of $V$ accepts.
If the input is in $L$ then there exists a certificate so that $V$ accepts with perfect completeness. The queried bits from that certificate are necessarily in our $2^q$ possible values, so at least one of the simulated copies of $V$ accepts with perfect completeness.
If the input is not in $L$, then all copies have probability < 1 of accepting. Thus $V'$ has some probability $\rho < 1$ of accepting.
Using amplification, we can then drive $\rho$ down to at most $\frac{1}{2}$ to match the definition of $\textbf{coRP}$.
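To make the amplification step concrete: if a NO-instance is accepted with some fixed probability $\rho < 1$, running $t$ independent simulations and accepting only when all of them accept drives the acceptance probability down to $\rho^t$, so $t \ge \log(1/2)/\log\rho$ suffices. A small illustrative sketch (ours):

```python
import math

def repetitions_needed(rho, target=0.5):
    """Smallest t with rho**t <= target, for a fixed acceptance probability
    rho < 1 on NO-instances. Perfect completeness is preserved, since
    YES-instances are accepted with probability 1 on every run."""
    return math.ceil(math.log(target) / math.log(rho))

t = repetitions_needed(0.9)
print(t, 0.9 ** t <= 0.5)  # 7 True
```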
Is this proof correct?
I am having trouble solving this set of two coupled differential equations using NDSolve:
$$\left[\left\{-\frac{\mu _0 Q_e M_{\text{earth}} \text{vy}(t)}{4 \pi m_e \left(\left(5 R_{\text{earth}}-t \text{vx}(t)\right){}^2+t^2 \text{vy}(t)^2\right){}^4}=\text{vx}'(t),\frac{\mu _0 Q_e M_{\text{earth}} \text{vx}(t)}{4 \pi m_e \left(\left(5 R_{\text{earth}}-t \text{vx}(t)\right){}^2+t^2 \text{vy}(t)^2\right){}^4}=\text{vy}'(t),\text{vx}(0)=480000,\text{vy}(0)=0\right\},\{\text{vx},\text{vy}\},\{t,0,30\}\right]$$
NOTE: all the variables are defined prior to this; the same problem arises when I sub in numerical values. I am only using symbols here to make it easier for everyone to read.
The input code was the same as above expression:
NDSolve[{-((Subscript[M, earth] Subscript[Q, e] Subscript[\[Mu], 0] vy[t])/
      (4 \[Pi] Subscript[m, e] ((5 Subscript[R, earth] - t vx[t])^2 + t^2 vy[t]^2)^4)) ==
    Derivative[1][vx][t],
  (Subscript[M, earth] Subscript[Q, e] Subscript[\[Mu], 0] vx[t])/
      (4 \[Pi] Subscript[m, e] ((5 Subscript[R, earth] - t vx[t])^2 + t^2 vy[t]^2)^4) ==
    Derivative[1][vy][t],
  vx[0] == 480000, vy[0] == 0}, {vx, vy}, {t, 0, 30}]
There was a runtime error:
"NDSolve::ndnum: Encountered non-numerical value for a derivative at t == 0.`."
I followed the documentation when writing the input; is there anything else I should add?
Here is an extended answer that concludes
Summary On entropic grounds, gravitational radiative decoherence is similarly irreversible to all other forms of radiative decoherence, and in consequence, Nature's quantum state-spaces are effectively low-dimension and non-flat.
Update B For further discussion and references, see this answer to the CSTheory.StackExchange question "Physical realization of nonlinear operators for quantum computers."
Update A This augmented survey/answer provides an entropically naturalized and geometrically universalized survey of the physical ideas that are discussed by Jan Dereziński, Wojciech De Roeck, and Christian Maes in their article Fluctuations of quantum currents and unravelings of master equations (arXiv:cond-mat/0703594v2). Especially commended is their article's "Section 4: Quantum Trajectories" and the extensive bibliography they provide.
By deliberate intent, this survey/answer relates also to the lively (and ongoing) public debate that is hosted on
Gödel's Lost Letter and P=NP, between Aram Harrow and Gil Kalai, regarding the feasibility (or not) of scalable quantum computing. Naturalized survey of thermodynamics
We begin with a review, encompassing both classical and quantum thermodynamical principles, following the exposition of Zia, Redish, and McKay's highly recommended
Making sense of the Legendre transform ( AJP, 2009). The fundamental thermodynamical relations are specified as
$$ \Omega(E)=e^{\mathcal{S}(E)}\,,\quad\qquad Z(\beta)=e^{-\mathcal{A}(\beta)}\,,\\[2ex] \frac{\partial\,\mathcal{S}(E)}{\partial\,E} = \beta\,, \quad\qquad \frac{\partial\,\mathcal{A}(\beta)}{\partial\,\beta}= E\,,\\[3ex] \mathcal{S}(E) + \mathcal{A}(\beta) = \beta E\,.$$
In these relations the two conjugate thermodynamic variables
$$ E := \text{total energy}\,, \quad\qquad \beta := \text{inverse temperature}\,, $$
appear as arguments of four fundamental thermodynamic functions
$$ \mathcal{S} := \text{entropy function}\,, \quad\qquad \mathcal{A} := \text{free energy function}\,, \\ {Z} := \text{partition function}\,,\quad\qquad {\Omega} := \text{volume function}\,.$$
Any one of the four thermodynamic potentials $(\mathcal{S},\mathcal{A},Z,\Omega)$ determines the other three via elementary logarithms, exponentials, Laplace Transforms, and Legendre transforms, and moreover, any of the four potentials can be regarded as a function of either of the two conjugate variables.
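These relations are easy to verify numerically for a toy system. The sketch below is our illustration (not part of the cited exposition): it takes a single two-level system with energies $0$ and $\varepsilon$, computes $Z$, $\mathcal{A} = -\ln Z$, and $E = \partial\mathcal{A}/\partial\beta$, then checks that $\mathcal{S}(E) + \mathcal{A}(\beta) = \beta E$ and $\partial\mathcal{S}/\partial E = \beta$.

```python
import math

eps = 1.0  # assumed level splitting of a toy two-level system (energies 0, eps)

def Z(beta):   # partition function, Z = e^{-A}
    return 1.0 + math.exp(-beta * eps)

def A(beta):   # free energy function, A = -ln Z
    return -math.log(Z(beta))

def E(beta):   # mean energy, E = dA/dbeta
    return eps * math.exp(-beta * eps) / Z(beta)

def S(beta):   # entropy via the Legendre relation S = beta*E - A
    return beta * E(beta) - A(beta)

beta = 0.7
assert abs(S(beta) + A(beta) - beta * E(beta)) < 1e-12   # S + A = beta*E

# dS/dE = beta, checked by finite differences in beta
db = 1e-6
slope = (S(beta + db) - S(beta)) / (E(beta + db) - E(beta))
print(abs(slope - beta) < 1e-3)  # True
```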
Aside The preceding relations assume that only one quantity is globally conserved and locally transported, namely the energy $E$. When more than one quantity is conserved and transported — charge, mass, chemical species, and magnetic moments are typical examples — then the above relations generalize naturally to a vector space of conserved quantities and a dual vector space of thermodynamically conjugate potentials. None of the following arguments are fundamentally altered by this multivariate thermodynamical extension. Naturalized survey of Hamiltonian dynamics
To make progress toward computing concrete thermodynamic potential functions, we must specify a Hamiltonian dynamical system. In the notation of John Lee's
Introduction to Smooth Manifolds we specify the Hamiltonian triad $(\mathcal{M},H,\omega)$ in which
$$\begin{array}{rl}\mathcal{M}\ \ :=&\text{state-space manifold}\,,\\H\,\colon \mathcal{M}\to\mathbb{R}\ \ :=&\text{Hamiltonian function on $\mathcal{M}$}\,,\\\omega\,\llcorner\,\colon T\mathcal{M}\to T^*\mathcal{M}\ \ :=& \text{symplectic structure on $\mathcal{M}$}\,.\end{array}\hspace{1em}$$
The dynamical flow generator $X\colon \mathcal{M}\to T\mathcal{M}$ is given by Hamilton's equation
$$\omega\,\llcorner\,X = dH\,.$$
From the standard (and geometrically natural) ergodic hypothesis — that thermodynamic ensembles of Hamiltonian trajectories fill state-spaces uniformly, and that time averages of individual trajectories equal ensemble averages at fixed times — we have ${\Omega}$ given naturally as a level set volume
$$\text{(1a)}\qquad\qquad\quad\quad \Omega(E) = \int_\mathcal{M} \star\,\delta\big(E-H(\mathcal{M})\big)\,, \qquad\qquad\qquad\qquad\qquad$$
where "$\star$" is the
Hodge star operator that is associated to the natural volume form $V$ on $\mathcal{M}$ that is given as the maximal exterior power $V=\wedge^{(\text{dim}\,\mathcal{M})/2}(\omega)$. This expression for $\Omega(E)$ is the geometrically naturalized presentation of Zia, Redish, and McKay's equation (20).
Taking a Laplace transform of (1a) we obtain an equivalent (and classically familiar) expression for the partition function $Z(\beta)$
$$\text{(1b)}\qquad\qquad\qquad Z(\beta) = \int_\mathcal{M} \star\exp\big({-}\beta\,H(\mathcal{M})\big)\,, \qquad\qquad\qquad\qquad$$
The preceding applies to Hamiltonian systems in general and thus quantum dynamical systems in particular. Yet in quantum textbooks the volume/partition functions (1ab) do not commonly appear, for two reasons. The first reason is that John von Neumann derived in 1930 — before the ideas of geometric dynamics were broadly extant — a purely algebraic partition function that, on flat state-spaces, is easier to evaluate than the geometrically natural (1a) or (1b). Von Neumann's partition function is $$\text{(2)}\qquad Z(\beta) = \text{trace}\,\exp\big({-}\beta\,\mathsf{H_{op}}\big)\quad\text{where}\quad [\mathsf{H_{op}}]_{\alpha\gamma} = \partial_{\,\bar\psi_\alpha}\partial_{\,\psi_\gamma} H(\mathcal{M})\,.\qquad\qquad$$Here the $\boldsymbol{\psi}$ are the usual complete set of (complex) orthonormal coordinate functions on the (flat, Kählerian) Hilbert state-space $\mathcal{M}$. Here $H(\mathcal{M})$ is real and the functional form of $H(\mathcal{M})$ is restricted to be bilinear in $\boldsymbol{\bar\psi},\boldsymbol{\psi}$; therefore the matrix $[\mathsf{H_{op}}]$ is Hermitian and uniform on the state-space manifold $\mathcal{M}$. We appreciate that $Z(\beta)$ as defined locally in (2) is uniform globally iff $\mathcal{M}$ is geometrically flat; thus von Neumann's partition function does not naturally extend to non-flat complex dynamical manifolds.
We naively expect (or hope) that the geometrically natural thermodynamic volume/partition functions (1ab) are thermodynamically consistent with von Neumann's elegant algebraic partition function (2), yet — surprisingly and dismayingly — they are not. Surprisingly, because it is not immediately evident why the geometric partition function (1b) should differ from von Neumann's partition function (2). Dismayingly, because the volume/partition functions (1ab) pull back naturally to low-dimension non-flat state-spaces that are attractive venues for quantum systems engineering, and yet it is von Neumann's partition function (2) that accords with experiment.
We would like to enjoy the best of
both worlds: the geometric naturality of the ergodic expressions (1ab) and the algebraic naturality of von Neumann's entropic expression (2). The objective of restoring and respecting the mutual consistency of (1ab) and (2) leads us to the main point of this answer, which we now present. The main points: sustaining thermodynamical consistency
Assertion I For (linear) quantum dynamics on (flat) Hilbert spaces, the volume function $\Omega(E)$ and partition function $Z(\beta)$ from (1ab) are thermodynamically inconsistent with the partition function $Z(\beta)$ from (2).
Here by "inconsistent" is meant not "subtly inconsistent" but "grossly inconsistent". As a canonical example, the reader is encouraged to compute the heat capacity of an ensemble of weakly interacting qubits by both methods, and to verify that equations (1ab) predict a heat capacity for an $n$-qubit system that is superlinear in $n$. To say it another way, for strictly unitary dynamics (1ab) predict heat capacities that are non-intensive.
So the second — and most important — reason that the volume/partition functions (1ab) are not commonly given in quantum mechanical textbooks is that strictly unitary evolution on strictly flat quantum state-spaces yields non-intensive predictions for thermodynamic quantities that experimentally
are intensive.
Fortunately, the remedy is simple, and indeed has long been known: retain the geometric thermodynamic functions (1ab) in their natural form, and instead alter the assumption of unitary evolution, in such a fashion as to naturally restore thermodynamic extensivity.
Assertion II Lindbladian noise of sufficient magnitude to spatially localize thermodynamic potentials, when unraveled as non-Hamiltonian (stochastic) quantum trajectories, restores the thermodynamical consistency of the volume/partition functions $(\Omega(E),Z(\beta))$ from (1ab) with the partition function $Z(\beta)$ from (2).
Verifying Assertion II is readily (but tediously) accomplished by the Onsager-type methods that are disclosed in two much-cited articles: Hendrik Casimir's On Onsager's Principle of Microscopic Reversibility (RMP, 1945) and Herbert Callen's The Application of Onsager's Reciprocal Relations to Thermoelectric, Thermomagnetic, and Galvanomagnetic Effects (PR, 1948). A readable textbook (among many) that covers this material is Charles Kittel's Elementary Statistical Physics (1958).
To help in translating Onsager theory into the natural language of geometric dynamics, a canonical textbook is John Lee's
Introduction to Smooth Manifolds (2002), which provides the mathematical toolset to appreciate the research objectives articulated in (for example) Matthias Blau's on-line lecture notes Symplectic Geometry and Geometric Quantization (1992).
Unsurprisingly, in light of modern findings in quantum information theory, the sole modification that naturality and universality require of Onsager's theory is this: the fluctuations that are the basis of Onsager's relations must be derived naturally from unravelled Lindblad processes, by the natural association of each Lindbladian generator to an observation-and-control process.
We note that it is neither mathematically natural, nor computationally unambiguous, nor physically correct, to compute Onsager fluctuations by non-Lindbladian methods. For example, wrong answers are obtained when we specify Onsager fluctuations as operator expectation fluctuations, because this procedure does not account for the localizing effects of Lindbladian dynamics.
Concretely, the fluctuating quantities that enter in the Onsager formulation are given as the data-streams that are naturally associated to Lindbladian observation processes … observation processes that are properly accounted in the overall system dynamics, in accord with the teaching of quantum information theory. Thereby Onsager's classical thermodynamical theory of global conservation and local transport processes straightforwardly naturalizes and universalizes — via the mathematical tool-set that quantum information theory provides — as a dynamical theory of the observation of natural processes.
Physical summary Consistency of the geometrically natural thermodynamic functions (1ab) with the algebraically natural thermodynamic function (2) is restored because the non-unitary stochastic flow associated to unraveled Lindbladian noise reduces the effective dimensionality of the quantum state-space manifold, and also convolutes the quantum state-space geometry, in such a fashion as to naturally reconcile geometric descriptions of thermodynamics (1ab) with von Neumann-style algebraic descriptions of thermodynamics (and information theory) on Hilbert state-spaces (2).
Assertion III Thermodynamic consistency requires, first, that quantum dynamical flows be non-unitary and, second, that the resulting trajectories be restricted to non-flat state-spaces of polynomial dimensionality.
We thus appreciate the broad principle that quantum physics can make sensible predictions regarding physical quantities that are globally conserved and locally transported only by specifying non-unitary dynamical flows on non-flat quantum state-spaces.
Duality of classical physics versus quantum physics The above teaching regards "classical" and "quantum" as well-posed and mutually consistent limiting cases of a broad class of naturalized and universalized Hamiltonian/Kählerian/Lindbladian dynamical frameworks. For practical purposes the most interesting dynamical systems are intermediate between fully classical and fully quantum, and the thrust of the preceding analysis is that the thermodynamical properties of these systems are naturally and universally defined, calculable, and observable.
Duality of fundamental physics versus applied physics The fundamental physics challenge of constructing a thermodynamically and informatically consistent description of non-unitary quantum dynamics on non-flat complex state-spaces — a challenge that is widely appreciated as difficult and perhaps even impossible — is appreciated as dual to the practical engineering challenge of efficiently simulating noisy quantum system dynamics … a challenge that is widely appreciated as feasible.
Remarks upon gravitational decoherence The above analysis establishes that decoherence associated to gravitational coupling — and more broadly the ubiquity of the superradiant dynamics that is associated to every bosonic field of the vacuum — and further supposing this decoherence to be "irreversible" (in Scott's phrase), would have the following beneficent implications: the naturality and universality of thermodynamics is thereby preserved, and quantum trajectories are effectively restricted to low-dimension non-flat state-spaces, and the efficient numerical simulation of generic quantum systems is thus permitted.
From a fundamental physics point-of-view, the converse hypothesis is attractive:
Kählerian hypothesis Nature's quantum state-spaces are generically low-dimension and non-flat in consequence of irreversible decoherence mechanisms that are generically associated to bosonic vacuum excitations. Conclusions
As with the
ergodic hypothesis, so with the Kählerian hypothesis, in the sense that regardless of whether the Kählerian hypothesis is fundamentally true or not — and regardless of whether gravitation radiation accounts for it or not — for practical quantum systems engineering purposes experience teaches us that the Kählerian hypothesis is true.
The teaching that the Kählerian hypothesis is
effectively true is good news for a broad class of 21st century enterprises that seek to press against quantum limits to speed, sensitivity, power, computational efficiency, and channel capacity … and it is very good news especially for the young mathematicians, scientists, engineers, and entrepreneurs who hope to participate in creating these enterprises.
Acknowledgements This answer benefited greatly from enjoyable conversations with Rico Picone, Sol Davis, Doug and Chris Mounce, Joe Garbini, Steve Flammia, and especially Aram Harrow; any remaining errors and infelicities are mine alone. The answer is also very largely informed by the ongoing debate of Aram Harrow with Gil Kalai, regarding the feasibility (or not) of scalable quantum computing, that has been hosted on the web page Gödel's Lost Letter and P=NP, regarding which appreciation and thanks are extended. |
Associativity is About Composition
Complex multiplication can be visualized as rotating and scaling the complex plane. So, besides thinking of a complex number $z$ as a point, we can think of it as encoding a transformation composed of a rotation and scaling.
Recall associativity:$$ z \cdot (q \cdot c) = (z \cdot q) \cdot c $$
If we think of $z$ and $q$ as transformations, and $c$ as a point being transformed, then this reads:$$ z(q(c)) = (z \cdot q)(c) $$
That is, starting at a point $c$ and applying the transformation $q$ and then the transformation $z$ is the
same thing as applying the transformation "$z \cdot q$" to $c$. Thus $z \cdot q$ is the transformation "do $q$ and then do $z$", which is to say, the composition of $z$ and $q$.
So what associativity tells us is really that $z\cdot q = z \circ q$
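This is easy to see concretely. The snippet below (illustrative only, using Python's built-in complex numbers) checks that applying $q$ then $z$ to a point agrees with applying the single transformation $z \cdot q$:

```python
z, q, c = 2 + 1j, 1 - 3j, 0.5 + 0.25j  # two "transformations" and a point

# applying q then z agrees with applying the single transformation z*q
assert abs(z * (q * c) - (z * q) * c) < 1e-12

# as functions: (z.) composed with (q.) agrees with ((z*q).) on the point c
zf = lambda x: z * x
qf = lambda x: q * x
assert abs(zf(qf(c)) - (z * q) * c) < 1e-12
print("multiplication acts as composition")
```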
Adding Rigor
Technically speaking, we've been a little loose here. A complex number $z$ does not
quite encode a transformation; $z\cdot = x \mapsto z\cdot x$ $(z\cdot)(x) = z\cdot x$ $z\cdot$ is a function $f$ defined by $f(x) = z\cdot x$.
Under this interpretation, associativity looks like the following:$$ \begin{align*} z \cdot (q \cdot c) &= (z \cdot q) \cdot c \\ {z\cdot}({q\cdot}(c)) &= ((z \cdot q)\cdot)(c) \\ ({z\cdot} \circ {q\cdot})(c) &= ((z \cdot q)\cdot)(c) \\ {z\cdot} \circ {q\cdot} &= { (z\cdot q)\cdot } \hspace{10pt} \text{since $c$ is arbitrary} \end{align*} $$
Which reads similarly, but not quite the same, as our previous (not-quite correct) conclusion that multiplication is composition. Instead, this says that multiplication gives a result whose associated transformation is the composition of the associated transformations of the factors.
While this interpretation is more correct, it is ugly. And if you really stare at it long enough, it basically says the same thing as the loose version anyway.
Point being, this is the correct version, but don't remember it. Remember the other one.
And now we can generalize! If we tried to generalize with the not-quite-correct interpretation, we would say that $\star$ is associative iff $a\star b = a\circ b$. But what is $a\circ b$? It's something loose and informal.
In fact, if we tried to get a formal definition, we would just end up with $z \circ q = {z\cdot} \circ {q\cdot}$. This is because we would want $z \circ q$ to represent "apply $q$ then $z$". Thus, we would define
In this section, we will investigate the shape of this room and its real-world applications, including how far apart two people in Statuary Hall can stand and still hear each other whisper.
For the exercises 1-4, write the equation of the ellipse in standard form. Then identify the center, vertices, and foci.
1) \(\dfrac{x^2}{25}+\dfrac{y^2}{64}=1\)
Answer
\(\dfrac{x^2}{5^2}+\dfrac{y^2}{8^2}=1\); center: \((0,0)\); Vertices: \((5,0)\), \((-5,0)\), \((0,8)\), \((0,-8)\); foci: \((0,\sqrt{39})\), \((0,-\sqrt{39})\)
2) \(\dfrac{(x-2)^2}{100}+\dfrac{(y+3)^2}{36}=1\)
3) \(9x^2+y^2+54x-4y+76=0\)
Answer
\(\dfrac{(x+3)^2}{1^2}+\dfrac{(y-2)^2}{3^2}=1\); center: \((-3,2)\); Vertices: \((-2,2)\), \((-4,2)\), \((-3,5)\), \((-3,-1)\); foci: \((-3,2+2\sqrt{2})\), \((-3,2-2\sqrt{2})\)
4) \(9x^2+36y^2-36x+72y+36=0\)
For the exercises 5-8, graph the ellipse, noting center, vertices, and foci.
5) \(\dfrac{x^2}{36}+\dfrac{y^2}{9}=1\)
Answer
center: \((0,0)\); Vertices: \((6,0)\), \((-6,0)\), \((0,3)\), \((0,-3)\); foci: \((3\sqrt{3},0)\), \((-3\sqrt{3},0)\)
6) \(\dfrac{(x-4)^2}{25}+\dfrac{(y+3)^2}{49}=1\)
7) \(4x^2+y^2+16x+4y-44=0\)
Answer
center: \((-2,-2)\); Vertices: \((2,-2)\), \((-6,-2)\), \((-2,6)\), \((-2,-10)\); foci: \((-2,-2+4\sqrt{3})\), \((-2,-2-4\sqrt{3})\)
8) \(2x^2+3y^2-20x+12y+38=0\)
For the exercises 9-11, use the given information to find the equation for the ellipse.
9) Center at \((0,0)\), focus at \((3,0)\), vertex at \((-5,0)\)
Answer
\(\dfrac{x^2}{25}+\dfrac{y^2}{16}=1\)
10) Center at \((2,-2)\), vertex at \((7,-2)\), focus at \((4,-2)\)
11) A whispering gallery is to be constructed such that the foci are located \(35\) feet from the center. If the length of the gallery is to be \(100\) feet, what should the height of the ceiling be?
Answer
Approximately \(35.71\) feet
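The answer follows from the ellipse relation \(b = \sqrt{a^2 - c^2}\) with \(a = 100/2 = 50\) and \(c = 35\); a one-line check (our sketch, not part of the text):

```python
import math

a = 100 / 2                       # semi-major axis: half the gallery length
c = 35                            # center-to-focus distance
height = math.sqrt(a**2 - c**2)   # ceiling height = semi-minor axis b
print(round(height, 2))  # 35.71
```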
In analytic geometry, a hyperbola is a conic section formed by intersecting a right circular cone with a plane at an angle such that both halves of the cone are intersected. This intersection produces two separate unbounded curves that are mirror images of each other.
For the exercises 1-4, write the equation of the hyperbola in standard form. Then give the center, vertices, and foci.
1) \(\dfrac{x^2}{81}-\dfrac{y^2}{9}=1\)
2) \(\dfrac{(y+1)^2}{16}-\dfrac{(x-4)^2}{36}=1\)
Answer
\(\dfrac{(y+1)^2}{4^2}-\dfrac{(x-4)^2}{6^2}=1\); center: \((4,-1)\); Vertices: \((4,3)\), \((4,-5)\); foci: \((4,-1+2\sqrt{13})\), \((4,-1-2\sqrt{13})\)
3) \(9y^2-4x^2+54y-16x+29=0\)
4) \(3x^2-y^2-12x-6y-9=0\)
Answer
\(\dfrac{(x-2)^2}{2^2}-\dfrac{(y+3)^2}{(2\sqrt{3})^2}=1\); center: \((2,-3)\); Vertices: \((4,-3)\), \((0,-3)\); foci: \((6,-3)\), \((-2,-3)\)
For the exercises 5-8, graph the hyperbola, labeling vertices and foci.
5) \(\dfrac{x^2}{9}-\dfrac{y^2}{16}=1\)
6) \(\dfrac{(y-1)^2}{49}-\dfrac{(x+1)^2}{4}=1\)
Answer
7) \(x^2-4y^2+6x+32y-91=0\)
8) \(2y^2-x^2-12y-6=0\)
Answer
For the exercises 9-10, find the equation of the hyperbola.
9) Center at \((0,0)\), vertex at \((0,4)\), focus at \((0,-6)\)
10) Foci at \((3,7)\) and \((7,7)\), vertex at \((6,7)\)
Answer
\(\dfrac{(x-5)^2}{1}-\dfrac{(y-7)^2}{3}=1\)
Like the ellipse and hyperbola, the parabola can also be defined by a set of points in the coordinate plane. A parabola is the set of all points in a plane that are the same distance from a fixed line, called the directrix, and a fixed point (the focus) not on the directrix.
For the exercises 1-4, write the equation of the parabola in standard form. Then give the vertex, focus, and directrix.
1) \(y^2=12x\)
2) \((x+2)^2=\dfrac{1}{2}(y-1)\)
Answer
\((x+2)^2=\dfrac{1}{2}(y-1)\); vertex: \((-2,1)\); focus: \( \left( -2, \dfrac{9}{8} \right ) \); directrix: \(y=\dfrac{7}{8}\)
3) \(y^2-6y-6x-3=0\)
4) \(x^2+10x-y+23=0\)
Answer
\((x+5)^2=(y+2)\); vertex: \((-5,-2)\); focus: \( \left( -5, -\dfrac{7}{4} \right ) \); directrix: \(y=-\dfrac{9}{4}\)
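The completing-the-square step behind this answer can be automated. A small sketch (our helper, for parabolas of the special form \(x^2 + bx - y + k = 0\), where \(4p = 1\)):

```python
def vertical_parabola(b, k):
    """For x^2 + b*x - y + k = 0, rewrite as (x - h)^2 = y - v;
    since 4p = 1, the focus is (h, v + 1/4) and the directrix is y = v - 1/4."""
    h = -b / 2             # vertex x-coordinate
    v = k - h ** 2         # vertex y-coordinate
    p = 1 / 4
    return (h, v), (h, v + p), v - p

vertex, focus, directrix = vertical_parabola(10, 23)   # exercise 4
print(vertex, focus, directrix)  # (-5.0, -2.0) (-5.0, -1.75) -2.25
```

These values match the vertex \((-5,-2)\), focus \((-5,-\frac{7}{4})\), and directrix \(y=-\frac{9}{4}\) above.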
For the exercises 5-8, graph the parabola, labeling vertex, focus, and directrix.
5) \(x^2+4y=0\)
6) \((y-1)^2=\dfrac{1}{2}(x+3)\)
Answer
7) \(x^2-8x-10y+46=0\)
8) \(2y^2+12y+6x+15=0\)
Answer
For the exercises 9-11, write the equation of the parabola using the given information.
9) Focus at \((-4,0)\); directrix is \(x=4\)
10) Focus at \( \left( 2, \dfrac{9}{8} \right ) \); directrix is \(y=\dfrac{7}{8}\)
Answer
\((x-2)^2= \left (\dfrac{1}{2} \right ) (y-1)\)
11) A cable TV receiving dish is the shape of a paraboloid of revolution. Find the location of the receiver, which is placed at the focus, if the dish is \(5\) feet across at its opening and \(1.5\) feet deep.
In previous sections of this chapter, we have focused on the standard form equations for nondegenerate conic sections. In this section, we will shift our focus to the general form equation, which can be used for any conic. The general form is set equal to zero, and the terms and coefficients are given in a particular order, as shown below.
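The classification rule used in the exercises below depends only on the discriminant \(B^2 - 4AC\) of the general form \(Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0\). A minimal sketch (ours, assuming a nondegenerate conic):

```python
def classify_conic(A, B, C):
    """Classify Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0 by its discriminant,
    assuming the conic is nondegenerate."""
    d = B ** 2 - 4 * A * C
    if d < 0:
        return "ellipse"
    if d == 0:
        return "parabola"
    return "hyperbola"

print(classify_conic(16, 24, 9))  # parabola
print(classify_conic(4, 1, 2))    # ellipse
```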
For the exercises 1-3, determine which of the conic sections is represented.
1) \(16x^2+24xy+9y^2+24x-60y-60=0\)
Answer
\(B^2 - 4AC =0\), parabola
2) \(4x^2+14xy+5y^2+18x-6y+30=0\)
3) \(4x^2+xy+2y^2+8x-26y+9=0\)
Answer
\(B^2 - 4AC = -31 < 0\), ellipse
For the exercises 4-5, determine the angle \(\theta \) that will eliminate the \(xy\) term, and write the corresponding equation without the \(xy\) term.
4) \(x^2+4xy-2y^2-6=0\)
5) \(x^2-xy+y^2-6=0\)
Answer
\(\theta =45^{\circ},x'^2+3y'^2-12=0\)
For the exercises 6-8, graph the equation relative to the \(x'y'\) system in which the equation has no \(x'y'\) term.
6) \(9x^2-24xy+16y^2-80x-60y+100=0\)
7) \(x^2-xy+y^2-2=0\)
Answer
\(\theta =45^{\circ}\)
8) \(6x^2+24xy-y^2-12x+26y+11=0\)
In this section, we will learn how to define any conic in the polar coordinate system in terms of a fixed point, the focus at the pole, and a line, the directrix, which is perpendicular to the polar axis.
For the exercises 1-4, given the polar equation of the conic with focus at the origin, identify the eccentricity and directrix.
1) \(r=\dfrac{10}{1-5\cos \theta }\)
Answer
Hyperbola with \(e=5\) and directrix \(2\) units to the left of the pole.
2) \(r=\dfrac{6}{3+2\cos \theta }\)
3) \(r=\dfrac{1}{4+3\sin \theta }\)
Answer
Ellipse with \(e=\dfrac{3}{4}\) and directrix \(\dfrac{1}{3}\) unit above the pole.
4) \(r=\dfrac{3}{5-5\sin \theta }\)
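Each of these is solved by normalizing the denominator to \(1 \pm e\cos\theta\) (or \(\sin\theta\)) and reading off \(e\) and the directrix distance \(p\). A small helper (ours) for the cosine form \(r = N/(D + E\cos\theta)\) with \(D > 0\):

```python
def polar_conic(N, D, E):
    """For r = N / (D + E*cos(theta)) with D > 0, divide through by D to
    obtain r = e*p / (1 + e*cos(theta)); returns (e, p)."""
    e = abs(E) / D       # eccentricity
    p = (N / D) / e      # directrix distance, since e*p = N/D
    return e, p

print(polar_conic(10, 1, -5))  # exercise 1: (5.0, 2.0)
print(polar_conic(1, 4, 3))    # exercise 3: e = 0.75, p = 1/3
```

The sign of \(E\) (or use of \(\sin\theta\)) determines on which side of the pole the directrix lies.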
For the exercises 5-8, graph the conic given in polar form. If it is a parabola, label the vertex, focus, and directrix. If it is an ellipse or a hyperbola, label the vertices and foci.
5) \(r=\dfrac{3}{1-\sin \theta }\)
Answer
6) \(r=\dfrac{8}{4+3\sin \theta }\)
7) \(r=\dfrac{10}{4+5\cos \theta }\)
Answer
8) \(r=\dfrac{9}{3-6\cos \theta }\)
For the exercises 9-10, given information about the graph of a conic with focus at the origin, find the equation in polar form.
9) Directrix is \(x=3\) and eccentricity \(e=1\)
Answer
\(r=\dfrac{3}{1+\cos \theta }\)
10) Directrix is \(y=-2\) and eccentricity \(e=4\)
Practice Test
For the exercises 1-2, write the equation in standard form and state the center, vertices, and foci.
1) \(\dfrac{x^2}{9}+\dfrac{y^2}{4}=1\)
Answer
\(\dfrac{x^2}{3^2}+\dfrac{y^2}{2^2}=1\); center: \((0,0)\); vertices: \((3,0)\), \((-3,0)\), \((0,2)\), \((0,-2)\); foci: \((\sqrt{5},0)\), \((-\sqrt{5},0)\)
2) \(9y^2+16x^2-36y+32x-92=0\)
For the exercises 3-6, sketch the graph, identifying the center, vertices, and foci.
3) \(\dfrac{(x-3)^2}{64}+\dfrac{(y-2)^2}{36}=1\)
Answer
center: \((3,2)\); vertices: \((11,2)\), \((-5,2)\), \((3,8)\), \((3,-4)\); foci: \((3+2\sqrt{7},2)\), \((3-2\sqrt{7},2)\)
4) \(2x^2+y^2+8x-6y-7=0\)
5) Write the standard form equation of an ellipse with a center at \((1,2)\), vertex at \((7,2)\), and focus at \((4,2)\).
Answer
\(\dfrac{(x-1)^2}{36}+\dfrac{(y-2)^2}{27}=1\)
6) A whispering gallery is to be constructed with a length of \(150\) feet. If the foci are to be located \(20\) feet away from the wall, how high should the ceiling be?
For the exercises 7-8, write the equation of the hyperbola in standard form, and give the center, vertices, foci, and asymptotes.
7) \(\dfrac{x^2}{49}-\dfrac{y^2}{81}=1\)
Answer
\(\dfrac{x^2}{7^2}-\dfrac{y^2}{9^2}=1\); center: \((0,0)\); vertices: \((7,0)\), \((-7,0)\); foci: \((\sqrt{130},0)\), \((-\sqrt{130},0)\); asymptotes: \(y=\pm \dfrac{9}{7}x\)
8) \(16y^2-9x^2+128y+112=0\)
For the exercises 9-11, graph the hyperbola, noting its center, vertices, and foci. State the equations of the asymptotes.
9) \(\dfrac{(x-3)^2}{25}-\dfrac{(y+3)^2}{1}=1\)
Answer
center: \((3,-3)\); vertices: \((8,-3)\), \((-2,-3)\); foci: \((3+\sqrt{26},-3)\), \((3-\sqrt{26},-3)\); asymptotes: \(y=\pm \dfrac{1}{5}(x-3)-3\)
10) \(y^2-x^2+4y-4x-18=0\)
11) Write the standard form equation of a hyperbola with foci at \((1,0)\), and \((1,6)\), and a vertex at \((1,2)\).
Answer
\(\dfrac{(y-3)^2}{1}-\dfrac{(x-1)^2}{8}=1\)
For the exercises 12-13, write the equation of the parabola in standard form, and give the vertex, focus, and equation of the directrix.
12) \(y^2+10x=0\)
13) \(3x^2-12x-y+11=0\)
Answer
\((x-2)^2=\dfrac{1}{3}(y+1)\); vertex: \((2,-1)\); focus: \((2,-\dfrac{11}{12})\); directrix: \(y=-\dfrac{13}{12}\)
For the exercises 14-17, graph the parabola, labeling the vertex, focus, and directrix.
14) \((x-1)^2=-4(y+3)\)
15) \(y^2+8x-8y+40=0\)
Answer
16) Write the equation of a parabola with a focus at \((2,3)\) and directrix \(y=-1\).
17) A searchlight is shaped like a paraboloid of revolution. If the light source is located \(1.5\) feet from the base along the axis of symmetry, and the depth of the searchlight is \(3\) feet, what should the width of the opening be?
Answer
Approximately \(8.49\) feet
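The searchlight answer follows from the parabola equation \(x^2=4py\); a quick illustrative check:

```python
import math

# Exercise 17: searchlight cross-section x^2 = 4*p*y with the focus at
# p = 1.5 ft on the axis and depth y = 3 ft; the half-width of the
# opening is x = sqrt(4*p*y), so the full width is twice that.
p, depth = 1.5, 3.0
half_width = math.sqrt(4 * p * depth)  # sqrt(18)
width = 2 * half_width
print(round(width, 2))  # 8.49
```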
For the exercises 18-19, determine which conic section is represented by the given equation, and then determine the angle \(\theta\) that will eliminate the \(xy\) term.
18) \(3x^2-2xy+3y^2=4\)
19) \(x^2+4xy+4y^2+6x-8y=0\)
Answer
parabola; \(\theta \approx 63.4^{\circ}\)
For the exercises 20-21, rewrite in the \(x'y'\) system without the \(x'y'\) term, and graph the rotated graph.
20) \(11x^2+10\sqrt{3}xy+y^2=4\)
21) \(16x^2+24xy+9y^2-125x=0\)
Answer
\(x'^2-4x'+3y'=0\)
For the exercises 22-23, identify the conic with focus at the origin, and then give the directrix and eccentricity.
22) \(r=\dfrac{3}{2-\sin \theta }\)
23) \(r=\dfrac{5}{4+6\cos \theta }\)
Answer
Hyperbola with \(e=\dfrac{3}{2}\) and directrix \(\dfrac{5}{6}\) units to the right of the pole.
For the exercises 24-26, graph the given conic section. If it is a parabola, label vertex, focus, and directrix. If it is an ellipse or a hyperbola, label vertices and foci.
24) \(r=\dfrac{12}{4-8\sin \theta }\)
25) \(r=\dfrac{2}{4+4\sin \theta }\)
Answer
26) Find a polar equation of the conic with focus at the origin, eccentricity of \(e=2\), and directrix: \(x=3\). |
The deeper problem with this supposition is that it assumes a conceptual identity between the notions of Hamiltonian and energy - and that identity is not correct. Some discernment is needed to separate the two.
Conceptually, energy is a physical quantity that is, in a sense, "nature's money" - the "currency" you have to expend to produce physical changes in the world. On a somewhat deeper level, energy is to time what momentum is to space. This shows up across many areas. Noether's theorem relates the law of conservation of energy to the fact that the history of a system can be translated back and forth in time and still work the same way - i.e. that there is no preferred point in time in the laws of physics - and likewise relates conservation of momentum to translating the system around in space. The same pairing occurs in relativity, where the "four-momentum" incorporates energy as its temporal component.
The Hamiltonian, on the other hand, is a mathematically modified version of the Lagrangian, obtained through what is called the Legendre transform. The Lagrangian is a way to describe how forces affect the time evolution of a physical system in terms of an optimization process, and the Hamiltonian converts this into an often more useful/intuitive differential-equation process. In many cases the Hamiltonian is equal to the system's total mechanical energy $E_\mathrm{mech}$, i.e. $K + U$, but this is not always so even in classical Hamiltonian mechanics - a fact which indicates and underscores the basic conceptual separation between the two.
In quantum mechanics, the "energy is to time what momentum is to space" concept manifests in that it is the
generator of temporal translation, or the generator of evolution, in the same way that momentum is the generator of spatial translation. In particular, just as we have a "momentum operator"
$$\hat{p} := -i\hbar \frac{\partial}{\partial x}$$
which translates a position-space (here using one dimension for simplicity) wave function (mathematical representation of restricted information regarding the particle position on the part of an agent) $\psi$ via the somewhat-loose "infinitesimal equation"
$$\psi(x - dx) = \psi(x) - \left(\frac{i}{\hbar} \hat{p} \psi\right)(x)\, dx$$
for translating it by a tiny forward nudge $dx$, likewise we would
want to have an energy operator
$$\hat{E} := i\hbar \frac{\partial}{\partial t}$$
which does the same but for translation with regard to time (the sign change is because we usually consider a temporal advance from $t$ to $t + dt$, whereas spatially we prefer, perhaps psychologically or culturally, to describe motions as directed rightward). The problem here is that wave functions generally do not contain a time parameter - at least non-relativistic quantum mechanics treats space and time separately - so the above cannot be a true operator on the system's state space. Rather, it is more of a "pseudo-operator" that we'd "like" to have but can't "really" have for this reason. Note that this is the expression that appears on the right of the Schrödinger equation, which we could thus "better" write as
$$\hat{H}[\psi(t)] = [\hat{E}\psi](t)$$
where $\psi$ is now a
temporal sequence of wave functions (viz. a "curried function", which becomes an "ordinary" function when you consider the wave functions as the basis-independent Hilbert vectors). The Hamiltonian operator $\hat{H}$ is a bona fide operator, which acts only on the "present" configuration information for the system. What this equation is "really" saying is that in order for such a time series to represent a valid physical evolution, the Hamiltonian must also be able to translate it through time. The distinction between Hamiltonian and energy manifests in that the Hamiltonian will not translate every time sequence, while the energy pseudo-operator will, just as the momentum operator will translate every spatial wave function. Moreover, many Hamiltonians may be possible that give rise to the same energy spectrum.
Because these two things are different, it makes no sense to equate them as operators, as suggested. You can, and should, have $\hat{H}[\psi(t)] = [\hat{E}\psi](t)$, but you should not have $\hat{H} = \hat{E}$!
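As a small numerical sketch of the "momentum generates spatial translation" claim (using units with $\hbar = 1$ and a Gaussian test function, both assumptions made here purely for illustration):

```python
import numpy as np

# Sketch: check psi(x - dx) ≈ psi(x) - (i/hbar) * (p_hat psi)(x) * dx
# to first order in dx, for the momentum operator p_hat = -i*hbar*d/dx,
# with hbar = 1 and a Gaussian test wave function.
hbar = 1.0
x = np.linspace(-10.0, 10.0, 4001)
h = x[1] - x[0]
psi = np.exp(-x**2)

p_psi = -1j * hbar * np.gradient(psi, h)   # p_hat acting on psi

dx = 1e-3
translated_exact = np.exp(-(x - dx)**2)            # psi(x - dx) exactly
translated_gen = psi - (1j / hbar) * p_psi * dx    # generator formula

err = np.max(np.abs(translated_exact - translated_gen))
print(err < 1e-5)  # first-order agreement
```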
The cone is the solid of revolution obtained by rotating, about one of its legs, a right triangle with hypotenuse $$g$$ (the generatrix, or slant height), one leg $$r$$ (the radius of the base), and the other leg $$h$$ (the height of the cone).
It is also possible to interpret the cone as the analogue of a pyramid, inscribed in a cylinder (a "prism" with a circular base).
To calculate the area or volume of a cone we only need two of the following $$3$$ measurements - height, radius, generatrix - because the Pythagorean theorem gives us the third one:
$$$g^2=r^2+h^2$$$
The lateral (side) surface area is calculated as:
$$$A_{lateral}=\pi \cdot r \cdot g$$$
And the total area is:
$$$A_{total}=A_{lateral}+A_{base}=\pi \cdot r(r+g)$$$
Regarding volumes, as we have already seen with the prism and the pyramid, the volume of the cone is one third of the volume of the cylinder with the same base and height.
$$$V_{cone}=\dfrac{1}{3}V_{cylinder}=\dfrac{1}{3} \pi\cdot r^2\cdot h$$$ |
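A small helper putting these formulas together (the function name is made up for illustration): given any two of $$r$$, $$h$$, $$g$$, it recovers the third via $$g^2=r^2+h^2$$ and returns the areas and the volume.

```python
import math

# Given any two of radius r, height h, and generatrix g (slant height),
# recover the third, then compute the cone's lateral area, total area,
# and volume.
def cone_measurements(r=None, h=None, g=None):
    if g is None:
        g = math.hypot(r, h)           # g = sqrt(r^2 + h^2)
    elif h is None:
        h = math.sqrt(g**2 - r**2)
    elif r is None:
        r = math.sqrt(g**2 - h**2)
    lateral = math.pi * r * g          # A_lateral = pi * r * g
    total = math.pi * r * (r + g)      # A_total = pi * r * (r + g)
    volume = math.pi * r**2 * h / 3    # one third of the cylinder's volume
    return lateral, total, volume

# A 3-4-5 right triangle: r = 3, h = 4 gives g = 5.
lateral, total, volume = cone_measurements(r=3, h=4)
print(round(lateral / math.pi), round(total / math.pi), round(volume / math.pi))  # 15 24 12
```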
In Mas-Colell, Whinston, and Green's Microeconomics they define the
indirect utility function, $v(p,w)$ as
$$ v(p,w) := u(x^*) $$
Where $x^* \in x(p,w)$ solves the utility maximization problem.
They state a property of $v(p,w)$ is quasiconvexity, i.e. the set
$$ \{(p,w): v(p,w) \leq \bar{v} \} $$
is convex for any $\bar{v}$.
Just the page before they said that convexity of preferences implies that $u(\bullet)$ is
quasiconcave, so my question is: why, when we look at the max of $u$, does its quasiconcavity flip (can't think of a better word) to quasiconvexity?
In a two-good space, initially the consumer maximizes $U(x,z)\;\; s.t. \;\;p_xx+p_zz =I$ and we assume it obtains the solution $(x^*, z^*)$ as a function of prices and income.
In the constrained case, the consumer will either choose $(0, \tilde z)$ or $(x^*+\epsilon, z')$, for some $\epsilon > 0$, always exhausting its budget, so in particular $\tilde z = I/p_z$. In order for the consumer to still choose to buy a strictly positive quantity of $x$, it must be the case that
$$U(x^*+\epsilon, z') > U(0, \tilde z)$$
Applying a first-order approximation around $(x^*, z^*)$ without ignoring the remainders, we want
\begin{align} U(x^*, z^*) + U_x(x^*)\cdot \epsilon + U_z(z^*)(z'-z^*) + R_{\epsilon} \\ > U(x^*, z^*) + U_x(x^*)(-x^*) + U_z(z^*)(\tilde z-z^*) + R_0\end{align}
Simplify and re-arrange, we want
$$U_x(x^*)(x^*+\epsilon) + R_{\epsilon} > U_z(z^*)(\tilde z-z') + R_0 $$
We know that from the unconstrained optimization, $U_x(x^*)/U_z(z^*) = p_x/p_z$ so
$$\frac {p_x}{p_z}(x^*+\epsilon) + \frac {R_{\epsilon}}{U_z(z^*)} > \left(\frac{I}{p_z}-z'\right) + \frac {R_0}{U_z(z^*)} $$
Multiply throughout by $p_z$,
$$p_x(x^*+\epsilon) + p_z\frac {R_{\epsilon}}{U_z(z^*)} > I - p_zz' + p_z\frac {R_0}{U_z(z^*)} $$
but $p_x(x^*+\epsilon) + p_zz' = I \implies p_x(x^*+\epsilon) = I-p_zz'$, so after cancelling and dividing through by the positive factor $p_z/U_z(z^*)$ we are left with the requirement that
$$R_{\epsilon} > R_0 $$
in order for the consumer to choose $x^*+ \epsilon$ and not $0$ for $x$.
Note that the above takes into account the signs of the remainders as well, not just their absolute magnitudes.
Now let's go back to our first-order expansions. We know that both candidate bundles yield utilities lower than $U(x^*, z^*)$, because they were feasible in the unconstrained case, and they weren't chosen.
Looking at the expansion of $U(0, \tilde z)$ we then conclude that we have
$$U_x(x^*)(-x^*) + U_z(z^*)(\tilde z-z^*) + R_0 < 0 $$
$$\implies U_z(z^*)\cdot \Big[(U_x(x^*)/U_z(z^*))\cdot(-x^*) + \tilde z-z^*\Big] + R_0 < 0$$
$$\implies \frac {U_z(z^*)}{p_z}\cdot \Big[-p_xx^* + p_z\tilde z-p_zz^*\Big] + R_0 < 0$$
But $-p_xx^* -p_zz^* = -I$ and $p_z\tilde z =I$ so the term in brackets is zero. Therefore we conclude that
$$R_0 <0$$
Looking now at the expansion of $U(x^*+\epsilon, z')$, we know we have
$$U_x(x^*)\cdot \epsilon + U_z(z^*)(z'-z^*) + R_{\epsilon} < 0$$
Performing the same manipulations as before, we obtain here too that
$$R_{\epsilon} < 0$$
So the condition to buy $x^*+\epsilon$ can be re-written as
$$|R_{\epsilon}| < |R_0|$$
This formalizes somewhat the notion that if $\epsilon$ is "sufficiently small", $R_{\epsilon}$ will be smaller in absolute terms than $R_0$, since the approximation to the same function will be "better", and so we will observe $x^*+\epsilon$ and not $0$. But it also tells us what the graphs in the other answer told us too, that there is not a single general answer to the matter. |
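A made-up numerical illustration of this remainder condition: take Cobb-Douglas utility $U(x,z)=\sqrt{xz}$ with $p_x=p_z=1$ and $I=2$, so the unconstrained optimum is $(x^*,z^*)=(1,1)$ (none of these values come from the answer above; they are chosen for concreteness).

```python
import math

# Cobb-Douglas U(x, z) = sqrt(x*z), p_x = p_z = 1, I = 2:
# unconstrained optimum (x*, z*) = (1, 1) with U_x = U_z = 1/2 there.
# Compare first-order Taylor remainders at the two candidate bundles.
def U(x, z):
    return math.sqrt(x * z)

x_star, z_star, Ux, Uz, I = 1.0, 1.0, 0.5, 0.5, 2.0

def remainder(x, z):
    # first-order Taylor remainder of U around (x*, z*)
    linear = U(x_star, z_star) + Ux * (x - x_star) + Uz * (z - z_star)
    return U(x, z) - linear

eps = 0.1
R_eps = remainder(x_star + eps, I - (x_star + eps))  # bundle (x*+eps, z')
R_0 = remainder(0.0, I)                              # bundle (0, z~)

# Both remainders are negative, and |R_eps| < |R_0|: the consumer buys
# x* + eps rather than 0 of good x.
print(R_eps < 0 and R_0 < 0 and abs(R_eps) < abs(R_0))  # True
```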
My chemistry textbook introduces the wave function as
$$\psi(x)= A \sin\left(\frac{2\pi x}{\lambda}\right)$$ Therefore, the Schrödinger Equation is:
$$\frac{d^2\psi(x)}{dx^2} = -~\left(\frac{2\pi}{\lambda}\right)^{2}\psi(x)$$
and by multiplying each side by $\frac{-h^2}{8\pi^2m}$ you get:
$$-~\frac{h^2}{8\pi^2m} \frac{d^2\psi(x)}{dx^2} = \frac{p^2}{2m}\psi(x) = (T)(\psi(x))$$ where $T$ is kinetic energy.
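The differentiation step behind this can be sanity-checked numerically; this is a sketch with arbitrary illustrative values of $A$ and $\lambda$ (not from the book):

```python
import numpy as np

# For psi(x) = A*sin(2*pi*x/lam), the second derivative equals
# -(2*pi/lam)^2 * psi(x). Check via central finite differences.
A, lam = 2.0, 3.0
x = np.linspace(0.0, 3 * lam, 20001)
h = x[1] - x[0]
psi = A * np.sin(2 * np.pi * x / lam)

# second derivative at the interior points only
d2psi = (psi[2:] - 2 * psi[1:-1] + psi[:-2]) / h**2
expected = -(2 * np.pi / lam) ** 2 * psi[1:-1]

max_err = np.max(np.abs(d2psi - expected))
print(max_err < 1e-4)  # True
```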
Now, I don't fully understand that last bit, and I would welcome an explanation of it, but my main question is this: when the book transitions from discussing the Schrödinger equation in the context of a "particle-in-a-box" to the context of a one-electron atom, it seemingly changes the wave function to $$\psi_{(n,\ell,m)}(r,\theta,\phi) = R_{n,\ell}(r) Y_{\ell,m}(\theta,\phi)$$ where $n$ = principal quantum number, $\ell$ = angular momentum quantum number, $m$ = magnetic quantum number, $R$ = the "radial part" of the equation, $r$ = radius (distance between nucleus and electron), $Y$ = the "angular part" of the equation, and $\theta$ and $\phi$ are angles that I still don't know how to get.
I guess my question is: did the wave function $\psi$ change? Why is it suddenly so different, and why do $r$, $\theta$, and $\phi$ suddenly come into the picture? Is it because you are going from the one-dimensional, Cartesian "x" version of the wave function to the three-dimensional, spherical-coordinate version of it?