This post is a first step towards integrating the information transfer versions of the IS-LM model and the quantity theory. We'll begin with one of the basic equations from the information transfer framework:
$$ P = \frac{1}{\kappa}\frac{Q^d}{Q^s} $$
In the LM market, we have aggregate demand represented by $NGDP$ as an information source sending information to the money supply (the monetary base, $MB$) with the interest rate as the price (the detector that detects the signal transmitted from the demand to the supply). We'll write this price $P \rightarrow r$ as
$$ c \log r = \log \frac{1}{\kappa}\frac{NGDP}{MB} $$
With $c$ being an arbitrary constant. If I fit this equation to the Effective Fed Funds rate, I get a very good fit (model is blue, data is green):
The fit parameters are $1/\kappa = 39.1$ and an overall normalization of $c = 0.279$ (assuming of course that the fed funds rate is divided by 100 to change from a percentage into a dimensionless number).
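A minimal sketch of such a fit (illustrative only; `ngdp`, `mb`, and `r` are assumed to be already-loaded, aligned NumPy arrays of NGDP, the monetary base, and the fed funds rate divided by 100):

```python
import numpy as np

# Sketch only: ngdp, mb, r are assumed aligned series of the same length,
# with r the effective fed funds rate divided by 100.
def fit_rate_model(ngdp, mb, r):
    # c*log(r) = log((1/kappa)*ngdp/mb)  =>  log(r) is linear in log(ngdp/mb)
    x = np.log(ngdp / mb)
    y = np.log(r)
    slope, intercept = np.polyfit(x, y, 1)   # y ≈ slope*x + intercept
    c = 1.0 / slope                          # since slope = 1/c
    kappa = np.exp(-intercept * c)           # since intercept = -log(kappa)/c
    return c, kappa
```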
An interesting way to visualize this data is to plot the interest rate versus real GDP (aka real output, denoted $Y$ in the ISLM model). With the information transfer model providing both the interest rate (above) and RGDP derived from $NGDP$ and the price level in the quantity theory, we can observe "LM curves" in the data where increasing $Y$ traces out an upward sloping curve with the interest rate.
It appears as though the LM curves are "reset" by the Fed lowering interest rates to "heat" the economy (by increasing the monetary base via the equation above). There was a period of relative constant interest rates (dashed red line) from the mid-90s to the early 2000s (the late 90s "tech boom") where the economy grew with limited intervention from the Fed. That last statement would probably send Scott Sumner up the wall. The Fed is always intervening, but in this case by limited intervention I mean keeping the monetary base growing roughly at the same rate as NGDP. This keeps interest rates constant via the equation above.
In 2008 we see the data bump up against the zero lower bound. The LM market stops sending information detectable by the interest rate. Here we would obtain the "flat" LM curve of the liquidity trap. This is different from the constant interest rate of the late 1990s, which is the LM market equilibrium moving along at roughly a constant interest rate as $Y$ increases until 2001.
It appears there is a qualitative change in the properties of the LM market beginning with the early-1990s recession -- interestingly, the first recession with Alan Greenspan as Fed chair. For some reason, no LM curve appears after that recession ends. Could raising interest rates in the late 1990s have helped us avoid the Great Recession later on by leaving interest rates high enough to avoid the zero lower bound?
Should proper macroeconomic stabilization produce a picture that looks (heuristically) like this:
And do the 90s look like a massive failure in retrospect?
The next step (next post maybe?) is to see if using the real interest rate (related via the Fisher equation to the nominal rate) shows any significant difference in qualitative behavior. |
Prologue: Big $O$ notation is a classic example of the power and the ambiguity of notation as a part of the language the human mind loves. No matter how much confusion it has caused, it remains the notation of choice for conveying ideas that we can easily identify and agree on efficiently.
I totally understand what big $O$ notation means. My issue is when we say $T(n)=O(f(n))$, where $T(n)$ is the running time of an algorithm on input of size $n$.
Sorry, but you do not have an issue if you understand the meaning of big $O$ notation.
I understand the semantics of it. But $T(n)$ and $O(f(n))$ are two different things. $T(n)$ is an exact number, but $O(f(n))$ is not a function that spits out a number, so technically we can't say $T(n) = O(f(n))$; if one asks you what the value of $O(f(n))$ is, what would be your answer? There is no answer.
What is important is the semantics. What is important is (how) people can agree easily on (one of) its precise interpretations that describe the asymptotic behavior of the time or space complexity we are interested in. The default precise interpretation/definition of $T(n)=O(f(n))$ is, as translated from Wikipedia,
$T$ is a real or complex valued function and $f$ is a real valued function, both defined on some unbounded subset of the positive real numbers, such that $f(n)$ is strictly positive for all large enough values of $n$. Then for all sufficiently large values of $n$, the absolute value of $T(n)$ is at most a positive constant multiple of $f(n)$. That is, there exists a positive real number $M$ and a real number $n_0$ such that
$|T(n)|\leq M f(n) \text{ for all } n\geq n_{0}.$
Please note this interpretation is considered
the definition. All other interpretations and understandings, which may help you greatly in various ways, are secondary and corollary. Everyone (well, at least every answerer here) agrees to this interpretation/definition/semantics. As long as you can apply this interpretation, you are probably good most of time. Relax and be comfortable. You do not want to think too much, just as you do not think too much about some of the irregularity of English or French or most of natural languages. Just use the notation by that definition.
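For instance, with $T(n) = 3n^2 + 5n$ and $f(n) = n^2$, the definition is satisfied by taking $M = 4$ and $n_0 = 5$: for every $n \geq 5$ we have $|3n^2 + 5n| \leq 3n^2 + n \cdot n = 4n^2$, hence $3n^2 + 5n = O(n^2)$.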
$T(n)$ is an exact number, but $O(f(n))$ is not a function that spits out a number, so technically we can't say $T(n) = O(f(n))$; if one asks you what the value of $O(f(n))$ is, what would be your answer? There is no answer.
Indeed, there could be no answer, since the question is ill-posed. $T(n)$ does not mean an exact number. It is meant to stand for a function whose name is $T$ and whose formal parameter is $n$ (which is sort of bound to the $n$ in $f(n)$). It is just as correct, and even more so, to write $T=O(f)$. If $T$ is the function that maps $n$ to $n^2$ and $f$ is the function that maps $n$ to $n^3$, it is also conventional to write $T(n)=O(n^3)$ or $n^2=O(n^3)$. Please also note that the definition does not say whether $O$ is a function or not. It does not say that the left hand side is supposed to be equal to the right hand side at all! You are right to suspect that the equal sign here does not mean equality in its ordinary sense, where you could switch both sides and the relation would be backed by an equivalence relation. (Another even more famous example of abuse of the equal sign is its usage to mean assignment in most programming languages, instead of the more cumbersome
:= as in some languages.)
If we are only concerned about that one equality (I am starting to abuse language as well. It
is not an equality; however, it is an equality since there is an equal sign in the notation or it could be construed as some kind of equality), $T(n)=O(f(n))$, this answer is done.
However, the question actually goes on. What is meant by, for example, $f(n)=3n+O(\log n)$? This equality is not covered by the definition above. We would like to introduce another convention,
the placeholder convention. Here is the full statement of placeholder convention as stated in Wikipedia.
In more complicated usage, $O(\cdots)$ can appear in different places in an equation, even several times on each side. For example, the following are true for $n\to \infty$.
$(n+1)^{2}=n^{2}+O(n)$
$(n+O(n^{1/2}))(n+O(\log n))^{2}=n^{3}+O(n^{5/2})$
$n^{O(1)}=O(e^{n})$
The meaning of such statements is as follows: for any functions which satisfy each $O(\cdots)$ on the left side, there are some functions satisfying each $O(\cdots)$ on the right side, such that substituting all these functions into the equation makes the two sides equal. For example, the third equation above means: "For any function $f(n) = O(1)$, there is some function $g(n) = O(e^n)$ such that $n^{f(n)} = g(n)$."
You may want to check here for another example of placeholder convention in action.
You might have noticed by now that I have not used the set-theoretic explanation of big $O$ notation. All I have done is to show that even without a set-theoretic explanation such as "$O(f(n))$ is a set of functions", we can still understand big $O$ notation fully and perfectly. If you find that set-theoretic explanation useful, please go ahead and use it anyway.
You can check the section in "asymptotic notation" of CLRS for a more detailed analysis and usage pattern for the family of notations for asymptotic behavior, such as big $\Theta$, $\Omega$, small $o$, small $\omega$, multivariable usage and more. The Wikipedia entry is also a pretty good reference.
Lastly, there is some inherent ambiguity/controversy with big $O$ notation in multiple variables; see [1] and [2]. You might want to think twice when you are using those. |
Let $X \subset \mathbb R^n$, $f:X\to\mathbb R^m$, $x_0\in X$
Assumption: All partial derivatives of f at $x_0$ exist and are continuous
$\Rightarrow$ f is differentiable at $x_0$.
$\Rightarrow D_vf(x_0)=\nabla f(x_0)\cdot v$ (assuming $m=1$ for simplicity)
Which means that all directional derivatives of f at $x_0$ can be expressed as a linear combination of the (continuous) partial derivatives of f at $x_0$
Therefore these directional derivatives also have to be continuous. (*)
Is the conclusion correct? (Or why not?) And if yes, is my proof correct? (Or why not?)
(*) I'm implicitly assuming that differentiability at $x_0$ implies differentiability at all points around $x_0$ if they are close enough. Only with this assumption I can conclude that the partial derivatives are defined around $x_0$ and therefore ask if they are continuous around $x_0$ or not.
I hope you can follow my thoughts. Else just ask for clarifications, it's my first question. Thank you for your help :) |
The easiest way to find a differential equation that will provide wavefunctions as solutions is to start with a wavefunction and work backwards. We will consider a sine wave, take its first and second derivatives, and then examine the results. The amplitude of a sine wave can depend upon position, \(x\), in space,
\[ A (x) = A_0 \sin \left ( \frac {2 \pi x}{\lambda} \right ) \label {3-3}\]
or upon time, \(t\),
\[A(t) = A_0\sin(2\pi \nu t) \label {3-4}\]
or upon both space and time,
\[ A (x, t) = A_0 \sin \left ( \frac {2 \pi x}{\lambda} - 2\pi \nu t \right ) \label {3-5}\]
We can simplify the notation by using the definitions of a wave vector, \(k = \frac {2\pi}{\lambda}\), and the angular frequency, \(\omega = 2\pi \nu\) to get
\[A(x,t) = A_0\sin(kx − \omega t) \label {3-6}\]
When we take partial derivatives of A(x,t) with respect to both \(x\) and \(t\), we find that the second derivatives are remarkably simple and similar.
\[ \frac {\partial ^2 A (x, t)}{\partial x^2} = -k^2 A_0 \sin (kx -\omega t ) = -k^2 A (x, t) \label {3-7}\]
\[ \frac {\partial ^2 A (x, t)}{\partial t^2} = -\omega ^2 A_0 \sin (kx -\omega t ) = -\omega ^2 A (x, t) \label {3-8}\]
By looking for relationships between the second derivatives, we find that both involve A(x,t); consequently an equality is revealed.
\[ k^{-2} \frac {\partial ^2 A (x, t)}{ \partial x^2} = - A (x, t) = \omega ^{-2} \frac {\partial ^2 A (x, t)}{\partial t^2} \label {3-9}\]
Recall that \(\nu\) and \(λ\) are related; their product gives the velocity of the wave, \(\nu \lambda = v\). Be careful to distinguish between the similar but different symbols for frequency \(\nu\) and the velocity v. If in ω = 2πν we replace ν with v/λ, then
\[ \omega = \frac {2 \pi v}{\lambda} = v k \label {3-10}\]
and Equation \(\ref{3-9}\) can be rearranged to give what is known as the classical wave equation in one dimension. This equation is very important. It is a differential equation whose solution describes all waves in one dimension that move with a constant velocity (e.g. the vibrations of strings in musical instruments) and it can be generalized to three dimensions. The classical wave equation in one-dimension is
\[\frac {\partial ^2 A (x, t)}{\partial x^2} = v ^{-2} \frac {\partial ^2 A (x, t)}{\partial t^2} \label {3-11}\]
Example \(\PageIndex{1}\)
Complete the steps leading from Equation \(\ref{3-5}\) to Equations \(\ref{3-7}\) and \(\ref{3-8}\) and then to Equation \(\ref{3-11}\).
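A quick symbolic check of these derivatives and of the wave equation itself, using sympy (an illustration; working the example by hand is still the point):

```python
import sympy as sp

x, t, A0, k, w, v = sp.symbols('x t A_0 k omega v', positive=True)
A = A0 * sp.sin(k*x - w*t)

d2x = sp.diff(A, x, 2)   # second derivative with respect to x
d2t = sp.diff(A, t, 2)   # second derivative with respect to t

print(sp.simplify(d2x + k**2 * A))   # 0, i.e. Equation (3-7)
print(sp.simplify(d2t + w**2 * A))   # 0, i.e. Equation (3-8)

# With omega = v*k, the classical wave equation (3-11) is satisfied:
print(sp.simplify((d2x - d2t / v**2).subs(w, v*k)))   # 0
```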
Contributors Adapted from "Quantum States of Atoms and Molecules" by David M. Hanson, Erica Harvey, Robert Sweeney, Theresa Julia Zielinski |
Edit: Edge cases suck; see comments. See also MWG Chapter 10 section C, D.
Suppose $(\vec x^*, \vec m^*)$ solves
$$\max \sum^I_{i=1} m_i + \phi_i(x_i)$$
but is not Pareto optimal.
$$\begin{align}\implies \exists \ (x_i', m_i') \quad \text{s.t.} \quad & u_i(x_i', m_i') \geq u_i(x_i^*, m_i^*) \quad \forall \ i = 1,\cdots,I \\& u_i(x_i', m_i') > u_i(x_i^*, m_i^*) \quad \text{for some} \ i\end{align}$$
Since $u_i(x_i, m_i) = m_i + \phi_i(x_i)$, summing these inequalities over $i$ (with at least one of them strict) yields
$$\implies \sum^I_{i=1} m'_i + \phi_i(x'_i) > \sum^I_{i=1} m^*_i + \phi_i(x^*_i)$$
which is a contradiction. If we have a solution to the utility maximization problem, it must be Pareto optimal.
(Note that this comes from the continuous and increasing properties of $\phi(\cdot)$.)
Suppose $(\vec x^*, \vec m^*)$ is a feasible Pareto optimal allocation, but does not solve
$$\max \sum^I_{i=1} m_i + \phi_i(x_i)$$
Because we treat $m_i$ as the numeraire and $\phi_i(\cdot)$ is strictly increasing, we know $u_i(\cdot)$ is locally non-satiated, so the Pareto optimal allocation must exhaust the feasible resources.
$$\exists \ (x_i', m_i') \quad \text{s.t.} \quad \sum^I_{i=1} m'_i + \phi_i(x'_i) > \sum^I_{i=1} m^*_i + \phi_i(x^*_i)\\\implies \boxed{ \sum^I_{i=1} \phi_i(x'_i) > \sum^I_{i=1} \phi_i(x^*_i)}$$
If this is true because this alternative allocation simply gives an individual more of $x$, for all else equal, then the alternative allocation is infeasible. So we'd have a contradiction.
If this is true because in the alternative allocation, someone else is allocated more $x$ and just one other person is allocated less, then the original allocation would not be Pareto optimal. Suppose it was. If you took the original allocation and shifted $x$ in the way of the new allocation, then you would need a corresponding trade in the numeraire good, $m$, to keep whoever is losing $x$ at least at the same utility level. But
trades in just the numeraire good can never change summed aggregate utility. From the original allocation, if you can trade $m$ for $x$ and make someone better off without hurting anyone, you weren't at a Pareto optimum, and if you can't trade $m$ for $x$ to make someone better off, you can't increase summed aggregate utility, which means the original allocation was a solution to the maximization problem.
This logic applies no matter how you rearrange $x$ between multiple people.
$\square$ |
Could anyone recommend a method for the following least-squares problem:
find $R \in \mathbb{R}^{3 \times 3}$ that minimizes $\sum\limits_{i=0}^N \|Rx_i - b_i\|^2$, where $R$ is a rotation (orthogonal) matrix.
I could get an approximate solution by minimizing $\sum\limits_{i=0}^N \|Ax_i - b_i\|^2$ over an arbitrary $A \in \mathbb{R}^{3 \times 3}$, taking the matrix $A$ and:
- computing the SVD $A = U \Sigma V^T$, dropping $\Sigma$ and approximating $R \approx U V^T$, or
- computing the polar decomposition $A = U P$, dropping the scale-only symmetric (and, in my case, positive definite) factor $P$ and approximating $R \approx U$.
I could also use QR decomposition, but it wouldn't be isometric (would depend on the choice of the coordinate system).
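For reference, a minimal NumPy sketch of the first approach (unconstrained least squares followed by SVD projection), assuming `x` and `b` are $N \times 3$ arrays of corresponding vectors:

```python
import numpy as np

def fit_rotation_via_projection(x, b):
    """Least-squares A (3x3), then the nearest rotation U V^T from its SVD."""
    # Solve A x_i ≈ b_i for an unconstrained 3x3 matrix A:  x @ A.T ≈ b
    A_T, *_ = np.linalg.lstsq(x, b, rcond=None)
    A = A_T.T
    # Project onto rotations: drop the singular values, fix a possible reflection
    U, _, Vt = np.linalg.svd(A)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    return U @ D @ Vt
```

(For the exact constrained minimizer, the usual route is the Kabsch/Wahba construction: apply the same SVD projection to the cross-covariance $\sum_i b_i x_i^T$ instead of to the least-squares $A$.)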
Does anyone know of a way to do this, at least approximately, but with better approximation than the two methods above? |
Decomposing the ELBO
Rob Zinkov, 2018-11-02
When performing Variational Inference, we are minimizing the KL divergence between some distribution we care about \(p(\v{z} \mid \v{x})\) and some distribution that is easier to work with \(q_\phi(\v{z} \mid \v{x})\).
\[ \begin{align} \phi^* &= \underset{\phi}{\mathrm{argmin}}\, \text{KL}(q_\phi(\v{z} \mid \v{x}) \;\|\; p(\v{z} \mid \v{x})) \\ &= \underset{\phi}{\mathrm{argmin}}\, \mathbb{E}_{q_\phi(\v{z} \mid \v{x})} \big[\log q_\phi(\v{z} \mid \v{x}) - \log p(\v{z} \mid \v{x}) \big]\\ \end{align} \]
Now because the density of \(p(\mathbf{z} \mid \mathbf{x})\) usually isn’t tractable, we use a property of the log model evidence \(\log\, p(\v{x})\) to define a different objective to optimize.
\[ \begin{align} \Expect_{q_\phi(\v{z} \mid \v{x})} \big[\log q_\phi(\v{z} \mid \v{x}) - \log p(\v{z} \mid \v{x})\big] &\leq \Expect_{q_\phi(\v{z} \mid \v{x})} \big[\log q_\phi(\v{z} \mid \v{x}) - \log p(\v{z} \mid \v{x})\big] - \log p(\v{x}) \\ &= \Expect _{q_\phi(\v{z} \mid \v{x})} \big[\log q_\phi(\v{z} \mid \v{x}) - \log p(\v{z} \mid \v{x}) - \log p(\v{x})\big] \\ &= \Expect _{q_\phi(\v{z} \mid \v{x})} \big[\log q_\phi(\v{z} \mid \v{x}) - \log p(\v{x}, \v{z})\big]\\ &= -\mathcal{L}(\phi) \end{align} \]
As \(\mathcal{L}(\phi) = \log p(\v{x}) - \text{KL}(q_\phi(\v{z} \mid \v{x}) \;\|\; p(\v{z} \mid \v{x}))\) maximizing \(\mathcal{L}(\phi)\) effectively minimizes our original KL.
This term \(\mathcal{L}(\phi)\) is sometimes called the evidence lower bound or ELBO: because the KL term must always be greater than or equal to zero, \(\mathcal{L}(\phi)\) can be seen as a lower-bound estimate of \(\log p(\v{x})\).
Due to various linearity properties of expectations, this can be rearranged into many different forms. This is useful to get an intuition for what can be going wrong when you learn \(q_\phi(\v{z} \mid \v{x})\).
Now why does this matter? Couldn’t I just optimize this loss with SGD and be done? Well, you can, but often if something is going wrong it will show up as one or more terms being unusually off. Making these tradeoffs in the loss function explicit means you can adjust it to favor different properties of your learned representation, either by hand or automatically.
Entropy form
The classic form is in terms of an energy term and an entropy term. The first term encourages \(q\) to put high probability mass wherever \(p\) does. The second term encourages \(q\) to maximize its entropy as much as possible and put probability mass everywhere it can.
\[ \mathcal{L}(\phi) = \Expect_{q_\phi(\v{z} \mid \v{x})}[\log p(x, z)] + H(q_\phi(\v{z} \mid \v{x})) \]
where
\[ H(q_\phi(\v{z} \mid \v{x})) \triangleq - \Expect_{q_\phi(\v{z} \mid \v{x})}[\log q_\phi(\v{z} \mid \v{x})] \]
Reconstruction error minus KL on the prior
More often these days, we describe \(\mathcal{L}\) in terms of a reconstruction term and a KL against the prior for \(p\). Here the first term says we should put mass on latent codes \(\v{z}\) from which \(p\) is likely to generate our observation \(\v{x}\). The second term then trades this off against \(q\) staying near the prior.
\[ \mathcal{L}(\phi) = \Expect_{q_\phi(\v{z} \mid \v{x})}[\log p(\v{x} \mid \v{z})] - \text{KL}(q_\phi(\v{z} \mid \v{x}) \;\|\; p(\v{z}))\]
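As a rough illustration, here is a minimal sketch of this reconstruction-minus-KL form for a diagonal-Gaussian \(q_\phi(\v{z} \mid \v{x})\) and a standard-normal prior, with the reconstruction term estimated from a single reparameterized sample (the `encode` and `decode_logp` functions are placeholders for whatever model you are using):

```python
import numpy as np

def elbo_estimate(x, encode, decode_logp, rng=None):
    """One-sample ELBO: E_q[log p(x|z)] - KL(q(z|x) || N(0, I)).

    encode(x)         -> (mu, log_var) of a diagonal Gaussian q(z|x)
    decode_logp(x, z) -> log p(x|z); both are user-supplied placeholders.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    mu, log_var = encode(x)
    z = mu + np.exp(0.5 * log_var) * rng.standard_normal(mu.shape)  # reparameterize
    recon = decode_logp(x, z)                                       # one-sample estimate
    # Closed-form KL between N(mu, diag(exp(log_var))) and N(0, I):
    kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
    return recon - kl
```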
ELBO surgery
But there are other ways to think about this decomposition. Because we frequently use amortized inference to learn a \(\phi\) useful for describing all kinds of \(q\) distributions regardless of our choice of observation \(\v{x}\), we can talk about the average distribution we learn over our observed data, with \(p_d\) being the empirical distribution of our observations.
\[ \overline{q}_\phi(\v{z}) = \Expect_{p_d(\v{x})} \big[ q_\phi(\v{z} \mid \v{x}) \big] \]
This is sometimes called the aggregate posterior.
With that we can decompose our KL on the prior into a mutual information term that encourages each \(q_\phi(\v{z} \mid \v{x})\) we create to be near the average one \(\overline{q}_\phi(\v{z})\), and a KL between this average distribution and the prior. This encourages the representation generated for \(\v{z}\) to be useful.
\[ \mathcal{L}(\phi) = \Expect_{q_\phi(\v{z} \mid \v{x})}[\log p(\v{x} \mid \v{z})] - \mathbb{I}_q(\v{x},\v{z}) - \text{KL}(\overline{q}_\phi(\v{z}) \;\|\; p(\v{z})) \]
where
\[ \mathbb{I}_q(\v{x},\v{z}) \triangleq \Expect_{p_d}\big[\Expect_{q_\phi(\v{z} \mid \v{x})} \big[\log q_\phi(\v{z} \mid \v{x})\big] \big] - \Expect_{\overline{q}_\phi(\v{z})} \log \overline{q}_\phi(\v{z}) \]
Full decomposition
Of course with more aggressive rearranging, we can just have a term to encourage learning better latent representations. In a setting where you aren’t learning \(p\) some of these terms are constant and can generally be ignored. I provide them here for completeness.
\[ \mathcal{L}(\phi) = \Expect_{q_\phi(\v{z} \mid \v{x})}\left[ \log\frac{p(\v{x} \mid \v{z})}{p(\v{x})} - \log\frac{q_\phi(\v{z} \mid \v{x})}{q_\phi(\v{z})} \right] - \text{KL}(p_d(\v{x}) \;\|\; p(\v{x})) - \text{KL}(\overline{q}_\phi(\v{z}) \;\|\; p(\v{z}))\]
I highly encourage checking out the Appendix of the Structured Disentangled Representations paper to see how much further this can be pushed.
Final notes
Of course, all the above still holds in the VAE setting where \(p\) becomes \(p_\theta\) but I felt the notation was cluttered enough already. It’s kind of amazing how much insight can be gained through expanding and collapsing one loss function. |
Answering the question in the title, no, but during the course of writing the past few posts, I'd looked at the wikipedia article on general equilibrium. I saw this random bit about Sraffa:
Anglo-American economists became more interested in general equilibrium in the late 1920s and 1930s after Piero Sraffa's demonstration that Marshallian economists cannot account for the forces thought to account for the upward-slope of the supply curve for a consumer good.
What follows is a
just-so story mechanism about how changes in output should affect the price. It appears as though Sraffa's argument entirely ignores the premise of Marshallian supply and demand diagrams (there is a single good in a single market) by asserting that there are other goods and factors of production. The use of "first order" in the wikipedia explanation of the mechanism is also pretty laughable since Sraffa didn't include a single equation much less any scales which we could use to say that anything is "first order". I tried to read Sraffa but was instantly filled with a grim sense of philosophers talking about what change is. I then Googled around and found this, which helped a bit. I concluded that information theory is the proper way to deal with the entire situation.
What are we talking about when we draw supply and demand curves anyway? Let's go back to the beginnings of this blog. Supply (S) and demand (D) are related by the equation:
$$\text{(1) } P = \frac{dD}{dS} = \frac{1}{\kappa} \; \frac{D}{S} $$
where P is the price. The general solution (general equilibrium) to this is:
$$\text{(2) } \frac{D}{D_{ref}} = \left( \frac{S}{S_{ref}} \right)^{1/\kappa} $$
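(To see where (2) comes from: separating variables in (1) and integrating, $\int_{D_{ref}}^{D} \frac{dD'}{D'} = \frac{1}{\kappa}\int_{S_{ref}}^{S} \frac{dS'}{S'}$, so $\log \frac{D}{D_{ref}} = \frac{1}{\kappa} \log \frac{S}{S_{ref}}$, which exponentiates to (2).)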
A demand curve is what you get when you look at the partial equilibrium by holding demand constant $D = D_{0}$ (an "exogenous" constant information source) and relating it back to the price to get a pair of equations:
$$ P = \frac{1}{\kappa} \; \frac{D_{0}}{\langle S \rangle}$$
$$ \Delta D \equiv D - D_{ref} = \frac{D_{0}}{\kappa} \log \frac{\langle S \rangle}{S_{ref}} $$
Symmetrically, a supply curve is what you get when you look at the partial equilibrium by holding supply constant (an "exogenous" constant information destination)
$$ P = \frac{1}{\kappa} \; \frac{\langle D \rangle}{S_{0}}$$
$$ \Delta S \equiv S - S_{ref} = \kappa S_{0} \log \frac{\langle D \rangle}{D_{ref}} $$
What are the angle brackets for? They're there to remind us that the variable is "endogenous" (the expected value inside the model) while the other variable is exogenous. These angle bracket variables are important to the discussion above because they parameterize our position along a supply or demand curve. Here are the supply (red) and demand (blue) curves along with the "fully endogenous" general equilibrium solution (gray dotted):
The demand curve seems to make intuitive sense -- at least at first. If the price goes up, the quantity of a good demanded goes down. But you get into trouble if you try this reasoning the other way: if the price goes down, the quantity demanded goes up? Maybe. Maybe not. If you weren't getting enough at the current price, it might. That depends on your utility, though. Sometimes this is referred to as the diminishing marginal utility of a good: the price you are willing to pay goes down the more widely available a good is. But there we've gone and switched up the independent variable again. The first half (effect of a price change) looks at price as independent, while the second half (diminishing marginal utility) looks at $\langle S \rangle$ as the independent variable [1].
Both of these get the partial equilibrium analysis in the wrong order mathematically. What we have is an exogenous (independent) change in demand. If demand increases and is satisfied (which we assume by taking $\langle S \rangle$ as endogenous/dependent), the price goes down.
This seems like a totally non-intuitive way to think about it. What is really going on here?
What we have is a system in contact with a "demand bath" [2] (or better yet, an "aggregate demand bath"). You could also call it an "information bath". If that bath didn't exist, adding (satisfied) demand would make the price go up, per equation (2) above (and it would move along the gray dotted curve in the figure above). What the bath is doing is sucking up any extra demand (source information) that we are creating by moving along the demand curve, so that "demand" is the same before and after the shift. Since there is "no change" in demand (any change is mitigated by the bath), the next variable in the chain, $\langle S \rangle$ effectively becomes the changing independent variable. This means the explanation is that decreasing/increasing the supply increases/decreases the price at constant demand (in the presence of a demand bath).
So why does the supply curve slope upwards? This time we're in contact with a "supply bath", so "the supply" is basically the same after we move along the curve. Moving along the supply curve is a change in $\langle D \rangle$. This means the explanation is that decreasing/increasing the demand decreases/increases the price at constant supply (in the presence of a supply bath).
Therefore there is no actual change in the supply along a supply curve so there is no bidding up factors of production or lack thereof per Sraffa. What we're doing is increasing the demand, so the price goes up.
That is to say supply and demand curves are kind of misnomers. A supply curve is the behavior of the price at constant supply, but is parameterized by increasing or decreasing demand. A demand curve is the behavior of the price at constant demand, but is parameterized by increasing or decreasing supply. [3]
[1] Yes, economists take price to be the independent variable, but in the formulation above it is more natural that either supply or demand (or both) are the independent variable(s). The price is the derivative of the demand with respect to the supply (the marginal change in demand for a marginal change in supply).
[2] This whole description is based on an isothermal expansion/compression of an ideal gas in contact with a thermal bath.
[3] Shifts "of" the supply and demand curves (as opposed to shifts "along" them) are effectively changes in the information bath. |
I'm working on a qualifying exam question and I am stuck about how to even commence.
Let $M$ be a $3$ dimensional manifold. Suppose $\alpha$ is a $1$-form such that the $3$-form $\alpha \wedge d\alpha$ is nowhere zero. Show that there is a unique vector field $v$ such that $\alpha(v) = 1$ and $d\alpha(v,w) = 0$ for any other vector field $w$.
You may use, without proof, the fact that if a vector space has a non-degenerate skew-symmetric bilinear pairing then it must have even dimension.
First off, $\alpha \wedge d\alpha$ is a volume form on $M$. Let us work in local coordinates. Then $$\alpha = f dx + g dy + h dz$$ $$d\alpha = (\frac{\partial g}{\partial x} - \frac{\partial f}{\partial y}) dx\wedge dy + (\frac{\partial h}{\partial y}-\frac{\partial g}{\partial z} ) dy\wedge dz + (\frac{\partial f}{\partial z}-\frac{\partial h}{\partial x}) dz \wedge dx$$ If we let $v = a \frac{\partial}{\partial x} + b \frac{\partial}{\partial y} + c \frac{\partial}{\partial z}$, then the first equation boils down to $$af + bg + ch = 1$$ I wanted to use $a = \frac{1}{f}, b = \frac{1}{g}, c = \frac{1}{h}$ as a first guess but there's no guarantee yet that $f, g, h$ are never zero. I am not at all sure how to proceed from here.
I tried to understand the hint given, but assuming $d\alpha$ is the skew-symmetric form mentioned, we know it cannot be non-degenerate because it acts on a vector space of dimension $3$, which is odd. So there does exist a degeneracy, i.e. a vector field such that when paired with any $w$ it gives $0$.
I'd appreciate any help! |
Prove that $$e^{\binom{n}{2}}>n!$$
$n \in \mathbb{Z_+}$
Sorry, couldn't attempt it.
Hint: Use $\binom{n}{2} = 0+1+2+\cdots+(n-1)$.
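Spelling the hint out: since $e^{x} \geq 1 + x$ for all real $x$, we have $k \leq e^{k-1}$ for every $k \geq 1$, with strict inequality for $k \geq 2$. Multiplying over $k = 1, \dots, n$ gives $n! < e^{0+1+\cdots+(n-1)} = e^{\binom{n}{2}}$ for every $n \geq 2$ (for $n = 1$ both sides equal $1$).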
I quite like Thomas Andrews' approach. Alternatively you can estimate $$ \ln(n!)=\sum_{k=1}^n\ln k <\int_1^{n+1}\ln x\,dx. $$ And calculating that integral gives you a good enough upper bound on the r.h.s. Admittedly this needs more machinery, but it also gives a better approximation to $n!$. |
Equivalence of Definitions of Integral Dependence
Theorem
Let $A$ be a commutative ring with unity and let $R \subseteq A$ be a subring. For $x \in A$, the following are equivalent:
$(1):$ $x$ is integral over $R$
$(2):$ The $R$-module $R \left[{x}\right]$ is finitely generated
$(3):$ There exists a subring $B$ of $A$ such that $x \in B$, $R \subseteq B$ and $B$ is a finitely generated $R$-module
$(4):$ There exists a subring $B$ of $A$ such that $x B \subseteq B$ and $B$ is finitely generated over $R$
$(5):$ There exists a faithful $R \left[{x}\right]$-module $B$ that is finitely generated as an $R$-module
Proof
$(1) \implies (2)$
By hypothesis, there exist $r_0, \ldots, r_{n-1} \in R$ such that:
$x^n + r_{n-1} x^{n-1} + \cdots + r_1 x + r_0 = 0$
So the powers $x^k$, $k \ge n$ can be written as an $R$-linear combination of:
$\left\{ {1, \ldots, x^{n-1} }\right\}$
$\Box$
$(2) \implies (3)$
$B = R \left[{x}\right]$ trivially satisfies the required conditions.
$\Box$
$(3) \implies (4)$
By $(3)$ we have an $R$-module $B$ such that $R \subseteq B$, $B$ is finitely generated over $R$.
Also, $x \in B$, so $x B \subseteq B$ as required.
$\Box$
$(4) \implies (5)$
By $(4)$ we have an $R \left[{x}\right]$-module $B$ that is finitely generated over $R$.
Let $y$ lie in the annihilator $\operatorname{Ann}_{R \left[{x}\right]} \left({B}\right)$
We have that $1 \in B$.
Then in particular $y \cdot 1 = 0$, and $y = 0$.
Therefore, $B$ is faithful over $R \left[{x}\right]$.
$\Box$
$(5) \implies (1)$
Let $B$ be as in $(5)$, say generated by $m_1, \ldots, m_n \in B$.
Then there are $r_{i j} \in R$, $i, j = 1,\ldots, n$ such that:
$\displaystyle x \cdot m_i = \sum_{j \mathop = 1}^n r_{i j} m_j$
Let $b_{i j} = x \delta_{i j} - r_{i j}$ where $\delta_{i j}$ is the Kronecker delta.
Then:
$\displaystyle \sum_{j \mathop = 1}^n b_{i j} m_j = 0, \quad i = 1, \ldots, n$
So, let $M = \left({b_{i j} }\right)_{1 \le i, j \le n}$.
Then by Cramer's Rule:
$\left({\det M}\right) m_i = 0$, $i = 1, \ldots, n$
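(In detail: writing $\mathbf m = \left({m_1, \ldots, m_n}\right)^T$, the relations above say $M \mathbf m = \mathbf 0$; multiplying on the left by the adjugate matrix of $M$ and using $\operatorname{adj} \left({M}\right) M = \left({\det M}\right) I_n$ gives the displayed identity.)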
Since $\det M \in R \left[{x}\right]$ annihilates every generator $m_i$ of $B$, it follows that $\det M \in \operatorname{Ann}_{R \left[{x}\right]} \left({B}\right)$.
So $\det M = 0$ by hypothesis.
But $\det M$, viewed as a polynomial in $x$, is monic with coefficients in $R$, so the equation $\det M = 0$ exhibits $x$ as a root of a monic polynomial over $R$.
Thus $x$ is integral over $R$.
$\blacksquare$ |
Nodal solutions for an elliptic equation in an annulus without the signum condition
Bull. Korean Math. Soc. Published online August 20, 2019
Tianlan Chen, Yanqiong Lu, and Ruyun Ma, Department of Mathematics, Northwest Normal University
Abstract: This paper is concerned with the global behavior of components of radial nodal solutions of the semilinear elliptic problem $$-\Delta v=\lambda h(x, v)\ \ \text{in}\ \Omega,\qquad v=0\ \ \text{on}\ \partial\Omega,$$ where $\Omega=\{x\in \mathbb{R}^N: r_1<|x|<r_2\}$ with $0<r_1<r_2$, $N\geq2$. The nonlinear term is continuous and satisfies $h(x, 0)=h(x,s_1(x))=h(x, s_2(x))=0$ for a suitable positive, concave function $s_1$ and a negative, convex function $s_2$, as well as $sh(x, s)>0$ for $s\in\mathbb{R}\setminus\{0, s_1(x), s_2(x)\}$. Moreover, we give the intervals for the parameter $\lambda$ which ensure the existence and multiplicity of radial nodal solutions for the above problem. For this, we use global bifurcation techniques to prove our main results. |
1. The problem statement, all variables and given/known data
A spherical capacitor consists of a spherical conducting shell of charge [itex]-Q[/itex] concentric with a smaller conducting sphere of radius [itex]4.0[/itex] cm and charge [itex]Q[/itex]. The larger conducting shell has inner and outer radii of [itex]11.0[/itex] cm and [itex]13.0[/itex] cm, respectively. What is the capacitance of the system in pF?
2. Relevant equations
[itex]C = \frac{Q}{\Delta V}[/itex]
3. The attempt at a solution
I don’t know what to do when the outer shell has a thickness. I know that when it doesn’t have a thickness you would do
[itex] -\int_{r_1}^{r_2} kQ\frac{dr}{r^2} [/itex] where [itex]r_1[/itex] and [itex]r_2[/itex] are the inner and outer radii, respectively. But here what do you do for [itex]r_2[/itex]?
Thanks.
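For what it's worth, a quick numerical check, using the standard observation that the field (and hence the potential difference) lives entirely in the gap between the conductors, so only the 4.0 cm radius and the 11.0 cm inner radius of the shell enter:

```python
import math

eps0 = 8.854e-12            # vacuum permittivity, F/m
r1, r2 = 0.040, 0.110       # inner sphere radius, inner radius of the outer shell (m)

# Delta V = k Q (1/r1 - 1/r2)  =>  C = Q / Delta V = 4*pi*eps0 / (1/r1 - 1/r2)
C = 4 * math.pi * eps0 / (1/r1 - 1/r2)
print(C * 1e12, "pF")       # roughly 7 pF; the 13.0 cm outer surface never enters
```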
|
I can't seem to grasp the following:
Let $X_1 \sim \exp(\lambda_1), X_2 \sim \exp(\lambda_2)$ and independent.
Then $$ \mathbb{E}\left[X_1 | X_1 < X_2\right] = \frac{1}{\lambda_1 + \lambda_2} $$
Why? How do I get this result?
Also, is this somehow related to $ \mathbb{E}\left[\min(X_1,X_2)\right] = \frac{1}{\lambda_1 + \lambda_2} $? If so, why are they the same?
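(Not a derivation, but a quick Monte Carlo sanity check of both claims:)

```python
import numpy as np

rng = np.random.default_rng(1)
lam1, lam2, n = 0.7, 1.9, 10**6
x1 = rng.exponential(1/lam1, n)
x2 = rng.exponential(1/lam2, n)

print(x1[x1 < x2].mean())          # E[X1 | X1 < X2]
print(np.minimum(x1, x2).mean())   # E[min(X1, X2)]
print(1 / (lam1 + lam2))           # both should be close to this
```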
I would prefer answers that solve it using identities rather than pdf/CDF. |
As motivation of the title, consider the shape of the function $e^{-x}\left(x+\lfloor x\rfloor^2\right)$ as plotted by WolframAlpha:
This exercise, I believe, is very easy. Let $\lfloor x\rfloor$ be the floor function (... obviously we combine with this function when we want to define an integral of the "hedgehog" kind); then
$$\int_0^6 e^{-x}\left(x+\lfloor x\rfloor^2\right)dx=\sum_{k=1}^6\left(\int_{k-1}^k xe^{-x}dx+(k-1)^2\int_{k-1}^ke^{-x}dx\right),$$ and by integrating the first summand by parts we get $$\sum_{k=1}^6\left(e^{-k+1}-ke^{-k+1}+ke^{-k}-2e^{-k}-k^2e^{-k}+k^2e^{-k+1}\right)\approx 2.13235.$$ Indeed, the Wolfram Alpha online calculator gets the closed form and agrees with my calculations. I know that it uses geometric series (and variations of those); if you want to see the closed form and make the comparison, type these codes, first the integral
integrate e^(-x)(x+(floor(x))^2)dx, from x=0 to x=6
and secondly the finite sum that we've obtained
sum e^(-k+1)-k e^(-k+1)+k e^(-k)-2 e^(-k)-k^2 e^(-k)+k^2 e^(-k+1), from k=1 to k=6
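For a quick numerical cross-check of both expressions without Wolfram Alpha, a small Python sketch (the piecewise integration respects the jumps of $\lfloor x\rfloor$ at the interval endpoints):

```python
import math
from scipy.integrate import quad

f = lambda x: math.exp(-x) * (x + math.floor(x)**2)

# integrate piecewise so each sub-interval has a constant floor(x)
integral = sum(quad(f, k, k + 1)[0] for k in range(6))

finite_sum = sum(math.exp(-k+1) - k*math.exp(-k+1) + k*math.exp(-k)
                 - 2*math.exp(-k) - k**2*math.exp(-k) + k**2*math.exp(-k+1)
                 for k in range(1, 7))

print(integral, finite_sum)   # both approximately 2.13235
```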
Now for this question: maybe it isn't important, but I believe that it is a nice exercise meant to put us in a good mood.
Question. Can you calculate in closed form, with all details, the infinite case of such a "hedgehog" integral, $$\int_0^\infty e^{-x}\left(x+\lfloor x\rfloor^2\right)dx?$$ If you believe that getting a closed form isn't feasible, you can provide us your approximation. Thanks a lot. |
An \(n\)th order linear system of differential equations with constant coefficients is written as
\[ \frac{d x_i}{dt} = x'_i = \sum\limits_{j = 1}^n a_{ij} x_j\left( t \right) + f_i\left( t \right),\;\; i = 1,2, \ldots ,n, \]
where \({x_1}\left( t \right),{x_2}\left( t \right), \ldots ,{x_n}\left( t \right)\) are unknown functions of the variable \(t,\) which often has the meaning of time, \({a_{ij}}\) are certain constant coefficients, which can be either real or complex, \({f_i}\left( t \right)\) are given (in general case, complex-valued) functions of the variable \(t.\)
We assume that all these functions are continuous on an interval \(\left[ {a,b} \right]\) of the real number axis \(t.\)
By setting
\[ X\left( t \right) = \left[ \begin{array}{c} x_1\left( t \right)\\ x_2\left( t \right)\\ \vdots \\ x_n\left( t \right) \end{array} \right],\;\; X'\left( t \right) = \left[ \begin{array}{c} x'_1\left( t \right)\\ x'_2\left( t \right)\\ \vdots \\ x'_n\left( t \right) \end{array} \right],\;\; f\left( t \right) = \left[ \begin{array}{c} f_1\left( t \right)\\ f_2\left( t \right)\\ \vdots \\ f_n\left( t \right) \end{array} \right],\;\; A = \left[ \begin{array}{cccc} a_{11} & a_{12} & \cdots & a_{1n}\\ a_{21} & a_{22} & \cdots & a_{2n}\\ \cdots & \cdots & \cdots & \cdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{array} \right], \]
the system of differential equations can be written in matrix form:
\[X'\left( t \right) = AX\left( t \right) + f\left( t \right).\]
If the vector \(f\left( t \right)\) is identically equal to zero: \(f\left( t \right) \equiv 0,\) then the system is said to be homogeneous:
\[X'\left( t \right) = AX\left( t \right).\]
Homogeneous systems of equations with constant coefficients can be solved in different ways. The following methods are the most commonly used:
- elimination method (the method of reduction of \(n\) equations to a single equation of the \(n\)th order);
- method of integrable combinations;
- method of eigenvalues and eigenvectors (including the method of undetermined coefficients or using the Jordan form in the case of multiple roots of the characteristic equation);
- method of the matrix exponential.
Below on this page we will discuss in detail the elimination method. Other methods for solving systems of equations are considered separately in the following pages.
Elimination Method
Using the method of elimination, a normal linear system of \(n\) equations can be reduced to a single linear equation of \(n\)th order. This method is useful for simple systems, especially for systems of order \(2.\)
Consider a homogeneous system of two equations with constant coefficients:
\[\left\{ \begin{array}{l} x'_1 = a_{11} x_1 + a_{12} x_2\\ x'_2 = a_{21} x_1 + a_{22} x_2 \end{array} \right.,\]
where the functions \({x_1},{x_2}\) depend on the variable \(t.\)
We differentiate the first equation and substitute the derivative \(x'_2\) from the second equation:
\[ x''_1 = a_{11} x'_1 + a_{12} x'_2 \;\;\Rightarrow\;\; x''_1 = a_{11} x'_1 + a_{12}\left( a_{21} x_1 + a_{22} x_2 \right) \;\;\Rightarrow\;\; x''_1 = a_{11} x'_1 + a_{12} a_{21} x_1 + a_{22} a_{12} x_2. \]
Now we substitute \({a_{12}}{x_2}\) from the first equation. As a result we obtain a second order linear homogeneous equation:
\[ x''_1 = a_{11} x'_1 + a_{12} a_{21} x_1 + a_{22}\left( x'_1 - a_{11} x_1 \right) \;\;\Rightarrow\;\; x''_1 = a_{11} x'_1 + a_{12} a_{21} x_1 + a_{22} x'_1 - a_{11} a_{22} x_1 \;\;\Rightarrow\;\; x''_1 - \left( a_{11} + a_{22} \right) x'_1 + \left( a_{11} a_{22} - a_{12} a_{21} \right) x_1 = 0. \]
It is easy to construct its solution, if we know the roots of the characteristic equation:
\[ \lambda^2 - \left( a_{11} + a_{22} \right)\lambda + \left( a_{11} a_{22} - a_{12} a_{21} \right) = 0. \]
In the case of real coefficients \({a_{ij}},\) the roots can be both real (distinct or multiple) and complex. In particular, if the coefficients \({a_{12}}\) and \({a_{21}}\) have the same sign, then the discriminant of the characteristic equation will always be positive and, therefore, the roots will be real and distinct.
After the function \({x_1}\left( t \right)\) is determined, the other function \({x_2}\left( t \right)\) can be found from the first equation.
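As a concrete illustration, here is the elimination method worked through symbolically for the example system \(x'_1 = x_1 + 2x_2,\; x'_2 = 4x_1 + 3x_2\) (a sketch using sympy; the coefficient values are chosen only for the example):

```python
import sympy as sp

t = sp.symbols('t')
x1 = sp.Function('x1')
a11, a12, a21, a22 = 1, 2, 4, 3     # example coefficients

# Elimination gives  x1'' - (a11 + a22) x1' + (a11 a22 - a12 a21) x1 = 0:
ode = sp.Eq(x1(t).diff(t, 2) - (a11 + a22)*x1(t).diff(t)
            + (a11*a22 - a12*a21)*x1(t), 0)
sol1 = sp.dsolve(ode, x1(t))
print(sol1)                          # e.g. x1(t) = C1*exp(-t) + C2*exp(5*t)

# x2 then follows from the first equation:  x2 = (x1' - a11*x1)/a12
x2 = sp.simplify((sol1.rhs.diff(t) - a11*sol1.rhs) / a12)
print(x2)                            # correspondingly -C1*exp(-t) + 2*C2*exp(5*t)
```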
The elimination method can be applied not only to homogeneous linear systems. It can also be used for solving nonhomogeneous systems of differential equations or systems of equations with variable coefficients.
Solved Problems
Click a problem to see the solution. |
2019-09-12: Measurements of hadron production in $\pi^{+}$+C and $\pi^{+}$+Be interactions at 60 GeV/$c$ / Aduszkiewicz, A (NA61/SHINE Collaboration). Precise knowledge of hadron production rates in the generation of neutrino beams is necessary for accelerator-based neutrino experiments to achieve their physics goals. NA61/SHINE, a large-acceptance hadron spectrometer, has recorded hadron+nucleus interactions relevant to ongoing and future long-baseline neutrino experiments at Fermi National Accelerator Laboratory. [...] CERN-EP-2019-198. Geneva: CERN, 2019, 42 p.
2019-04-25: Proton-proton interactions and onset of deconfinement / Aduszkiewicz, A (NA61/SHINE Collaboration). Strongly interacting matter at very high densities is expected to be in a state of quasi-free quarks and gluons - the quark-gluon plasma. This hypothesis motivates studies of phases and transitions of strongly interacting matter by colliding atomic nuclei at high energies. [...] CERN-EP-2019-086. Geneva: CERN, 2019, 22 p.
2018-05-02: Measurements of total production cross sections for pi+ + C, pi+ + Al, K+ + C, and K+ + Al at 60 GeV/c and pi+ + C and pi+ + Al at 31 GeV/c / Aduszkiewicz, A. (Warsaw U.); Andronov, E.V. (St. Petersburg State U.); Antićić, T. (Boskovic Inst., Zagreb); Baatar, B. (Dubna, JINR); Baszczyk, M. (AGH-UST, Cracow); Bhosale, S. (Cracow, INP); Blondel, A. (Geneva U.); Bogomilov, M. (Sofiya U.); Brandin, A. (Moscow Phys. Eng. Inst.); Bravar, A. (Geneva U.) et al. This paper presents several measurements of total production cross sections and total inelastic cross sections for the following reactions: pi+ + C, pi+ + Al, K+ + C, K+ + Al at 60 GeV/c, pi+ + C and pi+ + Al at 31 GeV/c. The measurements were made using the NA61/SHINE spectrometer at the CERN SPS. [...] arXiv:1805.04546; CERN-EP-2018-099; FERMILAB-PUB-18-210-AD-CD-ND. Geneva: CERN, 2018-09-11, 11 p. Published in: Phys. Rev. D 98 (2018) 052001, doi:10.1103/PhysRevD.98.052001.
2017-05-09: Measurement of meson resonance production in $\pi^-$+C interactions at SPS energies / NA61/SHINE Collaboration. We present measurements of $\rho^0$, $\omega$ and K$^{*0}$ spectra in $\pi^{-}$+C production interactions at 158 GeV/c and $\rho^0$ spectra at 350 GeV/c using the NA61/SHINE spectrometer at the CERN SPS. Spectra are presented as a function of the Feynman variable $x_\text{F}$ in the range $0 < x_\text{F} < 1$ and $0 < x_\text{F} < 0.5$ for 158 GeV/c and 350 GeV/c respectively. [...] CERN-EP-2017-105; FERMILAB-PUB-17-268-AD-ND; arXiv:1705.08206. Geneva: CERN, 2017-09-20, 34 p. Published in: Eur. Phys. J. C 77 (2017) 626.
2017-04-06: Measurements of $\pi ^\pm$, K$^\pm$, p and ${\bar{\text {p}}}$ spectra in proton-proton interactions at 20, 31, 40, 80 and 158 $\text{GeV}/c$ with the NA61/SHINE spectrometer at the CERN SPS / NA61/SHINE Collaboration. Measurements of inclusive spectra and mean multiplicities of $\pi^\pm$, K$^\pm$, p and $\bar{\textrm{p}}$ produced in inelastic p+p interactions at incident projectile momenta of 20, 31, 40, 80 and 158 GeV/c ($\sqrt{s} = $ 6.3, 7.7, 8.8, 12.3 and 17.3 GeV, respectively) were performed at the CERN Super Proton Synchrotron using the large acceptance NA61/SHINE hadron spectrometer. Spectra are presented as function of rapidity and transverse momentum and are compared to predictions of current models. [...] CERN-EP-2017-066; FERMILAB-PUB-17-185-AD-ND; arXiv:1705.02467. Geneva: CERN, 2017-10-10, 54 p. Published in: Eur. Phys. J. C 77 (2017) 671. |
Say I want to find the n-th prime. Is there an algorithm to directly calculate it or must I do with sieving? I know always calculate the next prime with a sieve principle, but what if I want the n-th prime?
Duplicate:
What do you mean by "directly"? This is not well-defined. Arguing that an algorithm doesn't do something during its computation is not a nice well-defined concept (when that something is a semantic condition).
It is probably one of the most common mistakes that people make when thinking about algorithms, because they look at obvious cases and think it is simple to formalize the notion that an algorithm for a problem performs some task, like computing some other things, during its computation; but that is not simple at all!
Moreover why would we care? What we care about in practice is not that the algorithm computes all previous primes, but rather time and space efficiency. So I guess a better way of asking your question is asking if there is more efficient way of computing $n$th prime number than sieve-based methods.
If you are asking if there is a provably correct and efficient algorithm to find $n$th prime given $n$ in binary the answer is that is an open question. We don't know if there is any such algorithm, and we don't know if there isn't one. AFAIK, it is consistent with our current state of knowledge that there are algorithms for generating the $n$th prime that run in linear time in $n$. In fact, it can even be the case that we can generate the $n$th prime from bits of $n$ using a polynomial-size constant-depth circuit with threshold gates ($\mathsf{TC^0}$), in simplified non-technical terms: there can be a parallel algorithm with polynomial number of processors that generates the $n$th prime number in constant time and each processor computes very simple functions. So there is big gap between algorithms that we have (upper-bounds) and lower-bounds we can prove for the problem.
However we have algorithms that work efficiently and correctly assuming conjectures like conjectures about how primes are distributed.
See the Wikipedia article on generating prime numbers if you haven't. Also you may want to check this question: Which is the fastest algorithm to find prime numbers?
When $n$ is not big, then I think the sieve-based algorithms perform well. We just need to keep a list of size $O(n \log n)$ and run one of a well-known sieve-based algorithms for generating primes.
When $n$ is big and we can't afford to keep a list of size $O(n \log n)$, we can use an
in-place sieve method. But I personally prefer to have a small list of primes and use Miller-Rabin test to iterate through possible primes.
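To make the sieve-based route concrete, here is a small sketch that sieves up to the classical upper bound $p_n < n(\ln n + \ln\ln n)$ (valid for $n \ge 6$):

```python
import math

def nth_prime(n):
    """Return the n-th prime (1-indexed) by sieving up to an upper bound."""
    if n < 6:
        return [2, 3, 5, 7, 11][n - 1]
    limit = int(n * (math.log(n) + math.log(math.log(n)))) + 1  # Rosser-type bound
    sieve = bytearray([1]) * (limit + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(limit**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = bytearray(len(sieve[p*p::p]))
    count = 0
    for i, is_prime in enumerate(sieve):
        if is_prime:
            count += 1
            if count == n:
                return i

print(nth_prime(10**6))   # 15485863
```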
The sieve-based algorithms are the most efficient algorithms we currently know for generating prime numbers.
It is not clear what you mean by generating $n$th prime "directly". If you mean that there is any known algorithms as fast as sieve-based algorithms for generating the $n$th prime number that doesn't compute the previous primes numbers then the answer is that we don't know any such algorithm.
If by "direct" you mean a polynomial in one variable that gives the $n$th prime number, then we can actually prove that there is no polynomial with integer coefficients that calculates the $n$th prime number. And even if there were such a polynomial, the algorithms for evaluating polynomials use $\Omega(\log m)$ operations, where $m$ is the degree of the polynomial to be evaluated.
The cop-out but realistic answer for small inputs would be $O(1)$ for tiny inputs and roughly $O(\log n)$ below some finite threshold as it can be done as a binary search through a table of primes. Since we can get good starting bounds it might be even smaller -- our search range is a small fraction of the table.
What is often done for medium sizes, say 100k to 100M, is sparse tables followed by sieving the range between entries. With enough entries the sieving doesn't take long, so the result is pretty fast and even 1k of tables plus a decent sieve takes you to 100M or so. It doesn't scale well however. I know one application that does this for large sizes at the expense of ridiculous amounts of table storage, but it was also done before open source implementations of the next paragraph were available.
Beyond these toy input sizes, we can do a binary search on the prime count. The extended LMO method has complexity $O\big(\frac{x^{2/3}}{\log^2{x}}\big)$. I believe that would come out to $O\big(\frac{x^{2/3}}{\log{x}}\big)$ for the nth prime. In practice typically one does a good approximation, a single call for the fast prime count, followed by sieving the remainder as this is typically an extremely small range and usually faster than a second prime count, much less $\log n$ of them.
As to your question of whether this can be done directly, the last paragraph gives a well-defined algorithm, so yes, but it isn't a closed form function. Calculating the nth prime by sieving to n works fine for small inputs, but is
massively slower for large inputs. Even at just $10^{10}$, primesieve is about 2000x slower than the inverse fast prime count method, and the gap keeps widening. |
Let's put the succinct answer by @TheAlmightyBob into an abstract model:
We want to model the labor market.
Markets' structure assumptions: goods market and labor markets are perfectly competitive. All participants are "too small" economically, and they cannot affect equilibrium price through their quantities demanded/supplied - they are "price takers". Markets "clear" - i.e. prices adjust so that quantity actually supplied equals quantity actually bought.
Agents assumption: There are $n$ identical workers, and $m$ identical firms, that participate in the market. Both populations are fixed.
Other assumptions: a) deterministic environment, b) one perishable good produced, c) model in "real terms" (real wage etc, scaled by the price of the good produced).
The typical firm produces according to the technology $$Y_j = F_j(K_j,L_j;\mathbf q) \tag{1}$$
where $\mathbf q$ is a vector of parameters. Perfect competition in the goods market, and a perishable good, imply that all output produced is sold. The goal of the firm is maximization of capital returns over the choice of labor.
$$\max_{L_j} \pi_j = F_j(K_j,L_j;\mathbf q) - wL_j$$
We are modelling the labor market, so we are interested in the first-order condition
$$\frac {\partial \pi_j}{\partial L_j} = 0 \tag{2}$$ and the corresponding input demand schedule
$$L_j^* = L_j^*\left(K_j, \mathbf q, w\right) \tag{3}$$
Total Labor demand is $L_d = m\cdot L_j^*$. The labor market equilibrium assumption implies
$$ L_d = L_s \Rightarrow m\cdot L_j^*\left(K_j, \mathbf q, w\right) = L_s \tag{4}$$
which implicitly expresses the equilibrium wage as a function of technology constants, of per-firm capital, and of labor supplied. In order to fully characterize the labor market, we need to derive also the optimal labor supply.
Each identical worker derives utility from consumption and leisure, subject to a biological limit of available time, $T$, and the budget constraint that consumption equals wage income:
$$\max_{L_i} U(C_i, T-L_i;\mathbf \gamma),\;\; \text{s.t.} \;C_i= wL_i$$
where $\gamma$ is a vector of preference parameters, indicating the relative weight between utility from consumption and from leisure. This will give us individual labor supply as
$$L_i^* = L_i^*(T,w, \mathbf \gamma) \tag{5}$$
and total labor supply is $L_s = n\cdot L_i^*$. Plugging this into $(4)$ we obtain
$$mL_j^*\left(K_j, \mathbf q, w\right) =n L_i^*(T,w, \mathbf \gamma) \tag{6}$$
If we stop here, we have a partial equilibrium model that examines the labor market: we have fully described the market, and the goals and the constraints of the participants in it (firms and workers), but the equilibrium relates only to this specific market. We can perform comparative statics in order to see how the various components of $(6)$ affect the equilibrium wage. Among them is the capital-per-firm term, whose effects on the wage we can also consider based on $(6)$, by treating it as varying arbitrarily.
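To make $(6)$ concrete, here is a small symbolic sketch under assumed functional forms (Cobb-Douglas technology and log utility over consumption and leisure, with $\alpha = 1/3$); these functional forms are illustrative assumptions, not part of the argument above:

```python
import sympy as sp

w, L, K, T, gamma, n, m = sp.symbols('w L K T gamma n m', positive=True)
alpha = sp.Rational(1, 3)        # assumed capital share, purely illustrative

# Firm: max_L  K^alpha * L^(1-alpha) - w*L   (the FOC is equation (2))
L_demand = sp.solve(sp.diff(K**alpha * L**(1 - alpha) - w*L, L), L)[0]

# Worker: max_L  ln(w*L) + gamma*ln(T - L)   (log utility, budget C = w*L)
L_supply = sp.solve(sp.diff(sp.log(w*L) + gamma*sp.log(T - L), L), L)[0]
print(L_supply)                  # T/(gamma + 1): wage-inelastic under log utility

# Market clearing, equation (6):  m * L_demand = n * L_supply, solved for w
print(sp.solve(sp.Eq(m*L_demand, n*L_supply), w))
# positive root: w* = (2/3) * (K*m*(gamma + 1)/(T*n))**(1/3)
```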
In order to turn this model into a general equilibrium model: a) We need to specify things about capital: who owns it/controls it/makes decisions on it, and what the objective functions of these decision makers are. This will lead us to an optimal $K_j^*$ as a function of the structure we will impose here. Then, comparative statics with respect to $K_j$ will turn into comparative statics with respect to the factors that affect the determination of $K_j^*$, which may very well prove to involve also $\mathbf q, w$ and even the other parameters in $(6)$, changing in this way the comparative statics results obtained in a partial equilibrium setting.
b) We also need to take into account any macroeconomic identities that characterize this economy, something along the lines of $mY_j \equiv ...$ where the right-hand side will be determined by the assumptions we make related to capital, but also, for example, by whether we will assume that the economy is closed or open, or partially open to the outside economic system.
So, apart from being more complicated as a model, it may also lead us to different conclusions than partial equilibrium analysis. |
I am having an issue classifying $\mathbb{Z}\times\mathbb{Z}/\langle(0,3)\rangle$ according to the fundamental theorem of finitely generated abelian groups (i.e. finding what $\mathbb{Z}\times\mathbb{Z}/\langle(0,3)\rangle$ is isomorphic to). I think it should be $\mathbb{Z}$, but I am not sure why. Thanks!
Well, it is not $\mathbb{Z}$. What is the order of (the coset) $[0,1]$?
You don't need the fundamental theorem at all. What you need is the following: if $G,G'$ are groups and $N\subseteq G$, $N'\subseteq G'$ are normal subgroups then $N\times N'$ is normal in $G\times G'$ and
$$(G\times G')/(N\times N')\simeq (G/N)\times (G'/N')$$
With that you can easily check that $\langle(0,3)\rangle=\{0\}\times 3\mathbb{Z}$ and so your group is $\mathbb{Z}\times\mathbb{Z}_3$.
We have
$\qquad \mathbb{Z}\times\mathbb{Z} = \mathbb{Z} e_1 \oplus \mathbb{Z} e_2 $
$\qquad \langle(0,3)\rangle = \mathbb{Z} (0 e_1) \oplus \mathbb{Z} (3e_2) $
Therefore,
$\qquad \mathbb{Z}\times\mathbb{Z}/\langle(0,3)\rangle \cong \mathbb{Z}\times\mathbb{Z_3}$
An explicit isomorphism is induced by $(x,y) \in \mathbb{Z}\times\mathbb{Z} \mapsto (x, y \bmod 3) \in \mathbb{Z}\times\mathbb{Z_3}$. |
I have the following system of ODEs: $$ \frac{1}{2}\sigma^2 p''_n(x)+x p'_n(x)+p_n(x)-\frac{p_n(x)-p_{n-1}(x)}{\Delta t}=0,$$ $$p_1(a)=p(b)=0;\quad p_0(x)=0.1,\quad n=1,\dots,N, $$ which was derived from the elliptic PDE by the method of lines: $$\frac{\partial p}{\partial t}=p+x\frac{\partial p}{\partial x}+\frac{1}{2}\sigma^2\frac{\partial^2 p}{\partial x^2},$$ $$\lim_{x\rightarrow\pm \infty} p(x,t)=0,\quad \forall t.$$ Now I should set the boundary values $a$ and $b.$ I want them to be as far from zero as possible. But the problem arises when I try to solve the system by the shooting method, which doesn't permit me to set them to anything other than $a=0$, $b=1$.
n = 5; h = 1/n;
U[x_] = Table[Subscript[u, i][x], {i, 0, n}] /. Subscript[u, 0][x] -> g[x];
Eq = Table[Subscript[eq, i], {i, 0, n}];
\[Sigma] = 0.1;
a[x_, t_] = 1/2 \[Sigma]^2; b[x_, t_] = x; c[x_, t_] = -1; d[x_, t_] = 1; f[x_, t_] = 0;
g[x_] = 0.1; \[Alpha][t_] = 0; \[Beta][t_] = 0;
xl = -0.5; xr = 0.5; A = 0; B = 1;
eqs = Table[
   Eq[[i]] = 1/(xr - xl)^2 a[x, h i] D[U[x][[i]], {x, 2}] +
      1/(xr - xl) b[x, h i] D[U[x][[i]], x] - c[x, h i] U[x][[i]] -
      d[x, h i] ((U[x][[i]] - U[x][[i - 1]])/h) - f[x, h i] == 0, {i, 2, n + 1}];
bcs = Table[{U[A][[i]] == 0, U[B][[i]] == 0}, {i, 2, n + 1}]
sols = First[NDSolve[{eqs, bcs}, U[x], x,
   Method -> {"Shooting", "StartingInitialConditions" ->
      Table[(D[U[x][[i]], x] /. x -> 0) == 1, {i, 2, n + 1}]}]]
When I set $A=-1$ and change the initial conditions for the shooting method, I get the error:
NDSolve::bvluc: The equations derived from the boundary conditions are numerically ill-conditioned. The boundary conditions may not be sufficient to uniquely define a solution. If a solution is computed, it may match the boundary conditions poorly.
NDSolve::berr: The scaled boundary value residual error of 5.277001462096112`*^45 indicates that the boundary values are not satisfied to specified tolerances. Returning the best solution found.
I've also tried to rescale the system by introducing the scale variables $xl$ and $xr$, but the same problem persists: only the current values can be applied. I have no clue what the problem could be -- the analytical solution of the given PDE exists and is unique. |
Berkey et al. (1998) describe a meta-analytic multivariate model for the analysis of multiple correlated outcomes. The use of the model is illustrated with results from 5 trials comparing surgical and non-surgical treatments for medium-severity periodontal disease. Reported outcomes include the change in probing depth (PD) and attachment level (AL) one year after the treatment. The effect size measure used for this meta-analysis was the (raw) mean difference, calculated in such a way that positive values indicate that surgery was more effective than non-surgical treatment in decreasing the probing depth and increasing the attachment level. The data are provided in Table I in the article and are stored in the dataset dat.berkey1998 that comes with the metafor package:
library(metafor)
dat <- dat.berkey1998
dat
(I copy the dataset into dat, which is a bit shorter and therefore easier to type further below.) The contents of the dataset are:
   trial           author year ni outcome    yi    v1i    v2i
1      1 Pihlstrom et al. 1983 14      PD  0.47 0.0075 0.0030
2      1 Pihlstrom et al. 1983 14      AL -0.32 0.0030 0.0077
3      2    Lindhe et al. 1982 15      PD  0.20 0.0057 0.0009
4      2    Lindhe et al. 1982 15      AL -0.60 0.0009 0.0008
5      3   Knowles et al. 1979 78      PD  0.40 0.0021 0.0007
6      3   Knowles et al. 1979 78      AL -0.12 0.0007 0.0014
7      4  Ramfjord et al. 1987 89      PD  0.26 0.0029 0.0009
8      4  Ramfjord et al. 1987 89      AL -0.31 0.0009 0.0015
9      5    Becker et al. 1988 16      PD  0.56 0.0148 0.0072
10     5    Becker et al. 1988 16      AL -0.39 0.0072 0.0304
So, the results from the various trials indicate that surgery is preferable for reducing the probing depth, while non-surgical treatment is preferable for increasing the attachment level.
Since each trial provides effect size estimates for both outcomes, the estimates are correlated. The v1i and v2i values are the variances and covariances of the observed effects. In particular, for each study, variables v1i and v2i form a 2×2 variance-covariance matrix of the observed effects, with the diagonal elements corresponding to the sampling variances of the mean differences (the first for probing depth, the second for attachment level) and the off-diagonal value corresponding to the covariance of the two mean differences.
Before we can proceed with the model fitting, we need to construct the full (block-diagonal) variance-covariance matrix for all studies from these two variables. We can do this using the bldiag() function in one line of code:
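(The actual line of code is not shown in this excerpt; a sketch of one way to write it, using metafor's bldiag() helper together with the v1i and v2i columns listed above, would be:)

V <- bldiag(lapply(split(dat[, c("v1i", "v2i")], dat$trial), as.matrix))  # one 2x2 block per trial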
The V matrix is then equal to:
        [,1]   [,2]   [,3]   [,4]   [,5]   [,6]   [,7]   [,8]   [,9]  [,10]
 [1,] 0.0075 0.0030 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
 [2,] 0.0030 0.0077 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
 [3,] 0.0000 0.0000 0.0057 0.0009 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
 [4,] 0.0000 0.0000 0.0009 0.0008 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
 [5,] 0.0000 0.0000 0.0000 0.0000 0.0021 0.0007 0.0000 0.0000 0.0000 0.0000
 [6,] 0.0000 0.0000 0.0000 0.0000 0.0007 0.0014 0.0000 0.0000 0.0000 0.0000
 [7,] 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0029 0.0009 0.0000 0.0000
 [8,] 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0009 0.0015 0.0000 0.0000
 [9,] 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0148 0.0072
[10,] 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0072 0.0304
A multivariate random-effects model can now be used to meta-analyze the two outcomes simultaneously.
res <- rma.mv(yi, V, mods = ~ outcome - 1, random = ~ outcome | trial, struct="UN", data=dat, method="ML")
print(res, digits=3)

Multivariate Meta-Analysis Model (k = 10; method: ML)

Variance Components:

outer factor: trial   (nlvls = 5)
inner factor: outcome (nlvls = 2)

         estim   sqrt  k.lvl  fixed  level
tau^2.1  0.026  0.162      5     no     AL
tau^2.2  0.007  0.084      5     no     PD

      rho.AL  rho.PD    AL  PD
AL         1   0.699     -  no
PD     0.699       1     5   -

Test for Residual Heterogeneity:
QE(df = 8) = 128.227, p-val < .001

Test of Moderators (coefficient(s) 1,2):
QM(df = 2) = 155.772, p-val < .001

Model Results:

           estimate     se    zval   pval   ci.lb   ci.ub
outcomeAL    -0.338  0.080  -4.237  <.001  -0.494  -0.182  ***
outcomePD     0.345  0.049   6.972  <.001   0.248   0.442  ***

---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
This is what Berkey et al. (1998) call a multivariate maximum likelihood (MML) random-effects model.
1) Note that rma.mv() uses REML estimation by default, so method="ML" must be explicitly requested. The random = ~ outcome | trial part adds random effects for each outcome within each trial to the model. With struct="UN", the random effects are allowed to have different variances for each outcome and are allowed to be correlated.
The results show that the amount of heterogeneity in the attachment level (AL) outcome (i.e., $\hat{\tau}_{AL}^2 = .026$) is larger than the amount of heterogeneity in the probing depth (PD) outcome (i.e., $\hat{\tau}_{PD}^2 = .007$). Furthermore, the true outcomes appear to correlate quite strongly (i.e., $\hat{\rho} = .70$). On average, surgery is estimated to lead to significantly greater decreases in probing depth (i.e., $\hat{\mu}_{PD} = .35$), but non-surgery is more effective in increasing the attachment level (i.e., $\hat{\mu}_{AL} = -.34$).
The results given in Table II in the paper actually are based on a meta-regression model, using year of publication as a potential moderator. To replicate those analyses, we use:
res <- rma.mv(yi, V, mods = ~ outcome + outcome:I(year - 1983) - 1, random = ~ outcome | trial, struct="UN", data=dat, method="ML")
print(res, digits=3)

Multivariate Meta-Analysis Model (k = 10; method: ML)

Variance Components:

outer factor: trial   (nlvls = 5)
inner factor: outcome (nlvls = 2)

         estim   sqrt  k.lvl  fixed  level
tau^2.1  0.025  0.158      5     no     AL
tau^2.2  0.008  0.090      5     no     PD

      rho.AL  rho.PD    AL  PD
AL         1   0.659     -  no
PD     0.659       1     5   -

Test for Residual Heterogeneity:
QE(df = 6) = 125.756, p-val < .001

Test of Moderators (coefficient(s) 1,2,3,4):
QM(df = 4) = 143.439, p-val < .001

Model Results:

                          estimate     se    zval   pval   ci.lb   ci.ub
outcomeAL                   -0.335  0.079  -4.261  <.001  -0.489  -0.181  ***
outcomePD                    0.348  0.052   6.694  <.001   0.246   0.450  ***
outcomeAL:I(year - 1983)    -0.011  0.024  -0.445  0.656  -0.059   0.037
outcomePD:I(year - 1983)     0.001  0.015   0.063  0.950  -0.029   0.031

---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Note that publication year was centered at 1983, as was done by the authors. These results correspond to those given in the rightmost column in Table II on page 2545 (column "multiple outcomes MML"). The output above directly provides the correlation among the true effects. We can compute the covariance with:
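(The line of code producing the covariance below is not shown in this excerpt; assuming the fitted object stores the estimates under the usual component names res$tau2 and res$rho, one way to reproduce the printed value is:)

round(res$rho * sqrt(res$tau2[1] * res$tau2[2]), digits=3)  # covariance = rho * tau_AL * tau_PD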
[1] 0.009
To test whether the slope of publication year actually differs for the two outcomes, we can fit the same model with:
res <- rma.mv(yi, V, mods = ~ outcome*I(year - 1983) - 1, random = ~ outcome | trial, struct="UN", data=dat, method="ML")
print(res, digits=3)
The output is identical, except for the last part, which is now equal to:
Model Results:

                          estimate     se    zval   pval   ci.lb   ci.ub
outcomeAL                   -0.335  0.079  -4.261  <.001  -0.489  -0.181  ***
outcomePD                    0.348  0.052   6.694  <.001   0.246   0.450  ***
I(year - 1983)              -0.011  0.024  -0.445  0.656  -0.059   0.037
outcomePD:I(year - 1983)     0.012  0.020   0.593  0.553  -0.027   0.051

---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Therefore, the slope is actually not significantly different for the two outcomes ($p = .553$). In fact, it does not appear as if publication year is at all related to the two outcomes.
One could actually consider a simpler model for these data, which assumes a compound symmetry structure for the random effects (this would imply that the amount of heterogeneity is the same for the two outcomes). A formal comparison of the two models can be conducted using a likelihood ratio test:
res1 <- rma.mv(yi, V, mods = ~ outcome - 1, random = ~ outcome | trial, struct="UN", data=dat, method="ML")
res0 <- rma.mv(yi, V, mods = ~ outcome - 1, random = ~ outcome | trial, struct="CS", data=dat, method="ML")
anova(res0, res1)

        df      AIC      BIC     AICc  logLik     LRT   pval        QE
Full     5  -1.6813  -0.1684  13.3187  5.8407                 128.2267
Reduced  4  -2.4111  -1.2008   5.5889  5.2056  1.2702 0.2597  128.2267
Since model res1 has one more parameter than model res0, the test statistic (LRT) follows (approximately) a chi-square distribution with 1 degree of freedom. The p-value (0.2597) suggests that the more complex model (with struct="UN") does not actually yield a significantly better fit than the simpler model (with struct="CS"). However, with only 5 studies, this test is likely to have very low power.
Berkey, C. S., Hoaglin, D. C., Antczak-Bouckoms, A., Mosteller, F., & Colditz, G. A. (1998). Meta-analysis of multiple outcomes by regression with random effects. Statistics in Medicine, 17(22), 2537–2550.
Note that rma.mv() can estimate this model even when not every study reports both outcomes: if a particular study had only measured one of the two outcomes, it could still be included in the analysis. The variance-covariance matrix of that study would then simply be a $1 \times 1$ matrix, with its single value equal to the sampling variance of the observed outcome. |
I am working on the following USNCO problem (#41 from 2002). Based on this question and answer, Deriving a reduction potential from two other reduction potentials, it seems that $\Delta G$ must be calculated for each half-reaction and then added. However, when I do this, I get an answer that is not one of the given choices.
The problem is:
Use the given standard reduction potentials to determine the reduction potential for this half-reaction: $$\ce{MnO4- + 3e- +4H+ -> MnO2 + 2H2O}$$
The given reactions are:
$$\begin{align} \ce{MnO4- + e-} &\rightarrow \ce{MnO4^2-} & E &= +0.564~\mathrm{V} \\ \ce{MnO4^2- + 2e- + 4H+} &\rightarrow \ce{MnO2 + 2H2O} & E &= +2.261~\mathrm{V} \end{align}$$
The possible answers are: $1.695~\mathrm{V}$, $2.825~\mathrm{V}$, $3.389~\mathrm{V}$, and $5.086~\mathrm{V}$.
The correct answer is A ($1.695~\mathrm{V}$). Using $\Delta G$, I got $E = 1.928~\mathrm{V}$, but this is not a choice. Am I doing the math wrong, or is my method incorrect? Thanks!
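(For reference, and not part of the original question: carrying the $\Delta G = -nFE$ bookkeeping through with the two given half-reactions leads to the listed choice A.)

$$\begin{align} \Delta G_3 &= \Delta G_1 + \Delta G_2 = -(1)F(0.564~\mathrm{V}) - (2)F(2.261~\mathrm{V}) = -(5.086~\mathrm{V})F \\ E_3 &= \frac{-\Delta G_3}{3F} = \frac{5.086~\mathrm{V}}{3} \approx 1.695~\mathrm{V} \end{align}$$

which matches the first answer choice. |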
Your intuition is correct. The factor $A$ changes with temperature.
This article details how the value of $\ce{A}$ for an elementary, bimolecular reaction between $\ce{P}$ and $\ce{Q}$ can be derived to be:
$$A_{\ce{PQ}}=N_\ce{P}N_\ce{Q}d^2_{\ce{PQ}}\sqrt{\frac{8k_\mathrm{B}T}{\mu}}$$
The RHS is clearly a function of temperature. Without going into the details,[a] it suffices to remember that $A$ is a function of temperature because it is related to molecular collisions, which themselves are a function of temperature.
However, it is worth noting this paragraph on Wikipedia:
...under a wide range of practical conditions, the weak temperature dependence of the pre-exponential factor is negligible compared to the temperature dependence of the $\mathrm{e}^{(-E_\mathrm{a}/RT)}$ factor [b] (my emphasis); except in the case of "barrierless" diffusion-limited reactions, in which case the pre-exponential factor is dominant and is directly observable.
Given this, it may be within sufficient experimental errors to make the assumption that $A$ does not vary with temperature. However, it is just that, an assumption. In reality, $A$ does vary with temperature.
[a]: This isn't the exactly correct expression though. As the article itself notes, "Often times however, when the term is determined experimentally, $A$ is the preferred variable and when the constant is determined mathematically, $Z$ is the variable more often used. The derivation for $Z$, while mostly accurate, ignores the steric effect of molecules."
[b]: Of course, here Wikipedia is using the actually computed values of $A$ (and not vague estimated formulae like the one above; as actually $\sqrt{T}$ grows faster than $\mathrm{e}^{-1/T}$). |
I'm trying to understand why it is possible to describe every diagonal line in the Ulam spiral with a quadratic polynomial $$2n\cdot(2n+b)+a = 4n^2 + 2nb +a$$ for $a, b \in \mathbb{N}$ and $n \in \{0,1,\ldots\}$.
It seems to be true but why?
Wikipedia says: "The pattern also seems to appear even if the number at the center is not 1 (and can, in fact, be much larger than 1).
This implies [WHY?] that there are many integer constants $b$ and $c$ such that the function $4n^2+bn+c$ generates, as $n$ counts up $\{1, 2, 3, \dots\}$, a number of primes that is large by comparison with the proportion of primes among numbers of similar magnitude."
I can't find a source with a detailed explanation.
I found these equations:
So here is the solution:
\begin{align*} y_t - y_{t+1} - (y_{t+1} - y_{t+2}) &= 8\\ y_t - 2y_{t+1} + y_{t+2} &= 8\\ y_{t+2} - 2y_{t+1} + y_t &= 8 \end{align*}
1) We solve $y_{t+2} - 2y_{t+1} + y_t = 0.$
Let $y_t = A\beta^t$ \begin{align*} A\beta^{t+2} - 2A\beta^{t+1} + A\beta^t &= 0\\ A\beta^{t}\cdot (\beta^2 - 2\beta + 1) &= 0 \end{align*} $\beta^2 - 2\beta + 1 = 0$ has two identical solutions $\beta_{1,2} = 1$. So with $A_1$ and $A_2t$ we get $$y_t = A_1 + A_2t.$$
2) $1 + a_1 + a_2 = 0$ and $a_1 = -2$ so let $y_t = ct^2$ \begin{align*} c\cdot(t+2)^2 - 2c\cdot(t+1)^2 + ct^2 &= 8\\ c\cdot\big(t^2+4t+4 - 2\cdot(t^2+2t+1) + t^2\big) &= 8\\ c\cdot(t^2+4t+4 - 2t^2-4t-2 + t^2) &= 8\\ 2c &= 8\\ c &= 4 \end{align*} So $y_t = 4t^2$
3) The complete solution is $$y_t = 4t^2 + A_2t + A_1.$$
The "exclusion lines" seem to be interesting too: $$4n^2+n$$ $$4n^2+3n$$ $$4n^2+3n-1$$ $$4n^2-n$$ seem not to have any primes at all.
Useful website I found a bit late: http://ulamspiral.com |
This problem arises from a Bayesian statistical modeling project. In order to compute with my model, I need to perform an integration in which part of the integrand is the "Pólya" or "Dirichlet-Multinomial" Distribution,
$$p(n\mid \alpha) = \frac{(N!) \Gamma(K\alpha)}{\Gamma(\alpha)^K \Gamma ( N + K\alpha)} \prod_{i=1}^K \frac{\Gamma(n_i + \alpha)}{ n_i!}$$
where $n_i$ and $N = \sum_{i=1}^K n_i$ are integers, $n = \left(n_1, n_2, \dots, n_K\right)$, and $\alpha > 0$. The integral I wish to compute, $\int_0^\infty (\text{other terms})p(n|\alpha) d\alpha$, works well for small $N$, but the quadrature methods I've attempted (in MATLAB) break down as $N$ becomes large. I haven't tried Monte Carlo; an accurate, fast quadrature method would be very nice for my project.
Currently, the "best" method when $N$ is large is to compute $\log[p(n|\alpha)]$ over a grid in alpha, normalize, and exponentiate. This is inaccurate (I lose essentially all detail about the distribution except its peaks), but at least produces a number.
I would appreciate any advice on improving this computation, or pointers to different algorithms/methods or existing software.
EDIT: I should maybe add that that my evaluation of $p(n|\alpha)$, performed by computing $\log p(n|\alpha)$ using some carefully-written code to compute $\log \Gamma(x)$ for large $x$, does not appear to be causing any problems.
EDIT 2: Additionally, "large" values would be on the order of $N\sim 10^8$, with the largest $n_i\sim 10^5$, along with many small values of $n_i$. The other terms are numerically well-behaved. As a simplification with roughly the appropriate tail behavior, you could take $(\text{other terms}) = \exp(-\alpha)$.
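Not part of the original question, but to make the setup concrete, here is a minimal sketch (mine, with toy counts) of evaluating $\log p(n\mid\alpha)$ via scipy.special.gammaln and integrating $e^{-\alpha}\,p(n\mid\alpha)$ on a grid in $u=\log\alpha$; the log-transform plus a log-sum-exp style shift is one common way to keep the integrand representable when $N$ is large.

import numpy as np
from scipy.special import gammaln

def log_polya(n, alpha):
    # log of the Dirichlet-multinomial mass p(n | alpha) for an integer count vector n
    n = np.asarray(n)
    N, K = n.sum(), n.size
    return (gammaln(N + 1) + gammaln(K * alpha) - K * gammaln(alpha)
            - gammaln(N + K * alpha)
            + np.sum(gammaln(n + alpha) - gammaln(n + 1)))

# Integrate exp(-alpha) * p(n | alpha) over (0, inf) using alpha = exp(u);
# the Jacobian contributes a factor alpha, i.e. an extra +u inside the log.
n = np.array([5, 0, 2, 1, 9, 3])                  # toy counts; replace with real data
u = np.linspace(-10.0, 10.0, 2001)                # grid in u = log(alpha)
du = u[1] - u[0]
log_f = np.array([log_polya(n, np.exp(ui)) - np.exp(ui) + ui for ui in u])
shift = log_f.max()                               # stabilise before exponentiating
integral = np.exp(shift) * np.sum(np.exp(log_f - shift)) * du
print(integral)

Everything stays in log space until the final exponentiation, so the shape of the integrand survives even when individual factors would under- or overflow. |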
Let $G$ be a context free grammar in Chomsky normal form (CNF) with language $L(G)\subseteq \Sigma^n$. In other words, all strings generate by $G$ have size $n$.
Say that a string $w\in L(G)$ has height $h$ if $w$ has a parse-tree of height at most $h$. Say that $G$ has height $h$ if each string $w\in L(G)$ has height $h$. Let $|G|$ be the number of production rules in $G$. I have the following problem, which I believe it is well studied in the field of parallel parsing, but with a somewhat distinct terminology.
Problem: Given a context free grammar $G$ in CNF accepting a language $L(G)\subseteq \Sigma^n$, construct a context free grammar $G'$ in CNF such that $L(G') = L(G)$, $h(G') = O(\log n)$, $|G'| = |G|^{O(1)}\cdot n^{O(1)}$.
Does the problem given above always have a solution? In other words, from $G$ we want to construct a context free grammar $G'$ accepting the same language as $G$, but such that every string in this language has a parse tree of logarithmic height. The size of the obtained grammar $G'$ is allowed to blow up polynomially in $n$ and in the size of the original CFG $G$.
I'm mostly interested in references dealing with the problem above or similar problems.
Obs 1: Without the requirement that $|G'|=|G|^{O(1)}\cdot n^{O(1)}$, we can construct a grammar $G'$ with size $2^{O(n)}$ by considering a distinct set of production rules for each string in $L(G)$.
Obs 2: I don't care about the time necessary to construct $G'$. The only important thing is its size $|G'|$.
Obs 3: Both grammars are required to be in Chomsky normal form. Also both are allowed to be ambiguous. |
It should be: "$f:R\to R'$ is a ring homomorphism". Otherwise this is not true. Indeed, if $f$ is not a ring homomorphism then $f(ab)\neq f(a)f(b)$ for some $a,b\in R$. It is clear that $\varphi(ab)\neq\varphi(a)\varphi(b)$ as well where $a,b$ are now treated as polynomials of degree $0$. Note that for a polynomial $r$ of degree $0$ we have $\varphi(r)=f(r)$.
As an example of such a group homomorphism that is not a ring homomorphism but even satisfies $f(1)=1$, consider this: let $R=R'=\mathbb{Z}^2$ (with pointwise multiplication) and let $f(x,y)=(x,2x-y)$. I leave it as an exercise that $f$ is a group homomorphism. But it is not a ring homomorphism because
$$f((2,1)\cdot (2,1))=f(4,1)=(4,7)$$$$f(2,1)\cdot f(2,1)=(2,3)\cdot (2,3)=(4,9)$$
BTW: this example shows that your statement "I also know how to show the additive and multiplicative properties for the ring homomorphism" cannot be correct (more precisely, I'm referring to the "multiplicative" part). So the assumption "$f$ is a group homomorphism" is a mistake (it is not strong enough) and it should be "$f$ is a ring homomorphism".
Also note that the identity of $R[X]$ is $1$ (treated as a polynomial of degree $0$). Therefore $\varphi(1)=1$ if and only if $f(1)=1$. It's quite trivial. More difficult is to show that $\varphi$ preserves multiplication if $f$ does. |
It is certainly not required but usually one writes a weighted sum of probability density functions where the sum of the weights equals 1. In your example the sum of the weights is $\sqrt{\pi}$. One can rewrite the pdf of $X$ as
$$f_X(x)={{2m^m x^{2m-1}}\over{\Gamma(m)}}\sum_{i=1}^n w_i h(t_i)$$
with $\sum_{i=1}^n w_i=1$. This makes it more explicit that each of the weighted pdf's is indeed a legitimate pdf:
$$\int_{0}^{\infty}{{2m^m x^{2m-1}}\over{\Gamma(m)}}h(t_i)dx=1$$
As a check, here is the Mathematica code:
g[x_] := (2 m^m x^(2 m - 1)/Gamma[m]) Exp[-m (2^(1/2) λ t + μ + x^2 Exp[-2^(1/2) λ t - μ])]
Integrate[g[x], {x, 0, ∞}, Assumptions -> {a > 0, t ∈ Reals, λ > 0, μ > 0, m > 1/2}]
(* 1 *)
Because we have a weighted sum of nice pdf's we can write
$$\int_{\sqrt{a}}^{\infty}f_X(x)dx=\sum_{i=1}^{n}w_i \int_{\sqrt{a}}^{\infty}{{2m^m x^{2m-1}}\over{\Gamma(m)}}h(t_i)dx=\sum_{i=1}^{n}w_i{{\Gamma(m,a\,m\, e^{-\sqrt{2}t_i \lambda-\mu})}\over{\Gamma(m)}}$$
(I say "nice pdf" to avoid me messing up any discussion of switching the order of integration and summation.) You want $Pr(Y>a)=Pr(X^2>a)=Pr(X>\sqrt{a})$ (because the only positive support is where $X \ge 0$). (The numerator in the final sum above is the incomplete gamma function.) |
MTH101 Calculus And Analytical Geometry GDB Solution & Discussion
For the function $f(x) = 3 - 2x$, the point $x_0 = 3$ and the positive number $\epsilon = 0.02$, find $L = \lim_{x \to x_0} f(x)$.
Moreover, find a number $\delta > 0$ such that $0 < |x - x_0| < \delta \implies |f(x) - L| < \epsilon$.
Note:
Please follow the following methodology to find $\delta$.
Sorry, saw your message late.. the GDB has already closed by now.
Assalam o alaikum Amna,
sis, can you please help me with this solution?
I don't understand the Math at all either.
email me also and guide me please
my id is:
[email protected]
Please, someone upload the GDB file; then I will try to solve the GDB. Thanks.
I am copying my solution. It is only for learning purpose so please do not just copy paste otherwise I might get Zero :)
Note: This solution might not be 100% correct >>>>
Finding $L$:
\[\mathop {\lim }\limits_{x \to 3} f(x) = \mathop {\lim }\limits_{x \to 3} (3 - 2x) = 3 - 2\mathop {\lim }\limits_{x \to 3} x = 3 - 2\left( 3 \right) = - 3 = L\]
At the moment we have
\[L = - 3,\qquad {x_0} = 3,\qquad \epsilon = 0.02,\]
so $f(x)$ lies in the interval
\[(L - \epsilon ,L + \epsilon ) = ( - 3.02, - 2.98).\]
Defining $\delta$ in terms of $\epsilon$:
\[\left| {f(x) - L} \right| < \epsilon \implies \left| {(3 - 2x) - ( - 3)} \right| < \epsilon \implies 2\left| {x - 3} \right| < \epsilon \implies \left| {x - 3} \right| < \epsilon /2\]
Finding $\delta$ such that $\left| {x - {x_0}} \right| < \delta \implies \left| {x - 3} \right| < \delta$: since both left-hand inequalities above agree, we can take
\[\delta = \epsilon /2 = 0.02/2 = 0.01.\]
So the subset intervals are
\[({x_0} - \delta ,{x_0}) \cup ({x_0},{x_0} + \delta ) = (2.99,3) \cup (3,3.01).\]
Finally, proving that this value of $\delta$ keeps $f(x)$ within $\epsilon$ of $L$:
\[\left| {x - {x_0}} \right| < \delta \implies \left| {x - 3} \right| < \epsilon /2 \implies \left| {2x - 6} \right| < \epsilon \implies \left| {(3 - 2x) - ( - 3)} \right| < \epsilon \]
which is exactly
\[\left| {f(x) - L} \right| < \epsilon .\]
Hence the limit of $f(x)$ exists as $x$ approaches $3$.
Is this right or not, sir?
I can't understand any of this; kindly upload the MathType file.
MTH101 gdb solution 2015 |
@Mathphile I found no prime of the form $$n^{n+1}+(n+1)^{n+2}$$ for $n>392$ yet, nor a reason why the expression cannot be prime for odd $n$, although there are far more even cases without a known factor than odd cases.
@TheSimpliFire That's what I'm thinking about. I had some "vague feeling" that there must be some elementary proof, so I decided to find it, and then I found it. It is really "too elementary", but I like surprises, if they're good.
It is in fact difficult, I did not understand all the details either. But the ECM method is analogous to the p-1 method, which works well when there is a factor p such that p-1 is smooth (has only small prime factors).
Brocard's problem is a problem in mathematics that asks to find integer values of $n$ and $m$ for which $$n!+1=m^2,$$ where $n!$ is the factorial. It was posed by Henri Brocard in a pair of articles in 1876 and 1885, and independently in 1913 by Srinivasa Ramanujan.
Brown numbers
Pairs of the numbers $(n, m)$ that solve Brocard's problem are called Brown numbers. There are only three known pairs of Brown numbers: (4,5), (5,11...
$\textbf{Corollary.}$ No solutions to Brocard's problem (with $n>10$) occur when $n$ that satisfies either \begin{equation}n!=[2\cdot 5^{2^k}-1\pmod{10^k}]^2-1\end{equation} or \begin{equation}n!=[2\cdot 16^{5^k}-1\pmod{10^k}]^2-1\end{equation} for a positive integer $k$. These are the OEIS sequences A224473 and A224474.
Proof: First, note that since $(10^k\pm1)^2-1\equiv((-1)^k\pm1)^2-1\equiv1\pm2(-1)^k\not\equiv0\pmod{11}$, $m\ne 10^k\pm1$ for $n>10$. If $k$ denotes the number of trailing zeros of $n!$, Legendre's formula implies that \begin{equation}k=\min\left\{\sum_{i=1}^\infty\left\lfloor\frac n{2^i}\right\rfloor,\sum_{i=1}^\infty\left\lfloor\frac n{5^i}\right\rfloor\right\}=\sum_{i=1}^\infty\left\lfloor\frac n{5^i}\right\rfloor\end{equation} where $\lfloor\cdot\rfloor$ denotes the floor function.
The upper limit can be replaced by $\lfloor\log_5n\rfloor$ since for $i>\lfloor\log_5n\rfloor$, $\left\lfloor\frac n{5^i}\right\rfloor=0$. An upper bound can be found using geometric series and the fact that $\lfloor x\rfloor\le x$: \begin{equation}k=\sum_{i=1}^{\lfloor\log_5n\rfloor}\left\lfloor\frac n{5^i}\right\rfloor\le\sum_{i=1}^{\lfloor\log_5n\rfloor}\frac n{5^i}=\frac n4\left(1-\frac1{5^{\lfloor\log_5n\rfloor}}\right)<\frac n4.\end{equation}
Thus $n!$ has $k$ zeroes for some $n\in(4k,\infty)$. Since $m=2\cdot5^{2^k}-1\pmod{10^k}$ and $2\cdot16^{5^k}-1\pmod{10^k}$ have at most $k$ digits, $m^2-1$ has at most $2k$ digits by the conditions in the Corollary. The Corollary therefore follows if $n!$ has more than $2k$ digits for $n>10$. From equation $(4)$, $n!$ has at least the same number of digits as $(4k)!$. Stirling's formula implies that \begin{equation}(4k)!>\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\end{equation}
Since the number of digits of an integer $t$ is $1+\lfloor\log t\rfloor$ where $\log$ denotes the logarithm in base $10$, the number of digits of $n!$ is at least \begin{equation}1+\left\lfloor\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right)\right\rfloor\ge\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right).\end{equation}
Therefore it suffices to show that for $k\ge2$ (since $n>10$ and $k<n/4$), \begin{equation}\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right)>2k\iff8\pi k\left(\frac{4k}e\right)^{8k}>10^{4k}\end{equation} which holds if and only if \begin{equation}\left(\frac{10}{\left(\frac{4k}e\right)}\right)^{4k}<8\pi k\iff k^2(8\pi k)^{\frac1{4k}}>\frac58e^2.\end{equation}
Now consider the function $f(x)=x^2(8\pi x)^{\frac1{4x}}$ over the domain $\Bbb R^+$, which is clearly positive there. Then after considerable algebra it is found that \begin{align*}f'(x)&=2x(8\pi x)^{\frac1{4x}}+\frac14(8\pi x)^{\frac1{4x}}(1-\ln(8\pi x))\\\implies f'(x)&=\frac{2f(x)}{x^2}\left(x-\frac18\ln(8\pi x)\right)>0\end{align*} for $x>0$ as $\min\{x-\frac18\ln(8\pi x)\}>0$ in the domain.
Thus $f$ is monotonically increasing in $(0,\infty)$, and since $2^2(8\pi\cdot2)^{\frac18}>\frac58e^2$, the inequality in equation $(8)$ holds. This means that the number of digits of $n!$ exceeds $2k$, proving the Corollary. $\square$
We get $n^n+3\equiv 0\pmod 4$ for odd $n$, so we can see from here that it is even (or, we could have used @TheSimpliFire's one-or-two-step method to derive this without any contradiction - which is better)
@TheSimpliFire Hey! with $4\pmod {10}$ and $0\pmod 4$ then this is the same as $10m_1+4$ and $4m_2$. If we set them equal to each other, we have that $5m_1=2(m_2-m_1)$ which means $m_1$ is even. We get $4\pmod {20}$ now :P
Yet again a conjecture! Motivated by Catalan's conjecture and a recent question of mine, I conjecture that: For distinct, positive integers $a,b$, the only solution to this equation $$a^b-b^a=a+b\tag1$$ is $(a,b)=(2,5)$. It is of anticipation that there will be much fewer solutions for incr... |
In my textbook, it's stated that:
When $\epsilon < -1$, demand is elastic and raising price will result in smaller income, while lowering price will result in bigger income.
When $\epsilon = -1$, demand is neither elastic nor inelastic and change in price won't result in change in income.
When $\epsilon > -1$, demand is inelastic and raising price will result in bigger income, while lowering price will result in smaller income.
$\epsilon = \%\Delta Q / \%\Delta P$.
This is the exercise I found confusing:
Old price: 5
New price: 6
Old quantity: 25
New quantity: 20
Calculate elasticity
This is my solution:
$\% \Delta P = \frac{\text{new price } - \text{ old price}} {\text{old price}} = \frac{6 - 5} 5 = 0.2$
$\%\Delta Q = \frac{\text{new quantity } - \text{ old quantity}} { \text{old quantity}} = \frac{20 - 25} {25} = -0.2$
$\epsilon = \%\Delta Q / \%\Delta P = -0.2 / 0.2 = -1$
This is why I am confused:
$\text{Old income} = \text{old price} \times \text{old quantity} = 5 \times 25 = 125$
$\text{New income} = \text{new price} \times \text{new quantity} = 6 \times 20 = 120$
Old income does not equal new income even though elasticity is -1!
What am I doing wrong? Am I misunderstanding the textbook?
Edit: the answer provided is $\epsilon = 1.22$, but I have no idea where it comes from.
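(Not part of the original question, but for what it's worth: $1.22$ in absolute value is exactly what the midpoint (arc) elasticity formula gives, i.e. using the averages of the old and new values as the base rather than the old values:)
$$\epsilon_{\text{arc}} = \frac{(20-25)\big/\frac{20+25}{2}}{(6-5)\big/\frac{6+5}{2}} = \frac{-0.2222}{0.1818} \approx -1.22$$
With the old values as the base, as in the solution above, the point elasticity is exactly $-1$; the two formulas simply measure the percentage changes differently. |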
The intensity of light (as calculated from time average of the poynting vector) is given by $I = (1/2) \epsilon v E_0^2$. Here the intensity is dependent on the velocity of light in the medium. The refractive index also depends on the velocity of light. So is it safe to say that the intensity of light depends on the refractive index of the medium?
Since $n=\sqrt{\varepsilon_r\mu_r}$ (the relative permeability $\mu_r$ being almost always $1$), and $v=\frac{c}{n}$, you can also write $I = \frac{nc\varepsilon_0}{2} E_0^2$. (We used the decomposition $\varepsilon=\varepsilon_r\varepsilon_0$.)
So, the intensity depends linearly on the refractive index.
As mentioned in the other answers, if the medium is linear then the refractive index is independent of the intensity of light, and the intensity can be related to the electric field amplitude through $I = \frac{nc\varepsilon_0}{2} E_0^2$.
However, that does not mean, as the (incorrect) accepted answer implies, that the intensity "depends linearly on $n$". If you shine a laser through a piece of glass, the beam does not magically get more intense in the region with higher refractive index; instead, the intensity remains constant (it is an energy flux, and energy is conserved), and the electric-field amplitude $E_0$ decreases. As such, in the linear-optical regime, and absent reflection losses at the boundary between media, the intensity does not depend on the refractive index.
Having gotten over the boring part, however, and addressing the broader issue raised in the question's title, "Relation between intensity of light and refractive index", there are indeed regimes when the intensity of light has an interesting relationship with the refractive index, though it goes the other way around $-$ the refractive index depends on the intensity.
To be more specific, this happens when the light is intense enough that nonlinear effects can kick in, because of something called the Kerr effect: if the intensity is high enough, then the refractive index will increase by a small amount $\Delta n$ which is normally proportional to the intensity: $$ n(I) = n_0 + n_2 I. $$ This is important, because when lasers reach that kind of intensity, this typically only happens in the middle of the beam, and there the added $\Delta n$ makes the medium seem optically thicker, much like a convex lens would (an effect known as a Kerr lens), so it will tend to focus the beam into a tighter spot.
So, what happens if you focus the beam more tightly? Well, it will get more intense, so the self-focusing will increase and the Kerr lensing will get more severe - and if you're not careful, you can get into a regime with runaway self-focusing where the beam gets tighter and tighter until the intensity exceeds the damage threshold of the material and you burn a hole in your medium. And, if it's not your lucky day, the light will then diffract off of that hole only to re-self-focus a bit further down the line, and eventually it will destroy your entire beamline.
To emphasize the importance of this, if you look at the largest available peak laser intensity over the past few decades, there's a very, very flat line lasting some fifteen years between the late sixties and 1985: this is the threshold where self-focusing makes it impossible to amplify the light further without the laser destroying itself, a problem which was only solved with the advent of chirped pulse amplification.
For a more recent take on that topic, see What is Chirped Pulse Amplification, and why is it important enough to warrant a Nobel Prize?
The other answers here seem incomplete, because they ignore how you might actually do an experiment. The formula in question is correct, as is the response saying equivalently $I=n\times c\times \epsilon_0\times E_0^2/2$, but if you only ask what happens when you raise $n$ without considering what happens to $E_0$ you will get the wrong idea. The equation seems to say that if n increases then the intensity will increase. But actually, if you go from air to water the Fresnel equations show that $E_0$ changes by a factor $2/(1+n)$ where $n$ is the water refractive index, so Intensity changes by a factor $4n/(1+n)^2$ which is always less than one (for positive $n$).
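(A quick numerical illustration of that factor, mine rather than the answerer's: at normal incidence from air into water with $n \approx 1.33$,
$$\frac{4n}{(1+n)^2} = \frac{4(1.33)}{(2.33)^2} \approx 0.98,$$
i.e. the transmitted intensity is about 2% lower than the incident intensity, consistent with the familiar normal-incidence reflectance $R=\left(\frac{n-1}{n+1}\right)^2 \approx 0.02$.)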
So is it safe to say that the intensity of light depends on the refractive index of the medium?
All other things in the formula being equal (electric field amplitude, dielectric constant), yes. However, this may be difficult to achieve with a single light ray in a single experiment. A propagating light ray experiencing a change of $n$ will also experience a change of $\epsilon$ and a change in its electric field amplitude.
When light ray enters a dielectric with higher $n$, only part of the light energy will "get in" and propagate inside the dielectric. So intensity inside may be less than intensity outside, despite higher $n$. It depends on what percentage will get through the boundary, which in turn depends on the details of the angle of incidence and quality of boundary.
The answer can also depend on whether one wants to include the energy of the excited dielectric matter into the definition of light intensity, or consider it separately (this may make sense, since part of it is essentially kinetic energy of charged particles, not EM energy). |
I will refer to Qiaochu's excellent answer here as proof that if we define
$$f(N):=\sum\limits_{n=0}^N n^2$$
then $f$ is a polynomial of degree $3$.
It is easy to calculate the first few values of this sum. Namely,
$\begin{align}f(0) &= 0 \\f(1) &= 1 \\f(2) &= 5 \\f(3) &= 14\end{align}$
I claim that these four points are sufficient to uniquely determine $f$.
To wit, we have in general that
$$f(x)=\sum\limits_{k=0}^3 c_kx^k$$
which when be combined with the four computed values above results in the following system of equations:
$$\begin{pmatrix}1 & 0 & 0 & 0 \\1 & 1 & 1 & 1 \\1 & 2 & 4 & 8 \\1 & 3 & 9 & 27\end{pmatrix}\begin{pmatrix}c_0 \\ c_1 \\ c_2 \\ c_3\end{pmatrix}=\begin{pmatrix}0 \\ 1 \\ 5 \\ 14\end{pmatrix}$$
This matrix is a Vandermonde matrix which has a well-known determinant
$$\begin{align}\det(V) &= (1-0)(2-0)(3-0)(2-1)(3-1)(3-2) \\&\neq 0\end{align}$$
Because its determinant is nonzero, the matrix is invertible, and so we have
$$\begin{pmatrix}c_0 \\ c_1 \\ c_2 \\ c_3\end{pmatrix}=V^{-1}\cdot\begin{pmatrix}0 \\ 1 \\ 5 \\ 14\end{pmatrix}$$
from which $f(x)$ can be determined directly.
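Not in the original answer, but here is a small numerical sketch (NumPy) of exactly this step: building the Vandermonde matrix from the nodes $0,1,2,3$ and solving for the coefficients, which come out as $c = (0, \tfrac16, \tfrac12, \tfrac13)$, i.e. $f(x)=\tfrac{x}{6}+\tfrac{x^2}{2}+\tfrac{x^3}{3}$.

import numpy as np

nodes = np.array([0, 1, 2, 3])
values = np.array([0, 1, 5, 14])            # f(0), f(1), f(2), f(3)

# Vandermonde matrix with columns 1, x, x^2, x^3 (increasing powers).
V = np.vander(nodes, N=4, increasing=True)
c = np.linalg.solve(V, values)
print(c)                                     # [0.  0.16666667  0.5  0.33333333]

# Sanity check against the closed form x(x+1)(2x+1)/6 at a few points.
x = np.arange(10)
assert np.allclose(np.polyval(c[::-1], x), x*(x+1)*(2*x+1)/6)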
However, if you're like most people, inverting a $4\times4$ matrix doesn't exactly tickle your fancy!
Luckily, now that we see that the interpolating cubic is unique, we could find it through the described matrix multiplication, but we would get to the same result if we proceeded a different route as well. This is where Lagrange polynomials come to the rescue.
Using the general formula, we have immediately that
$$\begin{align}f(x) &= 0\cdot(\dots)+1\cdot\frac{x(x-2)(x-3)}{1(1-2)(1-3)}+5\cdot\frac{x(x-1)(x-3)}{2(2-1)(2-3)}+14\cdot\frac{x(x-1)(x-2)}{3(3-1)(3-2)} \\&= \frac{1}{2}\left(x^3-5x^2+6x\right)-\frac{5}{2}\left(x^3-4x^2+3x\right)+\frac{14}{6}\left(x^3-3x^2+2x\right) \\&= \frac{1}{6}\left(2x^3+3x^2+x\right) \\&= \frac{1}{6}x\left(x+1\right)\left(2x+1\right)\end{align}$$
You can generalize this approach to find expressions for $\sum n^p\quad\forall p\in\mathbb{N}$.
Or, you know, there's always Faulhaber's formula. |
I have two arbitrary vectors $\vec{x}$ and $\vec{x}'$ given in spherical coordinates $(|\vec{x}|=x,\theta,\phi)$ (as convention I take the "physics notation" given on Wikipedia http://en.wikipedia.org/wiki/Spherical_coordinate_system). I now want to rotate the coordinate system so that it's $z$-direction points along $\vec{x}$. That means, $\vec{x}$ would have the values $(0, 0, x)$. I now need to compute the angles of $\vec{x}'$. The absolute value does not change, but the angles do. I need to figure out how the angles are in the new coordinate system. With the help of rotation matrices, one is able to get:
$\vec{x}' = x' (\sin(\theta')\cos(\phi'-\phi)\cos(\theta)-\sin(\theta)\cos(\theta'),\sin(\theta')\sin(\phi'-\phi),\sin(\theta')\cos(\phi'-\phi)\sin(\theta)+\cos(\theta)\cos(\theta')) \equiv x' (\sin(\alpha')\cos(\beta'),\sin(\alpha')\sin(\beta'),\cos(\alpha')) $
Now $\alpha', \beta'$ are the angles in the normal sense but in the new coordinate system. I need a converting rule $\theta', \phi' \to \alpha', \beta'$. Anyone a hint?
Edit (some further explanations): I need this to compute an integral of the form $\int \mathrm{d}^3x'\, g(\theta',\phi')f(|\vec{x}-\vec{x}'|)$, and I converted $\mathrm{d}^3x'=x'^2\mathrm{d}x'\,\mathrm{d}\phi' \sin(\theta')\mathrm{d}\theta'$ to spherical coordinates. The problem is that $|\vec{x}-\vec{x}'|^2=x^2+x'^2-2xx'[\sin(\theta')\cos(\phi'-\phi)\sin(\theta)+\cos(\theta)\cos(\theta')]$ contains angles of both vectors, and I need to get rid of the unprimed angles (which is possible by transforming the coordinate system under the integral to point with its z-direction along $\vec{x}$). |
Kay, David, Styles, Vanessa and Süli, Endre (2009) Discontinuous Galerkin finite element approximation of the Cahn--Hilliard equation with convection. SIAM Journal on Numerical Analysis, 47 (4). pp. 2660-2685. ISSN 0036-1429
Abstract
The paper is concerned with the construction and convergence analysis of a discontinuous Galerkin finite element method for the Cahn-Hilliard equation with convection. Using discontinuous piecewise polynomials of degree $p\geq1$ and backward Euler discretization in time, we show that the order-parameter $c$ is approximated in the broken ${\rm L}^\infty({\rm H}^1)$ norm, with optimal order ${\cal O}(h^p+\tau)$; the associated chemical potential $w=\Phi'(c)-\gamma^2\Delta c$ is shown to be approximated, with optimal order ${\cal O}(h^p+\tau)$ in the broken ${\rm L}^2({\rm H}^1)$ norm. Here $\Phi(c)=\frac{1}{4}(1-c^2)^2$ is a quartic free-energy function and $\gamma>0$ is an interface parameter. Numerical results are presented with polynomials of degree $p=1,2,3$.
URI: http://sro.sussex.ac.uk/id/eprint/28244 |
I am solving old problems from various qualifiers from different universities to prepare myself for an upcoming test. I came across this and wanted to ask if anyone can confirm my answers?
My answers:
** I use $\succeq$ to denote "at least as good as".
(a) The certainty equivalent, in general, is the amount of money $c(F,u)$ such that, for $F \in \Delta(\mathbb{R})$, $\delta_{c(F,u)} \sim F$, i.e. $u[c(F,u)] = U(F)$.
Here, I've never seen something like this and so it is my best guess:
$\forall\, x<M$: $\;C(F,\sqrt{x})= u^{-1}\!\left[\int\sqrt{x}\,dF\right]$. $\quad\forall\, x\geq M$: $\;u[C(F,\sqrt{M})]= \sqrt{M} \implies C(F,u)=x$.
Since $u(\cdot)$ isn't 1-1, and thus not invertible, over $x\geq M$, I eventually just decided my result above was true?
Or should it be something more like:
$u[C(F,u)] = F(M)\sqrt{x} + [1-F(M)]\cdot\sqrt{M}$
(b) I know that the certainty equivalent is less than or equal to the expected value of $F$ iff an agent is risk averse.
I think that is the same as saying $u[C(F,u)] \leq u(\int x dF)$ , $\forall F \in \Delta(\mathbb{R})$
(c) An agent is risk averse if and only the agent's preferences are represented by a concave utility index $u(.)$ and so this agent is risk averse since:
For $x < M$, $u(x)=\sqrt{x}$, which is clearly concave. For $x\geq M$: let $\{x_1,x_2\} \subset [M,\infty)$ and $\alpha \in [0,1]$. Then $x_3 := \alpha x_1 + (1-\alpha) x_2 \in [M,\infty)$.
Now, note that $u(x_i)=\sqrt{M}$ for $i=1,2,3$,
and so $$u(x_3) \geq \alpha u(x_1) + (1-\alpha)u(x_2)$$
$$\to u(\alpha x_1 + (1-\alpha)x_2) \geq \alpha u(x_1) + (1-\alpha)u(x_2)$$
(d) Again, I am not sure about this. All I know about F.O.S.D. is that for two lotteries $F,G$ and any expected-utility agent with monotone preferences over money: $$F \geq_{FOSD} G \iff F \succeq G $$
Any and all help is appreciated. |
A blog of Python-related topics and code.
The following code attempts to pack a predefined number of smaller circles (of random radii between two given limits) into a larger one.
The Morse oscillator is a model for a vibrating diatomic molecule that improves on the simple harmonic oscillator model in that the vibrational levels converge with increasing energy and that at some finite energy the molecule dissociates. The potential energy varies with displacement of the internuclear separation from equilibrium, $x = r - r_\mathrm{e}$ as: $$ V(x) = D_\mathrm{e}\left[ 1-e^{-ax} \right]^2, $$ where $D_\mathrm{e}$ is the dissociation energy, $a = \sqrt{k_\mathrm{r}/2D_\mathrm{e}}$, and $k_\mathrm{e} = (\mathrm{d}^2V/\mathrm{d}x^2)_\mathrm{e}$ is the bond force constant at the bottom of the potential well.
The harmonic oscillator is often used as an approximate model for the behaviour of some quantum systems, for example the vibrations of a diatomic molecule. The Schrödinger equation for a particle of mass $m$ moving in one dimension in a potential $V(x) = \frac{1}{2}kx^2$ is$$-\frac{\hbar^2}{2m}\frac{\mathrm{d}^2\psi}{\mathrm{d}x^2} + \frac{1}{2}kx^2\psi = E\psi.$$With the change of variable, $q = (mk/\hbar^2)^{1/4}x$, this equation becomes$$-\frac{1}{2}\frac{\mathrm{d}^2\psi}{\mathrm{d}q^2} + \frac{1}{2}q^2\psi = \frac{E}{\hbar\omega}\psi,$$where $\omega = \sqrt{k/m}$. This differential equation has an exact solution in terms of a quantum number $v=0,1,2,\cdots$:$$\psi(q) = N_vH_v(q)\exp(-q^2/2),$$where $N_v = (\sqrt{\pi}2^vv! )^{-1/2}$ is a normalization constant and $H_v(q)$ is the Hermite polynomial of order $v$, defined by:$$H_v(q) = (-1)^ve^{q^2}\frac{\mathrm{d}^v}{\mathrm{d}q^v}\left(e^{-q^2}\right).$$The Hermite polynomials obey a useful recursion formula:$$H_{n+1}(q) = 2qH_n(q) - 2nH_{n-1}(q),$$so given the first two: $H_0 = 1$ and $H_1 = 2q$, we can calculate all the others.
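As a quick illustration (not part of the original post), the recursion above drops straight into a few lines of Python; the sketch below evaluates $H_v(q)$ on a NumPy grid and checks the first few polynomials against their closed forms.

import numpy as np

def hermite(v, q):
    # H_v(q) via H_{n+1} = 2 q H_n - 2 n H_{n-1}, starting from H_0 = 1 and H_1 = 2q.
    q = np.asarray(q, dtype=float)
    Hprev, H = np.ones_like(q), 2 * q
    if v == 0:
        return Hprev
    for n in range(1, v):
        Hprev, H = H, 2 * q * H - 2 * n * Hprev
    return H

q = np.linspace(-2, 2, 5)
assert np.allclose(hermite(2, q), 4*q**2 - 2)
assert np.allclose(hermite(3, q), 8*q**3 - 12*q)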
The following code simulates (very approximately) the growth of a polycrystal from a number of seeds. Atoms are added to the crystal lattice of each of the resulting grains until no more will fit, creating realistic-looking boundaries where two grains meet. |
In layman's terms:
First let's start with the Fourier series, a method that Fourier wrote down for the first time in a paper about heat diffusion modelling. The idea is that any continuous function can be approximated by adding up lots of sine and cosine functions. The more terms you use, the more accurate the approximation will be:
$f(x) = a_0\cos\frac{\pi x}{2}+a_1\cos \frac{3\pi x}{2}+a_2\cos\frac{5\pi x}{2}+\cdots + b_0\sin\frac{\pi x}{2}+b_1\sin \frac{3\pi x}{2}+b_2\sin\frac{5\pi x}{2}+\cdots$
In order to find the coefficients, the following trick was used for cosine functions:$$a_n = \displaystyle\frac{1}{\pi}\int_{-\pi}^\pi f(x) \cos(nx)\, dx,$$and similarly for sine functions:$$b_n = \displaystyle\frac{1}{\pi}\int_{-\pi}^\pi f(x) \sin(nx)\, dx.$$
These formulae are known as the Fourier sine and cosine transforms.
Euler's formula $e^{i\theta} = \cos(\theta) + i \sin(\theta)$ shows a relationship between exponential and cosine and sine functions. The Fourier sine and cosine transforms can thus be combined into a single transform:$$\hat{f}(\xi) = \int_{-\infty}^{\infty} f(x)\ e^{- 2\pi i x \xi} \, dx,$$and this explains why you see exponential functions in the Fourier transform instead of cosine and sine functions.
Now, in image processing we are typically working with two-dimensional images. So the Fourier transform has been done twice:$$\displaystyle F(u,v)=\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(x,y)\ e^{-j2\pi(ux+vy)} \, dx \, dy. $$
Since images actually come as discrete pixels rather than continuous functions, the integrals are replaced by summations.
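To make the discrete two-dimensional version concrete, here is a small sketch (mine, using NumPy) that computes the 2D discrete Fourier transform of a toy image array; np.fft.fft2 implements exactly the double summation alluded to above.

import numpy as np

# A toy 8x8 "image": a vertical stripe pattern.
img = np.zeros((8, 8))
img[:, ::2] = 1.0

F = np.fft.fft2(img)                     # 2D DFT: double sum of f(x, y) * exp(-2*pi*i*(u*x/M + v*y/N))
magnitude = np.abs(np.fft.fftshift(F))   # shift the zero-frequency component to the centre
print(magnitude.round(2))

The inverse transform np.fft.ifft2 recovers the original pixel array up to floating-point error. |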
I want to evaluate
$$\lim_{x\rightarrow0^+}\frac{\log{x}}{e^{1/x}}$$
I know that for $x\rightarrow0^+$, $\log{x}\rightarrow-\infty$ and $e^{1/x}\rightarrow+\infty$.
This leads to an indeterminate form $\left[\frac{\infty}{\infty}\right]$, so I'm not sure what to do in these situations. Perhaps change the variable, but I'm not sure what the logic behind changing the variable of a limit is.
Any hints? |
First, a disclaimer: I'm not sure I see the statistical validity of combining both linear and logistic regression with the same measurement vectors $x_n$. I am going to assume you know what you are doing :-) and address the optimization question only.
Some quick and dirty approaches:
- My Matlab toolbox CVX 2.1 can handle this, although with a caveat, because it has to jump through some hoops to get the underlying solvers to accept the logistic regression term.
- CVX 3.0 beta coupled with the SCS solver can solve this problem "natively", thus avoiding the aforementioned caveat; but this will be a bit more difficult to get up and running, and again, it's a beta!
- YALMIP can probably handle this well, too; and I believe it connects to SCS as well, which means it can also solve this problem natively.
- CVXPY coupled with SCS can do this same thing in Python.
- And you can implement your own proximal gradient solver if you are so inclined, though of course that's an advanced approach. You'd have to build a function to compute your own derivatives of the smooth portion of the objective.
Here is a logistic regression example for CVX, so you can see how to express the logistic term in a compliant manner using the CVX function log_sum_exp. It's a simple matter to modify this example to add the additional terms.
My recommendation is that you provide weighting values for both the linear regression and $\ell_1$ terms. That is, minimize something like this:$$f = -\sum_{n=1}^{N}\log~p(y_{n}^{a}|x_{n},w) + \lambda_1\sum_{n=1}^{N}(y_{n}^{b}-w^{T}x_{n})^{2} +\lambda_2\|w\|_1$$You won't know what the best values of $\lambda_1$ and $\lambda_2$ are until you have done some cross validation. What I do know is that the chance that $\lambda_1=1$ is your best choice is slim to none.
The model in CVX is going to look something like this. It assumes that the data $y^a_n$ and $y^b_n$ are stored in the column vectors ya and yb, respectively, and that the measurement vectors $x_n^T$ form the rows of the $N\times m$ matrix X.
cvx_begin
    variable w(m)
    minimize( ...
        -ya'*X*w + sum(log_sum_exp([zeros(1,N); w'*X'])) ... % logistic term; N = number of observations (rows of X)
        + lambda1*sum_square(yb - X*w) ...                   % linear (least-squares) term
        + lambda2*norm(w,1) )                                % l1 regularizer
cvx_end |
On the quality of semidefinite approximations of uncertain semidefinite programs affected by box uncertainty
Aharon Ben-Tal and Arkadi Nemirovski
Let $P(z) = A_0 + z_1 A_1 + \dots + z_L A_L$ be an affine mapping taking values in the space of $m \times m$ real symmetric matrices such that $A_0$ is positive semidefinite. Consider the following question: what is the largest $R$ such that the set $P(\{z: \|z\|_\infty \leq R\})$ is contained in the positive semidefinite cone? In general, it is NP-hard to compute a "tight enough" approximation of $R$. One can, however, easily build a simple semidefinite program such that its optimal value $r$ is a lower bound on $R$. We demonstrate that the ratio $R/r$ does not exceed ${\pi\sqrt{k}\over 2}$, where $k$ is the maximum of the ranks of $A_1, A_2,\dots,A_L$. We present 3 applications of the result: we demonstrate that one can build efficiently lower bounds, exact within the factor ${\pi\over 2}$, for the following two quantities:
(1) the largest $R$ such that all instances of an interval symmetric matrix $\{A=A^T: |A_{ij}-A^*_{ij}| \leq R D_{ij}\ \forall i,j\}$ are positive semidefinite;
(2) the largest $R$ such that all instances of an interval square matrix $\{A: |A_{ij}-A^*_{ij}| \leq R D_{ij}\ \forall i,j\}$ admit a common quadratic Lyapunov stability certificate.
Besides this, we present an alternative proof (which does not use the Goemans-Williamson construction) of the fact, established by Yu. Nesterov, that the standard semidefinite upper bound on the maximum of a positive semidefinite quadratic form over the unit cube is at most ${\pi\over 2}$ times larger than the true value of the maximum.
Research report #2/00, April 2000, MINERVA Optimization Center, Technion -Israel Institute of Technology, Technion City, Haifa 32000, Israel
Contact: [email protected]
See home page for Technion Faculty of Industrial Engineering and Management |
Mohammadi, B., & Alizadeh, E. (2019). Endpoints of generalized $\phi$-contractive multivalued mappings of integral type. Caspian Journal of Mathematical Sciences (CJMS), 8(2), 137-144. doi: 10.22080/cjms.2018.9207.1265
Endpoints of generalized $\phi$-contractive multivalued mappings of integral type
Department of Mathematics, Marand Branch, Islamic Azad University, Marand, Iran
Abstract
Recently, some researchers have established results on the existence of endpoints for multivalued mappings. In particular, Mohammadi and Rezapour [Endpoints of Suzuki type quasi-contractive multifunctions, U.P.B. Sci. Bull., Series A, 2015] used the technique of $\alpha-\psi$-contractive mappings, due to Samet et al. (2012), to give some results about endpoints of Suzuki type quasi-contractive multifunctions satisfying property (BS). In this paper, we prove existence and uniqueness of an endpoint for multivalued mappings satisfying the weaker conditions of generalized $\phi$-contractivity of integral type and property (HS). This result generalizes and improves Mohammadi and Rezapour's result. Also, we give an example to illustrate the usability of the result. |
The other contributor deleted his answer, maybe to let me extend my above comment, so here it is.
Let $T$ be a possibly nondeterministic transducer, and $L$ be a regular language. Modify $T$ into a transducer $T'$ that checks that its input is in $L$ (by, e.g., changing the state set into the Cartesian product of the state sets of $T$ and $L$, and modifying the transition function so that the $L$ part of the states is properly updated, while retaining the behavior of $T$.)
A branch of $T'$ is a sequence $\rho_1 C_1\rho_2 C_2 \cdots \rho_n C_n \rho_{n+1}$ such that $\rho_1\rho_2 \cdots \rho_{n+1}$ is an accepting simple path in $T'$, and each $C_i$ is a strongly connected component of $T'$ the states of which include the destination of $\rho_i$ (and the origin of $\rho_{i+1}$). The branch is tame if:
The input length of the path $\rho_1\rho_2\cdots\rho_{n+1}$ is greater than or equal to its output length;
For any $i$ and any simple cycle in $C_i$, the input length of the cycle is greater than or equal to its output length.
Fact: $\big[$ For any $x, y$, $x[T']y$ implies $|y| \leq |x|$ $\big]$ iff all branches are tame.
The proof is rather immediate. Since the latter property is decidable (the number of branches is bounded, and so is the number of simple cycles), this shows that the problem of the question is decidable. |
You can usually view the cost function as the average squared error over some dataset with $N$ pairs of data, thus being defined as:
\begin{align}J &= \frac{1}{N} \sum_{i=1}^{N} \left(f(x_i,\beta) - y_i \right)^2\end{align}
We want the average error of our model (for all data we have) to decrease as we fine tune values for $\beta$, the vector parametrically defining how our model $f(\cdot,\cdot)$ works. So we like to look at all the data at once since it can be used to make changes to $\beta$ that should actually make the model improve as a whole.
If we instead made the cost function based on a subset of the dataset, you would end up with a cost function that may sub-optimally modify the value for $\beta$. This sub-optimal behavior may make us take longer to get to a local minimum of the cost function or even diverge from a good solution if you aren't careful with your hyper parameters.
Stochastic gradient descent and mini-batch gradient descent are methods that use a single data point or a subset of the dataset to make each adjustment. These methods have found use for really large datasets, where the slower convergence (in terms of iterations) is a worthwhile trade against the time it would take to pass over the whole dataset to compute the necessary gradients and costs.
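As a toy illustration (not part of the original answer), the sketch below contrasts the full-batch gradient of the average-squared-error cost with a mini-batch estimate of it for a linear model $f(x,\beta)=x^\top\beta$; the mini-batch gradient is a noisy but much cheaper approximation of the full one.

import numpy as np

rng = np.random.default_rng(0)
N, d = 10_000, 5
X = rng.normal(size=(N, d))
beta_true = rng.normal(size=d)
y = X @ beta_true + 0.1 * rng.normal(size=N)

def grad_mse(beta, Xb, yb):
    # Gradient of (1/len(yb)) * sum((Xb @ beta - yb)^2) with respect to beta.
    return 2.0 / len(yb) * Xb.T @ (Xb @ beta - yb)

beta = np.zeros(d)
full_grad = grad_mse(beta, X, y)               # uses the whole dataset
idx = rng.choice(N, size=64, replace=False)
mini_grad = grad_mse(beta, X[idx], y[idx])     # uses a random mini-batch of 64 points
print(np.linalg.norm(full_grad - mini_grad))   # small but nonzero: the mini-batch estimate is noisy

Averaged over many random mini-batches the estimate is unbiased, which is why stochastic updates still move in the right direction on average. |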
A blog of Python-related topics and code.
Two important parameters in plasma physics are the electron Debye length, $\lambda_{\mathrm{D}e}$, a measure of the distance over which charge-screening effects occur and deviations from quasi-neutrality are observed, and the number of particles in a "Debye cube" (of side length $\lambda_{\mathrm{D}e}$), $N_\mathrm{D}$.
An important concept in plasma physics is the Debye length, which describes the screening of a charge's electrostatic potential due to the net effect of the interactions it undergoes with the other mobile charges (electrons and ions) in the system. It can be shown that, given a set of reasonable assumptions about the behaviour of charges in the plasma, the electric potential due to a "test charge", $q_\mathrm{T}$ is given by$$\phi = \frac{q_\mathrm{T}}{4\pi\epsilon_0 r}\exp\left(-\frac{r}{\lambda_\mathrm{D}}\right),$$where the electron Debye length,$$\lambda_\mathrm{D} = \sqrt{\frac{\epsilon_0 T_e}{e^2n_0}},$$for an electron temperature $T_e$ expressed as an energy (i.e. $T_e = k_\mathrm{B}T_e'$ where $T_e'$ is in K) and number density $n_0$. Rigorous derivations, starting from Gauss' Law and solving the resulting Poisson equation with a Green's function are given elsewhere (e.g. Section 7.2.2. in J. P. Freidberg, Plasma Physics and Fusion Energy, CUP (2008)).
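As a quick numerical sketch (mine, not from the post), the two formulas above are easy to evaluate with scipy.constants; the example below plugs in representative values of $T_e = 10\,\mathrm{keV}$ and $n_0 = 10^{20}\,\mathrm{m^{-3}}$, which are assumptions chosen for illustration only.

from scipy.constants import epsilon_0, e

def debye_length(Te_eV, n0):
    # Electron Debye length (m) for a temperature Te given in eV and a density n0 in m^-3.
    Te_J = Te_eV * e                      # temperature-as-energy, converted to joules
    return (epsilon_0 * Te_J / (e**2 * n0)) ** 0.5

def n_debye(Te_eV, n0):
    # Number of particles in a Debye cube of side lambda_De.
    return n0 * debye_length(Te_eV, n0) ** 3

Te, n0 = 10e3, 1e20                        # 10 keV, 1e20 m^-3 (illustrative values only)
print(debye_length(Te, n0), n_debye(Te, n0))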
Just a simple Python app to try out the TkInter interface to the Tk GUI toolkit and to keep my children occupied. It shows a window with a square grid of cells which can be coloured by selecting from a palette. Run with
In a nuclear fusion reaction two atomic nuclei combine to form a single nucleus of lower total mass, the difference in mass, $\Delta m$ being released as energy in accordance with $E = \Delta m c^2$. It is this process which powers stars (in our own sun, hydrogen nuclei are fused into helium), and nuclear fusion has been actively pursued as a potential clean and cheap energy source in reactors on Earth for over 50 years.
A Reuleaux polygon is a curvilinear polygon built up of circular arcs. For an odd number of vertices, it has a constant width, and for this reason many polygonal coins, such as the UK's 50p piece and this Bermudian dollar coin are Reuleaux polygons. This property also means they make serviceable bicycle wheels: |
CryptoDB: Igor E. Shparlinski (Affiliation: University of New South Wales). Publications, by year, venue and title:
2005
EPRINT
Elliptic Curves with Low Embedding Degree
Motivated by the needs of pairing based cryptography, Miyaji, Nakabayashi and Takano have suggested a construction of so-called MNT elliptic curves with low embedding degree. We give some heuristic arguments which suggest that there are only about $z^{1/2+o(1)}$ of MNT curves with complex multiplication discriminant up to $z$. We also show that there are very few finite fields over which elliptic curves with small embedding degree and small complex multiplication discriminant may exist (regardless of the way they are constructed).
2003
PKC
2003
EPRINT
Hidden Number Problem in Small Subgroups
Boneh and Venkatesan have proposed a polynomial time algorithm for recovering a "hidden" element $\alpha \in \F_p$, where $p$ is prime, from rather short strings of the most significant bits of the residue of $\alpha t$ modulo $p$ for several randomly chosen $t\in \F_p$. González Vasco and the first author have recently extended this result to subgroups of $\F_p^*$ of order at least $p^{1/3+\varepsilon}$ for all $p$ and to subgroups of order at least $p^\varepsilon$ for almost all $p$. Here we introduce a new modification in the scheme which amplifies the uniformity of distribution of the 'multipliers' $t$ and thus extend this result to subgroups of order at least $(\log p)/(\log \log p)^{1-\varepsilon}$ for all primes $p$. As in the above works, we give applications of our result to the bit security of the Diffie--Hellman secret key starting with subgroups of very small size, thus including all cryptographically interesting subgroups.
2002
EPRINT
Secure Bilinear Diffie-Hellman Bits
The Weil and Tate pairings are a popular new gadget in cryptography and have found many applications, including identity-based cryptography. In particular, the pairings have been used for key exchange protocols. This paper studies the bit security of keys obtained using protocols based on pairings (that is, we show that obtaining certain bits of the common key is as hard as computing the entire key). These results are valuable as they give insight into how many ``hard-core'' bits can be obtained from key exchange using pairings.
2001
ASIACRYPT
2000
EPRINT
On the Security of Diffie--Hellman Bits
Boneh and Venkatesan have recently proposed a polynomial time algorithm for recovering a "hidden" element $\alpha$ of a finite field $\mathbb{F}_p$ of $p$ elements from rather short strings of the most significant bits of the remainder modulo $p$ of $\alpha t$ for several values of $t$ selected uniformly at random from $\mathbb{F}_p^*$. We use some recent bounds of exponential sums to generalize this algorithm to the case when $t$ is selected from a quite small subgroup of $\mathbb{F}_p^*$. Namely, our results apply to subgroups of size at least $p^{1/3+ \varepsilon}$ for all primes $p$ and to subgroups of size at least $p^{\varepsilon}$ for almost all primes $p$, for any fixed $\varepsilon >0$. We also use this generalization to improve (and correct) one of the statements of the aforementioned work about the computational security of the most significant bits of the Diffie--Hellman key.
2000
EPRINT
Security of Polynomial Transformations of the Diffie--Hellman Key
D. Boneh and R. Venkatesan have recently proposed an approach to proving that a reasonably small portion of the most significant bits of the Diffie-Hellman key modulo a prime is as secure as the whole key. Some further improvements and generalizations have been obtained by I. M. Gonzales Vasco and I. E. Shparlinski. E. R. Verheul has obtained certain analogies of these results in the case of Diffie--Hellman keys in extensions of finite fields, when an oracle is given to compute a certain polynomial function of the key, for example, the trace in the background field. Here we obtain some new results in this direction concerning the case of so-called "unreliable" oracles.
2000
EPRINT
Security of the Most Significant Bits of the Shamir Message Passing Scheme
Boneh and Venkatesan have recently proposed a polynomial time algorithm for recovering a "hidden" element $\alpha$ of a finite field $\mathbb{F}_p$ of $p$ elements from rather short strings of the most significant bits of the remainder modulo $p$ of $\alpha t$ for several values of $t$ selected uniformly at random from $\mathbb{F}_p^*$. Unfortunately the applications to the computational security of most significant bits of private keys of some finite field exponentiation based cryptosystems given by Boneh and Venkatesan are not quite correct. For the Diffie-Hellman cryptosystem the result of Boneh and Venkatesan has been corrected and generalized in our recent paper. Here a similar analysis is given for the Shamir message passing scheme. The results depend on some bounds of exponential sums.
Program Committees: Eurocrypt 2012, PKC 2012, Crypto 2009, PKC 2009, PKC 2007, Eurocrypt 2005, Eurocrypt 2004, PKC 2002.
Coauthors: William D. Banks (1), Dan Boneh (1), Don Coppersmith (1), Steven D. Galbraith (1), Herbie J. Hopkins (1), Arjen K. Lenstra (2), Wen-Ching W. Li (1), Daniel Lieman (1), Florian Luca (2), Oscar García Morchon (1), Mats Näslund (3), Phong Q. Nguyen (2), Ronald Rietman (1), Ludo Tolhuizen (1), Maria Isabel Gonzalez Vasco (3), William Whyte (1), Arne Winterhof (2) |
Continuing in this series (here, here and here), I found Cobb and Douglas's original paper from 1928 [pdf] where their least squares fit gives them the function:
$$
P = 1.01 L^{3/4} C^{1/4}
$$
And they get a pretty good result:
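Cobb and Douglas actually constrained the exponents to sum to one in their original fit; as a sketch of the mechanics only (with synthetic data standing in for their 1899-1922 series, since the table is not reproduced here), an unconstrained log-linear least-squares fit looks like:

import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the labor (L) and capital (C) series (illustrative only).
L = rng.uniform(100, 200, size=24)
C = rng.uniform(100, 400, size=24)
# Output generated from the Cobb-Douglas form with a little multiplicative noise.
P = 1.01 * L**0.75 * C**0.25 * np.exp(rng.normal(0, 0.02, size=24))

# Least squares on log P = log A + alpha log L + beta log C
X = np.column_stack([np.ones_like(L), np.log(L), np.log(C)])
coef, *_ = np.linalg.lstsq(X, np.log(P), rcond=None)
print(np.exp(coef[0]), coef[1], coef[2])   # recovers roughly 1.01, 0.75, 0.25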
Also, Noah Smith writes today:
Yes, in a Solow model you can tie capital K to observable things like structures and machines and vehicles. But you'll be left with a big residual, A.
Now if we use the information equilibrium model:
$$
NGDP = A \; K^{\alpha} \; L^{\beta}
$$
And use the "economic potential" (see also here):
$$
NGDP = TS + X + Y + ...
$$
$$
NGDP \approx (c/\kappa + \xi + \eta + ... ) NGDP
$$
So that ...
$$
NGDP \approx (c/\kappa + \xi + \eta + ... ) A \; K^{\alpha} \; L^{\beta}
$$
$$
= (A c/\kappa + A \xi + A \eta + ... ) \; K^{\alpha} \; L^{\beta}
$$
$$
= (\underbrace{A c/\kappa}_{\text{residual productivity}} + \underbrace{A \xi + A \eta + ...}_{\text{measurable output}}) \; K^{\alpha} \; L^{\beta}
$$
or
$$
= (\underbrace{A c/\kappa}_{\text{entropy}} + \underbrace{A \xi + A \eta + ...}_{\text{real output}}) \; K^{\alpha} \; L^{\beta}
$$
So that we say
$$
NGDP \approx (A_{TS} + A_{0}) \; K^{\alpha} \; L^{\beta}
$$
Noah's statement is essentially that we expect a residual the size of $A_{0}$, but it turns out to be large (i.e. the size of $A_{TS} + A_{0}$), and $A_{TS}$ is this large residual (or the whole term is the large residual). In this description, the Cobb-Douglas production function works because the entropy term is approximately proportional to output: $TS \approx (c/\kappa) NGDP$. |
We now summarize the postulates of Quantum Mechanics that have been introduced. The application of these postulates will be illustrated in subsequent chapters.
Postulate 1
The properties of a quantum mechanical system are determined by a wavefunction Ψ(r,t) that depends upon the spatial coordinates of the system and time, \(r\) and \(t\). For a single particle system, r is the set of coordinates of that particle \(r = (x_1, y_1, z_1)\). For more than one particle, \(r\) is used to represent the complete set of coordinates \(r = (x_1, y_1, z_1, x_2, y_2, z_2,\dots x_n, y_n, z_n)\). Since the state of a system is defined by its properties, \(\Psi\) specifies or identifies the state and sometimes is called the state function rather than the wavefunction.
Postulate 2
The wavefunction is interpreted as a probability amplitude, with the absolute square of the wavefunction, \(\Psi^*(r,t)\Psi(r,t)\), interpreted as the probability density at time \(t\). A probability density times a volume is a probability, so for one particle
\[\Psi^*(x_1,y_1,z_1,t)\Psi(x_1,y_1,z_1,t)dx_1dy_1dz_1\]
is the probability that the particle is in the volume \(dx_1\,dy_1\,dz_1\) located at \((x_1, y_1, z_1)\) at time \(t\).
For a many-particle system, we write the volume element as \(d\tau = dx_1dy_1dz_1\dots dx_ndy_ndz_n\); and \(\Psi^*(r,t)\Psi(r,t)d\tau\) is the probability that particle 1 is in the volume \(dx_1dy_1dz_1\) at \((x_1,y_1,z_1)\) and particle 2 is in the volume \(dx_2dy_2dz_2\) at \((x_2,y_2,z_2)\), etc.
Because of this probabilistic interpretation, the wavefunction must be
normalized.
\[ \int \limits _{all space} \Psi ^* (r, t) \Psi (r , t) d \tau = 1 \tag {3-38}\]
The integral sign here represents a multi-dimensional integral involving all coordinates: \(x_l \dots z_n\). For example, integration in three-dimensional space will be an integration over \(dV\), which can be expanded as:
\(dV=dx\,dy\,dz\) in Cartesian coordinates or \(dV=r^2\sin{\phi}\, dr\,d\theta \;d\phi\) in spherical coordinates or \(dV=r\, dr\,d\theta\,dz\) in cylindrical coordinates.
Postulate 3
For every observable property of a system there is a quantum mechanical operator. The operator for position of a particle in three dimensions is just the set of coordinates \(x\), \(y\), and \(z\), which is written as a vector
\[ r = (x , y , z ) = x \vec {i} + y \vec {j} + z \vec {k} \tag {3-39}\]
The operator for a component of momentum is
\[ \hat {P} _x = -i \hbar \dfrac {\partial}{\partial x} \tag {3-40}\]
and the operator for kinetic energy in one dimension is
\[ \hat {T} _x = -\left (\dfrac {\hbar ^2}{2m} \right ) \dfrac {\partial ^2}{\partial x^2} \tag {3-14}\]
and in three dimensions
\[ \hat {p} = -i \hbar \nabla \tag {3-42}\]
and
\[ \hat {T} = \left ( -\dfrac {\hbar ^2}{2m} \right ) \nabla ^2 \tag {3-43}\]
The Hamiltonian operator \(\hat{H}\) is the operator for the total energy. In many cases only the kinetic energy of the particles and the electrostatic or Coulomb potential energy due to their charges are considered, but in general all terms that contribute to the energy appear in the Hamiltonian. These additional terms account for such things as external electric and magnetic fields and magnetic interactions due to magnetic moments of the particles and their motion.
Postulate 4
The time-independent wavefunctions of a time-independent Hamiltonian are found by solving the time-independent Schrödinger equation.
\[\hat {H} (r) \psi (r) = E \psi (r) \tag {3-44}\]
These wavefunctions are called stationary-state functions because the properties of a system in such a state, i.e. a system described by the function \(\Psi(r)\), are time independent.
Postulate 5
The time evolution or time dependence of a state is found by solving the time-dependent Schrödinger equation.
\[ \hat {H} (r , t) \Psi (r , t) = i \hbar \frac {\partial}{\partial t} \Psi (r , t ) \tag {3-45}\]
For the case where \(\hat{H}\) is independent of time, the time-dependent part of the wavefunction is \(e^{-i\omega t}\) where \(\omega = \frac{E}{\hbar}\), or equivalently \(\nu = \frac{E}{h}\), which shows that the energy-frequency relation used by Planck, Einstein, and Bohr results from the time-dependent Schrödinger equation. This oscillatory time dependence of the probability amplitude does not affect the probability density or the observable properties, because the phase factor cancels when the wavefunction is multiplied by its complex conjugate in calculating these quantities.
Postulate 6
If a system is described by the eigenfunction \(\Psi\) of an operator \(\hat{A}\) then the value measured for the observable property corresponding to \(\hat{A}\) will always be the eigenvalue \(a\), which can be calculated from the eigenvalue equation.
\[ \hat {A} \Psi = a \Psi \tag {3-46}\]
Postulate 7
If a system is described by a wavefunction \(\Psi\), which is not an eigenfunction of an operator \(\hat{A}\), then a distribution of measured values will be obtained, and the average value of the observable property is given by the expectation value integral,
\[\left \langle A \right \rangle = \dfrac {\int \Psi ^* \hat {A} \Psi d \tau}{\int \Psi ^* \Psi d \tau} \tag {3-47}\]
where the integration is over all coordinates involved in the problem. The average value \(\left \langle A \right \rangle\), also called the expectation value, is the average of many measurements. If the wavefunction is normalized, then the normalization integral in the denominator of Equation (3-47) equals 1.
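As a concrete illustration of Postulates 2 and 7 (the particle-in-a-box ground state used here is a standard example, not taken from this section), a short SymPy check that \(\psi(x) = \sqrt{2/L}\sin(\pi x/L)\) is normalized on \([0, L]\) and that \(\langle x \rangle = L/2\):

import sympy as sp

x, L = sp.symbols('x L', positive=True)
psi = sp.sqrt(2 / L) * sp.sin(sp.pi * x / L)    # particle-in-a-box ground state

norm = sp.integrate(psi * psi, (x, 0, L))       # normalization integral, Postulate 2
x_avg = sp.integrate(psi * x * psi, (x, 0, L))  # expectation value <x>, Postulate 7

print(sp.simplify(norm))    # 1
print(sp.simplify(x_avg))   # L/2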
Problems
Exercise 3.21: What does it mean to say a wavefunction is normalized? Why must wavefunctions be normalized?
Exercise 3.22: Rewrite Equations (3-42) and (3-43) using the definitions of \(\hbar\), \(\nabla\), and \(\nabla^2\).
Exercise 3.23: Write a definition for a stationary state. What is the time dependence of the wavefunction for a stationary state?
Exercise 3.24: Show how the energy-frequency relation used by Planck, Einstein, and Bohr results from the time-dependent Schrödinger equation.
Exercise 3.25: Show how the de Broglie relation follows from the postulates of Quantum Mechanics using the definition of the momentum operator.
Exercise 3.26: What quantity in Quantum Mechanics gives you the probability density for finding a particle at some specified position in space? How do you calculate the average position of the particle and the uncertainty in the position of the particle from the wavefunction? |
Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider
(American Physical Society, 2016-02)
The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ...
Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(Elsevier, 2016-02)
Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ...
Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(Springer, 2016-08)
The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ...
Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Elsevier, 2016-03)
The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ...
Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2016-03)
Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ...
Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV
(Elsevier, 2016-07)
The multi-strange baryon yields in Pb--Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ...
$^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2016-03)
The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ...
Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV
(Elsevier, 2016-09)
The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ...
Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV
(Elsevier, 2016-12)
We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ...
Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV
(Springer, 2016-05)
Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. Centrality classes are determined via the energy ... |
An RLC circuit is a simple electric circuit with a resistor, inductor and capacitor in it -- with resistance R, inductance L and capacitance C, respectively. It's one of the simplest circuits that displays non-trivial behavior.
You can derive an equation for the behavior by using Kirchhoff's laws (conservation of the stocks and flows of electrons) and the properties of the circuit elements. Wikipedia does a fine job.
You arrive at a solution for the current as a function of time that looks generically like this (not the most general solution, but a solution):
$$
i(t) = A e^{\left( -\alpha + \sqrt{\alpha^{2} - \omega^{2}} \right) t}
$$
with $\alpha = R/2L$ and $\omega = 1/\sqrt{L C}$. If you fill in some numbers for these parameters, you can get all kinds of behavior:
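The diagram referred to below is not reproduced here; as a sketch of how such curves can be generated from the formula above (the parameter values are arbitrary, chosen only to give one slowly decaying and one oscillating case):

import numpy as np

def rlc_current(t, R, L, C, A=1.0):
    # Evaluate i(t) = A * exp((-alpha + sqrt(alpha^2 - omega^2)) t); a complex
    # square root is used so the oscillating (under-damped) case works too.
    alpha = R / (2 * L)
    omega = 1 / np.sqrt(L * C)
    s = -alpha + np.sqrt(complex(alpha**2 - omega**2))
    return (A * np.exp(s * t)).real

t = np.linspace(0, 10, 500)
slow_decay = rlc_current(t, R=3.0, L=1.0, C=1.0)    # over-damped: monotone decay
oscillating = rlc_current(t, R=0.2, L=1.0, C=1.0)   # under-damped: decaying oscillation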
As you can tell from that diagram, the Kirchhoff conservation laws don't in any way nail down the behavior of the circuit. The values you choose for R, L and C do. You could have a slowly decaying current or a quickly oscillating one. It depends on R, L and C.
Now you may wonder why I am talking about this on an economics blog. Well, Cullen Roche implicitly asked a question:
Although [stock flow consistent models are] widely used in the Fed and on Wall Street it hasn’t made much impact on more mainstream academic economic modeling techniques for reasons I don’t fully know.
The reason is that the content of stock flow consistent modeling is identical to Kirchhoff's laws. Currents are flows of electrons (flows of money); voltages are stocks of electrons (stocks of money).
Kirchhoff's laws do not in any way nail down the behavior of an RLC circuit.
SFC models do not nail down the behavior of the economy.
If you asked what the impact of some policy was and I gave you the graph above, you'd probably never ask again.
What SFC models do in order to hide the fact that anything could result from an SFC model is effectively assume R = L = C = 1, which gives you this:
I'm sure to get objections to this. There might even be legitimate objections. But I ask of any would-be objector:
How is accounting for money different from accounting for electrons?
Before saying this circuit model is in continuous time, note that there are circuits with clock cycles -- in particular the device you are currently reading this post with.
I can't for the life of me think of any objection, and I showed exactly this problem with a SFC model from Godley and Lavoie:
But to answer Cullen's implicit question -- as the two Mathematica notebooks above show, SFC models don't specify the behavior of an economy without assuming R = L = C = 1 ... that is to say Γ = 1.
Update: Nick Rowe is generally better than me at these things. |
Knowing that at 25 °C the following galvanic cell: $$\ce{Pb~|~Pb(NO_3)_2~1M~||~PbS~saturated~|~Pb}$$ shows an $\mathrm{EMF} =0.413~\mathrm{V}$, find the $K_\mathrm{sp}$ of $\ce{PbS}$.
My Approach: This is a concentration cell based on $\ce{Pb^2+}$. Since $\ce{Pb(NO3)2}$ dissociates completely, while $\ce{PbS}$ is a salt with low solubility, the left half-cell will be the cathode and the right one the anode. So we have the following half-reactions: \begin{align} \ce{Pb^2+ + 2e- &-> Pb} && \text{(cathode)} \\ \ce{Pb &-> Pb^2+ +2e- } && \text{(anode)} \end{align}
And for the anode we also have $$\ce{PbS <=> Pb^2+ + S^2- }$$ where $\ce{[Pb^2+]} = \ce{[S^2- ]}= \sqrt{K_\mathrm{sp}}$. So the semicell potentials are: \begin{align} E_\text{cathode} &= E^\circ\\ E_\text{anode} &= E^\circ - \frac{0.059}{2} \log_{10}{[\ce{Pb^2+}]}\\ \end{align}
Thus: $$0.413~\mathrm{V} = E_\text{cathode} - E_\text{anode} = \frac{0.059}{2} \log_{10}\ce{[Pb^2+]} \Rightarrow \ce{[Pb^2+]} = 10^{14} $$ And: $$K_\mathrm{sp} = \ce{[Pb^2+]}^2 = 10^{28}$$
I'm sure I'm off by a sign somewhere, but I don't understand where the error is. |
I'll give this a shot, since I'm sufficiently disturbed by the advice given in some of the other answers.
Let $\vec{X},\vec{Y}$ be infinite bit sequences generated by two RNGs (not necessarily PRNGs which are deterministic once initial state is known), and we're considering the possibility of using the sequence $\vec{X} \oplus \vec{Y}$ with the hope of improving behavior in some sense. There are lots of different ways in which $\vec{X} \oplus \vec{Y}$ could be considered better or worse compared to each of $\vec{X}$ and $\vec{Y}$; here are a small handful that I believe are meaningful, useful, and consistent with normal usage of the words "better" and "worse":
(0) Probability of true randomness of the sequence increases or decreases
(1) Probability of observable non-randomness increases or decreases (with respect to some observer applying some given amount of scrutiny, presumably)
(2) Severity/obviousness of observable non-randomness increases or decreases.
First let's think about (0), which is the only one of the three that has any hope of being made precise. Notice that if, in fact, either of the two input RNGs really is truly random, unbiased, and independent of the other, then the XOR result will be truly random and unbiased as well. With that in mind, consider the case when you believe $\vec{X},\vec{Y}$ to be truly random unbiased isolated bit streams, but you're not completely sure. If $\varepsilon_X,\varepsilon_Y$ are the respective probabilities that you're wrong about each of them, then the probability that $\vec{X} \oplus \vec{Y}$ is not-truly-random is then $\leq \varepsilon_X \varepsilon_Y \lt \min\{\varepsilon_X,\varepsilon_Y\}$, in fact much less since $\varepsilon_X,\varepsilon_Y$ are assumed very close to 0 ("you believe them to be truly random"). And in fact it's even better than that, when we also take into account the possibility of $\vec{X},\vec{Y}$ being truly independent even when neither is truly random:
$$\begin{eqnarray*}Pr(\vec{X} \oplus \vec{Y} \mathrm{\ not\ truly\ random}) \leq \min\{&Pr(\vec{X} \mathrm{\ not\ truly\ random}), \\&Pr(\vec{Y} \mathrm{\ not\ truly\ random}), \\&Pr(\vec{X},\vec{Y} \mathrm{\ dependent})\}.\end{eqnarray*}$$
Therefore we can conclude that in sense (0), XOR can't hurt, and could potentially help a lot.
However, (0) isn't interesting for PRNGs, since in the case of PRNGs none of the sequences in question have any chance of being truly random.
Therefore for this question, which is in fact about PRNGs, we must be talking about something like (1) or (2). Since those are in terms of properties and quantities like "observable", "severe", "obvious", "apparent", we're now talking about Kolmogorov complexity, and I'm not going to try to make that precise. But I will go so far as to make the hopefully uncontroversial assertion that, by such a measure, "01100110..." (period=4) is better than "01010101..." (period=2), which is better than "00000000..." (constant).
Now, one might guess that (1) and (2) will follow the same trend as (0), and that therefore the conclusion "XOR can't hurt" might still hold. However, note the significant possibility that neither $\vec{X}$ nor $\vec{Y}$ was observably non-random, but that correlations between them cause $\vec{X} \oplus \vec{Y}$ to be observably non-random. The most severe case of this, of course, is when $\vec{X} = \vec{Y}$ (or $\vec{X} = \mathrm{not}(\vec{Y})$), in which case $\vec{X} \oplus \vec{Y}$ is constant, the worst of all possible outcomes; in general, it's easy to see that, regardless of how good $\vec{X}$ and $\vec{Y}$ are, $\vec{X}$ and $\vec{Y}$ need to be "close" to independent in order for their xor to be not-observably-nonrandom. In fact, being not-observably-dependent can reasonably be defined as $\vec{X} \oplus \vec{Y}$ being not-observably-nonrandom.
Such surprise dependence turns out to be a really big problem.
An example of what goes wrong
The question states "I'm excluding the common example of several linear feedback shift registers working together as they're from the same family". But I'm going to exclude that exclusion for the time being, in order to give a very simple clear real-life example of the kind of thing that can go wrong with XORing.
My example will be an old implementation of rand() that was on some version of Unix circa 1983. IIRC, this implementation of the rand() function had the following properties:
the value of each call to rand() was 15 pseudo-random bits, that is, an integer in the range [0, 32767].
successive return values alternated even-odd-even-odd; that is, the least-significant-bit alternated 0-1-0-1...
the next-to-least-significant bit had period 4, the next after that had period 8, ... so the highest-order bit had period $2^{15}$.
therefore the sequence of 15-bit return values of rand() was periodic with period $2^{15}$.
I've been unable to locate the original source code, but I'm guessing from piecing together a couple of posts in https://groups.google.com/forum/#!topic/comp.os.vms/9k4W6KrRV3A that it did precisely the following (C code), which agrees with my memory of the properties above:
#define RAND_MAX 32767
static unsigned int next = 1;
int rand(void)
{
next = next * 1103515245 + 12345;
return (next & RAND_MAX);
}
void srand(seed)
unsigned int seed;
{
next = seed;
}
As one might imagine, trying to use this rand() in various ways led to an assortment of disappointments.
For example, at one point I tried simulating a sequence of random coin flips by repeatedly taking:
rand() & 1
i.e. the least significant bit. The result was simple alternation heads-tails-heads-tails. That was hard to believe at first (must be a bug in my program!), but after I convinced myself it was true, I tried using the next-least-significant bit instead. That's not much better, as noted earlier -- that bit is periodic with period 4. Continuing to explore successively higher bits revealed the pattern I noted earlier: that is, each next higher-order bit had twice the period of the previous, so in this respect the highest-order bit was the most useful of all of them. Note however that there was no black-and-white threshold "bit $i$ is useful, bit $i-1$ is not useful" here; all we can really say is the numbered bit positions had varying degrees of usefulness/uselessness.
I also tried things like scrambling the results further, or XORing together values returned from multiple calls to rand(). XORing pairs of successive rand() values was a disaster, of course -- it resulted in all odd numbers! For my purposes (namely producing an "apparently random" sequence of coin flips), the constant-parity result of the XOR was even worse than the alternating even-odd behavior of the original.
A slight variation puts this into the original framework: that is, let $\vec{X}$ be the sequence of 15-bit values returned by rand() with a given seed $s_X$, and $\vec{Y}$ the sequence from a different seed $s_Y$. Again, $\vec{X} \oplus \vec{Y}$ will be a sequence of either all-even or all-odd numbers, which is worse than the original alternating even/odd behavior.
In other words, this is an example where XOR made things worse in the sense of (1) and (2), by any reasonable interpretation. It's worse in several other ways as well:
(3) The XORed least-significant-bit is obviously biased, i.e. has unequal frequencies of 0's and 1's, unlike any numbered bit position in either of the inputs which are all unbiased.
(4) In fact, for every bit position, there are pairs of seeds for which that bit position is biased in the XOR result, and for every pair of seeds, there are (at least 5) bit positions that are biased in the XOR result.
(5) The period of the entire sequence of 15-bit values in the XOR result is either 1 or $2^{14}$, compared to $2^{15}$ for the originals.
None of (3),(4),(5) is obvious, but they are all easily verifiable.
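For instance, a small Python re-implementation of the generator (matching the C code above; the 32-bit mask is an assumption about the original machine's unsigned int) makes the alternating low bit and the constant-parity XOR easy to check:

def old_rand(seed, n):
    # next = next * 1103515245 + 12345, return the low 15 bits, as in the C code above.
    out, state = [], seed
    for _ in range(n):
        state = (state * 1103515245 + 12345) & 0xFFFFFFFF   # emulate 32-bit unsigned arithmetic
        out.append(state & 32767)
    return out

a, b = old_rand(1, 8), old_rand(2, 8)
print([v & 1 for v in a])                    # 0,1,0,1,... : the alternating low bit
print([(x ^ y) & 1 for x, y in zip(a, b)])   # constant parity in the XORed stream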
Finally, let's consider re-introducing the prohibition of PRNGs from the same family. The problem here, I think, is that it's never really clear whether two PRNGs are "from the same family", until/unless someone starts using the XOR and notices (or an attacker notices) things got worse in the sense of (1) and (2), i.e. until non-random patterns in the output cross the threshold from not-noticed to noticed/embarrassing/disastrous, and at that point it's too late.
I'm alarmed by other answers here which give unqualified advice "XOR can't hurt" on the basis of theoretical measures which appear to me to do a poor job of modelling what most people consider to be "good" and "bad" about PRNGs in real life. That advice is contradicted by clear and blatant examples in which XOR makes things worse, such as the rand() example given above. While it's conceivable that relatively "strong" PRNGs could consistently display the opposite behavior when XORed to that of the toy PRNG that was rand(), thereby making XOR a good idea for them, I've seen no evidence in that direction, theoretical or empirical, so it seems unreasonable to me to assume that happens.
Personally, having been bitten by surprise by XORing rand()s in my youth, and by countless other assorted surprise correlations throughout my life, I have little reason to think the outcome will be different if I try similar tactics again. That is why I, personally, would be very reluctant to XOR together multiple PRNGs unless very extensive analysis and vetting has been done to give me some confidence that it might be safe to do so for the particular RNGs in question. As a potential cure for when I have low confidence in one or more of the individual PRNGs, XORing them is unlikely to increase my confidence, so I'm unlikely to use it for such a purpose. I imagine the answer to your question is that this is a widely held sentiment. |
An excess return is the payoff of a zero-cost portfolio. For example: $R_i - R_f$ is an excess return. $c \left( R_i - R_f \right)$ is an excess return for any $c \in \mathbb{R}$. More generally, $R_i - R_j$ is an excess return for any returns $R_i$ and $R_j$.
Excess returns are nice to work with because you can simply scale them up or scale them down and they're still excess returns. Let's imagine excess return $R_i - R_f$ has a market beta of $\beta_i$.
$$ R_i - R_f = \alpha_i + \beta_i \left( R_m - R_f \right) + \epsilon_i $$
Then excess return $\frac{1}{\beta_i} (R_i - R_f)$ has a market beta of $1$. $$\frac{1}{\beta_i} \left( R_i - R_f\right) = \frac{\alpha_i}{\beta_i} + \left( R_m - R_f \right) + \frac{\epsilon_i}{\beta_i} $$
Excess return $\frac{1}{\beta_i} (R_i - R_f) -\frac{1}{\beta_j} (R_j - R_f) $ will have a market beta of 0.
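A small simulation sketch of these scaling facts (all numbers are made up for illustration; est_beta is just an OLS slope):

import numpy as np

rng = np.random.default_rng(1)
T = 100_000
mkt = rng.normal(0.005, 0.04, T)          # simulated market excess return

def make_excess(beta, alpha=0.001, sigma=0.02):
    # Simulated excess return with a chosen market beta.
    return alpha + beta * mkt + rng.normal(0, sigma, T)

def est_beta(excess):
    # OLS slope of the excess return on the market excess return.
    return np.cov(excess, mkt)[0, 1] / np.var(mkt, ddof=1)

r_H = make_excess(beta=1.5)   # a "high beta" excess return
r_L = make_excess(beta=0.5)   # a "low beta" excess return

print(est_beta(r_H / 1.5))              # ~1: scaling down the high-beta position
print(est_beta(r_L / 0.5))              # ~1: scaling up the low-beta position
print(est_beta(r_H / 1.5 - r_L / 0.5))  # ~0: the beta-neutral combination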
Since $\beta_H > 1$, multiplying by $\frac{1}{\beta_H}$ to obtain $\frac{1}{\beta_H} (R_H - R_f)$ is deleveraging the excess return $R_H - R_f$. Since $\beta_L < 1$, multiplying by $\frac{1}{\beta_L}$ to obtain $\frac{1}{\beta_L} (R_L - R_f)$ is leveraging the excess return $R_L - R_f$ (here $R_H$ and $R_L$ denote the high-beta and low-beta returns, respectively). |
Your friend meant that all complex numbers can be represented by such matrices.
$$a+bi = \begin{pmatrix} a & -b \\ b & a \end{pmatrix}$$
Adding complex numbers matches adding such matrices and multiplying complex numbers matches multiplying such matrices.
This means that the collection of matrices:
$$R = \left\{ \begin{pmatrix} a & -b \\ b & a \end{pmatrix} \;\Bigg|\; a,b \in \mathbb{R} \right\}$$
is "isomorphic" to the field of complex numbers.
Specifically,
$$i = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$$
Notice that for this matrix $i^2=-I_2=-1$. :)
How does this help?
It allows you to construct the complex numbers from matrices over the reals. This allows you to get at some properties of the complex numbers via linear algebra.
For example: the squared modulus of a complex number is $|a+bi|^2 = a^2+b^2$. This is the same as the determinant of such a matrix. Since the determinant of a product is the product of the determinants, you get $|z_1z_2|^2 = |z_1|^2\cdot|z_2|^2$, and hence $|z_1z_2| = |z_1|\cdot |z_2|$, for any two complex numbers $z_1$ and $z_2$.
Another nice tie, transposing matches conjugation. :)
Edit: As per request, a little about Euler's formula.
The exponential function can be defined in a number of ways. One nice way is via its MacLaurin series: $e^x = 1+x+\frac{x^2}{2!}+\cdots$. If you start thinking of $x$ as some sort of indeterminant, you might start to ask, "What can I plug into this series?" It turns out that the series:$$e^A = I+A+\frac{A^2}{2!}+\frac{A^3}{3!}+\cdots$$converges for any square matrix $A$ (you have to make sense out of "a convergent series of matrices").
Consider a "real" number, $x$, encoded as one of our matrices: $$x=\begin{pmatrix} x & 0 \\ 0 & x \end{pmatrix} \quad \mbox{then} \quad e^x = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} + \begin{pmatrix} x & 0 \\ 0 & x \end{pmatrix} + \begin{pmatrix} x^2/2 & 0 \\ 0 & x^2/2 \end{pmatrix} + \cdots$$ $$= \begin{pmatrix} 1+x+x^2/2+\cdots & 0 \\ 0 & 1+x+x^2/2+\cdots \end{pmatrix} = \begin{pmatrix} e^x & 0 \\ 0 & e^x \end{pmatrix} = e^x$$
So (no surprise) the matrix exponential and the good old real exponential do the same thing.
Now one can ask, "What does the exponential of a complex number get you?" It turns out that...$$\mbox{Given } a+bi = \begin{pmatrix} a & -b \\ b & a \end{pmatrix} \quad \mbox{then} \quad e^{a+bi} = \begin{pmatrix} e^a\cos(b) & -e^a\sin(b) \\ e^a\sin(b) & e^a\cos(b) \end{pmatrix}$$...this involves
some (?intermediate?) linear algebra.
Anyway accepting that, we have found that $e^{a+bi} = e^a(\cos(b)+i\sin(b))$. In particular,$$e^{i\theta} = \begin{pmatrix} \cos(\theta) & -\sin(\theta) \\ \sin(\theta) & \cos(\theta) \end{pmatrix}$$So that $$e^{i\pi} = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix} = -1$$
We can see this way that complex exponentiation (with pure imaginary exponent) yields a rotation matrix. Thus leading us down a path to start identifying complex arithmetic with 2-dimensional geometric transformations.
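Both of these claims are easy to check numerically; here is a short sketch using scipy.linalg.expm for the matrix exponential (the particular numbers are arbitrary):

import numpy as np
from scipy.linalg import expm

def as_matrix(z):
    # Represent a + bi as the 2x2 real matrix [[a, -b], [b, a]].
    a, b = z.real, z.imag
    return np.array([[a, -b], [b, a]])

z1, z2 = 2 + 3j, -1 + 0.5j
print(np.allclose(as_matrix(z1) @ as_matrix(z2), as_matrix(z1 * z2)))  # multiplication matches
print(np.isclose(np.linalg.det(as_matrix(z1)), abs(z1) ** 2))          # det = squared modulus

theta = 0.7
J = as_matrix(1j)                # the matrix playing the role of i
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
print(np.allclose(expm(theta * J), rotation))   # e^{i theta} really is a rotation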
Of course, there are many other ways to arrive at these various relationships. The matrix route is not the fastest/easiest route but it is an interesting one to contemplate.
I hope that helps a little bit. :) |
Unicity of meromorphic functions concerning shared functions with their difference
Bull. Korean Math. Soc. Published online August 9, 2019
Bingmao Deng, Mingliang Fang, and Dan Liu (Institute of Applied Mathematics, South China Agricultural University)
Abstract : In this paper, we investigate the uniqueness of meromorphic functions of finite order concerning sharing small functions and prove that if $f(z)$ and $\Delta_c f(z)$ share $a(z), b(z), \infty$ CM, where $a(z), b(z) (\not \equiv \infty)$ are two distinct small functions of $f(z)$, then $f(z)\equiv \Delta_cf(z)$. This result improves the results due to Li et al (Bull. Korean Math. Soc., 2015), Cui et al (J. Diff. Equ. Appl., 2016) and Lü et al (Comput. Methods Funct. Theory, 2017). |
1) The region \(D\) bounded by \(y = x^3, \space y = x^3 + 1, \space x = 0,\) and \(x = 1\) as given in the following figure.
a. Classify this region as vertically simple (Type I) or horizontally simple (Type II).
Type: Type I but not Type II
b. Find the area of the region \(D\).
c. Find the average value of the function \(f(x,y) = 3xy\) on the region graphed in the previous exercise.
Answer: \(\frac{27}{20}\)
2) The region \(D\) bounded by \(y = \sin x, \space y = 1 + \sin x, \space x = 0\), and \(x = \frac{\pi}{2}\) as given in the following figure.
a. Classify this region as vertically simple (Type I) or horizontally simple (Type II).
Type: Type I but not Type II
b. Find the area of the region \(D\).
Answer: \(\frac{\pi}{2}\, \text{units}^2\)
c. Find the average value of the function \(f(x,y) = \cos x\) on the region \(D\).
3) The region \(D\) bounded by \(x = y^2 - 1\) and \(x = \sqrt{1 - y^2}\) as given in the following figure.
a. Classify this region as vertically simple (Type I) or horizontally simple (Type II).
Type: Type II but not Type I
b. Find the volume of the solid under the graph of the function \(f(x,y) = xy + 1\) and above the region \(D\).
Answer: \(\frac{1}{6}(8 + 3\pi)\, \text{units}^3\)
4) The region \(D\) bounded by \(y = 0, \space x = -10 + y,\) and \(x = 10 - y\) as given in the following figure.
a. Classify this region as vertically simple (Type I) or horizontally simple (Type II).
Type: Type II but not Type I
b. Find the volume of the solid under the graph of the function \(f(x,y) = x + y\) and above the region in the figure from the previous exercise.
Answer: \(\frac{1000}{3}\, \text{units}^3\)
5) The region \(D\) bounded by \(y = 0, \space x = y - 1, \space x = \frac{\pi}{2}\) as given in the following figure.
Classify this region as vertically simple (Type I) or horizontally simple (Type II).
Type: Type I and Type II
6) The region \(D\) bounded by \(y = 0\) and \(y = x^2 - 1\) as given in the following figure.
Classify this region as vertically simple (Type I) or horizontally simple (Type II).
Type: Type I and Type II
7) Let \(D\) be the region bounded by the curves of equations \(y = cos \space x\) and \(y = 4 - x^2\) and the \(x\)-axis. Explain why \(D\) is neither of Type I nor II.
Answer: The region \(D\) is not of Type I: it does not lie between two vertical lines and the graphs of two continuous functions \(g_1(x)\) and \(g_2(x)\). The region is not of Type II: it does not lie between two horizontal lines and the graphs of two continuous functions \(h_1(y)\) and \(h_2(y)\).
8) Let \(D\) be the region bounded by the curves of equations \(y = x, \space y = -x\) and \(y = 2 - x^2\). Explain why \(D\) is neither of Type I nor II.
In exercises 9 - 14, evaluate the double integral \(\displaystyle \iint_D f(x,y) \,dA\) over the region \(D\).
9) \(f(x,y) = 1\) and
\(D = \big\{(x,y)| \, 0 \leq x \leq \frac{\pi}{2}, \space \sin x \leq y \leq 1 + \sin x \big\}\)
Answer: \(\frac{\pi}{2}\)
10) \(f(x,y) = 2\) and
\(D = \big\{(x,y)| \, 0 \leq y \leq 1, \space y - 1 \leq x \leq \arccos y \big\}\)
11) \(f(x,y) = xy\) and
\(D = \big\{(x,y)| \, -1 \leq y \leq 1, \space y^2 - 1 \leq x \leq \sqrt{1 - y^2} \big\}\)
Answer: \(0\)
12) \(f(x,y) = sin \space y\) and \(D\) is the triangular region with vertices \((0,0), \space (0,3)\), and \((3,0)\)
13) \(f(x,y) = -x + 1\) and \(D\) is the triangular region with vertices \((0,0), \space (0,2)\), and \((2,2)\)
Answer: \(\frac{2}{3}\)
14) \(f(x,y) = 2x + 4y\) and
\(D = \big\{(x,y)|\, 0 \leq x \leq 1, \space x^3 \leq y \leq x^3 + 1 \big\}\)
In exercises 15 - 20, evaluate the iterated integrals.
15) \(\displaystyle \int_0^1 \int_{2\sqrt{x}}^{2\sqrt{x}+1} (xy + 1) \,dy \space dx\)
Answer: \(\frac{41}{20}\)
16) \(\displaystyle \int_0^3 \int_{2x}^{3x} (x + y^2) \,dy \space dx\)
17) \(\displaystyle \int_1^2 \int_{-u^2-1}^{-u} (8 uv) \,dv \space du\)
Answer: \(-63\)
18) \(\displaystyle \int_e^{e^2} \int_{\ln u}^2 (v + \ln u) \,dv \space du\)
19) \(\displaystyle \int_0^{1/2} \int_{-\sqrt{1-4y^2}}^{\sqrt{1-4y^2}} 4 \,dx \space dy\)
Answer: \(\pi\)
20) \(\displaystyle \int_0^1 \int_{-\sqrt{1-y^2}}^{\sqrt{1-y^2}} (2x + 4y^3) \,dx \space dy\)
21) Let \(D\) be the region bounded by \(y = 1 - x^2, \space y = 4 - x^2\), and the \(x\)- and \(y\)-axes.
a. Show that
\[\iint_D x\,dA = \int_0^1 \int_{1-x^2}^{4-x^2} x \space dy \space dx + \int_1^2 \int_0^{4-x^2} x \space dy \space dx\] by dividing the region \(D\) into two regions of Type I.
b. Evaluate the integral \[\iint_D x \space dA.\]
22) Let \(D\) be the region bounded by \(y = x, \space y = -x\), and \(y = 2 - x^2\).
a. Show that
\[\iint_D y^2 dA = \int_{-1}^0 \int_{-x}^{2-x^2} y^2 dy \space dx + \int_0^1 \int_x^{2-x^2} y^2 dy \space dx\] by dividing the region \(D\) into two regions of Type I, where \(D = \big\{(x,y)\,|\,y \geq x, y \geq -x, \space y \leq 2-x^2\big\}\).
b. Evaluate the integral \[\iint_D y^2 dA.\]
23) Let \(D\) be the region bounded by \(y = x^2\), \(y = x + 2\), and \(y = -x\).
a. Show that \[\iint_D x \space dA = \int_0^1 \int_{-y}^{\sqrt{y}} x \space dx \space dy + \int_1^2 \int_{y-2}^{\sqrt{y}} x \space dx \space dy\] by dividing the region \(D\) into two regions of Type II, where \(D = \big\{(x,y)\,|\,y \geq x^2, \space y \geq -x, \space y \leq x + 2\big\}\).
b. Evaluate the integral \[\iint_D x \space dA.\]
Answer: a. Answers may vary; b. \(\frac{2}{3}\)
24) The region \(D\) bounded by \(x = 0, y = x^5 + 1\), and \(y = 3 - x^2\) is shown in the following figure. Find the area \(A(D)\) of the region \(D\).
25) The region \(D\) bounded by \(y = \cos x, \space y = 4 + \cos x\), and \(x = \pm \frac{\pi}{3}\) is shown in the following figure. Find the area \(A(D)\) of the region \(D\).
Answer: \(\frac{8\pi}{3}\)
26) Find the area \(A(D)\) of the region \(D = \big\{(x,y)| \, y \geq 1 - x^2, y \leq 4 - x^2, \space y \geq 0, \space x \geq 0 \big\}\).
27) Let \(D\) be the region bounded by \( y = 1, \space y = x, \space y = ln \space x\), and the \(x\)-axis. Find the area \(A(D)\) of the region \(D\).
Answer: \(\left(e - \frac{3}{2}\right)\, \text{units}^2\)
28) Find the average value of the function \(f(x,y) = sin \space y\) on the triangular region with vertices \((0,0), \space (0,3)\), and \((3,0)\).
29) Find the average value of the function \(f(x,y) = -x + 1\) on the triangular region with vertices \((0,0), \space (0,2)\), and \((2,2)\).
Answer: \(\frac{2}{3}\)
In exercises 30 - 33, change the order of integration and evaluate the integral.
30) \[\int_{-1}^{\pi/2} \int_0^{x+1} sin \space x \space dy \space dx\]
31) \[\int_0^1 \int_{x-1}^{1-x} x \space dy \space dx\]
Answer: \[\int_0^1 \int_{x-1}^{1-x} x \space dy \space dx = \int_{-1}^0 \int_0^{y+1} x \space dx \space dy + \int_0^1 \int_0^{1-y} x \space dx \space dy = \frac{1}{3}\]
32) \[\int_{-1}^0 \int_{-\sqrt{y+1}}^{\sqrt{y+1}} y^2 dx \space dy\]
33) \[\int_{-1/2}^{1/2} \int_{-\sqrt{y^2+1}}^{\sqrt{y^2+1}} y \space dx \space dy\]
Answer: \[\int_{-1/2}^{1/2} \int_{-\sqrt{y^2+1}}^{\sqrt{y^2+1}} y \space dx \space dy = \int_1^2 \int_{-\sqrt{x^2-1}}^{\sqrt{x^2-1}} y \space dy \space dx = 0\]
34) The region \(D\) is shown in the following figure. Evaluate the double integral \(\displaystyle \iint_D (x^2 + y) \,dA\) by using the easier order of integration.
35) The region \(D\) is shown in the following figure. Evaluate the double integral \(\displaystyle \iint_D (x^2 - y^2) \,dA\) by using the easier order of integration.
Answer: \[\iint_D (x^2 - y^2) dA = \int_{-1}^1 \int_{y^4-1}^{1-y^4} (x^2 - y^2)dx \space dy = \frac{464}{4095}\]
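A quick symbolic check of this value (a verification sketch only, not part of the exercise set):

import sympy as sp

x, y = sp.symbols('x y')
val = sp.integrate(sp.integrate(x**2 - y**2, (x, y**4 - 1, 1 - y**4)), (y, -1, 1))
print(val)   # 464/4095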
36) Find the volume of the solid under the surface \(z = 2x + y^2\) and above the region bounded by \(y = x^5\) and \(y = x\).
37) Find the volume of the solid under the plane \(z = 3x + y\) and above the region determined by \(y = x^7\) and \(y = x\).
Answer: \(\frac{4}{5}\, \text{units}^3\)
38) Find the volume of the solid under the plane \(z = 3x + y\) and above the region bounded by \(x = tan \space y, \space x = -tan \space y\), and \(x = 1\).
39) Find the volume of the solid under the surface \(z = x^3\) and above the plane region bounded by \(x = sin \space y, \space x = -sin \space y\), and \(x = 1\).
Answer: \(\frac{5\pi}{32}\, \text{units}^3\)
40) Let \(g\) be a positive, increasing, and differentiable function on the interval \([a,b]\). Show that the volume of the solid under the surface \(z = g'(x)\) and above the region bounded by \(y = 0, \space y = g(x), \space x = a\), and \(x = b\) is given by \(\frac{1}{2}(g^2 (b) - g^2 (a))\).
41) Let \(g\) be a positive, increasing, and differentiable function on the interval \([a,b]\) and let \(k\) be a positive real number. Show that the volume of the solid under the surface \(z = g'(x)\) and above the region bounded by \(y = g(x), \space y = g(x) + k, \space x = a\), and \(x = b\) is given by \(k(g(b) - g(a)).\)
42) Find the volume of the solid situated in the first octant and determined by the planes \(z = 2\), \(z = 0, \space x + y = 1, \space x = 0\), and \(y = 0\).
43) Find the volume of the solid situated in the first octant and bounded by the planes \(x + 2y = 1\), \(x = 0, \space z = 4\), and \(z = 0\).
Answer: \(1\, \text{units}^3\)
44) Find the volume of the solid bounded by the planes \(x + y = 1, \space x - y = 1, \space x = 0, \space z = 0\), and \(z = 10\).
45) Find the volume of the solid bounded by the planes \(x + y = 1, \space x - y = 1, \space x - y = -1, \space z = 1\), and \(z = 0\)
Answer: \(2\, \text{units}^3\)
46) Let \(S_1\) and \(S_2\) be the solids situated in the first octant under the planes \(x + y + z = 1\) and \(x + y + 2z = 1\) respectively, and let \(S\) be the solid situated between \(S_1, \space S_2, \space x = 0\), and \(y = 0\).
Find the volume of the solid \(S_1\). Find the volume of the solid \(S_2\). Find the volume of the solid \(S\) by subtracting the volumes of the solids \(S_1\) and \(S_2\).
47) Let \(S_1\) and \(S_2\) be the solids situated in the first octant under the planes \(2x + 2y + z = 2\) and \(x + y + z = 1\) respectively, and let \(S\) be the solid situated between \(S_1, \space S_2, \space x = 0\), and \(y = 0\).
Find the volume of the solid \(S_1\). Find the volume of the solid \(S_2\). Find the volume of the solid \(S\) by subtracting the volumes of the solids \(S_1\) and \(S_2\). Answer: a. \(\frac{1}{3}\, \text{units}^3\) b. \(\frac{1}{6}\, \text{units}^3\) c. \(\frac{1}{6}\, \text{units}^3\)
48) Let \(S_1\) and \(S_2\) be the solids situated in the first octant under the plane \(x + y + z = 2\) and under the sphere \(x^2 + y^2 + z^2 = 4\), respectively. If the volume of the solid \(S_2\) is \(\frac{4\pi}{3}\) determine the volume of the solid \(S\) situated between \(S_1\) and \(S_2\) by subtracting the volumes of these solids.
49) Let \(S_1\) and \(S_2\) be the solids situated in the first octant under the plane \(x + y + z = 2\) and under the sphere \(x^2 + y^2 = 4\), respectively.
Find the volume of the solid \(S_1\). Find the volume of the solid \(S_2\). Find the volume of the solid \(S\) situated between \(S_1\) and \(S_2\) by subtracting the volumes of the solids \(S_1\) and \(S_2\). Answer: a. \(\frac{4}{3}\, \text{units}^3\) b. \(2\pi\, \text{units}^3\) c. \(\frac{6\pi - 4}{3}\, \text{units}^3\)
50) [T] The following figure shows the region \(D\) bounded by the curves \(y = sin \space x, \space x = 0\), and \(y = x^4\). Use a graphing calculator or CAS to find the \(x\)-coordinates of the intersection points of the curves and to determine the area of the region \(D\). Round your answers to six decimal places.
51) [T] The region \(D\) bounded by the curves \(y = \cos x, \space x = 0\), and \(y = x^3\) is shown in the following figure. Use a graphing calculator or CAS to find the \(x\)-coordinates of the intersection points of the curves and to determine the area of the region \(D\). Round your answers to six decimal places.
Answer: 0 and 0.865474; \(A(D) = 0.621135\, \text{units}^2\)
52) Suppose that \((X,Y)\) is the outcome of an experiment that must occur in a particular region \(S\) in the \(xy\)-plane. In this context, the region \(S\) is called the sample space of the experiment and \(X\) and \(Y\) are random variables. If \(D\) is a region included in \(S\), then the probability of \((X,Y)\) being in \(D\) is defined as \(P[(X,Y) \in D] = \iint_D p(x,y)dx \space dy\), where \(p(x,y)\) is the joint probability density of the experiment. Here, \(p(x,y)\) is a nonnegative function for which \(\iint_S p(x,y) dx \space dy = 1\). Assume that a point \((X,Y)\) is chosen arbitrarily in the square \([0,3] \times [0,3]\) with the probability density
\[p(x,y) = \frac{1}{9}, \quad (x,y) \in [0,3] \times [0,3],\]
\[p(x,y) = 0 \quad \text{otherwise}\]
Find the probability that the point \((X,Y)\) is inside the unit square and interpret the result.
53) Consider \(X\) and \(Y\) two random variables of probability densities \(p_1(x)\) and \(p_2(x)\), respectively. The random variables \(X\) and \(Y\) are said to be independent if their joint density function is given by \(p(x,y) = p_1(x)p_2(y)\). At a drive-thru restaurant, customers spend, on average, 3 minutes placing their orders and an additional 5 minutes paying for and picking up their meals. Assume that placing the order and paying for/picking up the meal are two independent events \(X\) and \(Y\). If the waiting times are modeled by the exponential probability densities
\[p_1(x) = \frac{1}{3}e^{-x/3}, \quad x\geq 0,\]
\[p_1(x) = 0 \quad \text{otherwise}\]
\[p_2(y) = \frac{1}{5} e^{-y/5}, \quad y \geq 0\]
\[p_2(y) = 0 \quad \text{otherwise}\]
respectively, the probability that a customer will spend less than 6 minutes in the drive-thru line is given by \(P[X + Y \leq 6] = \iint_D p(x,y) dx \space dy\), where \(D = {(x,y)|x \geq 0, \space y \geq 0, \space x + y \leq 6}\). Find \(P[X + Y \leq 6]\) and interpret the result.
Answer: \(P[X + Y \leq 6] = 1 + \frac{3}{2e^2} - \frac{5}{2e^{6/5}} \approx 0.45\); there is a \(45\%\) chance that a customer will spend less than \(6\) minutes in the drive-thru line.
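A numerical double-check of this probability (a verification sketch only), integrating the joint density over the triangle \(x \geq 0, \space y \geq 0, \space x + y \leq 6\):

import numpy as np
from scipy import integrate

p = lambda y, x: (1/3) * np.exp(-x/3) * (1/5) * np.exp(-y/5)   # joint density p1(x) p2(y)
prob, _ = integrate.dblquad(p, 0, 6, lambda x: 0, lambda x: 6 - x)
print(prob)                                       # ~0.45
print(1 + 3/(2*np.e**2) - 5/(2*np.e**(6/5)))      # the closed form, also ~0.45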
54) [T] The Reuleaux triangle consists of an equilateral triangle and three regions, each of them bounded by a side of the triangle and an arc of a circle of radius \(s\) centered at the opposite vertex of the triangle. Show that the area of the Reuleaux triangle in the following figure of side length \(s\) is \(\frac{s^2}{2}(\pi - \sqrt{3})\).
55) [T] Show that the area of the lunes of Alhazen, the two blue lunes in the following figure, is the same as the area of the right triangle \(ABC\). The outer boundaries of the lunes are semicircles of diameters \(AB\) and \(AC\) respectively, and the inner boundaries are formed by the circumcircle of the triangle \(ABC\). |
To use this tool, enter the required fields and click "Calculate"
This calculator uses the classical method of calculating gyroscopic stability, as described by Bob McCoy in his book Modern Exterior Ballistics. Because the classical method requires detailed bullet dimensions and aerodynamic coefficients, we apply the simplifications derived by Don Miller in his article, A New Rule for Estimating Rifling Twist - An Aid to Choosing Bullets and Rifles and the later follow-up article, How Good are Simple Rules for Estimating Rifling Twist. The "Miller Rule", as it's often referred to, uses empirical data to simplify the math down to the point where only the bullet's length is required. The price in accuracy paid for this simplification is surprisingly small.
The classical equation to calculate gyroscopic stability is as follows. \(S_g\) is the gyroscopic stability factor, and must be above 1.0 if the bullet is to remain stable.
$$S_g = {{8\pi} \over {\rho_{air}t^2d^5C_{M\alpha}}}{{A^2}\over{B}} $$
The variables are as follows:
\(S_g\): the gyroscopic stability factor. A bullet is stable if \(S_g\) is over 1.0.
\(\rho_{air}\): air density
\(t\): rifling twist
\(d\): the bullet's caliber
\(C_{M\alpha}\): the bullet's overturning moment coefficient
\({{A^2}\over{B}}\): the square of the bullet's axial moment of inertia divided by its transverse moment of inertia
The first four variables are easy to measure and/or calculate. The last two can be troublesome. \({{A^2}\over{B}}\) can be calculated (tediously) if you have detailed information about the bullet's dimensions and weight distribution. \(C_{M\alpha}\) must be either measured in a sophisticated lab, or calculated with engineering software. What Don Miller did was to take those two hard to get numbers and find suitable substitutes that depend only on bullet length. He looked at data from known projectiles studied by the Ballistics Research Lab and came up with his remarkably accurate simplifications.
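For reference, a sketch of the Miller estimate in the form it is usually quoted (mass in grains, diameter and length in inches, twist in inches per turn, with a cube-root velocity correction relative to 2800 fps); the constants here follow the commonly reproduced statement of the rule and should be treated as assumptions rather than this calculator's exact code:

def miller_sg(mass_gr, diameter_in, length_in, twist_in_per_turn, velocity_fps=2800.0):
    # Gyroscopic stability factor via the Miller twist rule (standard atmosphere assumed).
    t = twist_in_per_turn / diameter_in   # twist expressed in calibers per turn
    l = length_in / diameter_in           # bullet length expressed in calibers
    sg = 30.0 * mass_gr / (t**2 * diameter_in**3 * l * (1 + l**2))
    return sg * (velocity_fps / 2800.0) ** (1.0 / 3.0)

# Illustrative (assumed) numbers: a 175 gr, 0.308 in bullet, 1.24 in long, 1:11.25" twist, 2600 fps
print(round(miller_sg(175, 0.308, 1.24, 11.25, 2600), 2))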
If a bullet has a gyroscopic stability factor (\(S_g\)) of less than 1.0, it will tumble. So you need at least that. However, a bullet must also exhibit dynamic stability in addition to gyroscopic stability. While dynamic stability is a hard thing to pin down, it turns out that a little bit of margin on your \(S_g\) will help ensure that your bullet starts off stable. Anything less than about 1.25 is getting close to the edge.
Additionally, if you want to minimize yaw and wring the last tiny bit of ballistic performance, you should aim for an \(S_g\) of roughly 1.5.
Yes. The faster you spin a bullet, the less accurate it will be. There is no reason to spin a bullet any faster than necessary to get the \(S_g\) that you want. For optimal accuracy, I shoot for 1.3-1.4. For ballistic optimization, 1.5-1.7 is a better number. Any faster will result in poor precision and possibly bullet destruction.
Surprisingly so. When compared with the more detailed classical methods of determining bullet stability, Miller matches up very well. There is a caveat, however. The Miller rule is based on a library of test data collected by the BRL. The less your bullet looks like the projectiles used in the library, the greater the chance that the rule will fall short. In other words, Miller works great for sane bullets. If you start getting into crazy numbers (say, a 200 grain .224 bullet) it's not going to work very well.
More or less. The data that Miller used to create the rule consisted of data from boattail and flat-based projectiles. The data for boattail bullets is a better fit than that for flat-based bullets, but it's still a reasonably good approximation. As with any calculation, use an appropriate margin of safety, as real life sometimes intervenes.
More or less. Miller's source data did not include plastic tipped projectiles. However, knowledge of the mechanics of stability tell us that the model will generally be conservative for polymer tipped bullets.
|
Definition:Euclidean Space
Definition
Let $S$ be one of the standard number fields $\Q$, $\R$, $\C$.
Let $S^n$ be a cartesian space for $n \in \N_{\ge 1}$.
Let $d: S^n \times S^n \to \R$ be the usual (Euclidean) metric on $S^n$.
Then $\tuple {S^n, d}$ is a Euclidean space.
Special Cases
Let the Euclidean metric $d$ be applied to $\R^n$.
Then $\left({\R^n, d}\right)$ is a Euclidean $n$-space.
Let $\Q^n$ be an $n$-dimensional vector space of rational numbers.
Let the Euclidean Metric $d$ be applied to $\Q^n$.
Then $\left({\Q^n, d}\right)$ is a Euclidean $n$-space.
Let $\C$ be the complex plane.
Let $d$ be the Euclidean metric on $\C$.
Then $\left({\C, d}\right)$ is a Euclidean space.
Definition
For any real number $a$ let:
$L_a = \left\{{ \left({x, y}\right) \in \R^2: x = a }\right\}$
Furthermore, define:
$L_A = \left\{{L_a: a \in \R }\right\}$
For any two real numbers $m$ and $b$ let:
$L_{m,b} = \left\{{ \left({x, y}\right) \in \R^2: y = m x + b }\right\}$
Furthermore, define:
$L_{M,B} = \left\{{ L_{m,b}: m,b \in \R }\right\}$
Finally let:
$L_E = L_A \cup L_{M,B}$
The abstract geometry $\left({\R^2, L_E}\right)$ is called the Euclidean plane.
Also known as
Some authors use the term Cartesian plane instead of Euclidean plane.
This entry was named for Euclid.
They bear that name because the geometric space which this construction gives rise to is Euclidean in the sense that it is consistent with Euclid's fifth postulate. |
I'm new to quantum computing, so while studying Grover's algorithm I (and, I think a lot of other people too) could not help but notice that exactly the same operator is applied $\sqrt{N}$ times:
$$U = [2 \left| \psi \right> \left<\psi \right| - I ]\mathcal{O} $$
Of course, it depends on the oracle $\mathcal{O}$ and, as far as I understood from David Roberts' comment in this discussion on mathoverflow
More specifically, $U$ depends on $\left| \psi \right>$ (and $N$, but that's a bit different) which is not always in the same relation to $\left| E \right>$ in different concrete instances of the problem. Also, complexity is roughly an asymptotic measure, taken at worst case.
which (I think) means that in real tasks $\left| \psi \right>$ need not be an equal superposition of states.
However, I don't see any prohibition here against $U^k$ ($k \in \mathbb{N}$, for example $k = [\sqrt{N}]+1$) being constructed in advance for some specific range of tasks.
So, my question is whether it is theoretically possible to obtain a short closed form for $U^k$ and apply a single operator instead of applying $U$ up to $\sqrt{N}$ times. Maybe it is impossible in general because of purely analytic difficulties, but perhaps we can construct the operator $U^k$ for some specific example that allows simplification?
To illustrate my point of view, I theorized about the following problem: suppose I want to compute my function $f(x)$ on some grid with $N_1$ points and find whether $f(x)=a$ somewhere; if not, I will try a bigger grid $N_2>N_1$. Suppose I can efficiently construct an oracle $\mathcal{O}$ for this for any $N$.
My $\left| \psi \right>$ is an equal superposition of states:
$$ \left| \psi \right> = \frac{1}{\sqrt{N}}\sum_{x=0}^{N-1}\left| x \right>$$
I'll switch to pure linear algebra, restricting to $4\times 4$ matrices for simplicity:
$$\frac{1}{N}A := \left| \psi \right> \left<\psi \right| = \frac{1}{N}\begin{bmatrix} 1 & 1 & 1 & 1\\ 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \end{bmatrix} $$
I will define $\tilde{I}$ - matrix representation of some arbitrary realization of oracle, e.g.
$$\tilde{I} = \begin{bmatrix} 1 & 0 & 0 & 0\\ 0 & -1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} $$
Let also denote
$$\tilde{A} = A\tilde{I} = \begin{bmatrix} 1 & -1 & 1 & 1\\ 1 & -1 & 1 & 1 \\ 1 & -1 & 1 & 1 \\ 1 & -1 & 1 & 1 \end{bmatrix} $$
and notice that for $N=4$: $ \ \tilde{A}^k = 2^{k-1}\tilde{A} $ (for $N=5$ we have $3^{k-1}$).
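This identity is easy to verify numerically; a small sketch for $N=4$ and $k=3$:

import numpy as np

N = 4
A = np.ones((N, N))
I_tilde = np.diag([1, -1, 1, 1])      # one arbitrary realization of the oracle
A_tilde = A @ I_tilde

print(np.allclose(np.linalg.matrix_power(A_tilde, 3), 2**2 * A_tilde))   # True: A~^3 = 2^2 A~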
Now we can compute:
$$U^k = ((\frac{2}{N}A - I)\tilde{I})^k = (\frac{2}{N}\tilde{A} - \tilde{I})^k = /\text{binomial theorem} / =$$ $$= \frac{2^k}{N^k}\tilde{A}^k+...+\begin{pmatrix}k \\ i\end{pmatrix}\frac{2^i}{N^i}\tilde{A}^i(-1)^{k-i}\tilde{I}^{k-i}+...+(-1)^k\tilde{I}^k=/\text{power of $\tilde{A}$}/=$$ $$=\frac{2^k}{N^k}2^{k-1}\tilde{A}+...+\begin{pmatrix}k \\ i\end{pmatrix}\frac{2^i}{N^i}2^{i-1}\tilde{A}(-1)^{k-i}\tilde{I}^{k-i}+...+(-1)^k\tilde{I}^k=/\text{$\tilde{A}=A\tilde{I}$}/= $$ $$=A\big(\frac{2^k}{N^k}2^{k-1}\tilde{I}+...+\begin{pmatrix}k \\ i\end{pmatrix}\frac{2^i2^{i-1}}{N^i}(-1)^{k-i}\tilde{I}^{k-i+1}+...+\begin{pmatrix}k \\ 1\end{pmatrix}(-1)^{k-1}2\tilde{I}^k\big)+(-1)^k\tilde{I}^k=$$ $$=\frac{1}{2}A\big(\frac{4}{N}I - \tilde{I}\big)^k\tilde{I} -A(-1)^k\tilde{I}^{k+1}+(-1)^k\tilde{I}^k =\frac{1}{2}A\big(\frac{4}{N}I - \tilde{I}\big)^k\tilde{I}+(-1)^k(I-A\tilde{I})\tilde{I}^k.$$
The matrices raised to the power $k$ are just diagonal matrices based on the oracle, and I think it is possible to implement them if we can implement the oracle itself. Since $\tilde{I}$ is an arbitrary realization of the oracle, we can switch back to the "quantum" formula:
$$U^k =\frac{N}{2}\left| \psi \right> \left<\psi \right|\big(\frac{4}{N}I - \mathcal{O}\big)^k\mathcal{O}+(-1)^k(I-N\left| \psi \right> \left<\psi \right|\mathcal{O})\mathcal{O}^k = $$ $$=\frac{N}{2}\left| \psi \right> \left<\psi \right|\mathcal{O}'\mathcal{O}+(-1)^k(I-N\left| \psi \right> \left<\psi \right|\mathcal{O})\mathcal{O}'' $$
Thus we have a much smaller structure to implement, one that gives us the result in a few steps rather than in $\sqrt{N}$ steps for
this specific task. Even if I made a mistake in the calculations, the simplification approach is clear. |
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Beauty production in pp collisions at √s=2.76 TeV measured via semi-electronic decays
(Elsevier, 2014-11)
The ALICE Collaboration at the LHC reports measurement of the inclusive production cross section of electrons from semi-leptonic decays of beauty hadrons with rapidity |y|<0.8 and transverse momentum 1<pT<10 GeV/c, in pp ... |
High Energy Physics - Theory
Title: Rectangular superpolynomials for the figure-eight knot
(Submitted on 1 Sep 2016 (v1), last revised 10 Sep 2016 (this version, v2))
Abstract: We rewrite the recently proposed differential expansion formula for HOMFLY polynomials of the knot $4_1$ in arbitrary rectangular representation $R=[r^s]$ as a sum over all Young sub-diagrams $\lambda$ of $R$ with extraordinarily simple coefficients $D_{\lambda^{tr}}(r)\cdot D_\lambda(s)$ in front of the $Z$-factors. Somewhat miraculously, these coefficients are made from quantum dimensions of symmetric representations of the groups $SL(r)$ and $SL(s)$ and restrict summation to diagrams with no more than $s$ rows and $r$ columns. They possess a natural $\beta$-deformation to Macdonald dimensions and produce positive Laurent polynomials, which can be considered as plausible candidates for the role of the rectangular superpolynomials. Both polynomiality and positivity are non-evident properties of the arising expressions, still they are true. This extends the previous suggestions for symmetric and antisymmetric representations (when $s=1$ or $r=1$ respectively) to arbitrary rectangular representations. As usual for differential expansion, there are additional gradings. In the only example available for comparison -- that of the trefoil knot $3_1$, to which our results for $4_1$ are straightforwardly extended -- one of them reproduces the "fourth grading" for hyperpolynomials. Factorization properties are nicely preserved even in the 5-graded case.
Submission history: From: Alexei Morozov. [v1] Thu, 1 Sep 2016 08:30:42 GMT (459kb). [v2] Sat, 10 Sep 2016 13:32:49 GMT (19kb) |
I am by no means an expert on LLL, but I have worked with it before. Please correct me if this answer is in some way incorrect.
Define a basis $\beta = \{v_1,v_2,\ldots,v_n\}$ for $\mathbb{R}^n$. Then the lattice $L$ generated by $\beta$ is the set of
integer linear combinations of $\beta$:
$$L = \{ m_1v_1 + \cdots + m_nv_n : m_i \in \mathbb{Z} \} $$
This means the $\beta$-coordinate representation of vectors in $L$ are entirely integers.
The basis $B$ is a set of vectors in $L$ that spans $L$ by integer linear combinations of the vectors in $B$. Since each of the vectors in $B$ are in $L$, they must have
integer coordinates with respect to $\beta$, but they may not have integer entries as vectors in $\mathbb{R}^n$.
To make this concrete, consider the lattice $L$ spanned by $\beta = \{ (\sqrt{2},0), (0,\sqrt{3}) \}$. Then $B = \{(\sqrt{2},0),(\sqrt{2},\sqrt{3}) \}$ is a basis for $L$. Note that the vectors in $B$ have irrational entries. The coordinates of $(\sqrt{2},0)$ in the basis $\beta$ are $(1,0)$ and the coordinates of $(\sqrt{2},\sqrt{3})$ in $\beta$ are $(1,1)$. So while the vectors in $B$ do not have integer values, they do have integer coordinates with respect to the basis $\beta$. |
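(A small numerical illustration of the preceding lattice example; a numpy sketch, not part of the original answer. It solves for the $\beta$-coordinates of the vectors in $B$ and confirms they are integers even though the vectors themselves have irrational entries.)

import numpy as np

beta = np.array([[np.sqrt(2), 0.0],
                 [0.0, np.sqrt(3)]]).T      # columns: the basis beta
B = np.array([[np.sqrt(2), 0.0],
              [np.sqrt(2), np.sqrt(3)]]).T  # columns: the alternative lattice basis B

coords = np.linalg.solve(beta, B)           # beta-coordinates of the vectors in B
print(np.round(coords, 10))                 # [[1. 1.], [0. 1.]]: all integers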
We're coming up to the end of a forecast I made almost three years ago. The previous update is here and everything is at the aggregated forecast post. It's a forecast I made in comparison with a NY Fed DSGE model, and it appears to be coming down to a tie. However, that's a win for the five-parameter monetary information equilibrium model (that also works for the entire post-war period) versus the 40+ parameter DSGE model.
Here is the updated forecast graph for the recently released PCE inflation data:
The performance is approximately the same (the IE model (blue) is slightly better, but biased low):
What is interesting is that the constant inflation model (green) does better than both. It's interesting because it's the same result as a dynamic equilibrium model of PCE inflation:
The dynamic equilibrium says that without shocks, PCE inflation is approximately constant (1.7%) and there was only a tiny shock in mid-2013 (before either of the forecasts above were made). That means that over the forecast window, the dynamic equilibrium model is equivalent to constant PCE inflation -- the model that does better than the IE monetary model and the NY Fed DSGE model. And aside from the two shocks (three parameters each), it only has one parameter (the dynamic equilibrium of 1.7% inflation). And it's really only that single parameter that is in effect (isn't exponentially suppressed) over the forecast period.
As a side note, over the course of working with the information equilibrium model I've come to a better understanding of how it all fits together. This is a good sign as it implies I'm learning something! The monetary IE model used above (and documented in detail in my paper) is probably best understood as an ensemble model with money as a factor of production with a particular model for the changing IT index:
$$
\frac{d \langle N \rangle}{dM0} = \langle k \rangle \; \frac{\langle N \rangle}{M0}
$$
$$
\langle k \rangle \sim \frac{\log \langle N \rangle}{\log M0}
$$
As such, its scope is defined in terms of how well the model for the IT index matches the empirical data.
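(As an aside not in the original post: if $\langle k \rangle$ is treated as locally constant, the first equation integrates to a power law $\langle N \rangle \propto M0^{\langle k \rangle}$. A minimal sympy sketch of that step, under that constant-$\langle k \rangle$ assumption:)

import sympy as sp

M0, k = sp.symbols('M0 k', positive=True)
N = sp.Function('N')

# d<N>/dM0 = <k> <N>/M0, treating <k> as (locally) constant
sol = sp.dsolve(sp.Eq(N(M0).diff(M0), k * N(M0) / M0), N(M0))
print(sol)   # N(M0) = C1*M0**k, i.e. a power law in the monetary base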
Update: forgot equations. Added them. |
C SHIVAKUMARA
Articles written in Bulletin of Materials Science
Volume 19 Issue 4 August 1996 pp 607-613
A series of oxides LnBaCuCoO$_5$ (Ln=Pr, Nd, Sm, Dy, Gd, Ho and Er) have been synthesized by the ceramic method. The oxides crystallize in a tetragonal structure, isostructural to YBaCuCoO$_5$. All the oxides in the series are semiconducting. IR spectra of these oxides show distinct absorption bands at 630 cm$^{-1}$, 550 cm$^{-1}$ and 330 cm$^{-1}$ which are assigned to 2 and 1 modes respectively. Doping of holes in these oxides, by calcium substitution in Er$_{1-x}$Ca
Volume 40 Issue 7 December 2017 pp 1291-1299
Monovalent ion doped lanthanum cobaltate La$_{1−x}$Na$_x$CoO$_3$ ($0 \leq x \leq 0.25$) compositions were synthesized by the nitrate–citrate gel combustion method. All the heat treatments were limited to below 1123 K, in order to retain the Na stoichiometry. Structural parameters for all the compounds were confirmed by the Rietveld refinement method using powder X-ray diffraction (XRD) data and exhibit the rhombohedral crystal structure with space group R-3c (No. 167). The scanning electron microscopy study reveals that the particles are spherical in shape with sizes in the range of 0.2–0.5 $\mu$m. High temperature electrical resistivity, Seebeck coefficient and thermal conductivity measurements were performed on the high density hot pressed pellets in the temperature range of 300–800 K, which exhibit p-type conductivity of pristine and doped compositions. The X-ray photoelectron spectroscopy (XPS) studies confirm the monotonous increase in Co$^{4+}$ with doping concentration up to $x = 0.15$, which is correlated with the electrical resistivity and Seebeck coefficient values of the samples. The highest power factor of 10 $\mu$WmK$^{−2}$ is achieved for 10 at% Na content at 600 K. Thermoelectric figure of merit is estimated to be $\sim$$1 \times 10^{−2}$ at 780 K for 15 at% Na-doped samples.
Current Issue: Volume 42, Issue 6, December 2019 |
ISSN: 1937-1632
eISSN: 1937-1179
Discrete & Continuous Dynamical Systems - S
April 2013, Volume 6, Issue 2
Issue dedicated to Michel Frémond on the occasion of his 70th birthday
Abstract:
This special volume of Discrete and Continuous Dynamical Systems - Series S is dedicated to Michel Frémond on the occasion of his 70th birthday, for his important contributions to several theoretical and applied problems in Mechanics, Thermodynamics and Engineering.
For more information please click the "Full Text" above.
Abstract:
In this paper we introduce a 3D phenomenological model for shape memory behavior, accounting for: martensite reorientation, asymmetric response of the material to tension/compression, different kinetics between forward and reverse phase transformation. We combine two modeling approaches using scalar and tensorial internal variables. Indeed, we use volume proportions of different configurations of the crystal lattice (austenite and two variants of martensite) as scalar internal variables and the preferred direction of stress-induced martensite as tensorial internal variable. Then, we derive evolution equations by a generalization of the principle of virtual powers, including microforces and micromovements responsible for phase transformations. In addition, we prescribe an evolution law for phase proportions ensuring different kinetics during forward and reverse transformation of the oriented martensite.
Abstract:
A one-dimensional model for a shape memory alloy is proposed. It provides a simplified description of the pseudo-elastic regime, where stress-induced transitions from austenitic to oriented martensitic phases occur. The stress-strain evolution is ruled by a bilinear rate-independent o.d.e. which also accounts for the fine structure of minor hysteresis loops and applies to the case of single crystals only. The temperature enters the model as a parameter through the yield limit $y$. Above the critical temperature $\theta_A^*$, the austenite-martensite phase transformations are described by a Ginzburg-Landau theory involving an order parameter $φ$, which is related to the anelastic deformation. As usual, the basic ingredient is the Gibbs free energy, $\zeta$, which is a function of the order parameter, the stress and the temperature. Unlike other approaches, the expression of this thermodynamic potential is derived rather than assumed here. The explicit expressions of the minimum and maximum free energies are obtained by exploiting the Clausius-Duhem inequality, which ensures the compatibility with thermodynamics, and the complete controllability of the system. This allows us to highlight the role of the Ginzburg-Landau equation when phase transitions in materials with hysteresis are involved.
Abstract:
We propose a model describing the liquid-vapour phase transition according to a phase-field method. A phase variable $φ$ is introduced whose equilibrium values $φ=0$ and $φ=1$ are associated with the liquid and vapour phases. The phase field obeys Ginzburg-Landau equation and enters the constitutive relation of the density, accounting for the sudden density jump occurring at the phase transition. In this paper we concern ourselves especially with the problems arising in the phase field approach due to the existence of the critical point in the coexistence line, which entails the merging of the phases described by $φ$.
Abstract:
In this paper, we deal with a PDE system describing a phase transition problem characterized by irreversible evolution and ruled by a nonlinear heat flux law. Its derivation comes from the modelling approach proposed by M. Frémond. Our main result consists in showing the global-in-time existence and the uniqueness of the solution of the related initial and boundary value problem.
Abstract:
This paper is concerned with a diffusion model of phase-field type, consisting of a parabolic system of two partial differential equations, interpreted as balances of microforces and microenergy, for two unknowns: the problem's order parameter $\rho$ and the chemical potential $\mu$; each equation includes a viscosity term -- respectively, $\varepsilon \,\partial_t\mu$ and $\delta\,\partial_t\rho$ -- with $\varepsilon$ and $\delta$ two positive parameters; the field equations are complemented by Neumann homogeneous boundary conditions and suitable initial conditions. In a recent paper [5], we proved that this problem is well-posed and investigated the long-time behavior of its $(\varepsilon,\delta)-$solutions. Here we discuss the asymptotic limit of the system as $\varepsilon$ tends to $0$. We prove convergence of $(\varepsilon,\delta)-$solutions to the corresponding solutions for the case $\varepsilon =0$, whose long-time behavior we characterize; in the proofs, we employ compactness and monotonicity arguments.
Abstract:
We address the thermal control of the quasi-static evolution of a polycrystalline shape memory alloy specimen. The thermomechanical evolution of the body is described by means of the phenomenological SOUZA-AURICCHIO model [6,53]. By assuming to be able to control the temperature of the body in time, we determine the corresponding quasi-static evolution in the energetic sense. By recovering in this context a result by RINDLER [49,50] we prove the existence of optimal controls for a suitably large class of cost functionals and comment on their possible approximation.
Abstract:
Our aim in this paper is to define proper dynamic boundary conditions for a generalization of the Cahn-Hilliard system proposed by M. Gurtin. Such boundary conditions take into account the interactions with the walls in confined systems. We then study the existence and uniqueness of weak solutions.
Abstract:
In this article, we give an asymptotic expansion, with respect to the viscosity which is considered here to be small, of the solutions of the $3D$ linearized Primitive Equations (PEs) in a channel with lateral periodicity. A rigorous convergence result, in some physically relevant space, is proven. This allows, among other consequences, to confirm the natural choice of the non-local boundary conditions for the non-viscous PEs.
Abstract:
In this paper we consider some mechanical phenomena whose dynamics is described by a class of quasi-variational inequalities of parabolic type. Our system consists of a second-order parabolic variational inequality with gradient constraint depending on the temperature and the heat equation. Since the temperature is unknown in our problem, the constraint function is unknown as well. In this sense, our problem includes the quasi-variational structure, and in the mathamtical analysis one of main difficulties comes from it. Our approach to the problem is based on the abstract theory of quasi-variational inequalities with non-local constraint which has been developed in [6]. However the abstract theory is not directly used in the existence proof of a solution, since the mathematical situation of the problem is much nicer than that in the abstract theory [6]. In this paper we prove the existence of a weak solution of our system.
Abstract:
We propose an improved model explaining the occurrence of high stresses due to the difference in specific volumes during phase transitions between water and ice. The unknowns of the resulting evolution problem are the absolute temperature, the volume increment, and the liquid fraction. The main novelty here consists in including the dependence of the specific heat and of the speed of sound upon the phase. These additional nonlinearities bring new mathematical difficulties which require new estimation techniques based on Moser iteration. We establish the existence of a global solution to the corresponding initial-boundary value problem, as well as lower and upper bounds for the absolute temperature. Assuming constant heat conductivity, we also prove uniqueness and continuous data dependence of the solution.
Abstract:
In this paper, tensegrity structures are modeled by introducing suitable energy convex functions. These allow to enforce both ideal and non-ideal constraints, gathering compatibility, equilibrium, and stability problems, as well as their duality relationships, in the same functional framework. Arguments of convex analysis allow to recover consistently a number of basic results, as well as to formulate new interpretations and analysis criterions.
Abstract:
We show that many couplings between parabolic systems for processes in solids can be formulated as a gradient system with respect to the total free energy or the total entropy. This includes Allen-Cahn, Cahn-Hilliard, and reaction-diffusion systems and the heat equation. For this, we write the coupled system as an Onsager system $(X,Φ,K)$ defining the evolution $\dot U=-K(U)D Φ(U)$. Here $Φ$ is the driving functional, while the Onsager operator $K(U)$ is symmetric and positive semidefinite. If the inverse $G =K ^{-1}$ exists, the triple $(X,Φ,G)$ defines a gradient system.
Onsager systems are well suited to model bulk-interface interactions by using the dual dissipation potential $\Psi^*(U,\Xi)=1/2\langle \Xi, K(U)\Xi\rangle$. Then, the two functionals $\Phi$ and $\Psi^*$ can be written as a sum of a volume integral and a surface integral, respectively. The latter may contain interactions of the driving forces in the interface as well as the traces of the driving forces from the bulk. Thus, capture and escape mechanisms like thermionic emission appear naturally in Onsager systems, namely simply through integration by parts.
Abstract:
We consider the inverse problem of determining the possible presence of an inclusion in a thin plate by boundary measurements. The plate is made by non-homogeneous linearly elastic material belonging to a general class of anisotropy. The inclusion is made by different elastic material. Under some a priori assumptions on the unknown inclusion, we prove constructive upper and lower estimates of the area of the unknown defect in terms of an easily expressed quantity related to work, which is given in terms of measurements of a couple field applied at the boundary and of the induced transversal displacement and its normal derivative taken at the boundary of the plate.
Abstract:
An initial-boundary-value problem for a class of sixth order viscous Cahn-Hilliard type equations with a nonlinear diffusion is considered. The study is motivated by phase-field modelling of various spatial structures, for example arising in oil-water-surfactant mixtures and in modelling of crystal growth on atomic length, known as phase field crystal model. For such problem we prove the existence and uniqueness of a global in time regular solution. First the finite-time existence is proved by means of the Leray-Schauder fixed point theorem. Then, due to suitable estimates, the finite-time solution is extended step by step on the infinite time interval.
Abstract:
The non-smooth view of Michel Frémond has already been proven successful in managing collisions between rigid particles and in this paper, it will be adapted so as to represent pedestrians and their strategy of displacement. The developed discrete approach applies a rigorous thermodynamic framework in which the local interactions between particles are managed by the use of pseudopotentials of dissipation. It handles local interactions such as pedestrian-pedestrian and pedestrian-obstacle in order to reproduce the global and real dynamics of pedestrian traffic. Social forces are introduced and implemented in order to simulate the behavior of pedestrians and subgroups of pedestrians. The numerical implementation allows us to perform simulations in various situations so that the safety and comfort of public spaces can be enhanced.
Abstract:
Pseudo-potentials are very useful tools to define thermodynamically admissible constitutive rules. Bipotentials are convenient for numerical purposes, in particular for non-associative rules. Unfortunately, these functionals are not always easy to construct starting from a given constitutive law. This work proposes a procedure to find the pseudo-potentials and the bipotential starting from the usual description of a non-associative constitutive law. This method is applied to different non-associative plasticity models such as the Drucker-Prager model and the non-linear kinematic hardening model. The same procedure allows one to obtain the pseudo-potentials of an endochronic plasticity model. The pseudo-potentials for the contact problem with dissipation are constructed using the same ideas. For all these non-associative constitutive laws a bipotential is then automatically deduced.
Abstract:
The quasistatic rate-independent evolution of delamination in the so-called mixed-mode, i.e. distinguishing opening (mode I) from shearing (mode II), devised in [45], is described in detail and rigorously analysed as far as existence of the so-called energetic solutions concerns. The model formulated at small strains uses a delamination parameter of Frémond's type combined with a concept of interface plasticity, and is associative in the sense that the dissipative force driving delamination has a potential which depends in a 1-homogeneous way only on rates of internal parameters. A sample numerical simulation documents that this model can really produce mode-mixity-sensitive delamination.
|
I was studying special relativity and i found this derivation of the Lorentz transformations \begin{equation} \left( \begin{array}{cccc} x'^0 \\ x'^1 \\ x'^2\\ x'^3 \end{array} \right)= \left( \begin{array}{cccc} \gamma & -\gamma \beta & 0& 0 \\ -\gamma \beta &\gamma &0 & 0 \\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1 \end{array} \right)\left( \begin{array}{cccc} x^0 \\ x^1 \\ x^2\\ x^3 \end{array} \right) \end{equation}
and then he denotes \begin{equation} Λ^μ{}_ν= \left( \begin{array}{cccc} \gamma & -\gamma \beta & 0& 0 \\ -\gamma \beta &\gamma &0 & 0 \\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1 \end{array} \right) \end{equation} as the Lorentz transformation matrix. If that's the case, which matrix is, for example, $Λ_{νμ}$ or $Λ^{νμ}$ or even $Λ_μ^ν$? I am confused about which matrix is which.
Anyone to clarify?
Also, how do I know which index comes first, i.e. is it $Λ_{ν}{}^{\ μ}$ or $Λ^μ{}_{\ ν}$?
I would appreciate any references where I can check these.
Interested in the following function:$$ \Psi(s)=\sum_{n=2}^\infty \frac{1}{\pi(n)^s}, $$where $\pi(n)$ is the prime counting function.When $s=2$ the sum becomes the following:$$ \Psi(2)=\sum_{n=2}^\infty \frac{1}{\pi(n)^2}=1+\frac{1}{2^2}+\frac{1}{2^2}+\frac{1}{3^2}+\frac{1}{3^2}+\frac{1...
Consider a random binary string where each bit can be set to 1 with probability $p$.Let $Z[x,y]$ denote the number of arrangements of a binary string of length $x$ and the $x$-th bit is set to 1. Moreover, $y$ bits are set 1 including the $x$-th bit and there are no runs of $k$ consecutive zer...
The field $\overline F$ is called an algebraic closure of $F$ if $\overline F$ is algebraic over $F$ and if every polynomial $f(x)\in F[x]$ splits completely over $\overline F$.
Why in def of algebraic closure, do we need $\overline F$ is algebraic over $F$? That is, if we remove '$\overline F$ is algebraic over $F$' condition from def of algebraic closure, do we get a different result?
Consider an observer located at radius $r_o$ from a Schwarzschild black hole of radius $r_s$. The observer may be inside the event horizon ($r_o < r_s$).Suppose the observer receives a light ray from a direction which is at angle $\alpha$ with respect to the radial direction, which points outwa...
@AlessandroCodenotti That is a poor example, as the algebraic closure of the latter is just $\mathbb{C}$ again (assuming choice). But starting with $\overline{\mathbb{Q}}$ instead and comparing to $\mathbb{C}$ works.
Seems like everyone is posting character formulas for simple modules of algebraic groups in positive characteristic on arXiv these days. At least 3 papers with that theme the past 2 months.
Also, I have a definition that says that a ring is a UFD if every element can be written as a product of irreducibles which is unique up to units and reordering. It doesn't say anything about this factorization being finite in length. Is that often part of the definition or obtained from the definition (I don't see how it could be the latter)?
Well, that then becomes a chicken and the egg question. Did we have the reals first and simplify from them to more abstract concepts or did we have the abstract concepts first and build them up to the idea of the reals.
I've been told that the rational numbers from zero to one form a countable infinity, while the irrational ones form an uncountable infinity, which is in some sense "larger". But how could that be? There is always a rational between two irrationals, and always an irrational between two rationals, ...
I was watching this lecture, and in reference to above screenshot, the professor there says: $\frac1{1+x^2}$ has a singularity at $i$ and at $-i$, and power series expansions are limits of polynomials, and limits of polynomials can never give us a singularity and then keep going on the other side.
On page 149 Hatcher introduces the Mayer-Vietoris sequence, along with two maps $\Phi : H_n(A \cap B) \to H_n(A) \oplus H_n(B)$ and $\Psi : H_n(A) \oplus H_n(B) \to H_n(X)$. I've searched through the book, but I couldn't find the definitions of these two maps. Does anyone know how to define them or where their definition appears in Hatcher's book?
suppose $\sum a_n z_0^n = L$, so $a_n z_0^n \to 0$, so $|a_n z_0^n| < \dfrac12$ for sufficiently large $n$, so for $|z| < |z_0|$, $|a_n z^n| = |a_n z_0^n| \left(\left|\dfrac{z}{z_0}\right|\right)^n < \dfrac12 \left(\left|\dfrac{z}{z_0}\right|\right)^n$, so $a_n z^n$ is absolutely summable, so $a_n z^n$ is summable
Let $g : [0,\frac{1}{2}] → \mathbb R$ be a continuous function. Define $g_n : [0,\frac{1}{2}] → \mathbb R$ by $g_1 = g$ and $g_{n+1}(t) = \int_0^t g_n(s)\, ds,$ for all $n ≥ 1.$ Show that $\lim_{n→∞} n!g_n(t) = 0,$ for all $t ∈ [0,\frac{1}{2}]$.
Can you give some hint?
My attempt: for $t\in [0,1/2]$, consider the sequence $a_n(t)=n!g_n(t)$.
If $\lim_{n\to \infty} \frac{a_{n+1}}{a_n}<1$, then it converges to zero.
I have a bilinear functional that is bounded from below
I try to approximate the minimum by an ansatz function that is a linear combination
of independent functions from the proper function space
I now obtain an expression that is bilinear in the coefficients
using the stationarity condition (all derivatives of the functional w.r.t. the coefficients = 0)
I get a set of $n$ equations, with $n$ the number of coefficients
a set of $n$ linear homogeneous equations in the $n$ coefficients
Now instead of "directly attempting to solve" the equations for the coefficients I rather look at the secular determinant that should be zero, otherwise no non-trivial solution exists
This "characteristic polynomial" directly yields all permissible approximation values for the functional from my linear ansatz,
avoiding the necessity to solve for the coefficients.
I have problems now formulating the question. But it strikes me that a direct solution of the equations can be circumvented and the values of the functional are instead obtained directly by using the condition that the determinant is zero.
I wonder if there is something deeper in the background, or, so to say, a more general principle.
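(To make the point concrete, here is a small numerical sketch, not tied to any particular physical functional: a hypothetical quadratic form $E(c) = c^{T}Hc / c^{T}Sc$ over a 3-term ansatz, one common concrete instance of the setup described above. The stationary values come straight from the secular determinant $\det(H - E S) = 0$, without ever solving for the coefficients.)

import numpy as np

# Hypothetical matrices standing in for the bilinear functional and the overlaps
H = np.array([[2.0, 0.3, 0.1],
              [0.3, 3.0, 0.2],
              [0.1, 0.2, 4.0]])
S = np.eye(3)

# Stationarity of E(c) = (c^T H c)/(c^T S c) gives the homogeneous system (H - E S) c = 0.
# Non-trivial c exist only where det(H - E S) = 0, so the permissible values of the
# functional are the roots of that characteristic polynomial; the coefficients c
# themselves never have to be computed.
char_poly = np.poly(np.linalg.solve(S, H))        # coefficients of det(E I - S^{-1} H)
E_stationary = np.sort(np.roots(char_poly).real)
print(E_stationary)                               # the stationary (approximate) values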
If $x$ is a prime number and a number $y$ exists which is the digit reverse of $x$ and is also a prime number, then there must exist an integer $z$ midway between $x$ and $y$ which is a palindrome and digitsum(z) = digitsum(x).
> Bekanntlich hat P. du Bois-Reymond zuerst die Existenz einer überall stetigen Funktion erwiesen, deren Fouriersche Reihe an einer Stelle divergiert. Herr H. A. Schwarz gab dann ein einfacheres Beispiel.
(Translation: It is well-known that Paul du Bois-Reymond was the first to demonstrate the existence of a continuous function whose Fourier series diverges at a point. Afterwards, Hermann Amandus Schwarz gave a simpler example.)
It's discussed very carefully (but no formula explicitly given) in my favorite introductory book on Fourier analysis. Körner's Fourier Analysis. See pp. 67-73. Right after that is Kolmogoroff's result that you can have an $L^1$ function whose Fourier series diverges everywhere!! |
Hi, can someone provide me some self-study reading material for condensed matter theory? I've done QFT previously, for which I could happily read Peskin supplemented with David Tong. Can you please suggest some references along those lines? Thanks
@skullpatrol The second one was in my MSc and covered considerably less than my first and (I felt) didn't do it in any particularly great way, so distinctly average. The third was pretty decent - I liked the way he did things and was essentially a more mathematically detailed version of the first :)
2. A weird particle or state that is made of a superposition of a torus region with clockwise momentum and anticlockwise momentum, resulting in one that has no momentum along the major circumference of the torus but still nonzero momentum in directions that are not pointing along the torus
Same thought as you, however I think the major challenge of such a simulator is the computational cost. GR calculations, with their highly nonlinear nature, might be more costly than a computation of a protein.
However I can see some ways of approaching it. Recall how Slereah was building some kind of spacetime database; that could be the first step. Next, one might look for machine learning techniques to help with the simulation by using the classifications of spacetimes, as machines are known to perform very well on sign problems, as a recent paper has shown
Since the GR equations are ultimately a system of 10 nonlinear PDEs, it might be possible that the solution strategy has some relation to the class of spacetime under consideration, and that might help heavily reduce the parameters one needs to consider to simulate them
I just mean this: The EFE is a tensor equation relating a set of symmetric 4 × 4 tensors. Each tensor has 10 independent components. The four Bianchi identities reduce the number of independent equations from 10 to 6, leaving the metric with four gauge fixing degrees of freedom, which correspond to the freedom to choose a coordinate system.
@ooolb Even if that is really possible (I always can talk about things in a non-joking perspective), the issue is that 1) Unlike other people, I cannot incubate my dreams for a certain topic due to Mechanism 1 (conscious desires have reduced probability of appearing in dreams), and 2) For 6 years, my dream has yet to show any sign of revisiting the exact same idea, and there are no known instances of either sequel dreams or recurring dreams
@0celo7 I felt this aspect can be helped by machine learning. You can train a neural network with some PDEs of a known class with some known constraints, and let it figure out the best solution for some new PDE after say training it on 1000 different PDEs
Actually that makes me wonder, is the space of all coordinate choices larger than the space of all possible moves in Go?
enumaris: From what I understood from the dream, the warp drive shown here may be some variation of the Alcubierre metric with a global topology that has 4 holes in it, whereas the original Alcubierre drive, if I recall, doesn't have holes
orbit stabilizer: h bar is my home chat, because this is the first SE chat I joined. Maths chat is the 2nd one I joined, followed by periodic table, biosphere, factory floor and many others
Btw, since gravity is nonlinear, do we expect that a region where spacetime is frame-dragged in the clockwise direction being superimposed on a spacetime that is frame-dragged in the anticlockwise direction will result in a spacetime with no frame drag? (one possible physical scenario where I can envision this occurring may be when two massive rotating objects with opposite angular velocities are on the course of merging)
Well, I'm a beginner in the study of General Relativity, ok? My knowledge about the subject is based on books like Schutz, Hartle, Carroll and introductory papers. About quantum mechanics I still have poor knowledge.
So, what I meant by "gravitational double slit experiment" is: is there a gravitational analogue of the double slit experiment, for gravitational waves?
@JackClerk the double slits experiment is just interference of two coherent sources, where we get the two sources from a single light beam using the two slits. But gravitational waves interact so weakly with matter that it's hard to see how we could screen a gravitational wave to get two coherent GW sources.
But if we could figure out a way to do it then yes GWs would interfere just like light wave.
Thank you @Secret and @JohnRennie. But to conclude the discussion, I want to put a "silly picture" here: Imagine a huge double slit plate in space close to a strong source of gravitational waves. Then, like water waves and light, will we see the pattern?
So, if the source (like a black hole binary) is sufficiently far away, then in the regions of destructive interference space-time would have a flat geometry, and then if we put a spherical object in this region the metric will become Schwarzschild-like.
Pardon, I just spent some naive-philosophy time here with these discussions
The situation was even more dire for Calculus and I managed!
This is a neat strategy I have found-revision becomes more bearable when I have The h Bar open on the side.
In all honesty, I actually prefer exam season! At all other times-as I have observed in this semester, at least-there is nothing exciting to do. This system of tortuous panic, followed by a reward is obviously very satisfying.
My opinion is that I need you kaumudi to decrease the probability of the h bar having software system infrastructure conversations, which confuse me like hell and are why I took refuge in the maths chat a few weeks ago
(Not that I have questions to ask or anything; like I said, it is a little relieving to be with friends while I am panicked. I think it is possible to gauge how much of a social recluse I am from this, because I spend some of my free time hanging out with you lot, even though I am literally inside a hostel teeming with hundreds of my peers)
that's true. though back in high school, regardless of the language, our teacher taught us to always indent our code to allow easy reading and troubleshooting. We were also taught the 4-space indentation convention
@JohnRennie I wish I could just tab because I am also lazy, but sometimes tab inserts 4 spaces while other times it inserts 5-6 spaces, thus screwing up a block of if-then conditions in my code, which is why I had no choice
I currently automate almost everything from job submission to data extraction, and later on, with the help of the machine learning group in my uni, we might be able to automate a GUI library search thingy
I can do all tasks related to my work without leaving the text editor (of course, such text editor is emacs). The only inconvenience is that some websites don't render in a optimal way (but most of the work-related ones do)
Hi to all. Does anyone know where I could write MATLAB code online (for free)? Apparently another one of my institution's great inspirations is to have a MATLAB-oriented computational physics course without having MATLAB on the university's PCs. Thanks.
@Kaumudi.H Hacky way: 1st thing is that $\psi\left(x, y, z, t\right) = \psi\left(x, y, t\right)$, so no propagation in $z$-direction. Now, in '$1$ unit' of time, it travels $\frac{\sqrt{3}}{2}$ units in the $y$-direction and $\frac{1}{2}$ units in the $x$-direction. Use this to form a triangle and you'll get the answer with simple trig :)
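(Spelling out the trig hinted at above, under the assumption that the quantity being asked for is the direction of propagation: the displacement components per unit time are $1/2$ in $x$ and $\sqrt{3}/2$ in $y$, so
$$\tan\theta = \frac{1/2}{\sqrt{3}/2} = \frac{1}{\sqrt{3}} \;\Rightarrow\; \theta = 30^{\circ}$$
measured from the $y$-axis (equivalently $60^{\circ}$ from the $x$-axis), with speed $\sqrt{(1/2)^2+(\sqrt{3}/2)^2}=1$ unit per unit time.)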
@Kaumudi.H Ah, it was okayish. It was mostly memory based. Each small question was of 10-15 marks. No idea what they expect me to write for questions like "Describe acoustic and optic phonons" for 15 marks!! I only wrote two small paragraphs...meh. I don't like this subject much :P (physical electronics). Hope to do better in the upcoming tests so that there isn't a huge effect on the gpa.
@Blue Ok, thanks. I found a way by connecting to the servers of the university (the program isn't installed on the PCs in the computer room, but if I connect to the server of the university - which means remotely running another environment - I found an older version of MATLAB). But thanks again.
@user685252 No; I am saying that it has no bearing on how good you actually are at the subject - it has no bearing on how good you are at applying knowledge; it doesn't test problem solving skills; it doesn't take into account that, if I'm sitting in the office having forgotten the difference between different types of matrix decomposition or something, I can just search the internet (or a textbook), so it doesn't say how good someone is at research in that subject;
it doesn't test how good you are at deriving anything - someone can write down a definition without any understanding, while someone who can derive it, but has forgotten it probably won't have time in an exam situation. In short, testing memory is not the same as testing understanding
If you really want to test someone's understanding, give them a few problems in that area that they've never seen before and give them a reasonable amount of time to do it, with access to textbooks etc. |
This is a naive question, out of my expertise; apologies in advance.
Goldbach's Conjecture and many other unsolved questions in mathematics can be written as short formulas in predicate calculus. For example, Cook's paper "Can Computers Routinely Discover Mathematical Proofs?" formulates that conjecture as
$$\forall n [( n > 2 \wedge 2 | n) \supset \exists r \exists s (P(r) \wedge P(s) \wedge n = r + s) ]$$
If we restrict attention to polynomially-long proofs, then theorems with such proofs are in NP. So if P=NP, we could determine whether e.g. Goldbach's Conjecture is true in polynomial time.
My question is: Would we also be able to exhibit a proof in polynomial time?
Edit. As per the comments of Peter Shor and Kaveh, I should have qualified my claim that we could determine if Goldbach's Conjecture is true if it indeed is one of the theorems with a short proof. Which of course we do not know!
So far in my education career I have only met differential equations as small parts of courses on other stuff. Solving special cases as part of calculus, solving simple systems as a part of linear algebra. This coming semester I'm going to have two courses devoted entirely to differential equations, so I thought I would try to gain some understanding that isn't purely mechanical.
Here's an example I have some questions about.
This is example 10.2.1 from 'KALKULUS' (3rd edition) by Lindstrøm. The translation is mine. The part where the differential equation is solved has been removed.
An animal population consists today of $P$ animals and has a growth rate $r$. How big is the population in $t$ years?
Let $y(t)$ be the population size after $t$ years. In the time between $t$ and $t+\Delta t$ the population increases from $y(t)$ to $y(t+\Delta t)$, i.e. an increase of $y(t+\Delta t)-y(t)$. We can also derive this increase in another way: The growth rate is $r$, which means that the population increase per time unit is $ry(t)$. During a small time interval from $t$ to $t+\Delta t$ the population increase is approximately $ry(t)\Delta t$, an approximation that gets better with smaller $\Delta t$'s. If we equate these expressions, we get
$$y(t+\Delta t)-y(t)\approx ry(t)\Delta t$$
Dividing by $\Delta t$, we get
$$\frac{y(t+\Delta t)-y(t)}{\Delta t}\approx ry(t)$$
Letting $\Delta t$ go towards zero, this gives
$$y'(t)=ry(t)$$
Thus we have a differential equation that $y$ has to satisfy:
$$y'(t)-ry(t)=0$$
NOTE: We could have gotten this differential equation faster by using the fact that the growth rate $r$ by definition means that $y'(t)=ry(t).$ We have chosen the more elaborate approach because it shows a general thought process which can be used in several situations.
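(A small numerical illustration of the approximation step above, not part of the textbook example; the growth rate $r=0.05$ and initial population $y(0)=1000$ are hypothetical, and the exact exponential solution is used for comparison.)

import numpy as np

r, y0 = 0.05, 1000.0
y = lambda t: y0 * np.exp(r * t)     # exact solution of y'(t) = r y(t)

t = 2.0
for dt in (1.0, 0.1, 0.01):
    exact = y(t + dt) - y(t)         # actual increase over [t, t + dt]
    approx = r * y(t) * dt           # the approximation r y(t) dt used in the derivation
    print(dt, round(exact, 4), round(approx, 4))
# The two numbers agree better and better as dt shrinks, which is the sense in which
# y(t + dt) - y(t) is approximately r y(t) dt, "an approximation that gets better
# with smaller dt's".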
Why is the population increase approximately $ry(t)Δt$ during a small time interval from $t$ to $t+Δt$?
How is this thought process different than using the definition? In the real world it has been observed that the growth of a population tend to depend on the size of the population, so you want the change in population to depend on its size. In the language of calculus one way to write this out is $y'(t)=ry(t)$. What's the advantage of going through the more elaborate reasoning? |
The procedure below assumes that the original distribution $X$ (the "signal") is non-Gaussian, and $Y$ is Gaussian (normally distributed noise.)
General procedure
The procedure is as follows:
Find a function $F$ that applied to a collection of real numbers produces one value (say, 0) for normally distributed data and other different values for non-normally distributed data.
Pick a sample $\{s_i\}_{i=1}^{n}$ of the noised data with a relatively small number of $n$ points (say $n \in [20,40]$).
Formulate an optimization problem with $n$ variables $\{v_i\}_{i=1}^{n}$ that maximizes $ \lvert F(s-v) \rvert$ subject to constraints that would hold if $\{v_i\}_{i=1}^{n}$ come from a normal distribution (with known parameters).
Solve the optimization problem several times with different samples and accumulate the sets $\{s_i - v_i\}_{i=1}^{n}$ and $\{v_i\}_{i=1}^{n}$. Monitor the $\chi^2$ test over $\{v_i\}_{i=1}^{n}$.
Reconstruct the original distribution (the "signal") CDF and PDF using quantiles of the union of $\{s_i - v_i\}_{i=1}^{n}$ from all optimization runs.
Step details
Non-Gaussianity measure
First, we are going to adopt excess kurtosis as a measure for non-Gaussianity. For this we are going to rewrite excess kurtosis as
ExKurtosis[inp_] := CentralMoment[inp, 4] - 3 CentralMoment[inp, 2]^2
For points coming from the Normal Distribution, ExKurtosis is close to 0:
In[1139]:= ExKurtosis[NormalDistribution[a, b]]
Out[1139]= 0
So, excess kurtosis close to 0 means Gaussianity, excess kurtosis significantly larger than 0 means non-Gaussianity.
Other non-Gaussianity measures exist with better properties (theoretical justification, robustness, speed of computation). See the article "Independent Component Analysis: A Tutorial".
Constraints for normally distributed noise
We should come up with constraints which would hold if the values given to the variables are normally distributed. Since we know the mean and the standard deviation of the noise we can write up several such constraints based on the properties of Normal Distribution. (Mean, StandardDeviation, Kurtosis, etc.)
Constraints from "signal" knowlege
We can add constraints coming up from our knowledge of the distribution that is noised.
From the examples in the question we can add the constraints $\{s_i - v_i > 0\}_{i=1}^{n}$.
Code
Data generation
The data is generated as given in the question.
SeedRandom[1256]
fSurvivalGompertzDistRand[α_, β_] :=
 ProbabilityDistribution[(1/((E^(α/β) Gamma[0, α/β])/β) E^(((1 - E^(t β)) α)/β)), {t, 0, ∞}]
data = RandomVariate[fSurvivalGompertzDistRand[0.016, 0.65], {20000}];
σ = 2.5;
dataNoise = data + RandomVariate[NormalDistribution[0, σ], {20000}];
(Re-)start the process
The results of the maximization step are gathered in the lists signalVals and noiseVals.
SeedRandom[5456]
signalVals = {};
noiseVals = {};
Maximization
Select a sample with "good enough" kurtosis. This is not necessary, simple random sampling would do, but it might help getting better results faster.
hk = 1000;
While[! (10 < Abs[hk] < 40),
dnSample = RandomSample[dataNoise, 40];
hk = ExKurtosis[dnSample];
vars = Array[x, Length[dnSample]];
]
hk
Solve the maximization problem:
AbsoluteTiming[
sol = Maximize[
Join[
{Abs[ExKurtosis[dnSample - vars]], Abs[ExKurtosis[vars]] < 0.1,
Abs[Mean[vars]] < 0.05,
Abs[σ - StandardDeviation[vars]] < 0.1,
Mean[Map[If[Abs[#] < σ, 1, 0] &, vars]] > 0.66 },
Map[Abs[#] <= 3.1 σ &, vars],
Map[# > 0 &, dnSample - vars]
], vars]
]
(* {79.7731, {0.931781, {x[1] -> 0.488303, x[2] -> 0.0693204,
x[3] -> -1.58657, x[4] -> -2.73186, x[5] -> -0.792337,
x[6] -> -0.301162, x[7] -> 0.0463628, x[8] -> 0.24009,
x[9] -> 2.15609, x[10] -> 0.844921, x[11] -> 0.877771,
x[12] -> -0.988591, x[13] -> 0.814648, x[14] -> -1.98969,
x[15] -> -0.0298853, x[16] -> -0.189145, x[17] -> 0.850365,
x[18] -> 0.521628, x[19] -> -1.80022, x[20] -> 0.607911,
x[21] -> 0.0872866, x[22] -> 0.68063, x[23] -> -0.0647998,
x[24] -> -2.32211, x[25] -> -2.8472, x[26] -> 1.95862,
x[27] -> 1.04585, x[28] -> -1.0081, x[29] -> 1.04367,
x[30] -> -0.140025, x[31] -> 1.44755, x[32] -> -0.540915,
x[33] -> 0.46877, x[34] -> 2.14427, x[35] -> 0.437988,
x[36] -> 0.99062, x[37] -> 0.462472, x[38] -> -0.11133,
x[39] -> 0.260179, x[40] -> 1.55722}}} *)
While doing the experiments I stopped the maximization process if I thought it took too much time (more than ~3 minutes).
Accumulate the results
signalVals = Append[signalVals, dnSample - vars /. sol[[2]]];
noiseVals = Append[noiseVals, vars /. sol[[2]]];
opts = {ImageSize -> Medium, PlotRange -> All};
Grid[{{Histogram[Flatten[signalVals], 20, "Probability", opts,
PlotLabel -> "Signal"],
Histogram[Flatten[noiseVals], 20, "Probability", opts,
PlotLabel -> "Noise"]}}]
Reconstruct CDF and PDF
qs = Range[0, 1, 0.1];
xs = Quantile[Flatten[signalVals], qs]
qCDF = Interpolation[Transpose[{xs, qs}], InterpolationOrder -> 1];
Plot[{qCDF[t],
Evaluate@CDF[fSurvivalGompertzDistRand[0.016, 0.65], t]}, {t,
Min[xs], Max[xs]}, PlotTheme -> "Detailed",
PerformanceGoal -> "Speed"]
Plot[{qCDF'[t],
Evaluate@PDF[fSurvivalGompertzDistRand[0.016, 0.65], t]}, {t,
Min[xs], Max[xs]}, PlotTheme -> "Detailed",
PerformanceGoal -> "Speed"]
Monitoring the process
It is helpful to look at goodness of fit measures in order to evaluate the procedure's results.
PearsonChiSquareTest[Flatten[signalVals],
fSurvivalGompertzDistRand[0.016, 0.65]]
(* Out[1087]= 0.061774 *)
PearsonChiSquareTest[Flatten[noiseVals],
NormalDistribution[0, σ]]
(* Out[1089]= 0.18782 *)
PearsonChiSquareTest[#,
fSurvivalGompertzDistRand[0.016, 0.65]] & /@ signalVals
(* Out[1090]= {0.301886, 0.238065, 0.142501, 0.80441} *)
PearsonChiSquareTest[#, NormalDistribution[0, σ]] & /@ noiseVals
(* Out[1091]= {0.46331, 0.608089, 0.970406, 0.338096} *)
Experimental results
Noise with $\sigma = 2.5$
Using noise as provided in the question and making 4 maximization runs, these are the histograms of the obtained distributions:
Here are the reconstructed CDF and PDF:
Noise with $\sigma = 1$
It seems that better results are obtained with smaller standard deviation of the noise. (As expected.) Again using 4 maximization runs. We can see that the CDF is much better approximated.
These are the histograms of the obtained distributions:
These are the reconstructed CDF and PDF: |
Bayes' Rule for Ducks
Sunday February 23, 2014
You look at a thing.
Is it a duck?
Re-phrase: What is the probability that it's a duck, if it looks like that?
Bayes' rule says that the probability of it being a duck, if it looks like that, is the same as the probability of any old thing being a duck, times the probability of a duck looking like that, divided by the probability of a thing looking like that.
\[ Pr(duck | looks) = \frac{Pr(duck) \cdot Pr(looks | duck)}{Pr(looks)} \]
This makes sense:
If ducks are mythical beasts, then \( Pr(duck) \) (our "prior" on ducks) is very low, and the thing would have to be very duck-like before we'd believe it's a duck. On the other hand, if we're at some sort of duck farm, then \( Pr(duck) \) is high and anything that looks even a little like a duck is probably a duck.
If it's very likely that a duck would look like that (\( Pr(looks|duck) \) is high) then we're more likely to think it's a duck. This is the "likelihood" of a duck looking like that thing. In practice it's based on how the ducks we've seen before have looked.
The denominator \( Pr(looks) \) normalizes things. After all, we're in some sense portioning out the probabilities of this thing being whatever it could be. If 1% of things look like this, and 1% of things look like this and are ducks, then 100% of things that look like this are ducks. So \( Pr(looks) \) is what we're working with; it's the denominator.
Here's an example of a strange world to test this in:
There are ten things. Six of them are ducks. Five of them look like ducks. Four of them both look like ducks and are ducks. One thing looks like a duck but is not a duck. Maybe it's a fake duck? Two ducks do not look like ducks. Ducks in camouflage. Test the equality of the two sides of Bayes' rule:
\[ Pr(duck | looks) = \frac{Pr(duck) \cdot Pr(looks | duck)}{Pr(looks)} \]
\[ \frac{4}{5} = \frac{\frac{6}{10} \cdot \frac{4}{6}}{\frac{5}{10}} \]
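(For readers who like to see it mechanically: a tiny Python sketch, not part of the original post, that enumerates the ten things above and checks both sides of the rule with exact fractions.)

from fractions import Fraction

# the ten things: (is_duck, looks_like_duck)
things = [(True, True)] * 4 + [(False, True)] * 1 + [(True, False)] * 2 + [(False, False)] * 3

def pr(event):
    return Fraction(sum(1 for t in things if event(t)), len(things))

pr_duck  = pr(lambda t: t[0])              # 6/10
pr_looks = pr(lambda t: t[1])              # 5/10
pr_both  = pr(lambda t: t[0] and t[1])     # 4/10

lhs = pr_both / pr_looks                           # Pr(duck | looks)
rhs = pr_duck * (pr_both / pr_duck) / pr_looks     # Bayes' rule
assert lhs == rhs == Fraction(4, 5)
print(lhs)                                         # 4/5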
It's true here, and it's not hard to show that it must be true, using two ways of expressing the probability of being a duck and looking like a duck. We have both of these:
\[ Pr(duck \cap looks) = Pr(duck|looks) \cdot Pr(looks) \]
\[ \displaystyle Pr(duck \cap looks) = Pr(looks|duck) \cdot Pr(duck) \]
Check those with the example as well, if you like. Using the equality, we get:
\[ Pr(duck|looks) \cdot Pr(looks) = Pr(looks|duck) \cdot Pr(duck) \]
Then dividing by \( Pr(looks) \) we have Bayes' rule, as above.
\[ Pr(duck | looks) = \frac{Pr(duck) \cdot Pr(looks | duck)}{Pr(looks)} \]
This is not a difficult proof at all, but for many people the result feels very unintuitive. I've tried to explain it once before in the context of statistical claims. Of course there's a wikipedia page and many other resources. I wanted to try to do it with a unifying simple example that makes the equations easy to parse, and this is what I've come up with.
This post was originally hosted elsewhere. |
Inertia
In power systems engineering, "inertia" is a concept that typically refers to rotational inertia or rotational kinetic energy. For synchronous systems that run at some nominal frequency (i.e. 50Hz or 60Hz), inertia is the energy that is stored in the rotating masses of equipment electro-mechanically coupled to the system, e.g. generator rotors, fly wheels, turbine shafts.
Derivation
Below is a basic derivation of power system rotational inertia from first principles, starting from the basics of circle geometry and ending at the definition of moment of inertia (and its relationship to kinetic energy).
The length of a circle arc is given by:
[math] L = \theta r [/math]
where [math]L[/math] is the length of the arc (m)
[math]\theta[/math] is the angle of the arc (radians)
[math]r[/math] is the radius of the circle (m)
A circular body rotating about the axis of its centre of mass therefore has a rotational velocity of:
[math] v = \frac{\theta r}{t} [/math]
where [math]v[/math] is the rotational velocity (m/s)
[math]t[/math] is the time it takes for the mass to rotate L metres (s)
Alternatively, rotational velocity can be expressed as:
[math] v = \omega r [/math]
where [math]\omega = \frac{\theta}{t} = \frac{2 \pi \times n}{60}[/math] is the angular velocity (rad/s)
[math]n[/math] is the speed in revolutions per minute (rpm)
The kinetic energy of a circular rotating mass can be derived from the classical Newtonian expression for the kinetic energy of rigid bodies:
[math] KE = \frac{1}{2} mv^{2} = \frac{1}{2} m(\omega r)^{2}[/math]
where [math]KE[/math] is the rotational kinetic energy (Joules or kg.m²/s² or MW.s, all of which are equivalent)
[math]m[/math] is the mass of the rotating body (kg)
Alternatively, rotational kinetic energy can be expressed as:
[math] KE = \frac{1}{2} I\omega^{2} [/math]
where [math]I = mr^{2}[/math] is called the moment of inertia (kg.m²) |
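(A worked number for the kinetic energy formula above, not from the original article, using hypothetical round figures: a rotor with moment of inertia I = 30,000 kg.m² spinning at n = 3000 rpm.)

import math

I = 30_000.0                 # moment of inertia (kg.m^2), assumed value
n = 3000.0                   # speed (rpm), assumed value

omega = 2 * math.pi * n / 60         # angular velocity (rad/s), about 314.16
KE = 0.5 * I * omega ** 2            # rotational kinetic energy (J)
print(omega, KE, KE / 1e6)           # ~314.16 rad/s, ~1.48e9 J, i.e. ~1480 MW.s stored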
I think the OP's proof is correct assuming both functions are entire. However, even if $F(x)$ is entire, the fractional iterate is in general not entire. The OP's result does not hold if the fractional iterate is not entire.
$$F^{o\frac{1}{n}}(x)\;\;\;\;h(x)=F^{o \frac{1}{2}}(x)\;\;\;\;h(h(x))=F(x)$$
If the half iterate is not entire then there are points for which $F(x)=h(h(x))$ is only by analytic continuation, and not by direct computation. So definition (1), $\forall x\;\;h(h(x))=F(x)$ does not hold $\forall x$, since the half iterate has multiple values depending on the path, unless the half iterate is also entire, which pretty much rules out all non-trivial half iterates!
I wanted to generate a counter example that was as simple as possible. Let's start with a function that has three fixed points, $(0,\pm 1)$, and we will develop the half iterate at the fixed point at the origin. I also wanted to avoid the parabolic case (see Will Jagy's half iterate of sin(x) post), which occurs when the first derivative at the fixed point equals 1, so I chose a positive first derivative of 4 at the fixed point at the origin. Avoiding the parabolic case gives guaranteed convergence, so the formal half iterate would have guaranteed convergence near the origin and would have a first derivative of 2.
So here is my counter example, for $F(x)=4x-3x^3$, for which the half iterate is $h(x)$, and $F(x)$ has fixed points at $(0,\pm 1)$. For all points within some radius of convergence, $h(h(x))=F(x)$, but eventually we get to the singularity of $h(x)$, which gives it a finite radius of convergence. But weird stuff happens when the radius of convergence of $h(x)$ is larger than $h(1)$, where 1 is one of the other fixed points. So, below, I post the formal half iterate of $F(x)$, which has a radius of convergence of $\frac{16}{9}\approx 1.78$. Here is my counter example, where all three fixed points of $F(x)$ have $h(x)$ within its analytic radius of convergence, and even $h(1), h(-1)$ are within the radius of convergence, so the half iterate seems to be completely unambiguous at these points. And yet this leads to a clear counter example.
$$F(1)=1$$
$$h(1)=1.66125776701924932137$$
$$h(1.66125776701924932137)=1$$
$$F(h(1))=F(1.66125776701924932137)=-7.10907369782592055937 \neq h(1)$$
However, even though the radius of convergence of the Taylor series of $h(x)$ exceeds $h(1)$, when you look at the number 1.2399067, $h(1.2399067)=16/9$, so $h(h(x))$ has a smaller radius of convergence, 1.2399067. Of course, this smaller radius of convergence has a singularity that cancels by analytic continuation since $F(x)$ is entire. And this is what allows weird stuff to happen... where in the complex plane, $h(x)$ at the fixed point of $F(x)$ has multiple values depending on the path, even though $F(x)$ is entire and is always well defined independent of the path. So $h(x)$ can only be fully defined by path-dependent analytic continuation in the complex plane.
{h(x)=
+x *2
-x^ 3*3/10
-x^ 5*27/850
-x^ 7*243/44200
-x^ 9*4391901/3862196000
-x^11*4097709/15835003600
-x^13*263696194479501/4216940633698000000
-x^15*4352793841907459397/276378289132566920000000
-x^17*0.00000408866926284292783744
-x^19*0.00000108632410179569368855
-x^21*0.000000293954426198467149790
-x^23*0.0000000807297320769806555906
-x^25*0.0000000224441951265113300999
-x^27*0.00000000630439537551479828510
-x^29*0.00000000178645419922952969101
-x^31*0.000000000510065370119009481553
-x^33*1.46596264762260914617 E-10
-x^35*4.23776939074721938452 E-11
-x^37*1.23135265881044955235 E-11
-x^39*3.59433071863758569107 E-12 ....
}
Besides the formal Taylor series, one may generate the half iterate by using the identity: $$h(x) = F \circ h \circ F^{-1} = \lim_{n \to \infty} F^{\circ n} \circ h \circ F^{\circ -n} = \lim_{n \to \infty} F^{\circ n}(2\cdot F^{\circ -n}(x)) $$
The $h = F \circ h \circ F^{-1}$ equation also shows that the radius of convergence of the half iterate is tied to the radius of convergence of $F^{-1}(x)$, which is $\frac{16}{9}$. This may be calculated from where $\frac{d}{dx}F(x)=0$, which is at $x=\pm \frac{2}{3}$, where $F(\pm \frac{2}{3})=\pm\frac{16}{9}$. Here is a graph of $h(x)$ on the real axis, from $-16/9$ to $+16/9$, i.e. out to the radius of convergence, where the derivative of $h(x)$ goes to infinity. The singularity is cancelled out when iterating $h(h(x))$, since the derivative of $h(x)$ is zero at the point where $h(x)=16/9$.
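As a quick numerical sanity check of this limit (my own sketch with made-up helper names, not code from the original computation), one can evaluate $F^{\circ n}(2\cdot F^{\circ -n}(x))$ directly, taking the inverse on the branch near the origin with Newton's method; for moderate $n$ it should reproduce the values $h(1)\approx 1.661257767$ and $F(h(1))\approx -7.109$ quoted above.

def F(x):
    return 4.0 * x - 3.0 * x ** 3

def F_inv(y):
    # branch of F^{-1} that stays near the origin, where F'(x) = 4 - 9x^2 > 0
    x = y / 4.0
    for _ in range(60):
        x -= (F(x) - y) / (4.0 - 9.0 * x ** 2)
    return x

def h_limit(x, n=25):
    # h(x) = lim_n F^n( 2 * F^{-n}(x) ); the factor 2 is h'(0)
    for _ in range(n):
        x = F_inv(x)
    x *= 2.0
    for _ in range(n):
        x = F(x)
    return x

print(h_limit(1.0))      # approximately 1.66125776701925
print(F(h_limit(1.0)))   # approximately -7.10907369782592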
One more image showing $h(x), h^{\circ 2}(x)=F(x), h^{\circ 3}(x), h^{\circ 4}(x)$ from 0 to 1. Odd iterations are in purple, and even iterations are in red. Notice that $h^{\circ 3}(1)=-7.10907$ as expected, as opposed to $h(1)=1.6612577$. We see that $h(1)$ has multiple values, depending on how many times we have iterated $h(x)$. But $F(x)$ is entire, so no matter how many times we iterate, $F(1)=1$. Also notice that this still contradicts the OP's proof, since for analytic functions the half iterate is multiple valued (apparently infinitely valued in this case), depending on the path in the complex plane. |
Suppose we have summary estimates (e.g., estimated average effects) obtained from two independent meta-analyses or two subgroups of studies within the same meta-analysis and we want to test whether the estimates are different from each other. A Wald-type test can be used for this purpose. Alternatively, one could run a single meta-regression model including all studies and using a dichotomous moderator to distinguish the two sets. Both approaches are conceptually very similar with a subtle difference that will be illustrated below with an example.
We will use the 'famous' BCG vaccine meta-analysis for this illustration. First, we compute the log risk ratios (and corresponding sampling variances) for each study and then dichotomize the
alloc variable.
library(metafor)
dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)
dat$alloc <- ifelse(dat$alloc == "random", "random", "other")
dat

   trial               author year tpos  tneg cpos  cneg ablat  alloc      yi     vi
1      1              Aronson 1948    4   119   11   128    44 random -0.8893 0.3256
2      2     Ferguson & Simes 1949    6   300   29   274    55 random -1.5854 0.1946
3      3      Rosenthal et al 1960    3   228   11   209    42 random -1.3481 0.4154
4      4    Hart & Sutherland 1977   62 13536  248 12619    52 random -1.4416 0.0200
5      5 Frimodt-Moller et al 1973   33  5036   47  5761    13  other -0.2175 0.0512
6      6      Stein & Aronson 1953  180  1361  372  1079    44  other -0.7861 0.0069
7      7     Vandiviere et al 1973    8  2537   10   619    19 random -1.6209 0.2230
8      8           TPT Madras 1980  505 87886  499 87892    13 random  0.0120 0.0040
9      9     Coetzee & Berjak 1968   29  7470   45  7232    27 random -0.4694 0.0564
10    10      Rosenthal et al 1961   17  1699   65  1600    42  other -1.3713 0.0730
11    11       Comstock et al 1974  186 50448  141 27197    18  other -0.3394 0.0124
12    12   Comstock & Webster 1969    5  2493    3  2338    33  other  0.4459 0.5325
13    13       Comstock et al 1976   27 16886   29 17825    33  other -0.0173 0.0714
First, we fit two separate random-effects models within each subset defined by the
alloc variable:
res1 <- rma(yi, vi, data=dat, subset=alloc=="random")
res2 <- rma(yi, vi, data=dat, subset=alloc=="other")
We then combine the estimates and standard errors from each model into a data frame. We also add a variable to distinguish the two models and, for reasons to be explained in more detail below, we add the estimated amounts of heterogeneity within each subset to the data frame.
dat.comp <- data.frame(estimate = c(coef(res1), coef(res2)),
                       stderror = c(res1$se, res2$se),
                       meta     = c("random","other"),
                       tau2     = round(c(res1$tau2, res2$tau2),3))
dat.comp

    estimate  stderror   meta  tau2
1 -0.9709645 0.2759557 random 0.393
2 -0.4812706 0.2169886  other 0.212
We can now compare the two estimates (i.e., the estimated average log risk ratios) by feeding them back to the
rma() function and using the variable to distinguish the two estimates as a moderator. We use a fixed-effects model, because the (residual) heterogeneity within each subset has already been accounted for by fitting random-effects models above.
rma(estimate, sei=stderror, mods = ~ meta, method="FE", data=dat.comp, digits=3)

Fixed-Effects with Moderators Model (k = 2)

Test for Residual Heterogeneity:
QE(df = 0) = 0.000, p-val = 1.000

Test of Moderators (coefficient(s) 2):
QM(df = 1) = 1.946, p-val = 0.163

Model Results:

            estimate     se    zval   pval   ci.lb   ci.ub
intrcpt       -0.481  0.217  -2.218  0.027  -0.907  -0.056  *
metarandom    -0.490  0.351  -1.395  0.163  -1.178   0.198

---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
While we find that studies using random assignment obtain larger (more negative) effects than studies not using random assignment ($b_1 = -0.490$, $SE = 0.351$), the difference between the two estimates is not significant ($z = -1.395$, $p = .163$).
The test of the difference between the two estimates is really just a Wald-type test, given by the equation $$z = \frac{\hat{\mu}_1 - \hat{\mu}_2}{\sqrt{SE[\hat{\mu}_1]^2 + SE[\hat{\mu}_2]^2}},$$ where $\hat{\mu}_1$ and $\hat{\mu}_2$ are the two estimates and $SE[\hat{\mu}_1]$ and $SE[\hat{\mu}_2]$ the corresponding standard errors. The test statistics can therefore also be computed with:
with(dat.comp, round(c(zval = (estimate[1] - estimate[2]) / sqrt(stderror[1]^2 + stderror[2]^2)), 3))

  zval
-1.395
This is the same value that we obtained above.
Now let's take a different approach, fitting a meta-regression model with
alloc as a categorical moderator based on all studies:
rma(yi, vi, mods = ~ alloc, data=dat, digits=3)

Mixed-Effects Model (k = 13; tau^2 estimator: REML)

tau^2 (estimated amount of residual heterogeneity):     0.318 (SE = 0.178)
tau (square root of estimated tau^2 value):             0.564
I^2 (residual heterogeneity / unaccounted variability): 89.92%
H^2 (unaccounted variability / sampling variability):   9.92
R^2 (amount of heterogeneity accounted for):            0.00%

Test for Residual Heterogeneity:
QE(df = 11) = 138.511, p-val < .001

Test of Moderators (coefficient(s) 2):
QM(df = 1) = 1.833, p-val = 0.176

Model Results:

             estimate     se    zval   pval   ci.lb  ci.ub
intrcpt        -0.467  0.257  -1.816  0.069  -0.972  0.037  .
allocrandom    -0.490  0.362  -1.354  0.176  -1.199  0.219

---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
The result is very similar to what we saw earlier: The coefficient for the
alloc dummy is equal to $b_1 = -0.490$ ($SE = 0.362$) and not significant ($p = .176$).
However, the results are not exactly identical. The reason for this is as follows. When we fit separate random-effects models in the two subsets, we are allowing the amount of heterogeneity within each set to be different (as shown earlier, the estimates were $\hat{\tau}^2 = 0.393$ and $\hat{\tau}^2 = 0.212$ for studies using and not using random assignment, respectively). On the other hand, the mixed-effects meta-regression model fitted above has a single variance component for the amount of residual heterogeneity, which implies that the amount of heterogeneity
within each subset is assumed to be the same ($\hat{\tau}^2 = 0.318$ in this example).
Using the
rma.mv() function, we can easily fit a meta-regression model using all studies where we allow the amount of residual heterogeneity to be different in each subset:
rma.mv(yi, vi, mods = ~ alloc, random = ~ alloc | trial, struct="DIAG", data=dat, digits=3)

Multivariate Meta-Analysis Model (k = 13; method: REML)

Variance Components:

outer factor: trial (nlvls = 13)
inner factor: alloc (nlvls = 2)

         estim   sqrt  k.lvl  fixed   level
tau^2.1  0.212  0.460      6     no   other
tau^2.2  0.393  0.627      7     no  random

Test for Residual Heterogeneity:
QE(df = 11) = 138.511, p-val < .001

Test of Moderators (coefficient(s) 2):
QM(df = 1) = 1.946, p-val = 0.163

Model Results:

             estimate     se    zval   pval   ci.lb   ci.ub
intrcpt        -0.481  0.217  -2.218  0.027  -0.907  -0.056  *
allocrandom    -0.490  0.351  -1.395  0.163  -1.178   0.198

---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Note that the two estimates of $\tau^2$ are now identical to the ones we obtained earlier from the separate random-effects models. Also, the coefficient, standard error, and p-value for the moderator now match the results obtained earlier.
A discussion/comparison of these two approaches (i.e., assuming a single $\tau^2$ value or allowing $\tau^2$ to differ across subsets) can be found in the following article:
Rubio-Aparicio, M., López-López, J. A., Viechtbauer, W., Marín-Martínez, F., Botella, J., & Sánchez-Meca, J. (in press). A comparison of hypothesis tests for categorical moderators in meta-analysis using mixed-effects models.
Journal of Experimental Education. [Link]
We can also do a likelihood ratio test (LRT) to examine whether there are significant differences in the $\tau^2$ values across subsets. This can be done with:
res1 <- rma.mv(yi, vi, mods = ~ alloc, random = ~ alloc | trial, struct="DIAG", data=dat)
res0 <- rma.mv(yi, vi, mods = ~ alloc, random = ~ alloc | trial, struct="ID", data=dat)
anova(res1, res0)

        df     AIC     BIC    AICc   logLik    LRT   pval        QE
Full     4 29.2959 30.8875 35.9626 -10.6480                 138.5113
Reduced  3 27.5948 28.7885 31.0234 -10.7974 0.2989 0.5845   138.5113
So in this example, we would not reject the null hypothesis $H_0: \tau^2_1 = \tau^2_2$ ($p = .58$). |
We consider the double semion model proposed in Levin and Wen's paper
In their paper, the double semion model is defined on a honeycomb lattice.
Now I am trying to study the same model on a square lattice.
Question 1: Is the following Hamiltonian correct?
$$H=-\sum_{\textrm{vertex}} \prod_{k \in \textrm{vertex}}\sigma_{k}^{z} + \sum_{\textrm{plaquette}} \left[ \prod_{j \in \textrm{legs}} i^{(1-\sigma_{j}^{z})/2} \right] \prod_{k \in \textrm{plaquette}} \sigma_{k}^{x}.$$ In the figure there are in total 8 green legs around each plaquette.
As shown in Levin and Wen's paper, the ground state of the double semion model is the equal-weight superposition of all closed-loop configurations, and each loop contributes a minus sign. Given a loop configuration, the wave function component is given by $(-1)^{\textrm{number of loops}}$. If we have an even (odd) number of loops, the wave function component of this configuration is $+1$ ($-1$). On the honeycomb lattice everything looks fine. But I am confused about the state on the square lattice when the strings are crossing.
Question 2: For the following two configurations, should we regard them as one loop or two loops? Do they have the same amplitude in the ground state wave function?
Here we consider a $3 \times 3$ torus, i.e., we have periodic boundary conditions on both directions. The red line denotes the string, i.e., the spin is $\left| \downarrow \right\rangle$ on each red link.
This is configuration I.
This is configuration II. |
Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV
(Springer, 2015-01-10)
The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ...
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV
(Springer Berlin Heidelberg, 2015-04-09)
The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV
(Springer, 2015-05-27)
The measurement of primary π±, K±, p and p̄ production at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV performed with a large ion collider experiment at the large hadron collider (LHC) is reported. ...
Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(American Physical Society, 2015-03)
We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV
(American Physical Society, 2015-06)
The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ...
Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2015-11)
The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ...
K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2015-02)
The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ... |
Interested in the following function:$$ \Psi(s)=\sum_{n=2}^\infty \frac{1}{\pi(n)^s}, $$where $\pi(n)$ is the prime counting function.When $s=2$ the sum becomes the following:$$ \Psi(2)=\sum_{n=2}^\infty \frac{1}{\pi(n)^2}=1+\frac{1}{2^2}+\frac{1}{2^2}+\frac{1}{3^2}+\frac{1}{3^2}+\frac{1...
Consider a random binary string where each bit can be set to 1 with probability $p$.Let $Z[x,y]$ denote the number of arrangements of a binary string of length $x$ and the $x$-th bit is set to 1. Moreover, $y$ bits are set 1 including the $x$-th bit and there are no runs of $k$ consecutive zer...
The field $\overline F$ is called an algebraic closure of $F$ if $\overline F$ is algebraic over $F$ and if every polynomial $f(x)\in F[x]$ splits completely over $\overline F$.
Why in def of algebraic closure, do we need $\overline F$ is algebraic over $F$? That is, if we remove '$\overline F$ is algebraic over $F$' condition from def of algebraic closure, do we get a different result?
Consider an observer located at radius $r_o$ from a Schwarzschild black hole of radius $r_s$. The observer may be inside the event horizon ($r_o < r_s$).Suppose the observer receives a light ray from a direction which is at angle $\alpha$ with respect to the radial direction, which points outwa...
@AlessandroCodenotti That is a poor example, as the algebraic closure of the latter is just $\mathbb{C}$ again (assuming choice). But starting with $\overline{\mathbb{Q}}$ instead and comparing to $\mathbb{C}$ works.
Seems like everyone is posting character formulas for simple modules of algebraic groups in positive characteristic on arXiv these days. At least 3 papers with that theme the past 2 months.
Also, I have a definition that says that a ring is a UFD if every element can be written as a product of irreducibles which is unique up to units and reordering. It doesn't say anything about this factorization being finite in length. Is that often part of the definition or attained from the definition (I don't see how it could be the latter).
Well, that then becomes a chicken and the egg question. Did we have the reals first and simplify from them to more abstract concepts or did we have the abstract concepts first and build them up to the idea of the reals.
I've been told that the rational numbers from zero to one form a countable infinity, while the irrational ones form an uncountable infinity, which is in some sense "larger". But how could that be? There is always a rational between two irrationals, and always an irrational between two rationals, ...
I was watching this lecture, and in reference to above screenshot, the professor there says: $\frac1{1+x^2}$ has a singularity at $i$ and at $-i$, and power series expansions are limits of polynomials, and limits of polynomials can never give us a singularity and then keep going on the other side.
On page 149 Hatcher introduces the Mayer-Vietoris sequence, along with two maps $\Phi : H_n(A \cap B) \to H_n(A) \oplus H_n(B)$ and $\Psi : H_n(A) \oplus H_n(B) \to H_n(X)$. I've searched through the book, but I couldn't find the definitions of these two maps. Does anyone know how to define them or where their definition appears in Hatcher's book?
suppose $\sum a_n z_0^n = L$, so $a_n z_0^n \to 0$, so $|a_n z_0^n| < \dfrac12$ for sufficiently large $n$; then for $|z| < |z_0|$, $|a_n z^n| = |a_n z_0^n| \left(\left|\dfrac{z}{z_0}\right|\right)^n < \dfrac12 \left(\left|\dfrac{z}{z_0}\right|\right)^n$, so $a_n z^n$ is absolutely summable, so $\sum a_n z^n$ is summable
Let $g : [0,\frac{ 1} {2} ] → \mathbb R$ be a continuous function. Define $g_n : [0,\frac{ 1} {2} ] → \mathbb R$ by $g_1 = g$ and $g_{n+1}(t) = \int_0^t g_n(s) ds,$ for all $n ≥ 1.$ Show that $lim_{n→∞} n!g_n(t) = 0,$ for all $t ∈ [0,\frac{1}{2}]$ .
Can you give some hint?
My attempt:- $t\in [0,1/2]$ Consider the sequence $a_n(t)=n!g_n(t)$
If $\lim_{n\to \infty} \frac{a_{n+1}}{a_n}<1$, then it converges to zero.
I have a bilinear functional that is bounded from below
I try to approximate the minimum by an ansatz function that is a linear combination
of some independent functions from the appropriate function space
I now obtain an expression that is bilinear in the coefficients
using the stationarity condition (all derivatives of the functional w.r.t. the coefficients = 0)
I get a set of $n$ equations, with $n$ the number of coefficients
a set of $n$ linear homogeneous equations in the $n$ coefficients
Now instead of "directly attempting to solve" the equations for the coefficients I rather look at the secular determinant, which should be zero, otherwise no non-trivial solution exists
This "characteristic polynomial" directly yields all permissible approximation values for the functional from my linear ansatz,
avoiding the necessity to solve for the coefficients.
I have trouble formulating the question, but it strikes me that a direct solution of the equations can be circumvented, and instead the values of the functional are obtained directly by using the condition that the determinant is zero.
I wonder if there is something deeper in the background, or, so to say, a more general principle.
If $x$ is a prime number and a number $y$ exists which is the digit reverse of $x$ and is also a prime number, then there must exist an integer $z$ midway between $x$ and $y$ which is a palindrome and satisfies digitsum(z) = digitsum(x).
> Bekanntlich hat P. du Bois-Reymond zuerst die Existenz einer überall stetigen Funktion erwiesen, deren Fouriersche Reihe an einer Stelle divergiert. Herr H. A. Schwarz gab dann ein einfacheres Beispiel.
(Translation: It is well known that Paul du Bois-Reymond was the first to demonstrate the existence of an everywhere-continuous function whose Fourier series diverges at a point. Afterwards, Hermann Amandus Schwarz gave a simpler example.)
It's discussed very carefully (but no formula explicitly given) in my favorite introductory book on Fourier analysis. Körner's Fourier Analysis. See pp. 67-73. Right after that is Kolmogoroff's result that you can have an $L^1$ function whose Fourier series diverges everywhere!! |
Coefficients of Expansion
Almost all materials expand on heating—the most famous exception being water, which contracts as it is warmed from 0 degrees Celsius to 4 degrees. This is actually a good thing, because as freezing weather sets in, the coldest water, which is about to freeze, is less dense than slightly warmer water, so rises to the top of a lake and the ice begins to form there. For almost all other liquids, solidification on cooling begins at the bottom of the container. So, since water behaves in this weird way, ice skating is possible! Also, as a matter of fact, life in lakes is possible—the ice layer that forms insulates the rest of the lake water from very cold air, so fish can make it through the winter.
Linear Expansion
The coefficient of linear expansion \(\alpha \) of a given material, for example a bar of copper, at a given temperature is defined as the fractional increase in length that takes place on heating through one degree: \[ L \rightarrow L + \Delta L = (1 + \alpha) L \quad \text{when} \quad T \rightarrow T + 1^\circ\text{C} \]
Of course, \( \alpha \) might vary with temperature (it does for water, as we just mentioned) but in fact for most materials it stays close to constant over wide temperature ranges.
For copper, \( \alpha = 17 \times 10^{-6} \) per degree Celsius.
Volume Expansion
For
liquids and gases, the natural measure of expansion is the coefficient of volume expansion, \( \beta \): \[V \rightarrow V + \Delta V = (1 + \beta) V \quad \text{when} \quad T \rightarrow T + 1^\circ\text{C} \]
Of course, on heating a bar of copper, clearly the
volume as well as the length increases—the bar expands by an equal fraction in all directions (this could be experimentally verified, or you could just imagine a cube of copper, in which case all directions look the same).
The volume of a cube of copper of side \(L\) is \(V = L^3\). Suppose we heat it through one degree. Putting together the definitions of \( \alpha, \beta \) above, \[V \rightarrow (1+ \beta)V, \quad L \rightarrow (1 + \alpha ) L, \quad L^3 \rightarrow (1 + \alpha )^3 L^3 = (1 + \alpha )^3 V \]
So \( (1 + \beta) = (1+ \alpha)^3 \). But remember \( \alpha \) is very, very small—so even though \( (1 + \alpha)^3 = 1+ 3\alpha + 3\alpha^2 + \alpha^3 \), the last two terms are
completely negligible (check it out!) so to a fantastically good approximation: \[ \beta = 3\alpha \] The coefficient of volume expansion is just three times the coefficient of linear expansion.
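For instance, plugging in copper's \( \alpha = 17 \times 10^{-6} \) from above (a quick version of the "check it out!"): \[ (1+\alpha)^3 - 1 = 3\alpha + 3\alpha^2 + \alpha^3 \approx 5.1 \times 10^{-5} + 8.7 \times 10^{-10} + 4.9 \times 10^{-15}, \] so dropping the last two terms changes \( \beta \) by only about 2 parts in 100,000.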
Gas Pressure Increase with Temperature
In 1702, Amontons discovered a
linear increase of P with T for air, and found P to increase about 33% from the freezing point of water to the boiling point of water.
That is to say, he discovered that if a container of air were to be sealed at 0°C, at ordinary atmospheric pressure of 15 pounds per square inch, and then heated to 100°C but kept at the same volume, the air would now exert a pressure of about 20 pounds per square inch on the sides of the container. (Of course, strictly speaking, the container will also have increased in size, that would lower the effect—but it’s a tiny correction, about ½% for copper, even less for steel and glass.)
Remarkably, Amontons discovered, if the gas were initially at a pressure of
thirty pounds per square inch at 0°C, on heating to 100°C the pressure would go to about 40 pounds per square inch—so the percentage increase in pressure was the same for any initial pressure: on heating through 100°C, the pressure would always increase by about 33%.
Furthermore, the result turned out to be the same for different gases!
Finding a Natural Temperature Scale
In class, we plotted air pressure as a function of temperature for a fixed volume of air, by making several measurements as the air was slowly heated (to give it a chance to all be at the same temperature at each stage). We found a straight line. On the graph, we extended the line backwards, to see how the pressure would presumably drop on cooling the air. We found the remarkable prediction that the pressure should drop to zero at a temperature of about -273°C.
In fact, if we’d done the cooling experiment, we would have found that air doesn’t actually follow the line all the way down, but condenses to a liquid at around -200°C. However, helium gas stays a gas almost to -270°C, and follows the line closely.
We shall discuss the physics of gases, and the interpretation of this, much more fully in a couple of lectures. For now, the important point is that this suggests a
much more natural temperature scale than the Celsius one: we should take -273°C as the zero of temperature! For one thing, if we do that, the pressure/temperature relationship for a gas becomes beautifully simple: \[ P \propto T. \]
This temperature scale, in which the degrees have the same size as in Celsius, is called the Kelvin or absolute scale. Temperatures are written 300K. To get from Celsius to Kelvin, just add 273 (strictly speaking, 273.15).
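For example, a warm room at 27°C is at 27 + 273 = 300 K, and the freezing point of water, 0°C, is 273 K.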
An Ideal Gas
Physicists at this point introduce the concept of an “Ideal Gas”. This is like the idea of a frictionless surface: it doesn’t exist in nature, but it is a very handy approximation to some real systems, and makes problems much easier to handle mathematically. The ideal gas is one for which \(P \propto T \) for all temperatures, so helium is close to ideal over a very wide range, and air is close to ideal at ordinary atmospheric temperatures and above.
The Gas Law
We saw earlier in the course that for a gas at constant
temperature PV = constant (Boyle’s Law). Now at constant volume, \( P \propto T\).
We can put these together in one equation to find a relationship between pressure, volume and temperature: \[ PV = CT \]
where
C is a constant. Notice, by the way, that we can immediately conclude that at fixed pressure, \( V \propto T \); this is called Charles' Law. (Exercise: prove from this that the coefficient of volume expansion of a gas varies significantly with temperature.)
But what is
C? Obviously, it depends on how much gas we have—double the amount of gas, keeping the pressure and temperature the same, and the volume will be doubled, so C will be doubled. But notice that C will not depend on what gas we are talking about: if we have two separate one-liter containers, one filled with hydrogen, the other with oxygen, both at atmospheric pressure, and both at the same temperature, then C will be the same for both of them.
One might conclude from this that
C should be defined for one liter of gas at a specified temperature and pressure, such as 0°C and 1 atmosphere, and that could be a consistent scheme. It might seem more natural, though, to specify a particular mass of gas, since then we wouldn’t have to specify a particular temperature and pressure in the definition of C.
But that idea brings up a further problem: one gram of oxygen takes up a lot less room than one gram of hydrogen. Since we’ve just seen that choosing the same
volume for the two gases gives the same constant C for the two gases, evidently taking the same mass of the two gases will give different C's.
Avogadro's Hypothesis
The resolution to this difficulty is based on a remarkable discovery the chemists made two hundred years or so ago: they found that
one liter of nitrogen could react with exactly one liter of oxygen to produce exactly two liters of NO, nitric oxide, all volume measurements being at the same temperature and pressure. Further, one liter of oxygen combined with two liters of hydrogen to produce two liters of steam. These simple ratios of interacting gases could be understood if one imagined the atoms combining to form molecules, and made the further assumption, known as Avogadro's Hypothesis (1811):
Equal volumes of gases at the same temperature and pressure contain the same number of molecules.
One could then understand the simple volume results by assuming the gases were made of diatomic molecules, \(\mathrm{H_2}\), \(\mathrm{N_2}\), \(\mathrm{O_2}\), and the chemical reactions were just molecular recombinations given by the equations \(\mathrm{N_2 + O_2 = 2NO}\), \(\mathrm{2H_2 + O_2 = 2H_2O}\), etc.
Of course, in 1811 Avogadro didn’t have the slightest idea what this number of molecules was for, say, one liter, and nobody else did either, for another fifty years. So no-one knew what an atom or molecule weighed,
but assuming that chemical reactions were atoms combining into molecules, or rearranging from one molecular pairing or grouping to another, they could figure out the relative weights of atoms, such as that an oxygen atom has a mass 16 times that of a hydrogen atom, even though they had no idea how big these masses were!
This observation led to defining
the natural mass of a gas for setting the value of the constant C in the gas law to be a "mole" of gas: hydrogen was known to be \(\mathrm{H_2}\) molecules, so a mole of hydrogen was 2 grams, oxygen was \(\mathrm{O_2}\), so a mole of oxygen was 32 grams, and so on.
With this definition, a mole of oxygen contains the same number of molecules as a mole of hydrogen: so at the same temperature and pressure, they will occupy the same volume. At 0°C, and atmospheric pressure, the volume is 22.4 liters.
So, for one mole of a gas (for example, two grams of hydrogen), we set the constant
C equal to R, known as the universal gas constant, and equal to 8.3 J/(mol·K), and PV = RT. For n moles of a gas, such as 2n grams of hydrogen, the law is: \[ PV = nRT.\]
and this is the standard form of the Gas Law.
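As a quick consistency check of the numbers quoted above, take one mole at 0°C (273 K) and atmospheric pressure of about \(1.01 \times 10^{5}\) pascals (the metric equivalent of the 15 pounds per square inch mentioned earlier): \[ V = \frac{nRT}{P} = \frac{1 \times 8.3 \times 273}{1.01 \times 10^{5}} \approx 0.0224 \text{ m}^3 = 22.4 \text{ liters}, \] in agreement with the molar volume stated above.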
(Footnote: after the discovery of isotopes, nuclei of the same element having different masses, and in particular of a form of hydrogen called heavy hydrogen present in small quantities in nature, the definition of the mole was refined to be equal to precisely 12 grams of the carbon isotope \(\mathrm{C^{12}}\). In practice, this is a tiny correction which doesn't affect anything we've said here.)
I'm trying to solve the Poisson equation with pure Neumann boundary conditions, $$ \nabla^2\phi = \rho \quad \text{in} \quad \Omega\\ \mathbf{\nabla}\phi \cdot \mathbf{n} = 0 \quad \text{on} \quad \partial \Omega $$ using a Fourier transform method I found in Numerical Recipes. The method uses a discrete cosine transform; if you don't have access to the book, you can find a derivation here. I tried implementing the algorithm in Python; my code is listed below:
import numpy as np
import scipy.sparse as sparse
import scipy.fftpack as fft

if __name__ == '__main__':

    shape = (3, 3)
    nx, ny = shape

    charges = np.zeros(shape)
    charges[:] = 1.0 / (nx * ny)
    charges[nx / 2, ny / 2] = 1.0 / (nx * ny) - 1.0
    print charges
    charges = charges.flatten()

    # Build Laplacian
    ex = np.append(np.ones(nx - 2), [2, 2])
    ey = np.append(np.ones(ny - 2), [2, 2])
    Dxx = sparse.spdiags([ex, -2 * np.ones(nx), ex[::-1]], [-1, 0, 1], nx, nx)
    Dyy = sparse.spdiags([ey, -2 * np.ones(ny), ey[::-1]], [-1, 0, 1], ny, ny)
    L = sparse.kronsum(Dxx, Dyy).todense()

    ################ Fourier method
    rhofft = np.zeros(shape, dtype = float)
    for i in range(shape[0]):
        rhofft[i,:] = fft.dct(charges.reshape(shape)[i,:], type = 1) / (shape[1] - 1.0)
    for j in range(shape[1]):
        rhofft[:,j] = fft.dct(rhofft[:,j], type = 1) / (shape[0] - 1.0)

    for i in range(shape[0]):
        for j in range(shape[1]):
            factor = 2.0 * (np.cos((np.pi * i) / (shape[0] - 1)) + np.cos((np.pi * j) / (shape[1] - 1)) - 2.0)
            if factor != 0.0:
                rhofft[i, j] /= factor
            else:
                rhofft[i, j] = 0.0

    potential = np.zeros(shape, dtype = float)
    for i in range(shape[0]):
        potential[i,:] = 0.5 * fft.dct(rhofft[i,:], type = 1)
    for j in range(shape[1]):
        potential[:,j] = 0.5 * fft.dct(potential[:,j], type = 1)
    ################

    print np.dot(L, potential.flatten()).reshape(shape)
    print potential
The charge density is the following,
[[ 0.11111111  0.11111111  0.11111111]
 [ 0.11111111 -0.88888889  0.11111111]
 [ 0.11111111  0.11111111  0.11111111]]
while, multiplying the solution with the Laplacian $L$ gives,
[[ 0.25  0.25  0.25]
 [ 0.25 -0.75  0.25]
 [ 0.25  0.25  0.25]]
instead of the same results as above.
I've been staring to the code for some time now and am unable to understand what I'm doing wrong. I've even checked to see if scipy's discrete cosine transform gives the correct results and that seems fine too.
If anyone could point out my mistake, I would be really grateful!
EDIT: I found out that if I multiply the solution by $L - J$, where $J$ is a matrix filled with ones, instead of just $L$, I do get the expected charge density. Why is that? |
When you have a sequence of the form $a_{n+1}=f(a_n)$ that apparently does not lead to a closed formula for $a_n$, then you have to study the function $f(x)$.
When you graph the curve $y=f(x)=\dfrac{10}x-3$ in blue and $y=x$ in red, you notice there are two intersection points.
These are called fixed points of $f$ since $f(x)=x$. Once solved this gives $x=2$ or $x=-5$.
If the sequence converges to some $\ell$, then by continuity of $f$, taking limits in $a_{n+1}=f(a_n)$ gives $f(\ell)=\ell$, so $\ell$ must be one of the two fixed points.
On the graph we can see that $2$ is a repulsive point, and $-5$ an attractive point.
Since we are not required to do the full study for all initial seeds of the sequence, but only for $a_1=10$, we will focus on showing it converges to $-5$.
We can see that the convergence is not a staircase (monotonic convergence) but a spiral. This means we have to show that $a_{2n}$ and $a_{2n+1}$ are both monotonic but of opposite direction.
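As a quick numerical illustration (not a proof): starting from $a_1=10$ we get $a_2=-2$, $a_3=-8$, $a_4=-4.25$, $a_5\approx-5.353$, $a_6\approx-4.868$, $a_7\approx-5.054$, $a_8\approx-4.979,\ldots$, so the terms do alternate around $-5$ while closing in on it, exactly the spiral seen on the graph.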
To prove this we have to:
study the sign of $a_{n+2}-a_n$; this is equivalent to studying the sign of $f(f(x))-x$.
show that $-5$ is squeezed between $a_n$ and $a_{n+1}$; this is equivalent to studying the sign of $f(x)+5$.
First notice that $x<0\implies f(x)<0$ so as soon as $a_{n_0}<0$ then all subsequent $a_n$ with $n\ge n_0$ are also negative.
Since $a_2<0$ we will select $n_0=2$.
$f(x)+5=\dfrac {10}x-3+5=\dfrac{2(x+5)}x\quad\begin{cases} > 0 & x\in]-\infty,-5[\\<0 & x\in]-5,0[\end{cases}$
So if $a_n<-5$ then $a_{n+1}>-5$ and vice-versa and $-5$ is squeezed between $a_n$ and $a_{n+1}$ for $n\ge 2$
$f(f(x))-x=\dfrac{10}{\frac{10}{x}-3}-3-x=\dfrac{3(x+5)(x-2)}{10-3x}\quad\begin{cases} > 0 & x\in]-\infty,-5[\\<0 & x\in]-5,0[\end{cases}$
So $a_{n+2}>a_{n}$ for $a_n<-5$ and $a_{n+2}<a_n$ for $a_n>-5$.
Since $a_2>-5$ then $\begin{cases}a_{2n}>-5 & a_{2n}\searrow\\a_{2n+1}<-5 & a_{2n+1}\nearrow\end{cases}$
Now we can apply the monotone convergence theorem: both subsequences converge, and since their limits must be fixed points of $f\circ f$ lying in $[-5,0]$, we get $a_{2n}\to -5$ and $a_{2n+1}\to -5$.
This means that $a_n\to -5$. |
Exercise: Let $(S,\mathcal{A},\mu)$ be a measurable space and let $A_1,A_2,\ldots\in \mathcal{A}$. Define $B\subseteq S$ as $B = \bigcap\limits_{k = 1}^\infty\bigcup\limits_{n = k}^\infty A_n$. Show that $B\in\mathcal{A}$ and show that if $\sum_{n = 1}^\infty\mu(A_n)<\infty$ we have that $\mu(B) = 0$. What I've tried: I first tried to show that $B\in\mathcal{A}$. I know that $\bigcup_{n =k}^\infty A_n\in\mathcal{A}$ for any $k\in\mathbb{N}$. Unfortunately, I cannot conclude that the infinite intersection of sets that are in $\mathcal{A}$ is in $\mathcal{A}$ as well. Though I thus far haven't been able to prove that $B\in\mathcal{A}$, it makes a lot of sense. $B = (A_1\cup A_2\cup\ldots)\cap (A_2\cup A_3\cup\ldots)\cap \ldots$ so I know that $B\subseteq \bigcup_{n =1}^\infty A_n$. I feel this is the direction I need to be looking, but I'm kind of stuck at this point.
To show that if $\sum_{n = 1}^\infty \mu(A_n) <\infty$ we have that $\mu(B) = 0$ I tried to use the fact that $B_n = \bigcup_{j = n}^\infty A_j$ is a decreasing sequence. Since $B_n$ is a decreasing sequence and $B = \bigcap_{n = 1}^\infty B_n$, we have that if $\mu(B_1) <\infty$, then $\mu(B_n)\downarrow \mu(B)$. Since we have $\mu(B_1) = \sum_{n = 1}^\infty \mu(A_n) <\infty$, $\mu(B_n)\to 0$ implies $\mu(B) = 0$, which is what we want to show. However, I'm not sure how to show that $\mu(B_n)\to 0$. It again makes a lot of sense, but I'm not quite sure how I should prove it.
Question: How do I solve this exercise?
Thanks! |
I saw some data from the Atlanta Fed [1] on wage growth that looked remarkably suitable for a dynamic information equilibrium model (also described in my recent paper). One of the interesting things here is that it is a dynamic equilibrium between wages ($W$) and the rate of change of wages ($dW/dt$) so that we have the model $dW/dt \rightleftarrows W$:
$$
\frac{d}{dt} \log \frac{d}{dt} \log W = \frac{d}{dt} \log \frac{dW/dt}{W} \approx \gamma + \sigma_{i} (t)
$$
where $\gamma$ is the dynamic equilibrium growth rate and $\sigma_{i} (t)$ represents a series of shocks. This model works remarkably well:
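To make the functional form concrete, here is a minimal sketch (my own illustration, not the code behind the figure) of fitting this kind of model. Integrating the equation above, log wage growth is a straight line with slope $\gamma$ plus step-like contributions from the shocks; the sketch assumes a single logistic step and fits a synthetic stand-in for the Atlanta Fed series, so all names and numbers here are placeholders.

import numpy as np
from scipy.optimize import curve_fit

def log_model(t, gamma, c, a, t0, w):
    # log wage growth: dynamic equilibrium trend with slope gamma plus one
    # integrated shock, modeled as a logistic step of size a centered at t0
    return gamma * (t - 2000.0) + c + a / (1.0 + np.exp(-(t - t0) / w))

# synthetic stand-in for the wage growth series (fractional, not percent)
t = np.linspace(1997.0, 2017.0, 241)
wage_growth = np.exp(log_model(t, 0.042, np.log(0.04), -0.3, 2009.4, 0.8))

pars, _ = curve_fit(log_model, t, np.log(wage_growth),
                    p0=[0.04, np.log(0.04), -0.2, 2009.0, 1.0])
print(pars)   # recovers gamma ~ 0.042/yr and the 2009.4 shock center on this toy series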
The shock transitions are in 1992.0, 2002.4, 2009.4, and 2014.7, which all lag the related shock to unemployment. A negative shock to employment drives down wage growth (who knew?), but it also appears that wage growth has a tendency to increase at about 4.2% per year [2] unless there is a positive shock to employment (such as in 2014), when it can increase faster. The most recent downturn in the data is possibly consistent with the JOLTS leading indicators showing a deviation; however, since the wage growth data seems to follow recessions, it is more likely that this is a measurement/noise fluctuation.
I added the wage growth series to the labor market "seismogram" collection, and we can see a fall in wage growth typically follows a recession:
...
Footnotes:
[1] The time series is broken before 1997, but data goes back to 1983 in the source material. I included data back to 1987. However, the data prior to the 1991 recession does not include the complete 1980s recession(s), so the fit to that recession shock would be highly uncertain, and so I left it out.
[2] Wage growth is typically around 3.0% lately, so a 4.2% annual increase in that rate would mean that after a year wage growth would be about 3.1% and after 2 years about 3.3%, in the absence of shocks. |
The use of bootstrapping in the meta-analytic context has been suggested by a number of authors (e.g., Adams, Gurevitch, & Rosenberg, 1997; van den Noortgate & Onghena, 2005; Switzer, Paese, & Drasgow, 1992; Turner et al., 2000). The example below shows how to conduct parametric and non-parametric bootstrapping using the metafor and boot packages in combination. The example is based on a meta-analysis by Collins et al. (1985) examining the effectiveness of diuretics in pregnancy for preventing pre-eclampsia.
The data can be loaded with:
library(metafor)
dat <- dat.collins1985b[,1:7]
dat
We only need the first 7 columns of the dataset (the remaining columns pertain to other outcomes). The contents of the dataset are:
  id                  author year pre.nti pre.nci pre.xti pre.xci
1  1       Weseley & Douglas 1962     131     136      14      14
2  2          Flowers et al. 1962     385     134      21      17
3  3                 Menzies 1964      57      48      14      24
4  4           Fallis et al. 1964      38      40       6      18
5  5         Cuadros & Tatum 1964    1011     760      12      35
6  6        Landesman et al. 1965    1370    1336     138     175
7  7            Kraus et al. 1966     506     524      15      20
8  8    Tervila & Vartiainen 1971     108     103       6       2
9  9 Campbell & MacGillivray 1975     153     102      65      40
Variables
pre.nti and
pre.nci indicate the number of women in the treatment and control/placebo groups, respectively, while
pre.xti and
pre.xci indicate the number of women in the respective groups with any form of pre-eclampsia during the pregnancy.
The corresponding log odds ratios (and corresponding sampling variances) can then be computed and added to the dataset with:
dat <- escalc(measure="OR", ai=pre.xti, n1i=pre.nti, ci=pre.xci, n2i=pre.nci, data=dat)
dat

  id                  author year pre.nti pre.nci pre.xti pre.xci      yi     vi
1  1       Weseley & Douglas 1962     131     136      14      14  0.0418 0.1596
2  2          Flowers et al. 1962     385     134      21      17 -0.9237 0.1177
3  3                 Menzies 1964      57      48      14      24 -1.1221 0.1780
4  4           Fallis et al. 1964      38      40       6      18 -1.4733 0.2989
5  5         Cuadros & Tatum 1964    1011     760      12      35 -1.3910 0.1143
6  6        Landesman et al. 1965    1370    1336     138     175 -0.2969 0.0146
7  7            Kraus et al. 1966     506     524      15      20 -0.2615 0.1207
8  8    Tervila & Vartiainen 1971     108     103       6       2  1.0888 0.6864
9  9 Campbell & MacGillivray 1975     153     102      65      40  0.1353 0.0679
For the analyses to be shown below, we will focus on the meta-analytic random-effects model, letting $y_i$ denote the observed outcome/effect for a particular study (variable
yi in the dataset above) and $\theta_i$ the corresponding true outcome/effect. We assume that$$y_i = \theta_i + e_i,$$where $e_i \sim N(0, v_i)$ denotes sampling error, whose variance is (approximately) known (variable
vi in the dataset above). The true effects are in turn assumed to be given by$$\theta_i = \mu + u_i,$$where $u_i \sim N(0, \tau^2)$, so that $\mu$ denotes the average true outcome/effect and $\tau^2$ the variance in the true outcomes/effects. The model implies that $$y_i \sim N(\mu, \tau^2 + v_i).$$The goal is to estimate $\mu$ and $\tau^2$ and to construct corresponding confidence intervals for these parameters.
The results from fitting a random-effects model (using REML estimation) to the data given above can be obtained with:
res <- rma(yi, vi, data=dat)
res

Random-Effects Model (k = 9; tau^2 estimator: REML)

tau^2 (estimated amount of total heterogeneity): 0.3008 (SE = 0.2201)
tau (square root of estimated tau^2 value):      0.5484
I^2 (total heterogeneity / total variability):   75.92%
H^2 (total variability / sampling variability):  4.15

Test for Heterogeneity:
Q(df = 8) = 27.2649, p-val = 0.0006

Model Results:

estimate      se     zval    pval    ci.lb    ci.ub
 -0.5181  0.2236  -2.3167  0.0205  -0.9564  -0.0798  *

---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Therefore, the estimated average log odds ratio is equal to $\hat{\mu} = -.52$ with an approximate Wald-type 95% CI of $(-0.96, -0.08)$. The variance of the true log odds ratios is estimated to be $\hat{\tau}^2 = .30$. Confidence intervals for $\tau^2$ can be obtained in a variety of ways – see Viechtbauer (2007) for an illustration of a variety of different methods. One of them is the so-called Q-profile method, which provides exact CIs (under the assumptions of the model). Using this method, the CI bounds can be obtained with:
confint(res)

        estimate   ci.lb   ci.ub
tau^2     0.3008  0.0723  2.2027
tau       0.5484  0.2689  1.4842
I^2(%)   75.9238 43.1209 95.8494
H^2       4.1535  1.7581 24.0929
Note that CIs for the $I^2$ and $H^2$ statistics are also provided.
We will now use the boot package to obtain parametric and non-parametric bootstrap CIs for $\mu$ and $\tau^2$, so we start by loading the package with:
library(boot)
For parametric bootstrapping, we need to define two functions, one for calculating the statistic(s) of interest (and possibly the corresponding variance(s)) based on the bootstrap data, the second for actually generating the bootstrap data. In the present case, our interest is focused on the estimates of $\mu$ and $\tau^2$, so the first function could be written as:
boot.func <- function(data.boot) {

   res <- try(rma(yi, vi, data=data.boot), silent=TRUE)

   if (is.element("try-error", class(res))) {
      NA
   } else {
      c(coef(res), vcov(res), res$tau2, res$se.tau2^2)
   }

}
The purpose of the
try() function is to catch cases where the algorithm used to obtain the REML estimate of $\tau^2$ does not converge. Otherwise, the function returns the estimate of $\mu$, the corresponding variance, the estimate of $\tau^2$, and its corresponding variance.
For a random-effects model, the data generation process is described by the last equation given above, where the two unknown parameters are replaced by their corresponding estimates (i.e., based on the fitted model). Therefore, the second function needed for the parametric bootstrapping is:
data.gen <- function(dat, mle) { data.frame(yi=rnorm(nrow(dat), mle$mu, sqrt(mle$tau2 + dat$vi)), vi=dat$vi) }
Next, we can do the actual bootstrapping (based on 10,000 bootstrap samples) with (note that setting the seed allows for reproducibility of the results):
set.seed(8781328)
res.boot <- boot(dat, boot.func, R=10000, sim="parametric", ran.gen=data.gen,
                 mle=list(mu=coef(res), tau2=res$tau2))
res.boot

PARAMETRIC BOOTSTRAP

Call:
boot(data = dat, statistic = boot.func, R = 10000, sim = "parametric", ran.gen = data.gen, mle = list(mu = coef(res), tau2 = res$tau2))

Bootstrap Statistics :
       original          bias    std. error
t1* -0.51810321  0.0043534226  0.22661343
t2*  0.05001318 -0.0004975425  0.02678066
t3*  0.30079577  0.0032844143  0.22289406
t4*  0.04845165  0.0136897479  0.07136212
Finally, a variety of different CIs for $\mu$ can be obtained with:
boot.ci(res.boot, type=c("norm", "basic", "stud", "perc"), index=1:2) BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS Based on 9964 bootstrap replicates CALL : boot.ci(boot.out = res.boot, type = c("norm", "basic", "stud", "perc"), index = 1:2) Intervals : Level Normal Basic 95% (-0.9666, -0.0783 ) (-0.9604, -0.0720 ) Level Studentized Percentile 95% (-1.0661, 0.0241 ) (-0.9642, -0.0758 ) Calculations and Intervals on Original Scale
All of the intervals except the one based on the studentized method are similar to the Wald-type CI obtained earlier. The plot below shows the bootstrap distribution of $\hat{\mu}$ (with a kernel density estimate of the distribution superimposed and the tails shaded based on the percentile CI).
Since the studentized method is based on the use of the t-distribution, it is not surprising that more comparable results to the studentized CI can be obtained by using the Knapp and Hartung method when fitting the random-effects model (which also leads to the use of the t-distribution when constructing CIs for the fixed effects of the model). In particular, this can be done with:
rma(yi, vi, data=dat, test="knha")

Model Results:

estimate      se     tval    pval    ci.lb   ci.ub
 -0.5181  0.2408  -2.1512  0.0637  -1.0735  0.0373
The CI obtained in this manner is more similar to the one obtained using the studentized method.
Next, parametric bootstrap CIs for $\tau^2$ can be obtained with:
boot.ci(res.boot, type=c("norm", "basic", "stud", "perc"), index=3:4) BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS Based on 9964 bootstrap replicates CALL : boot.ci(boot.out = res.boot, type = c("norm", "basic", "stud", "perc"), index = 3:4) Intervals : Level Normal Basic 95% (-0.1394, 0.7344 ) (-0.2358, 0.6016 ) Level Studentized Percentile 95% ( 0.0644, 2.2430 ) ( 0.0000, 0.8374 ) Calculations and Intervals on Original Scale
The various CIs are quite different from each other. Interestingly, the studentized method yields bounds that are quite similar to the ones obtained with the Q-profile method. Below is a plot of the bootstrap distribution of $\hat{\tau}^2$ (again, with the kernel density estimate and the tail regions shaded based on the percentile CI).
For the non-parametric bootstrap method, we only need to define one function with two arguments, one for the original data and one for a vector of indices which define the bootstrap sample. The function again returns the statistics of interest (and the corresponding variances):
boot.func <- function(dat, indices) {

   res <- try(rma(yi, vi, data=dat, subset=indices), silent=TRUE)

   if (is.element("try-error", class(res))) {
      NA
   } else {
      c(coef(res), vcov(res), res$tau2, res$se.tau2^2)
   }

}
The indices that define the bootstrap sample can be directly passed to the
subset argument, which will then select the appropriate rows from the dataset. Again, the
try() function is used to catch the occasional case of non-convergence.
The actual bootstrapping can then be carried out with:
set.seed(8781328)
res.boot <- boot(dat, boot.func, R=10000)
res.boot

ORDINARY NONPARAMETRIC BOOTSTRAP

Call:
boot(data = dat, statistic = boot.func, R = 10000)

Bootstrap Statistics :
       original         bias    std. error
t1* -0.51810321  0.004682588  0.21214699
t2*  0.05001318 -0.004397091  0.02165423
t3*  0.30079577 -0.036919665  0.15270307
t4*  0.04845165  0.001115270  0.04832504
The various non-parametric bootstrap CIs for $\mu$ can now be obtained with:
boot.ci(res.boot, index=1:2) BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS Based on 9980 bootstrap replicates CALL : boot.ci(boot.out = res.boot, index = 1:2) Intervals : Level Normal Basic Studentized 95% (-0.9386, -0.1070 ) (-0.9158, -0.0932 ) (-1.3070, -0.0274 ) Level Percentile BCa 95% (-0.9430, -0.1204 ) (-0.9827, -0.1530 ) Calculations and Intervals on Original Scale
Now, the so-called bias-corrected and accelerated (BCa) CI is also included. The bootstrap distribution is shown below.
Finally, the various CIs for $\tau^2$ can be obtained with:
boot.ci(res.boot, index=3:4) BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS Based on 9980 bootstrap replicates CALL : boot.ci(boot.out = res.boot, index = 3:4) Intervals : Level Normal Basic Studentized 95% ( 0.0384, 0.6370 ) (-0.0047, 0.6016 ) ( 0.1396, 4.8907 ) Level Percentile BCa 95% ( 0.0000, 0.6063 ) ( 0.0685, 0.8122 ) Calculations and Intervals on Original Scale
Again, the bootstrap distribution is shown below.
The code above is shown for illustrative purposes only. Whether bootstrap CIs for $\mu$ are preferable to the standard Wald-type or Knapp and Hartung adjusted CIs is unclear. Also, the results based on Viechtbauer (2007) suggest that the coverage probabilities of parametric and non-parametric bootstrap CIs for $\tau^2$ are less than adequate (however, only percentile and BCa intervals were closely examined in that paper, leaving the accuracy of the other bootstrap CIs unknown). Finally, other bootstrap strategies (e.g., the error bootstrap) are described by van den Noortgate and Onghena (2005).
Adams, D. C., Gurevitch, J., & Rosenberg, M. S. (1997). Resampling tests for meta-analysis of ecological data.
Ecology, 78(5), 1277–1283.
Collins, R., Yusuf, S., & Peto, R. (1985). Overview of randomised trials of diuretics in pregnancy.
British Medical Journal, 290(6461), 17–23.
van den Noortgate, W., & Onghena, P. (2005). Parametric and nonparametric bootstrap methods for meta-analysis.
Behavior Research Methods, 37(1), 11–22.
Switzer III, F. S., Paese, P. W., & Drasgow, F. (1992). Bootstrap estimates of standard errors in validity generalization.
Journal of Applied Psychology, 77(2), 123–129.
Turner, R. M., Omar, R. Z., Yang, M., Goldstein, H., & Thompson, S. G. (2000). A multilevel model framework for meta-analysis of clinical trials with binary outcomes.
Statistics in Medicine, 19(24), 3417–3432.
Viechtbauer, W. (2007). Confidence intervals for the amount of heterogeneity in meta-analysis.
Statistics in Medicine, 26(1), 37–52. |
I have a specific problem I have been set, I'm asking here because I can't really find an answer anywhere else.
Consider the scenario where a company offers some service to its users. The company has an enterprise value which is a function of the number, $n$, of users it has, $f(n)$.
When people use the service provided by the company, each user $i$ gets a non-negative utility $u_i \geq 0$.
Let $N$ denote the population of potential users, and $F$ denote the company.
We can view this as a cooperative game, $(N \cup \{F\}, v)$ played by the company and its potential users.
The value of a coalition formed is the sum of the enterprise value of the company and the utilities of its users, if the company is in the coalition. If the company is not included in the coalition, its value is $0$.
Formally:
for any $C \subseteq N \cup \{F\}$, we have
$$v(C) = \begin{cases} f(|C| - 1) + \sum_{i \in C \cap N} u_i & \text{if $F \in C$} \\ 0 & \text{otherwise}\end{cases}$$
We are given that the game is superadditive, so we assume the grand coalition forms.
We are asked to compute the Shapley value for the company $F$ and each user $i \in N$.
The only thing I am sure of is that, due to the efficiency property of Shapley values, the sum of values for all users and the company must be $v(N \cup \{F\}) = f(|N|) + \sum_{i \in N} u_i$. After attempting the problem, I ended up with a large equation which I don't believe to be the answer that they were looking for.
Is there a simple way to compute these values, possibly based on some of the properties of the Shapley value? Or does it have to be a long convoluted equation? |
Since I apparently can't seem to sit down and write anything that isn't on a blog, I thought I'd create a few posts that I will edit in real time (feel free to comment) until I can copy and paste them into a document to put on the arXiv and/or submit to the economics e-journal (H/T to Todd Zorick for helping to motivate me). WARNING: DRAFT: This post may be updated without any indications of changes. It will be continuously considered a draft.
Macroeconomics
Since the information equilibrium framework depends on a large number of states for the information source and destination, it ostensibly would be better applied to the macroeconomic problem. Below are some classic macroeconomic toy models (and one macroeconomic relationship): AD-AS model, Okun's law, the IS-LM model, and the Solow growth model.
[To be added, the price level/quantity theory of money]
AD-AS
The AD-AS model uses the price level $P$ as the detector, aggregate demand $N$ (NGDP) as the information source and aggregate supply $S$ as the destination, or $P:N \rightarrow S$, which immediately allows us to write down the aggregate demand and aggregate supply curves
$$
P = \frac{N_{0}}{k_{A} S_{ref}} \exp \left( - k_{A} \frac{\Delta N}{N_{0}} \right)
$$
$$
P = \frac{N_{ref}}{k_{A} S_{0}} \exp \left( + \frac{\Delta S}{k_{A} S_{0}} \right)
$$
Positive shifts in the aggregate demand curve raise the price level along with negative shifts in the supply curve. Traveling along the aggregate demand curve lowers the price level (more aggregate supply at constant demand).
Labor market and Okun's law
The labor market uses the price level $P$ as the detector, aggregate demand $N$ as the information source and total hours worked $H$ (or total employed $L$) as the destination. We have the market $P:N \rightarrow H$ so that we can say:
$$
P = \frac{1}{k_{H}} \; \frac{N}{H}
$$
Re-arranging and taking the logarithmic derivative of both sides:
$$
H = \frac{1}{k_{H}} \; \frac{N}{P}
$$
$$
\frac{d}{dt} \log H = \frac{d}{dt} \log \frac{N}{P} - \frac{d}{dt} \log k_{H}
$$
$$
\frac{d}{dt} \log H = \frac{d}{dt} \log \frac{N}{P} - 0 = \frac{d}{dt} \log R
$$
where $R$ is RGDP. The total hours worked (or total employed) fluctuates with the change in RGDP growth (Okun's law).
IS-LM
The IS-LM model uses two markets along with an information equilibrium relationship. Let $p$ be the price of money in the money market (LM market) $p:N \rightarrow M$ where $N$ is aggregate demand and $M$ is the money supply.
We have:
$$
p = \frac{1}{k_{p}} \; \frac{N}{M}
$$
We assume that the interest rate $i$ is in information equilibrium with the price of money $p$, so that we have the information equilibrium relationship $i \rightarrow p$ (no need to define a detector at this point). Therefore the differential equation is:
$$
\frac{di}{dp} = \frac{1}{k_{i}} \; \frac{i}{p}
$$
With solution (we won't need the additional constants $p_{ref}$ or $i_{ref}$):
$$
i^{k_{i}} = p
$$
And we can write [note: this is a new take (here's the old take) on the constant $k_{i}$ that I've called $c$]:
$$
i^{k_i} = \frac{1}{k_{p}} \; \frac{N}{M}
$$
Already this is pretty empirically accurate:
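A sketch of how a fit like this can be set up: taking logs of the last equation gives $k_i \log i = \log (N/M) - \log k_p$, a straight line in $\log i$ versus $\log (N/M)$, so an ordinary linear fit recovers $k_i$ and $k_p$. The arrays below are synthetic placeholders generated from the model itself (a real fit would use the NGDP, monetary base, and effective fed funds series), so the recovered numbers are illustrative only.

import numpy as np

rng = np.random.default_rng(0)

# toy stand-ins: N/M ratio and the implied interest rate, with arbitrary k_i, k_p
k_i_true, k_p_true = 0.4, 40.0
ratio = np.linspace(8.0, 20.0, 200)                        # N / M
rate = (ratio / k_p_true) ** (1.0 / k_i_true) \
       * np.exp(0.05 * rng.standard_normal(ratio.size))    # i, with noise

# log i = (1/k_i) log(N/M) - (1/k_i) log k_p
slope, intercept = np.polyfit(np.log(ratio), np.log(rate), 1)
print("k_i =", 1.0 / slope, " k_p =", np.exp(-intercept / slope))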
We can now rewrite the money (LM) market and add the goods (IS) market as coupled markets with the same information source (aggregate demand) and same detector (interest rate, directly related to -- i.e. in information equilibrium with -- the price of money):
$$
i^{k_i} : N \rightarrow M
$$
$$
i^{k_i} : N \rightarrow S
$$
Where $S$ is the aggregate supply. The LM market is described by both increases in the money supply $M$ shifts as well as shifts in the information source $N_{0} \rightarrow N_{0} + \Delta N$, so we write the LM curve as a demand curve, with shifts:
$$
i^{k_i} = \frac{N_{0} + \Delta N}{k_{p} M_{ref}} \exp \left( - k_{p} \frac{\Delta M}{N_{0} + \Delta N} \right)
$$
The IS curve can be straightforwardly written down as the demand curve in the IS market:
$$
i^{k_i} = \frac{N_{0}}{k_{S} S_{ref}} \exp \left( - k_{S} \frac{\Delta N}{N_{0}} \right)
$$
Solow growth model
Let's assume two markets $p_{1}:N \rightarrow K$ and $p_{2}:N \rightarrow L$:
$$
\text{(3a) }\frac{\partial N}{\partial K} = \frac{1}{\kappa_{1}}\; \frac{N}{K}
$$
$$
\text{(3b) }\frac{\partial N}{\partial L} = \frac{1}{\kappa_{2}}\; \frac{N}{L}
$$
The economic rationale for equations (3a,b) is that the left-hand sides are the marginal productivity of capital/labor, which are
assumed to be proportional to the right-hand sides -- the productivity per unit capital/labor. In the information transfer model, the relationship follows from a model of aggregate demand sending information to aggregate supply (capital and labor) where the information transfer is "ideal", i.e. no information loss. The solutions are:
$$
N(K, L) \sim f(L) K^{1/\kappa_{1}}
$$
$$
N(K, L) \sim g(K) L^{1/\kappa_{2}}
$$
and therefore we have
$$
\text{(4) } N(K, L) = A K^{1/\kappa_{1}} L^{1/\kappa_{2}}
$$
Equation (4) is the generic Cobb-Douglas form. In this case, unlike equation (2), the exponents are free to take on any value (they are not restricted to constant returns to scale, i.e. $1/\kappa_{1} + 1/\kappa_{2} = 1$). The resulting model is remarkably accurate:
It also has no changes in so-called total factor productivity ($A$ is constant). The results above use nominal capital and nominal GDP $N$ rather than the usual real capital and real output (RGDP, $R$).
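As an illustration of how one might fit equation (4), here is a minimal Python sketch that recovers the exponents by ordinary least squares in logs; the capital and labor series and the "true" exponents below are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic capital and labor series (arbitrary units)
K = np.exp(rng.uniform(3, 5, size=200))
L = np.exp(rng.uniform(2, 4, size=200))

# "True" values used to generate fake nominal output N = A K^a L^b (a + b need not be 1)
A_true, a_true, b_true = 2.0, 0.6, 0.7
N = A_true * K**a_true * L**b_true * np.exp(0.01 * rng.standard_normal(200))

# Equation (4) is linear in logs: log N = log A + (1/kappa_1) log K + (1/kappa_2) log L
X = np.column_stack([np.ones_like(K), np.log(K), np.log(L)])
coef, *_ = np.linalg.lstsq(X, np.log(N), rcond=None)

print("log A, 1/kappa_1, 1/kappa_2 =", coef)   # recovers approximately (log 2, 0.6, 0.7)
```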
Summary
We have shown that several macroeconomic relationships and toy models can be easily represented using the information equilibrium framework, and in fact are remarkably accurate empirically. Below we list a summary of the information equilibrium models in the notation
detector : source → destination
i.e. price : demand → supply. Information equilibrium models that do not require a detector are shown as source → destination.
The models shown here are:
AD-AS model
$$
P: N \rightarrow S
$$
Labor market (Okun's law)
$$
P: N \rightarrow H
$$
or
$$
P: N \rightarrow L
$$
IS-LM model
$$
(i \rightarrow p ) : N \rightarrow M
$$
$$
i : N \rightarrow S
$$
Solow growth model
$$
N \rightarrow K
$$
$$
N \rightarrow L
$$ |
Earlier this semester, we saw how to approximate a function \(f (x, y)\) by a linear function, that is, by its tangent plane. The tangent plane equation just happens to be the \(1^{\text{st}}\)-degree Taylor polynomial of \(f\) at \((x, y)\), just as the tangent line equation was the \(1^{\text{st}}\)-degree Taylor polynomial of a function \(f(x)\).
Now we will see how to improve this approximation of \(f (x, y)\) using a quadratic function: the \(2^{\text{nd}}\)-degree Taylor polynomial for \(f\) at \((x, y)\).
Review of Taylor Polynomials for a Function of One Variable
Do you remember Taylor Polynomials from Calculus II?
Definition: Taylor polynomials for a function of one variable, \(y = f(x)\)
If \(f\) has \(n\) derivatives at \(x = c\), then the polynomial,
\[P_n(x) = f(c) + f'(c)(x - c) + \frac{f''(c)}{2!}(x - c)^2 + \cdots + \frac{f^{(n)}(c)}{n!}(x-c)^n\]
is called the \(n^{\text{th}}\)-degree Taylor Polynomial for \(f\) at \(c\).
Now a function of one variable \(f(x)\) can be approximated for \(x\) near \(c\) using its \(1^{\text{st}}\)-degree Taylor Polynomial (i.e., using the equation of its
tangent line at the point \((c, f(c))\)). This \(1^{\text{st}}\)-degree Taylor Polynomial is also called the linear approximation of \(f(x)\) for \(x\) near \(c\).
That is:
\[f(x) \approx f(c) + f '(c) (x - c)\]
Note
Remember that the first derivative of this \(1^{\text{st}}\)-degree Taylor polynomial at \(x = c\) is equal to the first derivative of \(f\) at \(x = c\). That is:
Since \(P_1(x) = f(c) + f '(c) (x - c)\),
\[P_1'(c) = f'(c) \nonumber\]
A better approximation of \(f(x)\) for \(x\) near \(c\) is the
quadratic approximation (i.e., the \(2^{\text{nd}}\)-degree Taylor polynomial of \(f\) at \(x = c\)):
\[f(x) \approx f(c) + f '(c) (x - c) + \frac{ f ''(c)}{2}(x - c)^2\]
Note
Remember that both the first and second derivatives of the \(2^{\text{nd}}\)-degree Taylor polynomial of \(f\) at \(x = c\) are the same as those for \(f\) at \(x = c\). That is:
Since \(P_2(x) = f(c) + f '(c) (x - c) + \frac{ f ''(c)}{2}(x - c)^2\),
\[P_2'(c) = f'(c) \quad \text{and} \quad P_2''(c) = f''(c) \nonumber\]
1st and 2nd-Degree Taylor Polynomials for Functions of Two Variables
Taylor Polynomials work the same way for functions of two variables. (There are just more of each derivative!)
Definition: first-degree Taylor polynomial of a function of two variables, \(f(x, y)\)
For a function of two variables \(f(x, y)\) whose first partials exist at the point \((a, b)\), the
\(1^{\text{st}}\)-degree Taylor polynomial of \(f\) for \((x, y)\) near the point \((a, b)\) is:
\[f (x, y) \approx L(x, y) = f (a, b) + f_x(a, b) (x - a) + f_y(a, b) (y - b)\]
\(L(x,y)\) is also called the
linear (or tangent plane) approximation of \(f\) for \((x, y)\) near the point \((a, b)\).
Note that this is really just the equation of the function \(f\)'s tangent plane.
Also note that the first partial derivatives of this polynomial function are \(f_x\) and \(f_y\)!
We can obtain an even better approximation of \(f\) for \((x, y)\) near the point \((a, b)\) by using the
quadratic approximation of \(f\) for \((x, y)\) near the point \((a, b)\). This is just another name for the \(2^{\text{nd}}\)-degree Taylor polynomial of \(f\).
Definition: Second-degree Taylor Polynomial of a function of two variables, \(f(x, y)\)
For a function of two variables \(f(x, y)\) whose first and second partials exist at the point \((a, b)\), the
\(2^{\text{nd}}\)-degree Taylor polynomial of \(f\) for \((x, y)\) near the point \((a, b)\) is:
\[f (x, y) \approx Q(x, y) = f (a, b) + f_x(a, b) (x - a) + f_y(a, b) (y - b) + \frac{f_{xx}(a, b)}{2}(x-a)^2 + f_{xy}(a,b)(x-a)(y-b) + \frac{f_{yy}(a, b)}{2}(y-b)^2 \label{tp2}\]
If we have already determined \(L(x,y)\), we can simplify this formula as:
\[f (x, y) \approx Q(x, y) = L(x,y) + \frac{f_{xx}(a, b)}{2}(x-a)^2 + f_{xy}(a,b)(x-a)(y-b) + \frac{f_{yy}(a, b)}{2}(y-b)^2 \]
Note: Since both mixed partials are equal, they combine to form the middle term. Originally there were four terms for the second partials, all divided by 2.
Observe that the power on the factor \((x - a)\) corresponds to the number of times the partial is taken with respect to \(x\), and the power on the factor \((y - b)\) corresponds to the number of times the partial is taken with respect to \(y\). For example, in the term with \(f_{xx}(a,b)\), you have the factor \((x-a)^2\), since the partial is taken with respect to \(x\) twice, and in the term with \(f_{xy}(a,b)\), you have the factors \((x-a)\) and \((y-b)\) (both raised to the first power), since the partial is taken with respect to \(x\) once and with respect to \(y\) once.
Also note that both the first and second partial derivatives of this polynomial function are the same as those for the function \(f\)!
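If you would like to check these computations symbolically, here is a minimal SymPy sketch that builds \(L(x,y)\) and \(Q(x,y)\) for the function of Example \(\PageIndex{1}\)a below, \(f(x,y)=\sin 2x + \cos y\) at \((0,0)\):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.sin(2*x) + sp.cos(y)
a0, b0 = 0, 0                      # the expansion point (a, b)

fx, fy = sp.diff(f, x), sp.diff(f, y)
fxx, fxy, fyy = sp.diff(f, x, 2), sp.diff(f, x, y), sp.diff(f, y, 2)

def at(expr):
    """Evaluate an expression at the expansion point."""
    return expr.subs({x: a0, y: b0})

L = at(f) + at(fx)*(x - a0) + at(fy)*(y - b0)
Q = (L + at(fxx)/2*(x - a0)**2 + at(fxy)*(x - a0)*(y - b0)
       + at(fyy)/2*(y - b0)**2)

print(sp.expand(L))   # 2*x + 1
print(sp.expand(Q))   # 2*x - y**2/2 + 1
```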
Example \(\PageIndex{1}\): Finding 1st and 2nd degree Taylor Polynomials
Determine the \(1^{\text{st}}\)- and \(2^{\text{nd}}\)-degree Taylor polynomial approximations, \(L(x, y)\) & \(Q(x, y)\), for the following functions of \(x\) and \(y\) near the given point.
a. \(f(x, y) = \sin 2x + \cos y\) for \((x, y)\) near the point \((0, 0)\)
b. \(f(x, y) = xe^y + 1\) for \((x, y)\) near the point \((1, 0)\)
Solution
a. To determine the first-degree Taylor polynomial linear approximation, \(L(x, y)\), we first compute the partial derivatives of \(f\).
\[ f_x(x, y) = 2\cos 2x \quad \text{and} \quad f_y(x,y) = -\sin y \nonumber\]
Then evaluating these partials and the function itself at the point \((0,0)\) we have:
\[ \begin{align*} f(0,0) &= \sin 2(0) + \cos 0 = 1 \\ f_x(0,0) &= 2\cos 2(0) = 2 \\ f_y(0,0) &= -\sin 0 = 0 \end{align*} \nonumber\]
Now,
\[\begin{align*} L(x, y) &= f(0,0) + f_x(0,0) (x - 0) + f_y(0,0) (y - 0) \\
&= 1 + 2x \end{align*}\]
See the plot of this function and its linear approximation (the \(1^{\text{st}}\)-degree Taylor polynomial) in Figure \(\PageIndex{1}\).
Figure \(\PageIndex{1}\): Graph of \(f(x,y) = \sin 2x + \cos y \) and its \(1^{\text{st}}\)-degree Taylor polynomial, \(L(x,y) = 1 + 2x\)
To determine the second-degree Taylor polynomial (quadratic) approximation, \(Q(x, y)\), we need the second partials of \(f\):
\[ \begin{align*} f_{xx}(x,y) &= -4\sin 2x \\ f_{xy}(x,y) &= 0 \\ f_{yy}(x,y) &= -\cos y \end{align*}\]
Evaluating these 2nd partials at the point \((0,0)\):
\[ \begin{align*} f_{xx}(0,0) &= -4\sin 2(0) = 0 \\ f_{xy}(0,0) &= 0 \\ f_{yy}(0,0) &= -\cos 0 = -1 \end{align*}\]
Then,
\[\begin{align*} Q(x, y) &= L(x,y) + \frac{f_{xx}(0,0)}{2}(x-0)^2 + f_{xy}(0,0)(x-0)(y-0) + \frac{f_{yy}(0,0)}{2}(y-0)^2\\
&= 1 + 2x + \frac{0}{2}x^2 + (0)xy + \frac{-1}{2}y^2 \\ &= 1 + 2x - \frac{y^2}{2} \end{align*}\]
See the plot of the function \(f\) along with its quadratic approximation (the \(2^{\text{nd}}\)-degree Taylor polynomial) in Figure \(\PageIndex{2}\).
Figure \(\PageIndex{2}\): Graph of \(f(x,y) = \sin 2x + \cos y \) and its \(2^{\text{nd}}\)-degree Taylor polynomial, \(Q(x,y) = 1 + 2x - \frac{y^2}{2}\)
b. To determine the first-degree Taylor polynomial linear approximation, \(L(x, y)\), we first compute the partial derivatives of \(f(x, y) = xe^y + 1\) .
\[ f_x(x, y) = e^y \quad \text{and} \quad f_y(x,y) = xe^y \nonumber\]
Then evaluating these partials and the function itself at the point \((1,0)\) we have:
\[ \begin{align*} f(1,0) &= (1)e^0 + 1 = 2 \\ f_x(1,0) &= e^0 = 1 \\ f_y(1,0) &= (1)e^0 = 1 \end{align*} \nonumber\]
Now,
\[\begin{align*} L(x, y) &= f(1,0) + f_x(1,0) (x - 1) + f_y(1,0) (y - 0) \\
&= 2 + 1(x - 1) + 1y \\ &= 1 + x + y \end{align*}\]
See the plot of this function and its linear approximation (the \(1^{\text{st}}\)-degree Taylor polynomial) in Figure \(\PageIndex{3}\).
Figure \(\PageIndex{3}\): Graph of \(f(x, y) = xe^y + 1\) and its \(1^{\text{st}}\)-degree Taylor polynomial, \(L(x,y) = 1 + x + y\)
To determine the second-degree Taylor polynomial (quadratic) approximation, \(Q(x, y)\), we need the second partials of \(f\):
\[ \begin{align*} f_{xx}(x,y) &= 0 \\ f_{xy}(x,y) &= e^y \\ f_{yy}(x,y) &= xe^y \end{align*}\]
Evaluating these 2nd partials at the point \((1,0)\):
\[ \begin{align*} f_{xx}(1,0) &= 0 \\ f_{xy}(1,0) &= e^0 = 1 \\ f_{yy}(1,0) &= (1)e^0 = 1 \end{align*}\]
Then,
\[\begin{align*} Q(x, y) &= L(x,y) + \frac{f_{xx}(1,0)}{2}(x-1)^2 + f_{xy}(1,0)(x-1)(y-0) + \frac{f_{yy}(1,0)}{2}(y-0)^2\\
&= 1 + x + y + \frac{0}{2}(x-1)^2 + (1)(x-1)y + \frac{1}{2}y^2 \\ &= 1 + x + y + xy -y + \frac{y^2}{2} \\ &= 1 + x + xy + \frac{y^2}{2}\end{align*}\]
See the plot of the function \(f\) along with its quadratic approximation (the \(2^{\text{nd}}\)-degree Taylor polynomial) in Figure \(\PageIndex{4}\).
Figure \(\PageIndex{4}\): Graph of \(f(x, y) = xe^y + 1\) and its \(2^{\text{nd}}\)-degree Taylor polynomial, \(Q(x,y) = 1 + x + xy + \frac{y^2}{2}\)
Higher-Degree Taylor Polynomials of a Function of Two Variables
To calculate the Taylor polynomial of degree \(n\) for functions of two variables beyond the second degree, we need to work out the pattern that allows all the partials of the polynomial to be equal to the partials of the function being approximated at the point \((a,b)\), up to the given degree. That is, for \(P_3(x,y)\) we will need its first, second, and third partials to all match those of \(f(x,y)\) at the point \((a,b)\). For \(P_{10}(x,y)\) we would need all its partials up to the tenth partials to match those of \(f(x,y)\) at the point \((a,b)\).
If you work out this pattern, it gives us the following interesting formula for the \(n^{\text{th}}\)-degree Taylor polynomial of \(f(x, y)\), assuming all these partials exist.
Definition: \(n^{\text{th}}\)-degree Taylor Polynomial for a function of two variables
For a function of two variables \(f(x, y)\) whose partials all exist to the \(n^{\text{th}}\) partials at the point \((a, b)\), the
\(n^{\text{th}}\)-degree Taylor polynomial of \(f\) for \((x, y)\) near the point \((a, b)\) is:
\[P_n(x,y) = \sum_{i=0}^n \sum_{j=0}^{n - i} \frac{\frac{\partial^{\,i+j}f}{\partial x^i\,\partial y^{\,j}}(a,b) }{i!\,j!}(x-a)^i(y-b)^j \label{tpn}\]
Let's verify this formula for the second-degree Taylor polynomial. (We'll leave it to you to verify it for the first-degree Taylor polynomial.)
For \(n=2\), we have:
\[P_2(x,y) = \sum_{i=0}^2 \sum_{j=0}^{2 - i} \frac{\frac{\partial^{\,i+j}f}{\partial x^i\,\partial y^{\,j}}(a,b) }{i!\,j!}(x-a)^i(y-b)^j\]
Since \(i\) will start at \(0\) and continue to increase up to \(2\), while the value of \(j\) will start at \(0\) and increase to \(2-i\) for each value of \(i\), we would see the following values for \(i\) and \(j\):
\[\begin{align*} i = 0, && j = 0 \\ i = 0, && j = 1 \\ i = 0, && j = 2 \\ i = 1, && j = 0 \\ i = 1, && j = 1 \\ i = 2, && j = 0 \end{align*}\]
Then by the formula:
\[\begin{align*} P_2(x,y) &= \frac{f(a,b)}{0!0!}(x-a)^0(y-b)^0 + \frac{f_y(a,b)}{0!1!}(x-a)^0(y-b)^1 + \frac{f_{yy}(a,b)}{0!2!}(x-a)^0(y-b)^2 + \frac{f_x(a,b)}{1!0!}(x-a)^1(y-b)^0 + \frac{f_{xy}(a,b)}{1!1!}(x-a)^1(y-b)^1 + \frac{f_{xx}(a,b)}{2!0!}(x-a)^2(y-b)^0 \\
&= f(a,b) + f_y(a,b)(y-b) + \frac{f_{yy}(a,b)}{2}(y-b)^2 + f_x(a,b)(x-a) + f_{xy}(a,b)(x-a)(y-b) + \frac{f_{xx}(a,b)}{2}(x-a)^2 \\ &= f(a,b) + f_x(a,b)(x-a) + f_y(a,b)(y-b) + \frac{f_{xx}(a,b)}{2}(x-a)^2 + f_{xy}(a,b)(x-a)(y-b) + \frac{f_{yy}(a,b)}{2}(y-b)^2 \end{align*}\]
This equation is the same as Equation \ref{tp2} above.
Note that \(P_2(x,y)\) is the more formal notation for the second-degree Taylor polynomial \(Q(x,y)\).
Exercise \(\PageIndex{1}\): Finding a third-degree Taylor polynomial for a function of two variables
Now try to find the new terms you would need to find \(P_3(x,y)\) and use this new formula to calculate the third-degree Taylor polynomial for one of the functions in Example \(\PageIndex{1}\) above. Verify your result using a 3D function grapher like CalcPlot3D.
Answer
As you just found, the only new combinations of \(i\) and \(j\) would be:
\[\begin{align*} i = 0, && j = 3 \\ i = 1, && j = 2 \\ i = 2, && j = 1 \\ i = 3, && j = 0 \end{align*}\]
Note that these pairs include all the possible combinations of \(i\) and \(j\) that can add to \(3\). That is, these pairs correspond to all the possible third-degree terms we could have for a function of two variables \(x\) and \(y\), remembering that \(i\) represents the degree of \(x\) and \(j\) represents the degree of \(y\) in each term. If the point \((a,b)\) were \((0,0)\), the variable factors of these terms would be \(y^3\), \(xy^2\), \(x^2y\), and \(x^3\), respectively.
Then by the Equation \ref{tpn}:
\[P_3(x,y) = P_2(x,y) + \frac{f_{yyy}(a,b)}{0!3!}(x-a)^0(y-b)^3+ \frac{f_{xyy}(a,b)}{1!2!}(x-a)^1(y-b)^2+ \frac{f_{xxy}(a,b)}{2!1!}(x-a)^2(y-b)^1+ \frac{f_{xxx}(a,b)}{3!0!}(x-a)^3(y-b)^0\]
Simplifying, \[P_3(x,y) = P_2(x,y) + \frac{f_{yyy}(a,b)}{6}(y-b)^3+ \frac{f_{xyy}(a,b)}{2}(x-a)(y-b)^2+ \frac{f_{xxy}(a,b)}{2}(x-a)^2(y-b)+ \frac{f_{xxx}(a,b)}{6}(x-a)^3\]
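Here is a minimal SymPy sketch of Equation \ref{tpn} itself, which can be used to check the exercise; it reproduces \(Q(x,y)\) from Example \(\PageIndex{1}\)a and then adds the cubic term for \(P_3(x,y)\):

```python
import sympy as sp
from math import factorial

x, y = sp.symbols('x y')

def taylor_2d(f, a, b, n):
    """n-th degree Taylor polynomial of f(x, y) about (a, b), via the double sum."""
    P = 0
    for i in range(n + 1):
        for j in range(n - i + 1):
            partial = f
            if i:
                partial = sp.diff(partial, x, i)
            if j:
                partial = sp.diff(partial, y, j)
            coeff = partial.subs({x: a, y: b}) / (factorial(i) * factorial(j))
            P += coeff * (x - a)**i * (y - b)**j
    return sp.expand(P)

f = sp.sin(2*x) + sp.cos(y)
print(taylor_2d(f, 0, 0, 2))   # 2*x - y**2/2 + 1  (the Q(x,y) of Example 1a)
print(taylor_2d(f, 0, 0, 3))   # -4*x**3/3 + 2*x - y**2/2 + 1
```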
Contributors
Paul Seeburger (Monroe Community College)
Very well, this is really more of a mathematics than a physics post, but so what. Besides, equations of this nature do pop up in physics problems from time to time.
Question (variants of which often appear on Quora): How do you solve the equation, \(x^a=b^x\)?
Equations of the type
$$x^a=b^x$$
do not usually have solutions in terms of elementary functions, but they can be solved with the help of Lambert's W-function.
The W-function is defined as the solution to the equation
$$W(z)e^{W(z)} = z.$$
Which means that any equation that can be brought to the form \(x e^x = {\rm const.}\) can be solved using \(W(z).\)
So let's do a bit of algebra:
$$\begin{align*}
x^a &= b^x,\\ a\ln x &= x\ln b,\\ (1/x)\ln x &= (1/a)\ln b,\\ (1/x)\ln(1/x) &= -(1/a)\ln b. \end{align*}$$
Now let \(y=\ln(1/x),\) so that \(x=1/e^y.\) Then,
$$ye^y = -(1/a)\ln b,$$
so we can now solve for \(y\) in terms of the W-function:
$$y=W(-(\ln b)/a).$$
We also note that \(e^{W(z)}=z/W(z),\) which follows from the definition of the W-function. With this we get, for \(x,\)
$$x=\dfrac{1}{e^y}=-\dfrac{a}{\ln b}W\left(-\dfrac{\ln b}{a}\right).$$
This is the general solution.
Here is one specific example: \(x^5=8^x.\) So let \(a=5\) and \(b=8.\) Then we have
$$x=-\dfrac{5}{\ln 8}W\left(-\dfrac{\ln 8}{5}\right)\simeq 2.207-1.183i.$$
Yes, that is a complex number. This particular equation has no real solution. This can be seen easily if we plot the curves \(x^5\) (red) and \(8^x\) (green), as we can see that they do not intersect anywhere:
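If you want to check this numerically, SciPy implements the branches of the W-function as scipy.special.lambertw; a minimal sketch:

```python
import numpy as np
from scipy.special import lambertw

a, b = 5, 8
x = -(a / np.log(b)) * lambertw(-np.log(b) / a)   # principal branch (k=0)

print(x)            # approximately (2.207 - 1.183j)
print(x**a, b**x)   # the two sides agree, up to rounding
```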
Additionally, the solution can also be extended to equations of the form
$$cx^a=b^x.$$
These equations can be brought to the previous form as follows:
$$\begin{align*}
cx^a &= b^x\\ (c^{1/a}x)^a &= b^x\\ (c^{1/a}x)^a &= b^{c^{-1/a} c^{1/a} x}\\ (c^{1/a}x)^a &= (b^{c^{-1/a}})^{c^{1/a} x} \end{align*}$$
Now let
$$\begin{align*} X &= c^{1/a}x,\\ B &= b^{c^{-1/a}}, \end{align*}$$
and the equation becomes
$$X^a = B^X,$$
which we already know how to solve:
$$X=-\dfrac{a}{\ln B}W\left(-\dfrac{\ln B}{a}\right).$$
Substituting \(B\) and \(X\) and noting that \(\ln B=c^{-1/a}\ln b\), we get
$$\begin{align*}
c^{1/a}x &= -\dfrac{a}{c^{-1/a} \ln b} W\left(-\dfrac{c^{-1/a} \ln b}{a}\right),\\ x &= -\dfrac{a}{\ln b}W\left(-\dfrac{\ln b}{ac^{1/a}}\right). \end{align*}$$
One additional caveat concerns the ambiguity arising from the properties of exponentiation. For instance, the equation \(2^x=x^2\) has three real solutions, but the above formalism would yield only two of them (and we get those two because the W-function itself has multiple real values for part of its real domain). To obtain the third solution, we must recognize that \(x^2=(-x)^2\), so we must also solve \(2^{-y}=y^2\), i.e., \((1/2)^y=y^2\), with \(x=-y\). This yields the third real solution, at \(x\simeq -0.766665\).
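A minimal sketch of this bookkeeping for \(2^x=x^2\), using the two real branches of SciPy's lambertw plus the reflection trick for the negative root:

```python
import numpy as np
from scipy.special import lambertw

ln2 = np.log(2.0)

# x^2 = 2^x (a = 2, b = 2): the argument -ln(2)/2 lies in (-1/e, 0),
# so both real branches of W give a real solution.
x0 = -(2 / ln2) * lambertw(-ln2 / 2, k=0)    # -> 2
x1 = -(2 / ln2) * lambertw(-ln2 / 2, k=-1)   # -> 4

# Third solution: solve (1/2)^y = y^2 and set x = -y.
y  = -(2 / np.log(0.5)) * lambertw(-np.log(0.5) / 2, k=0)
x2 = -y                                      # -> about -0.766665

print(x0.real, x1.real, x2.real)
```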
And, of course, there are infinitely many complex solutions, corresponding to the various branches of the W-function. |
I came across John Duffield on Quantum Computing SE via this hot question. I was curious to see an account with 1 reputation and a question with hundreds of upvotes. It turned out that the reason why he has so little reputation despite a massively popular question is that he was suspended. May I ...
@Nelimee Do we need to merge? Currently, there's just one question with "phase-estimation" and another question with "quantum-phase-estimation". Might we as well use just one tag? (say just "phase-estimation")
@Blue 'merging', if I'm getting the terms right, is a specific single action that does exactly that and is generally preferable to editing tags on questions. Having said that, if it's just one question, it doesn't really matter although performing a proper merge is still probably preferable
Merging is taking all the questions with a specific tag and replacing that tag with a different one, on all those questions, on a tag level, without permanently changing anything about the underlying tags
@Blue yeah, you could do that. It generally requires votes, so it's probably not worth bothering when only one question has that tag
@glS "Every hermitian matrix satisfy this property: more specifically, all and only Hermitian matrices have this property" ha? I though it was only a subset of the set of valid matrices ^^ Thanks for the precision :)
@Nelimee if you think about it it's quite easy to see. Unitary matrices are the ones with phases as eigenvalues, while Hermitians have real eigenvalues. Therefore, if a matrix is not Hermitian (does not have real eigenvalues), then its exponential will not have eigenvalues of the form $e^{i\phi}$ with $\phi\in\mathbb R$. Although I'm not sure whether there could be exceptions for non diagonalizable matrices (if $A$ is not diagonalizable, then the above argument doesn't work)
This is an elementary question, but a little subtle so I hope it is suitable for MO.Let $T$ be an $n \times n$ square matrix over $\mathbb{C}$.The characteristic polynomial $T - \lambda I$ splits into linear factors like $T - \lambda_iI$, and we have the Jordan canonical form:$$ J = \begin...
@Nelimee no! unitarily diagonalizable matrices are all and only the normal ones (satisfying $AA^\dagger =A^\dagger A$). For general diagonalizability, if I'm not mistaken, one characterization is that the sum of the dimensions of the eigenspaces has to match the total dimension
@Blue I actually agree with Nelimee here that it's not that easy. You get $UU^\dagger = e^{iA} e^{-iA^\dagger}$, but if $A$ and $A^\dagger$ do not commute it's not straightforward that this doesn't give you an identity
I'm getting confused. I remember there being some theorem about one-to-one mappings between unitaries and hermitians provided by the exponential, but it was some time ago and may be confusing things in my head
@Nelimee if there is a $0$ there then it becomes the normality condition. Otherwise it means that the matrix is not normal, therefore not unitarily diagonalizable, but still the product of exponentials is relatively easy to write
@Blue you are right indeed. If $U$ is unitary then for sure you can write it as the exponential of a Hermitian (times $i$). This is easily proven because $U$ is ensured to be unitarily diagonalizable, so you can simply compute its logarithm through the eigenvalues. However, logarithms are tricky and multivalued, and there may be logarithms which are not diagonalizable at all.
I've actually recently asked some questions on math.SE on related topics
@Mithrandir24601 indeed, that was also what @Nelimee showed with an example above. I believe my argument holds for unitarily diagonalizable matrices. If a matrix is only generally diagonalizable (so it's not normal) then it's not true
also probably even more generally without $i$ factors
so, in conclusion, it does indeed seem that $e^{iA}$ unitary implies $A$ Hermitian. It therefore also seems that $e^{iA}$ unitary implies $A$ normal, so that also my argument passing through the spectra works (though one has to show that $A$ is ensured to be normal)
Now what we need to look for is 1) The exact set of conditions for which the matrix exponential $e^A$ of a complex matrix $A$, is unitary 2) The exact set of conditions for which the matrix exponential $e^{iA}$ of a real matrix $A$ is unitary
@Blue fair enough - as with @Semiclassical I was thinking about it with the t parameter, as that's what we care about in physics :P I can possibly come up with a number of non-Hermitian matrices that gives unitary evolution for a specific t
Or rather, the exponential of which is unitary for $t+n\tau$, although I'd need to check
If you're afraid of the density of diagonalizable matrices, simply triangularize $A$. You get $$A=P^{-1}UP,$$ with $U$ upper triangular and the eigenvalues $\{\lambda_j\}$ of $A$ on the diagonal.Then$$\mbox{det}\;e^A=\mbox{det}(P^{-1}e^UP)=\mbox{det}\;e^U.$$Now observe that $e^U$ is upper ...
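(A quick numerical sanity check of the identity used in that excerpt, $\mbox{det}\;e^A = e^{\mbox{tr}\,A}$, with an arbitrary complex matrix; this assumes SciPy is available:)

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))  # generic complex matrix

lhs = np.linalg.det(expm(A))
rhs = np.exp(np.trace(A))
print(np.isclose(lhs, rhs))   # True: det(e^A) = e^{tr A}
```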
There's 15 hours left on a bountied question, but the person who offered the bounty is suspended and his suspension doesn't expire until about 2 days, meaning he may not be able to award the bounty himself?That's not fair: It's a 300 point bounty. The largest bounty ever offered on QCSE. Let h... |
Nagoya Mathematical Journal, Volume 194 (2009), 91-147.
The absolute Galois group of the field of totally $S$-adic numbers
Abstract
For a finite set $S$ of primes of a number field $K$ and for $\sigma_{1}, \dots, \sigma_{e} \in \operatorname{Gal}(K)$ we denote the field of totally $S$-adic numbers by $K_{{\rm tot}, S}$ and the fixed field of $\sigma_{1}, \dots, \sigma_{e}$ in $K_{{\rm tot}, S}$ by $K_{{\rm tot}, S}({\boldsymbol\sigma})$. We prove that for almost all ${\boldsymbol\sigma} \in \operatorname{Gal}(K)^{e}$ the absolute Galois group of $K_{{\rm tot}, S}({\boldsymbol\sigma})$ is the free product of ${\hat F}_{e}$ and a free product of local factors over $S$.
Article information
Source: Nagoya Math. J., Volume 194 (2009), 91-147.
Dates: First available in Project Euclid: 17 June 2009
Permanent link to this document: https://projecteuclid.org/euclid.nmj/1245209126
Mathematical Reviews number (MathSciNet): MR2536528
Zentralblatt MATH identifier: 1261.12006
Subjects: Primary: 12E30: Field arithmetic
Citation
Haran, Dan; Jarden, Moshe; Pop, Florian. The absolute Galois group of the field of totally $S$-adic numbers. Nagoya Math. J. 194 (2009), 91--147. https://projecteuclid.org/euclid.nmj/1245209126 |
The rates of diffusion of two gases $\ce{A}$ and $\ce{B}$ are in the ratio $1:4$. If the ratio of their masses present in the mixture is $2:3$, the ratio of their mole fraction is?
My attempt:
$\frac{r_1}{r_2}=\frac{1}{4}=\sqrt{\frac{M_2}{M_1}}\to\frac{1}{16}=\frac{M_2}{M_1}$, $M_i$ is molar mass of $i$.
Also given $\frac{w_1}{w_2}=\frac{2}{3}$, multiplying this by above one gives $\frac{M_2\cdot w_1}{M_1\cdot w_2}=\frac{2}{3\cdot 16}\to\frac{n_1}{n_2}=\frac{1}{24}$, where $n_i$ represents number of moles of $i$.
But this is not the correct answer; please help.
Third, since $\sf{L} \subseteq \sf{NC}^2$, is there an algorithm to convert any logspace algorithm into a parallel version?
It can be shown (see the Arora and Barak textbook) that, given a $t(n)$-time TM $M$, one can construct an oblivious TM $M'$ (i.e. a TM whose head movements are independent of its input $x$) and, from it, a circuit $C_n$ that computes $M(x)$ for $|x| = n$.
The proof sketch is along the lines of having $M'$ simulate $M$ and defining "snapshots" of its state (i.e. head positions, symbols at the heads) at each time-step $t_i$ (think of a computational log). Each snapshot at $t_i$ can be computed from $x$ and the snapshot at $t_{i-1}$. Because each snapshot involves only a constant-sized string, and there exist only a constant number of strings of that size, the snapshot at $t_i$ can be computed by a constant-sized circuit.
If you compose the constant-sized circuits for each $t_i$, you get a circuit that computes $M(x)$. Using this fact, along with the restriction that the language of $M$ is in $\sf{L}$, we see that our circuit family is by definition logspace-uniform, where uniformity just means that the circuits in the family $\{C_n\}$ computing $M(x)$ all come from the same algorithm, not a custom-made algorithm for each circuit operating on input size $n$.
Again, from the definition of uniformity we see that the circuits deciding any language in $\sf{L}$ must have a size function $\text{size}(n)$ computable in $O(\log n)$ space. The circuit family $\sf{AC}^1$ has at most $O(\log n)$ depth.
Finally it can be shown that $\sf{AC}^1 \subseteq \sf{NC}^2$ giving the relation in question.
Fourth, it sounds like most people assume that $\sf{NC} \neq \sf{P}$ in the same way that $\sf{P} \neq \sf{NP}$. What is the intuition behind this?
Before we go further, let us define what $\sf{P}$-completeness means.
A language $L$ is $\sf{P}$-complete if $L \in \sf{P}$ and every language in $\sf{P}$ is logspace reducible to it. Additionally, if $L$ is $\sf{P}$-complete then the following are true
$L \in \sf{NC} \iff \sf{P} = \sf{NC}$
$L \in \sf{L} \iff \sf{P} = \sf{L}$
Now we consider $\sf{NC}$ to be the class of languages efficiently decided by a parallel computer (our circuit). There are some problems in $\sf{P}$ that seem to resist any attempt at parallelization (e.g. Linear Programming and the Circuit Value Problem). That is to say, certain problems seem to require computation to be done in a step-wise fashion.
For example, the Circuit Value Problem is defined as:
Given a circuit $C$, input $x$, and a gate $g \in C$, what is the output of $g$ on $C(x)$?
We do not know how to compute this any better than computing all the gates $g'$ that come before $g$. Granted, some of them may be computed in parallel (for example, if they all occur at the same time-step $t_i$), but we don't know how to compute the outputs of gates at time-step $t_i$ and time-step $t_{i+1}$ together, for the obvious reason that gates at $t_{i+1}$ require the outputs of gates at $t_i$!
This is the intuition behind $\sf{NC} \neq \sf{P}$.
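To make the step-wise picture concrete, here is a minimal sketch (with a purely illustrative gate encoding, not any standard format) that evaluates a circuit gate by gate; each gate has to wait for the values of the wires it reads:

```python
# Wires 0..len(inputs)-1 carry the input bits; gate i produces wire len(inputs)+i.
# The gate list is assumed to be in topological order.
def circuit_value(inputs, gates, target):
    values = list(inputs)
    for op, args in gates:
        if op == "AND":
            values.append(values[args[0]] & values[args[1]])
        elif op == "OR":
            values.append(values[args[0]] | values[args[1]])
        elif op == "NOT":
            values.append(1 - values[args[0]])
    return values[target]

# (x0 AND x1) OR (NOT x2): the gates produce wires 3, 4 and 5
gates = [("AND", (0, 1)), ("NOT", (2,)), ("OR", (3, 4))]
print(circuit_value([1, 0, 0], gates, target=5))   # 1
```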
Limits to Parallel Computation is a book about $\sf{P}$-Completeness, in a similar vein to Garey & Johnson's $\sf{NP}$-Completeness book.
I've become confused about spherical coordinates when dealing with electric fields.
The way I always understood spherical coordinates is something like the below picture. To define a vector, you give it a distance outwards (r), and two angles to get a final position. Below, the $\theta$ and $\phi$ components are measured in radians.
(Courtesy Wikipedia.org)
However, you can also have, say, an electric field in spherical coordinates. In this case, the $\theta$ and $\phi$ components don't represent angles but rather values of the vector field in those directions. So, in the case of electric fields, we might have $E_\theta = 10\text{ V/m}$. That is, at every point there will be this electric field component in the theta direction.
So, it seems there are two different ways of dealing with spherical coordinates. One, where the $\theta$ and $\phi$ components represent angles, and one where they represent values of the components in those directions.
This would then give you two different measures of lengths of the vectors. In the first case, the length of the vector is always given by the r component. In the second case, you take $|\vec{E}|=\sqrt{E_r^2+E_\theta^2+E_\phi^2}$.
What am I mixing up here? |
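For concreteness, a minimal numpy sketch of the second usage (all numbers below are invented): $E_r$, $E_\theta$, $E_\phi$ are components along the local orthonormal unit vectors $\hat{r}$, $\hat{\theta}$, $\hat{\phi}$ at the point, which is why $|\vec{E}|=\sqrt{E_r^2+E_\theta^2+E_\phi^2}$ gives the length:

```python
import numpy as np

# A point given in spherical coordinates (r, theta, phi) -- these ARE a distance and two angles
r, theta, phi = 2.0, np.pi / 3, np.pi / 4

# Local orthonormal unit vectors at that point (physics convention, theta = polar angle)
r_hat     = np.array([np.sin(theta)*np.cos(phi), np.sin(theta)*np.sin(phi), np.cos(theta)])
theta_hat = np.array([np.cos(theta)*np.cos(phi), np.cos(theta)*np.sin(phi), -np.sin(theta)])
phi_hat   = np.array([-np.sin(phi), np.cos(phi), 0.0])

# A field vector at that point, given by components along those unit vectors (NOT angles)
E_r, E_theta, E_phi = 3.0, 10.0, -2.0
E = E_r * r_hat + E_theta * theta_hat + E_phi * phi_hat      # same vector in Cartesian form

print(np.linalg.norm(E), np.sqrt(E_r**2 + E_theta**2 + E_phi**2))   # equal
```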
For simplicity, in the following we set the electric charge $e=1$ and consider a lattice spinless free electron system in an external static magnetic field $\mathbf{B}=\nabla\times\mathbf{A}$ described by the Hamiltonian $H=\sum_{ij}t_{ij}c_i^\dagger c_j$, where $t_{ij}=\left | t_{ij} \right |e^{iA_{ij}}$ with the corresponding lattice gauge-field $A_{ij}$. As we know the transformation $\mathbf{A}\rightarrow \mathbf{A}+\nabla\theta$ does not change the physical magnetic field $\mathbf{B}$, and the induced transformation in Hamiltonian reads $$H\rightarrow H'=\sum_{ij}t_{ij}'c_i^\dagger c_j$$ with $t_{ij}'=e^{i\theta_i}t_{ij}e^{-i\theta_j}$. Now my confusion point is:
Do these two Hamiltonians $H$ and $H'$ describe the same physics? Or do they describe some of the same quantum states? What common physical properties do they share?
I just know $H$ and $H'$ have the same spectrum, thank you very much. |
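For reference, a minimal numpy sketch (a random Hermitian hopping matrix on a toy six-site lattice; purely illustrative) checking the statement about the spectrum: since $H' = U H U^\dagger$ with $U = \mathrm{diag}(e^{i\theta_i})$, the eigenvalues are unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

# Random Hermitian "hopping" matrix t_ij = |t_ij| e^{i A_ij} on a small lattice
T = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = T + T.conj().T                          # make it Hermitian

# Gauge transformation: t'_ij = e^{i theta_i} t_ij e^{-i theta_j}
theta = rng.uniform(0, 2 * np.pi, n)
U = np.diag(np.exp(1j * theta))             # diagonal unitary
H_prime = U @ H @ U.conj().T

# Same spectrum, since H' = U H U^dagger
print(np.allclose(np.linalg.eigvalsh(H), np.linalg.eigvalsh(H_prime)))   # True
```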
CentralityBin ()
CentralityBin (const char *name, Float_t low, Float_t high)
CentralityBin (const CentralityBin &other)
virtual ~CentralityBin ()
CentralityBin & operator= (const CentralityBin &other)
Bool_t IsAllBin () const
Bool_t IsInclusiveBin () const
const char * GetListName () const
virtual void CreateOutputObjects (TList *dir, Int_t mask)
virtual Bool_t ProcessEvent (const AliAODForwardMult *forward, UInt_t triggerMask, Bool_t isZero, Double_t vzMin, Double_t vzMax, const TH2D *data, const TH2D *mc, UInt_t filter, Double_t weight)
virtual Double_t Normalization (const TH1I &t, UShort_t scheme, Double_t trgEff, Double_t &ntotal, TString *text) const
virtual void MakeResult (const TH2D *sum, const char *postfix, bool rootProj, bool corrEmpty, Double_t scaler, Int_t marker, Int_t color, TList *mclist, TList *truthlist)
virtual bool End (TList *sums, TList *results, UShort_t scheme, Double_t trigEff, Double_t trigEff0, Bool_t rootProj, Bool_t corrEmpty, Int_t triggerMask, Int_t marker, Int_t color, TList *mclist, TList *truthlist)
Int_t GetColor (Int_t fallback=kRed+2) const
void SetColor (Color_t colour)
TList * GetResults () const
const char * GetResultName (const char *postfix="") const
TH1 * GetResult (const char *postfix="", Bool_t verbose=true) const
void SetDebugLevel (Int_t lvl)
void SetSatelliteVertices (Bool_t satVtx)
virtual void Print (Option_t *option="") const
const Sum * GetSum (Bool_t mc=false) const
Sum * GetSum (Bool_t mc=false)
const TH1I * GetTriggers () const
TH1I * GetTriggers ()
const TH1I * GetStatus () const
TH1I * GetStatus ()
Calculations done per centrality. These objects are only used internally and are never streamed. We do not make dictionaries for this (and derived) classes as they are constructed on the fly.
Definition at line 701 of file AliBasedNdetaTask.h.
Calculate the Event-Level normalization.
The full event level normalization for trigger \(X\) is given by
\begin{eqnarray*} N &=& \frac{1}{\epsilon_X} \left(N_A+\frac{N_A}{N_V}(N_{-V}-\beta)\right)\\ &=& \frac{1}{\epsilon_X}N_A \left(1+\frac{1}{N_V}(N_T-N_V-\beta)\right)\\ &=& \frac{1}{\epsilon_X}N_A \left(1+\frac{N_T}{N_V}-1-\frac{\beta}{N_V}\right)\\ &=& \frac{1}{\epsilon_X}N_A \left(\frac{1}{\epsilon_V}-\frac{\beta}{N_V}\right) \end{eqnarray*}
where
\(\epsilon_X=\frac{N_{T,X}}{N_X}\) is the trigger efficiency evaluated in simulation.
\(\epsilon_V=\frac{N_V}{N_T}\) is the vertex efficiency evaluated from the data.
\(N_X\) is the Monte-Carlo truth number of events of type \(X\).
\(N_{T,X}\) is the Monte-Carlo truth number of events of type \(X\) which were also triggered as such.
\(N_T\) is the number of data events that were triggered as type \(X\) and had a collision trigger (CINT1B).
\(N_V\) is the number of data events that were triggered as type \(X\), had a collision trigger (CINT1B), and had a vertex.
\(N_{-V}\) is the number of data events that were triggered as type \(X\), had a collision trigger (CINT1B), but no vertex.
\(N_A\) is the number of data events that were triggered as type \(X\), had a collision trigger (CINT1B), and had a vertex in the selected range.
\(\beta=N_a+N_c-N_e\) is the number of control triggers that were also triggered as type \(X\).
\(N_a\) is the number of beam-empty events also triggered as type \(X\) events (CINT1-A or CINT1-AC).
\(N_c\) is the number of empty-beam events also triggered as type \(X\) events (CINT1-C).
\(N_e\) is the number of empty-empty events also triggered as type \(X\) events (CINT1-E).
Note that if \( \beta \ll N_A\) the last term can be ignored, and the expression simplifies to the following (illustrated numerically in the sketch after the parameter list):
\[ N = \frac{1}{\epsilon_X}\frac{1}{\epsilon_V}N_A \]
Parameters
t: Histogram of triggers
scheme: Normalisation scheme
trgEff: Trigger efficiency
ntotal: On return, the total number of events to normalise to
text: If non-null, fill with normalization calculation

Returns: \(N_A/N\), or a negative number in case of errors.
Definition at line 1784 of file AliBasedNdetaTask.cxx. |
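For orientation only, here is a rough Python sketch of the normalization arithmetic above, with invented event counts (this is not the AliROOT implementation):

```python
# Hypothetical event counts for some trigger class X (illustrative numbers only)
N_T   = 100000.0   # triggered events with a collision trigger
N_V   = 90000.0    # ... that also had a vertex
N_A   = 60000.0    # ... with a vertex in the selected range
beta  = 50.0       # control-trigger background, N_a + N_c - N_e
eps_X = 0.95       # trigger efficiency from simulation

eps_V = N_V / N_T                                             # vertex efficiency from data

N_full   = (1.0 / eps_X) * N_A * (1.0 / eps_V - beta / N_V)   # full expression
N_simple = N_A / (eps_X * eps_V)                              # valid when beta << N_A

print(N_full, N_simple)   # nearly identical here because beta is small
```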
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... |
X-ray protein crystallography is a technique by which it is possible to determine the three-dimensional position of each atom in a protein. Now over 100 years old, x-ray crystallography was first used to determine the three-dimensional structures of inorganic materials, then small organic molecules, and finally macromolecules like DNA and proteins. To date, about 100,000 protein structures have been published in the Protein Data Bank, with almost 10,000 added every year. To use this technique, the crystallographer obtains protein crystals, records the diffraction pattern formed by x-rays passed through the crystals, and then interprets the data using a computer. The result is an atomic-resolution model of a protein.
History
Though crystal symmetry was explored in the late 1600s by Danish scientist Nicolas Steno, and continuing efforts by René Just Haüy and William Hallowes Miller in 1839 firmly established that a crystal is an ordered lattice, it wasn't until the discovery of x-rays in 1895 and the proof of their diffraction by Max von Laue in 1912 that crystallography as a science began.
After the use of x-ray crystallography to deduce the lattice structure of table salt in 1914, the father and son team of William Henry Bragg and William Lawrence Bragg shared the 1915 Nobel Prize in Physics for the development of Bragg's law,
\[ n \lambda = 2 d \sin(\theta), \]
which relates an x-ray diffraction pattern with the three-dimensional structure of a crystal.
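As a quick numerical illustration of Bragg's law (illustrative numbers: a copper K-alpha wavelength and an assumed 4 Å plane spacing):

```python
import numpy as np

# Bragg's law: n * lambda = 2 * d * sin(theta)
wavelength = 1.54   # angstroms (Cu K-alpha, a common laboratory wavelength)
d = 4.0             # angstroms, assumed spacing between lattice planes
n = 1               # first-order reflection

theta = np.degrees(np.arcsin(n * wavelength / (2 * d)))
print(f"First-order Bragg angle: {theta:.1f} degrees")   # about 11.1 degrees
```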
The field has received numerous Nobel Prizes over the years, including the 1964 chemistry prize to Dorothy Crowfoot Hodgkin, who solved the structures of the small molecules cholesterol, penicillin, and vitamin B12, and the 1962 chemistry prize to Max Perutz and John Cowdery Kendrew for their work on the structures of globular proteins (hemoglobin and sperm whale myoglobin). David Chilton Phillips solved the first structure of an enzyme, lysozyme, in 1965.
The early 70s saw the birth of the Research Collaboratory for Structural Bioinformatics' Protein Data Bank. The PDB began with 13 structures in 1976 and has grown to the "single worldwide archive of structural data of biological macromolecules".
Technique
Obtaining crystals
The first and least certain step in crystallography of a protein is obtaining crystals of the protein of interest. Obtaining suitable amounts of the protein of interest is usually carried out in a straightforward manner using established molecular biology techniques such as molecular cloning and affinity chromatography. However, the crystallization step remains the bottleneck for this technique, with some proteins (particularly proteins that exist in the aliphatic environment of the plasma membrane) remaining intransigent to crystallization even in the face of the most diligent crystallographers. Thus, for each protein of interest, a large number of crystallization conditions must be tried, necessitating a relatively large amount (milligrams) of the pure protein.
Protein production and purification
To produce suitable amounts of protein, contemporary crystallographers turn to molecular biology's old friend
Escherichia coli. A gene which codes for the protein of interest is cloned into a small, circular piece of DNA known as an expression plasmid. The expression of the gene is typically under the control of an inducible promoter, and is regulated by the researcher rather than the bacteria. Cells are transformed with the expression plasmid, grown to high density, and induced to express the protein of interest. The cells are lysed chemically with detergents or physically with sonication, and the protein is purified, typically via affinity chromatography. High purity (greater than 95%) is desirable. Often, it takes multiple experiments before the method that obtains maximum protein is found.
Crystallization
The concentrated protein solution obtained is then subjected to a wide variety of crystallization conditions. Since we have no way of knowing
a priori which set of conditions is right for obtaining crystals of a given protein, many different conditions are tried in parallel using a technique called drop diffusion.
In this technique, a small quantity (typically a microliter) of concentrated protein solution is mixed with an equal volume of precipitant. This drop is separated by air from a large volume of precipitant solution. The drop is hypotonic to the precipitant and slowly equilibrates to the concentration of the large volume of precipitant. Concomitantly, the concentration of protein increases. If this process occurs at just the right rate, the protein precipitates out of solution into an ordered lattice structure: a protein crystal.
It is often said that this part of crystallography is more of an art than a science, and indeed there is little theoretical guidance available to the crystallographer who wishes to crystallize a new protein. Patience, and to some extent luck, determine the success or failure of the crystallization of any particular protein.
Obtaining x-ray diffraction data
Once crystals of suitable size and composition are obtained, it is necessary to bombard the crystal with x-rays and observe the diffraction pattern. An x-ray diffractometer works in a similar manner to a light microscope. In a light microscope, the subject is irradiated with visible light (400 nm < \( \lambda \) < 700 nm), which is focused by a lens onto the retina, producing a macroscopic image of a microscopic object. Molecules such as proteins are much smaller than microscopic structures like cells, and, as such, require that a shorter wavelength of radiation be used during diffraction. X-rays, where 100 pm < \(\lambda\) < 10,000 pm, are the perfect size to diffract around atoms (32--225 pm), bonds (74--267 pm), and molecules (100 pm to hundreds of angstroms). However, x-rays are difficult to focus in a manner analogous to the way a lens focuses visible light. Crystallographers employ computational methods to capture the x-ray scattering pattern (pictured at right) and infer the three-dimensional positions of atoms in a molecule.
X-ray sources
Traditionally, x-ray crystallographers filtered and directed the x-rays generated by laboratory x-ray tubes (typically copper-anode sealed-tube or rotating-anode sources) in their diffractometers, but today it is much more common to use synchrotron radiation to irradiate samples. Synchrotrons, huge hollow rings used to accelerate electrons for use in studies of subatomic particles, produce huge amounts of tunable (different wavelengths) x-ray radiation that is perfect for irradiating crystals.
Sample preparation
The crystal is suspended in aqueous solution containing a cryoprotectant in the eye of a small loop. The crystal and loop are cooled with a continuous stream of liquid nitrogen to prevent chemical damage by the x-rays. X-rays are directed through the crystal, and the diffraction pattern at any given moment is recorded by a detector. The crystal is rotated slightly and a new diffraction pattern is obtained. This process is repeated through 360 degrees along one axis (typically rotations through a smaller angle on another axis are also recorded to avoid blind spots) until the instrument has recorded a diffraction pattern for each position.
X-ray scattering
As an incident x-ray (electromagnetic wave) overlaps with an electron, it is elastically scattered, generating a secondary wave that has the same wavelength, but different direction, than the incident wave (thus the wave is "scattered" or "diffracted"). Due to the symmetry of the crystal and its many repeated units, these secondary waves interfere constructively at only one point along a circle drawn around the atom that scattered them. It is that point, described by Bragg's Law, that appears as a dark spot on the detector. An example diffraction pattern, from a SARS protease, is displayed at right.
Obtaining an electron density map
The data recorded by the detector during diffraction are now subjected to computational analysis. First, each spot in each diffraction image is indexed, integrated, merged, and scaled by a computer, producing a single text file from thousands of images. The position of each spot depends on the properties of the crystal, and as such is different for every protein. The process of converting the reciprocal space-representation of the crystal into an interpretable electron density map is known as phasing.
Shown below is the software PyMOL displaying the electron density map (white) for Protein Data Bank structure 4BLL, a peroxidase from the model organism
Pleurotus ostreatus, overlaid with the model from the PDB structure (pink).
Fundamental challenge of crystallography
The problem is this: the detector is only able to record the position and intensity of an x-ray when it hits the detector. An x-ray has both intensity, which is related to the amplitude of the wave, and phase, which is related to the point at which the x-ray was scattered. The crystallographer would dearly like to know the phase of each reflection, because this information, along with the intensity, is what is needed to reconstruct the electron density; but the detector is not capable of capturing phase information, due to the quantum mechanical nature of x-rays and electrons.
Overcoming the challenge: phasing
Crystallographers use several methods for recovering phase information from diffraction data. Common techniques include:
direct methods
molecular replacement
anomalous x-ray scattering
multiple isomorphous replacement
Direct methods use the Sayre equation to determine phases directly from the diffraction data. These methods are only viable for small (less than 1000 atoms) molecules and are not typically used in protein crystallography. The 1985 Nobel Prize in chemistry was awarded to Hauptman and Karle, who developed these methods.
The technique of molecular replacement uses the solved crystal structure of a homologous protein to provide a "seed" electron density map that can then be refined by a computer. Molecular replacement is used extensively in labs that solve the crystal structures of several mutants of a given protein.
Anomalous x-ray scattering relies on protein production in a host that is incapable of producing the amino acid methionine. The host instead uses a synthetic amino acid, selenomethionine, in which methionine's sulfur atom has been replaced by selenium. The positions of the selenomethionines in the crystal can be solved using data collected at different x-ray wavelengths and direct methods, and the rest of the structure can be solved using the positions of the selenomethionines as a reference.
Multiple isomorphous replacement has largely been superseded by anomalous x-ray scattering and works in a similar way, except using metal ions instead of synthetic amino acids for initial phasing.
Once phases have been recovered, it is possible to mathematically reconstruct the positions of electrons within the crystal using Fourier synthesis. (The diffraction data is the Fourier transform of the electron density in the unit cell.) A computer applying these operations with correct phases constructs a three-dimensional electron density map that can be viewed with molecular visualization software. The resolution of the data determines the resolution of the model, as depicted below with electron density maps of tryptophan at three different resolutions.
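As a toy illustration of Fourier synthesis and the phase problem (a deliberately simplified one-dimensional "crystal"; real software works with three-dimensional structure factors):

```python
import numpy as np

# Toy 1D "unit cell": electron density sampled on a grid (two Gaussian "atoms")
grid = 64
x = np.arange(grid) / grid
rho_true = np.exp(-((x - 0.3) / 0.03)**2) + 0.5 * np.exp(-((x - 0.7) / 0.05)**2)

# "Diffraction": structure factors are the Fourier transform of the density
F = np.fft.fft(rho_true)
amplitudes = np.abs(F)     # what the detector effectively measures (via intensities)
phases = np.angle(F)       # lost in the experiment -- the phase problem

# Fourier synthesis: with amplitudes AND phases, the density comes right back
rho_rebuilt = np.fft.ifft(amplitudes * np.exp(1j * phases)).real
print(np.allclose(rho_rebuilt, rho_true))   # True

# With random phases the "map" is garbage -- hence the need for phasing methods
rng = np.random.default_rng(0)
rho_wrong = np.fft.ifft(amplitudes * np.exp(1j * rng.uniform(0, 2*np.pi, grid))).real
print(np.allclose(rho_wrong, rho_true))     # False
```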
Obtaining a three-dimensional model
With sufficient resolution (better than about 1.5 Å), it is possible to automatically generate a model based on the electron density map, known bond angles and lengths, and the known sizes of atoms. In practice, not all crystallography data is of such high quality. Often, the crystallographer uses molecular visualization software to manually fit a chemical model to the electron density data. The result is a model that can be viewed with molecular visualization software. An example, drawn in PyMOL from PDB structure 4BLL, is below.
Further reading
Bragg WL (1914). "The analysis of crystals by the X-ray spectrometer". Proc. R. Soc. Lond. A89 (613): 468
Glusker JP and Trueblood KN, 1972. Crystal structure analysis: a primer. Oxford University Press. [Reprint: OUP Oxford, May 27, 2010]
Drenth J. Principles of Protein X-Ray Crystallography. Springer, Apr 5, 2007
Hauptman H, 1997. "Phasing methods for protein crystallography". Curr. Opin. Struct. Biol. 7 (5): 672–80
Chernov AA (2003). "Protein crystals and their growth". J. Struct. Biol. 142 (1): 3–21
This is not really an answer to your question, essentially because there isn't (currently) a question in your post, but it is too long for a comment.
Your statement that
A co-ordinate transformation is linear map from a vector to itself with a change of basis.
is muddled and ultimately incorrect. Take some vector space $V$ and two bases $\beta$ and $\gamma$ for $V$. Each of these bases can be used to establish a representation map $r_\beta:\mathbb R^n\to V$, given by$$r_\beta(v)=\sum_{j=1}^nv_j e_j$$if $v=(v_1,\ldots,v_n)$ and $\beta=\{e_1,\ldots,e_n\}$. The coordinate transformation is
not a linear map from $V$ to itself. Instead, it is the map$$r_\gamma^{-1}\circ r_\beta:\mathbb R^n\to\mathbb R^n,\tag 1$$and takes coordinates to coordinates.
Now, to go to the heart of your confusion, it should be stressed that
covectors are not members of $V$; as such, the representation maps do not apply to them directly in any way. Instead, they belong to the dual space $V^\ast$, which I'm hoping you're familiar with. (In general, I would strongly discourage you from reading texts that pretend to lay down the law on the distinction between vectors and covectors without talking at length about the dual space.)
The dual space is the vector space of all linear functionals from $V$ into its scalar field:$$V^*=\{\varphi:V\to\mathbb R:\varphi\text{ is linear}\}.$$This has the same dimension as $V$, and any basis $\beta$ has a unique dual basis $\beta^*=\{\varphi_1,\ldots,\varphi_n\}$ characterized by $\varphi_i(e_j)=\delta_{ij}$. Since it is a different basis to $\beta$, it is not surprising that the corresponding representation map is different.
To lift the representation map to the dual vector space, one needs the notion of the adjoint of a linear map. As it happens, there is in general no way to lift a linear map $L:V\to W$ to a map from $V^*$ to $W^*$; instead, one needs to reverse the arrow. Given such a map, a functional $f\in W^*$ and a vector $v\in V$, there is only one combination which makes sense, which is $f(L(v))$. The mapping $$v\mapsto f(L(v))$$ is a linear mapping from $V$ into $\mathbb R$, and it's therefore in $V^*$. It is denoted by $L^*(f)$, and defines the action of the adjoint $$L^*:W^*\to V^*.$$
If you apply this to the representation maps on $V$, you get the adjoints $r_\beta^*:V^*\to\mathbb R^{n,*}$, where the latter is canonically equivalent to $\mathbb R^n$ because it has a canonical basis. The inverse of this map, $(r_\beta^*)^{-1}$, is the representation map $r_{\beta^*}:\mathbb R^n\cong\mathbb R^{n,*}\to V^*$. This is the origin of the 'inverse transpose' rule for transforming covectors.
To get the transformation rule for covectors between two bases, you need to string two of these together:$$\left((r_\gamma^*)^{-1}\right)^{-1}\circ(r_\beta^*)^{-1}=r_\gamma^*\circ (r_\beta^*)^{-1}:\mathbb R^n\to \mathbb R^n,$$which is very different to the one for vectors, (1).
Still think that vectors and covectors are the same thing?
Addendum
Let me, finally, address another misconception in your question:
An inner product is between elements of the same vector space and not between two vector spaces, it is not how it is defined.
Inner products are indeed defined by taking both inputs from the same vector space. Nevertheless, it is still perfectly possible to define a bilinear form $\langle \cdot,\cdot\rangle:V^*\times V\to\mathbb R$ which takes one covector and one vector to give a scalar; it is simply the action of the former on the latter:$$\langle\varphi,v\rangle=\varphi(v).$$This bilinear form is always guaranteed and presupposes strictly
less structure than an inner product. This is the 'inner product' which reads $\varphi_j v^j$ in Einstein notation.
Of course, this does relate to the inner product structure $ \langle \cdot,\cdot\rangle_\text{I.P.}$ on $V$ when there is one. Having such a structure enables one to identify vectors and covectors in a canonical way: given a vector $v$ in $V$, its corresponding covector is the linear functional$$\begin{align}i(v)=\langle v,\cdot\rangle_\text{I.P.} : V&\longrightarrow\mathbb R \\w&\mapsto \langle v,w\rangle_\text{I.P.}.\end{align}$$By construction, both bilinear forms are canonically related, so that the 'inner product' $\langle\cdot,\cdot\rangle$ between $v\in V^*$ and $w\in V$ is exactly the same as the inner product $\langle\cdot,\cdot\rangle_\text{I.P.}$ between $i(v)\in V$ and $w\in V$. That use of language is perfectly justified.
Addendum 2, on your question about the gradient.
I should really try and convince you at this point that the transformation laws are in fact enough to show something is a covector. (The way the argument goes is that one can define a linear functional on $V$ via the form in $\mathbb R^{n*}$ given by the components, and the transformation laws ensure that this form in $V^*$ is independent of the basis; alternatively, given the components $f_\beta,f_\gamma\in\mathbb R^n$ with respect to two basis, the representation maps give the forms $r_{\beta^*}(f_\beta)=r_{\gamma^*}(f_\gamma)\in V^*$, and the two are equal because of the transformation laws.)
However, there is indeed a deeper reason for the fact that the gradient is a covector. Essentially, it is to do with the fact that the equation$$df=\nabla f\cdot dx$$does not actually need a dot product; instead, it relies on the simpler structure of the dual-primal bilinear form $\langle \cdot,\cdot\rangle$.
To make this precise, consider an arbitrary function $T:\mathbb R^n\to\mathbb R^m$. The derivative of $T$ at $x_0$ is defined to be the (unique) linear map $dT_{x_0}:\mathbb R^n\to\mathbb R^m$ such that$$T(x)=T(x_0)+dT_{x_0}(x-x_0)+O(|x-x_0|^2),$$if it exists. The gradient is exactly this map; it was
born as a linear functional, whose coordinates over any basis are $\frac{\partial f}{\partial x_j}$ to ensure that the multi-dimensional chain rule,$$df=\sum_j \frac{\partial f}{\partial x_j}d x_j,$$is satisfied. To make things easier to understand to undergraduates who are fresh out of 1D calculus, this linear map is most often 'dressed up' as the corresponding vector, which is uniquely obtainable through the Euclidean structure, and whose action must therefore go back through that Euclidean structure to get to the original $df$.
Addendum 3.
OK, it is now sort of clear what the main question is (unless that changes again), though it is still not particularly clear in the question text. The thing that needs addressing is stated in the OP's answer in this thread:
the dual vector space is itself a vector space and the fact that it needs to be cast off as a row matrix is based on how we calculate linear maps and not on what linear maps actually are. If I had defined matrix multiplication differently, this wouldn't have happened.
I will also address, then, this question:
given that the dual (/cotangent) space is also a vector space, what forces us to consider it 'distinct' enough from the primal that we display it as row vectors instead of columns, and say its transformation laws are different?
The main reason for this is well addressed by Christoph in his answer, but I'll expand on it. The notion that something is co- or contra-variant is not well defined 'in vacuum'. Literally, the terms mean "varies with" and "varies against", and they are meaningless unless one says
what the object in question varies with or against.
In the case of linear algebra, one starts with a given vector space, $V$. The unstated reference is always, by convention, the basis of $V$: covariant objects transform exactly like the basis, and contravariant objects use the transpose-inverse of the basis transformation's coefficient matrix.
One can, of course, turn the tables, and change one's focus to the dual, $W=V^*$, in which case the primal $V$ now becomes the dual, $W^*=V^{**}\cong V$. In this case, quantities that used to transform with the primal basis now transform against the dual basis, and vice versa. This is exactly why we call it the dual: there exists a full duality between the two spaces.
However, as is the case anywhere in mathematics where two fully dual spaces are considered (example, example, example, example, example), one needs to break this symmetry to get anywhere. There are two classes of objects which behave differently, and a transformation that swaps the two. This has two distinct, related advantages:
Anything one proves for one set of objects has a dual fact which is automatically proved. Therefore, one need only ever prove one version of the statement.
When considering vector transformation laws, one always has (or can have, or should have), in the back of one's mind, the fact that one can rephrase the language in terms of the duality-transformed objects. However, since the
content of the statements is not altered by the transformation, it is not typically useful to perform the transformation: one needs to state some version, and there's not really any point in stating both. Thus, one (arbitrarily, -ish) breaks the symmetry, rolls with that version, and is aware that a dual version of all the development is also possible.
However, this dual version is
not the same. Covectors can indeed be expressed as row vectors with respect to some basis of covectors, and the coefficients of vectors in $V$ would then vary with the new basis instead of against, but then for each actual implementation, the matrices you would use would of course be duality-transformed. You would have changed the language but not the content.
Finally, it's important to note that even though the dual objects are equivalent, it does not mean they are the same. This why we call them dual, instead of simply saying that they're the same! As regards vector spaces, then, one still has to prove that $V$ and $V^*$ are not only dually-related, but also different. This is made precise in the statement that
there is no natural isomorphism between a vector space and its dual, which is phrased, and proved in, the language of category theory. The notion of 'natural' isomorphism is tricky, but it would imply the following:
For each vector space $V$, you would have an isomorphism $\sigma_V:V\to V^*$. You would want this isomorphism to play nicely with the duality structure, and in particular with the duals of linear transformations, i.e. their adjoints. That means that for any vector spaces $V,W\in\mathrm{Vect}$ and any linear transformation $T:V\to W$, you would want the diagram
to commute. That is, you would want $T^* \circ \sigma_W \circ T$ to equal $\sigma_V$.
This is provably not possible to do consistently. The reason for it is that if $V=W$ and $T$ is an isomorphism, then $T$ and $T^*$ are different, but for a simple counter-example you can just take any real multiple of the identity (other than $\pm 1$) as $T$.
In apples-and-pears language, what this means is that a general vector space $V$ and its dual $V^*$ are not only dual (in the sense that there exists a transformation that switches them and puts them back when applied twice), but they are also different (in the sense that there is no consistent way of identifying them), which is why the duality language is justified.
I've been rambling for quite a bit, and hopefully at least some of it is helpful. In summary, though, what I think you need to take away is the fact that
Just because dual objects are equivalent it doesn't mean they are the same.
This is also, incidentally, a direct answer to the question title: no, it is not foolish. They are equivalent, but they are still different. |