Gumbel statistics are often used to estimate the statistical significance of local alignment scores. The Gumbel distribution is the so-called Type I extreme value distribution (EVD). It occurs so frequently in sequence analysis applications, compared to the type II (Fr\'{e}chet) and type III (Weibull) extreme value distributions, that ``Gumbel'' and ``EVD'' are often used interchangeably in bioinformatics. Easel has a separate module, the \eslmod{gev} module, that implements the generalized extreme value distribution.

Karlin/Altschul statistics are a special case of the Gumbel distribution that apply to the scores of ungapped local alignments between infinitely long random sequences. Empirically, Karlin/Altschul statistics also apply reasonably well to the more useful case of gapped alignment of finite-length sequences. Karlin/Altschul statistics predict how the Gumbel's two parameters depend on the length of the query and target sequences. In the case of ungapped alignments, Karlin/Altschul statistics allow the Gumbel parameters to be estimated directly, without the need for a compute-intensive simulation.

\subsection{The gumbel API}

The \eslmod{gumbel} API consists of the following functions:

\vspace{0.5em}
\begin{center}
\begin{tabular}{ll}\hline
\multicolumn{2}{c}{\textbf{evaluating densities and distributions:}}\\
\ccode{esl\_gumbel\_pdf()}            & Returns the probability density, $P(S=x)$.\\
\ccode{esl\_gumbel\_logpdf()}         & Returns the log of the pdf, $\log P(S=x)$.\\
\ccode{esl\_gumbel\_cdf()}            & Returns the cumulative probability distribution, $P(S \leq x)$.\\
\ccode{esl\_gumbel\_logcdf()}         & Returns the log of the cdf, $\log P(S \leq x)$.\\
\ccode{esl\_gumbel\_surv()}           & Returns right tail mass, 1-cdf, $P(S > x)$.\\
\ccode{esl\_gumbel\_logsurv()}        & Returns log of 1-cdf, $\log P(S > x)$.\\
\multicolumn{2}{c}{\textbf{sampling:}}\\
\ccode{esl\_gumbel\_Sample()}         & Returns a Gumbel-distributed random sample.\\
\multicolumn{2}{c}{\textbf{maximum a posteriori parameter fitting:}}\\
\ccode{esl\_gumbel\_FitComplete()}    & Estimates $\mu,\lambda$ from complete data.\\
\ccode{esl\_gumbel\_FitCompleteLoc()} & Estimates $\mu$ when $\lambda$ is known.\\
\ccode{esl\_gumbel\_FitCensored()}    & Estimates $\mu,\lambda$ from censored data.\\
\ccode{esl\_gumbel\_FitCensoredLoc()} & Estimates $\mu$ when $\lambda$ is known.\\
\ccode{esl\_gumbel\_FitTruncated()}   & Estimates $\mu,\lambda$ from truncated data.\\\hline
\end{tabular}
\end{center}
\vspace{0.5em}

The Gumbel distribution depends on two parameters, $\mu$ and $\lambda$. When $\mu$ and $\lambda$ are known, the statistical significance (P-value) of a single score $x$ is $P(S>x)$, obtained by a call to \ccode{esl\_gumbel\_surv()}. The E-value for obtaining that score or better in searching a database of $N$ sequences is just $NP(S>x)$.

When $\mu$ and $\lambda$ are unknown, they are estimated from scores obtained from comparisons of simulated random data. (Analytical solutions for $\mu$ and $\lambda$ are only available in the case of ungapped sequence alignments.) The \ccode{esl\_gumbel\_Fit*()} functions provide maximum likelihood parameter fitting routines for different types of data.
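As a quick worked example of this arithmetic (the numbers here are illustrative, not output from a real search): for a Gumbel with $\mu = -20$ and $\lambda = 0.4$, a score of $x = 5$ has

\begin{equation*}
P(S > 5) \;=\; 1 - \exp\left[ -e^{-0.4\,(5 - (-20))} \right] \;=\; 1 - \exp\left(-e^{-10}\right) \;\approx\; e^{-10} \;\approx\; 4.5 \times 10^{-5},
\end{equation*}

so the E-value for a search over $N = 10{,}000$ sequences is about $0.45$.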
\subsection{Example of using the gumbel API}

An example that samples 10,000 data points from a Gumbel distribution with $\mu=-20$, $\lambda=0.4$; reports the min and max samples, and the probability mass to the left of the min and to the right of the max (both of which should be about $\frac{1}{10000}$, since we took 10,000 samples); and then fits those simulated data to a Gumbel and reports the fitted $\mu$ and $\lambda$:

\input{cexcerpts/gumbel_example}

\subsection{Gumbel densities}

The probability density function (pdf) and the cumulative distribution function (cdf) of the extreme value distribution are:

\begin{equation}
P(x) = \lambda \exp \left[ -\lambda (x - \mu) - e^{- \lambda (x - \mu)} \right]
\label{eqn:gumbel_density}
\end{equation}

\begin{equation}
P(S < x) = \exp \left[ -e^{-\lambda(x - \mu)} \right]
\label{eqn:gumbel_distribution}
\end{equation}

The extreme value density and distribution functions for $\mu = 0$ and $\lambda = 1.0$ are shown below.

\begin{center}
\includegraphics[width=3in]{figures/evd_basic}
\end{center}

The $\mu$ and $\lambda$ parameters are {\em location} and {\em scale} parameters, respectively:

\centerline{
\begin{minipage}{3in}
\includegraphics[width=2.8in]{figures/evd_location}
\end{minipage}
\begin{minipage}{3in}
\includegraphics[width=2.8in]{figures/evd_scale}
\end{minipage}
}

For more details, a classic reference is \citep{Lawless82}. Gumbel distributions can have their long tail to the right or to the left. The form given here is for the long tail to the right. This is the form that arises when the extreme value is a maximum, such as when our score is the maximum over the individual scores of all possible alignments. The equations in \citep{Lawless82} are for extremal minima; use $(x - u) = -(x - \mu)$ and $b = 1 / \lambda$ to convert Lawless' notation to the notation used here.

\subsection{Fitting Gumbel distributions to observed data}

Given a set of $n$ observed samples $\mathbf{x}$, we may want to estimate the $\mu$ and $\lambda$ parameters. One might try to use linear regression to fit to a $\log \log$ transformation of the $P(S < x)$ histogram, which gives a straight line with slope $-\lambda$ and $x$-intercept $\mu$:

\begin{equation}
\log \left[ -\log P(S < x) \right] = -\lambda x + \lambda \mu
\end{equation}

Maximum likelihood fitting is preferred, however. In the truncated-data case, where only the $n$ samples above some known threshold $\phi$ are observed, the log likelihood is:

\begin{equation}
\log L(\mu, \lambda) = n \log \lambda
 - \sum_{i=1}^{n} \lambda (x_i - \mu)
 - \sum_{i=1}^{n} e^{-\lambda (x_i - \mu)}
 - n \log \left[ 1 - \exp\left(-e^{-\lambda(\phi - \mu)}\right) \right]
\label{eqn:truncated_logL}
\end{equation}

This must be maximized subject to the constraint $\lambda > 0$. We do a change of variables, and use the transformation $\lambda = e^w$ so we can optimize the unconstrained parameter $w = \log \lambda$ instead of optimizing $\lambda$ directly. The necessary partial derivatives are then:

\begin{eqnarray}
\frac{\partial \log L}{\partial \mu} & = & n \lambda - \lambda \sum_{i=1}^{n} e^{-\lambda (x_i - \mu)} - \frac{n \lambda \exp \left[ -\lambda (\phi - \mu) - e^{- \lambda (\phi - \mu)} \right]} {1 - \exp(-e^{-\lambda(\phi - \mu)})}
\label{eqn:truncated_dmu} \\%
\frac{\partial \log L}{\partial w} & = & n - \sum_{i=1}^{n} \lambda(x_i - \mu) + \sum_{i=1}^{n} \lambda(x_i - \mu) e^{-\lambda (x_i - \mu)} + \frac{n\lambda (\phi-\mu) \exp \left[ -\lambda (\phi - \mu) - e^{- \lambda (\phi - \mu)} \right]} {1 - \exp(-e^{-\lambda(\phi - \mu)})}
\label{eqn:truncated_dw}
\end{eqnarray}

This optimization is carried out by \ccode{esl\_gumbel\_FitTruncated()}. The likelihood (\ref{eqn:truncated_logL}) is implemented in \ccode{tevd\_func()}, and the derivatives (\ref{eqn:truncated_dmu}) and (\ref{eqn:truncated_dw}) are implemented in \ccode{tevd\_dfunc()}. \ccode{esl\_gumbel\_FitTruncated()} simply sets up the problem and passes it all off to a conjugate gradient descent optimizer.
Results on 500 simulated datasets with $\mu = -20$, $\lambda = 0.4$, truncated at $\phi = -20$ (leaving the right tail, containing about 63\% of the samples):

\begin{center}
\begin{tabular}{lrrrr} \hline
 & \multicolumn{4}{c}{\# samples}\\
 & 100 & 1000 & 10,000 & 100,000 \\
\% error in $\hat{\mu}$     & 13\%  & 2\%  & 0.8\% & 0.3\% \\
max error in $\hat{\mu}$    & 260\% & 42\% & 3\%   & 1\%   \\
\% error in $\hat{\lambda}$ & 15\%  & 5\%  & 2\%   & 0.6\% \\
max error in $\hat{\lambda}$ & 68\% & 18\% & 6\%   & 2\%   \\ \hline
\end{tabular}
\end{center}

Fitting truncated Gumbel distributions is difficult, requiring much more data than fitting complete or censored data. The problem is that the right tail becomes a scale-free exponential when $\phi \gg \mu$, and $\mu$ becomes undetermined. Fits become very inaccurate as $\phi$ gets larger than $\mu$, and for sufficiently large $\phi$, the numerical optimizer will completely fail.
|
# Finding the Input if Hash Value, Hash Function, and Key are known
Let's assume that we have a hash value $$h$$ that is calculated using HMAC-SHA256.
We have the key $$k$$. We also know some information about the input:
• It contains a pattern of 15 numbers where the first 7 are known;
Question: Is it possible to find at least one input $$m$$ such that $$h = \operatorname{HMAC-SHA256}(k,m)$$ without generating all possible patterns?
• is it HMAC-SHA256? are we going to guess the hash value or input? since you say you know a hash – kelalaka Nov 28 '18 at 7:22
• Is there actually a hash that just called "256" or just a typo? You've no luck if it's actually pre-image resistant. – DannyNiu Nov 28 '18 at 8:11
• @kelalaka we are going to guess the input. I'm the one who posted at first :) – jykill Nov 28 '18 at 9:46
• It appears that you have accidently created two accounts, please now follow the instruction in our help center for this situation to resolve this issue. – SEJPM Nov 28 '18 at 19:11
First of all, HMAC is not exactly a hash function. Wikipedia clearly states:
In cryptography, an HMAC (sometimes expanded as either keyed-hash message authentication code or hash-based message authentication code) is a specific type of message authentication code (MAC) involving a cryptographic hash function and a secret cryptographic key.
In case of HMAC-SHA256, the hash function in the HMAC is SHA256.
Remember that cryptographic hash functions are not invertible functions, and we require that they have pre-image, second pre-image, and collision resistance.
There is still an ambiguity in the question; I'll try to answer in two ways.
1. The input is exactly 15 numbers where first 7 are known.
In this case, the input space is limited: the 8 unknown numbers give $$10^8 \approx 2^{27}$$ possibilities, and this input space can be tested very quickly, even with a Raspberry Pi.
What about finding an $$m$$ without generating the pattern? Actually, this means that you are looking for a pre-image for HMAC-SHA256. There is no known pre-image attack for HMAC, and for SHA-256 alone the known pre-image attacks apply only to reduced-round variants.
A full pre-image search will be infeasible. Note that practical collision attacks for SHA-256/512 are possible only with reduced rounds, 26/64 and 27/80, respectively.
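For concreteness, here is a minimal Python sketch of that brute-force search. The key, the known prefix, and the target digest below are hypothetical placeholders, and the 8 unknown positions are assumed to be decimal digits:

import hmac, hashlib

key = b"hypothetical-key"   # assumed known
prefix = "1234567"          # the 7 known numbers
target = "0" * 64           # the known HMAC-SHA256 hex digest (placeholder)

def find_preimage():
    # Enumerate all 10^8 possibilities for the 8 unknown numbers.
    for i in range(10**8):
        m = (prefix + format(i, "08d")).encode()
        digest = hmac.new(key, m, hashlib.sha256).hexdigest()
        if hmac.compare_digest(digest, target):
            return m        # found an input that maps to the target digest
    return None             # no 15-digit input of this form matches

print(find_preimage())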
2. The input contains 15 numbers where first 7 are known.
In this case, we have a structure as;
ppp...pppfffffffrrrrrrrrsss...sss
Knowing some part of the input may help with testing each value. Unfortunately, there is no information about the prefix (p) size and suffix (s) size, so one has to consider all of the cases.
What about finding an $$m$$ without generating the pattern? The pre-image status is the same as in the first case.
• HMAC with a fixed, known key (i.e. $m \mapsto HMAC_k(m)$) is a cryptographic hash function if the underlying hash is. (A MAC with a fixed key is not a hash function in general: if you know the key it may be possible to construct collisions or calculate inverses.) – Gilles 'SO- stop being evil' Dec 29 '18 at 10:15
|
The number of values of x in [0, 2π]
Question:
The number of values of $x$ in $[0,2 \pi]$ that satisfy the equation $\sin ^{2} x-\cos x=\frac{1}{4}$
(a) 1
(b) 2
(c) 3
(d) 4
Solution:
(b) 2
$\sin ^{2} x-\cos x=\frac{1}{4}$
$\Rightarrow\left(1-\cos ^{2} x\right)-\cos x=\frac{1}{4}$
$\Rightarrow 4-4 \cos ^{2} x-4 \cos x=1$
$\Rightarrow 4 \cos ^{2} x+4 \cos x-3=0$
$\Rightarrow 4 \cos ^{2} x+6 \cos x-2 \cos x-3=0$
$\Rightarrow 2 \cos x(2 \cos x+3)-1(2 \cos x+3)=0$
$\Rightarrow(2 \cos x+3)(2 \cos x-1)=0$
$\Rightarrow 2 \cos x+3=0$ or, $2 \cos x-1=0$
$\Rightarrow \cos x=-\frac{3}{2}$ or $\cos x=\frac{1}{2}$
Here, $\cos x=-\frac{3}{2}$ is not possible.
$\therefore \cos x=\frac{1}{2}$
$\Rightarrow \cos x=\cos \frac{\pi}{3}$
$\Rightarrow x=2 n \pi \pm \frac{\pi}{3}$
Now for $n=0$ and $n=1$, the candidate values of $x$ are $-\frac{\pi}{3}$, $\frac{\pi}{3}$, $\frac{5 \pi}{3}$ and $\frac{7 \pi}{3}$, but $-\frac{\pi}{3}$ and $\frac{7 \pi}{3}$ are not in $[0,2 \pi]$.
Hence, there are two solutions in $[0,2 \pi]$.
|
## [java] Double buffering in applets
### #1DiscoStoo Members
Posted 09 January 2001 - 03:24 PM
I am a long time veteran to C++, and just learned Java, mainly for the purpose of writing applets. Of course games are my main focus, and ya' can't have games without double buffering! (Or some equivalent form of animation) I've found many tutorials, but they all say: "Next lesson: Double buffering in applets!", then there are no more lessons. Almost like it's a conspiracy... Anyway, I just need to know how to draw to an off-screen buffer instead of onscreen, then how to blit that to the screen. Everything else can be figured out with time. ------------------------------------------------------------ He wants a shoehorn, the kind with teeth, because he knows there's no such thing.
### #2Saluk Members
Posted 09 January 2001 - 04:16 PM
I haven't actually programmed in Java in a while, but if I remember correctly, drawing to another buffer is really simple. In fact, it's just like drawing to the onscreen buffer. You know about writing your paint function and passing Graphics g to it? Well, all you need to do to create a new buffer is to make another Graphics object.
Graphics g2 = offscreenImage.getGraphics(); // Graphics is abstract, so in practice you get one from an offscreen Image
then, you draw to that just like you would draw to Graphics g in your paint function.
### #3gdalston Members
Posted 09 January 2001 - 09:03 PM
Check out http://www.binod.com/reference/index.html
for some good books. I know that the first one under java,
Black Art of Game Programming, has double buffer examples.
An easy way to get double buffering is to use a JApplet instead- this is the swing equivalent of Applet and hence is double buffered - but this is only one solution and probably not the best.
### #4DJNattyP Members
Posted 10 January 2001 - 05:39 AM
DiscoStoo,
Here's the way to do double-buffering :
At the class level make an image and a graphics object -
private Image buffer;
private Graphics bg;
Then, in the beginning of the paint method do this -
public void paint(Graphics g) {
    if (buffer == null) {
        buffer = createImage(getWidth(), getHeight());
        bg = buffer.getGraphics();
    }
After which you draw only to the bg graphics context (i.e. bg.drawRect(x1,y1,x2,y2); ). This draws everything to the off-screen buffer. Then at the end of the paint method -
g.drawImage(buffer, 0, 0, this); }
This draws the offscreen buffer to the actual screen.
As gdalston suggests above, a JApplet does double-buffering automatically, but, unfortunately, will not run in any major web browsers w/o the Java2 plugin installed... (I might be wrong... I have heard rumors that the newest version of Netscape will run Java2 stuff... but have not downloaded it myself.) That's a pretty major drawback! So you've gotta use the old AWT stuff for applet games still... hopefully this will change in the near future...
Hope this helps,
-Nate
Edited by - DJNattyP on January 10, 2001 12:47:22 PM
### #5DiscoStoo Members
Posted 12 January 2001 - 02:03 PM
Thanks much, it's easier than I thought. And actually, I've seen double-buffered applets run in my browser, and most things I do write will probably be for my own satisfaction and not published anywhere, so it's OK.
------------------------------------------------------------
He wants a shoehorn, the kind with teeth, because he knows there's no such thing.
### #6deakin Members
Posted 12 January 2001 - 03:59 PM
Just a quick note... I believe that Netscape 6 comes with the Sun JRE 1.3. Or at least my version did.
- Daniel
My homepage
### #7c_wraith Members
Posted 21 January 2001 - 12:26 PM
Ummm... What else can I do to double buffer an applet? That isn't good enough in my case. Here's my very simple applet. In fact, it's so simple that painting to an offscreen buffer only makes the flickering a LOT worse.
The problem potentially lies in how repaint() is called in the drawing area of that applet. Specifically, I have a MouseMotionListener set up that calls repaint() on mouseMoved().
Running in a JApplet or JFrame eliminates the flicker, which I do on local testing. However, the applet form can't take advantage of that, as I'd prefer not to force those without it to download the Java plugin.
Edited by - c_wraith on January 21, 2001 7:29:42 PM
### #8JasonB Members
Posted 21 January 2001 - 08:58 PM
Are you sure you are overriding both paint() & update()?
The only time I've seen flickering when double-buffering has been implemented is when one of those hasn't been overridden.
eg.
public void paint(Graphics g) {
update(g);
}
public void update(Graphics g) {
... do your double-buffer & actual painting in here.
}
### #9c_wraith Members
Posted 22 January 2001 - 04:18 AM
Um, isn't that backwards? Isn't update() supposed to call paint(), not the other way around?
Anyway, I've tried overriding both update() and paint(), and either way, it just made the flickering worse. I'll put up some demonstrations later today.
### #10c_wraith Members
Posted 22 January 2001 - 08:02 PM
Ok, I got it sorted out... I needed to override update() in my applet, in addition to in my component.
### #11JasonB Members
Posted 22 January 2001 - 09:17 PM
If memory serves me correctly, when a component is painted, paint() is sometimes called, but update() is always called. And the default implementation of update() clears the component - which causes the flickering.
Therefore it's slightly more efficient for paint to call update and for update to do the painting.
### #12JasonB Members
Posted 23 January 2001 - 02:54 AM
Ignore that last post. I was talking bollocks.
update() is not always called. paint() is usually called when part of a component is uncovered (update is not called in this case apparently)
I still think update is called more often than paint - I seem to recall reading that somewhere - so the efficiency argument is still true.
|
# Counting Sort
Counting sort can be used for sorting elements in an array in which each of the n input elements is an integer in the range 0 to k. The idea of counting sort is to determine, for each input element x, the number of elements less than x as C[x]. This information can be used to place element x directly into its position in the output array B. This scheme must be modified to handle the situation in which several elements have the same value. Please see the following pseudocode for the details:
Counting-Sort(A, B, k)
1 for i = 0 to k
2 do C[i] = 0
3 for j = 1 to length[A]
4 do C[A[j]] = C[A[j]]+1
5 /* C[i] now contains the number of elements equal to i */
6 for i = 1 to k
7 do C[i] = C[i] + C[i-1]
8 /* C[i] now contains the number of elements less than or equal to i */
9 for j = length[A] downto 1
10 do B[C[A[j]]] = A[j]
11 C[A[j]] = C[A[j]]-1
Write a program which sorts elements of given array ascending order based on the counting sort.
## Input
The first line of the input includes an integer n, the number of elements in the sequence.
In the second line, n elements of the sequence are given separated by spaces characters.
## Output
Print the sorted sequence. Two contiguous elements of the sequence should be separated by a space character.
## Constraints
• 1 ≤ n ≤ 2,000,000
• 0 ≤ A[i] ≤ 10,000
## Sample Input 1
7
2 5 1 3 2 3 0
## Sample Output 1
0 1 2 2 3 3 5
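For reference, here is a Python sketch that transcribes the pseudocode above (using a 0-based output array) and reads input in the format described; it is an illustration, not the judge's reference solution:

import sys

def counting_sort(A, k=10000):
    C = [0] * (k + 1)
    for a in A:                      # lines 3-4: count occurrences of each value
        C[a] += 1
    for i in range(1, k + 1):        # lines 6-7: C[i] = number of elements <= i
        C[i] += C[i - 1]
    B = [0] * len(A)
    for a in reversed(A):            # lines 9-11: place elements stably, from the back
        C[a] -= 1
        B[C[a]] = a
    return B

data = sys.stdin.buffer.read().split()
n, A = int(data[0]), [int(x) for x in data[1:]]
print(" ".join(map(str, counting_sort(A))))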
|
Calculate Vout from an op amp
I have an op amp for which I want to calculate Vout depending on Vin.
The circuit looks like this:
I know that with a circuit that just has ground connected to + of the amp, I can just use KVL: V1 = R1*I1 + Rf*If + Vout. But with this amp, there are also resistors on the plus side of the amp to include in the calculation. I know that I should consider the op amp to not let any current through. Therefore I think that I can use KVL on the plus side as well, so that I get two KVL equations.
My question is now, if I have V1 = R1*I1+Rf*If+Vout and V2=R2*I2+R3*I3, how do I put them together?
I think that I can assume I1=If and I2=I3, if that helps.
The answer to your question is very simple if you know the fundamental gain expressions for the inverting and non-inverting op-amp configurations, respectively: G(inv)=-Rf/R1 and G(non)=1+Rf/R1. Note that the non-inverting gain G(non) is referenced to the non-inverting input terminal directly. Hence, we must - in addition - take into account the voltage divider R2-R3.
Because you have two input signals at the same time you simply can add both parts at the output (principle of superposition). Because all resistors are equal we arrive at the result:
Vout=G(non)V2[R3/(R2+R3)]+G(inv)V1=(1+Rf/R1)V2[R3/(R2+R3)]-V1(Rf/R1)=V2-V1
This circuit is a classic diff-amp. The output is V2-V1.
One way to analyze this circuit is to think of the effect from each input to the output separately. Start by grounding V2 and thinking about the response from V1 to the output. With V2 grounded, the + input is just held at 0. Now you have a simple inverting amp with a gain of -1 from V1 to Output.
Then ground V1 and see what changing V2 does. With V1 grounded, Rf and R1 form a voltage divider to make a classic positive gain amp from the + input to the output, with the gain being +2 in this case. Now note that the two resistors on the V2 input form a voltage divider with a gain of 1/2. That gain of 1/2 from the voltage divider times the gain of 2 for the amp makes an overall gain of 1 from V2 to Output.
So you have a gain of -1 from V1 to Output and +1 from V2 to Output. Putting these together, you get the overall response of Output = V2 - V1.
For extra credit, figure out how to change the ratio of the resistors to make a diff amp with non-unity gain, like 4 for example.
This is a classic differential amplifier as pointed by Olin with output equal to $V_2-V_1$.
To analyse this the key points to remember are the current into the inverting and non-inverting inputs is negligible so assume zero and an op-amp with negative feedback will try to keep these inputs equal.
Output can be calculated as follows
$$\dfrac{\dfrac{V_1}{R_1}+\dfrac{V_{out}}{R_f}}{\dfrac{1}{R_1}+\dfrac{1}{R_f}}=\dfrac{\dfrac{V_2}{R_2}}{\dfrac{1}{R_2}+\dfrac{1}{R_3}}$$
Solving for $V_{out}$ which given all the resistors are the same value is simply $V_{out}=V_2-V_1.$
To explain the equation above the left hand side is the voltage at the inverting input and the right hand side the voltage at the non inverting input.
Consider a simple potential divider with voltages applied at both ends
simulate this circuit – Schematic created using CircuitLab
\begin{align}\\ \dfrac{V_1-V_x}{R_1}+\dfrac{V_2-V_x}{R_2} & = 0\\ \dfrac{V_1}{R_1}+\dfrac{V_2}{R_2} & = \dfrac{V_x}{R_1} + \dfrac{V_x}{R_2}\\ V_x & =\dfrac{\dfrac{V_1}{R_1}+\dfrac{V_2}{R_2}}{\dfrac{1}{R_1}+\dfrac{1}{R_2}}\\ \end{align}
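For readers who want to check the algebra mechanically, here is a small Python (sympy) sketch of these node equations; the unity resistor values at the end reflect the all-resistors-equal assumption in this question:

from sympy import symbols, Eq, solve, simplify

V1, V2, Vout, Vx = symbols('V1 V2 Vout Vx')
R1, Rf, R2, R3 = symbols('R1 Rf R2 R3', positive=True)

# Non-inverting input: R2-R3 divider from V2 to ground.
v_plus = V2 * R3 / (R2 + R3)

# Ideal op amp: V- = V+ = Vx and no current into the inputs,
# so KCL at the inverting node gives (V1 - Vx)/R1 = (Vx - Vout)/Rf.
kcl = Eq((V1 - Vx) / R1, (Vx - Vout) / Rf)
vout = solve(kcl.subs(Vx, v_plus), Vout)[0]

# All resistors equal -> the classic unity-gain difference amplifier.
print(simplify(vout.subs({R1: 1, Rf: 1, R2: 1, R3: 1})))  # prints V2 - V1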
• I don't really get how you managed to get the equation above. Can you please explain that? – theva Nov 27 '14 at 21:11
• edited to explain – Warren Hill Nov 28 '14 at 7:41
|
# How can I compare $\log_2 3$ and $\log_3 5$ without using a calculator [closed]
Compare $\log_2 3$ and $\log_3 5$ without using a calculator.
I am not very good at math, so please explain it clearly.
• What have you tried? Here's a good guide on how to ask a great question. Sep 5 '17 at 10:31
• What do you mean "compare"? I would think that you mean "which is larger?", but there are many ways to compare numbers. Sep 5 '17 at 10:32
• For the upvoters: This might be an interesting problem, but it is, at the moment, a very poorly phrased question. If you want to show it recognition for being an interesting question, or you are interested in knowing the answer, consider favouriting instead. Sep 5 '17 at 10:47
• I think it's clear that comparison in this context is talking about which is larger ... no need to be pedantic about it Sep 5 '17 at 10:51
• this question is answered here: math.stackexchange.com/questions/415500/… Sep 6 '17 at 12:19
Note: $$\log_2 3=\frac14 \log_2 81>\frac14\log_2 64=\frac64=\frac32,$$ $$\log_3 5=\frac14 \log_3 625<\frac14\log_3 729=\frac64=\frac32.$$
We'll prove that $$\log_35<\log_23$$ or $$5<3^{\log_23}$$ or $$25<3^{\log_29},$$ which is true because $$3^{\log_29}>3^{\log_28}=27>25.$$ Done!
$$\sqrt{3}\approx 1.73,\sqrt{2}\approx 1.42\\2^{1.5}=2\sqrt{2}< 2.84,\ 3^{1.5}=3\sqrt{3}>5.19\\\log_2 3>\log_22.84>\log_22\sqrt2= 1.5=\log_33\sqrt{3}> \log_35.19>\log_3 5$$
|
# Chart Patterns in Technical Analysis
Chart patterns are specific formations on the stock price charts. These formations represent recognizable patterns that are repeatedly seen in the market, and are used to determine the future movement of the stock prices.
The patterns represent the psychology of the market and investors, and also determine the future behaviour of the market.
In a completely rational market, these patterns may not exist. However, investors, traders, and managers react with similar emotions all the time, and exhibit repeated behaviour, which leads to these patterns. We can say that the patterns reflect the collective psychology of the market.
Traders can read these patterns, and forecast the future market movement to their advantage.
The chart patterns can be broadly classified as reversal patterns and continuation patterns.
Reversal chart patterns signal the end of an ongoing trend, i.e., they signify a reversal of the asset's price direction. For example, if a reversal pattern forms while the price is moving upwards, this signifies that the upward movement is over and prices will now move downwards, and vice versa. There are various types of reversal chart patterns, such as Double Top, Double Bottom, Head and Shoulders, Inverse Head and Shoulders, Rising Wedge, and Falling Wedge.
Continuation chart patterns, on the other hand, signal that the ongoing trend will continue for some time. This means that after the formation, the price movement will continue to follow the same trend as it did before the formation. So, if prices were moving upwards before the formation, they will continue to move upwards after the pattern completes as well. This kind of pattern is also called a healthy correction: the price may briefly move against the trend, but another set of investors will step in (buying in an uptrend, for example) and the long-term trend will continue to be the same. There are various continuation patterns, such as wedges, rectangles, pennants, and flags.
Apart from reversal and continuation chart patterns, there are also bilateral chart patterns. These are a little confusing, as they signal that prices can move in either direction: upward or downward. The most common bilateral formations are triangles. At the end of a triangle formation, the price can break either upwards or downwards. When seeing triangle formations, traders should be prepared for a move on either side to benefit from it.
We will discuss each of these patterns in details in the coming articles.
|
# zbMATH — the first resource for mathematics
Notion of convexity in Carnot groups. (English) Zbl 1077.22007
The aim of this interesting paper is to study appropriate notions of convexity in the setting of Carnot groups $$G$$. First, the notion of strong $$H$$-convexity is examined. Some arguments showing that the concept is too restrictive are presented. Then the notion of weakly $$H$$-convex functions is defined. A function $$u:G\rightarrow {\mathbb R}$$ is weakly $$H$$-convex if for any $$g\in G$$ and every $$\lambda\in [0,1]$$ we have $u(g\delta_\lambda(g^{-1}g'))\leq (1-\lambda) u(g) + \lambda u(g'),$ where $$\delta_\lambda$$ is a group dilation and $$g'$$ is an element of the horizontal plane $$H_g$$ passing through $$g$$. It is proved that a twice differentiable function is weakly $$H$$-convex iff its symmetrized horizontal Hessian is positive semi-definite at any $$g\in G$$. This is the subelliptic counterpart of the classical characterization of convex functions. The intrinsic gauge in any group of Heisenberg type is weakly $$H$$-convex. Moreover, a weakly $$H$$-convex function is Lipschitz continuous with respect to the sub-Riemannian metric of $$G$$. The main result of the paper says that the supremum of the absolute value of a weakly $$H$$-convex continuous function over any ball can be estimated from above by the mean value of the absolute value. The local boundedness, the continuity on effective domains of weakly $$H$$-convex functions as well as their relations to fully nonlinear differential operators in the sub-Riemannian setting are studied.
##### MSC:
22E25 Nilpotent and solvable Lie groups 35A30 Geometric theory, characteristics, transformations in context of PDEs
##### Keywords:
Carnot groups; convex functions; fully nonlinear PDE
|
# Would Underwater Races be Near or Farsighted?
Light travels differently in water and air. Some of that is because of light's speed in a substance, and some is the light absorption that substance exhibits.
Is the difference in light scattering underwater enough to make a sea-dwelling race (like merfolk) be noticeably Nearsighted or Farsighted when compared to land-dwelling races (like humans) when on land/in our atmosphere?
• Are fish good at seeing? Aug 10 at 11:13
• Relevant biology Q&A biology.stackexchange.com/questions/58056/… Aug 10 at 16:47
• Whiich led me to the most awesomely named biological structure the Zonule of Zinn - This sounds more like a supervillan! Aug 10 at 16:51
Interesting question!
First things first, let’s talk about how an eye works. Well, I don’t know. I’m no doctor. It’s probably crazy complex. But for our purposes, an eye is just gonna be made up of a converging lens in front of the retina. The job of the lens is to make light converge on the retina. If the focal point of the lens is in front, you’re nearsighted. If the focal point is somewhere behind the retina, you’re farsighted (all of this is very simplified, but that's enough to just grab the concepts we need for a qualitative answer)
Ok, well an eye is a {lens+retina} system. But how does a lens work? That I do know, but we're still going to keep it simple. We'll just say it uses the geometry of the interface to decide what to do with the light rays (concave=diverging lens, convex=converging), thanks to refraction. Refraction is what you mention in your question; when light goes from a material with refraction index $$n_1$$ to another with refraction index $$n_2$$, light bends according to the Snell-Descartes law that you might remember from your high school physics classes: $$\sin(i_2)=\frac{n_1}{n_2}\sin(i_1)$$
where $$i_1$$ and $$i_2$$ are the angles that the ray makes with the normal to the interface, but we don't care much about the details of how the light bends actually. The only thing we need to notice is that how much it bends is all decided by the ratio $$\frac{n_1}{n_2}$$. This means that, the bigger the difference between $$n_1$$ and $$n_2$$, the more refraction you get (For example, if $$n_1=n_2$$, the angle would remain unchanged and you get no refraction at all, making your lens useless).
So! The refraction index of air is lower than that of water. That means that the difference in refraction index between the air and the lens will be larger than it is between lens and water. So, once you emerge, the lens in front of your retina suddenly bends light harder; the focal point is at a shorter distance. If your species is adapted to see clearly underwater, you would then be in the situation shown on the left of the image above.
Your amphibians would be near-sighted in the air, compared to what they are underwater
Also, they will of course get much brighter light above the surface than in the depths of the sea. If they have nothing to adapt to that, they might just be blinded too. Otherwise, they can simply have cat-like pupils to accommodate for both dark and bright conditions.
Important edit (estimation time):
Ok, I was actually quite curious to know how bad it would be. And boy, oh boy, would the little mermaid be blind.
We will assume that the cornea of your merfolks allow them to focus a point at infinity when they are underwater (just like us humans in air)
Without going into the details of the proof, you can establish that, for a given geometry of your lens, the refractive power varies proportionally to $$n_{lens}-n_{medium}$$ (single-surface refraction). We will take $$n_{air}=1$$, $$n_{water}=1.33$$.
The ratio of the corrective powers of the cornea in air vs under water will then be given by:
$$\text{power ratio}=\frac{n_{air}-n_{lens}}{n_{water}-n_{lens}}$$
If we assume that the eyes of your merfolks are built similarly to ours (with just a difference in curvature of the lens), things get ridiculous real fast. If we take the human refractive index for cornea ($$n_{cornea}=1.376$$), the formula above gives you a ratio of about 8!! The power of the cornea will be multiplied by 8 as your merfolk emerge. With an eyeball diameter of 2.3cm, you need an optical power of about 43 dioptres to focus light correctly. So, out of the water, the cornea of your merfolks would now have a power of... 355 dioptres!!
Good luck finding -312 glasses. For geometrical reasons, it would actually be impossible for your merfolks to have such good vision underwater in the first place with those conditions anyway.
Ok, your merfolks obviously have to be built different. The highest refractive index used in pharmaceutical lenses seems to be around 1.7. Let's say your $$n_{cornea}=1.7$$ for your merfolks, this time the power of their cornea would only be about doubled when they emerge (according to the equation above). So, you'd be doing ok with -39 dioptre glasses. We're getting somewhere!
Double their eye size, and they only need their cornea to offer 22 dioptres under water. They would then have about 41 dioptres in air, which means they can get away with "only" a -19 correction this time. That's still super duper nearsighted, but it seems to be at the extreme range of what can be corrected by glasses.
Conclusion: Your merfolks need at the very least a super-effective cornea and eyes twice as big if they wanna have the slightest hope of seeing anything out of the water. Even then, they will still need to wear the best glasses we can make
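If you want to reproduce the numbers, here is the estimate as a short Python script; the refractive indices and the 43/22-dioptre baselines are the assumptions stated above:

n_air, n_water = 1.0, 1.33

def air_to_water_power_ratio(n_lens):
    # From P proportional to (n_lens - n_medium), the ratio of corneal
    # power in air to power in water reduces to this expression.
    return (n_air - n_lens) / (n_water - n_lens)

for n_lens, p_water in [(1.376, 43), (1.7, 43), (1.7, 22)]:
    p_air = air_to_water_power_ratio(n_lens) * p_water
    print(f"n_lens={n_lens}: {p_water} dioptres in water -> "
          f"{p_air:.0f} dioptres in air ({p_water - p_air:+.0f} correction)")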
• Good answer. Nearsighted humans can see with better focus underwater, so the reverse is logically the case, that a creature with good focus underwater would be nearsighted in air. Aug 9 at 16:49
• @NuclearHoagie I'm nearsighted and I never noticed that! It's time for experiments. Wish my eyeballs luck. Aug 9 at 16:52
• @BarbaudJulien If your vision is correctable, you aren't nearsighted enough to focus correctly under water. I wear -6.75 diopter lenses, and my vision is still blurry under water (opposite direction from in air). Aug 9 at 18:07
• Worth noting that there are fish with "bifocal" eyes -- each eye divided; top half has correct refraction in air, bottom half is right for water. An eye that can change shape pretty radically would also solve the problem. Or wearing "inverted goggles" -- like swim goggles but filled with water. Aug 10 at 11:15
• Of course if you're going with corrective glasses the easiest solution would be goggles just like humans use under water. A flat piece of glass on goggles filled with sea water would give perfect vision just like goggles filled with air allow us to see underwater Aug 10 at 14:20
For a complementary thought experiment, what focal length do your merfolk need? Depending on depth and murkiness of water, it may be pointless to have long-distance focus anyway - Wikipedia tells me that optimal conditions still result in a max distance of 80m, and in most cases this would be a wildly optimistic overestimate. Since focal range is always a compromise between near- and long-sightedness, I am guessing your merfolk will be under selective pressure to be near-sighted, since long-sightedness brings them no advantage anyway. As Julien then calculates, this will be even more marked once they come out of the water.
The other aspect, which only you know, is how much they rely on sight (for example, if they use bioluminescence or other visual means of communication). They may simply have poor vision, since the medium they live in is not as conducive to the sense of sight as air. This may be somewhat story-based - do you want these merfolk to be able to interact easily with humans? How human-like do you want them to be? Will they be point-of-view characters (and therefore responsible for relaying descriptions to the reader)?
• That's a good point, but the value of 80m is true for a human eye. Most nocturnal animals require much less light than we do to see. I think an eye adapted for vision in dark conditions could push significantly further than that limit under water. But the deeper you go, the harder it gets Aug 10 at 12:08
• I suspect the limit is turbidity, not light - water is nowhere as transparent as air, and a natural body of water has a lot more stuff in it. The 80m limit was recorded with optical instruments Aug 10 at 12:13
Nearsighted or Blind
Even clean water absorbs half of the light in the first 10 metres. This is doubly bad for your merfolk as being 10m underwater and looking at something 10m away you only get at most 1/4 of the light you would otherwise. This is because the light travels 10m to hit the object and 10m again to hit your eyeballs.
More likely, however, the light travels diagonally for the first part of the journey, so you get even less than 1/4 of the starting light.
Seeing distant objects is a lost cause. Seeing closeby objects only works in shallow waters.
This suggests your merfolk are nearsighted if they live in crystal clear shallow water. For example a tropical reef. But they still need other senses to detect things far away.
If they live in murky or deep water then their eyes are no use. With time they become almost blind. Like the Yangtze River Dolphins.
• Your eyeballs aren't transmitting the light, so the double distance idea is silly.
– JRE
Aug 10 at 14:33
• @JRE 10m from the surface down through the water to hit the object. Another 10m from the object to your eyes. 20m total. If we were 100m down and 10 away it would be 110m total. Aug 10 at 14:36
|
| Average Monthly Income | Number of households |
|------------------------|----------------------|
| $3,700                 | 12                   |
| $3,800                 | 17                   |
| $3,900                 | 8                    |
| $4,000                 | 11                   |
| $4,100                 | 4                    |
| $4,200                 | 2                    |

The table shows the results of a survey of the average monthly household income in GMATville last year. The average monthly income was calculated for each household and then rounded to the nearest hundred. What is the median average monthly income?
Correct. Decide where the middle value lies by dividing the sum of frequencies by 2:

>$$\text{Place value} = \frac{12 + 17 + 8 + 11 + 4 + 2}{2} = \frac{54}{2} = 27$$.

Then count from the top until you reach the median position calculated above. Since there is an even number of values (54), you need to consider the average of the 27th and 28th values. In this case, the incomes of the 27th and 28th households are both $3,800. Hence the median is $3,800.
$3,800
$3,900
$3,950
$4,000
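The counting argument is easy to check with a few lines of Python (values and frequencies taken from the table above):

incomes = [3700, 3800, 3900, 4000, 4100, 4200]
counts = [12, 17, 8, 11, 4, 2]

# Expand to the full list of 54 household incomes, already in sorted order.
values = [v for v, c in zip(incomes, counts) for _ in range(c)]
n = len(values)
# Even n: the median is the average of the 27th and 28th values.
median = (values[n // 2 - 1] + values[n // 2]) / 2
print(median)  # prints 3800.0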
|
Packs data to an unsigned or signed integer array.
This node packs MN-sized binary data of the form b_i ∈ {0, 1} into an unsigned or signed integer array a_n,
where
• i = 0…MN-1, and M denotes the number of bits per integer
• n = 0…N-1, and N denotes the size of the array
## input bit stream
The binary bit stream to be packed into integers.
The binary bit stream must be in the form b_i ∈ {0, 1}
where
• i = 0…MN-1, and M denotes the number of bits per integer
• n = 0…N-1, and N denotes the size of the array
Default: empty
## bits per integer
Number of binary data values that are packed into an integer.
If you set the integer format parameter to Unsigned, the maximum value is 31. If you set the integer format parameter to Signed, the maximum value is 32.
Default: 1
## packed bit order
The order in which the binary data stream is packed into integers.
• MSB first — Data is packed with the most significant bit (MSB) first.
• LSB first — Data is packed with the least significant bit (LSB) first.
Default: MSB first
## integer format
Input integer format.
• Unsigned — The entire number is packed as a positive integer.
• Signed — The most significant bit (MSB) determines the sign of the input integer.
Default: Unsigned
## error in
Error conditions that occur before this node runs. The node responds to this input according to standard error behavior.
Default: no error
## reset ?
A Boolean that determines how the node handles buffered data. When the length of the input bit stream is not a multiple of the bits per integer value, the leftover bits are buffered inside the node. When you set reset? to FALSE, these buffered bits are prepended to the input bit stream during the next iteration.
Default: TRUE
## output integers
The output integer stream a_n, n = 0…N-1, corresponding to the packed bits.
## error out
Error information. The node produces this output according to standard error behavior.
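To make the packing rules concrete, here is a rough Python sketch of the algorithm described above. It is an illustration, not NI's implementation; for simplicity the leftover bits are returned to the caller rather than buffered internally (cf. the reset? input):

def pack_bits(bits, bits_per_integer, msb_first=True, signed=False):
    m = bits_per_integer
    packed, cut = [], len(bits) - len(bits) % m
    for i in range(0, cut, m):
        group = bits[i:i + m]
        if not msb_first:             # LSB first: reverse the group before packing
            group = group[::-1]
        value = 0
        for b in group:               # accumulate bits, most significant first
            value = (value << 1) | b
        if signed and group[0] == 1:  # MSB set: two's-complement sign
            value -= 1 << m
        packed.append(value)
    return packed, bits[cut:]         # leftover bits, not yet packed

print(pack_bits([1, 0, 1, 1, 0, 1], 4, signed=True))  # ([-5], [0, 1])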
|
# How to compare the contents of two folders (LINQ) (C#)
This example demonstrates three ways to compare two file listings:
• By querying for a Boolean value that specifies whether the two file lists are identical.
• By querying for the intersection to retrieve the files that are in both folders.
• By querying for the set difference to retrieve the files that are in one folder but not the other.
Note
The techniques shown here can be adapted to compare sequences of objects of any type.
The FileComparer class shown here demonstrates how to use a custom comparer class together with the Standard Query Operators. The class is not intended for use in real-world scenarios. It just uses the name and length in bytes of each file to determine whether the contents of each folder are identical or not. In a real-world scenario, you should modify this comparer to perform a more rigorous equality check.
## Example
namespace QueryCompareTwoDirs
{
class CompareDirs
{
static void Main(string[] args)
{
// Create two identical or different temporary folders
// on a local drive and change these file paths.
string pathA = @"C:\TestDir";
string pathB = @"C:\TestDir2";
System.IO.DirectoryInfo dir1 = new System.IO.DirectoryInfo(pathA);
System.IO.DirectoryInfo dir2 = new System.IO.DirectoryInfo(pathB);
// Take a snapshot of the file system.
IEnumerable<System.IO.FileInfo> list1 = dir1.GetFiles("*.*", System.IO.SearchOption.AllDirectories);
IEnumerable<System.IO.FileInfo> list2 = dir2.GetFiles("*.*", System.IO.SearchOption.AllDirectories);
//A custom file comparer defined below
FileCompare myFileCompare = new FileCompare();
// This query determines whether the two folders contain
// identical file lists, based on the custom file comparer
// that is defined in the FileCompare class.
// The query executes immediately because it returns a bool.
bool areIdentical = list1.SequenceEqual(list2, myFileCompare);
if (areIdentical == true)
{
Console.WriteLine("the two folders are the same");
}
else
{
Console.WriteLine("The two folders are not the same");
}
// Find the common files. It produces a sequence and doesn't
// execute until the foreach statement.
var queryCommonFiles = list1.Intersect(list2, myFileCompare);
if (queryCommonFiles.Any())
{
Console.WriteLine("The following files are in both folders:");
foreach (var v in queryCommonFiles)
{
Console.WriteLine(v.FullName); //shows which items end up in result list
}
}
else
{
Console.WriteLine("There are no common files in the two folders.");
}
// Find the set difference between the two folders.
// For this example we only check one way.
var queryList1Only = (from file in list1
select file).Except(list2, myFileCompare);
Console.WriteLine("The following files are in list1 but not list2:");
foreach (var v in queryList1Only)
{
Console.WriteLine(v.FullName);
}
// Keep the console window open in debug mode.
Console.WriteLine("Press any key to exit.");
Console.ReadKey();
}
}
// This implementation defines a very simple comparison
// between two FileInfo objects. It only compares the name
// of the files being compared and their length in bytes.
class FileCompare : System.Collections.Generic.IEqualityComparer<System.IO.FileInfo>
{
public FileCompare() { }
public bool Equals(System.IO.FileInfo f1, System.IO.FileInfo f2)
{
return (f1.Name == f2.Name &&
f1.Length == f2.Length);
}
// Return a hash that reflects the comparison criteria. According to the
// rules for IEqualityComparer<T>, if Equals is true, then the hash codes must
// also be equal. Because equality as defined here is a simple value equality, not
// reference identity, it is possible that two or more objects will produce the same
// hash code.
public int GetHashCode(System.IO.FileInfo fi)
{
string s = \$"{fi.Name}{fi.Length}";
return s.GetHashCode();
}
}
}
## Compiling the Code
Create a C# console application project, with using directives for the System.Linq and System.IO namespaces.
|
• CommentRowNumber1.
• CommentAuthorAndrew Stacey
• CommentTimeJun 5th 2009
How do these look? Are they okay? Should there be more (if so, which?)? Are some of these too strange to be in the "official" list?
The list is based on what the xy package offers as "standard" arrowheads.
• CommentRowNumber2.
• CommentAuthorTobyBartels
• CommentTimeJun 7th 2009
Maybe this is a separate topic from arrowheads, but it would be nice to have (the thing like) ‘-|->’ for relations and profunctors.
• CommentRowNumber3.
• CommentAuthorAndrew Stacey
• CommentTimeJun 9th 2009
I'm not sure what context you want this in. Do you want it as a type of arrow in a big diagram, as essentially a virtual entity in an equation (used much as \to or \mapsto) - or both? The former is clearly an SVG thing, if the latter do you want it as an SVG that is embedded or as a hacked together entity?
If you want an SVG in equations then I can figure one out for you, but I need something a little more specific than -|->! Do you have a nice example in a picture/document somewhere that you can send me?
• CommentRowNumber4.
• CommentAuthorAndrew Stacey
• CommentTimeJun 9th 2009
By the way, what do you think of the arrowheads?
• CommentRowNumber5.
• CommentAuthorTobyBartels
• CommentTimeJun 9th 2009
• (edited Jun 9th 2009)
Yes, I like the arrowheads. And I was thinking of ‘-|->’ in big SVG diagrams; getting it into small iTeX diagrams is a struggle on another front. (^_^)
• CommentRowNumber6.
• CommentAuthorAndrew Stacey
• CommentTimeJun 9th 2009
So you mean it as like a long arrow in a commutative diagram. Okay, in that case what is it exactly? Is it a normal arrow with an orthogonal line through the middle? Or is the line somewhere else (in your latest post you write '-|>' rather than '-|->')? Either way, shouldn't be hard to mock up.
• CommentRowNumber7.
• CommentAuthorTobyBartels
• CommentTimeJun 9th 2009
Oops, typo! Fixed.
Try this (as plain TeX) to see what it should look like:
\input xy
\xyoption{all}

\bye
• CommentRowNumber8.
• CommentAuthorTobyBartels
• CommentTimeJun 9th 2009
(H'm, that looked just fine on preview, but now it looks like your math code is trying to take over. Anyway, mouse over to see the formula; cut and paste seems to work too.)
• CommentRowNumber9.
• CommentAuthorAndrew Stacey
• CommentTimeJun 9th 2009
Looks easy enough.
It'd be better style to define a new arrow stem so that you'd get that by typing
\xymatrix{ C \ar@{{}{-|-}>}[r] & D}
but that's just me being pedantic.
• CommentRowNumber10.
• CommentAuthorTobyBartels
• CommentTimeJun 9th 2009
Definitely, but I just went for the quick and dirty method to get you a picture.
• CommentRowNumber11.
• CommentAuthorMike Shulman
• CommentTimeJun 10th 2009
I usually make it with \xymatrix{ A \ar[r]|-@{|} & B }.
• CommentRowNumber12.
• CommentAuthorAndrew Stacey
• CommentTimeJun 10th 2009
Okay, mock-up is now on the SVG Sandbox. The trick is to "break" a path at its mid-point and specify a marker-mid attribute for the path.
• CommentRowNumber13.
• CommentAuthorTobyBartels
• CommentTimeJun 10th 2009
I didn't realise that one could do that, Mike. Thanks, that makes a lot of sense.
Andrew, all of those arrow parts look good! The diagram ‘just for [me] and Mike’ does not look good, since the arrow is too high compared to the letters (at least as it looks to me). But that's probably because the letters should be part of the SVG diagram, so I don't blame you (^_^).
• CommentRowNumber14.
• CommentAuthorTobyBartels
• CommentTimeJun 10th 2009
• (edited Jun 10th 2009)
I just realised, looking at the name doubleArrow, that we also want double (and triple?) arrows where the shaft is doubled, as well as where the entire arrow is doubled. (Like '⇒' and '⇉', respectively.)
• CommentRowNumber15.
• CommentAuthorAndrew Stacey
• CommentTimeJun 10th 2009
Yes, I didn't take any time to get the letters and arrow in perfect alignment so ignore that!
As for the doubled and tripled arrows, just hang on a little longer. I've been working on a conversion program and I'm almost ready for alpha testers.
(Another one to add to my list of annoyances about this forum: I see "Like '?' and '?', respectively.". Presumably you're typing in one charset and it isn't being translated correctly to whatever charset I'm viewing the page in.)
• CommentRowNumber16.
• CommentAuthorTobyBartels
• CommentTimeJun 10th 2009
Sign me up for testing then!
(Right, I forgot about that! It's worse than you think: the HTML served by the page simply has question marks, nothing more. But the preview shows things correctly! Anyway, they should be $\Rightarrow$ and $\rightrightarrow$.)
• CommentRowNumber17.
• CommentAuthorAndrew Stacey
• CommentTimeJun 11th 2009
What are you typing to try to generate those letters? Are you using the maths facility? Let's try: (\Rightarrow) and (\rightrightarrow). Or are you trying unicode characters ... <looks for character selector> ... bleugh, can't figure out where they are, or entities? Let's try: ⇒.
(I suppose I could go directly to the database and find your comment ...)
• CommentRowNumber18.
• CommentAuthorTobyBartels
• CommentTimeJun 13th 2009
I'm typing the character directly. Firefox thinks that the character encoding is utf-8, so presumably that's how it's being sent. For purposes of preview, that's how it comes back, but the post itself has a question mark instead.
But now that I think about it, does the preview even go to the server, or is it handled entirely on my side with Javascript? That could certainly explain how the preview and the post might fail to match.
• CommentRowNumber19.
• CommentAuthorAndrew Stacey
• CommentTimeJun 13th 2009
Without knowing a lot of what goes on here, I think that you're right. The preview facility is provided for by a plugin and that plugin has an ajax and a javascript component. I haven't looked at the scripts in detail, but the basis of their existence lends considerable support to your hypothesis.
That's odd. I thought that I got the maths syntax correct in my post. Now it's displaying only the input. It took a couple of attempts before I remembered how I'd set up the LaTeX input so maybe the database didn't store my edits. Let's try again: $\Rightarrow$ and $\rightrightarrow$.
Is the character set issue easy to fix, do you know? Is it extremely annoying, or can you live with it as it is?
• CommentRowNumber20.
• CommentAuthorAndrew Stacey
• CommentTimeJun 13th 2009
Oh, \rightrightarrow does not parse. Looks like I ought to adapt my buggy script into a plugin for the forum. Then whenever I get a formula that doesn't parse, I can simply define my own macros so that it does!
• CommentRowNumber21.
• CommentAuthorTobyBartels
• CommentTimeJun 13th 2009
I can work around it, I just have to remember to. The only really annoying thing is that I must remember not to trust the preview.
|
# Jahangirnagar University
Year: 2012
G Unit
Jahangirnagar University G Unit question paper and solutions: all of the questions from the 2012 Jahangirnagar University G Unit admission test, with solutions and explanations. Full admission preparation and model tests are also now available online from home.
#### 1. A snooker tournament charges $45.00 for VIP seats and $15.00 for general admission ("regular" seats). On a certain night, a total of 320 tickets were sold, for a total cost of $7,500. How many fewer tickets were sold that night for VIP seats than for general admission seats?
70
90
230
140
#### 2. A large delicatessen purchased p pounds of cheese for c dollars per pound. If d pounds of the cheese had to be discarded due to spoilage and the delicatessen sold the rest for s dollars per pound, which of the following represents the gross profit on the sale of the purchase?
s(p - d) - pc
c(p - d) - ds
(p - d)(s - c)
d(s - c) - pc
#### 3. Sabrina is contemplating a job switch. She is thinking of leaving her job paying $85,000 per year to accept a sales job paying $45,000 per year plus 15 percent commission for each sale made. If each of her sales is for $1,500, what is the least number of sales she must make per year if she is not to lose money because of the job change?
57
177
178
378
#### 4. How many liters of a 40% iodine solution need to be mixed with 35 liters of a 20% iodine solution to create a 35% iodine solution?
105
49
100
140
#### 5. In the figure below, AD = 4, AB = 3 and CD = 9. What is the area of triangle AEC?
18
13.5
9
4.5
#### 6. Over the course of a year, a certain microbrewery increased its beer output by 70 percent. At the same time, it decreased its total working hours by 20 percent. By what percent did this factory increase its output per hour?
90%
112.5%
212.5%
50%
#### 7. Dr. Kramer plans to invest $20,000 in an account paying 6% interest annually. How much more must she invest at the same time at 3% so that her total annual income during the first year is 4% of her entire investment?
$32,000
$36,000
$40,000
$49,000
#### 8. Two dogsled teams raced across a 300 mile course in Wyoming. Team A finished the course in 3 fewer hours than did team B. If team A's average speed was 5 miles per hour greater than that of team B, what was team B's average speed, in miles per hour?
12
20
18
25
#### 9. If x dozen eggs cost y dollars, what is the cost, C, of z dozen eggs?
xyz
xy/z
xy+z
yz/x
#### 10. If 11 persons meet at a reunion and each person shakes hands exactly once with each of the others, what is the total number of handshakes?
110
220
55
990
#### 11. In a rare coin collection, all coins are either pure gold or pure silver, and there is initially one gold coin for every three silver coins. With the addition of 10 more gold coins to the collection, the ratio of gold coins to silver coins is 1 to 2. Based on this information, how many total coins are there now in this collection (after the acquisition)?
40
50
70
90
#### 12. When the integer k is divided by 7, the remainder is 5. Which of the following expressions, when divided by 7, will have a remainder of 6? I. 4k + 7  II. 6k + 1  III. 8k + 1
I only
I and II only
I , II and III
III only
#### 13. A grocer has 400 pounds of coffee in stock, 20 percent of which is decaffeinated. If the grocer buys another 100 pounds of coffee of which 60 percent is decaffeinated, what percent, by weight, of the grocer's stock of coffee is decaffeinated?
30%
32%
40%
28%
No explanation has been added.
#### 14. Timothy leaves home for school, riding his bicycle at a rate of 9 miles per hour. Fifteen minutes after he leaves, his mother sees Timothy's math homework lying on his bed and immediately leaves home to bring it to him. If his mother drives at 36 miles per hour, how far (in miles) must she drive before she reaches Timothy?
1/3
4
3
12
No explanation has been added.
#### 15. A couple spent $264 in total while dining out and paid this amount using a credit card. The $264 figure included a 20 percent tip, which was paid on top of the price of the food, which already included a sales tax of 10 percent. What was the actual price of the meal before tax and tip?
$184
$204
$212
$200
No explanation has been added.
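A brief worked step, for reference (and the reason the total is read as $264): with x the pre-tax, pre-tip price,
$$x \times 1.10 \times 1.20 = 264 \;\Rightarrow\; x = \frac{264}{1.32} = 200$$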
#### 16. One-fifth of the batteries produced by an upstart factory are defective and one-quarter of all batteries produced are rejected by the quality control technician. If one-tenth of the non-defective batteries are rejected by mistake and if all the batteries not rejected are sold, then what percent of the batteries sold by the factory are defective?
4%
3%
6%
5%
No explanation has been added.
#### 17. Bill and Ben can clean the garage together in 6 hours. If it takes Bill 10 hours working alone, how long will it take Ben working alone?
11 hours
16 hours
15 hours
4 hours
No explanation has been added.
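A brief worked step, for reference: using work rates,
$$\frac{1}{\text{Ben}} = \frac{1}{6} - \frac{1}{10} = \frac{1}{15} \;\Rightarrow\; 15 \text{ hours}$$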
#### 18. In a class, 60% of the students enrolled for Math and 70% enrolled for Economics. If 45% of the students enrolled for both Math and Economics, what percentage of the students of the class did not enroll for either of the two subjects?
85%
30%
25%
15%
No explanation has been added.
#### 19. A mother and father have seven sons and each son has two sisters. How many people are in that family?
21
23
11
9
No explanation has been added.
#### 20. What comes next?
13211A
311112A
13211B
31221A
No explanation has been added.
#### 21. Who is the tallest? Clue: A is taller than B; C is shorter than A; D is taller than B; D is shorter than A; E is taller than A.
A
D
B
E
No explanation has been added.
#### 22. What number comes next in this sequence?
29
40
27
45
No explanation has been added.
#### 23. Which number comes next in the following sequence? 53472, 2435, 342, ?
42
22
24
23
No explanation has been added.
#### 24. The Bangladeshi-American scientist who achieved four milestones in genomics, sequencing the genomes of papaya, rubber, jute, and a fungus, is
Dr. Muhammed Zafar Iqbal
Dr. M. Shamsher ali
Dr. Jamal Nazrul Islam
Dr. Maqsudul Alam
No explanation has been added.
#### 25. Who did not win the Ramon Magsaysay Award from Bangladesh?
Syeda Rizwana Hasan
Abdullah Abu Sayeed
Dr. Debapriya
No explanation has been added.
#### 26. How many gold medals were won by the American swimmer Michael Phelps across all Olympics?
20
23
24
25
No explanation has been added.
|
# What does $ mean in Markdown (on Economics Stack Exchange)? [duplicate]
I was using $ characters to mean, well, "dollars" in a question on Economics SE and it caused the rendered text to go all screwy. I briefly reviewed the help text but could not zero in on the meaning of that character as Markdown. Finally I just "escaped" the character with a backslash and got on with things.
But what does $ mean in Markdown? Does it have a special function on all SE sites, or only some? (I've been on Stack Overflow and EL&U for a long time and never run across this problem.)
• Why not ask on Economics Meta? – ale Apr 7 '17 at 0:46
• Not sure who voted to close as "pertains to a specific site". This currently pertains to something like 59 sites (31 mains + some metas), and any other future sites that may have MathJax enabled. – Jason C Apr 7 '17 at 1:36
## 1 Answer
The $ is used to delineate MathJax, a LaTeX renderer used to present mathematical equations. Not every site has it enabled, but Economics does.
For more information, check out the Economics version of the editing help; it describes $ and $$. For MathJax itself, if you're interested in how it works, it's easiest just to check out their site or Google for things like "MathJax cheat sheet", etc. There's also this reference on the Mathematics site. If you want to type an actual dollar sign on sites with MathJax enabled, precede it with a backslash, as in \$. It may take some getting used to, but that backslash is actually a general mechanism you can use to escape any special Markdown character; e.g., I can type [not a link](http://example.com) by putting a backslash before, say, the left parenthesis.
Example:
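For instance, the escaped source `\[not a link\](http://example.com)` displays as the literal text [not a link](http://example.com) rather than a hyperlink, and `\$20` displays as $20. (The original example did not survive extraction; this one is reconstructed from the answer's own description above.)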
• PS I do actually need \$20... :/ – Jason C Apr 7 '17 at 1:34
• sorry, all I've got is a %20 – William Price Apr 7 '17 at 3:27
• Outer %%%%%200000000!! – Jason C Apr 7 '17 at 3:34
|
# year (Gregorian) to blink conversion
The conversion number between year (Gregorian) [a, y, or yr] and blink is 126227808. This means that the year (Gregorian) is a bigger unit than the blink.
Switch to reverse conversion:
from blink to year (Gregorian) conversion
### Calculation process of conversion value
• 1 year (Gregorian) = (31556952) / (0.25) = 126227808 blink
• 1 blink = (0.25) / (31556952) = 7.9221846267029 × 10⁻⁹ year (Gregorian)
• ? year (Gregorian) × (31556952 ("s"/"year (Gregorian)")) / (0.25 ("s"/"blink")) = ? blink
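The same calculation as a minimal Python sketch; the constants are taken from the unit definitions further down this page:

```python
# Conversion between Gregorian years and blinks, per this page's definitions.
GREGORIAN_YEAR_S = 31_556_952  # 365.2425 days x 86,400 s/day
BLINK_S = 0.25                 # average human blink duration in seconds

def years_to_blinks(years):
    """Convert Gregorian years to blinks."""
    return years * GREGORIAN_YEAR_S / BLINK_S

def blinks_to_years(blinks):
    """Convert blinks to Gregorian years."""
    return blinks * BLINK_S / GREGORIAN_YEAR_S

print(years_to_blinks(1))  # 126227808.0
print(blinks_to_years(1))  # ~7.9221846267029e-09
```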
### High precision conversion
If the conversions between year (Gregorian) and second, and between second and blink, are exactly defined, high precision conversion from year (Gregorian) to blink is enabled.
Since the definitions contain rounded numbers, there is little point in a high precision calculation, but you can enable it if you want. Keep in mind that the converted number will be inaccurate due to this rounding error!
### year (Gregorian) to blink conversion chart
| year (Gregorian) | blink |
|---:|---:|
| 0 | 0 |
| 10 | 1262278080 |
| 20 | 2524556160 |
| 30 | 3786834240 |
| 40 | 5049112320 |
| 50 | 6311390400 |
| 60 | 7573668480 |
| 70 | 8835946560 |
| 80 | 10098224640 |
| 90 | 11360502720 |
| 100 | 12622780800 |
| 110 | 13885058880 |
## Details about year (Gregorian) and blink units:
Convert Year (Gregorian) to other unit:
### year (Gregorian)
Definition of year (Gregorian) unit: = 365.2425 d on average. As the common year has 365 days, the Gregorian calendar uses leap years to compensate for the deviation from the real, astronomical year. According to this calendar, every 4th year is a leap year, except for every 100th; but every 400th is a leap year. This means that there are 97 leap years in a 400-year period, so according to the Gregorian calendar one year has 365 + 97/400 days on average. This is not a perfect approach, but over a 1000-year period the deviation is only about 0.3 days compared to the astronomical year. In 1582 the Gregorian calendar replaced the Julian calendar.
Convert Blink to other unit:
### blink
Definition of blink unit: ≈ 0.25 s. Human blink duration ranges from 0.1 to 0.4 seconds. Blink time is taken as the average of this range, a quarter of a second.
|
# Thermodynamics -- Temperature of a Heat Source?
Summary:
2nd law of thermodynamics and Carnot heat engine
In a heat engine we define a heat source from which heat is transferred to the system, and we say that the heat source has a temperature ##T_h##. When we define a Carnot heat engine, the first process is an isothermal expansion, through which heat has to enter the system, and here ##T_h## is the constant temperature of that isothermal process. Are these two the same? I'm confused about what the source and sink actually are in both cases and what their temperatures are. Also, in the entropy equation $$\oint dQ_r/T=0$$ what is the temperature T?
Chestermiller
Mentor
The temperature of the heat source and the system temperature are both the same value in the isothermal heating segment of the Carnot cycle. In the "entropy equation" that you have written, T is the hot reservoir temperature (and system temperature) during the isothermal heating (expansion) segment, and T is the cold reservoir temperature (and system temperature) during the isothermal cooling (compression) segment.
Does that mean the heat source and system temperatures are in general two different things, and the Carnot heat engine is a special case where the reservoir and system temperatures are the same? Can you give me one practical example, sir, to elaborate this source, sink, and system temperature concept?
Chestermiller
Mentor
Does that mean the heat source and system temperatures are in general two different things, and the Carnot heat engine is a special case where the reservoir and system temperatures are the same? Can you give me one practical example, sir, to elaborate this source, sink, and system temperature concept?
Please describe for me in your own words the details of what is happening in each of the 4 segments of the Carnot cycle.
In the first segment: reversible isothermal expansion.
In the 3rd: reversible isothermal compression.
Chestermiller
Mentor
How do the constant gas temperatures in steps 1 and 3 compare? What is the direction of heat transfer in step 1, and what is the direction of heat transfer in step 3? How do the amounts of heat transferred in steps 1 and 3 compare?
I don't get what is meant by comparing the constant gas temperatures of steps 1 and 3. The rest is as follows:
Heat is getting into the system in step 1.
Heat is getting out of the system in step 3.
We can find the heat transfer in steps 1 and 3 using the work done, since in a reversible isothermal process heat transfer = work done:
$$Q=W=nRTln(V_2/V_1)$$
Chestermiller
Mentor
In step 1, the temperature of the hot reservoir (and the gas) is higher than in step 3, where the temperature of the cold reservoir (and the gas) is lower. Actually, in step 1, the temperature of the hot reservoir is slightly higher than the average gas temperature and, in step 3, the temperature of the cold reservoir is slightly lower than the average gas temperature; that way, heat can be transferred. Is this what you were wondering about?
So for the cycle, $$\Delta S_{gas}=\frac{Q_{hot}}{T_{hot}}-\frac{Q_{cold}}{T_{cold}}=0$$ Is this what you were asking?
We know that during a phase change operation ##\Delta S## = ##m\lambda / T_{sat}##,
where ##dQ=m\lambda##.
How is that T determined? That is, what is the source here, or the sink?
Chestermiller
Mentor
We know that during a phase change operation ##\Delta S## = ##m\lambda / T_{sat}##,
where ##dQ=m\lambda##.
How is that T determined? That is, what is the source here, or the sink?
You are aware that, in the standard Carnot cycle, there is no change of phase, right?
For a reversible expansion involving a working fluid phase change, to be reversible, the hot reservoir temperature must be slightly higher than the saturation temperature of the working fluid at the pressure of the expansion. Is that what you are asking?
We know that during a phase change operation ##\Delta S## = ##m\lambda / T_{sat}##,
where ##dQ=m\lambda##.
How is that T determined? That is, what is the source here, or the sink?
I mean, let's not consider any Carnot cycle here; just a reversible phase change process.
Chestermiller
Mentor
I mean, let's not consider any Carnot cycle here; just a reversible phase change process.
For the change to be reversible, the temperature of the source must be slightly higher than the saturation temperature of the working fluid that is changing phase; otherwise, if the hot reservoir temperature is higher than this, there will be temperature gradients within the working fluid (giving rise to entropy generation), and $$\Delta S_{fluid}=\frac{m\lambda}{T_{sat}}>\frac{m\lambda}{T_h}$$ where ##Q=m\lambda##. So, for the irreversible case, the change in entropy of the working fluid will be greater than the heat transferred from the hot reservoir divided by the hot reservoir temperature. In the application of the Clausius inequality, you always use the temperature at the boundary between the system and the surroundings to divide the heat transferred at the boundary. In the present case, this is the same as the hot reservoir temperature.
Thank you so much, sir, I get it now. One final doubt: I'm attaching a picture. I don't want to know its solution; just tell me what ##Q_h## is here, i.e., its practical meaning, the way ##Q_c## is the heat removed from the refrigerator.
#### Attachments
• P_20191221_201111.jpg (42.1 KB)
Chestermiller
Mentor
Thank you so much, sir, I get it now. One final doubt: I'm attaching a picture. I don't want to know its solution; just tell me what ##Q_h## is here, i.e., its practical meaning, the way ##Q_c## is the heat removed from the refrigerator.
##Q_h## is the heat rejected to the room, and is equal to $$Q_h=Q_c+W=5\ kW + W$$
If you look under the refrigerator, you will see a set of tubular coils over which room air is blown by a fan to remove the heat. The heat transfer occurs at the room air temperature.
Correct me if I'm wrong:
Let's say a refrigerator is working at 5 degrees Celsius and we need it at 3 degrees Celsius. The heat that must be removed to bring it from 5 to 3 degrees Celsius is ##Q_c##, and to remove this heat we have to do some work W; so ##Q_c## + W is ##Q_h##, and this ##Q_h## is what we feel as warmth outside a refrigerator.
Chestermiller
Mentor
Correct me if I'm wrong:
Let's say a refrigerator is working at 5 degrees Celsius and we need it at 3 degrees Celsius. The heat that must be removed to bring it from 5 to 3 degrees Celsius is ##Q_c##, and to remove this heat we have to do some work W; so ##Q_c## + W is ##Q_h##, and this ##Q_h## is what we feel as warmth outside a refrigerator.
If you wanted to lower the temperature inside the refrigerator from 5 C to 3 C, you would have to increase ##Q_c## above 5 kW, and this would require a proportional increase in both W and ##Q_h##. Regarding ##Q_h##, it is what you feel when you put your hand in the air stream coming from the outside coils.
Okay sir , thank you very much again :)
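As a concrete illustration of this energy balance, consider a Carnot-limit refrigerator removing the thread's ##Q_c = 5## kW; the temperatures below are assumed for illustration and are not from the attached problem:
$$\frac{Q_c}{Q_h} = \frac{T_c}{T_h} \;\Rightarrow\; W = Q_c\,\frac{T_h - T_c}{T_c} = 5\ \text{kW} \times \frac{298 - 278}{278} \approx 0.36\ \text{kW}, \qquad Q_h = Q_c + W \approx 5.36\ \text{kW}$$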
|
## anonymous 3 years ago How do you find the volume of a cylinder?
1. anonymous
A cylindrical barrel of oil has a height of 1.5m and a diameter of 0.6m. Calculate its total capacity.
2. anonymous
v=pi x r^2 x h
3. anonymous
v= 3.14 x 0.3^2 x 1.5
4. anonymous
Volume of any 'prism' is area of base * height. The base is a circle, whose area is pi*r^2, so the volume is pi*r^2*h. The radius is half of the diameter, so just halve your value.
5. jiteshmeghwal9
$\Huge{\color{red}{ \pi r^2 h}}$
6. anonymous
so I do pi x r^2?
7. anonymous
yes then times height
8. anonymous
oh ok thank you!=D
9. anonymous
ur welcome :)
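For completeness, plugging the barrel's numbers into the formula from the thread (a worked step added here, not part of the original exchange):
$$V = \pi r^2 h = \pi \times 0.3^2 \times 1.5 \approx 0.42\ \text{m}^3$$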
|
# aiida.cmdline.params.types package¶
Provides all parameter types.
class aiida.cmdline.params.types.LazyChoice(get_choices)[source]
Bases: click.types.ParamType
This is a delegate of click’s Choice ParamType that evaluates the set of choices lazily. This is useful if the choices set requires an import that is slow. Using the vanilla click.Choice will call this on import which will slow down verdi and its autocomplete. This type will generate the choices set lazily through the choices property
__init__(get_choices)[source]
Initialize self. See help(type(self)) for accurate signature.
__module__ = 'aiida.cmdline.params.types.choice'
__repr__()[source]
Return repr(self).
property _click_choice
Get the internal click Choice object that we delegate functionality to. Will construct it lazily if necessary.
Returns
The click Choice
Return type
click.Choice
property choices
convert(value, param, ctx)[source]
Converts the value. This is not invoked for values that are None (the missing value).
get_metavar(param)[source]
Returns the metavar default for this param if it provides one.
get_missing_message(param)[source]
Optionally might return extra information about a missing parameter.
New in version 2.0.
name = 'choice'
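A minimal usage sketch of LazyChoice (the option name and choices callable are hypothetical, not from the AiiDA docs): the callable is only invoked when the choices are actually needed, which keeps import time low.

```python
import click

from aiida.cmdline.params.types import LazyChoice

def get_flavors():
    # Hypothetical callable; imagine this required a slow import.
    return ('vanilla', 'chocolate')

@click.command()
@click.option('--flavor', type=LazyChoice(get_flavors))
def order(flavor):
    """Echo the chosen flavor."""
    click.echo(flavor)

if __name__ == '__main__':
    order()
```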
class aiida.cmdline.params.types.IdentifierParamType(sub_classes=None)[source]
Bases: click.types.ParamType, abc.ABC
An extension of click.ParamType for a generic identifier parameter. In AiiDA, orm entities can often be identified by either their ID, UUID or optionally some LABEL identifier. This parameter type implements the convert method, which attempts to convert a value passed to the command for a parameter with this type, to an orm entity. The actual loading of the entity is delegated to the orm class loader. Subclasses of this parameter type should implement the orm_class_loader method to return the appropriate orm class loader, which should be a subclass of aiida.orm.utils.loaders.OrmEntityLoader for the corresponding orm class.
__abstractmethods__ = frozenset({'orm_class_loader'})
__init__(sub_classes=None)[source]
Construct the parameter type, optionally specifying a tuple of entry points that reference classes that should be a sub class of the base orm class of the orm class loader. The classes pointed to by these entry points will be passed to the OrmEntityLoader when converting an identifier and they will restrict the query set by demanding that the class of the corresponding entity matches these sub classes.
To prevent having to load the database environment at import time, the actual loading of the entry points is deferred until the call to convert is made. This is to keep the command line autocompletion light and responsive. The entry point strings will be validated, however, to see if they correspond to known entry points.
Parameters
sub_classes – a tuple of entry point strings that can narrow the set of orm classes that values will be mapped upon. These classes have to be strict sub classes of the base orm class defined by the orm class loader
__module__ = 'aiida.cmdline.params.types.identifier'
_abc_impl = <_abc_data object>
convert(value, param, ctx)[source]
Attempt to convert the given value to an instance of the orm class using the orm class loader.
Returns
the loaded orm entity
Raises
• click.BadParameter – if the value is ambiguous and leads to multiple entities
• click.BadParameter – if the value cannot be mapped onto any existing instance
• RuntimeError – if the defined orm class loader is not a subclass of the OrmEntityLoader class
abstract property orm_class_loader
Return the orm entity loader class, which should be a subclass of OrmEntityLoader. This class is supposed to be used to load the entity for a given identifier
Returns
the orm entity loader class for this ParamType
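To illustrate the contract, a hypothetical subclass might look like the sketch below. It assumes the NodeEntityLoader from aiida.orm.utils.loaders; this is not necessarily how the shipped NodeParamType is written.

```python
from aiida.cmdline.params.types import IdentifierParamType
from aiida.orm.utils.loaders import NodeEntityLoader  # assumed loader class

class ExampleNodeParamType(IdentifierParamType):
    """Identify Node entities by ID, UUID or label."""

    name = 'ExampleNode'

    @property
    def orm_class_loader(self):
        # convert() delegates the actual entity lookup to this loader.
        return NodeEntityLoader
```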
class aiida.cmdline.params.types.CalculationParamType(sub_classes=None)[source]
The ParamType for identifying Calculation entities or its subclasses
__abstractmethods__ = frozenset({})
__module__ = 'aiida.cmdline.params.types.calculation'
_abc_impl = <_abc_data object>
name = 'Calculation'
property orm_class_loader
Return the orm entity loader class, which should be a subclass of OrmEntityLoader. This class is supposed to be used to load the entity for a given identifier
Returns
the orm entity loader class for this ParamType
class aiida.cmdline.params.types.CodeParamType(sub_classes=None, entry_point=None)[source]
The ParamType for identifying Code entities or its subclasses
__abstractmethods__ = frozenset({})
__init__(sub_classes=None, entry_point=None)[source]
Construct the param type
Parameters
• sub_classes – specify a tuple of Code sub classes to narrow the query set
• entry_point – specify an optional calculation entry point that the Code’s input plugin should match
__module__ = 'aiida.cmdline.params.types.code'
_abc_impl = <_abc_data object>
complete(ctx, incomplete)[source]
Return possible completions based on an incomplete value.
Returns
list of tuples of valid entry points (matching incomplete) and a description
convert(value, param, ctx)[source]
Attempt to convert the given value to an instance of the orm class using the orm class loader.
Returns
the loaded orm entity
Raises
• click.BadParameter – if the value is ambiguous and leads to multiple entities
• click.BadParameter – if the value cannot be mapped onto any existing instance
• RuntimeError – if the defined orm class loader is not a subclass of the OrmEntityLoader class
name = 'Code'
property orm_class_loader
Return the orm entity loader class, which should be a subclass of OrmEntityLoader. This class is supposed to be used to load the entity for a given identifier
Returns
the orm entity loader class for this ParamType
class aiida.cmdline.params.types.ComputerParamType(sub_classes=None)[source]
The ParamType for identifying Computer entities or its subclasses
__abstractmethods__ = frozenset({})
__module__ = 'aiida.cmdline.params.types.computer'
__slotnames__ = []
_abc_impl = <_abc_data object>
complete(ctx, incomplete)[source]
Return possible completions based on an incomplete value.
Returns
list of tuples of valid entry points (matching incomplete) and a description
name = 'Computer'
property orm_class_loader
Return the orm entity loader class, which should be a subclass of OrmEntityLoader. This class is supposed to be used to load the entity for a given identifier
Returns
the orm entity loader class for this ParamType
class aiida.cmdline.params.types.ConfigOptionParamType[source]
Bases: click.types.StringParamType
ParamType for configuration options.
__module__ = 'aiida.cmdline.params.types.config'
complete(ctx, incomplete)[source]
Return possible completions based on an incomplete value
Returns
list of tuples of valid entry points (matching incomplete) and a description
convert(value, param, ctx)[source]
Converts the value. This is not invoked for values that are None (the missing value).
name = 'config option'
class aiida.cmdline.params.types.DataParamType(sub_classes=None)[source]
The ParamType for identifying Data entities or its subclasses
__abstractmethods__ = frozenset({})
__module__ = 'aiida.cmdline.params.types.data'
_abc_impl = <_abc_data object>
name = 'Data'
property orm_class_loader
Return the orm entity loader class, which should be a subclass of OrmEntityLoader. This class is supposed to be used to load the entity for a given identifier
Returns
the orm entity loader class for this ParamType
class aiida.cmdline.params.types.GroupParamType(create_if_not_exist=False, sub_classes=('aiida.groups:core', ))[source]
The ParamType for identifying Group entities or its subclasses.
__abstractmethods__ = frozenset({})
__init__(create_if_not_exist=False, sub_classes=('aiida.groups:core', ))[source]
Construct the parameter type.
The sub_classes argument can be used to narrow the set of subclasses of Group that should be matched. By default all subclasses of Group will be matched, otherwise it is restricted to the subclasses that correspond to the entry point names in the tuple of sub_classes.
To prevent having to load the database environment at import time, the actual loading of the entry points is deferred until the call to convert is made. This is to keep the command line autocompletion light and responsive. The entry point strings will be validated, however, to see if they correspond to known entry points.
Parameters
• create_if_not_exist – boolean, if True, will create the group if it does not yet exist. By default the group created will be of class Group, unless another subclass is specified through sub_classes. Note that in this case, only a single entry point name can be specified
• sub_classes – a tuple of entry point strings from the aiida.groups entry point group.
__module__ = 'aiida.cmdline.params.types.group'
_abc_impl = <_abc_data object>
complete(ctx, incomplete)[source]
Return possible completions based on an incomplete value.
Returns
list of tuples of valid entry points (matching incomplete) and a description
convert(value, param, ctx)[source]
name = 'Group'
property orm_class_loader
Return the orm entity loader class, which should be a subclass of OrmEntityLoader.
This class is supposed to be used to load the entity for a given identifier.
Returns
the orm entity loader class for this ParamType
class aiida.cmdline.params.types.NodeParamType(sub_classes=None)[source]
The ParamType for identifying Node entities or its subclasses
__abstractmethods__ = frozenset({})
__module__ = 'aiida.cmdline.params.types.node'
_abc_impl = <_abc_data object>
name = 'Node'
property orm_class_loader
Return the orm entity loader class, which should be a subclass of OrmEntityLoader. This class is supposed to be used to load the entity for a given identifier
Returns
the orm entity loader class for this ParamType
class aiida.cmdline.params.types.MpirunCommandParamType[source]
Bases: click.types.StringParamType
Custom click param type for mpirun-command
Note
requires also a scheduler to be provided, and the scheduler must be called first!
Validate that the provided ‘mpirun’ command only contains replacement fields (e.g. {tot_num_mpiprocs}) that are known.
Return a list of arguments (by using value.strip().split(' ') on the input string).
__module__ = 'aiida.cmdline.params.types.computer'
__repr__()[source]
Return repr(self).
convert(value, param, ctx)[source]
Converts the value. This is not invoked for values that are None (the missing value).
name = 'mpiruncommandstring'
class aiida.cmdline.params.types.MultipleValueParamType(param_type)[source]
Bases: click.types.ParamType
An extension of click.ParamType that can parse multiple values for a given ParamType
__init__(param_type)[source]
Initialize self. See help(type(self)) for accurate signature.
__module__ = 'aiida.cmdline.params.types.multiple'
convert(value, param, ctx)[source]
Converts the value. This is not invoked for values that are None (the missing value).
get_metavar(param)[source]
Returns the metavar default for this param if it provides one.
class aiida.cmdline.params.types.NonEmptyStringParamType[source]
Bases: click.types.StringParamType
Parameter whose values have to be string and non-empty.
__module__ = 'aiida.cmdline.params.types.strings'
__repr__()[source]
Return repr(self).
__slotnames__ = []
convert(value, param, ctx)[source]
Converts the value. This is not invoked for values that are None (the missing value).
name = 'nonemptystring'
class aiida.cmdline.params.types.PluginParamType(group=None, load=False, *args, **kwargs)[source]
AiiDA Plugin name parameter type.
Parameters
• group – string or tuple of strings, where each is a valid entry point group. Adding the aiida. prefix is optional. If it is not detected it will be prepended internally.
• load – when set to True, convert will not return the entry point, but the loaded entry point
Usage:
click.option(... type=PluginParamType(group='aiida.calculations'))
or:
click.option(... type=PluginParamType(group=('calculations', 'data')))
__init__(group=None, load=False, *args, **kwargs)[source]
Validate that group is either a string or a tuple of valid entry point groups, or if it is not specified use the tuple of all recognized entry point groups.
__module__ = 'aiida.cmdline.params.types.plugin'
__slotnames__ = []
_init_entry_points()[source]
Populate entry point information that will be used later on. This should only be called once in the constructor after setting self.groups because the groups should not be changed after instantiation
complete(ctx, incomplete)[source]
Return possible completions based on an incomplete value
Returns
list of tuples of valid entry points (matching incomplete) and a description
convert(value, param, ctx)[source]
Convert the string value to an entry point instance, if the value can be successfully parsed into an actual entry point. Will raise click.BadParameter if validation fails.
get_entry_point_from_string(entry_point_string)[source]
Validate a given entry point string, which means that it should have a valid entry point string format and that the entry point unambiguously corresponds to an entry point in the groups configured for this instance of PluginParameterType.
Returns
the entry point if valid
Raises
ValueError if the entry point string is invalid
get_missing_message(param)[source]
Optionally might return extra information about a missing parameter.
New in version 2.0.
get_possibilities(incomplete='')[source]
Return a list of plugins starting with incomplete
get_valid_arguments()[source]
Return a list of all available plugins for the groups configured for this PluginParamType instance. If the entry point names are not unique, because there are multiple groups that contain an entry point that has an identical name, we need to prefix the names with the full group name
Returns
list of valid entry point strings
property groups
property has_potential_ambiguity
Returns whether the set of supported entry point groups can lead to ambiguity when only an entry point name is specified. This will happen if one or more groups share an entry point with a common name
name = 'plugin'
class aiida.cmdline.params.types.AbsolutePathParamType(exists=False, file_okay=True, dir_okay=True, writable=False, readable=True, resolve_path=False, allow_dash=False, path_type=None)[source]
Bases: click.types.Path
The ParamType for identifying absolute Paths (derived from click.Path).
__module__ = 'aiida.cmdline.params.types.path'
__repr__()[source]
Return repr(self).
convert(value, param, ctx)[source]
Converts the value. This is not invoked for values that are None (the missing value).
name = 'AbsolutePath'
class aiida.cmdline.params.types.ShebangParamType[source]
Bases: click.types.StringParamType
Custom click param type for shebang line
__module__ = 'aiida.cmdline.params.types.computer'
__repr__()[source]
Return repr(self).
convert(value, param, ctx)[source]
Converts the value. This is not invoked for values that are None (the missing value).
name = 'shebangline'
class aiida.cmdline.params.types.UserParamType(create=False)[source]
Bases: click.types.ParamType
The user parameter type for click. Can get or create a user.
__init__(create=False)[source]
Parameters
create – If the user does not exist, create a new instance (unstored).
__module__ = 'aiida.cmdline.params.types.user'
complete(ctx, incomplete)[source]
Return possible completions based on an incomplete value
Returns
list of tuples of valid entry points (matching incomplete) and a description
convert(value, param, ctx)[source]
name = 'user'
class aiida.cmdline.params.types.TestModuleParamType[source]
Bases: click.types.ParamType
Parameter type to represent a unittest module.
Defunct - remove when removing the “verdi devel tests” command.
__module__ = 'aiida.cmdline.params.types.test_module'
name = 'test module'
class aiida.cmdline.params.types.ProfileParamType(*args, **kwargs)[source]
The profile parameter type for click.
__init__(*args, **kwargs)[source]
Initialize self. See help(type(self)) for accurate signature.
__module__ = 'aiida.cmdline.params.types.profile'
complete(ctx, incomplete)[source]
Return possible completions based on an incomplete value
Returns
list of tuples of valid entry points (matching incomplete) and a description
convert(value, param, ctx)[source]
Attempt to match the given value to a valid profile.
static deconvert_default(value)[source]
name = 'profile'
class aiida.cmdline.params.types.WorkflowParamType(sub_classes=None)[source]
The ParamType for identifying WorkflowNode entities or its subclasses
__abstractmethods__ = frozenset({})
__module__ = 'aiida.cmdline.params.types.workflow'
_abc_impl = <_abc_data object>
name = 'WorkflowNode'
property orm_class_loader
Return the orm entity loader class, which should be a subclass of OrmEntityLoader. This class is supposed to be used to load the entity for a given identifier
Returns
the orm entity loader class for this ParamType
class aiida.cmdline.params.types.ProcessParamType(sub_classes=None)[source]
The ParamType for identifying ProcessNode entities or its subclasses
__abstractmethods__ = frozenset({})
__module__ = 'aiida.cmdline.params.types.process'
_abc_impl = <_abc_data object>
name = 'Process'
property orm_class_loader
Return the orm entity loader class, which should be a subclass of OrmEntityLoader. This class is supposed to be used to load the entity for a given identifier
Returns
the orm entity loader class for this ParamType
class aiida.cmdline.params.types.PathOrUrl(timeout_seconds=10, **kwargs)[source]
Bases: click.types.Path
Extension of click’s Path-type to include URLs.
A PathOrUrl can either be a click.Path-type or a URL.
Parameters
timeout_seconds (int) – Maximum timeout accepted for URL response. Must be an integer in the range [0;60].
__init__(timeout_seconds=10, **kwargs)[source]
Initialize self. See help(type(self)) for accurate signature.
__module__ = 'aiida.cmdline.params.types.path'
checks_url(url, param, ctx)[source]
Check whether URL is reachable within timeout.
convert(value, param, ctx)[source]
Overwrite convert Check first if click.Path-type, then check if URL.
name = 'PathOrUrl'
class aiida.cmdline.params.types.FileOrUrl(timeout_seconds=10, **kwargs)[source]
Bases: click.types.File
Extension of click’s File-type to include URLs.
Returns handle either to local file or to remote file fetched from URL.
Parameters
timeout_seconds (int) – Maximum timeout accepted for URL response. Must be an integer in the range [0;60].
__init__(timeout_seconds=10, **kwargs)[source]
Initialize self. See help(type(self)) for accurate signature.
__module__ = 'aiida.cmdline.params.types.path'
convert(value, param, ctx)[source]
Return file handle.
get_url(url, param, ctx)[source]
Retrieve file from URL.
name = 'FileOrUrl'
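Putting a few of these types together, a command might be declared as in the sketch below; the command itself is invented for illustration and assumes a configured AiiDA profile.

```python
import click

from aiida.cmdline.params.types import CodeParamType, GroupParamType

@click.command('inspect')
@click.argument('code', type=CodeParamType())
@click.option('--group', type=GroupParamType(create_if_not_exist=True),
              help='Optional group, created if it does not exist yet.')
def inspect(code, group):
    """Load a Code by ID, UUID or label, and optionally a Group."""
    click.echo('code: {}'.format(code))
    if group is not None:
        click.echo('group: {}'.format(group))
```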
## Submodules¶
Module for the calculation parameter type
class aiida.cmdline.params.types.calculation.CalculationParamType(sub_classes=None)[source]
The ParamType for identifying Calculation entities or its subclasses
__abstractmethods__ = frozenset({})
__module__ = 'aiida.cmdline.params.types.calculation'
_abc_impl = <_abc_data object>
name = 'Calculation'
property orm_class_loader
Return the orm entity loader class, which should be a subclass of OrmEntityLoader. This class is supposed to be used to load the entity for a given identifier
Returns
the orm entity loader class for this ParamType
A custom click type that defines a lazy choice
class aiida.cmdline.params.types.choice.LazyChoice(get_choices)[source]
Bases: click.types.ParamType
This is a delegate of click’s Choice ParamType that evaluates the set of choices lazily. This is useful if the choices set requires an import that is slow. Using the vanilla click.Choice will call this on import which will slow down verdi and its autocomplete. This type will generate the choices set lazily through the choices property
__init__(get_choices)[source]
Initialize self. See help(type(self)) for accurate signature.
__module__ = 'aiida.cmdline.params.types.choice'
__repr__()[source]
Return repr(self).
property _click_choice
Get the internal click Choice object that we delegate functionality to. Will construct it lazily if necessary.
Returns
The click Choice
Return type
click.Choice
property choices
convert(value, param, ctx)[source]
Converts the value. This is not invoked for values that are None (the missing value).
get_metavar(param)[source]
Returns the metavar default for this param if it provides one.
get_missing_message(param)[source]
Optionally might return extra information about a missing parameter.
New in version 2.0.
name = 'choice'
Module to define the custom click type for code.
class aiida.cmdline.params.types.code.CodeParamType(sub_classes=None, entry_point=None)[source]
The ParamType for identifying Code entities or its subclasses
__abstractmethods__ = frozenset({})
__init__(sub_classes=None, entry_point=None)[source]
Construct the param type
Parameters
• sub_classes – specify a tuple of Code sub classes to narrow the query set
• entry_point – specify an optional calculation entry point that the Code’s input plugin should match
__module__ = 'aiida.cmdline.params.types.code'
_abc_impl = <_abc_data object>
complete(ctx, incomplete)[source]
Return possible completions based on an incomplete value.
Returns
list of tuples of valid entry points (matching incomplete) and a description
convert(value, param, ctx)[source]
Attempt to convert the given value to an instance of the orm class using the orm class loader.
Returns
the loaded orm entity
Raises
• click.BadParameter – if the value is ambiguous and leads to multiple entities
• click.BadParameter – if the value cannot be mapped onto any existing instance
• RuntimeError – if the defined orm class loader is not a subclass of the OrmEntityLoader class
name = 'Code'
property orm_class_loader
Return the orm entity loader class, which should be a subclass of OrmEntityLoader. This class is supposed to be used to load the entity for a given identifier
Returns
the orm entity loader class for this ParamType
Module for the custom click param type computer
class aiida.cmdline.params.types.computer.ComputerParamType(sub_classes=None)[source]
The ParamType for identifying Computer entities or its subclasses
__abstractmethods__ = frozenset({})
__module__ = 'aiida.cmdline.params.types.computer'
__slotnames__ = []
_abc_impl = <_abc_data object>
complete(ctx, incomplete)[source]
Return possible completions based on an incomplete value.
Returns
list of tuples of valid entry points (matching incomplete) and a description
name = 'Computer'
property orm_class_loader
Return the orm entity loader class, which should be a subclass of OrmEntityLoader. This class is supposed to be used to load the entity for a given identifier
Returns
the orm entity loader class for this ParamType
class aiida.cmdline.params.types.computer.MpirunCommandParamType[source]
Bases: click.types.StringParamType
Custom click param type for mpirun-command
Note
requires also a scheduler to be provided, and the scheduler must be called first!
Validate that the provided ‘mpirun’ command only contains replacement fields (e.g. {tot_num_mpiprocs}) that are known.
Return a list of arguments (by using value.strip().split(' ') on the input string).
__module__ = 'aiida.cmdline.params.types.computer'
__repr__()[source]
Return repr(self).
convert(value, param, ctx)[source]
Converts the value. This is not invoked for values that are None (the missing value).
name = 'mpiruncommandstring'
class aiida.cmdline.params.types.computer.ShebangParamType[source]
Bases: click.types.StringParamType
Custom click param type for shebang line
__module__ = 'aiida.cmdline.params.types.computer'
__repr__()[source]
Return repr(self).
convert(value, param, ctx)[source]
Converts the value. This is not invoked for values that are None (the missing value).
name = 'shebangline'
Module to define the custom click type for code.
class aiida.cmdline.params.types.config.ConfigOptionParamType[source]
Bases: click.types.StringParamType
ParamType for configuration options.
__module__ = 'aiida.cmdline.params.types.config'
complete(ctx, incomplete)[source]
Return possible completions based on an incomplete value
Returns
list of tuples of valid entry points (matching incomplete) and a description
convert(value, param, ctx)[source]
Converts the value. This is not invoked for values that are None (the missing value).
name = 'config option'
Module for the custom click param type for data
class aiida.cmdline.params.types.data.DataParamType(sub_classes=None)[source]
The ParamType for identifying Data entities or its subclasses
__abstractmethods__ = frozenset({})
__module__ = 'aiida.cmdline.params.types.data'
_abc_impl = <_abc_data object>
name = 'Data'
property orm_class_loader
Return the orm entity loader class, which should be a subclass of OrmEntityLoader. This class is supposed to be used to load the entity for a given identifier
Returns
the orm entity loader class for this ParamType
Module for custom click param type group.
class aiida.cmdline.params.types.group.GroupParamType(create_if_not_exist=False, sub_classes=('aiida.groups:core', ))[source]
The ParamType for identifying Group entities or its subclasses.
__abstractmethods__ = frozenset({})
__init__(create_if_not_exist=False, sub_classes=('aiida.groups:core', ))[source]
Construct the parameter type.
The sub_classes argument can be used to narrow the set of subclasses of Group that should be matched. By default all subclasses of Group will be matched, otherwise it is restricted to the subclasses that correspond to the entry point names in the tuple of sub_classes.
To prevent having to load the database environment at import time, the actual loading of the entry points is deferred until the call to convert is made. This is to keep the command line autocompletion light and responsive. The entry point strings will be validated, however, to see if they correspond to known entry points.
Parameters
• create_if_not_exist – boolean, if True, will create the group if it does not yet exist. By default the group created will be of class Group, unless another subclass is specified through sub_classes. Note that in this case, only a single entry point name can be specified
• sub_classes – a tuple of entry point strings from the aiida.groups entry point group.
__module__ = 'aiida.cmdline.params.types.group'
_abc_impl = <_abc_data object>
complete(ctx, incomplete)[source]
Return possible completions based on an incomplete value.
Returns
list of tuples of valid entry points (matching incomplete) and a description
convert(value, param, ctx)[source]
name = 'Group'
property orm_class_loader
Return the orm entity loader class, which should be a subclass of OrmEntityLoader.
This class is supposed to be used to load the entity for a given identifier.
Returns
the orm entity loader class for this ParamType
Module for custom click param type identifier
class aiida.cmdline.params.types.identifier.IdentifierParamType(sub_classes=None)[source]
Bases: click.types.ParamType, abc.ABC
An extension of click.ParamType for a generic identifier parameter. In AiiDA, orm entities can often be identified by either their ID, UUID or optionally some LABEL identifier. This parameter type implements the convert method, which attempts to convert a value passed to the command for a parameter with this type, to an orm entity. The actual loading of the entity is delegated to the orm class loader. Subclasses of this parameter type should implement the orm_class_loader method to return the appropriate orm class loader, which should be a subclass of aiida.orm.utils.loaders.OrmEntityLoader for the corresponding orm class.
__abstractmethods__ = frozenset({'orm_class_loader'})
__init__(sub_classes=None)[source]
Construct the parameter type, optionally specifying a tuple of entry points that reference classes that should be a sub class of the base orm class of the orm class loader. The classes pointed to by these entry points will be passed to the OrmEntityLoader when converting an identifier and they will restrict the query set by demanding that the class of the corresponding entity matches these sub classes.
To prevent having to load the database environment at import time, the actual loading of the entry points is deferred until the call to convert is made. This is to keep the command line autocompletion light and responsive. The entry point strings will be validated, however, to see if they correspond to known entry points.
Parameters
sub_classes – a tuple of entry point strings that can narrow the set of orm classes that values will be mapped upon. These classes have to be strict sub classes of the base orm class defined by the orm class loader
__module__ = 'aiida.cmdline.params.types.identifier'
_abc_impl = <_abc_data object>
convert(value, param, ctx)[source]
Attempt to convert the given value to an instance of the orm class using the orm class loader.
Returns
the loaded orm entity
Raises
• click.BadParameter – if the value is ambiguous and leads to multiple entities
• click.BadParameter – if the value cannot be mapped onto any existing instance
• RuntimeError – if the defined orm class loader is not a subclass of the OrmEntityLoader class
abstract property orm_class_loader
Return the orm entity loader class, which should be a subclass of OrmEntityLoader. This class is supposed to be used to load the entity for a given identifier
Returns
the orm entity loader class for this ParamType
Module to define custom click param type for multiple values
class aiida.cmdline.params.types.multiple.MultipleValueParamType(param_type)[source]
Bases: click.types.ParamType
An extension of click.ParamType that can parse multiple values for a given ParamType
__init__(param_type)[source]
Initialize self. See help(type(self)) for accurate signature.
__module__ = 'aiida.cmdline.params.types.multiple'
convert(value, param, ctx)[source]
Converts the value. This is not invoked for values that are None (the missing value).
get_metavar(param)[source]
Returns the metavar default for this param if it provides one.
Module to define the custom click param type for node
class aiida.cmdline.params.types.node.NodeParamType(sub_classes=None)[source]
The ParamType for identifying Node entities or its subclasses
__abstractmethods__ = frozenset({})
__module__ = 'aiida.cmdline.params.types.node'
_abc_impl = <_abc_data object>
name = 'Node'
property orm_class_loader
Return the orm entity loader class, which should be a subclass of OrmEntityLoader. This class is supposed to be used to load the entity for a given identifier
Returns
the orm entity loader class for this ParamType
Click parameter types for paths.
class aiida.cmdline.params.types.path.AbsolutePathOrEmptyParamType(exists=False, file_okay=True, dir_okay=True, writable=False, readable=True, resolve_path=False, allow_dash=False, path_type=None)[source]
The ParamType for identifying absolute Paths, accepting also empty paths.
__module__ = 'aiida.cmdline.params.types.path'
__repr__()[source]
Return repr(self).
__slotnames__ = []
convert(value, param, ctx)[source]
Converts the value. This is not invoked for values that are None (the missing value).
name = 'AbsolutePathEmpty'
class aiida.cmdline.params.types.path.AbsolutePathParamType(exists=False, file_okay=True, dir_okay=True, writable=False, readable=True, resolve_path=False, allow_dash=False, path_type=None)[source]
Bases: click.types.Path
The ParamType for identifying absolute Paths (derived from click.Path).
__module__ = 'aiida.cmdline.params.types.path'
__repr__()[source]
Return repr(self).
convert(value, param, ctx)[source]
Converts the value. This is not invoked for values that are None (the missing value).
name = 'AbsolutePath'
class aiida.cmdline.params.types.path.FileOrUrl(timeout_seconds=10, **kwargs)[source]
Bases: click.types.File
Extension of click’s File-type to include URLs.
Returns handle either to local file or to remote file fetched from URL.
Parameters
timeout_seconds (int) – Maximum timeout accepted for URL response. Must be an integer in the range [0;60].
__init__(timeout_seconds=10, **kwargs)[source]
Initialize self. See help(type(self)) for accurate signature.
__module__ = 'aiida.cmdline.params.types.path'
convert(value, param, ctx)[source]
Return file handle.
get_url(url, param, ctx)[source]
Retrieve file from URL.
name = 'FileOrUrl'
class aiida.cmdline.params.types.path.PathOrUrl(timeout_seconds=10, **kwargs)[source]
Bases: click.types.Path
Extension of click’s Path-type to include URLs.
A PathOrUrl can either be a click.Path-type or a URL.
Parameters
timeout_seconds (int) – Maximum timeout accepted for URL response. Must be an integer in the range [0;60].
__init__(timeout_seconds=10, **kwargs)[source]
Initialize self. See help(type(self)) for accurate signature.
__module__ = 'aiida.cmdline.params.types.path'
checks_url(url, param, ctx)[source]
Check whether URL is reachable within timeout.
convert(value, param, ctx)[source]
Overwrite convert Check first if click.Path-type, then check if URL.
name = 'PathOrUrl'
aiida.cmdline.params.types.path._check_timeout_seconds(timeout_seconds)[source]
Raise if timeout is not within range [0;60]
Click parameter type for AiiDA Plugins.
class aiida.cmdline.params.types.plugin.PluginParamType(group=None, load=False, *args, **kwargs)[source]
AiiDA Plugin name parameter type.
Parameters
• group – string or tuple of strings, where each is a valid entry point group. Adding the aiida. prefix is optional. If it is not detected it will be prepended internally.
• load – when set to True, convert will not return the entry point, but the loaded entry point
Usage:
click.option(... type=PluginParamType(group='aiida.calculations'))
or:
click.option(... type=PluginParamType(group=('calculations', 'data')))
__init__(group=None, load=False, *args, **kwargs)[source]
Validate that group is either a string or a tuple of valid entry point groups, or if it is not specified use the tuple of all recognized entry point groups.
__module__ = 'aiida.cmdline.params.types.plugin'
__slotnames__ = []
_init_entry_points()[source]
Populate entry point information that will be used later on. This should only be called once in the constructor after setting self.groups because the groups should not be changed after instantiation
complete(ctx, incomplete)[source]
Return possible completions based on an incomplete value
Returns
list of tuples of valid entry points (matching incomplete) and a description
convert(value, param, ctx)[source]
Convert the string value to an entry point instance, if the value can be successfully parsed into an actual entry point. Will raise click.BadParameter if validation fails.
get_entry_point_from_string(entry_point_string)[source]
Validate a given entry point string, which means that it should have a valid entry point string format and that the entry point unambiguously corresponds to an entry point in the groups configured for this instance of PluginParameterType.
Returns
the entry point if valid
Raises
ValueError if the entry point string is invalid
get_missing_message(param)[source]
Optionally might return extra information about a missing parameter.
New in version 2.0.
get_possibilities(incomplete='')[source]
Return a list of plugins starting with incomplete
get_valid_arguments()[source]
Return a list of all available plugins for the groups configured for this PluginParamType instance. If the entry point names are not unique, because there are multiple groups that contain an entry point that has an identical name, we need to prefix the names with the full group name
Returns
list of valid entry point strings
property groups
property has_potential_ambiguity
Returns whether the set of supported entry point groups can lead to ambiguity when only an entry point name is specified. This will happen if one or more groups share an entry point with a common name
name = 'plugin'
Module for the process node parameter type
class aiida.cmdline.params.types.process.ProcessParamType(sub_classes=None)[source]
The ParamType for identifying ProcessNode entities or its subclasses
__abstractmethods__ = frozenset({})
__module__ = 'aiida.cmdline.params.types.process'
_abc_impl = <_abc_data object>
name = 'Process'
property orm_class_loader
Return the orm entity loader class, which should be a subclass of OrmEntityLoader. This class is supposed to be used to load the entity for a given identifier
Returns
the orm entity loader class for this ParamType
Profile param type for click.
class aiida.cmdline.params.types.profile.ProfileParamType(*args, **kwargs)[source]
The profile parameter type for click.
__init__(*args, **kwargs)[source]
Initialize self. See help(type(self)) for accurate signature.
__module__ = 'aiida.cmdline.params.types.profile'
complete(ctx, incomplete)[source]
Return possible completions based on an incomplete value
Returns
list of tuples of valid entry points (matching incomplete) and a description
convert(value, param, ctx)[source]
Attempt to match the given value to a valid profile.
static deconvert_default(value)[source]
name = 'profile'
Module for various text-based string validation.
class aiida.cmdline.params.types.strings.EmailType[source]
Bases: click.types.StringParamType
Parameter whose values have to correspond to a valid email address format.
Note
For the moment, we do not require the domain suffix, i.e. ‘aiida@localhost’ is still valid.
__module__ = 'aiida.cmdline.params.types.strings'
__repr__()[source]
Return repr(self).
__slotnames__ = []
convert(value, param, ctx)[source]
Converts the value. This is not invoked for values that are None (the missing value).
name = 'email'
class aiida.cmdline.params.types.strings.EntryPointType[source]
Parameter whose values have to be valid Python entry point strings.
__module__ = 'aiida.cmdline.params.types.strings'
__repr__()[source]
Return repr(self).
convert(value, param, ctx)[source]
Converts the value. This is not invoked for values that are None (the missing value).
name = 'entrypoint'
class aiida.cmdline.params.types.strings.HostnameType[source]
Bases: click.types.StringParamType
Parameter corresponding to a valid hostname (or empty) string.
Regex according to https://stackoverflow.com/a/3824105/1069467
__module__ = 'aiida.cmdline.params.types.strings'
__repr__()[source]
Return repr(self).
__slotnames__ = []
convert(value, param, ctx)[source]
Converts the value. This is not invoked for values that are None (the missing value).
name = 'hostname'
class aiida.cmdline.params.types.strings.LabelStringType[source]
Parameter accepting valid label strings.
Non-empty string, made up of word characters (includes underscores [1]), dashes, and dots.
ALPHABET = '\\w\\.\\-'
__module__ = 'aiida.cmdline.params.types.strings'
__repr__()[source]
Return repr(self).
convert(value, param, ctx)[source]
Converts the value. This is not invoked for values that are None (the missing value).
name = 'labelstring'
class aiida.cmdline.params.types.strings.NonEmptyStringParamType[source]
Bases: click.types.StringParamType
Parameter whose values have to be string and non-empty.
__module__ = 'aiida.cmdline.params.types.strings'
__repr__()[source]
Return repr(self).
__slotnames__ = []
convert(value, param, ctx)[source]
Converts the value. This is not invoked for values that are None (the missing value).
name = 'nonemptystring'
Test module parameter type for click.
class aiida.cmdline.params.types.test_module.TestModuleParamType[source]
Bases: click.types.ParamType
Parameter type to represent a unittest module.
Defunct - remove when removing the “verdi devel tests” command.
__module__ = 'aiida.cmdline.params.types.test_module'
name = 'test module'
User param type for click.
class aiida.cmdline.params.types.user.UserParamType(create=False)[source]
Bases: click.types.ParamType
The user parameter type for click. Can get or create a user.
__init__(create=False)[source]
Parameters
create – If the user does not exist, create a new instance (unstored).
__module__ = 'aiida.cmdline.params.types.user'
complete(ctx, incomplete)[source]
Return possible completions based on an incomplete value
Returns
list of tuples of valid entry points (matching incomplete) and a description
convert(value, param, ctx)[source]
name = 'user'
Module for the workflow parameter type
class aiida.cmdline.params.types.workflow.WorkflowParamType(sub_classes=None)[source]
The ParamType for identifying WorkflowNode entities or its subclasses
__abstractmethods__ = frozenset({})
__module__ = 'aiida.cmdline.params.types.workflow'
_abc_impl = <_abc_data object>
name = 'WorkflowNode'
property orm_class_loader
Return the orm entity loader class, which should be a subclass of OrmEntityLoader. This class is supposed to be used to load the entity for a given identifier
Returns
the orm entity loader class for this ParamType
|
## Introduction
This document explains how to replicate the results in Bachas et al. (2018a). The paper can be downloaded from https://www.aeaweb.org/articles?id=10.1257/pandp.20181013, and the replication files can be downloaded from https://www.aeaweb.org/doi/10.1257/pandp.20181013.data. Because some results in the paper use publicly available data while others use confidential administrative data, some results can be immediately replicated using our replication files, but replicating the remaining results requires obtaining access to administrative data.
## Folder structure
After downloading the replication code and data, unzip it into a folder on your computer. The parent folder containing the folders described below is the main folder the scripts refer to: assign it to the global $main in the Stata .do files and set it as the working directory using setwd() in the .R scripts. These folders are:
• adofiles contains the .ado files required to run our Stata scripts.
• data contains the raw data that is publicly available. To replicate the portions of the paper using confidential data, the raw confidential data should also be placed in data.
• data/shapefiles contains the shapefiles
• data/shapefiles/Roads/build (currently empty) will contain the Open Street Map files
• Other raw data files are included directly in data (i.e., not in a subfolder)
• graphs (initially empty) is the folder to which graphs produced by the scripts will be written.
• logs (initially empty) is the folder in which log files will be written.
• proc (initially empty) is the folder in which processed data sets will be saved.
• scripts contains the replication code.
• waste (initially empty) saves temporary files produced as part of the data preparation.
• waste/block_distances (initially empty) saves temporary files for distance calculations
## Data
The replication files use the following data sets.
Administrative data from Bansefi is confidential, and hence is not provided with the replication data. The contact person to request access is Ana Lilia Urquieta, [email protected]. The following data from Bansefi is used:
• Saldos Promedio Cuentahorro.dta are average balances from accounts without debit cards from 2009-2011, provided to the authors in 2012.
• Saldos Promedio Debicuenta.dta are average balances from accounts with debit cards from 2009-2011, provided to the authors in 2012.
• SP_*.txt are average monthly balances from Bansefi accounts for additional years, where the * wildcard denotes the year of the data set. These were provided to the authors in 2015.
• DatosGenerales.txt is an account-level data set with information about each account, provided to the authors in 2015.
• MOV*.txt are raw transactions-level data, where the * wildcard denotes the year of the data set. These were provided to the authors in 2015.
Administrative data from Oportunidades (now Prospera) is confidential, and hence is not provided with the replication data. The contact person to request access is Rogelio Grados, [email protected]. The following data from Prospera is used:
• 620mil_cuentas_con_MZ.dbf, which maps account numbers in Bansefi’s data to the Census block on which that beneficiary lives.
### Shapefiles of Census blocks
We use the shapefiles giving the polygon corresponding to each Census block, which allows us to calculate the block centroid and use this as an approximation of the beneficiary’s location. These data are publicly available, but because all results using this data combine it with confidential data, and because these shapefiles are large, we do not include them in the replication data.
The Census block shapefiles are publicly available from Mexico’s National Statistical Institute (INEGI); we use the version that has been compiled by Diego Valle Jones. To download them, go to https://blog.diegovalle.net/2013/06/shapefiles-of-mexico-agebs-manzanas-etc.html and enter your email to receive the files in your email. The zip file you receive should have a subfolder called scince_2010. Place this folder inside of data/shapefiles/ to run the replication code.
### Road network from Open Street Maps
The road network we use to calculate road distances is publicly available from Open Street Maps. The necessary file, mexico-latest.osm.pbf, can be downloaded from https://download.geofabrik.de/north-america/mexico.html. We do not include it with the replication data because it is a large file. Once downloaded, mexico-latest.osm.pbf should be placed in the folder data/shapefiles/Roads/build. To prepare the maps it is necessary to run osrminstall.cmd and osrmprepare by Huber and Rust (2016). This process is included in the do file account_block_merge.do if you set local zero_run = 1 and local first_run = 1 under // Control center. (They are currently set to 0; if rerunning the do file after initially building the maps, you will want to set them to 0 to not repeat the slow process of building the maps each time.)
Note that the map may have changed slightly relative to when we downloaded it, and hence replication results could differ slightly (but we do not expect any substantive changes). Users who have obtained access to the other administrative data and wish to exactly replicate the results in our paper should contact us and we will share the exact version of mexico-latest.osm.pbf we used.
### Payment Methods Survey
The Payment Methods Survey, conducted by Oportunidades, is publicly available from https://evaluacion.prospera.gob.mx/es/eval_cuant/p_bases_cuanti.php. We have included the data set, medios_pago_titular_beneficiarios.dta, in the data folder.
### Geocoordinates of ATMs
This data set, Geocoordinates_ATMs_Mexico.csv, was constructed by the authors based on publicly available information on the location of ATMs from banks’ websites. It is included in the data folder and is also available from https://doi.org/10.7910/DVN/U27JIM.
### Geocoordinates of Bansefi branches
This data set, Geocoordinates_Bansefi_branches.csv, was constructed by the authors based on publicly available information on Bansefi locations. It is included in the data folder and is also available from https://doi.org/10.7910/DVN/DT81PP.
### Shapefiles for Cuernavaca
The road shapefiles for Cuernavaca, used for the example map in Figure 2, are publicly available from INEGI. They are included in the data/shapefiles folder. There are two sets of shapefiles:
• The 170070001l.* files contain the polygon that is the border of Cuernavaca locality. (170070001 is INEGI’s code for Cuernavaca locality.)
• The 170070001v.* files contain the roads in Cuernavaca.
## Scripts
More thorough descriptions of each script than those provided here are included in scripts/0_master.do.
### Preliminary code
Three files in the scripts folder do not need to be run directly:
• 0_master.do describes each script in the order they should be run to produce all results, as well as the input and output data sets of each script.
• server_header.doh sets global macros for folders, and tells Stata where to look for user-written .ado files included with the replication code. This script is called automatically by the .do files.
• myfunctions.R includes functions used by the other .R scripts, and is called automatically by those scripts.
The scripts include both Stata and R code. The code has most recently been tested on Stata version 14.2 and R version 3.4.2. For Stata, any necessary user-written ado files are included in the adofiles folder, and the .do files include code to automatically look for commands in this folder. For R, the user will need to install packages used by the scripts (always under a # PACKAGES header) using install.packages().
### Data preparation
Inputs: raw administrative data from Bansefi. The data must be requested from Bansefi to run this replication code. The scripts are:
• 1_saldos_dataprep.do imports raw average balance data
• 2_datosgenerales_dataprep.do imports raw account-level data
• 3_movimientos_dataprep.do imports raw transactions data
• 4_sucursales_dataprep.do creates a data set with the client, account, and branch IDs for future merges
• 5_bimswitch.do calculates the bimester (2-month period) during which each beneficiary received a card
• 6_avgbal.do creates a data set with average balances by bimester
• 7_transactions_redef_bim.do creates a data set with transactions by redefined bimester, where the redefined bimester accounts for payments shifted to the end of the previous bimester, as described in Bachas et al. (2018b).
• 8_mechanical_effect.do calculates the account-level mechanical effect, as described in Bachas et al. (2018b).
• 9_net_savings.do calculates net savings by bimester
• 10_balance_checks.do creates a data set of balance checks at the transaction level
• 11_withdrawals_deposits_group.do groups transactions by bimester
• 12_transaction_bimester_relative.do creates a data set on balance checks, deposits, and withdrawals by bimester relative to receiving a card
### Distance calculations
Inputs: raw administrative data from Bansefi and Prospera. The data must be requested from Bansefi and Prospera to run this replication code. The scripts are:
• 13_account_block_dataprep.R reads in raw data from Prospera with the account to Census block mapping and creates a data set of unique Census blocks on which beneficiaries live
• 14_block_centroids.R uses Census block shapefiles to calculate centroids of all Census blocks
• 15_distances_calculate.do merges block centroids with ATM and branch coordinates and calculates road distances
• 16_distances_append.do appends together the files with distances, which were computed in chunks
### Distance densities (Figure 1)
Inputs: processed data sets from above scripts. Because this replication code depends on previous code, administrative data must be requested from Bansefi and Prospera before running these scripts. The scripts are:
• 17_distances_density.do graphs kernel density estimates of households’ distance to the nearest ATM and nearest Bansefi bank branch.
### Example locality map (Figure 2)
Inputs: publicly available ATM and branch geocoordinates and road shapefiles. This code does not depend on any of the previous code, and hence can be run on its own to replicate our map. The replication version maps roads, ATMs, and the Bansefi branch, since these are based on public data. It does not map the household location. The scripts are:
• 18_locality_example_map.R maps all ATMs, the Bansefi branch, and all roads in Cuernavaca locality.
The map produced by this script includes more ATMs than Figure 2 of our paper, due to a bug in the code we originally used to determine which ATMs were in the locality of Cuernavaca (based on the ATMs’ geocoordinates); the 18_locality_example_map.R script instead uses the new sf::st_join (Pebesma 2018). The error only affects the map in Figure 2, not any of the results in the paper, since our distance calculations use the full set of ATMs in the country (not restricting to particular localities, as we do for this map). The map produced by this replication code is included below:
### Results from survey (Figure 3)
Inputs: Payment Methods Survey (questionnaire of beneficiaries), conducted by Oportunidades. This code does not depend on any of the previous code and uses publicly available data included in our replication files, and hence can be run on its own to replicate Figure 3. The scripts are:
• 19_survey_questions.do graphs transport taken and activity forgone to withdraw the transfer, before and after receiving a card
### Results from administrative data and distances (Figures 4, 5, 6)
Inputs: processed data sets from above scripts. Because this replication code depends on previous code, administrative data must be requested from Bansefi and Prospera before running these scripts. The scripts are:
• 20_transaction_distance.do correlates changes in withdrawals and number of balance checks with distance gains to access account
• 21_savings_distance.do correlates changes in savings with distance gains to access account
## Other files
### .ado files
The .ado files in the adofiles folder are called automatically by the .do files that use them. These files are:
• _gbom.ado from the egenmore package to generate first day of month (Cox 2016)
• _geom.ado from the egenmore package to generate last day of month (Cox 2016)
• bimestrify.ado for data cleaning (written by us)
• fre.ado for ordered tabulations (Jann 2007)
• geonear.ado to determine closest ATMs by geodesic distance (Picard 2012), which is an input for our dimensionality-reduction algorithm when calculating road distances
• stringify.ado to convert to string and add padding (written by us)
• time.ado to timestamp log and other output files (written by us)
• uniquevals.ado to calculate number of unique observations (written by us)
The osrmprepare and osrmtime ado files (Huber and Rust 2016) are not included; they are installed directly as part of 15_distances_calculate.do since there are a number of ancillary files as well.
### .here
The .here file in the main folder is included to enable the here::here() function in R to work with relative file paths.
### README
This README file and supporting files:
• README.html (can be opened in a browser and looks cleaner than the pdf)
• README.pdf
• README.Rmd is the original source code for generating the README in R Markdown.
• README_bib.bib contains the bibliographical references for the README
## Replication instructions
To replicate Figure 2, which should appear identical to the version included above:
1. Open 18_locality_example_map.R
2. Uncomment the line with setwd() by removing the #, then replace the path in quotation marks with the path to the replication folder on your computer (i.e., the folder that is a direct parent to the data and scripts folders).
3. If the packages under # PACKAGES are not already installed, install them with the following code:
install.packages(c("sf", "tidyverse", "magrittr", "here"))
Note that ggplot2 >= 3.0.0 is required for geom_sf(). Since ggplot2 is included in tidyverse, running install.packages("tidyverse") will get you the latest version of ggplot2 (Wickham 2016).
4. Run the revised 18_locality_example_map.R file in R.
To replicate Figure 3:
1. Open 19_survey_questions.do
2. Uncomment the commented-out line with global main by removing the **, then replace the path in quotation marks with the path to the replication folder on your computer (i.e., the folder that is a direct parent to the data and scripts folders).
3. Run the revised 19_survey_questions.do in Stata.
To replicate Figures 1, 4, 5, and 6, which use confidential administrative data:
1. After requesting and obtaining the administrative data, make sure the data sets have the same names as those used in the replication scripts, and place them in the data folder.
2. Repeat the instructions above to specify your file path in each script.
3. For R scripts, install additional packages as needed (the packages are always listed under # PACKAGES)
4. Run the replication scripts in the order indicated by the numbers in their file names.
## Contact
The replication code was written by Pierre Bachas and Sean Higgins, who can be contacted at [email protected] and [email protected].
## References
Bachas, Pierre, Paul Gertler, Sean Higgins, and Enrique Seira. 2018a. “Digital Financial Services Go a Long Way: Transaction Costs and Financial Inclusion.” American Economic Association Papers & Proceedings 108: 444–48.
———. 2018b. “How Debit Cards Enable the Poor to Save More.” NBER Working Paper 23252.
Cox, Nicholas. 2016. “EGENMORE: Stata modules to extend the generate function.” Statistical Software Components, Boston College Department of Economics.
Huber, Stephan, and Cristoph Rust. 2016. “Calculate Travel Time and Distance with OpenStreetMap Data Using the Open Source Routing Machine (OSRM).” Stata Journal 16: 416–23.
Jann, Ben. 2007. “FRE: Stata module to display one-way frequency table.” Statistical Software Components, Boston College Department of Economics.
Pebesma, Edzer. 2018. “Simple Features for R: Standardized Support for Spatial Vector Data.” The R Journal 10: 439–46.
Picard, Robert. 2012. “GEONEAR: Stata Module to Find Nearest Neighbors Using Geodetic Distances.” Statistical Software Components, Boston College Department of Economics.
Wickham, Hadley. 2016. ggplot2: Elegant Graphics for Data Analysis. New York: Springer-Verlag.
|
# Homework Help: Young's double slit experiment (prob density)
1. Apr 4, 2008
### t_n_p
1. The problem statement, all variables and given/known data
[Broken image links: the problem statement and diagram]
2. Relevant equations
[Broken image link: the relevant equations]
3. The attempt at a solution
I'm not sure where to start with this one. I've searched through textbooks and the internet and have not found anything that helps me remotely show the Pr density at D. Would appreciate it if someone could give me a start and help guide me through.
Thanks!
Last edited by a moderator: May 3, 2017
2. Apr 4, 2008
### Hootenanny
Staff Emeritus
One could start by finding the difference in path length between the two paths.
3. Apr 4, 2008
### t_n_p
I can't see how. Both paths seem to travel the same length in the first two sections and differ in the third section. I can't see how you can quantitatively evaluate the distance though...
4. Apr 5, 2008
### t_n_p
bump?
5. Apr 11, 2008
### t_n_p
up to the top.
I still need help on this one!
6. Apr 11, 2008
### Hootenanny
Staff Emeritus
Hi t_n_p,
Sorry, I completely missed your post; you should have PM'd me. After re-reading the question, it is a lot simpler than it first seems. Notice that the second wavefunction is given in terms of the phase difference; in other words, you are already given the difference in phase between the two waves, so there is no need to calculate the path difference.
All you need to do is superimpose the two wavefunctions and then find the probability density.
Last edited: Apr 11, 2008
7. Apr 13, 2008
### t_n_p
Sorry didn't want to bug you via PM!
You say superimpose, so I want to add wavefunction 1 to wavefunction 2? I don't really understand where to go from here on...
8. Apr 13, 2008
### Hootenanny
Staff Emeritus
Correct, so
$$\psi_1+\psi_2 = A+Ae^{i\phi} = A\left(1+e^{i\phi}\right)$$
And,
$$P = \left(\psi_1+\psi_2\right)\overline{\left(\psi_1+\psi_2\right)}$$
(multiplication by the complex conjugate).
9. Apr 13, 2008
### t_n_p
Is that P supposed to be "rho", the symbol for probability density?
10. Apr 13, 2008
### Hootenanny
Staff Emeritus
Yes, P is the probability density.
11. Apr 13, 2008
### t_n_p
what's with that bar over the second bracketed term?
Anyhow, where does the cos come in?
[Broken image link: t_n_p's attempted working]
Last edited by a moderator: May 3, 2017
12. Apr 13, 2008
### Hootenanny
Staff Emeritus
The bar represents the complex conjugate of the bracket, as I said previously.
You can write the solution in terms of real cosine as opposed to complex exponentials.
No, that isn't correct you multiply the original wavefunction by the complex conjugate.
Last edited by a moderator: May 3, 2017
13. Apr 13, 2008
### t_n_p
complex conjugate = A - Ae^(-iφ)?
then
[Broken image link: t_n_p's expansion attempt]
my gut feeling tells me I'm wrong because I can't see how I can extract any form of those 2 relevant equations in the original post.
Last edited by a moderator: May 3, 2017
14. Apr 13, 2008
### Hootenanny
Staff Emeritus
Not quite, complex conjugation means that you only reverse the sign of the imaginary part,
$$\overline{\psi_1+\psi_2} = A\left(1+e^{-i\varphi}\right)$$
15. Apr 13, 2008
### t_n_p
Of course, I should have known.
So using that conjugate above and expanding gives me..
[Broken image link: the expanded product]
Now I can see how I can covert the middle term into a cos term to give me the equation below (using the relevant formula given in the original post), but I'm unsure how to proceed with the last term..
[Broken image link: the partially simplified result]
Last edited by a moderator: May 3, 2017
16. Apr 13, 2008
### Hootenanny
Staff Emeritus
Correct. It may be useful to note that,
$$e^{-a}\cdot e^a = e^{-a+a} = e^0 = 1 \hspace{1cm}\forall a$$
17. Apr 13, 2008
### t_n_p
lol, sometimes I look past the easy things....
my only issue now, is that I have
[Broken image link: t_n_p's expression with cos(φ)]
and the formula states
[Broken image link: the formula with cos(2φ)]
the only difference having my cos term as cos (φ) and the formula stating cos (2φ). Is it possible to simply halve only the 2φ term?
Also I noticed that A is an absolute value, so should my values of A that appear throughout my working also appear with modulus signs?
Last edited by a moderator: May 3, 2017
18. Apr 13, 2008
### Hootenanny
Staff Emeritus
HINT:
$$\cos^2\theta = \frac{1}{2}\left(1+\cos\left(2\theta\right)\right) \Rightarrow \frac{1}{2}\left(1+\cos\left(\theta\right)\right) = \cos^2\left(\frac{\theta}{2}\right)$$
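Putting the thread's steps together (and writing $|A|^2$, since $A$ may be complex, which also settles the earlier question about modulus signs):
$$P = A\left(1+e^{i\phi}\right)\overline{A}\left(1+e^{-i\phi}\right) = |A|^2\left(2+e^{i\phi}+e^{-i\phi}\right) = 2|A|^2\left(1+\cos\phi\right) = 4|A|^2\cos^2\left(\frac{\phi}{2}\right)$$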
19. Apr 13, 2008
### t_n_p
Got it! Thanks a WHOLE lot!
Show that interference maxima is given by
[Broken image link: the interference maxima equation]
Ignoring part c) for the time being, how exactly is pr density related to the interference maxima equation?
Last edited by a moderator: May 3, 2017
20. Apr 13, 2008
### Hootenanny
Staff Emeritus
21. Apr 13, 2008
### t_n_p
hmmm, still don't get it.
22. Apr 13, 2008
### Hootenanny
Staff Emeritus
What specifically don't you understand?
Last edited: Apr 13, 2008
23. Apr 13, 2008
### t_n_p
how the prob density equation just found is related to the interference equation.
24. Apr 13, 2008
### Hootenanny
Staff Emeritus
There's no need to relate the probability density to the interference pattern, the question simply asks you to derive the fringe separation, which can be done without using the probability density.
25. Apr 13, 2008
### t_n_p
hmm? It says "Using the results obtained in (a) [The pr density part], show that the interference maxima are given by..."
|
# Brilli The Ant is Back!
Geometry Level 5
A 12 feet high room is 40 feet long and 10 feet wide. Brilli the ant is standing in the middle of one of the walls (which is 10 by 12 feet), such that it is 1 foot above the floor, and is equidistant from the other 2 edges of that wall.
In the middle of the wall opposite to Brilli's, rests a sugar crystal, 1 foot below the ceiling. The minimum distance Brilli needs to cover to reach the sugar crystal, assuming that Brilli can walk anywhere inside the room, is $x$. Determine the value of $x^2$.
|
# Step functions and the characteristic function of rationals
A function $t: [a, b] \rightarrow \mathbb{R}$ is called a step function when a $k \in \mathbb{N}$ and numbers $z_0,...,z_k$ with $a = z_0 \leq z_1 \leq ... \leq z_k = b$ exist, such that for all $i \in \{1,2,...k\}$ the restriction $t |_{(z_{i-1},z_{i})}$ is constant. Let $f: [0,1] \rightarrow \mathbb{R}$ be defined by $$f(x) = \left\{ \begin{array}{rcl} 1, & \mbox{if} & x \in \mathbb{Q} \\ 0, & \mbox{if} & x \notin \mathbb{Q} \end{array} \right.$$
Show:
(i) The function $f$ is a point-wise limit of step functions.
(ii) There is no uniformly convergent sequence of step functions whose limit function is $f$.
So the book I'm reading mentions this Dirichlet function all the time. Still I'm having trouble finding a solution to this exercise. All help is very much appreciated!
For part ii).
If a sequence of Riemann integrable functions $f_n$ converges uniformly to $f$, then what can you say about the Riemann integrability of $f$?
It seems like this question is trying to point out a drawback of Riemann integration which, if I remember correctly, motivated the discovery of Lebesgue integration.
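The hint above settles part (ii): step functions are Riemann integrable, a uniform limit of Riemann integrable functions is Riemann integrable, and $f$ is not. For part (i), one standard construction under the definition above: enumerate the rationals in $[0,1]$ as $q_1, q_2, \dots$ and set $$t_n(x) = \left\{ \begin{array}{rcl} 1, & \mbox{if} & x \in \{q_1,\dots,q_n\} \\ 0, & & \mbox{otherwise} \end{array} \right.$$ Each $t_n$ is a step function (take $0$, $1$ and $q_1,\dots,q_n$, sorted, as the partition points; $t_n$ is constantly $0$ on each open subinterval), and $t_n(x) \to f(x)$ for every $x \in [0,1]$.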
|
# Surface Area of a Pyramid
The surface area of a pyramid consists of the lateral surface area, which is the combined area of the triangles forming the sides (or faces) of the figure, plus the area of the base, which may be any polygon.
The lateral area of a right pyramid is the sum of the areas of the triangles forming its faces. In a right pyramid, by definition, these are congruent triangles. Also by definition, the base of a right pyramid is a regular polygon. Therefore, the bases of the triangular faces are all equal, and their altitudes are all equal to the slant height of the pyramid.
Rules:
1. The lateral area of a right pyramid is equal to the perimeter of the base times one-half the slant height.
$\therefore {\text{lateral surface area }} = {\text{ }}\frac{{{\text{perimeter }} \times {\text{ slant height}}}}{2}$
2. Total surface area = lateral surface area + area of the base
3. Slant height $l = \sqrt {{r^2} + {h^2}}$, where $r$ is the apothem of the base and $h$ is the height of the pyramid
Example:
A pyramid on a square base has four equilateral triangles for its other faces; each edge is $9$ cm. Find the whole surface area.
Solution:
Let $OABCD$ be the pyramid on the square base $ABCD$. As the side faces are equilateral triangles with each side being $9$ cm, therefore the side of the square base is $9$ cm.
The area of the base $= 9 \times 9 = 81$ square cm
The area of one side face $= \frac{{{a^2}\sqrt 3 }}{4} = \frac{{9 \times 9 \times 1.732}}{4} = \frac{{140.292}}{4}$
The area of all four side faces $= \frac{{140.292}}{4} \times 4 = 140.292$ square cm
The area of the whole surface $= 140.292 + 81 = 221.29$ square cm.
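A quick numeric check of this example (a sketch; the numbers match the working above up to rounding):

```python
import math

edge = 9.0                               # every edge is 9 cm
slant_height = edge * math.sqrt(3) / 2   # altitude of an equilateral face
lateral = (4 * edge) * slant_height / 2  # perimeter x slant height / 2
base_area = edge ** 2                    # square base
print(lateral + base_area)               # ~221.3 square cm
```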
|
# Finding pattern of numbers
I have this puzzle for adding 2 numbers (Or sequences)
Does anyone have an idea about how this addition works? $$5+1=9\\3+1=10\\5+2=21\\8+2=23\\9+0=28\\4+3=??$$
• Care to tell us where this puzzle comes from? – Gareth McCaughan Nov 28 '17 at 22:34
• in fact my friend send it to me – user11618 Nov 28 '17 at 22:56
• may be $$4 + 3 = 31$$ – Mandar Nov 29 '17 at 13:52
• @Mozfox how is that ? – user11618 Nov 29 '17 at 22:16
I think the answer can be
21
Reason
6=9 //5+1=9
4=10 //3+1=10
7=21 //5+2=21
10=23 //8+2=23
9=28 //9+0=28
7=21 //4+3=21 as already given of 5+2=21
• For your second line, it's actually 3+1=10, not 3+2. – Lolgast Nov 30 '17 at 8:15
• I don't think it's that easy – user11618 Nov 30 '17 at 11:19
• @user11618 any hint – rudra Nov 30 '17 at 11:20
• @rudra I don't know why I got downvoted :3 , but I don't think your answer is right – user11618 Nov 30 '17 at 19:01
• @user11618 You are downvoted because this is completely random. And the formula submitted by you is not matching with the statement – rudra Dec 1 '17 at 1:56
3+1 = 10 is what’s given. So
4 = 10
8+2 = 23. So 10 = 23.
5+2 = 21. So 7 = 21.
3 = 10-7 = 23 - 21 = 2
Therefore,
4+3 = 10+2 = 12
This seems quite improbable but well who knows?
Thanks every body, I've found the solution.
$$5+2=21 \\ 5^2+2^5+(5+2)!=5097 \to 5+0+9+7 = 21\\ x+y = ?\\ x^y + y^x +(x+y)! = abcd \to s= a+b+c+d$$
• First of all, this is completely random and so far-fetched that I don't believe it's actually solvable. – Levieux Nov 30 '17 at 13:02
• Secondly, you've made a mistake: 5^1+1^5+(5+1)!=5+1+720=726 -> 15. Probably 120 was erroneously used instead of 720 by the creator of this puzzle... – Levieux Nov 30 '17 at 13:02
Given the formula I get:
$$4 ^ 3 + 3 ^ 4 + (4 + 3)! = 5185$$
$$5 + 1 + 8 + 5 = 19$$
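A short script to check the proposed rule against every row of the puzzle; as noted in the comments, the 5+1=9 row only matches if 120 is mistakenly used in place of 6!=720:

```python
from math import factorial

def digit_sum_rule(x, y):
    return sum(int(d) for d in str(x**y + y**x + factorial(x + y)))

for x, y in [(5, 1), (3, 1), (5, 2), (8, 2), (9, 0), (4, 3)]:
    print(x, y, digit_sum_rule(x, y))  # prints 15, 10, 21, 23, 28, 19
```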
|
# Use of Module Design Pattern in simple D3 “overlay” program
I have been learning JS (mostly using D3) and I wanted to put the Module Design Pattern to use. I wrote a simple script that lets you build a bar chart with more ease / readable code.
The end user code:
canvas.initialise()
canvas.margin(20, 100, 60, 20)
canvas.width(500)
canvas.height(400)
canvas.finalise()
svg = canvas.get_canvas()
my_data = [["A", 10], ["B", -3], ["C", 4]]
bar_chart.data(my_data)
bar_chart.x_scale([0, canvas.get_width()], .1, 1)
bar_chart.y_scale([canvas.get_height(), 0])
bar_chart.formatting(".2f")
The library code which uses the MDP:
var canvas = (function ()
{
// private
var svg;
var margin, width, height, x_pos, y_pos;
var initialise = function () { svg = d3.select("body").append("svg") }
var set_position = function (x, y) { x_pos = x; y_pos = y;}
var set_margins = function (top, bottom, left, right) { margin = { top: top, bottom: bottom, left: left, right: right }; }
var set_width = function (val) { width = val - margin.left - margin.right; svg.attr("width", val + margin.left + margin.right); }
var set_height = function (val) { height = val - margin.top - margin.bottom; svg.attr("height", val + margin.top + margin.bottom); }
var finalise = function ()
{
svg.append("g").attr("transform", "translate(" + margin.left + "," + margin.top + ")")
.attr("x", x_pos)
.attr("y", y_pos)
}
return {
initialise: initialise,
position: set_position,
margin: set_margins,
width: set_width,
height: set_height,
finalise: finalise,
get_canvas: function () { return svg },
get_margin: function () { return margin },
get_width: function () { return width },
get_height: function () { return height }
}
}());
var bar_chart = (function ()
{
var data,
x, y,
xAxis, yAxis,
formatting;
var set_data = function(data_input)
{
data = data_input;
}
var set_x_scale = function (interval, padding, outer_padding)
{
x = d3.scale.ordinal()
    .rangeRoundBands(interval, padding, outer_padding) // needed for x.rangeBand() below
}
var set_y_scale = function (interval)
{
y = d3.scale.linear()
.range(interval)
}
var set_formatting = function (fmt)
{
formatting = d3.format(fmt);
}
var add_to = function(svg)
{
console.log(data)
xAxis = d3.svg.axis()
.scale(x) // call ticks?
yAxis = d3.svg.axis()
.scale(y)
.orient("left")
.tickFormat(formatting)
x.domain(data.map(function (d) { return d[0] }))
y.domain([-5, 15]) //make general
// Put the axes on the canvas
svg.append("g")
.attr("class", "x axis")
.attr("transform", "translate(" + canvas.get_margin().left + "," + canvas.get_height() + ")")
.call(xAxis)
svg.append("g")
.attr("class", "y axis")
.attr("transform", "translate(" + canvas.get_margin().left + ",0)")
.call(yAxis)
// Put the data in rectangle elements and put on canvas
var bars = svg.selectAll(".bar")
.data(data)
.enter()
.append("rect")
.attr("class", "bar")
.attr("width", x.rangeBand())
.attr("x", function (d) { return x(d[0]) + canvas.get_margin().left }) // hmm...
.attr("y", function (d)
{
if (d[1] < 0)
{
return y(0);
}
else
return y(d[1])
})
.attr("height", function (d) { return y(0) - y(Math.abs(d[1])) });
}
return {
data: set_data,
formatting: set_formatting,
x_scale: set_x_scale,
y_scale: set_y_scale,
}
}());
My concerns:
• I feel that having to make sure to call some functions before others means my structure is off (sometimes unavoidable, but comments on this would be useful)
• is this a suitable use of the design pattern?
• can usability be enhanced by using the pattern differently?
• is a different pattern more suitable in this case?
• Note: in JavaScript, the convention is to use camelCase, not snake_case. – somebody May 31 '16 at 12:17
• Also, you have left out multiple semicolons, and the braces for return y(d[1]). You should probably run your code through JSHint to fix small code style inconsistencies before you post it here. – somebody May 31 '16 at 12:22
• Also, I'm not sure making the variables private does anything, since you expose them anyway. If there's no real reason to hide the variables, you can make the bar chart keep an internal reference to the svg, then make data() call redraw() (which would be a clear() followed by an add_to()) if svg has been set (i.e. if (svg) { this.redraw(); }). Otherwise, if you don't want the graph to be editable, set an internal variable to true once data() is called, so you can warn or throw an error when data() is called again. – somebody May 31 '16 at 12:31
• @somebody It's not a bad thing to leave out semicolons; it's just that you either use semicolons; or stick to the style of only using semicolons when necessary (a style used by npm); but the bad thing is making a mish-mash-mosh of style. – gcampbell Jun 2 '16 at 16:43
• @gcampbell Yeah, I know, but the semicolons that were there confused me slightly. – somebody Jun 2 '16 at 16:57
You make some really good points in your concerns list, particularly with your first one and the reason behind it:
Your modules have too many dependencies
The idea behind a module is to create self-contained, single-purpose code. Your canvas module depends on the order in which the user calls the functions, and your bar_chart is particularly coupled, with its add_to method directly accessing the canvas module. Good modules are simple and straightforward. They encase some complicated task in a black box, allowing the user to be ignorant of what is going on under the hood. This allows the user to focus on using the module instead of worrying about possible side effects.
A Suggested refactor for usability that also eliminates function order dependencies
Here's a more user friendly way for a user to make a bar chart:
var canvas = createCanvas({
margins: {
top: 20,
left: 60,
right: 20,
bottom: 100
},
width: 420, // 500 - 60 - 20, maybe the 500 totalWidth is more intuitive
height: 280 // 400 - 20 - 100
});
drawBarChart({
canvas: canvas,
data: [["A", 10], ["B", -3], ["C", 4]],
range: [-5, 15],
numberFormat: ".2f", // optional parameter
barInnerPadding: 0.1, // optional parameter
barOuterPadding: 1 // optional parameter
});
This gives the developer a CSS-esque interface for creating charts. There's no worry about calling functions in the wrong order, all the properties are named, and this affords you the power to easily make any of the properties optional.
Here's a JSFiddle of a refactor I did on your code that provides the aforementioned user interface.
Some things I refactored and why:
1. I separated the "canvas" into a data object and a createSVG function because I felt your canvas was doing two things, storing formatting data and building/formatting/storing an svg. (Also, I heavily prefer the functional programming paradigm over OOP)
2. I condensed your bar_chart module into a single drawBarChart function because splitting up all the logic to create a chart adds the dependency that the functions must be called in a specific order. (I think there ought to be a few inline functions in my drawBarChart, though. Emphasis on inline; If we don't have duplicate code and the functions aren't called anywhere else, making the functions somewhere else only displaces code from where it's actually executed)
3. I made a lot of parameters optional and also gave the user the ability to specify the range of the y-axis. Optional parameters generally pan out to be more user friendly. If a user wants to learn how to do a specific formatting thing for the drawBarChart function, then they can, but if they don't want to, then they don't have to worry about it.
Useless Code
The x and y positions in canvas don't provide any functionality to your program. Consider removing them. (If you actually end up wanting to add them later, then you can add them later, currently they only act as a distraction).
Style Conventions
As long as your style doesn't inhibit readability, go for it. The only reason I bring this up is because some of your lines of code get really, really long. For example:
var set_height = function (val) { height = val - margin.top - margin.bottom; svg.attr("height", val + margin.top + margin.bottom); }
This function would be much more readable on multiple lines.
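For instance, the same function with identical behaviour, just reflowed:

```javascript
var set_height = function (val)
{
    height = val - margin.top - margin.bottom;
    svg.attr("height", val + margin.top + margin.bottom);
}
```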
I'm impressed with your d3! I learned a bit a while back but didn't have a good reason to keep it up. Keep up the good work!
• Thank you, this is exactly the type of answer I was looking for. I am not a big fan of functional programming but I guess it won't be a bad idea to get used to when it comes to JS. Thanks again! – turnip Jun 2 '16 at 13:10
• @PPG JS supports OOP as well, just in ES5 the syntax is far from the normal class <classname> {}. – somebody Jun 2 '16 at 22:18
|
#### Vol. 9, No. 8, 2016
A long $\mathbb{C}^2$ without holomorphic functions
### Luka Boc Thaler and Franc Forstnerič
Vol. 9 (2016), No. 8, 2031–2050
##### Abstract
We construct for every integer $n>1$ a complex manifold of dimension $n$ which is exhausted by an increasing sequence of biholomorphic images of $\mathbb{C}^n$ (i.e., a long $\mathbb{C}^n$), but does not admit any nonconstant holomorphic or plurisubharmonic functions. Furthermore, we introduce new holomorphic invariants of a complex manifold $X$, the stable core and the strongly stable core, which are based on the long-term behavior of hulls of compact sets with respect to an exhaustion of $X$. We show that every compact polynomially convex set $B\subset \mathbb{C}^n$ such that $B=\overline{B^{\circ}}$ is the strongly stable core of a long $\mathbb{C}^n$; in particular, holomorphically nonequivalent sets give rise to nonequivalent long $\mathbb{C}^n$'s. Furthermore, for every open set $U\subset \mathbb{C}^n$ there exists a long $\mathbb{C}^n$ whose stable core is dense in $U$. It follows that for any $n>1$ there is a continuum of pairwise nonequivalent long $\mathbb{C}^n$'s with no nonconstant plurisubharmonic functions and no nontrivial holomorphic automorphisms. These results answer several long-standing open problems.
##### Keywords
holomorphic function, Stein manifold, long $\mathbb C^n$, Fatou–Bieberbach domain, Chern–Moser normal form
##### Mathematical Subject Classification 2010
Primary: 32E10, 32E30, 32H02
|
dc.contributor.author: Ford, Neville J.; Rodrigues, M. M.; Xiao, Jingyu; Yan, Yubin
dc.date.accessioned: 2019-03-11T14:55:13Z
dc.date.available: 2019-03-11T14:55:13Z
dc.date.issued: 2013-09-26
dc.identifier.citation: Ford, N. J., Rodrigues, M. M., Xiao, J. & Yan, Y. (2013). Numerical analysis of a two-parameter fractional telegraph equation. Journal of Computational and Applied Mathematics, 249, 95-106.
dc.identifier.issn: 0377-0427
dc.identifier.doi: 10.1016/j.cam.2013.02.009
dc.identifier.uri: http://hdl.handle.net/10034/621967
dc.description.abstract: In this paper we consider the two-parameter fractional telegraph equation of the form $$-\, ^CD_{t_0^+}^{\alpha+1} u(t,x) + \, ^CD_{x_0^+}^{\beta+1} u (t,x)- \, ^CD_{t_0^+}^{\alpha}u (t,x)-u(t,x)=0.$$ Here $\, ^CD_{t_0^+}^{\alpha}$, $\, ^CD_{t_0^+}^{\alpha+1}$, $\, ^CD_{x_0^+}^{\beta+1}$ are operators of the Caputo-type fractional derivative, where $0\leq \alpha < 1$ and $0 \leq \beta < 1$. The existence and uniqueness of solutions of the equation are proved by using the Banach fixed point theorem. A numerical method is introduced to solve this fractional telegraph equation and stability conditions for the numerical method are obtained. Numerical examples are given in the final section of the paper.
dc.language.iso: en
dc.publisher: Elsevier
dc.relation.url: https://www.sciencedirect.com/science/article/pii/S0377042713000691
dc.rights: Attribution 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: fractional telegraph equation; numerical analysis
dc.title: Numerical analysis of a two-parameter fractional telegraph equation
dc.type: Article
dc.identifier.eissn: 1879-1778
dc.contributor.department: University of Chester; Harbin Institute of Technology; University of Aveiro, Campus Universitario de Santiago
dc.identifier.journal: Journal of Computational and Applied Mathematics
dc.date.accepted: 2013-09-26
or.grant.openaccess: Yes
rioxxterms.funder: unfunded research
rioxxterms.identifier.project: unfunded research
rioxxterms.version: AM
rioxxterms.licenseref.startdate: 2015-02-26
### Files in this item
Name: Publisher version
Name: fordrodxiaoyan2804.pdf
Size: 527.4Kb
Format: PDF
Request: Main article
### This item appears in the following Collection(s)
Except where otherwise noted, this item's license is described as Attribution 4.0 International
|
[–][S] 166 points167 points (11 children)
This is the “modular surface” of z²+1. Modular surfaces are a great way to visualise complex functions. The base is the input, which is the complex plane. The height is the absolute value of the output. The colour represents the phase. The points where the function equals zero are at the bottom; z²+1 has two imaginary roots, at i and -i. This render was made for this little video: https://www.youtube.com/watch?v=nT3WYFxvPLk
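For anyone who wants to reproduce the construction without Blender, here is a minimal matplotlib sketch of the same idea (height is |f(z)|, hue is the phase arg f(z)); the resolution and colour mapping are arbitrary choices:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import hsv_to_rgb

x, y = np.meshgrid(np.linspace(-2, 2, 120), np.linspace(-2, 2, 120))
w = (x + 1j * y) ** 2 + 1                  # f(z) = z^2 + 1

height = np.abs(w)                         # |f(z)| -> surface height
hue = (np.angle(w) / (2 * np.pi)) % 1.0    # arg(f(z)) -> colour (phase)
rgb = hsv_to_rgb(np.dstack((hue, np.ones_like(hue), np.ones_like(hue))))

ax = plt.figure().add_subplot(projection='3d')
ax.plot_surface(x, y, height, facecolors=rgb, rstride=1, cstride=1, linewidth=0)
plt.show()
```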
[–] 24 points25 points (8 children)
What does phase mean? Whats the definition?
[–][S] 51 points52 points (0 children)
Sorry that wasn't clear, phase is more of a physics term I guess. The colour is arg(z) (the polar angle). If you need more info, I explain the graphic in the 2nd part of this video: https://www.youtube.com/watch?v=jU7QW6AjUf4
[–] 25 points26 points (0 children)
Instead of real plus imaginary coordinates, which would be Cartesian coordinates, you can represent a complex number through polar coordinates, using an angle and a magnitude. The phase refers to the angle of the polar coordinate, i.e. if you think about the angles as a cycle, the phase is where you are along that cycle.
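In code terms, using Python's standard cmath as an example:

```python
import cmath

z = 1 + 1j
r, theta = cmath.polar(z)  # magnitude ~1.414, phase ~0.785 rad (45 degrees)
assert cmath.isclose(cmath.rect(r, theta), z)  # polar form recovers z
```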
[–] 7 points8 points (5 children)
Phase or phases may refer to:
== Science ==
=== Physics === State of matter, or phase, one of the distinct forms in which matter can exist Phase (matter), a region of space throughout which all physical properties are essentially uniform Phase space, a mathematical space in which each possible state of a physical system is represented by a point — this equilibrium point is also referred to as a "microscopic state" Phase space formulation, a formulation of quantum mechanics in phase space Phase (waves), the position of a point in time (an instant) on a waveform cycle Instantaneous phase, generalization for both cyclic and non-cyclic phenomena Polyphase system, a means of distributing alternating current electric power in multiple conducting wires with definite phase offsets Single-phase electric power, distribution of AC electric power in a system where the voltages of the supply vary in unison Three-phase electric power, a common method of AC electric power generation, transmission, and distribution Three-phase, the mathematics of three-phase electric power Phase problem, the loss of information (the phase) from a physical measurement Phase factor, a complex scalar used in quantum mechanics in Continuous Fourier transform, the angle of a complex coefficient representing the phase of one sinusoidal component
=== Other sciences === Archaeological phase, a discrete period of occupation at an archaeological site Color phase, in biology, a group of individuals within a species with a particular coloration Gametic phase, in genetics, the relationship between alleles at two chromosomal loci Lunar phase, the appearance of the Moon as viewed from the Earth Planetary phase, the appearance of the illuminated section of a planet Phase separation, in physical chemistry, the separation of a liquid mixture into two immiscible liquids above and below a meniscus Phase (syntax), in linguistics, a cyclic domain (proposed by Noam Chomsky) Development of the human body, in cognitive psychology, occurs in 9 phases by age In biology, a part of the cell cycle in which cells divide (mitosis) and/or reproduce (meiosis)
== Entertainment == Phase (band), a Greek alternative rock band Phases (band), an indie pop American band, formerly known as JJAMZ Phase (album), a debut studio album by an English singer Jack Garratt Phases (The Who album), a box set of albums by The Who Phases (I See Stars album), 2015 Phases (Angel Olsen album) "Phases" (Buffy the Vampire Slayer), a 1998 episode of the TV series Buffy the Vampire Slayer Phases (.hack), fictional monsters in the .hack franchise Phase IV, a 1974 science fiction movie directed by Saul Bass Phase, an incarnation of the DC Comics character usually known as Phantom Girl Phaze, an alternate world in Piers Anthony's Apprentice Adept series
== Other uses == Phase 10, a card game created by Fundex Games Phase (video game), a 2007 music game for the iPod developed by Harmonix Music Systems Phase (combat), usually a period of combat within a larger military operation A musical composition using Steve Reich's phasing technique
== See also == FASOR (disambiguation) Faze (disambiguation) Phase 1 (disambiguation) Phase 2 (disambiguation) Phase 3 (disambiguation) Phase 4 (disambiguation) Phase 5 (disambiguation) Phase space (disambiguation) Phaser (disambiguation) Phasing (disambiguation) Phasor (disambiguation)
More details here: https://en.wikipedia.org/wiki/Phase
This comment was left automatically (by a bot). If something's wrong, please, report it.
Really hope this was useful and relevant :D
If I don't get this right, don't get mad at me, I'm still learning!
[–] 9 points10 points (3 children)
Good bot
[–] 0 points1 point (2 children)
Thank you, ninjaphysics, for voting on wikipedia_answer_bot.
This bot wants to find the best and worst bots on Reddit. You can view results here.
[–] 4 points5 points (0 children)
[–] 0 points1 point (0 children)
More than good, that bot was truly beautiful!
[–] -1 points0 points (1 child)
will somebody pls explain this to a real analysis nerd who has never touched complex analysis bc i am very lost lmao
[–] 0 points1 point (0 children)
What would you like explained? Fellow real analysis nerd
[–] 54 points55 points (8 children)
Was this made with Blender? I wanted to start making some visualizations like this. What was your process?
Edit: grammar
[–][S] 55 points56 points (7 children)
Yes Blender. I build it using python code within Blender.
[–]Commutative Algebra 36 points37 points (2 children)
Do you mind sharing your code!
[–] 1 point2 points (1 child)
RemindMe! 1 Week "sweet, sweet blender graphs"
[–] 2 points3 points (0 children)
I will be messaging you in 7 days on 2020-12-09 21:08:12 UTC to remind you of this link
1 OTHERS CLICKED THIS LINK to send a PM to also be reminded and to reduce spam.
Parent commenter can delete this message to hide from others.
[–]Number Theory 5 points6 points (0 children)
That's awesome! You are making me want to implement this on shadertoy!
[–] 8 points9 points (0 children)
Sweet! I think graphs will be my next project in blender. Thanks for the inspiration!
[–]Differential Geometry 0 points1 point (1 child)
What are your computer specs like? This looks like a fun thing to play around with but I worry it'd take hours on my laptop
[–][S] 0 points1 point (0 children)
Umm, I actually have one of the fastest AMD CPUs on the market. It takes around 2min to render a 4k frame. That said, I do play around on my basic Surface Pro 4 laptop, if you are happy to make only still images it would be fine. A surface that is opaque, not glass, is also less computationally expensive.
(A graphics card in a modest desktop would also give you reasonable render times)
[–] 35 points36 points (0 children)
Mathematica RTX On
[–] 38 points39 points (1 child)
I think there should really be more visuals on this subreddit
[–] 12 points13 points (0 children)
If I had tools like this when I was first learning math I like to think I would be a lot better at it.
[–] 12 points13 points (3 children)
It looks great. It's very attractive, and makes a clear easy to read shape.
The phase is tricky though... I mean, even if there was a key to which colours meant what, it's not always easy to tell what colour we're looking at - because it is so shiny and reflective. The colours seem to shift a bit as it rotates. I guess if we were trying to seriously read the values from this, it might be better if it was less reflective... but then it might not look as cool. :p
[–] 7 points8 points (1 child)
I agree, it's great and the phase coloring could be improved. Two ideas:
1. Phase-color the underlying plane too? That could serve as a kind of color key.
2. Use a perceptually uniform cyclic color map.
[–] 1 point2 points (0 children)
That colour map information was very interesting. Thanks for that.
[–][S] 4 points5 points (0 children)
Yes, I used a glass shader because it looks cool. To visualise values, I use a non-transparent shader with labelled contour lines overlayed as a texture. In Blender the glass is OK though, because the image is clear enough top-down when you are not looking through more than one piece of coloured glass.
[–] 10 points11 points (0 children)
This is beautiful
[–] 18 points19 points (4 children)
3b1b did a video about this but he used a 2d graph with brightness indicating the absolute value, so the zero of a function would be a black spot.
[–] 7 points8 points (3 children)
Welch labs did similar video series about imaginary numbers. Climax is with this kind of diagram.
[–] 7 points8 points (2 children)
That series of videos made me climax, myself.
[–] 2 points3 points (1 child)
Completely understandable. Imaginary Numbers are known for the beauty.
[–] 1 point2 points (0 children)
As an electrical engineer I get to work with imaginary numbers all day!
[–] 8 points9 points (1 child)
Since complex functions are transformations of the plane, my favorite way to visualize them is a warped 2D grid.
[–] 0 points1 point (0 children)
Hm. I wonder if this has something to do with why the gridlines on a nichols plot are all warpy.
[–] 5 points6 points (1 child)
Of course visualizations like this look nice, but never did I have the feeling of really getting more insight on the behavior of (holomorphic) complex functions by inspecting them.
[–] 2 points3 points (0 children)
Right, I'm looking at this and admiring the aesthetics, but... now what? What am I supposed to have learned? What mathematics can be done because of this visualization? What are other people getting from this, that I'm missing?
[–] 4 points5 points (0 children)
wow this is impressive, i know a little math a little python and a little blender too, lets see what we can do
[–] 2 points3 points (1 child)
I always found these colour plots to be confusing. What angle is red for example? Or how what is the angle between red and green? It doesn't seem intuitive.
[–][S] 0 points1 point (0 children)
You're right, you need a legend to read the colour. This is explained in the linked video series. Red is the positive real axis. Cyan is the negative real axis.
[–] 2 points3 points (0 children)
I love this! As an electrical engineer it annoys me when people try to display real and imaginary parts separately. It has its uses, but mag and phase are so much more intuitive for telling me generally where it goes in a particular direction!
[–]Undergraduate 2 points3 points (0 children)
I absolutely love posts that beautify math like this
[–] 1 point2 points (0 children)
Damn that glassy, see-through look. Looks pretty nice.
[–] 1 point2 points (0 children)
Seems related to the approach to complex functions in Wegert's book.
[–] 1 point2 points (0 children)
This deserves an award but i have none
[–]Math Education 1 point2 points (0 children)
For those that want some more visual representations of complex functions, here's a plotter (albeit a 2D one) that lets you graph an arbitrary complex function:
http://davidbau.com/conformal/#z
[–] 1 point2 points (1 child)
And what does that even mean?
[–] 0 points1 point (0 children)
It's just a graph of a function.
The same as y=mx+b.
Except linear functions like the one above have a single input and a single output. So they're really easy to graph in 2D space. Input gets a dimension and the output gets a dimension.
Complex valued functions have two inputs and two outputs. That's because complex numbers are two dimensional. They have a real part and an imaginary part, typically represented by the x and y axes, respectively. Alternately you can think of them as having a magnitude and phase.
Damn, thats sick
[–] 0 points1 point (0 children)
That's a shiny function
[–][🍰] 0 points1 point (3 children)
Are you a math major/professor or smth? Like how did you even visualise that? I am asking because these 3d graphs aren't readily available on the internet.
[–] 0 points1 point (2 children)
Computers can visualize it for you, very easily.
I could make this exact graph in maybe 10 or 15 minutes. But I wouldn't have the beautiful movie to visualize it from different angles.
[–][🍰] 0 points1 point (1 child)
Wow, can't wait to be this good.
[–] 1 point2 points (0 children)
Aw. I'm nothing special. The computer is.
There are a couple really cool computing programs. The one I use the most these days is matlab. It has a ton of built in functions that make stuff like this super easy.
But ya, I think looking at this made me realize how far I've come in math. It's kind of cool! Holy cow I remember struggling with logarithms... So cool.
[–] 0 points1 point (0 children)
Impressive, very nice.
[–] 0 points1 point (4 children)
Asking as someone who doesn't know math. Can you make any 3d shape using a function? Or how advanced can it get?
[–] 1 point2 points (3 children)
Yep. The only rule is the vertical line rule. So if you can imagine a surface of some sort, as long as it passes the vertical line test, there is some function that describes it.
If it doesn't, then you have to start breaking it up into sections and trying to come up with functions for each one.
[–] 0 points1 point (2 children)
So technically, you could write a function that would look like a bike?
[–] 1 point2 points (1 child)
Nope. That wouldn't pass the vertical line test.
The vertical line test means that if you had a vertical line going through the shape, it would only pass through once.
There are lots of places where a line could be drawn that would pass through the bike in more than one place.
[–] 0 points1 point (0 children)
Oh okay! Thanks for clarifying. Very helpful.
[–] 0 points1 point (0 children)
that is patrick
[–] 0 points1 point (0 children)
I ask my digital signal processing students to construct surfaces in the complex frequency plane. Just polynomial addition over a complex plane usually centered about the origin. It provides significant insights regarding for instance the behaviors of complex poles and zeros.
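A sketch of that kind of exercise, with an illustrative (not physically meaningful) pole/zero placement:

```python
import numpy as np

sigma, omega = np.meshgrid(np.linspace(-3, 1, 300), np.linspace(-4, 4, 300))
s = sigma + 1j * omega              # grid over the complex frequency plane

zeros = [0.0]                       # illustrative zero at the origin
poles = [-0.5 + 2j, -0.5 - 2j]      # illustrative complex pole pair

H = np.ones_like(s)
for q in zeros:
    H = H * (s - q)
for p in poles:
    H = H / (s - p)

surface = np.abs(H)  # peaks near the poles, pinches to zero at the zeros
```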
[–] 0 points1 point (0 children)
Could you by any chance use this software to visualize the propagation of an electromagnetic wave using Maxwell's wave equation?
[–] 0 points1 point (0 children)
squints at my calc books on the shelf
Welp.... time for a refresher...
[–] 0 points1 point (0 children)
Pants!
[–] 0 points1 point (0 children)
looks like the lower half of patrick lol
[–] 0 points1 point (0 children)
Math is beautiful.
[–] 0 points1 point (1 child)
Can you make s domain graph of a laplace transform? That will be cool
[–][S] 0 points1 point (0 children)
One day yes. I plan to cover the Laplace transforms at some stage on the channel.
[–]Number Theory 0 points1 point (1 child)
This is very appealing.
I'd like to talk to you about producing a similar visualization for modular forms. These are inherently "just complex functions", but they're nontrivial to compute. But in two recent papers, I study how to compute modular forms and various visualizations of modular forms.
I'm knowledgeable about various 2d plotting, but I don't actually know anything about 3d plotting. I'm aware that blender exists and that shaders exist, but that's the extent of my knowledge. This is a major aspect of complex function visualization that I'm missing.
But I suspect the process is first to compute the intended complex function on some grid/mesh, to somehow insert this computation/mesh into blender (the step which I know the least about), and then to color the surface (maybe with a shader). Individually, these points don't seem too bad... but of course I don't actually know what I'm talking about.
Is this something that you would be interested in talking about more? If so either PM me or let me know and I'll PM you.
Cheers!
[–][S] 0 points1 point (0 children)
Hi there...
Yes, I'm interested to discuss this. Please reach out via PM or email. (email is better) mathstownchannel (at) gmail (dot) com.
Simply put, if you can give me a Python function that returns a complex value, I can build it. eg...
import cmath

def calc(z):
    z = cmath.cos(z)
    return z
|
# Astrological aspect
The astrological aspects are noted in the central circle of this natal chart, where the different colors and symbols distinguish between the different aspects, such as the square (red) or trine (green)
In astrology, an aspect is an angle the planets make to each other in the horoscope, also to the ascendant, midheaven, descendant, lower midheaven, and other points of astrological interest. Aspects are measured by the angular distance in degrees and minutes of ecliptic longitude between two points, as viewed from Earth. According to astrological tradition, they indicate the timing of transitions and developmental changes in the lives of people and affairs relative to the Earth.
As an example, if an astrologer creates a horoscope that shows the apparent positions of the celestial bodies at the time of a person's birth (a natal chart), and the angular distance between Mars and Venus is 92° ecliptic longitude, the chart is said to have the aspect "Venus square Mars" with an orb of 2° (i.e., it is 2° away from being an exact square; a square being a 90° aspect). The more exact an aspect, the stronger or more dominant it is said to be in shaping character or manifesting change.[1]
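The arithmetic in that example is just angular distance on the ecliptic circle; a minimal sketch restricted to the five Ptolemaic aspects named below:

```python
MAJOR_ASPECTS = {'conjunction': 0, 'sextile': 60, 'square': 90,
                 'trine': 120, 'opposition': 180}

def nearest_aspect(lon1, lon2):
    """Nearest major aspect for two ecliptic longitudes, with its orb."""
    sep = abs(lon1 - lon2) % 360
    if sep > 180:
        sep = 360 - sep                  # angular distance in [0, 180]
    name, angle = min(MAJOR_ASPECTS.items(), key=lambda kv: abs(sep - kv[1]))
    return name, abs(sep - angle)

print(nearest_aspect(10, 102))  # ('square', 2.0): Venus square Mars, orb 2
```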
## Approach
In medieval astrology, certain aspects, like certain planets, were considered to be either favorable (benefic) or unfavorable (malefic). Modern usage places less emphasis on these fatalistic distinctions. The more modern approach to astrological aspects is exemplified by research on astrological harmonics, of which John Addey was a major proponent and which Johannes Kepler earlier advocated in his book Harmonice Mundi in 1619. But even in modern times, aspects are considered either hard (the 90° square, the 180° opposition) or easy (the 120° trine, the 60° sextile). The conjunction aspect (essentially 0°, discounting orb) can be in either category, depending on which two planets are conjunct.
A list of aspects below presents their angular values and a recommended orb for each aspect. The orbs are subject to variation, depending on the need for detail and personal preferences.
### Ptolemaic Aspects
The traditional major aspects are sometimes called Ptolemaic aspects, since they were defined and used by Ptolemy in the 2nd century AD. These aspects are the conjunction (0°), sextile (60°), square (90°), trine (120°), and opposition (180°). Different astrologers and astrological systems/traditions use differing orbs (the degree of separation from exactitude) when calculating and using the aspects, though almost all use a larger orb for a conjunction than for the other aspects. The major aspects are those that divide 360 evenly and are divisible by 10 (with the exception of the semi-sextile).[2]
### Kepler's Aspects
Johannes Kepler described 13 aspects in his book Harmonice Mundi in 1619, grouping them into five degrees of influence. He picked these from the ratios he encountered in geometry and music: 0/2, 1/2, 1/4, 1/3, 1/6, 1/12, 5/12, 1/5, 2/5, 1/8, 3/8, 1/10, and 3/10 (see the table below). The general names for whole divisors are (Latin) n-ile for whole fractions 1/n, and m-n-ile for the fraction m/n. A semi-n-tile is a 2n-tile, 1/(2n), and a sesqui-n-tile is a tri-2n-tile, 3/(2n).
All aspects can be seen as small whole-number harmonics (1/n of 360°), and multiples m/n create new aspects when n and m share no common factor, gcd(n, m) = 1.
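A companion sketch of this harmonic rule, enumerating the irreducible fractions m/n; the cutoff n ≤ 12 and the exclusion of the 0° conjunction (which corresponds to m = 0) are choices of the illustration.

from math import gcd

def harmonic_aspects(max_n=12):
    # Aspect angles m/n * 360 with gcd(n, m) == 1, folded into (0, 180].
    out = []
    for n in range(2, max_n + 1):
        for m in range(1, n):
            if gcd(n, m) == 1 and 360 * m / n <= 180:
                out.append((m, n, 360 * m / n))
    return sorted(out, key=lambda t: t[2])

for m, n, angle in harmonic_aspects(6):
    print(f"{m}/{n}: {angle:g} deg")  # 60, 72, 90, 120, 144, 180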
Kepler's Aspects

| Aspect | Angle | Fraction | Regular polygon |
|---|---|---|---|
| Conjunction | 0° | 0/2 | Monogon |
| Opposition | 180° | 1/2 | Digon |
| Quartile (square) | 90° | 1/4 | Square |
| Trine (bisextile) | 120° | 1/3 | Triangle |
| Sextile (semitrine) | 60° | 1/6 | Hexagon |
| Semisextile (duodecile) | 30° | 1/12 | Dodecagon |
| Quincunx (quinduodecile) | 150° | 5/12 | Dodecagram |
| Quintile (bidecile) | 72° | 1/5 | Pentagon |
| Biquintile | 144° | 2/5 | Pentagram |
| Octile (semisquare) | 45° | 1/8 | Octagon |
| Trioctile | 135° | 3/8 | Octagram |
| Decile (semiquintile) | 36° | 1/10 | Decagon |
| Tridecile (sesquiquintile) | 108° | 3/10 | Decagram |
## Major aspects
The primary astrological aspects around the sky are: 0° conjunction, 30° semi-sextile, 60° sextile, 90° square, 120° trine, 150° quincunx, and 180° opposition. Five of them exist in east/west pairs.
### Conjunction
A conjunction (abrv. Con) is an angle of approximately 0–10°. An orb of approximately 10° is usually considered a conjunction, but if neither the Sun nor Moon is involved, some consider the conjunction to have a maximum orb of only about 8°.
Conjunctions are said to be the most powerful aspect, mutually intensifying the effects of the planets involved; they are a major point in an individual's chart. Whether the conjunction in question is regarded as beneficial or detrimental depends on the specific planets involved. In particular, conjunctions involving the Sun, Venus, and/or Jupiter, in any of the three possible conjunction combinations, are considered highly favourable, while conjunctions involving the Moon, Mars, and/or Saturn, again in any of the three possible conjunction combinations, are considered highly unfavourable.[3]
Exceptionally, the Sun, Venus, and Jupiter were in a 3-way (beneficial) conjunction on November 9–10, 1970, while on March 10 of that same year, the Moon, Mars, and Saturn were in 3-way (detrimental) conjunction.
If either of two planets involved in a conjunction is also under tension from one or more hard aspects with one or more other planets, then the added presence of the conjunction aspect will further intensify the tension of that hard aspect.
A planet in very close conjunction to the Sun (within 17 minutes of arc, or only about 0.28°) is said to be cazimi, an ancient astrological term meaning "in the heart" (of the Sun). For example, "Venus cazimi" means Venus is in conjunction with the Sun with an orb of less than ≈ 0.28°. Such a planetary position is a conjunction of great strength. A related term is combust, applicable when the planet in conjunction with the Sun is only moderately close to the Sun. In the case of combust, the specific orb limit depends on the particular planet in conjunction with the Sun.
The Sun and Moon are in conjunction once every month, at the New Moon.
#### Great conjunctions
[Figure: Great conjunctions of Jupiter and Saturn repeat about every 120° of ecliptic longitude (Saturn's path relative to Jupiter shown in blue). The threefold pattern arises from a near 2:5 resonance of the two orbital periods; because the ratio is closer to 60:149 (89 conjunctions per cycle), the triangular pattern slowly precesses. Also shown: Kepler's trigon, a diagram of great conjunctions from Johannes Kepler's 1606 book De Stella Nova.]
Great conjunctions (between the two slowest classical planets, Jupiter and Saturn) have attracted considerable attention in the past as celestial omens. During the late Middle Ages and the Renaissance, great conjunctions were a topic broached by most astronomers of the period up to the times of Tycho Brahe and Kepler, by scholastic thinkers such as Roger Bacon[4] or Pierre d'Ailly,[5] and they are mentioned in popular and literary writing by authors such as Dante[6] and Shakespeare.[7] This interest is traced back in Europe to the translations from Arabic sources, most notably Albumasar's book on conjunctions.[8]
Successive great conjunctions are displaced about 120° retrograde from one another and occur roughly every 20 years. Sequential conjunctions therefore trace a triangular pattern: every third conjunction returns after some 60 years to the vicinity of the first. These returns are observed to be shifted by some 8° relative to the fixed stars, so no more than four of them occur in the same zodiacal sign. Usually the conjunctions occur in one of the following triplicities or trigons of zodiacal signs:
| Element | Conjunction 1 | Conjunction 3 | Conjunction 2 |
|---|---|---|---|
| Fire trigon | Aries (1; 0°–30°) | Leo (5; 120°–150°) | Sagittarius (9; 240°–270°) |
| Earth trigon | Taurus (2; 30°–60°) | Virgo (6; 150°–180°) | Capricorn (10; 270°–300°) |
| Air trigon | Gemini (3; 60°–90°) | Libra (7; 180°–210°) | Aquarius (11; 300°–330°) |
| Water trigon | Cancer (4; 90°–120°) | Scorpio (8; 210°–240°) | Pisces (12; 330°–360°) |

Each cell gives the sign, its number, and its span of ecliptic longitude; the columns read 1, 3, 2 because each conjunction is displaced about 240° prograde from the last, so the trigon's signs are visited out of zodiacal order.
After about 220 years the pattern shifts to the next trigon, and in about 900 years returns to the first trigon.[9]
To each triangular pattern astrologers have ascribed one of the series of four elements. Particular importance has been accorded to the occurrence of a great conjunction in a new trigon, which is bound to happen after some 240 years at most.[10] Even greater importance was attributed to the beginning of a new cycle after all four trigons had been visited, something which happens in about 900 years.
Medieval astrologers usually gave 960 as the length of the full cycle, apparently because in some cases it took 240 years to pass from one trigon to the next.[10] If a cycle is defined by when the conjunctions return to the same right ascension rather than to the same constellation, then because of axial precession the cycle is only about 800 years. Use of the Alphonsine tables apparently led to the use of precessing signs, and Kepler gave a value of 794 years (40 conjunctions).[10][6]
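A quick numeric check of the figures in this section, assuming sidereal periods of about 11.862 years for Jupiter and 29.457 years for Saturn; the return formula is the one quoted in note [9].

J, S = 11.862, 29.457  # assumed sidereal periods, years

synodic = 1 / (1 / J - 1 / S)        # ~19.86 yr between great conjunctions
advance = (360 * synodic / J) % 360  # Jupiter's net advance per conjunction
trigon_return = 1 / (5 / S - 2 / J)  # ~883 yr return of the triangular pattern

# 360 - advance is ~117 deg, i.e. the roughly 120-degree retrograde shift
# per conjunction mentioned above.
print(round(synodic, 2), round(360 - advance, 1), round(trigon_return))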
Despite the inaccuracies, and some disagreement about the beginning of the cycle, the belief in the significance of such events generated a stream of publications that grew steadily up to the end of the 16th century. As the great conjunction of 1583 was the last in the watery trigon, it was widely supposed to herald apocalyptic changes; a papal bull against divination was issued in 1586, and as nothing really significant had happened by 1603 with the advent of a new trigon, public interest rapidly died.
### Opposition
Aspect angles as harmonic ratios[11]

| Sym | Harmonic | Angle | Name |
|---|---|---|---|
| | 1/1 | 360° (0°) | Conjunction |
| | 1/2 | 180° | Opposition |
| | 1/4 | 90° | Square or quartile/quadrate |
| | 1/8 | 45° | Octile or semi-square |
| | 3/8 | 135° | Tri-octile or sesqui-quadrate |
| | 1/16 | 22.5° | Sexdecile or semi-octile |
| | 3/16 | 67.5° | Sesqui-octile |
| | 5/16 | 112.5° | Quin-semi-octile |
| | 7/16 | 157.5° | Sep-semi-octile |
| | 1/3 | 120° | Trine or tri-novile |
| | 1/6 | 60° | Sextile or semi-trine |
| | 1/12 | 30° | Duodecile or semi-sextile |
| | 5/12 | 150° | Quincunx or quin-duodecile |
| | 1/24 | 15° | Quattuorvigintile or semi-duodecile |
| | 5/24 | 75° | "Squile" |
| | 7/24 | 105° | "Squine" |
| | 11/24 | 165° | "Quindecile"[12] or "contraquindecile" |
| | 1/5 | 72° | Quintile |
| | 2/5 | 144° | Bi-quintile |
| D | 1/10 | 36° | Decile or semi-quintile |
| D3 | 3/10 | 108° | Tri-decile or sesqui-quintile |
| | 1/15 | 24° | Quindecile or trient-quintile |
| | 2/15 | 48° | Bi-quindecile |
| | 4/15 | 96° | Quadra-quindecile |
| | 7/15 | 168° | Sep-quindecile |
| V | 1/20 | 18° | Vigintile or semi-decile |
| V3 | 3/20 | 54° | Tri-vigintile or sesqui-decile |
| V7 | 7/20 | 126° | Sep-vigintile |
| V9 | 9/20 | 162° | Non-vigintile |
| | 1/40 | 9° | Quadragintile or semi-vigintile |
| S | 1/7 | 51.43° | Septile |
| S2 | 2/7 | 102.86° | Bi-septile |
| S3 | 3/7 | 154.29° | Tri-septile |
| | 1/14 | 25.71° | Semi-septile |
| | 3/14 | 77.14° | Tre-semi-septile or sesqui-septile |
| | 5/14 | 128.57° | Quin-semi-septile |
| N | 1/9 | 40° | Novile |
| N2 | 2/9 | 80° | Bi-novile |
| N4 | 4/9 | 160° | Quadra-novile |
| | 1/18 | 20° | Octodecile or semi-novile or "vigintile" |
| | 1/36 | 10° | Trigintasextile |
| U | 1/11 | 32.73° | Undecile or undecim or "elftile"[13] |
| U2 | 2/11 | 65.45° | Bi-undecile or "bi-elftile" |
| U3 | 3/11 | 98.18° | Tri-undecile or "tri-elftile" |
| U5 | 5/11 | 163.64° | Quin-undecile or "quin-elftile" |
An opposition (abrv. Opp) is an angle of 180° (1/2 of the 360° ecliptic). An orb of somewhere between 5° and 10°[14] is usually allowed depending on the planets.
The opposition is said to be the second most powerful aspect. It resembles the conjunction, although the difference between them is that the opposition is fundamentally relational. Some say it is prone to exaggeration, as it is not unifying like the conjunction but has a dichotomous quality and an externalizing effect. All important axes in astrology are essentially oppositions. Therefore, at its most basic, it often signifies a relationship that can be oppositional or complementary.[citation needed]
### Sextile
A sextile (abrv. SXt or Sex) is an angle of 60° (1/6 of the 360° ecliptic, or 1/2 of a trine [120°]). An orb of between 3° and 4° is allowed, depending on the planets involved.
The sextile has been traditionally said to be similar in influence to the trine, but less intense. It indicates ease of communication between the two elements involved, with compatibility and harmony between them. A sextile provides opportunity and is very responsive to effort expended to gain its benefits.[citation needed] See information below on the semi-sextile.
### Square
A square (or quartile) (abrv. SQr or Squ) is an angle of 90° (1/4 of the 360° ecliptic, or 1/2 of an opposition [180°]). An orb of somewhere between 5° and 10°[14] is usually allowed depending on the planets involved.
As with the trine and the sextile, in the square it is usually the outer or superior planet that has an effect on the inner or inferior one. The square's energy is strong and usable but has a tension that needs integration between two different areas of life, or offers a choice point where an important decision needs to be made that involves an opportunity cost. It is the smallest major aspect that usually involves houses in different quadrants.[citation needed]
### Trine
A trine (abbrev. Tri) is an angle of 120° (1/3 of the 360° ecliptic). An orb of somewhere between 5° and 10° is allowed, depending on the planets involved.
The trine relates to what is natural and indicates harmony and ease. The trine may involve talent or ability which is innate. The trine has been traditionally assumed to be extremely beneficial. When involved in a transit, the trine involves situations that emerge from a current or past situation in a natural way.[citation needed]
## Minor aspects
### Semi-sextile
A semi-sextile (or duodecile) is an angle of 30° (1/12 of the 360° ecliptic). An orb of ±1.2° is allowed.
It is the most often used of the minor aspects, perhaps for no other reason than that it can be easily seen. It indicates a mental interaction between the planets involved that is more sensed than experienced externally. Any major aspect transit to a given planetary position will also involve the other planet that is in semi-sextile aspect to it. The energetic quality is one of building and potentiating each other gradually, but the planets, houses, and signs involved must be considered. It is similar to a sextile in offering a quality of opportunity that requires conscious effort to benefit from.[citation needed]
#### Quincunx
A quincunx is an angle of 150° (5/12 of the 360° ecliptic). An orb of ±3.5° is usually allowed depending on the planets involved.
Its effect is most obvious when there is a triangulating aspect of a 3rd planet in any major aspect to the 2 planets which are quincunx. Its interpretation will rely mostly on the houses, planets, and signs involved. The effect will involve different areas of life being brought together that are not usually in communication since the planets are far enough apart to be in different house quadrants, like the trine, but often with a shift in perspective involving others not previously seen clearly. Keywords for the quincunx are mystery, creativity, unpredictability, imbalance, surreal, resourcefulness, and humor.[citation needed]
It does not correspond to an equal division of the circle, but represents the 150° turning angle of a dodecagram, {12/5}.
## Other minor aspects
### Quintile
A quintile is an angle of 72° (1/5 of the 360° ecliptic). An orb of ±1.2° is allowed.
It indicates a strong creative flow of energy between the planets involved, often an opportunity for something performative, entertaining or expressive.[citation needed]
A decile, an angle of 36° (1/10 of 360°), is a semi-quintile.
Irreducible multiples
A biquintile is an angle of 144° (2/5 of the 360° ecliptic). The 144° angle is shared with the pentagram.
### Septile
A septile (symbol S) is an angle of roughly 51.43° (1/7 of the 360° ecliptic). An orb of ±1° is allowed.
It is a mystical aspect that indicates a hidden flow of energy between the planets involved, often involving spiritual or energetic sensitivity and an awareness of inner and more subtle, hidden levels of reality involving the planets in septile aspect.[citation needed]
Irreducible multiples
S2: A biseptile is an angle of 102.86° (2/7 of the 360° ecliptic).
S3: A triseptile is an angle of 154.29° (3/7 of the 360° ecliptic).
### Octile
An octile (or semi-square) is an angle of 45° (1/8 of the 360° ecliptic). An orb of ±2° is allowed.
It is an important minor aspect and indicates a stimulating or challenging energy like that of a square but less intense and more internal. The semi-square is considered to be the 8th harmonic of the chart because it is one-eighth of the 360° circle that the zodiac resides in (i.e., 360 / 8 = 45). The semi-square is considered to be a minor hard aspect because it is thought to cause friction in the native's life and prompt them to take some action to reduce that friction.
For example, if the Sun is posited at 10° Aquarius and Venus at 25° Pisces, then a semi-square occurs. This is thought to indicate that the native is not likely to be totally happy in matters of love. The native is thought to have a tendency to seek out individuals who are not necessarily compatible with them, and this may lead to a sense of tension and to actions taken to correct the frustration.[citation needed]
Irreducible multiples
• A sesquiquadrate (or trioctile) is an angle of 135° (3/8 of the 360° ecliptic). An orb of ±1.5° is allowed.
It is a harmonic of the semi-square, part of the square family of aspects involving challenge. It is not an exact division of the 360° ecliptic and therefore does not function as a standalone aspect but as part of a series when a semi-square is present.[citation needed]
### Novile
A novile (symbol N) is an angle of 40° (1/9 of the 360° ecliptic). An orb of ±1° is allowed.
It indicates an energy of perfection and/or idealization.[citation needed]
Irreducible multiples
N2: A bi-novile is an angle of 80° (2/9 of the 360° ecliptic).
N4: A quad-novile is an angle of 160° (4/9 of the 360° ecliptic).
### Decile
A decile is an angle of 36° (1/10 of the 360° ecliptic).
Irreducible multiples
A tri-decile (symbol D3) is an angle of 108° (3/10 of the 360° ecliptic).
### Undecile
An undecile (symbol U; or elftile[13]) is an angle of 32.73° (1/11 of the 360° ecliptic). An orb of ±1° is allowed.
Irreducible multiples
U2: A bi-undecile is an angle of 65.45° (2/11 of the 360° ecliptic).
U3: A tri-undecile is an angle of 98.18° (3/11 of the 360° ecliptic).
U4: A quad-undecile is an angle of 130.91° (4/11 of the 360° ecliptic).
U5: A quin-undecile is an angle of 163.64° (5/11 of the 360° ecliptic).
### Semi-octile
A semi-octile (or sexdecile) is an angle of 22.5° (1/16 of the 360° ecliptic). An orb of ±0.75° is allowed.
It is part of the square family of aspects and is considered a more minor version of the semi-square, triggering and involving challenge. Its harmonic aspects are 45°, 67.5°, 90°, 112.5°, 135°, 157.5° and 180°. It was discovered by Uranian astrologers.
Irreducible multiples
• A sesqui-octile (or bi-sexdecile) is an angle of 67.5° (3/16 of the 360° ecliptic).
• A quin-semi-octile (or quin-sexdecile) is an angle of 112.5° (5/16 of the 360° ecliptic).
• A sep-semi-octile (or sep-sexdecile) is an angle of 157.5° (7/16 of the 360° ecliptic).
## Declinations
The parallel and antiparallel (or contraparallel) are two other aspects which refer to degrees of declination above or below the celestial equator. They are not widely used by astrologers.
• Parallel: same degree of declination, within an orb of ±1°12′ of arc. This may be similar to a semi-square or quincunx in that it is not clearly seen. It represents an opportunity for perspective and communication between energies that requires some work to be made conscious.
• Contraparallel: opposite degree of declination, within an orb of ±1°12′ of arc. Said to be similar to the parallel. (Some who use the parallel do not consider the contraparallel an aspect.)
## References
1. ^ "The Aspects". Retrieved 2016-10-30.
2. ^ Claudius Ptolemy, Harmonics, book III, Chapter 9
3. ^ Buckwalter, Eleanor. "Depth analysis of the Astrological Aspects". Retrieved 2016-10-30.
4. ^ The Opus Majus of Roger Bacon, ed. J. H. Bridges, Oxford:Clarendon Press, 1897, Vol. I, p. 263.
5. ^ De concordia astronomice veritatis et narrationis historice (1414) [1]
6. ^ a b Woody K., Dante and the Doctrine of the Great Conjunctions, Dante Studies, with the Annual Report of the Dante Society, No. 95 (1977), pp. 119–134
7. ^ Aston M., The Fiery Trigon Conjunction: An Elizabethan Astrological Prediction, Isis, Vol. 61, No. 2 (Summer, 1970), pp. 158–187
8. ^ De magnis coniunctionibus was translated in the 12th century, a modern edition-translation by K. Yamamoto and Ch. Burnett, Leiden, 2000
9. ^ If J and S designate the periods of Jupiter and Saturn, then the return takes $1/(5/S - 2/J)$ years, which comes to 883.15 years; but to be a whole number of conjunction intervals it must be sometimes 913 years and sometimes 854. See Etz.
10. ^ a b c Etz D., (2000), Conjunctions of Jupiter and Saturn, Journal of the Royal Astronomical Society of Canada, Vol. 94, p.174
11. ^ Suignard, Michel (2017-01-24). "L2/17-020R2: Feedback on Extra Aspect Symbols for Astrology" (PDF).
12. ^ Ricki Reeves, 2001, The Quindecile: The Astrology & Psychology of Obsession
13. ^ a b [2] The German word for 11 is elf.
14. ^ a b Orbs used by Liz Greene, see Astrodienst
|
# Issues With Buggy Collision Detection
I am working on a small game in Processing 3. Essentially I have a small open-world game, and I wrote an enemy AI to make the enemies wander aimlessly back and forth around the game world and chase the player if he strays within a monster's field of vision. This all works great; there's nothing wrong with it.
The issue that is plaguing me, and that I haven't been able to solve, is the system that controls collision, player health, and life. Basically I have two persistent bugs. The first is that if the player walks within a monster's field of vision, is chased, and manages to escape, then when the player approaches another monster, that monster will start to chase the player; but even if it collides with the player, the player takes no damage, regardless of whether the player took damage from the first monster.
The second bug, which is most likely linked to the first, is that the whole collision/damage system appears not to run unless the player spawns inside a monster's field of vision when the game is run. If the player spawns anywhere else in the game world, the whole collision system breaks.
I have spent a lot of time trying to fix this, but nothing that I try to change seems to fix it. It appears to be a logic issue, I think, where something breaks the collision/damage system after the player escapes from a monster's field of view.
I'm not including the sprite and world classes, because all of the collision/damage is done in the included monster class. The code, with the sprite and world classes, can be found here: https://github.com/NoahJon3s/New-Journey-The-Game
Any help is appreciated and thanks in advance.
The Code:
GameWorld World;
Monster[] monsters = new Monster[2];
Sprite sprite;
int spriteXPos;
int spriteYPos;
int health = 8;
int increment;
int second;

void setup()
{
  size(1200, 900);
  //size(1200,900,JAVA2D);
  background(0);
  frameRate(30);
  for (int l = 0; l < monsters.length; l++)
  {
    int w = int(random(4));
    int x = int(random(width-100));
    int y = int(random(height-200));
    int z = int(random(width-100));
    monsters[l] = new Monster(w, x, y, z);
  }
  for (int l = 0; l < monsters.length; l++)
  {
  }
  sprite = new Sprite(int(random(width)), int(random(height)));
  World = new GameWorld();
}

void draw()
{
  background(0);
  World.Draw();
  for (Monster l : monsters)
  {
    l.Update();
    l.Draw();
  }
  sprite.Move();
  increment++;
  if (increment/30 == 1)
  {
    second++;
    increment = 0;
  }
  if (mousePressed == true)
  {
    if (health == 0)
    {
      health = 8;
      mousePressed = false;
    }
  }
  println(health);
}

class Monster
{
  int monsterWVal;
  int monsterXPos;
  int monsterYPos;
  int monsterZPos;
  PImage monster1;
  PImage monster2;
  PImage monster3;
  PImage monster4;
  float monsterRate = .06;
  int veiwSize = 250;
  int monsterOffsetX;
  int monsterOffsetY;
  int Width = 100;
  int Height = 200;
  boolean InSight;
  boolean Collision;

  Monster(int w, int y, int x, int z)
  {
    monsterWVal = w;
    monsterXPos = y;
    monsterYPos = x;
    monsterZPos = z;
  }

  {
    InSight = false;
    Collision = false;
  }

  void Update()
  {
    monsterOffsetX = monsterXPos - veiwSize;
    monsterOffsetY = monsterYPos - veiwSize;
    if (spriteXPos + Width > monsterOffsetX && spriteXPos < monsterOffsetX + veiwSize)
    {
      if (spriteYPos + Height > monsterOffsetY && spriteYPos < monsterOffsetY + veiwSize)
      {
        InSight = true;
      }
    }
    if (InSight == true)
    {
      float targetX = spriteXPos;
      float dx = targetX - monsterXPos;
      monsterXPos += dx * monsterRate;
      float targetY = spriteYPos;
      float dy = targetY - monsterYPos;
      monsterYPos += dy * monsterRate;
      if (dx > veiwSize || dx < -veiwSize)
      {
        InSight = false;
      }
      if (dy > veiwSize || dy < -veiwSize)
      {
        InSight = false;
      }
    }
    if (InSight == false)
    {
      if (monsterZPos > monsterXPos)
      {
        monsterXPos++;
      }
      if (monsterZPos < monsterXPos)
      {
        monsterXPos--;
      }
      if (monsterZPos == monsterXPos)
      {
        monsterZPos = int(random(width-100));
      }
    }
    if (spriteXPos + Width > monsterXPos && spriteXPos < monsterXPos + Width)
    {
      if (spriteYPos + Height > monsterYPos && spriteYPos < monsterYPos + Height)
      {
        Collision = true;
      }
    }
    if (Collision == true)
    {
      if (second == 1)
      {
        health--;
        second = 0;
        if (health < 0)
        {
          health = 0;
        }
        Collision = false;
      }
    }
  }

  void Draw()
  {
    for (int i = 0; i < monsters.length; i++)
    {
      if (monsterWVal == 0)
      {
        image(monster1, monsterXPos, monsterYPos);
      }
      if (monsterWVal == 1)
      {
        image(monster2, monsterXPos, monsterYPos);
      }
      if (monsterWVal == 2)
      {
        image(monster3, monsterXPos, monsterYPos);
      }
      if (monsterWVal == 3)
      {
        image(monster4, monsterXPos, monsterYPos);
      }
    }
  }
}
• When posting a question there's a button labeled {}. Highlight your code and press it to keep indentation – Bálint Apr 20 at 15:30
• What is the purpose of if(second==1) in the collision detection? I notice second is getting incremented on every 30th draw, so it's likely to be greater than 1 at some point, meaning the body of the if will never execute. – user1118321 Apr 20 at 16:06
• @user1118321 The timing system that you are seeing is there to allow for a time delay between the collision and the damage taken by the player from a monster. This is to prevent the monster from killing the player before the player can do anything. I think I can see what you are saying, however, so I'll try to make a fix for it – lockheed silverman Apr 20 at 16:40
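A minimal sketch (in Python rather than Processing, purely to illustrate the idea discussed in the comments above) of a per-monster damage cooldown, so that hits no longer depend on a shared global counter that only briefly equals 1; the cooldown value is an assumption:

import time

DAMAGE_COOLDOWN = 1.0  # seconds between hits from the same monster (assumed value)

class CooldownMonster:
    def __init__(self):
        self.last_hit = 0.0  # per-monster timer instead of a shared 'second' counter

    def try_damage(self, player, now):
        # Damage only if this monster's own cooldown has elapsed.
        if now - self.last_hit >= DAMAGE_COOLDOWN:
            player["health"] = max(0, player["health"] - 1)
            self.last_hit = now

player = {"health": 8}
m = CooldownMonster()
m.try_damage(player, time.time())  # first hit lands; repeats within 1 s are ignored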
|
## Basic insurance strategies – covered call and covered put
The use of options can be interpreted as buying or selling insurance. This post follows up on a previous post that focuses on two option strategies that can be interpreted as buying insurance – protective put and protective call. For every insurance buyer, there must be an insurance seller. In this post, we discuss two option strategies that are akin to selling insurance – covered call and covered put.
___________________________________________________________________________________
Selling insurance against an asset position
The previous post discusses the strategies of protective put and protective call. Both of these are “buy insurance” strategies. A protective put consists of a long asset and a long put where the long put is purchased to protect against a fall in the prices of the long asset. A protective call consists of a short asset position and a long call where the long call option is purchased to protect against a rise in the prices of the asset being sold short. Both of these strategies are to buy an option to protect against the adverse price movement of the asset involved.
When an insurer sells an insurance policy, the insurer must have enough assets on hand to pay claims. Now we discuss two strategies where the investor or trader holds an asset position that can be used to pay claims on a sold option.
A covered call consists of a long asset and a short call. The insurance sold is in the form of a call option. The long asset gains in value when asset prices rise, and the gains are used to cover the payments made by the call seller when the call buyer decides to exercise the call option. Therefore the covered call uses the upside profit potential of the long asset to back up (or cover) the call option sold to the call buyer. The covered call strategy can be used by an investor or trader who believes that the long asset will appreciate further in the future but is willing to trade the long-term upside potential for a short-term income (the call premium). This is especially true if the investor thinks that selling the long asset at the strike price of the call option will meet a substantial portion of his expected profit target.
A covered put consists of a short asset position and a short put. Here, the insurance sold is in the form of a put option. The short asset is used to back up (or cover) the put option sold to the put buyer. A short asset position is not something that is owned. How can a short asset position back up a put option? The short asset position gains in value when asset prices fall. A put option is exercised when the prices of the underlying asset fall. Thus a put option seller needs to pay claims exactly when the short asset position gains in value. Thus the gains in the short asset position are used to cover the payments made by the put seller when the put buyer decides to exercise the put option.
In this post, we examine covered call and covered put in greater details by examining their payoff diagrams and profit diagrams.
___________________________________________________________________________________
Covered call
As mentioned above, a covered call is a position consisting of a long asset and a short call. Here the holder of the long asset sells a call against the long asset. Figure 1 is the payoff of the long asset. Figure 2 is the payoff of the short call. Figure 3 is the payoff of the covered call. Figure 4 is the profit of the covered call. The strike price in all the diagrams is $K$. We will see from Figure 4 that the covered call is a synthetic short put.
$\text{ }$
Figure 1 – Long Asset Payoff
$\text{ }$
Figure 1 is the payoff of the long asset position. When the asset prices are greater than the strike price $K$, the positive payoff is unlimited. The unlimited upside potential is used to pay claims when the seller of the call is required to pay the call buyer.
$\text{ }$
Figure 2 – Short Call Payoff
$\text{ }$
Figure 2 is the payoff of the short call. This is the payoff of the call seller (i.e. the insurer). The call seller has negative payoff to the right of the strike price. The negative payoff occurs when the call buyer decides to exercise the call. The long asset payoff in Figure 1 is to cover this negative payoff.
$\text{ }$
Figure 3 – Long Asset + Short Call Payoff
$\text{ }$
Figure 3 is the payoff of the covered call, the result of combining Figure 1 and Figure 2. Unlike Figure 1, the long asset holder no longer has unlimited payoff to the right of the strike price. The payoff is now capped at the strike price $K$.
$\text{ }$
Figure 4 – Long Asset + Short Call Profit
$\text{ }$
Figure 4 is the profit of the covered call. The profit is the payoff less the cost of acquiring the position. At time 0, the cost is $S_0$ (the purchase price of the asset, an amount that is paid out) less $P$ (the option premium, an amount that is received). The future value of the cost of the covered call is then $S_0 e^{r T}-P e^{r T}$. The profit is then the payoff less this amount. The profit graph is in effect obtained by pressing down the payoff graph by the amount of $S_0 e^{r T}-P e^{r T}$. Because of the received option premium, $S_0 e^{r T}-P e^{r T}$ is less than the strike price $K$. As a result, the flat part of the profit graph is above the x-axis.
Without selling insurance (Figure 1), the profit potential of the long asset is unlimited. With the insurance liability (Figure 4), the profit potential is now capped essentially at the call option premium. In effect the holder of a covered call simply sells the right to the long asset's upside potential for cash received today (the option premium).
The strategy of a covered call may make sense if selling at the strike price can achieve a significant part of the profit target expected by the investor. Then the payoff from the strike price plus the call option premium may represent profit close to the expected target. Let's look at a hypothetical example. Suppose that the stock owned by an investor was purchased at $60 a share. The investor believes that the stock has upside potential and that the share price will rise to $70 in a year. The investor can then sell a call option with a strike price of $65, an expiration of 6 months, and a call premium of $5. In exchange for a short-term income of the call option premium, the investor gives up the profit potential of $70 a share. If in 6 months the share price is more than $65, then the investor will sell at $65 a share, producing a profit of $10 a share ($5 in share price appreciation and $5 call premium). If the share price is below the strike price in 6 months, the investor then pockets the $5 premium.
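As a sketch of the arithmetic in this hypothetical example (the prices and premium are the ones assumed above; interest on the premium is ignored for simplicity):

def covered_call_profit(s_T, s0=60.0, strike=65.0, premium=5.0):
    # Profit per share at expiration of long stock + short call, with r = 0.
    payoff = min(s_T, strike)      # upside is capped at the strike
    return payoff - s0 + premium   # payoff less stock cost, plus premium received

for s_T in (55, 60, 65, 70, 80):
    print(s_T, covered_call_profit(s_T))  # profit is 10 for any price >= 65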
Note the similarity between Figure 4 above and Figure 11 in this previous post. Figure 11 in that previous post is the profit diagram of a short put. So the covered call (long asset + short call) is also called a synthetic short put option, since it has the same profit as a short put.
___________________________________________________________________________________
Covered put
As indicated above, a covered put is to use the profit potential of a short asset position to cover the obligation of a sold put option. Figure 5 below is the profit of a short asset position. Figure 6 is the payoff of a short put option. Figure 7 is the payoff of the covered put. Figure 8 is the profit diagram of the covered put.
$\text{ }$
Figure 5 – Short Asset Payoff
$\text{ }$
Figure 5 is the payoff of the short asset position. The holder of a short asset position is concerned about rising prices of the asset. The holder of the short position borrows the asset in a short sale and sells the asset immediately for cash, which is then accumulated at the risk-free rate. The short position will have to buy the asset back in the spot market at a future date to repay the lender. If the spot price at expiration is greater than the original sale price, then the short position will lose money. In fact the potential loss is unlimited.
$\text{ }$
Figure 6 – Short Put Payoff
$\text{ }$
Figure 6 is the payoff of a short put option. Recall that the short put payoff is from the perspective of the seller of the put option. When the price of the underlying asset is below the strike price, the seller has the obligation to sell at the strike price (thus experiencing a loss). When the asset price is above the strike price, the put option expires worthless.
$\text{ }$
Figure 7 – Short Asset + Short Put Payoff
$\text{ }$
Figure 7 is the payoff of the covered put. With the covered put, the holder of the short asset can no longer profit by paying a price lower than the strike price for the asset to repay the lender. Instead he has to pay the strike price (this is the flat part of Figure 7). To the right of the strike price, the covered put continues to have the potential for unlimited loss.
$\text{ }$
Figure 8 – Short Asset + Short Put Profit
$\text{ }$
Figure 8 is the profit of the covered put, which indicates that the profit is essentially the option premium received by selling the put option. Without selling the insurance (Figure 5), the short asset position has good profit potential when prices fall. With selling the insurance, the profit potential to the left of the strike price is limited to the option premium. The covered put in effect trades the profit potential (when prices are low) for a known put option premium.
Compare Figure 8 above with Figure 5 in this previous post. Both profit diagrams are of the same shape. Figure 5 in the previous post is the profit diagram of a short call. So the combined position of short asset + short put is called a synthetic short call.
___________________________________________________________________________________
Synthetic put and call
Just a couple more observations to make about the synthetic put and synthetic call.
Note that Figure 3 (the payoff of long asset + short call) also resembles the payoff of a short put option, except that the level part of the payoff is not at the x-axis. Figure 3 is the usual short put option payoff lifted up by a uniform amount. That uniform amount can be interpreted as the payoff of a long zero-coupon bond. Thus we have the following relationship.
$\text{ }$
payoff of “long asset + short call” = payoff of “short put + zero-coupon bond”
$\text{ }$
Adding a bond lifts the payoff graph. However, adding a bond to a position does not change the profit. To see this, simply subtract the cost of acquiring the position from the payoff. You will see that for the bond, the same amount appears in both the cost and the payoff. Thus we have:
$\text{ }$
profit of “long asset + short call” = profit of “short put”
$\text{ }$
As mentioned earlier, the above relationship indicates that the combined position of long asset + short call can be viewed as a synthetic short put. We now see that the covered call is identical in profit to a short put.
A similar thing is going on in the covered put. Note that Figure 7 resembles the payoff of a short call, except that it is the usual short call payoff pressed down. We can think of this pressing down as borrowing. Thus we have:
$\text{ }$
payoff of “short asset + short put” = payoff of “short call – zero-coupon bond”
$\text{ }$
Adding a bond means lending and subtracting a bond means borrowing. As mentioned before, adding or subtracting a bond lifts or depresses the payoff graph but does not change the profit graph. We have:
$\text{ }$
profit of “short asset + short put” = profit of “short call”
$\text{ }$
The above relationship is the basis for calling "short asset + short put" a synthetic short call.
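A numeric spot-check of the two profit identities, under simplifying assumptions: r = 0, the asset bought at the strike, and call and put premiums equal (which is what put-call parity forces in that case):

def short_put_profit(s_T, strike=65.0, premium=5.0):
    return premium - max(strike - s_T, 0.0)

def covered_call_profit(s_T, s0=65.0, strike=65.0, premium=5.0):
    return min(s_T, strike) - s0 + premium

# The piecewise profits agree at every terminal price.
for s_T in (40.0, 65.0, 90.0):
    assert abs(short_put_profit(s_T) - covered_call_profit(s_T)) < 1e-9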
___________________________________________________________________________________
$\copyright \ \ 2015 \ \text{Dan Ma}$
|
# The Path Integral
1. Jul 16, 2010
### Pants
I'm a bit confused about how the path integral for, say, a spin-0 photon is calculated. My understanding of quantum mechanics is somewhere above Feynman's book QED, but somewhere below actually figuring out what every part of the technical definition means. Right now the main sticking point for me is grokking the Hamiltonian, but I don't think I have to figure that out in detail just yet to get the concept.
Anyways, as Feynman describes it in the first chapter of QED, the path integral represents each path that's possible from the source to the detector, and the phase of that path is determined by the energy of the photon and how long it takes to get from source to detector. (Please correct me if I'm misinterpreting this). Here's what I don't get: if the particle is emitted at time 0, and measurement occurs at time T, do we only look at paths that take T seconds to get to the detector travelling at velocity c, or are superluminal paths included in the calculation as well?
Thanks!
-Vince
|
# Given $E:N\to M$ an embedding and $V,W\in \mathfrak{X}(M)$ tangent to $N$, we claim that the commutator of $V$ and $W$ is also tangent to $N$.
I have encounter some difficulties while looking at an exercise online. It basically goes as follows:
Given $$E:N\to M$$ an embedding and $$V,W\in \mathfrak{X}(M)$$ tangent to $$N$$, we claim that the commutator of $$V$$ and $$W$$ is also tangent to $$N$$.
I would like to have some ideas about how to attack the problem effectively.
• There are a few possible approaches, depending on your definition of the commutator. – Amitai Yuval Jan 23 at 6:49
• It is just the usual one: $[A,B]=AB-BA$. – DaveWasHere Jan 23 at 11:35
If $$V$$ and $$W$$ are tangent to N, it means that there are vector fields $$v$$ and $$w$$ in $$\mathfrak X(N)$$ such that for any $$x\in N$$ we have $$V_{E(x)}=E_*v_x$$ and the same is true for $$W$$. To be able to interpret things properly, assume that $$V$$ and $$W$$ are smoothly extended off $$E(N)$$.
Then $$v$$ and $$V$$ are $$E$$-related and so are $$w$$ and $$W$$.
But we know that for $$E$$-related vector fields the commutators are also $$E$$-related, so we have (restricted to $$E(N)$$) $$[V,W]=E_*[v,w].$$
• Thanks for the comment! But where do we use the fact that $\mathfrak{X}(M)\ni V,W$? – DaveWasHere Jan 23 at 15:51
|
# Problem: A sample of an ideal gas at 1.00 atm and a volume of 1.81 L was placed in a weighted balloon and dropped into the ocean. As the sample descended, the water pressure compressed the balloon and reduced its volume. When the pressure had increased to 15.0 atm, what was the volume of the sample? Assume that the temperature was held constant.
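A short worked check using Boyle's law (pressure and volume inversely related at constant temperature and fixed amount of gas):

$V_2 = \frac{P_1 V_1}{P_2} = \frac{(1.00\ \text{atm})(1.81\ \text{L})}{15.0\ \text{atm}} \approx 0.121\ \text{L}$

Raising the pressure by a factor of 15 shrinks the volume by the same factor.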
|
# I Simple Modules and Maximal Right Ideals ...
1. Feb 4, 2017
### Math Amateur
I am reading Paul E. Bland's book, "Rings and Their Modules".
I am focused on Section 6.1 The Jacobson Radical ... ...
I need help with the proof of Proposition 6.1.7 ...
Proposition 6.1.7 and its proof read as follows:
In the above text from Bland, in the proof of (1) we read the following:
" ... ... Since $S$ is a simple $R$-module if and only if there is a maximal ideal $\mathfrak{m}$ of $R$ such that $R / \mathfrak{m} \cong S$ ... ... "
I do not follow exactly why the above statement is true ...
Can someone help me to see why and how, exactly, the above statement is true ...
Hope someone can help ...
Peter
2. Feb 4, 2017
### Math Amateur
Just trying to clarify a few things regarding my question ...
We have from a previous post on which I received help ... that if $\mathfrak{m}$ is a maximal submodule of a module $M$, then $M / \mathfrak{m}$ is a simple module ... BUT ... we can view a maximal right ideal as a maximal submodule of the ring $R$ viewed as a right module over itself ... thus if $\mathfrak{m}$ is a maximal right ideal, then $R / \mathfrak{m}$ is a simple module ... is that correct?
Not sure how to piece together the rest of the proof of the statement above ... but we know that a maximal right ideal exists in $R$ because of Bland's Corollary 1.2.4, which states that every ring $R$ has at least one maximal right ideal (maximal left ideal, maximal ideal).
A lingering question for me is ... why does Bland bother with $S$ in the above proof ...
Hope someone can help ...
Peter
3. Feb 5, 2017
### andrewkirk
Yes that sounds right. It covers only one of the two directions of the sentence in the OP though. I would elaborate the proof slightly as follows:
We use the theorem that a submodule $m$ of $M$ is maximal iff $M/m$ is simple, together with the fact that ideals of a ring $R$ can be treated as submodules of ${}_RR$, which is $R$ as a module over itself.
Forward Direction
Say there is a maximal ideal $m$ of $R$, then ${}_Rm$ must be a maximal submodule of ${}_RR$. Because if there is some proper submodule ${}_RQ$ of ${}_RR$, and ${}_Rm$ is a proper submodule of that, then $Q$ is a proper ideal of $R$ that properly contains $m$, so that $m$ cannot be a maximal ideal, which is a contradiction. Hence ${}_Rm$ is a maximal submodule of ${}_RR$, from which it follows from the above theorem that ${}_RR/{}_Rm$ is simple. If we further assume that ${}_RR/{}_Rm \cong S$ then $S$ must be simple since ${}_RR/{}_Rm$ is.
For the Reverse Direction we assume that $S$ is a simple $R$-module, and try to prove that there must be a maximal ideal $m$ of $R$ such that ${}_RR/{}_Rm\cong S$.
That looks harder, because we need to get ideals from modules, which is less obvious a process than getting modules from ideals. I will need to reflect on it.
4. Feb 6, 2017
### Math Amateur
Thanks for your help, Andrew ... appreciate it ...
Still reflecting on what you have written ...
Thanks again,
Peter
5. Feb 7, 2017
### Math Amateur
Andrew,
Can you help with how and why Bland can justifiably conclude that $\text{ann}_r( R / \mathfrak{m}) = \text{ann}_r(S)$ ... ... ?
Peter
6. Feb 7, 2017
### Staff: Mentor
This follows directly from $R/\mathfrak{m} \cong S$. Simply write the annihilator in front of it.
So the question remains, why $R/\mathfrak{m} \cong S$.
$\Longrightarrow :$ (see @andrewkirk 's post #3 above, or Ex. 1.3 in Bland)
If $\mathfrak{m} \subsetneq R$ is a maximal ideal, then there is simply no room left in $\{0\} = \mathfrak{m}/\mathfrak{m} \subsetneq R/\mathfrak{m}$ for an ideal of $R$ that contains $\mathfrak{m}$, so as an $R$-module $R/\mathfrak{m}$ has to be simple, for it contains $\mathfrak{m}$ as its zero element.
$\Longleftarrow :$
If $S$ is a simple $R$-module, then we choose a fixed element $t \in S - \{0\}$ and consider the mapping $\varphi : R \rightarrow S$ with $\varphi(r) := r\cdot t\;$. You can show that this is an $R$-module homomorphism. It is also surjective, because $S$ is simple. ($\{0\} \neq R\cdot t \subseteq S$ is a submodule. We have to either require $1 \in R$ here or that $R$ doesn't act trivially on $S$.)
Now by simple calculations $\ker \varphi = \textrm{ ann}_R (t)$ is an ideal of $R$ and $R/\ker \varphi \cong S$ because of the exact sequence $\{0\} \rightarrow \ker \varphi \rightarrow R \rightarrow R/ \ker \varphi \rightarrow \{0\}$
At last, $\ker \varphi$ has to be maximal, for otherwise $S$ wouldn't be simple (due to the isomorphism).
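To make the reverse direction concrete with a standard example (this example is mine, not from Bland's text): take $R = \mathbb{Z}$ and the simple module $S = \mathbb{Z}/p\mathbb{Z}$ for a prime $p$. Choosing $t = 1 + p\mathbb{Z}$, the map $\varphi(r) = r \cdot t$ is reduction mod $p$; it is surjective, $\ker \varphi = p\mathbb{Z} = \text{ann}_{\mathbb{Z}}(t)$ is a maximal ideal, and the isomorphism $\mathbb{Z}/p\mathbb{Z} \cong S$ is exactly the conclusion of the general argument.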
7. Feb 7, 2017
### andrewkirk
@fresh_42 Very nice indeed.
I would just add that, because exact sequences are not covered in all texts on modules (I first came across them in algebraic topology), it may be more intuitive to present the last step via the first isomorphism theorem for modules (part 3 of the result in that linked wiki paragraph), which is central to any study of modules. You have shown that $\varphi$ is surjective, so that Im $\varphi=S$. Hence, by part 3 of the first module isomorphism theorem: ${}_RR/$ ker $\varphi\cong$ Im $\varphi=S$.
Also, just slightly elaborating (for my own benefit if for nobody else's) on the last step: if there is a proper submodule ${}_RN$ of ${}_RR$ that properly contains ker $\varphi$, then $\varphi ({}_RN)$ is a submodule of $S$. Then, by considering $\varphi(r)$ for $r\in {}_RN-$ ker $\varphi$, and noting that we cannot have $\varphi(r)=0$ because $r\notin$ ker $\varphi$, we see that $\varphi({}_RN)$ is nontrivial, contradicting the assumed simplicity of $S$.
|
# Help with RPM validator
asked 2014-02-01 21:09:57 +0300
Hello devs,
I finished my application and now want to put it in the harbour, but I get the following errors when doing the RPM check: http://pastebin.com/iXcXFTru
That's a lot. What confuses me is the files in /opt and the QML-file-related errors, like 'QtGraphicalEffects 1.0' not allowed, or even 'RemoteBox 1.0' not allowed, which is a C++ module I have written myself.
Is there any guide about this topic anywhere? [solved] see answer 1
Update: Now I came to the following results:
ERROR [/usr/share/harbour-qremotecontrol/qml/initMeego.qml] Import 'QtQuick 1.1' is not allowed
ERROR [/usr/share/harbour-qremotecontrol/qml/initMeego.qml] Import 'com.nokia.meego 1.0' is not allowed
ERROR [/usr/share/harbour-qremotecontrol/qml/SettingsPage.qml] Import 'Qt.labs.folderlistmodel 1.0' is not allowed
ERROR [/usr/share/harbour-qremotecontrol/qml/MyComponents/Button.qml] Import 'QtGraphicalEffects 1.0' is not allowed
ERROR [/usr/share/harbour-qremotecontrol/qml/JollaImage.qml] Import 'QtGraphicalEffects 1.0' is not allowed
The first two errors come from a file added for MeeGo support; as I already broke MeeGo support by using QtQuick 2.0, this shouldn't be a problem to remove. Qt.labs.folderlistmodel: this is annoying, but I can understand that you do not want to support Qt.labs libraries, and I have a workaround for that too. QtGraphicalEffects: I cannot understand why you do not support this library. Please add it to the allowed imports. Is there any workaround for this?
## Comments
QtGraphicalEffects see: https://together.jolla.com/question/10366/harbour-allow-importing-qt-graphical-effects/ vote it up, comment there why and it might happen, we have it in discussion, but no verdict yet.
(2014-02-03 14:19:19 +0300)
Is there a way to add QtGraphicalEffect as library to supply it with the application?
(2014-02-05 18:27:40 +0300)
Yes, but it's not easy specially if QtGraphicalEffect depends on something not allowed, the following questions/answers are relevant: https://harbour.jolla.com/faq#5.3.0 - https://harbour.jolla.com/faq#6.3.0 - https://harbour.jolla.com/faq#2.6.0 - https://harbour.jolla.com/faq#2.7.0
(2014-02-06 09:17:46 +0300)
## 2 Answers
answered 2014-02-02 21:30:46 +0300
Please have a look at https://harbour.jolla.com/faq
Line 533 -> https://harbour.jolla.com/faq#2.3.0
Line 543- -> https://harbour.jolla.com/faq#5.1.0
Line 548 -> https://harbour.jolla.com/faq#5.3.0
Line 562 -> https://harbour.jolla.com/faq#2.1.0
Line 804 -> will be allowed with the next version: https://github.com/sailfish-sdk/sdk-harbour-rpmvalidator/commit/0adc0aa3226e9c0ffbee7bd268efa81c7733bfda
Line 1310 -> same as Line 804
Update: 09. Jan 2015:
FYI: as mentioned in several places before. Harbour QA started on 07. Jan 2015 to accept submissions which depend on QtGraphicalEffects.
## Comments
5.1 and 5.3 are annoying. This makes platform-independent development a lot harder.
(2014-02-02 22:20:10 +0300)
answered 2014-02-03 12:59:46 +0300
Is it possible to exclude files from deployment? I used one file structure for all platforms, and all files in the qml folder are deployed by default; this results in deploying also the init files for MeeGo and Android, which the RPM validator obviously does not like.
## Comments
1
Yes, in the rpm .spec file in the %install section you can add just normal shell commands to remove such files. If you are using the SDK template, this should be after %qmake5_install. For example:
%qmake5_install
rm -rf %{buildroot}/usr/share/harbour-myapp/qml/otheroperatingsystemqmlfiles
(2014-02-03 14:24:33 +0300)
please do not use answers for additional questions! Use the comment function for that.
(2014-02-03 14:24:43 +0300)
|
# [OS X TeX] Illegal Unit in psmatrix
John Burt burt at brandeis.edu
Mon Jun 20 19:27:07 EDT 2016
There seems to be an extra period in the line in the error message:
\psset{rowsep=.0.5cm, colsep=0.5cm}
The stray "." before "0.5cm" in the rowsep value is the illegal unit.
John
On Mon, Jun 20, 2016 at 6:19 PM, Nitecki, Zbigniew H. <
Zbigniew.Nitecki at tufts.edu> wrote:
> I have been using the psmatrix environment inside pst-node, using \psset
> to control the row and column spacing
> in order to fit the figure on a slide. All of a sudden, I am being told
> that the command
> > \psset{rowsep=.0.5cm, colsep=0.5cm}
> involves an illegal unit—it complains about colsep, not rowsep. What is
> going on?
>
> Here is a totally stripped down version of source code, pdf and log file;
> note the printing of “0.5 cm” on the lower left of the pdf. If it weren’t
> for that, I could ignore the error, since the picture came out as desired.
|
The ICLIFETEST Procedure
Statistical Methods
Nonparametric Estimation of the Survival Function
Suppose the event times for a total of $n$ subjects, $T_1, T_2, \ldots, T_n$, are independent random variables with an underlying cumulative distribution function $F(t)$. Denote the corresponding survival function as $S(t) = 1 - F(t)$. Interval-censoring occurs when some or all of the $T_i$'s cannot be observed directly but are known to be within the interval $(L_i, R_i]$.
The observed intervals might or might not overlap. If they do not overlap, then you can usually use conventional methods for right-censored data, with minor modifications. On the other hand, if some intervals overlap, you need special algorithms to compute an unbiased estimate of the underlying survival function.
To characterize the nonparametric estimate of the survival function, Peto (1973) and Turnbull (1976) show that the estimate can jump only at the right endpoints of a set of nonoverlapping intervals (also known as Turnbull intervals), $(q_1, p_1], \ldots, (q_m, p_m]$. A simple algorithm for finding these intervals is to order all the boundary values with labels of $L$ and $R$ attached and then pick up the intervals that have an $L$ value as the left boundary and an $R$ value as the right boundary. For example, suppose that the data set contains only three intervals, $(1, 3]$, $(2, 4]$, and $(5, 6]$. The ordered values are $1_L, 2_L, 3_R, 4_R, 5_L, 6_R$. Then the Turnbull intervals are $(2, 3]$ and $(5, 6]$.
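A minimal sketch of this interval-finding step; the encoding of the data as a list of (L, R] pairs is an assumption of the sketch, not PROC ICLIFETEST's interface.

def turnbull_intervals(intervals):
    # Sort all boundary values with an L or R label attached, then keep
    # every adjacent (L value, R value) pair: those are the Turnbull intervals.
    points = sorted([(l, 'L') for l, r in intervals] +
                    [(r, 'R') for l, r in intervals])
    return [(v1, v2)
            for (v1, lab1), (v2, lab2) in zip(points, points[1:])
            if lab1 == 'L' and lab2 == 'R']

print(turnbull_intervals([(1, 3), (2, 4), (5, 6)]))  # [(2, 3), (5, 6)]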
For an exact observation $T_i = t_i$, Ng (2002) suggests that it be represented by the interval $(t_i - \epsilon, t_i]$ for a small positive value $\epsilon$. If $R_i = \infty$ for an observation (a right-censored observation), then the observation is represented by $(L_i, \infty)$.
Define $\theta_j = P(q_j < T \le p_j)$, $j = 1, \ldots, m$. Given the data, the survival function, $S(t)$, can be determined only up to equivalence classes of functions that agree outside the Turnbull intervals; $S(t)$ is undefined if $t$ is within some $(q_j, p_j)$. The likelihood function for $\theta = (\theta_1, \ldots, \theta_m)'$ is then

$L(\theta) = \prod_{i=1}^{n} \sum_{j=1}^{m} \alpha_{ij}\, \theta_j$

where $\alpha_{ij}$ is 1 if $(q_j, p_j]$ is contained in $(L_i, R_i]$ and 0 otherwise.

Denote the maximum likelihood estimate for $\theta_j$ as $\hat{\theta}_j$. The survival function can then be estimated, for $t$ outside the Turnbull intervals, as

$\hat{S}(t) = \sum_{j:\, q_j \ge t} \hat{\theta}_j$
Estimation Algorithms
Peto (1973) suggests maximizing this likelihood function by using a Newton-Raphson algorithm subject to the constraint $\sum_{j=1}^{m} \theta_j = 1$. This approach has been implemented in the ICE macro. Although feasible, the optimization becomes less stable as the dimension of $\theta$ increases.
Treating interval-censored data as missing data, Turnbull (1976) derives a self-consistent equation for estimating the $\theta_j$'s:

$\theta_j = \frac{1}{n} \sum_{i=1}^{n} \mu_{ij}(\theta), \qquad \mu_{ij}(\theta) = \frac{\alpha_{ij}\, \theta_j}{\sum_{k=1}^{m} \alpha_{ik}\, \theta_k}$

where $\mu_{ij}(\theta)$ is the expected probability that the event occurs within $(q_j, p_j]$ for the $i$th subject, given the observed data.
The algorithm is an expectation-maximization (EM) algorithm in the sense that it iteratively updates $\mu_{ij}$ and $\theta_j$. Convergence is declared if, for a chosen small number $\epsilon$,

$\max_{j} \big| \theta_j^{(k+1)} - \theta_j^{(k)} \big| < \epsilon$

where $\theta_j^{(k)}$ denotes the updated value for $\theta_j$ after the $k$th iteration.

An alternative criterion is to declare convergence when increments of the likelihood are small:

$\log L\big(\theta^{(k+1)}\big) - \log L\big(\theta^{(k)}\big) < \epsilon$
There is no guarantee that the converged values constitute a maximum likelihood estimate (MLE). Gentleman and Geyer (1994) introduced the Kuhn-Tucker conditions based on constrained programming as a check of whether the algorithm converges to a legitimate MLE. These conditions state that a sufficient and necessary condition for the estimate to be an MLE is that the Lagrange multipliers $c_j = n - d_j$ are nonnegative for all the $\theta_j$'s that are estimated to be zero, where $d_j$ is the derivative of the log-likelihood function with respect to $\theta_j$:

$d_j = \frac{\partial \log L(\theta)}{\partial \theta_j} = \sum_{i=1}^{n} \frac{\alpha_{ij}}{\sum_{k=1}^{m} \alpha_{ik}\, \theta_k}$

You can use Turnbull's method by specifying METHOD=TURNBULL in the ICLIFETEST statement. The Lagrange multipliers are displayed in the Nonparametric Survival Estimates table.
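For illustration, a generic sketch of the self-consistency iteration in the notation above (a toy implementation, not SAS's):

def turnbull_em(alpha, tol=1e-8, max_iter=10000):
    # alpha[i][j] = 1 if Turnbull interval j lies inside subject i's
    # censoring interval, else 0. Returns the probability masses theta_j.
    n, m = len(alpha), len(alpha[0])
    theta = [1.0 / m] * m
    for _ in range(max_iter):
        new = [0.0] * m
        for i in range(n):
            denom = sum(alpha[i][k] * theta[k] for k in range(m))
            for j in range(m):
                new[j] += alpha[i][j] * theta[j] / (n * denom)
        converged = max(abs(a - b) for a, b in zip(new, theta)) < tol
        theta = new
        if converged:
            break
    return theta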
Groeneboom and Wellner (1992) propose using the iterative convex minorant (ICM) algorithm to estimate the underlying survival function as an alternative to Turnbull's method. Define $\beta_j$, $j = 1, \ldots, m$, as the cumulative probability at the right boundary of the $j$th Turnbull interval: $\beta_j = \sum_{k=1}^{j} \theta_k$. It follows that $\beta_m = 1$. Denote $\beta_0 = 0$ and $\beta = (\beta_1, \ldots, \beta_{m-1})'$. You can rewrite the likelihood function as

$L(\beta) = \prod_{i=1}^{n} \sum_{j=1}^{m} \alpha_{ij}\, (\beta_j - \beta_{j-1})$

Maximizing the likelihood with respect to the $\theta_j$'s is equivalent to maximizing it with respect to the $\beta_j$'s. Because the $\beta_j$'s are naturally ordered, the optimization is subject to the following constraint:

$0 \le \beta_1 \le \beta_2 \le \cdots \le \beta_{m-1} \le 1$

Denote the log-likelihood function as $\ell(\beta)$. Suppose its maximum occurs at $\hat{\beta}$. Mathematically, it can be proved that $\hat{\beta}$ equals the maximizer of the following quadratic function:

$q(\beta) = \ell(\hat{\beta}) + (\beta - \hat{\beta})'\, \nabla \ell(\hat{\beta}) - \tfrac{1}{2}\, (\beta - \hat{\beta})'\, W\, (\beta - \hat{\beta})$

where $\nabla \ell$ denotes the vector of first derivatives of $\ell$, and $W$ is a positive definite matrix of size $(m-1) \times (m-1)$ (Groeneboom and Wellner, 1992).
An iterative algorithm is needed to determine $\hat{\beta}$. For the $k$th iteration, the algorithm updates the quantity

$\eta^{(k)} = \beta^{(k-1)} + W^{-1}\big(\beta^{(k-1)}\big)\, \nabla \ell\big(\beta^{(k-1)}\big)$

where $\beta^{(k-1)}$ is the parameter estimate from the previous iteration and $W(\beta^{(k-1)})$ is a positive definite diagonal matrix that depends on $\beta^{(k-1)}$.

A convenient choice for $W$ is the negative of the second-order derivative of the log-likelihood function $\ell$:

$W_{jj}(\beta) = -\, \frac{\partial^2 \ell(\beta)}{\partial \beta_j^2}$

Given $\eta^{(k)}$ and $W$, the parameter estimate for the $k$th iteration, $\beta^{(k)}$, maximizes the quadratic function $q(\beta)$ subject to the ordering constraint.
Define the cumulative sum diagram as a set of points in the plane, $P_0 = (0, 0)$ and

$P_j = \Big( \sum_{l=1}^{j} W_{ll},\; \sum_{l=1}^{j} W_{ll}\, \eta_l^{(k)} \Big), \qquad j = 1, \ldots, m-1$

Technically, $\beta^{(k)}$ equals the left derivative of the convex minorant, or in other words, the largest convex function below the diagram $\{P_j\}$. This optimization problem can be solved by the pool-adjacent-violators algorithm (Groeneboom and Wellner, 1992).
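The pool-adjacent-violators step itself is short; here is a generic weighted sketch (isotonic regression of the working values with the diagonal weights, for illustration only, not PROC ICLIFETEST's code):

def pava(values, weights):
    # Weighted isotonic regression: returns the nondecreasing sequence
    # closest to `values` in weighted least squares.
    blocks = []  # each block: [weighted mean, total weight, points pooled]
    for v, w in zip(values, weights):
        blocks.append([v, w, 1])
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            v2, w2, c2 = blocks.pop()
            v1, w1, c1 = blocks.pop()
            blocks.append([(w1 * v1 + w2 * v2) / (w1 + w2), w1 + w2, c1 + c2])
    fit = []
    for v, w, c in blocks:
        fit.extend([v] * c)
    return fit

print(pava([0.3, 0.1, 0.4], [1.0, 1.0, 1.0]))  # [0.2, 0.2, 0.4]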
Occasionally, the ICM step might not increase the likelihood. Jongbloed (1998) suggests conducting a line search to ensure that positive increments are always achieved. Alternatively, you can switch to the EM step, exploiting the fact that the EM iteration never decreases the likelihood, and then resume iterations of the ICM algorithm after the EM step. As with Turnbull’s method, convergence can be determined based on the closeness of two consecutive sets of parameter values or likelihood values. You can use the ICM algorithm by specifying METHOD=ICM in the PROC ICLIFETEST statement.
As its name suggests, the EMICM algorithm combines the self-consistent EM algorithm and the ICM algorithm by alternating the two different steps in its iterations. Wellner and Zhan (1997) show that the converged values of the EMICM algorithm always constitute an MLE if it exists and is unique. The ICLIFETEST procedure uses the EMICM algorithm as the default.
Variance Estimation of the Survival Estimator
Peto (1973) and Turnbull (1976) suggest estimating the variances of the survival estimates by inverting the Hessian matrix, which is obtained by differentiating the log-likelihood function twice. This method can become less stable as the number of p_j's to estimate grows with the sample size. Simulations have shown that the confidence limits based on variances estimated with this method tend to have conservative coverage probabilities that are greater than the nominal level (Goodall, Dunn, and Babiker, 2004).
Sun (2001) proposes two resampling techniques, a simple bootstrap and multiple imputation, for estimating the variance of the survival estimator. The undefined regions that the Turnbull intervals represent pose a special challenge for the bootstrap method: because each bootstrap sample can produce a different set of Turnbull intervals, a time point at which the variance is to be evaluated (based on the original Turnbull intervals) might fall inside a Turnbull interval of a bootstrap sample, so that its survival probability is unknown there. A simple ad hoc solution is to shrink each Turnbull interval to its right boundary and modify the survival estimate into a right-continuous function that is defined at every t.
Let B denote the number of resampling data sets. Let the bth resampling data set, b = 1, …, B, be drawn independently from the original data with replacement, and let Ŝ_b(t) be the modified estimate of the survival function computed from the bth resampling data set. Then you can estimate the variance of Ŝ(t) by the sample variance

Var[Ŝ(t)] = (1/(B−1)) Σ_{b=1}^{B} [ Ŝ_b(t) − S̄(t) ]²

where S̄(t) = (1/B) Σ_{b=1}^{B} Ŝ_b(t).
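A minimal sketch of that bootstrap recipe, assuming a hypothetical helper fit_modified_survival() that refits the estimator on a resample and returns the right-shifted survival function as a callable:

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_variance(data, fit_modified_survival, times, n_boot=200):
    """Simple bootstrap variance of the modified survival estimate.

    `data` is a sequence of (L, R] interval observations;
    `fit_modified_survival` (hypothetical) returns a function t -> S(t)
    that places each Turnbull interval's mass at its right boundary,
    so it is defined at every time point in `times`.
    """
    n = len(data)
    estimates = np.empty((n_boot, len(times)))
    for b in range(n_boot):
        resample = [data[i] for i in rng.integers(0, n, size=n)]
        s_b = fit_modified_survival(resample)
        estimates[b] = [s_b(t) for t in times]
    # Sample variance across replicates; ddof=1 gives the 1/(B-1) factor.
    return estimates.var(axis=0, ddof=1)
```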
The method of multiple imputation exploits the fact that interval-censored data reduce to right-censored data when every interval observation of finite length shrinks to a single point (L_i = R_i). For right-censored data, you can estimate the variance of the survival estimates via the well-known Greenwood (1926) formula

Var[Ŝ(t)] = Ŝ(t)² Σ_{t_i ≤ t} d_i / [ n_i (n_i − d_i) ]

where d_i is the number of events at time t_i, n_i is the number of subjects at risk just prior to t_i, and Ŝ(t) is the Kaplan-Meier estimator of the survival function, Ŝ(t) = Π_{t_i ≤ t} (1 − d_i / n_i).
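Both quantities are straightforward to compute once the data are right-censored; the sketch below returns the Kaplan-Meier estimate together with its Greenwood variance at each distinct event time.

```python
import numpy as np

def km_with_greenwood(event_times, censor_times):
    """Kaplan-Meier estimate and Greenwood variance for right-censored data."""
    event_times = np.asarray(event_times, dtype=float)
    all_times = np.concatenate([event_times, np.asarray(censor_times, float)])
    ts = np.unique(event_times)
    surv, var, s, gw = [], [], 1.0, 0.0
    for t in ts:
        n_i = np.sum(all_times >= t)         # at risk just prior to t
        d_i = np.sum(event_times == t)       # events at t
        s *= 1.0 - d_i / n_i                 # Kaplan-Meier product term
        if n_i > d_i:
            gw += d_i / (n_i * (n_i - d_i))  # Greenwood accumulator
        else:
            gw = np.inf                      # undefined past the last event
        surv.append(s)
        var.append(s * s * gw)
    return ts, np.array(surv), np.array(var)
```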
Essentially, multiple imputation is used to account for the uncertainty in ranking overlapping intervals. The mth imputed data set is obtained by substituting for every interval-censored observation of finite length an exact event time that is randomly drawn from the survival function conditional on the observed interval.
Denote the Kaplan-Meier estimate computed from the mth imputed data set as Ŝ_m(t), m = 1, …, M. The variance of Ŝ(t) is estimated by

σ̂²(t) = Ŝ(t)² Σ_{t_i ≤ t} d̄_i / [ n̄_i (n̄_i − d̄_i) ] + (1/(M−1)) Σ_{m=1}^{M} [ Ŝ_m(t) − S̄(t) ]²

where d̄_i and n̄_i are the numbers of events and of subjects at risk at t_i averaged over the M imputed data sets, and S̄(t) = (1/M) Σ_{m=1}^{M} Ŝ_m(t).

Note that the first term in the formula for σ̂²(t) mimics the Greenwood formula but uses expected numbers of deaths and subjects. The second term is the sample variance of the Kaplan-Meier estimates of the imputed data sets, which accounts for the between-imputation contribution.
Pointwise Confidence Limits of the Survival Function
Pointwise confidence limits can be computed for the survival function given the estimated standard errors. Let α be specified by the ALPHA= option. Let z_{α/2} be the critical value for the standard normal distribution. That is, Φ(z_{α/2}) = 1 − α/2, where Φ is the cumulative distribution function of the standard normal random variable.
Constructing the confidence limits for the survival function as Ŝ(t) ± z_{α/2} σ̂(t), where σ̂(t) denotes the estimated standard error of Ŝ(t), might result in limits that exceed the range [0,1] at extreme values of t. This problem can be avoided by applying a transformation to Ŝ(t) so that the range is unrestricted. In addition, certain transformed confidence intervals for Ŝ(t) perform better than the usual linear confidence intervals (Borgan and Liestøl, 1990). You can use the CONFTYPE= option to select one of the following transformations: the log-log function (Kalbfleisch and Prentice, 1980), the arcsine-square root function (Nair, 1984), the logit function (Meeker and Escobar, 1998), the log function, and the linear function.
Let g denote the transformation that is applied to the survival function Ŝ(t). Using the delta method, you estimate the standard error of g(Ŝ(t)) by τ(t) = |g′(Ŝ(t))| σ̂(t), where g′ is the first derivative of the function g. The 100(1 − α)% confidence interval for S(t) is given by g^{−1}[ g(Ŝ(t)) ± z_{α/2} τ(t) ], where g^{−1} is the inverse function of g. The choices for the transformation g are as follows:
• arcsine-square root transformation: The estimated variance of arcsin(√Ŝ(t)) is τ²(t) = σ̂²(t) / [4 Ŝ(t)(1 − Ŝ(t))]. The 100(1 − α)% confidence interval for S(t) is given by sin²{ max[0, arcsin(√Ŝ(t)) − z_{α/2} τ(t)] } ≤ S(t) ≤ sin²{ min[π/2, arcsin(√Ŝ(t)) + z_{α/2} τ(t)] }.
• linear transformation: This is the same as the identity transformation. The 100(1 − α)% confidence interval for S(t) is given by Ŝ(t) − z_{α/2} σ̂(t) ≤ S(t) ≤ Ŝ(t) + z_{α/2} σ̂(t).
• log transformation: The estimated variance of log(Ŝ(t)) is τ²(t) = σ̂²(t) / Ŝ²(t). The 100(1 − α)% confidence interval for S(t) is given by Ŝ(t) exp(−z_{α/2} τ(t)) ≤ S(t) ≤ Ŝ(t) exp(z_{α/2} τ(t)).
• log-log transformation: The estimated variance of log(−log(Ŝ(t))) is τ²(t) = σ̂²(t) / [Ŝ(t) log Ŝ(t)]². The 100(1 − α)% confidence interval for S(t) is given by [Ŝ(t)]^{exp(z_{α/2} τ(t))} ≤ S(t) ≤ [Ŝ(t)]^{exp(−z_{α/2} τ(t))}.
• logit transformation: The estimated variance of log(Ŝ(t)/(1 − Ŝ(t))) is τ²(t) = σ̂²(t) / [Ŝ(t)(1 − Ŝ(t))]².
The 100(1 − α)% confidence limits for S(t) are given by Ŝ(t) / [ Ŝ(t) + (1 − Ŝ(t)) exp(z_{α/2} τ(t)) ] ≤ S(t) ≤ Ŝ(t) / [ Ŝ(t) + (1 − Ŝ(t)) exp(−z_{α/2} τ(t)) ].
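As a concrete instance, the log-log limits take only a few lines; a minimal sketch (z = 1.96 corresponds to ALPHA=0.05):

```python
import math

def loglog_ci(s_hat, se, z=1.96):
    """Pointwise CI for S(t) on the log(-log) scale (CONFTYPE=LOGLOG).

    The delta method gives the transformed standard error se / |S log S|;
    back-transforming keeps the limits inside (0, 1).  Requires 0 < s_hat < 1.
    """
    c = z * se / abs(s_hat * math.log(s_hat))
    lower = s_hat ** math.exp(c)     # exponent > 1 pulls the limit toward 0
    upper = s_hat ** math.exp(-c)    # exponent < 1 pulls the limit toward 1
    return lower, upper

# loglog_ci(0.8, 0.05) -> about (0.68, 0.88), always inside (0, 1)
```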
Quartile Estimation
The first quartile (25th percentile) of the survival time is the time beyond which 75% of the subjects in the population under study are expected to survive. For interval-censored data, it is problematic to define point estimators of the quartiles based on the survival estimate because of the undefined regions of the Turnbull intervals. To overcome this problem, you need to impute survival probabilities within the Turnbull intervals. The previously defined estimator Ŝ(t) achieves this by placing all the estimated probability of an interval at its right boundary. The first quartile is estimated by q̂_{0.25} = min{ t : Ŝ(t) ≤ 0.75 }.
If Ŝ(t) is exactly equal to 0.75 from t₁ to t₂, the first quartile is taken to be (t₁ + t₂)/2. If Ŝ(t) is greater than 0.75 for all values of t, the first quartile cannot be estimated and is represented by a missing value in the printed output.
The general formula for estimating the 100p percentile point is q̂_p = min{ t : Ŝ(t) ≤ 1 − p }.
The second quartile (the median) and the third quartile of survival times correspond to p = 0.5 and p = 0.75, respectively.
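A minimal sketch of the percentile rule applied to the right-continuous estimate; the midpoint convention for flat stretches where the estimate exactly equals 1 − p is omitted for brevity.

```python
def percentile(times, surv, p):
    """100p-th percentile from a right-continuous survival estimate.

    `times` are the right boundaries of the Turnbull intervals in increasing
    order and `surv` the survival probabilities there.  Returns the smallest
    t with S(t) <= 1 - p, or None when S never drops that far (the estimate
    is then reported as missing).
    """
    target = 1.0 - p
    for t, s in zip(times, surv):
        if s <= target:
            return t
    return None

# median = percentile(times, surv, 0.5); third quartile: p = 0.75
```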
Brookmeyer and Crowley (1982) constructed the confidence interval for the median survival time based on the confidence interval for the survival function S(t). The methodology is generalized to construct the confidence interval for the 100p percentile based on a g-transformed confidence interval for S(t) (Klein and Moeschberger, 1997). You can use the CONFTYPE= option to specify the g-transformation. The 100(1 − α)% confidence interval for the 100p percentile survival time is the set of all points t that satisfy

| g(Ŝ(t)) − g(1 − p) | / ( |g′(Ŝ(t))| σ̂(t) ) ≤ z_{α/2}

where g′ is the first derivative of g and z_{α/2} is the upper α/2 percentile of the standard normal distribution.
Kernel-Smoothed Estimation
After you obtain the survival estimate Ŝ(t), you can construct a discrete estimator of the cumulative hazard function. First, you compute the jumps of the discrete function as ΔΛ̂_j = d̂_j / n̂_j, where the d̂_j's and n̂_j's have been defined previously for calculating the Lagrange multiplier statistic.
Essentially, the numerator and denominator estimate the number of failures and the number at risk that are associated with the jth Turnbull interval. Thus these quantities estimate the increments of the cumulative hazard function over the Turnbull intervals.
The estimator of the cumulative hazard function is Λ̂(t) = Σ_{q_j ≤ t} ΔΛ̂_j, where q_j denotes the right boundary of the jth Turnbull interval.
Like Ŝ(t), Λ̂(t) is undefined if t is located within some Turnbull interval. To facilitate applying the kernel-smoothing methods, you need to reformulate the estimator so that it has only point masses. An ad hoc approach is to place all the mass for a Turnbull interval at its right boundary. The kernel-smoothed estimate of the hazard function is computed as

λ̂(t) = (1/b) Σ_{j=1}^{m} K[ (t − q_j)/b ] ΔΛ̂_j

where K is a kernel function and b is the bandwidth. You can estimate the cumulative hazard function by integrating λ̂(t) with respect to t.
Practically, an upper limit τ is usually imposed so that the kernel-smoothed estimate is defined on (0, τ). The ICLIFETEST procedure sets τ depending on whether the right boundary of the last Turnbull interval is finite: τ equals that boundary if it is finite, and the largest finite interval boundary otherwise.
Typical choices of kernel function are as follows:
• uniform kernel: K(x) = 1/2, −1 ≤ x ≤ 1
• Epanechnikov kernel: K(x) = (3/4)(1 − x²), −1 ≤ x ≤ 1
• biweight kernel: K(x) = (15/16)(1 − x²)², −1 ≤ x ≤ 1
For t < b, the symmetric kernels are replaced by the corresponding asymmetric kernels of Gasser and Müller (1979). Let q = t/b. The modified kernels are as follows:
• uniform kernel:
• Epanechnikov kernel:
• biweight kernel:
For τ − b < t < τ, let q = (τ − t)/b. The asymmetric kernels for t < b are used, with x replaced by −x.
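A sketch of the interior computation with the Epanechnikov kernel; the Gasser-Müller boundary corrections near 0 and τ are omitted.

```python
import numpy as np

def epanechnikov(x):
    """Symmetric Epanechnikov kernel: K(x) = 0.75 (1 - x^2) on [-1, 1]."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) <= 1.0, 0.75 * (1.0 - x * x), 0.0)

def smoothed_hazard(t, jump_times, jumps, b):
    """Kernel-smoothed hazard: (1/b) * sum_j K((t - q_j)/b) * dLambda_j.

    `jump_times` and `jumps` hold the locations (right boundaries of the
    Turnbull intervals) and sizes of the increments of the cumulative
    hazard estimate; b is the bandwidth.
    """
    x = (t - np.asarray(jump_times, dtype=float)) / b
    return float(np.sum(epanechnikov(x) * np.asarray(jumps, dtype=float)) / b)
```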
The bandwidth parameter b controls how much smoothness you want in the kernel-smoothed estimate. For right-censored data, a commonly accepted method of choosing an optimal bandwidth is to use the mean integrated squared error (MISE) as an objective criterion. This measure is difficult to adapt to interval-censored data because it no longer has a closed-form expression.
Pan (2000) proposes using a K-fold cross-validation likelihood as the criterion for choosing an optimal bandwidth for the kernel-smoothed estimate of the survival function. The ICLIFETEST procedure implements this approach for smoothing the hazard function. Computing the criterion entails a cross-validation procedure: first, the original data are partitioned into K almost balanced subsets; then, for each k, the kernel-smoothed estimate is computed with the kth subset left out. The optimal bandwidth is defined as the one that maximizes the cross-validation likelihood, that is, the sum over k of the log likelihood of the kth subset evaluated under the corresponding leave-one-subset-out estimate.
Comparison of Survival between Groups
If the TEST statement is specified, the ICLIFETEST procedure compares the groups formed by the levels of the TEST variable by using a generalized log-rank test. Let S_k(t) be the underlying survival function of the kth group, k = 1, …, K. The null and alternative hypotheses to be tested are

H₀: S_1(t) = S_2(t) = ⋯ = S_K(t) for all t

versus

H₁: at least one of the S_k(t)'s is different for some t
Let n_k denote the number of subjects in group k, and let n denote the total number of subjects (n = n_1 + n_2 + ⋯ + n_K).
Generalized Log-Rank Statistic
For the ith subject, let z_i = (z_{i1}, …, z_{iK})′ be a vector of indicators that represent whether or not the subject belongs to the kth group. Denote β = (β_1, …, β_K)′, where β_k represents the treatment effect for the kth group. Suppose that a model is specified and that the survival function for the ith subject can be written as S(t; z_i′β, θ), where θ denotes the nuisance parameters.
It follows that the likelihood function is

L(β, θ) = Π_{i=1}^{n} [ S(L_i; z_i′β, θ) − S(R_i; z_i′β, θ) ]

where (L_i, R_i] denotes the interval observation for the ith subject.
Testing whether or not the survival functions are equal across the groups is equivalent to testing whether all the β_k's are zero. It is natural to consider a score test based on the specified model (Finkelstein, 1986).
The score statistics for β are derived as the first-order derivatives of the log-likelihood function evaluated at β = 0 and θ = θ̂₀: U = ∂ log L(β, θ̂₀)/∂β evaluated at β = 0, where θ̂₀ denotes the maximum likelihood estimate for θ given that β = 0.
Under the null hypothesis that β = 0, all groups share the same survival function S(t). It is typical to leave S(t) unspecified and obtain a nonparametric maximum likelihood estimate by using, for instance, Turnbull's method. In this case, θ represents all the parameters to be estimated in order to determine S(t).
Suppose the given data generate m Turnbull intervals, with right boundaries q_1 < q_2 < ⋯ < q_m. Denote the probability estimate at the right endpoint of the jth interval by p̂_j. The nonparametric survival estimate is Ŝ(q_j) = Σ_{j′ > j} p̂_{j′} for any j.
Under the null hypothesis, Fay (1999) showed that the score statistics can be written in the form of a weighted log-rank test as

U = (U_1, …, U_K)′,  U_k = Σ_{j=1}^{m} w_j ( d̂_{kj} − n̂_{kj} d̂_j / n̂_j )

where w_j is a weight attached to the jth Turnbull interval whose form is determined by the derivative of the assumed survival model with respect to the treatment effect β.
d̂_{kj} estimates the expected number of events within the jth Turnbull interval for the kth group, and it is computed by apportioning each subject's event probability across the Turnbull intervals that are contained in the subject's observed interval:

d̂_{kj} = Σ_{i ∈ group k} α_{ij} p̂_j / Σ_{j′} α_{ij′} p̂_{j′}

where α_{ij} indicates whether the jth Turnbull interval is contained in the observed interval of subject i. d̂_j = Σ_k d̂_{kj} is an estimate of the expected number of events within the jth interval for the whole sample. Similarly, n̂_{kj} estimates the expected number of subjects in the kth group at risk just before entering the jth interval and can be estimated by n̂_{kj} = Σ_{i ∈ group k} Σ_{j′ ≥ j} α_{ij′} p̂_{j′} / Σ_{j′} α_{ij′} p̂_{j′}. n̂_j = Σ_k n̂_{kj} is an estimate of the expected number of subjects at risk just before the jth interval for all the groups.
Assuming different survival models gives rise to different weight functions (Fay, 1999). For example, Finkelstein’s score test (1986) is derived assuming a proportional hazards model; Fay’s test (1996) is based on a proportional odds model.
The choices of weight function are given in Table 49.3.

Table 49.3: Weight Functions for Various Tests

Test                        Weight
Sun (1996)                  1.0
Fay (1999)                  derived from the proportional odds model
Finkelstein (1986)          derived from the proportional hazards model
Harrington-Fleming (p,q)    Ŝ(q_{j−1})^p [1 − Ŝ(q_{j−1})]^q
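Given the expected event and at-risk counts defined above, the statistic itself is a single line per group. A sketch, with d, n, and w named only for illustration:

```python
import numpy as np

def generalized_logrank(d, n, w):
    """Weighted log-rank statistics U_k = sum_j w_j (d_kj - n_kj d_j / n_j).

    d[k, j] and n[k, j] are the expected numbers of events and of subjects
    at risk for group k and Turnbull interval j; w[j] is the interval weight.
    """
    d = np.asarray(d, dtype=float)
    n = np.asarray(n, dtype=float)
    d_pooled = d.sum(axis=0)    # expected events per interval, all groups
    n_pooled = n.sum(axis=0)    # expected at-risk per interval, all groups
    return ((d - n * d_pooled / n_pooled) * w).sum(axis=1)

# w = np.ones(m) gives the weight of Sun (1996) in Table 49.3.
```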
Variance Estimation of the Generalized Log-Rank Statistic
Sun (1996) proposed the use of multiple imputation to estimate the variance-covariance matrix of the generalized log-rank statistic U. This approach is similar to the multiple imputation method that is presented in the section Variance Estimation of the Survival Estimator: both methods impute right-censored data from interval-censored data and analyze the imputed data sets by using standard statistical techniques. Huang, Lee, and Yu (2008) suggested improving the performance of the generalized log-rank test by slightly modifying the variance calculation.
Suppose the given data generate m Turnbull intervals. Denote the probability estimate for the jth interval as p̂_j, and denote the nonparametric survival estimate as Ŝ(t) = Σ_{j: q_j > t} p̂_j for any t, where q_j is the right boundary of the jth interval.
In order to generate an imputed data set, you need to generate a random survival time for every subject in the sample. For the ith subject, a random time T_i is drawn from the discrete survival function conditional on the subject's observed interval:

P(T_i > t | L_i < T_i ≤ R_i) = [ Ŝ(t) − Ŝ(R_i) ] / [ Ŝ(L_i) − Ŝ(R_i) ],  L_i < t ≤ R_i

where (L_i, R_i] denotes the interval observation for the subject.
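A sketch of the imputation draw for a single subject; it uses the fact that, by construction, a Turnbull interval never straddles an endpoint of an observed interval, so checking the right boundary suffices.

```python
import numpy as np

rng = np.random.default_rng(7)

def impute_time(bounds, probs, L, R):
    """Draw an exact event time for one interval-censored subject.

    `bounds` holds the right boundaries q_j of the Turnbull intervals and
    `probs` the estimated probabilities p_j; the draw comes from the
    survival function conditional on L < T <= R, with all mass placed at
    the right boundaries (names are illustrative).
    """
    bounds = np.asarray(bounds, dtype=float)
    probs = np.asarray(probs, dtype=float)
    inside = (bounds > L) & (bounds <= R)   # intervals within (L, R]
    cond = probs * inside
    cond /= cond.sum()                      # conditional probabilities
    return rng.choice(bounds, p=cond)
```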
For the hth imputed data set (h = 1, …, H), let d_{kj}^(h) and n_{kj}^(h) denote the numbers of failures and of subjects at risk at the jth distinct imputed event time, obtained by counting the imputed T_i's for group k. Let d_j^(h) and n_j^(h) denote the corresponding pooled numbers over groups.
You can perform the standard log-rank test for right-censored data on each of the imputed data sets (Huang, Lee, and Yu, 2008). The test statistic for the hth imputed data set is

U^(h) = ( U_1^(h), …, U_K^(h) )′,  U_k^(h) = Σ_j [ d_{kj}^(h) − n_{kj}^(h) d_j^(h) / n_j^(h) ]

Its variance-covariance matrix V^(h) is estimated by the Greenwood formula, with elements

V_{kl}^(h) = Σ_j [ δ_{kl} n_{kj}^(h) / n_j^(h) − n_{kj}^(h) n_{lj}^(h) / (n_j^(h))² ] d_j^(h) ( n_j^(h) − d_j^(h) ) / ( n_j^(h) − 1 )

where δ_{kl} = 1 if k = l and 0 otherwise.
After analyzing each imputed data set, you can estimate the variance-covariance matrix of U by pooling the results: the within-imputation covariance matrices V^(h) are averaged, and the between-imputation variability of the U^(h)'s about their mean is added as an adjustment, where Ū = (1/H) Σ_{h=1}^{H} U^(h).
The overall test statistic is formed as Ū′ V̂⁻ Ū, where V̂⁻ is the generalized inverse of the pooled covariance matrix V̂. Under the null hypothesis, the statistic has a chi-squared distribution with degrees of freedom equal to the rank of V̂. By default, the ICLIFETEST procedure performs 1,000 imputations. You can change the number of imputations by specifying the IMPUTE option in the PROC ICLIFETEST statement.
Stratified Tests
Suppose the generalized log-rank test is to be stratified on the levels that are formed from the variables that you specify in the STRATA statement. Based only on the data of the sth stratum (s = 1, …, L), let U_s be the test statistic for the sth stratum and let V_s be the corresponding covariance matrix, constructed as in the section Variance Estimation of the Generalized Log-Rank Statistic. First, sum the stratum-specific estimates: U = Σ_s U_s and V = Σ_s V_s.
Then construct the global test statistic as U′ V⁻ U.
Under the null hypothesis, the test statistic has a chi-squared distribution with degrees of freedom equal to the rank of V. The ICLIFETEST procedure performs the stratified test only when the groups to be compared are balanced across all the strata.
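The pooling arithmetic, assuming the per-stratum statistics and covariance matrices are already available, is only a few lines:

```python
import numpy as np

def stratified_test(U_list, V_list):
    """Stratified generalized log-rank test.

    Sums the per-stratum statistics U_s and covariance matrices V_s, then
    forms U' V^- U with a pseudoinverse; df equals the rank of the summed V.
    """
    U = np.sum(U_list, axis=0)
    V = np.sum(V_list, axis=0)
    chi2 = float(U @ np.linalg.pinv(V) @ U)
    df = int(np.linalg.matrix_rank(V))
    return chi2, df
```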
When you have more than two groups, a generalized log-rank test tells you whether the survival curves are significantly different from each other, but it does not identify which pairs of curves are different. Pairwise comparisons can be performed based on the generalized log-rank statistic and the corresponding variance-covariance matrix. However, reporting all pairwise comparisons is problematic because the overall Type I error rate would be inflated. A multiple-comparison adjustment of the p-values for the paired comparisons retains the same overall probability of a Type I error as the K-sample test.
The ICLIFETEST procedure supports two types of paired comparisons: comparisons between all pairs of curves and comparisons between a control curve and all other curves. You use the DIFF= option to specify the comparison type, and you use the ADJUST= option to select a method of multiple-comparison adjustments.
Let χ²_r denote a chi-squared random variable with r degrees of freedom. Denote φ and Φ as the density function and the cumulative distribution function of a standard normal distribution, respectively. Let m be the number of comparisons; that is, m = K(K−1)/2 for all pairwise comparisons and m = K−1 for comparisons with a control.
For a two-sided test that compares the survival of the jth group with that of the lth group, 1 ≤ j < l ≤ K, the test statistic is

χ²_{jl} = ( U_j − U_l )² / ( V_{jj} + V_{ll} − 2 V_{jl} )

and the raw p-value is p = P( χ²_1 > χ²_{jl} ).
For multiple comparisons of more than two groups (K > 2), adjusted p-values are computed as follows:
• Dunnett-Hsu adjustment: With the first group defined as the control, there are m = K − 1 comparisons to be made. Let C be the m × K matrix of contrasts that represents the m comparisons, each row contrasting one of the remaining groups with the control. Let Σ = C V C′ and R be the covariance and correlation matrices of CU, respectively, where R is obtained by scaling Σ by the inverse square roots of its diagonal elements. The factor-analytic covariance approximation of Hsu (1992) is to find λ_1, …, λ_m such that R = D + λλ′, where D is a diagonal matrix whose jth diagonal element is 1 − λ_j² and λ = (λ_1, …, λ_m)′. The adjusted p-value is then computed from the multivariate normal probability that this approximation implies.
This value can be evaluated in a DATA step by using the PROBMC function.
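As a hedged illustration of the pairwise statistic, the sketch below applies a plain Bonferroni correction, simply because it needs no special distribution functions; it is not what the Dunnett-Hsu adjustment computes.

```python
import math
import numpy as np

def pairwise_tests(U, V):
    """All two-sided pairwise chi-square tests with Bonferroni adjustment.

    U is the vector of group statistics and V its covariance matrix; the
    raw p-value uses P(chi2_1 > x) = erfc(sqrt(x / 2)).
    """
    K = len(U)
    m = K * (K - 1) // 2                     # number of comparisons
    out = {}
    for j in range(K):
        for l in range(j + 1, K):
            chi2 = (U[j] - U[l]) ** 2 / (V[j, j] + V[l, l] - 2 * V[j, l])
            raw = math.erfc(math.sqrt(chi2 / 2.0))
            out[(j, l)] = min(1.0, m * raw)  # Bonferroni adjustment
    return out
```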
Trend Tests
Trend tests for right-censored data (Klein and Moeschberger, 1997, Section 7.4) can be extended to interval-censored data in a straightforward way. Such tests are specifically designed to detect ordered alternatives such as

H₁: S_1(t) ≥ S_2(t) ≥ ⋯ ≥ S_K(t), with at least one strict inequality

or

H₁: S_1(t) ≤ S_2(t) ≤ ⋯ ≤ S_K(t), with at least one strict inequality
Let c = (c_1, …, c_K)′ be a vector of scores associated with the K samples. Let U be the generalized log-rank statistic and V the corresponding covariance matrix of size K × K, constructed as in the section Variance Estimation of the Generalized Log-Rank Statistic. The trend test statistic and its standard error are given by c′U and (c′Vc)^{1/2}, respectively. Under the null hypothesis that there is no trend, the following z-score has, asymptotically, a standard normal distribution: z = c′U / (c′Vc)^{1/2}.
The ICLIFETEST procedure provides both one-tail and two-tail p-values for the test.
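A sketch of the computation, given U, V, and a score vector c:

```python
import math
import numpy as np

def trend_test(U, V, scores):
    """Trend z-test: z = c'U / sqrt(c'Vc) for ordered group scores c."""
    c = np.asarray(scores, dtype=float)
    z = float(c @ U) / math.sqrt(float(c @ V @ c))
    p_two_sided = math.erfc(abs(z) / math.sqrt(2.0))   # 2 * (1 - Phi(|z|))
    return z, p_two_sided

# Equally spaced scores, e.g. scores = [1, 2, ..., K], are a common choice.
```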
|
# If x is a prime number, what is x? (1) x < 15 (2) x-2 is
If x is a prime number, what is x? (1) x < 15 (2) x-2 is [#permalink] 20 Aug 2008, 04:02
If x is a prime number, what is x?
(1) x < 15
(2) x-2 is a multiple of 5
I got C, but Kaplan says the answer is E because 0 is also a multiple of 5. Don't multiples of a number start from that number and not include 0???
Last edited by lionheart187 on 20 Aug 2008, 09:16, edited 1 time in total.
Re: Error on Kaplan CAT? [#permalink] 20 Aug 2008, 04:10
lionheart187 wrote:
If x is a prime number, what is x?
(1) x < 15
(2) x is a multiple of 5
I got C, but Kaplan says the answer is E because 0 is also a multiple of 5. Don't multiples of a number start from that number and not include 0???
0 is indeed a multiple of 5. However, we are told that x is a prime number, information that excludes the possibility that x is 0.
Re: Error on Kaplan CAT? [#permalink] 20 Aug 2008, 04:17
a multiple of an integer is the product of that integer with another integer.
0 is a multiple of every integer.
Re: Error on Kaplan CAT? [#permalink] 20 Aug 2008, 06:54
If x is a prime number, then it has to be greater than 1. Thus, the answer should be C.
Re: Error on Kaplan CAT? [#permalink] 20 Aug 2008, 06:58
If x is a prime number (2 or higher) and x is a multiple of 5, then x = 5.
Of course, (1) is insufficient because x could be 2, 3, 5, 7, or 11. But 5 is the only multiple of 5 that is also a prime number, hence B.
lionheart187 wrote:
If x is a prime number, what is x?
(1) x < 15
(2) x is a multiple of 5
I got C, but Kaplan says the answer is E because 0 is also a multiple of 5. Don't multiples of a number start from that number and not include 0???
Re: Error on Kaplan CAT? [#permalink] 20 Aug 2008, 07:09
jallenmorris wrote:
Isn't the answer B? If x is a prime number (2 or higher) and x is a multiple of 5, then x = 5. Of course, (1) is insufficient because x could be 2, 3, 5, 7, or 11. But 5 is the only multiple of 5 that is also a prime number, hence B.
We are on the same boat.. agree with you.
Re: Error on Kaplan CAT? [#permalink] 20 Aug 2008, 07:20
Quote:
If x is a prime number, what is x?
(1) x < 15
(2) x is a multiple of 5
I like the approach Durgesh uses for these problems. When the question states "If...", make it so that condition creates the universe of numbers we deal with when we get to the statements. So "if x is a prime number" creates a universe of only prime numbers. And "What is x?" means we're trying to determine whether we can narrow the available choices of x down to a single value.
Prime numbers: 2, 3, 5, 7, 11, 13, 17, 19, 23. We've gone high enough, since we can see from (1) that one condition imposed is x < 15.
(1) If x < 15, we still have five options: 2, 3, 5, 7, and 11. This doesn't narrow the choices down to a single value. INSUFFICIENT.
(2) x is a multiple of 5. We're still dealing with the "universe" of prime numbers. The only multiple of 5 we see in the "universe" is 5, so the answer must be B. Since one of the statements is sufficient alone, we do not need to consider the statements together.
Re: Error on Kaplan CAT? [#permalink] 20 Aug 2008, 07:22
I agree with jallen & suresh.
B
0 is not a prime.
A prime number is a positive integer that has exactly two factors, 1 and the number itself.
We know 0 is neither a positive nor a negative number. 0 is a neutral number. So, it is not a prime number.
Re: Error on Kaplan CAT? [#permalink] 20 Aug 2008, 08:20
phew, for a second there I thought I was the only one who thought that B should be the answer ...
If x is a multiple of 5, well, the only PRIME multiple of 5 is 5 .... 0 isn't a prime #
Re: Error on Kaplan CAT? [#permalink] 20 Aug 2008, 08:39
jallenmorris wrote:
If x is a prime number (2 or higher) and x is a multiple of 5, then x = 5.
Of course, (1) is insufficient because x could be 2, 3, 5, 7, or 11. But 5 is the only multiple of 5 that is also a prime number, hence B.
lionheart187 wrote:
If x is a prime number, what is x?
(1) x < 15
(2) x is a multiple of 5
I got C, but Kaplan says the answer is E because 0 is also a multiple of 5. Don't multiples of a number start from that number and not include 0???
So sorry, (2) should be x-2 is a multiple of 5.
Thanks for noticing!
Re: Error on Kaplan CAT? [#permalink] 20 Aug 2008, 08:41
if x-2 is a multiple of 5, and x is prime, then x = 7.
Re: Error on Kaplan CAT? [#permalink] 20 Aug 2008, 08:43
Please edit the original question too. Then I will go with E. You get two options when combined:
x = 2 (x − 2 = 0, a multiple of 5)
x = 7 (x − 2 = 5, a multiple of 5)
Re: Error on Kaplan CAT? [#permalink] 20 Aug 2008, 08:44
Nice catch suresh.
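For anyone who wants to double-check, a quick brute-force enumeration (Python here, but any tool works) confirms E:

```python
# Brute-force check of the corrected statement (2): x - 2 is a multiple of 5.
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

both = [x for x in range(2, 100) if is_prime(x) and (x - 2) % 5 == 0]
print(both[:4])                      # [2, 7, 17, 37] -> (2) alone: insufficient
print([x for x in both if x < 15])   # [2, 7] -> (1) and (2) together: still two values
```

The catch is x = 2, since 0 counts as a multiple of 5; that is exactly Kaplan's point.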
Re: Error on Kaplan CAT? [#permalink] 20 Aug 2008, 13:05
If x − 2 is a multiple of 5, shouldn't x = 7 rather than 5? And if that is the case, wouldn't x have the following values:
x = 7, 17, etc.?
How can (2) be sufficient?
Re: Error on Kaplan CAT? [#permalink] 31 Aug 2008, 02:31
Page 14 of the NUMBER PROPERTIES GUIDE by MGMAT states the following
"Multiples multiply out from an integer and are therefore greater than or equal to that integer."
Therefore 0 would not be a multiple of 5! Correct me if I'm wrong!
Re: Error on Kaplan CAT? [#permalink] 31 Aug 2008, 02:39
Definition of Multiples
The products of a number with the natural numbers 1, 2, 3, 4, 5, ... are called the multiples of the number.
Re: Error on Kaplan CAT? [#permalink] 31 Aug 2008, 08:13
lionheart187 wrote:
Page 14 of the NUMBER PROPERTIES GUIDE by MGMAT states the following
"Multiples multiply out from an integer and are therefore greater than or equal to that integer."
Therefore 0 would not be a multiple of 5! Correct me if I'm wrong!
That is not true. If that's what the MGMAT guide says, they have it wrong, as a glance at any proper math book will demonstrate. The multiples of 5 are all numbers 5*x, where x is an integer, positive or negative (or zero):
...-10, -5, 0, 5, 10, ...
On the real GMAT, however, questions about divisibility and multiples are almost always restricted to positive integers- they will begin questions by saying 'If x is a positive integer...', so you probably won't need to worry about negative multiples on the test.
|
# Measuring voltage across 1 ohm resistor
This might be a very stupid question, but I can't seem to figure this out and I'm going nuts thinking about it.
Assuming you are trying to adjust the fixed bias of a tetrode push pull pair in a tube amp. In many amps, for convenience, the cathode will have a 1 ohm resistor connecting it to the ground, so you can easily use a multimeter to measure the voltage and thus the current flowing through the tetrode.
My confusion arises when trying to comprehend how the resistance of the multimeter probes themselves would contribute to this measurement.
I realize the probes are likely not thin enough wire or long enough to make any difference, but assume the resistance of each probe wire is also 1 ohm for this scenario.
Would it make any difference here?
• the difference would be the resistance (impedance) of the multimeter – jsotola Dec 9 '19 at 3:39
• When measuring voltage with a meter, effectively no current is drawn, so the resistance of the probes is irrelevant. That said, meters vary in how perfect they are, and there may also be substantial safety issues in probing tube gear which is either live or has charge retained on supply capacitors. – Chris Stratton Dec 9 '19 at 3:56
• How much current is flowing through the probes? It matters. If you are going to work with Ohm's Law, you need to know why it matters. – Harper - Reinstate Monica Dec 9 '19 at 4:17
If you were using an analog multimeter with, say, 20,000 Ω/V on a 0–150 mV F.S. range, the meter looks like an approximately 3 kΩ resistor.
Thus the voltage across the 1 Ω resistor will be a bit less, since it is shunted by about 3 kΩ (3002 Ω if you count the leads), and the voltage making it to the 3000 Ω meter movement is a bit less as well (3000/3002 of the voltage across the 1 Ω resistor), because 1/3002 of that voltage is dropped across each lead.
Schematic created using CircuitLab
In total then, in this example, the voltage read at the meter is lower than ideal by about 0.1%, which is going to be totally insignificant compared to the tolerance of the resistors and the accuracy of the meter.
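To put numbers on it, here is the same calculation as a quick script (assuming an ideal current source drives the shunt, with the values from the example above):

```python
# 1 ohm shunt, two 1 ohm leads, 3 kohm meter movement.
R_shunt, R_leads, R_meter = 1.0, 2.0, 3000.0
branch = R_leads + R_meter                       # 3002 ohms across the shunt
loaded = R_shunt * branch / (R_shunt + branch)   # shunt in parallel with branch
v_meter = loaded * R_meter / branch              # divider: leads drop the rest
error = 1.0 - v_meter / R_shunt                  # relative to the unloaded case
print(f"{error:.4%}")                            # ~0.0999%, i.e. about 0.1%
```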
A typical digital multimeter will have an input resistance of something like 10 MΩ on the 199 mV scale, so the effect will be several thousand times less; even with a very accurate resistor and meter it's negligible. Not to mention that the leads are probably closer to 0.2 Ω than 1 Ω.
• Thank you!!! :) – cat pants Dec 24 '19 at 3:21
If each probe is 1 Ω, they will make a difference only if the voltmeter impedance itself is very low. Normally, when measuring voltage, you want a large resistance in your voltmeter (usually 10 MΩ or even larger). If the voltmeter impedance is low enough to be comparable to the probes, you already have a useless voltmeter.
Schematic created using CircuitLab
Let's say you have this circuit. If you apply Kirchhoff's current law at the R1 node, there is a current that the circuit provides to this resistor, and if you do this with and without the voltmeter, the value will be different. This is called "loading" the circuit: the voltmeter itself draws too much current from the circuit. Not only will the voltage on R1 change, but all voltages in the circuit will change, so most voltmeters have a high resistance to avoid loading the circuit you measure.
Now, if the voltmeter impedance along with the probe resistance were close to the target resistor you are measuring, it would "load" the circuit.
In most cases the probes themselves do not make a difference, because the voltmeter has a very large input impedance.
edit: changed the voltmeter impedance connection
• Good, but draw the voltmeter with the 10 M in parallel with the ideal voltmeter, not series. An ideal voltmeter has infinite impedance so in the circuit as drawn now, no current will flow. – tomnexus Dec 9 '19 at 5:26
• that indeed is better. – Juan Dec 9 '19 at 5:41
You are presuming the only resistance of the meter is the probes. Actually, a digital voltmeter would have several megaohms of impedance.
Since the probes are in series with the meter, their resistance is tiny by comparison, and thus, inconsequential.
If you have selected ammeter mode (maybe you did this because the end value you are after is amps), don't do that, because it defeats the purpose of using that 1 ohm resistor as an ammeter shunt! But if you did, the meter would be near 0 ohms, and so your leads had better have some resistance! However, they don't, so don't do this.
My Electronics 101 professor once said that you can't measure a circuit's voltage without affecting the voltage level (very slightly), and then went on to demonstrate that even digital multimeters and their probes contain resistance. The probes have very low resistance (under 0.5 ohm), and the meter has a very high resistance (usually in excess of 10 M ohm). The point was that usually you can ignore the slight voltage drop caused by the meter, but to be aware that it exists. In my nearly 40 years of professional electronics experience, I've never had to take a meter's voltage drop into account.
|
Article
# Convergent recombination shapes the clonotypic landscape of the naive T-cell repertoire
Human Immunology Section, Vaccine Research Center, National Institute of Allergy and Infectious Diseases, National Institutes of Health, Bethesda, MD 20892, USA.
(Impact Factor: 9.67). 10/2010; 107(45):19414-9. DOI: 10.1073/pnas.1010586107
Source: PubMed
ABSTRACT
Adaptive T-cell immunity relies on the recruitment of antigen-specific clonotypes, each defined by the expression of a distinct T-cell receptor (TCR), from an array of naïve T-cell precursors. Despite the enormous clonotypic diversity that resides within the naïve T-cell pool, interindividual sharing of TCR sequences has been observed within mobilized T-cell responses specific for certain peptide-major histocompatibility complex (pMHC) antigens. The mechanisms that underlie this phenomenon have not been fully elucidated, however. A mechanism of convergent recombination has been proposed to account for the occurrence of shared, or "public," TCRs in specific memory T-cell populations. According to this model, TCR sharing between individuals is directly related to TCR production frequency; this, in turn, is determined on a probabilistic basis by the relative generation efficiency of particular nucleotide and amino acid sequences during the recombination process. Here, we tested the key predictions of convergent recombination in a comprehensive evaluation of the naïve CD8(+) TCRβ repertoire in mice. Within defined segments of the naïve CD8(+) T-cell repertoire, TCRβ sequences with convergent features were (i) present at higher copy numbers within individual mice and (ii) shared between individual mice. Thus, the naïve CD8(+) T-cell repertoire is not flat, but comprises a hierarchy of recurrence rates for individual clonotypes that is determined by relative production frequencies. These findings provide a framework for understanding the early mobilization of public CD8(+) T-cell clonotypes, which can exert profound biological effects during acute infectious processes.
• "Public sequences have a very high level of convergent recombination. Previous studies reported that public TCRs manifest a higher level of convergent recombination (Venturi et al. 2006, 2011; Quigley et al. 2010; Li et al. 2012). Our analysis of a large number of individuals revealed a continuous trend; increased sharing was associated with a gradual increase in the mean degree of convergent recombination (Fig. 2A); private CDR3 aa sequences were encoded on average by one nt sequence, the public sequences were encoded by 34.5 nt sequences on average."
##### Article: T-cell receptor repertoires share a restricted set of public and abundant CDR3 sequences that are associated with self-related immunity
ABSTRACT: The T cell receptor (TCR) repertoire is formed by random recombinations of genomic precursor elements; the resulting combinatorial diversity renders unlikely extensive TCR sharing between individuals. Here, we studied CDR3β amino-acid sequence sharing in a repertoire-wide manner, using high-throughput TCR-seq in 28 healthy mice. We uncovered hundreds of public sequences shared by most mice. Public CDR3 sequences, relative to private sequences, are two orders of magnitude more abundant on average, express restricted V/J segments, and feature high convergent nucleic-acid recombination. Functionally, public sequences are enriched for MHC-diverse CDR3 sequences that were previously associated with autoimmune, allograft and tumor-related reactions, but not with anti-pathogen-related reactions. Public CDR3 sequences are shared between mice of different MHC haplotypes, but are associated with different, MHC-dependent, V genes. Thus, despite their random generation process, TCR repertoires express a degree of uniformity in their post-genomic organization. These results, together with numerical simulations of TCR genomic rearrangements, suggest that biases and convergence in TCR recombination combine with ongoing selection to generate a restricted subset of self-associated, public CDR3 TCR sequences, and invite reexamination of the basic mechanisms of T-cell repertoire formation.
Genome Research 07/2014; DOI:10.1101/gr.170753.113 · 14.63 Impact Factor
• "…acid multiple sequence alignment. We observed that the center of the TRBD is more conserved than the flanking regions. This could be explained by nucleotide nibbling (Murphy et al. 2007), though the bias for calling TRBD gene segments cannot be fully ruled out. Regardless, this is consistent with previous reports (Freeman et al., 2009; Quigley et al., 2010)."
##### Article: Characterization of human αβTCR repertoire and discovery of D-D fusion in TCRβ chains
ABSTRACT: The characterization of the human T-cell receptor (TCR) repertoire has made remarkable progress, with most of the work focusing on the TCRβ chains. Here, we analyzed the diversity and complexity of both the TCRα and TCRβ repertoires of three healthy donors. We found that the diversity of the TCRα repertoire is higher than that of the TCRβ repertoire, whereas the usages of the V and J genes tended to be preferential with similar TRAV and TRAJ patterns in all three donors. The V-J pairings, like the V and J gene usages, were slightly preferential. We also found that the TRDV1 gene rearranges with the majority of TRAJ genes, suggesting that TRDV1 is a shared TRAV/DV gene (TRAV42/DV1). Moreover, we uncovered the presence of tandem TRBD (TRB D gene) usage in ~2% of the productive human TCRβ CDR3 sequences. Electronic supplementary material The online version of this article (doi:10.1007/s13238-014-0060-1) contains supplementary material, which is available to authorized users.
Protein & Cell 05/2014; 5(8). DOI:10.1007/s13238-014-0060-1 · 3.25 Impact Factor
• "…quences make up a "public" repertoire common to many individuals, formed through convergent evolution or a common source. However, it is also possible that these common sequences are just statistically more frequent, and are likely to be randomly recombined in two individuals independently, as previously discussed by Venturi et al. [6] [7] [21]. In other words, public sequences could just be chance events."
##### Article: Quantifying selection in immune receptor repertoires
ABSTRACT: The efficient recognition of pathogens by the adaptive immune system relies on the diversity of receptors displayed at the surface of immune cells. T-cell receptor diversity results from an initial random DNA editing process, called VDJ recombination, followed by functional selection of cells according to the interaction of their surface receptors with self and foreign antigenic peptides. To quantify the effect of selection on the highly variable elements of the receptor, we apply a probabilistic maximum likelihood approach to the analysis of high-throughput sequence data from the $\beta$-chain of human T-cell receptors. We quantify selection factors for V and J gene choice, and for the length and amino-acid composition of the variable region. Our approach is necessary to disentangle the effects of selection from biases inherent in the recombination process. Inferred selection factors differ little between donors, or between naive and memory repertoires. The number of sequences shared between donors is well-predicted by the model, indicating a purely stochastic origin of such "public" sequences. We find a significant correlation between biases induced by VDJ recombination and our inferred selection factors, together with a reduction of diversity during selection. Both effects suggest that natural selection acting on the recombination process has anticipated the selection pressures experienced during somatic evolution.
Proceedings of the National Academy of Sciences 04/2014; 111(27). DOI:10.1073/pnas.1409572111 · 9.67 Impact Factor
|
# Browse Abstracts & Presentations - 2015 International Symposium on Molecular Spectroscopy by Issue Date
• (International Symposium on Molecular Spectroscopy, 22-Jun-15)
All available spectroscopic data for all stable isotopologues of HeH$^+$ are analyzed with a direct-potential-fit (DPF) procedure that uses least-squares fits to experimental data in order to optimize the parameters defining ...
• (International Symposium on Molecular Spectroscopy, 22-Jun-15)
In living systems, the local structures of DNA and RNA are influenced by protonation, deprotonation and noncovalent binding interactions with cations. In order to determine the effects of Na$^{+}$ cationization on the ...
• (International Symposium on Molecular Spectroscopy, 22-Jun-15)
Calcium hydride is one of the abundant molecules in the stellar environment, and is considered as a probe for stellar analysis (B. Barbuy, R. P. Schiavon, J. Gregorio-Hetem, P. D. Singh, C. Batalha, Astron. ...
• (International Symposium on Molecular Spectroscopy, 22-Jun-15)
A HITRAN Application Programming Interface (HAPI) has been developed to allow users on their local machines much more flexibility and power. HAPI is a programming interface for the main data-searching capabilities of the ...
• (International Symposium on Molecular Spectroscopy, 22-Jun-15)
An experimental search for the permanent electric dipole moment of the electron (eEDM) is currently being performed using the metastable $^3\Delta_1$ state in trapped HfF$^+$ (H. Loh, K. C. Cossel, M. C. Grau, ...
• (International Symposium on Molecular Spectroscopy, 22-Jun-15)
The solvation of metal cations, a process that dictates chemistry in both catalytic and biological systems, has been well studied using gas-phase spectroscopy. However, until recently the solvation of cation-anion pairs ...
• (International Symposium on Molecular Spectroscopy, 22-Jun-15)
The near-infrared spectrum of nickel chloride, NiCl, has been recorded at high resolution using intracavity laser absorption spectroscopy. The NiCl molecules were produced in a plasma discharge of a nickel hollow cathode ...
• (International Symposium on Molecular Spectroscopy, 22-Jun-15)
The catalyzed reduction of CO$_{2}$ is an important step in the conversion of this small molecule into liquid fuels. Nickel 1,4,8,11-tetraazacyclotetradecane, Ni(cyclam), is a well-known catalyst for the reduction of ...
• (International Symposium on Molecular Spectroscopy, 22-Jun-15)
Helium atoms can attach to molecular cations via ternary collision processes, forming weakly bound ($\approx 1$ kcal/mol) He-M$^+$ complexes. We developed a novel sensitive action spectroscopic scheme for molecular ions based ...
• (International Symposium on Molecular Spectroscopy, 22-Jun-15)
We present HITRANonline, an online interface to the internationally recognised HITRAN molecular spectroscopic database [1], and describe the structure of its relational database backend [2]. As the amount and ...
• (International Symposium on Molecular Spectroscopy, 22-Jun-15)
The pure rotational spectra of 9-fluorenone (C$_{13}$H$_{8}$O) and benzophenone (C$_{13}$H$_{10}$O) were observed using chirped-pulse Fourier transform microwave spectroscopy (cp-FTMW). The 9-fluorenone spectrum was collected ...
• (International Symposium on Molecular Spectroscopy, 22-Jun-15)
Cryogenic ion vibrational predissociation (CIVP) spectroscopy was used to examine the onset of solvation upon the incremental addition of water molecules to the Mg$_{2}$SO$_{4}^{2+}$(H$_{2}$O)$_{n}$ cation (n = 4–11). ...
• (International Symposium on Molecular Spectroscopy, 22-Jun-15)
Air pollution arises from the oxidation of volatile organic compounds emitted into the atmosphere from both anthropogenic and biogenic sources. Free radicals dominate the gas phase chemistry leading to the formation of ...
• (International Symposium on Molecular Spectroscopy, 22-Jun-15)
The trihydrogen cation, H$_3^+$, represents one of the most important and fundamental molecular systems. Having only two electrons and three nuclei, H$_3^+$ is the simplest polyatomic system and is a key testing ...
• (International Symposium on Molecular Spectroscopy, 22-Jun-15)
High resolution absorption spectra of hot ammonia have been recorded in the 2400--5500 cm$^{-1}$ region and the line lists are presented. This extends our previous work on ammonia in the 740--4000 cm$^{-1}$ region (R.J. ...
• (International Symposium on Molecular Spectroscopy, 22-Jun-15)
HCP belongs to a class of reactive small molecules with much interest to spectroscopists. It bears certain similarities to HCN, including a strong $\tilde{A}$ (bent) - $\tilde{X}$ (linear) ultraviolet transition, associated with the HCP-HPC ...
• (International Symposium on Molecular Spectroscopy, 22-Jun-15)
Chirped-pulse Fourier-transform microwave spectroscopy has stimulated a resurgence of interest in rotational spectroscopy owing to the dramatic reduction in spectral acquisition time it enjoys when compared to cavity-based ...
• (International Symposium on Molecular Spectroscopy, 22-Jun-15)
2-Aminoisobutyric acid (Aib) is an achiral $\alpha$-amino acid having two equivalent methyl groups attached to C$_{\alpha}$. Extended Aib oligomers are known to have a strong preference for the adoption of a 3$_{10}$-helical ...
• (International Symposium on Molecular Spectroscopy, 22-Jun-15)
In the last symposium and a recent paper (W. Chen, K. Kawaguchi, P. F. Bernath, and J. Tang, J. Chem. Phys. 142, 064317 (2015)), we reported a simultaneous analysis for the Phillips and Ballik-Ramsay band systems ...
• (International Symposium on Molecular Spectroscopy, 22-Jun-15)
The internal force field of the $\tilde{C}$ $^1$B$_2$ state of SO$_2$ is determined up to quartic terms. The fit incorporates observed vibrational energy levels of both S$^{16}$O$_2$ and S$^{18}$O$_2$ below 3000 cm$^{-1}$, as well as ...
|
# Stable isotope ratio
The term stable isotope has a meaning similar to stable nuclide, but is preferably used when speaking of nuclides of a specific element. Hence, the plural form stable isotopes usually refers to isotopes of the same element. The relative abundance of such stable isotopes can be measured experimentally (isotope analysis), yielding an isotope ratio that can be used as a research tool. Theoretically, such stable isotopes could include the radiogenic daughter products of radioactive decay, used in radiometric dating. However, the expression stable-isotope ratio is preferably used to refer to isotopes whose relative abundances are affected by isotope fractionation in nature. This field is termed stable isotope geochemistry.
## Stable-isotope ratios
Measurement of the ratios of naturally occurring stable isotopes (isotope analysis) plays an important role in isotope geochemistry, but stable isotopes (mostly carbon, nitrogen, oxygen and sulfur) are also finding uses in ecological and biological studies. Other workers have used oxygen isotope ratios to reconstruct historical atmospheric temperatures, making them important tools for paleoclimatology.
These isotope systems, involving lighter elements that exhibit more than one primordial isotope each, have been under investigation for many years in order to study processes of isotope fractionation in natural systems. The long history of study of these elements is in part because the proportions of stable isotopes in these light and volatile elements are relatively easy to measure. However, recent advances in isotope ratio mass spectrometry (i.e. multiple-collector inductively coupled plasma mass spectrometry) now enable the measurement of isotope ratios in heavier stable elements, such as iron, copper, zinc, molybdenum, etc.
## Applications
The variations in oxygen and hydrogen isotope ratios have applications in hydrology since most samples will lie between two extremes, ocean water and Arctic/Antarctic snow.[1] Given a sample of water from an aquifer, and a sufficiently sensitive tool to measure the variation in the isotopic ratio of hydrogen in the sample, it is possible to infer the source, be it ocean water seeping into the aquifer or precipitation seeping into the aquifer, and even to estimate the proportions from each source.[2] Stable isotopes of water are also used in partitioning water sources for plant transpiration and groundwater recharge.[3][4]
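As a rough sketch of that two-source estimate (assuming simple conservative mixing between the two end members; the example values below are invented for illustration, not measurements):

```python
def mixing_fraction(delta_sample, delta_ocean, delta_precip):
    """Two-endmember mixing: fraction of a water sample attributable to
    ocean water, from delta-notation isotope values of the same element."""
    return (delta_sample - delta_precip) / (delta_ocean - delta_precip)

# Ocean water near 0 permil and local precipitation near -80 permil (made-up):
# mixing_fraction(-20.0, 0.0, -80.0) -> 0.75, i.e. ~75% ocean-derived water
```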
Another application is in paleotemperature measurement for paleoclimatology. For example, one technique is based on the variation in isotopic fractionation of oxygen by biological systems with temperature.[5] Species of Foraminifera incorporate oxygen as calcium carbonate in their shells. The ratio of the oxygen isotopes oxygen-16 and oxygen-18 incorporated into the calcium carbonate varies with temperature and the oxygen isotopic composition of the water. This oxygen remains "fixed" in the calcium carbonate when the foraminifera dies, falls to the sea bed, and its shell becomes part of the sediment. It is possible to select standard species of foraminifera from sections through the sediment column, and by mapping the variation in oxygen isotopic ratio, deduce the temperature that the foraminifera encountered during life if changes in the oxygen isotopic composition of the water can be constrained.[6]
In ecology, carbon and nitrogen isotope ratios are widely used to determine the broad diets of many free-ranging animals. They have been used to determine the broad diets of seabirds and to identify the geographical areas where individuals spend the breeding and non-breeding season.[7] Numerous ecological studies have also used isotope analyses to understand migration, food-web structure, diet, and resource use in sea turtles.[8] Determining diets of aquatic animals using stable isotopes has been particularly common, as direct observations are difficult;[9] isotope analyses also enable researchers to measure how human interactions with wildlife, such as fishing, may alter natural diets.[10]
In forensic science, research suggests that the variation in certain isotope ratios in drugs derived from plant sources (cannabis, cocaine) can be used to determine the drug's continent of origin.[11]
It also has applications in "doping control", to distinguish between endogenous and exogenous (synthetic) sources of hormones.[12][13]
Chondrite meteorites are classified using the oxygen isotope ratios. In addition, an unusual signature of carbon-13 confirms the non-terrestrial origin for organic compounds found in carbonaceous chondrites, as in the Murchison meteorite.
## References
1. ^ Han LF, Gröning M, Aggarwal P, Helliker BR (2006). "Reliable determination of oxygen and hydrogen isotope ratios in atmospheric water vapour adsorbed on 3A molecular sieve". Rapid Commun. Mass Spectrom. 20 (23): 3612–8. Bibcode:2006RCMS...20.3612H. doi:10.1002/rcm.2772. PMID 17091470.
2. ^ Weldeab S, Lea DW, Schneider RR, Andersen N (2007). "155,000 years of West African monsoon and ocean thermal evolution". Science. 316 (5829): 1303–7. Bibcode:2007Sci...316.1303W. doi:10.1126/science.1140461. PMID 17540896.
3. ^ Good, Stephen P.; Noone, David; Bowen, Gabriel (2015-07-10). "Hydrologic connectivity constrains partitioning of global terrestrial water fluxes". Science. 349 (6244): 175–177. Bibcode:2015Sci...349..175G. doi:10.1126/science.aaa5931. ISSN 0036-8075. PMID 26160944.
4. ^ Evaristo, Jaivime; Jasechko, Scott; McDonnell, Jeffrey J. (2015). "Global separation of plant transpiration from groundwater and streamflow". Nature. 525 (7567): 91–94. Bibcode:2015Natur.525...91E. doi:10.1038/nature14983. PMID 26333467.
5. ^ Tolosa I, Lopez JF, Bentaleb I, Fontugne M, Grimalt JO (1999). "Carbon isotope ratio monitoring-gas chromatography mass spectrometric measurements in the marine environment: biomarker sources and paleoclimate applications". Sci. Total Environ. 237–238: 473–81. Bibcode:1999ScTEn.237..473T. doi:10.1016/S0048-9697(99)00159-X. PMID 10568296.
6. ^ Shen JJ, You CF (2003). "A 10-fold improvement in the precision of boron isotopic analysis by negative thermal ionization mass spectrometry". Anal. Chem. 75 (9): 1972–7. doi:10.1021/ac020589f. PMID 12720329.
7. ^ Graña Grilli, M.; Cherel, Y. (2017). "Skuas (Stercorarius spp.) moult body feathers during both the breeding and inter-breeding periods: implications for stable isotope investigations in seabirds". Ibis. 159 (2): 266–271. doi:10.1111/ibi.12441.
8. ^ Pearson, RM; van de Merwe, JP; Limpus, CJ; Connolly, RM (2017). "Realignment of sea turtle isotope studies needed to match conservation priorities". Marine Ecology Progress Series. 583: 259–271. Bibcode:2017MEPS..583..259P. doi:10.3354/meps12353. ISSN 0171-8630.
9. ^ Gutmann Roberts, Catherine; Britton, J. Robert (2018-09-01). "Trophic interactions in a lowland river fish community invaded by European barbel Barbus barbus (Actinopterygii, Cyprinidae)". Hydrobiologia. 819 (1): 259–273. doi:10.1007/s10750-018-3644-6. ISSN 1573-5117.
10. ^ Gutmann Roberts, Catherine; Bašić, Tea; Trigo, Fatima Amat; Britton, J. Robert (2017). "Trophic consequences for riverine cyprinid fishes of angler subsidies based on marine-derived nutrients". Freshwater Biology. 62 (5): 894–905. doi:10.1111/fwb.12910. ISSN 1365-2427.
11. ^ Casale J, Casale E, Collins M, Morello D, Cathapermal S, Panicker S (2006). "Stable isotope analyses of heroin seized from the merchant vessel Pong Su". J. Forensic Sci. 51 (3): 603–6. doi:10.1111/j.1556-4029.2006.00123.x. PMID 16696708.
12. ^ Author, A (2012). "Stable isotope ratio analysis in sports anti-doping". Drug Testing and Analysis. 4 (12): 893–896. doi:10.1002/dta.1399. PMID 22972693.
13. ^ Cawley, Adam T.; Kazlauskas, Rymantas; Trout, Graham J.; Rogerson, Jill H.; George, Adrian V. (2005). "Isotopic Fractionation of Endogenous Anabolic Androgenic Steroids and Its Relationship to Doping Control in Sports" (PDF). Journal of Chromatographic Science. 43 (1): 32–38. doi:10.1093/chromsci/43.1.32.
Caldey Island
Caldey Island (Welsh: Ynys Bŷr) is a small island 0.6 miles (1 km) off the coast near Tenby in Pembrokeshire, Wales. With a recorded history going back over 1,500 years, it is one of the holy islands of Britain. A number of traditions inherited from Celtic times are observed by the Cistercian monks of Caldey Abbey, the owners of the island. The island's population consists of about 40 permanent residents and a varying number of Cistercian monks, known as Trappists. The monks' predecessors migrated there from Belgium in the early 20th century, taking over from Anglican Benedictines who had bought the island in 1906 and built the extant monastery and abbey but later ran into financial difficulties. Today, the monks of Caldey Abbey rely on tourism and on making perfumes and chocolate.
The usual access to the island is by small boat from Tenby, 2 1⁄2 miles (4 km) to the north. In the spring and summer, visitors are ferried to Caldey, not only to visit the sacred sanctuary but also to view the island's rich wildlife. Following a rat eradication programme, red squirrels were introduced in 2016. Alongside rare breed sheep and cattle, the island has a diverse bird and plant life.
Carbon dioxide in Earth's atmosphere
Carbon dioxide (CO2) is an important trace gas in Earth's atmosphere. It is an integral part of the carbon cycle, a biogeochemical cycle in which carbon is exchanged between the Earth's oceans, soil, rocks and the biosphere. Plants and other photoautotrophs use solar energy to produce carbohydrate from atmospheric carbon dioxide and water by photosynthesis. Almost all other organisms depend on carbohydrate derived from photosynthesis as their primary source of energy and carbon compounds. CO2 absorbs and emits infrared radiation at wavelengths of 4.26 µm (asymmetric stretching vibrational mode) and 14.99 µm (bending vibrational mode) and consequently is a greenhouse gas that plays a vital role in regulating Earth's surface temperature through the greenhouse effect.

Concentrations of CO2 in the atmosphere have ranged from as high as 4,000 parts per million (ppm) during the Cambrian period about 500 million years ago to as low as 180 ppm during the Quaternary glaciation of the last two million years. Estimates based on reconstructed temperature records suggest that over the last 420 million years CO2 concentrations peaked at around 2,000 ppm during the Devonian (~400 million years ago) and again in the Triassic (220–200 million years ago), with a few maximum estimates ranging up to ~3,700 ± 1,600 ppm (215 million years ago). The global annual mean CO2 concentration has increased by more than 45% since the start of the Industrial Revolution, from 280 ppm during the 10,000 years up to the mid-18th century to 410 ppm as of mid-2018. The present concentration is the highest in the last 800,000 years and possibly the last 20 million years. The increase has been caused by human activities, particularly the burning of fossil fuels and deforestation. This increase of CO2 and other long-lived greenhouse gases in Earth's atmosphere has produced the current episode of global warming. About 30–40% of the CO2 released by humans into the atmosphere dissolves into oceans, rivers and lakes, which has produced ocean acidification.
Elementar
Elementar is a German multinational manufacturer of elemental analyzers and isotope ratio mass spectrometers for the analysis of non-metallic elements like carbon, nitrogen, sulphur, hydrogen, oxygen or chlorine. The company emerged from Heraeus, a multinational German engineering company that produced analytical instrumentation. Elemental analyzers and isotope ratio mass spectrometers are used in the fields of analytical and environmental chemistry to measure the elemental and isotopic composition of diverse materials like chemicals, pharmaceuticals, fuels, food, water, plants, soil or waste.
Ethyl butyrate
Ethyl butyrate, also known as ethyl butanoate or butyric ether, is an ester with the chemical formula CH3CH2CH2COOCH2CH3. It is soluble in propylene glycol, paraffin oil, and kerosene. It has a fruity odor similar to pineapple and is a key ingredient used as a flavor enhancer in processed orange juices. It also occurs naturally in many fruits, albeit at lower concentrations.
Glossary of physics
This glossary of physics is a list of definitions of terms and concepts relevant to physics, its sub-disciplines, and related fields, including mechanics, materials science, nuclear physics, particle physics, and thermodynamics.
For more inclusive glossaries concerning related fields of science and technology, see Glossary of chemistry terms, Glossary of astronomy, Glossary of areas of mathematics, and Glossary of engineering.
History and culture of substituted amphetamines
Amphetamine and methamphetamine are pharmaceutical drugs used to treat a variety of conditions; when used recreationally, they are colloquially known as "speed." Amphetamine was first synthesized in 1887 in Germany by Romanian chemist Lazăr Edeleanu, who named it phenylisopropylamine. Around the same time, Japanese organic chemist Nagai Nagayoshi isolated ephedrine from the Ephedra sinica plant and later developed a method for ephedrine synthesis. Methamphetamine was synthesized from ephedrine in 1893 by Nagayoshi. Neither drug had a pharmacological use until 1934, when Smith, Kline and French began selling amphetamine as an inhaler under the trade name Benzedrine for congestion. During World War II, amphetamine and methamphetamine were used extensively by Allied and Axis forces for their stimulant and performance-enhancing effects. As the addictive properties of the drugs became known, governments began to place strict controls on the sale of the drugs. During the early 1970s in the United States, amphetamine became a schedule II controlled substance under the Controlled Substances Act. Despite strict government controls, amphetamine and methamphetamine have been used (legally or illegally) by individuals from a variety of backgrounds for a variety of purposes. Due to the large underground market for these drugs, they are often illegally synthesized by clandestine chemists, trafficked, and sold on the black market. Based on drug and drug precursor seizures, illicit amphetamine production and trafficking is much less prevalent than that of methamphetamine.
Hydrogen isotope biogeochemistry
Hydrogen isotope biogeochemistry is the scientific study of biological, geological, and chemical processes in the environment using the distribution and relative abundance of hydrogen isotopes. There are two stable isotopes of hydrogen, protium 1H and deuterium 2H, which vary in relative abundance on the order of hundreds of permil. The ratio between these two species can be considered the hydrogen isotopic fingerprint of a substance. Understanding isotopic fingerprints and the sources of fractionation that lead to variation between them can be applied to address a diverse array of questions ranging from ecology and hydrology to geochemistry and paleoclimate reconstructions. Since specialized techniques are required to measure natural hydrogen isotope abundance ratios, the field of hydrogen isotope biogeochemistry provides uniquely specialized tools to more traditional fields like ecology and geochemistry.
Isotopic signature
An isotopic signature (also isotopic fingerprint) is a ratio of non-radiogenic 'stable isotopes', stable radiogenic isotopes, or unstable radioactive isotopes of particular elements in an investigated material. The ratios of isotopes in a sample material are measured by isotope-ratio mass spectrometry against an isotopic reference material. This process is called isotope analysis.
Phellodon
Phellodon is a genus of tooth fungi in the family Bankeraceae. Species have small- to medium-sized fruitbodies with white spines on the underside from which spores are released. All Phellodon have a short stalk or stipe, and so the genus falls into the group known as "stipitate hydnoid fungi". The tough and leathery flesh usually has a pleasant, fragrant odor, and develops a cork-like texture when dry. Neighboring fruitbodies can fuse together, sometimes producing large mats of joined caps. Phellodon species produce a white spore print, while the individual spores are roughly spherical to ellipsoid in shape, with spiny surfaces.
The genus, with about 20 described species, has a distribution that includes Asia, Europe, North America, South America, Australia, and New Zealand. About half of the species are found in the southeastern United States, including three species added to the genus in 2013–14. Several Phellodon species were placed on a preliminary Red List of threatened British fungi because of a general decline of the genus in Europe. Species grow in a symbiotic mycorrhizal association with trees from the families Fagaceae (beeches and oaks) and Pinaceae (pines). Accurate DNA-based methods have been developed to detect the presence of Phellodon species in the soil, even in the extended absence of visible fruitbodies. Although Phellodon fruitbodies are considered inedible due to their fibrous flesh, the type species, P. niger, is used in mushroom dyeing.
Phellodon niger
Phellodon niger, commonly known as the black tooth, is a species of tooth fungus in the family Bankeraceae, and the type species of the genus Phellodon. It was originally described by Elias Magnus Fries in 1815 as a species of Hydnum. Petter Karsten included it as one of the original three species when he circumscribed Phellodon in 1881. The fungus is found in Europe and North America, although molecular studies suggest that the North American populations represent a similar but genetically distinct species.
Reference materials for stable isotope analysis
Isotopic reference materials are compounds (solids, liquids, gases) with well-defined isotopic compositions and are the ultimate sources of accuracy in mass spectrometric measurements of isotope ratios. Isotopic references are used because mass spectrometers are highly fractionating. As a result, the isotopic ratio that the instrument measures can be very different from the true ratio in the sample. Moreover, the degree of instrument fractionation changes during measurement, often on a timescale shorter than the measurement's duration, and can depend on the characteristics of the sample itself. By measuring a material of known isotopic composition, fractionation within the mass spectrometer can be removed during post-measurement data processing. Without isotope references, measurements by mass spectrometry would be much less accurate and could not be used in comparisons across different analytical facilities. Due to their critical role in measuring isotope ratios, and in part due to historical legacy, isotopic reference materials define the scales on which isotope ratios are reported in the peer-reviewed scientific literature.
Isotope reference materials are generated, maintained, and sold by the International Atomic Energy Agency (IAEA), the National Institute of Standards and Technology (NIST), the United States Geological Survey (USGS), the Institute for Reference Materials and Measurements (IRMM), and a variety of universities and scientific supply companies. Each of the major stable isotope systems (hydrogen, carbon, oxygen, nitrogen, and sulfur) has a wide variety of references encompassing distinct molecular structures. For example, nitrogen isotope reference materials include N-bearing molecules such as ammonia (NH3), atmospheric dinitrogen (N2), and nitrate (NO3−). Isotopic abundances are commonly reported using the δ notation, which is the ratio of two isotopes (R) in a sample relative to the same ratio in a reference material, often reported in per mille (‰) (equation below). Reference materials span a wide range of isotopic compositions, including enrichments (positive δ) and depletions (negative δ). While the δ values of references are widely available, estimates of the absolute isotope ratios (R) in these materials are seldom reported. This article aggregates the δ and R values of common and non-traditional stable isotope reference materials.
$$\delta^{X} = \frac{^{x/y}R_{\text{sample}}}{^{x/y}R_{\text{reference}}} - 1$$
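As a concrete illustration of the δ notation, here is a minimal Python sketch; the VSMOW ²H/¹H ratio quoted below is a commonly cited literature value, and the sample ratio is invented for the example.

```python
# Minimal sketch: converting a measured isotope ratio to delta notation
# (per mille) against a reference ratio.

def delta_permil(r_sample: float, r_reference: float) -> float:
    """delta = (R_sample / R_reference - 1), reported in per mille."""
    return (r_sample / r_reference - 1.0) * 1000.0

R_VSMOW_2H = 155.76e-6   # 2H/1H of the VSMOW water standard (approximate literature value)
r_sample = 140.0e-6      # hypothetical measured 2H/1H ratio of a sample

print(f"dD = {delta_permil(r_sample, R_VSMOW_2H):+.1f} per mille")
```

A sample whose heavy-to-light ratio is lower than the reference comes out with a negative δ (depleted), as here (about −101‰).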
SIRA
SIRA may refer to:
Stable Isotope Ratio Analysis
Section 115 Reform Act of 2006
Stable nuclide
Stable nuclides are nuclides that are not radioactive and so (unlike radionuclides) do not spontaneously undergo radioactive decay. When such nuclides are referred to in relation to specific elements, they are usually termed stable isotopes.
The 80 elements with one or more stable isotopes comprise a total of 253 nuclides that have never been observed to decay with current equipment (see list at the end of this article). Of these elements, 26 have only one stable isotope and are thus termed monoisotopic; the rest have more than one stable isotope. Tin has ten stable isotopes, the largest number known for any element.
|
A binary relation $R$ on a set $A$ is, formally, a subset of $A \times A$; we write $a\,R\,b$ when the pair $(a, b)$ belongs to that subset. $R$ is an **equivalence relation** if it is reflexive ($a\,R\,a$ for every $a \in A$), symmetric (if $a\,R\,b$ then $b\,R\,a$), and transitive (if $a\,R\,b$ and $b\,R\,c$ then $a\,R\,c$). When $R$ is an equivalence relation we usually write $a \sim b$, read "$a$ is equivalent to $b$" (in French the symbol is read « équivaut à » or « est équivalent à », i.e., "is equivalent to").

The **equivalence class** of $a$ is $[a] = \{x \in A \mid a \sim x\}$. For an equivalence relation, the following are equivalent (TFAE): (i) $a\,R\,b$; (ii) $[a] = [b]$; (iii) $[a] \cap [b] \neq \emptyset$. Sketch of proof: (i) implies (ii) by symmetry and transitivity; (ii) implies (iii) because reflexivity puts $a$ in $[a]$; and (iii) implies (i) because a common element $x$ with $a \sim x$ and $b \sim x$ gives $a \sim b$ by symmetry and transitivity. Consequently the equivalence classes **partition** $A$: every element lies in exactly one class, and the set of classes is the quotient set $A/\sim$. Conversely, every partition of $A$ arises this way, so equivalence relations on $A$ and partitions of $A$ are two views of the same structure. The equivalence relations on $A$, ordered by refinement ($\sim$ is *finer* than $\approx$ when the partition induced by $\sim$ refines the one induced by $\approx$), form a complete lattice, often written $\mathrm{Con}\,A$. An equivalence relation can also be packaged as a groupoid: the objects are the elements of $A$, with exactly one morphism from $x$ to $y$ precisely when $x \sim y$.

The canonical example is congruence modulo $n$: for $a, b \in \mathbb{Z}$ and $n \in \mathbb{N}$, $a \equiv b \pmod{n}$ if and only if $a$ and $b$ leave the same remainder on division by $n$ (equivalently, $n$ divides $a - b$). Reflexivity, symmetry, and transitivity follow at once from this characterization, so congruence modulo $n$ is an equivalence relation on $\mathbb{Z}$, and its classes are the $n$ residue classes. Other examples include "has the same absolute value" on the real numbers and "has the same birthday" on any set of people. Group theory supplies another family: if $H$ is a subgroup of a group $G$ and $a \sim b$ is defined by $ab^{-1} \in H$, then $\sim$ is an equivalence relation whose classes are the right cosets of $H$ in $G$. Equivalence questions also matter computationally: language equivalence of deterministic pushdown automata (DPDA) is decidable, but the original decidability proof used two semi-decision procedures and gave no complexity upper bound; Stirling later showed the problem is primitive recursive.

Not every relation with some of these properties is an equivalence relation. The relation $\geq$ on the real numbers is reflexive and transitive but not symmetric ($7 \geq 5$ does not imply $5 \geq 7$), so it is an order relation, as is $\leq$, read "less than or equal to". Symmetry plus transitivity does not force reflexivity either: an element related to nothing at all breaks reflexivity without breaking the other two properties.

On notation: the symbols used for equivalences and for the order-like preference relations of economics have standard Unicode code points and LaTeX commands:

| Relation | Hex | Dec | Name | LaTeX |
|---|---|---|---|---|
| ≻ | U+227B | 8827 | SUCCEEDS | \succ |
| P | U+0050 | 80 | LATIN CAPITAL LETTER P | P |
| > | U+003E | 62 | GREATER-THAN SIGN | \textgreater |
| ≽ | U+227D | 8829 | SUCCEEDS OR EQUAL TO | \succcurlyeq |
| ≿ | U+227F | 8831 | SUCCEEDS OR EQUIVALENT TO | \succsim |
| ⪰ | U+2AB0 | 10928 | SUCCEEDS ABOVE SINGLE-LINE EQUALS | \succeq |

The first three denote strict preference, the last three weak preference. In LaTeX math mode, note that a bare ~ is a tie (non-breaking space) and \~ is the tilde accent, so neither produces the equivalence symbol; the command for $\sim$ is \sim.
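Since the typesetting question recurs, here is a minimal LaTeX fragment showing these commands in context; it is a sketch assuming the amssymb package (needed for \succcurlyeq and \succsim; \sim, \equiv, \succ, and \succeq are in base LaTeX):

```latex
\documentclass{article}
\usepackage{amssymb} % provides \succcurlyeq and \succsim
\begin{document}
% In math mode a bare ~ is a tie (non-breaking space) and \~ is the
% tilde accent; the equivalence symbol is produced by \sim.
Equivalences: $a \sim b$, $a \equiv b \pmod{n}$, $a \approx b$.
Preferences: $x \succ y$, $x \succeq y$, $x \succcurlyeq y$, $x \succsim y$.
\end{document}
```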
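For finite sets these definitions translate directly into code. The following Python sketch (not tied to any library; all names are mine) checks the three axioms by brute force and computes the partition, using congruence modulo 3 on {0, …, 11} as the running example:

```python
from itertools import product

def is_equivalence(elements, related) -> bool:
    """Check reflexivity, symmetry, and transitivity of `related` on `elements`."""
    reflexive = all(related(a, a) for a in elements)
    symmetric = all(related(b, a)
                    for a, b in product(elements, repeat=2) if related(a, b))
    transitive = all(related(a, c)
                     for a, b, c in product(elements, repeat=3)
                     if related(a, b) and related(b, c))
    return reflexive and symmetric and transitive

def equivalence_classes(elements, related):
    """Partition `elements` into classes [a] = {x : a ~ x}.

    Comparing against one representative per class suffices because the
    relation is transitive and symmetric."""
    classes = []
    for a in elements:
        for cls in classes:
            if related(a, next(iter(cls))):
                cls.add(a)
                break
        else:
            classes.append({a})
    return classes

Z12 = range(12)
mod3 = lambda a, b: (a - b) % 3 == 0   # congruence modulo 3
assert is_equivalence(Z12, mod3)
print(equivalence_classes(Z12, mod3))
```

The printed partition is the three residue classes {0, 3, 6, 9}, {1, 4, 7, 10}, and {2, 5, 8, 11}, i.e., the quotient set of {0, …, 11} by congruence modulo 3.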
|
Stokes' phenomenon; smoothing a Victorian discontinuity
Publications Mathématiques de l'IHÉS, Tome 68 (1988) , pp. 211-221.
@article{PMIHES_1988__68__211_0,
author = {Berry, M. V.},
title = {Stokes' phenomenon ; smoothing a victorian discontinuity},
journal = {Publications Math\'ematiques de l'IH\'ES},
pages = {211--221},
publisher = {Institut des Hautes \'Etudes Scientifiques},
volume = {68},
year = {1988},
zbl = {0701.58012},
mrnumber = {90j:58019},
language = {en},
url = {http://www.numdam.org/item/PMIHES_1988__68__211_0/}
}
Berry, M. V. Stokes' phenomenon ; smoothing a victorian discontinuity. Publications Mathématiques de l'IHÉS, Tome 68 (1988) , pp. 211-221. http://www.numdam.org/item/PMIHES_1988__68__211_0/
[1] M.V. Berry, Uniform asymptotic smoothing of Stokes's discontinuities, Proc. Roy. Soc. Lond., A422 (1989), 7-21. | MR 90h:34084 | Zbl 0683.33004
[2] T. Poston and I. N. Stewart, Catastrophe theory and its applications, London, 1978. | MR 58 #18535 | Zbl 0382.58006
[3] F. J. Wright, The Stokes set of the cusp diffraction catastrophe, J. Phys., A13 (1980), 2913-2928. | MR 82b:58020 | Zbl 0514.58009
[4] G. G. Stokes, On the discontinuity of arbitrary constants that appear as multipliers of semi-convergent series, Acta. Math., 26 (1902), 393-397, reprinted in Mathematical and Physical Papers by the late Sir George Gabriel Stokes, Cambridge University Press, 1905, vol. V, p. 283-287. | JFM 33.0261.01
[5] G. G. Stokes, On the discontinuity of arbitrary constants which appear in divergent developments, Trans. Camb. Phil. Soc., 10 (1864), 106-128, reprinted in Mathematical and Physical Papers... (ref. [4]), vol. IV, p. 77-109.
[6] J. Larmor (ed.), Sir George Gabriel Stokes : Memoir and Scientific Correspondence (Cambridge University Press, 1907), vol. 1, p. 62. | JFM 38.0024.04
[7] G. G. Stokes, On the critical values of the sums of periodic series, Trans. Camb. Phil. Soc., 8 (1847), 533-610, reprinted in Mathematical and Physical Papers... (ref. [4]), vol. I, p. 236-313.
[8] R. B. Dingle, Asymptotic Expansions : their Derivation and Interpretation, New York and London, Academic Press, 1973. | MR 58 #17673 | Zbl 0279.41030
[9] M. V. Berry and C. Upstill, Catastrophe optics : morphologies of caustics and their diffraction patterns, Progress in Optics, 18 (1980), 257-346.
[10] G. B. Airy, On the intensity of light in the neighbourhood of a caustic, Trans. Camb. Phil. Soc., 6 (1838), 379-403.
[11] G. G. Stokes, On the numerical calculation of a class of definite integrals and infinite series, Trans. Camb. Phil. Soc., 9 (1847), 379-407, reprinted in Mathematical and Physical Papers... (ref. [4]), vol. II, p. 329-357.
[12] R. Balian, G. Parisi and A. Voros, Discrepancies from asymptotic series and their relation to complex classical trajectories, Phys. Rev. Lett., 41 (1978), 1141-1144.
[13] J. Ecalle, Les fonctions résurgentes (3 vol.), Publ. Math. Université de Paris-Sud, 1981, and Cinq applications des fonctions résurgentes, preprint 84T62, Orsay, 1984. | Zbl 0499.30034
|
# Choosing a null hypothesis to answer the question “Are my model's predictions better than random?”
I'm currently trying to evaluate a model of metabolism which aims to predict whether deleting individual genes will cause a growth defect (there are ~850 genes in total). I know from experimental data which genes show slow growth, so I'm mainly judging the model on what percentage of genes it correctly predicts. There are only two possible predictions for each gene, normal growth or reduced growth.
To try and prove the model's effectiveness (or otherwise), I'd like to compare it to a null hypothesis of "genes predicted at random". However, I'm not sure what the best form of this would be, or even if it's a sensible question.
A couple of possibilities that occurred to me were:
• The same number of growth defective genes are chosen, but they are assigned at random. (For example, if the model predicts 10 particular genes cause a growth defect if deleted but the rest show normal growth, the null hypothesis is that this is indistinguishable from picking ten genes at random)
• The number of genes causing defects is chosen at random, then they are distributed randomly as above.
The first one seems to use too much information for a completely random prediction, but the second could show high significance even for poor predictions, so I am not quite sure how to proceed...
Any help appreciated :)
PS. apologies if this is the wrong place to post - I think it's more of a statistics question than a biological one.
You should think about drawing some Receiver Operating Characteristic (ROC) curves, and calculating the Area Under the Curve (AUC) or c-statistic. I'm guessing only a small number of genes cause the defect, and there's some kind of threshold you can vary in your classification model - ROC/AUC is a useful methodology for measuring the performance of a classifier in this kind of situation. The AUC for a 'chance' predictor is 0.5.
If you do this, you should be aware of the methodology's limitations; there's a good paper by Cook (2007) in the journal "Circulation" on this. It's also a good idea to bootstrap the AUC statistic to work out if it's really significant.
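As an illustration of this workflow, here is a minimal Python sketch assuming the model emits a continuous score per gene; the labels and scores below are synthetic stand-ins, and scikit-learn's roc_auc_score does the AUC computation:

```python
# Sketch of the ROC/AUC-with-bootstrap idea; all data here are hypothetical.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=850)          # 1 = experimentally slow-growing
scores = y_true * 0.3 + rng.normal(size=850)   # stand-in for model scores

auc = roc_auc_score(y_true, scores)

# Bootstrap a confidence interval for the AUC; chance level is 0.5.
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(y_true), size=len(y_true))
    if y_true[idx].min() == y_true[idx].max():  # resample must contain both classes
        continue
    boot.append(roc_auc_score(y_true[idx], scores[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"AUC = {auc:.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```

If the bootstrap interval sits clearly above 0.5, the classifier is doing better than chance; as noted above, see Cook (2007) for caveats about relying on the AUC alone.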
The appropriate notion of random in your null hypothesis should depend on your model of prediction. If your model always predicts a fixed number of growth defective genes, then your first proposal is reasonable. However, if the number varies, you may want to model that variance and try to replicate that in your null hypothesis. If it is some complicated stochastic process, then you could for instance estimate the average number of predicted growth defective genes, its variance and then choose the notion of randomness in the null hypothesis to correspond to these first and second moments.
In particular, since the number of growth defective genes is discrete and non-negative, you want to look into using the geometric distribution. If all the genes have an identical probability $p$ of being defective, then you could think of looking at a gene, flipping a biased coin that is heads with probability $p$, and then labeling it as growth defective only if the result of the coin flip was heads. Alternatively, if the genes are not identical, then you can have a geometric distribution with a unique parameter for each defective gene (for a related problem, see the coupon collector problem).
EDIT: Since the geometric distribution only gives you one degree of freedom, the hypergeometric distribution might be more appropriate.
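To make the first proposal in the question concrete: if the model flags m genes as growth-defective and k of them are truly defective, out of K truly defective genes among N total, then under the "pick m genes at random" null the number of hits is hypergeometric, and the P-value is its upper tail. A sketch with invented numbers, using scipy.stats:

```python
from scipy.stats import hypergeom

N, K = 850, 120   # total genes; truly growth-defective genes (hypothetical)
m, k = 100, 35    # genes the model flags; flagged genes that are truly defective

# scipy parameterization: hypergeom.sf(x, M, n, N) with M = population size,
# n = success states in the population, N = number of draws.
p_value = hypergeom.sf(k - 1, N, K, m)   # P(X >= k) under random picking
print(f"P(at least {k} hits by chance) = {p_value:.3g}")
```

This is equivalent to a one-sided Fisher exact test on the 2x2 table of predicted versus true labels (scipy.stats.fisher_exact gives the same answer).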
You have given little detail about your metabolic model; if it happens to be fitted to some experimental data, you can use a nonparametric approach, i.e. compare your true model to a model built with the original methodology but having no information about your data.
It may work like this: you first build a bunch of models on a deliberately invalidated copy of the original data (the details depend on the data; you can, for instance, shuffle it), extract their predictions, and use them to get the null distribution of whatever performance measure you use.
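A minimal Python sketch of that recipe; fit_and_score is a hypothetical stand-in for the original model-building pipeline, passed in as a function:

```python
# Permutation null: refit the model on label-shuffled data and collect
# the resulting scores. `fit_and_score(X, y)` is assumed to rebuild the
# model from scratch and return a higher-is-better performance score.
import numpy as np

def permutation_null(X, y, fit_and_score, n_perm=200, seed=0):
    rng = np.random.default_rng(seed)
    observed = fit_and_score(X, y)
    null = np.array([fit_and_score(X, rng.permutation(y)) for _ in range(n_perm)])
    # Add-one smoothing so the empirical P-value is never exactly zero.
    p_value = (1 + np.sum(null >= observed)) / (1 + n_perm)
    return observed, null, p_value
```

The empirical P-value is simply the fraction of shuffled-data models that score at least as well as the real one.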
|
Question
# Among the following complexes, which one shows zero crystal field stabilization energy (CFSE)?
A
[Mn(H2O)6]3+
B
[Fe(H2O)6]3+
C
[Co(H2O)6]2+
D
[Co(H2O)6]3+
Solution
## The correct option is B, $$[Fe(H_2O)_6]^{3+}$$. For this complex the metal ion is $$Fe^{3+}$$, with outer electronic configuration $$3d^5$$; since water is a weak-field ligand the ion is high-spin, $$t_{2g}^{3}e_{g}^{2}$$ (one electron in each of the five d orbitals). The crystal field stabilization energy is $$[-0.4\times \text{(number of electrons in } t_{2g}) + 0.6\times \text{(number of electrons in } e_g)]\,\Delta_o = [-0.4\times 3 + 0.6\times 2]\,\Delta_o = 0$$.
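The same bookkeeping is easy to automate. Here is a small Python sketch that assumes the high-spin filling order, which is appropriate for weak-field aqua complexes of these first-row ions:

```python
# Minimal sketch: CFSE in units of Delta_o for a high-spin octahedral d^n ion.
# High-spin filling order (Hund's rule): three t2g electrons, two eg electrons,
# then pair up in the same order. Each t2g electron contributes -0.4 Delta_o,
# each eg electron +0.6 Delta_o.

def cfse_high_spin_octahedral(n_d: int) -> float:
    contribution = [-0.4] * 3 + [0.6] * 2          # one high-spin filling pass
    return sum(contribution[i % 5] for i in range(n_d))

for ion, n_d in [("Mn3+ (d4)", 4), ("Fe3+ (d5)", 5), ("Co2+ (d7)", 7)]:
    print(f"{ion}: CFSE = {cfse_high_spin_octahedral(n_d):+.1f} Delta_o")
# Mn3+ (d4): -0.6, Fe3+ (d5): +0.0, Co2+ (d7): -0.8
```

Only the high-spin d5 configuration balances the -0.4 Delta_o and +0.6 Delta_o contributions exactly, which is why [Fe(H2O)6]3+ is the answer.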
|
Article
Out-of-equilibrium phase transitions in the HMF model: a closer look
Dipartimento di Fisica, Università di Trieste, Trieste, Italy.
Physical Review E (Impact Factor: 2.29). 05/2011; 83(5 Pt 1):051111. DOI: 10.1103/PhysRevE.83.051111
Source: PubMed
ABSTRACT
We provide a detailed discussion of out-of-equilibrium phase transitions in the Hamiltonian mean-field (HMF) model in the framework of Lynden-Bell's statistical theory of the Vlasov equation. For two-level initial conditions, the caloric curve β(E) only depends on the initial value f(0) of the distribution function. We identify different regions in the parameter space where the nature of the phase transitions between magnetized and nonmagnetized states changes: (i) for f(0)>0.10965, the system displays a second-order phase transition; (ii) for 0.109497<f(0)<0.10965, the system displays a second-order phase transition and a first-order phase transition; (iii) for 0.10947<f(0)<0.109497, the system displays two second-order phase transitions; and (iv) for f(0)<0.10947, there is no phase transition. The passage from a first-order to a second-order phase transition corresponds to a tricritical point. The sudden appearance of two second-order phase transitions from nothing corresponds to a second-order azeotropy. This is associated with a phenomenon of phase reentrance. When metastable states are taken into account, the problem becomes even richer. In particular, we find another situation of phase reentrance. We consider both microcanonical and canonical ensembles and report the existence of a tiny region of ensemble inequivalence. We also explain why the use of the initial magnetization M(0) as an external parameter, instead of the phase level f(0), may lead to inconsistencies in the thermodynamical analysis. Finally, we mention different causes of incomplete relaxation that could be a limitation to the application of Lynden-Bell's theory.
• Article: Solvable Phase Diagrams and Ensemble Inequivalence for Two-Dimensional and Geophysical Turbulent Flows
ABSTRACT: Using explicit analytical computations, generic occurrence of inequivalence between two or more statistical ensembles is obtained for a large class of equilibrium states of two-dimensional and geophysical turbulent flows. The occurrence of statistical ensemble inequivalence is shown to be related to previously observed phase transitions in the equilibrium flow topology. We find in these turbulent flow equilibria two mechanisms for the appearance of ensemble inequivalence that were not observed in any physical system before. These mechanisms are associated respectively with second-order azeotropy (the simultaneous appearance of two second-order phase transitions), and with bicritical points (a bifurcation from a first-order to two second-order phase transition lines). The important roles of domain geometry, of topography, and of a screening length scale (the Rossby radius of deformation) are discussed. It is found that decreasing the screening length scale (making interactions more local) surprisingly widens the range of parameters associated with ensemble inequivalence. These results are then generalized to a larger class of models, and applied to a complete description of an academic model for inertial oceanic circulation, the Fofonoff flow.
Journal of Statistical Physics 11/2010. DOI:10.1007/s10955-011-0168-0 · 1.20 Impact Factor
• Article: The HMF model for fermions and bosons
ABSTRACT: We study the thermodynamics of quantum particles with long-range interactions at T=0. Specifically, we generalize the Hamiltonian Mean Field (HMF) model to the case of fermions and bosons. In the case of fermions, we consider the Thomas-Fermi approximation that becomes exact in a proper thermodynamic limit. The equilibrium configurations, described by the Fermi (or waterbag) distribution, are equivalent to polytropes with index n=1/2. In the case of bosons, we consider the Hartree approximation that becomes exact in a proper thermodynamic limit. The equilibrium configurations are solutions of the mean field Schrödinger equation with a cosine interaction. We show that the homogeneous phase, that is unstable in the classical regime, becomes stable in the quantum regime. This takes place through a first order phase transition for fermions and through a second order phase transition for bosons where the control parameter is the normalized Planck constant. In the case of fermions, the homogeneous phase is stabilized by the Pauli exclusion principle while for bosons the stabilization is due to the Heisenberg uncertainty principle. As a result, the thermodynamic limit is different for fermions and bosons. We point out analogies between the quantum HMF model and the concepts of fermion and boson stars in astrophysics. Finally, as a by-product of our analysis, we obtain new results concerning the Vlasov dynamical stability of the waterbag distribution.
Article: The quantum HMF model: I. Fermions
ABSTRACT: We study the thermodynamics of quantum particles with long-range interactions at T = 0. Specifically, we generalize the Hamiltonian mean-field (HMF) model to the case of fermions. We consider the Thomas–Fermi approximation that becomes exact in a proper thermodynamic limit with a coupling constant k ~ N. The equilibrium configurations, described by the mean-field Fermi (or waterbag) distribution, are equivalent to polytropes of index n = 1/2. We show that the homogeneous phase, which is unstable in the classical regime, becomes stable in the quantum regime. The homogeneous phase is stabilized by the Pauli exclusion principle. This takes place through a first-order phase transition where the control parameter is the normalized Planck constant: as it increases, the homogeneous phase passes from unstable through metastable to stable, while the inhomogeneous phase passes from stable through metastable before disappearing. We point out analogies between the fermionic HMF model and the concept of fermion stars in astrophysics. Finally, as a by-product of our analysis, we obtain new results concerning the Vlasov dynamical stability of the waterbag distribution, which is the ground state of the Lynden-Bell distribution in the theory of violent relaxation of the classical HMF model. We show that spatially homogeneous waterbag distributions are Vlasov-stable iff ε ≥ ε_c = 1/3 and spatially inhomogeneous waterbag distributions are Vlasov-stable iff ε ≤ ε_* = 0.379 and b ≥ b_* = 0.37, where ε and b are the normalized energy and magnetization. The magnetization curve displays a first-order phase transition at ε_t = 0.352 and the domain of metastability ranges from ε_c to ε_*.
Journal of Statistical Mechanics: Theory and Experiment 08/2011; 2011(08):P08002. DOI:10.1088/1742-5468/2011/08/P08002
|
## Duke Mathematical Journal
### Large-scale rank of Teichmüller space
#### Abstract
Suppose that ${\mathcal{X}}$ is either the mapping class group equipped with the word metric or Teichmüller space equipped with either the Teichmüller metric or the Weil–Petersson metric. We introduce a unified approach to study the coarse geometry of these spaces. We show that for any large box in ${\mathbb{R}}^{n}$ there is a standard model of a flat in ${\mathcal{X}}$ such that the quasi-Lipschitz image of a large sub-box is near the standard flat. As a consequence, we show that, for all these spaces, the geometric rank and the topological rank are equal. The methods are axiomatic and apply to a larger class of metric spaces.
#### Article information
Source
Duke Math. J., Volume 166, Number 8 (2017), 1517-1572.
Dates
Revised: 26 August 2016
First available in Project Euclid: 28 March 2017
https://projecteuclid.org/euclid.dmj/1490666574
Digital Object Identifier
doi:10.1215/00127094-0000006X
Mathematical Reviews number (MathSciNet)
MR3659941
Zentralblatt MATH identifier
1373.32012
#### Citation
Eskin, Alex; Masur, Howard; Rafi, Kasra. Large-scale rank of Teichmüller space. Duke Math. J. 166 (2017), no. 8, 1517--1572. doi:10.1215/00127094-0000006X. https://projecteuclid.org/euclid.dmj/1490666574
|
# String find and replace method optimization
I am trying to find a specific header string from different Maps (LEDES1998Bheaders, LEDES98BIheaders and LEDES98BI_V2headers) in an errorMessage, depending on the parcel type; if the errorMessage contains the specific header string, I need to replace it with the corresponding value.
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Pattern;

public class ErrorMessageConverter {

    private static Map<String, String> LEDES1998Bheaders = new HashMap<>();
    private static Map<String, String> LEDES98BIheaders = new HashMap<>();
    private static Map<String, String> LEDES98BI_V2headers = new HashMap<>();

    static {
        LEDES1998Bheaders.put("\\bUNITS\\b", "LINE ITEM NUMBER OF UNITS");
        // ... further header mappings elided in the original post
    }

    public static String toUserFriendlyErrorMessage(String parcelType, String message) {
        // The branch bodies were elided in the original post; per the discussion
        // below, each type applies its own map plus those of the formats it extends.
        if (parcelType.equals("LEDES1998B")) {
            message = updateErrorMessage(message, LEDES1998Bheaders);
        }
        else if (parcelType.equals("LEDES98BI")) {
            message = updateErrorMessage(message, LEDES1998Bheaders);
            message = updateErrorMessage(message, LEDES98BIheaders);
        }
        else if (parcelType.equals("LEDES98BI V2")) {
            message = updateErrorMessage(message, LEDES1998Bheaders);
            message = updateErrorMessage(message, LEDES98BIheaders);
            message = updateErrorMessage(message, LEDES98BI_V2headers);
        }
        return message;
    }

    private static String updateErrorMessage(String msg, Map<String, String> invHeaders) {
        Pattern pattern;
        for (String key : invHeaders.keySet()) {
            pattern = Pattern.compile(key);
            if (pattern.matcher(msg).find()) {
                // Replace every occurrence of the header with its friendly value.
                msg = pattern.matcher(msg).replaceAll(invHeaders.get(key));
            }
        }
        return msg;
    }
}
Below are a couple of sample error messages:
String errorMesage1 = "Line 3 : Could not parse inv_date value";
String errorMesage2 = "Line : 1 BaseRate is a required field";
Can this method be simplified further in Java 8 using filters/lambda expressions?
Your code looks fairly effective, but it could be a bunch more efficient if you rearrange the code a bit, do some pre-processing, and reduce your run-time checks. Let's go through that in order....
I recommend a 'set up' stage for your code. Consider an ideal situation: at runtime, the code would look something like this:
public static String toUserFriendlyErrorMessage(String parcelType, String message) {
    for (Rule rule : getRules(parcelType)) {
        message = rule.process(message);
    }
    return message;
}
That would be a neat solution, where you have a bunch of rules that you know apply to the messages of a particular type, and that you can then just apply whichever ones are appropriate. That would be the most efficient at use-time too.
How would you implement that? Using some pre-processing, of course, and a new Rule class.
This preprocessing is key, and the features you should look for here are:
1. the rules are sorted at pre-process time.
2. some rules are in multiple sets of operations.
3. Pattern instances are compiled just once, not at use-time.
Your rules are convenient in the sense that the parcelType values seem to have an 'extension' type system:
• LEDES1998B uses just the rules in LEDES1998Bheaders
• LEDES98BI is the same as LEDES1998B plus the rules in LEDES98BIheaders
• LEDES98BI V2 is the same as LEDES98BI (and thus LEDES1998B) but also includes LEDES98BI_V2headers
How would I recommend preprocessing things?
private static final class Rule {
    private final Pattern key;
    private final String replacement;

    Rule(String search, String rep) {
        key = Pattern.compile(search);
        replacement = Matcher.quoteReplacement(rep);
    }

    String process(String msg) {
        return key.matcher(msg).replaceAll(replacement);
    }
}
So, that rule class is simple enough... how to add them together?
private static final Map<String, Rule[]> RULE_MAP = buildMap();

private static final Map<String, Rule[]> buildMap() {
    Map<String, Rule[]> result = new HashMap<>();
    List<Rule> rules = new ArrayList<>();
    // The list is deliberately never cleared: each put() snapshots the rules
    // accumulated so far, so each type inherits the previous types' rules
    // (the addRules calls are reconstructed from the comments below).
    addRules(rules, LEDES1998Bheaders);
    result.put("LEDES1998B", rules.toArray(new Rule[rules.size()]));
    addRules(rules, LEDES98BIheaders);
    result.put("LEDES98BI", rules.toArray(new Rule[rules.size()]));
    addRules(rules, LEDES98BI_V2headers);
    result.put("LEDES98BI V2", rules.toArray(new Rule[rules.size()]));
    return result;
}
private static final void addRules(List<Rule> target, Map<String, String> sources) {
    for (Map.Entry<String, String> me : sources.entrySet()) {
        target.add(new Rule(me.getKey(), me.getValue()));
    }
}
Now, your last part is the getRules method:
private static final Rule[] getRules(String parcelType) {
    Rule[] rules = RULE_MAP.get(parcelType);
    if (rules == null) {
        throw new IllegalStateException("No rules set for type " + parcelType);
    }
    return rules;
}
Now your code should go smoothly.
• Thanks for the detailed solution, I really appreciate it. But if the parcelType is LEDES98BI I need to check both LEDES1998Bheaders and LEDES98BIheaders, and if the parcelType is LEDES98BI V2 I need to check all three maps (LEDES1998Bheaders, LEDES98BIheaders and LEDES98BI_V2headers). I think your solution checks only the specific Rules from the specific map as per the parcelType – RanPaul Apr 15 '15 at 15:09
• @RanPaul - I think you will find that it does not clear the rules array after loading the LEDES1998Bheaders rules, so they are also in the rules that are added to when the LEDES98BIheaders are processed – rolfl Apr 15 '15 at 15:12
• The order-of-processing is significant: each time you addRules(...) you are really adding more rules, so you are doing the same rules as the previous rule set, plus the new rules. Thus when you call result.put("LEDES98BI V2", rules.toArray(new Rule[rules.size()])); you will be adding all the rules from all rulesets when you process parcelType LEDES98BI V2 – rolfl Apr 15 '15 at 15:14
• I just tried your solution and its not updating error messages in getRules method am getting rules value as org.eclipse.debug.core.DebugException: com.sun.jdi.ClassNotLoadedException: Type has not been loaded occurred while retrieving component type of array – RanPaul Apr 15 '15 at 15:24
• I have been known to break things before, but my code could not have caused eclipse exceptions like that. Consider doing a 'clean' on your eclipse project.... you can compile everything successfully, right? – rolfl Apr 15 '15 at 15:26
|
## Riolku's Mock CCC S1 - Word Bot
Points: 3 (partial)
Time limit: 2.0s
Memory limit: 256M
Mosey Maker is practicing how to use words!
To help him with his words, he made up a bot to recognize them. However, his bot isn't that intelligent.
His bot recognizes words as a list of alphabetic characters. Mosey Maker's bot doesn't think long sequences of vowels or consonants are valid, so if more than C consonants or more than V vowels are seen in a row, his bot does not consider it a word. Note that the vowels are aeiouy, and the consonants are bcdfghjklmnpqrstvwxyz; y counts as both a consonant and a vowel.
Unfortunately, Mosey Maker lost his bot, and wants you to recode it.
Given a single word of N characters, is it valid?
#### Constraints
The word will only contain lowercase alphabetic characters.
The letter y will not appear in the word.
#### Input Specification
The first line will contain three integers, N, C, and V.
The next line will be the word Mosey wants you to check.
#### Output Specification
Output YES if the word is valid and NO otherwise.
#### Sample Input 1
12 3 3
onomatopoeia
#### Sample Output 1
NO
#### Explanation for Sample Output 1
Note that although it is a valid English word, onomatopoeia has too many trailing vowels to be a valid word.
#### Sample Input 2
8 2 4
aaybaaaa
#### Sample Output 2
YES
#### Sample Input 3
10 8 2
aayczttpqw
#### Sample Output 3
NO
#### Explanation for Sample Output 3
Note that since y is both a vowel and a consonant, aay is considered a string of vowels.
#### Sample Input 4
5 4 4
yyyyy
#### Sample Output 4
NO
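A linear scan suffices: track the current run of vowels and the current run of consonants, remembering that y extends both. The sketch below assumes the input order N, C, V (word length, maximum consonant run, maximum vowel run) reconstructed above, which matches all four samples; it is an illustration, not the official model solution.

def is_valid(word, max_consonants, max_vowels):
    vowels = set("aeiouy")
    consonants = set("bcdfghjklmnpqrstvwxyz")
    vrun = crun = 0  # current runs; 'y' extends both kinds of run
    for ch in word:
        vrun = vrun + 1 if ch in vowels else 0
        crun = crun + 1 if ch in consonants else 0
        if vrun > max_vowels or crun > max_consonants:
            return False
    return True

n, c, v = map(int, input().split())
word = input().strip()
print("YES" if is_valid(word, c, v) else "NO")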
|
## Algebra 2 (1st Edition)
Using the change of base formula to simplify the logarithm, we find: $$\frac{\log(30)}{\log(12)} \approx 1.37$$
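A quick numerical check of this answer (a sketch, not part of the textbook solution):

import math
print(math.log(30) / math.log(12))  # 1.3686..., which rounds to 1.37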
|
# On the uniqueness of invariant states
Federico Bambozzi, Simone Murro
June 24, 2019
Given an abelian group G endowed with a T-pre-symplectic form, we assign to it a symplectic twisted group *-algebra W_G and then we provide criteria for the uniqueness of states invariant under the ergodic action of the group of symplectic automorphisms. As an application, we discuss the notion of natural states in quantum abelian Chern-Simons theory.
|
# Conditions on expressing magnetic field in terms of curl of current density
Given a current density distribution $\mathbf J(\mathbf x)$ inside a closed bounded region $\Omega$, the magnetic field at any point $\mathbf y$ outside of $\Omega$ can be expressed as \begin{aligned}\mathbf B(\mathbf y)&=\frac{\mu_0}{4\pi}\int_\Omega\mathbf J(\mathbf x)\times\nabla_{\mathbf x}\frac{1}{|\mathbf x-\mathbf y|}d^3\mathbf x\\ &=\frac{\mu_0}{4\pi}\int_\Omega\left[\frac{1}{|\mathbf x-\mathbf y|}\nabla_{\mathbf x}\times\mathbf J(\mathbf x)-\nabla_{\mathbf x}\times\left(\frac{\mathbf J(\mathbf x)}{|\mathbf x-\mathbf y|}\right)\right]d^3\mathbf x\\ &=\frac{\mu_0}{4\pi}\int_\Omega\frac{1}{|\mathbf x-\mathbf y|}\nabla_{\mathbf x}\times\mathbf J(\mathbf x)d^3\mathbf x-\frac{\mu_0}{4\pi}\int_{\partial\Omega}\mathbf n(\mathbf x)\times\left(\frac{\mathbf J(\mathbf x)}{|\mathbf x-\mathbf y|}\right)d^2 S(\mathbf x) \end{aligned} where $\partial\Omega$ is the boundary of $\Omega$, $\mathbf n(\mathbf x)$ is the unit normal of $\partial \Omega$ and $S(\mathbf x)$ is the area of the surface element. Now, if the current density $\mathbf J(\mathbf x)$ is zero at the boundary $\partial\Omega$ (this can be achieved by slightly enlarging $\Omega$ if $\mathbf J(\mathbf x)$ is not zero at $\partial\Omega$), we can drop the second term on the last line, leaving \begin{aligned}\mathbf B(\mathbf y)&=\frac{\mu_0}{4\pi}\int_\Omega\frac{1}{|\mathbf x-\mathbf y|}\nabla_{\mathbf x}\times\mathbf J(\mathbf x)d^3\mathbf x \end{aligned}.
If the current density $\mathbf J(\mathbf x)$ is continuous and differentiable, the above conclusion should be correct. However, $\mathbf J(\mathbf x)$ might not be continuous in $\Omega$, e.g., infinite thin coils inside $\Omega$ carrying electrical current. Is the above derivation correct for $\mathbf J(\mathbf x)$ containing delta functions? What kind of singularities in $\mathbf J(\mathbf x)$ is permitted?
• I think that the above is always true, simply because the definition of the derivative of a distribution (such as a delta-function or a step function, which is how we describe the current configurations you're talking about) is done via a similar formula. The MathWorld article on distributions (aka "generalized functions") might be worth a look on your part. – Michael Seifert Oct 6 '15 at 23:47
• Thanks for bringing the reference. As you said, it is correct even if $\mathbf J$ contains delta functions, since it can be verified that $\int_\Omega f(\mathbf x)\nabla\delta(\mathbf x-\mathbf x_0)d^3\mathbf x=-\nabla f(\mathbf x)|_{\mathbf x=\mathbf x_0}$ for $\mathbf x_0$ in the interior of $\Omega$ for any differentiable function $f(\mathbf x)$. – Jasper Oct 7 '15 at 18:32
• Can you give an example of a singularity of the current? The current is considered a result of charge movement. As long as you can still talk about continuous movements of charges, $\mathbf{J}$ may always be treated as continuous. Unless you are thinking of quantum processes where sudden jumps happen at the microscopic level (in which case quantum electrodynamics should serve your purpose), I think you are safe. – Xiaodong Qi Oct 6 '15 at 21:44
• Sure. Let's assume $\mathbf J(\mathbf x)$ is a segment of line current, e.g., $\mathbf J(\mathbf x)=I_c\int_{\tau_1}^{\tau_2}\delta(\mathbf x-\mathbf x^\prime(\tau))\frac{d\mathbf x^\prime(\tau)}{d\tau}d\tau$, where $\delta(\mathbf x-\mathbf x^\prime)$ is the Dirac delta function, $I_c$ is the amplitude of the current, $\mathbf x^\prime(\tau)\in c\in \mathbb R^3$ is the parametric representation of the line segment $c$ with parameter $\tau\in[\tau_1,\tau_2]$. – Jasper Oct 6 '15 at 21:50
• In your case, the integral $\int_{t_1}^{t_2}\delta(x-x'(\tau))\frac{dx'}{d\tau}d\tau=\int_{x'(t_1)}^{x'(t_2)}\delta(x-x'(\tau))dx'=\mathrm{constant}$. Is that correct? – Xiaodong Qi Oct 6 '15 at 22:02
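A quick one-dimensional sanity check of the identity quoted in the comments above, $\int f(x)\,\delta'(x-x_0)\,dx=-f'(x_0)$, can be done by replacing the delta function with a narrow Gaussian (an illustrative sketch, not part of the original thread):

import numpy as np

x0, eps = 0.3, 1e-3
x = np.linspace(-1.0, 1.0, 400001)
f = np.sin(3 * x)                                    # smooth test function
delta = np.exp(-(x - x0)**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))
lhs = np.trapz(f * np.gradient(delta, x), x)         # ~ integral of f * delta'
print(lhs, -3 * np.cos(3 * x0))                      # both approximately -f'(x0)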
|
Tag Info
The trick is to start with the highest power, rewrite it as something you know (a third order moment) and then work backwards on the remaining terms. By that I mean you can complete the cube as follows: $$E[W_t^3 - 3tW_t|\mathcal{F}_s] = E[(W_t-W_s)^3 - C -3tW_t|\mathcal{F}_s]$$ where you'll need to find $C$ such that the equality holds (i.e. $C=W_s^3 + ...$
1
You can use that $f(t,W_t)\in C^2$ is a martingale iff: $$\partial_t f+\frac{1}{2}\partial_{WW}f= 0$$ We get: $$\partial_t f=-3W_t$$ $$\partial_{WW}f=6W_t$$ Finally: $$-3W_t+3W_t= 0$$ q.e.d. The proof of the theorem follows by writing out $f(t,W_t)$ via the Itô formula. Proof of theorem:
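The PDE check in the second snippet is mechanical enough to verify symbolically (a sketch, not part of the original answers):

import sympy as sp

t, W = sp.symbols('t W')
f = W**3 - 3*t*W
# f(t, W_t) is a martingale iff  df/dt + (1/2) d^2f/dW^2 = 0
print(sp.simplify(sp.diff(f, t) + sp.Rational(1, 2) * sp.diff(f, W, 2)))  # 0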
|
# Codesignal Solution: Poker Chips
Original Problem
Bart set up a circular poker table for his friends so that each of the seats at the table has the same number of poker chips. But when Bart wasn’t looking, someone rearranged all of the chips so that they are no longer evenly distributed! Now Bart needs to redistribute the chips so that every seat has the same number before his friends arrive. But Bart is very meticulous: to ensure that he doesn’t lose any chips in the process, he only moves chips between adjacent seats. Moreover, he only moves chips one at a time. What is the minimum number of chip moves Bart will need to make to bring the chips back to equilibrium?
Example
For chips = [1, 5, 9, 10, 5], the output should be
pokerChips(chips) = 12.
The array represents a circular table, so we are permitted to move chips between the last and the first index in the array. Thus Bart can bring the chips back to equilibrium with the following steps (1-indexed):
• move 2 chips from seat 2 to seat 1 (2 moves);
• move 3 chips from seat 3 to seat 2 (3 moves);
• move 3 chips from seat 5 to seat 1 (3 moves);
• move 4 chips from seat 4 to seat 5 (4 moves).
After this sequence of 12 moves, each seat will have 6 chips, and there is no sequence of fewer moves doing the same.
Input/Output
• [execution time limit] 4 seconds (py)
• [input] array.integer chips
The chip counts of each seat.
Guaranteed constraints:
0 ≤ chips.length ≤ 10^6,
0 ≤ chips[i] ≤ 100.
• [output] integer
The minimum number of moves required to restore the chip counts.
## Solution
We have a table with $$n$$ persons. Each person should have $$m$$ chips, but accidentally every person got $$\mathbf{c}_i$$ chips and our aim is to restore equilibrium with the least swaps. A swap can be made only with a direct neighbor on the table and only one chip at a time can be exchanged. It is obvious that $$m=\frac{1}{n}\sum\limits_{i=1}^{n}\mathbf{c}_{i-1}$$ for the zero-indexed vector $$\mathbf{c}$$.
We now define the overplus between two persons as a flow, where a positive flow means moving chips to the right and a negative flow means getting chips from the left. This means $$\mathbf{f}_0$$ flows from the last person $$\mathbf{c}_{n-1}$$ to the first person $$\mathbf{c}_0$$, $$\mathbf{f}_1 := \mathbf{f}_0 + \mathbf{c}_0 - m$$ flows from $$\mathbf{c}_0$$ to $$\mathbf{c}_1$$, $$\mathbf{f}_2 := \mathbf{f}_0 + \mathbf{c}_0 - m + \mathbf{c}_1 - m$$ flows from $$\mathbf{c}_1$$ to $$\mathbf{c}_2$$, and so on. This shows that the important information is how much flow is passed between the first and the last person. This value has to be optimal in the sense that it minimizes the total flow. Saying that the zero-indexed flow vector $$\mathbf{f}$$ describes the number of chips that must be moved from person $$(i-1)\bmod n$$ to person $$i$$, we can define a cost function, which shall be used to optimize for $$f_0$$:
$g(f_0; \mathbf{d}) = \sum\limits_{i=1}^{n} |\mathbf{f}_{i-1}|$
Now each person $$i$$ gets $$\mathbf{f}_i$$ chips from the left and hands $$\mathbf{f}_{(i+1)\bmod n}$$ to the right. From the definition of $$\mathbf{f}_i$$ and of the difference vector $$\mathbf{d} = m - \mathbf{c}$$, it follows that
$\begin{array}{rrl} &\mathbf{d}_i &= \mathbf{f}_i - \mathbf{f}_{(i+1)\bmod n}\\ \Leftrightarrow&\mathbf{f}_{(i+1) \bmod n} &= \mathbf{f}_i - \mathbf{d}_i \end{array}$
Putting this knowledge into our cost function yields
$\begin{array}{rl} g(f_0; \mathbf{d}) &= |\mathbf{f}_0| + |\mathbf{f}_1| + \dots + |\mathbf{f}_{n-1}|\\ &= |\mathbf{f}_0| + |\mathbf{f}_0-\mathbf{d}_0| + \dots + |\mathbf{f}_{n-2}-\mathbf{d}_{n-2}|\\ &= |\mathbf{f}_0| + |\mathbf{f}_0-\mathbf{d}_0|+|\mathbf{f}_0-\mathbf{d}_0-\mathbf{d}_1| + \dots + |\mathbf{f}_{0}-\sum\limits_{i=0}^{n-2}\mathbf{d}_{i}| \end{array}$
This is interesting. We know already from here how to minimize a sum of absolute values: Using the median! Therefore
$\begin{array}{rl} \mathbf{f}_0 &= \text{median}\left(0,\ \mathbf{d}_0,\ \mathbf{d}_0+\mathbf{d}_1,\ \mathbf{d}_0+\mathbf{d}_1+\mathbf{d}_2,\ \dots,\ \mathbf{d}_0 + \dots + \mathbf{d}_{n-2}\right) \end{array}$
Putting $$\mathbf{f}_0$$ now back into $$g$$ calculates the minimal number of swaps.
## Python Implementation
The mathematical derivation can be implemented in Python quite easily:
import numpy as np

def pokerChips(chips):
    a = np.array(chips)
    # Partial sums of (c_i - m): up to sign these are the flows f_i derived
    # above, and the last entry is ~0 because the total surplus vanishes.
    s = np.cumsum(a - a.mean())
    # The median of the partial sums minimizes the sum of absolute deviations,
    # which is exactly the cost function g.
    return np.abs(s - int(np.median(s))).sum()
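A quick check against the worked example above:

print(pokerChips([1, 5, 9, 10, 5]))  # 12.0, matching the twelve moves derived earlier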
|
# American Institute of Mathematical Sciences
June 2014, 7(3): 483-501. doi: 10.3934/dcdss.2014.7.483
## Traffic light control: A case study
1 Department of Mathematics, University of Mannheim, D-68131 Mannheim 2 School of Business Informatics and Mathematics, University of Mannheim, D-68131 Mannheim, Germany
Received May 2013 Revised August 2013 Published January 2014
This article is devoted to traffic flow networks including traffic lights at intersections. Mathematically, we consider a nonlinear dynamical traffic model where traffic lights are modeled as piecewise constant functions for red and green signals. The involved control problem is to find stop and go configurations depending on the current traffic volume. We propose a numerical solution strategy and present computational results.
Citation: Simone Göttlich, Ute Ziegler. Traffic light control: A case study. Discrete & Continuous Dynamical Systems - S, 2014, 7 (3) : 483-501. doi: 10.3934/dcdss.2014.7.483
|
# Gertler and Karadi (2011) estimation and observation equation
Dear Professor Pfeifer,
I am currently working on estimating the Gertler and Karadi (2011) paper entitled “A Model of Unconventional Monetary Policy”, and I am having trouble specifying the observation equations of the model. Could you please help me confirm whether the following observation equations are correctly written or not?
infl_obs = infl;
y_obs = Y - Y(-1);
i_obs = i ;
I_obs = I - I(-1);
I was trying to follow your guide for observation equations while applying the necessary transformation of the observable variables, but I am still confused. I am using quarterly data series and I did the following transformations:
Firstly, I seasonally adjusted: the log of RGDP, the log of real investment, the log of CPI, and the annualized interest rate. I took the log difference (×100) of RGDP, i.e. 100*[LN(RGDP_t) - LN(RGDP_t-1)]. Or should I use a one-sided HP filter on the seasonally adjusted log RGDP? (Knowing that the mean of both RGDP series after transformation is not zero.) I did the same for real investment and CPI, and I used the interest rate after dividing it by 4 to obtain a quarterly interest rate (in line with the other quarterly growth rates).
Attached is the GK mod file. The model is not working yet (inappropriate initial values, etc.…) but I just want to make sure that the observation equation and data variables are correctly specified and consistent with the structure of the model equation.
Hi,
1. I would recommend to solve for the steady state analytically and use a steady_state_model block.
2. All your model concepts that are matched with the data seem to have a zero steady state, except for the interest rate i. Therefore, demeaning the growth rates of real GDP and investment, and demeaning the inflation rate, would be required.
I agree with @Max1 that you need to demean your growth rates for them to be consistent with the model. You should never filter growth rates (unless you were doing indirect inference where it might work).
Regarding inflation and the interest rate, you did not specify if you are correctly dealing with the net/gross rate issues. Without the data file it is impossible to tell.
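A minimal sketch of the demeaned log-difference transformation under discussion (the helper name is hypothetical; it assumes a seasonally adjusted level series):

import numpy as np

def to_growth_obs(level_series):
    # 100 * log-difference gives the growth rate in percent; demeaning makes it
    # consistent with a model concept whose steady state is zero.
    g = 100 * np.diff(np.log(np.asarray(level_series, dtype=float)))
    return g - g.mean()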
I agree with @jpfeifer .
Since the interest and inflation rates are net rates (\approx log levels of gross rates) in the model, you should work with net rates in the data file as well.
Another issue is the scaling by 100. When specifying the prior for the standard deviations of the shocks you should take this into account.
Relative to the specification of the SW (2007) prior, your prior for the standard deviations looks somewhat extreme:
stderr e_a, inv_gamma_pdf, 5, 100;
stderr e_ksi, inv_gamma_pdf, 5, 100;
stderr e_g, inv_gamma_pdf, 5, 100;
stderr e_i, inv_gamma_pdf, 5, 100;
In your calibration you are assuming a standard deviation of \sigma_a=0.01.
@jpfeifer Thanks a lot Professor Pfeifer for your prompt reply; I forgot to attach the data file. As mentioned by @Max1, inflation in the data file is computed as a net rate in percent (i.e., the log difference of CPI, 100*[LN(CPI_t)- LN(CPI_t-1)]), but I am a little bit confused about the interest rate: I took the annual percentage rate (not the change or first difference) and divided it by 4 in the data file. Or should I use the interest rate in the data as the annual rate and specify the observation equation as follows:
i_obs = i_obs/4?
@Max1 If I understand your point concerning the scaling of the standard deviations of the shocks, I should adjust the initial values for the shocks to be consistent with the priors used?DataEst.xlsx (13.8 KB)
This is an infeasible equation for Dynare.
In the literature most people transform the interest rate in the data to quarterly rates (by /4) as you did.
Alternatively, if your interest rate in the model is a quarterly rate and you would like to have annualized interest rates in your data file
i_obs = i*4;
might be feasible.
You should think carefully about your priors. Currently, your prior is assuming that a typical shock to the economy is about 20% annualized with an annualized standard deviation of 400%.
@Max1 Thank you very much for your help. I tried demeaning all the observable variables (including the interest rate) and I adjusted the priors for the shock standard deviations, and finally the model works. I would highly appreciate it if you could kindly help me understand how I can use a steady_state_model block.
For the steady_state_model, you need to have solved the steady state with pencil and paper.
|
# Why is $K(X) \longrightarrow G(X)$ a Poincaré duality for K-theory?
It's well known that for Noetherian separated regular schemes the canonical map $$K(X) \longrightarrow G(X)$$ (Quillen uses $$K'$$ instead of $$G$$, though) is a weak equivalence.
This statement is usually called Poincaré duality.
One can also define $$K$$-theory with compact support (for sufficiently nice schemes $$X$$) by choosing a compactification $$X \hookrightarrow \overline{X}$$ and setting $$K_c(X)$$ as the homotopy kernel of $$K (X) \rightarrow K (\overline{X}\setminus X)$$. I have no idea whether such $$K_c$$ and $$K$$ enjoy some kind of Poincaré duality.
When I hear something like Poincaré duality I expect some kind of cap product map with some fundamental virtual class $$H^{\bullet} \longrightarrow H_{d -\bullet}^{BM}$$ or, dually, $$H_c^{\bullet} \longrightarrow H_{d - \bullet}$$. Of course, there's a cap product $$K(X) \wedge G(X) \longrightarrow G(X)$$ induced by tensor product which when restricted to tensoring with $$\mathscr{O}_X$$ gives the Poincaré duality.
However, I'm not satisfied with such analogy. I, hence, ask the following.
1) Is there any sense in which $$G$$ is a $$K$$-theory with compact support? Or maybe it's even the opposite: $$K$$ is a $$G$$-theory with compact support?
2) If yes, is there any relation between $$K_c (X)$$ and $$G(X)$$?
3) If no, is there any kind of duality between $$K (X)$$ and $$K_c (X)$$?
4) If I'm actually sounding silly since in ordinary Poincaré duality both sides of the isomorphism are always simultaneously of the same kind (compact or not compact, for instance, $$H_{\bullet}^{BM}$$ is somehow non compact as $$H^{\bullet}$$), how can I see the duality as some isomorphism from a cohomology to a homology? In other words, why $$K(X)$$ should be a cohomology theory and $$G(X)$$ a homology theory?
5) If one uses some Atiyah-Hirzebruch spectral sequence for $$G$$-theory, would it be the case that the graded pieces of the $$\gamma$$ filtration define a motivic cohomology with compact support up to torsion?
6) What about 5 for $$K_c$$ instead of $$G$$? What about $$G_c$$?
7) After applying the Atiyah-Hirzebruch sequence to all the possibilities ($$K$$, $$K_c$$, $$G$$, $$G_c$$) what sort of Poincaré duality one acquires?
• G-theory is like Borel-Moore homology. The simplest reason is its functoriality: it is covariant for proper maps and contravariant for etale maps, like BM homology. Your $K_c$ is not well-defined (now that we know K-theory satisfies pro-cdh descent, we can define $K_c(X)$, but this involves taking the limit over all nilpotent thickenings of $\bar X-X$). – Marc Hoyois Nov 9 at 15:34
• @MarcHoyois Thanks for the comment! It's somehow surprising that my $K_c$ is ill defined. What exactly will fail? If I recall correctly, Gillet seems to define the $K$-theory of a pair in the analogous way when proving higher GRR, but maybe I'm overlooking something... – user40276 Nov 9 at 21:04
• @Gasterbiter Thanks for the comment. That definition still seems weird to me, though. I guess it's just the best that one can do when something fails to be smooth (be it $\overline{X}$ or $\overline{X}\setminus X$). In any case, I suppose that by dévissage the naive definition is safe enough for $G$. – user40276 yesterday
• @user40276 yep. if u want to be fancy, $G$ has cdh descent. it's also fine for KH (homotopy K theory) for the same reason – Gasterbiter 22 hours ago
|
# Thread: Need help on Function Assignment :P
1. ## Need help on Function Assignment :P
Hello! I need help with some questions I have to do. Any help would be much appreciated.
A) Determine the inverse of y = 5x2^(1-3x) - 4
I got the answer:
(log2(x+4/5) - 1)/(-3) = y
That doesn't seem right to me. :/
B) What are the intercepts of; y = (x-a)^2(x+2a) ; a>0
I got to y= (x3+2ax2)-(a2x-2a3)
Not sure what to do now...
C) This is a weird one. :/ Explain how a relation could be restricted to a function.
Da fuq??? O_o
D) What transformations have been made to j(x) to get 2j(1-x)?
Would that be; dilated vertically by 2, translated 1 upwards, and reflected in the x-axis???
E) The Function, Point, and Asymptote of; ~ f1(x) = log2(x); (1,0); x=0; ~ when it is translated four units left, and one unit up???
Any help with any of the questions would be beyond fantastic!
2. ## Re: Need help on Function Assignment :P
Originally Posted by ShootMePlease
Hello! I need help with some questions I have to do. Any help would be much appreciated.
A) Determine the inverse of y = 5x2^(1-3x) - 4
I got the answer:
(log2(x+4/5) - 1)/(-3) = y
That doesn't seem right to me. :/
B) What are the intercepts of; y = (x-a)^2(x+2a) ; a>0
I got to y = (x^3+2ax^2)-(a^2x-2a^3)
Not sure what to do now...
C) This is a weird one. :/ Explain how a relation could be restricted to a function.
Da fuq??? O_o
D) What transformations have been made to j(x) to get 2j(1-x)?
Would that be; dilated vertically by 2, translated 1 upwards, and reflected in the x-axis???
E) The Function, Point, and Asymptote of; ~ f1(x) = log2(x); (1,0); x=0; ~ when it is translated four units left, and one unit up???
Any help with any of the questions would be beyond fantastic!
Before even trying to help, I need to point out that you should not use x to represent both a multiplication symbol and the letter x. How am I supposed to tell them apart? Use brackets or a dot for the multiplication.
Also, you must use brackets in the correct places, since x + 4/5 is $x + \frac{4}{5}$, while (x + 4)/5 is $\frac{x + 4}{5}$.
Assuming that you had made these changes, part A would be correct. Why don't you think it's right?
For B, why are you expanding (incorrectly btw)? To find x intercepts, let y = 0. To find y intercepts, let x = 0.
Since it's already factorised you can apply the null factor law once you have let y = 0.
For C, first of all we don't like swearing, even if it is censored. Then, what do you know about functions? Do you understand that a function is a mapping of two numbers, and works like a computer program, with numbers going in (the Independent Variable, usually x), and each number going in getting a number coming out (the Dependent Variable, usually y)? Therefore, a relation can only be a function if each number going in only has ONE possible value coming out. What could you do if your function is giving you multiple values for the Dependent Variable for particular values of the Independent Variable?
For D, you are almost correct. It's actually translated 1 to the left, not 1 upward.
For E, how can you write that function to take into account its transformations?
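For part B, the recipe above (set y = 0 for the x-intercepts, x = 0 for the y-intercept, and use the null factor law on the factorised form) is easy to check symbolically; a sketch using sympy, not part of the original thread:

import sympy as sp

x = sp.symbols('x', real=True)
a = sp.symbols('a', positive=True)  # a > 0 as stated in the problem
y = (x - a)**2 * (x + 2*a)
print(sp.solve(sp.Eq(y, 0), x))  # x-intercepts: [-2*a, a] (a is a double root)
print(y.subs(x, 0))              # y-intercept: 2*a**3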
|
Tag Info
6
The authors of the paper you mention, Erdős, Lacampagne, and Selfridge, define $p(m)$ to be the least prime divisor of $m$ and concern themselves with what can be said about $p(\binom{n}{k})$. I suspect that Selfridge wrote the article. It has his style of saying a lot in a succinct way which is puzzling but solvable with some thought on the part of the reader. ...
3
The conjecture as written is false: Let $N=194+(2*3*5*7*11*13)*2n$, $k=N-2$, where $n$ is a natural number. Then $C(N,k)=C(N,2)=(97+2*3*5*7*11*13*n)(193+2*3*5*7*11*13*2n)$, having no prime factors $\leq 13$.
2
Smooth random functions, random ODEs, and Gaussian processes (2018) describes an approach that takes a finite Fourier series on the interval $(0,1)$ with randomly chosen coefficients. The integral of this function approaches Brownian motion in the limit that the number $M$ of Fourier coefficients tends to infinity. The plot shows three such functions, for $M=...$
4
I think this is true. Let $b = a^{\frac{N-1}{2p}} = a^{2^{m-1}p^{n-1}}$, and note that we have $\frac{b^{p}+1}{b+1} \equiv 0$ (mod $N$). Now $a$ and $N$ must be coprime, so that $b$ and $N$ are coprime. We have $b^{2p} \equiv 1$ (mod $N$). Now $b^{p}-1$ and $b^{p}+1$ have gcd dividing $2$. However $\frac{b^{p}+1}{b+1}$ is always odd, so that $\frac{b^{p}+1}...$
9
The group $J_0(35)(\mathbb Q)$ (where $J_0(35)$ is the Jacobian of $X_0(35)$) has rank 0 (as shown for example by a 2-descent computation in Magma); it is isomorphic to ${\mathbb Z}/24{\mathbb Z} \times {\mathbb Z}/2{\mathbb Z}$, with generators the difference of the two points at infinity on $X_0(35)$ and the 2-torsion point corresponding to the ...
-6
My answer will focus on the last question, "what's the best known result of $x$"? Unfortunately there is no valid response so far, but there is an unpublished article of mine, titled "There Exist Infinitely Many Couples of Primes $(P,P+2n)$, with $2n \ge 2$ a Fixed Distance Between $P$ and $P+2n$". In it, I use linear Diophantine ...
0
I think there is some mistake in the post. (1) The quantity $$\zeta'(\Delta,0)$$ for a surface is not the Euler characteristic of the surface. Note that there is actually a small issue, as there are many Laplacians on a surface ($d\delta+\delta d, \Delta, \Delta_{\partial}, \Delta_{\partial^{*}}, \Delta_{n}^{+}, \Delta_{n}^{-}$, etc) and the sign usually ...
0
To find an answer to your question I suggest taking a moment of your time to read this link: https://vixra.org/abs/2007.0090 Here you will find a deep connection between the Goldbach conjecture and the binary version of the Möbius sum, using linear Diophantine equations and sieve methods.
13
Yes. Obviously this $c$ and $N$ are coprime. We get $c^{(N-1)/2}+1=(c^{(N-1)/6}+1)(c^{(N-1)/3}-c^{(N-1)/6}+1)$ is divisible by $N$. Therefore $c^{N-1}-1$ is divisible by $N$, and $N-1$ is divisible by $k:={\rm {ord}}(c)$, where ${\rm ord}(x)$ denotes the multiplicative order of $x$ modulo $N$. But $(N-1)/2$ is not divisible by $k$, since $c^{(N-1)/2}\equiv -...$
0
I add this as my contribution, just as a footnote of the answer and comments that were posted. My post isn't an answer for your question, just additional remarks that maybe are interesting in my view, thus you or the professors of this site MathOverflow feel free to comment if this isn't suitable as a contribution. The online encyclopedia Wikipedia has an ...
5
Corollary 2 of https://arxiv.org/abs/2007.11062 states that every sufficiently large odd integer is the sum of a prime and a practical number.
3
No, there cannot be 17 such numbers in arithmetic progression (and there cannot be 5 such numbers with the corresponding property for triples). Suppose we have such an arithmetic progression of length $k$, say $x,x+d,\ldots,x+(k-1)d$. I claim that if a prime $p$ divides any two of them then either it divides all of them (which cannot be the case), or else $p$ ...
10
0 = P(7) + P(10) + P(-11) = P(3250) + P(2293) + P(-3593) = P(6266) + P(13243) + P(-13695) = P(11700) + P(13277) + P(-15797) = P(37555) + P(131381) + P(-132396) = P(747511) + P(1059490) + P(-1171307) = P(5529835) + P(22681597) + P(-22790636) = P(8042677) + P(13682243) + ...
7
$a_n$ is composite for $4 \le n \le 2016$. $a_{2017}$ appears to be prime (it passes a strong pseudoprime test). I have not tried to certify that it is prime (this would take a while as the number has 5789 digits).
5
Let $P(x):=x^3-2x$. Then \begin{gather} 70=P(2714)+P(1367)+P(-2825),\\ 75=P(16333)+P(14200)+P(-19328),\\ 83=P(6714)+P(-6682)+P(-1627),\\ 86=P(6413)+P(3721)+P(-6806). \end{gather}
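Identities like these are mechanical to verify; for instance (a quick check, not part of the original snippet):

P = lambda x: x**3 - 2*x
print(P(2714) + P(1367) + P(-2825))  # 70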
1
So, for Sierpiński numbers of the Izotov type $(*)$, was a bigger covering set found between 1995 and 2015? No, it was not. And it is conjectured that none exists. In the Math Stack Exchange thread linked by Gerry Myerson in a comment to the question, I give other examples of Sierpiński numbers for which it is not likely that they should possess such a ...
0
The smallest interesting case of $k=2$ reduces to a family of Pell equations parameterized by $b$: $$(2c-1)^2 - b^3(2a)^2 = 1.$$ This gives infinitely many solutions. For example, for $b=2$, we have a series of solutions indexed by $n$: $$c_n + a_n\sqrt{8} = \frac{(17+6\sqrt{8})^n+1}2.$$ Numerical values of $c_n$ are listed in OEIS A055792.
1
You might be interested in extensions of the Sylvester–Schur theorem, which by your constraints shows that $c$ is bigger than $k^2$, as the set of consecutive integers in the product must have a single multiple of $q^2$ for some prime $q$ bigger than $k$. A paper of Saradha and Shorey from 2003, Almost Squares and Factorizations in Consecutive Integers, shows the ...
3
You may already know this, but numbers of the form $a^2b^3$ are called powerful numbers. A closely related question that might provide information on your question is to ask for binomial coefficients that are powerful. A Google search of "powerful number" and "binomial coefficient" brought up the following paper of Granville: On the ...
1
In practice, sieves have a hard time producing useful lower bounds for prime counting problems due to the so-called parity problem, a phenomenon that is very poorly understood. The best general "prime producing sieve" is due to Bombieri (though the formulation below is due to Friedlander and Iwaniec in this paper. The question is phrased as follows:...
2
Please check my post. It indicates that the number of divisors of $x^2+x+41$ is equal to the number of lattice points of $X^2+163Y^2-2(2x+1)Y-1=0$. The formula is transformed as follows. $$163X^2+163^2Y^2-2\cdot163(2x+1)Y=163$$ $$163X^2+\{163Y-(2x+1)\}^2-(2x+1)^2=163$$ $$163X^2+(163Y-2x-1)^2=4x^2+4x+164$$ $X':=163Y-2x-1,\ Y':=X$ and we divide both sides by ...
5
The Riemann xi-function decreases exponentially as $t\to\infty$, so it can't be universal. The decay comes from the fact that $$\xi(s) = \frac12 s (s-1) \pi^{-s/2} \Gamma(s/2) \zeta(s) .$$ If $s = \sigma + i t$ with $\sigma$ bounded and $t\to \infty$, then $\frac12 s (s-1)$ has polynomial growth, $\pi^{-s/2}$ is bounded, $\Gamma(s/2)$ decays exponentially, ...
This is a special case of a classical result: https://en.wikipedia.org/wiki/Niven%27s_theorem
Let $\theta= \arcsin(1/4)$. Assume $\theta$ is a rational multiple of $\pi$. Then there exists some $n$ such that $\sin(n\theta)=0$. This gives $\cos(n \theta)= \pm 1$. Set $z=\cos(\theta)+i \sin(\theta)$; then $z^n= \pm 1$ and $\frac{1}{z^n}=\pm 1$. This gives that $z$ and $\frac{1}{z}$ are algebraic integers, and hence so is $$2 i\sin(\theta)=z- \frac{1}{z}\ \ldots$$

Yes, $\arcsin(\frac14)/\pi$ is irrational. Suppose $\arcsin(\frac14)/\pi = m/n$, where $m$ and $n$ are integers. Then $\sin(n \arcsin(\frac14))=\sin(m \pi)=0$. We analyze this using the formulas from Bromwich as cited on MathWorld: $$\frac{\sin(n\arcsin(x))}{n}=x-\frac{(n^2-1^2)x^3}{3!} + \frac{(n^2-1^2)(n^2-3^2)x^5}{5!} - \cdots$$ …
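To make explicit how the first argument concludes (my own completion of the truncated text, not part of the quoted answer): with $\sin\theta = 1/4$ the displayed quantity is $i/2$, which cannot be an algebraic integer.

```latex
% My completion sketch: an algebraic integer must have a monic minimal
% polynomial with integer coefficients.
\[
  2i\sin(\theta) \;=\; z - \frac{1}{z} \;=\; 2i\cdot\tfrac14 \;=\; \frac{i}{2},
  \qquad \text{minimal polynomial: } x^2 + \tfrac14 \notin \mathbb{Z}[x],
\]
% so i/2 is not an algebraic integer, contradicting the assumption that
% theta = arcsin(1/4) is a rational multiple of pi.
```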
You could get it by brute force - the supersingular j-invariants have to lie in $\mathbb{F}_{2^2}$, so you can just check each of them.
Apologies for commenting in an answer when there is already an answer in the comments... When I was a PhD-student (in the 90's), we were discussing variations of the van der Waerden theorem on monochromatic arithmetic progressions. A question that came up was whether we can require the monochromatic sequence to consist of consecutive multiples of some number....
It depends on what you mean by $|\cdot|$, but probably no. If by $|\cdot|$ you mean the absolute value on $\mathbb C$, and your algebraic integers are elements of $\mathbb C$, then the answer is no. The logarithmic Weil height of $1-\sqrt3\in\mathbb A^1(\overline{\mathbb Q})$ is $\frac12\log(2)$, which is strictly bigger than $\log(\max(1,|1-\sqrt3|))=0$. (...
I have two comments for you, putting them as an answer because I don't have enough reputation for a comment: First, the notation $2\mid m$ should be very common, and it is definitely very common in German books. As an example, I own a copy of Algebra by Siegfried Bosch, first edition from 1992, and it uses this notation as well. Second, I think it is great …
$$ax^k + by^k = au^k + bv^k\tag{1}$$ where $a,b,x,y,u,v$ are integers. Case $k=3$: if equation $(1)$ has a known solution, then it has infinitely many integer solutions, as follows. Let $(x_0,y_0,u_0,v_0)$ be a known solution, and let $p,q$ be arbitrary. Substituting $x=pt+x_0,\ y=qt+y_0,\ u=pt+u_0,\ v=qt+v_0$ into equation $(1)$, we get $$t=\frac{-ax_0^2p+by_0^2q-au_0^2p-bv_0^2q}{\ldots}$$

Rewriting your equation as, for fixed $m$ and $k$, $$x^k + my^k = u^k + mv^k, \qquad x,y,u,v \in \mathbb{Z},$$ we see that this is of the form $F(x,y) = F(u,v)$ for a binary form $F$ of degree $k$, and defines a surface $X_F \subset \mathbb{P}^3$. Heath-Brown showed in this paper that if one deletes the rational lines on this surface, necessarily formed by …

To expand on GNiklasch's answer, and analyse what you write as well: we always have (when complex conjugation is central in the Galois group) $\overline{\alpha^{\sigma}} = {\bar \alpha}^{\sigma}$ when $\alpha$ is an algebraic integer in $K$ and $\sigma$ is an element of the Galois group of $K$. In general, there is no reason why $|\alpha|^{2}$ should be …

Zero is the only algebraic integer which has all its conjugates strictly inside the complex unit circle. (Look at the norm.) For explicit examples with conjugates on either side of the unit circle, you can start with a real quadratic field with a totally positive unit that isn't already a square in this field, such as $\varepsilon = 2+\sqrt{3}$. Then take …

By the inclusion-exclusion principle, the number of representations of $n$ as the sum of squares of four nonzero integers equals $$\sum_{k=0}^4 \binom4k (-1)^k r_{4-k}(n).$$ Formulae for $r_k(n)$ are given in this article at MathWorld. If one wants to further restrict the representations to positive integers, the above expression needs to be divided by $2^4=16$. (A quick numerical check of this identity appears after this answer block.) …

"If $S$ is congruentially equidistributed and contains enough elements … is it true that $S+S$ contains all the positive integers except a finite number of them?" Let $S=\bigcup_{n=1}^\infty \{2^{2n},2^{2n}+1,\dots, 2^{2n+1}-1\}$. It is easy to show that $S$ is congruentially equidistributed and $S+S\not\ni 2^{2n}$ for each positive integer $n$.

This number is expected to be transcendental. This answer gives a conceptual framework for studying the algebraicity of such $\Gamma$ ratios, and in fact a completely explicit criterion (which is only conjectural, when it comes to establishing transcendence). Your number is equal to $$\frac{\Gamma(2/5)^3}{\Gamma(1/5)^2 \Gamma(4/5)}$$ …

The values of $\eta(i\sqrt{6})$ and $\eta(i\sqrt{3/2})$ involve gamma function values with denominator $24$, so we have $$\eta(i\sqrt{6})=\frac{1}{2^{3/2}3^{1/4}}\big(\sqrt{2}-1\big)^{1/12}\frac{\Big(\Gamma\big(\tfrac{1}{24}\big) \Gamma\big(\tfrac{5}{24}\big) \Gamma\big(\tfrac{7}{24}\big) \Gamma\big(\tfrac{11}{24}\big)\Big)^{1/4}}{\pi^{3/4}}\,\eta(i \ldots$$
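The inclusion-exclusion count in the fourth answer above is easy to test numerically. A brute-force Python check (my addition; `r` computes the classical sum-of-squares counts $r_k(n)$ directly, so this only verifies small $n$):

```python
# Check: #{4-tuples of NONZERO integers with x1^2+...+x4^2 = n}
#        = sum_k (-1)^k C(4,k) r_{4-k}(n).
from itertools import product
from math import comb, isqrt

def r(k, n):
    """r_k(n): ordered representations of n as a sum of k squares
    (zeros and signs allowed), computed by brute force."""
    if k == 0:
        return 1 if n == 0 else 0
    return sum(r(k - 1, n - a * a) for a in range(-isqrt(n), isqrt(n) + 1))

def nonzero4(n):
    """Direct count with all four squares nonzero."""
    vals = [a for a in range(-isqrt(n), isqrt(n) + 1) if a != 0]
    return sum(1 for t in product(vals, repeat=4) if sum(a * a for a in t) == n)

for n in range(1, 30):
    assert sum((-1) ** k * comb(4, k) * r(4 - k, n) for k in range(5)) == nonzero4(n)
print("identity verified for 1 <= n < 30")
```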
Here I provide some insights about conjecture B. First, it is still a conjecture, and just like the paradox that I discussed here, it defies empirical evidence: the error term in the approximation involves $\log$ and $\log \log$ functions (see here) so you would need to use insanely large numbers to see convergence to uniform distribution in residue classes, ...
The answer is yes. As was observed, the condition of being pre-Pell is simply the stipulation that $k$ is a sum of two squares: that is, if $p \mid k$ and $p \equiv 3 \pmod{4}$ then $p$ must divide $k$ with even multiplicity. If we assume $k$ is square-free, then it is divisible only by $2$ or primes congruent to $1 \pmod{4}$. We now work on the equivalences. …
The claim is true as stated, and can be arrived at using Weyl's criterion, which was pointed out in the comments. As I post this answer, there is no consensus on the rate of convergence in $N$. We proceed by induction on $k$, the number of polynomials. For $k=1$, either $p_1$ has only rational coefficients or it has at least one irrational coefficient. If it has an …
I have nothing to add to the Fourier-type approach suggested in the question, but for those curious, I thought it useful to outline the combinatorial solution to the problem that I know (I believe this is the same as the official IMO solution, and I claim no originality). One thing to add is that, although induction is a crucial part of the proof, we do not use …
Yes, I have not known Deligne, Kazhdan and Vigneras to lie. A sketch of the proof, at least with the key details for GL(2), is given in Lecture V of Steve Gelbart, Lectures on the Arthur–Selberg Trace Formula. Added remarks: In that lecture, Gelbart addresses two kinds of simple trace formulas. The one you are asking about is essentially Prop 2.1. …
On a slightly cheeky note, it seems that the shortest path to the best available bound, subject to peer review, is outlined in the following paper of Bloom-Sisak! Congratulations.
I'll take a stab at this. As pointed out in the comments, define $\chi_S$ to be the characteristic vector of a set $S$ in $\mathbb{Z}_p$; one can then obtain the characteristic vector $\chi_{S+T}$ of the multiset $$S+T = \{s+t \mid s \in S, t \in T\}$$ as the (cyclic) convolution $$\chi_{S+T} = \chi_S \ast \chi_T.$$ Now using the Fourier …
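As a quick illustration of the convolution identity above, here is a minimal Python sketch (my addition; the prime and the sets are arbitrary examples) that computes the multiset multiplicities of $S+T$ in $\mathbb{Z}_p$ with an FFT and checks them against direct counting:

```python
# Representation counts of S + T in Z_p via cyclic convolution (FFT).
import numpy as np

p = 17
S = {1, 2, 5, 11}
T = {0, 3, 7}

chi_S = np.zeros(p); chi_S[list(S)] = 1   # characteristic vectors
chi_T = np.zeros(p); chi_T[list(T)] = 1

# (chi_S * chi_T)[n] = #{(s, t) in S x T : s + t = n (mod p)}
conv = np.fft.ifft(np.fft.fft(chi_S) * np.fft.fft(chi_T)).real.round().astype(int)

for n in range(p):
    assert conv[n] == sum(1 for s in S for t in T if (s + t) % p == n)
print(conv)  # multiset multiplicities of S + T
```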
$2k=1+p_N$ works for $N>1$, but $2k\le 0.56 \, p_N$ will fail if $p_{N+2}=p_{N+1}+2$. With $q=p_{N+1}$, we have $$\frac{1}{1-q^{-2k}} < \frac{1}{1-x^{-2k}} = \frac{1}{1-q^{-2k}} \prod_{p>q} \frac{1}{1-p^{-2k}} .$$ It follows that $$q^{-2k} < x^{-2k} < q^{-2k} + \sum_{j\ge 2} (q+j)^{-2k} < q^{-2k} +\frac{1}{(q+1)^{2k-1}(2k-1)}.$$ Taking ...
As pointed out in the question, we have that $$\prod_{p<Q}\left(\frac{x-1}{p}+1\right)=\sum_{k=0}^{\pi(Q)}\Pr_{n\in\mathbb{N}}[\omega_Q(n)=k]x^k,$$ which can be derived by showing that on both the RHS and the LHS the coefficient of $x^k$ is equal to $$\sum_{\substack{S\subseteq \{p<Q\} \\ |S|=k}} \left(\prod_{p\in S}\frac{1}{p}\right)\left(\prod_{p\notin S}\ldots\right)$$ …

We have $$\frac n{\varphi(n)}=\prod_{p\mid n}\bigl(1-p^{-1}\bigr)^{-1} \le2\prod_{\substack{p\mid n\\p\ge3}}\frac32 =2\prod_{\substack{p\mid n\\p\ge3}}3^{\log_3(3/2)} \le2\prod_{\substack{p\mid n\\p\ge3}}p^{\log_3(3/2)} \le2n^{\log_3(3/2)}$$ (where $p$ runs over primes), hence $$\varphi(n)\le m\implies n\le(2m)^{(1-\log_3(3/2))^{-1}}=(2m)^{\log_23}.$$ (A quick numeric check of this bound appears after this answer block.) Using …

This is just a comment on the accepted answer, but it is too long for the comment box. It is not meant to detract from the excellent accepted answer but only contains one or two suggestions that might save a little time. (Feel free to delete.) Perhaps it is worth mentioning that saying $^p\mathrm{H}^0 i^*\mathcal F=0$ is the same as saying that $f^{-1}(v_0)$ does not …

A. Sárközy, A note on the arithmetic form of the large sieve, Studia Sci. Math. Hungarica 27, 1992, 83--95, covers the literature until then (there are some later results, adapting this as needed). I. Ruzsa, On the small sieve. II. Sifting by composite numbers, Journal of Number Theory 14 (2), 1982, 260--268.

Since $m$ is a dummy variable (i.e. a bound variable) and $n,n'$ are "real" variables (i.e. they are free), perhaps we should rewrite the problem accordingly as "compute the following $$f(y,z) = \#\left\lbrace (x_1,\ldots, x_y)\mid x_1 + \cdots + x_y = m,\ x_i \in \mathbb{N},\ m \leq yz,\ i < j \implies x_i \leq x_j \right\rbrace.$$" We can …
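The totient bound in the second answer above is easy to sanity-check numerically. A small Python sketch (my addition; brute-force $\varphi$, so only small $m$ are tested):

```python
# Numeric check of: phi(n) <= m  implies  n <= (2m)**log2(3).
from math import gcd, log2

def phi(n):
    # Euler's totient by brute force; fine at this scale
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

for m in (1, 2, 6, 10, 30):
    bound = (2 * m) ** log2(3)
    worst = max(n for n in range(1, int(bound) + 2) if phi(n) <= m)
    print(f"m={m}: largest n with phi(n)<={m} is {worst} <= bound {bound:.1f}")
    assert worst <= bound
```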
# MPP Solar Units
62 Open Circuit Voltage (V_OC). Suitable for 12 V or 24 V battery systems only. It can also accurately track the MPP in any situation, which improves energy efficiency and harvests the maximum solar energy. MPP voltage range (U_mpp min to U_mpp max): 320-800 V; number of DC connections: 2. With 600 to 3600 Wp in solar panels, connections for 12, 24 and 48-volt battery banks, and an integrated MasterBus connection, this Solar ChargeMaster is perfect for large and medium-sized systems. Hi Will, I watched your videos and went through the detailed info. "Multi-MPP" is advertised, but in actuality there is only one real maximum power point at any given moment. The V15, V50, V75, and V88 batteries charge efficiently from solar and have an "Always On" mode which keeps them on whether or not a device is drawing any power. MPP SOLAR 800 W pure sine wave power inverter with a 40 A MPPT solar charger, 12 V DC, 110/120 V AC output, and a 20 A utility charger, 50 Hz or 60 Hz: this is the unit to get. ABOUT SOLAR POWER SYSTEM 1. We are known throughout the solar industry for offering some of the most economical, reliable, and user-friendly solar water pump systems on the market. A suitable connection cable (length: 5 m) is delivered with the unit. Committed to quality and innovation, REC offers photovoltaic modules with leading high quality, backed by an exceptionally low warranty claims rate of less than 100 ppm. With 6 domes operating in sequence … Innovative, high-quality solar solutions to help you thrive off-grid. One parallel kit is required for each unit in a parallel system; it contains a parallel board, a comm cable, and a current-sharing cable. NOTE: these parallel kits can only be guaranteed to work with "MPP Solar" in… The electric grid is an effectively unlimited energy supply. 27 %/°C; temperature coefficient of I_SC: 0.… Plan large or small projects. Planning for the next five years is a long and complicated process. Hence, the efficiency of this system under variable solar irradiance is low, equal to 96.… Trina is able to provide exceptional service to each customer in each market and supplement … While the grid-tie solar inverter system is mainly used in parallel with the traditional utility grid, the solar …
All we have to do is find the current through the controller by using power = voltage × current (a short sketch follows this block). They also work very well with the Reich e-Box. This is not an off-grid system; it requires a normal electrical connection. We offer two major types of solar energy systems: Off-Grid Solar System vs. … Because the solar module only delivers its highest yield at this point, a connected regulator must be able to find this point and hold it continually, even under changing conditions. Rated input voltage: 620 V at 380/400 Vac; 720 V at 480 Vac. Number of MPP trackers: … Budget, available roof or ground space and other factors will heavily influence your choice of solar panel kits. NREL's new cost model can be used to assess the costs of utility-scale solar-plus-storage systems and help guide future research and development to reduce costs. Q.ANTUM solar half cells. Welcome to Voltacon Solar: we believe in the power of the sun. Hell, it's even running an 8000 BTU AC unit. We want to help the world reach net zero and improve people's lives. Efficiency is around 94% to 97% for the MPPT conversion on those. The LV5048 (5 kW, 48 Vdc) is one of the three models in our MPP Solar inverter family that can support "split phase" output for use in the US, Canada, and Puerto Rico. Separately Derived System: a premises wiring system whose power is derived from a battery, a solar photovoltaic system, or from a generator, transformer, or converter windings, and that has no direct electrical connection, including a solidly connected grounded circuit conductor, to supply conductors originating in another system. More than 40% more energy with decentralized per-panel MPPT if substantial partial shading occurs during the production period. Shunt-mode solar charge controller. CLEAN POWER. SCM60 MPPT-MB. All Fronius IG Plus V inverters include a lockable code-compliant DC disconnect with a built-in 6-circuit fused string. In a grid-tied system, solar panels connect directly to an inverter which ties into your main household electrical panel. We offer complete solutions for the widest range of applications in the market, from batteries to chargers and accessories, mobile power products, and residential or commercial solar systems. The following diagram shows the major components in a typical basic solar power system. Solar constant: the strength of sunlight, 1353 watts per square meter in space and about 1000 watts per square meter at sea level at the equator at solar noon. Understanding how the Solar Electric (PV) Incentive Program works and the importance of choosing and working with a certified, eligible installer are key factors to successfully installing a PV system. Solar Inverters: we offer you the right device for each application: for all module types, for grid connection and feeding into stand-alone grids, for small house systems and commercial systems in the megawatt range.
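That current = power / voltage arithmetic in a few lines of Python (my addition; the wattage and voltage are illustrative, loosely echoing the 800 W, 12 V unit mentioned above):

```python
# Current through a charge controller from power and voltage: I = P / V.
def controller_current(power_w: float, voltage_v: float) -> float:
    return power_w / voltage_v

panel_power_w = 800.0   # illustrative array output at the operating point
battery_v = 12.0        # illustrative nominal battery-side voltage

amps = controller_current(panel_power_w, battery_v)
print(f"{panel_power_w:.0f} W at {battery_v:.0f} V -> {amps:.1f} A")  # 66.7 A
```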
These instructions will get you a copy of the project up and running on your local machine for development and testing purposes. PV ASSOCIATE CREDENTIAL. 48 V 1500 W inverter. The grain surfaces (film surface and grain boundary) of polycrystalline perovskite films are vulnerable sites in solar cells since they pose a high de… SHS 1000-P solar energy system equipped with Grade A solar panels. Come join the world's largest independent renewable energy company active in wind, solar, energy storage and transmission and distribution. Guangzhou Sanjing Electric Co. An offline voltage-based MPPT technique capable of tracking the MPP has been selected because of the numerous advantages it offers, such as simplicity and low cost of implementation. Short circuit current (I_SC): 8.0. Temperature coefficient: 0.024 %/°C. REC is a leading global provider of solar electricity solutions. This solar power heat system will provide heat for pennies per hour, with up to 90% or more of the energy coming from the sun. Every model of our inverter has a specific solar controller rating, and it determines how much maximum solar power it can deliver. To extract the maximum power, you must adjust the load to match the current and voltage of the solar panel. Since the output from an array of PV panels … No. 1 module supplier for quality and performance/price ratio in the IHS Module Customer Insight Survey. The OptiTrac operation control fulfills this task for SMA inverters, ensuring maximum energy production. Controlling these costs is imperative to the profitability of a project. Solar power inverters play an equally important role in a solar system: they convert the electricity your solar panels create into a form that can be used by the appliances, lighting, and other electronics in your home. With industry-leading service and equipment, CED Greentech helps solar contractors install top-quality solar PV systems, gain competitive advantage and grow their business. In particular, the power curve for thin-film modules (amorphous silicon (a-Si), cadmium telluride (CdTe)) is much flatter over the array voltage and makes it more difficult to determine the MPP. Because of the nonlinear characteristics of a PV cell, the maximum power can only be extracted under particular weather conditions. Support up to 8 units in parallel to expand. PIP 1212MSD (Taiwan, MPP SOLAR).
One of a solar engineer’s tasks is to determine the efficiency of a solar panel system. ABOUT SOLAR POWER SYSTEM 1. 100 A, with the power, P = 0. Off grid. There are several brands of grid-tie only (that is, no battery) inverters available. rv boondocking with solar I'm a full time RV'r who travels the country installing Solar Systems on all makes and models of RV's. This charging option can operate at up to 80 amperes (Amp) and 19. It will increase energy security through reliance on an indigenous, inexhaustible and mostly import-independent resource, enhance sustainability, reduce pollution, lower the costs of mitigating global warming, and keep fossil fuel prices lower than otherwise. The operating system is optimized to run programs for work, school, and. Current at the MPP Short circuit current Boltzmann’s constant Number of solar cells connected in parallel Number of solar cells connected in series Charge of an electron Series resistance of a solar cell Shunt resistance of a solar cell Irradiation in W/m2 Absolute temperature Output voltage of the PV cell/array Voltage at the MPP. The best in it's day LR1218 is now massively out performed by these newer units. However, in practice, solar PV systems that use central inverters are the least efficient. MPP : Ability of the MPP tracker to consistently find the maximum power point. Prepare solar installation project proposals, quotes, budgets, or schedules. Starting on March 22nd, all smart inverters deployed with solar and energy storage installations in California are required to be capable of communications. Whenever you have these two values you can figure out the wattage by multiplying them together. The light pole should be the length 4-6m, the thickness 2. Surprisingly the Fronius Primo inverter has become a key feature in many off-grid solar or stand-alone power systems. Free Publisher: Imperial Innovations Downloads: 3. One example is SunPower Corporation — a German company that creates photovoltaic power systems — that designed a PV power plant with an east-west single-axis tracking system. The current output of the PV system can therefore be significantly lower than it would have been due to shading. Why LG? LG is a global icon of excellence in the electronics industry with over six decades of success and more than 30 years of experience in the solar industry. 5, 1 m/w WindSpeed Nominal Power P MPP (W) 87. MPP-2020-164 Searches for Supersymmetry in multi-leptonic and di-tau final states with the ATLAS detector at \sqrt{s}=13 TeV at the LHC, Johnnes Junggeburth, , TU Munich, Munich (2020-07-29). View and Download MPP Solar 2424LV-HS user manual online. SE7K-RWS Hybrid. Efficiency 99. If you need an off-grid solar heating system, please see our pure-solar/DC heat pump heating and cooling system DC4812VRF. Find many great new & used options and get the best deals for MPP Solar PIP5048MG 5000W Inverter Charger at the best online prices at eBay! Free shipping for many products!. According to specifications of M88H_122, the string inverter is equipped with dual MPP trackers with power generation efficiency of up to 98. Planning for the next five years is a long and complicated process. The MPP may change due to external factors such as temperature, light conditions and workmanship of the device. It was supplied and built by Australian firm Solar Systems and Ergon Energy designed the systems to integrate the solar farm’s output into the overall power distribution system. 
The real solar array is influenced by various weather conditions such as irradiation, temperature, rain and shade by trees or clouds, which will affect the I-V. Therefore, battery management is significant to the functional. Off grid. The MPP is a voltage and current corresponding to the panel’s highest obtainable output power. To track the maximum power point (MPP) of the solar PV, you can choose between two maximum power point tracking (MPPT) techniques:. looking to put together a solar micro grid for the summer to run a windows unit, then maybe selling it when summer is over. We regularly feature resources for project managers to help train PMs to land jobs in the industry or develop better skills in their current role. Whether you need a solar inverter, solar battery, or other renewable energy product, OutBack is the choice for your system. The system worked well by sharing solar, batteries and mains power. Try our Remote Area Power Supply (RAPS) system calculator to determine your stand alone solar energy system requirements and suitable components. The go, the IDC25 is shock, vibration and dust proof. Therefore, battery management is significant to the functional. All of these have built in MPPT. In particular, such inexpensive PV modules that are used for grid-connected systems can also be used off-grid. See full list on altestore. Delta’s solar inverter product line is suitable for a wide range of applications. Maximum System Voltage V SYS (V) 1500 Limiting Reversing Current I R (A) 4. There are so many of the MPP all in one models with variations, I am a little lost. solar panels up to 30 to 80% cheaper than the competition MON-FRI (9AM to 5PM) 15901 NW 59TH AVENUE MIAMI LAKES, FL 33014 OFFICE: (786) 565-9359 OWNER: (305) 322-1086. Trying to build a 5kWh off-grid system. solar cell--See 'Photovoltaic cell. solar cell--See 'Photovoltaic cell. Committed to quality and innovation, REC offers photovoltaic modules with leading high quality, backed by an exceptional low warranty claims rate of less than 100ppm. Solar panels can be an expensive endeavour which is why you need to find a reputable company who is dedicated to solar power. MPPT harnesses this power from a solar panel even as the amount of illumination varies. Version number:TSH_EN_2018_AUS_A Mounting Mounting Array 6kW 1 x 4 modules x 6 Max. 1500 WATT SOLAR PANELS MPPT CHARGE CONTROLLER KIT | Use as a base or add on kit. The purpose of the MPPT is to adjust the solar operating voltage close to the MPP under changing atmospheric conditions. 4KVA Solar Inverter (3200W) Ecoloblue 30l Water Generator; Canadian Solar 265W Panel; Canadian Solar 325W Panel; MPP 2KVA (1600W) Inverter; MPP 1KVA (800W) Inverter; 250W Solar Panel; 300W Solar Panel; Kema 2Kva (1500W) Inverter; FIAMM 180ah Gel Battery 12V. In the simplest terms, this funky sounding feature ensures that your solar panels are always working at their maximum efficiency, no matter what the conditions. auto skimmer. In a grid-tied system, solar panels connect directly to an inverter which ties into your main household electrical panel. The rule of thumb says that the MPP will be around 80% of the string's VOC; 183v * 0. 3V V MPP we can extrapolate and see that the V OC is approximately 6. Rooftop radiative cooling system produces renewable power at night Electric buses stored in London garage to become ‘virtual power station’ Solar and wind make record gains in 2019 producing 10 per cent of global power. 
For CPV System installations in the US, the UL listing of CX-AD2 provides an advan-tage. Our purpose is reimagining energy for people and our planet. MPP SOLAR 800w Pure Sine Wave Power Inverter with mppt Solar Charger 40A DC 12V AC Output 110V 120V with 20A Utility Charger 50HZ or 60HZ this is the unit to get. The effects of radiation on I-V and P-V characteristics are shown in Fig. ABB Solar PVS-166/175-TL-US. Every installed solar light is an active contribution to the reduction of CO 2 emmissions. Hybrid Solar system, its components, advantages, and disadvantages. Shop our selection of top brands like Samlex, Exeltech, MorningStar, and more. Back-up power source for home, cabin, or RV Ideal for running computers, lights and other electronics items. 12, the INC algorithm takes longer to reach the new MPP in case of sudden change in solar irradiance. The operating system is optimized to run programs for work, school, and. 0mm with material iron or steel, the diameter 50-90mm. a central processing unit (CPU; usually a DSP) for controlling the power electronics and measuring PV and grid variables, and a communications processor (CP; ARM or similar) for the outward-facing interfaces. 8K FULL with 2 secondary units ab Lager Deutschland per sofort verfügbar. With these controls in place, the PV array can best use the available power at certain levels of solar irradiation. solar thermal electric systems — Solar energy conversion technologies that convert solar energy to electricity, by heating a working fluid to power a turbine that drives a generator. The heat that builds up in your car when it is parked in the sun is an example. The controller enhances battery lifetime with overdischarge protection and by forcing an equalizing charge every 7 weeks to fully charge all the battery. Shanghai Metal Corporation (SMC) is one of the largest china's manufacturer and supplier. Prepare solar installation project proposals, quotes, budgets, or schedules. This can be shown from following figures, solar radiation increased to the current and power which from solar panels. Take the power produced by the solar panels and divide by the voltage of the batteries. Again , assuming a fully depleted battery, you would need 20 amps average charging current (100 Ah/5 hours). usage: mpp-solar [-h] [-c COMMAND] [-D] [-I] [-d DEVICE] [-b BAUD] [-M MODEL] [-l] [-s] [-t] [-R] [-p] MPP Solar Command Utility optional arguments: -h, --help show this help message and exit -c COMMAND, --command COMMAND Command to run -D, --enableDebug Enable Debug and above (i. In particular, the power curve for thin-film modules (amorphous silicon (a-Si), cadmium telluride (CdTe)) is much flatter over the array voltage and makes it more difficult to determine the MPP. How Maximum Power Point Tracking works. Fronius inverter review article provides you authentic data from an unbiased point of view to mature your trust on Fronius brand. All of these have built in MPPT. Trina Solar now distributes its PV products to over 60 countries all over the world. In this page we have uploaded the most recent firmwares and software updates for: Infinisolar, Axpert, Conversol and Voltasol Inverters and battery chargers. ECEN2060 MATLAB/Simulink tutorial. REC Group is an international pioneering solar energy company dedicated to empowering consumers with clean, affordable solar power in order to facilitate global energy transitions. This charging option can operate at up to 80 amperes (Amp) and 19. This unit does it all for me for on grid and off grid. 
9V MPP voltage, the system will still be creating 88% of maximum power. The size of DC cabling you use with your solar panel array or wind turbine system is very important. Normally, installation of single inverter would make us expect large mismatch losses. MPP SOLAR 2400w Pure Sine Wave Power Inverter 80A MPPT Solar Charger DC 24V AC Output 110V 120V with 60A Utility Charger 50HZ or 60HZ Brand: MPP SOLAR 3. The fuse-free 24 input, 12 independent MPPTs eliminates the need for external string combiners and adds redundancy to ensure maximum energy harvest while the 800 VAC output requires smaller conductors, reducing CAPEX cost. auto skimmer. This is a key concept for your PMP preparation related to Project Cost Management. Maximum efficiency exceeds 98%. On Off Grid10kw Hybrid Solar Inverter 48v Mpp Solar Hybrid Inverter 3 Phase Inverter Solar Power System , Find Complete Details about On Off Grid10kw Hybrid Solar Inverter 48v Mpp Solar Hybrid Inverter 3 Phase Inverter Solar Power System,On Off Grid,10kw Hybrid Solar Inverter,3 Phase Inverter Solar Power System from Inverters & Converters Supplier or Manufacturer-Foshan Ouyad Electronic Co. Under cloudy sky conditions when light intensity is continually changing, MPPT controllers can improve energy harvest by up to 30% compared with those using older PWM technology, so upgrade to MPPT technology today to get better. Started in 2012 NevonProjects an initiative by NevonSolutions Pvt. Max 145V Voc input and 30V-80V MPPT range ☑ Full feature LCD program allows users access to AC and. Solar power inverters play an equally important role in a solar system: they convert the electricity your solar panels create into a form that can be used by the appliances, lighting, and other electronics that are in your home. The DC load is connected across the boost converter output. Realistically, i don't think i will get a solar panel installed right away, so i was wondering. The suggested approach adapts the operati. Free Curriculum for K-12. input current (I dc max 1 / Idc max 2) 27. Send a letter to your MPP. I am confused. Teamcenter® software is a modern, adaptable product lifecycle management (PLM) system that connects people and processes, across functional silos, with a digital thread for innovation. Multiple String Installations In multiple string installations the shading effects can be much larger. 917 – Liu Yan declared himself emperor, establishing the Southern Han state in southern China, at his capital of Panyu (present-day Guangzhou). Bright Hub's Linda Richter explains the two Project files you must create that will help you with every. 1 Gantt Chart / Project Schedule Vertex42's gantt chart template is a great tool for project scheduling and project tracking. The rated power of the PV / Solar Module in Watts (Pmax) is derived from the above values of voltage Vmp and current Imp at this Maximum Power Point (MPP): Rated power in Watts, Pmax = Vmp x Imp. Strengths of the PWM charge controller design include the fact that it is built on a time-tested and proven technology, is inexpensive–a single unit capable of handling 25 amps can be purchased for less than$100–and is durable. FOB Price: US $30-300 / Piece Min. Solar cell efficiency refers to the portion of energy in the form of sunlight that can be converted via photovoltaics into electricity by the solar cell. To improve the overall efficiency, 45 2. Trina is able to provide exceptional service to each customer in each market and supplement. 
Most of these faults go undetected until home owners receive an electricity bill spike. 11 - Solar Energy Systems Engineers. And even worse, about 14% of the country’s solar systems develop a major fault every year and stop working altogether. On Off Grid10kw Hybrid Solar Inverter 48v Mpp Solar Hybrid Inverter 3 Phase Inverter Solar Power System , Find Complete Details about On Off Grid10kw Hybrid Solar Inverter 48v Mpp Solar Hybrid Inverter 3 Phase Inverter Solar Power System,On Off Grid,10kw Hybrid Solar Inverter,3 Phase Inverter Solar Power System from Inverters & Converters Supplier or Manufacturer-Foshan Ouyad Electronic Co. Most small-scale solar systems for homes and small businesses will include anywhere from 6 to about 30 panels, although the ‘size’ of a system is usually referred to by its capacity (in kilowatts – e. Compared to a regular 72 cell 300 watt panel this panel is 15-20% shorter and lighter. It is comprised of manageable building blocks for single- or three-phase systems ranging from 4 kW to 36 kW. To us, that means going above and beyond just providing the best Canada Proof products to people and businesses in Kamloops. The Arduino tries to maximize the watts input from the solar panel by controlling the duty cycle to keep the solar panel operating at its Maximum Power Point. 900w solar. Perform start-up of systems for testing or customer implementation. Plan large or small projects. If the PV power is still above load demand, the frequency will go back up to remove the excess power. (In Progress) Skilled with a variety of solar design software, including Aurora, PVsyst, PVsol, Helioscope, OpenSolar, Sketchup, nearmap. In this page we have uploaded the most recent firmwares and software updates for: Infinisolar, Axpert, Conversol and Voltasol Inverters and battery chargers. 1 Gantt Chart / Project Schedule Vertex42's gantt chart template is a great tool for project scheduling and project tracking. Make an Inquiry for DC Solar Water Pump Controller, Pumping System at OKorder. Rated Input Voltage: 620 V @380 Vac/ 400 ; 720 V @480 Number of MPP trackers. Stand-alone PV systems are used in areas that are not easily accessible or have no access to an electric grid. Order: 1 Piece. If you need an off-grid solar heating system, please see our pure-solar/DC heat pump heating and cooling system DC4812VRF. REC Group is an international pioneering solar energy company dedicated to empowering consumers with clean, affordable solar power in order to facilitate global energy transitions. When using solar panels to charge XOs, the output of the panel varies according to the amount and intensity of the sun. This video shows a more complex solar meter, which has been installed by Energex. What is the basic setup? Solar Panel Charges Battery - Battery Stores and Supplies Power - Runs Arduino. 2 m Connectors: MC4 connectable (4 mm²) electRical data @ noct Rec295pe 72 Rec300pe 72 R ec305pe 72 Rec310pe 72 Rec315pe 72 Nominal Power - P MPP 214 217 221 225 229 Nominal Power Voltage - V MPP (V) 29. 48v battery charger solar unit mppsolar mpp solar hybrid inverter mpp solar pip5048 invert solar. Wire chart for connecting 12 Volt solar panels to the Charge Controller: This chart shows wire distances for a 3% voltage drop or less. Because of its wide input voltage range of 230V to 600V, it is well-suited for use in numerous solar systems. Solar position can be measured either by a sensor (active/passive) or through the sun position. 
The rule of thumb says that the MPP will be around 80% of the string's VOC; 183v * 0. 900w solar. The inverters' integrated LCD displays and records over 20 inverter and system operation parameters. I found that in the section of solar charger, the maximum charging current/power (5048MG) is 80A/4500W, and the system DC voltage is 48V. It must also address the grid integration problems that led to high curtailment rates of wind and solar power over the last five year plan period. A SolarEdge system, with module-level MPP trackers, would in this case enable the production of 9*10% + 1*7% = 97% of full power. In our shop you'll find: Solar Panels (framed & semi-flexible), Charge Controllers (PWM, MPPT, Dual), Solar Inverters, Batteries, Camper Kits, PV solar Kits, Pico PV systems, Solar Refrigerators, Wind Generators, Installation material, DC Loads, Water Pumps and many more. View and Download MPP Solar PIP-MS 1-5KVA user manual online. Wide MPP operating voltage range. radiation amount. ) 3 % pH of aqueous extract Alkaline Moisture (when packing) (max. Sun position and the optimum inclination of a solar panel to the sun vary over time throughout the day. A report titled Efficient East-West Orientated PV Systems with One MPP Tracker [1] examined WE-oriented system with one inverter and 1 MPP. The V15, V50, V75, and V88 batteries charge efficiently from solar and have an "Always On" mode which keeps them on whether or not a device is drawing any power. 1 INTRODUCTION. Usage of CX-AD1 in the US is also possible, requires however a field certifica-tion. Three different configurable input voltage ranges make the Sunny Boy 700-US a versatile choice, whatever your system configuration. imum power point. - Power tolerance of +/-3% minimizing PV system mismatch losses. Solar Turbines provides gas turbine packages and services for oil and gas and power generation industries, including gas compressor restage and overhaul, service parts, gas turbine overhaul, machinery management, technical training, modular solutions, and microgrid energy storage solutions. 11 - Solar Energy Systems Engineers. Inverters made for off-grid solar power systems. 02 9030 0636. This method of MPPT estimates the MPP voltage by measuring the temperature of the solar module and comparing it against a reference. All Fronius IG Plus V inverters include a lockable code-compliant DC disconnect with a built-in 6-circuit fused string. Acronym Definition; MPT: Maryland Public Television: MPT: Modern Portfolio Theory (investing) MPT: Maison pour Tous (French: Home for All; est. 2 out of 5 stars 250. However, since conventional MPP trackers only monitor the vicinity of the current operating point, an alternative operating point may not be noticed in order to avoid unnecessary loss during the searching procedure. PIP-MS 1-5KVA inverter pdf manual download. Canadian Solar is committed to providing high quality solar products, solar system solutions and services to customers around the world. It must also address the grid integration problems that led to high curtailment rates of wind and solar power over the last five year plan period. This feature protects utility workers who might be working to restore power to the area. These Solar panels have Warranty of 10 years and 25 years life …. water gun for fun. The size of DC cabling you use with your solar panel array or wind turbine system is very important. Das Angebot beinhaltet 26 Stk. PIP-HSMS-PF1-Series; Solar Deep Cycle Batteries and Solar Lithium Batteries; Solar DC Pump / AC Pumps. 
In particular, such inexpensive PV modules that are used for grid-connected systems can also be used off-grid. The best in it's day LR1218 is now massively out performed by these newer units. The challenge is that the electric water heating system can be used day and night. All of these have built in MPPT. The current output of the PV system can therefore be significantly lower than it would have been due to shading. Order: 1 Piece. Solar-Cell Current Solar-Cell Power C001 Point 1 Point 2 I_MPP V_MPP Input Voltage Dynamic Power Management www. 0 A DC input voltage range (U dc min - U dc max) 80 - 1,000 V Feed-in start voltage (U dc start) 80 V Usable MPP voltage range 80 - 800 V Number of DC connections 2 + 2. The Arduino tries to maximize the watts input from the solar panel by controlling the duty cycle to keep the solar panel operating at its Maximum Power Point. Measure V cell, I cell Compare P (t+1) – P (t) >0 Start Record Power (P) Perturb duty cycle Maximum Power Point as operating point P t Perturb duty. It powers a mining operation that specializes in fluorite extraction and processing, without any grid service to power its operations. 2 m Connectors: MC4 connectable (4 mm²) electRical data @ noct Rec295pe 72 Rec300pe 72 R ec305pe 72 Rec310pe 72 Rec315pe 72 Nominal Power - P MPP 214 217 221 225 229 Nominal Power Voltage - V MPP (V) 29. Bought to use for medical equipment for power outages. The included PERC solar cells are efficient and help make the panel smaller and lighter. Solar Hot Water (Solar Thermal) Installers - Eligible Installers are those who have demonstrated technical competence in the Solar Thermal field. With industry-leading service and equipment, CED Greentech helps solar contractors install top quality solar PV systems, gain competitive advantage and grow their business. It is comprised of manageable building blocks for single- or three-phase systems ranging from 4 kW to 36 kW. Each solar cell has a point at which the current (I) and voltage (V) output from the cell result in the maximum power output of the cell. 41 shipping. However, on this system each panel has a DC-DC optimizer. Committed to quality and innovation, REC offers photovoltaic modules with leading high quality, backed by an exceptional low warranty claims rate of less than 100ppm. This is because there are also other factors that must be considered, like the number of strings per MPP input, mismatch losses, shading, shading, and soiling, etc. To place the light pole into the best spot with good day lighting. INVERTER / CHARGER (800W-4KW). 5 MW combustion turbine. Support up to 8 units in parallel to expand. 8 percent of the solar system's mass and is roughly 109 times the diameter of the Earth — about on. A characteristic of. Also for: 4kva, 5kva, 1kva, 2kva, 3kva, 1kva 48v, 1kva 24v, 2kva 48v, 3kva 48v, 2kva 24v, 3kva 24v. Get a free quote. Order: 1 Piece. You may also choose to add Tilt Bars for increased output in Winter and a tube or two of Dicor sealant to seal the roof where the mounting feet and roof combiner box are placed. By the end of the workshop students will have: Taken a tour through the solar system, learning about each planet along the way. This paper presents a new approach of Adaptive Search Control Method to perform maximum power point tracking (MPPT) in solar panels (SP). This review provides an overview of organic solar cells. +1-212-401-1192 Sign in Register. We have a standalone 1kW solar panel at our campus. 
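Several fragments in this section describe perturb-and-observe (P&O) MPPT: measure the panel voltage and current, compare successive power readings, and keep perturbing the converter duty cycle (or operating voltage) in whichever direction raises power. A minimal Python simulation of that loop (my addition; the PV curve is a toy stand-in, not a datasheet model):

```python
# Minimal perturb-and-observe MPPT sketch (illustrative only).
def pv_power(v, v_oc=20.0, i_sc=5.0):
    """Toy PV curve: current falls off sharply near open-circuit voltage."""
    if v <= 0 or v >= v_oc:
        return 0.0
    return v * i_sc * (1 - (v / v_oc) ** 8)  # assumption: toy model

v, step = 5.0, 0.2          # start below the MPP; fixed perturbation step
p_prev = pv_power(v)
for _ in range(200):
    v += step               # perturb the operating point
    p = pv_power(v)
    if p < p_prev:          # power dropped: reverse perturbation direction
        step = -step
    p_prev = p

print(f"settled near V = {v:.1f} V, P = {pv_power(v):.1f} W")
```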
To track the maximum power point (MPP) of the solar PV, you can choose between two maximum power point tracking (MPPT) techniques:. New solar inverter M125HV from Delta for large ground-mounted solar PV installations (July 08, 2019) Delta launches data collector and cloud-based monitoring solution for solar PV systems (June 07, 2019). Project managers are the point person in charge of a specific project or projects within an organization. Again, select a wire that is rated appropriately. Lines or degrees of latitude are approximately 69 miles or 111 km apart, with variation due to the fact that the earth is not a perfect sphere but an oblate ellipsoid (slightly egg-shaped). The event will take a deeper look at race, cannabis, and policing in America with an afternoon of thoughtful discussion amongst America’s leading civil rights and cannabis activists, including former NBA star Al Harrington and Canadian cannabis entrepreneurs and Houseplant co. The modeling has been done in MATLAB ® /SIMULINK simulation environment, and MPPT technique is developed and implemented by taking a variable resistance as a load. SOLAR WARE 2500 is one of the largest central PV inverter in the 1500V power class. ABSTRACT Maximum PowerPoint (MPP) is a point in V-P characteristics where entire PV system can work with maximum efficiency and deliver maximum output. The expandable kit includes high quality crystalline solar panels, long-live batteries, solar charge controller with MPP Tracking, pure sine wave inverter and all other components for safe installation. The Solar Data Extender combines the data from up to three solar controllers so that they can be shown on a single display. 4kW Solar System Inverter Specifications Your Origin8400 system will be supplied with the following inverter(s): ð•Delta Solivia 5. Reconfigurable photovoltaic (PV) systems are of grea t interest with respect to system designers in order to improve the system’s e fficiency and operation. This solar power heat system will provide heat for pennies per hour with up to 90% or more of the energy coming from the sun. alternator (9–32V) inputs, IDC25 can also function as an. How Maximum Power Point Tracking works. number of inputs. Suitable for 12V or 24V nominal PV array systems only. Types of Off-Grid systems • PV direct – Loads are run directly from the solar source – No energy storage (batteries) – Loads typically motors (pumps, fans, etc. Marlec has distributed for Steca in the UK for many years giving us a wealth of experience with their range. QUALITY AND RELIABILITY-. Click and find out more!. Work with Solar Panels. Call 831-462-8243 for Free Quote. Although photovoltaic (PV) efficiency has increased over time, solar irradiance and temperatures can fluctuate dramatically in deep space. Historically, the facility has used diesel generators to keep the power on. In addition, as opposed to micro inverters, power optimizers can track a module’s MPP at voltages as low as 5 volts , a specification allowing SolarEdge to optimize module performance even. mdl MATLAB script to find maximum power point and plot PV cell characteristics: findMPP. Update of my mpp solar system. Plan and keep track of your projects including visually tracking progress with Excel data bars. rv boondocking with solar I'm a full time RV'r who travels the country installing Solar Systems on all makes and models of RV's.$ mpp-solar -h. 
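The stray `$ mpp-solar -h` above belongs to the open-source `mpp-solar` Python command-line utility whose help text is quoted in the same block. A hedged usage sketch, using only flags that appear in that help text (the serial device path is an assumption, and `QPIGS` is the customary general-status query on these inverters; check your model's protocol):

```
$ mpp-solar -h                         # print the help text quoted above
$ mpp-solar -d /dev/ttyUSB0 -c QPIGS   # general status query; device path assumed
```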
The XW SCC can be used with 12, 24, 36, 48, and 60-volt DC battery systems and is able to charge a lower voltage battery system from a higher voltage solar array. Solar position can be measured either by a sensor (active/passive) or through the sun position. Therefore, maximum power point tracking (MPPT) techniques are used to maximize the output power of PV array, continuously to track the maximum power point (MPP), which depends on atmospheric temperature and solar insolation. SOLAR WARE 665E(J) is a highly engineered 1000V PV inverter suitable for mega-solar power plant deployment. The latter allows us to save cost by not purchasing another inverter and, like we saw in Figure 1, we can still harvest more energy during off-peak hours. Both 50Hz and 60Hz load can be supported through LCD programming ☑ Built-in max 80A 3-stage MPPT solar charger with up to 2000W PV output. Thermal Systems ; Thermal systems capture the Sun's heat energy (infra red radiation) in some form of solar collector and use it to mostly to provide hot water or for space heating, but the heat can also used to generate electricity by heating the working fluid in heat engine which in. REC Group is an international pioneering solar energy company dedicated to empowering consumers with clean, affordable solar power in order to facilitate global energy transitions. 00; Narada NDT200 Solar Battery - 200Amp/Hour, 12V R 6,400. The solar PV system operates in both maximum power point tracking and de-rated voltage control modes. Over a course of 25 years, CO 2 emissions of approximately 2 tonnes are avoided compared to modern, but conventional lights. Other advantages include increased energy production due to module-specific MPP tracking and the ability to add on to a system incrementally. Specializing in solar inverters and mobility systems, it has over 1,100 employees worldwide and offers a comprehensive solar solutions portfolio across all applications. Inverter: Inverter efficiency : Mis-sized inverter: If the inverter is undersized, power is clipped for high intensity light. 00; Shoto Battery 6-FMX-200 R 6,995. Multiply distances by 4 for a 48 volt system. https://www. I understand the solar panel connection and the battery connection but I do not understand the load connection on the ProStar charge controller. 1,171 mpp solar inverter products are offered for sale by suppliers on Alibaba. Many of these units operate at up to 30 Amps, delivering 7. 9 Current at P MAX I MPP (A) 1. So my technician used the arguments that my 1. It is the manufacturer of the Tesla Powerwall 2, arguably the leading residential energy storage solar battery solution on the market. It was supplied and built by Australian firm Solar Systems and Ergon Energy designed the systems to integrate the solar farm’s output into the overall power distribution system. Latitude lines run horizontally on a map. MPP Solar Pure Sine Wave Inverters-Long period UPS. 0 A DC input voltage range (U dc min - U dc max) 80 - 1,000 V Feed-in start voltage (U dc start) 80 V Usable MPP voltage range 80 - 800 V Number of DC connections 2 + 2. Rated Input Voltage: 620 V @380 Vac/ 400 ; 720 V @480 Number of MPP trackers. It powers a mining operation that specializes in fluorite extraction and processing, without any grid service to power its operations. What are Network Diagrams? Network Diagrams in project management are a visual representation of a project’s schedule. Solar algorithm will always maximize energy harvest by locking to the optimum MPP. Solar Panel Comparison. 
|
Spinor-Unit Field Representation of Electromagnetism Applied to a Model Inflationary Cosmology [PDF]
Patrick L. Nash
The new spinor-unit field representation of electromagnetism
\cite{Nash2010} (with quark and lepton sources) is integrated via minimal
coupling with standard Einstein gravitation, to formulate a Lagrangian model of
the very early universe. The solution of the coupled Euler-Lagrange field
equations yields a scale factor $a(t)$ (comoving coordinates) that initially
exponentially increases $N$ e-folds from $a(0) \approx 0$ to $a_{1} = a(0) {e}^{N}$ ($N$ = 60 is illustrated), then exponentially decreases, then
exponentially increases to $a_{1}$, and so on almost periodically. (Oscillatory
cosmological models are not new, and have been derived from string theory and
loop quantum gravity.) It is not known if the scale factor escapes this
periodic trap.
This model is noteworthy in several respects: $\{1\}$ All fundamental fields
other than gravity are realized by spinor fields. $\{2\}$ A plausible
connection between the \emph{unit} field $\mathbf{u}$ and the generalization of
the photon wave function with a form of Dark Energy is described, and a simple
natural scenario is outlined that allocates a fraction of the total energy of
the Universe to this form of Dark Energy. $\{3\}$ A solution of an analog of
the pure Einstein-Maxwell equations is found. This approach is in contrast with
the method followed to obtain a solution of the well known Friedmann model of a
|
# Autocovariance of a non-stationary process
I'm just going to apologize first thing, because I know my understanding of these topics is very lacking.
I'm reading some lecture notes from what appears to be an econometrics course, and they are going over the stationarity of processes. In the course of defining stationarity, they provided the following definition of the autocovariance function:
$$\gamma(s,t) = Cov(X_s,X_t)$$
They went on to say that for a stationary process, we have the following:
$$\gamma_X(s,t) = \gamma_X(s+h,t+h) \quad \forall s,t,h \in \mathbb{Z}$$
and that because of this property, we can rewrite the autocovariance function as
$$\gamma_X(h) = Cov(X_t, X_{t+h}) \text{ for } t,h\in\mathbb{Z}$$
I am only familiar with the latter definition of autocovariance. I am confused as to what could be meant by the former, in the case that $$\{X_t\}$$ is a non-stationary process. Because we're dealing with time series, does it make sense to say "the covariance of $$X_t$$ and $$X_s$$?" There will only be one realization of $$X$$ at time $$t$$ or $$s$$, and furthermore only one realization of $$X$$ that necessarily has the same distribution as $$X_t$$, so how can we speak of the covariance of $$X_s$$ and $$X_t$$?
I'm sorry if this is worded in a confusing way.
• They're referring to the covariance of the value of the time series at time $s$ and the value of the time series at time $t$. The use of the term stationary is confusing ( because there are a few different kinds ) but, by that equality, they are saying that the covariance is only a function of the difference between the two time subscripts and not the time subscripts themselves. With some other conditions added on ( constant mean of process and constant and finite var of process ), this is referred to as wide-sense stationarity. Jun 20, 2020 at 3:51
• I should have also said that the two different ways they wrote the expression for covariance ( first case using s and t and second case using h ) are equivalent as long as the wide-sense stationarity condition is satisfied. Jun 20, 2020 at 3:53
The general form $$\gamma(s,t)$$ refers to the covariance between the value of the series at times $$s$$ and $$t$$ when those values are considered as random variables. That is, it is defined by:
$$\gamma(s,t) \equiv \mathbb{E} \Big[ (X_s -\mathbb{E}(X_s))(X_t -\mathbb{E}(X_t)) \Big].$$
In general, the random variables $$X_s$$ and $$X_t$$ (for $$s \neq t$$) can have any joint distribution --- unless it is an assumption of your analysis, you should not assume that they have the same marginal distribution. Regardless, it is possible that these two different random variables are positively or negatively correlated, and the general form of the autocovariance function captures this for any pair of time values. Note also that this covariance refers to the random variables representing the values of the time series at these two points --- once those values are observed they are then treated as constants and are no longer "correlated".
As you correctly note in your question, once you assume that the process is "covariance stationary", this function depends only on the lag $$|s-t|$$ and so you can reduce the autocovariance function to a univariate function of the lag between the two times. This is a common assumption in time-series analysis, but it does not always hold, so it is useful to start by considering the more general case first.
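To make this concrete, here is a small simulation sketch (my own illustration, not from the original lecture notes): it estimates $$\gamma(s,t)$$ across many independent realizations of a random walk, a standard non-stationary process, and shows that the covariance depends on the actual times $$s$$ and $$t$$ rather than only on the lag.

```python
import numpy as np

rng = np.random.default_rng(0)

# Many independent realizations of a random walk (cumulative sums of
# iid N(0,1) steps) -- a classic non-stationary process.
n_paths, n_steps = 100000, 50
X = np.cumsum(rng.normal(size=(n_paths, n_steps)), axis=1)

# Estimate Cov(X_s, X_t) across realizations at a fixed lag t - s = 10.
# Here X[:, s] is the sum of s + 1 unit-variance steps, so theory says
# Cov(X_s, X_t) = min(s, t) + 1: it depends on the times, not just the lag.
for s in (5, 20, 35):
    t = s + 10
    c = np.cov(X[:, s], X[:, t])[0, 1]
    print("s=%2d, t=%2d: estimated Cov = %6.2f (theory: %d)" % (s, t, c, s + 1))
```

If the same experiment is repeated with a covariance-stationary process (e.g. white noise), the estimates collapse to a function of the lag alone.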
|
# Thread: Two questions involving integration
1. ## Two questions involving integration
I cannot use integration by parts or numerical methods since that has not been introduced in the course. I know if I find a lower Riemann sum that equals the term on the left and an upper Riemann sum that equals the term on the right, that will work. But if that is the correct approach, then I'm not sure how to pick them.
edit: sorry just one question, I solved the other one but forgot to change the title
edit 2: actually what if you choose a partition P with two points {1, 2}.
Then $\displaystyle \frac{1}{2}e(e-1) < e < \int_1^2{\frac{e^x}{x}\,dx} < \frac{e^2}{2} < e(e-1)$
since e is the lower Riemann sum for P and e^2/2 is the upper Riemann sum for P
is this correct?
2. Note that in the region $\displaystyle 1 \leq x \leq 2$,
$\displaystyle \frac{e^x}{2} \leq \frac{e^x}{x} \leq e^x$.
So $\displaystyle \int_1^2{\frac{e^x}{2}\,dx} \leq \int_1^2{\frac{e^x}{x}\,dx} \leq \int_1^2{e^x\,dx}$.
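As a quick numerical sanity check (my addition, not part of the original thread), both chains of bounds can be verified with a few lines of Python:

```python
import numpy as np
from scipy.integrate import quad

e = np.e
integral = quad(lambda x: np.exp(x) / x, 1, 2)[0]

# Bounds from comparing integrands, e^x/2 <= e^x/x <= e^x on [1, 2]:
print((e**2 - e) / 2, integral, e**2 - e)   # ~2.34 < ~3.06 < ~4.67

# Riemann-sum bounds from the partition {1, 2} (e^x/x is increasing there):
print(e, integral, e**2 / 2)                # ~2.72 < ~3.06 < ~3.69
```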
|
# Not the One - Lyrics
You don't call back
And I don't need that
You think you're something more than you are
Can we just recap
Take a big step back
Was into you before now not so much
You're just a blip on my radar
And so easily I'll forget
The attention I gave ya
You're just a drop in the ocean
And so easily I'll regret
All the talk that you gave it
You're not the one
I'm not even listening
I'm not even listening
So do what you want
I'm not interested
I'm not
You're not the one
So do what you want
I'll never miss this
And I'll do what I
And I'll do what I want
Yeah I'll do what I
Yeah I'll do (Oh Oh)
And I'll do what I
And I'll do what I want
Yeah I'll do what I
Yeah I'll do
No, I don't need you
Though you want me to
You think you left a mark but where's the scar?
'Cos I don't make do
So I tell you true
It never meant enough to start a war
You're just a blip on my radar
And so easily I'll forget
The attention I gave ya
You're just a drop in the ocean
And so easily I'll regret
All the talk that you gave it
You're not the one
I'm not even listening
I'm not even listening
So do what you want
I'm not interested
I'm not
You're not the one
So do what you want
I'll never miss this
And I'll do what I
And I'll do what I want
Yeah I'll do what I
Yeah I'll do (Oh Oh)
And I'll do what I
And I'll do what I want
Yeah I'll do what I
Yeah I'll do (Oh Oh)
So do what you want
I'm not the one
I'll do what I want
So do what you want
I'm not the one
I'll do what I want
You're not the one
I'm not even listening
I'm not even listening
So do what you want
I'm not interested
I'm not
You're not the one
So do what you want (Oh)
You're not the one
I'm not even listening
I'm not even listening
So do what you want
I'm not interested
I'm not
You're not the one
So do what you want
I'll never miss this
And I'll do what I
And I'll do what I want
Yeah I'll do what I
Yeah I'll do (Oh Oh)
And I'll do what I
And I'll do what I want
Yeah I'll do what I
Yeah I'll do what I want
|
# NGS Read Mapping Software on Mogon
As a first introduction to NGS alignment software tools we recommend reading this short blog post. In other words: the list of supported tools may grow and grow due to your requests, but it will never really cover everybody's favorite tool.
Notwithstanding our own benchmarks, a first impression can be found in the same blog.
BWA is a mapping tool, designed particularly to map "low-divergent sequences against a large reference genome". Modules on Mogon can be found as1):
bio/BWA
#### The Wrapper Script
To leverage the task from 1 (or a few) samples to be mapped to several in parallel, we provide a wrapper script, which is available as a module:
bio/parallel_BWA
The code is under version management and hosted internally, here.
The wrapper script will submit a job; it is not intended to be run from within a SLURM environment, but rather creates one.
Calling parallel_BWA -h will display a help message with all the options the script provides. Likewise, the call parallel_BWA --credits will display credits and a version history.
$ parallel_BWA [options] <referencedir> <inputdir>
Limitations:
• The wrapper recognizes FASTQ files with suffixes "*.gz", "*.fastq" or "*.fq" and will always assume FASTQ files (compressed or uncompressed).
• The number of processes (and therefore nodes) is limited to the number of samples.
• The wrapper only works for paired-end sequencing data, where the file tuples are designated with the following strings: "_1" and "_2" or "_R1" and "_R2", respectively.
• BWA does not scale well to big data. It is better to split input into chunks of ~1 GB (take this with a grain of salt: there are no scaling tests yet).
• BWA does not scale well beyond a NUMA block (8 threads on Mogon I).
• There are only a few options, as internally the wrapper calls bwa mem (or bwa aln in the single-end case) and only sets up a few things to yield performance.
About Arguments:
• referencedir needs to be the (relative) path to a directory containing an indexed BWA reference.
• inputdir needs to be a (relative) path to a directory containing all inputs. Subdirectories and files containing the string unpaired are ignored; this is to support preprocessing with the trimmomatic module.
The options:
• parallel_BWA attempts to deduce your SLURM account. This may fail, in which case -A, --account needs to be supplied.
• -N, --nodes allows reserving more than 1 node (the default). This may speed up the screening; see the limitations above.
• -d, --dependency, list of comma-separated job IDs the job will wait for to finish.
• -l, --runlimit, this defaults to 300 minutes.
• -p, --partition, the default is nodeshort (or parallel on Mogon2); no smp partition should be chosen.
• -t, --threads, BWA can work in parallel. Please consult the manual. The default is 8.
• -o, --outdir, output directory path (default is the current working directory).
• --single (no arguments) to evaluate single-end data.
• --args to supply additional flags, e.g. --args="-l 1024 -n 0.02" for BWA - note the quotation marks, they are necessary.
Output:
• Per input tuple (paired sequencing data, only) a BAM file with the prefix of the input will be written. In the case of single-end data, there will be one output per input, only.
Barracuda is a GPU-accelerated implementation of BWA and can be found on Mogon as the module:
bio/barracuda
It does not support bwa mem … but rather leverages bwa aln … to GPUs.
#### The Wrapper Script
To leverage the task from 1 (or a few) samples to be mapped to several in parallel, we provide a wrapper script, which is available as a module:
bio/parallel_Barracuda
Calling parallel_Barracuda -h will display a help message with all the options the script provides. Likewise, the call parallel_Barracuda --credits will display credits and a version history.
The script, after loading the module, can then be run like:
$ parallel_Barracuda [options] <referencedir> <inputdir>
Limitations:
• See the parallel_BWA wrapper
• Also: The script will only use the m2_gpu partition and therefore needs an account with the m2_ prefix.
About Arguments:
• referencedir needs to be the (relative) path to a directory containing an indexed BWA reference. No symbolic links are allowed.
• inputdir needs to be a (relative) path to a directory containing all inputs. Subdirectories and files containing the string unpaired are ignored; this is to support preprocessing with the trimmomatic module.
The options:
• parallel_BWA attempts to deduce your SLURM account. This may fail, in which case -A, --account needs to be supplied.
• -d, --dependency, list of comma-separated job IDs the job will wait for to finish.
• -l, --runlimit, this defaults to 300 minutes.
• -o, --outdir, output directory path (default is the current working directory)
Output:
• Per input tuple (paired sequencing data, only) a BAM file with the prefix of the input will be written. In the case of single end data, there will be one output per input, only.
#### The Wrapper Script
Bowtie2 is a well known read aligner with a focus on gapped alignments.
As preliminary scaling tests indicate that the program can scale to a full node and is still reasonably fast, no wrapper script has been installed as a module, so far2). Instead, a few samples are given:
#### The Wrapper Script
segemehl seems to be a pretty good alignment tool, mentioned here, due to the blog which is cited below.
There will be no wrapper script for segemehl: if this comparison bears any truth, the software might be really good, but it is also pretty memory hungry, and several tens of GB per core is just too much. If you want to try segemehl, be sure to write your own wrapper script (perhaps stage-in the reference to a local scratch directory, not the ramdisk) and reserve sufficient memory. Be aware that you will be accounted for the prolonged run time and memory.
This part needs some more time to be finished ….
1)
|
# OSGeo (GDAL/OGR) exercise Geoscripting
In this assignment you'll translate the previously built ArcGIS model/script into a completely open-source Python environment that makes use of the OSGeo (GDAL/OGR) module.
For this assignment you need the OSGeo Linux virtual machine, which is installed on the lab PCs. The installation includes a Python version with all needed modules pre-installed.
To load the OSGeo modules in python (http://gdal.org/python/):
In [2]:
import os
from osgeo import ogr
from osgeo import osr
As in the ArcPy assignment a point should be created first, which can next be used as the centre of a buffer operation.
In [3]:
wkt = "POINT (173914.00 441864.00)"
pt = ogr.CreateGeometryFromWkt(wkt)
print pt.GetX(), pt.GetY()
173914.0 441864.0
This will construct an in memory geometry of a point.
An empty dataset (shape-file) should be created where the constructed buffers can be stored, so they can also be used to visualize the results later on. A driver and file should be specified.
In [13]:
driver = ogr.GetDriverByName('ESRI Shapefile')
outputSf = 'test.shp'
# Remove output shapefile if it already exists
if os.path.exists(outputSf):
driver.DeleteDataSource(outputSf)
# Create output shapefile
datasource = driver.CreateDataSource(outputSf)
Next, set the coordinate reference system. The reference of the Dutch National Grid (Rijksdriehoekstelsel) is known in the EPSG Geodetic Parameter Dataset as EPSG:28992 (http://www.epsg.org/).
In [14]:
proj = osr.SpatialReference()
proj.ImportFromEPSG(28992)
Out[14]:
0
Use these parameters to construct a new layer for the buffers that are to be constructed.
In [15]:
layer = datasource.CreateLayer('test', geom_type=ogr.wkbPolygon, srs = proj)
feature = ogr.Feature(layer.GetLayerDefn())
Now that all necessary conditions are set, the final buffer operation can be performed. Features can be buffered with a given radius, e.g. 1000 meters.
Search in the OSGeo documentation how to construct a buffer around the in memory point-geometry http://gdal.org/python/ .
Hint: You need the OGR module to perform vector based spatial operations. Make sure you work with geometries
In [16]:
bufferDistance = 5000
poly = pt.Buffer(bufferDistance)
Each new geometry of a constructed buffer should be added to the shape-file.
In [17]:
# Wkt of the constructed buffer polygon
polygon = ogr.CreateGeometryFromWkt(poly.ExportToWkt())
feature.SetGeometry(polygon)
# Add the features to the layer/shapefile
layer.CreateFeature(feature)
Out[17]:
0
Make it a habit to clean-up the memory after finishing loops/scripts:
In [18]:
# Clean-up the added buffer polygon for next loop
polygon.Destroy()
feature.Destroy()
datasource.Destroy()
The result of the assignment can be visualized in QGIS
# The results can be visualized using Cartopy and Matplotlib
In [19]:
import cartopy.crs as ccrs
import cartopy.io.img_tiles as cimgt
from cartopy.feature import ShapelyFeature
import matplotlib.pyplot as plt
import matplotlib.cm as cm
%matplotlib inline
Plot the buffered location on the OSM basemap
To be able to use EPSG codes for projection settings, cartopy depends on pyepsg; this module is not installed by default.
It can be obtained from https://pypi.python.org/pypi/pyepsg/0.1.0.
Unpack and open the folder in the command shell to install:
python setup.py install
In [20]:
ax = plt.axes(projection=ccrs.epsg(28992))
extent = [(pt.GetX()-10000), (pt.GetX()+10000), (pt.GetY()-10000), (pt.GetY()+10000)]
#extent = [(pt.GetX()-1000), (pt.GetY()-1000), (pt.GetX()+1000), (pt.GetY()+1000)]
#extent = [51.5, 5.0, 52.0, 6.0]
print extent
#ax.set_extent(extent, crs=ccrs.PlateCarree())
ax.set_extent(extent, crs=ccrs.epsg(28992))
#Add the OSM basemap (this makes the drawing very slow)
bg = cimgt.OSM()
ax.add_image(bg, 10)  # tile zoom level (assumed; original value lost)
# Draw the buffer polygon from the shapefile created above
from cartopy.io.shapereader import Reader
shape_feature = ShapelyFeature(Reader('test.shp').geometries(),
                               ccrs.epsg(28992), facecolor='red')
ax.add_feature(shape_feature)
[163914.0, 183914.0, 431864.0, 451864.0]
Out[20]:
<cartopy.mpl.feature_artist.FeatureArtist at 0xaccbe08c>
In [21]:
plt.show()
## Reproject the buffer object to lon/lat
In [22]:
from osgeo import ogr, osr
import os
driver = ogr.GetDriverByName('ESRI Shapefile')
# input SpatialReference
inSpatialRef = osr.SpatialReference()
inSpatialRef.ImportFromEPSG(28992)
# output SpatialReference
outSpatialRef = osr.SpatialReference()
outSpatialRef.ImportFromEPSG(4326)
# create the CoordinateTransformation
coordTrans = osr.CoordinateTransformation(inSpatialRef, outSpatialRef)
# get the input layer
inDataSet = driver.Open(r'test.shp')
inLayer = inDataSet.GetLayer()
# create the output layer
outputShapefile = r'test_4326.shp'
if os.path.exists(outputShapefile):
driver.DeleteDataSource(outputShapefile)
outDataSet = driver.CreateDataSource(outputShapefile)
outLayer = outDataSet.CreateLayer("test_4326", geom_type=ogr.wkbMultiPolygon)
inLayerDefn = inLayer.GetLayerDefn()
for i in range(0, inLayerDefn.GetFieldCount()):
fieldDefn = inLayerDefn.GetFieldDefn(i)
outLayer.CreateField(fieldDefn)
# get the output layer's feature definition
outLayerDefn = outLayer.GetLayerDefn()
# loop through the input features
inFeature = inLayer.GetNextFeature()
while inFeature:
# get the input geometry
geom = inFeature.GetGeometryRef()
# reproject the geometry
geom.Transform(coordTrans)
# create a new feature
outFeature = ogr.Feature(outLayerDefn)
# set the geometry and attribute
outFeature.SetGeometry(geom)
for i in range(0, outLayerDefn.GetFieldCount()):
outFeature.SetField(outLayerDefn.GetFieldDefn(i).GetNameRef(), inFeature.GetField(i))
# add the feature to the shapefile
outLayer.CreateFeature(outFeature)
# destroy the features and get the next input feature
outFeature.Destroy()
inFeature.Destroy()
inFeature = inLayer.GetNextFeature()
# close the shapefiles
inDataSet.Destroy()
outDataSet.Destroy()
## Draw the reprojected buffer on a basemap
In [3]:
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
import numpy as np
In [13]:
%matplotlib inline
fig = plt.figure(figsize=(6,4))
ax = plt.subplot(111)
#Let's create a basemap around Netherlands & Belgium
m = Basemap(resolution='i',projection='merc', llcrnrlat=51.5,urcrnrlat=52.5,llcrnrlon=5.0,urcrnrlon=6.0,lat_ts=51.5)
m.drawcountries(linewidth=0.5,color='gray')
m.drawcoastlines(linewidth=0.5,color='gray')
m.drawmapboundary(fill_color='gray')
#m.drawparallels(np.arange(51.,5.,1.),labels=[1,0,0,0],color='gray',dashes=[3,1],linewidth=0.4) # draw parallels
#m.drawmeridians(np.arange(1.,9.,1.),labels=[0,0,0,1],color='gray',dashes=[3,1],linewidth=0.4) # draw meridians
#m.fillcontinents(color='black',lake_color='gray')
|
# Centre of pressure
1. Jul 23, 2012
### marellasunny
Is centre of pressure the same way they represent curl in mathematics? I.e. a representation of pressure condensed to a single point?
Also, on a TV show, the presenter said this about the air-brakes on a P-28 fighter: "The air-brakes change the centre of pressure thereby allowing the wind-flow to stick at high speeds". What does he mean by this?
ASIDE: Since I guess the representation of centre of pressure is pretty similar to curl, could someone please tell me why mathematicians use the gradient to represent curl?
As an engineer, I'm more used to seeing exercises where curl is calculated using the position vector 'dr' and then taking the line integrals. How can I prove that the gradient x field = line integral stuff???
Thanks.
Last edited: Jul 23, 2012
2. Jul 23, 2012
### Simon Bridge
Center of pressure is like a center of mass. In aircraft it is kinda where the wing's lift appears to be acting.
http://www.grc.nasa.gov/WWW/k-12/airplane/cp.html
The description in the documentary sounds like garbled rubbish to me. Clearly the air-brake changes the pressure distribution dramatically giving you a lot of drag which could be described as making the air stick ...
AFAIK, mathematicians represent curl as, well, curl. You mean $\text{curl}(\vec{V}) = \vec{\nabla} \times \vec{V}$? This is the differential form of the integral equations you are used to - they are easier to use in general. Multiply it out and see what happens.
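For reference, these are standard textbook identities (my addition, not specific to this thread). Written out in Cartesian components,
$$\vec{\nabla}\times\vec{V} = \left(\frac{\partial V_z}{\partial y}-\frac{\partial V_y}{\partial z}\right)\hat{x} + \left(\frac{\partial V_x}{\partial z}-\frac{\partial V_z}{\partial x}\right)\hat{y} + \left(\frac{\partial V_y}{\partial x}-\frac{\partial V_x}{\partial y}\right)\hat{z}$$
and the connection to the line integrals you are used to is the limit form of Stokes' theorem,
$$\left(\vec{\nabla}\times\vec{V}\right)\cdot\hat{n} = \lim_{A\to 0}\frac{1}{A}\oint_{\partial A}\vec{V}\cdot d\vec{r}$$
i.e. the curl component along $\hat{n}$ is the circulation per unit area around a small loop normal to $\hat{n}$.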
Gradient is like this: $\text{grad}\,V=\nabla V$ and the other one is the divergence: $\text{div}(\vec{V}) = \vec{\nabla}\cdot\vec{V}$
Last edited: Jul 23, 2012
|
# Why is a exterior angle of a triangle equal to the sum of the opposite interior angles?
On the fourth page of Simmons' Precalculus Mathematics in a Nutshell some basic postulates for triangles are provided, such as: corresponding angles of parallel lines are equal, as are their alternate interior angles; the sum of the angles of a triangle equals 180°; etc. But one proposition he made was that an exterior angle is equal to the sum of the opposite interior angles (remote angles?), which he very nonchalantly states in passing. I attempted proving it, as seen in the provided picture on top, where I explain in words on the right side: "Exterior angle C is associated with two angles, therefore its sum is equal to 60° + 60° = 120°, the sum of the opposite interior angles." Below that is another proof given by Oria Gruber, a user on this site, whose answer I was unsatisfied with. [The original question for that answer: Are exterior angles equal to the sum of two remote angles? Please help explain.]
• Which proposition came first? A proof is only valid if the things it depends on are already known (definitions or proved propositions). And you did not say why the proof you linked to was unsatisfactory. – David K Dec 20 '18 at 16:18
• @David K These come with figures but I doubt I can post a picture of them straight from the book: 1) "One degree is one-ninetieth of a right angle", 2) "If a transversal is drawn across a pair of parallel lines then corresponding angles are equal and alternate interior angles are equal." 3) "The sum of the angles in any triangle equals 180." 4) "As a direct consequence... the sum of the acute angles in a right triangle equal 90" 5) "in any triangle an exterior angle equals the sum of the opposite interior angles." – False Logos Dec 20 '18 at 16:51
• The question is already a bit hard to follow and it does not help to have to refer back and forth between the question and the comment. You could try to edit the question to make it easier to see what you are writing about. – David K Dec 20 '18 at 18:20
• Regarding the two proofs, this is supposed to be a general theorem about all triangles, not just when they have 60-degree angles; and I think in your notes you have misrepresented the other proof somewhat by putting the labels "theorem" and "proof" arbitrarily on two steps of the proof and omitting the last line of the proof completely. There is a small bit of algebra you are expected to do to get from the first two lines to the last one in that proof; is that what you're missing? – David K Dec 20 '18 at 18:23
• "and I think in your notes you have misrepresented the other proof somewhat by putting the labels "theorem" and "proof" arbitrarily on two steps of the proof and omitting the last line of the proof completely." A reasonable criticism of my notes, but not much was said in the proof which was my main issue with it. It also wasn't very thorough, stating first that Angle 1 + Angle 2 + Angle 3 = 180 then jumping to Angle 3 + Angle 4 = 180 which does not intuitively follow from his first statement. Why is Angle 3 + Angle 4 = 180 true? Regardless, I now know the answer provided by user3482749. – False Logos Dec 20 '18 at 20:54
In Elements I, 32, Euclid proves the angles of a triangle sum to "two right angles" ($$180^\circ$$) by first showing that an exterior angle equals the sum of the two opposite interior angles. So he can't use the former to prove the latter.
Instead, he shows that exterior $$\angle ACD=\angle ABC+\angle BAC$$ by drawing $$CE\parallel BA$$ and observing that, by I, 29, alternate interior $$\angle BAC=\angle ACE$$ and corresponding $$\angle ABC=\angle ECD$$. Therefore, the whole exterior $$\angle ACD=\angle BAC+\angle ABC$$.
Only then does Euclid prove--almost as an afterthought--the theorem fundamental in his geometry, that the angles of a triangle sum to $$180^\circ$$, or "two right angles" as he puts it.
• This is the only reasonable answer - the only one that bothers to consider which results we're taking as axioms and which must be proven. (Although in "Simmons' Precalculus Mathematics In a Nutshell" other axioms may be the starting ones.) – Misha Lavrov Dec 31 '18 at 16:37
Call the interior angles $$A$$, $$B$$, and $$C$$. Then the exterior angles are $$180 - A$$, $$180 - B$$, and $$180 - C$$. We know that $$A + B + C = 180$$, so $$A = 180 - B - C$$, $$B = 180 - A - C$$, and $$C = 180 - A - B$$. Substituting these into our expressions for the exterior angles, our exterior angles at $$A$$, $$B$$, and $$C$$ respectively are $$180 - (180 - B - C) = B + C$$, $$180 - (180 - A - C) = A + C$$, and $$180 - (180 - A - B) = A + B$$.
By angle sum property, $$A+B=180°-C$$ (which is nothing but the exterior angle at $$C$$)
In a ΔABC the sum of all the angles is 180°.
∠A+∠B+∠C=180°
Hence, ∠A+∠B = 180°-∠C
BC is extended to D. ∠ACD+∠C=180° [Linear pair]
∠ACD=180°-∠C
∠A+∠B=∠ACD=180°-∠C
Hence, the sum of opposite interior angles is equal to the exterior angle.
Proved
|
# Computer simulations explain the anomalous temperature optimum in a cold-adapted enzyme
## Abstract
Cold-adapted enzymes from psychrophilic species show the general characteristics of being more heat labile, and having a different balance between enthalpic and entropic contributions to free energy barrier of the catalyzed reaction compared to mesophilic orthologs. Among cold-adapted enzymes, there are also examples that show an enigmatic inactivation at higher temperatures before unfolding of the protein occurs. Here, we analyze these phenomena by extensive computer simulations of the catalytic reactions of psychrophilic and mesophilic α-amylases. The calculations yield temperature dependent reaction rates in good agreement with experiment, and also elicit the anomalous rate optimum for the cold-adapted enzyme, which occurs about 15 °C below the melting point. This result allows us to examine the structural basis of thermal inactivation, which turns out to be caused by breaking of a specific enzyme-substrate interaction. This type of behaviour is also likely to be relevant for other enzymes displaying such anomalous temperature optima.
## Introduction
Among the different types of extreme environments found on Earth, cold areas with permanent temperatures below 5 °C cover the major part of the planet, particularly due to the large oceans. Such a cold environment would be detrimental for biochemical processes in species that have the same internal temperature as their surroundings, had evolution not adapted them to this situation. A fundamental question here is how the enzymes of such psychrophilic organisms can overcome the exponential retardation of chemical reaction rates at low temperatures. That is, most mesophilic enzymes lose a large fraction of their maximal activity near the freezing point of water, while cold-adapted enzymes have managed to maintain reaction rates high enough to sustain life in the cold. There is now ample evidence from kinetic studies of numerous orthologous mesophilic–psychrophilic enzyme pairs that cold-adapted enzymes have shifted their thermodynamic activation parameters so that the activation enthalpy (ΔH‡) is reduced, which is partly counterbalanced by an increased activation entropy penalty (ΔS‡ is more negative)1,2,3,4. This seemingly universal characteristic has the effect that the exponential dampening of rates at lower temperatures is less pronounced, since the reaction rate is $$k_{{\mathrm{rxn}}} = C \cdot T{\mathrm{e}}^{ - \Delta G^\ddagger /RT} = C \cdot T{\mathrm{e}}^{ - \Delta H^\ddagger /RT}{\mathrm{e}}^{\Delta S^\ddagger /R}$$ according to standard transition-state theory (where $$\Delta G^\ddagger = \Delta H^\ddagger - T\Delta S^\ddagger$$ is the activation free energy). It is thus now generally accepted that the main adaptive feature of psychrophilic enzymes is the redistribution of the thermodynamic activation parameters compared with mesophilic and thermophilic orthologs, while the activation free energies are usually similar at room temperature1,2,3,4,5.
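To see how this redistribution plays out numerically, consider a minimal sketch (our illustration; the activation parameters below are made-up values of the typical magnitude discussed in the text, not results of this study) that evaluates the transition-state-theory rate expression at two temperatures:

```python
import numpy as np

R = 1.987e-3   # gas constant, kcal mol^-1 K^-1
C = 2.084e10   # k_B/h prefactor, s^-1 K^-1

def k_tst(T, dH, dS):
    """Transition-state-theory rate: k = C*T*exp(-dG/RT), dG = dH - T*dS."""
    return C * T * np.exp(-(dH - T * dS) / (R * T))

# Illustrative activation parameters (dH in kcal/mol, dS in kcal/(mol K)):
# the psychrophilic enzyme trades a lower dH for a more negative dS.
params = {"psychrophilic": (6.0, -0.026), "mesophilic": (11.0, -0.010)}
for name, (dH, dS) in params.items():
    k_cold, k_warm = k_tst(278.15, dH, dS), k_tst(310.15, dH, dS)
    print("%13s: k(5 C) = %6.1f s^-1, k(37 C)/k(5 C) = %4.1f"
          % (name, k_cold, k_warm / k_cold))
```

With these numbers the enzyme with the smaller activation enthalpy loses far less rate upon cooling, which is precisely the adaptive effect described above.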
While the shifted activation enthalpy–entropy balance is what is directly responsible for the higher rates at low temperature, a lower melting temperature is another characteristic of cold-adapted enzymes. Hence, compared with mesophilic enzymes, one typically finds that Tm is shifted downward by about 5–20 °C2,3. It is usually the opposing effects of the exponential rate increase at higher temperatures, and the eventual melting of the enzyme that gives rise to a temperature optimum (Topt) of the catalytic rate. It should be noted here that there is another important difference between psychrophilic enzymes and mesophilic/thermophilic ones, in that the former generally work far away from Topt, while the latter operate closer to the optimum and the melting temperature. This means that the evolutionary pressure on protein stability at the physiological working temperature is bound to be considerably weaker for psychrophilic enzymes, which may be one reason for why their Tm has drifted toward lower values compared with mesophilic orthologs5,6.
The structural origins of both the change in thermodynamic activation parameters and the lower melting temperature have been linked to a higher protein flexibility of cold-adapted enzymes2,4, especially in surface loop regions4,5,7,8,9,10,11,12. In particular, computer simulations that directly evaluated free energy barriers for the catalyzed reactions9,10,11,12,13 have shown that the mobility of active-site residues is generally very similar in psychrophilic and mesophilic enzyme orthologs, while surface loop mobilities may differ considerably and strongly affect the balance between ΔH and ΔS. This finding is entirely in line with the fact that multiple sequence alignments between orthologous psychrophilic and mesophilic enzymes reveal that characteristic mutations are typically located in such loop regions9,10,11.
The most well-studied psychrophilic enzyme is probably the α-amylase from the Antarctic bacterium Pseudoalteromonas haloplanktis (AHA), largely due to the efforts of Gerday, Feller, and coworkers3,14,15,16,17. This multidomain enzyme (Fig. 1) has thus served as a model for cold adaptation, and it has been shown to be a much faster catalyst at low temperatures than its mesophilic and thermophilic orthologs from pig pancreas (PPA) and Bacillus amyloliquefaciens, respectively14. It is also found to be more heat labile with a melting temperature that is ~15 °C lower than the porcine pancreatic variant. Furthermore, it was suggested that the decreased stability of AHA originates from localized increases in flexibility (or local unfolding) of structural regions close to the active site, which may be connected to the higher Km values observed for similar substrates, compared with those in PPA17. It has also been shown that introduction of mutations into AHA, encompassing structural features typical for PPA, diminishes the psychrophilic properties of the former15. In addition, a few computational studies have addressed the catalytic mechanism of α-amylases with QM/MM simulations18,19, which corroborate the standard lysozyme-like mechanism20. Hence, the glycosylation step is predicted to be rate-limiting, where a carboxylate moiety of the enzyme (Asp174 in AHA) attacks the glycosidic bond to yield a covalent enzyme–substrate intermediate, concomitantly with leaving group protonation by a carboxylic acid moiety (Glu200 in AHA). Classical MD simulations comparing AHA and PPA and focusing on the dynamic properties of the apo-enzymes have also been reported21,22.
A particularly interesting feature of the cold-adapted α-amylase is that its rate optimum at 25–30 °C is well below the melting temperature of 44 °C, indicating that inactivation of AHA is due to something other than global unfolding. This is not the case for the mesophilic and thermophilic orthologs where the optimum essentially coincides with Tm14. While a coincidence of optimum rate and the onset of unfolding is also found for most psychrophilic enzymes23,24,25,26,27, there are other such examples where the optimum occurs significantly earlier than melting of the protein28,29. This has usually been interpreted in terms of local unfolding of the enzyme or an unstable active site14,28,29, but a more exotic hypothesis posits that there is a non-negligible heat capacity difference between the transition and ground state ($$\Delta C_P^\ddagger$$)30,31,32. This would give rise to a strong temperature dependence of ΔH and ΔS, leading to curved free energy and activity plots.
It is important here to emphasize that in order to computationally address the outstanding questions of how psychrophilic enzymes can alter the activation enthalpy–entropy balance, and how catalytic rate optima could arise far from the melting temperature, it is absolutely necessary to be able to directly calculate the temperature dependence of the free energy barriers and rates5. Thus, to establish the origins of these effects in AHA, we constructed an empirical valence bond (EVB) model33,34 for the rate-limiting glycosylation step in the α-amylases, based on density functional theory (DFT) quantum mechanical calculations. With this EVB model, we are able to carry out very extensive molecular dynamics (MD) free energy calculations over a wide temperature range, and uncover the origins of these interesting catalytic properties of the psychrophilic α-amylase.
## Results
### EVB and DFT modeling of the reaction of α-amylases
The predominant enzymatic activity of α-amylases is the hydrolysis of α-1,4 bonds within starch molecules. The catalytic reaction first involves a glycosylation step resulting in covalent enzyme–substrate intermediate, which is subsequently hydrolyzed by a water molecule in a deglycosylation step, thereby restoring the enzyme-active site35,36. The glycosylation step has been shown to be rate-limiting for α-amylases37, and was described here by the EVB method33,34, using a five-residue glucose oligomer as the substrate in complex with both the psychrophilic P. haloplanktis α-amylase (AHA)36 and the mesophilic porcine pancreatic α-amylase (PPA)38. To calibrate the EVB model, we employed DFT calculations with a continuum solvent model for a small reference system consisting of two carboxylic sidechains, representing the nucleophile and general acid of the reaction (Asp174/197 and Glu200/233 in AHA/PPA, respectively), set to cleave the linkage between two glucose residues. Here, the geometry of reacting moieties represents that found in typical α-amylase active sites (see “Methods”).
The reaction energetics in water for this reference system was obtained by M06-2X/6-311 + G(2d,2p) DFT calculations with the SMD solvent model39,40. Geometry optimization and transition-state (TS) search yielded a concerted oxocarbenium-like transition state with a sole imaginary frequency, where the general acid is essentially deprotonated, and the anomeric carbon distances to the nucleophile and leaving oxygen are 2.31 and 2.33 Å, respectively (Fig. 1c). This type of TS is comparable with that found in several studies reported earlier18,20. The activation and reaction free energy at 25 °C for the glycosylation reaction in water were thus calculated as $$\Delta G_{{\mathrm{ref}}}^\ddagger = 23.9$$ and $$\Delta G_{{\mathrm{ref}}}^0 = - 1.1$$ kcal mol−1, respectively (Supplementary Data 1), notably with a free energy barrier that is about 10 kcal mol−1 higher than that observed for the corresponding enzyme reaction15,16. These results were used to parameterize an (uncatalyzed) EVB reference free energy surface in water, which can then be used to analyze the catalytic effect of the rest of the enzyme, apart from the groups directly involved in the chemistry. Hence, the resulting EVB model exactly reproduces the energetics of the solution reaction, as obtained by the DFT calculations (Fig. 2). The effect of the surrounding enzyme is then obtained by carrying out EVB/MD free energy simulations of the same reaction in the enzyme-active site, where the entire protein is solvated in water. As noted above, it is the efficiency of the EVB representation of the reaction surface that allows for extensive sampling by MD and evaluation of thermodynamic activation parameters. It should also be noted here that the general acid corresponding to Glu200/233 must be treated as protonated in the reactant state since experimental pH-rate profiles unambiguously show that this is the case for the enzyme–substrate complex41,42.
### Origin of the catalytic effect in the α-amylases
The results of the EVB/MD simulations in the psychrophilic (AHA) and mesophilic (PPA) α-amylases are summarized in Fig. 2 in terms of average reaction free energy profiles, each from ~300 independent simulations at 25 °C. It can immediately be seen that the two enzymes lower the activation barrier by ~10 kcal mol−1 compared with the reference reaction in water. Moreover, the psychrophilic enzyme is predicted to be faster than the mesophilic at room temperature, and the calculated free energy barriers are 13.3 ± 0.08 and 15.1 ± 0.14 kcal mol−1 for AHA and PPA, respectively (error bars ± 1 s.e.m. from ~300 replicate simulations). These results agree very well with the available experimental data, which imply free energy barriers of ~14 kcal mol−1 for typical short substrates and an approximately threefold higher kcat value for AHA compared with PPA14,16. The first question that arises from our calculations is where the 10 kcal mol−1 catalytic effect of the α-amylases, compared with the solution reaction, comes from. Actually, a survey of the literature regarding computational modeling of glucosidases indicates that the question of what causes their high catalytic rates basically remains unanswered. That is, computational studies in the field have mainly been focused on details of the catalytic mechanisms (inverting, retaining, concerted, stepwise etc.) and on the resulting free energy barriers, without comparison with any uncatalyzed reference reaction18,19,20,43. A notable exception is the pioneering work of Warshel on lysozyme33,44, who ascribed the catalytic effect mainly to electrostatic stabilization of a proposed ionic oxocarbenium intermediate by the negatively charged nucleophilic carboxylate group. However, such a mechanism can now be ruled out as the covalent enzyme-bound intermediate has been observed experimentally45,46, and reliable QM/MM calculations have provided convincing results in support of the covalent reaction pathway20.
As virtually all computational studies of glucosidase reactions have employed QM/MM methods, where reference reactions in solution are generally avoided, there have been no uncatalyzed reactions to compare with. However, in our case, the results clearly show that just putting the nucleophilic group, general acid, and substrate in the same geometry as in the α-amylases, but surrounded by water, does not at all suffice for achieving the low activation barriers found in the enzymes. It is also clear that while part of the 10 kcal mol−1 barrier reduction originates from stabilization of the covalent intermediate, this does not explain the entire effect (Fig. 2a; Supplementary Fig. 1). Here, analysis of the calculated energetics shows that it is primarily the reorganization free energy33,34,47 that has been significantly reduced in the enzymes compared with the solution reaction (Fig. 2b). This type of transition-state stabilization is one of the major features of enzyme-catalyzed reactions and reflects the preorganization of the active site48, which diminishes the energetic cost of reorienting polar groups surrounding the substrate as its charge distribution changes along the reaction path47,48,49. Such (solvent) reorganization is otherwise a major contributor to the free energy barrier of uncatalyzed reactions in water. The magnitude of the reorganization energy is also directly reflected by the energy gap between the reactant and product states at the reactant minimum. As can be seen from Fig. 2, the reactant minimum in the enzymes is shifted toward smaller absolute values of this energy gap than in the reference reaction, demonstrating that the required reorganization between reactant and products (covalent intermediate) is significantly smaller in the enzymes.
As the negative charge on Asp174/197 in the reactant state ends up on Glu200/233 in the intermediate (Fig. 2a), this migration (delocalization) of charge requires a substantial reorientation of water molecules in the solution reaction, with a large associated free energy cost. In the enzymes, on the other hand, the two carboxylates are largely shielded from solvent, and their negative charge is instead stabilized by interactions with polar groups of the protein and substrate. In particular, the sidechain of Arg172/195 is located between Asp174/197 and Glu200/233, and can thus form ionic interactions with the charged carboxylate, both in the reactant and intermediate states, with only a small reorganizational cost (Fig. 1b).
### Temperature dependence of the AHA and PPA reactions
We calculated the temperature dependence of the glycosylation step in both AHA and PPA by performing ~300 independent EVB/MD calculations of free energy profiles for each enzyme at eight different temperatures between 5 and 40 °C. From these simulations, Arrhenius plots of ΔG/T vs. 1/T were constructed in order to obtain the thermodynamic components of the free energy barrier (Fig. 3). It can immediately be seen from the Arrhenius plots that while PPA essentially shows a linear relationship over the entire temperature range (R2 = 0.94), there is evidently a break in the plot for AHA. For PPA, the linear relation gives values of ΔH = 10.8 and TΔS = −4.3 kcal mol−1 at 15 °C. In the case of AHA, the corresponding values are ΔH = 5.2 and TΔS = −7.8 kcal mol−1, if the entire lower temperature range 5–25 °C is considered, and ΔH = 6.5 and TΔS = −6.6 kcal mol−1 if one restricts the range to 10–25 °C (as done experimentally for AHA16). The predicted values above agree reasonably well with those reported from experiments, where ΔH = 11.1 and TΔS = −2.9 kcal mol−1 for PPA and ΔH = 8.3 and TΔS = −5.1 kcal mol−1 for AHA at 15 °C16. Moreover, it is clear that the computer simulations capture the shift of the activation enthalpy–entropy balance that is observed experimentally both for this and other orthologous mesophilic–psychrophilic enzyme pairs. That is, AHA shows a smaller ΔH and a more negative TΔS than PPA. For comparison, we also calculated the corresponding Arrhenius plot for the uncatalyzed reference reaction, which yields ΔH = 14.8 and TΔS = −8.9 kcal mol−1 at 15 °C (Fig. 3b). This shows that the two enzymes are able to reduce both ΔH and −TΔS, but that the faster psychrophilic enzyme has a larger effect on the enthalpy term. Hence, the transition-state stabilization evidently has both enthalpic and entropic contributions.
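Since ΔG‡ = ΔH‡ − TΔS‡, a plot of ΔG‡/T against 1/T is linear with slope ΔH‡ and intercept −ΔS‡, which is how such thermodynamic decompositions are extracted. A minimal sketch of this step, using placeholder barrier values chosen to resemble the AHA numbers (not the actual computed data):

```python
# Minimal sketch: extract dH and dS from computed barriers dG(T) via the
# linear relation dG/T = dH*(1/T) - dS. The dG values below are illustrative
# placeholders chosen to resemble the AHA results, not the paper's data.
import numpy as np

T  = np.array([278.0, 283.0, 288.0, 293.0, 298.0])   # K
dG = np.array([12.73, 12.87, 13.00, 13.14, 13.27])   # kcal/mol (placeholder)

slope, intercept = np.polyfit(1.0 / T, dG / T, 1)
dH = slope         # kcal/mol
dS = -intercept    # kcal/(mol K)
print(f"dH = {dH:.1f} kcal/mol, TdS(288 K) = {288.0 * dS:.1f} kcal/mol")
```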
It is noteworthy that the break in the Arrhenius plot for AHA occurs precisely where the elusive temperature optimum is observed for the psychrophilic α-amylase (~25 °C) which, as noted above, does not reflect thermal unfolding of the enzyme14. Hence, if we convert our free energy barriers to rate constants (kcat), it can be seen that the calculations predict a rate optimum that coincides with that observed in kinetic experiments (Fig. 3c). This is the first time that such a nontrivial reaction rate behavior has been seen from computer simulations, and this unexpected finding now allows us to explore the cause of the anomalous thermal inactivation in a well-defined enzyme case.
### Origin of the thermal inactivation of psychrophilic α-amylase
It is immediately clear from the present EVB/MD simulations that the cause of the rate decline for AHA above 25 °C does not reflect unfolding of the protein. This is as expected since the timescale of the simulations (sub-μs) would not be able to capture any global melting transition. In order to monitor possible temperature-dependent structural changes near the active site, we also calculated plain MD trajectories in the reactant state at 10, 25, and 40 °C (300 ns at each temperature) for the two enzymes. While the backbone mobility of AHA and PPA around the catalytic residues Asp174/197 and Glu200/233 is found to be very similar (Fig. 4), it turns out that the L7 surface loop following the short helical motif carrying the active-site residues His263/299 and Asp264/300 has significantly higher flexibility in the psychrophilic enzyme, and is, in fact, the most flexible part of AHA (Supplementary Fig. 2). This effect was also seen in earlier MD simulations of the two enzymes without any bound substrate21, and the flexibility of this loop was indeed deemed important in earlier QM/MM simulations19 (where it was denoted L2).
The L7 loop (269–277/305–314) has an alanine insertion in PPA (Ala307), and its conformation becomes partly helical, which apparently stabilizes its structure compared with AHA. The strictly conserved Asp264/300 upstream of the L7 loop is a critical residue that anchors the −1 position of the substrate via H bonds to the sugar 2- and 3-OH groups, and it has also been proposed to stabilize the protonated form of the general acid/base Glu200/23337,50. In addition, His263/299 also makes an H bond to the 3-OH group of the substrate, and is likewise strictly conserved. The effect of the increased backbone mobility of the L7 loop propagates upstream to Asp264/300 and His263/299, thereby also significantly affecting their sidechain mobilities, which are higher in AHA than in PPA at all temperatures (Fig. 4). Since the global dynamics of the two enzymes is very similar, apart from the mobility of the L7 loop (Supplementary Fig. 2), it appears that the sequence change in the middle of this loop (AHA: HG−GAGNVIT vs. PPA: HGAGGSSILT) may be the main determinant of its different behavior in the two cases.
Further analysis of the strong Asp264/300 interaction with the substrate reveals that the reactant-state probability distribution of the Asp−COO···O2/O3 distances is considerably wider in AHA and also strongly temperature dependent (Fig. 5a, b). In particular, it can be seen that somewhere above room temperature, there is a notable shift in the probability distribution for AHA toward larger Asp interaction distances, which is not observed in the mesophilic enzyme. This clearly indicates that above 25 °C, where both the calculated and observed rate optima are found, the psychrophilic enzyme starts to populate a less favorable reactant state, leading to a higher free energy barrier for the reaction. In order to rigorously test this hypothesis, we repeated the EVB/MD simulations for AHA at 10, 25, 30, 35, and 40 °C with distance restraints applied to the Asp−COO··· substrate interaction. As intended, these restraints change the probability distributions toward those observed in PPA (Fig. 5b), and are found to abolish the rate decay above 25 °C in the EVB/MD calculations (Fig. 5c). Hence, this computational experiment clearly suggests that it is the different temperature dependence of the substrate interaction with Asp264/300 that causes the rate optimum observed for AHA. Moreover, the Arrhenius plot for AHA with the above restraints now yields a nice linear fit (R2 = 0.98) over the entire temperature range 10–40 °C with computed values of ΔH = 6.8 and TΔS = −6.0 kcal mol−1 at 15 °C (Fig. 5d). These values are thus very similar to those obtained for the unrestrained psychrophilic enzyme below its temperature optimum.
### Kinetic model for the catalytic reaction of AHA
The simplest kinetic model to describe the above situation would be one with a dead-end inhibitory state in equilibrium with the Michaelis complex that leads to products:
$$\begin{array}{ll} {\mathrm{E}} + {\mathrm{S}} \rightleftharpoons & {\mathrm{ES}} \mathop{\longrightarrow}\limits^{k_3} {\mathrm{E}} + {\mathrm{P}} \\ & \upharpoonleft \downharpoonright K_{\mathrm{eq}} \\ & {\mathrm{ES}}^{\prime} \end{array}$$
(1)
This two-state model yields $$k_{\mathrm{cat}}=k_3/(1+K_{\mathrm{eq}})$$, and its temperature dependence is thus determined by the thermodynamic parameters $$\Delta H_3^\ddagger ,\Delta S_3^\ddagger ,\Delta H_{{\mathrm{eq}}}$$, and ΔSeq. Fitting the rate curve resulting from our EVB/MD calculations (Fig. 3c) to this model yields $$\Delta H_3^\ddagger = 10.2$$ kcal mol−1, $$\Delta S_3^\ddagger = - 0.00925$$ kcal mol−1 K−1, ΔHeq = 32.0 kcal mol−1, and ΔSeq = +0.10746 kcal mol−1 K−1, and the corresponding curve is shown in Fig. 6a. It is clear that with only eight temperature points the enthalpy and entropy values resulting from the fit are not very accurate, but they nevertheless give a very informative qualitative picture. That is, the kinetic model requires a strongly temperature-dependent equilibrium with the inactive state in order to capture the rate optimum. The large enthalpy cost of moving to ES’ is balanced by the entropy gain precisely around 25 °C, where −TΔSeq = −32 kcal mol−1. At lower temperatures the system thus primarily resides in ES, whereas ES’ becomes increasingly populated as the temperature rises. The magnitudes of ΔHeq and ΔSeq appear very reasonable, since they would primarily reflect the breaking of ionic H bonds between the substrate and Asp264, which are bound to be strong in terms of enthalpy. There is also always a second solution to Eq. (1), obtained by changing the signs of ΔHeq and ΔSeq, but this can be discarded in the present case as it implies a large negative activation enthalpy for the chemical step.
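To make the model concrete, here is a minimal sketch (not the authors' fitting code) that evaluates kcat(T) = k3/(1 + Keq) with the fitted parameters quoted above; the kB T/h prefactor for k3 is the standard transition-state-theory assumption:

```python
# Two-state dead-end model, Eq. (1): kcat(T) = k3(T) / (1 + Keq(T)).
# Parameter values are the fitted ones quoted in the text.
import numpy as np

R = 1.987e-3          # gas constant, kcal/(mol K)
KB_OVER_H = 2.084e10  # Boltzmann/Planck constant ratio, 1/(s K)

dH3, dS3   = 10.2, -0.00925   # chemical step, kcal/mol and kcal/(mol K)
dHeq, dSeq = 32.0,  0.10746   # ES <-> ES' equilibrium

def kcat(T):
    dG3 = dH3 - T * dS3                              # activation free energy
    k3 = KB_OVER_H * T * np.exp(-dG3 / (R * T))      # TST rate constant, 1/s
    Keq = np.exp(-(dHeq - T * dSeq) / (R * T))       # ES' population factor
    return k3 / (1.0 + Keq)

for T in np.linspace(278.0, 313.0, 8):
    print(f"{T:5.1f} K  kcat = {kcat(T):7.1f} 1/s")
```

Evaluating this gives a rate maximum near 298 K, since Keq = 1 exactly where ΔHeq = TΔSeq.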
Due to the increasing population of ES’ at higher temperatures, the apparent activation enthalpy $$\Delta H_{{\mathrm{app}}}^\ddagger$$ will become smaller and eventually negative as it becomes dominated by −ΔHeq, the enthalpy associated with going back to ES. At that point, however, the TΔSeq penalty is larger and will dominate the apparent free energy barrier, which is what causes the rate decay in the high-temperature regime. The apparent activation enthalpy and entropy are approximately given by
$$\Delta H_{{\mathrm{app}}}^\ddagger \approx \Delta H_3^\ddagger - \frac{{K_{{\mathrm{eq}}}}}{{1 + K_{{\mathrm{eq}}}}}\Delta H_{{\mathrm{eq}}}$$
(2)
and
$$\Delta S_{{\mathrm{app}}}^\ddagger \approx \Delta S_3^\ddagger - \frac{{K_{{\mathrm{eq}}}}}{{1 + K_{{\mathrm{eq}}}}}\Delta S_{{\mathrm{eq}}}$$
(3)
where the second term in the two equations corresponds to the fractional population of ES’, for which −ΔHeq and −ΔSeq will add to the activation parameters of the chemical step. These expressions give the temperature-dependent apparent activation parameters shown in Fig. 6c. In the low-temperature regime around 15 °C, the model predicts a value of $$\Delta H_{{\mathrm{app}}}^\ddagger$$ around 6 kcal mol−1, in reasonable agreement with our original Arrhenius plot (Fig. 3a). It is important to note, however, that the above two-state kinetic model predicts that the apparent parameters have limiting values of $$\Delta H_{{\mathrm{app}}}^\ddagger = \Delta H_3^\ddagger$$ and $$\Delta S_{{\mathrm{app}}}^\ddagger = \Delta S_3^\ddagger$$ at low temperatures, and $$\Delta H_{{\mathrm{app}}}^\ddagger = \Delta H_3^\ddagger - \Delta H_{{\mathrm{eq}}}$$ and $$\Delta S_{{\mathrm{app}}}^\ddagger = \Delta S_3^\ddagger - \Delta S_{{\mathrm{eq}}}$$ at high temperatures.
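A short sketch evaluating Eqs. (2) and (3) with the same fitted parameters illustrates these limits; the ES′ population fraction Keq/(1 + Keq) interpolates the apparent parameters between the two regimes:

```python
# Apparent activation parameters of the two-state model, Eqs. (2)-(3),
# using the fitted values quoted earlier in the text.
import numpy as np

R = 1.987e-3                   # kcal/(mol K)
dH3, dS3   = 10.2, -0.00925
dHeq, dSeq = 32.0,  0.10746

def apparent_params(T):
    Keq = np.exp(-(dHeq - T * dSeq) / (R * T))
    f = Keq / (1.0 + Keq)      # fractional population of ES'
    return dH3 - f * dHeq, dS3 - f * dSeq

for T in (283.0, 298.0, 313.0):
    dHapp, dSapp = apparent_params(T)
    print(f"{T:5.1f} K  dHapp = {dHapp:6.1f} kcal/mol, TdSapp = {T * dSapp:6.1f} kcal/mol")
```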
An alternative model that has been invoked to explain the occurrence of enzyme temperature optima (that do not reflect unfolding) assumes that there is a constant difference in heat capacity between the reactant and transition states ($$\Delta C_p^\ddagger \ne 0$$)30,31,32. This idea is apparently inspired by studies of protein folding, where it is well known that there is a considerable change in heat capacity between the folded and unfolded states51. Hence, it is hypothesized that there may also be a non-negligible heat capacity difference between the rate-limiting TS and the Michaelis complex (ES). If so, neither the activation enthalpy nor the activation entropy will be constant; instead, they depend on temperature according to
$$\Delta H^\ddagger \left( T \right) = \Delta H_0^\ddagger + \Delta C_p^\ddagger (T - T_0)$$
(4)
$$\Delta S^\ddagger \left( T \right) = \Delta S_0^\ddagger + \Delta C_p^\ddagger \ln \left( {\frac{T}{{T_0}}} \right)$$
(5)
where the subscript 0 denotes the ΔH and ΔS values at a reference temperature T0. Due to the temperature dependence of ΔH and ΔS, this model yields curved Arrhenius plots for kcat even for a simple one-state model, $${\mathrm{E}} + {\mathrm{S}} \rightleftharpoons {\mathrm{ES}}\mathop { \to }\limits^{k_{{\mathrm{cat}}}} {\mathrm{E}} + {\mathrm{P}}$$, if $$\Delta C_p^\ddagger \ne 0$$. For temperature optima not related to unfolding, a negative value of $$\Delta C_p^\ddagger$$ is required to fit the rate curve, and this has been interpreted as indicating that the TS is stiffer (lower heat capacity) than the ground state30,31. The only experimental evidence for this type of model, however, comes precisely from curve fitting31,32, since the heat capacity change along an enzymatic reaction coordinate would be very difficult to measure directly.
It is again relatively straightforward to fit our calculated rate vs. temperature curve to the heat capacity model, although the fitted values of $$\Delta H_0^\ddagger$$, $$\Delta S_0^\ddagger$$, and $$\Delta C_p^\ddagger$$ are not very accurate since we only have eight temperature points, as noted above. However, it is an interesting qualitative exercise, and the resulting fit is shown in Fig. 6b. This yields $$\Delta H_0^\ddagger = -6.2$$ kcal mol−1, $$\Delta S_0^\ddagger = -0.06576$$ kcal mol−1 K−1, and $$\Delta C_p^\ddagger = - 1.13$$ kcal mol−1 K−1, with T0 taken as 298 K. Comparing the above two-state dead-end model with the one-state heat capacity model (Fig. 6a, b), we see that the two theoretical curves are essentially indistinguishable. This demonstrates, beyond any doubt, that curve fitting to these kinetic models cannot be used to determine the actual mechanism behind the curved Arrhenius plot and temperature optimum. In our case, however, we know from the analysis of conformational populations and our restrained EVB/MD experiments that the two-state model is the correct one, and we have also been able to identify the nature of the ES and ES’ states. It is further interesting to note that the one-state heat capacity model predicts a very strange behavior of ΔH(T) and ΔS(T), as shown in Fig. 6d. The activation enthalpy is predicted to grow linearly toward very high values as the temperature decreases and, conversely, to become large and negative at high temperatures, while the −TΔS term behaves in the opposite manner. This is quite different from the prediction of the two-state model, where $$\Delta H_{{\mathrm{app}}}^\ddagger$$ and $$\Delta S_{{\mathrm{app}}}^\ddagger$$ reach limiting values at low and high T. Hence, the physical basis for why the activation enthalpy and entropy of an enzyme reaction would behave as predicted by the heat capacity model (Fig. 6d) is rather obscure.
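For comparison, a minimal sketch of the one-state heat-capacity model of Eqs. (4) and (5), with the fitted values above, shows why the two rate curves are so hard to tell apart despite the very different ΔH(T) behavior:

```python
# One-state heat-capacity model, Eqs. (4)-(5), with the fitted values from
# the text; a constant negative dCp makes the Arrhenius plot curved.
import numpy as np

R, KB_OVER_H, T0 = 1.987e-3, 2.084e10, 298.0
dH0, dS0, dCp = -6.2, -0.06576, -1.13   # kcal/mol, kcal/(mol K), kcal/(mol K)

def kcat_heat_capacity(T):
    dH = dH0 + dCp * (T - T0)           # Eq. (4)
    dS = dS0 + dCp * np.log(T / T0)     # Eq. (5)
    dG = dH - T * dS
    return KB_OVER_H * T * np.exp(-dG / (R * T))

for T in (283.0, 298.0, 313.0):
    print(f"{T:5.1f} K  kcat = {kcat_heat_capacity(T):7.1f} 1/s")
# Note dH(0 K) = dH0 + dCp*(0 - T0) = about +330 kcal/mol, as discussed below.
```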
## Discussion
The increasing interest in enzymes from extremophilic species and, in particular, those from psychrophiles has raised a number of fundamental questions regarding how evolution has shaped the temperature dependence of their catalytic activity and thermal stability. A major discovery in the field was the observation by Somero and coworkers1 that the thermodynamic activation parameters of cold-adapted enzymes have shifted toward a lower value of ΔH and a more negative ΔS, compared with mesophilic orthologs. This finding has since been confirmed for many different classes of enzymes2,5 and appears to be a universal feature of cold adaptation. The other universal feature is the reduced thermal stability that psychrophilic enzymes display2. The α-amylase from P. haloplanktis (AHA) obeys both these rules of psychrophilicity, and also shows the enigmatic behavior of becoming inactivated before protein melting sets in as the temperature is increased14.
The present computer simulations reveal several hitherto hidden properties of AHA that explain its catalytic behavior. First, it is evident from our comparison to an uncatalyzed reference reaction in solution, containing exactly the same catalytic groups as the enzyme, that a major contribution to the low activation barrier in the α-amylases originates from reduction of the reorganization free energy of the chemical reaction. This effect is often simply referred to as active-site preorganization, but it can be quantified and has been seen for many other types of enzymes as well33,47,49. In fact, it may well be a characteristic of the glucosidases that utilize two enzyme carboxylic sidechains as their nucleophile and general acid/base for cleaving glycosidic bonds. In these reactions, there is a migration of a net negative charge over a distance of about 5 Å, which is associated with a large reorganization free energy penalty in solution. Hence, if the enzyme can provide stabilizing interactions for the negative charge at both of its carboxylate locations during the reaction, the reorganization energy can be reduced significantly. It appears that the reason why this major catalytic effect has not been revealed earlier for glucosidases is that no comparisons to uncatalyzed water reactions have been reported in previous computational studies.
The second major result from our EVB/MD simulations at different temperatures is that not only the free energy barriers, but also the experimental values16 of ΔH and TΔS reported at 15 °C for AHA and PPA, are reproduced remarkably well, with shifts of several kcal mol−1 in the expected direction between the two enzymes. While the 3D structures of AHA and PPA are very similar, with the same overall fold (Fig. 1a), their sequence identity is only 47%, with several insertions/deletions. The fact that these enzymes are also rather large, with almost 500 amino acids, thus makes it difficult to identify specific protein regions responsible for the cold adaptation of AHA. It should also be noted that the shifts in ΔH and TΔS are smaller for the α-amylases than for, e.g., endonucleases27, xylanases29, and trypsins52, where they are on the order of 10 kcal mol−1. Experimental work on AHA mutants has, however, shown that a combination of five or six AHA → PPA substitutions is sufficient to confer mesophilic characteristics in terms of kcat, Km, ΔH, and TΔS on the psychrophilic enzyme15. None of the corresponding single mutations, on the other hand, was able to cause these changes, demonstrating that multiple contributions are at play. Interestingly, all but one (K300R) of the selected mutations were located on the protein surface, 20–25 Å away from the reaction center, which reinforces our earlier findings regarding the importance of the protein surface in cold adaptation9,10,11,12. However, since it is probably easier to destroy psychrophilic properties than to create them, a more critical test would be to instead identify minimal sets of mutations that confer cold adaptation on the mesophilic enzyme.
The most noteworthy result from the present computer simulations is the break in the Arrhenius plot for AHA and the corresponding calculated rate maximum at around 25 °C. This type of optimum has never before been predicted by computational enzyme studies. The optimum also occurs precisely where it is found experimentally, which is about 20 °C below the melting temperature of the enzyme14. We were thus able to analyze the origin of such anomalous temperature optima in the case of AHA. Our results unambiguously show that it is primarily the Asp264 interaction with the substrate that breaks above room temperature and causes the rate decline. The evidence comes from analysis of backbone and sidechain atomic mobilities, conformational populations, and computational experiments in which the interaction is maintained at higher temperatures through restraints. Hence, the kinetics of the psychrophilic enzyme is best described by a two-state model in which ES is in equilibrium with a dead-end state ES’. Such a macroscopic model yields very reasonable values for the apparent activation enthalpy and entropy as a function of temperature, and predicts that the ES ⇌ ES′ equilibrium is characterized by a large enthalpy (~30 kcal mol−1) that is counterbalanced by an equally large entropy term at room temperature, indicative of the breaking/forming of a strong interaction. The two-state dead-end model is thus similar to that proposed by Danson for explaining unusual curved Arrhenius plots in enzyme kinetics53. That model also adds a slow irreversible inactivation of the enzyme from the ES’ state, and describes how the rate vs. temperature curve changes with time.
We further showed here that the heat capacity model30,31,32 can fit our calculated rate curve as well as the two-state model does, although it is clearly incorrect in this case. This strongly suggests that curve fitting alone cannot be used to prove the somewhat controversial idea that $$\Delta C_p^\ddagger$$ could be nonzero for an elementary enzyme reaction step, and that this would be the underlying reason for curved Arrhenius plots and unusual temperature optima of the type discussed here. Moreover, we show that the two alternative models make very different predictions regarding the temperature dependence of the apparent activation enthalpies and entropies. While $$\Delta H_{{\mathrm{app}}}^\ddagger$$ and $$\Delta S_{{\mathrm{app}}}^\ddagger$$ in the two-state model reach limiting values at low and high temperatures, determined by the chemical step and the inactivation equilibrium, the heat capacity model predicts a seemingly unphysical behavior of these quantities. Hence, $$\Delta H_{{\mathrm{app}}}^\ddagger$$ is predicted to decrease linearly with T over the entire temperature range, reaching a value of about 330 kcal mol−1 at 0 K for the AHA reaction. What the microscopic origin of such an enormous energy barrier would be is difficult to understand. Likewise, a large negative activation enthalpy is predicted at high temperatures for an elementary chemical reaction step, for which the physical explanation is also unclear. It should, however, be noted that the equilibrium model of Eq. (1) predicts an increased apparent heat capacity of the combined reactant state when both ES and ES’ are significantly populated, since the temperature derivative of the second term of Eq. (2) is not zero. This causes a negative dip of the apparent $$\Delta C_p^\ddagger$$ in the temperature region where the population transition occurs, while it goes to zero at low and high temperatures (Supplementary Fig. 3). In this case, however, the negative dip in $$\Delta C_p^\ddagger$$ is a consequence of the ES ⇌ ES′ equilibrium, and $$\Delta C_p^\ddagger$$ is not a temperature-independent constant.
Finally, it is interesting to note that, in contrast to earlier computational studies of psychrophilic enzymes, the α-amylases show distinct differences in the mobilities of the conserved active-site residues Asp264/300 and His263/299. That is, earlier work on a number of different enzymes has revealed that mutations and flexibility differences are generally located in surface loops of the protein, while the conserved active-site residues show the same low mobility in both psychrophilic and mesophilic orthologs. Moreover, increased surface loop mobility in psychrophilic enzymes could be directly related to the changes in activation enthalpies and entropies9,10,11,12. The influence of protein surface residues far away from the active site on thermodynamic activation parameters is further illustrated by the multiple AHA mutations analyzed by Feller and coworkers15. In AHA and PPA, the catalytic groups and the core of the active site do indeed show very similar and low mobility (Fig. 4), but Asp264/300 and His263/299 are clearly more flexible in the psychrophilic enzyme, although these residues are totally conserved among the α-amylases54. The reason for this is to be found in the L7 loop downstream of the two residues, which is the most mobile part of AHA and much more flexible than in PPA. Hence, it again appears that mutations in a surface loop are ultimately responsible for the different behavior of the two enzymes with regard to their temperature optima.
## Methods
### MD simulations
Models for the psychrophilic P. haloplanktis (AHA) and mesophilic porcine (Sus scrofa) pancreatic α-amylase (PPA) were based on structural information available in the Protein Data Bank (PDB). The highest-resolution X-ray structure available for each enzyme complexed with an acarbose inhibitor was chosen: 1G9436 (1.74-Å resolution) for AHA and 1HX038 (1.38 Å) for PPA. Acarbose was modified to represent a five-residue glucose oligomer, linked by α1–4 glycosidic bonds. Crystallographic waters were retained, except for those clashing with atoms of the modified substrate. The AHA and PPA systems were prepared for EVB/MD simulations33,34 by creating the enzyme–substrate complexes based on the crystallographic structures and solvating them with a 45-Å-radius spherical droplet of TIP3P water55 centered at the center of mass of each complex, thereby covering the entire protein. Water molecules close to the spherical boundary were subjected to radial and polarization restraints according to the SCAAS model56,57. To construct a reference system for calibrating the EVB potential surface, a simplified model was used based on the conserved α-amylase active-site structure. The model comprised the sidechains of the nucleophilic group (Asp174/197) and the protonated general acid (Glu200/233), both truncated at the Cβ carbon, together with a maltose disaccharide molecule representing the substrate at the −1 and +1 positions. The reference system was solvated in an 18-Å-radius sphere of TIP3P water. The OPLS-AA/M force field58 was used in all calculations. Nonbonded interactions beyond a 10-Å cutoff were treated by the local reaction field multipole expansion method59, except for the reacting groups, for which all interactions were explicitly calculated. All EVB/MD calculations were carried out with the Q program57, using a 1-fs time step as described earlier9.
To calculate average backbone and sidechain positional atomic fluctuations (RMSF), the enzyme–substrate complexes were solvated in a cubic TIP3P water box with a side length of 100 Å, using periodic boundary conditions (PBC) in GROMACS60. Here, we carried out longer MD simulations of the reactant state in AHA and PPA, again using the OPLS-AA/M force field. These standard GROMACS PBC simulations comprised six replicas of 50 ns each for both AHA and PPA at 283, 298, and 313 K and used a 2-fs time step.
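For orientation, the RMSF itself is a simple average over frames; a minimal sketch (an assumption for illustration, not the GROMACS tooling), given coordinates already aligned to a reference structure:

```python
# Per-atom RMSF from an aligned trajectory stored as an
# (n_frames, n_atoms, 3) coordinate array, in Å.
import numpy as np

def rmsf(coords):
    mean_pos = coords.mean(axis=0)                    # average structure
    disp2 = ((coords - mean_pos) ** 2).sum(axis=-1)   # squared displacements
    return np.sqrt(disp2.mean(axis=0))                # per-atom RMSF, Å

# Synthetic example: 1000 frames, 50 atoms fluctuating with sigma = 0.5 Å
coords = np.random.default_rng(0).normal(scale=0.5, size=(1000, 50, 3))
print(rmsf(coords)[:5])
```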
### DFT cluster model of reference reaction
The DFT cluster model was used to determine the structure and energetics of the rate-limiting glycosylation transition state in a continuum solvent model. The model consisted of the same chemical groups as those used in the EVB reference system, except that the amino acid sidechains were truncated at the Cα carbon. The glucose ring at the −1 position was modeled in a half-chair conformation in the initial TS structure guess (see Fig. 2c for the resulting optimized geometry). The energetics of this cluster model was evaluated with the M06-2X functional39 using the Gaussian09 program61. The transition state was first optimized, and geometry optimization was then carried out toward the reactant and intermediate states, using the 6-31G(d,p) basis set. The α-carbons of the two amino acid sidechains were kept fixed during the optimizations. Electronic energies were then recalculated for the optimized geometries with the larger 6-311+G(2d,2p) basis set, with solvent effects from the surrounding water estimated using the SMD model40. Zero-point energies were obtained from frequencies calculated at the same level of theory as the geometry optimizations. Free energy estimates were thus obtained by adding the zero-point energy and solvent contributions to the electronic energies obtained with the larger basis set.
### EVB/MD calculations of free energy profiles
The systems were first equilibrated for 246 ps at increasing temperatures, starting from 1 K, and heated in a stepwise manner toward the final desired temperature for EVB/MD production runs. EVB/MD free energy perturbation simulations11,34 were carried out with 51 discrete λ windows, starting at λ1=λ2 = 0.5, near the transition state, and propagated toward reactants (λ1 = 1) and intermediate (λ2 = 1). To calibrate the EVB reaction potential energy surface, the average free energy profile from 250 independent simulations for the reference reaction system in water was fitted to the corresponding free energy profile obtained by the DFT cluster calculations at 298 K. The temperature dependence of the reference reaction was analyzed, after calibration, by repeating the free energy profile calculations also at 283, 290.5, 305.5, and 313 K, again using 250 replicas at each temperature. Each individual free energy profile encompassed 510 ps, and the total simulation time for obtaining the reference system data was thus 0.64 μs.
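For orientation, the generic free energy perturbation estimator underlying such window-based profiles is the Zwanzig exponential average accumulated over adjacent λ windows; the sketch below uses synthetic energy-gap samples, not the EVB mapping potentials themselves:

```python
# Zwanzig/FEP accumulation over adjacent lambda windows:
#   dG = sum_i -RT * ln < exp(-(U_{i+1} - U_i) / RT) >_i
# The per-window energy-gap samples here are synthetic placeholders.
import numpy as np

RT = 1.987e-3 * 298.0     # kcal/mol at 298 K
rng = np.random.default_rng(0)

# dU[i]: samples of U(lambda_{i+1}) - U(lambda_i) collected in window i
dU = [rng.normal(loc=0.3, scale=0.5, size=5000) for _ in range(50)]

dG = sum(-RT * np.log(np.mean(np.exp(-w / RT))) for w in dU)
print(f"total dG = {dG:.2f} kcal/mol")
```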
The enzyme EVB/MD simulations for AHA and PPA were run at eight different temperatures from 278 to 313 K, with 5-K intervals. At each temperature, about 300 randomized individual replicate simulations were carried out to obtain the free energy profiles, using the same free energy perturbation scheme11 as above. In addition, a system with added distance restraints was analyzed for AHA. Weak harmonic restraints between the His263 and Asp264 sidechains of the enzyme and the O2 and O3 hydroxyl groups of the substrate glucose in the –1 position were then imposed during the EVB/MD calculations. This restrained system was simulated at 283, 298, 303, 308, and 313 K, again with 300 randomized replicas per temperature. The total simulation time of all enzyme EVB/MD calculations amounted to 3.2 μs. Fitting of enzyme activity curves to kinetic models was done with Gnuplot (http://www.gnuplot.info).
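The same kind of fit can be reproduced with any nonlinear least-squares tool; a minimal sketch with scipy (an assumption for illustration, the authors used Gnuplot), applied to synthetic rate data generated from the two-state model itself:

```python
# Fitting kcat(T) data to the two-state model of Eq. (1) with scipy.
# The "data" are synthetic: generated from the model plus noise.
import numpy as np
from scipy.optimize import curve_fit

R, KB_OVER_H = 1.987e-3, 2.084e10

def model(T, dH3, dS3, dHeq, dSeq):
    k3 = KB_OVER_H * T * np.exp(-(dH3 - T * dS3) / (R * T))
    Keq = np.exp(-(dHeq - T * dSeq) / (R * T))
    return k3 / (1.0 + Keq)

T = np.linspace(278.0, 313.0, 8)
rng = np.random.default_rng(1)
k = model(T, 10.2, -0.00925, 32.0, 0.10746) * (1 + 0.05 * rng.standard_normal(8))

popt, _ = curve_fit(model, T, k, p0=(10.0, -0.01, 30.0, 0.1), maxfev=20000)
print(dict(zip(("dH3", "dS3", "dHeq", "dSeq"), np.round(popt, 4))))
```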
### Reporting summary
Further information on research design is available in the Nature Research Reporting Summary linked to this article.
## Data availability
The data that support this study are available from the authors upon reasonable request.
## Code availability
Source code for the Q program57 is available at https://github.com/qusers/Q6, or upon request from J.Å.
## References
1. Low, P. S., Bada, J. L. & Somero, G. N. Temperature adaptation of enzymes: roles of the free energy, the enthalpy, and the entropy of activation. Proc. Natl Acad. Sci. USA 70, 430–432 (1973).
2. Siddiqui, K. S. & Cavicchioli, R. Cold-adapted enzymes. Annu. Rev. Biochem. 75, 403–433 (2006).
3. Feller, G. & Gerday, C. Psychrophilic enzymes: hot topics in cold adaptation. Nat. Rev. Microbiol. 1, 200–208 (2003).
4. Fields, P. A. & Somero, G. N. Hot spots in cold adaptation: localized increases in conformational flexibility in lactate dehydrogenase A4 orthologs of Antarctic notothenioid fishes. Proc. Natl Acad. Sci. USA 95, 11476–11481 (1998).
5. Åqvist, J., Isaksen, G. V. & Brandsdal, B. O. Computation of enzyme cold adaptation. Nat. Rev. Chem. 1, 0051 (2017).
6. Harms, M. J. & Thornton, J. W. Evolutionary biochemistry: revealing the historical and physical causes of protein properties. Nat. Rev. Genet. 14, 559–571 (2013).
7. Wallon, G. et al. Sequence and homology model of 3-isopropylmalate dehydrogenase from the psychrotrophic bacterium Vibrio sp. I5 suggest reasons for thermal instability. Protein Eng. Des. Sel. 10, 665–672 (1997).
8. Alvarez, M. et al. Triose-phosphate isomerase (TIM) of the psychrophilic bacterium Vibrio marinus: kinetic and structural properties. J. Biol. Chem. 273, 2199–2206 (1998).
9. Isaksen, G. V., Åqvist, J. & Brandsdal, B. O. Protein surface softness is the origin of enzyme cold-adaptation of trypsin. PLoS Comput. Biol. 10, e1003813 (2014).
10. Åqvist, J. Cold adaptation of triosephosphate isomerase. Biochemistry 56, 4169–4176 (2017).
11. Sočan, J., Kazemi, M., Isaksen, G. V., Brandsdal, B. O. & Åqvist, J. Catalytic adaptation of psychrophilic elastase. Biochemistry 57, 2984–2993 (2018).
12. Sočan, J., Isaksen, G. V., Brandsdal, B. O. & Åqvist, J. Towards rational computational engineering of psychrophilic enzymes. Sci. Rep. 9, 19147 (2019).
13. Bjelić, S., Brandsdal, B. O. & Åqvist, J. Cold adaptation of enzyme reaction rates. Biochemistry 47, 10049–10057 (2008).
14. D’Amico, S., Marx, J.-C., Gerday, C. & Feller, G. Activity-stability relationships in extremophilic enzymes. J. Biol. Chem. 278, 7891–7896 (2003).
15. D’Amico, S., Gerday, C. & Feller, G. Temperature adaptation of proteins: engineering mesophilic-like activity and stability in a cold-adapted α-amylase. J. Mol. Biol. 332, 981–988 (2003).
16. D’Amico, S., Gerday, C. & Feller, G. Dual effects of an extra disulfide bond on the activity and stability of a cold-adapted α-amylase. J. Biol. Chem. 277, 46110–46115 (2002).
17. D’Amico, S., Sohier, J. S. & Feller, G. Kinetics and energetics of ligand binding determined by microcalorimetry: insights into active site mobility in a psychrophilic α-amylase. J. Mol. Biol. 358, 1296–1304 (2006).
18. Pinto, G. P. et al. Establishing the catalytic mechanism of human pancreatic α-amylase with QM/MM methods. J. Chem. Theory Comput. 11, 2508–2516 (2015).
19. Kosugi, T. & Hayashi, S. Crucial role of protein flexibility in formation of a stable reaction transition state in an α-amylase catalysis. J. Am. Chem. Soc. 134, 7045–7055 (2012).
20. Bowman, A. L., Grant, I. M. & Mulholland, A. J. QM/MM simulations predict a covalent intermediate in the hen egg white lysozyme reaction with its natural substrate. Chem. Commun. 37, 4425–4427 (2008).
21. Pasi, M., Riccardi, L., Fantucci, P., De Gioia, L. & Papaleo, E. Dynamic properties of a psychrophilic α-amylase in comparison with a mesophilic homologue. J. Phys. Chem. B 113, 13585–13595 (2009).
22. Papaleo, E., Pasi, M., Tiberti, M. & De Gioia, L. Molecular dynamics of mesophilic-like mutants of a cold-adapted enzyme: insights into distal effects induced by the mutations. PLoS ONE 6, e24214 (2011).
23. Lanes, O., Leiros, I., Smalås, A. & Willassen, N. Identification, cloning, and expression of uracil-DNA glycosylase from Atlantic cod (Gadus morhua): characterization and homology modeling of the cold-active catalytic domain. Extremophiles 6, 73–86 (2002).
24. Assefa, N. G., Niiranen, L., Willassen, N. P., Smalås, A. & Moe, E. Thermal unfolding studies of cold adapted uracil-DNA N-glycosylase (UNG) from Atlantic cod (Gadus morhua). A comparative study with human UNG. Comp. Biochem. Physiol. B Biochem. Mol. Biol. 161, 60–68 (2012).
25. Xu, Y., Feller, G., Gerday, C. & Glansdorff, N. Moritella cold-active dihydrofolate reductase: are there natural limits to optimization of catalytic efficiency at low temperature? J. Bacteriol. 185, 5519–5526 (2003).
26. Coquelle, N., Fioravanti, E., Weik, M., Vellieux, F. & Madern, D. Activity, stability and structural studies of lactate dehydrogenases adapted to extreme thermal environments. J. Mol. Biol. 374, 547–562 (2007).
27. Altermark, B., Niiranen, L., Willassen, N. P., Smalås, A. O. & Moe, E. Comparative studies of endonuclease I from cold-adapted Vibrio salmonicida and mesophilic Vibrio cholerae. FEBS J. 274, 252–263 (2007).
28. Georlette, D. et al. Structural and functional adaptations to extreme temperatures in psychrophilic, mesophilic, and thermophilic DNA ligases. J. Biol. Chem. 278, 37015–37023 (2003).
29. Collins, T., Meuwis, M.-A., Gerday, C. & Feller, G. Activity, stability and flexibility in glycosidases adapted to extreme thermal environments. J. Mol. Biol. 328, 419–428 (2003).
30. Hobbs, J. K. et al. Change in heat capacity for enzyme catalysis determines temperature dependence of enzyme catalyzed rates. ACS Chem. Biol. 8, 2388–2393 (2013).
31. van der Kamp, M. W. et al. Dynamical origins of heat capacity changes in enzyme-catalysed reactions. Nat. Commun. 9, 1177 (2018).
32. Nguyen, V. et al. Evolutionary drivers of thermoadaptation in enzyme catalysis. Science 355, 289–294 (2017).
33. Warshel, A. Computer Modeling of Chemical Reactions in Enzymes and Solutions (John Wiley and Sons, New York, 1991).
34. Åqvist, J. & Warshel, A. Simulation of enzyme reactions using valence bond force fields and other hybrid quantum/classical approaches. Chem. Rev. 93, 2523–2544 (1993).
35. Rydberg, E. H. et al. Mechanistic analyses of catalysis in human pancreatic α-amylase: detailed kinetic and structural studies of mutants of three conserved carboxylic acids. Biochemistry 41, 4492–4502 (2002).
36. Aghajari, N., Roth, M. & Haser, R. Crystallographic evidence of a transglycosylation reaction: ternary complexes of a psychrophilic α-amylase. Biochemistry 41, 4273–4280 (2002).
37. Brayer, G. D. et al. Subsite mapping of the human pancreatic α-amylase active site through structural, kinetic, and mutagenesis techniques. Biochemistry 39, 4778–4791 (2000).
38. Qian, M. et al. Enzyme-catalyzed condensation reaction in a mammalian α-amylase: high-resolution structural analysis of an enzyme–inhibitor complex. Biochemistry 40, 7700–7709 (2001).
39. Zhao, Y. & Truhlar, D. G. The M06 suite of density functionals for main group thermochemistry, thermochemical kinetics, noncovalent interactions, excited states, and transition elements: two new functionals and systematic testing of four M06-class functionals and 12 other functionals. Theor. Chem. Acc. 120, 215–241 (2008).
40. Marenich, A. V., Cramer, C. J. & Truhlar, D. G. Universal solvation model based on solute electron density and on a continuum model of the solvent defined by the bulk dielectric constant and atomic surface tensions. J. Phys. Chem. B 113, 6378–6396 (2009).
41. Ishikawa, K., Matsui, I., Honda, K. & Nakatani, H. Substrate-dependent shift of optimum pH in porcine pancreatic α-amylase-catalyzed reactions. Biochemistry 29, 7119–7123 (1990).
42. Feller, G., le Bussy, O., Houssier, C. & Gerday, C. Structural and functional aspects of chloride binding to Alteromonas haloplanctis α-amylase. J. Biol. Chem. 271, 23836–23841 (1996).
43. Liu, J., Wang, X. & Xu, D. QM/MM study on the catalytic mechanism of cellulose hydrolysis catalyzed by cellulase Cel5A from Acidothermus cellulolyticus. J. Phys. Chem. B 114, 1462–1470 (2010).
44. Warshel, A. Calculations of enzymatic reactions: calculations of pKa, proton transfer reactions, and general acid catalysis reactions in enzymes. Biochemistry 20, 3167–3177 (1981).
45. Vocadlo, D. J., Davies, G. J., Laine, R. & Withers, S. G. Catalysis by hen egg-white lysozyme proceeds via a covalent intermediate. Nature 412, 835–838 (2001).
46. Zhang, R. et al. Directed “in situ” inhibitor elongation as a strategy to structurally characterize the covalent glycosyl-enzyme intermediate of human pancreatic α-amylase. Biochemistry 48, 10752–10764 (2009).
47. Warshel, A., Hwang, J. K. & Åqvist, J. Computer simulations of enzymatic reactions: examination of linear free-energy relationships and quantum-mechanical corrections in the initial proton-transfer step of carbonic anhydrase. Faraday Discuss. 93, 225–238 (1992).
48. Warshel, A. Energetics of enzyme catalysis. Proc. Natl Acad. Sci. USA 75, 5250–5254 (1978).
49. Feierberg, I. & Åqvist, J. Computational modeling of enzymatic keto-enol isomerization reactions. Theor. Chem. Acc. 108, 71–84 (2002).
50. Aghajari, N., Feller, G., Gerday, C. & Haser, R. Structural basis of α-amylase activation by chloride. Protein Sci. 11, 1435–1441 (2002).
51. Prabhu, N. V. & Sharp, K. A. Heat capacity in proteins. Annu. Rev. Phys. Chem. 56, 521–548 (2005).
52. Lonhienne, T., Gerday, C. & Feller, G. Psychrophilic enzymes: revisiting the thermodynamic parameters of activation may explain local flexibility. Biochim. Biophys. Acta 1543, 1–10 (2000).
53. Daniel, R. M. & Danson, M. J. A new understanding of how temperature affects the catalytic activity of enzymes. Trends Biochem. Sci. 35, 584–591 (2010).
54. D’Amico, S., Gerday, C. & Feller, G. Structural similarities and evolutionary relationships in chloride-dependent α-amylases. Gene 253, 95–105 (2000).
55. Jorgensen, W. L., Chandrasekhar, J., Madura, J. D., Impey, R. W. & Klein, M. L. Comparison of simple potential functions for simulating liquid water. J. Chem. Phys. 79, 926–935 (1983).
56. King, G. & Warshel, A. A surface constrained all-atom solvent model for effective simulations of polar solutions. J. Chem. Phys. 91, 3647–3661 (1989).
57. Marelius, J., Kolmodin, K., Feierberg, I. & Åqvist, J. Q: a molecular dynamics program for free energy calculations and empirical valence bond simulations in biomolecular systems. J. Mol. Graph. Model. 16, 213–225 (1998).
58. Robertson, M. J., Tirado-Rives, J. & Jorgensen, W. L. Improved peptide and protein torsional energetics with the OPLS-AA force field. J. Chem. Theory Comput. 11, 3499–3509 (2015).
59. Lee, F. S. & Warshel, A. A local reaction field method for fast evaluation of long-range electrostatic interactions in molecular simulations. J. Chem. Phys. 97, 3100–3107 (1992).
60. Abraham, M. J. et al. GROMACS: high performance molecular simulations through multi-level parallelism from laptops to supercomputers. SoftwareX 1–2, 19–25 (2015).
61. Frisch, M. J. et al. Gaussian 09, Revision D.01 (Gaussian Inc., Wallingford, CT, 2009).
## Acknowledgements
Support from the Swedish Research Council (VR), the Knut and Alice Wallenberg Foundation, and the Swedish National Infrastructure for Computing (SNIC) is gratefully acknowledged. Open access funding provided by Uppsala University.
## Author information
### Contributions
J.S. performed experiments. J.Å. designed the study. J.S., M.P., and J.Å. analyzed the data and wrote the paper.
### Corresponding author
Correspondence to Johan Åqvist.
## Ethics declarations
### Competing interests
The authors declare no competing interests.
Peer review information Nature Communications thanks Guillaume Lamoureux, and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Rights and permissions
Sočan, J., Purg, M. & Åqvist, J. Computer simulations explain the anomalous temperature optimum in a cold-adapted enzyme. Nat Commun 11, 2644 (2020). https://doi.org/10.1038/s41467-020-16341-2
# Remove __init__ files from generated templates
## Details
• Type: Improvement
• Status: Done
• Resolution: Done
• Fix Version/s: None
• Component/s: None
• Labels: None
• Story Points: 0.9
• Team: SQuaRE
## Description
The template package currently creates __init__.py files that contain this boilerplate when generating a stack_package (and possibly other types of templates):
    # ... standard header...
    import pkgutil, lsstimport
    __path__ = pkgutil.extend_path(__path__, __name__)
These files are completely unnecessary in Python 3 and add an unneeded dependency on the base package. Please stop generating these files.
Please also remove base and utils as dependencies in the ups table file. As far as I can see, the former is only used for lsstimport (which we no longer need) and the latter is not used at all.
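For context on why the files are unnecessary: since Python 3.3, PEP 420 implicit namespace packages let imports resolve without any __init__.py. A minimal, self-contained sketch (the lsst/example layout here is hypothetical and created on the fly; it is not the template's actual contents):

```python
# Demonstrate PEP 420 implicit namespace packages: no __init__.py needed.
import os
import sys
import tempfile

root = tempfile.mkdtemp()
pkg_dir = os.path.join(root, "python", "lsst", "example")
os.makedirs(pkg_dir)                      # note: no __init__.py anywhere
with open(os.path.join(pkg_dir, "helper.py"), "w") as f:
    f.write("GREETING = 'hello from a namespace package'\n")

sys.path.insert(0, os.path.join(root, "python"))
from lsst.example import helper           # resolves without pkgutil boilerplate

print(helper.GREETING)
```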
## Activity
Jonathan Sick added a comment -
Sounds good.
utils got added because they're so heavily used by typical testing scenarios (i.e., someone complained that it wasn't present). But I agree that we should probably expect a developer to know when utils is necessary.
Russell Owen added a comment -
utils is not normally used by T&S, which is why I prefer it not be included in the ups table file. But based on a conversation on #dm-docs, T&S should have a separate template anyway, since the docs layout is likely to be simpler.
Jonathan Sick added a comment -
Good points.
By the way, I'm 70% sure I can accommodate the simpler doc format in a common template. Shall I give it a shot? Are there other key things you need to keep changing for T&S packages? (If it doesn't work easily, I'm super happy to add a ts_package template as we discussed.)
Russell Owen added a comment -
I know of one more possible hitch for T&S, which I hope Andy Clements can speak to: the boilerplate for where to look up package issues was wrong for T&S software. But we just switched to using DM's Jira setup, and as long as T&S plans to add a component for each ts_ package, that boilerplate should be perfect. If Andy Clements says we will have those components, then my personal opinion is that it would be well worth trying to use the standard template for T&S packages.
Russell Owen added a comment -
Looks great! One small suggested change to the .table files which you can make or not as you see fit.
Jonathan Sick added a comment -
I believe that the stack_package template now documents RFC-52, and implements it.
Jonathan Sick added a comment -
Merged. Thanks for the prompting to do this.
## People
• Assignee: Jonathan Sick
• Reporter: Russell Owen
• Reviewers: Russell Owen
• Watchers: Jonathan Sick, Russell Owen
• ### The Principle of Similitude in Biology: From Allometry to the Formulation of Dimensionally Homogeneous 'Laws' (1707.02340)
Meaningful laws of nature must be independent of the units employed to measure the variables. The principle of similitude (Rayleigh 1915), or dimensional homogeneity, states that only commensurable quantities (ones having the same dimension) may be compared; therefore, meaningful laws of nature must be homogeneous equations in their various units of measurement, a result which was formalized in the $\rm \Pi$ theorem (Vaschy 1892; Buckingham 1914). However, most relations in allometry do not satisfy this basic requirement, including the '3/4 Law' (Kleiber 1932) that relates the basal metabolic rate and body mass, which is sometimes claimed to be the most fundamental biological rate (Brown et al. 2004) and the closest thing to a law in the life sciences (West \& Brown 2004). Using the $\rm \Pi$ theorem, here we show that it is possible to construct a unique homogeneous equation for the metabolic rates, in agreement with data in the literature. We find that the variations in the dependence of the metabolic rates on body mass are secondary, coming from variations in the allometric dependence of the heart frequencies. This holds not only for different classes of animals (mammals, birds, invertebrates) but also for different exercise conditions (basal and maximal). Our results demonstrate that most of the differences found in the allometric exponents (White et al. 2007) are due to comparing incommensurable quantities, and that our dimensionally homogeneous formula unifies these differences into a single formulation. We discuss the ecological implications of this new formulation in the context of Malthus' and Fenchel's relations and of the total energy consumed in a lifespan.
• ### How AGN and SNe feedback affect mass transport and black hole growth in high redshift galaxies (1701.06172)
Jan. 22, 2017 astro-ph.GA
By using cosmological hydrodynamical simulations, we study the effect of supernova (SN) and active galactic nuclei (AGN) feedback on the mass transport of gas on to galactic nuclei and the black hole (BH) growth down to redshift z~6. We study the BH growth in relation to the mass transport processes associated with gravity and pressure torques, and how they are modified by feedback. Cosmological gas funnelled through cold flows reaches the galactic outer region close to free-fall. Torques associated with pressure, triggered by turbulent gas motions produced in the circumgalactic medium by shocks and SN explosions, are then the main source of mass transport beyond the central ~100 pc. Due to high concentrations of mass in the central galactic region, gravitational torques tend to be more important at high redshift. The combined effect of almost free-falling material and both gravity and pressure torques produces a mass accretion rate of order ~1 M_sun/yr at ~pc scales. In the absence of SN feedback, AGN feedback alone does not affect significantly either star formation or BH growth until the BH reaches a sufficiently high mass of $\sim 10^6$ M_sun to self-regulate. SN feedback alone, instead, decreases both stellar and BH growth. Finally, SN and AGN feedback in tandem efficiently quench the BH growth, while star formation remains at the levels set by SN feedback alone due to the small final BH mass, ~ few 10^5 M_sun. SNe create a more rarefied and hot environment where energy injection from the central AGN can accelerate the gas further.
• ### Unveiling the role of galactic rotation on star formation (1606.03454)
Dec. 30, 2016 astro-ph.GA
We study the star formation process at galactic scales and the role of rotation through numerical simulations of spiral and starburst galaxies using the adaptive mesh refinement code Enzo. We focus on the study of three integrated star formation laws found in the literature: the Kennicutt-Schmidt (KS) and Silk-Elmegreen (SE) laws, and the dimensionally homogeneous equation proposed by Escala (2015), $\Sigma_{\rm SFR} \propto \sqrt{G/L}\Sigma_{\rm gas}^{1.5}$. We show that by using the last of these we take into account the effects of the integration along the line of sight and find a unique regime of star formation for both types of galaxies, suppressing the observed bi-modality of the KS law. We find that the efficiencies displayed by our simulations are anti-correlated with the angular velocity of the disk $\Omega$ for the three laws studied in this work. Finally, we show that the dimensionless efficiency of star formation is well represented by an exponentially decreasing function of $-1.9\Omega t_{\rm ff}^{\rm ini}$, where $t_{\rm ff}^{\rm ini}$ is the initial free-fall time. This leads to a unique galactic star formation relation which reduces the scatter of the bi-modal KS, SE, and Escala (2015) relations by 43%, 43%, and 35%, respectively.
• ### A Portrait of Cold Gas in Galaxies at 60pc Resolution and a Simple Method to Test Hypotheses That Link Small-Scale ISM Structure to Galaxy-Scale Processes (1606.07077)
June 22, 2016 astro-ph.GA
The cloud-scale density, velocity dispersion, and gravitational boundedness of the interstellar medium (ISM) vary within and among galaxies. In turbulent models, these properties play key roles in the ability of gas to form stars. New high-fidelity, high-resolution surveys offer the prospect of measuring these quantities across galaxies. We present a simple approach to make such measurements and to test hypotheses that link small-scale gas structure to star formation and galactic environment. Our calculations capture the key physics of the Larson scaling relations, and we show good correspondence between our approach and a traditional "cloud properties" treatment. However, we argue that our method is preferable in many cases because of its simple, reproducible characterization of all emission. Using low-J 12CO data from recent surveys, we characterize the molecular ISM at 60pc resolution in the Antennae, the Large Magellanic Cloud, M31, M33, M51, and M74. We report the distributions of surface density, velocity dispersion, and gravitational boundedness at 60pc scales and show galaxy-to-galaxy and intra-galaxy variations in each. The distribution of flux as a function of surface density appears roughly lognormal with a 1sigma width of ~0.3 dex, though the center of this distribution varies from galaxy to galaxy. The 60pc resolution line width and molecular gas surface density correlate well, which is a fundamental behavior expected for virialized or free-falling gas. Varying the measurement scale for the LMC and M31, we show that the molecular ISM has higher surface densities, lower line widths, and more self-gravity at smaller scales.
• ### Multiscale mass transport in z~6 galactic discs: fueling black holes (1512.02446)
May 25, 2016 astro-ph.GA
By using AMR cosmological hydrodynamic N-body zoom-in simulations with the RAMSES code, we studied the mass transport processes onto galactic nuclei from high redshift down to $z\sim6$. Due to the large dynamical range of the simulations, we were able to study the mass accretion process on scales from $\sim 50$ kpc down to a few pc. We studied the BH growth at the galactic center in relation to the mass transport processes associated with both the Reynolds stress and the gravitational stress on the disc. This methodology allowed us to identify the main mass transport process as a function of the scales of the problem. We found that in simulations that include radiative cooling and SN feedback, the SMBH grows at the Eddington limit for some periods of time, presenting $\langle f_{EDD}\rangle\approx 0.5$ throughout its evolution. The $\alpha$ parameter is dominated by the Reynolds term, $\alpha_R$, with $\alpha_R\gg 1$. The gravitational part of the $\alpha$ parameter, $\alpha_G$, has an increasing trend toward the galactic center at higher redshifts, with values $\alpha_G\sim 1$ at radii below a few tens of pc, contributing to the BH fueling. In terms of torques, we also found that gravity has an increasing contribution toward the galactic center at earlier epochs, with a mixed contribution above $\sim 100$ pc. This complementary work between pressure gradients and gravitational potential gradients allows an efficient mass transport on the disc, with average mass accretion rates of the order of a few $M_{\odot}$/yr. Such levels of SMBH accretion found in our cosmological simulations are needed in all models of SMBH growth that attempt to explain the formation of redshift $6-7$ quasars.
• ### Super massive black holes in star forming gaseous circumnuclear discs (1503.01664)
Sept. 21, 2015 astro-ph.GA
Using N-body/SPH simulations, we study the evolution of the separation of a pair of SMBHs embedded in a star-forming circumnuclear disk (CND). This type of disk is expected to form in the central kiloparsec of the remnant of gas-rich galaxy mergers. Our simulations indicate that orbital decay of the SMBHs occurs more quickly when the mean density of the CND is higher, due to increased dynamical friction. However, in simulations where the CND is fragmented into high-density gaseous clumps (a clumpy CND), the orbits of the SMBHs are erratically perturbed by the gravitational interaction with these clumps, delaying, in some cases, the orbital decay of the SMBHs. The densities of these gaseous clumps in our simulations and in recent studies of clumpy CNDs are significantly higher than the observed density of molecular clouds in isolated galaxies or ULIRGs; thus, we expect that SMBH orbits are perturbed less in real CNDs than in the simulated CNDs of this study and other recent studies. We also find that the migration timescale has a weak dependence on the star formation rate of the CND. Furthermore, the migration timescale of a SMBH pair in a star-forming clumpy CND is at most a factor of three longer than that of a pair of SMBHs in a CND modeled with simpler gas physics. Therefore, we estimate that the migration timescale of the SMBHs in a clumpy CND is on the order of $10^7$ yrs.
• ### On the Functional Form of the Universal Star Formation Law(1411.7043)
Feb. 19, 2015 astro-ph.GA
We study the functional form of the star formation law using the Vaschy-Buckingham Pi theorem. We find that it should have the form $\rm \dot{\Sigma}_{\star} \propto \sqrt{\frac{G}{L}}\Sigma_{gas}^{3/2}$, where L is a characteristic length related to an integration scale. With a reasonable estimate for L, we find that galaxies of different types and redshifts, including Low Surface Brightness galaxies, and individual star-forming regions in our galaxy, obey this single star formation law. We also find that, depending on the assumption for L, this star formation law adopts different formulations of the $\rm \dot{\Sigma}_{\star}$ scaling that are widely studied in the literature: $\rm \Sigma_{gas}^{3/2}, \Sigma_{gas}/t_{orb}, \Sigma_{gas}/t_{ff} \, and \, \Sigma_{gas}^{2}/v_{turb}$. We also study secondary controlling parameters of the star formation law, based on current evidence from numerical simulations, and find that for galaxies the star formation efficiency should be controlled, at least, by the turbulent Toomre parameter and the sonic and Alfvenic Mach numbers.
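A quick dimensional check (my own addition, not from the abstract) confirms that the proposed form carries the units of a star formation rate surface density, $M\,L^{-2}\,T^{-1}$:
$$\left[\sqrt{\tfrac{G}{L}}\,\Sigma_{gas}^{3/2}\right] = \left(\frac{L^3 M^{-1} T^{-2}}{L}\right)^{1/2}\left(M L^{-2}\right)^{3/2} = \left(L\,M^{-1/2}\,T^{-1}\right)\left(M^{3/2} L^{-3}\right) = M\,L^{-2}\,T^{-1} = \left[\dot{\Sigma}_{\star}\right].$$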
• ### The Interstellar Medium and Star Formation in Local Galaxies: Variations of the Star Formation Law in Simulations(1401.5082)
April 23, 2014 astro-ph.CO, astro-ph.GA
We use the Adaptive Mesh Refinement code Enzo to model the interstellar medium in isolated local disk galaxies. The simulation includes a treatment for star formation and stellar feedback. We obtain a highly supersonic turbulent disk, fragmented at multiple scales and characterized by a multi-phase interstellar medium. We show that a Kennicutt-Schmidt (KS) relation only holds when averaging over large scales. However, values of star formation rates and gas surface densities lie close together in the plot for any averaging size. This suggests an intrinsic relation between stars and gas at cell-size scales, which dominates over the global dynamical evolution. To investigate this effect, we develop a method to simulate the creation of stars based on the density field from the snapshots, without running the code again. We also investigate how the star formation law is affected by the characteristic star formation timescale, the density threshold, and the efficiency considered in the recipe. We find that the slope of the law might vary from ~1.4 for a free-fall timescale to ~1.0 for a constant depletion timescale. We further demonstrate that a power law is recovered just by assuming that the mass of the new stars is a fraction of the mass of the cell, $m_\star=\epsilon\rho_{\rm gas}\Delta x^3$, with no other physical criteria required. We show that neither the efficiency nor the density threshold affects the slope, but the right combination of them can adjust the normalization of the relation, which in turn could explain a possible bi-modality in the law.
• ### Binary Disk interaction II: Gap-Opening criteria for unequal mass binaries(1310.4509)
Oct. 16, 2013 astro-ph.GA
We study the interaction between an unequal-mass binary and an isothermal circumbinary disk, motivated by the theoretical and observational evidence that after a major merger of gas-rich galaxies, a massive gaseous disk with an SMBH binary will be formed in the nuclear region. We focus on the gravitational torques that the binary exerts on the disk and how these torques can drive the formation of a gap in the disk. This exchange of angular momentum between the binary and the disk is mainly driven by the gravitational interaction between the binary and a strong non-axisymmetric density perturbation that is produced in the disk in response to the presence of the binary. Using SPH numerical simulations we tested two gap-opening criteria, one that assumes that the geometry of the density perturbation is an ellipsoid/thick spirals and another that assumes a flat-spiral geometry for the density perturbation. We find that the flat-spirals gap-opening criterion successfully predicts which simulations will have a gap in the disk and which will not. We also study the limiting cases predicted by the gap-opening criteria. Since the viscosity in our simulations is considerably smaller than the expected value in the nuclear regions of gas-rich merging galaxies, we conclude that in such environments the formation of a circumbinary gap is unlikely.
• ### The Impact of Bound Stellar Orbits and General Relativity on the Temporal Behavior of Tidal Disruption Flares(1303.4837)
Sept. 18, 2013 astro-ph.HE
We have carried out general relativistic particle simulations of stars tidally disrupted by massive black holes. When a star is disrupted in a bound orbit with moderate eccentricity instead of a parabolic orbit, the temporal behavior of the resulting stellar debris changes qualitatively. The debris is initially all bound, returning to pericenter in a short time ~ the original stellar orbital timescale. The resulting fallback rate can thus be much higher than the Eddington rate. Furthermore if the star is disrupted close to the hole, in a regime where general relativity is important, the stellar and debris orbits display general relativistic precession. Apsidal precession can make the debris stream cross itself after several orbits, likely leading to fast debris energy dissipation. If the star is disrupted in an inclined orbit around a spinning hole, nodal precession reduces the probability of self-intersection, and circularization may take many dynamical timescales, delaying the onset of flare activity. An examination of the particle dynamics suggests that quasi-periodic flares with short durations, produced when the center of the tidal stream passes pericenter, may occur in the early-time light curve. The late-time light curve may still show power-law behavior which is generic to disk accretion processes. The detection triggers for future surveys should be extended to capture such "non-standard" short-term flaring activity before the event enters the asymptotic decay phase, as this activity is likely to be more sensitive to physical parameters such as the black hole spin.
• ### Near-infrared adaptive optics imaging of infrared luminous galaxies: the brightest cluster magnitude - star formation rate relation(1308.6293)
Aug. 28, 2013 astro-ph.CO
We have established, for the first time in the near infrared (NIR), a relation between the brightest super star cluster magnitude in a galaxy and the host star formation rate (SFR). The data come from a statistical sample of ~40 luminous IR galaxies (LIRGs) and starbursts utilizing K-band adaptive optics imaging. The relation extends previous work to longer wavelengths, which are less affected by extinction, and it also pushes to higher SFRs. The relation we find, M_K ~ -2.6 log SFR, is similar to that derived previously in the optical and at lower SFRs. It does not, however, fit the optical relation with a single optical-to-NIR color conversion, suggesting systematic extinction and/or age effects. While the relation is broadly consistent with a size-of-sample explanation, we argue that physical reasons for the relation are likely as well. In particular, the scatter in the relation is smaller than expected from pure random sampling, strongly suggesting physical constraints. We also derive a quantifiable relation tying together cluster-internal effects and host SFR properties to possibly explain the observed brightest SSC magnitude vs. SFR dependency.
• ### Binary-disk interaction: Gap-Opening criteria(1209.5988)
Nov. 21, 2012 astro-ph.CO, astro-ph.GA
We study the interaction of an equal-mass binary with an isothermal circumbinary disk, motivated by the theoretical and observational evidence for the formation of massive black hole binaries surrounded by gas after a major merger of gas-rich galaxies. We focus on the torques that the binary produces on the disk and how the exchange of angular momentum can drive the formation of a gap in it. We propose that the angular momentum exchange between the binary and the disk occurs through the gravitational interaction of the binary with a (tidally formed) global non-axisymmetric perturbation in the disk. Using this gravitational interaction we derive an analytic criterion for the formation of a gap in the disk that can be expressed in terms of the structural parameters h/a and M(< r)/M_{bin}. Using SPH simulations we show that the simulations where the binary opens a gap in the disk and those where it does not are distributed in two well-separated regions. Our analytic gap-opening criterion predicts a shape of the threshold between these two regions that is consistent with our simulations and with others in the literature. We propose an analogy between the regime without (with) a gap in the disk and the Type I (Type II) migration that is observed in simulations of planet-disk interaction (binaries with extreme mass ratios), emphasizing that the interaction that drives the formation of a gap in the disk is different in the regime that we analyze (comparable-mass binaries).
• ### Gravitational Fragmentation in Galaxy Mergers: A Stability Criteria(1202.1283)
Oct. 4, 2012 astro-ph.CO
We study the gravitational stability of gaseous streams in the complex environment of a galaxy merger, because mergers are known to be places of ongoing massive cluster formation and bursts of star formation. We find an analytic stability parameter for the case of gaseous streams orbiting around the merger remnant. We test our stability criterion using hydrodynamical simulations of galaxy mergers, obtaining satisfactory results. We find that our criterion successfully predicts which streams will be gravitationally unstable and fragment into clumps.
• ### Central regions of LIRGs: rings, hidden starbursts, Supernovae and star clusters(1202.6236)
Feb. 28, 2012 astro-ph.CO
We study star formation (SF) in very active environments, in luminous IR galaxies, which are often interacting. A variety of phenomena are detected, such as central starbursts, circumnuclear SF, obscured SNe tracing the history of recent SF, massive super star clusters, and sites of strong off-nuclear SF. All of these can ultimately be used to define the sequence of triggering and propagation of star formation and its interplay with nuclear activity in the lives of gas-rich galaxy interactions and mergers. In this paper we present analysis of high-spatial-resolution integral field spectroscopy of the central regions of two interacting LIRGs. We detect a nuclear 3.3 um PAH ring around the core of NGC 1614 with thermal-IR IFU observations. The ring's characteristics and its relation to the strong star-forming ring detected in recombination lines are presented, as well as a scenario of an outward-expanding starburst likely initiated by a (minor) companion detected within a tidal feature. We then present NIR IFU observations of IRAS 19115-2124, aka the Bird, which is an intriguing triple encounter. The third component is a minor one, but it is nevertheless the source of 3/4 of the SFR of the whole system. Gas inflows and outflows are detected at the locations of the nuclei. Finally, we briefly report on our on-going NIR adaptive optics imaging survey of several dozen LIRGs. We have detected highly obscured core-collapse SNe in the central kpc, and discuss the statistics of "missing SNe" due to dust extinction. We are also determining the characteristics of hundreds of super star clusters in and around the core regions of LIRGs, as a function of host-galaxy properties.
• ### A law for star formation in galaxies(1104.3596)
April 18, 2011 astro-ph.CO
We study the galactic-scale triggering of star formation. We find that the largest mass-scale not stabilized by rotation, a well-defined quantity in a rotating system with clear dynamical meaning, strongly correlates with the star formation rate in a wide range of galaxies. We find that this relation can be understood in terms of self-regulation towards marginal Toomre stability and the amount of turbulence allowed to sustain the system in this self-regulated quasi-stationary state. We test this interpretation by computing the predicted star formation rates for a galactic interstellar medium characterized by a lognormal probability distribution function and find good agreement with the observed relation.
• ### Direct Formation of Supermassive Black Holes via Multi-Scale Gas Inflows in Galaxy Mergers(0912.4262)
Dec. 22, 2009 astro-ph.CO
Observations of distant bright quasars suggest that billion solar mass supermassive black holes (SMBHs) were already in place less than a billion years after the Big Bang. Models in which light black hole seeds form by the collapse of primordial metal-free stars cannot explain their rapid appearance due to inefficient gas accretion. Alternatively, these black holes may form by direct collapse of gas at the center of protogalaxies. However, this requires metal-free gas that does not cool efficiently and thus is not turned into stars, in contrast with the rapid metal enrichment of protogalaxies. Here we use a numerical simulation to show that mergers between massive protogalaxies naturally produce the required central gas accumulation with no need to suppress star formation. Merger-driven gas inflows produce an unstable, massive nuclear gas disk. Within the disk a second gas inflow accumulates more than 100 million solar masses of gas in a sub-parsec scale cloud in one hundred thousand years. The cloud undergoes gravitational collapse, which eventually leads to the formation of a massive black hole. The black hole can grow to a billion solar masses in less than a billion years by accreting gas from the surrounding disk.
• ### On the global triggering mechanism of star formation in galaxies(0909.4318)
Sept. 23, 2009 astro-ph.CO
We study the large-scale triggering of star formation in galaxies. We find that the largest mass-scale not stabilized by rotation, a well-defined quantity in a rotating system with clear dynamical meaning, strongly correlates with the star formation rate in a wide range of galaxies. We find that this relation can be explained in terms of the threshold for stability and the amount of turbulence allowed to sustain the system in equilibrium. Using this relation we also derive the observed correlation between the star formation rate and the luminosity of the brightest young stellar cluster.
• ### Stability of Galactic Gaseous Disks and the Formation of Massive Clusters(0808.1280)
Aug. 8, 2008 astro-ph
We study gravitational instabilities in disks, with special attention to the most massive clumps that form because they are expected to be the progenitors of globular-type clusters. The maximum unstable mass is set by rotation and depends only on the surface density and orbital frequency of the disk. We propose that the formation of massive clusters is related to this largest scale in galaxies not stabilized by rotation. Using data from the literature, we predict that globular-like clusters can form in nuclear starburst disks and protogalactic disks but not in typical spiral galaxies, in agreement with observations.
• ### Formation of Nuclear Disks and Supermassive Black Hole Binaries in Multi-Scale Hydrodynamical Galaxy Mergers(0807.3329)
July 22, 2008 astro-ph
(Abridged) We review the results of the first multi-scale, hydrodynamical simulations of mergers between galaxies with central supermassive black holes (SMBHs) to investigate the formation of SMBH binaries in galactic nuclei. We demonstrate that strong gas inflows produce nuclear disks at the centers of merger remnants whose properties depend sensitively on the details of gas thermodynamics. In numerical simulations with parsec-scale spatial resolution in the gas component and an effective equation of state appropriate for a starburst galaxy, we show that an SMBH binary forms very rapidly, less than a million years after the merger of the two galaxies. Binary formation is significantly suppressed in the presence of a strong heating source such as radiative feedback by the accreting SMBHs. We also present preliminary results of numerical simulations with ultra-high spatial resolution of 0.1 pc in the gas component. These simulations resolve the internal structure of the resulting nuclear disk down to parsec scales and demonstrate the formation of a central massive object (~$10^8\,M_{\odot}$) by efficient angular momentum transport. This is the first time that a radial gas inflow is shown to extend to parsec scales as a result of the dynamics and hydrodynamics involved in a galaxy merger, and it has important implications for the fueling of SMBHs. Due to the rapid formation of the central clump, the density of the nuclear disk decreases significantly in its outer region, dramatically reducing the effect of dynamical friction and leading to the stalling of the two SMBHs at a separation of ~1 pc. We discuss how the orbital decay of the black holes might continue in a more realistic model which incorporates star formation and the multi-phase nature of the ISM.
• ### Stability of Galactic Gas Disks and the Formation of Massive Clusters(0806.0853)
June 4, 2008 astro-ph
We study gravitational instabilities in disks, with special attention to the most massive clumps that form because they are expected to be the progenitors of globular-type clusters. The maximum unstable mass is set by rotation and depends only on the surface density and orbital frequency of the disk. We propose that the formation of massive clusters is related to this largest scale in galaxies not stabilized by rotation. Using data from the literature, we predict that globular-like clusters can form in nuclear starburst disks and protogalactic disks but not in typical spiral galaxies, in agreement with observations.
• ### Towards a Comprehensive Fueling-Controlled Theory on the Growth of Massive Black Holes and Host Spheroids(0705.4457)
May 30, 2007 astro-ph
|
# Iterating Composition over a list of tuples
I produce a function list with
Tuples[{a, r}, 2]
(* {{a, a}, {a, r}, {r, a}, {r, r}} *)
where functions a and r represent adding some fraction and taking the reciprocal:
a[z_, fraction_] := z + fraction
r[z_] := 1/z
I want to Composition[] each of the ordered pairs in the tuples list, e.g.
a[a[x,frac]], then a[r[x]], etc. into a new list:
{a[a[x,frac]],a[r[x,frac]],r[a[x,frac]],r[r[x,frac]]}
with the values for each of the list elements being the computed values of each of the composed functions.
The second term above would look like this:
Composition[a, r][y]
(* 1/2 + 1/y *)
• What is r[x,frac] supposed to return when you've only defined r[z_]? What about a[r[x,frac]] when you've only defined a to take two arguments? E.g. the example you give at the end will evaluate to a[1/y], not 1/2 + 1/y, with a, r defined the way they are here. – N.J.Evans Nov 4 '16 at 17:02
• Good point. My mistake. A single argument for both functions works just fine, since the fraction used in the add function a[] can be passed down from the calling function. – James T. Nov 4 '16 at 21:08
Unless you really want the functions to take different numbers of arguments (which is not clear from the question):
a[z_] := z + 1/2
r[z_] := 1/z
tuples = Tuples[{a, r}, 2]
Composition[##][x] & @@@ tuples
{1 + x, 1/2 + 1/x, 1/(1/2 + x), x}
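To unpack the one-liner: ## stands for the whole sequence of arguments of the pure function, and @@@ is Apply at level 1, which replaces the head of each inner list. So Composition[##][x] & @@@ tuples turns each pair {f, g} into Composition[f, g][x] before evaluating it at x.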
• Thanks so much! I have almost no experience with that type of expression/notation, and had wondered if that was the way to go here. Took me a bit to see how it works, but it is clear to me now. So elegant and concise. And I really appreciate how quickly you solved this for me. As for the use of the fraction in the function a[], that can be a variable in the calling function. – James T. Nov 4 '16 at 21:05
• See also the other answer by march. If you find the answers given here useful, feel free to upvote them by clicking the top-oriented grey triangle. The answer you like most can be accepted by clicking the grey checkmark. – corey979 Nov 4 '16 at 21:08
• If I wanted to get the successive compositions of each ordered pair from the inside out using ComposeList[], how would I have to modify the expression? In other words, I want a list {{x,a[x],a[a[x]]},...}. – James T. Nov 4 '16 at 21:40
• That looks more like a NestList. You already answered: ComposeList. But do not extend the scope of the initial question in comments. It's better to ask a new one. – corey979 Nov 4 '16 at 21:44
I will redefine your functions so that they both take two inputs, and both pass fraction through as output, so that in the composition we can keep track of what fraction is, even though r doesn't depend on it:
Clear[a, r]
a[{z_, frac_}] := {z + frac, frac}
r[{z_, frac_}] := {1/z, frac}
Then, we will define a function that takes as inputs z and frac, forms all compositions of the function, then applies the compositions to the inputs. Finally, at the end, we extract just the desired output:
applyComposition[z_, frac_] := First /@ With[
{f = Composition @@@ Tuples[{a, r}, 2]},
Through[f[{z, frac}]]
]
Then,
applyComposition[x, frac]
(* {2 frac + x, frac + 1/x, 1/(frac + x), x} *)
• Thanks for an interesting different approach! – James T. Nov 4 '16 at 21:11
A pure operator version of corey's answer (i.e., a version without a pure function) could go:
tf = Through @* (Composition @@@ tuples)
Through@*{a@*a, a@*r, r@*a, r@*r}
Then:
tf[x]
{1 + x, 1/2 + 1/x, 1/(1/2 + x), x}
|
# Technical Papers and Additional References
The documentation web pages describe the key features of APOGEE observations, data analysis, and data products, but, by necessity, the pages must be concise. The APOGEE technical papers provide more details and context for the APOGEE data.
The references that are required for all use of the APOGEE data are given in Required References, and our current full compilation of papers is given in All Technical Papers. References for convenience data products added to APOGEE data products are given in non-SDSS Data References. References for catalogs used for Targeting are summarized in Targeting References.
## Required References
In addition to the SDSS required references and acknowledgement, all users of APOGEE data should acknowledge the following publications in papers and presentations:
In addition, those using stellar parameters or chemical abundances need to cite the following publications:
Lastly, users should consider the list of papers below and cite those that are appropriate for their analyses, including appropriate references to non-SDSS data that are included in the SDSS catalogs for the convenience of the user (See Additional Non-SDSS Data).
### Other Acknowledgements
We encourage authors to acknowledge facilities and software following the technical requirements of the journal they are using. We further encourage authors to follow the advice in the Best Practices for Data Publication in the Astronomical Literature (Chen et al. 2021).
## All Technical Papers
### Current Data Release Papers
All papers using DR17 data are required to cite the following papers:
DR17 Data Release Description
Abdurro’uf et al. (2022) is the latest SDSS-IV data release paper, describing SDSS Data Release 17. The latest data release paper should always be used as the primary reference for most uses of SDSS data.
DR17 Pipeline and Data Release Description and Validation
Holtzman et al. (in prep.) provides extensive detail with regard to DR17-related modifications of the data reduction and ASPCAP pipelines and evaluates all DRP- and ASPCAP-generated data products (e.g., radial velocity values, stellar atmospheric parameters, and individual element abundances).
### Overview
Publications using APOGEE-1 or APOGEE-2 data are required to cite the overview papers.
APOGEE Overview
Majewski et al. (2017) discusses the scientific motivation for the APOGEE-1 survey, the survey requirements, and the choice of survey fields. It describes survey operations, summarizes the level to which requirements are met, and references the data releases. Much of this information remains highly relevant for APOGEE-2.
APOGEE-2 Overview
Majewski et al. (in prep.) discusses the scientific motivation for the APOGEE-2 survey, its survey requirements, and choices relating to the programs that were initiated. It describes dual hemisphere survey operations, summarizes the level to which requirements are met, and references the data releases.
### Telescopes
The telescope can be identified for a target using the TELESCOPE tag. Users should reference the publications describing the telescopes that obtained the data in their samples.
SDSS 2.5m Telescope
Gunn et al. (2006) describes the technical setup for the SDSS 2.5-meter telescope.
NMSU 1m Telescope
Holtzman et al. (2010) describes the technical setup for the NMSU 1-meter telescope.
LCO 2.5m Telescope
Bowen and Vaughan (1973) describes the optical design of the Irénée du Pont Telescope at Las Campanas Observatory.
### Instrument & Hardware
All papers are required to cite the APOGEE Instruments paper. Other references are optional.
The APOGEE Instruments
Wilson et al. (2019) describes the technical details of the APOGEE-N and APOGEE-S instruments themselves and presents instrument performance data. Prior to DR16, the reference was Wilson et al. (2012).
Fiber Development and FRD Testing
Brunner et al. (2010) describes technical details about the APOGEE fibers, optimization concerns, and testing the focal ratio degradation (FRD) required for the instrument.
VPH Grating
Arns et al. (2010) discusses the technical details of the VPH grating.
The Cryostat
Blank et al. (2010) describes technical considerations for and the design of the APOGEE cryostat.
### Targeting
All papers are required to cite the appropriate targeting paper(s) from the list below. At a minimum, the survey-level strategies are described in Zasowski et al. (2013), Zasowski et al. (2017), Beaton et al. (2021), and Santana et al. (2021). The other references may be more suitable for sub-samples.
APOGEE-1 Target Selection
Zasowski et al. (2013) discusses the target selection for main survey and APOGEE-1 ancillary science projects.
APOGEE-2 Target Selection
Zasowski et al. (2017) describes the APOGEE-2 field plan and explains in detail the targeting strategy employed for APOGEE-2N and APOGEE-2S.
• Pinsonneault et al. (2018) provides updates to the APOKASC program in APOGEE-2.
• Cottle et al. (2018) describes the target selection for the Young Clusters program in APOGEE-2.
• Nidever et al. (2020) describes the target selection for the Magellanic Clouds program in APOGEE-2, though we note that substantial expansions are described in Santana et al. (2021).
• Donor et al. (2018) provides an update for the Open Cluster Chemical Abundance and Mapping survey (OCCAM), including revisions to the target selection using early releases from Gaia.
Final APOGEE-2N Target Selection
Beaton et al. (2021) describes the final APOGEE-2N field plan, special programs in APOGEE-2N, and modifications to the target selection strategy undertaken to reach survey goals.
Final APOGEE-2S Target Selection
Santana et al. (2021) describes the final APOGEE-2S field plan, special programs in APOGEE-2S, and modifications to the target selection strategy undertaken to reach survey goals.
### Data Processing & RV Pipeline
APOGEE Data Reduction
Nidever et al. (2015) discusses the data reduction pipeline, describing how the raw data are analyzed to produce reduced, calibrated spectra. It also presents additional instrument performance data (flats, darks, LSF, persistence, etc.) and discusses the measurement of radial velocities and their quality.
• In DR17, a new procedure, "Doppler," was adopted for RV Measurements. This code is also available on github: github/dnidever/doppler
• The specific implementation of these codes for APOGEE data is available on github/sdss/apogee
NMSU 1-meter Data Processing
Holtzman et al. (2015) has sections discussing the 1-meter data processing, which differs slightly from that in the main survey.
The following papers address specific expansions or applications of our RV results that may be useful for other science applications:
• Deshpande et al. (2013) discusses specific requirements for the determination of M-dwarf stars with APOGEE data.
• Badenes et al. (2018) use the variation of RV for the APOGEE sample as a means to draw inference on the binary population. Technical descriptions and testing conducted in this paper are useful for evaluating the RV precision and variability.
• Price-Whelan et al. (2018, 2020, in prep.) use the individual epoch RVs to identify multiple star systems using The Joker (Price-Whelan et al. 2017, ASCL Entry, GitHub). Technical descriptions and testing conducted in this paper are useful for evaluating the RV precision and variability.
In DR16 and DR17, a Value-Added Catalog applied these methods to the full dataset and made those results fully public.
### Stellar Parameters & Abundances
All papers are required to cite the ASPCAP Overview paper alongside the Data Release paper that describes its current implementation. Users may also wish to acknowledge the underlying software for ASPCAP using "software" keywords as are described below.
ASPCAP Overview
Garcia Perez et al. (2016) describes how the spectra are analyzed to derive stellar parameters and abundances. It demonstrates validation of the overall method using simulated data and discusses uncertainties that are introduced by real-world issues: SNR, issues related to computational efficiency, variation and uncertainty of the LSF, and issues involving the loss of information under skylines. It presents some basic tests of the methodology from very-high-resolution observations of some well-studied stars.
APOGEE Model Atmospheres
Jönsson et al. (2020) describes the model grid and the interpolation methods that were adopted in DR16 and DR17.
APOGEE Line List
The line list used for DR16 and DR17 was significantly expanded from previous efforts; this is described in Smith et al. (2021).
Additional papers discussing the line list development are as follows:
• The line list used for DR10-DR15 was described in Shetrone et al. (2015), which also presents the details of how the H-band linelists, a critical component for ASPCAP, were developed.
• The APOGEE linelists were tested using high-resolution IR spectra of several well-studied stars in Smith et al. (2013).
• Further tests of the individual elements derived from the ASPCAP pipeline are found in Cunha et al. (2015).
• The identification and characterization of Neodymium is described in Hasselquist et al. (2016).
• The identification and characterization of Cerium is described in Cunha et al. (2017).
APOGEE Spectral Grids
The DR17 spectral grids are described in Holtzman et al. (in prep). Other papers describing the development of the spectral grids are as follows:
• The DR16 grids were described by Jönsson et al. (2020).
• Prior to DR16, Zamora et al. (2015) presents how the spectral synthesis was done, documents the libraries that have been used, and investigates the sensitivity of the result to the choice of synthesis code and model atmospheres.
• Meszaros et al. (2012) presents additional supporting material for the APOGEE spectral grids.
FERRE
Allende Prieto et al. (2006) describes FERRE, which is the spectral fitting engine used for ASPCAP. The code is available on github with its own documentation available (pdf; updated regularly).
The Cannon
The Cannon is described in Ness et al. (2015) and Casey et al. (2016).
### Data Release Papers
Users are required to cite the appropriate Data Release Paper.
APOGEE DR10 Calibration
Meszaros et al. (2013) discusses calibration of stellar parameters that were released in DR10.
APOGEE DR12 Data
Holtzman et al. (2015) describes the APOGEE data contained in DR12 of SDSS-III.
APOGEE DR13/DR14 Data
Holtzman et al. (2018) provides extensive detail with regard to DR13- and DR14-related modifications of the data reduction and ASPCAP pipelines and evaluates all DRP- and ASPCAP-generated data products (e.g., radial velocity values, stellar atmospheric parameters and individual element abundances).
DR13/DR14 Validation
Jönsson et al. (2018) compares DR13 and DR14 ASPCAP-derived parameters with various optical abundance analyses from the literature.
DR16 Data
Jönsson et al. (2020) is the DR16 data release paper that describes the processing and data analysis employed.
DR17 Data
Holtzman et al. (in prep.) is the DR17 APOGEE data release paper that describes the processing and data analysis employed.
## Additional non-SDSS Data Products in APOGEE Catalogs
The APOGEE data products contain some data adopted from other major observational programs. In this section, we provide references for those surveys or methods. Typically, these data are provided for a large fraction of the APOGEE targets. Data used specifically for targeting only are summarized in Targeting References.
#### Gaia EDR3 References
Gaia Mission
Gaia Collaboration, Prusti et al. (2016) provides a description of the Gaia mission, including the spacecraft, instruments, survey, and measurement principles.
Gaia EDR3
Gaia EDR3 Astrometry
Lindegren et al. (2021) provides the description of the Gaia EDR3 Astrometric Solution.
Gaia EDR3 Photometry
Riello et al. (2021) provides a description of the Gaia EDR3 Photometric System.
Seabroke et al. (2021) provides a description of the updates to the Gaia DR2 radial velocities as presented in Gaia EDR3.
Distances
Bailer-Jones et al. (2021) is a catalog of geometric distances derived from Gaia EDR3 parallax data using rigorous statistical methods.
#### Photometry
2MASS
The canonical paper for 2MASS is Skrutskie et al. (2006) and the acknowledgement is provided on the 2MASS IPAC webpage.
WISE
Wright et al. (2010) describes the WISE mission and spacecraft and the AllWise catalog is cited as Cutri et al. (2013). The acknowledgement is provided on the WISE2 IPAC Webpage and the AllWise IPAC Webpage.
Spitzer
Werner et al. (2004) describes the Spitzer Space Telescope and Mission. The IRAC instrument is described in Fazio et al. (2004). In particular, APOGEE targeting in the disk and Bulge makes use of data products from Galactic Legacy Infrared Mid-Plane Survey Extraordinaire (GLIMPSE). Acknowledgements for Spitzer are described on the Spitzer IRAC Webpage.
Washington+DDO51
The Washington+DDO51 photometry included in the summary files is described in Zasowski et al. (2013).
Those using these data may also include the following acknowledgement: "The APOGEE project thanks Jeff Munn (NOFS) for collecting Washington$+DDO51$ imaging for large areas of the sky."
#### Reddening and Extinction
$E(B-V)$
$E(B-V)$ estimates are provided from the maps of Schlegel, Finkbeiner, and Davis (1998).
RJCE
The Rayleigh-Jeans Color Excess method is also employed to estimate $A_{K}$ using 2MASS with either WISE or Spitzer photometry; the method is described in Majewski, Zasowski, and Nidever (2011), with some additional modifications described in Zasowski et al. (2013).
## Targeting Catalog References
For some scientific purposes, an understanding of the input targeting references can be important, as these guided the target's selection. Two parameters are important: (1) the input photometry catalog (which sets the exposure time) and (2) the proper motion catalogs (from which proper motion corrections are applied to the position).
### Photometry Catalog References
The targeting photometry references are coded into the SRC_H tag in the summary catalogs. Full information can be found in the intermediate targeting files on the SAS (see Data Access for more details).
The Targeting Team has provided a DR17 H_SRC Table (PDF with hyperlinks) that provides the SRC_H values, a description of the source and the application, and the appropriate references, when possible. This Table was constructed from the intermediate targeting files and, thus, includes all input SRC_H values regardless of whether they appear in the final output catalogs.
### Proper Motion Catalog References
The input proper motion catalog references are provided in the intermediate targeting files on the SAS (see Data Access for more details). These may be important for many scientific applications because these proper motions are used to determine the coordinates of the target at the epoch of observation.
The Targeting Team has provided a DR17 TARG_PM_SRC Table (PDF with hyperlinks) that provides the TARG_PM_SRC values, a description of the source and the application, and the appropriate references, when possible. This Table was constructed from the intermediate targeting files and, thus, includes all input TARG_PM_SRC values regardless of whether they appear in the final output catalogs.
|
Orthogonal polynomials

Orthogonal polynomials are inescapable in quantum mechanics. They are the nit and grit of calculations for solving the Schroedinger equation. They have rational terms that beg to be interpreted combinatorially. This line of exploration is very natural to me because of my Ph.D. thesis in algebraic combinatorics. Interpretation of these polynomial terms spills the guts of quantum mechanics. It yields clues as to how space and time relate to energy, how they get constructed from cells, what the role of measurement is, and, overall, how the wave function works.

The harmonic oscillator: Hermite polynomials

The harmonic oscillator seems to be a paradigmatic example to understand. The solutions involve the Hermite polynomials as key factors. I became curious: what could these polynomials actually mean, combinatorially and physically? It turns out that there are two variants of Hermite polynomials, those used by physicists and those used by probabilists. Below is a nice interpretation of the probabilist version.

Mathematically, the probabilist Hermite polynomials {$\textrm{He}_n$} are generating (in a sense, "counting") the involutions on {$n$} letters, which is to say, those permutations of {$n$} letters which are their own inverses. These are precisely the permutations that consist of one-cycles and two-cycles. Each one-cycle is assigned weight {$x$} and each two-cycle is assigned weight {$-1$}. For example, there are four involutions on the letters {$1, 2, 3$}, namely the identity {$()$}, which gets weight {$x^3$}, and the three two-cycles {$(1 2)$}, {$(1 3)$}, {$(2 3)$}, which each get weight {$-x$}. Their combined weight is {$\textrm{He}_3=x^3-3x$}. In physics, the above combinatorics expresses Wick's theorem for fermions.

With some imagination and some hindsight, we can interpret {$\textrm{He}_n$} as enumerating the possibilities for a set of {$n$} places in space. The energy rises with the number of places. An empty place is given weight {$x$}. The energy is a sum of contributions from configurations, and the highest power contribution (making for greatest variation) is when each place is empty. A place may be nonempty if it is linked with some other place, in which case they each get weight {$i$} or each get weight {$j=-i$}, contributing a total weight of {$i^2=j^2=-1$}. In working with {$i$} it is crucial to remember that it is never a single number but always stands for identical twins, the conjugates {$i$} and {$j$}, which are alike in all their properties except that they are not each other. We have no basis to distinguish them except that one is not the other. I imagine the contribution of {$i$} indicates momentum and simply time. It is as if a place is nonempty if time is hopping back and forth there. The sense is that a place can be nonempty only if it is occupied in an ambiguous way, so that {$i + (-i) = 0$} at each nonempty space and yet the total weight is {$-1$} for the pair of nonempty spaces. Note also that we are distinguishing the possibility of "objects" (one-cycles) and "relationships" (two-cycles), a distinction which is so important for category theory, notably in the two perspectives of the Yoneda lemma.
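To make the counting concrete, here is a minimal Python sketch of my own (not part of the original page) that enumerates involutions by brute force, weights them as above, and checks the result against the standard recurrence {$\textrm{He}_{n+1} = x\,\textrm{He}_n - n\,\textrm{He}_{n-1}$}:

```python
from itertools import permutations

def hermite_coeffs_by_involutions(n):
    """Coefficient list c[k] of x^k in He_n, found by summing weights
    x^(#one-cycles) * (-1)^(#two-cycles) over all involutions of n letters."""
    coeffs = [0] * (n + 1)
    for p in permutations(range(n)):
        # An involution is its own inverse: p(p(i)) = i for all i.
        if all(p[p[i]] == i for i in range(n)):
            fixed = sum(1 for i in range(n) if p[i] == i)   # one-cycles
            pairs = (n - fixed) // 2                        # two-cycles
            coeffs[fixed] += (-1) ** pairs
    return coeffs

def hermite_coeffs_by_recurrence(n):
    """He_0 = 1, He_1 = x, He_{m+1} = x He_m - m He_{m-1}."""
    a, b = [1], [0, 1]          # coefficient lists for He_0, He_1
    if n == 0:
        return a
    for m in range(1, n):
        c = [0] + b                                   # x * He_m
        for k, coef in enumerate(a):                  # minus m * He_{m-1}
            c[k] -= m * coef
        a, b = b, c
    return b

for n in range(6):
    assert hermite_coeffs_by_involutions(n) == hermite_coeffs_by_recurrence(n)
print(hermite_coeffs_by_involutions(3))   # [0, -3, 0, 1], i.e. He_3 = x^3 - 3x
```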
Overall, my point is that a successful interpretation of the polynomial can, should, and will carry physical intuition that is simple, deep, rich, and basic.

Interpreting the derivative

Taking the derivative of the function {$e^{x^2}$} involves using the chain rule. If we continue taking derivatives, then we are also using the product rule. The function {$e^{x^2}$} persists but gets multiplied by a Hermite polynomial. The chain rule yields a new factor of {$2x$}, which is the weight of a square. The product rule has us differentiate the Hermite polynomial, which means eliminating one of the x's, or rather, filling it in with an {$i$}, and also filling in the new cell with an {$i$}, yielding a weight of {$-1$}. Thus although we do add a new cell, it is filled, and we also fill an existing cell. This is the effect of the derivative operator, which is the momentum. The positions commute in that we don't have to keep track of their order. But we have to keep track of the momenta because their order matters.

The hydrogen atom: Laguerre polynomials

The idea that the weight {$x$} indicates an empty space suggests itself in the interpretation of the Laguerre polynomials, which arise in solving the Schroedinger equation for the hydrogen atom. In combinatorics, the Laguerre polynomials are related to the rook polynomials, which count the ways of placing rooks on a chessboard so that no two rooks attack each other. More abstractly, we can count partially defined permutation matrices, which is to say, matrices of {$0$}s and {$1$}s with at most one {$1$} in each row and each column.

The solution to Schroedinger's equation for the hydrogen atom is given by the radial wave function {$\rho^{l+1}e^{-\rho}\nu(\rho)$}, where {$\rho$} is the (scaled) distance of the electron from the proton. The asymptotic behavior, when {$\rho\rightarrow\infty$}, so that the electron is free of the proton, is given by the factor {$e^{-\rho}$}. The remaining, nonasymptotic factor, {$\nu(\rho)$}, is an associated Laguerre polynomial, multiplied by some normalization constant. This factor accounts for the peculiarities when the electron is close to the proton. The closer the electron, the higher the energy, and the more possibilities.

The associated Laguerre polynomials {$L_n^{(\alpha)}$} are simply a generalization of the Laguerre polynomials {$L_n$}. They are based on rectangular {$n\times(n + \alpha)$} chessboards rather than square {$n\times n$} chessboards but seem to be otherwise the same. Evidently, the matrices (the boards) and their squares are describing the possible relations between the proton and the electron. Note that the denominator {$n!$} asserts that, say, the order of the {$n$} columns is irrelevant whereas the order of the {$n$} rows is relevant. This suggests, for example, that the proton is taken as given, at the center of mass, and so its possible relations (its rows) are defined absolutely, well ordered, whereas the electron's relations (its columns) are defined relatively, thus not ordered. We see from the interpretations that the energy is given by the size of the matrices. The factor {$\nu(\rho)=L_{n-l-1}^{2l+1}(2\rho)$} is indexed by {$0\leq l \leq n-1$}. This means that, say, the height of the matrix is {$n-l-1$} and the width is {$n+l$}.
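To make the rook connection concrete, here is a small Python check of my own (not from the original page): the number of placements of {$k$} non-attacking rooks on an {$n\times(n+\alpha)$} board is {$\binom{n}{k}\binom{n+\alpha}{k}k!$}, and, up to signs and an overall factor of {$n!$}, these rook numbers are exactly the coefficients of the associated Laguerre polynomial {$L_n^{(\alpha)}$}:

```python
from itertools import combinations, permutations
from math import comb, factorial

def rook_numbers(rows, cols):
    """r[k] = number of ways to place k mutually non-attacking rooks on a
    rows x cols board, enumerated directly: choose the k occupied rows,
    then an ordered choice of k distinct columns for them."""
    r = [1]                      # one way to place zero rooks
    for k in range(1, min(rows, cols) + 1):
        count = 0
        for rset in combinations(range(rows), k):
            for cols_choice in permutations(range(cols), k):
                count += 1       # rset[i] holds a rook in column cols_choice[i]
        r.append(count)
    return r

def laguerre_coeff_scaled(n, alpha, k):
    """Coefficient of x^k in n! * L_n^(alpha)(x), using
    L_n^(alpha)(x) = sum_k (-1)^k C(n+alpha, n-k) x^k / k!."""
    return (-1) ** k * (factorial(n) // factorial(k)) * comb(n + alpha, n - k)

n, alpha = 3, 2                  # a 3 x 5 board, matching L_3^(2)
r = rook_numbers(n, n + alpha)
for j in range(n + 1):
    # the x^(n-j) coefficient of n! L_n^(alpha) should be (-1)^(n-j) r[j]
    assert laguerre_coeff_scaled(n, alpha, n - j) == (-1) ** (n - j) * r[j]
    assert r[j] == comb(n, j) * comb(n + alpha, j) * factorial(j)
print(r)                         # [1, 15, 60, 60]
```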
Gaining or losing energy involves increasing or decreasing the width of the matrix, which is to say, the electron's possible relations. The relation between the electron and proton may be completely undefined, as with a

Exclusivity principle

The rook polynomials in the solution to the hydrogen atom bring to mind the principle of exclusivity which is important for fermions such as the proton and electron. Exclusivity seems to be a central concept in defining orthogonal polynomials. Kim and Zeng have a marvelous combinatorial formula for the linearization coefficients of the general Sheffer polynomials, which specializes to many of the orthogonal polynomials that come up in solutions of the Schroedinger equation. These linearization coefficients have to do with products of the polynomials, and I have yet to understand how they relate to the much simpler questions that I am exploring. However, their formula is based on derangements, which are permutations without fixed points, and thus may perhaps be considered as models of exclusivity.

Infinite well: Chebyshev polynomials

Empty cell has weight 2x

Note that in the Chebyshev polynomial the empty cell is given weight {$2x$} (see the counting sketch below). This is a recurring theme. The physicist Hermite polynomials likewise give the empty cell the weight {$2x$}. And the model of the hydrogen atom expresses the associated Laguerre polynomial in terms of the input {$2\rho$}.

The z-axis component: associated Legendre function

I have an interpretation of the terms of the Legendre polynomial. It is given by the ways of taking paths on a square grid from (0,0) to (n,n) where the possible steps are East (weight x), North (weight 1), and East-North-East (weight -1). In other words, a step north may be thought of as having a particle with weight 1, and a step east-north-east may have, instead, two particles, each with weight i. This generating function is related to approximating pi with a fraction. I will try to find interpretations for the associated Legendre functions.

Energy and space. Measurement as the violation of least action

The weight {$x$} is a measurement of space, that is, of an empty place. Such a measurement takes energy. It is a violation of the principle of least action. The energy expresses that violation. Furthermore, measuring empty space, where nothing happens, indicates the perspective of an observer's choice framework, the simplexes.

Other polynomials to consider

The angular equation (and spherical harmonics): associated Legendre functions. Infinite spherical well (Griffiths, 4.40): spherical Bessel functions and spherical Neumann functions.
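Following up the Chebyshev remark above with a check of my own (assuming the second-kind convention {$U_n$}, the standard combinatorial model): {$U_n$} is the matching polynomial of a path with {$n$} vertices, where each unmatched vertex (empty cell) has weight {$2x$} and each matched edge has weight {$-1$}. A minimal Python sketch verifying this against the recurrence {$U_{n+1}=2x\,U_n-U_{n-1}$}:

```python
def chebyshev_U_by_matchings(n):
    """Coefficient list of U_n(x): sum over matchings of the n-vertex path,
    weight (2x) per unmatched vertex and (-1) per chosen edge. Matchings
    are enumerated by deciding, left to right, whether vertex i is left
    unmatched or matched to vertex i+1."""
    coeffs = [0] * (n + 1)

    def walk(i, unmatched, edges):
        if i >= n:
            if i == n:                       # used exactly the n vertices
                coeffs[unmatched] += (-1) ** edges * 2 ** unmatched
            return
        walk(i + 1, unmatched + 1, edges)    # vertex i left unmatched
        walk(i + 2, unmatched, edges + 1)    # edge {i, i+1} in the matching

    walk(0, 0, 0)
    return coeffs

def chebyshev_U_by_recurrence(n):
    """U_0 = 1, U_1 = 2x, U_{m+1} = 2x U_m - U_{m-1}."""
    a, b = [1], [0, 2]
    if n == 0:
        return a
    for _ in range(1, n):
        c = [0] + [2 * t for t in b]         # 2x * U_m
        for k, t in enumerate(a):            # minus U_{m-1}
            c[k] -= t
        a, b = b, c
    return b

for n in range(7):
    assert chebyshev_U_by_matchings(n) == chebyshev_U_by_recurrence(n)
print(chebyshev_U_by_matchings(2))   # [-1, 0, 4], i.e. U_2 = 4x^2 - 1
```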
Stephen Wolfram's List of Exact Solutions

Here is a helpful footnote from Stephen Wolfram's book A New Kind of Science, which is available online for free.

Some notable cases where closed-form analytical results have been found in terms of standard mathematical functions include:

• quadratic equations (~2000 BC) (Sqrt)
• cubic, quartic equations (1530s) ({$x^{1/n}$})
• 2-body problem (1687) (Cos)
• catenary (1690) (Cosh)
• brachistochrone (1696) (Sin)
• spinning top (1849, 1888, 1888) (JacobiSN, WeierstrassP, hyperelliptic functions)
• quintic equations (1858) (EllipticTheta)
• half-plane diffraction (1896) (FresnelC)
• Mie scattering (1908) (BesselJ, BesselY, LegendreP)
• Einstein equations (Schwarzschild (1916), Reissner–Nordström (1916), Kerr (1963) solutions) (rational and trigonometric functions)
• quantum hydrogen atom and harmonic oscillator (1927) (LaguerreL, HermiteH)
• 2D Ising model (1944) (Sinh, EllipticK)
• various Feynman diagrams (1960s-1980s) (PolyLog)
• KdV equation (1967) (Sech etc.)
• Toda lattice (1967) (Sech)
• six-vertex spin model (1967) (Sinh integrals)
• Calogero–Moser model (1971) (Hypergeometric1F1)
• Yang–Mills instantons (1975) (rational functions)
• hard-hexagon spin model (1979) (EllipticTheta)
• additive cellular automata (1984) (MultiplicativeOrder)
• Seiberg–Witten supersymmetric theory (1994) (Hypergeometric2F1)

When problems are originally stated as differential equations, results in terms of integrals ("quadrature") are sometimes considered exact solutions—as occasionally are convergent series. When one exact solution is found, there often end up being a whole family—with much investigation going into the symmetries that relate them. It is notable that when many of the examples above were discovered they were at first expected to have broad significance in their fields. But the fact that few actually did can be seen as further evidence of how narrow the scope of computational reducibility usually is. Notable examples of systems that have been much investigated, but where no exact solutions have been found, include the 3D Ising model, the quantum anharmonic oscillator, and the quantum helium atom. - Stephen Wolfram, A New Kind of Science, Notes to Chapter 12, Section 6.

Stephen Wolfram champions a computational approach, which favors form over content. He bets that there are not that many forms available and that by running through all possibilities we will stumble across the right one. But I would rather invest in penetrating the meaning of what we already know Nature to be saying. I am combinatorially interpreting the exact solutions of the Schroedinger equation to discover clues to unlock the mystery of more general situations.

For my research notes, see: Research.OrthogonalPolynomials, Research.QuantumPhysicsResearch. As usual, many of my observations are speculative. You may wonder: what does Andrius know about quantum physics?
|
# Showing that a function is not differentiable
I want to show that $f(x,y) = \sqrt{|xy|}$ is not differentiable at $0$.
So my idea is to show that $g(x,y) = |xy|$ is not differentiable, and then argue that if $f$ were differentiable, then so would $g$ be, since $g$ is the composition of the differentiable functions $\cdot^2$ and $f$.
But I'm stuck as to how to do this. In the one variable case, to show that $q(x) = |x|$ is not differentiable, I can calculate the limit $\frac{|x + h| - |x|}{h}$ as $h\to 0^+$ and $h\to 0^-$, show that the two one-sided limits are distinct, and conclude that the limit $$\lim_{h\to 0}\frac{|x + h| - |x|}{h}$$ does not exist.
The reason this is easier is that I do not have to have in mind the derivative of the function $q$ in order to calculate it.
But in the case of $g(x,y) = |xy|$, to show that $g$ is not differentiable at $0$, I would need to show that there does not exist a linear transformation $\lambda:\mathbb{R}^{2}\to\mathbb{R}$ such that
$$\lim_{(h,k)\to (0,0)}\frac{\left||hk| - \lambda(h,k)\right|}{|(h,k)|} = 0$$
I thought of assuming that I had such a $\lambda$, and letting $(h,k)\to (0,0)$ along both $(\sqrt{t},\sqrt{t})$ and $(-\sqrt{t},-\sqrt{t})$ as $t\to 0^{+}$, but this didn't seem to go anywhere constructive.
-
Seems to me that your insight about $|xy|$ is not correct. The one-variable analog of this function seems to me to be $x^2$, not $|x|$. Think of what the graph $z=xy$ looks like: it’s very similar to $z=x^2-y^2$, but rotated $45^\circ$ about the $z$-axis. As a result, I am pretty sure that $|xy|$ is differentiable at the origin, even though it’s not at all other points on the $x$ and $y$ axes. – Lubin Aug 14 '12 at 0:30
Since $|xy| \leq \|(x,y)\|^2$, it follows that $(x,y) \to |xy|$ is differentiable at the origin (with derivative $0$). – copper.hat Aug 14 '12 at 0:54
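To spell out the comment above (a one-line verification of my own): taking the candidate derivative $\lambda = 0$,
$$\frac{\left||hk| - 0\right|}{\|(h,k)\|} = \frac{|hk|}{\sqrt{h^2+k^2}} \leq \frac{h^2+k^2}{\sqrt{h^2+k^2}} = \|(h,k)\| \to 0 \quad \text{as } (h,k)\to(0,0),$$
since $|hk| \leq \frac{1}{2}(h^2+k^2)$. So $g(x,y)=|xy|$ is differentiable at the origin with derivative $0$, which is why the strategy proposed in the question cannot work.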
Note: My previous answer was incorrect, thanks to @Lubin for catching that.
Simplify your life and show that $\phi(x) = f(x,x) = |x|$ is not differentiable at $0$. It will follow from this that $f$ is not differentiable at $0$.
Look at the definition of differentiability for this case, which is that $\lim_{h \to 0, h \neq 0} \frac{\phi(h)-\phi(0)}{h}$ exists. We have $\phi(0) = 0$, and $\phi(h) = |h|$, so we are looking at the limit of $h \mapsto \mathbb{sign}(h)$ as $h \to 0$.
If you choose $h_n = \frac{(-1)^n}{n}$, it is easy to see that $\frac{\phi(h_n)-\phi(0)}{h_n} = (-1)^n$, hence it has no limit. It follows that $f$ is not differentiable at $0$.
-
I don’t agree that the nondifferentiability of $f$ at $(0,1)$ has anything to do with the fact of nondifferentiability of $f$ at $(0,0)$. – Lubin Aug 14 '12 at 0:34
@Lubin: You are correct, I must have been asleep... – copper.hat Aug 14 '12 at 0:42
@Lubin: I have fixed my proof bug. – copper.hat Aug 14 '12 at 0:52
you weren’t nearly as asleep as I was when I was in the midst of a many-hundred-word response. We are in total agreement now. – Lubin Aug 14 '12 at 1:07
It's worse that that. I typed up the above first, and before saving the edits I thought could make non-differentiability even more obvious by taking $y=1$ (to get $\sqrt{|x|}$) and rewrote my answer. Oh well, I shouldn't pack bags and answer questions at the same time :-). – copper.hat Aug 14 '12 at 1:11
Use directional derivatives. Note that the limit of $$\frac{f(h,h)-f(0,0)}{h\sqrt{2}}=\frac{|h|}{h\sqrt{2}}$$ as $h\to 0^+$ is different than that as $h\to 0^-$.
-
In agreement with @Cameron, I would show the nondifferentiability of $f(x,y)$ at $(0,0)$ in the following way. Intersect the graph with a vertical plane through the $z$-axis, say given by $y=\lambda x$. The intersection is given by $z=|\lambda|^{1/2}\cdot|x|$. So above each of the four quadrants of the $x,y$-plane, the graph consists of strings stretched out from the origin at varying angles. In particular, the “diagonal” plane $y=x$ intersects the graph in a V-shaped figure exactly like the familiar graph of absolute value in one-variable calculus.
-
|
# Members & Guests
## Dr. Markus Upmeier
Universität Augsburg
E-mail: Markus.Upmeier(at)math.uni-augsburg.de
Telephone: +49 821 598 - 2228
Homepage: https://www.math.uni-augsburg.de/prof/di...
## Publications within SPP2026
Let X be a compact manifold, G a Lie group, P→X a principal G-bundle, and B_P the infinite-dimensional moduli space of connections on P modulo gauge. For a real elliptic operator E we previously studied orientations on the real determinant line bundle over B_P. These are used to construct orientations in the usual sense on smooth gauge theory moduli spaces, and have been extensively studied since the work of Donaldson.
Here we consider complex elliptic operators F and introduce the idea of spin structures, square roots of the complex determinant line bundle of F. These may be used to construct spin structures in the usual sense on smooth complex gauge theory moduli spaces. We study the existence and classification of such spin structures. Our main result identifies spin structures on X with orientations on X×S1. Thus, if P→X and Q→X×S1 are principal G-bundles with Q|_{X×{1}}≅P, we relate spin structures on (B_P,F) to orientations on (B_Q,E) for a certain class of operators F on X and E on X×S1.
Combined with arXiv:1811.02405, we obtain canonical spin structures for positive Diracians on spin 6-manifolds and gauge groups G=U(m),SU(m). In a sequel we will apply this to define canonical orientation data for all Calabi-Yau 3-folds X over the complex numbers, as in Kontsevich-Soibelman arXiv:0811.2435, solving a long-standing problem in Donaldson-Thomas theory.
We develop a categorical index calculus for elliptic symbol families. The categorified index problems we consider are a secondary version of the traditional problem of expressing the index class in K-theory in terms of differential-topological data. They include orientation problems for moduli spaces as well as similar problems for skew-adjoint and self-adjoint operators. The main result of this paper is an excision principle which allows the comparison of categorified index problems on different manifolds. Excision is a powerful technique for actually solving the orientation problem; applications appear in the companion papers arXiv:1811.01096, arXiv:1811.02405, and arXiv:1811.09658.
Let X be a compact manifold, D a real elliptic operator on X, G a Lie group, P a principal G-bundle on X, and B_P the infinite-dimensional moduli space of all connections on P modulo gauge, as a topological stack. For each connection \nabla_P, we can consider the twisted elliptic operator D^{\nabla_{Ad(P)}} on X. This is a continuous family of elliptic operators over the base B_P, and so has an orientation bundle O^D_P over B_P, a principal Z_2-bundle parametrizing orientations of Ker D^{\nabla_{Ad(P)}} + Coker D^{\nabla_{Ad(P)}} at each \nabla_P. An orientation on (B_P,D) is a trivialization of O^D_P.
In gauge theory one studies moduli spaces M of connections \nabla_P on P satisfying some curvature condition, such as anti-self-dual instantons on Riemannian 4-manifolds (X, g). Under good conditions M is a smooth manifold, and orientations on (B_P,D) pull back to orientations on M in the usual sense of differential geometry. This is important in areas such as Donaldson theory, where one needs an orientation on M to define enumerative invariants.
We explain a package of techniques, some known and some new, for proving orientability and constructing canonical orientations on (B_P,D), after fixing some algebro-topological information on X. We use these to construct canonical orientations on gauge theory moduli spaces, including new results for moduli spaces of flat connections on 2- and 3-manifolds, instantons, the Kapustin-Witten equations, and the Vafa-Witten equations on 4-manifolds, and the Haydys-Witten equations on 5-manifolds.
Suppose (X, g) is a compact, spin Riemannian 7-manifold, with Dirac operator D. Let G be SU(m) or U(m), and E be a rank m complex bundle with G-structure on X. Write B_E for the infinite-dimensional moduli space of connections on E, modulo gauge. There is a natural principal Z_2-bundle O^D_E on B_E parametrizing orientations of det D^{Ad A} for the twisted elliptic operators D^{Ad A} at each [A] in B_E. A theorem of Walpuski shows O^D_E is trivializable.
We prove that if we choose an orientation for det D, and a flag structure on X in the sense of Joyce arXiv:1610.09836, then we can define canonical trivializations of O^D_E for all such bundles E on X, satisfying natural compatibilities.
Now let (X, \varphi, g) be a compact G_2-manifold, with d(*\varphi) = 0. Then we can consider moduli spaces M^{G_2}_E of G_2-instantons on E over X, which are smooth manifolds under suitable transversality conditions, and derived manifolds in general. The restriction of O^D_E to M^{G_2}_E is the Z_2-bundle of orientations on M^{G_2}_E. Thus, our theorem induces canonical orientations on all such G_2-instanton moduli spaces M^{G_2}_E.
This contributes to the Donaldson-Segal programme arXiv:0902.3239, which proposes defining enumerative invariants of G_2-manifolds (X,\varphi,g) by counting moduli spaces M_E^G_2, with signs depending on a choice of orientation. This paper is a sequel to Joyce-Tanaka-Upmeier arXiv:1811.01096, which develops the general theory of orientations on gauge-theoretic moduli spaces, and gives applications in dimensions 3,4,5 and 6.
|
# Answer on Microeconomics Question for asia
Question #17027
Mohamed purchases two products, mineral water and popcorn. The marginal utility of mineral water is 60 and the marginal utility of popcorn is 30. The price of a bottle of mineral water is $2.00 and the price of a box of popcorn is $1.00. The utility-maximizing rule suggests that Mohamed should:
a-increase consumption of popcorn and decrease consumption of mineral water.
b-increase consumption of popcorn and increase consumption of mineral water.
c-decrease consumption of popcorn and increase consumption of mineral water.
d-make no change in the consumption of mineral water or popcorn.
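A quick way to check this is to compare marginal utility per dollar for each good; the numbers below are taken straight from the question. A minimal sketch in Python:

```python
# Utility-maximizing rule: allocate spending so MU/price is equal across goods.
mu_water, price_water = 60, 2.00      # marginal utility and price of mineral water
mu_popcorn, price_popcorn = 30, 1.00  # marginal utility and price of popcorn

print(mu_water / price_water)      # 30.0 utils per dollar
print(mu_popcorn / price_popcorn)  # 30.0 utils per dollar
# The ratios are already equal, so no reallocation improves utility: choice (d).
```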
|
# Rocket Engine Design
1. Sep 22, 2004
### Mahler765
I'm a junior attending an accelerated high school who's working on this year's science project. As of right now, I'm planning on building a liquid fuel rocket that is powered by gasoline with concentrated hydrogen peroxide acting as an oxidizer. I am currently working through the equations needed to design a deLaval nozzle. So all that I know so far is what the rocket will be fueled by and that it will have a deLaval nozzle. The size of the rocket depends on the design of the engine, as does the amount of fuel, because otherwise I would not have an accurate idea of how much thrust I would need. I'm planning on making the deLaval nozzle from either a ceramic or copper; I'm still not sure which. My research has told me that copper will do fine for a combustion chamber. However, my worry is that the copper will act as a catalyst for the hydrogen peroxide and cause decomposition when I don't want it to happen. Another question is whether I really need a catalyst, since the combustion of the gasoline will be generating large amounts of heat.
I'm operating on a short budget, so obtaining all the pumps or mechanical parts to build a detailed engine is basically out of the question. One thing that would make the design easier is if I could pre-mix the hydrogen peroxide and the gasoline, which would eliminate the need for a pump(s). Another thing that I have been toying around with is the idea of burning a gasoline ice cube. I've tried freezing gasoline in my freezer, and its freezing point is apparently lower than that of water. If all else fails, I do have access to liquid nitrogen through a local university, but I'm hoping that dry ice or a better freezer will suffice. With the gasoline frozen, it could then be used as a solid fuel with an oxidizer, and I could just fill the combustion chamber with gasoline, freeze it, and then keep it frozen until it's time to launch.
For the project I will be measuring things such as burn rate, etc. If any of you have ideas for an alternate liquid fuel that I could compare the gasoline and hydrogen peroxide with, your suggestions will be much appreciated. I am two years ahead in math and a year ahead in science, so I actually have some idea of what I'm doing, but I've already been told not to do the project by three college professors. Also, I have taken an accelerated three-week course in aerospace engineering through the Duke TIP summer program, so I also have a background in that. If you have any suggestions as to different variables that I could test in this project, please be sure to mention them. All thoughts on the matter will be welcome.
Thanks
2. Sep 22, 2004
### enigma
Staff Emeritus
Hi Mahler,
Welcome to PF!
You probably want a thrust to weight ratio about 2.5-3. To get it that high, you'll need a pretty high chamber pressure.
Do you know the combustion temperature of the gas/H202? Metals lose strength at temperatures much lower than the melting point.
Yeah, that's commonly referred to as a 'bomb'
Without pumps you will have a big problem getting the fuel into the chamber to maintain the reaction. Once you fire it off, the pressure will rise in the combustion chamber, and unless your feed pressure is higher, the combustion gases will flow backwards into the fuel/oxidizer tanks and start a combustion there. This is commonly referred to as a 'bomb'.
VERY dangerous idea! This is commonly referred to as a 'bomb'.
If your fuel is solid how are you going to continue supplying more to the combustion chamber? The mass flows through rockets are pretty high, and unless you've got a huge (read: heavy) combustion chamber the fuel will be depleted before you get very far.
I hate to say it, but I'm going to chime in with them. Building a liquid rocket is no small task. It took the best engineers in the country years to get the first one working, and without the background knowledge given to you by people who have been in the field, you'd basically be starting from scratch in terms of experience. Even if you could get it working safely, there is practically no way you'll be able to do it in a year by yourself. What you're proposing is extremely advanced even for a multiple year, multiple person graduate level project with adequate funding. Everything needs to work correctly in rocketry or you get an explosion.
Here is what I would recommend to you:
Look into picking up some larger model rockets. Most of those use solid fuels, and all of the components are sized to prevent explosions. If the manufacture of components is a requirement for your project (and you're dead set on designing nozzles), I'd recommend building different expansion ratio nozzles and mounting them to the existing upper half of the motor. You can still test mass flow, thrust, specific impulse, etc. that way. The lower combustion temperatures of model rockets would mean you could probably get away with plastic, ceramic, or composite nozzles without melting them.
Here are a few other ideas you might try (I think these are probably more do-able, but I don't know your exact assignment):
* You might be able to bore small pressure taps into the nozzle and measure static pressure distributions along the length (this would probably need to be done in a static fire situation).
* If you've got moderate funding, you could try to get an accelerometer and a microprocessor to gather and store the acceleration profile of the rocket during a launch. Programming a microprocessor and getting valid data would really be impressive! Learning assembly language is tough! This would also test your soldering and electronics abilities and let you design a shock absorber for the electronics and a parachute deployment system to return the "black box" safely to the ground.
* You can talk to various university professors or maybe even a local distributor of FEMLAB (or similar) and model flow through various rocket nozzles. You can probably manage to borrow or beg a semester- or two-semester-long student license from someone. You can see how well the results correspond to theory, and you'll be able to see firsthand how the shape of the nozzle affects flow parameters. This would include (among other things): flow separation from the walls as you make the nozzle more and more bell-like, exit Mach number variations as the expansion ratio is varied, etc. Additionally, the pictures this project would give you would wow any presentation, and familiarity with FEMLAB would practically guarantee you a research position with a professor during your undergraduate years.
You're shooting for the stars right now. If I'm understanding your situation correctly, you need something that's reasonably attainable in a semester or two (and won't be the only thing you're working on, either). The main point of science projects is to learn something. The second point is to do something cool which interests you and you have fun with. Speaking from experience, for a project to be fun, it has to be do-able.
A book which you would probably find valuable is:
Space Propulsion Analysis and Design by Humble, Henry, and Larson.
It is an upper undergraduate level textbook, but you should be able to work your way through it if you're determined. It goes all the way from basic equations on rocket operation and fluid dynamics to specifics of electrical and nuclear propulsion motors.
Hope that helped some, without discouraging you too much! :tongue2:
Last edited: Sep 22, 2004
3. Sep 22, 2004
### Janitor
Not to cast a wet blanket on things, but have you considered aiming for merely a static test of your engine instead? At least as a first step? And in that case, obviously you will want to use a videocam for close observation of the firing, and probably sand bags or a bunker of some sort for protecting any humans who are in harm's way. Concentrated hydrogen peroxide is a famous killer, as German engineers discovered in the WWII years.
4. Sep 23, 2004
### Mahler765
Thanks for the Help
I'm used to discouragement by now so don't worry about it. Your suggestions for alternative variables are really great ideas. What exactly is FEMLAB? I'm in Birmingham, Alabama so I may even be able to find something like that in Huntsville with the Space and Rocket Center. I was planning on testing the engine just by itself while it was stationary to see if it would work the first place before I even tried installing it in the rocket. I'd be working closely with my math teacher who used to be a physicist for NASA and my uncle who's a chemical engineer. I think that I will try to do just a static test. That will eliminate the size limits of the engine and allow me to actually use materials that will provide me with something that works.
As to the gasoline being provided to the combustion chamber: my idea was to freeze the gasoline in the combustion chamber with a hole through the middle and just have the H2O2 flow through the hole to oxidize the gasoline.
Speaking of the Germans. I have a friend in Germany who is helping me do research and he found an engineer in Switzerland who builds hydrogen peroxide engines for small companies and we were using him as a source. And apparently, they sell the 30% concentration H2O2 over the counter in drug stores in Germany just as we sell the 3% concentration. I thought that was interesting.
So you are saying just to build the engine and get it to work, and then test it using different nozzles? I've researched a little on this and I found several different kinds of software that deal with this. As far as requirements for the project, there aren't any. I'm doing this by choice because it's what I'm interested in. Most people like doing the inferior biology projects where they just grow some kind of bacteria and see what happens when the conditions are changed a bit. I'll be researching your suggestions more thoroughly this weekend when I have a little more free time.
Thanks for your help.
5. Sep 23, 2004
### enigma
Staff Emeritus
FEMLAB is a finite element modeling package; see http://www.comsol.com/showroom/gallery/. A teammate used it to model fluid flow through various rocket nozzles for a design project I was working on. We were looking to find the smallest expansion ratio we could use and still have sonic flow at the throat.
What you are thinking of doing is very dangerous. That geometry provides what is called a 'progressive' burn. As the fuel burns, the area available for burning increases, which increases the rate of the reaction, which increases the pressure in the chamber, which increases the burn rate, which increases the area available for burning, which increases the rate of the reaction, which increases the pressure in the chamber, which blows out your combustion chamber, which leaves you with a big smoking crater where your lab used to be and gives you a trip to the hospital, morgue, or both in turn. That's not to mention what happens when the temperature in the chamber gets hot enough to melt the frozen gasoline and the entire thing starts to burn at once. Solid fuels don't transmit heat very well for a good reason.
No, I'm saying go out and buy some model rockets and test them. Once you get some data, then look into replacing certain parts and seeing what happens.
You're seriously shooting too high. Even if you're Wernher von Braun re-incarnated, I place heavy odds against you building something in a year. I'd even place odds against a team from Lockheed or Boeing getting a new design built in a year. If by some miracle you do get a working prototype built, what happens if it blows up? 20-30% of professionally built rockets blow up on the first launch, and that's with proper training, staffing, funding, production facilities, testing, and much more time than you have. It took Goddard years to build his first rocket, and he had much more training and experience than any of us.
It took 8 junior and senior aerospace engineering students (myself included) an entire semester to do preliminary engineering analysis on a rocket design. Our final report was nowhere even close to the point where we could build anything with it. We had to make tons of simplifying assumptions to get any results at all, because we didn't have the capability to do the preliminary experimental research needed.
BTW: historically, it has cost about 7.125 × (mass in kg)^0.55 million in 2002 US dollars to build a liquid fueled rocket from scratch.
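For a feel for that scaling law, here is a small Python sketch evaluating the cost fit quoted above (an editorial addition; the sample masses are arbitrary illustrations):

```python
def dev_cost_musd_2002(mass_kg):
    """Historical cost fit quoted above: millions of 2002 US dollars
    to develop a liquid-fueled rocket of the given mass from scratch."""
    return 7.125 * mass_kg ** 0.55

for mass in (100, 1000, 10000):   # example masses in kg
    print(mass, round(dev_cost_musd_2002(mass)))
# 100 kg -> ~90 M$, 1000 kg -> ~318 M$, 10000 kg -> ~1129 M$
```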
Last edited by a moderator: Apr 21, 2017 at 9:03 AM
6. Sep 23, 2004
### Staff: Mentor
Solid model rockets are a good idea (and a fun hobby). Even they are a serious undertaking. The Discovery channel is currently running a show on model rockets and the centerpiece of the show is a 15' high rocket that was launched to a height of about 30,000 feet. It took a small team of hobbyists a year or so to build. I don't want to encourage you to scare your chemistry teacher too much, but the larger solid rocket motors are hand-made by hobbyists and there is some experimentation that can be done there.
A word on this type of project: I don't know your specific situation, but I did a year-long project in 9th grade where my task was to design a fighter jet from scratch and test it in a wind tunnel (of my own construction). It took several months just to build the wind tunnel, and by the end of the year all I had was a few sketches and a concept for the plane. But I learned a lot about aerospace engineering (like, wtf is a "slug"!?) and that was still worthy of an A.
7. Sep 23, 2004
### sigma
wtf is a slug?
8. Sep 23, 2004
### Cliff_J
Find someone who's been unlucky enough to be around an exploded tire. Deaf, blind, missing a finger or more comes to mind. That's a simple rubber donut at 80-90psi and there's millions of them all designed to handle thousands of pounds of weight. But when they fail...
Even the solid rocket motors require licencing to build past a certain size and the licencing is a progression from smaller engines to larger ones.
I'd stick with the advice given above. Even a store bought rocket can have problems at launch and turn itself into a projectile. But at least its designed to not become a bomb as well.
Cliff
9. Sep 23, 2004
### Staff: Mentor
A slug is an English unit of mass apparently only used by aerospace engineers.
10. Sep 24, 2004
### Mahler765
As to the slug question: when I took the aerospace engineering course at Duke, we used it a lot. My understanding is that it's something equivalent to a newton. English units are used in all the aerospace equations in the atmosphere. So the air density at sea level is something like 2377 × 10^-6 slugs per cubic foot. Correct me if I'm wrong.
11. Sep 24, 2004
### Janitor
And of course Goddard had a wealthy benefactor in Daniel Guggenheim, through the advocacy of Charles Lindbergh.
In my late teens I had similar thoughts of building my own single stage liquid propellant rocket, but I never got beyond the point of doing some library research and sketching some ideas for the combustion chamber and the associated plumbing. I remember jotting down a list of materials that some library book said were compatible with various propellants. Concentrated peroxide had a very brief list of compatible metals and sealing materials!
Around that same time I came across a book by some German who was part of rocket development in the war years (Dornberger?), and I think it was in that book that I read about an Me-163 pilot whose corpse was basically melted from exposure to C-stoff, or whatever it was the Germans called hydrogen peroxide.
12. Sep 24, 2004
### enigma
Staff Emeritus
Other way around. A slug is a unit of mass like the kilogram. It is 32.2 times as large as a pound-mass.
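To make the conversion concrete, here is a short Python sketch (an editorial addition; the sea-level density figure is the one quoted earlier in the thread):

```python
G0 = 32.174  # standard gravity in ft/s^2; 1 slug = 32.174 lbm

def slugs_to_lbm(slugs):
    """Convert a mass in slugs to pounds-mass."""
    return slugs * G0

rho_sea_level = 2377e-6  # slug/ft^3, sea-level air density quoted above
print(slugs_to_lbm(rho_sea_level))  # ~0.0765 lbm/ft^3
```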
13. Oct 6, 2004
### Evil Knievel
Useful contact?
Mahler
Hi. Follow this link: http://www.aardvark.co.nz/pjet/spinning1.shtml . This guy actually builds pulse jet engines as a hobby, and he seems quite handy. This particular link shows how he built a metal spinning lathe with which he turned venturis. I am sure that you could get him interested.
Good luck!!
Knievel
Last edited by a moderator: Apr 21, 2017 at 9:18 AM
14. Oct 6, 2004
### Janitor
The lathe is surely the way to go for making parts that have an axis of symmetry. Does anybody know if underwater explosive forming is still used for giving metal a compound curve? Or is that an obsolete method?
15. Oct 13, 2004
### MaximumTaco
I've been reading through the many, many project logs on the Armadillo Aerospace website; they have a lot of interesting things regarding their work with H2O2 monopropellant and catalyst systems.
16. Jun 9, 2005
### ZA
Not to discourage you, but it is obvious that you have NO IDEA OF WHAT YOU ARE REALLY DOING! This may sound coincidental, but I too am a junior designing a liquid fuel rocket engine. I have done research and calculations on rockets and rocket engines for over a year now. I have also finished first-year college level physics (Mechanics and E&M) and first-year college level inorganic chemistry with high "A"s. I am also currently doing Calculus, so I know what I am talking about.
First of all, YOU NEED TO KNOW WHAT YOU'RE DOING! You need to know exactly the thrust that you're shooting for, the pressure your engine will operate at, and the burn temperature of your reactants at that pressure; consider a cooling mechanism that will prevent a meltdown, ESPECIALLY WITH COPPER (where you're looking at a meltdown at a measly 1083 Celsius); know the strength characteristics of the metal or alloy used at the temperatures operated at (which differ greatly with temperature for some metals); have a target mass for the engine; consider the oxidation of your engine at high temperatures and have it plated with an inert metal such as platinum (it's dirt cheap if you're putting on a very thin layer); work through the primitive versions of the DeLaval equations to get a rough idea of your engine's characteristics; and finally, FIGURE OUT THE FEED SYSTEM YOU WILL EMPLOY.
I strongly discourage you from using hydrogen peroxide as your oxidizer, as it is not the optimal oxidizer simply because most of the oxidizer's mass will end up as water and will not be involved in the main exothermic reaction. Instead, you should really try liquid oxygen, which is MUCH more effective as an oxidizer, doesn't spontaneously decompose, and it can pressurize your feed system. I realize there are problems in making and storing liquid oxygen, but the advantages more than compensate for the inconvenience.
Also, as already mentioned, premixing your reactants for a liquid fueled rocket engine is not only stupid, it's plain suicidal! If the heat from your combustion chamber gets conducted back to your fuel tank, a huge explosion will follow immediately, and shrapnel from the engine will kill anyone nearby! Trust me, your teachers know what they're talking about!
I don't know where you got the crazy idea of using copper for your rocket engine. Copper melts at 1083 Celsius under normal conditions, and at the temperatures you are operating at, even if you somehow prevent melting with a really good cooling mechanism, copper will simply be too plastic to withstand the pressures you're supposed to be aiming at. I am designing my engine to be made of either titanium or thin-walled tungsten. The greatest problem with these metals is machining, which I am trying to figure out now.
I also strongly discourage you from building a hybrid engine with frozen gasoline, for two reasons. First, your entire fuel supply will have to be stored in the combustion chamber, which will mean either a short flight duration or a huge combustion chamber to hold all of your fuel, which will mean a huge mass for your engine. Second, most of your gasoline will be located in a chamber at a temperature of a few thousand kelvin, which will melt and evaporate it at a rate MUCH faster than the stoichiometric-ratio flow of the oxidizer, which will result in an extremely inefficient engine.
I am planning for my engine to run on liquid hydrogen-oxygen, which gives an extremely high efficiency and a relatively low combustion temperature. Gaseous oxygen and hydrogen can be obtained easily and then liquefied via adiabatic compression and expansion. Using these components eliminates the need for a pump and simplifies the feed system by pressurizing the fuel tanks, which can be kept at the needed pressure with properly adjusted pressure valves.
If you have more questions, which I may or may not be able to answer, e-mail me at [email protected]
Last edited: Jun 9, 2005
17. Jun 9, 2005
### NateTG
Alternative approaches
Something else to consider is using a $H_2O_2$ single-fuel rocket, which can be much simpler. The standard approach is to have a nitrogen tank and use that to pressurize the $H_2O_2$. Then the peroxide is fed into the 'combustion' chamber where it mixes with a catalyst (usually silver screens).
Peroxide rockets are relatively inefficient, but quite simple, and are 'cold'. (I think somewhere around 600C.) They're used in the 'rocket packs' that occasionally show up in the movies.
18. Jun 9, 2005
### FredGarvin
I can't say I agree with recommending working with LOX to anyone who does not completely understand what they are doing.
|
# Search for violation of Lorentz invariance in top quark pair production and decay
Collaboration, D0 and Bertram, Iain and Borissov, Guennadi and Fox, Harald and Ross, Anthony and Williams, Mark and Ratoff, Peter (2012) Search for violation of Lorentz invariance in top quark pair production and decay. Physical review letters, 108 (26). ISSN 1079-7114
PDF: PhysRevLett.108.261603.pdf (Published Version)
## Abstract
Using data collected with the D0 detector at the Fermilab Tevatron Collider, corresponding to 5.3 fb$^{-1}$ of integrated luminosity, we search for violation of Lorentz invariance by examining the $t\bar{t}$ production cross section in lepton+jets final states. We quantify this violation using the standard-model extension framework, which predicts a dependence of the $t\bar{t}$ production cross section on sidereal time as the orientation of the detector changes with the rotation of the Earth. Within this framework, we measure components of the matrices $(c_Q)_{\mu\nu 33}$ and $(c_U)_{\mu\nu 33}$ containing coefficients used to parametrize violation of Lorentz invariance in the top quark sector. Within uncertainties, these coefficients are found to be consistent with zero.
Item Type: Journal Article
Journal or Publication Title: Physical Review Letters
Additional Information: © 2012 American Physical Society. 7 pages, 2 figures, submitted to Phys. Rev. Lett.
Uncontrolled Keywords: /dk/atira/pure/subjectarea/asjc/3100
ID Code: 63517
Deposited On: 19 Apr 2013 10:02
Refereed?: Yes
Published?: Published
|
# Find the values of k,if the equation $8x^2-16xy+ky^2-22x+34y=12$ represents an ellipse.
This question has multiple parts. Therefore each part has been answered as a separate question on Clay6.com
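As a quick editorial sketch (not from the original page): a general conic $Ax^2+Bxy+Cy^2+Dx+Ey=F$ is an ellipse when the discriminant $B^2-4AC$ is negative (degenerate cases aside), which pins down $k$ here. A minimal check in Python:

```python
# Conic 8x^2 - 16xy + k*y^2 - 22x + 34y = 12: an ellipse requires B^2 - 4AC < 0
A, B = 8, -16
for k in (7, 8, 9, 20):              # sample values of k
    disc = B**2 - 4 * A * k          # 256 - 32k
    print(k, disc, "ellipse" if disc < 0 else "not an ellipse")
# disc < 0  <=>  k > 8  (ignoring degenerate cases)
```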
|
# Error Messages to Fear
The oldest and strongest emotion of mankind is fear, and the oldest and strongest kind of fear is fear of the unknown.
Supernatural Horror in Literature, HP Lovecraft, 1927.
Security error messages appear to take pride in providing limited information. In particular, they are usually some generic IOException wrapping a generic security exception. There is some text in the message, but it is often `Failure unspecified at GSS-API level`, which means "something went wrong".
Generally a stack trace with UGI in it is a security problem, though it can be a network problem surfacing in the security code.
The underlying causes of problems are usually the standard ones of distributed systems: networking and configuration.
Some of the OS-level messages are covered in Oracle's Troubleshooting Kerberos docs.
Here are some of the common ones seen in Hadoop stack traces, and what we think are possible causes. That is: on one or more occasions, the listed cause was the one which, when corrected, made the stack trace go away.
## GSS initiate failed —no further details provided
    WARN ipc.Client (Client.java:run(676)) - Couldn't setup connection for [email protected] to /172.22.97.127:8020
      at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:558)

This is widely agreed to be one of the most useless of error messages you can see. The only ones that are worse than this are those which disguise a Kerberos problem, such as when ZK closes the connection rather than saying "it couldn't authenticate". If you see this error, work out which service it was trying to talk to —and look in its logs instead. Maybe, just maybe, there will be more information there.

## Server not found in Kerberos database (7) or service ticket not found in the subject

* DNS is a mess and your machine does not know its own name.
* Your machine has a hostname, but the service principal is a /_HOST wildcard and there is no entry in the keytab for that hostname.

We've seen this in the stdout of a NN:

    TGS_REQ { ... } UNKNOWN_SERVER: authtime 0, [email protected] for krbtgt/[email protected], Server not found in Kerberos database

## No valid credentials provided (Mechanism level: Illegal key size)

Your JVM doesn't have the extended cryptography package and can't talk to the KDC. Switch to OpenJDK, or go to your JVM supplier (Oracle, IBM), download the JCE extension package, and install it on the hosts where you want Kerberos to work.

## Encryption type AES256 CTS mode with HMAC SHA1-96 is not supported/enabled

    [javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: Failure unspecified at GSS-API level (Mechanism level: Encryption type AES256 CTS mode with HMAC SHA1-96 is not supported/enabled)]]

This has surfaced in the distant past. Assume it means the same as above: the JVM doesn't have the JCE JAR installed.

## No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)

This may appear in a stack trace starting with something like:

    javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]

It's very common, and essentially means "you weren't authenticated". Possible causes:

1. You aren't logged in via kinit.
2. You have logged in with kinit, but the tickets you were issued with have expired.
3. Your process was issued with a ticket, which has now expired.
4. You did specify a keytab, but it isn't there or is somehow otherwise invalid.
5. You don't have the Java Cryptography Extensions installed.
6. The principal isn't in the same realm as the service, so a matching TGT cannot be found. That is: you have a TGT, it's just for the wrong realm.
7. Your Active Directory tree has the same principal in more than one place in the tree.
8. Your cached ticket list has been contaminated with a realmless ticket, and the JVM is now unhappy. (See "The Principal With No Realm".)
9. The program you are running may be trying to log in with a keytab, but it got a Hadoop FileSystem instance before that login took place. Even if the service is now logged in, that FS instance is unauthenticated.

## Failure unspecified at GSS-API level (Mechanism level: Checksum failed)

One of the classics.

1. The password is wrong. A kinit command doesn't send the password to the KDC —it sends some hashed things to prove to the KDC that the caller has the password. If the password is wrong, so is the hash, hence an error about checksums.
2. There was a keytab, but it didn't work: the JVM has fallen back to trying to log in as the user.
3. Your keytab contains an old version of the keytab credentials, and cannot parse the information coming from the KDC, as it lacks the up-to-date credentials.
4. SPNEGO/REST: Kerberos is very strict about hostnames and DNS; this can somehow trigger the problem. See http://stackoverflow.com/questions/12229658/java-spnego-unwanted-spn-canonicalization.
5. SPNEGO/REST: Java 8 behaves differently from Java 6 and 7, which can cause problems (HADOOP-11628).

## javax.security.auth.login.LoginException: No password provided

When this surfaces in a server log, it means the server couldn't log in as the user. That is, there isn't an entry in the supplied keytab for that user, and the system (obviously) doesn't want to fall back to user-prompted password entry. Some of the possible causes:

* The wrong keytab was specified.
* The configuration key names used for specifying keytab or principal were wrong.
* There isn't an entry in the keytab for the user.
* The spelling of the principal is wrong.
* The hostname of the machine doesn't match that of a user in the keytab, so a match of service/host fails.

Ideally, services list the keytab and username at fault here. In a less than ideal world —that is the one we live in— things are sometimes less helpful. Here, for example, is a Zookeeper trace, saying it is the user null that is at fault.

    2015-12-15 17:16:23,517 - WARN  [main:SaslServerCallbackHandler@105] - No password found for user: null
    2015-12-15 17:16:23,536 - ERROR [main:ZooKeeperServerMain@63] - Unexpected exception, exiting abnormally
    java.io.IOException: Could not configure server because SASL configuration did not allow the ZooKeeper server to authenticate itself properly: javax.security.auth.login.LoginException: No password provided
        at org.apache.zookeeper.server.ServerCnxnFactory.configureSaslLogin(ServerCnxnFactory.java:207)
        at org.apache.zookeeper.server.NIOServerCnxnFactory.configure(NIOServerCnxnFactory.java:87)
        at org.apache.zookeeper.server.ZooKeeperServerMain.runFromConfig(ZooKeeperServerMain.java:111)
        at org.apache.zookeeper.server.ZooKeeperServerMain.initializeAndRun(ZooKeeperServerMain.java:86)
        at org.apache.zookeeper.server.ZooKeeperServerMain.main(ZooKeeperServerMain.java:52)
        at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:116)
        at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:78)

## javax.security.auth.login.LoginException: Unable to obtain password from user

Believed to be the same as the No password provided message.

    Exception in thread "main" java.io.IOException: Login failure for alice@REALM from keytab /etc/security/keytabs/spark.headless.keytab: javax.security.auth.login.LoginException: Unable to obtain password from user
        at org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytab(UserGroupInformation.java:962)
        at org.apache.spark.deploy.SparkSubmit$.prepareSubmitEnvironment(SparkSubmit.scala:564)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:154)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
The JVM Kerberos code needs the password for the user to log in to Kerberos with, but Hadoop has told it "don't ask for a password", so the JVM raises an exception.
Root causes should be the same as for the other message.
## failure to login using ticket cache file

You aren't logged in via kinit, and the application isn't configured to use a keytab. So: no ticket, no authentication, no access to cluster services.

You can use `klist -v` to show your current ticket cache.

Fix: log in with `kinit`.
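A sketch of automating that check (an editorial addition; `klist -s` is the standard MIT Kerberos flag that exits non-zero when there is no valid ticket cache):

```python
import subprocess

def has_valid_ticket() -> bool:
    """True if the current user has a non-expired Kerberos ticket cache.
    'klist -s' prints nothing and signals validity via its exit status."""
    return subprocess.run(["klist", "-s"]).returncode == 0

if not has_valid_ticket():
    print("No valid Kerberos ticket: run 'kinit' (or configure a keytab) first.")
```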
## Clock skew too great
    GSSException: No valid credentials provided
        (Mechanism level: Attempt to obtain new INITIATE credentials failed! (null))
    GSSException: No valid credentials provided (Mechanism level: Clock skew too great (37) - PROCESS_TGS
    kinit: krb5_get_init_creds: time skew (343) larger than max (300)
This comes from the clocks on the machines being too far out of sync.
This can surface if you are doing Hadoop work on some VMs and have been suspending and resuming them; they've lost track of when they are. Reboot them.
If it's a physical cluster, make sure that your NTP daemons are pointing at the same NTP server, one that is actually reachable from the Hadoop cluster. And that the timezone settings of all the hosts are consistent.
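As a crude way to measure one host's skew, here is a minimal Python sketch (an editorial addition, assuming outbound UDP port 123 is open; it sends a single SNTP query and compares the server's transmit timestamp against the local clock — the default Kerberos tolerance is 300 seconds):

```python
import socket, struct, time

NTP_DELTA = 2208988800  # seconds between the NTP epoch (1900) and Unix epoch (1970)

def clock_offset(server="pool.ntp.org"):
    """One-shot SNTP query; returns (server time - local time) in seconds."""
    packet = b"\x1b" + 47 * b"\0"          # LI=0, VN=3, Mode=3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(5)
        s.sendto(packet, (server, 123))
        data, _ = s.recvfrom(48)
    server_time = struct.unpack("!I", data[40:44])[0] - NTP_DELTA
    return server_time - time.time()

offset = clock_offset()
status = "within Kerberos tolerance" if abs(offset) < 300 else "WILL break Kerberos"
print(f"offset ~ {offset:.1f}s ({status})")
```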
## KDC has no support for encryption type
This crops up on the MiniKDC if you are trying to be clever about encryption types. It doesn't support many.
## GSSException: No valid credentials provided (Mechanism level: Fail to create credential. (63) - No service creds)
Rarely seen. Switching Kerberos to use TCP rather than UDP makes it go away.

In /etc/krb5.conf:

    [libdefaults]
      udp_preference_limit = 1
Note also UDP is a lot slower to time out.
## Receive timed out
Usually in a stack trace like:

    Caused by: java.net.SocketTimeoutException: Receive timed out
        at sun.security.krb5.KdcComm$KdcCommunication.run(KdcComm.java:390)
        at sun.security.krb5.KdcComm$KdcCommunication.run(KdcComm.java:343)
        at java.security.AccessController.doPrivileged(Native Method)
        at sun.security.krb5.KdcComm.send(KdcComm.java:327)
        at sun.security.krb5.KdcComm.send(KdcComm.java:219)
        at sun.security.krb5.KdcComm.send(KdcComm.java:191)
        at sun.security.krb5.KrbAsReqBuilder.send(KrbAsReqBuilder.java:319)
        at sun.security.krb5.KrbAsReqBuilder.action(KrbAsReqBuilder.java:364)
This means the UDP socket awaiting a response from KDC eventually gave up.
• The hostname of the KDC is wrong
• The IP address of the KDC is wrong
• There's nothing at the far end listening for requests.
• A firewall on either client or server is blocking UDP packets
Kerberos waits ~90 seconds before timing out, which is a long time to notice there's a problem.
Switch to TCP —at the very least, it will fail faster.
## javax.security.auth.login.LoginException: connect timed out
Happens when the system is set up to use TCP as an authentication channel, and the far end KDC didn't respond in time.
• The hostname of the KDC is wrong
• The IP address of the KDC is wrong
• There's nothing at the far end listening for requests.
• A firewall somewhere is blocking TCP connections
## GSSException: No valid credentials provided (Mechanism level: Connection reset)
We've seen this triggered in Hadoop tests after the MiniKDC threw an exception; its thread exited, and hence the Kerberos client got a connection error.
When you see this, assume network connectivity problems, or something up at the KDC itself.
## Principal not found
The hostname is wrong (or there is more than one hostname listed with different IP addresses) and so a principal of the form user/_HOST@REALM is coming back with the wrong host, and the KDC doesn't find it.
## Defective token detected (Mechanism level: GSSHeader did not find the right tag)
Seen during SPNEGO Authentication: the token supplied by the client is not accepted by the server.
This apparently surfaces in Java 8 version 8u40: if the Kerberos server doesn't support the first authentication mechanism which the client offers, then the client fails. Workaround: don't use those versions of Java.
This is now acknowledged by Oracle and has been fixed in 8u60.
## Specified version of key is not available (44)
Client failed to SASL authenticate:

    javax.security.sasl.SaslException:
     GSS initiate failed [Caused by GSSException: Failure unspecified at GSS-API level
      (Mechanism level: Specified version of key is not available (44))]
The meaning of this message —or how to fix it— is a mystery to all.
One possibility is that the keys in your keytab have expired. Did you know that can happen? It does. One day your cluster works happily. The next your client requests are failing, with this message surfacing in the logs.
    klist -kt zk.service.keytab
    Keytab name: FILE:zk.service.keytab
    KVNO Timestamp         Principal
    ---- ----------------- --------------------------------------------------------
       5 12/16/14 11:46:05 zookeeper/devix.cotham.uk@COTHAM
       5 12/16/14 11:46:05 zookeeper/devix.cotham.uk@COTHAM
       5 12/16/14 11:46:05 zookeeper/devix.cotham.uk@COTHAM
       5 12/16/14 11:46:05 zookeeper/devix.cotham.uk@COTHAM
The thing to look at there is the key version number in the KVNO column.
Oracle describe the JRE's handling of version numbers in their bug database.
From an account logged in to the system, you can look at the client's version number
## Kerberos credential has expired
Seen on an IBM JVM in HADOOP-9969
    javax.security.sasl.SaslException:
      Failure to initialize security context [Caused by org.ietf.jgss.GSSException, major code: 8, minor code: 0
        major string: Credential expired
        minor string: Kerberos credential has expired]
The kerberos ticket has expired and not been renewed.
Possible causes
• The renewer did start, but didn't try to renew in time.
• A JVM/Hadoop code incompatibility stopped renewing from working.
• Renewal failed for some other reason.
• The client was kinited in and the token expired.
• Your VM clock has jumped forward and the ticket now out of date without any renewal taking place.
## SASL No common protection layer between client and server
Not Kerberos, SASL itself
    16/01/22 09:44:17 WARN Client: Exception encountered while connecting to the server :
    javax.security.sasl.SaslException: DIGEST-MD5: No common protection layer between client and server
        at com.sun.security.sasl.digest.DigestMD5Client.checkQopSupport(DigestMD5Client.java:418)
        at com.sun.security.sasl.digest.DigestMD5Client.evaluateChallenge(DigestMD5Client.java:221)
        at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:558)
        at org.apache.hadoop.ipc.Client$Connection.access$1800(Client.java:373)
        at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:727)
        at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:723)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:722)
        at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:373)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
        at com.sun.proxy.$Proxy23.renewLease(Unknown Source)
        at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
## On windows: No authority could be contacted for authentication
Reported on Windows clients, especially related to the Hive ODBC client. This is Kerberos, just someone else's library.
1. Make sure that your system is happy in the AD realm, etc. Do this first.
2. Make sure you've configured the ODBC driver according to the instructions.
## During service startup java.lang.RuntimeException: Could not resolve Kerberos principal name: + unknown error
This is something which can arise in the logs of a service. Here, for example, is a datanode failure:

    Could not resolve Kerberos principal name: java.net.UnknownHostException: xubunty: xubunty: unknown error

This is not a Kerberos problem. It is a network problem being misinterpreted as a Kerberos problem, purely because it surfaces in security code which assumes that all failures must be Kerberos related.

    2016-04-06 11:00:35,796 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain
    java.io.IOException: java.lang.RuntimeException: Could not resolve Kerberos principal name: java.net.UnknownHostException: xubunty: xubunty: unknown error
        at org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:290)
        at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.<init>(DatanodeHttpServer.java:108)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:781)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1138)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:432)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2423)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2310)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2357)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2538)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2562)
    Caused by: java.lang.RuntimeException: Could not resolve Kerberos principal name: java.net.UnknownHostException: xubunty: xubunty: unknown error
        at org.apache.hadoop.security.AuthenticationFilterInitializer.getFilterConfigMap(AuthenticationFilterInitializer.java:90)
        at org.apache.hadoop.http.HttpServer2.getFilterProperties(HttpServer2.java:455)
        at org.apache.hadoop.http.HttpServer2.constructSecretProvider(HttpServer2.java:445)
        at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:340)
        ... 11 more
    Caused by: java.net.UnknownHostException: xubunty: xubunty: unknown error
        at java.net.InetAddress.getLocalHost(InetAddress.java:1505)
        at org.apache.hadoop.security.SecurityUtil.getLocalHostName(SecurityUtil.java:224)
        at org.apache.hadoop.security.SecurityUtil.replacePattern(SecurityUtil.java:192)
        at org.apache.hadoop.security.SecurityUtil.getServerPrincipal(SecurityUtil.java:147)
        at org.apache.hadoop.security.AuthenticationFilterInitializer.getFilterConfigMap(AuthenticationFilterInitializer.java:87)
        ... 14 more
    Caused by: java.net.UnknownHostException: xubunty: unknown error
        at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
        at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
        ... 18 more
    2016-04-06 11:00:35,799 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
    2016-04-06 11:00:35,806 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:

The root cause here was that the host xubunty, which a service was configured to start on, did not have an entry in /etc/hosts, nor DNS support. The attempt to look up the IP address of the local host failed.

The fix: add the short name of the host to /etc/hosts.
This example shows why errors reported as Kerberos problems, be they from the Hadoop stack or in the OS/Java code underneath, are not always Kerberos problems. Kerberos is fussy about networking; the Hadoop services have to initialize Kerberos before doing any other work. As a result, networking problems often surface first in stack traces belonging to the security classes, wrapped with exception messages implying a Kerberos problem. Always follow down to the innermost exception in a trace: it is the immediate symptom of the problem, while the layers above are attempts to interpret it, attempts which may or may not be correct.
## Against Active Directory: Realm not local to KDC while getting initial credentials
Nobody quite knows.
It's believed to be related to Active Directory cross-realm/forest stuff, but there are hints that it can also be raised when the Kerberos client is trying to auth with a KDC but supplying a hostname rather than the realm.
This may be because you have intentionally or unintentionally created a Disjoint Namespace.
If you read that article, you will get the distinct impression that even the Microsoft Active Directory team are scared of Disjoint Namespaces, and so are going to a lot of effort to convince you not to go there. It may seem poignant that even the developers of AD are scared of this, but consider that these are probably inheritors of the codebase, not the original authors, and the final support line for when things don't work. Their very position in the company means that they get the worst-of-the-worst Kerberos-related problems. If they say "Don't go there", it'll be based on experience of fielding those support calls and from having seen the Active Directory source code.
|
# Configuration file organization
When GRR is installed on a system, a system-wide, distribution-specific configuration file is also installed (by default /etc/grr/grr_server.yaml). This specifies the basic configuration of the GRR service (i.e., various distribution-specific locations). However, the configuration typically needs to be updated with site-specific parameters (for example, new crypto keys, or the Client.control_urls setting to allow the client to connect to the server).
In order to avoid overwriting user-customized configuration files when GRR is updated in the future, site-specific configuration is written to a location specified by the Config.writeback config option (by default /etc/grr/server.local.yaml). This local file contains only the parameters which have been changed from the defaults or from the system configuration file.
Note
The system configuration file is never modified, all updated configurations are always written to the writeback location. Do not edit the system configuration file as changes in this file will be lost when GRR is upgraded.
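To illustrate the layering, here is an editorial sketch of the precedence idea in Python (not GRR's actual loader; it assumes PyYAML is installed and uses the default paths named above):

```python
import yaml  # PyYAML, assumed available

def effective_config(system_path="/etc/grr/grr_server.yaml",
                     writeback_path="/etc/grr/server.local.yaml"):
    """The system file supplies defaults; the writeback file overrides them."""
    with open(system_path) as f:
        cfg = yaml.safe_load(f) or {}
    try:
        with open(writeback_path) as f:
            cfg.update(yaml.safe_load(f) or {})  # site-specific values win
    except FileNotFoundError:
        pass  # no local overrides written yet
    return cfg
```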
You can see all available configuration options and their current values in the “Settings” pane in GRR’s web UI.
|
# When a charged particle moves through a magnetic field it suffers a change in its
(A) energy (B) mass (C) speed (D) direction of motion
When a charged particle moves through a magnetic field it suffers a change in its direction of motion: the magnetic force $q\,\vec{v}\times\vec{B}$ is always perpendicular to the velocity, so it does no work and leaves the speed, energy, and mass unchanged.
Hence (D) is the correct option.
answered Apr 15, 2014
|
# Can you provide us calculations for an upper bound of $\frac{15}{\pi^2}ne^{H_n}\log(H_n)-e^{H_{n^2}}\log(H_{n^2})$?
In the question Regarding the $\sigma(n)$ function on Mathematics Stack Exchange, the following was stated in one of the answers, if I understand it correctly:
Claim (Fischer). Let $\sigma(n)=\sum_{d\mid n}d$ be the sum-of-divisors function. Then for any positive integer $n\geq 1$ one has $$\frac{\sigma(n^2)}{n\sigma(n)}< \frac{\zeta(2)}{\zeta(4)}=\frac{15}{\pi^2}.$$ It is possible to improve this bound.
Thus, if I understand Lagarias's claim correctly (specifically the last of his comments about Kaneko's statement, on page 542 of Lagarias, An Elementary Problem Equivalent to the Riemann Hypothesis, American Mathematical Monthly, June-July 2002; I currently understand the proofs of Lagarias's lemmas, but not his whole paper), I can combine the previous claims to state that for $n$ sufficiently large (around $n>60$) $$\sigma(n^2)< \frac{15}{\pi^2}ne^{H_n}\log H_n,$$ where $$H_k=1+\frac{1}{2}+\ldots+\frac{1}{k}$$ denotes the $k$th harmonic number; in particular, for an integer $n\geq 1$, $$H_{n^2}=\sum_{k=1}^{n^2}\frac{1}{k}=1+\frac{1}{2}+\frac{1}{3}+\ldots+\frac{1}{n^2}.$$ My question is:
Question. Can you provide an upper bound for the difference $$\frac{15}{\pi^2}ne^{H_n}\log(H_n)-e^{H_{n^2}}\log(H_{n^2})$$ for large positive integers $n$? (I am assuming the difference is positive; I don't know how to prove it.) I am looking for calculations that I can learn from and study, so it isn't necessary to give the best bound, or to work with the sharpest form of Fischer's claim. Thanks in advance.
• I will also try to understand the cited question/answer on this site (MSE). I hope my question above makes mathematical sense; I am also interested in whether one can combine the multiplicativity of the sum-of-divisors function with calculations involving the Robin, Lagarias, or Kaneko equivalences. But that is a different question. – user243301 Aug 9 '16 at 17:39
• Please, all users are welcome to explore my ideas and exploit them, if any turn out to be useful. I hope these ideas/calculations aren't the worst; I don't currently have strong mathematical abilities, but I believe in effort, and I always try to start from solid ground, which is why I use the best references. – user243301 Aug 9 '16 at 19:26
Since $\frac{1}{x}$ is decreasing, we have $\int_n^{n+1} \frac{1}{x}\,dx \leq \frac{1}{n} \leq \int_{n-1}^n \frac{1}{x}\,dx$, so: $$\log(n+1) < \sum_{k=1}^n \frac{1}{k}=H_n < \log(n)+1$$ Thus, since $\log$ and $\exp$ are increasing: $$\frac{15}{\pi^2}n(n+1)\log (\log(n+1))\leq \frac{15}{\pi^2}ne^{H_n} \log(H_n) \leq \frac{15}{\pi^2}en^2 \log(\log(n)+1)$$ $$(n^2+1) \log(\log(n^2+1))\leq e^{H_{n^2}} \log(H_{n^2}) \leq en^2\log(\log(n^2)+1)$$ Taking the difference of the outer bounds gives upper and lower bounds on your expression. Not sure if this was what you're looking for?
• I did the calculations and verified the bounds on $H_n$ in your first claim; the other calculations I also understood well. Thus I accept your answer; truly, this kind of calculation is something I should be able to do myself. Thank you very much, and thanks for the patience of all users. – user243301 Aug 9 '16 at 18:35
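A quick numerical sanity check of those bounds (an editorial sketch; the sample values of $n$ are arbitrary):

```python
from math import exp, log, pi

def H(n):
    """n-th harmonic number H_n = 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / k for k in range(1, n + 1))

for n in (100, 1000):
    diff = (15 / pi**2) * n * exp(H(n)) * log(H(n)) - exp(H(n * n)) * log(H(n * n))
    # Upper bound from the answer: upper bound of the first term minus
    # lower bound of the second term.
    upper = (15 / pi**2) * exp(1) * n**2 * log(log(n) + 1) \
            - (n**2 + 1) * log(log(n**2 + 1))
    print(n, diff > 0, diff <= upper)   # both True for these samples
```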
In the answer to this MSE question, Will Jagy hints that an upper bound for $\sigma(n^2)/\left(n\sigma(n)\right)$ is given by $$\dfrac{\sigma(n^2)}{n\sigma(n)} < \dfrac{\zeta(2)}{\zeta(3)} \approx 1.3684327776\ldots.$$
This improves on Fischer's upper bound of $$\dfrac{\zeta(2)}{\zeta(4)} \approx 1.519817754635\ldots.$$
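One can probe both constants numerically (an editorial sketch; the search range is arbitrary, and $\zeta(3)$ is hard-coded as Apéry's constant):

```python
from math import pi

def sigma(n):
    """Sum of divisors of n, by trial division up to sqrt(n)."""
    s, d = 0, 1
    while d * d <= n:
        if n % d == 0:
            s += d + (n // d if d * d != n else 0)
        d += 1
    return s

ZETA3 = 1.2020569031595943           # Apery's constant, zeta(3)
bound_jagy    = (pi**2 / 6) / ZETA3  # zeta(2)/zeta(3) ~ 1.3684
bound_fischer = 15 / pi**2           # zeta(2)/zeta(4) ~ 1.5198

worst = max(sigma(n * n) / (n * sigma(n)) for n in range(1, 5001))
print(worst, worst < bound_jagy < bound_fischer)  # True for this range
```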
|
# Planet X has a mass of M and a radius of R. Planet Y has a mass of 3M and a radius of 3R. Identical satellites orbit both planets at a distance R above their surfaces, as shown above. The planets are separated by such a large distance that the gravitational forces between them are negligible.
## Physics AP Classroom part A — study set answers
* 3/4 F_0
* Meterstick and timer
* Experiment 1 only
* Yes. Other external forces are exerted on the planet, but they are of negligible magnitude.
* Place the object on the disk and measure the distance from the center of the disk to the center of mass of the object by using a meterstick. Slowly increase the rate at which the disk rotates until the object begins to slide off the disk. Record the time.
* F_Y = (3/4) F_X
* g_X = 3 g_Y
* M_1, M_2, and a_0
* A proton and a neutron located 1.0 mm apart
* (T_1 + T_2) / 2M
* Accelerometer; force sensor
* M_rocket = 1 kg, F_thrusters = 12 N; M_rocket = 3 kg, F_thrusters = 36 N
* 20 m/s^2
* A satellite is in orbit around a planet. An object is in free fall just after it is released from rest.
* No, because the slope of the curve of the graph indicates that the acceleration is less than g, which indicates that a force other than gravity is exerted on the object.
* No, because the net centripetal force exerted on the ball is the combination of the tension force from the string and the force due to gravity from Earth.
* F_gravity, directed toward the planet
* 1×10^19 N
* There is another celestial body that exerts a gravitational force on the moon.
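The two planet X/Y results above follow directly from Newton's law of gravitation; here is a minimal Python check (an editorial sketch; the numeric values of G, M, m, and R are arbitrary, since only the ratios matter):

```python
G, M, m, R = 6.674e-11, 5.0e24, 1.0e3, 6.0e6   # placeholder values

F_X = G * M * m / (2 * R) ** 2        # satellite R above planet X's surface: orbital radius 2R
F_Y = G * (3 * M) * m / (4 * R) ** 2  # satellite R above planet Y's surface (radius 3R): orbital radius 4R
print(F_Y / F_X)                      # 0.75, i.e. F_Y = (3/4) F_X

g_X = G * M / R ** 2                  # surface gravity of X
g_Y = G * (3 * M) / (3 * R) ** 2      # surface gravity of Y
print(g_X / g_Y)                      # 3.0, i.e. g_X = 3 g_Y
```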
## 13.4 Satellite Orbits and Energy
### Circular Orbits
As noted at the beginning of this chapter, Nicolaus Copernicus first suggested that Earth and all other planets orbit the Sun in circles. He further noted that orbital periods increased with distance from the Sun. Later analysis by Kepler showed that these orbits are actually ellipses, but the orbits of most planets in the solar system are nearly circular. Earth’s orbital distance from the Sun varies a mere 2%. The exception is the eccentric orbit of Mercury, whose orbital distance varies nearly 40%.
Determining the orbital speed and orbital period of a satellite is much easier for circular orbits, so we make that assumption in the derivation that follows. As we described in the previous section, an object with negative total energy is gravitationally bound and therefore is in orbit. Our computation for the special case of circular orbits will confirm this. We focus on objects orbiting Earth, but our results can be generalized for other cases.
Consider a satellite of mass m in a circular orbit about Earth at distance r from the center of Earth ((Figure)). It has centripetal acceleration directed toward the center of Earth. Earth's gravity is the only force acting, so Newton's second law gives $$\frac{GmM_{\text{E}}}{r^2} = ma_c = \frac{mv_{\text{orbit}}^2}{r}.$$
Figure 13.12 A satellite of mass m orbiting at radius r from the center of Earth. The gravitational force supplies the centripetal acceleration.
We solve for the speed of the orbit, noting that m cancels, to get the orbital speed $$v_{\text{orbit}} = \sqrt{\frac{GM_{\text{E}}}{r}}.$$
Consistent with what we saw earlier for the escape velocity, m does not appear in this equation. The value of g, the escape velocity, and the orbital velocity depend only upon the distance from the center of the planet, and not upon the mass of the object being acted upon. Notice the similarity in the equations for $v_{\text{orbit}}$ and $v_{\text{esc}}$. The escape velocity is exactly $\sqrt{2}$ times greater, about 40%, than the orbital velocity. This comparison was noted earlier, and it is true for a satellite at any radius.
To find the period of a circular orbit, we note that the satellite travels the circumference of the orbit $2\pi r$ in one period T. Using the definition of speed, we have $v_{\text{orbit}}=2\pi r/T$. We substitute this into the equation for the orbital speed and rearrange to get $$T = 2\pi\sqrt{\frac{r^3}{GM_{\text{E}}}}.$$
We see in the next section that this represents Kepler’s third law for the case of circular orbits. It also confirms Copernicus’s observation that the period of a planet increases with increasing distance from the Sun. We need only replace $M_{\text{E}}$ with $M_{\text{Sun}}$ in (Figure).
We conclude this section by returning to our earlier discussion about astronauts in orbit appearing to be weightless, as if they were free-falling towards Earth. In fact, they are in free fall. Consider the trajectories shown in (Figure). (This figure is based on a drawing by Newton in his Principia and also appeared earlier in Motion in Two and Three Dimensions.) All the trajectories shown that hit the surface of Earth have less than orbital velocity. The astronauts would accelerate toward Earth along the noncircular paths shown and feel weightless. (Astronauts actually train for life in orbit by riding in airplanes that free fall for 30 seconds at a time.) But with the correct orbital velocity, Earth’s surface curves away from them at exactly the same rate as they fall toward Earth. Of course, staying the same distance from the surface is the point of a circular orbit.
Figure 13.13 A circular orbit is the result of choosing a tangential velocity such that Earth’s surface curves away at the same rate as the object falls toward Earth.
We can summarize our discussion of orbiting satellites in the following Problem-Solving Strategy.
### Example
#### The International Space Station
Determine the orbital speed and period for the International Space Station (ISS).
#### Strategy
Since the ISS orbits $4.00 \times 10^2\ \text{km}$ above Earth’s surface, the radius at which it orbits is $R_{\text{E}} + 4.00 \times 10^2\ \text{km}$. We use (Figure) and (Figure) to find the orbital speed and period, respectively.
#### Solution
Using (Figure), the orbital velocity is $v_{\text{orbit}} = \sqrt{\frac{G M_{\text{E}}}{r}} = 7.67 \times 10^3\ \text{m/s},$
which is about 17,000 mph. Using (Figure), the period is $T = 2\pi \sqrt{\frac{r^3}{G M_{\text{E}}}} = 5.55 \times 10^3\ \text{s},$
which is just over 90 minutes.
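The numbers in this example are easy to reproduce programmatically. The following C++ sketch assumes standard values for $G$, $M_{\text{E}}$, and $R_{\text{E}}$; these constants are assumptions of the sketch, as the text above does not quote them explicitly:

```cpp
// Reproduce the ISS example: v_orbit = sqrt(G*M_E/r), T = 2*pi*r/v_orbit.
// The constants below are standard reference values (assumed, not quoted above).
#include <cmath>
#include <cstdio>

int main() {
    const double pi  = 3.14159265358979323846;
    const double G   = 6.674e-11;  // gravitational constant [N m^2 kg^-2]
    const double M_E = 5.97e24;    // mass of Earth [kg]
    const double R_E = 6.37e6;     // mean radius of Earth [m]
    const double h   = 4.00e5;     // ISS altitude above the surface [m]

    const double r = R_E + h;                 // orbital radius from Earth's center
    const double v = std::sqrt(G * M_E / r);  // orbital speed
    const double T = 2.0 * pi * r / v;        // orbital period

    std::printf("v_orbit = %.3e m/s (~%.0f mph)\n", v, v * 2.23694);
    std::printf("T       = %.3e s (~%.1f min)\n", T, T / 60.0);
    return 0;
}
```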
#### Significance
The ISS is considered to be in low Earth orbit (LEO). Nearly all satellites are in LEO, including most weather satellites. GPS satellites, at about 20,000 km, are considered medium Earth orbit. The higher the orbit, the more energy is required to put it there and the more energy is needed to reach it for repairs. Of particular interest are the satellites in geosynchronous orbit. All fixed satellite dishes on the ground pointing toward the sky, such as TV reception dishes, are pointed toward geosynchronous satellites. These satellites are placed at the exact distance, and just above the equator, such that their period of orbit is 1 day. They remain in a fixed position relative to Earth’s surface.
By what factor must the radius change to reduce the orbital velocity of a satellite by one-half? By what factor would this change the period?
In (Figure), the radius appears in the denominator inside the square root. So the radius must increase by a factor of 4, to decrease the orbital velocity by a factor of 2. The circumference of the orbit has also increased by this factor of 4, and so with half the orbital velocity, the period must be 8 times longer. That can also be seen directly from (Figure).
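This scaling argument can be checked numerically; here is a short C++ sketch in arbitrary units (the value chosen for $\mu = GM$ is arbitrary and only the ratios matter):

```cpp
// Check: v ~ r^(-1/2) and T ~ r^(3/2), so r -> 4r halves v and makes T 8x longer.
#include <cmath>
#include <cstdio>

int main() {
    const double pi = 3.14159265358979323846;
    const double mu = 1.0;                 // mu = G*M, arbitrary units
    const double r1 = 1.0, r2 = 4.0 * r1;  // radius increased by a factor of 4
    const double v1 = std::sqrt(mu / r1), v2 = std::sqrt(mu / r2);
    const double T1 = 2.0 * pi * r1 / v1, T2 = 2.0 * pi * r2 / v2;
    std::printf("v2/v1 = %.3f (expect 0.5), T2/T1 = %.3f (expect 8)\n",
                v2 / v1, T2 / T1);
    return 0;
}
```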
### Example
#### Determining the Mass of Earth
Determine the mass of Earth from the orbit of the Moon.
#### Strategy
We use (Figure), solve for $M_{\text{E}}$, and substitute for the period and radius of the orbit. The radius and period of the Moon’s orbit were measured with reasonable accuracy thousands of years ago. From the astronomical data in Appendix D, the period of the Moon is 27.3 days $= 2.36 \times 10^6\ \text{s}$, and the average distance between the centers of Earth and the Moon is 384,000 km.
#### Solution
Solving for $M_{\text{E}}$, $M_{\text{E}} = \frac{4\pi^2 r^3}{G T^2} = 6.01 \times 10^{24}\ \text{kg}.$
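As a cross-check, this expression can be evaluated in a few lines of C++; the value of $G$ is a standard constant assumed by the sketch, not given in the text:

```cpp
// Mass of Earth from the Moon's orbit: M_E = 4*pi^2*r^3 / (G*T^2).
#include <cstdio>

int main() {
    const double pi = 3.14159265358979323846;
    const double G  = 6.674e-11;  // gravitational constant [N m^2 kg^-2] (assumed)
    const double r  = 3.84e8;     // Earth-Moon center-to-center distance [m]
    const double T  = 2.36e6;     // lunar orbital period [s] (27.3 days)

    const double M_E = 4.0 * pi * pi * r * r * r / (G * T * T);
    std::printf("M_E = %.3e kg\n", M_E);  // prints ~6.01e24 kg
    return 0;
}
```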
#### Significance
Compare this to the value of $5.96 \times 10^{24}\ \text{kg}$ that we obtained in (Figure), using the value of g at the surface of Earth. Although these values are very close (~0.8%), both calculations use average values. The value of g varies from the equator to the poles by approximately 0.5%. But the Moon has an elliptical orbit in which the value of r varies just over 10%. (The apparent size of the full Moon actually varies by about this amount, but it is difficult to notice through casual observation as the time from one extreme to the other is many months.)
There is another consideration to this last calculation of $M_{\text{E}}$. We derived (Figure) assuming that the satellite orbits around the center of the astronomical body at the same radius used in the expression for the gravitational force between them. What assumption is made to justify this? Earth is about 81 times more massive than the Moon. Does the Moon orbit about the exact center of Earth?
The assumption is that the orbiting object is much less massive than the body it is orbiting. This is not really justified in the case of the Moon and Earth. Both Earth and the Moon orbit about their common center of mass. We tackle this issue in the next example.
### Example
#### Galactic Speed and Period
Let’s revisit (Figure). Assume that the Milky Way and Andromeda galaxies are in a circular orbit about each other. What would be the velocity of each and how long would their orbital period be? Assume the mass of each is 800 billion solar masses and their centers are separated by 2.5 million light years.
#### Strategy
We cannot use (Figure) and (Figure) directly because they were derived assuming that the object of mass m orbited about the center of a much larger planet of mass M. We determined the gravitational force in (Figure) using Newton’s law of universal gravitation. We can use Newton’s second law, applied to the centripetal acceleration of either galaxy, to determine their tangential speed. From that result we can determine the period of the orbit.
#### Solution
In (Figure), we found the force between the galaxies to be $F = \frac{G m^2}{r^2} \approx 3.0 \times 10^{29}\ \text{N}$
and that the acceleration of each galaxy is $a = \frac{F}{m} \approx 1.9 \times 10^{-13}\ \text{m/s}^2.$
Since the galaxies are in a circular orbit, they have centripetal acceleration. If we ignore the effect of other galaxies, then, as we learned in Linear Momentum and Collisions and Fixed-Axis Rotation, the common center of mass of the two galaxies remains fixed. Hence, the galaxies must orbit about this common center of mass. For equal masses, the center of mass is exactly halfway between them. So the radius of the orbit, $r_{\text{orbit}}$, is not the same as the distance between the galaxies, but one-half that value, or 1.25 million light-years. These two different values are shown in (Figure).
Figure 13.14 The distance between two galaxies, which determines the gravitational force between them, is r, and is different from $r_{\text{orbit}}$, which is the radius of orbit for each. For equal masses, $r_{\text{orbit}} = \frac{1}{2} r$. (credit: modification of work by Marc Van Norden)
Using the expression for centripetal acceleration, we have $a = \frac{v_{\text{orbit}}^2}{r_{\text{orbit}}}.$
Solving for the orbit velocity, we have $v_{\text{orbit}} = 47\ \text{km/s}$. Finally, we can determine the period of the orbit directly from $T = 2\pi r_{\text{orbit}} / v_{\text{orbit}}$, to find that the period is $T = 1.6 \times 10^{18}\ \text{s}$, about 50 billion years.
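The whole chain of calculations in this example can be verified numerically; in the C++ sketch below, the conversion factors for the solar mass and the light-year are standard values assumed by the sketch:

```cpp
// Galactic example: force, acceleration, orbital speed, and period for two
// equal galaxies orbiting their common center of mass.
#include <cmath>
#include <cstdio>

int main() {
    const double pi    = 3.14159265358979323846;
    const double G     = 6.674e-11;  // gravitational constant [N m^2 kg^-2]
    const double M_sun = 1.989e30;   // solar mass [kg] (assumed conversion)
    const double ly    = 9.461e15;   // light-year [m] (assumed conversion)

    const double m = 800e9 * M_sun;  // 800 billion solar masses
    const double r = 2.5e6 * ly;     // separation between the two centers

    const double F = G * m * m / (r * r);     // mutual gravitational force
    const double a = F / m;                   // acceleration of each galaxy
    const double r_orbit = 0.5 * r;           // each orbits the common CM
    const double v = std::sqrt(a * r_orbit);  // from a = v^2 / r_orbit
    const double T = 2.0 * pi * r_orbit / v;  // orbital period

    std::printf("F = %.2e N, a = %.2e m/s^2\n", F, a);
    std::printf("v = %.1f km/s, T = %.2e s\n", v / 1e3, T);
    return 0;
}
```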
#### Significance
The orbital speed of 47 km/s might seem high at first. But this speed is comparable to the escape speed from the Sun, which we calculated in an earlier example. To give even more perspective, this period is nearly four times longer than the time that the Universe has been in existence.
In fact, the present relative motion of these two galaxies is such that they are expected to collide in about 4 billion years. Although the density of stars in each galaxy makes a direct collision of any two stars unlikely, such a collision will have a dramatic effect on the shape of the galaxies. Examples of such collisions are well known in astronomy.
|
# Tag Info
19
The question seems somewhat under-specified in the sense that it did not specify the desired error probability of the procedure. Assuming one means constant error probability, then the above is indeed the best I know. For a detailed discussion see Sec 2.5.2.4 in my book "The Foundations of Cryptography - Volume 1" available at http://www.wisdom.weizmann.ac....
12
Recent work by Alon, Moran, and Yehudayoff gives an $O(n/\log n)$ approximation algorithm. Let $d$ be the VC-dimension of a sign matrix $S$. The idea is that there exists an efficiently computable matrix $M$ with sign pattern $S$ such that $\mathrm{rank}\ M = O(n^{1-1/d})$; the sign rank of $S$ is at least $d$. So the algorithm computes $M$ and outputs ...
11
I will formalize a variant of this question where "efficiency" is replaced by "computability". Let $C_n$ be the concept class of all languages $L\subseteq\Sigma^*$ recognizable by Turing machines on $n$ states or fewer. In general, for $x\in\Sigma^*$ and $f\in C_n$, the problem of evaluating $f(x)$ is undecidable. However, suppose we have access to a (...
10
PAC comes in two flavors -- "information theoretic PAC" and "efficient PAC." The latter asks for computational efficiency whereas the former cares only about sample size. One usually understands which is referred to from context. Indeed, it is not known whether (efficient) PAC learning is NP-hard in general, but results on the cryptographic hardness of ...
10
My thanks to Aryeh for bringing this question to my attention. As others have mentioned, the answer to (1) is Yes, and the simple method of Empirical Risk Minimization in $\mathcal{C}$ achieves the $O((d/\varepsilon)\log(1/\varepsilon))$ sample complexity (see Vapnik and Chervonenkis, 1974; Blumer, Ehrenfeucht, Haussler, and Warmuth, 1989). As for (2), ...
10
In the classic PAC learning (i.e., classification) model, rare instances are not a problem. This is because the learner's test points are assumed to come from the same distribution as the training data. Thus, if a region of space is so sparse as to be poorly represented in the training sample, its probability of appearing during the test phase is low. You'...
9
We know something close to what you want. If you look at Ke Yang's "Honest Statistical Queries" -- there is no noise at all, but only "sampling error". In this model, you pass in a parameter $t$, and the Oracle takes $t$ samples, honestly evaluates the passed-in function (onto {0,1}), and returns the average value of the function on the samples. In ...
9
The main thing missing from your list is the beautiful 2006 paper of Klivans and Sherstov. They show there that PAC-learning even depth-2 threshold circuits is as hard as solving the approximate shortest vector problem.
9
Let me clarify the question a bit first: Agnostic learning of conjunctions is known to be NP-hard only if the learner needs to be proper (output a conjunction as a hypothesis) and work for any input distribution. The reductions in FGKP06 are for the uniform distribution and to the best of my knowledge there is no similar result for general distributions. But ...
8
What you describe is a non-stochastic version of the "functional multi-arm bandit problem": you know you have an unknown function from some class C (does not have to be randomly selected), and you have query access to this function. The goal is to find the element which maximizes the function. As you say, depending on the class C, this may or may not require ...
7
Here is a better bound on the sample complexity. (Although the computational complexity is still $n^k$.) Theorem. Assume there exists a subcube $S$ of size $2^{n-k}$ such that $|\mathbb{E}_{x \in S}[f(x)]| \geq 0.12$. With $O(2^k \cdot k \cdot \log n)$ samples we can, with high probability, identify a subcube $S'$ of size $2^{n-k}$ such that $|\mathbb{E}_{x ...
7
For the very limited situation in which $k=1$ and we're only interested in functions whose domain is $\{0,1\}^n$, the VC dimension is $\lfloor \log_2(n+1) + 1 \rfloor$. The upper bound follows essentially from Sasho's argument: there are $2+2n$ 1-juntas on $n$ variables, and if there were more than $\log_2$ of that many inputs, we could find a function that ...
7
Second version, hopefully correct. I claim that solving the feasibility problem $\exists? x: Ax \le b$ reduces in strongly polynomial time to finding a linear separator. Then it's easy to reduce linear programming to the feasibility problem. Let us first reduce the strict feasibility problem $\exists? x: Ax < b$ to finding a linear separator. Towards ...
6
You are probably looking for this paper: Víctor Dalmau and Peter Jeavons, Learnability of quantified formulas, TCS 306 485–511, 2003. doi:10.1016/S0304-3975(03)00342-6 In short, the learning complexity of a family of quantified formulas over a finite domain of values is determined by its clone of polymorphisms. This includes CSPs as a special case of ...
6
I guess you already had a look at Breiman's 2001 paper about RF. I can just point out a few other references: Empirical comparisons of different RF simplifications that allow proving theorems: Narrowing the Gap: Random Forests In Theory and In Practice. This is the newest reference I can provide. In this paper you can also find some citations of Biau's ...
6
First, let's distinguish between empirical and expected Rademacher complexities. The former is defined for a function class $F$ and sequence of points $X_1,\ldots, X_n$, by $$\hat R_n(F;X_1,\ldots,X_n) = E_\sigma \sup_{f\in F}\frac1n\sum_{i=1}^n \sigma_i f(X_i).$$ The latter is defined for a function class $F$ and distribution $D$, by $$R_n(F;D) = E_{(X_1,\...
6
Your questions (1) and (2) are related. First, let's talk about proper PAC learning. It is known that there are proper PAC learners that achieve zero sample error, and yet require $\Omega(\frac{d}{\epsilon}\log\frac1\epsilon)$ examples. For a simple proof of the $\epsilon$ dependence, consider the concept class of intervals $[a,b]\subseteq[0,1]$ under the ...
6
This was proven in M. Talagrand. Sharper bounds for Gaussian and empirical processes. The Annals of Probability, pages 28–76, 1994. This is mentioned in e.g. this paper (Section 1.1.2), which does a pretty good job (in my opinion) of summarizing the landscape.
6
It is not hard to see that without additional stability assumptions one won't be able to get high probability bounds. For example, consider predicting an unbiased coin using the majority label in the sample. With probability $\sim 1/\sqrt{n}$ we get that the majority of leave-one-out is exactly the opposite of the excluded point, so LOO will give an error of 1. Note that the ...
6
Another good introductory book is "Foundations of Machine Learning" by Mohri et al.: https://www.amazon.com/Foundations-Machine-Learning-Mehryar-Mohri/dp/0262039400/. It has a large overlap with the Shai and Shai book, but also quite a bit of content that they don't cover. There are also good books and surveys on more advanced or specialized topics: ...
5
I don't have a solution to this problem, but the analogous case where the two distributions are discrete has been analyzed in the cryptographic literature. Suppose we want to distinguish between two distributions $\mathcal{D}_0, \mathcal{D}_1$, where these two distributions are "close". Suppose we have $n$ observations (i.e., a sequence of $n$ numbers ...
5
Following Simone's answer, Gerard Biau has several very good papers looking at convergence and consistency for random forests. The analyses are for slightly simplified versions of the algorithm compared to Breiman 2001, but less simplified than previous results. Biau's papers (along with his collaborators) are all available on his website: http://www....
5
Rather than computing the VC dimension of a particular function class, it's usually more interesting to understand how generic properties of a function class relate to its VC dimension. For example, function spaces with linear dimension $d$ have VC-dim at most $d$. You can also bound the VC-dim of a function class realized by circuits with bounded depth/...
5
Depth-2 TC0 probably can't be PAC learned in subexponential time over the uniform distribution with random oracle access. I don't know of a reference for this, but here's my reasoning: We know that parity is only barely learnable, in the sense that the class of parity functions is learnable in itself, but once you do just about anything to it (such as ...
5
What you are saying is that given $N$ random samples one cannot simulate an algorithm that makes $T$ queries to VSTAT($N$). If the $T$ queries are chosen adaptively then one might need more samples (the best upper bound is $\sqrt{T} N$ samples). This is true but not an issue for the planted clique paper you mentioned. That paper is concerned with proving a ...
5
Write $p=p_0=1-q$. We may assume that $\epsilon<\eta \le p\le 1/2$. Then the sample complexity is of order $\log(1/\delta)$ times the reciprocal of the relative entropy $D((p,q)\,||\,(p+\epsilon,q-\epsilon))$. This yields sample complexity $\Theta(p\epsilon^{-2}\log(1/\delta))$.
5
Yuval Peres gave the answer in terms of the Kullback-Leibler divergence. Another way is to recall that the sample complexity will be captured by the inverse of the squared Hellinger distance between the two coins. Now, letting $D_p$ and $D_{p+\varepsilon}$ be the distributions of a Bernoulli random variable with parameter $p$ and $p+\varepsilon$ ...
5
To add to the currently accepted answer: Yes. The $$O\left(\frac{d}{\varepsilon}\log\frac{1}{\varepsilon}\right)$$ sample complexity upper bound holds for proper PAC learning as well (although it is important to note that it may not lead to a computationally efficient learning algorithm; which is normal, since unless $\mathsf{NP}=\mathsf{RP}$ it is known ...
5
The algorithm is named RPNI, not RNPI. Given that the language generating the inputs is regular and that enough examples are given (the characteristic set), the algorithm returns the canonical (i.e., minimal) DFA for that language. If the generating language is not regular, the automaton may grow unboundedly with the set of inputs. Since the algorithm ...
5
For distributions with finite support of size $d$, when the error metric is the $\ell_1$ distance, the analogue of VC dimension is exactly $d$. (In fact, it's pretty much the VC dimension -- since to estimate a distribution over $d$ in $\ell_1$ is equivalent to agnostically PAC-learning the concept class $2^{[d]}$). For discrete distributions with infinite ...
|
# Data-driven control system
(Redirected from Data-driven control systems)
Data-driven control systems are a broad family of control systems, in which the identification of the process model and/or the design of the controller are based entirely on experimental data collected from the plant [1].
In many control applications, writing a mathematical model of the plant is considered a hard task, requiring effort and time from the process and control engineers. This problem is overcome by data-driven methods, which fit a system model to the experimental data collected, choosing it from a specific class of models. The control engineer can then exploit this model to design a proper controller for the system. However, it is still difficult to find a simple yet reliable model for a physical system that includes only those dynamics of the system that are of interest for the control specifications. Direct data-driven methods instead allow one to tune a controller, belonging to a given class, without the need for an identified model of the system. In this way, one can also simply weight the process dynamics of interest inside the control cost function, and exclude the dynamics that are not of interest.
## Overview
The standard approach to control systems design is organized in two steps:
1. Model identification aims at estimating a nominal model of the system ${\displaystyle {\widehat {G}}=G\left(q;{\widehat {\theta }}_{N}\right)}$, where ${\displaystyle q}$ is the unit-delay operator (for discrete-time transfer functions representation) and ${\displaystyle {\widehat {\theta }}_{N}}$ is the vector of parameters of ${\displaystyle G}$ identified on a set of ${\displaystyle N}$ data. Then, validation consists in constructing the uncertainty set ${\displaystyle \Gamma }$ that contains the true system ${\displaystyle G_{0}}$ at a certain probability level.
2. Controller design aims at finding a controller ${\displaystyle C}$ achieving closed-loop stability and meeting the required performance with ${\displaystyle {\widehat {G}}}$.
Typical objectives of system identification are to have ${\displaystyle {\widehat {G}}}$ as close as possible to ${\displaystyle G_{0}}$, and to have ${\displaystyle \Gamma }$ as small as possible. However, from an identification for control perspective, what really matters is the performance achieved by the model-based controller on ${\displaystyle G_{0}}$ and not the intrinsic quality of the model.
One way to deal with uncertainty is to design a controller that has an acceptable performance with all models in ${\displaystyle \Gamma }$, including ${\displaystyle G_{0}}$. This is the main idea behind the robust control design procedure, which aims at building frequency-domain uncertainty descriptions of the process. However, being based on worst-case assumptions rather than on the idea of averaging out the noise, this approach typically leads to conservative uncertainty sets. Data-driven techniques, by contrast, deal with uncertainty by working on the experimental data directly, avoiding excessive conservatism.
In the following, the main classifications of data-driven control systems are presented.
### Indirect and direct methods
The fundamental distinction is between indirect and direct controller design methods. The former group of techniques retains the standard two-step approach: first a model is identified, then a controller is tuned based on that model. The main issue in doing so is that the controller is computed from the estimated model ${\displaystyle {\widehat {G}}}$ (according to the certainty equivalence principle), but in practice ${\displaystyle {\widehat {G}}\neq G_{0}}$. To overcome this problem, the idea behind the latter group of techniques is to map the experimental data directly onto the controller, without any model being identified in between.
### Iterative and noniterative methods
Another important distinction is between iterative and noniterative (or one-shot) methods. In the former group, repeated iterations are needed to estimate the controller parameters: the optimization problem is solved based on the results of the previous iteration, and the estimate is expected to become more and more accurate at each iteration. This approach also lends itself to on-line implementation (see below). In the latter group, the (optimal) controller parametrization is obtained from a single optimization problem. This is particularly important for those systems in which iterations or repetitions of the data-collection experiment are limited or even not allowed (for example, due to economic considerations). In such cases, one should select a design technique capable of delivering a controller from a single data set. This approach is often implemented off-line (see below).
### On-line and off-line methods
Since, in practical industrial applications, open-loop or closed-loop data are often available continuously, on-line data-driven techniques use those data to improve the quality of the identified model and/or the performance of the controller each time new information is collected from the plant. Off-line approaches, instead, work on batches of data, which may be collected only once, or multiple times at a regular (but rather long) interval of time.
## Iterative feedback tuning
The iterative feedback tuning (IFT) method was introduced in 1994 [2], starting from the observation that, in identification for control, each iteration is based on the (wrong) certainty equivalence principle.
IFT is a model-free technique for the direct iterative optimization of the parameters of a fixed-order controller; such parameters can be successively updated using information coming from standard (closed-loop) system operation.
Let ${\displaystyle y^{d}}$ be the desired output for the reference signal ${\displaystyle r}$; the error between the achieved and desired response is ${\displaystyle {\tilde {y}}(\rho )=y(\rho )-y^{d}}$. The control design objective can be formulated as the minimization of the objective function:
${\displaystyle J(\rho )={\frac {1}{2N}}\sum _{t=1}^{N}E\left[{\tilde {y}}(t,\rho )^{2}\right].}$
Given the objective function to minimize, the quasi-Newton method can be applied, i.e. a gradient-based minimization using a gradient search of the type:
${\displaystyle \rho _{i+1}=\rho _{i}-\gamma _{i}R_{i}^{-1}{\frac {d{\widehat {J}}}{d\rho }}(\rho _{i}).}$
The value ${\displaystyle \gamma _{i}}$ is the step size, ${\displaystyle R_{i}}$ is an appropriate positive definite matrix and ${\displaystyle {\frac {d{\widehat {J}}}{d\rho }}}$ is an approximation of the gradient; the true value of the gradient is given by the following:
${\displaystyle {\frac {dJ}{d\rho }}(\rho )={\frac {1}{N}}\sum _{t=1}^{N}\left[{\tilde {y}}(t,\rho ){\frac {\delta y}{\delta \rho }}(t,\rho )\right].}$
The value of ${\displaystyle {\frac {\delta y}{\delta \rho }}(t,\rho )}$ is obtained through the following three-step methodology:
1. Normal Experiment: Perform an experiment on the closed loop system with ${\displaystyle C(\rho )}$ as controller and ${\displaystyle r}$ as reference; collect N measurements of the output ${\displaystyle y(\rho )}$, denoted as ${\displaystyle y^{(1)}(\rho )}$.
2. Gradient Experiment: Perform an experiment on the closed loop system with ${\displaystyle C(\rho )}$ as controller and 0 as reference ${\displaystyle r}$; inject the signal ${\displaystyle r-y^{(1)}(\rho )}$ so that it is added to the control variable output by ${\displaystyle C(\rho )}$ before it enters the plant. Collect the output, denoted as ${\displaystyle y^{(2)}(\rho )}$.
3. Take the following as gradient approximation: ${\displaystyle {\frac {\delta {\widehat {y}}}{\delta \rho }}(\rho )={\frac {\delta C}{\delta \rho }}(\rho )y^{(2)}(\rho )}$.
A crucial factor for the convergence speed of the algorithm is the choice of ${\displaystyle R_{i}}$; when ${\displaystyle {\tilde {y}}}$ is small, a good choice is the approximation given by the Gauss–Newton direction:
${\displaystyle R_{i}={\frac {1}{N}}\sum _{t=1}^{N}{\frac {\delta {\widehat {y}}}{\delta \rho }}(\rho _{i}){\frac {\delta {\widehat {y}}^{T}}{\delta \rho }}(\rho _{i}).}$
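To make the three-step procedure concrete, the following C++ sketch runs one IFT-style tuning loop for a scalar proportional controller ${\displaystyle u(t)=\rho e(t)}$. The first-order plant, the fixed step size (standing in for ${\displaystyle \gamma _{i}R_{i}^{-1}}$), and the number of iterations are illustrative assumptions, not part of the method itself:

```cpp
// Sketch of iterative feedback tuning for u(t) = rho*e(t) on an assumed
// first-order plant y(t+1) = a*y(t) + b*u(t). Not a production implementation.
#include <cstddef>
#include <cstdio>
#include <vector>

// One closed-loop experiment: returns the output for a given reference
// sequence; "extra" is a signal added at the controller output (used only
// by the gradient experiment).
std::vector<double> run(const std::vector<double>& ref,
                        const std::vector<double>& extra, double rho) {
    const double a = 0.9, b = 0.5;  // assumed plant parameters
    std::vector<double> y(ref.size(), 0.0);
    double yt = 0.0;
    for (std::size_t t = 0; t + 1 < ref.size(); ++t) {
        const double u = rho * (ref[t] - yt) + extra[t];
        yt = a * yt + b * u;
        y[t + 1] = yt;
    }
    return y;
}

int main() {
    const std::size_t N = 200;
    std::vector<double> r(N, 1.0), zero(N, 0.0), yd(N, 1.0);  // step ref, y^d = 1
    double rho = 0.2;

    for (int it = 0; it < 20; ++it) {
        // 1) normal experiment
        const std::vector<double> y1 = run(r, zero, rho);
        // 2) gradient experiment: reference 0, inject r - y1 at the plant input
        std::vector<double> inj(N);
        for (std::size_t t = 0; t < N; ++t) inj[t] = r[t] - y1[t];
        const std::vector<double> y2 = run(zero, inj, rho);
        // 3) gradient approximation: dC/drho = 1 here, so dy/drho ~ y2
        double dJ = 0.0;
        for (std::size_t t = 0; t < N; ++t) dJ += (y1[t] - yd[t]) * y2[t];
        dJ /= static_cast<double>(N);
        rho -= 0.5 * dJ;  // gamma_i * R_i^{-1} collapsed into a fixed 0.5
    }
    std::printf("tuned rho = %.4f\n", rho);
    return 0;
}
```

Note how step 2 feeds ${\displaystyle r-y^{(1)}(\rho )}$ back into the loop at the controller output, exactly as in the methodology above, so that the gradient is estimated from closed-loop data alone and no plant model is ever identified.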
## Noniterative correlation-based tuning
Noniterative correlation-based tuning (nCbT) is a noniterative method for data-driven tuning of a fixed-structure controller[3]. It provides a one-shot method to directly synthesize a controller based on a single dataset.
Suppose that ${\displaystyle G}$ denotes an unknown LTI stable SISO plant, ${\displaystyle M}$ a user-defined reference model and ${\displaystyle F}$ a user-defined weighting function. An LTI fixed-order controller is indicated as ${\displaystyle K(\rho )=\beta ^{T}\rho }$, where ${\displaystyle \rho \in \mathbb {R} ^{n}}$, and ${\displaystyle \beta }$ is a vector of LTI basis functions. Finally, ${\displaystyle K^{*}}$ is an ideal LTI controller of any structure, guaranteeing a closed-loop function ${\displaystyle M}$ when applied to ${\displaystyle G}$.
The goal is to minimize the following objective function:
${\displaystyle J(\rho )=\left\|F{\bigg (}{\frac {K^{*}G-K(\rho )G}{(1+K^{*}G)^{2}}}{\bigg )}\right\|_{2}^{2}.}$
${\displaystyle J(\rho )}$ is a convex approximation of the objective function obtained from a model reference problem, supposing that ${\displaystyle {\frac {1}{(1+K(\rho )G)}}\approx {\frac {1}{(1+K^{*}G)}}}$.
When ${\displaystyle G}$ is stable and minimum-phase, the approximated model reference problem is equivalent to the minimization of the norm of ${\displaystyle \varepsilon (t)}$ in the scheme shown in the figure.
The input signal ${\displaystyle r(t)}$ is assumed to be persistently exciting and ${\displaystyle v(t)}$ to be generated by a stable data-generation mechanism. The two signals are thus uncorrelated in an open-loop experiment; hence, the ideal error ${\displaystyle \varepsilon (t,\rho ^{*})}$ is uncorrelated with ${\displaystyle r(t)}$. The control objective thus consists in finding ${\displaystyle \rho }$ such that ${\displaystyle r(t)}$ and ${\displaystyle \varepsilon (t,\rho ^{*})}$ are uncorrelated.
The vector of instrumental variables ${\displaystyle \zeta (t)}$ is defined as:
${\displaystyle \zeta (t)=[r_{W}(t+\ell _{1}),r_{W}(t+\ell _{1}-1),\ldots ,r_{W}(t),\ldots ,r_{W}(t-\ell _{1})]^{T}}$
where ${\displaystyle \ell _{1}}$ is large enough and ${\displaystyle r_{W}(t)=Wr(t)}$, where ${\displaystyle W}$ is an appropriate filter.
The correlation function is:
${\displaystyle f_{N,\ell _{1}}(\rho )={\frac {1}{N}}\sum _{t=1}^{N}\zeta (t)\varepsilon (t,\rho )}$
and the optimization problem becomes:
${\displaystyle {\widehat {\rho }}={\underset {\rho \in D_{k}}{\operatorname {arg\,min} }}J_{N,\ell _{1}}(\rho )={\underset {\rho \in D_{k}}{\operatorname {arg\,min} }}f_{N,\ell _{1}}^{T}f_{N,\ell _{1}}.}$
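As an illustration, evaluating the correlation function ${\displaystyle f_{N,\ell _{1}}(\rho )}$ from recorded signals might look as follows in C++; treating out-of-range samples of ${\displaystyle r_{W}}$ as zero is a boundary convention assumed by this sketch:

```cpp
// Sketch: f_{N,l1} = (1/N) * sum_t zeta(t) * eps(t), with
// zeta(t) = [r_W(t+l1), ..., r_W(t), ..., r_W(t-l1)]^T.
#include <cstdio>
#include <vector>

std::vector<double> correlation(const std::vector<double>& rW,
                                const std::vector<double>& eps, int l1) {
    const int N = static_cast<int>(eps.size());
    std::vector<double> f(2 * l1 + 1, 0.0);
    for (int k = -l1; k <= l1; ++k) {
        double acc = 0.0;
        for (int t = 0; t < N; ++t) {
            const int idx = t + k;
            if (idx >= 0 && idx < static_cast<int>(rW.size()))
                acc += rW[idx] * eps[t];  // component r_W(t+k) of zeta(t)
        }
        f[k + l1] = acc / N;
    }
    return f;
}

int main() {
    // toy signals, just to exercise the function
    const std::vector<double> rW  = {1, -1, 1, -1, 1, -1, 1, -1};
    const std::vector<double> eps = {0.1, -0.2, 0.05, 0.0, 0.1, -0.1, 0.2, -0.05};
    for (double v : correlation(rW, eps, 2)) std::printf("%.4f ", v);
    std::printf("\n");  // J_{N,l1} would then be the squared norm of this vector
    return 0;
}
```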
Denoting with ${\displaystyle \phi _{r}(\omega )}$ the spectrum of ${\displaystyle r(t)}$, it can be demonstrated that, under some assumptions, if ${\displaystyle W}$ is selected as:
${\displaystyle W(e^{-j\omega })={\frac {F(e^{-j\omega })(1-M(e^{-j\omega }))}{\phi _{r}(\omega )}}}$
then, the following holds:
${\displaystyle \lim _{N,\ell _{1}\to \infty ,\ell _{1}/N\to 0}{\widehat {\rho }}=\rho ^{*}.}$
### Stability constraint
There is no guarantee that the controller ${\displaystyle K}$ that minimizes ${\displaystyle J_{N,\ell _{1}}}$ is stable. Instability may occur in the following cases:
• If ${\displaystyle G}$ is non-minimum phase, ${\displaystyle K^{*}}$ may lead to cancellations in the right-half complex plane.
• If ${\displaystyle K^{*}}$ (even if stabilizing) is not achievable, ${\displaystyle K(\rho )}$ may not be stabilizing.
• Due to measurement noise, even if ${\displaystyle K^{*}=K(\rho )}$ is stabilizing, data-estimated ${\displaystyle {\widehat {K}}(\rho )}$ may not be so.
Consider a stabilizing controller ${\displaystyle K_{s}}$ and the closed loop transfer function ${\displaystyle M_{s}={\frac {K_{s}G}{1+K_{s}G}}}$. Define:
${\displaystyle \Delta (\rho ):=M_{s}-K(\rho )G(1-M_{s})}$
${\displaystyle \delta (\rho ):=\left\|\Delta (\rho )\right\|_{\infty }.}$
Theorem
The controller ${\displaystyle K(\rho )}$ stabilizes the plant ${\displaystyle G}$ if
1. ${\displaystyle \Delta (\rho )}$ is stable
2. ${\displaystyle \exists \delta _{N}\in (0,1)}$ s.t. ${\displaystyle \delta (\rho )\leq \delta _{N}.}$
Condition 1. is enforced when:
• ${\displaystyle K(\rho )}$ is stable
• ${\displaystyle K(\rho )}$ contains an integrator (it is canceled).
The model reference design with stability constraint becomes:
${\displaystyle \rho _{s}={\underset {\rho \in D_{k}}{\operatorname {arg\,min} }}J(\rho )}$
${\displaystyle {\text{s.t. }}\delta (\rho )\leq \delta _{N}.}$
A convex data-driven estimation of ${\displaystyle \delta (\rho )}$ can be obtained through the discrete Fourier transform.
Define the following:
{\displaystyle {\begin{aligned}&{\widehat {R}}_{r}(\tau )={\frac {1}{N}}\sum _{t=1}^{N}r(t-\tau )r(t){\text{ for }}\tau =-\ell _{2},\ldots ,\ell _{2}\\[4pt]&{\widehat {R}}_{r\varepsilon }(\tau )={\frac {1}{N}}\sum _{t=1}^{N}r(t-\tau )\varepsilon (t,\rho ){\text{ for }}\tau =-\ell _{2},\ldots ,\ell _{2}.\end{aligned}}}
For stable minimum phase plants, the following convex data-driven optimization problem is obtained:
{\displaystyle {\begin{aligned}{\widehat {\rho }}&={\underset {\rho \in D_{k}}{\operatorname {arg\,min} }}J_{N,\ell _{1}}(\rho )\\[3pt]&{\text{s.t.}}\\[3pt]&{\bigg |}\sum _{\tau =-\ell _{2}}^{\ell _{2}}{\widehat {R}}_{r\varepsilon }(\tau ,\rho )e^{-j\tau \omega _{k}}{\bigg |}\leq \delta _{N}{\bigg |}\sum _{\tau =-\ell _{2}}^{\ell _{2}}{\widehat {R}}_{r}(\tau ,\rho )e^{-j\tau \omega _{k}}{\bigg |}\\[4pt]\omega _{k}&={\frac {2\pi k}{2\ell _{2}+1}},\qquad k=0,\ldots ,\ell _{2}+1.\end{aligned}}}
## Virtual reference feedback tuning
Virtual Reference Feedback Tuning (VRFT) is a noniterative method for data-driven tuning of a fixed-structure controller. It provides a one-shot method to directly synthesize a controller based on a single dataset.
VRFT was first proposed [4] as ${\displaystyle VRD^{2}}$ and then refined and extended for LTI [5] and LPV [6] systems.
The main idea is to define a desired closed loop model ${\displaystyle M}$ and to use its inverse dynamics to obtain a virtual reference ${\displaystyle r_{v}(t)}$ from the measured output signal ${\displaystyle y(t)}$.
The virtual signals are ${\displaystyle r_{v}(t)=M^{-1}y(t)}$ and ${\displaystyle e_{v}(t)=r_{v}(t)-y(t).}$
The optimal controller is obtained from noiseless data by solving the following optimization problem:
${\displaystyle {\widehat {\rho }}_{\infty }={\underset {\rho }{\operatorname {arg\,min} }}\lim _{N\to \infty }J_{vr}^{N}(\rho )}$
where the optimization function is given as follows:
${\displaystyle J_{vr}^{N}(\rho )={\frac {1}{N}}\sum _{t=1}^{N}\left(u(t)-K(\rho )e_{v}(t)\right)^{2}.}$
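A minimal C++ sketch of the VRFT recipe for the simplest case, a proportional controller ${\displaystyle K(\rho )=\rho }$ with a first-order reference model ${\displaystyle y(t+1)=(1-m)y(t)+mr(t)}$, is shown below; the plant used to generate the data and the value of ${\displaystyle m}$ are illustrative assumptions:

```cpp
// VRFT sketch: build the virtual reference r_v = M^{-1} y and the virtual
// error e_v = r_v - y from one open-loop dataset, then fit rho by least
// squares over J_vr = (1/N) * sum (u - rho*e_v)^2.
#include <cstdio>
#include <vector>

int main() {
    const int N = 200;
    const double a = 0.9, b = 0.5;  // assumed plant y(t+1) = a*y(t) + b*u(t)
    const double m = 0.2;           // reference-model pole at 1 - m (assumed)

    // Open-loop experiment: excite the plant and record u and y.
    std::vector<double> u(N), y(N + 1, 0.0);
    for (int t = 0; t < N; ++t) {
        u[t] = (t / 20 % 2 == 0) ? 1.0 : -1.0;  // square-wave excitation
        y[t + 1] = a * y[t] + b * u[t];
    }

    // Invert the reference model: r_v(t) = (y(t+1) - (1-m)*y(t)) / m,
    // then solve the scalar least-squares problem for rho in closed form.
    double num = 0.0, den = 0.0;
    for (int t = 0; t < N; ++t) {
        const double rv = (y[t + 1] - (1.0 - m) * y[t]) / m;
        const double ev = rv - y[t];
        num += u[t] * ev;
        den += ev * ev;
    }
    std::printf("fitted rho = %.4f\n", num / den);
    return 0;
}
```

Because ${\displaystyle K(\rho )=\beta ^{T}\rho }$ is linear in the parameters, the same computation generalizes to a vector ${\displaystyle \rho }$ as an ordinary linear least-squares problem.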
## References
1. ^ Bazanella, A.S., Campestrini, L., Eckhard, D. (2012). Data-driven controller design: the ${\displaystyle H_{2}}$ approach. Springer, ISBN 978-94-007-2300-9, 208 pages.
2. ^ Hjalmarsson, H., Gevers, M., Gunnarsson, S., & Lequin, O. (1998). Iterative feedback tuning: theory and applications. IEEE control systems, 18(4), 26–41.
3. ^ van Heusden, K., Karimi, A. and Bonvin, D. (2011), Data-driven model reference control with asymptotically guaranteed stability. Int. J. Adapt. Control Signal Process., 25: 331–351. doi:10.1002/acs.1212
4. ^ Guardabassi, Guido O., and Sergio M. Savaresi. "Approximate feedback linearization of discrete-time non-linear systems using virtual input direct design." Systems & Control Letters 32.2 (1997): 63–74.
5. ^ Campi, Marco C., Andrea Lecchini, and Sergio M. Savaresi. "Virtual reference feedback tuning: a direct method for the design of feedback controllers." Automatica 38.8 (2002): 1337–1346.
6. ^ Formentin, S., Piga, D., Tóth, R., & Savaresi, S. M. (2016). Direct learning of LPV controllers from data. Automatica, 65, 98–110.
|
## Hydrogen energy levels: an algebraic derivation
Posted by Diego Assencio on 2016.10.15 under Physics (Quantum mechanics)
In this post, we will derive the energy levels of the Hydrogen atom using only operator algebra, i.e., without dealing with wave functions. Our work will mainly consist in defining special operators which will assist us in determining the eigenvalues of the Hamiltonian $H$ directly. Our model of the Hydrogen atom is an electron of mass $m$ under the influence of a Coulomb potential; the associated Hamiltonian should be known by any physics student who has taken a quantum mechanics course: $$\displaystyle H = \frac{{\bf p}^2}{2m} - \frac{e^2}{r} \label{post_d379f7e34709563115ebd4c41241ed5e_hamiltonian}$$ The first term on the right-hand side above represents the kinetic energy of the electron and the second term represents its electrostatic potential energy in the presence of a proton fixed at the origin. In the equation above, ${\bf p}$ and ${\bf x}$ are the momentum and position operators respectively, and $r = \|{\bf x}\|$.
Most of our derivation will rely on the properties of the self-adjoint operator ${\bf A}$ defined below (called the Laplace-Runge-Lenz vector operator): $$\displaystyle{\bf A} = \frac{1}{2}({\bf L}\times{\bf p} - {\bf p}\times{\bf L}) + me^2\frac{\bf x}{r}$$ where ${\bf L} = {\bf x}\times{\bf p}$ is the angular momentum operator. The self-adjoint property of ${\bf A}$ comes from the fact that ${\bf x}$ and $r$ are self-adjoint and also because $({\bf L}\times{\bf p})^{\dagger} = - {\bf p}\times{\bf L}$. In what follows, we will extensively use the Einstein summation notation, i.e., a product $a_ib_i$ represents the sum $\sum_{i=1}^3 a_ib_i$ for two vector quantities ${\bf a} = (a_1, a_2, a_3)$ and ${\bf b} = (b_1, b_2, b_3)$ respectively.
### Basic commutators
First, let us list some well-known commutators and compute some others which we will need later: $$\begin{eqnarray} [x_i, x_j] &=& 0 \\[5pt] [p_i, p_j] &=& 0 \\[5pt] [x_i, p_j] &=& i\hbar\delta_{ij} \\[5pt] [x_i, L_j] &=& [x_i, \epsilon_{jkl}x_k p_l] = \epsilon_{jkl} x_k [x_i, p_l] = \epsilon_{jkl} x_k i\hbar\delta_{il} = i\hbar \epsilon_{jki} x_k = i\hbar \epsilon_{ijk} x_k \label{post_d379f7e34709563115ebd4c41241ed5e_comm_xi_Lj} \\[5pt] [p_i, L_j] &=& [p_i, \epsilon_{jkl}x_k p_l] = \epsilon_{jkl} [p_i, x_k] p_l = \epsilon_{jkl} (-i\hbar)\delta_{ik} p_l = -i\hbar\epsilon_{jil} p_l = i\hbar\epsilon_{ijk} p_k \label{post_d379f7e34709563115ebd4c41241ed5e_comm_pi_Lj} \\[5pt] \end{eqnarray}$$ where above we used multiple properties of the Levi-Civita symbol $\epsilon_{ijk}$. Using the following identity: $$[A, BC] = ABC - BCA + BAC - BAC = [A,B]C + B[A,C] \label{post_d379f7e34709563115ebd4c41241ed5e_comm_A_BC}$$ we have that: $$\begin{eqnarray} [L_i, L_j] &=& [\epsilon_{imn}x_m p_n, \epsilon_{jkl}x_kp_l] \nonumber\\[5pt] &=& \epsilon_{imn}\epsilon_{jkl} [x_m p_n, x_k p_l] \nonumber\\[5pt] &=& \epsilon_{imn}\epsilon_{jkl} \left( [x_m p_n, x_k ] p_l + x_k [x_m p_n, p_l] \right) \nonumber\\[5pt] &=& \epsilon_{imn}\epsilon_{jkl} \left(x_m [p_n, x_k] p_l + x_k [x_m, p_l] p_n\right) \nonumber\\[5pt] &=& \epsilon_{imn}\epsilon_{jkl} \left( x_m (-i\hbar)\delta_{kn} p_l + x_k i\hbar\delta_{ml} p_n\right) \nonumber\\[5pt] &=& i\hbar \left(-\epsilon_{imk}\epsilon_{jkl} x_m p_l + \epsilon_{iln}\epsilon_{jkl} x_k p_n\right) \nonumber\\[5pt] &=& i\hbar \left(\epsilon_{imk}\epsilon_{jlk} x_m p_l - \epsilon_{inl}\epsilon_{jkl} x_k p_n\right) \nonumber\\[5pt] &=& i\hbar \big((\delta_{ij}\delta_{ml} - \delta_{il}\delta_{mj}) x_m p_l - (\delta_{ij}\delta_{nk} - \delta_{ik}\delta_{nj}) x_k p_n\big) \nonumber\\[5pt] &=& i\hbar \left(\delta_{ij}x_m p_m - \delta_{il}\delta_{mj} x_m p_l - \delta_{ij}x_n p_n + \delta_{ik}\delta_{nj} x_k p_n\right) \nonumber\\[5pt] &=& i\hbar \left( - \delta_{il}\delta_{mj} x_m p_l + \delta_{ik}\delta_{nj} x_k p_n\right) \nonumber\\[5pt] &=& i\hbar \left( - \delta_{il}\delta_{jm} x_m p_l + \delta_{im}\delta_{jl} x_m p_l\right) \nonumber\\[5pt] &=& i\hbar \left( \delta_{im}\delta_{jl} - \delta_{il}\delta_{jm} \right)x_m p_l \nonumber\\[5pt] &=& i\hbar \epsilon_{ijk}\epsilon_{mlk} x_m p_l \nonumber\\[5pt] &=& i\hbar \epsilon_{ijk}\epsilon_{kml} x_m p_l \nonumber\\[5pt] &=& i\hbar \epsilon_{ijk} L_k \label{post_d379f7e34709563115ebd4c41241ed5e_comm_Li_Lj} \end{eqnarray}$$ Additionally, since $r = \sqrt{x_ix_i}$: $$\displaystyle [p_i, f(r)] = -i\hbar \frac{\partial f(r)}{\partial x_i} = -i\hbar \frac{d f(r)}{d r}\frac{\partial r}{\partial x_i} = -i\hbar \frac{d f(r)}{d r}\frac{x_i}{r}$$ we have that: $$\displaystyle \left[p_i, \frac{1}{r}\right] = -i\hbar \frac{d}{d r}\left(\frac{1}{r}\right)\frac{x_i}{r} = -i\hbar \left(-\frac{1}{r^2}\right)\frac{x_i}{r} = i\hbar \frac{x_i}{r^3} \label{post_d379f7e34709563115ebd4c41241ed5e_eq_p1r}$$ Using equation \eqref{post_d379f7e34709563115ebd4c41241ed5e_eq_p1r} and the fact that $[x_i, f(r)] = 0$, we can then show that: $$\left[L_i, \frac{1}{r}\right] = \epsilon_{ijk}\left[x_j p_k, \frac{1}{r}\right] = \epsilon_{ijk}x_j \left[p_k, \frac{1}{r}\right] = \epsilon_{ijk} x_j i\hbar \frac{x_k}{r^3} = \frac{i\hbar}{r^3} \epsilon_{ijk} x_j x_k = 0 \label{post_d379f7e34709563115ebd4c41241ed5e_comm_Li_1r}$$ since $\epsilon_{ijk}$ is an anti-symmetric tensor. 
Equation \eqref{post_d379f7e34709563115ebd4c41241ed5e_comm_Li_1r} is directly related to the fact that $r$ is invariant under rotations.
### Computation of $[L_i, A_j]$
Using the commutators we computed above, we can compute $[L_i, A_j]$ directly: $$\begin{eqnarray} \displaystyle [L_i, A_j] &=& \left[L_i, \frac{1}{2}\epsilon_{jkl}(L_k p_l - p_k L_l) + me^2 \frac{x_j}{r}\right] \nonumber\\[5pt] &=& \frac{1}{2}\epsilon_{jkl}\left([L_i, L_k p_l] - [L_i,p_k L_l]\right) + me^2 \left[L_i,\frac{x_j}{r}\right] \label{post_d379f7e34709563115ebd4c41241ed5e_comm_Li_Aj_tmp} \end{eqnarray}$$ We will compute each of the terms above separately. From equations \eqref{post_d379f7e34709563115ebd4c41241ed5e_comm_pi_Lj}, \eqref{post_d379f7e34709563115ebd4c41241ed5e_comm_A_BC} and \eqref{post_d379f7e34709563115ebd4c41241ed5e_comm_Li_Lj}, we have that: $$\begin{eqnarray} \epsilon_{jkl}[L_i, L_k p_l] &=& \epsilon_{jkl}([L_i, L_k] p_l + L_k [L_i, p_l]) \nonumber\\[5pt] &=& \epsilon_{jkl}i\hbar\epsilon_{ikm} L_m p_l + \epsilon_{jkl}L_k(-i\hbar)\epsilon_{lim} p_m \nonumber\\[5pt] &=& i\hbar\epsilon_{jlk}\epsilon_{imk} L_m p_l - i\hbar \epsilon_{jkl}\epsilon_{iml} L_k p_m \nonumber\\[5pt] &=& i\hbar\big((\delta_{ji}\delta_{lm} - \delta_{jm}\delta_{li}) L_m p_l - (\delta_{ji}\delta_{km} - \delta_{jm}\delta_{ki}) L_k p_m \big)\nonumber\\[5pt] &=& i\hbar\left(\delta_{ji} L_m p_m - \delta_{jm}\delta_{li} L_m p_l - \delta_{ji}L_m p_m + \delta_{jm}\delta_{ki} L_k p_m \right)\nonumber\\[5pt] &=& i\hbar\left(- \delta_{jm}\delta_{li} L_m p_l + \delta_{jm}\delta_{ki} L_k p_m \right)\nonumber\\[5pt] &=& i\hbar\left(- \delta_{jm}\delta_{li} L_m p_l + \delta_{jl}\delta_{mi} L_m p_l \right)\nonumber\\[5pt] &=& i\hbar\left(\delta_{im}\delta_{jl} - \delta_{il}\delta_{jm} \right) L_m p_l\nonumber\\[5pt] &=& i\hbar \epsilon_{ijk}\epsilon_{mlk} L_m p_l \nonumber\\[5pt] &=& i\hbar \epsilon_{ijk}\epsilon_{kml} L_m p_l \nonumber\\[5pt] &=& i\hbar \epsilon_{ijk} ({\bf L}\times{\bf p})_k \label{post_d379f7e34709563115ebd4c41241ed5e_comm_Li_Lkpl} \end{eqnarray}$$ $$\begin{eqnarray} \epsilon_{jkl}[L_i, p_k L_l] &=& \epsilon_{jkl}([L_i, p_k] L_l + p_k [L_i, L_l]) \nonumber\\[5pt] &=& \epsilon_{jkl}(-i\hbar)\epsilon_{kim} p_m L_l + \epsilon_{jkl} p_k i\hbar\epsilon_{ilm} L_m \nonumber\\[5pt] &=& i\hbar\epsilon_{jlk}\epsilon_{imk} p_m L_l - i\hbar \epsilon_{jkl}\epsilon_{iml} p_k L_m \nonumber\\[5pt] &=& i\hbar\big((\delta_{ji}\delta_{lm} - \delta_{jm}\delta_{li}) p_m L_l - (\delta_{ji}\delta_{km} - \delta_{jm}\delta_{ki}) p_k L_m \big)\nonumber\\[5pt] &=& i\hbar\left(\delta_{ji} p_m L_m - \delta_{jm}\delta_{li} p_m L_l - \delta_{ji} p_m L_m + \delta_{jm}\delta_{ki} p_k L_m \right)\nonumber\\[5pt] &=& i\hbar\left(- \delta_{jm}\delta_{li} p_m L_l + \delta_{jm}\delta_{ki} p_k L_m \right)\nonumber\\[5pt] &=& i\hbar\left(- \delta_{jm}\delta_{li} p_m L_l + \delta_{jl}\delta_{mi} p_m L_l \right)\nonumber\\[5pt] &=& i\hbar\left(\delta_{im}\delta_{jl} - \delta_{il}\delta_{jm} \right) p_m L_l\nonumber\\[5pt] &=& i\hbar \epsilon_{ijk}\epsilon_{mlk} p_m L_l \nonumber\\[5pt] &=& i\hbar \epsilon_{ijk}\epsilon_{kml} p_m L_l \nonumber\\[5pt] &=& i\hbar \epsilon_{ijk} ({\bf p}\times{\bf L})_k \label{post_d379f7e34709563115ebd4c41241ed5e_comm_Li_pkLl} \end{eqnarray}$$ Finally, using equations \eqref{post_d379f7e34709563115ebd4c41241ed5e_comm_xi_Lj} and \eqref{post_d379f7e34709563115ebd4c41241ed5e_comm_Li_1r}, we can show that: $$\displaystyle \left[L_i,\frac{x_j}{r}\right] = \frac{1}{r} [L_i, x_j] = \frac{1}{r} (-i\hbar) \epsilon_{jik} x_k = i\hbar \epsilon_{ijk} \frac{x_k}{r} = i\hbar \epsilon_{ijk}\left(\frac{{\bf x}}{r}\right)_k \label{post_d379f7e34709563115ebd4c41241ed5e_comm_Li_xjr}$$ Using equations 
\eqref{post_d379f7e34709563115ebd4c41241ed5e_comm_Li_Lkpl}, \eqref{post_d379f7e34709563115ebd4c41241ed5e_comm_Li_pkLl} and \eqref{post_d379f7e34709563115ebd4c41241ed5e_comm_Li_xjr} on equation \eqref{post_d379f7e34709563115ebd4c41241ed5e_comm_Li_Aj_tmp} then yields: $$\begin{eqnarray} \displaystyle [L_i, A_j] &=& \frac{1}{2}\epsilon_{jkl}\left([L_i, L_k p_l] - [L_i,p_k L_l]\right) + me^2 \left[L_i,\frac{x_j}{r}\right] \nonumber\\[5pt] &=& \frac{1}{2}\big(i\hbar \epsilon_{ijk} ({\bf L}\times{\bf p})_k - i\hbar \epsilon_{ijk} ({\bf p}\times{\bf L})_k\big) + me^2 i\hbar \epsilon_{ijk}\left(\frac{{\bf x}}{r}\right)_k \nonumber\\[5pt] &=& i\hbar \epsilon_{ijk} \left( \frac{1}{2}\big( ({\bf L}\times{\bf p})_k - ({\bf p}\times{\bf L})_k\big) + me^2 \left(\frac{{\bf x}}{r}\right)_k \right) \label{post_d379f7e34709563115ebd4c41241ed5e_comm_Li_Aj_tmp2} \end{eqnarray}$$ Therefore: $$\boxed{ [L_i, A_j] = i\hbar \epsilon_{ijk} A_k } \label{post_d379f7e34709563115ebd4c41241ed5e_comm_Li_Aj}$$
### Computation of $[A_i, A_j]$
The computation of $[A_i, A_j]$ is very long and tedious. Since it does not introduce any new techniques beyond what has been presented above, we will merely state the result of this commutator, but the reader is strongly encouraged to try to prove the equation below as its derivation is an excellent way to deeply understand everything we have done so far (a lot of helpful material for that can be found here): $$\boxed{ [A_i, A_j] = -i\hbar 2m \epsilon_{ijk} L_k H }$$ where $H$ is the Hamiltonian given in equation \eqref{post_d379f7e34709563115ebd4c41241ed5e_hamiltonian}.
### The ${\bf K}^+$ and ${\bf K}^-$ operators
Suppose we have an eigenstate of $H$ representing a bound state with energy eigenvalue $E = -\kappa^2 / 2m$ for some $\kappa \gt 0$ (the assumption $E \lt 0$ is correct for a bound state since $E \geq 0$ means the electron has enough kinetic energy to escape the proton's electric potential). Let then ${\bf K}^{\pm}$ be defined as below: $$\displaystyle {\bf K}^{\pm} = \frac{1}{2}{\bf L} \pm \frac{1}{2\kappa}{\bf A}$$ We are interested in computing $[K^+_i, K^-_j]$ and $[K^\pm_i, K^\pm_j]$: $$\begin{eqnarray} \displaystyle [K^+_i, K^-_j] &=& \frac{1}{4}[L_i, L_j] - \frac{1}{4\kappa}[L_i,A_j] + \frac{1}{4\kappa}[A_i,L_j] - \frac{1}{4\kappa^2}[A_i,A_j] \nonumber\\[5pt] &=& \frac{i\hbar}{4}\left(\epsilon_{ijk}L_k - \frac{1}{\kappa}\epsilon_{ijk}A_k - \frac{1}{\kappa} \epsilon_{jik}A_k - \frac{1}{\kappa^2} (-2m) \epsilon_{ijk}L_k H\right) \nonumber\\[5pt] &=& \frac{i\hbar}{4}\left(\epsilon_{ijk}L_k - \frac{1}{\kappa}\epsilon_{ijk}A_k + \frac{1}{\kappa} \epsilon_{ijk}A_k - \frac{1}{\kappa^2} (-2m) \epsilon_{ijk}L_k H\right) \nonumber\\[5pt] &=& \frac{i\hbar}{4} \left(\epsilon_{ijk}L_k - \frac{1}{\kappa^2} (-2m) \epsilon_{ijk}L_k H\right) \end{eqnarray}$$ Applying this result to the energy eigenstate mentioned above, we have $H \rightarrow E = -\kappa^2/2m$, so we get: $$\begin{eqnarray} \displaystyle [K^+_i, K^-_j] &=& \frac{i\hbar}{4}\left(\epsilon_{ijk}L_k - \frac{1}{\kappa^2} (-2m) \epsilon_{ijk}L_k \frac{(-\kappa^2)}{2m}\right) \nonumber\\[5pt] &=& \frac{i\hbar}{4}\left(\epsilon_{ijk}L_k - \epsilon_{ijk}L_k\right) \end{eqnarray}$$ Therefore: $$\boxed{ \displaystyle [K^{+}_i, K^{-}_j] = 0 }$$ Also: $$\begin{eqnarray} \displaystyle [K^\pm_i, K^\pm_j] &=& \frac{1}{4} [L_i, L_j] \pm \frac{1}{4\kappa}[L_i, A_j] \pm \frac{1}{4\kappa}[A_i, L_j] + \frac{1}{4\kappa^2}[A_i,A_j] \nonumber\\[5pt] &=& \frac{i\hbar}{4}\left(\epsilon_{ijk}L_k \pm \frac{1}{\kappa}\epsilon_{ijk}A_k \mp \frac{1}{\kappa} \epsilon_{jik}A_k + \frac{1}{\kappa^2} (-2m) \epsilon_{ijk}L_k H\right) \nonumber\\[5pt] &=& \frac{i\hbar}{4}\left(\epsilon_{ijk}L_k \pm \frac{1}{\kappa}\epsilon_{ijk}A_k \pm \frac{1}{\kappa} \epsilon_{ijk}A_k + \frac{1}{\kappa^2} (-2m) \epsilon_{ijk}L_k H\right) \nonumber\\[5pt] &=& \frac{i\hbar}{4} \epsilon_{ijk} \left( L_k \pm \frac{2}{\kappa} A_k + \frac{1}{\kappa^2}(-2m)L_k \frac{(-\kappa^2)}{2m}\right) \nonumber\\[5pt] &=& \frac{i\hbar}{4} \epsilon_{ijk} \left( 2L_k \pm \frac{2}{\kappa}A_k\right) \nonumber\\[5pt] &=& i\hbar \epsilon_{ijk} \left( \frac{1}{2} L_k \pm \frac{1}{2\kappa}A_k\right) \end{eqnarray}$$ Therefore: $$\boxed{ [K^\pm_i, K^\pm_j] = i\hbar \epsilon_{ijk} K^{\pm}_k } \label{post_d379f7e34709563115ebd4c41241ed5e_comm_Ki_Kj}$$
### Computation of ${\bf A}^2$
$$\begin{eqnarray} {\bf A}^2 &=& {\bf A}\cdot{\bf A} \nonumber\\[5pt] &=& \frac{1}{4}\big(({\bf L}\times{\bf p} )\cdot( {\bf L}\times{\bf p}) - ({\bf L}\times{\bf p})\cdot( {\bf p}\times{\bf L})\big) \nonumber\\[5pt] &+& \frac{1}{4}\big(-({\bf p}\times{\bf L})\cdot({\bf L}\times{\bf p}) + ({\bf p}\times{\bf L})\cdot( {\bf p}\times{\bf L})\big) \nonumber\\[5pt] &+& \frac{me^2}{2}({\bf L}\times{\bf p} - {\bf p}\times{\bf L})\cdot \frac{\bf x}{r} + \frac{me^2}{2}\frac{\bf x}{r} \cdot ({\bf L}\times{\bf p} - {\bf p}\times{\bf L}) \nonumber\\[5pt] &+& m^2e^4\frac{\bf x}{r}\cdot \frac{\bf x}{r} \label{post_d379f7e34709563115ebd4c41241ed5e_AA_tmp} \end{eqnarray}$$ Each of the terms on the equation above will be computed separately: $$\begin{eqnarray} ({\bf L}\times{\bf p} )\cdot( {\bf L}\times{\bf p}) &=& ({\bf L}\times{\bf p} )_i( {\bf L}\times{\bf p})_i \nonumber\\[5pt] &=&(\epsilon_{ijk}L_j p_k)(\epsilon_{ilm}L_l p_m) \nonumber\\[5pt] &=&\epsilon_{jki}\epsilon_{lmi}L_j p_k L_l p_m \nonumber\\[5pt] &=&(\delta_{jl}\delta_{km} - \delta_{jm}\delta_{kl})L_j p_k L_l p_m \nonumber\\[5pt] &=&L_j p_k L_j p_k - L_j p_k L_k p_j \end{eqnarray}$$ Since $\epsilon_{ijk}$ is anti-symmetric, we have that: $$p_k L_k = {\bf p}\cdot{\bf L} = \epsilon_{ijk} p_i x_j p_k = 0$$ The equation above then becomes: $$\begin{eqnarray} ({\bf L}\times{\bf p} )\cdot( {\bf L}\times{\bf p}) &=& L_j p_k L_j p_k \nonumber\\[5pt] &=&L_j (L_j p_k + [p_k, L_j]) p_k \nonumber\\[5pt] &=&L_j L_j p_k p_k + i\hbar\epsilon_{kjm}p_m p_k \nonumber\\[5pt] &=&{\bf L}^2{\bf p}^2 \end{eqnarray}$$ Using a nearly identical sequence of derivation steps, we can also show that: $$\begin{eqnarray} ({\bf L}\times{\bf p})\cdot( {\bf p}\times{\bf L}) &=& -{\bf L}^2{\bf p}^2 \\[5pt] ({\bf p}\times{\bf L})\cdot( {\bf L}\times{\bf p}) &=& -({\bf L}^2 + 4\hbar^2){\bf p}^2 \\[5pt] ({\bf p}\times{\bf L})\cdot( {\bf p}\times{\bf L}) &=& {\bf L}^2{\bf p}^2 \end{eqnarray}$$ Let us now proceed to the other terms: $$\displaystyle({\bf L}\times{\bf p}) \cdot \frac{\bf x}{r} = \epsilon_{ijk} L_j p_k x_i\frac{1}{r} = - L_j \epsilon_{jik} p_k x_i\frac{1}{r} = - L_j L_j^\dagger\frac{1}{r} = -\frac{\bf L^2}{r}$$ $$\displaystyle \frac{\bf x}{r} \cdot ({\bf p}\times{\bf L}) = \left( ({\bf p}\times{\bf L})^\dagger \cdot \left(\frac{\bf x}{r}\right)^\dagger \right)^\dagger = \left( -({\bf L}\times{\bf p}) \cdot \frac{\bf x}{r} \right)^\dagger = \left( \frac{\bf L^2}{r} \right)^\dagger = \frac{\bf L^2}{r}$$ $$\begin{eqnarray} \displaystyle \frac{\bf x}{r} \cdot ({\bf L}\times{\bf p}) &=& \frac{1}{r} \epsilon_{ijk} x_i L_j p_k \nonumber\\[5pt] &=& \frac{1}{r} \epsilon_{ijk} (L_j x_i + [x_i, L_j]) p_k \nonumber\\[5pt] &=& \frac{1}{r} (\epsilon_{ijk} L_j x_i p_k + \epsilon_{ijk} i\hbar \epsilon_{ijm} x_m p_k) \nonumber\\[5pt] &=& \frac{1}{r} (-\epsilon_{jik} L_j x_i p_k + i\hbar \epsilon_{ikj} \epsilon_{imj} x_m p_k) \nonumber\\[5pt] &=& \frac{1}{r} (- L_j L_j + i\hbar (\delta_{ii}\delta_{km} - \delta_{im}\delta_{ki}) x_m p_k) \nonumber\\[5pt] &=& \frac{1}{r} (- {\bf L}^2 + i\hbar (3x_k p_k - x_i p_i)) \nonumber\\[5pt] &=& \frac{1}{r} (- {\bf L}^2 + 2 i\hbar {\bf x}\cdot{\bf p}) \end{eqnarray}$$ $$\begin{eqnarray} \displaystyle ({\bf p}\times{\bf L}) \cdot \frac{\bf x}{r} &=& \left( \left( \frac{\bf x}{r} \right)^\dagger \cdot ({\bf p}\times{\bf L})^\dagger \right)^\dagger \nonumber\\[5pt] &=& \left( \frac{\bf x}{r} \cdot (-{\bf L}\times{\bf p}) \right)^\dagger \nonumber\\[5pt] &=& -\left( \frac{1}{r} (- {\bf L}^2 + 2 i\hbar {\bf x}\cdot{\bf p}) \right)^\dagger \nonumber\\[5pt] 
&=& -(-{\bf L}^2 - 2 i\hbar {\bf p}\cdot{\bf x}) \frac{1}{r} \nonumber\\[5pt] &=& \left({\bf L}^2 + 2 i\hbar p_i x_i\right) \frac{1}{r} \nonumber\\[5pt] &=& \left({\bf L}^2 + 2 i\hbar (x_i p_i + [p_i, x_i]) \right) \frac{1}{r} \nonumber\\[5pt] &=& \left({\bf L}^2 + 2 i\hbar {\bf x}\cdot{\bf p} + 2 i\hbar (-i\hbar)\delta_{ii}\right) \frac{1}{r} \nonumber\\[5pt] &=& ({\bf L}^2 + 2 i\hbar {\bf x}\cdot{\bf p} + 6\hbar^2) \frac{1}{r} \nonumber\\[5pt] &=& \frac{1}{r} ({\bf L}^2 + 2 i\hbar {\bf x}\cdot{\bf p} + 6\hbar^2) + \left[ 2 i\hbar {\bf x}\cdot{\bf p}, \frac{1}{r}\right] \nonumber\\[5pt] &=& \frac{1}{r} ({\bf L}^2 + 2 i\hbar {\bf x}\cdot{\bf p} + 6\hbar^2) + 2i\hbar x_i \left[ p_i, \frac{1}{r}\right] \nonumber\\[5pt] &=& \frac{1}{r} ({\bf L}^2 + 2 i\hbar {\bf x}\cdot{\bf p} + 6\hbar^2) + 2i\hbar x_i i\hbar \frac{x_i}{r^3} \nonumber\\[5pt] &=& \frac{1}{r} ({\bf L}^2 + 2 i\hbar {\bf x}\cdot{\bf p} + 6\hbar^2) - 2\hbar^2 \frac{1}{r} \nonumber\\[5pt] &=& \frac{1}{r} ({\bf L}^2 + 2 i\hbar {\bf x}\cdot{\bf p} + 4\hbar^2) \end{eqnarray}$$ Putting all the results above on equation \eqref{post_d379f7e34709563115ebd4c41241ed5e_AA_tmp} gives us: $$\begin{eqnarray} {\bf A}^2 &=& \frac{1}{4}({\bf L}^2{\bf p}^2 + {\bf L}^2{\bf p}^2 + {\bf L}^2{\bf p}^2 + 4\hbar^2 {\bf p}^2 + {\bf L}^2{\bf p}^2) \nonumber\\[5pt] &+& \frac{me^2}{2r}\left(-{\bf L}^2 -{\bf L}^2 - 2i\hbar{\bf x}\cdot{\bf p} - 4\hbar^2 -{\bf L}^2 + 2i\hbar{\bf x}\cdot{\bf p} - {\bf L}^2\right) \nonumber\\[5pt] &+& m^2e^4 \nonumber\\[5pt] &=& {\bf L}^2{\bf p}^2 + \hbar^2{\bf p}^2 - \frac{2me^2}{r}{\bf L}^2 - \frac{2me^2\hbar^2}{r} + m^2e^4 \nonumber\\[5pt] &=& ({\bf L}^2 + \hbar^2)\left({\bf p}^2 - \frac{2me^2}{r}\right) + m^2e^4 \nonumber\\[5pt] &=& ({\bf L}^2 + \hbar^2)2mH + m^2e^4 \end{eqnarray}$$ Applying this result to our energy eigenstate, we have $H \rightarrow E = -\kappa^2/2m$, so we get: $$\boxed{ {\bf A}^2 = -\kappa^2({\bf L}^2 + \hbar^2) + m^2e^4 } \label{post_d379f7e34709563115ebd4c41241ed5e_AA}$$
### Determination of $E$ in terms of ${\bf K}^\pm$
First, since ${\bf L}$ and ${\bf A}$ are self-adjoint operators: $${\bf L}\cdot{\bf A} + {\bf A}\cdot{\bf L} = {\bf L}\cdot{\bf A} + ({\bf L}^\dagger\cdot{\bf A}^\dagger)^\dagger = {\bf L}\cdot{\bf A} + ({\bf L}\cdot{\bf A})^\dagger$$ Given that: $$\displaystyle {\bf L}\cdot{\bf A} = {\bf L}\cdot\frac{1}{2}({\bf L}\times{\bf p} - {\bf p}\times{\bf L}) + me^2{\bf L}\cdot\frac{\bf x}{r} = me^2({\bf x}\times{\bf p})\cdot\frac{\bf x}{r} = 0$$ then: $${\bf L}\cdot{\bf A} + {\bf A}\cdot{\bf L} = 0 \label{post_d379f7e34709563115ebd4c41241ed5e_LA_AL}$$ From equations \eqref{post_d379f7e34709563115ebd4c41241ed5e_AA} and \eqref{post_d379f7e34709563115ebd4c41241ed5e_LA_AL}, we have that: $$\begin{eqnarray} \displaystyle ({\bf K}^\pm)^2 &=& {\bf K}^\pm\cdot{\bf K}^\pm \nonumber\\[5pt] &=& \frac{1}{4}{\bf L}\cdot{\bf L} \pm \frac{1}{4\kappa}({\bf L}\cdot{\bf A} + {\bf A}\cdot{\bf L}) + \frac{1}{4\kappa^2}{\bf A}\cdot{\bf A} \nonumber\\[5pt] &=& \frac{1}{4}{\bf L}^2 + \frac{1}{4\kappa^2}( -\kappa^2({\bf L}^2 + \hbar^2) + m^2e^4 ) \nonumber\\[5pt] &=& \frac{1}{4}{\bf L}^2 - \frac{1}{4}{\bf L}^2 - \frac{\hbar^2}{4} + \frac{m^2e^4}{4\kappa^2} \end{eqnarray}$$ Therefore: $$({\bf K}^+)^2 = ({\bf K}^-)^2 = \frac{1}{4}\left(-\hbar^2 + \frac{m^2e^4}{\kappa^2}\right) \label{post_d379f7e34709563115ebd4c41241ed5e_Kp2}$$ With this result, we then have that: $$\boxed{ \displaystyle E = -\frac{\kappa^2}{2m} = -\frac{m^2e^4}{2m}\left(\hbar^2 + 4({\bf K}^\pm)^2\right)^{-1} } \label{post_d379f7e34709563115ebd4c41241ed5e_E_tmp}$$
### A final expression for $E$
As shown in equation \eqref{post_d379f7e34709563115ebd4c41241ed5e_comm_Ki_Kj}, the operators $K^+_i$ and $K^-_i$ satisfy the commutation relations for angular momentum operators, i.e.: $$\begin{eqnarray} [K^+_i, K^+_j] &=& i\hbar \epsilon_{ijk} K^+_k \\[5pt] [K^-_i, K^-_j] &=& i\hbar \epsilon_{ijk} K^-_k \end{eqnarray}$$ It then follows directly that: $$\begin{eqnarray} [({\bf K}^+)^2, K^+_i] &=& 0 \\[5pt] [({\bf K}^-)^2, K^-_i] &=& 0 \end{eqnarray}$$ Using ladder operators, it is straightforward to prove that the eigenvalues of $({\bf K}^+)^2$ and $({\bf K}^-)^2$ have the form $\hbar^2 j(j+1)$ for values of $j$ which are either integers or half-integers, i.e., $j = (n-1)/2$ for integer values $n \geq 1$. Since we are assuming we have an energy eigenstate with energy $E = -\kappa^2/2m$, and since equation \eqref{post_d379f7e34709563115ebd4c41241ed5e_Kp2} implies that such a state is also an eigenstate of both $({\bf K}^+)^2$ and $({\bf K}^-)^2$, then from equation \eqref{post_d379f7e34709563115ebd4c41241ed5e_E_tmp} we have that: $$\begin{eqnarray} \displaystyle E &=& -\frac{m^2e^4}{2m}\left(\hbar^2 + 4({\bf K}^\pm)^2\right)^{-1} \nonumber\\[5pt] &=& -\frac{m^2e^4}{2m}\left(\hbar^2 + 4\hbar^2j(j+1)\right)^{-1} \nonumber\\[5pt] &=& -\frac{m^2e^4}{2m}\left(\hbar^2 + 4\hbar^2\frac{n-1}{2}\frac{n+1}{2}\right)^{-1} \nonumber\\[5pt] &=& -\frac{m^2e^4}{2m}\left(\hbar^2 + \hbar^2(n^2 - 1)\right)^{-1} \end{eqnarray}$$ Finally, we have our desired result, which we express as $E_n$ instead of $E$ to make the dependence on $n$ explicit: $$\displaystyle E_n = -\frac{me^4}{2\hbar^2n^2}$$ Since we used $4\pi\epsilon_0 = 1$ in equation \eqref{post_d379f7e34709563115ebd4c41241ed5e_hamiltonian}, we can replace $e^2$ with $e^2 / 4\pi\epsilon_0$ to get the energy levels in SI units: $$\boxed{ \displaystyle E_n = -\frac{me^4}{2\hbar^2(4\pi\epsilon_0)^2n^2} \quad \textrm{for} \quad n = 1, 2, 3, \ldots }$$
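As a numerical sanity check of the final boxed formula, the short C++ program below evaluates $E_n$ in SI units; the values of $m$, $e$, $\hbar$ and $\epsilon_0$ are standard constants assumed here, not part of the derivation:

```cpp
// E_n = -m e^4 / (2 hbar^2 (4 pi eps0)^2 n^2); should give about -13.6/n^2 eV.
#include <cstdio>

int main() {
    const double m    = 9.109e-31;  // electron mass [kg] (assumed)
    const double e    = 1.602e-19;  // elementary charge [C] (assumed)
    const double hbar = 1.055e-34;  // reduced Planck constant [J s] (assumed)
    const double eps0 = 8.854e-12;  // vacuum permittivity [F/m] (assumed)
    const double pi   = 3.14159265358979323846;

    const double k = 4.0 * pi * eps0;
    for (int n = 1; n <= 4; ++n) {
        const double En = -m * e * e * e * e
                        / (2.0 * hbar * hbar * k * k * n * n);
        std::printf("E_%d = %.3f eV\n", n, En / e);  // energy in electron-volts
    }
    return 0;
}
```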
|
# Project Euler 8 - Redux
Spurred by this question: Project Euler #8, I decided to try to solve it with code that is as clean as possible.
Here is the problem formulation:
The four adjacent digits in the 1000-digit number that have the greatest product are 9 × 9 × 8 × 9 = 5832.
73167176531330624919225119674426574742355349194934 96983520312774506326239578318016984801869478851843 85861560789112949495459501737958331952853208805511 12540698747158523863050715693290963295227443043557 66896648950445244523161731856403098711121722383113 62229893423380308135336276614282806444486645238749 30358907296290491560440772390713810515859307960866 70172427121883998797908792274921901699720888093776 65727333001053367881220235421809751254540594752243 52584907711670556013604839586446706324415722155397 53697817977846174064955149290862569321978468622482 83972241375657056057490261407972968652414535100474 82166370484403199890008895243450658541227588666881 16427171479924442928230863465674813919123162824586 17866458359124566529476545682848912883142607690042 24219022671055626321111109370544217506941658960408 07198403850962455444362981230987879927244284909188 84580156166097919133875499200524063689912560717606 05886116467109405077541002256983155200055935729725 71636269561882670428252483600823257530420752963450
Find the thirteen adjacent digits in the 1000-digit number that have the greatest product. What is the value of this product?
This is my implementation:
#include <vector>
#include <string>
#include <iostream>
#include <cstdint>
static const char* c_input =
"73167176531330624919225119674426574742355349194934"
"96983520312774506326239578318016984801869478851843"
"85861560789112949495459501737958331952853208805511"
"12540698747158523863050715693290963295227443043557"
"66896648950445244523161731856403098711121722383113"
"62229893423380308135336276614282806444486645238749"
"30358907296290491560440772390713810515859307960866"
"70172427121883998797908792274921901699720888093776"
"65727333001053367881220235421809751254540594752243"
"52584907711670556013604839586446706324415722155397"
"53697817977846174064955149290862569321978468622482"
"83972241375657056057490261407972968652414535100474"
"82166370484403199890008895243450658541227588666881"
"16427171479924442928230863465674813919123162824586"
"17866458359124566529476545682848912883142607690042"
"24219022671055626321111109370544217506941658960408"
"07198403850962455444362981230987879927244284909188"
"84580156166097919133875499200524063689912560717606"
"05886116467109405077541002256983155200055935729725"
"71636269561882670428252483600823257530420752963450";
std::vector<std::string> partition(const std::string& input){
    std::vector<std::string> ans;
    std::string::size_type pos = 0;
    while (pos < input.size()){
        auto first_non_zero = input.find_first_not_of('0', pos);
        // Only zeros left
        if (first_non_zero == std::string::npos)
            break;
        auto next_zero = input.find('0', first_non_zero);
        // No zeros left, assume end of string
        if (next_zero == std::string::npos)
            next_zero = input.size();
        ans.emplace_back(input.substr(first_non_zero, next_zero - first_non_zero));
        pos = next_zero;
    }
    return ans;
}
int toint(char digit){
    return digit - '0';
}
int main(int, char**){
    const auto problem_parts = partition(c_input);
    typedef decltype(problem_parts)::size_type size_type;
    const size_type num_digits = 13;
    uint64_t ans = 0;
    for (const auto& problem : problem_parts){
        if (problem.size() < num_digits)
            continue;
        uint64_t running_product = 1;
        for (size_type i = 0; i < num_digits; ++i){
            running_product *= toint(problem[i]);
        }
        for (size_type i = num_digits; i < problem.size(); ++i){
            // Carefull of rounding and overflow here, division first and then multiplication.
            running_product = running_product / toint(problem[i - num_digits]) * toint(problem[i]);
            if (running_product > ans)
                ans = running_product;
        }
    }
    std::cout << "Answer: " << ans << std::endl;
    return 0;
}
Is there anything I can do to improve this? Nitpicking welcome.
• What's your time? – Martin York Mar 7 '15 at 2:19
• Not sure what the partition() is doing. But why not just keep 13 running totals? Then you just have to loop over the data once and you don't need the relatively expensive division. – Martin York Mar 7 '15 at 9:12
• @LokiAstari The running time is just about instant. It has linear time complexity in the size of the input string; the number of adjacent factors to consider doesn't affect the run time. The problem can be partitioned into smaller problems by realizing that all 0s will create a zero product around them. Thus splitting the input into subproblems at the zeros creates simpler problems to solve; I'm partitioning the problem, as is commonly said. I do not understand your proposal of keeping 13 running totals; could you elaborate? – Emily L. Mar 8 '15 at 10:54
• Your idea is better. – Martin York Mar 8 '15 at 15:16
It is good that you do not use using namespace std;. Avoiding it will save you a lot of headaches later on.
In fact, the only problem I found with this code is that you do not use braces around one-line if statements.
if (next_zero == std::string::npos)
    next_zero = input.size();
Using braces will not change the runtime behavior of your code, but it can help you prevent errors if you make a mistake, like Apple did with their Apple SSL bug.
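To illustrate why (a condensed, hypothetical sketch in the spirit of that bug, not Apple's actual code): without braces, an indented second line is not actually guarded, so a single duplicated line silently skips a later check.
#include <iostream>
static int verify_signature(){ return 0; } // 0 == success here
static int final_check()     { return 1; } // nonzero == would reject the input
int main(){
    int err = 0;
    if ((err = verify_signature()) != 0)
        goto fail;
        goto fail; // indented like the if-body, but always executes
    err = final_check(); // never reached: the final check is skipped
fail:
    std::cout << "err = " << err << '\n'; // prints 0, i.e. "success"
    return 0;
}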
## Process once
In partition you are already touching every element in the string once; you might as well convert each character to an int here and return std::vector<int>, thereby saving all the later toint calls.
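A minimal sketch of that suggestion (my interpretation: one vector of digits per zero-free run; partition_digits is a hypothetical name, not from the post):
#include <string>
#include <vector>
// Partition the input into runs of non-zero digits, converting each
// character to an int on the way, so no later toint() calls are needed.
std::vector<std::vector<int>> partition_digits(const std::string& input){
    std::vector<std::vector<int>> ans;
    std::vector<int> run;
    for (char c : input){
        if (c == '0'){
            if (!run.empty()){
                ans.push_back(std::move(run));
                run.clear();
            }
        } else {
            run.push_back(c - '0');
        }
    }
    if (!run.empty())
        ans.push_back(std::move(run));
    return ans;
}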
## Bug
I think you will miss the maximum product if it lies in the first 13 digits of a partition. In the first for loop you prime running_product with 13 digits (indices 0 through 12), but entering the second for loop, ans still holds the result from the previous partition (or zero), and by the time the 'if' check runs, running_product already covers indices 1 through 13. The product of indices 0-12 is therefore never compared against ans. Also, if the partition has exactly num_digits digits, ans is never checked at all.
uint64_t running_product = 1;
for (size_type i = 0; i < num_digits; ++i){
    running_product *= toint(problem[i]);
}
if (running_product > ans) ans = running_product;
for (size_type i = num_digits; i < problem.size(); ++i){
    ...
}
Adding another check is a quick fix for this.
## Avoid no brace blocks
This is a little bit of a taste issue: a one-line block without braces may cause hard-to-find errors when dealing with older code or multiple team members. If you do it, be consistent (there is one loop that has braces around a single line; most of the others don't). When it is used, I personally prefer to have the statement on the same line rather than on the next one, making it a clearer indication that this is a one-liner. E.g.
if (first_non_zero == std::string::npos) break;
## Allocations
This is a small Euler example, so what you are doing is perfectly fine. In a larger scope, for a case like this, I'd prefer an algorithm that doesn't have to allocate space again for most of the incoming data. You could easily modify the lower loop to recalculate your running_product when hitting a zero; yes, that might cause more multiplications and tests, but allocations are a bigger drain than multiplication.
Just determining the partition size, rather than actually physically partitioning the string into substrings, would have the same effect without the allocations.
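A minimal sketch of that allocation-free, single-pass idea (my illustration of the suggestion, assuming the c_input and num_digits from the question; max_window_product is a hypothetical name):
#include <cstddef>
#include <cstdint>
// Single pass over the raw digit string: reset the window at every '0',
// and once the window holds num_digits factors, divide out the oldest digit.
std::uint64_t max_window_product(const char* s, std::size_t num_digits){
    std::uint64_t best = 0;
    std::uint64_t product = 1;
    std::size_t window = 0; // number of digits currently in product
    for (std::size_t i = 0; s[i] != '\0'; ++i){
        if (s[i] == '0'){ // a zero kills every window passing through it
            product = 1;
            window = 0;
            continue;
        }
        if (window == num_digits){
            product /= s[i - num_digits] - '0'; // safe: no '0' inside the window
            --window;
        }
        product *= s[i] - '0';
        ++window;
        if (window == num_digits && product > best)
            best = product;
    }
    return best; // e.g. max_window_product(c_input, 13)
}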
## Standard types aren't necessarily in the global namespace
#include <cstdint>
Having included <cstdint>, you are only guaranteed the qualified name std::uint64_t; plain uint64_t in the global namespace is not portable. You could bring it in (within the main() function):
using std::uint64_t;
It would give the compiler a little more latitude (and get us back to not depending on optional features), however, to write:
using result_type = std::uint_fast64_t;
(We only care that we can hold any number up to 9^13 = 2,541,865,828,329; since 13 · log2(9) ≈ 41.2, that fits in 42 bits.)
## Bug: size_type isn't what you think
You seem to be using the size type of a vector as the size type of its elements:
typedef decltype(problem_parts)::size_type size_type;
const size_type num_digits = 13;
I think what's meant is
typedef decltype(problem_parts)::value_type::size_type size_type;
const size_type num_digits = 13;
Or, in modern language,
using size_type = decltype(problem_parts)::value_type::size_type;
## Reduce the number of strings used
The partition() function is clear and obvious. But it does take its input as a std::string and it does create a whole vector of std::string before you start any of the calculations. You could avoid a bunch of copying by accepting input as a C-style string and by producing each output on demand, rather than all at once in a collection.
A quick win would be to use string_view objects in place of the strings that are collected in the vector:
#include <algorithm>
#include <cstring>
#include <string_view>
#include <vector>
std::vector<std::string_view> partition(const char *input) {
    const char *const input_end = input + std::strlen(input);
    std::vector<std::string_view> ans;
    ans.reserve(1 + (input_end - input) / 10); // Assume digits uniformly distributed, and round up
    for (auto pos = input; pos != input_end; ) {
        auto first = std::find_if(pos, input_end, [](const char c){ return c != '0';});
        pos = std::find(first, input_end, '0');
        if (pos != first)
            ans.emplace_back(first, pos - first);
    }
    return ans;
}
With no other changes to the program, this eliminates a bunch of small allocations:
before: total heap usage: 28 allocs, 28 frees, 83,358 bytes allocated
after: total heap usage: 3 allocs, 3 frees, 75,344 bytes allocated
Similarly, you could avoid creating and adding "short" strings to the result vector; the trade-off is that you have to move the knowledge of what's "short" through to partition(). See the final worked example.
## Use a simpler main() declaration
We can just use the form that takes no arguments, since we don't accept any command-line arguments:
int main()
## Spelling mistake
// Careful of rounding and overflow here, ...
(only one 'l' in 'careful'). You did ask for some nitpicking, after all!
Note that you don't really need to worry about overflow - even if you multiply first, you still have at least 18 bits of headroom (roughly 806,351 times as much as you need). Rounding would only be an issue if you were to write
running_product *= toint(problem[i]) / toint(problem[i - num_digits]);
Separate *= and /= would be fine, of course.
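For instance (a worked example of the pitfall, not from the original review): if the window product is 9 = 3 × 3, the outgoing digit is 3, and the incoming digit is 7, then 9 / 3 * 7 = 21 is exact, because the outgoing digit always divides the running product, whereas 9 * (7 / 3) = 9 * 2 = 18 under integer division.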
# Worked example
#include <algorithm>
#include <cstdint>
#include <cstring>
#include <iostream>
#include <string_view>
#include <vector>
static auto const c_input =
"73167176531330624919225119674426574742355349194934"
"96983520312774506326239578318016984801869478851843"
"85861560789112949495459501737958331952853208805511"
"12540698747158523863050715693290963295227443043557"
"66896648950445244523161731856403098711121722383113"
"62229893423380308135336276614282806444486645238749"
"30358907296290491560440772390713810515859307960866"
"70172427121883998797908792274921901699720888093776"
"65727333001053367881220235421809751254540594752243"
"52584907711670556013604839586446706324415722155397"
"53697817977846174064955149290862569321978468622482"
"83972241375657056057490261407972968652414535100474"
"82166370484403199890008895243450658541227588666881"
"16427171479924442928230863465674813919123162824586"
"17866458359124566529476545682848912883142607690042"
"24219022671055626321111109370544217506941658960408"
"07198403850962455444362981230987879927244284909188"
"84580156166097919133875499200524063689912560717606"
"05886116467109405077541002256983155200055935729725"
"71636269561882670428252483600823257530420752963450";
std::vector<std::string_view> partition(const char *input, std::size_t min_length) {
    auto const input_end = input + std::strlen(input);
    std::vector<std::string_view> ans;
    ans.reserve(1 + (input_end - input) / 10); // Assume digits uniformly distributed, and round up
    for (auto pos = input; pos != input_end; ) {
        auto const first = std::find_if(pos, input_end, [](const char c){ return c != '0';});
        pos = std::find(first, input_end, '0');
        std::size_t length = pos - first;
        if (length >= min_length)
            ans.emplace_back(first, length);
    }
    return ans;
}
int main() {
    using result_type = std::uint_least64_t;
    std::size_t num_digits = 13;
    auto const problem_parts = partition(c_input, num_digits);
    using size_type = decltype(problem_parts)::value_type::size_type;
    result_type ans = 0;
    for (const auto& problem : problem_parts) {
        result_type running_product = 1;
        for (size_type i = 0; i < problem.size(); ++i) {
            running_product *= problem[i] - '0';
            if (i >= num_digits)
                running_product /= problem[i - num_digits] - '0';
            if (running_product > ans)
                ans = running_product;
        }
    }
    std::cout << "Answer: " << ans << std::endl;
    return 0;
}
|
Physics in the Pub
15 Aug 2017, Samuel Hinton
A fun night of physics talks, songs and verse with Dr Phil Dooley
Physics in the Pub came to Brisbane! With the esteemed Dr Phil Dooley as MC for the night, we had a great evening of eight physics acts. Whilst I tried to impress the sheer scale and significance of supernovae upon the audience, I was definitely not the star of the show that night!
Chris Mesiku sang a reggae song about $E=mc^2$ and Einstein. Phil himself sang a cover of Nirvana about particle physics and light.
I think if I do it again I obviously need to present, if not in song, in verse at least!
The entire night is available on YouTube here, thanks to Till, if you're interested:
|
# estimating lattice sums of concave functions
Suppose that $f$ is a twice-differentiable concave function from $R^2$ to $R$ that's negative outside of some bounded set (e.g. $f(x,y)=1-x^2-y^2$) and let $F=\max(f,0)$. Let $S_n$ be the Riemann sum for the integral of $F$ over $R^2$ obtained by summing the values of $F$ at all points in the lattice $(Z/n)^2$ and dividing by $n^2$. What sort of bounds can be given for the difference between $S_n$ and the integral of $F$ over $R^2$? Is it $O(1/n)$ or $O(1/n^2)$ or what? This is a more focussed version of the question error estimates for multi-dimensional Riemann sums.
Oy! Mr. Propp, are you going to accept or comment at mathoverflow.net/questions/71432/…? (Also, I answered your comment at mathoverflow.net/questions/71344/…) – Ricky Demer Aug 18 '11 at 19:04
It looks like the error is $O(1/n^2)$, with a precise and optimal bound $C/n^2$ if you have a fixed bound on (1) the second derivative of the function and (2) the radius of the region where it is non-negative.
As the question is stated there are two sources for the error term:
• the error in each square, centered at a point of the lattice, on which the function is strictly positive. This term is controlled by the second derivative of the function (it clearly vanishes for a linear function) and is bounded by $O(1/n^4)$; since the number of squares is $O(n^2)$, the estimate on this whole term is $O(1/n^2)$,
• the error term in the boundary squares, those on which the function takes both a $>0$ and a zero value. On those squares the error is $O(1/n^3)$ and the number of such boundary squares is $O(n)$, so we again get a bound of $O(1/n^2)$.
(Note that a complete argument has to be more precise, because the function $f$ could have zero derivative at the points where it vanishes; then the number of boundary squares is $O(n^2)$, but I think the result does not change.)
To check that this estimate is optimal, you can think of a function which is invariant under a rotation of angle $\pi/2$ and equal to, say, $N-x$ on $y>0$, $-y+u\leq x\leq y-u$ for some small $u>0$. Then the first error term can be made smaller than the second, while the second "boundary" error term is indeed of the order of $1/n^2$ (the boundary errors all sum up).
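As a quick numerical illustration (my sketch, not part of the answer), one can compare $S_n$ with the exact integral for the question's example $f(x,y)=1-x^2-y^2$, whose integral of $F=\max(f,0)$ over $R^2$ is $\pi/2$; the generic bound above is $O(1/n^2)$, and for this smooth, symmetric special case the observed error decays at least that fast:
#include <cmath>
#include <cstdio>
// F(x,y) = max(1 - x^2 - y^2, 0); its integral over R^2 is pi/2.
static double F(double x, double y){
    const double f = 1.0 - x * x - y * y;
    return f > 0.0 ? f : 0.0;
}
int main(){
    const double exact = std::acos(-1.0) / 2.0; // pi/2
    for (int n = 10; n <= 1000; n *= 10){
        double sum = 0.0;
        // F vanishes outside the unit disk, so lattice points with |i|, |j| > n contribute 0.
        for (int i = -n; i <= n; ++i)
            for (int j = -n; j <= n; ++j)
                sum += F(double(i) / n, double(j) / n);
        const double Sn = sum / (double(n) * n);
        std::printf("n = %4d  S_n = %.10f  n^2 * |error| = %.5f\n",
                    n, Sn, double(n) * n * std::fabs(Sn - exact));
    }
    return 0;
}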
The errors in each cell tend to compensate. If instead of having a compactly supported function with limited regularity, we have a function $F$ in the Schwartz class, then the total error is $O(n^{-k})$ for every $k>0$. This is a consequence of the Poisson summation formula and the fact that the Fourier transform of $F$ is of Schwartz class too. Therefore the question is really about the effect of the limited regularity of $F$, and whether the concavity helps. – Denis Serre Aug 18 '11 at 16:14
I agree, but as stated the main error term comes from the boundary squares. In some cases at least no compensation occurs, as in the example I tried to describe, where all centers of boundary squares are at points where $f=0$, so that all those boundary terms are positive. – Jean-Marc Schlenker Aug 18 '11 at 17:48
|
# Quiz #1
Below I am giving you a handful of problems. You don't need to do all of them, but you must do the first problem.
Question #1: Let st be your student id number. Enter the formula on the blackboard. The answer to the question determines which two other questions you should answer.
Question #2: Given a string w where length(w) is even, construct a string where the first and second characters are switched, the third and fourth characters are switched, the fifth and sixth characters are switched, etc. For instance, if w:="ABCDEFGH" then the output string out is "BADCFEHG".
Question #3: Given a string w where length(w) is even, construct a string consisting of the letters in the even positions, followed by the letters in the odd positions. For instance, if w:="ABCDEFGH" then the output string out is "BDFHACEG".
Question #4: Given a string w with letters and other symbols, construct a string where lower case letters are replaced by upper case letters and upper case letters are replaced by lower case letters. For instance if w:="What do YOU want with Me?", then the output string should be "wHAT DO you WANT WITH mE?"
Question #5: Show that the curves $\sqrt{25-x^2}$ and $\sin(2(x-5)\pi)$ intersect in exactly two points. Find the area between the two curves.
Question #6: Find at least one point of intersection for $0 < x < 10$ between the curve $\sin(5\pi x)$ and $\sqrt{10x-x^2}$. Give the value of $x$ to at least 3 decimal places. Hint: one way of doing this is to graph the difference of these curves and find the zeros.
Question #7: Find the area between the two curves $6 \sin^3( \pi x)$ and $\sin^3(3 \pi x) \sin^3( \pi x)$ for $x$ between 0 and 2.
You should be able to complete this quiz within the class time. If you finish after the class time, your overall grade for this assignment will be reduced by 10% per day. Make sure that your file is uploaded by 12:30pm. Upload your worksheet to the course Moodle. You are expected to work alone on this assignment.
|
Definition:Big-O Notation/Real
Definition
Estimate at infinity
Let $f$ and $g$ be real functions defined on a neighborhood of $+ \infty$ in $\R$.
The statement:
$\map f x = \map \OO {\map g x}$ as $x \to \infty$
is equivalent to:
$\exists c \in \R_{\ge 0}: \exists x_0 \in \R: \forall x \in \R: \paren {x \ge x_0 \implies \size {\map f x} \le c \cdot \size {\map g x} }$
That is:
$\size {\map f x} \le c \cdot \size {\map g x}$
for $x$ sufficiently large.
This statement is voiced $f$ is big-$\OO$ of $g$ or simply $f$ is big-$\OO$ $g$.
Point Estimate
Let $x_0 \in \R$.
Let $f$ and $g$ be real-valued or complex-valued functions defined on a punctured neighborhood of $x_0$.
The statement:
$\map f x = \map \OO {\map g x}$ as $x \to x_0$
is equivalent to:
$\exists c \in \R_{\ge 0}: \exists \delta \in \R_{>0}: \forall x \in \R : \paren {0 < \size {x - x_0} < \delta \implies \size {\map f x} \le c \cdot \size {\map g x} }$
That is:
$\size {\map f x} \le c \cdot \size {\map g x}$
for all $x$ in a punctured neighborhood of $x_0$.
Also known as
The big-$\OO$ notation, along with little-$\oo$ notation, are also referred to as Landau's symbols or the Landau symbols, for Edmund Georg Hermann Landau.
In analytic number theory, sometimes Vinogradov's notations $f \ll g$ or $g \gg f$ are used to mean $f = \map \OO g$.
This can often be clearer for estimates leading to typographically complex error terms.
Some sources use an ordinary $O$:
$f = \map O g$
Also defined as
Some authors require that the functions appearing in the $\OO$-estimate be positive or strictly positive.
Examples
Example: $10 \times$ Function at $+\infty$
Let $f: \R \to \R$ be the real function defined as:
$\forall x \in \R: \map f x = 10 x$
Then:
$\map f x = \map \OO x$
as $x \to \infty$.
Example: Sine Function at $+\infty$
Let $f: \R \to \R$ be the real function defined as:
$\forall x \in \R: \map f x = \sin x$
Then:
$\map f x = \map \OO 1$
as $x \to \infty$.
Example: $x = \map \OO {x^2}$ at $+\infty$
Let $f: \R \to \R$ be the real function defined as:
$\forall x \in \R: \map f x = x$
Then:
$\map f x = \map \OO {x^2}$
as $x \to \infty$.
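For instance (a worked check against the estimate-at-infinity definition, added as an illustration), explicit witnesses $(c, x_0)$ for the three examples above are:
$\size {10 x} \le 10 \cdot \size x$ for all $x$, so $c = 10$ and $x_0 = 0$ suffice for the first example.
$\size {\sin x} \le 1$ for all $x$, so $c = 1$ and $x_0 = 0$ suffice for the second.
$\size x \le \size {x^2}$ whenever $x \ge 1$, so $c = 1$ and $x_0 = 1$ suffice for the third.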
|
# Optimal Binarize value for counting experiment
I am currently working on a physics experiment where I count particles. I have looked at and used a bit of code from the following sources: Cell counting from an image file and Count Elements in Image
However, this is my first attempt at the code.
(Note: this code wasn't giving good counts for the particles at particular slide depths. I attached my final solution to the end of this post):
locate = SystemDialogInput["FileOpen"];
img = Import[locate]
b = FillingTransform[ColorNegate[Binarize[img]]]
disT = DistanceTransform[b, Padding -> 0];
w = WatershedComponents[disT, Method -> "Rainfall"];
cells = SelectComponents[w, "Count", 0 < # < 500 &]; (*Pixel interval*)
measures = ComponentMeasurements[cells, {"Centroid", "EquivalentDiskRadius", "Label"}];
Grid[{{Style[
    "Number of Particles is " <> ToString[Dimensions[measures][[1]]] <> ".", "Title"],
   Button["Save it?",
    Export[DirectoryName[locate] <> FileBaseName[locate] <> ".txt",
     "Number of Cell is " <> ToString[Dimensions[measures][[1]]] <> ". " <> DateString[]]]},
  {Show[img,
    Graphics[{Red, Circle @@ # & /@ (measures[[All, 2, 1 ;; 2]]),
      MapThread[Text, {measures[[All, 2, 3]], measures[[All, 2, 1]]}]}]]}}]
This code isn't working very well with my images.
Furthermore, I am having to adjust the value of x in Binarize[img, x] for each image in order to get accurate counts. I don't know exactly how to adjust Binarize to get consistent counts for my data.
The final picture in my data set is also proving to be a challenge to process.
I solved the ImageCrop[] problem of this question by pre-processing all of my images; now I just need to find a way to make Binarize[] give good counts of the particles in my image(s).
Final Code
For those who need it.
(The above method gave somewhat inaccurate results, so I settled on this one. It basically searches for the template images called ker, ker2, and ker3 and finds the points associated with these images in the file that you input to the function. It produced some nice counts.)
ker, ker2, and ker3 are small template images of in-focus particles (shown in the original post).
ker = (*Image of Kernel 1*)
img = Import[locate];
i = ImageCorrelate[img, ker, NormalizedSquaredEuclideanDistance];
dots = Point[#[[2]]] & /@
ComponentMeasurements[
MorphologicalComponents[ColorNegate[Binarize[i, 0.22]]],
"Centroid"];
Length[dots]
Show[img, Graphics[{Red, dots}]]
ker2 = (*Image of Kernel 2*)
i = ImageCorrelate[img, ker2, NormalizedSquaredEuclideanDistance];
dots2 = Point[#[[2]]] & /@
ComponentMeasurements[
MorphologicalComponents[ColorNegate[Binarize[i, 0.22]]],
"Centroid"];
Length[dots2]
Show[img, Graphics[{Red, dots2}]]
ker3 = (*Image of Kernel 3*)
i = ImageCorrelate[img, ker3, NormalizedSquaredEuclideanDistance];
dots3 = Point[#[[2]]] & /@
ComponentMeasurements[
MorphologicalComponents[ColorNegate[Binarize[i, 0.22]]],
"Centroid"];
Length[dots3]
Show[img, Graphics[{Red, dots3}]]
Length[dots] + Length[dots2] + Length[dots3]
You may want to do some processing on the image before you Binarize it.
Working with the Blue channel of your jpg:
img = ColorSeparate[Import["tRGTt.jpg", "JPG"]][[2]];
Use a BottomHatTransform to correct the background
img2 = Binarize[BottomHatTransform[img, DiskMatrix[15]]]
There are many ComponentMeasurements options you can play with to be selective about the components it picks.
morph = MorphologicalComponents[img2, CornerNeighbors -> True];
comps = ComponentMeasurements[morph, {"Centroid", "EquivalentDiskRadius", "Circularity"}, ((5 < #2 < 30) && (#3 > 0.2)) &];
Show[Image[img, "Byte", ImageSize -> Large], Graphics[{Red, Map[Circle[#, 15] &, comps[[1 ;;, 2, 1]]]}]]
It's definitely not perfect: there are some false positives, and it misses a lot of your faint, out-of-focus particles, but hopefully it's enough to give you a start! In your measurement, is it possible to keep your particles in the focal plane of your imaging system to make your processing easier? For example, by making a thinner flow cell for your particles?
• That looks like it works great for the faint particles. But I realized that I was missing the point of this counting experiment. I was trying to count particles that were out of focus - aka not at the depth of the slide that I was concerned with. – Daniel Schulze Sep 27 '15 at 19:25
• So what I did was use images which correspond to "in-focus" particles and counted those, using the method that matches the pictures of in-focus particles to the picture of the input slide. – Daniel Schulze Sep 27 '15 at 19:27
|
# Does the lagrangian contain all the information about the representations of the fields in QFT?
Given the Lagrangian density of a theory, are the representations on which the various fields transform uniquely determined?
For example, given the Lagrangian for a real scalar field $$\mathscr{L} = \frac{1}{2} \partial_\mu \varphi \partial^\mu \varphi - \frac{1}{2} m^2 \varphi^2 \tag{1}$$ with $(+,-,-,-)$ Minkowski sign convention, is $\varphi$ somehow constrained to be a scalar, by the sole fact that it appears in this particular form in the Lagrangian?
As another example: consider the Lagrangian $$\mathscr{L}_{1} = -\frac{1}{2} \partial_\nu A_\mu \partial^\nu A^\mu + \frac{1}{2} m^2 A_\mu A^\mu,\tag{2}$$ which can also be cast in the form $$\mathscr{L}_{1} = \left( \frac{1}{2} \partial_\mu A^i \partial^\mu A^i - \frac{1}{2} m^2 A^i A^i \right) - \left( \frac{1}{2} \partial_\mu A^0 \partial^\mu A^0 - \frac{1}{2} m^2 A^0 A^0 \right). \tag{3}$$ I've heard$^{[1]}$ that this is the Lagrangian for four massive scalar fields and not that for a massive spin-1 field. Why is that? I understand that it produces a Klein-Gordon equation for each component of the field: $$( \square + m^2 ) A^\mu = 0, \tag{4}$$ but why does this prevent me from considering $A^\mu$ a spin-1 massive field?
[1]: From Matthew D. Schwartz's Quantum Field Theory and the Standard Model, p.114:
A natural guess for the Lagrangian for a massive spin-1 field is $$\mathcal{L} = - \frac{1}{2} \partial_\nu A_\mu \partial^\nu A^\mu + \frac{1}{2} m^2 A_\mu^2,$$ where $A_\mu^2 = A_\mu A^\mu$. Then the equations of motion are $$( \square + m^2) A_\mu = 0,$$ which has four propagating modes. In fact, this Lagrangian is not the Lagrangian for a massive spin-1 field, but the Lagrangian for four massive scalar fields, $A_0, A_1, A_2$ and $A_3$. That is, we have reduced $4 = 1 \oplus 1 \oplus 1 \oplus 1$, which is not what we wanted.
This post imported from StackExchange Physics at 2014-11-27 10:37 (UTC), posted by SE-user glance
asked Nov 26, 2014
Why would writing out the Lagrangian for each component of a vector field prevent you from viewing the vector field as a vector field? I think whatever you heard about not being able to do so is wrong.
This post imported from StackExchange Physics at 2014-11-27 10:37 (UTC), posted by SE-user bechira
Also re the title, at least within the scope of what you're asking, the Lagrangian specifies the representation by virtue of being written in terms of a field in some specific rep, e.g. a scalar field Lagrangian specifies the dynamics of a scalar field, not a vector one. But of course that says nothing about not being able to view components of a vector field as scalar fields.
This post imported from StackExchange Physics at 2014-11-27 10:37 (UTC), posted by SE-user bechira
If each component of A satisfies the Klein-Gordon equation, that doesn't necessarily mean that the components of A transform like a vector under Lorentz transformations.
This post imported from StackExchange Physics at 2014-11-27 10:37 (UTC), posted by SE-user jabirali
For the actual Lagrangians describing a vector field, google the Maxwell Lagrangian (massless spin-1 field) and Proca Lagrangian (massive spin-1 field).
This post imported from StackExchange Physics at 2014-11-27 10:37 (UTC), posted by SE-user jabirali
@glance I see I misunderstood the question a bit the first time. The author is saying that the Lagrangian constructed is not in the desired 3+1 rep, as jabirali pointed out here.
This post imported from StackExchange Physics at 2014-11-27 10:37 (UTC), posted by SE-user bechira
Comment to the question (v5): As M. Schwartz mentions on top of p. 115, the energy density for the Lagrangian (2) is not bounded from below because the kinetic term of the $A_0$ field has the wrong sign, and hence the theory is not physical in the first place. Therefore the discussion of possible representations and interpretations of (2) seems somewhat academic. On the other hand, if $A_0$ did not have the wrong sign, then $A_{\mu}$ could not be viewed as a 4-covector, but could only be interpreted as 4 scalars.
This post imported from StackExchange Physics at 2014-11-27 10:37 (UTC), posted by SE-user Qmechanic
Yes I understand that. However I'm trying to understand if there are also reasons/consistency arguments from the group-theoretical point of view. For example: why can't I say (or can I?) that $A^\mu$ is a spin-1 field for that choice of the Lagrangian? (despite the fact of it being unphysical for independent reasons). This is also addressed on that same page (p.115), and my question actually arises from that argumentation which I'm not sure I get.
This post imported from StackExchange Physics at 2014-11-27 10:38 (UTC), posted by SE-user glance
A field $\psi_{a_{1}...a_{n}\dot{b}_{1}...\dot{b}_{m}}$ with a given spin and mass (i.e., a field which transforms under an irrep of the Poincaré group) must satisfy certain conditions, called irreducibility conditions: $$\tag 1 \hat{W}^{2}\psi_{a_{1}...a_{n}\dot{b}_{1}...\dot{b}_{m}} = -m^{2}\frac{n + m}{2}\left(\frac{n + m}{2} + 1\right)\psi_{a_{1}...a_{n}\dot{b}_{1}...\dot{b}_{m}},$$ $$\tag 2 \hat{P}^{2}\psi_{a_{1}...a_{n}\dot{b}_{1}...\dot{b}_{m}} = m^{2}\psi_{a_{1}...a_{n}\dot{b}_{1}...\dot{b}_{m}}.$$ Here $\hat{W}$ is the Pauli-Lubanski operator and $\hat{P}$ is the translation operator. Representations with equal quantity $\frac{n + m}{2}$ are equivalent.
If you construct a Lagrangian which leads to $(1), (2)$, you will uniquely determine the transformation properties of a field with a given mass and spin.
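As a concrete check (my illustration, not part of the original answer): for a scalar field, $n = m = 0$, so the irreducibility conditions reduce to $$\hat{W}^{2}\varphi = 0, \qquad \hat{P}^{2}\varphi = m^{2}\varphi,$$ and with the question's $(+,-,-,-)$ convention and $\hat{P}_\mu = i\partial_\mu$, the second condition is exactly the Klein-Gordon equation $(\square + m^2)\varphi = 0$ that follows from Lagrangian (1).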
|
# Local limit theorems for finite and infinite urn models
Hsien-Kuei Hwang, Svante Janson. Local limit theorems for finite and infinite urn models, Annals of Probability, 36(3) (2008), 992--1022.
|