# Error creating PDF/A with LaTeX

I'm trying to create PDF/A through LaTeX. I loaded the pdfx package in my .tex file with the a-1b option:

\usepackage[a-1b]{pdfx}

but I got an error:

pdfTeX error (ext5): cannot open file for embedding ...eam attr{/N 4} file{sRGBIEC1966-2.1.icm}

along with the warning "\pdfobjcompresslevel > 0 requires \pdfminorversion > 4. Object streams disabled now." I don't really know what it means.

- This is a color profile. Printers need them to correctly print color PDFs. There are a number of profiles on the net. You can find this file here: https://github.com/bencomp/pdfx-ext/blob/master/sRGBIEC1966-2.1.icm
- Where should I copy it after having downloaded it? –  Mazzy Dec 21 '12 at 21:31
- The simplest way is to put it in your working directory, where the .tex file is. –  Boris Dec 21 '12 at 21:32
- OK, the error has gone... but I got another error: "Option clash for package hyperref. \def". What does it mean? –  Mazzy Dec 21 '12 at 21:34
- You probably call hyperref with some options that conflict with those provided by the pdfx package. Try removing the options from \usepackage[OPTIONS]{hyperref}. –  Boris Dec 21 '12 at 21:46
- Use \hypersetup after loading the packages. –  Boris Dec 21 '12 at 21:50
Electric Charge

Problem: The positive charge in (Figure 1) is +Q. What is the negative charge if the electric field at the dot is zero?

Solution: The electric field of a point charge has magnitude

$$E = \frac{kQ}{r^{2}}$$

The field due to the negative charge will point in the opposite direction. For the net electric field at the dot to be zero, the magnitude of the field due to the negative charge must equal the magnitude of the field due to Q.
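Since the figure is not reproduced here, the distances are unknown; writing $r_1$ for the distance from $+Q$ to the dot and $r_2$ for the distance from the negative charge $q$ to the dot (labels assumed for illustration), the zero-field condition works out to

$$\frac{kQ}{r_1^{2}} = \frac{k\,|q|}{r_2^{2}} \quad\Longrightarrow\quad |q| = Q\left(\frac{r_2}{r_1}\right)^{2}, \qquad q = -Q\left(\frac{r_2}{r_1}\right)^{2}.$$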
TITLE: Crises and Collective Socio-Economic Phenomena: Simple Models and Challenges
AUTHOR(S): Bouchaud, Jean-Philippe
PUB. DATE: May 2013
SOURCE: Journal of Statistical Physics; May 2013, Vol. 151 Issue 3/4, p567
DOC. TYPE: Article
ABSTRACT: Financial and economic history is strewn with bubbles and crashes, booms and busts, crises and upheavals of all sorts. Understanding the origin of these events is arguably one of the most important problems in economic theory. In this paper, we review recent efforts to include heterogeneities and interactions in models of decision-making. We argue that the so-called Random Field Ising model (RFIM) provides a unifying framework to account for many collective socio-economic phenomena that lead to sudden ruptures and crises. We discuss different models that can capture potentially destabilizing self-referential feedback loops, induced either by herding, i.e. reference to peers, or trending, i.e. reference to the past, and that account for some of the phenomenology missing in the standard models. We discuss some empirically testable predictions of these models, for example robust signatures of RFIM-like herding effects, or the logarithmic decay of spatial correlations of voting patterns. One of the most striking results, inspired by statistical physics methods, is that Adam Smith's invisible hand can fail badly at solving simple coordination problems. We also insist on the issue of time-scales, which can be extremely long in some cases and prevent socially optimal equilibria from being reached. As a theoretical challenge, the study of so-called 'detailed-balance' violating decision rules is needed to decide whether conclusions based on current models (which all assume detailed balance) are indeed robust and generic.
ACCESSION #: 86449506

## Related Articles

• Scaling and self-averaging in the three-dimensional random-field Ising model. Fytas, N. G.; Malakis, A. // European Physical Journal B -- Condensed Matter; Jan 2011, Vol. 79 Issue 1, p13. We investigate, by means of extensive Monte Carlo simulations, the magnetic critical behavior of the three-dimensional bimodal random-field Ising model at the strong disorder regime. We present results in favor of the two-exponent scaling scenario, $\bar{\eta} = 2\eta$, where $\eta$ and... • Disorder-driven first-order phase transformations: A model for hysteresis. Dahmen, Karin; Kartha, Sivan; Krumhansl, James A.; Roberts, Bruce W.; Sethna, James P.; Shore, Joel D. // Journal of Applied Physics; 5/15/1994, Vol. 75 Issue 10, p5946. Examines hysteresis in the zero-temperature random-field Ising model. Simulation results; transition in the model; description of the hysteresis loops at different disorders; avalanche size distribution in the hysteresis loop. • High-field magnetization measurements of the metastability boundary in a d=2 random-field Ising system. King, A. R.; Jaccarino, V.; Motokawa, M.; Sugiyama, K.; Date, M. // Journal of Applied Physics; 4/15/1985, Vol. 57 Issue 8, p3297. Presents a study which examined magnetization measurements of the random-field Ising model system. Methodology; results; discussion. • Hysteresis, metastability, and time dependence in d=2 and d=3 random-field Ising systems. Belanger, D. P.; Rezende, S. M.; King, A. R.; Jaccarino, V. // Journal of Applied Physics; 4/15/1985, Vol. 57 Issue 8, p3294. Presents a study which examined the hysteretic properties of random-field Ising model systems using neutron scattering. Methodology; results; discussion.
• Thermal critical behavior and universality aspects of the three-dimensional random-field Ising model. Malakis, A.; Fytas, N. G. // European Physical Journal B -- Condensed Matter; May 2006, Vol. 51 Issue 2, p257. The three-dimensional bimodal random-field Ising model is investigated using the N-fold version of the Wang-Landau algorithm. The essential energy subspaces are determined by the recently developed critical minimum energy subspace technique, and two implementations of this scheme are utilized.... • Analysis of a long-range random field quantum antiferromagnetic Ising model. Chakrabarti, B. K.; Das, Arnab; Inoue, Jun-ichi // European Physical Journal B -- Condensed Matter; Jun 2006, Vol. 51 Issue 3, p321. We introduce a solvable quantum antiferromagnetic model. The model, with Ising spins in a transverse field, has infinite range antiferromagnetic interactions and random fields on each site following an arbitrary distribution. As is well-known, frustration in the random field Ising model gives... • Random-field critical scattering. Jaccarino, V.; King, A. R.; Belanger, D. P. // Journal of Applied Physics; 4/15/1985, Vol. 57 Issue 8, p3291. Presents a study which examined the quasielastic scattering of neutrons in random-field Ising model (RFIM) systems in the critical region. Methodology; results of studies on RFIM systems; discussion. • Numerical results for the random field Ising model (invited). Pytte, E.; Fernandez, J. F. // Journal of Applied Physics; 4/15/1985, Vol. 57 Issue 8, p3274. Presents a study which performed numerical calculations for the random field ferromagnetic Ising model in two dimensions. Methodology; results; discussion. • Russian-Kazakhstani Relations: A Return of Moscow's Neo-Imperialist Rhetoric. Voloshin, Georgiy // Eurasia Daily Monitor; 2/27/2014, Vol. 11 Issue 38, p7. The article discusses the impact of a social media post by Eduard Limonov, former leader of Russia's National Bolshevik Party, which anticipates a forthcoming power transition in Kazakhstan and calls for occupying its northern provinces. It highlights the nationalist revolution in Ukraine, wherein it struggles to...
# LaTeX — Is there a way to shift the equation numbering one tab space from the right margin (shift towards left)?

I have been formatting my dissertation and one little problem has me stuck. I used the following code to typeset an equation:

\begin{align} & R=\frac{P^2}{P+S'} \label{eqn:SCS}\\ &\mbox {where} \quad \mbox R = \mbox {Watershed Runoff} \notag\\ &\hspace{0.63in} \mbox P = \mbox{Rainfall} \notag\\ &\hspace{0.63in} \mbox S' = \mbox{Storage in the watershed $=\frac{1000}{CN}-10$ }\notag \end{align}

My output requirement is such that: the equation should begin one tab space from the left margin, and the equation number should end at one tab space from the right margin. With the above code, I have the equation begin at the right place but not the numbering. Any help will be extremely appreciated. Thanks, MP

- Looks like LaTeX, could you confirm? –  Mats Fredriksson Mar 8 '10 at 18:10
- Sorry about the confusion. Yeah, it is a LaTeX issue. Thanks, MP –  Murari Mar 8 '10 at 18:34

Best I could do. Notice the fleqn option and the minipage environment. If you need this all the time, you'll have to redefine the align environment accordingly.

\documentclass[fleqn]{article} \usepackage{amsmath} \begin{document} Lorem Ipsum is simply dummy text of the printing and typesetting industry. \begin{minipage}{0.8\textwidth} \begin{align} & R=\frac{P^2}{P+S'} \label{eqn:SCS}\\ &\mbox {where} \quad \mbox R = \mbox {Watershed Runoff} \notag\\ &\hspace{0.63in} \mbox P = \mbox{Rainfall} \notag\\ &\hspace{0.63in} \mbox S' = \mbox{Storage in the watershed $=\frac{1000}{CN}-10$ }\notag \end{align} \end{minipage} Lorem Ipsum is simply dummy text of the printing and typesetting industry. \end{document}

EDIT A more elegant and symmetric version:

\documentclass[fleqn]{article} \usepackage{amsmath} \makeatletter \setlength\@mathmargin{0pt} \makeatother \begin{document} \noindent Lorem Ipsum is simply dummy text of the printing and typesetting industry. The margins of the quotation environment are indented on the left and the right. The text is justified at both margins and there is paragraph indentation. Leaving a blank line between text produces a new paragraph. \begin{quotation} \begin{align} & R=\frac{P^2}{P+S'} \label{eqn:SCS}\\ &\mbox {where} \quad \mbox R = \mbox {Watershed Runoff} \notag\\ &\hspace{0.63in} \mbox P = \mbox{Rainfall} \notag\\ &\hspace{0.63in} \mbox S' = \mbox{Storage in the watershed $=\frac{1000}{CN}-10$ }\notag \end{align} \end{quotation} Lorem Ipsum is simply dummy text of the printing and typesetting industry. The margins of the quotation environment are indented on the left and the right. The text is justified at both margins and there is paragraph indentation. Leaving a blank line between text produces a new paragraph. \end{document}

- Thanks!! Using minipage shifted the equation number one tab space to the right!!! Awesome!! But it moved the equation two tab spaces from the left margin. Is there a way I can shift the equation back to 1 tab space? –  Murari Mar 8 '10 at 19:45
- Nevermind, I had been using \setlength{\mathindent}{0.5in}, which made my equation begin at 1 in from the left margin. I modified the code as follows and it worked JUST LIKE THAT!!!! SWEET... \begin{minipage}{5.5in} \setlength{\mathindent}{0.0in} \begin{align} & R=\frac{P^2}{P+S'} \label{eqn:SCS}\\ &\mbox {where} \quad \mbox R = \mbox {Watershed Runoff} \notag\\ &\hspace{0.63in} \mbox P = \mbox{Rainfall} \notag\\ &\hspace{0.63in} \mbox S' = \mbox{Storage in the watershed $=\frac{1000}{CN}-10$ }\notag \end{align} \end{minipage} You made my day!!!!
–  Murari Mar 8 '10 at 20:00

- \hspace{.63in} brrrr.... The following code solves your problem.

\newskip \tabspace \tabspace = 3em % Set tab space
\def\eq{\refstepcounter{equation}(\theequation)}% To insert the next number
{ \let\\\cr \tabskip 0pt \halign to \hsize{\hskip\tabspace$\displaystyle#$\hfil&#\hfil \tabskip 0pt plus 1fil &\hfil#\hskip\tabspace\tabskip 0pt\crcr \hbox{where }&$R$ = Watershed Runoff\cr &$P$ = Rainfall\cr &$S'$ = Storage in the watershed $=\frac{1000}{CN}-10$\cr
# Berry-Esseen inequality for the event $a<S_n<b$

Suppose that $X_i$ are independent identically distributed with finite variance and $S_n=X_1+\cdots+X_n$. One can use the Central Limit Theorem to estimate (a) $P(S_n \leq b)$ and (b) $P(a<S_n \leq b)$. The Berry-Esseen theorem bounds the maximum possible error in the first case (a): the error is not greater than $C\frac{\rho}{\sigma^3\sqrt{n}}$, where $\rho = E|X_1-\mu|^3$. Using the fact that $P(a<S_n \leq b)=P(S_n \leq b)-P(S_n \leq a)$, I may obtain a trivial bound for the error in case (b): $2C\frac{\rho}{\sigma^3\sqrt{n}}$. Is this the best possible bound for the error in the second case (b)? Or is there something better?

- One cannot hope for anything smaller than $C\rho/(\sigma^3\sqrt{n})$, otherwise a strictly better bound than Berry-Esseen's bound would hold in case (a) (take the limit of (b) as $a\to-\infty$). So the trivial bound is off by at most the prefactor $2C$ instead of $C$. In the cases I know, this has zero consequence.
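As a sanity check, here is a small Monte Carlo sketch. It assumes centred Exp(1) summands and uses C ≈ 0.4748, a published admissible constant for the one-sided i.i.d. Berry-Esseen bound (Shevtsova 2011); the actual two-sided error should sit comfortably inside the doubled "trivial" bound:

```python
import numpy as np
from scipy.stats import norm

# Compare the true two-sided error |P(a < S_n <= b) - CLT estimate|
# with the doubled Berry-Esseen bound 2*C*rho/(sigma^3*sqrt(n)).
rng = np.random.default_rng(0)
n, trials = 100, 50_000
a, b = -5.0, 10.0

x = rng.exponential(1.0, size=(trials, n)) - 1.0   # centred: mu = 0, sigma = 1
S = x.sum(axis=1)
empirical = np.mean((S > a) & (S <= b))
clt = norm.cdf(b / np.sqrt(n)) - norm.cdf(a / np.sqrt(n))

rho = np.mean(np.abs(x[:, 0]) ** 3)       # Monte Carlo estimate of E|X - mu|^3
bound = 2 * 0.4748 * rho / np.sqrt(n)     # the "trivial" two-sided bound

print(f"empirical={empirical:.4f}  CLT={clt:.4f}  error={abs(empirical - clt):.4f}")
print(f"doubled Berry-Esseen bound: {bound:.4f}")
```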
# Determinant with unknown parameter

I'm given 4 vectors: $u_1, u_2, u_3$ and $u_4$. I'm going to type them in as points because it will be easier to read, but think of them as column vectors. $$u_1 =( 5, λ, λ, λ), \hspace{10pt} u_2 =( λ, 5, λ, λ), \hspace{10pt} u_3 =( λ, λ, 5, λ), \hspace{10pt}u_4 =( λ, λ, λ, 5)$$ The task is to find the values of λ for which the vectors are linearly dependent, as well as those for which they are linearly independent. I managed to figure out that I could put them in a matrix, let's call it $A$, and set $\det(A) = 0$ if the vectors should be linearly dependent, and $\det(A) \neq 0$ if the vectors should be linearly independent. Some help to put me in the right direction would be great!

- Compute the determinant to get a polynomial in the variable $\lambda$; find out for which $\lambda$'s the polynomial is zero vs. nonzero (corresponding to linearly dependent and independent respectively). –  anon Sep 26 '12 at 21:14
- @anon: That's a lot more effort than is required. –  joriki Sep 26 '12 at 21:28

If you write it as a matrix you will see the answer immediately... For a certain value of lambda all the vectors will be equal and thus linearly dependent. What is it? For another value of lambda the matrix will be a scalar times the identity matrix and thus the vectors linearly independent. What is this value?

- Well, if lambda were equal to 3, all the vectors would be equal. Is that the case? Thanks for helping out. –  marsrover Sep 29 '12 at 12:19

Here is a short cut. Form the $4\times 4$ matrix of all ones; call it $J$. Then your matrix can be represented as $$\lambda J - (\lambda - 5)I$$ $J$ is symmetric, hence (orthogonally) diagonalizable. It's easy to see that the matrix has rank $1$ and one of the eigenvalues is $4$. Therefore the diagonal form is $\mathrm{diag}(4,\ 0,\ 0,\ 0)$. Thus we are reduced to calculating the determinant of the diagonal matrix $$\mathrm{diag}(3\lambda + 5,\ 5-\lambda,\ 5-\lambda,\ 5-\lambda)$$ It is then easy to see that the vectors will be linearly dependent if and only if $\lambda = 5$ or $\lambda = \frac{-5}{3}$.

- nice observation and solution! –  Tapu Sep 26 '12 at 21:39

If $\lambda=0$, the vectors are clearly linearly independent. If $\lambda\ne0$, we can divide through by $\lambda$ without affecting whether the determinant vanishes; this yields $$\pmatrix{\frac5\lambda&1&1&1\\1&\frac5\lambda&1&1\\1&1&\frac5\lambda&1\\1&1&1&\frac5\lambda}\;.$$ Thus the values of $\lambda$ for which the determinant vanishes are those for which $$\frac5\lambda=1-\mu_i\;,$$ where $\mu_i$ is an eigenvalue of $$\pmatrix{1&1&1&1\\1&1&1&1\\1&1&1&1\\1&1&1&1}\;.$$ This matrix annihilates all vectors whose components sum to $0$, so it has an eigenspace of dimension $4-1=3$ corresponding to the eigenvalue $0$, and thus a triple eigenvalue $0$. Since it is symmetric, its four eigenvectors can be chosen to form an orthonormal system, so the fourth eigenvector is a vector orthogonal to that eigenspace, e.g. $(1,1,1,1)$, which corresponds to eigenvalue $4$. Thus we have the two possibilities $5/\lambda=1-0$, corresponding to $\lambda=5$, and $5/\lambda=1-4$, corresponding to $\lambda=-5/3$. All other values of $\lambda$ lead to linearly independent vectors.

- Your guess is correct. But what is the difficulty then? Maybe you are unable to get the determinant in a simpler way..?
$$\left|\begin{array}{cccc} 5&\lambda&\lambda&\lambda\\\lambda&5&\lambda&\lambda\\\lambda&\lambda&5&\lambda\\\lambda&\lambda&\lambda&5\end{array}\right|=(5-\lambda)^3\left|\begin{array}{rrrr} 1&0&0&\lambda\\-1&1&0&\lambda\\0&-1&1&\lambda\\0&0&-1&5\end{array}\right|=(5-\lambda)^3\left[\left|\begin{array}{rrr}1&0&\lambda\\-1&1&\lambda\\0&-1&5 \end{array}\right|+\left|\begin{array}{rrr}0&0&\lambda\\-1&1&\lambda\\0&-1&5 \end{array}\right|\right]$$ So, finally, the determinant is $$(5-\lambda)^3\left[(5+\lambda)+\lambda+\lambda\right]=(5-\lambda)^3(5+3\lambda)$$

Here is another way: $\left[\begin{matrix}5&\ell&\ell&\ell\\\ell & 5&\ell&\ell\\\ell&\ell&5&\ell\\\ell&\ell&\ell&5\end{matrix}\right]\sim\left[\begin{matrix}5 &\ell&\ell&\ell\\0&5-\ell &\ell-5&0\\0&0&5-\ell&\ell-5\\\ell&\ell&\ell&5\end{matrix}\right]\sim\left[\begin{matrix}5 &\ell&\ell&\ell\\0&5-\ell &\ell-5&0\\0&0&5-\ell&\ell-5\\\ell-5&0&0&5-\ell\end{matrix}\right]\sim\left[\begin{matrix}5+\ell &\ell&\ell&\ell\\0&5-\ell &\ell-5&0\\\ell-5&0&5-\ell&\ell-5\\0&0&0&5-\ell\end{matrix}\right]$ so $\left|\begin{matrix}5+\ell &\ell&\ell&\ell\\0&5-\ell &\ell-5&0\\\ell-5&0&5-\ell&\ell-5\\0&0&0&5-\ell\end{matrix}\right|=(5-\ell)\left|\begin{matrix}5+\ell&\ell&\ell\\0&5-\ell&\ell-5\\\ell-5&0&5-\ell\end{matrix}\right|=(5-\ell)\left|\begin{matrix}5+2\ell&\ell&\ell\\\ell-5&5-\ell&\ell-5\\0&0&5-\ell\end{matrix}\right|=(5-\ell)^{2}\left|\begin{matrix}5+2\ell&\ell\\\ell-5&5-\ell\end{matrix}\right|=(5-\ell)^3(5+3\ell)$ and see for which values of λ the determinant is zero.
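Both derivations are easy to check symbolically; a quick sketch with SymPy:

```python
import sympy as sp

lam = sp.symbols('lambda')
# Matrix with 5 on the diagonal and lambda everywhere else.
A = sp.Matrix(4, 4, lambda i, j: 5 if i == j else lam)

print(sp.factor(A.det()))      # (5 - lambda)**3 * (3*lambda + 5), up to sign arrangement
print(sp.solve(A.det(), lam))  # [-5/3, 5] -> linearly dependent exactly there
```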
# Finding a largest symmetrical subset of a k-CNF propositional formula

I have a k-CNF propositional formula $F$ which does not admit any global symmetry, i.e. there is no permutation $\sigma$ of its variables such that $\sigma(F) = F$. I'm interested in finding the largest (in terms of the number of clauses) subset of the clauses of $F$ which admits at least one symmetry. A brute-force approach could be to generate all the subsets of $F$ in descending order of their sizes and test whether they admit at least one symmetry; the test here could be performed via a reduction to the graph automorphism problem (a toy sketch of this brute force appears after the comments below). But this approach could be unusable for large formulas. How can I efficiently determine such a subset of $F$? By efficient here, I mean an algorithm that performs well in practice but that is not necessarily polynomial. Is the complexity of this problem similar to that of the Max-SAT problem?

• Are you interested in subsets defined by discarding clauses, discarding variables, or both? – Stella Biderman Aug 22 '18 at 12:26
• I'm interested in any of them, but mainly in subsets obtained by discarding some clauses. – RTK Aug 22 '18 at 12:29
• I deleted my answer because I had missed that you discussed Graph Automorphism in your post. I don't think there's likely to be a better solution than what that approach gives. – Stella Biderman Aug 22 '18 at 18:49
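For very small instances, the brute force described in the question can be written down directly. This is a hedged sketch only: it is exponential in both the number of clauses and variables, and it tests permutations directly instead of using the graph-automorphism reduction that practical tools (e.g. nauty/saucy-based ones) would use:

```python
from itertools import combinations, permutations

def apply_perm(clause, perm):
    # perm maps variable v -> perm[v]; literals keep their sign.
    return frozenset((1 if lit > 0 else -1) * perm[abs(lit)] for lit in clause)

def has_nontrivial_symmetry(clauses, variables):
    # Look for a non-identity variable permutation mapping the clause set to itself.
    for p in permutations(variables):
        perm = dict(zip(variables, p))
        if all(perm[v] == v for v in variables):
            continue
        if {apply_perm(c, perm) for c in clauses} == set(clauses):
            return True
    return False

def largest_symmetric_subset(clauses, variables):
    # Enumerate clause subsets in descending size, as the question describes.
    for size in range(len(clauses), 0, -1):
        for subset in combinations(clauses, size):
            if has_nontrivial_symmetry(set(subset), variables):
                return set(subset)
    return set()

# Clauses as frozensets of signed integers (negative = negated variable).
F = [frozenset(c) for c in [(1, 2), (-1, 3), (-2, 3), (1, -3)]]
print(largest_symmetric_subset(F, [1, 2, 3]))
# -> the 3-clause subset {(1,2), (-1,3), (-2,3)}, symmetric under swapping 1 and 2
```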
# Tag Info

66 I can give you an answer because I had been one of those who either wrote or verified the specs of semiconductor ICs. Legally and ethically speaking, I could only sign off on the parameters within which we had verified the IC/processor would work. And then my boss, and her/his boss, and everyone else would see the evidence of the tests, and they too would ...

63 The maximum temperature the silicon experiences can be much more than ambient. 50 °C ambient certainly happens. That's only 122 °F. I've personally experienced that in the Kofa Wildlife refuge north of Yuma, Arizona. You need to design to the worst case, not the wishful case. So let's say ambient can be 60 °C (140 °F). That by itself isn't much ...

53 I once designed an amplifier that would oscillate at -10°C. I fixed it by changing the design to add more phase margin. In this case, the oscillation did not cause any damage, but the circuit did not work well in this condition, and it caused errors. These errors went away at higher temperatures. Some plastics crack when they freeze. Dry ice is -78.5°C, and ...

52 First of all, military equipment is expensive. You can afford to actually test things for high temperatures only if your customer is willing to pay. Military customers tend to have budgets that normal people can only dream of. Then, obviously, if you put an IC into a missile, you might not want that thing to fail if your missile gets hot from its burning ...

42 2nd Edit! Modified my answer about semiconductors based on jk's answer below; read the history if you want to see the wrong bits I modified! Everything gets weird within certain limits. I mean, sure, the resistance improves in conductors but it increases in semiconductors, and that change affects how the IC works. Remember that the way that transistors ...

36 There are two effects going on: the heat-sinking effect of the connections and the temperature coefficient of the wire. Initially the wire is all at the same temperature. You turn the power on and it starts to heat up. The heating is determined by the electrical power dissipation in the wire; for any given section of the wire, Power = Current * Voltage. ...

21 Realistically it's very difficult to measure to that system level of accuracy. The particular sensor you show is DIN class A tolerance, meaning that the maximum error of the sensor alone is 150 mK + 2 mK*|T| (with T in degrees C). So at 100 degrees C, the maximum sensor error alone (not counting self-heating) is 350 mK, 35 times what you say you want. This type ...

20 Other than maybe batteries and maybe the LCD, components generally don't get damaged directly, even by extreme cold temperatures. If temperatures are changed to extremes, especially rapidly, there can be physical damage due to mismatched contraction with temperature or temperature gradients. However, operation at cold temperatures may not be possible ...

20 In the relevant part of the infrared spectrum for this application, air has a high transmittance (transmittance spectrum figure omitted; original licensed under CC BY-NC-SA 3.0 US). So wherever you point an IR temperature sensor, the output will be dominated by the surface you are pointing at and not the air in between. ...

20 Whoever said it probably did so in person, and the odds that any of us met the true first person are low. This is the sort of engineering observation that could have been invented by several different people back in 1940, for all we know.
Elecia White's 2011 Making Embedded Systems quotes this anonymously, so the latest possibility is 2010. Her PowerPoint ...

19 The temperature on the X axis is the temperature difference between the two thermocouple junctions. That's not the absolute temperature. If both junctions are at 0 °C, the thermocouple voltage would be 0 V. If both junctions are at 100 °C, the thermocouple voltage would still be 0 V.

18 I have powered the board for days at a time. The code that was running was very simple, but there was absolutely no damage. It is worth noting that it was being powered by a pre-regulated 5 V source, so the on-board regulators were not burning up. I doubt that with anything lower than 9 V there could be any sort of hardware damage, but with larger ...

17 Thermocouples: wide range of temperature sensing (Type T = -200-350°C; Type J = 95-760°C; Type K = 95-1260°C; other types go to even higher temperatures); can be very accurate; sensing parameter = voltage generated by junctions at different temperatures; thermocouple voltage is relatively low (4.3 mV for a Type T thermocouple with one end at 0 °C, the other at 100 °C, ...

17 Military (and aerospace in general) equipment is often: In an unpressurised bay, which means cooling the equipment is by conduction. Convection cooling loses meaning at 30,000 feet as there are very few air molecules to transfer heat by convection. It is much more difficult to effectively transfer heat by conduction only. In a glare zone (think just under ...

16 The bond wires are not soldered to the die. They are predominantly attached via a process called gold ball bonding. It uses gold bond wires and welds them to a gold pad on the die using a combination of heat, pressure, and ultrasonic energy. Gold melts at a much higher temperature than any soldering process, making for a solid bond to the die.

16 Thermal inertia is playing against you. Also take into account that lead-free solder needs temperatures in excess of 220°C to melt (compared to 180°C for tin-lead solder), so the thermal gradient will be quite high to begin with. Because of this, I would recommend preheating the board to 120°C by using one of the following methods: a preheating plate, ...

15 I'd say NTC, yes. This one is the cheapest I could find at Digikey. About half a dollar - that's much cheaper than temperature sensor ICs, which have about the same precision. The advantage of an NTC is that it only needs a series resistor and an ADC input on your microcontroller, which most do have nowadays. The low price also has a disadvantage: NTCs are ...

15 Storing batteries/cells in a refrigerator slows down their rate of self-discharge, which is a good thing. See graphs of self-discharge rates as a function of temperature for different battery chemistries (graphs for SLA and alkaline chemistries omitted). ...

14 The specification says that the ACK consists of a low level after the 8th clock pulse, as shown by this diagram (omitted). The bus master will generate a 9th clock pulse to read the level. The specification doesn't talk about pulsing ACK, and the master will not take notice of it either. Follow the spec and take care of data setup and hold times (250 ns and 5 ...

14 Kit's answer is dead right about components in space, but I thought I'd expand a bit on semiconductors vs conductors (very loosely, without the maths). A conductor's resistance decreases with a drop in temperature.
This is loosely because the resistance comes from the free-flowing electrons being slowed down by vibrations in the crystal lattice they are ...

14 The basic problem is that the density of the "free" charge carriers in semiconductors is a strong function of temperature. When the temperature drops low enough, there just aren't enough available carriers to allow the transistors, etc. to function, and the effective series resistance of the bulk semiconductor rises as well. The overall gain of the circuit ...

14 The crude way you'd ensure you were not on the edge of operation would be to test it outside the range. For example, you might test the parts at -65°C at a higher clock speed and higher/lower voltage than normal. The manufacturer probably does not test at temperature extremes themselves, but they know how much margin is required under test ...

14 This is well beyond the ratings of most parts. You can expect outright failures, major departures from guaranteed specs, flaky (e.g. partial) operation, huge leakage and so on. Unless you buy qualified parts, you are on your own, so you are looking at major costs, and it may not be possible to thoroughly test some parts without inside information. Downhole ...

14 BGAs have very good thermal contact with the PCB - the total cross-section of all the balls is quite large. So, whatever the solder type, the whole PCB is sinking the heat from your BGA, and you have to preheat all of it to 150 °C; then the power flow will become much lower (delta T is lower) and you will not need more than 300-350 °C.

13 You cannot guarantee more accuracy, but you can possibly get a better signal-to-noise ratio. Imagine if all the sensors were off by the same amount as allowed in the specs. Averaging them would not yield better accuracy. If you had a reasonably large number of these sensors and they had a random error distribution within their allowed error band, then you ...

13 FR4 PCB is glass-reinforced epoxy laminate. Several research studies have been published on the effect of low temperatures on such material. A specific quote from the paper "Dynamic failure behavior of glass/epoxy composites under low temperature using Charpy impact test method" (Shokrieh et al): it is found that the failure mechanism changes from matrix ...

13 Let's break your question into sub-questions. Faster computer: the most common measure of a computer's "speed" is its maximum clock frequency. This measure has never been an accurate one (megahertz myth), but it became totally unimportant in recent years after multi-core processors became standard. In today's computers, the top performance is determined by ...

13 I would be inclined to use traces on or in the PCB as a direct heater, as you suggest. Obviously, you'd start by insulating the board as much as possible to minimize the energy required to maintain temperature. Pay particular attention to external electrical connections, which can also be good conductors of heat. Extra lengths of wire buried in the ...

13 I've used FR4 at 4 K and others have used it at much lower temperatures. The physical characteristics go somewhat downhill at low temperatures, but board failure such as delamination does not normally occur from mere exposure to cold temperatures. The Charpy tests referred to in your linked answer are a measure of the strength of a notched specimen under shock (...

13 Several other comments and answers have mentioned that electronic circuits need to be in enclosures and their own heat production makes it hot in there.
That has not been stressed enough. For industrial, commercial and automotive equipment, electronic circuits often need to be sealed up in tightly sealed enclosures to keep out all sorts of contaminants. In ...
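As an illustration of the NTC-plus-series-resistor approach mentioned in one of the answers above, here is a hedged Python sketch using the simple Beta model. All component values (10 kΩ NTC, β = 3950 K, 10 kΩ series resistor, 3.3 V supply, 12-bit ADC) are illustrative assumptions, not values from the answer:

```python
import math

V_SUPPLY = 3.3
R_SERIES = 10_000.0                       # series resistor to the supply
R0, T0, BETA = 10_000.0, 298.15, 3950.0   # NTC nominal resistance at 25 C, Beta in K

def ntc_temperature_c(adc_code: int, adc_max: int = 4095) -> float:
    v = V_SUPPLY * adc_code / adc_max      # voltage at the divider mid-point
    r_ntc = R_SERIES * v / (V_SUPPLY - v)  # NTC on the low side of the divider
    inv_t = 1.0 / T0 + math.log(r_ntc / R0) / BETA   # Beta-model equation
    return 1.0 / inv_t - 273.15

print(round(ntc_temperature_c(2048), 1))  # mid-scale reading -> ~25.0 C
```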
How many elements does P(A) have if A = Φ?

Question: How many elements does P(A) have if A = Φ?

Solution: We know that if $A$ is a set with $m$ elements, i.e. $n(A)=m$, then $n[P(A)]=2^{m}$. If $A=\Phi$, then $n(A)=0$. $\therefore n[\mathrm{P}(\mathrm{A})]=2^{0}=1$. Hence, $P(A)$ has one element.
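The same fact is easy to confirm programmatically; a small Python sketch:

```python
from itertools import chain, combinations

def power_set(s):
    # All subsets of s, from size 0 up to len(s).
    return [set(c) for c in chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

print(power_set(set()))            # [set()] -- exactly one element, the empty set itself
print(len(power_set({1, 2, 3})))   # 2**3 == 8
```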
# A Flanner in the Works for Snow and Ice calculations

Posted on 23 January 2011 by MarkR

Endnote and calculations (warning, maths!)

I was always told to show my working so that everyone else could see where I'd gone wrong. So here's some maths for those unfamiliar with climate feedbacks. It's an endnote to my examination of a new paper; those who want to read something interesting should go there.

The change in temperature of the climate in response to a forcing F can be written:

$\Delta T = \frac{dT}{dF}\Delta F$

We then define a feedback factor, Y:

$Y = \frac{dF}{dT}$

which turns the first equation into:

$\Delta T = \Delta F / \sum_{i=1}^N Y_i$

where Y has been decomposed into a sum of N $Y_i$ values, each representing an individual feedback like snow, clouds or water vapour. We can further decompose each feedback into a product of differentials:

$Y_i = \frac{dF}{d\alpha_i}\frac{d\alpha_i}{dT}$

where $\alpha_i$ is the $i$th feedback variable and the differentials explain how it is related to temperature, and how the radiation response is related to it. So far all we've done is use the properties of differentials in an entirely mathematical sense and then made 2 assumptions: that we see an approximately linear response, and that the feedback parameter can be split into some linear superposition of individual elements that can be measured.

In the case of snow and ice we need to know its change in albedo for a change in temperature, and the change in flux for a change in albedo, to calculate Y using the above equation. The change in albedo requires some more information, all of which can be measured by satellite, and this is how Flanner et al calculated their feedback parameter.

The first feedback calculated is usually the 'blackbody feedback' - as Earth warms up it radiates more - and this has a value of about 3.3 W m-2 K-1. By definition, positive feedbacks are subtracted from this value, whilst negative ones are added (blackbody is positive since it dumps heat from the Earth as temperatures rise). You can use this to do calculations by knowing the initial radiative forcing: a common one is the 3.7 W m-2 you get from doubling CO2. Therefore the 'climate sensitivity', or temperature change from a doubling of CO2, assuming approximate linearity in the range we're looking at so that ΔT and ΔF can be used, is:

$\Delta T_{2\times CO_2} = \frac{3.7}{3.3 - \sum_{i=2}^{N} Y_i}$

where the minus sign comes from the convention of now taking positive feedbacks to be a positive value of Y, and the summation starts at i=2 because I gave the blackbody feedback the index of 1 and it is included as the 3.3 in the denominator. A climate sensitivity of 3 °C is the classic middle value given by the IPCC. This implies a Y value of about 1.2 using the model results.
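As a quick check of this arithmetic, here is a minimal Python sketch. It reproduces the Y ≈ 1.2 figure and, anticipating the Flanner et al. update quoted in the next paragraph (snow/ice feedback 0.39 instead of 0.2 W m-2 K-1), the boosted sensitivity:

```python
# Equilibrium sensitivity: dT = dF / (3.3 - sum of non-blackbody feedbacks),
# with dF = 3.7 W/m^2 for doubled CO2 (figures from the text above).
DF_2XCO2 = 3.7       # W m-2
BLACKBODY = 3.3      # W m-2 K-1

y_total = DF_2XCO2 / 3.0          # ~1.23 W m-2 K-1, the Y implied by a 3 C sensitivity
y_other = BLACKBODY - y_total     # implied sum of the other (non-blackbody) feedbacks
print(round(y_total, 2))          # 1.23

# Swap the snow/ice term from 0.2 to Flanner et al.'s 0.39 W m-2 K-1:
y_boosted = y_other + (0.39 - 0.20)
print(round(DF_2XCO2 / (BLACKBODY - y_boosted), 2))   # ~3.55 C, roughly a 20% increase
```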
If Flanner et al's values are correct and are maintained, then the global snow/ice feedback is increased to 0.39 from the current estimate of 0.2, assuming that Southern Hemisphere changes match model predictions. This new value of Y boosts the sensitivity to about 3.5-3.6 °C, which is a 20% increase. This is, of course, an approximation and I haven't bothered to apply this modified Y to the entire probability distribution function of climate sensitivities, but now you have the tools to DIY!

Further edit

Ken Lambert was unhappy with 'jumping' straight to equilibrium and wanted to see what happens when you look at the change moment by moment. So here we go: I'll try the transient climate change case, and we will find exactly the same answer as above!

Assume that there is some 'forcing' applied to the climate, dF, and that the Earth changes in response to this. It will warm up or cool down and this will change the rate of heat escape from Earth, e.g. dF causes a warming so the Earth radiates more. I'll call the feedback df. So the net heat flow dQ is:

$dQ = dF - df$

However, the feedbacks are a function of temperature, so we can re-write df using the basic properties of calculus:

$df = \frac{df}{dT}dT$

And by defining the differential as the letter 'Y' we have our feedback parameter again. By measuring between two finite temperatures, we can take the mean of the path integral of Y from T1 to T2 as a value of Y that is valid in the equation:

$\Delta Q = \Delta F - Y \Delta T$

The climate has heat capacity C, and the change in temperature must be the total change in energy in the system ($\int \Delta Q \, dt$) divided by the heat capacity, which you can rearrange using the properties of calculus to:

$\frac{d\Delta T}{dt} = \frac{\Delta Q}{C}$

and then substitute this expression for $\Delta Q$ into the original heat flow equation to get a first order differential equation:

$\frac{d\Delta T}{dt} + \frac{1}{C}[Y\Delta T - \Delta F] = 0$

This can be solved by integrating factor, and we take t = 0 when ΔF = 0. The solution is:

$\Delta T(t) = e^{-Yt/C} \int_0^t \frac{\Delta F}{C}e^{Yt'/C} dt'$

The solution for a general t at constant ΔF is:

$\Delta T(t) = \frac{\Delta F}{Y}[1-e^{-Yt/C}]$

i.e. as t becomes very large, the exponential tends to 0 so you are left with:

$\Delta T_{eq} = \frac{\Delta F}{Y}$

which is the same result as before!
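The closed-form step response above is easy to verify numerically. Below is a minimal Python sketch (the parameter values are arbitrary illustrations, not climate estimates): it integrates $C \, d\Delta T/dt = \Delta F - Y\Delta T$ with forward Euler and compares against $\Delta T(t) = (\Delta F/Y)(1 - e^{-Yt/C})$:

```python
import numpy as np

DF, Y, C = 3.7, 1.2, 10.0     # constant forcing, feedback factor, heat capacity
dt, t_end = 0.001, 50.0

t = np.arange(0.0, t_end, dt)
dT = np.zeros_like(t)
for i in range(1, len(t)):
    # forward Euler step of C * d(dT)/dt = DF - Y*dT
    dT[i] = dT[i - 1] + dt * (DF - Y * dT[i - 1]) / C

analytic = (DF / Y) * (1.0 - np.exp(-Y * t / C))
print(np.max(np.abs(dT - analytic)))   # small discretisation error
print(dT[-1], DF / Y)                  # both approach the equilibrium dF/Y
```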
1. I have yet to read the paper, but the press release quoted Flanner as saying that they found an ~+15% increase in TOA forcing. Is that consistent with your calculations? A ~20% sensitivity increase seems huge. Maybe run this by him? BTW, on a related topic I just happened to see this new paper (abstract), which got almost no attention when it came out a couple of weeks ago (I only saw it because of an article in a co-author's hometown newspaper) but sure seems like it deserves some. AFAICT (haven't read the paper) lack of prior data makes it a snapshot rather than a trend analysis, but even so the indicated change (~-25%!) to the land sink is not small potatoes.

2. I agree with Steve Bloom: something is not right here. I think you may have mixed transient response with equilibrium sensitivity. This is quite a worthwhile discussion all the same.

3. These calculations are by definition for the equilibrium climate sensitivity. That's why I said at the bottom 'if the current pattern holds...' So if this relationship between temperature-albedo and albedo-flux holds, then you would expect a boost of 20% to climate sensitivity if your original estimate was 3 C and all other feedbacks retain their properties linearly. This sounds a bit weird, but I'm thinking of doing a post at some point to try and make it clearer. In effect the feedbacks 'reinforce each other' - the extra energy trapped by your change in albedo is 'recycled' multiple times by the enhanced greenhouse effect etc.

4. Thanks Mark. I see that you have a good point. However it is a bit brash to say that certain symbols "by definition" represent an actual physical quantity. Consider this possible physical process: spring now comes earlier; snow melts and becomes water; wet ground reflects less light than dry ground (I presume here). A century later, spring comes much earlier. Then, by the time each spring when the ground is now wet with melt water, it has dried. Albedo goes back to dry-ground albedo. (The wet-ground period comes earlier, when the sun is at a lower angle.) Result: equilibrium sensitivity is lower than what you project here.

5. Well, by definition the calculations are for an equilibrium sensitivity (this is true and physical), but what we don't know is whether the value of the feedback parameter we measure now will be the same by the time we reach equilibrium. Which is what your second paragraph is saying! I worded my post saying that if the current measurements are representative of what we ultimately get, then this means a big boost to climate sensitivity. But I also tried to make it clear that perhaps it's a case of things just melting more quickly than we expected to begin with, and maybe albedo feedback will 'slow down' and head back to model values. All the paper shows is that models underestimate what we expect to see so far; we can probably be confident about that (but of course we need to see further work for confirmation!) - but more work is needed to have greater confidence that we've underestimated the long-term feedback.

6. MarkR: I have an alternative idea. Flanner et al claim that cryosphere albedo feedback is larger than the models have. But those models still track real temperatures (more or less), right? That means that if we include the "correct" feedback, then they (or most of them) would be running too hot, no? Isn't it therefore more likely that something else in the models is off a bit (e.g. aerosol forcing, other feedbacks)?

7. 6 rocco: yes, I think that sounds sensible! Since temperatures roughly follow models, you've pointed out 2 potential consequences: #1 - other feedbacks have been overestimated. I hope so... but other observational studies (Dessler 08/10, Lauer 10, Chung 09) make this seem unlikely, although these are only really preliminary. #2 - the best estimates for forcings are off. If aerosol cooling is stronger than the best estimate (and this is well within the uncertainty) then the original RF is lower and the Y values remain as we have measured them - i.e. higher than models. But there are 2 other potential contributors I can think of: #3 - enhanced heat transport to the deep ocean. #4 - natural noise excluding deep ocean heat transport. Solar activity has declined, as has mean ENSO activity since the '90s. After all, the models include approximately a +/- 0.2 C uncertainty and this easily includes a +20% rate of global warming (for the past 30 years that would be +0.1 C or so), so these results don't necessarily mean the models and observations are in disagreement. I reckon you're right: if Flanner's results are good (and they look pretty solid IMO, but we should see if other groups can reproduce them), then it suggests something else is up.
Some combination of the above 4, plus probably other stuff, could be that something else.

8. MarkR #Original Post: Please correct me if I am making a basic error here, but the temperature change of a body of mass M and specific heat Sm is proportional to the energy E applied. If we assume that the mass M to be heated is the atmosphere/land/oceans and it has some average specific heat Sa (Joules/kg-degK), and both are roughly constants for this exercise, then M x Sa = K. Excuse my lack of maths symbols, but the temperature change between times 1 and 2 looks like this: (T2-T1) x K = E2-E1, or Delta T = Delta E / K .......Eqn (1). Delta E = F x Delta t, where Delta t is the time increment t2-t1 and F is a constant forcing. Therefore Delta T = F x Delta t / K .......Eqn (2). If F were a variable forcing then F x Delta t would be replaced by the integral of the function F wrt t. Perhaps MarkR could take me from Eqn (2) to his end result dT/dF.

9. I don't use equation (2)! If this doesn't work I can write up something with clearer equations. I start by assuming temperature is some function of the heat arriving at the surface in terms of power flux. I work entirely in flux-space, and if you know each element contributing to flux (radiative forcing + feedbacks) then you would have some function to explain it. From the properties of calculus, I get dT = (dT/dF)dF. I used change in flux arriving at the surface and you have used net radiative forcing at the top of the atmosphere - which are very different! When you assume a constant forcing just after (1), you're saying the difference between heat in and heat out is constant all the time. In a world with no feedbacks the world would warm up under a constant forcing and your value of F would actually be an F(t). You made some function of F=F(in)+F(out), which sounds sensible but can actually get very confusing. Whereas I am using some function of F(in) (which = F(out) because I'm considering equilibrium).

10. MarkR: Speaking of underestimated forcings, have you seen this? Could this really be true? Did we have a nuclear mini-winter without anyone noticing?

11. MarkR #9: Energy flux = Power (Forcing), which is the rate of change of energy. The unit is a Watt (Joule/sec), or a Watt/sq.m to relate it to a surface area. Power flux - presumably you mean the rate of change of power? That would be incremental F/t or differential dF/dt, which must be a Watt/sec or a Joule/sec^2 - right?? "If F was a variable forcing then F x Delta t would be replaced by the integral of function F wrt t." This explains the variable F. You integrate F(t) wrt time 't'. This effectively gives you the area under the F curve - whatever the F function is - and this represents the total energy gained by the mass between times t1 and t2. I can't see these relationships devoid of the time variable 't', which your dF/dT seems to do.

12. Ken Lambert: The energy flux is energy over unit surface and over unit time. The energy flux is then not equal to power, and its units are J/(m2*s) or W/m2. In your derivation in #8 you omitted the outgoing flux; this is why you get an ever increasing temperature even for a constant forcing, which is unphysical.

13. @rocco and MarkR: The difference between a climate sensitivity of 3°C or 3.5°C is too low to have such an effect. In other words: even if you run a model with a sensitivity of 2°C and one with 6°C, you reproduce the temperature of the last century quite well with both models!
See for example Knutti & Hegerl (2008): "The equilibrium sensitivity of the Earth's temperature to radiation changes", Nature Geoscience, Figure 4: "The observed global warming provides only a weak constraint on climate sensitivity. A climate model of intermediate complexity, forced with anthropogenic and natural radiative forcing, is used to simulate global temperature with a low climate sensitivity and a high total forcing over the twentieth century (2 °C, 2.5 W m−2 in the year 2000; blue line) and with a high climate sensitivity and low total forcing (6 °C, 1.4 W m−2; red line). Both cases (selected for illustration from a large ensemble) agree similarly well with the observed warming (HadCRUT 3v; black line) over the instrumental period (inset), but show very different long-term warming for SRES scenario A2 (ref. 101)." Thus we have to wait a few years ;-)

14. #11 Ken: my calculations are the effect at equilibrium; they are the result of the integral over time. Your simplistic model is not complete; a more complete model is ΔQ = ΔF - YΔT, where Q is the net flux (which you originally labelled F), F is the radiative forcing and Y is the feedback parameter. For transient calculations, you should use this equation - your original calculation assumed forcing increased at exactly the rate of the feedbacks (including heat dumped by the warming Earth's surface). My calculation was for equilibrium, i.e. where ΔQ = 0 and the equation becomes ΔF / Y = ΔT, which is what I put in this post! The time-varying nature is included because you take an average Y from the path integral of the feedbacks through temperature space. Also, you don't have to integrate F(t) wrt t because the total energy in the system in the past doesn't necessarily correspond to the temperature today. Analogy: take 2 kettles, boil one and leave the other one cold. Wait a day, then measure their temperatures. Both are the same temperature, even though the integrated energy history of the one that was boiled is higher! Equilibrium is achieved when power in equals power out.

15. #13 MartinS: thanks for the reminder, my response at 7 might be wrong! I stupidly forgot about the transient response: the typical response time τ = C / Y, where C is the heat capacity and Y the feedback factor. So response time increases linearly with total warming! An example is an instantaneous forcing at t=0. Assume a typical response time of 10 yr and feedbacks giving 3 C at equilibrium. The approximate response after 10 years is 1.9 C. Assuming that the actual sensitivity is 3.6 C, then after 10 years you actually expect 2.0 C, i.e. a 20% boost to equilibrium sensitivity has manifested as a 7% boost after 10 years. So it isn't linear in that sense! This is a simplistic explanation assuming dY/dt = 0 amongst other things, but the IPCC GCMs include these assumptions and it helps explain why we might not be able to detect a higher-than-expected sensitivity in the short term.

16. Rocco @10: Fascinating paper - just read the abstract. I know that I am not the only one, but I too have often wondered whether the nuclear bombs and tests might have in part been responsible for the slight cooling in global SATs during the mid 20th century.

17. The nuclear winter hypothesis came from fires, not the explosions. Here's an old paper on the topic: http://www.atmos.washington.edu/~ackerman/Articles/Turco_Nuclear_Winter_90.pdf
#16: "nuclear bombs and tests" I've looked at some of the online records; they actually worked out a formula for height and diameter of cloud vs. yield. Very few of the tests were large enough to put much volume into the stratosphere. '61-'62 was the worst period due to the test ban treaty during '59-'60. 0 0 19. MarkR #15 The forcing I labelled 'F' is the net imbalance forcing. That is all the warming forcings minus all the cooling forcings including S-B and WV and Ice albedo feedbacks. My F is the net warming imbalance of 0.9W/sq.m currently (09)estimated in the Trenberth paper (Aug09). Riccardo #12 == see above. Energy/time = Power which has the unit Watt (Joule/sec). The unit of area simply divides it into a rate per sq.m. "In your derivation in #8 you omitted the outgoing flux; this is why you get and ever increasing temperature even for a constant forcing, which is unphysical" My forcing F is the *net* forcing as described above. It will not remain constant as the various component forcings change. The main cooling forcing S-B will rise with T^4, so F will be a complex function which steadily approaches zero as temperature equilibrium is reached. ie: "You integrate F(t) wrt time 't'. This effectively gives you the area under the F curve - whatever the F function is and this represents the total energy gained by the mass between times t1 and t2." In reality at time t2, F approaches zero and the total energy absorbed by the Earth system will be equal to the area under the F curve representing (1) equilibrium Temperature rise x specific heat of the masses heated; plus (2) phase change heat of ice melted; plus (3) latent heat of extra water evaporated. 0 0 20. Ken Lambert your definition of forcing as including, to make it short, everything is quite uncommon; MarkR definition, which is kind of standard, is different. Anyways, given your definition, your eq. 2 in #8 is wrong since it applies only for constant F which is the case only in steady state. Beware, it does not need to "steadily approaches zero as temperature equilibrium is reached". If for example if you have a linearly increasing (in time) forcing (using the standard definition), your F will aproach a constant value different from zero. 0 0 21. Riccardo #20 Eqan (2) is not wrong. Read this again.... ""Delta E = F x delta t; where delta t is the time increment t2-t1 and F is a constant forcing. Therefore; Delta T = F x Delta t / K .......Eqan (2) If F was a variable forcing then F x Delta t would be replaced by the integral of function F wrt t."" You can call my 'F' - 'Q' if you like - it does not alter the relationships. I have already covered the case where F is a variable function over time. If I had an integral symbol on my keyboard I could substitute "Fconstant x Delta t" with "integral of function F wrt t". I don't think that you understand that F is the forcing imbalance. It is the net of all the forcings operating on the climate system. In a steady state F will be zero by definition. In a warming state F will be positive and in a cooling state F will be negative. 0 0 22. Ken Lambert read this again ... "your eq. 2 in #8 is wrong since it applies only for constant F which is the case only in steady state." I think you should put more effort in the definition of the quantities you use. Compare this: "It is the net of all the forcings operating on the climate system." with this "That is all the warming forcings minus all the cooling forcings including S-B and WV and Ice albedo feedbacks." 
Please clarify what you mean by forcing and what by feedback; I may have got confused by the ambiguity in your definitions.

23. #21: Ken, I think I got confused when you used different conventions to 'standard', and you switched definitions as well! I've added a note to the main body that derives the original ΔT = ΔF/Y equilibrium equation by considering the heat flow through time, like you wanted to do. This includes both forcings and feedbacks and considers the net heat flow at any point in what I'm pretty sure is the correct manner. As you see, it's messier than just considering the fluxes at equilibrium, and since both approaches are valid, they give the same answer! Hopefully this will be interesting. :)

24. MarkR #23 and Riccardo #22: So how large does time 't' become in order for equilibrium to be reached, given the non-linear multi-factored components of Delta F (presumably equal to my F.imbalance) and your assumed value (or function) for Y? Given that my current F.imbalance = F.CO2GHG + F.otherGHG - F.cloud albedo - F.direct albedo + F.solar - F.radiative feedbackSB + F.WVIA feedback = 0.9 W/sq.m. Ref: Trenberth, Fig 4, "Tracking the Earth's Global Energy", Aug09. (WVIA = water vapour + ice albedo feedbacks.) To know where the above sum will go in the future, one would need a function for each - something I have never seen for F.cloud albedo or F.WVIA feedback.

25. ΔF is not your imbalance, ΔQ is. ΔF is radiative forcing: heat flow that doesn't react to the climate state on the timescales we're considering. Changes in CO2, volcanism and solar activity are examples. YΔT is the feedback sum that responds to the state of the climate. We have estimates of sensitivity (and therefore net Y in the past) from palaeoclimate. We have direct measurements of Y from the past few decades, and climate models implicitly calculate it. So far climate models have generally been in agreement, or underestimated observational values.

26. Ken Lambert: the response time τ in a simple zero-dimensional model is given by τ = C/λ, where C is the heat capacity and λ⁻¹ is the climate sensitivity. However, this model is a bit crude. Indeed, there are several response times relative to the various parts of the climate system. As a grossly approximated single "effective" response time you can take something like 40-50 years. To have the time dependence of forcings and feedbacks you need to run a climate model - better, many runs of many models; something I have never seen either.

27. MarkR #25: So Delta F should equate to my F.CO2GHG + F.otherGHG + F.solar (which are all supposedly independent of temperature), and Y x DeltaT should equate to the climate and temperature responses: F.WVIA feedback - F.radiative feedbackSB - F.cloud albedo - F.direct albedo?? With your Delta Q equal to the difference between the two above terms. Is that right?

28. That looks right to me! Be careful with sign conventions: to calculate total Y, 'normal' is a positive sign for negative feedbacks that increase heat loss at the surface (blackbody, lapse rate, maybe clouds) and a negative sign for positive feedbacks that increase heat (water vapour, albedo, maybe clouds). This doesn't include slow feedbacks. Also, in my fifth equation down I introduce a minus sign so that positive feedbacks are now positive. People use a lot of different conventions (and sometimes the sensitivity parameter, λ = Y⁻¹), so every time you read a paper you have to double check all this.
So where I have ΔQ = ΔF - YΔT, that means that to restore equilibrium (i.e. ΔQ = 0), if Y is smaller then you need a bigger temperature change to restore equilibrium.

29. MarkR #28: Given that Y is quoted as a 'constant' with units of W/sq.m-degC, how are the differing components of feedback forcings handled? e.g. the S-B cooling forcing is proportional to T^4, WVIA forcing is unknown wrt T, cloud cooling again unknown wrt T. With S-B in particular being a 4th-power term, how do we know that Y stays constant and independent of T?

30. Ken Lambert: the change in the S-B radiative flux is $\Delta F = \epsilon\sigma(T^4 - T_0^4)$. For $\Delta T = T - T_0$ small compared to $T_0$ you can write it as $\Delta F = 4\epsilon\sigma T_0^3 \Delta T = Y\Delta T$, with Y independent of temperature. Similar approximations apply to any other feedback; you can always linearise something if changes are small.

31. It is, indeed, an approximation. But run it up on a function plotter for the range of temperatures 288-294 K and you'll see that a straight line is a very good fit indeed! R^2 = 0.9999. Take the residuals to check - fit a quadratic to see that the real results do accelerate faster than the linear fit, but the effect is tiny: the range in residuals is 0.25 W m-2 from an average of over 400 W m-2. So we are looking at mathematically 'small' changes, and the approximation is good.

32. Alternatively, a more obviously mathematical way to do it is to take the differential dF/dT to see the relative change in flux (i.e. the 'feedback parameter') at different temperatures, like Riccardo did. Then get the fractional change evaluated at each temperature you're interested in, i.e. $T_2^3 / T_1^3$ gives you the fractional change in feedback factor from $T_1$ to $T_2$. For 288 K to 291 K it's a change of ~3%, so linear feedback isn't a bad approximation here.

33. MarkR #31: Is the effect really tiny when 0.25 W/sq.m is compared with the net warming imbalance claimed to be 0.9 W/sq.m? That is about 30% of it, unless I am missing your meaning here.

34. That's an apples-to-oranges comparison! If we consider the ~0.9 K change seen so far and assume we started at 288 K, then you can compare the linear model response to the 'full' Stefan-Boltzmann response. You also need to consider atmospheric emissivity, which I assume is constant at 0.8. The linear feedback response model expects a blackbody feedback for the 0.9 K warming of about 2.926 W m-2. The 'exact' model calculates 2.939 W m-2. The fractional error in changes seen so far from linearisation is 0.46%, or 0.013 W m-2. Absolutely tiny compared to the full fluxes, and effectively impossible to measure to that precision in the climate anyway. Full models include the 'exact' version; this simple linear model appears to be a good approximation according to them. For the blackbody feedback it's very good; for the others it seems to be reasonable.

35. MarkR #34: So what is the overall conclusion from ΔQ = ΔF - YΔT? The higher the value of Y, the less temperature change we get at the surface to restore equilibrium for a given ΔF? If ΔF is the sum of the positive warming forcings F.CO2 + F.otherGHG + F.solar, which are independent of temperature, and we know that the main term F.CO2 is logarithmic, then ΔF would increase more slowly than YΔT - more so with a higher value of Y. A higher Y would arrest the warming more quickly.

36. Ken Lambert: F.CO2, as you call it, is logarithmic as a function of CO2 concentration. On the contrary, the balance equation is a function of time.
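A quick Python sketch of the linearisation check discussed in comments 30-34. The ε = 0.8 and T₀ = 288 K values follow comment 34; the absolute fluxes printed here differ from the comment's (they depend on the exact baseline used), but the ~0.5% relative error is reproduced:

```python
# Compare the exact Stefan-Boltzmann change eps*sigma*((T0+dT)^4 - T0^4)
# with the linearised term 4*eps*sigma*T0^3*dT.
SIGMA = 5.670e-8   # W m-2 K-4
EPS, T0, DT = 0.8, 288.0, 0.9

exact = EPS * SIGMA * ((T0 + DT) ** 4 - T0 ** 4)
linear = 4 * EPS * SIGMA * T0 ** 3 * DT
print(f"exact={exact:.3f} W/m^2, linear={linear:.3f} W/m^2, "
      f"relative error={(exact - linear) / exact:.2%}")
```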
37. #36: "logarithmic as a function of CO2 concentration" - and average annual CO2 is strongly concave up with time (accelerating, i.e., first and second derivatives with time are each positive). Taking the ln of such a function results in a concave-up ΔT. See the graph here.

38. #35: A bigger Y means that you 'need' less warming to re-establish equilibrium. Bigger Y means less global warming, yes. But make sure you track the sign; positive feedbacks reduce Y, and people flip signs in the equations regularly. So long as you're consistent it's not a problem! Assuming Y is constant (which it will be for small changes within a stable regime), then if F slows down its increase, then temperature rise will slow. In the real world we expect accelerating temperature increase because whilst CO2 forcing is logarithmic with concentration, concentration has risen faster than exponentially with time. Even if it was 'just' exponential then you would expect accelerating global warming, because the time taken to return to equilibrium doesn't increase at the same rate as the final expected warming does.
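A quick way to check the linearisation argument in #30-#34 is to run the numbers yourself. The short Python sketch below (added here for illustration; it is not from the original thread) compares the exact Stefan-Boltzmann response εσ(T⁴ − T₀⁴) with the linearised form 4εσT₀³ΔT, using the illustrative values quoted in #34 (T₀ = 288 K, ΔT = 0.9 K, ε = 0.8). The absolute fluxes depend on exactly how emissivity is applied, but the point of interest is the fractional error of the linearisation, which comes out near the ~0.5% quoted in the thread:

```python
import math

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def sb_exact(T0, dT, eps):
    """Exact change in emitted flux for a warming of dT from T0."""
    return eps * SIGMA * ((T0 + dT)**4 - T0**4)

def sb_linear(T0, dT, eps):
    """Linearised feedback: dF = 4*eps*sigma*T0^3 * dT = Y * dT."""
    return 4.0 * eps * SIGMA * T0**3 * dT

T0, dT, eps = 288.0, 0.9, 0.8   # values assumed in comment #34
exact, linear = sb_exact(T0, dT, eps), sb_linear(T0, dT, eps)
print(f"exact = {exact:.3f} W/m^2, linear = {linear:.3f} W/m^2")
print(f"fractional error = {100 * (exact - linear) / exact:.2f}%")
# The fractional error is roughly 1.5*dT/T0, i.e. about 0.5% here,
# so treating Y as a constant is a good approximation for changes
# of this size.
```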
# Samsung S5 Field of View

Quote of the Day

I went through the whole gamut of organization, which was a vain effort so to say, because in the end, I ended with electronics, which have no life at all. But I don't think it vain because to understand life, one must understand electrons too.

— Albert Szent-Györgyi, Nobel Prize winner in Medicine. His research focused on the role of electrons in biological processes.

## Introduction

Figure 1: Samsung S5 Android Phone (source).

I use a Samsung S5 phone for my daily smart phone work. Recently, I have even started to use its camera/video system for some rough measurement work (examples here and here). This work has made me a bit curious about how the camera subsystem was designed. In this post, I will document a very rough experiment that I performed over lunch today in which I measured the S5 camera's Field of View (FOV). I will also compute the field of view using some information that Samsung provides. The agreement between the results seems reasonable considering my crude approach.

I am always surprised at the quality of the images that I can get from this phone; however, there are serious optical compromises required to put a camera into this phone's tiny form factor. While I prefer to use my DSLR, I rarely have it with me when I need to take a quick photo – most of my photos are of whiteboard discussions. In these situations, any photo is better than trying to copy the board quickly.

## Background

### Definitions

As usual, I will use the Wikipedia as my source for the definitions of terms.

Field of View (symbol FOV)
In photography, FOV describes the angular extent of a given scene that is imaged by a camera. It is used interchangeably with the term Angle of View (AOV).

Crop Factor (symbol CF)
In digital photography, a crop factor is related to the ratio of the dimensions of a camera's imaging area compared to a reference format; most often, this term is applied to digital cameras, relative to 35 mm film format as a reference. In the case of digital cameras, the imaging device would be a digital sensor. The crop factor is also commonly referred to as the Focal Length Multiplier ("FLM") since multiplying a lens's focal length by the crop factor or FLM gives the focal length of a lens that would yield the same field of view if used on the reference format.

Focal Length
The focal length of an optical system is a measure of how strongly the system converges or diverges light. For an optical system in air, it is the distance over which initially collimated rays are brought to a focus. I will not be using the phone camera's focal length because Samsung has not published that number. Instead, I will use the equivalent 35 mm focal length, which they do publish.

Equivalent Focal Length (symbol f)
The focal length of a phone camera is usually stated in terms of an equivalent 35 mm focal length. The equivalent focal length is the focal length in a 35 mm SLR that would provide the same FOV as the camera's lens/sensor system.

### S5 Technical Characteristics

In Table 1, I gathered the following S5 technical information from this web page. I will use this information to compute the S5's FOV (horizontal and vertical) based on the lens and image sensor dimensions.

Table 1: Samsung S5 Critical Optical Subsystem Parameters.

| Parameter | Value |
|---|---|
| Effective Focal Length | 31 mm |
| Image Sensor Horizontal Dimension | 5.08 mm |
| Image Sensor Vertical Dimension | 3.81 mm |

Note that the image sensor dimensions as listed have a ratio (horizontal/vertical) of 4/3 ≈ 1.33.
This differs from both the 16/9 aspect ratio used by HDTV and the 3/2 ratio used in the standard 35 mm film format.

### 35 mm Film Reference

Figure 2: 35 mm Film Reference (Source).

Focal length is a characteristic of the camera's lens. The S5's focal length of 31 mm is always described as "effective", which means that the lens has a focal length that provides an image similar to that of a 31 mm lens on a 35 mm analog camera. As a point of reference, 35 mm analog cameras often had a 50 mm focal length lens because this gave the camera an FOV similar to the human eye.

I find the use of effective focal length confusing – a focal length of 31 mm ≈ 1.25 inches is much larger than the S5 is thick. I was unable to find a drawing of the optical path for an S5, but you can get a feel for the complexity present in a cell phone camera by looking at a typical reference design I found while searching on the web (Figure 3). This lens assembly contains one glass and two plastic lenses.

Figure 3: Typical Cell Phone Camera Optical Path (Source).

Figure 4: 35 mm Format Film.

35 mm film (Figure 4) has a much larger image-gathering area than a phone camera's image sensor. For example, the horizontal dimension of 35 mm film is 36 mm, but the corresponding dimension of the S5's image sensor is 5.08 mm. This means that the S5 will only capture the central portion of the image (Figure 5). The overall aspect ratio of 35 mm film is 3/2 (i.e. 36 mm/24 mm). Modern digital formats (e.g., HDTV) are moving to wider ratios – my Sony α55 DSLR is configurable for either 3/2 or 16/9.

Figure 5: Demonstration of the Effect of Crop Factor (Source).

Unfortunately, Figure 5 does not show an image sensor the same size as the S5, which is 1/5" wide (horizontal). This means that the S5 sensor area would comfortably fit within the 1/3" box. This also means that the image captured by the S5 is only a tiny portion of the image that a 35 mm analog camera would capture. Many people choose to view the cropping that occurs as a magnification because the image does look magnified. You can view what is happening any way you want. But you need to remember that all of your sensor's pixels are in that small box in the center of the image – the rest of the image area is unused. This becomes important in DSLR cameras when you use a lens designed for a 35 mm analog system with an image sensor with a smaller capture area than 35 mm film.

### FOV Formula

The Wikipedia gives Equation 1 as the formula for the field of view. The discussion there is excellent, and I will elaborate no further here on this formula.

Eq. 1    $FOV = 2\cdot \arctan\left(\frac{d}{2 \cdot f}\right)$

where

• FOV is the field of view along a given direction (e.g. horizontal or vertical)
• d is the sensor length along a specific direction (e.g. horizontal or vertical)
• f is the focal length of the lens.
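Equation 1 is simple enough to check in a few lines of Python. The sketch below (mine, added for illustration) converts the 35 mm equivalent focal length to an approximate actual focal length using the crop factor implied by the sensor width, then applies Equation 1 to the Table 1 dimensions. Note my assumption here: the table's vertical dimension describes the full 4:3 sensor, while photos taken in the 16:9 mode use a vertically cropped region, which is what I believe makes the vertical FOV in the Conclusion come out near 37°.

```python
import math

F_EQUIV_MM = 31.0        # 35 mm equivalent focal length (Table 1)
FILM_WIDTH_MM = 36.0     # horizontal dimension of 35 mm film
SENSOR_W_MM = 5.08       # S5 sensor width (Table 1)
SENSOR_H_MM = 3.81       # S5 sensor height (Table 1)

def fov_deg(d_mm, f_mm):
    """Equation 1: FOV = 2*arctan(d / (2*f)), returned in degrees."""
    return math.degrees(2.0 * math.atan(d_mm / (2.0 * f_mm)))

# Crop factor from the sensor width implies the actual focal length.
crop_factor = FILM_WIDTH_MM / SENSOR_W_MM   # ~7.1
f_actual = F_EQUIV_MM / crop_factor         # ~4.4 mm

print(f"horizontal FOV = {fov_deg(SENSOR_W_MM, f_actual):.1f} deg")  # ~60 deg
print(f"vertical FOV   = {fov_deg(SENSOR_H_MM, f_actual):.1f} deg")  # full 4:3 sensor
# Photos shot in 16:9 mode use a cropped vertical extent of about
# 5.08 * 9/16 = 2.86 mm, which gives roughly 36 deg instead.
print(f"16:9 vertical  = {fov_deg(SENSOR_W_MM * 9.0 / 16.0, f_actual):.1f} deg")
```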
## Analysis

I will process the same data using two different methods and see how the results vary.

### Theoretical FOV Calculation

Figure 6 shows my application of Equation 1 to compute the horizontal and vertical FOVs of the Samsung S5.

Figure 6: Using Standard FOV Equation to Compute S5 Horizontal and Vertical FOVs.

### My Measurement of the S5's FOV

#### Angle Measurements

It is lunch time and I took a couple of minutes (literally) to take photographs, at known distances, of a letter-sized sheet of paper that I taped to the wall of my cube. I then used a generic graphics program (PicPick) to measure the dimensions (in pixels) of the letter-sized sheet of paper as seen in the photographs. Figure 7 shows two photograph examples.

Figure 7(a): Sheet of Letter-Sized Paper at 24 inches. Figure 7(b): Sheet of Letter-Sized Paper at 48 inches.

Note that my measurements might not be very accurate because my camera hold at the various distances was only approximate and I was not holding the camera's line-of-sight perfectly perpendicular to the paper. My raw measurements are documented in the Mathcad screenshots included below.

#### Average of FOV Calculations

This approach is related to that used in this blog post. Equation 2 shows a modified version of Equation 1 that is a bit more useful for this type of testing.

Eq. 2    $FOV = 2\cdot \arctan\left(\frac{d_{ImagePixel}\cdot d_{Object}}{2\cdot r_{Object}\cdot d_{ObjectPixel}}\right)$

where

• dImagePixel is the image size measured in pixels.
• dObjectPixel is the object size (e.g. sheet of paper) in pixels.
• dObject is the object size in standard linear units (e.g. inches, mm).
• rObject is the distance from the camera to the object, in the same linear units.

Figure 8 shows how to derive Equation 2.

Figure 8: Derivation of FOV Formula.

Figure 9 shows my application of Equation 2 to multiple data inputs and averaging the results.

Figure 9: FOV Estimates By Averaging Multiple Measurements.

#### Least Squares Fit to FOV Curve

A related approach is to assume that the object range is long enough that we can drop the arctangent and use the small-angle approximation (i.e. the angle subtended by the object is dObject/rObject). This approach is very similar to that used in this blog post. Equation 3 shows how the ratio between the paper size and the image size varies with 1/rObject.

Eq. 3    $k = \frac{d_{ObjectPixel}}{d_{ImagePixel}} = \frac{d_{Object}/r_{Object}}{FOV} = \frac{d_{Object}}{FOV}\cdot\frac{1}{r_{Object}}$

where

• k is the ratio of the object size to the image size (both in pixels).

The data of Figure 10 will plot linearly versus 1/r, which allows us to generate a least-squares linear fit – I am specifically interested in the slope m. The slope of the line is $m = \frac{d_{Object}}{FOV}$. Since we know the size of the sheet of paper, we can compute the FOV. The results obtained are similar to the previous two methods, at least considering how crudely I took the measurements.

Figure 10: Least Squares Analysis of the FOV Data.
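The least-squares step is easy to reproduce in Python rather than Mathcad. Here is a minimal sketch; the distances and pixel ratios below are placeholders for illustration, not the actual data from Figures 9 and 10:

```python
import numpy as np

# Placeholder measurements -- substitute the values recorded in
# Figure 9; these are illustrative only.
d_object = 11.0                                  # paper width, inches
r_object = np.array([24.0, 36.0, 48.0, 60.0])    # camera-to-paper distance, inches
k = np.array([0.42, 0.28, 0.21, 0.17])           # d_ObjectPixel / d_ImagePixel

# Equation 3: k = (d_object / FOV) * (1 / r_object). Fit a line through
# the origin of k versus 1/r; the slope m = d_object / FOV.
x = 1.0 / r_object
m = np.sum(x * k) / np.sum(x * x)   # least-squares slope, zero intercept
fov_rad = d_object / m
print(f"FOV ~ {np.degrees(fov_rad):.1f} degrees")
```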
## Conclusion

All three methods produced good agreement for the vertical FOV (37°). There was a slight discrepancy between the horizontal FOV values (60° versus 67°). I do not consider this discrepancy significant considering how crudely I took the measurement, especially in the horizontal dimension (i.e. my camera position was not exactly perpendicular to the sheet of paper).

I am amazed at how well these cell phone cameras perform, particularly when you consider how inexpensive they must be. I plan on buying an Arduino camera shield (example) to do some experimenting on my own.

This entry was posted in optics.

### 5 Responses to Samsung S5 Field of View

1. Palle says: Interesting stuff, but not entirely correct. Film/DSLRs have an aspect ratio of 3/2 (36 mm × 24 mm), NOT 4/3, which is the format of old TVs and computer screens.

• mathscinotes says: Thank you for bringing this to my attention. I am an old video engineer and have the 4/3 ratio stuck in my brain – even when staring at a 36 mm × 24 mm image on 35 mm film. I have corrected the post. Again, thank you.

mark

• Ibrahim Lawal Muhammed says: This is the correct attitude to a correction. Bravo.

2. Jim says: I'm trying to figure out how to calculate the field of view and distance from a jpg of a panoramic landscape, with an image size of 18960 px × 1328 px, taken using the panoramic mode on the Samsung Galaxy S5's camera. I'm struggling a little with the maths. How do you figure FOV and distance when the picture is of a landscape? Many thanks for your help.

3. Drekavat says: I know that I'm commenting on an old post, but I have a strange idea. For a project I am working on, I would like to rotate the Samsung Galaxy S5 camera – the physical camera, yeah, the camera module itself. I plan to dissect the phone, take out the camera and place it back rotated 90 degrees clockwise or counterclockwise so that the camera has a wide view in the normal standing position. I know the flex cable will be short, but there are flex cable extenders that can get the job done. How does it sound?
# Thread: Non-stationary point of inflection

1. ## Non-stationary point of inflection

I have this equation: f(x) = 80x + (5+4x)^2 - (2x^3/3). I need to show that there is a non-stationary point of inflection, where the first derivative is maximized. The two x values I get are -3.14 and 19.14, although I'm not sure this is right. Also, for a non-stationary point I am told the sign either side of x doesn't vary, when in my case it does???!!

2. Originally Posted by molimoli
I have this equation: f(x) = 80x + (5+4x)^2 - (2x^3/3). I need to show that there is a non-stationary point of inflection, where the first derivative is maximized. The two x values I get are -3.14 and 19.14, although I'm not sure this is right. Also, for a non-stationary point I am told the sign either side of x doesn't vary, when in my case it does???!!

I agree with these two solutions; and since you have two distinct stationary points and the polynomial is of order 3, neither stationary point can be the point of inflection.
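For the record, the step the thread leaves implicit is that the two values found solve f'(x) = 0 (the stationary points of f), while the point of inflection is where f''(x) = 0. A worked check (my addition):

```latex
f'(x)  = 80 + 8(5+4x) - 2x^2 = 120 + 32x - 2x^2 \\
f'(x)  = 0 \;\Rightarrow\; x^2 - 16x - 60 = 0 \;\Rightarrow\; x = 8 \pm \sqrt{124} \approx -3.14,\; 19.14 \\
f''(x) = 32 - 4x = 0 \;\Rightarrow\; x = 8, \qquad f'''(x) = -4 < 0 \\
f'(8)  = 120 + 256 - 128 = 248 \neq 0
```

So the inflection is at x = 8, where f' takes its maximum value of 248 (since f''' < 0); because f'(8) ≠ 0, this inflection is non-stationary. The values x ≈ -3.14 and 19.14 are the stationary points of f itself, which is why the sign of f'' differs on either side of x = 8 between them.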
# A practical guide to large-scale docking

## Abstract

Structure-based docking screens of large compound libraries have become common in early drug and probe discovery. As computer efficiency has improved and compound libraries have grown, the ability to screen hundreds of millions, and even billions, of compounds has become feasible for modest-sized computer clusters. This allows the rapid and cost-effective exploration and categorization of vast chemical space into a subset enriched with potential hits for a given target. To accomplish this goal at speed, approximations are used that result in undersampling of possible configurations and inaccurate predictions of absolute binding energies. Accordingly, it is important to establish controls, as are common in other fields, to enhance the likelihood of success in spite of these challenges. Here we outline best practices and control docking calculations that help evaluate docking parameters for a given target prior to undertaking a large-scale prospective screen, with exemplification in one particular target, the melatonin receptor, where following this procedure led to direct docking hits with activities in the subnanomolar range. Additional controls are suggested to ensure specific activity for experimentally validated hit compounds. These guidelines should be useful regardless of the docking software used. Docking software described in the outlined protocol (DOCK3.7) is made freely available for academic research to explore new hits for a range of targets.

## Introduction

Screening chemical libraries using biophysical assays has long been the dominant approach to discover new chemotypes for chemical biology and drug discovery. High-throughput screening (HTS) of libraries of 500,000 to 3 million molecules has been used since the 1990s1, and multiple drugs have had their origins in this technique2. While the libraries physically screened in HTS were an enormous expansion on those used by classical, pre-molecular pharmacology3, they nevertheless represent only a tiny fraction of possible ‘drug-like’ molecules4. DNA-encoded libraries5, where molecules are synthesized on DNA that encodes their chemistry, begin to address this problem by offering investigators libraries of 10⁸ molecules, sometimes more, in a single, highly compact format; and multiple such libraries can be used in a single campaign. However, as DNA-encoded libraries are restricted to reactions on DNA, reaction chemistries are limited to aqueous solutions, thereby limiting the type of chemical reactions and subsequent chemical libraries available with this technology6.

Computational approaches using virtual libraries are an attractive way to explore a much larger chemical space7. Large numbers of molecules—certainly into the tens of billions, and likely many more—may be enumerated in a virtual library. Naturally, very few of these compounds can ever be actually synthesized because of time, cost and storage limitations, but one can imagine a computational method to prioritize those that should be pursued.
In practice, this idea has had two limitations that have prevented wide-scale adoption: the virtual libraries have rarely been carefully curated for true synthetic accessibility8, and there were well-founded concerns that computational methods, such as molecular docking, were not accurate enough to prioritize true hits within this large space9. In the last several years, however, two advances have at least partly addressed these problems: First, several vendors and academic laboratories have introduced ‘make-on-demand’ libraries based on relatively simple two- or three-component reactions where the final product is readily purified in high yields10. At Enamine, a pioneer in this area, >140 reactions may be used to synthesize products from among >120,000 distinct and often highly stereogenic building blocks, leading to a remarkably diverse and, critically, pragmatically accessible library of currently over 29 billion molecules11. Second, structure-based molecular docking, for all of its problems, has proven able to prioritize among these ultra-large libraries, if not at the tens of billion molecule level, then at the 0.1–1.4 billion molecule level12,13,14, finding unusually potent and selective molecules against several unrelated targets (Table 1). Indeed, simulations and proof-of-concept experiments suggest that, at least for now, as the libraries get bigger, docking results and experimental molecular efficacies improve12. If docking ultra-large libraries brings new opportunities, it also brings new challenges. Docking tests the fit of each library molecule in a protein binding site in a process that often involves sampling hundreds-of-thousands to millions of possible configurations. Each molecule is scored for fit using one of several different scoring functions15,16,17,18. To be feasible for a billion-molecule library on moderately sized computer clusters (e.g., 500–1,000 cores), this calculation must consume not much more than 1 s/molecule/core (1 ms/configuration). This need for speed means that the calculation cannot afford the level of detail and number of interaction terms that would be necessary to achieve chemical accuracy. For instance, docking typically undersamples conformational states, ignores important terms (e.g., ligand strain) and approximates terms that it does include (e.g., fixed potentials)19,20. Owing to these approximations and neglected terms, docking energies have known errors, and the method cannot even reliably rank order molecules from a large library screen21,22. What it can hope to do, however, is separate a tiny fraction of plausible ligands from the much larger number of library molecules that are unlikely to bind a target. This level of prioritization is enhanced with the careful implementation of best-practice guidelines and controls. It is the goal of this essay to provide investigators with such best practices and control calculations for ultra-large library docking, though they equally apply to modest-sized library docking (Table 1). These will not ensure the success of a prospective docking campaign—the only true test of the method—but they may eliminate some of the more common reasons for failure. We begin by describing protocols and controls that can be used across docking programs and that are general to the field (Fig. 1). 
There are by now multiple widely used and effective docking programs23,24,25,26,27,28,29,30,31, employing different strategies for sampling ligand orientations and ligand conformations in the protein site, for handling protein flexibility, and for scoring ligand fit once they have been docked. Notwithstanding these differences, there are strategies and controls that may be used across docking programs, including how to prepare protein sites for docking calculations, benchmarking controls to investigate whether one can identify known ligands from among a large library, and controls to investigate whether one’s calculations contain biases towards particular types of interactions. Since the true success of a virtual screen is the experimental confirmation of docking hits, we also propose a set of control assays to validate initial in vitro results. We then turn to protocols that are specific to the docking program we use in our own laboratory, DOCK3.7—these necessarily get into fine details, and will be of most interest to investigators wanting to use this particular program.

### Any structure-based campaign begins with a suitable target site

The most promising starting point for a virtual screening campaign is typically a high-resolution ligand-bound structure. Ligand-bound (holo) structures usually outperform ligand-free (apo) structures as the geometries of the binding pocket are better defined in the bound state than in the unbound state32,33. If there is no available holo structure, tools such as SphGen34, SiteMap35 and FTMap36 can be used to identify potential ligand binding sites. Generally, small, enclosed binding pockets that well complement a ligand perform better than large, flat and solvent-exposed binding sites typical of protein–peptide or protein–protein interactions. For instance, neurotransmitter G-protein-coupled receptor (GPCR) orthosteric sites, such as the β2 adrenergic, D4 dopamine, histamine H1 and A2a adenosine receptors37,38,39,40, typically have higher hit rates and more potent docking hits than do peptide receptors like the CXCR4 receptor41, and these often perform better than the more open sites typical of soluble enzymes like β-lactamase12,42. In most cases, targeting protein–protein interaction surfaces, outside of a few that are well defined43, leads to disappointing results.

### Modifying the high-resolution protein structure

It is not always a good idea to use the structure exactly as it was found in the database.

• Dealing with mutations. For stability, crystallization and other biochemical reasons, high-resolution protein structures are sometimes determined in a mutant form; such mutations should be reverted to the wild type, especially if they are within the ligand site to be targeted. Missing side chains and loops in the experimental structures should be added as well if they are close to the binding site, while those that are weakly defined by the experimental observables (e.g., low occupancy, high displacement parameters (B values), poor electron density) should be examined critically

• What about water molecules? When the resolution permits, water molecules can also be included in the target preparation, often treating them as nondisplaceable parts of the protein structure. Typically, water molecules enclosed in the targeted binding pocket or involved in interactions between the co-crystallized ligand and the protein should be considered, as they may determine side-chain conformations or offer additional hydrogen-bonding sites.
Some docking programs will allow water to be switched on or off during the docking44 or include other implicit solvent terms45

• Buffer components. Buffer components such as PEG and salts are likely specific to the crystallization conditions, and should be removed

• Cofactors. Cofactors like heme or metal ions should be considered if they are involved in ligand recognition

• Hydrogen atoms. Due to the resolution limits of most experimentally determined structures, hydrogen atoms are often unresolved. The position of many hydrogen atoms can be readily modeled according to holonomic constraints (e.g., backbone amide hydrogen atoms). For those that are not, such as hydroxyl protons on serine and tyrosine residues, imidazole hydrogens on histidine, and the adjustment of frequently erroneous terminal amide groups for glutamine and asparagine residues46, programs like Reduce47 (default in DOCK3.7), Maestro (Schrödinger)48, PropKa49 or Chimera50 can be used to protonate the target of interest. While these automated protocols usually produce reasonable protonation states, special care should be taken for the residues within the ligand binding pocket or residues that form a catalytic/enzymatic site. Lastly, although less frequently encountered, glutamic and aspartic acids can adopt protonated forms under specific circumstances51

• In summary, protonation of the protein structure is critical to the success of docking, as it determines how accurately the Van der Waals (VDW) surface and dipole moments of the binding pocket are depicted

#### Homology modeling

When no experimental structure has been determined for the target protein, structural models can be generated if a template structure with high sequence identity is known. Common programs used for homology modeling include Modeller52, Rosetta53, ICM27 and I-Tasser54. These two principles can improve the chances of success:

• Typically, the higher the sequence identity between the target and the template, the better the accuracy of the model55. Particular focus should be given to identity within the target binding pocket; if there is a choice, choose the template that has the highest identity in the binding pocket

• Incorporation of a ligand during the modeling process or ligand-steered homology modeling approaches56,57 will help prevent the pocket from collapsing inward, and will better orient the side chains of binding residues32,58,59

When it is unknown how a ligand binds within the pocket, orthogonal experimental data can guide the modeling, such as by iterating between docking and modeling60,61. In the case of MRGPRX262 and GPR6863, for instance, the authors predicted multiple binding poses of a known active ligand and used mutagenesis and binding assays to test these predictions. A binding mode and receptor structure were identified that were consistent with the mutagenesis data and used for subsequent preparation in the prospective screen. Despite many difficulties, homology models have been successful in identifying novel ligands from prospective docking campaigns41,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76; though it is also true that, given the choice, most investigators will prefer to use a well-determined experimental structure.

#### Control calculations

Docking undersamples ligand–protein configurations and conformations, and its scoring of these configurations for fit remains highly approximate.
Unlike methods like quantum mechanics, or certain lattice calculations in statistical mechanics, docking has surrendered ‘ground truth’ to be able to pragmatically search among a large and growing chemical space. Accordingly, control calculations are critical to the success of a docking campaign. As with experimental controls, they do not ensure prospective success, but they do guard against obvious sources of failure, and can help one understand where things have gone wrong if they do.

Through a key control, we assess whether the prepared binding pocket and docking parameters can prioritize known ligands over presumed inactive molecules. In an optimized binding pocket, these known actives should rank higher against a background of decoy molecules in a retrospective screen, and reasonable poses should be predicted. As one is more likely to know true actives than true inactives, it is common practice to use property-matched decoys77, which are compounds that have similar physical properties to the actives, but unrelated topologies, and so are presumed inactive. The DUDE-Z server (tldr.docking.org) was built specifically to generate decoys for a given list of ligands by matching the following physical properties: molecular weight, hydrophobicity (LogP), charge, the number of rotatable bonds, and the number of hydrogen bond donors and acceptors78.

The performance of the parameterized binding pocket in enriching known ligands over decoys can be evaluated by receiver operating characteristic (ROC) curves, quantifying the true positive rate as a function of the false positive rate (Fig. 2)17,77,79,80,81. The area under the curve (AUC) is a well-regarded metric to monitor the performance of a virtual screen by a single number17 (Fig. 2a). The log transformation of the false positive rate enhances the effect of early enrichment for true positives17 (Fig. 2b). This is important because, in a docking campaign with hundreds of millions of molecules, only compounds ranked within the top 0.1% are often closely evaluated (see ranks in Table 1). For example, if a retrospective docking challenge shows that known actives are only identified starting around the tenth percentile, novel actives may be missed in a prospective screen. In this setting, higher LogAUC values correspond to better discrimination between actives and inactives and provide a sanity check on the ability of the docking parameters to identify actives (a minimal implementation of these metrics is sketched below).

A second criterion is the pose fidelity of the docked ligands to their experimental structures. The validity of predicted binding poses can be assessed qualitatively by visual inspection of reported key interactions between protein and ligand or, in the best case, quantitatively by calculating root mean square deviations between predicted and experimentally determined poses82. During pocket modeling and docking parameter optimization, one will often insist that the retrospective controls lead to both high LogAUC values and good pose fidelity; often there will be some trade-off between the two.

While calibrating the scoring functions, it is also important to monitor the contributions of each energy term to the total score and ensure they match the properties of the binding pocket. If one term dominates, the scoring may have been overoptimized to that term, while other protein–ligand interactions are underweighted, leading to dominance by a certain type of molecule. For instance, if the docking score of a polar solvent-exposed pocket is inappropriately dominated by VDW energies, large molecules may score high due to nonspecific surface contacts with the target protein.
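The enrichment metrics above are straightforward to compute from two lists of docking scores. The Python sketch below is ours, added for illustration; it is not part of any docking package. Note that published DOCK work typically reports an adjusted LogAUC that subtracts the area expected at random, and the floor of 0.001 for the log transform is an assumption here:

```python
import math

def roc_points(active_scores, decoy_scores):
    """ROC curve from docking scores (more negative = better in DOCK3.7)."""
    labeled = [(s, 1) for s in active_scores] + [(s, 0) for s in decoy_scores]
    labeled.sort(key=lambda x: x[0])            # best scores first
    tp = fp = 0
    pts = [(0.0, 0.0)]
    for _, is_active in labeled:
        tp, fp = tp + is_active, fp + (1 - is_active)
        pts.append((fp / len(decoy_scores), tp / len(active_scores)))
    return pts

def auc(pts):
    """Trapezoidal area under the ROC curve."""
    return sum((x2 - x1) * (y1 + y2) / 2.0
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

def log_auc(pts, floor=0.001):
    """Semilog AUC emphasizing early enrichment (FPR from `floor` to 1)."""
    area = 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:]):
        x1, x2 = max(x1, floor), max(x2, floor)
        if x2 > x1:
            area += (math.log10(x2) - math.log10(x1)) * (y1 + y2) / 2.0
    return area / -math.log10(floor)            # normalize to [0, 1]

# Toy example: three actives that all outscore four decoys.
pts = roc_points([-45.1, -38.2, -33.0], [-30.0, -25.5, -20.1, -15.0])
print(f"AUC = {auc(pts):.2f}, LogAUC = {log_auc(pts):.2f}")
```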
In addition to property-matched decoys, other chemical matter can be used to evaluate different aspects of the docking model (Fig. 3). A test set including molecules with extreme physical properties (Extrema set), such as a wide variety of different net charges, can be screened to measure whether the docking model succeeds in prioritizing molecules with net charges corresponding to known actives78. If there is a difference between the charges of top-ranked compounds from the Extrema set and the charges of the known ligands, the scoring may be biased (a sketch of this bookkeeping follows below). In another useful control experiment, a small fraction of the purchasable chemical space (e.g., readily available ‘in-stock’ compounds from multiple vendors) representing the characteristics of the ultra-large make-on-demand screening library can be docked against the protein model. This control serves two purposes: to test whether the positive control molecules remain among the highest-scored compounds and to examine whether intriguing novel compounds rise to the top of the rank-ordered docking list. It may even be fruitful to purchase and experimentally test a few promising and structurally diverse in-stock compounds, as this could inform the docker whether the binding pocket model is likely to find hits in the ultra-large library screen. If top-ranked compounds do not form the expected interactions, further optimization may be beneficial. Another control, if available, is a set of true inactives from a previous discovery campaign. These can be used as a background against known actives to provide a ‘real-world’ benchmark of the performance of the system. Lastly, for a protein that has little or no known chemical matter (i.e., reported ligands), enrichment calculations with known actives against a background of property-matched decoys may be impossible. Here, the docking parameters can be calibrated by docking ‘Extrema’ sets, which challenge the docking with extremes of physical properties, and ‘in-stock’ compound sets, which probe how the docking will perform on a representative subset of the library78,83. It remains true that, without known ligands as positive controls, one is at a substantial disadvantage in setting up docking campaigns, increasing the risk of failure.

#### Prospective screen

Once the docking model is calibrated, large libraries of molecules can be virtually screened against the target protein. For this virtual screen, it makes sense to focus on compounds that are readily available for testing. The ZINC20 database (http://zinc20.docking.org/) enumerates over 14 billion commercially available chemical products, of which ~700 million are available with calculated 3D conformer libraries ready for docking. Most of the enumerated compounds belong to the make-on-demand libraries of Enamine and WuXi. Further, ZINC20 allows one to preselect subsets of molecules for docking, reducing computation time. For instance, ZINC20 allows users to download ready-to-dock subsets of molecules within user-defined ranges of molecular weight (MWT), LogP and net charge, as well as predefined sets such as fragments (MWT ≤ 250 amu) or lead-like molecules (250 ≤ MWT < 350 amu; LogP ≤ 3.5). The result of a prospective screen is a list of molecules rank-ordered by docking score.
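As a concrete illustration of the charge-bias check described above, the following sketch tallies the net-charge distribution among the top-ranked Extrema compounds. The file name and column layout are assumptions made for illustration; adapt them to however your scores are exported:

```python
import csv
from collections import Counter

def charge_histogram(path, top_n=1000):
    """Count net charges among the top-N scored molecules.

    Assumes a CSV with columns: zinc_id, dock_score, net_charge,
    sorted so that the best (most negative) scores come first.
    """
    counts = Counter()
    with open(path) as fh:
        for i, row in enumerate(csv.DictReader(fh)):
            if i >= top_n:
                break
            counts[int(row["net_charge"])] += 1
    return counts

top = charge_histogram("extrema_ranked.csv")   # hypothetical export file
print("net charge distribution of top-ranked Extrema compounds:")
for charge in sorted(top):
    print(f"  {charge:+d}: {top[charge]}")
# If the known actives are neutral but this list is dominated by
# di-anions, the electrostatic term is probably overweighted.
```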
#### Hit-picking

A well-controlled docking calculation can concentrate likely ligands among the top-ranked molecules. But even if it was able to do so among the top 0.1% of the ranked library, in a screen of 1 billion molecules this would still leave 1 million molecules to consider, and with the errors inherent in docking, many of these will be false positives. Accordingly, we rarely pick the top N-ranked compounds by docking to test experimentally, but rather will use additional filters to identify promising hits within the top-scoring 300,000–1,000,000 molecules. These filters can catch problematic features missed by the primary docking function, ensure dissimilarity to known ligands and promote diversity among the prioritized compounds.

Compounds may be filtered for both positive and negative features84 (Table 2). For instance, one may insist that a docked orientation makes favorable interactions with key residues. Conversely, molecules with strained conformations should be discarded. Molecules with unsatisfied, buried hydrogen-bond donors and acceptors may also be deprioritized. Compounds with metabolic liabilities85,86 or that closely resemble colloidal aggregators87 can also be filtered out, despite otherwise favorable scores. Further, as closely related compounds will likely dock in similar poses with similar scores, we typically cluster compounds by 2D structure similarity after all other filters have been used and only select the best-scoring cluster representative for testing (a minimal sketch of scaffold-based clustering follows at the end of this subsection). In such a large dataset, clustering can be computationally expensive, so reduced scaffold clustering such as Bemis–Murcko88 analysis is useful to efficiently parse the compounds. In principle, many of these filters could be included directly in the scoring functions, but balancing them against other scoring terms can demand extensive optimization and will likely increase the compute time, which will become a hindrance for ultra-large library docking. Lastly, visual examination of the docked poses has been useful for selection of compounds to purchase. Following the criteria in Table 2, we typically inspect up to 5,000 compounds visually after applying automated filtering and clustering steps.

#### Experiments to test docking hits

The success of a docking campaign is ultimately measured by its ability to reveal novel chemotypes that can be shown to experimentally bind to the target, typically in binding or functional assays (Fig. 4). However, common artifacts should be controlled for: chemotypes likely to interfere with specific assays89 (e.g., the controversial90 pan-assay interference chemotypes), covalent adducts, redox cyclers91 and aggregators87,92. Among the most common of these mechanisms is colloidal aggregation, which can account for >90% of all primary hits89,93. These aggregates sequester proteins with little selectivity, partially denaturing and inhibiting them32,92,94,95,96, occasionally even activating them97,98, a common problem in both HTS and docking screens92. Aggregators tend to have high LogP values and limited aqueous solubility, so we prefer to dock and test molecules with LogP ≤ 3.5. Chemical stability or reactivity can also contribute to experimental artifacts, and reactive scaffolds should be avoided. The ZINC20 database allows users to select screening libraries in the lead-like chemical space (i.e., low LogP) and exclude reactive compounds9,99. Unless controlled for in the experimental setting (Box 1), these artifacts can lead to false positives.
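For the scaffold-based clustering mentioned under hit-picking, a minimal RDKit sketch follows. It groups molecules by Bemis–Murcko scaffold and keeps the best-scoring representative of each scaffold; this is a simplification of an actual hit-picking pipeline, and the input format is an assumption:

```python
from rdkit import Chem
from rdkit.Chem.Scaffolds import MurckoScaffold

def best_per_scaffold(scored_smiles):
    """Keep the best-scoring molecule per Bemis-Murcko scaffold.

    `scored_smiles` is an iterable of (smiles, dock_score) pairs,
    where more negative scores are better (DOCK3.7 convention).
    """
    best = {}
    for smi, score in scored_smiles:
        mol = Chem.MolFromSmiles(smi)
        if mol is None:
            continue                     # skip unparseable SMILES
        scaffold = MurckoScaffold.MurckoScaffoldSmiles(mol=mol)
        if scaffold not in best or score < best[scaffold][1]:
            best[scaffold] = (smi, score)
    return sorted(best.values(), key=lambda x: x[1])

hits = best_per_scaffold([("c1ccccc1CCN", -42.0),
                          ("c1ccccc1CCCN", -40.5),   # same benzene scaffold
                          ("c1ccncc1O", -39.8)])     # pyridine scaffold
print(hits)  # one representative per scaffold, best score first
```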
Once convinced that one’s hits are not artifactual, more detailed testing on-target is warranted. For all targets, this involves determining concentration–response curves, typically with 7–12 points at half-log intervals, with the transition being sampled over at least two log orders of concentration (Fig. 4). For enzymes, determining a true Ki will illuminate mechanism, though this can be laborious and it might only make sense to do this for characteristic lead molecules. For receptors such as GPCRs and ion channels, functional assays are typically performed to determine if the new ligand is acting as an agonist, an antagonist or an allosteric modulator. For GPCRs, initial screening assays may differ from secondary confirmatory assays, and showing that a molecule is active in more than one assay is often quite useful.

Before initial hits are advanced for optimization, it is important, particularly for make-on-demand libraries, to confirm the identity of the hit compound. Limitations of virtual library enumeration, chemical synthesis and downstream purification can result in mismatches between the in silico predicted compound and the in vitro tested compound. For example, in the screen against the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) macrodomain protein, a number of purine derivatives with an N9 substitution were requested but N3-substituted compounds were synthesized instead83. Accordingly, it is worth fully characterizing promising compounds before costly lead optimization is undertaken; we ourselves have made the mistake of designing analogs based on an in silico starting scaffold that was different from the true hit delivered by the chemists. At a minimum, the identity and purity of the compounds should be determined. High-resolution mass spectrometry or quantitative 1D proton nuclear magnetic resonance spectroscopy can be used to detect gross variation between the tested sample and the expected molecule; these data can be obtained from most vendors, or the experiments could be performed independently.

The most direct experiment to test a docking pose prediction is the determination of an experimental protein structure in complex with the docking hit (Fig. 4). Such structures illuminate crucial details of ligand recognition, including adjustments of binding site residues or of the docking hit in the binding pocket12,100,101. The key question to answer with high-resolution structures is whether docking worked for the right reasons, i.e., does the predicted binding mode agree with an experimental structure? Previous studies suggest that hits from virtual screening generally compare fairly well with experimental high-resolution structures; i.e., key anchor points are predicted correctly12,83,102,103. However, as docking is typically performed against rigid protein structures, conformational changes, especially in flexible loops, will complicate pose prediction104. Water-mediated protein–ligand interactions are generally difficult to explore de novo45,105, though experimentally determined and structurally conserved water molecules can be included in virtual screens100. Lastly, there are a few cases where docking hits showing on-target activity revealed binding positions at unexpected and nontargeted subsites. For example, the β2 adrenergic receptor allosteric modulator AS408 was predicted to bind to an extracellular subpocket but was crystallographically determined to bind to the membrane-facing surface of the receptor106. In contrast, the pose of an inverse agonist also identified at the β2 adrenergic receptor from in silico docking39 was confirmed by X-ray crystallography107, demonstrating the variability in outcomes even at the same target.
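Returning to the concentration–response testing described at the start of this section: the curves are conventionally fit with a four-parameter logistic (Hill) equation. A minimal sketch with SciPy follows; the data points are synthetic and for illustration only:

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ec50, n):
    """Four-parameter logistic (Hill) equation."""
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** n)

# Synthetic 8-point curve at half-log intervals (molar), as in the
# 7-12 point design described above, plus a little noise.
conc = 10.0 ** np.arange(-9.0, -5.0, 0.5)
resp = hill(conc, 0.0, 100.0, 1e-7, 1.0)
resp += np.random.default_rng(0).normal(0.0, 3.0, conc.size)

p0 = [0.0, 100.0, 1e-7, 1.0]             # initial parameter guesses
params, _ = curve_fit(hill, conc, resp, p0=p0)
print(f"fitted EC50 = {params[2]:.2e} M, Hill slope = {params[3]:.2f}")
```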
If the protein target can be readily expressed and purified in milligram quantities, crystallography and cryo electron microscopy can guide hit discovery early in the campaign. In campaigns against targets where high-resolution structures are more challenging to obtain (e.g., for transmembrane receptors), an experimental protein–ligand complex might only be achievable for the most potent hit compound. Nonetheless, it is usually worth the effort to confirm the predicted binding site and ligand pose, or to identify unexpected interactions between the discovered hit and the target106.

#### Next steps: selecting analogs for hit-to-lead optimization

In the fortunate event that the virtual screen was successful and docking hits were confirmed by different experiments, the newly obtained scaffolds are blueprints for exploring structure–activity relationships and lead optimization. A concern about molecules from make-on-demand libraries is that they will initially be delivered as racemic or stereomeric mixtures, since the production of pure stereoisomers requires more sophisticated synthesis routes. If confirmed hits were observed from stereomeric mixtures, the measured potency of the mixture may be artifactually lower if only one stereoisomer is active at the target (e.g., the concentration of the active stereoisomer is less than the total mixture concentration). Purified stereoisomers can typically be purchased from the make-on-demand compound supplier or separated in-house.

Hit optimization can be performed in several ways. Synthetic chemistry groups may obtain synthesis routes of the parent compound from the supplier, allowing the generation of medicinal chemistry-inspired analog series108. Alternatively, chemoinformatic tools can be employed to virtually search the purchasable chemical space for structurally related scaffolds or to identify molecules with a common substructure to the parent hit compound (analog-by-catalog). Searching for similar molecules within the Enamine REAL space can be conducted using the SmallWorld (sw.docking.org) search engine99. Molecules with an identical substructure to the parent compound can be found using Arthor (arthor.docking.org)99. Additionally, the supplier of the parent compound might be able to provide a collection of molecules resembling the hit scaffold. While we cannot provide clear guidance or a best-practice protocol for hit-to-lead optimization, analog-by-catalog of docking hits obtained for the melatonin 1 (MT1) receptor and AmpC led to compounds with improved potencies compared with the initial parent compounds12,13. Currently, we recommend subtle changes to the starting compound or modifications that test particular interactions suggested by the docked pose.

It is our hope that the above guidelines will be useful for outlining a docking campaign from start to finish. As shown, the prospective docking step is actually only one small component of the overall pipeline (Fig. 1). Control calculations using retrospective datasets and docking setup optimization make up the bulk of the process. Hit picking from the prospective screen requires careful perusal of the data, and experimental design of the confirmatory assays is critical for defining the success of the campaign.
We ourselves find that these guidelines insulate against the more common sources of failure in large library docking campaigns. Lastly, we want to mention that docking campaigns against protein targets without an experimental structure, i.e., requiring homology modelling, or without known active chemical matter for retrospective control calculations, are particularly risky and should not be initiated naively.

This concludes our general guidelines for large library docking. In the last section, we turn to a detailed protocol to set up, optimize and prospectively screen a target of interest using DOCK3.7, though any docking program can use the controls presented. While the protocol should be general for most proteins, we provide example data from a recent campaign against the MT1 receptor13, a target particularly well suited for docking: a crystal structure had been determined; the orthosteric pocket is compact and almost completely encloses the ligands, simplifying the biophysics; many such ligands exist for retrospective calculations and for optimization; and in vitro assays to test docking hits were well established. We note that the MT1 receptor is ideal for large library docking, and so the achieved hit rate, and the hit potencies, were unusually high. Still, the docking optimization strategies below have been useful against a wide spectrum of targets, including the Nsp3 macrodomain of SARS-CoV-283 and highly solvated pockets as in β-lactamase12, and were developed against the spectrum of targets in the DUD-E benchmark77, ranging from ion channels to kinases to soluble enzymes. It should be clear that the optimization strategies sketched remain rooted in retrospective controls, and so will work best against targets with precedented ligands, and against targets with well-formed and readily liganded binding sites.

### Docking campaigns with DOCK3.7 and ZINC20

In DOCK3.7, ligands are placed in the target pocket by mapping ligand atoms onto predefined hot-spots, so-called matching spheres. Matching spheres are generated from the coordinates of heavy atoms from an input bound ligand structure, if available, and supplemented with coordinates based on the negative image of the binding site generated from the program SphGen34. Ligand rigid fragments (e.g., rings) are mapped onto matching spheres using a bipartite graph algorithm79,109. Different ligand conformations and orientations are sampled in the binding pocket using precalculated 3D conformer libraries (flexibases, db2 files)9,99. During conformer library building, each molecule is divided into its rigid fragments, and different conformations of rigid fragment substituents are generated.

Ligand poses are evaluated using a physics-based scoring function (Escore) combining VDW, electrostatic (ES) and ligand desolvation (lig_desol) energy terms (Equation 1):

$$E_{score} = E_{VDW} + E_{ES} + E_{lig\_desol}$$ (1)

In order to allow rapid scoring of new poses (typically 1,000 poses per molecule per second), contributions of the target protein pocket are mapped onto pregenerated scoring grids. The VDW surface of the binding pocket is converted into a scoring grid by ChemGrid15. Electrostatic potentials within the binding pocket are estimated by numerically solving the Poisson–Boltzmann equation using QNIFFT16,110. Context-dependent desolvation energy scoring grids, both polar and apolar, are generated by the Solvmap program17. Ligand desolvation energies are then computed as the sum of the atomic desolvation of each ligand atom, scaled by the extent to which it is buried within the binding site. Atomic desolvation energies for ligand atoms are calculated during ligand building using AMSOL111 and are included in the ligand conformer library.
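To make the grid idea concrete, here is a toy Python sketch of how precomputed scoring grids turn pose evaluation into fast table lookups. This illustrates the principle only; it is not DOCK3.7 code, and nearest-grid-point lookup stands in for the real interpolation:

```python
import numpy as np

class ScoringGrid:
    """Toy rectangular grid of precomputed per-point energies."""

    def __init__(self, origin, spacing, values):
        self.origin = np.asarray(origin, dtype=float)
        self.spacing = float(spacing)
        self.values = values                     # 3D numpy array

    def lookup(self, xyz):
        """Nearest-grid-point value at a Cartesian coordinate."""
        idx = np.rint((np.asarray(xyz) - self.origin) / self.spacing)
        i, j, k = idx.astype(int)
        return self.values[i, j, k]

def score_pose(atoms, vdw_grid, es_grid, desol_grid):
    """Equation 1: E = E_VDW + E_ES + E_lig_desol, summed over atoms.

    `atoms` is a list of (xyz, partial_charge, atomic_desolvation).
    """
    total = 0.0
    for xyz, q, atom_desol in atoms:
        total += vdw_grid.lookup(xyz)                 # precomputed VDW term
        total += q * es_grid.lookup(xyz)              # charge x potential
        total += atom_desol * desol_grid.lookup(xyz)  # burial-scaled desolvation
    return total

# Usage: toy 3x3x3 grids of zeros, 1 A spacing, one atom.
zero = ScoringGrid((0, 0, 0), 1.0, np.zeros((3, 3, 3)))
print(score_pose([((1.0, 1.2, 0.9), -0.5, 1.3)], zero, zero, zero))  # 0.0
```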
Both the electrostatic and ligand desolvation scoring grids depend on the dielectric boundary between the low dielectric protein (εr = 2) and high dielectric solvent (εr = 80) environments. Consequently, these scoring grids can be fine-tuned by modulating the protein–solvent dielectric interface (see Steps 41–44 below).

The following protocol (Fig. 5) prepares a starting structure (Steps 1–4), generates the scoring grids and matching spheres (Steps 5–10), and collects a set of control ligands for model evaluation (Steps 11–33). Control ligands are docked retrospectively (Steps 34–58) to determine the model’s ability to identify actives from a pool of inactives. Optimization of sampling (Steps 59–64) and scoring (Steps 65–77) is evaluated with the same controls. Model biases are evaluated using an Extrema control set (Steps 78–88) to examine for charge preferences in the scoring grids and with a small-scale in-stock screen (Steps 89–97) to ensure a computationally expensive large-scale prospective screen (Steps 98–103) is likely to yield interesting chemical matter. Hit picking (Steps 104–108), though critical to the success of these prospective experiments, is beyond the scope of the protocol as it is highly user- and target-specific, and we suggest referring to Table 2 for insight. Additionally, we caution against regarding hits from a screen as true positives until common artifacts are ruled out (Box 1). All controls in silico and in vitro are applicable to any docking program or computer-aided drug discovery campaign. The details of grid preparation and modification are DOCK3.7-specific, but the principles are transferrable to other software.

## Materials

### Software

• DOCK3.7.5: apply for a license from http://dock.docking.org/Online_Licensing/dock_license_application.html. Licenses are free for nonprofit academic research. Once your application is approved, you will be directed to a download for the source code. The code should run without issue on most Linux environments, but can be optimized by recompiling with gfortran if needed. Questions related to installation can be addressed to [email protected]

• Python: the current code uses python2.7, which will also need to be installed. Additional python dependencies that are required for running scripts can be found in the file $DOCKBASE/install/environ/python/requirements.txt

• AMSOL: free academic licenses can be obtained from https://comp.chem.umn.edu/amsol/

• (Optional) 3D ligand building software: if interested in 3D ligand building in-house (not necessary for this protocol), licenses will also need to be obtained for ChemAxon (https://chemaxon.com/), OpenEye Omega (https://www.eyesopen.com/omega) and Corina (https://www.mn-am.com/products/corina). We note that, for many campaigns, 3D molecular structures with all necessary physical properties may be downloaded directly from ZINC20. This tutorial makes use of a webserver for 3D ligand building that is suitable for small control sets. Please apply for an account at https://tldr.docking.org/, which is free of charge

• Chimera: this application is recommended for grid visualization with the VolumeViewer feature (see Troubleshooting).
The ViewDock feature, also within Chimera, is useful to examine the docking results

### Equipment

• Hardware: initial tests of this tutorial (grid preparation and small retrospective screens) can be performed on a single workstation, but more intensive docking will need access to a high-performance computing cluster (we regularly use 1,000 cores)

• Queuing system: DOCK3.7 comes with submission scripts for SGE, Slurm and PBS job schedulers. If using a different scheduler, scripts may need to be adapted for your specific computing cluster

• Protein structure: structures can be downloaded from www.rcsb.org or generated by the user in the case of homology models

• Example data: an example set of files used in this protocol, including ligand and decoy sets, default docking grids and optimized docking grids, can be downloaded from http://files.docking.org/dock/mt1_protocol.tar.gz. The example dataset uses the MT1 structure (PDB: 6ME3) co-crystallized with 2-phenylmelatonin

### Equipment setup

#### Define the paths to critical software

The environment will need to be defined in order to run the following commands correctly. The two most important paths that need to be defined are to DOCKBASE and python. We provide an environment script in the example dataset that will need to be modified for a user's settings. Always run the following command, with updated paths specific to your file directory, before working with DOCK3.7:

    source env.csh

## Procedure

### Section 1: set up binding pocket for docking calculations

Timing: 30 min to 1 h

1. Download the structure from the PDB or use in-house structures. Preference should be given to structures of high resolution and with a ligand bound in the site that will be docked into. Apo structures are usable, but tend to yield poorer results due to unconstrained binding site geometries32.

2. Visualize the binding pocket in PyMOL or Chimera to decide on the relevance of protein cofactors.

• Delete components from the structure that do not contribute to ligand binding, such as lipids, water molecules and buffer components
• Fusion proteins engineered for protein stability that are near the binding pocket should be removed, and the resulting chain break should be either capped or the loop remodeled
• Protein cofactors such as heme or ions should also be kept or discarded depending on their relevance to ligand binding in the pocket
• Additionally, examine the residues in the binding pocket for incomplete side chains, multiple rotamers or engineered mutations, and revert or rebuild as necessary
• It is helpful to examine the pocket with the electron density, if available, for making these decisions. For the example MT1, we deleted the fusion protein, capped non-native termini and reverted two mutations in the binding pocket (6me3_cleaned.pdb)

3. Save the rec.pdb file. This file will contain any component that will remain static during the docking, such as structural waters and cofactors.

4. Save the xtal-lig.pdb file. This file will contain the atoms of the bound ligand, which will be used to generate the matching spheres that guide ligand sampling in the pocket.

5. Generate the scoring grids and matching spheres with blastermaster from these two inputs (rec.pdb and xtal-lig.pdb):

    $DOCKBASE/proteins/blastermaster/blastermaster.py --addhOptions=" -HIS -FLIPs " -v

Troubleshooting
6. Inspect the output, in particular the rec.crg.pdb file located in the new working directory, to ensure that the protonation of polar residues and the side-chain flips of glutamine and asparagine look accurate. Examine histidine tautomers (HID, HIE and HIP) as well. If everything looks as it should, proceed to Step 10; otherwise, proceed to Step 7.

#### Generating a rec.crg.pdb file manually

7. If the automatically protonated rec.crg.pdb file does not match the expected protonation state of key residues, the rec.crg.pdb file can be generated manually using various protein modeling software packages including Rosetta112, Chimera50 or Maestro113. This may include manually flipping side chain rotamers and setting the pH value for pKa calculation of charged residues. After the modeling step is completed, new rec.pdb, rec.crg.pdb and xtal-lig.pdb files need to be provided to blastermaster. To be compatible with the united-atom AMBER force field, rec.crg.pdb should only contain polar hydrogen atoms, and explicit histidine tautomer names. rec.pdb and xtal-lig.pdb only need to contain heavy atom coordinates. In Step 8, we provide a script that produces the required files.

8. Assuming the protein modeling software resulted in a fully protonated protein–ligand complex, protein atoms should be listed as ATOM records, while ligand atoms should be listed as HETATM records. Further, ensure that cofactors are given the correct heading in the PDB file (i.e., static metal ions should be given ATOM records if they are to be included in the scoring grids).

    bash $DOCKBASE/proteins/protein_prep/prep.sh protonated_input.pdb $PWD

The outputs are new rec.pdb and xtal-lig.pdb files, as well as a rec.crg.pdb file inside a working directory. HID, HIE and HIP naming is generated automatically from the protonation state of the HIS residues and again should be checked for accuracy. Note: this script requires a path to the Chimera binary file and may need to be modified by the user.

9. Generate scoring grids and matching spheres from the new files (rec.pdb, xtal-lig.pdb, and working/rec.crg.pdb), and examine the outputs as in Step 6:

    $DOCKBASE/proteins/blastermaster/blastermaster.py --addNOhydrogensflag -v

#### Checking the files

10. Check that all files are generated. At the end of a successful blastermaster run, the directory should contain an INDOCK file and two directories: working and dockfiles. The working directory contains all the intermediate files used for grid generation. The dockfiles directory contains the scoring grid files and matching spheres file. The INDOCK file contains all parameters to control the docking program, such as sampling options and the locations of input files, i.e., docking grids and 3D ligand conformer libraries (see the INDOCK Guide in Supplementary Information). In our experience, the automated grid generation with blastermaster.py successfully produces reliable docking parameters; however, nonstandard amino acids or particular atom types require additional information and adjustments of force field parameters. Instructions on how to check and adapt the grid generation are provided in the Troubleshooting section below and in the Blastermaster Guide (Supplementary Information). For the provided example files generated for MT1, the protein–ligand complex was modeled using the automated preparation pipeline in Maestro, Schrodinger48.

### Section 2: collect and build the control ligand set

Timing: minutes to hours for ligand building, depending on the number of molecules
### Section 2: collect and build the control ligand set

Timing Minutes to hours for ligand building, depending on the number of molecules

11. Collect the known actives (positive controls) for retrospective analysis. For a given target, these can be found in the scientific literature, patent literature or public databases such as IUPHAR/BPS114, ChEMBL115 or ZINC9,99,116, or available in-house.

#### Curate the actives list

#### Critical

While it may be possible to find dozens of actives, it is likely that many come from the same chemical series. For a rigorous control analysis, redundant (i.e., highly similar) compounds should be clustered and the most potent compound selected.

12. Sort all knowns by potency.

13. Cluster these based on 2D similarity using any preferred method. We suggest calculating clusters based on ECFP4 Tanimoto similarities with a cutoff of 0.35 (see the sketch at the end of this section).

14. Use the most potent compound from each cluster as the representative of that scaffold. It is best to refine these actives to a set that is representative of the chemical space intended for prospective screening (i.e., with limits on molecular weight) so that the retrospective analysis will match the prospective aims.

15. Find out whether the known ligands of the target are neutral or charged, since one criterion of the docking parameter calibration is the ability to score ligands of the corresponding charge well.

16. Save a final list of actives’ SMILES as ligands.smi. Typically, 10–30 diverse actives represent a good control set. For targets with fewer known actives, the controls are still useful but may not be as informative. For the example of MT1 we extracted a set of 28 agonists and antagonists from the IUPHAR/BPS database13.

#### Build the 3D conformer library of the actives

17. Go to the ‘build3d’ application on tldr.docking.org79.

18. Add the ligands.smi file to the ‘Input’ section.

19. Select ‘db2’ for the file type.

20. Click ‘Go’ to submit.

21. Download the build3d_results.tar.gz file, and move it to your work directory.

22. Decompress the tar file:

tar xvfz build3d_results.tar.gz

23. The results are in a build3d_results/ directory with two subdirectories failed/ and finished/ indicating the build status of the compounds. In the example data provided, all ligands were built to completion.

24. To build a split database index file, or path file to the ligands, use the following command:

ls -d $PWD/build3d_results/finished/*.db2.gz > actives.sdi

25. To generate a list of actives’ IDs:

ls build3d_results/finished/*0.db2.gz | awk -F"/" '{print $NF}' | awk -F"_" '{print $1}' > actives.id

#### Curate and build the property-matched decoy set

26. Go to the ‘dudez’ application on tldr.docking.org78.

27. Submit the ligands.smi file to the input section.

28. Click ‘Go’ to start the calculation. The ZINC database will be scanned for compounds that match the following six properties of the input ligands: molecular weight, LogP, charge, number of rotatable bonds, number of hydrogen bond donors, and number of hydrogen bond acceptors. Each compound will then be compared with the actives for 2D similarity, with similar compounds being discarded. A final set of 50 decoys per input active will be calculated.

29. Download the decoys_dudez.tar.gz file, and move it to your work directory.

30. Extract the files:

tar xvfz decoys_dudez.tar.gz

The files will live in a new directory newdecoys.

31. To obtain the split database index file of the decoys:

ls -d $PWD/newdecoys/decoys/*.db2.gz > decoys.sdi

32. To obtain a list of all the decoys’ IDs:

awk '{print $2}' newdecoys/decoys.smi > decoys.id

33. Collect output files. At this point, four files should be generated: actives.sdi, actives.id, decoys.sdi and decoys.id.
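Steps 12–14 leave the clustering method open; below is a minimal sketch of one option, assuming RDKit is available and that ligands.smi holds ‘SMILES ID’ pairs pre-sorted by potency (most potent first). ECFP4 corresponds to a Morgan fingerprint of radius 2, and a Tanimoto similarity cutoff of 0.35 translates to a Butina distance threshold of 0.65:

```python
# Illustrative clustering of actives; RDKit is an assumed dependency.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from rdkit.ML.Cluster import Butina

# ligands.smi: one "SMILES ID" pair per line, pre-sorted by potency.
entries, fps = [], []
for line in open("ligands.smi"):
    parts = line.split()
    if len(parts) < 2:
        continue
    mol = Chem.MolFromSmiles(parts[0])
    if mol is not None:
        entries.append(parts[:2])
        fps.append(AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048))

# Flattened lower-triangle distance matrix for Butina clustering.
dists = []
for i in range(1, len(fps)):
    sims = DataStructs.BulkTanimotoSimilarity(fps[i], fps[:i])
    dists.extend(1.0 - s for s in sims)

clusters = Butina.ClusterData(dists, len(fps), 0.65, isDistData=True)

# Keep the most potent (lowest input index) member of each cluster.
for cluster in clusters:
    smi, lig_id = entries[min(cluster)]
    print(smi, lig_id)
```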
### Section 3: run retrospective docking calculations to test the binding pocket parameters

Timing Minutes to hours depending on number of molecules and compute cores

34. In the directory where blastermaster was run (i.e., the one that contains the INDOCK file and dockfiles directory), copy over the ligand and decoy .sdi and .id files.

35. Combine the two .sdi files:

cat actives.sdi decoys.sdi > controls.sdi

36. Check values in the INDOCK file
• Set the atom_maximum to 100. This value is a hard cutoff such that if a ligand has more than this number of heavy atoms it will not be docked. For retrospective calculations, we want all ligands to be docked and scored regardless of size, so we use a large value here
• Set the mol2_maximum_cutoff to 100. This value is a cutoff for saving 3D coordinates of docked poses. If a ligand scores worse (i.e., more positive) than this cutoff, the pose will not be saved. Setting a low value is useful in large-scale prospective screens to save on disk space and computation time, but for retrospective screens we want all information saved for analysis

37. Set up the docking directory

$DOCKBASE/docking/setup/setup_db2_zinc15_file_number.py ./ controls controls.sdi 1 count

This script will separate the controls.sdi file into one directory called controls0000. For this first calculation, the size of the sdi file is manageable on a single core, though splitting it into several chunks (e.g., 10) will be faster. For control calculations with 10,000–100,000 ligands and for large-scale prospective screens, the sdi file should be split into multiple directories and the jobs distributed over multiple cores. A dirlist file is generated that lists all of the split directories prepared for docking.

38. Dock the control set. At this point, it is possible to submit the data to a computer cluster or do the calculation on a single computer.

(A) Docking without submitting to a cluster
(i) Move into the controls0000 directory. You will find INDOCK (pointing to the dockfiles/ directory) and a split_database_index file. These are all the inputs that the DOCK program needs to run the calculation.
(ii) Run the docking calculation:

$DOCKBASE/docking/DOCK/bin/dock.csh

Troubleshooting
(iii) The output files from the docking program are OUTDOCK, listing docking scores of successfully docked molecules, computational performance or potential error messages, and a zipped mol2 file (test.mol2.gz) containing the 3D poses of docked molecules.
(iv) Move back into the directory that contains dirlist when done.

(B) Docking with submitting to a cluster
(i) In the directory that contains the dirlist (created in the previous step), run one of the following commands depending on cluster architecture:
• SGE: $DOCKBASE/docking/submit/submit.csh
• Slurm: $DOCKBASE/docking/submit/submit_slurm.csh
• PBS: $DOCKBASE/docking/submit/submit_pbs.csh

#### Critical step

If you use another cluster scheduler, adapt these scripts to your needs.

39. When the job has completed, the last line of the OUTDOCK file should contain an elapsed time. If this is not the last line of the OUTDOCK file, an error has occurred (see Troubleshooting) and the following script will not run; a sketch for checking completion across all split directories follows below.

Troubleshooting
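A minimal sketch of the Step 39 check applied to every directory in dirlist; the exact wording of the final OUTDOCK line can vary between DOCK builds, so the ‘elapsed’ test below is an assumption to adjust to your own output:

```python
# Hypothetical helper: flag split directories whose OUTDOCK did not finish.
from pathlib import Path

incomplete = []
for d in open("dirlist"):
    d = d.strip()
    outdock = Path(d) / "OUTDOCK"
    if not outdock.exists():
        incomplete.append(d)
        continue
    lines = outdock.read_text().splitlines()
    # Assumed marker: a completed run reports an elapsed time on its last line.
    if not lines or "elapsed" not in lines[-1].lower():
        incomplete.append(d)

print(f"{len(incomplete)} incomplete directories")
for d in incomplete:
    print(d)
```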
40. Extract the scores of all docked compounds

python $DOCKBASE/analysis/extract_all_blazing_fast.py ./dirlist extract_all.txt 100

The arguments are the dirlist, the name of the file to be written (extract_all.txt) and the max energy to be kept (this value should match the mol2_maximum_cutoff value in the INDOCK file). The important output file for future analysis is the extract_all.sort.uniq.txt file, containing the rank-ordered list of docked compounds with only the best score for each compound.

41. Get the poses of the docked compounds

python $DOCKBASE/analysis/getposes_blazing_faster.py ./ extract_all.sort.uniq.txt 10000 poses.mol2 test.mol2.gz

The arguments are the path where the docking is located (./), the name of the extract_all.sort.uniq.txt file, the number of poses to get (10,000, i.e., set larger than the number of compounds docked for retrospective calculations, since we want them all), the name of the output mol2 file (poses.mol2), and the name of the input mol2 files (test.mol2.gz) containing 3D coordinates of predicted poses from the docking calculation (located in the controls0000 directory).

42. Get the poses of just the actives

python $DOCKBASE/analysis/collect_mol2.py actives.id poses.mol2 actives.mol2

The arguments are the file containing active IDs, the pose file containing all of the compounds, both actives and decoys, and the name of the output file to be written.

43. Calculate the LogAUC early enrichment

python $DOCKBASE/analysis/enrich.py -i . -l actives.id -d decoys.id

Inputs are the working directory (‘.’ if in the directory where the extract_all file is located) and the two files containing the IDs of actives and decoys. The outputs are a roc.txt and a roc_own.txt file. Both files report an AUC and LogAUC and contain the grid points for plotting receiver operating characteristic (ROC) plots. The roc.txt file is calculated over all inputs in the ID files, and the roc_own.txt file is calculated only over the compounds that successfully docked.

44. To generate a plot (roc_own.png) of the roc_own.txt file:

python $DOCKBASE/analysis/plots.py -i . -l actives.id -d decoys.id

45. Calculate the charge distribution by DOCK score

python $DOCKBASE/analysis/get_charges_from_poses.py poses.mol2 charges

Inputs are the poses file and the name of the output file to be written. The output charges file is a list of DOCK scores and the charge of the ligand with that score.

46. To generate a plot of total DOCK score by ligand charge (charge_distributions_vs_energy.png) run:

python $DOCKBASE/analysis/plot_charge_distribution.py charges

47. Plot the contribution of each score term in the energy function outlined in Equation 1 (energy_distributions.png):

python $DOCKBASE/analysis/plot_energy_distributions.py extract_all.sort.uniq.txt

48. Collate the output files. The important outputs of these steps are: actives.mol2, roc_own.png, energy_distributions.png and charge_distributions_vs_energy.png.
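As a quick complement to enrich.py, a sketch that counts how many actives land in the top fraction of extract_all.sort.uniq.txt; the exact column layout of that file is an assumption here, so adjust the ID matching to your output:

```python
# Illustrative sanity check: actives among the top-ranked docked compounds.
actives = {line.strip() for line in open("actives.id") if line.strip()}

ranked = []
for line in open("extract_all.sort.uniq.txt"):
    # Assumption: the compound ID appears as one whitespace-separated token.
    hits = [tok for tok in line.split() if tok in actives]
    ranked.append(hits[0] if hits else None)

top = ranked[: max(1, len(ranked) // 100)]  # top 1% of the ranked list
found = sum(1 for x in top if x is not None)
print(f"{found}/{len(actives)} actives in the top 1% of {len(ranked)} compounds")
```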
#### Evaluate the docked poses of the actives

49. Open the rec.pdb and xtal-lig.pdb in Chimera, and prepare the visualization of the binding pocket to the user’s preference.

50. In the Tools tab, select Structure/Binding Analysis and click ViewDock.

51. In the menu, navigate to your directory and open actives.mol2.

52. For file type, select any of the DOCK options. This will open a window (ViewDock) to navigate through all of the molecules in this file such as in Fig. 6a.

53. Under the ‘Column’ tab, several details about the docked ligands can be listed, e.g., Total Energy, Electrostatic or Van der Waals terms.

54. In the Tools tab, under Structure/Binding Analysis, ‘Find HBonds’ can be used to identify hydrogen bonds between the protein and ligand molecules.

55. Evaluate the docked poses, asking the following questions:
• Do the ligands occupy the part of the pocket as expected?
• Do they make the types of interactions anticipated from what is known about the ligands and the pocket?
• Are there aberrant or unsatisfied interactions? For example, in MT1, it is known from the crystal structure that ligands can form hydrogen bonds with Asn162 and Gln181. Issues with binding poses are usually a sampling problem, but scoring terms such as electrostatics can influence specific interactions
• If many actives do not dock, is there a reasonable explanation (e.g., selected controls are larger than the pocket volume, as has been observed when attempting to dock antagonists into agonist pockets)? Optimization of sampling is performed with a matching sphere scan as detailed in Steps 59–64

#### Evaluate the enrichment and scoring metrics

56. Evaluate the docking parameters’ ability to discriminate actives from decoys. The roc_own.png file plots the rate at which actives are found as a function of the rate at which decoys are found. The AUC is indicative of this discriminatory power (Fig. 6b). A positive LogAUC value is a sign that actives are enriched over decoys, a value near 0 represents random selection, and a negative value demonstrates that the model prefers decoys over actives. This may be a result of poor poses, improper scoring or both. It is best to optimize poses before pushing forward on scoring discrimination. Even for good LogAUC values, it is important to evaluate the poses as in Step 55, as it is possible to get good-scoring actives that do not dock in the correct pose.

57. Examine the energy contributions of the docked poses. In the energy_distributions.png file (Fig. 6c), the total DOCK scores for the docked poses are broken down into the main components of the scoring function: electrostatics, VDW and polar ligand desolvation. Do the contributions of the various score terms match the properties of the binding pocket? In the MT1 pocket, the VDW interactions dominate because of its largely hydrophobic nature. For targets forming salt bridges to ligands, electrostatic terms should at least be balanced with VDW scores. If the balance does not match what is expected for the pocket, the strength of the scoring terms can be modified in Steps 65–68.

58. Evaluate the charge preference of the docking parameters. If the electrostatics term dominates in Step 57, the scoring function will likely prefer highly charged ligands. This can be measured in the charge_distributions_vs_energy.png plot (Fig. 6d), which shows the charge state of the top-scoring molecules. DUDE-Z decoys should have charges matching the active ligands. However, in property-unmatched decoys such as the Extrema set78 (Steps 78–88) and in-stock set (Steps 89–97), it is important to ensure the docking is not biased toward charge extremes, which can suggest an overweighting of electrostatic terms; this can be addressed in Steps 65–68.
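The Step 45 charges file pairs each DOCK score with the formal charge of the corresponding ligand; here is a sketch summarizing the charge makeup of the best-scoring molecules, assuming a simple two-column ‘score charge’ layout (adjust if your file differs):

```python
# Illustrative summary of ligand charges among top-scoring molecules.
from collections import Counter

rows = []
for line in open("charges"):
    parts = line.split()
    if len(parts) >= 2:
        # Assumed columns: DOCK score, ligand formal charge.
        rows.append((float(parts[0]), int(float(parts[1]))))

rows.sort()  # more negative DOCK scores are better
top = rows[: max(1, len(rows) // 10)]  # best-scoring 10%

counts = Counter(charge for _, charge in top)
for charge, n in sorted(counts.items()):
    print(f"charge {charge:+d}: {n} of {len(top)} top-scoring molecules")
```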
### Section 4: optimize poses by modifying matching spheres

Timing Minutes to build; ≤1 h to dock

59. Create a new directory matching_sphere_scan/ that contains the INDOCK file and the working/ and dockfiles/ directories from the previous directory.

60. Run the following script to create sets of matching spheres in which the matching spheres that do not map to atoms in the original xtal-lig.pdb file are perturbed as in Fig. 7a (~1 min):

python $DOCKBASE/proteins/optimization/scramble-matching-spheres.py -i $PWD/dockfiles -o $PWD -s 0.5 -n 50

The arguments are: -i, the path to the dockfiles/ directory; -o, the path to where new docking directories should be created; -s, the maximum distance to move a matching sphere (0.5 recommended); -n, the number of perturbed sphere combinations to create. For a single receptor target, we recommend 50–100 sphere sets.

61. Each new directory should contain an INDOCK file and dockfiles/ directory. Check that the matching_spheres.sph file in the dockfiles/ directory is unique to each new directory created.

62. For each directory, copy over the controls.sdi file.

63. Run the docking calculation and extract the poses as in Steps 37–48.

64. Examine the new actives.mol2 files and LogAUC values from the different matching sphere sets. Ideally, a set of spheres will increase the LogAUC and improve the binding poses of the actives. For MT1, the LogAUC increased from ~2 to 5. While not a large change, there was improved placement of the flexible components of the active ligands, which suggests better sampling and pose identification in a prospective screen. If the poses did not improve during this matching sphere scan, it may indicate that there is a problem with the binding pocket model, the ligand set, the sampling of ligand conformations, or the placement of the crystallographic matching spheres. Examining each of these can help improve the binding poses in this first control calculation. Alternatively, the scoring function needs to be optimized to score the expected pose better (see Steps 65–68).

Troubleshooting

### Section 5: optimize ligand scoring by modulating the protein–water dielectric interface (adding a layer of dielectric boundary spheres)

Timing 15 min on cluster; 20–30 min per radius on local machine

65. Create a new directory with the INDOCK file and working/ and dockfiles/ directories from the best matching sphere set (or the default settings if no matching sphere scan was run).

66. Generate the boundary modifying spheres (Fig. 7b):

$DOCKBASE/proteins/optimization/boundary-sphere-scan.sh -b $PWD -v

This version of the script will run on an SGE cluster. Modify the submission script and scheduler as needed for different cluster architecture. To run on a local machine, use the script:

$DOCKBASE/proteins/optimization/boundary-sphere-scan_no_cluster.sh -b $PWD -v

67. Combine the different radii boundary spheres (5 min):

python $DOCKBASE/proteins/optimization/combine-grids.py -b $PWD

This script generates a set of combinations of boundary modifying spheres across both ligand desolvation and low dielectric spheres. These directories have everything needed to run individual retrospective calculations except for the controls.sdi file, which will need to be copied into each one. Repeat the retrospective calculations from Steps 37–48; a sketch for comparing the resulting LogAUC values across directories follows below.
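Steps 64 and 68 both compare many retrospective runs; here is a sketch that harvests the LogAUC reported in each run’s roc_own.txt, assuming every scan directory was analyzed with enrich.py and that the value appears on a line containing ‘LogAUC’ (adjust the parsing to your file):

```python
# Illustrative comparison of LogAUC across scan directories.
import glob
import re

results = {}
for roc in glob.glob("*/roc_own.txt"):
    for line in open(roc):
        if "LogAUC" in line:
            # Assumption: the first number on the LogAUC line is the value.
            match = re.search(r"[-+]?\d+\.?\d*", line)
            if match:
                results[roc] = float(match.group())
            break

for path, logauc in sorted(results.items(), key=lambda kv: -kv[1]):
    print(f"{logauc:8.3f}  {path}")
```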
68. Following retrospective calculations across the combinations of dielectric boundary sphere radii, select a combination that ideally improves LogAUC (Fig. 6b) and helps to balance the score distributions in the energy_distributions.png plot (Fig. 6c). If there are a number of equally performing combinations, subsequent control calculations can help identify the best set for your system (see Steps 78–97). In the case of MT1, we chose electrostatic spheres of radius 1.9 and desolvation spheres of radius 0.1, as they increased LogAUC and shifted the charge distribution of top-scoring compounds toward neutral molecules, matching known MT1 ligands (Fig. 6d).

### Section 6 (optional): polarize (or depolarize) residues to affect electrostatics

Timing 15 min

#### Critical

If after these scans a particular interaction is still missed or erroneously captured, it may be necessary to modify the partial charges of a particular residue. For example, in the case of MT1, the docking scores are largely dominated by VDW interactions (Fig. 6c). However, two residues, Gln181 and Asn162, form hydrogen bonds with the known ligands. For these specific interactions, a global modification to the dielectric boundary (Steps 65–68) may not be sufficient and, instead, a local modification to partial charges as outlined in Steps 69–77 may enhance favorable scores for these interactions. For targets where this is not necessary, proceed to Step 78.

69. Make a new directory for polarizing residues. Copy over rec.pdb, xtal-lig.pdb, working/rec.crg.pdb, working/amb.crg.oxt and working/prot.table.ambcrg.ambH from the previous docking setup.

70. In the rec.crg.pdb file, rename the residues to be polarized with a unique three-letter amino acid code. For example, in the MT1 receptor, we chose to polarize GLN181 and ASN162 to enhance the polar interactions between actives and these hydrogen bonding residues. Accordingly, we renamed GLN181 as GLD and ASN162 as ASM (see Fig. 8).

sed -i 's/GLN A 181/GLD A 181/g' rec.crg.pdb
sed -i 's/ASN A 162/ASM A 162/g' rec.crg.pdb

#### Critical

The two files that read in partial charges are the amb.crg.oxt and prot.table.ambcrg.ambH files located within the working/ directory. These files contain default partial charges for all amino acids and some special residues such as ions. Make the following changes in both the amb.crg.oxt and prot.table.ambcrg.ambH files.

71. Create new sections for the polarized residues in the two files by copying the standard residue partial charges.

72. Rename the polarized residues to match the three-letter codes used in the rec.crg.pdb file. The names are all capitalized in the prot.table.ambcrg.ambH file as shown in Fig. 8 and all lowercase in the amb.crg.oxt file.

73. Redistribute the charge around the atom of interest, making sure that the net charge remains the same. We suggest testing modifications of 0.2 or 0.4 charge units for a given atom. For example, for an ASN-to-ASM change, where we want to enhance the electronegativity of the sidechain carbonyl (OD1), increase the magnitude of its charge by 0.4 units, resulting in a change from −0.470 to −0.870. As −0.4 charge was added to the residue, +0.4 must be distributed to other atoms in the residue to maintain the net charge. One option is to add +0.2 to each of the sidechain amide hydrogens (HD21 and HD22) as in Fig. 8, though the charge could have been distributed to the backbone amide or another atom not involved in binding. A small bookkeeping sketch follows at the end of this section.

#### Running the grid generation

74. Before running the grid generation, check that the following files are present: rec.pdb, xtal-lig.pdb, the modified amb.crg.oxt and prot.table.ambcrg.ambH files, and a working/ directory with the modified rec.crg.pdb file.

75. Run blastermaster with the following command:

$DOCKBASE/proteins/blastermaster/blastermaster.py --addNOhydrogensflag --chargeFile=amb.crg.oxt --vdwprottable=prot.table.ambcrg.ambH -v

Troubleshooting
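The Step 73 arithmetic is easy to get wrong once several atoms are involved; a tiny sketch that checks the perturbations cancel, using the ASN-to-ASM numbers given above:

```python
# Net-charge bookkeeping for a polarized residue (Step 73 example).
deltas = {
    "OD1": -0.40,   # side chain carbonyl: -0.470 -> -0.870
    "HD21": +0.20,  # side chain amide hydrogens absorb the compensation
    "HD22": +0.20,
}

net = sum(deltas.values())
assert abs(net) < 1e-9, f"residue net charge changed by {net:+.3f}"
print("perturbations cancel; residue net charge preserved")
```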
76. Rerun control calculations as in Steps 37–48. When examining the new poses, consider the following questions:
• Does the polarized residue promote or discourage the desired interaction?
• Are the new interactions scored more favorably?

77. Rerun the dielectric boundary modifying sphere scan in Steps 65–68. We recommend doing this because the new polarized residue(s) will have altered the electrostatics grid. It may be necessary to test different combinations and strengths (0.2, 0.4) of the polarized residues to identify the best-performing set of parameters that enhances correct interactions, improves poses and ideally improves LogAUC. However, due to the modification to the electrostatic potential introduced by these changes, it is important to ensure these improvements do not come at the cost of biasing the screen toward overly charged molecules. This bias can be checked in the next section (Steps 78–88).

### Section 7: Extrema charge control calculations

Timing 2–3 h for compound building

#### Generate the Extrema set

#### Critical

Compounds in the Extrema set come from the area of chemical space occupied by the known actives (molecular weight and LogP) but carry distinct charges (−2, −1, 0, +1, +2). These decoys will be used to measure biases introduced into the scoring function owing to modifications to the electrostatic potential.

78. On tldr.docking.org, navigate to the ‘extrema’ application.

79. Upload the SMILES of your active ligands used in Section 2 to the webpage.

80. Select 500 or 1,000 molecules per charge, and click the ‘Go’ button.

81. After a few minutes, a results file will be available to download and extract. Within the file should be five SMILES files grouped by charge. Extract the SMILES files from the download:

unzip extrema_results.zip

82. Build the extrema set.

83. For each SMILES file, submit the file to the ‘build3d’ application on tldr.docking.org. Depending on the number of ligands in each file, and the activity on the website, each file should take 2–3 h to build.

84. When all sets are built and downloaded, combine the paths to all compounds, along with those of the known actives, in a split database index file (a minimal sketch follows at the end of this section).

#### Run the extrema set

85. Navigate to a directory containing the INDOCK file and dockfiles/ directory that you want to test. The dockfiles/ directory should contain the set of matching spheres and the dielectric boundary sphere combination selected from the previous steps.

86. Prepare the docking run. In the previous steps, the control ligand set was small enough to run in a single run. This control set often has 10,000+ compounds, and it is best to split it into multiple jobs. To do this run:

$DOCKBASE/docking/setup/setup_zinc15_file_number.py ./ extrema extrema.sdi 100 count

87. Submit the docking calculations over a cluster as in Step 38B.

88. Repeat the analysis in Steps 39–48 with a focus on the charge_distributions_vs_energy.png plot (Fig. 6d). An optimal set of docking parameters will have good LogAUC with early enrichment, and the top-ranking compounds will contain charges that match the charges of the known actives. Sometimes multiple boundary-modifying sphere combinations will be tested in this Extrema calculation to find a set that enhances the scoring preference for compounds similar to knowns.
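A minimal sketch of the Step 84 file assembly, assuming the five built charge sets were extracted into sibling directories whose finished/ folders hold the .db2.gz files (as build3d produced for the actives in Steps 21–24); the directory pattern is a placeholder:

```python
# Illustrative assembly of extrema.sdi from built extrema sets plus actives.
import glob
import os

paths = []
# Placeholder pattern: one build3d output directory per charge set.
for d in sorted(glob.glob("extrema_build_*/finished")):
    paths.extend(sorted(glob.glob(os.path.join(d, "*.db2.gz"))))

# Include the known actives built in Section 2.
paths.extend(line.strip() for line in open("actives.sdi") if line.strip())

with open("extrema.sdi", "w") as out:
    for p in paths:
        out.write(os.path.abspath(p) + "\n")

print(f"wrote {len(paths)} entries to extrema.sdi")
```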
### Section 8: run a ‘small’-scale in-stock screen

Timing 1–24 h depending on download speed

89. To collect an in-stock screening library, go to http://zinc20.docking.org/tranches/home/.

90. At the top of the tranche viewer (Fig. 9), select ‘3D’ representation, set Purchasability to ‘In-stock’, and select the charge states of interest to your target (i.e., only 0). Next, select the molecular weight range you want to screen. Finally, select a LogP range tailored to your target. If no specific LogP range is desired, we recommend LogP ≤ 3.5 to avoid insoluble compounds that might aggregate and yield false positives in the experimental assay.

91. When only the tranches of interest are selected, click the download button in the upper right corner of the screen. Select ‘DOCK37’ as the download format and ‘cURL’ or ‘WGET’ as the download method. When done, click ‘Download’ to obtain a text file with all the cURL or wget commands needed to download the set of selected tranches.

92. Prepare a split database index file with the path to all of the newly downloaded compounds (a minimal sketch follows at the end of this section).

93. Split the docking run over 1,000 jobs to efficiently run the screen, as the number of compounds is usually ~1–4 million.

$DOCKBASE/docking/setup/setup_zinc15_file_number.py ./ instock instock.sdi 1000 count

94. Run the screen on a cluster as in Step 38B.

95. Run the analysis of the screen to generate a LogAUC. It should be on par with or better than LogAUC values seen previously, as the set of decoy molecules grows in size and departs further from the set of known ligands.

96. Test out hit-picking strategies. Use any metric for filtering hits (Table 2), and visually examine the hits that pass the selected filters.
• Are the compounds forming the expected interactions with the binding pocket?
• Are they sampling in the correct region of the pocket?
If there are any issues, determine if it is a sampling or scoring problem and alter the previous steps (matching spheres or dielectric boundary modifying spheres, respectively) to optimize. If the top poses capture expected interactions, then it is worth moving into a large-scale prospective screen.

97. Choose settings for the large-scale docking experiment. If multiple docking parameters were tested in this section (i.e., Section 8), one method for selecting the best settings for large-scale docking is to choose the parameters that yield the greatest number of viable compounds after all of the post-docking filters have been applied. This will ensure a large number of hits in the large-scale campaign. Alternatively, one can purchase hits from the different screens and see which setup leads to more true actives in an experimental setting. Even if just a few low-potency hits are obtained at this point, it may suggest which interactions are critical for activity in the pocket.
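A minimal sketch of Step 92, assuming the downloaded tranches were unpacked under a single root directory as .db2.gz files; the root path is a placeholder:

```python
# Illustrative build of instock.sdi by walking the downloaded tranche tree.
import os

ROOT = "zinc_instock"  # placeholder: wherever the tranche downloads landed

count = 0
with open("instock.sdi", "w") as out:
    for dirpath, _dirnames, filenames in os.walk(ROOT):
        for name in sorted(filenames):
            if name.endswith(".db2.gz"):
                out.write(os.path.abspath(os.path.join(dirpath, name)) + "\n")
                count += 1

print(f"wrote {count} entries to instock.sdi")
```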
### Section 9: large-scale docking

Timing 1–5 d, depending on number of compounds and compute nodes

98. Create a new directory for the large-scale campaign with the final set of parameters, i.e., the INDOCK file and the dockfiles/ directory.

99. Change the mol2_maximum_cutoff in the INDOCK file to a value likely to eliminate ~90% of the docked molecules, to save on disk storage. Compounds with total scores worse (more positive) than this value are unlikely to be considered hits from the large screen and should not be saved. Use the results of the in-stock screen to help choose the best mol2_maximum_cutoff value.

100. Obtain the ZINC20 library. The ZINC20 library contains a prebuilt 3D conformer library of nearly 700 million fully protonated and tautomerized compounds (i.e., protomers). The library is largely built on Enamine’s readily accessible library, which consists of molecules that have a >80% likelihood of synthesis in one or two reaction steps11. This library can be downloaded from https://files.docking.org/3D/ or using the tranche browser as in Fig. 9 and consists of ~60 TB of data.

101. Select properties of the ZINC20 library for the screen such as molecular weight, charge and LogP. We recommend screening compounds separately by MWT range (i.e., fragments ≤ 250 amu) as larger compounds often score better due to their ability to form additional interactions within the binding pocket117. To obtain a split database index (sdi) file for a predownloaded ZINC20 database, use the ‘DOCK Database Index’ download format in the tranche viewer (Fig. 9) and provide a path to the downloaded database in the ‘ZINC DB Root’ field.

102. Split the docking campaign over 1,000 or more jobs. Typically, ~100 million compounds can be screened per day on an academic 1,000-core cluster of recent vintage (as of 2020).

$DOCKBASE/docking/setup/setup_zinc15_file_number.py ./ largescaledocking zinc20library.sdi 1000 count

103. Run the screen on a cluster as in Step 38B.

### Section 10: hit picking

Timing Days to weeks

104. Extract the scores of the top compounds as in Step 40, with the maximum score cutoff set to the mol2_maximum_cutoff set in Step 99

python $DOCKBASE/analysis/extract_all_blazing_fast.py ./dirlist extract_all.txt -50

105. Collect between 300,000 and 1,000,000 poses from the best-scoring compounds.

python $DOCKBASE/analysis/getposes_blazing_faster.py ./ extract_all.sort.uniq.txt 1000000 poses.mol2 test.mol2.gz

106. Use any number of post-docking filters, including but not limited to those described in Table 2 (one illustrative filter is sketched at the end of this section).

107. From the compounds that remain after filtering, visually examine up to 5,000 molecules depending on how viable the docked compounds look. We recommend that visual examination is done by more than one person, because of the different knowledge base that each individual brings to the selection process (i.e., a medicinal chemist will prioritize different features over a molecular biologist and vice versa)19. This step should be done before prioritizing compounds for purchase, as we have found human-picked compounds have better efficacy than compounds obtained from fully automated hit-picking12. By using post-docking filters as in Step 106 prior to visual examination, one can search more deeply through the list of top-ranked molecules to identify molecules for testing.

108. Choose 50–200 compounds for wet-lab testing depending on price, capability of testing and confidence of success as informed from retrospective calculations.
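One illustrative Step 106 filter, sketched under stated assumptions: it flags top-ranked compounds that are 2D-similar to the known actives, reusing the ECFP4/Tanimoto convention of Step 13 (RDKit assumed; hits.smi is a placeholder file of ‘SMILES ID’ pairs for the compounds that survived earlier filters). Table 2 describes other filters; this is not the protocol’s prescribed method:

```python
# Illustrative novelty filter: report hits dissimilar to the known actives.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def fingerprints(path):
    fps, ids = [], []
    for line in open(path):
        parts = line.split()
        if len(parts) < 2:
            continue
        mol = Chem.MolFromSmiles(parts[0])
        if mol is not None:
            fps.append(AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048))
            ids.append(parts[1])
    return fps, ids

active_fps, _ = fingerprints("ligands.smi")
hit_fps, hit_ids = fingerprints("hits.smi")  # placeholder input

for fp, hit_id in zip(hit_fps, hit_ids):
    max_sim = max(DataStructs.BulkTanimotoSimilarity(fp, active_fps))
    if max_sim < 0.35:  # same cutoff used for clustering in Step 13
        print(hit_id, f"novel (max Tanimoto to knowns: {max_sim:.2f})")
```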
## Troubleshooting

### Structure preparation

Alternative conformations or missing side chains should be completely modeled before starting the grid generation (blastermaster.py). Otherwise, any superfluous or missing atoms will result in erroneous VDW surface or partial charge calculations (see below).

### Blastermaster

#### Section 1, Step 5

All input and output files as well as options and flags of the blastermaster program are listed in the Blastermaster Guide (see Supplementary Information). Of important note is that blastermaster requires protein and ligand structures to be provided as PDB files called rec.pdb (and working/rec.crg.pdb in case a protonated structure is used) and xtal-lig.pdb. Alternative file names will not be recognized in the default settings.

#### Section 1, Step 10

If blastermaster does not successfully finish the grid generation, log files in the working directory will yield the information required to backtrace the error (see Blastermaster Guide in Supplementary Information). We suggest running blastermaster in verbose mode (-v) as it makes it easy to detect at which step the program failed. The most common errors are related to unknown atom or residue types in rec.pdb (and/or rec.crg.pdb). Missing parameters for specific atom types can be added to the following parameter files:

$DOCKBASE/proteins/defaults/radii (required for surface calculation by the dms program)
$DOCKBASE/proteins/defaults/amb.crg.oxt (partial charges for qnifft)
$DOCKBASE/proteins/defaults/vdw.siz (Van der Waals radii for qnifft)
$DOCKBASE/proteins/defaults/prot.table.ambcrg.ambH (atom typing for chemgrid)
$DOCKBASE/proteins/defaults/vdw.parms.amb.mindock (Van der Waals parameters for minimizer)

In our experience, most protein input structures can be converted into complete docking grids with the default parameters provided in DOCKBASE. For some modifications such as capped termini or structural waters, parameters are included in the provided files (e.g., amb.crg.oxt) but may use slightly different atom names compared with default names from modeling programs such as Chimera, PyMol or Maestro. In these cases, we suggest adapting the corresponding atom names in the input protein structure files (rec.pdb, rec.crg.pdb) rather than adding more atom types to the default parameter files. Common naming errors occur for disulfide-bonded cysteines (CYX in prot.table.ambcrg.ambH) and water atoms (HOH, TIP, WAT and SPC in prot.table.ambcrg.ambH).

The qnifft program may crash if too many atoms (>50,000) are present in rec.crg.pdb. While we rarely encounter this problem, particularly large proteins or other molecular complexes may have to be reduced in atom number (e.g., by removing fusion proteins or distal monomers). Typically, protein segments >20 Å away from the binding pocket are unlikely to influence the resulting docking grids.

In certain cases, blastermaster may be able to calculate all grids, even though some parameters might be missing. For example, if nonstandard amino acids are used, the protonation state as well as partial charges or VDW surfaces may not be computed correctly, but will not cause blastermaster to crash. To check if all force field parameters were assigned to the protein structure correctly, we suggest visual inspection of docking grids (Chimera). The scoring grids (vdw.vdw, trim.electrostatics.phi, as well as ligand.desolv.heavy and ligand.desolv.hydrogen) are found in the dockfiles directory. The following scripts can be run in the dockfiles directory to convert the grids into dx files for visualization.

Convert VDW grid:

python $DOCKBASE/proteins/grid_visualization/create_VDW_DX.py

Input: vdw.vdw, vdw.bump (default)
Output: vdw.dx (default)

Convert electrostatics grid:

python $DOCKBASE/proteins/grid_visualization/create_ES_DX.py trim.electrostatics.phi trim.electrostatics.dx

Input: trim.electrostatics.phi
Output: trim.electrostatics.dx

Convert desolvation grid:

python $DOCKBASE/proteins/grid_visualization/create_LigDeSolv_DX.py

Input: ligand.desolv.heavy (default)
Output: ligdesolv.dx (default)

To visualize the grids, first open the rec.crg.pdb file in Chimera. The dx files can be opened with the Volume Viewer application. The vdw.dx file should resemble the surface of the receptor.
The ligdesolv.dx file demonstrates the various solvation levels of the pocket (the grid itself is a continuum). By changing the level of the representation, the volume should fill the pocket, illustrating the ligand moving from a less solvated state to the most solvated state. For the trim.electrostatics.dx file, it is best to set the minimum and maximum levels to −100 and 100, respectively. Then, set the colors to red for −100 and blue for 100. This now shows regions of positive partial charges (blue) and negative partial charges (red). For all grids, ensure that the representations match what is expected for the binding pocket and that there are no regions of missing grid points. If there are obvious holes in any of the grids in close proximity to the ligand, certain residue types were not identified correctly and the corresponding parameters could not be assigned.

#### Section 4, Step 64

If ligands dock in poses far from the binding pocket, visualize the matching spheres to ensure they occupy the same area as the xtal-lig. The matching_spheres.sph file in the dockfiles directory contains the set of matching spheres. To view this file in a protein visualization application, it first must be converted to a pdb file.

csh $DOCKBASE/proteins/showsphere/doshowsph.csh matching_spheres.sph 1 matching_spheres.pdb

Input: matching_spheres.sph
Output: matching_spheres.pdb

The output file, matching_spheres.pdb, can be visualized with the receptor and ligand in either Chimera or PyMOL. The matching spheres should align to the ligand’s heavy atoms, and additional random spheres should be scattered around the ligand as in Fig. 7a.

If many ligands fail to dock (>50%), a number of issues may be at fault. It is common for 10–20% of ligands to fail to dock in retrospective calculations because of mismatches between the pocket and ligand properties. In the OUTDOCK file, two common statements may indicate these issues: ‘Grids too small’ and ‘Skip size’. The ‘Grids too small’ statement indicates that the ligands do not fit in the binding pocket. The ‘Skip size’ statement is used when the number of atoms in the ligand exceeds the atom_maximum value in the INDOCK file. Adjusting this number to a higher value will ensure that larger ligands will be docked and scored.

If the OUTDOCK file shows many compounds getting scored but the poses are not being saved, the mol2_maximum_cutoff value may be too stringent. Relaxing this term will save more poses, but be careful not to set it too high, as the size of the output files will correspondingly increase and may strain disk storage capacity. Additional INDOCK parameters can be adjusted according to the INDOCK Guide available in the Supplementary Information.

## Anticipated results

At the end of this protocol, a receptor will have been converted into a dock-readable binding pocket, the system optimized on retrospective control calculations, and the system screened prospectively against a large chemical library. While this protocol has been specific to a single target that yielded successful results13, we have equally applied this protocol to a number of targets with general success, demonstrating its broad applicability12,83,118. We have attempted to curate all of the relevant steps, though the expert intuition brought by each user for their target will result in a different selection of final hits for purchase. For these intuition-based steps, we recommend multiple people review the data before making the final decision regarding which compounds to buy.
If none of the purchased compounds demonstrates activity at the target of interest as determined experimentally, there remain several reasons for failure, including:

• The calculations were performed using the wrong conformation of the binding pocket
• The compound library did not contain enough examples of the class of compound most likely to dock
• Retrospective optimization was not possible due to lack of known chemical matter
• The experimental structure contained ambiguities in side chain or loop placement due to poor electron density or refinement errors
• The homology model was incorrect or of low quality

For example, targets that bind dianions or dications will suffer from much smaller library sizes, while targets that bind large protein ligands may be difficult to modulate with small molecules. As in experimental biology, no set of retrospective controls will ever guarantee prospective success, and the many approximations in docking ensure that we still measure success by hit rates. Nevertheless, these controls and optimization steps can reduce obvious sources of failure, and allow one to better design a subsequent campaign. The experience of the field by now suggests that the approach is likely enough to succeed to be worth the investment, while the novel ligands that result can bring genuinely new biological insight to a field, a longstanding goal of the structure-based enterprise.

### Reporting Summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.

## Data availability

An example set of files used in this protocol, including ligand and decoy sets, default docking grids and optimized docking grids, can be downloaded from http://files.docking.org/dock/mt1_protocol.tar.gz. The example dataset uses the MT1 structure (PDB: 6ME3) co-crystallized with 2-phenylmelatonin.

## Software availability

DOCK3.7 can be downloaded after applying for a license from http://dock.docking.org/Online_Licensing/index.htm. Licenses are free for nonprofit research.

## References

1. Mayr, L. M. & Bojanic, D. Novel trends in high-throughput screening. Curr. Opin. Pharmacol. 9, 580–588 (2009).
2. Keserü, G. M. & Makara, G. M. The influence of lead discovery strategies on the properties of drug candidates. Nat. Rev. Drug Discov. 8, 203–212 (2009).
3. Keiser, M. J., Irwin, J. J. & Shoichet, B. K. The chemical basis of pharmacology. Biochemistry 49, 10267–10276 (2010).
4. Bohacek, R. S., McMartin, C. & Guida, W. C. The art and practice of structure-based drug design: a molecular modeling perspective. Med. Res. Rev. 16, 3–50 (1996).
5. Brenner, S. & Lerner, R. A. Encoded combinatorial chemistry. Proc. Natl Acad. Sci. USA 89, 5381–5383 (1992).
6. Fitzgerald, P. R. & Paegel, B. M. DNA-encoded chemistry: drug discovery from a few good reactions. Chem. Rev. https://doi.org/10.1021/acs.chemrev.0c00789 (2020).
7. Grebner, C. et al. Virtual screening in the Cloud: how big is big enough? J. Chem. Inf. Model. 60, 24 (2020).
8. Davies, E. K., Glick, M., Harrison, K. N. & Richards, W. G. Pattern recognition and massively distributed computing. J. Comput. Chem. 23, 1544–1550 (2002).
9. Sterling, T. & Irwin, J. J. ZINC 15—ligand discovery for everyone. J. Chem. Inf. Model. 55, 2324–2337 (2015).
10. Patel, H. et al. SAVI, in silico generation of billions of easily synthesizable compounds through expert-system type rules. Sci. Data 7, 384 (2020).
11. Grygorenko, O. O. et al.
Generating multibillion chemical space of readily accessible screening compounds. iScience 23, 101681 (2020).
12. Lyu, J. et al. Ultra-large library docking for discovering new chemotypes. Nature 566, 224–229 (2019).
13. Stein, R. M. et al. Virtual discovery of melatonin receptor ligands to modulate circadian rhythms. Nature 579, 609–614 (2020).
14. Gorgulla, C. et al. An open-source drug discovery platform enables ultra-large virtual screens. Nature 580, 663 (2020).
15. Meng, E. C., Shoichet, B. K. & Kuntz, I. D. Automated docking with grid‐based energy evaluation. J. Comput. Chem. 13, 505–524 (1992).
16. Sharp, K. A., Friedman, R. A., Misra, V., Hecht, J. & Honig, B. Salt effects on polyelectrolyte-ligand binding: comparison of Poisson–Boltzmann, and limiting law/counterion binding models. Biopolymers 36, 245–262 (1995).
17. Mysinger, M. M. & Shoichet, B. K. Rapid context-dependent ligand desolvation in molecular docking. J. Chem. Inf. Model. 50, 1561–1573 (2010).
18. Adeshina, Y. O., Deeds, E. J. & Karanicolas, J. Machine learning classification can reduce false positives in structure-based virtual screening. Proc. Natl Acad. Sci. USA 117, 18477–18488 (2020).
19. Irwin, J. J. & Shoichet, B. K. Docking screens for novel ligands conferring new biology. J. Med. Chem. 59, 4103–4120 (2016).
20. Mobley, D. L. & Dill, K. A. Binding of small-molecule ligands to proteins: “what you see” is not always “what you get”. Structure 17, 489–498 (2009).
21. Bissantz, C., Folkers, G. & Rognan, D. Protein-based virtual screening of chemical databases. 1. Evaluation of different docking/scoring combinations. J. Med. Chem. 43, 4759–4767 (2000).
22. Tirado-Rives, J. & Jorgensen, W. L. Contribution of conformer focusing to the uncertainty in predicting free energies for protein-ligand binding. J. Med. Chem. 49, 5880–5884 (2006).
23. Trott, O. & Olson, A. J. AutoDock Vina: improving the speed and accuracy of docking with a new scoring function, efficient optimization, and multithreading. J. Comput. Chem. 31, 455–461 (2009).
24. Kramer, B., Rarey, M. & Lengauer, T. Evaluation of the FLEXX incremental construction algorithm for protein-ligand docking. Proteins 37, 228–241 (1999).
25. Halgren, T. A. et al. Glide: a new approach for rapid, accurate docking and scoring. 2. Enrichment factors in database screening. J. Med. Chem. 47, 1750–1759 (2004).
26. Morris, G. M. et al. Automated docking using a Lamarckian genetic algorithm and an empirical binding free energy function. J. Comput. Chem. 19, 1639–1662 (1998).
27. Abagyan, R., Totrov, M. & Kuznetsov, D. ICM—a new method for protein modeling and design: applications to docking and structure prediction from the distorted native conformation. J. Comput. Chem. 15, 488–506 (1994).
28. Goodsell, D. S. & Olson, A. J. Automated docking of substrates to proteins by simulated annealing. Proteins 8, 195–202 (1990).
29. Mcgann, M. FRED pose prediction and virtual screening accuracy. J. Chem. Inf. Model. 51, 578–596 (2011).
30. Jones, G., Willett, P., Glen, R. C., Leach, A. R. & Taylor, R. Development and validation of a genetic algorithm for flexible docking. J. Mol. Biol. 267, 727–748 (1997).
31. Corbeil, C. R., Williams, C. I. & Labute, P. Variability in docking success rates due to dataset preparation. J. Comput. Aided Mol. Des. 26, 775–786 (2012).
32. McGovern, S. L. & Shoichet, B. K. Information decay in molecular docking screens against Holo, Apo, and modeled conformations of enzymes. J.
Med. Chem. 46, 2895–2907 (2003).
33. Rueda, M., Bottegoni, G. & Abagyan, R. Recipes for the selection of experimental protein conformations for virtual screening. J. Chem. Inf. Model. 50, 186–193 (2010).
34. Kuntz, I. D., Blaney, J. M., Oatley, S. J., Langridge, R. & Ferrin, T. E. A geometric approach to macromolecule-ligand interactions. J. Mol. Biol. 161, 269–288 (1982).
35. Halgren, T. A. Identifying and characterizing binding sites and assessing druggability. J. Chem. Inf. Model. 49, 377–389 (2009).
36. Ngan, C. H. et al. FTMAP: extended protein mapping with user-selected probe molecules. Nucleic Acids Res. 40, W271–W275 (2012).
37. Wang, S. et al. D4 dopamine receptor high-resolution structures enable the discovery of selective agonists. Science 358, 381–386 (2017).
38. Katritch, V. et al. Structure-based discovery of novel chemotypes for adenosine A(2A) receptor antagonists. J. Med. Chem. 53, 1799 (2010).
39. Kolb, P. et al. Structure-based discovery of beta2-adrenergic receptor ligands. Proc. Natl Acad. Sci. USA 106, 6843–6848 (2009).
40. De Graaf, C. et al. Crystal structure-based virtual screening for fragment-like ligands of the human histamine H1 receptor. J. Med. Chem. 54, 8195–8206 (2011).
41. Mysinger, M. M. et al. Structure-based ligand discovery for the protein–protein interface of chemokine receptor CXCR4. Proc. Natl Acad. Sci. USA 109, 5517–5522 (2012).
42. Powers, R. A., Morandi, F. & Shoichet, B. K. Structure-based discovery of a novel, noncovalent inhibitor of AmpC β-lactamase. Structure 10, 1013–1023 (2002).
43. Zarzycka, B. et al. Discovery of small molecule CD40–TRAF6 inhibitors. J. Chem. Inf. Model. 55, 294–307 (2015).
44. Huang, N. & Shoichet, B. K. Exploiting ordered waters in molecular docking. J. Med. Chem. 51, 4862–4865 (2008).
45. Balius, T. E. et al. Testing inhomogeneous solvation theory in structure-based ligand discovery. Proc. Natl Acad. Sci. USA 114, E6839–E6846 (2017).
46. Weichenberger, C. X. & Sippl, M. J. NQ-Flipper: recognition and correction of erroneous asparagine and glutamine side-chain rotamers in protein structures. Nucleic Acids Res. 35, W403–W406 (2007).
47. Word, J. M., Lovell, S. C., Richardson, J. S. & Richardson, D. C. Asparagine and glutamine: using hydrogen atom contacts in the choice of side-chain amide orientation. J. Mol. Biol. 285, 1735–1747 (1999).
48. Sastry, G. M., Adzhigirey, M., Day, T., Annabhimoju, R. & Sherman, W. Protein and ligand preparation: parameters, protocols, and influence on virtual screening enrichments. J. Comput. Aided Mol. Des. 27, 221–234 (2013).
49. Bas, D. C., Rogers, D. M. & Jensen, J. H. Very fast prediction and rationalization of pKa values for protein–ligand complexes. Proteins 73, 765–783 (2008).
50. Pettersen, E. F. et al. UCSF Chimera—a visualization system for exploratory research and analysis. J. Comput. Chem. 25, 1605–1612 (2004).
51. Bandyopadhyay, D., Bhatnagar, A., Jain, S. & Pratyaksh, P. Selective stabilization of aspartic acid protonation state within a given protein conformation occurs via specific “molecular association”. J. Phys. Chem. B 124, 5350–5361 (2020).
52. Webb, B. & Sali, A. Comparative protein structure modeling using MODELLER. Curr. Protoc. Bioinformatics 54, 5.6.1–5.6.37 (2016).
53. Bender, B. J. et al. Protocols for molecular modeling with Rosetta3 and RosettaScripts. Biochemistry 55, 4748–4763 (2016).
54. Yang, J. et al.
Template-based protein structure prediction in CASP11 and retrospect of I-TASSER in the last decade. Proteins 84, 233–246 (2016).
55. Jaiteh, M., Rodríguez-Espigares, I., Selent, J. & Carlsson, J. Performance of virtual screening against GPCR homology models: impact of template selection and treatment of binding site plasticity. PLoS Comput. Biol. 16, e1007680 (2020).
56. Cavasotto, C. N. et al. Discovery of novel chemotypes to a G-protein-coupled receptor through ligand-steered homology modeling and structure-based virtual screening. J. Med. Chem. 51, 581–588 (2008).
57. Phatak, S. S., Gatica, E. A. & Cavasotto, C. N. Ligand-steered modeling and docking: a benchmarking study in class A G-protein-coupled receptors. J. Chem. Inf. Model. 50, 2119–2128 (2010).
58. Kaufmann, K. W. & Meiler, J. Using RosettaLigand for small molecule docking into comparative models. PLoS One 7, e50769 (2012).
59. Bordogna, A., Pandini, A. & Bonati, L. Predicting the accuracy of protein–ligand docking on homology models. J. Comput. Chem. 32, 81–98 (2011).
60. Katritch, V., Rueda, M., Lam, P. C.-H., Yeager, M. & Abagyan, R. GPCR 3D homology models for ligand screening: lessons learned from blind predictions of adenosine A2a receptor complex. Proteins 78, 197–211 (2010).
61. Schafferhans, A. & Klebe, G. Docking ligands onto binding site representations derived from proteins built by homology modelling. J. Mol. Biol. 307, 407–427 (2001).
62. Lansu, K. et al. In silico design of novel probes for the atypical opioid receptor MRGPRX2. Nat. Chem. Biol. 13, 529–536 (2017).
63. Huang, X.-P. et al. Allosteric ligands for the pharmacologically dark receptors GPR68 and GPR65. Nature 527, 477–483 (2015).
64. Trauelsen, M. et al. Receptor structure-based discovery of non-metabolite agonists for the succinate receptor GPR91. Mol. Metab. 6, 1585–1596 (2017).
65. Kolb, P. et al. Limits of ligand selectivity from docking to models: in silico screening for A1 adenosine receptor antagonists. PLoS One 7, e49910 (2012).
66. Daga, P. R., Polgar, W. E. & Zaveri, N. T. Structure-based virtual screening of the nociceptin receptor: hybrid docking and shape-based approaches for improved hit identification. J. Chem. Inf. Model. 54, 2732–2743 (2014).
67. Diaz, C. et al. A strategy combining differential low-throughput screening and virtual screening (DLS-VS) accelerating the discovery of new modulators for the Orphan GPR34 receptor. Mol. Inf. 32, 213–229 (2013).
68. Langmead, C. J. et al. Identification of novel adenosine A2A receptor antagonists by virtual screening. J. Med. Chem. 55, 1904–1909 (2012).
69. Tikhonova, I. G. et al. Discovery of novel agonists and antagonists of the free fatty acid receptor 1 (FFAR1) using virtual screening. J. Med. Chem. 51, 625–633 (2008).
70. Martí-Solano, M., Schmidt, D., Kolb, P. & Selent, J. Drugging specific conformational states of GPCRs: challenges and opportunities for computational chemistry. Drug Discov. Today 21, 625–631 (2016).
71. Carlsson, J. et al. Ligand discovery from a dopamine D3 receptor homology model and crystal structure. Nat. Chem. Biol. 7, 769–778 (2011).
72. Männel, B. et al. Structure-guided screening for functionally selective D2 dopamine receptor ligands from a virtual chemical library. ACS Chem. Biol. 12, 2652–2661 (2017).
73. Khare, P. et al. Identification of novel S-adenosyl-l-homocysteine hydrolase inhibitors through homology-model-based virtual screening, synthesis, and biological evaluation. J.
Chem. Inf. Model. 52, 777–791 (2012).
74. Li, S. et al. Identification of inhibitors against p90 ribosomal S6 kinase 2 (RSK2) through structure-based virtual screening with the inhibitor-constrained refined homology model. J. Chem. Inf. Model. 51, 2939–2947 (2011).
75. Eberini, I. et al. In silico identification of new ligands for GPR17: a promising therapeutic target for neurodegenerative diseases. J. Comput. Aided Mol. Des. 25, 743–752 (2011).
76. Frimurer, T. M. et al. Model-based discovery of synthetic agonists for the Zn2+-sensing G-protein-coupled receptor 39 (GPR39) reveals novel biological functions. J. Med. Chem. 60, 886–898 (2017).
77. Mysinger, M. M., Carchia, M., Irwin, J. J. & Shoichet, B. K. Directory of Useful Decoys, Enhanced (DUD-E): better ligands and decoys for better benchmarking. J. Med. Chem. 55, 6 (2012).
78. Stein, R. M. et al. Property-unmatched decoys in docking benchmarks. J. Chem. Inf. Model. 61, 699–714 (2020).
79. Coleman, R. G., Carchia, M., Sterling, T., Irwin, J. J. & Shoichet, B. K. Ligand pose and orientational sampling in molecular docking. PLoS One 8, e75992 (2013).
80. Huang, N., Shoichet, B. K. & Irwin, J. J. Benchmarking sets for molecular docking. J. Med. Chem. 49, 6789–6801 (2006).
81. Jain, A. N. & Nicholls, A. Recommendations for evaluation of computational methods. J. Comput. Aided Mol. Des. 22, 133–139 (2008).
82. Allen, W. J. & Rizzo, R. C. Implementation of the Hungarian algorithm to account for ligand symmetry and similarity in structure-based design. J. Chem. Inf. Model. 54, 518–529 (2014).
83. Schuller, M. et al. Fragment binding to the Nsp3 macrodomain of SARS-CoV-2 identified through crystallographic screening and computational docking. Sci. Adv. 7, eabf8711 (2021).
84. Fischer, A., Smieško, M., Sellner, M. & Lill, M. A. Decision making in structure-based drug discovery: visual inspection of docking results. J. Med. Chem. 64, 2489–2500 (2021).
85. Kirchmair, J. et al. Predicting drug metabolism: experiment and/or computation? Nat. Rev. Drug Discov. 14, 387–404 (2015).
86. Kirchmair, J. et al. Computational prediction of metabolism: sites, products, SAR, P450 enzyme dynamics, and mechanisms. J. Chem. Inf. Model. 52, 617–648 (2012).
87. Irwin, J. J. et al. An aggregation advisor for ligand discovery. J. Med. Chem. 58, 7076–7087 (2015).
88. Bemis, G. W. & Murcko, M. A. The properties of known drugs. 1. Molecular frameworks. J. Med. Chem. 39, 2887–2893 (1996).
89. Jadhav, A. et al. Quantitative analyses of aggregation, autofluorescence, and reactivity artifacts in a screen for inhibitors of a thiol protease. J. Med. Chem. 53, 37–51 (2010).
90. Capuzzi, S. J., Muratov, E. N. & Tropsha, A. Phantom PAINS: problems with the utility of alerts for Pan-Assay Interference CompoundS. J. Chem. Inf. Model. 57, 417–427 (2017).
91. Baell, J. B. & Holloway, G. A. New substructure filters for removal of pan assay interference compounds (PAINS) from screening libraries and for their exclusion in bioassays. J. Med. Chem. 53, 2719–2740 (2010).
92. McGovern, S. L., Caselli, E., Grigorieff, N. & Shoichet, B. K. A common mechanism underlying promiscuous inhibitors from virtual and high-throughput screening. J. Med. Chem. 45, 1712–1722 (2002).
93. Feng, B. Y. et al. A high-throughput screen for aggregation-based inhibition in a large compound library. J. Med. Chem. 50, 2385–2390 (2007).
94. Ganesh, A. N. et al.
Colloidal drug aggregate stability in high serum conditions and pharmacokinetic consequence. ACS Chem. Biol. 14, 751–757 (2019).
95. Coan, K. E. D. & Shoichet, B. K. Stoichiometry and physical chemistry of promiscuous aggregate-based inhibitors. J. Am. Chem. Soc. 130, 9606–9612 (2008).
96. Coan, K. E. D., Maltby, D. A., Burlingame, A. L. & Shoichet, B. K. Promiscuous aggregate-based inhibitors promote enzyme unfolding. J. Med. Chem. 52, 2067–2075 (2009).
97. Wolan, D. W., Zorn, J. A., Gray, D. C. & Wells, J. A. Small-molecule activators of a proenzyme. Science 326, 853–858 (2009).
98. Zorn, J. A., Wolan, D. W., Agard, N. J. & Wells, J. A. Fibrils colocalize caspase-3 with procaspase-3 to foster maturation. J. Biol. Chem. 287, 33781–33795 (2012).
99. Irwin, J. J. et al. ZINC20—a free ultralarge-scale chemical database for ligand discovery. J. Chem. Inf. Model. 60, 6065–6073 (2020).
100. Teotico, D. G. et al. Docking for fragment inhibitors of AmpC β-lactamase. Proc. Natl Acad. Sci. USA 106, 7455–7460 (2009).
101. Chen, Y. & Shoichet, B. K. Molecular docking and ligand specificity in fragment-based inhibitor discovery. Nat. Chem. Biol. 5, 358–364 (2009).
102. Kolb, P. & Irwin, J. J. Docking screens: right for the right reasons? Curr. Top. Med. Chem. 9, 755–770 (2009).
103. Wu, Y., Lou, L. & Xie, Z.-R. A pilot study of all-computational drug design protocol–from structure prediction to interaction analysis. Front. Chem. 8, 81 (2020).
104. Greenidge, P. A., Kramer, C., Mozziconacci, J. C. & Sherman, W. Improving docking results via reranking of ensembles of ligand poses in multiple X-ray protein conformations with MM-GBSA. J. Chem. Inf. Model. 54, 2697–2717 (2014).
105. Mahmoud, A. H., Masters, M. R., Yang, Y. & Lill, M. A. Elucidating the multiple roles of hydration for accurate protein-ligand binding prediction via deep learning. Commun. Chem. 3, 19 (2020).
106. Liu, X. et al. An allosteric modulator binds to a conformational hub in the β2 adrenergic receptor. Nat. Chem. Biol. 16, 749–755 (2020).
107. Wacker, D. et al. Conserved binding mode of human β2 adrenergic receptor inverse agonists and antagonist revealed by X-ray crystallography. J. Am. Chem. Soc. 132, 11443–11445 (2010).
108. Manglik, A. et al. Structure-based discovery of opioid analgesics with reduced side effects. Nature 537, 185–190 (2016).
109. Ewing, T. J. A. & Kuntz, I. D. Critical evaluation of search algorithms for automated molecular docking and database screening. J. Comput. Chem. 18, 1175–1189 (1997).
110. Gallagher, K. & Sharp, K. Electrostatic contributions to heat capacity changes of DNA-ligand binding. Biophys. J. 75, 769–776 (1998).
111. Wei, B. Q., Baase, W. A., Weaver, L. H., Matthews, B. W. & Shoichet, B. K. A model binding site for testing scoring functions in molecular docking. J. Mol. Biol. 322, 339–355 (2002).
112. Leaver-Fay, A. et al. Rosetta3. in Methods in Enzymology 545–574 (2011); https://doi.org/10.1016/B978-0-12-381270-4.00019-6
113. Madhavi Sastry, G., Adzhigirey, M., Day, T., Annabhimoju, R. & Sherman, W. Protein and ligand preparation: parameters, protocols, and influence on virtual screening enrichments. J. Comput. Aided Mol. Des. 27, 221–234 (2013).
114. Armstrong, J. F. et al. The IUPHAR/BPS Guide to PHARMACOLOGY in 2020: extending immunopharmacology content and introducing the IUPHAR/MMV Guide to MALARIA PHARMACOLOGY. Nucleic Acids Res. 48, D1006–D1021 (2019).
115. Mendez, D. et al.
ChEMBL: towards direct deposition of bioassay data. Nucleic Acids Res. 47, D930–D940 (2019). 116. 116. Irwin, J. J., Raushel, F. M. & Shoichet, B. K. Virtual screening against metalloenzymes for inhibitors and substrates. Biochemistry 44, 12316–12328 (2005). 117. 117. Verdonk, M. L. et al. Virtual screening using protein−ligand docking: avoiding artificial enrichment. J. Chem. Inf. Comput. Sci. 44, 793–806 (2004). 118. 118. Alon, A. et al. Crystal structures of the σ 2 receptor template large-library docking for selective chemotypes active in vivo. Preprint at bioRxiv https://doi.org/10.1101/2021.04.29.441652 (2021). 119. 119. Babaoglu, K. et al. Comprehensive mechanistic analysis of hits from high-throughput and docking screens against β-lactamase. J. Med. Chem. 51, 2502–2511 (2008). 120. 120. Lorber, D. M. & Shoichet, B. K. Flexible ligand docking using conformational ensembles. Protein Sci. 7, 938–950 (1998). 121. 121. Alhossary, A., Handoko, S. D., Mu, Y. & Kwoh, C.-K. Fast, accurate, and reliable molecular docking with QuickVina 2. Bioinformatics 31, 2214–2216 (2015). 122. 122. Quiroga, R. & Villarreal, M. A. Vinardo: a scoring function based on Autodock Vina improves scoring, docking, and virtual screening. PLoS One 11, e0155183 (2016). 123. 123. Bottegoni, G., Kufareva, I., Totrov, M. & Abagyan, R. Four-dimensional docking: a fast and accurate account of discrete receptor flexibility in ligand docking. J. Med. Chem. 52, 397–406 (2009). 124. 124. Cho, Y., Ioerger, T. R. & Sacchettini, J. C. Discovery of novel nitrobenzothiazole inhibitors for Mycobacterium tuberculosis ATP phosphoribosyl transferase (HisG) through virtual screening. J. Med. Chem. 51, 5984–5992 (2008). 125. 125. Verdonk, M. L., Cole, J. C., Hartshorn, M. J., Murray, C. W. & Taylor, R. D. Improved protein–ligand docking using GOLD. Proteins 52, 609–623 (2003). 126. 126. Li, C. et al. Identification of diverse dipeptidyl peptidase IV inhibitors via structure-based virtual screening. J. Mol. Model. 18, 4033–4042 (2012). 127. 127. Friesner, R. A. et al. Extra precision glide: docking and scoring incorporating a model of hydrophobic enclosure for protein-ligand complexes. J. Med. Chem. 49, 6177–6196 (2006). 128. 128. Rai, B. K. et al. Comprehensive assessment of torsional strain in crystal structures of small molecules and protein–ligand complexes using ab initio calculations. J. Chem. Inf. Model. 59, 4195–4208 (2019). 129. 129. Groom, C. R., Bruno, I. J., Lightfoot, M. P. & Ward, S. C. The Cambridge structural database. Acta Crystallogr. Sect. B Struct. Sci. Cryst. Eng. Mater. 72, 171–179 (2016). 130. 130. Gu, S., Smith, M. S., Yang, Y., Irwin, J. J. & Shoichet, B. K. Ligand strain energy in large library docking. Preprint at bioRxiv https://doi.org/10.1101/2021.04.06.438722 (2021). 131. 131. Xing, L., Klug-Mcleod, J., Rai, B. & Lunney, E. A. Kinase hinge binding scaffolds and their hydrogen bond patterns. Bioorg. Med. Chem. 23, 6520–6527 (2015). 132. 132. Peng, Y. et al. 5-HT2C receptor structures reveal the structural basis of GPCR polypharmacology. Cell 172, 719–730.e14 (2018). 133. 133. Bissantz, C., Kuhn, B. & Stahl, M. A medicinal chemist’s guide to molecular interactions. J. Med. Chem. 53, 5061–5084 (2010). 134. 134. Rogers, D. & Hahn, M. Extended-connectivity fingerprints. J. Chem. Inf. Model. 50, 742–754 (2010). 135. 135. Alexander, N., Woetzel, N. & Meiler, J. Bcl::Cluster: a method for clustering biological molecules coupled with visualization in the Pymol Molecular Graphics System. 
## Acknowledgements

This work was supported by NIH grants R35GM122481 (to B.K.S.) and GM133836 (to J.J.I.). J.C. was supported by grants from the Swedish Research Council (2017-04676) and the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement 715052). B.J.B. was partly supported by an NIH NRSA fellowship F32GM136062. C.M.W. was partly supported by National Institutes of Health training grant T32 GM007175, an NSF GRFP and a UCSF Discovery Fellowship. We thank members of the Shoichet lab for feedback on the procedures described.

## Author information

### Contributions

B.J.B. and S.G. wrote the manuscript with additional input from all authors. B.J.B., S.G., A.L., J.L., C.W., R.M.S., E.F., T.E.B., J.C., J.J.I. and B.K.S. developed the protocol. B.J.B., S.G., A.L., J.L., C.W., R.M.S. and T.E.B. contributed scripts. E.F. tested the protocol. Research was supervised by J.C., J.J.I. and B.K.S.

### Corresponding author

Correspondence to Brian K. Shoichet.

## Ethics declarations

### Competing interests

B.K.S. and J.J.I. are founders of Blue Dolphin Lead Discovery LLC, which undertakes fee-for-service ligand discovery.

Peer review information: Nature Protocols thanks Claudio Cavasotto, Vincent Zoete and the other, anonymous reviewer(s) for their contribution to the peer review of this work.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Key references using this protocol

Stein, R. M. et al. Nature 579, 609–614 (2020): https://doi.org/10.1038/s41586-020-2027-0

Lyu, J. et al. Nature 566, 224–229 (2019): https://www.nature.com/articles/s41586-019-0917-9

Schuller, M. et al. Sci.
Adv. 7, eabf8711 (2021): https://advances.sciencemag.org/content/7/16/eabf8711

Key data used in this protocol

Stein, R. M. et al. Nature 579, 609–614 (2020): https://doi.org/10.1038/s41586-020-2027-0

## Supplementary information

Supplementary Information: INDOCK Guide and Blastermaster Guide.

Citation: Bender, B.J., Gahbauer, S., Luttens, A. et al. A practical guide to large-scale docking. Nat Protoc 16, 4799–4832 (2021). https://doi.org/10.1038/s41596-021-00597-z
## 9 pitfalls of data science [book review]

Posted in Books, Kids, Statistics, Travel, University life on September 11, 2019 by xi'an

I received The 9 pitfalls of data science by Gary Smith [who has written a significant number of general-public books on personal investment, statistics and AIs] and Jay Cordes from OUP for review a few weeks ago and read it on my trip to Salzburg. This short book contains a lot of anecdotes and what I would qualify as small talk on job experiences and colleagues' idiosyncrasies… More fundamentally, it reads as a sequence of examples of bad or misused statistics, as many general-public books on statistics do, but with little to say on how to spot such misuses of statistics. Its title (it seems like The 9 pitfalls of… is a rather common début for a book title!) however started a (short) conversation with my neighbour on the train to Salzburg, as she wanted to know if the job opportunities in data science were better in Germany than in Austria. A practically important question for which I had no clue, and I do not think the book would have helped either! (My neighbour on the earlier plane to München had a book on growing lotus, which was not particularly enticing for launching a conversation either.)

## MaxEnt 2019 [last call]

Posted in pictures, Statistics, Travel on April 30, 2019 by xi'an

For those who definitely do not want to attend O'Bayes 2019 in Warwick, the MaxEnt 2019 conference is taking place at the Max Planck Institute for Plasma Physics in Garching, near Munich, (south) Germany, at the same time. Registration is still open at a reduced rate and it is not yet too late to submit an abstract: a few hours left. (While it is still possible to submit a poster for O'Bayes 2019 and to register.)

## trip to the past

Posted in Books, pictures on January 6, 2019 by xi'an

When visiting my mother for the Xmas break, she showed me this picture of her grandfather, Médéric, in his cavalry uniform, taken before the First World War, in 1905. During the war, as an older man, he did not come close to the front lines, but died from a disease caught from the horses he was taking care of. Two other documents I had not seen before were the refugee cards that my grandparents got after their house in Saint-Lô was destroyed on June 7, 1944, and a receipt for the tinned rabbit meat packages my grandmother was sending to a brother-in-law who was a POW in Güstrow, Germany, a receipt that she kept despite the hardships she faced in the years following the D-Day landing.

## Max Ent at Max Plank

Posted in Statistics on December 21, 2018 by xi'an

## truncated Gumbels

Posted in Books, Kids, pictures, Statistics on April 6, 2018 by xi'an

As I had to wake up pretty early on Easter morning to give my daughter a ride, while waiting I came upon this calculus question on X validated: computing the conditional expectation of a Gumbel variate, conditional on its drifted version being larger than another independent Gumbel variate with the same location-scale parameters. (Just reminding readers that a Gumbel G(0,1) variate is a double log-uniform, i.e., can be generated as X=-log(-log(U)).)
And found after a few minutes (and a call to the Wolfram Alpha integrator) that

$\mathbb{E}[\epsilon_1|\epsilon_1+c>\epsilon_0]=\gamma+\log(1+e^{-c})$

which is simple enough to make me wonder if there is a simpler derivation than the call to the exponential integral Ei(x) function. (And easy to check by simulation, as sketched at the end of this post roundup.)

Incidentally, I discovered that Emil Gumbel had applied statistical analysis to the study of four years of political murders in the Weimar Republic, demonstrating the huge bias of the local justice towards right-wing murders. When he signed the urgent call [for the union of the socialist and communist parties] against fascism in 1932, he was expelled from his professor position in Heidelberg and emigrated to France, which he had to leave again for the USA upon the Nazi invasion in 1940, where he became a professor at Columbia.

## OxWaSP@Amazon Berlin

Posted in Books, Kids, pictures, Running, Statistics, Travel, University life on April 12, 2017 by xi'an

The reason for my short visit to Berlin last week was an OxWaSP (Oxford and Warwick Statistics Program) workshop hosted by Amazon Berlin, with talks between statistics and machine learning, plus posters from our second-year students. While the workshop was quite intense, I enjoyed very much the atmosphere and the variety of talks there. (Just sorry that I left too early to enjoy the social programme at a local brewery, Brauhaus Lemke, and the natural history museum. But I still managed nice runs east and west!)

One thing I found most interesting (if obvious in retrospect) was the different focus of academic and production talks, where the latter do not aim at full generality or at a guaranteed improvement over the existing, provided the new methodology provides a gain in efficiency over the existing. This connected nicely with my reading several Nature articles on quantum computing during that trip, where researchers from Google predict commercial products appearing in the coming five years, even though the technology is far from perfect and the outcome qubit-error-prone. Among the examples they provided: quantum simulation (not meaning what I consider to be simulation!), quantum optimisation (as a way to overcome multimodality), and quantum sampling (targeting given probability distributions). I find the inclusion of the latter puzzling, in that simulation (in that sense) shows very little tolerance for errors, especially systematic bias. It may be that specific quantum architectures can be designed for specific probability distributions, just like some are already conceived for optimisation. (It may even be the case that quantum solutions are (just next to) available for intractable constants as in Ising or Potts models!)

## keine Angst vor Niemand… [no fear of anyone…]

Posted in Statistics on April 1, 2017 by xi'an
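Returning to the truncated-Gumbels identity above, here is the promised Monte Carlo check. This is my own sketch (in Python with numpy rather than the blog's usual R), and the drift value c = 0.7 is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
c, n = 0.7, 10**6

# Gumbel G(0,1) via the double-log-uniform representation X = -log(-log U)
e0 = -np.log(-np.log(rng.uniform(size=n)))
e1 = -np.log(-np.log(rng.uniform(size=n)))

keep = e1 + c > e0                            # condition on the drifted win
print(e1[keep].mean())                        # Monte Carlo estimate
print(np.euler_gamma + np.log1p(np.exp(-c)))  # gamma + log(1 + e^{-c})
```

Both printed values should agree to two or three decimal places for a million draws.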
Files in this item: RI14_Presentation.pptx (4 MB, presentation), RI14_Abstract.pdf (19 kB, abstract), RI14_Abstract.txt (2 kB, abstract).

Title: Electronic Relaxation Processes of Transition Metal Atoms in Helium Nanodroplets
Author: Kautsch, Andreas
Contributors: Ernst, Wolfgang E.; Koch, Markus; Lindebner, Friedrich
Subject: Matrix isolation (and droplets)

Abstract: Spectroscopy of doped superfluid helium nanodroplets (He$_N$) gives information about the influence of this cold, chemically inert, and least interacting matrix environment on the excitation and relaxation dynamics of dopant atoms and molecules [1]. We present the results from laser-induced fluorescence (LIF), photoionization (PI), and mass spectroscopy of Cr [2] and Cu [3] doped He$_N$. From these results, we can draw a comprehensive picture of the complex behavior of such transition metal atoms in He$_N$ upon photo-excitation. The strong Cr and Cu ground-state transitions show an excitation blueshift and broadening with respect to the bare-atom transitions, which can be taken as an indication of solvation inside the droplet. From the originally excited states the atoms relax to energetically lower states and are ejected from the He$_N$. The relaxation processes include bare-atom spin-forbidden transitions, which clearly bears the signature of the He$_N$ influence. Two-color resonant two-photon ionization (2CR2PI) also shows the formation of bare atoms and small Cr-He$_n$ and Cu-He$_n$ clusters in their ground and metastable states [4]. Currently, Cr dimer excitation studies are in progress and a brief outlook on the available results will be given.

[1] C. Callegari and W. E. Ernst, Helium Droplets as Nanocryostats for Molecular Spectroscopy - from the Vacuum Ultraviolet to the Microwave Regime, in Handbook of High-Resolution Spectroscopy, eds. M. Quack and F. Merkt, John Wiley & Sons, Chichester, 2011.
[2] A. Kautsch, M. Koch, and W. E. Ernst, J. Phys. Chem. A 117 (2013) 9621-9625, DOI: 10.1021/jp312336m
[3] F. Lindebner, A. Kautsch, M. Koch, and W. E. Ernst, Int. J. Mass Spectrom. (2014), in press, DOI: 10.1016/j.ijms.2013.12.022
[4] M. Koch, A. Kautsch, F. Lackner, and W. E. Ernst, submitted to J. Phys. Chem. A

Issue Date: 2014-06-19
Publisher: International Symposium on Molecular Spectroscopy
Citation: Kautsch, A.; Ernst, W.E.; Koch, M.; Lindebner, F. Electronic Relaxation Processes of Transition Metal Atoms in Helium Nanodroplets. Proceedings of the International Symposium on Molecular Spectroscopy, Urbana, IL, June 16-21, 2014. DOI: 10.15278/isms.2014.RI14
Genre: Conference paper/presentation
Language: English
URI: http://hdl.handle.net/2142/51224
DOI: 10.15278/isms.2014.RI14
Rights: Copyright 2014 by the authors. Licensed under a Creative Commons Attribution 4.0 International License. http://creativecommons.org/licenses/by/4.0/
Date available in IDEALS: 2014-09-17; 2015-04-14
manual/plugins/zxbox.tex:

\subsection{\label{ref:ZXBox}ZXBox}

\screenshot{plugins/images/ss-zxbox}{ZXBox}{img:zxbox}

ZXBox is a port of the ``Spectemu'' ZX Spectrum 48k emulator for Rockbox (\Pointinghand\href{http://kempelen.iit.bme.hu/~mszeredi/spectemu/spectemu.html}{project's homepage}). To start a game open a tape file or snapshot saved as \fname{.tap}, \fname{.tzx}, \fname{.z80} or \fname{.sna} in the file browser.\\
\note{As ZXBox is a 48k emulator, only loading of 48k z80 snapshots is possible.}

\subsubsection{Default keys}
The emulator is set up for 5 different buttons: Up, Down, Left, Right and Jump/Fire. Each one of these can be mapped to one key of the Spectrum keyboard, or they can be used like a ``Kempston'' joystick. Per default the buttons, including an additional but fixed menu button, are assigned as follows:

\begin{btnmap}
\opt{IPOD_3G_PAD,IPOD_4G_PAD}{\ButtonMenu/\ButtonPlay/}
\opt{RECORDER_PAD,ONDIO_PAD,IRIVER_H100_PAD,IRIVER_H300_PAD,GIGABEAT_PAD,GIGABEAT_S_PAD%
,IAUDIO_X5_PAD,SANSA_C200_PAD,SANSA_CLIP_PAD,SANSA_E200_PAD,SANSA_FUZE_PAD,MROBE100_PAD%
,PBELL_VIBE500_PAD,SANSA_FUZEPLUS_PAD,SAMSUNG_YH92X_PAD,SAMSUNG_YH820_PAD}%
{\ButtonUp/\ButtonDown/}
\opt{IRIVER_H10_PAD}{\ButtonScrollUp/\ButtonScrollDown/}
\opt{IPOD_3G_PAD,IPOD_4G_PAD,RECORDER_PAD,ONDIO_PAD,IRIVER_H100_PAD%
,IRIVER_H300_PAD,GIGABEAT_PAD,GIGABEAT_S_PAD,IAUDIO_X5_PAD%
,SANSA_C200_PAD,SANSA_CLIP_PAD,SANSA_E200_PAD,SANSA_FUZE_PAD,MROBE100_PAD%
,IRIVER_H10_PAD,PBELL_VIBE500_PAD,SANSA_FUZEPLUS_PAD,SAMSUNG_YH92X_PAD%
,SAMSUNG_YH820_PAD}{\ButtonLeft/\ButtonRight}
\opt{COWON_D2_PAD}{\TouchTopMiddle{}/\TouchBottomMiddle{}/\TouchMidLeft{}/\TouchMidRight}
\opt{MPIO_HD200_PAD}{\ButtonVolDown / \ButtonVolUp / \ButtonRew / \ButtonFF}
\opt{MPIO_HD300_PAD}{\ButtonRew / \ButtonFF / \ButtonScrollUp / \ButtonScrollDown}
\opt{HAVEREMOTEKEYMAP}{& }
& Directional movement\\
%
\opt{IPOD_3G_PAD,IPOD_4G_PAD,GIGABEAT_PAD,GIGABEAT_S_PAD,IAUDIO_X5_PAD%
,SANSA_C200_PAD,SANSA_CLIP_PAD,SANSA_E200_PAD,SANSA_FUZE_PAD,MROBE100_PAD%
,SANSA_FUZEPLUS_PAD}{\ButtonSelect}
\opt{RECORDER_PAD,SAMSUNG_YH92X_PAD,SAMSUNG_YH820_PAD}{\ButtonPlay}
\opt{IRIVER_H100_PAD,IRIVER_H300_PAD}{\ButtonOn}
\opt{ONDIO_PAD}{\ButtonMenu}
\opt{IRIVER_H10_PAD}{\ButtonRew}
\opt{COWON_D2_PAD}{\TouchCenter}
\opt{PBELL_VIBE500_PAD}{\ButtonOK}
\opt{MPIO_HD200_PAD}{\ButtonFunc}
\opt{MPIO_HD300_PAD}{\ButtonEnter}
\opt{HAVEREMOTEKEYMAP}{& }
& Jump/Fire\\
%
\opt{RECORDER_PAD}{\ButtonFOne}
\opt{ONDIO_PAD}{\ButtonOff}
\opt{IPOD_3G_PAD,IPOD_4G_PAD}{\ButtonHold{} switch}
\opt{IRIVER_H100_PAD,IRIVER_H300_PAD}{\ButtonMode}
\opt{GIGABEAT_PAD,GIGABEAT_S_PAD,COWON_D2_PAD}{\ButtonMenu}
\opt{SANSA_C200_PAD,SANSA_CLIP_PAD,SANSA_E200_PAD,MROBE100_PAD}{\ButtonPower}
\opt{SANSA_FUZE_PAD}{Long \ButtonHome}
\opt{IAUDIO_X5_PAD}{\ButtonPlay}
\opt{IRIVER_H10_PAD}{\ButtonFF}
\opt{PBELL_VIBE500_PAD}{\ButtonCancel}
\opt{MPIO_HD200_PAD}{\ButtonRec + \ButtonPlay}
\opt{MPIO_HD300_PAD}{Long \ButtonMenu}
\opt{SANSA_FUZEPLUS_PAD}{\ButtonBack}
\opt{SAMSUNG_YH92X_PAD,SAMSUNG_YH820_PAD}{\ButtonRew}
\opt{HAVEREMOTEKEYMAP}{& }
& Open ZXBox menu\\
\end{btnmap}

\subsubsection{ZXBox menu}
\begin{description}
\item[Vkeyboard.] This is a virtual keyboard representing the Spectrum keyboard. Controls are the same as in standard Rockbox, but you just press one key instead of entering a phrase.
\item[Play/Pause Tape.] Toggles playing of the tape (if it is loaded).
\item[Save Quick Snapshot.] Saves a snapshot to \fname{/.rockbox/zxboxq.z80}.
\item[Load Quick Snapshot.] Loads a snapshot from \fname{/.rockbox/zxboxq.z80}.
\item[Save Snapshot.] Saves a snapshot of the current state. Enter the full path and desired name, for example \fname{/games/zx/snapshots/chuckie.sna}. The snapshot format is chosen according to the extension you specify; \fname{.z80} is used by default if you leave the extension out.
\item[Toggle Fast Mode.] Toggles the fastest possible emulation speed (no sound, maximum frameskip, etc.). This is useful when loading tapes with some specific loaders.
\item[Options.]
  \begin{description}
  \item[Map Keys To Kempston.] Controls whether the \daps{} buttons should simulate a ``Kempston'' joystick or some assigned keys of the Spectrum keyboard.
  \item[Display Speed.] Toggles displaying the emulation speed (in percent).
  \item[Invert Colours.] Inverts the Spectrum colour palette, which sometimes helps visibility.
  \item[Frameskip.] Sets the number of frames to skip before displaying one. With zero frameskip ZXBox tries to display 50 frames per second.
  \item[Sound.] Turns sound on or off.
  \item[Volume.] Controls the volume of sound output.
  \item[Predefined Keymap.] Selects one of the predefined keymaps. For example \setting{2w90z} means: map ZXBox's \btnfnt{Up} to \setting{2}, \btnfnt{Down} to \setting{w}, \btnfnt{Left} to \setting{9}, \btnfnt{Right} to \setting{0} and \btnfnt{Jump/Fire} to \setting{z}. This example keymap is used in the ``Chuckie Egg'' game.
  \item[Custom Keymap.] This menu allows you to map one of the Spectrum keys accessible through the plugin's virtual keyboard to each one of the buttons.
  \end{description}
\item[Quit.] Quits the emulator.
\end{description}

\nopt{ipodvideo}{% no scaling for here, still include it?
\subsubsection{Hacking graphics}
Due to ZXBox's simple (but fast) scaling to the screen by dropping lines and columns, some games can become unplayable. It is possible to hack graphics to make them better visible with the help of a utility such as the ``Spectrum Graphics Editor''. Useful tools can be found at the ``World of Spectrum'' site (\url{http://www.worldofspectrum.org/utilities.html}).}
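The \setting{2w90z} convention above can be decoded mechanically. A minimal Python sketch, illustrative only and not Rockbox source; the function name and the fixed button order are my own assumptions based on the manual text:

```python
# Decode a ZXBox "predefined keymap" string such as "2w90z":
# one Spectrum key per emulator button, in the fixed order
# Up, Down, Left, Right, Jump/Fire (order assumed from the manual).
BUTTONS = ("Up", "Down", "Left", "Right", "Jump/Fire")

def decode_keymap(keymap: str) -> dict:
    if len(keymap) != len(BUTTONS):
        raise ValueError("expected one Spectrum key per button")
    return dict(zip(BUTTONS, keymap))

print(decode_keymap("2w90z"))
# {'Up': '2', 'Down': 'w', 'Left': '9', 'Right': '0', 'Jump/Fire': 'z'}
```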
# Right definition of linear grammar

I was referring to the book by Peter Linz, which defines a linear grammar as follows:

A linear grammar is a grammar in which at most one variable can occur on the right side of any production, without restriction on the position of this variable.

The Wikipedia page also defines it a similar way at the top of the Linear grammar page:

a linear grammar is a context-free grammar that has at most one nonterminal in the right hand side of each of its productions.

By these definitions, it seems that a production of the following form is allowed: $S\rightarrow aSb$

But further down the same page, Wikipedia defines it as follows:

linear grammars in which all nonterminals in right hand sides are at the left or right ends, but not necessarily all at the same end.

By this definition, it seems that productions of the above form are not allowed. So which definition is correct?

## 1 Answer

The second quote from Wikipedia has lost its context. Here it is complete, with emphasis added:

Another special type of linear grammar is the following:

• linear grammars in which all nonterminals in right hand sides are at the left or right ends, but not necessarily all at the same end.

A linear grammar has at most one non-terminal on the right-hand side of any rule, as per Linz's definition. But any linear grammar can be transformed into an equivalent linear grammar of that special type, simply by adding a new non-terminal. Wikipedia goes on to show an example of this transformation. So it is possible to assume without loss of generality that a linear grammar is in that form, if that turns out to be helpful. (A sketch of the transformation follows the comments below.)

• Do you mean (1) a linear grammar is one which has at most one non-terminal on the right-hand side of each production (i.e. $S\rightarrow aSb$, $S\rightarrow aS$, $S\rightarrow Sb$), (2) left-linear is one in which all non-terminals are at the left end (i.e. $S\rightarrow Sb$), (3) right-linear is one in which all non-terminals are at the right end (i.e. $S\rightarrow aS$), and (4) we can have "another type of linear grammar" (without a specific name) which can have productions of both forms, right-linear and left-linear ($S\rightarrow aS$, $S\rightarrow Sb$, but not $S\rightarrow aSb$)? – anir Jul 21 '18 at 12:49
• @anir: That's basically what Wikipedia is saying, yes. I personally wouldn't have phrased it that way; the point is that the restricted form (non-terminals are always at one end) does not restrict the expressivity of linear grammars. Nevertheless, the key point of a linear grammar is that every rhs has at most one non-terminal. That's why it's called linear. It's like a linear polynomial: the largest exponent must be one (and you can canonicalise by collapsing all the linear terms into a single term). – rici Jul 21 '18 at 14:38
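To make the transformation referenced in the answer concrete, here is a minimal Python sketch. It assumes productions are (lhs, rhs) pairs with rhs a list of symbols, that a symbol is a nonterminal iff it starts with an uppercase letter, and that the fresh names T0, T1, … are unused in the grammar:

```python
def to_end_form(productions):
    """Rewrite a linear grammar so that every right-hand-side
    nonterminal sits at the left or right end of its production."""
    out, k = [], 0
    for lhs, rhs in productions:
        nts = [i for i, sym in enumerate(rhs) if sym[0].isupper()]
        if not nts or nts[0] in (0, len(rhs) - 1):
            out.append((lhs, rhs))                # already in end form
        else:
            i = nts[0]
            fresh = f"T{k}"                       # assumed-unused nonterminal
            k += 1
            out.append((lhs, rhs[:i] + [fresh]))  # A -> u T   (T at right end)
            out.append((fresh, rhs[i:]))          # T -> B v   (B at left end)
    return out

# S -> aSb becomes S -> aT0 together with T0 -> Sb:
print(to_end_form([("S", ["a", "S", "b"])]))
# [('S', ['a', 'T0']), ('T0', ['S', 'b'])]
```

Each rewrite preserves the generated language: the original derivation step A ⇒ uBv becomes A ⇒ uT0 ⇒ uBv.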
# What happens to the determinant of a matrix when a row of the matrix is multiplied by a constant?

What will happen to the determinant of the following matrix if the third row is multiplied by 3?

$$\begin{pmatrix} 3 & 3 & 3 \\ 1 & 5 & 6 \\ 3 & 4 & 5 \end{pmatrix}$$

Also, in general, how does scaling a row of a square matrix affect its determinant?

-

## 2 Answers

The determinant is multiplied by the scaling factor. You can see this from the definition of the determinant as the signed sum of all products with one factor from each row and column: since each summand contains exactly one factor from the scaled row, each summand is scaled by the scaling factor, and thus so is the sum.

-

• What will happen if a column is multiplied by a constant? Will it have the same effect? – Anderson Green Sep 26 '12 at 20:31
• @Anderson: Since "row" and "column" play equivalent roles in the definition of the determinant as the signed sum of all products with one factor from each row and column, all properties of determinants that hold for rows also hold for columns and vice versa. – joriki Sep 26 '12 at 20:33
• Does this mean that the determinant of the transpose of a matrix is the same as the determinant of the original matrix? – Anderson Green Sep 26 '12 at 20:43
• @Anderson: Indeed it does. – joriki Sep 26 '12 at 20:48

If you multiply a row by $n$, the determinant is multiplied by $n$.

-
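A quick numerical confirmation of both facts (row scaling and transpose invariance), using the matrix from the question; this is my own sketch with numpy:

```python
import numpy as np

A = np.array([[3., 3., 3.],
              [1., 5., 6.],
              [3., 4., 5.]])

B = A.copy()
B[2] *= 3  # scale the third row by 3

print(np.linalg.det(A))   # 9.0  (up to floating-point noise)
print(np.linalg.det(B))   # 27.0, i.e. 3 times the original
print(np.isclose(np.linalg.det(B), 3 * np.linalg.det(A)))  # True
print(np.isclose(np.linalg.det(A.T), np.linalg.det(A)))    # True: det(A^T) = det(A)
```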
## Found 54 Documents (Results 1–54)

### Global exact controllability of ideal incompressible magnetohydrodynamic flows through a planar duct. (English) Zbl 07549313
MSC: 93B05 76E25 93C20

### Controllability of bilinear quantum systems in explicit times via explicit control fields. (English) Zbl 1477.35202
MSC: 35Q41 93C20 93B05

### On the global controllability of scalar conservation laws with boundary and source controls. (English) Zbl 1479.35561
MSC: 35L65 93B05 93C20

### On the uniform controllability for a family of non-viscous and viscous Burgers-$\alpha$ systems. (English) Zbl 1480.93034
MSC: 93B05 35Q35 35G25

### Bilinear quantum systems on compact graphs: well-posedness and global exact controllability. (English) Zbl 1461.93033
MSC: 93B05 93B70 81Q93

### Global exact controllability to the trajectories of the Kuramoto-Sivashinsky equation. (English) Zbl 1433.93016
MSC: 93B05 93C20 35K55

### Exact boundary controllability for one-dimensional quasilinear hyperbolic systems. (English) Zbl 1429.35003
In: Coron, Jean-Michel (ed.) et al., One-dimensional hyperbolic conservation laws and their applications. Lecture notes for the LIASFMA Shanghai summer school, Shanghai, China, August 16–27, 2015. Hackensack, NJ: World Scientific; Beijing: Higher Education Press. Ser. Contemp. Appl. Math. CAM 21, 1-86 (2019).

### Exact boundary controllability for 1-D quasilinear wave equations with dynamical boundary conditions. (English) Zbl 1391.35283
MSC: 35L72 93B05 35L20

### Global controllability and stabilizability of Kawahara equation on a periodic domain. (English) Zbl 1322.93022
MSC: 93B05 93D20 35Q53

### Global exact controllability of 1d Schrödinger equations with a polarizability term. (Contrôle exact global d'équations de Schrödinger 1d avec un terme de polarisabilité.) (English. Abridged French version) Zbl 1287.93013
MSC: 93B05 93C20 35Q41

### Exact null controllability of semilinear evolution systems. (English) Zbl 1298.90078
MSC: 90C26 90C48

### Exact boundary controllability of nodal profile for 1-D quasilinear wave equations. (English) Zbl 1284.35255
MSC: 35L20 35L72 93B05

### Approximate controllability for the parabolic-elliptic system. (Chinese. English summary) Zbl 1265.93028
MSC: 93B05 93C20

### Global exact boundary controllability for 1-D quasilinear wave equations. (English) Zbl 1213.35290
MSC: 35L20 35L72 93B05

### Global exact boundary controllability for first order quasilinear hyperbolic systems. (English) Zbl 1213.35299
MSC: 35L60 93C20 93B05

### Exact boundary controllability for reducible quasi-linear hyperbolic system with a class of nonlocal boundary conditions. (Chinese. English summary) Zbl 1224.35268
MSC: 35L50 49J20 93B05

### Global controllability of nonviscous and viscous Burgers-type equations. (English) Zbl 1282.93050
MSC: 93B05 93C20 35Q53

### A note on the one-side exact boundary controllability for quasilinear hyperbolic systems. (English) Zbl 1152.35425
MSC: 35L50 93C20 93B05

### Exact boundary observability for quasilinear hyperbolic systems. (English) Zbl 1155.93015
MSC: 93B07 35L60 93C20

### Global exact controllability for quasilinear hyperbolic systems of diagonal form with linearly degenerate characteristics. (English) Zbl 1143.93008
MSC: 93B05 93C20 35L40

### Exact controllability of systems governed by parabolic differential equations with globally Lipschitz nonlinearities. (Chinese. English summary) Zbl 1174.93338
MSC: 93B05 93C10 35K20

### Exact controllability for multidimensional semilinear hyperbolic equations. (English) Zbl 1143.93005
MSC: 93B05 93B07 35B37

### Exact controllability for nonautonomous first order quasilinear hyperbolic systems. (English) Zbl 1197.93062
MSC: 93B05 35L72 93C20

### Exact local controllability of a one-control reaction-diffusion system. (English) Zbl 1136.93007
MSC: 93B05 93C20

### Local exact controllability to the trajectories of the Boussinesq system. (English) Zbl 1098.35027
MSC: 35B37 93B05 93C10 35R35

### Exact boundary controllability for quasilinear wave equations. (English) Zbl 1105.93013
MSC: 93B05 93C20 49J20

### Exact boundary controllability for second order quasilinear hyperbolic systems. (English) Zbl 1134.93316
MSC: 93B05 35L50 35L20 35B37 93C20

### Global exact boundary controllability of a class of quasilinear hyperbolic systems of conservation laws. II. (English) Zbl 1105.93012
MSC: 93B05 93C20 49J20

### Global exact controllability of semi-linear time reversible systems in infinite dimensional space. (English) Zbl 1060.93050
In: Cohen, Gary C. (ed.) et al., Mathematical and numerical aspects of wave propagation, WAVES 2003. Proceedings of the sixth international conference on mathematical and numerical aspects of wave propagation, Jyväskylä, Finland, 30 June – 4 July 2003. Berlin: Springer (ISBN 3-540-40127-X/hbk). 183-188 (2003).

### Global exact boundary controllability of a class of quasilinear hyperbolic systems of conservation laws. (English) Zbl 1106.93306
MSC: 93B05 35L65

### Controllability of second order semilinear Volterra integrodifferential systems in Banach spaces. (English) Zbl 0935.93013
MSC: 93B05 93C10 93C30

### Well posedness and control of semilinear wave equations with iterated logarithms. (English) Zbl 0939.35124
MSC: 35L70 35L20

### Boundary controllability of differential equations with nonlocal condition. (English) Zbl 0917.93009
MSC: 93B05 93C25 93C10

### Global exact controllability of a class of quasilinear hyperbolic systems. (English) Zbl 0915.93007
MSC: 93B05 93C20

### Null controllability of semilinear integrodifferential systems in Banach space. (English) Zbl 0892.93011
MSC: 93B05 93C25 93C10

### Exact controllability of waves in somewhat regular domains. (Contrôlabilité exacte des ondes dans des ouverts peu réguliers.) (French) Zbl 0892.93009
Reviewer: O. Cârjá (Iaşi)
MSC: 93B05 53C20 35L05

### Energy decay and exact controllability for the Petrovsky equation in a bounded domain. (English) Zbl 0712.93003
MSC: 93B05 93C20 35B37 93C25

### The application of fixed point theorems to global nonlinear controllability problems. (English) Zbl 0576.93009
In: Mathematical control theory, Banach Cent. Publ. 14, 319-344 (1985).
Reviewer: J. Klamka
# differential forms in algebraic topology solutions

Differential forms in algebraic topology, by Raoul Bott and Loring W. Tu, Graduate Texts in Mathematics, Vol. 82, Springer-Verlag, New York, 1982, xiv + 331 pp. Reviewed by James D. Stasheff, Bull. Amer. Math. Soc. (N.S.), Volume 10, Number 1 (1984), 117-121.

Developed from a first-year graduate course in algebraic topology, the text is an informal introduction to some of the main ideas of contemporary homotopy and cohomology theory. The guiding principle in the book is to use differential forms as an aid in exploring some of the less digestible aspects of algebraic topology. By using the de Rham theory of differential forms as a prototype of cohomology, the machineries of algebraic topology are made easier to assimilate; accordingly, the book moves primarily in the realm of smooth manifolds and uses the de Rham theory as a prototype of all of cohomology. The use of differential forms avoids the painful and, for the beginner, unmotivated homological algebra in algebraic topology: for example, the wedge product of differential forms allows immediate construction of cup products without digression into acyclic models, simplicial sets, or the Eilenberg-Zilber theorem.

Some acquaintance with manifolds, simplicial complexes, singular homology and cohomology, and homotopy groups is helpful, but not really necessary: within the text the more advanced results that are needed are stated with care, so that a mathematically mature reader who accepts these background materials on faith, with a good knowledge of linear algebra, advanced calculus, and point-set topology, should be able to read the entire book with minimal prerequisites. There are more materials here than can be reasonably covered in a one-semester course, and certain sections may be omitted at first reading without loss of continuity. With its stress on concreteness, motivation, and readability, the book is equally suitable for self-study and as a one-semester course in topology. It is not intended to be foundational; rather, it is only meant to open some of the doors to the formidable edifice of modern algebraic topology, and is offered in the hope that such an informal account of the subject at a semi-introductory level fills a gap in the literature.
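To spell out the cup-product remark above (a standard verification, added here for concreteness; it is not quoted from the book): on de Rham classes one sets $[\omega]\smile[\eta]:=[\omega\wedge\eta]$, and the Leibniz rule

$$d(\omega\wedge\eta)=d\omega\wedge\eta+(-1)^{\deg\omega}\,\omega\wedge d\eta$$

shows that the wedge of two closed forms is closed, while for closed $\eta$

$$(\omega+d\xi)\wedge\eta=\omega\wedge\eta+d(\xi\wedge\eta),$$

so the class is independent of the chosen representatives.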
Fragments of a solutions set for the book circulate alongside it. One fragment recalls the book's definition of a differential complex: a direct sum of vector spaces $C=\bigoplus_{q\in\mathbb{Z}}C^q$ indexed by the integers is called a differential complex if there are homomorphisms $\cdots\to C^{q-1}\xrightarrow{d}C^q\xrightarrow{d}C^{q+1}\to\cdots$ such that $d^2=0$. Another notes that the differential $D:C\to C$ induces a differential in cohomology which is the zero map, as any cohomology class is represented by an element in the kernel of $D$. A third (Q.3) observes that $K^n$ is in general not a subcomplex; presumably what was meant is that the grading induces a grading $K_p^{\bullet}$ for each $p\in\dots$ A typical citation context reads: denoting the form on the left-hand side by $\omega$, in de Rham cohomology one has $\tfrac{i}{2\pi}[db_\alpha]=\tfrac{i}{2\pi}[d\bar b]+\alpha[\Sigma]=c_1(\bar L)+\alpha[\Sigma]$; since the second cohomology of the neighbourhood is 1-dimensional, it follows that this closed 2-form represents the Poincaré dual of $\Sigma$ (see [BT] for this construction of the Thom class).

The page also aggregates abstracts of works citing the book, including: Finite element exterior calculus, homological techniques, and applications (Douglas N. Arnold, Richard S. Falk, Ragnar Winther); Finite elements in computational electromagnetism; Lectures on 2D Yang-Mills theory, equivariant cohomology and topological field theories (Stefan Cordes, Gregory Moore, Sanjaye Ramgoolam); Transverse measures, the modular class, and a cohomology pairing for Lie algebroids (Sam Evens, Jiang-hua Lu, Alan Weinstein); Differentiable and algebroid cohomology, van Est isomorphisms, and characteristic classes; Introduction to the variational bicomplex; Sasaki-Einstein manifolds and volume minimisation (Dario Martelli, James Sparks, et al.); Coverage and hole-detection in sensor networks via homology (Fourth International Conference on Information Processing in Sensor Networks, IPSN'05, UCLA); work of P. B. Kronheimer and T. S. Mrowka on embedded surfaces; and A short course in differential geometry and topology.

(i) Topology of embedded surfaces. Let X be a smooth, simply-connected 4-manifold, and ξ a 2-dimensional homology class in X. One of the features of topology in dimension 4 is the fact that, although one may always represent ξ as the fundamental class of some smoothly embedded surface Σ, the degree of the normal bundle … A related difficulty for Donaldson or Seiberg-Witten invariants of a closed oriented 4-manifold with $b_2^+=1$ is that one has to deal with reducible solutions; there has been a lot of work in this direction in the Donaldson theory context (see Göttsche …).

Transverse measures and the modular class: we show that every Lie algebroid A over a manifold P has a natural representation on the line bundle $Q_A=\wedge^{top}A\otimes\wedge^{top}T^*P$. The line bundle $Q_A$ may be viewed as the Lie algebroid analog of the orientation bundle in topology, and sections of $Q_A$ may be viewed as transverse measures to A. As a consequence, there is a well-defined class in the first Lie algebroid cohomology $H^1(A)$ called the modular class of the Lie algebroid A. We show that there is a natural pairing between the Lie algebroid cohomology spaces of A with trivial coefficients and with coefficients in $Q_A$; this generalizes the pairing used in the Poincaré duality of finite-dimensional Lie algebra cohomology. The case of holomorphic Lie algebroids is also discussed, where the existence of the modular …

Differentiable and algebroid cohomology: in the first section we discuss Morita invariance of differentiable/algebroid cohomology; applied to Poisson manifolds, this immediately gives a slight improvement of Hector-Dazord's integrability criterion [12]. As a first application we clarify the connection between differentiable and algebroid cohomology (proved in degree 1, and conjectured in degree 2 by Weinstein-Xu [47]). In the second section we present an extension of the van Est isomorphism to groupoids; as a second application we extend van Est's argument for the integrability of Lie algebras. In the third section we describe the relevant characteristic classes of representations, living in algebroid cohomology, as well as their relation to the van Est map.

Sasaki-Einstein manifolds and volume minimisation: we study a variational problem whose critical point determines the Reeb vector field for a Sasaki–Einstein manifold. We show that the Einstein–Hilbert action, restricted to a space of Sasakian metrics on a link L in a Calabi–Yau cone M, is the volume functional, which in fact is a function on the space of Reeb vector fields. We relate this function both to the Duistermaat–Heckman formula and also to a limit of a certain equivariant index on M that counts holomorphic functions; both formulae may be evaluated by localisation, leading to a general formula for the volume function in terms of topological fixed point data. As a result we prove that the volume of any Sasaki–Einstein manifold, relative to that of the round sphere, is always an algebraic number. We also show that our variational problem dynamically sets to zero the Futaki invariant; in complex dimension n = 3 these results provide, via AdS/CFT, the geometric counterpart of a-maximisation in four-dimensional superconformal field theories. This extends our previous work on Sasakian geometry by lifting the condition that the manifolds are toric.

Finite elements in computational electromagnetism: this article discusses finite element Galerkin schemes for a number of linear model problems in electromagnetism. The finite element schemes are introduced as discrete differential forms, matching the coordinate-independent statement of Maxwell's equations in the calculus of differential forms. As discrete differential forms represent a genuine generalization of conventional Lagrangian finite elements, the analysis is based upon a judicious adaptation of established techniques in the theory of finite elements; the asymptotic convergence of discrete solutions is investigated theoretically, along with risks and difficulties haunting finite element schemes that do not fit the framework of discrete differential forms.

K3 surfaces and string duality: the primary purpose of these lecture notes is to explore the moduli space of type IIA, type IIB, and heterotic string compactified on a K3 surface. K3 surfaces provide a fascinating arena for string compactification as they are not trivial spaces but are sufficiently simple for one to be able to analyze most of their properties in detail. We review the necessary facts concerning the classical geometry of K3 surfaces and then review "old string theory" on K3 surfaces in terms of conformal field theory; the type IIA string, the type IIB string, the E8 × E8 heterotic string, and the Spin(32)/Z2 heterotic string on a K3 surface are then each analyzed in turn. The main tool which is invoked is that of string duality, and the discussion is biased in favour of purely geometric notions concerning the K3 surface. We use the notation Γm,n to refer to an even self-dual lattice of signature (m, n); the classification of even self-dual lattices is extremely restrictive. (One citation context notes that $H^2(S,\mathbb{Z})$ is torsion-free, which follows from $\pi_1(S)=0$ and the various relations between homotopy and torsion in homology and cohomology [12].)

Two-dimensional Yang-Mills theory: these are expository lectures reviewing (1) recent developments in two-dimensional Yang-Mills theory and (2) the construction of topological field theory Lagrangians. Topological field theory is discussed from the point of view of infinite-dimensional differential geometry; we emphasize the unifying role of equivariant cohomology both as the underlying principle in the formulation of BRST transformation laws and as a central concept in the geometrical interpretation of topological field theory path integrals.

A related survey treats algebraic differential forms, cohomological invariants, the h-topology, and singular varieties (MSC Primary 14-02; Secondary 14F10, 14J17, 14F20), at least in characteristic 0, and also explains problems and solutions in positive characteristic.

Coverage and hole-detection in sensor networks via homology: we consider coverage problems in sensor networks of stationary nodes with minimal geometric data; in particular, there are no coordinates and no localization of nodes. We introduce a new technique for detecting holes in coverage by means of homology, an algebraic topological invariant. The impetus for these techniques is a completion of network communication graphs to two types of simplicial complexes: the nerve complex and the Rips complex. The former gives information about coverage intersection of individual sensor nodes, and is very difficult to compute; by the Čech theorem, the nerve complex of a collection of convex sets has the homotopy type of the union of the sets, but nerves are very difficult to compute without precise locations of the nodes and a global coordinate system. The latter captures connectivity in terms of inter-node communication: it is easy to compute but does not in itself yield coverage data. We obtain coverage data instead by using persistence of homology classes for Rips complexes; these homological invariants are computable, and we provide simulation results.
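To make "homology detects holes" concrete in the spirit of the coverage paragraph above, here is a minimal rational-homology computation with plain numpy boundary matrices. This is my own illustration, not the cited authors' code:

```python
import numpy as np

# Hollow triangle: vertices v0,v1,v2 and oriented edges e01,e12,e02
# with boundary map  ∂e_ij = v_j - v_i;  no 2-simplex, so there is a hole.
D1 = np.array([[-1,  0, -1],   # rows: v0, v1, v2
               [ 1, -1,  0],   # cols: e01, e12, e02
               [ 0,  1,  1]], dtype=float)

r1 = np.linalg.matrix_rank(D1)
b0 = 3 - r1          # dim C_0 - rank ∂_1 -> number of connected components
b1 = (3 - r1) - 0    # dim ker ∂_1 - rank ∂_2 (∂_2 = 0 here) -> number of holes
print(b0, b1)        # 1 1 : one component, one 1-dimensional hole

# Filling in the 2-simplex t012 (∂t = e01 + e12 - e02) kills the hole:
D2 = np.array([[1.], [1.], [-1.]])           # rows ordered e01, e12, e02
print((3 - r1) - np.linalg.matrix_rank(D2))  # 0 : no hole remains
```

The same linear algebra, applied to the Rips complex of a communication graph, flags potential coverage holes as nonzero first Betti numbers.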
With manifolds, simplicial complexes, singular homology and cohomology, and is very difficult to compute previous. Of the highest quality are in-troduced as discrete differential Forms avoids the painful and for the beginner unmotivated homological in...
# Transformations of a vector in the active viewpoint

1. Nov 18, 2015

### spaghetti3451

In Peskin and Schroeder, page 37, a diagram illustrates how, under an active transformation, the orientation of a vector field must be rotated forward as the point of evaluation of the field is changed.

I understand that the change of the orientation of the vector field is the same idea as that of the transformation matrix mixing up the vector field components to form the new vector field components.

Now, the textbook goes on to mention that:

under 3-dimensional rotations, $V^{i}(x) \rightarrow R^{ij}V^{j}(R^{-1}x)$

under Lorentz transformations, $V^{\mu}(x) \rightarrow \Lambda^{\mu}{}_{\nu}V^{\nu}(\Lambda^{-1}x)$

I understand that because we are working in the active viewpoint, the vector field itself is rotated (or boosted) in space(time) and the coordinate system (or reference system) is held fixed.

Can someone explain to me in simple layman's terms why we apply a transformation matrix to the vector field while using the inverse transformation matrix on the coordinates?

2. Nov 18, 2015

### ddd123

A vector field is a map with vector domain and range. Think about the classic $\mathbb{R}^2 \rightarrow \mathbb{R}^2$ map. If you just do $V^i(x) \mapsto R^{ij} V^j(x)$ you are rotating the range vector while keeping it in the same position. That means you don't get a rotated vector field, but a completely different field.

Think about the classic curling field, the one that circulates around the origin: if you do $V^i(x) \mapsto R^{ij} V^j(x)$ with a 90° rotation matrix, you obtain a central vector field! Whereas if you actively rotate the whole field, the resulting vector is indeed rotated by 90°, but it refers to the old vector function evaluated at the old position.

Edit: maybe I see where your doubt lies. For instance, you don't do $V^i(x) \mapsto R^{ij} V^j(R x)$, because the right-hand-side V would have to be a different function from the left-hand-side V to be the actively rotated field. Also, in the correct formula $V^i(x) \mapsto R^{ij} V^j(R^{-1} x)$, don't see the resulting field as a function of $R^{-1}x$; see it as a function of $x$. Then it will make sense.

Last edited: Nov 18, 2015

3. Nov 19, 2015

### spaghetti3451

Thanks! I get it!
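To see the bookkeeping numerically, here is a small editorial sketch (not part of the thread; the field and sample point are my own choices). It evaluates the actively rotated field $V'^i(x) = R^{ij}V^j(R^{-1}x)$ for the curling field $V(x,y) = (-y, x)$. Since that field is rotationally symmetric about the origin, the rotated field must agree with the original, which gives a built-in self-check.

```cpp
#include <cmath>
#include <cstdio>

// The "curling" field V(x, y) = (-y, x), circulating around the origin.
void V(double x, double y, double out[2]) {
    out[0] = -y;
    out[1] = x;
}

int main() {
    const double th = std::acos(-1.0) / 2.0;     // 90-degree active rotation
    const double R[2][2]    = {{std::cos(th), -std::sin(th)},
                               {std::sin(th),  std::cos(th)}};
    const double Rinv[2][2] = {{ std::cos(th), std::sin(th)},
                               {-std::sin(th), std::cos(th)}};

    const double p[2] = {1.0, 2.0};              // evaluation point x

    // Pull the point back: q = R^{-1} x.
    double q[2] = {Rinv[0][0]*p[0] + Rinv[0][1]*p[1],
                   Rinv[1][0]*p[0] + Rinv[1][1]*p[1]};

    // Evaluate the old field there, then rotate the vector forward:
    // V'^i(x) = R^{ij} V^j(R^{-1} x).
    double v[2];
    V(q[0], q[1], v);
    double vnew[2] = {R[0][0]*v[0] + R[0][1]*v[1],
                      R[1][0]*v[0] + R[1][1]*v[1]};
    printf("V'(%g, %g) = (%g, %g)\n", p[0], p[1], vnew[0], vnew[1]);

    // The curling field is invariant under rotations about the origin,
    // so the actively rotated field must coincide with the original.
    double vorig[2];
    V(p[0], p[1], vorig);
    printf("V (%g, %g) = (%g, %g)\n", p[0], p[1], vorig[0], vorig[1]);
    return 0;
}
```

Both lines print (-2, 1): the vector that used to sit at $R^{-1}x$, rotated forward, is exactly the field value the rotated configuration should carry at $x$. For a field without this symmetry the two lines would differ, which is the whole point of the $R^{-1}x$ argument.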
## The private life of numbers (2): from figurate to happy numbers

This is Part 2 of a set of three posts adapted from Mateja Prešern's talk at The Burn in November 2011.

In Part 1 we looked at the set of "lucky" numbers. Of course, there are many other sets of "interesting" numbers that we could define — Tanya Khovanova's excellent Number Gossip website lists several dozen. Some of these sets are really just curiosities, while others have a lot of mathematical interest, and even touch on unsolved problems. In this post we'll look at a few of them…

### Figurate numbers

The natural number $n$ is a triangular number if $n$ dots can be assembled into an equilateral triangle (picture rows of 1, 2, 3, … dots stacked on top of each other). The first few triangular numbers are 1, 3, 6, 10, 15, 21, 28, 36, … The nth triangular number is given by

$T_n = \displaystyle\sum_{k=1}^n k =\displaystyle\frac{n(n+1)}{2} = \binom{n+1}{2}.$

(A good exercise for the reader: prove this, by induction or otherwise!)

Similarly, a square number is one that counts the dots in a square. Clearly the nth square number is just $n^2$. It's perhaps less obvious that the triangular and square numbers are related: the nth square number is the sum of the nth and (n-1)th triangular numbers. Here's why: split the $n \times n$ square of dots along a staircase just below the diagonal; one piece is a triangle of $T_n$ dots and the other a triangle of $T_{n-1}$ dots.

We wouldn't be mathematicians, of course, if we didn't generalise things wherever possible… A polygonal number is any number that counts the number of dots in a regular polygon: there are pentagonal numbers, hexagonal numbers and so on. The notion can also be generalised into three dimensions, giving (for example) the tetrahedral numbers, and still more generally to give the general concept of a figurate number. Finding ways to count these is a good exercise for any budding combinatorialists among you…

### Some ominously named numbers

"The Number of the Beast is 666" by William Blake

Vampire numbers. The number $n$ is called a vampire number if there exists a factorization of $n$ using precisely the digits of $n$ (in some order). For example:

$126 = 6 \times 21.$

$1260 = 21 \times 60.$

$125460 = 204 \times 615 = 246 \times 510.$

Evil numbers. The number $n$ is an evil number if it has an even number of 1s in its binary expansion. (If it has an odd number of 1s, it is odious. Sometimes you just can't win.) For example, $3 = 11_2$, $5 = 101_2$, $6 = 110_2$ and $9 = 1001_2$ are evil, while $1 = 1_2$ is odious. There are 4999 evil numbers below 10 000. (How could you work this out without listing them all?)

Apocalyptic powers. The number $n$ is an apocalyptic power if $2^n$ contains the consecutive digits 666. For example,

$2^{157} = 182687704\mathbf{666}362864775460604089535377456991567872.$

The first ten apocalyptic powers are: 157; 192; 218; 220; 222; 224; 226; 243; 245; 247.

### Happy numbers

After all that, it's a relief to turn to something more cheerful! Take the sum of the squares of the digits of a natural number $n$. If iterating this operation eventually leads to 1, then $n$ is a happy number. For example, 23 is happy:

$23 \ \rightarrow \ 2^2+3^2 = 13 \ \rightarrow \ 1^2+3^2 = 10 \ \rightarrow \ 1^2+0^2 = 1.$

The first few happy numbers are 1, 7, 10, 13, 19, 23, 28, 31, 32, 44, 49, 68, 70…

If $d_1d_2...d_k$ is happy, then so is $d_1^2+d_2^2+...+d_k^2$. For example,

$49 \ \rightarrow \ 4^2+9^2 = 97 \ \rightarrow \ 9^2+7^2 = 130 \ \rightarrow \ 1^2+3^2 = 10 \ \rightarrow \ 1^2 = 1.$

If $d_1d_2...d_k$ is happy, then so is any number with the same digits, and any amount of additional zero digits. For example, 167, 176, 617, 671, 716 and 761 are all happy; and so are 1670, 1607, 1067,…

You might have noticed that among the first few happy numbers listed above, the consecutive numbers 31 and 32 appear.
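Happiness is also easy to test by brute force. Here is a minimal sketch (an editorial illustration; the function names are mine). It leans on a fact quoted further down — every unhappy number eventually falls into a cycle containing 4 — so the iteration may stop as soon as it reaches 1 or 4, and the driver then scans for runs of consecutive happy numbers.

```cpp
#include <cstdio>

// Sum of squared decimal digits.
long digitSquareSum(long n) {
    long s = 0;
    for (; n > 0; n /= 10) {
        long d = n % 10;
        s += d * d;
    }
    return s;
}

// Every unhappy number eventually enters the cycle containing 4
// (see below), so iterating until we hit 1 or 4 always terminates.
bool isHappy(long n) {
    while (n != 1 && n != 4)
        n = digitSquareSum(n);
    return n == 1;
}

int main() {
    // Print every run of two or more consecutive happy numbers up to 2000.
    int run = 0;
    for (long n = 1; n <= 2000; ++n) {
        if (isHappy(n)) {
            ++run;
        } else {
            if (run >= 2)
                printf("run of %d ending at %ld\n", run, n - 1);
            run = 0;
        }
    }
    return 0;
}
```

Running it lists every such run below 2000, beginning with the pair ending at 32 and including the triple ending at 1882 mentioned below.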
A little later on, we find a sequence of three consecutive happy numbers: 1880, 1881, 1882. Further on still, we find 7839, 7840, 7841, 7842. In fact,

Theorem. There exist sequences of consecutive happy numbers of arbitrary length.

The proof is a little too involved to give here, but here's a link to the paper by El-Sedy and Siksek (2000) that proves it.

The first sequence of five consecutive happy numbers starts at 44488. The first sequence of six consecutive happy numbers happens also to be the first sequence of seven consecutive happy numbers, namely the sequence that begins with 7899999999999959999999996. Happy numbers in the smallest sequence of eight have 159 digits, and we're not going to list them here! (No sequences of 20 have been found, so that gives you something to do on a rainy afternoon.)

If n is not happy, then the sequence of sums of squares of digits does not go to 1. Such numbers are called — wait for it! — unhappy. It can be shown that the sequence of iterates for unhappy numbers always ends up in the cycle 4, 16, 37, 58, 89, 145, 42, 20. There are infinitely many happy numbers and infinitely many unhappy numbers.

"Don't they teach recreational mathematics any more?"

(By the way, for anybody who doubts whether this sort of thing has practical applications: in the Doctor Who episode 42, the sequence of happy primes 313, 331, 367, 379 is used as a code for unlocking a sealed door on a spaceship about to collide with a sun. So there.)

Coming in Part 3: numbers go beyond happiness and reach perfection… (MP/DP)

This entry was posted in Articles.

### 3 Responses to The private life of numbers (2): from figurate to happy numbers

1. len says:

Hello, I really liked your blog and I'll come back here often in the future. About this post, I would ask you: can happy numbers be expressed by recurrence relations, like the recursive formula for the Fibonacci numbers? Or is that impossible? Goodbye and see you soon …

• strathmaths says:

Hmm… There certainly can't be a simple recurrence formula as there is for the Fibonacci numbers, because the property of "happiness" depends on the choice of base — happy numbers in base 7, say, wouldn't be the same as happy numbers in base 10. Algebraic recurrence relations like that for the Fibonacci numbers, on the other hand, don't care about the base that's used. This doesn't mean there isn't some very sneaky way to construct a recurrence relation that uses some property of the base-10 representation of the numbers… all I can say is that I've never heard of one, and I find it pretty hard to imagine how it could be constructed. But of course that doesn't prove anything! (DP)
# C++ equivalent to Application.StartupPath?

## Recommended Posts

Just a quick question: what is the native C++ (non-.NET) equivalent to Application.StartupPath? I've googled it with no luck...

##### Share on other sites

There is no standard equivalent. argv[0] is all you have.

##### Share on other sites

In Windows, you can use GetModuleFileName() and lop off the file name to get just the path.

##### Share on other sites

Okay, with GetModuleFileName, what do I pass as the hModule? I've never used that type before... also, what is argv[0]? I'm kind of new to C++; I understand it well but I don't know all the functions yet (switched from VB and C#).

EDIT: Nevermind, I googled it and found out it should be NULL. Thanks for telling me about the function [smile]

EDIT AGAIN: I've implemented it successfully, thanks for the help. It took me a while to figure out the LPCH buffer and size, but that's what MSDN is for :D

##### Share on other sites

Quote: Original post by Super Llama
also, what is argv[0]?

In a standard C++ application, your main function should look something like this:

```cpp
#include <iostream>

int main(int argc, char **argv) {
    std::cout << argv[0] << '\n'; // This will print the application's launch path
}
```

The first item in the argv array (i.e. argv[0]) is your program's launch path. However, if you are using WinMain instead of main, things will be a little different.

##### Share on other sites

Quote: Original post by swiftcoder
The first item in the argv array (i.e. argv[0]) is your program's launch path.

That's not guaranteed. According to the C++ standard (section 3.6.1 paragraph 2), argv[0] is "the name used to invoke the program". This may or may not include the path to the application, and in fact is allowed to be an empty string.

##### Share on other sites

Quote: Original post by SiCrane
That's not guaranteed. According to the C++ standard (section 3.6.1 paragraph 2), argv[0] is "the name used to invoke the program". This may or may not include the path to the application, and in fact is allowed to be an empty string.

Fair enough, but in all the command line environments I have encountered, it has given exactly the (expanded) string I typed to launch the application, which is the relative path from the working directory. It doesn't work so often in GUI environments - in particular, it is always the root directory for a Mac GUI application. However, a GUI application probably shouldn't be messing around in the application's install location - there are well-defined directories on each platform reserved for program data ('~/Library/Application Support' on Mac, 'Documents & Settings' on Windows...).

##### Share on other sites

What argv[0] contains is not just dependent on the command line environment, but also dependent on the compiler and standard library implementation. For example, compiling this program

```cpp
#include <iostream>

int main(int, char **argv) {
    std::cout << argv[0] << std::endl;
}
```

with gcc 3.4.4 in cygwin to the executable a.exe and running it from Vista's command prompt produces the output "a". No path included, or even any path delimiters. It's not even the full name of the executable.

##### Share on other sites

I'm trying to load bitmaps from a folder in the application's install folder, so SiCrane's method worked perfectly for me.
I'm using WinMain instead of just main, so there is no argv parameter. This is my first GUI application in native C++; EasilyConfused taught me how to use GDI+ in a different help topic yesterday, and now I'm trying to load a bitmap from the same folder. GetModuleFileName() works nicely. Now I just need to figure out how to chop off the end of it... I'm addicted to the VBA String class XD

##### Share on other sites

You can use std::string (or std::wstring if compiling for Unicode) for this like so:

```cpp
char module_name[MAX_PATH];
GetModuleFileName(0, module_name, MAX_PATH);
std::string path(module_name);
path.erase(path.find_last_of('\\'), std::string::npos);
```

##### Share on other sites

Quote: Original post by SiCrane
with gcc 3.4.4 in cygwin to the executable a.exe and running it from Vista's command prompt produces the output "a". No path included, or even any path delimiters. It's not even the full name of the executable.

Did you provide a full path on the command line, or did you run it from its own directory? Either way, it is good to know that one shouldn't rely on the behaviour.

##### Share on other sites

I ran it from the executable's directory.

##### Share on other sites

Quote: Original post by SiCrane
I ran it from the executable's directory.

Then you would expect to get a relative path, i.e. 'a.exe'. On the other hand, since Windows does not require you to type the .exe extension, you may well have typed just 'a' to launch the program, in which case I would expect you to get exactly that in argv[0]. Note that the program inherits the user's working directory, so passing that argv[0] to exec() (or whatever the equivalent is) will launch the program itself, so you do in fact have a complete path. I don't have cygwin installed here, but I would be interested to see what happens if you specify a full path on the command line.

##### Share on other sites

As SiCrane has pointed out, there is no defined answer as to what argv[0]'s value will be.

http://www.faqs.org/faqs/unix-faq/programmer/faq/

Quote:

1.14 How can I find a process' executable file?

This would be a good candidate for a list of 'Frequently Unanswered Questions', because the fact of asking the question usually means that the design of the program is flawed. :-)

You can make a 'best guess' by looking at the value of 'argv[0]'. If this contains a '/', then it is probably the absolute or relative (to the current directory at program start) path of the executable. If it does not, then you can mimic the shell's search of the 'PATH' variable, looking for the program. However, success is not guaranteed, since it is possible to invoke programs with arbitrary values of 'argv[0]', and in any case the executable may have been renamed or deleted since it was started.

If all you want is to be able to print an appropriate invocation name with error messages, then the best approach is to have 'main()' save the value of 'argv[0]' in a global variable for use by the entire program. While there is no guarantee whatsoever that the value in 'argv[0]' will be meaningful, it is the best option available in most circumstances.

The most common reason people ask this question is in order to locate configuration files with their program.
This is considered to be bad form; directories containing executables should contain *nothing* except executables, and administrative requirements often make it desirable for configuration files to be located on different filesystems to executables.

A less common, but more legitimate, reason to do this is to allow the program to call 'exec()' *on itself*; this is a method used (e.g. by some versions of 'sendmail') to completely reinitialise the process (e.g. if a daemon receives a 'SIGHUP').

If you are using a system which has /proc/self/ then see HKO's post here

##### Share on other sites

Quote: Original post by swiftcoder
Quote: Original post by SiCrane
I ran it from the executable's directory.
Then you would expect to get a relative path, i.e. 'a.exe'. On the other hand, since Windows does not require you to type the .exe extension, you may well have typed just 'a' to launch the program, in which case I would expect you to get exactly that in argv[0]. Note that the program inherits the user's working directory, so passing that argv[0] to exec() (or whatever the equivalent is) will launch the program itself, so you do in fact have a complete path.

In other words, the behavior is unpredictable and dependent on user invocation. On the other hand, Application.StartupPath and GetModuleFileName are independent of user action. I.e., you can't rely on argv[0].

##### Share on other sites

Quote: Original post by swiftcoder
Note that the program inherits the user's working directory, so passing that argv[0] to exec() (or whatever the equivalent is) will launch the program itself, so you do in fact have a complete path.

Actually, it gives the same result if I dump it somewhere on the PATH and run it from a random directory. Which means that you've got absolutely no information about the path of the executable from argv[0].

##### Share on other sites

Quote: Original post by SiCrane
Actually, it gives the same result if I dump it somewhere on the PATH and run it from a random directory. Which means that you've got absolutely no information about the path of the executable from argv[0].

Right - I forgot about PATH. Yeah, it should give you a valid path to launch the application, not necessarily the application's own path.

##### Share on other sites

Lol, now I'm trying to figure out how to declare the string class successfully... I added #include <string.h> but I don't see a single declaration for 'string' in that file :

##### Share on other sites

The C++ std::string class lives in the header <string> - no .h.

##### Share on other sites

Ok, thanks :) I've never heard of a header without an extension... strange

##### Share on other sites

All the C++ standard library headers lack the .h suffix. std::vector lives in <vector>, std::stringstream lives in <sstream>, etc.

##### Share on other sites

Lol, now how do I convert from std::string to WCHAR*? When I try the following:

```cpp
pname = (WCHAR*)PATHN.c_str();
```

I get wacky Chinese characters... and when I remove the forced conversion it doesn't build...

##### Share on other sites

As SiCrane said, use std::wstring if you are using Unicode strings. std::wstring is in the same header as std::string.
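For completeness, here is a minimal sketch (not a post from the thread; the function and file names are illustrative) of the whole flow being discussed: grab the module path with the wide-character API so no narrow-to-wide conversion is needed at all, trim the file name, and hand the result to GDI+.

```cpp
#include <windows.h>
#include <objidl.h>
#include <gdiplus.h>   // link with gdiplus.lib
#include <string>

// Build the full path of a bitmap sitting next to the executable.
// Using the wide-character API from the start means Image::FromFile
// gets the WCHAR* it expects, with no std::string -> WCHAR* casting.
std::wstring BitmapPathNextToExe(const std::wstring& fileName) {
    wchar_t buffer[MAX_PATH];
    GetModuleFileNameW(NULL, buffer, MAX_PATH);  // e.g. L"C:\\app\\game.exe"
    std::wstring path(buffer);
    path.erase(path.find_last_of(L'\\') + 1);    // keep the trailing backslash
    return path + fileName;                      // e.g. L"C:\\app\\sprite.bmp"
}

// Usage, once GDI+ has been started via GdiplusStartup:
//   Gdiplus::Image* img =
//       Gdiplus::Image::FromFile(BitmapPathNextToExe(L"sprite.bmp").c_str());
```

The cast in the post above, (WCHAR*)PATHN.c_str(), merely reinterprets narrow bytes as wide characters — which is exactly where the "wacky Chinese characters" come from; staying wide end-to-end avoids the conversion entirely.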
##### Share on other sites

Ah, thanks, I didn't notice that ;) I'm not compiling for Unicode, but the Image::FromFile method requires a WCHAR* for some reason XD

EDIT: YESSS!!! It worked, converted successfully, and loaded and displayed the bitmap :) Thanks for all the help [smile]
# D-module that is coherent as O-module

Suppose that $X$ is an algebraic variety over $\mathbb C$, not necessarily smooth. Is it still true that each $\mathcal D_X$-module ($\mathcal D_X$ is of course the sheaf of differential operators) that is coherent as an $\mathcal O_X$-module must be locally free as an $\mathcal O_X$-module?

Serge

-

[Edited to correct errors pointed out by David Ben-Zvi and to answer a query by serge_I. These corrections reduce this "answer" to the status of a comment.]

(1) (Over $\mathbb C$) Under strong assumptions on the singularities of $X$ (namely, that $X$ should be cuspidal; Ben-Zvi and Nevins, arXiv 0212094v3), if $\mathcal E$ is coherent on $X$, then giving it a structure as a $\mathcal D_X$-module means giving an isomorphism $\phi:p_1^*\mathcal E\to p_2^*\mathcal E$, where $\mathcal X_1$ is the completion of $X\times X$ along the diagonal and $p_1,p_2:\mathcal X_1\to X$ are the projections. (There is also the cocycle condition, $p_{31}^*\phi=p_{32}^*\phi\circ p_{21}^*\phi$, where $p_{ij}:\mathcal X_2\to\mathcal X_1$ are the projections from the completion $\mathcal X_2$ of $X\times X\times X$ along the diagonal, but we don't need this here.)

Since $X$ is noetherian, there is a unique flattening stratification $X=\coprod X_i$ associated to $\mathcal E$: each $X_i$ is locally closed in $X$, and for any $f:Y\to X$, $f^*\mathcal E$ is locally free if and only if $f$ factors through some $X_i$. The existence of $\phi$ shows that $p_1^*\mathcal E$ and $p_2^*\mathcal E$ have the same flattening stratification, while $\{p_1^{-1}(X_i)\}_i$ is the flattening stratification for $p_1^*\mathcal E$ and $\{p_2^{-1}(X_i)\}_i$ is the flattening stratification for $p_2^*\mathcal E$. Since $X$ is irreducible, there is a unique stratum $X_0$ of maximal dimension, so that $p_1^{-1}(X_0)=p_2^{-1}(X_0)$. Now think in terms of a tubular neighborhood of the diagonal in $X\times X$ to see that this forces $X_0=X$, so that $\mathcal E$ is locally free provided that $X$ is cuspidal.

(2) (In char. $p$, or over any base) If $X$ is smooth and $\mathcal D_X$ is taken to be the full ring of differential operators rather than the subring generated by those operators of order at most $1$, then the same argument applies.

-

This is a very nice argument, but is it clear it applies to D-modules? Your definition is that of a stratification (or comodule over jets), which I don't think is the same as a module over the ring of differential operators in general (yours is the much better behaved one in general). I only know they are the same for varieties with only cusp singularities, for which all the various notions of D-module coincide. –  David Ben-Zvi Jan 10 '13 at 18:21

I am sorry, could you please explain in more detail how you conclude that for every $i$ there is a $j$ such that $p_1^{-1}(X_i)=p_2^{-1}(X_j)$? –  Serge Lvovski Jan 10 '13 at 18:43

Note that in characteristic p the statement is false even for X smooth if by $D_X$ you mean the ring of "crystalline diffops" (generated by functions and vector fields), rather than the full divided power ring (whose modules are the same as stratifications) -- e.g. the Frobenius pullback of any coherent sheaf is a D-module. –  David Ben-Zvi Jan 10 '13 at 19:04

Also, projectivity should not be necessary; the argument you give (and the construction of the flattening stratification) is local.. maybe Noetherian?? –  David Ben-Zvi Jan 10 '13 at 19:04

@inkspot: oh, thanks, now I see.
–  Serge Lvovski Jan 11 '13 at 3:32

No, you can generally only get local freeness on an open dense set. This is Lemma VII.9.3 in Algebraic D-modules by Armand Borel et al.

EDIT: just realised that Lemma talks about where $\mathcal{D}_X$-coherent modules are $\mathcal{O}_X$-locally free, not whether $\mathcal{O}_X$-coherent ones are. A $\mathcal{D}_X$-coherent module needs a good filtration by $\mathcal{O}_X$-coherent modules to be $\mathcal{O}_X$-coherent, so you may have to check that. Section II of Borel et al. should cover this; have a look there.

-

Well, to tell the truth, I did not manage to find anything to the point in Chapter 2 of "Algebraic D-modules" :( Thanks anyway for your time. –  Serge Lvovski Jan 10 '13 at 14:35

You could also check J.E. Björk, Analytic D-modules and their applications; he spells out the coherence stuff in a little more detail. Might be hard to find a copy, though. –  Ketil Tveiten Jan 10 '13 at 16:37
Finding a formula for recursive definitions

1. Mar 8, 2009

### caseyd1981

This is the last part of the problem and I just cannot figure out a formula for it. Here is what the question asks:

Determine whether each of these proposed definitions is a valid recursive definition of a function f from the set of nonnegative integers to the set of integers. If f is well defined, find a formula for f(n) when n is a nonnegative integer and prove that your formula is valid.

I'm stuck on part e: f(0) = 2, f(n) = f(n - 1) if n is odd and n >= 1, and f(n) = 2f(n - 2) if n >= 2.

I've worked through f(0) - f(9) and I get 2, 2, 4, 4, 8, 8, 16, 16, 32, 32. I just can't seem to figure out a formula for this. Any help, much appreciated!

2. Mar 8, 2009

### Staff: Mentor

Are you sure that you have written the definition of the function correctly? In particular, it seems that the second part should say that f(n) = 2f(n - 2) if n is even and n >= 2.

3. Mar 8, 2009

### caseyd1981

Yes, that is correct. I noticed that too, but I typed it exactly the way my book did.

4. Mar 8, 2009

### Dick

Have you defined a floor function? Something like [x] = greatest integer less than or equal to x? That would help you to write it concisely. [n/2] has something in common with your function.

5. Mar 8, 2009

### Staff: Mentor

It looks to me like this might work as a non-recursive definition for your function:

$f(x) = 2^{\left\lfloor (x + 2)/2 \right\rfloor}$

The L-shaped brackets indicate the greatest integer function, which is the greatest integer that is less than or equal to what's inside the brackets. If the expression in the exponent is an integer, the greatest integer function evaluates to that integer. If the exponent is a non-integer, the greatest integer function essentially chops off the fractional part. For example, the greatest integer in 1 is 1. The greatest integer in 1.5 is 1.

Using the formula above as a check,

$f(0) = 2^{\left\lfloor (0 + 2)/2 \right\rfloor} = 2^1 = 2$

$f(1) = 2^{\left\lfloor (1 + 2)/2 \right\rfloor} = 2^{\left\lfloor 3/2 \right\rfloor} = 2^1 = 2$

$f(2) = 2^{\left\lfloor (2 + 2)/2 \right\rfloor} = 2^2 = 4$

$f(3) = 2^{\left\lfloor (3 + 2)/2 \right\rfloor} = 2^{\left\lfloor 5/2 \right\rfloor} = 2^2 = 4$

6. Mar 8, 2009

### caseyd1981

That is it! Thank you all very much. Ok, now I need to prove the formula using induction. Kind of stuck there too....?

7. Mar 30, 2009

### papatrpt89

Hello, I'm in a class that uses the same textbook as caseyd1981, and I'm working on this exact same problem. Through looking at relationships between powers of 2 and values of n, I've come up with an explicit formula:

$f(n) = 2^{\left\lceil (n + 1)/2 \right\rceil}$

Now, I need to prove it using induction (I'm not sure whether it needs to be mathematical induction or strong induction). So far, I have this:

Basis case: $f(1) = 2^{\left\lceil (1 + 1)/2 \right\rceil} = 2^{\left\lceil 2/2 \right\rceil} = 2^1 = 2$.

Inductive hypothesis: if $f(n) = 2^{\left\lceil (n + 1)/2 \right\rceil}$, then $f(n+1) = 2^{\left\lceil ((n + 1) + 1)/2 \right\rceil}$.

However, I'm stuck here, as I don't know how to incorporate the inductive hypothesis $f(n) = 2^{\left\lceil (n + 1)/2 \right\rceil}$ into $f(n+1) = 2^{\left\lceil ((n + 1) + 1)/2 \right\rceil}$. Anyone have any suggestions? Thanks for your time!

Last edited: Mar 30, 2009
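To close the loop, here is a quick brute-force check (an editorial sketch, not a post from the thread) of the mentor's closed form against the recursive definition, reading the second clause as applying to even n, as suggested in post #2. Note that $\lceil (n+1)/2 \rceil = \lfloor (n+2)/2 \rfloor$ for every integer n, so the two proposed formulas agree.

```cpp
#include <cassert>
#include <cstdio>

// Recursive definition: f(0) = 2, f(n) = f(n-1) for odd n >= 1,
// f(n) = 2 f(n-2) for even n >= 2 (the reading suggested in the thread).
long f(int n) {
    if (n == 0) return 2;
    if (n % 2 == 1) return f(n - 1);
    return 2 * f(n - 2);
}

// Closed form: f(n) = 2^floor((n+2)/2).
long g(int n) {
    return 1L << ((n + 2) / 2);   // integer division gives the floor here
}

int main() {
    for (int n = 0; n <= 20; ++n) {
        printf("f(%d) = %ld, closed form = %ld\n", n, f(n), g(n));
        assert(f(n) == g(n));     // the two definitions agree on 0..20
    }
    return 0;
}
```

The printed values reproduce the sequence 2, 2, 4, 4, 8, 8, 16, 16, 32, 32, … from the opening post, which is a useful sanity check before attempting the induction proof.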
## AngouriMath.MathS

Classes within the AngouriMath.MathS namespace:

• Boolean — Some operations on booleans are stored here.

• Compute — Implements necessary functions for symbolic computation of limits, derivatives and integrals.

• DecimalConst — Some non-symbolic constants.

• Diagnostic — This class is used for diagnostic and debug of the library itself. Usually, you do not want to use it in production code.

• Hyperbolic — Represents a few hyperbolic functions.

• Matrices — Classes and functions related to matrices are defined here.

• (class name missing in the source) — A few functions convenient to use in industrial projects to keep the system more reliable and distribute computations.

• Numbers — All operations for Number and its derived classes are available from here. These methods represent the only possible way to explicitly create numeric instances. It will automatically downcast the result for you, so Number.Create(1.0); is an Integer. To avoid it, you may temporarily disable downcasting:

    using var _ = MathS.Settings.DowncastingEnabled.Set(false);
    var yourNum = Number.Create(1.0);

and the result will be a Real.

• NumberTheory — Use it in order to explore further number theory.

• Series — That is a collection of some series, expressed in a symbolic form.

• Sets — Functions and classes related to sets are defined here. Class Set defines true mathematical sets; it supports intersection, union, subtraction.

• Settings — A couple of settings allowing you to set some preferences for AM's algorithms. To use these settings the syntax is

    using var _ = MathS.Settings.SomeSetting.Set(5 /* Here you set a value to the setting */);
    // here you write your code normally

• UnsafeAndInternal — You may need it to manually manage some issues. Although those functions might be used inside the library only, the user may want to use them for some reason.

• Utils — Some additional functions that would be barely ever used by the user, but kept for "just in case" as public.
# A determinantal approach to irrationality

Affiliation: U of Newcastle/MPIM
Date: Wed, 2016-05-25 14:15 - 15:15
Location: MPIM Lecture Hall
Parent event: Number theory lunch seminar

It is a classical fact that the irrationality of a number $\xi\in\mathbb{R}$ follows from the existence of a sequence $p_n/q_n$ with integral $p_n$ and $q_n$ such that $q_n\xi-p_n\ne0$ for all $n$ and $q_n\xi-p_n\to0$ as $n\to\infty$. In my talk I give an extension of this criterion in the case when the sequence possesses an additional `period' structure; in particular, the requirement $q_n\xi-p_n\to 0$ is weakened. Some applications are discussed, including a new proof of the irrationality of $\pi$.
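As an editorial aside, a standard instance of the classical criterion quoted in the abstract, for $\xi = \sqrt{2}$, written out in LaTeX:

```latex
% For \xi = \sqrt{2}, take the continued-fraction convergents
% p_n/q_n = 1/1, 3/2, 7/5, 17/12, 41/29, ...,
% generated by p_{n+1} = 2p_n + p_{n-1}, q_{n+1} = 2q_n + q_{n-1}.
% They satisfy p_n^2 - 2 q_n^2 = \pm 1, hence
\[
  0 \;<\; \lvert q_n\sqrt{2} - p_n \rvert
    \;=\; \frac{\lvert p_n^2 - 2q_n^2 \rvert}{q_n\sqrt{2} + p_n}
    \;=\; \frac{1}{q_n\sqrt{2} + p_n} \;\longrightarrow\; 0 ,
\]
% so q_n\sqrt{2} - p_n is never zero yet tends to zero, and the
% classical criterion yields the irrationality of \sqrt{2}.
```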
Faculty of Physics, M.V. Lomonosov Moscow State University

Regular Article

# Kinematics in the Special Theory of Ether

## K. F. Szostek$^1$, R. Szostek$^2$

### Moscow University Physics Bulletin 2018. 73. N 4. P. 413

• Article Annotation

The aim of this paper is to show that the Michelson–Morley and Kennedy–Thorndike experiments are not sufficient for justification of the theory of special relativity, because these experiments can be explained using another theory in which a universal reference frame exists. In this paper, we derive a novel theory of body kinematics with a universal reference frame. We call this theory the special theory of ether (STE). The reason that the universal reference frame could not be found using the Michelson–Morley and Kennedy–Thorndike experiments is also explained. As well, based on a geometric analysis of the Michelson–Morley and Kennedy–Thorndike experiments, we derive another coordinate and time transformation that differs from the Lorentz transformation. In addition, the transformation law of speed, the formula for the addition of velocities for the absolute velocity, as well as length-contraction and time-dilation formulas are derived. The paper contains only the investigations of the original authors.

Approved: 2019 January 24

PACS:
02.90.+p Other topics in mathematical methods in physics
03.30.+p Special relativity

Russian citation: К. Ф. Шостэк, Р. -. Шостэк. Вестн. Моск. ун-та. Сер. 3. Физ. Астрон. 2018. № 4. С. 70.

Authors
K. F. Szostek$^1$, R. Szostek$^2$
$^1$The Faculty of Mechanical Engineering and Aeronautics, Dept of Thermodynamics and Fluid Mechanics
$^2$Rzeszów University of Technology, The Faculty of Management
# Do pilots really wear “pilot's watches”?

High-end mechanical wristwatches from Swiss and German manufacturers (e.g. Breitling, Rolex etc.) are often marketed as "pilot's watches", and have chronometer features to supposedly help pilots perform flight-related computations. While I know next to nothing about aviation, I find it difficult to imagine that such a mechanical watch (which, for example, simply stops working if it is not moved for a few days) would be preferable to a modern quartz and/or digital watch, which keeps running for years on a single battery, self-adjusts with millisecond precision based on radio signals, and even without that signal maintains time an order of magnitude more accurately than any mechanical watch.

To summarize: what kind of watches do modern commercial pilots actually wear? Or do they nowadays just use their cellphones for timekeeping tasks like everyone else?

• In the past, pilot's watches had superior precision. Their display is easily readable, even at night, the big buttons can be pressed with gloves, they had a stopwatch, a tachymeter and so on. They were (and still are) a piece of high tech. This, and the myth around being a pilot, makes these watches very favored. Even in the past, only a fraction were actually sold to real pilots. I guess today some pilots wear them because they are made for them, not because they need them. This may be different for military pilots. – sweber Sep 17 '15 at 7:45
• Totally off topic, but divers do wear diver's watches, including Rolex Oysters. If your digital diving computer bonks, it's important to have a good backup so that you can time your decompression. And when your brain is bonkers on nitrogen poisoning, an analogue watch with a one-way bezel is great. I personally use a Seiko Divemaster. – RoboKaren Sep 17 '15 at 14:43
• In my day, if you wanted to know what time it was, you asked the flight engineer. I couldn't resist that. Actually, though, there's more truth than not in that. On long-haul 747 flights with old steam gauges, the f.e. was the one keeping track of the fuel burn. His time, whether from a wristwatch or the clock on his panel, was the de facto cockpit standard. – Terry Sep 17 '15 at 18:26
• It depends on the diver. The former lead diver of the WKPP, arguably some of the most hardcore extreme divers on the planet, wore and dove a very inexpensive Timex Ironman Triathlon digital. (I picked up the habit from him, and dove one myself, with very satisfactory results. It Just Worked.) The key to dealing with nitrogen narcosis is not to have that much nitrogen in your mix in the first place. (The first time I dove nitrox (EAN36), I discovered that I could tell the difference in 20 feet of water. I never dove air again.) – John R. Strohm Sep 17 '15 at 22:05
• Best reason I heard from a senior pilot - it gives him something to play with when his wife made him go to the opera. – NobodySpecial Sep 17 '15 at 23:39

High-end mechanical pilot's watches are of very little use in the cockpit, whether you are in a commercial airplane or a Cessna 152. I love watches, and I've searched for years for a watch that was a) very cool looking and b) actually useful when flying, and I haven't found one yet. They usually fail on b) because their analogue chronometer and/or timer functions are very hard to interpret, and cannot be read at night.
Some have lights that will only work when in calendar mode, which is about as useful as a chocolate teapot. Often the functions are hard to use and require a fair amount of fiddling to get them to work, which is not good when you have a high workload.

Commercial airplanes are usually so full of technology that a watch is completely redundant anyway, so for commercial pilots it's a matter of style. I'm not a commercial pilot; I fly older light aircraft with less technology, so having a watch with timer and chrono functions is handy. For this I wear a Casio G-Shock, as it has a large and easily readable face, a light activated by its own button, is simple to use, and is very rugged. Digital kitchen timers also work very well.

• I want a chocolate teapot! – Daniel Griscom Sep 17 '15 at 14:42
• @DanielGriscom: bbc.com/news/uk-england-york-north-yorkshire-29126161 – Ben Hocking Sep 17 '15 at 14:49
• It is a Britishism @DanielGriscom. It's one of my favorites I've picked up living in the UK the past few years. – GdD Sep 17 '15 at 15:19
• I think the reason that there's not a better true pilot watch is that the market is too small. What you don't need is a watch; what you need is a wrist computer. I was thinking of writing an Android app which would run on an old mobile phone you could strap onto your wrist. It's a lot of work though, and as my experience with my BB Density Altitude app showed, it's hard to make a profit on these things. – GdD Sep 17 '15 at 15:22
• @GdD I suppose you could just make an Android Wear app. Android Wear watches are quite literally wrist computers. You could even set it up to program your route in ahead of time on your phone and have it show you the heading of the next turn and count down to when you should begin it, etc. Google Maps already does that for driving (except that, by default, it counts down distance to next turn rather than time.) – reirab Sep 17 '15 at 15:44

Disclosure: I am both a watch enthusiast and a pilot, but on any note...

The answer to your question is generally no, but that varies by personal taste as well as personal choice. There are lots of threads on the various watch forums discussing this very topic here, here and here. The consensus is basically a few things. First off, pilots no longer make the kind of money they once did, and Swiss watches have only shot up in price since, say, the '70s. If you are talking about an entry-level professional pilot making in the very low 5 figures, the chances they are going out and buying a $5,000+ watch are low.

Now let's talk function. When you are in the cockpit a watch can be useful (provided you can read it). If your heading indicator fails, you can use a watch and your turn coordinator to time your turns, which can save your life in the clouds. I generally start my chronograph (Omega Speedmaster) when I take off; this gives me not only en route time but markers of the half-hour blocks when I need to switch my tanks (on a Piper Warrior). For what it's worth, there is a clock in the plane as well as a timer, so I don't need a watch, but I enjoy wearing one. When I fly dead reckoning I carry an all-mechanical Heuer stopwatch to time my legs. Again, these things are in the plane, but I find the hand-held stopwatch easier to use. Some fancy aviation watches provide flight computers on the bezels, like Breitlings, but there is no way you are reading that in the air unless it's really smooth up there.
The consensus around the internet, and from what I have seen, is that commercial pilots generally wear digital stuff or nothing at all (mostly Casio and the like). Military fighter pilots wear G-Shocks or whatever is issued, if anything is issued at all. Some pilots may wear Rolex GMTs or the like, but that seems to be those that either really like watches or were gifted one, etc. However, Pan Am had a long-standing relationship with Rolex over the years. Pan Am pilots may have sported an albino GMT or a special Pan Am Daytona, depending on the era. The GMT Master was even designed in partnership with Pan Am and issued to long-haul pilots.

On a side note, there is a group of pilots that ALWAYS wear fancy Swiss watches. Since the early days of the space program, Omega has been the watchmaker of choice for NASA, and the Speedmaster has been the watch of choice, as seen here on Buzz Aldrin.

Some even say that the Speedmaster was used to time the engine burn on Apollo 13, which may have in turn saved their lives. It is also reported that Neil Armstrong's Speedmaster broke on the Apollo 11 mission. Here is a comprehensive list of where all the watches reside now. The watches were worn on a NATO-style velcro band that was a double-wrap size so it could fit over the suits for EVA. The Speedmaster is still worn in space to this day; however, modern space suits have a flap covering it, making it hard to see in photos. The official Speedmasters (moon watches) are certified for manned space flight and stamped as such.

I would also like to expand on your statement

...which, for example, simply stops working if it is not moved for a few days

They do not stop working; the spring winds down and they will stop ticking, however as soon as they are picked up again (and the rotor is spun, if it's automatic) they will start ticking again. That being said, if you wear the watch every day, then unlike a digital that may die, a mechanical watch will tick perpetually until a component physically breaks.

On a bit of a tangent, some watchmakers did make flight instruments (clocks) for aircraft over the years; this is where much of the inspiration for the watches comes from. Elgin made a clock that was used in a few WW2 fighters, and Heuer even had an IFR stopwatch.

• I doubt that the astronauts buy their watches; they most probably get them free as part of promotional consideration from Rolex et al. – RoboKaren Sep 17 '15 at 14:41
• The astronaut watches are issued to them by NASA and are exclusively Omega Speedmasters (of varying versions over the years); the Speedmaster is one of the few watches certified for space flight and EVA operations. The only issuance of Rolexes I know of is the British divers who were issued mil-spec Submariners. – Dave Sep 17 '15 at 14:44
• Sorry, Omega, yes of course. Shows how much good those promotional considerations are in raising brand awareness. ;-) – RoboKaren Sep 17 '15 at 16:05
• It was far more than promotion. NASA had very stringent requirements for the watch, which included reliable operation in a huge temperature range, waterproofing and manual wind-up (perpetual rotors don't work in space). The Speedmaster is a far more capable watch than most people give it credit for. I will admit Omega has made the best of it from a marketing standpoint ;) – Dave Sep 17 '15 at 16:09
• If I ever need some inspiration, I look at that picture of Buzz or the more well-known one of him standing on the moon. – Simon Sep 17 '15 at 17:08

Watches in today's cockpits really aren't a tool the way they were in the past.
We aren't using our wristwatches as chronographs to manage dead reckoning the way that was required before such modern luxuries as Loran (gasp). Wristwatches are more about expressing your style; it is more about jewelry. With the exception of a 24-hour Zulu hand, and a 24-hour bezel to be able to quickly reference three time zones, they mainly just look nice. I have met some pilots that actually use the E6B on their watch, but most just like how it looks. Aviation does have a very identifiable look, which is mainly what pilots are attracted to. Everyone likes to purchase things that are related to their passion, and flying is no different. Flight watches often resemble instrumentation and bring that part of the pilot's life to their wrist wherever they go.

Edit: In fact, the first wristwatch was invented just for pilots.

One of the first wristwatches ever made was named "Santos" by Louis Cartier, which was made for a famous pilot at that time who complained about having to check his pocket watch while flying.

(Source: http://theaviationist.com/2013/12/25/aviation-wrist-watches/)

So, the two things are incredibly closely identified with one another.

Watches are extremely important for a pilot, especially in small aircraft that do not have sophisticated instrument clusters. Everything is time-based, and generally it is much better to trust your own watch than one built into the instrument cluster (unless it is a high-precision, calibrated clock on a large aircraft). Also, if anything happens that disrupts the aircraft functions or your visibility of the instruments, a watch is critical. For example, a fire or power failure can make a panel clock unlighted or obscured. Also, you may not be in the cockpit/flight deck and need to know what time it is.

The key factors for a "pilot's watch" are:

• illuminated/visible in darkness
• large numerals, clear and highly readable
• ability to manage multiple time zones easily
• 24-hour (military/Zulu) display capability
• day of the month indicator
• rugged and water resistant, tough crystal, good quality construction

Currently I use a Torgoen T05101. It is a good watch, but it uses phosphorescence. My next watch will use tritium illumination. I have been known to use an egg timer in the cockpit.

• "I have been known to use an egg timer in the cockpit." I'm assuming you weren't doing a lot of negative-G maneuvers while using it. – DJClayworth Sep 18 '15 at 14:51
• @DJClayworth I use an electronic egg timer that has a powerful clip on the back, not a sand glass. I usually clip it onto a convenient surface. It has an alarm, so it beeps when the time runs out. It's a great reminder for tank switchovers and stuff like that. – Tyler Durden Sep 18 '15 at 18:32
• @TylerDurden -- that might also be integrable into your watch (I have a Casio that has countdown timer functions on it, among other things). – UnrecognizedFallingObject Oct 14 '16 at 21:24

I currently work as a corporate pilot. I have flown with hundreds, and now verging on thousands, of pilots. I have 5000 hours TT, and I can say that most guys fly with cheaper watches, and if they choose an aviation-specific watch, it's typically a Citizen. I don't think that they love them, but they are affordable and don't break. I have one and am thoroughly tired of it, but it just won't give me an excuse to buy anything else. Although I have kept my CFI current, and could show you how to use all the features of the E6B, that feature is a joke. It's way too small to practically use.
Of all of those guys and a few gals that I have flown with, only 4 or 5 wear an expensive mechanical watch. I have known a couple who had GMT Rolex variants, only one that I recall with a Breitling, and I have run into a few of the smaller boutique brands on a few wrists here and there. I personally flew with a Torgoen for a while. The only feature that I do really like to have, although yes, it's available elsewhere in the plane, is a very easy-to-read GMT/UTC sweep.

I use the stopwatch for IFR training and holdings, and the E6B for fuel range computation, speeds, etc. Heck, I even use this to convert feet to inches on architectural element heights, distances, etc. Just by looking at them you get the opposite heading easily, either for navigation or for knowing opposite runway numbers. What if you need to take 30 or 45 degrees from that heading so you intercept final? Most pilots that know how to use them, do.

I have two Citizen-brand pilot watches and a Garmin smartwatch. I got my first Citizen one to commemorate passing my first checkride. My wife gave me the second one on our wedding day. I sometimes wear one while flying, but they've never been useful. They both have an "E6B" dial, which is occasionally useful when flight planning (on the ground). They also do time zones and Zulu time very well, which is nice when I arrive at a destination. But again, not much good in the air.

There is a new generation of flight watches that is slowly becoming popular, topped by the Garmin D2 Titanium. These are more smartwatches than the traditional "Swiss-style" watches. With a built-in GPS, a database of waypoints, a moving map, and Bluetooth that can sync with my iPad and on-board systems, it's basically a mini glass display on your wrist. Variations run from about $400 to $900, so they aren't cheap. I have the Garmin D2 Bravo watch, and there is a D2 Charlie with even more features. A "D2 Delta" variant is likely in the works. They are pretty incredible in their capabilities. I plan my route on my iPad, sync it over to my watch, and turn on the watch GPS slightly before takeoff. When flying a steam-gauge cockpit, having GPS navigation, ground speed, altitude and a few other features on my wrist is extremely convenient.

I prefer analogue over digital for the at-a-glance interpretive readability. My Revue Thommen Airspeed quartz chronograph has been utilized for dead reckoning (it does splits) and overall flight timing for 10 years. Light, thin, tough and practical. The battery lasts 2 years and I have it serviced regularly. Wearing it on the inside of my left wrist incorporates it easily into my scan. If it had a GMT hand and tritium lume I would consider it to be perfect.

Many do, but for a different reason: many (not all) pilots are very well paid but have limited possibilities (time constraints, limited vacation, location and so on) to spend the earned money. Hence the expensive watches, regardless of their merit to the actual piloting.

• Pilots are very well paid? I've never heard that assertion! – dotancohen Sep 17 '15 at 12:11
• @dotancohen Everything once happens for the first time :-) I heard this very assertion from a pilot explaining why he doesn't want a pilot's watch. – Pavel Sep 17 '15 at 13:40
• I'd appreciate it if the downvoter left a comment as to what's wrong with my answer. Thank you. – Pavel Sep 18 '15 at 19:53
• Most pilots are not very well paid, and even if they were, time constraints would not keep them from buying an expensive watch.
I assure you, from experience, it does not take very long to buy an expensive watch. – Dave Mar 7 '17 at 14:32
• @Dave Well, the first part really depends on your locale and on what you consider well paid. In Europe, most pilots are well paid indeed. As to the second part, that is exactly what I mean. – Pavel Mar 8 '17 at 6:49

I'm a professional pilot, and I do wear mechanical Swiss watches all the time, especially my IWC Big Pilot, Breitling Navitimer, Chronomat and the Zenith Pilot. A reliable watch is a need for any pilot (since flights are always schedule-based), but wearing an expensive Swiss watch is a matter of style and affordability. Since I'm from a family of pilots going back generations, the pilot's watch is like a heritage to us. The watch is the pilot's jewel (from the pioneers like Santos-Dumont and Louis Blériot to famous pilots like John Travolta, the Red Bull race pilots, the astronauts... those distinguished pilots always wear a good watch). But since a jewel is just a matter of looks (and affordability), my colleague pilots mostly wear Citizen-brand and other watches (because the luxury Swiss watches are too expensive). But they always wear some good (dependable) watch.

Among commercial pilots it's not really common to see stuff like Breitlings, etc. What they need is something accurate, maybe with multiple time zones, and not too expensive. Most Casios, Seikos and Citizens have the attributes I mention above, so you see a fair bit of them.

Most pilots I know like to go out and see a bit of the city they're in (have a meal, etc.) when they have a layover. Wearing an expensive watch in an unfamiliar city is not really a smart thing to do. Also, at some airports you have to remove your watch when you go through security... that's just another opportunity to lose one. Staying in hotels is another issue: if you're going down to the pool you have to remember to put the watch in the safe... and remember to take it out again when you leave for your flight!
Authors
Niloy Biswas, Pierre E. Jacob, Paul Vanetti

Abstract
Markov chain Monte Carlo (MCMC) methods generate samples that are asymptotically distributed from a target distribution of interest as the number of iterations goes to infinity. Various theoretical results provide upper bounds on the distance between the target and the marginal distribution after a fixed number of iterations. These upper bounds are derived on a case-by-case basis and typically involve intractable quantities, which limits their use for practitioners. We introduce L-lag couplings to generate computable, non-asymptotic upper bound estimates for the total variation or the Wasserstein distance of general Markov chains. We apply L-lag couplings to the tasks of (i) determining MCMC burn-in, (ii) comparing different MCMC algorithms with the same target, and (iii) comparing exact and approximate MCMC. Lastly, we (iv) assess the bias of sequential Monte Carlo and self-normalized importance samplers.
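To make the idea concrete, here is a minimal sketch of the L = 1 special case for a toy random-walk Metropolis chain targeting N(0,1). Everything below (the helper names, the N(10,1) starting distribution, the tuning constants) is our own illustration, not code from the paper; the estimator evaluated at the end is the paper's total-variation bound E[max(0, ⌈(τ − L − t)/L⌉)], with τ the meeting time of the two lagged chains.

```python
import numpy as np

rng = np.random.default_rng(1)

def logpi(z):
    # Toy target: standard normal, up to an additive constant.
    return -0.5 * z * z

def normal_pdf(x, mu, s):
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

def maximal_coupling(mu1, mu2, s):
    """Sample (p, q) with p ~ N(mu1, s), q ~ N(mu2, s), maximizing P(p == q)."""
    p = rng.normal(mu1, s)
    if rng.uniform() * normal_pdf(p, mu1, s) <= normal_pdf(p, mu2, s):
        return p, p
    while True:
        q = rng.normal(mu2, s)
        if rng.uniform() * normal_pdf(q, mu2, s) > normal_pdf(q, mu1, s):
            return p, q

def mh_step(x, step=1.0):
    prop = rng.normal(x, step)
    return prop if np.log(rng.uniform()) < logpi(prop) - logpi(x) else x

def coupled_mh_step(x, y, step=1.0):
    px, py = maximal_coupling(x, y, step)
    logu = np.log(rng.uniform())          # shared uniform lets the chains merge
    if logu < logpi(px) - logpi(x):
        x = px
    if logu < logpi(py) - logpi(y):
        y = py
    return x, y

def meeting_time(L=1):
    x = rng.normal(10.0, 1.0)             # both chains start from N(10, 1)
    for _ in range(L):                    # advance the leading chain L steps
        x = mh_step(x)
    y = rng.normal(10.0, 1.0)
    t = L
    while x != y:                         # exact meeting, by construction
        x, y = coupled_mh_step(x, y)
        t += 1
    return t                              # tau: first t with X_t == Y_{t-L}

L = 1
taus = np.array([meeting_time(L) for _ in range(500)])
for t in (0, 50, 100, 200):
    bound = np.mean(np.maximum(0.0, np.ceil((taus - L - t) / L)))
    print(f"estimated TV upper bound after {t} iterations: {bound:.3f}")
```

The printed bounds decay toward zero as t grows, which is exactly how the paper proposes to choose a burn-in.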
# How do you differentiate d/dx sqrt(1+sqrt(1+sqrt(1+sqrt(x))))?

## Question

How do you differentiate $$\frac{d}{dx}\sqrt{1+\sqrt{1+\sqrt{1+\sqrt{x}}}}$$?
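The source preserves no worked answer, so here is one standard route (our own addition), applying the chain rule $(\sqrt{u})' = u'/(2\sqrt{u})$ to each of the four nested square roots in turn:

$$\frac{d}{dx}\sqrt{1+\sqrt{1+\sqrt{1+\sqrt{x}}}} = \frac{1}{2\sqrt{1+\sqrt{1+\sqrt{1+\sqrt{x}}}}}\cdot\frac{1}{2\sqrt{1+\sqrt{1+\sqrt{x}}}}\cdot\frac{1}{2\sqrt{1+\sqrt{x}}}\cdot\frac{1}{2\sqrt{x}}$$

which collects into

$$\frac{1}{16\,\sqrt{x}\,\sqrt{1+\sqrt{x}}\,\sqrt{1+\sqrt{1+\sqrt{x}}}\,\sqrt{1+\sqrt{1+\sqrt{1+\sqrt{x}}}}}.$$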
# Can we see the computer age in the productivity statistics?

Robert Solow famously said:

You can see the computer age everywhere but in the productivity statistics

This quote is often linked with the productivity paradox (https://en.wikipedia.org/wiki/Productivity_paradox), namely that we don't see major productivity growth from new tech. But there is something I don't understand. Economic growth is about percentage growth. A graph of GDP looks like an exponential function. So in absolute terms, 5% growth is much, much larger when the economy is that of the US than when it is that of Malawi. So we might not see an acceleration of economic growth, but we at least see continued growth, which allegedly becomes harder and harder as economies develop. This is the idea behind Solow's model, where without exogenous percentage growth in A, output converges to the steady state. So I don't really understand the problem. To me, computers are clearly visible in a growing economy. But this seems to be unsatisfactory for Solow, who wants to see what, a 10% growth rate? What's wrong with my thinking?

• Interesting question, but the quote was by Robert Solow - see here, just before the big T in the third column. – Adam Bailey Nov 1 '18 at 12:07
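For reference, here is the textbook Cobb-Douglas version of the point about $A$ (a sketch in our notation, not something from the original question). Capital per worker $k$ evolves as

$$\dot{k} = sAk^{\alpha} - (n+\delta)k,$$

which, for a fixed technology level $A$, converges to the steady state

$$k^{*} = \left(\frac{sA}{n+\delta}\right)^{\frac{1}{1-\alpha}},$$

so per-capita output $y = Ak^{\alpha}$ eventually stops growing. Sustained percentage growth in $y$ requires sustained percentage growth in $A$, which is why the paradox is framed in terms of growth rates rather than levels.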
Conceptually, an Independent Set (IS) is something like the inverse of a graph: it's a set of vertices that does not contain both endpoints of any edge. Consider a connected graph with three vertices, A, B, and C, where A is adjacent to both B and C. There are five independent sets. The [] (empty set) is always considered an IS because it technically fits the definition. The sets [A], [B], and [C] are also ISs, because a single vertex can never contain both endpoints of an edge. The set [B, C] also qualifies because there is no edge connecting B and C. One important thing to note is that the number of ISs grows exponentially as the number of vertices grows. This has important algorithmic implications, as the brute-force sketch below illustrates. At first glance, this concept seems a bit academic; however, ISs are useful in practice. Finding an IS is a way of asking which vertices do not have a relationship. For instance, if vertices are people and edges are familial relationships, an IS would define people who are not related.
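A minimal brute-force sketch of the definition (our own illustration; the function name and the edge encoding are assumptions, and the graph is the three-vertex example from the text). Because it checks every subset of vertices, it also demonstrates the exponential blow-up mentioned above:

```python
from itertools import combinations

def independent_sets(vertices, edges):
    """Enumerate all independent sets by brute force (checks all 2^|V| subsets)."""
    edge_set = {frozenset(e) for e in edges}
    result = []
    for r in range(len(vertices) + 1):
        for subset in combinations(vertices, r):
            # Independent: no pair of chosen vertices forms an edge.
            if not any(frozenset(p) in edge_set for p in combinations(subset, 2)):
                result.append(list(subset))
    return result

# The three-vertex graph from the text: A is adjacent to both B and C.
print(independent_sets(["A", "B", "C"], [("A", "B"), ("A", "C")]))
# -> [[], ['A'], ['B'], ['C'], ['B', 'C']]  (the five independent sets)
```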
# Circular orbit and gravitational questions

1. Jul 31, 2011

### shemer77

For b, I'm not sure; do I use kinematics? And I'm lost on c. I feel a is velocity = distance/time, so is it 2pi*3040/60? And I'm not sure on b either. Sorry, these are a lot of questions, but I am trying to learn and get an A, so I want to make sure I get all of these down.

Last edited: Jul 31, 2011

2. Jul 31, 2011

### Clever-Name

For 1), what does that value represent? The distance from the center of earth? Or the distance from the surface?

For 2), $F_{G} = \frac{GmM_{e}}{r^{2}}$ is indeed the correct equation to use. The force exerted is the same on both objects, but it may seem odd since the apple would be the one moving. The reason it might seem non-intuitive is because the earth has such a large mass that the force will have a negligible effect on the acceleration of the planet. While the apple is falling it will still be exerting a force, but that will be changing relative to its distance from the earth. Hopefully someone else will come along and help with the other problems... my food just arrived.

3. Jul 31, 2011

### PeterO

For 3, I think that energy expression is supposed to have a negative sign in it? Otherwise you are just calculating the kinetic energy.

For 5, the blades are spinning at lots of radians per second. To stop in 3 seconds, the angular acceleration would be sufficient to reduce the angular velocity to zero in 3 seconds.

4. Jul 31, 2011

### PeterO

For 4, you need to find how long it takes to travel 1/4 turn while stopping. Note that it will take longer to travel 1/4 turn while stopping than it would to cover a quarter turn while continuing at its original speed. Once you know that time, you will be able to calculate the accelerations.

5. Jul 31, 2011

### shemer77

Thanks for your help, guys; I updated my first post with the questions left. PeterO, I get what you're saying, but how would I solve for that? I feel like I'm forgetting some sort of equation... edit: never mind, angular kinematics equations, duh.

All right, all questions done except for this one. I calculated the angular velocity to be 19100.88 and then I divided that by 3, and got 6366.961; however, that doesn't seem to be the right answer...

Last edited: Jul 31, 2011

6. Aug 1, 2011

### PeterO

Did you notice the speed of the mower was given in revolutions per minute? 3000+ rpm means 50+ revolutions per second. Each revolution is 2*pi radians (let's say 6), so 300+ radians per second. Yep, looks like you forgot about the minutes.
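Working through the fix implied in the last post (assuming the 3040 rpm figure from the first post): 19100.88 is $3040 \times 2\pi$, which is in radians per minute, so the division by 60 was skipped:

$$\omega = \frac{3040 \times 2\pi}{60} \approx 318.3\ \text{rad/s}, \qquad \alpha = \frac{\omega}{3\ \text{s}} \approx 106\ \text{rad/s}^2$$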
# An executive invests $22,000, some at 7% and some at 6% annual interest. If he receives an annual return of $1,420, how much is invested at each rate?

Question
Algebra foundations

An executive invests $22,000, some at 7% and some at 6% annual interest. If he receives an annual return of $1,420, how much is invested at each rate?

2020-11-27

If $x$ is invested at 7%, then $22000 - x$ is invested at 6%. The annual return is $1,420, so we can write:
$$0.07x + 0.06(22000 - x) = 1420$$
Solve for $x$:
$$0.07x + 1320 - 0.06x = 1420$$
$$0.01x + 1320 = 1420$$
$$0.01x = 100$$
$$x = \frac{100}{0.01} = 10000$$
So the executive invested $10,000 at 7% and $12,000 at 6%.
If you do **not** check the box, and, therefore, do **not** force the fit to go through the origin (0,0), the plotting program will find a value for the intercept $b$ and its uncertainty $\Delta b$, and they will also be printed out in the upper-left corner of the plot.

==== The max-min method (be careful about "constrained" vs. "unconstrained" fits) ====

Another technique you can use to estimate the error in the slope is to draw "max" and "min" lines. Here we use our "eyeball + brain" judgment to draw two lines, one that has the maximum slope that seems reasonable, the "max" line, and another that has the smallest slope that seems reasonable, the "min" line. We do NOT use the computer to draw these lines, and normally we do the judgment process leading to our choice of suitable "max" and "min" lines on paper, but you can do it more quickly and simply by holding a clear plastic ruler up to the screen of the computer monitor to decide where you think the max and min lines should be. But please DON'T draw on the screen of the computer monitor! The surface exposed to you is made of soft plastic and can easily be scratched permanently. Such scratches distort the image being presented on the screen. A line is reasonable if it just passes within //most// of the error bars.

You then just take two convenient points on the line, and find the change in the dependent variable "$y$" over the change in the independent variable "$x$" to calculate the slope. Doing this to work out the slope of both lines, max and min, gives you an estimate for the uncertainty in the slope.

=== An example for the "unconstrained" case ===

Note that if you decide **not** to force your eyeball + brain max and min lines to go through the origin (0,0), you also get an estimate for the uncertainty in the $y$ intercept. This only makes sense if you did **not** "check the box" when using the plotting tool to do the linear fit. For a nice example of the max-min method being applied to a case where the linear fit is "unconstrained", viz., it is not forced to pass through the origin (0,0), see Fig. 4.1 in Sec. **4.3 Linear Relationships** on pp. 22-23 of [[http://uregina.ca/~szymanss/uglabs/companion/Ch4_Graph_Anal.pdf|this document]] from the Department of Physics at the University of Regina.

=== Our example for the "constrained" case ===

The example we show next uses the same pendulum data presented above, but this time you should notice that the plot has been made "the other way", viz., as $L$ (cm) (on the $y$-axis) versus $T^2$ ($s^2$) (on the $x$-axis). You could do this yourself by entering the data into the plotting tool in the proper way. A consequence of plotting the data this way is that the large error bars -- those for $T^2$ -- are now in the horizontal direction, not in the vertical direction as they were for the first plot. This doesn't affect how we draw the "max" and "min" lines, however. You'll notice that the max and min lines for the present case, which appear in black on the computer screen versus green for the "best fit" line obtained with the plotting tool and versus red for the "error bars", both pass through the origin, as they should when one is comparing them to a constrained fit obtained by "checking the box", and they both pass through nearly all the "large" error bars, the horizontal ones. Your eyeball + brain choice of suitable max and min lines would undoubtedly be slightly different from those shown in the figure, but they should be relatively close to these. For the ones shown in the plot, which are reasonable choices, you may calculate yourself that the max line has a slope of about $\Delta y / \Delta x = 90/3.6 = 25$ cm/s$^2$, and the min line has a slope of about $\Delta y / \Delta x = 90/3.8 = 23.7$ cm/s$^2$. Therefore if you used this max-min method you would conclude that the value of the slope is 24.4 $\pm$ 0.7 cm/s$^2$, as compared to the computer's estimate of 24.41 $\pm$ 0.16 cm/s$^2$.
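A quick numerical check of the max-min arithmetic above (a minimal sketch; the variable names are ours):

```python
# Max-min slope-uncertainty estimate for the constrained pendulum fit above.
dy = 90.0                    # cm: rise read off both hand-drawn lines
dx_max, dx_min = 3.6, 3.8    # s^2: runs of the "max" and "min" lines

slope_max = dy / dx_max      # 25.0  cm/s^2
slope_min = dy / dx_min      # ~23.7 cm/s^2

half_spread = 0.5 * (slope_max - slope_min)   # ~0.7 cm/s^2, the quoted uncertainty
midpoint = 0.5 * (slope_max + slope_min)      # ~24.3 cm/s^2; the text quotes 24.4,
                                              # which is essentially the fitted slope
print(f"slope = {midpoint:.1f} +/- {half_spread:.1f} cm/s^2")
```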
# RexDB Demonstration

To use this project, install VirtualBox, Vagrant and Ansible. For Vagrant, you need at least version 1.7. Once you are done, make sure you are in the root of the repo (you should see a Vagrantfile) and run:

vagrant up

If, for some reason, the provisioning process does not complete, you can rerun it with:

vagrant provision

After that, you can access the RexDB study application at http://demo.local/.

On OSX, you may need to update the VirtualBox DHCP server before you do vagrant up, as follows:

$ VBoxManage dhcpserver remove --netname HostInterfaceNetworking-vboxnet0

While most of the Ansible items are idempotent, the database deploy always shows as being run.
• ### Shock location and CME 3D reconstruction of a solar type II radio burst with LOFAR(1804.01025)

April 3, 2018 astro-ph.SR

Type II radio bursts are evidence of shocks in the solar atmosphere and inner heliosphere that emit radio waves ranging from sub-meter to kilometer lengths. These shocks may be associated with CMEs and reach speeds higher than the local magnetosonic speed. Radio imaging of decameter wavelengths (20-90 MHz) is now possible with LOFAR, opening a new radio window in which to study coronal shocks that leave the inner solar corona and enter the interplanetary medium and to understand their association with CMEs. To this end, we study a coronal shock associated with a CME and type II radio burst to determine the locations at which the radio emission is generated, and we investigate the origin of the band-splitting phenomenon.

• ### The Association of a J-burst with a Solar Jet(1707.03428)

Aug. 14, 2017 astro-ph.SR

Context. The Sun is an active star that produces large-scale energetic events such as solar flares and coronal mass ejections and numerous smaller-scale events such as solar jets. These events are often associated with accelerated particles that can cause emission at radio wavelengths. The reconfiguration of the solar magnetic field in the corona is believed to be the cause of the majority of solar energetic events and accelerated particles. Aims. Here, we investigate a bright J-burst that was associated with a solar jet and the possible emission mechanism causing these two phenomena. Methods. We used data from the Solar Dynamics Observatory (SDO) to observe a solar jet, and radio data from the Low Frequency Array (LOFAR) and the Nançay Radioheliograph (NRH) to observe a J-burst over a broad frequency range (33-173 MHz) on 9 July 2013 at ~11:06 UT. Results. The J-burst showed fundamental and harmonic components and it was associated with a solar jet observed at extreme ultraviolet wavelengths with SDO. The solar jet occurred at a time and location coincident with the radio burst, in the northern hemisphere, and not inside a group of complex active regions in the southern hemisphere. The jet occurred in the negative polarity region of an area of bipolar plage. Newly emerged positive flux in this region appeared to be the trigger of the jet. Conclusions. Magnetic reconnection between the overlying coronal field lines and the newly emerged positive field lines is most likely the cause of the solar jet. Radio imaging provides a clear association between the jet and the J-burst which shows the path of the accelerated electrons.

• ### LBCS: the LOFAR Long-Baseline Calibrator Survey(1608.02133)

Sept. 9, 2016 astro-ph.GA, astro-ph.IM

(abridged). We outline LBCS (the LOFAR Long-Baseline Calibrator Survey), whose aim is to identify sources suitable for calibrating the highest-resolution observations made with the International LOFAR Telescope, which include baselines >1000 km. Suitable sources must contain significant correlated flux density (50-100 mJy) at frequencies around 110-190 MHz on scales of a few hundred mas. At least for the 200-300 km international baselines, we find around 1 suitable calibrator source per square degree over a large part of the northern sky, in agreement with previous work. This should allow a randomly selected target to be successfully phase calibrated on the international baselines in over 50% of cases.
Products of the survey include calibrator source lists and fringe-rate and delay maps of wide areas -- typically a few degrees -- around each source. The density of sources with significant correlated flux declines noticeably with baseline length over the range 200-600 km, with good calibrators on the longest baselines appearing only at the rate of 0.5 per square degree. Coherence times decrease from 1-3 minutes on 200-km baselines to about 1 minute on 600-km baselines, suggesting that ionospheric phase variations contain components with scales of a few hundred kilometres. The longest median coherence time, at just over 3 minutes, is seen on the DE609 baseline, which at 227 km is close to being the shortest. We see median coherence times of between 80 and 110 seconds on the four longest baselines (580-600 km), and about 2 minutes for the other baselines. The success of phase transfer from calibrator to target is shown to be influenced by distance, in a manner that suggests a coherence patch at 150 MHz of the order of 1 degree.

• ### Calibrating the absolute amplitude scale for air showers measured at LOFAR(1507.08932)

Dec. 28, 2015 astro-ph.IM, astro-ph.HE

Air showers induced by cosmic rays create nanosecond pulses detectable at radio frequencies. These pulses have been measured successfully in the past few years at the LOw Frequency ARray (LOFAR) and are used to study the properties of cosmic rays. For a complete understanding of this phenomenon and the underlying physical processes, an absolute calibration of the detecting antenna system is needed. We present three approaches that were used to check and improve the antenna model of LOFAR and to provide an absolute calibration of the whole system for air shower measurements. Two methods are based on calibrated reference sources and one on a calibration approach using the diffuse radio emission of the Galaxy, optimized for short data-sets. An accuracy of 19% in amplitude is reached. The absolute calibration is also compared to predictions from air shower simulations. These results are used to set an absolute energy scale for air shower measurements and can be used as a basis for an absolute scale for the measurement of astronomical transients with LOFAR.

• ### Imaging Jupiter's radiation belts down to 127 MHz with LOFAR(1511.09118)

Nov. 30, 2015 astro-ph.EP, astro-ph.IM

Context. Observing Jupiter's synchrotron emission from the Earth remains today the sole method to scrutinize the distribution and dynamical behavior of the ultra energetic electrons magnetically trapped around the planet (because in-situ particle data are limited in the inner magnetosphere). Aims. We perform the first resolved and low-frequency imaging of the synchrotron emission with LOFAR at 127 MHz. The radiation comes from low energy electrons (~1-30 MeV) which map a broad region of Jupiter's inner magnetosphere. Methods (see article for complete abstract) Results. The first resolved images of Jupiter's radiation belts at 127-172 MHz are obtained along with total integrated flux densities. They are compared with previous observations at higher frequencies and show a larger extent of the synchrotron emission source (>=4 $R_J$). The asymmetry and the dynamics of the east-west emission peaks are measured, as is the presence of a hot spot at $\lambda_{III} = 230° \pm 25°$. Spectral flux density measurements are on the low side of previous (unresolved) ones, suggesting a low-frequency turnover and/or time variations of the emission spectrum. Conclusions.
LOFAR is a powerful and flexible planetary imager. The observations at 127 MHz depict an extended emission up to ~4-5 planetary radii. The similarities with high-frequency results reinforce the conclusion that: i) the magnetic field morphology primarily shapes the brightness distribution of the emission and ii) the radiating electrons are likely radially and latitudinally distributed inside about 2 $R_J$. Nonetheless, the larger extent of the brightness combined with the overall lower flux density yields new information on Jupiter's electron distribution, which may shed light on the origin and mode of transport of these particles.

• ### LOFAR MSSS: Detection of a low-frequency radio transient in 400 hrs of monitoring of the North Celestial Pole(1512.00014)

Nov. 30, 2015 astro-ph.IM, astro-ph.HE

We present the results of a four-month campaign searching for low-frequency radio transients near the North Celestial Pole with the Low-Frequency Array (LOFAR), as part of the Multifrequency Snapshot Sky Survey (MSSS). The data were recorded between 2011 December and 2012 April and comprised 2149 11-minute snapshots, each covering 175 deg^2. We have found one convincing candidate astrophysical transient, with a duration of a few minutes and a flux density at 60 MHz of 15-25 Jy. The transient does not repeat and has no obvious optical or high-energy counterpart, as a result of which its nature is unclear. The detection of this event implies a transient rate at 60 MHz of 3.9 (+14.7, -3.7) x 10^-4 day^-1 deg^-2, and a transient surface density of 1.5 x 10^-5 deg^-2, at a 7.9-Jy limiting flux density and ~10-minute time-scale. The campaign data were also searched for transients at a range of other time-scales, from 0.5 to 297 min, which allowed us to place a range of limits on transient rates at 60 MHz as a function of observation duration.

• ### Wide-Band, Low-Frequency Pulse Profiles of 100 Radio Pulsars with LOFAR(1509.06396)

LOFAR offers the unique capability of observing pulsars across the 10-240 MHz frequency range with a fractional bandwidth of roughly 50%. This spectral range is well-suited for studying the frequency evolution of pulse profile morphology caused by both intrinsic and extrinsic effects, such as changing emission altitude in the pulsar magnetosphere or scatter broadening by the interstellar medium, respectively. The magnitude of most of these effects increases rapidly towards low frequencies. LOFAR can thus address a number of open questions about the nature of radio pulsar emission and its propagation through the interstellar medium. We present the average pulse profiles of 100 pulsars observed in the two LOFAR frequency bands: High Band (120-167 MHz, 100 profiles) and Low Band (15-62 MHz, 26 profiles). We compare them with Westerbork Synthesis Radio Telescope (WSRT) and Lovell Telescope observations at higher frequencies (350 and 1400 MHz) in order to study the profile evolution. The profiles are aligned in absolute phase by folding with a new set of timing solutions from the Lovell Telescope, which we present along with precise dispersion measures obtained with LOFAR. We find that the profile evolution with decreasing radio frequency does not follow a specific trend but, depending on the geometry of the pulsar, new components can enter into, or be hidden from, view. Nonetheless, in general our observations confirm the widening of pulsar profiles at low frequencies, as expected from radius-to-frequency mapping or birefringence theories.
We offer this catalog of low-frequency pulsar profiles in a user-friendly way via the EPN Database of Pulsar Profiles (http://www.epta.eu.org/epndb/).

• ### The LOFAR Multifrequency Snapshot Sky Survey (MSSS) I. Survey description and first results(1509.01257)

Sept. 3, 2015 astro-ph.IM

We present the Multifrequency Snapshot Sky Survey (MSSS), the first northern-sky LOFAR imaging survey. In this introductory paper, we first describe in detail the motivation and design of the survey. Compared to previous radio surveys, MSSS is exceptional due to its intrinsic multifrequency nature providing information about the spectral properties of the detected sources over more than two octaves (from 30 to 160 MHz). The broadband frequency coverage, together with the fast survey speed generated by LOFAR's multibeaming capabilities, makes MSSS the first survey of the sort anticipated to be carried out with the forthcoming Square Kilometre Array (SKA). Two of the sixteen frequency bands included in the survey were chosen to exactly overlap the frequency coverage of large-area Very Large Array (VLA) and Giant Metrewave Radio Telescope (GMRT) surveys at 74 MHz and 151 MHz, respectively. The survey performance is illustrated within the "MSSS Verification Field" (MVF), a region of 100 square degrees centered at J2000 (RA,Dec)=(15h,69deg). The MSSS results from the MVF are compared with previous radio survey catalogs. We assess the flux and astrometric uncertainties in the catalog, as well as the completeness and reliability considering our source finding strategy. We determine the 90% completeness levels within the MVF to be 100 mJy at 135 MHz with 108" resolution, and 550 mJy at 50 MHz with 166" resolution. Images and catalogs for the full survey, expected to contain 150,000-200,000 sources, will be released to a public web server. We outline the plans for the ongoing production of the final survey products, and the ultimate public release of images and source catalogs.

• ### LOFAR tied-array imaging and spectroscopy of solar S bursts(1507.07496)

July 27, 2015 astro-ph.SR

Context. The Sun is an active source of radio emission that is often associated with energetic phenomena ranging from nanoflares to coronal mass ejections (CMEs). At low radio frequencies (<100 MHz), numerous millisecond-duration radio bursts have been reported, such as radio spikes or solar S bursts (where S stands for short). To date, these have neither been studied extensively nor imaged because of the instrumental limitations of previous radio telescopes. Aims. Here, Low Frequency Array (LOFAR) observations were used to study the spectral and spatial characteristics of a multitude of S bursts, as well as their origin and possible emission mechanisms. Methods. We used 170 simultaneous tied-array beams for spectroscopy and imaging of S bursts. Since S bursts have short timescales and fine frequency structures, high cadence (~50 ms) tied-array images were used instead of standard interferometric imaging, which is currently limited to one image per second. Results. On 9 July 2013, over 3000 S bursts were observed over a time period of ~8 hours. S bursts were found to appear as groups of short-lived (<1 s) and narrow-bandwidth (~2.5 MHz) features, the majority drifting at ~3.5 MHz/s and showing a wide range of circular polarisation degrees (2-8 times more polarised than the accompanying Type III bursts).
Extrapolation of the photospheric magnetic field using the potential field source surface (PFSS) model suggests that S bursts are associated with a trans-equatorial loop system that connects an active region in the southern hemisphere to a bipolar region of plage in the northern hemisphere. Conclusions. We have identified polarised, short-lived solar radio bursts that have never been imaged before. They are observed at a height and frequency range where plasma emission is the dominant emission mechanism; however, they possess some of the characteristics of electron-cyclotron maser emission.

• ### LOFAR discovery of a quiet emission mode in PSR B0823+26(1505.03064)

May 12, 2015 astro-ph.SR, astro-ph.HE

PSR B0823+26, a 0.53-s radio pulsar, displays a host of emission phenomena over timescales of seconds to (at least) hours, including nulling, subpulse drifting, and mode-changing. Studying pulsars like PSR B0823+26 provides further insight into the relationship between these various emission phenomena and what they might teach us about pulsar magnetospheres. Here we report on the LOFAR discovery that PSR B0823+26 has a weak and sporadically emitting 'quiet' (Q) emission mode that is over 100 times weaker (on average) and has a nulling fraction forty times greater than that of the more regularly emitting 'bright' (B) mode. Previously, the pulsar has been undetected in the Q-mode, and was assumed to be nulling continuously. PSR B0823+26 shows a further decrease in average flux just before the transition into the B-mode, and perhaps truly turns off completely at these times. Furthermore, simultaneous observations taken with the LOFAR, Westerbork, Lovell, and Effelsberg telescopes between 110 MHz and 2.7 GHz demonstrate that the transition between the Q-mode and B-mode occurs within one single rotation of the neutron star, and that it is concurrent across the range of frequencies observed.

• ### The peculiar radio galaxy 4C 35.06: a case for recurrent AGN activity?(1504.06642)

April 24, 2015 astro-ph.CO, astro-ph.GA

Using observations obtained with the LOw Frequency ARray (LOFAR), the Westerbork Synthesis Radio Telescope (WSRT) and archival Very Large Array (VLA) data, we have traced the radio emission to large scales in the complex source 4C 35.06 located in the core of the galaxy cluster Abell 407. At higher spatial resolution (~4"), the source was known to have two inner radio lobes spanning 31 kpc and a diffuse, low-brightness extension running parallel to them, offset by about 11 kpc (in projection). At 62 MHz, we detect the radio emission of this structure extending out to 210 kpc. At 1.4 GHz and intermediate spatial resolution (~30"), the structure appears to have a helical morphology. We have derived the characteristics of the radio spectral index across the source. We show that the source morphology is most likely the result of at least two episodes of AGN activity separated by a dormant period of around 35 Myr. The AGN is hosted by one of the galaxies located in the cluster core of Abell 407. We propose that it is intermittently active as it moves in the dense environment in the cluster core. Using LOFAR, we can trace the relic plasma from that episode of activity out to greater distances from the core than ever before. Using the WSRT, we detect HI in absorption against the center of the radio source. The absorption profile is relatively broad (FWHM of 288 km/s), similar to what is found in other clusters.
Understanding the duty cycle of the radio emission as well as the triggering mechanism for starting (or restarting) the radio-loud activity can provide important constraints to quantify the impact of AGN feedback on galaxy evolution. The study of these mechanisms at low frequencies using morphological and spectral information promises to bring important new insights in this field.

• ### Probing Atmospheric Electric Fields in Thunderstorms through Radio Emission from Cosmic-Ray-Induced Air Showers(1504.05742)

We present measurements of radio emission from cosmic ray air showers that took place during thunderstorms. The intensity and polarization patterns of these air showers are radically different from those measured during fair-weather conditions. With the use of a simple two-layer model for the atmospheric electric field, these patterns can be well reproduced by state-of-the-art simulation codes. This in turn provides a novel way to study atmospheric electric fields.

• ### LOFAR Sparse Image Reconstruction(1406.7242)

March 6, 2015 astro-ph.IM

Context. The LOw Frequency ARray (LOFAR) radio telescope is a giant digital phased-array interferometer with multiple antennas distributed in Europe. It provides discrete sets of Fourier components of the sky brightness. Recovering the original brightness distribution with aperture synthesis forms an inverse problem that can be solved by various deconvolution and minimization methods. Aims. Recent papers have established a clear link between the discrete nature of radio interferometry measurements and "compressed sensing" (CS) theory, which supports sparse reconstruction methods to form an image from the measured visibilities. Empowered by proximal theory, CS offers a sound framework for efficient global minimization and sparse data representation using fast algorithms. Combined with instrumental direction-dependent effects (DDE) in the scope of a real instrument, we developed and validated a new method based on this framework. Methods. We implemented a sparse reconstruction method in the standard LOFAR imaging tool and compared the photometric and resolution performance of this new imager with that of CLEAN-based methods (CLEAN and MS-CLEAN) on simulated and real LOFAR data. Results. We show that i) sparse reconstruction performs as well as CLEAN in recovering the flux of point sources; ii) it performs much better on extended objects (the root mean square error is reduced by a factor of up to 10); and iii) it provides a solution with an effective angular resolution 2-3 times better than the CLEAN images. Conclusions. Sparse recovery gives correct photometry on high-dynamic-range and wide-field images and improved, realistic structures of extended sources (on simulated and real LOFAR datasets). This sparse reconstruction method is compatible with modern interferometric imagers that handle DDE corrections (A- and W-projections) required for current and future instruments such as LOFAR and the SKA.
Ring structures in the radio emission of air showers are measured with the LOFAR radio telescope in the frequency range of 110 - 190 MHz. These data are well described by CoREAS simulations. They clearly confirm the importance of including the index of refraction of air as a function of height. Furthermore, the presence of the Cherenkov ring offers the possibility for a geometrical measurement of the depth of shower maximum, which in turn depends on the mass of the primary particle. • ### The LOFAR long baseline snapshot calibrator survey(1411.2743) Nov. 11, 2014 astro-ph.IM Aims. An efficient means of locating calibrator sources for International LOFAR is developed and used to determine the average density of usable calibrator sources on the sky for subarcsecond observations at 140 MHz. Methods. We used the multi-beaming capability of LOFAR to conduct a fast and computationally inexpensive survey with the full International LOFAR array. Sources were pre-selected on the basis of 325 MHz arcminute-scale flux density using existing catalogues. By observing 30 different sources in each of the 12 sets of pointings per hour, we were able to inspect 630 sources in two hours to determine if they possess a sufficiently bright compact component to be usable as LOFAR delay calibrators. Results. Over 40% of the observed sources are detected on multiple baselines between international stations and 86 are classified as satisfactory calibrators. We show that a flat low-frequency spectrum (from 74 to 325 MHz) is the best predictor of compactness at 140 MHz. We extrapolate from our sample to show that the density of calibrators on the sky that are sufficiently bright to calibrate dispersive and non-dispersive delays for the International LOFAR using existing methods is 1.0 per square degree. Conclusions. The observed density of satisfactory delay calibrator sources means that observations with International LOFAR should be possible at virtually any point in the sky, provided that a fast and efficient search using the methodology described here is conducted prior to the observation to identify the best calibrator. • ### LOFAR low-band antenna observations of the 3C295 and Bootes fields: source counts and ultra-steep spectrum sources(1409.5437) Sept. 18, 2014 astro-ph.CO, astro-ph.GA We present LOFAR Low Band observations of the Bootes and 3C295 fields. Our images made at 34, 46, and 62 MHz reach noise levels of 12, 8, and 5 mJy beam$^{-1}$, making them the deepest images ever obtained in this frequency range. In total, we detect between 300 and 400 sources in each of these images, covering an area of 17 to 52 deg$^{2}$. From the observations we derive Euclidean-normalized differential source counts. The 62 MHz source counts agree with previous GMRT 153 MHz and VLA 74 MHz differential source counts, scaling with a spectral index of $-0.7$. We find that a spectral index scaling of $-0.5$ is required to match up the LOFAR 34 MHz source counts. This result is also in agreement with source counts from the 38 MHz 8C survey, indicating that the average spectral index of radio sources flattens towards lower frequencies. We also find evidence for spectral flattening using the individual flux measurements of sources between 34 and 1400 MHz and by calculating the spectral index averaged over the source population. 
To select ultra-steep spectrum ($\alpha < -1.1$) radio sources that could be associated with massive high redshift radio galaxies, we compute spectral indices between 62 MHz, 153 MHz and 1.4 GHz for sources in the Boötes field. We cross-correlate these radio sources with optical and infrared catalogues and fit the spectral energy distribution to obtain photometric redshifts. We find that most of these ultra-steep spectrum sources are located in the $0.7 \lesssim z \lesssim 2.5$ range.

• ### The LOFAR Pilot Surveys for Pulsars and Fast Radio Transients(1408.0411)

Aug. 2, 2014 astro-ph.SR, astro-ph.HE

We have conducted two pilot surveys for radio pulsars and fast transients with the Low-Frequency Array (LOFAR) around 140 MHz and here report on the first low-frequency fast radio burst limit and the discovery of two new pulsars. The first survey, the LOFAR Pilot Pulsar Survey (LPPS), observed a large fraction of the northern sky, ~1.4 x 10^4 sq. deg, with 1-hr dwell times. Each observation covered ~75 sq. deg using 7 independent fields formed by incoherently summing the high-band antenna fields. The second pilot survey, the LOFAR Tied-Array Survey (LOTAS), spanned ~600 sq. deg, with roughly a 5-fold increase in sensitivity compared with LPPS. Using a coherent sum of the 6 LOFAR "Superterp" stations, we formed 19 tied-array beams, together covering 4 sq. deg per pointing. From LPPS we derive a limit on the occurrence, at 142 MHz, of dispersed radio bursts of < 150 /day/sky, for bursts brighter than S > 107 Jy for the narrowest searched burst duration of 0.66 ms. In LPPS, we re-detected 65 previously known pulsars. LOTAS discovered two pulsars, the first with LOFAR or any digital aperture array. LOTAS also re-detected 27 previously known pulsars. These pilot studies show that LOFAR can efficiently carry out all-sky surveys for pulsars and fast transients, and they set the stage for further surveying efforts using LOFAR and the planned low-frequency component of the Square Kilometer Array.

• ### LOFAR tied-array imaging of Type III solar radio bursts(1407.4385)

July 16, 2014 astro-ph.SR

The Sun is an active source of radio emission which is often associated with energetic phenomena such as solar flares and coronal mass ejections (CMEs). At low radio frequencies (<100 MHz), the Sun has not been imaged extensively because of the instrumental limitations of previous radio telescopes. Here, the combined high spatial, spectral and temporal resolution of the Low Frequency Array (LOFAR) was used to study solar Type III radio bursts at 30-90 MHz and their association with CMEs. The Sun was imaged with 126 simultaneous tied-array beams within 5 solar radii of the solar centre. This method offers benefits over standard interferometric imaging since each beam produces high temporal (83 ms) and spectral resolution (12.5 kHz) dynamic spectra at an array of spatial locations centred on the Sun. LOFAR's standard interferometric output is currently limited to one image per second. Over a period of 30 minutes, multiple Type III radio bursts were observed, a number of which were found to be located at high altitudes (4 solar radii from the solar center at 30 MHz) and to have non-radial trajectories. These bursts occurred at altitudes in excess of values predicted by 1D radial electron density models. The non-radial high altitude Type III bursts were found to be associated with the expanding flank of a CME.
The CME may have compressed neighbouring streamer plasma producing larger electron densities at high altitudes, while the non-radial burst trajectories can be explained by the deflection of radial magnetic fields as the CME expanded in the low corona.

• ### Initial LOFAR observations of Epoch of Reionization windows: II. Diffuse polarized emission in the ELAIS-N1 field(1407.2093)

July 8, 2014 astro-ph.GA, astro-ph.IM

This study aims to characterise the polarized foreground emission in the ELAIS-N1 field and to address its possible implications for the extraction of the cosmological 21-cm signal from the Low-Frequency Array - Epoch of Reionization (LOFAR-EoR) data. We use the high band antennas of LOFAR to image this region and RM-synthesis to unravel structures of polarized emission at high Galactic latitudes. The brightness temperature of the detected Galactic emission is on average 4 K in polarized intensity and covers the range from -10 to +13 rad m^-2 in Faraday depth. The total polarized intensity and polarization angle show a wide range of morphological features. We have also used the Westerbork Synthesis Radio Telescope (WSRT) at 350 MHz to image the same region. The LOFAR and WSRT images show a similar complex morphology, at comparable brightness levels, but their spatial correlation is very low. The fractional polarization at 150 MHz, expressed as a percentage of the total intensity, amounts to 1.5%. There is no indication of diffuse emission in total intensity in the interferometric data, in line with results at higher frequencies. The wide frequency range, good angular resolution and good sensitivity make LOFAR an exquisite instrument for studying Galactic polarized emission at a resolution of 1-2 rad m^-2 in Faraday depth. The different polarised patterns observed at 150 MHz and 350 MHz are consistent with different source distributions along the line of sight arising in a variety of Faraday thin regions of emission. The presence of polarised foregrounds is a serious complication for Epoch of Reionization experiments. To avoid the leakage of polarized emission into total intensity, which can depend on frequency, we need to calibrate the instrumental polarization across the field of view to a small fraction of 1%.

• ### The shape of the radio wavefront of extensive air showers as measured with LOFAR(1404.3907)

June 8, 2014 astro-ph.IM, astro-ph.HE

Extensive air showers, induced by high energy cosmic rays impinging on the Earth's atmosphere, produce radio emission that is measured with the LOFAR radio telescope. As the emission comes from a finite distance of a few kilometers, the incident wavefront is non-planar. A spherical, conical or hyperbolic shape of the wavefront has been proposed, but measurements of individual air showers have been inconclusive so far. For a selected high-quality sample of 161 measured extensive air showers, we have reconstructed the wavefront by measuring pulse arrival times to sub-nanosecond precision in 200 to 350 individual antennas. For each measured air shower, we have fitted a conical, spherical, and hyperboloid shape to the arrival times. The fit quality and a likelihood analysis show that a hyperboloid is the best parametrization. Using a non-planar wavefront shape gives an improved angular resolution when reconstructing the shower arrival direction. Furthermore, a dependence of the wavefront shape on the shower geometry can be seen.
This suggests that it will be possible to use a wavefront shape analysis to get an additional handle on the atmospheric depth of the shower maximum, which is sensitive to the mass of the primary particle.

• ### Discovery of Carbon Radio Recombination Lines in absorption towards Cygnus A(1401.2876)

Jan. 13, 2014 astro-ph.GA

We present the first detection of carbon radio recombination line absorption along the line of sight to Cygnus A. The observations were carried out with the LOw Frequency ARray in the 33 to 57 MHz range. These low-frequency radio observations provide us with a new line of sight to study the diffuse, neutral gas in our Galaxy. To our knowledge this is the first time that foreground Milky Way recombination line absorption has been observed against a bright extragalactic background source. By stacking 48 carbon $\alpha$ lines in the observed frequency range we detect carbon absorption with a signal-to-noise ratio of about 5. The average carbon absorption has a peak optical depth of 2$\times$10$^{-4}$, a line width of 10 km s$^{-1}$ and a velocity of +4 km s$^{-1}$ with respect to the local standard of rest. The associated gas is found to have an electron temperature $T_{e}\sim$ 110 K and density $n_{e}\sim$ 0.06 cm$^{-3}$. These properties imply that the observed carbon $\alpha$ absorption likely arises in the cold neutral medium of the Orion arm of the Milky Way. Hydrogen and helium lines were not detected to a 3$\sigma$ peak optical depth limit of 1.5$\times$10$^{-4}$ for a 4 km s$^{-1}$ channel width. Radio recombination lines associated with Cygnus A itself were also searched for, but are not detected. We set a 3$\sigma$ upper limit of 1.5$\times$10$^{-4}$ for the peak optical depth of these lines for a 4 km s$^{-1}$ channel width.

• ### Detecting cosmic rays with the LOFAR radio telescope(1311.1399)

Nov. 6, 2013 astro-ph.IM, astro-ph.HE

The low frequency array (LOFAR) is the first radio telescope designed with the capability to measure radio emission from cosmic-ray induced air showers in parallel with interferometric observations. In the first $\sim 2\,\mathrm{years}$ of observing, 405 cosmic-ray events in the energy range of $10^{16} - 10^{18}\,\mathrm{eV}$ have been detected in the band from $30 - 80\,\mathrm{MHz}$. Each of these air showers is registered with up to $\sim1000$ independent antennas resulting in measurements of the radio emission with unprecedented detail. This article describes the dataset, as well as the analysis pipeline, and serves as a reference for future papers based on these data. All steps necessary to achieve a full reconstruction of the electric field at every antenna position are explained, including removal of radio frequency interference, correcting for the antenna response and identification of the pulsed signal.

• ### Studying Galactic interstellar turbulence through fluctuations in synchrotron emission: First LOFAR Galactic foreground detection(1308.2804)

Aug. 19, 2013 astro-ph.GA

The characteristic outer scale of turbulence and the ratio of the random to ordered components of the magnetic field are key parameters to characterise magnetic turbulence in the interstellar gas, which affects the propagation of cosmic rays within the Galaxy. We provide new constraints on those two parameters. We use the LOw Frequency ARray (LOFAR) to image the diffuse continuum emission in the Fan region at (l,b) = (137.0, +7.0) at 80"x70" resolution in the range [146,174] MHz.
We detect multi-scale fluctuations in the Galactic synchrotron emission and compute their power spectrum. Applying theoretical estimates and derivations from the literature for the first time, we derive the outer scale of turbulence and the ratio of random to ordered magnetic field from the characteristics of these fluctuations. We obtain the deepest image of the Fan region to date and find diffuse continuum emission within the primary beam. The power spectrum of the foreground synchrotron fluctuations displays a power-law behaviour for scales between 100 and 8 arcmin with a slope of (-1.84+/-0.19). We find an upper limit of about 20 pc for the outer scale of the magnetic interstellar turbulence toward the Fan region. We also find a variation of the ratio of random to ordered field as a function of Galactic coordinates, supporting different turbulent regimes. We use power-spectrum fluctuations from LOFAR as well as earlier GMRT and WSRT observations to constrain the outer scale of turbulence of the Galactic synchrotron foreground, finding a range of plausible values of 10-20 pc. Then, we use this information to deduce lower limits on the ratio of ordered to random magnetic field strength. These are found to be 0.3, 0.3, and 0.5 for the LOFAR, WSRT and GMRT fields considered, respectively. Both these constraints are in agreement with previous estimates.

• ### The brightness and spatial distributions of terrestrial radio sources(1307.5580)

Faint undetected sources of radio-frequency interference (RFI) might become visible in long radio observations when they are consistently present over time. Thereby, they might obstruct the detection of the weak astronomical signals of interest. This issue is especially important for Epoch of Reionisation (EoR) projects that try to detect the faint redshifted HI signals from the time of the earliest structures in the Universe. We explore the RFI situation at 30-163 MHz by studying brightness histograms of visibility data observed with LOFAR, similar to radio-source-count analyses that are used in cosmology. An empirical RFI distribution model is derived that allows the simulation of RFI in radio observations. The brightness histograms show an RFI distribution that follows a power-law distribution with an estimated exponent around -1.5. With several assumptions, this can be explained with a uniform distribution of terrestrial radio sources whose radiation follows existing propagation models. Extrapolation of the power law implies that the current LOFAR EoR observations should be severely RFI limited if the strength of RFI sources remains strong after time integration. This is in contrast with actual observations, which almost reach the thermal noise and are thought not to be limited by RFI. Therefore, we conclude that it is unlikely that there are undetected RFI sources that will become visible in long observations. Consequently, there is no indication that RFI will prevent an EoR detection with LOFAR.

• LOFAR, the LOw-Frequency ARray, is a new-generation radio interferometer constructed in the north of the Netherlands and across Europe. Utilizing a novel phased-array design, LOFAR covers the largely unexplored low-frequency range from 10-240 MHz and provides a number of unique observing capabilities. Spreading out from a core located near the village of Exloo in the northeast of the Netherlands, a total of 40 LOFAR stations are nearing completion.
A further five stations have been deployed throughout Germany, and one station has been built in each of France, Sweden, and the UK. Digital beam-forming techniques make the LOFAR system agile and allow for rapid repointing of the telescope as well as the potential for multiple simultaneous observations. With its dense core array and long interferometric baselines, LOFAR achieves unparalleled sensitivity and angular resolution in the low-frequency radio regime. The LOFAR facilities are jointly operated by the International LOFAR Telescope (ILT) foundation, as an observatory open to the global astronomical community. LOFAR is one of the first radio observatories to feature automated processing pipelines to deliver fully calibrated science products to its user community. LOFAR's new capabilities, techniques and modus operandi make it an important pathfinder for the Square Kilometre Array (SKA). We give an overview of the LOFAR instrument, its major hardware and software components, and the core science objectives that have driven its design. In addition, we present a selection of new results from the commissioning phase of this new radio observatory.
Stokes problem in $$\mathbb{R}^n$$ and weighted Sobolev spaces. (Problème de Stokes dans $$\mathbb{R}^n$$ et espaces de Sobolev avec poids.) (French. Abridged English version) Zbl 0894.35082

Summary: We present isomorphism results for the Stokes problem in $$\mathbb{R}^n$$ using weighted Sobolev spaces and we study the asymptotic behaviour of the solutions. We also study this problem with data in the Hardy space $${\mathcal H}^1(\mathbb{R}^n)$$ as well as a few properties of solenoidal functions.

##### MSC:

35Q30 Navier-Stokes equations
35B40 Asymptotic behavior of solutions to PDEs
76D07 Stokes and related (Oseen, etc.) flows
# Why is it harder to use a white light source in a Michelson Interferometer?

In a Michelson Interferometer, a light beam is split into 2 different beams that travel different optical paths, through the "arms" of the interferometer. Then, they are reflected and, finally, recombined to form a single light beam again. But because they traveled optical paths of different lengths, these beams will have a phase difference between them, which will result in an interference pattern. The straightforward method makes the device easy to build and use. However, it is said that one of the disadvantages of this interferometer is that it doesn't allow for accurate measurements when the light source is white light, but I don't really understand why that's the case. After a quick search on Wikipedia, they essentially say that the difference in the optical paths followed by the two light beams needs to be smaller than the coherence length of the light source to produce significant interference contrast, so since white light has a low coherence length (of the order of $$10^{-6}$$ m), the optical paths need to be practically the same for both light beams. But in order to get interference, wasn't the point to get the light beams to follow different optical paths? This all just seems slightly contradictory to me.

• "But in order to get interference, wasn't the point to get the light beams to follow different optical paths?" Interference will occur even if the light beams follow the same optical path. In fact, the experiment tries to ensure that the 2 beams follow the same optical path. Then, it tries to measure whether the light beams along the 2 identical paths have different speeds or not. Apr 9 '21 at 10:29
• But then where does the bit about how the difference in the optical paths needs to be smaller than the coherence length of the light source come into place? – Rye Apr 9 '21 at 10:47
• Where does it say that? Can you copy-paste the full sentence you are referring to? Apr 9 '21 at 10:51
• On the Wikipedia link I mentioned: "Narrowband spectral light from a discharge or even white light can also be used, however to obtain significant interference contrast it is required that the differential pathlength is reduced below the coherence length of the light source. That can be only micrometers for white light, as discussed below." – Rye Apr 9 '21 at 10:54
• FTIR spectrometers use white light in a Michelson interferometer: en.m.wikipedia.org/wiki/Fourier-transform_infrared_spectroscopy Apr 9 '21 at 17:59

An interference pattern has a visibility, a measure of how well the fringes can be distinguished. The visibility is related to the degree of first-order coherence $$g^{(1)}$$, a measure of amplitude-amplitude correlation. It reflects the fact that there exists a fixed relationship between the phases at different instants of time, or at different positions along the beam. White light is broadband, and the larger the bandwidth, the shorter the coherence time. The latter represents the time over which there is a fixed phase relationship. Since interference depends upon phases, it is important to have a fixed phase relationship. The coherence length is defined as $$l_c = c\tau_c$$, where c is the speed of light. When the fringe pattern is computed, there is an oscillatory term that depends on the path-length difference and a prefactor that is the modulus of the degree of first-order coherence. Taking for example chaotic light with collision broadening, you have an exponential decay of the degree of first-order coherence, like $$\exp(-\frac{\tau}{\tau_c})$$.
If you take the path-length difference $$|z_1 - z_2|$$ to be much smaller than the coherence length $$l_c$$, this term can be high, and so can the visibility, since $$\tau = \frac{|z_1 - z_2|}{c}$$. An extremely small coherence time $$\tau_c$$ therefore forces you to use impractically small path-length differences.

The reason a white light source is more difficult to use than a monochromatic source is really very simple: visible fringes are easier to obtain when a monochromatic source is used. This is because monochromatic light has a longer coherence length. Fringes are only visible if the difference in path lengths is less than the coherence length. So, if the coherence length is just a few microns as in the case of white light, the two path lengths must be adjusted to be equal within a few microns - which is not always easy. An LED typically has a coherence length of 12 microns or so, and a diode laser can have a coherence length of meters. It's very easy to obtain fringes immediately after setting up a Michelson interferometer with a laser source, but it can be very tedious to obtain fringes with a white light source. Yes, the difference in path length is indeed what is measured in an interferometer, but if fringes can only be seen when the path-length difference is less than the coherence length, white light only allows a very small range of path-length differences to be measured: a few microns. With a coherence length of meters, a laser can be used in an interferometer to measure path-length changes that vary from a fraction of a micron to several meters, all with sub-micron precision.

The Michelson interferometer compares the lengths of two light paths. One source produces two light beams which appear to come from two "coherent" (virtual) sources, which traverse two paths and then superpose to produce an interference pattern consisting of regions of different intensities - light and dark fringes. Amongst other factors, the separation of the fringes depends on the wavelength of the light. The shape of the fringes depends on the orientation of the mirrors, which, if exactly at right angles to one another, produces circular fringes, and if at other angles to one another can produce straight-line fringes. White light contains a continuous range of wavelengths, each of which produces an interference pattern with a different fringe separation. It is only when the path lengths of the two beams which form the interference pattern are equal that constructive interference occurs at the same region for all wavelengths, and that is often called the zero-order fringe. What you see beyond the zero fringe is the fringe patterns of different wavelengths overlapping one another. So the first bright fringe beyond the centre (first-order fringe) is blue (short wavelength) on the "inside" and red (longer wavelength) on the outside, because the blue light has the smallest fringe separation and the red light the largest fringe separation. As one moves away from the centre you could have a bright fringe due to one wavelength occurring at a position where there is a dark fringe of another wavelength. Thus, moving out from the centre, the fringe patterns due to each of the wavelengths which make up white light overlap so much that no further fringes are discernible. With white light one might see a central white fringe and a few coloured fringes, and then uniform illumination.
To measure a wavelength using the interferometer, one changes one of the path lengths by moving a mirror a measured distance and counting the fringes which cross a graticule in the field of view, each complete fringe movement corresponding to the mirror being moved half a wavelength of the light. To get a reasonable estimate of the wavelength, one might measure the distance the mirror moves while $$50$$ fringes traverse the field of view. The problem with trying to measure a wavelength of the light which makes up white light is that as soon as a mirror is moved even a little from the position where the light path lengths are equal, the white-light fringe disappears, coloured fringes replace it, and then, as the mirror moves further, the fringes disappear altogether. One could use narrow-width bandpass filters to measure the wavelengths of some of the components of white light. One advantage of using a white light source is that it can be used to set up the interferometer with equal path lengths to within a fraction of a wavelength, as only then will a zero-order white fringe be visible.
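To put numbers on the fringe-counting relation above, here is a small illustrative calculation; the displacement and coherence-time values are assumed for illustration, not taken from the answers:

```julia
# Moving one mirror by d sweeps N = 2d/λ fringes past the graticule, so λ = 2d/N.
d = 15.8e-6              # assumed mirror displacement in metres
N = 50                   # number of fringes counted
λ = 2 * d / N            # ≈ 6.32e-7 m, i.e. ~632 nm (a typical He-Ne line)

# The usable range of path differences is set by the coherence length l_c = c·τ_c.
c = 3.0e8                # speed of light, m/s
τc_white = 1e-14         # rough coherence time of white light (assumed)
l_white = c * τc_white   # ≈ 3e-6 m: the two arms must match to a few microns
```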
# Christoffel Symbols vanish at Origin of Normal Neighborhood ## Theorem Let $\struct {M, g}$ be an $n$-dimensional Riemannian or pseudo-Riemannian manifold. Let $U_p$ be the normal neighborhood for $p \in M$. Let $\struct {U_p, \tuple {x^i}}$ be a normal coordinate chart. Let $\set {\Gamma^i_{jk}}$ be Christoffel symbols. Then: $\map {\Gamma^i_{jk} } {\map {x^r}p} = 0$
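The entry stops at the statement; a standard proof sketch (my addition, following the usual normal-coordinates argument, and not part of the original page) runs as follows:

```latex
% In normal coordinates, the radial geodesics through p are straight lines,
% \gamma_v(t) = (t v^1, \ldots, t v^n). Substituting into the geodesic equation
\[
  \ddot\gamma^i + \Gamma^i_{jk}(\gamma)\,\dot\gamma^j\,\dot\gamma^k = 0
\]
% and evaluating at t = 0 gives \Gamma^i_{jk}(p)\, v^j v^k = 0 for every v.
% Since the Levi-Civita connection is torsion-free, \Gamma^i_{jk} = \Gamma^i_{kj},
% so polarizing in v (replace v by u + w) yields \Gamma^i_{jk}(p) = 0.
```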
## India lacks the cultural drive to honor science

This curious note on currency notes is inspired by the talks of my favorite astronomer Dr. Neil deGrasse Tyson, whose lectures fill my iPod and hard-disk and backup hard-disk.

To the Prime Minister of India and the Governor, Reserve Bank of India.

Dear Sir,

This note is inspired by the public lectures of astronomer Dr. Neil deGrasse Tyson. Yesterday, the 20th of July, on the one hand I was celebrating the greatest scientific achievement of all time, the 45th anniversary of Neil Armstrong's Apollo 11 landing on the moon; on the other hand I was lamenting the state of Indian science. A government report revealed that India spends only 0.88% of its GDP on scientific research against 8% for the US. And we aspire to become a superpower? Whom are we kidding? I am yet to see a superpower that spends so little on science. As I reflected over this, I had a revelation. Not only is our science lacking money, but our money is also lacking science.

The real issue is not that India spends very little on science. The real issue is that we lack the cultural drive to honor science in the first place. Our low spending on science is an outcome of this cultural deficiency.

How do nations honor their heroes? By featuring them on their money. Why? Because we all value and use money, and the persons depicted on currency notes are individuals who have made extraordinary contributions to the nation. This allows the citizens to draw inspiration from their heroes. When I looked at the currency notes around the world, I found that every country that had a scientist was proud to display on its money the iconography of a scientific contribution which has advanced the frontier of knowledge, not only for that nation but for all of humanity. Let me quote three examples:

• Sir Isaac Newton (1643 – 1727 AD) is considered the most brilliant person that ever was. He discovered the laws of motion, the universal law of gravity and the composition of light, and invented calculus, all before he was 26. His portrait was on the British 1 pound note.
• Ibn al-Haitham (965 – 1040 AD), the father of modern optics, ophthalmology, experimental physics and scientific methodology, was the world's first theoretical physicist. His image was on the 10 and 10,000 Iraqi Dinar notes.
• Abu Nasr al-Farabi (872 – 950 AD) conducted the first experiments concerning the existence of vacuum. His image was on the 1 Kazakhstani Tenge.

What gets printed on the money is important because it conveys the nation's message to its citizens and to the rest of the world. Which country do you think makes the best engineers in the world? Germany. German engineering is a metaphor for engineering excellence. And how do the Germans convey this fact? They wrote a mathematical distribution function on their money next to the image of Carl Friedrich Gauss, the greatest mathematician of all time.

This tradition of honoring scientists is followed not just by developed nations but also by nations far worse off than India. England and Germany are scientific powerhouses but Iraq and Kazakhstan are not. Yet they all had the cultural drive to honor their scientists, going back hundreds of years into history to find the most deserving scientists to grace their current money. Several nations have maintained this tradition. Einstein was on the Israeli currency; Tesla on the Serbian; and Marie Curie was on both the Polish and the French currency. Many countries have featured more than one scientist.
Newton, Faraday, Darwin and Kelvin were on the British currency; and Galileo, Volta and Marconi were on the Italian currency. What about India? On the one hand our Mangalyaan is on its way to Mars; on the other hand we are trailing the world in the cultural adoption of science. We do not lack scientists; Brahmagupta and Aryabhatta are no lesser scientists than Al-Haitham or Al-Farabi; Ramanujan is no lesser a mathematician than Gauss. From Baudhayana in the 8th century BC to Chandrasekhar in the 20th century AD, India has produced dozens of extraordinary scientists that any foreign nation would be proud to feature on its money. But not India; we don't value our own jewels. Our culture is yet to reach a state of maturity where scientific progress is celebrated in everyday life. Therefore we haven't honored a single scientist on our currency, even though we proclaim that we have been doing science since the Vedic ages.

After independence, Gandhi has monopolized our currency notes. What message does this convey? It says three things: we value Gandhian principles, we were British colonial slaves, and nothing else is valuable to this nation. Yes, we value Gandhian principles, but do we really need to show it across all our currency denominations? Nations that have featured scientists on their currency have also featured their other greats, including but not limited to rulers and politicians. People from diverse fields have contributed at the highest level of excellence to the rise of India. Can we not spare a few denominations of our currency for them? Will it be disrespectful to Gandhi if one of our currency notes features C. V. Raman or Homi Jehangir Bhabha?

Now that we are completing our 67th orbit around the sun as an independent nation, it is time we caught up with the rest of the world in intellectual maturity. We should respect our past and have a vision for the future; then we can become a superpower. The way to do this is by embracing science in our culture. A scientist on our money would convey a healthy and heartwarming message to the nation, because scientists point towards the future.
List of scientists on the currencies of countries

Abu Ali al-Hasan Ibn al-Haitham, 10 Iraqi Dinar (1982)
Abu Ali al-Hasan Ibn al-Haitham, 10000 Iraqi Dinar (2005)
Abu Nasr al-Farabi, 1 Kazakhstani Tenge (1993)
Adam Smith, 20 British Pounds (2007)
Adam Smith, 50 British Pounds, Clydesdale (2003)
Albert Einstein, 5 Israeli Lirot (1968)
Alessandro Volta, 10000 Italian Lire (1984)
Alexander von Humboldt, 5 East German Marks (1964)
Benjamin Franklin, 100 United States Dollars (1985)
Blaise Pascal, 500 French Francs (1977)
Carl Friedrich Gauss, 10 Deutsche Marks (1991)
Carl Linné (Linnaeus), 100 Swedish Kronor (2003)
Charles Darwin, 10 British Pounds (2005)
Christiaan Huygens, 25 Dutch Guilder (1955)
Christopher Polhem, 500 Swedish Kronor (2003)
Democritus of Abdera, 100 Greek Drachma (1967)
Erwin Schrödinger, 1000 Austrian Schilling (1983)
Fryderyk Franciszek Chopin, 5000 Polish Zloty (1982)
Galileo Galilei, 2000 Italian Lire (1973)
George Stephenson, 5 British Pounds (1990)
Guglielmo Marconi, 2000 Italian Lire (1990)
Hans Christian Ørsted, 100 Danish Kroner (1970)
Janez Vajkard Valvasor, 20 Slovenian Tolarjev (1992)
Johann Balthasar Neumann, 50 Deutsche Marks (1991)
Jovan Jovanovic Zmaj, 500000000000 Yugoslavian Dinar (1993)
Jurij Vega, 50 Slovenian Tolars (1992)
Kristian Birkeland, 200 Norwegian Kroner (1994)
Leonhard Euler, 10 Swiss Francs (1997)
Louis Pasteur, 5 French Francs (1966)
Lord Ernest Rutherford, 100 New Zealand Dollars (current)
Lord Kelvin, 100 British Pounds, Clydesdale (1996)
Marie Curie, 20000 old Polish Zloty (1989)
Marie and Pierre Curie, 500 French Francs (1998)
Gerardus Mercator, 1000 Belgian Francs (1965)
Michael Faraday, 20 British Pounds (1993)
Nicolaus Copernicus, 1000 old Polish Zloty (1965)
Nicolaus Copernicus, 1000 old Polish Zloty (1982)
Niels Bohr, 500 Danish Kroner (current)
Nikola Tesla, 100 Serbian Dinar (2003)
Nikola Tesla, 100 Yugoslavian Dinar (1994)
Nikola Tesla, 10000000000 Yugoslavian Dinar (1993)
Nikola Tesla, 5 Yugoslavian Dinar (1994)
Nikola Tesla, 5000000 Yugoslavian Dinar (1993)
Ole Rømer, 50 Danish Kroner (1970)
Oswaldo Cruz, 50 Brazilian Cruzados (1986-8?)
Pedro Nunes, 100 Portuguese Escudos (1957)
René Descartes, 100 French Francs (1942)
Ruggero Boscovich, 1 Croatian Dinar (1991)
Ruggero Boscovich, 10 Croatian Dinar (1991)
Ruggero Boscovich, 100000 Croatian October Dinar (1993)
Ruggero Boscovich, 5 Croatian Dinar (1991)
Ruggero Boscovich, 50000 Croatian October Dinar (1993)
Sejong the Great, 10000 South Korean Won (2007)
Sigmund Freud, 50 Austrian Schilling (1986)
Sir Isaac Newton, 1 British Pound (c. 1984)
Thomas Jefferson, 2 United States Dollars (1976)
Urbain Jean Joseph Le Verrier, 50 French Francs (1947)
Viktor Ambartsumian, 100 Armenian Dram (1998)
Voltaire (François-Marie Arouet), 10 French Francs (1964)

Yours scientifically,
Nilotpal Kanti Sinha, Citizen, India.

This article was published in Abraxas Lifestyle magazine: http://www.abraxaslifestyle.com and http://www.abraxasnu.com

## The analytics of social compatibility

India is the land of arranged marriages, and the protagonist of this story had recently met five prospective brides who were equally eligible and evenly matched. All the girls had at least one unique attribute which he wanted his spouse to possess, and each of these attributes was equally important to him; one of them was very beautiful; another was highly educated; another had a good sense of humour; and so on. It was difficult for him to choose one girl over another.
The protagonist met a data scientist to discuss his problem of choice.

Data Scientist: This is similar to social choice theory, a framework for weighting individual interests, values, or welfares as an aggregate toward a collective decision using symbolic logic. Let's make an algorithm to evaluate the social compatibility between people. Then we will use it to find your best match from your prospective brides.

Protagonist: Why bother about evaluating social compatibility?

DS: Do you realize its immense business potential? The top two Indian matrimonial sites draw 2 million visitors accessing over 15 million pages daily. If these two websites implement the algorithm then, assuming that only 5% of the visitors actually use it, you have 100,000 daily users in India alone. In the future, if the top matrimonial and dating websites across the world implement the algorithm, you could hit half a million daily users. On a pay-per-use or fixed monthly rate revenue model, look at the expected income.

P: So how can we quantify compatibility?

DS: A simple approach is to rank the girls in order of each attribute and then combine the individual ranks into a composite rank using the known methods of combining ranks. The top composite-ranked girl is your first choice.

DS: Unfortunately, a ranking-based approach is conceptually flawed. Economics Nobel laureate Dr. Kenneth Arrow proved Arrow's impossibility theorem, a pioneering theorem of social choice theory, which states that no rank-order voting system satisfies all fairness criteria. Moreover, for critical social decisions psychology could prevail over statistics. The ideal methodology should be able to quantify the psychological aspect of human behaviour.

DS: Ideally you would want all the desired attributes of a dream spouse in one person. But in reality, the desired attributes will be distributed across different girls. So you have to give up on one attribute to gain on another. Thus the attributes are competing against each other, so you have to make competitive choices.

DS: Assume that you have a total of twenty points to allocate across the attributes. How much are you willing to give up on beauty to gain on the educational qualification of your spouse? If you allocate 15 points to beauty, you have only 5 points to allocate to education. When faced with scarce resources (points), you will be much more judicious in spending. Hence competitive choices are a better quantification of your actual psychological preference.

P: And how do we quantify competitive choices?

DS: By using conjoint analysis. It is based on mathematical psychology and is widely used in psychophysics, perception, decision-making and the quantitative analysis of behaviour. I will create a social compatibility algorithm and use your data to see what your actual psychological preferences are; then we will find your most suitable match.

P: Really? You can build such an algorithm?

DS: Rest assured; I have learned conjoint analysis from one of the pioneers of the subject, Dr. V. Srinivasan. Give me two days.

(Two days later)

DS: The social compatibility algorithm is ready; and based on your competitive choices, it suggests that your most suitable match is the second girl. Hmmm … she is a teacher, but you didn't tell me what she teaches?

P: Well, she teaches statistics in a college.

DS: Statistics! I knew the algorithm was right.

Claimer: Based on a true incident. Both the protagonist and the data scientist work in the analytics industry. The protagonist and the lady statistician are now seeing each other.

## On power free numbers

Entry 1. Let $f(r)$ be the general term of any divergent series of positive terms, $q_{r,k}$ be the $r^{th}$ k-power free number, and $\zeta(k)$ be the Riemann zeta function. Let $S_f(r) = \sum_{i=1}^{r} f(i)$. If $g(x)$ is Riemann integrable in $(0, \infty)$ then,

$\displaystyle{ \sum_{r=1}^{n} f(q_{r,k}) g \Big(\frac{S_f(r)}{\zeta(k)}\Big) \sim \int_{f(1)}^{\frac{S_f(n)}{\zeta(k)}} g(x)dx.}$

Corollary 1. As $k \rightarrow \infty, \zeta(k) \rightarrow 1$. Also every natural number is k-power free when $k \rightarrow \infty$. Hence the above result reduces to

$\displaystyle{ \sum_{r=1}^{n} f(r) g(S_f(r))\sim \int_{f(1)}^{S_f(n)} g(x)dx}$.

Entry 2. Further let $q'_{k,n}$ be the $n^{th}$ k-power containing number and $f$ be any function Riemann integrable in $(1,\infty)$; then,

$\displaystyle{ \sum_{k=2}^{\infty}\frac{1}{k} \Big\{\frac{f(q'_{k,1}) + f(q'_{k,2}) + f(q'_{k,3}) +\ldots}{f(q_{k,1}) + f(q_{k,2}) + f(q_{k,3}) +\ldots}\Big\}= 1 - \gamma}$.

## General theory of regression

The first documented work on regression was published in the year 1898 by Schuster, and since then several regression models have been proposed. Regression is an active area of research because of the widespread use of regression analysis in scientific, statistical, industrial and commercial applications. Ideally we would want a regression method which gives a perfect fit between the regression curve and the actual values of the data points. However, the existing regression methods suffer from two major drawbacks:

(i) A regression method may be suitable for one type of data and unsuitable for another. For example, ordinary least squares is suitable for linear data, but data in the real world are not necessarily linear, so linear regression would not be a good choice if the data is not roughly linear.

(ii) Unless there is a perfect match between the actual values of the data and the values given by the regression model, there will always be an error.

We present a new method of regression which works for all types of data: linear, polynomial, logarithmic, or erratic random data that shows no particular trend. Our method is an iterative process based on the application of sinusoidal series analysis in nonlinear least squares. Each iteration of this method reduces the sum of the squares of the residuals, and therefore, by successive iteration of this method, we show that every finite set of co-planar points can be expanded as a sinusoidal series in infinitely many ways. In other words, given a set of co-planar points, we can fit infinitely many curves that pass through all these points. By setting a convergence criterion in terms of an acceptable error, we can stop the iteration after a finite number of steps. Thus in the limiting case, we obtain a function that gives a perfect fit for the data points. The regression method is published in ArXiv: Click here

## On the half line: K. Ramachandra

Kanakanahalli Ramachandra (1933-2011) was perhaps the real successor of Srinivasa Ramanujan in contemporary Indian mathematics. There would be no exaggeration in saying that without the efforts of Ramachandra, analytic number theory could have become extinct in India back in the mid-1970s. I was fortunate to get the opportunity to learn mathematics from the master himself. 'On the half line: K. Ramachandra' is a short biography of the life and works of K. Ramachandra.
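The point-allocation idea above is easy to make concrete. Below is a deliberately simplified sketch of weighted-utility scoring; real conjoint analysis instead estimates part-worth utilities from a respondent's rankings of attribute combinations, and all names and numbers here are invented for illustration:

```julia
# Toy weighted-utility scoring (not full conjoint analysis).
# 20 points allocated across attributes under scarcity (assumed preferences).
weights = (beauty = 8.0, education = 7.0, humour = 5.0)

# Each candidate rated on a 0-1 scale per attribute (invented data).
candidates = [
    (name = "Girl 1", beauty = 0.9, education = 0.5, humour = 0.6),
    (name = "Girl 2", beauty = 0.7, education = 0.9, humour = 0.8),
]

score(c) = weights.beauty * c.beauty +
           weights.education * c.education +
           weights.humour * c.humour

best = argmax(score, candidates)   # highest weighted utility wins
println(best.name, " with score ", score(best))
```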
The complete article in PDF format is published in the Vol 21, September 2011 issue of the Mathematics Newsletter, Ramanujan Mathematical Society. Click here to read the full article: MNL-Sep11-CRC

Some photographs of K. Ramachandra

## A conjecture on consecutive primes – II

Continuing with the previous post on this topic, a stronger form of the conjecture on the lower bound in Corollary 1 is as follows.

Conjecture. Let $p_n$ be the n-th prime. Then for $n \ge 32$,

$\displaystyle{p_n^{\frac{1}{n}} > \Big(1+\frac{1}{n^2}\Big) p_{n+1}^{\frac{1}{n+1}} }$.

The above conjecture implies that for all sufficiently large $n$, $p_{n+1} - p_n < (\ln p_n - 1)(\ln p_n - \ln\ln n)$. Prof. Marek Wolf, Institute of Theoretical Physics, Wroclaw, Poland has verified the above conjecture for primes up to $2^{44}$.

## Some series on Fibonacci and Lucas numbers

In this post, we shall see several curious summation formulas of the Fibonacci numbers $F_n$ and the related Lucas numbers $L_n$ that involve the Bell polynomials $B_n(x)$ and the incomplete gamma function $\Gamma(a,x)$. The Fibonacci numbers are defined as $F_0=0, F_1=1, F_{n+2}=F_{n+1} + F_n, n \ge 0$ and the Lucas numbers are defined as $L_0=2, L_1=1, L_{n+2}=L_{n+1} + L_n, n \ge 0$.

Bell polynomials

The Bell polynomials $B_n(x)$ are defined by the exponential generating function

$\displaystyle{\sum_{n=0}^{\infty}\frac{B_n(x)t^n}{n!}=e^{(e^t-1)x}}$.

The first few Bell polynomials are

$B_0(x) = 1$
$B_1(x) = x$
$B_2(x) = x^2 + x$
$B_3(x) = x^3 + 3x^2 + x$
$B_4(x) = x^4 + 6 x^3 + 7x^2 + x$
$B_5(x) = x^5 +10x^4 + 25 x^3 + 15x^2 + x$.

Incomplete gamma function

The incomplete gamma function $\Gamma(a,x)$ is defined as

$\displaystyle{\Gamma(a,x) = \int_{x}^{\infty}t^{a-1}e^{-t}dt}$.

Entries 1–4 and 8–9 below involve the Bell polynomials or the incomplete gamma function. As usual, $\displaystyle{\phi=\frac{1+\sqrt5}{2}}$ denotes the golden ratio. We have the following results:

Entry 1. $\displaystyle{\sum_{r=1}^{\infty}\frac{F_r r^n}{r!} = \frac{e^{-1/\phi}}{\sqrt5}\{e^{\sqrt5} B_n(\phi)-B_n(-1/\phi)\}}$.

Entry 2. $\displaystyle{\sum_{r=1}^{\infty}\frac{L_r r^n}{r!} = e^{-1/\phi}\{e^{\sqrt5} B_n(\phi)+B_n(-1/\phi)\}}$.

Entry 3. $\displaystyle{\sum_{k=0}^{n-1}\frac{F_{k+m}x^k}{k!} = \frac{(-1)^me^{-x/\phi}}{\phi^m\sqrt5(n-1)!}\{(-\phi^2)^m e^{\sqrt5x}\Gamma(n,\phi x) - \Gamma(n,-x/\phi)\}}$

Entry 4. $\displaystyle{\sum_{k=0}^{n-1}\frac{F_{k+m}(-x)^k}{k!} = \frac{(-1)^m e^{-x\phi}}{\phi^m\sqrt5(n-1)!}\{(-\phi^2)^m \Gamma(n,-\phi x) - e^{\sqrt5x}\Gamma(n,x/\phi)\}}$

Entry 5. $\displaystyle{\sum_{r=0}^{\infty}\frac{F_{r+m} x^r}{r!} = \frac{e^{-x/\phi}}{\sqrt5}\{\phi^m e^{\sqrt5x}-(-\phi)^{-m}\}}$

Entry 6. $\displaystyle{\sum_{r=0}^{\infty}\frac{F_{r+m} (-x)^r}{r!} = \frac{e^{-x\phi}}{\sqrt5}\{\phi^m -e^{\sqrt5x}(-\phi)^{-m}\}}$

Entry 7. $\displaystyle{\sum_{r=0}^{\infty}\frac{F_r x^r}{r!} = -e^x \sum_{r=1}^{\infty}\frac{F_r (-x)^r}{r!}}$

Entry 8. $\displaystyle{\sum_{k=0}^{n-1}\frac{L_{k+m}x^k}{k!} = \frac{(-1)^m e^{-x/\phi}}{\phi^m (n-1)!}\{(-\phi^2)^m e^{\sqrt5x}\Gamma(n,\phi x) + \Gamma(n,-x/\phi)\}}$

Entry 9. $\displaystyle{\sum_{k=0}^{n-1}\frac{L_{k+m}(-x)^k}{k!} = \frac{(-1)^m e^{-x\phi}}{\phi^m (n-1)!}\{(-\phi^2)^m \Gamma(n,-\phi x) + e^{\sqrt5x}\Gamma(n,x\phi)\}}$

Entry 10. $\displaystyle{\sum_{r=0}^{\infty}\frac{L_{r+m} x^r}{r!} = e^{-x/\phi}\{\phi^m e^{\sqrt5x}+(-\phi)^{-m}\}}$

Entry 11. $\displaystyle{\sum_{r=0}^{\infty}\frac{L_{r+m} (-x)^r}{r!} = e^{-x\phi}\{\phi^m + e^{\sqrt5x}(-\phi)^{-m}\}}$

Entry 12.
$\displaystyle{\sum_{r=0}^{\infty}\frac{L_r x^r}{r!} = e^x \sum_{r=1}^{\infty}\frac{L_r (-x)^r}{r!}}$
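These identities are easy to sanity-check numerically. The snippet below verifies Entry 5 and Entry 10 for one assumed choice of $x$ and $m$, truncating the series at 20 terms (ample for $x < 1$):

```julia
# Numerical check of Entry 5 (Fibonacci) and Entry 10 (Lucas).
φ = (1 + sqrt(5)) / 2
fib(n) = round(Int, φ^n / sqrt(5))        # Binet's formula, exact for n ≥ 0
lucas(n) = round(Int, φ^n + (-φ)^(-n))    # L_n = φ^n + (-φ)^(-n)

x, m = 0.7, 3                             # assumed test values
lhsF = sum(fib(r + m) * x^r / factorial(r) for r in 0:20)
rhsF = exp(-x/φ) / sqrt(5) * (φ^m * exp(sqrt(5) * x) - (-φ)^(-m))

lhsL = sum(lucas(r + m) * x^r / factorial(r) for r in 0:20)
rhsL = exp(-x/φ) * (φ^m * exp(sqrt(5) * x) + (-φ)^(-m))

lhsF ≈ rhsF && lhsL ≈ rhsL                # true
```

The check works because both entries follow from Binet's formula together with $\phi + 1/\phi = \sqrt5$.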
# NetApp Performance Monitoring

The NetApp sysstat command reports filer performance statistics such as CPU utilization, the amount of disk traffic, and cache utilization. When run without options, sysstat prints a new line of basic information every 15 seconds. You have to use control-C (^c) to stop it, or set the iteration count (-c count) so that sysstat stops on its own. For more detailed information, use the -u option. For information specific to one particular protocol, you can use the other options.

Synopsis:

sysstat [ interval ]
sysstat [ -c count ] [ -s ] [ -u | -x | -m | -f | -i | -b ] [ interval ]

• -c count Terminate the output after count number of iterations. The count is a positive, nonzero integer; values larger than LONG_MAX will be truncated to LONG_MAX.
• -s Display a summary of the output columns upon termination; descriptive columns such as 'CP ty' will not have summaries printed. Note that, with the exception of 'Cache hit', the 'Avg' summary for percentage values is an average of percentages, not a true mean of the underlying data. The 'Avg' is only intended as a gross indicator of performance. For more detailed information use tools such as nfsstat, netstat, or statit.
• -f For the default format display FCP statistics.
• -i For the default format display iSCSI statistics.
• -b Display the SAN extended statistics instead of the default display.
• -u Display the extended utilization statistics instead of the default display.
• -x Display the extended output format instead of the default display. This includes all available output fields. Be aware that this produces output that is longer than 80 columns and is generally intended for "offline" types of analysis and not for "realtime" viewing.
• -m Display multi-processor CPU utilization statistics. In addition to the percentage of the time that one or more CPUs were busy (ANY), the average (AVG) is displayed, as well as the individual utilization of each processor.
• interval A positive, non-zero integer that represents the reporting interval in seconds. If not provided, the default is 15 seconds.

Here are some explanations of the columns of the NetApp sysstat command.

Cache age : The age, in minutes or in seconds (indicated by an appended 's'), of the oldest read-only blocks in the buffer cache. Data in this column indicates how fast read operations are cycling through system memory; when the filer is reading very large files, the buffer cache age will be very low. If reads are random, the cache age will also be low. If you have a performance problem where read performance is poor, this number may indicate that you need a larger-memory system, or that you should analyze the application to reduce the randomness of the workload.

Cache hit : The WAFL cache hit rate percentage, i.e. the percentage of times WAFL tried to read a data block from disk and found the data already cached in memory. A dash in this column indicates that WAFL did not attempt to load any blocks during the measurement interval.

CP Ty : Consistency Point (CP) type is the reason that a CP started in that interval. The CP types are:

• - No CP started during sampling interval
• number Number of CPs started during sampling interval, if greater than one
• B Back to back CPs (CP generated CP)
• b Deferred back to back CPs (CP generated CP)
• F CP caused by full NVLog
• H A type H CP is a CP from high watermark in modified buffers.
If a CP is not in progress, and the number of buffers holding data that has been modified but not yet written to disk exceeds a threshold, then a CP from high watermark is triggered.

• L A type L CP is a CP from low watermark in available buffers. If a CP is not in progress, and the number of available buffers goes below a threshold, then a CP from low watermark is triggered.
• S CP caused by snapshot operation
• T CP caused by timer
• U CP caused by flush
• Z CP caused by internal sync
• V CP caused by low virtual buffers
• M CP caused by low mbufs
• D CP caused by low datavecs
• : continuation of CP from previous interval
• # continuation of CP from previous interval, and the NVLog for the next CP is now full, so that the next CP will be of type B.

The type character is followed by a second character which indicates the phase of the CP at the end of the sampling interval. If the CP completed during the sampling interval, this second character will be blank. The phases are:

• 0 Initializing
• n Processing normal files
• s Processing special files
• q Processing quota files
• f Flushing modified data to disk
• v Flushing modified superblock to disk

CP util : The Consistency Point (CP) utilization, the % of time spent in a CP. 100% time in CP is a good thing: it means that all of the time dedicated to writing data was actually used. 75% means that only 75% of the time allocated to writing data was utilized; the remaining 25% of that time was wasted. A good CP percentage is at or near 100%.

Examples:

sysstat
Display the default output every 15 seconds; requires control-C to terminate.

sysstat 1
Display the default output every second; requires control-C to terminate.

sysstat -s 1
Display the default output every second; upon control-C termination, print out the summary statistics.

sysstat -c 10
Display the default output every 15 seconds, stopping after the 10th iteration.

sysstat -c 10 -s -u 2
sysstat -u -c 10 -s 2
Display the utilization output format every 2 seconds, stopping after the 10th iteration; upon completion, print out the summary statistics.
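One further illustrative invocation, composed only from the flags documented above:

sysstat -m -c 20 5
Display the multi-processor CPU utilization statistics every 5 seconds, stopping after the 20th iteration.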
## Finite Differences

FiniteDifferences.FiniteDifferenceMethod (Type)

FiniteDifferenceMethod(
    grid::AbstractVector{Int},
    q::Int;
    condition::Real=DEFAULT_CONDITION,
    factor::Real=DEFAULT_FACTOR,
    max_range::Real=Inf
)

Construct a finite difference method.

Arguments
• grid::AbstractVector{Int}: Grid points at which the function will be evaluated, as multiples of the step size.
• q::Int: Order of the derivative to estimate.

Keywords
• condition::Real: Condition number. See DEFAULT_CONDITION.
• factor::Real: Factor number. See DEFAULT_FACTOR.
• max_range::Real=Inf: Maximum distance that a function is evaluated from the input at which the derivative is estimated.

Returns
• FiniteDifferenceMethod: Specified finite difference method.

source

FiniteDifferences.estimate_step (Function)

function estimate_step(
    m::FiniteDifferenceMethod,
    f::Function,
    x::T
) where T<:AbstractFloat

Estimate the step size for a finite difference method m. Also estimates the error of the estimate of the derivative.

Arguments
• m::FiniteDifferenceMethod: Finite difference method to estimate the step size for.
• f::Function: Function to evaluate the derivative of.
• x::T: Point to estimate the derivative at.

Returns
• Tuple{<:AbstractFloat, <:AbstractFloat}: Estimated step size and an estimate of the error of the finite difference estimate. The error will be NaN if the method failed to estimate the error.

source

FiniteDifferences.central_fdm (Function)

central_fdm(
    p::Int,
    q::Int;
    condition::Real=DEFAULT_CONDITION,
    factor::Real=DEFAULT_FACTOR,
    max_range::Real=Inf,
    geom::Bool=false
)

Construct a finite difference method at a central grid of p points.

Arguments
• p::Int: Number of grid points.
• q::Int: Order of the derivative to estimate.

Keywords
• adapt::Int=1: Use another finite difference method to estimate the magnitude of the pth order derivative, which is important for the step size computation. Recurse this procedure adapt times.
• condition::Real: Condition number. See DEFAULT_CONDITION.
• factor::Real: Factor number. See DEFAULT_FACTOR.
• max_range::Real=Inf: Maximum distance that a function is evaluated from the input at which the derivative is estimated.
• geom::Bool: Use geometrically spaced points instead of linearly spaced points.

Returns
• FiniteDifferenceMethod: The specified finite difference method.

source

FiniteDifferences.forward_fdm (Function)

forward_fdm(
    p::Int,
    q::Int;
    condition::Real=DEFAULT_CONDITION,
    factor::Real=DEFAULT_FACTOR,
    max_range::Real=Inf,
    geom::Bool=false
)

Construct a finite difference method at a forward grid of p points.

Arguments
• p::Int: Number of grid points.
• q::Int: Order of the derivative to estimate.

Keywords
• adapt::Int=1: Use another finite difference method to estimate the magnitude of the pth order derivative, which is important for the step size computation. Recurse this procedure adapt times.
• condition::Real: Condition number. See DEFAULT_CONDITION.
• factor::Real: Factor number. See DEFAULT_FACTOR.
• max_range::Real=Inf: Maximum distance that a function is evaluated from the input at which the derivative is estimated.
• geom::Bool: Use geometrically spaced points instead of linearly spaced points.

Returns
• FiniteDifferenceMethod: The specified finite difference method.

source

FiniteDifferences.backward_fdm (Function)

backward_fdm(
    p::Int,
    q::Int;
    condition::Real=DEFAULT_CONDITION,
    factor::Real=DEFAULT_FACTOR,
    max_range::Real=Inf,
    geom::Bool=false
)

Construct a finite difference method at a backward grid of p points.

Arguments
• p::Int: Number of grid points.
• q::Int: Order of the derivative to estimate.

Keywords
• adapt::Int=1: Use another finite difference method to estimate the magnitude of the pth order derivative, which is important for the step size computation. Recurse this procedure adapt times.
• condition::Real: Condition number. See DEFAULT_CONDITION.
• factor::Real: Factor number. See DEFAULT_FACTOR.
• max_range::Real=Inf: Maximum distance that a function is evaluated from the input at which the derivative is estimated.
• geom::Bool: Use geometrically spaced points instead of linearly spaced points.

Returns
• FiniteDifferenceMethod: The specified finite difference method.

source

FiniteDifferences.extrapolate_fdm (Function)

extrapolate_fdm(
    m::FiniteDifferenceMethod,
    f::Function,
    x::Real,
    initial_step::Real=10,
    power::Int=1,
    breaktol::Real=Inf,
    kw_args...
) where T<:AbstractFloat

Use Richardson extrapolation to extrapolate a finite difference method. Takes in further keyword arguments for Richardson.extrapolate. This method automatically sets power = 2 if m is symmetric, and power = 1 otherwise. Moreover, it defaults breaktol = Inf.

Arguments
• m::FiniteDifferenceMethod: Finite difference method to extrapolate.
• f::Function: Function to evaluate the derivative of.
• x::Real: Point to estimate the derivative at.
• initial_step::Real=10: Initial step size.

Returns
• Tuple{<:AbstractFloat, <:AbstractFloat}: Estimate of the derivative and error.

source

FiniteDifferences.assert_approx_equal (Function)

assert_approx_equal(x, y, ε_abs, ε_rel[, desc])

Assert that x is approximately equal to y. Let eps_z = eps_abs / eps_rel. Call x and y small if abs(x) < eps_z and abs(y) < eps_z, and call x and y large otherwise. If this function returns true, then it is guaranteed that abs(x - y) < 2 eps_rel max(abs(x), abs(y)) if x and y are large, and abs(x - y) < 2 eps_abs if x and y are small.

Arguments
• x: First object to compare.
• y: Second object to compare.
• ε_abs: Absolute tolerance.
• ε_rel: Relative tolerance.
• desc: Description of the comparison. Omit or set to false to have no description.

source

FiniteDifferences.UnadaptedFiniteDifferenceMethod (Type)

UnadaptedFiniteDifferenceMethod{P,Q} <: FiniteDifferenceMethod{P,Q}

A finite difference method that estimates a Qth order derivative from P function evaluations. This method does not dynamically adapt its step size.

Fields
• grid::SVector{P,Int}: Multiples of the step size that the function will be evaluated at.
• coefs::SVector{P,Float64}: Coefficients corresponding to the grid functions that the function evaluations will be weighted by.
• coefs_neighbourhood::NTuple{3,SVector{P,Float64}}: Sets of coefficients used for estimating the magnitude of the derivative in a neighbourhood.
• bound_estimator::FiniteDifferenceMethod: A finite difference method that is tuned to perform adaptation for this finite difference method.
• condition::Float64: Condition number. See DEFAULT_CONDITION.
• factor::Float64: Factor number. See DEFAULT_FACTOR.
• max_range::Float64: Maximum distance that a function is evaluated from the input at which the derivative is estimated.
• ∇f_magnitude_mult::Float64: Internally computed quantity.
• f_error_mult::Float64: Internally computed quantity.

source

FiniteDifferences.AdaptedFiniteDifferenceMethod (Type)

AdaptedFiniteDifferenceMethod{
    P, Q, E<:FiniteDifferenceMethod
} <: FiniteDifferenceMethod{P,Q}

A finite difference method that estimates a Qth order derivative from P function evaluations. This method dynamically adapts its step size. The adaptation works by explicitly estimating the truncation error and round-off error, and choosing the step size to optimally balance those. The truncation error is given by the magnitude of the Pth order derivative, which will be estimated with another finite difference method (bound_estimator).
This finite difference method, bound_estimator, will be tasked with estimating the Pth order derivative in a neighbourhood, not just at some x. To do this, it will use a careful reweighting of the function evaluations to estimate the Pth order derivative at, in the case of a central method, x - h, x, and x + h, where h is the step size. The coefficients for this estimate, the neighbourhood estimate, are given by the three sets of coefficients in bound_estimator.coefs_neighbourhood. The round-off error is estimated by the round-off error of the function evaluations performed by bound_estimator. The truncation error is amplified by condition, and the round-off error is amplified by factor. The quantities ∇f_magnitude_mult and f_error_mult are precomputed quantities that facilitate the step size adaptation procedure.

Fields
• grid::SVector{P,Int}: Multiples of the step size that the function will be evaluated at.
• coefs::SVector{P,Float64}: Coefficients corresponding to the grid functions that the function evaluations will be weighted by.
• coefs_neighbourhood::NTuple{3,SVector{P,Float64}}: Sets of coefficients used for estimating the magnitude of the derivative in a neighbourhood.
• condition::Float64: Condition number. See DEFAULT_CONDITION.
• factor::Float64: Factor number. See DEFAULT_FACTOR.
• max_range::Float64: Maximum distance that a function is evaluated from the input at which the derivative is estimated.
• ∇f_magnitude_mult::Float64: Internally computed quantity.
• f_error_mult::Float64: Internally computed quantity.
• bound_estimator::FiniteDifferenceMethod: A finite difference method that is tuned to perform adaptation for this finite difference method.

source

FiniteDifferences.jacobian (Function)

jacobian(fdm, f, x...)

Approximate the Jacobian of f at x using fdm. Results will be returned as a Matrix{<:Real} of size (length(y_vec), length(x_vec)), where x_vec is the flattened version of x, and y_vec the flattened version of f(x...). Flattening is performed by to_vec.

FiniteDifferences.jvp (Function)

jvp(fdm, f, xẋs::Tuple{Any, Any}...)

Compute a Jacobian-vector product with any types of arguments for which to_vec is defined. Each 2-Tuple in xẋs contains the value x and its tangent ẋ.
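A short usage sketch of the API documented above; the function f and all numerical values are my own illustrative choices, and since jacobian returns one matrix per positional argument, we unpack the first:

```julia
using FiniteDifferences

fdm = central_fdm(5, 1)        # 5-point central method for first derivatives
fdm(sin, 1.0)                  # ≈ cos(1.0) = 0.5403...

# Jacobian of an illustrative vector-valued function.
f(x) = [x[1]^2 + x[2], 3 * x[2]]
J = first(jacobian(fdm, f, [1.0, 2.0]))   # ≈ [2.0 1.0; 0.0 3.0]

# Jacobian-vector product: directional derivative of f at x along ẋ.
ẋ = [1.0, 0.0]
jvp(fdm, f, ([1.0, 2.0], ẋ))              # ≈ J * ẋ = [2.0, 0.0]
```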
Elementary Technical Mathematics

$-1$

We are given: $x=-1$, $y=2$, $z=-3$. We take into account the order of operations: Parentheses, Exponents, Multiplication/Division, Addition/Subtraction. Thus, we have: $$z^2-5yx^2=(-3)^2-5(2)(-1)^2=9-10=-1 .$$
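The same evaluation can be checked mechanically (a throwaway snippet, not part of the textbook solution):

```julia
x, y, z = -1, 2, -3
z^2 - 5 * y * x^2    # evaluates to -1
```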
Research

# Limit theorems for delayed sums of random sequence

Ding Fang-qing1 and Wang Zhong-zhi2*

Author Affiliations
1 Department of Mathematics & Physics, HeFei University, HeFei 230601, P. R. China
2 Faculty of Mathematics & Physics, AnHui University of Technology, Ma'anshan 243002, P. R. China

Journal of Inequalities and Applications 2012, 2012:124 doi:10.1186/1029-242X-2012-124

Received: 29 October 2011
Accepted: 31 May 2012
Published: 31 May 2012

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

### Abstract

For a sequence of arbitrarily dependent random variables (X_n)_{n∈N} and Borel sets (B_n)_{n∈N} on the real line, the strong limit theorems, represented by inequalities, i.e. the strong deviation theorems of the delayed average, are investigated by using the notion of the asymptotic delayed log-likelihood ratio. The results obtained extend the methods proposed by Liu.

Mathematics Subject Classification 2000: Primary, 60F15.

##### Keywords:
strong deviation theorem; likelihood ratio; delayed sums

### 1. Introduction

Let (a_n)_{n∈N} be a sequence of real numbers and let (k_n)_{n∈N} be a sequence of positive integers. The numbers (a_{n+1} + a_{n+2} + ··· + a_{n+k_n})/k_n are called the (forward) delayed first arithmetic means (see [1]). In [2], using the limiting behavior of delayed averages, Chow found necessary and sufficient conditions for the Borel summability of i.i.d. random variables and also obtained very simple proofs of a number of well-known results such as the Hsu-Robbins-Spitzer-Katz theorem. In [3], Lai studied the analogues of the law of the iterated logarithm for delayed sums of independent random variables. Recently, Chen [4] has presented an accurate description of the limiting behavior of delayed sums under a non-identical distribution setup, and has deduced Chover-type laws of the iterated logarithm for them. Our aim in this article is to establish strong deviation theorems (limit theorems expressed by inequalities; see [5]) of the delayed average for dependent absolutely continuous random variables. By using the notion of the asymptotic delayed log-likelihood ratio, we extend the analytic technique proposed by Liu [5] to the case of delayed sums. The crucial part of the proof is to construct a delayed likelihood ratio depending on a parameter and then to apply the Borel-Cantelli lemma.

Throughout, let (X_n)_{n∈N} be a sequence of absolutely continuous random variables on a fixed probability space with joint density function g_{1,n}(x_1,..., x_n), n ∈ N, and let f_j(x), j = 1, 2,... be the marginal density function of the random variable X_j. Let (k_n)_{n∈N} be a subsequence of positive integers such that, for every ε > 0, .

Definition 1. The delayed likelihood ratio is defined by (1.1). The quantity (1.2) is called the asymptotic delayed log-likelihood ratio, where denotes the joint density function of the random vector , and ω is a sample point (with log 0 = -∞). It will be shown in Lemma 1 that a.e. in any case.

Remark 1. It will be seen below that has properties analogous to those of the likelihood ratio in [5]. Although is not a proper metric among probability measures, we nevertheless consider it a measure of "discrimination" between dependence (the joint distribution) and independence (the product of the marginals). Obviously, , a.e., n ∈ N, if (X_n)_{n∈N} is independent.
In view of the above discussion of the asymptotic logarithmic delayed likelihood ratio, it is natural for us to think of it as a measure of how far (the random deviation) (X_n)_{n∈N} is from being independent, and of how dependent the variables are. The closer it approaches 0, the smaller the deviation is.

Lemma 1. Let be defined as above; then (1.3) holds.

Proof. Let Since From Markov's inequality, for every ε > 0, we have Hence By the Borel-Cantelli lemma, we have for any ε > 0, and (1.3) follows immediately. □

### 2. Main results and proofs

Theorem 1. Let (X_n)_{n∈N}, , be defined as above, and let (B_n)_{n∈N} be a sequence of Borel sets of the real line. Let , and assume (2.1); then (2.2), where denotes the indicator function of B_n.

Proof. Let s > 0 be a constant, and let (2.3). It is not difficult to see that , j = 1, 2,... Let (2.4). From Lemma 1, there exists , P(A(s)) = 1, such that (2.5). Since , by (2.3) we have (2.6). It follows from (1.1), (2.4) and (2.6) that (2.7). (2.5) and (2.7) yield (2.8). Let s > 1; dividing the two sides of (2.8) by log s, we have (2.9). By (1.2), (2.9) and the property lim sup_n(a_n - b_n) ≤ d ⇒ lim sup_n(a_n - c_n) ≤ lim sup_n(b_n - c_n) + d, one gets (2.10). By (2.10), the property of the limit superior above and the inequality 0 < log(1+x) ≤ x (x > 0), we obtain (2.11). (2.11) and the inequality imply (2.12). Let D be a countable set of real numbers dense in the interval (1, +∞), and let A* = ∩_{s∈D} A(s), g(s, x) = cs + sx/(s - 1); then we have by (2.12) (2.13). Let c > 0; it is easy to see that if , a.e., then, for fixed ω, as a function of s attains its smallest value on the interval (1, +∞), and g(s, 0) is increasing on the interval (1, +∞) with lim_{s→1+} g(s, 0) = 0. For each ω ∈ A* ∩ A(1), if , take κ_n(ω) ∈ D, n = 1, 2,..., such that . We have, by the continuity of with respect to s, (2.14). By (2.13), we obtain (2.15). (2.14) and (2.15) imply (2.16). If , (2.16) holds trivially. Since P(A* ∩ A(1)) = 1, (2.2) holds by (2.16) when c > 0. When c = 0, we have, by letting s = e in (2.11), (2.17); since P(A(e)) = 1, (2.2) also holds by (2.17) when c = 0. □

Theorem 2. Let (X_n)_{n∈N}, , , (B_n)_{n∈N} be defined as in Theorem 1 and assume (2.18); then, if a.e., (2.19) holds.

Proof. Let 0 < s < 1; dividing the two sides of (2.8) by log s, we have (2.20). By (1.2), (2.20) and the property lim inf_n(a_n - b_n) ≥ d ⇒ lim inf_n(a_n - c_n) ≥ lim inf_n(b_n - c_n) + d, one gets (2.21). By (2.21), the property of the limit inferior above and the inequality log(1+x) ≤ x (-1 < x ≤ 0), we obtain (2.22). (2.22) and the inequality log s < s - 1 (0 < s < 1) imply (2.23). Let D' be a countable set of real numbers dense in the interval (0, 1), and let , h(s, x) = c's + x/(s - 1); then we have by (2.23) (2.24). Let c' > 0; it is easy to see that if , a.e., then, for fixed ω, as a function of s attains its maximum value on the interval (0, 1), and h(s, 0) is increasing on the interval (0, 1) with lim_{s→1-} h(s, 0) = c'. For each ω ∈ A* ∩ A(1), if , take l_n(ω) ∈ D', n = 1, 2,..., such that . We have, by the continuity of with respect to s, (2.25). By (2.24), we obtain (2.26). (2.25) and (2.26) imply (2.27). If , (2.27) holds trivially. Since P(A* ∩ A(1)) = 1, (2.19) holds by (2.27) when c' > 0. (2.19) also holds trivially when c' = 0. □

Remark 2. In case , a.e., we cannot get a better lower bound of . This motivates the following problem: under the conditions of Theorem 2, how does one get a better lower bound of in the case , a.e.?

Definition 2.
(Generalized empirical distribution function.) Let (X_n)_{n∈N} be identically distributed with common distribution function F. For each m, n ∈ N, let F_{m,n}(x) = the observed frequency of the values that are ≤ x from time m to m + n - 1. Then F_{1,n} is the usual empirical distribution function, hence the name given above.

In particular, letting B = (-∞, x], x ∈ R, in Theorems 1 and 2, we can get a strong limit theorem for the generalized empirical distribution function.

Corollary 1. Let (X_n)_{n∈N} be i.i.d. random variables with common distribution function F, and let B_n = (-∞, x], n = 1, 2,...; then

Corollary 2. Let (X_n)_{n∈N} be independent random variables and (B_n)_{n∈N} be as in Theorem 1; then (2.28) holds.

Proof. Note that P(X_j ∈ B_j) ≤ 1, j = 1, 2,..., and in this case 0 ≤ c, c' ≤ 1 a.e.; we have by (2.11) (2.29). By (2.29), the property of the limit superior above and the inequality 0 ≤ log(1+x) ≤ x (x > 0), we obtain (2.30). Analogously as in the proof of Theorem 1, we obtain (2.31). Similarly, we have , a.e.; hence (2.28) follows immediately. □

Remark 3. Letting B_n = B, Corollary 2 implies a result which gives the strong law of large numbers for the delayed arithmetic means.

### Competing interests

The authors declare that they have no competing interests.

### Authors' contributions

WZ and DF carried out the design of the study and performed the analysis. DF drafted the manuscript. All authors read and approved the final manuscript.

### Acknowledgements

This work is supported by the National Natural Science Foundation of China (Grant No. 11071104) and the AnHui University of Technology research grant D2011025. The authors would like to thank the two referees for their insightful comments, which resulted in improving Theorems 1, 2 and Corollary 2 significantly.

### References

1. Zygmund, A: Trigonometric Series 1. Cambridge University Press, Cambridge (1959)
2. Chow, YS: Delayed sums and Borel summability for independent, identically distributed random variables. Bull Inst Math Academia Sinica. 1, 207–220 (1972)
3. Lai, TL: Limit theorems for delayed sums. Ann Probab. 2(3), 432–440 (1974). Publisher Full Text
4. Chen, PY: Limiting behavior of delayed sums under a non-identically distribution setup. Ann Braz Acad Sci. 80(4), 617–625 (2008)
5. Liu, W: Strong deviation theorems and analytical method. Academic Press, Beijing (2003) (in Chinese)
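As a quick empirical illustration of Remark 3 (my own simulation, with an assumed delay sequence k_n = n and B = (-∞, 0] for i.i.d. standard normal variables), the delayed averages do settle at P(X ≤ 0) = 1/2:

```julia
using Random
Random.seed!(1)

# Delayed average of indicator variables for i.i.d. N(0,1) samples.
# Assumed choices: B = (-∞, 0] and delay sequence k_n = n.
X = randn(2_000_000)
delayed_mean(n) = count(x -> x <= 0, @view X[n+1:2n]) / n

[delayed_mean(n) for n in (10, 1_000, 1_000_000)]
# → values approaching P(X ≤ 0) = 0.5 as n grows
```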
# Home

## We are the Theory group of LPS!

If you are interested in doing an internship with us, or want to visit and discuss, please contact any of our members. We are a very collaborative group which covers a wide range of topics, from materials science (functional materials, topological and relativistic matter, magnetism and superconductivity...) and quantum systems (in low dimensions, under magnetic fields, in and out of equilibrium...) to soft and bio-matter (colloids, liquid crystals...). Explore and enjoy our site!

## Recent Publications

G. Abramovici, "Incompatible Coulomb hamiltonian extensions", vol. 10, no. 1, p. 7280, 2020. Website | Abstract
We revisit the resolution of the one-dimensional Schrödinger hamiltonian with a Coulomb λ/|x| potential. We examine among its self-adjoint extensions those which are compatible with physical conservation laws. In the one-dimensional semi-infinite case, we show that they are classified on a U(1) circle in the attractive case and on $(\mathbb{R}, +\infty)$ in the repulsive one. In the one-dimensional infinite case, we find a specific and original classification by studying the continuity of eigenfunctions. In all cases, different extensions are incompatible one with the other. For an actual experiment with an attractive potential, the bound spectrum can be used to discriminate which extension is the correct one.

, "Non-thermal resistive switching in Mott insulator nanowires", vol. 11, no. 1, p. 2985, 2020. Website | Abstract
Resistive switching can be achieved in a Mott insulator by applying current/voltage, which triggers an insulator-metal transition (IMT). This phenomenon is key for understanding IMT physics and developing novel memory elements and brain-inspired technology. Despite this, the roles of electric field and Joule heating in the switching process remain controversial. Using nanowires of two archetypal Mott insulators, VO2 and V2O3, we unequivocally show that a purely non-thermal electrical IMT can occur in both materials. The mechanism behind this effect is identified as field-assisted carrier generation leading to a doping-driven IMT. This effect can be controlled by similar means in both VO2 and V2O3, suggesting that the proposed mechanism is generally applicable to Mott insulators. The energy consumption associated with the non-thermal IMT is extremely low, rivaling that of state-of-the-art electronics and biological neurons. These findings pave the way towards highly energy-efficient applications of Mott insulators.
# SRM 320 Week 1 DQ 2 Functions of Management

This archive file of SRM 320 Week 1 Discussion Question 2 Functions of Management shows the solutions to the following problems: Describe the four functions of management as they relate to sport. Are there special considerations that need to be applied to sports management as opposed to general management? Respond to at least two of your classmates.
# Applying for a UK EHIC

For a while, health insurance for travel wasn't much of an issue; there weren't really any places to go, with the world being somewhat on fire due to COVID. However, as things started opening up again, it suddenly became relevant to have health insurance abroad, and I was completely confused as to what to do as an EU citizen. I vaguely knew I had the right to health insurance within the EU, but had there been an agreement since Brexit? Was it still being sorted out? Was the agreement that no collaboration would happen between the UK/NHS and the EU?

The application process is fairly well put together, apart from one point: there is no indication of whether a field is mandatory or not. Since most questions are multiple-choice, or things like your legal name and date of birth, it would be reasonable to assume that all fields are mandatory. This becomes a problem at the question "What is your NHS number?". As far as I know, I don't have one. But the website only tells me what it is and where I can find it if I don't know it.

So I tried calling my GP. It turns out that NHS numbers are only a thing under the English NHS and not under the Scottish NHS (I like the NHS, I really do, but the fact that it's not the same across the UK despite the 'N' standing for "National" is very confusing…). Unfortunately my GP also didn't know what to do for the application in that case.

At this point I thought "Might as well try" and clicked Continue without filling anything in. And it just… let me through… It then asked for my National Insurance Number, which was fine, because that I could provide. But there was never an indication, graphical or textual, that providing my NHS number was not compulsory; that it was not essential for the application.

The cynic in me would guess that it's intentional, so that they get as few applications as possible. But I think it's much more likely that it was an oversight and they forgot that people outside of England could be applying (if you know a bit about UK politics, that doesn't sound too far-fetched). In any case, I was able to complete the application procedure, so I'm just happy that I could get a new EHIC. I hope this was helpful if you're in a similar situation!

After the application has been submitted, you might get an email asking you to prove your (pre-)settled status. There's also a portal for that, but double-check the email for the details on what you need to send them.

Thanks for reading! : )

##### Thomas Ekström Hansen
###### PhD student in Computer Science
My research interests include low-level programming, type systems, and formal methods.
dc.contributor.author Callingham, J. R. dc.contributor.author Gaensler, B. M. dc.contributor.author Zanardo, G. dc.contributor.author Staveley-Smith, L. dc.contributor.author Hancock, P. J. dc.contributor.author Hurley-Walker, N. dc.contributor.author Bell, M. E. dc.contributor.author Dwarakanath, K. S. dc.contributor.author Franzen, T. M. O. dc.contributor.author Hindson, L. dc.contributor.author Johnston-Hollitt, M. dc.contributor.author Kapinska, A. dc.contributor.author For, B. Q. dc.contributor.author Lenc, E. dc.contributor.author McKingley, B. dc.contributor.author Offringa, A. R. dc.contributor.author Procopio, P. dc.contributor.author Wayth, R. B. dc.contributor.author Wu, C. dc.contributor.author Zheng, Q. dc.date.accessioned 2017-06-14T16:04:13Z dc.date.available 2017-06-14T16:04:13Z dc.date.issued 2016-10-11 dc.identifier.citation Callingham , J R , Gaensler , B M , Zanardo , G , Staveley-Smith , L , Hancock , P J , Hurley-Walker , N , Bell , M E , Dwarakanath , K S , Franzen , T M O , Hindson , L , Johnston-Hollitt , M , Kapinska , A , For , B Q , Lenc , E , McKingley , B , Offringa , A R , Procopio , P , Wayth , R B , Wu , C & Zheng , Q 2016 , ' Low Radio Frequency Observations and Spectral Modelling of the Remnant of Supernova 1987A ' , Monthly Notices of the Royal Astronomical Society , vol. 462 , no. 1 , pp. 290-297 . https://doi.org/10.1093/mnras/stw1489 dc.identifier.issn 1365-2966 dc.identifier.other PURE: 10291597 dc.identifier.other PURE UUID: bbd3105c-5056-4fcc-91a0-ab96bd175e4d dc.identifier.other ArXiv: http://arxiv.org/abs/1606.05974v1 dc.identifier.other Scopus: 84988728364 dc.identifier.uri http://hdl.handle.net/2299/18335 dc.description This article has been accepted for publication in Monthly Notices of the Royal Astronomical Society. ©: 2016 The Authors. Published by Oxford University Press on behalf of the Royal Astronomical Society. All rights reserved. dc.description.abstract We present Murchison Widefield Array observations of the supernova remnant (SNR) 1987A between 72 and 230 MHz, representing the lowest frequency observations of the source to date. This large lever arm in frequency space constrains the properties of the circumstellar medium created by the progenitor of SNR 1987A when it was in its red supergiant phase. As of late-2013, the radio spectrum of SNR 1987A between 72 MHz and 8.64 GHz does not show any deviation from a non-thermal power-law with a spectral index of $-0.74 \pm 0.02$. This spectral index is consistent with that derived at higher frequencies, beneath 100 GHz, and with a shock in its adiabatic phase. A spectral turnover due to free-free absorption by the circumstellar medium has to occur below 72 MHz, which places upper limits on the optical depth of $\leq$ 0.1 at a reference frequency of 72 MHz, emission measure of $\lesssim$ 13,000 cm$^{-6}$ pc, and an electron density of $\lesssim$ 110 cm$^{-3}$. This upper limit on the electron density is consistent with the detection of prompt radio emission and models of the X-ray emission from the supernova. The electron density upper limit implies that some hydrodynamic simulations derived a red supergiant mass loss rate that is too high, or a wind velocity that is too low. The mass loss rate of $\sim 5 \times 10^{-6}$ $M_{\odot}$ yr$^{-1}$ and wind velocity of 10 km s$^{-1}$ obtained from optical observations are consistent with our upper limits, predicting a current turnover frequency due to free-free absorption between 5 and 60 MHz. 
en dc.format.extent 8 dc.language.iso eng dc.relation.ispartof Monthly Notices of the Royal Astronomical Society dc.rights Open dc.subject supernovae: individual: SN 1987A dc.subject ISM: supernova remnants dc.subject radio continuum: general dc.title Low Radio Frequency Observations and Spectral Modelling of the Remnant of Supernova 1987A en dc.contributor.institution School of Physics, Astronomy and Mathematics dc.description.status Peer reviewed dc.description.versiontype Final Published version dcterms.dateAccepted 2016-10-11 rioxxterms.version VoR rioxxterms.versionofrecord https://doi.org/10.1093/mnras/stw1489 rioxxterms.type Journal Article/Review herts.preservation.rarelyaccessed true herts.rights.accesstype Open
# 2nd PUC Basic Maths Question Bank Chapter 2 Permutations and Combinations Ex 2.2

Students can Download Basic Maths Exercise 2.2 Questions and Answers, Notes Pdf, 2nd PUC Basic Maths Question Bank with Answers, which helps you to revise the complete Karnataka State Board Syllabus and score more marks in your examinations.

## Karnataka 2nd PUC Basic Maths Question Bank Chapter 2 Permutations and Combinations Ex 2.2

Part – A

2nd PUC Basic Maths Permutations and Combinations Ex 2.2 One Mark Questions and Answers

Question 1.
Find the total number of ways in which 8 different coloured beads can be strung together to form a necklace.
(n - 1)! = (8 - 1)! = 7!

Question 2.
In how many ways can 9 flowers of different colours be strung together to form a garland?
$$\frac{(n-1) !}{2}=\frac{8 !}{2}$$

Question 3.
In how many ways can 10 people be seated around a table?
(10 - 1)! = 9!

Part – B

2nd PUC Basic Maths Permutations and Combinations Ex 2.2 Two Marks Questions and Answers

Question 1.
Find the number of ways in which 8 men can be arranged round a table so that 2 particular men are not next to each other.
The total number of ways when there is no restriction is 7!. When the 2 particular men are together, they can be taken as 1 unit; with the remaining 6 men this gives 7 objects, which can sit round a table in 6! ways, and the 2 men can be arranged among themselves in 2! ways, giving 6!·2! arrangements.
∴ The number of ways in which the 2 particular men are not together = 7! – 6!·2!

Question 2.
In how many ways can 6 gentlemen and 4 ladies be seated round a table so that no two ladies are together?
There are 6 vacant places in between the 6 gentlemen, and the 4 ladies can occupy these gaps in 6P4 ways. For each of these, the six gentlemen can be permuted in (6 – 1)! = 5! ways.
∴ The number of ways is 5! · 6P4

Question 3.
In how many ways can 10 beads of different colours be strung into a necklace if the red, green and yellow beads are always together?
The red, yellow and green beads, kept together as one unit, can be permuted among themselves in 3! ways, and this unit together with the remaining 7 beads (8 objects altogether) can be arranged round the necklace in (8 – 1)! = 7! ways.
∴ The number of ways is 7! · 3!

Question 4.
In how many ways can 6 boys and 6 girls be arranged in a circle so that no two boys are together?
The six boys can be arranged in a circle in (6 – 1)! = 5! ways, and the 6 girls can then occupy the 6 gaps between them in 6! ways.
∴ The number of ways is 6! × 5!

Question 5.
In how many ways can 7 gentlemen and 5 ladies be arranged in a circle if no two ladies are together?
The ladies have to be arranged between the gentlemen. The 7 gentlemen can be seated in 6! ways. There are 7 gaps between the men, which the 5 ladies can occupy in 7P5 ways.
∴ The number of ways is 7P5 · 6!

Question 6.
Find the number of ways in which 10 flowers can be strung into a garland if 3 particular flowers are always together.
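These circular-arrangement counts are easy to sanity-check numerically. The following minimal Python sketch (my addition, not part of the question bank) evaluates a few of the answers above:

```python
from math import factorial, perm

# Q1 (Part A): 8 beads round a necklace -> (n-1)!
print(factorial(8 - 1))               # 7! = 5040

# Q2 (Part A): garland of 9 flowers, flips equivalent -> (n-1)!/2
print(factorial(9 - 1) // 2)          # 8!/2 = 20160

# Q2 (Part B): 6 gentlemen round a table, 4 ladies in the 6 gaps
print(factorial(6 - 1) * perm(6, 4))  # 5! * 6P4

# Q5 (Part B): 7 gentlemen round a table, 5 ladies in the 7 gaps
print(factorial(7 - 1) * perm(7, 5))  # 6! * 7P5
```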
# How do you factor c^2+17cd+70d^2?

Jun 18, 2016

$\left(c + 10 d\right) \left(c + 7 d\right)$

#### Explanation:

You can transform the trinomial in this way:

${c}^{2} + 17 c d + 70 {d}^{2} = {c}^{2} + 10 c d + 7 c d + 70 {d}^{2}$

and factor in two steps:

$c \left(c + 10 d\right) + 7 d \left(c + 10 d\right)$

$\left(c + 10 d\right) \left(c + 7 d\right)$
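As a quick check, a computer algebra system reproduces the same factorisation. This sympy snippet is my addition, not part of the original answer:

```python
import sympy as sp

c, d = sp.symbols('c d')
print(sp.factor(c**2 + 17*c*d + 70*d**2))  # (c + 7*d)*(c + 10*d)
```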
# Gradient, divergence and curl - nabla crossed with |r|r • November 29th 2009, 11:49 AM chella182 Gradient, divergence and curl - nabla crossed with |r|r Again, sorry for the naff title. Find $\nabla\times(r\mathbf{r})$, where $r=|\mathbf{r}|$ and $\mathbf{r}=(x,y,z)$. I ended up with an answer of $(0,0,0)$, which, again, I doubt is right. • December 1st 2009, 09:54 PM qmech You're right. A radial field has no curl. • December 2nd 2009, 02:48 AM chella182 Cheers. Yeah, like I said in my other thread, turns out I was doing all of this shizz right when I thought I was doing it wrong :p
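The zero answer also follows symbolically: $\nabla\times(f(r)\mathbf{r}) = \nabla f \times \mathbf{r} = \frac{f'(r)}{r}\,\mathbf{r}\times\mathbf{r} = \mathbf{0}$. For anyone who wants to double-check with a computer, here is a minimal sympy sketch (my addition; the thread itself contains no code):

```python
import sympy as sp
from sympy.vector import CoordSys3D, curl

N = CoordSys3D('N')
r = sp.sqrt(N.x**2 + N.y**2 + N.z**2)  # r = |r|
F = r * (N.x*N.i + N.y*N.j + N.z*N.k)  # the field r * r_vec
print(curl(F))                         # 0: a radial field has no curl
```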
Table of contents editing

This is my first question on the site. I want to make the word "Chapter" all in capital letters and put ":" after it, in front of the chapter title. I used the report document class, and the only packages for the TOC are \usepackage[toc,page] and \usepackage[nottoc,notlof,notlot]{tocbibind}. I don't know if they are for modification or not... I am really just a user of LaTeX and this is my first attempt to write a thesis.

\documentclass[12pt,a4paper]{report}
\usepackage{amsmath,amssymb,amsthm,amsfonts,mathrsfs}
\usepackage{graphicx,epsfig,subfig}
\usepackage{geometry}
\usepackage{setspace}
\usepackage{array}
\usepackage[toc,page]{appendix}
\usepackage[labelfont=bf]{caption}
\usepackage{xpatch}
\usepackage{fmtcount}
\renewcommand{\thechapter}{\NUMBERstring{chapter}}
\renewcommand{\thesection}{\arabic{chapter}.\arabic{section}}
\renewcommand{\thefigure}{\arabic{chapter}.\arabic{figure}}
\makeatletter
\input{fc-british.def}
\xpatchcmd{\@chapter}% <cmd>
{\numberline{\thechapter}}% <search>
{}{}% <success><failure>
\makeatother
\newcolumntype{L}[1]{>{\raggedright\let\newline\\\arraybackslash\hspace{0pt}}m{#1}}
\newcolumntype{C}[1]{>{\centering\let\newline\\\arraybackslash\hspace{0pt}}m{#1}}
\newcolumntype{R}[1]{>{\raggedleft\let\newline\\\arraybackslash\hspace{0pt}}m{#1}}
\usepackage{caption}
\usepackage[pagestyles]{titlesec}
\titleformat{\chapter}[display]
{\normalfont\LARGE\bfseries\centering}
{\centering\MakeUppercase{\chaptertitlename}\ \thechapter}{20pt}{\Large}
\titlespacing*{\chapter}{0pt}{50pt}{40pt}
\geometry{verbose,a4paper,tmargin=30mm,bmargin=25mm,lmargin=30mm,rmargin=25mm}
\renewcommand{\baselinestretch}{1.65}
\usepackage{fancyhdr}
\usepackage{psfrag}
\usepackage{array}
\usepackage{booktabs}
\usepackage{float}
\usepackage{caption}
\usepackage{multirow}
\usepackage[shortlabels]{enumitem}
\usepackage[monochrome]{xcolor}
\usepackage{pdflscape}
\usepackage[toc,page]{appendix}
\usepackage{titlesec}
\usepackage{amsmath}
\usepackage[nottoc,notlof,notlot]{tocbibind}
\renewcommand\bibname{References}
\makeatletter
\newcommand*{\rom}[1]{\expandafter\@slowromancap\romannumeral #1@}
\makeatother
\setcounter{secnumdepth}{5}
\setcounter{tocdepth}{5}
\newtheorem{theorem}{Theorem}
\newtheorem{acknowledgement}{Acknowledgement}
\newtheorem{algorithm}{Algorithm}
\newtheorem{axiom}{Axiom}
\newtheorem{case}{Case}
\newtheorem{claim}{Claim}
\newtheorem{conclusion}{Conclusion}
\newtheorem{condition}{Condition}
\newtheorem{conjecture}{Conjecture}
\newtheorem{corollary}{Corollary}
\newtheorem{criterion}{Criterion}
\newtheorem{definition}{Definition}
\newtheorem{example}{Example}
\newtheorem{exercise}{Exercise}
\newtheorem{lemma}{Lemma}
\newtheorem{notation}{Notation}
\newtheorem{problem}{Problem}
\newtheorem{proposition}{Proposition}
\newtheorem{remark}{Remark}
\newtheorem{solution}{Solution}
\newtheorem{summary}{Summary}
\numberwithin{equation}{chapter}
\numberwithin{theorem}{chapter}
\fancypagestyle{plain}{%
\fancyhf{} % clear all header and footer fields
\fancyfoot[C]{\bfseries\large\thepage} % except the center
\renewcommand{\footrulewidth}{0pt}}
\begin{document}
\tableofcontents
\chapter{General Introduction}\label{chapter:Intro}
\section{Introduction}
Composite steel-concrete construction
\end{document}

• Welcome to TeX.SX! Please add some working code. Related: Change the word “Chapter” to something else – Bobyandbob Feb 14 '18 at 20:33
• We really need to know which document class you use, and also if you load any package that helps modify the appearance of the table of contents.
Some of these packages are not mutually compatible; that's why it's important which packages (if any) you are already loading. – Mico Feb 14 '18 at 21:19
• I used the report document class, and the only packages for the TOC are \usepackage[toc,page] and \usepackage[nottoc,notlof,notlot]{tocbibind}. I don't know if they are for modification or not... I am really just a user of LaTeX and this is my first attempt to write a thesis. – Ali D Hamdi Feb 14 '18 at 22:05
• The way this site works, one typically adds a Minimal Working Example (NOT the entire document) which shows the problem, or at least what you are doing now. We play with the code until it does what you want. – John Kormylo Feb 14 '18 at 22:36
• That's a pretty maximal minimal example, isn't it? – cfr Feb 16 '18 at 0:44

I started by reducing your code to a more minimal working example. Although you obviously don't want to do that in your real document, you should prune and organise your preamble. You are loading packages multiple times, sometimes with different options, and loading conflicting packages or packages which conflict with options passed to other packages.

For example, pagestyles passed to titlesec will load that package's footer/header cousin. That cousin will then compete with fancyhdr for control of that aspect of your document layout. Pick one, the other, or neither. But not both. You also load titlesec without this option, which creates potential conflicts, too, if you don't happen upon an order which doesn't trigger an error.

I then adjusted the \xpatch command to set 'Chapter' as 'CHAPTER' and to put a stop after the number word. I know you requested a colon, but I frankly think that looks weird. However, you can easily replace the . by a : if you really prefer (or are required to use) this format.

\documentclass{report}
\usepackage[toc,page]{appendix}
\usepackage{xpatch}
\usepackage[british]{fmtcount}% set dialect in a standard way the package recommends
% \usepackage[pagestyles]{titlesec}% use of pagestyles conflicts with use of fancyhdr
\usepackage{titlesec}% use of pagestyles conflicts with use of fancyhdr
\usepackage{fancyhdr}
\usepackage[nottoc,notlof,notlot]{tocbibind}
\renewcommand{\thechapter}{\NUMBERstring{chapter}}
\renewcommand{\thesection}{\arabic{chapter}.\arabic{section}}
\makeatletter
\xpatchcmd{\@chapter}% <cmd>
{\numberline{\thechapter}}% <search>
{}{}% <success><failure>
\makeatother
\titleformat{\chapter}[display]
{\normalfont\LARGE\bfseries\centering}
{\centering\MakeUppercase{\chaptertitlename}\ \thechapter}{20pt}{\Large}
\titlespacing*{\chapter}{0pt}{50pt}{40pt}
\setcounter{tocdepth}{5}
\fancypagestyle{plain}{%
\fancyhf{}% clear all header and footer fields
\fancyfoot[C]{\bfseries\large\thepage}% except the center
}
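The answer's listing breaks off at this point. If, instead of dropping the number from the ToC, you want ToC entries of the form "CHAPTER ONE: General Introduction", one possible variant of the patch is sketched below. This is my own hedged suggestion, not the answerer's verified code; it assumes \thechapter still expands to the (already uppercase) number word via \NUMBERstring, as in the preamble above:

```latex
\makeatletter
% Sketch: replace the plain "\numberline{\thechapter}" ToC prefix with
% "CHAPTER <number word>: " (use this instead of the {}{} patch above).
\xpatchcmd{\@chapter}
  {\numberline{\thechapter}}
  {CHAPTER \thechapter: }
  {}{}
\makeatother
```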
# [OS X TeX] three characters : one geometric, two domino-like

delanoy at math.univ-lyon1.fr
Thu Jun 15 10:55:37 CEST 2006

Hello all,

below is the LaTeX code that produces archaic "tartesian" characters in a picture. I should like to use them as ordinary characters in ordinary text. They look simple enough, but I've not found them in any package. Probably there is a reasonable way to realize them as combinations, with \hspace and \vspace. How would the LaTeX pros do it?

Ewan

\begin{picture}(8,3)
\put(0,0){\line(1,0){2}}
\put(0,0){\line(2,3){2}}
\put(0,3){\line(1,0){2}}
\put(0,3){\line(2,-3){2}}
\put(3,0){\line(1,0){2}}
\put(3,0){\line(0,1){3}}
\put(3,3){\line(1,0){2}}
\put(5,0){\line(0,1){3}}
\put(4,1){$\bullet$}
\put(4,2){$\bullet$}
\put(6,0){\line(1,0){2}}
\put(6,0){\line(0,1){3}}
\put(6,3){\line(1,0){2}}
\put(8,0){\line(0,1){3}}
\put(7,0.7){$\bullet$}
\put(7,1.5){$\bullet$}
\put(7,2.3){$\bullet$}
\end{picture}
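One possible way to turn such pictures into inline characters, sketched below, is to wrap each glyph in a macro and scale \unitlength down to text size. This is my own suggestion, not from the thread, and \tartgeom is a name I made up for the first ("geometric") glyph; the two domino-like glyphs can be wrapped the same way:

```latex
% Sketch: an inline version of the geometric glyph. \unitlength is set
% locally, so the picture is roughly letter-sized and sits on the baseline.
\newcommand{\tartgeom}{{%
  \setlength{\unitlength}{0.45ex}%
  \begin{picture}(2,3)
    \put(0,0){\line(1,0){2}}
    \put(0,0){\line(2,3){2}}
    \put(0,3){\line(1,0){2}}
    \put(0,3){\line(2,-3){2}}
  \end{picture}}}
% Usage in running text: ... the sign \tartgeom{} appears twice ...
```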
All polynomials with no natural roots and integer coefficients such that $\phi(n)|\phi(P(n))$ Let $P(x)$ be a polynomial with integer coefficients such that the equation $P(x)=0$ has no positive integer solutions. Find all polynomials $P(x)$ such that for all positive integers $n$ we have $\phi(n) \mid \phi(P(n))$. It is conjectured there are none (other than the trivial $P(x) = x^k Q(x)$). NOTE: For $\phi(P(n))$ to be well-defined, it has been suggested that we require $P(n) > 0$ for all positive integers $n$. - @Amir, better be long and clear. – lhf Jun 3 '11 at 14:30 @DJC: What about $p(x)=x$. – Eric Naslund Jun 3 '11 at 16:35 @Eric: The point of the problem is to find all such polynomials. I only ask for one. If indeed there are no such polynomials, then these two tasks are one and the same. By the words "Classifying all such polynomials" I assumed that there might have been some nontrivial polynomials that fit the bill (other than those of the form $P(n) = n^kQ(n)$ which are excluded by the question). – JavaMan Jun 3 '11 at 18:25 Actually, this problem was posted here. The first part have been solved, but the second part (this problem) is still unsolved. – Amir Hossein Jun 4 '11 at 7:26 Shouldn't we actually require that $P(x) > 0$ for positive integers $x$, so that $\phi(P(x))$ is defined? For example, $P(x) = x^2-200$ is never zero but is negative for some values of $x$. – Srivatsan Aug 13 '11 at 3:35 As this question has not been answered for a very long time, I think it is appropriate to use a strong conjecture to resolve it, namely Schinzel's hypothesis H. Let us write $P(x)=P_1(x)^{k_1}...P_r(x)^{k_r}$ where $P_i(x)\in \mathbb{Z}[x]$ are irreducible, distinct, non-constant, and have positive leading coefficient (possibly $r=1$). Let the constant coefficient of $P_i$ be $c_i$. We may assume that $c_i$ are all non-zero, since otherwise we arrive at the trivial case $x\mid P(x)$. We have \begin{align*} \phi(n)\mid \phi(P(n))= \phi(P_1(n)^{k_1}...P_r(n)^{k_r}) \end{align*} for all $n\geq 1.$ Let $c=|c_1...c_r|$, and define the polynomials $Q_i(x)=\frac{P_i(cx)}{|c_i|}\in \mathbb{Z}[x]$. Then we have \begin{align*} \phi(cn)\mid \phi(Q_1(n)^{k_1}...Q_r(n)^{k_r}|c_1|^{k_1}...|c_r|^{k_r}). \end{align*} for all $n\geq 1$. The advantage of this was that $Q_i$ have their constant coefficients equal to $\pm 1$. Let $D$ be large enough, in particular $D>\deg P$ (but we impose another condition later). We have in particular \begin{align*} \phi(cD!n)\mid \phi(Q_1(D!n)^{k_1}...Q_r(D!n)^{k_r}|c_1|^{k_1}...|c_r|^{k_r}). \end{align*} for all $n$, and these new polynomials have no small prime divisors. Finally, we apply Schinzel's hypotheisis H to the irreducible polynomials $x,Q_1(D!x),...,Q_r(D!x)$ (by Gauss' lemma, $f(x)$ is irreducible in $\mathbb{Z}[x]$ if and only if $f(kx)$ is). Their product does not have any fixed prime divisor $q$ because then we would have \begin{align*} \prod_{i=1}^r Q_i(D!x)\equiv 0 \mod q \end{align*} for $x=1,2,...,q-1$. However, this conguence has at most $\deg P$ solutions, so $q\leq \deg P+1$, which is a contradiction by the definition of $D$ (none of the polynomials $Q_i(D!x)$ is identically zero $\pmod q$ since their constant coefficients are $\pm 1$). Hence, we have infinitely many primes $p$ such that $Q_1(D!p),...,Q_r(D!p)$ are all primes. 
Setting $n=p$ in the previous formula involving $n$, we obtain by multiplicativity for large enough $p$ that \begin{align*} p-1\mid Q_1(D!p)^{k_1-1}(Q_1(D!p)-1)...(Q_r(D!p))^{k_r-1}(Q_r(D!p)-1)\phi(|c_1|^{k_1}...|c_r|^{k_r}). \end{align*} Since $Q_i(D!p)\equiv Q_i(D!)\pmod{p-1}$, we get \begin{align*} p-1\mid Q_1(D!)^{k_1-1}(Q_1(D!)-1)...Q_r(D!)^{k_r-1}(Q_r(D!)-1)\phi(|c_1|^{k_1}...|c_r|^{k_r}). \end{align*} However, we can choose $D$, depending only on the polynomial $P$, so that $Q_i(D!)\geq 2$ for all $i$, and then the right-hand side of the previous divisibility relation should be smaller than the left-hand side, which is a contradiction for $p$ large enough.

I doubt that this problem can be solved without assuming some conjecture, as $\phi(n)$ is surprisingly hard to control for general $n$; for example, Lehmer's totient problem about solving $\phi(n)\mid n-1$ seems perhaps easier but is known to be open. So, to gain some control, one would like to choose $n$ in the relation $\phi(n)\mid \phi(P(n))$ to be prime, or at least closely related to primes. However, very little is known about prime values of polynomials, and even less about their simultaneous prime values. In fact, we do not even know (according to this paper by M. Filaseta) a polynomial $f(x)\in \mathbb{Z}[x]$ such that $\deg f\geq 4$ and $f(x)$ is square-free for infinitely many integers $x$, although most numbers are squarefree and any polynomial with no fixed prime divisor should work. There are results about polynomials having infinitely many almost prime values, but I am not sure whether they would be very helpful here.
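To get a feel for how restrictive the divisibility hypothesis is, one can test small polynomials numerically. This sympy sketch is my addition, with $P(x)=x^2+1$ chosen purely for illustration:

```python
from sympy import totient

P = lambda n: n**2 + 1   # sample polynomial with no positive integer roots

failures = [n for n in range(1, 200) if totient(P(n)) % totient(n) != 0]
print(failures[:10])     # e.g. n = 7: phi(7) = 6 does not divide phi(50) = 20
```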
## Introduction

Contrast Sensitivity (CS) is the ability of the visual system to perceive the relative luminance between an object and the background1. In healthy subjects, contrast sensitivity can be affected by aging, refractive error, or pupil diameter2, and its measurement depends on the ambient luminance level. Jindra and collaborators reported higher contrast sensitivity measurements in photopic conditions compared to scotopic light levels1; however, a more recent study concluded that scotopic conditions provide higher contrast sensitivity measures than photopic conditions2. Those discrepancies in CS measurements could be explained by diffraction and spherical aberration effects in photopic and scotopic conditions, respectively3. Furthermore, the presence of bright light sources in the visual field may cause light-scattering within the ocular media4, known as ocular straylight. In normal eyes, ocular straylight mainly depends on the ocular media (cornea, sclera, iris, crystalline lens, and retinal fundus), age, and the illuminance of the light source5. Moreover, pathological conditions such as corneal edema and dystrophy6 or cataracts7 can increase ocular straylight drastically, limiting the patient's visual quality and quality of life. Ocular straylight can be measured by optical8 and psychophysical approaches9. The main optical effect of ocular straylight is the generation of a light veil distribution (veiling glare)10 in the retina capable of inducing a total loss of retinal image contrast11, which is known as Total Disability Glare12. However, bright light sources are also related to another type of glare which is not always accompanied by visual impairment, the Discomfort Glare13.

The research into the relationship between visual comfort and visual environment has resulted in the proposal of some luminance-based metrics14,15. These metrics have been quite successful in predicting Glare Discomfort. The CIE Glare index (CGI), Visual Comfort Probability (VCP), and Unified Glare Rating (UGR) are efficient tools helping architects and engineers to design natural and artificial lighting related to buildings16, streets17, and roads18. However, despite the things they share, Discomfort Glare and Total Disability Glare must be understood as two different phenomena. In fact, the empirical evidence indicates that Discomfort Glare indexes are mainly dependent on the ratio between the luminance of the objects present in the visual field and the luminance of the background, while Total Disability Glare depends on the total amount of light that enters the eye (measured as the illuminance at the cornea plane) causing straylight. It is in this regard that straylight can be understood as a measure of ocular media clarity19, while Disability Glare accounts for the intraocular scattering effects on visual performance.

According to the Commission Internationale de l'Eclairage (CIE), there are four psychophysical procedures for disability glare assessment: category rating, discrimination, adjustment, and matching9. All the tests devoted to evaluating Total Disability Glare share the same simple principle: assessment of the visual performance (typically visual acuity and CS) with and without the presence of an external glare source. In this work, we present a psychophysical version of an objective wide-field ocular straylight meter8 for the assessment of disability glare vision.
Moreover, taking advantage of this instrument, a study on Total Disability Glare as a function of sinusoidal gratings of different spatial frequencies, corneal illuminance, and angular field size of a circular glare source is also presented.

## Results

### Total disability glare illuminance

Table 1 shows the average and standard deviation of the Total Disability Glare (TDG) illuminance for all subjects involved in the study, obtained for the different spatial frequencies of the stimulus and angular fields of the glare source. The maximum TDG value found (3.02 ± 1.88 cd·sr/m²) corresponded to the minimum spatial frequency (0.2 c/°) evaluated and the maximum angular size of the glare source (48°), while the minimum TDG value (1.26 ± 0.92 cd·sr/m²) was obtained for the maximum spatial frequency (1.8 c/°) and the minimum angular size of the glare source (24°). The analysis of variance and the subsequent Tukey's test showed significant differences between the lowest frequency and the rest of the frequencies. However, no significant differences were found between the results for frequencies higher than 0.3 c/°, so we represent in Fig. 1 the mean TDG illuminance as a function of the angular field of the glare source only for the spatial frequencies of 0.2 and 0.3 c/°. Results in Fig. 1 show that the higher the angular distribution of the glare discs, the higher the TDG illuminance. The linear fits were TDG = 0.80 + 0.04*GA ($$R^{2} = 0.943, p = 0.029$$) for 0.2 c/° and TDG = 0.99 + 0.03*GA ($$R^{2} = 0.942, p = 0.029$$) for 0.3 c/°. Although the data obtained for frequencies higher than 0.3 c/° are not represented, we found that the linear correlation between GA and TDG disappears in those cases.

### Glare tolerance

The average values and standard deviation obtained for the glare tolerance (GT) are presented in Table 2. As for the TDG parameter, the maximum value of the GT parameter (1.29 ± 0.90 cd·sr/m²) is found for the minimum spatial frequency and the maximum angular field, whereas the minimum value (0.41 ± 0.37 cd·sr/m²) is obtained for the maximum frequency and the minimum angular field. The analysis of variance showed significant differences between the frequencies analyzed but, unlike the TDG parameter, Tukey's test revealed no statistically significant differences between the frequencies 0.2 c/° and 0.3 c/°, or between the frequencies 0.6 c/°, 0.9 c/° and 1.8 c/°. The frequency 1.2 c/° presents significant differences with all of the other frequencies, so we represent in Fig. 2 the mean values of the GT parameter as a function of the angular field of the glare source for the frequencies 0.2 c/°, 1.8 c/°, and 1.2 c/°. A positive linear correlation of the GT parameter with the angular size of the glare source was observed for the three spatial frequencies: GT = 0.20 + 0.02*GA ($$R^{2} = 0.937, p = 0.032$$) for 0.2 c/°, GT = 0.08 + 0.02*GA ($$R^{2} = 0.995, p = 0.003$$) for 1.2 c/°, and GT = 0.25 + 0.01*GA ($$R^{2} = 0.958, p = 0.021$$) for 1.8 c/°.

### Disability acceptance parameter

The disability acceptance parameter was defined as the range of illuminance at the pupil plane that, without causing Total Disability Glare, produces a decrease in contrast sensitivity but still allows the perception of some spatial frequencies. Table 3 shows the mean and standard deviation of this parameter, obtained from the difference between TDG and glare tolerance.
Analysis of variance and Tukey's test revealed significant differences between the frequency of 0.2 c/° and the rest of the frequencies, but no significant differences between the spatial frequencies higher than 0.3 c/°, so we represent in Fig. 3 the mean DA illuminance as a function of the angular field of the glare source only for the spatial frequencies of 0.2 and 0.3 c/°. The linear regressions for these frequencies were DA = 0.60 + 0.02*GA ($$R^{2} = 0.933, p = 0.034$$) and DA = 0.43 + 0.02*GA ($$R^{2} = 0.965, p = 0.018$$).

## Discussion

The optical quality of the eye has traditionally been assessed by analyzing its point spread function (PSF), as it contains information on both optical aberrations and scattering effects20. While aberrations are associated with the central part of the PSF, the periphery is associated with scattering contributions that, according to the general glare equation that describes disability glare, have a validity angular domain from 0.1° to 100° in the human eye. However, the visual function assessment must not only include an evaluation of the retinal image quality (objective optical assessment) but also consider the neural contrast sensitivity21. While visual acuity tests may confirm good vision quality under natural viewing conditions, the contrast sensitivity function may only manifest visual impairment under disability glare conditions or when larger retinal areas are exposed to glare sources, such as in night driving22,23.

Here we present a modification of an objective wide-field ocular straylight meter8 to assess the visual function under disability glare conditions. The system operates in extended Maxwellian illumination, projecting the center of the glare source and the visual stimuli directly onto the fovea24. The optical system controls the angular distribution and the illuminance of the glare source, whereas the visual stimuli, consisting of sinusoidal gratings with variable spatial frequency and contrast, are computer generated.

The disability glare concept was explored by measuring the contrast sensitivity function under glare vision conditions in young healthy subjects. First, the CS function was evaluated for each subject across a range of spatial frequencies from 0.2 to 1.8 c/° to establish a glare-free baseline vision. Next, the subject was exposed to a glare source of increasing illuminance to study the Total Disability Glare threshold (the point at which the glare source causes temporary blindness), the recovery of the baseline vision, and the amount of glare the subject can tolerate while maintaining the contrast sensitivity function.

Results revealed that Total Disability Glare illuminance is correlated to the incident angular size of the glare source; that is, the higher the angular distribution of the glare discs, the higher the corneal illuminance threshold required to reach Total Disability Glare. Our results are in agreement with those reported by Vos25, who found that the variation of equivalent veiling luminance as a function of the corneal illuminance strongly decreases as the incident glare source angle increases. Our results also showed that this dependence is statistically significant for the smallest spatial frequencies only.
The glare tolerance analysis, corresponding to the glare regime at which the baseline CS function is recovered, revealed that the tolerance to a glare source is positively correlated to the angular size of the disc of the glare source (see Fig. 1) but, unlike the TDG illuminance threshold, significant correlations were found for almost all the analyzed spatial frequencies. Moreover, the higher the spatial frequency, the lower the variation of the glare tolerance as a function of the angular size of the glare source (see Fig. 2). Finally, results showed that once the glare tolerance level is reached, the disability acceptance decreases as the angular size of the glare source approaches the central retina (Fig. 3). However, disability acceptance was found to be statistically dependent (in accordance with the TDG results shown in Sect. 3.1) on the incident angle of the source for the smallest spatial frequencies of 0.2 and 0.3 c/°.

To summarize, the shift in the Total Disability Glare illuminance threshold, glare tolerance, and disability acceptance parameter as a function of the spatial frequency increases for larger angular distributions of the variable discs set at the glare source. Disability glare vision therefore depends not only on the illuminance of the source and the angular distribution of the incident glare source, but also on the spatial frequency of the observed stimuli.

Nevertheless, the study has some potential limitations, which include the lack of control of individual pigmentation type among participants. As is well known, pigmentation type is an important factor in straylight10, so it is possible that some of our results can only be associated with the pigmentation type of our sample. Moreover, it can also be argued that the spatial frequency range used in this work (low spatial frequencies from 0.2 to 1.8 c/°) might not be adequate, given that a typical human eye can resolve spatial frequencies up to 30–40 c/°. However, it is important to remark that the low spatial frequencies are the most important region of the spectrum for localizing the surrounding objects. Following this line, our results could be useful to predict how glare could affect visual performance for object detection. Finally, it could have been interesting to measure straylight from our subjects with a standard method; with this information, more insight could be obtained from our present results. However, we consider that the present experiment by itself gives very interesting information about vision performance in glare conditions.

To conclude, a new psychophysical optical instrument has been developed for the study of the disability glare regime and the threshold at which glare vision becomes temporary blindness, as well as the amount of glare that is tolerated by the visual system in young adult subjects. The instrument allowed the definition of disability glare concepts involving factors such as the angular distribution of the glare discs, the illuminance of the glare source, and the contrast sensitivity function. Future work includes an extension of the analyzed range of spatial frequencies to study the disability glare parameters herein reported as a function of age and of different ocular pathological conditions such as macular affectations, intraocular opacifications, or unwanted glare consequences from refractive surgery.
## Methods

### Subjects

A total of 30 subjects were recruited in this study and measured by an experienced optometrist at the Laboratory of Visual Optics Research of the University of Zaragoza (Spain). The average age was 28.1 ± 9.3 years, and the average equivalent spherical error was −1.72 ± 2.18 D. Exclusion criteria included the presence of any ocular pathology and a history of ocular surgery. The study protocol adhered to the tenets of the Declaration of Helsinki and received the approval of the local Ethical Committee of Research of the Health Sciences Institute of Aragon, with reference C.P.-C.I.PI20/377. All subjects were informed about the experimental procedures of the research and signed an informed consent form prior to the start of the measurements.

### Disability glare instrument

A custom-made experimental system, shown in Fig. 4, was developed with the aim of illuminating the retina with a wide-angle light distribution coaxial with the observation of a luminous sinusoidal stimulus. The system includes a circular glare source composed of a holographic light shaping diffuser (Light Shaping Diffusers, Luminit, LLC) back-illuminated by a white LED (Cree LED XLamp XM-L2). The luminous intensity of the source is controlled by an external dimmer. The angular distribution of the source is modified by means of removable 3D-printed masks with different angular apertures. This configuration creates uniform discs instead of the conventional annuli configurations7. Throughout the manuscript, we refer to angular size as the angular distribution of the uniform discs. For this experiment, four masks were used (with diameters 12.6 mm, 17.73 mm, 21.81 mm, and 26.67 mm), so the angular sizes subtended by the glare source once projected through the system were 24°, 33°, 40°, and 48°. A relay telescopic system consisting of two achromatic doublet lenses conjugates the aperture of CL (45 mm focal length) with the pupil plane of the subject, providing extended Maxwellian illumination. The glare source and the CS test are thus directly projected on the retina, minimizing unwanted factors affecting the disability glare measurement such as the pupil size and scattering contributions from the iris and sclera27. The visual stimuli consist of a contrast sensitivity test generated by the Contrast-Test software (Opto-software). The software generates sinusoidal gratings that, through the system, present a total angular size of 9.8°, with spatial frequencies of 0.2, 0.3, 0.6, 0.9, 1.2, and 1.8 cycles per degree (c/°) and contrast levels of 20, 11.1, 5.5, 3.12, 1.66, 0.87, 0.47, 0.25 and 0.12%. The luminance values of the test range from 46.5 cd/m² for gratings with minimum contrast to 20.9 cd/m² for gratings with maximum contrast, so photopic conditions are maintained independently of the glare source luminance. A 50:50 beam splitter allows a coaxial pathway of the visual target (displayed on an LCD monitor) and the glare source. The whole optical system was mounted on a 30 × 45 cm optical board coupled to a chin rest to facilitate alignment of the system axis with the subject's visual axis.

### Experimental procedure

The experiment was performed in scotopic conditions (to ensure the absence of external glare sources) and monocularly, with all the subjects wearing their usual refractive correction. Before starting the measurements, 5 min were allowed for dark adaptation.
The subject's visual axis was kept aligned with the optical axis of the glare source and observation systems by means of the chin rest positioning controls. The experiment involved first the estimation of a set of minimum perceptible contrast levels (MPCL) with the glare source off, for 6 different spatial frequency sinusoidal stimuli: 0.2, 0.3, 0.6, 0.9, 1.2, and 1.8 c/°. To accomplish that, the examiner started by showing a sinusoidal stimulus with subthreshold contrast, so the observers were not able to perceive any contrast on it. Then the contrast was progressively increased until the observer was able to perceive some contrast on it. The aim of this first step was to find a visual stimulus with a minimum but perceptible contrast. The mean values of minimum perceptible contrast obtained for each frequency are shown in Table 4.

Secondly, while the observer was looking at this near-threshold sinusoidal pattern, the illuminance of the glare source was progressively increased to reach the minimum illuminance level able to prevent recognition of the sinusoidal contrast. This illuminance level, measured at the corneal plane of the observer, was identified in this work as the Total Disability Glare illuminance. After this first glare source illuminance level was determined, the examiner proceeded to slightly increase the illuminance level just before beginning a slow and progressive decrease of the illuminance of the glare source, to determine a second characteristic glare level just when sinusoidal contrast perception was recovered. This illuminance level is named in this work the glare tolerance. From the Total Disability Glare illuminance and the glare tolerance, the disability acceptance parameter is defined in this work as the range of illuminance values that exceed the glare tolerance but are lower than the Total Disability Glare illuminance; it is obtained from the difference between the Total Disability Glare parameter and the glare tolerance. Figure 5 shows a drawing to facilitate the comprehension of the defined parameters.

This second step was repeated for each spatial frequency at 4 different glare angular distribution conditions (24, 33, 40, and 48 degrees). This meant 48 photometric level measurements for the glare source at the pupil plane for each participant (6 spatial frequencies × 4 glare angular sizes × 2 characteristic parameters).

### Data analysis

For each parameter defined above (TDG, GT and DA), a one-way analysis of variance and a post hoc Tukey test were carried out to check for significant differences in the values obtained as a function of the angular size of the glare source between each spatial frequency. Based on this statistical analysis, graphical representation of the mean parameters as a function of the angular field of the glare source is shown only for the spatial frequencies that present significant differences between them. Both graphical representations and statistical analysis were carried out using SigmaPlot (Systat Software, Inc, USA).
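The statistical procedure described above (one-way ANOVA followed by Tukey's test, plus the linear fits of Figs. 1-3) can be reproduced with standard tooling. The authors used SigmaPlot, so the following Python sketch, with made-up data standing in for the study's real per-subject measurements, is only an illustration of the same procedure, not their code:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)

# Hypothetical per-subject TDG values (cd*sr/m^2) for three frequencies.
freqs = ['0.2', '0.3', '0.6']
samples = [rng.normal(mu, 0.5, 30) for mu in (2.6, 2.0, 1.9)]

# One-way ANOVA followed by post hoc Tukey HSD comparisons.
print(stats.f_oneway(*samples))
print(pairwise_tukeyhsd(np.concatenate(samples), np.repeat(freqs, 30)))

# Linear fit of a mean parameter against glare angular size (deg).
ga, tdg = [24, 33, 40, 48], [1.76, 2.12, 2.40, 2.72]  # assumed means
fit = stats.linregress(ga, tdg)
print(f"TDG = {fit.intercept:.2f} + {fit.slope:.2f}*GA, "
      f"R^2 = {fit.rvalue**2:.3f}, p = {fit.pvalue:.3f}")
```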
# Playing with Quantifier Satisfaction

Nikolaj Bjørner and Mikoláš Janota
Microsoft Research
[email protected], [email protected]

## Quantified Formulae

• a game between the universal and the existential player
• E.g. , winning strategy

### Solving QBF (QESTO)

• : starts. is false.
• : strikes back. is true.
• : has to backtrack. is already unsat.
• : learns .
• : is false.
• : counters - is true.
• : has yet again to consider what went wrong by determining the core , which is empty. The universal player has to give up, and the existential player has established that the formula is true.

### Model-based Projections

• , , and .
• The size of is no larger than the size of .
• There is a sequence of models of , such that .
• Let be an infinite sequence of models of literals , then there is a finite set of equivalent projections .

### Model-based Projection

def sign(M, a):
    if M(a): return a
    else:    return ¬a

### Initialization

def level(j, a):
    return max level of bound variable in atom a of parity j

### Property Directed QSAT Algorithm

def strategy(M, j): return 

def tailv(j): return 

j = 1
M = null
while True:
    if strategy(M, j) is unsat:
        if j == 1: return F is unsat
        if j == 2: return F is sat
        C = Core( , strategy(M, j))
        J = Mbp(tailv(j), C)
        j = index of max variable in J of same parity as j
         = J
        M = null
    else:
        M = current model
        j = j + 1

### Proof

Note: the core for is an implicant of .

### On Quantifier Elimination

• Extends to Quantifier Elimination.
• Enumerate literals that imply
• by tracking clauses asserted at level 2.

### On Strategies

• Our base strategy uses limited look-ahead.
• It is possible to formulate stronger strategies (based on models)

### Some Experiments

Figure 1. LRA
Figure 2. LIA

### Summary and Future Work

• Solving quantified theories.
• Using model-based projections and cores.
• Symmetric view on and .
• More theories?
# pskill

##### Kill a Process

pskill sends a signal to a process, usually to terminate it.

Keywords: utilities

##### Usage

pskill(pid, signal = SIGTERM)

SIGHUP SIGINT SIGQUIT SIGKILL SIGTERM SIGSTOP SIGTSTP SIGCHLD SIGUSR1 SIGUSR2

##### Arguments

pid: positive integers: one or more process IDs, as returned by Sys.getpid.

signal: integer, most often one of the symbolic constants.

##### Details

Signals are a C99 concept, but only a small number are required to be supported (of those listed, only SIGINT and SIGTERM). They are much more widely used on POSIX operating systems (which should define all of those listed here), which also support a kill system call to send a signal to a process, most often to terminate it. Function pskill provides a wrapper: it silently ignores invalid values of its arguments, including zero or negative pids.

In normal use on a Unix-alike, Ctrl-C sends SIGINT, Ctrl-\ sends SIGQUIT and Ctrl-Z sends SIGTSTP: that and SIGSTOP suspend a process, which can be resumed by SIGCONT.

The signals are small integers, but the actual numeric values are not standardized (and most do differ between OSes). The SIG* objects contain the appropriate integer values for the current platform (or NA_integer_ if the signal is not defined).

Only SIGINT and SIGKILL will be defined on Windows, and pskill will always use the Windows system call TerminateProcess.

##### Value

A logical vector of the same length as pid, TRUE (for success) or FALSE, invisibly.

##### See Also

Package parallel has several means to launch child processes which record the process IDs.

psnice

##### Examples

library(tools)
## Not run:
pskill(c(237, 245), SIGKILL)
## End(Not run)

Documentation reproduced from package tools, version 3.5.0, License: Part of R 3.5.0
# Confusion about Notation for Twisting Sheaf and Very Ample Invertible Sheaves

I'm a bit confused by some notation in Hartshorne's chapter II (theorems II.5.17, II.7.6, and some discussion on ample invertible sheaves in section 7).

Theorem 5.17 (Serre). Let $$X$$ be a projective scheme over a noetherian ring $$A$$, let $$\mathcal{O}(1)$$ be a very ample invertible sheaf on $$X$$, and let $$\mathcal{F}$$ be a coherent $$\mathcal{O}_X$$-module. Then there is an integer $$n_0$$ such that for all $$n \geq n_0$$, the sheaf $$\mathcal{F}(n)$$ can be generated by a finite number of global sections.

Proof. Let $$i: X \rightarrow \mathbb{P}_A^r$$ be a closed immersion of $$X$$ into a projective space over $$A$$, such that $$i^*(\mathcal{O}(1)) = \mathcal{O}_X(1)$$... [The remainder of the paragraph is devoted to reducing to the case where $$X$$ is Proj of a polynomial ring.]

First, what is meant by the hypothesis "$$\mathcal{O}(1)$$ be a very ample invertible sheaf on $$X$$", since, a priori, $$X$$ is not projective space or Proj of a graded ring? I see that one can appeal to Corollary 5.16 to see that $$X$$ is $$\mathrm{Proj}(S)$$ for some graded ring $$S$$, finitely generated by $$S_1$$ as an $$S_0 = A$$-algebra. Then one can take $$\mathcal{O}(1) = \widetilde{S(1)}$$. This is very ample since we can use the description of $$S$$ to construct a closed immersion $$i: X \rightarrow \mathbb{P}_A^r$$ with $$i^*(\mathcal{O}_{\mathbb{P}_A^r}(1)) = \mathcal{O}_X(1)$$ (by Proposition 5.12(c)), which is just $$\mathcal{O}(1)$$. Is this the correct interpretation?

Second, on p. 153, this result is later quoted as having established: for $$\mathcal{L}$$ any very ample invertible sheaf on a projective scheme $$X$$ over $$A$$, and for any coherent $$\mathcal{F}$$, there is an $$n_0$$ such that $$n \geq n_0$$ implies $$\mathcal{F} \otimes \mathcal{L}^n$$ is generated by global sections. This appears slightly different than the above result, since we worked with the particular very ample invertible sheaf $$\mathcal{O}_X(1)$$. Based on the above argument, it seems every very ample invertible sheaf over a projective scheme (at least those coming from closed immersions coming from surjective ring maps, by Proposition 5.12(c)) will be isomorphic to $$\mathcal{O}_X(1)$$. Is this true? If it's not, in what sense was the quoted result proven by Theorem 5.17? Also, is this rephrasing merely meant to suggest the definition of ample provided on p. 153?

Finally, I am confused about the notation found in the proof of Theorem 7.6 in chapter II. The setting is $$X$$ a scheme of finite type over a noetherian ring $$A$$, and $$\mathcal{L}$$ an invertible sheaf on $$X$$. One direction of the theorem is that $$\mathcal{L}^m$$ being very ample implies $$\mathcal{L}$$ is ample. The proof begins with $$i: X \rightarrow \mathbb{P}_A^n$$, an immersion with $$\mathcal{L}^m \simeq i^*(\mathcal{O}(1))$$. Then, given any coherent $$\mathcal{F}$$, consider the closure $$\overline{X}$$ in projective space and $$\overline{\mathcal{F}}$$ the coherent sheaf which is the extension of $$\mathcal{F}$$ to $$\overline{X}$$. Then it is noted that if $$\overline{\mathcal{F}} \otimes \mathcal{O}_{\overline{X}}(l)$$ is generated by global sections, then $$\mathcal{F} \otimes \mathcal{O}_X(l)$$ is also.

My question is: what is the meaning of $$\mathcal{O}_X(l)$$? The map $$i$$ is just an immersion, so we don't know that $$X$$ is Proj of some graded ring. Is $$\mathcal{O}_X(l)$$ just a short-hand for $$\mathcal{L}^l$$?
The notation $\mathcal{O}_X(1)$ in the statement of the theorem is just a name for an invertible sheaf which by hypothesis is very ample. The motivation for such a name is exactly the fact that giving a very ample invertible sheaf $L$, plus a choice of a basis of $\Gamma(X, L)$, where $X$ is a variety over some algebraically closed field, is the same as giving a closed immersion $i:X\hookrightarrow \mathbb{P}^n$ plus an isomorphism $\phi: L \simeq i^*\mathcal{O}_{\mathbb{P}^n}(1)$.

As you were suggesting, $\mathcal{O}_X(l)$ is a shorthand for $L^l$. The isomorphism $\phi$ above shows that every very ample line bundle is isomorphic to $i^*\mathcal{O}(1)$, or $\mathcal{O}_X(1)$ if you prefer this notation.

Let me remark that there is no canonically defined invertible sheaf $\mathcal{O}_X(1)$ on $X$ as in the case of $\mathbb{P}^n$. In other terms, there is no canonically defined very ample invertible sheaf $L$ on $X$. This is the same as saying that you can embed $X$ in $\mathbb{P}^n$ in different ways, and it can happen that the associated very ample line bundles $\mathcal{O}_X(1)$ are different from one another. For example, think of a non-hyperelliptic curve of genus 3. You have different embeddings given by $K$ and $2K$, where $K$ is the canonical line bundle. By embedding the curve using these two line bundles, you have that $\mathcal{O}_X(1)$ will be isomorphic to $K$ in the first case and to $2K$ in the second one.

You can also see this by looking at $X=\mathbb{P}^1$: this is isomorphic both to $Proj(k[x,y])$ and to $Proj(k[u,v,w]/(uw-v^2))$, so that $\mathcal{O}_X(1)$ is the sheafification of the graded $k[x,y]$-module $k[x,y](1)$ in the first case and the sheafification of $k[x,y](2)$ in the other. Clearly, in the case of the projective line you do have a canonical $\mathcal{O}(1)$, but this is a special case, and that's because you have a canonical embedding, if you want, of the projective line into a projective space, namely the identity (you can also see this by looking at the functorial definition of the projective line). But in general, you do not have such a canonical embedding.

• I'm still confused why on p. 153, theorem 5.17 is referred to as having shown for any very ample sheaf $L$ on $X$ a projective scheme over a noetherian ring $A$, $L$ is ample. I guess, more succinctly, I am asking: is every very ample sheaf on a projective scheme $X$ over a noetherian ring isomorphic to $\mathcal{O}_X(1)$? Aug 13 '17 at 6:21
• I edited the answer. Aug 13 '17 at 8:27
• I added an example in the end, hoping that it clarifies a little bit more. Aug 13 '17 at 17:06
• Ah, yes, I guess I was confused on the point that $\mathcal{O}_X(1)$ does not always mean the same thing, and depends on the embedding into projective space. Thank you! Aug 13 '17 at 20:46
• @Andrea This might be a long shot since the question is six months old, but I just wanted to clarify something. In your answer you say "The notation $\mathcal{O}_{X}(1)$ in the statement of the theorem is just a name for an invertible sheaf..." That would make sense, except that the notation $\mathcal{O}_{X}(1)$ is not used in the theorem's statement. He uses $\mathcal{O}(1)$ in the statement to denote a very ample sheaf on $X$. But in the proof he says that $i^{*}(\mathcal{O}(1)) = \mathcal{O}_{X}(1)$. Is it a typo in the statement, and is $\mathcal{O}(1)$ meant to be $\mathcal{O}_{X}(1)$? – Luke Jan 5 '18 at 6:22
# [pdftex] no BoundingBox

Mon Feb 9 22:42:22 CET 2009

Hi Riccardo.

> (4) When I typeset the (3) source file, TexShop says that it does not
> find the qqq file.
> If the \includegraphics argument is qqq.pdf, TexShop says that the
> graphic file has no BoundingBox.

The problem is that you're trying to include a pdf rather than a (e)ps file when you're using latex -> dvips -> pstopdf. There are in principle two options:

- if you can work without PSTricks (or use a wrapper that produces intermediate pdf files like ps4pdf), use pdflatex
- otherwise include a ps file, e.g. by converting your cropped pdf file to ps on the command line with pdf2ps (part of ghostscript, I believe), or with Adobe Acrobat (save as ps).

But I think in your case, you should have a look at Pierre Chatelier's LaTeXiT GUI (http://ktd.club.fr/programmation/latexit_en.php)? It does all the cropping etc., and you can simply save the resulting image as an .eps file, which will fit into your workflow.

P.S. For Mac-specific questions, such as TeXShop settings, there's also the mac-tex mailing list: http://email.esm.psu.edu/mailman/listinfo/macosx-tex

Cheers,
Jan

On Feb 9, 2009, at 9:31 AM, riccardo nisi wrote:

> I am Riccardo Nisi and I write a very bad English.
> My computer is MacMini, OSX 10.6.
> I am using TexShop + Latex and the MacTex 2007 package.
>
> (1) I am using TexShop and Latex with the PSTricks package to produce mathematical
> graphics, and the first line of my source file (we say qqq.tex) is
> %TEX TS-program = latex.
> TexShop typesets my qqq.tex and the output is qqq.pdf.
>
> (2) I crop the file qqq.pdf with the Preview application and save the
> new qqq.pdf.
>
> (3) I need to insert the separate graphics into another
> source file, which uses the PSTricks package (the first line is
> %TEX TS-program = latex), with \includegraphics{qqq}.
>
> (4) When I typeset the (3) source file, TexShop says that it does not
> find the qqq file.
> If the \includegraphics argument is qqq.pdf, TexShop says that the
> graphic file has no BoundingBox.
>
> I ask you whether or how it is possible to solve these problems with
> TexShop, also using the Terminal.
# 8051 and its family (8951, 2051, ...) and ADC0804

Status: Not open for further replies.

#### massoti

##### New Member

I would like to get data sheets for the 8051 and its family (8951, 2051, ...) and the ADC0804.

#### Pilot

##### New Member

If you learn to use the search engines that are available, you would be able to get this information quickly yourself. If you input the numbers one at a time, you will find all you need on the first results page for each device search.

#### kinjalgp

##### Active Member
# First cell height calculation

While performing CFD simulations, it is critical that one captures the boundary layer near the wall properly. In order to do that, the mesh should be generated in such a manner that it resolves the boundary layer. For turbulent flows, calculating the y+ value of the first interior node/gridpoint helps in doing that. This dimensionless distance is defined as

$y^+ := \frac{u_* \, y}{\nu}$

where $u_*$ is the friction velocity defined as

$u_* := \sqrt{\frac{\tau_w}{\rho}}.$

The wall shear $\tau_w$ typically cannot be determined until after a simulation has been completed, so it is usually necessary to estimate a value, and then check it once the simulation is complete.
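To turn a target $y^+$ into an actual first cell height before running the simulation, the wall shear is commonly estimated from a flat-plate skin-friction correlation. Below is a minimal Python sketch (my addition; the power-law estimate $C_f \approx 0.026\,Re^{-1/7}$ is a common assumption, not part of this page):

```python
def first_cell_height(y_plus, u_inf, rho, mu, length):
    """Estimate the first cell height for a target y+ on a flat plate."""
    re = rho * u_inf * length / mu       # Reynolds number based on length
    cf = 0.026 / re ** (1.0 / 7.0)       # power-law skin-friction estimate
    tau_w = 0.5 * cf * rho * u_inf ** 2  # estimated wall shear stress
    u_tau = (tau_w / rho) ** 0.5         # friction velocity u_*
    return y_plus * mu / (rho * u_tau)   # y = y+ * nu / u_*

# Example: air at sea level over a 1 m plate at 30 m/s, targeting y+ = 1
print(first_cell_height(1.0, 30.0, 1.225, 1.81e-5, 1.0))  # ~1.2e-5 m
```

As the page notes, the resulting estimate should still be checked against the actual $y^+$ once the simulation has run.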
## February 26th, 2018

### Cayley-Hamilton theorem and Nakayama's lemma

Originally published at 狗和留美者不得入内. You can comment here or there.

The Cayley-Hamilton theorem states that every square matrix $M$ over a commutative ring $A$ satisfies its own characteristic equation. That is, with $I_n$ the $n \times n$ identity matrix, the characteristic polynomial $p(\lambda) = \det(\lambda I_n - M)$ of $M$ satisfies $p(M) = 0$.
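As a concrete sanity check, the theorem can be verified for a small matrix by evaluating its characteristic polynomial at the matrix itself. This sympy sketch is my addition, not part of the original post:

```python
import sympy as sp

M = sp.Matrix([[1, 2], [3, 4]])
lam = sp.symbols('lam')
p = M.charpoly(lam)                # lam**2 - 5*lam - 2

# Evaluate p at M via Horner's scheme; the result must be the zero matrix.
result = sp.zeros(*M.shape)
for coeff in p.all_coeffs():
    result = result * M + coeff * sp.eye(M.rows)
print(result)                      # Matrix([[0, 0], [0, 0]])
```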
# Coloring multidimensional data

## The Problem

Suppose we have some data containing multiple variable values (measurements) for a number of elements (observations). We'd like to visualize this data, and when doing so, assign different colors to different elements. This requires us to manually assign a color to each element or, if the data lends itself to it, compute some linear score for each element and use that as an index into a color palette. For large multidimensional data, neither approach is practical.

The chameleon package provides a quick-and-dirty solution which automatically computes a distinct color for each data element, where these colors try to reflect the "coarse" structure of the data; that is, "more similar" data elements are assigned "more similar" colors.

## Seatbelts Example

We'll demonstrate this on the seatbelt data, one of the built-in data sets provided by R. To begin, we'll load the chameleon package and access the seatbelts data. Since the package only works on simple matrices, we'll have to convert the seatbelts time series data to a simple matrix:

library(chameleon)
seatbelts <- matrix(as.numeric(Seatbelts), nrow=nrow(Seatbelts), ncol=ncol(Seatbelts))
colnames(seatbelts) <- colnames(Seatbelts)
dim(seatbelts)
#> [1] 192 8
#>      DriversKilled drivers front rear   kms PetrolPrice VanKilled law
#> [1,]           107    1687   867  269  9059   0.1029718        12   0
#> [2,]            97    1508   825  265  7685   0.1023630         6   0
#> [3,]           102    1507   806  319  9963   0.1020625        12   0
#> [4,]            87    1385   814  407 10955   0.1008733         8   0

We now have a matrix with 192 elements and 8 measurements for each one. Using the data_colors function, we can assign a color to each element and use it to visualize the data, for example using a 2D UMAP projection:

colors <- data_colors(seatbelts)
library(umap)
layout <- umap(seatbelts, min_dist=0.99, random_state=123456)$layout
plot(layout, asp=1, col=colors, pch=19, cex=1)

Assuming that UMAP has indeed captured some underlying structure of the data, we can see that the chosen colors correspond well to this structure, and possibly hint at some additional structure not captured well in the 2D projection.

However, 192 colors is "a bit much", so we can't expect them to be very distinct from each other. We can reduce the number of colors by grouping the data elements. For example, since the original seatbelts data is a time series, we can compute for each row the year it applies to:

years <- floor(time(Seatbelts))
unique(years)
#>  [1] 1969 1970 1971 1972 1973 1974 1975 1976 1977 1978 1979 1980 1981 1982 1983
#> [16] 1984

And then compute and show a color for each year:

year_colors <- data_colors(seatbelts, group=years)
plot(layout, asp=1, col=year_colors[as.character(years)], pch=19, cex=1)
legend('bottomleft', legend=names(year_colors), col=year_colors, lty=1, lwd=3, cex=0.75)

We see each year's data is spread over a large part of the projection, but not uniformly so, suggesting that while there is some year-to-year variation, it probably isn't the right way to group this data into distinct clusters.

## PBMC Example

For a more successful grouping example, we'll use some single-cell RNA sequence (sc-RNA) data (provided as part of the package). This data contains a matrix with ~1.5K metacells (rows) and, for each one, the UMI count (# of detected RNA molecules) for each of ~600 different "feature" genes (columns). In addition, it provides a vector of cell types which were assigned to the metacells using a supervised analysis pipeline, and a 2-column matrix containing the 2D UMAP projection chosen to visualize the data (a common practice for scRNA data analysis).
In addition, it provides a vector of cell types which were assigned to the metacells using a supervised analysis pipeline, and a 2-column matrix containing the 2D UMAP projection chosen to visualize the data (a common practice for scRNA data analysis).

data(pbmc)

Let’s compute a color for each cell type. For better results, we first convert the raw UMI counts to a log of the fraction of the total UMIs in each metacell:

fractions <- pbmc$umis / rowSums(pbmc$umis)
log_fractions <- log2(fractions + 1e-5)
type_colors <- data_colors(log_fractions, group=pbmc$types)

We can then use this to color the provided 2D UMAP projection. Here we’ll use ggplot2:

library(ggplot2)
frame <- as.data.frame(pbmc$umap)
frame$type <- pbmc$types
ggplot(frame, aes(x=xs, y=ys, color=type)) +
    geom_point(size=0.75) +
    scale_color_manual(values=type_colors) +
    theme_light() +
    guides(color=guide_legend(override.aes=list(size=3))) +
    theme(legend.text=element_text(size=12), legend.key.height=unit(14, 'pt'))
#> Warning: Removed 12 rows containing missing values (geom_point).

Here we see that, in contrast to the seatbelt years case above, each type (group of metacells) maps to a distinct region in the 2D UMAP projection, suggesting that grouping the metacells by the type annotations does capture some significant structure of the data.

## Picking distinct colors

The package also provides the lower-level distinct_colors function, which attempts to select a number of distinct colors that can be directly assigned to unordered categorical data. For example:

distinct_colors(8)
#> $lab
#>             l         a          b
#> [1,] 45.31069  16.83526 -53.526963
#> [2,] 45.31069  59.32081 -53.526963
#> [3,] 45.31069  38.07803 -16.733398
#> [4,] 45.31069  59.32081  20.060166
#> [5,] 80.00000 -68.13584  44.589209
#> [6,] 80.00000 -25.65029 -28.997920
#> [7,] 80.00000 -46.89306   7.795645
#> [8,] 80.00000 -25.65029  44.589209
#>
#> $name
#> [1] "#3E67C5" "#9C3FC6" "#9B5188" "#C6314D" "#30E36E" "#4DD6FB" "#51DDB6"
#> [8] "#B3D270"

By default, this excludes low-saturation (grayish) colors, as well as too-dark and too-light colors (colors whose lightness value in the CIELAB color space is too low or too high). This can be controlled by specifying explicit saturation and lightness bound parameters.

The convenience function scale_color_chameleon wraps this to allow it to be used as a color scale for unordered (categorical) data. For example:

ggplot(frame, aes(x=xs, y=ys, color=type)) +
    geom_point(size=0.75) +
    scale_color_chameleon() +
    theme_light() +
    guides(color=guide_legend(override.aes=list(size=3))) +
    theme(legend.text=element_text(size=12), legend.key.height=unit(14, 'pt'))
#> Warning: Removed 12 rows containing missing values (geom_point).

And the similar scale_fill_chameleon function allows it to be used for fill colors:

ggplot(frame, aes(x=xs, y=ys, fill=type)) +
    geom_point(shape=21, size=2, stroke=0.2, color="black") +
    scale_fill_chameleon() +
    theme_light() +
    theme(legend.text=element_text(size=12), legend.key.height=unit(14, 'pt'))
#> Warning: Removed 12 rows containing missing values (geom_point).

Using any of these functions arbitrarily maps the colors to the values, making no attempt to have similar colors reflect similarities in the underlying data.
Field/project development

# ABB Scoops Up Contract To Power Jansz-Io Compression Project

## The contractor was tapped by Chevron Australia and Aker Solutions to provide power from shore to the Jansz-Io field offshore Australia.

ABB has won an order worth $120 million to supply the overall electrical power system for the multibillion-dollar Jansz-Io Compression project.

The Jansz-Io field is located approximately 200 km off the northwestern coast of Australia at water depths of approximately 1400 m. The field is part of the Chevron-operated Gorgon natural gas project, one of the world’s largest natural gas developments.

The Jansz-Io Compression project will involve the construction and installation of a 27,000-tonne (topside and hull) normally unattended floating field control station, approximately 6,500 tonnes of subsea compression infrastructure, and a 135-km submarine power cable linked to Barrow Island. The estimated $4 billion project was greenlit earlier this year.

“The Jansz-Io Compression project is a major enabler in maintaining an important source of natural gas to customers in Asia Pacific,” said Peter Terwiesch, president of process automation at ABB. “It will support energy transition across the region, where many countries primarily rely on coal for energy generation. Burning natural gas produces around half as much carbon dioxide per unit of energy compared with coal. We’re proud to be leading the way in the global energy industry by pioneering innovative subsea power technologies that bring us closer to a carbon-neutral future.”

ABB will provide much of the electrical equipment, both topside and subsea, for the project. The project will combine two core ABB technologies—power from shore and variable-speed-drive long-stepout subsea power—for the first time. The electrical system will be able to transmit 100 megavolt-amperes over approximately 140 km and at depths of 1400 m.

“This game-changing technology significantly reduces power consumption and emissions compared to power generated offshore by local gas turbines and compressors located topside,” said Brandon Spencer, president of energy industries at ABB. “Subsea compressors are key to helping improve reservoir recoverability and ensuring optimal use of resources from existing fields.”

The contract was awarded following concept development and a front-end engineering and design study. Work will start immediately, and the subsea compression system is expected to be in operation in 2025.
## yrivers36 2 years ago

Please explain: $\sqrt{x} = \sqrt{x-5} + 1$

1. I would move the 1 to the LHS and then square both sides.

2. yrivers36: I squared both sides and got $x=x-5+2\sqrt{x-5}+1$
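From there, the $x$ terms cancel and the radical can be isolated, which finishes the problem:

$$0 = -4 + 2\sqrt{x-5} \;\Longrightarrow\; \sqrt{x-5} = 2 \;\Longrightarrow\; x - 5 = 4 \;\Longrightarrow\; x = 9.$$

Check: $\sqrt{9} = 3$ and $\sqrt{9-5} + 1 = 2 + 1 = 3$, so $x = 9$ works.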
# Does the many body fermion spatial wavefunction go to zero when two wavefunctions approach each other?

This question assumes the fermions have the same spin eigenstate. I have been told that if somehow one could take the limit as two identical fermion states approach the same state, the total wavefunction goes to zero.

$$\Psi(1,2) = \big( |1\rangle_1 |2\rangle_2 - |2\rangle_1 |1\rangle_2 \big)$$

However, I don't think this is true for continuous spatial wavefunctions.

$$\Psi(x_1,x_2) = N \left( e^{-\dfrac{|x_1+a|^2}{4 \sigma^2}} e^{-\dfrac{|x_2-a|^2}{4 \sigma^2}} - e^{-\dfrac{|x_2+a|^2}{4 \sigma^2}}e^{-\dfrac{|x_1-a|^2}{4 \sigma^2}} \right)$$

In this example I have two Gaussian fermions with equal variance, but their centers are displaced by $2a$. $N$ is the normalization factor, which depends on the separation of the Gaussians. I have to solve for $N$ to normalize the total wavefunction to unity.

$$\Psi(x_1,x_2) = e^{-\dfrac{|x_1|^2+|x_2|^2}{4 \sigma^2}} \dfrac{\sinh\left(\dfrac{a\cdot (x_2-x_1)}{2\sigma^2} \right)\sqrt{2}}{(2\sigma^2\pi)^{\tfrac{n}{2}}\sqrt{\left(e^{\dfrac{a^2}{\sigma^2}}-1\right)}}$$

Above is my total wavefunction when I solve for $N$ and mathematically reduce it.

$$\lim_{a\rightarrow 0} \Psi(x_1,x_2) = e^{-\dfrac{|x_1|^2+|x_2|^2}{4 \sigma^2}} \dfrac{\hat{a}\cdot (x_2-x_1)}{\sigma(2\sigma^2\pi)^{\tfrac{n}{2}}\sqrt{2}}$$

When I take the limit as the two wavefunctions overlap, my total wavefunction converges to a nonzero object. Does the many body fermion spatial wavefunction go to zero when two wavefunctions approach each other?

## General Case

Given a total wavefunction of two fermions with identical spin

$$\Psi(x_1,x_2) = N(a)\Bigg( \psi(x_1)\psi(x_2+a) - \psi(x_2)\psi(x_1+a) \Bigg)$$

we expand

$$\psi(x_2+a) = \psi(x_2) + \psi'(x_2)a+\mathcal{O}(a^2)$$

to write

$$\Psi(x_1,x_2) = N(a)\Bigg( \psi(x_1) \left(\psi(x_2) + \psi'(x_2)a+\mathcal{O}(a^2)\right) - \psi(x_2) \left(\psi(x_1) + \psi'(x_1)a+\mathcal{O}(a^2)\right) \Bigg)$$

We reduce

$$\Psi(x_1,x_2) = N(a)\Bigg( \left(\psi(x_1) \psi'(x_2)-\psi(x_2) \psi'(x_1)\right) a + \mathcal{O}(a^2) \Bigg)$$

We integrate the square as

$$\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}|\Psi(x_1,x_2)|^2dx_1dx_2 \\ = N^2(a) \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \Bigg( \left(\psi(x_1) \psi'(x_2)-\psi(x_2) \psi'(x_1)\right) a + \mathcal{O}(a^2) \Bigg)^2 dx_1dx_2 \\ = N^2(a) \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \Bigg( \left( (\psi(x_1) \psi'(x_2))^2 + (\psi(x_2) \psi'(x_1))^2 -2 \psi(x_1) \psi'(x_2) \psi(x_2) \psi'(x_1) \right) a^2 + \mathcal{O}(a^3) \Bigg) dx_1dx_2 \\ = N^2(a) \Bigg( 2\int_{-\infty}^{\infty} (\psi(x_1))^2 dx_1 \int_{-\infty}^{\infty}(\psi'(x_2))^2dx_2\,a^2+ \mathcal{O}(a^3) \Bigg),$$

where the cross term drops out because $\int_{-\infty}^{\infty}\psi\psi'\,dx = \tfrac{1}{2}\left[\psi^2\right]_{-\infty}^{\infty} = 0$ for a normalizable $\psi$.

We solve for

$$N(a) = \Bigg( 2\int_{-\infty}^{\infty} (\psi(y_1))^2 dy_1 \int_{-\infty}^{\infty}(\psi'(y_2))^2dy_2\,a^2+ \mathcal{O}(a^3) \Bigg)^{-1/2}$$

and obtain

$$\Psi(x_1,x_2) = \dfrac{ (\psi(x_1)\psi'(x_2) - \psi(x_2)\psi'(x_1))a + \mathcal{O}(a^2) }{ \sqrt{ 2\int_{-\infty}^{\infty} (\psi(y_1))^2 dy_1 \int_{-\infty}^{\infty}(\psi'(y_2))^2dy_2 \,a^2+ \mathcal{O}(a^3) } }$$

We evaluate

$$\lim_{a \rightarrow 0} \Psi(x_1,x_2) = \dfrac{ \psi(x_1)\psi'(x_2) - \psi(x_2)\psi'(x_1) }{ \sqrt{ 2\int_{-\infty}^{\infty} (\psi(y_1))^2 dy_1 \int_{-\infty}^{\infty}(\psi'(y_2))^2dy_2 } }$$

This shows that if you take the continuum limit as two identical fermions approach each other, the normalized total wavefunction converges to a finite wavefunction built from two orthogonal states at the common center.
Pretty much by construction your limit is non-zero: you normalize it all the way as $a\to0$. If you simply consider the norm of your wavefunction as a function of $a$, you'll see that it's $1$ for all $a>0$. Then the limit is trivially also $1$, which is non-zero and proves your point.

"I have been told if somehow one could take the limit as two identical fermion states approach to the same state the total wavefunction goes to zero."

This of course only works if you don't try to re-normalize it under the limit to prevent it from going to zero.

I think that the error you are making is that the limit as $a\to 0$ may not exist in the sense you think it does. What you are trying to do is to normalize a wave function that has zero norm. In all cases where $a\neq 0$, what you are doing is mathematically sound and yields an unambiguous result. I'm quite sure, though, that the point $a=0$ may not have a well-defined limit, as you might get different results depending on which direction of $a$ you are approaching from. The fact that you get a $\hat a\cdot(x_1-x_2)$ in your final result is an indication of that. Physically, what goes wrong is simply that before you normalize, $\psi_{a=0}(x_1,x_2)=0$ identically.

You are misunderstanding the Pauli Exclusion Principle (PEP). It states that two fermions cannot occupy the same state. That means they cannot be represented by the exact same wave function. A wave function for a fermion can be separable into different parts, and the PEP does not exclude some of those parts being identical. For example, the wave function for fermions consists of a spatially dependent part and a spin-dependent part. Hence the spatially dependent wave functions can be identical and yet the PEP is not violated, as long as the spin wave function components are not identical.

Update: Now that some errors and misunderstandings have been cleared away (see the comments below), the issue of what relationship this finding has to the PEP can be addressed. The two-body wave function being investigated is in the form of a Slater determinant. The Slater determinant was introduced originally in the context of approximate solutions to the many-body Schrodinger equation. Slater noted that the simple product of single-particle wave functions could violate the PEP under certain conditions and, furthermore, did not reflect the indistinguishability of identical quantum particles. He introduced the determinantal form (this is how he referred to it in his lectures) to overcome these deficiencies. This form ensures that any choice of single-particle wave functions that violates the PEP will automatically yield a value of 0 for the many-body wave function. One aspect of many-body wave functions of determinantal form is that they do not require normalization as long as the single-particle wave functions themselves are orthonormal. This is not the case for the SP wave functions chosen here, but as pointed out by @MichaelFremling, the second "renormalization" employed in this analysis to overcome the fact that the two-body wave function vanishes as $a$ approaches 0 (just as Slater intended) probably does not yield a well-defined limit for the two-body wave function as $a$ goes to 0. In effect the analysis yields a finite nonzero result by multiplying 0 by infinity, but this limit could be manipulated to yield any result whatsoever. For this reason, this analysis has no relationship to the PEP.
I hesitate to say that it has no relationship to quantum mechanics, because the determinantal form does reflect an entanglement between the single-particle wave functions. The method employed here of studying the effect of separation of otherwise identical wave functions on a Many-Body system may have some implications for the study of entanglement. I'll leave it for others to explore this possibility. One final point is that the determinantal form is not an exact treatment of a Many-Body wave function. It was introduced as an improved approximation in a Many-Body solution that replaced the real two-body interactions by a mean field potential. The corrections to this approximation are known as correlation interactions. This form does include the exchange correlation (what differentiates Hartree-Fock from Hartree approximation), however, the PEP is itself a form of correlation that is probably not fully represented by the determinantal form. Despite my negative conclusions of what this means for the PEP, I can state that (for me) this has been a very thought provoking question and I encourage further research along these lines. • My question assumes that the two fermions have the spin eigenstate. – linuxfreebird Oct 26 '16 at 14:00 • Maybe I missed it, but I didn't see any spinors in your wave function. – Lewis Miller Oct 27 '16 at 0:08 • Sorry I made a typo. I meant to say that we assume the spin eigenstates are the same. – linuxfreebird Oct 27 '16 at 0:43 • OK, if the two spin functions are the same, I don't see how your limit yields a nonzero value. In the limit a goes to 0 your wave function is just a Slater determinant with two equivalent rows. – Lewis Miller Oct 27 '16 at 2:16 • The slater determinant construction of the total wavefunction has a non-trivial normalization factor. You have to take into account as the raw un-normalized wavefunction goes to zero, the normalization factor diverges to infinity. – linuxfreebird Oct 27 '16 at 14:54 You wrote the two-particle state as $$| \Psi \rangle = \big( |1\rangle_1 |2\rangle_2 - |2\rangle_1 |1\rangle_2 \big).$$ Assuming the single-particle states $| 1 \rangle$ and $| 2 \rangle$ are normalized, then this two-particle wavefunction is only correctly normalized if they are orthogonal. In general we have $$\langle \Psi | \Psi \rangle = \big( \langle 1 |_1 \langle 2 |_2 - \langle 2 |_1 \langle 1 |_2 \big) \big( |1\rangle_1 |2\rangle_2 - |2\rangle_1 |1\rangle_2 \big) \\ = \langle 1 | 1 \rangle \langle 2 | 2 \rangle - \langle 1 | 2 \rangle \langle 2 | 1 \rangle - \langle 2 | 1 \rangle \langle 1 | 2 \rangle + \langle 2 | 2 \rangle \langle 1 | 1 \rangle \\ = 2 \left( 1 - |\langle 1 | 2 \rangle|^2 \right)$$ So the correctly normalized two-particle state is actually $$| \Psi \rangle = \frac{1}{\sqrt{2 \left( 1 - |\langle 1 | 2 \rangle|^2 \right)}} \big( |1\rangle_1 |2\rangle_2 - |2\rangle_1 |1\rangle_2 \big).$$ (This is just the more general version of the calculation that you performed.) If you were to take the two-particle wavefunction that you originally defined - one which is antisymmetrized but not normalized - then it would indeed go to zero in the limit where "the two wavefunctions approach each other," i.e. $\langle 1 | 2 \rangle \to 1$. But this fact is not particularly interesting or important, because a physically useful wavefunction should be normalized. 
The properly normalized wavefunction $| \Psi \rangle$ does not go to zero because the normalization constant diverges to infinity at just the right rate to keep the wavefunction normalized.

The Pauli Exclusion Principle is tremendously physically important, but it doesn't refer to the process you are considering of moving the two entire wavefunctions together. Instead, it refers to a situation where the two-body wavefunction is fixed, and says that the probability amplitude of finding the two particles very close together, and thus $P(x,x)$, goes to zero. Consider the probability of finding the particles at positions $x$ and $y$; this is given by $$P(x,y) = \Big| \big( \langle x |_1 \langle y |_2 \big) | \Psi \rangle \Big|^2 = \mathcal{N}^2 \left| \langle x | 1 \rangle \langle y | 2 \rangle - \langle x | 2 \rangle \langle y | 1 \rangle \right|^2.$$ Now the point is that the wavefunction $| \Psi \rangle$, and therefore the normalization constant $\mathcal{N}$, stays fixed as you take the limit $y \to x$, and so the probability amplitude vanishes. However, in this case you're not actually changing the state $| \Psi \rangle$ or the two-particle wavefunction itself.
# Change is the only constant in agile

[bibshow]

How much change can a project withstand before it is too much? A requirements change rate of above 20% is probably too much [bibcite key="citeulike:13415962"]. Typical rule-of-thumb figures for the requirements change rate in software development are cited as 1-3% [bibcite key="citeulike:321639"]. But the rate of change is accelerating. In 2002, 10% was not uncommon [bibcite key="citeulike:13415966"]. How high is it now?

## Causes for Requirements Churn

What causes requirements churn? There are basically two schools:

- One school states that requirements change because they were not sufficiently specified from the beginning. If only more care had been taken upfront, there would not be any problems. Cf [bibcite key="citeulike:321639"]
- The other school of thought states that requirements change because we have limited knowledge. As the solution, the market and our understanding evolve, so do our requirements. Cf [bibcite key="citeulike:5858976"]

## Requirements churn and project failure

There is a rule of thumb that states that only a third of projects of six months or longer are successful. But it might not be grounded in reality. Research shows that the project success rate is improving and that today as many as 50% of software development projects are successful and only about 30% are total failures. A survey shows that 33% of project failure is attributed to (excessive) requirements change. [bibcite key="citeulike:4540645"]

## Change compounded

Change adds up, just like interest. Requirements may change by 1, 3, 10 or even 20% per month. How long will it be before the compounded change is 100%? Assuming a constant rate of change $r$, the accumulated change factor $A$ after $n$ months is $A=(1+r)^n$. Solving for $n$ we get $n=\frac{\log A}{\log(1+r)}$. For example, at a 10% monthly change rate the requirements have effectively doubled after $n=\frac{\log 2}{\log 1.1}\approx 7.3$ months.

Table: months until accumulated change reaches 100%, for various monthly change rates (source: owned by the author).

An increased change rate has a dramatic impact on accumulated change.

## Requirements churn and project success

There are logically two ways to handle change: avoiding change and being ready for change.

### Avoiding change

Avoiding change can basically be done in two ways: having the right requirements from the beginning, or keeping the project so short that change won't have time to accumulate.

Having the right requirements must be balanced with the need to respond to change. Naturally, there is a balance to strike, but simulations can be used to find that balance. One study recommends spending about 10% of the development budget on systems engineering to avoid rework based on misunderstood or poorly formulated requirements. [bibcite key="citeulike:4540645,citeulike:321639"]

The other solution is to use short iterations (aka sprints). Not that much change can accumulate in two or three weeks.

### Being ready for change

Just like you can't build to last on weak foundations, you can't stay agile without a well-engineered solution. Even if we can avoid change for short intervals, sooner or later it will catch up with us. Then we need to be ready for it. And then, what will enable you to deliver change fast? What will enable you to keep a short time to market? In my opinion, what will save you then is a well-engineered system. Just like you can't hope to build a house on shifting ground, you can't build a system for the long run without good engineering. All the old basics of modularization, clear responsibilities and so on still apply. But it is also important to keep up the quality at all times. Continuous integration and DevOps are excellent ways to do that.
Is there a conflict between doing upfront work and building a solid foundation, and being agile? Not necessarily; it is possible, and indeed necessary, to strike a balance between the two interests if there is going to be anything but short-term agility or long-term rigidity. [bibcite key="citeulike:6847361,citeulike:13416094,citeulike:13416090"]

## References

[/bibshow]

Greger Wikstrand, Ph.D. M.Sc. is a TOGAF 9 certified enterprise architect with an interest in e-health, m-health and all things agile as well as processes, methods and tools. Greger Wikstrand works as a consultant at Capgemini where he alternates between enterprise agile coaching, problem solving and designing large scale e-health services ...

1. Great post Greger. I think the main reason why Agile has become so popular in recent years is that most life cycle models were designed assuming requirements were well known up front. Agile methodologies assume that requirements will often change. Also, if a company achieves a competitive advantage using Agile, then the competitors usually find this out and start thinking their survival might depend on it. I agree that the idea that requirements can be known upfront has many challenges. Often what is called a requirement is simply a wish or an idea; if it's not feasible, it cannot very well be a requirement, right?

2. Marcel Körtgen: Great read! Are you sure the quoted math is 100% correct? A = 1 / (1+r)^n — with (1+r)^n in the denominator, accumulated change would decrease when the change rate r gets bigger. I think it should be more like the accumulation function a(t)=(1+i)^t

   • Yes, it seems you are right. I must have made some mistake here or at least I have forgotten how I reasoned. 10% change rate gives a doubling when 2=1.1^n, which gives n=log(2)/log(1.1)=7.3, which I put in the table. Seems the calculation is correct but the formula is wrong. 😉
# Matrix multiplication of an nxn matrix as scaling and rotation

1. Jul 15, 2008

### chaoseverlasting

Geometrically, multiplication by an nxn matrix is a scaling and rotation of a vector in n dimensions, true? So when you find the inverse of a matrix, what you're actually doing is finding a transformation such that in the 'transformed space' the vector is a unit vector. If the inverse matrix ($$A^{-1}$$) is plotted in the original space, then does it have any relation to the original matrix ($$A$$)? What I mean by that is, if you have a function $$y=f(x)$$ in 2D space, and you find the inverse function $$x=f^{-1}(y)$$, the inverse function is a reflection of the function $$y=f(x)$$ about the line $$y=x$$. Does the inverse matrix ($$A^{-1}$$) have any such relation to the original matrix ($$A$$)?

2. Jul 15, 2008

### roam

Re: Matrices

The relation between $$A^{-1}$$ and $$A$$... the inverse of $$A$$, where it exists, is denoted by $$A^{-1}$$, and $$AA^{-1} = I = A^{-1}A$$, where $$I$$ is the identity matrix.

3. Jul 16, 2008

### HallsofIvy

Re: Matrices

True, but I don't believe that was the question he was asking. What do you mean by "plotted in the original space"? An n by n matrix operates on n-dimensional vectors and itself exists in an $$n^2$$-dimensional vector space. In order to "graph" A and A-1 you would need an $$n^4$$-dimensional graph.

4. Jul 16, 2008

### chaoseverlasting

Re: Matrices

Why would you need an $$n^4$$-dimensional graph? Say for a 3x3 matrix, you would need a 9-D space then. It's kind of hard for me to imagine that, but say if we took each row or column to be a plane in 3D (where I suppose each plane corresponds to a projection of 9-D space on 3D), would there be a relation between the planes of $$A$$ and $$A^{-1}$$?

5. Jul 16, 2008

### uman

Re: Matrices

A plane? Wouldn't each row or column of a three by three matrix simply be a vector in $$F^3$$?

6. Jul 17, 2008

### chaoseverlasting

Re: Matrices

Yeah, each row or column of a 3x3 matrix would be a vector, but if you assume that vector to be the normal vector passing through a given point then you get three planes depending on whatever matrix equation you use. I don't think a standalone 3x3 matrix could represent planes by itself. Even so, I'm guessing here, the projection would give you three vectors instead of three planes. So you would have a set of three vectors corresponding to $$A$$ and three vectors corresponding to $$A^{-1}$$; would the vectors of A and A-1 be correlated in any way?

7. Jul 23, 2008

### chaoseverlasting

Re: Matrices

Anyone? Please :p ?

8. Jul 23, 2008

### olliemath

Re: Matrices

In general, any function (including e.g. matrices) $$f:R^n\rightarrow R^n$$ has a graph in $$R^{2n}$$ which is the set $$\{(\textbf{x},f(\textbf{x})):\textbf{x}\in R^n\}$$. If f is invertible then the graph of $$f^{-1}$$ is $$\{(f(\textbf{x}),\textbf{x}):\textbf{x}\in R^n\}$$. You get one from the other by reflection through a hyperplane, and you can easily write out the matrix for this. For a function on R this is very useful as a visual aid. In general, not so. I think what you want to know is whether we can read off the value that A^{-1} assigns to x using the graph of A? Because we can't visualise the graph, we can't.

Last edited: Jul 23, 2008

9. Jul 23, 2008

### chaoseverlasting

Re: Matrices

That's exactly it. How would you go about writing a matrix for this? Perhaps we can't use it as a visual aid, but it would simplify a lot of other calculations.

10.
Aug 2, 2008

### olliemath

Re: Matrices

Use $$B:=\left[ \begin{array}{cc} 0 & I \\ I & 0 \end{array} \right]$$ where I is the identity on $$R^n$$. The coordinates of a general point on the graph are $$(Ix,Ax)$$, so the coords of a general point on the graph of the inverse are $$B(Ix,Ax)=(Ax,Ix)$$. One usually writes out the matrix $$(A,I):R^n\rightarrow R^{2n}$$ and uses Gaussian elimination to convert it to $$(I,A^{-1})$$, which again gives points on the graph of the inverse, but in a more useful fashion.
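For concreteness, a quick sketch of the $n=1$ case: take $A=(2)$, so the graph of $A$ is the line $\{(x,2x) : x \in R\} \subset R^2$. Applying $B=\left[\begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array}\right]$ gives $B(x,2x)=(2x,x)$, i.e. the set $\{(y,\tfrac{1}{2}y)\}$, which is exactly the graph of $A^{-1}=(\tfrac{1}{2})$: the reflection of the original line through the line $y=x$.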
# Math Help - Gauss-Jordan Method of elimination 1. ## Gauss-Jordan Method of elimination HI Guys, Can someone pls help me and teach the easiest way to solve linear problems using the Gauss-Jordan Method? Any new techniques? Thanks, 2. Originally Posted by adrian1116 on a different post Hi, I am asking someone's help to show any easiest technique to solve linear equations using the Gauss-Jordan Method. Below is an example: Z-3X1-5X2 = 0 X1+X3 = 4 2X2+X4 = 12 3X1+2X2+X5 = 18 where X3, X4 AND X5 are the slack variables. solve for values of Z, X1 and X2 using the said method. Thanks and appreciate your soonest feedback. HI Guys, Can someone pls help me and teach the easiest way to solve linear problems using the Gauss-Jordan Method? Any new techniques? Thanks, 3. Originally Posted by Quick Originally Posted by adrian1116 on a different post Hi, I am asking someone's help to show any easiest technique to solve linear equations using the Gauss-Jordan Method. Below is an example: Z-3X1-5X2 = 0 X1+X3 = 4 2X2+X4 = 12 3X1+2X2+X5 = 18 where X3, X4 AND X5 are the slack variables. solve for values of Z, X1 and X2 using the said method. Thanks and appreciate your soonest feedback. HI Guys, Can someone pls help me and teach the easiest way to solve linear problems using the Gauss-Jordan Method? Any new techniques? Thanks, Nor should you tag a new question on to the end of an existing thread - it confuses the helpers (well it confuses me anyway). RonL 4. Since TPH closed the other thread, there is no place else to answer this question. HI Guys, Can someone pls help me and teach the easiest way to solve linear problems using the Gauss-Jordan Method? Any new techniques? X1+X3 = 4 2X2+X4 = 12 3X1+2X2+X5 = 18 Z-3X1-5X2 = 0 Thanks, Hi, Adrian. How much do you know about this stuff? I'm experimenting here to see if it is possible to explain this in a short space. Here is your example worked out. Set up this tableau. Do you see how the tableau relates to your problem? We start off with the slack variables equal to the constant terms in the equations (for example $x_3 = 4$) and $z = 0.$ $ \begin{tabular}{|r|r|rrrrr|} \hline & b & x_1 & x_2 & x_3 & x_4 & x_5 \\ \hline x_3 & 4 & \fbox{1} & 0 & 1 & 0 & 0 \\ x_4 & 12 & 0 & 2 & 0 & 1 & 0 \\ x_5 & 18 & 3 & 2 & 0 & 0 & 1 \\ \hline z & 0 & -3 & -5 & 0 & 0 & 0 \\ \hline \end{tabular} $ Do you know what it means to pivot? It means to reduce a column using row operations to zeroes in every row but one row that has a 1 in it. You apply those same row operations to all the other columns. To get the next tableau, pivot using the row and column with the box. This means we are bringing variable $x_1$ into the solution replacing variable $x_3 .$ The next tableau is the result of the first pivoting. Keep track in the first column of what row and column was used to pivot. $ \begin{tabular}{|r|r|rrrrr|} \hline & b & x_1 & x_2 & x_3 & x_4 & x_5 \\ \hline x_1 & 4 & 1 & 0 & 1 & 0 & 0 \\ x_4 & 12 & 0 & \fbox{2} & 0 & 1 & 0 \\ x_5 & 6 & 0 & 2 & -3 & 0 & 1 \\ \hline z & 12 & 0 & -5 & 3 & 0 & 0 \\ \hline \end{tabular} $ To get to the next tableau, pivot using the row and column with the box. This means we are bringing variable $x_2$ into the solution replacing variable $x_4 .$ The next tableau is the result of the second pivoting. Again, keep track in the first column of what row and column was used to pivot. This is a solution to the equations: $x_1 = 4,\ x_2 = 6, x_5 = -6, z = 42.$ It's a solution because variables $x_1\text{ and }x_2$ are in as desired. 
There were no constraints on the slack variables so $x_5$ turns out negative. $ \begin{tabular}{|r|r|rrrrr|} \hline & b & x_1 & x_2 & x_3 & x_4 & x_5 \\ \hline x_1 & 4 & 1 & 0 & 1 & 0 & 0 \\ x_2 & 6 & 0 & 1 & 0 & 1/2 & 0 \\ x_5 & -6 & 0 & 0 & -3 & -1 & 1 \\ \hline z & 42 & 0 & 0 & 3 & 5/2 & 0 \\ \hline \end{tabular} $ Well Adrian, did this make any sense at all to you? 5. Originally Posted by JakeD Since TPH closed the other thread, there is no place else to answer this question. And now it is a thread in its own right! RonL
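As a quick check, substituting the solution from the final tableau back into the original equations:

$$x_1 + x_3 = 4 \Rightarrow x_3 = 0, \qquad 2x_2 + x_4 = 12 \Rightarrow x_4 = 0,$$

$$3x_1 + 2x_2 + x_5 = 18 \Rightarrow 12 + 12 + x_5 = 18 \Rightarrow x_5 = -6,$$

$$z = 3x_1 + 5x_2 = 3\cdot 4 + 5\cdot 6 = 42,$$

all in agreement with the $b$ column of the final tableau.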
## Saturday, May 18, 2019

Note that x^T denotes the transpose of x, and Ax ≤ b means that the inequality is taken element-wise over the vectors Ax and b. The above objective function is convex if and only if H is positive-semidefinite.

The quadprog function expects a problem of the above form, defined by the parameters H, f, A, b, Aeq, beq, l, u, x0; H and f are required, the others are optional (empty matrix []).

(Figures: general formula; example; rewrite into the pattern above.)

Code in MATLAB:

clc; clear all; close all;
% Problem data: minimize 0.5*x'*H*x + f'*x subject to A*x <= b, x >= l
H = diag([1; 0]);
f = [3; 4];
A = [-1 -3; 2 5; 3 4];
b = [-15; 100; 80];
l = zeros(2,1);
Aeq = [];
Beq = [];
u = [];
x0 = [];
options = optimset('Algorithm','interior-point-convex');
% Solve the quadratic program
[x, fval] = quadprog(H, f, A, b, Aeq, Beq, l, u, x0, options);

Output

>> x
x =
    0.0000
    5.0000
>> fval
fval =
   20.0000
>>

We can verify that x = 0, y = 5, with an optimal value 20.
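A quick hand check of the reported optimum, with $(x,y)=(0,5)$:

$$\tfrac{1}{2}\mathbf{x}^T H \mathbf{x} + f^T \mathbf{x} = \tfrac{1}{2}(1\cdot 0^2 + 0\cdot 5^2) + 3\cdot 0 + 4\cdot 5 = 20,$$

and all three inequality constraints hold: $-0 - 3\cdot 5 = -15 \le -15$, $2\cdot 0 + 5\cdot 5 = 25 \le 100$, and $3\cdot 0 + 4\cdot 5 = 20 \le 80$.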
# Bled'11 - 7th Slovenian International Conference on Graph Theory

19-25 June 2011 Bled, Slovenia Europe/Ljubljana timezone

# Bounds on the diameter of Cayley graphs of the symmetric group

Presented by Dr. Pablo SPIGA Type: Oral presentation Track: Cayley Graphs

## Content

For a group $G$ and a set $S$ of generators of $G$, the diameter of the {\em Cayley graph} $\Gamma(G,S)$ is the maximum over the group elements $g\in G$ of the length of the shortest expression $g=s_1^{i_1}\cdots s_m^{i_m}$ with $s_k\in S$ and $i_k\in \{-1,1\}$. A first investigation of the diameter of Cayley graphs over groups in general was undertaken by Erd\H{o}s and R\'enyi. Later Babai and Seress obtained asymptotic estimates on the diameter of $\Gamma(G,S)$ depending heavily on the group structure of $G$. In light of these results, Babai and Seress proposed the following conjecture.

\smallskip \noindent\textbf{Conjecture. } There exists $c>0$ such that, for a non-abelian simple group $G$ and a set $S$ of generators of $G$, we have $\textrm{diam}(\Gamma(G,S))\leq (\log |G|)^c$.

\smallskip In this talk we describe the current state of this conjecture and we present some recent results for the case where $G$ is the alternating group.

## Place

Location: Bled, Slovenia Address: Best Western Hotel Kompas Bled
## College Algebra (11th Edition)

Published by Pearson

# Chapter R - Section R.3 - Polynomials - R.3 Exercises: 44

#### Answer

-6$x^{3}$ - 3$x^{2}$ - 4x + 4

#### Work Step by Step

1. Distribute the negative signs.
-(8$x^{3}$ + x - 3) + (2$x^{3}$ + $x^{2}$) - (4$x^{2}$ + 3x - 1)
= -8$x^{3}$ - x + 3 + 2$x^{3}$ + $x^{2}$ - 4$x^{2}$ - 3x + 1

2. Group like terms together.
(-8$x^{3}$ + 2$x^{3}$) + ($x^{2}$ - 4$x^{2}$) + (-x - 3x) + (3 + 1)

3. Add like terms.
-6$x^{3}$ - 3$x^{2}$ - 4x + 4
# Tag Info

9

In fact, "Roughly speaking, category theory is graph theory with additional structure to represent composition." is a good summary of the connection between the notion of a graph (meaning here: directed multigraph) and the notion of a category. Here is a way of how to make this precise: Let $O$ be a set ("set of vertices"). Then we have a category ...

7

Consider the category $\mathsf{C}$ with a single object $x$ and two morphisms: $\operatorname{id}_x$ and $f : x \to x$ such that $f \circ f = f$. Then $x$ is of course the product of all the objects in $\mathsf{C}$. But it's not an initial object, because there are two morphisms $x \to x$.

6

Here are some very basic examples, but which you don't find so often mentioned. 1. For real numbers $r$ and integers $z$ we have $$\iota(z) \leq r \Leftrightarrow z \leq \lfloor r \rfloor,$$ where $\iota$ is the inclusion map from integers to real numbers and $\lfloor - \rfloor$ is the floor function. That means that $\iota : (\mathbb{Z},\leq) \to$ ...

5

Perhaps the exercise was meant for the category of rings. Then the inclusion $\mathbb{Z}\hookrightarrow \mathbb{Q}$ is indeed an epimorphism. See also the discussion on MSE here. For the category of abelian groups, epimorphisms and surjective morphisms are the same.

5

This is false in the category of abelian groups. Consider the composition $\mathbb Z \hookrightarrow \mathbb Q \to \mathbb Q / \mathbb Z$.

5

I hope what follows will clear up your confusion. A lax monoidal functor consists of the following data (satisfying some axioms that I won't spell out): One functor $\color{red}{F : \mathsf{C} \to \mathsf{D}}$. This means that for all objects $A \in \mathsf{C}$ you have an object $F(A) \in \mathsf{D}$, and for all morphisms $f : A \to B$ in $\mathsf{C}$ ...

4

Let $S$ be a set. Then $G \times S$ can be equipped with a $G$-action by having $G$ act by left-multiplication on itself and trivially on $S$. This is a left adjoint to the forgetful functor -- for $T$ an arbitrary $G$-set, one has $$\text{Hom}_\text{G-set}(G \times S, T) = \text{Hom}_\text{set}(S, T)$$ by the rule $f \mapsto [s \mapsto f(1,$ ...

4

The interplay between CT and MT is pretty well established. The term to search for is locally accessible categories. Another subject to look at may be topos theory, again with plenty of material online. A complete, or even a very partial, list of applications of CT and MT will require a lot of bytes. MT has applications in algebra and in analysis, and that ...

3

Category theory and graph theory are similar in the sense that both are visualized by arrows between dots. After this the similarities pretty much stop, and both have different reasons for their existence. In category theory, we may have a huge number of dots, and these dots do often represent some abstract algebraic structure or other object with some ...

3

I'm no expert in category theory, but here are some easy examples I can think of: Categories with closed monoidal structure are really great, because they give you a nicely behaved notion of a "mapping object". A closed monoidal category $\mathsf{C}$ is one where the functor $(-)\otimes X$ has a right adjoint $[X, (-)]$ called the internal hom. This ...

3

Suppose $f: A\to B$ is an epimorphism. If $A\neq 0$, then $A$ has a point by assumption, and hence the axiom of choice guarantees the existence of $g: B\to A$ such that $A\xrightarrow{f} B\xrightarrow{g} A\xrightarrow{f} B = A\xrightarrow{f} B$. Since $f$ is epi, we may cancel it from the left, hence $B\xrightarrow{g} A\xrightarrow{f} B = \text{id}_B$.
In ...

3

If I understand you correctly, you want the pushout of this diagram: $$\require{AMScd} \begin{CD} H @>{\iota}>> G \\ @V{\iota}VV \\ G \end{CD}$$ It's possible to describe it "explicitly": it is the quotient of the free product $G * G$ by the normal subgroup $N$ generated by the relations $\iota_1(h) \equiv \iota_2(h)$ where $\iota_1$ is the ...

3

It is not claimed that the category of $A$-schemes has a zero object. But the category of group $A$-schemes has a zero object. This has nothing to do with schemes. If $C$ is any category with finite products, then $\mathsf{Grp}(C)$ has a zero object, the "trivial group" $T$. The underlying object is $1$, the final object, and the multiplication is the unique ...

2

If you want there to exist at least one $S$ for every possible $P$, you will need very restrictive hypotheses. For example, you might require $\mathcal T$ to be the free category on some graph. This includes the case where $\mathcal T$ is the poset of natural numbers, but not the nonnegative reals. I think the correct thing to say is that ...

2

The category of Banach spaces is locally $\aleph_1$-presentable.

2

The notion of distributor is not new and has been used in near-ring theory for a long time. It was apparently introduced by Fröhlich [2, 3] in 1958. Here are a few other relevant references. [1] L. Esch, Commutator and Distributor Theory in Near-Rings, Doc. Diss. (Boston Univ., 1974). [2] A. Fröhlich, Distributively Generated Near-Rings: (I. Ideal Theory) ...

2

This is an instance of Kan extensions. Let $\mathbf G$ be the one-object category associated to the group $G$, and $\mathbf 1$ the one associated to the trivial group. Then $\mathbf{G{-}Sets}$ is nothing else than $[\mathbf G,\mathbf{Sets}]$ and the forgetful functor is the functor $$i^\ast \colon [\mathbf G,\mathbf{Sets}] \to [\mathbf 1,\mathbf{Sets}]$$ ...

2

First, let us write $UG$ for the underlying set of a group $G$. Well, as ZhenLin pointed out, a group as a pair in the set-theoretic sense, $G=(UG,\,\circ)=\{\{UG\},\{UG,\circ\}\}$, has not much to do with the elements of $UG$. In this sense $\mathbf{Grp}$ is not strictly a subcategory of $\mathbf{Set}$. There exists, however, an injective functor $\mathbf{Grp}\to\mathbf$ ...

2

Your question, really, is about necessary conditions for a ($\mathbf{Set}$-valued) presheaf to be representable. I think that business with automorphisms is actually a red herring – let me illustrate. Let $A$ be a set and let $\mathcal{F} : \mathbf{Set}^\mathrm{op} \to \mathbf{Grpd}$ be defined by $\mathcal{F} (X) = (\mathbb{B} \mathrm{Aut} (A))^X$, where ...

2

Allegedly, one can prove this directly, but I have never seen the details. (For instance, the same fact is asserted without proof as Theorem 1.9 in [Moore, Semi-simplicial complexes and Postnikov systems].) Here is a more high-level proof. Recall that the class of anodyne extensions of simplicial sets is defined by induction: Any horn inclusion is an ...

2

[...] an object in the category should always be some kind of mathematical structure e.g. a set, ring, monoid, etc. [...] This is a really restricted point of view on category theory. Of course, the simplest examples that come to mind when talking about categories are such, but if category theory was only about those, it would be nothing else than a ...

1

One reason why categories of mathematical structures (where the morphisms are structure-preserving maps) are nice is that universal constructions like limits and colimits (1) exist and (2) are easy to describe.
This usually boils down to the fact that limits and colimits in the category $\mathsf{Set}$ of sets are easy to describe, and for most "categories of ...

1

Categories of modules and opposites of such categories can be distinguished by the behavior of the natural map $\bigoplus_i M_i \to \prod_i M_i$ from a countable coproduct to the corresponding countable product. In categories of modules this map is a monomorphism but usually not an epimorphism, while in their opposites this map is an epimorphism but usually ...

1

For a good concrete example, I would suggest that you look into synthetic differential geometry. This is an axiomatic approach to differential geometry which takes place in a smooth topos. The theory is very beautiful and intuitive, and allows you to rigorously reason using infinitesimals. Since this is a purely axiomatic theory, you can come up with a ...

1

The unit-counit definition of an adjunction makes sense in any 2-category, and so it makes sense in any monoidal category, where it recovers the definition of a dualizable object. So you can motivate this somewhat more general notion using the familiar special case of vector spaces: when a vector space $V$ has a dual $V^{\ast}$, there is a unit map $1 \to$ ...

1

There are two possible meanings: For any pullback square as below, $$\require{AMScd} \begin{CD} X' @>>> X \\ @VVV @VVV \\ Y' @>>> Y \end{CD}$$ if $X \to Y$ is a monomorphism, then $X' \to Y'$ is also a monomorphism. For any commutative diagram of the form below, $$\begin{CD} X' @>>> X \\ @VVV @VVV \\ Y' @>>> Y \\ @VVV \end{CD}$$ ...

1

As seems to be my habit, I'm going to say a lot of things here whose details I have not checked. Use at your own risk :). Given a metric space $(X,d)$, there is a geodesic remetrization $(X,d_G)$, equipped with a bijective short map $(X,d_G) \to (X,d)$. The geodesic metric is given by $d_G(p,q) = \sup_{\epsilon>0} \inf_{\{p = p_0,p_1,\dots,p_n=q$ ...

1

Hint 1: You are working with the universal property of limits in general. For most people, this certainly isn't the best starting point. Try writing out what the universal property means in this particular case (discrete domain) explicitly; it simplifies considerably. You can check what you got or (if you can't do it) what you're supposed to get by searching ...

1

This is not exactly an answer to your question, except that it is relevant to the question of homotopical invariants preserving at least some colimits. For example, the fundamental group of based spaces does not preserve all colimits: the usual Seifert-van Kampen Theorem determines the fundamental group of a union $X=U \cup V$ of based spaces if $U,V$ are ...

1

Assume that there exists a natural transformation $\alpha : {\bf Sym} \to {\bf Ord}$, draw its naturality diagram, and apply it to the identity permutation over a simple set like $B = \{ 0, 1 \}$. Let $f : B \to B$ be defined by $f(0) = 1$, $f(1) = 0$. Look at $\alpha_B \circ {\bf Sym}(f)$ and ${\bf Ord}(f) \circ \alpha_B$ applied to the identity ...
4-51. For each equation below, solve for the variable. Check your solutions, if possible, and show all work.

a. $3p−7+9−2p=p+2$, solve for $p$

First, try rewriting all of the subtraction as addition, then attempt to solve by combining like terms.

$3p+(−7)+9+(−2p)=p+2$

$p+2=p+2$

$p=p$

Since $p=p$ (or $2=2$, depending on how you solved the problem), $p$ can equal any number.

b. $−2x+5+(−x)−5=0$, solve for $x$

Refer to parts (a) and (d).

c. $12=r+6−2r$, solve for $r$

Refer to parts (a) and (d).

d. $−(y^2−2)=y^2−5−2y^2$, solve for $y$

One strategy is to rewrite all of the subtraction within the problem as addition.

$−y^2+2=y^2+(−5)+(−2y^2)$

Now simplify, starting with combining like terms and then isolating the variable.

$−y^2+2=−y^2+(−5)$

$2=−5$

No solution, because $2$ is not equal to $−5$.
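As a sketch of how the hint plays out for part (c): combining like terms on the right gives

$$12 = r + 6 - 2r = -r + 6 \;\Longrightarrow\; r = -6,$$

and checking, $(-6) + 6 - 2(-6) = -6 + 6 + 12 = 12$, as required.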
# NCERT Solutions for Class 12 Physics Chapter 5 Magnetism and Matter

## NCERT Solutions for Class 12 Physics Chapter 5 – Free PDF Download

The NCERT Solutions for Class 12 Physics Chapter 5 Magnetism and Matter provide detailed, step-by-step solutions for all the problems in the NCERT textbook. Magnetism and Matter covers some of the most important concepts in Class 12 Physics, and many of them require a good deal of imagination. Magnetism is not only about the uses of magnets but about the underlying phenomena and the concepts used in various applications. The NCERT Solutions for Class 12 Physics will make you comprehensively well versed with the concepts involved in Magnetism and Matter. These solutions are updated for the latest Term I CBSE Syllabus for 2021-22. You can download the free PDF of the NCERT Solutions for Class 12 Physics Chapter 5 from the link given below.

### Class 12 Physics NCERT Solutions Magnetism and Matter Important Questions

Q 5.1)
(a) A vector needs three quantities for its specification. Name the three independent quantities conventionally used to specify the earth’s magnetic field.
(b) The angle of dip at a location in southern India is about 18°. Would you expect a greater or smaller dip angle in Britain?
(c) If you made a map of magnetic field lines at Melbourne in Australia, would the lines seem to go into the ground or come out of the ground?
(d) In which direction would a compass free to move in the vertical plane point to, if located right on the geomagnetic north or south pole?
(e) The earth’s field, it is claimed, roughly approximates the field due to a dipole of magnetic moment $8 \times 10^{22}\ J\,T^{-1}$ located at its centre. Check the order of magnitude of this number in some way.
(f) Geologists claim that besides the main magnetic N-S poles, there are several local poles on the earth’s surface oriented in different directions. How is such a thing possible at all?

Answer:

(a) The three independent conventional quantities used for determining the earth’s magnetic field are:
(i) magnetic declination,
(ii) angle of dip, and
(iii) the horizontal component of the earth’s magnetic field.

(b) The angle of dip at a point depends on how far the point is located with respect to the North Pole or the South Pole. Since Britain is closer to the magnetic North Pole than southern India is, the angle of dip would be greater in Britain (about $70°$).

(c) It is assumed that a huge bar magnet is submerged inside the earth with its north pole near the geographic South Pole and its south pole near the geographic North Pole. Magnetic field lines originate from the magnetic north pole and terminate at the magnetic south pole. Hence, in a map depicting the earth’s magnetic field lines, the field lines at Melbourne, Australia would seem to come out of the ground.

(d) If a compass is placed at the geomagnetic North Pole or South Pole, the compass is free to move only in the horizontal plane, while the earth’s field there is exactly vertical. In such a case, the compass can point in any direction.

(e) Magnetic moment, M = $8\times 10^{22}\;J\;T^{-1}$
Radius of earth, r = $6.4\times 10^{6}\; m$
Magnetic field strength, B = $\frac{\mu _{0}M}{4\pi r^{3}}$
Where, $\mu_{0}$ = permeability of free space = $4\pi\times 10^{-7}\;T\,m\,A^{-1}$
Therefore, B = $\frac{4\pi\times 10^{-7}\times 8\times 10^{22}}{4\pi \times (6.4\times 10^{6})^{3}} = 0.3\; G$
This quantity is of the order of magnitude of the observed field on earth.
(f) Yes, there can be several local poles on the earth’s surface oriented in different directions. A magnetized mineral deposit is an example of a local N-S pole.

Q 5.2) Answer the following questions:
(a) The earth’s magnetic field varies from point to point in space. Does it also change with time? If so, on what time scale does it change appreciably?
(b) The earth’s core is known to contain iron. Yet geologists do not regard this as a source of the earth’s magnetism. Why?
(c) The charged currents in the outer conducting regions of the earth’s core are thought to be responsible for earth’s magnetism. What might be the ‘battery’ (i.e., the source of energy) to sustain these currents?
(d) The earth may have even reversed the direction of its field several times during its history of 4 to 5 billion years. How can geologists know about the earth’s field in such a distant past?
(e) The earth’s field departs from its dipole shape substantially at large distances (greater than about 30,000 km). What agencies may be responsible for this distortion?
(f) Interstellar space has an extremely weak magnetic field of the order of $10^{-12}\ T$. Can such a weak field be of any significant consequence? Explain.

Answer:

(a) The earth’s magnetic field does vary with time; it takes a couple of hundred years to change by an appreciable amount. The variation of the earth’s magnetic field with time cannot be ignored.

(b) The iron core at the earth’s centre cannot be the source of the earth’s magnetism because it is molten, and molten iron is not ferromagnetic.

(c) Radioactivity in the earth’s interior is thought to be the source of energy that sustains the currents in the outer conducting regions of the earth’s core. These charged currents are considered to be responsible for the earth’s magnetism.

(d) The direction of the earth’s magnetic field was weakly recorded in rocks as they solidified, at various times over the past 4 to 5 billion years. Analysis of this rock magnetism gives geologists clues about the geomagnetic history, including field reversals.

(e) Because of the ionosphere, the earth’s field deviates from its dipole shape substantially at large distances. The motion of ions in the ionosphere produces magnetic fields that modify the earth’s field in this region.

(f) Yes. An extremely weak magnetic field still bends a charged particle’s path into a circle, though one of very large radius. Over laboratory distances this deflection may not be detectable, but over the gigantic distances of interstellar space it can significantly alter the passage of charged particles.

Q 5.3) A short bar magnet placed with its axis at 30° with a uniform external magnetic field of 0.25 T experiences a torque of magnitude equal to $4.5\times 10^{-2}\; J$. What is the magnitude of the magnetic moment of the magnet?

Answer:

Magnetic field strength, B = 0.25 T
Torque on the bar magnet, T = $4.5\times 10^{-2}\; J$
The angle between the bar magnet and the external magnetic field, $\theta = 30°$
Torque is related to the magnetic moment (M) as: $T = MB \sin\theta$
$∴ M = \frac{T}{B \sin\theta }$ = $\frac{4.5\times 10^{-2}}{0.25\times \sin 30°} = 0.36\; J\; T^{-1}$
Hence, the magnetic moment of the magnet is $0.36\; J\; T^{-1}$.

Q 5.4) A short bar magnet of magnetic moment m = 0.32 J T⁻¹ is placed in a uniform magnetic field of 0.15 T. If the bar is free to rotate in the plane of the field, which orientation would correspond to its (a) stable, and (b) unstable equilibrium? What is the potential energy of the magnet in each case?
Answer:

Moment of the bar magnet, M = $0.32\; J\; T^{-1}$
External magnetic field, B = 0.15 T

(a) The bar magnet is aligned along the magnetic field. This system is in stable equilibrium. Hence, the angle $\theta$ between the bar magnet and the magnetic field is $0°$.
Potential energy of the system = $-MB \cos\theta$ = $-0.32\times 0.15\; \cos 0°$ = $-4.8\times 10^{-2}\; J$

(b) The bar magnet is oriented at $180°$ to the magnetic field. Hence, it is in unstable equilibrium. $\theta = 180°$
Potential energy = $-MB \cos\theta$ = $-0.32\times 0.15\; \cos 180°$ = $4.8\times 10^{-2}\; J$

Q 5.5) A closely wound solenoid of 800 turns and area of cross-section $2.5 \times 10^{-4}\ m^2$ carries a current of 3.0 A. Explain the sense in which the solenoid acts like a bar magnet. What is its associated magnetic moment?

Answer:

Number of turns in the solenoid, n = 800
Area of cross-section, A = $2.5\times 10^{-4}\;m^{2}$
Current in the solenoid, I = 3.0 A
A current-carrying solenoid behaves like a bar magnet because a magnetic field develops along its axis, i.e., along its length.
The magnetic moment associated with the given current-carrying solenoid is calculated as:
M = n I A = $800\times 3\times 2.5 \times 10^{-4}$ = $0.6\; J\; T^{-1}$

Q 5.6) If the solenoid in Exercise 5.5 is free to turn about the vertical direction and a uniform horizontal magnetic field of 0.25 T is applied, what is the magnitude of the torque on the solenoid when its axis makes an angle of 30° with the direction of the applied field?

Answer:

Magnetic field strength, B = 0.25 T
Magnetic moment, M = $0.6\ J\;T^{-1}$
The angle $\theta$ between the axis of the solenoid and the direction of the applied field is $30°$.
Therefore, the torque acting on the solenoid is given as:
$\tau = MB \sin \theta$ = $0.6\times 0.25\; \sin 30°$ = $7.5\times 10^{-2}$ J

Q 5.7) A bar magnet of magnetic moment 1.5 J T⁻¹ lies aligned with the direction of a uniform magnetic field of 0.22 T.
(a) What is the amount of work required by an external torque to turn the magnet so as to align its magnetic moment: (i) normal to the field direction, (ii) opposite to the field direction?
(b) What is the torque on the magnet in cases (i) and (ii)?

Answer:

(a) Magnetic moment, M = $1.5\; J\; T^{-1}$
Magnetic field strength, B = 0.22 T

(i) Initial angle between the axis and the magnetic field, $\theta_{1} = 0°$
Final angle between the axis and the magnetic field, $\theta_{2} = 90°$
The work required to make the magnetic moment normal to the direction of the magnetic field is given as:
W = $- MB (\cos\theta_{2} – \cos\theta_{1})$ = $- 1.5\times 0.22 (\cos 90° – \cos 0°)$ = – 0.33 (0 – 1) = 0.33 J

(ii) Initial angle between the axis and the magnetic field, $\theta_{1} = 0°$
Final angle between the axis and the magnetic field, $\theta_{2} = 180°$
The work required to make the magnetic moment opposite to the direction of the magnetic field is given as:
W = $- MB (\cos\theta_{2} – \cos\theta_{1})$ = $- 1.5\times 0.22 (\cos 180° – \cos 0°)$ = – 0.33 (– 1 – 1) = 0.66 J

(b) For case (i): $\theta = \theta_{2} = 90°$
∴ Torque, $\tau = MB \sin\theta$ = $1.5 \times 0.22 \times \sin 90°$ = 0.33 J
The torque tends to align the magnetic moment vector along B.

For case (ii): $\theta = \theta_{2} = 180°$
∴ Torque, $\tau = MB \sin\theta$ = $MB \sin 180°$ = 0 J

Q 5.8) A closely wound solenoid of 2000 turns and area of cross-section $1.6\times 10^{-4}\; m^{2}$, carrying a current of 4.0 A, is suspended through its centre, thereby allowing it to turn in a horizontal plane.
(a) What is the magnetic moment associated with the solenoid?
(b) What is the force and torque on the solenoid if a uniform horizontal magnetic field of $7.5 \times 10^{-2}\ T$ is set up at an angle of 30° with the axis of the solenoid?

Answer:

Number of turns on the solenoid, n = 2000
Area of cross-section of the solenoid, A = $1.6\times 10^{-4}\; m^{2}$
Current in the solenoid, I = 4.0 A

(a) The magnetic moment along the axis of the solenoid is calculated as:
M = nAI = $2000\times 4\times1.6\times10^{-4}$ = 1.28 A m²

(b) Magnetic field, B = $7.5\times 10^{-2}\; T$
The angle between the magnetic field and the axis of the solenoid, $\theta = 30°$
Torque, $\tau = MB \sin\theta$ = $1.28\times 7.5\times 10^{-2}\; \sin 30°$ = $0.048\ J$
Since the magnetic field is uniform, the net force on the solenoid is zero. The torque on the solenoid is $0.048\ J$.

Q 5.9) A circular coil of 16 turns and radius 10 cm carrying a current of 0.75 A rests with its plane normal to an external field of magnitude $5.0 \times 10^{-2}\ T$. The coil is free to turn about an axis in its plane perpendicular to the field direction. When the coil is turned slightly and released, it oscillates about its stable equilibrium with a frequency of $2.0\ s^{-1}$. What is the moment of inertia of the coil about its axis of rotation?

Answer:

Number of turns in the circular coil, N = 16
Radius of the coil, r = 10 cm = 0.1 m
Cross-section of the coil, A = $\pi r^{2} = \pi\times (0.1)^{2} \;m^{2}$
Current in the coil, I = 0.75 A
Magnetic field strength, B = $5.0\times 10^{-2}\; T$
Frequency of oscillations of the coil, v = $2.0\; s^{-1}$
∴ Magnetic moment, M = NIA = $N I \pi r^{2}$ = $16\times 0.75\times \pi\times (0.1)^{2}$ = $0.377\; J \;T^{-1}$
The frequency is given by the relation:
$v = \frac{1}{2\pi }\sqrt{\frac{MB}{I}}$
Where I = moment of inertia of the coil
Rearranging the above formula, we get:
$∴ I = \frac{MB}{4 \pi^{2} v^{2}}$ = $\frac{0.377\times 5\times 10^{-2}}{4\pi^{2}\times (2)^{2}}$ = $1.2\times 10^{-4}\; kg\; m^{2}$
Hence, the moment of inertia of the coil about its axis of rotation is $1.2\times 10^{-4}\; kg\; m^{2}$.

Q 5.10) A magnetic needle free to rotate in a vertical plane parallel to the magnetic meridian has its north tip pointing down at 22° with the horizontal. The horizontal component of the earth’s magnetic field at the place is known to be 0.35 G. Determine the magnitude of the earth’s magnetic field at the place.

Answer:

Horizontal component of earth’s magnetic field, $B_{H} = 0.35\; G$
Angle made by the needle with the horizontal plane = angle of dip = $\delta = 22°$
Earth’s magnetic field strength = B
We can relate B and $B_{H}$ as:
$B_{H} = B\cos\delta$
$∴ B = \frac{B_{H}}{\cos\delta}$ = $\frac{0.35}{\cos 22°}$ = 0.38 G
Hence, the strength of the earth’s magnetic field at the given location is 0.38 G.

Q 5.11) At a certain location in Africa, a compass points 12° west of the geographic north. The north tip of the magnetic needle of a dip circle placed in the plane of the magnetic meridian points 60° above the horizontal. The horizontal component of the earth’s field is measured to be 0.16 G. Specify the direction and magnitude of the earth’s field at the location.

Answer:

Angle of declination, $\theta = 12°$
Angle of dip, $\delta = 60°$
Horizontal component of earth’s magnetic field, $B_{H} = 0.16\; G$
Earth’s magnetic field at the given location = B
We can relate B and $B_{H}$ as:
$B_{H} = B\cos\delta$
$∴ B = \frac{B_{H}}{\cos\delta}$ = $\frac{0.16}{\cos 60°}$ = 0.32 G
Earth’s magnetic field lies in the vertical plane, $12°$ west of the geographic meridian, making an angle of $60°$ (upward) with the horizontal direction.
Q 5.12) A short bar magnet has a magnetic moment of 0.48 J T⁻¹. Give the direction and magnitude of the magnetic field produced by the magnet at a distance of 10 cm from the centre of the magnet on (a) the axis, (b) the equatorial line (normal bisector) of the magnet.

Magnetic moment of the bar magnet, M = $0.48\;J\;T^{-1}$

(a) Distance, d = 10 cm = 0.1 m

The magnetic field at a distance d from the centre of the magnet, on its axis, is:

$B = \frac{\mu_{0}\,2M}{4\pi d^{3}}$

where $\mu_{0}$ = permeability of free space = $4\pi \times 10^{-7}\; TmA^{-1}$

$∴ B = \frac{4\pi \times 10^{-7}\times 2\times 0.48}{4\pi \times (0.1)^{3}} = 0.96\times 10^{-4}\; T = 0.96 \;G$

The magnetic field is along the S-N direction.

(b) The magnetic field at a distance of 10 cm (i.e., d = 0.1 m) on the equatorial line of the magnet is:

$B = \frac{\mu_{0} M}{4\pi d^{3}} = \frac{4\pi \times 10^{-7}\times 0.48}{4\pi (0.1)^{3}}$ = 0.48 G

The magnetic field is along the N-S direction.

Q 5.13) A short bar magnet placed in a horizontal plane has its axis aligned along the magnetic north-south direction. Null points are found on the axis of the magnet at 14 cm from the centre of the magnet. The earth's magnetic field at the place is 0.36 G and the angle of dip is zero. What is the total magnetic field on the normal bisector of the magnet at the same distance as the null point (i.e., 14 cm) from the centre of the magnet? (At null points, the field due to a magnet is equal and opposite to the horizontal component of the earth's magnetic field.)

Earth's magnetic field at the given place, H = 0.36 G

The magnetic field at a distance d, on the axis of the magnet, is:

$B_{1} = \frac{\mu_{0}2M}{4\pi d^{3}} = H\;\;\;\;…(i)$

where $\mu_{0}$ is the permeability of free space and M is the magnetic moment.

The magnetic field at the same distance d, on the equatorial line of the magnet, is:

$B_{2} = \frac{\mu_{0}M}{4\pi d^{3}} = \frac{H}{2}\;\;\;\;[using\;equation\;(i)]$

Total magnetic field, B = $B_{1} + B_{2}$ = $H + \frac{H}{2}$ = 0.36 + 0.18 = 0.54 G

Hence, the magnetic field is 0.54 G in the direction of the earth's magnetic field.

Q 5.14) If the bar magnet in Exercise 5.13 is turned around by 180°, where will the new null points be located?

The magnetic field on the axis of the magnet at a distance $d_{1}$ = 14 cm can be written as:

$B_{1} = \frac{\mu_{0}2M}{4\pi (d_{1})^{3}} = H\;\;\;\;…(1)$

where M is the magnetic moment, $\mu_{0}$ the permeability of free space, and H the horizontal component of the earth's magnetic field at $d_{1}$.

If the bar magnet is turned through $180°$, the neutral points will lie on its equatorial line. The magnetic field at a distance $d_{2}$ on the equatorial line of the magnet is:

$B_{2} = \frac{\mu_{0}M}{4\pi (d_{2})^{3}} = H\;\;\;\;…(2)$

Equating equations (1) and (2):

$\frac{2}{(d_{1})^{3}} = \frac{1}{(d_{2})^{3}}$

$\left [ \frac{d_{2}}{d_{1}} \right ]^{3} = \frac{1}{2}$

$∴ d_{2} = d_{1}\times \left ( \frac{1}{2} \right )^{\frac{1}{3}} = 14\times 0.794 = 11.1\; cm$

The new null points will be located at 11.1 cm from the centre of the magnet, on its normal bisector.

Q 5.15) A short bar magnet of magnetic moment 5.25 × 10⁻² J T⁻¹ is placed with its axis perpendicular to the earth's field direction. At what distance from the centre of the magnet is the resultant field inclined at 45° with the earth's field on (a) its normal bisector and (b) its axis? The magnitude of the earth's field at the place is given to be 0.42 G.
Ignore the length of the magnet in comparison to the distances involved.

Magnetic moment of the bar magnet, M = $5.25\times 10^{-2}\;J\;T^{-1}$

Magnitude of the earth's magnetic field at the place, H = 0.42 G = $0.42\times 10^{-4}\; T$

(a) The magnetic field at a distance R from the centre of the magnet on its normal bisector is:

$B = \frac{\mu_{0}M}{4\pi R^{3}}$

where $\mu_{0}$ = permeability of free space = $4\pi \times 10^{-7}\; TmA^{-1}$.

When the resultant field is inclined at $45°$ with the earth's field, B = H.

$∴ \frac{\mu_{0}M}{4\pi R^{3}} = H = 0.42\times 10^{-4}$

$R^{3} = \frac{\mu_{0}M}{0.42\times 10^{-4}\times 4\pi} = \frac{4\pi \times 10^{-7}\times 5.25\times 10^{-2}}{4\pi \times 0.42\times 10^{-4}} = 12.5 \times 10^{-5}$

$∴ R = 0.05\; m = 5\; cm$

(b) The magnetic field at a distance R′ from the centre of the magnet on its axis is:

$B' = \frac{\mu_{0}2M}{4\pi (R')^{3}}$

The resultant field is inclined at $45°$ with the earth's field, so $B' = H$:

$\frac{\mu_{0}2M}{4\pi (R')^{3}} = H$

$(R')^{3} = \frac{\mu_{0}\,2M}{4\pi H} = \frac{4\pi \times 10^{-7}\times 2\times 5.25\times 10^{-2}}{4\pi\times 0.42\times 10^{-4}} = 25\times 10^{-5}$

$∴ R' = 0.063\; m = 6.3\; cm$

Q 5.16) Answer the following questions:

(a) Why does a paramagnetic sample display greater magnetisation (for the same magnetising field) when cooled?

(b) Why is diamagnetism, in contrast, almost independent of temperature?

(c) If a toroid uses bismuth for its core, will the field in the core be (slightly) greater or (slightly) less than when the core is empty?

(d) Is the permeability of a ferromagnetic material independent of the magnetic field? If not, is it more for lower or higher fields?

(e) Magnetic field lines are always nearly normal to the surface of a ferromagnet at every point. (This fact is analogous to the static electric field lines being normal to the surface of a conductor at every point.) Why?

(f) Would the maximum possible magnetisation of a paramagnetic sample be of the same order of magnitude as the magnetisation of a ferromagnet?

(a) At lower temperatures, thermal motion is reduced, so its tendency to disrupt the alignment of the dipoles with the magnetising field decreases; hence the magnetisation is greater.

(b) The induced dipole moment in a diamagnet is always opposite to the magnetising field and does not depend on the internal thermal motion of the atoms; therefore temperature hardly affects the diamagnetism of the material.

(c) Bismuth is a diamagnetic substance. Therefore, a toroid with a bismuth core will have a field slightly less than when the core is empty.

(d) The permeability of a ferromagnetic material is not independent of the magnetic field; it is greater for lower fields.

(e) The proof of this important fact (of much practical use) is based on the boundary conditions of the magnetic fields (B and H) at the interface of two media. When one of the media has µ >> 1, the field lines meet this medium nearly normally.

(f) Yes. Apart from minor differences in the strengths of the individual atomic dipoles of two different materials, a paramagnetic sample with saturated magnetisation will have a magnetisation of the same order. But, of course, saturation requires impractically high magnetising fields.

Q 5.17) Answer the following questions:

(a) Explain qualitatively, on the basis of the domain picture, the irreversibility in the magnetisation curve of a ferromagnet.

(b) The hysteresis loop of a soft iron piece has a much smaller area than that of a carbon steel piece. If the material is to go through repeated cycles of magnetisation, which piece will dissipate greater heat energy?
(c) 'A system displaying a hysteresis loop, such as a ferromagnet, is a device for storing memory.' Explain the meaning of this statement.

(d) What kind of ferromagnetic material is used for coating magnetic tapes in a cassette player, or for building 'memory stores' in a modern computer?

(e) A certain region of space is to be shielded from magnetic fields. Suggest a method.

(a) When the substance is placed in an external magnetic field, the domains align in the direction of the field, and some energy is spent in the process of alignment. When the external field is removed, the substance retains some magnetisation: the energy spent in magnetising it is not fully recovered but is lost in the form of heat. This is the basic cause of the irreversibility of the magnetisation curve of a ferromagnetic substance.

(b) The carbon steel piece, because the heat lost per cycle is proportional to the area of the hysteresis loop.

(c) The magnetisation of a ferromagnet is not a single-valued function of the magnetising field. Its value for a particular field depends both on the field and on the history of magnetisation (i.e., how many cycles of magnetisation it has gone through, etc.). In other words, the value of the magnetisation is a record, or memory, of its cycles of magnetisation. If information bits can be made to correspond to these cycles, a system displaying such a hysteresis loop can act as a device for storing information.

(d) Ceramics (specially treated barium iron oxides), also called ferrites.

(e) Surround the region with soft iron rings. The magnetic field lines will be drawn into the rings, and the enclosed space will be free of a magnetic field. But this shielding is only approximate, unlike the perfect electric shielding of a cavity in a conductor placed in an external electric field.

Q 5.18) A long straight horizontal cable carries a current of 2.5 A in the direction 10° south of west to 10° north of east. The magnetic meridian of the place happens to be 10° west of the geographic meridian. The earth's magnetic field at the location is 0.33 G, and the angle of dip is zero. Locate the line of neutral points (ignore the thickness of the cable). (At neutral points, the magnetic field due to a current-carrying cable is equal and opposite to the horizontal component of the earth's magnetic field.)

Current in the wire, I = 2.5 A

Earth's magnetic field at the location, R = 0.33 G = 0.33 × 10⁻⁴ T

Angle of dip, δ = 0

Horizontal component of earth's magnetic field: B_H = R cos δ = 0.33 × 10⁻⁴ × cos 0 = 0.33 × 10⁻⁴ T

Magnetic field due to a current-carrying conductor:

B_c = (μ₀/2π) × (I/r) = (4π × 10⁻⁷/2π) × (2.5/r) = 5 × 10⁻⁷/r

At the neutral points, B_H = B_c:

0.33 × 10⁻⁴ = 5 × 10⁻⁷/r

r = 5 × 10⁻⁷/0.33 × 10⁻⁴ = 0.015 m = 1.5 cm

Hence the neutral points lie on a straight line parallel to the cable, at a perpendicular distance of 1.5 cm above it.

Q 5.19) A telephone cable at a place has four long straight horizontal wires carrying a current of 1.0 A in the same direction east to west. The earth's magnetic field at the place is 0.39 G, and the angle of dip is 35°. The magnetic declination is nearly zero. What are the resultant magnetic fields at points 4.0 cm below the cable?

First, resolve the earth's field into horizontal and vertical components:

B_H = B cos δ = 0.39 × cos 35° G = 0.32 G

B_V = B sin δ = 0.39 × sin 35° G = 0.22 G

The four wires of the telephone cable carry a total current of 4.0 A from east to west. The resultant magnetic field 4.0 cm below the cable is then found as follows.
B_wire $=\frac{\mu _{0}}{4\pi }\frac{2I}{r} =10^{-7}\times \frac{2\times 4}{4\times 10^{-2}}$ = 2 × 10⁻⁵ T = 0.2 G

Net magnetic field:

$B_{net}=\sqrt{(B_{H}-B_{wire})^{2}+B_{V}^{2}} =\sqrt{(0.12)^{2}+(0.22)^{2}} =\sqrt{0.0144+0.0484}$ = 0.25 G

The resultant magnetic field at points 4 cm below the cable is 0.25 G.

Q 5.20) A compass needle free to turn in a horizontal plane is placed at the centre of a circular coil of 30 turns and radius 12 cm. The coil is in a vertical plane making an angle of 45° with the magnetic meridian. When the current in the coil is 0.35 A, the needle points west to east.

(a) Determine the horizontal component of the earth's magnetic field at the location.

(b) The current in the coil is reversed, and the coil is rotated about its vertical axis by an angle of 90° in the anticlockwise sense looking from above. Predict the direction of the needle. Take the magnetic declination at the place to be zero.

Number of turns, n = 30

Radius of the coil, r = 12 cm = 0.12 m

Current in the coil, I = 0.35 A

The plane of the coil makes an angle θ = 45° with the magnetic meridian.

(a) The magnetic field strength at the centre of the coil due to the current is:

B = (μ₀/4π) × (2πnI/r) = (4π × 10⁻⁷/4π) × (2π × 30 × 0.35/0.12) = 5.49 × 10⁻⁵ T

The horizontal component of the earth's magnetic field is therefore:

B_H = B sin θ = (5.49 × 10⁻⁵) × sin 45° = 3.88 × 10⁻⁵ T = 0.39 G

(b) When the current in the coil is reversed and the coil is rotated about its vertical axis by 90° in the anticlockwise direction, the needle will reverse its original direction: it will point from east to west.

Q 5.21) A magnetic dipole is under the influence of two magnetic fields. The angle between the field directions is 60°, and one of the fields has a magnitude of 1.2 × 10⁻² T. If the dipole comes to stable equilibrium at an angle of 15° with this field, what is the magnitude of the other field?

Magnitude of one of the magnetic fields, B₁ = 1.2 × 10⁻² T

Let the magnitude of the other field be B₂.

Angle between the fields, θ = 60°

At stable equilibrium, the angle between the dipole and the field B₁ is θ₁ = 15°, so the angle between the dipole and the field B₂ is θ₂ = θ − θ₁ = 45°.

At equilibrium, the torque due to the field B₁ balances the torque due to the field B₂:

MB₁ sin θ₁ = MB₂ sin θ₂

where M is the magnetic moment of the dipole. Therefore,

B₂ = B₁ sin θ₁/sin θ₂ = (1.2 × 10⁻²) × sin 15°/sin 45° = 4.39 × 10⁻³ T

The magnitude of the other field is 4.39 × 10⁻³ T.

Q 5.22) A monoenergetic (18 keV) electron beam initially in the horizontal direction is subjected to a horizontal magnetic field of 0.04 G normal to the initial direction. Estimate the up or down deflection of the beam over a distance of 30 cm (mₑ = 9.11 × 10⁻³¹ kg). [Note: Data in this exercise are so chosen that the answer will give you an idea of the effect of the earth's magnetic field on the motion of the electron beam from the electron gun to the screen in a TV set.]

Energy of the electron beam, E = 18 keV = 18 × 10³ eV = 18 × 10³ × 1.6 × 10⁻¹⁹ J

Magnetic field, B = 0.04 G = 4 × 10⁻⁶ T

Mass of the electron, mₑ = 9.11 × 10⁻³¹ kg

Distance the beam travels, d = 30 cm = 0.3 m

The kinetic energy of the electron beam is E = (1/2)mv², so

$v=\sqrt{\frac{2\times 18\times 10^{3}\times 1.6\times 10^{-19}}{9.11\times 10^{-31}}}= 0.795\times 10^{8}\; m/s$

The electron beam deflects along a circular path of radius r.
The magnetic force provides the centripetal force: Bev = mv²/r, so

r = mv/(Be) = (9.11 × 10⁻³¹ × 0.795 × 10⁸)/(4 × 10⁻⁶ × 1.6 × 10⁻¹⁹) = (7.24 × 10⁻²³)/(0.064 × 10⁻²³) = 113.1 m

Let the up or down deflection of the beam be x = r(1 − cos θ), where θ is the angle of deflection:

sin θ = d/r = 0.3/113.1 = 2.65 × 10⁻³, so θ ≈ 0.152°

x = r(1 − cos θ) ≈ rθ²/2 = d²/(2r) = (0.3)²/(2 × 113.1) ≈ 4.0 × 10⁻⁴ m = 0.4 mm

Hence the up or down deflection of the beam over 30 cm is about 0.4 mm.

Q 5.23) A sample of paramagnetic salt contains 2.0 × 10²⁴ atomic dipoles, each of dipole moment 1.5 × 10⁻²³ J T⁻¹. The sample is placed under a homogeneous magnetic field of 0.64 T and cooled to a temperature of 4.2 K. The degree of magnetic saturation achieved is equal to 15%. What is the total dipole moment of the sample for a magnetic field of 0.98 T and a temperature of 2.8 K? (Assume Curie's law.)

Number of atomic dipoles, n = 2.0 × 10²⁴

Dipole moment of each dipole, M′ = 1.5 × 10⁻²³ J T⁻¹

Magnetic field strength, B₁ = 0.64 T; temperature, T₁ = 4.2 K

Total dipole moment of the sample = n × M′ = 2.0 × 10²⁴ × 1.5 × 10⁻²³ = 30 J T⁻¹

Degree of magnetic saturation = 15%, therefore M₁ = (15/100) × 30 = 4.5 J T⁻¹

Magnetic field strength, B₂ = 0.98 T; temperature, T₂ = 2.8 K

By Curie's law, the magnetisation is proportional to B/T, so

$\frac{M_{2}}{M_{1}}=\frac{B_{2}}{B_{1}}\times \frac{T_{1}}{T_{2}}$

$M_{2}=M_{1}\times\frac{B_{2}}{B_{1}}\times \frac{T_{1}}{T_{2}} = 4.5\times\frac{0.98}{0.64}\times \frac{4.2}{2.8}$ = 10.34 J T⁻¹

Q 5.24) A Rowland ring of mean radius 15 cm has 3500 turns of wire wound on a ferromagnetic core of relative permeability 800. What is the magnetic field B in the core for a magnetising current of 1.2 A?

Mean radius of the Rowland ring, r = 15 cm = 0.15 m

Number of turns, N = 3500

Relative permeability of the core, μᵣ = 800

Magnetising current, I = 1.2 A

Magnetic field in the core:

$B = \frac{\mu _{r}\mu _{0}IN}{2\pi r} = \frac{800\times 4\pi \times 10^{-7}\times 1.2\times 3500}{2\pi \times 0.15}= 4.48\; T$

The magnetic field in the core is 4.48 T.

Q 5.25) The magnetic moment vectors µₛ and µₗ associated with the intrinsic spin angular momentum S and orbital angular momentum l, respectively, of an electron are predicted by quantum theory (and verified experimentally to a high accuracy) to be given by:

µₛ = –(e/m) S,  µₗ = –(e/2m) l

Which of these relations is in accordance with the result expected classically? Outline the derivation of the classical result.

Of the two, the relation µₗ = –(e/2m) l is in accordance with classical physics. It follows easily from the definitions of µₗ and l:

µₗ = IA = –(e/T) πr² ……(1)

l = mvr = m(2πr/T)r = 2πmr²/T ……(2)

where r is the radius of the circular orbit, which the electron of mass m and charge (–e) completes in time T. Dividing (1) by (2):

µₗ/l = [–(e/T)πr²]/[2πmr²/T] = –(e/2m)

Therefore, µₗ = –(e/2m) l.

Since the charge of the electron is negative (–e), it is easily seen that µₗ and l are antiparallel, both normal to the plane of the orbit.

Note that µₛ/S, in contrast to µₗ/l, is –(e/m), i.e., twice the classically expected value. This latter result (verified experimentally) is an outstanding consequence of modern quantum theory and cannot be obtained classically.
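The later numerical answers can be cross-checked the same way; a short Python sketch for Q 5.22 to Q 5.24:

```python
# Cross-checks for Q 5.22 - Q 5.24 (not part of the NCERT text).
from math import sqrt, pi

# Q 5.22: speed, radius of the circular path, and deflection x ~ d^2 / (2r)
E = 18e3 * 1.6e-19; m = 9.11e-31; B = 0.04e-4; e = 1.6e-19; d = 0.3
v = sqrt(2 * E / m)
r = m * v / (B * e)
print(v, r, d**2 / (2 * r))          # ~7.95e7 m/s, ~113 m, ~4e-4 m

# Q 5.23: Curie-law scaling M2 = M1 * (B2/B1) * (T1/T2)
print(4.5 * (0.98 / 0.64) * (4.2 / 2.8))          # ~10.34 J/T

# Q 5.24: field in the Rowland ring core, B = mu_r * mu_0 * N * I / (2 pi r)
mu0 = 4 * pi * 1e-7
print(800 * mu0 * 1.2 * 3500 / (2 * pi * 0.15))   # ~4.48 T
```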
## Class 12 Physics Chapter 5 NCERT Solutions for Magnetism and Matter

Chapter 5, Magnetism and Matter, of Class 12 Physics is categorised under the Term I CBSE Syllabus for the 2021-22 session. This chapter of NCERT Solutions for Class 12 Physics has various questions on the determination of the earth's magnetic field, the behaviour of a compass at the poles, and its direction. By utilising these solutions, one can be well prepared for the exams.

#### Concepts covered in NCERT Solutions for Class 12 Physics Chapter 5 Magnetism and Matter are:

1. Introduction
2. The Bar Magnet
   1. The magnetic field lines
   2. Bar magnet as an equivalent solenoid
   3. The dipole in a uniform magnetic field
   4. The electrostatic analogue
3. Magnetism and Gauss's Law
4. The Earth's Magnetism
   1. Magnetic declination and dip
5. Magnetisation and Magnetic Intensity
6. Magnetic Properties of Materials
   1. Diamagnetism
   2. Paramagnetism
   3. Ferromagnetism

### Explain what Magnetism and Matter is about.

Do you know that the earth's magnetic field varies from point to point in space? Do you know the reason behind this? Many of us also want to know about interstellar space and its weak magnetic field. In this chapter, we find the magnitude of the magnetic moment of a magnet in a uniform magnetic field, and what the potential energy of the magnet is if it is rotated freely in a plane. Do you know that when we pass current through a solenoid, it acts as a magnet? Find out the magnetic moment of this solenoid. We also see how much torque is required to turn a magnet so that its magnetic moment is at a certain alignment with the field, and we work through an example of a magnetic needle used to find the magnetic meridian and to compute the direction and magnitude of the earth's magnetic field. There are various examples, exemplar questions, MCQs and worksheets mentioned below which will help you understand magnetism and the magnetic field, with their uses and applications. Practising the problems on magnetism using the NCERT Solutions will definitely help you retain these concepts, as the topics are frequently asked in term-wise examinations.

Also Access: NCERT Exemplar for Class 12 Physics Chapter 5 | CBSE Notes for Class 12 Physics Chapter 5

Chapter 5, Magnetism and Matter, from NCERT Class 12 Physics is one of the concepts of Physics that needs a lot of imagination. It talks about magnetic field lines, magnetic force, matter occupying space, etc., which are explained with the help of videos and eye-catching diagrams that sink into students' minds very easily. The language used by our experts to explain these concepts is also very simple. Another advantage is that this chapter's NCERT Solutions are available in PDF form, free for downloading, along with CBSE sample papers.

## Frequently Asked Questions on NCERT Solutions for Class 12 Physics Chapter 5

### Where can I find NCERT Solutions for Class 12 Physics Chapter 5 online?

Students can find the NCERT Solutions for Class 12 Physics Chapter 5 on BYJU'S. These NCERT Solutions are top-rated study resources which students can rely on without any hesitation. The PDF of the solutions is also available for free download, which students can use without any time constraints. These solutions are curated by the subject experts at BYJU'S, strictly based on the latest Term I CBSE Syllabus 2021-22. Students can speed up their first-term exam preparation using the solutions module, which is available chapter-wise.

### Are NCERT Solutions for Class 12 Physics Chapter 5 helpful in the first term exam preparation?
The NCERT Solutions help students to strengthen their foundation in the basic topics of Physics. The exercise-wise solutions are created by highly knowledgeable experts at BYJU'S with vast experience in the subject. They help students to focus better and score well in the Term I exams. The main aim is to boost students' confidence and their efficiency in solving complex problems within a shorter duration.

### What are the main concepts covered under NCERT Solutions for Class 12 Physics Chapter 5?

The main concepts covered under NCERT Solutions for Class 12 Physics Chapter 5 are:

1. Introduction
2. The Bar Magnet
   1. The magnetic field lines
   2. Bar magnet as an equivalent solenoid
   3. The dipole in a uniform magnetic field
   4. The electrostatic analogue
3. Magnetism and Gauss's Law
4. The Earth's Magnetism
   1. Magnetic declination and dip
5. Magnetisation and Magnetic Intensity
6. Magnetic Properties of Materials
   1. Diamagnetism
   2. Paramagnetism
   3. Ferromagnetism
## Wikipedia - Concentration polarization (en) | Wikipedia - Diffusion layer (en)

diffusion layer (concentration boundary layer)

https://doi.org/10.1351/goldbook.D01725

The region in the vicinity of an electrode where the concentrations are different from their values in the bulk solution. The definition of the thickness of the diffusion layer is arbitrary, because the concentration approaches the bulk value $c_{0}$ asymptotically (see diagram).

[Diagram: concentration profile as a function of distance from the electrode]
# Topology on the group of autohomeomorphisms

I am wondering whether the group of all autohomeomorphisms of a compact metric space can be given a reasonable topological group structure. (Preferably: can it be turned into a locally compact group?) I think the answer should morally be no, but I might be wrong. Here's how I see it: Boolean algebras, which are discrete in nature, correspond to compact, Hausdorff, zero-dimensional spaces. The group of automorphisms of a Boolean algebra is then the same as the group of automorphisms of the corresponding Stone space...

I'll also add that we don't need to restrict ourselves to the topological category because, as pointed out above, the automorphism group tends to be unmanageable. If we restrict ourselves to metric spaces, for instance, and consider the group of bijective autoisometries, the groups tend to be much better behaved. For instance, the circle $S^1$ has automorphism group (in $\mathbf{Met}$) isomorphic to $S^1\rtimes\mathbb{Z}_2$, sometimes called $Dih(S^1)$, the circular dihedral group, with the topology of $S^1\sqcup S^1$ inherited in the obvious way.
# Triangulated Categories (AM-148)

AMNON NEEMAN

Pages: 449

https://www.jstor.org/stable/j.ctt7zvcqc

1. Front Matter (pp. i-iv)
2. Table of Contents (pp. v-2)
3. 0. Acknowledgements (pp. 3-3)
4. 1. Introduction (pp. 3-28)

Before describing the contents of this book, let me explain its origins. The book began as a joint project between the author and Voevodsky. The idea was to assemble coherently the facts about triangulated categories that might be relevant in the applications to motives. Since the presumed reader would be interested in applications, Voevodsky suggested that we keep the theory part of the book free of examples. The interested reader should have an example in mind, and read the book to find out what the general theory might have to say about the example. The theory should be presented cleanly,...

5. CHAPTER 1. Definition and elementary properties of triangulated categories (pp. 29-72)

Definition 1.1.1. Let $\mathcal{C}$ be an additive category and $\Sigma:\mathcal{C}\rightarrow \mathcal{C}$ be an additive endofunctor of $\mathcal{C}$. Assume throughout that the endofunctor $\Sigma$ is invertible. A candidate triangle in $\mathcal{C}$ (with respect to $\Sigma$) is a diagram of the form $$X \xrightarrow{\;u\;} Y \xrightarrow{\;v\;} Z \xrightarrow{\;w\;} \Sigma X$$ such that the composites $v \circ u$, $w \circ v$ and $\Sigma u \circ w$ are the zero morphisms. A morphism of candidate triangles is a commutative diagram $$\begin{matrix} X & \xrightarrow{\;u\;} & Y & \xrightarrow{\;v\;} & Z & \xrightarrow{\;w\;} & \Sigma X \\ \downarrow f & & \downarrow g & & \downarrow h & & \downarrow \Sigma f \\ X' & \xrightarrow{\;u'\;} & Y' & \xrightarrow{\;v'\;} & Z' & \xrightarrow{\;w'\;} & \Sigma X' \end{matrix}$$ where each row is a candidate triangle.

Definition 1.1.2. A pre-triangulated category $\mathcal{T}$ is an additive category, together with an additive automorphism $\Sigma$, and a class of candidate triangles (with respect to $\Sigma$) called...

6. CHAPTER 2. Triangulated functors and localizations of triangulated categories (pp. 73-102)

Definition 2.1.1. Let $\mathcal{D}_{1}, \mathcal{D}_{2}$ be triangulated categories. A triangulated functor $F:\mathcal{D}_{1} \rightarrow \mathcal{D}_{2}$ is an additive functor $F:\mathcal{D}_{1} \rightarrow \mathcal{D}_{2}$ together with natural isomorphisms $\phi_{X}:F(\Sigma(X)) \longrightarrow \Sigma(F(X))$ such that for any distinguished triangle $$X \overset{u}\longrightarrow Y \overset{v}\longrightarrow Z \overset{w}\longrightarrow \Sigma X$$ in $\mathcal{D}_{1}$, the candidate triangle $$F(X) \overset{F(u)}\longrightarrow F(Y) \overset{F(v)}\longrightarrow F(Z) \overset{\phi_{X}\circ F(w)}\longrightarrow \Sigma(F(X))$$ is a distinguished triangle in $\mathcal{D}_{2}$.

We remind the reader of the definition of triangulated subcategories (see Section 1.5).

Definition 1.5.1. Let $\mathcal{D}$ be a triangulated category. A full additive subcategory $\mathcal{C}$ in $\mathcal{D}$ is called a triangulated subcategory if every object isomorphic to an object of $\mathcal{C}$ is in $\mathcal{C}$, and the inclusion $\mathcal{C} \rightarrow \mathcal{D}$ is a triangulated functor, as in Definition 2.1.1. We assume further that $\phi_{X}:1(\Sigma(X)) \longrightarrow \Sigma(1(X))$ is the identity on $\Sigma X$.
Remark 2.1.2. To say that...

7. CHAPTER 3. Perfection of classes (pp. 103-122)

Let us briefly review some standard definitions for large cardinals. A cardinal $\alpha$ is called singular if $\alpha$ can be written as a sum of fewer than $\alpha$ cardinals, all smaller than $\alpha$. Let $\aleph_{n}$ be the $n$-th infinite cardinal. Thus, $\aleph_{0}$ is the zeroth, that is the countable cardinal, $\aleph_{1}$ the next, and so on. The cardinal $\aleph_{\omega}$ is defined to be the smallest cardinal bigger than $\aleph_{n}$ for all $n$. Clearly, $\aleph_{\omega}=\sum^{\infty}_{n=1} \aleph_{n}$ is a countable union of cardinals, each strictly smaller than $\aleph_{\omega}$. Hence $\aleph_{\omega}$ is an example of a singular cardinal. A cardinal which is not...

8. CHAPTER 4. Small objects, and Thomason's localisation theorem (pp. 123-152)

Definition 4.1.1. Let $\mathcal{T}$ be a triangulated category satisfying [TR5] (that is, coproducts exist). Let $\alpha$ be an infinite cardinal. An object $k \in \mathcal{T}$ is called $\alpha$-small if, for any collection $\{X_{\lambda};\ \lambda \in \Lambda\}$ of objects of $\mathcal{T}$, any map $$k \longrightarrow \coprod_{\lambda \in \Lambda} X_{\lambda}$$ factors through some coproduct of cardinality strictly less than $\alpha$. In other words, there exists a subset $\Lambda' \subset \Lambda$, where the cardinality of $\Lambda'$ is strictly less than $\alpha$, and the map above factors as $$k \longrightarrow \coprod_{\lambda \in \Lambda'} X_{\lambda} \longrightarrow \coprod_{\lambda \in \Lambda} X_{\lambda}.$$

Example 4.1.2. The special case where $\alpha=\aleph_{0}$ is of great interest. An object $k \in \mathcal{T}$ is called $\aleph_{0}$-small if for any infinite coproduct in $\mathcal{T}$, say the coproduct...

9. CHAPTER 5. The category A($\mathcal{S}$) (pp. 153-182)

Let $\mathcal{S}$ be an additive category. We do not assume that $\mathcal{S}$ is essentially small. We define:

Definition 5.1.1. The category $\mathcal{C}at(\mathcal{S}^{op}, \mathcal{A}b)$ has for its objects all the additive functors $F : \mathcal{S}^{op} \longrightarrow \mathcal{A}b$. The morphisms in $\mathcal{C}at(\mathcal{S}^{op}, \mathcal{A}b)$ are the natural transformations.

It is well-known that $\mathcal{C}at(\mathcal{S}^{op}, \mathcal{A}b)$ is an abelian category. We remind the reader which sequences are exact in $\mathcal{C}at(\mathcal{S}^{op}, \mathcal{A}b)$. Suppose we are given a sequence $$0 \longrightarrow F'(-) \longrightarrow F(-) \longrightarrow F''(-) \longrightarrow 0$$ of objects and morphisms in $\mathcal{C}at(\mathcal{S}^{op}, \mathcal{A}b)$, that is, functors and natural transformations $\mathcal{S}^{op} \longrightarrow \mathcal{A}b$. This sequence is exact in $\mathcal{C}at(\mathcal{S}^{op}, \mathcal{A}b)$ if and only if, for every $s \in \mathcal{S}$, the sequence of abelian groups $$0 \longrightarrow F'(s) \longrightarrow F(s) \longrightarrow F''(s) \longrightarrow 0$$ is...

10. CHAPTER 6. The category $\mathcal{E}x(\mathcal{S}^{op}, \mathcal{A}b)$ (pp. 183-220)

Let $\alpha$ be a regular cardinal. Throughout this Chapter, we fix a choice of such a cardinal $\alpha$. Let $\mathcal{S}$ be a category satisfying the following hypotheses.

Hypotheses 6.1.1. The category $\mathcal{S}$ is said to satisfy Hypothesis 6.1.1 if

6.1.1.1. $\mathcal{S}$ is an essentially small additive category.

6.1.1.2. The coproduct of fewer than $\alpha$ objects of $\mathcal{S}$ exists in $\mathcal{S}$.
6.1.1.3. Homotopy pullback squares exist in $\mathcal{S}$. That is, given a diagram in $\mathcal{S}$ $$\begin{matrix} & & x \\ & & \downarrow \\ x' & \longrightarrow & y \end{matrix}$$ it may be completed to a commutative square $$\begin{matrix} p & \longrightarrow & x \\ \downarrow & & \downarrow \\ x' & \longrightarrow & y \end{matrix}$$ so that any commutative square $$\begin{matrix} s & \longrightarrow & x \\ \downarrow & & \downarrow \\ x' & \longrightarrow & y \end{matrix}$$ is induced by a (non-unique) map $s \rightarrow p$. The object $p$ is called...

11. CHAPTER 7. Homological properties of $\mathcal{E}x(\mathcal{S}^{op}, \mathcal{A}b)$ (pp. 221-272)

We have learned, in the previous Chapter, some of the basic properties of the categories $\mathcal{E}x(\mathcal{S}^{op}, \mathcal{A}b)$. In Appendix C, more specifically in Section C.4, we can see that in general the categories $\mathcal{E}x(\mathcal{S}^{op}, \mathcal{A}b)$ need not have enough injectives; in fact, they can fail to have cogenerators. See also Lemma 6.4.6 for the fact that, if $\mathcal{E}x(\mathcal{S}^{op}, \mathcal{A}b)$ fails to have a cogenerator, it certainly cannot have enough injectives. But nevertheless, something positive is true. This Chapter will be devoted to proving the positive results we have. These positive results are fragmented and inconclusive. They are included for the benefit of...

12. CHAPTER 8. Brown representability (pp. 273-308)

In this Chapter, all categories are assumed to have small Hom sets. Sometimes we will explicitly remind the reader of this; even when we do not, it is assumed. Let us make some definitions about possible sets of generators for $\mathcal{T}$.

Definition 8.1.1 (cf. Definition 6.2.8). Let $\mathcal{T}$ be a triangulated category satisfying [TR5]. A set $T$ of objects of $\mathcal{T}$ is called a generating set if

8.1.1.1. $\{\mathrm{Hom}(T,x)=0\} \Longrightarrow \{x=0\}$; that is, if $x \in \mathcal{T}$ satisfies $\mathrm{Hom}(t, x) = 0$ for all $t \in T$, then $x$ is isomorphic in $\mathcal{T}$ to 0.

8.1.1.2. Up to isomorphisms, $T$ is closed under suspension and...

13. CHAPTER 9. Bousfield localisation (pp. 309-320)

Let $\mathcal{T}$ be a triangulated category, $\mathcal{S} \subset \mathcal{T}$ a triangulated subcategory. In Theorem 2.1.8 we learned how to construct the Verdier quotient $\mathcal{T} / \mathcal{S}$. There is a natural localisation map $F:\mathcal{T}\longrightarrow \mathcal{T} / \mathcal{S}$. In Example 8.4.5 we learned that under suitable hypotheses, the functor $F$ has a right adjoint. We remind the reader of the hypotheses. Suppose $\mathcal{T}$ is a triangulated category with small Hom-sets, satisfying [TR5]. Suppose further that the representability theorem holds for $\mathcal{T}$. Let $\mathcal{S}$ be a localising subcategory. Assume that the Verdier quotient $\mathcal{T} / \mathcal{S}$ is a category with small Hom-sets. In Example 8.4.5 we saw that...

14. APPENDIX A. Abelian categories (pp. 321-368)
15. APPENDIX B. Homological functors into [AB5$\alpha$] categories (pp. 369-386)
16. APPENDIX C. Counterexamples concerning the abelian category A($\mathcal{T}$) (pp. 387-406)
17. APPENDIX D. Where $\mathcal{T}$ is the homotopy category of spectra (pp. 407-426)
18. APPENDIX E. Examples of non-perfectly-generated categories (pp. 427-442)
19. Bibliography (pp. 443-444)
20. Index (pp. 445-451)
21. Back Matter (pp. 452-452)
# Why does the sample covariance matrix inflate the larger eigenvalues and shrink the smaller eigenvalues of the covariance matrix?

In a classification context, using maximum likelihood, the sample covariance matrix for each class estimates the larger eigenvalues of that class's covariance matrix as larger than they really are, and the smaller eigenvalues as smaller than they really are. Why is that?

I am studying Regularized Discriminant Analysis, written by Friedman.

• That sounds interesting. Where did you get this assertion? – Cave Johnson Jun 17 '18 at 10:20
• google.com/… On the third page of this pdf, two paragraphs after equation 13. It says "it is well known". – fof Jun 17 '18 at 10:36
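For what it's worth, the inflation/shrinkage is easy to reproduce in a small simulation (my own sketch, not from Friedman's paper): take the population covariance to be the identity, so that every true eigenvalue equals 1, and look at the extreme eigenvalues of the sample covariance matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, trials = 50, 100, 200          # dimension, sample size, repetitions

top, bottom = [], []
for _ in range(trials):
    X = rng.standard_normal((n, p))  # i.i.d. N(0, I_p) rows: true eigenvalues are all 1
    S = np.cov(X, rowvar=False)      # sample covariance matrix
    eigs = np.linalg.eigvalsh(S)
    top.append(eigs[-1])
    bottom.append(eigs[0])

print(np.mean(top))     # ~2.9: the largest eigenvalue is overestimated
print(np.mean(bottom))  # ~0.09: the smallest eigenvalue is underestimated
```

With p/n = 0.5 the extremes land near the Marchenko-Pastur edges $(1 \pm \sqrt{p/n})^2$, which is exactly the over- and under-estimation the question describes.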
# Lambda Calculus simplification

Below is the lambda expression which I am finding difficult to reduce, i.e., I am not able to understand how to go about this problem.

$$(\lambda mn.(\lambda sz.ms(nsz)))(\lambda sz.sz)(\lambda sz.sz)$$

I am lost with this. If anyone could lead me in the right direction, that would be much appreciated.

-

Lambda terms are simplified by the β-reduction rule: $$(\lambda x.M)N\,\rightarrow_\beta\,M[x:=N]$$ It means that if you have a subterm that looks like $(\lambda x.M)N$ (called a redex), you can replace it by $M[x:=N]$, which is $M$ with $N$ substituted for $x$. The replacement $M[x:=N]$ is often called the contractum. (Capital letters like $M$ and $N$ are used to denote terms.)

In your case, the first reduction would be: $$(\underbrace{(\lambda mn.(\lambda sz.ms(nsz)))(\lambda sz.sz)}_{\mbox{redex}})(\lambda sz.sz) \rightarrow_\beta \underbrace{(\lambda n.(\lambda sz.(\lambda sz.sz)s(nsz)))}_{\mbox{contractum}}(\lambda sz.sz)$$ where we replaced $m$ with $\lambda sz.sz$. Note that we now have $\lambda sz$ inside $\lambda sz$, which isn't very convenient (yet allowed). It is better to rename bound variables ($\alpha$-conversion) so that they are distinct, for example to: $$(\lambda n.(\lambda uv.(\lambda sz.sz)u(nuv)))(\lambda sz.sz)$$

Some notes:

• Substitution applies only to free variables. Bound variables always stay intact.
• Sometimes substitution is not directly possible. For example, you cannot substitute $x:=y$ into $\lambda y.xy$ because $y$ is bound under the $\lambda$. You'd get $\lambda y.yy$, which would change the meaning of the term. So you have to rename the bound variable first, for example to $\lambda z.xz$, and then you can safely substitute $x:=y$ to get $\lambda z.yz$. The Wikipedia article gives a formal definition of substitution and shows when it's necessary to rename bound variables.
• Lambda associates to the right, so $\lambda sz.sz$ is $\lambda s.(\lambda z.sz)$.
• Application associates to the left, so $xyz$ is $(xy)z$.

-

An alternative way of expressing a lambda abstraction and reduction: indexes are used instead of lettered terms, based on the order of input, and abstractions are surrounded by []'s.

(λmn.(λsz.ms(nsz)))(λsz.sz)(λsz.sz)
m -> 1, n -> 2, s -> 3, z -> 4
(λmn.(λsz.ms(nsz))) = [1 3 (2 3 4)]
(λsz.sz) = [1 2]
[1 3 (2 3 4)][1 2][1 2]
[[1 2] 2 (1 2 3)][1 2] ;1 was replaced with [1 2], remaining terms decremented
[[1 2] 1 ([1 2] 1 2)] ;1 was replaced with [1 2], remaining terms decremented
[1 ([1 2] 1 2)] ;1 2 was replaced by 1
[1 (1 2)] ;1 2 was replaced by 1 2
(λmn.m(mn))

The notation above is just a more compact and unambiguous way of expressing lambda abstractions. Compound abstractions reduce to a single normal form automatically; alpha reduction is not needed. Single positive indexes are used for bound terms. Negative indexes are used for kill terms. Negative indexes are placed last, in order of decreasing magnitude.

I = λx.x = [1]
K = λxy.x = [1 -2]
KI = λyx.x = [2 -1]
S = λxyz.xz(yz) = [1 3 (2 3)]

Applying S to K:

[1 3 (2 3)][1 -2]
[[1 -2] 2 (1 2)] ;1 was replaced with [1 -2], remaining terms decremented
[[.2 -1] (1 2)] ;reducing: 1 replaced by .2*, -2 decremented (in magnitude)
[2 -2 -1] ;(1 2) bound terms become kill terms due to -1.
[2 -1] = KI ;-2 kill term is void due to surviving 2 term

* the "."
notation signifies that the bound term is from the outer abstraction and must be used to prevent decrementing and double replacement of the term until the substitution of all terms in the abstraction is complete.

[2 -1][anything] ;applying KI to anything
[1] = I ;yields I, therefore SK[anything] = [1] = I

Applying K to K:

[1 -2][1 -2]
[[1 -2] -1] ;kill terms are absorbed and also increase the magnitude of bound terms
[2 -3 -1] ;applying this result to anything yields K
[2 -3 -1][anything]
[2 -1]

-

Please add a bit of commentary/motivation. – vonbrand Jan 24 '13 at 9:53

Your answer lacks any description and thus looks like some sort of Sudoku puzzle. What is your representation? What are the reduction rules? – Dave Clarke Jan 24 '13 at 18:02

Then do it in the answer; you can edit by clicking on, well, "edit". – Raphael Jan 25 '13 at 11:31

Here is a web based version of an interpreter console for this notation. It does not show the reduction steps but does evaluate correctly for everything that I have been able to throw at it. I am interested in getting feedback on it. – dansalmo Mar 21 '13 at 21:22
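Since the terms in the question are just Church numerals and Church addition, the whole reduction can also be sanity-checked by encoding the abstractions directly as Python closures (a quick sketch, not a general β-reducer):

```python
# (λsz.sz) is the Church numeral 1; (λmn.λsz.m s (n s z)) is Church addition.
one = lambda s: lambda z: s(z)
add = lambda m: lambda n: lambda s: lambda z: m(s)(n(s)(z))

result = add(one)(one)

# Interpret a Church numeral by applying it to a successor function and 0.
to_int = lambda c: c(lambda x: x + 1)(0)
print(to_int(result))   # 2, matching the normal form λsz.s(sz)
```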
# A contractor is required by a county planning department to submit one, two, three, four, or five forms

###### Question:

A contractor is required by a county planning department to submit one, two, three, four, or five forms (depending on the nature of the project) in applying for a building permit. Let Y = the number of forms required of the next applicant. The probability that y forms are required is known to be proportional to y; that is, p(y) = ky for y = 1, ..., 5.

(a) What is the value of k?

(b) What is the probability that at most three forms are required?

(c) What is the probability that between two and four forms (inclusive) are required?

(d) Could p(y) = y²/50 for y = 1, ..., 5 be the pmf of Y?
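A quick worked sketch:

(a) Since $\sum_{y=1}^{5} p(y) = k(1+2+3+4+5) = 15k$ must equal 1, we get $k = 1/15$.

(b) $P(Y \le 3) = (1+2+3)/15 = 6/15 = 0.4$.

(c) $P(2 \le Y \le 4) = (2+3+4)/15 = 9/15 = 0.6$.

(d) No: $\sum_{y=1}^{5} y^{2}/50 = 55/50 = 1.1 \ne 1$, so $p(y) = y^{2}/50$ cannot be a pmf.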
# Sharpening a bound on $\zeta'(s)$

I want to find an upper bound for $\zeta'(s)$ along a vertical line $\Re(s)=b$, where $-1<b<0$. One way to do this is using $$\frac{\zeta'(b+iT)}{\zeta(b+iT)}=O_b(\log T)$$ and $$\zeta(b+iT)=O_{b,\varepsilon}(T^{1/2-b+\varepsilon})$$ for each $\varepsilon>0$. Multiplying them gives us $$\zeta'(b+iT)=O_{b,\varepsilon}(T^{1/2-b+\varepsilon}\log T)$$ as $T\rightarrow\infty$. I want to know if there is a bound, for fixed $b$, that is sharper.

• The functional equation expresses $\zeta'(b+iT)$ in terms of $\zeta(1-b-iT)$, $\zeta'(1-b-iT)$, and the complex Gamma function and its derivative on the lines of real part $b$ and $1-b$. If $-1<b<0$ then $1 < 1-b < 2$, so $\zeta(1-b-iT)$ and $\zeta'(1-b-iT)$ are bounded and it's just a matter of estimating the relevant $\Gamma$ and $\Gamma'$ factors. Mar 10 '14 at 3:34
• Use the functional equation connecting $\zeta(b+iT)$ to $\zeta(1-b-iT)$ and differentiate. Then use Stirling's formula. This gives the bound $|\zeta^{\prime}(b+iT)|= O_b(T^{1/2-b}\log T)$ (your expression seems to have a typo). This is also best possible, since $|\zeta(1-b-iT)|$ is bounded away from zero. Mar 10 '14 at 3:34
• Yes, you are correct about the typo, so I fixed that. Thanks for letting me know that this is the best possible estimate. – B.W. Mar 10 '14 at 3:40
• Here you can find some bounds: math.univ-lille1.fr/~ramare/TME-EMT/Articles/Art06.html – user21574 Mar 10 '14 at 13:27

I'm not sure if you need me to go further, but if I were you I'd start out with the functional equation. Taking the derivative of both sides of the functional equation, you can derive:

$\frac{\zeta'(1-s)}{\zeta(1-s)} = \log(2\pi)+\frac{\pi}{2}\tan\left(\frac{\pi s}{2}\right) - \psi(s)-\frac{\zeta'(s)}{\zeta(s)}$.

Letting $s = 1-b-iT$, the left-hand side will be exactly what you're looking at.

I am somewhat confused, though. You say the vertical line where $\Re(s)=b$ with $-1<b<0$. If it's a vertical line, then wouldn't $T$ vary and $b$ be held fixed? If $T$ is allowed to vary (i.e., a vertical line), then wouldn't the answer depend on whether the line crosses the real axis? I believe $\tan(\frac{\pi}{2}s)$ blows up in magnitude when $b$ is really small in absolute value. If $T$ is held fixed and $b$ is allowed to vary, then I believe the $\tan$ term is bounded, since you'd be a fixed distance away from the problem point $b=0$. If this is the case, Lucia is correct, I believe.

Perhaps I'm wrong... it is awfully late and my mind is worn out from a full day of flying across the country. I apologize if I made a mistake somewhere in my logic. If I am wrong in my logic, could someone explain? I'm very interested as well.
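One can also look at the growth numerically; a small mpmath sketch (my own, with the arbitrary choice b = -1/2 inside (-1, 0)):

```python
# Ratio |zeta'(b + iT)| / (T^(1/2 - b) * log T) for growing T; by the comments
# above it should stay bounded and bounded away from zero.
from mpmath import mp, mpc, zeta, log

mp.dps = 15
b = -0.5
for T in (10, 100, 1000, 10000):
    d = abs(zeta(mpc(b, T), derivative=1))
    print(T, d / (T ** (0.5 - b) * log(T)))
```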
## IBPS Clerk Quant Test 5

Instructions: What approximate value will come in place of the question mark (?) in the given questions? (You are not expected to calculate the exact value.)

Question 1

$$1439 \div 16 \times 14.99 + \sqrt{288} = ?$$

Question 2

$$11.92^{2}+ 16.01^{2} = ?^{2} \times 3.85^{2}$$

Question 3

(19.97% of 781) + ? + (30% of 87) = 252

Question 4

$$820.01 - 21 \times 32.99 + ? = 240$$

Question 5

$$299 \div 12 \times 13.95 + ? = 24.02^{2}$$
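If you want to check your mental estimates, the exact values are a few lines of Python away (a throwaway sketch; each expression is rearranged to isolate the ?):

```python
from math import sqrt

print(1439 / 16 * 14.99 + sqrt(288))           # Q1: ~1365, so ? is about 1365
print(sqrt((11.92**2 + 16.01**2) / 3.85**2))   # Q2: ~5.2, so ? is about 5
print(252 - 0.1997 * 781 - 0.30 * 87)          # Q3: ~70
print(240 - (820.01 - 21 * 32.99))             # Q4: ~113
print(24.02**2 - 299 / 12 * 13.95)             # Q5: ~229
```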
### Author Topic: Problem 2  (Read 9812 times)

#### Thomas Nutz

• Full Member
• Posts: 26
• Karma: 1

##### Problem 2
« on: November 12, 2012, 01:13:34 PM »

Hey,

I get an ODE for problem 2, but it is not of the Euler type and I don't know how to solve it. Could anyone give me a hint? Thanks!

#### Victor Ivrii

• Elder Member
• Posts: 2570
• Karma: 0

##### Re: Problem 2
« Reply #1 on: November 12, 2012, 01:46:02 PM »

Quote: "Hey, I get an ODE for problem 2, but it is not of the Euler type and I don't know how to solve it. Could anyone give me a hint? Thanks!"

I am not asking you to find the solutions outright, but to try to find them. The crucial question is: what ODE should $u(r)$ satisfy?

#### Thomas Nutz

• Full Member
• Posts: 26
• Karma: 1

##### Re: Problem 2
« Reply #2 on: November 12, 2012, 02:09:39 PM »

Sorry, I don't understand. I did find an ODE for $u(r)$, which is solved by Bessel functions. When you ask me to "try to find solutions", what do you expect me to do? Thanks again!

#### Victor Ivrii

• Elder Member
• Posts: 2570
• Karma: 0

##### Re: Problem 2
« Reply #3 on: November 12, 2012, 03:05:42 PM »

Quote: "Sorry, I don't understand. I did find an ODE for $u(r)$, which is solved by Bessel functions. When you ask me to "try to find solutions", what do you expect me to do? Thanks again!"

YES, exactly what you did: you found the equation, and you found that it is solved by a class of special functions (which were invented exactly to solve this and more general equations).

#### Ian Kivlichan

• Sr. Member
• Posts: 51
• Karma: 17

##### Re: Problem 2
« Reply #4 on: November 19, 2012, 09:30:00 PM »

Hopeful solutions attached! For 2.b), the ODE is exactly the same, but we switch the sign of $k^2 u$.

#### Chen Ge Qu

• Full Member
• Posts: 16
• Karma: 8

##### Re: Problem 2
« Reply #5 on: November 19, 2012, 09:32:30 PM »

Attached

#### Kerri, Hu

• Newbie
• Posts: 2
• Karma: 1

##### Re: Problem 2
« Reply #6 on: November 19, 2012, 09:35:44 PM »

Question 2a)

#### Calvin Arnott

• Sr. Member
• Posts: 43
• Karma: 17
• OK

##### Re: Problem 2
« Reply #7 on: November 19, 2012, 09:37:10 PM »

Problem 2

Part a. Let $k > 0$. Find the solutions that depend only on $r$ of the equation $$\Delta u := u_{xx} + u_{yy} = k^2 u$$ What ODE satisfies $u\left(r\right)$?
We have that transforming Laplacian $\Delta_{\left(x,y\right)} := \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}$ into polar coordinates  $u\left(x,y\right) \rightarrow u\left(r,\theta\right)$ yields us the polar form of the Laplacian: $$\Delta_{\left(r,\theta\right)} := \frac{\partial^2}{\partial r^2} + \frac{1}{r} \frac{\partial}{\partial r} + \frac{1}{r^2} \frac{\partial^2}{\partial \theta ^2}$$ Since we assume a form of $u\left(r,\theta\right)$ which depends only on $r$, we have that $u\left(r\right)$ is constant with respect to $\theta$, so $u_{\theta\theta} = 0$, and our PDE simplifies to the ODE: $$\Delta_{\left(r,\theta\right)} u\left(r,\theta\right) := u_{rr} + \frac{1}{r} u_{r} + \frac{1}{r^2} u_{\theta\theta} = u_{rr} + \frac{1}{r} u_{r} = k^2 u$$ We proceed by finding a solution for $u\left(r\right)$ in the form of Bessel's function, which solves the Bessel differential equation of the form: $$u_{zz} + \frac{1}{z} u_{z} +\left(1-\frac{s^2}{z^2}\right)u = 0$$ $$\text{Let: } u\left(r\right) = v\left(i k r\right) \implies u_{r} = i k v_{r}, \phantom{\ } u_{rr} = - k^2 v_{rr}$$ $$\implies \Delta_{\left(r,\theta\right)} u\left(r\right) := u_{rr} + \frac{1}{r} u_{r} - k^2 u = 0 \rightarrow - k^2 v_{rr} + \frac{1}{r} i k v_{r} - k^2 v = 0$$ Moreover, as $k > 0$, $k^2 \ne 0$, so dividing through by $-k^2$ yields: $$v_{rr} - \frac{i}{k r} v_{r} + v = 0$$ Notice that $-\left(i * i\right) = -\left(-1\right) = 1$, so $- \frac{i}{k r} = \frac{\left(-i\right)}{\left(1\right) k r} = \frac{\left(-i\right)}{\left(-i * i\right) k r} = \frac{\overline{i}}{\overline{i}} \frac{1}{i k r} = \frac{1}{i k r}$ so our ODE in $v\left(i k r\right)$, say $v\left(z\right)$ where $z = i k r$ is equivalent to: $$v_{rr} + \frac{1}{i k r} v_{r} + v = v_{zz} + \frac{1}{z} v_{z} + v = 0$$ It's clear then that for $s = n = 0$: $$v_{zz} + \frac{1}{z} v_{z} + v = v_{zz} + \frac{1}{z} v_{z} +\left(1-\frac{s^2}{z^2}\right)v = 0$$ Because $n = 0$ is an integer, our solution for $v\left(z\right)$ is given in the form of: $v\left(z\right) = A J_0\left(z\right) + B N_0\left(z\right)$ for some $\{A,B\} \in \mathbb{R}$, where $J_0\left(z\right)$ is Bessel's function of the first kind of order $0$, and $N_0\left(z\right)$ is Bessel's function of the second kind, namely, the Neumann function. $$J_n\left(z\right) = \sum_{j=0}^{\infty}\frac{\left(-1\right)^j}{\Gamma\left(j+1\right)\Gamma\left(j+n+1\right)}\left(\frac{z}{2}\right)^{2j+n}$$ $$N_n\left(z\right) = \lim_{s \to n} N_s\left(z\right) = \lim_{s \to n} \frac{J_{s}\left(z\right) \cos\left(\pi s\right) - J_{-s}\left(z\right)}{\sin\left(\pi s\right)}$$ $$\implies v\left(z\right) = v\left(i k r\right) = u\left(r\right) = A J_0\left(i k r\right) + B N_0\left(i k r\right) \phantom{\ } \blacksquare$$ Part b. Let $k > 0$. Find the solutions that depend only on $r$ of the equation $$\Delta u := u_{xx} + u_{yy} = -k^2 u$$ What ODE satisfies $u\left(r\right)$? As the Laplacian term of this PDE is identical to part a. we follow the same derivation by first changing to polar coordinates and assuming a radial solution. 
Then we have an ODE in $u\left(r\right)$: $$\Delta_{\left(r,\theta\right)} u\left(r,\theta\right) := u_{rr} + \frac{1}{r} u_{r} + \frac{1}{r^2} u_{\theta\theta} = u_{rr} + \frac{1}{r} u_{r} = - k^2 u$$ We wish to again factor this into a Bessel differential equation: $$u_{zz} + \frac{1}{z} u_{z} +\left(1-\frac{s^2}{z^2}\right)u = 0$$ $$\text{Let: } u\left(r\right) = v\left(k r\right) \implies u_{r} = k v_{r}, \phantom{\ } u_{rr} = k^2 v_{rr}$$ $$\implies \Delta_{\left(r,\theta\right)} u\left(r\right) := u_{rr} + \frac{1}{r} u_{r} + k^2 u = 0 \rightarrow k^2 v_{rr} + \frac{1}{r} k v_{r} + k^2 v = 0$$ Let $z = k r$. Again, $k > 0$, so $k^2 \ne 0$, and dividing through gives us: $$v_{rr} + \frac{1}{k r} v_{r} + v = v_{zz} + \frac{1}{z} v_{z} + v = 0$$ which is clearly a Bessel differential equation with $s = n = 0$: $$v_{zz} + \frac{1}{z} v_{z} + v = v_{zz} + \frac{1}{z} v_{z} +\left(1-\frac{s^2}{z^2}\right)v = 0$$ So our solution for $v\left(z\right)$ is again $v\left(z\right) = A J_0\left(z\right) + B N_0\left(z\right)$ for some $\{A,B\} \in \mathbb{R}$. $$\implies v\left(z\right) = v\left(k r\right) = u\left(r\right) = A J_0\left(k r\right) + B N_0\left(k r\right) \phantom{\ } \blacksquare$$

#### Ian Kivlichan

• Sr. Member
• Posts: 51
• Karma: 17

##### Re: Problem 2
« Reply #8 on: November 19, 2012, 09:48:54 PM »

Calvin: I wasn't really sure that for 2.b) you could have a Bessel function of a complex variable, or if you'd need to do something slightly different?

#### Victor Ivrii

• Elder Member
• Posts: 2570
• Karma: 0

##### Re: Problem 2
« Reply #9 on: November 20, 2012, 05:48:17 AM »

Quote: "Calvin: I wasn't really sure that for 2.b) you could have a Bessel function of a complex variable, or if you'd need to do something slightly different?"

Actually, Bessel functions are analytic functions. If you consider the $n$-dimensional problem and a not necessarily spherically symmetric solution, then

$$\Delta =\partial_r^2 + (n-1)r^{-1}\partial_r + r^{-2}\Lambda \tag{1}$$

where $\Lambda$ is the Laplace-Beltrami operator on the sphere (the $(n-1)$-dimensional sphere $\mathbb{S}^{n-1}$ in $\mathbb{R}^n$). If we consider the equation $\Delta u=0$ and try to solve it by separation of variables, we get

$$\frac{r^2 R''+(n-1)r R'}{R}+\frac{\Lambda V}{V}=0 \tag{2}$$

where $V$ contains the spherical variables (all except $r$), and therefore

$$\Lambda V=-\lambda V, \tag{3}$$

$$r^2 R''+(n-1)r R'=\lambda R. \tag{4}$$

One can prove that such solutions $U=VR$ are polynomials of degree $m=0,1,2,\ldots$ with respect to $x$ (assuming the solutions are not singular at $0$); therefore $R=r^m$ (ignoring the coefficient), and $\lambda_m= m(m+n-2)$, but the multiplicities are large: the most interesting case is $n=3$, when $\lambda_m=m(m+1)$ has multiplicity $2m+1$ (with application to the theory of the hydrogen atom). Solutions of (3) are spherical functions, which I don't describe.

Let us return to $k\ne 0$. In this case, separating variables, we get

$$\Bigl[\frac{r^2 R''+(n-1)r R'}{R}\mp k^2r^2\Bigr]+\frac{\Lambda V}{V}=0 \tag{5}$$

and therefore we get (3), implying $\lambda=m(m+n-2)$, and instead of (4) we get

$$r^2 R''+(n-1)r R' \mp k^2r^2R -\lambda R=0, \tag{6}$$

which is a Bessel equation. By scaling $r_{new}=k r_{old}$ we can get rid of $k$ (and even of the sign, by making the scaling complex). I just want to note that these Bessel functions depend on $n$ and $m$. However, for odd $n$ (in particular for $n=3$) and $\lambda=0$ they are elementary functions (see Problem 1 for $n=3$).
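As a numerical cross-check of the derivations in this thread, one can verify by finite differences that scipy's $J_0$ and $Y_0$ (the Neumann function $N_0$) satisfy the $s = 0$ Bessel equation $v_{zz} + z^{-1}v_{z} + v = 0$ (a quick sketch):

```python
import numpy as np
from scipy.special import j0, y0

z = np.linspace(0.5, 10.0, 20001)
h = z[1] - z[0]
for name, f in (("J0", j0), ("Y0", y0)):
    v = f(z)
    v1 = np.gradient(v, h)          # first derivative, central differences
    v2 = np.gradient(v1, h)         # second derivative
    res = v2 + v1 / z + v           # residual of the Bessel equation
    print(name, np.max(np.abs(res[5:-5])))   # ~0, up to discretization error
```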
## Yolov3 Explained

"You Only Look Once" (YOLO) is an algorithm that uses convolutional neural networks for real-time object detection. It applies a single neural network to the full image: the network divides the image into regions and predicts bounding boxes and class probabilities for each region, so one forward pass simultaneously yields multiple boxes and their class scores. Its latest iteration, YOLOv3 (2018), recognizes up to 80 classes out of the box (person, bicycle, car, motorbike, aeroplane, etc.) and can be retrained to detect custom classes; it is a CNN that does more than simple classification, and in the same spirit you can take a classifier like VGGNet or Inception and turn it into a detector.

YOLOv3 is an improved version of YOLOv2 with greater accuracy and a higher mAP score, which is the main reason for choosing v3 over v2. The model comes in several variants (yolov3.cfg, yolov3-spp.cfg, yolov3-tiny.cfg), with the usual trade-off: the faster the model, the lower its accuracy; the slower the model, the better its accuracy. Tiny-YOLO has a lower mAP score on the COCO dataset than most object detectors, but it is light enough to pair with a Raspberry Pi and a Movidius NCS: YOLOv3-tiny runs on a Raspberry Pi 3 Model B (very slowly) and works well on an NCS2, while the full model is a better match for boards such as the Jetson TX2 or the Jetson AGX Xavier.

The first step to understanding YOLO is how it encodes its output. The model predicts a 3-D tensor encoding bounding box, objectness, and class predictions at each of three scales; for a 416x416 input (the width= and height= values set in yolov3.cfg), the two coarser output tensors are 13x13 and 26x26, respectively. Boxes are regressed as offsets from anchor boxes, and in YOLOv3 the anchor sizes are actual pixel values. During training, the loss terms are calculated over tensors of shape (B, all_feature_map_locations x 3, -), that is, over all the possible anchor boxes. At inference time, redundant detections are then discarded with non-maximum suppression.

To train on a custom dataset (for example RSNA), you edit the 'classes' and 'filters' values in the config file: for instance classes=4 on line 610 and filters=27 on line 603, since filters = (classes + 5) x 3 in the convolutional layer just before each detection layer, and similarly at the other detection heads (lines 689 and 696). The pretrained yolov3.weights are obtained by training YOLOv3 on COCO (Common Objects in Context), so the stock anchor boxes generally differ from those of your own dataset, and bounding boxes in VOC form must first be converted to the darknet form. A TensorFlow implementation of the YOLOv3 model can be converted directly to the OpenVINO IR (YOLOv1 and YOLOv2 models must first be converted to TensorFlow using DarkFlow). A typical tutorial pipeline has four parts: Part 1, understanding how YOLO works; Part 2, creating the layers of the network architecture; Part 3, loading the pretrained weights file (yolov3.weights) and converting it into the TensorFlow 2 format; Part 4, decoding the predicted bounding boxes and removing unnecessary detections with non-max suppression. The weights can also be loaded directly into OpenCV's DNN module, as sketched below.
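Here is a minimal sketch of that OpenCV route, assuming OpenCV 3.4+ with the DNN module and that yolov3.cfg, yolov3.weights, and coco.names sit in the working directory (the file names are the stock ones, not from any particular tutorial):

```python
import cv2
import numpy as np

# Load the Darknet-format YOLOv3 model into OpenCV's DNN module.
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
classes = open("coco.names").read().strip().split("\n")

img = cv2.imread("dog.jpg")
# YOLOv3 expects a square, scaled blob; here 416x416, pixels in [0, 1], BGR->RGB.
blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)

# Forward pass through the three detection (output) layers.
out_names = net.getUnconnectedOutLayersNames()
outputs = net.forward(out_names)
for out in outputs:
    # Each row: [cx, cy, w, h, objectness, 80 class scores], coords relative to the image.
    print(out.shape)
```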
### Architecture and training

YOLOv3 makes detections at three different scales, using strides of 32, 16, and 8, in order to accommodate different object sizes, and it sits on the larger Darknet-53 backbone with residual connections; this is why, although the mAP of YOLOv3-416 is high (79.3% in the cited experiments), its speed suffers compared with YOLOv2-416. Most of the layers in the detector do batch normalization right after the convolution, do not have biases, and use Leaky ReLU activation. Variants include YOLOv3-SPP (in the author's own words, "YOLOv3 with spatial pyramid pooling, or something"), which posts a better mAP than plain YOLOv3; partial residual networks (PRN), a stack of partial residual connection blocks; and SlimYOLOv3, a pruned model with fewer trainable parameters and lower computation requirements than the original YOLOv3, and hence more convenient for real-time object detection.

In the loss function, the grid cell into which the center of an object falls is responsible for that object: YOLO predicts multiple bounding boxes per grid cell, but at training time only one bounding box predictor is made responsible for each object. The coordinates b_x and b_y are the x and y coordinates of the box midpoint with respect to its grid cell, and the three lambda constants are simply weights that emphasize one aspect of the loss or another (coordinates, objectness, classification). This one-stage design contrasts with two-stage detectors such as Faster R-CNN, whose main contribution, the region proposal network (RPN), uses anchor boxes and feeds its proposals into an RoI pooling layer; in both families the regressed offsets must be added back to the anchor or reference boxes to recover the final bounding box, and Mask R-CNN replaces RoI pooling with ROIAlign, which samples the feature map at several points and applies bilinear interpolation.

When training with Darknet it helps to understand the training output: a log line such as `0.386559 avg, 0.001000 rate` reports the running average loss and the current learning rate. Since custom datasets are often small, the number-one concern is overfitting, and pre-training on a large auxiliary dataset before training on the target set brings a demonstrable gain. On the hardware side, Perceive (backed by Xperi) emerged from stealth mode with Ergo, an artificial intelligence processor for edge devices that it says is 20 to 100 times more power-efficient than competitors: up to 55 TOPS/W, running YOLOv3, a large network with 64 million parameters, at 30 frames per second while consuming just 20 mW. Teig explained that the initial idea was to combine Xperi's classical knowledge of image and audio processing with machine learning; the power efficiency is down to aggressive power gating and clock gating, which exploit the deterministic nature of neural network processing: unlike other types of code there are no branches, so timings are known at compile time. Finally, because the stock priors were obtained by clustering box dimensions on VOC and COCO, it is common to re-estimate anchor boxes on your own labels with k-means, as sketched below.
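A small sketch of that k-means step, assuming the labels have already been reduced to (width, height) pairs in pixels; the 1 - IoU distance follows the YOLO papers, while the sizes and seeds here are purely illustrative:

```python
import numpy as np

def iou_wh(boxes, centroids):
    """IoU between (w, h) pairs, as if all boxes shared the same top-left corner."""
    inter = np.minimum(boxes[:, None, 0], centroids[None, :, 0]) * \
            np.minimum(boxes[:, None, 1], centroids[None, :, 1])
    area_b = boxes[:, 0] * boxes[:, 1]
    area_c = centroids[:, 0] * centroids[:, 1]
    return inter / (area_b[:, None] + area_c[None, :] - inter)

def kmeans_anchors(boxes, k=9, iters=100, seed=0):
    """Cluster (w, h) box sizes with the 1 - IoU distance used by YOLO."""
    rng = np.random.default_rng(seed)
    centroids = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        # Minimizing (1 - IoU) is the same as maximizing IoU.
        assign = np.argmax(iou_wh(boxes, centroids), axis=1)
        new = np.array([boxes[assign == i].mean(axis=0) if np.any(assign == i)
                        else centroids[i] for i in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids[np.argsort(centroids.prod(axis=1))]  # sorted by area

# Example with random box sizes in pixels (YOLOv3 anchors are pixel values):
boxes = np.abs(np.random.default_rng(1).normal(80, 30, (500, 2)))
print(kmeans_anchors(boxes, k=9).round(1))
```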
### Running detection

Object detection is a task in computer vision that involves identifying the presence, location, and type of one or more objects in a photograph: cars on a road, oranges in a fridge, signatures in a document. In contrast with classification, the output is variable in length, since the number of objects detected may change from image to image. With Darknet compiled, detection runs from the command line; the webcam demo uses device index 0 (change it to 1, 2, and so on if you have more webcams), and note that the very low `thresh` of 0.001 sometimes seen during mAP evaluation is a constant in the program, separate from the display threshold:

```
./darknet detect cfg/yolov3.cfg yolov3.weights dog.jpg
./darknet detector test cfg/coco.data cfg/yolov3.cfg yolov3.weights -ext_output dog.jpg
./darknet detector demo cfg/coco.data cfg/yolov3.cfg yolov3.weights -c 0
```

The tutorial code in this family is designed to run on Python 3.5 and PyTorch 0.4, but other stacks work too: `pip install tensornets` gives a TensorFlow route, OpenCV loads the weights via readNetFromDarknet(cfg_file, weight_file), and for free compute you can train a YOLOv3 model with Darknet on the Colab 12GB-RAM GPU. If you train in Keras instead, the History callback, one of the default callbacks registered when training all deep learning models, records the training history for later inspection. For evaluation, rather than comparing precision-recall curves it is sometimes useful to have a single number that characterizes the performance of a detector; a common metric is the average precision. On COCO test-dev, YOLOv3 reaches a mAP of 57.9% while processing images at 30 FPS on a Pascal Titan X, and in mAP measured at 0.5 IOU it is on par with Focal Loss (RetinaNet) while being much faster.

A related recipe is transfer learning: take a pre-trained classifier and put a new head on it. The classic Keras version (the 200-class softmax head is the usual documentation example):

```python
from keras.applications.inception_v3 import InceptionV3
from keras.layers import Dense, GlobalAveragePooling2D
from keras.models import Model

# create the base pre-trained model
base_model = InceptionV3(weights='imagenet', include_top=False)

# add a global spatial average pooling layer
x = base_model.output
x = GlobalAveragePooling2D()(x)
# let's add a fully-connected layer
x = Dense(1024, activation='relu')(x)
# and a logistic layer; say we have 200 classes
predictions = Dense(200, activation='softmax')(x)

model = Model(inputs=base_model.input, outputs=predictions)
```

Reported applications cover a wide range: ambient assisted living (AAL) environments that assist and monitor disabled and elderly people by detecting events such as entering potentially dangerous areas, potential fall events, or extended stays in the same place; hand-gesture interaction in virtual, augmented, and mixed reality; endoscopy, where automatic detection supports the follow-up and treatment of disease such as cancer and inflammation in hollow organs and body cavities; computer-aided diagnosis (CAD) of breast lesions in ultrasound images, traditionally two separate steps (locate the lesion region of interest, then classify it); skin lesion analysis, where YOLOv3 locates the lesion and seeds the semi-automatic GrabCut segmentation algorithm; identifying and positioning common mushrooms from bounding boxes; sheep-face detection for livestock-insurance claim fraud, where a YOLOv3-tiny-PRN model was bootstrapped from 500 hand-annotated images and then retrained with augmentations such as random flips, hue and saturation changes, varying scales, and mix-up; infrared aircraft tracking; and traffic monitoring in the expanding cities of developing nations, including traffic-sign detection on the German Traffic Sign Detection Benchmark (GTSDB). Whatever the application, decoding the network's raw output and suppressing duplicate boxes is the last step, sketched below.
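Continuing the earlier OpenCV sketch (it reuses the `img`, `outputs`, and `classes` variables from that snippet; the thresholds here are illustrative, not any tutorial's values):

```python
import cv2
import numpy as np

conf_thresh, nms_thresh = 0.5, 0.4
H, W = img.shape[:2]

boxes, scores, class_ids = [], [], []
for out in outputs:
    for row in out:
        probs = row[5:]
        cls = int(np.argmax(probs))
        score = float(row[4] * probs[cls])  # objectness * class probability
        if score < conf_thresh:
            continue
        # Convert relative (cx, cy, w, h) to pixel (x, y, w, h) with top-left origin.
        cx, cy, w, h = row[:4] * np.array([W, H, W, H])
        boxes.append([int(cx - w / 2), int(cy - h / 2), int(w), int(h)])
        scores.append(score)
        class_ids.append(cls)

# Non-max suppression drops overlapping, lower-scoring duplicates.
keep = cv2.dnn.NMSBoxes(boxes, scores, conf_thresh, nms_thresh)
for i in np.array(keep).flatten():
    x, y, w, h = boxes[i]
    print(classes[class_ids[i]], round(scores[i], 3), (x, y, w, h))
```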
##### Question 1.8

**lydiark25:** I'm confused about this question's answer. How is it that $f(1)=0$, $f(2m)=m$, and $f(2m+1)=-m$, and how does that answer the problem? Thank you! -Lydia Kirillova

**weisbart** (Jan. 15, 2015, 5:33 p.m.): Try plugging in numbers. What are $f(1)$, $f(2)$, $f(3)$, $f(4)$, $f(5)$, $f(6)$, and $f(7)$? Anyone want to try it?

**martha2A** (Jan. 15, 2015, 11:13 p.m.): $f(1)=0$, $f(2)=1$, $f(3)=-1$, $f(4)=2$, $f(5)=-2$, $f(6)=3$, $f(7)=-3$. These seem to follow a pattern: the even $x$-values have positive $y$-values and the odd $x$-values have negative $y$-values.

**martha2A** (Jan. 15, 2015, 11:26 p.m.): I think the answer would be $f(2m)=m$ for even $x$-values and $f(2m-1)=-m+1$ for odd $x$-values. We get $f(1)=0$ because, using the formula for the odd $x$-values, we can get one inside the parentheses by letting $m=1$, which then gives us $f(1)=0$.

**weisbart** (Jan. 18, 2015, 10:58 p.m.): You're right, Martha! That works fine, and I like your answer better than the one I gave. The one I gave will also work, but it requires an additional condition. That's why I have three conditions: $f(2m+1)$ only gives the function for the odd numbers starting with 3, so I need an additional condition for $f(1)$. In the end it doesn't make much difference because the function is the same, but written your way it is a little simpler.
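One compact way to write both answers at once:
$$f(n) = \begin{cases} \dfrac{n}{2}, & n \text{ even},\\[4pt] -\dfrac{n-1}{2}, & n \text{ odd}, \end{cases}$$
so that $f(2m)=m$, $f(2m+1)=-m$, and in particular $f(1)=0$. Every integer is then hit exactly once as $n$ runs through $1, 2, 3, \ldots$, producing the list $0, 1, -1, 2, -2, \ldots$, which is presumably the enumeration of $\mathbb{Z}$ the problem asks for.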
# A study on the improvement of the electron transport characteristics of $SF_6$+Ar gas mixtures

• Published : 1998.01.01

#### Abstract

In this paper, the electron swarm parameters in 0.5% and 0.2% $SF_6$+Ar mixtures are measured by the time-of-flight method over the E/N range from 30 to 300 Td. The measurements have been carried out with a double-shutter drift tube with variable drift distance from the cathode. A two-term approximation of the Boltzmann equation and a Monte Carlo simulation have also been used to study the electron transport coefficients. We have calculated $W$, $ND_L$, $ND_T$, $\alpha$, $\eta$, $\alpha-\eta$, and the limiting breakdown field-to-gas-density ratio in the $SF_6$+Ar mixtures. The electron energy distribution function has been analysed in the $SF_6$+Ar mixtures at E/N = 200 Td for a case in the equilibrium region of the mean electron energy. The measured and calculated results have been compared with each other.
# NeuPDE: Neural Network Based Ordinary and Partial Differential Equations for Modeling Time-Dependent Data

We propose a neural network based approach for extracting models from dynamic data using ordinary and partial differential equations. In particular, given a time-series or spatio-temporal dataset, we seek to identify an accurate governing system which respects the intrinsic differential structure. The unknown governing model is parameterized by using both (shallow) multilayer perceptrons and nonlinear differential terms, in order to incorporate relevant correlations between spatio-temporal samples. We demonstrate the approach on several examples where the data is sampled from various dynamical systems and give a comparison to recurrent networks and other data-discovery methods. In addition, we show that for MNIST and Fashion MNIST, our approach lowers the parameter cost as compared to other deep neural networks.

## 1 Introduction

Modeling and extracting governing equations from complex time-series can provide useful information for analyzing data. An accurate governing system could be used for making data-driven predictions, extracting large-scale patterns, and uncovering hidden structures in the data. In this work, we present an approach for modeling time-dependent data using differential equations which are parameterized by shallow neural networks, but retain their intrinsic (continuous) differential structure. For time-series data, recurrent neural networks (RNN) are often employed for encoding temporal data and forecasting future states. Part of the success of RNN is due to the internal memory architecture, which allows these networks to better incorporate state information over the length of a given sequence.
Although widely successful for language modeling, translation, and speech recognition, their use in high-fidelity scientific computing applications is limited. One can observe that a sequence generated by an RNN may not preserve the temporal regularity of the underlying signals (see, for example, [2] or Figure 2.3) and thus may not represent the true continuous dynamics. For imaging tasks, deep neural networks (DNN) such as ResNet [10, 11], FractalNet [18], and DenseNet [13] have been successful in extracting complex hierarchical spatial information. These networks utilize intra-layer connectivity to preserve feature information over the network depth. For example, the ResNet architecture uses convolutional layers and skip connections. The hidden layers take the form

$$x_{k+1} = x_k + f(x_k, \theta_k),$$

where $x_k$ represents the features at layer $k$ and $f$ is a convolutional neural network (or, more generally, any universal approximator) with trainable parameters $\theta_k$. The evolution of the features over the network depth is equivalent to applying the forward Euler method to the ordinary differential equation (ODE) $\dot{x}(t) = f(x(t), \theta(t))$. The connection between ResNet's architecture, numerical integrators for differential equations, and optimal control has been presented in [4, 22, 9, 30, 40].

Recently, DNN-based approaches related to differential equations have been proposed for data mining, forecasting, and approximation. Examples of algorithms which use DNN for learning ODE and PDE include: learning from data using a PDE-based network [21, 20], deep learning for advection equations [3], approximating dynamics using ResNet with recurrent layers [25], and learning and modeling solutions of PDE using networks [28]. Other approaches for learning governing systems and dynamics involve sparse regularizers ($\ell_0$ or hard-thresholding approaches in [1, 29, 32, 33] and $\ell_1$ problems in [39, 35, 34]) or models based on Gaussian processes [27, 26]. Note that in [21, 20] it was shown that adding more blocks of the PDE-based network improved (experimentally) the model's predictive capabilities. Using ODEs to represent the network connectivity, [2] proposed a 'continuous-depth' neural network called ODE-Net. Their approach essentially replaces the layers in ResNet-like architectures with a trainable ODE. In [2], the authors state that their approach has several advantages, including the ability to better connect 'layers' due to the continuity of the model and a lower memory cost when training the parameters using the adjoint method. The adjoint method proposed in [2] may not be stable for a general problem. In [7], a memory-efficient and stable approach for training a neural ODE was given.

### 1.1 Contributions of this Work.

We present a machine learning approach for constructing approximations to governing equations of time-dependent systems that blends physics-informed candidate functions with neural networks. In particular, we construct a network approximation to an ODE which takes into account the connectivity between components (using a dictionary of monomials) and the differential structure of spatial terms (using finite difference kernels). If the user has prior knowledge of the structure or source of the data, e.g. fluids, mechanics, etc., one can incorporate commonly used physical models into the dictionary. We show that our approach can be used to extract ODE or PDE models from time-dependent data, improve the spatial accuracy of reduced-order models, and reduce the parameter cost for image classification (for the MNIST and Fashion MNIST datasets).
## 2 Modeling Via Ordinary Differential Equations

Given discrete-time measurements $\tilde{x}_i \approx x(t_i) \in \mathbb{R}^d$ generated from an unknown dynamic process, we model the time-series using a (first-order) ordinary differential equation, $\dot{x} = f(t, x)$, with $x(t_0) = x_0$. The problem is to construct an approximation to the unknown generating function $f$, i.e. we will learn a network $F$ such that $F \approx f$. Essentially, we are learning a neural network approximation to the velocity field. Following the approach in [2], the system is trained by a 'continuous' model and the function is parameterized by multilayer perceptrons (MLP). Since a two-layer MLP may require a large width to approximate a generic (nonlinear) function $f$, we propose a different parameterization. Specifically, to better capture higher-order correlations between components of the data and to lower the number of parameters needed in the MLP (see, for example, Figure 2.2), a dictionary of candidate inputs is added. Let $D$ be the collection (as a matrix) of the monomial terms up to a given order $p$ depending on $t$ and $x$, i.e. each element in $D$ can be written as:

$$t^k x_1^{\ell_1}\cdots x_d^{\ell_d}, \quad \text{for } 0 < k + \ell_1 + \cdots + \ell_d \le p.$$

One has freedom to determine the particular dictionary elements; however, the choice of monomial terms provides a model for the interactions between each of the components of the time-series and is used for model identification of dynamical systems in the general setting [1, 39, 34]. For simplicity, we will suppress the (user-defined) parameter $p$. In [1, 39, 34, 29], sparsity-regularized optimization with polynomial dictionaries is used to approximate the generating function of some unknown dynamic process. When the dictionary is large enough so that the 'true' function is in the span of the candidate space, the solutions produced by sparse optimization are guaranteed to be accurate. To avoid defining a sufficiently rich dictionary, we propose using an MLP (with a non-polynomial activation function) in combination with the monomial dictionary, so that general functions may be well-approximated by the network. Note that the idea of using products of the inputs appears in other network architectures, for example, the high-order neural networks [8, 38]. In many DNN architectures, batch normalization [14] or weight normalization [31] is used to improve the performance and stability during training. For the training of NeuPDE, a simple (uniform) normalization layer, $N$, is added between the input and dictionary layers, which maps $x$ to a vector in $[-1,1]^d$ (using the range over all components). Specifically, let $M$ and $m$ be the maximum and minimum values of the data, over all components and samples, and define the vector $N(x)$ as:

$$N(x) := \frac{2x - m\,\mathbf{1}_d}{M - m} - \mathbf{1}_d \in [-1,1]^d.$$

This normalization is applied to each component uniformly and enforces that each component of the dictionary is bounded by one (in magnitude). We found that this normalization was sufficient for stabilizing training and speeding up optimization in the regression examples. Without this step, divergence in the training phase was observed. To train the network: let $\theta$ be the vector of learnable parameters in the MLP layer; then the optimization problem is:

$$\min_\theta\ \sum_{i=1}^{N} L(x(t_i)) + \beta_1 r(\theta) + \frac{\beta_2}{2}\int_{t_0}^{t_N}\|\dot{x}(\tau)\|_{\ell_2}^2\, d\tau \tag{2.1}$$
$$\text{s.t.}\quad x(t_0) = x_0,\qquad \dot{x} = F(D(N(x)),\theta)$$

where $\beta_1$ and $\beta_2$ are regularization parameters set by the user and $F$ is an MLP. Specifically, let $\sigma$ be a smooth activation function, for example, the exponential linear unit (ELU)

$$\sigma_{\mathrm{ELU}}(x) = \begin{cases} x, & x \ge 0\\ e^x - 1, & x < 0\end{cases}$$

or the hyperbolic tangent, $\tanh$, which will be sufficiently smooth for integration using Runge-Kutta schemes.
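Before specifying the MLP, here is a concrete sketch (my own illustration under stated assumptions, not the authors' code) of the normalization $N$ and a monomial dictionary $D$ for a $d$-dimensional state:

```python
# Normalization into [-1, 1] and all monomials t^k x1^l1 ... xd^ld with
# 0 < k + sum(l) <= p, as described above.
import itertools
import numpy as np

def normalize(x, M, m):
    """Map x componentwise into [-1, 1] using the data range [m, M]."""
    return 2.0 * (x - m) / (M - m) - 1.0

def dictionary(t, x, p=2):
    """Return the monomial features of total degree at most p."""
    d = len(x)
    feats = []
    for degs in itertools.product(range(p + 1), repeat=d + 1):
        if 0 < sum(degs) <= p:
            k, ls = degs[0], degs[1:]
            feats.append(t**k * np.prod([xi**l for xi, l in zip(x, ls)]))
    return np.array(feats)

# Example: for a 3d state, a quadratic dictionary in x alone gives the
# 9 terms mentioned in the Lorenz experiment of Section 2.1.
z = normalize(np.array([1.0, -2.0, 0.5]), M=2.0, m=-2.0)
print(dictionary(0.3, z, p=2).shape)
```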
The right-hand side of the ODE is parameterized by a fully connected layer - activation layer - fully connected layer, i.e.

$$F(z,\theta) := A_2\,\sigma(A_1 z + b_1) + b_2$$

where $\theta$ is the vectorization of all components of the matrices $A_1$ and $A_2$ and the biases $b_1$ and $b_2$. Therefore, the first layer of the MLP takes a linear combination of candidate functions (applied to normalized data). Note that the dictionary does not include the constant term, since we include a bias in the first fully connected layer. The function $r$ is a regularizer on the parameters (for example, the $\ell_1$ norm), and the time-derivative is penalized in the $\ell_2$ norm. When used, the regularization parameters are set to fixed values (no tuning is performed). The constraints in Eqn. (2.1) are written in continuous time, i.e. the value of $x(t)$ is defined by the ODE and thus can be evaluated at any time $t$. For a given set of parameters $\theta$, the values $x(t_i)$ are obtained by numerical integration (for example, using a Runge-Kutta scheme). To optimize Eqn. (2.1) using a gradient-based method, the back-propagation algorithm or the adjoint method (following [2]) can be used. The adjoint method requires solving the ODE (and its adjoint) backward in time, which can lead to numerical instabilities. Following the approach in [7], checkpointing can be used to mitigate this issue. For all experiments, we take the 'discretize-then-optimize' approach. The objective function, Eqn. (2.1), is discretized as follows:

$$\min_\theta\ \sum_{i=1}^{N} L(x(t_i)) + \beta_1 r(\theta) + \frac{\tilde\beta_2}{2}\sum_{i=0}^{N-1}\|x(t_{i+1}) - x(t_i)\|_{\ell_2}^2 \tag{2.2}$$
$$\text{s.t.}\quad x(t_0) = x_0,\qquad x(t_i) = \Phi^{(i)}\big(x(t_0), F(D(N(\,\cdot\,)),\theta)\big)$$

where $\Phi$ is an ODE solver (i.e. a Runge-Kutta scheme) applied $i$ times, $\tilde\beta_2$ is $\beta_2$ rescaled by the time-step, and the time-derivative penalty is discretized with the integral approximated by piecewise constant quadrature. The constraint that the ODE is satisfied at every time-stamp has been transformed into the constraint that the sequence $x(t_i)$, for $i = 1,\dots,N$, is generated by the forward evolution of an ODE solver. The ODE solver takes (as its inputs) the initial data and the function that defines the RHS of the ODE. Note that the ODE solver can be 'sub-grid' in the sense that, over a time interval $[t_i, t_{i+1}]$, we can set the solver to take multiple (smaller) time-steps. This will increase the storage cost needed for back-propagation; however, taking multiple time-steps can better resolve complex dynamics embedded by $F$ (see examples below). Additionally, the time-derivative regularizer helps to control the growth of the generative model, which yields control over the solution and its regularity. Eqn. (2.2) is solved using the Adam algorithm [15] with temporal mini-batches. Each mini-batch is constructed by taking a random sampling of the time-stamps $t_i$ and then collecting all data-points over a short window starting at each sampled time-stamp. This can easily be extended to multiple time-series by sampling over each trajectory as well. Note that, in experiments, taking non-overlapping sub-intervals did not change the results. The back-propagation algorithm [23] applied to each of the subintervals can be done in parallel. For all of the regression examples, we set $L(x(t_i)) = \|x(t_i) - \tilde{x}_i\|_{\ell_2}^2$, where $\tilde{x}_i$ is the given (sequential) data. For the image classification examples, we used the standard cross-entropy for $L$.

###### Remark 2.1.

Layers: The right-hand side of the ODE is parameterized by one set of parameters $\theta$. Therefore, in terms of DNN layers, we are technically only training one "layer". However, changes in the structure between time-steps are taken into account by the time-dependence, i.e. the dictionary terms that contain $t$. Thus, we are embedding multiple layers into the time-dependent dictionary.
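The discretize-then-optimize rollout above can be sketched as follows (my illustration; the actual implementation differentiates through this loop with automatic differentiation, which plain NumPy does not do):

```python
# RK4 forward rollout and data-fitting loss, in the spirit of Eqn. (2.2).
import numpy as np

def rk4_step(f, t, x, h):
    """One classical fourth-order Runge-Kutta step for x' = f(t, x)."""
    k1 = f(t, x)
    k2 = f(t + h / 2, x + h / 2 * k1)
    k3 = f(t + h / 2, x + h / 2 * k2)
    k4 = f(t + h, x + h * k3)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def rollout_loss(f, x_data, t_data, substeps=4):
    """Squared error between an RK4 rollout and the measured sequence."""
    x, loss = x_data[0], 0.0
    for i in range(len(t_data) - 1):
        h = (t_data[i + 1] - t_data[i]) / substeps   # 'sub-grid' steps
        t = t_data[i]
        for _ in range(substeps):
            x = rk4_step(f, t, x, h)
            t += h
        loss += np.sum((x - x_data[i + 1]) ** 2)
    return loss

# Toy check on x' = -x, whose exact solution is exp(-t).
t = np.linspace(0.0, 1.0, 11)
data = np.exp(-t)[:, None]
print(rollout_loss(lambda t, x: -x, data, t))  # ~0 for the true model
```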
###### Remark 2.2.

'Continuous-depth': Eqn. (2.2) is fully discrete when using a Runge-Kutta solver for $\Phi$, and its gradient can be calculated using the back-propagation algorithm. If we used an adaptive ODE solver, such as Runge-Kutta 45, the forward propagation would generate a new set of time-stamps (which always contain the time-stamps $t_i$) in order to approximate the forward evolution, given an error tolerance and a set of parameters $\theta$. We tested the continuous-depth versions, using back-propagation and the adjoint method (see Appendix A). Although the backward evolution of the ODE constraint may not be well-posed (i.e. numerically unstable or unbounded), our experiments led to similar results. This could be due to the temporal mini-batches, which enforce short integration windows. Following [7], a checkpointing approach should be used, which would also control the memory footprint. It should be noted that the adjoint approach was tested for the ODE examples but not for the PDE examples, since the PDE examples are not time-reversible and would lead to backward-integration issues. In addition, when the network is discrete, one may still consider it a 'continuous-depth' approximation, since the underlying approximation can be refined by adding more time-stamps, without the need to retrain the MLP.

### 2.1 Autonomous ODE.

When the time-series data is known to be autonomous, i.e. the ODE takes the form $\dot{x} = f(x)$, one can drop the $t$-dependency in the dictionary. In this case, the monomials take the form $x_1^{\ell_1}\cdots x_d^{\ell_d}$. We train this model by minimizing:

$$\min_\theta\ \sum_{i=0}^{N-1} \|x(t_i) - \tilde{x}_i\|_{\ell_2}^2 + \beta_1\|\theta\|_{\ell_1} + \frac{\tilde\beta_2}{2}\sum_{i=0}^{N-1}\|x(t_{i+1}) - x(t_i)\|_{\ell_2}^2 \tag{2.3}$$
$$\text{s.t.}\quad x(t_0) = \tilde{x}_0,\qquad x(t_i) = \Phi^{(i)}\big(x(t_0), F(D(N(\,\cdot\,)),\theta)\big)$$

where $\tilde{x}_i$ is the given data (corrupted by noise) over the time-stamps $t_i$. The true governing equation is given by the 3d Lorenz system:

$$\begin{cases}\dot{x}(t) = 10(y - x)\\ \dot{y}(t) = x(28 - z) - y\\ \dot{z}(t) = xy - 8z/3\end{cases} \tag{2.4}$$

which emits chaotic trajectories. In Figure 2.1(a), we train the model with 20 hidden nodes per layer using a quadratic dictionary, i.e. there are 9 terms in the dictionary, $A_1 \in \mathbb{R}^{20\times 9}$ with 20 bias parameters and $A_2 \in \mathbb{R}^{3\times 20}$ with 3 bias parameters, for a total of 263 trainable parameters. The solid curves are the time-series generated by a forward pass of the trained model. The learned system generates a high-fidelity trajectory for the first part of the time interval. In Figure 2.1(b-c), we investigate the effect of the degree of the dictionary. In Figure 2.1(b), using a degree 1 monomial dictionary with 38 hidden nodes per layer, i.e. 3 terms in the dictionary, $A_1 \in \mathbb{R}^{38\times 3}$ with 38 bias parameters and $A_2 \in \mathbb{R}^{3\times 38}$ with 3 bias parameters (for a total of 269 trainable parameters), the generated curves trace a similar part of phase space, but are point-wise inaccurate. By increasing the hidden nodes to 100 per layer (3 terms in the dictionary, $A_1 \in \mathbb{R}^{100\times 3}$ with 100 bias parameters, $A_2 \in \mathbb{R}^{3\times 100}$ with 3 bias parameters, for a total of 703 trainable parameters), we see in Figure 2.1(c) that the method (using a degree 1 dictionary) is able to capture the correct point-wise information (on the same order of accuracy as Figure 2.1(a)) but requires more than double the number of parameters.

### 2.2 Non-Autonomous ODE and Noise.

To investigate the effects of noise and regularization, we fit the data to a non-linear spiral:

$$\begin{cases}\dot{x}(t) = 2y(t)^3\\ \dot{y}(t) = -2x(t)^3\\ \dot{z}(t) = \tfrac{1}{4} + \tfrac{1}{2}\sin(\pi t)\end{cases} \tag{2.5}$$

corrupted by noise. The third coordinate of Eqn. (2.5) is time-dependent, which can be challenging for many recovery algorithms. This is partly due to the redundancy introduced into the dictionary by the time-dependent terms.
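For concreteness, noisy measurements of Eqn. (2.5) can be produced as follows (a sketch; the initial condition, noise level, and time grid are my assumptions, not the paper's):

```python
# Generate noisy samples of the non-linear spiral, Eqn. (2.5).
import numpy as np
from scipy.integrate import solve_ivp

def spiral(t, s):
    x, y, z = s
    return [2 * y**3, -2 * x**3, 0.25 + 0.5 * np.sin(np.pi * t)]

t_eval = np.linspace(0.0, 10.0, 501)
sol = solve_ivp(spiral, (0.0, 10.0), [1.0, 0.0, 0.0],
                t_eval=t_eval, rtol=1e-8, atol=1e-10)

rng = np.random.default_rng(0)
noisy = sol.y.T + 0.02 * rng.standard_normal(sol.y.T.shape)  # additive noise
print(noisy.shape)  # (501, 3): the x-tilde measurements
```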
To generate Figure 2.2, we set:

• the dictionary degree to 1, with 4 terms, $A_1 \in \mathbb{R}^{46\times 4}$ with 46 bias parameters and $A_2 \in \mathbb{R}^{3\times 46}$ with 3 bias parameters (371 trainable parameters in total);
• the degree to 4, with 69 terms, $A_1 \in \mathbb{R}^{4\times 69}$ with 4 bias parameters and $A_2 \in \mathbb{R}^{3\times 4}$ with 3 bias parameters (295 trainable parameters in total), and the derivative-penalty parameter $\tilde\beta_2$ turned on;
• the degree to 4, with 69 terms, $A_1 \in \mathbb{R}^{5\times 69}$ with 5 bias parameters and $A_2 \in \mathbb{R}^{3\times 5}$ with 3 bias parameters (368 trainable parameters in total).

For cases (b-c), we set the degree of the dictionary to be larger than the known degree of the governing ODE in order to verify that we do not overfit using a higher-order dictionary and that we are not tailoring the dictionary to the problem. In Figure 2.2(a), the dictionary of linear monomials with a moderately sized MLP seems to be insufficient for capturing the true nonlinear dynamics. This can be observed in the over-smoothing caused by the linear-like dynamics. In Figure 2.2(c), a nonlinear dictionary can fit the data and extract the correct pattern (the 'squared-off' corners). Figure 2.2(b) shows that we are able to decrease the total number of parameters and fit the trajectory within the same tolerance as (c) by penalizing the derivative. Both (b) and (c) achieved a mean-squared loss under the same threshold.

### 2.3 Comparison for Extracting Governing Models.

Comparison with SINDy. We compare the results of Figure 2.2 with an approximation using the SINDy algorithm from [1] (theoretical results on convergence appear in [41]). These approaches differ, since the SINDy algorithm seeks to recover a sparse approximation to the governing system given one tuning parameter and is restricted to the span of the dictionary elements. To make the dictionary sufficiently rich, the degree is set to 4, as was done for Figure 2.2(b-c). Since the sparsity of the first two components is equal to one, we search over all of parameter space (up to 6 decimals) for the value that yields the smallest non-zero sparsity. The smallest non-zero sparsity for the first component is 12 and for the second component is 3, with:

$$\begin{cases}
\dot{x}(t) = -4278.0 + 9426.6z - 2204.6t - 7594.0z^2 + 3381.8tz - 351.1t^2 + 2650.6z^3 - 1659.3tz^2 + 285.5t^2z - 339.0z^4 + 264.1tz^3 - 58.4t^2z^2\\
\dot{y}(t) = -79.1527 + 68.0904z - 14.4623z^2\\
\dot{z}(t) = -53.0629 + 220.7608z - 168.9863t - 266.8949z^2 + 289.1066tz - 32.6971t^2 + 127.4432z^3 - 161.9778tz^2 + 31.9400t^2z - 21.1582z^4 + 29.5282tz^3 - 7.6042t^2z^2
\end{cases} \tag{2.6}$$

which is independent of $x$ and $y$ and does not lead to an accurate approximation of the nonlinear spiral. This is likely due to the level of noise present in the data and the incorporation of the time-component.

Comparison with LASSO-based methods. We compare the results of Figure 2.2 with LASSO-based approximations for learning governing equations [35]. The LASSO parameter is chosen so that the sparsity of the solution matches the sparsity of the true dynamics (with respect to a dictionary of degree 4). In addition, the coefficients are 'debiased' following the approach in [35]. The learned system is:

$$\begin{cases}\dot{x}(t) = 1.8398\,y^3\\ \dot{y}(t) = -1.9071\,x^3\\ \dot{z}(t) = -0.1749\,x^2y - 0.0058\,t^2x - 0.0008\,t^2x^2\end{cases} \tag{2.7}$$

which matches the profile of the data in the $xy$-plane; however, it does not predict the correct dynamics for $z$ (emitting seemingly periodic orbits). While the LASSO-based approach better resolves the state-space dependence, it does not correctly identify the time-component.

Comparison with RNN. In Figure 2.3 the Lorenz system (see Figure 2.1) is approximated by our proposed approach and by a standard LSTM (RNN) with the same number of parameters.
Although the RNN learns internal hidden states, it does not learn the correct regularity of the trajectories, leading to sharp corners. It is worth noting that, in experiments, as the number of parameters increases, both the RNN and our network produce sequences that approach the true time-series.

### 2.4 ODE from Low-Rank Approximations

For certain spatio-temporal systems, reduced-order models can be used to transform complex dynamics into low-dimensional time-series (with stationary spatial modes). One of the popular methods for extracting the spatial modes and identifying the corresponding temporal dynamics is the dynamic mode decomposition (DMD) introduced in [37]. The projected DMD method [36] makes use of the SVD approximation to construct the modes and the linear dynamical system. Another reduced-order model, known as the proper orthogonal decomposition (POD) [12], can be used to construct spatial modes which best represent a given spatio-temporal dataset. The projected DMD and POD methods leverage low-rank approximations to reduce the dimension of the system and to construct a linear approximation to the dynamics (related to the spectral analysis of the Koopman operator); see [17] and the citations within.

We apply our approach to construct a neural network approximation to the time-series generated by a low-rank approximation of the von Kármán vortex sheet. We explain the construction for this example here, but for more details see [17]. Given a collection of measurements $u(x, t_i)$, where $x$ denotes the spatial coordinates and $t_i$ are the time-stamps, define $X$ as the matrix whose columns are the vectorizations of the snapshots, so that the $(i{+}1)$-th column is $X_{-,i+1} = \mathrm{vec}(u(\cdot, t_i))$ and the number of rows is the number of grid points used to discretize the domain. The SVD of the data is given by $X = U\Sigma V^*$, where $U$ and $V$ are unitary matrices and $\Sigma$ is a diagonal matrix. The best $r$-rank approximation of $X$ is given by $X_r = U_r\Sigma_r V_r^*$, where $\Sigma_r$ is the restriction of $\Sigma$ to the top $r$ singular values and $U_r$ and $V_r$ are the corresponding singular vectors. The columns of the matrix $U_r$ represent the spatial modes that can be used as a low-dimensional representation of the data. In particular, we define the vector $\tilde\alpha(t_i)$ by the projection of the data (i.e. the columns of $X$) onto the span of $U_r$, that is:

$$\tilde\alpha(t_i) := U_r^*\, X_{-,i+1}.$$

Thus, we can construct the time-series from the measurements and can train the system using a version of Eqn. (2.1) with the constraint that the ODE is of the form:

$$\dot\alpha = A_0\alpha + f(\alpha).$$

The additional matrix $A_0$ resembles the standard linear structure from the DMD approximation, and the function $f$ can be seen as a nonlinear closure for the linear dynamics. The function $f$ is approximated, as before, by the MLP applied to the normalized dictionary. To train the model, we minimize:

$$\min_\theta\ \sum_{i=1}^{N-1}\|\alpha(t_i) - U_r^* X_{-,i+1}\|_{\ell_2}^2 + \beta_1\|\theta\|_{\ell_1} + \frac{\tilde\beta_2}{2}\sum_{i=0}^{N-2}\|\alpha(t_{i+1}) - \alpha(t_i)\|_{\ell_2}^2 \tag{2.8}$$
$$\text{s.t.}\quad \alpha(t_0) = U_r^* X_{-,1},\qquad \alpha(t_i) = \Phi^{(i)}\big(\alpha(t_0), G(D(N(\,\cdot\,)),\theta)\big)$$

where $G$ includes the linear term $A_0\alpha$, and $\theta$ also includes the trainable parameters from $A_0$. Note that, to recover an approximation to the original measurements, the lifted vector $U_r\,\alpha(t_i)$ is mapped back to the correct spatial ordering (inverting the vectorization process). In Figure 2.4, our approach with an 8-mode decomposition is compared to an 8-mode DMD approximation. The DMD approximation in Figure 2.4(a) introduces two erroneous vortices near the bottom boundary. Our approach matches the test data with higher accuracy; specifically, the relative error of our generated solution at the terminal time is smaller than DMD's relative error. It is worth noting that this example shows the benefit of the additional term in the low-mode limit; however, using more modes, the DMD method becomes a very accurate approximation. Unlike the standard DMD method, our model does not require the data to be uniformly spaced in time.
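The projection step can be sketched in a few lines (random data stands in for the von Kármán snapshots here, so this only illustrates the linear algebra, not the fluid example):

```python
# SVD-based projection onto r spatial modes, as used to build alpha(t_i).
import numpy as np

rng = np.random.default_rng(1)
n_grid, n_snap, r = 1000, 200, 8      # grid points, snapshots, modes

X = rng.standard_normal((n_grid, n_snap))     # columns are snapshots
U, S, Vh = np.linalg.svd(X, full_matrices=False)
U_r = U[:, :r]                                 # spatial modes

alpha = U_r.T @ X          # alpha[:, i] = U_r^* X_{-, i+1}, shape (r, n_snap)
X_rec = U_r @ alpha        # lift a low-dimensional trajectory back

# Relative reconstruction error of the rank-r projection at the last snapshot
err = np.linalg.norm(X_rec[:, -1] - X[:, -1]) / np.linalg.norm(X[:, -1])
print(alpha.shape, err)
```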
## 3 Partial Differential Equations

A general form for a first-order in time, $a$-th order in space, nonlinear PDE is:

$$u_t = G(t, x, u, Du, D^2u, \dots, D^a u),$$

where $D^k u$ denotes the collection of all $k$-th order spatial partial derivatives of $u$, for $k \le a$. We form a dictionary as done in Sec. 2, where the monomial terms now apply to $t$, $x$, $u$, and the spatial derivatives of $u$. The spatial derivatives, as well as $u_t$, can be calculated numerically from data using finite differences. We then use an MLP, $F$, to parametrize the governing equation:

$$u_t = F(D([t, x, u, Du, D^2u, \dots, D^a u]), \theta), \tag{3.1}$$

see also [35, 29]. In particular, the function $F$ can be written as:

$$F(z,\theta) = K_2(\sigma(K_1(z) + b_1)) + b_2 \tag{3.2}$$

where $K_1$ and $K_2$ are collections of convolutions, $b_1$ and $b_2$ are biases, $\theta$ collects all the parameters of $K_1$, $K_2$, $b_1$, and $b_2$, and $\sigma$ is the ELU activation function. The input channels are the monomials determined by $t$, $x$, $u$, and the derivatives of $u$, where $t$ is extended to a constant 2d array. The first linear layer maps the dictionary terms to multiple hidden channels, each defined by its own convolution. Thus, each hidden channel is a linear combination of input layers. Then we apply the ELU activation, followed by a convolution, which is equivalent to taking linear combinations of the activated hidden channels. Note that this differs from [35, 29] in several ways. In the first linear layer, our network uses multiple linear combinations rather than the single combination as in [35, 29]. Additionally, by using a (nonlinear) MLP, we can approximate a general function of the coordinates and derivatives, whereas previous work defined approximations that model functions within the span of the dictionary elements. To illustrate this approach, we apply the method to two examples: a regression problem using data from a 2d Burgers' simulation (with noise) and the image classification problem using the MNIST and Fashion MNIST datasets.

### 3.1 Burgers' Equation

We consider the 2d Burgers' equation, $u_t + 0.5\,\mathrm{div}(u^2) = 0.01\,\Delta u$. The training and test data are generated on a uniform spatial grid with a fixed time-step. To make the problem challenging, the training data is generated using a sine function in one spatial direction as the initial condition, while the test data uses a sine function in the other direction as the initial condition. We generate 5 training trajectories by adding noise to the initial condition. To train the parameters we minimize:

$$\min_\theta\ \sum_{i=0}^{N-1}\|u(x,y,t_i) - \tilde{u}(x,y,t_i)\|_{\ell_2(\Omega_d)}^2 + \beta_1\|\theta\|_{\ell_1} + \frac{\tilde\beta_2}{2}\sum_{i=0}^{N-1}\|u(x,y,t_{i+1}) - u(x,y,t_i)\|_{\ell_2(\Omega_d)}^2 \tag{3.3}$$
$$\text{s.t.}\quad u(x,y,t_0) = \tilde{u}(x,y,t_0),\qquad u(x,y,t_i) = \Phi^{(i)}\big(u(x,y,t_0), F(D(N(\,\cdot\,)),\theta)\big)$$

where $\Omega_d$ is a discretization of the spatial domain. Training, mini-batching, and results: the mini-batches used during training are constructed with mini-batches in time and the full batch in space. For our experiment, we set a (temporal) batch size of 16, i.e. each batch contains 16 short trajectories. The points are chosen at random, without overlapping. The initial points of each mini-batch are treated as the initial conditions for the batch, and our predictions are performed over the length of the trajectory. This is done at each iteration of the Adam optimizer. In Figure 3.1, we take 2000 iterations of training and evaluate our results on both the training and test sets. Each of the convolutional layers has 50 hidden units, for a total of 2301 learnable parameters. For visualization, we plot the learned solution at the terminal time on both the training and test sets. The mean-squared errors on the full training set and on the test set are small relative to the mean-squared value of the test data.
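The finite-difference evaluation of the spatial inputs can be sketched as follows (a periodic uniform grid is my simplifying assumption):

```python
# Central finite differences for the PDE-dictionary inputs u_x, u_y, u_xx, u_yy.
import numpy as np

def fd_derivatives(u, h):
    """Second-order central differences on a periodic 2d grid with spacing h."""
    ux = (np.roll(u, -1, axis=0) - np.roll(u, 1, axis=0)) / (2 * h)
    uy = (np.roll(u, -1, axis=1) - np.roll(u, 1, axis=1)) / (2 * h)
    uxx = (np.roll(u, -1, axis=0) - 2 * u + np.roll(u, 1, axis=0)) / h**2
    uyy = (np.roll(u, -1, axis=1) - 2 * u + np.roll(u, 1, axis=1)) / h**2
    return ux, uy, uxx, uyy

# Check against u = sin(x)cos(y), for which u_xx + u_yy = -2u.
n = 128
h = 2 * np.pi / n
x, y = np.meshgrid(np.arange(n) * h, np.arange(n) * h, indexing="ij")
u = np.sin(x) * np.cos(y)
ux, uy, uxx, uyy = fd_derivatives(u, h)
print(np.max(np.abs(uxx + uyy + 2 * u)))   # small discretization error
```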
### 3.2 Image Classification: MNIST Data.

Another application of our approach is in reducing the number of parameters in convolutional networks for image classification. We consider a linear (spatially-invariant) dictionary for Eqn. (3.1). In particular, the right-hand side of the PDE is in the form of normalization, ReLU activation, two convolutions, and then a final normalization step. Each convolutional layer uses a kernel of the form

$$K = \sum_{j=0}^{5} c_j K_j$$

with 6 trainable parameters $c_j$, where the $K_j$ are kernels that represent the identity and the five finite difference approximations to the partial derivatives $\partial_x$, $\partial_y$, $\partial_{xx}$, $\partial_{xy}$, and $\partial_{yy}$. In CNN [19, 10, 11], the early features (relative to the network depth) typically appear to be images that have been filtered by edge detectors [30, 40]. The early/mid-level trained kernels often represent edge and shape filters, which are connected to second-order spatial derivatives. This motivates us to replace the convolutions in ODE-Net [2] by finite difference kernels. The results show that, even though the set of trainable parameters is decreased by a third (each kernel has 6 trainable parameters rather than 9), the overall accuracy is preserved (see Table 3.1). We follow the same experimental setup as in [2], except that the convolutional layers are replaced by finite differences. We first downsample the input data using a downsampling block with 3 convolutional layers. Specifically, we take a convolutional layer with 64 output channels and then apply two convolutional layers with 64 output channels and a stride of 2. Between each convolutional layer, we apply batch-normalization and a ReLU activation function. The output of the downsampling block has 64 channels. We then construct our PDE block using 6 'PDE' layers, taking the form:

$$\dot{u}(t) = G(t, u, \theta_i),\quad i \le t \le i+1,\ i \in \{0,\dots,5\}. \tag{3.4}$$

We call each subinterval (indexed by $i$) a PDE layer, since it is the evolution of a semi-discrete approximation of a coupled system of PDE (the particular form of the convolutions acts as an approximation to differential operators). The function $G$ takes the form:

$$G(u,\theta) = \mathrm{BN}(K_2([t, \mathrm{BN}(K_1([t, \sigma(N(u))]))])) \tag{3.5}$$

where $\mathrm{BN}$ is batch-normalization, $K_1$ and $K_2$ are collections of kernels of the form $\sum_j c_j K_j$, $\theta$ contains all the learnable parameters, and $\sigma$ is the ReLU activation function. The PDE block is followed by batch-normalization, the ReLU activation function, and a 2d pooling layer. Lastly, a fully connected layer is used to transform the terminal state (activated and averaged) of the PDE blocks into a 10-component vector. For the optimization, the cross-entropy loss is used to compare the predicted outputs and the true labels. We use the SGD optimizer with momentum set to 0.9. There are 160 total training epochs; we set the learning rate to 0.1 and decrease it by 1/10 after epochs 60, 100, and 140. The training is stopped after 160 epochs. All of the convolutions performed after the downsampling block are linear combinations of the 6 finite difference operators rather than the traditional convolution.
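For illustration, the six stencils and the trainable combination might look like this (the stencil scaling and orientation conventions are my assumptions, not taken from the paper):

```python
# Identity plus five finite-difference 3x3 stencils, combined into one
# convolution kernel K = sum_j c_j K_j with six trainable coefficients.
import numpy as np

K_id = np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]], float)
K_x  = np.array([[0, 0, 0], [-1, 0, 1], [0, 0, 0]], float) / 2   # d/dx
K_y  = np.array([[0, -1, 0], [0, 0, 0], [0, 1, 0]], float) / 2   # d/dy
K_xx = np.array([[0, 0, 0], [1, -2, 1], [0, 0, 0]], float)       # d2/dx2
K_yy = np.array([[0, 1, 0], [0, -2, 0], [0, 1, 0]], float)       # d2/dy2
K_xy = np.array([[-1, 0, 1], [0, 0, 0], [1, 0, -1]], float) / 4  # d2/dxdy

stencils = np.stack([K_id, K_x, K_y, K_xx, K_xy, K_yy])

def make_kernel(c):
    """Six coefficients c -> one 3x3 convolution kernel."""
    return np.tensordot(c, stencils, axes=1)

print(make_kernel(np.array([1.0, 0.0, 0.0, 0.1, 0.0, 0.1])))
```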
### 3.3 Image Classification: Fashion MNIST.

For a second test, we apply our network to the Fashion MNIST dataset. The Fashion MNIST dataset has 50000 training images and 10000 test images that can be classified into 10 different classes. Each image is of size $28\times 28$. For our experiment, we did not use any data augmentation, except for normalization before training. The structure of NeuPDE in this example differs from the MNIST experiment as follows: (1) we downsample the data once instead of twice, and (2) after the 1st PDE block, which has 64 hidden units, we use a kernel with 64 input channels and 128 output channels, and then use one more PDE block with 128 hidden units followed by a fully connected layer. There are numerous reported test results on the Fashion MNIST dataset [5]; we compare our result to ResNet18 [6] and a simple MLP [24]. The test results are presented in Table 3.2, and all algorithms are tested without the use of data augmentation. In both the MNIST and Fashion MNIST experiments, we show that the NeuPDE approach allows for fewer parameters by enforcing (continuous) structure through the network itself.

## 4 Discussion

We propose a method for learning approximations to nonlinear dynamical systems (ODE and PDE) using DNN. The network we use has an architecture similar to ResNet and ODE-Net, in the sense that it approximates the forward integration of a first-order in time differential equation. However, we replace the forcing function (i.e. the layers) by an MLP with higher-order correlations between the spatio-temporal coordinates, the states, and the derivatives of the states. In terms of convolutional neural networks, this is equivalent to enforcing that the kernels approximate differential operators (up to some degree). This was shown to produce more accurate approximations to complex time-series and spatio-temporal dynamics. As an additional application, we showed that, when applied to image classification problems, our approach reduces the number of parameters needed while maintaining accuracy. In scientific applications, there is more emphasis on accuracy and on models that can incorporate physical structure. We plan to continue to investigate this approach for physical systems. In imaging, one should consider the computational cost of training the networks versus the number of parameters used. While we argue that our architecture and structural conditions can lead to models with fewer parameters, training could potentially be slower (due to the trainable nonlinear layer defined by the dictionary). Additionally, we leave the scalability of our approach to larger imaging datasets, such as ImageNet, to future work. For larger classification problems, we suspect that higher-order derivatives (beyond second-order) may be needed. Also, while higher-order integration methods (Runge-Kutta 4 or 45) may be better at capturing features in the ODE/PDE examples, more testing is required to investigate their benefits for image classification.

## 5 Acknowledgements

The authors would like to acknowledge the support of AFOSR, FA9550-17-1-0125, and the support of an NSF CAREER grant. We would like to thank Scott McCalla for providing feedback on this manuscript.
# Powers of $x$ as members of Galois Field and their representation as remainders

First question on math.stackexchange :) I'm studying for a Cryptography - Communication Security exam, and it involves a certain quantity of number theory - finite field theory, so be warned: this is my first encounter with these topics, and you'll have to be extra-clear with me :)

I thought I was doing pretty well with the questions and exercises in $GF(4)$… then I hit $GF(8)$ and realized I'm still missing the point :( I understand that to represent, let's say, $x^3$, the fourth element of $GF(8)$, I just take the remainder of the division by my irreducible polynomial of choice, let's say $x^3+x+1$, and I'm happy. I can then build addition and multiplication tables, and switch to and from a binary representation when it seems more convenient (e.g. when XORing with something else), and this also makes me happy.

But why do I start dividing $x^0$, $x^1$, $x^2$, etc.? This sure seems like a sound idea when compared to "try some random polynomial", but I can't figure out why these are the candidates for being the members of the finite field. Why am I sure that this will generate all the elements?

Bonus question: $GF(4)$ seems to have a number of properties that are quite useful for manipulation. It seems obvious that $x+x \equiv 0$ in any $GF(2^n)$, but do $x^2 \equiv x+1$ or $x^3\equiv x^2+x\equiv 1$ always hold? Is this in any way dependent on the irreducible polynomial chosen? Are these predictable? Please forgive me for any obvious error, and thanks in advance for any help.

- "It seems obvious that x+x≡0 in any Galois Field" did you mean only fields of power-of-two order? If not, consider GF(3). –  anon Sep 1 '10 at 15:40
- Be careful about the terminology. Forgive me if I'm wrong; I think that Galois field = finite field, so, as muad has said, GF(3) would be an example where x+x = 0 fails to hold. (Though it may not be of interest for cryptography, I'm not sure) –  Soarer Sep 1 '10 at 16:12
- It would be good if you specified what reference you're working from; in particular, what your definitions are. It is hard to explain anything until we know more about what you do and don't know. –  Qiaochu Yuan Sep 1 '10 at 16:28
- Whoa yes, thanks @muad and @Soarer, I didn't specify, but I'm working only with powers of two. Will correct the question stat. –  Agos Sep 1 '10 at 16:38

First: a Galois field is a finite field. A finite field will have $p^n$ elements for some prime $p$ and some positive integer $n$ (in fact, there is, up to isomorphism, one and only one finite field with $p^n$ elements). If a field has $p^n$ elements, then $x+x+\cdots+x=0$ ($p$ summands) for all $x$, and no smaller positive integer has that property; this is called the characteristic of the field. For fields of order a power of $2$, you do indeed have $x+x=0$ for all $x$, but not for any other size of Galois field. However, in computer science and cryptography there is some preference for working in fields of characteristic $2$, because they tend to be easier to represent and work with in computers (which themselves work in "characteristic $2$"). I'm not sure what you mean by "why do I start dividing $x^0$, $x^1$, $x^2$, etc.?". As you have probably seen, the field $GF(2^n)$ can always be described in terms of a monic polynomial $P(t)$ of degree $n$ that is irreducible over $GF(2)$. This amounts to constructing the field $GF(2)[t]/(P(t))$.
What this tells you is that you have the smallest ring that contains $0$, $1$, and $x$, subject to the conditions $1+1=0$, $1\cdot 1=1$, and $P(x)=0$. Any element of this ring can be written as a polynomial expression in $x$ with coefficients $0$ or $1$, $q(x) = b_mx^m+\cdots + b_0$. Now, because we are assuming that $P(x)=0$, using the division algorithm we can always write any polynomial $q(t)$ as $q(t)=P(t)b(t) + r(t)$, with $r(t)=0$ or $\deg(r)\lt \deg(P)=n$. Evaluating at $x$ and using the fact that $P(x)=0$, we get that $q(x)=r(x)$, so in fact every element in this ring can be written as a polynomial expression in $x$ of degree less than $n$. The expression is unique, because if $r(x)=s(x)$ with $r$ and $s$ of degree less than $n$, then $r-s$ would be a multiple of $P(t)$, and degree considerations tell you that $r=s$. So each element of $GF(2^n)$ can be written uniquely as $q(x)$ where $q$ has degree less than $n$. Choosing different polynomials amounts to choosing different representations for the elements, and so different rules for what $x^n$ will mean.

So: in order for $x^2=x+1$ to hold in your Galois field $GF(p^n)$, you must have that $x^2-x-1=0$, so the polynomial $P$ you picked must divide $x^2-x-1$, and so you must be either in $GF(p)$ or in $GF(p^2)$. Similarly, for $x^3=x^2+x=1$ to hold you would need to be in $GF(p)$ or $GF(p^2)$, and whether it holds depends on the polynomial $P$ that you picked to define how $x$ behaves.

In summary: you divide by the polynomial so you can get a unique expression for each element in $GF(2^n)$ in terms of the distinguished element $x$ that helps you construct it (the behaviour of the element $x$ being determined by the polynomial). The condition $\alpha+\alpha=0$ will hold for every $\alpha$ in every Galois field of size $2^n$ for any $n$; but no power of $x$ smaller than $n$ will be expressible in terms of lower powers of $x$, and the expression of $x^n$ in terms of lower powers is determined by your choice of polynomial.

- Thanks, very helpful and clear! –  Agos Sep 3 '10 at 14:42
- "but no power of $x$ smaller than $n$ will be expressible in terms of lower powers of $x$" not sure about this. Does not $x^2=x \cdot x$ where $n>2$? –  ThomasMcLeod May 25 '11 at 15:42
- @Thomas: "expressible" here means as an $\mathbb{F}_p$-linear combination of smaller powers, not products. –  Arturo Magidin May 25 '11 at 16:08

It appears that you may mistakenly believe that every element of your finite field is a power of $x$. Here is a very simple counterexample: put $f(x) = (x^5-1)/(x-1) = x^4 + x^3 + x^2 + x + 1$, which is irreducible $\pmod 2$. Then $(x-1)\, f(x) = x^5 - 1 \Rightarrow x^5 \equiv 1 \pmod{f(x)}$. This implies that the powers of $x$ generate only 5 elements, not all 15 nonzero elements of $GF(2^4)$ constructed with $f(x)$. In fact there does exist an element whose powers generate all the nonzero elements of the finite field (said more technically: the multiplicative group is cyclic). Such generators are known as primitive elements. However, generally there is no simple closed form known for such generators.
The reason that you didn't see this in $GF(4)$ or $GF(8)$ is that their multiplicative groups have prime orders $3$ and $7$, so every element $\ne 1, 0$ is a generator (by Lagrange's theorem, the order of an element divides the order of the group, which, being a prime $p$, forces the order to be $p$ for elements $\ne 1, 0$).
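To make the two answers concrete, here is a small sketch (my own, not from the thread) that computes the multiplicative order of $x$ modulo an irreducible polynomial over $GF(2)$, with polynomials stored as bit masks; $x^4+x^3+x^2+x+1$ gives order 5 as in the counterexample above, while $x^4+x+1$ (a standard primitive polynomial for $GF(16)$) gives order 15:

```python
def poly_mod_mul(a, b, P):
    """Multiply a*b in GF(2)[t], reducing modulo the polynomial P.
    Polynomials are bit masks: bit k is the coefficient of t^k."""
    deg = P.bit_length() - 1
    res = 0
    while b:
        if b & 1:
            res ^= a
        b >>= 1
        a <<= 1
        if (a >> deg) & 1:
            a ^= P
    return res

def order_of_x(P):
    """Smallest e > 0 with x^e = 1 in GF(2)[t]/(P(t))."""
    e, power = 1, 0b10          # 0b10 is x itself
    while power != 1:
        power = poly_mod_mul(power, 0b10, P)
        e += 1
    return e

print(order_of_x(0b11111))   # x^4+x^3+x^2+x+1 -> 5  (x is not primitive)
print(order_of_x(0b10011))   # x^4+x+1         -> 15 (x generates GF(16)*)
```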
# How can atomic configurations represent excited states of atoms?

My lecture notes on condensed matter physics talk about pseudopotentials of atoms, where the core electrons are replaced by an effective potential. This is in the context of DFT. In the lecture notes, my lecturer talks about transferability, i.e. the ability of the pseudopotential to work in various atomic configurations. My lecturer proposes a test of the transferability of a pseudopotential in the following way:

Devise a series of atomic configurations representing (approximations to) excited states of the atom; then compute the energy difference between them for both the all-electron case and the pseudopotential approximation.

What is meant by saying that atomic configurations represent excited states of the atoms? I thought only the electrons could be excited, and there should be different excited states for each configuration. Can you explain what is meant by this?

• I could not do the assignment, but, for example, neutral copper would have a ground state configuration like $3d^{10}4s^1$, with excited configurations like $3d^{10}4p^1$ and $3d^{10}4d^1$, and states with $3d^9$ and the extra electron in a $4s$- or $4p$-like band. – user137289 Jan 2 '20 at 23:53

Strictly speaking, what they are speaking about is the electronic configuration of the atom. For instance, the ground state of a neutral sodium atom is $1s^22s^22p^63s^1$ or, more briefly, $[{\mathrm{Ne}}]3s^1$. However, the concept of transferability of pseudopotentials has to do with the possibility of an accurate description of the interaction between the atom and the valence electrons even for electronic configurations different from the reference atomic ground state. This is a key requirement if one has to describe properly the electronic states in condensed phases, where the atomic configuration, i.e. the local environment around an atom, induces electronic configurations different from the isolated-atom electronic ground state. For example, a $[{\mathrm{Ne}}]3p^1$ excited state or, less physical but often used in the context of pseudopotentials, a fractional occupation like $[{\mathrm{Ne}}]3s^{0.8}3p^{0.2}$.
# cortex.align.autotweak

cortex.align.autotweak(subject, xfmname)[source]

Tweak an alignment using the FLIRT boundary-based alignment (BBR) from FSL. Ideally this function should actually use a limited search range, but it doesn't. It's probably not very useful.

Parameters

subject : str
    Subject identifier.

xfmname : str
    String identifying the transform to be tweaked.
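A hypothetical usage sketch (the subject and transform names below are placeholders from a typical pycortex setup, not real data):

```python
# Tweak an existing transform in the pycortex database with FSL BBR.
import cortex

cortex.align.autotweak("S1", "fullhead")
```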
# 13. Calculating and disaggregating rate of return on shareholders' equity.

Information taken from the annual reports of Mobilex, a petroleum company, for three recent years appears below (amounts in millions of US$):

|  | 2013 | 2012 | 2011 |
| --- | --- | --- | --- |
| Revenues | $404,552 | $377,635 | $370,680 |
| Net Income | 40,610 | 39,500 | 36,130 |
| Average Total Assets | 230,549 | 213,675 | 201,796 |
| Average Shareholders' Equity | 117,803 | 112,515 | 106,471 |

a. Compute the rate of return on equity for each year.

b. Disaggregate the rate of return on equity into profit margin, total assets turnover, and financial leverage ratio components.

c. How has the profitability of Mobilex changed over the three years?
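A worked sketch of parts a and b (my own computation from the table above; ROE = profit margin x asset turnover x financial leverage):

```python
# DuPont decomposition of return on equity for Mobilex.
data = {  # year: (revenues, net_income, avg_assets, avg_equity), $ millions
    2013: (404_552, 40_610, 230_549, 117_803),
    2012: (377_635, 39_500, 213_675, 112_515),
    2011: (370_680, 36_130, 201_796, 106_471),
}

for year, (rev, ni, assets, equity) in sorted(data.items()):
    roe = ni / equity
    margin = ni / rev                 # profit margin
    turnover = rev / assets           # total assets turnover
    leverage = assets / equity        # financial leverage ratio
    assert abs(roe - margin * turnover * leverage) < 1e-12
    print(f"{year}: ROE={roe:.1%} = {margin:.1%} x {turnover:.2f} x {leverage:.2f}")
```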
## What if the Higgs…

31/07/2008

…were too massive? Today I was reading the beautiful post by Tommaso Dorigo about Tevatron results on the hunt for the Higgs, and an old idea came back to my mind. We all know why, for the sake of the Standard Model, one needs a small Higgs mass but, so far, such a low-mass particle has not been seen. We are all aware that the least the LHC should deliver is the unveiling of the mass generation mechanism of the Standard Model, which is a must for the model to survive, but we could have surprises. One of these surprises could be a too-massive Higgs. This would mean that, at higher energies, our ability to do perturbation theory for electro-weak interactions would blatantly fail, and such interactions would become as strong as the strong interactions. But there is one more aspect of this situation that is somewhat unexpected. We have recently treated a strongly coupled Goldstone scalar field (here and here) and we have found, in the strong coupling limit, besides the ordinary massless excitation, a tower of excited states, and this is what one should observe if the Higgs particle is too massive. By "too massive" we mean a mass beyond 1 TeV. I have not checked yet, but I believe that also in this case the classical theory admits an exact solution. More to say in the near future.

Update: There has been a post by Lubos Motl (see here) where he argues that Fermilab data favor a light Higgs and supersymmetry. Indeed, I hope this will prove to be the right scenario, because then we have our cake and eat it too! Finally, the D0 Collaboration presented a press release, and this seems an important step toward the Higgs discovery.

## Fermions and massless scalar field

30/07/2008

As I always do to keep track of some computations I am carrying on here and there, I put them on the blog. This time I set out to solve the Dirac equation using the massive solution of the massless scalar field given here and here:

$\phi(x)=\mu\left(2 \over \lambda\right)^{1 \over 4}{\rm sn}(p\cdot x,i).$

Here $\mu$ is an arbitrary parameter, $\lambda$ the coupling, and ${\rm sn}$ the snoidal Jacobi elliptic function. An arbitrary phase $\varphi$ can be added, but we take it to be zero in order to keep the formulas simpler. Now, we couple this field to a massless fermion field, and one has to solve the Dirac equation

$(i\gamma\partial+\beta \phi)\psi=0$

where we have used the following Yukawa model for the coupling

$L_{int}=\Gamma\bar\psi\psi\phi$

so that $\beta=\Gamma\mu\left(2 \over \lambda\right)^{1 \over 4}$. The Dirac equation with such a field can be solved exactly to give

$\psi(x)=e^{-iq\cdot x}e^{-i\frac{p\cdot q}{m_0^2}p\cdot x-\beta\frac{\gamma\cdot p}{m_0^2}[\ln({\rm dn}(p\cdot x,i)-i{\rm cn}(p\cdot x,i))-\ln(1-i)]}u_q$

with $m_0=\mu (\lambda /2)^{1\over 4}$ the mass acquired by the scalar field, and ${\rm dn}$ and ${\rm cn}$ two other Jacobi elliptic functions. This formula tells us something interesting: there is a fermion excitation with zero mass, unless a mass is initially given to the fermion. Such a conclusion is reminiscent of the status of the pion in QCD. So, the computation may seem involved, but the conclusion is quite rewarding!
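As a numerical aside (my own sketch, not from the post): scipy's ellipj only accepts the parameter $m$ in $[0,1]$, so ${\rm sn}(u,i)$ (modulus $i$, i.e. $m=-1$) can be evaluated via the imaginary-modulus transformation ${\rm sn}(u|-1)={\rm sd}(\sqrt{2}\,u\,|\,1/2)/\sqrt{2}$ (Abramowitz & Stegun 16.20), which lets one tabulate the periodic solution $\phi(x)$:

```python
# Evaluate sn(u, i) through the imaginary-modulus transformation,
# sn(u | -1) = sd(sqrt(2) u | 1/2) / sqrt(2), with sd = sn/dn.
import numpy as np
from scipy.special import ellipj

def sn_imag(u):
    sn, cn, dn, _ = ellipj(np.sqrt(2.0) * u, 0.5)
    return sn / dn / np.sqrt(2.0)

u = np.linspace(0.0, 10.0, 5)
print(sn_imag(u))   # bounded, periodic oscillation like phi(x)
```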
## Yang-Mills theory in D=1+1

29/07/2008

Functional methods are techniques used in these years to manage Yang-Mills theory. This name arose from the various methods people invented to solve the Dyson-Schwinger equations. These form a tower of equations, meaning that the equation for the two-point function will depend on the three-point function, and so on. These are exact equations: when you solve them you get the whole hierarchy of n-point functions of the theory. So, the only way to manage them in order to understand the behavior of Yang-Mills theory at lower momenta is by devising a proper truncation of the hierarchy. A similar situation can be found in statistical mechanics with kinetic equations. For a gas, we know that collisions involving a higher number of particles give smaller and smaller contributions, and we are able to provide a meaningful truncation of the hierarchy. For the Dyson-Schwinger equations, generally, we are not that lucky, and the choice of a proper truncation can be verified only through lattice computations. This means that the choice of a given truncation scheme may imply an uncontrolled approximation, with all the consequences of the case. A beautiful paper about this approach is due to Alkofer and von Smekal (see here). This paper has been published in Physics Reports and describes in depth all the elements of functional methods for Yang-Mills theory. Alkofer and von Smekal proposed a truncation scheme for the Dyson-Schwinger equations that provided the following scenario:

• The gluon propagator should go to zero at lower momenta.
• The ghost propagator should go to infinity faster than a free-particle propagator at lower momenta.
• A properly defined running coupling should reach a fixed point in the infrared.

The reason why this view reached success is that it gives consistent support to currently accepted confinement scenarios. Today we know how history has gone. Lattice computations showed instead that

• The gluon propagator reaches a non-zero value at lower momenta.
• The ghost propagator is practically the same as that of a free particle.
• The running coupling, as defined by Alkofer and von Smekal, goes to zero at lower momenta.

So, after years in which people worked to support the scenario coming from functional methods, the community is now trying to understand why the truncation scheme proposed by Alkofer and von Smekal seems to fail. Along this line of research, Axel Maas showed recently, with lattice computations, that for D=1+1 the scenario is exactly the one Alkofer and von Smekal proposed (see here and here). So, now people are trying to understand why for D=1+1 functional methods seem to work while for higher dimensions this does not happen. I think that this is not good news for functional methods. The reason is that a pure Yang-Mills theory in D=1+1 is trivial. Trivial here means that this theory has no dynamics at all! This result was obtained some years ago by 't Hooft (see here) and published in Nuclear Physics B. He showed this using light-cone coordinates. Then, by eliminating the gluonic degrees of freedom, he obtained a two-dimensional formulation of QCD with non-trivial solutions. In our case this means that the truncation scheme adopted by Alkofer and von Smekal simply does not work, because it removes all the dynamics of the Yang-Mills field; and this also has implications for the confinement scheme this approach should support. Indeed, a proper numerical solution of the Dyson-Schwinger equations proves that the right scenario can also be obtained (see here). These authors met difficulties in getting their paper accepted by an archival journal. Today, we should consider this work an important step forward in our understanding of Yang-Mills theory. My view is that we have to improve on the work of Alkofer and von Smekal to make it properly work at higher dimensionalities.
This is without forgetting all the other works that gave the right solution straightforwardly.

## DAMA

28/07/2008

In Italian, DAMA means a lady, or draughts, the well-known game people commonly play with a chessboard. But it is also the name of a fundamental experiment carried out at the INFN laboratories at Gran Sasso in Italy and headed by the physicist Rita Bernabei (you can find here an article by her, in Italian only, sorry). Rita Bernabei was one of my teachers at "La Sapienza" in Rome, and I took an examination with her. Recently a preprint by the DAMA Collaboration was published on arXiv, causing a lot of rumors in the scientific community. The point is that they see a signal that nobody else sees. This evidence cannot be denied, and the signal is there. Personally, I am convinced of the soundness of all this matter; it sometimes happens that something is seen by one group and not by others, and things can generally be explained after some time. In my field of activity a similar thing happened for the sigma, or f0(600), resonance. This particle has a long history, as it was initially put in the PDG listing and then removed as believed non-existent. After a lot of serious analysis, both theoretical and experimental, its existence has become commonly accepted, and now the question has moved to the understanding of its nature. We have discussed this matter here, here and here. This particle is our key to the understanding of the low-energy behavior of QCD, and we expect some news about it in the near future. Similarly, I think the current position of the DAMA collaboration is the same as it was for the first observations of the sigma. Being first sometimes requires a lot of patience before general acceptance gets through.

## QCD 08 proceedings

27/07/2008

Today I have posted my contribution to the proceedings of QCD 08 on arXiv. It should appear shortly. I hope to have answered all the open questions about Yang-Mills theory in the infrared. More to say in the following days, having in mind my very near vacations.

Update: The paper appeared today, 29th July (see here).

## Rumors on SUSY

25/07/2008

In these days there have been some rumors in the blogosphere about string theorists and SUSY (see Motl, Woit and Dorigo) due to a recent preprint that appeared on arXiv. Indeed, SUSY is a relevant ingredient of string theory, and the latter was the vehicle for the uncovering of this concept, which obtained such fortune in the community. I listened to a talk by Sergio Ferrara at the Accademia dei Lincei in Rome a few months ago. Ferrara is one of the discoverers of supergravity, and he gave a nice talk on the subject of supersymmetry. He was confident that supersymmetric particles would be seen at the LHC. I would like to say that this was also the expectation for LEP and the Tevatron, but nothing has been seen so far. So the paper above seems like an attempt by a string theorist to be pessimistic and save the day. Supersymmetry has some problems that are still in need of a satisfactory answer. One is philosophical, as one can say that the number of particles simply doubles, so why should we expect such anti-economical behavior from Nature? Ferrara argued against this point by saying that with antimatter, too, Nature doubled the number of particles, so this would not be the first time that, in order to keep a symmetry, one needs such a doubling. One can say, anyhow, that for antimatter one has a discrete symmetry on a single field, while for supersymmetry it is the number of fields that doubles. The other point is about the breaking of supersymmetry.
There is no satisfactory model so far, and such a symmetry is not seen at low energies, as we know. But this could be just a matter of time before someone finds a way out. My view is that even if there is no supersymmetry at large, one can save supergravity. Indeed, all one needs is to observe a gravitino, that is, a spin-3/2 particle, and we will have a theory of quantum gravity even if, at large, no supersymmetry exists. But this would not be enough for string theory. As a theoretical physicist I would like to see the discovery of a gravitino together with the failure of supersymmetry at large, as this would imply a lot of interesting work to do and an incredible new scenario to understand.

## A useful hint

24/07/2008

Dietmar Ebert is a retired professor of Humboldt University in Berlin. He did relevant work in QCD and particle physics. I came upon a paper of his on arXiv about the question of bosonization. In a paper of mine I showed how a Nambu-Jona-Lasinio (NJL) model can be derived from QCD using recent results about the gluon propagator, which is the cornerstone of this whole construction. In order to make contact with the mesonic spectrum of QCD, one needs to manipulate the quark fermionic fields of the NJL model in some way so as to recover bosonic degrees of freedom. In Ebert's paper this is done through the Hubbard-Stratonovich transformation, a widely known tool among condensed-matter theorists. This is a key point in proving that our recent derivation of the width of the sigma resonance, given here using Fermi's intuition, is indeed correct. Ebert obtains from an NJL model the following bosonic interaction Lagrangian

$L_{int}=g_{\sigma\pi\pi}\sigma(\sigma^2+\pi^2)+g_{4\pi}(\sigma^2+\pi^2)^2$

with

$g_{\sigma\pi\pi}=\frac{m}{\sqrt{N_c I_2}}$ and $g_{4\pi}=\frac{1}{8N_c I_2},$

where $N_c$ is the number of colors, $m=m_0+i8mG_{NJL}\int^{\Lambda}\frac{d^4k}{(2\pi)^4}\frac{1}{k^2-m^2}$ is the constituent quark mass with $m_0$ the bare quark mass, assumed equal for u and d, and finally

$I_2=-i\int^{\Lambda}\frac{d^4k}{(2\pi)^4}\frac{1}{(k^2-m^2)^2}.$

In order to make contact with QCD, as we have shown, one has $G_{NJL}=3.761402959\frac{g^2}{\sigma}$, with $g$ the coupling constant and $\sqrt{\sigma}=410\pm 20 \ MeV$ the square root of the string tension. Ebert's Lagrangian gives us exactly the term we derived with Fermi's insight, plus other terms, among them the one needed to compute the f0(980) decay rate, $2g_{4\pi}\sigma^2\pi^2$. So, as is well known, a good idea repeats itself at different levels in the description of Nature. I would call this the "gluonic sector" of QCD. I hope to put down a paper about this in the next few days.
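To get a feel for the magnitudes involved, here is a minimal numerical sketch that evaluates $I_2$ in its Wick-rotated, cutoff-regularized form and then the two couplings quoted above. The constituent mass $m$ and the cutoff $\Lambda$ are illustrative assumptions only (typical NJL values), not numbers taken from the post or from Ebert's paper.

```python
import math

# Illustrative, assumed parameters -- NOT taken from the post:
Nc = 3        # number of colors
m = 0.30      # constituent quark mass in GeV (typical NJL value)
Lam = 1.0     # 4-momentum cutoff Lambda in GeV (typical NJL value)

# After a Wick rotation, I_2 = int^Lambda d^4k_E/(2 pi)^4 1/(k_E^2 + m^2)^2
# has the standard closed form below (dimensionless).
I2 = (math.log((Lam**2 + m**2) / m**2) - Lam**2 / (Lam**2 + m**2)) / (16 * math.pi**2)

# Couplings from the bosonized Lagrangian quoted above.
g_sigma_pi_pi = m / math.sqrt(Nc * I2)  # carries dimension of mass (GeV)
g_4pi = 1 / (8 * Nc * I2)               # dimensionless

print(f"I_2 = {I2:.5f}, g_sigma_pi_pi = {g_sigma_pi_pi:.2f} GeV, g_4pi = {g_4pi:.2f}")
```

With these assumed inputs one gets $I_2 \approx 0.01$, $g_{\sigma\pi\pi}$ of order 1.7 GeV and $g_{4\pi}$ of order 4, just to show the scales at play.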
Chest Movement and Respiratory Volume both Contribute to Thoracic Bioimpedance during Loaded Breathing

Abstract

Bioimpedance has been widely studied as an alternative to standard respiratory monitoring methods because of its linear relationship with respiratory volume during normal breathing. However, other body tissues and fluids contribute to the bioimpedance measurement. The objective of this study is to investigate the relevance of chest movement among the contributions to thoracic bioimpedance, in order to evaluate the applicability of bioimpedance for respiratory monitoring. We measured airflow, bioimpedance at four electrode configurations and thoracic accelerometer data in 10 healthy subjects during inspiratory loading. This protocol permitted us to study the contributions at different levels of inspiratory muscle activity. We used chest movement and volume signals to characterize the bioimpedance signal using linear mixed-effect models and neural networks for each subject and level of muscle activity. The performance was evaluated using the mean absolute percentage error for each respiratory cycle. The lowest errors corresponded to the combination of chest movement and volume for both linear models and neural networks. In particular, neural networks presented lower errors (median below 4.29%). At high levels of muscle activity, the differences in model performance indicated an increased contribution of chest movement to the bioimpedance signal. Accordingly, chest movement contributed substantially to the bioimpedance measurement, and most notably at high muscle activity levels.

Introduction

Respiratory diseases are diagnosed and monitored by measuring the patient's pulmonary function. Spirometry is the main test for assessing many respiratory diseases such as asthma or chronic obstructive pulmonary disease1. Spirometry requires the use of facemasks or mouthpieces2, which are usually obtrusive and uncomfortable for the patients. In addition, this equipment can modify their breathing3. Recently, less invasive methods have been investigated as alternatives to the classical methods so as to provide continuous monitoring, although more evidence is needed for clinical application4. One of these alternatives is thoracic bioimpedance, which measures impedance changes over time. Thoracic bioimpedance has been widely studied as a non-invasive technique for measuring respiration, and several studies have shown a linear relationship with respiratory volume5,6,7,8,9,10,11. However, the measured thoracic bioimpedance does not reflect airflow alone; it is a combination of the impedances of several body tissues, organs and fluids in this zone. How all these thoracic components contribute to the measurement remains unclear. Modeling all of them as an electronic circuit can be difficult. Alternatively, previous studies presented computer simulations of these contributions in finite element human thorax models12,13,14. These simulations studied different electrode locations and showed that electrodes positioned around the middle of the thorax reflect changes in the bioimpedance of the lungs12.
Early studies included animal testing to explain changes in thoracic bioimpedance other than those resulting from respiratory volume. These studies analyzed the relationship of bioimpedance with respiratory volume and thoracic diameter15,16. Baker et al. showed that thoracic circumference or diaphragm displacement produced components of bioimpedance that combine linearly for normal volumes and probably nonlinearly under extreme conditions15. In addition, recent studies indicated that during abnormal breathing the relation between bioimpedance and volume appears non-linear8,10. This apparent non-linearity could be caused by the contribution of other components to the bioimpedance measurement. Thus, thoracic bioimpedance changes seem to be a combination of volume and other thoracic changes, but both the ratio of these contributions and how they change over different breathing types are unclear. Inspiratory threshold loading enables the study of inspiratory muscle function and was used previously in respiratory studies17,18,19,20. Imposed inspiratory loads require increased breathing pressure and are associated with breathing pattern changes and diaphragm fatigue17,18. In previous studies, we examined the relationship between bioimpedance and respiratory volume during inspiratory loaded breathing11,21. The analysis of the temporal relation between the signals revealed delays when loads were imposed21. These differences in time could be related to differences in thoracic breathing movement and displacement resulting from the increase in breathing effort and the changes in breathing pattern. The aim of the current study is to investigate the relevance of the respiratory volume and chest movement contributions, represented by a spirometer and an accelerometer respectively, to the thoracic bioimpedance measurement. The objective, therefore, is to gain more knowledge about thoracic bioimpedance changes and their relation with breathing movement and respiratory volume. The combination of accelerometer and bioimpedance measurements has been used before; however, those studies used the combination for motion artifact removal22,23 and did not examine the relationship between the signals. We hypothesized that bioimpedance changes are a combination of breathing movement and airflow, and that these contributions can change as a function of the muscle force used for breathing. Consequently, we measured airflow, thoracic bioimpedance and accelerometer data to study the relation of thoracic bioimpedance with volume and chest motion. This relation was studied through linear mixed-effect models and neural networks by reconstructing the bioimpedance signals at different levels of muscle activity resulting from an inspiratory threshold loading protocol. Therefore, the novelty of our study compared to both the literature and our own earlier work is the inclusion of chest movement in the analysis of bioimpedance changes during loaded breathing. The results of this study will contribute to a better understanding of the thoracic bioimpedance signal and will reinforce its application for non-invasive respiratory monitoring.

Methods

Subjects

The study included ten healthy non-smoker subjects (4 females) of age 24–37 years (mean 30.5) and body mass index 19.5–26.8 kg m−2 (mean 23.1). None reported any respiratory disease.
Ethical approval

The research was approved by the Institutional Review Board of the Institute of Bioengineering of Catalonia and followed the World Medical Association's Declaration of Helsinki on Ethical Principles for Medical Research Involving Human Subjects. The subjects were informed about the measurements and the protocol procedure and provided their informed consent before participation.

Respiratory protocol

The study consisted of performing an incremental inspiratory threshold loading protocol during physiological signal measurements. In this kind of protocol, inspiratory loads are imposed on the subjects, who need to increase their breathing pressure to complete each inspiration. The subjects wore a nose clip to prevent nasal breathing and were comfortably seated in an upright position during the measurements. The inspiratory threshold loading protocol used in the present study consisted of imposing five incremental inspiratory threshold loads on the subjects while breathing. The threshold values were increasing percentages of each subject's maximal static inspiratory pressure (MIP) from functional residual capacity. The MIP was obtained from each subject by performing a maximal volitional manoeuvre24. After the maximal manoeuvre, the subjects' quiet breathing (QB) was recorded for 2 minutes, after which the subjects breathed while the inspiratory loads were imposed. The five load thresholds were progressively selected from 12% to 60% of the subject's MIP (L1, L2, L3, L4 and L5). Each load included 30 breaths and was followed by a resting period to return to baseline (Fig. 1a). The MIP manoeuvre and inspiratory loads were imposed using a class 1 medical inspiratory muscle trainer (POWERbreathe KH2, POWERbreathe International Ltd, Southam, UK)25. The device electronically controlled the threshold resistances imposed on the subjects.

Measurements

The physiological data were acquired by a wearable research prototype device (Stichting imec The Netherlands) and a standard wired acquisition system (MP150, Biopac Systems, Inc., Goleta, CA, USA). Bioimpedance was measured at four tetrapolar electrode configurations simultaneously using the wearable device. The device measures isolated bioimpedance values from the four configurations by switching the current injection and the voltage measurement (MUSEIC v1 chip, Stichting imec The Netherlands)26. The four electrode configurations were presented previously11 and are represented in Fig. 1b. The electrode configurations were symmetric about the midsternal line. Configuration #1 was horizontal: the current-injecting electrodes were placed at 7 cm from the axillas on the midaxillary lines, and the voltage electrodes were 5 cm from the injecting ones, closer to the midsternal line. Configurations #2, #3 and #4 were vertical, and all their electrodes were placed on the midaxillary lines. The electrodes of configurations #2 and #3 were separated by 5 cm, but the electrodes of configuration #3 were in an upper zone. Configuration #4 covered a broader zone because its electrodes were separated by 10 cm. For all the vertical configurations, the voltage electrodes were the lower ones. We included four electrode configurations to evaluate whether the differences in geometry and distances were relevant to our analysis. The amplitude of the injection current was 110 μA at 80 kHz. Respiratory airflow and accelerometer data were recorded using the Biopac wired acquisition system. Airflow was acquired with a Biopac transducer (pneumotach transducer TSD107B, Biopac Systems, Inc.).
The airflow transducer was connected to a differential amplifier which amplified the signal 1000 times and low-pass filtered it (fc = 300 Hz). Disposable mouthpieces with bacterial filters were attached to the pneumotach and the subjects breathed through them. Accelerometer data were measured with a tri-axial accelerometer (TSD109C2, Biopac Systems, Inc.) connected to its associated interface (HLT100C, Biopac Systems, Inc.). The accelerometer was placed on the subjects' skin with adhesive rings close to the lower bioimpedance electrodes (Fig. 1b), approximately over the anterior axillary line and along the seventh or eighth intercostal space27. The location was selected to measure the movement of approximately the same area covered by the thoracic bioimpedance measurement. The electrocardiogram (ECG) was recorded with both the wearable device and the wired Biopac system (ECG100C, Biopac Systems, Inc.) and was used to synchronize the signals from the two systems. The signals acquired by the Biopac amplifiers were A/D converted by the Biopac MP150 system at a sampling rate of 10 kHz. The wearable device acquired the bioimpedance and ECG signals at sampling rates of 16 Hz and 512 Hz, respectively. Stress-test Ag/AgCl electrodes (EL501, Biopac Systems, Inc.) were used for the bioimpedance and ECG measurements.

Signal processing

Signal synchronization

The signals from the two systems were synchronized using the ECG signals. The delay between the systems was computed as the lag that maximizes the cross-correlation of the ECG signals, and the signals were subsequently corrected for it.

Signal filtering

The four bioimpedance channels were high-pass filtered to reduce baseline oscillations (zero-phase 4th order Butterworth, fc = 0.05 Hz). The sampling frequency was increased from 16 Hz to 200 Hz by cubic interpolation to improve the time resolution. Accelerometer and airflow signals were low-pass filtered to avoid aliasing (8th order Chebyshev Type I, fc = 80 Hz) and resampled from 10 kHz to 200 Hz. The accelerometer data, denoted acc, were measured by a tri-axial accelerometer and consisted of three signals corresponding to the acceleration in the three spatial directions (accX, accY and accZ) relative to the sensor axes. The accelerometer orientation was the same for all the subjects; hence, the axes represent approximately the same spatial directions across subjects. The airflow signal was low-pass filtered (zero-phase 4th order Butterworth, fc = 5 Hz) to remove the high frequency content not related to breathing. The respiratory volume was computed by trapezoidal numerical integration of the low-pass filtered airflow signal. Bioimpedance, acc and volume signals were low-pass filtered (zero-phase 4th order Butterworth, fc = 1 Hz) and high-pass filtered (zero-phase 4th order Butterworth, fc = 0.05 Hz) to extract the respiratory information. The acc signals filtered in this low frequency range provide information on the surface chest motion, denoted accCM, and in particular for each accelerometer axis $acc_{CM_X}$, $acc_{CM_Y}$ and $acc_{CM_Z}$. For simplicity we will use the accCM notation for the three accelerometer signals related to chest movement. The signals were normalized to the range [−1, 1] for each subject.

Muscle force estimation

Surface mechanomyography measured by accelerometers on the chest wall over the lower intercostal spaces (sMMGlic) has been suggested to provide a noninvasive index of inspiratory muscle force.
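Before describing the muscle force index in detail, the preprocessing steps above can be summarized in code. The following is a minimal Python sketch; the study itself used MATLAB, so this is an assumed re-implementation for illustration, not the authors' code, and the synthetic airflow at the end is invented for the example.

```python
import numpy as np
from scipy import signal

fs = 200.0  # Hz, common sampling rate after resampling

def respiratory_band(x, fs, low=0.05, high=1.0, order=4):
    # Zero-phase Butterworth filtering to the 0.05-1 Hz respiratory band,
    # done as a high-pass followed by a low-pass as described in the text.
    # (filtfilt gives zero phase but doubles the effective filter order.)
    b_hp, a_hp = signal.butter(order, low, btype="highpass", fs=fs)
    b_lp, a_lp = signal.butter(order, high, btype="lowpass", fs=fs)
    return signal.filtfilt(b_lp, a_lp, signal.filtfilt(b_hp, a_hp, x))

def volume_from_airflow(airflow, fs):
    # Respiratory volume as the cumulative trapezoidal integral of airflow.
    dt = 1.0 / fs
    increments = dt * (airflow[1:] + airflow[:-1]) / 2.0
    return np.concatenate(([0.0], np.cumsum(increments)))

# Example on a synthetic breathing-like airflow signal (0.25 Hz + noise):
t = np.arange(0.0, 60.0, 1.0 / fs)
airflow = np.sin(2 * np.pi * 0.25 * t) + 0.05 * np.random.randn(t.size)
volume = respiratory_band(volume_from_airflow(airflow, fs), fs)
```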
We computed this index from our accelerometer signals based on the study of Lozano-García et al.20. Accordingly, the acc signals were low-pass filtered (zero-phase 4th order Butterworth, fc = 35 Hz) and high-pass filtered (zero-phase 4th order Butterworth, fc = 5 Hz) to extract information on the muscle fibre vibration (sMMGlic). Note that we used the data from the same accelerometer, but the bandwidth used for the muscle force estimation is different from that used for chest movement. We obtained the total acceleration of the muscle fibre vibration (|sMMGlic|) by computing the Euclidean norm of the three signals. The envelope of the resulting signal was computed as the root mean squared (RMS) value in windows of 750 ms with 90% overlap20. We computed the muscle force estimation as the mean value of the RMS |sMMGlic|, cycle by cycle.

Respiratory cycle selection

All the signals were segmented into respiratory cycles using a thresholding algorithm applied to the airflow signal28. Respiratory cycles affected by artefacts were rejected. We identified the rejected cycles as those for which the |sMMGlic| maximum value deviated by more than three times the scaled median absolute deviation from the cycle maximum values of each subject and load. Only 98 respiratory cycles were rejected (5.35% of the total).

Contribution analysis of thoracic bioimpedance

We studied the relevance of the volume and chest movement contributions to the measured bioimpedance signal using linear mixed-effect models and neural networks (Fig. 2). The objective of our work was to understand the changes of the bioimpedance signal at different levels of muscle force. At this stage our aim was not to predict the signal. Therefore, the current focus was not yet on finding the best machine learning technique to predict bioimpedance, and we used all the data for computing the linear mixed-effect models and neural networks. Still, we also wanted to evaluate whether the linearity assumption of the linear mixed-effect models is appropriate in the context of our study.

Muscle force segmentation

We segmented each subject's signals into twelve different levels of muscle force estimation to study whether changes in muscle activity alter the bioimpedance signals. The intention of segmenting the data was to study the contributions of homogeneous cycles in terms of muscle force. We tested different numbers of segments, and dividing into twelve levels permitted a proper resolution in muscle force estimation with sufficient samples to compute the linear models and neural networks. The levels were selected by proportional quantiles of the muscle force values computed for each subject; specifically, the quantiles used as thresholds for the segmentation were 0.08, 0.17, 0.25, 0.33, 0.42, 0.50, 0.58, 0.67, 0.75, 0.83 and 0.92. In this way we obtained approximately the same number of cycles and samples for each segment of the data (~11 000 samples, corresponding to 55 s). Thus, around 130 000 samples (650 s) were used to compute the linear models and 11 000 for each neural network.

Linear models

Linear mixed-effect models were computed using the accCM or volume signals as predictor variables and bioimpedance as the response variable (Fig. 2a). These models are an extension of linear regression which allows the use of longitudinal, multilevel and non-independent data. The level of muscle force was used as the grouping variable to adapt the model coefficients to the different levels of muscle activity.
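For concreteness, the |sMMGlic| magnitude and RMS envelope described in the muscle force estimation above can be sketched as follows; this is again an assumed Python re-implementation (the window length of 750 ms and 90% overlap are taken from the text, everything else is an illustration).

```python
import numpy as np

def smmg_magnitude(acc_xyz):
    # |sMMG_lic|: Euclidean norm of the three 5-35 Hz band-passed axes.
    # acc_xyz has shape (n_samples, 3).
    return np.linalg.norm(acc_xyz, axis=1)

def rms_envelope(x, fs, win_s=0.75, overlap=0.90):
    # RMS of |sMMG_lic| in sliding windows of 750 ms with 90% overlap.
    win = int(round(win_s * fs))
    hop = max(1, int(round(win * (1.0 - overlap))))
    return np.array([
        np.sqrt(np.mean(x[s:s + win] ** 2))
        for s in range(0, len(x) - win + 1, hop)
    ])
```

The per-cycle muscle force index is then simply the mean of this envelope over each respiratory cycle.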
Three different linear models were computed for each subject and electrode configuration by changing the predictor variables (accCM signals, volume, or both). We fitted the linear models using maximum likelihood estimation.

Neural networks

The neural network analysis was included to examine the possible non-linear relation between bioimpedance and the accCM signals or volume (Fig. 2b). Along the same lines as the linear models, we computed three different feedforward neural networks for each subject, level of muscle force and electrode configuration by changing the network inputs: accCM signals, volume, or both. Thus, the input sizes were three, one and four, respectively. The neural networks had one hidden layer of 10 units with a hyperbolic tangent sigmoid transfer function. The output of the neural networks was one unit corresponding to the bioimpedance signal, and its transfer function was linear. We used all the data for training and validation (85% and 15% of the samples, randomly chosen, respectively) and none for testing. The training algorithm was Levenberg-Marquardt backpropagation.

Statistical analysis

The mean absolute percentage error (MAPE) was used to measure the performance of the linear models and neural networks. The MAPE values were computed between the bioimpedance signals and the output of the linear models and neural networks. The fitted bioimpedance signals were obtained with the same data we used to compute and train the linear mixed-effect models and neural networks, in order to evaluate the adjustment to the data. The MAPE values were calculated cycle by cycle, relative to the peak-to-peak bioimpedance amplitude of the cycle, as follows:

$$\mathrm{MAPE}_i(\%)=\frac{100}{N}\sum_{n=1}^{N}\left|\frac{Z_i[n]-\widehat{Z}_i[n]}{Z_i^{PP}}\right| \qquad (1)$$

where $Z_i[n]$ and $\widehat{Z}_i[n]$ are the true and the fitted bioimpedance signals during cycle $i$, $N$ is the number of samples in the cycle and $Z_i^{PP}$ is the bioimpedance peak-to-peak amplitude of cycle $i$. The main analysis was done with MAPE to quantify the errors relative to the amplitude of each cycle. In that way, we prevented the error from varying due to changes in signal amplitude. Root mean squared errors (RMSE) were also calculated to evaluate the performance of the different techniques using an absolute measure. The signal processing and data analysis were developed in the MATLAB environment (v. R2018a, Natick, MA, USA).

Results

Bioimpedance, volume and accelerometer data were acquired in ten healthy subjects during an incremental inspiratory threshold protocol. We studied the bioimpedance signal by reconstructing it through linear mixed-effect models and neural networks with volume and/or accelerometer signals. The performance was evaluated by the MAPE values obtained from the linear models and neural networks. The signals used in the present study are represented in Fig. 3 for all loads for subject 2 (S02). The signals of S02 showed large changes during the imposed loads, which illustrates the waveform changes of the bioimpedance signal clearly. First, Fig. 3 shows the increase of the muscle force estimation level (RMS |sMMGlic|) over the loads, from 0.08 in QB to 0.92 at the highest load. The units shown in the muscle force representation are the corresponding quantile values. When high loads were imposed on S02, the bioimpedance signal exhibited changes in waveform and temporal behaviour in all the configurations.
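As a concrete reading of Eq. (1), the per-cycle error can be computed as in the short sketch below (an assumed Python re-implementation; the study used MATLAB, and the cycle data shown in the comment are hypothetical placeholders).

```python
import numpy as np

def mape_cycle(z_true, z_fit):
    # Equation (1): mean absolute error over one respiratory cycle,
    # expressed as a percentage of the cycle's peak-to-peak amplitude.
    z_pp = np.max(z_true) - np.min(z_true)
    return 100.0 * np.mean(np.abs(z_true - z_fit) / z_pp)

# Usage, given a list of (true, fitted) bioimpedance cycle pairs:
# cycles = [(z1, z1_hat), (z2, z2_hat), ...]
# mapes = [mape_cycle(z, z_hat) for z, z_hat in cycles]
```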
Apart from the amplitude changes, the bioimpedance signal showed waveform changes, such as the appearance of high frequency content during the highest inspiratory loads. From the temporal point of view, the configurations were no longer in phase, as with configurations #1 and #3. The three accCM signals changed during the loads in a very similar way to the bioimpedance waveform. The high frequency content visible in the bioimpedance signal can also be clearly observed in the Z component. In contrast, the volume waveform did not change, but the subjects inhaled more air when the loads were imposed. In short, Fig. 3 exhibits clear changes in the signals under study when the inspiratory loads were imposed, and these were more notable as the load increased.

Model performance

Volume and chest movement combinations to characterize thoracic bioimpedance

The MAPE values were computed for each respiratory cycle and for all the linear models and neural networks. Figure 4 shows the distribution of the MAPE values over all subjects' cycles for the different input combinations, corresponding to electrode configuration 4. Using only the accCM signals as inputs resulted in higher MAPE values than using only volume or both accCM and volume. Notice that for the linear models the median of the error values was lower than 9.05% when only volume was used, 16.01% when the accCM signals were used, and 6.31% when both were used. The neural networks clearly presented lower errors, the median of the MAPE being 9.49% when the accCM signals were used, 8.67% when only volume was used, and 3.02% when both were used. Therefore, for both methods, when volume and accCM signals were used as inputs the median of the MAPE values was always lower than when using only volume. Consequently, both the linear models and the neural networks showed the lowest errors when volume and accCM signals were used together. The MAPE values from the linear models and neural networks are shown in Fig. 5 for only volume and for both volume and accCM. Figure 5 depicts the better performance of the neural networks when the network inputs included both volume and accCM signals. For the neural networks using both signals as inputs, the MAPE values remained approximately constant over the different muscle force levels, with the median below 4.29%, whereas the other models gave higher errors at low and high activity. The target and fitted bioimpedance signals are shown in Fig. 6 for S02. These examples correspond to 10 seconds of four different data segments: the 8%, 42%, 75% and 100% percentiles of muscle force estimation. We observed better adjustment between the target and fitted signals when the techniques used accCM and volume as inputs. The best adjustment was observed for the outputs of the neural networks, in accordance with the error results. Note that the improvement in adjustment is more notable at high muscle activity (75% or 100% percentile).

Electrode configurations

The performance of the four electrode configurations was similar, as Fig. 5 shows. In addition, the RMSE values for the entire signals are shown in Fig. 7 for the four configurations. These errors were computed from each subject's bioimpedance and the corresponding fitted signals when volume and accCM signals were used together as inputs. The signal range was [−1, 1] because bioimpedance was normalized to that range. Note that the RMSE values were lower for the neural networks than for the linear models, in line with what we observed in the MAPE values.
All four configurations presented RMSE values in approximately the same range, but for the neural networks the median RMSE of configuration 1 was lower than that of the other configurations.

Discussion

The objective of the study was to examine the relevance of the respiratory volume and chest movement contributions to thoracic bioimpedance. Thoracic bioimpedance and respiratory volume were compared in several studies5,6,7,8,9,10,11, but most of these studies concerned normal breathing. Therefore, our aim was to better understand thoracic bioimpedance changes in relation to both volume and breathing movement under restrictive breathing, to support its use for healthcare respiratory monitoring. Hereto, bioimpedance from four electrode configurations, respiratory airflow and accelerometer data were measured in ten healthy subjects during an inspiratory threshold loading protocol. This protocol allowed us to analyse the impact of these contributions during changes in breathing pattern and muscle force. We reconstructed the bioimpedance signals from volume, chest movement and the combination of both using linear mixed-effect models and neural networks. The comparison between the actual and the fitted bioimpedance signals permitted us to evaluate the relevance of the volume and chest movement contributions. The main novelty is the inclusion of chest movement in the characterization of the thoracic bioimpedance measurement at different levels of muscle force. The performance of the techniques was measured with the MAPE computed for each cycle. Lower errors were obtained when the linear models and neural networks were computed using both the volume and accCM signals (Figs. 4 and 5). For the linear models, when only volume was used the median of the MAPE was below 9.08%, whereas for the neural networks the median was below 8.74%. On the other hand, when the accCM signals were added to the models the median of the MAPE was below 7.94% for the linear models and 4.29% for the neural networks. Therefore, the MAPE values obtained from both techniques were good and improved when the accCM signals were included. These results showed that linear models can be used to approximate the contributions of volume and chest movement to thoracic bioimpedance, but neural networks described the relation even better. On the other hand, the median errors with only volume were lower than those with only accCM signals, for both the linear models and the neural networks, at all muscle force levels (Fig. 4). Therefore, the contribution of volume is crucial for a good explanation of the bioimpedance changes. The relationship between bioimpedance and respiratory volume was studied previously and a linear relation between them was reported5,6,7,8,9,10,11. Focusing only on the relationship between bioimpedance and volume, we observed in Figs. 4 and 5 that the errors of the linear models and neural networks are practically in the same range, which may mean that the relation is essentially linear. These results are comparable to those of Baker et al.29, who computed linear and non-linear regressions to characterize different electrode configurations of bioimpedance using respiratory volume. They found that a fourth degree polynomial regression characterized the data better, but the differences with respect to linear regressions were often small. Recent studies described nonlinear relations between bioimpedance and respiratory volume during abnormal breathing, such as maximal respiratory manoeuvres and airway obstructions8,10.
Therefore, the non-linearity shown in these studies seems to be related to the breathing pattern and the mechanics of the subjects' breathing. From these studies we deduce that the relationship between bioimpedance and volume depends on the electrode locations and on the way the subjects breathe under increased breathing effort. We hypothesized that the nonlinear relationship between bioimpedance and volume can be explained as changes in the impedance contributions. Along these lines, we included chest movement as a contribution to thoracic bioimpedance to analyze whether it can be related to the apparent non-linearity. We found better performance for the characterizations which used volume and accCM (Fig. 4). In particular, the neural networks clearly showed an improvement over the linear models; thus the chest movement contribution was better described as nonlinear. The nonlinearity is difficult to characterize beforehand because it is subject dependent. However, neural networks allowed us to analyze the nonlinear relations without establishing them in advance. Not many bioimpedance studies have included neural networks in their respiratory research. Młyńczak et al. used neural networks, but for nonlinear calibrations: they found better accuracy in nonlinear calibration with neural networks than with simple linear modeling30. Regarding the electrode configurations, the RMSE obtained from the target and fitted bioimpedance signals was slightly smaller for configuration 1 (Fig. 7). These results agree with our previous study, in which all configurations presented similar results but configuration 1 exhibited robust performance in terms of concordance with volume11. Although in terms of error all configurations were quite similar, we observed different behavior in the signal waveforms from the electrode configurations, as Fig. 3 shows. This is consistent with our previous temporal analysis of bioimpedance and volume, in which delays were observed between the signals21. These delays depended on the bioimpedance electrode location and changed with the loads. Hence, none of the four configurations performed clearly better, but we observed differences in signal waveform. These waveform differences can be related to the different impedance contributions of the zones covered by the electrode configurations. In the present study, accelerometer data were used for two different purposes: as a measure of chest movement and as an estimation of muscle force. Previous studies suggested that accelerometer data in the high frequency band (|sMMGlic|) are related to the inspiratory effort20,27,31. Lozano-García et al. found a strong correlation between |sMMGlic| and the inspiratory muscle function in healthy subjects during the same loading protocol as in the present study20. Therefore, |sMMGlic| permitted us to divide the respiratory cycles into different levels of muscle force estimation. We divided the respiratory cycles into twelve levels of muscle force estimation selected by proportional quantiles. Figure 5 shows the medians of the MAPE values for each level of activity when volume, and volume in combination with accCM, were used. Higher errors were observed at the extreme levels of muscle activity for each method, except when the neural networks included both volume and chest movement. The lower levels corresponded to the quiet breathing cycles, in which the amplitude of the signals is lower (Fig. 3), so the corresponding MAPE values were related to lower peak-to-peak amplitudes and were consequently higher.
On the contrary, at high levels of muscle activity the higher error values were due to a lower performance in the characterization of bioimpedance. In particular, the increase in error when only respiratory volume was used was likely because volume did not completely explain bioimpedance, especially at high muscle force levels. In addition, the increase of the error in the linear models with volume and accCM was probably due to the non-linearity between the signals. On the other hand, the neural networks with volume and accCM as inputs exhibited better performance, since their errors were lower than those of the other methods for all levels of muscle force. This better performance can also be observed in the comparison between the target and fitted bioimpedance signals of Fig. 6. Contrary to the other characterizations, whose performance worsened at high levels of muscle force, the performance of this method practically did not vary. Consequently, the relation between bioimpedance and volume can be described as basically linear. However, the addition of the chest movement improved the characterizations, especially for the neural networks and at high levels of muscle force. Therefore, the chest movement contribution was more relevant at high muscle activity than at middle activity. The results from this study suggest that the combination of thoracic bioimpedance and chest movement could be promising for respiratory monitoring. The combination of the two signals could lead to improved volume prediction. It becomes even more relevant during restrictive breathing, which is common in respiratory patients. Following the results of this study, further studies including thoracic bioimpedance and accelerometer signals should validate the suitability of these signals to predict volume. Hereto, larger databases including patients with pulmonary diseases will be needed to reinforce the use of these physiological signals in clinical applications. In summary, we investigated the relevance of volume and chest movement to thoracic bioimpedance at different levels of muscle force. In accordance with previous studies, we showed that the relation between volume and thoracic bioimpedance is essentially linear, which supports the clinical application of bioimpedance for respiratory monitoring. However, the presented results exhibited that the combination of respiratory volume and accCM characterized the thoracic bioimpedance measurement better for all levels of muscle activity. The linear approximation showed good results, although the neural networks described the volume and chest movement contributions to bioimpedance better. We did not find substantial differences between electrode configurations, which means that all four included volume and chest movement contributions. Accordingly, we conclude that thoracic bioimpedance changes were fundamentally due to the respiratory volume, although chest movement contributed substantially to the bioimpedance measurement, and its contribution was more relevant at high muscle activity levels. Finally, the presented results provide a better understanding of the changes of the thoracic bioimpedance measurement and their relation with muscle activity changes. Our contribution will help in the application of thoracic bioimpedance and accelerometer data as a non-invasive healthcare technique for respiratory monitoring.

Data availability

The dataset analysed during the current study is available from the corresponding author on reasonable request.

References

1. Enright, P. L., Lebowitz, M. D. & Cockroft, D.
W. Physiologic measures: Pulmonary function tests. American Journal of Respiratory and Critical Care Medicine 149, S9–S18, https://doi.org/10.1164/ajrccm/149.2_Pt_2.S9, PMID: 8298772 (1994).

2. Miller, M. R. et al. Standardisation of spirometry. European Respiratory Journal 26, 319–338, https://doi.org/10.1183/09031936.05.00034805, https://erj.ersjournals.com/content/26/2/319.full.pdf (2005).

3. Askanazi, J. et al. Effects of respiratory apparatus on breathing pattern. Journal of Applied Physiology 48, 577–580, https://doi.org/10.1152/jappl.1980.48.4.577, PMID: 6769880 (1980).

4. Folke, M., Cernerud, L., Ekström, M. & Hök, B. Critical review of non-invasive respiratory monitoring in medical care. Medical and Biological Engineering and Computing 41, 377–383, https://doi.org/10.1007/BF02348078 (2003).

5. Grenvik, A. et al. Impedance pneumography: Comparison between chest impedance changes and respiratory volumes in 11 healthy volunteers. Chest 62, 439–443, https://doi.org/10.1378/chest.62.4.439 (1972).

6. Seppä, V.-P., Viik, J., Naveed, A., Väisänen, J. & Hyttinen, J. Signal waveform agreement between spirometer and impedance pneumography of six chest band electrode configurations. In Dössel, O. & Schlegel, W. C. (eds) World Congress on Medical Physics and Biomedical Engineering, September 7–12, 2009, Munich, Germany, 689–692 (Springer Berlin Heidelberg, Berlin, Heidelberg, 2009).

7. Seppä, V.-P., Viik, J. & Hyttinen, J. Assessment of pulmonary flow using impedance pneumography. IEEE Transactions on Biomedical Engineering 57, 2277–2285, https://doi.org/10.1109/TBME.2010.2051668 (2010).

8. Seppä, V.-P., Hyttinen, J., Uitto, M., Chrapek, W. & Viik, J. Novel electrode configuration for highly linear impedance pneumography. Biomedizinische Technik/Biomedical Engineering 58, 35–38, https://doi.org/10.1515/bmt-2012-0068 (2013).

9. Koivumäki, T., Vauhkonen, M., Kuikka, J. T. & Hakulinen, M. A. Bioimpedance-based measurement method for simultaneous acquisition of respiratory and cardiac gating signals. Physiological Measurement 33, 1323 (2012).

10. Malmberg, L. P. et al. Measurement of tidal breathing flows in infants using impedance pneumography. European Respiratory Journal, https://doi.org/10.1183/13993003.00926-2016, http://erj.ersjournals.com/content/early/2016/12/19/13993003.00926-2016.full.pdf (2016).

11. Blanco-Almazán, D., Groenendaal, W., Catthoor, F. & Jané, R. Wearable bioimpedance measurement for respiratory monitoring during inspiratory loading. IEEE Access 1–1, https://doi.org/10.1109/ACCESS.2019.2926841 (2019).

12. Yang, F. & Patterson, R. P. The contribution of the lungs to thoracic impedance measurements: a simulation study based on a high resolution finite difference model. Physiological Measurement 28, S153–S161, https://doi.org/10.1088/0967-3334/28/7/s12 (2007).

13. Beckmann, L., van Riesen, D. & Leonhardt, S. Optimal electrode placement and frequency range selection for the detection of lung water using bioimpedance spectroscopy. In 2007 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2685–2688, https://doi.org/10.1109/IEMBS.2007.4352882 (2007).

14. Yang, F. & Patterson, R. P. A simulation study on the effect of thoracic conductivity inhomogeneities on sensitivity distributions. Annals of Biomedical Engineering 36, 762–768, https://doi.org/10.1007/s10439-008-9469-0 (2008).

15. Baker, L. E., Geddes, L. A., Hoff, H. E. & Chaput, C. J.
Physiological factors underlying transthoracic impedance variations in respiration. Journal of Applied Physiology 21, 1491–1499, https://doi.org/10.1152/jappl.1966.21.5.1491, PMID: 5332246 (1966).

16. Kawakami, K., Watanabe, A., Ikeda, K., Kanno, R. & Kira, S. An analysis of the relationship between transthoracic impedance variations and thoracic diameter changes. Medical and Biological Engineering 12, 446–453, https://doi.org/10.1007/BF02478600 (1974).

17. Eastwood, P. R., Hillman, D. R. & Finucane, K. E. Ventilatory responses to inspiratory threshold loading and role of muscle fatigue in task failure. Journal of Applied Physiology 76, 185–195, https://doi.org/10.1152/jappl.1994.76.1.185, PMID: 8175504 (1994).

18. Laghi, F., Topeli, A. & Tobin, M. J. Does resistive loading decrease diaphragmatic contractility before task failure? Journal of Applied Physiology 85, 1103–1112, https://doi.org/10.1152/jappl.1998.85.3.1103, PMID: 9729589 (1998).

19. Reilly, C. C. et al. Neural respiratory drive measured during inspiratory threshold loading and acute hypercapnia in healthy individuals. Experimental Physiology 98, 1190–1198, https://doi.org/10.1113/expphysiol.2012.071415 (2013).

20. Lozano-García, M. et al. Surface mechanomyography and electromyography provide non-invasive indices of inspiratory muscle force and activation in healthy subjects. Scientific Reports 8, 16921, https://doi.org/10.1038/s41598-018-35024-z (2018).

21. Blanco-Almazán, D., Groenendaal, W., Catthoor, F. & Jané, R. Analysis of time delay between bioimpedance and respiratory volume signals under inspiratory loaded breathing. In 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2365–2368, https://doi.org/10.1109/EMBC.2019.8857705 (2019).

22. Ansari, S., Ward, K. R. & Najarian, K. Motion artifact suppression in impedance pneumography signal for portable monitoring of respiration: An adaptive approach. IEEE Journal of Biomedical and Health Informatics 21, 387–398, https://doi.org/10.1109/JBHI.2016.2524646 (2017).

23. Młyńczak, M. & Cybulski, G. Motion artifact detection in respiratory signals based on Teager energy operator and accelerometer signals. In Eskola, H., Väisänen, O., Viik, J. & Hyttinen, J. (eds) EMBEC & NBC 2017, 45–48 (Springer Singapore, Singapore, 2018).

24. ATS/ERS statement on respiratory muscle testing. American Journal of Respiratory and Critical Care Medicine 166, 518–624, https://doi.org/10.1164/rccm.166.4.518, PMID: 12186831 (2002).

25. Langer, D. et al. Measurement validity of an electronic inspiratory loading device during a loaded breathing task in patients with COPD. Respiratory Medicine 107, 633–635, https://doi.org/10.1016/j.rmed.2013.01.020 (2013).

26. Van Helleputte, N. et al. 18.3 A multi-parameter signal-acquisition SoC for connected personal health applications. In 2014 IEEE International Solid-State Circuits Conference Digest of Technical Papers (ISSCC), 314–315, https://doi.org/10.1109/ISSCC.2014.6757449 (2014).

27. Sarlabous, L. et al. Efficiency of mechanical activation of inspiratory muscles in COPD using sample entropy. European Respiratory Journal 46 (2015).

28. Fiz, J. A., Jané, R., Lozano, M., Gómez, R. & Ruiz, J. Detecting unilateral phrenic paralysis by acoustic respiratory analysis. PLoS One 9, 1–9, https://doi.org/10.1371/journal.pone.0093595 (2014).

29. Baker, L. E., Geddes, L. A. & Hoff, H. E.
A comparison of linear and non-linear characterizations of impedance spirometry data. Medical and Biological Engineering 4, 371–379, https://doi.org/10.1007/BF02476155 (1966).

30. Młyńczak, M. & Cybulski, G. Flow parameters derived from impedance pneumography after nonlinear calibration based on neural networks. In Proceedings of the 10th International Joint Conference on Biomedical Engineering Systems and Technologies - Volume 4: BIOSIGNALS, (BIOSTEC 2017), 70–77, https://doi.org/10.5220/0006146800700077, INSTICC (SciTePress, 2017).

31. Sarlabous, L. et al. Inspiratory muscle activation increases with COPD severity as confirmed by non-invasive mechanomyographic analysis. PLoS One 12, 1–14, https://doi.org/10.1371/journal.pone.0177730 (2017).

Acknowledgements

This work was supported in part by the Universities and Research Secretariat from the Ministry of Business and Knowledge/Generalitat de Catalunya under Grant FI-DGR and Grant GRC 2017 SGR 01770, in part by the Spanish Ministry of Economy and Competitiveness through the MINECO/FEDER Project under Grant DPI2015-68820-R, in part by the Agencia Estatal de Investigación from the Spanish Ministry of Science, Innovation and Universities and the European Regional Development Fund under Grant RTI2018-098472-B-I00, and in part by the CERCA Programme/Generalitat de Catalunya.

Author contributions

All authors contributed to the concept and design of the study; D.B.A. performed the experiments and the data acquisition; D.B.A. analysed the data, prepared the figures and the first manuscript draft; W.G., F.C. and R.J. reviewed the original draft; all authors interpreted the results, edited the manuscript, and approved the final version of the manuscript.

Corresponding author

Correspondence to Dolores Blanco-Almazán.

Ethics declarations

Competing interests

The authors declare no competing interests.

Cite this article

Blanco-Almazán, D., Groenendaal, W., Catthoor, F. & Jané, R. Chest Movement and Respiratory Volume both Contribute to Thoracic Bioimpedance during Loaded Breathing. Sci Rep 9, 20232 (2019). https://doi.org/10.1038/s41598-019-56588-4
# Chapter 1 - Section 1.8 - Absolute Value Equations and Inequalities - 1.8 Exercises: 54

no solution

#### Work Step by Step

$\bf{\text{Solution Outline:}}$ To solve the given inequality, $|-5x+7|-4\lt-6 ,$ use the properties of inequality to isolate the absolute value expression. Then analyze the resulting inequality.

$\bf{\text{Solution Details:}}$ Using the properties of inequality, the statement above is equivalent to \begin{array}{l}\require{cancel} |-5x+7|\lt-6+4 \\\\ |-5x+7|\lt-2 .\end{array} The absolute value of $x,$ written $|x|,$ is the distance of $x$ from $0,$ and hence is always a nonnegative number. In the same way, the expression on the left side is a nonnegative number for any $x$, so it can never be less than the negative number on the right. Hence, there is no solution.
Q10 (3) CD and GH are respectively the bisectors of $\angle ACB \: \:and \: \: \angle EGF$ such that D and H lie on sides AB and FE of $\Delta ABC \: \:and \: \: \Delta EFG$ respectively. If $\Delta ABC\sim \Delta FEG$, show that: $\Delta DCA \sim \Delta HGF$

To prove: $\Delta DCA \sim \Delta HGF$

Given: $\Delta ABC \sim \Delta FEG$, so corresponding angles are equal; in particular $\angle ACB=\angle FGE$ and $\angle A=\angle F$.

In $\Delta DCA \, \, \, and\, \, \Delta HGF$:

$\angle ACD=\angle FGH$ (CD and GH bisect the equal angles $\angle ACB$ and $\angle FGE$, so their halves are equal)

$\angle A=\angle F$ (corresponding angles of the similar triangles)

Therefore $\Delta DCA \sim \Delta HGF$ (by the AA criterion).
Pseudo First Order Reactions

There are circumstances where a second order reaction might appear, in an experiment, to be first order. That is when one of the reactants in the rate equation is present in great excess over the other in the reaction mixture. As an example, let's look again at the mixed second order rate law that we treated in the last section. There we had the rate law

$$-\frac{d[\mathrm{B}]}{dt}=k[\mathrm{A}][\mathrm{B}],$$

for a reaction whose stoichiometry is 2A + B → P + etc. (P = product). Sometimes [A]o >> [B]o. For example, say we prepare the system such that [A]o = 2.0 M and [B]o = 0.001 M. Then, when the reaction has run to completion (at time = infinity),

[A]inf = [A]o − 2[B]o = 2.0 − 2(0.001) = 1.998 ≈ 2.0.

So [A] is approximately constant throughout the entire reaction. Then the rate law becomes

$$-\frac{d[\mathrm{B}]}{dt}=k[\mathrm{A}]_o[\mathrm{B}],\qquad(1)$$

which is effectively a first order rate equation. This rate equation has the solution

$$[\mathrm{B}]=[\mathrm{B}]_o\,e^{-k[\mathrm{A}]_o t},\qquad(2)$$

where the "effective" first order rate constant is k[A]o. You can actually solve this problem as a mixed second order rate problem, as we did in the last section, but it is a lot more work. Pseudo first order reactions are sometimes used to find the rate constant of a second order reaction when one of your two components is very expensive and the other one is relatively cheap. You can use an excess of the inexpensive reagent and a small amount of the expensive one. Notice that you can still obtain the second order rate constant by dividing the effective first order rate constant by the concentration of the excess component:

$$k=\frac{k_{\mathrm{eff}}}{[\mathrm{A}]_o}.$$
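To see how good this approximation is, here is a small numerical sketch that integrates the exact mixed second order rate equation and compares it with the pseudo first order solution (2). The initial concentrations are the ones from the example above; the rate constant k = 1.0 M⁻¹ s⁻¹ is an arbitrary assumption made only for the illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

k = 1.0               # assumed rate constant, M^-1 s^-1 (arbitrary choice)
A0, B0 = 2.0, 0.001   # initial concentrations from the example above, M

def rhs(t, y):
    # Exact kinetics for 2A + B -> P with rate = k[A][B]:
    # d[A]/dt = -2k[A][B], d[B]/dt = -k[A][B]
    A, B = y
    return [-2.0 * k * A * B, -k * A * B]

t = np.linspace(0.0, 5.0 / (k * A0), 200)  # about five pseudo-first-order lifetimes
sol = solve_ivp(rhs, (t[0], t[-1]), [A0, B0], t_eval=t, rtol=1e-9, atol=1e-12)

B_exact = sol.y[1]
B_pseudo = B0 * np.exp(-k * A0 * t)        # equation (2)
print("max |B_exact - B_pseudo| / B0 =", np.max(np.abs(B_exact - B_pseudo)) / B0)
# With [A]o >> [B]o the two curves agree to within a small fraction of a percent.
```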
Simple Third Order Reaction

A simple third order rate law, where component B is a reactant, has the form

$$-\frac{d[\mathrm{B}]}{dt}=k[\mathrm{B}]^3,\qquad(3)$$

which can be rearranged for integration as

$$-\frac{d[\mathrm{B}]}{[\mathrm{B}]^3}=k\,dt,\qquad(4)$$

and integrated to give

$$\frac{1}{2[\mathrm{B}]^2}-\frac{1}{2[\mathrm{B}]_o^2}=kt,\qquad(5)$$

or, in simplified form,

$$\frac{1}{[\mathrm{B}]^2}=\frac{1}{[\mathrm{B}]_o^2}+2kt.\qquad(6)$$

As usual we define the half-life as the time when [B] = ½[B]o, which gives

$$\frac{1}{([\mathrm{B}]_o/2)^2}=\frac{1}{[\mathrm{B}]_o^2}+2kt_{1/2},\qquad(7)$$

$$\frac{4}{[\mathrm{B}]_o^2}-\frac{1}{[\mathrm{B}]_o^2}=2kt_{1/2},\qquad(8)$$

$$t_{1/2}=\frac{3}{2k[\mathrm{B}]_o^2}.\qquad(9)$$

Note that in this case the half-life depends on $1/[\mathrm{B}]_o^2$, as opposed to depending on $1/[\mathrm{B}]_o$ for a simple second order reaction, and being independent of [B]o for a first order reaction.

Techniques for finding rate laws (NOT from the balanced equation.)

We emphasize that the form of the rate law for a chemical reaction cannot be deduced from the balanced reaction equation. It takes new experimental data (or new theories) to get the rate law for a reaction. We will mention three methods for determining the rate law for a reaction from experimental data.

1. Half-lives

Run several experiments at different initial concentrations to see how the half-life depends on [B]o. If the half-life does not depend on the initial concentration, then the reaction is first order. If the half-life of the reaction is proportional to 1/[B]o, then the reaction is simple second order. If the half-life is proportional to $1/[\mathrm{B}]_o^2$, then the reaction is simple third order, and so on. Mixed second and third order reactions are more complicated, but half-lives can be defined for these cases and the dependence of the half-life on [A]o and [B]o can be used to determine the rate law.

2. Initial Rate Method

The initial rate of a reaction is the reaction rate at t = 0. We will designate the initial rate by Ro. One can run several experiments and measure the rate as soon as possible after the reagents have been mixed to determine the initial rates. For example, assume that

$$R_o=k[\mathrm{A}]^x[\mathrm{B}]^y[\mathrm{C}]^z.$$

Run several experiments at different starting concentrations and measure Ro in each case. We can place the data in a table as follows:

    Expt    [A]o     [B]o    [C]o    Ro
    1       0.010    0.10    0.20    0.04
    2       0.020    0.10    0.20    0.08
    3       0.010    0.20    0.20    0.16
    4       0.010    0.10    0.10    0.04

Compare the first and second experiments: we double [A]o and the rate doubles, so it looks like x = 1. Compare the first and third experiments: we double [B]o and the rate increases by a factor of 4, so it looks like y = 2. Notice that there is no change in the rate when we cut [C]o in half, so the rate must not depend on C; that is, z = 0. Thus we determine that the rate is $k[\mathrm{A}]^1[\mathrm{B}]^2[\mathrm{C}]^0$. We don't have to use simple whole-number ratios for the concentrations; we can do it algebraically. Take the ratio of the rates from two successive experiments,

$$\frac{R_{o,2}}{R_{o,1}}=\frac{k[\mathrm{A}]_{o,2}^x[\mathrm{B}]_{o,2}^y[\mathrm{C}]_{o,2}^z}{k[\mathrm{A}]_{o,1}^x[\mathrm{B}]_{o,1}^y[\mathrm{C}]_{o,1}^z}.\qquad(10)$$

To find the exponent of [A], make all the concentrations in the second experiment the same as in the first experiment except [A]. This gives

$$\frac{R_{o,2}}{R_{o,1}}=\left(\frac{[\mathrm{A}]_{o,2}}{[\mathrm{A}]_{o,1}}\right)^x.\qquad(11)$$

Take the logarithm of both sides to get

$$\ln\frac{R_{o,2}}{R_{o,1}}=x\,\ln\frac{[\mathrm{A}]_{o,2}}{[\mathrm{A}]_{o,1}},\qquad(12)$$

and solve for x to get

$$x=\frac{\ln\left(R_{o,2}/R_{o,1}\right)}{\ln\left([\mathrm{A}]_{o,2}/[\mathrm{A}]_{o,1}\right)}.\qquad(13)$$

This process can be repeated for each of the components in turn.
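Applying the log-ratio formula (13) to the table above gives the exponents directly. A minimal sketch (using the four experiments listed, with experiment 1 as the common reference):

```python
import math

# (A0, B0, C0, R0) for experiments 1-4, copied from the table above.
expts = [
    (0.010, 0.10, 0.20, 0.04),
    (0.020, 0.10, 0.20, 0.08),  # only [A]o differs from experiment 1
    (0.010, 0.20, 0.20, 0.16),  # only [B]o differs from experiment 1
    (0.010, 0.10, 0.10, 0.04),  # only [C]o differs from experiment 1
]

def order(conc1, conc2, rate1, rate2):
    # Equation (13): exponent from one pair of experiments.
    return math.log(rate2 / rate1) / math.log(conc2 / conc1)

a1, b1, c1, r1 = expts[0]
x = order(a1, expts[1][0], r1, expts[1][3]) + 0.0  # -> 1.0
y = order(b1, expts[2][1], r1, expts[2][3]) + 0.0  # -> 2.0
z = order(c1, expts[3][2], r1, expts[3][3]) + 0.0  # -> 0.0 (+0.0 clears IEEE -0.0)
print(f"rate = k [A]^{x:.1f} [B]^{y:.1f} [C]^{z:.1f}")
```

This reproduces the rate law $k[\mathrm{A}]^1[\mathrm{B}]^2[\mathrm{C}]^0$ deduced by inspection above.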
3. Fit to Integrated Form (But Be Clever)

We have seen the integrated forms of the various rate equations. Simply plotting the concentrations as a function of time is not usually useful, because it is sometimes hard to tell the difference between a first order decay and a second order decay for a limited set of real experimental data (which will probably contain some scatter). However, it is relatively easy to identify a straight line. So the best way to look at your experimental data is to plot things in such a way that you expect a straight line. For example, to test for first order plot ln[A] vs t and see if you get a straight line. There may be some data scatter, but it should be random. If you see a definite curvature in your line, the reaction is probably not first order. To test for simple second order plot 1/[B] vs t and look for a straight line. To test for simple third order plot $1/[\mathrm{B}]^2$ vs t and see if you get a straight line. There are many possible variations on this method, but this gives a general idea of how it works.

Arrhenius Theory - Temperature Dependence of the Rate Constant

(Named after its discoverer, Svante Arrhenius)

It was known very early in the study of chemical kinetics that reaction rate constants are functions of temperature. That is, k = k(T). It was found that an experimental plot of ln k vs 1/T appeared to be approximately linear with a negative slope. This implies that

$$\ln k = c - \frac{b}{T}.\qquad(14)$$

Call c = ln A for simplicity; then

$$\ln k = \ln A - \frac{b}{T}.\qquad(15)$$

We can solve Equation 15 for k by taking the antilog of this equation,

$$k = e^{\ln A - b/T},\qquad(16)$$

or

$$k = A\,e^{-b/T}.\qquad(17)$$

It is usual to set b = Ea/R to get

$$k = A\,e^{-E_a/RT}.\qquad(18)$$

Ea is called the Arrhenius activation energy. The interpretation of this equation is that there is some intermediate configuration of the molecules involved in the reaction which is "half-way" between the reactants and products. This configuration of molecules, in the process of reacting, is called the "activated complex." The idea is that it takes energy to form this complex, and the probability of the system attaining that energy at a temperature T is proportional to the exponential of (the negative of) the activation energy divided by RT. This makes sense. Recall that kT and RT are thermal energies, and we have seen exponentials of a mechanical energy divided by a thermal energy before. For example, the probability distribution function for molecular velocities in one dimension was given as

$$f(v_x) = \left(\frac{m}{2\pi kT}\right)^{1/2} e^{-mv_x^2/2kT}.\qquad(19)$$

Actually, the preexponential factor, A = A(T), is also a function of temperature, but the temperature dependence is much weaker. Sometimes, for example, simple collision theory gives

$$A(T) \propto \sqrt{T}.\qquad(20)$$

But the major portion of the rate constant's temperature dependence is in the exponential term, $e^{-E_a/RT}$.

WRS
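As a closing numerical illustration of the Arrhenius analysis above, the sketch below fits ln k against 1/T with a straight line to recover Ea and A via equation (18). The rate-constant data are synthetic, generated from an assumed Ea = 50 kJ/mol and A = 10¹⁰ s⁻¹; they are not measured values.

```python
import numpy as np

R = 8.314            # gas constant, J mol^-1 K^-1
Ea_true = 50.0e3     # assumed activation energy, J/mol (synthetic example)
A_true = 1.0e10      # assumed preexponential factor, s^-1

T = np.array([300.0, 320.0, 340.0, 360.0, 380.0])  # temperatures, K
k = A_true * np.exp(-Ea_true / (R * T))            # equation (18)

# Straight-line fit of ln k against 1/T: slope = -Ea/R, intercept = ln A.
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
print(f"Ea = {-slope * R / 1e3:.1f} kJ/mol")   # recovers 50.0
print(f"A  = {np.exp(intercept):.2e} s^-1")    # recovers 1.00e+10
```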
# Spot the Fake

##### Stage: 3 and 4 Short Challenge Level:

One coin among $N$ identical-looking coins is a fake and is slightly heavier than the others, which all have the same weight. To compare two groups of coins you are allowed to use a set of scales with two pans which balance exactly when the weight in each pan is the same. What is the largest value of $N$ for which the fake coin can be identified using a maximum of two such comparisons?

If you liked this problem, here is an NRICH task that challenges you to use similar mathematical ideas.

This problem is taken from the UKMT Mathematical Challenges.